diff --git a/.github/ISSUE_TEMPLATE/40_bug-report.md b/.github/ISSUE_TEMPLATE/40_bug-report.md
index 4dfd19266d0..97137366189 100644
--- a/.github/ISSUE_TEMPLATE/40_bug-report.md
+++ b/.github/ISSUE_TEMPLATE/40_bug-report.md
@@ -7,7 +7,7 @@ assignees: ''
 
 ---
 
-(you don't have to strictly follow this form)
+You have to provide the following information whenever possible.
 
 **Describe the bug**
 A clear and concise description of what works not as it is supposed to.
diff --git a/.gitignore b/.gitignore
index d33dbf0600d..1db6e0a78c9 100644
--- a/.gitignore
+++ b/.gitignore
@@ -27,6 +27,7 @@
 /docs/zh/single.md
 /docs/ja/single.md
 /docs/fa/single.md
+/docs/en/development/cmake-in-clickhouse.md
 
 # callgrind files
 callgrind.out.*
diff --git a/.gitmodules b/.gitmodules
index 7a2c5600e65..f7dcf5f4ac1 100644
--- a/.gitmodules
+++ b/.gitmodules
@@ -93,7 +93,7 @@
 	url = https://github.com/ClickHouse-Extras/libunwind.git
 [submodule "contrib/simdjson"]
 	path = contrib/simdjson
-	url = https://github.com/ClickHouse-Extras/simdjson.git
+	url = https://github.com/simdjson/simdjson.git
 [submodule "contrib/rapidjson"]
 	path = contrib/rapidjson
 	url = https://github.com/ClickHouse-Extras/rapidjson
@@ -133,7 +133,7 @@
 	url = https://github.com/unicode-org/icu.git
 [submodule "contrib/flatbuffers"]
 	path = contrib/flatbuffers
-	url = https://github.com/google/flatbuffers.git
+	url = https://github.com/ClickHouse-Extras/flatbuffers.git
 [submodule "contrib/libc-headers"]
 	path = contrib/libc-headers
 	url = https://github.com/ClickHouse-Extras/libc-headers.git
@@ -221,3 +221,9 @@
 [submodule "contrib/NuRaft"]
 	path = contrib/NuRaft
 	url = https://github.com/ClickHouse-Extras/NuRaft.git
+[submodule "contrib/nanodbc"]
+	path = contrib/nanodbc
+	url = https://github.com/ClickHouse-Extras/nanodbc.git
+[submodule "contrib/datasketches-cpp"]
+	path = contrib/datasketches-cpp
+	url = https://github.com/ClickHouse-Extras/datasketches-cpp.git
diff --git a/CHANGELOG.md b/CHANGELOG.md
index e2c777b3bcf..cc1ec835a7b 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,3 +1,312 @@
+## ClickHouse release 21.4
+
+### ClickHouse release 21.4.1 2021-04-12
+
+#### Backward Incompatible Change
+
+* The `toStartOfInterval` function will align hour intervals to midnight (in previous versions they were aligned to the start of the Unix epoch). For example, `toStartOfInterval(x, INTERVAL 11 HOUR)` will split every day into three intervals: `00:00:00..10:59:59`, `11:00:00..21:59:59` and `22:00:00..23:59:59`. This behaviour is better suited for practical needs; see the example below. This closes [#9510](https://github.com/ClickHouse/ClickHouse/issues/9510). [#22060](https://github.com/ClickHouse/ClickHouse/pull/22060) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* `Age` and `Precision` in graphite rollup configs should increase from retention to retention. This is now checked, and an invalid config raises an exception. [#21496](https://github.com/ClickHouse/ClickHouse/pull/21496) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
+* Fix `cutToFirstSignificantSubdomainCustom()`/`firstSignificantSubdomainCustom()` returning a wrong result for 3+ level domains present in the custom top-level domain list. For input domains matching these custom top-level domains, the third-level domain was considered to be the first significant one. This is now fixed. This change may introduce incompatibility if the function is used in, for example, the sharding key. [#21946](https://github.com/ClickHouse/ClickHouse/pull/21946) ([Azat Khuzhin](https://github.com/azat)).
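To make the new alignment concrete, here is a small illustrative query; the expected values are inferred from the interval split described in the entry above, not taken from the release notes:

```sql
-- With 11-hour intervals aligned to midnight, each day splits into
-- 00:00:00..10:59:59, 11:00:00..21:59:59 and 22:00:00..23:59:59.
SELECT toStartOfInterval(toDateTime('2021-04-01 23:30:00'), INTERVAL 11 HOUR);
-- 21.4:             2021-04-01 22:00:00 (aligned to midnight)
-- earlier versions: aligned to the start of the Unix epoch instead
```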
+* Column `keys` in table `system.dictionaries` was replaced by columns `key.names` and `key.types`. Columns `key.names`, `key.types`, `attribute.names`, `attribute.types` in the `system.dictionaries` table do not require the dictionary to be loaded. [#21884](https://github.com/ClickHouse/ClickHouse/pull/21884) ([Maksim Kita](https://github.com/kitaisreal)).
+* Now replicas that are processing the `ALTER TABLE ATTACH PART[ITION]` command search in their `detached/` folders before fetching the data from other replicas. As an implementation detail, a new command `ATTACH_PART` is introduced in the replicated log. Parts are searched and compared by their checksums. [#18978](https://github.com/ClickHouse/ClickHouse/pull/18978) ([Mike Kot](https://github.com/myrrc)). **Note**:
+  * `ATTACH PART[ITION]` queries may not work during cluster upgrade.
+  * It's not possible to roll back to an older ClickHouse version after executing an `ALTER ... ATTACH` query in the new version, as the old servers would fail to pass the `ATTACH_PART` entry in the replicated log.
+* In this version, an empty `remote_url_allow_hosts` element will block all access to remote hosts, while in previous versions it did nothing. If you want to keep the old behaviour and you have an empty `remote_url_allow_hosts` element in the configuration file, remove it. [#20058](https://github.com/ClickHouse/ClickHouse/pull/20058) ([Vladimir Chebotarev](https://github.com/excitoon)).
+
+#### New Feature
+
+* Extended the range of `DateTime64` to support dates from year 1925 to 2283. Improved support of `DateTime` around the zero date (`1970-01-01`). [#9404](https://github.com/ClickHouse/ClickHouse/pull/9404) ([alexey-milovidov](https://github.com/alexey-milovidov), [Vasily Nemkov](https://github.com/Enmk)). Not all time and date functions work for the extended range of dates.
+* Added support of Kerberos authentication for preconfigured users and HTTP requests (GSS-SPNEGO). [#14995](https://github.com/ClickHouse/ClickHouse/pull/14995) ([Denis Glazachev](https://github.com/traceon)).
+* Add `prefer_column_name_to_alias` setting to use original column names instead of aliases. It is needed to be more compatible with common databases' aliasing rules. This is for [#9715](https://github.com/ClickHouse/ClickHouse/issues/9715) and [#9887](https://github.com/ClickHouse/ClickHouse/issues/9887). [#22044](https://github.com/ClickHouse/ClickHouse/pull/22044) ([Amos Bird](https://github.com/amosbird)).
+* Added functions `dictGetChildren(dictionary, key)`, `dictGetDescendants(dictionary, key, level)`. Function `dictGetChildren` returns all children as an array of indexes. It is an inverse transformation for `dictGetHierarchy`. Function `dictGetDescendants` returns all descendants as if `dictGetChildren` was applied `level` times recursively. A zero `level` value is equivalent to infinity (see the sketch below). Closes [#14656](https://github.com/ClickHouse/ClickHouse/issues/14656). [#22096](https://github.com/ClickHouse/ClickHouse/pull/22096) ([Maksim Kita](https://github.com/kitaisreal)).
+* Added `executable_pool` dictionary source. Closes [#14528](https://github.com/ClickHouse/ClickHouse/issues/14528). [#21321](https://github.com/ClickHouse/ClickHouse/pull/21321) ([Maksim Kita](https://github.com/kitaisreal)).
+* Added table function `dictionary`. It works the same way as the `Dictionary` engine. Closes [#21560](https://github.com/ClickHouse/ClickHouse/issues/21560). [#21910](https://github.com/ClickHouse/ClickHouse/pull/21910) ([Maksim Kita](https://github.com/kitaisreal)).
+* Support `Nullable` type for `PolygonDictionary` attribute. [#21890](https://github.com/ClickHouse/ClickHouse/pull/21890) ([Maksim Kita](https://github.com/kitaisreal)).
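A minimal sketch of the new hierarchy functions. The `regions` dictionary and its contents are hypothetical; the results follow from the semantics described in the entry:

```sql
-- Assume a hierarchical dictionary 'regions' in which key 1 is the parent
-- of keys 2 and 3, and key 3 is the parent of key 4.
SELECT dictGetChildren('regions', toUInt64(1));        -- [2, 3]
SELECT dictGetDescendants('regions', toUInt64(1), 1);  -- [2, 3] (one level down)
SELECT dictGetDescendants('regions', toUInt64(1), 0);  -- [2, 3, 4] (0 = unlimited depth)
```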
+* Functions `dictGet`, `dictHas` use the current database name if it is not specified for dictionaries created with DDL. Closes [#21632](https://github.com/ClickHouse/ClickHouse/issues/21632). [#21859](https://github.com/ClickHouse/ClickHouse/pull/21859) ([Maksim Kita](https://github.com/kitaisreal)).
+* Added function `dictGetOrNull`. It works like `dictGet`, but returns `NULL` in case the key was not found in the dictionary. Closes [#22375](https://github.com/ClickHouse/ClickHouse/issues/22375). [#22413](https://github.com/ClickHouse/ClickHouse/pull/22413) ([Maksim Kita](https://github.com/kitaisreal)).
+* Added async update in `ComplexKeyCache`, `SSDCache`, `SSDComplexKeyCache` dictionaries. Added support for `Nullable` type in `Cache`, `ComplexKeyCache`, `SSDCache`, `SSDComplexKeyCache` dictionaries. Added support for fetching multiple attributes with the `dictGet`, `dictGetOrDefault` functions. Fixes [#21517](https://github.com/ClickHouse/ClickHouse/issues/21517). [#20595](https://github.com/ClickHouse/ClickHouse/pull/20595) ([Maksim Kita](https://github.com/kitaisreal)).
+* Support `dictHas` function for `RangeHashedDictionary`. Fixes [#6680](https://github.com/ClickHouse/ClickHouse/issues/6680). [#19816](https://github.com/ClickHouse/ClickHouse/pull/19816) ([Maksim Kita](https://github.com/kitaisreal)).
+* Add function `timezoneOf` that returns the timezone name of the `DateTime` or `DateTime64` data types. This does not close [#9959](https://github.com/ClickHouse/ClickHouse/issues/9959). Fix inconsistencies in function names: add aliases `timezone` and `timeZone` as well as `toTimezone` and `toTimeZone` and `timezoneOf` and `timeZoneOf`. [#22001](https://github.com/ClickHouse/ClickHouse/pull/22001) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Add new optional clause `GRANTEES` for `CREATE/ALTER USER` commands. It specifies users or roles which are allowed to receive grants from this user, on the condition that this user also has all the required access granted with grant option. By default `GRANTEES ANY` is used, which means a user with grant option can grant to anyone. Syntax: `CREATE USER ... GRANTEES {user | role | ANY | NONE} [,...] [EXCEPT {user | role} [,...]]`. [#21641](https://github.com/ClickHouse/ClickHouse/pull/21641) ([Vitaly Baranov](https://github.com/vitlibar)).
+* Add new column `slowdowns_count` to `system.clusters`. When using hedged requests, it shows how many times we switched to another replica because this replica was responding slowly. Also show the actual value of `errors_count` in `system.clusters`. [#21480](https://github.com/ClickHouse/ClickHouse/pull/21480) ([Kruglov Pavel](https://github.com/Avogar)).
+* Add `_partition_id` virtual column for `MergeTree*` engines. Allow pruning partitions by `_partition_id`. Add `partitionID()` function to calculate the partition id string. [#21401](https://github.com/ClickHouse/ClickHouse/pull/21401) ([Amos Bird](https://github.com/amosbird)).
+* Add function `isIPAddressInRange` to test if an IPv4 or IPv6 address is contained in a given CIDR network prefix (see the sketch below). [#21329](https://github.com/ClickHouse/ClickHouse/pull/21329) ([PHO](https://github.com/depressed-pho)).
+* Added new SQL command `ALTER TABLE 'table_name' UNFREEZE [PARTITION 'part_expr'] WITH NAME 'backup_name'`. This command is needed to properly remove 'frozen' partitions from all disks. [#21142](https://github.com/ClickHouse/ClickHouse/pull/21142) ([Pavel Kovalenko](https://github.com/Jokser)).
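A couple of self-contained calls for `isIPAddressInRange`; the expected results follow directly from the CIDR semantics described in the entry:

```sql
SELECT isIPAddressInRange('127.0.0.1', '127.0.0.0/8');      -- 1
SELECT isIPAddressInRange('128.0.0.1', '127.0.0.0/8');      -- 0
SELECT isIPAddressInRange('2001:db8::1', '2001:db8::/32');  -- 1
```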
+* Support implicit key type conversion for JOIN. [#19885](https://github.com/ClickHouse/ClickHouse/pull/19885) ([Vladimir](https://github.com/vdimir)).
+
+#### Experimental Feature
+
+* Support `RANGE OFFSET` frame (for window functions) for floating point types. Implement `lagInFrame`/`leadInFrame` window functions, which are analogous to `lag`/`lead`, but respect the window frame. They are identical when the frame is `between unbounded preceding and unbounded following` (see the sketch below). This closes [#5485](https://github.com/ClickHouse/ClickHouse/issues/5485). [#21895](https://github.com/ClickHouse/ClickHouse/pull/21895) ([Alexander Kuzmenkov](https://github.com/akuzm)).
+* Zero-copy replication for `ReplicatedMergeTree` over S3 storage. [#16240](https://github.com/ClickHouse/ClickHouse/pull/16240) ([ianton-ru](https://github.com/ianton-ru)).
+* Added the possibility to migrate an existing S3 disk to the schema with backup-restore capabilities. [#22070](https://github.com/ClickHouse/ClickHouse/pull/22070) ([Pavel Kovalenko](https://github.com/Jokser)).
+
+#### Performance Improvement
+
+* Supported parallel formatting in `clickhouse-local` and everywhere else. [#21630](https://github.com/ClickHouse/ClickHouse/pull/21630) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
+* Support parallel parsing for `CSVWithNames` and `TSVWithNames` formats. This closes [#21085](https://github.com/ClickHouse/ClickHouse/issues/21085). [#21149](https://github.com/ClickHouse/ClickHouse/pull/21149) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
+* Enable reading with mmap IO for file ranges from 64 MiB (the setting `min_bytes_to_use_mmap_io`). It may lead to a moderate performance improvement. [#22326](https://github.com/ClickHouse/ClickHouse/pull/22326) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Add a cache for files read with the `min_bytes_to_use_mmap_io` setting. It gives a significant (2x and more) performance improvement when the value of the setting is small, by avoiding frequent mmap/munmap calls and the consequent page faults. Note that mmap IO has major drawbacks that make it less reliable in production (e.g. hangs or SIGBUS on faulty disks; less controllable memory usage). Nevertheless it is good in benchmarks. [#22206](https://github.com/ClickHouse/ClickHouse/pull/22206) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Avoid unnecessary data copying when using codec `NONE`. Please note that codec `NONE` is mostly useless: it's recommended to always use compression (`LZ4` is the default). Despite the common belief, disabling compression may not improve performance (the opposite effect is possible). The `NONE` codec is useful in some cases: when data is incompressible, and for synthetic benchmarks. [#22145](https://github.com/ClickHouse/ClickHouse/pull/22145) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Faster `GROUP BY` with small `max_rows_to_group_by` and `group_by_overflow_mode='any'`. [#21856](https://github.com/ClickHouse/ClickHouse/pull/21856) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
+* Optimize performance of queries like `SELECT ... FINAL ... WHERE`. Now in queries with `FINAL` it's allowed to move columns that are in the sorting key to `PREWHERE`. [#21830](https://github.com/ClickHouse/ClickHouse/pull/21830) ([foolchi](https://github.com/foolchi)).
+* Improved performance by replacing `memcpy` with another implementation. This closes [#18583](https://github.com/ClickHouse/ClickHouse/issues/18583). [#21520](https://github.com/ClickHouse/ClickHouse/pull/21520) ([alexey-milovidov](https://github.com/alexey-milovidov)).
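A small sketch of `lagInFrame` under the frame where it matches `lag` (window functions are still gated behind the experimental setting in this release):

```sql
SET allow_experimental_window_functions = 1;

SELECT
    number,
    lagInFrame(number) OVER (
        ORDER BY number
        ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
    ) AS prev
FROM numbers(5);
-- prev: 0, 0, 1, 2, 3 (the first row has no predecessor, so the type default is returned)
```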
+* Improve performance of aggregation in order of the sorting key (with the setting `optimize_aggregation_in_order` enabled). [#19401](https://github.com/ClickHouse/ClickHouse/pull/19401) ([Anton Popov](https://github.com/CurtizJ)).
+
+#### Improvement
+
+* Add a connection pool for the PostgreSQL table/database engine and dictionary source. Should fix [#21444](https://github.com/ClickHouse/ClickHouse/issues/21444). [#21839](https://github.com/ClickHouse/ClickHouse/pull/21839) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Support a non-default table schema for the postgres storage/table-function. Closes [#21701](https://github.com/ClickHouse/ClickHouse/issues/21701). [#21711](https://github.com/ClickHouse/ClickHouse/pull/21711) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Support replicas priority for the postgres dictionary source. [#21710](https://github.com/ClickHouse/ClickHouse/pull/21710) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Introduce a new merge tree setting `min_bytes_to_rebalance_partition_over_jbod` which allows assigning new parts to different disks of a JBOD volume in a balanced way. [#16481](https://github.com/ClickHouse/ClickHouse/pull/16481) ([Amos Bird](https://github.com/amosbird)).
+* Added `Grant`, `Revoke` and `System` values of the `query_kind` column for corresponding queries in `system.query_log`. [#21102](https://github.com/ClickHouse/ClickHouse/pull/21102) ([Vasily Nemkov](https://github.com/Enmk)).
+* Allow customizing timeouts for HTTP connections used for replication independently from other HTTP timeouts. [#20088](https://github.com/ClickHouse/ClickHouse/pull/20088) ([nvartolomei](https://github.com/nvartolomei)).
+* Better exception message in the client in case of an exception while the server is writing blocks. In previous versions the client may have gotten a misleading message like `Data compressed with different methods`. [#22427](https://github.com/ClickHouse/ClickHouse/pull/22427) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Fix error `Directory tmp_fetch_XXX already exists`, which could happen after a failed part fetch. Delete the temporary fetch directory if it already exists. Fixes [#14197](https://github.com/ClickHouse/ClickHouse/issues/14197). [#22411](https://github.com/ClickHouse/ClickHouse/pull/22411) ([nvartolomei](https://github.com/nvartolomei)).
+* Fix MSan report for function `range` with `UInt256` argument (support for large integers is experimental). This closes [#22157](https://github.com/ClickHouse/ClickHouse/issues/22157). [#22387](https://github.com/ClickHouse/ClickHouse/pull/22387) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Add `current_database` column to `system.processes` table. It contains the current database of the query. [#22365](https://github.com/ClickHouse/ClickHouse/pull/22365) ([Alexander Kuzmenkov](https://github.com/akuzm)).
+* Add case-insensitive history search/navigation and subword movement features to `clickhouse-client`. [#22105](https://github.com/ClickHouse/ClickHouse/pull/22105) ([Amos Bird](https://github.com/amosbird)).
+* If a tuple of NULLs, e.g. `(NULL, NULL)`, is on the left hand side of the `IN` operator with tuples of non-NULLs on the right hand side, e.g. `SELECT (NULL, NULL) IN ((0, 0), (3, 1))`, return 0 instead of throwing an exception about incompatible types. The expression may also appear due to optimization of something like `SELECT (NULL, NULL) = (8, 0) OR (NULL, NULL) = (3, 2) OR (NULL, NULL) = (0, 0) OR (NULL, NULL) = (3, 1)`. This closes [#22017](https://github.com/ClickHouse/ClickHouse/issues/22017). [#22063](https://github.com/ClickHouse/ClickHouse/pull/22063) ([alexey-milovidov](https://github.com/alexey-milovidov)).
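The example from the entry above is runnable as-is and shows the new result:

```sql
SELECT (NULL, NULL) IN ((0, 0), (3, 1));
-- returns 0 now, instead of throwing an exception about incompatible types
```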
+* Update the used version of simdjson to 0.9.1. This fixes [#21984](https://github.com/ClickHouse/ClickHouse/issues/21984). [#22057](https://github.com/ClickHouse/ClickHouse/pull/22057) ([Vitaly Baranov](https://github.com/vitlibar)).
+* Added case-insensitive aliases for the `CONNECTION_ID()` and `VERSION()` functions. This fixes [#22028](https://github.com/ClickHouse/ClickHouse/issues/22028). [#22042](https://github.com/ClickHouse/ClickHouse/pull/22042) ([Eugene Klimov](https://github.com/Slach)).
+* Add option `strict_increase` to the `windowFunnel` function to calculate each event once (resolves [#21835](https://github.com/ClickHouse/ClickHouse/issues/21835)). [#22025](https://github.com/ClickHouse/ClickHouse/pull/22025) ([Vladimir](https://github.com/vdimir)).
+* If the partition key of a `MergeTree` table does not include `Date` or `DateTime` columns but includes exactly one `DateTime64` column, expose its values in the `min_time` and `max_time` columns in the `system.parts` and `system.parts_columns` tables. Add `min_time` and `max_time` columns to the `system.parts_columns` table (this was inconsistent with the `system.parts` table). This closes [#18244](https://github.com/ClickHouse/ClickHouse/issues/18244). [#22011](https://github.com/ClickHouse/ClickHouse/pull/22011) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Supported the `replication_alter_partitions_sync=1` setting in `clickhouse-copier` for moving partitions from the helper table to the destination. Decreased default timeouts. Fixes [#21911](https://github.com/ClickHouse/ClickHouse/issues/21911). [#21912](https://github.com/ClickHouse/ClickHouse/pull/21912) ([turbo jason](https://github.com/songenjie)).
+* Show the path to the data directory of `EmbeddedRocksDB` tables in system tables. [#21903](https://github.com/ClickHouse/ClickHouse/pull/21903) ([tavplubix](https://github.com/tavplubix)).
+* Add profile event `HedgedRequestsChangeReplica`; changed the read data timeout from seconds to milliseconds. [#21886](https://github.com/ClickHouse/ClickHouse/pull/21886) ([Kruglov Pavel](https://github.com/Avogar)).
+* DiskS3 (experimental feature under development). Fixed a bug that made it impossible to move a directory if the destination is not empty and a cache disk is used. [#21837](https://github.com/ClickHouse/ClickHouse/pull/21837) ([Pavel Kovalenko](https://github.com/Jokser)).
+* Better formatting for `Array` and `Map` data types in the Web UI. [#21798](https://github.com/ClickHouse/ClickHouse/pull/21798) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Update clusters only if their configurations were updated. [#21685](https://github.com/ClickHouse/ClickHouse/pull/21685) ([Kruglov Pavel](https://github.com/Avogar)).
+* Propagate query and session settings for distributed DDL queries. Set `distributed_ddl_entry_format_version` to 2 to enable this. Added the `distributed_ddl_output_mode` setting. Supported modes: `none`, `throw` (default), `null_status_on_timeout` and `never_throw`. Miscellaneous fixes and improvements for the `Replicated` database engine. [#21535](https://github.com/ClickHouse/ClickHouse/pull/21535) ([tavplubix](https://github.com/tavplubix)).
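A hypothetical snippet showing the DDL settings from the previous entry in use; `my_cluster` is an illustrative cluster name:

```sql
SET distributed_ddl_entry_format_version = 2;                -- enable settings propagation
SET distributed_ddl_output_mode = 'null_status_on_timeout';  -- do not throw on timeout

CREATE TABLE t ON CLUSTER my_cluster (x UInt64)
ENGINE = MergeTree ORDER BY x;
```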
+* If `PODArray` was instantiated with an element size that is neither a fraction nor a multiple of 16, a buffer overflow was possible. No bugs exist in current releases. [#21533](https://github.com/ClickHouse/ClickHouse/pull/21533) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Add `last_error_time`/`last_error_message`/`last_error_stacktrace`/`remote` columns to `system.errors`. [#21529](https://github.com/ClickHouse/ClickHouse/pull/21529) ([Azat Khuzhin](https://github.com/azat)).
+* Add aliases `simpleJSONExtract/simpleJSONHas` for `visitParam/visitParamExtract{UInt, Int, Bool, Float, Raw, String}` (see the sketch below). Fixes #21383. [#21519](https://github.com/ClickHouse/ClickHouse/pull/21519) ([fastio](https://github.com/fastio)).
+* Add setting `optimize_skip_unused_shards_limit` to limit the number of sharding key values for `optimize_skip_unused_shards`. [#21512](https://github.com/ClickHouse/ClickHouse/pull/21512) ([Azat Khuzhin](https://github.com/azat)).
+* Improve `clickhouse-format` to not throw an exception when there are extra spaces or a comment after the last query, and to throw an exception early with a readable message when formatting `ASTInsertQuery` with data. [#21311](https://github.com/ClickHouse/ClickHouse/pull/21311) ([flynn](https://github.com/ucasFL)).
+* Improve support of integer keys in the data type `Map`. [#21157](https://github.com/ClickHouse/ClickHouse/pull/21157) ([Anton Popov](https://github.com/CurtizJ)).
+* MaterializeMySQL: attempt to reconnect to MySQL if the connection is lost. [#20961](https://github.com/ClickHouse/ClickHouse/pull/20961) ([Håvard Kvålen](https://github.com/havardk)).
+* Support more cases of rewriting `CROSS JOIN` to `INNER JOIN`. [#20392](https://github.com/ClickHouse/ClickHouse/pull/20392) ([Vladimir](https://github.com/vdimir)).
+* Do not create empty parts on INSERT when the `optimize_on_insert` setting is enabled. Fixes [#20304](https://github.com/ClickHouse/ClickHouse/issues/20304). [#20387](https://github.com/ClickHouse/ClickHouse/pull/20387) ([Kruglov Pavel](https://github.com/Avogar)).
+* `MaterializeMySQL`: add a minmax skipping index for the `_version` column. [#20382](https://github.com/ClickHouse/ClickHouse/pull/20382) ([Stig Bakken](https://github.com/stigsb)).
+* Add option `--backslash` for `clickhouse-format`, which adds a backslash at the end of each line of the formatted query. [#21494](https://github.com/ClickHouse/ClickHouse/pull/21494) ([flynn](https://github.com/ucasFL)).
+* Now ClickHouse will not throw a `LOGICAL_ERROR` exception when we try to mutate an already covered part. Fixes [#22013](https://github.com/ClickHouse/ClickHouse/issues/22013). [#22291](https://github.com/ClickHouse/ClickHouse/pull/22291) ([alesapin](https://github.com/alesapin)).
+
+#### Bug Fix
+
+* Remove the socket from epoll before cancelling the packet receiver in `HedgedConnections` to prevent a possible race. Fixes [#22161](https://github.com/ClickHouse/ClickHouse/issues/22161). [#22443](https://github.com/ClickHouse/ClickHouse/pull/22443) ([Kruglov Pavel](https://github.com/Avogar)).
+* Add (missing) memory accounting in parallel parsing routines. In previous versions OOM was possible when the result set contained very large blocks of data. This closes [#22008](https://github.com/ClickHouse/ClickHouse/issues/22008). [#22425](https://github.com/ClickHouse/ClickHouse/pull/22425) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Fix an exception which may happen when `SELECT` has a constant `WHERE` condition and the source table has columns whose names are digits. [#22270](https://github.com/ClickHouse/ClickHouse/pull/22270) ([LiuNeng](https://github.com/liuneng1994)).
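A quick sketch of the new aliases; they behave identically to the `visitParam*` functions they alias:

```sql
SELECT simpleJSONHas('{"a": 123}', 'a');          -- 1
SELECT simpleJSONExtractUInt('{"a": 123}', 'a');  -- 123
-- equivalent to visitParamHas / visitParamExtractUInt
```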
+* Fix query cancellation with `use_hedged_requests=0` and `async_socket_for_remote=1`. [#22183](https://github.com/ClickHouse/ClickHouse/pull/22183) ([Azat Khuzhin](https://github.com/azat)).
+* Fix an uncaught exception in `InterserverIOHTTPHandler`. [#22146](https://github.com/ClickHouse/ClickHouse/pull/22146) ([Azat Khuzhin](https://github.com/azat)).
+* Fix the docker entrypoint in case `http_port` is not in the config. [#22132](https://github.com/ClickHouse/ClickHouse/pull/22132) ([Ewout](https://github.com/devwout)).
+* Fix error `Invalid number of rows in Chunk` in `JOIN` with `TOTALS` and `arrayJoin`. Closes [#19303](https://github.com/ClickHouse/ClickHouse/issues/19303). [#22129](https://github.com/ClickHouse/ClickHouse/pull/22129) ([Vladimir](https://github.com/vdimir)).
+* Fix the name of the background thread pool that is used to poll messages from Kafka. The Kafka engine with the broken thread pool would not consume messages from the message queue. [#22122](https://github.com/ClickHouse/ClickHouse/pull/22122) ([fastio](https://github.com/fastio)).
+* Fix waiting for `OPTIMIZE` and `ALTER` queries for `ReplicatedMergeTree` table engines. Now the query will not hang when the table was detached or restarted. [#22118](https://github.com/ClickHouse/ClickHouse/pull/22118) ([alesapin](https://github.com/alesapin)).
+* Disable `async_socket_for_remote`/`use_hedged_requests` for buggy Linux kernels. [#22109](https://github.com/ClickHouse/ClickHouse/pull/22109) ([Azat Khuzhin](https://github.com/azat)).
+* Docker entrypoint: avoid chown of `.` in case when `LOG_PATH` is empty. Closes [#22100](https://github.com/ClickHouse/ClickHouse/issues/22100). [#22102](https://github.com/ClickHouse/ClickHouse/pull/22102) ([filimonov](https://github.com/filimonov)).
+* The function `decrypt` was lacking a check for the minimal size of data encrypted in `AEAD` mode. This closes [#21897](https://github.com/ClickHouse/ClickHouse/issues/21897). [#22064](https://github.com/ClickHouse/ClickHouse/pull/22064) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* In a rare case, a merge for `CollapsingMergeTree` may create a granule with `index_granularity + 1` rows. Because of this, an internal check added in [#18928](https://github.com/ClickHouse/ClickHouse/issues/18928) (affects 21.2 and 21.3) may fail with the error `Incomplete granules are not allowed while blocks are granules size`. This error did not allow parts to merge. [#21976](https://github.com/ClickHouse/ClickHouse/pull/21976) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
+* Reverted [#15454](https://github.com/ClickHouse/ClickHouse/issues/15454), which could cause a significant increase in memory usage while loading external dictionaries of the hashed type. This closes [#21935](https://github.com/ClickHouse/ClickHouse/issues/21935). [#21948](https://github.com/ClickHouse/ClickHouse/pull/21948) ([Maksim Kita](https://github.com/kitaisreal)).
+* Prevent hedged connections overlaps (`Unknown packet 9 from server` error). [#21941](https://github.com/ClickHouse/ClickHouse/pull/21941) ([Azat Khuzhin](https://github.com/azat)).
+* Fix reading the HTTP POST request with "multipart/form-data" content type in some cases. [#21936](https://github.com/ClickHouse/ClickHouse/pull/21936) ([Ivan](https://github.com/abyss7)).
+* Fix wrong `ORDER BY` results when a query contains window functions and the optimization for reading in primary key order is applied. Fixes [#21828](https://github.com/ClickHouse/ClickHouse/issues/21828). [#21915](https://github.com/ClickHouse/ClickHouse/pull/21915) ([Alexander Kuzmenkov](https://github.com/akuzm)).
+* Fix deadlock in the first catboost model execution. Closes [#13832](https://github.com/ClickHouse/ClickHouse/issues/13832). [#21844](https://github.com/ClickHouse/ClickHouse/pull/21844) ([Kruglov Pavel](https://github.com/Avogar)).
+* Fix incorrect query result (and possible crash) which could happen when a `WHERE` or `HAVING` condition is pushed before `GROUP BY`. Fixes [#21773](https://github.com/ClickHouse/ClickHouse/issues/21773). [#21841](https://github.com/ClickHouse/ClickHouse/pull/21841) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
+* Better error handling and logging in `WriteBufferFromS3`. [#21836](https://github.com/ClickHouse/ClickHouse/pull/21836) ([Pavel Kovalenko](https://github.com/Jokser)).
+* Fix possible crashes in aggregate functions with the combinator `Distinct` while using two-level aggregation. This is a follow-up fix of [#18365](https://github.com/ClickHouse/ClickHouse/pull/18365). It can only be reproduced in a production environment. [#21818](https://github.com/ClickHouse/ClickHouse/pull/21818) ([Amos Bird](https://github.com/amosbird)).
+* Fix scalar subquery index analysis. This fixes [#21717](https://github.com/ClickHouse/ClickHouse/issues/21717), which was introduced in [#18896](https://github.com/ClickHouse/ClickHouse/pull/18896). [#21766](https://github.com/ClickHouse/ClickHouse/pull/21766) ([Amos Bird](https://github.com/amosbird)).
+* Fix a bug for `ReplicatedMerge` table engines when an `ALTER MODIFY COLUMN` query doesn't change the type of a `Decimal` column if its size (32 bit or 64 bit) doesn't change. [#21728](https://github.com/ClickHouse/ClickHouse/pull/21728) ([alesapin](https://github.com/alesapin)).
+* Fix possible infinite waiting when concurrent `OPTIMIZE` and `DROP` are run for `ReplicatedMergeTree`. [#21716](https://github.com/ClickHouse/ClickHouse/pull/21716) ([Azat Khuzhin](https://github.com/azat)).
+* Fix function `arrayElement` with type `Map` for constant integer arguments. [#21699](https://github.com/ClickHouse/ClickHouse/pull/21699) ([Anton Popov](https://github.com/CurtizJ)).
+* Fix SIGSEGV on non-existing attributes from `ip_trie` with `access_to_key_from_attributes`. [#21692](https://github.com/ClickHouse/ClickHouse/pull/21692) ([Azat Khuzhin](https://github.com/azat)).
+* The server now starts accepting connections only after `DDLWorker` and dictionaries initialization. [#21676](https://github.com/ClickHouse/ClickHouse/pull/21676) ([Azat Khuzhin](https://github.com/azat)).
+* Add type conversion for keys of tables of type `Join` (previously this led to SIGSEGV). [#21646](https://github.com/ClickHouse/ClickHouse/pull/21646) ([Azat Khuzhin](https://github.com/azat)).
+* Fix distributed request cancellation (for example a simple select from multiple shards with limit, i.e. `select * from remote('127.{2,3}', system.numbers) limit 100`) with `async_socket_for_remote=1`. [#21643](https://github.com/ClickHouse/ClickHouse/pull/21643) ([Azat Khuzhin](https://github.com/azat)).
+* Fix `fsync_part_directory` for horizontal merge. [#21642](https://github.com/ClickHouse/ClickHouse/pull/21642) ([Azat Khuzhin](https://github.com/azat)).
+* Remove unknown columns from the joined table in `WHERE` for queries to external database engines (MySQL, PostgreSQL). Closes [#14614](https://github.com/ClickHouse/ClickHouse/issues/14614), [#19288](https://github.com/ClickHouse/ClickHouse/issues/19288) (dup), [#19645](https://github.com/ClickHouse/ClickHouse/issues/19645) (dup). [#21640](https://github.com/ClickHouse/ClickHouse/pull/21640) ([Vladimir](https://github.com/vdimir)).
+* `std::terminate` was called if there was an error writing data into S3. [#21624](https://github.com/ClickHouse/ClickHouse/pull/21624) ([Vladimir](https://github.com/vdimir)).
+* Fix possible error `Cannot find column` when `optimize_skip_unused_shards` is enabled and zero shards are used. [#21579](https://github.com/ClickHouse/ClickHouse/pull/21579) ([Azat Khuzhin](https://github.com/azat)).
+* If a query has a constant `WHERE` condition and the setting `optimize_skip_unused_shards` is enabled, all shards may be skipped and the query could return an incorrect empty result. [#21550](https://github.com/ClickHouse/ClickHouse/pull/21550) ([Amos Bird](https://github.com/amosbird)).
+* Fix table function `clusterAllReplicas` returning a wrong `_shard_num`. Closes [#21481](https://github.com/ClickHouse/ClickHouse/issues/21481). [#21498](https://github.com/ClickHouse/ClickHouse/pull/21498) ([flynn](https://github.com/ucasFL)).
+* Fix S3 tables holding old credentials after a config update. [#21457](https://github.com/ClickHouse/ClickHouse/pull/21457) ([Grigory Pervakov](https://github.com/GrigoryPervakov)).
+* Fixed a race on the SSL object inside `SecureSocket` in Poco. [#21456](https://github.com/ClickHouse/ClickHouse/pull/21456) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
+* Fix `Avro` format parsing for `Kafka`. Fixes [#21437](https://github.com/ClickHouse/ClickHouse/issues/21437). [#21438](https://github.com/ClickHouse/ClickHouse/pull/21438) ([Ilya Golshtein](https://github.com/ilejn)).
+* Fix receive and send timeouts and non-blocking read in secure socket. [#21429](https://github.com/ClickHouse/ClickHouse/pull/21429) ([Kruglov Pavel](https://github.com/Avogar)).
+* The `force_drop_table` flag didn't work for `MATERIALIZED VIEW`; it's fixed. Fixes [#18943](https://github.com/ClickHouse/ClickHouse/issues/18943). [#20626](https://github.com/ClickHouse/ClickHouse/pull/20626) ([tavplubix](https://github.com/tavplubix)).
+* Fix name clashes in `PredicateRewriteVisitor`. They caused incorrect `WHERE` filtration after a full join. Closes [#20497](https://github.com/ClickHouse/ClickHouse/issues/20497). [#20622](https://github.com/ClickHouse/ClickHouse/pull/20622) ([Vladimir](https://github.com/vdimir)).
+
+#### Build/Testing/Packaging Improvement
+
+* Add [Jepsen](https://github.com/jepsen-io/jepsen) tests for ClickHouse Keeper. [#21677](https://github.com/ClickHouse/ClickHouse/pull/21677) ([alesapin](https://github.com/alesapin)).
+* Run stateless tests in parallel in CI. Depends on [#22181](https://github.com/ClickHouse/ClickHouse/issues/22181). [#22300](https://github.com/ClickHouse/ClickHouse/pull/22300) ([alesapin](https://github.com/alesapin)).
+* Enable status check for [SQLancer](https://github.com/sqlancer/sqlancer) CI run. [#22015](https://github.com/ClickHouse/ClickHouse/pull/22015) ([Ilya Yatsishin](https://github.com/qoega)).
+* Multiple preparations for PowerPC builds: Enable the bundled openldap on `ppc64le`. [#22487](https://github.com/ClickHouse/ClickHouse/pull/22487) ([Kfir Itzhak](https://github.com/mastertheknife)). Enable compiling on `ppc64le` with Clang. [#22476](https://github.com/ClickHouse/ClickHouse/pull/22476) ([Kfir Itzhak](https://github.com/mastertheknife)). Fix compiling boost on `ppc64le`. [#22474](https://github.com/ClickHouse/ClickHouse/pull/22474) ([Kfir Itzhak](https://github.com/mastertheknife)). Fix a CMake error about the internal CMake variable `CMAKE_ASM_COMPILE_OBJECT` not being set on `ppc64le`. [#22469](https://github.com/ClickHouse/ClickHouse/pull/22469) ([Kfir Itzhak](https://github.com/mastertheknife)). Fix Fedora/RHEL/CentOS not finding `libclang_rt.builtins` on `ppc64le`. [#22458](https://github.com/ClickHouse/ClickHouse/pull/22458) ([Kfir Itzhak](https://github.com/mastertheknife)). Enable building with `jemalloc` on `ppc64le`. [#22447](https://github.com/ClickHouse/ClickHouse/pull/22447) ([Kfir Itzhak](https://github.com/mastertheknife)). Fix ClickHouse's config embedding and cctz's timezone embedding on `ppc64le`. [#22445](https://github.com/ClickHouse/ClickHouse/pull/22445) ([Kfir Itzhak](https://github.com/mastertheknife)). Fixed compiling on `ppc64le` and use of the correct instruction pointer register on `ppc64le`. [#22430](https://github.com/ClickHouse/ClickHouse/pull/22430) ([Kfir Itzhak](https://github.com/mastertheknife)).
+* Re-enable the S3 (AWS) library on `aarch64`. [#22484](https://github.com/ClickHouse/ClickHouse/pull/22484) ([Kfir Itzhak](https://github.com/mastertheknife)).
+* Add `tzdata` to Docker containers because reading `ORC` formats requires it. This closes [#14156](https://github.com/ClickHouse/ClickHouse/issues/14156). [#22000](https://github.com/ClickHouse/ClickHouse/pull/22000) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Introduce two arguments for the `clickhouse-server` image Dockerfile: `deb_location` & `single_binary_location`. [#21977](https://github.com/ClickHouse/ClickHouse/pull/21977) ([filimonov](https://github.com/filimonov)).
+* Allow using clang-tidy with release builds by enabling assertions if it is used. [#21914](https://github.com/ClickHouse/ClickHouse/pull/21914) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Add the llvm-12 binary names to search for in CMake scripts. Implicit constant conversions to mute clang warnings. Updated submodules to build with CMake 3.19. Muted recursion in macro expansion in the `readpassphrase` library. The deprecated `-fuse-ld` changed to `--ld-path` for clang. [#21597](https://github.com/ClickHouse/ClickHouse/pull/21597) ([Ilya Yatsishin](https://github.com/qoega)).
+* Updated `docker/test/testflows/runner/dockerd-entrypoint.sh` to use the Yandex Docker Hub proxy, because Docker Hub has enabled very restrictive rate limits. [#21551](https://github.com/ClickHouse/ClickHouse/pull/21551) ([vzakaznikov](https://github.com/vzakaznikov)).
+* Fix the macOS shared lib build. [#20184](https://github.com/ClickHouse/ClickHouse/pull/20184) ([nvartolomei](https://github.com/nvartolomei)).
+* Add `ctime` option to `zookeeper-dump-tree`. It allows dumping the node creation time. [#21842](https://github.com/ClickHouse/ClickHouse/pull/21842) ([Ilya](https://github.com/HumanUser)).
+
+
+## ClickHouse release 21.3 (LTS)
+
+### ClickHouse release v21.3, 2021-03-12
+
+#### Backward Incompatible Change
+
+* Now it's not allowed to create MergeTree tables in the old syntax with table `TTL` because it's just ignored. Attaching old tables is still possible. [#20282](https://github.com/ClickHouse/ClickHouse/pull/20282) ([alesapin](https://github.com/alesapin)).
+* Now all case-insensitive function names will be rewritten to their canonical representations. This is needed for projection query routing (an upcoming feature). [#20174](https://github.com/ClickHouse/ClickHouse/pull/20174) ([Amos Bird](https://github.com/amosbird)).
+* Fix creation of `TTL` in cases when its expression is a function and is the same as the `ORDER BY` key. Now it's allowed to set custom aggregation for primary key columns in `TTL` with `GROUP BY`. Backward incompatible: for primary key columns that are not in `GROUP BY` and aren't set explicitly, the function `any` is now applied instead of `max` when the TTL expires. Also, if you use TTL with `WHERE` or `GROUP BY`, you may see exceptions during merges while performing a rolling update. [#15450](https://github.com/ClickHouse/ClickHouse/pull/15450) ([Anton Popov](https://github.com/CurtizJ)).
+
+#### New Feature
+
+* Add file engine settings: `engine_file_empty_if_not_exists` and `engine_file_truncate_on_insert`. [#20620](https://github.com/ClickHouse/ClickHouse/pull/20620) ([M0r64n](https://github.com/M0r64n)).
+* Add aggregate function `deltaSum` for summing the differences between consecutive rows. [#20057](https://github.com/ClickHouse/ClickHouse/pull/20057) ([Russ Frank](https://github.com/rf)).
+* New `event_time_microseconds` column in the `system.part_log` table. [#20027](https://github.com/ClickHouse/ClickHouse/pull/20027) ([Bharat Nallan](https://github.com/bharatnc)).
+* Added the `timezoneOffset(datetime)` function, which gives the offset from UTC in seconds. This closes [#19850](https://github.com/ClickHouse/ClickHouse/issues/19850). [#19962](https://github.com/ClickHouse/ClickHouse/pull/19962) ([keenwolf](https://github.com/keen-wolf)).
+* Add setting `insert_shard_id` to support inserting data into a specific shard of a distributed table. [#19961](https://github.com/ClickHouse/ClickHouse/pull/19961) ([flynn](https://github.com/ucasFL)).
+* Function `reinterpretAs` updated to support big integers. Fixes [#19691](https://github.com/ClickHouse/ClickHouse/issues/19691). [#19858](https://github.com/ClickHouse/ClickHouse/pull/19858) ([Maksim Kita](https://github.com/kitaisreal)).
+* Added Server Side Encryption Customer Keys (the `x-amz-server-side-encryption-customer-(key/md5)` header) support in the S3 client. See [the link](https://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html). Closes [#19428](https://github.com/ClickHouse/ClickHouse/issues/19428). [#19748](https://github.com/ClickHouse/ClickHouse/pull/19748) ([Vladimir Chebotarev](https://github.com/excitoon)).
+* Added an `implicit_key` option for the `executable` dictionary source. It allows avoiding printing the key for every record if records come in the same order as the input keys. Implements [#14527](https://github.com/ClickHouse/ClickHouse/issues/14527). [#19677](https://github.com/ClickHouse/ClickHouse/pull/19677) ([Maksim Kita](https://github.com/kitaisreal)).
+* Add quota types `query_selects` and `query_inserts`. [#19603](https://github.com/ClickHouse/ClickHouse/pull/19603) ([JackyWoo](https://github.com/JackyWoo)).
+* Add function `extractTextFromHTML`. [#19600](https://github.com/ClickHouse/ClickHouse/pull/19600) ([zlx19950903](https://github.com/zlx19950903)), ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Tables with the `MergeTree*` engine now have two new table-level settings for query concurrency control. Setting `max_concurrent_queries` limits the number of concurrently executed queries which are related to this table. Setting `min_marks_to_honor_max_concurrent_queries` tells ClickHouse to apply the previous setting only if the query reads at least this number of marks. [#19544](https://github.com/ClickHouse/ClickHouse/pull/19544) ([Amos Bird](https://github.com/amosbird)).
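A hypothetical table definition using the two new table-level settings; the table name and columns are illustrative:

```sql
CREATE TABLE hits
(
    dt DateTime,
    user_id UInt64
)
ENGINE = MergeTree
ORDER BY dt
SETTINGS max_concurrent_queries = 50,
         min_marks_to_honor_max_concurrent_queries = 10;
```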
+* Added a `file` function to read a file from the `user_files` directory as a String. This is different from the `file` table function. This implements [#18851](https://github.com/ClickHouse/ClickHouse/issues/18851). [#19204](https://github.com/ClickHouse/ClickHouse/pull/19204) ([keenwolf](https://github.com/keen-wolf)).
+
+#### Experimental Feature
+
+* Add experimental `Replicated` database engine. It replicates DDL queries across multiple hosts. [#16193](https://github.com/ClickHouse/ClickHouse/pull/16193) ([tavplubix](https://github.com/tavplubix)).
+* Introduce experimental support for window functions, enabled with `allow_experimental_window_functions = 1`. This is a preliminary, alpha-quality implementation that is not suitable for production use and will change in backward-incompatible ways in future releases. Please see [the documentation](https://github.com/ClickHouse/ClickHouse/blob/master/docs/en/sql-reference/window-functions/index.md#experimental-window-functions) for the list of supported features. [#20337](https://github.com/ClickHouse/ClickHouse/pull/20337) ([Alexander Kuzmenkov](https://github.com/akuzm)).
+* Add the ability to backup/restore metadata files for DiskS3. [#18377](https://github.com/ClickHouse/ClickHouse/pull/18377) ([Pavel Kovalenko](https://github.com/Jokser)).
+
+#### Performance Improvement
+
+* Hedged requests for remote queries. When the setting `use_hedged_requests` is enabled (off by default), it allows establishing many connections with different replicas for a query. A new connection is opened if the existing connection(s) with replica(s) were not established within `hedged_connection_timeout` or no data was received within `receive_data_timeout`. The query uses the first connection which sends a non-empty progress packet (or a data packet, if `allow_changing_replica_until_first_data_packet`); other connections are cancelled. Queries with `max_parallel_replicas > 1` are supported. [#19291](https://github.com/ClickHouse/ClickHouse/pull/19291) ([Kruglov Pavel](https://github.com/Avogar)). This allows significantly reducing tail latencies on very large clusters (see the sketch below).
+* Added support for `PREWHERE` (and enabled the corresponding optimization) when tables have row-level security expressions specified. [#19576](https://github.com/ClickHouse/ClickHouse/pull/19576) ([Denis Glazachev](https://github.com/traceon)).
+* The setting `distributed_aggregation_memory_efficient` is enabled by default. It will lower memory usage and improve performance of distributed queries. [#20599](https://github.com/ClickHouse/ClickHouse/pull/20599) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Improve performance of `GROUP BY` with multiple fixed size keys. [#20472](https://github.com/ClickHouse/ClickHouse/pull/20472) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Improve performance of aggregate functions by more strict aliasing. [#19946](https://github.com/ClickHouse/ClickHouse/pull/19946) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Speed up reading from `Memory` tables in extreme cases (when the reading speed is on the order of 50 GB/sec) by simplifying the pipeline and (consequently) reducing lock contention in pipeline scheduling. [#20468](https://github.com/ClickHouse/ClickHouse/pull/20468) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Partially reimplement the HTTP server to make fewer copies of incoming and outgoing data. It gives up to a 1.5x performance improvement when inserting long records over HTTP. [#19516](https://github.com/ClickHouse/ClickHouse/pull/19516) ([Ivan](https://github.com/abyss7)).
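A minimal, hypothetical way to try the hedged-requests entry above; the table name is illustrative:

```sql
SET use_hedged_requests = 1;    -- off by default in this release
SET max_parallel_replicas = 2;  -- supported together with hedged requests

SELECT count() FROM my_distributed_table;  -- a Distributed table with several replicas
```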
+* Add a `compress` setting for `Memory` tables. If it's enabled, the table will use less RAM. On some machines and datasets it can also work faster on SELECT, but it is not always the case (see the sketch below). This closes [#20093](https://github.com/ClickHouse/ClickHouse/issues/20093). Note: there are reasons why Memory tables can work slower than MergeTree: (1) lack of compression, (2) static size of blocks, (3) lack of indices and prewhere. [#20168](https://github.com/ClickHouse/ClickHouse/pull/20168) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Slightly better code in aggregation. [#20978](https://github.com/ClickHouse/ClickHouse/pull/20978) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Add back `intDiv`/`modulo` specializations for better performance. This fixes [#21293](https://github.com/ClickHouse/ClickHouse/issues/21293). The regression was introduced in https://github.com/ClickHouse/ClickHouse/pull/18145. [#21307](https://github.com/ClickHouse/ClickHouse/pull/21307) ([Amos Bird](https://github.com/amosbird)).
+* Do not squash blocks too much on INSERT SELECT if inserting into a Memory table. In previous versions an inefficient data representation was created in the Memory table after INSERT SELECT. This closes [#13052](https://github.com/ClickHouse/ClickHouse/issues/13052). [#20169](https://github.com/ClickHouse/ClickHouse/pull/20169) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Fix at least one case when the DataType parser may have exponential complexity (found by fuzzer). This closes [#20096](https://github.com/ClickHouse/ClickHouse/issues/20096). [#20132](https://github.com/ClickHouse/ClickHouse/pull/20132) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Parallelize SELECT with FINAL for a single part with level > 0 when the `do_not_merge_across_partitions_select_final` setting is 1. [#19375](https://github.com/ClickHouse/ClickHouse/pull/19375) ([Kruglov Pavel](https://github.com/Avogar)).
+* Fill only requested columns when querying `system.parts` and `system.parts_columns`. Closes [#19570](https://github.com/ClickHouse/ClickHouse/issues/19570). [#21035](https://github.com/ClickHouse/ClickHouse/pull/21035) ([Anmol Arora](https://github.com/anmolarora)).
+* Perform algebraic optimizations of arithmetic expressions inside the `avg` aggregate function. Closes [#20092](https://github.com/ClickHouse/ClickHouse/issues/20092). [#20183](https://github.com/ClickHouse/ClickHouse/pull/20183) ([flynn](https://github.com/ucasFL)).
+
+#### Improvement
+
+* Case-insensitive compression methods for table functions. Also fixed the LZMA compression method, which was checked in upper case. [#21416](https://github.com/ClickHouse/ClickHouse/pull/21416) ([Vladimir Chebotarev](https://github.com/excitoon)).
+* Add two settings to delay or throw an error during insertion when there are too many inactive parts. This is useful when the server fails to clean up parts quickly enough. [#20178](https://github.com/ClickHouse/ClickHouse/pull/20178) ([Amos Bird](https://github.com/amosbird)).
+* Provide better compatibility for MySQL clients: 1. mysql jdbc, 2. mycli. [#21367](https://github.com/ClickHouse/ClickHouse/pull/21367) ([Amos Bird](https://github.com/amosbird)).
+* Forbid dropping a column if it's referenced by a materialized view. Closes [#21164](https://github.com/ClickHouse/ClickHouse/issues/21164). [#21303](https://github.com/ClickHouse/ClickHouse/pull/21303) ([flynn](https://github.com/ucasFL)).
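A sketch of the `compress` setting, under the assumption that it is specified per table via a `SETTINGS` clause; the table is illustrative:

```sql
CREATE TABLE lookup_cache
(
    k UInt64,
    v String
)
ENGINE = Memory
SETTINGS compress = 1;  -- trade some CPU on reads/writes for less RAM
```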
+* The MySQL dictionary source will now retry unexpected connection failures (Lost connection to MySQL server during query), which sometimes happen on SSL/TLS connections. [#21237](https://github.com/ClickHouse/ClickHouse/pull/21237) ([Alexander Kazakov](https://github.com/Akazz)).
+* Usability improvement: more consistent `DateTime64` parsing: recognize the case when a unix timestamp with subsecond resolution is specified as a scaled integer (like `1111111111222` instead of `1111111111.222`). This closes [#13194](https://github.com/ClickHouse/ClickHouse/issues/13194). [#21053](https://github.com/ClickHouse/ClickHouse/pull/21053) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Only do merging of sorted blocks on the initiator with `distributed_group_by_no_merge`. [#20882](https://github.com/ClickHouse/ClickHouse/pull/20882) ([Azat Khuzhin](https://github.com/azat)).
+* When loading the config for a mysql source, ClickHouse will now randomize the list of replicas with the same priority to ensure the round-robin logic of picking a MySQL endpoint. This closes [#20629](https://github.com/ClickHouse/ClickHouse/issues/20629). [#20632](https://github.com/ClickHouse/ClickHouse/pull/20632) ([Alexander Kazakov](https://github.com/Akazz)).
+* Function `reinterpretAs(x, Type)` renamed to `reinterpret(x, Type)`. [#20611](https://github.com/ClickHouse/ClickHouse/pull/20611) ([Maksim Kita](https://github.com/kitaisreal)).
+* Support vhost for the RabbitMQ engine [#20576](https://github.com/ClickHouse/ClickHouse/issues/20576). [#20596](https://github.com/ClickHouse/ClickHouse/pull/20596) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Improved serialization for data types combined of Arrays and Tuples. Improved matching of enum data types to the protobuf enum type. Fixed serialization of the `Map` data type. Omitted values are now set by default. [#20506](https://github.com/ClickHouse/ClickHouse/pull/20506) ([Vitaly Baranov](https://github.com/vitlibar)).
+* Fixed a race between execution of distributed DDL tasks and cleanup of the DDL queue. Now a DDL task cannot be removed from ZooKeeper if there are active workers. Fixes [#20016](https://github.com/ClickHouse/ClickHouse/issues/20016). [#20448](https://github.com/ClickHouse/ClickHouse/pull/20448) ([tavplubix](https://github.com/tavplubix)).
+* Make FQDN and other DNS-related functions work correctly in alpine images. [#20336](https://github.com/ClickHouse/ClickHouse/pull/20336) ([filimonov](https://github.com/filimonov)).
+* Do not allow early constant folding of explicitly forbidden functions. [#20303](https://github.com/ClickHouse/ClickHouse/pull/20303) ([Azat Khuzhin](https://github.com/azat)).
+* Implicit conversion from integer to Decimal type might have succeeded even if the integer value did not fit into the Decimal type. Now it throws `ARGUMENT_OUT_OF_BOUND`. [#20232](https://github.com/ClickHouse/ClickHouse/pull/20232) ([tavplubix](https://github.com/tavplubix)).
+* Lockless `SYSTEM FLUSH DISTRIBUTED`. [#20215](https://github.com/ClickHouse/ClickHouse/pull/20215) ([Azat Khuzhin](https://github.com/azat)).
+* Normalize `count(constant)` and `sum(1)` to `count()`. This is needed for projection query routing (see the sketch below). [#20175](https://github.com/ClickHouse/ClickHouse/pull/20175) ([Amos Bird](https://github.com/amosbird)).
+* Support all native integer types in bitmap functions. [#20171](https://github.com/ClickHouse/ClickHouse/pull/20171) ([Amos Bird](https://github.com/amosbird)).
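The normalization can be observed with `EXPLAIN SYNTAX`; the expected rewrite follows from the entry above:

```sql
EXPLAIN SYNTAX SELECT count(1), sum(1) FROM numbers(10);
-- both aggregates are rewritten to count()
```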
+* Updated `CacheDictionary`, `ComplexCacheDictionary`, `SSDCacheDictionary`, `SSDComplexKeyDictionary` to use `LRUHashMap` as the underlying index. [#20164](https://github.com/ClickHouse/ClickHouse/pull/20164) ([Maksim Kita](https://github.com/kitaisreal)).
+* The setting `access_management` is now configurable on startup by providing `CLICKHOUSE_DEFAULT_ACCESS_MANAGEMENT`; it defaults to disabled (`0`), which was the prior value. [#20139](https://github.com/ClickHouse/ClickHouse/pull/20139) ([Marquitos](https://github.com/sonirico)).
+* Fix `toDateTime64(toDate()/toDateTime())` for `DateTime64`: implement `DateTime64` clamping to match `DateTime` behaviour. [#20131](https://github.com/ClickHouse/ClickHouse/pull/20131) ([Azat Khuzhin](https://github.com/azat)).
+* Quota improvements: `SHOW TABLES` is now considered one query in the quota calculations, not two queries. `SYSTEM` queries now consume quota. Fix calculation of the interval's end in quota consumption. [#20106](https://github.com/ClickHouse/ClickHouse/pull/20106) ([Vitaly Baranov](https://github.com/vitlibar)).
+* Support `path IN (set)` expressions for the `system.zookeeper` table (see the sketch below). [#20105](https://github.com/ClickHouse/ClickHouse/pull/20105) ([小路](https://github.com/nicelulu)).
+* Show full details of `MaterializeMySQL` tables in `system.tables`. [#20051](https://github.com/ClickHouse/ClickHouse/pull/20051) ([Stig Bakken](https://github.com/stigsb)).
+* Fix a data race in the executable dictionary that was possible only on misuse (when the script returns data ignoring its input). [#20045](https://github.com/ClickHouse/ClickHouse/pull/20045) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* The value of the `MYSQL_OPT_RECONNECT` option can now be controlled by the `opt_reconnect` parameter in the config section of the mysql replica. [#19998](https://github.com/ClickHouse/ClickHouse/pull/19998) ([Alexander Kazakov](https://github.com/Akazz)).
+* If the user calls the `JSONExtract` function with a `Float32` type requested, allow an inaccurate conversion to the result type. For example, the number `0.1` in JSON is double precision and is not representable in Float32, but the user still wants to get it. Previous versions returned 0 for a non-Nullable type and NULL for a Nullable type to indicate that the conversion is imprecise. The logic was 100% correct, but it was surprising to users and led to questions. This closes [#13962](https://github.com/ClickHouse/ClickHouse/issues/13962). [#19960](https://github.com/ClickHouse/ClickHouse/pull/19960) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Add conversion of the block structure for INSERT into Distributed tables if it does not match. [#19947](https://github.com/ClickHouse/ClickHouse/pull/19947) ([Azat Khuzhin](https://github.com/azat)).
+* Improvement for the `system.distributed_ddl_queue` table. Initialize `MaxDDLEntryID` to the last value after restarting. Before this PR, `MaxDDLEntryID` would remain zero until a new DDLTask was processed. [#19924](https://github.com/ClickHouse/ClickHouse/pull/19924) ([Amos Bird](https://github.com/amosbird)).
+* Show `MaterializeMySQL` tables in `system.parts`. [#19770](https://github.com/ClickHouse/ClickHouse/pull/19770) ([Stig Bakken](https://github.com/stigsb)).
+* Add a separate config directive for the `Buffer` profile. [#19721](https://github.com/ClickHouse/ClickHouse/pull/19721) ([Azat Khuzhin](https://github.com/azat)).
+* Move conditions that are not related to `JOIN` to the `WHERE` clause. [#18720](https://github.com/ClickHouse/ClickHouse/issues/18720). [#19685](https://github.com/ClickHouse/ClickHouse/pull/19685) ([hexiaoting](https://github.com/hexiaoting)).
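A self-contained illustration of the new `system.zookeeper` predicate form; the paths are illustrative:

```sql
SELECT name, value
FROM system.zookeeper
WHERE path IN ('/clickhouse', '/clickhouse/task_queue');
```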
+* Add the ability to throttle INSERT into Distributed based on the amount of pending bytes for async send (the `bytes_to_delay_insert`/`max_delay_to_insert` and `bytes_to_throw_insert` settings for the `Distributed` engine have been added). [#19673](https://github.com/ClickHouse/ClickHouse/pull/19673) ([Azat Khuzhin](https://github.com/azat)).
+* Fix some rare cases when write errors could be ignored in destructors. [#19451](https://github.com/ClickHouse/ClickHouse/pull/19451) ([Azat Khuzhin](https://github.com/azat)).
+* Print inline frames in stack traces for fatal errors. [#19317](https://github.com/ClickHouse/ClickHouse/pull/19317) ([Ivan](https://github.com/abyss7)).
+
+#### Bug Fix
+
+* Fix redundant reconnects to ZooKeeper and the possibility of two active sessions for a single clickhouse server. Both problems were introduced in #14678. [#21264](https://github.com/ClickHouse/ClickHouse/pull/21264) ([alesapin](https://github.com/alesapin)).
+* Fix error `Bad cast from type ... to DB::ColumnLowCardinality` while inserting into a table with a `LowCardinality` column from the `Values` format. Fixes #21140. [#21357](https://github.com/ClickHouse/ClickHouse/pull/21357) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
+* Fix a deadlock in `ALTER DELETE` mutations for non-replicated MergeTree table engines when the predicate contains the table itself. Fixes [#20558](https://github.com/ClickHouse/ClickHouse/issues/20558). [#21477](https://github.com/ClickHouse/ClickHouse/pull/21477) ([alesapin](https://github.com/alesapin)).
+* Fix SIGSEGV for distributed queries on failures. [#21434](https://github.com/ClickHouse/ClickHouse/pull/21434) ([Azat Khuzhin](https://github.com/azat)).
+* Now `ALTER MODIFY COLUMN` queries will correctly affect changes in the partition key, skip indices, TTLs, and so on. Fixes [#13675](https://github.com/ClickHouse/ClickHouse/issues/13675). [#21334](https://github.com/ClickHouse/ClickHouse/pull/21334) ([alesapin](https://github.com/alesapin)).
+* Fix a bug with `join_use_nulls` and joining `TOTALS` from subqueries. This closes [#19362](https://github.com/ClickHouse/ClickHouse/issues/19362) and [#21137](https://github.com/ClickHouse/ClickHouse/issues/21137). [#21248](https://github.com/ClickHouse/ClickHouse/pull/21248) ([vdimir](https://github.com/vdimir)).
+* Fix crash in `EXPLAIN` for a query with `UNION`. Fixes [#20876](https://github.com/ClickHouse/ClickHouse/issues/20876), [#21170](https://github.com/ClickHouse/ClickHouse/issues/21170). [#21246](https://github.com/ClickHouse/ClickHouse/pull/21246) ([flynn](https://github.com/ucasFL)).
+* Now mutations are allowed only for table engines that support them (the MergeTree family, Memory, MaterializedView). Other engines will report a clearer error. Fixes [#21168](https://github.com/ClickHouse/ClickHouse/issues/21168). [#21183](https://github.com/ClickHouse/ClickHouse/pull/21183) ([alesapin](https://github.com/alesapin)).
+* Fixes [#21112](https://github.com/ClickHouse/ClickHouse/issues/21112). Fixed a bug that could cause duplicates with an insert query (if one of the callbacks came a little too late). [#21138](https://github.com/ClickHouse/ClickHouse/pull/21138) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Fix `input_format_null_as_default` to take effect when types are nullable. This fixes [#21116](https://github.com/ClickHouse/ClickHouse/issues/21116). [#21121](https://github.com/ClickHouse/ClickHouse/pull/21121) ([Amos Bird](https://github.com/amosbird)).
+
+#### Bug Fix
+
+* Fix redundant reconnects to ZooKeeper and the possibility of two active sessions for a single clickhouse server. Both problems were introduced in #14678. [#21264](https://github.com/ClickHouse/ClickHouse/pull/21264) ([alesapin](https://github.com/alesapin)).
+* Fix the error `Bad cast from type ... to DB::ColumnLowCardinality` while inserting into a table with a `LowCardinality` column from the `Values` format. Fixes #21140. [#21357](https://github.com/ClickHouse/ClickHouse/pull/21357) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
+* Fix a deadlock in `ALTER DELETE` mutations for non-replicated MergeTree table engines when the predicate contains the table itself. Fixes [#20558](https://github.com/ClickHouse/ClickHouse/issues/20558). [#21477](https://github.com/ClickHouse/ClickHouse/pull/21477) ([alesapin](https://github.com/alesapin)).
+* Fix SIGSEGV for distributed queries on failures. [#21434](https://github.com/ClickHouse/ClickHouse/pull/21434) ([Azat Khuzhin](https://github.com/azat)).
+* Now `ALTER MODIFY COLUMN` queries will correctly affect changes in the partition key, skip indices, TTLs, and so on. Fixes [#13675](https://github.com/ClickHouse/ClickHouse/issues/13675). [#21334](https://github.com/ClickHouse/ClickHouse/pull/21334) ([alesapin](https://github.com/alesapin)).
+* Fix a bug with `join_use_nulls` and joining `TOTALS` from subqueries. This closes [#19362](https://github.com/ClickHouse/ClickHouse/issues/19362) and [#21137](https://github.com/ClickHouse/ClickHouse/issues/21137). [#21248](https://github.com/ClickHouse/ClickHouse/pull/21248) ([vdimir](https://github.com/vdimir)).
+* Fix a crash in `EXPLAIN` for queries with `UNION`. Fixes [#20876](https://github.com/ClickHouse/ClickHouse/issues/20876), [#21170](https://github.com/ClickHouse/ClickHouse/issues/21170). [#21246](https://github.com/ClickHouse/ClickHouse/pull/21246) ([flynn](https://github.com/ucasFL)).
+* Now mutations are allowed only for table engines that support them (the MergeTree family, Memory, MaterializedView). Other engines will report a clearer error. Fixes [#21168](https://github.com/ClickHouse/ClickHouse/issues/21168). [#21183](https://github.com/ClickHouse/ClickHouse/pull/21183) ([alesapin](https://github.com/alesapin)).
+* Fixes [#21112](https://github.com/ClickHouse/ClickHouse/issues/21112). Fixed a bug that could cause duplicates with an insert query (if one of the callbacks came a little too late). [#21138](https://github.com/ClickHouse/ClickHouse/pull/21138) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Fix `input_format_null_as_default` to take effect when types are nullable. This fixes [#21116](https://github.com/ClickHouse/ClickHouse/issues/21116). [#21121](https://github.com/ClickHouse/ClickHouse/pull/21121) ([Amos Bird](https://github.com/amosbird)).
+* Fix a bug related to casting Tuple to Map. Closes [#21029](https://github.com/ClickHouse/ClickHouse/issues/21029). [#21120](https://github.com/ClickHouse/ClickHouse/pull/21120) ([hexiaoting](https://github.com/hexiaoting)).
+* Fix the metadata leak when a Replicated*MergeTree table with a custom (non-default) ZooKeeper cluster is dropped. [#21119](https://github.com/ClickHouse/ClickHouse/pull/21119) ([fastio](https://github.com/fastio)).
+* Fix a type mismatch issue when using LowCardinality keys in joinGet. This fixes [#21114](https://github.com/ClickHouse/ClickHouse/issues/21114). [#21117](https://github.com/ClickHouse/ClickHouse/pull/21117) ([Amos Bird](https://github.com/amosbird)).
+* Fix `default_replica_path` and `default_replica_name` values being ignored on the Replicated(*)MergeTree engine when the engine needs other parameters to be specified. [#21060](https://github.com/ClickHouse/ClickHouse/pull/21060) ([mxzlxy](https://github.com/mxzlxy)).
+* Out-of-bound memory access was possible when formatting a specifically crafted out-of-range value of type `DateTime64`. This closes [#20494](https://github.com/ClickHouse/ClickHouse/issues/20494). This closes [#20543](https://github.com/ClickHouse/ClickHouse/issues/20543). [#21023](https://github.com/ClickHouse/ClickHouse/pull/21023) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Block parallel insertions into StorageJoin. [#21009](https://github.com/ClickHouse/ClickHouse/pull/21009) ([vdimir](https://github.com/vdimir)).
+* Fixed the behaviour when `ALTER MODIFY COLUMN` created a mutation that would knowingly fail. [#21007](https://github.com/ClickHouse/ClickHouse/pull/21007) ([Anton Popov](https://github.com/CurtizJ)).
+* Closes [#9969](https://github.com/ClickHouse/ClickHouse/issues/9969). Fixed a Brotli HTTP compression error that was reproduced with large data sizes, slightly complicated structures, and the JSON output format. Updated Brotli to the latest version to include the fix for "rare access to uninitialized data in ring-buffer". [#20991](https://github.com/ClickHouse/ClickHouse/pull/20991) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Fix 'Empty task was returned from async task queue' on query cancellation. [#20881](https://github.com/ClickHouse/ClickHouse/pull/20881) ([Azat Khuzhin](https://github.com/azat)).
+* The `USE database;` query did not work when using the MySQL 5.7 client to connect to a ClickHouse server; it's fixed. Fixes [#18926](https://github.com/ClickHouse/ClickHouse/issues/18926). [#20878](https://github.com/ClickHouse/ClickHouse/pull/20878) ([tavplubix](https://github.com/tavplubix)).
+* Fix usage of the `-Distinct` combinator with the `-State` combinator in aggregate functions. [#20866](https://github.com/ClickHouse/ClickHouse/pull/20866) ([Anton Popov](https://github.com/CurtizJ)).
+* Fix subquery with UNION DISTINCT and a LIMIT clause. Closes [#20597](https://github.com/ClickHouse/ClickHouse/issues/20597). [#20610](https://github.com/ClickHouse/ClickHouse/pull/20610) ([flynn](https://github.com/ucasFL)).
+* Fixed inconsistent behaviour of dictionaries for queries that look up absent keys. [#20578](https://github.com/ClickHouse/ClickHouse/pull/20578) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
+* Fix the number of threads for scalar subqueries and subqueries for index (after [#19007](https://github.com/ClickHouse/ClickHouse/issues/19007) a single thread was always used). Fixes [#20457](https://github.com/ClickHouse/ClickHouse/issues/20457), [#20512](https://github.com/ClickHouse/ClickHouse/issues/20512). [#20550](https://github.com/ClickHouse/ClickHouse/pull/20550) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
+* Fix a crash which could happen if an unknown packet was received from a remote query (was introduced in [#17868](https://github.com/ClickHouse/ClickHouse/issues/17868)). [#20547](https://github.com/ClickHouse/ClickHouse/pull/20547) ([Azat Khuzhin](https://github.com/azat)).
+* Add proper checks while parsing directory names for async INSERT (fixes SIGSEGV). [#20498](https://github.com/ClickHouse/ClickHouse/pull/20498) ([Azat Khuzhin](https://github.com/azat)).
+* Fix the function `transform` not working properly for floating point keys. Closes [#20460](https://github.com/ClickHouse/ClickHouse/issues/20460). [#20479](https://github.com/ClickHouse/ClickHouse/pull/20479) ([flynn](https://github.com/ucasFL)).
+* Fix an infinite loop when propagating WITH aliases to subqueries. This fixes [#20388](https://github.com/ClickHouse/ClickHouse/issues/20388). [#20476](https://github.com/ClickHouse/ClickHouse/pull/20476) ([Amos Bird](https://github.com/amosbird)).
+* Fix abnormal server termination when an HTTP client goes away. [#20464](https://github.com/ClickHouse/ClickHouse/pull/20464) ([Azat Khuzhin](https://github.com/azat)).
+* Fix `LOGICAL_ERROR` for `join_use_nulls=1` when the JOIN contains a const from SELECT. [#20461](https://github.com/ClickHouse/ClickHouse/pull/20461) ([Azat Khuzhin](https://github.com/azat)).
+* Check if the table function `view` is used in an expression list and throw an error. This fixes [#20342](https://github.com/ClickHouse/ClickHouse/issues/20342). [#20350](https://github.com/ClickHouse/ClickHouse/pull/20350) ([Amos Bird](https://github.com/amosbird)).
+* Avoid an invalid dereference in the RANGE_HASHED() dictionary. [#20345](https://github.com/ClickHouse/ClickHouse/pull/20345) ([Azat Khuzhin](https://github.com/azat)).
+* Fix a null dereference with `join_use_nulls=1`. [#20344](https://github.com/ClickHouse/ClickHouse/pull/20344) ([Azat Khuzhin](https://github.com/azat)).
+* Fix an incorrect result of binary operations between two constant decimals of different scale. Fixes [#20283](https://github.com/ClickHouse/ClickHouse/issues/20283). [#20339](https://github.com/ClickHouse/ClickHouse/pull/20339) ([Maksim Kita](https://github.com/kitaisreal)).
+* Fix overly frequent retries of failed background tasks for the `ReplicatedMergeTree` table engine family. This could lead to too verbose logging and increased CPU load. Fixes [#20203](https://github.com/ClickHouse/ClickHouse/issues/20203). [#20335](https://github.com/ClickHouse/ClickHouse/pull/20335) ([alesapin](https://github.com/alesapin)).
+* Forbid `DROP` or `RENAME` of the version column of the `*CollapsingMergeTree` and `ReplacingMergeTree` table engines. [#20300](https://github.com/ClickHouse/ClickHouse/pull/20300) ([alesapin](https://github.com/alesapin)).
+* Fixed the behaviour where, in case of broken JSON, we tried to read the whole file into memory, which led to an exception from the allocator. Fixes [#19719](https://github.com/ClickHouse/ClickHouse/issues/19719). [#20286](https://github.com/ClickHouse/ClickHouse/pull/20286) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
+* Fix an exception during vertical merge for `MergeTree` family table engines that don't allow vertical merges. Fixes [#20259](https://github.com/ClickHouse/ClickHouse/issues/20259). [#20279](https://github.com/ClickHouse/ClickHouse/pull/20279) ([alesapin](https://github.com/alesapin)).
+* Fix a rare server crash on config reload during shutdown. Fixes [#19689](https://github.com/ClickHouse/ClickHouse/issues/19689). [#20224](https://github.com/ClickHouse/ClickHouse/pull/20224) ([alesapin](https://github.com/alesapin)).
+* Fix CTE when used in INSERT SELECT. This fixes [#20187](https://github.com/ClickHouse/ClickHouse/issues/20187), fixes [#20195](https://github.com/ClickHouse/ClickHouse/issues/20195). [#20211](https://github.com/ClickHouse/ClickHouse/pull/20211) ([Amos Bird](https://github.com/amosbird)).
+* Fixes [#19314](https://github.com/ClickHouse/ClickHouse/issues/19314). [#20156](https://github.com/ClickHouse/ClickHouse/pull/20156) ([Ivan](https://github.com/abyss7)).
+* Fix the `toMinute` function to handle special timezones correctly. [#20149](https://github.com/ClickHouse/ClickHouse/pull/20149) ([keenwolf](https://github.com/keen-wolf)).
+* Fix a server crash after a query with an `if` function whose then/else branches result in a `Tuple` type; the `Tuple` type must contain an `Array` or another complex type to trigger the bug. Fixes [#18356](https://github.com/ClickHouse/ClickHouse/issues/18356). [#20133](https://github.com/ClickHouse/ClickHouse/pull/20133) ([alesapin](https://github.com/alesapin)).
+* The `MongoDB` table engine now establishes a connection only when it's going to read data. `ATTACH TABLE` won't try to connect anymore. [#20110](https://github.com/ClickHouse/ClickHouse/pull/20110) ([Vitaly Baranov](https://github.com/vitlibar)).
+* Fix a bug in StorageJoin. [#20079](https://github.com/ClickHouse/ClickHouse/pull/20079) ([vdimir](https://github.com/vdimir)).
+* Fix the case when, while calculating the modulo of a negative number divided by a small divisor, the resulting data type was not large enough to accommodate the negative result (see the sketch after this changelog section). This closes [#20052](https://github.com/ClickHouse/ClickHouse/issues/20052). [#20067](https://github.com/ClickHouse/ClickHouse/pull/20067) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* MaterializeMySQL: Fix replication for statements that update several tables. [#20066](https://github.com/ClickHouse/ClickHouse/pull/20066) ([Håvard Kvålen](https://github.com/havardk)).
+* Prevent "Connection refused" in Docker during initialization script execution. [#20012](https://github.com/ClickHouse/ClickHouse/pull/20012) ([filimonov](https://github.com/filimonov)).
+* `EmbeddedRocksDB` is an experimental storage. Fix the issue with the lack of proper type checking. Simplified code. This closes [#19967](https://github.com/ClickHouse/ClickHouse/issues/19967). [#19972](https://github.com/ClickHouse/ClickHouse/pull/19972) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Fix a segfault in the function `fromModifiedJulianDay` when the argument type is `Nullable(T)` for any integral type other than Int32. [#19959](https://github.com/ClickHouse/ClickHouse/pull/19959) ([PHO](https://github.com/depressed-pho)).
+* BloomFilter index crash fix. Fixes [#19757](https://github.com/ClickHouse/ClickHouse/issues/19757). [#19884](https://github.com/ClickHouse/ClickHouse/pull/19884) ([Maksim Kita](https://github.com/kitaisreal)).
+* A deadlock was possible if system.text_log was enabled. This fixes [#19874](https://github.com/ClickHouse/ClickHouse/issues/19874). [#19875](https://github.com/ClickHouse/ClickHouse/pull/19875) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Fix starting the server with tables having default expressions containing dictGet(). Allow getting the return type of dictGet() without loading the dictionary. [#19805](https://github.com/ClickHouse/ClickHouse/pull/19805) ([Vitaly Baranov](https://github.com/vitlibar)).
+* Fix an abort exception in clickhouse-client while executing only `select`. [#19790](https://github.com/ClickHouse/ClickHouse/pull/19790) ([taiyang-li](https://github.com/taiyang-li)).
+* Fix a bug where moving pieces to the destination table could fail when multiple clickhouse-copiers were launched. [#19743](https://github.com/ClickHouse/ClickHouse/pull/19743) ([madianjun](https://github.com/mdianjun)).
+* A background thread which executes `ON CLUSTER` queries might hang waiting for a dropped replicated table to do something. It's fixed. [#19684](https://github.com/ClickHouse/ClickHouse/pull/19684) ([yiguolei](https://github.com/yiguolei)).
+
+#### Build/Testing/Packaging Improvement
+
+* Allow building ClickHouse with AVX-2 enabled globally. It gives slight performance benefits on modern CPUs. Not recommended for production and will not be supported as an official build for now. [#20180](https://github.com/ClickHouse/ClickHouse/pull/20180) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Fix some of the issues found by Coverity. See [#19964](https://github.com/ClickHouse/ClickHouse/issues/19964). [#20010](https://github.com/ClickHouse/ClickHouse/pull/20010) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Allow starting up with a modified binary under gdb. In previous versions, if you set a breakpoint in gdb before start, the server would refuse to start up due to a failed integrity check. [#21258](https://github.com/ClickHouse/ClickHouse/pull/21258) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Add a test for different compression methods in Kafka. [#21111](https://github.com/ClickHouse/ClickHouse/pull/21111) ([filimonov](https://github.com/filimonov)).
+* Fixed a port clash in the test_storage_kerberized_hdfs test. [#19974](https://github.com/ClickHouse/ClickHouse/pull/19974) ([Ilya Yatsishin](https://github.com/qoega)).
+* Print `stdout` and `stderr` to the log when failing to start docker in integration tests. Before this PR there was a very short error message in this case, which didn't help to investigate problems. [#20631](https://github.com/ClickHouse/ClickHouse/pull/20631) ([Vitaly Baranov](https://github.com/vitlibar)).
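As a companion to the negative-modulo entry above, here is a hedged standalone C++ sketch (illustrative only; it is not ClickHouse's type-inference code) of why the result type must be signed and wide enough to hold the negative remainder:

```cpp
#include <cstdint>
#include <iostream>

int main()
{
    int16_t dividend = -199;
    int8_t divisor = 100;

    // The remainder takes the sign of the dividend: -199 % 100 == -99.
    int16_t remainder = dividend % divisor;
    std::cout << remainder << '\n'; // -99

    // If a too-narrow unsigned type were chosen for the result, -99 would wrap around.
    std::cout << static_cast<int>(static_cast<uint8_t>(remainder)) << '\n'; // 157
}
```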
## ClickHouse release 21.2

### ClickHouse release v21.2.2.8-stable, 2021-02-07

diff --git a/CMakeLists.txt b/CMakeLists.txt index 76b79a0b6c8..cb91f879d5b 100644 --- a/CMakeLists.txt +++ b/CMakeLists.txt @@ -39,6 +39,8 @@ else() set(RECONFIGURE_MESSAGE_LEVEL STATUS) endif() +enable_language(C CXX ASM) + include (cmake/arch.cmake) include (cmake/target.cmake) include (cmake/tools.cmake) @@ -66,17 +68,30 @@ endif () include (cmake/find/ccache.cmake) -option(ENABLE_CHECK_HEAVY_BUILDS "Don't allow C++ translation units to compile too long or to take too much memory while compiling" OFF) +# Take care to add prlimit in the command line before ccache, or else ccache thinks that +# prlimit is the compiler, and clang++ is its input file, and refuses to work with +# multiple inputs, e.g. in the ccache log: +# [2021-03-31T18:06:32.655327 36900] Command line: /usr/bin/ccache prlimit --as=10000000000 --data=5000000000 --cpu=600 /usr/bin/clang++-11 - ...... std=gnu++2a -MD -MT src/CMakeFiles/dbms.dir/Storages/MergeTree/IMergeTreeDataPart.cpp.o -MF src/CMakeFiles/dbms.dir/Storages/MergeTree/IMergeTreeDataPart.cpp.o.d -o src/CMakeFiles/dbms.dir/Storages/MergeTree/IMergeTreeDataPart.cpp.o -c ../src/Storages/MergeTree/IMergeTreeDataPart.cpp +# +# [2021-03-31T18:06:32.656704 36900] Multiple input files: /usr/bin/clang++-11 and ../src/Storages/MergeTree/IMergeTreeDataPart.cpp +# +# Another way would be to use the --ccache-skip option before clang++-11 to make +# ccache ignore it. option(ENABLE_CHECK_HEAVY_BUILDS "Don't allow C++ translation units to compile too long or to take too much memory while compiling." OFF) if (ENABLE_CHECK_HEAVY_BUILDS) # set DATA (since RSS does not work since 2.6.x+) to 2G set (RLIMIT_DATA 5000000000) # set VIRT (RLIMIT_AS) to 10G (DATA*10) set (RLIMIT_AS 10000000000) + # set CPU time limit to 600 seconds + set (RLIMIT_CPU 600) + # gcc10/clang -fsanitize=memory is too heavy if (SANITIZE STREQUAL "memory" OR COMPILER_GCC) set (RLIMIT_DATA 10000000000) endif() - set (CMAKE_CXX_COMPILER_LAUNCHER prlimit --as=${RLIMIT_AS} --data=${RLIMIT_DATA} --cpu=600) + + set (CMAKE_CXX_COMPILER_LAUNCHER prlimit --as=${RLIMIT_AS} --data=${RLIMIT_DATA} --cpu=${RLIMIT_CPU} ${CMAKE_CXX_COMPILER_LAUNCHER}) endif ()
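The prlimit launcher above caps each compiler invocation's memory and CPU time. For intuition, here is a hedged standalone C++/POSIX sketch of what such a wrapper does before handing control to the compiler (the compiler name and limit values are illustrative, taken from the CMake snippet):

```cpp
#include <sys/resource.h>
#include <unistd.h>
#include <cstdio>

// Apply one hard+soft limit, mirroring `prlimit --as/--data/--cpu`.
static void apply_limit(int resource, rlim_t value)
{
    rlimit lim{value, value};
    if (setrlimit(resource, &lim) != 0)
        perror("setrlimit");
}

int main()
{
    apply_limit(RLIMIT_AS, 10000000000ULL);  // 10G of virtual memory
    apply_limit(RLIMIT_DATA, 5000000000ULL); // 5G data segment
    apply_limit(RLIMIT_CPU, 600);            // 600 seconds of CPU time

    // Replace this process with the compiler; the limits are inherited.
    execlp("clang++-11", "clang++-11", "--version", static_cast<char *>(nullptr));
    perror("execlp"); // reached only if exec fails
    return 1;
}
```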
if (NOT CMAKE_BUILD_TYPE OR CMAKE_BUILD_TYPE STREQUAL "None") @@ -152,10 +167,10 @@ endif () # If turned `ON`, assumes the user has either the system GTest library or the bundled one. option(ENABLE_TESTS "Provide unit_test_dbms target with Google.Test unit tests" ON) +option(ENABLE_EXAMPLES "Build all example programs in 'examples' subdirectories" OFF) if (OS_LINUX AND NOT UNBUNDLED AND MAKE_STATIC_LIBRARIES AND NOT SPLIT_SHARED_LIBRARIES AND CMAKE_VERSION VERSION_GREATER "3.9.0") # Only for Linux, x86_64. - # Implies ${ENABLE_FASTMEMCPY} option(GLIBC_COMPATIBILITY "Enable compatibility with older glibc libraries." ON) elseif(GLIBC_COMPATIBILITY) message (${RECONFIGURE_MESSAGE_LEVEL} "Glibc compatibility cannot be enabled in current configuration") @@ -241,35 +256,52 @@ else() message(STATUS "Disabling compiler -pipe option (have only ${AVAILABLE_PHYSICAL_MEMORY} mb of memory)") endif() -if(NOT DISABLE_CPU_OPTIMIZE) - include(cmake/cpu_features.cmake) -endif() +include(cmake/cpu_features.cmake) -option(ARCH_NATIVE "Add -march=native compiler flag") +option(ARCH_NATIVE "Add -march=native compiler flag. This makes your binaries non-portable but more performant code may be generated.") if (ARCH_NATIVE) set (COMPILER_FLAGS "${COMPILER_FLAGS} -march=native") endif () -if (COMPILER_GCC OR COMPILER_CLANG) - # to make numeric_limits<__int128> works with GCC - set (_CXX_STANDARD "gnu++2a") -else() - set (_CXX_STANDARD "c++2a") -endif() +# Asynchronous unwind tables are needed for the Query Profiler. +# They are already enabled by default on some platforms but possibly not on all platforms. +# Enable them explicitly. +set (COMPILER_FLAGS "${COMPILER_FLAGS} -fasynchronous-unwind-tables") -# cmake < 3.12 doesn't support 20. We'll set CMAKE_CXX_FLAGS for now -# set (CMAKE_CXX_STANDARD 20) -set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=${_CXX_STANDARD}") +if (${CMAKE_VERSION} VERSION_LESS "3.12.4") + # CMake < 3.12 doesn't support setting 20 as a C++ standard version. + # We will add the C++ standard controlling flag to CMAKE_CXX_FLAGS manually for now. -set (CMAKE_CXX_EXTENSIONS 0) # https://cmake.org/cmake/help/latest/prop_tgt/CXX_EXTENSIONS.html#prop_tgt:CXX_EXTENSIONS -set (CMAKE_CXX_STANDARD_REQUIRED ON) + if (COMPILER_GCC OR COMPILER_CLANG) + # to make numeric_limits<__int128> work with GCC + set (_CXX_STANDARD "gnu++2a") + else () + set (_CXX_STANDARD "c++2a") + endif () + + set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=${_CXX_STANDARD}") +else () + set (CMAKE_CXX_STANDARD 20) + set (CMAKE_CXX_EXTENSIONS ON) # Same as gnu++2a (ON) vs c++2a (OFF): https://cmake.org/cmake/help/latest/prop_tgt/CXX_EXTENSIONS.html + set (CMAKE_CXX_STANDARD_REQUIRED ON) +endif () + +set (CMAKE_C_STANDARD 11) +set (CMAKE_C_EXTENSIONS ON) +set (CMAKE_C_STANDARD_REQUIRED ON) if (COMPILER_GCC OR COMPILER_CLANG) # Enable C++14 sized global deallocation functions. It should be enabled by setting -std=c++14 but I'm not sure. set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fsized-deallocation") endif () +# falign-functions=32 prevents random performance regressions with code changes, thus providing more stable +# benchmarks. +if (COMPILER_GCC OR COMPILER_CLANG) + set(COMPILER_FLAGS "${COMPILER_FLAGS} -falign-functions=32") +endif () # Compiler-specific coverage flags e.g. -fcoverage-mapping for gcc option(WITH_COVERAGE "Profile the resulting binary/binaries" OFF) @@ -457,6 +489,7 @@ find_contrib_lib(double-conversion) # Must be before parquet include (cmake/find/ssl.cmake) include (cmake/find/ldap.cmake) # after ssl include (cmake/find/icu.cmake) +include (cmake/find/xz.cmake) include (cmake/find/zlib.cmake) include (cmake/find/zstd.cmake) include (cmake/find/ltdl.cmake) # for odbc @@ -467,6 +500,7 @@ include (cmake/find/krb5.cmake) include (cmake/find/libgsasl.cmake) include (cmake/find/cyrus-sasl.cmake) include (cmake/find/rdkafka.cmake) +include (cmake/find/libuv.cmake) # for amqpcpp and cassandra include (cmake/find/amqpcpp.cmake) include (cmake/find/capnp.cmake) include (cmake/find/llvm.cmake) @@ -489,6 +523,7 @@ include (cmake/find/fast_float.cmake) include (cmake/find/rapidjson.cmake) include (cmake/find/fastops.cmake) include (cmake/find/odbc.cmake) +include (cmake/find/nanodbc.cmake) include (cmake/find/rocksdb.cmake) include (cmake/find/libpqxx.cmake) include (cmake/find/nuraft.cmake) @@ -504,6 +539,7 @@ include (cmake/find/msgpack.cmake) include (cmake/find/cassandra.cmake) include (cmake/find/sentry.cmake) include (cmake/find/stats.cmake) +include (cmake/find/datasketches.cmake) set (USE_INTERNAL_CITYHASH_LIBRARY ON CACHE INTERNAL "") find_contrib_lib(cityhash) @@ -536,7 +572,7 @@ macro (add_executable target) # explicitly acquire and interpose malloc symbols by clickhouse_malloc # if GLIBC_COMPATIBILITY is ON and ENABLE_THINLTO is on then provide the memcpy symbol explicitly to neutralize thinlto's libcall generation. if (GLIBC_COMPATIBILITY AND ENABLE_THINLTO) - _add_executable (${ARGV} $ $) + _add_executable (${ARGV} $ $) else () _add_executable (${ARGV} $) endif () diff --git a/README.md b/README.md index 3329a98877f..ea9f365a3c6 100644 --- a/README.md +++ b/README.md @@ -8,7 +8,7 @@ ClickHouse® is an open-source column-oriented database management system that a * [Tutorial](https://clickhouse.tech/docs/en/getting_started/tutorial/) shows how to set up and query a small ClickHouse cluster. * [Documentation](https://clickhouse.tech/docs/en/) provides more in-depth information. * [YouTube channel](https://www.youtube.com/c/ClickHouseDB) has a lot of content about ClickHouse in video format.
-* [Slack](https://join.slack.com/t/clickhousedb/shared_invite/zt-ly9m4w1x-6j7x5Ts_pQZqrctAbRZ3cg) and [Telegram](https://telegram.me/clickhouse_en) allow to chat with ClickHouse users in real-time. +* [Slack](https://join.slack.com/t/clickhousedb/shared_invite/zt-nwwakmk4-xOJ6cdy0sJC3It8j348~IA) and [Telegram](https://telegram.me/clickhouse_en) allow to chat with ClickHouse users in real-time. * [Blog](https://clickhouse.yandex/blog/en/) contains various ClickHouse-related articles, as well as announcements and reports about events. * [Code Browser](https://clickhouse.tech/codebrowser/html_report/ClickHouse/index.html) with syntax highlight and navigation. * [Contacts](https://clickhouse.tech/#contacts) can help to get your questions answered if there are any. diff --git a/base/CMakeLists.txt b/base/CMakeLists.txt index 46bd57eda12..023dcaaccae 100644 --- a/base/CMakeLists.txt +++ b/base/CMakeLists.txt @@ -8,6 +8,7 @@ add_subdirectory (loggers) add_subdirectory (pcg-random) add_subdirectory (widechar_width) add_subdirectory (readpassphrase) +add_subdirectory (bridge) if (USE_MYSQL) add_subdirectory (mysqlxx) diff --git a/base/bridge/CMakeLists.txt b/base/bridge/CMakeLists.txt new file mode 100644 index 00000000000..20b0b651677 --- /dev/null +++ b/base/bridge/CMakeLists.txt @@ -0,0 +1,7 @@ +add_library (bridge + IBridge.cpp +) + +target_include_directories (daemon PUBLIC ..) +target_link_libraries (bridge PRIVATE daemon dbms Poco::Data Poco::Data::ODBC) + diff --git a/base/bridge/IBridge.cpp b/base/bridge/IBridge.cpp new file mode 100644 index 00000000000..b2ec53158b1 --- /dev/null +++ b/base/bridge/IBridge.cpp @@ -0,0 +1,233 @@ +#include "IBridge.h" + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#if USE_ODBC +# include +#endif + + +namespace DB +{ + +namespace ErrorCodes +{ + extern const int ARGUMENT_OUT_OF_BOUND; +} + +namespace +{ + Poco::Net::SocketAddress makeSocketAddress(const std::string & host, UInt16 port, Poco::Logger * log) + { + Poco::Net::SocketAddress socket_address; + try + { + socket_address = Poco::Net::SocketAddress(host, port); + } + catch (const Poco::Net::DNSException & e) + { + const auto code = e.code(); + if (code == EAI_FAMILY +#if defined(EAI_ADDRFAMILY) + || code == EAI_ADDRFAMILY +#endif + ) + { + LOG_ERROR(log, "Cannot resolve listen_host ({}), error {}: {}. If it is an IPv6 address and your host has disabled IPv6, then consider to specify IPv4 address to listen in element of configuration file. 
Example: 0.0.0.0", host, e.code(), e.message()); + } + + throw; + } + return socket_address; + } + + Poco::Net::SocketAddress socketBindListen(Poco::Net::ServerSocket & socket, const std::string & host, UInt16 port, Poco::Logger * log) + { + auto address = makeSocketAddress(host, port, log); +#if POCO_VERSION < 0x01080000 + socket.bind(address, /* reuseAddress = */ true); +#else + socket.bind(address, /* reuseAddress = */ true, /* reusePort = */ false); +#endif + + socket.listen(/* backlog = */ 64); + + return address; + } +} + + +void IBridge::handleHelp(const std::string &, const std::string &) +{ + Poco::Util::HelpFormatter help_formatter(options()); + help_formatter.setCommand(commandName()); + help_formatter.setHeader("HTTP-proxy for odbc requests"); + help_formatter.setUsage("--http-port "); + help_formatter.format(std::cerr); + + stopOptionsProcessing(); +} + + +void IBridge::defineOptions(Poco::Util::OptionSet & options) +{ + options.addOption( + Poco::Util::Option("http-port", "", "port to listen").argument("http-port", true) .binding("http-port")); + + options.addOption( + Poco::Util::Option("listen-host", "", "hostname or address to listen, default 127.0.0.1").argument("listen-host").binding("listen-host")); + + options.addOption( + Poco::Util::Option("http-timeout", "", "http timeout for socket, default 1800").argument("http-timeout").binding("http-timeout")); + + options.addOption( + Poco::Util::Option("max-server-connections", "", "max connections to server, default 1024").argument("max-server-connections").binding("max-server-connections")); + + options.addOption( + Poco::Util::Option("keep-alive-timeout", "", "keepalive timeout, default 10").argument("keep-alive-timeout").binding("keep-alive-timeout")); + + options.addOption( + Poco::Util::Option("log-level", "", "sets log level, default info") .argument("log-level").binding("logger.level")); + + options.addOption( + Poco::Util::Option("log-path", "", "log path for all logs, default console").argument("log-path").binding("logger.log")); + + options.addOption( + Poco::Util::Option("err-log-path", "", "err log path for all logs, default no").argument("err-log-path").binding("logger.errorlog")); + + options.addOption( + Poco::Util::Option("stdout-path", "", "stdout log path, default console").argument("stdout-path").binding("logger.stdout")); + + options.addOption( + Poco::Util::Option("stderr-path", "", "stderr log path, default console").argument("stderr-path").binding("logger.stderr")); + + using Me = std::decay_t; + + options.addOption( + Poco::Util::Option("help", "", "produce this help message").binding("help").callback(Poco::Util::OptionCallback(this, &Me::handleHelp))); + + ServerApplication::defineOptions(options); // NOLINT Don't need complex BaseDaemon's .xml config +} + + +void IBridge::initialize(Application & self) +{ + BaseDaemon::closeFDs(); + is_help = config().has("help"); + + if (is_help) + return; + + config().setString("logger", bridgeName()); + + /// Redirect stdout, stderr to specified files. + /// Some libraries and sanitizers write to stderr in case of errors. + const auto stdout_path = config().getString("logger.stdout", ""); + if (!stdout_path.empty()) + { + if (!freopen(stdout_path.c_str(), "a+", stdout)) + throw Poco::OpenFileException("Cannot attach stdout to " + stdout_path); + + /// Disable buffering for stdout. 
+ setbuf(stdout, nullptr); + } + const auto stderr_path = config().getString("logger.stderr", ""); + if (!stderr_path.empty()) + { + if (!freopen(stderr_path.c_str(), "a+", stderr)) + throw Poco::OpenFileException("Cannot attach stderr to " + stderr_path); + + /// Disable buffering for stderr. + setbuf(stderr, nullptr); + } + + buildLoggers(config(), logger(), self.commandName()); + + BaseDaemon::logRevision(); + + log = &logger(); + hostname = config().getString("listen-host", "127.0.0.1"); + port = config().getUInt("http-port"); + if (port > 0xFFFF) + throw Exception("Out of range 'http-port': " + std::to_string(port), ErrorCodes::ARGUMENT_OUT_OF_BOUND); + + http_timeout = config().getUInt64("http-timeout", DEFAULT_HTTP_READ_BUFFER_TIMEOUT); + max_server_connections = config().getUInt("max-server-connections", 1024); + keep_alive_timeout = config().getUInt64("keep-alive-timeout", 10); + + initializeTerminationAndSignalProcessing(); + + ServerApplication::initialize(self); // NOLINT +} + + +void IBridge::uninitialize() +{ + BaseDaemon::uninitialize(); +} + + +int IBridge::main(const std::vector & /*args*/) +{ + if (is_help) + return Application::EXIT_OK; + + registerFormats(); + LOG_INFO(log, "Starting up {} on host: {}, port: {}", bridgeName(), hostname, port); + + Poco::Net::ServerSocket socket; + auto address = socketBindListen(socket, hostname, port, log); + socket.setReceiveTimeout(http_timeout); + socket.setSendTimeout(http_timeout); + + Poco::ThreadPool server_pool(3, max_server_connections); + + Poco::Net::HTTPServerParams::Ptr http_params = new Poco::Net::HTTPServerParams; + http_params->setTimeout(http_timeout); + http_params->setKeepAliveTimeout(keep_alive_timeout); + + auto shared_context = Context::createShared(); + auto context = Context::createGlobal(shared_context.get()); + context->makeGlobalContext(); + + if (config().has("query_masking_rules")) + SensitiveDataMasker::setInstance(std::make_unique(config(), "query_masking_rules")); + + auto server = HTTPServer( + context, + getHandlerFactoryPtr(context), + server_pool, + socket, + http_params); + + SCOPE_EXIT({ + LOG_DEBUG(log, "Received termination signal."); + LOG_DEBUG(log, "Waiting for current connections to close."); + + server.stop(); + + for (size_t count : ext::range(1, 6)) + { + if (server.currentConnections() == 0) + break; + LOG_DEBUG(log, "Waiting for {} connections, try {}", server.currentConnections(), count); + std::this_thread::sleep_for(std::chrono::milliseconds(1000)); + } + }); + + server.start(); + LOG_INFO(log, "Listening http://{}", address.toString()); + + waitForTerminationRequest(); + return Application::EXIT_OK; +} + +} diff --git a/base/bridge/IBridge.h b/base/bridge/IBridge.h new file mode 100644 index 00000000000..c64003d9959 --- /dev/null +++ b/base/bridge/IBridge.h @@ -0,0 +1,51 @@ +#pragma once + +#include +#include +#include + +#include +#include + + +namespace DB +{ + +/// Class represents base for clickhouse-odbc-bridge and clickhouse-library-bridge servers. +/// Listens to incoming HTTP POST and GET requests on specified port and host. +/// Has two handlers '/' for all incoming POST requests and /ping for GET request about service status. 
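The doc comment above summarizes the bridge's HTTP surface before the class declaration that follows. As a hedged illustration (a minimal pure-Poco sketch, not ClickHouse's own HTTPServer wrapper; the port and class names are assumptions), a server exposing '/ping' for GET and '/' for POST could look like:

```cpp
#include <Poco/Net/HTTPRequestHandler.h>
#include <Poco/Net/HTTPRequestHandlerFactory.h>
#include <Poco/Net/HTTPServer.h>
#include <Poco/Net/HTTPServerParams.h>
#include <Poco/Net/HTTPServerRequest.h>
#include <Poco/Net/HTTPServerResponse.h>
#include <Poco/Net/ServerSocket.h>
#include <Poco/Thread.h>
#include <ostream>

class PingHandler : public Poco::Net::HTTPRequestHandler
{
public:
    void handleRequest(Poco::Net::HTTPServerRequest &, Poco::Net::HTTPServerResponse & response) override
    {
        response.setContentType("text/plain");
        response.send() << "Ok.\n"; // service status probe
    }
};

class Factory : public Poco::Net::HTTPRequestHandlerFactory
{
public:
    Poco::Net::HTTPRequestHandler * createRequestHandler(const Poco::Net::HTTPServerRequest & request) override
    {
        if (request.getURI() == "/ping" && request.getMethod() == "GET")
            return new PingHandler;
        return nullptr; // a real bridge would return its request handler for POST '/'
    }
};

int main()
{
    Poco::Net::ServerSocket socket(9018); // illustrative port
    Poco::Net::HTTPServer server(new Factory, socket, new Poco::Net::HTTPServerParams);
    server.start();
    Poco::Thread::sleep(60000); // the real bridge calls waitForTerminationRequest() instead
    server.stop();
}
```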
+class IBridge : public BaseDaemon +{ + +public: + /// Define command line arguments + void defineOptions(Poco::Util::OptionSet & options) override; + +protected: + using HandlerFactoryPtr = std::shared_ptr; + + void initialize(Application & self) override; + + void uninitialize() override; + + int main(const std::vector & args) override; + + virtual std::string bridgeName() const = 0; + + virtual HandlerFactoryPtr getHandlerFactoryPtr(ContextPtr context) const = 0; + + size_t keep_alive_timeout; + +private: + void handleHelp(const std::string &, const std::string &); + + bool is_help; + std::string hostname; + size_t port; + std::string log_level; + size_t max_server_connections; + size_t http_timeout; + + Poco::Logger * log; +}; +} diff --git a/src/Common/BorrowedObjectPool.h b/base/common/BorrowedObjectPool.h similarity index 99% rename from src/Common/BorrowedObjectPool.h rename to base/common/BorrowedObjectPool.h index d5263cf92a8..6a90a7e7122 100644 --- a/src/Common/BorrowedObjectPool.h +++ b/base/common/BorrowedObjectPool.h @@ -7,8 +7,7 @@ #include #include - -#include +#include /** Pool for limited size objects that cannot be used from different threads simultaneously. * The main use case is to have fixed size of objects that can be reused in difference threads during their lifetime diff --git a/base/common/CMakeLists.txt b/base/common/CMakeLists.txt index cea52b443dd..e5e18669ebe 100644 --- a/base/common/CMakeLists.txt +++ b/base/common/CMakeLists.txt @@ -29,7 +29,7 @@ elseif (ENABLE_READLINE) endif () if (USE_DEBUG_HELPERS) - set (INCLUDE_DEBUG_HELPERS "-include ${ClickHouse_SOURCE_DIR}/base/common/iostream_debug_helpers.h") + set (INCLUDE_DEBUG_HELPERS "-include \"${ClickHouse_SOURCE_DIR}/base/common/iostream_debug_helpers.h\"") set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${INCLUDE_DEBUG_HELPERS}") endif () @@ -45,7 +45,11 @@ if (USE_INTERNAL_CCTZ) set_source_files_properties(DateLUTImpl.cpp PROPERTIES COMPILE_DEFINITIONS USE_INTERNAL_CCTZ) endif() -target_include_directories(common PUBLIC .. ${CMAKE_CURRENT_BINARY_DIR}/..) +target_include_directories(common PUBLIC .. 
"${CMAKE_CURRENT_BINARY_DIR}/..") + +if (OS_DARWIN AND NOT MAKE_STATIC_LIBRARIES) + target_link_libraries(common PUBLIC -Wl,-U,_inside_main) +endif() # Allow explicit fallback to readline if (NOT ENABLE_REPLXX AND ENABLE_READLINE) @@ -74,7 +78,6 @@ target_link_libraries (common ${CITYHASH_LIBRARIES} boost::headers_only boost::system - FastMemcpy Poco::Net Poco::Net::SSL Poco::Util diff --git a/base/common/DateLUT.cpp b/base/common/DateLUT.cpp index 6ff0884701c..d14b63cd70a 100644 --- a/base/common/DateLUT.cpp +++ b/base/common/DateLUT.cpp @@ -152,7 +152,7 @@ const DateLUTImpl & DateLUT::getImplementation(const std::string & time_zone) co auto it = impls.emplace(time_zone, nullptr).first; if (!it->second) - it->second = std::make_unique(time_zone); + it->second = std::unique_ptr(new DateLUTImpl(time_zone)); return *it->second; } diff --git a/base/common/DateLUT.h b/base/common/DateLUT.h index 93c6cb403e2..378b4360f3b 100644 --- a/base/common/DateLUT.h +++ b/base/common/DateLUT.h @@ -32,7 +32,6 @@ public: return date_lut.getImplementation(time_zone); } - static void setDefaultTimezone(const std::string & time_zone) { auto & date_lut = getInstance(); diff --git a/base/common/DateLUTImpl.cpp b/base/common/DateLUTImpl.cpp index 50620e21b8f..e7faeb63760 100644 --- a/base/common/DateLUTImpl.cpp +++ b/base/common/DateLUTImpl.cpp @@ -46,24 +46,41 @@ DateLUTImpl::DateLUTImpl(const std::string & time_zone_) if (&inside_main) assert(inside_main); - size_t i = 0; - time_t start_of_day = 0; - cctz::time_zone cctz_time_zone; if (!cctz::load_time_zone(time_zone, &cctz_time_zone)) throw Poco::Exception("Cannot load time zone " + time_zone_); - cctz::time_zone::absolute_lookup start_of_epoch_lookup = cctz_time_zone.lookup(std::chrono::system_clock::from_time_t(start_of_day)); - offset_at_start_of_epoch = start_of_epoch_lookup.offset; - offset_is_whole_number_of_hours_everytime = true; + constexpr cctz::civil_day epoch{1970, 1, 1}; + constexpr cctz::civil_day lut_start{DATE_LUT_MIN_YEAR, 1, 1}; + time_t start_of_day; - cctz::civil_day date{1970, 1, 1}; + /// Note: it's validated against all timezones in the system. + static_assert((epoch - lut_start) == daynum_offset_epoch); + offset_at_start_of_epoch = cctz_time_zone.lookup(cctz_time_zone.lookup(epoch).pre).offset; + offset_at_start_of_lut = cctz_time_zone.lookup(cctz_time_zone.lookup(lut_start).pre).offset; + offset_is_whole_number_of_hours_during_epoch = true; + + cctz::civil_day date = lut_start; + + UInt32 i = 0; do { cctz::time_zone::civil_lookup lookup = cctz_time_zone.lookup(date); - start_of_day = std::chrono::system_clock::to_time_t(lookup.pre); /// Ambiguity is possible. + /// Ambiguity is possible if time was changed backwards at the midnight + /// or after midnight time has been changed back to midnight, for example one hour backwards at 01:00 + /// or after midnight time has been changed to the previous day, for example two hours backwards at 01:00 + /// Then midnight appears twice. Usually time change happens exactly at 00:00 or 01:00. + + /// If transition did not involve previous day, we should use the first midnight as the start of the day, + /// otherwise it's better to use the second midnight. + + std::chrono::time_point start_of_day_time_point = lookup.trans < lookup.post + ? 
lookup.post /* Second midnight appears after transition, so there was a piece of previous day after transition */ + : lookup.pre; + + start_of_day = std::chrono::system_clock::to_time_t(start_of_day_time_point); Values & values = lut[i]; values.year = date.year(); @@ -72,7 +89,7 @@ DateLUTImpl::DateLUTImpl(const std::string & time_zone_) values.day_of_week = getDayOfWeek(date); values.date = start_of_day; - assert(values.year >= DATE_LUT_MIN_YEAR && values.year <= DATE_LUT_MAX_YEAR); + assert(values.year >= DATE_LUT_MIN_YEAR && values.year <= DATE_LUT_MAX_YEAR + 1); assert(values.month >= 1 && values.month <= 12); assert(values.day_of_month >= 1 && values.day_of_month <= 31); assert(values.day_of_week >= 1 && values.day_of_week <= 7); @@ -85,50 +102,42 @@ DateLUTImpl::DateLUTImpl(const std::string & time_zone_) else values.days_in_month = i != 0 ? lut[i - 1].days_in_month : 31; - values.time_at_offset_change = 0; - values.amount_of_offset_change = 0; + values.time_at_offset_change_value = 0; + values.amount_of_offset_change_value = 0; - if (start_of_day % 3600) - offset_is_whole_number_of_hours_everytime = false; + if (offset_is_whole_number_of_hours_during_epoch && start_of_day > 0 && start_of_day % 3600) + offset_is_whole_number_of_hours_during_epoch = false; - /// If UTC offset was changed in previous day. - if (i != 0) + /// If UTC offset was changed this day. + /// Change in time zone without transition is possible, e.g. Moscow 1991 Sun, 31 Mar, 02:00 MSK to EEST + cctz::time_zone::civil_transition transition{}; + if (cctz_time_zone.next_transition(start_of_day_time_point - std::chrono::seconds(1), &transition) + && (cctz::civil_day(transition.from) == date || cctz::civil_day(transition.to) == date) + && transition.from != transition.to) { - auto amount_of_offset_change_at_prev_day = 86400 - (lut[i].date - lut[i - 1].date); - if (amount_of_offset_change_at_prev_day) - { - lut[i - 1].amount_of_offset_change = amount_of_offset_change_at_prev_day; + values.time_at_offset_change_value = (transition.from - cctz::civil_second(date)) / Values::OffsetChangeFactor; + values.amount_of_offset_change_value = (transition.to - transition.from) / Values::OffsetChangeFactor; - const auto utc_offset_at_beginning_of_day = cctz_time_zone.lookup(std::chrono::system_clock::from_time_t(lut[i - 1].date)).offset; +// std::cerr << time_zone << ", " << date << ": change from " << transition.from << " to " << transition.to << "\n"; +// std::cerr << time_zone << ", " << date << ": change at " << values.time_at_offset_change() << " with " << values.amount_of_offset_change() << "\n"; - /// Find a time (timestamp offset from beginning of day), - /// when UTC offset was changed. Search is performed with 15-minute granularity, assuming it is enough. + /// We don't support too large changes. + if (values.amount_of_offset_change_value > 24 * 4) + values.amount_of_offset_change_value = 24 * 4; + else if (values.amount_of_offset_change_value < -24 * 4) + values.amount_of_offset_change_value = -24 * 4; - time_t time_at_offset_change = 900; - while (time_at_offset_change < 86400) - { - auto utc_offset_at_current_time = cctz_time_zone.lookup(std::chrono::system_clock::from_time_t( - lut[i - 1].date + time_at_offset_change)).offset; - - if (utc_offset_at_current_time != utc_offset_at_beginning_of_day) - break; - - time_at_offset_change += 900; - } - - lut[i - 1].time_at_offset_change = time_at_offset_change; - - /// We doesn't support cases when time change results in switching to previous day. 
- if (static_cast(lut[i - 1].time_at_offset_change) + static_cast(lut[i - 1].amount_of_offset_change) < 0) - lut[i - 1].time_at_offset_change = -lut[i - 1].amount_of_offset_change; - } + /// We don't support cases when time change results in switching to previous day. + /// Shift the point of time change later. + if (values.time_at_offset_change_value + values.amount_of_offset_change_value < 0) + values.time_at_offset_change_value = -values.amount_of_offset_change_value; } /// Going to next day. ++date; ++i; } - while (start_of_day <= DATE_LUT_MAX && i <= DATE_LUT_MAX_DAY_NUM); + while (i < DATE_LUT_SIZE && lut[i - 1].year <= DATE_LUT_MAX_YEAR); /// Fill excessive part of lookup table. This is needed only to simplify handling of overflow cases. while (i < DATE_LUT_SIZE) diff --git a/base/common/DateLUTImpl.h b/base/common/DateLUTImpl.h index 0c7465ec7a5..9e60181e802 100644 --- a/base/common/DateLUTImpl.h +++ b/base/common/DateLUTImpl.h @@ -5,23 +5,32 @@ #include "types.h" #include +#include #include +#include -#define DATE_LUT_MAX (0xFFFFFFFFU - 86400) -#define DATE_LUT_MAX_DAY_NUM (0xFFFFFFFFU / 86400) -/// Table size is bigger than DATE_LUT_MAX_DAY_NUM to fill all indices within UInt16 range: this allows to remove extra check. -#define DATE_LUT_SIZE 0x10000 -#define DATE_LUT_MIN_YEAR 1970 -#define DATE_LUT_MAX_YEAR 2106 /// Last supported year (incomplete) +#define DATE_LUT_MIN_YEAR 1925 /// 1925, since the vast majority of timezones changed to 15-minute aligned offsets somewhere in 1924 or earlier. +#define DATE_LUT_MAX_YEAR 2283 /// Last supported year (complete) #define DATE_LUT_YEARS (1 + DATE_LUT_MAX_YEAR - DATE_LUT_MIN_YEAR) /// Number of years in lookup table +#define DATE_LUT_SIZE 0x20000 + +#define DATE_LUT_MAX (0xFFFFFFFFU - 86400) +#define DATE_LUT_MAX_DAY_NUM 0xFFFF + +/// A constant to add to time_t so every supported time point becomes non-negative and still has the same remainder of division by 3600. +/// If we treat "remainder of division" operation in the sense of modular arithmetic (not like in C++). +#define DATE_LUT_ADD ((1970 - DATE_LUT_MIN_YEAR) * 366 * 86400) + + #if defined(__PPC__) -#if !__clang__ +#if !defined(__clang__) #pragma GCC diagnostic ignored "-Wmaybe-uninitialized" #endif #endif + /// Flags for toYearWeek() function. enum class WeekModeFlag : UInt8 { @@ -37,7 +46,8 @@ using YearWeek = std::pair; */ class DateLUTImpl { -public: +private: + friend class DateLUT; explicit DateLUTImpl(const std::string & time_zone); DateLUTImpl(const DateLUTImpl &) = delete; @@ -45,14 +55,75 @@ public: DateLUTImpl(const DateLUTImpl &&) = delete; DateLUTImpl & operator=(const DateLUTImpl &&) = delete; + // Normalized and bound-checked index of element in lut, + // has to be a separate type to support overloading + // TODO: make sure that any arithmetic on LUTIndex actually results in valid LUTIndex.
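The comment above introduces `LUTIndex`, and the `STRONG_TYPEDEF(UInt32, LUTIndex)` plus its masked operators follow below. As a hedged standalone illustration (a simplified stand-in, not the real STRONG_TYPEDEF macro), this is the effect of AND-ing every arithmetic result with the all-ones mask:

```cpp
#include <cstdint>
#include <iostream>

// Simplified stand-in for STRONG_TYPEDEF(UInt32, LUTIndex): a distinct type whose
// arithmetic is always wrapped by the table mask, so an out-of-range offset can
// never index outside lut[].
struct LutIndex
{
    static constexpr uint32_t date_lut_mask = 0x1ffff; // DATE_LUT_SIZE - 1, all ones
    uint32_t value = 0;
};

inline LutIndex operator+(LutIndex i, int64_t v)
{
    return {static_cast<uint32_t>(i.value + v) & LutIndex::date_lut_mask};
}

int main()
{
    LutIndex last{0x1ffff};             // last valid slot
    LutIndex wrapped = last + 1;        // would overflow a plain index
    std::cout << wrapped.value << '\n'; // 0: wraps around instead of escaping the table
}
```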
+ STRONG_TYPEDEF(UInt32, LUTIndex) + + template + friend inline LUTIndex operator+(const LUTIndex & index, const T v) + { + return LUTIndex{(index.toUnderType() + UInt32(v)) & date_lut_mask}; + } + + template + friend inline LUTIndex operator+(const T v, const LUTIndex & index) + { + return LUTIndex{(v + index.toUnderType()) & date_lut_mask}; + } + + friend inline LUTIndex operator+(const LUTIndex & index, const LUTIndex & v) + { + return LUTIndex{(index.toUnderType() + v.toUnderType()) & date_lut_mask}; + } + + template + friend inline LUTIndex operator-(const LUTIndex & index, const T v) + { + return LUTIndex{(index.toUnderType() - UInt32(v)) & date_lut_mask}; + } + + template + friend inline LUTIndex operator-(const T v, const LUTIndex & index) + { + return LUTIndex{(v - index.toUnderType()) & date_lut_mask}; + } + + friend inline LUTIndex operator-(const LUTIndex & index, const LUTIndex & v) + { + return LUTIndex{(index.toUnderType() - v.toUnderType()) & date_lut_mask}; + } + + template + friend inline LUTIndex operator*(const LUTIndex & index, const T v) + { + return LUTIndex{(index.toUnderType() * UInt32(v)) & date_lut_mask}; + } + + template + friend inline LUTIndex operator*(const T v, const LUTIndex & index) + { + return LUTIndex{(v * index.toUnderType()) & date_lut_mask}; + } + + template + friend inline LUTIndex operator/(const LUTIndex & index, const T v) + { + return LUTIndex{(index.toUnderType() / UInt32(v)) & date_lut_mask}; + } + + template + friend inline LUTIndex operator/(const T v, const LUTIndex & index) + { + return LUTIndex{(UInt32(v) / index.toUnderType()) & date_lut_mask}; + } + public: /// The order of fields matters for alignment and sizeof. struct Values { - /// Least significat 32 bits from time_t at beginning of the day. - /// If the unix timestamp of beginning of the day is negative (example: 1970-01-01 MSK, where time_t == -10800), then value will overflow. - /// Change to time_t; change constants above; and recompile the sources if you need to support time after 2105 year. - UInt32 date; + /// time_t at beginning of the day. + Int64 date; /// Properties of the day. UInt16 year; @@ -65,107 +136,189 @@ public: UInt8 days_in_month; /// For days, when offset from UTC was changed due to daylight saving time or permanent change, following values could be non zero. - Int16 amount_of_offset_change; /// Usually -3600 or 3600, but look at Lord Howe Island. - UInt32 time_at_offset_change; /// In seconds from beginning of the day. + /// All in OffsetChangeFactor (15 minute) intervals. + Int8 amount_of_offset_change_value; /// Usually -4 or 4, but look at Lord Howe Island. Multiply by OffsetChangeFactor + UInt8 time_at_offset_change_value; /// In seconds from beginning of the day. Multiply by OffsetChangeFactor + + inline Int32 amount_of_offset_change() const + { + return static_cast(amount_of_offset_change_value) * OffsetChangeFactor; + } + + inline UInt32 time_at_offset_change() const + { + return static_cast(time_at_offset_change_value) * OffsetChangeFactor; + } + + /// Since most of the modern timezones have a DST change aligned to 15 minutes, to save as much space as possible inside Value, + /// we are dividing any offset change related value by this factor before setting it to Value, + /// hence it has to be explicitly multiplied back by this factor before being used. + static constexpr UInt16 OffsetChangeFactor = 900; }; static_assert(sizeof(Values) == 16); private: - /// Lookup table is indexed by DayNum. 
+ + /// Mask is all-ones to allow efficient protection against overflow. + static constexpr UInt32 date_lut_mask = 0x1ffff; + static_assert(date_lut_mask == DATE_LUT_SIZE - 1); + + /// Offset to epoch in days (ExtendedDayNum) of the first day in LUT. + /// "epoch" is the Unix Epoch (starts at unix timestamp zero) + static constexpr UInt32 daynum_offset_epoch = 16436; + static_assert(daynum_offset_epoch == (1970 - DATE_LUT_MIN_YEAR) * 365 + (1970 - DATE_LUT_MIN_YEAR / 4 * 4) / 4); + + /// Lookup table is indexed by LUTIndex. /// Day nums are the same in all time zones. 1970-01-01 is 0 and so on. /// Table is relatively large, so better not to place the object on stack. /// In comparison to std::vector, plain array is cheaper by one indirection. - Values lut[DATE_LUT_SIZE]; + Values lut[DATE_LUT_SIZE + 1]; - /// Year number after DATE_LUT_MIN_YEAR -> day num for start of year. - DayNum years_lut[DATE_LUT_YEARS]; + /// Year number after DATE_LUT_MIN_YEAR -> LUTIndex in lut for start of year. + LUTIndex years_lut[DATE_LUT_YEARS]; /// Year number after DATE_LUT_MIN_YEAR * month number starting at zero -> day num for first day of month - DayNum years_months_lut[DATE_LUT_YEARS * 12]; + LUTIndex years_months_lut[DATE_LUT_YEARS * 12]; /// UTC offset at beginning of the Unix epoch. The same as unix timestamp of 1970-01-01 00:00:00 local time. time_t offset_at_start_of_epoch; - bool offset_is_whole_number_of_hours_everytime; + /// UTC offset at the beginning of the first supported year. + time_t offset_at_start_of_lut; + bool offset_is_whole_number_of_hours_during_epoch; /// Time zone name. std::string time_zone; - - /// We can correctly process only timestamps that less DATE_LUT_MAX (i.e. up to 2105 year inclusively) - /// We don't care about overflow. - inline DayNum findIndex(time_t t) const + inline LUTIndex findIndex(time_t t) const { /// First guess. - DayNum guess(t / 86400); + Int64 guess = (t / 86400) + daynum_offset_epoch; + + /// For negative time_t the integer division was rounded up, so the guess is offset by one. + if (unlikely(t < 0)) + --guess; + + if (guess < 0) + return LUTIndex(0); + if (guess >= DATE_LUT_SIZE) + return LUTIndex(DATE_LUT_SIZE - 1); /// UTC offset is from -12 to +14 in all known time zones. This requires checking only three indices. - if ((guess == 0 || t >= lut[guess].date) && t < lut[DayNum(guess + 1)].date) - return guess; + if (t >= lut[guess].date) + { + if (guess + 1 >= DATE_LUT_SIZE || t < lut[guess + 1].date) + return LUTIndex(guess); - /// Time zones that have offset 0 from UTC do daylight saving time change (if any) towards increasing UTC offset (example: British Standard Time). - if (t >= lut[DayNum(guess + 1)].date) - return DayNum(guess + 1); + return LUTIndex(guess + 1); + } - return DayNum(guess - 1); + return LUTIndex(guess ? 
guess - 1 : 0); } - inline const Values & find(time_t t) const + inline LUTIndex toLUTIndex(DayNum d) const { - return lut[findIndex(t)]; + return LUTIndex{(d + daynum_offset_epoch) & date_lut_mask}; + } + + inline LUTIndex toLUTIndex(ExtendedDayNum d) const + { + return LUTIndex{static_cast(d + daynum_offset_epoch) & date_lut_mask}; + } + + inline LUTIndex toLUTIndex(time_t t) const + { + return findIndex(t); + } + + inline LUTIndex toLUTIndex(LUTIndex i) const + { + return i; + } + + template + inline const Values & find(DateOrTime v) const + { + return lut[toLUTIndex(v)]; + } + + template + static inline T roundDown(T x, Divisor divisor) + { + static_assert(std::is_integral_v && std::is_integral_v); + assert(divisor > 0); + + if (likely(x >= 0)) + return x / divisor * divisor; + + /// Integer division for negative numbers rounds them towards zero (up). + /// We will shift the number so it will be rounded towards -inf (down). + + return (x + 1 - divisor) / divisor * divisor; } public: const std::string & getTimeZone() const { return time_zone; } + // Methods only for unit-testing, it makes very little sense to use it from user code. + auto getOffsetAtStartOfEpoch() const { return offset_at_start_of_epoch; } + auto getTimeOffsetAtStartOfLUT() const { return offset_at_start_of_lut; } + /// All functions below are thread-safe; arguments are not checked. - inline time_t toDate(time_t t) const { return find(t).date; } - inline unsigned toMonth(time_t t) const { return find(t).month; } - inline unsigned toQuarter(time_t t) const { return (find(t).month - 1) / 3 + 1; } - inline unsigned toYear(time_t t) const { return find(t).year; } - inline unsigned toDayOfWeek(time_t t) const { return find(t).day_of_week; } - inline unsigned toDayOfMonth(time_t t) const { return find(t).day_of_month; } + inline ExtendedDayNum toDayNum(ExtendedDayNum d) const + { + return d; + } + + template + inline ExtendedDayNum toDayNum(DateOrTime v) const + { + return ExtendedDayNum{static_cast(toLUTIndex(v).toUnderType() - daynum_offset_epoch)}; + } /// Round down to start of monday. - inline time_t toFirstDayOfWeek(time_t t) const + template + inline time_t toFirstDayOfWeek(DateOrTime v) const { - DayNum index = findIndex(t); - return lut[DayNum(index - (lut[index].day_of_week - 1))].date; + const LUTIndex i = toLUTIndex(v); + return lut[i - (lut[i].day_of_week - 1)].date; } - inline DayNum toFirstDayNumOfWeek(DayNum d) const + template + inline ExtendedDayNum toFirstDayNumOfWeek(DateOrTime v) const { - return DayNum(d - (lut[d].day_of_week - 1)); - } - - inline DayNum toFirstDayNumOfWeek(time_t t) const - { - return toFirstDayNumOfWeek(toDayNum(t)); + const LUTIndex i = toLUTIndex(v); + return toDayNum(i - (lut[i].day_of_week - 1)); } /// Round down to start of month. - inline time_t toFirstDayOfMonth(time_t t) const + template + inline time_t toFirstDayOfMonth(DateOrTime v) const { - DayNum index = findIndex(t); - return lut[index - (lut[index].day_of_month - 1)].date; + const LUTIndex i = toLUTIndex(v); + return lut[i - (lut[i].day_of_month - 1)].date; } - inline DayNum toFirstDayNumOfMonth(DayNum d) const + template + inline ExtendedDayNum toFirstDayNumOfMonth(DateOrTime v) const { - return DayNum(d - (lut[d].day_of_month - 1)); - } - - inline DayNum toFirstDayNumOfMonth(time_t t) const - { - return toFirstDayNumOfMonth(toDayNum(t)); + const LUTIndex i = toLUTIndex(v); + return toDayNum(i - (lut[i].day_of_month - 1)); } /// Round down to start of quarter. 
- inline DayNum toFirstDayNumOfQuarter(DayNum d) const + template + inline ExtendedDayNum toFirstDayNumOfQuarter(DateOrTime v) const { - DayNum index = d; + return toDayNum(toFirstDayOfQuarterIndex(v)); + } + + template + inline LUTIndex toFirstDayOfQuarterIndex(DateOrTime v) const + { + LUTIndex index = toLUTIndex(v); size_t month_inside_quarter = (lut[index].month - 1) % 3; index -= lut[index].day_of_month; @@ -175,17 +328,13 @@ public: --month_inside_quarter; } - return DayNum(index + 1); + return index + 1; } - inline DayNum toFirstDayNumOfQuarter(time_t t) const + template + inline time_t toFirstDayOfQuarter(DateOrTime v) const { - return toFirstDayNumOfQuarter(toDayNum(t)); - } - - inline time_t toFirstDayOfQuarter(time_t t) const - { - return fromDayNum(toFirstDayNumOfQuarter(t)); + return toDate(toFirstDayOfQuarterIndex(v)); } /// Round down to start of year. @@ -194,48 +343,47 @@ public: return lut[years_lut[lut[findIndex(t)].year - DATE_LUT_MIN_YEAR]].date; } - inline DayNum toFirstDayNumOfYear(DayNum d) const + template + inline LUTIndex toFirstDayNumOfYearIndex(DateOrTime v) const { - return years_lut[lut[d].year - DATE_LUT_MIN_YEAR]; + return years_lut[lut[toLUTIndex(v)].year - DATE_LUT_MIN_YEAR]; } - inline DayNum toFirstDayNumOfYear(time_t t) const + template + inline ExtendedDayNum toFirstDayNumOfYear(DateOrTime v) const { - return toFirstDayNumOfYear(toDayNum(t)); + return toDayNum(toFirstDayNumOfYearIndex(v)); } inline time_t toFirstDayOfNextMonth(time_t t) const { - DayNum index = findIndex(t); + LUTIndex index = findIndex(t); index += 32 - lut[index].day_of_month; return lut[index - (lut[index].day_of_month - 1)].date; } inline time_t toFirstDayOfPrevMonth(time_t t) const { - DayNum index = findIndex(t); + LUTIndex index = findIndex(t); index -= lut[index].day_of_month; return lut[index - (lut[index].day_of_month - 1)].date; } - inline UInt8 daysInMonth(DayNum d) const + template + inline UInt8 daysInMonth(DateOrTime value) const { - return lut[d].days_in_month; + const LUTIndex i = toLUTIndex(value); + return lut[i].days_in_month; } - inline UInt8 daysInMonth(time_t t) const - { - return find(t).days_in_month; - } - - inline UInt8 daysInMonth(UInt16 year, UInt8 month) const + inline UInt8 daysInMonth(Int16 year, UInt8 month) const { UInt16 idx = year - DATE_LUT_MIN_YEAR; if (unlikely(idx >= DATE_LUT_YEARS)) return 31; /// Implementation specific behaviour on overflow. /// 32 makes arithmetic more simple. - DayNum any_day_of_month = DayNum(years_lut[idx] + 32 * (month - 1)); + const auto any_day_of_month = years_lut[year - DATE_LUT_MIN_YEAR] + 32 * (month - 1); return lut[any_day_of_month].days_in_month; } @@ -243,107 +391,111 @@ public: */ inline time_t toDateAndShift(time_t t, Int32 days) const { - return lut[DayNum(findIndex(t) + days)].date; + return lut[findIndex(t) + days].date; } inline time_t toTime(time_t t) const { - DayNum index = findIndex(t); - - if (unlikely(index == 0 || index > DATE_LUT_MAX_DAY_NUM)) - return t + offset_at_start_of_epoch; + const LUTIndex index = findIndex(t); time_t res = t - lut[index].date; - if (res >= lut[index].time_at_offset_change) - res += lut[index].amount_of_offset_change; + if (res >= lut[index].time_at_offset_change()) + res += lut[index].amount_of_offset_change(); return res - offset_at_start_of_epoch; /// Starting at 1970-01-01 00:00:00 local time. 
} inline unsigned toHour(time_t t) const { - DayNum index = findIndex(t); - - /// If it is overflow case, - /// then limit number of hours to avoid insane results like 1970-01-01 89:28:15 - if (unlikely(index == 0 || index > DATE_LUT_MAX_DAY_NUM)) - return static_cast((t + offset_at_start_of_epoch) / 3600) % 24; + const LUTIndex index = findIndex(t); time_t time = t - lut[index].date; - if (time >= lut[index].time_at_offset_change) - time += lut[index].amount_of_offset_change; + if (time >= lut[index].time_at_offset_change()) + time += lut[index].amount_of_offset_change(); unsigned res = time / 3600; - return res <= 23 ? res : 0; + + /// In case time was changed backwards at the start of the next day, we will repeat the hour 23. + return res <= 23 ? res : 23; } /** Calculating offset from UTC in seconds. - * which means Using the same literal time of "t" to get the corresponding timestamp in UTC, - * then subtract the former from the latter to get the offset result. - * The boundaries when meets DST(daylight saving time) change should be handled very carefully. - */ + * which means using the same literal time of "t" to get the corresponding timestamp in UTC, + * then subtracting the former from the latter to get the offset result. + * The boundaries where a DST (daylight saving time) change occurs should be handled very carefully. + */ inline time_t timezoneOffset(time_t t) const { - DayNum index = findIndex(t); + const LUTIndex index = findIndex(t); /// Calculate the daylight saving offset first. /// Because the "amount_of_offset_change" in a LUT entry only exists on the change day, it's costly to scan it from the very beginning, /// but we can figure out all the accumulated offsets from 1970-01-01 to that day just by getting the whole difference between lut[].date values, /// and then we can directly subtract multiples of 86400 to get the real DST offsets; leap seconds are not considered now. - time_t res = (lut[index].date - lut[0].date) % 86400; + time_t res = (lut[index].date - lut[daynum_offset_epoch].date) % 86400; + /// As far as we know, the maximal DST offset can't be more than 2 hours, so after the modulo operation the remainder /// will sit between [-offset --> 0 --> offset], which respectively corresponds to moving the clock forward or backward. res = res > 43200 ? (86400 - res) : (0 - res); /// Check if there is an offset change during this day. Add the change when crossing the line. - if (lut[index].amount_of_offset_change != 0 && t >= lut[index].date + lut[index].time_at_offset_change) - res += lut[index].amount_of_offset_change; + if (lut[index].amount_of_offset_change() != 0 && t >= lut[index].date + lut[index].time_at_offset_change()) + res += lut[index].amount_of_offset_change(); return res + offset_at_start_of_epoch; } - /** Only for time zones with/when offset from UTC is multiple of five minutes. - * This is true for all time zones: right now, all time zones have an offset that is multiple of 15 minutes. - * - * "By 1929, most major countries had adopted hourly time zones. Nepal was the last - * country to adopt a standard offset, shifting slightly to UTC+5:45 in 1986." - * - https://en.wikipedia.org/wiki/Time_zone#Offsets_from_UTC - * - * Also please note, that unix timestamp doesn't count "leap seconds": - * each minute, with added or subtracted leap second, spans exactly 60 unix timestamps.
-      */
-    inline unsigned toSecond(time_t t) const { return UInt32(t) % 60; }
+    inline unsigned toSecond(time_t t) const
+    {
+        auto res = t % 60;
+        if (likely(res >= 0))
+            return res;
+        return res + 60;
+    }
 
     inline unsigned toMinute(time_t t) const
     {
-        if (offset_is_whole_number_of_hours_everytime)
-            return (UInt32(t) / 60) % 60;
+        if (t >= 0 && offset_is_whole_number_of_hours_during_epoch)
+            return (t / 60) % 60;
 
-        /// To consider the DST changing situation within this day.
-        /// also make the special timezones with no whole hour offset such as 'Australia/Lord_Howe' been taken into account
-        DayNum index = findIndex(t);
-        UInt32 res = t - lut[index].date;
-        if (lut[index].amount_of_offset_change != 0 && t >= lut[index].date + lut[index].time_at_offset_change)
-            res += lut[index].amount_of_offset_change;
+        /// Take the DST change within this day into account,
+        /// as well as special time zones with a non-whole-hour offset such as 'Australia/Lord_Howe'.
 
-        return res / 60 % 60;
+        LUTIndex index = findIndex(t);
+        UInt32 time = t - lut[index].date;
+
+        if (time >= lut[index].time_at_offset_change())
+            time += lut[index].amount_of_offset_change();
+
+        return time / 60 % 60;
     }
 
-    inline time_t toStartOfMinute(time_t t) const { return t / 60 * 60; }
-    inline time_t toStartOfFiveMinute(time_t t) const { return t / 300 * 300; }
-    inline time_t toStartOfFifteenMinutes(time_t t) const { return t / 900 * 900; }
-    inline time_t toStartOfTenMinutes(time_t t) const { return t / 600 * 600; }
+    /// NOTE: Assuming timezone offset is a multiple of 15 minutes.
+    inline time_t toStartOfMinute(time_t t) const { return roundDown(t, 60); }
+    inline time_t toStartOfFiveMinute(time_t t) const { return roundDown(t, 300); }
+    inline time_t toStartOfFifteenMinutes(time_t t) const { return roundDown(t, 900); }
+
+    inline time_t toStartOfTenMinutes(time_t t) const
+    {
+        if (t >= 0 && offset_is_whole_number_of_hours_during_epoch)
+            return t / 600 * 600;
+
+        /// More complex logic is for Nepal - it has offset 05:45. Australia/Eucla is also unfortunate.
+        Int64 date = find(t).date;
+        return date + (t - date) / 600 * 600;
+    }
+
+    /// NOTE: Assuming timezone transitions are multiple of hours. Lord Howe Island in Australia is a notable exception.
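+    /// For example, for Asia/Kathmandu (UTC+05:45), rounding epoch time with t / 600 * 600 would put
+    /// ten-minute boundaries at hh:05, hh:15, ... local time; toStartOfTenMinutes above therefore
+    /// rounds relative to the start of the local day for such zones.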
     inline time_t toStartOfHour(time_t t) const
     {
-        if (offset_is_whole_number_of_hours_everytime)
+        if (t >= 0 && offset_is_whole_number_of_hours_during_epoch)
             return t / 3600 * 3600;
 
-        UInt32 date = find(t).date;
-        return date + (UInt32(t) - date) / 3600 * 3600;
+        Int64 date = find(t).date;
+        return date + (t - date) / 3600 * 3600;
     }
 
     /** Number of calendar day since the beginning of UNIX epoch (1970-01-01 is zero)
@@ -354,80 +506,89 @@ public:
      * because the same calendar day starts/ends at different timestamps in different time zones)
      */
 
-    inline DayNum toDayNum(time_t t) const { return findIndex(t); }
-    inline time_t fromDayNum(DayNum d) const { return lut[d].date; }
+    inline time_t fromDayNum(DayNum d) const { return lut[toLUTIndex(d)].date; }
+    inline time_t fromDayNum(ExtendedDayNum d) const { return lut[toLUTIndex(d)].date; }
 
-    inline time_t toDate(DayNum d) const { return lut[d].date; }
-    inline unsigned toMonth(DayNum d) const { return lut[d].month; }
-    inline unsigned toQuarter(DayNum d) const { return (lut[d].month - 1) / 3 + 1; }
-    inline unsigned toYear(DayNum d) const { return lut[d].year; }
-    inline unsigned toDayOfWeek(DayNum d) const { return lut[d].day_of_week; }
-    inline unsigned toDayOfMonth(DayNum d) const { return lut[d].day_of_month; }
-    inline unsigned toDayOfYear(DayNum d) const { return d + 1 - toFirstDayNumOfYear(d); }
+    template <typename DateOrTime>
+    inline time_t toDate(DateOrTime v) const { return lut[toLUTIndex(v)].date; }
 
-    inline unsigned toDayOfYear(time_t t) const { return toDayOfYear(toDayNum(t)); }
+    template <typename DateOrTime>
+    inline unsigned toMonth(DateOrTime v) const { return lut[toLUTIndex(v)].month; }
+
+    template <typename DateOrTime>
+    inline unsigned toQuarter(DateOrTime v) const { return (lut[toLUTIndex(v)].month - 1) / 3 + 1; }
+
+    template <typename DateOrTime>
+    inline Int16 toYear(DateOrTime v) const { return lut[toLUTIndex(v)].year; }
+
+    template <typename DateOrTime>
+    inline unsigned toDayOfWeek(DateOrTime v) const { return lut[toLUTIndex(v)].day_of_week; }
+
+    template <typename DateOrTime>
+    inline unsigned toDayOfMonth(DateOrTime v) const { return lut[toLUTIndex(v)].day_of_month; }
+
+    template <typename DateOrTime>
+    inline unsigned toDayOfYear(DateOrTime v) const
+    {
+        // TODO: different overload for ExtendedDayNum
+        const LUTIndex i = toLUTIndex(v);
+        return i + 1 - toFirstDayNumOfYearIndex(i);
+    }
 
     /// Number of week from some fixed moment in the past. Week begins at monday.
     /// (round down to monday and divide DayNum by 7; we made an assumption,
     ///  that in domain of the function there was no weeks with any other number of days than 7)
-    inline unsigned toRelativeWeekNum(DayNum d) const
+    template <typename DateOrTime>
+    inline unsigned toRelativeWeekNum(DateOrTime v) const
     {
+        const LUTIndex i = toLUTIndex(v);
         /// We add 8 to avoid underflow at beginning of unix epoch.
-        return (d + 8 - toDayOfWeek(d)) / 7;
-    }
-
-    inline unsigned toRelativeWeekNum(time_t t) const
-    {
-        return toRelativeWeekNum(toDayNum(t));
+        return toDayNum(i + 8 - toDayOfWeek(i)) / 7;
     }
 
     /// Get year that contains most of the current week. Week begins at monday.
-    inline unsigned toISOYear(DayNum d) const
+    template <typename DateOrTime>
+    inline unsigned toISOYear(DateOrTime v) const
     {
+        const LUTIndex i = toLUTIndex(v);
        /// That's effectively the year of thursday of current week.
-        return toYear(DayNum(d + 4 - toDayOfWeek(d)));
-    }
-
-    inline unsigned toISOYear(time_t t) const
-    {
-        return toISOYear(toDayNum(t));
+        return toYear(toLUTIndex(i + 4 - toDayOfWeek(i)));
     }
 
     /// ISO year begins with a monday of the week that is contained more than by half in the corresponding calendar year.
     /// Example: ISO year 2019 begins at 2018-12-31. And ISO year 2017 begins at 2017-01-02.
     /// https://en.wikipedia.org/wiki/ISO_week_date
-    inline DayNum toFirstDayNumOfISOYear(DayNum d) const
+    template <typename DateOrTime>
+    inline LUTIndex toFirstDayNumOfISOYearIndex(DateOrTime v) const
     {
-        auto iso_year = toISOYear(d);
+        const LUTIndex i = toLUTIndex(v);
+        auto iso_year = toISOYear(i);
 
-        DayNum first_day_of_year = years_lut[iso_year - DATE_LUT_MIN_YEAR];
+        const auto first_day_of_year = years_lut[iso_year - DATE_LUT_MIN_YEAR];
         auto first_day_of_week_of_year = lut[first_day_of_year].day_of_week;
 
-        return DayNum(first_day_of_week_of_year <= 4
+        return LUTIndex{first_day_of_week_of_year <= 4
             ? first_day_of_year + 1 - first_day_of_week_of_year
-            : first_day_of_year + 8 - first_day_of_week_of_year);
+            : first_day_of_year + 8 - first_day_of_week_of_year};
     }
 
-    inline DayNum toFirstDayNumOfISOYear(time_t t) const
+    template <typename DateOrTime>
+    inline ExtendedDayNum toFirstDayNumOfISOYear(DateOrTime v) const
     {
-        return toFirstDayNumOfISOYear(toDayNum(t));
+        return toDayNum(toFirstDayNumOfISOYearIndex(v));
     }
 
     inline time_t toFirstDayOfISOYear(time_t t) const
     {
-        return fromDayNum(toFirstDayNumOfISOYear(t));
+        return lut[toFirstDayNumOfISOYearIndex(t)].date;
     }
 
     /// ISO 8601 week number. Week begins at monday.
     /// The week number 1 is the first week in year that contains 4 or more days (that's more than half).
-    inline unsigned toISOWeek(DayNum d) const
+    template <typename DateOrTime>
+    inline unsigned toISOWeek(DateOrTime v) const
     {
-        return 1 + DayNum(toFirstDayNumOfWeek(d) - toFirstDayNumOfISOYear(d)) / 7;
-    }
-
-    inline unsigned toISOWeek(time_t t) const
-    {
-        return toISOWeek(toDayNum(t));
+        return 1 + (toFirstDayNumOfWeek(v) - toFirstDayNumOfISOYear(v)) / 7;
     }
 
     /*
@@ -463,30 +624,33 @@ public:
         Otherwise it is the last week of the previous year, and the next week is week 1.
     */
-    inline YearWeek toYearWeek(DayNum d, UInt8 week_mode) const
+    template <typename DateOrTime>
+    inline YearWeek toYearWeek(DateOrTime v, UInt8 week_mode) const
     {
-        bool newyear_day_mode = week_mode & static_cast<UInt8>(WeekModeFlag::NEWYEAR_DAY);
+        const bool newyear_day_mode = week_mode & static_cast<UInt8>(WeekModeFlag::NEWYEAR_DAY);
         week_mode = check_week_mode(week_mode);
-        bool monday_first_mode = week_mode & static_cast<UInt8>(WeekModeFlag::MONDAY_FIRST);
+        const bool monday_first_mode = week_mode & static_cast<UInt8>(WeekModeFlag::MONDAY_FIRST);
         bool week_year_mode = week_mode & static_cast<UInt8>(WeekModeFlag::YEAR);
-        bool first_weekday_mode = week_mode & static_cast<UInt8>(WeekModeFlag::FIRST_WEEKDAY);
+        const bool first_weekday_mode = week_mode & static_cast<UInt8>(WeekModeFlag::FIRST_WEEKDAY);
+
+        const LUTIndex i = toLUTIndex(v);
 
         // Calculate week number of WeekModeFlag::NEWYEAR_DAY mode
         if (newyear_day_mode)
         {
-            return toYearWeekOfNewyearMode(d, monday_first_mode);
+            return toYearWeekOfNewyearMode(i, monday_first_mode);
         }
 
-        YearWeek yw(toYear(d), 0);
+        YearWeek yw(toYear(i), 0);
         UInt16 days = 0;
-        UInt16 daynr = makeDayNum(yw.first, toMonth(d), toDayOfMonth(d));
-        UInt16 first_daynr = makeDayNum(yw.first, 1, 1);
+        const auto daynr = makeDayNum(yw.first, toMonth(i), toDayOfMonth(i));
+        auto first_daynr = makeDayNum(yw.first, 1, 1);
 
         // 0 for monday, 1 for tuesday ...
         // get weekday from first day in year.
-        UInt16 weekday = calc_weekday(DayNum(first_daynr), !monday_first_mode);
+        UInt16 weekday = calc_weekday(first_daynr, !monday_first_mode);
 
-        if (toMonth(d) == 1 && toDayOfMonth(d) <= static_cast<UInt32>(7 - weekday))
+        if (toMonth(i) == 1 && toDayOfMonth(i) <= static_cast<UInt32>(7 - weekday))
         {
             if (!week_year_mode && ((first_weekday_mode && weekday != 0) || (!first_weekday_mode && weekday >= 4)))
                 return yw;
@@ -517,48 +681,51 @@ public:
 
     /// Calculate week number of WeekModeFlag::NEWYEAR_DAY mode
     /// The week number 1 is the first week in year that contains January 1,
-    inline YearWeek toYearWeekOfNewyearMode(DayNum d, bool monday_first_mode) const
+    template <typename DateOrTime>
+    inline YearWeek toYearWeekOfNewyearMode(DateOrTime v, bool monday_first_mode) const
     {
         YearWeek yw(0, 0);
         UInt16 offset_day = monday_first_mode ? 0U : 1U;
 
+        const LUTIndex i = LUTIndex(v);
+
         // Checking the week across the year
-        yw.first = toYear(DayNum(d + 7 - toDayOfWeek(DayNum(d + offset_day))));
+        yw.first = toYear(i + 7 - toDayOfWeek(i + offset_day));
 
-        DayNum first_day = makeDayNum(yw.first, 1, 1);
-        DayNum this_day = d;
+        auto first_day = makeLUTIndex(yw.first, 1, 1);
+        auto this_day = i;
 
+        // TODO: do not perform calculations in terms of DayNum, since that would under/overflow for extended range.
         if (monday_first_mode)
         {
             // Rounds down a date to the nearest Monday.
             first_day = toFirstDayNumOfWeek(first_day);
-            this_day = toFirstDayNumOfWeek(d);
+            this_day = toFirstDayNumOfWeek(i);
         }
         else
         {
             // Rounds down a date to the nearest Sunday.
             if (toDayOfWeek(first_day) != 7)
-                first_day = DayNum(first_day - toDayOfWeek(first_day));
-            if (toDayOfWeek(d) != 7)
-                this_day = DayNum(d - toDayOfWeek(d));
+                first_day = ExtendedDayNum(first_day - toDayOfWeek(first_day));
+            if (toDayOfWeek(i) != 7)
+                this_day = ExtendedDayNum(i - toDayOfWeek(i));
         }
 
         yw.second = (this_day - first_day) / 7 + 1;
         return yw;
     }
 
-    /**
-     * get first day of week with week_mode, return Sunday or Monday
-     */
-    inline DayNum toFirstDayNumOfWeek(DayNum d, UInt8 week_mode) const
+    /// Get first day of week with week_mode, return Sunday or Monday
+    template <typename DateOrTime>
+    inline ExtendedDayNum toFirstDayNumOfWeek(DateOrTime v, UInt8 week_mode) const
     {
         bool monday_first_mode = week_mode & static_cast<UInt8>(WeekModeFlag::MONDAY_FIRST);
         if (monday_first_mode)
         {
-            return toFirstDayNumOfWeek(d);
+            return toFirstDayNumOfWeek(v);
         }
         else
         {
-            return (toDayOfWeek(d) != 7) ? DayNum(d - toDayOfWeek(d)) : d;
+            return (toDayOfWeek(v) != 7) ? ExtendedDayNum(v - toDayOfWeek(v)) : toDayNum(v);
         }
     }
 
@@ -574,192 +741,231 @@ public:
     /** Calculate weekday from d.
       * Returns 0 for monday, 1 for tuesday...
       */
-    inline unsigned calc_weekday(DayNum d, bool sunday_first_day_of_week) const
+    template <typename DateOrTime>
+    inline unsigned calc_weekday(DateOrTime v, bool sunday_first_day_of_week) const
     {
+        const LUTIndex i = toLUTIndex(v);
         if (!sunday_first_day_of_week)
-            return toDayOfWeek(d) - 1;
+            return toDayOfWeek(i) - 1;
         else
-            return toDayOfWeek(DayNum(d + 1)) - 1;
+            return toDayOfWeek(i + 1) - 1;
     }
 
     /// Calculate days in one year.
-    inline unsigned calc_days_in_year(UInt16 year) const
+    inline unsigned calc_days_in_year(Int32 year) const
     {
         return ((year & 3) == 0 && (year % 100 || (year % 400 == 0 && year)) ? 366 : 365);
     }
 
     /// Number of month from some fixed moment in the past (year * 12 + month)
-    inline unsigned toRelativeMonthNum(DayNum d) const
+    template <typename DateOrTime>
+    inline unsigned toRelativeMonthNum(DateOrTime v) const
     {
-        return lut[d].year * 12 + lut[d].month;
+        const LUTIndex i = toLUTIndex(v);
+        return lut[i].year * 12 + lut[i].month;
     }
 
-    inline unsigned toRelativeMonthNum(time_t t) const
+    template <typename DateOrTime>
+    inline unsigned toRelativeQuarterNum(DateOrTime v) const
     {
-        return toRelativeMonthNum(toDayNum(t));
-    }
-
-    inline unsigned toRelativeQuarterNum(DayNum d) const
-    {
-        return lut[d].year * 4 + (lut[d].month - 1) / 3;
-    }
-
-    inline unsigned toRelativeQuarterNum(time_t t) const
-    {
-        return toRelativeQuarterNum(toDayNum(t));
+        const LUTIndex i = toLUTIndex(v);
+        return lut[i].year * 4 + (lut[i].month - 1) / 3;
     }
 
     /// We count all hour-length intervals, unrelated to offset changes.
     inline time_t toRelativeHourNum(time_t t) const
     {
-        if (offset_is_whole_number_of_hours_everytime)
+        if (t >= 0 && offset_is_whole_number_of_hours_during_epoch)
             return t / 3600;
 
         /// Assume that if offset was fractional, then the fraction is the same as at the beginning of epoch.
         /// NOTE This assumption is false for "Pacific/Pitcairn" and "Pacific/Kiritimati" time zones.
-        return (t + 86400 - offset_at_start_of_epoch) / 3600;
+        return (t + DATE_LUT_ADD + 86400 - offset_at_start_of_epoch) / 3600 - (DATE_LUT_ADD / 3600);
     }
 
-    inline time_t toRelativeHourNum(DayNum d) const
+    template <typename DateOrTime>
+    inline time_t toRelativeHourNum(DateOrTime v) const
     {
-        return toRelativeHourNum(lut[d].date);
+        return toRelativeHourNum(lut[toLUTIndex(v)].date);
     }
 
     inline time_t toRelativeMinuteNum(time_t t) const
     {
-        return t / 60;
+        return (t + DATE_LUT_ADD) / 60 - (DATE_LUT_ADD / 60);
     }
 
-    inline time_t toRelativeMinuteNum(DayNum d) const
+    template <typename DateOrTime>
+    inline time_t toRelativeMinuteNum(DateOrTime v) const
    {
-        return toRelativeMinuteNum(lut[d].date);
+        return toRelativeMinuteNum(lut[toLUTIndex(v)].date);
    }
 
-    inline DayNum toStartOfYearInterval(DayNum d, UInt64 years) const
+    template <typename DateOrTime>
+    inline ExtendedDayNum toStartOfYearInterval(DateOrTime v, UInt64 years) const
     {
         if (years == 1)
-            return toFirstDayNumOfYear(d);
-        return years_lut[(lut[d].year - DATE_LUT_MIN_YEAR) / years * years];
+            return toFirstDayNumOfYear(v);
+
+        const LUTIndex i = toLUTIndex(v);
+
+        UInt16 year = lut[i].year / years * years;
+
+        /// For example, rounding down 1925 to 100 years will be 1900, but it's less than min supported year.
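+        /// The year is therefore clamped to DATE_LUT_MIN_YEAR below, so the interval start saturates
+        /// at the first supported year instead of indexing outside years_lut.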
+ if (unlikely(year < DATE_LUT_MIN_YEAR)) + year = DATE_LUT_MIN_YEAR; + + return toDayNum(years_lut[year - DATE_LUT_MIN_YEAR]); } - inline DayNum toStartOfQuarterInterval(DayNum d, UInt64 quarters) const + inline ExtendedDayNum toStartOfQuarterInterval(ExtendedDayNum d, UInt64 quarters) const { if (quarters == 1) return toFirstDayNumOfQuarter(d); return toStartOfMonthInterval(d, quarters * 3); } - inline DayNum toStartOfMonthInterval(DayNum d, UInt64 months) const + inline ExtendedDayNum toStartOfMonthInterval(ExtendedDayNum d, UInt64 months) const { if (months == 1) return toFirstDayNumOfMonth(d); - const auto & date = lut[d]; - UInt32 month_total_index = (date.year - DATE_LUT_MIN_YEAR) * 12 + date.month - 1; - return years_months_lut[month_total_index / months * months]; + const Values & values = lut[toLUTIndex(d)]; + UInt32 month_total_index = (values.year - DATE_LUT_MIN_YEAR) * 12 + values.month - 1; + return toDayNum(years_months_lut[month_total_index / months * months]); } - inline DayNum toStartOfWeekInterval(DayNum d, UInt64 weeks) const + inline ExtendedDayNum toStartOfWeekInterval(ExtendedDayNum d, UInt64 weeks) const { if (weeks == 1) return toFirstDayNumOfWeek(d); UInt64 days = weeks * 7; // January 1st 1970 was Thursday so we need this 4-days offset to make weeks start on Monday. - return DayNum(4 + (d - 4) / days * days); + return ExtendedDayNum(4 + (d - 4) / days * days); } - inline time_t toStartOfDayInterval(DayNum d, UInt64 days) const + inline time_t toStartOfDayInterval(ExtendedDayNum d, UInt64 days) const { if (days == 1) return toDate(d); - return lut[d / days * days].date; + return lut[toLUTIndex(ExtendedDayNum(d / days * days))].date; } inline time_t toStartOfHourInterval(time_t t, UInt64 hours) const { if (hours == 1) return toStartOfHour(t); + + /** We will round the hour number since the midnight. + * It may split the day into non-equal intervals. + * For example, if we will round to 11-hour interval, + * the day will be split to the intervals 00:00:00..10:59:59, 11:00:00..21:59:59, 22:00:00..23:59:59. + * In case of daylight saving time or other transitions, + * the intervals can be shortened or prolonged to the amount of transition. + */ + UInt64 seconds = hours * 3600; - t = t / seconds * seconds; - if (offset_is_whole_number_of_hours_everytime) - return t; - return toStartOfHour(t); + + const LUTIndex index = findIndex(t); + const Values & values = lut[index]; + + time_t time = t - values.date; + if (time >= values.time_at_offset_change()) + { + /// Align to new hour numbers before rounding. + time += values.amount_of_offset_change(); + time = time / seconds * seconds; + + /// Should subtract the shift back but only if rounded time is not before shift. + if (time >= values.time_at_offset_change()) + { + time -= values.amount_of_offset_change(); + + /// With cutoff at the time of the shift. Otherwise we may end up with something like 23:00 previous day. + if (time < values.time_at_offset_change()) + time = values.time_at_offset_change(); + } + } + else + { + time = time / seconds * seconds; + } + + return values.date + time; } inline time_t toStartOfMinuteInterval(time_t t, UInt64 minutes) const { if (minutes == 1) return toStartOfMinute(t); + + /** In contrast to "toStartOfHourInterval" function above, + * the minute intervals are not aligned to the midnight. + * You will get unexpected results if for example, you round down to 60 minute interval + * and there was a time shift to 30 minutes. 
+      *
+      * But this is not specified in the docs and may change in the future.
+      */
+
         UInt64 seconds = 60 * minutes;
-        return t / seconds * seconds;
+        return roundDown(t, seconds);
     }
 
     inline time_t toStartOfSecondInterval(time_t t, UInt64 seconds) const
     {
         if (seconds == 1)
             return t;
-        return t / seconds * seconds;
+
+        return roundDown(t, seconds);
+    }
+
+    inline LUTIndex makeLUTIndex(Int16 year, UInt8 month, UInt8 day_of_month) const
+    {
+        if (unlikely(year < DATE_LUT_MIN_YEAR || year > DATE_LUT_MAX_YEAR || month < 1 || month > 12 || day_of_month < 1 || day_of_month > 31))
+            return LUTIndex(0);
+
+        return LUTIndex{years_months_lut[(year - DATE_LUT_MIN_YEAR) * 12 + month - 1] + day_of_month - 1};
     }
 
     /// Create DayNum from year, month, day of month.
-    inline DayNum makeDayNum(UInt16 year, UInt8 month, UInt8 day_of_month) const
+    inline ExtendedDayNum makeDayNum(Int16 year, UInt8 month, UInt8 day_of_month) const
     {
         if (unlikely(year < DATE_LUT_MIN_YEAR || year > DATE_LUT_MAX_YEAR || month < 1 || month > 12 || day_of_month < 1 || day_of_month > 31))
-            return DayNum(0); // TODO (nemkov, DateTime64 phase 2): implement creating real date for year outside of LUT range.
+            return ExtendedDayNum(0);
 
-        // The day after 2106-02-07 will not stored fully as struct Values, so just overflow it as 0
-        if (unlikely(year == DATE_LUT_MAX_YEAR && (month > 2 || (month == 2 && day_of_month > 7))))
-            return DayNum(0);
-
-        return DayNum(years_months_lut[(year - DATE_LUT_MIN_YEAR) * 12 + month - 1] + day_of_month - 1);
+        return toDayNum(makeLUTIndex(year, month, day_of_month));
     }
 
-    inline time_t makeDate(UInt16 year, UInt8 month, UInt8 day_of_month) const
+    inline time_t makeDate(Int16 year, UInt8 month, UInt8 day_of_month) const
     {
-        return lut[makeDayNum(year, month, day_of_month)].date;
+        return lut[makeLUTIndex(year, month, day_of_month)].date;
     }
 
     /** Does not accept daylight saving time as argument: in case of ambiguity, it chooses the greater timestamp.
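      * For example, when clocks are set back one hour, the same wall-clock time occurs twice during
      * the transition, and this function resolves the ambiguity by returning the later timestamp.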
      */
-    inline time_t makeDateTime(UInt16 year, UInt8 month, UInt8 day_of_month, UInt8 hour, UInt8 minute, UInt8 second) const
+    inline time_t makeDateTime(Int16 year, UInt8 month, UInt8 day_of_month, UInt8 hour, UInt8 minute, UInt8 second) const
     {
-        size_t index = makeDayNum(year, month, day_of_month);
+        size_t index = makeLUTIndex(year, month, day_of_month);
         UInt32 time_offset = hour * 3600 + minute * 60 + second;
 
-        if (time_offset >= lut[index].time_at_offset_change)
-            time_offset -= lut[index].amount_of_offset_change;
+        if (time_offset >= lut[index].time_at_offset_change())
+            time_offset -= lut[index].amount_of_offset_change();
 
-        UInt32 res = lut[index].date + time_offset;
-
-        if (unlikely(res > DATE_LUT_MAX))
-            return 0;
-
-        return res;
+        return lut[index].date + time_offset;
     }
 
-    inline const Values & getValues(DayNum d) const { return lut[d]; }
-    inline const Values & getValues(time_t t) const { return lut[findIndex(t)]; }
+    template <typename DateOrTime>
+    inline const Values & getValues(DateOrTime v) const { return lut[toLUTIndex(v)]; }
 
-    inline UInt32 toNumYYYYMM(time_t t) const
+    template <typename DateOrTime>
+    inline UInt32 toNumYYYYMM(DateOrTime v) const
     {
-        const Values & values = find(t);
+        const Values & values = getValues(v);
         return values.year * 100 + values.month;
     }
 
-    inline UInt32 toNumYYYYMM(DayNum d) const
+    template <typename DateOrTime>
+    inline UInt32 toNumYYYYMMDD(DateOrTime v) const
     {
-        const Values & values = lut[d];
-        return values.year * 100 + values.month;
-    }
-
-    inline UInt32 toNumYYYYMMDD(time_t t) const
-    {
-        const Values & values = find(t);
-        return values.year * 10000 + values.month * 100 + values.day_of_month;
-    }
-
-    inline UInt32 toNumYYYYMMDD(DayNum d) const
-    {
-        const Values & values = lut[d];
+        const Values & values = getValues(v);
         return values.year * 10000 + values.month * 100 + values.day_of_month;
     }
 
@@ -768,22 +974,85 @@ public:
         return makeDate(num / 10000, num / 100 % 100, num % 100);
     }
 
-    inline DayNum YYYYMMDDToDayNum(UInt32 num) const
+    inline ExtendedDayNum YYYYMMDDToDayNum(UInt32 num) const
     {
         return makeDayNum(num / 10000, num / 100 % 100, num % 100);
     }
 
+    struct DateComponents
+    {
+        uint16_t year;
+        uint8_t month;
+        uint8_t day;
+    };
+
+    struct TimeComponents
+    {
+        uint8_t hour;
+        uint8_t minute;
+        uint8_t second;
+    };
+
+    struct DateTimeComponents
+    {
+        DateComponents date;
+        TimeComponents time;
+    };
+
+    inline DateComponents toDateComponents(time_t t) const
+    {
+        const Values & values = getValues(t);
+        return { values.year, values.month, values.day_of_month };
+    }
+
+    inline DateTimeComponents toDateTimeComponents(time_t t) const
+    {
+        const LUTIndex index = findIndex(t);
+        const Values & values = lut[index];
+
+        DateTimeComponents res;
+
+        res.date.year = values.year;
+        res.date.month = values.month;
+        res.date.day = values.day_of_month;
+
+        time_t time = t - values.date;
+        if (time >= values.time_at_offset_change())
+            time += values.amount_of_offset_change();
+
+        if (unlikely(time < 0))
+        {
+            res.time.second = 0;
+            res.time.minute = 0;
+            res.time.hour = 0;
+        }
+        else
+        {
+            res.time.second = time % 60;
+            res.time.minute = time / 60 % 60;
+            res.time.hour = time / 3600;
+        }
+
+        /// In case time was changed backwards at the start of next day, we will repeat the hour 23.
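+        /// (The same clamping to hour 23 is applied in toHour() above.)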
+        if (unlikely(res.time.hour > 23))
+            res.time.hour = 23;
+
+        return res;
+    }
+
     inline UInt64 toNumYYYYMMDDhhmmss(time_t t) const
     {
-        const Values & values = find(t);
+        DateTimeComponents components = toDateTimeComponents(t);
+
         return
-              toSecond(t)
-            + toMinute(t) * 100
-            + toHour(t) * 10000
-            + UInt64(values.day_of_month) * 1000000
-            + UInt64(values.month) * 100000000
-            + UInt64(values.year) * 10000000000;
+              components.time.second
+            + components.time.minute * 100
+            + components.time.hour * 10000
+            + UInt64(components.date.day) * 1000000
+            + UInt64(components.date.month) * 100000000
+            + UInt64(components.date.year) * 10000000000;
     }
 
     inline time_t YYYYMMDDhhmmssToTime(UInt64 num) const
@@ -802,15 +1071,19 @@ public:
 
     inline NO_SANITIZE_UNDEFINED time_t addDays(time_t t, Int64 delta) const
     {
-        DayNum index = findIndex(t);
-        time_t time_offset = toHour(t) * 3600 + toMinute(t) * 60 + toSecond(t);
+        const LUTIndex index = findIndex(t);
+        const Values & values = lut[index];
 
-        index += delta;
+        time_t time = t - values.date;
+        if (time >= values.time_at_offset_change())
+            time += values.amount_of_offset_change();
 
-        if (time_offset >= lut[index].time_at_offset_change)
-            time_offset -= lut[index].amount_of_offset_change;
+        const LUTIndex new_index = index + delta;
 
-        return lut[index].date + time_offset;
+        if (time >= lut[new_index].time_at_offset_change())
+            time -= lut[new_index].amount_of_offset_change();
+
+        return lut[new_index].date + time;
     }
 
     inline NO_SANITIZE_UNDEFINED time_t addWeeks(time_t t, Int64 delta) const
@@ -818,7 +1091,7 @@ public:
         return addDays(t, delta * 7);
     }
 
-    inline UInt8 saturateDayOfMonth(UInt16 year, UInt8 month, UInt8 day_of_month) const
+    inline UInt8 saturateDayOfMonth(Int16 year, UInt8 month, UInt8 day_of_month) const
     {
         if (likely(day_of_month <= 28))
             return day_of_month;
@@ -831,25 +1104,12 @@ public:
         return day_of_month;
     }
 
-    /// If resulting month has less deys than source month, then saturation can happen.
-    /// Example: 31 Aug + 1 month = 30 Sep.
-    inline time_t addMonths(time_t t, Int64 delta) const
+    template <typename DateOrTime>
+    inline LUTIndex NO_SANITIZE_UNDEFINED addMonthsIndex(DateOrTime v, Int64 delta) const
     {
-        DayNum result_day = addMonths(toDayNum(t), delta);
+        const Values & values = lut[toLUTIndex(v)];
 
-        time_t time_offset = toHour(t) * 3600 + toMinute(t) * 60 + toSecond(t);
-
-        if (time_offset >= lut[result_day].time_at_offset_change)
-            time_offset -= lut[result_day].amount_of_offset_change;
-
-        return lut[result_day].date + time_offset;
-    }
-
-    inline NO_SANITIZE_UNDEFINED DayNum addMonths(DayNum d, Int64 delta) const
-    {
-        const Values & values = lut[d];
-
-        Int64 month = static_cast<Int64>(values.month) + delta;
+        Int64 month = values.month + delta;
 
         if (month > 0)
         {
@@ -857,7 +1117,7 @@ public:
             month = ((month - 1) % 12) + 1;
             auto day_of_month = saturateDayOfMonth(year, month, values.day_of_month);
 
-            return makeDayNum(year, month, day_of_month);
+            return makeLUTIndex(year, month, day_of_month);
         }
         else
         {
@@ -865,36 +1125,48 @@ public:
             month = 12 - (-month % 12);
             auto day_of_month = saturateDayOfMonth(year, month, values.day_of_month);
 
-            return makeDayNum(year, month, day_of_month);
+            return makeLUTIndex(year, month, day_of_month);
         }
     }
 
-    inline NO_SANITIZE_UNDEFINED time_t addQuarters(time_t t, Int64 delta) const
+    /// If the resulting month has fewer days than the source month, saturation can happen.
+    /// Example: 31 Aug + 1 month = 30 Sep.
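+    /// e.g. adding one month to 2021-01-31 yields 2021-02-28: the day of month is saturated
+    /// by saturateDayOfMonth() rather than spilling over into the next month.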
+    inline time_t NO_SANITIZE_UNDEFINED addMonths(time_t t, Int64 delta) const
+    {
+        const auto result_day = addMonthsIndex(t, delta);
+
+        const LUTIndex index = findIndex(t);
+        const Values & values = lut[index];
+
+        time_t time = t - values.date;
+        if (time >= values.time_at_offset_change())
+            time += values.amount_of_offset_change();
+
+        if (time >= lut[result_day].time_at_offset_change())
+            time -= lut[result_day].amount_of_offset_change();
+
+        return lut[result_day].date + time;
+    }
+
+    inline ExtendedDayNum NO_SANITIZE_UNDEFINED addMonths(ExtendedDayNum d, Int64 delta) const
+    {
+        return toDayNum(addMonthsIndex(d, delta));
+    }
+
+    inline time_t NO_SANITIZE_UNDEFINED addQuarters(time_t t, Int64 delta) const
     {
         return addMonths(t, delta * 3);
     }
 
-    inline NO_SANITIZE_UNDEFINED DayNum addQuarters(DayNum d, Int64 delta) const
+    inline ExtendedDayNum addQuarters(ExtendedDayNum d, Int64 delta) const
     {
         return addMonths(d, delta * 3);
     }
 
-    /// Saturation can occur if 29 Feb is mapped to non-leap year.
-    inline NO_SANITIZE_UNDEFINED time_t addYears(time_t t, Int64 delta) const
+    template <typename DateOrTime>
+    inline LUTIndex NO_SANITIZE_UNDEFINED addYearsIndex(DateOrTime v, Int64 delta) const
     {
-        DayNum result_day = addYears(toDayNum(t), delta);
-
-        time_t time_offset = toHour(t) * 3600 + toMinute(t) * 60 + toSecond(t);
-
-        if (time_offset >= lut[result_day].time_at_offset_change)
-            time_offset -= lut[result_day].amount_of_offset_change;
-
-        return lut[result_day].date + time_offset;
-    }
-
-    inline NO_SANITIZE_UNDEFINED DayNum addYears(DayNum d, Int64 delta) const
-    {
-        const Values & values = lut[d];
+        const Values & values = lut[toLUTIndex(v)];
 
         auto year = values.year + delta;
         auto month = values.month;
@@ -904,42 +1176,61 @@ public:
         if (unlikely(day_of_month == 29 && month == 2))
             day_of_month = saturateDayOfMonth(year, month, day_of_month);
 
-        return makeDayNum(year, month, day_of_month);
+        return makeLUTIndex(year, month, day_of_month);
+    }
+
+    /// Saturation can occur if 29 Feb is mapped to non-leap year.
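+    /// e.g. addYears(2020-02-29, +1) yields 2021-02-28, since 2021 is not a leap year.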
+ inline time_t addYears(time_t t, Int64 delta) const + { + auto result_day = addYearsIndex(t, delta); + + const LUTIndex index = findIndex(t); + const Values & values = lut[index]; + + time_t time = t - values.date; + if (time >= values.time_at_offset_change()) + time += values.amount_of_offset_change(); + + if (time >= lut[result_day].time_at_offset_change()) + time -= lut[result_day].amount_of_offset_change(); + + return lut[result_day].date + time; + } + + inline ExtendedDayNum addYears(ExtendedDayNum d, Int64 delta) const + { + return toDayNum(addYearsIndex(d, delta)); } inline std::string timeToString(time_t t) const { - const Values & values = find(t); + DateTimeComponents components = toDateTimeComponents(t); std::string s {"0000-00-00 00:00:00"}; - s[0] += values.year / 1000; - s[1] += (values.year / 100) % 10; - s[2] += (values.year / 10) % 10; - s[3] += values.year % 10; - s[5] += values.month / 10; - s[6] += values.month % 10; - s[8] += values.day_of_month / 10; - s[9] += values.day_of_month % 10; + s[0] += components.date.year / 1000; + s[1] += (components.date.year / 100) % 10; + s[2] += (components.date.year / 10) % 10; + s[3] += components.date.year % 10; + s[5] += components.date.month / 10; + s[6] += components.date.month % 10; + s[8] += components.date.day / 10; + s[9] += components.date.day % 10; - auto hour = toHour(t); - auto minute = toMinute(t); - auto second = toSecond(t); - - s[11] += hour / 10; - s[12] += hour % 10; - s[14] += minute / 10; - s[15] += minute % 10; - s[17] += second / 10; - s[18] += second % 10; + s[11] += components.time.hour / 10; + s[12] += components.time.hour % 10; + s[14] += components.time.minute / 10; + s[15] += components.time.minute % 10; + s[17] += components.time.second / 10; + s[18] += components.time.second % 10; return s; } inline std::string dateToString(time_t t) const { - const Values & values = find(t); + const Values & values = getValues(t); std::string s {"0000-00-00"}; @@ -955,9 +1246,9 @@ public: return s; } - inline std::string dateToString(DayNum d) const + inline std::string dateToString(ExtendedDayNum d) const { - const Values & values = lut[d]; + const Values & values = getValues(d); std::string s {"0000-00-00"}; @@ -975,7 +1266,7 @@ public: }; #if defined(__PPC__) -#if !__clang__ +#if !defined(__clang__) #pragma GCC diagnostic pop #endif #endif diff --git a/base/common/DayNum.h b/base/common/DayNum.h index a4ef0c43b69..5cf4d4635c8 100644 --- a/base/common/DayNum.h +++ b/base/common/DayNum.h @@ -7,3 +7,8 @@ * See DateLUTImpl for usage examples. 
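  * DayNum is an unsigned 16-bit value, so it cannot represent dates before 1970-01-01;
  * the signed 32-bit ExtendedDayNum below covers the extended range.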
  */
 STRONG_TYPEDEF(UInt16, DayNum)
+
+/** Represents the number of days since 1970-01-01, but in an extended range:
+  * also for dates before 1970-01-01 and after 2105.
+  */
+STRONG_TYPEDEF(Int32, ExtendedDayNum)
diff --git a/base/common/LocalDate.h b/base/common/LocalDate.h
index e5ebe877bc5..b1e6eeb907c 100644
--- a/base/common/LocalDate.h
+++ b/base/common/LocalDate.h
@@ -92,20 +92,10 @@ public:
     LocalDate(const LocalDate &) noexcept = default;
     LocalDate & operator= (const LocalDate &) noexcept = default;
 
-    LocalDate & operator= (time_t time)
-    {
-        init(time);
-        return *this;
-    }
-
-    operator time_t() const
-    {
-        return DateLUT::instance().makeDate(m_year, m_month, m_day);
-    }
-
     DayNum getDayNum() const
     {
-        return DateLUT::instance().makeDayNum(m_year, m_month, m_day);
+        const auto & lut = DateLUT::instance();
+        return DayNum(lut.makeDayNum(m_year, m_month, m_day).toUnderType());
     }
 
     operator DayNum() const
@@ -166,12 +156,3 @@ public:
 };
 
 static_assert(sizeof(LocalDate) == 4);
-
-
-namespace std
-{
-inline string to_string(const LocalDate & date)
-{
-    return date.toString();
-}
-}
diff --git a/base/common/LocalDateTime.h b/base/common/LocalDateTime.h
index 0e237789bd1..dde283e5ebb 100644
--- a/base/common/LocalDateTime.h
+++ b/base/common/LocalDateTime.h
@@ -29,29 +29,16 @@ private:
     /// NOTE We may use attribute packed instead, but it is less portable.
     unsigned char pad = 0;
 
-    void init(time_t time)
+    void init(time_t time, const DateLUTImpl & time_zone)
     {
-        if (unlikely(time > DATE_LUT_MAX || time == 0))
-        {
-            m_year = 0;
-            m_month = 0;
-            m_day = 0;
-            m_hour = 0;
-            m_minute = 0;
-            m_second = 0;
+        DateLUTImpl::DateTimeComponents components = time_zone.toDateTimeComponents(time);
 
-            return;
-        }
-
-        const auto & date_lut = DateLUT::instance();
-        const auto & values = date_lut.getValues(time);
-
-        m_year = values.year;
-        m_month = values.month;
-        m_day = values.day_of_month;
-        m_hour = date_lut.toHour(time);
-        m_minute = date_lut.toMinute(time);
-        m_second = date_lut.toSecond(time);
+        m_year = components.date.year;
+        m_month = components.date.month;
+        m_day = components.date.day;
+        m_hour = components.time.hour;
+        m_minute = components.time.minute;
+        m_second = components.time.second;
 
         (void)pad;  /// Suppress unused private field warning.
     }
@@ -73,9 +60,9 @@ private:
     }
 
 public:
-    explicit LocalDateTime(time_t time)
+    explicit LocalDateTime(time_t time, const DateLUTImpl & time_zone = DateLUT::instance())
     {
-        init(time);
+        init(time, time_zone);
     }
 
     LocalDateTime(unsigned short year_, unsigned char month_, unsigned char day_,
@@ -104,19 +91,6 @@ public:
     LocalDateTime(const LocalDateTime &) noexcept = default;
     LocalDateTime & operator= (const LocalDateTime &) noexcept = default;
 
-    LocalDateTime & operator= (time_t time)
-    {
-        init(time);
-        return *this;
-    }
-
-    operator time_t() const
-    {
-        return m_year == 0
-            ? 0
-            : DateLUT::instance().makeDateTime(m_year, m_month, m_day, m_hour, m_minute, m_second);
-    }
-
     unsigned short year() const { return m_year; }
     unsigned char month() const { return m_month; }
     unsigned char day() const { return m_day; }
@@ -132,8 +106,30 @@ public:
     void second(unsigned char x) { m_second = x; }
 
     LocalDate toDate() const { return LocalDate(m_year, m_month, m_day); }
+    LocalDateTime toStartOfDate() const { return LocalDateTime(m_year, m_month, m_day, 0, 0, 0); }
 
-    LocalDateTime toStartOfDate() { return LocalDateTime(m_year, m_month, m_day, 0, 0, 0); }
+    std::string toString() const
+    {
+        std::string s{"0000-00-00 00:00:00"};
+
+        s[0] += m_year / 1000;
+        s[1] += (m_year / 100) % 10;
+        s[2] += (m_year / 10) % 10;
+        s[3] += m_year % 10;
+        s[5] += m_month / 10;
+        s[6] += m_month % 10;
+        s[8] += m_day / 10;
+        s[9] += m_day % 10;
+
+        s[11] += m_hour / 10;
+        s[12] += m_hour % 10;
+        s[14] += m_minute / 10;
+        s[15] += m_minute % 10;
+        s[17] += m_second / 10;
+        s[18] += m_second % 10;
+
+        return s;
+    }
 
     bool operator< (const LocalDateTime & other) const
     {
@@ -167,14 +163,3 @@ public:
 };
 
 static_assert(sizeof(LocalDateTime) == 8);
-
-
-namespace std
-{
-inline string to_string(const LocalDateTime & datetime)
-{
-    stringstream str;
-    str << datetime;
-    return str.str();
-}
-}
diff --git a/src/Common/MoveOrCopyIfThrow.h b/base/common/MoveOrCopyIfThrow.h
similarity index 100%
rename from src/Common/MoveOrCopyIfThrow.h
rename to base/common/MoveOrCopyIfThrow.h
diff --git a/base/common/ReplxxLineReader.cpp b/base/common/ReplxxLineReader.cpp
index fcd1610e589..7893e56d751 100644
--- a/base/common/ReplxxLineReader.cpp
+++ b/base/common/ReplxxLineReader.cpp
@@ -91,6 +91,10 @@ ReplxxLineReader::ReplxxLineReader(
     /// it also binded to M-p/M-n).
     rx.bind_key(Replxx::KEY::meta('N'), [this](char32_t code) { return rx.invoke(Replxx::ACTION::COMPLETE_NEXT, code); });
     rx.bind_key(Replxx::KEY::meta('P'), [this](char32_t code) { return rx.invoke(Replxx::ACTION::COMPLETE_PREVIOUS, code); });
+    /// By default M-BACKSPACE is KILL_TO_WHITESPACE_ON_LEFT, while in readline it is backward-kill-word
+    rx.bind_key(Replxx::KEY::meta(Replxx::KEY::BACKSPACE), [this](char32_t code) { return rx.invoke(Replxx::ACTION::KILL_TO_BEGINING_OF_WORD, code); });
+    /// By default C-w is KILL_TO_BEGINING_OF_WORD, while in readline it is unix-word-rubout
+    rx.bind_key(Replxx::KEY::control('W'), [this](char32_t code) { return rx.invoke(Replxx::ACTION::KILL_TO_WHITESPACE_ON_LEFT, code); });
 
     rx.bind_key(Replxx::KEY::meta('E'), [this](char32_t) { openEditor(); return Replxx::ACTION_RESULT::CONTINUE; });
 }
diff --git a/base/common/arithmeticOverflow.h b/base/common/arithmeticOverflow.h
index a92fe56b9cb..c170d214636 100644
--- a/base/common/arithmeticOverflow.h
+++ b/base/common/arithmeticOverflow.h
@@ -25,6 +25,12 @@ namespace common
         return x - y;
     }
 
+    template <typename T>
+    inline auto NO_SANITIZE_UNDEFINED negateIgnoreOverflow(T x)
+    {
+        return -x;
+    }
+
     template <typename T>
     inline bool addOverflow(T x, T y, T & res)
     {
diff --git a/base/common/defines.h b/base/common/defines.h
index 367bdd64234..ada8245f494 100644
--- a/base/common/defines.h
+++ b/base/common/defines.h
@@ -76,6 +76,16 @@
 # endif
 #endif
 
+#if !defined(UNDEFINED_BEHAVIOR_SANITIZER)
+# if defined(__has_feature)
+#  if __has_feature(undefined_behavior_sanitizer)
+#   define UNDEFINED_BEHAVIOR_SANITIZER 1
+#  endif
+# elif defined(__UNDEFINED_BEHAVIOR_SANITIZER__)
+#  define UNDEFINED_BEHAVIOR_SANITIZER 1
+# endif
+#endif
+
 #if defined(ADDRESS_SANITIZER)
 # define BOOST_USE_ASAN 1
 # define BOOST_USE_UCONTEXT 1
diff --git a/base/common/getThreadId.cpp b/base/common/getThreadId.cpp
index 700c51f21fc..054e9be9074 100644
--- a/base/common/getThreadId.cpp
+++ b/base/common/getThreadId.cpp
@@ -25,6 +25,10 @@ uint64_t getThreadId()
         current_tid = syscall(SYS_gettid); /// This call is always successful. - man gettid
 #elif defined(OS_FREEBSD)
         current_tid = pthread_getthreadid_np();
+#elif defined(OS_SUNOS)
+        // On Solaris-derived systems, this returns the ID of the LWP, analogous
+        // to a thread.
+        current_tid = static_cast<uint64_t>(pthread_self());
 #else
         if (0 != pthread_threadid_np(nullptr, &current_tid))
             throw std::logic_error("pthread_threadid_np returned error");
diff --git a/base/common/setTerminalEcho.cpp b/base/common/setTerminalEcho.cpp
index 658f27705ba..66db216a235 100644
--- a/base/common/setTerminalEcho.cpp
+++ b/base/common/setTerminalEcho.cpp
@@ -1,45 +1,28 @@
-// https://stackoverflow.com/questions/1413445/reading-a-password-from-stdcin
-
 #include
 #include
 #include
 #include
 #include
-
-#ifdef WIN32
-#include <windows.h>
-#else
 #include <termios.h>
 #include <unistd.h>
-#include
-#endif
+

 void setTerminalEcho(bool enable)
 {
-#ifdef WIN32
-    auto handle = GetStdHandle(STD_INPUT_HANDLE);
-    DWORD mode;
-    if (!GetConsoleMode(handle, &mode))
-        throw std::runtime_error(std::string("setTerminalEcho failed get: ") + std::to_string(GetLastError()));
+    /// Obtain terminal attributes,
+    /// toggle the ECHO flag
+    /// and set them back.
 
-    if (!enable)
-        mode &= ~ENABLE_ECHO_INPUT;
-    else
-        mode |= ENABLE_ECHO_INPUT;
+    struct termios tty{};
 
-    if (!SetConsoleMode(handle, mode))
-        throw std::runtime_error(std::string("setTerminalEcho failed set: ") + std::to_string(GetLastError()));
-#else
-    struct termios tty;
-    if (tcgetattr(STDIN_FILENO, &tty))
+    if (0 != tcgetattr(STDIN_FILENO, &tty))
         throw std::runtime_error(std::string("setTerminalEcho failed get: ") + errnoToString(errno));
-    if (!enable)
-        tty.c_lflag &= ~ECHO;
-    else
-        tty.c_lflag |= ECHO;
-    auto ret = tcsetattr(STDIN_FILENO, TCSANOW, &tty);
-    if (ret)
+
+    if (enable)
+        tty.c_lflag |= ECHO;
+    else
+        tty.c_lflag &= ~ECHO;
+
+    if (0 != tcsetattr(STDIN_FILENO, TCSANOW, &tty))
         throw std::runtime_error(std::string("setTerminalEcho failed set: ") + errnoToString(errno));
-#endif
 }
diff --git a/base/common/strong_typedef.h b/base/common/strong_typedef.h
index d9850a25c37..77b83bfa6e5 100644
--- a/base/common/strong_typedef.h
+++ b/base/common/strong_typedef.h
@@ -12,6 +12,7 @@ private:
     T t;
 
 public:
+    using UnderlyingType = T;
     template <class Enable = typename std::is_copy_constructible<T>::type>
     explicit StrongTypedef(const T & t_) : t(t_) {}
     template <class Enable = typename std::is_move_constructible<T>::type>
diff --git a/base/common/tests/CMakeLists.txt b/base/common/tests/CMakeLists.txt
index b7082ee9900..2a07a94055f 100644
--- a/base/common/tests/CMakeLists.txt
+++ b/base/common/tests/CMakeLists.txt
@@ -1,25 +1,2 @@
-include (${ClickHouse_SOURCE_DIR}/cmake/add_check.cmake)
-
-add_executable (date_lut2 date_lut2.cpp)
-add_executable (date_lut3 date_lut3.cpp)
-add_executable (date_lut_default_timezone date_lut_default_timezone.cpp)
-add_executable (local_date_time_comparison local_date_time_comparison.cpp)
-add_executable (realloc-perf allocator.cpp)
-
-set(PLATFORM_LIBS ${CMAKE_DL_LIBS})
-
-target_link_libraries (date_lut2 PRIVATE common ${PLATFORM_LIBS})
-target_link_libraries (date_lut3 PRIVATE common ${PLATFORM_LIBS})
-target_link_libraries (date_lut_default_timezone PRIVATE common ${PLATFORM_LIBS})
-target_link_libraries (local_date_time_comparison PRIVATE common)
-target_link_libraries (realloc-perf PRIVATE common)
-add_check(local_date_time_comparison)
-
-if(USE_GTEST)
-
add_executable(unit_tests_libcommon gtest_json_test.cpp gtest_strong_typedef.cpp gtest_find_symbols.cpp) - target_link_libraries(unit_tests_libcommon PRIVATE common ${GTEST_MAIN_LIBRARIES} ${GTEST_LIBRARIES}) - add_check(unit_tests_libcommon) -endif() - add_executable (dump_variable dump_variable.cpp) target_link_libraries (dump_variable PRIVATE clickhouse_common_io) diff --git a/base/common/tests/allocator.cpp b/base/common/tests/allocator.cpp deleted file mode 100644 index 03f6228e0f5..00000000000 --- a/base/common/tests/allocator.cpp +++ /dev/null @@ -1,47 +0,0 @@ -#include -#include -#include -#include - - -void thread_func() -{ - for (size_t i = 0; i < 100; ++i) - { - size_t size = 4096; - - void * buf = malloc(size); - if (!buf) - abort(); - memset(buf, 0, size); - - while (size < 1048576) - { - size_t next_size = size * 4; - - void * new_buf = realloc(buf, next_size); - if (!new_buf) - abort(); - buf = new_buf; - - memset(reinterpret_cast(buf) + size, 0, next_size - size); - size = next_size; - } - - free(buf); - } -} - - -int main(int, char **) -{ - std::vector threads(16); - for (size_t i = 0; i < 1000; ++i) - { - for (auto & thread : threads) - thread = std::thread(thread_func); - for (auto & thread : threads) - thread.join(); - } - return 0; -} diff --git a/base/common/tests/date_lut2.cpp b/base/common/tests/date_lut2.cpp deleted file mode 100644 index 6dcf5e8adf2..00000000000 --- a/base/common/tests/date_lut2.cpp +++ /dev/null @@ -1,53 +0,0 @@ -#include -#include - -#include - - -static std::string toString(time_t Value) -{ - struct tm tm; - char buf[96]; - - localtime_r(&Value, &tm); - snprintf(buf, sizeof(buf), "%04d-%02d-%02d %02d:%02d:%02d", - tm.tm_year + 1900, tm.tm_mon + 1, tm.tm_mday, tm.tm_hour, tm.tm_min, tm.tm_sec); - - return buf; -} - -static time_t orderedIdentifierToDate(unsigned value) -{ - struct tm tm; - - memset(&tm, 0, sizeof(tm)); - - tm.tm_year = value / 10000 - 1900; - tm.tm_mon = (value % 10000) / 100 - 1; - tm.tm_mday = value % 100; - tm.tm_isdst = -1; - - return mktime(&tm); -} - - -void loop(time_t begin, time_t end, int step) -{ - const auto & date_lut = DateLUT::instance(); - - for (time_t t = begin; t < end; t += step) - std::cout << toString(t) - << ", " << toString(date_lut.toTime(t)) - << ", " << date_lut.toHour(t) - << std::endl; -} - - -int main(int, char **) -{ - loop(orderedIdentifierToDate(20101031), orderedIdentifierToDate(20101101), 15 * 60); - loop(orderedIdentifierToDate(20100328), orderedIdentifierToDate(20100330), 15 * 60); - loop(orderedIdentifierToDate(20141020), orderedIdentifierToDate(20141106), 15 * 60); - - return 0; -} diff --git a/base/common/tests/date_lut3.cpp b/base/common/tests/date_lut3.cpp deleted file mode 100644 index 411765d2b2a..00000000000 --- a/base/common/tests/date_lut3.cpp +++ /dev/null @@ -1,62 +0,0 @@ -#include -#include - -#include - -#include - - -static std::string toString(time_t Value) -{ - struct tm tm; - char buf[96]; - - localtime_r(&Value, &tm); - snprintf(buf, sizeof(buf), "%04d-%02d-%02d %02d:%02d:%02d", - tm.tm_year + 1900, tm.tm_mon + 1, tm.tm_mday, tm.tm_hour, tm.tm_min, tm.tm_sec); - - return buf; -} - -static time_t orderedIdentifierToDate(unsigned value) -{ - struct tm tm; - - memset(&tm, 0, sizeof(tm)); - - tm.tm_year = value / 10000 - 1900; - tm.tm_mon = (value % 10000) / 100 - 1; - tm.tm_mday = value % 100; - tm.tm_isdst = -1; - - return mktime(&tm); -} - - -void loop(time_t begin, time_t end, int step) -{ - const auto & date_lut = DateLUT::instance(); - - for (time_t t = begin; t < end; t 
+= step) - { - time_t t2 = date_lut.makeDateTime(date_lut.toYear(t), date_lut.toMonth(t), date_lut.toDayOfMonth(t), - date_lut.toHour(t), date_lut.toMinute(t), date_lut.toSecond(t)); - - std::string s1 = toString(t); - std::string s2 = toString(t2); - - std::cerr << s1 << ", " << s2 << std::endl; - - if (s1 != s2) - throw Poco::Exception("Test failed."); - } -} - - -int main(int, char **) -{ - loop(orderedIdentifierToDate(20101031), orderedIdentifierToDate(20101101), 15 * 60); - loop(orderedIdentifierToDate(20100328), orderedIdentifierToDate(20100330), 15 * 60); - - return 0; -} diff --git a/base/common/tests/date_lut_default_timezone.cpp b/base/common/tests/date_lut_default_timezone.cpp deleted file mode 100644 index b8e5aa08931..00000000000 --- a/base/common/tests/date_lut_default_timezone.cpp +++ /dev/null @@ -1,31 +0,0 @@ -#include -#include -#include - -int main(int, char **) -{ - try - { - const auto & date_lut = DateLUT::instance(); - std::cout << "Detected default timezone: `" << date_lut.getTimeZone() << "'" << std::endl; - time_t now = time(nullptr); - std::cout << "Current time: " << date_lut.timeToString(now) - << ", UTC: " << DateLUT::instance("UTC").timeToString(now) << std::endl; - } - catch (const Poco::Exception & e) - { - std::cerr << e.displayText() << std::endl; - return 1; - } - catch (std::exception & e) - { - std::cerr << "std::exception: " << e.what() << std::endl; - return 2; - } - catch (...) - { - std::cerr << "Some exception" << std::endl; - return 3; - } - return 0; -} diff --git a/base/common/tests/gtest_json_test.cpp b/base/common/tests/gtest_json_test.cpp deleted file mode 100644 index 189a1a03d99..00000000000 --- a/base/common/tests/gtest_json_test.cpp +++ /dev/null @@ -1,656 +0,0 @@ -#include -#include -#include -#include - -#include - -using namespace std::literals::string_literals; - -#include - -enum class ResultType -{ - Return, - Throw -}; - -struct GetStringTestRecord -{ - const char * input; - ResultType result_type; - const char * result; -}; - -TEST(JSONSuite, SimpleTest) -{ - std::vector test_data = - { - { R"("name")", ResultType::Return, "name" }, - { R"("Вафельница Vitek WX-1102 FL")", ResultType::Return, "Вафельница Vitek WX-1102 FL" }, - { R"("brand")", ResultType::Return, "brand" }, - { R"("184509")", ResultType::Return, "184509" }, - { R"("category")", ResultType::Return, "category" }, - { R"("Все для детей/Детская техника/Vitek")", ResultType::Return, "Все для детей/Детская техника/Vitek" }, - { R"("variant")", ResultType::Return, "variant" }, - { R"("В наличии")", ResultType::Return, "В наличии" }, - { R"("price")", ResultType::Return, "price" }, - { R"("2390.00")", ResultType::Return, "2390.00" }, - { R"("list")", ResultType::Return, "list" }, - { R"("Карточка")", ResultType::Return, "Карточка" }, - { R"("position")", ResultType::Return, "position" }, - { R"("detail")", ResultType::Return, "detail" }, - { R"("actionField")", ResultType::Return, "actionField" }, - { R"("list")", ResultType::Return, "list" }, - { R"("http://www.techport.ru/q/?t=вафельница&sort=price&sdim=asc")", ResultType::Return, "http://www.techport.ru/q/?t=вафельница&sort=price&sdim=asc" }, - { R"("action")", ResultType::Return, "action" }, - { R"("detail")", ResultType::Return, "detail" }, - { R"("products")", ResultType::Return, "products" }, - { R"("name")", ResultType::Return, "name" }, - { R"("Вафельница Vitek WX-1102 FL")", ResultType::Return, "Вафельница Vitek WX-1102 FL" }, - { R"("id")", ResultType::Return, "id" }, - { R"("184509")", ResultType::Return, 
"184509" }, - { R"("price")", ResultType::Return, "price" }, - { R"("2390.00")", ResultType::Return, "2390.00" }, - { R"("brand")", ResultType::Return, "brand" }, - { R"("Vitek")", ResultType::Return, "Vitek" }, - { R"("category")", ResultType::Return, "category" }, - { R"("Все для детей/Детская техника/Vitek")", ResultType::Return, "Все для детей/Детская техника/Vitek" }, - { R"("variant")", ResultType::Return, "variant" }, - { R"("В наличии")", ResultType::Return, "В наличии" }, - { R"("ru")", ResultType::Return, "ru" }, - { R"("experiments")", ResultType::Return, "experiments" }, - { R"("lang")", ResultType::Return, "lang" }, - { R"("ru")", ResultType::Return, "ru" }, - { R"("los_portal")", ResultType::Return, "los_portal" }, - { R"("los_level")", ResultType::Return, "los_level" }, - { R"("none")", ResultType::Return, "none" }, - { R"("isAuthorized")", ResultType::Return, "isAuthorized" }, - { R"("isSubscriber")", ResultType::Return, "isSubscriber" }, - { R"("postType")", ResultType::Return, "postType" }, - { R"("Новости")", ResultType::Return, "Новости" }, - { R"("experiments")", ResultType::Return, "experiments" }, - { R"("lang")", ResultType::Return, "lang" }, - { R"("ru")", ResultType::Return, "ru" }, - { R"("los_portal")", ResultType::Return, "los_portal" }, - { R"("los_level")", ResultType::Return, "los_level" }, - { R"("none")", ResultType::Return, "none" }, - { R"("lang")", ResultType::Return, "lang" }, - { R"("ru")", ResultType::Return, "ru" }, - { R"("Электроплита GEFEST Брест ЭПНД 5140-01 0001")", ResultType::Return, "Электроплита GEFEST Брест ЭПНД 5140-01 0001" }, - { R"("price")", ResultType::Return, "price" }, - { R"("currencyCode")", ResultType::Return, "currencyCode" }, - { R"("RUB")", ResultType::Return, "RUB" }, - { R"("lang")", ResultType::Return, "lang" }, - { R"("ru")", ResultType::Return, "ru" }, - { R"("experiments")", ResultType::Return, "experiments" }, - { R"("lang")", ResultType::Return, "lang" }, - { R"("ru")", ResultType::Return, "ru" }, - { R"("los_portal")", ResultType::Return, "los_portal" }, - { R"("los_level")", ResultType::Return, "los_level" }, - { R"("none")", ResultType::Return, "none" }, - { R"("trash_login")", ResultType::Return, "trash_login" }, - { R"("novikoff")", ResultType::Return, "novikoff" }, - { R"("trash_cat_link")", ResultType::Return, "trash_cat_link" }, - { R"("progs")", ResultType::Return, "progs" }, - { R"("trash_parent_link")", ResultType::Return, "trash_parent_link" }, - { R"("content")", ResultType::Return, "content" }, - { R"("trash_posted_parent")", ResultType::Return, "trash_posted_parent" }, - { R"("content.01.2016")", ResultType::Return, "content.01.2016" }, - { R"("trash_posted_cat")", ResultType::Return, "trash_posted_cat" }, - { R"("progs.01.2016")", ResultType::Return, "progs.01.2016" }, - { R"("trash_virus_count")", ResultType::Return, "trash_virus_count" }, - { R"("trash_is_android")", ResultType::Return, "trash_is_android" }, - { R"("trash_is_wp8")", ResultType::Return, "trash_is_wp8" }, - { R"("trash_is_ios")", ResultType::Return, "trash_is_ios" }, - { R"("trash_posted")", ResultType::Return, "trash_posted" }, - { R"("01.2016")", ResultType::Return, "01.2016" }, - { R"("experiments")", ResultType::Return, "experiments" }, - { R"("lang")", ResultType::Return, "lang" }, - { R"("ru")", ResultType::Return, "ru" }, - { R"("los_portal")", ResultType::Return, "los_portal" }, - { R"("los_level")", ResultType::Return, "los_level" }, - { R"("none")", ResultType::Return, "none" }, - { R"("merchantId")", ResultType::Return, 
"merchantId" }, - { R"("13694_49246")", ResultType::Return, "13694_49246" }, - { R"("cps-source")", ResultType::Return, "cps-source" }, - { R"("wargaming")", ResultType::Return, "wargaming" }, - { R"("cps_provider")", ResultType::Return, "cps_provider" }, - { R"("default")", ResultType::Return, "default" }, - { R"("errorReason")", ResultType::Return, "errorReason" }, - { R"("no errors")", ResultType::Return, "no errors" }, - { R"("scid")", ResultType::Return, "scid" }, - { R"("isAuthPayment")", ResultType::Return, "isAuthPayment" }, - { R"("lang")", ResultType::Return, "lang" }, - { R"("ru")", ResultType::Return, "ru" }, - { R"("rubric")", ResultType::Return, "rubric" }, - { R"("")", ResultType::Return, "" }, - { R"("rubric")", ResultType::Return, "rubric" }, - { R"("Мир")", ResultType::Return, "Мир" }, - { R"("lang")", ResultType::Return, "lang" }, - { R"("ru")", ResultType::Return, "ru" }, - { R"("experiments")", ResultType::Return, "experiments" }, - { R"("lang")", ResultType::Return, "lang" }, - { R"("ru")", ResultType::Return, "ru" }, - { R"("los_portal")", ResultType::Return, "los_portal" }, - { R"("los_level")", ResultType::Return, "los_level" }, - { R"("none")", ResultType::Return, "none" }, - { R"("lang")", ResultType::Return, "lang" }, - { R"("ru")", ResultType::Return, "ru" }, - { R"("__ym")", ResultType::Return, "__ym" }, - { R"("ecommerce")", ResultType::Return, "ecommerce" }, - { R"("impressions")", ResultType::Return, "impressions" }, - { R"("id")", ResultType::Return, "id" }, - { R"("863813")", ResultType::Return, "863813" }, - { R"("name")", ResultType::Return, "name" }, - { R"("Футболка детская 3D Happy, возраст 1-2 года, трикотаж")", ResultType::Return, "Футболка детская 3D Happy, возраст 1-2 года, трикотаж" }, - { R"("category")", ResultType::Return, "category" }, - { R"("/Летние товары/Летний текстиль/")", ResultType::Return, "/Летние товары/Летний текстиль/" }, - { R"("variant")", ResultType::Return, "variant" }, - { R"("")", ResultType::Return, "" }, - { R"("price")", ResultType::Return, "price" }, - { R"("390.00")", ResultType::Return, "390.00" }, - { R"("list")", ResultType::Return, "list" }, - { R"("/retailrocket/")", ResultType::Return, "/retailrocket/" }, - { R"("position")", ResultType::Return, "position" }, - { R"("brand")", ResultType::Return, "brand" }, - { R"("/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/")", ResultType::Return, "/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/" }, - { R"("id")", ResultType::Return, "id" }, - { R"("863839")", ResultType::Return, "863839" }, - { R"("name")", ResultType::Return, "name" }, - { R"("Футболка детская 3D Pretty kitten, возраст 1-2 года, трикотаж")", ResultType::Return, "Футболка детская 3D Pretty kitten, возраст 1-2 года, трикотаж" }, - { R"("category")", ResultType::Return, "category" }, - { R"("/Летние товары/Летний текстиль/")", ResultType::Return, "/Летние товары/Летний текстиль/" }, - { R"("variant")", ResultType::Return, "variant" }, - { R"("")", ResultType::Return, "" }, - { R"("price")", ResultType::Return, "price" }, - { R"("390.00")", ResultType::Return, "390.00" }, - { R"("list")", ResultType::Return, "list" }, - { R"("/retailrocket/")", ResultType::Return, "/retailrocket/" }, - { R"("position")", ResultType::Return, "position" }, - { R"("brand")", ResultType::Return, "brand" }, - { R"("/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/")", ResultType::Return, "/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/" }, - { R"("id")", 
ResultType::Return, "id" }, - { R"("863847")", ResultType::Return, "863847" }, - { R"("name")", ResultType::Return, "name" }, - { R"("Футболка детская 3D Little tiger, возраст 1-2 года, трикотаж")", ResultType::Return, "Футболка детская 3D Little tiger, возраст 1-2 года, трикотаж" }, - { R"("category")", ResultType::Return, "category" }, - { R"("/Летние товары/Летний текстиль/")", ResultType::Return, "/Летние товары/Летний текстиль/" }, - { R"("variant")", ResultType::Return, "variant" }, - { R"("")", ResultType::Return, "" }, - { R"("price")", ResultType::Return, "price" }, - { R"("390.00")", ResultType::Return, "390.00" }, - { R"("list")", ResultType::Return, "list" }, - { R"("/retailrocket/")", ResultType::Return, "/retailrocket/" }, - { R"("position")", ResultType::Return, "position" }, - { R"("brand")", ResultType::Return, "brand" }, - { R"("/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/")", ResultType::Return, "/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/" }, - { R"("id")", ResultType::Return, "id" }, - { R"("911480")", ResultType::Return, "911480" }, - { R"("name")", ResultType::Return, "name" }, - { R"("Футболка детская 3D Puppy, возраст 1-2 года, трикотаж")", ResultType::Return, "Футболка детская 3D Puppy, возраст 1-2 года, трикотаж" }, - { R"("category")", ResultType::Return, "category" }, - { R"("/Летние товары/Летний текстиль/")", ResultType::Return, "/Летние товары/Летний текстиль/" }, - { R"("variant")", ResultType::Return, "variant" }, - { R"("")", ResultType::Return, "" }, - { R"("price")", ResultType::Return, "price" }, - { R"("390.00")", ResultType::Return, "390.00" }, - { R"("list")", ResultType::Return, "list" }, - { R"("/retailrocket/")", ResultType::Return, "/retailrocket/" }, - { R"("position")", ResultType::Return, "position" }, - { R"("brand")", ResultType::Return, "brand" }, - { R"("/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/")", ResultType::Return, "/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/" }, - { R"("id")", ResultType::Return, "id" }, - { R"("911484")", ResultType::Return, "911484" }, - { R"("name")", ResultType::Return, "name" }, - { R"("Футболка детская 3D Little bears, возраст 1-2 года, трикотаж")", ResultType::Return, "Футболка детская 3D Little bears, возраст 1-2 года, трикотаж" }, - { R"("category")", ResultType::Return, "category" }, - { R"("/Летние товары/Летний текстиль/")", ResultType::Return, "/Летние товары/Летний текстиль/" }, - { R"("variant")", ResultType::Return, "variant" }, - { R"("")", ResultType::Return, "" }, - { R"("price")", ResultType::Return, "price" }, - { R"("390.00")", ResultType::Return, "390.00" }, - { R"("list")", ResultType::Return, "list" }, - { R"("/retailrocket/")", ResultType::Return, "/retailrocket/" }, - { R"("position")", ResultType::Return, "position" }, - { R"("brand")", ResultType::Return, "brand" }, - { R"("/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/")", ResultType::Return, "/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/" }, - { R"("id")", ResultType::Return, "id" }, - { R"("911489")", ResultType::Return, "911489" }, - { R"("name")", ResultType::Return, "name" }, - { R"("Футболка детская 3D Dolphin, возраст 2-4 года, трикотаж")", ResultType::Return, "Футболка детская 3D Dolphin, возраст 2-4 года, трикотаж" }, - { R"("category")", ResultType::Return, "category" }, - { R"("/Летние товары/Летний текстиль/")", ResultType::Return, "/Летние товары/Летний текстиль/" }, - { R"("variant")", 
ResultType::Return, "variant" }, - { R"("")", ResultType::Return, "" }, - { R"("price")", ResultType::Return, "price" }, - { R"("390.00")", ResultType::Return, "390.00" }, - { R"("list")", ResultType::Return, "list" }, - { R"("/retailrocket/")", ResultType::Return, "/retailrocket/" }, - { R"("position")", ResultType::Return, "position" }, - { R"("brand")", ResultType::Return, "brand" }, - { R"("/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/")", ResultType::Return, "/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/" }, - { R"("id")", ResultType::Return, "id" }, - { R"("911496")", ResultType::Return, "911496" }, - { R"("name")", ResultType::Return, "name" }, - { R"("Футболка детская 3D Pretty, возраст 1-2 года, трикотаж")", ResultType::Return, "Футболка детская 3D Pretty, возраст 1-2 года, трикотаж" }, - { R"("category")", ResultType::Return, "category" }, - { R"("/Летние товары/Летний текстиль/")", ResultType::Return, "/Летние товары/Летний текстиль/" }, - { R"("variant")", ResultType::Return, "variant" }, - { R"("")", ResultType::Return, "" }, - { R"("price")", ResultType::Return, "price" }, - { R"("390.00")", ResultType::Return, "390.00" }, - { R"("list")", ResultType::Return, "list" }, - { R"("/retailrocket/")", ResultType::Return, "/retailrocket/" }, - { R"("position")", ResultType::Return, "position" }, - { R"("brand")", ResultType::Return, "brand" }, - { R"("/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/")", ResultType::Return, "/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/" }, - { R"("id")", ResultType::Return, "id" }, - { R"("911504")", ResultType::Return, "911504" }, - { R"("name")", ResultType::Return, "name" }, - { R"("Футболка детская 3D Fairytale, возраст 1-2 года, трикотаж")", ResultType::Return, "Футболка детская 3D Fairytale, возраст 1-2 года, трикотаж" }, - { R"("category")", ResultType::Return, "category" }, - { R"("/Летние товары/Летний текстиль/")", ResultType::Return, "/Летние товары/Летний текстиль/" }, - { R"("variant")", ResultType::Return, "variant" }, - { R"("")", ResultType::Return, "" }, - { R"("price")", ResultType::Return, "price" }, - { R"("390.00")", ResultType::Return, "390.00" }, - { R"("list")", ResultType::Return, "list" }, - { R"("/retailrocket/")", ResultType::Return, "/retailrocket/" }, - { R"("position")", ResultType::Return, "position" }, - { R"("brand")", ResultType::Return, "brand" }, - { R"("/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/")", ResultType::Return, "/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/" }, - { R"("id")", ResultType::Return, "id" }, - { R"("911508")", ResultType::Return, "911508" }, - { R"("name")", ResultType::Return, "name" }, - { R"("Футболка детская 3D Kittens, возраст 1-2 года, трикотаж")", ResultType::Return, "Футболка детская 3D Kittens, возраст 1-2 года, трикотаж" }, - { R"("category")", ResultType::Return, "category" }, - { R"("/Летние товары/Летний текстиль/")", ResultType::Return, "/Летние товары/Летний текстиль/" }, - { R"("variant")", ResultType::Return, "variant" }, - { R"("")", ResultType::Return, "" }, - { R"("price")", ResultType::Return, "price" }, - { R"("390.00")", ResultType::Return, "390.00" }, - { R"("list")", ResultType::Return, "list" }, - { R"("/retailrocket/")", ResultType::Return, "/retailrocket/" }, - { R"("position")", ResultType::Return, "position" }, - { R"("brand")", ResultType::Return, "brand" }, - { R"("/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/")", 
ResultType::Return, "/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/" }, - { R"("id")", ResultType::Return, "id" }, - { R"("911512")", ResultType::Return, "911512" }, - { R"("name")", ResultType::Return, "name" }, - { R"("Футболка детская 3D Sunshine, возраст 1-2 года, трикотаж")", ResultType::Return, "Футболка детская 3D Sunshine, возраст 1-2 года, трикотаж" }, - { R"("category")", ResultType::Return, "category" }, - { R"("/Летние товары/Летний текстиль/")", ResultType::Return, "/Летние товары/Летний текстиль/" }, - { R"("variant")", ResultType::Return, "variant" }, - { R"("")", ResultType::Return, "" }, - { R"("price")", ResultType::Return, "price" }, - { R"("390.00")", ResultType::Return, "390.00" }, - { R"("list")", ResultType::Return, "list" }, - { R"("/retailrocket/")", ResultType::Return, "/retailrocket/" }, - { R"("position")", ResultType::Return, "position" }, - { R"("brand")", ResultType::Return, "brand" }, - { R"("/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/")", ResultType::Return, "/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/" }, - { R"("id")", ResultType::Return, "id" }, - { R"("911516")", ResultType::Return, "911516" }, - { R"("name")", ResultType::Return, "name" }, - { R"("Футболка детская 3D Dog in bag, возраст 1-2 года, трикотаж")", ResultType::Return, "Футболка детская 3D Dog in bag, возраст 1-2 года, трикотаж" }, - { R"("category")", ResultType::Return, "category" }, - { R"("/Летние товары/Летний текстиль/")", ResultType::Return, "/Летние товары/Летний текстиль/" }, - { R"("variant")", ResultType::Return, "variant" }, - { R"("")", ResultType::Return, "" }, - { R"("price")", ResultType::Return, "price" }, - { R"("390.00")", ResultType::Return, "390.00" }, - { R"("list")", ResultType::Return, "list" }, - { R"("/retailrocket/")", ResultType::Return, "/retailrocket/" }, - { R"("position")", ResultType::Return, "position" }, - { R"("brand")", ResultType::Return, "brand" }, - { R"("/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/")", ResultType::Return, "/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/" }, - { R"("id")", ResultType::Return, "id" }, - { R"("911520")", ResultType::Return, "911520" }, - { R"("name")", ResultType::Return, "name" }, - { R"("Футболка детская 3D Cute puppy, возраст 1-2 года, трикотаж")", ResultType::Return, "Футболка детская 3D Cute puppy, возраст 1-2 года, трикотаж" }, - { R"("category")", ResultType::Return, "category" }, - { R"("/Летние товары/Летний текстиль/")", ResultType::Return, "/Летние товары/Летний текстиль/" }, - { R"("variant")", ResultType::Return, "variant" }, - { R"("")", ResultType::Return, "" }, - { R"("price")", ResultType::Return, "price" }, - { R"("390.00")", ResultType::Return, "390.00" }, - { R"("list")", ResultType::Return, "list" }, - { R"("/retailrocket/")", ResultType::Return, "/retailrocket/" }, - { R"("position")", ResultType::Return, "position" }, - { R"("brand")", ResultType::Return, "brand" }, - { R"("/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/")", ResultType::Return, "/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/" }, - { R"("id")", ResultType::Return, "id" }, - { R"("911524")", ResultType::Return, "911524" }, - { R"("name")", ResultType::Return, "name" }, - { R"("Футболка детская 3D Rabbit, возраст 1-2 года, трикотаж")", ResultType::Return, "Футболка детская 3D Rabbit, возраст 1-2 года, трикотаж" }, - { R"("category")", ResultType::Return, "category" }, - { R"("/Летние товары/Летний 
текстиль/")", ResultType::Return, "/Летние товары/Летний текстиль/" }, - { R"("variant")", ResultType::Return, "variant" }, - { R"("")", ResultType::Return, "" }, - { R"("price")", ResultType::Return, "price" }, - { R"("390.00")", ResultType::Return, "390.00" }, - { R"("list")", ResultType::Return, "list" }, - { R"("/retailrocket/")", ResultType::Return, "/retailrocket/" }, - { R"("position")", ResultType::Return, "position" }, - { R"("brand")", ResultType::Return, "brand" }, - { R"("/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/")", ResultType::Return, "/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/" }, - { R"("id")", ResultType::Return, "id" }, - { R"("911528")", ResultType::Return, "911528" }, - { R"("name")", ResultType::Return, "name" }, - { R"("Футболка детская 3D Turtle, возраст 1-2 года, трикотаж")", ResultType::Return, "Футболка детская 3D Turtle, возраст 1-2 года, трикотаж" }, - { R"("category")", ResultType::Return, "category" }, - { R"("/Летние товары/Летний текстиль/")", ResultType::Return, "/Летние товары/Летний текстиль/" }, - { R"("variant")", ResultType::Return, "variant" }, - { R"("")", ResultType::Return, "" }, - { R"("price")", ResultType::Return, "price" }, - { R"("390.00")", ResultType::Return, "390.00" }, - { R"("list")", ResultType::Return, "list" }, - { R"("/retailrocket/")", ResultType::Return, "/retailrocket/" }, - { R"("position")", ResultType::Return, "position" }, - { R"("brand")", ResultType::Return, "brand" }, - { R"("/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/")", ResultType::Return, "/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/" }, - { R"("id")", ResultType::Return, "id" }, - { R"("888616")", ResultType::Return, "888616" }, - { R"("name")", ResultType::Return, "name" }, - { "\"3Д Футболка мужская \\\"Collorista\\\" Светлое завтра р-р XL(52-54), 100% хлопок, трикотаж\"", ResultType::Return, "3Д Футболка мужская \"Collorista\" Светлое завтра р-р XL(52-54), 100% хлопок, трикотаж" }, - { R"("category")", ResultType::Return, "category" }, - { R"("/Одежда и обувь/Мужская одежда/Футболки/")", ResultType::Return, "/Одежда и обувь/Мужская одежда/Футболки/" }, - { R"("variant")", ResultType::Return, "variant" }, - { R"("")", ResultType::Return, "" }, - { R"("price")", ResultType::Return, "price" }, - { R"("406.60")", ResultType::Return, "406.60" }, - { R"("list")", ResultType::Return, "list" }, - { R"("/retailrocket/")", ResultType::Return, "/retailrocket/" }, - { R"("position")", ResultType::Return, "position" }, - { R"("brand")", ResultType::Return, "brand" }, - { R"("/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/")", ResultType::Return, "/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/" }, - { R"("id")", ResultType::Return, "id" }, - { R"("913361")", ResultType::Return, "913361" }, - { R"("name")", ResultType::Return, "name" }, - { R"("3Д Футболка детская World р-р 8-10, 100% хлопок, трикотаж")", ResultType::Return, "3Д Футболка детская World р-р 8-10, 100% хлопок, трикотаж" }, - { R"("category")", ResultType::Return, "category" }, - { R"("/Летние товары/Летний текстиль/")", ResultType::Return, "/Летние товары/Летний текстиль/" }, - { R"("variant")", ResultType::Return, "variant" }, - { R"("")", ResultType::Return, "" }, - { R"("price")", ResultType::Return, "price" }, - { R"("470.00")", ResultType::Return, "470.00" }, - { R"("list")", ResultType::Return, "list" }, - { R"("/retailrocket/")", ResultType::Return, "/retailrocket/" }, - { R"("position")", 
ResultType::Return, "position" }, - { R"("brand")", ResultType::Return, "brand" }, - { R"("/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/")", ResultType::Return, "/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/" }, - { R"("id")", ResultType::Return, "id" }, - { R"("913364")", ResultType::Return, "913364" }, - { R"("name")", ResultType::Return, "name" }, - { R"("3Д Футболка детская Force р-р 8-10, 100% хлопок, трикотаж")", ResultType::Return, "3Д Футболка детская Force р-р 8-10, 100% хлопок, трикотаж" }, - { R"("category")", ResultType::Return, "category" }, - { R"("/Летние товары/Летний текстиль/")", ResultType::Return, "/Летние товары/Летний текстиль/" }, - { R"("variant")", ResultType::Return, "variant" }, - { R"("")", ResultType::Return, "" }, - { R"("price")", ResultType::Return, "price" }, - { R"("470.00")", ResultType::Return, "470.00" }, - { R"("list")", ResultType::Return, "list" }, - { R"("/retailrocket/")", ResultType::Return, "/retailrocket/" }, - { R"("position")", ResultType::Return, "position" }, - { R"("brand")", ResultType::Return, "brand" }, - { R"("/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/")", ResultType::Return, "/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/" }, - { R"("id")", ResultType::Return, "id" }, - { R"("913367")", ResultType::Return, "913367" }, - { R"("name")", ResultType::Return, "name" }, - { R"("3Д Футболка детская Winter tale р-р 8-10, 100% хлопок, трикотаж")", ResultType::Return, "3Д Футболка детская Winter tale р-р 8-10, 100% хлопок, трикотаж" }, - { R"("category")", ResultType::Return, "category" }, - { R"("/Летние товары/Летний текстиль/")", ResultType::Return, "/Летние товары/Летний текстиль/" }, - { R"("variant")", ResultType::Return, "variant" }, - { R"("")", ResultType::Return, "" }, - { R"("price")", ResultType::Return, "price" }, - { R"("470.00")", ResultType::Return, "470.00" }, - { R"("list")", ResultType::Return, "list" }, - { R"("/retailrocket/")", ResultType::Return, "/retailrocket/" }, - { R"("position")", ResultType::Return, "position" }, - { R"("brand")", ResultType::Return, "brand" }, - { R"("/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/")", ResultType::Return, "/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/" }, - { R"("id")", ResultType::Return, "id" }, - { R"("913385")", ResultType::Return, "913385" }, - { R"("name")", ResultType::Return, "name" }, - { R"("3Д Футболка детская Moonshine р-р 8-10, 100% хлопок, трикотаж")", ResultType::Return, "3Д Футболка детская Moonshine р-р 8-10, 100% хлопок, трикотаж" }, - { R"("category")", ResultType::Return, "category" }, - { R"("/Летние товары/Летний текстиль/")", ResultType::Return, "/Летние товары/Летний текстиль/" }, - { R"("variant")", ResultType::Return, "variant" }, - { R"("")", ResultType::Return, "" }, - { R"("price")", ResultType::Return, "price" }, - { R"("470.00")", ResultType::Return, "470.00" }, - { R"("list")", ResultType::Return, "list" }, - { R"("/retailrocket/")", ResultType::Return, "/retailrocket/" }, - { R"("position")", ResultType::Return, "position" }, - { R"("brand")", ResultType::Return, "brand" }, - { R"("/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/")", ResultType::Return, "/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/" }, - { R"("id")", ResultType::Return, "id" }, - { R"("913391")", ResultType::Return, "913391" }, - { R"("name")", ResultType::Return, "name" }, - { R"("3Д Футболка детская Shaman р-р 8-10, 100% хлопок, 
трикотаж")", ResultType::Return, "3Д Футболка детская Shaman р-р 8-10, 100% хлопок, трикотаж" }, - { R"("category")", ResultType::Return, "category" }, - { R"("/Летние товары/Летний текстиль/")", ResultType::Return, "/Летние товары/Летний текстиль/" }, - { R"("variant")", ResultType::Return, "variant" }, - { R"("")", ResultType::Return, "" }, - { R"("price")", ResultType::Return, "price" }, - { R"("470.00")", ResultType::Return, "470.00" }, - { R"("list")", ResultType::Return, "list" }, - { R"("/retailrocket/")", ResultType::Return, "/retailrocket/" }, - { R"("position")", ResultType::Return, "position" }, - { R"("brand")", ResultType::Return, "brand" }, - { R"("/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/")", ResultType::Return, "/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/" }, - { R"("usertype")", ResultType::Return, "usertype" }, - { R"("visitor")", ResultType::Return, "visitor" }, - { R"("lang")", ResultType::Return, "lang" }, - { R"("ru")", ResultType::Return, "ru" }, - { R"("__ym")", ResultType::Return, "__ym" }, - { R"("ecommerce")", ResultType::Return, "ecommerce" }, - { R"("impressions")", ResultType::Return, "impressions" }, - { R"("experiments")", ResultType::Return, "experiments" }, - { R"("lang")", ResultType::Return, "lang" }, - { R"("ru")", ResultType::Return, "ru" }, - { R"("los_portal")", ResultType::Return, "los_portal" }, - { R"("los_level")", ResultType::Return, "los_level" }, - { R"("none")", ResultType::Return, "none" }, - { R"("experiments")", ResultType::Return, "experiments" }, - { R"("lang")", ResultType::Return, "lang" }, - { R"("ru")", ResultType::Return, "ru" }, - { R"("los_portal")", ResultType::Return, "los_portal" }, - { R"("los_level")", ResultType::Return, "los_level" }, - { R"("none")", ResultType::Return, "none" }, - { R"("experiments")", ResultType::Return, "experiments" }, - { R"("lang")", ResultType::Return, "lang" }, - { R"("ru")", ResultType::Return, "ru" }, - { R"("los_portal")", ResultType::Return, "los_portal" }, - { R"("los_level")", ResultType::Return, "los_level" }, - { R"("none")", ResultType::Return, "none" }, - { R"("experiments")", ResultType::Return, "experiments" }, - { R"("lang")", ResultType::Return, "lang" }, - { R"("ru")", ResultType::Return, "ru" }, - { R"("los_portal")", ResultType::Return, "los_portal" }, - { R"("los_level")", ResultType::Return, "los_level" }, - { R"("none")", ResultType::Return, "none" }, - { R"("experiments")", ResultType::Return, "experiments" }, - { R"("lang")", ResultType::Return, "lang" }, - { R"("ru")", ResultType::Return, "ru" }, - { R"("los_portal")", ResultType::Return, "los_portal" }, - { R"("los_level")", ResultType::Return, "los_level" }, - { R"("none")", ResultType::Return, "none" }, - { R"("__ym")", ResultType::Return, "__ym" }, - { R"("ecommerce")", ResultType::Return, "ecommerce" }, - { R"("currencyCode")", ResultType::Return, "currencyCode" }, - { R"("RUR")", ResultType::Return, "RUR" }, - { R"("impressions")", ResultType::Return, "impressions" }, - { R"("name")", ResultType::Return, "name" }, - { R"("Чайник электрический Mystery MEK-1627, белый")", ResultType::Return, "Чайник электрический Mystery MEK-1627, белый" }, - { R"("brand")", ResultType::Return, "brand" }, - { R"("Mystery")", ResultType::Return, "Mystery" }, - { R"("id")", ResultType::Return, "id" }, - { R"("187180")", ResultType::Return, "187180" }, - { R"("category")", ResultType::Return, "category" }, - { R"("Мелкая бытовая техника/Мелкие кухонные приборы/Чайники электрические/Mystery")", 
ResultType::Return, "Мелкая бытовая техника/Мелкие кухонные приборы/Чайники электрические/Mystery" }, - { R"("variant")", ResultType::Return, "variant" }, - { R"("В наличии")", ResultType::Return, "В наличии" }, - { R"("price")", ResultType::Return, "price" }, - { R"("1630.00")", ResultType::Return, "1630.00" }, - { R"("list")", ResultType::Return, "list" }, - { R"("Карточка")", ResultType::Return, "Карточка" }, - { R"("position")", ResultType::Return, "position" }, - { R"("detail")", ResultType::Return, "detail" }, - { R"("actionField")", ResultType::Return, "actionField" }, - { R"("list")", ResultType::Return, "list" }, - { "\0\"", ResultType::Throw, "JSON: expected \", got \0" }, - { "\"/igrushki/konstruktory\0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/1290414/komplekt-zhenskiy-dzhemper-plusbryuki-m-254-09-malina-plustemno-siniy-\0a", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/Творчество/Рисование/Инструменты и кра\0a", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"Строительство и ремонт/Силовая техника/Зарядные устройства для автомобильных аккумуляторов/Пуско-зарядные устр\xD0\0a", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"Строительство и ремонт/Силовая техника/Зарядные устройств\xD0\0t", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"Строительство и ремонт/Силовая техника/Зарядные устройства для автомобиль\0k", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\0t", ResultType::Throw, "JSON: expected \", got \0" }, - { "\"/Хозтовары/Хранение вещей и организа\xD1\0t", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/Хозтовары/Товары для стир\0a", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"li\0a", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/734859/samolet-radioupravlyaemyy-istrebitel-rabotaet-o\0k", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/kosmetika-i-parfyum/parfyumeriya/mu\0t", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/ko\0\x04", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "", ResultType::Throw, "JSON: begin >= end." }, - { "\"/stroitelstvo-i-remont/stroit\0t", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/stroitelstvo-i-remont/stroitelnyy-instrument/av\0k", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/s\0a", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/Строительство и ремонт/Строительный инструмент/Изм\0e", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/avto/soputstvuy\0l", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/str\0\xD0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"Отвертка 2 в 1 \\\"TUNDRA basic\\\" 5х75 мм (+,-) \0\xFF", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." 
}, - { "\"/stroitelstvo-i-remont/stroitelnyy-instrument/avtoinstrumen\0\0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"Мелкая бытовая техника/Мелки\xD0\0\0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"Пряжа \\\"Бамбук стрейч\\0\0\0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"Карандаш чёрнографитны\xD0\0\xD0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/Творчество/Рукоделие, аппликации/Пряжа и шерсть для \xD0\0l", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/1071547/karandash-chernografitnyy-volshebstvo-nv-kruglyy-d-7-2mm-dl-176mm-plast-tuba/\0e", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"ca\0e", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"ca\0e", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/1165424/chipbord-vyrubnoy-dlya-skrapbukinga-malyshi-mikki-maus-disney-bebi\0t", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/posuda/kuhonnye-prinadlezhnosti-i-i\0d", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/Канцтовары/Ежедневники и блокн\xD0\0\0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/kanctovary/ezhednevniki-i-blok\0a", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"Стакан \xD0\0a", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"Набор бумаги для скрапбукинга \\\"Мои первый годик\\\": Микки Маус, Дисней бэби, 12 листов 29.5 х 29.5 см, 160\0\x80", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"c\0\0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"Органайзер для хранения аксессуаров, \0\0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"quantity\00", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"Сменный блок для тетрадей на кольцах А5, 160 листов клетка, офсет \xE2\x84\0=", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/Сувениры/Ф\xD0\0 ", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"\0\"", ResultType::Return, "\0" }, - { "\"\0\x04", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"va\0\0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"ca\0\0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"В \0\x04", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/letnie-tovary/z\0\x04", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"Посудомоечная машина Ha\0=", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." 
}, - { "\"Крупная бытов\0\0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"Полочная акустическая система Magnat Needl\0\0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"brand\00", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"\0d", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"pos\0 ", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"c\0o", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"var\0\0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"Телевизоры и видеотехника/Всё для домашних кинотеатр\0=", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"Флеш-диск Transcend JetFlash 620 8GB (TS8GJF62\0\0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"Табурет Мег\0\xD0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"variant\0\x04", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"Катал\xD0\0\"", ResultType::Return, "Катал\xD0\0" }, - { "\"К\0\0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"Полочная акустическая система Magnat Needl\0\0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"brand\00", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"\0d", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"pos\0 ", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"c\0o", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"17\0o", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/igrushki/razvivayusc\0 ", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"Ключница \\\"\0 ", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/Игр\xD1\0 ", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/Игрушки/Игрушки для девочек/Игровые модули дл\xD1\0o", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"Крупная бытовая техника/Стиральные машины/С фронт\xD0\0o", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\0 ", ResultType::Throw, "JSON: expected \", got \0" }, - { "\"Светодиодная лента SMD3528, 5 м. IP33, 60LED, зеленый, 4,8W/мет\xD1\0 ", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"Сантехника/Мебель для ванных комнат/Стол\0o", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\0o", ResultType::Throw, "JSON: expected \", got \0" }, - { "\"/igrushki/konstruktory\0 ", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." 
}, - { "\"/posuda/kuhonnye-prinadlezhnosti-i-instrumenty/kuhonnye-pr\0 ", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/1290414/komplekt-zhenskiy-dzhemper-plusbryuki-m-254-09-malina-plustemno-siniy-\0o", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/Творчество/Рисование/Инструменты и кра\0o", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"Строительство и ремонт/Силовая техника/Зарядные устройства для автомобильных аккумуляторов/Пуско-зарядные устр\xD0\0o", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"Строительство и ремонт/Силовая техника/Зарядные устройств\xD0\0 ", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"Строительство и ремонт/Силовая техника/Зарядные устройства для автомобиль\0d", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\0 ", ResultType::Throw, "JSON: expected \", got \0" }, - { "\"/Хозтовары/Хранение вещей и организа\xD1\0 ", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/Хозтовары/Товары для стир\0o", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"li\0o", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/igrushki/igrus\0d", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/734859/samolet-radioupravlyaemyy-istrebitel-rabotaet-o\0 ", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/kosmetika-i-parfyum/parfyumeriya/mu\00", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/ko\0\0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/avto/avtomobilnyy\0\0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/stroitelstvo-i-remont/stroit\00", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/stroitelstvo-i-remont/stroitelnyy-instrument/av\0 ", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/s\0d", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/Строительство и ремонт/Строительный инструмент/Изм\0o", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/avto/soputstvuy\0\"", ResultType::Return, "/avto/soputstvuy\0" }, - { "\"/str\0k", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"Отвертка 2 в 1 \\\"TUNDRA basic\\\" 5х75 мм (+,-) \0\xD0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/stroitelstvo-i-remont/stroitelnyy-instrument/avtoinstrumen\0=", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"Чайник электрический Vitesse\0=", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"Мелкая бытовая техника/Мелки\xD0\0\xD0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." 
}, - { "\"Пряжа \\\"Бамбук стрейч\\0о", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"Карандаш чёрнографитны\xD0\0k", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/Творчество/Рукоделие, аппликации/Пряжа и шерсть для \xD0\0\"", ResultType::Return, "/Творчество/Рукоделие, аппликации/Пряжа и шерсть для \xD0\0" }, - { "\"/1071547/karandash-chernografitnyy-volshebstvo-nv-kruglyy-d-7-2mm-dl-176mm-plast-tuba/\0o", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"ca\0o", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/Подаро\0o", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"Средство для прочис\xD1\0o", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"i\0o", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/p\0\"", ResultType::Return, "/p\0" }, - { "\"/Сувениры/Магниты, н\xD0\0k", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"Дерев\xD0\0=", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/prazdniki/svadba/svadebnaya-c\0\xD0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/Канцт\0d", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/Праздники/То\xD0\0 ", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"v\0 ", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/Косметика \xD0\0d", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/Спорт и отдых/Настольные игры/Покер, руле\xD1\0\xD0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"categ\0=", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/retailr\0k", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/retailrocket\0k", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"Ежедневник недат А5 140л кл,ляссе,обл пв\0=", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/432809/ezhednevnik-organayzer-sredniy-s-remeshkom-na-knopke-v-oblozhke-kalkulyator-kalendar-do-\0\xD0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/1165424/chipbord-vyrubnoy-dlya-skrapbukinga-malyshi-mikki-maus-disney-bebi\0d", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/posuda/kuhonnye-prinadlezhnosti-i-i\0 ", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/Канцтовары/Ежедневники и блокн\xD0\0o", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"/kanctovary/ezhednevniki-i-blok\00", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"Стакан \xD0\0\0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." 
}, - { "\"Набор бумаги для скрапбукинга \\\"Мои первый годик\\\": Микки Маус, Дисней бэби, 12 листов 29.5 х 29.5 см, 160\0\0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." }, - { "\"c\0\"", ResultType::Return, "c\0" }, - }; - - for (auto i : boost::irange(0, 1/*00000*/)) - { - static_cast(i); - - for (auto & r : test_data) - { - try - { - JSON j(r.input, r.input + strlen(r.input)); - - ASSERT_EQ(j.getString(), r.result); - ASSERT_TRUE(r.result_type == ResultType::Return); - } - catch (JSONException & e) - { - ASSERT_TRUE(r.result_type == ResultType::Throw); - ASSERT_EQ(e.message(), r.result); - } - } - } -} diff --git a/base/common/time.h b/base/common/time.h index 1bf588b7cb3..d0b8e94a9a5 100644 --- a/base/common/time.h +++ b/base/common/time.h @@ -2,7 +2,7 @@ #include -#if defined (OS_DARWIN) +#if defined (OS_DARWIN) || defined (OS_SUNOS) # define CLOCK_MONOTONIC_COARSE CLOCK_MONOTONIC #elif defined (OS_FREEBSD) # define CLOCK_MONOTONIC_COARSE CLOCK_MONOTONIC_FAST diff --git a/base/common/types.h b/base/common/types.h index bd5c28fe73b..e178653f7c6 100644 --- a/base/common/types.h +++ b/base/common/types.h @@ -13,7 +13,12 @@ using char8_t = unsigned char; #endif /// This is needed for more strict aliasing. https://godbolt.org/z/xpJBSb https://stackoverflow.com/a/57453713 +#if !defined(PVS_STUDIO) /// But PVS-Studio does not treat it correctly. using UInt8 = char8_t; +#else +using UInt8 = uint8_t; +#endif + using UInt16 = uint16_t; using UInt32 = uint32_t; using UInt64 = uint64_t; diff --git a/base/common/wide_integer_impl.h b/base/common/wide_integer_impl.h index 5b981326e25..456c10a22e4 100644 --- a/base/common/wide_integer_impl.h +++ b/base/common/wide_integer_impl.h @@ -271,9 +271,13 @@ struct integer::_impl /// As to_Integral does a static_cast to int64_t, it may result in UB. /// The necessary check here is that long double has enough significant (mantissa) bits to store the /// int64_t max value precisely. + + //TODO Be compatible with Apple aarch64 +#if not (defined(__APPLE__) && defined(__aarch64__)) static_assert(LDBL_MANT_DIG >= 64, "On your system long double has less than 64 precision bits," "which may result in UB when initializing double from int64_t"); +#endif if ((rhs > 0 && rhs < static_cast(max_int)) || (rhs < 0 && rhs > static_cast(min_int))) { diff --git a/base/common/ya.make.in b/base/common/ya.make.in index b5c2bbc1717..3deb36a2c71 100644 --- a/base/common/ya.make.in +++ b/base/common/ya.make.in @@ -35,7 +35,7 @@ PEERDIR( CFLAGS(-g0) SRCS( - + ) END() diff --git a/base/daemon/CMakeLists.txt b/base/daemon/CMakeLists.txt index 26d59a57e7f..6ef87db6a61 100644 --- a/base/daemon/CMakeLists.txt +++ b/base/daemon/CMakeLists.txt @@ -5,6 +5,11 @@ add_library (daemon ) target_include_directories (daemon PUBLIC ..) 
+ +if (OS_DARWIN AND NOT MAKE_STATIC_LIBRARIES) + target_link_libraries (daemon PUBLIC -Wl,-undefined,dynamic_lookup) +endif() + target_link_libraries (daemon PUBLIC loggers PRIVATE clickhouse_common_io clickhouse_common_config common ${EXECINFO_LIBRARIES}) if (USE_SENTRY) diff --git a/base/daemon/SentryWriter.cpp b/base/daemon/SentryWriter.cpp index 29430b65983..1b7d0064b99 100644 --- a/base/daemon/SentryWriter.cpp +++ b/base/daemon/SentryWriter.cpp @@ -9,6 +9,7 @@ #include #include +#include #include #include #include diff --git a/base/ext/scope_guard_safe.h b/base/ext/scope_guard_safe.h new file mode 100644 index 00000000000..55140213572 --- /dev/null +++ b/base/ext/scope_guard_safe.h @@ -0,0 +1,68 @@ +#pragma once + +#include <ext/scope_guard.h> +#include <common/logger_useful.h> +#include <Common/MemoryTracker.h> + +/// Same as SCOPE_EXIT() but blocks the MEMORY_LIMIT_EXCEEDED errors. +/// +/// A typical example of SCOPE_EXIT_MEMORY() usage is when the code under it may do +/// some tiny allocations that may fail under high memory pressure and/or a low +/// max_memory_usage (and related limits). +/// +/// NOTE: it should be used with caution. +#define SCOPE_EXIT_MEMORY(...) SCOPE_EXIT( \ + MemoryTracker::LockExceptionInThread \ + lock_memory_tracker(VariableContext::Global); \ + __VA_ARGS__; \ +) + +/// Same as SCOPE_EXIT() but wraps the body in try/catch and logs any exception via tryLogCurrentException. +/// +/// SCOPE_EXIT_SAFE() should be used in case an exception thrown by the code +/// under SCOPE_EXIT() is not "that fatal" and an error message in the log is enough. +/// +/// A good example is calling CurrentThread::detachQueryIfNotDetached() (see the usage sketch below). +/// +/// An anti-pattern is calling WriteBuffer::finalize() under SCOPE_EXIT_SAFE() +/// (since finalize() can do a final write, and it is better to fail abnormally +/// instead of ignoring a write error). +/// +/// NOTE: it should be used with double caution. +#define SCOPE_EXIT_SAFE(...) SCOPE_EXIT( \ + try \ + { \ + __VA_ARGS__; \ + } \ + catch (...) \ + { \ + tryLogCurrentException(__PRETTY_FUNCTION__); \ + } \ +) + +/// Same as SCOPE_EXIT() but: +/// - blocks the MEMORY_LIMIT_EXCEEDED errors, +/// - wraps the body in try/catch and logs any exception. +/// +/// SCOPE_EXIT_MEMORY_SAFE() can be used when the error can be ignored, and in +/// addition to SCOPE_EXIT_SAFE() it will also lock MEMORY_LIMIT_EXCEEDED to +/// avoid such exceptions. +/// +/// It exists as a separate helper, since you do not always need to lock +/// MEMORY_LIMIT_EXCEEDED (there are cases when the code under SCOPE_EXIT does +/// not do any allocations, while LockExceptionInThread increments an atomic +/// variable). +/// +/// NOTE: it should be used with triple caution. +#define SCOPE_EXIT_MEMORY_SAFE(...) SCOPE_EXIT( \ + try \ + { \ + MemoryTracker::LockExceptionInThread \ + lock_memory_tracker(VariableContext::Global); \ + __VA_ARGS__; \ + } \ + catch (...)
\ + { \ + tryLogCurrentException(__PRETTY_FUNCTION__); \ + } \ +) diff --git a/base/glibc-compatibility/CMakeLists.txt b/base/glibc-compatibility/CMakeLists.txt index 684c6162941..e785e2ab2ce 100644 --- a/base/glibc-compatibility/CMakeLists.txt +++ b/base/glibc-compatibility/CMakeLists.txt @@ -1,5 +1,8 @@ if (GLIBC_COMPATIBILITY) - set (ENABLE_FASTMEMCPY ON) + add_subdirectory(memcpy) + if(TARGET memcpy) + set(MEMCPY_LIBRARY memcpy) + endif() enable_language(ASM) include(CheckIncludeFile) @@ -27,13 +30,6 @@ if (GLIBC_COMPATIBILITY) list(APPEND glibc_compatibility_sources musl/getentropy.c) endif() - if (NOT ARCH_ARM) - # clickhouse_memcpy don't support ARCH_ARM, see https://github.com/ClickHouse/ClickHouse/issues/18951 - add_library (clickhouse_memcpy OBJECT - ${ClickHouse_SOURCE_DIR}/contrib/FastMemcpy/memcpy_wrapper.c - ) - endif() - # Need to omit frame pointers to match the performance of glibc set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -fomit-frame-pointer") @@ -51,15 +47,16 @@ if (GLIBC_COMPATIBILITY) target_compile_options(glibc-compatibility PRIVATE -fPIC) endif () - target_link_libraries(global-libs INTERFACE glibc-compatibility) + target_link_libraries(global-libs INTERFACE glibc-compatibility ${MEMCPY_LIBRARY}) install( - TARGETS glibc-compatibility + TARGETS glibc-compatibility ${MEMCPY_LIBRARY} EXPORT global ARCHIVE DESTINATION lib ) message (STATUS "Some symbols from glibc will be replaced for compatibility") + elseif (YANDEX_OFFICIAL_BUILD) message (WARNING "Option GLIBC_COMPATIBILITY must be turned on for production builds.") endif () diff --git a/base/glibc-compatibility/memcpy/CMakeLists.txt b/base/glibc-compatibility/memcpy/CMakeLists.txt new file mode 100644 index 00000000000..133995d9b96 --- /dev/null +++ b/base/glibc-compatibility/memcpy/CMakeLists.txt @@ -0,0 +1,8 @@ +if (ARCH_AMD64) + add_library(memcpy STATIC memcpy.cpp) + + # We allow to include memcpy.h from user code for better inlining. + target_include_directories(memcpy PUBLIC $) + + target_compile_options(memcpy PRIVATE -fno-builtin-memcpy) +endif () diff --git a/base/glibc-compatibility/memcpy/memcpy.cpp b/base/glibc-compatibility/memcpy/memcpy.cpp new file mode 100644 index 00000000000..ec43a2c3649 --- /dev/null +++ b/base/glibc-compatibility/memcpy/memcpy.cpp @@ -0,0 +1,6 @@ +#include "memcpy.h" + +extern "C" void * memcpy(void * __restrict dst, const void * __restrict src, size_t size) +{ + return inline_memcpy(dst, src, size); +} diff --git a/base/glibc-compatibility/memcpy/memcpy.h b/base/glibc-compatibility/memcpy/memcpy.h new file mode 100644 index 00000000000..211d144cecb --- /dev/null +++ b/base/glibc-compatibility/memcpy/memcpy.h @@ -0,0 +1,217 @@ +#include + +#include + + +/** Custom memcpy implementation for ClickHouse. + * It has the following benefits over using glibc's implementation: + * 1. Avoiding dependency on specific version of glibc's symbol, like memcpy@@GLIBC_2.14 for portability. + * 2. Avoiding indirect call via PLT due to shared linking, that can be less efficient. + * 3. It's possible to include this header and call inline_memcpy directly for better inlining or interprocedural analysis. + * 4. Better results on our performance tests on current CPUs: up to 25% on some queries and up to 0.7%..1% in average across all queries. + * + * Writing our own memcpy is extremely difficult for the following reasons: + * 1. The optimal variant depends on the specific CPU model. + * 2. The optimal variant depends on the distribution of size arguments. + * 3. 
diff --git a/base/glibc-compatibility/CMakeLists.txt b/base/glibc-compatibility/CMakeLists.txt index 684c6162941..e785e2ab2ce 100644 --- a/base/glibc-compatibility/CMakeLists.txt +++ b/base/glibc-compatibility/CMakeLists.txt @@ -1,5 +1,8 @@ if (GLIBC_COMPATIBILITY) - set (ENABLE_FASTMEMCPY ON) + add_subdirectory(memcpy) + if(TARGET memcpy) + set(MEMCPY_LIBRARY memcpy) + endif() enable_language(ASM) include(CheckIncludeFile) @@ -27,13 +30,6 @@ if (GLIBC_COMPATIBILITY) list(APPEND glibc_compatibility_sources musl/getentropy.c) endif() - if (NOT ARCH_ARM) - # clickhouse_memcpy don't support ARCH_ARM, see https://github.com/ClickHouse/ClickHouse/issues/18951 - add_library (clickhouse_memcpy OBJECT - ${ClickHouse_SOURCE_DIR}/contrib/FastMemcpy/memcpy_wrapper.c - ) - endif() - # Need to omit frame pointers to match the performance of glibc set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -fomit-frame-pointer") @@ -51,15 +47,16 @@ if (GLIBC_COMPATIBILITY) target_compile_options(glibc-compatibility PRIVATE -fPIC) endif () - target_link_libraries(global-libs INTERFACE glibc-compatibility) + target_link_libraries(global-libs INTERFACE glibc-compatibility ${MEMCPY_LIBRARY}) install( - TARGETS glibc-compatibility + TARGETS glibc-compatibility ${MEMCPY_LIBRARY} EXPORT global ARCHIVE DESTINATION lib ) message (STATUS "Some symbols from glibc will be replaced for compatibility") + elseif (YANDEX_OFFICIAL_BUILD) message (WARNING "Option GLIBC_COMPATIBILITY must be turned on for production builds.") endif () diff --git a/base/glibc-compatibility/memcpy/CMakeLists.txt b/base/glibc-compatibility/memcpy/CMakeLists.txt new file mode 100644 index 00000000000..133995d9b96 --- /dev/null +++ b/base/glibc-compatibility/memcpy/CMakeLists.txt @@ -0,0 +1,8 @@ +if (ARCH_AMD64) + add_library(memcpy STATIC memcpy.cpp) + + # We allow including memcpy.h from user code for better inlining. + target_include_directories(memcpy PUBLIC $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}>) + + target_compile_options(memcpy PRIVATE -fno-builtin-memcpy) +endif () diff --git a/base/glibc-compatibility/memcpy/memcpy.cpp b/base/glibc-compatibility/memcpy/memcpy.cpp new file mode 100644 index 00000000000..ec43a2c3649 --- /dev/null +++ b/base/glibc-compatibility/memcpy/memcpy.cpp @@ -0,0 +1,6 @@ +#include "memcpy.h" + +extern "C" void * memcpy(void * __restrict dst, const void * __restrict src, size_t size) +{ + return inline_memcpy(dst, src, size); +} diff --git a/base/glibc-compatibility/memcpy/memcpy.h b/base/glibc-compatibility/memcpy/memcpy.h new file mode 100644 index 00000000000..211d144cecb --- /dev/null +++ b/base/glibc-compatibility/memcpy/memcpy.h @@ -0,0 +1,217 @@ +#include <stddef.h> + +#include <emmintrin.h> + + +/** Custom memcpy implementation for ClickHouse. + * It has the following benefits over using glibc's implementation: + * 1. Avoiding dependency on a specific version of a glibc symbol, like memcpy@@GLIBC_2.14, for portability. + * 2. Avoiding an indirect call via the PLT due to shared linking, which can be less efficient. + * 3. It's possible to include this header and call inline_memcpy directly for better inlining or interprocedural analysis. + * 4. Better results on our performance tests on current CPUs: up to 25% on some queries and up to 0.7%..1% on average across all queries. + * + * Writing our own memcpy is extremely difficult for the following reasons: + * 1. The optimal variant depends on the specific CPU model. + * 2. The optimal variant depends on the distribution of size arguments. + * 3. It depends on the number of threads copying data concurrently. + * 4. It also depends on how the calling code is using the copied data and how the different memcpy calls are related to each other. + * The vast range of scenarios makes proper testing especially difficult. + * When writing our own memcpy there is a risk of overoptimizing it + * on non-representative microbenchmarks while making real-world use cases actually worse. + * + * Most of the benchmarks for memcpy on the internet are wrong. + * + * Let's look at the details: + * + * For small sizes, the order of branches in the code is important. + * There are variants with a specific order of branches (like here or in glibc) + * or with a jump table (in asm code, see the example from Cosmopolitan libc: + * https://github.com/jart/cosmopolitan/blob/de09bec215675e9b0beb722df89c6f794da74f3f/libc/nexgen32e/memcpy.S#L61) + * or with a Duff device in C (see https://github.com/skywind3000/FastMemcpy/) + * + * It's also important how to copy uneven sizes. + * Almost every implementation, including this one, is using two overlapping movs. + * + * It is important to disable -ftree-loop-distribute-patterns when compiling the memcpy implementation, + * otherwise the compiler can replace internal loops with a call to memcpy, which will lead to infinite recursion. + * + * For larger sizes it's important to choose the instructions used: + * - SSE or AVX or AVX-512; + * - rep movsb; + * Performance will depend on the size threshold, on the CPU model, and on the "erms" flag + * ("Enhanced Rep MovS" - it indicates that the performance of "rep movsb" is decent for large sizes) + * https://stackoverflow.com/questions/43343231/enhanced-rep-movsb-for-memcpy + * + * Using AVX-512 can be bad due to throttling. + * Using AVX can be bad if most code is using SSE, due to the switching penalty + * (it also depends on the usage of the "vzeroupper" instruction). + * But in some cases AVX gives a win. + * + * It also depends on how many times the loop will be unrolled. + * We are unrolling the loop 8 times (by the number of available registers), but it is not always the best. + * + * It also depends on the usage of aligned or unaligned loads/stores. + * We are using unaligned loads and aligned stores. + * + * It also depends on the usage of prefetch instructions. It makes sense on some Intel CPUs but can slow down performance on AMD. + * Setting up the correct offset for prefetching is non-obvious. + * + * Non-temporal (cache bypassing) stores can be used for very large sizes (more than half of the L3 cache). + * But the exact threshold is unclear - when doing memcpy from multiple threads the optimal threshold can be lower, + * because the L3 cache is shared (and the L2 cache is partially shared). + * + * A very large memcpy size typically indicates suboptimal (not cache friendly) algorithms in the code or unrealistic scenarios, + * so we don't pay attention to using non-temporal stores. + * + * On recent Intel CPUs, the presence of "erms" makes "rep movsb" the most beneficial, + * even compared to non-temporal aligned unrolled stores, even with the widest registers. + * + * memcpy can be written in asm, C or C++. The latter can also use inline asm. + * The asm implementation can be better for making sure that the compiler won't make the code worse, + * for ensuring the order of branches, the code layout, and the usage of all required registers. + * But if it is located in a separate translation unit, inlining will not be possible + * (inline asm can be used to overcome this limitation).
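 + * + * As an aside (an illustrative sketch, not a quote from this file): for an uneven size in [8, 16], + * the "two overlapping movs" trick is simply + * __builtin_memcpy(dst + size - 8, src + size - 8, 8); /// tail [size - 8, size) + * __builtin_memcpy(dst, src, 8); /// head [0, 8); the stores overlap when size < 16 and rewrite the same bytes + * which is exactly what the 8..16 byte branch of inline_memcpy below does.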
+ * Sometimes C or C++ code can be further optimized by the compiler. + * For example, clang is capable of replacing SSE intrinsics with AVX code if -mavx is used. + * + * Please note that the compiler can replace plain code with memcpy and vice versa. + * - memcpy with a compile-time known small size is replaced with simple instructions without a call to memcpy; + * it is controlled by -fbuiltin-memcpy and can be manually ensured by calling __builtin_memcpy. + * This is often used to implement unaligned load/store without undefined behaviour in C++. + * - a loop copying bytes can be recognized and replaced by a call to memcpy; + * it is controlled by -ftree-loop-distribute-patterns. + * - also note that a loop copying bytes can be unrolled, peeled and vectorized, which will give you + * inline code somewhat similar to a decent implementation of memcpy. + * + * This description is up to date as of Mar 2021. + * + * How to test the memcpy implementation for performance: + * 1. Test on real production workload. + * 2. For a synthetic test, see utils/memcpy-bench, but make sure you do your best to exhaust the wide range of scenarios. + * + * TODO: Add self-tuning memcpy with a Bayesian bandits algorithm for large sizes. + * See https://habr.com/en/company/yandex/blog/457612/ + */ + + +static inline void * inline_memcpy(void * __restrict dst_, const void * __restrict src_, size_t size) +{ + /// We will use pointer arithmetic, so a char pointer will be used. + /// Note that __restrict makes sense (otherwise the compiler will reload data from memory + /// instead of using the value in registers due to possible aliasing). + char * __restrict dst = reinterpret_cast<char *>(dst_); + const char * __restrict src = reinterpret_cast<const char *>(src_); + + /// Standard memcpy returns the original value of dst. It is rarely used but we have to do it. + /// If you use memcpy with small but non-constant sizes, you can call inline_memcpy directly + /// for inlining and removing this single instruction. + void * ret = dst; + +tail: + /// Small sizes and tails after the loop for large sizes. + /// The order of branches is important but in fact the optimal order depends on the distribution of sizes in your application. + /// This order of branches is from the disassembly of glibc's code. + /// We copy chunks of possibly uneven size with two overlapping movs. + /// Example: to copy 5 bytes [0, 1, 2, 3, 4] we will copy tail [1, 2, 3, 4] first and then head [0, 1, 2, 3]. + if (size <= 16) + { + if (size >= 8) + { + /// Chunks of 8..16 bytes. + __builtin_memcpy(dst + size - 8, src + size - 8, 8); + __builtin_memcpy(dst, src, 8); + } + else if (size >= 4) + { + /// Chunks of 4..7 bytes. + __builtin_memcpy(dst + size - 4, src + size - 4, 4); + __builtin_memcpy(dst, src, 4); + } + else if (size >= 2) + { + /// Chunks of 2..3 bytes. + __builtin_memcpy(dst + size - 2, src + size - 2, 2); + __builtin_memcpy(dst, src, 2); + } + else if (size >= 1) + { + /// A single byte. + *dst = *src; + } + /// No bytes remaining. + } + else + { + /// Medium and large sizes. + if (size <= 128) + { + /// Medium size, not enough for full loop unrolling. + + /// We will copy the last 16 bytes. + _mm_storeu_si128(reinterpret_cast<__m128i *>(dst + size - 16), _mm_loadu_si128(reinterpret_cast<const __m128i *>(src + size - 16))); + + /// Then we will copy every 16 bytes from the beginning in a loop. + /// The last loop iteration will possibly overwrite some part of the already copied last 16 bytes. + /// This is Ok, similar to the code for small sizes above.
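 + /// E.g. for size == 40: the store above writes bytes [24, 40), and the loop below + /// writes [0, 16) and [16, 32); the overlap [24, 32) is written twice with identical data.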
+ while (size > 16) + { + _mm_storeu_si128(reinterpret_cast<__m128i *>(dst), _mm_loadu_si128(reinterpret_cast<const __m128i *>(src))); + dst += 16; + src += 16; + size -= 16; + } + } + else + { + /// Large size with a fully unrolled loop. + + /// Align the destination to a 16 byte boundary. + size_t padding = (16 - (reinterpret_cast<size_t>(dst) & 15)) & 15; + + /// If not aligned - we will copy the first 16 bytes with unaligned stores. + if (padding > 0) + { + __m128i head = _mm_loadu_si128(reinterpret_cast<const __m128i *>(src)); + _mm_storeu_si128(reinterpret_cast<__m128i*>(dst), head); + dst += padding; + src += padding; + size -= padding; + } + + /// Aligned unrolled copy. We will use half of the available SSE registers. + /// It's not possible to have both src and dst aligned. + /// So, we will use aligned stores and unaligned loads. + __m128i c0, c1, c2, c3, c4, c5, c6, c7; + + while (size >= 128) + { + c0 = _mm_loadu_si128(reinterpret_cast<const __m128i *>(src) + 0); + c1 = _mm_loadu_si128(reinterpret_cast<const __m128i *>(src) + 1); + c2 = _mm_loadu_si128(reinterpret_cast<const __m128i *>(src) + 2); + c3 = _mm_loadu_si128(reinterpret_cast<const __m128i *>(src) + 3); + c4 = _mm_loadu_si128(reinterpret_cast<const __m128i *>(src) + 4); + c5 = _mm_loadu_si128(reinterpret_cast<const __m128i *>(src) + 5); + c6 = _mm_loadu_si128(reinterpret_cast<const __m128i *>(src) + 6); + c7 = _mm_loadu_si128(reinterpret_cast<const __m128i *>(src) + 7); + src += 128; + _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 0), c0); + _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 1), c1); + _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 2), c2); + _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 3), c3); + _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 4), c4); + _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 5), c5); + _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 6), c6); + _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 7), c7); + dst += 128; + + size -= 128; + } + + /// The last remaining 0..127 bytes will be processed as usual. + goto tail; + } + } + + return ret; +} +
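As the header's comment notes, callers may include memcpy.h and call inline_memcpy directly for better inlining. A minimal sketch of such a caller (illustrative only; copyRow is a hypothetical function):

#include <memcpy.h> /// made available through the memcpy target's PUBLIC include directory

static void copyRow(char * dst, const char * src, size_t n)
{
    /// A direct call lets the compiler inline the copy and drop the
    /// instruction that materializes the returned dst pointer.
    inline_memcpy(dst, src, n);
}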
diff --git a/base/loggers/CMakeLists.txt b/base/loggers/CMakeLists.txt index 48868cf1e0d..22be002e069 100644 --- a/base/loggers/CMakeLists.txt +++ b/base/loggers/CMakeLists.txt @@ -1,4 +1,4 @@ -include(${ClickHouse_SOURCE_DIR}/cmake/dbms_glob_sources.cmake) +include("${ClickHouse_SOURCE_DIR}/cmake/dbms_glob_sources.cmake") add_headers_and_sources(loggers .) add_library(loggers ${loggers_sources} ${loggers_headers}) target_link_libraries(loggers PRIVATE dbms clickhouse_common_io) diff --git a/base/loggers/Loggers.cpp b/base/loggers/Loggers.cpp index ed806741895..f66b1eb5b91 100644 --- a/base/loggers/Loggers.cpp +++ b/base/loggers/Loggers.cpp @@ -69,7 +69,7 @@ void Loggers::buildLoggers(Poco::Util::AbstractConfiguration & config, Poco::Log log_file->setProperty(Poco::FileChannel::PROP_ROTATEONOPEN, config.getRawString("logger.rotateOnOpen", "false")); log_file->open(); - Poco::AutoPtr<OwnPatternFormatter> pf = new OwnPatternFormatter(this); + Poco::AutoPtr<OwnPatternFormatter> pf = new OwnPatternFormatter; Poco::AutoPtr<DB::OwnFormattingChannel> log = new DB::OwnFormattingChannel(pf, log_file); split->addChannel(log); @@ -90,7 +90,7 @@ void Loggers::buildLoggers(Poco::Util::AbstractConfiguration & config, Poco::Log error_log_file->setProperty(Poco::FileChannel::PROP_FLUSH, config.getRawString("logger.flush", "true")); error_log_file->setProperty(Poco::FileChannel::PROP_ROTATEONOPEN, config.getRawString("logger.rotateOnOpen", "false")); - Poco::AutoPtr<OwnPatternFormatter> pf = new OwnPatternFormatter(this); + Poco::AutoPtr<OwnPatternFormatter> pf = new OwnPatternFormatter; Poco::AutoPtr<DB::OwnFormattingChannel> errorlog = new DB::OwnFormattingChannel(pf, error_log_file); errorlog->setLevel(Poco::Message::PRIO_NOTICE); @@ -98,10 +98,7 @@ void Loggers::buildLoggers(Poco::Util::AbstractConfiguration & config, Poco::Log split->addChannel(errorlog); } - /// "dynamic_layer_selection" is needed only for Yandex.Metrika, that share part of ClickHouse code. - /// We don't need this configuration parameter. - - if (config.getBool("logger.use_syslog", false) || config.getBool("dynamic_layer_selection", false)) + if (config.getBool("logger.use_syslog", false)) { //const std::string & cmd_name = commandName(); @@ -127,7 +124,7 @@ void Loggers::buildLoggers(Poco::Util::AbstractConfiguration & config, Poco::Log } syslog_channel->open(); - Poco::AutoPtr<OwnPatternFormatter> pf = new OwnPatternFormatter(this, OwnPatternFormatter::ADD_LAYER_TAG); + Poco::AutoPtr<OwnPatternFormatter> pf = new OwnPatternFormatter; Poco::AutoPtr<DB::OwnFormattingChannel> log = new DB::OwnFormattingChannel(pf, syslog_channel); split->addChannel(log); @@ -141,7 +138,7 @@ void Loggers::buildLoggers(Poco::Util::AbstractConfiguration & config, Poco::Log { bool color_enabled = config.getBool("logger.color_terminal", color_logs_by_default); - Poco::AutoPtr<OwnPatternFormatter> pf = new OwnPatternFormatter(this, OwnPatternFormatter::ADD_NOTHING, color_enabled); + Poco::AutoPtr<OwnPatternFormatter> pf = new OwnPatternFormatter(color_enabled); Poco::AutoPtr<DB::OwnFormattingChannel> log = new DB::OwnFormattingChannel(pf, new Poco::ConsoleChannel); logger.warning("Logging " + log_level + " to console"); split->addChannel(log); diff --git a/base/loggers/Loggers.h b/base/loggers/Loggers.h index 9ed75046468..151c1d3566f 100644 --- a/base/loggers/Loggers.h +++ b/base/loggers/Loggers.h @@ -8,6 +8,7 @@ #include #include "OwnSplitChannel.h" + namespace Poco::Util { class AbstractConfiguration; @@ -21,16 +22,8 @@ public: /// Close log files. On next log write files will be reopened. void closeLogs(Poco::Logger & logger); - std::optional<size_t> getLayer() const - { - return layer; /// layer set in inheritor class BaseDaemonApplication.
- } - void setTextLog(std::shared_ptr<DB::TextLog> log, int max_priority); -protected: - std::optional<size_t> layer; - private: Poco::AutoPtr<Poco::FileChannel> log_file; Poco::AutoPtr<Poco::FileChannel> error_log_file; diff --git a/base/loggers/OwnPatternFormatter.cpp b/base/loggers/OwnPatternFormatter.cpp index 029d06ff949..e62039f4a27 100644 --- a/base/loggers/OwnPatternFormatter.cpp +++ b/base/loggers/OwnPatternFormatter.cpp @@ -13,31 +13,18 @@ #include "Loggers.h" -OwnPatternFormatter::OwnPatternFormatter(const Loggers * loggers_, OwnPatternFormatter::Options options_, bool color_) - : Poco::PatternFormatter(""), loggers(loggers_), options(options_), color(color_) +OwnPatternFormatter::OwnPatternFormatter(bool color_) + : Poco::PatternFormatter(""), color(color_) { } -void OwnPatternFormatter::formatExtended(const DB::ExtendedLogMessage & msg_ext, std::string & text) +void OwnPatternFormatter::formatExtended(const DB::ExtendedLogMessage & msg_ext, std::string & text) const { DB::WriteBufferFromString wb(text); const Poco::Message & msg = msg_ext.base; - /// For syslog: tag must be before message and first whitespace. - /// This code is only used in Yandex.Metrika and unneeded in ClickHouse. - if ((options & ADD_LAYER_TAG) && loggers) - { - auto layer = loggers->getLayer(); - if (layer) - { - writeCString("layer[", wb); - DB::writeIntText(*layer, wb); - writeCString("]: ", wb); - } - } - /// Change delimiters in date for compatibility with old logs. DB::writeDateTimeText<'.', ':'>(msg_ext.time_seconds, wb); diff --git a/base/loggers/OwnPatternFormatter.h b/base/loggers/OwnPatternFormatter.h index 4aedcc04637..fba4f0964cb 100644 --- a/base/loggers/OwnPatternFormatter.h +++ b/base/loggers/OwnPatternFormatter.h @@ -24,20 +24,11 @@ class Loggers; class OwnPatternFormatter : public Poco::PatternFormatter { public: - /// ADD_LAYER_TAG is needed only for Yandex.Metrika, that share part of ClickHouse code. - enum Options - { - ADD_NOTHING = 0, - ADD_LAYER_TAG = 1 << 0 - }; - - OwnPatternFormatter(const Loggers * loggers_, Options options_ = ADD_NOTHING, bool color_ = false); + OwnPatternFormatter(bool color_ = false); void format(const Poco::Message & msg, std::string & text) override; - void formatExtended(const DB::ExtendedLogMessage & msg_ext, std::string & text); + void formatExtended(const DB::ExtendedLogMessage & msg_ext, std::string & text) const; private: - const Loggers * loggers; - Options options; bool color; }; diff --git a/base/mysqlxx/CMakeLists.txt b/base/mysqlxx/CMakeLists.txt index 849c58a8527..c5230c2b49f 100644 --- a/base/mysqlxx/CMakeLists.txt +++ b/base/mysqlxx/CMakeLists.txt @@ -14,8 +14,8 @@ add_library (mysqlxx target_include_directories (mysqlxx PUBLIC ..)
if (USE_INTERNAL_MYSQL_LIBRARY) - target_include_directories (mysqlxx PUBLIC ${ClickHouse_SOURCE_DIR}/contrib/mariadb-connector-c/include) - target_include_directories (mysqlxx PUBLIC ${ClickHouse_BINARY_DIR}/contrib/mariadb-connector-c/include) + target_include_directories (mysqlxx PUBLIC "${ClickHouse_SOURCE_DIR}/contrib/mariadb-connector-c/include") + target_include_directories (mysqlxx PUBLIC "${ClickHouse_BINARY_DIR}/contrib/mariadb-connector-c/include") else () set(PLATFORM_LIBRARIES ${CMAKE_DL_LIBS}) diff --git a/base/mysqlxx/Pool.h b/base/mysqlxx/Pool.h index b6189663f55..530e2c78cf2 100644 --- a/base/mysqlxx/Pool.h +++ b/base/mysqlxx/Pool.h @@ -159,9 +159,9 @@ public: */ Pool(const std::string & db_, const std::string & server_, - const std::string & user_ = "", - const std::string & password_ = "", - unsigned port_ = 0, + const std::string & user_, + const std::string & password_, + unsigned port_, const std::string & socket_ = "", unsigned connect_timeout_ = MYSQLXX_DEFAULT_TIMEOUT, unsigned rw_timeout_ = MYSQLXX_DEFAULT_RW_TIMEOUT, diff --git a/base/mysqlxx/PoolWithFailover.cpp b/base/mysqlxx/PoolWithFailover.cpp index 5e9f70f4ac1..ea2d060e596 100644 --- a/base/mysqlxx/PoolWithFailover.cpp +++ b/base/mysqlxx/PoolWithFailover.cpp @@ -2,7 +2,6 @@ #include #include #include - #include @@ -15,9 +14,12 @@ static bool startsWith(const std::string & s, const char * prefix) using namespace mysqlxx; -PoolWithFailover::PoolWithFailover(const Poco::Util::AbstractConfiguration & config_, - const std::string & config_name_, const unsigned default_connections_, - const unsigned max_connections_, const size_t max_tries_) +PoolWithFailover::PoolWithFailover( + const Poco::Util::AbstractConfiguration & config_, + const std::string & config_name_, + const unsigned default_connections_, + const unsigned max_connections_, + const size_t max_tries_) : max_tries(max_tries_) { shareable = config_.getBool(config_name_ + ".share_connection", false); @@ -59,16 +61,38 @@ PoolWithFailover::PoolWithFailover(const Poco::Util::AbstractConfiguration & con } } -PoolWithFailover::PoolWithFailover(const std::string & config_name_, const unsigned default_connections_, - const unsigned max_connections_, const size_t max_tries_) - : PoolWithFailover{ - Poco::Util::Application::instance().config(), config_name_, - default_connections_, max_connections_, max_tries_} + +PoolWithFailover::PoolWithFailover( + const std::string & config_name_, + const unsigned default_connections_, + const unsigned max_connections_, + const size_t max_tries_) + : PoolWithFailover{Poco::Util::Application::instance().config(), + config_name_, default_connections_, max_connections_, max_tries_} { } + +PoolWithFailover::PoolWithFailover( + const std::string & database, + const RemoteDescription & addresses, + const std::string & user, + const std::string & password, + size_t max_tries_) + : max_tries(max_tries_) + , shareable(false) +{ + /// Replicas have the same priority, but traversed replicas are moved to the end of the queue. 
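+    /// (Illustrative note, not from the original patch: with addresses = {{"host-1", 3306}, {"host-2", 3306}},
+    /// both pools land in replicas_by_priority[0], so get() treats them as equal peers and rotates through them.)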
+ for (const auto & [host, port] : addresses) + { + replicas_by_priority[0].emplace_back(std::make_shared<Pool>(database, host, user, password, port)); + } +} + + PoolWithFailover::PoolWithFailover(const PoolWithFailover & other) - : max_tries{other.max_tries}, shareable{other.shareable} + : max_tries{other.max_tries} + , shareable{other.shareable} { if (shareable) { diff --git a/base/mysqlxx/PoolWithFailover.h b/base/mysqlxx/PoolWithFailover.h index 029fc3ebad3..5154fc3e253 100644 --- a/base/mysqlxx/PoolWithFailover.h +++ b/base/mysqlxx/PoolWithFailover.h @@ -11,6 +11,8 @@ namespace mysqlxx { /** MySQL connection pool with support of failover. + * + * For dictionary source: * Have information about replicas and their priorities. * Tries to connect to replica in an order of priority. When equal priority, choose replica with maximum time without connections. * @@ -68,42 +70,58 @@ namespace mysqlxx using PoolPtr = std::shared_ptr<Pool>; using Replicas = std::vector<PoolPtr>; - /// [priority][index] -> replica. + /// [priority][index] -> replica. Highest priority is 0. using ReplicasByPriority = std::map<int, Replicas>; - ReplicasByPriority replicas_by_priority; /// Number of connection tries. size_t max_tries; /// Mutex for set of replicas. std::mutex mutex; - /// Can the Pool be shared bool shareable; public: using Entry = Pool::Entry; + using RemoteDescription = std::vector<std::pair<std::string, uint16_t>>; /** - * config_name Name of parameter in configuration file. + * * MySQL dictionary source related params: + * config_name Name of parameter in configuration file for dictionary source. + * + * * MySQL storage related parameters: + * replicas_description + * + * * Mutual parameters: * default_connections Number of connection in pool to each replica at start. * max_connections Maximum number of connections in pool to each replica. * max_tries_ Max number of connection tries. */ - PoolWithFailover(const std::string & config_name_, + PoolWithFailover( + const std::string & config_name_, unsigned default_connections_ = MYSQLXX_POOL_WITH_FAILOVER_DEFAULT_START_CONNECTIONS, unsigned max_connections_ = MYSQLXX_POOL_WITH_FAILOVER_DEFAULT_MAX_CONNECTIONS, size_t max_tries_ = MYSQLXX_POOL_WITH_FAILOVER_DEFAULT_MAX_TRIES); - PoolWithFailover(const Poco::Util::AbstractConfiguration & config_, + PoolWithFailover( + const Poco::Util::AbstractConfiguration & config_, const std::string & config_name_, unsigned default_connections_ = MYSQLXX_POOL_WITH_FAILOVER_DEFAULT_START_CONNECTIONS, unsigned max_connections_ = MYSQLXX_POOL_WITH_FAILOVER_DEFAULT_MAX_CONNECTIONS, size_t max_tries_ = MYSQLXX_POOL_WITH_FAILOVER_DEFAULT_MAX_TRIES); + PoolWithFailover( + const std::string & database, + const RemoteDescription & addresses, + const std::string & user, + const std::string & password, + size_t max_tries_ = MYSQLXX_POOL_WITH_FAILOVER_DEFAULT_MAX_TRIES); + PoolWithFailover(const PoolWithFailover & other); /** Allocates a connection to use.
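 * A hypothetical usage sketch (host names are made up; not part of the original patch):
 *
 *     PoolWithFailover pool("db", {{"mysql-1", 3306}, {"mysql-2", 3306}}, "user", "password");
 *     PoolWithFailover::Entry entry = pool.get(); /// picks a replica by priority, failing over on connection errors
 *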
*/ Entry get(); }; + + using PoolWithFailoverPtr = std::shared_ptr<PoolWithFailover>; } diff --git a/base/mysqlxx/tests/CMakeLists.txt b/base/mysqlxx/tests/CMakeLists.txt index 2cf19d78418..6473a927308 100644 --- a/base/mysqlxx/tests/CMakeLists.txt +++ b/base/mysqlxx/tests/CMakeLists.txt @@ -1,5 +1,2 @@ -add_executable (mysqlxx_test mysqlxx_test.cpp) -target_link_libraries (mysqlxx_test PRIVATE mysqlxx) - add_executable (mysqlxx_pool_test mysqlxx_pool_test.cpp) target_link_libraries (mysqlxx_pool_test PRIVATE mysqlxx) diff --git a/base/mysqlxx/tests/failover.xml b/base/mysqlxx/tests/failover.xml deleted file mode 100644 index 73702eabb29..00000000000 --- a/base/mysqlxx/tests/failover.xml +++ /dev/null @@ -1,21 +0,0 @@ - - - - 3306 - root - Metrica - qwerty - - example02t - 0 - - - example02t - 3306 - root - qwerty - Metrica - 1 - - - diff --git a/base/mysqlxx/tests/mysqlxx_test.cpp b/base/mysqlxx/tests/mysqlxx_test.cpp deleted file mode 100644 index c505d34a58d..00000000000 --- a/base/mysqlxx/tests/mysqlxx_test.cpp +++ /dev/null @@ -1,77 +0,0 @@ -#include -#include - - -int main(int, char **) -{ - try - { - mysqlxx::Connection connection("test", "127.0.0.1", "root", "qwerty", 3306); - std::cerr << "Connected." << std::endl; - - { - mysqlxx::Query query = connection.query(); - query << "SELECT 1 x, '2010-01-01 01:01:01' d"; - mysqlxx::UseQueryResult result = query.use(); - std::cerr << "use() called." << std::endl; - - while (mysqlxx::Row row = result.fetch()) - { - std::cerr << "Fetched row." << std::endl; - std::cerr << row[0] << ", " << row["x"] << std::endl; - std::cerr << row[1] << ", " << row["d"] - << ", " << row[1].getDate() - << ", " << row[1].getDateTime() - << ", " << row[1].getDate() - << ", " << row[1].getDateTime() - << std::endl - << row[1].getDate() << ", " << row[1].getDateTime() << std::endl - << row[1].getDate() << ", " << row[1].getDateTime() << std::endl - << row[1].getDate() << ", " << row[1].getDateTime() << std::endl - << row[1].getDate() << ", " << row[1].getDateTime() << std::endl - ; - - time_t t1 = row[0]; - time_t t2 = row[1]; - std::cerr << t1 << ", " << LocalDateTime(t1) << std::endl; - std::cerr << t2 << ", " << LocalDateTime(t2) << std::endl; - } - } - - { - mysqlxx::UseQueryResult result = connection.query("SELECT 'abc\\\\def' x").use(); - mysqlxx::Row row = result.fetch(); - std::cerr << row << std::endl; - std::cerr << row << std::endl; - } - - { - /// Query copying - mysqlxx::Query query1 = connection.query("SELECT"); - mysqlxx::Query query2 = query1; - query2 << " 1"; - - std::cerr << query1.str() << ", " << query2.str() << std::endl; - } - - { - /// NULL - mysqlxx::Null<int> x = mysqlxx::null; - std::cerr << (x == mysqlxx::null ? "Ok" : "Fail") << std::endl; - std::cerr << (x == 0 ? "Fail" : "Ok") << std::endl; - std::cerr << (x.isNull() ? "Ok" : "Fail") << std::endl; - x = 1; - std::cerr << (x == mysqlxx::null ? "Fail" : "Ok") << std::endl; - std::cerr << (x == 0 ? "Fail" : "Ok") << std::endl; - std::cerr << (x == 1 ? "Ok" : "Fail") << std::endl; - std::cerr << (x.isNull() ?
"Fail" : "Ok") << std::endl; - } - } - catch (const mysqlxx::Exception & e) - { - std::cerr << e.code() << ", " << e.message() << std::endl; - throw; - } - - return 0; -} diff --git a/base/pcg-random/pcg_random.hpp b/base/pcg-random/pcg_random.hpp index abf83a60ee1..b7145e2309c 100644 --- a/base/pcg-random/pcg_random.hpp +++ b/base/pcg-random/pcg_random.hpp @@ -1643,22 +1643,22 @@ typedef setseq_base template -using ext_std8 = extended; template -using ext_std16 = extended; template -using ext_std32 = extended; template -using ext_std64 = extended; diff --git a/cmake/analysis.cmake b/cmake/analysis.cmake index 369be295746..267bb34248b 100644 --- a/cmake/analysis.cmake +++ b/cmake/analysis.cmake @@ -16,6 +16,10 @@ if (ENABLE_CLANG_TIDY) set (USE_CLANG_TIDY ON) + # clang-tidy requires assertions to guide the analysis + # Note that NDEBUG is set implicitly by CMake for non-debug builds + set(COMPILER_FLAGS "${COMPILER_FLAGS} -UNDEBUG") + # The variable CMAKE_CXX_CLANG_TIDY will be set inside src and base directories with non third-party code. # set (CMAKE_CXX_CLANG_TIDY "${CLANG_TIDY_PATH}") elseif (FAIL_ON_UNSUPPORTED_OPTIONS_COMBINATION) diff --git a/cmake/arch.cmake b/cmake/arch.cmake index 9604ef62b31..60e0346dbbf 100644 --- a/cmake/arch.cmake +++ b/cmake/arch.cmake @@ -1,7 +1,7 @@ if (CMAKE_SYSTEM_PROCESSOR MATCHES "amd64|x86_64") set (ARCH_AMD64 1) endif () -if (CMAKE_SYSTEM_PROCESSOR MATCHES "^(aarch64.*|AARCH64.*)") +if (CMAKE_SYSTEM_PROCESSOR MATCHES "^(aarch64.*|AARCH64.*|arm64.*|ARM64.*)") set (ARCH_AARCH64 1) endif () if (ARCH_AARCH64 OR CMAKE_SYSTEM_PROCESSOR MATCHES "arm") diff --git a/cmake/autogenerated_versions.txt b/cmake/autogenerated_versions.txt index bd7885bc41b..51f4b974161 100644 --- a/cmake/autogenerated_versions.txt +++ b/cmake/autogenerated_versions.txt @@ -1,9 +1,9 @@ # This strings autochanged from release_lib.sh: -SET(VERSION_REVISION 54449) +SET(VERSION_REVISION 54451) SET(VERSION_MAJOR 21) -SET(VERSION_MINOR 4) +SET(VERSION_MINOR 6) SET(VERSION_PATCH 1) -SET(VERSION_GITHASH af2135ef9dc72f16fa4f229b731262c3f0a8bbdc) -SET(VERSION_DESCRIBE v21.4.1.1-prestable) -SET(VERSION_STRING 21.4.1.1) +SET(VERSION_GITHASH 96fced4c3cf432fb0b401d2ab01f0c56e5f74a96) +SET(VERSION_DESCRIBE v21.6.1.1-prestable) +SET(VERSION_STRING 21.6.1.1) # end of autochange diff --git a/cmake/darwin/default_libs.cmake b/cmake/darwin/default_libs.cmake index 79ac675f234..a6ee800d59b 100644 --- a/cmake/darwin/default_libs.cmake +++ b/cmake/darwin/default_libs.cmake @@ -1,11 +1,14 @@ set (DEFAULT_LIBS "-nodefaultlibs") -if (NOT COMPILER_CLANG) - message (FATAL_ERROR "Darwin build is supported only for Clang") -endif () - set (DEFAULT_LIBS "${DEFAULT_LIBS} ${COVERAGE_OPTION} -lc -lm -lpthread -ldl") +if (COMPILER_GCC) + set (DEFAULT_LIBS "${DEFAULT_LIBS} -lgcc_eh") + if (ARCH_AARCH64) + set (DEFAULT_LIBS "${DEFAULT_LIBS} -lgcc") + endif () +endif () + message(STATUS "Default libraries: ${DEFAULT_LIBS}") set(CMAKE_CXX_STANDARD_LIBRARIES ${DEFAULT_LIBS}) diff --git a/cmake/darwin/toolchain-aarch64.cmake b/cmake/darwin/toolchain-aarch64.cmake new file mode 100644 index 00000000000..81398111495 --- /dev/null +++ b/cmake/darwin/toolchain-aarch64.cmake @@ -0,0 +1,14 @@ +set (CMAKE_SYSTEM_NAME "Darwin") +set (CMAKE_SYSTEM_PROCESSOR "aarch64") +set (CMAKE_C_COMPILER_TARGET "aarch64-apple-darwin") +set (CMAKE_CXX_COMPILER_TARGET "aarch64-apple-darwin") +set (CMAKE_ASM_COMPILER_TARGET "aarch64-apple-darwin") +set (CMAKE_OSX_SYSROOT "${CMAKE_CURRENT_LIST_DIR}/../toolchain/darwin-aarch64") + +set 
(CMAKE_TRY_COMPILE_TARGET_TYPE STATIC_LIBRARY) # disable linkage check - it doesn't work in CMake + +set (HAS_PRE_1970_EXITCODE "0" CACHE STRING "Result from TRY_RUN" FORCE) +set (HAS_PRE_1970_EXITCODE__TRYRUN_OUTPUT "" CACHE STRING "Output from TRY_RUN" FORCE) + +set (HAS_POST_2038_EXITCODE "0" CACHE STRING "Result from TRY_RUN" FORCE) +set (HAS_POST_2038_EXITCODE__TRYRUN_OUTPUT "" CACHE STRING "Output from TRY_RUN" FORCE) diff --git a/cmake/find/amqpcpp.cmake b/cmake/find/amqpcpp.cmake index 4191dce26bb..a4a58349508 100644 --- a/cmake/find/amqpcpp.cmake +++ b/cmake/find/amqpcpp.cmake @@ -1,3 +1,8 @@ +if (MISSING_INTERNAL_LIBUV_LIBRARY) + message (WARNING "Can't find internal libuv needed for AMQP-CPP library") + set (ENABLE_AMQPCPP OFF CACHE INTERNAL "") +endif() + option(ENABLE_AMQPCPP "Enable AMQP-CPP" ${ENABLE_LIBRARIES}) if (NOT ENABLE_AMQPCPP) @@ -12,11 +17,13 @@ if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/AMQP-CPP/CMakeLists.txt") endif () set (USE_AMQPCPP 1) -set (AMQPCPP_LIBRARY AMQP-CPP) +set (AMQPCPP_LIBRARY amqp-cpp) set (AMQPCPP_INCLUDE_DIR "${ClickHouse_SOURCE_DIR}/contrib/AMQP-CPP/include") list (APPEND AMQPCPP_INCLUDE_DIR - "${ClickHouse_SOURCE_DIR}/contrib/AMQP-CPP/include" + "${LIBUV_INCLUDE_DIR}" "${ClickHouse_SOURCE_DIR}/contrib/AMQP-CPP") +list (APPEND AMQPCPP_LIBRARY "${LIBUV_LIBRARY}") + message (STATUS "Using AMQP-CPP=${USE_AMQPCPP}: ${AMQPCPP_INCLUDE_DIR} : ${AMQPCPP_LIBRARY}") diff --git a/cmake/find/base64.cmake b/cmake/find/base64.cmake index 7427baf9cad..acade11eb2f 100644 --- a/cmake/find/base64.cmake +++ b/cmake/find/base64.cmake @@ -1,4 +1,8 @@ -option (ENABLE_BASE64 "Enable base64" ${ENABLE_LIBRARIES}) +if(ARCH_AMD64 OR ARCH_ARM) + option (ENABLE_BASE64 "Enable base64" ${ENABLE_LIBRARIES}) +elseif(ENABLE_BASE64) + message (${RECONFIGURE_MESSAGE_LEVEL} "base64 library is only supported on x86_64 and aarch64") +endif() if (NOT ENABLE_BASE64) return() diff --git a/cmake/find/cassandra.cmake b/cmake/find/cassandra.cmake index 037d6c3f131..b6e97ff5ef8 100644 --- a/cmake/find/cassandra.cmake +++ b/cmake/find/cassandra.cmake @@ -1,3 +1,8 @@ +if (MISSING_INTERNAL_LIBUV_LIBRARY) + message (WARNING "Disabling cassandra due to missing libuv") + set (ENABLE_CASSANDRA OFF CACHE INTERNAL "") +endif() + option(ENABLE_CASSANDRA "Enable Cassandra" ${ENABLE_LIBRARIES}) if (NOT ENABLE_CASSANDRA) @@ -8,27 +13,22 @@ if (APPLE) set(CMAKE_MACOSX_RPATH ON) endif() -if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/libuv") - message (ERROR "submodule contrib/libuv is missing. to fix try run: \n git submodule update --init --recursive") - message (${RECONFIGURE_MESSAGE_LEVEL} "Can't find internal libuv needed for Cassandra") -elseif (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/cassandra") +if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/cassandra") message (ERROR "submodule contrib/cassandra is missing.
to fix try run: \n git submodule update --init --recursive") message (${RECONFIGURE_MESSAGE_LEVEL} "Can't find internal Cassandra") -else() - set (LIBUV_ROOT_DIR "${ClickHouse_SOURCE_DIR}/contrib/libuv") - set (CASSANDRA_INCLUDE_DIR - "${ClickHouse_SOURCE_DIR}/contrib/cassandra/include/") - if (MAKE_STATIC_LIBRARIES) - set (LIBUV_LIBRARY uv_a) - set (CASSANDRA_LIBRARY cassandra_static) - else() - set (LIBUV_LIBRARY uv) - set (CASSANDRA_LIBRARY cassandra) - endif() - - set (USE_CASSANDRA 1) - set (CASS_ROOT_DIR "${ClickHouse_SOURCE_DIR}/contrib/cassandra") + set (USE_CASSANDRA 0) + return() endif() +set (USE_CASSANDRA 1) +set (CASSANDRA_INCLUDE_DIR + "${ClickHouse_SOURCE_DIR}/contrib/cassandra/include/") +if (MAKE_STATIC_LIBRARIES) + set (CASSANDRA_LIBRARY cassandra_static) +else() + set (CASSANDRA_LIBRARY cassandra) +endif() + +set (CASS_ROOT_DIR "${ClickHouse_SOURCE_DIR}/contrib/cassandra") + message (STATUS "Using cassandra=${USE_CASSANDRA}: ${CASSANDRA_INCLUDE_DIR} : ${CASSANDRA_LIBRARY}") -message (STATUS "Using libuv: ${LIBUV_ROOT_DIR} : ${LIBUV_LIBRARY}") diff --git a/cmake/find/ccache.cmake b/cmake/find/ccache.cmake index fea1f8b4c97..986c9cb5fe2 100644 --- a/cmake/find/ccache.cmake +++ b/cmake/find/ccache.cmake @@ -32,7 +32,9 @@ if (CCACHE_FOUND AND NOT COMPILER_MATCHES_CCACHE) if (CCACHE_VERSION VERSION_GREATER "3.2.0" OR NOT CMAKE_CXX_COMPILER_ID STREQUAL "Clang") message(STATUS "Using ${CCACHE_FOUND} ${CCACHE_VERSION}") - set_property (GLOBAL PROPERTY RULE_LAUNCH_COMPILE ${CCACHE_FOUND}) + set (CMAKE_CXX_COMPILER_LAUNCHER ${CCACHE_FOUND} ${CMAKE_CXX_COMPILER_LAUNCHER}) + set (CMAKE_C_COMPILER_LAUNCHER ${CCACHE_FOUND} ${CMAKE_C_COMPILER_LAUNCHER}) + set_property (GLOBAL PROPERTY RULE_LAUNCH_LINK ${CCACHE_FOUND}) # debian (debhelpers) set SOURCE_DATE_EPOCH environment variable, that is diff --git a/cmake/find/datasketches.cmake b/cmake/find/datasketches.cmake new file mode 100644 index 00000000000..44ef324a9f2 --- /dev/null +++ b/cmake/find/datasketches.cmake @@ -0,0 +1,29 @@ +option (ENABLE_DATASKETCHES "Enable DataSketches" ${ENABLE_LIBRARIES}) + +if (ENABLE_DATASKETCHES) + +option (USE_INTERNAL_DATASKETCHES_LIBRARY "Set to FALSE to use system DataSketches library instead of bundled" ${NOT_UNBUNDLED}) + +if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/datasketches-cpp/theta/CMakeLists.txt") + if (USE_INTERNAL_DATASKETCHES_LIBRARY) + message(WARNING "submodule contrib/datasketches-cpp is missing. 
to fix try run: \n git submodule update --init --recursive") + endif() + set(MISSING_INTERNAL_DATASKETCHES_LIBRARY 1) + set(USE_INTERNAL_DATASKETCHES_LIBRARY 0) +endif() + +if (USE_INTERNAL_DATASKETCHES_LIBRARY) + set(DATASKETCHES_LIBRARY theta) + set(DATASKETCHES_INCLUDE_DIR "${ClickHouse_SOURCE_DIR}/contrib/datasketches-cpp/common/include" "${ClickHouse_SOURCE_DIR}/contrib/datasketches-cpp/theta/include") +elseif (NOT MISSING_INTERNAL_DATASKETCHES_LIBRARY) + find_library(DATASKETCHES_LIBRARY theta) + find_path(DATASKETCHES_INCLUDE_DIR NAMES theta_sketch.hpp PATHS ${DATASKETCHES_INCLUDE_PATHS}) +endif() + +if (DATASKETCHES_LIBRARY AND DATASKETCHES_INCLUDE_DIR) + set(USE_DATASKETCHES 1) +endif() + +endif() + +message (STATUS "Using datasketches=${USE_DATASKETCHES}: ${DATASKETCHES_INCLUDE_DIR} : ${DATASKETCHES_LIBRARY}") diff --git a/cmake/find/fastops.cmake b/cmake/find/fastops.cmake index 5ab320bdb7a..1675646654e 100644 --- a/cmake/find/fastops.cmake +++ b/cmake/find/fastops.cmake @@ -1,7 +1,7 @@ -if(NOT ARCH_ARM AND NOT OS_FREEBSD AND NOT OS_DARWIN) +if(ARCH_AMD64 AND NOT OS_FREEBSD AND NOT OS_DARWIN) option(ENABLE_FASTOPS "Enable fast vectorized mathematical functions library by Mikhail Parakhin" ${ENABLE_LIBRARIES}) elseif(ENABLE_FASTOPS) - message (${RECONFIGURE_MESSAGE_LEVEL} "Fastops library is not supported on ARM, FreeBSD and Darwin") + message (${RECONFIGURE_MESSAGE_LEVEL} "Fastops library is supported only on x86_64, and not on FreeBSD or Darwin") endif() if(NOT ENABLE_FASTOPS) diff --git a/cmake/find/hdfs3.cmake b/cmake/find/hdfs3.cmake index 7b385f24e1e..3aab2b612ef 100644 --- a/cmake/find/hdfs3.cmake +++ b/cmake/find/hdfs3.cmake @@ -1,4 +1,4 @@ -if(NOT ARCH_ARM AND NOT OS_FREEBSD AND NOT APPLE AND USE_PROTOBUF) +if(NOT ARCH_ARM AND NOT OS_FREEBSD AND NOT APPLE AND USE_PROTOBUF AND NOT ARCH_PPC64LE) option(ENABLE_HDFS "Enable HDFS" ${ENABLE_LIBRARIES}) elseif(ENABLE_HDFS OR USE_INTERNAL_HDFS3_LIBRARY) message (${RECONFIGURE_MESSAGE_LEVEL} "Cannot use HDFS3 with current configuration") diff --git a/cmake/find/ldap.cmake b/cmake/find/ldap.cmake index 369c1e42e8d..d8baea89429 100644 --- a/cmake/find/ldap.cmake +++ b/cmake/find/ldap.cmake @@ -62,8 +62,10 @@ if (NOT OPENLDAP_FOUND AND NOT MISSING_INTERNAL_LDAP_LIBRARY) if ( ( "${_system_name}" STREQUAL "linux" AND "${_system_processor}" STREQUAL "x86_64" ) OR ( "${_system_name}" STREQUAL "linux" AND "${_system_processor}" STREQUAL "aarch64" ) OR + ( "${_system_name}" STREQUAL "linux" AND "${_system_processor}" STREQUAL "ppc64le" ) OR ( "${_system_name}" STREQUAL "freebsd" AND "${_system_processor}" STREQUAL "x86_64" ) OR - ( "${_system_name}" STREQUAL "darwin" AND "${_system_processor}" STREQUAL "x86_64" ) + ( "${_system_name}" STREQUAL "darwin" AND "${_system_processor}" STREQUAL "x86_64" ) OR + ( "${_system_name}" STREQUAL "darwin" AND "${_system_processor}" STREQUAL "aarch64" ) ) set (_ldap_supported_platform TRUE) endif () diff --git a/cmake/find/libuv.cmake b/cmake/find/libuv.cmake new file mode 100644 index 00000000000..f0023209309 --- /dev/null +++ b/cmake/find/libuv.cmake @@ -0,0 +1,22 @@ +if (OS_DARWIN AND COMPILER_GCC) + message (WARNING "libuv cannot be built with GCC on macOS due to a bug: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93082") + SET(MISSING_INTERNAL_LIBUV_LIBRARY 1) + return() +endif() + +if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/libuv") + message (WARNING "submodule contrib/libuv is missing.
to fix try run: \n git submodule update --init --recursive") + SET(MISSING_INTERNAL_LIBUV_LIBRARY 1) + return() +endif() + +if (MAKE_STATIC_LIBRARIES) + set (LIBUV_LIBRARY uv_a) +else() + set (LIBUV_LIBRARY uv) +endif() + +set (LIBUV_ROOT_DIR "${ClickHouse_SOURCE_DIR}/contrib/libuv") +set (LIBUV_INCLUDE_DIR "${LIBUV_ROOT_DIR}/include") + +message (STATUS "Using libuv: ${LIBUV_ROOT_DIR} : ${LIBUV_LIBRARY}") diff --git a/cmake/find/llvm.cmake b/cmake/find/llvm.cmake index e0ba1d9b039..6ea7a5fd683 100644 --- a/cmake/find/llvm.cmake +++ b/cmake/find/llvm.cmake @@ -24,9 +24,9 @@ if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/llvm/llvm/CMakeLists.txt") endif () if (NOT USE_INTERNAL_LLVM_LIBRARY) - set (LLVM_PATHS "/usr/local/lib/llvm") + set (LLVM_PATHS "/usr/local/lib/llvm" "/usr/lib/llvm") - foreach(llvm_v 10 9 8) + foreach(llvm_v 11.1 11) if (NOT LLVM_FOUND) find_package (LLVM ${llvm_v} CONFIG PATHS ${LLVM_PATHS}) endif () @@ -102,7 +102,6 @@ LLVMRuntimeDyld LLVMX86CodeGen LLVMX86Desc LLVMX86Info -LLVMX86Utils LLVMAsmPrinter LLVMDebugInfoDWARF LLVMGlobalISel diff --git a/cmake/find/nanodbc.cmake b/cmake/find/nanodbc.cmake new file mode 100644 index 00000000000..894a2a60bad --- /dev/null +++ b/cmake/find/nanodbc.cmake @@ -0,0 +1,16 @@ +if (NOT ENABLE_ODBC) + return () +endif () + +if (NOT USE_INTERNAL_NANODBC_LIBRARY) + message (FATAL_ERROR "Only the bundled nanodbc library can be used") +endif () + +if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/nanodbc/CMakeLists.txt") + message (FATAL_ERROR "submodule contrib/nanodbc is missing. to fix try run: \n git submodule update --init --recursive") +endif() + +set (NANODBC_LIBRARY nanodbc) +set (NANODBC_INCLUDE_DIR "${ClickHouse_SOURCE_DIR}/contrib/nanodbc/nanodbc") + +message (STATUS "Using nanodbc: ${NANODBC_INCLUDE_DIR} : ${NANODBC_LIBRARY}") diff --git a/cmake/find/nuraft.cmake b/cmake/find/nuraft.cmake index 7fa5251946e..4e5258e132f 100644 --- a/cmake/find/nuraft.cmake +++ b/cmake/find/nuraft.cmake @@ -11,7 +11,7 @@ if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/NuRaft/CMakeLists.txt") return() endif () -if (NOT OS_FREEBSD AND NOT OS_DARWIN) +if (NOT OS_FREEBSD) set (USE_NURAFT 1) set (NURAFT_LIBRARY nuraft) diff --git a/cmake/find/odbc.cmake b/cmake/find/odbc.cmake index a23f0c831e9..c475e600c0d 100644 --- a/cmake/find/odbc.cmake +++ b/cmake/find/odbc.cmake @@ -50,4 +50,6 @@ if (NOT EXTERNAL_ODBC_LIBRARY_FOUND) set (USE_INTERNAL_ODBC_LIBRARY 1) endif () +set (USE_INTERNAL_NANODBC_LIBRARY 1) + message (STATUS "Using unixodbc") diff --git a/cmake/find/rocksdb.cmake b/cmake/find/rocksdb.cmake index 968cdb52407..94278a603d7 100644 --- a/cmake/find/rocksdb.cmake +++ b/cmake/find/rocksdb.cmake @@ -1,3 +1,7 @@ +if (OS_DARWIN AND ARCH_AARCH64) + set (ENABLE_ROCKSDB OFF CACHE INTERNAL "") +endif() + option(ENABLE_ROCKSDB "Enable ROCKSDB" ${ENABLE_LIBRARIES}) if (NOT ENABLE_ROCKSDB) diff --git a/cmake/find/s3.cmake b/cmake/find/s3.cmake index 1bbf48fd6b0..1b0c652a31a 100644 --- a/cmake/find/s3.cmake +++ b/cmake/find/s3.cmake @@ -1,7 +1,7 @@ -if(NOT OS_FREEBSD AND NOT APPLE AND NOT ARCH_ARM) +if(NOT OS_FREEBSD AND NOT APPLE) option(ENABLE_S3 "Enable S3" ${ENABLE_LIBRARIES}) elseif(ENABLE_S3 OR USE_INTERNAL_AWS_S3_LIBRARY) - message (${RECONFIGURE_MESSAGE_LEVEL} "Can't use S3 on ARM, Apple or FreeBSD") + message (${RECONFIGURE_MESSAGE_LEVEL} "Can't use S3 on Apple or FreeBSD") endif() if(NOT ENABLE_S3) diff --git a/cmake/find/xz.cmake b/cmake/find/xz.cmake new file mode 100644 index 00000000000..0d19859c6b1 --- /dev/null +++ b/cmake/find/xz.cmake @@ 
-0,0 +1,27 @@ +option (USE_INTERNAL_XZ_LIBRARY "Set to OFF to use system xz (lzma) library instead of bundled" ${NOT_UNBUNDLED}) + +if(NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/xz/src/liblzma/api/lzma.h") + if(USE_INTERNAL_XZ_LIBRARY) + message(WARNING "submodule contrib/xz is missing. to fix try run: \n git submodule update --init --recursive") + message (${RECONFIGURE_MESSAGE_LEVEL} "Can't find internal xz (lzma) library") + set(USE_INTERNAL_XZ_LIBRARY 0) + endif() + set(MISSING_INTERNAL_XZ_LIBRARY 1) +endif() + +if (NOT USE_INTERNAL_XZ_LIBRARY) + find_library (XZ_LIBRARY lzma) + find_path (XZ_INCLUDE_DIR NAMES lzma.h PATHS ${XZ_INCLUDE_PATHS}) + if (NOT XZ_LIBRARY OR NOT XZ_INCLUDE_DIR) + message (${RECONFIGURE_MESSAGE_LEVEL} "Can't find system xz (lzma) library") + endif () +endif () + +if (XZ_LIBRARY AND XZ_INCLUDE_DIR) +elseif (NOT MISSING_INTERNAL_XZ_LIBRARY) + set (USE_INTERNAL_XZ_LIBRARY 1) + set (XZ_LIBRARY liblzma) + set (XZ_INCLUDE_DIR ${ClickHouse_SOURCE_DIR}/contrib/xz/src/liblzma/api) +endif () + +message (STATUS "Using xz (lzma): ${XZ_INCLUDE_DIR} : ${XZ_LIBRARY}") diff --git a/cmake/linux/default_libs.cmake b/cmake/linux/default_libs.cmake index d3a727e9cb8..c1e4d450389 100644 --- a/cmake/linux/default_libs.cmake +++ b/cmake/linux/default_libs.cmake @@ -6,7 +6,7 @@ set (DEFAULT_LIBS "-nodefaultlibs") # We need builtins from Clang's RT even without libcxx - for ubsan+int128. # See https://bugs.llvm.org/show_bug.cgi?id=16404 if (COMPILER_CLANG AND NOT (CMAKE_CROSSCOMPILING AND ARCH_AARCH64)) - execute_process (COMMAND ${CMAKE_CXX_COMPILER} --print-file-name=libclang_rt.builtins-${CMAKE_SYSTEM_PROCESSOR}.a OUTPUT_VARIABLE BUILTINS_LIBRARY OUTPUT_STRIP_TRAILING_WHITESPACE) + execute_process (COMMAND ${CMAKE_CXX_COMPILER} --print-libgcc-file-name --rtlib=compiler-rt OUTPUT_VARIABLE BUILTINS_LIBRARY OUTPUT_STRIP_TRAILING_WHITESPACE) else () set (BUILTINS_LIBRARY "-lgcc") endif () diff --git a/cmake/sanitize.cmake b/cmake/sanitize.cmake index 6c23ce8bc91..f60f7431389 100644 --- a/cmake/sanitize.cmake +++ b/cmake/sanitize.cmake @@ -40,7 +40,7 @@ if (SANITIZE) # RelWithDebInfo, and downgrade optimizations to -O1 but not to -Og, to # keep the binary size down. # TODO: try compiling with -Og and with ld.gold. 
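# Illustrative note (not from the original patch): with -fsanitize-memory-use-after-dtor, MSan poisons
# an object's storage when a non-trivial destructor runs, so later reads are reported, e.g.:
#   struct S { int x; ~S() {} };
#   auto * s = new S{42};
#   s->~S();
#   int y = s->x;  // reported as use-after-dtor
# (Some clang versions additionally require running with MSAN_OPTIONS=poison_in_dtor=1.)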
- set (MSAN_FLAGS "-fsanitize=memory -fsanitize-memory-track-origins -fno-optimize-sibling-calls -fsanitize-blacklist=${CMAKE_SOURCE_DIR}/tests/msan_suppressions.txt") + set (MSAN_FLAGS "-fsanitize=memory -fsanitize-memory-use-after-dtor -fsanitize-memory-track-origins -fno-optimize-sibling-calls -fsanitize-blacklist=${CMAKE_SOURCE_DIR}/tests/msan_suppressions.txt") set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${SAN_FLAGS} ${MSAN_FLAGS}") set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${SAN_FLAGS} ${MSAN_FLAGS}") diff --git a/cmake/target.cmake b/cmake/target.cmake index 7174ca3c2a9..d1a0b8f9cbf 100644 --- a/cmake/target.cmake +++ b/cmake/target.cmake @@ -12,6 +12,9 @@ elseif (CMAKE_SYSTEM_NAME MATCHES "FreeBSD") elseif (CMAKE_SYSTEM_NAME MATCHES "Darwin") set (OS_DARWIN 1) add_definitions(-D OS_DARWIN) +elseif (CMAKE_SYSTEM_NAME MATCHES "SunOS") + set (OS_SUNOS 1) + add_definitions(-D OS_SUNOS) endif () if (CMAKE_CROSSCOMPILING) diff --git a/cmake/tools.cmake b/cmake/tools.cmake index abb11843d59..8ff94ab867b 100644 --- a/cmake/tools.cmake +++ b/cmake/tools.cmake @@ -8,10 +8,13 @@ endif () if (COMPILER_GCC) # Require minimum version of gcc - set (GCC_MINIMUM_VERSION 9) + set (GCC_MINIMUM_VERSION 10) if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS ${GCC_MINIMUM_VERSION} AND NOT CMAKE_VERSION VERSION_LESS 2.8.9) message (FATAL_ERROR "GCC version must be at least ${GCC_MINIMUM_VERSION}. For example, if GCC ${GCC_MINIMUM_VERSION} is available under gcc-${GCC_MINIMUM_VERSION}, g++-${GCC_MINIMUM_VERSION} names, do the following: export CC=gcc-${GCC_MINIMUM_VERSION} CXX=g++-${GCC_MINIMUM_VERSION}; rm -rf CMakeCache.txt CMakeFiles; and re-run cmake or ./release.") endif () + + message (WARNING "GCC compiler is not officially supported for ClickHouse. You should migrate to clang.") + elseif (COMPILER_CLANG) # Require minimum version of clang/apple-clang if (CMAKE_CXX_COMPILER_ID MATCHES "AppleClang") @@ -86,8 +89,3 @@ if (LINKER_NAME) message(STATUS "Using custom linker by name: ${LINKER_NAME}") endif () -if (ARCH_PPC64LE) - if (COMPILER_CLANG OR (COMPILER_GCC AND CMAKE_CXX_COMPILER_VERSION VERSION_LESS 8)) - message(FATAL_ERROR "Only gcc-8 or higher is supported for powerpc architecture") - endif () -endif () diff --git a/cmake/warnings.cmake b/cmake/warnings.cmake index 8122e9ef31e..a85fe8963c7 100644 --- a/cmake/warnings.cmake +++ b/cmake/warnings.cmake @@ -11,11 +11,6 @@ if (NOT MSVC) set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wextra") endif () -if (USE_DEBUG_HELPERS) - set (INCLUDE_DEBUG_HELPERS "-I${ClickHouse_SOURCE_DIR}/base -include ${ClickHouse_SOURCE_DIR}/src/Core/iostream_debug_helpers.h") - set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${INCLUDE_DEBUG_HELPERS}") -endif () - # Add some warnings that are not available even with -Wall -Wextra -Wpedantic. # Intended for exploration of new compiler warnings that may be found useful. # Applies to clang only @@ -176,6 +171,7 @@ elseif (COMPILER_GCC) add_cxx_compile_options(-Wtrampolines) # Obvious add_cxx_compile_options(-Wunused) + add_cxx_compile_options(-Wundef) # Warn if vector operation is not implemented via SIMD capabilities of the architecture add_cxx_compile_options(-Wvector-operation-performance) # XXX: libstdc++ has some of these for 3way compare diff --git a/contrib/CMakeLists.txt b/contrib/CMakeLists.txt index bf4bf5eb472..9eafec23f51 100644 --- a/contrib/CMakeLists.txt +++ b/contrib/CMakeLists.txt @@ -1,4 +1,3 @@ -# Third-party libraries may have substandard code.
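# (Illustrative note, not from the original patch: macros like __DATE__ expand to the build date, so any
# third-party object using them changes on every rebuild; forcing them to empty in the flags added below
# keeps contrib binaries byte-identical across rebuilds.)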
# Put all targets defined here and in added subfolders under "contrib/" folder in GUI-based IDEs by default. # Some of third-party projects may override CMAKE_FOLDER or FOLDER property of their targets, so they will @@ -11,8 +10,10 @@ else () endif () unset (_current_dir_name) -set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -w") -set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -w") +# Third-party libraries may have substandard code. +# Also remove a possible source of nondeterminism. +set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -w -D__DATE__= -D__TIME__= -D__TIMESTAMP__=") +set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -w -D__DATE__= -D__TIME__= -D__TIMESTAMP__=") if (WITH_COVERAGE) set (WITHOUT_COVERAGE_LIST ${WITHOUT_COVERAGE}) @@ -38,7 +39,6 @@ add_subdirectory (boost-cmake) add_subdirectory (cctz-cmake) add_subdirectory (consistent-hashing) add_subdirectory (dragonbox-cmake) -add_subdirectory (FastMemcpy) add_subdirectory (hyperscan-cmake) add_subdirectory (jemalloc-cmake) add_subdirectory (libcpuid-cmake) @@ -48,7 +48,11 @@ add_subdirectory (lz4-cmake) add_subdirectory (murmurhash) add_subdirectory (replxx-cmake) add_subdirectory (unixodbc-cmake) -add_subdirectory (xz) +add_subdirectory (nanodbc-cmake) + +if (USE_INTERNAL_XZ_LIBRARY) + add_subdirectory (xz) +endif() add_subdirectory (poco-cmake) add_subdirectory (croaring-cmake) @@ -94,14 +98,8 @@ if (USE_INTERNAL_ZLIB_LIBRARY) add_subdirectory (${INTERNAL_ZLIB_NAME}) # We should use same defines when including zlib.h as used when zlib compiled target_compile_definitions (zlib PUBLIC ZLIB_COMPAT WITH_GZFILEOP) - if (TARGET zlibstatic) - target_compile_definitions (zlibstatic PUBLIC ZLIB_COMPAT WITH_GZFILEOP) - endif () if (ARCH_AMD64 OR ARCH_AARCH64) target_compile_definitions (zlib PUBLIC X86_64 UNALIGNED_OK) - if (TARGET zlibstatic) - target_compile_definitions (zlibstatic PUBLIC X86_64 UNALIGNED_OK) - endif () endif () endif () @@ -216,15 +214,17 @@ if (USE_EMBEDDED_COMPILER AND USE_INTERNAL_LLVM_LIBRARY) set (LLVM_ENABLE_RTTI 1 CACHE INTERNAL "") set (LLVM_ENABLE_PIC 0 CACHE INTERNAL "") set (LLVM_TARGETS_TO_BUILD "X86;AArch64" CACHE STRING "") - # Yes it is set globally, but this is not enough, since llvm will add -std=c++11 after default - # And c++2a cannot be used, due to ambiguous operator != - if (COMPILER_GCC OR COMPILER_CLANG) - set (_CXX_STANDARD "gnu++17") - else() - set (_CXX_STANDARD "c++17") - endif() - set (LLVM_CXX_STD ${_CXX_STANDARD} CACHE STRING "" FORCE) + + # Need to use C++17 since the compilation is not possible with C++20 currently, due to ambiguous operator != etc. + # LLVM project will set its default value for the -std=... but our global setting from CMake will override it. + set (CMAKE_CXX_STANDARD_bak ${CMAKE_CXX_STANDARD}) + set (CMAKE_CXX_STANDARD 17) + add_subdirectory (llvm/llvm) + + set (CMAKE_CXX_STANDARD ${CMAKE_CXX_STANDARD_bak}) + unset (CMAKE_CXX_STANDARD_bak) + target_include_directories(LLVMSupport SYSTEM BEFORE PRIVATE ${ZLIB_INCLUDE_DIR}) endif () @@ -281,7 +281,14 @@ if (USE_AMQPCPP) add_subdirectory (amqpcpp-cmake) endif() if (USE_CASSANDRA) + # Need to use C++17 since the compilation is not possible with C++20 currently. 
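+ # (Illustrative, not from the original patch: under C++20, `a != b` may also match a rewritten
+ # `!(a == b)` candidate, which makes some pre-C++20 code with a non-bool `operator==` ambiguous;
+ # hence the temporary switch back to C++17 around this subdirectory, mirroring the LLVM case above.)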
+ set (CMAKE_CXX_STANDARD_bak ${CMAKE_CXX_STANDARD}) + set (CMAKE_CXX_STANDARD 17) + add_subdirectory (cassandra) + + set (CMAKE_CXX_STANDARD ${CMAKE_CXX_STANDARD_bak}) + unset (CMAKE_CXX_STANDARD_bak) endif() # Should go before: diff --git a/contrib/FastMemcpy/CMakeLists.txt b/contrib/FastMemcpy/CMakeLists.txt deleted file mode 100644 index 8efe6d45dff..00000000000 --- a/contrib/FastMemcpy/CMakeLists.txt +++ /dev/null @@ -1,28 +0,0 @@ -option (ENABLE_FASTMEMCPY "Enable FastMemcpy library (only internal)" ${ENABLE_LIBRARIES}) - -if (NOT OS_LINUX OR ARCH_AARCH64) - set (ENABLE_FASTMEMCPY OFF) -endif () - -if (ENABLE_FASTMEMCPY) - set (LIBRARY_DIR ${ClickHouse_SOURCE_DIR}/contrib/FastMemcpy) - - set (SRCS - ${LIBRARY_DIR}/FastMemcpy.c - - memcpy_wrapper.c - ) - - add_library (FastMemcpy ${SRCS}) - target_include_directories (FastMemcpy PUBLIC ${LIBRARY_DIR}) - - target_compile_definitions(FastMemcpy PUBLIC USE_FASTMEMCPY=1) - - message (STATUS "Using FastMemcpy") -else () - add_library (FastMemcpy INTERFACE) - - target_compile_definitions(FastMemcpy INTERFACE USE_FASTMEMCPY=0) - - message (STATUS "Not using FastMemcpy") -endif () diff --git a/contrib/FastMemcpy/FastMemcpy.c b/contrib/FastMemcpy/FastMemcpy.c deleted file mode 100644 index 5021bcc7d16..00000000000 --- a/contrib/FastMemcpy/FastMemcpy.c +++ /dev/null @@ -1,220 +0,0 @@ -//===================================================================== -// -// FastMemcpy.c - skywind3000@163.com, 2015 -// -// feature: -// 50% speed up in avg. vs standard memcpy (tested in vc2012/gcc4.9) -// -//===================================================================== -#include -#include -#include -#include - -#if (defined(_WIN32) || defined(WIN32)) -#include -#include -#ifdef _MSC_VER -#pragma comment(lib, "winmm.lib") -#endif -#elif defined(__unix) -#include -#include -#else -#error it can only be compiled under windows or unix -#endif - -#include "FastMemcpy.h" - -unsigned int gettime() -{ - #if (defined(_WIN32) || defined(WIN32)) - return timeGetTime(); - #else - static struct timezone tz={ 0,0 }; - struct timeval time; - gettimeofday(&time,&tz); - return (time.tv_sec * 1000 + time.tv_usec / 1000); - #endif -} - -void sleepms(unsigned int millisec) -{ -#if defined(_WIN32) || defined(WIN32) - Sleep(millisec); -#else - usleep(millisec * 1000); -#endif -} - - -void benchmark(int dstalign, int srcalign, size_t size, int times) -{ - char *DATA1 = (char*)malloc(size + 64); - char *DATA2 = (char*)malloc(size + 64); - size_t LINEAR1 = ((size_t)DATA1); - size_t LINEAR2 = ((size_t)DATA2); - char *ALIGN1 = (char*)(((64 - (LINEAR1 & 63)) & 63) + LINEAR1); - char *ALIGN2 = (char*)(((64 - (LINEAR2 & 63)) & 63) + LINEAR2); - char *dst = (dstalign)? ALIGN1 : (ALIGN1 + 1); - char *src = (srcalign)? ALIGN2 : (ALIGN2 + 3); - unsigned int t1, t2; - int k; - - sleepms(100); - t1 = gettime(); - for (k = times; k > 0; k--) { - memcpy(dst, src, size); - } - t1 = gettime() - t1; - sleepms(100); - t2 = gettime(); - for (k = times; k > 0; k--) { - memcpy_fast(dst, src, size); - } - t2 = gettime() - t2; - - free(DATA1); - free(DATA2); - - printf("result(dst %s, src %s): memcpy_fast=%dms memcpy=%d ms\n", - dstalign? "aligned" : "unalign", - srcalign? 
"aligned" : "unalign", (int)t2, (int)t1); -} - - -void bench(int copysize, int times) -{ - printf("benchmark(size=%d bytes, times=%d):\n", copysize, times); - benchmark(1, 1, copysize, times); - benchmark(1, 0, copysize, times); - benchmark(0, 1, copysize, times); - benchmark(0, 0, copysize, times); - printf("\n"); -} - - -void random_bench(int maxsize, int times) -{ - static char A[11 * 1024 * 1024 + 2]; - static char B[11 * 1024 * 1024 + 2]; - static int random_offsets[0x10000]; - static int random_sizes[0x8000]; - unsigned int i, p1, p2; - unsigned int t1, t2; - for (i = 0; i < 0x10000; i++) { // generate random offsets - random_offsets[i] = rand() % (10 * 1024 * 1024 + 1); - } - for (i = 0; i < 0x8000; i++) { // generate random sizes - random_sizes[i] = 1 + rand() % maxsize; - } - sleepms(100); - t1 = gettime(); - for (p1 = 0, p2 = 0, i = 0; i < times; i++) { - int offset1 = random_offsets[(p1++) & 0xffff]; - int offset2 = random_offsets[(p1++) & 0xffff]; - int size = random_sizes[(p2++) & 0x7fff]; - memcpy(A + offset1, B + offset2, size); - } - t1 = gettime() - t1; - sleepms(100); - t2 = gettime(); - for (p1 = 0, p2 = 0, i = 0; i < times; i++) { - int offset1 = random_offsets[(p1++) & 0xffff]; - int offset2 = random_offsets[(p1++) & 0xffff]; - int size = random_sizes[(p2++) & 0x7fff]; - memcpy_fast(A + offset1, B + offset2, size); - } - t2 = gettime() - t2; - printf("benchmark random access:\n"); - printf("memcpy_fast=%dms memcpy=%dms\n\n", (int)t2, (int)t1); -} - - -#ifdef _MSC_VER -#pragma comment(lib, "winmm.lib") -#endif - -int main(void) -{ - bench(32, 0x1000000); - bench(64, 0x1000000); - bench(512, 0x800000); - bench(1024, 0x400000); - bench(4096, 0x80000); - bench(8192, 0x40000); - bench(1024 * 1024 * 1, 0x800); - bench(1024 * 1024 * 4, 0x200); - bench(1024 * 1024 * 8, 0x100); - - random_bench(2048, 8000000); - - return 0; -} - - - - -/* -benchmark(size=32 bytes, times=16777216): -result(dst aligned, src aligned): memcpy_fast=78ms memcpy=260 ms -result(dst aligned, src unalign): memcpy_fast=78ms memcpy=250 ms -result(dst unalign, src aligned): memcpy_fast=78ms memcpy=266 ms -result(dst unalign, src unalign): memcpy_fast=78ms memcpy=234 ms - -benchmark(size=64 bytes, times=16777216): -result(dst aligned, src aligned): memcpy_fast=109ms memcpy=281 ms -result(dst aligned, src unalign): memcpy_fast=109ms memcpy=328 ms -result(dst unalign, src aligned): memcpy_fast=109ms memcpy=343 ms -result(dst unalign, src unalign): memcpy_fast=93ms memcpy=344 ms - -benchmark(size=512 bytes, times=8388608): -result(dst aligned, src aligned): memcpy_fast=125ms memcpy=218 ms -result(dst aligned, src unalign): memcpy_fast=156ms memcpy=484 ms -result(dst unalign, src aligned): memcpy_fast=172ms memcpy=546 ms -result(dst unalign, src unalign): memcpy_fast=172ms memcpy=515 ms - -benchmark(size=1024 bytes, times=4194304): -result(dst aligned, src aligned): memcpy_fast=109ms memcpy=172 ms -result(dst aligned, src unalign): memcpy_fast=187ms memcpy=453 ms -result(dst unalign, src aligned): memcpy_fast=172ms memcpy=437 ms -result(dst unalign, src unalign): memcpy_fast=156ms memcpy=452 ms - -benchmark(size=4096 bytes, times=524288): -result(dst aligned, src aligned): memcpy_fast=62ms memcpy=78 ms -result(dst aligned, src unalign): memcpy_fast=109ms memcpy=202 ms -result(dst unalign, src aligned): memcpy_fast=94ms memcpy=203 ms -result(dst unalign, src unalign): memcpy_fast=110ms memcpy=218 ms - -benchmark(size=8192 bytes, times=262144): -result(dst aligned, src aligned): memcpy_fast=62ms memcpy=78 ms 
-result(dst aligned, src unalign): memcpy_fast=78ms memcpy=202 ms -result(dst unalign, src aligned): memcpy_fast=78ms memcpy=203 ms -result(dst unalign, src unalign): memcpy_fast=94ms memcpy=203 ms - -benchmark(size=1048576 bytes, times=2048): -result(dst aligned, src aligned): memcpy_fast=203ms memcpy=191 ms -result(dst aligned, src unalign): memcpy_fast=219ms memcpy=281 ms -result(dst unalign, src aligned): memcpy_fast=218ms memcpy=328 ms -result(dst unalign, src unalign): memcpy_fast=218ms memcpy=312 ms - -benchmark(size=4194304 bytes, times=512): -result(dst aligned, src aligned): memcpy_fast=312ms memcpy=406 ms -result(dst aligned, src unalign): memcpy_fast=296ms memcpy=421 ms -result(dst unalign, src aligned): memcpy_fast=312ms memcpy=468 ms -result(dst unalign, src unalign): memcpy_fast=297ms memcpy=452 ms - -benchmark(size=8388608 bytes, times=256): -result(dst aligned, src aligned): memcpy_fast=281ms memcpy=452 ms -result(dst aligned, src unalign): memcpy_fast=280ms memcpy=468 ms -result(dst unalign, src aligned): memcpy_fast=298ms memcpy=514 ms -result(dst unalign, src unalign): memcpy_fast=344ms memcpy=472 ms - -benchmark random access: -memcpy_fast=515ms memcpy=1014ms - -*/ - - - - diff --git a/contrib/FastMemcpy/FastMemcpy.h b/contrib/FastMemcpy/FastMemcpy.h deleted file mode 100644 index 5dcbfcf1656..00000000000 --- a/contrib/FastMemcpy/FastMemcpy.h +++ /dev/null @@ -1,694 +0,0 @@ -//===================================================================== -// -// FastMemcpy.c - skywind3000@163.com, 2015 -// -// feature: -// 50% speed up in avg. vs standard memcpy (tested in vc2012/gcc5.1) -// -//===================================================================== -#ifndef __FAST_MEMCPY_H__ -#define __FAST_MEMCPY_H__ - -#include -#include -#include - - -//--------------------------------------------------------------------- -// force inline for compilers -//--------------------------------------------------------------------- -#ifndef INLINE -#ifdef __GNUC__ -#if (__GNUC__ > 3) || ((__GNUC__ == 3) && (__GNUC_MINOR__ >= 1)) - #define INLINE __inline__ __attribute__((always_inline)) -#else - #define INLINE __inline__ -#endif -#elif defined(_MSC_VER) - #define INLINE __forceinline -#elif (defined(__BORLANDC__) || defined(__WATCOMC__)) - #define INLINE __inline -#else - #define INLINE -#endif -#endif - -typedef __attribute__((__aligned__(1))) uint16_t uint16_unaligned_t; -typedef __attribute__((__aligned__(1))) uint32_t uint32_unaligned_t; -typedef __attribute__((__aligned__(1))) uint64_t uint64_unaligned_t; - -//--------------------------------------------------------------------- -// fast copy for different sizes -//--------------------------------------------------------------------- -static INLINE void memcpy_sse2_16(void *dst, const void *src) { - __m128i m0 = _mm_loadu_si128(((const __m128i*)src) + 0); - _mm_storeu_si128(((__m128i*)dst) + 0, m0); -} - -static INLINE void memcpy_sse2_32(void *dst, const void *src) { - __m128i m0 = _mm_loadu_si128(((const __m128i*)src) + 0); - __m128i m1 = _mm_loadu_si128(((const __m128i*)src) + 1); - _mm_storeu_si128(((__m128i*)dst) + 0, m0); - _mm_storeu_si128(((__m128i*)dst) + 1, m1); -} - -static INLINE void memcpy_sse2_64(void *dst, const void *src) { - __m128i m0 = _mm_loadu_si128(((const __m128i*)src) + 0); - __m128i m1 = _mm_loadu_si128(((const __m128i*)src) + 1); - __m128i m2 = _mm_loadu_si128(((const __m128i*)src) + 2); - __m128i m3 = _mm_loadu_si128(((const __m128i*)src) + 3); - _mm_storeu_si128(((__m128i*)dst) + 0, m0); - 
_mm_storeu_si128(((__m128i*)dst) + 1, m1); - _mm_storeu_si128(((__m128i*)dst) + 2, m2); - _mm_storeu_si128(((__m128i*)dst) + 3, m3); -} - -static INLINE void memcpy_sse2_128(void *dst, const void *src) { - __m128i m0 = _mm_loadu_si128(((const __m128i*)src) + 0); - __m128i m1 = _mm_loadu_si128(((const __m128i*)src) + 1); - __m128i m2 = _mm_loadu_si128(((const __m128i*)src) + 2); - __m128i m3 = _mm_loadu_si128(((const __m128i*)src) + 3); - __m128i m4 = _mm_loadu_si128(((const __m128i*)src) + 4); - __m128i m5 = _mm_loadu_si128(((const __m128i*)src) + 5); - __m128i m6 = _mm_loadu_si128(((const __m128i*)src) + 6); - __m128i m7 = _mm_loadu_si128(((const __m128i*)src) + 7); - _mm_storeu_si128(((__m128i*)dst) + 0, m0); - _mm_storeu_si128(((__m128i*)dst) + 1, m1); - _mm_storeu_si128(((__m128i*)dst) + 2, m2); - _mm_storeu_si128(((__m128i*)dst) + 3, m3); - _mm_storeu_si128(((__m128i*)dst) + 4, m4); - _mm_storeu_si128(((__m128i*)dst) + 5, m5); - _mm_storeu_si128(((__m128i*)dst) + 6, m6); - _mm_storeu_si128(((__m128i*)dst) + 7, m7); -} - - -//--------------------------------------------------------------------- -// tiny memory copy with jump table optimized -//--------------------------------------------------------------------- -/// Attribute is used to avoid an error with undefined behaviour sanitizer -/// ../contrib/FastMemcpy/FastMemcpy.h:91:56: runtime error: applying zero offset to null pointer -/// Found by 01307_orc_output_format.sh, cause - ORCBlockInputFormat and external ORC library. -__attribute__((__no_sanitize__("undefined"))) static INLINE void *memcpy_tiny(void *dst, const void *src, size_t size) { - unsigned char *dd = ((unsigned char*)dst) + size; - const unsigned char *ss = ((const unsigned char*)src) + size; - - switch (size) { - case 64: - memcpy_sse2_64(dd - 64, ss - 64); - case 0: - break; - - case 65: - memcpy_sse2_64(dd - 65, ss - 65); - case 1: - dd[-1] = ss[-1]; - break; - - case 66: - memcpy_sse2_64(dd - 66, ss - 66); - case 2: - *((uint16_unaligned_t*)(dd - 2)) = *((uint16_unaligned_t*)(ss - 2)); - break; - - case 67: - memcpy_sse2_64(dd - 67, ss - 67); - case 3: - *((uint16_unaligned_t*)(dd - 3)) = *((uint16_unaligned_t*)(ss - 3)); - dd[-1] = ss[-1]; - break; - - case 68: - memcpy_sse2_64(dd - 68, ss - 68); - case 4: - *((uint32_unaligned_t*)(dd - 4)) = *((uint32_unaligned_t*)(ss - 4)); - break; - - case 69: - memcpy_sse2_64(dd - 69, ss - 69); - case 5: - *((uint32_unaligned_t*)(dd - 5)) = *((uint32_unaligned_t*)(ss - 5)); - dd[-1] = ss[-1]; - break; - - case 70: - memcpy_sse2_64(dd - 70, ss - 70); - case 6: - *((uint32_unaligned_t*)(dd - 6)) = *((uint32_unaligned_t*)(ss - 6)); - *((uint16_unaligned_t*)(dd - 2)) = *((uint16_unaligned_t*)(ss - 2)); - break; - - case 71: - memcpy_sse2_64(dd - 71, ss - 71); - case 7: - *((uint32_unaligned_t*)(dd - 7)) = *((uint32_unaligned_t*)(ss - 7)); - *((uint32_unaligned_t*)(dd - 4)) = *((uint32_unaligned_t*)(ss - 4)); - break; - - case 72: - memcpy_sse2_64(dd - 72, ss - 72); - case 8: - *((uint64_unaligned_t*)(dd - 8)) = *((uint64_unaligned_t*)(ss - 8)); - break; - - case 73: - memcpy_sse2_64(dd - 73, ss - 73); - case 9: - *((uint64_unaligned_t*)(dd - 9)) = *((uint64_unaligned_t*)(ss - 9)); - dd[-1] = ss[-1]; - break; - - case 74: - memcpy_sse2_64(dd - 74, ss - 74); - case 10: - *((uint64_unaligned_t*)(dd - 10)) = *((uint64_unaligned_t*)(ss - 10)); - *((uint16_unaligned_t*)(dd - 2)) = *((uint16_unaligned_t*)(ss - 2)); - break; - - case 75: - memcpy_sse2_64(dd - 75, ss - 75); - case 11: - *((uint64_unaligned_t*)(dd - 11)) = 
*((uint64_unaligned_t*)(ss - 11)); - *((uint32_unaligned_t*)(dd - 4)) = *((uint32_unaligned_t*)(ss - 4)); - break; - - case 76: - memcpy_sse2_64(dd - 76, ss - 76); - case 12: - *((uint64_unaligned_t*)(dd - 12)) = *((uint64_unaligned_t*)(ss - 12)); - *((uint32_unaligned_t*)(dd - 4)) = *((uint32_unaligned_t*)(ss - 4)); - break; - - case 77: - memcpy_sse2_64(dd - 77, ss - 77); - case 13: - *((uint64_unaligned_t*)(dd - 13)) = *((uint64_unaligned_t*)(ss - 13)); - *((uint32_unaligned_t*)(dd - 5)) = *((uint32_unaligned_t*)(ss - 5)); - dd[-1] = ss[-1]; - break; - - case 78: - memcpy_sse2_64(dd - 78, ss - 78); - case 14: - *((uint64_unaligned_t*)(dd - 14)) = *((uint64_unaligned_t*)(ss - 14)); - *((uint64_unaligned_t*)(dd - 8)) = *((uint64_unaligned_t*)(ss - 8)); - break; - - case 79: - memcpy_sse2_64(dd - 79, ss - 79); - case 15: - *((uint64_unaligned_t*)(dd - 15)) = *((uint64_unaligned_t*)(ss - 15)); - *((uint64_unaligned_t*)(dd - 8)) = *((uint64_unaligned_t*)(ss - 8)); - break; - - case 80: - memcpy_sse2_64(dd - 80, ss - 80); - case 16: - memcpy_sse2_16(dd - 16, ss - 16); - break; - - case 81: - memcpy_sse2_64(dd - 81, ss - 81); - case 17: - memcpy_sse2_16(dd - 17, ss - 17); - dd[-1] = ss[-1]; - break; - - case 82: - memcpy_sse2_64(dd - 82, ss - 82); - case 18: - memcpy_sse2_16(dd - 18, ss - 18); - *((uint16_unaligned_t*)(dd - 2)) = *((uint16_unaligned_t*)(ss - 2)); - break; - - case 83: - memcpy_sse2_64(dd - 83, ss - 83); - case 19: - memcpy_sse2_16(dd - 19, ss - 19); - *((uint16_unaligned_t*)(dd - 3)) = *((uint16_unaligned_t*)(ss - 3)); - dd[-1] = ss[-1]; - break; - - case 84: - memcpy_sse2_64(dd - 84, ss - 84); - case 20: - memcpy_sse2_16(dd - 20, ss - 20); - *((uint32_unaligned_t*)(dd - 4)) = *((uint32_unaligned_t*)(ss - 4)); - break; - - case 85: - memcpy_sse2_64(dd - 85, ss - 85); - case 21: - memcpy_sse2_16(dd - 21, ss - 21); - *((uint32_unaligned_t*)(dd - 5)) = *((uint32_unaligned_t*)(ss - 5)); - dd[-1] = ss[-1]; - break; - - case 86: - memcpy_sse2_64(dd - 86, ss - 86); - case 22: - memcpy_sse2_16(dd - 22, ss - 22); - *((uint32_unaligned_t*)(dd - 6)) = *((uint32_unaligned_t*)(ss - 6)); - *((uint16_unaligned_t*)(dd - 2)) = *((uint16_unaligned_t*)(ss - 2)); - break; - - case 87: - memcpy_sse2_64(dd - 87, ss - 87); - case 23: - memcpy_sse2_16(dd - 23, ss - 23); - *((uint32_unaligned_t*)(dd - 7)) = *((uint32_unaligned_t*)(ss - 7)); - *((uint32_unaligned_t*)(dd - 4)) = *((uint32_unaligned_t*)(ss - 4)); - break; - - case 88: - memcpy_sse2_64(dd - 88, ss - 88); - case 24: - memcpy_sse2_16(dd - 24, ss - 24); - memcpy_sse2_16(dd - 16, ss - 16); - break; - - case 89: - memcpy_sse2_64(dd - 89, ss - 89); - case 25: - memcpy_sse2_16(dd - 25, ss - 25); - memcpy_sse2_16(dd - 16, ss - 16); - break; - - case 90: - memcpy_sse2_64(dd - 90, ss - 90); - case 26: - memcpy_sse2_16(dd - 26, ss - 26); - memcpy_sse2_16(dd - 16, ss - 16); - break; - - case 91: - memcpy_sse2_64(dd - 91, ss - 91); - case 27: - memcpy_sse2_16(dd - 27, ss - 27); - memcpy_sse2_16(dd - 16, ss - 16); - break; - - case 92: - memcpy_sse2_64(dd - 92, ss - 92); - case 28: - memcpy_sse2_16(dd - 28, ss - 28); - memcpy_sse2_16(dd - 16, ss - 16); - break; - - case 93: - memcpy_sse2_64(dd - 93, ss - 93); - case 29: - memcpy_sse2_16(dd - 29, ss - 29); - memcpy_sse2_16(dd - 16, ss - 16); - break; - - case 94: - memcpy_sse2_64(dd - 94, ss - 94); - case 30: - memcpy_sse2_16(dd - 30, ss - 30); - memcpy_sse2_16(dd - 16, ss - 16); - break; - - case 95: - memcpy_sse2_64(dd - 95, ss - 95); - case 31: - memcpy_sse2_16(dd - 31, ss - 31); - memcpy_sse2_16(dd 
- 16, ss - 16); - break; - - case 96: - memcpy_sse2_64(dd - 96, ss - 96); - case 32: - memcpy_sse2_32(dd - 32, ss - 32); - break; - - case 97: - memcpy_sse2_64(dd - 97, ss - 97); - case 33: - memcpy_sse2_32(dd - 33, ss - 33); - dd[-1] = ss[-1]; - break; - - case 98: - memcpy_sse2_64(dd - 98, ss - 98); - case 34: - memcpy_sse2_32(dd - 34, ss - 34); - *((uint16_unaligned_t*)(dd - 2)) = *((uint16_unaligned_t*)(ss - 2)); - break; - - case 99: - memcpy_sse2_64(dd - 99, ss - 99); - case 35: - memcpy_sse2_32(dd - 35, ss - 35); - *((uint16_unaligned_t*)(dd - 3)) = *((uint16_unaligned_t*)(ss - 3)); - dd[-1] = ss[-1]; - break; - - case 100: - memcpy_sse2_64(dd - 100, ss - 100); - case 36: - memcpy_sse2_32(dd - 36, ss - 36); - *((uint32_unaligned_t*)(dd - 4)) = *((uint32_unaligned_t*)(ss - 4)); - break; - - case 101: - memcpy_sse2_64(dd - 101, ss - 101); - case 37: - memcpy_sse2_32(dd - 37, ss - 37); - *((uint32_unaligned_t*)(dd - 5)) = *((uint32_unaligned_t*)(ss - 5)); - dd[-1] = ss[-1]; - break; - - case 102: - memcpy_sse2_64(dd - 102, ss - 102); - case 38: - memcpy_sse2_32(dd - 38, ss - 38); - *((uint32_unaligned_t*)(dd - 6)) = *((uint32_unaligned_t*)(ss - 6)); - *((uint16_unaligned_t*)(dd - 2)) = *((uint16_unaligned_t*)(ss - 2)); - break; - - case 103: - memcpy_sse2_64(dd - 103, ss - 103); - case 39: - memcpy_sse2_32(dd - 39, ss - 39); - *((uint32_unaligned_t*)(dd - 7)) = *((uint32_unaligned_t*)(ss - 7)); - *((uint32_unaligned_t*)(dd - 4)) = *((uint32_unaligned_t*)(ss - 4)); - break; - - case 104: - memcpy_sse2_64(dd - 104, ss - 104); - case 40: - memcpy_sse2_32(dd - 40, ss - 40); - *((uint64_unaligned_t*)(dd - 8)) = *((uint64_unaligned_t*)(ss - 8)); - break; - - case 105: - memcpy_sse2_64(dd - 105, ss - 105); - case 41: - memcpy_sse2_32(dd - 41, ss - 41); - *((uint64_unaligned_t*)(dd - 9)) = *((uint64_unaligned_t*)(ss - 9)); - dd[-1] = ss[-1]; - break; - - case 106: - memcpy_sse2_64(dd - 106, ss - 106); - case 42: - memcpy_sse2_32(dd - 42, ss - 42); - *((uint64_unaligned_t*)(dd - 10)) = *((uint64_unaligned_t*)(ss - 10)); - *((uint16_unaligned_t*)(dd - 2)) = *((uint16_unaligned_t*)(ss - 2)); - break; - - case 107: - memcpy_sse2_64(dd - 107, ss - 107); - case 43: - memcpy_sse2_32(dd - 43, ss - 43); - *((uint64_unaligned_t*)(dd - 11)) = *((uint64_unaligned_t*)(ss - 11)); - *((uint32_unaligned_t*)(dd - 4)) = *((uint32_unaligned_t*)(ss - 4)); - break; - - case 108: - memcpy_sse2_64(dd - 108, ss - 108); - case 44: - memcpy_sse2_32(dd - 44, ss - 44); - *((uint64_unaligned_t*)(dd - 12)) = *((uint64_unaligned_t*)(ss - 12)); - *((uint32_unaligned_t*)(dd - 4)) = *((uint32_unaligned_t*)(ss - 4)); - break; - - case 109: - memcpy_sse2_64(dd - 109, ss - 109); - case 45: - memcpy_sse2_32(dd - 45, ss - 45); - *((uint64_unaligned_t*)(dd - 13)) = *((uint64_unaligned_t*)(ss - 13)); - *((uint32_unaligned_t*)(dd - 5)) = *((uint32_unaligned_t*)(ss - 5)); - dd[-1] = ss[-1]; - break; - - case 110: - memcpy_sse2_64(dd - 110, ss - 110); - case 46: - memcpy_sse2_32(dd - 46, ss - 46); - *((uint64_unaligned_t*)(dd - 14)) = *((uint64_unaligned_t*)(ss - 14)); - *((uint64_unaligned_t*)(dd - 8)) = *((uint64_unaligned_t*)(ss - 8)); - break; - - case 111: - memcpy_sse2_64(dd - 111, ss - 111); - case 47: - memcpy_sse2_32(dd - 47, ss - 47); - *((uint64_unaligned_t*)(dd - 15)) = *((uint64_unaligned_t*)(ss - 15)); - *((uint64_unaligned_t*)(dd - 8)) = *((uint64_unaligned_t*)(ss - 8)); - break; - - case 112: - memcpy_sse2_64(dd - 112, ss - 112); - case 48: - memcpy_sse2_32(dd - 48, ss - 48); - memcpy_sse2_16(dd - 16, ss - 16); - break; 
- - case 113: - memcpy_sse2_64(dd - 113, ss - 113); - case 49: - memcpy_sse2_32(dd - 49, ss - 49); - memcpy_sse2_16(dd - 17, ss - 17); - dd[-1] = ss[-1]; - break; - - case 114: - memcpy_sse2_64(dd - 114, ss - 114); - case 50: - memcpy_sse2_32(dd - 50, ss - 50); - memcpy_sse2_16(dd - 18, ss - 18); - *((uint16_unaligned_t*)(dd - 2)) = *((uint16_unaligned_t*)(ss - 2)); - break; - - case 115: - memcpy_sse2_64(dd - 115, ss - 115); - case 51: - memcpy_sse2_32(dd - 51, ss - 51); - memcpy_sse2_16(dd - 19, ss - 19); - *((uint16_unaligned_t*)(dd - 3)) = *((uint16_unaligned_t*)(ss - 3)); - dd[-1] = ss[-1]; - break; - - case 116: - memcpy_sse2_64(dd - 116, ss - 116); - case 52: - memcpy_sse2_32(dd - 52, ss - 52); - memcpy_sse2_16(dd - 20, ss - 20); - *((uint32_unaligned_t*)(dd - 4)) = *((uint32_unaligned_t*)(ss - 4)); - break; - - case 117: - memcpy_sse2_64(dd - 117, ss - 117); - case 53: - memcpy_sse2_32(dd - 53, ss - 53); - memcpy_sse2_16(dd - 21, ss - 21); - *((uint32_unaligned_t*)(dd - 5)) = *((uint32_unaligned_t*)(ss - 5)); - dd[-1] = ss[-1]; - break; - - case 118: - memcpy_sse2_64(dd - 118, ss - 118); - case 54: - memcpy_sse2_32(dd - 54, ss - 54); - memcpy_sse2_16(dd - 22, ss - 22); - *((uint32_unaligned_t*)(dd - 6)) = *((uint32_unaligned_t*)(ss - 6)); - *((uint16_unaligned_t*)(dd - 2)) = *((uint16_unaligned_t*)(ss - 2)); - break; - - case 119: - memcpy_sse2_64(dd - 119, ss - 119); - case 55: - memcpy_sse2_32(dd - 55, ss - 55); - memcpy_sse2_16(dd - 23, ss - 23); - *((uint32_unaligned_t*)(dd - 7)) = *((uint32_unaligned_t*)(ss - 7)); - *((uint32_unaligned_t*)(dd - 4)) = *((uint32_unaligned_t*)(ss - 4)); - break; - - case 120: - memcpy_sse2_64(dd - 120, ss - 120); - case 56: - memcpy_sse2_32(dd - 56, ss - 56); - memcpy_sse2_16(dd - 24, ss - 24); - memcpy_sse2_16(dd - 16, ss - 16); - break; - - case 121: - memcpy_sse2_64(dd - 121, ss - 121); - case 57: - memcpy_sse2_32(dd - 57, ss - 57); - memcpy_sse2_16(dd - 25, ss - 25); - memcpy_sse2_16(dd - 16, ss - 16); - break; - - case 122: - memcpy_sse2_64(dd - 122, ss - 122); - case 58: - memcpy_sse2_32(dd - 58, ss - 58); - memcpy_sse2_16(dd - 26, ss - 26); - memcpy_sse2_16(dd - 16, ss - 16); - break; - - case 123: - memcpy_sse2_64(dd - 123, ss - 123); - case 59: - memcpy_sse2_32(dd - 59, ss - 59); - memcpy_sse2_16(dd - 27, ss - 27); - memcpy_sse2_16(dd - 16, ss - 16); - break; - - case 124: - memcpy_sse2_64(dd - 124, ss - 124); - case 60: - memcpy_sse2_32(dd - 60, ss - 60); - memcpy_sse2_16(dd - 28, ss - 28); - memcpy_sse2_16(dd - 16, ss - 16); - break; - - case 125: - memcpy_sse2_64(dd - 125, ss - 125); - case 61: - memcpy_sse2_32(dd - 61, ss - 61); - memcpy_sse2_16(dd - 29, ss - 29); - memcpy_sse2_16(dd - 16, ss - 16); - break; - - case 126: - memcpy_sse2_64(dd - 126, ss - 126); - case 62: - memcpy_sse2_32(dd - 62, ss - 62); - memcpy_sse2_16(dd - 30, ss - 30); - memcpy_sse2_16(dd - 16, ss - 16); - break; - - case 127: - memcpy_sse2_64(dd - 127, ss - 127); - case 63: - memcpy_sse2_32(dd - 63, ss - 63); - memcpy_sse2_16(dd - 31, ss - 31); - memcpy_sse2_16(dd - 16, ss - 16); - break; - - case 128: - memcpy_sse2_128(dd - 128, ss - 128); - break; - } - - return dst; -} - - -//--------------------------------------------------------------------- -// main routine -//--------------------------------------------------------------------- -static void* memcpy_fast(void *destination, const void *source, size_t size) -{ - unsigned char *dst = (unsigned char*)destination; - const unsigned char *src = (const unsigned char*)source; - static size_t cachesize = 0x200000; 
// L2-cache size - size_t padding; - - // small memory copy - if (size <= 128) { - return memcpy_tiny(dst, src, size); - } - - // align destination to 16 bytes boundary - padding = (16 - (((size_t)dst) & 15)) & 15; - - if (padding > 0) { - __m128i head = _mm_loadu_si128((const __m128i*)src); - _mm_storeu_si128((__m128i*)dst, head); - dst += padding; - src += padding; - size -= padding; - } - - // medium size copy - if (size <= cachesize) { - __m128i c0, c1, c2, c3, c4, c5, c6, c7; - - for (; size >= 128; size -= 128) { - c0 = _mm_loadu_si128(((const __m128i*)src) + 0); - c1 = _mm_loadu_si128(((const __m128i*)src) + 1); - c2 = _mm_loadu_si128(((const __m128i*)src) + 2); - c3 = _mm_loadu_si128(((const __m128i*)src) + 3); - c4 = _mm_loadu_si128(((const __m128i*)src) + 4); - c5 = _mm_loadu_si128(((const __m128i*)src) + 5); - c6 = _mm_loadu_si128(((const __m128i*)src) + 6); - c7 = _mm_loadu_si128(((const __m128i*)src) + 7); - _mm_prefetch((const char*)(src + 256), _MM_HINT_NTA); - src += 128; - _mm_store_si128((((__m128i*)dst) + 0), c0); - _mm_store_si128((((__m128i*)dst) + 1), c1); - _mm_store_si128((((__m128i*)dst) + 2), c2); - _mm_store_si128((((__m128i*)dst) + 3), c3); - _mm_store_si128((((__m128i*)dst) + 4), c4); - _mm_store_si128((((__m128i*)dst) + 5), c5); - _mm_store_si128((((__m128i*)dst) + 6), c6); - _mm_store_si128((((__m128i*)dst) + 7), c7); - dst += 128; - } - } - else { // big memory copy - __m128i c0, c1, c2, c3, c4, c5, c6, c7; - - _mm_prefetch((const char*)(src), _MM_HINT_NTA); - - if ((((size_t)src) & 15) == 0) { // source aligned - for (; size >= 128; size -= 128) { - c0 = _mm_load_si128(((const __m128i*)src) + 0); - c1 = _mm_load_si128(((const __m128i*)src) + 1); - c2 = _mm_load_si128(((const __m128i*)src) + 2); - c3 = _mm_load_si128(((const __m128i*)src) + 3); - c4 = _mm_load_si128(((const __m128i*)src) + 4); - c5 = _mm_load_si128(((const __m128i*)src) + 5); - c6 = _mm_load_si128(((const __m128i*)src) + 6); - c7 = _mm_load_si128(((const __m128i*)src) + 7); - _mm_prefetch((const char*)(src + 256), _MM_HINT_NTA); - src += 128; - _mm_stream_si128((((__m128i*)dst) + 0), c0); - _mm_stream_si128((((__m128i*)dst) + 1), c1); - _mm_stream_si128((((__m128i*)dst) + 2), c2); - _mm_stream_si128((((__m128i*)dst) + 3), c3); - _mm_stream_si128((((__m128i*)dst) + 4), c4); - _mm_stream_si128((((__m128i*)dst) + 5), c5); - _mm_stream_si128((((__m128i*)dst) + 6), c6); - _mm_stream_si128((((__m128i*)dst) + 7), c7); - dst += 128; - } - } - else { // source unaligned - for (; size >= 128; size -= 128) { - c0 = _mm_loadu_si128(((const __m128i*)src) + 0); - c1 = _mm_loadu_si128(((const __m128i*)src) + 1); - c2 = _mm_loadu_si128(((const __m128i*)src) + 2); - c3 = _mm_loadu_si128(((const __m128i*)src) + 3); - c4 = _mm_loadu_si128(((const __m128i*)src) + 4); - c5 = _mm_loadu_si128(((const __m128i*)src) + 5); - c6 = _mm_loadu_si128(((const __m128i*)src) + 6); - c7 = _mm_loadu_si128(((const __m128i*)src) + 7); - _mm_prefetch((const char*)(src + 256), _MM_HINT_NTA); - src += 128; - _mm_stream_si128((((__m128i*)dst) + 0), c0); - _mm_stream_si128((((__m128i*)dst) + 1), c1); - _mm_stream_si128((((__m128i*)dst) + 2), c2); - _mm_stream_si128((((__m128i*)dst) + 3), c3); - _mm_stream_si128((((__m128i*)dst) + 4), c4); - _mm_stream_si128((((__m128i*)dst) + 5), c5); - _mm_stream_si128((((__m128i*)dst) + 6), c6); - _mm_stream_si128((((__m128i*)dst) + 7), c7); - dst += 128; - } - } - _mm_sfence(); - } - - memcpy_tiny(dst, src, size); - - return destination; -} - - -#endif diff --git 
a/contrib/FastMemcpy/FastMemcpy_Avx.c b/contrib/FastMemcpy/FastMemcpy_Avx.c
deleted file mode 100644
index 6538c6b2126..00000000000
--- a/contrib/FastMemcpy/FastMemcpy_Avx.c
+++ /dev/null
@@ -1,171 +0,0 @@
-//=====================================================================
-//
-// FastMemcpy.c - skywind3000@163.com, 2015
-//
-// feature:
-// 50% speed up in avg. vs standard memcpy (tested in vc2012/gcc4.9)
-//
-//=====================================================================
-#include <stdio.h>
-#include <stdlib.h>
-#include <memory.h>
-#include <string.h>
-#include <time.h>
-
-#if (defined(_WIN32) || defined(WIN32))
-#include <windows.h>
-#include <mmsystem.h>
-#ifdef _MSC_VER
-#pragma comment(lib, "winmm.lib")
-#endif
-#elif defined(__unix)
-#include <sys/time.h>
-#include <unistd.h>
-#else
-#error it can only be compiled under windows or unix
-#endif
-
-#include "FastMemcpy_Avx.h"
-
-
-unsigned int gettime()
-{
-    #if (defined(_WIN32) || defined(WIN32))
-    return timeGetTime();
-    #else
-    static struct timezone tz={ 0,0 };
-    struct timeval time;
-    gettimeofday(&time,&tz);
-    return (time.tv_sec * 1000 + time.tv_usec / 1000);
-    #endif
-}
-
-void sleepms(unsigned int millisec)
-{
-#if defined(_WIN32) || defined(WIN32)
-    Sleep(millisec);
-#else
-    usleep(millisec * 1000);
-#endif
-}
-
-
-
-void benchmark(int dstalign, int srcalign, size_t size, int times)
-{
-    char *DATA1 = (char*)malloc(size + 64);
-    char *DATA2 = (char*)malloc(size + 64);
-    size_t LINEAR1 = ((size_t)DATA1);
-    size_t LINEAR2 = ((size_t)DATA2);
-    char *ALIGN1 = (char*)(((64 - (LINEAR1 & 63)) & 63) + LINEAR1);
-    char *ALIGN2 = (char*)(((64 - (LINEAR2 & 63)) & 63) + LINEAR2);
-    char *dst = (dstalign)? ALIGN1 : (ALIGN1 + 1);
-    char *src = (srcalign)? ALIGN2 : (ALIGN2 + 3);
-    unsigned int t1, t2;
-    int k;
-
-    sleepms(100);
-    t1 = gettime();
-    for (k = times; k > 0; k--) {
-        memcpy(dst, src, size);
-    }
-    t1 = gettime() - t1;
-    sleepms(100);
-    t2 = gettime();
-    for (k = times; k > 0; k--) {
-        memcpy_fast(dst, src, size);
-    }
-    t2 = gettime() - t2;
-
-    free(DATA1);
-    free(DATA2);
-
-    printf("result(dst %s, src %s): memcpy_fast=%dms memcpy=%d ms\n",
-        dstalign? "aligned" : "unalign",
-        srcalign?
"aligned" : "unalign", (int)t2, (int)t1); -} - - -void bench(int copysize, int times) -{ - printf("benchmark(size=%d bytes, times=%d):\n", copysize, times); - benchmark(1, 1, copysize, times); - benchmark(1, 0, copysize, times); - benchmark(0, 1, copysize, times); - benchmark(0, 0, copysize, times); - printf("\n"); -} - - -void random_bench(int maxsize, int times) -{ - static char A[11 * 1024 * 1024 + 2]; - static char B[11 * 1024 * 1024 + 2]; - static int random_offsets[0x10000]; - static int random_sizes[0x8000]; - unsigned int i, p1, p2; - unsigned int t1, t2; - for (i = 0; i < 0x10000; i++) { // generate random offsets - random_offsets[i] = rand() % (10 * 1024 * 1024 + 1); - } - for (i = 0; i < 0x8000; i++) { // generate random sizes - random_sizes[i] = 1 + rand() % maxsize; - } - sleepms(100); - t1 = gettime(); - for (p1 = 0, p2 = 0, i = 0; i < times; i++) { - int offset1 = random_offsets[(p1++) & 0xffff]; - int offset2 = random_offsets[(p1++) & 0xffff]; - int size = random_sizes[(p2++) & 0x7fff]; - memcpy(A + offset1, B + offset2, size); - } - t1 = gettime() - t1; - sleepms(100); - t2 = gettime(); - for (p1 = 0, p2 = 0, i = 0; i < times; i++) { - int offset1 = random_offsets[(p1++) & 0xffff]; - int offset2 = random_offsets[(p1++) & 0xffff]; - int size = random_sizes[(p2++) & 0x7fff]; - memcpy_fast(A + offset1, B + offset2, size); - } - t2 = gettime() - t2; - printf("benchmark random access:\n"); - printf("memcpy_fast=%dms memcpy=%dms\n\n", (int)t2, (int)t1); -} - - -#ifdef _MSC_VER -#pragma comment(lib, "winmm.lib") -#endif - -int main(void) -{ -#if 1 - bench(32, 0x1000000); - bench(64, 0x1000000); - bench(512, 0x800000); - bench(1024, 0x400000); -#endif - bench(4096, 0x80000); - bench(8192, 0x40000); -#if 1 - bench(1024 * 1024 * 1, 0x800); - bench(1024 * 1024 * 4, 0x200); -#endif - bench(1024 * 1024 * 8, 0x100); - - random_bench(2048, 8000000); - - return 0; -} - - - - -/* - -*/ - - - - diff --git a/contrib/FastMemcpy/FastMemcpy_Avx.h b/contrib/FastMemcpy/FastMemcpy_Avx.h deleted file mode 100644 index 8ba064b0350..00000000000 --- a/contrib/FastMemcpy/FastMemcpy_Avx.h +++ /dev/null @@ -1,492 +0,0 @@ -//===================================================================== -// -// FastMemcpy.c - skywind3000@163.com, 2015 -// -// feature: -// 50% speed up in avg. 
vs standard memcpy (tested in vc2012/gcc5.1)
-//
-//=====================================================================
-#ifndef __FAST_MEMCPY_H__
-#define __FAST_MEMCPY_H__
-
-#include <stddef.h>
-#include <stdint.h>
-#include <immintrin.h>
-
-
-//---------------------------------------------------------------------
-// force inline for compilers
-//---------------------------------------------------------------------
-#ifndef INLINE
-#ifdef __GNUC__
-#if (__GNUC__ > 3) || ((__GNUC__ == 3) && (__GNUC_MINOR__ >= 1))
-    #define INLINE __inline__ __attribute__((always_inline))
-#else
-    #define INLINE __inline__
-#endif
-#elif defined(_MSC_VER)
-    #define INLINE __forceinline
-#elif (defined(__BORLANDC__) || defined(__WATCOMC__))
-    #define INLINE __inline
-#else
-    #define INLINE
-#endif
-#endif
-
-
-
-//---------------------------------------------------------------------
-// fast copy for different sizes
-//---------------------------------------------------------------------
-static INLINE void memcpy_avx_16(void *dst, const void *src) {
-#if 1
-    __m128i m0 = _mm_loadu_si128(((const __m128i*)src) + 0);
-    _mm_storeu_si128(((__m128i*)dst) + 0, m0);
-#else
-    *((uint64_t*)((char*)dst + 0)) = *((uint64_t*)((const char*)src + 0));
-    *((uint64_t*)((char*)dst + 8)) = *((uint64_t*)((const char*)src + 8));
-#endif
-}
-
-static INLINE void memcpy_avx_32(void *dst, const void *src) {
-    __m256i m0 = _mm256_loadu_si256(((const __m256i*)src) + 0);
-    _mm256_storeu_si256(((__m256i*)dst) + 0, m0);
-}
-
-static INLINE void memcpy_avx_64(void *dst, const void *src) {
-    __m256i m0 = _mm256_loadu_si256(((const __m256i*)src) + 0);
-    __m256i m1 = _mm256_loadu_si256(((const __m256i*)src) + 1);
-    _mm256_storeu_si256(((__m256i*)dst) + 0, m0);
-    _mm256_storeu_si256(((__m256i*)dst) + 1, m1);
-}
-
-static INLINE void memcpy_avx_128(void *dst, const void *src) {
-    __m256i m0 = _mm256_loadu_si256(((const __m256i*)src) + 0);
-    __m256i m1 = _mm256_loadu_si256(((const __m256i*)src) + 1);
-    __m256i m2 = _mm256_loadu_si256(((const __m256i*)src) + 2);
-    __m256i m3 = _mm256_loadu_si256(((const __m256i*)src) + 3);
-    _mm256_storeu_si256(((__m256i*)dst) + 0, m0);
-    _mm256_storeu_si256(((__m256i*)dst) + 1, m1);
-    _mm256_storeu_si256(((__m256i*)dst) + 2, m2);
-    _mm256_storeu_si256(((__m256i*)dst) + 3, m3);
-}
-
-static INLINE void memcpy_avx_256(void *dst, const void *src) {
-    __m256i m0 = _mm256_loadu_si256(((const __m256i*)src) + 0);
-    __m256i m1 = _mm256_loadu_si256(((const __m256i*)src) + 1);
-    __m256i m2 = _mm256_loadu_si256(((const __m256i*)src) + 2);
-    __m256i m3 = _mm256_loadu_si256(((const __m256i*)src) + 3);
-    __m256i m4 = _mm256_loadu_si256(((const __m256i*)src) + 4);
-    __m256i m5 = _mm256_loadu_si256(((const __m256i*)src) + 5);
-    __m256i m6 = _mm256_loadu_si256(((const __m256i*)src) + 6);
-    __m256i m7 = _mm256_loadu_si256(((const __m256i*)src) + 7);
-    _mm256_storeu_si256(((__m256i*)dst) + 0, m0);
-    _mm256_storeu_si256(((__m256i*)dst) + 1, m1);
-    _mm256_storeu_si256(((__m256i*)dst) + 2, m2);
-    _mm256_storeu_si256(((__m256i*)dst) + 3, m3);
-    _mm256_storeu_si256(((__m256i*)dst) + 4, m4);
-    _mm256_storeu_si256(((__m256i*)dst) + 5, m5);
-    _mm256_storeu_si256(((__m256i*)dst) + 6, m6);
-    _mm256_storeu_si256(((__m256i*)dst) + 7, m7);
-}
-
-
-//---------------------------------------------------------------------
-// tiny memory copy with jump table optimized
-//---------------------------------------------------------------------
-static INLINE void *memcpy_tiny(void *dst, const void *src, size_t size) {
-    unsigned char *dd = ((unsigned char*)dst) + size;
-    const
unsigned char *ss = ((const unsigned char*)src) + size; - - switch (size) { - case 128: memcpy_avx_128(dd - 128, ss - 128); - case 0: break; - case 129: memcpy_avx_128(dd - 129, ss - 129); - case 1: dd[-1] = ss[-1]; break; - case 130: memcpy_avx_128(dd - 130, ss - 130); - case 2: *((uint16_t*)(dd - 2)) = *((uint16_t*)(ss - 2)); break; - case 131: memcpy_avx_128(dd - 131, ss - 131); - case 3: *((uint16_t*)(dd - 3)) = *((uint16_t*)(ss - 3)); dd[-1] = ss[-1]; break; - case 132: memcpy_avx_128(dd - 132, ss - 132); - case 4: *((uint32_t*)(dd - 4)) = *((uint32_t*)(ss - 4)); break; - case 133: memcpy_avx_128(dd - 133, ss - 133); - case 5: *((uint32_t*)(dd - 5)) = *((uint32_t*)(ss - 5)); dd[-1] = ss[-1]; break; - case 134: memcpy_avx_128(dd - 134, ss - 134); - case 6: *((uint32_t*)(dd - 6)) = *((uint32_t*)(ss - 6)); *((uint16_t*)(dd - 2)) = *((uint16_t*)(ss - 2)); break; - case 135: memcpy_avx_128(dd - 135, ss - 135); - case 7: *((uint32_t*)(dd - 7)) = *((uint32_t*)(ss - 7)); *((uint32_t*)(dd - 4)) = *((uint32_t*)(ss - 4)); break; - case 136: memcpy_avx_128(dd - 136, ss - 136); - case 8: *((uint64_t*)(dd - 8)) = *((uint64_t*)(ss - 8)); break; - case 137: memcpy_avx_128(dd - 137, ss - 137); - case 9: *((uint64_t*)(dd - 9)) = *((uint64_t*)(ss - 9)); dd[-1] = ss[-1]; break; - case 138: memcpy_avx_128(dd - 138, ss - 138); - case 10: *((uint64_t*)(dd - 10)) = *((uint64_t*)(ss - 10)); *((uint16_t*)(dd - 2)) = *((uint16_t*)(ss - 2)); break; - case 139: memcpy_avx_128(dd - 139, ss - 139); - case 11: *((uint64_t*)(dd - 11)) = *((uint64_t*)(ss - 11)); *((uint32_t*)(dd - 4)) = *((uint32_t*)(ss - 4)); break; - case 140: memcpy_avx_128(dd - 140, ss - 140); - case 12: *((uint64_t*)(dd - 12)) = *((uint64_t*)(ss - 12)); *((uint32_t*)(dd - 4)) = *((uint32_t*)(ss - 4)); break; - case 141: memcpy_avx_128(dd - 141, ss - 141); - case 13: *((uint64_t*)(dd - 13)) = *((uint64_t*)(ss - 13)); *((uint64_t*)(dd - 8)) = *((uint64_t*)(ss - 8)); break; - case 142: memcpy_avx_128(dd - 142, ss - 142); - case 14: *((uint64_t*)(dd - 14)) = *((uint64_t*)(ss - 14)); *((uint64_t*)(dd - 8)) = *((uint64_t*)(ss - 8)); break; - case 143: memcpy_avx_128(dd - 143, ss - 143); - case 15: *((uint64_t*)(dd - 15)) = *((uint64_t*)(ss - 15)); *((uint64_t*)(dd - 8)) = *((uint64_t*)(ss - 8)); break; - case 144: memcpy_avx_128(dd - 144, ss - 144); - case 16: memcpy_avx_16(dd - 16, ss - 16); break; - case 145: memcpy_avx_128(dd - 145, ss - 145); - case 17: memcpy_avx_16(dd - 17, ss - 17); dd[-1] = ss[-1]; break; - case 146: memcpy_avx_128(dd - 146, ss - 146); - case 18: memcpy_avx_16(dd - 18, ss - 18); *((uint16_t*)(dd - 2)) = *((uint16_t*)(ss - 2)); break; - case 147: memcpy_avx_128(dd - 147, ss - 147); - case 19: memcpy_avx_16(dd - 19, ss - 19); *((uint32_t*)(dd - 4)) = *((uint32_t*)(ss - 4)); break; - case 148: memcpy_avx_128(dd - 148, ss - 148); - case 20: memcpy_avx_16(dd - 20, ss - 20); *((uint32_t*)(dd - 4)) = *((uint32_t*)(ss - 4)); break; - case 149: memcpy_avx_128(dd - 149, ss - 149); - case 21: memcpy_avx_16(dd - 21, ss - 21); *((uint64_t*)(dd - 8)) = *((uint64_t*)(ss - 8)); break; - case 150: memcpy_avx_128(dd - 150, ss - 150); - case 22: memcpy_avx_16(dd - 22, ss - 22); *((uint64_t*)(dd - 8)) = *((uint64_t*)(ss - 8)); break; - case 151: memcpy_avx_128(dd - 151, ss - 151); - case 23: memcpy_avx_16(dd - 23, ss - 23); *((uint64_t*)(dd - 8)) = *((uint64_t*)(ss - 8)); break; - case 152: memcpy_avx_128(dd - 152, ss - 152); - case 24: memcpy_avx_16(dd - 24, ss - 24); *((uint64_t*)(dd - 8)) = *((uint64_t*)(ss - 8)); break; - case 153: 
memcpy_avx_128(dd - 153, ss - 153); - case 25: memcpy_avx_16(dd - 25, ss - 25); memcpy_avx_16(dd - 16, ss - 16); break; - case 154: memcpy_avx_128(dd - 154, ss - 154); - case 26: memcpy_avx_16(dd - 26, ss - 26); memcpy_avx_16(dd - 16, ss - 16); break; - case 155: memcpy_avx_128(dd - 155, ss - 155); - case 27: memcpy_avx_16(dd - 27, ss - 27); memcpy_avx_16(dd - 16, ss - 16); break; - case 156: memcpy_avx_128(dd - 156, ss - 156); - case 28: memcpy_avx_16(dd - 28, ss - 28); memcpy_avx_16(dd - 16, ss - 16); break; - case 157: memcpy_avx_128(dd - 157, ss - 157); - case 29: memcpy_avx_16(dd - 29, ss - 29); memcpy_avx_16(dd - 16, ss - 16); break; - case 158: memcpy_avx_128(dd - 158, ss - 158); - case 30: memcpy_avx_16(dd - 30, ss - 30); memcpy_avx_16(dd - 16, ss - 16); break; - case 159: memcpy_avx_128(dd - 159, ss - 159); - case 31: memcpy_avx_16(dd - 31, ss - 31); memcpy_avx_16(dd - 16, ss - 16); break; - case 160: memcpy_avx_128(dd - 160, ss - 160); - case 32: memcpy_avx_32(dd - 32, ss - 32); break; - case 161: memcpy_avx_128(dd - 161, ss - 161); - case 33: memcpy_avx_32(dd - 33, ss - 33); dd[-1] = ss[-1]; break; - case 162: memcpy_avx_128(dd - 162, ss - 162); - case 34: memcpy_avx_32(dd - 34, ss - 34); *((uint16_t*)(dd - 2)) = *((uint16_t*)(ss - 2)); break; - case 163: memcpy_avx_128(dd - 163, ss - 163); - case 35: memcpy_avx_32(dd - 35, ss - 35); *((uint32_t*)(dd - 4)) = *((uint32_t*)(ss - 4)); break; - case 164: memcpy_avx_128(dd - 164, ss - 164); - case 36: memcpy_avx_32(dd - 36, ss - 36); *((uint32_t*)(dd - 4)) = *((uint32_t*)(ss - 4)); break; - case 165: memcpy_avx_128(dd - 165, ss - 165); - case 37: memcpy_avx_32(dd - 37, ss - 37); *((uint64_t*)(dd - 8)) = *((uint64_t*)(ss - 8)); break; - case 166: memcpy_avx_128(dd - 166, ss - 166); - case 38: memcpy_avx_32(dd - 38, ss - 38); *((uint64_t*)(dd - 8)) = *((uint64_t*)(ss - 8)); break; - case 167: memcpy_avx_128(dd - 167, ss - 167); - case 39: memcpy_avx_32(dd - 39, ss - 39); *((uint64_t*)(dd - 8)) = *((uint64_t*)(ss - 8)); break; - case 168: memcpy_avx_128(dd - 168, ss - 168); - case 40: memcpy_avx_32(dd - 40, ss - 40); *((uint64_t*)(dd - 8)) = *((uint64_t*)(ss - 8)); break; - case 169: memcpy_avx_128(dd - 169, ss - 169); - case 41: memcpy_avx_32(dd - 41, ss - 41); memcpy_avx_16(dd - 16, ss - 16); break; - case 170: memcpy_avx_128(dd - 170, ss - 170); - case 42: memcpy_avx_32(dd - 42, ss - 42); memcpy_avx_16(dd - 16, ss - 16); break; - case 171: memcpy_avx_128(dd - 171, ss - 171); - case 43: memcpy_avx_32(dd - 43, ss - 43); memcpy_avx_16(dd - 16, ss - 16); break; - case 172: memcpy_avx_128(dd - 172, ss - 172); - case 44: memcpy_avx_32(dd - 44, ss - 44); memcpy_avx_16(dd - 16, ss - 16); break; - case 173: memcpy_avx_128(dd - 173, ss - 173); - case 45: memcpy_avx_32(dd - 45, ss - 45); memcpy_avx_16(dd - 16, ss - 16); break; - case 174: memcpy_avx_128(dd - 174, ss - 174); - case 46: memcpy_avx_32(dd - 46, ss - 46); memcpy_avx_16(dd - 16, ss - 16); break; - case 175: memcpy_avx_128(dd - 175, ss - 175); - case 47: memcpy_avx_32(dd - 47, ss - 47); memcpy_avx_16(dd - 16, ss - 16); break; - case 176: memcpy_avx_128(dd - 176, ss - 176); - case 48: memcpy_avx_32(dd - 48, ss - 48); memcpy_avx_16(dd - 16, ss - 16); break; - case 177: memcpy_avx_128(dd - 177, ss - 177); - case 49: memcpy_avx_32(dd - 49, ss - 49); memcpy_avx_32(dd - 32, ss - 32); break; - case 178: memcpy_avx_128(dd - 178, ss - 178); - case 50: memcpy_avx_32(dd - 50, ss - 50); memcpy_avx_32(dd - 32, ss - 32); break; - case 179: memcpy_avx_128(dd - 179, ss - 179); - case 51: 
memcpy_avx_32(dd - 51, ss - 51); memcpy_avx_32(dd - 32, ss - 32); break; - case 180: memcpy_avx_128(dd - 180, ss - 180); - case 52: memcpy_avx_32(dd - 52, ss - 52); memcpy_avx_32(dd - 32, ss - 32); break; - case 181: memcpy_avx_128(dd - 181, ss - 181); - case 53: memcpy_avx_32(dd - 53, ss - 53); memcpy_avx_32(dd - 32, ss - 32); break; - case 182: memcpy_avx_128(dd - 182, ss - 182); - case 54: memcpy_avx_32(dd - 54, ss - 54); memcpy_avx_32(dd - 32, ss - 32); break; - case 183: memcpy_avx_128(dd - 183, ss - 183); - case 55: memcpy_avx_32(dd - 55, ss - 55); memcpy_avx_32(dd - 32, ss - 32); break; - case 184: memcpy_avx_128(dd - 184, ss - 184); - case 56: memcpy_avx_32(dd - 56, ss - 56); memcpy_avx_32(dd - 32, ss - 32); break; - case 185: memcpy_avx_128(dd - 185, ss - 185); - case 57: memcpy_avx_32(dd - 57, ss - 57); memcpy_avx_32(dd - 32, ss - 32); break; - case 186: memcpy_avx_128(dd - 186, ss - 186); - case 58: memcpy_avx_32(dd - 58, ss - 58); memcpy_avx_32(dd - 32, ss - 32); break; - case 187: memcpy_avx_128(dd - 187, ss - 187); - case 59: memcpy_avx_32(dd - 59, ss - 59); memcpy_avx_32(dd - 32, ss - 32); break; - case 188: memcpy_avx_128(dd - 188, ss - 188); - case 60: memcpy_avx_32(dd - 60, ss - 60); memcpy_avx_32(dd - 32, ss - 32); break; - case 189: memcpy_avx_128(dd - 189, ss - 189); - case 61: memcpy_avx_32(dd - 61, ss - 61); memcpy_avx_32(dd - 32, ss - 32); break; - case 190: memcpy_avx_128(dd - 190, ss - 190); - case 62: memcpy_avx_32(dd - 62, ss - 62); memcpy_avx_32(dd - 32, ss - 32); break; - case 191: memcpy_avx_128(dd - 191, ss - 191); - case 63: memcpy_avx_32(dd - 63, ss - 63); memcpy_avx_32(dd - 32, ss - 32); break; - case 192: memcpy_avx_128(dd - 192, ss - 192); - case 64: memcpy_avx_64(dd - 64, ss - 64); break; - case 193: memcpy_avx_128(dd - 193, ss - 193); - case 65: memcpy_avx_64(dd - 65, ss - 65); dd[-1] = ss[-1]; break; - case 194: memcpy_avx_128(dd - 194, ss - 194); - case 66: memcpy_avx_64(dd - 66, ss - 66); *((uint16_t*)(dd - 2)) = *((uint16_t*)(ss - 2)); break; - case 195: memcpy_avx_128(dd - 195, ss - 195); - case 67: memcpy_avx_64(dd - 67, ss - 67); *((uint32_t*)(dd - 4)) = *((uint32_t*)(ss - 4)); break; - case 196: memcpy_avx_128(dd - 196, ss - 196); - case 68: memcpy_avx_64(dd - 68, ss - 68); *((uint32_t*)(dd - 4)) = *((uint32_t*)(ss - 4)); break; - case 197: memcpy_avx_128(dd - 197, ss - 197); - case 69: memcpy_avx_64(dd - 69, ss - 69); *((uint64_t*)(dd - 8)) = *((uint64_t*)(ss - 8)); break; - case 198: memcpy_avx_128(dd - 198, ss - 198); - case 70: memcpy_avx_64(dd - 70, ss - 70); *((uint64_t*)(dd - 8)) = *((uint64_t*)(ss - 8)); break; - case 199: memcpy_avx_128(dd - 199, ss - 199); - case 71: memcpy_avx_64(dd - 71, ss - 71); *((uint64_t*)(dd - 8)) = *((uint64_t*)(ss - 8)); break; - case 200: memcpy_avx_128(dd - 200, ss - 200); - case 72: memcpy_avx_64(dd - 72, ss - 72); *((uint64_t*)(dd - 8)) = *((uint64_t*)(ss - 8)); break; - case 201: memcpy_avx_128(dd - 201, ss - 201); - case 73: memcpy_avx_64(dd - 73, ss - 73); memcpy_avx_16(dd - 16, ss - 16); break; - case 202: memcpy_avx_128(dd - 202, ss - 202); - case 74: memcpy_avx_64(dd - 74, ss - 74); memcpy_avx_16(dd - 16, ss - 16); break; - case 203: memcpy_avx_128(dd - 203, ss - 203); - case 75: memcpy_avx_64(dd - 75, ss - 75); memcpy_avx_16(dd - 16, ss - 16); break; - case 204: memcpy_avx_128(dd - 204, ss - 204); - case 76: memcpy_avx_64(dd - 76, ss - 76); memcpy_avx_16(dd - 16, ss - 16); break; - case 205: memcpy_avx_128(dd - 205, ss - 205); - case 77: memcpy_avx_64(dd - 77, ss - 77); memcpy_avx_16(dd - 16, ss 
- 16); break; - case 206: memcpy_avx_128(dd - 206, ss - 206); - case 78: memcpy_avx_64(dd - 78, ss - 78); memcpy_avx_16(dd - 16, ss - 16); break; - case 207: memcpy_avx_128(dd - 207, ss - 207); - case 79: memcpy_avx_64(dd - 79, ss - 79); memcpy_avx_16(dd - 16, ss - 16); break; - case 208: memcpy_avx_128(dd - 208, ss - 208); - case 80: memcpy_avx_64(dd - 80, ss - 80); memcpy_avx_16(dd - 16, ss - 16); break; - case 209: memcpy_avx_128(dd - 209, ss - 209); - case 81: memcpy_avx_64(dd - 81, ss - 81); memcpy_avx_32(dd - 32, ss - 32); break; - case 210: memcpy_avx_128(dd - 210, ss - 210); - case 82: memcpy_avx_64(dd - 82, ss - 82); memcpy_avx_32(dd - 32, ss - 32); break; - case 211: memcpy_avx_128(dd - 211, ss - 211); - case 83: memcpy_avx_64(dd - 83, ss - 83); memcpy_avx_32(dd - 32, ss - 32); break; - case 212: memcpy_avx_128(dd - 212, ss - 212); - case 84: memcpy_avx_64(dd - 84, ss - 84); memcpy_avx_32(dd - 32, ss - 32); break; - case 213: memcpy_avx_128(dd - 213, ss - 213); - case 85: memcpy_avx_64(dd - 85, ss - 85); memcpy_avx_32(dd - 32, ss - 32); break; - case 214: memcpy_avx_128(dd - 214, ss - 214); - case 86: memcpy_avx_64(dd - 86, ss - 86); memcpy_avx_32(dd - 32, ss - 32); break; - case 215: memcpy_avx_128(dd - 215, ss - 215); - case 87: memcpy_avx_64(dd - 87, ss - 87); memcpy_avx_32(dd - 32, ss - 32); break; - case 216: memcpy_avx_128(dd - 216, ss - 216); - case 88: memcpy_avx_64(dd - 88, ss - 88); memcpy_avx_32(dd - 32, ss - 32); break; - case 217: memcpy_avx_128(dd - 217, ss - 217); - case 89: memcpy_avx_64(dd - 89, ss - 89); memcpy_avx_32(dd - 32, ss - 32); break; - case 218: memcpy_avx_128(dd - 218, ss - 218); - case 90: memcpy_avx_64(dd - 90, ss - 90); memcpy_avx_32(dd - 32, ss - 32); break; - case 219: memcpy_avx_128(dd - 219, ss - 219); - case 91: memcpy_avx_64(dd - 91, ss - 91); memcpy_avx_32(dd - 32, ss - 32); break; - case 220: memcpy_avx_128(dd - 220, ss - 220); - case 92: memcpy_avx_64(dd - 92, ss - 92); memcpy_avx_32(dd - 32, ss - 32); break; - case 221: memcpy_avx_128(dd - 221, ss - 221); - case 93: memcpy_avx_64(dd - 93, ss - 93); memcpy_avx_32(dd - 32, ss - 32); break; - case 222: memcpy_avx_128(dd - 222, ss - 222); - case 94: memcpy_avx_64(dd - 94, ss - 94); memcpy_avx_32(dd - 32, ss - 32); break; - case 223: memcpy_avx_128(dd - 223, ss - 223); - case 95: memcpy_avx_64(dd - 95, ss - 95); memcpy_avx_32(dd - 32, ss - 32); break; - case 224: memcpy_avx_128(dd - 224, ss - 224); - case 96: memcpy_avx_64(dd - 96, ss - 96); memcpy_avx_32(dd - 32, ss - 32); break; - case 225: memcpy_avx_128(dd - 225, ss - 225); - case 97: memcpy_avx_64(dd - 97, ss - 97); memcpy_avx_64(dd - 64, ss - 64); break; - case 226: memcpy_avx_128(dd - 226, ss - 226); - case 98: memcpy_avx_64(dd - 98, ss - 98); memcpy_avx_64(dd - 64, ss - 64); break; - case 227: memcpy_avx_128(dd - 227, ss - 227); - case 99: memcpy_avx_64(dd - 99, ss - 99); memcpy_avx_64(dd - 64, ss - 64); break; - case 228: memcpy_avx_128(dd - 228, ss - 228); - case 100: memcpy_avx_64(dd - 100, ss - 100); memcpy_avx_64(dd - 64, ss - 64); break; - case 229: memcpy_avx_128(dd - 229, ss - 229); - case 101: memcpy_avx_64(dd - 101, ss - 101); memcpy_avx_64(dd - 64, ss - 64); break; - case 230: memcpy_avx_128(dd - 230, ss - 230); - case 102: memcpy_avx_64(dd - 102, ss - 102); memcpy_avx_64(dd - 64, ss - 64); break; - case 231: memcpy_avx_128(dd - 231, ss - 231); - case 103: memcpy_avx_64(dd - 103, ss - 103); memcpy_avx_64(dd - 64, ss - 64); break; - case 232: memcpy_avx_128(dd - 232, ss - 232); - case 104: memcpy_avx_64(dd - 104, ss - 104); 
memcpy_avx_64(dd - 64, ss - 64); break; - case 233: memcpy_avx_128(dd - 233, ss - 233); - case 105: memcpy_avx_64(dd - 105, ss - 105); memcpy_avx_64(dd - 64, ss - 64); break; - case 234: memcpy_avx_128(dd - 234, ss - 234); - case 106: memcpy_avx_64(dd - 106, ss - 106); memcpy_avx_64(dd - 64, ss - 64); break; - case 235: memcpy_avx_128(dd - 235, ss - 235); - case 107: memcpy_avx_64(dd - 107, ss - 107); memcpy_avx_64(dd - 64, ss - 64); break; - case 236: memcpy_avx_128(dd - 236, ss - 236); - case 108: memcpy_avx_64(dd - 108, ss - 108); memcpy_avx_64(dd - 64, ss - 64); break; - case 237: memcpy_avx_128(dd - 237, ss - 237); - case 109: memcpy_avx_64(dd - 109, ss - 109); memcpy_avx_64(dd - 64, ss - 64); break; - case 238: memcpy_avx_128(dd - 238, ss - 238); - case 110: memcpy_avx_64(dd - 110, ss - 110); memcpy_avx_64(dd - 64, ss - 64); break; - case 239: memcpy_avx_128(dd - 239, ss - 239); - case 111: memcpy_avx_64(dd - 111, ss - 111); memcpy_avx_64(dd - 64, ss - 64); break; - case 240: memcpy_avx_128(dd - 240, ss - 240); - case 112: memcpy_avx_64(dd - 112, ss - 112); memcpy_avx_64(dd - 64, ss - 64); break; - case 241: memcpy_avx_128(dd - 241, ss - 241); - case 113: memcpy_avx_64(dd - 113, ss - 113); memcpy_avx_64(dd - 64, ss - 64); break; - case 242: memcpy_avx_128(dd - 242, ss - 242); - case 114: memcpy_avx_64(dd - 114, ss - 114); memcpy_avx_64(dd - 64, ss - 64); break; - case 243: memcpy_avx_128(dd - 243, ss - 243); - case 115: memcpy_avx_64(dd - 115, ss - 115); memcpy_avx_64(dd - 64, ss - 64); break; - case 244: memcpy_avx_128(dd - 244, ss - 244); - case 116: memcpy_avx_64(dd - 116, ss - 116); memcpy_avx_64(dd - 64, ss - 64); break; - case 245: memcpy_avx_128(dd - 245, ss - 245); - case 117: memcpy_avx_64(dd - 117, ss - 117); memcpy_avx_64(dd - 64, ss - 64); break; - case 246: memcpy_avx_128(dd - 246, ss - 246); - case 118: memcpy_avx_64(dd - 118, ss - 118); memcpy_avx_64(dd - 64, ss - 64); break; - case 247: memcpy_avx_128(dd - 247, ss - 247); - case 119: memcpy_avx_64(dd - 119, ss - 119); memcpy_avx_64(dd - 64, ss - 64); break; - case 248: memcpy_avx_128(dd - 248, ss - 248); - case 120: memcpy_avx_64(dd - 120, ss - 120); memcpy_avx_64(dd - 64, ss - 64); break; - case 249: memcpy_avx_128(dd - 249, ss - 249); - case 121: memcpy_avx_64(dd - 121, ss - 121); memcpy_avx_64(dd - 64, ss - 64); break; - case 250: memcpy_avx_128(dd - 250, ss - 250); - case 122: memcpy_avx_64(dd - 122, ss - 122); memcpy_avx_64(dd - 64, ss - 64); break; - case 251: memcpy_avx_128(dd - 251, ss - 251); - case 123: memcpy_avx_64(dd - 123, ss - 123); memcpy_avx_64(dd - 64, ss - 64); break; - case 252: memcpy_avx_128(dd - 252, ss - 252); - case 124: memcpy_avx_64(dd - 124, ss - 124); memcpy_avx_64(dd - 64, ss - 64); break; - case 253: memcpy_avx_128(dd - 253, ss - 253); - case 125: memcpy_avx_64(dd - 125, ss - 125); memcpy_avx_64(dd - 64, ss - 64); break; - case 254: memcpy_avx_128(dd - 254, ss - 254); - case 126: memcpy_avx_64(dd - 126, ss - 126); memcpy_avx_64(dd - 64, ss - 64); break; - case 255: memcpy_avx_128(dd - 255, ss - 255); - case 127: memcpy_avx_64(dd - 127, ss - 127); memcpy_avx_64(dd - 64, ss - 64); break; - case 256: memcpy_avx_256(dd - 256, ss - 256); break; - } - - return dst; -} - - -//--------------------------------------------------------------------- -// main routine -//--------------------------------------------------------------------- -static void* memcpy_fast(void *destination, const void *source, size_t size) -{ - unsigned char *dst = (unsigned char*)destination; - const unsigned char *src = 
(const unsigned char*)source;
-    static size_t cachesize = 0x200000; // L3-cache size
-    size_t padding;
-
-    // small memory copy
-    if (size <= 256) {
-        memcpy_tiny(dst, src, size);
-        _mm256_zeroupper();
-        return destination;
-    }
-
-    // align destination to 32 bytes boundary
-    padding = (32 - (((size_t)dst) & 31)) & 31;
-
-#if 0
-    if (padding > 0) {
-        __m256i head = _mm256_loadu_si256((const __m256i*)src);
-        _mm256_storeu_si256((__m256i*)dst, head);
-        dst += padding;
-        src += padding;
-        size -= padding;
-    }
-#else
-    __m256i head = _mm256_loadu_si256((const __m256i*)src);
-    _mm256_storeu_si256((__m256i*)dst, head);
-    dst += padding;
-    src += padding;
-    size -= padding;
-#endif
-
-    // medium size copy
-    if (size <= cachesize) {
-        __m256i c0, c1, c2, c3, c4, c5, c6, c7;
-
-        for (; size >= 256; size -= 256) {
-            c0 = _mm256_loadu_si256(((const __m256i*)src) + 0);
-            c1 = _mm256_loadu_si256(((const __m256i*)src) + 1);
-            c2 = _mm256_loadu_si256(((const __m256i*)src) + 2);
-            c3 = _mm256_loadu_si256(((const __m256i*)src) + 3);
-            c4 = _mm256_loadu_si256(((const __m256i*)src) + 4);
-            c5 = _mm256_loadu_si256(((const __m256i*)src) + 5);
-            c6 = _mm256_loadu_si256(((const __m256i*)src) + 6);
-            c7 = _mm256_loadu_si256(((const __m256i*)src) + 7);
-            _mm_prefetch((const char*)(src + 512), _MM_HINT_NTA);
-            src += 256;
-            _mm256_storeu_si256((((__m256i*)dst) + 0), c0);
-            _mm256_storeu_si256((((__m256i*)dst) + 1), c1);
-            _mm256_storeu_si256((((__m256i*)dst) + 2), c2);
-            _mm256_storeu_si256((((__m256i*)dst) + 3), c3);
-            _mm256_storeu_si256((((__m256i*)dst) + 4), c4);
-            _mm256_storeu_si256((((__m256i*)dst) + 5), c5);
-            _mm256_storeu_si256((((__m256i*)dst) + 6), c6);
-            _mm256_storeu_si256((((__m256i*)dst) + 7), c7);
-            dst += 256;
-        }
-    }
-    else { // big memory copy
-        __m256i c0, c1, c2, c3, c4, c5, c6, c7;
-        /* __m256i c0, c1, c2, c3, c4, c5, c6, c7; */
-
-        _mm_prefetch((const char*)(src), _MM_HINT_NTA);
-
-        if ((((size_t)src) & 31) == 0) { // source aligned
-            for (; size >= 256; size -= 256) {
-                c0 = _mm256_load_si256(((const __m256i*)src) + 0);
-                c1 = _mm256_load_si256(((const __m256i*)src) + 1);
-                c2 = _mm256_load_si256(((const __m256i*)src) + 2);
-                c3 = _mm256_load_si256(((const __m256i*)src) + 3);
-                c4 = _mm256_load_si256(((const __m256i*)src) + 4);
-                c5 = _mm256_load_si256(((const __m256i*)src) + 5);
-                c6 = _mm256_load_si256(((const __m256i*)src) + 6);
-                c7 = _mm256_load_si256(((const __m256i*)src) + 7);
-                _mm_prefetch((const char*)(src + 512), _MM_HINT_NTA);
-                src += 256;
-                _mm256_stream_si256((((__m256i*)dst) + 0), c0);
-                _mm256_stream_si256((((__m256i*)dst) + 1), c1);
-                _mm256_stream_si256((((__m256i*)dst) + 2), c2);
-                _mm256_stream_si256((((__m256i*)dst) + 3), c3);
-                _mm256_stream_si256((((__m256i*)dst) + 4), c4);
-                _mm256_stream_si256((((__m256i*)dst) + 5), c5);
-                _mm256_stream_si256((((__m256i*)dst) + 6), c6);
-                _mm256_stream_si256((((__m256i*)dst) + 7), c7);
-                dst += 256;
-            }
-        }
-        else { // source unaligned
-            for (; size >= 256; size -= 256) {
-                c0 = _mm256_loadu_si256(((const __m256i*)src) + 0);
-                c1 = _mm256_loadu_si256(((const __m256i*)src) + 1);
-                c2 = _mm256_loadu_si256(((const __m256i*)src) + 2);
-                c3 = _mm256_loadu_si256(((const __m256i*)src) + 3);
-                c4 = _mm256_loadu_si256(((const __m256i*)src) + 4);
-                c5 = _mm256_loadu_si256(((const __m256i*)src) + 5);
-                c6 = _mm256_loadu_si256(((const __m256i*)src) + 6);
-                c7 = _mm256_loadu_si256(((const __m256i*)src) + 7);
-                _mm_prefetch((const char*)(src + 512), _MM_HINT_NTA);
-                src += 256;
-                _mm256_stream_si256((((__m256i*)dst) + 0), c0);
-                _mm256_stream_si256((((__m256i*)dst) + 1), c1);
-                _mm256_stream_si256((((__m256i*)dst) + 2), c2);
-                _mm256_stream_si256((((__m256i*)dst) + 3), c3);
-                _mm256_stream_si256((((__m256i*)dst) + 4), c4);
-                _mm256_stream_si256((((__m256i*)dst) + 5), c5);
-                _mm256_stream_si256((((__m256i*)dst) + 6), c6);
-                _mm256_stream_si256((((__m256i*)dst) + 7), c7);
-                dst += 256;
-            }
-        }
-        _mm_sfence();
-    }
-
-    memcpy_tiny(dst, src, size);
-    _mm256_zeroupper();
-
-    return destination;
-}
-
-
-#endif
-
-
-
diff --git a/contrib/FastMemcpy/LICENSE b/contrib/FastMemcpy/LICENSE
deleted file mode 100644
index c449da6aa8a..00000000000
--- a/contrib/FastMemcpy/LICENSE
+++ /dev/null
@@ -1,22 +0,0 @@
-The MIT License (MIT)
-
-Copyright (c) 2015 Linwei
-
-Permission is hereby granted, free of charge, to any person obtaining a copy
-of this software and associated documentation files (the "Software"), to deal
-in the Software without restriction, including without limitation the rights
-to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-copies of the Software, and to permit persons to whom the Software is
-furnished to do so, subject to the following conditions:
-
-The above copyright notice and this permission notice shall be included in all
-copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-SOFTWARE.
-
diff --git a/contrib/FastMemcpy/README.md b/contrib/FastMemcpy/README.md
deleted file mode 100644
index e253f6bf5dd..00000000000
--- a/contrib/FastMemcpy/README.md
+++ /dev/null
@@ -1,20 +0,0 @@
-Internal implementation of the `memcpy` function.
-
-It has the following advantages over the `libc`-supplied implementation:
-- it is linked statically, so the function is called directly, not through a `PLT` (procedure linkage table of a shared library);
-- it is linked statically, so the function can have position-dependent code;
-- your binaries will not depend on `glibc`'s `memcpy`, which forces a dependency on a specific symbol version like `memcpy@@GLIBC_2.14` and consequently on a specific version of the `glibc` library;
-- you can include `memcpy.h` directly, so the function has a chance to be inlined, which is beneficial for memory regions that are small but of unknown size at compile time;
-- this version of `memcpy` claims to be faster (in our benchmarks, the difference is within a few percent).
-
-Currently it uses the implementation from **Linwei** (skywind3000@163.com).
-See https://www.zhihu.com/question/35172305 for discussion.
-
-Drawbacks:
-- it only uses SSE2 and doesn't use wider (AVX, AVX-512) vector registers when available;
-- no CPU dispatching; it doesn't take the actual cache size into account.
-
-Also worth looking at:
-- the simple implementation from Facebook: https://github.com/facebook/folly/blob/master/folly/memcpy.S
-- the implementation from Agner Fog: http://www.agner.org/optimize/
-- the glibc source code.
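Both deleted implementations (SSE2 and AVX) follow the same three-regime strategy: sizes up to one block go through a jump table of overlapping fixed-size copies, cache-resident sizes use ordinary vector stores, and larger buffers switch to non-temporal streaming stores finished with a store fence. The following is a minimal, self-contained C sketch of that strategy, not the deleted code itself; the function name `memcpy_sketch` and the 2 MiB `CACHE_BYPASS_THRESHOLD` are illustrative stand-ins for the `memcpy_fast`/`cachesize` pair above, and it uses plain `memcpy` internally rather than overriding the libc symbol.

```c
#include <emmintrin.h> /* SSE2: _mm_loadu_si128, _mm_storeu_si128, _mm_stream_si128, _mm_sfence */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Illustrative stand-in for the `cachesize` constant (0x200000) above. */
#define CACHE_BYPASS_THRESHOLD ((size_t)2 * 1024 * 1024)

static void *memcpy_sketch(void *dst, const void *src, size_t n)
{
    unsigned char *d = (unsigned char *)dst;
    const unsigned char *s = (const unsigned char *)src;

    if (n <= 16)
    {
        /* Tiny sizes: two possibly overlapping 8-byte copies cover
           8..16 bytes without branching per byte; 0..7 bytes fall
           back to a byte loop. */
        if (n >= 8)
        {
            uint64_t head, tail;
            memcpy(&head, s, 8);
            memcpy(&tail, s + n - 8, 8);
            memcpy(d, &head, 8);
            memcpy(d + n - 8, &tail, 8);
        }
        else
        {
            while (n--)
                d[n] = s[n];
        }
        return dst;
    }

    /* Align the destination to 16 bytes with one unaligned head store,
       as memcpy_fast does, so the streaming stores below are legal. */
    size_t padding = (16 - ((uintptr_t)d & 15)) & 15;
    if (padding)
    {
        _mm_storeu_si128((__m128i *)d, _mm_loadu_si128((const __m128i *)s));
        d += padding;
        s += padding;
        n -= padding;
    }

    if (n > CACHE_BYPASS_THRESHOLD)
    {
        /* Big copy: non-temporal stores bypass the cache so the copied
           data does not evict the working set; a store fence then makes
           the writes globally visible. */
        for (; n >= 16; n -= 16, s += 16, d += 16)
            _mm_stream_si128((__m128i *)d, _mm_loadu_si128((const __m128i *)s));
        _mm_sfence();
    }
    else
    {
        /* Medium copy: ordinary stores keep the data in cache. */
        for (; n >= 16; n -= 16, s += 16, d += 16)
            _mm_storeu_si128((__m128i *)d, _mm_loadu_si128((const __m128i *)s));
    }

    /* Overlapping tail: re-copy the last 16 bytes of the original range. */
    if (n)
        _mm_storeu_si128((__m128i *)(d + n - 16),
                         _mm_loadu_si128((const __m128i *)(s + n - 16)));
    return dst;
}
```

The overlapping tail store is the same idea as the `dd - N`/`ss - N` arithmetic in `memcpy_tiny`: since source and destination may not overlap, re-writing a few already-copied bytes is harmless and avoids a scalar cleanup loop. The `memcpy_wrapper.c` deletion that follows shows how the override itself was wired in: defining a global `memcpy` symbol in the statically linked binary shadows the `glibc` one at link time.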
diff --git a/contrib/FastMemcpy/memcpy_wrapper.c b/contrib/FastMemcpy/memcpy_wrapper.c deleted file mode 100644 index 1f57345980a..00000000000 --- a/contrib/FastMemcpy/memcpy_wrapper.c +++ /dev/null @@ -1,6 +0,0 @@ -#include "FastMemcpy.h" - -void * memcpy(void * __restrict destination, const void * __restrict source, size_t size) -{ - return memcpy_fast(destination, source, size); -} diff --git a/contrib/NuRaft b/contrib/NuRaft index 3d3683e7775..377f8e77491 160000 --- a/contrib/NuRaft +++ b/contrib/NuRaft @@ -1 +1 @@ -Subproject commit 3d3683e77753cfe015a05fae95ddf418e19f59e1 +Subproject commit 377f8e77491d9f66ce8e32e88aae19dffe8dc4d7 diff --git a/contrib/amqpcpp-cmake/CMakeLists.txt b/contrib/amqpcpp-cmake/CMakeLists.txt index 4853983680e..4e8342af125 100644 --- a/contrib/amqpcpp-cmake/CMakeLists.txt +++ b/contrib/amqpcpp-cmake/CMakeLists.txt @@ -1,25 +1,25 @@ -set (LIBRARY_DIR ${ClickHouse_SOURCE_DIR}/contrib/AMQP-CPP) +set (LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/AMQP-CPP") set (SRCS - ${LIBRARY_DIR}/src/array.cpp - ${LIBRARY_DIR}/src/channel.cpp - ${LIBRARY_DIR}/src/channelimpl.cpp - ${LIBRARY_DIR}/src/connectionimpl.cpp - ${LIBRARY_DIR}/src/deferredcancel.cpp - ${LIBRARY_DIR}/src/deferredconfirm.cpp - ${LIBRARY_DIR}/src/deferredconsumer.cpp - ${LIBRARY_DIR}/src/deferredextreceiver.cpp - ${LIBRARY_DIR}/src/deferredget.cpp - ${LIBRARY_DIR}/src/deferredpublisher.cpp - ${LIBRARY_DIR}/src/deferredreceiver.cpp - ${LIBRARY_DIR}/src/field.cpp - ${LIBRARY_DIR}/src/flags.cpp - ${LIBRARY_DIR}/src/linux_tcp/openssl.cpp - ${LIBRARY_DIR}/src/linux_tcp/tcpconnection.cpp - ${LIBRARY_DIR}/src/inbuffer.cpp - ${LIBRARY_DIR}/src/receivedframe.cpp - ${LIBRARY_DIR}/src/table.cpp - ${LIBRARY_DIR}/src/watchable.cpp + "${LIBRARY_DIR}/src/array.cpp" + "${LIBRARY_DIR}/src/channel.cpp" + "${LIBRARY_DIR}/src/channelimpl.cpp" + "${LIBRARY_DIR}/src/connectionimpl.cpp" + "${LIBRARY_DIR}/src/deferredcancel.cpp" + "${LIBRARY_DIR}/src/deferredconfirm.cpp" + "${LIBRARY_DIR}/src/deferredconsumer.cpp" + "${LIBRARY_DIR}/src/deferredextreceiver.cpp" + "${LIBRARY_DIR}/src/deferredget.cpp" + "${LIBRARY_DIR}/src/deferredpublisher.cpp" + "${LIBRARY_DIR}/src/deferredreceiver.cpp" + "${LIBRARY_DIR}/src/field.cpp" + "${LIBRARY_DIR}/src/flags.cpp" + "${LIBRARY_DIR}/src/linux_tcp/openssl.cpp" + "${LIBRARY_DIR}/src/linux_tcp/tcpconnection.cpp" + "${LIBRARY_DIR}/src/inbuffer.cpp" + "${LIBRARY_DIR}/src/receivedframe.cpp" + "${LIBRARY_DIR}/src/table.cpp" + "${LIBRARY_DIR}/src/watchable.cpp" ) add_library(amqp-cpp ${SRCS}) @@ -39,7 +39,7 @@ target_compile_options (amqp-cpp -w ) -target_include_directories (amqp-cpp SYSTEM PUBLIC ${LIBRARY_DIR}/include) +target_include_directories (amqp-cpp SYSTEM PUBLIC "${LIBRARY_DIR}/include") target_link_libraries (amqp-cpp PUBLIC ssl) diff --git a/contrib/antlr4-runtime b/contrib/antlr4-runtime index a2fa7b76e2e..672643e9a42 160000 --- a/contrib/antlr4-runtime +++ b/contrib/antlr4-runtime @@ -1 +1 @@ -Subproject commit a2fa7b76e2ee16d2ad955e9214a90bbf79da66fc +Subproject commit 672643e9a427ef803abf13bc8cb4989606553d64 diff --git a/contrib/antlr4-runtime-cmake/CMakeLists.txt b/contrib/antlr4-runtime-cmake/CMakeLists.txt index 5baefdb1e29..4f639a33ebf 100644 --- a/contrib/antlr4-runtime-cmake/CMakeLists.txt +++ b/contrib/antlr4-runtime-cmake/CMakeLists.txt @@ -1,154 +1,154 @@ -set (LIBRARY_DIR ${ClickHouse_SOURCE_DIR}/contrib/antlr4-runtime) +set (LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/antlr4-runtime") set (SRCS - ${LIBRARY_DIR}/ANTLRErrorListener.cpp - 
${LIBRARY_DIR}/ANTLRErrorStrategy.cpp - ${LIBRARY_DIR}/ANTLRFileStream.cpp - ${LIBRARY_DIR}/ANTLRInputStream.cpp - ${LIBRARY_DIR}/atn/AbstractPredicateTransition.cpp - ${LIBRARY_DIR}/atn/ActionTransition.cpp - ${LIBRARY_DIR}/atn/AmbiguityInfo.cpp - ${LIBRARY_DIR}/atn/ArrayPredictionContext.cpp - ${LIBRARY_DIR}/atn/ATN.cpp - ${LIBRARY_DIR}/atn/ATNConfig.cpp - ${LIBRARY_DIR}/atn/ATNConfigSet.cpp - ${LIBRARY_DIR}/atn/ATNDeserializationOptions.cpp - ${LIBRARY_DIR}/atn/ATNDeserializer.cpp - ${LIBRARY_DIR}/atn/ATNSerializer.cpp - ${LIBRARY_DIR}/atn/ATNSimulator.cpp - ${LIBRARY_DIR}/atn/ATNState.cpp - ${LIBRARY_DIR}/atn/AtomTransition.cpp - ${LIBRARY_DIR}/atn/BasicBlockStartState.cpp - ${LIBRARY_DIR}/atn/BasicState.cpp - ${LIBRARY_DIR}/atn/BlockEndState.cpp - ${LIBRARY_DIR}/atn/BlockStartState.cpp - ${LIBRARY_DIR}/atn/ContextSensitivityInfo.cpp - ${LIBRARY_DIR}/atn/DecisionEventInfo.cpp - ${LIBRARY_DIR}/atn/DecisionInfo.cpp - ${LIBRARY_DIR}/atn/DecisionState.cpp - ${LIBRARY_DIR}/atn/EmptyPredictionContext.cpp - ${LIBRARY_DIR}/atn/EpsilonTransition.cpp - ${LIBRARY_DIR}/atn/ErrorInfo.cpp - ${LIBRARY_DIR}/atn/LexerAction.cpp - ${LIBRARY_DIR}/atn/LexerActionExecutor.cpp - ${LIBRARY_DIR}/atn/LexerATNConfig.cpp - ${LIBRARY_DIR}/atn/LexerATNSimulator.cpp - ${LIBRARY_DIR}/atn/LexerChannelAction.cpp - ${LIBRARY_DIR}/atn/LexerCustomAction.cpp - ${LIBRARY_DIR}/atn/LexerIndexedCustomAction.cpp - ${LIBRARY_DIR}/atn/LexerModeAction.cpp - ${LIBRARY_DIR}/atn/LexerMoreAction.cpp - ${LIBRARY_DIR}/atn/LexerPopModeAction.cpp - ${LIBRARY_DIR}/atn/LexerPushModeAction.cpp - ${LIBRARY_DIR}/atn/LexerSkipAction.cpp - ${LIBRARY_DIR}/atn/LexerTypeAction.cpp - ${LIBRARY_DIR}/atn/LL1Analyzer.cpp - ${LIBRARY_DIR}/atn/LookaheadEventInfo.cpp - ${LIBRARY_DIR}/atn/LoopEndState.cpp - ${LIBRARY_DIR}/atn/NotSetTransition.cpp - ${LIBRARY_DIR}/atn/OrderedATNConfigSet.cpp - ${LIBRARY_DIR}/atn/ParseInfo.cpp - ${LIBRARY_DIR}/atn/ParserATNSimulator.cpp - ${LIBRARY_DIR}/atn/PlusBlockStartState.cpp - ${LIBRARY_DIR}/atn/PlusLoopbackState.cpp - ${LIBRARY_DIR}/atn/PrecedencePredicateTransition.cpp - ${LIBRARY_DIR}/atn/PredicateEvalInfo.cpp - ${LIBRARY_DIR}/atn/PredicateTransition.cpp - ${LIBRARY_DIR}/atn/PredictionContext.cpp - ${LIBRARY_DIR}/atn/PredictionMode.cpp - ${LIBRARY_DIR}/atn/ProfilingATNSimulator.cpp - ${LIBRARY_DIR}/atn/RangeTransition.cpp - ${LIBRARY_DIR}/atn/RuleStartState.cpp - ${LIBRARY_DIR}/atn/RuleStopState.cpp - ${LIBRARY_DIR}/atn/RuleTransition.cpp - ${LIBRARY_DIR}/atn/SemanticContext.cpp - ${LIBRARY_DIR}/atn/SetTransition.cpp - ${LIBRARY_DIR}/atn/SingletonPredictionContext.cpp - ${LIBRARY_DIR}/atn/StarBlockStartState.cpp - ${LIBRARY_DIR}/atn/StarLoopbackState.cpp - ${LIBRARY_DIR}/atn/StarLoopEntryState.cpp - ${LIBRARY_DIR}/atn/TokensStartState.cpp - ${LIBRARY_DIR}/atn/Transition.cpp - ${LIBRARY_DIR}/atn/WildcardTransition.cpp - ${LIBRARY_DIR}/BailErrorStrategy.cpp - ${LIBRARY_DIR}/BaseErrorListener.cpp - ${LIBRARY_DIR}/BufferedTokenStream.cpp - ${LIBRARY_DIR}/CharStream.cpp - ${LIBRARY_DIR}/CommonToken.cpp - ${LIBRARY_DIR}/CommonTokenFactory.cpp - ${LIBRARY_DIR}/CommonTokenStream.cpp - ${LIBRARY_DIR}/ConsoleErrorListener.cpp - ${LIBRARY_DIR}/DefaultErrorStrategy.cpp - ${LIBRARY_DIR}/dfa/DFA.cpp - ${LIBRARY_DIR}/dfa/DFASerializer.cpp - ${LIBRARY_DIR}/dfa/DFAState.cpp - ${LIBRARY_DIR}/dfa/LexerDFASerializer.cpp - ${LIBRARY_DIR}/DiagnosticErrorListener.cpp - ${LIBRARY_DIR}/Exceptions.cpp - ${LIBRARY_DIR}/FailedPredicateException.cpp - ${LIBRARY_DIR}/InputMismatchException.cpp - ${LIBRARY_DIR}/InterpreterRuleContext.cpp - 
${LIBRARY_DIR}/IntStream.cpp - ${LIBRARY_DIR}/Lexer.cpp - ${LIBRARY_DIR}/LexerInterpreter.cpp - ${LIBRARY_DIR}/LexerNoViableAltException.cpp - ${LIBRARY_DIR}/ListTokenSource.cpp - ${LIBRARY_DIR}/misc/InterpreterDataReader.cpp - ${LIBRARY_DIR}/misc/Interval.cpp - ${LIBRARY_DIR}/misc/IntervalSet.cpp - ${LIBRARY_DIR}/misc/MurmurHash.cpp - ${LIBRARY_DIR}/misc/Predicate.cpp - ${LIBRARY_DIR}/NoViableAltException.cpp - ${LIBRARY_DIR}/Parser.cpp - ${LIBRARY_DIR}/ParserInterpreter.cpp - ${LIBRARY_DIR}/ParserRuleContext.cpp - ${LIBRARY_DIR}/ProxyErrorListener.cpp - ${LIBRARY_DIR}/RecognitionException.cpp - ${LIBRARY_DIR}/Recognizer.cpp - ${LIBRARY_DIR}/RuleContext.cpp - ${LIBRARY_DIR}/RuleContextWithAltNum.cpp - ${LIBRARY_DIR}/RuntimeMetaData.cpp - ${LIBRARY_DIR}/support/Any.cpp - ${LIBRARY_DIR}/support/Arrays.cpp - ${LIBRARY_DIR}/support/CPPUtils.cpp - ${LIBRARY_DIR}/support/guid.cpp - ${LIBRARY_DIR}/support/StringUtils.cpp - ${LIBRARY_DIR}/Token.cpp - ${LIBRARY_DIR}/TokenSource.cpp - ${LIBRARY_DIR}/TokenStream.cpp - ${LIBRARY_DIR}/TokenStreamRewriter.cpp - ${LIBRARY_DIR}/tree/ErrorNode.cpp - ${LIBRARY_DIR}/tree/ErrorNodeImpl.cpp - ${LIBRARY_DIR}/tree/IterativeParseTreeWalker.cpp - ${LIBRARY_DIR}/tree/ParseTree.cpp - ${LIBRARY_DIR}/tree/ParseTreeListener.cpp - ${LIBRARY_DIR}/tree/ParseTreeVisitor.cpp - ${LIBRARY_DIR}/tree/ParseTreeWalker.cpp - ${LIBRARY_DIR}/tree/pattern/Chunk.cpp - ${LIBRARY_DIR}/tree/pattern/ParseTreeMatch.cpp - ${LIBRARY_DIR}/tree/pattern/ParseTreePattern.cpp - ${LIBRARY_DIR}/tree/pattern/ParseTreePatternMatcher.cpp - ${LIBRARY_DIR}/tree/pattern/RuleTagToken.cpp - ${LIBRARY_DIR}/tree/pattern/TagChunk.cpp - ${LIBRARY_DIR}/tree/pattern/TextChunk.cpp - ${LIBRARY_DIR}/tree/pattern/TokenTagToken.cpp - ${LIBRARY_DIR}/tree/TerminalNode.cpp - ${LIBRARY_DIR}/tree/TerminalNodeImpl.cpp - ${LIBRARY_DIR}/tree/Trees.cpp - ${LIBRARY_DIR}/tree/xpath/XPath.cpp - ${LIBRARY_DIR}/tree/xpath/XPathElement.cpp - ${LIBRARY_DIR}/tree/xpath/XPathLexer.cpp - ${LIBRARY_DIR}/tree/xpath/XPathLexerErrorListener.cpp - ${LIBRARY_DIR}/tree/xpath/XPathRuleAnywhereElement.cpp - ${LIBRARY_DIR}/tree/xpath/XPathRuleElement.cpp - ${LIBRARY_DIR}/tree/xpath/XPathTokenAnywhereElement.cpp - ${LIBRARY_DIR}/tree/xpath/XPathTokenElement.cpp - ${LIBRARY_DIR}/tree/xpath/XPathWildcardAnywhereElement.cpp - ${LIBRARY_DIR}/tree/xpath/XPathWildcardElement.cpp - ${LIBRARY_DIR}/UnbufferedCharStream.cpp - ${LIBRARY_DIR}/UnbufferedTokenStream.cpp - ${LIBRARY_DIR}/Vocabulary.cpp - ${LIBRARY_DIR}/WritableToken.cpp + "${LIBRARY_DIR}/ANTLRErrorListener.cpp" + "${LIBRARY_DIR}/ANTLRErrorStrategy.cpp" + "${LIBRARY_DIR}/ANTLRFileStream.cpp" + "${LIBRARY_DIR}/ANTLRInputStream.cpp" + "${LIBRARY_DIR}/atn/AbstractPredicateTransition.cpp" + "${LIBRARY_DIR}/atn/ActionTransition.cpp" + "${LIBRARY_DIR}/atn/AmbiguityInfo.cpp" + "${LIBRARY_DIR}/atn/ArrayPredictionContext.cpp" + "${LIBRARY_DIR}/atn/ATN.cpp" + "${LIBRARY_DIR}/atn/ATNConfig.cpp" + "${LIBRARY_DIR}/atn/ATNConfigSet.cpp" + "${LIBRARY_DIR}/atn/ATNDeserializationOptions.cpp" + "${LIBRARY_DIR}/atn/ATNDeserializer.cpp" + "${LIBRARY_DIR}/atn/ATNSerializer.cpp" + "${LIBRARY_DIR}/atn/ATNSimulator.cpp" + "${LIBRARY_DIR}/atn/ATNState.cpp" + "${LIBRARY_DIR}/atn/AtomTransition.cpp" + "${LIBRARY_DIR}/atn/BasicBlockStartState.cpp" + "${LIBRARY_DIR}/atn/BasicState.cpp" + "${LIBRARY_DIR}/atn/BlockEndState.cpp" + "${LIBRARY_DIR}/atn/BlockStartState.cpp" + "${LIBRARY_DIR}/atn/ContextSensitivityInfo.cpp" + "${LIBRARY_DIR}/atn/DecisionEventInfo.cpp" + "${LIBRARY_DIR}/atn/DecisionInfo.cpp" + 
"${LIBRARY_DIR}/atn/DecisionState.cpp" + "${LIBRARY_DIR}/atn/EmptyPredictionContext.cpp" + "${LIBRARY_DIR}/atn/EpsilonTransition.cpp" + "${LIBRARY_DIR}/atn/ErrorInfo.cpp" + "${LIBRARY_DIR}/atn/LexerAction.cpp" + "${LIBRARY_DIR}/atn/LexerActionExecutor.cpp" + "${LIBRARY_DIR}/atn/LexerATNConfig.cpp" + "${LIBRARY_DIR}/atn/LexerATNSimulator.cpp" + "${LIBRARY_DIR}/atn/LexerChannelAction.cpp" + "${LIBRARY_DIR}/atn/LexerCustomAction.cpp" + "${LIBRARY_DIR}/atn/LexerIndexedCustomAction.cpp" + "${LIBRARY_DIR}/atn/LexerModeAction.cpp" + "${LIBRARY_DIR}/atn/LexerMoreAction.cpp" + "${LIBRARY_DIR}/atn/LexerPopModeAction.cpp" + "${LIBRARY_DIR}/atn/LexerPushModeAction.cpp" + "${LIBRARY_DIR}/atn/LexerSkipAction.cpp" + "${LIBRARY_DIR}/atn/LexerTypeAction.cpp" + "${LIBRARY_DIR}/atn/LL1Analyzer.cpp" + "${LIBRARY_DIR}/atn/LookaheadEventInfo.cpp" + "${LIBRARY_DIR}/atn/LoopEndState.cpp" + "${LIBRARY_DIR}/atn/NotSetTransition.cpp" + "${LIBRARY_DIR}/atn/OrderedATNConfigSet.cpp" + "${LIBRARY_DIR}/atn/ParseInfo.cpp" + "${LIBRARY_DIR}/atn/ParserATNSimulator.cpp" + "${LIBRARY_DIR}/atn/PlusBlockStartState.cpp" + "${LIBRARY_DIR}/atn/PlusLoopbackState.cpp" + "${LIBRARY_DIR}/atn/PrecedencePredicateTransition.cpp" + "${LIBRARY_DIR}/atn/PredicateEvalInfo.cpp" + "${LIBRARY_DIR}/atn/PredicateTransition.cpp" + "${LIBRARY_DIR}/atn/PredictionContext.cpp" + "${LIBRARY_DIR}/atn/PredictionMode.cpp" + "${LIBRARY_DIR}/atn/ProfilingATNSimulator.cpp" + "${LIBRARY_DIR}/atn/RangeTransition.cpp" + "${LIBRARY_DIR}/atn/RuleStartState.cpp" + "${LIBRARY_DIR}/atn/RuleStopState.cpp" + "${LIBRARY_DIR}/atn/RuleTransition.cpp" + "${LIBRARY_DIR}/atn/SemanticContext.cpp" + "${LIBRARY_DIR}/atn/SetTransition.cpp" + "${LIBRARY_DIR}/atn/SingletonPredictionContext.cpp" + "${LIBRARY_DIR}/atn/StarBlockStartState.cpp" + "${LIBRARY_DIR}/atn/StarLoopbackState.cpp" + "${LIBRARY_DIR}/atn/StarLoopEntryState.cpp" + "${LIBRARY_DIR}/atn/TokensStartState.cpp" + "${LIBRARY_DIR}/atn/Transition.cpp" + "${LIBRARY_DIR}/atn/WildcardTransition.cpp" + "${LIBRARY_DIR}/BailErrorStrategy.cpp" + "${LIBRARY_DIR}/BaseErrorListener.cpp" + "${LIBRARY_DIR}/BufferedTokenStream.cpp" + "${LIBRARY_DIR}/CharStream.cpp" + "${LIBRARY_DIR}/CommonToken.cpp" + "${LIBRARY_DIR}/CommonTokenFactory.cpp" + "${LIBRARY_DIR}/CommonTokenStream.cpp" + "${LIBRARY_DIR}/ConsoleErrorListener.cpp" + "${LIBRARY_DIR}/DefaultErrorStrategy.cpp" + "${LIBRARY_DIR}/dfa/DFA.cpp" + "${LIBRARY_DIR}/dfa/DFASerializer.cpp" + "${LIBRARY_DIR}/dfa/DFAState.cpp" + "${LIBRARY_DIR}/dfa/LexerDFASerializer.cpp" + "${LIBRARY_DIR}/DiagnosticErrorListener.cpp" + "${LIBRARY_DIR}/Exceptions.cpp" + "${LIBRARY_DIR}/FailedPredicateException.cpp" + "${LIBRARY_DIR}/InputMismatchException.cpp" + "${LIBRARY_DIR}/InterpreterRuleContext.cpp" + "${LIBRARY_DIR}/IntStream.cpp" + "${LIBRARY_DIR}/Lexer.cpp" + "${LIBRARY_DIR}/LexerInterpreter.cpp" + "${LIBRARY_DIR}/LexerNoViableAltException.cpp" + "${LIBRARY_DIR}/ListTokenSource.cpp" + "${LIBRARY_DIR}/misc/InterpreterDataReader.cpp" + "${LIBRARY_DIR}/misc/Interval.cpp" + "${LIBRARY_DIR}/misc/IntervalSet.cpp" + "${LIBRARY_DIR}/misc/MurmurHash.cpp" + "${LIBRARY_DIR}/misc/Predicate.cpp" + "${LIBRARY_DIR}/NoViableAltException.cpp" + "${LIBRARY_DIR}/Parser.cpp" + "${LIBRARY_DIR}/ParserInterpreter.cpp" + "${LIBRARY_DIR}/ParserRuleContext.cpp" + "${LIBRARY_DIR}/ProxyErrorListener.cpp" + "${LIBRARY_DIR}/RecognitionException.cpp" + "${LIBRARY_DIR}/Recognizer.cpp" + "${LIBRARY_DIR}/RuleContext.cpp" + "${LIBRARY_DIR}/RuleContextWithAltNum.cpp" + "${LIBRARY_DIR}/RuntimeMetaData.cpp" + 
"${LIBRARY_DIR}/support/Any.cpp" + "${LIBRARY_DIR}/support/Arrays.cpp" + "${LIBRARY_DIR}/support/CPPUtils.cpp" + "${LIBRARY_DIR}/support/guid.cpp" + "${LIBRARY_DIR}/support/StringUtils.cpp" + "${LIBRARY_DIR}/Token.cpp" + "${LIBRARY_DIR}/TokenSource.cpp" + "${LIBRARY_DIR}/TokenStream.cpp" + "${LIBRARY_DIR}/TokenStreamRewriter.cpp" + "${LIBRARY_DIR}/tree/ErrorNode.cpp" + "${LIBRARY_DIR}/tree/ErrorNodeImpl.cpp" + "${LIBRARY_DIR}/tree/IterativeParseTreeWalker.cpp" + "${LIBRARY_DIR}/tree/ParseTree.cpp" + "${LIBRARY_DIR}/tree/ParseTreeListener.cpp" + "${LIBRARY_DIR}/tree/ParseTreeVisitor.cpp" + "${LIBRARY_DIR}/tree/ParseTreeWalker.cpp" + "${LIBRARY_DIR}/tree/pattern/Chunk.cpp" + "${LIBRARY_DIR}/tree/pattern/ParseTreeMatch.cpp" + "${LIBRARY_DIR}/tree/pattern/ParseTreePattern.cpp" + "${LIBRARY_DIR}/tree/pattern/ParseTreePatternMatcher.cpp" + "${LIBRARY_DIR}/tree/pattern/RuleTagToken.cpp" + "${LIBRARY_DIR}/tree/pattern/TagChunk.cpp" + "${LIBRARY_DIR}/tree/pattern/TextChunk.cpp" + "${LIBRARY_DIR}/tree/pattern/TokenTagToken.cpp" + "${LIBRARY_DIR}/tree/TerminalNode.cpp" + "${LIBRARY_DIR}/tree/TerminalNodeImpl.cpp" + "${LIBRARY_DIR}/tree/Trees.cpp" + "${LIBRARY_DIR}/tree/xpath/XPath.cpp" + "${LIBRARY_DIR}/tree/xpath/XPathElement.cpp" + "${LIBRARY_DIR}/tree/xpath/XPathLexer.cpp" + "${LIBRARY_DIR}/tree/xpath/XPathLexerErrorListener.cpp" + "${LIBRARY_DIR}/tree/xpath/XPathRuleAnywhereElement.cpp" + "${LIBRARY_DIR}/tree/xpath/XPathRuleElement.cpp" + "${LIBRARY_DIR}/tree/xpath/XPathTokenAnywhereElement.cpp" + "${LIBRARY_DIR}/tree/xpath/XPathTokenElement.cpp" + "${LIBRARY_DIR}/tree/xpath/XPathWildcardAnywhereElement.cpp" + "${LIBRARY_DIR}/tree/xpath/XPathWildcardElement.cpp" + "${LIBRARY_DIR}/UnbufferedCharStream.cpp" + "${LIBRARY_DIR}/UnbufferedTokenStream.cpp" + "${LIBRARY_DIR}/Vocabulary.cpp" + "${LIBRARY_DIR}/WritableToken.cpp" ) add_library (antlr4-runtime ${SRCS}) diff --git a/contrib/arrow b/contrib/arrow index 744bdfe188f..616b3dc76a0 160000 --- a/contrib/arrow +++ b/contrib/arrow @@ -1 +1 @@ -Subproject commit 744bdfe188f018e5e05f5deebd4e9ee0a7706cf4 +Subproject commit 616b3dc76a0c8450b4027ded8a78e9619d7c845f diff --git a/contrib/arrow-cmake/CMakeLists.txt b/contrib/arrow-cmake/CMakeLists.txt index 4b402a9db79..deefb244beb 100644 --- a/contrib/arrow-cmake/CMakeLists.txt +++ b/contrib/arrow-cmake/CMakeLists.txt @@ -2,69 +2,69 @@ set (CMAKE_CXX_STANDARD 17) # === thrift -set(LIBRARY_DIR ${ClickHouse_SOURCE_DIR}/contrib/thrift/lib/cpp) +set(LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/thrift/lib/cpp") # contrib/thrift/lib/cpp/CMakeLists.txt set(thriftcpp_SOURCES - ${LIBRARY_DIR}/src/thrift/TApplicationException.cpp - ${LIBRARY_DIR}/src/thrift/TOutput.cpp - ${LIBRARY_DIR}/src/thrift/async/TAsyncChannel.cpp - ${LIBRARY_DIR}/src/thrift/async/TAsyncProtocolProcessor.cpp - ${LIBRARY_DIR}/src/thrift/async/TConcurrentClientSyncInfo.h - ${LIBRARY_DIR}/src/thrift/async/TConcurrentClientSyncInfo.cpp - ${LIBRARY_DIR}/src/thrift/concurrency/ThreadManager.cpp - ${LIBRARY_DIR}/src/thrift/concurrency/TimerManager.cpp - ${LIBRARY_DIR}/src/thrift/concurrency/Util.cpp - ${LIBRARY_DIR}/src/thrift/processor/PeekProcessor.cpp - ${LIBRARY_DIR}/src/thrift/protocol/TBase64Utils.cpp - ${LIBRARY_DIR}/src/thrift/protocol/TDebugProtocol.cpp - ${LIBRARY_DIR}/src/thrift/protocol/TJSONProtocol.cpp - ${LIBRARY_DIR}/src/thrift/protocol/TMultiplexedProtocol.cpp - ${LIBRARY_DIR}/src/thrift/protocol/TProtocol.cpp - ${LIBRARY_DIR}/src/thrift/transport/TTransportException.cpp - ${LIBRARY_DIR}/src/thrift/transport/TFDTransport.cpp - 
${LIBRARY_DIR}/src/thrift/transport/TSimpleFileTransport.cpp - ${LIBRARY_DIR}/src/thrift/transport/THttpTransport.cpp - ${LIBRARY_DIR}/src/thrift/transport/THttpClient.cpp - ${LIBRARY_DIR}/src/thrift/transport/THttpServer.cpp - ${LIBRARY_DIR}/src/thrift/transport/TSocket.cpp - ${LIBRARY_DIR}/src/thrift/transport/TSocketPool.cpp - ${LIBRARY_DIR}/src/thrift/transport/TServerSocket.cpp - ${LIBRARY_DIR}/src/thrift/transport/TTransportUtils.cpp - ${LIBRARY_DIR}/src/thrift/transport/TBufferTransports.cpp - ${LIBRARY_DIR}/src/thrift/server/TConnectedClient.cpp - ${LIBRARY_DIR}/src/thrift/server/TServerFramework.cpp - ${LIBRARY_DIR}/src/thrift/server/TSimpleServer.cpp - ${LIBRARY_DIR}/src/thrift/server/TThreadPoolServer.cpp - ${LIBRARY_DIR}/src/thrift/server/TThreadedServer.cpp + "${LIBRARY_DIR}/src/thrift/TApplicationException.cpp" + "${LIBRARY_DIR}/src/thrift/TOutput.cpp" + "${LIBRARY_DIR}/src/thrift/async/TAsyncChannel.cpp" + "${LIBRARY_DIR}/src/thrift/async/TAsyncProtocolProcessor.cpp" + "${LIBRARY_DIR}/src/thrift/async/TConcurrentClientSyncInfo.h" + "${LIBRARY_DIR}/src/thrift/async/TConcurrentClientSyncInfo.cpp" + "${LIBRARY_DIR}/src/thrift/concurrency/ThreadManager.cpp" + "${LIBRARY_DIR}/src/thrift/concurrency/TimerManager.cpp" + "${LIBRARY_DIR}/src/thrift/concurrency/Util.cpp" + "${LIBRARY_DIR}/src/thrift/processor/PeekProcessor.cpp" + "${LIBRARY_DIR}/src/thrift/protocol/TBase64Utils.cpp" + "${LIBRARY_DIR}/src/thrift/protocol/TDebugProtocol.cpp" + "${LIBRARY_DIR}/src/thrift/protocol/TJSONProtocol.cpp" + "${LIBRARY_DIR}/src/thrift/protocol/TMultiplexedProtocol.cpp" + "${LIBRARY_DIR}/src/thrift/protocol/TProtocol.cpp" + "${LIBRARY_DIR}/src/thrift/transport/TTransportException.cpp" + "${LIBRARY_DIR}/src/thrift/transport/TFDTransport.cpp" + "${LIBRARY_DIR}/src/thrift/transport/TSimpleFileTransport.cpp" + "${LIBRARY_DIR}/src/thrift/transport/THttpTransport.cpp" + "${LIBRARY_DIR}/src/thrift/transport/THttpClient.cpp" + "${LIBRARY_DIR}/src/thrift/transport/THttpServer.cpp" + "${LIBRARY_DIR}/src/thrift/transport/TSocket.cpp" + "${LIBRARY_DIR}/src/thrift/transport/TSocketPool.cpp" + "${LIBRARY_DIR}/src/thrift/transport/TServerSocket.cpp" + "${LIBRARY_DIR}/src/thrift/transport/TTransportUtils.cpp" + "${LIBRARY_DIR}/src/thrift/transport/TBufferTransports.cpp" + "${LIBRARY_DIR}/src/thrift/server/TConnectedClient.cpp" + "${LIBRARY_DIR}/src/thrift/server/TServerFramework.cpp" + "${LIBRARY_DIR}/src/thrift/server/TSimpleServer.cpp" + "${LIBRARY_DIR}/src/thrift/server/TThreadPoolServer.cpp" + "${LIBRARY_DIR}/src/thrift/server/TThreadedServer.cpp" ) set(thriftcpp_threads_SOURCES - ${LIBRARY_DIR}/src/thrift/concurrency/ThreadFactory.cpp - ${LIBRARY_DIR}/src/thrift/concurrency/Thread.cpp - ${LIBRARY_DIR}/src/thrift/concurrency/Monitor.cpp - ${LIBRARY_DIR}/src/thrift/concurrency/Mutex.cpp + "${LIBRARY_DIR}/src/thrift/concurrency/ThreadFactory.cpp" + "${LIBRARY_DIR}/src/thrift/concurrency/Thread.cpp" + "${LIBRARY_DIR}/src/thrift/concurrency/Monitor.cpp" + "${LIBRARY_DIR}/src/thrift/concurrency/Mutex.cpp" ) add_library(${THRIFT_LIBRARY} ${thriftcpp_SOURCES} ${thriftcpp_threads_SOURCES}) set_target_properties(${THRIFT_LIBRARY} PROPERTIES CXX_STANDARD 14) # REMOVE after https://github.com/apache/thrift/pull/1641 -target_include_directories(${THRIFT_LIBRARY} SYSTEM PUBLIC ${ClickHouse_SOURCE_DIR}/contrib/thrift/lib/cpp/src) +target_include_directories(${THRIFT_LIBRARY} SYSTEM PUBLIC "${ClickHouse_SOURCE_DIR}/contrib/thrift/lib/cpp/src") target_link_libraries (${THRIFT_LIBRARY} PRIVATE boost::headers_only) # === orc 
-set(ORC_SOURCE_DIR ${ClickHouse_SOURCE_DIR}/contrib/orc/c++) -set(ORC_INCLUDE_DIR ${ORC_SOURCE_DIR}/include) -set(ORC_SOURCE_SRC_DIR ${ORC_SOURCE_DIR}/src) -set(ORC_SOURCE_WRAP_DIR ${ORC_SOURCE_DIR}/wrap) +set(ORC_SOURCE_DIR "${ClickHouse_SOURCE_DIR}/contrib/orc/c++") +set(ORC_INCLUDE_DIR "${ORC_SOURCE_DIR}/include") +set(ORC_SOURCE_SRC_DIR "${ORC_SOURCE_DIR}/src") +set(ORC_SOURCE_WRAP_DIR "${ORC_SOURCE_DIR}/wrap") -set(ORC_BUILD_SRC_DIR ${CMAKE_CURRENT_BINARY_DIR}/../orc/c++/src) -set(ORC_BUILD_INCLUDE_DIR ${CMAKE_CURRENT_BINARY_DIR}/../orc/c++/include) +set(ORC_BUILD_SRC_DIR "${CMAKE_CURRENT_BINARY_DIR}/../orc/c++/src") +set(ORC_BUILD_INCLUDE_DIR "${CMAKE_CURRENT_BINARY_DIR}/../orc/c++/include") -set(GOOGLE_PROTOBUF_DIR ${Protobuf_INCLUDE_DIR}/) +set(GOOGLE_PROTOBUF_DIR "${Protobuf_INCLUDE_DIR}/") set(ORC_ADDITION_SOURCE_DIR ${CMAKE_CURRENT_BINARY_DIR}) -set(ARROW_SRC_DIR ${ClickHouse_SOURCE_DIR}/contrib/arrow/cpp/src) +set(ARROW_SRC_DIR "${ClickHouse_SOURCE_DIR}/contrib/arrow/cpp/src") set(PROTOBUF_EXECUTABLE ${Protobuf_PROTOC_EXECUTABLE}) -set(PROTO_DIR ${ORC_SOURCE_DIR}/../proto) +set(PROTO_DIR "${ORC_SOURCE_DIR}/../proto") add_custom_command(OUTPUT orc_proto.pb.h orc_proto.pb.cc @@ -75,9 +75,9 @@ add_custom_command(OUTPUT orc_proto.pb.h orc_proto.pb.cc # === flatbuffers -set(FLATBUFFERS_SRC_DIR ${ClickHouse_SOURCE_DIR}/contrib/flatbuffers) -set(FLATBUFFERS_BINARY_DIR ${ClickHouse_BINARY_DIR}/contrib/flatbuffers) -set(FLATBUFFERS_INCLUDE_DIR ${FLATBUFFERS_SRC_DIR}/include) +set(FLATBUFFERS_SRC_DIR "${ClickHouse_SOURCE_DIR}/contrib/flatbuffers") +set(FLATBUFFERS_BINARY_DIR "${ClickHouse_BINARY_DIR}/contrib/flatbuffers") +set(FLATBUFFERS_INCLUDE_DIR "${FLATBUFFERS_SRC_DIR}/include") # set flatbuffers CMake options if (MAKE_STATIC_LIBRARIES) @@ -101,187 +101,187 @@ if (CMAKE_CXX_COMPILER_ID STREQUAL "AppleClang") set(CXX11_FLAGS "-std=c++0x") endif () -include(${ClickHouse_SOURCE_DIR}/contrib/orc/cmake_modules/CheckSourceCompiles.cmake) +include("${ClickHouse_SOURCE_DIR}/contrib/orc/cmake_modules/CheckSourceCompiles.cmake") include(orc_check.cmake) configure_file("${ORC_INCLUDE_DIR}/orc/orc-config.hh.in" "${ORC_BUILD_INCLUDE_DIR}/orc/orc-config.hh") configure_file("${ORC_SOURCE_SRC_DIR}/Adaptor.hh.in" "${ORC_BUILD_INCLUDE_DIR}/Adaptor.hh") set(ORC_SRCS - ${ARROW_SRC_DIR}/arrow/adapters/orc/adapter.cc - ${ARROW_SRC_DIR}/arrow/adapters/orc/adapter_util.cc - ${ORC_SOURCE_SRC_DIR}/Exceptions.cc - ${ORC_SOURCE_SRC_DIR}/OrcFile.cc - ${ORC_SOURCE_SRC_DIR}/Reader.cc - ${ORC_SOURCE_SRC_DIR}/ByteRLE.cc - ${ORC_SOURCE_SRC_DIR}/ColumnPrinter.cc - ${ORC_SOURCE_SRC_DIR}/ColumnReader.cc - ${ORC_SOURCE_SRC_DIR}/ColumnWriter.cc - ${ORC_SOURCE_SRC_DIR}/Common.cc - ${ORC_SOURCE_SRC_DIR}/Compression.cc - ${ORC_SOURCE_SRC_DIR}/Exceptions.cc - ${ORC_SOURCE_SRC_DIR}/Int128.cc - ${ORC_SOURCE_SRC_DIR}/LzoDecompressor.cc - ${ORC_SOURCE_SRC_DIR}/MemoryPool.cc - ${ORC_SOURCE_SRC_DIR}/OrcFile.cc - ${ORC_SOURCE_SRC_DIR}/Reader.cc - ${ORC_SOURCE_SRC_DIR}/RLE.cc - ${ORC_SOURCE_SRC_DIR}/RLEv1.cc - ${ORC_SOURCE_SRC_DIR}/RLEv2.cc - ${ORC_SOURCE_SRC_DIR}/Statistics.cc - ${ORC_SOURCE_SRC_DIR}/StripeStream.cc - ${ORC_SOURCE_SRC_DIR}/Timezone.cc - ${ORC_SOURCE_SRC_DIR}/TypeImpl.cc - ${ORC_SOURCE_SRC_DIR}/Vector.cc - ${ORC_SOURCE_SRC_DIR}/Writer.cc - ${ORC_SOURCE_SRC_DIR}/io/InputStream.cc - ${ORC_SOURCE_SRC_DIR}/io/OutputStream.cc - ${ORC_ADDITION_SOURCE_DIR}/orc_proto.pb.cc + "${ARROW_SRC_DIR}/arrow/adapters/orc/adapter.cc" + "${ARROW_SRC_DIR}/arrow/adapters/orc/adapter_util.cc" + "${ORC_SOURCE_SRC_DIR}/Exceptions.cc" + 
"${ORC_SOURCE_SRC_DIR}/OrcFile.cc" + "${ORC_SOURCE_SRC_DIR}/Reader.cc" + "${ORC_SOURCE_SRC_DIR}/ByteRLE.cc" + "${ORC_SOURCE_SRC_DIR}/ColumnPrinter.cc" + "${ORC_SOURCE_SRC_DIR}/ColumnReader.cc" + "${ORC_SOURCE_SRC_DIR}/ColumnWriter.cc" + "${ORC_SOURCE_SRC_DIR}/Common.cc" + "${ORC_SOURCE_SRC_DIR}/Compression.cc" + "${ORC_SOURCE_SRC_DIR}/Exceptions.cc" + "${ORC_SOURCE_SRC_DIR}/Int128.cc" + "${ORC_SOURCE_SRC_DIR}/LzoDecompressor.cc" + "${ORC_SOURCE_SRC_DIR}/MemoryPool.cc" + "${ORC_SOURCE_SRC_DIR}/OrcFile.cc" + "${ORC_SOURCE_SRC_DIR}/Reader.cc" + "${ORC_SOURCE_SRC_DIR}/RLE.cc" + "${ORC_SOURCE_SRC_DIR}/RLEv1.cc" + "${ORC_SOURCE_SRC_DIR}/RLEv2.cc" + "${ORC_SOURCE_SRC_DIR}/Statistics.cc" + "${ORC_SOURCE_SRC_DIR}/StripeStream.cc" + "${ORC_SOURCE_SRC_DIR}/Timezone.cc" + "${ORC_SOURCE_SRC_DIR}/TypeImpl.cc" + "${ORC_SOURCE_SRC_DIR}/Vector.cc" + "${ORC_SOURCE_SRC_DIR}/Writer.cc" + "${ORC_SOURCE_SRC_DIR}/io/InputStream.cc" + "${ORC_SOURCE_SRC_DIR}/io/OutputStream.cc" + "${ORC_ADDITION_SOURCE_DIR}/orc_proto.pb.cc" ) # === arrow -set(LIBRARY_DIR ${ClickHouse_SOURCE_DIR}/contrib/arrow/cpp/src/arrow) +set(LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/arrow/cpp/src/arrow") configure_file("${LIBRARY_DIR}/util/config.h.cmake" "${CMAKE_CURRENT_BINARY_DIR}/cpp/src/arrow/util/config.h") # arrow/cpp/src/arrow/CMakeLists.txt set(ARROW_SRCS - ${LIBRARY_DIR}/buffer.cc - ${LIBRARY_DIR}/builder.cc - ${LIBRARY_DIR}/chunked_array.cc - ${LIBRARY_DIR}/compare.cc - ${LIBRARY_DIR}/datum.cc - ${LIBRARY_DIR}/device.cc - ${LIBRARY_DIR}/extension_type.cc - ${LIBRARY_DIR}/memory_pool.cc - ${LIBRARY_DIR}/pretty_print.cc - ${LIBRARY_DIR}/record_batch.cc - ${LIBRARY_DIR}/result.cc - ${LIBRARY_DIR}/scalar.cc - ${LIBRARY_DIR}/sparse_tensor.cc - ${LIBRARY_DIR}/status.cc - ${LIBRARY_DIR}/table_builder.cc - ${LIBRARY_DIR}/table.cc - ${LIBRARY_DIR}/tensor.cc - ${LIBRARY_DIR}/type.cc - ${LIBRARY_DIR}/visitor.cc + "${LIBRARY_DIR}/buffer.cc" + "${LIBRARY_DIR}/builder.cc" + "${LIBRARY_DIR}/chunked_array.cc" + "${LIBRARY_DIR}/compare.cc" + "${LIBRARY_DIR}/datum.cc" + "${LIBRARY_DIR}/device.cc" + "${LIBRARY_DIR}/extension_type.cc" + "${LIBRARY_DIR}/memory_pool.cc" + "${LIBRARY_DIR}/pretty_print.cc" + "${LIBRARY_DIR}/record_batch.cc" + "${LIBRARY_DIR}/result.cc" + "${LIBRARY_DIR}/scalar.cc" + "${LIBRARY_DIR}/sparse_tensor.cc" + "${LIBRARY_DIR}/status.cc" + "${LIBRARY_DIR}/table_builder.cc" + "${LIBRARY_DIR}/table.cc" + "${LIBRARY_DIR}/tensor.cc" + "${LIBRARY_DIR}/type.cc" + "${LIBRARY_DIR}/visitor.cc" - ${LIBRARY_DIR}/array/array_base.cc - ${LIBRARY_DIR}/array/array_binary.cc - ${LIBRARY_DIR}/array/array_decimal.cc - ${LIBRARY_DIR}/array/array_dict.cc - ${LIBRARY_DIR}/array/array_nested.cc - ${LIBRARY_DIR}/array/array_primitive.cc - ${LIBRARY_DIR}/array/builder_adaptive.cc - ${LIBRARY_DIR}/array/builder_base.cc - ${LIBRARY_DIR}/array/builder_binary.cc - ${LIBRARY_DIR}/array/builder_decimal.cc - ${LIBRARY_DIR}/array/builder_dict.cc - ${LIBRARY_DIR}/array/builder_nested.cc - ${LIBRARY_DIR}/array/builder_primitive.cc - ${LIBRARY_DIR}/array/builder_union.cc - ${LIBRARY_DIR}/array/concatenate.cc - ${LIBRARY_DIR}/array/data.cc - ${LIBRARY_DIR}/array/diff.cc - ${LIBRARY_DIR}/array/util.cc - ${LIBRARY_DIR}/array/validate.cc + "${LIBRARY_DIR}/array/array_base.cc" + "${LIBRARY_DIR}/array/array_binary.cc" + "${LIBRARY_DIR}/array/array_decimal.cc" + "${LIBRARY_DIR}/array/array_dict.cc" + "${LIBRARY_DIR}/array/array_nested.cc" + "${LIBRARY_DIR}/array/array_primitive.cc" + "${LIBRARY_DIR}/array/builder_adaptive.cc" + "${LIBRARY_DIR}/array/builder_base.cc" + 
"${LIBRARY_DIR}/array/builder_binary.cc" + "${LIBRARY_DIR}/array/builder_decimal.cc" + "${LIBRARY_DIR}/array/builder_dict.cc" + "${LIBRARY_DIR}/array/builder_nested.cc" + "${LIBRARY_DIR}/array/builder_primitive.cc" + "${LIBRARY_DIR}/array/builder_union.cc" + "${LIBRARY_DIR}/array/concatenate.cc" + "${LIBRARY_DIR}/array/data.cc" + "${LIBRARY_DIR}/array/diff.cc" + "${LIBRARY_DIR}/array/util.cc" + "${LIBRARY_DIR}/array/validate.cc" - ${LIBRARY_DIR}/compute/api_scalar.cc - ${LIBRARY_DIR}/compute/api_vector.cc - ${LIBRARY_DIR}/compute/cast.cc - ${LIBRARY_DIR}/compute/exec.cc - ${LIBRARY_DIR}/compute/function.cc - ${LIBRARY_DIR}/compute/kernel.cc - ${LIBRARY_DIR}/compute/registry.cc + "${LIBRARY_DIR}/compute/api_scalar.cc" + "${LIBRARY_DIR}/compute/api_vector.cc" + "${LIBRARY_DIR}/compute/cast.cc" + "${LIBRARY_DIR}/compute/exec.cc" + "${LIBRARY_DIR}/compute/function.cc" + "${LIBRARY_DIR}/compute/kernel.cc" + "${LIBRARY_DIR}/compute/registry.cc" - ${LIBRARY_DIR}/compute/kernels/aggregate_basic.cc - ${LIBRARY_DIR}/compute/kernels/aggregate_mode.cc - ${LIBRARY_DIR}/compute/kernels/aggregate_var_std.cc - ${LIBRARY_DIR}/compute/kernels/codegen_internal.cc - ${LIBRARY_DIR}/compute/kernels/scalar_arithmetic.cc - ${LIBRARY_DIR}/compute/kernels/scalar_boolean.cc - ${LIBRARY_DIR}/compute/kernels/scalar_cast_boolean.cc - ${LIBRARY_DIR}/compute/kernels/scalar_cast_internal.cc - ${LIBRARY_DIR}/compute/kernels/scalar_cast_nested.cc - ${LIBRARY_DIR}/compute/kernels/scalar_cast_numeric.cc - ${LIBRARY_DIR}/compute/kernels/scalar_cast_string.cc - ${LIBRARY_DIR}/compute/kernels/scalar_cast_temporal.cc - ${LIBRARY_DIR}/compute/kernels/scalar_compare.cc - ${LIBRARY_DIR}/compute/kernels/scalar_fill_null.cc - ${LIBRARY_DIR}/compute/kernels/scalar_nested.cc - ${LIBRARY_DIR}/compute/kernels/scalar_set_lookup.cc - ${LIBRARY_DIR}/compute/kernels/scalar_string.cc - ${LIBRARY_DIR}/compute/kernels/scalar_validity.cc - ${LIBRARY_DIR}/compute/kernels/vector_hash.cc - ${LIBRARY_DIR}/compute/kernels/vector_nested.cc - ${LIBRARY_DIR}/compute/kernels/vector_selection.cc - ${LIBRARY_DIR}/compute/kernels/vector_sort.cc - ${LIBRARY_DIR}/compute/kernels/util_internal.cc + "${LIBRARY_DIR}/compute/kernels/aggregate_basic.cc" + "${LIBRARY_DIR}/compute/kernels/aggregate_mode.cc" + "${LIBRARY_DIR}/compute/kernels/aggregate_var_std.cc" + "${LIBRARY_DIR}/compute/kernels/codegen_internal.cc" + "${LIBRARY_DIR}/compute/kernels/scalar_arithmetic.cc" + "${LIBRARY_DIR}/compute/kernels/scalar_boolean.cc" + "${LIBRARY_DIR}/compute/kernels/scalar_cast_boolean.cc" + "${LIBRARY_DIR}/compute/kernels/scalar_cast_internal.cc" + "${LIBRARY_DIR}/compute/kernels/scalar_cast_nested.cc" + "${LIBRARY_DIR}/compute/kernels/scalar_cast_numeric.cc" + "${LIBRARY_DIR}/compute/kernels/scalar_cast_string.cc" + "${LIBRARY_DIR}/compute/kernels/scalar_cast_temporal.cc" + "${LIBRARY_DIR}/compute/kernels/scalar_compare.cc" + "${LIBRARY_DIR}/compute/kernels/scalar_fill_null.cc" + "${LIBRARY_DIR}/compute/kernels/scalar_nested.cc" + "${LIBRARY_DIR}/compute/kernels/scalar_set_lookup.cc" + "${LIBRARY_DIR}/compute/kernels/scalar_string.cc" + "${LIBRARY_DIR}/compute/kernels/scalar_validity.cc" + "${LIBRARY_DIR}/compute/kernels/vector_hash.cc" + "${LIBRARY_DIR}/compute/kernels/vector_nested.cc" + "${LIBRARY_DIR}/compute/kernels/vector_selection.cc" + "${LIBRARY_DIR}/compute/kernels/vector_sort.cc" + "${LIBRARY_DIR}/compute/kernels/util_internal.cc" - ${LIBRARY_DIR}/csv/chunker.cc - ${LIBRARY_DIR}/csv/column_builder.cc - ${LIBRARY_DIR}/csv/column_decoder.cc - 
${LIBRARY_DIR}/csv/converter.cc - ${LIBRARY_DIR}/csv/options.cc - ${LIBRARY_DIR}/csv/parser.cc - ${LIBRARY_DIR}/csv/reader.cc + "${LIBRARY_DIR}/csv/chunker.cc" + "${LIBRARY_DIR}/csv/column_builder.cc" + "${LIBRARY_DIR}/csv/column_decoder.cc" + "${LIBRARY_DIR}/csv/converter.cc" + "${LIBRARY_DIR}/csv/options.cc" + "${LIBRARY_DIR}/csv/parser.cc" + "${LIBRARY_DIR}/csv/reader.cc" - ${LIBRARY_DIR}/ipc/dictionary.cc - ${LIBRARY_DIR}/ipc/feather.cc - ${LIBRARY_DIR}/ipc/message.cc - ${LIBRARY_DIR}/ipc/metadata_internal.cc - ${LIBRARY_DIR}/ipc/options.cc - ${LIBRARY_DIR}/ipc/reader.cc - ${LIBRARY_DIR}/ipc/writer.cc + "${LIBRARY_DIR}/ipc/dictionary.cc" + "${LIBRARY_DIR}/ipc/feather.cc" + "${LIBRARY_DIR}/ipc/message.cc" + "${LIBRARY_DIR}/ipc/metadata_internal.cc" + "${LIBRARY_DIR}/ipc/options.cc" + "${LIBRARY_DIR}/ipc/reader.cc" + "${LIBRARY_DIR}/ipc/writer.cc" - ${LIBRARY_DIR}/io/buffered.cc - ${LIBRARY_DIR}/io/caching.cc - ${LIBRARY_DIR}/io/compressed.cc - ${LIBRARY_DIR}/io/file.cc - ${LIBRARY_DIR}/io/interfaces.cc - ${LIBRARY_DIR}/io/memory.cc - ${LIBRARY_DIR}/io/slow.cc + "${LIBRARY_DIR}/io/buffered.cc" + "${LIBRARY_DIR}/io/caching.cc" + "${LIBRARY_DIR}/io/compressed.cc" + "${LIBRARY_DIR}/io/file.cc" + "${LIBRARY_DIR}/io/interfaces.cc" + "${LIBRARY_DIR}/io/memory.cc" + "${LIBRARY_DIR}/io/slow.cc" - ${LIBRARY_DIR}/tensor/coo_converter.cc - ${LIBRARY_DIR}/tensor/csf_converter.cc - ${LIBRARY_DIR}/tensor/csx_converter.cc + "${LIBRARY_DIR}/tensor/coo_converter.cc" + "${LIBRARY_DIR}/tensor/csf_converter.cc" + "${LIBRARY_DIR}/tensor/csx_converter.cc" - ${LIBRARY_DIR}/util/basic_decimal.cc - ${LIBRARY_DIR}/util/bit_block_counter.cc - ${LIBRARY_DIR}/util/bit_run_reader.cc - ${LIBRARY_DIR}/util/bit_util.cc - ${LIBRARY_DIR}/util/bitmap.cc - ${LIBRARY_DIR}/util/bitmap_builders.cc - ${LIBRARY_DIR}/util/bitmap_ops.cc - ${LIBRARY_DIR}/util/bpacking.cc - ${LIBRARY_DIR}/util/compression.cc - ${LIBRARY_DIR}/util/compression_lz4.cc - ${LIBRARY_DIR}/util/compression_snappy.cc - ${LIBRARY_DIR}/util/compression_zlib.cc - ${LIBRARY_DIR}/util/compression_zstd.cc - ${LIBRARY_DIR}/util/cpu_info.cc - ${LIBRARY_DIR}/util/decimal.cc - ${LIBRARY_DIR}/util/delimiting.cc - ${LIBRARY_DIR}/util/formatting.cc - ${LIBRARY_DIR}/util/future.cc - ${LIBRARY_DIR}/util/int_util.cc - ${LIBRARY_DIR}/util/io_util.cc - ${LIBRARY_DIR}/util/iterator.cc - ${LIBRARY_DIR}/util/key_value_metadata.cc - ${LIBRARY_DIR}/util/logging.cc - ${LIBRARY_DIR}/util/memory.cc - ${LIBRARY_DIR}/util/string_builder.cc - ${LIBRARY_DIR}/util/string.cc - ${LIBRARY_DIR}/util/task_group.cc - ${LIBRARY_DIR}/util/thread_pool.cc - ${LIBRARY_DIR}/util/time.cc - ${LIBRARY_DIR}/util/trie.cc - ${LIBRARY_DIR}/util/utf8.cc - ${LIBRARY_DIR}/util/value_parsing.cc + "${LIBRARY_DIR}/util/basic_decimal.cc" + "${LIBRARY_DIR}/util/bit_block_counter.cc" + "${LIBRARY_DIR}/util/bit_run_reader.cc" + "${LIBRARY_DIR}/util/bit_util.cc" + "${LIBRARY_DIR}/util/bitmap.cc" + "${LIBRARY_DIR}/util/bitmap_builders.cc" + "${LIBRARY_DIR}/util/bitmap_ops.cc" + "${LIBRARY_DIR}/util/bpacking.cc" + "${LIBRARY_DIR}/util/compression.cc" + "${LIBRARY_DIR}/util/compression_lz4.cc" + "${LIBRARY_DIR}/util/compression_snappy.cc" + "${LIBRARY_DIR}/util/compression_zlib.cc" + "${LIBRARY_DIR}/util/compression_zstd.cc" + "${LIBRARY_DIR}/util/cpu_info.cc" + "${LIBRARY_DIR}/util/decimal.cc" + "${LIBRARY_DIR}/util/delimiting.cc" + "${LIBRARY_DIR}/util/formatting.cc" + "${LIBRARY_DIR}/util/future.cc" + "${LIBRARY_DIR}/util/int_util.cc" + "${LIBRARY_DIR}/util/io_util.cc" + "${LIBRARY_DIR}/util/iterator.cc" + 
"${LIBRARY_DIR}/util/key_value_metadata.cc" + "${LIBRARY_DIR}/util/logging.cc" + "${LIBRARY_DIR}/util/memory.cc" + "${LIBRARY_DIR}/util/string_builder.cc" + "${LIBRARY_DIR}/util/string.cc" + "${LIBRARY_DIR}/util/task_group.cc" + "${LIBRARY_DIR}/util/thread_pool.cc" + "${LIBRARY_DIR}/util/time.cc" + "${LIBRARY_DIR}/util/trie.cc" + "${LIBRARY_DIR}/util/utf8.cc" + "${LIBRARY_DIR}/util/value_parsing.cc" - ${LIBRARY_DIR}/vendored/base64.cpp + "${LIBRARY_DIR}/vendored/base64.cpp" ${ORC_SRCS} ) @@ -298,21 +298,21 @@ if (ZSTD_INCLUDE_DIR AND ZSTD_LIBRARY) endif () add_definitions(-DARROW_WITH_LZ4) -SET(ARROW_SRCS ${LIBRARY_DIR}/util/compression_lz4.cc ${ARROW_SRCS}) +SET(ARROW_SRCS "${LIBRARY_DIR}/util/compression_lz4.cc" ${ARROW_SRCS}) if (ARROW_WITH_SNAPPY) add_definitions(-DARROW_WITH_SNAPPY) - SET(ARROW_SRCS ${LIBRARY_DIR}/util/compression_snappy.cc ${ARROW_SRCS}) + SET(ARROW_SRCS "${LIBRARY_DIR}/util/compression_snappy.cc" ${ARROW_SRCS}) endif () if (ARROW_WITH_ZLIB) add_definitions(-DARROW_WITH_ZLIB) - SET(ARROW_SRCS ${LIBRARY_DIR}/util/compression_zlib.cc ${ARROW_SRCS}) + SET(ARROW_SRCS "${LIBRARY_DIR}/util/compression_zlib.cc" ${ARROW_SRCS}) endif () if (ARROW_WITH_ZSTD) add_definitions(-DARROW_WITH_ZSTD) - SET(ARROW_SRCS ${LIBRARY_DIR}/util/compression_zstd.cc ${ARROW_SRCS}) + SET(ARROW_SRCS "${LIBRARY_DIR}/util/compression_zstd.cc" ${ARROW_SRCS}) endif () @@ -327,8 +327,8 @@ if (USE_INTERNAL_PROTOBUF_LIBRARY) add_dependencies(${ARROW_LIBRARY} protoc) endif () -target_include_directories(${ARROW_LIBRARY} SYSTEM PUBLIC ${ClickHouse_SOURCE_DIR}/contrib/arrow/cpp/src) -target_include_directories(${ARROW_LIBRARY} SYSTEM PUBLIC ${CMAKE_CURRENT_BINARY_DIR}/cpp/src) +target_include_directories(${ARROW_LIBRARY} SYSTEM PUBLIC "${ClickHouse_SOURCE_DIR}/contrib/arrow/cpp/src") +target_include_directories(${ARROW_LIBRARY} SYSTEM PUBLIC "${CMAKE_CURRENT_BINARY_DIR}/cpp/src") target_link_libraries(${ARROW_LIBRARY} PRIVATE ${DOUBLE_CONVERSION_LIBRARIES} ${Protobuf_LIBRARY}) target_link_libraries(${ARROW_LIBRARY} PRIVATE lz4) if (ARROW_WITH_SNAPPY) @@ -354,46 +354,46 @@ target_include_directories(${ARROW_LIBRARY} PRIVATE SYSTEM ${FLATBUFFERS_INCLUDE # === parquet -set(LIBRARY_DIR ${ClickHouse_SOURCE_DIR}/contrib/arrow/cpp/src/parquet) -set(GEN_LIBRARY_DIR ${ClickHouse_SOURCE_DIR}/contrib/arrow/cpp/src/generated) +set(LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/arrow/cpp/src/parquet") +set(GEN_LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/arrow/cpp/src/generated") # arrow/cpp/src/parquet/CMakeLists.txt set(PARQUET_SRCS - ${LIBRARY_DIR}/arrow/path_internal.cc - ${LIBRARY_DIR}/arrow/reader.cc - ${LIBRARY_DIR}/arrow/reader_internal.cc - ${LIBRARY_DIR}/arrow/schema.cc - ${LIBRARY_DIR}/arrow/schema_internal.cc - ${LIBRARY_DIR}/arrow/writer.cc - ${LIBRARY_DIR}/bloom_filter.cc - ${LIBRARY_DIR}/column_reader.cc - ${LIBRARY_DIR}/column_scanner.cc - ${LIBRARY_DIR}/column_writer.cc - ${LIBRARY_DIR}/deprecated_io.cc - ${LIBRARY_DIR}/encoding.cc - ${LIBRARY_DIR}/encryption.cc - ${LIBRARY_DIR}/encryption_internal.cc - ${LIBRARY_DIR}/file_reader.cc - ${LIBRARY_DIR}/file_writer.cc - ${LIBRARY_DIR}/internal_file_decryptor.cc - ${LIBRARY_DIR}/internal_file_encryptor.cc - ${LIBRARY_DIR}/level_conversion.cc - ${LIBRARY_DIR}/level_comparison.cc - ${LIBRARY_DIR}/metadata.cc - ${LIBRARY_DIR}/murmur3.cc - ${LIBRARY_DIR}/platform.cc - ${LIBRARY_DIR}/printer.cc - ${LIBRARY_DIR}/properties.cc - ${LIBRARY_DIR}/schema.cc - ${LIBRARY_DIR}/statistics.cc - ${LIBRARY_DIR}/types.cc + "${LIBRARY_DIR}/arrow/path_internal.cc" + 
"${LIBRARY_DIR}/arrow/reader.cc" + "${LIBRARY_DIR}/arrow/reader_internal.cc" + "${LIBRARY_DIR}/arrow/schema.cc" + "${LIBRARY_DIR}/arrow/schema_internal.cc" + "${LIBRARY_DIR}/arrow/writer.cc" + "${LIBRARY_DIR}/bloom_filter.cc" + "${LIBRARY_DIR}/column_reader.cc" + "${LIBRARY_DIR}/column_scanner.cc" + "${LIBRARY_DIR}/column_writer.cc" + "${LIBRARY_DIR}/deprecated_io.cc" + "${LIBRARY_DIR}/encoding.cc" + "${LIBRARY_DIR}/encryption.cc" + "${LIBRARY_DIR}/encryption_internal.cc" + "${LIBRARY_DIR}/file_reader.cc" + "${LIBRARY_DIR}/file_writer.cc" + "${LIBRARY_DIR}/internal_file_decryptor.cc" + "${LIBRARY_DIR}/internal_file_encryptor.cc" + "${LIBRARY_DIR}/level_conversion.cc" + "${LIBRARY_DIR}/level_comparison.cc" + "${LIBRARY_DIR}/metadata.cc" + "${LIBRARY_DIR}/murmur3.cc" + "${LIBRARY_DIR}/platform.cc" + "${LIBRARY_DIR}/printer.cc" + "${LIBRARY_DIR}/properties.cc" + "${LIBRARY_DIR}/schema.cc" + "${LIBRARY_DIR}/statistics.cc" + "${LIBRARY_DIR}/types.cc" - ${GEN_LIBRARY_DIR}/parquet_constants.cpp - ${GEN_LIBRARY_DIR}/parquet_types.cpp + "${GEN_LIBRARY_DIR}/parquet_constants.cpp" + "${GEN_LIBRARY_DIR}/parquet_types.cpp" ) -#list(TRANSFORM PARQUET_SRCS PREPEND ${LIBRARY_DIR}/) # cmake 3.12 +#list(TRANSFORM PARQUET_SRCS PREPEND "${LIBRARY_DIR}/") # cmake 3.12 add_library(${PARQUET_LIBRARY} ${PARQUET_SRCS}) -target_include_directories(${PARQUET_LIBRARY} SYSTEM PUBLIC ${ClickHouse_SOURCE_DIR}/contrib/arrow/cpp/src ${CMAKE_CURRENT_SOURCE_DIR}/cpp/src PRIVATE ${OPENSSL_INCLUDE_DIR}) -include(${ClickHouse_SOURCE_DIR}/contrib/thrift/build/cmake/ConfigureChecks.cmake) # makes config.h +target_include_directories(${PARQUET_LIBRARY} SYSTEM PUBLIC "${ClickHouse_SOURCE_DIR}/contrib/arrow/cpp/src" "${CMAKE_CURRENT_SOURCE_DIR}/cpp/src" PRIVATE ${OPENSSL_INCLUDE_DIR}) +include("${ClickHouse_SOURCE_DIR}/contrib/thrift/build/cmake/ConfigureChecks.cmake") # makes config.h target_link_libraries(${PARQUET_LIBRARY} PUBLIC ${ARROW_LIBRARY} PRIVATE ${THRIFT_LIBRARY} boost::headers_only boost::regex ${OPENSSL_LIBRARIES}) if (SANITIZE STREQUAL "undefined") @@ -403,9 +403,9 @@ endif () # === tools -set(TOOLS_DIR ${ClickHouse_SOURCE_DIR}/contrib/arrow/cpp/tools/parquet) +set(TOOLS_DIR "${ClickHouse_SOURCE_DIR}/contrib/arrow/cpp/tools/parquet") set(PARQUET_TOOLS parquet_dump_schema parquet_reader parquet_scan) foreach (TOOL ${PARQUET_TOOLS}) - add_executable(${TOOL} ${TOOLS_DIR}/${TOOL}.cc) + add_executable(${TOOL} "${TOOLS_DIR}/${TOOL}.cc") target_link_libraries(${TOOL} PRIVATE ${PARQUET_LIBRARY}) endforeach () diff --git a/contrib/avro-cmake/CMakeLists.txt b/contrib/avro-cmake/CMakeLists.txt index 052a19ee804..b56afd1598c 100644 --- a/contrib/avro-cmake/CMakeLists.txt +++ b/contrib/avro-cmake/CMakeLists.txt @@ -1,10 +1,10 @@ -set(AVROCPP_ROOT_DIR ${CMAKE_SOURCE_DIR}/contrib/avro/lang/c++) -set(AVROCPP_INCLUDE_DIR ${AVROCPP_ROOT_DIR}/api) -set(AVROCPP_SOURCE_DIR ${AVROCPP_ROOT_DIR}/impl) +set(AVROCPP_ROOT_DIR "${CMAKE_SOURCE_DIR}/contrib/avro/lang/c++") +set(AVROCPP_INCLUDE_DIR "${AVROCPP_ROOT_DIR}/api") +set(AVROCPP_SOURCE_DIR "${AVROCPP_ROOT_DIR}/impl") set (CMAKE_CXX_STANDARD 17) -if (EXISTS ${AVROCPP_ROOT_DIR}/../../share/VERSION.txt) +if (EXISTS "${AVROCPP_ROOT_DIR}/../../share/VERSION.txt") file(READ "${AVROCPP_ROOT_DIR}/../../share/VERSION.txt" AVRO_VERSION) endif() @@ -14,30 +14,30 @@ set (AVRO_VERSION_MAJOR ${AVRO_VERSION}) set (AVRO_VERSION_MINOR "0") set (AVROCPP_SOURCE_FILES - ${AVROCPP_SOURCE_DIR}/Compiler.cc - ${AVROCPP_SOURCE_DIR}/Node.cc - ${AVROCPP_SOURCE_DIR}/LogicalType.cc - ${AVROCPP_SOURCE_DIR}/NodeImpl.cc 
-    ${AVROCPP_SOURCE_DIR}/ResolverSchema.cc
-    ${AVROCPP_SOURCE_DIR}/Schema.cc
-    ${AVROCPP_SOURCE_DIR}/Types.cc
-    ${AVROCPP_SOURCE_DIR}/ValidSchema.cc
-    ${AVROCPP_SOURCE_DIR}/Zigzag.cc
-    ${AVROCPP_SOURCE_DIR}/BinaryEncoder.cc
-    ${AVROCPP_SOURCE_DIR}/BinaryDecoder.cc
-    ${AVROCPP_SOURCE_DIR}/Stream.cc
-    ${AVROCPP_SOURCE_DIR}/FileStream.cc
-    ${AVROCPP_SOURCE_DIR}/Generic.cc
-    ${AVROCPP_SOURCE_DIR}/GenericDatum.cc
-    ${AVROCPP_SOURCE_DIR}/DataFile.cc
-    ${AVROCPP_SOURCE_DIR}/parsing/Symbol.cc
-    ${AVROCPP_SOURCE_DIR}/parsing/ValidatingCodec.cc
-    ${AVROCPP_SOURCE_DIR}/parsing/JsonCodec.cc
-    ${AVROCPP_SOURCE_DIR}/parsing/ResolvingDecoder.cc
-    ${AVROCPP_SOURCE_DIR}/json/JsonIO.cc
-    ${AVROCPP_SOURCE_DIR}/json/JsonDom.cc
-    ${AVROCPP_SOURCE_DIR}/Resolver.cc
-    ${AVROCPP_SOURCE_DIR}/Validator.cc
+    "${AVROCPP_SOURCE_DIR}/Compiler.cc"
+    "${AVROCPP_SOURCE_DIR}/Node.cc"
+    "${AVROCPP_SOURCE_DIR}/LogicalType.cc"
+    "${AVROCPP_SOURCE_DIR}/NodeImpl.cc"
+    "${AVROCPP_SOURCE_DIR}/ResolverSchema.cc"
+    "${AVROCPP_SOURCE_DIR}/Schema.cc"
+    "${AVROCPP_SOURCE_DIR}/Types.cc"
+    "${AVROCPP_SOURCE_DIR}/ValidSchema.cc"
+    "${AVROCPP_SOURCE_DIR}/Zigzag.cc"
+    "${AVROCPP_SOURCE_DIR}/BinaryEncoder.cc"
+    "${AVROCPP_SOURCE_DIR}/BinaryDecoder.cc"
+    "${AVROCPP_SOURCE_DIR}/Stream.cc"
+    "${AVROCPP_SOURCE_DIR}/FileStream.cc"
+    "${AVROCPP_SOURCE_DIR}/Generic.cc"
+    "${AVROCPP_SOURCE_DIR}/GenericDatum.cc"
+    "${AVROCPP_SOURCE_DIR}/DataFile.cc"
+    "${AVROCPP_SOURCE_DIR}/parsing/Symbol.cc"
+    "${AVROCPP_SOURCE_DIR}/parsing/ValidatingCodec.cc"
+    "${AVROCPP_SOURCE_DIR}/parsing/JsonCodec.cc"
+    "${AVROCPP_SOURCE_DIR}/parsing/ResolvingDecoder.cc"
+    "${AVROCPP_SOURCE_DIR}/json/JsonIO.cc"
+    "${AVROCPP_SOURCE_DIR}/json/JsonDom.cc"
+    "${AVROCPP_SOURCE_DIR}/Resolver.cc"
+    "${AVROCPP_SOURCE_DIR}/Validator.cc"
 )
 
 add_library (avrocpp ${AVROCPP_SOURCE_FILES})
@@ -63,7 +63,7 @@ target_compile_options(avrocpp PRIVATE ${SUPPRESS_WARNINGS})
 
 # create a symlink to include headers with <avro/...>
 ADD_CUSTOM_TARGET(avro_symlink_headers ALL
-    COMMAND ${CMAKE_COMMAND} -E make_directory ${AVROCPP_ROOT_DIR}/include
-    COMMAND ${CMAKE_COMMAND} -E create_symlink ${AVROCPP_ROOT_DIR}/api ${AVROCPP_ROOT_DIR}/include/avro
+    COMMAND ${CMAKE_COMMAND} -E make_directory "${AVROCPP_ROOT_DIR}/include"
+    COMMAND ${CMAKE_COMMAND} -E create_symlink "${AVROCPP_ROOT_DIR}/api" "${AVROCPP_ROOT_DIR}/include/avro"
 )
 add_dependencies(avrocpp avro_symlink_headers)
diff --git a/contrib/aws-s3-cmake/CMakeLists.txt b/contrib/aws-s3-cmake/CMakeLists.txt
index 02dee91c70c..723ceac3991 100644
--- a/contrib/aws-s3-cmake/CMakeLists.txt
+++ b/contrib/aws-s3-cmake/CMakeLists.txt
@@ -1,8 +1,8 @@
-SET(AWS_S3_LIBRARY_DIR ${ClickHouse_SOURCE_DIR}/contrib/aws/aws-cpp-sdk-s3)
-SET(AWS_CORE_LIBRARY_DIR ${ClickHouse_SOURCE_DIR}/contrib/aws/aws-cpp-sdk-core)
-SET(AWS_CHECKSUMS_LIBRARY_DIR ${ClickHouse_SOURCE_DIR}/contrib/aws-checksums)
-SET(AWS_COMMON_LIBRARY_DIR ${ClickHouse_SOURCE_DIR}/contrib/aws-c-common)
-SET(AWS_EVENT_STREAM_LIBRARY_DIR ${ClickHouse_SOURCE_DIR}/contrib/aws-c-event-stream)
+SET(AWS_S3_LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/aws/aws-cpp-sdk-s3")
+SET(AWS_CORE_LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/aws/aws-cpp-sdk-core")
+SET(AWS_CHECKSUMS_LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/aws-checksums")
+SET(AWS_COMMON_LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/aws-c-common")
+SET(AWS_EVENT_STREAM_LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/aws-c-event-stream")
 
 OPTION(USE_AWS_MEMORY_MANAGEMENT "Aws memory management" OFF)
 configure_file("${AWS_CORE_LIBRARY_DIR}/include/aws/core/SDKConfig.h.in"
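
The avro_symlink_headers target in the avro-cmake hunk above is what lets the bundled Avro C++ code build without an install step: the checkout keeps the public headers in api/, while the sources expect the conventional avro/ include prefix. A sketch of the effect, with an illustrative include that is not quoted from this patch:

    # Layout produced by avro_symlink_headers, relative to ${AVROCPP_ROOT_DIR}:
    #   include/avro -> api
    # so any target with "${AVROCPP_ROOT_DIR}/include" on its include path can
    # resolve the prefixed form, e.g.:
    #   #include <avro/DataFile.hh>
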
diff --git a/contrib/base64-cmake/CMakeLists.txt b/contrib/base64-cmake/CMakeLists.txt index a295ee45b84..4ebb4e68728 100644 --- a/contrib/base64-cmake/CMakeLists.txt +++ b/contrib/base64-cmake/CMakeLists.txt @@ -1,11 +1,11 @@ -SET(LIBRARY_DIR ${ClickHouse_SOURCE_DIR}/contrib/base64) +SET(LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/base64") -add_library(base64_scalar OBJECT ${LIBRARY_DIR}/turbob64c.c ${LIBRARY_DIR}/turbob64d.c) -add_library(base64_ssse3 OBJECT ${LIBRARY_DIR}/turbob64sse.c) # This file also contains code for ARM NEON +add_library(base64_scalar OBJECT "${LIBRARY_DIR}/turbob64c.c" "${LIBRARY_DIR}/turbob64d.c") +add_library(base64_ssse3 OBJECT "${LIBRARY_DIR}/turbob64sse.c") # This file also contains code for ARM NEON if (ARCH_AMD64) - add_library(base64_avx OBJECT ${LIBRARY_DIR}/turbob64sse.c) # This is not a mistake. One file is compiled twice. - add_library(base64_avx2 OBJECT ${LIBRARY_DIR}/turbob64avx2.c) + add_library(base64_avx OBJECT "${LIBRARY_DIR}/turbob64sse.c") # This is not a mistake. One file is compiled twice. + add_library(base64_avx2 OBJECT "${LIBRARY_DIR}/turbob64avx2.c") endif () target_compile_options(base64_scalar PRIVATE -falign-loops) diff --git a/contrib/boost b/contrib/boost index ee24fa55bc4..1ccbb5a522a 160000 --- a/contrib/boost +++ b/contrib/boost @@ -1 +1 @@ -Subproject commit ee24fa55bc46e4d2ce7d0d052cc5a0d9b1be8c36 +Subproject commit 1ccbb5a522a571ce83b606dbc2e1011c42ecccfb diff --git a/contrib/boost-cmake/CMakeLists.txt b/contrib/boost-cmake/CMakeLists.txt index b9298f59f2b..9f6c5b1255d 100644 --- a/contrib/boost-cmake/CMakeLists.txt +++ b/contrib/boost-cmake/CMakeLists.txt @@ -56,19 +56,19 @@ endif() if (NOT EXTERNAL_BOOST_FOUND) set (USE_INTERNAL_BOOST_LIBRARY 1) - set (LIBRARY_DIR ${ClickHouse_SOURCE_DIR}/contrib/boost) + set (LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/boost") # filesystem set (SRCS_FILESYSTEM - ${LIBRARY_DIR}/libs/filesystem/src/codecvt_error_category.cpp - ${LIBRARY_DIR}/libs/filesystem/src/operations.cpp - ${LIBRARY_DIR}/libs/filesystem/src/path_traits.cpp - ${LIBRARY_DIR}/libs/filesystem/src/path.cpp - ${LIBRARY_DIR}/libs/filesystem/src/portability.cpp - ${LIBRARY_DIR}/libs/filesystem/src/unique_path.cpp - ${LIBRARY_DIR}/libs/filesystem/src/utf8_codecvt_facet.cpp - ${LIBRARY_DIR}/libs/filesystem/src/windows_file_codecvt.cpp + "${LIBRARY_DIR}/libs/filesystem/src/codecvt_error_category.cpp" + "${LIBRARY_DIR}/libs/filesystem/src/operations.cpp" + "${LIBRARY_DIR}/libs/filesystem/src/path_traits.cpp" + "${LIBRARY_DIR}/libs/filesystem/src/path.cpp" + "${LIBRARY_DIR}/libs/filesystem/src/portability.cpp" + "${LIBRARY_DIR}/libs/filesystem/src/unique_path.cpp" + "${LIBRARY_DIR}/libs/filesystem/src/utf8_codecvt_facet.cpp" + "${LIBRARY_DIR}/libs/filesystem/src/windows_file_codecvt.cpp" ) add_library (_boost_filesystem ${SRCS_FILESYSTEM}) @@ -88,10 +88,10 @@ if (NOT EXTERNAL_BOOST_FOUND) # iostreams set (SRCS_IOSTREAMS - ${LIBRARY_DIR}/libs/iostreams/src/file_descriptor.cpp - ${LIBRARY_DIR}/libs/iostreams/src/gzip.cpp - ${LIBRARY_DIR}/libs/iostreams/src/mapped_file.cpp - ${LIBRARY_DIR}/libs/iostreams/src/zlib.cpp + "${LIBRARY_DIR}/libs/iostreams/src/file_descriptor.cpp" + "${LIBRARY_DIR}/libs/iostreams/src/gzip.cpp" + "${LIBRARY_DIR}/libs/iostreams/src/mapped_file.cpp" + "${LIBRARY_DIR}/libs/iostreams/src/zlib.cpp" ) add_library (_boost_iostreams ${SRCS_IOSTREAMS}) @@ -102,17 +102,17 @@ if (NOT EXTERNAL_BOOST_FOUND) # program_options set (SRCS_PROGRAM_OPTIONS - ${LIBRARY_DIR}/libs/program_options/src/cmdline.cpp - 
${LIBRARY_DIR}/libs/program_options/src/config_file.cpp - ${LIBRARY_DIR}/libs/program_options/src/convert.cpp - ${LIBRARY_DIR}/libs/program_options/src/options_description.cpp - ${LIBRARY_DIR}/libs/program_options/src/parsers.cpp - ${LIBRARY_DIR}/libs/program_options/src/positional_options.cpp - ${LIBRARY_DIR}/libs/program_options/src/split.cpp - ${LIBRARY_DIR}/libs/program_options/src/utf8_codecvt_facet.cpp - ${LIBRARY_DIR}/libs/program_options/src/value_semantic.cpp - ${LIBRARY_DIR}/libs/program_options/src/variables_map.cpp - ${LIBRARY_DIR}/libs/program_options/src/winmain.cpp + "${LIBRARY_DIR}/libs/program_options/src/cmdline.cpp" + "${LIBRARY_DIR}/libs/program_options/src/config_file.cpp" + "${LIBRARY_DIR}/libs/program_options/src/convert.cpp" + "${LIBRARY_DIR}/libs/program_options/src/options_description.cpp" + "${LIBRARY_DIR}/libs/program_options/src/parsers.cpp" + "${LIBRARY_DIR}/libs/program_options/src/positional_options.cpp" + "${LIBRARY_DIR}/libs/program_options/src/split.cpp" + "${LIBRARY_DIR}/libs/program_options/src/utf8_codecvt_facet.cpp" + "${LIBRARY_DIR}/libs/program_options/src/value_semantic.cpp" + "${LIBRARY_DIR}/libs/program_options/src/variables_map.cpp" + "${LIBRARY_DIR}/libs/program_options/src/winmain.cpp" ) add_library (_boost_program_options ${SRCS_PROGRAM_OPTIONS}) @@ -122,24 +122,24 @@ if (NOT EXTERNAL_BOOST_FOUND) # regex set (SRCS_REGEX - ${LIBRARY_DIR}/libs/regex/src/c_regex_traits.cpp - ${LIBRARY_DIR}/libs/regex/src/cpp_regex_traits.cpp - ${LIBRARY_DIR}/libs/regex/src/cregex.cpp - ${LIBRARY_DIR}/libs/regex/src/fileiter.cpp - ${LIBRARY_DIR}/libs/regex/src/icu.cpp - ${LIBRARY_DIR}/libs/regex/src/instances.cpp - ${LIBRARY_DIR}/libs/regex/src/internals.hpp - ${LIBRARY_DIR}/libs/regex/src/posix_api.cpp - ${LIBRARY_DIR}/libs/regex/src/regex_debug.cpp - ${LIBRARY_DIR}/libs/regex/src/regex_raw_buffer.cpp - ${LIBRARY_DIR}/libs/regex/src/regex_traits_defaults.cpp - ${LIBRARY_DIR}/libs/regex/src/regex.cpp - ${LIBRARY_DIR}/libs/regex/src/static_mutex.cpp - ${LIBRARY_DIR}/libs/regex/src/usinstances.cpp - ${LIBRARY_DIR}/libs/regex/src/w32_regex_traits.cpp - ${LIBRARY_DIR}/libs/regex/src/wc_regex_traits.cpp - ${LIBRARY_DIR}/libs/regex/src/wide_posix_api.cpp - ${LIBRARY_DIR}/libs/regex/src/winstances.cpp + "${LIBRARY_DIR}/libs/regex/src/c_regex_traits.cpp" + "${LIBRARY_DIR}/libs/regex/src/cpp_regex_traits.cpp" + "${LIBRARY_DIR}/libs/regex/src/cregex.cpp" + "${LIBRARY_DIR}/libs/regex/src/fileiter.cpp" + "${LIBRARY_DIR}/libs/regex/src/icu.cpp" + "${LIBRARY_DIR}/libs/regex/src/instances.cpp" + "${LIBRARY_DIR}/libs/regex/src/internals.hpp" + "${LIBRARY_DIR}/libs/regex/src/posix_api.cpp" + "${LIBRARY_DIR}/libs/regex/src/regex_debug.cpp" + "${LIBRARY_DIR}/libs/regex/src/regex_raw_buffer.cpp" + "${LIBRARY_DIR}/libs/regex/src/regex_traits_defaults.cpp" + "${LIBRARY_DIR}/libs/regex/src/regex.cpp" + "${LIBRARY_DIR}/libs/regex/src/static_mutex.cpp" + "${LIBRARY_DIR}/libs/regex/src/usinstances.cpp" + "${LIBRARY_DIR}/libs/regex/src/w32_regex_traits.cpp" + "${LIBRARY_DIR}/libs/regex/src/wc_regex_traits.cpp" + "${LIBRARY_DIR}/libs/regex/src/wide_posix_api.cpp" + "${LIBRARY_DIR}/libs/regex/src/winstances.cpp" ) add_library (_boost_regex ${SRCS_REGEX}) @@ -149,7 +149,7 @@ if (NOT EXTERNAL_BOOST_FOUND) # system set (SRCS_SYSTEM - ${LIBRARY_DIR}/libs/system/src/error_code.cpp + "${LIBRARY_DIR}/libs/system/src/error_code.cpp" ) add_library (_boost_system ${SRCS_SYSTEM}) @@ -160,6 +160,12 @@ if (NOT EXTERNAL_BOOST_FOUND) enable_language(ASM) SET(ASM_OPTIONS "-x assembler-with-cpp") + set 
(SRCS_CONTEXT + "${LIBRARY_DIR}/libs/context/src/dummy.cpp" + "${LIBRARY_DIR}/libs/context/src/execution_context.cpp" + "${LIBRARY_DIR}/libs/context/src/posix/stack_traits.cpp" + ) + if (SANITIZE AND (SANITIZE STREQUAL "address" OR SANITIZE STREQUAL "thread")) add_compile_definitions(BOOST_USE_UCONTEXT) @@ -169,39 +175,34 @@ if (NOT EXTERNAL_BOOST_FOUND) add_compile_definitions(BOOST_USE_TSAN) endif() - set (SRCS_CONTEXT - ${LIBRARY_DIR}/libs/context/src/fiber.cpp - ${LIBRARY_DIR}/libs/context/src/continuation.cpp - ${LIBRARY_DIR}/libs/context/src/dummy.cpp - ${LIBRARY_DIR}/libs/context/src/execution_context.cpp - ${LIBRARY_DIR}/libs/context/src/posix/stack_traits.cpp + set (SRCS_CONTEXT ${SRCS_CONTEXT} + "${LIBRARY_DIR}/libs/context/src/fiber.cpp" + "${LIBRARY_DIR}/libs/context/src/continuation.cpp" ) - elseif (ARCH_ARM) - set (SRCS_CONTEXT - ${LIBRARY_DIR}/libs/context/src/asm/jump_arm64_aapcs_elf_gas.S - ${LIBRARY_DIR}/libs/context/src/asm/make_arm64_aapcs_elf_gas.S - ${LIBRARY_DIR}/libs/context/src/asm/ontop_arm64_aapcs_elf_gas.S - ${LIBRARY_DIR}/libs/context/src/dummy.cpp - ${LIBRARY_DIR}/libs/context/src/execution_context.cpp - ${LIBRARY_DIR}/libs/context/src/posix/stack_traits.cpp + endif() + if (ARCH_ARM) + set (SRCS_CONTEXT ${SRCS_CONTEXT} + "${LIBRARY_DIR}/libs/context/src/asm/jump_arm64_aapcs_elf_gas.S" + "${LIBRARY_DIR}/libs/context/src/asm/make_arm64_aapcs_elf_gas.S" + "${LIBRARY_DIR}/libs/context/src/asm/ontop_arm64_aapcs_elf_gas.S" + ) + elseif (ARCH_PPC64LE) + set (SRCS_CONTEXT ${SRCS_CONTEXT} + "${LIBRARY_DIR}/libs/context/src/asm/jump_ppc64_sysv_elf_gas.S" + "${LIBRARY_DIR}/libs/context/src/asm/make_ppc64_sysv_elf_gas.S" + "${LIBRARY_DIR}/libs/context/src/asm/ontop_ppc64_sysv_elf_gas.S" ) elseif(OS_DARWIN) - set (SRCS_CONTEXT - ${LIBRARY_DIR}/libs/context/src/asm/jump_x86_64_sysv_macho_gas.S - ${LIBRARY_DIR}/libs/context/src/asm/make_x86_64_sysv_macho_gas.S - ${LIBRARY_DIR}/libs/context/src/asm/ontop_x86_64_sysv_macho_gas.S - ${LIBRARY_DIR}/libs/context/src/dummy.cpp - ${LIBRARY_DIR}/libs/context/src/execution_context.cpp - ${LIBRARY_DIR}/libs/context/src/posix/stack_traits.cpp + set (SRCS_CONTEXT ${SRCS_CONTEXT} + "${LIBRARY_DIR}/libs/context/src/asm/jump_x86_64_sysv_macho_gas.S" + "${LIBRARY_DIR}/libs/context/src/asm/make_x86_64_sysv_macho_gas.S" + "${LIBRARY_DIR}/libs/context/src/asm/ontop_x86_64_sysv_macho_gas.S" ) else() - set (SRCS_CONTEXT - ${LIBRARY_DIR}/libs/context/src/asm/jump_x86_64_sysv_elf_gas.S - ${LIBRARY_DIR}/libs/context/src/asm/make_x86_64_sysv_elf_gas.S - ${LIBRARY_DIR}/libs/context/src/asm/ontop_x86_64_sysv_elf_gas.S - ${LIBRARY_DIR}/libs/context/src/dummy.cpp - ${LIBRARY_DIR}/libs/context/src/execution_context.cpp - ${LIBRARY_DIR}/libs/context/src/posix/stack_traits.cpp + set (SRCS_CONTEXT ${SRCS_CONTEXT} + "${LIBRARY_DIR}/libs/context/src/asm/jump_x86_64_sysv_elf_gas.S" + "${LIBRARY_DIR}/libs/context/src/asm/make_x86_64_sysv_elf_gas.S" + "${LIBRARY_DIR}/libs/context/src/asm/ontop_x86_64_sysv_elf_gas.S" ) endif() @@ -212,9 +213,9 @@ if (NOT EXTERNAL_BOOST_FOUND) # coroutine set (SRCS_COROUTINE - ${LIBRARY_DIR}/libs/coroutine/detail/coroutine_context.cpp - ${LIBRARY_DIR}/libs/coroutine/exceptions.cpp - ${LIBRARY_DIR}/libs/coroutine/posix/stack_traits.cpp + "${LIBRARY_DIR}/libs/coroutine/detail/coroutine_context.cpp" + "${LIBRARY_DIR}/libs/coroutine/exceptions.cpp" + "${LIBRARY_DIR}/libs/coroutine/posix/stack_traits.cpp" ) add_library (_boost_coroutine ${SRCS_COROUTINE}) add_library (boost::coroutine ALIAS _boost_coroutine) diff --git 
a/contrib/boringssl b/contrib/boringssl index fd9ce1a0406..83c1cda8a02 160000 --- a/contrib/boringssl +++ b/contrib/boringssl @@ -1 +1 @@ -Subproject commit fd9ce1a0406f571507068b9555d0b545b8a18332 +Subproject commit 83c1cda8a0224dc817cbad2966c7ed4acc35f02a diff --git a/contrib/boringssl-cmake/CMakeLists.txt b/contrib/boringssl-cmake/CMakeLists.txt index 017a8a64c0e..9d8c6ca6083 100644 --- a/contrib/boringssl-cmake/CMakeLists.txt +++ b/contrib/boringssl-cmake/CMakeLists.txt @@ -8,7 +8,7 @@ cmake_minimum_required(VERSION 3.0) project(BoringSSL LANGUAGES C CXX) -set(BORINGSSL_SOURCE_DIR ${ClickHouse_SOURCE_DIR}/contrib/boringssl) +set(BORINGSSL_SOURCE_DIR "${ClickHouse_SOURCE_DIR}/contrib/boringssl") if(CMAKE_CXX_COMPILER_ID MATCHES "Clang") set(CLANG 1) @@ -16,7 +16,7 @@ endif() if(CMAKE_COMPILER_IS_GNUCXX OR CLANG) set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11 -fvisibility=hidden -fno-common -fno-exceptions -fno-rtti") - if(APPLE) + if(APPLE AND CLANG) set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -stdlib=libc++") endif() @@ -130,7 +130,7 @@ if(BUILD_SHARED_LIBS) set(CMAKE_POSITION_INDEPENDENT_CODE TRUE) endif() -include_directories(${BORINGSSL_SOURCE_DIR}/include) +include_directories("${BORINGSSL_SOURCE_DIR}/include") set( CRYPTO_ios_aarch64_SOURCES @@ -192,8 +192,8 @@ set( linux-arm/crypto/fipsmodule/sha512-armv4.S linux-arm/crypto/fipsmodule/vpaes-armv7.S linux-arm/crypto/test/trampoline-armv4.S - ${BORINGSSL_SOURCE_DIR}/crypto/curve25519/asm/x25519-asm-arm.S - ${BORINGSSL_SOURCE_DIR}/crypto/poly1305/poly1305_arm_asm.S + "${BORINGSSL_SOURCE_DIR}/crypto/curve25519/asm/x25519-asm-arm.S" + "${BORINGSSL_SOURCE_DIR}/crypto/poly1305/poly1305_arm_asm.S" ) set( @@ -244,7 +244,7 @@ set( linux-x86_64/crypto/fipsmodule/x86_64-mont.S linux-x86_64/crypto/fipsmodule/x86_64-mont5.S linux-x86_64/crypto/test/trampoline-x86_64.S - ${BORINGSSL_SOURCE_DIR}/crypto/hrss/asm/poly_rq_mul.S + "${BORINGSSL_SOURCE_DIR}/crypto/hrss/asm/poly_rq_mul.S" ) set( @@ -348,300 +348,300 @@ add_library( ${CRYPTO_ARCH_SOURCES} err_data.c - ${BORINGSSL_SOURCE_DIR}/crypto/asn1/a_bitstr.c - ${BORINGSSL_SOURCE_DIR}/crypto/asn1/a_bool.c - ${BORINGSSL_SOURCE_DIR}/crypto/asn1/a_d2i_fp.c - ${BORINGSSL_SOURCE_DIR}/crypto/asn1/a_dup.c - ${BORINGSSL_SOURCE_DIR}/crypto/asn1/a_enum.c - ${BORINGSSL_SOURCE_DIR}/crypto/asn1/a_gentm.c - ${BORINGSSL_SOURCE_DIR}/crypto/asn1/a_i2d_fp.c - ${BORINGSSL_SOURCE_DIR}/crypto/asn1/a_int.c - ${BORINGSSL_SOURCE_DIR}/crypto/asn1/a_mbstr.c - ${BORINGSSL_SOURCE_DIR}/crypto/asn1/a_object.c - ${BORINGSSL_SOURCE_DIR}/crypto/asn1/a_octet.c - ${BORINGSSL_SOURCE_DIR}/crypto/asn1/a_print.c - ${BORINGSSL_SOURCE_DIR}/crypto/asn1/a_strnid.c - ${BORINGSSL_SOURCE_DIR}/crypto/asn1/a_time.c - ${BORINGSSL_SOURCE_DIR}/crypto/asn1/a_type.c - ${BORINGSSL_SOURCE_DIR}/crypto/asn1/a_utctm.c - ${BORINGSSL_SOURCE_DIR}/crypto/asn1/a_utf8.c - ${BORINGSSL_SOURCE_DIR}/crypto/asn1/asn1_lib.c - ${BORINGSSL_SOURCE_DIR}/crypto/asn1/asn1_par.c - ${BORINGSSL_SOURCE_DIR}/crypto/asn1/asn_pack.c - ${BORINGSSL_SOURCE_DIR}/crypto/asn1/f_enum.c - ${BORINGSSL_SOURCE_DIR}/crypto/asn1/f_int.c - ${BORINGSSL_SOURCE_DIR}/crypto/asn1/f_string.c - ${BORINGSSL_SOURCE_DIR}/crypto/asn1/tasn_dec.c - ${BORINGSSL_SOURCE_DIR}/crypto/asn1/tasn_enc.c - ${BORINGSSL_SOURCE_DIR}/crypto/asn1/tasn_fre.c - ${BORINGSSL_SOURCE_DIR}/crypto/asn1/tasn_new.c - ${BORINGSSL_SOURCE_DIR}/crypto/asn1/tasn_typ.c - ${BORINGSSL_SOURCE_DIR}/crypto/asn1/tasn_utl.c - ${BORINGSSL_SOURCE_DIR}/crypto/asn1/time_support.c - ${BORINGSSL_SOURCE_DIR}/crypto/base64/base64.c - 
${BORINGSSL_SOURCE_DIR}/crypto/bio/bio.c - ${BORINGSSL_SOURCE_DIR}/crypto/bio/bio_mem.c - ${BORINGSSL_SOURCE_DIR}/crypto/bio/connect.c - ${BORINGSSL_SOURCE_DIR}/crypto/bio/fd.c - ${BORINGSSL_SOURCE_DIR}/crypto/bio/file.c - ${BORINGSSL_SOURCE_DIR}/crypto/bio/hexdump.c - ${BORINGSSL_SOURCE_DIR}/crypto/bio/pair.c - ${BORINGSSL_SOURCE_DIR}/crypto/bio/printf.c - ${BORINGSSL_SOURCE_DIR}/crypto/bio/socket.c - ${BORINGSSL_SOURCE_DIR}/crypto/bio/socket_helper.c - ${BORINGSSL_SOURCE_DIR}/crypto/bn_extra/bn_asn1.c - ${BORINGSSL_SOURCE_DIR}/crypto/bn_extra/convert.c - ${BORINGSSL_SOURCE_DIR}/crypto/buf/buf.c - ${BORINGSSL_SOURCE_DIR}/crypto/bytestring/asn1_compat.c - ${BORINGSSL_SOURCE_DIR}/crypto/bytestring/ber.c - ${BORINGSSL_SOURCE_DIR}/crypto/bytestring/cbb.c - ${BORINGSSL_SOURCE_DIR}/crypto/bytestring/cbs.c - ${BORINGSSL_SOURCE_DIR}/crypto/bytestring/unicode.c - ${BORINGSSL_SOURCE_DIR}/crypto/chacha/chacha.c - ${BORINGSSL_SOURCE_DIR}/crypto/cipher_extra/cipher_extra.c - ${BORINGSSL_SOURCE_DIR}/crypto/cipher_extra/derive_key.c - ${BORINGSSL_SOURCE_DIR}/crypto/cipher_extra/e_aesccm.c - ${BORINGSSL_SOURCE_DIR}/crypto/cipher_extra/e_aesctrhmac.c - ${BORINGSSL_SOURCE_DIR}/crypto/cipher_extra/e_aesgcmsiv.c - ${BORINGSSL_SOURCE_DIR}/crypto/cipher_extra/e_chacha20poly1305.c - ${BORINGSSL_SOURCE_DIR}/crypto/cipher_extra/e_null.c - ${BORINGSSL_SOURCE_DIR}/crypto/cipher_extra/e_rc2.c - ${BORINGSSL_SOURCE_DIR}/crypto/cipher_extra/e_rc4.c - ${BORINGSSL_SOURCE_DIR}/crypto/cipher_extra/e_tls.c - ${BORINGSSL_SOURCE_DIR}/crypto/cipher_extra/tls_cbc.c - ${BORINGSSL_SOURCE_DIR}/crypto/cmac/cmac.c - ${BORINGSSL_SOURCE_DIR}/crypto/conf/conf.c - ${BORINGSSL_SOURCE_DIR}/crypto/cpu-aarch64-fuchsia.c - ${BORINGSSL_SOURCE_DIR}/crypto/cpu-aarch64-linux.c - ${BORINGSSL_SOURCE_DIR}/crypto/cpu-arm-linux.c - ${BORINGSSL_SOURCE_DIR}/crypto/cpu-arm.c - ${BORINGSSL_SOURCE_DIR}/crypto/cpu-intel.c - ${BORINGSSL_SOURCE_DIR}/crypto/cpu-ppc64le.c - ${BORINGSSL_SOURCE_DIR}/crypto/crypto.c - ${BORINGSSL_SOURCE_DIR}/crypto/curve25519/curve25519.c - ${BORINGSSL_SOURCE_DIR}/crypto/curve25519/spake25519.c - ${BORINGSSL_SOURCE_DIR}/crypto/dh_extra/dh_asn1.c - ${BORINGSSL_SOURCE_DIR}/crypto/dh_extra/params.c - ${BORINGSSL_SOURCE_DIR}/crypto/digest_extra/digest_extra.c - ${BORINGSSL_SOURCE_DIR}/crypto/dsa/dsa.c - ${BORINGSSL_SOURCE_DIR}/crypto/dsa/dsa_asn1.c - ${BORINGSSL_SOURCE_DIR}/crypto/ec_extra/ec_asn1.c - ${BORINGSSL_SOURCE_DIR}/crypto/ec_extra/ec_derive.c - ${BORINGSSL_SOURCE_DIR}/crypto/ec_extra/hash_to_curve.c - ${BORINGSSL_SOURCE_DIR}/crypto/ecdh_extra/ecdh_extra.c - ${BORINGSSL_SOURCE_DIR}/crypto/ecdsa_extra/ecdsa_asn1.c - ${BORINGSSL_SOURCE_DIR}/crypto/engine/engine.c - ${BORINGSSL_SOURCE_DIR}/crypto/err/err.c - ${BORINGSSL_SOURCE_DIR}/crypto/evp/digestsign.c - ${BORINGSSL_SOURCE_DIR}/crypto/evp/evp.c - ${BORINGSSL_SOURCE_DIR}/crypto/evp/evp_asn1.c - ${BORINGSSL_SOURCE_DIR}/crypto/evp/evp_ctx.c - ${BORINGSSL_SOURCE_DIR}/crypto/evp/p_dsa_asn1.c - ${BORINGSSL_SOURCE_DIR}/crypto/evp/p_ec.c - ${BORINGSSL_SOURCE_DIR}/crypto/evp/p_ec_asn1.c - ${BORINGSSL_SOURCE_DIR}/crypto/evp/p_ed25519.c - ${BORINGSSL_SOURCE_DIR}/crypto/evp/p_ed25519_asn1.c - ${BORINGSSL_SOURCE_DIR}/crypto/evp/p_rsa.c - ${BORINGSSL_SOURCE_DIR}/crypto/evp/p_rsa_asn1.c - ${BORINGSSL_SOURCE_DIR}/crypto/evp/p_x25519.c - ${BORINGSSL_SOURCE_DIR}/crypto/evp/p_x25519_asn1.c - ${BORINGSSL_SOURCE_DIR}/crypto/evp/pbkdf.c - ${BORINGSSL_SOURCE_DIR}/crypto/evp/print.c - ${BORINGSSL_SOURCE_DIR}/crypto/evp/scrypt.c - ${BORINGSSL_SOURCE_DIR}/crypto/evp/sign.c - 
${BORINGSSL_SOURCE_DIR}/crypto/ex_data.c - ${BORINGSSL_SOURCE_DIR}/crypto/fipsmodule/bcm.c - ${BORINGSSL_SOURCE_DIR}/crypto/fipsmodule/fips_shared_support.c - ${BORINGSSL_SOURCE_DIR}/crypto/fipsmodule/is_fips.c - ${BORINGSSL_SOURCE_DIR}/crypto/hkdf/hkdf.c - ${BORINGSSL_SOURCE_DIR}/crypto/hpke/hpke.c - ${BORINGSSL_SOURCE_DIR}/crypto/hrss/hrss.c - ${BORINGSSL_SOURCE_DIR}/crypto/lhash/lhash.c - ${BORINGSSL_SOURCE_DIR}/crypto/mem.c - ${BORINGSSL_SOURCE_DIR}/crypto/obj/obj.c - ${BORINGSSL_SOURCE_DIR}/crypto/obj/obj_xref.c - ${BORINGSSL_SOURCE_DIR}/crypto/pem/pem_all.c - ${BORINGSSL_SOURCE_DIR}/crypto/pem/pem_info.c - ${BORINGSSL_SOURCE_DIR}/crypto/pem/pem_lib.c - ${BORINGSSL_SOURCE_DIR}/crypto/pem/pem_oth.c - ${BORINGSSL_SOURCE_DIR}/crypto/pem/pem_pk8.c - ${BORINGSSL_SOURCE_DIR}/crypto/pem/pem_pkey.c - ${BORINGSSL_SOURCE_DIR}/crypto/pem/pem_x509.c - ${BORINGSSL_SOURCE_DIR}/crypto/pem/pem_xaux.c - ${BORINGSSL_SOURCE_DIR}/crypto/pkcs7/pkcs7.c - ${BORINGSSL_SOURCE_DIR}/crypto/pkcs7/pkcs7_x509.c - ${BORINGSSL_SOURCE_DIR}/crypto/pkcs8/p5_pbev2.c - ${BORINGSSL_SOURCE_DIR}/crypto/pkcs8/pkcs8.c - ${BORINGSSL_SOURCE_DIR}/crypto/pkcs8/pkcs8_x509.c - ${BORINGSSL_SOURCE_DIR}/crypto/poly1305/poly1305.c - ${BORINGSSL_SOURCE_DIR}/crypto/poly1305/poly1305_arm.c - ${BORINGSSL_SOURCE_DIR}/crypto/poly1305/poly1305_vec.c - ${BORINGSSL_SOURCE_DIR}/crypto/pool/pool.c - ${BORINGSSL_SOURCE_DIR}/crypto/rand_extra/deterministic.c - ${BORINGSSL_SOURCE_DIR}/crypto/rand_extra/forkunsafe.c - ${BORINGSSL_SOURCE_DIR}/crypto/rand_extra/fuchsia.c - ${BORINGSSL_SOURCE_DIR}/crypto/rand_extra/passive.c - ${BORINGSSL_SOURCE_DIR}/crypto/rand_extra/rand_extra.c - ${BORINGSSL_SOURCE_DIR}/crypto/rand_extra/windows.c - ${BORINGSSL_SOURCE_DIR}/crypto/rc4/rc4.c - ${BORINGSSL_SOURCE_DIR}/crypto/refcount_c11.c - ${BORINGSSL_SOURCE_DIR}/crypto/refcount_lock.c - ${BORINGSSL_SOURCE_DIR}/crypto/rsa_extra/rsa_asn1.c - ${BORINGSSL_SOURCE_DIR}/crypto/rsa_extra/rsa_print.c - ${BORINGSSL_SOURCE_DIR}/crypto/siphash/siphash.c - ${BORINGSSL_SOURCE_DIR}/crypto/stack/stack.c - ${BORINGSSL_SOURCE_DIR}/crypto/thread.c - ${BORINGSSL_SOURCE_DIR}/crypto/thread_none.c - ${BORINGSSL_SOURCE_DIR}/crypto/thread_pthread.c - ${BORINGSSL_SOURCE_DIR}/crypto/thread_win.c - ${BORINGSSL_SOURCE_DIR}/crypto/trust_token/pmbtoken.c - ${BORINGSSL_SOURCE_DIR}/crypto/trust_token/trust_token.c - ${BORINGSSL_SOURCE_DIR}/crypto/trust_token/voprf.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/a_digest.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/a_sign.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/a_strex.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/a_verify.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/algorithm.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/asn1_gen.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/by_dir.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/by_file.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/i2d_pr.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/rsa_pss.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/t_crl.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/t_req.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/t_x509.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/t_x509a.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/x509.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/x509_att.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/x509_cmp.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/x509_d2.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/x509_def.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/x509_ext.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/x509_lu.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/x509_obj.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/x509_r2x.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/x509_req.c - 
${BORINGSSL_SOURCE_DIR}/crypto/x509/x509_set.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/x509_trs.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/x509_txt.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/x509_v3.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/x509_vfy.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/x509_vpm.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/x509cset.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/x509name.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/x509rset.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/x509spki.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/x_algor.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/x_all.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/x_attrib.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/x_crl.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/x_exten.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/x_info.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/x_name.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/x_pkey.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/x_pubkey.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/x_req.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/x_sig.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/x_spki.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/x_val.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/x_x509.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509/x_x509a.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509v3/pcy_cache.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509v3/pcy_data.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509v3/pcy_lib.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509v3/pcy_map.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509v3/pcy_node.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509v3/pcy_tree.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_akey.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_akeya.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_alt.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_bcons.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_bitst.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_conf.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_cpols.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_crld.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_enum.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_extku.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_genn.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_ia5.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_info.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_int.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_lib.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_ncons.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_ocsp.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_pci.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_pcia.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_pcons.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_pmaps.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_prn.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_purp.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_skey.c - ${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_utl.c + "${BORINGSSL_SOURCE_DIR}/crypto/asn1/a_bitstr.c" + "${BORINGSSL_SOURCE_DIR}/crypto/asn1/a_bool.c" + "${BORINGSSL_SOURCE_DIR}/crypto/asn1/a_d2i_fp.c" + "${BORINGSSL_SOURCE_DIR}/crypto/asn1/a_dup.c" + "${BORINGSSL_SOURCE_DIR}/crypto/asn1/a_enum.c" + "${BORINGSSL_SOURCE_DIR}/crypto/asn1/a_gentm.c" + "${BORINGSSL_SOURCE_DIR}/crypto/asn1/a_i2d_fp.c" + "${BORINGSSL_SOURCE_DIR}/crypto/asn1/a_int.c" + "${BORINGSSL_SOURCE_DIR}/crypto/asn1/a_mbstr.c" + "${BORINGSSL_SOURCE_DIR}/crypto/asn1/a_object.c" + "${BORINGSSL_SOURCE_DIR}/crypto/asn1/a_octet.c" + "${BORINGSSL_SOURCE_DIR}/crypto/asn1/a_print.c" + "${BORINGSSL_SOURCE_DIR}/crypto/asn1/a_strnid.c" + "${BORINGSSL_SOURCE_DIR}/crypto/asn1/a_time.c" + "${BORINGSSL_SOURCE_DIR}/crypto/asn1/a_type.c" + "${BORINGSSL_SOURCE_DIR}/crypto/asn1/a_utctm.c" + 
"${BORINGSSL_SOURCE_DIR}/crypto/asn1/a_utf8.c" + "${BORINGSSL_SOURCE_DIR}/crypto/asn1/asn1_lib.c" + "${BORINGSSL_SOURCE_DIR}/crypto/asn1/asn1_par.c" + "${BORINGSSL_SOURCE_DIR}/crypto/asn1/asn_pack.c" + "${BORINGSSL_SOURCE_DIR}/crypto/asn1/f_enum.c" + "${BORINGSSL_SOURCE_DIR}/crypto/asn1/f_int.c" + "${BORINGSSL_SOURCE_DIR}/crypto/asn1/f_string.c" + "${BORINGSSL_SOURCE_DIR}/crypto/asn1/tasn_dec.c" + "${BORINGSSL_SOURCE_DIR}/crypto/asn1/tasn_enc.c" + "${BORINGSSL_SOURCE_DIR}/crypto/asn1/tasn_fre.c" + "${BORINGSSL_SOURCE_DIR}/crypto/asn1/tasn_new.c" + "${BORINGSSL_SOURCE_DIR}/crypto/asn1/tasn_typ.c" + "${BORINGSSL_SOURCE_DIR}/crypto/asn1/tasn_utl.c" + "${BORINGSSL_SOURCE_DIR}/crypto/asn1/time_support.c" + "${BORINGSSL_SOURCE_DIR}/crypto/base64/base64.c" + "${BORINGSSL_SOURCE_DIR}/crypto/bio/bio.c" + "${BORINGSSL_SOURCE_DIR}/crypto/bio/bio_mem.c" + "${BORINGSSL_SOURCE_DIR}/crypto/bio/connect.c" + "${BORINGSSL_SOURCE_DIR}/crypto/bio/fd.c" + "${BORINGSSL_SOURCE_DIR}/crypto/bio/file.c" + "${BORINGSSL_SOURCE_DIR}/crypto/bio/hexdump.c" + "${BORINGSSL_SOURCE_DIR}/crypto/bio/pair.c" + "${BORINGSSL_SOURCE_DIR}/crypto/bio/printf.c" + "${BORINGSSL_SOURCE_DIR}/crypto/bio/socket.c" + "${BORINGSSL_SOURCE_DIR}/crypto/bio/socket_helper.c" + "${BORINGSSL_SOURCE_DIR}/crypto/bn_extra/bn_asn1.c" + "${BORINGSSL_SOURCE_DIR}/crypto/bn_extra/convert.c" + "${BORINGSSL_SOURCE_DIR}/crypto/buf/buf.c" + "${BORINGSSL_SOURCE_DIR}/crypto/bytestring/asn1_compat.c" + "${BORINGSSL_SOURCE_DIR}/crypto/bytestring/ber.c" + "${BORINGSSL_SOURCE_DIR}/crypto/bytestring/cbb.c" + "${BORINGSSL_SOURCE_DIR}/crypto/bytestring/cbs.c" + "${BORINGSSL_SOURCE_DIR}/crypto/bytestring/unicode.c" + "${BORINGSSL_SOURCE_DIR}/crypto/chacha/chacha.c" + "${BORINGSSL_SOURCE_DIR}/crypto/cipher_extra/cipher_extra.c" + "${BORINGSSL_SOURCE_DIR}/crypto/cipher_extra/derive_key.c" + "${BORINGSSL_SOURCE_DIR}/crypto/cipher_extra/e_aesccm.c" + "${BORINGSSL_SOURCE_DIR}/crypto/cipher_extra/e_aesctrhmac.c" + "${BORINGSSL_SOURCE_DIR}/crypto/cipher_extra/e_aesgcmsiv.c" + "${BORINGSSL_SOURCE_DIR}/crypto/cipher_extra/e_chacha20poly1305.c" + "${BORINGSSL_SOURCE_DIR}/crypto/cipher_extra/e_null.c" + "${BORINGSSL_SOURCE_DIR}/crypto/cipher_extra/e_rc2.c" + "${BORINGSSL_SOURCE_DIR}/crypto/cipher_extra/e_rc4.c" + "${BORINGSSL_SOURCE_DIR}/crypto/cipher_extra/e_tls.c" + "${BORINGSSL_SOURCE_DIR}/crypto/cipher_extra/tls_cbc.c" + "${BORINGSSL_SOURCE_DIR}/crypto/cmac/cmac.c" + "${BORINGSSL_SOURCE_DIR}/crypto/conf/conf.c" + "${BORINGSSL_SOURCE_DIR}/crypto/cpu-aarch64-fuchsia.c" + "${BORINGSSL_SOURCE_DIR}/crypto/cpu-aarch64-linux.c" + "${BORINGSSL_SOURCE_DIR}/crypto/cpu-arm-linux.c" + "${BORINGSSL_SOURCE_DIR}/crypto/cpu-arm.c" + "${BORINGSSL_SOURCE_DIR}/crypto/cpu-intel.c" + "${BORINGSSL_SOURCE_DIR}/crypto/cpu-ppc64le.c" + "${BORINGSSL_SOURCE_DIR}/crypto/crypto.c" + "${BORINGSSL_SOURCE_DIR}/crypto/curve25519/curve25519.c" + "${BORINGSSL_SOURCE_DIR}/crypto/curve25519/spake25519.c" + "${BORINGSSL_SOURCE_DIR}/crypto/dh_extra/dh_asn1.c" + "${BORINGSSL_SOURCE_DIR}/crypto/dh_extra/params.c" + "${BORINGSSL_SOURCE_DIR}/crypto/digest_extra/digest_extra.c" + "${BORINGSSL_SOURCE_DIR}/crypto/dsa/dsa.c" + "${BORINGSSL_SOURCE_DIR}/crypto/dsa/dsa_asn1.c" + "${BORINGSSL_SOURCE_DIR}/crypto/ec_extra/ec_asn1.c" + "${BORINGSSL_SOURCE_DIR}/crypto/ec_extra/ec_derive.c" + "${BORINGSSL_SOURCE_DIR}/crypto/ec_extra/hash_to_curve.c" + "${BORINGSSL_SOURCE_DIR}/crypto/ecdh_extra/ecdh_extra.c" + "${BORINGSSL_SOURCE_DIR}/crypto/ecdsa_extra/ecdsa_asn1.c" + "${BORINGSSL_SOURCE_DIR}/crypto/engine/engine.c" + 
"${BORINGSSL_SOURCE_DIR}/crypto/err/err.c" + "${BORINGSSL_SOURCE_DIR}/crypto/evp/digestsign.c" + "${BORINGSSL_SOURCE_DIR}/crypto/evp/evp.c" + "${BORINGSSL_SOURCE_DIR}/crypto/evp/evp_asn1.c" + "${BORINGSSL_SOURCE_DIR}/crypto/evp/evp_ctx.c" + "${BORINGSSL_SOURCE_DIR}/crypto/evp/p_dsa_asn1.c" + "${BORINGSSL_SOURCE_DIR}/crypto/evp/p_ec.c" + "${BORINGSSL_SOURCE_DIR}/crypto/evp/p_ec_asn1.c" + "${BORINGSSL_SOURCE_DIR}/crypto/evp/p_ed25519.c" + "${BORINGSSL_SOURCE_DIR}/crypto/evp/p_ed25519_asn1.c" + "${BORINGSSL_SOURCE_DIR}/crypto/evp/p_rsa.c" + "${BORINGSSL_SOURCE_DIR}/crypto/evp/p_rsa_asn1.c" + "${BORINGSSL_SOURCE_DIR}/crypto/evp/p_x25519.c" + "${BORINGSSL_SOURCE_DIR}/crypto/evp/p_x25519_asn1.c" + "${BORINGSSL_SOURCE_DIR}/crypto/evp/pbkdf.c" + "${BORINGSSL_SOURCE_DIR}/crypto/evp/print.c" + "${BORINGSSL_SOURCE_DIR}/crypto/evp/scrypt.c" + "${BORINGSSL_SOURCE_DIR}/crypto/evp/sign.c" + "${BORINGSSL_SOURCE_DIR}/crypto/ex_data.c" + "${BORINGSSL_SOURCE_DIR}/crypto/fipsmodule/bcm.c" + "${BORINGSSL_SOURCE_DIR}/crypto/fipsmodule/fips_shared_support.c" + "${BORINGSSL_SOURCE_DIR}/crypto/fipsmodule/is_fips.c" + "${BORINGSSL_SOURCE_DIR}/crypto/hkdf/hkdf.c" + "${BORINGSSL_SOURCE_DIR}/crypto/hpke/hpke.c" + "${BORINGSSL_SOURCE_DIR}/crypto/hrss/hrss.c" + "${BORINGSSL_SOURCE_DIR}/crypto/lhash/lhash.c" + "${BORINGSSL_SOURCE_DIR}/crypto/mem.c" + "${BORINGSSL_SOURCE_DIR}/crypto/obj/obj.c" + "${BORINGSSL_SOURCE_DIR}/crypto/obj/obj_xref.c" + "${BORINGSSL_SOURCE_DIR}/crypto/pem/pem_all.c" + "${BORINGSSL_SOURCE_DIR}/crypto/pem/pem_info.c" + "${BORINGSSL_SOURCE_DIR}/crypto/pem/pem_lib.c" + "${BORINGSSL_SOURCE_DIR}/crypto/pem/pem_oth.c" + "${BORINGSSL_SOURCE_DIR}/crypto/pem/pem_pk8.c" + "${BORINGSSL_SOURCE_DIR}/crypto/pem/pem_pkey.c" + "${BORINGSSL_SOURCE_DIR}/crypto/pem/pem_x509.c" + "${BORINGSSL_SOURCE_DIR}/crypto/pem/pem_xaux.c" + "${BORINGSSL_SOURCE_DIR}/crypto/pkcs7/pkcs7.c" + "${BORINGSSL_SOURCE_DIR}/crypto/pkcs7/pkcs7_x509.c" + "${BORINGSSL_SOURCE_DIR}/crypto/pkcs8/p5_pbev2.c" + "${BORINGSSL_SOURCE_DIR}/crypto/pkcs8/pkcs8.c" + "${BORINGSSL_SOURCE_DIR}/crypto/pkcs8/pkcs8_x509.c" + "${BORINGSSL_SOURCE_DIR}/crypto/poly1305/poly1305.c" + "${BORINGSSL_SOURCE_DIR}/crypto/poly1305/poly1305_arm.c" + "${BORINGSSL_SOURCE_DIR}/crypto/poly1305/poly1305_vec.c" + "${BORINGSSL_SOURCE_DIR}/crypto/pool/pool.c" + "${BORINGSSL_SOURCE_DIR}/crypto/rand_extra/deterministic.c" + "${BORINGSSL_SOURCE_DIR}/crypto/rand_extra/forkunsafe.c" + "${BORINGSSL_SOURCE_DIR}/crypto/rand_extra/fuchsia.c" + "${BORINGSSL_SOURCE_DIR}/crypto/rand_extra/passive.c" + "${BORINGSSL_SOURCE_DIR}/crypto/rand_extra/rand_extra.c" + "${BORINGSSL_SOURCE_DIR}/crypto/rand_extra/windows.c" + "${BORINGSSL_SOURCE_DIR}/crypto/rc4/rc4.c" + "${BORINGSSL_SOURCE_DIR}/crypto/refcount_c11.c" + "${BORINGSSL_SOURCE_DIR}/crypto/refcount_lock.c" + "${BORINGSSL_SOURCE_DIR}/crypto/rsa_extra/rsa_asn1.c" + "${BORINGSSL_SOURCE_DIR}/crypto/rsa_extra/rsa_print.c" + "${BORINGSSL_SOURCE_DIR}/crypto/siphash/siphash.c" + "${BORINGSSL_SOURCE_DIR}/crypto/stack/stack.c" + "${BORINGSSL_SOURCE_DIR}/crypto/thread.c" + "${BORINGSSL_SOURCE_DIR}/crypto/thread_none.c" + "${BORINGSSL_SOURCE_DIR}/crypto/thread_pthread.c" + "${BORINGSSL_SOURCE_DIR}/crypto/thread_win.c" + "${BORINGSSL_SOURCE_DIR}/crypto/trust_token/pmbtoken.c" + "${BORINGSSL_SOURCE_DIR}/crypto/trust_token/trust_token.c" + "${BORINGSSL_SOURCE_DIR}/crypto/trust_token/voprf.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/a_digest.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/a_sign.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/a_strex.c" + 
"${BORINGSSL_SOURCE_DIR}/crypto/x509/a_verify.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/algorithm.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/asn1_gen.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/by_dir.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/by_file.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/i2d_pr.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/rsa_pss.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/t_crl.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/t_req.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/t_x509.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/t_x509a.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/x509.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/x509_att.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/x509_cmp.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/x509_d2.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/x509_def.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/x509_ext.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/x509_lu.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/x509_obj.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/x509_r2x.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/x509_req.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/x509_set.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/x509_trs.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/x509_txt.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/x509_v3.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/x509_vfy.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/x509_vpm.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/x509cset.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/x509name.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/x509rset.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/x509spki.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/x_algor.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/x_all.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/x_attrib.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/x_crl.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/x_exten.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/x_info.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/x_name.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/x_pkey.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/x_pubkey.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/x_req.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/x_sig.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/x_spki.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/x_val.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/x_x509.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509/x_x509a.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509v3/pcy_cache.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509v3/pcy_data.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509v3/pcy_lib.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509v3/pcy_map.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509v3/pcy_node.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509v3/pcy_tree.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_akey.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_akeya.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_alt.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_bcons.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_bitst.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_conf.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_cpols.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_crld.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_enum.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_extku.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_genn.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_ia5.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_info.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_int.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_lib.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_ncons.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_ocsp.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_pci.c" + 
"${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_pcia.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_pcons.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_pmaps.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_prn.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_purp.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_skey.c" + "${BORINGSSL_SOURCE_DIR}/crypto/x509v3/v3_utl.c" ) add_library( ssl - ${BORINGSSL_SOURCE_DIR}/ssl/bio_ssl.cc - ${BORINGSSL_SOURCE_DIR}/ssl/d1_both.cc - ${BORINGSSL_SOURCE_DIR}/ssl/d1_lib.cc - ${BORINGSSL_SOURCE_DIR}/ssl/d1_pkt.cc - ${BORINGSSL_SOURCE_DIR}/ssl/d1_srtp.cc - ${BORINGSSL_SOURCE_DIR}/ssl/dtls_method.cc - ${BORINGSSL_SOURCE_DIR}/ssl/dtls_record.cc - ${BORINGSSL_SOURCE_DIR}/ssl/handoff.cc - ${BORINGSSL_SOURCE_DIR}/ssl/handshake.cc - ${BORINGSSL_SOURCE_DIR}/ssl/handshake_client.cc - ${BORINGSSL_SOURCE_DIR}/ssl/handshake_server.cc - ${BORINGSSL_SOURCE_DIR}/ssl/s3_both.cc - ${BORINGSSL_SOURCE_DIR}/ssl/s3_lib.cc - ${BORINGSSL_SOURCE_DIR}/ssl/s3_pkt.cc - ${BORINGSSL_SOURCE_DIR}/ssl/ssl_aead_ctx.cc - ${BORINGSSL_SOURCE_DIR}/ssl/ssl_asn1.cc - ${BORINGSSL_SOURCE_DIR}/ssl/ssl_buffer.cc - ${BORINGSSL_SOURCE_DIR}/ssl/ssl_cert.cc - ${BORINGSSL_SOURCE_DIR}/ssl/ssl_cipher.cc - ${BORINGSSL_SOURCE_DIR}/ssl/ssl_file.cc - ${BORINGSSL_SOURCE_DIR}/ssl/ssl_key_share.cc - ${BORINGSSL_SOURCE_DIR}/ssl/ssl_lib.cc - ${BORINGSSL_SOURCE_DIR}/ssl/ssl_privkey.cc - ${BORINGSSL_SOURCE_DIR}/ssl/ssl_session.cc - ${BORINGSSL_SOURCE_DIR}/ssl/ssl_stat.cc - ${BORINGSSL_SOURCE_DIR}/ssl/ssl_transcript.cc - ${BORINGSSL_SOURCE_DIR}/ssl/ssl_versions.cc - ${BORINGSSL_SOURCE_DIR}/ssl/ssl_x509.cc - ${BORINGSSL_SOURCE_DIR}/ssl/t1_enc.cc - ${BORINGSSL_SOURCE_DIR}/ssl/t1_lib.cc - ${BORINGSSL_SOURCE_DIR}/ssl/tls13_both.cc - ${BORINGSSL_SOURCE_DIR}/ssl/tls13_client.cc - ${BORINGSSL_SOURCE_DIR}/ssl/tls13_enc.cc - ${BORINGSSL_SOURCE_DIR}/ssl/tls13_server.cc - ${BORINGSSL_SOURCE_DIR}/ssl/tls_method.cc - ${BORINGSSL_SOURCE_DIR}/ssl/tls_record.cc + "${BORINGSSL_SOURCE_DIR}/ssl/bio_ssl.cc" + "${BORINGSSL_SOURCE_DIR}/ssl/d1_both.cc" + "${BORINGSSL_SOURCE_DIR}/ssl/d1_lib.cc" + "${BORINGSSL_SOURCE_DIR}/ssl/d1_pkt.cc" + "${BORINGSSL_SOURCE_DIR}/ssl/d1_srtp.cc" + "${BORINGSSL_SOURCE_DIR}/ssl/dtls_method.cc" + "${BORINGSSL_SOURCE_DIR}/ssl/dtls_record.cc" + "${BORINGSSL_SOURCE_DIR}/ssl/handoff.cc" + "${BORINGSSL_SOURCE_DIR}/ssl/handshake.cc" + "${BORINGSSL_SOURCE_DIR}/ssl/handshake_client.cc" + "${BORINGSSL_SOURCE_DIR}/ssl/handshake_server.cc" + "${BORINGSSL_SOURCE_DIR}/ssl/s3_both.cc" + "${BORINGSSL_SOURCE_DIR}/ssl/s3_lib.cc" + "${BORINGSSL_SOURCE_DIR}/ssl/s3_pkt.cc" + "${BORINGSSL_SOURCE_DIR}/ssl/ssl_aead_ctx.cc" + "${BORINGSSL_SOURCE_DIR}/ssl/ssl_asn1.cc" + "${BORINGSSL_SOURCE_DIR}/ssl/ssl_buffer.cc" + "${BORINGSSL_SOURCE_DIR}/ssl/ssl_cert.cc" + "${BORINGSSL_SOURCE_DIR}/ssl/ssl_cipher.cc" + "${BORINGSSL_SOURCE_DIR}/ssl/ssl_file.cc" + "${BORINGSSL_SOURCE_DIR}/ssl/ssl_key_share.cc" + "${BORINGSSL_SOURCE_DIR}/ssl/ssl_lib.cc" + "${BORINGSSL_SOURCE_DIR}/ssl/ssl_privkey.cc" + "${BORINGSSL_SOURCE_DIR}/ssl/ssl_session.cc" + "${BORINGSSL_SOURCE_DIR}/ssl/ssl_stat.cc" + "${BORINGSSL_SOURCE_DIR}/ssl/ssl_transcript.cc" + "${BORINGSSL_SOURCE_DIR}/ssl/ssl_versions.cc" + "${BORINGSSL_SOURCE_DIR}/ssl/ssl_x509.cc" + "${BORINGSSL_SOURCE_DIR}/ssl/t1_enc.cc" + "${BORINGSSL_SOURCE_DIR}/ssl/t1_lib.cc" + "${BORINGSSL_SOURCE_DIR}/ssl/tls13_both.cc" + "${BORINGSSL_SOURCE_DIR}/ssl/tls13_client.cc" + "${BORINGSSL_SOURCE_DIR}/ssl/tls13_enc.cc" + "${BORINGSSL_SOURCE_DIR}/ssl/tls13_server.cc" + "${BORINGSSL_SOURCE_DIR}/ssl/tls_method.cc" + 
"${BORINGSSL_SOURCE_DIR}/ssl/tls_record.cc" - ${BORINGSSL_SOURCE_DIR}/decrepit/ssl/ssl_decrepit.c - ${BORINGSSL_SOURCE_DIR}/decrepit/cfb/cfb.c + "${BORINGSSL_SOURCE_DIR}/decrepit/ssl/ssl_decrepit.c" + "${BORINGSSL_SOURCE_DIR}/decrepit/cfb/cfb.c" ) add_executable( bssl - ${BORINGSSL_SOURCE_DIR}/tool/args.cc - ${BORINGSSL_SOURCE_DIR}/tool/ciphers.cc - ${BORINGSSL_SOURCE_DIR}/tool/client.cc - ${BORINGSSL_SOURCE_DIR}/tool/const.cc - ${BORINGSSL_SOURCE_DIR}/tool/digest.cc - ${BORINGSSL_SOURCE_DIR}/tool/fd.cc - ${BORINGSSL_SOURCE_DIR}/tool/file.cc - ${BORINGSSL_SOURCE_DIR}/tool/generate_ed25519.cc - ${BORINGSSL_SOURCE_DIR}/tool/genrsa.cc - ${BORINGSSL_SOURCE_DIR}/tool/pkcs12.cc - ${BORINGSSL_SOURCE_DIR}/tool/rand.cc - ${BORINGSSL_SOURCE_DIR}/tool/server.cc - ${BORINGSSL_SOURCE_DIR}/tool/sign.cc - ${BORINGSSL_SOURCE_DIR}/tool/speed.cc - ${BORINGSSL_SOURCE_DIR}/tool/tool.cc - ${BORINGSSL_SOURCE_DIR}/tool/transport_common.cc + "${BORINGSSL_SOURCE_DIR}/tool/args.cc" + "${BORINGSSL_SOURCE_DIR}/tool/ciphers.cc" + "${BORINGSSL_SOURCE_DIR}/tool/client.cc" + "${BORINGSSL_SOURCE_DIR}/tool/const.cc" + "${BORINGSSL_SOURCE_DIR}/tool/digest.cc" + "${BORINGSSL_SOURCE_DIR}/tool/fd.cc" + "${BORINGSSL_SOURCE_DIR}/tool/file.cc" + "${BORINGSSL_SOURCE_DIR}/tool/generate_ed25519.cc" + "${BORINGSSL_SOURCE_DIR}/tool/genrsa.cc" + "${BORINGSSL_SOURCE_DIR}/tool/pkcs12.cc" + "${BORINGSSL_SOURCE_DIR}/tool/rand.cc" + "${BORINGSSL_SOURCE_DIR}/tool/server.cc" + "${BORINGSSL_SOURCE_DIR}/tool/sign.cc" + "${BORINGSSL_SOURCE_DIR}/tool/speed.cc" + "${BORINGSSL_SOURCE_DIR}/tool/tool.cc" + "${BORINGSSL_SOURCE_DIR}/tool/transport_common.cc" ) target_link_libraries(ssl crypto) @@ -655,7 +655,7 @@ if(WIN32) target_link_libraries(bssl ws2_32) endif() -target_include_directories(crypto SYSTEM PUBLIC ${BORINGSSL_SOURCE_DIR}/include) -target_include_directories(ssl SYSTEM PUBLIC ${BORINGSSL_SOURCE_DIR}/include) +target_include_directories(crypto SYSTEM PUBLIC "${BORINGSSL_SOURCE_DIR}/include") +target_include_directories(ssl SYSTEM PUBLIC "${BORINGSSL_SOURCE_DIR}/include") target_compile_options(crypto PRIVATE -Wno-gnu-anonymous-struct) diff --git a/contrib/brotli-cmake/CMakeLists.txt b/contrib/brotli-cmake/CMakeLists.txt index 4c5f584de9d..7293cae0665 100644 --- a/contrib/brotli-cmake/CMakeLists.txt +++ b/contrib/brotli-cmake/CMakeLists.txt @@ -1,41 +1,41 @@ -set(BROTLI_SOURCE_DIR ${ClickHouse_SOURCE_DIR}/contrib/brotli/c) -set(BROTLI_BINARY_DIR ${ClickHouse_BINARY_DIR}/contrib/brotli/c) +set(BROTLI_SOURCE_DIR "${ClickHouse_SOURCE_DIR}/contrib/brotli/c") +set(BROTLI_BINARY_DIR "${ClickHouse_BINARY_DIR}/contrib/brotli/c") set(SRCS - ${BROTLI_SOURCE_DIR}/enc/command.c - ${BROTLI_SOURCE_DIR}/enc/fast_log.c - ${BROTLI_SOURCE_DIR}/dec/bit_reader.c - ${BROTLI_SOURCE_DIR}/dec/state.c - ${BROTLI_SOURCE_DIR}/dec/huffman.c - ${BROTLI_SOURCE_DIR}/dec/decode.c - ${BROTLI_SOURCE_DIR}/enc/encode.c - ${BROTLI_SOURCE_DIR}/enc/dictionary_hash.c - ${BROTLI_SOURCE_DIR}/enc/cluster.c - ${BROTLI_SOURCE_DIR}/enc/entropy_encode.c - ${BROTLI_SOURCE_DIR}/enc/literal_cost.c - ${BROTLI_SOURCE_DIR}/enc/compress_fragment_two_pass.c - ${BROTLI_SOURCE_DIR}/enc/static_dict.c - ${BROTLI_SOURCE_DIR}/enc/compress_fragment.c - ${BROTLI_SOURCE_DIR}/enc/block_splitter.c - ${BROTLI_SOURCE_DIR}/enc/backward_references_hq.c - ${BROTLI_SOURCE_DIR}/enc/histogram.c - ${BROTLI_SOURCE_DIR}/enc/brotli_bit_stream.c - ${BROTLI_SOURCE_DIR}/enc/utf8_util.c - ${BROTLI_SOURCE_DIR}/enc/encoder_dict.c - ${BROTLI_SOURCE_DIR}/enc/backward_references.c - ${BROTLI_SOURCE_DIR}/enc/bit_cost.c - 
${BROTLI_SOURCE_DIR}/enc/metablock.c - ${BROTLI_SOURCE_DIR}/enc/memory.c - ${BROTLI_SOURCE_DIR}/common/dictionary.c - ${BROTLI_SOURCE_DIR}/common/transform.c - ${BROTLI_SOURCE_DIR}/common/platform.c - ${BROTLI_SOURCE_DIR}/common/context.c - ${BROTLI_SOURCE_DIR}/common/constants.c + "${BROTLI_SOURCE_DIR}/enc/command.c" + "${BROTLI_SOURCE_DIR}/enc/fast_log.c" + "${BROTLI_SOURCE_DIR}/dec/bit_reader.c" + "${BROTLI_SOURCE_DIR}/dec/state.c" + "${BROTLI_SOURCE_DIR}/dec/huffman.c" + "${BROTLI_SOURCE_DIR}/dec/decode.c" + "${BROTLI_SOURCE_DIR}/enc/encode.c" + "${BROTLI_SOURCE_DIR}/enc/dictionary_hash.c" + "${BROTLI_SOURCE_DIR}/enc/cluster.c" + "${BROTLI_SOURCE_DIR}/enc/entropy_encode.c" + "${BROTLI_SOURCE_DIR}/enc/literal_cost.c" + "${BROTLI_SOURCE_DIR}/enc/compress_fragment_two_pass.c" + "${BROTLI_SOURCE_DIR}/enc/static_dict.c" + "${BROTLI_SOURCE_DIR}/enc/compress_fragment.c" + "${BROTLI_SOURCE_DIR}/enc/block_splitter.c" + "${BROTLI_SOURCE_DIR}/enc/backward_references_hq.c" + "${BROTLI_SOURCE_DIR}/enc/histogram.c" + "${BROTLI_SOURCE_DIR}/enc/brotli_bit_stream.c" + "${BROTLI_SOURCE_DIR}/enc/utf8_util.c" + "${BROTLI_SOURCE_DIR}/enc/encoder_dict.c" + "${BROTLI_SOURCE_DIR}/enc/backward_references.c" + "${BROTLI_SOURCE_DIR}/enc/bit_cost.c" + "${BROTLI_SOURCE_DIR}/enc/metablock.c" + "${BROTLI_SOURCE_DIR}/enc/memory.c" + "${BROTLI_SOURCE_DIR}/common/dictionary.c" + "${BROTLI_SOURCE_DIR}/common/transform.c" + "${BROTLI_SOURCE_DIR}/common/platform.c" + "${BROTLI_SOURCE_DIR}/common/context.c" + "${BROTLI_SOURCE_DIR}/common/constants.c" ) add_library(brotli ${SRCS}) -target_include_directories(brotli PUBLIC ${BROTLI_SOURCE_DIR}/include) +target_include_directories(brotli PUBLIC "${BROTLI_SOURCE_DIR}/include") if(M_LIBRARY) target_link_libraries(brotli PRIVATE ${M_LIBRARY}) diff --git a/contrib/capnproto-cmake/CMakeLists.txt b/contrib/capnproto-cmake/CMakeLists.txt index 949481e7ef5..9f6e076cc7d 100644 --- a/contrib/capnproto-cmake/CMakeLists.txt +++ b/contrib/capnproto-cmake/CMakeLists.txt @@ -1,53 +1,53 @@ -set (CAPNPROTO_SOURCE_DIR ${ClickHouse_SOURCE_DIR}/contrib/capnproto/c++/src) +set (CAPNPROTO_SOURCE_DIR "${ClickHouse_SOURCE_DIR}/contrib/capnproto/c++/src") set (CMAKE_CXX_STANDARD 17) set (KJ_SRCS - ${CAPNPROTO_SOURCE_DIR}/kj/array.c++ - ${CAPNPROTO_SOURCE_DIR}/kj/common.c++ - ${CAPNPROTO_SOURCE_DIR}/kj/debug.c++ - ${CAPNPROTO_SOURCE_DIR}/kj/exception.c++ - ${CAPNPROTO_SOURCE_DIR}/kj/io.c++ - ${CAPNPROTO_SOURCE_DIR}/kj/memory.c++ - ${CAPNPROTO_SOURCE_DIR}/kj/mutex.c++ - ${CAPNPROTO_SOURCE_DIR}/kj/string.c++ - ${CAPNPROTO_SOURCE_DIR}/kj/hash.c++ - ${CAPNPROTO_SOURCE_DIR}/kj/table.c++ - ${CAPNPROTO_SOURCE_DIR}/kj/thread.c++ - ${CAPNPROTO_SOURCE_DIR}/kj/main.c++ - ${CAPNPROTO_SOURCE_DIR}/kj/arena.c++ - ${CAPNPROTO_SOURCE_DIR}/kj/test-helpers.c++ - ${CAPNPROTO_SOURCE_DIR}/kj/units.c++ - ${CAPNPROTO_SOURCE_DIR}/kj/encoding.c++ + "${CAPNPROTO_SOURCE_DIR}/kj/array.c++" + "${CAPNPROTO_SOURCE_DIR}/kj/common.c++" + "${CAPNPROTO_SOURCE_DIR}/kj/debug.c++" + "${CAPNPROTO_SOURCE_DIR}/kj/exception.c++" + "${CAPNPROTO_SOURCE_DIR}/kj/io.c++" + "${CAPNPROTO_SOURCE_DIR}/kj/memory.c++" + "${CAPNPROTO_SOURCE_DIR}/kj/mutex.c++" + "${CAPNPROTO_SOURCE_DIR}/kj/string.c++" + "${CAPNPROTO_SOURCE_DIR}/kj/hash.c++" + "${CAPNPROTO_SOURCE_DIR}/kj/table.c++" + "${CAPNPROTO_SOURCE_DIR}/kj/thread.c++" + "${CAPNPROTO_SOURCE_DIR}/kj/main.c++" + "${CAPNPROTO_SOURCE_DIR}/kj/arena.c++" + "${CAPNPROTO_SOURCE_DIR}/kj/test-helpers.c++" + "${CAPNPROTO_SOURCE_DIR}/kj/units.c++" + "${CAPNPROTO_SOURCE_DIR}/kj/encoding.c++" - 
${CAPNPROTO_SOURCE_DIR}/kj/refcount.c++ - ${CAPNPROTO_SOURCE_DIR}/kj/string-tree.c++ - ${CAPNPROTO_SOURCE_DIR}/kj/time.c++ - ${CAPNPROTO_SOURCE_DIR}/kj/filesystem.c++ - ${CAPNPROTO_SOURCE_DIR}/kj/filesystem-disk-unix.c++ - ${CAPNPROTO_SOURCE_DIR}/kj/filesystem-disk-win32.c++ - ${CAPNPROTO_SOURCE_DIR}/kj/parse/char.c++ + "${CAPNPROTO_SOURCE_DIR}/kj/refcount.c++" + "${CAPNPROTO_SOURCE_DIR}/kj/string-tree.c++" + "${CAPNPROTO_SOURCE_DIR}/kj/time.c++" + "${CAPNPROTO_SOURCE_DIR}/kj/filesystem.c++" + "${CAPNPROTO_SOURCE_DIR}/kj/filesystem-disk-unix.c++" + "${CAPNPROTO_SOURCE_DIR}/kj/filesystem-disk-win32.c++" + "${CAPNPROTO_SOURCE_DIR}/kj/parse/char.c++" ) add_library(kj ${KJ_SRCS}) target_include_directories(kj SYSTEM PUBLIC ${CAPNPROTO_SOURCE_DIR}) set (CAPNP_SRCS - ${CAPNPROTO_SOURCE_DIR}/capnp/c++.capnp.c++ - ${CAPNPROTO_SOURCE_DIR}/capnp/blob.c++ - ${CAPNPROTO_SOURCE_DIR}/capnp/arena.c++ - ${CAPNPROTO_SOURCE_DIR}/capnp/layout.c++ - ${CAPNPROTO_SOURCE_DIR}/capnp/list.c++ - ${CAPNPROTO_SOURCE_DIR}/capnp/any.c++ - ${CAPNPROTO_SOURCE_DIR}/capnp/message.c++ - ${CAPNPROTO_SOURCE_DIR}/capnp/schema.capnp.c++ - ${CAPNPROTO_SOURCE_DIR}/capnp/serialize.c++ - ${CAPNPROTO_SOURCE_DIR}/capnp/serialize-packed.c++ + "${CAPNPROTO_SOURCE_DIR}/capnp/c++.capnp.c++" + "${CAPNPROTO_SOURCE_DIR}/capnp/blob.c++" + "${CAPNPROTO_SOURCE_DIR}/capnp/arena.c++" + "${CAPNPROTO_SOURCE_DIR}/capnp/layout.c++" + "${CAPNPROTO_SOURCE_DIR}/capnp/list.c++" + "${CAPNPROTO_SOURCE_DIR}/capnp/any.c++" + "${CAPNPROTO_SOURCE_DIR}/capnp/message.c++" + "${CAPNPROTO_SOURCE_DIR}/capnp/schema.capnp.c++" + "${CAPNPROTO_SOURCE_DIR}/capnp/serialize.c++" + "${CAPNPROTO_SOURCE_DIR}/capnp/serialize-packed.c++" - ${CAPNPROTO_SOURCE_DIR}/capnp/schema.c++ - ${CAPNPROTO_SOURCE_DIR}/capnp/schema-loader.c++ - ${CAPNPROTO_SOURCE_DIR}/capnp/dynamic.c++ - ${CAPNPROTO_SOURCE_DIR}/capnp/stringify.c++ + "${CAPNPROTO_SOURCE_DIR}/capnp/schema.c++" + "${CAPNPROTO_SOURCE_DIR}/capnp/schema-loader.c++" + "${CAPNPROTO_SOURCE_DIR}/capnp/dynamic.c++" + "${CAPNPROTO_SOURCE_DIR}/capnp/stringify.c++" ) add_library(capnp ${CAPNP_SRCS}) @@ -57,16 +57,16 @@ set_target_properties(capnp target_link_libraries(capnp PUBLIC kj) set (CAPNPC_SRCS - ${CAPNPROTO_SOURCE_DIR}/capnp/compiler/type-id.c++ - ${CAPNPROTO_SOURCE_DIR}/capnp/compiler/error-reporter.c++ - ${CAPNPROTO_SOURCE_DIR}/capnp/compiler/lexer.capnp.c++ - ${CAPNPROTO_SOURCE_DIR}/capnp/compiler/lexer.c++ - ${CAPNPROTO_SOURCE_DIR}/capnp/compiler/grammar.capnp.c++ - ${CAPNPROTO_SOURCE_DIR}/capnp/compiler/parser.c++ - ${CAPNPROTO_SOURCE_DIR}/capnp/compiler/node-translator.c++ - ${CAPNPROTO_SOURCE_DIR}/capnp/compiler/compiler.c++ - ${CAPNPROTO_SOURCE_DIR}/capnp/schema-parser.c++ - ${CAPNPROTO_SOURCE_DIR}/capnp/serialize-text.c++ + "${CAPNPROTO_SOURCE_DIR}/capnp/compiler/type-id.c++" + "${CAPNPROTO_SOURCE_DIR}/capnp/compiler/error-reporter.c++" + "${CAPNPROTO_SOURCE_DIR}/capnp/compiler/lexer.capnp.c++" + "${CAPNPROTO_SOURCE_DIR}/capnp/compiler/lexer.c++" + "${CAPNPROTO_SOURCE_DIR}/capnp/compiler/grammar.capnp.c++" + "${CAPNPROTO_SOURCE_DIR}/capnp/compiler/parser.c++" + "${CAPNPROTO_SOURCE_DIR}/capnp/compiler/node-translator.c++" + "${CAPNPROTO_SOURCE_DIR}/capnp/compiler/compiler.c++" + "${CAPNPROTO_SOURCE_DIR}/capnp/schema-parser.c++" + "${CAPNPROTO_SOURCE_DIR}/capnp/serialize-text.c++" ) add_library(capnpc ${CAPNPC_SRCS}) diff --git a/contrib/cctz-cmake/CMakeLists.txt b/contrib/cctz-cmake/CMakeLists.txt index 90e33dc9f62..93413693796 100644 --- a/contrib/cctz-cmake/CMakeLists.txt +++ b/contrib/cctz-cmake/CMakeLists.txt @@ 
-40,23 +40,23 @@ endif() if (NOT EXTERNAL_CCTZ_LIBRARY_FOUND OR NOT EXTERNAL_CCTZ_LIBRARY_WORKS) set(USE_INTERNAL_CCTZ_LIBRARY 1) - set(LIBRARY_DIR ${ClickHouse_SOURCE_DIR}/contrib/cctz) + set(LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/cctz") set (SRCS - ${LIBRARY_DIR}/src/civil_time_detail.cc - ${LIBRARY_DIR}/src/time_zone_fixed.cc - ${LIBRARY_DIR}/src/time_zone_format.cc - ${LIBRARY_DIR}/src/time_zone_if.cc - ${LIBRARY_DIR}/src/time_zone_impl.cc - ${LIBRARY_DIR}/src/time_zone_info.cc - ${LIBRARY_DIR}/src/time_zone_libc.cc - ${LIBRARY_DIR}/src/time_zone_lookup.cc - ${LIBRARY_DIR}/src/time_zone_posix.cc - ${LIBRARY_DIR}/src/zone_info_source.cc + "${LIBRARY_DIR}/src/civil_time_detail.cc" + "${LIBRARY_DIR}/src/time_zone_fixed.cc" + "${LIBRARY_DIR}/src/time_zone_format.cc" + "${LIBRARY_DIR}/src/time_zone_if.cc" + "${LIBRARY_DIR}/src/time_zone_impl.cc" + "${LIBRARY_DIR}/src/time_zone_info.cc" + "${LIBRARY_DIR}/src/time_zone_libc.cc" + "${LIBRARY_DIR}/src/time_zone_lookup.cc" + "${LIBRARY_DIR}/src/time_zone_posix.cc" + "${LIBRARY_DIR}/src/zone_info_source.cc" ) add_library (cctz ${SRCS}) - target_include_directories (cctz PUBLIC ${LIBRARY_DIR}/include) + target_include_directories (cctz PUBLIC "${LIBRARY_DIR}/include") if (OS_FREEBSD) # yes, need linux, because bsd check inside linux in time_zone_libc.cc:24 @@ -73,8 +73,8 @@ if (NOT EXTERNAL_CCTZ_LIBRARY_FOUND OR NOT EXTERNAL_CCTZ_LIBRARY_WORKS) # Build a library with embedded tzdata if (OS_LINUX) # get the list of timezones from tzdata shipped with cctz - set(TZDIR ${LIBRARY_DIR}/testdata/zoneinfo) - file(STRINGS ${LIBRARY_DIR}/testdata/version TZDATA_VERSION) + set(TZDIR "${LIBRARY_DIR}/testdata/zoneinfo") + file(STRINGS "${LIBRARY_DIR}/testdata/version" TZDATA_VERSION) set_property(GLOBAL PROPERTY TZDATA_VERSION_PROP "${TZDATA_VERSION}") message(STATUS "Packaging with tzdata version: ${TZDATA_VERSION}") @@ -97,12 +97,19 @@ if (NOT EXTERNAL_CCTZ_LIBRARY_FOUND OR NOT EXTERNAL_CCTZ_LIBRARY_WORKS) set(TZ_OBJS ${TZ_OBJS} ${TZ_OBJ}) # https://stackoverflow.com/questions/14776463/compile-and-add-an-object-file-from-a-binary-with-cmake - add_custom_command(OUTPUT ${TZ_OBJ} - COMMAND cp ${TZDIR}/${TIMEZONE} ${CMAKE_CURRENT_BINARY_DIR}/${TIMEZONE_ID} - COMMAND cd ${CMAKE_CURRENT_BINARY_DIR} && ${OBJCOPY_PATH} -I binary ${OBJCOPY_ARCH_OPTIONS} + # PPC64LE fails to do this with objcopy, use ld or lld instead + if (ARCH_PPC64LE) + add_custom_command(OUTPUT ${TZ_OBJ} + COMMAND cp "${TZDIR}/${TIMEZONE}" "${CMAKE_CURRENT_BINARY_DIR}/${TIMEZONE_ID}" + COMMAND cd ${CMAKE_CURRENT_BINARY_DIR} && ${CMAKE_LINKER} -m elf64lppc -r -b binary -o ${TZ_OBJ} ${TIMEZONE_ID} + COMMAND rm "${CMAKE_CURRENT_BINARY_DIR}/${TIMEZONE_ID}") + else() + add_custom_command(OUTPUT ${TZ_OBJ} + COMMAND cp "${TZDIR}/${TIMEZONE}" "${CMAKE_CURRENT_BINARY_DIR}/${TIMEZONE_ID}" + COMMAND cd ${CMAKE_CURRENT_BINARY_DIR} && ${OBJCOPY_PATH} -I binary ${OBJCOPY_ARCH_OPTIONS} --rename-section .data=.rodata,alloc,load,readonly,data,contents ${TIMEZONE_ID} ${TZ_OBJ} - COMMAND rm ${CMAKE_CURRENT_BINARY_DIR}/${TIMEZONE_ID}) - + COMMAND rm "${CMAKE_CURRENT_BINARY_DIR}/${TIMEZONE_ID}") + endif() set_source_files_properties(${TZ_OBJ} PROPERTIES EXTERNAL_OBJECT true GENERATED true) endforeach(TIMEZONE) diff --git a/contrib/cppkafka-cmake/CMakeLists.txt b/contrib/cppkafka-cmake/CMakeLists.txt index 9f512974948..0bc33ada529 100644 --- a/contrib/cppkafka-cmake/CMakeLists.txt +++ b/contrib/cppkafka-cmake/CMakeLists.txt @@ -1,25 +1,25 @@ -set(LIBRARY_DIR ${ClickHouse_SOURCE_DIR}/contrib/cppkafka)
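Context on the cctz tzdata hunk above: the technique converts each raw timezone file into a relocatable object whose bytes become addressable through linker-generated symbols, so the data can be baked into the binary. A minimal standalone sketch of the same objcopy approach, assuming GNU objcopy targeting x86-64 and a hypothetical input file data.bin (the _binary_* symbol names are derived by objcopy from the input file name; none of this is part of the patch itself):

# Sketch only, not part of the patch above.
add_custom_command(OUTPUT "${CMAKE_CURRENT_BINARY_DIR}/data.o"
    # -I binary: treat the input as raw bytes; -O/-B pick the output object format.
    COMMAND ${CMAKE_OBJCOPY} -I binary -O elf64-x86-64 -B i386:x86-64
            --rename-section .data=.rodata,alloc,load,readonly,data,contents
            data.bin data.o
    WORKING_DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}"
    DEPENDS "${CMAKE_CURRENT_BINARY_DIR}/data.bin")
# C/C++ code can then reference the embedded payload via:
#   extern const char _binary_data_bin_start[];
#   extern const char _binary_data_bin_end[];

The ARCH_PPC64LE branch in the hunk reaches the same result with `ld -r -b binary`, since per the patch comment objcopy fails to do this for that target.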
+set(LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/cppkafka") set(SRCS - ${LIBRARY_DIR}/src/buffer.cpp - ${LIBRARY_DIR}/src/configuration_option.cpp - ${LIBRARY_DIR}/src/configuration.cpp - ${LIBRARY_DIR}/src/consumer.cpp - ${LIBRARY_DIR}/src/error.cpp - ${LIBRARY_DIR}/src/event.cpp - ${LIBRARY_DIR}/src/exceptions.cpp - ${LIBRARY_DIR}/src/group_information.cpp - ${LIBRARY_DIR}/src/kafka_handle_base.cpp - ${LIBRARY_DIR}/src/message_internal.cpp - ${LIBRARY_DIR}/src/message_timestamp.cpp - ${LIBRARY_DIR}/src/message.cpp - ${LIBRARY_DIR}/src/metadata.cpp - ${LIBRARY_DIR}/src/producer.cpp - ${LIBRARY_DIR}/src/queue.cpp - ${LIBRARY_DIR}/src/topic_configuration.cpp - ${LIBRARY_DIR}/src/topic_partition_list.cpp - ${LIBRARY_DIR}/src/topic_partition.cpp - ${LIBRARY_DIR}/src/topic.cpp + "${LIBRARY_DIR}/src/buffer.cpp" + "${LIBRARY_DIR}/src/configuration_option.cpp" + "${LIBRARY_DIR}/src/configuration.cpp" + "${LIBRARY_DIR}/src/consumer.cpp" + "${LIBRARY_DIR}/src/error.cpp" + "${LIBRARY_DIR}/src/event.cpp" + "${LIBRARY_DIR}/src/exceptions.cpp" + "${LIBRARY_DIR}/src/group_information.cpp" + "${LIBRARY_DIR}/src/kafka_handle_base.cpp" + "${LIBRARY_DIR}/src/message_internal.cpp" + "${LIBRARY_DIR}/src/message_timestamp.cpp" + "${LIBRARY_DIR}/src/message.cpp" + "${LIBRARY_DIR}/src/metadata.cpp" + "${LIBRARY_DIR}/src/producer.cpp" + "${LIBRARY_DIR}/src/queue.cpp" + "${LIBRARY_DIR}/src/topic_configuration.cpp" + "${LIBRARY_DIR}/src/topic_partition_list.cpp" + "${LIBRARY_DIR}/src/topic_partition.cpp" + "${LIBRARY_DIR}/src/topic.cpp" ) add_library(cppkafka ${SRCS}) @@ -29,5 +29,5 @@ target_link_libraries(cppkafka ${RDKAFKA_LIBRARY} boost::headers_only ) -target_include_directories(cppkafka PRIVATE ${LIBRARY_DIR}/include/cppkafka) -target_include_directories(cppkafka SYSTEM BEFORE PUBLIC ${LIBRARY_DIR}/include) +target_include_directories(cppkafka PRIVATE "${LIBRARY_DIR}/include/cppkafka") +target_include_directories(cppkafka SYSTEM BEFORE PUBLIC "${LIBRARY_DIR}/include") diff --git a/contrib/croaring-cmake/CMakeLists.txt b/contrib/croaring-cmake/CMakeLists.txt index 8a8ca62e051..f4a5d8a01dc 100644 --- a/contrib/croaring-cmake/CMakeLists.txt +++ b/contrib/croaring-cmake/CMakeLists.txt @@ -1,26 +1,26 @@ -set(LIBRARY_DIR ${ClickHouse_SOURCE_DIR}/contrib/croaring) +set(LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/croaring") set(SRCS - ${LIBRARY_DIR}/src/array_util.c - ${LIBRARY_DIR}/src/bitset_util.c - ${LIBRARY_DIR}/src/containers/array.c - ${LIBRARY_DIR}/src/containers/bitset.c - ${LIBRARY_DIR}/src/containers/containers.c - ${LIBRARY_DIR}/src/containers/convert.c - ${LIBRARY_DIR}/src/containers/mixed_intersection.c - ${LIBRARY_DIR}/src/containers/mixed_union.c - ${LIBRARY_DIR}/src/containers/mixed_equal.c - ${LIBRARY_DIR}/src/containers/mixed_subset.c - ${LIBRARY_DIR}/src/containers/mixed_negation.c - ${LIBRARY_DIR}/src/containers/mixed_xor.c - ${LIBRARY_DIR}/src/containers/mixed_andnot.c - ${LIBRARY_DIR}/src/containers/run.c - ${LIBRARY_DIR}/src/roaring.c - ${LIBRARY_DIR}/src/roaring_priority_queue.c - ${LIBRARY_DIR}/src/roaring_array.c) + "${LIBRARY_DIR}/src/array_util.c" + "${LIBRARY_DIR}/src/bitset_util.c" + "${LIBRARY_DIR}/src/containers/array.c" + "${LIBRARY_DIR}/src/containers/bitset.c" + "${LIBRARY_DIR}/src/containers/containers.c" + "${LIBRARY_DIR}/src/containers/convert.c" + "${LIBRARY_DIR}/src/containers/mixed_intersection.c" + "${LIBRARY_DIR}/src/containers/mixed_union.c" + "${LIBRARY_DIR}/src/containers/mixed_equal.c" + "${LIBRARY_DIR}/src/containers/mixed_subset.c" + 
"${LIBRARY_DIR}/src/containers/mixed_negation.c" + "${LIBRARY_DIR}/src/containers/mixed_xor.c" + "${LIBRARY_DIR}/src/containers/mixed_andnot.c" + "${LIBRARY_DIR}/src/containers/run.c" + "${LIBRARY_DIR}/src/roaring.c" + "${LIBRARY_DIR}/src/roaring_priority_queue.c" + "${LIBRARY_DIR}/src/roaring_array.c") add_library(roaring ${SRCS}) -target_include_directories(roaring PRIVATE ${LIBRARY_DIR}/include/roaring) -target_include_directories(roaring SYSTEM BEFORE PUBLIC ${LIBRARY_DIR}/include) -target_include_directories(roaring SYSTEM BEFORE PUBLIC ${LIBRARY_DIR}/cpp) +target_include_directories(roaring PRIVATE "${LIBRARY_DIR}/include/roaring") +target_include_directories(roaring SYSTEM BEFORE PUBLIC "${LIBRARY_DIR}/include") +target_include_directories(roaring SYSTEM BEFORE PUBLIC "${LIBRARY_DIR}/cpp") diff --git a/contrib/curl-cmake/CMakeLists.txt b/contrib/curl-cmake/CMakeLists.txt index a24c9fa8765..1f7449af914 100644 --- a/contrib/curl-cmake/CMakeLists.txt +++ b/contrib/curl-cmake/CMakeLists.txt @@ -5,143 +5,143 @@ endif() set (LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/curl") set (SRCS - ${LIBRARY_DIR}/lib/file.c - ${LIBRARY_DIR}/lib/timeval.c - ${LIBRARY_DIR}/lib/base64.c - ${LIBRARY_DIR}/lib/hostip.c - ${LIBRARY_DIR}/lib/progress.c - ${LIBRARY_DIR}/lib/formdata.c - ${LIBRARY_DIR}/lib/cookie.c - ${LIBRARY_DIR}/lib/http.c - ${LIBRARY_DIR}/lib/sendf.c - ${LIBRARY_DIR}/lib/url.c - ${LIBRARY_DIR}/lib/dict.c - ${LIBRARY_DIR}/lib/if2ip.c - ${LIBRARY_DIR}/lib/speedcheck.c - ${LIBRARY_DIR}/lib/ldap.c - ${LIBRARY_DIR}/lib/version.c - ${LIBRARY_DIR}/lib/getenv.c - ${LIBRARY_DIR}/lib/escape.c - ${LIBRARY_DIR}/lib/mprintf.c - ${LIBRARY_DIR}/lib/telnet.c - ${LIBRARY_DIR}/lib/netrc.c - ${LIBRARY_DIR}/lib/getinfo.c - ${LIBRARY_DIR}/lib/transfer.c - ${LIBRARY_DIR}/lib/strcase.c - ${LIBRARY_DIR}/lib/easy.c - ${LIBRARY_DIR}/lib/security.c - ${LIBRARY_DIR}/lib/curl_fnmatch.c - ${LIBRARY_DIR}/lib/fileinfo.c - ${LIBRARY_DIR}/lib/wildcard.c - ${LIBRARY_DIR}/lib/krb5.c - ${LIBRARY_DIR}/lib/memdebug.c - ${LIBRARY_DIR}/lib/http_chunks.c - ${LIBRARY_DIR}/lib/strtok.c - ${LIBRARY_DIR}/lib/connect.c - ${LIBRARY_DIR}/lib/llist.c - ${LIBRARY_DIR}/lib/hash.c - ${LIBRARY_DIR}/lib/multi.c - ${LIBRARY_DIR}/lib/content_encoding.c - ${LIBRARY_DIR}/lib/share.c - ${LIBRARY_DIR}/lib/http_digest.c - ${LIBRARY_DIR}/lib/md4.c - ${LIBRARY_DIR}/lib/md5.c - ${LIBRARY_DIR}/lib/http_negotiate.c - ${LIBRARY_DIR}/lib/inet_pton.c - ${LIBRARY_DIR}/lib/strtoofft.c - ${LIBRARY_DIR}/lib/strerror.c - ${LIBRARY_DIR}/lib/amigaos.c - ${LIBRARY_DIR}/lib/hostasyn.c - ${LIBRARY_DIR}/lib/hostip4.c - ${LIBRARY_DIR}/lib/hostip6.c - ${LIBRARY_DIR}/lib/hostsyn.c - ${LIBRARY_DIR}/lib/inet_ntop.c - ${LIBRARY_DIR}/lib/parsedate.c - ${LIBRARY_DIR}/lib/select.c - ${LIBRARY_DIR}/lib/splay.c - ${LIBRARY_DIR}/lib/strdup.c - ${LIBRARY_DIR}/lib/socks.c - ${LIBRARY_DIR}/lib/curl_addrinfo.c - ${LIBRARY_DIR}/lib/socks_gssapi.c - ${LIBRARY_DIR}/lib/socks_sspi.c - ${LIBRARY_DIR}/lib/curl_sspi.c - ${LIBRARY_DIR}/lib/slist.c - ${LIBRARY_DIR}/lib/nonblock.c - ${LIBRARY_DIR}/lib/curl_memrchr.c - ${LIBRARY_DIR}/lib/imap.c - ${LIBRARY_DIR}/lib/pop3.c - ${LIBRARY_DIR}/lib/smtp.c - ${LIBRARY_DIR}/lib/pingpong.c - ${LIBRARY_DIR}/lib/rtsp.c - ${LIBRARY_DIR}/lib/curl_threads.c - ${LIBRARY_DIR}/lib/warnless.c - ${LIBRARY_DIR}/lib/hmac.c - ${LIBRARY_DIR}/lib/curl_rtmp.c - ${LIBRARY_DIR}/lib/openldap.c - ${LIBRARY_DIR}/lib/curl_gethostname.c - ${LIBRARY_DIR}/lib/gopher.c - ${LIBRARY_DIR}/lib/idn_win32.c - ${LIBRARY_DIR}/lib/http_proxy.c - ${LIBRARY_DIR}/lib/non-ascii.c - 
${LIBRARY_DIR}/lib/asyn-thread.c - ${LIBRARY_DIR}/lib/curl_gssapi.c - ${LIBRARY_DIR}/lib/http_ntlm.c - ${LIBRARY_DIR}/lib/curl_ntlm_wb.c - ${LIBRARY_DIR}/lib/curl_ntlm_core.c - ${LIBRARY_DIR}/lib/curl_sasl.c - ${LIBRARY_DIR}/lib/rand.c - ${LIBRARY_DIR}/lib/curl_multibyte.c - ${LIBRARY_DIR}/lib/hostcheck.c - ${LIBRARY_DIR}/lib/conncache.c - ${LIBRARY_DIR}/lib/dotdot.c - ${LIBRARY_DIR}/lib/x509asn1.c - ${LIBRARY_DIR}/lib/http2.c - ${LIBRARY_DIR}/lib/smb.c - ${LIBRARY_DIR}/lib/curl_endian.c - ${LIBRARY_DIR}/lib/curl_des.c - ${LIBRARY_DIR}/lib/system_win32.c - ${LIBRARY_DIR}/lib/mime.c - ${LIBRARY_DIR}/lib/sha256.c - ${LIBRARY_DIR}/lib/setopt.c - ${LIBRARY_DIR}/lib/curl_path.c - ${LIBRARY_DIR}/lib/curl_ctype.c - ${LIBRARY_DIR}/lib/curl_range.c - ${LIBRARY_DIR}/lib/psl.c - ${LIBRARY_DIR}/lib/doh.c - ${LIBRARY_DIR}/lib/urlapi.c - ${LIBRARY_DIR}/lib/curl_get_line.c - ${LIBRARY_DIR}/lib/altsvc.c - ${LIBRARY_DIR}/lib/socketpair.c - ${LIBRARY_DIR}/lib/vauth/vauth.c - ${LIBRARY_DIR}/lib/vauth/cleartext.c - ${LIBRARY_DIR}/lib/vauth/cram.c - ${LIBRARY_DIR}/lib/vauth/digest.c - ${LIBRARY_DIR}/lib/vauth/digest_sspi.c - ${LIBRARY_DIR}/lib/vauth/krb5_gssapi.c - ${LIBRARY_DIR}/lib/vauth/krb5_sspi.c - ${LIBRARY_DIR}/lib/vauth/ntlm.c - ${LIBRARY_DIR}/lib/vauth/ntlm_sspi.c - ${LIBRARY_DIR}/lib/vauth/oauth2.c - ${LIBRARY_DIR}/lib/vauth/spnego_gssapi.c - ${LIBRARY_DIR}/lib/vauth/spnego_sspi.c - ${LIBRARY_DIR}/lib/vtls/openssl.c - ${LIBRARY_DIR}/lib/vtls/gtls.c - ${LIBRARY_DIR}/lib/vtls/vtls.c - ${LIBRARY_DIR}/lib/vtls/nss.c - ${LIBRARY_DIR}/lib/vtls/polarssl.c - ${LIBRARY_DIR}/lib/vtls/polarssl_threadlock.c - ${LIBRARY_DIR}/lib/vtls/wolfssl.c - ${LIBRARY_DIR}/lib/vtls/schannel.c - ${LIBRARY_DIR}/lib/vtls/schannel_verify.c - ${LIBRARY_DIR}/lib/vtls/sectransp.c - ${LIBRARY_DIR}/lib/vtls/gskit.c - ${LIBRARY_DIR}/lib/vtls/mbedtls.c - ${LIBRARY_DIR}/lib/vtls/mesalink.c - ${LIBRARY_DIR}/lib/vtls/bearssl.c - ${LIBRARY_DIR}/lib/vquic/ngtcp2.c - ${LIBRARY_DIR}/lib/vquic/quiche.c - ${LIBRARY_DIR}/lib/vssh/libssh2.c - ${LIBRARY_DIR}/lib/vssh/libssh.c + "${LIBRARY_DIR}/lib/file.c" + "${LIBRARY_DIR}/lib/timeval.c" + "${LIBRARY_DIR}/lib/base64.c" + "${LIBRARY_DIR}/lib/hostip.c" + "${LIBRARY_DIR}/lib/progress.c" + "${LIBRARY_DIR}/lib/formdata.c" + "${LIBRARY_DIR}/lib/cookie.c" + "${LIBRARY_DIR}/lib/http.c" + "${LIBRARY_DIR}/lib/sendf.c" + "${LIBRARY_DIR}/lib/url.c" + "${LIBRARY_DIR}/lib/dict.c" + "${LIBRARY_DIR}/lib/if2ip.c" + "${LIBRARY_DIR}/lib/speedcheck.c" + "${LIBRARY_DIR}/lib/ldap.c" + "${LIBRARY_DIR}/lib/version.c" + "${LIBRARY_DIR}/lib/getenv.c" + "${LIBRARY_DIR}/lib/escape.c" + "${LIBRARY_DIR}/lib/mprintf.c" + "${LIBRARY_DIR}/lib/telnet.c" + "${LIBRARY_DIR}/lib/netrc.c" + "${LIBRARY_DIR}/lib/getinfo.c" + "${LIBRARY_DIR}/lib/transfer.c" + "${LIBRARY_DIR}/lib/strcase.c" + "${LIBRARY_DIR}/lib/easy.c" + "${LIBRARY_DIR}/lib/security.c" + "${LIBRARY_DIR}/lib/curl_fnmatch.c" + "${LIBRARY_DIR}/lib/fileinfo.c" + "${LIBRARY_DIR}/lib/wildcard.c" + "${LIBRARY_DIR}/lib/krb5.c" + "${LIBRARY_DIR}/lib/memdebug.c" + "${LIBRARY_DIR}/lib/http_chunks.c" + "${LIBRARY_DIR}/lib/strtok.c" + "${LIBRARY_DIR}/lib/connect.c" + "${LIBRARY_DIR}/lib/llist.c" + "${LIBRARY_DIR}/lib/hash.c" + "${LIBRARY_DIR}/lib/multi.c" + "${LIBRARY_DIR}/lib/content_encoding.c" + "${LIBRARY_DIR}/lib/share.c" + "${LIBRARY_DIR}/lib/http_digest.c" + "${LIBRARY_DIR}/lib/md4.c" + "${LIBRARY_DIR}/lib/md5.c" + "${LIBRARY_DIR}/lib/http_negotiate.c" + "${LIBRARY_DIR}/lib/inet_pton.c" + "${LIBRARY_DIR}/lib/strtoofft.c" + "${LIBRARY_DIR}/lib/strerror.c" + 
"${LIBRARY_DIR}/lib/amigaos.c" + "${LIBRARY_DIR}/lib/hostasyn.c" + "${LIBRARY_DIR}/lib/hostip4.c" + "${LIBRARY_DIR}/lib/hostip6.c" + "${LIBRARY_DIR}/lib/hostsyn.c" + "${LIBRARY_DIR}/lib/inet_ntop.c" + "${LIBRARY_DIR}/lib/parsedate.c" + "${LIBRARY_DIR}/lib/select.c" + "${LIBRARY_DIR}/lib/splay.c" + "${LIBRARY_DIR}/lib/strdup.c" + "${LIBRARY_DIR}/lib/socks.c" + "${LIBRARY_DIR}/lib/curl_addrinfo.c" + "${LIBRARY_DIR}/lib/socks_gssapi.c" + "${LIBRARY_DIR}/lib/socks_sspi.c" + "${LIBRARY_DIR}/lib/curl_sspi.c" + "${LIBRARY_DIR}/lib/slist.c" + "${LIBRARY_DIR}/lib/nonblock.c" + "${LIBRARY_DIR}/lib/curl_memrchr.c" + "${LIBRARY_DIR}/lib/imap.c" + "${LIBRARY_DIR}/lib/pop3.c" + "${LIBRARY_DIR}/lib/smtp.c" + "${LIBRARY_DIR}/lib/pingpong.c" + "${LIBRARY_DIR}/lib/rtsp.c" + "${LIBRARY_DIR}/lib/curl_threads.c" + "${LIBRARY_DIR}/lib/warnless.c" + "${LIBRARY_DIR}/lib/hmac.c" + "${LIBRARY_DIR}/lib/curl_rtmp.c" + "${LIBRARY_DIR}/lib/openldap.c" + "${LIBRARY_DIR}/lib/curl_gethostname.c" + "${LIBRARY_DIR}/lib/gopher.c" + "${LIBRARY_DIR}/lib/idn_win32.c" + "${LIBRARY_DIR}/lib/http_proxy.c" + "${LIBRARY_DIR}/lib/non-ascii.c" + "${LIBRARY_DIR}/lib/asyn-thread.c" + "${LIBRARY_DIR}/lib/curl_gssapi.c" + "${LIBRARY_DIR}/lib/http_ntlm.c" + "${LIBRARY_DIR}/lib/curl_ntlm_wb.c" + "${LIBRARY_DIR}/lib/curl_ntlm_core.c" + "${LIBRARY_DIR}/lib/curl_sasl.c" + "${LIBRARY_DIR}/lib/rand.c" + "${LIBRARY_DIR}/lib/curl_multibyte.c" + "${LIBRARY_DIR}/lib/hostcheck.c" + "${LIBRARY_DIR}/lib/conncache.c" + "${LIBRARY_DIR}/lib/dotdot.c" + "${LIBRARY_DIR}/lib/x509asn1.c" + "${LIBRARY_DIR}/lib/http2.c" + "${LIBRARY_DIR}/lib/smb.c" + "${LIBRARY_DIR}/lib/curl_endian.c" + "${LIBRARY_DIR}/lib/curl_des.c" + "${LIBRARY_DIR}/lib/system_win32.c" + "${LIBRARY_DIR}/lib/mime.c" + "${LIBRARY_DIR}/lib/sha256.c" + "${LIBRARY_DIR}/lib/setopt.c" + "${LIBRARY_DIR}/lib/curl_path.c" + "${LIBRARY_DIR}/lib/curl_ctype.c" + "${LIBRARY_DIR}/lib/curl_range.c" + "${LIBRARY_DIR}/lib/psl.c" + "${LIBRARY_DIR}/lib/doh.c" + "${LIBRARY_DIR}/lib/urlapi.c" + "${LIBRARY_DIR}/lib/curl_get_line.c" + "${LIBRARY_DIR}/lib/altsvc.c" + "${LIBRARY_DIR}/lib/socketpair.c" + "${LIBRARY_DIR}/lib/vauth/vauth.c" + "${LIBRARY_DIR}/lib/vauth/cleartext.c" + "${LIBRARY_DIR}/lib/vauth/cram.c" + "${LIBRARY_DIR}/lib/vauth/digest.c" + "${LIBRARY_DIR}/lib/vauth/digest_sspi.c" + "${LIBRARY_DIR}/lib/vauth/krb5_gssapi.c" + "${LIBRARY_DIR}/lib/vauth/krb5_sspi.c" + "${LIBRARY_DIR}/lib/vauth/ntlm.c" + "${LIBRARY_DIR}/lib/vauth/ntlm_sspi.c" + "${LIBRARY_DIR}/lib/vauth/oauth2.c" + "${LIBRARY_DIR}/lib/vauth/spnego_gssapi.c" + "${LIBRARY_DIR}/lib/vauth/spnego_sspi.c" + "${LIBRARY_DIR}/lib/vtls/openssl.c" + "${LIBRARY_DIR}/lib/vtls/gtls.c" + "${LIBRARY_DIR}/lib/vtls/vtls.c" + "${LIBRARY_DIR}/lib/vtls/nss.c" + "${LIBRARY_DIR}/lib/vtls/polarssl.c" + "${LIBRARY_DIR}/lib/vtls/polarssl_threadlock.c" + "${LIBRARY_DIR}/lib/vtls/wolfssl.c" + "${LIBRARY_DIR}/lib/vtls/schannel.c" + "${LIBRARY_DIR}/lib/vtls/schannel_verify.c" + "${LIBRARY_DIR}/lib/vtls/sectransp.c" + "${LIBRARY_DIR}/lib/vtls/gskit.c" + "${LIBRARY_DIR}/lib/vtls/mbedtls.c" + "${LIBRARY_DIR}/lib/vtls/mesalink.c" + "${LIBRARY_DIR}/lib/vtls/bearssl.c" + "${LIBRARY_DIR}/lib/vquic/ngtcp2.c" + "${LIBRARY_DIR}/lib/vquic/quiche.c" + "${LIBRARY_DIR}/lib/vssh/libssh2.c" + "${LIBRARY_DIR}/lib/vssh/libssh.c" ) add_library (curl ${SRCS}) @@ -154,8 +154,8 @@ target_compile_definitions (curl PRIVATE OS="${CMAKE_SYSTEM_NAME}" ) target_include_directories (curl PUBLIC - ${LIBRARY_DIR}/include - ${LIBRARY_DIR}/lib + "${LIBRARY_DIR}/include" + "${LIBRARY_DIR}/lib" . 
# curl_config.h ) @@ -171,8 +171,8 @@ target_compile_options (curl PRIVATE -g0) # - sentry-native set (CURL_FOUND ON CACHE BOOL "") set (CURL_ROOT_DIR ${LIBRARY_DIR} CACHE PATH "") -set (CURL_INCLUDE_DIR ${LIBRARY_DIR}/include CACHE PATH "") -set (CURL_INCLUDE_DIRS ${LIBRARY_DIR}/include CACHE PATH "") +set (CURL_INCLUDE_DIR "${LIBRARY_DIR}/include" CACHE PATH "") +set (CURL_INCLUDE_DIRS "${LIBRARY_DIR}/include" CACHE PATH "") set (CURL_LIBRARY curl CACHE STRING "") set (CURL_LIBRARIES ${CURL_LIBRARY} CACHE STRING "") set (CURL_VERSION_STRING 7.67.0 CACHE STRING "") diff --git a/contrib/cyrus-sasl b/contrib/cyrus-sasl index 9995bf9d8e1..e6466edfd63 160000 --- a/contrib/cyrus-sasl +++ b/contrib/cyrus-sasl @@ -1 +1 @@ -Subproject commit 9995bf9d8e14f58934d9313ac64f13780d6dd3c9 +Subproject commit e6466edfd638cc5073debe941c53345b18a09512 diff --git a/contrib/cyrus-sasl-cmake/CMakeLists.txt b/contrib/cyrus-sasl-cmake/CMakeLists.txt index 5003c9a21db..aa25a078718 100644 --- a/contrib/cyrus-sasl-cmake/CMakeLists.txt +++ b/contrib/cyrus-sasl-cmake/CMakeLists.txt @@ -1,23 +1,23 @@ -set(CYRUS_SASL_SOURCE_DIR ${ClickHouse_SOURCE_DIR}/contrib/cyrus-sasl) +set(CYRUS_SASL_SOURCE_DIR "${ClickHouse_SOURCE_DIR}/contrib/cyrus-sasl") add_library(${CYRUS_SASL_LIBRARY}) target_sources(${CYRUS_SASL_LIBRARY} PRIVATE - ${CYRUS_SASL_SOURCE_DIR}/plugins/gssapi.c - # ${CYRUS_SASL_SOURCE_DIR}/plugins/gssapiv2_init.c - ${CYRUS_SASL_SOURCE_DIR}/common/plugin_common.c - ${CYRUS_SASL_SOURCE_DIR}/lib/common.c - ${CYRUS_SASL_SOURCE_DIR}/lib/canonusr.c - ${CYRUS_SASL_SOURCE_DIR}/lib/server.c - ${CYRUS_SASL_SOURCE_DIR}/lib/config.c - ${CYRUS_SASL_SOURCE_DIR}/lib/auxprop.c - ${CYRUS_SASL_SOURCE_DIR}/lib/saslutil.c - ${CYRUS_SASL_SOURCE_DIR}/lib/external.c - ${CYRUS_SASL_SOURCE_DIR}/lib/seterror.c - ${CYRUS_SASL_SOURCE_DIR}/lib/md5.c - ${CYRUS_SASL_SOURCE_DIR}/lib/dlopen.c - ${CYRUS_SASL_SOURCE_DIR}/lib/client.c - ${CYRUS_SASL_SOURCE_DIR}/lib/checkpw.c + "${CYRUS_SASL_SOURCE_DIR}/plugins/gssapi.c" + # "${CYRUS_SASL_SOURCE_DIR}/plugins/gssapiv2_init.c" + "${CYRUS_SASL_SOURCE_DIR}/common/plugin_common.c" + "${CYRUS_SASL_SOURCE_DIR}/lib/common.c" + "${CYRUS_SASL_SOURCE_DIR}/lib/canonusr.c" + "${CYRUS_SASL_SOURCE_DIR}/lib/server.c" + "${CYRUS_SASL_SOURCE_DIR}/lib/config.c" + "${CYRUS_SASL_SOURCE_DIR}/lib/auxprop.c" + "${CYRUS_SASL_SOURCE_DIR}/lib/saslutil.c" + "${CYRUS_SASL_SOURCE_DIR}/lib/external.c" + "${CYRUS_SASL_SOURCE_DIR}/lib/seterror.c" + "${CYRUS_SASL_SOURCE_DIR}/lib/md5.c" + "${CYRUS_SASL_SOURCE_DIR}/lib/dlopen.c" + "${CYRUS_SASL_SOURCE_DIR}/lib/client.c" + "${CYRUS_SASL_SOURCE_DIR}/lib/checkpw.c" ) target_include_directories(${CYRUS_SASL_LIBRARY} PUBLIC @@ -26,16 +26,16 @@ target_include_directories(${CYRUS_SASL_LIBRARY} PUBLIC target_include_directories(${CYRUS_SASL_LIBRARY} PRIVATE ${CMAKE_CURRENT_SOURCE_DIR} # for config.h - ${CYRUS_SASL_SOURCE_DIR}/plugins + "${CYRUS_SASL_SOURCE_DIR}/plugins" ${CYRUS_SASL_SOURCE_DIR} - ${CYRUS_SASL_SOURCE_DIR}/include - ${CYRUS_SASL_SOURCE_DIR}/lib - ${CYRUS_SASL_SOURCE_DIR}/sasldb - ${CYRUS_SASL_SOURCE_DIR}/common - ${CYRUS_SASL_SOURCE_DIR}/saslauthd - ${CYRUS_SASL_SOURCE_DIR}/sample - ${CYRUS_SASL_SOURCE_DIR}/utils - ${CYRUS_SASL_SOURCE_DIR}/tests + "${CYRUS_SASL_SOURCE_DIR}/include" + "${CYRUS_SASL_SOURCE_DIR}/lib" + "${CYRUS_SASL_SOURCE_DIR}/sasldb" + "${CYRUS_SASL_SOURCE_DIR}/common" + "${CYRUS_SASL_SOURCE_DIR}/saslauthd" + "${CYRUS_SASL_SOURCE_DIR}/sample" + "${CYRUS_SASL_SOURCE_DIR}/utils" + "${CYRUS_SASL_SOURCE_DIR}/tests" ) target_compile_definitions(${CYRUS_SASL_LIBRARY} 
PUBLIC @@ -52,15 +52,15 @@ target_compile_definitions(${CYRUS_SASL_LIBRARY} PUBLIC LIBSASL_EXPORTS=1 ) -file(MAKE_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}/sasl) +file(MAKE_DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}/sasl") file(COPY - ${CYRUS_SASL_SOURCE_DIR}/include/sasl.h - DESTINATION ${CMAKE_CURRENT_BINARY_DIR}/sasl + "${CYRUS_SASL_SOURCE_DIR}/include/sasl.h" + DESTINATION "${CMAKE_CURRENT_BINARY_DIR}/sasl" ) file(COPY - ${CYRUS_SASL_SOURCE_DIR}/include/prop.h + "${CYRUS_SASL_SOURCE_DIR}/include/prop.h" DESTINATION ${CMAKE_CURRENT_BINARY_DIR} ) diff --git a/contrib/datasketches-cpp b/contrib/datasketches-cpp new file mode 160000 index 00000000000..f915d35b2de --- /dev/null +++ b/contrib/datasketches-cpp @@ -0,0 +1 @@ +Subproject commit f915d35b2de676683493c86c585141a1e1c83334 diff --git a/contrib/double-conversion-cmake/CMakeLists.txt b/contrib/double-conversion-cmake/CMakeLists.txt index 0690731e1b1..c8bf1b34b8f 100644 --- a/contrib/double-conversion-cmake/CMakeLists.txt +++ b/contrib/double-conversion-cmake/CMakeLists.txt @@ -1,13 +1,13 @@ -SET(LIBRARY_DIR ${ClickHouse_SOURCE_DIR}/contrib/double-conversion) +SET(LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/double-conversion") add_library(double-conversion -${LIBRARY_DIR}/double-conversion/bignum.cc -${LIBRARY_DIR}/double-conversion/bignum-dtoa.cc -${LIBRARY_DIR}/double-conversion/cached-powers.cc -${LIBRARY_DIR}/double-conversion/diy-fp.cc -${LIBRARY_DIR}/double-conversion/double-conversion.cc -${LIBRARY_DIR}/double-conversion/fast-dtoa.cc -${LIBRARY_DIR}/double-conversion/fixed-dtoa.cc -${LIBRARY_DIR}/double-conversion/strtod.cc) +"${LIBRARY_DIR}/double-conversion/bignum.cc" +"${LIBRARY_DIR}/double-conversion/bignum-dtoa.cc" +"${LIBRARY_DIR}/double-conversion/cached-powers.cc" +"${LIBRARY_DIR}/double-conversion/diy-fp.cc" +"${LIBRARY_DIR}/double-conversion/double-conversion.cc" +"${LIBRARY_DIR}/double-conversion/fast-dtoa.cc" +"${LIBRARY_DIR}/double-conversion/fixed-dtoa.cc" +"${LIBRARY_DIR}/double-conversion/strtod.cc") target_include_directories(double-conversion SYSTEM BEFORE PUBLIC "${LIBRARY_DIR}") diff --git a/contrib/fastops-cmake/CMakeLists.txt b/contrib/fastops-cmake/CMakeLists.txt index 0269d5603c2..fe7293c614b 100644 --- a/contrib/fastops-cmake/CMakeLists.txt +++ b/contrib/fastops-cmake/CMakeLists.txt @@ -1,18 +1,18 @@ -set(LIBRARY_DIR ${ClickHouse_SOURCE_DIR}/contrib/fastops) +set(LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/fastops") set(SRCS "") if(HAVE_AVX) - set (SRCS ${SRCS} ${LIBRARY_DIR}/fastops/avx/ops_avx.cpp) - set_source_files_properties(${LIBRARY_DIR}/fastops/avx/ops_avx.cpp PROPERTIES COMPILE_FLAGS "-mavx -DNO_AVX2") + set (SRCS ${SRCS} "${LIBRARY_DIR}/fastops/avx/ops_avx.cpp") + set_source_files_properties("${LIBRARY_DIR}/fastops/avx/ops_avx.cpp" PROPERTIES COMPILE_FLAGS "-mavx -DNO_AVX2") endif() if(HAVE_AVX2) - set (SRCS ${SRCS} ${LIBRARY_DIR}/fastops/avx2/ops_avx2.cpp) - set_source_files_properties(${LIBRARY_DIR}/fastops/avx2/ops_avx2.cpp PROPERTIES COMPILE_FLAGS "-mavx2 -mfma") + set (SRCS ${SRCS} "${LIBRARY_DIR}/fastops/avx2/ops_avx2.cpp") + set_source_files_properties("${LIBRARY_DIR}/fastops/avx2/ops_avx2.cpp" PROPERTIES COMPILE_FLAGS "-mavx2 -mfma") endif() -set (SRCS ${SRCS} ${LIBRARY_DIR}/fastops/plain/ops_plain.cpp ${LIBRARY_DIR}/fastops/core/avx_id.cpp ${LIBRARY_DIR}/fastops/fastops.cpp) +set (SRCS ${SRCS} "${LIBRARY_DIR}/fastops/plain/ops_plain.cpp" "${LIBRARY_DIR}/fastops/core/avx_id.cpp" "${LIBRARY_DIR}/fastops/fastops.cpp") add_library(fastops ${SRCS}) diff --git a/contrib/flatbuffers 
b/contrib/flatbuffers index 6df40a24717..22e3ffc66d2 160000 --- a/contrib/flatbuffers +++ b/contrib/flatbuffers @@ -1 +1 @@ -Subproject commit 6df40a2471737b27271bdd9b900ab5f3aec746c7 +Subproject commit 22e3ffc66d2d7d72d1414390aa0f04ffd114a5a1 diff --git a/contrib/grpc b/contrib/grpc index 7436366ceb3..1085a941238 160000 --- a/contrib/grpc +++ b/contrib/grpc @@ -1 +1 @@ -Subproject commit 7436366ceb341ba5c00ea29f1645e02a2b70bf93 +Subproject commit 1085a941238e66b13e3fb89c310533745380acbc diff --git a/contrib/h3-cmake/CMakeLists.txt b/contrib/h3-cmake/CMakeLists.txt index 2911d7283f0..6b184a175b0 100644 --- a/contrib/h3-cmake/CMakeLists.txt +++ b/contrib/h3-cmake/CMakeLists.txt @@ -1,30 +1,30 @@ -set(H3_SOURCE_DIR ${ClickHouse_SOURCE_DIR}/contrib/h3/src/h3lib) -set(H3_BINARY_DIR ${ClickHouse_BINARY_DIR}/contrib/h3/src/h3lib) +set(H3_SOURCE_DIR "${ClickHouse_SOURCE_DIR}/contrib/h3/src/h3lib") +set(H3_BINARY_DIR "${ClickHouse_BINARY_DIR}/contrib/h3/src/h3lib") set(SRCS -${H3_SOURCE_DIR}/lib/algos.c -${H3_SOURCE_DIR}/lib/baseCells.c -${H3_SOURCE_DIR}/lib/bbox.c -${H3_SOURCE_DIR}/lib/coordijk.c -${H3_SOURCE_DIR}/lib/faceijk.c -${H3_SOURCE_DIR}/lib/geoCoord.c -${H3_SOURCE_DIR}/lib/h3Index.c -${H3_SOURCE_DIR}/lib/h3UniEdge.c -${H3_SOURCE_DIR}/lib/linkedGeo.c -${H3_SOURCE_DIR}/lib/localij.c -${H3_SOURCE_DIR}/lib/mathExtensions.c -${H3_SOURCE_DIR}/lib/polygon.c -${H3_SOURCE_DIR}/lib/vec2d.c -${H3_SOURCE_DIR}/lib/vec3d.c -${H3_SOURCE_DIR}/lib/vertex.c -${H3_SOURCE_DIR}/lib/vertexGraph.c +"${H3_SOURCE_DIR}/lib/algos.c" +"${H3_SOURCE_DIR}/lib/baseCells.c" +"${H3_SOURCE_DIR}/lib/bbox.c" +"${H3_SOURCE_DIR}/lib/coordijk.c" +"${H3_SOURCE_DIR}/lib/faceijk.c" +"${H3_SOURCE_DIR}/lib/geoCoord.c" +"${H3_SOURCE_DIR}/lib/h3Index.c" +"${H3_SOURCE_DIR}/lib/h3UniEdge.c" +"${H3_SOURCE_DIR}/lib/linkedGeo.c" +"${H3_SOURCE_DIR}/lib/localij.c" +"${H3_SOURCE_DIR}/lib/mathExtensions.c" +"${H3_SOURCE_DIR}/lib/polygon.c" +"${H3_SOURCE_DIR}/lib/vec2d.c" +"${H3_SOURCE_DIR}/lib/vec3d.c" +"${H3_SOURCE_DIR}/lib/vertex.c" +"${H3_SOURCE_DIR}/lib/vertexGraph.c" ) -configure_file(${H3_SOURCE_DIR}/include/h3api.h.in ${H3_BINARY_DIR}/include/h3api.h) +configure_file("${H3_SOURCE_DIR}/include/h3api.h.in" "${H3_BINARY_DIR}/include/h3api.h") add_library(h3 ${SRCS}) -target_include_directories(h3 SYSTEM PUBLIC ${H3_SOURCE_DIR}/include) -target_include_directories(h3 SYSTEM PUBLIC ${H3_BINARY_DIR}/include) +target_include_directories(h3 SYSTEM PUBLIC "${H3_SOURCE_DIR}/include") +target_include_directories(h3 SYSTEM PUBLIC "${H3_BINARY_DIR}/include") target_compile_definitions(h3 PRIVATE H3_HAVE_VLA) if(M_LIBRARY) target_link_libraries(h3 PRIVATE ${M_LIBRARY}) diff --git a/contrib/hyperscan-cmake/CMakeLists.txt b/contrib/hyperscan-cmake/CMakeLists.txt index 75c45ff7bf5..6a364da126d 100644 --- a/contrib/hyperscan-cmake/CMakeLists.txt +++ b/contrib/hyperscan-cmake/CMakeLists.txt @@ -40,211 +40,211 @@ endif () if (NOT EXTERNAL_HYPERSCAN_LIBRARY_FOUND) set (USE_INTERNAL_HYPERSCAN_LIBRARY 1) - set (LIBRARY_DIR ${ClickHouse_SOURCE_DIR}/contrib/hyperscan) + set (LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/hyperscan") set (SRCS - ${LIBRARY_DIR}/src/alloc.c - ${LIBRARY_DIR}/src/compiler/asserts.cpp - ${LIBRARY_DIR}/src/compiler/compiler.cpp - ${LIBRARY_DIR}/src/compiler/error.cpp - ${LIBRARY_DIR}/src/crc32.c - ${LIBRARY_DIR}/src/database.c - ${LIBRARY_DIR}/src/fdr/engine_description.cpp - ${LIBRARY_DIR}/src/fdr/fdr_compile_util.cpp - ${LIBRARY_DIR}/src/fdr/fdr_compile.cpp - ${LIBRARY_DIR}/src/fdr/fdr_confirm_compile.cpp - 
${LIBRARY_DIR}/src/fdr/fdr_engine_description.cpp - ${LIBRARY_DIR}/src/fdr/fdr.c - ${LIBRARY_DIR}/src/fdr/flood_compile.cpp - ${LIBRARY_DIR}/src/fdr/teddy_compile.cpp - ${LIBRARY_DIR}/src/fdr/teddy_engine_description.cpp - ${LIBRARY_DIR}/src/fdr/teddy.c - ${LIBRARY_DIR}/src/grey.cpp - ${LIBRARY_DIR}/src/hs_valid_platform.c - ${LIBRARY_DIR}/src/hs_version.c - ${LIBRARY_DIR}/src/hs.cpp - ${LIBRARY_DIR}/src/hwlm/hwlm_build.cpp - ${LIBRARY_DIR}/src/hwlm/hwlm_literal.cpp - ${LIBRARY_DIR}/src/hwlm/hwlm.c - ${LIBRARY_DIR}/src/hwlm/noodle_build.cpp - ${LIBRARY_DIR}/src/hwlm/noodle_engine.c - ${LIBRARY_DIR}/src/nfa/accel_dfa_build_strat.cpp - ${LIBRARY_DIR}/src/nfa/accel.c - ${LIBRARY_DIR}/src/nfa/accelcompile.cpp - ${LIBRARY_DIR}/src/nfa/castle.c - ${LIBRARY_DIR}/src/nfa/castlecompile.cpp - ${LIBRARY_DIR}/src/nfa/dfa_build_strat.cpp - ${LIBRARY_DIR}/src/nfa/dfa_min.cpp - ${LIBRARY_DIR}/src/nfa/gough.c - ${LIBRARY_DIR}/src/nfa/goughcompile_accel.cpp - ${LIBRARY_DIR}/src/nfa/goughcompile_reg.cpp - ${LIBRARY_DIR}/src/nfa/goughcompile.cpp - ${LIBRARY_DIR}/src/nfa/lbr.c - ${LIBRARY_DIR}/src/nfa/limex_64.c - ${LIBRARY_DIR}/src/nfa/limex_accel.c - ${LIBRARY_DIR}/src/nfa/limex_compile.cpp - ${LIBRARY_DIR}/src/nfa/limex_native.c - ${LIBRARY_DIR}/src/nfa/limex_simd128.c - ${LIBRARY_DIR}/src/nfa/limex_simd256.c - ${LIBRARY_DIR}/src/nfa/limex_simd384.c - ${LIBRARY_DIR}/src/nfa/limex_simd512.c - ${LIBRARY_DIR}/src/nfa/mcclellan.c - ${LIBRARY_DIR}/src/nfa/mcclellancompile_util.cpp - ${LIBRARY_DIR}/src/nfa/mcclellancompile.cpp - ${LIBRARY_DIR}/src/nfa/mcsheng_compile.cpp - ${LIBRARY_DIR}/src/nfa/mcsheng_data.c - ${LIBRARY_DIR}/src/nfa/mcsheng.c - ${LIBRARY_DIR}/src/nfa/mpv.c - ${LIBRARY_DIR}/src/nfa/mpvcompile.cpp - ${LIBRARY_DIR}/src/nfa/nfa_api_dispatch.c - ${LIBRARY_DIR}/src/nfa/nfa_build_util.cpp - ${LIBRARY_DIR}/src/nfa/rdfa_graph.cpp - ${LIBRARY_DIR}/src/nfa/rdfa_merge.cpp - ${LIBRARY_DIR}/src/nfa/rdfa.cpp - ${LIBRARY_DIR}/src/nfa/repeat.c - ${LIBRARY_DIR}/src/nfa/repeatcompile.cpp - ${LIBRARY_DIR}/src/nfa/sheng.c - ${LIBRARY_DIR}/src/nfa/shengcompile.cpp - ${LIBRARY_DIR}/src/nfa/shufti.c - ${LIBRARY_DIR}/src/nfa/shufticompile.cpp - ${LIBRARY_DIR}/src/nfa/tamarama.c - ${LIBRARY_DIR}/src/nfa/tamaramacompile.cpp - ${LIBRARY_DIR}/src/nfa/truffle.c - ${LIBRARY_DIR}/src/nfa/trufflecompile.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_anchored_acyclic.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_anchored_dots.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_asserts.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_builder.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_calc_components.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_cyclic_redundancy.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_depth.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_dominators.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_edge_redundancy.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_equivalence.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_execute.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_expr_info.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_extparam.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_fixed_width.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_fuzzy.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_haig.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_holder.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_is_equal.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_lbr.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_limex_accel.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_limex.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_literal_analysis.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_literal_component.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_literal_decorated.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_mcclellan.cpp - 
${LIBRARY_DIR}/src/nfagraph/ng_misc_opt.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_netflow.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_prefilter.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_prune.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_puff.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_redundancy.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_region_redundancy.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_region.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_repeat.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_reports.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_restructuring.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_revacc.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_sep.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_small_literal_set.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_som_add_redundancy.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_som_util.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_som.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_split.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_squash.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_stop.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_uncalc_components.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_utf8.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_util.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_vacuous.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_violet.cpp - ${LIBRARY_DIR}/src/nfagraph/ng_width.cpp - ${LIBRARY_DIR}/src/nfagraph/ng.cpp - ${LIBRARY_DIR}/src/parser/AsciiComponentClass.cpp - ${LIBRARY_DIR}/src/parser/buildstate.cpp - ${LIBRARY_DIR}/src/parser/check_refs.cpp - ${LIBRARY_DIR}/src/parser/Component.cpp - ${LIBRARY_DIR}/src/parser/ComponentAlternation.cpp - ${LIBRARY_DIR}/src/parser/ComponentAssertion.cpp - ${LIBRARY_DIR}/src/parser/ComponentAtomicGroup.cpp - ${LIBRARY_DIR}/src/parser/ComponentBackReference.cpp - ${LIBRARY_DIR}/src/parser/ComponentBoundary.cpp - ${LIBRARY_DIR}/src/parser/ComponentByte.cpp - ${LIBRARY_DIR}/src/parser/ComponentClass.cpp - ${LIBRARY_DIR}/src/parser/ComponentCondReference.cpp - ${LIBRARY_DIR}/src/parser/ComponentEmpty.cpp - ${LIBRARY_DIR}/src/parser/ComponentEUS.cpp - ${LIBRARY_DIR}/src/parser/ComponentRepeat.cpp - ${LIBRARY_DIR}/src/parser/ComponentSequence.cpp - ${LIBRARY_DIR}/src/parser/ComponentVisitor.cpp - ${LIBRARY_DIR}/src/parser/ComponentWordBoundary.cpp - ${LIBRARY_DIR}/src/parser/ConstComponentVisitor.cpp - ${LIBRARY_DIR}/src/parser/control_verbs.cpp - ${LIBRARY_DIR}/src/parser/logical_combination.cpp - ${LIBRARY_DIR}/src/parser/parse_error.cpp - ${LIBRARY_DIR}/src/parser/parser_util.cpp - ${LIBRARY_DIR}/src/parser/Parser.cpp - ${LIBRARY_DIR}/src/parser/prefilter.cpp - ${LIBRARY_DIR}/src/parser/shortcut_literal.cpp - ${LIBRARY_DIR}/src/parser/ucp_table.cpp - ${LIBRARY_DIR}/src/parser/unsupported.cpp - ${LIBRARY_DIR}/src/parser/utf8_validate.cpp - ${LIBRARY_DIR}/src/parser/Utf8ComponentClass.cpp - ${LIBRARY_DIR}/src/rose/block.c - ${LIBRARY_DIR}/src/rose/catchup.c - ${LIBRARY_DIR}/src/rose/init.c - ${LIBRARY_DIR}/src/rose/match.c - ${LIBRARY_DIR}/src/rose/program_runtime.c - ${LIBRARY_DIR}/src/rose/rose_build_add_mask.cpp - ${LIBRARY_DIR}/src/rose/rose_build_add.cpp - ${LIBRARY_DIR}/src/rose/rose_build_anchored.cpp - ${LIBRARY_DIR}/src/rose/rose_build_bytecode.cpp - ${LIBRARY_DIR}/src/rose/rose_build_castle.cpp - ${LIBRARY_DIR}/src/rose/rose_build_compile.cpp - ${LIBRARY_DIR}/src/rose/rose_build_convert.cpp - ${LIBRARY_DIR}/src/rose/rose_build_dedupe.cpp - ${LIBRARY_DIR}/src/rose/rose_build_engine_blob.cpp - ${LIBRARY_DIR}/src/rose/rose_build_exclusive.cpp - ${LIBRARY_DIR}/src/rose/rose_build_groups.cpp - ${LIBRARY_DIR}/src/rose/rose_build_infix.cpp - ${LIBRARY_DIR}/src/rose/rose_build_instructions.cpp - ${LIBRARY_DIR}/src/rose/rose_build_lit_accel.cpp - 
${LIBRARY_DIR}/src/rose/rose_build_long_lit.cpp - ${LIBRARY_DIR}/src/rose/rose_build_lookaround.cpp - ${LIBRARY_DIR}/src/rose/rose_build_matchers.cpp - ${LIBRARY_DIR}/src/rose/rose_build_merge.cpp - ${LIBRARY_DIR}/src/rose/rose_build_misc.cpp - ${LIBRARY_DIR}/src/rose/rose_build_program.cpp - ${LIBRARY_DIR}/src/rose/rose_build_role_aliasing.cpp - ${LIBRARY_DIR}/src/rose/rose_build_scatter.cpp - ${LIBRARY_DIR}/src/rose/rose_build_width.cpp - ${LIBRARY_DIR}/src/rose/rose_in_util.cpp - ${LIBRARY_DIR}/src/rose/stream.c - ${LIBRARY_DIR}/src/runtime.c - ${LIBRARY_DIR}/src/scratch.c - ${LIBRARY_DIR}/src/smallwrite/smallwrite_build.cpp - ${LIBRARY_DIR}/src/som/slot_manager.cpp - ${LIBRARY_DIR}/src/som/som_runtime.c - ${LIBRARY_DIR}/src/som/som_stream.c - ${LIBRARY_DIR}/src/stream_compress.c - ${LIBRARY_DIR}/src/util/alloc.cpp - ${LIBRARY_DIR}/src/util/charreach.cpp - ${LIBRARY_DIR}/src/util/clique.cpp - ${LIBRARY_DIR}/src/util/compile_context.cpp - ${LIBRARY_DIR}/src/util/compile_error.cpp - ${LIBRARY_DIR}/src/util/cpuid_flags.c - ${LIBRARY_DIR}/src/util/depth.cpp - ${LIBRARY_DIR}/src/util/fatbit_build.cpp - ${LIBRARY_DIR}/src/util/multibit_build.cpp - ${LIBRARY_DIR}/src/util/multibit.c - ${LIBRARY_DIR}/src/util/report_manager.cpp - ${LIBRARY_DIR}/src/util/simd_utils.c - ${LIBRARY_DIR}/src/util/state_compress.c - ${LIBRARY_DIR}/src/util/target_info.cpp - ${LIBRARY_DIR}/src/util/ue2string.cpp + "${LIBRARY_DIR}/src/alloc.c" + "${LIBRARY_DIR}/src/compiler/asserts.cpp" + "${LIBRARY_DIR}/src/compiler/compiler.cpp" + "${LIBRARY_DIR}/src/compiler/error.cpp" + "${LIBRARY_DIR}/src/crc32.c" + "${LIBRARY_DIR}/src/database.c" + "${LIBRARY_DIR}/src/fdr/engine_description.cpp" + "${LIBRARY_DIR}/src/fdr/fdr_compile_util.cpp" + "${LIBRARY_DIR}/src/fdr/fdr_compile.cpp" + "${LIBRARY_DIR}/src/fdr/fdr_confirm_compile.cpp" + "${LIBRARY_DIR}/src/fdr/fdr_engine_description.cpp" + "${LIBRARY_DIR}/src/fdr/fdr.c" + "${LIBRARY_DIR}/src/fdr/flood_compile.cpp" + "${LIBRARY_DIR}/src/fdr/teddy_compile.cpp" + "${LIBRARY_DIR}/src/fdr/teddy_engine_description.cpp" + "${LIBRARY_DIR}/src/fdr/teddy.c" + "${LIBRARY_DIR}/src/grey.cpp" + "${LIBRARY_DIR}/src/hs_valid_platform.c" + "${LIBRARY_DIR}/src/hs_version.c" + "${LIBRARY_DIR}/src/hs.cpp" + "${LIBRARY_DIR}/src/hwlm/hwlm_build.cpp" + "${LIBRARY_DIR}/src/hwlm/hwlm_literal.cpp" + "${LIBRARY_DIR}/src/hwlm/hwlm.c" + "${LIBRARY_DIR}/src/hwlm/noodle_build.cpp" + "${LIBRARY_DIR}/src/hwlm/noodle_engine.c" + "${LIBRARY_DIR}/src/nfa/accel_dfa_build_strat.cpp" + "${LIBRARY_DIR}/src/nfa/accel.c" + "${LIBRARY_DIR}/src/nfa/accelcompile.cpp" + "${LIBRARY_DIR}/src/nfa/castle.c" + "${LIBRARY_DIR}/src/nfa/castlecompile.cpp" + "${LIBRARY_DIR}/src/nfa/dfa_build_strat.cpp" + "${LIBRARY_DIR}/src/nfa/dfa_min.cpp" + "${LIBRARY_DIR}/src/nfa/gough.c" + "${LIBRARY_DIR}/src/nfa/goughcompile_accel.cpp" + "${LIBRARY_DIR}/src/nfa/goughcompile_reg.cpp" + "${LIBRARY_DIR}/src/nfa/goughcompile.cpp" + "${LIBRARY_DIR}/src/nfa/lbr.c" + "${LIBRARY_DIR}/src/nfa/limex_64.c" + "${LIBRARY_DIR}/src/nfa/limex_accel.c" + "${LIBRARY_DIR}/src/nfa/limex_compile.cpp" + "${LIBRARY_DIR}/src/nfa/limex_native.c" + "${LIBRARY_DIR}/src/nfa/limex_simd128.c" + "${LIBRARY_DIR}/src/nfa/limex_simd256.c" + "${LIBRARY_DIR}/src/nfa/limex_simd384.c" + "${LIBRARY_DIR}/src/nfa/limex_simd512.c" + "${LIBRARY_DIR}/src/nfa/mcclellan.c" + "${LIBRARY_DIR}/src/nfa/mcclellancompile_util.cpp" + "${LIBRARY_DIR}/src/nfa/mcclellancompile.cpp" + "${LIBRARY_DIR}/src/nfa/mcsheng_compile.cpp" + "${LIBRARY_DIR}/src/nfa/mcsheng_data.c" + 
"${LIBRARY_DIR}/src/nfa/mcsheng.c" + "${LIBRARY_DIR}/src/nfa/mpv.c" + "${LIBRARY_DIR}/src/nfa/mpvcompile.cpp" + "${LIBRARY_DIR}/src/nfa/nfa_api_dispatch.c" + "${LIBRARY_DIR}/src/nfa/nfa_build_util.cpp" + "${LIBRARY_DIR}/src/nfa/rdfa_graph.cpp" + "${LIBRARY_DIR}/src/nfa/rdfa_merge.cpp" + "${LIBRARY_DIR}/src/nfa/rdfa.cpp" + "${LIBRARY_DIR}/src/nfa/repeat.c" + "${LIBRARY_DIR}/src/nfa/repeatcompile.cpp" + "${LIBRARY_DIR}/src/nfa/sheng.c" + "${LIBRARY_DIR}/src/nfa/shengcompile.cpp" + "${LIBRARY_DIR}/src/nfa/shufti.c" + "${LIBRARY_DIR}/src/nfa/shufticompile.cpp" + "${LIBRARY_DIR}/src/nfa/tamarama.c" + "${LIBRARY_DIR}/src/nfa/tamaramacompile.cpp" + "${LIBRARY_DIR}/src/nfa/truffle.c" + "${LIBRARY_DIR}/src/nfa/trufflecompile.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_anchored_acyclic.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_anchored_dots.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_asserts.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_builder.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_calc_components.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_cyclic_redundancy.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_depth.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_dominators.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_edge_redundancy.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_equivalence.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_execute.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_expr_info.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_extparam.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_fixed_width.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_fuzzy.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_haig.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_holder.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_is_equal.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_lbr.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_limex_accel.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_limex.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_literal_analysis.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_literal_component.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_literal_decorated.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_mcclellan.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_misc_opt.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_netflow.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_prefilter.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_prune.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_puff.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_redundancy.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_region_redundancy.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_region.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_repeat.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_reports.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_restructuring.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_revacc.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_sep.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_small_literal_set.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_som_add_redundancy.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_som_util.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_som.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_split.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_squash.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_stop.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_uncalc_components.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_utf8.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_util.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_vacuous.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_violet.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng_width.cpp" + "${LIBRARY_DIR}/src/nfagraph/ng.cpp" + "${LIBRARY_DIR}/src/parser/AsciiComponentClass.cpp" + "${LIBRARY_DIR}/src/parser/buildstate.cpp" + "${LIBRARY_DIR}/src/parser/check_refs.cpp" + "${LIBRARY_DIR}/src/parser/Component.cpp" + "${LIBRARY_DIR}/src/parser/ComponentAlternation.cpp" + 
"${LIBRARY_DIR}/src/parser/ComponentAssertion.cpp" + "${LIBRARY_DIR}/src/parser/ComponentAtomicGroup.cpp" + "${LIBRARY_DIR}/src/parser/ComponentBackReference.cpp" + "${LIBRARY_DIR}/src/parser/ComponentBoundary.cpp" + "${LIBRARY_DIR}/src/parser/ComponentByte.cpp" + "${LIBRARY_DIR}/src/parser/ComponentClass.cpp" + "${LIBRARY_DIR}/src/parser/ComponentCondReference.cpp" + "${LIBRARY_DIR}/src/parser/ComponentEmpty.cpp" + "${LIBRARY_DIR}/src/parser/ComponentEUS.cpp" + "${LIBRARY_DIR}/src/parser/ComponentRepeat.cpp" + "${LIBRARY_DIR}/src/parser/ComponentSequence.cpp" + "${LIBRARY_DIR}/src/parser/ComponentVisitor.cpp" + "${LIBRARY_DIR}/src/parser/ComponentWordBoundary.cpp" + "${LIBRARY_DIR}/src/parser/ConstComponentVisitor.cpp" + "${LIBRARY_DIR}/src/parser/control_verbs.cpp" + "${LIBRARY_DIR}/src/parser/logical_combination.cpp" + "${LIBRARY_DIR}/src/parser/parse_error.cpp" + "${LIBRARY_DIR}/src/parser/parser_util.cpp" + "${LIBRARY_DIR}/src/parser/Parser.cpp" + "${LIBRARY_DIR}/src/parser/prefilter.cpp" + "${LIBRARY_DIR}/src/parser/shortcut_literal.cpp" + "${LIBRARY_DIR}/src/parser/ucp_table.cpp" + "${LIBRARY_DIR}/src/parser/unsupported.cpp" + "${LIBRARY_DIR}/src/parser/utf8_validate.cpp" + "${LIBRARY_DIR}/src/parser/Utf8ComponentClass.cpp" + "${LIBRARY_DIR}/src/rose/block.c" + "${LIBRARY_DIR}/src/rose/catchup.c" + "${LIBRARY_DIR}/src/rose/init.c" + "${LIBRARY_DIR}/src/rose/match.c" + "${LIBRARY_DIR}/src/rose/program_runtime.c" + "${LIBRARY_DIR}/src/rose/rose_build_add_mask.cpp" + "${LIBRARY_DIR}/src/rose/rose_build_add.cpp" + "${LIBRARY_DIR}/src/rose/rose_build_anchored.cpp" + "${LIBRARY_DIR}/src/rose/rose_build_bytecode.cpp" + "${LIBRARY_DIR}/src/rose/rose_build_castle.cpp" + "${LIBRARY_DIR}/src/rose/rose_build_compile.cpp" + "${LIBRARY_DIR}/src/rose/rose_build_convert.cpp" + "${LIBRARY_DIR}/src/rose/rose_build_dedupe.cpp" + "${LIBRARY_DIR}/src/rose/rose_build_engine_blob.cpp" + "${LIBRARY_DIR}/src/rose/rose_build_exclusive.cpp" + "${LIBRARY_DIR}/src/rose/rose_build_groups.cpp" + "${LIBRARY_DIR}/src/rose/rose_build_infix.cpp" + "${LIBRARY_DIR}/src/rose/rose_build_instructions.cpp" + "${LIBRARY_DIR}/src/rose/rose_build_lit_accel.cpp" + "${LIBRARY_DIR}/src/rose/rose_build_long_lit.cpp" + "${LIBRARY_DIR}/src/rose/rose_build_lookaround.cpp" + "${LIBRARY_DIR}/src/rose/rose_build_matchers.cpp" + "${LIBRARY_DIR}/src/rose/rose_build_merge.cpp" + "${LIBRARY_DIR}/src/rose/rose_build_misc.cpp" + "${LIBRARY_DIR}/src/rose/rose_build_program.cpp" + "${LIBRARY_DIR}/src/rose/rose_build_role_aliasing.cpp" + "${LIBRARY_DIR}/src/rose/rose_build_scatter.cpp" + "${LIBRARY_DIR}/src/rose/rose_build_width.cpp" + "${LIBRARY_DIR}/src/rose/rose_in_util.cpp" + "${LIBRARY_DIR}/src/rose/stream.c" + "${LIBRARY_DIR}/src/runtime.c" + "${LIBRARY_DIR}/src/scratch.c" + "${LIBRARY_DIR}/src/smallwrite/smallwrite_build.cpp" + "${LIBRARY_DIR}/src/som/slot_manager.cpp" + "${LIBRARY_DIR}/src/som/som_runtime.c" + "${LIBRARY_DIR}/src/som/som_stream.c" + "${LIBRARY_DIR}/src/stream_compress.c" + "${LIBRARY_DIR}/src/util/alloc.cpp" + "${LIBRARY_DIR}/src/util/charreach.cpp" + "${LIBRARY_DIR}/src/util/clique.cpp" + "${LIBRARY_DIR}/src/util/compile_context.cpp" + "${LIBRARY_DIR}/src/util/compile_error.cpp" + "${LIBRARY_DIR}/src/util/cpuid_flags.c" + "${LIBRARY_DIR}/src/util/depth.cpp" + "${LIBRARY_DIR}/src/util/fatbit_build.cpp" + "${LIBRARY_DIR}/src/util/multibit_build.cpp" + "${LIBRARY_DIR}/src/util/multibit.c" + "${LIBRARY_DIR}/src/util/report_manager.cpp" + "${LIBRARY_DIR}/src/util/simd_utils.c" + "${LIBRARY_DIR}/src/util/state_compress.c" + 
"${LIBRARY_DIR}/src/util/target_info.cpp" + "${LIBRARY_DIR}/src/util/ue2string.cpp" ) add_library (hyperscan ${SRCS}) @@ -259,9 +259,9 @@ if (NOT EXTERNAL_HYPERSCAN_LIBRARY_FOUND) target_include_directories (hyperscan PRIVATE common - ${LIBRARY_DIR}/include + "${LIBRARY_DIR}/include" ) - target_include_directories (hyperscan SYSTEM PUBLIC ${LIBRARY_DIR}/src) + target_include_directories (hyperscan SYSTEM PUBLIC "${LIBRARY_DIR}/src") if (ARCH_AMD64) target_include_directories (hyperscan PRIVATE x86_64) endif () diff --git a/contrib/icu-cmake/CMakeLists.txt b/contrib/icu-cmake/CMakeLists.txt index 884f5c3a336..26f3bb11006 100644 --- a/contrib/icu-cmake/CMakeLists.txt +++ b/contrib/icu-cmake/CMakeLists.txt @@ -1,447 +1,447 @@ -set(ICU_SOURCE_DIR ${ClickHouse_SOURCE_DIR}/contrib/icu/icu4c/source) -set(ICUDATA_SOURCE_DIR ${ClickHouse_SOURCE_DIR}/contrib/icudata/) +set(ICU_SOURCE_DIR "${ClickHouse_SOURCE_DIR}/contrib/icu/icu4c/source") +set(ICUDATA_SOURCE_DIR "${ClickHouse_SOURCE_DIR}/contrib/icudata/") set (CMAKE_CXX_STANDARD 17) # These lists of sources were generated from build log of the original ICU build system (configure + make). set(ICUUC_SOURCES -${ICU_SOURCE_DIR}/common/errorcode.cpp -${ICU_SOURCE_DIR}/common/putil.cpp -${ICU_SOURCE_DIR}/common/umath.cpp -${ICU_SOURCE_DIR}/common/utypes.cpp -${ICU_SOURCE_DIR}/common/uinvchar.cpp -${ICU_SOURCE_DIR}/common/umutex.cpp -${ICU_SOURCE_DIR}/common/ucln_cmn.cpp -${ICU_SOURCE_DIR}/common/uinit.cpp -${ICU_SOURCE_DIR}/common/uobject.cpp -${ICU_SOURCE_DIR}/common/cmemory.cpp -${ICU_SOURCE_DIR}/common/charstr.cpp -${ICU_SOURCE_DIR}/common/cstr.cpp -${ICU_SOURCE_DIR}/common/udata.cpp -${ICU_SOURCE_DIR}/common/ucmndata.cpp -${ICU_SOURCE_DIR}/common/udatamem.cpp -${ICU_SOURCE_DIR}/common/umapfile.cpp -${ICU_SOURCE_DIR}/common/udataswp.cpp -${ICU_SOURCE_DIR}/common/utrie_swap.cpp -${ICU_SOURCE_DIR}/common/ucol_swp.cpp -${ICU_SOURCE_DIR}/common/utrace.cpp -${ICU_SOURCE_DIR}/common/uhash.cpp -${ICU_SOURCE_DIR}/common/uhash_us.cpp -${ICU_SOURCE_DIR}/common/uenum.cpp -${ICU_SOURCE_DIR}/common/ustrenum.cpp -${ICU_SOURCE_DIR}/common/uvector.cpp -${ICU_SOURCE_DIR}/common/ustack.cpp -${ICU_SOURCE_DIR}/common/uvectr32.cpp -${ICU_SOURCE_DIR}/common/uvectr64.cpp -${ICU_SOURCE_DIR}/common/ucnv.cpp -${ICU_SOURCE_DIR}/common/ucnv_bld.cpp -${ICU_SOURCE_DIR}/common/ucnv_cnv.cpp -${ICU_SOURCE_DIR}/common/ucnv_io.cpp -${ICU_SOURCE_DIR}/common/ucnv_cb.cpp -${ICU_SOURCE_DIR}/common/ucnv_err.cpp -${ICU_SOURCE_DIR}/common/ucnvlat1.cpp -${ICU_SOURCE_DIR}/common/ucnv_u7.cpp -${ICU_SOURCE_DIR}/common/ucnv_u8.cpp -${ICU_SOURCE_DIR}/common/ucnv_u16.cpp -${ICU_SOURCE_DIR}/common/ucnv_u32.cpp -${ICU_SOURCE_DIR}/common/ucnvscsu.cpp -${ICU_SOURCE_DIR}/common/ucnvbocu.cpp -${ICU_SOURCE_DIR}/common/ucnv_ext.cpp -${ICU_SOURCE_DIR}/common/ucnvmbcs.cpp -${ICU_SOURCE_DIR}/common/ucnv2022.cpp -${ICU_SOURCE_DIR}/common/ucnvhz.cpp -${ICU_SOURCE_DIR}/common/ucnv_lmb.cpp -${ICU_SOURCE_DIR}/common/ucnvisci.cpp -${ICU_SOURCE_DIR}/common/ucnvdisp.cpp -${ICU_SOURCE_DIR}/common/ucnv_set.cpp -${ICU_SOURCE_DIR}/common/ucnv_ct.cpp -${ICU_SOURCE_DIR}/common/resource.cpp -${ICU_SOURCE_DIR}/common/uresbund.cpp -${ICU_SOURCE_DIR}/common/ures_cnv.cpp -${ICU_SOURCE_DIR}/common/uresdata.cpp -${ICU_SOURCE_DIR}/common/resbund.cpp -${ICU_SOURCE_DIR}/common/resbund_cnv.cpp -${ICU_SOURCE_DIR}/common/ucurr.cpp -${ICU_SOURCE_DIR}/common/localebuilder.cpp -${ICU_SOURCE_DIR}/common/localeprioritylist.cpp -${ICU_SOURCE_DIR}/common/messagepattern.cpp -${ICU_SOURCE_DIR}/common/ucat.cpp 
-${ICU_SOURCE_DIR}/common/locmap.cpp -${ICU_SOURCE_DIR}/common/uloc.cpp -${ICU_SOURCE_DIR}/common/locid.cpp -${ICU_SOURCE_DIR}/common/locutil.cpp -${ICU_SOURCE_DIR}/common/locavailable.cpp -${ICU_SOURCE_DIR}/common/locdispnames.cpp -${ICU_SOURCE_DIR}/common/locdspnm.cpp -${ICU_SOURCE_DIR}/common/loclikely.cpp -${ICU_SOURCE_DIR}/common/locresdata.cpp -${ICU_SOURCE_DIR}/common/lsr.cpp -${ICU_SOURCE_DIR}/common/loclikelysubtags.cpp -${ICU_SOURCE_DIR}/common/locdistance.cpp -${ICU_SOURCE_DIR}/common/localematcher.cpp -${ICU_SOURCE_DIR}/common/bytestream.cpp -${ICU_SOURCE_DIR}/common/stringpiece.cpp -${ICU_SOURCE_DIR}/common/bytesinkutil.cpp -${ICU_SOURCE_DIR}/common/stringtriebuilder.cpp -${ICU_SOURCE_DIR}/common/bytestriebuilder.cpp -${ICU_SOURCE_DIR}/common/bytestrie.cpp -${ICU_SOURCE_DIR}/common/bytestrieiterator.cpp -${ICU_SOURCE_DIR}/common/ucharstrie.cpp -${ICU_SOURCE_DIR}/common/ucharstriebuilder.cpp -${ICU_SOURCE_DIR}/common/ucharstrieiterator.cpp -${ICU_SOURCE_DIR}/common/dictionarydata.cpp -${ICU_SOURCE_DIR}/common/edits.cpp -${ICU_SOURCE_DIR}/common/appendable.cpp -${ICU_SOURCE_DIR}/common/ustr_cnv.cpp -${ICU_SOURCE_DIR}/common/unistr_cnv.cpp -${ICU_SOURCE_DIR}/common/unistr.cpp -${ICU_SOURCE_DIR}/common/unistr_case.cpp -${ICU_SOURCE_DIR}/common/unistr_props.cpp -${ICU_SOURCE_DIR}/common/utf_impl.cpp -${ICU_SOURCE_DIR}/common/ustring.cpp -${ICU_SOURCE_DIR}/common/ustrcase.cpp -${ICU_SOURCE_DIR}/common/ucasemap.cpp -${ICU_SOURCE_DIR}/common/ucasemap_titlecase_brkiter.cpp -${ICU_SOURCE_DIR}/common/cstring.cpp -${ICU_SOURCE_DIR}/common/ustrfmt.cpp -${ICU_SOURCE_DIR}/common/ustrtrns.cpp -${ICU_SOURCE_DIR}/common/ustr_wcs.cpp -${ICU_SOURCE_DIR}/common/utext.cpp -${ICU_SOURCE_DIR}/common/unistr_case_locale.cpp -${ICU_SOURCE_DIR}/common/ustrcase_locale.cpp -${ICU_SOURCE_DIR}/common/unistr_titlecase_brkiter.cpp -${ICU_SOURCE_DIR}/common/ustr_titlecase_brkiter.cpp -${ICU_SOURCE_DIR}/common/normalizer2impl.cpp -${ICU_SOURCE_DIR}/common/normalizer2.cpp -${ICU_SOURCE_DIR}/common/filterednormalizer2.cpp -${ICU_SOURCE_DIR}/common/normlzr.cpp -${ICU_SOURCE_DIR}/common/unorm.cpp -${ICU_SOURCE_DIR}/common/unormcmp.cpp -${ICU_SOURCE_DIR}/common/loadednormalizer2impl.cpp -${ICU_SOURCE_DIR}/common/chariter.cpp -${ICU_SOURCE_DIR}/common/schriter.cpp -${ICU_SOURCE_DIR}/common/uchriter.cpp -${ICU_SOURCE_DIR}/common/uiter.cpp -${ICU_SOURCE_DIR}/common/patternprops.cpp -${ICU_SOURCE_DIR}/common/uchar.cpp -${ICU_SOURCE_DIR}/common/uprops.cpp -${ICU_SOURCE_DIR}/common/ucase.cpp -${ICU_SOURCE_DIR}/common/propname.cpp -${ICU_SOURCE_DIR}/common/ubidi_props.cpp -${ICU_SOURCE_DIR}/common/characterproperties.cpp -${ICU_SOURCE_DIR}/common/ubidi.cpp -${ICU_SOURCE_DIR}/common/ubidiwrt.cpp -${ICU_SOURCE_DIR}/common/ubidiln.cpp -${ICU_SOURCE_DIR}/common/ushape.cpp -${ICU_SOURCE_DIR}/common/uscript.cpp -${ICU_SOURCE_DIR}/common/uscript_props.cpp -${ICU_SOURCE_DIR}/common/usc_impl.cpp -${ICU_SOURCE_DIR}/common/unames.cpp -${ICU_SOURCE_DIR}/common/utrie.cpp -${ICU_SOURCE_DIR}/common/utrie2.cpp -${ICU_SOURCE_DIR}/common/utrie2_builder.cpp -${ICU_SOURCE_DIR}/common/ucptrie.cpp -${ICU_SOURCE_DIR}/common/umutablecptrie.cpp -${ICU_SOURCE_DIR}/common/bmpset.cpp -${ICU_SOURCE_DIR}/common/unisetspan.cpp -${ICU_SOURCE_DIR}/common/uset_props.cpp -${ICU_SOURCE_DIR}/common/uniset_props.cpp -${ICU_SOURCE_DIR}/common/uniset_closure.cpp -${ICU_SOURCE_DIR}/common/uset.cpp -${ICU_SOURCE_DIR}/common/uniset.cpp -${ICU_SOURCE_DIR}/common/usetiter.cpp -${ICU_SOURCE_DIR}/common/ruleiter.cpp -${ICU_SOURCE_DIR}/common/caniter.cpp 
-${ICU_SOURCE_DIR}/common/unifilt.cpp -${ICU_SOURCE_DIR}/common/unifunct.cpp -${ICU_SOURCE_DIR}/common/uarrsort.cpp -${ICU_SOURCE_DIR}/common/brkiter.cpp -${ICU_SOURCE_DIR}/common/ubrk.cpp -${ICU_SOURCE_DIR}/common/brkeng.cpp -${ICU_SOURCE_DIR}/common/dictbe.cpp -${ICU_SOURCE_DIR}/common/filteredbrk.cpp -${ICU_SOURCE_DIR}/common/rbbi.cpp -${ICU_SOURCE_DIR}/common/rbbidata.cpp -${ICU_SOURCE_DIR}/common/rbbinode.cpp -${ICU_SOURCE_DIR}/common/rbbirb.cpp -${ICU_SOURCE_DIR}/common/rbbiscan.cpp -${ICU_SOURCE_DIR}/common/rbbisetb.cpp -${ICU_SOURCE_DIR}/common/rbbistbl.cpp -${ICU_SOURCE_DIR}/common/rbbitblb.cpp -${ICU_SOURCE_DIR}/common/rbbi_cache.cpp -${ICU_SOURCE_DIR}/common/serv.cpp -${ICU_SOURCE_DIR}/common/servnotf.cpp -${ICU_SOURCE_DIR}/common/servls.cpp -${ICU_SOURCE_DIR}/common/servlk.cpp -${ICU_SOURCE_DIR}/common/servlkf.cpp -${ICU_SOURCE_DIR}/common/servrbf.cpp -${ICU_SOURCE_DIR}/common/servslkf.cpp -${ICU_SOURCE_DIR}/common/uidna.cpp -${ICU_SOURCE_DIR}/common/usprep.cpp -${ICU_SOURCE_DIR}/common/uts46.cpp -${ICU_SOURCE_DIR}/common/punycode.cpp -${ICU_SOURCE_DIR}/common/util.cpp -${ICU_SOURCE_DIR}/common/util_props.cpp -${ICU_SOURCE_DIR}/common/parsepos.cpp -${ICU_SOURCE_DIR}/common/locbased.cpp -${ICU_SOURCE_DIR}/common/cwchar.cpp -${ICU_SOURCE_DIR}/common/wintz.cpp -${ICU_SOURCE_DIR}/common/dtintrv.cpp -${ICU_SOURCE_DIR}/common/ucnvsel.cpp -${ICU_SOURCE_DIR}/common/propsvec.cpp -${ICU_SOURCE_DIR}/common/ulist.cpp -${ICU_SOURCE_DIR}/common/uloc_tag.cpp -${ICU_SOURCE_DIR}/common/icudataver.cpp -${ICU_SOURCE_DIR}/common/icuplug.cpp -${ICU_SOURCE_DIR}/common/sharedobject.cpp -${ICU_SOURCE_DIR}/common/simpleformatter.cpp -${ICU_SOURCE_DIR}/common/unifiedcache.cpp -${ICU_SOURCE_DIR}/common/uloc_keytype.cpp -${ICU_SOURCE_DIR}/common/ubiditransform.cpp -${ICU_SOURCE_DIR}/common/pluralmap.cpp -${ICU_SOURCE_DIR}/common/static_unicode_sets.cpp -${ICU_SOURCE_DIR}/common/restrace.cpp) +"${ICU_SOURCE_DIR}/common/errorcode.cpp" +"${ICU_SOURCE_DIR}/common/putil.cpp" +"${ICU_SOURCE_DIR}/common/umath.cpp" +"${ICU_SOURCE_DIR}/common/utypes.cpp" +"${ICU_SOURCE_DIR}/common/uinvchar.cpp" +"${ICU_SOURCE_DIR}/common/umutex.cpp" +"${ICU_SOURCE_DIR}/common/ucln_cmn.cpp" +"${ICU_SOURCE_DIR}/common/uinit.cpp" +"${ICU_SOURCE_DIR}/common/uobject.cpp" +"${ICU_SOURCE_DIR}/common/cmemory.cpp" +"${ICU_SOURCE_DIR}/common/charstr.cpp" +"${ICU_SOURCE_DIR}/common/cstr.cpp" +"${ICU_SOURCE_DIR}/common/udata.cpp" +"${ICU_SOURCE_DIR}/common/ucmndata.cpp" +"${ICU_SOURCE_DIR}/common/udatamem.cpp" +"${ICU_SOURCE_DIR}/common/umapfile.cpp" +"${ICU_SOURCE_DIR}/common/udataswp.cpp" +"${ICU_SOURCE_DIR}/common/utrie_swap.cpp" +"${ICU_SOURCE_DIR}/common/ucol_swp.cpp" +"${ICU_SOURCE_DIR}/common/utrace.cpp" +"${ICU_SOURCE_DIR}/common/uhash.cpp" +"${ICU_SOURCE_DIR}/common/uhash_us.cpp" +"${ICU_SOURCE_DIR}/common/uenum.cpp" +"${ICU_SOURCE_DIR}/common/ustrenum.cpp" +"${ICU_SOURCE_DIR}/common/uvector.cpp" +"${ICU_SOURCE_DIR}/common/ustack.cpp" +"${ICU_SOURCE_DIR}/common/uvectr32.cpp" +"${ICU_SOURCE_DIR}/common/uvectr64.cpp" +"${ICU_SOURCE_DIR}/common/ucnv.cpp" +"${ICU_SOURCE_DIR}/common/ucnv_bld.cpp" +"${ICU_SOURCE_DIR}/common/ucnv_cnv.cpp" +"${ICU_SOURCE_DIR}/common/ucnv_io.cpp" +"${ICU_SOURCE_DIR}/common/ucnv_cb.cpp" +"${ICU_SOURCE_DIR}/common/ucnv_err.cpp" +"${ICU_SOURCE_DIR}/common/ucnvlat1.cpp" +"${ICU_SOURCE_DIR}/common/ucnv_u7.cpp" +"${ICU_SOURCE_DIR}/common/ucnv_u8.cpp" +"${ICU_SOURCE_DIR}/common/ucnv_u16.cpp" +"${ICU_SOURCE_DIR}/common/ucnv_u32.cpp" +"${ICU_SOURCE_DIR}/common/ucnvscsu.cpp" +"${ICU_SOURCE_DIR}/common/ucnvbocu.cpp" 
+"${ICU_SOURCE_DIR}/common/ucnv_ext.cpp" +"${ICU_SOURCE_DIR}/common/ucnvmbcs.cpp" +"${ICU_SOURCE_DIR}/common/ucnv2022.cpp" +"${ICU_SOURCE_DIR}/common/ucnvhz.cpp" +"${ICU_SOURCE_DIR}/common/ucnv_lmb.cpp" +"${ICU_SOURCE_DIR}/common/ucnvisci.cpp" +"${ICU_SOURCE_DIR}/common/ucnvdisp.cpp" +"${ICU_SOURCE_DIR}/common/ucnv_set.cpp" +"${ICU_SOURCE_DIR}/common/ucnv_ct.cpp" +"${ICU_SOURCE_DIR}/common/resource.cpp" +"${ICU_SOURCE_DIR}/common/uresbund.cpp" +"${ICU_SOURCE_DIR}/common/ures_cnv.cpp" +"${ICU_SOURCE_DIR}/common/uresdata.cpp" +"${ICU_SOURCE_DIR}/common/resbund.cpp" +"${ICU_SOURCE_DIR}/common/resbund_cnv.cpp" +"${ICU_SOURCE_DIR}/common/ucurr.cpp" +"${ICU_SOURCE_DIR}/common/localebuilder.cpp" +"${ICU_SOURCE_DIR}/common/localeprioritylist.cpp" +"${ICU_SOURCE_DIR}/common/messagepattern.cpp" +"${ICU_SOURCE_DIR}/common/ucat.cpp" +"${ICU_SOURCE_DIR}/common/locmap.cpp" +"${ICU_SOURCE_DIR}/common/uloc.cpp" +"${ICU_SOURCE_DIR}/common/locid.cpp" +"${ICU_SOURCE_DIR}/common/locutil.cpp" +"${ICU_SOURCE_DIR}/common/locavailable.cpp" +"${ICU_SOURCE_DIR}/common/locdispnames.cpp" +"${ICU_SOURCE_DIR}/common/locdspnm.cpp" +"${ICU_SOURCE_DIR}/common/loclikely.cpp" +"${ICU_SOURCE_DIR}/common/locresdata.cpp" +"${ICU_SOURCE_DIR}/common/lsr.cpp" +"${ICU_SOURCE_DIR}/common/loclikelysubtags.cpp" +"${ICU_SOURCE_DIR}/common/locdistance.cpp" +"${ICU_SOURCE_DIR}/common/localematcher.cpp" +"${ICU_SOURCE_DIR}/common/bytestream.cpp" +"${ICU_SOURCE_DIR}/common/stringpiece.cpp" +"${ICU_SOURCE_DIR}/common/bytesinkutil.cpp" +"${ICU_SOURCE_DIR}/common/stringtriebuilder.cpp" +"${ICU_SOURCE_DIR}/common/bytestriebuilder.cpp" +"${ICU_SOURCE_DIR}/common/bytestrie.cpp" +"${ICU_SOURCE_DIR}/common/bytestrieiterator.cpp" +"${ICU_SOURCE_DIR}/common/ucharstrie.cpp" +"${ICU_SOURCE_DIR}/common/ucharstriebuilder.cpp" +"${ICU_SOURCE_DIR}/common/ucharstrieiterator.cpp" +"${ICU_SOURCE_DIR}/common/dictionarydata.cpp" +"${ICU_SOURCE_DIR}/common/edits.cpp" +"${ICU_SOURCE_DIR}/common/appendable.cpp" +"${ICU_SOURCE_DIR}/common/ustr_cnv.cpp" +"${ICU_SOURCE_DIR}/common/unistr_cnv.cpp" +"${ICU_SOURCE_DIR}/common/unistr.cpp" +"${ICU_SOURCE_DIR}/common/unistr_case.cpp" +"${ICU_SOURCE_DIR}/common/unistr_props.cpp" +"${ICU_SOURCE_DIR}/common/utf_impl.cpp" +"${ICU_SOURCE_DIR}/common/ustring.cpp" +"${ICU_SOURCE_DIR}/common/ustrcase.cpp" +"${ICU_SOURCE_DIR}/common/ucasemap.cpp" +"${ICU_SOURCE_DIR}/common/ucasemap_titlecase_brkiter.cpp" +"${ICU_SOURCE_DIR}/common/cstring.cpp" +"${ICU_SOURCE_DIR}/common/ustrfmt.cpp" +"${ICU_SOURCE_DIR}/common/ustrtrns.cpp" +"${ICU_SOURCE_DIR}/common/ustr_wcs.cpp" +"${ICU_SOURCE_DIR}/common/utext.cpp" +"${ICU_SOURCE_DIR}/common/unistr_case_locale.cpp" +"${ICU_SOURCE_DIR}/common/ustrcase_locale.cpp" +"${ICU_SOURCE_DIR}/common/unistr_titlecase_brkiter.cpp" +"${ICU_SOURCE_DIR}/common/ustr_titlecase_brkiter.cpp" +"${ICU_SOURCE_DIR}/common/normalizer2impl.cpp" +"${ICU_SOURCE_DIR}/common/normalizer2.cpp" +"${ICU_SOURCE_DIR}/common/filterednormalizer2.cpp" +"${ICU_SOURCE_DIR}/common/normlzr.cpp" +"${ICU_SOURCE_DIR}/common/unorm.cpp" +"${ICU_SOURCE_DIR}/common/unormcmp.cpp" +"${ICU_SOURCE_DIR}/common/loadednormalizer2impl.cpp" +"${ICU_SOURCE_DIR}/common/chariter.cpp" +"${ICU_SOURCE_DIR}/common/schriter.cpp" +"${ICU_SOURCE_DIR}/common/uchriter.cpp" +"${ICU_SOURCE_DIR}/common/uiter.cpp" +"${ICU_SOURCE_DIR}/common/patternprops.cpp" +"${ICU_SOURCE_DIR}/common/uchar.cpp" +"${ICU_SOURCE_DIR}/common/uprops.cpp" +"${ICU_SOURCE_DIR}/common/ucase.cpp" +"${ICU_SOURCE_DIR}/common/propname.cpp" +"${ICU_SOURCE_DIR}/common/ubidi_props.cpp" 
+"${ICU_SOURCE_DIR}/common/characterproperties.cpp" +"${ICU_SOURCE_DIR}/common/ubidi.cpp" +"${ICU_SOURCE_DIR}/common/ubidiwrt.cpp" +"${ICU_SOURCE_DIR}/common/ubidiln.cpp" +"${ICU_SOURCE_DIR}/common/ushape.cpp" +"${ICU_SOURCE_DIR}/common/uscript.cpp" +"${ICU_SOURCE_DIR}/common/uscript_props.cpp" +"${ICU_SOURCE_DIR}/common/usc_impl.cpp" +"${ICU_SOURCE_DIR}/common/unames.cpp" +"${ICU_SOURCE_DIR}/common/utrie.cpp" +"${ICU_SOURCE_DIR}/common/utrie2.cpp" +"${ICU_SOURCE_DIR}/common/utrie2_builder.cpp" +"${ICU_SOURCE_DIR}/common/ucptrie.cpp" +"${ICU_SOURCE_DIR}/common/umutablecptrie.cpp" +"${ICU_SOURCE_DIR}/common/bmpset.cpp" +"${ICU_SOURCE_DIR}/common/unisetspan.cpp" +"${ICU_SOURCE_DIR}/common/uset_props.cpp" +"${ICU_SOURCE_DIR}/common/uniset_props.cpp" +"${ICU_SOURCE_DIR}/common/uniset_closure.cpp" +"${ICU_SOURCE_DIR}/common/uset.cpp" +"${ICU_SOURCE_DIR}/common/uniset.cpp" +"${ICU_SOURCE_DIR}/common/usetiter.cpp" +"${ICU_SOURCE_DIR}/common/ruleiter.cpp" +"${ICU_SOURCE_DIR}/common/caniter.cpp" +"${ICU_SOURCE_DIR}/common/unifilt.cpp" +"${ICU_SOURCE_DIR}/common/unifunct.cpp" +"${ICU_SOURCE_DIR}/common/uarrsort.cpp" +"${ICU_SOURCE_DIR}/common/brkiter.cpp" +"${ICU_SOURCE_DIR}/common/ubrk.cpp" +"${ICU_SOURCE_DIR}/common/brkeng.cpp" +"${ICU_SOURCE_DIR}/common/dictbe.cpp" +"${ICU_SOURCE_DIR}/common/filteredbrk.cpp" +"${ICU_SOURCE_DIR}/common/rbbi.cpp" +"${ICU_SOURCE_DIR}/common/rbbidata.cpp" +"${ICU_SOURCE_DIR}/common/rbbinode.cpp" +"${ICU_SOURCE_DIR}/common/rbbirb.cpp" +"${ICU_SOURCE_DIR}/common/rbbiscan.cpp" +"${ICU_SOURCE_DIR}/common/rbbisetb.cpp" +"${ICU_SOURCE_DIR}/common/rbbistbl.cpp" +"${ICU_SOURCE_DIR}/common/rbbitblb.cpp" +"${ICU_SOURCE_DIR}/common/rbbi_cache.cpp" +"${ICU_SOURCE_DIR}/common/serv.cpp" +"${ICU_SOURCE_DIR}/common/servnotf.cpp" +"${ICU_SOURCE_DIR}/common/servls.cpp" +"${ICU_SOURCE_DIR}/common/servlk.cpp" +"${ICU_SOURCE_DIR}/common/servlkf.cpp" +"${ICU_SOURCE_DIR}/common/servrbf.cpp" +"${ICU_SOURCE_DIR}/common/servslkf.cpp" +"${ICU_SOURCE_DIR}/common/uidna.cpp" +"${ICU_SOURCE_DIR}/common/usprep.cpp" +"${ICU_SOURCE_DIR}/common/uts46.cpp" +"${ICU_SOURCE_DIR}/common/punycode.cpp" +"${ICU_SOURCE_DIR}/common/util.cpp" +"${ICU_SOURCE_DIR}/common/util_props.cpp" +"${ICU_SOURCE_DIR}/common/parsepos.cpp" +"${ICU_SOURCE_DIR}/common/locbased.cpp" +"${ICU_SOURCE_DIR}/common/cwchar.cpp" +"${ICU_SOURCE_DIR}/common/wintz.cpp" +"${ICU_SOURCE_DIR}/common/dtintrv.cpp" +"${ICU_SOURCE_DIR}/common/ucnvsel.cpp" +"${ICU_SOURCE_DIR}/common/propsvec.cpp" +"${ICU_SOURCE_DIR}/common/ulist.cpp" +"${ICU_SOURCE_DIR}/common/uloc_tag.cpp" +"${ICU_SOURCE_DIR}/common/icudataver.cpp" +"${ICU_SOURCE_DIR}/common/icuplug.cpp" +"${ICU_SOURCE_DIR}/common/sharedobject.cpp" +"${ICU_SOURCE_DIR}/common/simpleformatter.cpp" +"${ICU_SOURCE_DIR}/common/unifiedcache.cpp" +"${ICU_SOURCE_DIR}/common/uloc_keytype.cpp" +"${ICU_SOURCE_DIR}/common/ubiditransform.cpp" +"${ICU_SOURCE_DIR}/common/pluralmap.cpp" +"${ICU_SOURCE_DIR}/common/static_unicode_sets.cpp" +"${ICU_SOURCE_DIR}/common/restrace.cpp") set(ICUI18N_SOURCES -${ICU_SOURCE_DIR}/i18n/ucln_in.cpp -${ICU_SOURCE_DIR}/i18n/fmtable.cpp -${ICU_SOURCE_DIR}/i18n/format.cpp -${ICU_SOURCE_DIR}/i18n/msgfmt.cpp -${ICU_SOURCE_DIR}/i18n/umsg.cpp -${ICU_SOURCE_DIR}/i18n/numfmt.cpp -${ICU_SOURCE_DIR}/i18n/unum.cpp -${ICU_SOURCE_DIR}/i18n/decimfmt.cpp -${ICU_SOURCE_DIR}/i18n/dcfmtsym.cpp -${ICU_SOURCE_DIR}/i18n/fmtable_cnv.cpp -${ICU_SOURCE_DIR}/i18n/choicfmt.cpp -${ICU_SOURCE_DIR}/i18n/datefmt.cpp -${ICU_SOURCE_DIR}/i18n/smpdtfmt.cpp -${ICU_SOURCE_DIR}/i18n/reldtfmt.cpp 
-${ICU_SOURCE_DIR}/i18n/dtfmtsym.cpp -${ICU_SOURCE_DIR}/i18n/udat.cpp -${ICU_SOURCE_DIR}/i18n/dtptngen.cpp -${ICU_SOURCE_DIR}/i18n/udatpg.cpp -${ICU_SOURCE_DIR}/i18n/nfrs.cpp -${ICU_SOURCE_DIR}/i18n/nfrule.cpp -${ICU_SOURCE_DIR}/i18n/nfsubs.cpp -${ICU_SOURCE_DIR}/i18n/rbnf.cpp -${ICU_SOURCE_DIR}/i18n/numsys.cpp -${ICU_SOURCE_DIR}/i18n/unumsys.cpp -${ICU_SOURCE_DIR}/i18n/ucsdet.cpp -${ICU_SOURCE_DIR}/i18n/ucal.cpp -${ICU_SOURCE_DIR}/i18n/calendar.cpp -${ICU_SOURCE_DIR}/i18n/gregocal.cpp -${ICU_SOURCE_DIR}/i18n/timezone.cpp -${ICU_SOURCE_DIR}/i18n/simpletz.cpp -${ICU_SOURCE_DIR}/i18n/olsontz.cpp -${ICU_SOURCE_DIR}/i18n/astro.cpp -${ICU_SOURCE_DIR}/i18n/taiwncal.cpp -${ICU_SOURCE_DIR}/i18n/buddhcal.cpp -${ICU_SOURCE_DIR}/i18n/persncal.cpp -${ICU_SOURCE_DIR}/i18n/islamcal.cpp -${ICU_SOURCE_DIR}/i18n/japancal.cpp -${ICU_SOURCE_DIR}/i18n/gregoimp.cpp -${ICU_SOURCE_DIR}/i18n/hebrwcal.cpp -${ICU_SOURCE_DIR}/i18n/indiancal.cpp -${ICU_SOURCE_DIR}/i18n/chnsecal.cpp -${ICU_SOURCE_DIR}/i18n/cecal.cpp -${ICU_SOURCE_DIR}/i18n/coptccal.cpp -${ICU_SOURCE_DIR}/i18n/dangical.cpp -${ICU_SOURCE_DIR}/i18n/ethpccal.cpp -${ICU_SOURCE_DIR}/i18n/coleitr.cpp -${ICU_SOURCE_DIR}/i18n/coll.cpp -${ICU_SOURCE_DIR}/i18n/sortkey.cpp -${ICU_SOURCE_DIR}/i18n/bocsu.cpp -${ICU_SOURCE_DIR}/i18n/ucoleitr.cpp -${ICU_SOURCE_DIR}/i18n/ucol.cpp -${ICU_SOURCE_DIR}/i18n/ucol_res.cpp -${ICU_SOURCE_DIR}/i18n/ucol_sit.cpp -${ICU_SOURCE_DIR}/i18n/collation.cpp -${ICU_SOURCE_DIR}/i18n/collationsettings.cpp -${ICU_SOURCE_DIR}/i18n/collationdata.cpp -${ICU_SOURCE_DIR}/i18n/collationtailoring.cpp -${ICU_SOURCE_DIR}/i18n/collationdatareader.cpp -${ICU_SOURCE_DIR}/i18n/collationdatawriter.cpp -${ICU_SOURCE_DIR}/i18n/collationfcd.cpp -${ICU_SOURCE_DIR}/i18n/collationiterator.cpp -${ICU_SOURCE_DIR}/i18n/utf16collationiterator.cpp -${ICU_SOURCE_DIR}/i18n/utf8collationiterator.cpp -${ICU_SOURCE_DIR}/i18n/uitercollationiterator.cpp -${ICU_SOURCE_DIR}/i18n/collationsets.cpp -${ICU_SOURCE_DIR}/i18n/collationcompare.cpp -${ICU_SOURCE_DIR}/i18n/collationfastlatin.cpp -${ICU_SOURCE_DIR}/i18n/collationkeys.cpp -${ICU_SOURCE_DIR}/i18n/rulebasedcollator.cpp -${ICU_SOURCE_DIR}/i18n/collationroot.cpp -${ICU_SOURCE_DIR}/i18n/collationrootelements.cpp -${ICU_SOURCE_DIR}/i18n/collationdatabuilder.cpp -${ICU_SOURCE_DIR}/i18n/collationweights.cpp -${ICU_SOURCE_DIR}/i18n/collationruleparser.cpp -${ICU_SOURCE_DIR}/i18n/collationbuilder.cpp -${ICU_SOURCE_DIR}/i18n/collationfastlatinbuilder.cpp -${ICU_SOURCE_DIR}/i18n/listformatter.cpp -${ICU_SOURCE_DIR}/i18n/ulistformatter.cpp -${ICU_SOURCE_DIR}/i18n/strmatch.cpp -${ICU_SOURCE_DIR}/i18n/usearch.cpp -${ICU_SOURCE_DIR}/i18n/search.cpp -${ICU_SOURCE_DIR}/i18n/stsearch.cpp -${ICU_SOURCE_DIR}/i18n/translit.cpp -${ICU_SOURCE_DIR}/i18n/utrans.cpp -${ICU_SOURCE_DIR}/i18n/esctrn.cpp -${ICU_SOURCE_DIR}/i18n/unesctrn.cpp -${ICU_SOURCE_DIR}/i18n/funcrepl.cpp -${ICU_SOURCE_DIR}/i18n/strrepl.cpp -${ICU_SOURCE_DIR}/i18n/tridpars.cpp -${ICU_SOURCE_DIR}/i18n/cpdtrans.cpp -${ICU_SOURCE_DIR}/i18n/rbt.cpp -${ICU_SOURCE_DIR}/i18n/rbt_data.cpp -${ICU_SOURCE_DIR}/i18n/rbt_pars.cpp -${ICU_SOURCE_DIR}/i18n/rbt_rule.cpp -${ICU_SOURCE_DIR}/i18n/rbt_set.cpp -${ICU_SOURCE_DIR}/i18n/nultrans.cpp -${ICU_SOURCE_DIR}/i18n/remtrans.cpp -${ICU_SOURCE_DIR}/i18n/casetrn.cpp -${ICU_SOURCE_DIR}/i18n/titletrn.cpp -${ICU_SOURCE_DIR}/i18n/tolowtrn.cpp -${ICU_SOURCE_DIR}/i18n/toupptrn.cpp -${ICU_SOURCE_DIR}/i18n/anytrans.cpp -${ICU_SOURCE_DIR}/i18n/name2uni.cpp -${ICU_SOURCE_DIR}/i18n/uni2name.cpp -${ICU_SOURCE_DIR}/i18n/nortrans.cpp 
-${ICU_SOURCE_DIR}/i18n/quant.cpp -${ICU_SOURCE_DIR}/i18n/transreg.cpp -${ICU_SOURCE_DIR}/i18n/brktrans.cpp -${ICU_SOURCE_DIR}/i18n/regexcmp.cpp -${ICU_SOURCE_DIR}/i18n/rematch.cpp -${ICU_SOURCE_DIR}/i18n/repattrn.cpp -${ICU_SOURCE_DIR}/i18n/regexst.cpp -${ICU_SOURCE_DIR}/i18n/regextxt.cpp -${ICU_SOURCE_DIR}/i18n/regeximp.cpp -${ICU_SOURCE_DIR}/i18n/uregex.cpp -${ICU_SOURCE_DIR}/i18n/uregexc.cpp -${ICU_SOURCE_DIR}/i18n/ulocdata.cpp -${ICU_SOURCE_DIR}/i18n/measfmt.cpp -${ICU_SOURCE_DIR}/i18n/currfmt.cpp -${ICU_SOURCE_DIR}/i18n/curramt.cpp -${ICU_SOURCE_DIR}/i18n/currunit.cpp -${ICU_SOURCE_DIR}/i18n/measure.cpp -${ICU_SOURCE_DIR}/i18n/utmscale.cpp -${ICU_SOURCE_DIR}/i18n/csdetect.cpp -${ICU_SOURCE_DIR}/i18n/csmatch.cpp -${ICU_SOURCE_DIR}/i18n/csr2022.cpp -${ICU_SOURCE_DIR}/i18n/csrecog.cpp -${ICU_SOURCE_DIR}/i18n/csrmbcs.cpp -${ICU_SOURCE_DIR}/i18n/csrsbcs.cpp -${ICU_SOURCE_DIR}/i18n/csrucode.cpp -${ICU_SOURCE_DIR}/i18n/csrutf8.cpp -${ICU_SOURCE_DIR}/i18n/inputext.cpp -${ICU_SOURCE_DIR}/i18n/wintzimpl.cpp -${ICU_SOURCE_DIR}/i18n/windtfmt.cpp -${ICU_SOURCE_DIR}/i18n/winnmfmt.cpp -${ICU_SOURCE_DIR}/i18n/basictz.cpp -${ICU_SOURCE_DIR}/i18n/dtrule.cpp -${ICU_SOURCE_DIR}/i18n/rbtz.cpp -${ICU_SOURCE_DIR}/i18n/tzrule.cpp -${ICU_SOURCE_DIR}/i18n/tztrans.cpp -${ICU_SOURCE_DIR}/i18n/vtzone.cpp -${ICU_SOURCE_DIR}/i18n/zonemeta.cpp -${ICU_SOURCE_DIR}/i18n/standardplural.cpp -${ICU_SOURCE_DIR}/i18n/upluralrules.cpp -${ICU_SOURCE_DIR}/i18n/plurrule.cpp -${ICU_SOURCE_DIR}/i18n/plurfmt.cpp -${ICU_SOURCE_DIR}/i18n/selfmt.cpp -${ICU_SOURCE_DIR}/i18n/dtitvfmt.cpp -${ICU_SOURCE_DIR}/i18n/dtitvinf.cpp -${ICU_SOURCE_DIR}/i18n/udateintervalformat.cpp -${ICU_SOURCE_DIR}/i18n/tmunit.cpp -${ICU_SOURCE_DIR}/i18n/tmutamt.cpp -${ICU_SOURCE_DIR}/i18n/tmutfmt.cpp -${ICU_SOURCE_DIR}/i18n/currpinf.cpp -${ICU_SOURCE_DIR}/i18n/uspoof.cpp -${ICU_SOURCE_DIR}/i18n/uspoof_impl.cpp -${ICU_SOURCE_DIR}/i18n/uspoof_build.cpp -${ICU_SOURCE_DIR}/i18n/uspoof_conf.cpp -${ICU_SOURCE_DIR}/i18n/smpdtfst.cpp -${ICU_SOURCE_DIR}/i18n/ztrans.cpp -${ICU_SOURCE_DIR}/i18n/zrule.cpp -${ICU_SOURCE_DIR}/i18n/vzone.cpp -${ICU_SOURCE_DIR}/i18n/fphdlimp.cpp -${ICU_SOURCE_DIR}/i18n/fpositer.cpp -${ICU_SOURCE_DIR}/i18n/ufieldpositer.cpp -${ICU_SOURCE_DIR}/i18n/decNumber.cpp -${ICU_SOURCE_DIR}/i18n/decContext.cpp -${ICU_SOURCE_DIR}/i18n/alphaindex.cpp -${ICU_SOURCE_DIR}/i18n/tznames.cpp -${ICU_SOURCE_DIR}/i18n/tznames_impl.cpp -${ICU_SOURCE_DIR}/i18n/tzgnames.cpp -${ICU_SOURCE_DIR}/i18n/tzfmt.cpp -${ICU_SOURCE_DIR}/i18n/compactdecimalformat.cpp -${ICU_SOURCE_DIR}/i18n/gender.cpp -${ICU_SOURCE_DIR}/i18n/region.cpp -${ICU_SOURCE_DIR}/i18n/scriptset.cpp -${ICU_SOURCE_DIR}/i18n/uregion.cpp -${ICU_SOURCE_DIR}/i18n/reldatefmt.cpp -${ICU_SOURCE_DIR}/i18n/quantityformatter.cpp -${ICU_SOURCE_DIR}/i18n/measunit.cpp -${ICU_SOURCE_DIR}/i18n/sharedbreakiterator.cpp -${ICU_SOURCE_DIR}/i18n/scientificnumberformatter.cpp -${ICU_SOURCE_DIR}/i18n/dayperiodrules.cpp -${ICU_SOURCE_DIR}/i18n/nounit.cpp -${ICU_SOURCE_DIR}/i18n/number_affixutils.cpp -${ICU_SOURCE_DIR}/i18n/number_compact.cpp -${ICU_SOURCE_DIR}/i18n/number_decimalquantity.cpp -${ICU_SOURCE_DIR}/i18n/number_decimfmtprops.cpp -${ICU_SOURCE_DIR}/i18n/number_fluent.cpp -${ICU_SOURCE_DIR}/i18n/number_formatimpl.cpp -${ICU_SOURCE_DIR}/i18n/number_grouping.cpp -${ICU_SOURCE_DIR}/i18n/number_integerwidth.cpp -${ICU_SOURCE_DIR}/i18n/number_longnames.cpp -${ICU_SOURCE_DIR}/i18n/number_modifiers.cpp -${ICU_SOURCE_DIR}/i18n/number_notation.cpp -${ICU_SOURCE_DIR}/i18n/number_output.cpp 
-${ICU_SOURCE_DIR}/i18n/number_padding.cpp -${ICU_SOURCE_DIR}/i18n/number_patternmodifier.cpp -${ICU_SOURCE_DIR}/i18n/number_patternstring.cpp -${ICU_SOURCE_DIR}/i18n/number_rounding.cpp -${ICU_SOURCE_DIR}/i18n/number_scientific.cpp -${ICU_SOURCE_DIR}/i18n/number_utils.cpp -${ICU_SOURCE_DIR}/i18n/number_asformat.cpp -${ICU_SOURCE_DIR}/i18n/number_mapper.cpp -${ICU_SOURCE_DIR}/i18n/number_multiplier.cpp -${ICU_SOURCE_DIR}/i18n/number_currencysymbols.cpp -${ICU_SOURCE_DIR}/i18n/number_skeletons.cpp -${ICU_SOURCE_DIR}/i18n/number_capi.cpp -${ICU_SOURCE_DIR}/i18n/double-conversion-string-to-double.cpp -${ICU_SOURCE_DIR}/i18n/double-conversion-double-to-string.cpp -${ICU_SOURCE_DIR}/i18n/double-conversion-bignum-dtoa.cpp -${ICU_SOURCE_DIR}/i18n/double-conversion-bignum.cpp -${ICU_SOURCE_DIR}/i18n/double-conversion-cached-powers.cpp -${ICU_SOURCE_DIR}/i18n/double-conversion-fast-dtoa.cpp -${ICU_SOURCE_DIR}/i18n/double-conversion-strtod.cpp -${ICU_SOURCE_DIR}/i18n/string_segment.cpp -${ICU_SOURCE_DIR}/i18n/numparse_parsednumber.cpp -${ICU_SOURCE_DIR}/i18n/numparse_impl.cpp -${ICU_SOURCE_DIR}/i18n/numparse_symbols.cpp -${ICU_SOURCE_DIR}/i18n/numparse_decimal.cpp -${ICU_SOURCE_DIR}/i18n/numparse_scientific.cpp -${ICU_SOURCE_DIR}/i18n/numparse_currency.cpp -${ICU_SOURCE_DIR}/i18n/numparse_affixes.cpp -${ICU_SOURCE_DIR}/i18n/numparse_compositions.cpp -${ICU_SOURCE_DIR}/i18n/numparse_validators.cpp -${ICU_SOURCE_DIR}/i18n/numrange_fluent.cpp -${ICU_SOURCE_DIR}/i18n/numrange_impl.cpp -${ICU_SOURCE_DIR}/i18n/erarules.cpp -${ICU_SOURCE_DIR}/i18n/formattedvalue.cpp -${ICU_SOURCE_DIR}/i18n/formattedval_iterimpl.cpp -${ICU_SOURCE_DIR}/i18n/formattedval_sbimpl.cpp -${ICU_SOURCE_DIR}/i18n/formatted_string_builder.cpp) +"${ICU_SOURCE_DIR}/i18n/ucln_in.cpp" +"${ICU_SOURCE_DIR}/i18n/fmtable.cpp" +"${ICU_SOURCE_DIR}/i18n/format.cpp" +"${ICU_SOURCE_DIR}/i18n/msgfmt.cpp" +"${ICU_SOURCE_DIR}/i18n/umsg.cpp" +"${ICU_SOURCE_DIR}/i18n/numfmt.cpp" +"${ICU_SOURCE_DIR}/i18n/unum.cpp" +"${ICU_SOURCE_DIR}/i18n/decimfmt.cpp" +"${ICU_SOURCE_DIR}/i18n/dcfmtsym.cpp" +"${ICU_SOURCE_DIR}/i18n/fmtable_cnv.cpp" +"${ICU_SOURCE_DIR}/i18n/choicfmt.cpp" +"${ICU_SOURCE_DIR}/i18n/datefmt.cpp" +"${ICU_SOURCE_DIR}/i18n/smpdtfmt.cpp" +"${ICU_SOURCE_DIR}/i18n/reldtfmt.cpp" +"${ICU_SOURCE_DIR}/i18n/dtfmtsym.cpp" +"${ICU_SOURCE_DIR}/i18n/udat.cpp" +"${ICU_SOURCE_DIR}/i18n/dtptngen.cpp" +"${ICU_SOURCE_DIR}/i18n/udatpg.cpp" +"${ICU_SOURCE_DIR}/i18n/nfrs.cpp" +"${ICU_SOURCE_DIR}/i18n/nfrule.cpp" +"${ICU_SOURCE_DIR}/i18n/nfsubs.cpp" +"${ICU_SOURCE_DIR}/i18n/rbnf.cpp" +"${ICU_SOURCE_DIR}/i18n/numsys.cpp" +"${ICU_SOURCE_DIR}/i18n/unumsys.cpp" +"${ICU_SOURCE_DIR}/i18n/ucsdet.cpp" +"${ICU_SOURCE_DIR}/i18n/ucal.cpp" +"${ICU_SOURCE_DIR}/i18n/calendar.cpp" +"${ICU_SOURCE_DIR}/i18n/gregocal.cpp" +"${ICU_SOURCE_DIR}/i18n/timezone.cpp" +"${ICU_SOURCE_DIR}/i18n/simpletz.cpp" +"${ICU_SOURCE_DIR}/i18n/olsontz.cpp" +"${ICU_SOURCE_DIR}/i18n/astro.cpp" +"${ICU_SOURCE_DIR}/i18n/taiwncal.cpp" +"${ICU_SOURCE_DIR}/i18n/buddhcal.cpp" +"${ICU_SOURCE_DIR}/i18n/persncal.cpp" +"${ICU_SOURCE_DIR}/i18n/islamcal.cpp" +"${ICU_SOURCE_DIR}/i18n/japancal.cpp" +"${ICU_SOURCE_DIR}/i18n/gregoimp.cpp" +"${ICU_SOURCE_DIR}/i18n/hebrwcal.cpp" +"${ICU_SOURCE_DIR}/i18n/indiancal.cpp" +"${ICU_SOURCE_DIR}/i18n/chnsecal.cpp" +"${ICU_SOURCE_DIR}/i18n/cecal.cpp" +"${ICU_SOURCE_DIR}/i18n/coptccal.cpp" +"${ICU_SOURCE_DIR}/i18n/dangical.cpp" +"${ICU_SOURCE_DIR}/i18n/ethpccal.cpp" +"${ICU_SOURCE_DIR}/i18n/coleitr.cpp" +"${ICU_SOURCE_DIR}/i18n/coll.cpp" +"${ICU_SOURCE_DIR}/i18n/sortkey.cpp" 
+"${ICU_SOURCE_DIR}/i18n/bocsu.cpp" +"${ICU_SOURCE_DIR}/i18n/ucoleitr.cpp" +"${ICU_SOURCE_DIR}/i18n/ucol.cpp" +"${ICU_SOURCE_DIR}/i18n/ucol_res.cpp" +"${ICU_SOURCE_DIR}/i18n/ucol_sit.cpp" +"${ICU_SOURCE_DIR}/i18n/collation.cpp" +"${ICU_SOURCE_DIR}/i18n/collationsettings.cpp" +"${ICU_SOURCE_DIR}/i18n/collationdata.cpp" +"${ICU_SOURCE_DIR}/i18n/collationtailoring.cpp" +"${ICU_SOURCE_DIR}/i18n/collationdatareader.cpp" +"${ICU_SOURCE_DIR}/i18n/collationdatawriter.cpp" +"${ICU_SOURCE_DIR}/i18n/collationfcd.cpp" +"${ICU_SOURCE_DIR}/i18n/collationiterator.cpp" +"${ICU_SOURCE_DIR}/i18n/utf16collationiterator.cpp" +"${ICU_SOURCE_DIR}/i18n/utf8collationiterator.cpp" +"${ICU_SOURCE_DIR}/i18n/uitercollationiterator.cpp" +"${ICU_SOURCE_DIR}/i18n/collationsets.cpp" +"${ICU_SOURCE_DIR}/i18n/collationcompare.cpp" +"${ICU_SOURCE_DIR}/i18n/collationfastlatin.cpp" +"${ICU_SOURCE_DIR}/i18n/collationkeys.cpp" +"${ICU_SOURCE_DIR}/i18n/rulebasedcollator.cpp" +"${ICU_SOURCE_DIR}/i18n/collationroot.cpp" +"${ICU_SOURCE_DIR}/i18n/collationrootelements.cpp" +"${ICU_SOURCE_DIR}/i18n/collationdatabuilder.cpp" +"${ICU_SOURCE_DIR}/i18n/collationweights.cpp" +"${ICU_SOURCE_DIR}/i18n/collationruleparser.cpp" +"${ICU_SOURCE_DIR}/i18n/collationbuilder.cpp" +"${ICU_SOURCE_DIR}/i18n/collationfastlatinbuilder.cpp" +"${ICU_SOURCE_DIR}/i18n/listformatter.cpp" +"${ICU_SOURCE_DIR}/i18n/ulistformatter.cpp" +"${ICU_SOURCE_DIR}/i18n/strmatch.cpp" +"${ICU_SOURCE_DIR}/i18n/usearch.cpp" +"${ICU_SOURCE_DIR}/i18n/search.cpp" +"${ICU_SOURCE_DIR}/i18n/stsearch.cpp" +"${ICU_SOURCE_DIR}/i18n/translit.cpp" +"${ICU_SOURCE_DIR}/i18n/utrans.cpp" +"${ICU_SOURCE_DIR}/i18n/esctrn.cpp" +"${ICU_SOURCE_DIR}/i18n/unesctrn.cpp" +"${ICU_SOURCE_DIR}/i18n/funcrepl.cpp" +"${ICU_SOURCE_DIR}/i18n/strrepl.cpp" +"${ICU_SOURCE_DIR}/i18n/tridpars.cpp" +"${ICU_SOURCE_DIR}/i18n/cpdtrans.cpp" +"${ICU_SOURCE_DIR}/i18n/rbt.cpp" +"${ICU_SOURCE_DIR}/i18n/rbt_data.cpp" +"${ICU_SOURCE_DIR}/i18n/rbt_pars.cpp" +"${ICU_SOURCE_DIR}/i18n/rbt_rule.cpp" +"${ICU_SOURCE_DIR}/i18n/rbt_set.cpp" +"${ICU_SOURCE_DIR}/i18n/nultrans.cpp" +"${ICU_SOURCE_DIR}/i18n/remtrans.cpp" +"${ICU_SOURCE_DIR}/i18n/casetrn.cpp" +"${ICU_SOURCE_DIR}/i18n/titletrn.cpp" +"${ICU_SOURCE_DIR}/i18n/tolowtrn.cpp" +"${ICU_SOURCE_DIR}/i18n/toupptrn.cpp" +"${ICU_SOURCE_DIR}/i18n/anytrans.cpp" +"${ICU_SOURCE_DIR}/i18n/name2uni.cpp" +"${ICU_SOURCE_DIR}/i18n/uni2name.cpp" +"${ICU_SOURCE_DIR}/i18n/nortrans.cpp" +"${ICU_SOURCE_DIR}/i18n/quant.cpp" +"${ICU_SOURCE_DIR}/i18n/transreg.cpp" +"${ICU_SOURCE_DIR}/i18n/brktrans.cpp" +"${ICU_SOURCE_DIR}/i18n/regexcmp.cpp" +"${ICU_SOURCE_DIR}/i18n/rematch.cpp" +"${ICU_SOURCE_DIR}/i18n/repattrn.cpp" +"${ICU_SOURCE_DIR}/i18n/regexst.cpp" +"${ICU_SOURCE_DIR}/i18n/regextxt.cpp" +"${ICU_SOURCE_DIR}/i18n/regeximp.cpp" +"${ICU_SOURCE_DIR}/i18n/uregex.cpp" +"${ICU_SOURCE_DIR}/i18n/uregexc.cpp" +"${ICU_SOURCE_DIR}/i18n/ulocdata.cpp" +"${ICU_SOURCE_DIR}/i18n/measfmt.cpp" +"${ICU_SOURCE_DIR}/i18n/currfmt.cpp" +"${ICU_SOURCE_DIR}/i18n/curramt.cpp" +"${ICU_SOURCE_DIR}/i18n/currunit.cpp" +"${ICU_SOURCE_DIR}/i18n/measure.cpp" +"${ICU_SOURCE_DIR}/i18n/utmscale.cpp" +"${ICU_SOURCE_DIR}/i18n/csdetect.cpp" +"${ICU_SOURCE_DIR}/i18n/csmatch.cpp" +"${ICU_SOURCE_DIR}/i18n/csr2022.cpp" +"${ICU_SOURCE_DIR}/i18n/csrecog.cpp" +"${ICU_SOURCE_DIR}/i18n/csrmbcs.cpp" +"${ICU_SOURCE_DIR}/i18n/csrsbcs.cpp" +"${ICU_SOURCE_DIR}/i18n/csrucode.cpp" +"${ICU_SOURCE_DIR}/i18n/csrutf8.cpp" +"${ICU_SOURCE_DIR}/i18n/inputext.cpp" +"${ICU_SOURCE_DIR}/i18n/wintzimpl.cpp" +"${ICU_SOURCE_DIR}/i18n/windtfmt.cpp" 
+"${ICU_SOURCE_DIR}/i18n/winnmfmt.cpp" +"${ICU_SOURCE_DIR}/i18n/basictz.cpp" +"${ICU_SOURCE_DIR}/i18n/dtrule.cpp" +"${ICU_SOURCE_DIR}/i18n/rbtz.cpp" +"${ICU_SOURCE_DIR}/i18n/tzrule.cpp" +"${ICU_SOURCE_DIR}/i18n/tztrans.cpp" +"${ICU_SOURCE_DIR}/i18n/vtzone.cpp" +"${ICU_SOURCE_DIR}/i18n/zonemeta.cpp" +"${ICU_SOURCE_DIR}/i18n/standardplural.cpp" +"${ICU_SOURCE_DIR}/i18n/upluralrules.cpp" +"${ICU_SOURCE_DIR}/i18n/plurrule.cpp" +"${ICU_SOURCE_DIR}/i18n/plurfmt.cpp" +"${ICU_SOURCE_DIR}/i18n/selfmt.cpp" +"${ICU_SOURCE_DIR}/i18n/dtitvfmt.cpp" +"${ICU_SOURCE_DIR}/i18n/dtitvinf.cpp" +"${ICU_SOURCE_DIR}/i18n/udateintervalformat.cpp" +"${ICU_SOURCE_DIR}/i18n/tmunit.cpp" +"${ICU_SOURCE_DIR}/i18n/tmutamt.cpp" +"${ICU_SOURCE_DIR}/i18n/tmutfmt.cpp" +"${ICU_SOURCE_DIR}/i18n/currpinf.cpp" +"${ICU_SOURCE_DIR}/i18n/uspoof.cpp" +"${ICU_SOURCE_DIR}/i18n/uspoof_impl.cpp" +"${ICU_SOURCE_DIR}/i18n/uspoof_build.cpp" +"${ICU_SOURCE_DIR}/i18n/uspoof_conf.cpp" +"${ICU_SOURCE_DIR}/i18n/smpdtfst.cpp" +"${ICU_SOURCE_DIR}/i18n/ztrans.cpp" +"${ICU_SOURCE_DIR}/i18n/zrule.cpp" +"${ICU_SOURCE_DIR}/i18n/vzone.cpp" +"${ICU_SOURCE_DIR}/i18n/fphdlimp.cpp" +"${ICU_SOURCE_DIR}/i18n/fpositer.cpp" +"${ICU_SOURCE_DIR}/i18n/ufieldpositer.cpp" +"${ICU_SOURCE_DIR}/i18n/decNumber.cpp" +"${ICU_SOURCE_DIR}/i18n/decContext.cpp" +"${ICU_SOURCE_DIR}/i18n/alphaindex.cpp" +"${ICU_SOURCE_DIR}/i18n/tznames.cpp" +"${ICU_SOURCE_DIR}/i18n/tznames_impl.cpp" +"${ICU_SOURCE_DIR}/i18n/tzgnames.cpp" +"${ICU_SOURCE_DIR}/i18n/tzfmt.cpp" +"${ICU_SOURCE_DIR}/i18n/compactdecimalformat.cpp" +"${ICU_SOURCE_DIR}/i18n/gender.cpp" +"${ICU_SOURCE_DIR}/i18n/region.cpp" +"${ICU_SOURCE_DIR}/i18n/scriptset.cpp" +"${ICU_SOURCE_DIR}/i18n/uregion.cpp" +"${ICU_SOURCE_DIR}/i18n/reldatefmt.cpp" +"${ICU_SOURCE_DIR}/i18n/quantityformatter.cpp" +"${ICU_SOURCE_DIR}/i18n/measunit.cpp" +"${ICU_SOURCE_DIR}/i18n/sharedbreakiterator.cpp" +"${ICU_SOURCE_DIR}/i18n/scientificnumberformatter.cpp" +"${ICU_SOURCE_DIR}/i18n/dayperiodrules.cpp" +"${ICU_SOURCE_DIR}/i18n/nounit.cpp" +"${ICU_SOURCE_DIR}/i18n/number_affixutils.cpp" +"${ICU_SOURCE_DIR}/i18n/number_compact.cpp" +"${ICU_SOURCE_DIR}/i18n/number_decimalquantity.cpp" +"${ICU_SOURCE_DIR}/i18n/number_decimfmtprops.cpp" +"${ICU_SOURCE_DIR}/i18n/number_fluent.cpp" +"${ICU_SOURCE_DIR}/i18n/number_formatimpl.cpp" +"${ICU_SOURCE_DIR}/i18n/number_grouping.cpp" +"${ICU_SOURCE_DIR}/i18n/number_integerwidth.cpp" +"${ICU_SOURCE_DIR}/i18n/number_longnames.cpp" +"${ICU_SOURCE_DIR}/i18n/number_modifiers.cpp" +"${ICU_SOURCE_DIR}/i18n/number_notation.cpp" +"${ICU_SOURCE_DIR}/i18n/number_output.cpp" +"${ICU_SOURCE_DIR}/i18n/number_padding.cpp" +"${ICU_SOURCE_DIR}/i18n/number_patternmodifier.cpp" +"${ICU_SOURCE_DIR}/i18n/number_patternstring.cpp" +"${ICU_SOURCE_DIR}/i18n/number_rounding.cpp" +"${ICU_SOURCE_DIR}/i18n/number_scientific.cpp" +"${ICU_SOURCE_DIR}/i18n/number_utils.cpp" +"${ICU_SOURCE_DIR}/i18n/number_asformat.cpp" +"${ICU_SOURCE_DIR}/i18n/number_mapper.cpp" +"${ICU_SOURCE_DIR}/i18n/number_multiplier.cpp" +"${ICU_SOURCE_DIR}/i18n/number_currencysymbols.cpp" +"${ICU_SOURCE_DIR}/i18n/number_skeletons.cpp" +"${ICU_SOURCE_DIR}/i18n/number_capi.cpp" +"${ICU_SOURCE_DIR}/i18n/double-conversion-string-to-double.cpp" +"${ICU_SOURCE_DIR}/i18n/double-conversion-double-to-string.cpp" +"${ICU_SOURCE_DIR}/i18n/double-conversion-bignum-dtoa.cpp" +"${ICU_SOURCE_DIR}/i18n/double-conversion-bignum.cpp" +"${ICU_SOURCE_DIR}/i18n/double-conversion-cached-powers.cpp" +"${ICU_SOURCE_DIR}/i18n/double-conversion-fast-dtoa.cpp" 
+"${ICU_SOURCE_DIR}/i18n/double-conversion-strtod.cpp" +"${ICU_SOURCE_DIR}/i18n/string_segment.cpp" +"${ICU_SOURCE_DIR}/i18n/numparse_parsednumber.cpp" +"${ICU_SOURCE_DIR}/i18n/numparse_impl.cpp" +"${ICU_SOURCE_DIR}/i18n/numparse_symbols.cpp" +"${ICU_SOURCE_DIR}/i18n/numparse_decimal.cpp" +"${ICU_SOURCE_DIR}/i18n/numparse_scientific.cpp" +"${ICU_SOURCE_DIR}/i18n/numparse_currency.cpp" +"${ICU_SOURCE_DIR}/i18n/numparse_affixes.cpp" +"${ICU_SOURCE_DIR}/i18n/numparse_compositions.cpp" +"${ICU_SOURCE_DIR}/i18n/numparse_validators.cpp" +"${ICU_SOURCE_DIR}/i18n/numrange_fluent.cpp" +"${ICU_SOURCE_DIR}/i18n/numrange_impl.cpp" +"${ICU_SOURCE_DIR}/i18n/erarules.cpp" +"${ICU_SOURCE_DIR}/i18n/formattedvalue.cpp" +"${ICU_SOURCE_DIR}/i18n/formattedval_iterimpl.cpp" +"${ICU_SOURCE_DIR}/i18n/formattedval_sbimpl.cpp" +"${ICU_SOURCE_DIR}/i18n/formatted_string_builder.cpp") -file(GENERATE OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/empty.cpp CONTENT " ") +file(GENERATE OUTPUT "${CMAKE_CURRENT_BINARY_DIR}/empty.cpp" CONTENT " ") enable_language(ASM) set(ICUDATA_SOURCES - ${ICUDATA_SOURCE_DIR}/icudt66l_dat.S - ${CMAKE_CURRENT_BINARY_DIR}/empty.cpp # Without this cmake can incorrectly detects library type (OBJECT) instead of SHARED/STATIC + "${ICUDATA_SOURCE_DIR}/icudt66l_dat.S" + "${CMAKE_CURRENT_BINARY_DIR}/empty.cpp" # Without this cmake can incorrectly detects library type (OBJECT) instead of SHARED/STATIC ) # Note that we don't like any kind of binary plugins (because of runtime dependencies, vulnerabilities, ABI incompatibilities). @@ -454,8 +454,8 @@ add_library(icudata ${ICUDATA_SOURCES}) target_link_libraries(icuuc PRIVATE icudata) target_link_libraries(icui18n PRIVATE icuuc) -target_include_directories(icuuc SYSTEM PUBLIC ${ICU_SOURCE_DIR}/common/) -target_include_directories(icui18n SYSTEM PUBLIC ${ICU_SOURCE_DIR}/i18n/) +target_include_directories(icuuc SYSTEM PUBLIC "${ICU_SOURCE_DIR}/common/") +target_include_directories(icui18n SYSTEM PUBLIC "${ICU_SOURCE_DIR}/i18n/") target_compile_definitions(icuuc PRIVATE -DU_COMMON_IMPLEMENTATION) target_compile_definitions(icui18n PRIVATE -DU_I18N_IMPLEMENTATION) diff --git a/contrib/jemalloc-cmake/CMakeLists.txt b/contrib/jemalloc-cmake/CMakeLists.txt index b8a6474413a..140b7eb370b 100644 --- a/contrib/jemalloc-cmake/CMakeLists.txt +++ b/contrib/jemalloc-cmake/CMakeLists.txt @@ -1,10 +1,13 @@ -if (SANITIZE OR NOT (ARCH_AMD64 OR ARCH_ARM) OR NOT (OS_LINUX OR OS_FREEBSD OR OS_DARWIN)) +if (SANITIZE OR NOT ( + ((OS_LINUX OR OS_FREEBSD) AND (ARCH_AMD64 OR ARCH_ARM OR ARCH_PPC64LE)) OR + (OS_DARWIN AND CMAKE_BUILD_TYPE STREQUAL "RelWithDebInfo") +)) if (ENABLE_JEMALLOC) message (${RECONFIGURE_MESSAGE_LEVEL} - "jemalloc is disabled implicitly: it doesn't work with sanitizers and can only be used with x86_64 or aarch64 on linux or freebsd.") - endif() + "jemalloc is disabled implicitly: it doesn't work with sanitizers and can only be used with x86_64, aarch64, or ppc64le Linux or FreeBSD builds and RelWithDebInfo macOS builds.") + endif () set (ENABLE_JEMALLOC OFF) -else() +else () option (ENABLE_JEMALLOC "Enable jemalloc allocator" ${ENABLE_LIBRARIES}) endif () @@ -34,9 +37,9 @@ if (OS_LINUX) # avoid spurious latencies and additional work associated with # MADV_DONTNEED. See # https://github.com/ClickHouse/ClickHouse/issues/11121 for motivation. 
- set (JEMALLOC_CONFIG_MALLOC_CONF "percpu_arena:percpu,oversize_threshold:0,muzzy_decay_ms:10000") + set (JEMALLOC_CONFIG_MALLOC_CONF "percpu_arena:percpu,oversize_threshold:0,muzzy_decay_ms:5000,dirty_decay_ms:5000") else() - set (JEMALLOC_CONFIG_MALLOC_CONF "oversize_threshold:0,muzzy_decay_ms:10000") + set (JEMALLOC_CONFIG_MALLOC_CONF "oversize_threshold:0,muzzy_decay_ms:5000,dirty_decay_ms:5000") endif() # CACHE variable is empty, to allow changing defaults without necessity # to purge cache @@ -49,46 +52,46 @@ message (STATUS "jemalloc malloc_conf: ${JEMALLOC_CONFIG_MALLOC_CONF}") set (LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/jemalloc") set (SRCS - ${LIBRARY_DIR}/src/arena.c - ${LIBRARY_DIR}/src/background_thread.c - ${LIBRARY_DIR}/src/base.c - ${LIBRARY_DIR}/src/bin.c - ${LIBRARY_DIR}/src/bitmap.c - ${LIBRARY_DIR}/src/ckh.c - ${LIBRARY_DIR}/src/ctl.c - ${LIBRARY_DIR}/src/div.c - ${LIBRARY_DIR}/src/extent.c - ${LIBRARY_DIR}/src/extent_dss.c - ${LIBRARY_DIR}/src/extent_mmap.c - ${LIBRARY_DIR}/src/hash.c - ${LIBRARY_DIR}/src/hook.c - ${LIBRARY_DIR}/src/jemalloc.c - ${LIBRARY_DIR}/src/large.c - ${LIBRARY_DIR}/src/log.c - ${LIBRARY_DIR}/src/malloc_io.c - ${LIBRARY_DIR}/src/mutex.c - ${LIBRARY_DIR}/src/mutex_pool.c - ${LIBRARY_DIR}/src/nstime.c - ${LIBRARY_DIR}/src/pages.c - ${LIBRARY_DIR}/src/prng.c - ${LIBRARY_DIR}/src/prof.c - ${LIBRARY_DIR}/src/rtree.c - ${LIBRARY_DIR}/src/sc.c - ${LIBRARY_DIR}/src/stats.c - ${LIBRARY_DIR}/src/sz.c - ${LIBRARY_DIR}/src/tcache.c - ${LIBRARY_DIR}/src/test_hooks.c - ${LIBRARY_DIR}/src/ticker.c - ${LIBRARY_DIR}/src/tsd.c - ${LIBRARY_DIR}/src/witness.c - ${LIBRARY_DIR}/src/safety_check.c + "${LIBRARY_DIR}/src/arena.c" + "${LIBRARY_DIR}/src/background_thread.c" + "${LIBRARY_DIR}/src/base.c" + "${LIBRARY_DIR}/src/bin.c" + "${LIBRARY_DIR}/src/bitmap.c" + "${LIBRARY_DIR}/src/ckh.c" + "${LIBRARY_DIR}/src/ctl.c" + "${LIBRARY_DIR}/src/div.c" + "${LIBRARY_DIR}/src/extent.c" + "${LIBRARY_DIR}/src/extent_dss.c" + "${LIBRARY_DIR}/src/extent_mmap.c" + "${LIBRARY_DIR}/src/hash.c" + "${LIBRARY_DIR}/src/hook.c" + "${LIBRARY_DIR}/src/jemalloc.c" + "${LIBRARY_DIR}/src/large.c" + "${LIBRARY_DIR}/src/log.c" + "${LIBRARY_DIR}/src/malloc_io.c" + "${LIBRARY_DIR}/src/mutex.c" + "${LIBRARY_DIR}/src/mutex_pool.c" + "${LIBRARY_DIR}/src/nstime.c" + "${LIBRARY_DIR}/src/pages.c" + "${LIBRARY_DIR}/src/prng.c" + "${LIBRARY_DIR}/src/prof.c" + "${LIBRARY_DIR}/src/rtree.c" + "${LIBRARY_DIR}/src/sc.c" + "${LIBRARY_DIR}/src/stats.c" + "${LIBRARY_DIR}/src/sz.c" + "${LIBRARY_DIR}/src/tcache.c" + "${LIBRARY_DIR}/src/test_hooks.c" + "${LIBRARY_DIR}/src/ticker.c" + "${LIBRARY_DIR}/src/tsd.c" + "${LIBRARY_DIR}/src/witness.c" + "${LIBRARY_DIR}/src/safety_check.c" ) if (OS_DARWIN) - list(APPEND SRCS ${LIBRARY_DIR}/src/zone.c) + list(APPEND SRCS "${LIBRARY_DIR}/src/zone.c") endif () add_library(jemalloc ${SRCS}) -target_include_directories(jemalloc PRIVATE ${LIBRARY_DIR}/include) +target_include_directories(jemalloc PRIVATE "${LIBRARY_DIR}/include") target_include_directories(jemalloc SYSTEM PUBLIC include) set (JEMALLOC_INCLUDE_PREFIX) @@ -107,6 +110,8 @@ if (ARCH_AMD64) set(JEMALLOC_INCLUDE_PREFIX "${JEMALLOC_INCLUDE_PREFIX}_x86_64") elseif (ARCH_ARM) set(JEMALLOC_INCLUDE_PREFIX "${JEMALLOC_INCLUDE_PREFIX}_aarch64") +elseif (ARCH_PPC64LE) + set(JEMALLOC_INCLUDE_PREFIX "${JEMALLOC_INCLUDE_PREFIX}_ppc64le") else () message (FATAL_ERROR "internal jemalloc: This arch is not supported") endif () @@ -114,17 +119,19 @@ endif () 
configure_file(${JEMALLOC_INCLUDE_PREFIX}/jemalloc/internal/jemalloc_internal_defs.h.in ${JEMALLOC_INCLUDE_PREFIX}/jemalloc/internal/jemalloc_internal_defs.h) target_include_directories(jemalloc SYSTEM PRIVATE - ${CMAKE_CURRENT_BINARY_DIR}/${JEMALLOC_INCLUDE_PREFIX}/jemalloc/internal) + "${CMAKE_CURRENT_BINARY_DIR}/${JEMALLOC_INCLUDE_PREFIX}/jemalloc/internal") target_compile_definitions(jemalloc PRIVATE -DJEMALLOC_NO_PRIVATE_NAMESPACE) if (CMAKE_BUILD_TYPE_UC STREQUAL "DEBUG") - target_compile_definitions(jemalloc PRIVATE -DJEMALLOC_DEBUG=1 -DJEMALLOC_PROF=1) + target_compile_definitions(jemalloc PRIVATE -DJEMALLOC_DEBUG=1) +endif () - if (USE_UNWIND) - target_compile_definitions (jemalloc PRIVATE -DJEMALLOC_PROF_LIBUNWIND=1) - target_link_libraries (jemalloc PRIVATE unwind) - endif () +target_compile_definitions(jemalloc PRIVATE -DJEMALLOC_PROF=1) + +if (USE_UNWIND) + target_compile_definitions (jemalloc PRIVATE -DJEMALLOC_PROF_LIBUNWIND=1) + target_link_libraries (jemalloc PRIVATE unwind) endif () target_compile_options(jemalloc PRIVATE -Wno-redundant-decls) diff --git a/contrib/jemalloc-cmake/include_darwin_aarch64/jemalloc/internal/jemalloc_internal_defs.h.in b/contrib/jemalloc-cmake/include_darwin_aarch64/jemalloc/internal/jemalloc_internal_defs.h.in index c7c884d0eaa..5c0407db24a 100644 --- a/contrib/jemalloc-cmake/include_darwin_aarch64/jemalloc/internal/jemalloc_internal_defs.h.in +++ b/contrib/jemalloc-cmake/include_darwin_aarch64/jemalloc/internal/jemalloc_internal_defs.h.in @@ -42,7 +42,7 @@ * total number of bits in a pointer, e.g. on x64, for which the uppermost 16 * bits are the same as bit 47. */ -#define LG_VADDR 48 +#define LG_VADDR 64 /* Defined if C11 atomics are available. */ #define JEMALLOC_C11_ATOMICS 1 @@ -101,11 +101,6 @@ */ #define JEMALLOC_HAVE_MACH_ABSOLUTE_TIME 1 -/* - * Defined if clock_gettime(CLOCK_REALTIME, ...) is available. - */ -#define JEMALLOC_HAVE_CLOCK_REALTIME 1 - /* * Defined if _malloc_thread_cleanup() exists. At least in the case of * FreeBSD, pthread_key_create() allocates, which if used during malloc @@ -181,14 +176,14 @@ /* #undef LG_QUANTUM */ /* One page is 2^LG_PAGE bytes. */ -#define LG_PAGE 16 +#define LG_PAGE 14 /* * One huge page is 2^LG_HUGEPAGE bytes. Note that this is defined even if the * system does not explicitly support huge pages; system calls that require * explicit huge page support are separately configured. */ -#define LG_HUGEPAGE 29 +#define LG_HUGEPAGE 21 /* * If defined, adjacent virtual memory mappings with identical attributes @@ -356,7 +351,7 @@ /* #undef JEMALLOC_EXPORT */ /* config.malloc_conf options string. */ -#define JEMALLOC_CONFIG_MALLOC_CONF "@JEMALLOC_CONFIG_MALLOC_CONF@" +#define JEMALLOC_CONFIG_MALLOC_CONF "" /* If defined, jemalloc takes the malloc/free/etc. symbol names. */ /* #undef JEMALLOC_IS_MALLOC */ diff --git a/contrib/jemalloc-cmake/include_linux_ppc64le/jemalloc/internal/jemalloc_internal_defs.h.in b/contrib/jemalloc-cmake/include_linux_ppc64le/jemalloc/internal/jemalloc_internal_defs.h.in new file mode 100644 index 00000000000..8068861041f --- /dev/null +++ b/contrib/jemalloc-cmake/include_linux_ppc64le/jemalloc/internal/jemalloc_internal_defs.h.in @@ -0,0 +1,367 @@ +/* include/jemalloc/internal/jemalloc_internal_defs.h. Generated from jemalloc_internal_defs.h.in by configure. */ +#ifndef JEMALLOC_INTERNAL_DEFS_H_ +#define JEMALLOC_INTERNAL_DEFS_H_ +/* + * If JEMALLOC_PREFIX is defined via --with-jemalloc-prefix, it will cause all + * public APIs to be prefixed. 
This makes it possible, with some care, to use + * multiple allocators simultaneously. + */ +/* #undef JEMALLOC_PREFIX */ +/* #undef JEMALLOC_CPREFIX */ + +/* + * Define overrides for non-standard allocator-related functions if they are + * present on the system. + */ +#define JEMALLOC_OVERRIDE___LIBC_CALLOC +#define JEMALLOC_OVERRIDE___LIBC_FREE +#define JEMALLOC_OVERRIDE___LIBC_MALLOC +#define JEMALLOC_OVERRIDE___LIBC_MEMALIGN +#define JEMALLOC_OVERRIDE___LIBC_REALLOC +#define JEMALLOC_OVERRIDE___LIBC_VALLOC +/* #undef JEMALLOC_OVERRIDE___POSIX_MEMALIGN */ + +/* + * JEMALLOC_PRIVATE_NAMESPACE is used as a prefix for all library-private APIs. + * For shared libraries, symbol visibility mechanisms prevent these symbols + * from being exported, but for static libraries, naming collisions are a real + * possibility. + */ +#define JEMALLOC_PRIVATE_NAMESPACE je_ + +/* + * Hyper-threaded CPUs may need a special instruction inside spin loops in + * order to yield to another virtual CPU. + */ +#define CPU_SPINWAIT +/* 1 if CPU_SPINWAIT is defined, 0 otherwise. */ +#define HAVE_CPU_SPINWAIT 0 + +/* + * Number of significant bits in virtual addresses. This may be less than the + * total number of bits in a pointer, e.g. on x64, for which the uppermost 16 + * bits are the same as bit 47. + */ +#define LG_VADDR 64 + +/* Defined if C11 atomics are available. */ +#define JEMALLOC_C11_ATOMICS 1 + +/* Defined if GCC __atomic atomics are available. */ +#define JEMALLOC_GCC_ATOMIC_ATOMICS 1 +/* and the 8-bit variant support. */ +#define JEMALLOC_GCC_U8_ATOMIC_ATOMICS 1 + +/* Defined if GCC __sync atomics are available. */ +#define JEMALLOC_GCC_SYNC_ATOMICS 1 +/* and the 8-bit variant support. */ +#define JEMALLOC_GCC_U8_SYNC_ATOMICS 1 + +/* + * Defined if __builtin_clz() and __builtin_clzl() are available. + */ +#define JEMALLOC_HAVE_BUILTIN_CLZ + +/* + * Defined if os_unfair_lock_*() functions are available, as provided by Darwin. + */ +/* #undef JEMALLOC_OS_UNFAIR_LOCK */ + +/* Defined if syscall(2) is usable. */ +#define JEMALLOC_USE_SYSCALL + +/* + * Defined if secure_getenv(3) is available. + */ +// #define JEMALLOC_HAVE_SECURE_GETENV + +/* + * Defined if issetugid(2) is available. + */ +/* #undef JEMALLOC_HAVE_ISSETUGID */ + +/* Defined if pthread_atfork(3) is available. */ +#define JEMALLOC_HAVE_PTHREAD_ATFORK + +/* Defined if pthread_setname_np(3) is available. */ +#define JEMALLOC_HAVE_PTHREAD_SETNAME_NP + +/* + * Defined if clock_gettime(CLOCK_MONOTONIC_COARSE, ...) is available. + */ +#define JEMALLOC_HAVE_CLOCK_MONOTONIC_COARSE 1 + +/* + * Defined if clock_gettime(CLOCK_MONOTONIC, ...) is available. + */ +#define JEMALLOC_HAVE_CLOCK_MONOTONIC 1 + +/* + * Defined if mach_absolute_time() is available. + */ +/* #undef JEMALLOC_HAVE_MACH_ABSOLUTE_TIME */ + +/* + * Defined if _malloc_thread_cleanup() exists. At least in the case of + * FreeBSD, pthread_key_create() allocates, which if used during malloc + * bootstrapping will cause recursion into the pthreads library. Therefore, if + * _malloc_thread_cleanup() exists, use it as the basis for thread cleanup in + * malloc_tsd. + */ +/* #undef JEMALLOC_MALLOC_THREAD_CLEANUP */ + +/* + * Defined if threaded initialization is known to be safe on this platform. + * Among other things, it must be possible to initialize a mutex without + * triggering allocation in order for threaded allocation to be safe. 
+ */ +#define JEMALLOC_THREADED_INIT + +/* + * Defined if the pthreads implementation defines + * _pthread_mutex_init_calloc_cb(), in which case the function is used in order + * to avoid recursive allocation during mutex initialization. + */ +/* #undef JEMALLOC_MUTEX_INIT_CB */ + +/* Non-empty if the tls_model attribute is supported. */ +#define JEMALLOC_TLS_MODEL __attribute__((tls_model("initial-exec"))) + +/* + * JEMALLOC_DEBUG enables assertions and other sanity checks, and disables + * inline functions. + */ +/* #undef JEMALLOC_DEBUG */ + +/* JEMALLOC_STATS enables statistics calculation. */ +#define JEMALLOC_STATS + +/* JEMALLOC_EXPERIMENTAL_SMALLOCX_API enables experimental smallocx API. */ +/* #undef JEMALLOC_EXPERIMENTAL_SMALLOCX_API */ + +/* JEMALLOC_PROF enables allocation profiling. */ +/* #undef JEMALLOC_PROF */ + +/* Use libunwind for profile backtracing if defined. */ +/* #undef JEMALLOC_PROF_LIBUNWIND */ + +/* Use libgcc for profile backtracing if defined. */ +/* #undef JEMALLOC_PROF_LIBGCC */ + +/* Use gcc intrinsics for profile backtracing if defined. */ +/* #undef JEMALLOC_PROF_GCC */ + +/* + * JEMALLOC_DSS enables use of sbrk(2) to allocate extents from the data storage + * segment (DSS). + */ +#define JEMALLOC_DSS + +/* Support memory filling (junk/zero). */ +#define JEMALLOC_FILL + +/* Support utrace(2)-based tracing. */ +/* #undef JEMALLOC_UTRACE */ + +/* Support optional abort() on OOM. */ +/* #undef JEMALLOC_XMALLOC */ + +/* Support lazy locking (avoid locking unless a second thread is launched). */ +/* #undef JEMALLOC_LAZY_LOCK */ + +/* + * Minimum allocation alignment is 2^LG_QUANTUM bytes (ignoring tiny size + * classes). + */ +/* #undef LG_QUANTUM */ + +/* One page is 2^LG_PAGE bytes. */ +#define LG_PAGE 16 + +/* + * One huge page is 2^LG_HUGEPAGE bytes. Note that this is defined even if the + * system does not explicitly support huge pages; system calls that require + * explicit huge page support are separately configured. + */ +#define LG_HUGEPAGE 21 + +/* + * If defined, adjacent virtual memory mappings with identical attributes + * automatically coalesce, and they fragment when changes are made to subranges. + * This is the normal order of things for mmap()/munmap(), but on Windows + * VirtualAlloc()/VirtualFree() operations must be precisely matched, i.e. + * mappings do *not* coalesce/fragment. + */ +#define JEMALLOC_MAPS_COALESCE + +/* + * If defined, retain memory for later reuse by default rather than using e.g. + * munmap() to unmap freed extents. This is enabled on 64-bit Linux because + * common sequences of mmap()/munmap() calls will cause virtual memory map + * holes. + */ +#define JEMALLOC_RETAIN + +/* TLS is used to map arenas and magazine caches to threads. */ +#define JEMALLOC_TLS + +/* + * Used to mark unreachable code to quiet "end of non-void" compiler warnings. + * Don't use this directly; instead use unreachable() from util.h + */ +#define JEMALLOC_INTERNAL_UNREACHABLE __builtin_unreachable + +/* + * ffs*() functions to use for bitmapping. Don't use these directly; instead, + * use ffs_*() from util.h. + */ +#define JEMALLOC_INTERNAL_FFSLL __builtin_ffsll +#define JEMALLOC_INTERNAL_FFSL __builtin_ffsl +#define JEMALLOC_INTERNAL_FFS __builtin_ffs + +/* + * popcount*() functions to use for bitmapping. 
+ */ +#define JEMALLOC_INTERNAL_POPCOUNTL __builtin_popcountl +#define JEMALLOC_INTERNAL_POPCOUNT __builtin_popcount + +/* + * If defined, explicitly attempt to more uniformly distribute large allocation + * pointer alignments across all cache indices. + */ +#define JEMALLOC_CACHE_OBLIVIOUS + +/* + * If defined, enable logging facilities. We make this a configure option to + * avoid taking extra branches everywhere. + */ +/* #undef JEMALLOC_LOG */ + +/* + * If defined, use readlinkat() (instead of readlink()) to follow + * /etc/malloc_conf. + */ +/* #undef JEMALLOC_READLINKAT */ + +/* + * Darwin (OS X) uses zones to work around Mach-O symbol override shortcomings. + */ +/* #undef JEMALLOC_ZONE */ + +/* + * Methods for determining whether the OS overcommits. + * JEMALLOC_PROC_SYS_VM_OVERCOMMIT_MEMORY: Linux's + * /proc/sys/vm.overcommit_memory file. + * JEMALLOC_SYSCTL_VM_OVERCOMMIT: FreeBSD's vm.overcommit sysctl. + */ +/* #undef JEMALLOC_SYSCTL_VM_OVERCOMMIT */ +#define JEMALLOC_PROC_SYS_VM_OVERCOMMIT_MEMORY + +/* Defined if madvise(2) is available. */ +#define JEMALLOC_HAVE_MADVISE + +/* + * Defined if transparent huge pages are supported via the MADV_[NO]HUGEPAGE + * arguments to madvise(2). + */ +#define JEMALLOC_HAVE_MADVISE_HUGE + +/* + * Methods for purging unused pages differ between operating systems. + * + * madvise(..., MADV_FREE) : This marks pages as being unused, such that they + * will be discarded rather than swapped out. + * madvise(..., MADV_DONTNEED) : If JEMALLOC_PURGE_MADVISE_DONTNEED_ZEROS is + * defined, this immediately discards pages, + * such that new pages will be demand-zeroed if + * the address region is later touched; + * otherwise this behaves similarly to + * MADV_FREE, though typically with higher + * system overhead. + */ +#define JEMALLOC_PURGE_MADVISE_FREE +#define JEMALLOC_PURGE_MADVISE_DONTNEED +#define JEMALLOC_PURGE_MADVISE_DONTNEED_ZEROS + +/* Defined if madvise(2) is available but MADV_FREE is not (x86 Linux only). */ +/* #undef JEMALLOC_DEFINE_MADVISE_FREE */ + +/* + * Defined if MADV_DO[NT]DUMP is supported as an argument to madvise. + */ +#define JEMALLOC_MADVISE_DONTDUMP + +/* + * Defined if transparent huge pages (THPs) are supported via the + * MADV_[NO]HUGEPAGE arguments to madvise(2), and THP support is enabled. + */ +/* #undef JEMALLOC_THP */ + +/* Define if operating system has alloca.h header. */ +#define JEMALLOC_HAS_ALLOCA_H 1 + +/* C99 restrict keyword supported. */ +#define JEMALLOC_HAS_RESTRICT 1 + +/* For use by hash code. */ +/* #undef JEMALLOC_BIG_ENDIAN */ + +/* sizeof(int) == 2^LG_SIZEOF_INT. */ +#define LG_SIZEOF_INT 2 + +/* sizeof(long) == 2^LG_SIZEOF_LONG. */ +#define LG_SIZEOF_LONG 3 + +/* sizeof(long long) == 2^LG_SIZEOF_LONG_LONG. */ +#define LG_SIZEOF_LONG_LONG 3 + +/* sizeof(intmax_t) == 2^LG_SIZEOF_INTMAX_T. */ +#define LG_SIZEOF_INTMAX_T 3 + +/* glibc malloc hooks (__malloc_hook, __realloc_hook, __free_hook). */ +#define JEMALLOC_GLIBC_MALLOC_HOOK + +/* glibc memalign hook. */ +#define JEMALLOC_GLIBC_MEMALIGN_HOOK + +/* pthread support */ +#define JEMALLOC_HAVE_PTHREAD + +/* dlsym() support */ +#define JEMALLOC_HAVE_DLSYM + +/* Adaptive mutex support in pthreads. */ +#define JEMALLOC_HAVE_PTHREAD_MUTEX_ADAPTIVE_NP + +/* GNU specific sched_getcpu support */ +#define JEMALLOC_HAVE_SCHED_GETCPU + +/* GNU specific sched_setaffinity support */ +#define JEMALLOC_HAVE_SCHED_SETAFFINITY + +/* + * If defined, all the features necessary for background threads are present. 
+ */ +#define JEMALLOC_BACKGROUND_THREAD 1 + +/* + * If defined, jemalloc symbols are not exported (doesn't work when + * JEMALLOC_PREFIX is not defined). + */ +/* #undef JEMALLOC_EXPORT */ + +/* config.malloc_conf options string. */ +#define JEMALLOC_CONFIG_MALLOC_CONF "@JEMALLOC_CONFIG_MALLOC_CONF@" + +/* If defined, jemalloc takes the malloc/free/etc. symbol names. */ +#define JEMALLOC_IS_MALLOC 1 + +/* + * Defined if strerror_r returns char * if _GNU_SOURCE is defined. + */ +#define JEMALLOC_STRERROR_R_RETURNS_CHAR_WITH_GNU_SOURCE + +/* Performs additional safety checks when defined. */ +/* #undef JEMALLOC_OPT_SAFETY_CHECKS */ + +#endif /* JEMALLOC_INTERNAL_DEFS_H_ */ diff --git a/contrib/krb5-cmake/CMakeLists.txt b/contrib/krb5-cmake/CMakeLists.txt index fce7fbc582a..7c750ca12b6 100644 --- a/contrib/krb5-cmake/CMakeLists.txt +++ b/contrib/krb5-cmake/CMakeLists.txt @@ -3,465 +3,465 @@ if(NOT AWK_PROGRAM) message(FATAL_ERROR "You need the awk program to build ClickHouse with krb5 enabled.") endif() -set(KRB5_SOURCE_DIR ${ClickHouse_SOURCE_DIR}/contrib/krb5/src) +set(KRB5_SOURCE_DIR "${ClickHouse_SOURCE_DIR}/contrib/krb5/src") set(ALL_SRCS - ${KRB5_SOURCE_DIR}/util/et/et_name.c - ${KRB5_SOURCE_DIR}/util/et/com_err.c - ${KRB5_SOURCE_DIR}/util/et/error_message.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_inq_names.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_rel_name.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_unwrap_aead.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_set_name_attr.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_glue.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_imp_cred.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/gssd_pname_to_uid.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_authorize_localname.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_prf.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_acquire_cred_with_pw.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_set_cred_option.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_map_name_to_any.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_inq_cred.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_rel_cred.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_seal.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_delete_sec_context.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_context_time.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_get_name_attr.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_mech_invoke.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_unwrap_iov.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_exp_sec_context.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_init_sec_context.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_accept_sec_context.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_verify.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_sign.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_mechname.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_mechattr.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_complete_auth_token.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_wrap_aead.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_inq_cred_oid.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_rel_buffer.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_initialize.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_export_name_comp.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_set_context_option.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_acquire_cred.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_acquire_cred_imp_name.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_imp_name.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_inq_name.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_set_neg_mechs.c - 
${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_inq_context.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_export_cred.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_oid_ops.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_inq_context_oid.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_del_name_attr.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_decapsulate_token.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_compare_name.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_rel_name_mapping.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_imp_sec_context.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_dup_name.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_export_name.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_wrap_iov.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_rel_oid_set.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_unseal.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_store_cred.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_buffer_set.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_canon_name.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_dsp_status.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_dsp_name.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_dsp_name_ext.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_saslname.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_process_context.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_encapsulate_token.c - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_negoex.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/delete_sec_context.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/lucid_context.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/duplicate_name.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/get_tkt_flags.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/set_allowable_enctypes.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/k5sealiov.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/gssapi_err_krb5.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/canon_name.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/inq_cred.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/export_sec_context.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/inq_names.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/prf.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/k5sealv3iov.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/store_cred.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/import_name.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/export_name.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/naming_exts.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/s4u_gss_glue.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/rel_name.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/k5unsealiov.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/gssapi_krb5.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/disp_status.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/import_cred.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/k5seal.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/accept_sec_context.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/import_sec_context.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/process_context_token.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/disp_name.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/wrap_size_limit.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/krb5_gss_glue.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/util_crypt.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/set_ccache.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/export_cred.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/rel_oid.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/val_cred.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/context_time.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/cred_store.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/iakerb.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/copy_ccache.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/init_sec_context.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/indicate_mechs.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/inq_context.c - 
${KRB5_SOURCE_DIR}/lib/gssapi/krb5/util_seed.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/util_seqnum.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/compare_name.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/ser_sctx.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/k5sealv3.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/acquire_cred.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/k5unseal.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/rel_cred.c - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/util_cksum.c - ${KRB5_SOURCE_DIR}/lib/gssapi/generic/disp_com_err_status.c - ${KRB5_SOURCE_DIR}/lib/gssapi/generic/gssapi_generic.c - ${KRB5_SOURCE_DIR}/lib/gssapi/generic/rel_oid_set.c - ${KRB5_SOURCE_DIR}/lib/gssapi/generic/oid_ops.c - ${KRB5_SOURCE_DIR}/lib/gssapi/generic/util_buffer.c - ${KRB5_SOURCE_DIR}/lib/gssapi/generic/util_buffer_set.c - ${KRB5_SOURCE_DIR}/lib/gssapi/generic/util_set.c - ${KRB5_SOURCE_DIR}/lib/gssapi/generic/util_token.c - ${KRB5_SOURCE_DIR}/lib/gssapi/generic/gssapi_err_generic.c - ${KRB5_SOURCE_DIR}/lib/gssapi/generic/disp_major_status.c - ${KRB5_SOURCE_DIR}/lib/gssapi/generic/util_seqstate.c - ${KRB5_SOURCE_DIR}/lib/gssapi/generic/util_errmap.c - ${KRB5_SOURCE_DIR}/lib/gssapi/generic/rel_buffer.c + "${KRB5_SOURCE_DIR}/util/et/et_name.c" + "${KRB5_SOURCE_DIR}/util/et/com_err.c" + "${KRB5_SOURCE_DIR}/util/et/error_message.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_inq_names.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_rel_name.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_unwrap_aead.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_set_name_attr.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_glue.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_imp_cred.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/gssd_pname_to_uid.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_authorize_localname.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_prf.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_acquire_cred_with_pw.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_set_cred_option.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_map_name_to_any.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_inq_cred.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_rel_cred.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_seal.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_delete_sec_context.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_context_time.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_get_name_attr.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_mech_invoke.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_unwrap_iov.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_exp_sec_context.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_init_sec_context.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_accept_sec_context.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_verify.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_sign.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_mechname.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_mechattr.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_complete_auth_token.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_wrap_aead.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_inq_cred_oid.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_rel_buffer.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_initialize.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_export_name_comp.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_set_context_option.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_acquire_cred.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_acquire_cred_imp_name.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_imp_name.c" + 
"${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_inq_name.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_set_neg_mechs.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_inq_context.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_export_cred.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_oid_ops.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_inq_context_oid.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_del_name_attr.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_decapsulate_token.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_compare_name.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_rel_name_mapping.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_imp_sec_context.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_dup_name.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_export_name.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_wrap_iov.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_rel_oid_set.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_unseal.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_store_cred.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_buffer_set.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_canon_name.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_dsp_status.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_dsp_name.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_dsp_name_ext.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_saslname.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_process_context.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_encapsulate_token.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue/g_negoex.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/delete_sec_context.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/lucid_context.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/duplicate_name.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/get_tkt_flags.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/set_allowable_enctypes.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/k5sealiov.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/gssapi_err_krb5.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/canon_name.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/inq_cred.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/export_sec_context.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/inq_names.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/prf.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/k5sealv3iov.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/store_cred.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/import_name.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/export_name.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/naming_exts.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/s4u_gss_glue.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/rel_name.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/k5unsealiov.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/gssapi_krb5.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/disp_status.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/import_cred.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/k5seal.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/accept_sec_context.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/import_sec_context.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/process_context_token.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/disp_name.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/wrap_size_limit.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/krb5_gss_glue.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/util_crypt.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/set_ccache.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/export_cred.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/rel_oid.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/val_cred.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/context_time.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/cred_store.c" + 
"${KRB5_SOURCE_DIR}/lib/gssapi/krb5/iakerb.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/copy_ccache.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/init_sec_context.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/indicate_mechs.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/inq_context.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/util_seed.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/util_seqnum.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/compare_name.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/ser_sctx.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/k5sealv3.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/acquire_cred.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/k5unseal.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/rel_cred.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/util_cksum.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/generic/disp_com_err_status.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/generic/gssapi_generic.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/generic/rel_oid_set.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/generic/oid_ops.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/generic/util_buffer.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/generic/util_buffer_set.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/generic/util_set.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/generic/util_token.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/generic/gssapi_err_generic.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/generic/disp_major_status.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/generic/util_seqstate.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/generic/util_errmap.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/generic/rel_buffer.c" - ${KRB5_SOURCE_DIR}/lib/gssapi/spnego/spnego_mech.c - ${KRB5_SOURCE_DIR}/lib/gssapi/spnego/negoex_util.c - ${KRB5_SOURCE_DIR}/lib/gssapi/spnego/negoex_ctx.c + "${KRB5_SOURCE_DIR}/lib/gssapi/spnego/spnego_mech.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/spnego/negoex_util.c" + "${KRB5_SOURCE_DIR}/lib/gssapi/spnego/negoex_ctx.c" - # ${KRB5_SOURCE_DIR}/lib/gssapi/spnego/negoex_trace.c + # "${KRB5_SOURCE_DIR}/lib/gssapi/spnego/negoex_trace.c" - ${KRB5_SOURCE_DIR}/lib/crypto/krb/prng.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/enc_dk_cmac.c - # ${KRB5_SOURCE_DIR}/lib/crypto/krb/crc32.c - # ${KRB5_SOURCE_DIR}/lib/crypto/krb/checksum_cbc.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/enctype_util.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/enc_etm.c - # ${KRB5_SOURCE_DIR}/lib/crypto/krb/combine_keys.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/default_state.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/decrypt_iov.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/checksum_dk_cmac.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/etypes.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/old_api_glue.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/cksumtypes.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/prf_cmac.c - # ${KRB5_SOURCE_DIR}/lib/crypto/krb/enc_old.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/decrypt.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/prf_dk.c - # ${KRB5_SOURCE_DIR}/lib/crypto/krb/s2k_des.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/checksum_unkeyed.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/crypto_length.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/block_size.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/string_to_key.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/verify_checksum.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/crypto_libinit.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/derive.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/random_to_key.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/verify_checksum_iov.c - # ${KRB5_SOURCE_DIR}/lib/crypto/krb/checksum_confounder.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/checksum_length.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/enc_dk_hmac.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/make_checksum.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/prf_des.c - 
${KRB5_SOURCE_DIR}/lib/crypto/krb/prf.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/coll_proof_cksum.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/enc_rc4.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/cf2.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/aead.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/encrypt_iov.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/cksumtype_to_string.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/key.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/enc_raw.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/keylengths.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/checksum_hmac_md5.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/keyed_cksum.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/keyed_checksum_types.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/prf_aes2.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/state.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/checksum_dk_hmac.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/encrypt.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/checksum_etm.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/make_random_key.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/string_to_cksumtype.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/mandatory_sumtype.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/make_checksum_iov.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/s2k_rc4.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/valid_cksumtype.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/nfold.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/prng_fortuna.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/encrypt_length.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/cmac.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/keyblocks.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/prf_rc4.c - ${KRB5_SOURCE_DIR}/lib/crypto/krb/s2k_pbkdf2.c - ${KRB5_SOURCE_DIR}/lib/crypto/openssl/enc_provider/aes.c - # ${KRB5_SOURCE_DIR}/lib/crypto/openssl/enc_provider/des.c - ${KRB5_SOURCE_DIR}/lib/crypto/openssl/enc_provider/rc4.c - ${KRB5_SOURCE_DIR}/lib/crypto/openssl/enc_provider/des3.c - #${KRB5_SOURCE_DIR}/lib/crypto/openssl/enc_provider/camellia.c - ${KRB5_SOURCE_DIR}/lib/crypto/openssl/sha256.c - ${KRB5_SOURCE_DIR}/lib/crypto/openssl/hmac.c - ${KRB5_SOURCE_DIR}/lib/crypto/openssl/pbkdf2.c - ${KRB5_SOURCE_DIR}/lib/crypto/openssl/init.c - ${KRB5_SOURCE_DIR}/lib/crypto/openssl/stubs.c - # ${KRB5_SOURCE_DIR}/lib/crypto/openssl/hash_provider/hash_crc32.c - ${KRB5_SOURCE_DIR}/lib/crypto/openssl/hash_provider/hash_evp.c - ${KRB5_SOURCE_DIR}/lib/crypto/openssl/des/des_keys.c - ${KRB5_SOURCE_DIR}/util/support/fake-addrinfo.c - ${KRB5_SOURCE_DIR}/util/support/k5buf.c - ${KRB5_SOURCE_DIR}/util/support/hex.c - ${KRB5_SOURCE_DIR}/util/support/threads.c - ${KRB5_SOURCE_DIR}/util/support/utf8.c - ${KRB5_SOURCE_DIR}/util/support/hashtab.c - ${KRB5_SOURCE_DIR}/util/support/dir_filenames.c - ${KRB5_SOURCE_DIR}/util/support/base64.c - ${KRB5_SOURCE_DIR}/util/support/strerror_r.c - ${KRB5_SOURCE_DIR}/util/support/plugins.c - ${KRB5_SOURCE_DIR}/util/support/path.c - ${KRB5_SOURCE_DIR}/util/support/init-addrinfo.c - ${KRB5_SOURCE_DIR}/util/support/json.c - ${KRB5_SOURCE_DIR}/util/support/errors.c - ${KRB5_SOURCE_DIR}/util/support/utf8_conv.c - ${KRB5_SOURCE_DIR}/util/support/strlcpy.c - ${KRB5_SOURCE_DIR}/util/support/gmt_mktime.c - ${KRB5_SOURCE_DIR}/util/support/zap.c - ${KRB5_SOURCE_DIR}/util/support/bcmp.c - ${KRB5_SOURCE_DIR}/util/support/secure_getenv.c - ${KRB5_SOURCE_DIR}/util/profile/prof_tree.c - ${KRB5_SOURCE_DIR}/util/profile/prof_file.c - ${KRB5_SOURCE_DIR}/util/profile/prof_parse.c - ${KRB5_SOURCE_DIR}/util/profile/prof_get.c - ${KRB5_SOURCE_DIR}/util/profile/prof_set.c - ${KRB5_SOURCE_DIR}/util/profile/prof_err.c - ${KRB5_SOURCE_DIR}/util/profile/prof_init.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/fwd_tgt.c - 
${KRB5_SOURCE_DIR}/lib/krb5/krb/conv_creds.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/fast.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/ser_adata.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/copy_tick.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/enc_keyhelper.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/ser_actx.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/init_ctx.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/preauth2.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/copy_princ.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/parse_host_string.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/pr_to_salt.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/rd_req.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/pac_sign.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/copy_addrs.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/conv_princ.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/rd_rep.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/str_conv.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/gic_opt.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/recvauth.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/copy_cksum.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/ai_authdata.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/ser_ctx.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/appdefault.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/bld_princ.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/in_tkt_sky.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/copy_creds.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/auth_con.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/copy_key.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/kdc_rep_dc.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/mk_cred.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/gic_keytab.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/rd_req_dec.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/set_realm.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/preauth_sam2.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/libdef_parse.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/privsafe.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/ser_auth.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/val_renew.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/addr_order.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/authdata_dec.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/walk_rtree.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/gen_subkey.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/copy_auth.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/chpw.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/mk_req.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/allow_weak.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/mk_rep.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/mk_priv.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/s4u_authdata.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/preauth_otp.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/init_keyblock.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/ser_addr.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/encrypt_tk.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/s4u_creds.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/srv_dec_tkt.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/rd_priv.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/authdata_enc.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/authdata_exp.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/decode_kdc.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/decrypt_tk.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/enc_helper.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/mk_req_ext.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/ser_key.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/preauth_encts.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/send_tgs.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/ser_cksum.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/tgtname.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/encode_kdc.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/rd_cred.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/rd_safe.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/preauth_pkinit.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/srv_rcache.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/chk_trans.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/etype_list.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/get_creds.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/ser_princ.c - 
${KRB5_SOURCE_DIR}/lib/krb5/krb/gic_pwd.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/authdata.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/gen_save_subkey.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/vfy_increds.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/addr_comp.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/kfree.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/response_items.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/serialize.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/cammac_util.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/gc_via_tkt.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/copy_ctx.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/sendauth.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/addr_srch.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/mk_safe.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/preauth_ec.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/bld_pr_ext.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/random_str.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/sname_match.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/princ_comp.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/get_in_tkt.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/gen_seqnum.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/cp_key_cnt.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/mk_error.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/copy_athctr.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/deltat.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/get_etype_info.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/plugin.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/kerrs.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/vic_opt.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/unparse.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/parse.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/rd_error.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/pac.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/valid_times.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/copy_data.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb/padata.c + "${KRB5_SOURCE_DIR}/lib/crypto/krb/prng.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/enc_dk_cmac.c" + # "${KRB5_SOURCE_DIR}/lib/crypto/krb/crc32.c" + # "${KRB5_SOURCE_DIR}/lib/crypto/krb/checksum_cbc.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/enctype_util.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/enc_etm.c" + # "${KRB5_SOURCE_DIR}/lib/crypto/krb/combine_keys.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/default_state.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/decrypt_iov.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/checksum_dk_cmac.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/etypes.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/old_api_glue.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/cksumtypes.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/prf_cmac.c" + # "${KRB5_SOURCE_DIR}/lib/crypto/krb/enc_old.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/decrypt.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/prf_dk.c" + # "${KRB5_SOURCE_DIR}/lib/crypto/krb/s2k_des.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/checksum_unkeyed.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/crypto_length.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/block_size.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/string_to_key.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/verify_checksum.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/crypto_libinit.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/derive.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/random_to_key.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/verify_checksum_iov.c" + # "${KRB5_SOURCE_DIR}/lib/crypto/krb/checksum_confounder.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/checksum_length.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/enc_dk_hmac.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/make_checksum.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/prf_des.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/prf.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/coll_proof_cksum.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/enc_rc4.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/cf2.c" + 
"${KRB5_SOURCE_DIR}/lib/crypto/krb/aead.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/encrypt_iov.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/cksumtype_to_string.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/key.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/enc_raw.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/keylengths.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/checksum_hmac_md5.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/keyed_cksum.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/keyed_checksum_types.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/prf_aes2.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/state.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/checksum_dk_hmac.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/encrypt.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/checksum_etm.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/make_random_key.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/string_to_cksumtype.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/mandatory_sumtype.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/make_checksum_iov.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/s2k_rc4.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/valid_cksumtype.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/nfold.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/prng_fortuna.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/encrypt_length.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/cmac.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/keyblocks.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/prf_rc4.c" + "${KRB5_SOURCE_DIR}/lib/crypto/krb/s2k_pbkdf2.c" + "${KRB5_SOURCE_DIR}/lib/crypto/openssl/enc_provider/aes.c" + # "${KRB5_SOURCE_DIR}/lib/crypto/openssl/enc_provider/des.c" + "${KRB5_SOURCE_DIR}/lib/crypto/openssl/enc_provider/rc4.c" + "${KRB5_SOURCE_DIR}/lib/crypto/openssl/enc_provider/des3.c" + #"${KRB5_SOURCE_DIR}/lib/crypto/openssl/enc_provider/camellia.c" + "${KRB5_SOURCE_DIR}/lib/crypto/openssl/sha256.c" + "${KRB5_SOURCE_DIR}/lib/crypto/openssl/hmac.c" + "${KRB5_SOURCE_DIR}/lib/crypto/openssl/pbkdf2.c" + "${KRB5_SOURCE_DIR}/lib/crypto/openssl/init.c" + "${KRB5_SOURCE_DIR}/lib/crypto/openssl/stubs.c" + # "${KRB5_SOURCE_DIR}/lib/crypto/openssl/hash_provider/hash_crc32.c" + "${KRB5_SOURCE_DIR}/lib/crypto/openssl/hash_provider/hash_evp.c" + "${KRB5_SOURCE_DIR}/lib/crypto/openssl/des/des_keys.c" + "${KRB5_SOURCE_DIR}/util/support/fake-addrinfo.c" + "${KRB5_SOURCE_DIR}/util/support/k5buf.c" + "${KRB5_SOURCE_DIR}/util/support/hex.c" + "${KRB5_SOURCE_DIR}/util/support/threads.c" + "${KRB5_SOURCE_DIR}/util/support/utf8.c" + "${KRB5_SOURCE_DIR}/util/support/hashtab.c" + "${KRB5_SOURCE_DIR}/util/support/dir_filenames.c" + "${KRB5_SOURCE_DIR}/util/support/base64.c" + "${KRB5_SOURCE_DIR}/util/support/strerror_r.c" + "${KRB5_SOURCE_DIR}/util/support/plugins.c" + "${KRB5_SOURCE_DIR}/util/support/path.c" + "${KRB5_SOURCE_DIR}/util/support/init-addrinfo.c" + "${KRB5_SOURCE_DIR}/util/support/json.c" + "${KRB5_SOURCE_DIR}/util/support/errors.c" + "${KRB5_SOURCE_DIR}/util/support/utf8_conv.c" + "${KRB5_SOURCE_DIR}/util/support/strlcpy.c" + "${KRB5_SOURCE_DIR}/util/support/gmt_mktime.c" + "${KRB5_SOURCE_DIR}/util/support/zap.c" + "${KRB5_SOURCE_DIR}/util/support/bcmp.c" + "${KRB5_SOURCE_DIR}/util/support/secure_getenv.c" + "${KRB5_SOURCE_DIR}/util/profile/prof_tree.c" + "${KRB5_SOURCE_DIR}/util/profile/prof_file.c" + "${KRB5_SOURCE_DIR}/util/profile/prof_parse.c" + "${KRB5_SOURCE_DIR}/util/profile/prof_get.c" + "${KRB5_SOURCE_DIR}/util/profile/prof_set.c" + "${KRB5_SOURCE_DIR}/util/profile/prof_err.c" + "${KRB5_SOURCE_DIR}/util/profile/prof_init.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/fwd_tgt.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/conv_creds.c" + 
"${KRB5_SOURCE_DIR}/lib/krb5/krb/fast.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/ser_adata.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/copy_tick.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/enc_keyhelper.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/ser_actx.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/init_ctx.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/preauth2.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/copy_princ.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/parse_host_string.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/pr_to_salt.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/rd_req.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/pac_sign.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/copy_addrs.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/conv_princ.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/rd_rep.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/str_conv.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/gic_opt.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/recvauth.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/copy_cksum.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/ai_authdata.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/ser_ctx.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/appdefault.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/bld_princ.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/in_tkt_sky.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/copy_creds.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/auth_con.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/copy_key.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/kdc_rep_dc.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/mk_cred.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/gic_keytab.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/rd_req_dec.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/set_realm.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/preauth_sam2.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/libdef_parse.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/privsafe.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/ser_auth.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/val_renew.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/addr_order.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/authdata_dec.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/walk_rtree.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/gen_subkey.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/copy_auth.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/chpw.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/mk_req.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/allow_weak.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/mk_rep.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/mk_priv.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/s4u_authdata.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/preauth_otp.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/init_keyblock.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/ser_addr.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/encrypt_tk.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/s4u_creds.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/srv_dec_tkt.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/rd_priv.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/authdata_enc.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/authdata_exp.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/decode_kdc.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/decrypt_tk.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/enc_helper.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/mk_req_ext.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/ser_key.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/preauth_encts.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/send_tgs.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/ser_cksum.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/tgtname.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/encode_kdc.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/rd_cred.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/rd_safe.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/preauth_pkinit.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/srv_rcache.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/chk_trans.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/etype_list.c" + 
"${KRB5_SOURCE_DIR}/lib/krb5/krb/get_creds.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/ser_princ.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/gic_pwd.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/authdata.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/gen_save_subkey.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/vfy_increds.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/addr_comp.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/kfree.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/response_items.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/serialize.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/cammac_util.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/gc_via_tkt.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/copy_ctx.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/sendauth.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/addr_srch.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/mk_safe.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/preauth_ec.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/bld_pr_ext.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/random_str.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/sname_match.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/princ_comp.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/get_in_tkt.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/gen_seqnum.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/cp_key_cnt.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/mk_error.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/copy_athctr.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/deltat.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/get_etype_info.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/plugin.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/kerrs.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/vic_opt.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/unparse.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/parse.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/rd_error.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/pac.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/valid_times.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/copy_data.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb/padata.c" - ${KRB5_SOURCE_DIR}/lib/krb5/os/hostrealm.c - ${KRB5_SOURCE_DIR}/lib/krb5/os/thread_safe.c - ${KRB5_SOURCE_DIR}/lib/krb5/os/krbfileio.c - ${KRB5_SOURCE_DIR}/lib/krb5/os/toffset.c - ${KRB5_SOURCE_DIR}/lib/krb5/os/hostaddr.c - ${KRB5_SOURCE_DIR}/lib/krb5/os/ustime.c - ${KRB5_SOURCE_DIR}/lib/krb5/os/timeofday.c - ${KRB5_SOURCE_DIR}/lib/krb5/os/ccdefname.c - ${KRB5_SOURCE_DIR}/lib/krb5/os/full_ipadr.c - ${KRB5_SOURCE_DIR}/lib/krb5/os/read_pwd.c - ${KRB5_SOURCE_DIR}/lib/krb5/os/trace.c - ${KRB5_SOURCE_DIR}/lib/krb5/os/localauth_k5login.c - ${KRB5_SOURCE_DIR}/lib/krb5/os/localauth_rule.c - ${KRB5_SOURCE_DIR}/lib/krb5/os/localaddr.c - ${KRB5_SOURCE_DIR}/lib/krb5/os/hostrealm_dns.c - ${KRB5_SOURCE_DIR}/lib/krb5/os/hostrealm_domain.c - ${KRB5_SOURCE_DIR}/lib/krb5/os/sn2princ.c - ${KRB5_SOURCE_DIR}/lib/krb5/os/net_write.c - ${KRB5_SOURCE_DIR}/lib/krb5/os/gen_rname.c - ${KRB5_SOURCE_DIR}/lib/krb5/os/net_read.c - ${KRB5_SOURCE_DIR}/lib/krb5/os/accessor.c - ${KRB5_SOURCE_DIR}/lib/krb5/os/hostrealm_profile.c - ${KRB5_SOURCE_DIR}/lib/krb5/os/c_ustime.c - ${KRB5_SOURCE_DIR}/lib/krb5/os/expand_path.c - ${KRB5_SOURCE_DIR}/lib/krb5/os/port2ip.c - ${KRB5_SOURCE_DIR}/lib/krb5/os/changepw.c - ${KRB5_SOURCE_DIR}/lib/krb5/os/unlck_file.c - ${KRB5_SOURCE_DIR}/lib/krb5/os/gen_port.c - ${KRB5_SOURCE_DIR}/lib/krb5/os/localauth_an2ln.c - ${KRB5_SOURCE_DIR}/lib/krb5/os/genaddrs.c - ${KRB5_SOURCE_DIR}/lib/krb5/os/init_os_ctx.c - ${KRB5_SOURCE_DIR}/lib/krb5/os/localauth.c - ${KRB5_SOURCE_DIR}/lib/krb5/os/locate_kdc.c - ${KRB5_SOURCE_DIR}/lib/krb5/os/prompter.c - ${KRB5_SOURCE_DIR}/lib/krb5/os/ktdefname.c - ${KRB5_SOURCE_DIR}/lib/krb5/os/realm_dom.c - ${KRB5_SOURCE_DIR}/lib/krb5/os/dnssrv.c - ${KRB5_SOURCE_DIR}/lib/krb5/os/mk_faddr.c - # 
${KRB5_SOURCE_DIR}/lib/krb5/os/dnsglue.c - ${KRB5_SOURCE_DIR}/lib/krb5/os/sendto_kdc.c - ${KRB5_SOURCE_DIR}/lib/krb5/os/hostrealm_registry.c - ${KRB5_SOURCE_DIR}/lib/krb5/os/write_msg.c - ${KRB5_SOURCE_DIR}/lib/krb5/os/localauth_names.c - ${KRB5_SOURCE_DIR}/lib/krb5/os/read_msg.c - ${KRB5_SOURCE_DIR}/lib/krb5/os/lock_file.c - ${KRB5_SOURCE_DIR}/lib/krb5/ccache/ccselect.c - ${KRB5_SOURCE_DIR}/lib/krb5/ccache/ccselect_realm.c - # ${KRB5_SOURCE_DIR}/lib/krb5/ccache/ser_cc.c + "${KRB5_SOURCE_DIR}/lib/krb5/os/hostrealm.c" + "${KRB5_SOURCE_DIR}/lib/krb5/os/thread_safe.c" + "${KRB5_SOURCE_DIR}/lib/krb5/os/krbfileio.c" + "${KRB5_SOURCE_DIR}/lib/krb5/os/toffset.c" + "${KRB5_SOURCE_DIR}/lib/krb5/os/hostaddr.c" + "${KRB5_SOURCE_DIR}/lib/krb5/os/ustime.c" + "${KRB5_SOURCE_DIR}/lib/krb5/os/timeofday.c" + "${KRB5_SOURCE_DIR}/lib/krb5/os/ccdefname.c" + "${KRB5_SOURCE_DIR}/lib/krb5/os/full_ipadr.c" + "${KRB5_SOURCE_DIR}/lib/krb5/os/read_pwd.c" + "${KRB5_SOURCE_DIR}/lib/krb5/os/trace.c" + "${KRB5_SOURCE_DIR}/lib/krb5/os/localauth_k5login.c" + "${KRB5_SOURCE_DIR}/lib/krb5/os/localauth_rule.c" + "${KRB5_SOURCE_DIR}/lib/krb5/os/localaddr.c" + "${KRB5_SOURCE_DIR}/lib/krb5/os/hostrealm_dns.c" + "${KRB5_SOURCE_DIR}/lib/krb5/os/hostrealm_domain.c" + "${KRB5_SOURCE_DIR}/lib/krb5/os/sn2princ.c" + "${KRB5_SOURCE_DIR}/lib/krb5/os/net_write.c" + "${KRB5_SOURCE_DIR}/lib/krb5/os/gen_rname.c" + "${KRB5_SOURCE_DIR}/lib/krb5/os/net_read.c" + "${KRB5_SOURCE_DIR}/lib/krb5/os/accessor.c" + "${KRB5_SOURCE_DIR}/lib/krb5/os/hostrealm_profile.c" + "${KRB5_SOURCE_DIR}/lib/krb5/os/c_ustime.c" + "${KRB5_SOURCE_DIR}/lib/krb5/os/expand_path.c" + "${KRB5_SOURCE_DIR}/lib/krb5/os/port2ip.c" + "${KRB5_SOURCE_DIR}/lib/krb5/os/changepw.c" + "${KRB5_SOURCE_DIR}/lib/krb5/os/unlck_file.c" + "${KRB5_SOURCE_DIR}/lib/krb5/os/gen_port.c" + "${KRB5_SOURCE_DIR}/lib/krb5/os/localauth_an2ln.c" + "${KRB5_SOURCE_DIR}/lib/krb5/os/genaddrs.c" + "${KRB5_SOURCE_DIR}/lib/krb5/os/init_os_ctx.c" + "${KRB5_SOURCE_DIR}/lib/krb5/os/localauth.c" + "${KRB5_SOURCE_DIR}/lib/krb5/os/locate_kdc.c" + "${KRB5_SOURCE_DIR}/lib/krb5/os/prompter.c" + "${KRB5_SOURCE_DIR}/lib/krb5/os/ktdefname.c" + "${KRB5_SOURCE_DIR}/lib/krb5/os/realm_dom.c" + "${KRB5_SOURCE_DIR}/lib/krb5/os/dnssrv.c" + "${KRB5_SOURCE_DIR}/lib/krb5/os/mk_faddr.c" + # "${KRB5_SOURCE_DIR}/lib/krb5/os/dnsglue.c" + "${KRB5_SOURCE_DIR}/lib/krb5/os/sendto_kdc.c" + "${KRB5_SOURCE_DIR}/lib/krb5/os/hostrealm_registry.c" + "${KRB5_SOURCE_DIR}/lib/krb5/os/write_msg.c" + "${KRB5_SOURCE_DIR}/lib/krb5/os/localauth_names.c" + "${KRB5_SOURCE_DIR}/lib/krb5/os/read_msg.c" + "${KRB5_SOURCE_DIR}/lib/krb5/os/lock_file.c" + "${KRB5_SOURCE_DIR}/lib/krb5/ccache/ccselect.c" + "${KRB5_SOURCE_DIR}/lib/krb5/ccache/ccselect_realm.c" + # "${KRB5_SOURCE_DIR}/lib/krb5/ccache/ser_cc.c" - ${KRB5_SOURCE_DIR}/lib/krb5/ccache/ccdefops.c - ${KRB5_SOURCE_DIR}/lib/krb5/ccache/cc_retr.c - ${KRB5_SOURCE_DIR}/lib/krb5/ccache/ccselect_k5identity.c - ${KRB5_SOURCE_DIR}/lib/krb5/ccache/cccopy.c - ${KRB5_SOURCE_DIR}/lib/krb5/ccache/ccfns.c - ${KRB5_SOURCE_DIR}/lib/krb5/ccache/cc_file.c - ${KRB5_SOURCE_DIR}/lib/krb5/ccache/ccbase.c - ${KRB5_SOURCE_DIR}/lib/krb5/ccache/cccursor.c - ${KRB5_SOURCE_DIR}/lib/krb5/ccache/ccdefault.c - ${KRB5_SOURCE_DIR}/lib/krb5/ccache/cc_memory.c - ${KRB5_SOURCE_DIR}/lib/krb5/ccache/ccmarshal.c - ${KRB5_SOURCE_DIR}/lib/krb5/ccache/ccselect_hostname.c - ${KRB5_SOURCE_DIR}/lib/krb5/ccache/cc_dir.c - ${KRB5_SOURCE_DIR}/lib/krb5/ccache/cc_keyring.c - ${KRB5_SOURCE_DIR}/lib/krb5/ccache/cc_kcm.c - 
${KRB5_SOURCE_DIR}/lib/krb5/keytab/ktadd.c - ${KRB5_SOURCE_DIR}/lib/krb5/keytab/ktbase.c - ${KRB5_SOURCE_DIR}/lib/krb5/keytab/ktdefault.c - ${KRB5_SOURCE_DIR}/lib/krb5/keytab/kt_memory.c - ${KRB5_SOURCE_DIR}/lib/krb5/keytab/ktfns.c - ${KRB5_SOURCE_DIR}/lib/krb5/keytab/ktremove.c - ${KRB5_SOURCE_DIR}/lib/krb5/keytab/read_servi.c - ${KRB5_SOURCE_DIR}/lib/krb5/keytab/kt_file.c - ${KRB5_SOURCE_DIR}/lib/krb5/keytab/read_servi.c - ${KRB5_SOURCE_DIR}/lib/krb5/keytab/ktfr_entry.c + "${KRB5_SOURCE_DIR}/lib/krb5/ccache/ccdefops.c" + "${KRB5_SOURCE_DIR}/lib/krb5/ccache/cc_retr.c" + "${KRB5_SOURCE_DIR}/lib/krb5/ccache/ccselect_k5identity.c" + "${KRB5_SOURCE_DIR}/lib/krb5/ccache/cccopy.c" + "${KRB5_SOURCE_DIR}/lib/krb5/ccache/ccfns.c" + "${KRB5_SOURCE_DIR}/lib/krb5/ccache/cc_file.c" + "${KRB5_SOURCE_DIR}/lib/krb5/ccache/ccbase.c" + "${KRB5_SOURCE_DIR}/lib/krb5/ccache/cccursor.c" + "${KRB5_SOURCE_DIR}/lib/krb5/ccache/ccdefault.c" + "${KRB5_SOURCE_DIR}/lib/krb5/ccache/cc_memory.c" + "${KRB5_SOURCE_DIR}/lib/krb5/ccache/ccmarshal.c" + "${KRB5_SOURCE_DIR}/lib/krb5/ccache/ccselect_hostname.c" + "${KRB5_SOURCE_DIR}/lib/krb5/ccache/cc_dir.c" + "${KRB5_SOURCE_DIR}/lib/krb5/ccache/cc_keyring.c" + "${KRB5_SOURCE_DIR}/lib/krb5/ccache/cc_kcm.c" + "${KRB5_SOURCE_DIR}/lib/krb5/keytab/ktadd.c" + "${KRB5_SOURCE_DIR}/lib/krb5/keytab/ktbase.c" + "${KRB5_SOURCE_DIR}/lib/krb5/keytab/ktdefault.c" + "${KRB5_SOURCE_DIR}/lib/krb5/keytab/kt_memory.c" + "${KRB5_SOURCE_DIR}/lib/krb5/keytab/ktfns.c" + "${KRB5_SOURCE_DIR}/lib/krb5/keytab/ktremove.c" + "${KRB5_SOURCE_DIR}/lib/krb5/keytab/read_servi.c" + "${KRB5_SOURCE_DIR}/lib/krb5/keytab/kt_file.c" + "${KRB5_SOURCE_DIR}/lib/krb5/keytab/read_servi.c" + "${KRB5_SOURCE_DIR}/lib/krb5/keytab/ktfr_entry.c" - ${KRB5_SOURCE_DIR}/lib/krb5/error_tables/k5e1_err.c - ${KRB5_SOURCE_DIR}/lib/krb5/error_tables/kdb5_err.c - ${KRB5_SOURCE_DIR}/lib/krb5/error_tables/asn1_err.c - ${KRB5_SOURCE_DIR}/lib/krb5/error_tables/krb5_err.c - ${KRB5_SOURCE_DIR}/lib/krb5/error_tables/krb524_err.c - ${KRB5_SOURCE_DIR}/lib/krb5/error_tables/kv5m_err.c + "${KRB5_SOURCE_DIR}/lib/krb5/error_tables/k5e1_err.c" + "${KRB5_SOURCE_DIR}/lib/krb5/error_tables/kdb5_err.c" + "${KRB5_SOURCE_DIR}/lib/krb5/error_tables/asn1_err.c" + "${KRB5_SOURCE_DIR}/lib/krb5/error_tables/krb5_err.c" + "${KRB5_SOURCE_DIR}/lib/krb5/error_tables/krb524_err.c" + "${KRB5_SOURCE_DIR}/lib/krb5/error_tables/kv5m_err.c" - ${KRB5_SOURCE_DIR}/lib/krb5/rcache/rc_base.c - ${KRB5_SOURCE_DIR}/lib/krb5/rcache/rc_dfl.c - ${KRB5_SOURCE_DIR}/lib/krb5/rcache/rc_file2.c - ${KRB5_SOURCE_DIR}/lib/krb5/rcache/rc_none.c - ${KRB5_SOURCE_DIR}/lib/krb5/rcache/memrcache.c - ${KRB5_SOURCE_DIR}/lib/krb5/unicode/ucdata/ucdata.c - ${KRB5_SOURCE_DIR}/lib/krb5/unicode/ucstr.c - ${KRB5_SOURCE_DIR}/lib/krb5/asn.1/asn1_encode.c - ${KRB5_SOURCE_DIR}/lib/krb5/asn.1/asn1_k_encode.c - ${KRB5_SOURCE_DIR}/lib/krb5/asn.1/ldap_key_seq.c - ${KRB5_SOURCE_DIR}/lib/krb5/krb5_libinit.c + "${KRB5_SOURCE_DIR}/lib/krb5/rcache/rc_base.c" + "${KRB5_SOURCE_DIR}/lib/krb5/rcache/rc_dfl.c" + "${KRB5_SOURCE_DIR}/lib/krb5/rcache/rc_file2.c" + "${KRB5_SOURCE_DIR}/lib/krb5/rcache/rc_none.c" + "${KRB5_SOURCE_DIR}/lib/krb5/rcache/memrcache.c" + "${KRB5_SOURCE_DIR}/lib/krb5/unicode/ucdata/ucdata.c" + "${KRB5_SOURCE_DIR}/lib/krb5/unicode/ucstr.c" + "${KRB5_SOURCE_DIR}/lib/krb5/asn.1/asn1_encode.c" + "${KRB5_SOURCE_DIR}/lib/krb5/asn.1/asn1_k_encode.c" + "${KRB5_SOURCE_DIR}/lib/krb5/asn.1/ldap_key_seq.c" + "${KRB5_SOURCE_DIR}/lib/krb5/krb5_libinit.c" ) add_custom_command( - OUTPUT 
${KRB5_SOURCE_DIR}/util/et/compile_et + OUTPUT "${KRB5_SOURCE_DIR}/util/et/compile_et" COMMAND /bin/sh ./config_script ./compile_et.sh @@ -470,7 +470,7 @@ add_custom_command( sed > compile_et - DEPENDS ${KRB5_SOURCE_DIR}/util/et/compile_et.sh ${KRB5_SOURCE_DIR}/util/et/config_script + DEPENDS "${KRB5_SOURCE_DIR}/util/et/compile_et.sh" "${KRB5_SOURCE_DIR}/util/et/config_script" WORKING_DIRECTORY "${KRB5_SOURCE_DIR}/util/et" ) @@ -497,8 +497,8 @@ function(preprocess_et out_var) get_filename_component(ET_PATH ${in_f} DIRECTORY) add_custom_command(OUTPUT ${F_C} ${F_H} - COMMAND perl ${KRB5_SOURCE_DIR}/util/et/compile_et -d "${KRB5_SOURCE_DIR}/util/et" ${in_f} - DEPENDS ${in_f} ${KRB5_SOURCE_DIR}/util/et/compile_et + COMMAND perl "${KRB5_SOURCE_DIR}/util/et/compile_et" -d "${KRB5_SOURCE_DIR}/util/et" ${in_f} + DEPENDS ${in_f} "${KRB5_SOURCE_DIR}/util/et/compile_et" WORKING_DIRECTORY ${ET_PATH} COMMENT "Creating preprocessed file ${F_C}" VERBATIM @@ -509,7 +509,7 @@ function(preprocess_et out_var) endfunction() add_custom_command( - OUTPUT ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/error_map.h + OUTPUT "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/error_map.h" COMMAND perl -I../../../util ../../../util/gen-map.pl @@ -525,27 +525,27 @@ add_custom_command( add_custom_target( ERROR_MAP_H - DEPENDS ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/error_map.h + DEPENDS "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/error_map.h" COMMENT "generating error_map.h" VERBATIM ) add_custom_command( - OUTPUT ${KRB5_SOURCE_DIR}/lib/gssapi/generic/errmap.h + OUTPUT "${KRB5_SOURCE_DIR}/lib/gssapi/generic/errmap.h" COMMAND perl -w -I../../../util ../../../util/gen.pl bimap errmap.h NAME=mecherrmap LEFT=OM_uint32 RIGHT=struct\ mecherror LEFTPRINT=print_OM_uint32 RIGHTPRINT=mecherror_print LEFTCMP=cmp_OM_uint32 RIGHTCMP=mecherror_cmp WORKING_DIRECTORY "${KRB5_SOURCE_DIR}/lib/gssapi/generic" ) add_custom_target( ERRMAP_H - DEPENDS ${KRB5_SOURCE_DIR}/lib/gssapi/generic/errmap.h + DEPENDS "${KRB5_SOURCE_DIR}/lib/gssapi/generic/errmap.h" COMMENT "generating errmap.h" VERBATIM ) add_custom_target( KRB_5_H - DEPENDS ${CMAKE_CURRENT_BINARY_DIR}/include/krb5/krb5.h + DEPENDS "${CMAKE_CURRENT_BINARY_DIR}/include/krb5/krb5.h" COMMENT "generating krb5.h" VERBATIM ) @@ -563,12 +563,12 @@ preprocess_et(processed_et_files ${ET_FILES}) if(CMAKE_SYSTEM_NAME MATCHES "Darwin") add_custom_command( - OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/include_private/kcmrpc.h ${CMAKE_CURRENT_BINARY_DIR}/include_private/kcmrpc.c - COMMAND mig -header kcmrpc.h -user kcmrpc.c -sheader /dev/null -server /dev/null -I${KRB5_SOURCE_DIR}/lib/krb5/ccache ${KRB5_SOURCE_DIR}/lib/krb5/ccache/kcmrpc.defs + OUTPUT "${CMAKE_CURRENT_BINARY_DIR}/include_private/kcmrpc.h" "${CMAKE_CURRENT_BINARY_DIR}/include_private/kcmrpc.c" + COMMAND mig -header kcmrpc.h -user kcmrpc.c -sheader /dev/null -server /dev/null -I"${KRB5_SOURCE_DIR}/lib/krb5/ccache" "${KRB5_SOURCE_DIR}/lib/krb5/ccache/kcmrpc.defs" WORKING_DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}/include_private" ) - list(APPEND ALL_SRCS ${CMAKE_CURRENT_BINARY_DIR}/include_private/kcmrpc.c) + list(APPEND ALL_SRCS "${CMAKE_CURRENT_BINARY_DIR}/include_private/kcmrpc.c") endif() target_sources(${KRB5_LIBRARY} PRIVATE @@ -576,98 +576,98 @@ target_sources(${KRB5_LIBRARY} PRIVATE ) file(MAKE_DIRECTORY - ${CMAKE_CURRENT_BINARY_DIR}/include/gssapi + "${CMAKE_CURRENT_BINARY_DIR}/include/gssapi" ) file(GLOB GSSAPI_GENERIC_HEADERS - ${KRB5_SOURCE_DIR}/lib/gssapi/generic/*.h - ${KRB5_SOURCE_DIR}/lib/gssapi/generic/gssapi.hin + "${KRB5_SOURCE_DIR}/lib/gssapi/generic/*.h" + 
"${KRB5_SOURCE_DIR}/lib/gssapi/generic/gssapi.hin" ) file(COPY ${GSSAPI_GENERIC_HEADERS} - DESTINATION ${CMAKE_CURRENT_BINARY_DIR}/include/gssapi/ + DESTINATION "${CMAKE_CURRENT_BINARY_DIR}/include/gssapi/" ) file(RENAME - ${CMAKE_CURRENT_BINARY_DIR}/include/gssapi/gssapi.hin - ${CMAKE_CURRENT_BINARY_DIR}/include/gssapi/gssapi.h + "${CMAKE_CURRENT_BINARY_DIR}/include/gssapi/gssapi.hin" + "${CMAKE_CURRENT_BINARY_DIR}/include/gssapi/gssapi.h" ) -file(COPY ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/gssapi_krb5.h - DESTINATION ${CMAKE_CURRENT_BINARY_DIR}/include/gssapi/ +file(COPY "${KRB5_SOURCE_DIR}/lib/gssapi/krb5/gssapi_krb5.h" + DESTINATION "${CMAKE_CURRENT_BINARY_DIR}/include/gssapi/" ) -file(COPY ${KRB5_SOURCE_DIR}/util/et/com_err.h - DESTINATION ${CMAKE_CURRENT_BINARY_DIR}/include/ +file(COPY "${KRB5_SOURCE_DIR}/util/et/com_err.h" + DESTINATION "${CMAKE_CURRENT_BINARY_DIR}/include/" ) -file(COPY ${CMAKE_CURRENT_SOURCE_DIR}/osconf.h - DESTINATION ${CMAKE_CURRENT_BINARY_DIR}/include_private/ +file(COPY "${CMAKE_CURRENT_SOURCE_DIR}/osconf.h" + DESTINATION "${CMAKE_CURRENT_BINARY_DIR}/include_private/" ) -file(COPY ${CMAKE_CURRENT_SOURCE_DIR}/profile.h - DESTINATION ${CMAKE_CURRENT_BINARY_DIR}/include_private/ +file(COPY "${CMAKE_CURRENT_SOURCE_DIR}/profile.h" + DESTINATION "${CMAKE_CURRENT_BINARY_DIR}/include_private/" ) string(TOLOWER "${CMAKE_SYSTEM_NAME}" _system_name) -file(COPY ${CMAKE_CURRENT_SOURCE_DIR}/autoconf_${_system_name}.h - DESTINATION ${CMAKE_CURRENT_BINARY_DIR}/include_private/ +file(COPY "${CMAKE_CURRENT_SOURCE_DIR}/autoconf_${_system_name}.h" + DESTINATION "${CMAKE_CURRENT_BINARY_DIR}/include_private/" ) file(RENAME - ${CMAKE_CURRENT_BINARY_DIR}/include_private/autoconf_${_system_name}.h - ${CMAKE_CURRENT_BINARY_DIR}/include_private/autoconf.h + "${CMAKE_CURRENT_BINARY_DIR}/include_private/autoconf_${_system_name}.h" + "${CMAKE_CURRENT_BINARY_DIR}/include_private/autoconf.h" ) file(MAKE_DIRECTORY - ${CMAKE_CURRENT_BINARY_DIR}/include/krb5 + "${CMAKE_CURRENT_BINARY_DIR}/include/krb5" ) SET(KRBHDEP - ${KRB5_SOURCE_DIR}/include/krb5/krb5.hin - ${KRB5_SOURCE_DIR}/lib/krb5/error_tables/krb5_err.h - ${KRB5_SOURCE_DIR}/lib/krb5/error_tables/k5e1_err.h - ${KRB5_SOURCE_DIR}/lib/krb5/error_tables/kdb5_err.h - ${KRB5_SOURCE_DIR}/lib/krb5/error_tables/kv5m_err.h - ${KRB5_SOURCE_DIR}/lib/krb5/error_tables/krb524_err.h - ${KRB5_SOURCE_DIR}/lib/krb5/error_tables/asn1_err.h + "${KRB5_SOURCE_DIR}/include/krb5/krb5.hin" + "${KRB5_SOURCE_DIR}/lib/krb5/error_tables/krb5_err.h" + "${KRB5_SOURCE_DIR}/lib/krb5/error_tables/k5e1_err.h" + "${KRB5_SOURCE_DIR}/lib/krb5/error_tables/kdb5_err.h" + "${KRB5_SOURCE_DIR}/lib/krb5/error_tables/kv5m_err.h" + "${KRB5_SOURCE_DIR}/lib/krb5/error_tables/krb524_err.h" + "${KRB5_SOURCE_DIR}/lib/krb5/error_tables/asn1_err.h" ) # cmake < 3.18 does not have 'cat' command add_custom_command( - OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/include/krb5/krb5.h - COMMAND cat ${KRBHDEP} > ${CMAKE_CURRENT_BINARY_DIR}/include/krb5/krb5.h + OUTPUT "${CMAKE_CURRENT_BINARY_DIR}/include/krb5/krb5.h" + COMMAND cat ${KRBHDEP} > "${CMAKE_CURRENT_BINARY_DIR}/include/krb5/krb5.h" DEPENDS ${KRBHDEP} ) target_include_directories(${KRB5_LIBRARY} PUBLIC - ${KRB5_SOURCE_DIR}/include - ${CMAKE_CURRENT_BINARY_DIR}/include + "${KRB5_SOURCE_DIR}/include" + "${CMAKE_CURRENT_BINARY_DIR}/include" ) target_include_directories(${KRB5_LIBRARY} PRIVATE - ${CMAKE_CURRENT_BINARY_DIR}/include_private # For autoconf.h and other generated headers. 
+ "${CMAKE_CURRENT_BINARY_DIR}/include_private" # For autoconf.h and other generated headers. ${KRB5_SOURCE_DIR} - ${KRB5_SOURCE_DIR}/include - ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue - ${KRB5_SOURCE_DIR}/lib/ - ${KRB5_SOURCE_DIR}/lib/gssapi - ${KRB5_SOURCE_DIR}/lib/gssapi/generic - ${KRB5_SOURCE_DIR}/lib/gssapi/krb5 - ${KRB5_SOURCE_DIR}/lib/gssapi/spnego - ${KRB5_SOURCE_DIR}/util/et - ${KRB5_SOURCE_DIR}/lib/crypto/openssl - ${KRB5_SOURCE_DIR}/lib/crypto/krb - ${KRB5_SOURCE_DIR}/util/profile - ${KRB5_SOURCE_DIR}/lib/krb5/ccache/ccapi - ${KRB5_SOURCE_DIR}/lib/krb5/ccache - ${KRB5_SOURCE_DIR}/lib/krb5/keytab - ${KRB5_SOURCE_DIR}/lib/krb5/rcache - ${KRB5_SOURCE_DIR}/lib/krb5/unicode - ${KRB5_SOURCE_DIR}/lib/krb5/os + "${KRB5_SOURCE_DIR}/include" + "${KRB5_SOURCE_DIR}/lib/gssapi/mechglue" + "${KRB5_SOURCE_DIR}/lib/" + "${KRB5_SOURCE_DIR}/lib/gssapi" + "${KRB5_SOURCE_DIR}/lib/gssapi/generic" + "${KRB5_SOURCE_DIR}/lib/gssapi/krb5" + "${KRB5_SOURCE_DIR}/lib/gssapi/spnego" + "${KRB5_SOURCE_DIR}/util/et" + "${KRB5_SOURCE_DIR}/lib/crypto/openssl" + "${KRB5_SOURCE_DIR}/lib/crypto/krb" + "${KRB5_SOURCE_DIR}/util/profile" + "${KRB5_SOURCE_DIR}/lib/krb5/ccache/ccapi" + "${KRB5_SOURCE_DIR}/lib/krb5/ccache" + "${KRB5_SOURCE_DIR}/lib/krb5/keytab" + "${KRB5_SOURCE_DIR}/lib/krb5/rcache" + "${KRB5_SOURCE_DIR}/lib/krb5/unicode" + "${KRB5_SOURCE_DIR}/lib/krb5/os" # ${OPENSSL_INCLUDE_DIR} ) diff --git a/contrib/libcpuid-cmake/CMakeLists.txt b/contrib/libcpuid-cmake/CMakeLists.txt index 8c1be50b4e6..9baebb3ba1b 100644 --- a/contrib/libcpuid-cmake/CMakeLists.txt +++ b/contrib/libcpuid-cmake/CMakeLists.txt @@ -1,11 +1,9 @@ -if (NOT ARCH_ARM) +if(ARCH_AMD64) option (ENABLE_CPUID "Enable libcpuid library (only internal)" ${ENABLE_LIBRARIES}) -endif() - -if (ARCH_ARM AND ENABLE_CPUID) - message (${RECONFIGURE_MESSAGE_LEVEL} "cpuid is not supported on ARM") +elseif(ENABLE_CPUID) + message (${RECONFIGURE_MESSAGE_LEVEL} "libcpuid is only supported on x86_64") set (ENABLE_CPUID 0) -endif () +endif() if (NOT ENABLE_CPUID) add_library (cpuid INTERFACE) diff --git a/contrib/libcxx b/contrib/libcxx index 8b80a151d12..2fa892f69ac 160000 --- a/contrib/libcxx +++ b/contrib/libcxx @@ -1 +1 @@ -Subproject commit 8b80a151d12b98ffe2d0c22f7cec12c3b9ff88d7 +Subproject commit 2fa892f69acbaa40f8a18c6484854a6183a34482 diff --git a/contrib/libcxx-cmake/CMakeLists.txt b/contrib/libcxx-cmake/CMakeLists.txt index 3b5d53cd1c0..0cfb4191619 100644 --- a/contrib/libcxx-cmake/CMakeLists.txt +++ b/contrib/libcxx-cmake/CMakeLists.txt @@ -1,49 +1,49 @@ include(CheckCXXCompilerFlag) -set(LIBCXX_SOURCE_DIR ${ClickHouse_SOURCE_DIR}/contrib/libcxx) +set(LIBCXX_SOURCE_DIR "${ClickHouse_SOURCE_DIR}/contrib/libcxx") set(SRCS -${LIBCXX_SOURCE_DIR}/src/algorithm.cpp -${LIBCXX_SOURCE_DIR}/src/any.cpp -${LIBCXX_SOURCE_DIR}/src/atomic.cpp -${LIBCXX_SOURCE_DIR}/src/barrier.cpp -${LIBCXX_SOURCE_DIR}/src/bind.cpp -${LIBCXX_SOURCE_DIR}/src/charconv.cpp -${LIBCXX_SOURCE_DIR}/src/chrono.cpp -${LIBCXX_SOURCE_DIR}/src/condition_variable.cpp -${LIBCXX_SOURCE_DIR}/src/condition_variable_destructor.cpp -${LIBCXX_SOURCE_DIR}/src/debug.cpp -${LIBCXX_SOURCE_DIR}/src/exception.cpp -${LIBCXX_SOURCE_DIR}/src/experimental/memory_resource.cpp -${LIBCXX_SOURCE_DIR}/src/filesystem/directory_iterator.cpp -${LIBCXX_SOURCE_DIR}/src/filesystem/int128_builtins.cpp -${LIBCXX_SOURCE_DIR}/src/filesystem/operations.cpp -${LIBCXX_SOURCE_DIR}/src/functional.cpp -${LIBCXX_SOURCE_DIR}/src/future.cpp -${LIBCXX_SOURCE_DIR}/src/hash.cpp -${LIBCXX_SOURCE_DIR}/src/ios.cpp 
-${LIBCXX_SOURCE_DIR}/src/ios.instantiations.cpp
-${LIBCXX_SOURCE_DIR}/src/iostream.cpp
-${LIBCXX_SOURCE_DIR}/src/locale.cpp
-${LIBCXX_SOURCE_DIR}/src/memory.cpp
-${LIBCXX_SOURCE_DIR}/src/mutex.cpp
-${LIBCXX_SOURCE_DIR}/src/mutex_destructor.cpp
-${LIBCXX_SOURCE_DIR}/src/new.cpp
-${LIBCXX_SOURCE_DIR}/src/optional.cpp
-${LIBCXX_SOURCE_DIR}/src/random.cpp
-${LIBCXX_SOURCE_DIR}/src/random_shuffle.cpp
-${LIBCXX_SOURCE_DIR}/src/regex.cpp
-${LIBCXX_SOURCE_DIR}/src/shared_mutex.cpp
-${LIBCXX_SOURCE_DIR}/src/stdexcept.cpp
-${LIBCXX_SOURCE_DIR}/src/string.cpp
-${LIBCXX_SOURCE_DIR}/src/strstream.cpp
-${LIBCXX_SOURCE_DIR}/src/system_error.cpp
-${LIBCXX_SOURCE_DIR}/src/thread.cpp
-${LIBCXX_SOURCE_DIR}/src/typeinfo.cpp
-${LIBCXX_SOURCE_DIR}/src/utility.cpp
-${LIBCXX_SOURCE_DIR}/src/valarray.cpp
-${LIBCXX_SOURCE_DIR}/src/variant.cpp
-${LIBCXX_SOURCE_DIR}/src/vector.cpp
+"${LIBCXX_SOURCE_DIR}/src/algorithm.cpp"
+"${LIBCXX_SOURCE_DIR}/src/any.cpp"
+"${LIBCXX_SOURCE_DIR}/src/atomic.cpp"
+"${LIBCXX_SOURCE_DIR}/src/barrier.cpp"
+"${LIBCXX_SOURCE_DIR}/src/bind.cpp"
+"${LIBCXX_SOURCE_DIR}/src/charconv.cpp"
+"${LIBCXX_SOURCE_DIR}/src/chrono.cpp"
+"${LIBCXX_SOURCE_DIR}/src/condition_variable.cpp"
+"${LIBCXX_SOURCE_DIR}/src/condition_variable_destructor.cpp"
+"${LIBCXX_SOURCE_DIR}/src/debug.cpp"
+"${LIBCXX_SOURCE_DIR}/src/exception.cpp"
+"${LIBCXX_SOURCE_DIR}/src/experimental/memory_resource.cpp"
+"${LIBCXX_SOURCE_DIR}/src/filesystem/directory_iterator.cpp"
+"${LIBCXX_SOURCE_DIR}/src/filesystem/int128_builtins.cpp"
+"${LIBCXX_SOURCE_DIR}/src/filesystem/operations.cpp"
+"${LIBCXX_SOURCE_DIR}/src/functional.cpp"
+"${LIBCXX_SOURCE_DIR}/src/future.cpp"
+"${LIBCXX_SOURCE_DIR}/src/hash.cpp"
+"${LIBCXX_SOURCE_DIR}/src/ios.cpp"
+"${LIBCXX_SOURCE_DIR}/src/ios.instantiations.cpp"
+"${LIBCXX_SOURCE_DIR}/src/iostream.cpp"
+"${LIBCXX_SOURCE_DIR}/src/locale.cpp"
+"${LIBCXX_SOURCE_DIR}/src/memory.cpp"
+"${LIBCXX_SOURCE_DIR}/src/mutex.cpp"
+"${LIBCXX_SOURCE_DIR}/src/mutex_destructor.cpp"
+"${LIBCXX_SOURCE_DIR}/src/new.cpp"
+"${LIBCXX_SOURCE_DIR}/src/optional.cpp"
+"${LIBCXX_SOURCE_DIR}/src/random.cpp"
+"${LIBCXX_SOURCE_DIR}/src/random_shuffle.cpp"
+"${LIBCXX_SOURCE_DIR}/src/regex.cpp"
+"${LIBCXX_SOURCE_DIR}/src/shared_mutex.cpp"
+"${LIBCXX_SOURCE_DIR}/src/stdexcept.cpp"
+"${LIBCXX_SOURCE_DIR}/src/string.cpp"
+"${LIBCXX_SOURCE_DIR}/src/strstream.cpp"
+"${LIBCXX_SOURCE_DIR}/src/system_error.cpp"
+"${LIBCXX_SOURCE_DIR}/src/thread.cpp"
+"${LIBCXX_SOURCE_DIR}/src/typeinfo.cpp"
+"${LIBCXX_SOURCE_DIR}/src/utility.cpp"
+"${LIBCXX_SOURCE_DIR}/src/valarray.cpp"
+"${LIBCXX_SOURCE_DIR}/src/variant.cpp"
+"${LIBCXX_SOURCE_DIR}/src/vector.cpp"
 )

 add_library(cxx ${SRCS})
@@ -56,6 +56,11 @@ if (USE_UNWIND)
     target_compile_definitions(cxx PUBLIC -DSTD_EXCEPTION_HAS_STACK_TRACE=1)
 endif ()

+# Override the deduced attribute support that causes error.
+if (OS_DARWIN AND COMPILER_GCC)
+    add_compile_definitions(_LIBCPP_INIT_PRIORITY_MAX)
+endif ()
+
 target_compile_options(cxx PUBLIC $<$<COMPILE_LANGUAGE:CXX>:-nostdinc++>)

 # Third party library may have substandard code.
diff --git a/contrib/libcxxabi-cmake/CMakeLists.txt b/contrib/libcxxabi-cmake/CMakeLists.txt
index 9d8b94dabf0..0bb5d663633 100644
--- a/contrib/libcxxabi-cmake/CMakeLists.txt
+++ b/contrib/libcxxabi-cmake/CMakeLists.txt
@@ -1,24 +1,24 @@
-set(LIBCXXABI_SOURCE_DIR ${ClickHouse_SOURCE_DIR}/contrib/libcxxabi)
+set(LIBCXXABI_SOURCE_DIR "${ClickHouse_SOURCE_DIR}/contrib/libcxxabi")

 set(SRCS
-${LIBCXXABI_SOURCE_DIR}/src/stdlib_stdexcept.cpp
-${LIBCXXABI_SOURCE_DIR}/src/cxa_virtual.cpp
-${LIBCXXABI_SOURCE_DIR}/src/cxa_thread_atexit.cpp
-${LIBCXXABI_SOURCE_DIR}/src/fallback_malloc.cpp
-${LIBCXXABI_SOURCE_DIR}/src/cxa_guard.cpp
-${LIBCXXABI_SOURCE_DIR}/src/cxa_default_handlers.cpp
-${LIBCXXABI_SOURCE_DIR}/src/cxa_personality.cpp
-${LIBCXXABI_SOURCE_DIR}/src/stdlib_exception.cpp
-${LIBCXXABI_SOURCE_DIR}/src/abort_message.cpp
-${LIBCXXABI_SOURCE_DIR}/src/cxa_demangle.cpp
-${LIBCXXABI_SOURCE_DIR}/src/cxa_exception.cpp
-${LIBCXXABI_SOURCE_DIR}/src/cxa_handlers.cpp
-${LIBCXXABI_SOURCE_DIR}/src/cxa_exception_storage.cpp
-${LIBCXXABI_SOURCE_DIR}/src/private_typeinfo.cpp
-${LIBCXXABI_SOURCE_DIR}/src/stdlib_typeinfo.cpp
-${LIBCXXABI_SOURCE_DIR}/src/cxa_aux_runtime.cpp
-${LIBCXXABI_SOURCE_DIR}/src/cxa_vector.cpp
-${LIBCXXABI_SOURCE_DIR}/src/stdlib_new_delete.cpp
+"${LIBCXXABI_SOURCE_DIR}/src/stdlib_stdexcept.cpp"
+"${LIBCXXABI_SOURCE_DIR}/src/cxa_virtual.cpp"
+"${LIBCXXABI_SOURCE_DIR}/src/cxa_thread_atexit.cpp"
+"${LIBCXXABI_SOURCE_DIR}/src/fallback_malloc.cpp"
+"${LIBCXXABI_SOURCE_DIR}/src/cxa_guard.cpp"
+"${LIBCXXABI_SOURCE_DIR}/src/cxa_default_handlers.cpp"
+"${LIBCXXABI_SOURCE_DIR}/src/cxa_personality.cpp"
+"${LIBCXXABI_SOURCE_DIR}/src/stdlib_exception.cpp"
+"${LIBCXXABI_SOURCE_DIR}/src/abort_message.cpp"
+"${LIBCXXABI_SOURCE_DIR}/src/cxa_demangle.cpp"
+"${LIBCXXABI_SOURCE_DIR}/src/cxa_exception.cpp"
+"${LIBCXXABI_SOURCE_DIR}/src/cxa_handlers.cpp"
+"${LIBCXXABI_SOURCE_DIR}/src/cxa_exception_storage.cpp"
+"${LIBCXXABI_SOURCE_DIR}/src/private_typeinfo.cpp"
+"${LIBCXXABI_SOURCE_DIR}/src/stdlib_typeinfo.cpp"
+"${LIBCXXABI_SOURCE_DIR}/src/cxa_aux_runtime.cpp"
+"${LIBCXXABI_SOURCE_DIR}/src/cxa_vector.cpp"
+"${LIBCXXABI_SOURCE_DIR}/src/stdlib_new_delete.cpp"
 )

 add_library(cxxabi ${SRCS})
diff --git a/contrib/libdivide/libdivide.h b/contrib/libdivide/libdivide.h
index 81057b7b43d..33d210310a1 100644
--- a/contrib/libdivide/libdivide.h
+++ b/contrib/libdivide/libdivide.h
@@ -18,78 +18,79 @@
 #include <stdint.h>

 #if defined(__cplusplus)
-    #include <cstdlib>
-    #include <cstdio>
-    #include <type_traits>
+#include <cstdio>
+#include <cstdlib>
+#include <type_traits>
 #else
-    #include <stdlib.h>
-    #include <stdio.h>
+#include <stdio.h>
+#include <stdlib.h>
 #endif

-#if defined(LIBDIVIDE_AVX512)
-    #include <immintrin.h>
-#elif defined(LIBDIVIDE_AVX2)
-    #include <immintrin.h>
-#elif defined(LIBDIVIDE_SSE2)
-    #include <emmintrin.h>
+#if defined(LIBDIVIDE_SSE2)
+#include <emmintrin.h>
+#endif
+#if defined(LIBDIVIDE_AVX2) || defined(LIBDIVIDE_AVX512)
+#include <immintrin.h>
+#endif
+#if defined(LIBDIVIDE_NEON)
+#include <arm_neon.h>
 #endif

 #if defined(_MSC_VER)
-    #include <intrin.h>
-    // disable warning C4146: unary minus operator applied
-    // to unsigned type, result still unsigned
-    #pragma warning(disable: 4146)
-    #define LIBDIVIDE_VC
+#include <intrin.h>
+// disable warning C4146: unary minus operator applied
+// to unsigned type, result still unsigned
+#pragma warning(disable : 4146)
+#define LIBDIVIDE_VC
 #endif

 #if !defined(__has_builtin)
-    #define __has_builtin(x) 0
+#define __has_builtin(x) 0
 #endif

 #if defined(__SIZEOF_INT128__)
-    #define HAS_INT128_T
-    // clang-cl on Windows does not yet support 128-bit division
-    #if !(defined(__clang__) && defined(LIBDIVIDE_VC))
-        #define HAS_INT128_DIV
-    #endif
+#define HAS_INT128_T
+// clang-cl on
Windows does not yet support 128-bit division +#if !(defined(__clang__) && defined(LIBDIVIDE_VC)) +#define HAS_INT128_DIV +#endif #endif #if defined(__x86_64__) || defined(_M_X64) - #define LIBDIVIDE_X86_64 +#define LIBDIVIDE_X86_64 #endif #if defined(__i386__) - #define LIBDIVIDE_i386 +#define LIBDIVIDE_i386 #endif #if defined(__GNUC__) || defined(__clang__) - #define LIBDIVIDE_GCC_STYLE_ASM +#define LIBDIVIDE_GCC_STYLE_ASM #endif #if defined(__cplusplus) || defined(LIBDIVIDE_VC) - #define LIBDIVIDE_FUNCTION __FUNCTION__ +#define LIBDIVIDE_FUNCTION __FUNCTION__ #else - #define LIBDIVIDE_FUNCTION __func__ +#define LIBDIVIDE_FUNCTION __func__ #endif -#define LIBDIVIDE_ERROR(msg) \ - do { \ - fprintf(stderr, "libdivide.h:%d: %s(): Error: %s\n", \ - __LINE__, LIBDIVIDE_FUNCTION, msg); \ - abort(); \ +#define LIBDIVIDE_ERROR(msg) \ + do { \ + fprintf(stderr, "libdivide.h:%d: %s(): Error: %s\n", __LINE__, LIBDIVIDE_FUNCTION, msg); \ + abort(); \ } while (0) #if defined(LIBDIVIDE_ASSERTIONS_ON) - #define LIBDIVIDE_ASSERT(x) \ - do { \ - if (!(x)) { \ - fprintf(stderr, "libdivide.h:%d: %s(): Assertion failed: %s\n", \ - __LINE__, LIBDIVIDE_FUNCTION, #x); \ - abort(); \ - } \ - } while (0) +#define LIBDIVIDE_ASSERT(x) \ + do { \ + if (!(x)) { \ + fprintf(stderr, "libdivide.h:%d: %s(): Assertion failed: %s\n", __LINE__, \ + LIBDIVIDE_FUNCTION, #x); \ + abort(); \ + } \ + } while (0) #else - #define LIBDIVIDE_ASSERT(x) +#define LIBDIVIDE_ASSERT(x) #endif #ifdef __cplusplus @@ -193,25 +194,33 @@ static inline struct libdivide_u32_branchfree_t libdivide_u32_branchfree_gen(uin static inline struct libdivide_s64_branchfree_t libdivide_s64_branchfree_gen(int64_t d); static inline struct libdivide_u64_branchfree_t libdivide_u64_branchfree_gen(uint64_t d); -static inline int32_t libdivide_s32_do(int32_t numer, const struct libdivide_s32_t *denom); +static inline int32_t libdivide_s32_do(int32_t numer, const struct libdivide_s32_t *denom); static inline uint32_t libdivide_u32_do(uint32_t numer, const struct libdivide_u32_t *denom); -static inline int64_t libdivide_s64_do(int64_t numer, const struct libdivide_s64_t *denom); +static inline int64_t libdivide_s64_do(int64_t numer, const struct libdivide_s64_t *denom); static inline uint64_t libdivide_u64_do(uint64_t numer, const struct libdivide_u64_t *denom); -static inline int32_t libdivide_s32_branchfree_do(int32_t numer, const struct libdivide_s32_branchfree_t *denom); -static inline uint32_t libdivide_u32_branchfree_do(uint32_t numer, const struct libdivide_u32_branchfree_t *denom); -static inline int64_t libdivide_s64_branchfree_do(int64_t numer, const struct libdivide_s64_branchfree_t *denom); -static inline uint64_t libdivide_u64_branchfree_do(uint64_t numer, const struct libdivide_u64_branchfree_t *denom); +static inline int32_t libdivide_s32_branchfree_do( + int32_t numer, const struct libdivide_s32_branchfree_t *denom); +static inline uint32_t libdivide_u32_branchfree_do( + uint32_t numer, const struct libdivide_u32_branchfree_t *denom); +static inline int64_t libdivide_s64_branchfree_do( + int64_t numer, const struct libdivide_s64_branchfree_t *denom); +static inline uint64_t libdivide_u64_branchfree_do( + uint64_t numer, const struct libdivide_u64_branchfree_t *denom); -static inline int32_t libdivide_s32_recover(const struct libdivide_s32_t *denom); +static inline int32_t libdivide_s32_recover(const struct libdivide_s32_t *denom); static inline uint32_t libdivide_u32_recover(const struct libdivide_u32_t *denom); -static inline int64_t 
libdivide_s64_recover(const struct libdivide_s64_t *denom); +static inline int64_t libdivide_s64_recover(const struct libdivide_s64_t *denom); static inline uint64_t libdivide_u64_recover(const struct libdivide_u64_t *denom); -static inline int32_t libdivide_s32_branchfree_recover(const struct libdivide_s32_branchfree_t *denom); -static inline uint32_t libdivide_u32_branchfree_recover(const struct libdivide_u32_branchfree_t *denom); -static inline int64_t libdivide_s64_branchfree_recover(const struct libdivide_s64_branchfree_t *denom); -static inline uint64_t libdivide_u64_branchfree_recover(const struct libdivide_u64_branchfree_t *denom); +static inline int32_t libdivide_s32_branchfree_recover( + const struct libdivide_s32_branchfree_t *denom); +static inline uint32_t libdivide_u32_branchfree_recover( + const struct libdivide_u32_branchfree_t *denom); +static inline int64_t libdivide_s64_branchfree_recover( + const struct libdivide_s64_branchfree_t *denom); +static inline uint64_t libdivide_u64_branchfree_recover( + const struct libdivide_u64_branchfree_t *denom); //////// Internal Utility Functions @@ -229,8 +238,7 @@ static inline int32_t libdivide_mullhi_s32(int32_t x, int32_t y) { } static inline uint64_t libdivide_mullhi_u64(uint64_t x, uint64_t y) { -#if defined(LIBDIVIDE_VC) && \ - defined(LIBDIVIDE_X86_64) +#if defined(LIBDIVIDE_VC) && defined(LIBDIVIDE_X86_64) return __umulh(x, y); #elif defined(HAS_INT128_T) __uint128_t xl = x, yl = y; @@ -256,8 +264,7 @@ static inline uint64_t libdivide_mullhi_u64(uint64_t x, uint64_t y) { } static inline int64_t libdivide_mullhi_s64(int64_t x, int64_t y) { -#if defined(LIBDIVIDE_VC) && \ - defined(LIBDIVIDE_X86_64) +#if defined(LIBDIVIDE_VC) && defined(LIBDIVIDE_X86_64) return __mulh(x, y); #elif defined(HAS_INT128_T) __int128_t xl = x, yl = y; @@ -279,8 +286,7 @@ static inline int64_t libdivide_mullhi_s64(int64_t x, int64_t y) { } static inline int32_t libdivide_count_leading_zeros32(uint32_t val) { -#if defined(__GNUC__) || \ - __has_builtin(__builtin_clz) +#if defined(__GNUC__) || __has_builtin(__builtin_clz) // Fast way to count leading zeros return __builtin_clz(val); #elif defined(LIBDIVIDE_VC) @@ -290,8 +296,7 @@ static inline int32_t libdivide_count_leading_zeros32(uint32_t val) { } return 0; #else - if (val == 0) - return 32; + if (val == 0) return 32; int32_t result = 8; uint32_t hi = 0xFFU << 24; while ((val & hi) == 0) { @@ -307,8 +312,7 @@ static inline int32_t libdivide_count_leading_zeros32(uint32_t val) { } static inline int32_t libdivide_count_leading_zeros64(uint64_t val) { -#if defined(__GNUC__) || \ - __has_builtin(__builtin_clzll) +#if defined(__GNUC__) || __has_builtin(__builtin_clzll) // Fast way to count leading zeros return __builtin_clzll(val); #elif defined(LIBDIVIDE_VC) && defined(_WIN64) @@ -328,14 +332,11 @@ static inline int32_t libdivide_count_leading_zeros64(uint64_t val) { // libdivide_64_div_32_to_32: divides a 64-bit uint {u1, u0} by a 32-bit // uint {v}. The result must fit in 32 bits. 
// Returns the quotient directly and the remainder in *r -static inline uint32_t libdivide_64_div_32_to_32(uint32_t u1, uint32_t u0, uint32_t v, uint32_t *r) { -#if (defined(LIBDIVIDE_i386) || defined(LIBDIVIDE_X86_64)) && \ - defined(LIBDIVIDE_GCC_STYLE_ASM) +static inline uint32_t libdivide_64_div_32_to_32( + uint32_t u1, uint32_t u0, uint32_t v, uint32_t *r) { +#if (defined(LIBDIVIDE_i386) || defined(LIBDIVIDE_X86_64)) && defined(LIBDIVIDE_GCC_STYLE_ASM) uint32_t result; - __asm__("divl %[v]" - : "=a"(result), "=d"(*r) - : [v] "r"(v), "a"(u0), "d"(u1) - ); + __asm__("divl %[v]" : "=a"(result), "=d"(*r) : [v] "r"(v), "a"(u0), "d"(u1)); return result; #else uint64_t n = ((uint64_t)u1 << 32) | u0; @@ -349,19 +350,13 @@ static inline uint32_t libdivide_64_div_32_to_32(uint32_t u1, uint32_t u0, uint3 // uint {v}. The result must fit in 64 bits. // Returns the quotient directly and the remainder in *r static uint64_t libdivide_128_div_64_to_64(uint64_t u1, uint64_t u0, uint64_t v, uint64_t *r) { -#if defined(LIBDIVIDE_X86_64) && \ - defined(LIBDIVIDE_GCC_STYLE_ASM) + // N.B. resist the temptation to use __uint128_t here. + // In LLVM compiler-rt, it performs a 128/128 -> 128 division which is many times slower than + // necessary. In gcc it's better but still slower than the divlu implementation, perhaps because + // it's not inlined. +#if defined(LIBDIVIDE_X86_64) && defined(LIBDIVIDE_GCC_STYLE_ASM) uint64_t result; - __asm__("divq %[v]" - : "=a"(result), "=d"(*r) - : [v] "r"(v), "a"(u0), "d"(u1) - ); - return result; -#elif defined(HAS_INT128_T) && \ - defined(HAS_INT128_DIV) - __uint128_t n = ((__uint128_t)u1 << 64) | u0; - uint64_t result = (uint64_t)(n / v); - *r = (uint64_t)(n - result * (__uint128_t)v); + __asm__("divq %[v]" : "=a"(result), "=d"(*r) : [v] "r"(v), "a"(u0), "d"(u1)); return result; #else // Code taken from Hacker's Delight: @@ -369,19 +364,19 @@ static uint64_t libdivide_128_div_64_to_64(uint64_t u1, uint64_t u0, uint64_t v, // License permits inclusion here per: // http://www.hackersdelight.org/permissions.htm - const uint64_t b = (1ULL << 32); // Number base (32 bits) - uint64_t un1, un0; // Norm. dividend LSD's - uint64_t vn1, vn0; // Norm. divisor digits - uint64_t q1, q0; // Quotient digits - uint64_t un64, un21, un10; // Dividend digit pairs - uint64_t rhat; // A remainder - int32_t s; // Shift amount for norm + const uint64_t b = (1ULL << 32); // Number base (32 bits) + uint64_t un1, un0; // Norm. dividend LSD's + uint64_t vn1, vn0; // Norm. divisor digits + uint64_t q1, q0; // Quotient digits + uint64_t un64, un21, un10; // Dividend digit pairs + uint64_t rhat; // A remainder + int32_t s; // Shift amount for norm // If overflow, set rem. to an impossible value, // and return the largest possible quotient if (u1 >= v) { - *r = (uint64_t) -1; - return (uint64_t) -1; + *r = (uint64_t)-1; + return (uint64_t)-1; } // count leading zeros @@ -390,7 +385,7 @@ static uint64_t libdivide_128_div_64_to_64(uint64_t u1, uint64_t u0, uint64_t v, // Normalize divisor v = v << s; un64 = (u1 << s) | (u0 >> (64 - s)); - un10 = u0 << s; // Shift dividend left + un10 = u0 << s; // Shift dividend left } else { // Avoid undefined behavior of (u0 >> 64). 
// The behavior is undefined if the right operand is @@ -415,11 +410,10 @@ static uint64_t libdivide_128_div_64_to_64(uint64_t u1, uint64_t u0, uint64_t v, while (q1 >= b || q1 * vn0 > b * rhat + un1) { q1 = q1 - 1; rhat = rhat + vn1; - if (rhat >= b) - break; + if (rhat >= b) break; } - // Multiply and subtract + // Multiply and subtract un21 = un64 * b + un1 - q1 * v; // Compute the second quotient digit @@ -429,8 +423,7 @@ static uint64_t libdivide_128_div_64_to_64(uint64_t u1, uint64_t u0, uint64_t v, while (q0 >= b || q0 * vn0 > b * rhat + un0) { q0 = q0 - 1; rhat = rhat + vn1; - if (rhat >= b) - break; + if (rhat >= b) break; } *r = (un21 * b + un0 - q0 * v) >> s; @@ -445,8 +438,7 @@ static inline void libdivide_u128_shift(uint64_t *u1, uint64_t *u0, int32_t sign *u1 <<= shift; *u1 |= *u0 >> (64 - shift); *u0 <<= shift; - } - else if (signed_shift < 0) { + } else if (signed_shift < 0) { uint32_t shift = -signed_shift; *u0 >>= shift; *u0 |= *u1 << (64 - shift); @@ -455,9 +447,9 @@ static inline void libdivide_u128_shift(uint64_t *u1, uint64_t *u0, int32_t sign } // Computes a 128 / 128 -> 64 bit division, with a 128 bit remainder. -static uint64_t libdivide_128_div_128_to_64(uint64_t u_hi, uint64_t u_lo, uint64_t v_hi, uint64_t v_lo, uint64_t *r_hi, uint64_t *r_lo) { -#if defined(HAS_INT128_T) && \ - defined(HAS_INT128_DIV) +static uint64_t libdivide_128_div_128_to_64( + uint64_t u_hi, uint64_t u_lo, uint64_t v_hi, uint64_t v_lo, uint64_t *r_hi, uint64_t *r_lo) { +#if defined(HAS_INT128_T) && defined(HAS_INT128_DIV) __uint128_t ufull = u_hi; __uint128_t vfull = v_hi; ufull = (ufull << 64) | u_lo; @@ -470,7 +462,10 @@ static uint64_t libdivide_128_div_128_to_64(uint64_t u_hi, uint64_t u_lo, uint64 #else // Adapted from "Unsigned Doubleword Division" in Hacker's Delight // We want to compute u / v - typedef struct { uint64_t hi; uint64_t lo; } u128_t; + typedef struct { + uint64_t hi; + uint64_t lo; + } u128_t; u128_t u = {u_hi, u_lo}; u128_t v = {v_hi, v_lo}; @@ -490,7 +485,7 @@ static uint64_t libdivide_128_div_128_to_64(uint64_t u_hi, uint64_t u_lo, uint64 // Normalize the divisor so its MSB is 1 u128_t v1t = v; libdivide_u128_shift(&v1t.hi, &v1t.lo, n); - uint64_t v1 = v1t.hi; // i.e. v1 = v1t >> 64 + uint64_t v1 = v1t.hi; // i.e. v1 = v1t >> 64 // To ensure no overflow u128_t u1 = u; @@ -508,7 +503,7 @@ static uint64_t libdivide_128_div_128_to_64(uint64_t u_hi, uint64_t u_lo, uint64 // Make q0 correct or too small by 1 // Equivalent to `if (q0 != 0) q0 = q0 - 1;` if (q0.hi != 0 || q0.lo != 0) { - q0.hi -= (q0.lo == 0); // borrow + q0.hi -= (q0.lo == 0); // borrow q0.lo -= 1; } @@ -520,22 +515,21 @@ static uint64_t libdivide_128_div_128_to_64(uint64_t u_hi, uint64_t u_lo, uint64 // Each term is 128 bit // High half of full product (upper 128 bits!) 
are dropped u128_t q0v = {0, 0}; - q0v.hi = q0.hi*v.lo + q0.lo*v.hi + libdivide_mullhi_u64(q0.lo, v.lo); - q0v.lo = q0.lo*v.lo; + q0v.hi = q0.hi * v.lo + q0.lo * v.hi + libdivide_mullhi_u64(q0.lo, v.lo); + q0v.lo = q0.lo * v.lo; // Compute u - q0v as u_q0v // This is the remainder u128_t u_q0v = u; - u_q0v.hi -= q0v.hi + (u.lo < q0v.lo); // second term is borrow + u_q0v.hi -= q0v.hi + (u.lo < q0v.lo); // second term is borrow u_q0v.lo -= q0v.lo; // Check if u_q0v >= v // This checks if our remainder is larger than the divisor - if ((u_q0v.hi > v.hi) || - (u_q0v.hi == v.hi && u_q0v.lo >= v.lo)) { + if ((u_q0v.hi > v.hi) || (u_q0v.hi == v.hi && u_q0v.lo >= v.lo)) { // Increment q0 q0.lo += 1; - q0.hi += (q0.lo == 0); // carry + q0.hi += (q0.lo == 0); // carry // Subtract v from remainder u_q0v.hi -= v.hi + (u_q0v.lo < v.lo); @@ -611,7 +605,8 @@ struct libdivide_u32_branchfree_t libdivide_u32_branchfree_gen(uint32_t d) { LIBDIVIDE_ERROR("branchfree divider must be != 1"); } struct libdivide_u32_t tmp = libdivide_internal_u32_gen(d, 1); - struct libdivide_u32_branchfree_t ret = {tmp.magic, (uint8_t)(tmp.more & LIBDIVIDE_32_SHIFT_MASK)}; + struct libdivide_u32_branchfree_t ret = { + tmp.magic, (uint8_t)(tmp.more & LIBDIVIDE_32_SHIFT_MASK)}; return ret; } @@ -619,14 +614,12 @@ uint32_t libdivide_u32_do(uint32_t numer, const struct libdivide_u32_t *denom) { uint8_t more = denom->more; if (!denom->magic) { return numer >> more; - } - else { + } else { uint32_t q = libdivide_mullhi_u32(denom->magic, numer); if (more & LIBDIVIDE_ADD_MARKER) { uint32_t t = ((numer - q) >> 1) + q; return t >> (more & LIBDIVIDE_32_SHIFT_MASK); - } - else { + } else { // All upper bits are 0, // don't need to mask them off. return q >> more; @@ -634,7 +627,8 @@ uint32_t libdivide_u32_do(uint32_t numer, const struct libdivide_u32_t *denom) { } } -uint32_t libdivide_u32_branchfree_do(uint32_t numer, const struct libdivide_u32_branchfree_t *denom) { +uint32_t libdivide_u32_branchfree_do( + uint32_t numer, const struct libdivide_u32_branchfree_t *denom) { uint32_t q = libdivide_mullhi_u32(denom->magic, numer); uint32_t t = ((numer - q) >> 1) + q; return t >> denom->more; @@ -671,7 +665,7 @@ uint32_t libdivide_u32_recover(const struct libdivide_u32_t *denom) { // Need to double it, and then add 1 to the quotient if doubling the // remainder would increase the quotient. // Note that rem<<1 cannot overflow, since rem < d and d is 33 bits - uint32_t full_q = half_q + half_q + ((rem<<1) >= d); + uint32_t full_q = half_q + half_q + ((rem << 1) >= d); // We rounded down in gen (hence +1) return full_q + 1; @@ -700,7 +694,7 @@ uint32_t libdivide_u32_branchfree_recover(const struct libdivide_u32_branchfree_ // Need to double it, and then add 1 to the quotient if doubling the // remainder would increase the quotient.
// Note that rem<<1 cannot overflow, since rem < d and d is 33 bits - uint32_t full_q = half_q + half_q + ((rem<<1) >= d); + uint32_t full_q = half_q + half_q + ((rem << 1) >= d); // We rounded down in gen (hence +1) return full_q + 1; @@ -747,7 +741,7 @@ static inline struct libdivide_u64_t libdivide_internal_u64_gen(uint64_t d, int proposed_m += proposed_m; const uint64_t twice_rem = rem + rem; if (twice_rem >= d || twice_rem < rem) proposed_m += 1; - more = floor_log_2_d | LIBDIVIDE_ADD_MARKER; + more = floor_log_2_d | LIBDIVIDE_ADD_MARKER; } result.magic = 1 + proposed_m; result.more = more; @@ -770,7 +764,8 @@ struct libdivide_u64_branchfree_t libdivide_u64_branchfree_gen(uint64_t d) { LIBDIVIDE_ERROR("branchfree divider must be != 1"); } struct libdivide_u64_t tmp = libdivide_internal_u64_gen(d, 1); - struct libdivide_u64_branchfree_t ret = {tmp.magic, (uint8_t)(tmp.more & LIBDIVIDE_64_SHIFT_MASK)}; + struct libdivide_u64_branchfree_t ret = { + tmp.magic, (uint8_t)(tmp.more & LIBDIVIDE_64_SHIFT_MASK)}; return ret; } @@ -778,22 +773,21 @@ uint64_t libdivide_u64_do(uint64_t numer, const struct libdivide_u64_t *denom) { uint8_t more = denom->more; if (!denom->magic) { return numer >> more; - } - else { + } else { uint64_t q = libdivide_mullhi_u64(denom->magic, numer); if (more & LIBDIVIDE_ADD_MARKER) { uint64_t t = ((numer - q) >> 1) + q; return t >> (more & LIBDIVIDE_64_SHIFT_MASK); - } - else { - // All upper bits are 0, - // don't need to mask them off. + } else { + // All upper bits are 0, + // don't need to mask them off. return q >> more; } } } -uint64_t libdivide_u64_branchfree_do(uint64_t numer, const struct libdivide_u64_branchfree_t *denom) { +uint64_t libdivide_u64_branchfree_do( + uint64_t numer, const struct libdivide_u64_branchfree_t *denom) { uint64_t q = libdivide_mullhi_u64(denom->magic, numer); uint64_t t = ((numer - q) >> 1) + q; return t >> denom->more; @@ -829,13 +823,14 @@ uint64_t libdivide_u64_recover(const struct libdivide_u64_t *denom) { // Note that the quotient is guaranteed <= 64 bits, // but the remainder may need 65! uint64_t r_hi, r_lo; - uint64_t half_q = libdivide_128_div_128_to_64(half_n_hi, half_n_lo, d_hi, d_lo, &r_hi, &r_lo); + uint64_t half_q = + libdivide_128_div_128_to_64(half_n_hi, half_n_lo, d_hi, d_lo, &r_hi, &r_lo); // We computed 2^(64+shift)/(m+2^64) // Double the remainder ('dr') and check if that is larger than d // Note that d is a 65 bit value, so r1 is small and so r1 + r1 // cannot overflow uint64_t dr_lo = r_lo + r_lo; - uint64_t dr_hi = r_hi + r_hi + (dr_lo < r_lo); // last term is carry + uint64_t dr_hi = r_hi + r_hi + (dr_lo < r_lo); // last term is carry int dr_exceeds_d = (dr_hi > d_hi) || (dr_hi == d_hi && dr_lo >= d_lo); uint64_t full_q = half_q + half_q + (dr_exceeds_d ? 1 : 0); return full_q + 1; @@ -863,13 +858,14 @@ uint64_t libdivide_u64_branchfree_recover(const struct libdivide_u64_branchfree_ // Note that the quotient is guaranteed <= 64 bits, // but the remainder may need 65! 
uint64_t r_hi, r_lo; - uint64_t half_q = libdivide_128_div_128_to_64(half_n_hi, half_n_lo, d_hi, d_lo, &r_hi, &r_lo); + uint64_t half_q = + libdivide_128_div_128_to_64(half_n_hi, half_n_lo, d_hi, d_lo, &r_hi, &r_lo); // We computed 2^(64+shift)/(m+2^64) // Double the remainder ('dr') and check if that is larger than d // Note that d is a 65 bit value, so r1 is small and so r1 + r1 // cannot overflow uint64_t dr_lo = r_lo + r_lo; - uint64_t dr_hi = r_hi + r_hi + (dr_lo < r_lo); // last term is carry + uint64_t dr_hi = r_hi + r_hi + (dr_lo < r_lo); // last term is carry int dr_exceeds_d = (dr_hi > d_hi) || (dr_hi == d_hi && dr_lo >= d_lo); uint64_t full_q = half_q + half_q + (dr_exceeds_d ? 1 : 0); return full_q + 1; @@ -1023,8 +1019,7 @@ int32_t libdivide_s32_recover(const struct libdivide_s32_t *denom) { // the magic number's sign is opposite that of the divisor. // We want to compute the positive magic number. int negative_divisor = (more & LIBDIVIDE_NEGATIVE_DIVISOR); - int magic_was_negated = (more & LIBDIVIDE_ADD_MARKER) - ? denom->magic > 0 : denom->magic < 0; + int magic_was_negated = (more & LIBDIVIDE_ADD_MARKER) ? denom->magic > 0 : denom->magic < 0; // Handle the power of 2 case (including branchfree) if (denom->magic == 0) { @@ -1033,7 +1028,7 @@ int32_t libdivide_s32_recover(const struct libdivide_s32_t *denom) { } uint32_t d = (uint32_t)(magic_was_negated ? -denom->magic : denom->magic); - uint64_t n = 1ULL << (32 + shift); // this shift cannot exceed 30 + uint64_t n = 1ULL << (32 + shift); // this shift cannot exceed 30 uint32_t q = (uint32_t)(n / d); int32_t result = (int32_t)q; result += 1; @@ -1126,7 +1121,7 @@ int64_t libdivide_s64_do(int64_t numer, const struct libdivide_s64_t *denom) { uint8_t more = denom->more; uint8_t shift = more & LIBDIVIDE_64_SHIFT_MASK; - if (!denom->magic) { // shift path + if (!denom->magic) { // shift path uint64_t mask = (1ULL << shift) - 1; uint64_t uq = numer + ((numer >> 63) & mask); int64_t q = (int64_t)uq; @@ -1178,7 +1173,7 @@ int64_t libdivide_s64_branchfree_do(int64_t numer, const struct libdivide_s64_br int64_t libdivide_s64_recover(const struct libdivide_s64_t *denom) { uint8_t more = denom->more; uint8_t shift = more & LIBDIVIDE_64_SHIFT_MASK; - if (denom->magic == 0) { // shift path + if (denom->magic == 0) { // shift path uint64_t absD = 1ULL << shift; if (more & LIBDIVIDE_NEGATIVE_DIVISOR) { absD = -absD; @@ -1187,8 +1182,7 @@ int64_t libdivide_s64_recover(const struct libdivide_s64_t *denom) { } else { // Unsigned math is much easier int negative_divisor = (more & LIBDIVIDE_NEGATIVE_DIVISOR); - int magic_was_negated = (more & LIBDIVIDE_ADD_MARKER) - ? denom->magic > 0 : denom->magic < 0; + int magic_was_negated = (more & LIBDIVIDE_ADD_MARKER) ? denom->magic > 0 : denom->magic < 0; uint64_t d = (uint64_t)(magic_was_negated ? 
-denom->magic : denom->magic); uint64_t n_hi = 1ULL << shift, n_lo = 0; @@ -1206,30 +1200,305 @@ int64_t libdivide_s64_branchfree_recover(const struct libdivide_s64_branchfree_t return libdivide_s64_recover((const struct libdivide_s64_t *)denom); } -#if defined(LIBDIVIDE_AVX512) +#if defined(LIBDIVIDE_NEON) -static inline __m512i libdivide_u32_do_vector(__m512i numers, const struct libdivide_u32_t *denom); -static inline __m512i libdivide_s32_do_vector(__m512i numers, const struct libdivide_s32_t *denom); -static inline __m512i libdivide_u64_do_vector(__m512i numers, const struct libdivide_u64_t *denom); -static inline __m512i libdivide_s64_do_vector(__m512i numers, const struct libdivide_s64_t *denom); +static inline uint32x4_t libdivide_u32_do_vec128( + uint32x4_t numers, const struct libdivide_u32_t *denom); +static inline int32x4_t libdivide_s32_do_vec128( + int32x4_t numers, const struct libdivide_s32_t *denom); +static inline uint64x2_t libdivide_u64_do_vec128( + uint64x2_t numers, const struct libdivide_u64_t *denom); +static inline int64x2_t libdivide_s64_do_vec128( + int64x2_t numers, const struct libdivide_s64_t *denom); -static inline __m512i libdivide_u32_branchfree_do_vector(__m512i numers, const struct libdivide_u32_branchfree_t *denom); -static inline __m512i libdivide_s32_branchfree_do_vector(__m512i numers, const struct libdivide_s32_branchfree_t *denom); -static inline __m512i libdivide_u64_branchfree_do_vector(__m512i numers, const struct libdivide_u64_branchfree_t *denom); -static inline __m512i libdivide_s64_branchfree_do_vector(__m512i numers, const struct libdivide_s64_branchfree_t *denom); +static inline uint32x4_t libdivide_u32_branchfree_do_vec128( + uint32x4_t numers, const struct libdivide_u32_branchfree_t *denom); +static inline int32x4_t libdivide_s32_branchfree_do_vec128( + int32x4_t numers, const struct libdivide_s32_branchfree_t *denom); +static inline uint64x2_t libdivide_u64_branchfree_do_vec128( + uint64x2_t numers, const struct libdivide_u64_branchfree_t *denom); +static inline int64x2_t libdivide_s64_branchfree_do_vec128( + int64x2_t numers, const struct libdivide_s64_branchfree_t *denom); //////// Internal Utility Functions -static inline __m512i libdivide_s64_signbits(__m512i v) {; +// Logical right shift by runtime value. +// NEON implements right shift as left shifts by negative values. +static inline uint32x4_t libdivide_u32_neon_srl(uint32x4_t v, uint8_t amt) { + int32_t wamt = static_cast<int32_t>(amt); + return vshlq_u32(v, vdupq_n_s32(-wamt)); +} + +static inline uint64x2_t libdivide_u64_neon_srl(uint64x2_t v, uint8_t amt) { + int64_t wamt = static_cast<int64_t>(amt); + return vshlq_u64(v, vdupq_n_s64(-wamt)); +} + +// Arithmetic right shift by runtime value.
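NEON offers no vector shift that takes a runtime count other than vshlq_*, so the helpers above negate the count and shift left; a negative per-lane count turns the left shift into a right shift. A minimal standalone sketch of the same trick, assuming an AArch64 target (demo_u32_srl is a hypothetical name, not part of the patch):

#include <arm_neon.h>
#include <stdint.h>
#include <stdio.h>

// Hypothetical demo, not part of the patch: logical right shift of four lanes
// by a runtime amount, expressed as vshlq_u32 with a negated (negative) count.
static uint32x4_t demo_u32_srl(uint32x4_t v, uint8_t amt) {
    return vshlq_u32(v, vdupq_n_s32(-(int32_t)amt));
}

int main(void) {
    uint32_t in[4] = {0x80000000u, 256u, 17u, 3u};
    uint32_t out[4];
    vst1q_u32(out, demo_u32_srl(vld1q_u32(in), 4));
    for (int i = 0; i < 4; i++)
        printf("%u >> 4 = %u\n", in[i], out[i]); // matches the plain scalar shift
    return 0;
}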
+static inline int32x4_t libdivide_s32_neon_sra(int32x4_t v, uint8_t amt) { + int32_t wamt = static_cast<int32_t>(amt); + return vshlq_s32(v, vdupq_n_s32(-wamt)); +} + +static inline int64x2_t libdivide_s64_neon_sra(int64x2_t v, uint8_t amt) { + int64_t wamt = static_cast<int64_t>(amt); + return vshlq_s64(v, vdupq_n_s64(-wamt)); +} + +static inline int64x2_t libdivide_s64_signbits(int64x2_t v) { return vshrq_n_s64(v, 63); } + +static inline uint32x4_t libdivide_mullhi_u32_vec128(uint32x4_t a, uint32_t b) { + // Desire is [x0, x1, x2, x3] + uint32x4_t w1 = vreinterpretq_u32_u64(vmull_n_u32(vget_low_u32(a), b)); // [_, x0, _, x1] + uint32x4_t w2 = vreinterpretq_u32_u64(vmull_high_n_u32(a, b)); //[_, x2, _, x3] + return vuzp2q_u32(w1, w2); // [x0, x1, x2, x3] +} + +static inline int32x4_t libdivide_mullhi_s32_vec128(int32x4_t a, int32_t b) { + int32x4_t w1 = vreinterpretq_s32_s64(vmull_n_s32(vget_low_s32(a), b)); // [_, x0, _, x1] + int32x4_t w2 = vreinterpretq_s32_s64(vmull_high_n_s32(a, b)); //[_, x2, _, x3] + return vuzp2q_s32(w1, w2); // [x0, x1, x2, x3] +} + +static inline uint64x2_t libdivide_mullhi_u64_vec128(uint64x2_t x, uint64_t sy) { + // full 128 bits product is: + // x0*y0 + (x0*y1 << 32) + (x1*y0 << 32) + (x1*y1 << 64) + // Note x0,y0,x1,y1 are all conceptually uint32, products are 32x32->64. + + // Get low and high words. x0 contains low 32 bits, x1 is high 32 bits. + uint64x2_t y = vdupq_n_u64(sy); + uint32x2_t x0 = vmovn_u64(x); + uint32x2_t y0 = vmovn_u64(y); + uint32x2_t x1 = vshrn_n_u64(x, 32); + uint32x2_t y1 = vshrn_n_u64(y, 32); + + // Compute x0*y0. + uint64x2_t x0y0 = vmull_u32(x0, y0); + uint64x2_t x0y0_hi = vshrq_n_u64(x0y0, 32); + + // Compute other intermediate products. + uint64x2_t temp = vmlal_u32(x0y0_hi, x1, y0); // temp = x0y0_hi + x1*y0; + // We want to split temp into its low 32 bits and high 32 bits, both + // in the low half of 64 bit registers. + // Use shifts to avoid needing a reg for the mask. + uint64x2_t temp_lo = vshrq_n_u64(vshlq_n_u64(temp, 32), 32); // temp_lo = temp & 0xFFFFFFFF; + uint64x2_t temp_hi = vshrq_n_u64(temp, 32); // temp_hi = temp >> 32; + + temp_lo = vmlal_u32(temp_lo, x0, y1); // temp_lo += x0*y1 + temp_lo = vshrq_n_u64(temp_lo, 32); // temp_lo >>= 32 + temp_hi = vmlal_u32(temp_hi, x1, y1); // temp_hi += x1*y1 + uint64x2_t result = vaddq_u64(temp_hi, temp_lo); + return result; +} + +static inline int64x2_t libdivide_mullhi_s64_vec128(int64x2_t x, int64_t sy) { + int64x2_t p = vreinterpretq_s64_u64( + libdivide_mullhi_u64_vec128(vreinterpretq_u64_s64(x), static_cast<uint64_t>(sy))); + int64x2_t y = vdupq_n_s64(sy); + int64x2_t t1 = vandq_s64(libdivide_s64_signbits(x), y); + int64x2_t t2 = vandq_s64(libdivide_s64_signbits(y), x); + p = vsubq_s64(p, t1); + p = vsubq_s64(p, t2); + return p; +} + +////////// UINT32 + +uint32x4_t libdivide_u32_do_vec128(uint32x4_t numers, const struct libdivide_u32_t *denom) { + uint8_t more = denom->more; + if (!denom->magic) { + return libdivide_u32_neon_srl(numers, more); + } else { + uint32x4_t q = libdivide_mullhi_u32_vec128(numers, denom->magic); + if (more & LIBDIVIDE_ADD_MARKER) { + // uint32_t t = ((numer - q) >> 1) + q; + // return t >> denom->shift; + // Note we can use halving-subtract to avoid the shift.
+ uint8_t shift = more & LIBDIVIDE_32_SHIFT_MASK; + uint32x4_t t = vaddq_u32(vhsubq_u32(numers, q), q); + return libdivide_u32_neon_srl(t, shift); + } else { + return libdivide_u32_neon_srl(q, more); + } + } +} + +uint32x4_t libdivide_u32_branchfree_do_vec128( + uint32x4_t numers, const struct libdivide_u32_branchfree_t *denom) { + uint32x4_t q = libdivide_mullhi_u32_vec128(numers, denom->magic); + uint32x4_t t = vaddq_u32(vhsubq_u32(numers, q), q); + return libdivide_u32_neon_srl(t, denom->more); +} + +////////// UINT64 + +uint64x2_t libdivide_u64_do_vec128(uint64x2_t numers, const struct libdivide_u64_t *denom) { + uint8_t more = denom->more; + if (!denom->magic) { + return libdivide_u64_neon_srl(numers, more); + } else { + uint64x2_t q = libdivide_mullhi_u64_vec128(numers, denom->magic); + if (more & LIBDIVIDE_ADD_MARKER) { + // uint32_t t = ((numer - q) >> 1) + q; + // return t >> denom->shift; + // No 64-bit halving subtracts in NEON :( + uint8_t shift = more & LIBDIVIDE_64_SHIFT_MASK; + uint64x2_t t = vaddq_u64(vshrq_n_u64(vsubq_u64(numers, q), 1), q); + return libdivide_u64_neon_srl(t, shift); + } else { + return libdivide_u64_neon_srl(q, more); + } + } +} + +uint64x2_t libdivide_u64_branchfree_do_vec128( + uint64x2_t numers, const struct libdivide_u64_branchfree_t *denom) { + uint64x2_t q = libdivide_mullhi_u64_vec128(numers, denom->magic); + uint64x2_t t = vaddq_u64(vshrq_n_u64(vsubq_u64(numers, q), 1), q); + return libdivide_u64_neon_srl(t, denom->more); +} + +////////// SINT32 + +int32x4_t libdivide_s32_do_vec128(int32x4_t numers, const struct libdivide_s32_t *denom) { + uint8_t more = denom->more; + if (!denom->magic) { + uint8_t shift = more & LIBDIVIDE_32_SHIFT_MASK; + uint32_t mask = (1U << shift) - 1; + int32x4_t roundToZeroTweak = vdupq_n_s32((int)mask); + // q = numer + ((numer >> 31) & roundToZeroTweak); + int32x4_t q = vaddq_s32(numers, vandq_s32(vshrq_n_s32(numers, 31), roundToZeroTweak)); + q = libdivide_s32_neon_sra(q, shift); + int32x4_t sign = vdupq_n_s32((int8_t)more >> 7); + // q = (q ^ sign) - sign; + q = vsubq_s32(veorq_s32(q, sign), sign); + return q; + } else { + int32x4_t q = libdivide_mullhi_s32_vec128(numers, denom->magic); + if (more & LIBDIVIDE_ADD_MARKER) { + // must be arithmetic shift + int32x4_t sign = vdupq_n_s32((int8_t)more >> 7); + // q += ((numer ^ sign) - sign); + q = vaddq_s32(q, vsubq_s32(veorq_s32(numers, sign), sign)); + } + // q >>= shift + q = libdivide_s32_neon_sra(q, more & LIBDIVIDE_32_SHIFT_MASK); + q = vaddq_s32( + q, vreinterpretq_s32_u32(vshrq_n_u32(vreinterpretq_u32_s32(q), 31))); // q += (q < 0) + return q; + } +} + +int32x4_t libdivide_s32_branchfree_do_vec128( + int32x4_t numers, const struct libdivide_s32_branchfree_t *denom) { + int32_t magic = denom->magic; + uint8_t more = denom->more; + uint8_t shift = more & LIBDIVIDE_32_SHIFT_MASK; + // must be arithmetic shift + int32x4_t sign = vdupq_n_s32((int8_t)more >> 7); + int32x4_t q = libdivide_mullhi_s32_vec128(numers, magic); + q = vaddq_s32(q, numers); // q += numers + + // If q is non-negative, we have nothing to do + // If q is negative, we want to add either (2**shift)-1 if d is + // a power of 2, or (2**shift) if it is not a power of 2 + uint32_t is_power_of_2 = (magic == 0); + int32x4_t q_sign = vshrq_n_s32(q, 31); // q_sign = q >> 31 + int32x4_t mask = vdupq_n_s32((1U << shift) - is_power_of_2); + q = vaddq_s32(q, vandq_s32(q_sign, mask)); // q = q + (q_sign & mask) + q = libdivide_s32_neon_sra(q, shift); // q >>= shift + q = vsubq_s32(veorq_s32(q, sign), sign); // q = 
(q ^ sign) - sign + return q; +} + +////////// SINT64 + +int64x2_t libdivide_s64_do_vec128(int64x2_t numers, const struct libdivide_s64_t *denom) { + uint8_t more = denom->more; + int64_t magic = denom->magic; + if (magic == 0) { // shift path + uint8_t shift = more & LIBDIVIDE_64_SHIFT_MASK; + uint64_t mask = (1ULL << shift) - 1; + int64x2_t roundToZeroTweak = vdupq_n_s64(mask); // TODO: no need to sign extend + // q = numer + ((numer >> 63) & roundToZeroTweak); + int64x2_t q = + vaddq_s64(numers, vandq_s64(libdivide_s64_signbits(numers), roundToZeroTweak)); + q = libdivide_s64_neon_sra(q, shift); + // q = (q ^ sign) - sign; + int64x2_t sign = vreinterpretq_s64_s8(vdupq_n_s8((int8_t)more >> 7)); + q = vsubq_s64(veorq_s64(q, sign), sign); + return q; + } else { + int64x2_t q = libdivide_mullhi_s64_vec128(numers, magic); + if (more & LIBDIVIDE_ADD_MARKER) { + // must be arithmetic shift + int64x2_t sign = vdupq_n_s64((int8_t)more >> 7); // TODO: no need to widen + // q += ((numer ^ sign) - sign); + q = vaddq_s64(q, vsubq_s64(veorq_s64(numers, sign), sign)); + } + // q >>= denom->mult_path.shift + q = libdivide_s64_neon_sra(q, more & LIBDIVIDE_64_SHIFT_MASK); + q = vaddq_s64( + q, vreinterpretq_s64_u64(vshrq_n_u64(vreinterpretq_u64_s64(q), 63))); // q += (q < 0) + return q; + } +} + +int64x2_t libdivide_s64_branchfree_do_vec128( + int64x2_t numers, const struct libdivide_s64_branchfree_t *denom) { + int64_t magic = denom->magic; + uint8_t more = denom->more; + uint8_t shift = more & LIBDIVIDE_64_SHIFT_MASK; + // must be arithmetic shift + int64x2_t sign = vdupq_n_s64((int8_t)more >> 7); // TODO: avoid sign extend + + // libdivide_mullhi_s64(numers, magic); + int64x2_t q = libdivide_mullhi_s64_vec128(numers, magic); + q = vaddq_s64(q, numers); // q += numers + + // If q is non-negative, we have nothing to do. + // If q is negative, we want to add either (2**shift)-1 if d is + // a power of 2, or (2**shift) if it is not a power of 2. 
+ uint32_t is_power_of_2 = (magic == 0); + int64x2_t q_sign = libdivide_s64_signbits(q); // q_sign = q >> 63 + int64x2_t mask = vdupq_n_s64((1ULL << shift) - is_power_of_2); + q = vaddq_s64(q, vandq_s64(q_sign, mask)); // q = q + (q_sign & mask) + q = libdivide_s64_neon_sra(q, shift); // q >>= shift + q = vsubq_s64(veorq_s64(q, sign), sign); // q = (q ^ sign) - sign + return q; +} + +#endif + +#if defined(LIBDIVIDE_AVX512) + +static inline __m512i libdivide_u32_do_vec512(__m512i numers, const struct libdivide_u32_t *denom); +static inline __m512i libdivide_s32_do_vec512(__m512i numers, const struct libdivide_s32_t *denom); +static inline __m512i libdivide_u64_do_vec512(__m512i numers, const struct libdivide_u64_t *denom); +static inline __m512i libdivide_s64_do_vec512(__m512i numers, const struct libdivide_s64_t *denom); + +static inline __m512i libdivide_u32_branchfree_do_vec512( + __m512i numers, const struct libdivide_u32_branchfree_t *denom); +static inline __m512i libdivide_s32_branchfree_do_vec512( + __m512i numers, const struct libdivide_s32_branchfree_t *denom); +static inline __m512i libdivide_u64_branchfree_do_vec512( + __m512i numers, const struct libdivide_u64_branchfree_t *denom); +static inline __m512i libdivide_s64_branchfree_do_vec512( + __m512i numers, const struct libdivide_s64_branchfree_t *denom); + +//////// Internal Utility Functions + +static inline __m512i libdivide_s64_signbits(__m512i v) { + ; return _mm512_srai_epi64(v, 63); } -static inline __m512i libdivide_s64_shift_right_vector(__m512i v, int amt) { +static inline __m512i libdivide_s64_shift_right_vec512(__m512i v, int amt) { return _mm512_srai_epi64(v, amt); } // Here, b is assumed to contain one 32-bit value repeated. -static inline __m512i libdivide_mullhi_u32_vector(__m512i a, __m512i b) { +static inline __m512i libdivide_mullhi_u32_vec512(__m512i a, __m512i b) { __m512i hi_product_0Z2Z = _mm512_srli_epi64(_mm512_mul_epu32(a, b), 32); __m512i a1X3X = _mm512_srli_epi64(a, 32); __m512i mask = _mm512_set_epi32(-1, 0, -1, 0, -1, 0, -1, 0, -1, 0, -1, 0, -1, 0, -1, 0); @@ -1238,7 +1507,7 @@ static inline __m512i libdivide_mullhi_u32_vector(__m512i a, __m512i b) { } // b is one 32-bit value repeated. -static inline __m512i libdivide_mullhi_s32_vector(__m512i a, __m512i b) { +static inline __m512i libdivide_mullhi_s32_vec512(__m512i a, __m512i b) { __m512i hi_product_0Z2Z = _mm512_srli_epi64(_mm512_mul_epi32(a, b), 32); __m512i a1X3X = _mm512_srli_epi64(a, 32); __m512i mask = _mm512_set_epi32(-1, 0, -1, 0, -1, 0, -1, 0, -1, 0, -1, 0, -1, 0, -1, 0); @@ -1247,30 +1516,31 @@ static inline __m512i libdivide_mullhi_s32_vector(__m512i a, __m512i b) { } // Here, y is assumed to contain one 64-bit value repeated. 
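Every mullhi_u64 variant in this patch (the NEON one above, the AVX512/AVX2/SSE2 ones below) builds the high 64 bits of a 64x64-bit product out of four 32x32->64 partial products. A portable scalar reference of that decomposition, useful for sanity-checking the SIMD code; mullhi_u64_ref is a hypothetical name, not part of the patch:

#include <stdint.h>
#include <stdio.h>

// Hypothetical scalar reference: high 64 bits of x*y via the same
// partial-product scheme as the vectorised routines (x = x1:x0, y = y1:y0).
static uint64_t mullhi_u64_ref(uint64_t x, uint64_t y) {
    uint64_t x0 = (uint32_t)x, x1 = x >> 32;
    uint64_t y0 = (uint32_t)y, y1 = y >> 32;
    uint64_t temp = x1 * y0 + ((x0 * y0) >> 32); // cannot overflow 64 bits
    uint64_t temp_lo = (uint32_t)temp;           // temp & 0xFFFFFFFF
    uint64_t temp_hi = temp >> 32;
    temp_lo = (temp_lo + x0 * y1) >> 32;         // carry into the high half
    return x1 * y1 + temp_hi + temp_lo;
}

int main(void) {
    uint64_t x = 0xDEADBEEFCAFEBABEull, y = 0x0123456789ABCDEFull;
    printf("%016llx\n", (unsigned long long)mullhi_u64_ref(x, y));
#if defined(__SIZEOF_INT128__)
    printf("%016llx\n", (unsigned long long)(((__uint128_t)x * y) >> 64)); // same value
#endif
    return 0;
}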
-// https://stackoverflow.com/a/28827013 -static inline __m512i libdivide_mullhi_u64_vector(__m512i x, __m512i y) { - __m512i lomask = _mm512_set1_epi64(0xffffffff); - __m512i xh = _mm512_shuffle_epi32(x, (_MM_PERM_ENUM) 0xB1); - __m512i yh = _mm512_shuffle_epi32(y, (_MM_PERM_ENUM) 0xB1); - __m512i w0 = _mm512_mul_epu32(x, y); - __m512i w1 = _mm512_mul_epu32(x, yh); - __m512i w2 = _mm512_mul_epu32(xh, y); - __m512i w3 = _mm512_mul_epu32(xh, yh); - __m512i w0h = _mm512_srli_epi64(w0, 32); - __m512i s1 = _mm512_add_epi64(w1, w0h); - __m512i s1l = _mm512_and_si512(s1, lomask); - __m512i s1h = _mm512_srli_epi64(s1, 32); - __m512i s2 = _mm512_add_epi64(w2, s1l); - __m512i s2h = _mm512_srli_epi64(s2, 32); - __m512i hi = _mm512_add_epi64(w3, s1h); - hi = _mm512_add_epi64(hi, s2h); +static inline __m512i libdivide_mullhi_u64_vec512(__m512i x, __m512i y) { + // see m128i variant for comments. + __m512i x0y0 = _mm512_mul_epu32(x, y); + __m512i x0y0_hi = _mm512_srli_epi64(x0y0, 32); - return hi; + __m512i x1 = _mm512_shuffle_epi32(x, (_MM_PERM_ENUM)_MM_SHUFFLE(3, 3, 1, 1)); + __m512i y1 = _mm512_shuffle_epi32(y, (_MM_PERM_ENUM)_MM_SHUFFLE(3, 3, 1, 1)); + + __m512i x0y1 = _mm512_mul_epu32(x, y1); + __m512i x1y0 = _mm512_mul_epu32(x1, y); + __m512i x1y1 = _mm512_mul_epu32(x1, y1); + + __m512i mask = _mm512_set1_epi64(0xFFFFFFFF); + __m512i temp = _mm512_add_epi64(x1y0, x0y0_hi); + __m512i temp_lo = _mm512_and_si512(temp, mask); + __m512i temp_hi = _mm512_srli_epi64(temp, 32); + + temp_lo = _mm512_srli_epi64(_mm512_add_epi64(temp_lo, x0y1), 32); + temp_hi = _mm512_add_epi64(x1y1, temp_hi); + return _mm512_add_epi64(temp_lo, temp_hi); } // y is one 64-bit value repeated. -static inline __m512i libdivide_mullhi_s64_vector(__m512i x, __m512i y) { - __m512i p = libdivide_mullhi_u64_vector(x, y); +static inline __m512i libdivide_mullhi_s64_vec512(__m512i x, __m512i y) { + __m512i p = libdivide_mullhi_u64_vec512(x, y); __m512i t1 = _mm512_and_si512(libdivide_s64_signbits(x), y); __m512i t2 = _mm512_and_si512(libdivide_s64_signbits(y), x); p = _mm512_sub_epi64(p, t1); @@ -1280,131 +1550,130 @@ static inline __m512i libdivide_mullhi_s64_vector(__m512i x, __m512i y) { ////////// UINT32 -__m512i libdivide_u32_do_vector(__m512i numers, const struct libdivide_u32_t *denom) { +__m512i libdivide_u32_do_vec512(__m512i numers, const struct libdivide_u32_t *denom) { uint8_t more = denom->more; if (!denom->magic) { return _mm512_srli_epi32(numers, more); - } - else { - __m512i q = libdivide_mullhi_u32_vector(numers, _mm512_set1_epi32(denom->magic)); + } else { + __m512i q = libdivide_mullhi_u32_vec512(numers, _mm512_set1_epi32(denom->magic)); if (more & LIBDIVIDE_ADD_MARKER) { // uint32_t t = ((numer - q) >> 1) + q; // return t >> denom->shift; uint32_t shift = more & LIBDIVIDE_32_SHIFT_MASK; __m512i t = _mm512_add_epi32(_mm512_srli_epi32(_mm512_sub_epi32(numers, q), 1), q); return _mm512_srli_epi32(t, shift); - } - else { + } else { return _mm512_srli_epi32(q, more); } } } -__m512i libdivide_u32_branchfree_do_vector(__m512i numers, const struct libdivide_u32_branchfree_t *denom) { - __m512i q = libdivide_mullhi_u32_vector(numers, _mm512_set1_epi32(denom->magic)); +__m512i libdivide_u32_branchfree_do_vec512( + __m512i numers, const struct libdivide_u32_branchfree_t *denom) { + __m512i q = libdivide_mullhi_u32_vec512(numers, _mm512_set1_epi32(denom->magic)); __m512i t = _mm512_add_epi32(_mm512_srli_epi32(_mm512_sub_epi32(numers, q), 1), q); return _mm512_srli_epi32(t, denom->more); } ////////// UINT64 -__m512i 
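The signed variant diffed just below (like its NEON and AVX2 counterparts) derives the signed high half from the unsigned one: subtract y wherever x is negative and x wherever y is negative, which is exactly what the two signbits masks implement. A scalar model of that correction, assuming a compiler with __uint128_t (the *_model names are hypothetical, not from the patch):

#include <stdint.h>
#include <stdio.h>

// Stand-in for any of the patch's mullhi_u64 routines.
static uint64_t mullhi_u64_model(uint64_t x, uint64_t y) {
    return (uint64_t)(((__uint128_t)x * y) >> 64);
}

// signed_hi(x, y) = unsigned_hi(x, y) - (x < 0 ? y : 0) - (y < 0 ? x : 0)
static int64_t mullhi_s64_model(int64_t x, int64_t y) {
    uint64_t p = mullhi_u64_model((uint64_t)x, (uint64_t)y);
    if (x < 0) p -= (uint64_t)y; // t1 = signbits(x) & y
    if (y < 0) p -= (uint64_t)x; // t2 = signbits(y) & x
    return (int64_t)p;
}

int main(void) {
    printf("%lld\n", (long long)mullhi_s64_model(-3, 5));        // -1: high half of -15
    printf("%lld\n", (long long)mullhi_s64_model(INT64_MIN, 2)); // -1: high half of -2^64
    return 0;
}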
libdivide_u64_do_vector(__m512i numers, const struct libdivide_u64_t *denom) { +__m512i libdivide_u64_do_vec512(__m512i numers, const struct libdivide_u64_t *denom) { uint8_t more = denom->more; if (!denom->magic) { return _mm512_srli_epi64(numers, more); - } - else { - __m512i q = libdivide_mullhi_u64_vector(numers, _mm512_set1_epi64(denom->magic)); + } else { + __m512i q = libdivide_mullhi_u64_vec512(numers, _mm512_set1_epi64(denom->magic)); if (more & LIBDIVIDE_ADD_MARKER) { // uint32_t t = ((numer - q) >> 1) + q; // return t >> denom->shift; uint32_t shift = more & LIBDIVIDE_64_SHIFT_MASK; __m512i t = _mm512_add_epi64(_mm512_srli_epi64(_mm512_sub_epi64(numers, q), 1), q); return _mm512_srli_epi64(t, shift); - } - else { + } else { return _mm512_srli_epi64(q, more); } } } -__m512i libdivide_u64_branchfree_do_vector(__m512i numers, const struct libdivide_u64_branchfree_t *denom) { - __m512i q = libdivide_mullhi_u64_vector(numers, _mm512_set1_epi64(denom->magic)); +__m512i libdivide_u64_branchfree_do_vec512( + __m512i numers, const struct libdivide_u64_branchfree_t *denom) { + __m512i q = libdivide_mullhi_u64_vec512(numers, _mm512_set1_epi64(denom->magic)); __m512i t = _mm512_add_epi64(_mm512_srli_epi64(_mm512_sub_epi64(numers, q), 1), q); return _mm512_srli_epi64(t, denom->more); } ////////// SINT32 -__m512i libdivide_s32_do_vector(__m512i numers, const struct libdivide_s32_t *denom) { +__m512i libdivide_s32_do_vec512(__m512i numers, const struct libdivide_s32_t *denom) { uint8_t more = denom->more; if (!denom->magic) { uint32_t shift = more & LIBDIVIDE_32_SHIFT_MASK; uint32_t mask = (1U << shift) - 1; __m512i roundToZeroTweak = _mm512_set1_epi32(mask); // q = numer + ((numer >> 31) & roundToZeroTweak); - __m512i q = _mm512_add_epi32(numers, _mm512_and_si512(_mm512_srai_epi32(numers, 31), roundToZeroTweak)); + __m512i q = _mm512_add_epi32( + numers, _mm512_and_si512(_mm512_srai_epi32(numers, 31), roundToZeroTweak)); q = _mm512_srai_epi32(q, shift); __m512i sign = _mm512_set1_epi32((int8_t)more >> 7); // q = (q ^ sign) - sign; q = _mm512_sub_epi32(_mm512_xor_si512(q, sign), sign); return q; - } - else { - __m512i q = libdivide_mullhi_s32_vector(numers, _mm512_set1_epi32(denom->magic)); + } else { + __m512i q = libdivide_mullhi_s32_vec512(numers, _mm512_set1_epi32(denom->magic)); if (more & LIBDIVIDE_ADD_MARKER) { - // must be arithmetic shift + // must be arithmetic shift __m512i sign = _mm512_set1_epi32((int8_t)more >> 7); - // q += ((numer ^ sign) - sign); + // q += ((numer ^ sign) - sign); q = _mm512_add_epi32(q, _mm512_sub_epi32(_mm512_xor_si512(numers, sign), sign)); } // q >>= shift q = _mm512_srai_epi32(q, more & LIBDIVIDE_32_SHIFT_MASK); - q = _mm512_add_epi32(q, _mm512_srli_epi32(q, 31)); // q += (q < 0) + q = _mm512_add_epi32(q, _mm512_srli_epi32(q, 31)); // q += (q < 0) return q; } } -__m512i libdivide_s32_branchfree_do_vector(__m512i numers, const struct libdivide_s32_branchfree_t *denom) { +__m512i libdivide_s32_branchfree_do_vec512( + __m512i numers, const struct libdivide_s32_branchfree_t *denom) { int32_t magic = denom->magic; uint8_t more = denom->more; uint8_t shift = more & LIBDIVIDE_32_SHIFT_MASK; - // must be arithmetic shift + // must be arithmetic shift __m512i sign = _mm512_set1_epi32((int8_t)more >> 7); - __m512i q = libdivide_mullhi_s32_vector(numers, _mm512_set1_epi32(magic)); - q = _mm512_add_epi32(q, numers); // q += numers + __m512i q = libdivide_mullhi_s32_vec512(numers, _mm512_set1_epi32(magic)); + q = _mm512_add_epi32(q, numers); // q += numers // If q is 
non-negative, we have nothing to do // If q is negative, we want to add either (2**shift)-1 if d is // a power of 2, or (2**shift) if it is not a power of 2 uint32_t is_power_of_2 = (magic == 0); - __m512i q_sign = _mm512_srai_epi32(q, 31); // q_sign = q >> 31 + __m512i q_sign = _mm512_srai_epi32(q, 31); // q_sign = q >> 31 __m512i mask = _mm512_set1_epi32((1U << shift) - is_power_of_2); - q = _mm512_add_epi32(q, _mm512_and_si512(q_sign, mask)); // q = q + (q_sign & mask) - q = _mm512_srai_epi32(q, shift); // q >>= shift - q = _mm512_sub_epi32(_mm512_xor_si512(q, sign), sign); // q = (q ^ sign) - sign + q = _mm512_add_epi32(q, _mm512_and_si512(q_sign, mask)); // q = q + (q_sign & mask) + q = _mm512_srai_epi32(q, shift); // q >>= shift + q = _mm512_sub_epi32(_mm512_xor_si512(q, sign), sign); // q = (q ^ sign) - sign return q; } ////////// SINT64 -__m512i libdivide_s64_do_vector(__m512i numers, const struct libdivide_s64_t *denom) { +__m512i libdivide_s64_do_vec512(__m512i numers, const struct libdivide_s64_t *denom) { uint8_t more = denom->more; int64_t magic = denom->magic; - if (magic == 0) { // shift path + if (magic == 0) { // shift path uint32_t shift = more & LIBDIVIDE_64_SHIFT_MASK; uint64_t mask = (1ULL << shift) - 1; __m512i roundToZeroTweak = _mm512_set1_epi64(mask); // q = numer + ((numer >> 63) & roundToZeroTweak); - __m512i q = _mm512_add_epi64(numers, _mm512_and_si512(libdivide_s64_signbits(numers), roundToZeroTweak)); - q = libdivide_s64_shift_right_vector(q, shift); + __m512i q = _mm512_add_epi64( + numers, _mm512_and_si512(libdivide_s64_signbits(numers), roundToZeroTweak)); + q = libdivide_s64_shift_right_vec512(q, shift); __m512i sign = _mm512_set1_epi32((int8_t)more >> 7); - // q = (q ^ sign) - sign; + // q = (q ^ sign) - sign; q = _mm512_sub_epi64(_mm512_xor_si512(q, sign), sign); return q; - } - else { - __m512i q = libdivide_mullhi_s64_vector(numers, _mm512_set1_epi64(magic)); + } else { + __m512i q = libdivide_mullhi_s64_vec512(numers, _mm512_set1_epi64(magic)); if (more & LIBDIVIDE_ADD_MARKER) { // must be arithmetic shift __m512i sign = _mm512_set1_epi32((int8_t)more >> 7); @@ -1412,46 +1681,53 @@ __m512i libdivide_s64_do_vector(__m512i numers, const struct libdivide_s64_t *de q = _mm512_add_epi64(q, _mm512_sub_epi64(_mm512_xor_si512(numers, sign), sign)); } // q >>= denom->mult_path.shift - q = libdivide_s64_shift_right_vector(q, more & LIBDIVIDE_64_SHIFT_MASK); - q = _mm512_add_epi64(q, _mm512_srli_epi64(q, 63)); // q += (q < 0) + q = libdivide_s64_shift_right_vec512(q, more & LIBDIVIDE_64_SHIFT_MASK); + q = _mm512_add_epi64(q, _mm512_srli_epi64(q, 63)); // q += (q < 0) return q; } } -__m512i libdivide_s64_branchfree_do_vector(__m512i numers, const struct libdivide_s64_branchfree_t *denom) { +__m512i libdivide_s64_branchfree_do_vec512( + __m512i numers, const struct libdivide_s64_branchfree_t *denom) { int64_t magic = denom->magic; uint8_t more = denom->more; uint8_t shift = more & LIBDIVIDE_64_SHIFT_MASK; // must be arithmetic shift __m512i sign = _mm512_set1_epi32((int8_t)more >> 7); - // libdivide_mullhi_s64(numers, magic); - __m512i q = libdivide_mullhi_s64_vector(numers, _mm512_set1_epi64(magic)); - q = _mm512_add_epi64(q, numers); // q += numers + // libdivide_mullhi_s64(numers, magic); + __m512i q = libdivide_mullhi_s64_vec512(numers, _mm512_set1_epi64(magic)); + q = _mm512_add_epi64(q, numers); // q += numers // If q is non-negative, we have nothing to do. 
// If q is negative, we want to add either (2**shift)-1 if d is // a power of 2, or (2**shift) if it is not a power of 2. uint32_t is_power_of_2 = (magic == 0); - __m512i q_sign = libdivide_s64_signbits(q); // q_sign = q >> 63 + __m512i q_sign = libdivide_s64_signbits(q); // q_sign = q >> 63 __m512i mask = _mm512_set1_epi64((1ULL << shift) - is_power_of_2); - q = _mm512_add_epi64(q, _mm512_and_si512(q_sign, mask)); // q = q + (q_sign & mask) - q = libdivide_s64_shift_right_vector(q, shift); // q >>= shift - q = _mm512_sub_epi64(_mm512_xor_si512(q, sign), sign); // q = (q ^ sign) - sign + q = _mm512_add_epi64(q, _mm512_and_si512(q_sign, mask)); // q = q + (q_sign & mask) + q = libdivide_s64_shift_right_vec512(q, shift); // q >>= shift + q = _mm512_sub_epi64(_mm512_xor_si512(q, sign), sign); // q = (q ^ sign) - sign return q; } -#elif defined(LIBDIVIDE_AVX2) +#endif -static inline __m256i libdivide_u32_do_vector(__m256i numers, const struct libdivide_u32_t *denom); -static inline __m256i libdivide_s32_do_vector(__m256i numers, const struct libdivide_s32_t *denom); -static inline __m256i libdivide_u64_do_vector(__m256i numers, const struct libdivide_u64_t *denom); -static inline __m256i libdivide_s64_do_vector(__m256i numers, const struct libdivide_s64_t *denom); +#if defined(LIBDIVIDE_AVX2) -static inline __m256i libdivide_u32_branchfree_do_vector(__m256i numers, const struct libdivide_u32_branchfree_t *denom); -static inline __m256i libdivide_s32_branchfree_do_vector(__m256i numers, const struct libdivide_s32_branchfree_t *denom); -static inline __m256i libdivide_u64_branchfree_do_vector(__m256i numers, const struct libdivide_u64_branchfree_t *denom); -static inline __m256i libdivide_s64_branchfree_do_vector(__m256i numers, const struct libdivide_s64_branchfree_t *denom); +static inline __m256i libdivide_u32_do_vec256(__m256i numers, const struct libdivide_u32_t *denom); +static inline __m256i libdivide_s32_do_vec256(__m256i numers, const struct libdivide_s32_t *denom); +static inline __m256i libdivide_u64_do_vec256(__m256i numers, const struct libdivide_u64_t *denom); +static inline __m256i libdivide_s64_do_vec256(__m256i numers, const struct libdivide_s64_t *denom); + +static inline __m256i libdivide_u32_branchfree_do_vec256( + __m256i numers, const struct libdivide_u32_branchfree_t *denom); +static inline __m256i libdivide_s32_branchfree_do_vec256( + __m256i numers, const struct libdivide_s32_branchfree_t *denom); +static inline __m256i libdivide_u64_branchfree_do_vec256( + __m256i numers, const struct libdivide_u64_branchfree_t *denom); +static inline __m256i libdivide_s64_branchfree_do_vec256( + __m256i numers, const struct libdivide_s64_branchfree_t *denom); //////// Internal Utility Functions @@ -1463,7 +1739,7 @@ static inline __m256i libdivide_s64_signbits(__m256i v) { } // Implementation of _mm256_srai_epi64 (from AVX512). -static inline __m256i libdivide_s64_shift_right_vector(__m256i v, int amt) { +static inline __m256i libdivide_s64_shift_right_vec256(__m256i v, int amt) { const int b = 64 - amt; __m256i m = _mm256_set1_epi64x(1ULL << (b - 1)); __m256i x = _mm256_srli_epi64(v, amt); @@ -1472,7 +1748,7 @@ static inline __m256i libdivide_s64_shift_right_vector(__m256i v, int amt) { } // Here, b is assumed to contain one 32-bit value repeated. 
-static inline __m256i libdivide_mullhi_u32_vector(__m256i a, __m256i b) { +static inline __m256i libdivide_mullhi_u32_vec256(__m256i a, __m256i b) { __m256i hi_product_0Z2Z = _mm256_srli_epi64(_mm256_mul_epu32(a, b), 32); __m256i a1X3X = _mm256_srli_epi64(a, 32); __m256i mask = _mm256_set_epi32(-1, 0, -1, 0, -1, 0, -1, 0); @@ -1481,7 +1757,7 @@ static inline __m256i libdivide_mullhi_u32_vector(__m256i a, __m256i b) { } // b is one 32-bit value repeated. -static inline __m256i libdivide_mullhi_s32_vector(__m256i a, __m256i b) { +static inline __m256i libdivide_mullhi_s32_vec256(__m256i a, __m256i b) { __m256i hi_product_0Z2Z = _mm256_srli_epi64(_mm256_mul_epi32(a, b), 32); __m256i a1X3X = _mm256_srli_epi64(a, 32); __m256i mask = _mm256_set_epi32(-1, 0, -1, 0, -1, 0, -1, 0); @@ -1490,30 +1766,31 @@ static inline __m256i libdivide_mullhi_s32_vector(__m256i a, __m256i b) { } // Here, y is assumed to contain one 64-bit value repeated. -// https://stackoverflow.com/a/28827013 -static inline __m256i libdivide_mullhi_u64_vector(__m256i x, __m256i y) { - __m256i lomask = _mm256_set1_epi64x(0xffffffff); - __m256i xh = _mm256_shuffle_epi32(x, 0xB1); // x0l, x0h, x1l, x1h - __m256i yh = _mm256_shuffle_epi32(y, 0xB1); // y0l, y0h, y1l, y1h - __m256i w0 = _mm256_mul_epu32(x, y); // x0l*y0l, x1l*y1l - __m256i w1 = _mm256_mul_epu32(x, yh); // x0l*y0h, x1l*y1h - __m256i w2 = _mm256_mul_epu32(xh, y); // x0h*y0l, x1h*y0l - __m256i w3 = _mm256_mul_epu32(xh, yh); // x0h*y0h, x1h*y1h - __m256i w0h = _mm256_srli_epi64(w0, 32); - __m256i s1 = _mm256_add_epi64(w1, w0h); - __m256i s1l = _mm256_and_si256(s1, lomask); - __m256i s1h = _mm256_srli_epi64(s1, 32); - __m256i s2 = _mm256_add_epi64(w2, s1l); - __m256i s2h = _mm256_srli_epi64(s2, 32); - __m256i hi = _mm256_add_epi64(w3, s1h); - hi = _mm256_add_epi64(hi, s2h); +static inline __m256i libdivide_mullhi_u64_vec256(__m256i x, __m256i y) { + // see m128i variant for comments. + __m256i x0y0 = _mm256_mul_epu32(x, y); + __m256i x0y0_hi = _mm256_srli_epi64(x0y0, 32); - return hi; + __m256i x1 = _mm256_shuffle_epi32(x, _MM_SHUFFLE(3, 3, 1, 1)); + __m256i y1 = _mm256_shuffle_epi32(y, _MM_SHUFFLE(3, 3, 1, 1)); + + __m256i x0y1 = _mm256_mul_epu32(x, y1); + __m256i x1y0 = _mm256_mul_epu32(x1, y); + __m256i x1y1 = _mm256_mul_epu32(x1, y1); + + __m256i mask = _mm256_set1_epi64x(0xFFFFFFFF); + __m256i temp = _mm256_add_epi64(x1y0, x0y0_hi); + __m256i temp_lo = _mm256_and_si256(temp, mask); + __m256i temp_hi = _mm256_srli_epi64(temp, 32); + + temp_lo = _mm256_srli_epi64(_mm256_add_epi64(temp_lo, x0y1), 32); + temp_hi = _mm256_add_epi64(x1y1, temp_hi); + return _mm256_add_epi64(temp_lo, temp_hi); } // y is one 64-bit value repeated. 
-static inline __m256i libdivide_mullhi_s64_vector(__m256i x, __m256i y) { - __m256i p = libdivide_mullhi_u64_vector(x, y); +static inline __m256i libdivide_mullhi_s64_vec256(__m256i x, __m256i y) { + __m256i p = libdivide_mullhi_u64_vec256(x, y); __m256i t1 = _mm256_and_si256(libdivide_s64_signbits(x), y); __m256i t2 = _mm256_and_si256(libdivide_s64_signbits(y), x); p = _mm256_sub_epi64(p, t1); @@ -1523,131 +1800,130 @@ static inline __m256i libdivide_mullhi_s64_vector(__m256i x, __m256i y) { ////////// UINT32 -__m256i libdivide_u32_do_vector(__m256i numers, const struct libdivide_u32_t *denom) { +__m256i libdivide_u32_do_vec256(__m256i numers, const struct libdivide_u32_t *denom) { uint8_t more = denom->more; if (!denom->magic) { return _mm256_srli_epi32(numers, more); - } - else { - __m256i q = libdivide_mullhi_u32_vector(numers, _mm256_set1_epi32(denom->magic)); + } else { + __m256i q = libdivide_mullhi_u32_vec256(numers, _mm256_set1_epi32(denom->magic)); if (more & LIBDIVIDE_ADD_MARKER) { // uint32_t t = ((numer - q) >> 1) + q; // return t >> denom->shift; uint32_t shift = more & LIBDIVIDE_32_SHIFT_MASK; __m256i t = _mm256_add_epi32(_mm256_srli_epi32(_mm256_sub_epi32(numers, q), 1), q); return _mm256_srli_epi32(t, shift); - } - else { + } else { return _mm256_srli_epi32(q, more); } } } -__m256i libdivide_u32_branchfree_do_vector(__m256i numers, const struct libdivide_u32_branchfree_t *denom) { - __m256i q = libdivide_mullhi_u32_vector(numers, _mm256_set1_epi32(denom->magic)); +__m256i libdivide_u32_branchfree_do_vec256( + __m256i numers, const struct libdivide_u32_branchfree_t *denom) { + __m256i q = libdivide_mullhi_u32_vec256(numers, _mm256_set1_epi32(denom->magic)); __m256i t = _mm256_add_epi32(_mm256_srli_epi32(_mm256_sub_epi32(numers, q), 1), q); return _mm256_srli_epi32(t, denom->more); } ////////// UINT64 -__m256i libdivide_u64_do_vector(__m256i numers, const struct libdivide_u64_t *denom) { +__m256i libdivide_u64_do_vec256(__m256i numers, const struct libdivide_u64_t *denom) { uint8_t more = denom->more; if (!denom->magic) { return _mm256_srli_epi64(numers, more); - } - else { - __m256i q = libdivide_mullhi_u64_vector(numers, _mm256_set1_epi64x(denom->magic)); + } else { + __m256i q = libdivide_mullhi_u64_vec256(numers, _mm256_set1_epi64x(denom->magic)); if (more & LIBDIVIDE_ADD_MARKER) { // uint32_t t = ((numer - q) >> 1) + q; // return t >> denom->shift; uint32_t shift = more & LIBDIVIDE_64_SHIFT_MASK; __m256i t = _mm256_add_epi64(_mm256_srli_epi64(_mm256_sub_epi64(numers, q), 1), q); return _mm256_srli_epi64(t, shift); - } - else { + } else { return _mm256_srli_epi64(q, more); } } } -__m256i libdivide_u64_branchfree_do_vector(__m256i numers, const struct libdivide_u64_branchfree_t *denom) { - __m256i q = libdivide_mullhi_u64_vector(numers, _mm256_set1_epi64x(denom->magic)); +__m256i libdivide_u64_branchfree_do_vec256( + __m256i numers, const struct libdivide_u64_branchfree_t *denom) { + __m256i q = libdivide_mullhi_u64_vec256(numers, _mm256_set1_epi64x(denom->magic)); __m256i t = _mm256_add_epi64(_mm256_srli_epi64(_mm256_sub_epi64(numers, q), 1), q); return _mm256_srli_epi64(t, denom->more); } ////////// SINT32 -__m256i libdivide_s32_do_vector(__m256i numers, const struct libdivide_s32_t *denom) { +__m256i libdivide_s32_do_vec256(__m256i numers, const struct libdivide_s32_t *denom) { uint8_t more = denom->more; if (!denom->magic) { uint32_t shift = more & LIBDIVIDE_32_SHIFT_MASK; uint32_t mask = (1U << shift) - 1; __m256i roundToZeroTweak = _mm256_set1_epi32(mask); // q = 
numer + ((numer >> 31) & roundToZeroTweak); - __m256i q = _mm256_add_epi32(numers, _mm256_and_si256(_mm256_srai_epi32(numers, 31), roundToZeroTweak)); + __m256i q = _mm256_add_epi32( + numers, _mm256_and_si256(_mm256_srai_epi32(numers, 31), roundToZeroTweak)); q = _mm256_srai_epi32(q, shift); __m256i sign = _mm256_set1_epi32((int8_t)more >> 7); // q = (q ^ sign) - sign; q = _mm256_sub_epi32(_mm256_xor_si256(q, sign), sign); return q; - } - else { - __m256i q = libdivide_mullhi_s32_vector(numers, _mm256_set1_epi32(denom->magic)); + } else { + __m256i q = libdivide_mullhi_s32_vec256(numers, _mm256_set1_epi32(denom->magic)); if (more & LIBDIVIDE_ADD_MARKER) { - // must be arithmetic shift + // must be arithmetic shift __m256i sign = _mm256_set1_epi32((int8_t)more >> 7); - // q += ((numer ^ sign) - sign); + // q += ((numer ^ sign) - sign); q = _mm256_add_epi32(q, _mm256_sub_epi32(_mm256_xor_si256(numers, sign), sign)); } // q >>= shift q = _mm256_srai_epi32(q, more & LIBDIVIDE_32_SHIFT_MASK); - q = _mm256_add_epi32(q, _mm256_srli_epi32(q, 31)); // q += (q < 0) + q = _mm256_add_epi32(q, _mm256_srli_epi32(q, 31)); // q += (q < 0) return q; } } -__m256i libdivide_s32_branchfree_do_vector(__m256i numers, const struct libdivide_s32_branchfree_t *denom) { +__m256i libdivide_s32_branchfree_do_vec256( + __m256i numers, const struct libdivide_s32_branchfree_t *denom) { int32_t magic = denom->magic; uint8_t more = denom->more; uint8_t shift = more & LIBDIVIDE_32_SHIFT_MASK; - // must be arithmetic shift + // must be arithmetic shift __m256i sign = _mm256_set1_epi32((int8_t)more >> 7); - __m256i q = libdivide_mullhi_s32_vector(numers, _mm256_set1_epi32(magic)); - q = _mm256_add_epi32(q, numers); // q += numers + __m256i q = libdivide_mullhi_s32_vec256(numers, _mm256_set1_epi32(magic)); + q = _mm256_add_epi32(q, numers); // q += numers // If q is non-negative, we have nothing to do // If q is negative, we want to add either (2**shift)-1 if d is // a power of 2, or (2**shift) if it is not a power of 2 uint32_t is_power_of_2 = (magic == 0); - __m256i q_sign = _mm256_srai_epi32(q, 31); // q_sign = q >> 31 + __m256i q_sign = _mm256_srai_epi32(q, 31); // q_sign = q >> 31 __m256i mask = _mm256_set1_epi32((1U << shift) - is_power_of_2); - q = _mm256_add_epi32(q, _mm256_and_si256(q_sign, mask)); // q = q + (q_sign & mask) - q = _mm256_srai_epi32(q, shift); // q >>= shift - q = _mm256_sub_epi32(_mm256_xor_si256(q, sign), sign); // q = (q ^ sign) - sign + q = _mm256_add_epi32(q, _mm256_and_si256(q_sign, mask)); // q = q + (q_sign & mask) + q = _mm256_srai_epi32(q, shift); // q >>= shift + q = _mm256_sub_epi32(_mm256_xor_si256(q, sign), sign); // q = (q ^ sign) - sign return q; } ////////// SINT64 -__m256i libdivide_s64_do_vector(__m256i numers, const struct libdivide_s64_t *denom) { +__m256i libdivide_s64_do_vec256(__m256i numers, const struct libdivide_s64_t *denom) { uint8_t more = denom->more; int64_t magic = denom->magic; - if (magic == 0) { // shift path + if (magic == 0) { // shift path uint32_t shift = more & LIBDIVIDE_64_SHIFT_MASK; uint64_t mask = (1ULL << shift) - 1; __m256i roundToZeroTweak = _mm256_set1_epi64x(mask); // q = numer + ((numer >> 63) & roundToZeroTweak); - __m256i q = _mm256_add_epi64(numers, _mm256_and_si256(libdivide_s64_signbits(numers), roundToZeroTweak)); - q = libdivide_s64_shift_right_vector(q, shift); + __m256i q = _mm256_add_epi64( + numers, _mm256_and_si256(libdivide_s64_signbits(numers), roundToZeroTweak)); + q = libdivide_s64_shift_right_vec256(q, shift); __m256i sign = 
_mm256_set1_epi32((int8_t)more >> 7); - // q = (q ^ sign) - sign; + // q = (q ^ sign) - sign; q = _mm256_sub_epi64(_mm256_xor_si256(q, sign), sign); return q; - } - else { - __m256i q = libdivide_mullhi_s64_vector(numers, _mm256_set1_epi64x(magic)); + } else { + __m256i q = libdivide_mullhi_s64_vec256(numers, _mm256_set1_epi64x(magic)); if (more & LIBDIVIDE_ADD_MARKER) { // must be arithmetic shift __m256i sign = _mm256_set1_epi32((int8_t)more >> 7); @@ -1655,46 +1931,53 @@ __m256i libdivide_s64_do_vector(__m256i numers, const struct libdivide_s64_t *de q = _mm256_add_epi64(q, _mm256_sub_epi64(_mm256_xor_si256(numers, sign), sign)); } // q >>= denom->mult_path.shift - q = libdivide_s64_shift_right_vector(q, more & LIBDIVIDE_64_SHIFT_MASK); - q = _mm256_add_epi64(q, _mm256_srli_epi64(q, 63)); // q += (q < 0) + q = libdivide_s64_shift_right_vec256(q, more & LIBDIVIDE_64_SHIFT_MASK); + q = _mm256_add_epi64(q, _mm256_srli_epi64(q, 63)); // q += (q < 0) return q; } } -__m256i libdivide_s64_branchfree_do_vector(__m256i numers, const struct libdivide_s64_branchfree_t *denom) { +__m256i libdivide_s64_branchfree_do_vec256( + __m256i numers, const struct libdivide_s64_branchfree_t *denom) { int64_t magic = denom->magic; uint8_t more = denom->more; uint8_t shift = more & LIBDIVIDE_64_SHIFT_MASK; // must be arithmetic shift __m256i sign = _mm256_set1_epi32((int8_t)more >> 7); - // libdivide_mullhi_s64(numers, magic); - __m256i q = libdivide_mullhi_s64_vector(numers, _mm256_set1_epi64x(magic)); - q = _mm256_add_epi64(q, numers); // q += numers + // libdivide_mullhi_s64(numers, magic); + __m256i q = libdivide_mullhi_s64_vec256(numers, _mm256_set1_epi64x(magic)); + q = _mm256_add_epi64(q, numers); // q += numers // If q is non-negative, we have nothing to do. // If q is negative, we want to add either (2**shift)-1 if d is // a power of 2, or (2**shift) if it is not a power of 2. 
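        // (A scalar sketch of the same branchfree correction, for reference; the
        // helper names here are hypothetical, not part of libdivide's C API:
        //
        //     int64_t q = mullhi_s64(magic, numer) + numer;
        //     int64_t q_sign = q >> 63;                               // all-ones if q < 0
        //     q += q_sign & (((int64_t)1 << shift) - is_power_of_2);  // round toward zero
        //     return ((q >> shift) ^ sign) - sign;                    // apply the divisor's sign
        // )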
uint32_t is_power_of_2 = (magic == 0); - __m256i q_sign = libdivide_s64_signbits(q); // q_sign = q >> 63 + __m256i q_sign = libdivide_s64_signbits(q); // q_sign = q >> 63 __m256i mask = _mm256_set1_epi64x((1ULL << shift) - is_power_of_2); - q = _mm256_add_epi64(q, _mm256_and_si256(q_sign, mask)); // q = q + (q_sign & mask) - q = libdivide_s64_shift_right_vector(q, shift); // q >>= shift - q = _mm256_sub_epi64(_mm256_xor_si256(q, sign), sign); // q = (q ^ sign) - sign + q = _mm256_add_epi64(q, _mm256_and_si256(q_sign, mask)); // q = q + (q_sign & mask) + q = libdivide_s64_shift_right_vec256(q, shift); // q >>= shift + q = _mm256_sub_epi64(_mm256_xor_si256(q, sign), sign); // q = (q ^ sign) - sign return q; } -#elif defined(LIBDIVIDE_SSE2) +#endif -static inline __m128i libdivide_u32_do_vector(__m128i numers, const struct libdivide_u32_t *denom); -static inline __m128i libdivide_s32_do_vector(__m128i numers, const struct libdivide_s32_t *denom); -static inline __m128i libdivide_u64_do_vector(__m128i numers, const struct libdivide_u64_t *denom); -static inline __m128i libdivide_s64_do_vector(__m128i numers, const struct libdivide_s64_t *denom); +#if defined(LIBDIVIDE_SSE2) -static inline __m128i libdivide_u32_branchfree_do_vector(__m128i numers, const struct libdivide_u32_branchfree_t *denom); -static inline __m128i libdivide_s32_branchfree_do_vector(__m128i numers, const struct libdivide_s32_branchfree_t *denom); -static inline __m128i libdivide_u64_branchfree_do_vector(__m128i numers, const struct libdivide_u64_branchfree_t *denom); -static inline __m128i libdivide_s64_branchfree_do_vector(__m128i numers, const struct libdivide_s64_branchfree_t *denom); +static inline __m128i libdivide_u32_do_vec128(__m128i numers, const struct libdivide_u32_t *denom); +static inline __m128i libdivide_s32_do_vec128(__m128i numers, const struct libdivide_s32_t *denom); +static inline __m128i libdivide_u64_do_vec128(__m128i numers, const struct libdivide_u64_t *denom); +static inline __m128i libdivide_s64_do_vec128(__m128i numers, const struct libdivide_s64_t *denom); + +static inline __m128i libdivide_u32_branchfree_do_vec128( + __m128i numers, const struct libdivide_u32_branchfree_t *denom); +static inline __m128i libdivide_s32_branchfree_do_vec128( + __m128i numers, const struct libdivide_s32_branchfree_t *denom); +static inline __m128i libdivide_u64_branchfree_do_vec128( + __m128i numers, const struct libdivide_u64_branchfree_t *denom); +static inline __m128i libdivide_s64_branchfree_do_vec128( + __m128i numers, const struct libdivide_s64_branchfree_t *denom); //////// Internal Utility Functions @@ -1706,7 +1989,7 @@ static inline __m128i libdivide_s64_signbits(__m128i v) { } // Implementation of _mm_srai_epi64 (from AVX512). -static inline __m128i libdivide_s64_shift_right_vector(__m128i v, int amt) { +static inline __m128i libdivide_s64_shift_right_vec128(__m128i v, int amt) { const int b = 64 - amt; __m128i m = _mm_set1_epi64x(1ULL << (b - 1)); __m128i x = _mm_srli_epi64(v, amt); @@ -1715,7 +1998,7 @@ static inline __m128i libdivide_s64_shift_right_vector(__m128i v, int amt) { } // Here, b is assumed to contain one 32-bit value repeated. 
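// (Per 32-bit lane, the function below computes (uint32_t)(((uint64_t)a * b) >> 32).
// _mm_mul_epu32 multiplies only the even lanes, so the odd lanes are shifted down
// and multiplied separately, and the two sets of high halves are then recombined.)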
-static inline __m128i libdivide_mullhi_u32_vector(__m128i a, __m128i b) { +static inline __m128i libdivide_mullhi_u32_vec128(__m128i a, __m128i b) { __m128i hi_product_0Z2Z = _mm_srli_epi64(_mm_mul_epu32(a, b), 32); __m128i a1X3X = _mm_srli_epi64(a, 32); __m128i mask = _mm_set_epi32(-1, 0, -1, 0); @@ -1726,8 +2009,8 @@ static inline __m128i libdivide_mullhi_u32_vector(__m128i a, __m128i b) { // SSE2 does not have a signed multiplication instruction, but we can convert // unsigned to signed pretty efficiently. Again, b is just a 32 bit value // repeated four times. -static inline __m128i libdivide_mullhi_s32_vector(__m128i a, __m128i b) { - __m128i p = libdivide_mullhi_u32_vector(a, b); +static inline __m128i libdivide_mullhi_s32_vec128(__m128i a, __m128i b) { + __m128i p = libdivide_mullhi_u32_vec128(a, b); // t1 = (a >> 31) & y, arithmetic shift __m128i t1 = _mm_and_si128(_mm_srai_epi32(a, 31), b); __m128i t2 = _mm_and_si128(_mm_srai_epi32(b, 31), a); @@ -1737,30 +2020,41 @@ static inline __m128i libdivide_mullhi_s32_vector(__m128i a, __m128i b) { } // Here, y is assumed to contain one 64-bit value repeated. -// https://stackoverflow.com/a/28827013 -static inline __m128i libdivide_mullhi_u64_vector(__m128i x, __m128i y) { - __m128i lomask = _mm_set1_epi64x(0xffffffff); - __m128i xh = _mm_shuffle_epi32(x, 0xB1); // x0l, x0h, x1l, x1h - __m128i yh = _mm_shuffle_epi32(y, 0xB1); // y0l, y0h, y1l, y1h - __m128i w0 = _mm_mul_epu32(x, y); // x0l*y0l, x1l*y1l - __m128i w1 = _mm_mul_epu32(x, yh); // x0l*y0h, x1l*y1h - __m128i w2 = _mm_mul_epu32(xh, y); // x0h*y0l, x1h*y0l - __m128i w3 = _mm_mul_epu32(xh, yh); // x0h*y0h, x1h*y1h - __m128i w0h = _mm_srli_epi64(w0, 32); - __m128i s1 = _mm_add_epi64(w1, w0h); - __m128i s1l = _mm_and_si128(s1, lomask); - __m128i s1h = _mm_srli_epi64(s1, 32); - __m128i s2 = _mm_add_epi64(w2, s1l); - __m128i s2h = _mm_srli_epi64(s2, 32); - __m128i hi = _mm_add_epi64(w3, s1h); - hi = _mm_add_epi64(hi, s2h); +static inline __m128i libdivide_mullhi_u64_vec128(__m128i x, __m128i y) { + // full 128 bits product is: + // x0*y0 + (x0*y1 << 32) + (x1*y0 << 32) + (x1*y1 << 64) + // Note x0,y0,x1,y1 are all conceptually uint32, products are 32x32->64. - return hi; + // Compute x0*y0. + // Note x1, y1 are ignored by mul_epu32. + __m128i x0y0 = _mm_mul_epu32(x, y); + __m128i x0y0_hi = _mm_srli_epi64(x0y0, 32); + + // Get x1, y1 in the low bits. + // We could shuffle or right shift. Shuffles are preferred as they preserve + // the source register for the next computation. + __m128i x1 = _mm_shuffle_epi32(x, _MM_SHUFFLE(3, 3, 1, 1)); + __m128i y1 = _mm_shuffle_epi32(y, _MM_SHUFFLE(3, 3, 1, 1)); + + // No need to mask off top 32 bits for mul_epu32. + __m128i x0y1 = _mm_mul_epu32(x, y1); + __m128i x1y0 = _mm_mul_epu32(x1, y); + __m128i x1y1 = _mm_mul_epu32(x1, y1); + + // Mask here selects low bits only. + __m128i mask = _mm_set1_epi64x(0xFFFFFFFF); + __m128i temp = _mm_add_epi64(x1y0, x0y0_hi); + __m128i temp_lo = _mm_and_si128(temp, mask); + __m128i temp_hi = _mm_srli_epi64(temp, 32); + + temp_lo = _mm_srli_epi64(_mm_add_epi64(temp_lo, x0y1), 32); + temp_hi = _mm_add_epi64(x1y1, temp_hi); + return _mm_add_epi64(temp_lo, temp_hi); } // y is one 64-bit value repeated. 
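// (For reference, a scalar sketch of the 64x64->high-64 decomposition used by
// libdivide_mullhi_u64_vec128 above; illustrative only, the function name is
// hypothetical:
//
//     static inline uint64_t mullhi_u64_sketch(uint64_t x, uint64_t y) {
//         uint64_t x0 = (uint32_t)x, x1 = x >> 32;
//         uint64_t y0 = (uint32_t)y, y1 = y >> 32;
//         uint64_t x0y0_hi = (x0 * y0) >> 32;
//         uint64_t temp = x1 * y0 + x0y0_hi;    // cannot overflow 64 bits
//         uint64_t temp_lo = temp & 0xFFFFFFFF;
//         uint64_t temp_hi = temp >> 32;
//         return x1 * y1 + temp_hi + ((temp_lo + x0 * y1) >> 32);
//     }
// )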
-static inline __m128i libdivide_mullhi_s64_vector(__m128i x, __m128i y) { - __m128i p = libdivide_mullhi_u64_vector(x, y); +static inline __m128i libdivide_mullhi_s64_vec128(__m128i x, __m128i y) { + __m128i p = libdivide_mullhi_u64_vec128(x, y); __m128i t1 = _mm_and_si128(libdivide_s64_signbits(x), y); __m128i t2 = _mm_and_si128(libdivide_s64_signbits(y), x); p = _mm_sub_epi64(p, t1); @@ -1770,131 +2064,130 @@ static inline __m128i libdivide_mullhi_s64_vector(__m128i x, __m128i y) { ////////// UINT32 -__m128i libdivide_u32_do_vector(__m128i numers, const struct libdivide_u32_t *denom) { +__m128i libdivide_u32_do_vec128(__m128i numers, const struct libdivide_u32_t *denom) { uint8_t more = denom->more; if (!denom->magic) { return _mm_srli_epi32(numers, more); - } - else { - __m128i q = libdivide_mullhi_u32_vector(numers, _mm_set1_epi32(denom->magic)); + } else { + __m128i q = libdivide_mullhi_u32_vec128(numers, _mm_set1_epi32(denom->magic)); if (more & LIBDIVIDE_ADD_MARKER) { // uint32_t t = ((numer - q) >> 1) + q; // return t >> denom->shift; uint32_t shift = more & LIBDIVIDE_32_SHIFT_MASK; __m128i t = _mm_add_epi32(_mm_srli_epi32(_mm_sub_epi32(numers, q), 1), q); return _mm_srli_epi32(t, shift); - } - else { + } else { return _mm_srli_epi32(q, more); } } } -__m128i libdivide_u32_branchfree_do_vector(__m128i numers, const struct libdivide_u32_branchfree_t *denom) { - __m128i q = libdivide_mullhi_u32_vector(numers, _mm_set1_epi32(denom->magic)); +__m128i libdivide_u32_branchfree_do_vec128( + __m128i numers, const struct libdivide_u32_branchfree_t *denom) { + __m128i q = libdivide_mullhi_u32_vec128(numers, _mm_set1_epi32(denom->magic)); __m128i t = _mm_add_epi32(_mm_srli_epi32(_mm_sub_epi32(numers, q), 1), q); return _mm_srli_epi32(t, denom->more); } ////////// UINT64 -__m128i libdivide_u64_do_vector(__m128i numers, const struct libdivide_u64_t *denom) { +__m128i libdivide_u64_do_vec128(__m128i numers, const struct libdivide_u64_t *denom) { uint8_t more = denom->more; if (!denom->magic) { return _mm_srli_epi64(numers, more); - } - else { - __m128i q = libdivide_mullhi_u64_vector(numers, _mm_set1_epi64x(denom->magic)); + } else { + __m128i q = libdivide_mullhi_u64_vec128(numers, _mm_set1_epi64x(denom->magic)); if (more & LIBDIVIDE_ADD_MARKER) { // uint32_t t = ((numer - q) >> 1) + q; // return t >> denom->shift; uint32_t shift = more & LIBDIVIDE_64_SHIFT_MASK; __m128i t = _mm_add_epi64(_mm_srli_epi64(_mm_sub_epi64(numers, q), 1), q); return _mm_srli_epi64(t, shift); - } - else { + } else { return _mm_srli_epi64(q, more); } } } -__m128i libdivide_u64_branchfree_do_vector(__m128i numers, const struct libdivide_u64_branchfree_t *denom) { - __m128i q = libdivide_mullhi_u64_vector(numers, _mm_set1_epi64x(denom->magic)); +__m128i libdivide_u64_branchfree_do_vec128( + __m128i numers, const struct libdivide_u64_branchfree_t *denom) { + __m128i q = libdivide_mullhi_u64_vec128(numers, _mm_set1_epi64x(denom->magic)); __m128i t = _mm_add_epi64(_mm_srli_epi64(_mm_sub_epi64(numers, q), 1), q); return _mm_srli_epi64(t, denom->more); } ////////// SINT32 -__m128i libdivide_s32_do_vector(__m128i numers, const struct libdivide_s32_t *denom) { +__m128i libdivide_s32_do_vec128(__m128i numers, const struct libdivide_s32_t *denom) { uint8_t more = denom->more; if (!denom->magic) { uint32_t shift = more & LIBDIVIDE_32_SHIFT_MASK; uint32_t mask = (1U << shift) - 1; __m128i roundToZeroTweak = _mm_set1_epi32(mask); // q = numer + ((numer >> 31) & roundToZeroTweak); - __m128i q = _mm_add_epi32(numers, 
_mm_and_si128(_mm_srai_epi32(numers, 31), roundToZeroTweak)); + __m128i q = + _mm_add_epi32(numers, _mm_and_si128(_mm_srai_epi32(numers, 31), roundToZeroTweak)); q = _mm_srai_epi32(q, shift); __m128i sign = _mm_set1_epi32((int8_t)more >> 7); // q = (q ^ sign) - sign; q = _mm_sub_epi32(_mm_xor_si128(q, sign), sign); return q; - } - else { - __m128i q = libdivide_mullhi_s32_vector(numers, _mm_set1_epi32(denom->magic)); + } else { + __m128i q = libdivide_mullhi_s32_vec128(numers, _mm_set1_epi32(denom->magic)); if (more & LIBDIVIDE_ADD_MARKER) { - // must be arithmetic shift + // must be arithmetic shift __m128i sign = _mm_set1_epi32((int8_t)more >> 7); - // q += ((numer ^ sign) - sign); + // q += ((numer ^ sign) - sign); q = _mm_add_epi32(q, _mm_sub_epi32(_mm_xor_si128(numers, sign), sign)); } // q >>= shift q = _mm_srai_epi32(q, more & LIBDIVIDE_32_SHIFT_MASK); - q = _mm_add_epi32(q, _mm_srli_epi32(q, 31)); // q += (q < 0) + q = _mm_add_epi32(q, _mm_srli_epi32(q, 31)); // q += (q < 0) return q; } } -__m128i libdivide_s32_branchfree_do_vector(__m128i numers, const struct libdivide_s32_branchfree_t *denom) { +__m128i libdivide_s32_branchfree_do_vec128( + __m128i numers, const struct libdivide_s32_branchfree_t *denom) { int32_t magic = denom->magic; uint8_t more = denom->more; uint8_t shift = more & LIBDIVIDE_32_SHIFT_MASK; - // must be arithmetic shift + // must be arithmetic shift __m128i sign = _mm_set1_epi32((int8_t)more >> 7); - __m128i q = libdivide_mullhi_s32_vector(numers, _mm_set1_epi32(magic)); - q = _mm_add_epi32(q, numers); // q += numers + __m128i q = libdivide_mullhi_s32_vec128(numers, _mm_set1_epi32(magic)); + q = _mm_add_epi32(q, numers); // q += numers // If q is non-negative, we have nothing to do // If q is negative, we want to add either (2**shift)-1 if d is // a power of 2, or (2**shift) if it is not a power of 2 uint32_t is_power_of_2 = (magic == 0); - __m128i q_sign = _mm_srai_epi32(q, 31); // q_sign = q >> 31 + __m128i q_sign = _mm_srai_epi32(q, 31); // q_sign = q >> 31 __m128i mask = _mm_set1_epi32((1U << shift) - is_power_of_2); - q = _mm_add_epi32(q, _mm_and_si128(q_sign, mask)); // q = q + (q_sign & mask) - q = _mm_srai_epi32(q, shift); // q >>= shift - q = _mm_sub_epi32(_mm_xor_si128(q, sign), sign); // q = (q ^ sign) - sign + q = _mm_add_epi32(q, _mm_and_si128(q_sign, mask)); // q = q + (q_sign & mask) + q = _mm_srai_epi32(q, shift); // q >>= shift + q = _mm_sub_epi32(_mm_xor_si128(q, sign), sign); // q = (q ^ sign) - sign return q; } ////////// SINT64 -__m128i libdivide_s64_do_vector(__m128i numers, const struct libdivide_s64_t *denom) { +__m128i libdivide_s64_do_vec128(__m128i numers, const struct libdivide_s64_t *denom) { uint8_t more = denom->more; int64_t magic = denom->magic; - if (magic == 0) { // shift path + if (magic == 0) { // shift path uint32_t shift = more & LIBDIVIDE_64_SHIFT_MASK; uint64_t mask = (1ULL << shift) - 1; __m128i roundToZeroTweak = _mm_set1_epi64x(mask); // q = numer + ((numer >> 63) & roundToZeroTweak); - __m128i q = _mm_add_epi64(numers, _mm_and_si128(libdivide_s64_signbits(numers), roundToZeroTweak)); - q = libdivide_s64_shift_right_vector(q, shift); + __m128i q = + _mm_add_epi64(numers, _mm_and_si128(libdivide_s64_signbits(numers), roundToZeroTweak)); + q = libdivide_s64_shift_right_vec128(q, shift); __m128i sign = _mm_set1_epi32((int8_t)more >> 7); - // q = (q ^ sign) - sign; + // q = (q ^ sign) - sign; q = _mm_sub_epi64(_mm_xor_si128(q, sign), sign); return q; - } - else { - __m128i q = libdivide_mullhi_s64_vector(numers, 
_mm_set1_epi64x(magic));
+    } else {
+        __m128i q = libdivide_mullhi_s64_vec128(numers, _mm_set1_epi64x(magic));
         if (more & LIBDIVIDE_ADD_MARKER) {
             // must be arithmetic shift
             __m128i sign = _mm_set1_epi32((int8_t)more >> 7);
@@ -1902,32 +2195,33 @@ __m128i libdivide_s64_do_vector(__m128i numers, const struct libdivide_s64_t *de
             q = _mm_add_epi64(q, _mm_sub_epi64(_mm_xor_si128(numers, sign), sign));
         }
         // q >>= denom->mult_path.shift
-        q = libdivide_s64_shift_right_vector(q, more & LIBDIVIDE_64_SHIFT_MASK);
-        q = _mm_add_epi64(q, _mm_srli_epi64(q, 63)); // q += (q < 0)
+        q = libdivide_s64_shift_right_vec128(q, more & LIBDIVIDE_64_SHIFT_MASK);
+        q = _mm_add_epi64(q, _mm_srli_epi64(q, 63));  // q += (q < 0)
         return q;
     }
 }

-__m128i libdivide_s64_branchfree_do_vector(__m128i numers, const struct libdivide_s64_branchfree_t *denom) {
+__m128i libdivide_s64_branchfree_do_vec128(
+    __m128i numers, const struct libdivide_s64_branchfree_t *denom) {
     int64_t magic = denom->magic;
     uint8_t more = denom->more;
     uint8_t shift = more & LIBDIVIDE_64_SHIFT_MASK;
     // must be arithmetic shift
     __m128i sign = _mm_set1_epi32((int8_t)more >> 7);
-    // libdivide_mullhi_s64(numers, magic);
-    __m128i q = libdivide_mullhi_s64_vector(numers, _mm_set1_epi64x(magic));
-    q = _mm_add_epi64(q, numers); // q += numers
+    // libdivide_mullhi_s64(numers, magic);
+    __m128i q = libdivide_mullhi_s64_vec128(numers, _mm_set1_epi64x(magic));
+    q = _mm_add_epi64(q, numers);  // q += numers
     // If q is non-negative, we have nothing to do.
     // If q is negative, we want to add either (2**shift)-1 if d is
     // a power of 2, or (2**shift) if it is not a power of 2.
     uint32_t is_power_of_2 = (magic == 0);
-    __m128i q_sign = libdivide_s64_signbits(q); // q_sign = q >> 63
+    __m128i q_sign = libdivide_s64_signbits(q);  // q_sign = q >> 63
     __m128i mask = _mm_set1_epi64x((1ULL << shift) - is_power_of_2);
-    q = _mm_add_epi64(q, _mm_and_si128(q_sign, mask)); // q = q + (q_sign & mask)
-    q = libdivide_s64_shift_right_vector(q, shift); // q >>= shift
-    q = _mm_sub_epi64(_mm_xor_si128(q, sign), sign); // q = (q ^ sign) - sign
+    q = _mm_add_epi64(q, _mm_and_si128(q_sign, mask));  // q = q + (q_sign & mask)
+    q = libdivide_s64_shift_right_vec128(q, shift);     // q >>= shift
+    q = _mm_sub_epi64(_mm_xor_si128(q, sign), sign);    // q = (q ^ sign) - sign
     return q;
 }

@@ -1937,143 +2231,273 @@ __m128i libdivide_s64_branchfree_do_vector(__m128i numers, const struct libdivid

 #ifdef __cplusplus

-// The C++ divider class is templated on both an integer type
-// (like uint64_t) and an algorithm type.
-// * BRANCHFULL is the default algorithm type.
-// * BRANCHFREE is the branchfree algorithm type.
-enum {
-    BRANCHFULL,
-    BRANCHFREE
+enum Branching {
+    BRANCHFULL,  // use branching algorithms
+    BRANCHFREE   // use branchfree algorithms
 };

-#if defined(LIBDIVIDE_AVX512)
-    #define LIBDIVIDE_VECTOR_TYPE __m512i
-#elif defined(LIBDIVIDE_AVX2)
-    #define LIBDIVIDE_VECTOR_TYPE __m256i
-#elif defined(LIBDIVIDE_SSE2)
-    #define LIBDIVIDE_VECTOR_TYPE __m128i
+#if defined(LIBDIVIDE_NEON)
+// Helper to deduce NEON vector type for integral type.
+template <typename T>
+struct NeonVecFor {};
+
+template <>
+struct NeonVecFor<uint32_t> {
+    typedef uint32x4_t type;
+};
+
+template <>
+struct NeonVecFor<int32_t> {
+    typedef int32x4_t type;
+};
+
+template <>
+struct NeonVecFor<uint64_t> {
+    typedef uint64x2_t type;
+};
+
+template <>
+struct NeonVecFor<int64_t> {
+    typedef int64x2_t type;
+};
 #endif

-#if !defined(LIBDIVIDE_VECTOR_TYPE)
-    #define LIBDIVIDE_DIVIDE_VECTOR(ALGO)
+// Versions of our algorithms for SIMD.
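// (For instance, LIBDIVIDE_DIVIDE_SSE2(u32) expands to the member function
//
//     __m128i divide(__m128i n) const { return libdivide_u32_do_vec128(n, &denom); }
//
// when LIBDIVIDE_SSE2 is defined and to nothing otherwise, so the dispatcher and
// divider below only grow vector overloads for the instruction sets that are enabled.)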
+#if defined(LIBDIVIDE_NEON)
+#define LIBDIVIDE_DIVIDE_NEON(ALGO, INT_TYPE)                                                  \
+    typename NeonVecFor<INT_TYPE>::type divide(typename NeonVecFor<INT_TYPE>::type n) const { \
+        return libdivide_##ALGO##_do_vec128(n, &denom);                                       \
+    }
 #else
-    #define LIBDIVIDE_DIVIDE_VECTOR(ALGO) \
-        LIBDIVIDE_VECTOR_TYPE divide(LIBDIVIDE_VECTOR_TYPE n) const { \
-            return libdivide_##ALGO##_do_vector(n, &denom); \
-        }
+#define LIBDIVIDE_DIVIDE_NEON(ALGO, INT_TYPE)
+#endif
+#if defined(LIBDIVIDE_SSE2)
+#define LIBDIVIDE_DIVIDE_SSE2(ALGO) \
+    __m128i divide(__m128i n) const { return libdivide_##ALGO##_do_vec128(n, &denom); }
+#else
+#define LIBDIVIDE_DIVIDE_SSE2(ALGO)
+#endif
+
+#if defined(LIBDIVIDE_AVX2)
+#define LIBDIVIDE_DIVIDE_AVX2(ALGO) \
+    __m256i divide(__m256i n) const { return libdivide_##ALGO##_do_vec256(n, &denom); }
+#else
+#define LIBDIVIDE_DIVIDE_AVX2(ALGO)
+#endif
+
+#if defined(LIBDIVIDE_AVX512)
+#define LIBDIVIDE_DIVIDE_AVX512(ALGO) \
+    __m512i divide(__m512i n) const { return libdivide_##ALGO##_do_vec512(n, &denom); }
+#else
+#define LIBDIVIDE_DIVIDE_AVX512(ALGO)
 #endif

 // The DISPATCHER_GEN() macro generates C++ methods (for the given integer
 // and algorithm types) that redirect to libdivide's C API.
-#define DISPATCHER_GEN(T, ALGO) \
-    libdivide_##ALGO##_t denom; \
-    dispatcher() { } \
-    dispatcher(T d) \
-        : denom(libdivide_##ALGO##_gen(d)) \
-    { } \
-    T divide(T n) const { \
-        return libdivide_##ALGO##_do(n, &denom); \
-    } \
-    LIBDIVIDE_DIVIDE_VECTOR(ALGO) \
-    T recover() const { \
-        return libdivide_##ALGO##_recover(&denom); \
-    }
+#define DISPATCHER_GEN(T, ALGO)                                      \
+    libdivide_##ALGO##_t denom;                                      \
+    dispatcher() {}                                                  \
+    dispatcher(T d) : denom(libdivide_##ALGO##_gen(d)) {}            \
+    T divide(T n) const { return libdivide_##ALGO##_do(n, &denom); } \
+    T recover() const { return libdivide_##ALGO##_recover(&denom); } \
+    LIBDIVIDE_DIVIDE_NEON(ALGO, T)                                   \
+    LIBDIVIDE_DIVIDE_SSE2(ALGO)                                      \
+    LIBDIVIDE_DIVIDE_AVX2(ALGO)                                      \
+    LIBDIVIDE_DIVIDE_AVX512(ALGO)

 // The dispatcher selects a specific division algorithm for a given
 // type and ALGO using partial template specialization.
-template<bool IS_INTEGRAL, bool IS_SIGNED, int SIZEOF, int ALGO> struct dispatcher { };
+template <bool IS_INTEGRAL, bool IS_SIGNED, int SIZEOF, Branching ALGO>
+struct dispatcher {};

-template<> struct dispatcher<true, true, sizeof(int32_t), BRANCHFULL> { DISPATCHER_GEN(int32_t, s32) };
-template<> struct dispatcher<true, true, sizeof(int32_t), BRANCHFREE> { DISPATCHER_GEN(int32_t, s32_branchfree) };
-template<> struct dispatcher<true, false, sizeof(uint32_t), BRANCHFULL> { DISPATCHER_GEN(uint32_t, u32) };
-template<> struct dispatcher<true, false, sizeof(uint32_t), BRANCHFREE> { DISPATCHER_GEN(uint32_t, u32_branchfree) };
-template<> struct dispatcher<true, true, sizeof(int64_t), BRANCHFULL> { DISPATCHER_GEN(int64_t, s64) };
-template<> struct dispatcher<true, true, sizeof(int64_t), BRANCHFREE> { DISPATCHER_GEN(int64_t, s64_branchfree) };
-template<> struct dispatcher<true, false, sizeof(uint64_t), BRANCHFULL> { DISPATCHER_GEN(uint64_t, u64) };
-template<> struct dispatcher<true, false, sizeof(uint64_t), BRANCHFREE> { DISPATCHER_GEN(uint64_t, u64_branchfree) };
+template <>
+struct dispatcher<true, true, sizeof(int32_t), BRANCHFULL> {
+    DISPATCHER_GEN(int32_t, s32)
+};
+template <>
+struct dispatcher<true, true, sizeof(int32_t), BRANCHFREE> {
+    DISPATCHER_GEN(int32_t, s32_branchfree)
+};
+template <>
+struct dispatcher<true, false, sizeof(uint32_t), BRANCHFULL> {
+    DISPATCHER_GEN(uint32_t, u32)
+};
+template <>
+struct dispatcher<true, false, sizeof(uint32_t), BRANCHFREE> {
+    DISPATCHER_GEN(uint32_t, u32_branchfree)
+};
+template <>
+struct dispatcher<true, true, sizeof(int64_t), BRANCHFULL> {
+    DISPATCHER_GEN(int64_t, s64)
+};
+template <>
+struct dispatcher<true, true, sizeof(int64_t), BRANCHFREE> {
+    DISPATCHER_GEN(int64_t, s64_branchfree)
+};
+template <>
+struct dispatcher<true, false, sizeof(uint64_t), BRANCHFULL> {
+    DISPATCHER_GEN(uint64_t, u64)
+};
+template <>
+struct dispatcher<true, false, sizeof(uint64_t), BRANCHFREE> {
+    DISPATCHER_GEN(uint64_t, u64_branchfree)
+};

 // This is the main divider class for use by the user (C++ API).
 // The actual division algorithm is selected using the dispatcher struct
 // based on the integer and algorithm template parameters.
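// (Typical use of the C++ API, sketched; `d`, `data`, and `count` are hypothetical:
//
//     libdivide::divider<uint64_t> fast_d(d);  // computes the magic numbers once
//     for (size_t i = 0; i < count; ++i)
//         data[i] = data[i] / fast_d;          // uses the operator/ overload below
// )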
-template<typename T, int ALGO = BRANCHFULL>
+template <typename T, Branching ALGO = BRANCHFULL>
 class divider {
-public:
+   public:
     // We leave the default constructor empty so that creating
     // an array of dividers and then initializing them
     // later doesn't slow us down.
-    divider() { }
+    divider() {}

     // Constructor that takes the divisor as a parameter
-    divider(T d) : div(d) { }
+    divider(T d) : div(d) {}

     // Divides n by the divisor
-    T divide(T n) const {
-        return div.divide(n);
-    }
+    T divide(T n) const { return div.divide(n); }

     // Recovers the divisor, returns the value that was
     // used to initialize this divider object.
-    T recover() const {
-        return div.recover();
+    T recover() const { return div.recover(); }
+
+    bool operator==(const divider<T, ALGO> &other) const {
+        return div.denom.magic == other.denom.magic && div.denom.more == other.denom.more;
     }

-    bool operator==(const divider<T, ALGO>& other) const {
-        return div.denom.magic == other.denom.magic &&
-            div.denom.more == other.denom.more;
-    }
+    bool operator!=(const divider<T, ALGO> &other) const { return !(*this == other); }

-    bool operator!=(const divider<T, ALGO>& other) const {
-        return !(*this == other);
-    }
-
-#if defined(LIBDIVIDE_VECTOR_TYPE)
-    // Treats the vector as packed integer values with the same type as
-    // the divider (e.g. s32, u32, s64, u64) and divides each of
-    // them by the divider, returning the packed quotients.
-    LIBDIVIDE_VECTOR_TYPE divide(LIBDIVIDE_VECTOR_TYPE n) const {
+    // Vector variants treat the input as packed integer values with the same type as the divider
+    // (e.g. s32, u32, s64, u64) and divides each of them by the divider, returning the packed
+    // quotients.
+#if defined(LIBDIVIDE_SSE2)
+    __m128i divide(__m128i n) const { return div.divide(n); }
+#endif
+#if defined(LIBDIVIDE_AVX2)
+    __m256i divide(__m256i n) const { return div.divide(n); }
+#endif
+#if defined(LIBDIVIDE_AVX512)
+    __m512i divide(__m512i n) const { return div.divide(n); }
+#endif
+#if defined(LIBDIVIDE_NEON)
+    typename NeonVecFor<T>::type divide(typename NeonVecFor<T>::type n) const {
         return div.divide(n);
     }
 #endif

-private:
+   private:
     // Storage for the actual divisor
-    dispatcher<std::is_integral<T>::value,
-        std::is_signed<T>::value, sizeof(T), ALGO> div;
+    dispatcher<std::is_integral<T>::value, std::is_signed<T>::value, sizeof(T), ALGO> div;
 };

 // Overload of operator / for scalar division
-template<typename T, int ALGO>
-T operator/(T n, const divider<T, ALGO>& div) {
+template <typename T, Branching ALGO>
+T operator/(T n, const divider<T, ALGO> &div) {
     return div.divide(n);
 }

 // Overload of operator /= for scalar division
-template<typename T, int ALGO>
-T& operator/=(T& n, const divider<T, ALGO>& div) {
+template <typename T, Branching ALGO>
+T &operator/=(T &n, const divider<T, ALGO> &div) {
     n = div.divide(n);
     return n;
 }

-#if defined(LIBDIVIDE_VECTOR_TYPE)
-    // Overload of operator / for vector division
-    template<typename T, int ALGO>
-    LIBDIVIDE_VECTOR_TYPE operator/(LIBDIVIDE_VECTOR_TYPE n, const divider<T, ALGO>& div) {
-        return div.divide(n);
-    }
-    // Overload of operator /= for vector division
-    template<typename T, int ALGO>
-    LIBDIVIDE_VECTOR_TYPE& operator/=(LIBDIVIDE_VECTOR_TYPE& n, const divider<T, ALGO>& div) {
-        n = div.divide(n);
-        return n;
-    }
+// Overloads for vector types.
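// (Sketch of the vector path, assuming LIBDIVIDE_AVX2 and the hypothetical
// `fast_d` divider<uint64_t> from the previous sketch; each __m256i carries
// four uint64_t lanes, all divided by one call:
//
//     __m256i v = _mm256_loadu_si256((const __m256i *)&data[i]);
//     v = v / fast_d;  // dispatches to libdivide_u64_do_vec256
//     _mm256_storeu_si256((__m256i *)&data[i], v);
// )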
+#if defined(LIBDIVIDE_SSE2)
+template <typename T, Branching ALGO>
+__m128i operator/(__m128i n, const divider<T, ALGO> &div) {
+    return div.divide(n);
+}
+
+template <typename T, Branching ALGO>
+__m128i operator/=(__m128i &n, const divider<T, ALGO> &div) {
+    n = div.divide(n);
+    return n;
+}
+#endif
+#if defined(LIBDIVIDE_AVX2)
+template <typename T, Branching ALGO>
+__m256i operator/(__m256i n, const divider<T, ALGO> &div) {
+    return div.divide(n);
+}
+
+template <typename T, Branching ALGO>
+__m256i operator/=(__m256i &n, const divider<T, ALGO> &div) {
+    n = div.divide(n);
+    return n;
+}
+#endif
+#if defined(LIBDIVIDE_AVX512)
+template <typename T, Branching ALGO>
+__m512i operator/(__m512i n, const divider<T, ALGO> &div) {
+    return div.divide(n);
+}
+
+template <typename T, Branching ALGO>
+__m512i operator/=(__m512i &n, const divider<T, ALGO> &div) {
+    n = div.divide(n);
+    return n;
+}
 #endif

-// libdivdie::branchfree_divider
+#if defined(LIBDIVIDE_NEON)
+template <Branching ALGO>
+uint32x4_t operator/(uint32x4_t n, const divider<uint32_t, ALGO> &div) {
+    return div.divide(n);
+}
+
+template <Branching ALGO>
+int32x4_t operator/(int32x4_t n, const divider<int32_t, ALGO> &div) {
+    return div.divide(n);
+}
+
+template <Branching ALGO>
+uint64x2_t operator/(uint64x2_t n, const divider<uint64_t, ALGO> &div) {
+    return div.divide(n);
+}
+
+template <Branching ALGO>
+int64x2_t operator/(int64x2_t n, const divider<int64_t, ALGO> &div) {
+    return div.divide(n);
+}
+
+template <Branching ALGO>
+uint32x4_t operator/=(uint32x4_t &n, const divider<uint32_t, ALGO> &div) {
+    n = div.divide(n);
+    return n;
+}
+
+template <Branching ALGO>
+int32x4_t operator/=(int32x4_t &n, const divider<int32_t, ALGO> &div) {
+    n = div.divide(n);
+    return n;
+}
+
+template <Branching ALGO>
+uint64x2_t operator/=(uint64x2_t &n, const divider<uint64_t, ALGO> &div) {
+    n = div.divide(n);
+    return n;
+}
+
+template <Branching ALGO>
+int64x2_t operator/=(int64x2_t &n, const divider<int64_t, ALGO> &div) {
+    n = div.divide(n);
+    return n;
+}
+#endif
+
+#if __cplusplus >= 201103L || (defined(_MSC_VER) && _MSC_VER >= 1900)
+// libdivide::branchfree_divider
 template <typename T>
 using branchfree_divider = divider<T, BRANCHFREE>;
+#endif

-} // namespace libdivide
+}  // namespace libdivide

-#endif // __cplusplus
+#endif  // __cplusplus

-#endif // LIBDIVIDE_H
+#endif  // LIBDIVIDE_H
diff --git a/contrib/libhdfs3-cmake/CMake/Options.cmake b/contrib/libhdfs3-cmake/CMake/Options.cmake
index d7ccc8b6475..04ab823eedc 100644
--- a/contrib/libhdfs3-cmake/CMake/Options.cmake
+++ b/contrib/libhdfs3-cmake/CMake/Options.cmake
@@ -22,7 +22,7 @@ ADD_DEFINITIONS(-D_GLIBCXX_USE_NANOSLEEP)
 TRY_COMPILE(STRERROR_R_RETURN_INT
     ${CMAKE_CURRENT_BINARY_DIR}
-    ${HDFS3_ROOT_DIR}/CMake/CMakeTestCompileStrerror.cpp
+    "${HDFS3_ROOT_DIR}/CMake/CMakeTestCompileStrerror.cpp"
     CMAKE_FLAGS "-DCMAKE_CXX_LINK_EXECUTABLE='echo not linking now...'"
     OUTPUT_VARIABLE OUTPUT)
@@ -36,13 +36,13 @@ ENDIF(STRERROR_R_RETURN_INT)
 TRY_COMPILE(HAVE_STEADY_CLOCK
     ${CMAKE_CURRENT_BINARY_DIR}
-    ${HDFS3_ROOT_DIR}/CMake/CMakeTestCompileSteadyClock.cpp
+    "${HDFS3_ROOT_DIR}/CMake/CMakeTestCompileSteadyClock.cpp"
     CMAKE_FLAGS "-DCMAKE_CXX_LINK_EXECUTABLE='echo not linking now...'"
     OUTPUT_VARIABLE OUTPUT)

 TRY_COMPILE(HAVE_NESTED_EXCEPTION
     ${CMAKE_CURRENT_BINARY_DIR}
-    ${HDFS3_ROOT_DIR}/CMake/CMakeTestCompileNestedException.cpp
+    "${HDFS3_ROOT_DIR}/CMake/CMakeTestCompileNestedException.cpp"
     CMAKE_FLAGS "-DCMAKE_CXX_LINK_EXECUTABLE='echo not linking now...'"
     OUTPUT_VARIABLE OUTPUT)

diff --git a/contrib/libhdfs3-cmake/CMakeLists.txt b/contrib/libhdfs3-cmake/CMakeLists.txt
index 60f4376bdea..c9b9179d5e6 100644
--- a/contrib/libhdfs3-cmake/CMakeLists.txt
+++ b/contrib/libhdfs3-cmake/CMakeLists.txt
@@ -24,9 +24,9 @@ else()
 endif()

 # project and source dir
-set(HDFS3_ROOT_DIR ${ClickHouse_SOURCE_DIR}/contrib/libhdfs3)
-set(HDFS3_SOURCE_DIR ${HDFS3_ROOT_DIR}/src)
-set(HDFS3_COMMON_DIR ${HDFS3_SOURCE_DIR}/common)
+set(HDFS3_ROOT_DIR "${ClickHouse_SOURCE_DIR}/contrib/libhdfs3")
+set(HDFS3_SOURCE_DIR
"${HDFS3_ROOT_DIR}/src") +set(HDFS3_COMMON_DIR "${HDFS3_SOURCE_DIR}/common") # module set(CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/CMake" ${CMAKE_MODULE_PATH}) @@ -35,165 +35,165 @@ include(Options) # source set(PROTO_FILES - #${HDFS3_SOURCE_DIR}/proto/encryption.proto - ${HDFS3_SOURCE_DIR}/proto/ClientDatanodeProtocol.proto - ${HDFS3_SOURCE_DIR}/proto/hdfs.proto - ${HDFS3_SOURCE_DIR}/proto/Security.proto - ${HDFS3_SOURCE_DIR}/proto/ProtobufRpcEngine.proto - ${HDFS3_SOURCE_DIR}/proto/ClientNamenodeProtocol.proto - ${HDFS3_SOURCE_DIR}/proto/IpcConnectionContext.proto - ${HDFS3_SOURCE_DIR}/proto/RpcHeader.proto - ${HDFS3_SOURCE_DIR}/proto/datatransfer.proto + #"${HDFS3_SOURCE_DIR}/proto/encryption.proto" + "${HDFS3_SOURCE_DIR}/proto/ClientDatanodeProtocol.proto" + "${HDFS3_SOURCE_DIR}/proto/hdfs.proto" + "${HDFS3_SOURCE_DIR}/proto/Security.proto" + "${HDFS3_SOURCE_DIR}/proto/ProtobufRpcEngine.proto" + "${HDFS3_SOURCE_DIR}/proto/ClientNamenodeProtocol.proto" + "${HDFS3_SOURCE_DIR}/proto/IpcConnectionContext.proto" + "${HDFS3_SOURCE_DIR}/proto/RpcHeader.proto" + "${HDFS3_SOURCE_DIR}/proto/datatransfer.proto" ) if(USE_PROTOBUF) PROTOBUF_GENERATE_CPP(PROTO_SOURCES PROTO_HEADERS ${PROTO_FILES}) endif() -configure_file(${HDFS3_SOURCE_DIR}/platform.h.in ${CMAKE_CURRENT_BINARY_DIR}/platform.h) +configure_file("${HDFS3_SOURCE_DIR}/platform.h.in" "${CMAKE_CURRENT_BINARY_DIR}/platform.h") set(SRCS - ${HDFS3_SOURCE_DIR}/network/TcpSocket.cpp - ${HDFS3_SOURCE_DIR}/network/DomainSocket.cpp - ${HDFS3_SOURCE_DIR}/network/BufferedSocketReader.cpp - ${HDFS3_SOURCE_DIR}/client/ReadShortCircuitInfo.cpp - ${HDFS3_SOURCE_DIR}/client/Pipeline.cpp - ${HDFS3_SOURCE_DIR}/client/Hdfs.cpp - ${HDFS3_SOURCE_DIR}/client/Packet.cpp - ${HDFS3_SOURCE_DIR}/client/OutputStreamImpl.cpp - ${HDFS3_SOURCE_DIR}/client/KerberosName.cpp - ${HDFS3_SOURCE_DIR}/client/PacketHeader.cpp - ${HDFS3_SOURCE_DIR}/client/LocalBlockReader.cpp - ${HDFS3_SOURCE_DIR}/client/UserInfo.cpp - ${HDFS3_SOURCE_DIR}/client/RemoteBlockReader.cpp - ${HDFS3_SOURCE_DIR}/client/Permission.cpp - ${HDFS3_SOURCE_DIR}/client/FileSystemImpl.cpp - ${HDFS3_SOURCE_DIR}/client/DirectoryIterator.cpp - ${HDFS3_SOURCE_DIR}/client/FileSystemKey.cpp - ${HDFS3_SOURCE_DIR}/client/DataTransferProtocolSender.cpp - ${HDFS3_SOURCE_DIR}/client/LeaseRenewer.cpp - ${HDFS3_SOURCE_DIR}/client/PeerCache.cpp - ${HDFS3_SOURCE_DIR}/client/InputStream.cpp - ${HDFS3_SOURCE_DIR}/client/FileSystem.cpp - ${HDFS3_SOURCE_DIR}/client/InputStreamImpl.cpp - ${HDFS3_SOURCE_DIR}/client/Token.cpp - ${HDFS3_SOURCE_DIR}/client/PacketPool.cpp - ${HDFS3_SOURCE_DIR}/client/OutputStream.cpp - ${HDFS3_SOURCE_DIR}/rpc/RpcChannelKey.cpp - ${HDFS3_SOURCE_DIR}/rpc/RpcProtocolInfo.cpp - ${HDFS3_SOURCE_DIR}/rpc/RpcClient.cpp - ${HDFS3_SOURCE_DIR}/rpc/RpcRemoteCall.cpp - ${HDFS3_SOURCE_DIR}/rpc/RpcChannel.cpp - ${HDFS3_SOURCE_DIR}/rpc/RpcAuth.cpp - ${HDFS3_SOURCE_DIR}/rpc/RpcContentWrapper.cpp - ${HDFS3_SOURCE_DIR}/rpc/RpcConfig.cpp - ${HDFS3_SOURCE_DIR}/rpc/RpcServerInfo.cpp - ${HDFS3_SOURCE_DIR}/rpc/SaslClient.cpp - ${HDFS3_SOURCE_DIR}/server/Datanode.cpp - ${HDFS3_SOURCE_DIR}/server/LocatedBlocks.cpp - ${HDFS3_SOURCE_DIR}/server/NamenodeProxy.cpp - ${HDFS3_SOURCE_DIR}/server/NamenodeImpl.cpp - ${HDFS3_SOURCE_DIR}/server/NamenodeInfo.cpp - ${HDFS3_SOURCE_DIR}/common/WritableUtils.cpp - ${HDFS3_SOURCE_DIR}/common/ExceptionInternal.cpp - ${HDFS3_SOURCE_DIR}/common/SessionConfig.cpp - ${HDFS3_SOURCE_DIR}/common/StackPrinter.cpp - ${HDFS3_SOURCE_DIR}/common/Exception.cpp - ${HDFS3_SOURCE_DIR}/common/Logger.cpp - 
${HDFS3_SOURCE_DIR}/common/CFileWrapper.cpp - ${HDFS3_SOURCE_DIR}/common/XmlConfig.cpp - ${HDFS3_SOURCE_DIR}/common/WriteBuffer.cpp - ${HDFS3_SOURCE_DIR}/common/HWCrc32c.cpp - ${HDFS3_SOURCE_DIR}/common/MappedFileWrapper.cpp - ${HDFS3_SOURCE_DIR}/common/Hash.cpp - ${HDFS3_SOURCE_DIR}/common/SWCrc32c.cpp - ${HDFS3_SOURCE_DIR}/common/Thread.cpp + "${HDFS3_SOURCE_DIR}/network/TcpSocket.cpp" + "${HDFS3_SOURCE_DIR}/network/DomainSocket.cpp" + "${HDFS3_SOURCE_DIR}/network/BufferedSocketReader.cpp" + "${HDFS3_SOURCE_DIR}/client/ReadShortCircuitInfo.cpp" + "${HDFS3_SOURCE_DIR}/client/Pipeline.cpp" + "${HDFS3_SOURCE_DIR}/client/Hdfs.cpp" + "${HDFS3_SOURCE_DIR}/client/Packet.cpp" + "${HDFS3_SOURCE_DIR}/client/OutputStreamImpl.cpp" + "${HDFS3_SOURCE_DIR}/client/KerberosName.cpp" + "${HDFS3_SOURCE_DIR}/client/PacketHeader.cpp" + "${HDFS3_SOURCE_DIR}/client/LocalBlockReader.cpp" + "${HDFS3_SOURCE_DIR}/client/UserInfo.cpp" + "${HDFS3_SOURCE_DIR}/client/RemoteBlockReader.cpp" + "${HDFS3_SOURCE_DIR}/client/Permission.cpp" + "${HDFS3_SOURCE_DIR}/client/FileSystemImpl.cpp" + "${HDFS3_SOURCE_DIR}/client/DirectoryIterator.cpp" + "${HDFS3_SOURCE_DIR}/client/FileSystemKey.cpp" + "${HDFS3_SOURCE_DIR}/client/DataTransferProtocolSender.cpp" + "${HDFS3_SOURCE_DIR}/client/LeaseRenewer.cpp" + "${HDFS3_SOURCE_DIR}/client/PeerCache.cpp" + "${HDFS3_SOURCE_DIR}/client/InputStream.cpp" + "${HDFS3_SOURCE_DIR}/client/FileSystem.cpp" + "${HDFS3_SOURCE_DIR}/client/InputStreamImpl.cpp" + "${HDFS3_SOURCE_DIR}/client/Token.cpp" + "${HDFS3_SOURCE_DIR}/client/PacketPool.cpp" + "${HDFS3_SOURCE_DIR}/client/OutputStream.cpp" + "${HDFS3_SOURCE_DIR}/rpc/RpcChannelKey.cpp" + "${HDFS3_SOURCE_DIR}/rpc/RpcProtocolInfo.cpp" + "${HDFS3_SOURCE_DIR}/rpc/RpcClient.cpp" + "${HDFS3_SOURCE_DIR}/rpc/RpcRemoteCall.cpp" + "${HDFS3_SOURCE_DIR}/rpc/RpcChannel.cpp" + "${HDFS3_SOURCE_DIR}/rpc/RpcAuth.cpp" + "${HDFS3_SOURCE_DIR}/rpc/RpcContentWrapper.cpp" + "${HDFS3_SOURCE_DIR}/rpc/RpcConfig.cpp" + "${HDFS3_SOURCE_DIR}/rpc/RpcServerInfo.cpp" + "${HDFS3_SOURCE_DIR}/rpc/SaslClient.cpp" + "${HDFS3_SOURCE_DIR}/server/Datanode.cpp" + "${HDFS3_SOURCE_DIR}/server/LocatedBlocks.cpp" + "${HDFS3_SOURCE_DIR}/server/NamenodeProxy.cpp" + "${HDFS3_SOURCE_DIR}/server/NamenodeImpl.cpp" + "${HDFS3_SOURCE_DIR}/server/NamenodeInfo.cpp" + "${HDFS3_SOURCE_DIR}/common/WritableUtils.cpp" + "${HDFS3_SOURCE_DIR}/common/ExceptionInternal.cpp" + "${HDFS3_SOURCE_DIR}/common/SessionConfig.cpp" + "${HDFS3_SOURCE_DIR}/common/StackPrinter.cpp" + "${HDFS3_SOURCE_DIR}/common/Exception.cpp" + "${HDFS3_SOURCE_DIR}/common/Logger.cpp" + "${HDFS3_SOURCE_DIR}/common/CFileWrapper.cpp" + "${HDFS3_SOURCE_DIR}/common/XmlConfig.cpp" + "${HDFS3_SOURCE_DIR}/common/WriteBuffer.cpp" + "${HDFS3_SOURCE_DIR}/common/HWCrc32c.cpp" + "${HDFS3_SOURCE_DIR}/common/MappedFileWrapper.cpp" + "${HDFS3_SOURCE_DIR}/common/Hash.cpp" + "${HDFS3_SOURCE_DIR}/common/SWCrc32c.cpp" + "${HDFS3_SOURCE_DIR}/common/Thread.cpp" - ${HDFS3_SOURCE_DIR}/network/TcpSocket.h - ${HDFS3_SOURCE_DIR}/network/BufferedSocketReader.h - ${HDFS3_SOURCE_DIR}/network/Socket.h - ${HDFS3_SOURCE_DIR}/network/DomainSocket.h - ${HDFS3_SOURCE_DIR}/network/Syscall.h - ${HDFS3_SOURCE_DIR}/client/InputStreamImpl.h - ${HDFS3_SOURCE_DIR}/client/FileSystem.h - ${HDFS3_SOURCE_DIR}/client/ReadShortCircuitInfo.h - ${HDFS3_SOURCE_DIR}/client/InputStreamInter.h - ${HDFS3_SOURCE_DIR}/client/FileSystemImpl.h - ${HDFS3_SOURCE_DIR}/client/PacketPool.h - ${HDFS3_SOURCE_DIR}/client/Pipeline.h - ${HDFS3_SOURCE_DIR}/client/OutputStreamInter.h - 
${HDFS3_SOURCE_DIR}/client/RemoteBlockReader.h - ${HDFS3_SOURCE_DIR}/client/Token.h - ${HDFS3_SOURCE_DIR}/client/KerberosName.h - ${HDFS3_SOURCE_DIR}/client/DirectoryIterator.h - ${HDFS3_SOURCE_DIR}/client/hdfs.h - ${HDFS3_SOURCE_DIR}/client/FileSystemStats.h - ${HDFS3_SOURCE_DIR}/client/FileSystemKey.h - ${HDFS3_SOURCE_DIR}/client/DataTransferProtocolSender.h - ${HDFS3_SOURCE_DIR}/client/Packet.h - ${HDFS3_SOURCE_DIR}/client/PacketHeader.h - ${HDFS3_SOURCE_DIR}/client/FileSystemInter.h - ${HDFS3_SOURCE_DIR}/client/LocalBlockReader.h - ${HDFS3_SOURCE_DIR}/client/TokenInternal.h - ${HDFS3_SOURCE_DIR}/client/InputStream.h - ${HDFS3_SOURCE_DIR}/client/PipelineAck.h - ${HDFS3_SOURCE_DIR}/client/BlockReader.h - ${HDFS3_SOURCE_DIR}/client/Permission.h - ${HDFS3_SOURCE_DIR}/client/OutputStreamImpl.h - ${HDFS3_SOURCE_DIR}/client/LeaseRenewer.h - ${HDFS3_SOURCE_DIR}/client/UserInfo.h - ${HDFS3_SOURCE_DIR}/client/PeerCache.h - ${HDFS3_SOURCE_DIR}/client/OutputStream.h - ${HDFS3_SOURCE_DIR}/client/FileStatus.h - ${HDFS3_SOURCE_DIR}/client/DataTransferProtocol.h - ${HDFS3_SOURCE_DIR}/client/BlockLocation.h - ${HDFS3_SOURCE_DIR}/rpc/RpcConfig.h - ${HDFS3_SOURCE_DIR}/rpc/SaslClient.h - ${HDFS3_SOURCE_DIR}/rpc/RpcAuth.h - ${HDFS3_SOURCE_DIR}/rpc/RpcClient.h - ${HDFS3_SOURCE_DIR}/rpc/RpcCall.h - ${HDFS3_SOURCE_DIR}/rpc/RpcContentWrapper.h - ${HDFS3_SOURCE_DIR}/rpc/RpcProtocolInfo.h - ${HDFS3_SOURCE_DIR}/rpc/RpcRemoteCall.h - ${HDFS3_SOURCE_DIR}/rpc/RpcServerInfo.h - ${HDFS3_SOURCE_DIR}/rpc/RpcChannel.h - ${HDFS3_SOURCE_DIR}/rpc/RpcChannelKey.h - ${HDFS3_SOURCE_DIR}/server/BlockLocalPathInfo.h - ${HDFS3_SOURCE_DIR}/server/LocatedBlocks.h - ${HDFS3_SOURCE_DIR}/server/DatanodeInfo.h - ${HDFS3_SOURCE_DIR}/server/RpcHelper.h - ${HDFS3_SOURCE_DIR}/server/ExtendedBlock.h - ${HDFS3_SOURCE_DIR}/server/NamenodeInfo.h - ${HDFS3_SOURCE_DIR}/server/NamenodeImpl.h - ${HDFS3_SOURCE_DIR}/server/LocatedBlock.h - ${HDFS3_SOURCE_DIR}/server/NamenodeProxy.h - ${HDFS3_SOURCE_DIR}/server/Datanode.h - ${HDFS3_SOURCE_DIR}/server/Namenode.h - ${HDFS3_SOURCE_DIR}/common/XmlConfig.h - ${HDFS3_SOURCE_DIR}/common/Logger.h - ${HDFS3_SOURCE_DIR}/common/WriteBuffer.h - ${HDFS3_SOURCE_DIR}/common/HWCrc32c.h - ${HDFS3_SOURCE_DIR}/common/Checksum.h - ${HDFS3_SOURCE_DIR}/common/SessionConfig.h - ${HDFS3_SOURCE_DIR}/common/Unordered.h - ${HDFS3_SOURCE_DIR}/common/BigEndian.h - ${HDFS3_SOURCE_DIR}/common/Thread.h - ${HDFS3_SOURCE_DIR}/common/StackPrinter.h - ${HDFS3_SOURCE_DIR}/common/Exception.h - ${HDFS3_SOURCE_DIR}/common/WritableUtils.h - ${HDFS3_SOURCE_DIR}/common/StringUtil.h - ${HDFS3_SOURCE_DIR}/common/LruMap.h - ${HDFS3_SOURCE_DIR}/common/Function.h - ${HDFS3_SOURCE_DIR}/common/DateTime.h - ${HDFS3_SOURCE_DIR}/common/Hash.h - ${HDFS3_SOURCE_DIR}/common/SWCrc32c.h - ${HDFS3_SOURCE_DIR}/common/ExceptionInternal.h - ${HDFS3_SOURCE_DIR}/common/Memory.h - ${HDFS3_SOURCE_DIR}/common/FileWrapper.h + "${HDFS3_SOURCE_DIR}/network/TcpSocket.h" + "${HDFS3_SOURCE_DIR}/network/BufferedSocketReader.h" + "${HDFS3_SOURCE_DIR}/network/Socket.h" + "${HDFS3_SOURCE_DIR}/network/DomainSocket.h" + "${HDFS3_SOURCE_DIR}/network/Syscall.h" + "${HDFS3_SOURCE_DIR}/client/InputStreamImpl.h" + "${HDFS3_SOURCE_DIR}/client/FileSystem.h" + "${HDFS3_SOURCE_DIR}/client/ReadShortCircuitInfo.h" + "${HDFS3_SOURCE_DIR}/client/InputStreamInter.h" + "${HDFS3_SOURCE_DIR}/client/FileSystemImpl.h" + "${HDFS3_SOURCE_DIR}/client/PacketPool.h" + "${HDFS3_SOURCE_DIR}/client/Pipeline.h" + "${HDFS3_SOURCE_DIR}/client/OutputStreamInter.h" + 
"${HDFS3_SOURCE_DIR}/client/RemoteBlockReader.h" + "${HDFS3_SOURCE_DIR}/client/Token.h" + "${HDFS3_SOURCE_DIR}/client/KerberosName.h" + "${HDFS3_SOURCE_DIR}/client/DirectoryIterator.h" + "${HDFS3_SOURCE_DIR}/client/hdfs.h" + "${HDFS3_SOURCE_DIR}/client/FileSystemStats.h" + "${HDFS3_SOURCE_DIR}/client/FileSystemKey.h" + "${HDFS3_SOURCE_DIR}/client/DataTransferProtocolSender.h" + "${HDFS3_SOURCE_DIR}/client/Packet.h" + "${HDFS3_SOURCE_DIR}/client/PacketHeader.h" + "${HDFS3_SOURCE_DIR}/client/FileSystemInter.h" + "${HDFS3_SOURCE_DIR}/client/LocalBlockReader.h" + "${HDFS3_SOURCE_DIR}/client/TokenInternal.h" + "${HDFS3_SOURCE_DIR}/client/InputStream.h" + "${HDFS3_SOURCE_DIR}/client/PipelineAck.h" + "${HDFS3_SOURCE_DIR}/client/BlockReader.h" + "${HDFS3_SOURCE_DIR}/client/Permission.h" + "${HDFS3_SOURCE_DIR}/client/OutputStreamImpl.h" + "${HDFS3_SOURCE_DIR}/client/LeaseRenewer.h" + "${HDFS3_SOURCE_DIR}/client/UserInfo.h" + "${HDFS3_SOURCE_DIR}/client/PeerCache.h" + "${HDFS3_SOURCE_DIR}/client/OutputStream.h" + "${HDFS3_SOURCE_DIR}/client/FileStatus.h" + "${HDFS3_SOURCE_DIR}/client/DataTransferProtocol.h" + "${HDFS3_SOURCE_DIR}/client/BlockLocation.h" + "${HDFS3_SOURCE_DIR}/rpc/RpcConfig.h" + "${HDFS3_SOURCE_DIR}/rpc/SaslClient.h" + "${HDFS3_SOURCE_DIR}/rpc/RpcAuth.h" + "${HDFS3_SOURCE_DIR}/rpc/RpcClient.h" + "${HDFS3_SOURCE_DIR}/rpc/RpcCall.h" + "${HDFS3_SOURCE_DIR}/rpc/RpcContentWrapper.h" + "${HDFS3_SOURCE_DIR}/rpc/RpcProtocolInfo.h" + "${HDFS3_SOURCE_DIR}/rpc/RpcRemoteCall.h" + "${HDFS3_SOURCE_DIR}/rpc/RpcServerInfo.h" + "${HDFS3_SOURCE_DIR}/rpc/RpcChannel.h" + "${HDFS3_SOURCE_DIR}/rpc/RpcChannelKey.h" + "${HDFS3_SOURCE_DIR}/server/BlockLocalPathInfo.h" + "${HDFS3_SOURCE_DIR}/server/LocatedBlocks.h" + "${HDFS3_SOURCE_DIR}/server/DatanodeInfo.h" + "${HDFS3_SOURCE_DIR}/server/RpcHelper.h" + "${HDFS3_SOURCE_DIR}/server/ExtendedBlock.h" + "${HDFS3_SOURCE_DIR}/server/NamenodeInfo.h" + "${HDFS3_SOURCE_DIR}/server/NamenodeImpl.h" + "${HDFS3_SOURCE_DIR}/server/LocatedBlock.h" + "${HDFS3_SOURCE_DIR}/server/NamenodeProxy.h" + "${HDFS3_SOURCE_DIR}/server/Datanode.h" + "${HDFS3_SOURCE_DIR}/server/Namenode.h" + "${HDFS3_SOURCE_DIR}/common/XmlConfig.h" + "${HDFS3_SOURCE_DIR}/common/Logger.h" + "${HDFS3_SOURCE_DIR}/common/WriteBuffer.h" + "${HDFS3_SOURCE_DIR}/common/HWCrc32c.h" + "${HDFS3_SOURCE_DIR}/common/Checksum.h" + "${HDFS3_SOURCE_DIR}/common/SessionConfig.h" + "${HDFS3_SOURCE_DIR}/common/Unordered.h" + "${HDFS3_SOURCE_DIR}/common/BigEndian.h" + "${HDFS3_SOURCE_DIR}/common/Thread.h" + "${HDFS3_SOURCE_DIR}/common/StackPrinter.h" + "${HDFS3_SOURCE_DIR}/common/Exception.h" + "${HDFS3_SOURCE_DIR}/common/WritableUtils.h" + "${HDFS3_SOURCE_DIR}/common/StringUtil.h" + "${HDFS3_SOURCE_DIR}/common/LruMap.h" + "${HDFS3_SOURCE_DIR}/common/Function.h" + "${HDFS3_SOURCE_DIR}/common/DateTime.h" + "${HDFS3_SOURCE_DIR}/common/Hash.h" + "${HDFS3_SOURCE_DIR}/common/SWCrc32c.h" + "${HDFS3_SOURCE_DIR}/common/ExceptionInternal.h" + "${HDFS3_SOURCE_DIR}/common/Memory.h" + "${HDFS3_SOURCE_DIR}/common/FileWrapper.h" ) # old kernels (< 3.17) doesn't have SYS_getrandom. 
Always use POSIX implementation to have better compatibility -set_source_files_properties(${HDFS3_SOURCE_DIR}/rpc/RpcClient.cpp PROPERTIES COMPILE_FLAGS "-DBOOST_UUID_RANDOM_PROVIDER_FORCE_POSIX=1") +set_source_files_properties("${HDFS3_SOURCE_DIR}/rpc/RpcClient.cpp" PROPERTIES COMPILE_FLAGS "-DBOOST_UUID_RANDOM_PROVIDER_FORCE_POSIX=1") # target add_library(hdfs3 ${SRCS} ${PROTO_SOURCES} ${PROTO_HEADERS}) diff --git a/contrib/libpq b/contrib/libpq index 1f9c286dba6..c7624588ddd 160000 --- a/contrib/libpq +++ b/contrib/libpq @@ -1 +1 @@ -Subproject commit 1f9c286dba60809edb64e384d6727d80d269b6cf +Subproject commit c7624588ddd84f153dd5990e81b886e4568bddde diff --git a/contrib/libpq-cmake/CMakeLists.txt b/contrib/libpq-cmake/CMakeLists.txt index 34c57799a8a..028fabe52b8 100644 --- a/contrib/libpq-cmake/CMakeLists.txt +++ b/contrib/libpq-cmake/CMakeLists.txt @@ -1,58 +1,58 @@ -set(LIBPQ_SOURCE_DIR ${ClickHouse_SOURCE_DIR}/contrib/libpq) +set(LIBPQ_SOURCE_DIR "${ClickHouse_SOURCE_DIR}/contrib/libpq") set(SRCS - ${LIBPQ_SOURCE_DIR}/fe-auth.c - ${LIBPQ_SOURCE_DIR}/fe-auth-scram.c - ${LIBPQ_SOURCE_DIR}/fe-connect.c - ${LIBPQ_SOURCE_DIR}/fe-exec.c - ${LIBPQ_SOURCE_DIR}/fe-lobj.c - ${LIBPQ_SOURCE_DIR}/fe-misc.c - ${LIBPQ_SOURCE_DIR}/fe-print.c - ${LIBPQ_SOURCE_DIR}/fe-protocol2.c - ${LIBPQ_SOURCE_DIR}/fe-protocol3.c - ${LIBPQ_SOURCE_DIR}/fe-secure.c - ${LIBPQ_SOURCE_DIR}/fe-secure-common.c - ${LIBPQ_SOURCE_DIR}/fe-secure-openssl.c - ${LIBPQ_SOURCE_DIR}/legacy-pqsignal.c - ${LIBPQ_SOURCE_DIR}/libpq-events.c - ${LIBPQ_SOURCE_DIR}/pqexpbuffer.c + "${LIBPQ_SOURCE_DIR}/fe-auth.c" + "${LIBPQ_SOURCE_DIR}/fe-auth-scram.c" + "${LIBPQ_SOURCE_DIR}/fe-connect.c" + "${LIBPQ_SOURCE_DIR}/fe-exec.c" + "${LIBPQ_SOURCE_DIR}/fe-lobj.c" + "${LIBPQ_SOURCE_DIR}/fe-misc.c" + "${LIBPQ_SOURCE_DIR}/fe-print.c" + "${LIBPQ_SOURCE_DIR}/fe-protocol2.c" + "${LIBPQ_SOURCE_DIR}/fe-protocol3.c" + "${LIBPQ_SOURCE_DIR}/fe-secure.c" + "${LIBPQ_SOURCE_DIR}/fe-secure-common.c" + "${LIBPQ_SOURCE_DIR}/fe-secure-openssl.c" + "${LIBPQ_SOURCE_DIR}/legacy-pqsignal.c" + "${LIBPQ_SOURCE_DIR}/libpq-events.c" + "${LIBPQ_SOURCE_DIR}/pqexpbuffer.c" - ${LIBPQ_SOURCE_DIR}/common/scram-common.c - ${LIBPQ_SOURCE_DIR}/common/sha2_openssl.c - ${LIBPQ_SOURCE_DIR}/common/md5.c - ${LIBPQ_SOURCE_DIR}/common/saslprep.c - ${LIBPQ_SOURCE_DIR}/common/unicode_norm.c - ${LIBPQ_SOURCE_DIR}/common/ip.c - ${LIBPQ_SOURCE_DIR}/common/jsonapi.c - ${LIBPQ_SOURCE_DIR}/common/wchar.c - ${LIBPQ_SOURCE_DIR}/common/base64.c - ${LIBPQ_SOURCE_DIR}/common/link-canary.c - ${LIBPQ_SOURCE_DIR}/common/fe_memutils.c - ${LIBPQ_SOURCE_DIR}/common/string.c - ${LIBPQ_SOURCE_DIR}/common/pg_get_line.c - ${LIBPQ_SOURCE_DIR}/common/stringinfo.c - ${LIBPQ_SOURCE_DIR}/common/psprintf.c - ${LIBPQ_SOURCE_DIR}/common/encnames.c - ${LIBPQ_SOURCE_DIR}/common/logging.c + "${LIBPQ_SOURCE_DIR}/common/scram-common.c" + "${LIBPQ_SOURCE_DIR}/common/sha2_openssl.c" + "${LIBPQ_SOURCE_DIR}/common/md5.c" + "${LIBPQ_SOURCE_DIR}/common/saslprep.c" + "${LIBPQ_SOURCE_DIR}/common/unicode_norm.c" + "${LIBPQ_SOURCE_DIR}/common/ip.c" + "${LIBPQ_SOURCE_DIR}/common/jsonapi.c" + "${LIBPQ_SOURCE_DIR}/common/wchar.c" + "${LIBPQ_SOURCE_DIR}/common/base64.c" + "${LIBPQ_SOURCE_DIR}/common/link-canary.c" + "${LIBPQ_SOURCE_DIR}/common/fe_memutils.c" + "${LIBPQ_SOURCE_DIR}/common/string.c" + "${LIBPQ_SOURCE_DIR}/common/pg_get_line.c" + "${LIBPQ_SOURCE_DIR}/common/stringinfo.c" + "${LIBPQ_SOURCE_DIR}/common/psprintf.c" + "${LIBPQ_SOURCE_DIR}/common/encnames.c" + "${LIBPQ_SOURCE_DIR}/common/logging.c" - 
${LIBPQ_SOURCE_DIR}/port/snprintf.c - ${LIBPQ_SOURCE_DIR}/port/strlcpy.c - ${LIBPQ_SOURCE_DIR}/port/strerror.c - ${LIBPQ_SOURCE_DIR}/port/inet_net_ntop.c - ${LIBPQ_SOURCE_DIR}/port/getpeereid.c - ${LIBPQ_SOURCE_DIR}/port/chklocale.c - ${LIBPQ_SOURCE_DIR}/port/noblock.c - ${LIBPQ_SOURCE_DIR}/port/pg_strong_random.c - ${LIBPQ_SOURCE_DIR}/port/pgstrcasecmp.c - ${LIBPQ_SOURCE_DIR}/port/thread.c - ${LIBPQ_SOURCE_DIR}/port/path.c - ${LIBPQ_SOURCE_DIR}/port/explicit_bzero.c + "${LIBPQ_SOURCE_DIR}/port/snprintf.c" + "${LIBPQ_SOURCE_DIR}/port/strlcpy.c" + "${LIBPQ_SOURCE_DIR}/port/strerror.c" + "${LIBPQ_SOURCE_DIR}/port/inet_net_ntop.c" + "${LIBPQ_SOURCE_DIR}/port/getpeereid.c" + "${LIBPQ_SOURCE_DIR}/port/chklocale.c" + "${LIBPQ_SOURCE_DIR}/port/noblock.c" + "${LIBPQ_SOURCE_DIR}/port/pg_strong_random.c" + "${LIBPQ_SOURCE_DIR}/port/pgstrcasecmp.c" + "${LIBPQ_SOURCE_DIR}/port/thread.c" + "${LIBPQ_SOURCE_DIR}/port/path.c" + "${LIBPQ_SOURCE_DIR}/port/explicit_bzero.c" ) add_library(libpq ${SRCS}) target_include_directories (libpq PUBLIC ${LIBPQ_SOURCE_DIR}) -target_include_directories (libpq PUBLIC ${LIBPQ_SOURCE_DIR}/include) -target_include_directories (libpq PRIVATE ${LIBPQ_SOURCE_DIR}/configs) +target_include_directories (libpq PUBLIC "${LIBPQ_SOURCE_DIR}/include") +target_include_directories (libpq PRIVATE "${LIBPQ_SOURCE_DIR}/configs") target_link_libraries (libpq PRIVATE ssl) diff --git a/contrib/libpqxx-cmake/CMakeLists.txt b/contrib/libpqxx-cmake/CMakeLists.txt index ed372951f82..4edef7bdd82 100644 --- a/contrib/libpqxx-cmake/CMakeLists.txt +++ b/contrib/libpqxx-cmake/CMakeLists.txt @@ -1,70 +1,70 @@ -set (LIBRARY_DIR ${ClickHouse_SOURCE_DIR}/contrib/libpqxx) +set (LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/libpqxx") set (SRCS - ${LIBRARY_DIR}/src/strconv.cxx - ${LIBRARY_DIR}/src/array.cxx - ${LIBRARY_DIR}/src/binarystring.cxx - ${LIBRARY_DIR}/src/connection.cxx - ${LIBRARY_DIR}/src/cursor.cxx - ${LIBRARY_DIR}/src/encodings.cxx - ${LIBRARY_DIR}/src/errorhandler.cxx - ${LIBRARY_DIR}/src/except.cxx - ${LIBRARY_DIR}/src/field.cxx - ${LIBRARY_DIR}/src/largeobject.cxx - ${LIBRARY_DIR}/src/notification.cxx - ${LIBRARY_DIR}/src/pipeline.cxx - ${LIBRARY_DIR}/src/result.cxx - ${LIBRARY_DIR}/src/robusttransaction.cxx - ${LIBRARY_DIR}/src/sql_cursor.cxx - ${LIBRARY_DIR}/src/stream_from.cxx - ${LIBRARY_DIR}/src/stream_to.cxx - ${LIBRARY_DIR}/src/subtransaction.cxx - ${LIBRARY_DIR}/src/transaction.cxx - ${LIBRARY_DIR}/src/transaction_base.cxx - ${LIBRARY_DIR}/src/row.cxx - ${LIBRARY_DIR}/src/util.cxx - ${LIBRARY_DIR}/src/version.cxx + "${LIBRARY_DIR}/src/strconv.cxx" + "${LIBRARY_DIR}/src/array.cxx" + "${LIBRARY_DIR}/src/binarystring.cxx" + "${LIBRARY_DIR}/src/connection.cxx" + "${LIBRARY_DIR}/src/cursor.cxx" + "${LIBRARY_DIR}/src/encodings.cxx" + "${LIBRARY_DIR}/src/errorhandler.cxx" + "${LIBRARY_DIR}/src/except.cxx" + "${LIBRARY_DIR}/src/field.cxx" + "${LIBRARY_DIR}/src/largeobject.cxx" + "${LIBRARY_DIR}/src/notification.cxx" + "${LIBRARY_DIR}/src/pipeline.cxx" + "${LIBRARY_DIR}/src/result.cxx" + "${LIBRARY_DIR}/src/robusttransaction.cxx" + "${LIBRARY_DIR}/src/sql_cursor.cxx" + "${LIBRARY_DIR}/src/stream_from.cxx" + "${LIBRARY_DIR}/src/stream_to.cxx" + "${LIBRARY_DIR}/src/subtransaction.cxx" + "${LIBRARY_DIR}/src/transaction.cxx" + "${LIBRARY_DIR}/src/transaction_base.cxx" + "${LIBRARY_DIR}/src/row.cxx" + "${LIBRARY_DIR}/src/util.cxx" + "${LIBRARY_DIR}/src/version.cxx" ) # Need to explicitly include each header file, because in the directory include/pqxx there are also files # like just 'array'. 
So if including the whole directory with `target_include_directories`, it will make
# conflicts with all includes of <array>.
set (HDRS
-    ${LIBRARY_DIR}/include/pqxx/array.hxx
-    ${LIBRARY_DIR}/include/pqxx/binarystring.hxx
-    ${LIBRARY_DIR}/include/pqxx/composite.hxx
-    ${LIBRARY_DIR}/include/pqxx/connection.hxx
-    ${LIBRARY_DIR}/include/pqxx/cursor.hxx
-    ${LIBRARY_DIR}/include/pqxx/dbtransaction.hxx
-    ${LIBRARY_DIR}/include/pqxx/errorhandler.hxx
-    ${LIBRARY_DIR}/include/pqxx/except.hxx
-    ${LIBRARY_DIR}/include/pqxx/field.hxx
-    ${LIBRARY_DIR}/include/pqxx/isolation.hxx
-    ${LIBRARY_DIR}/include/pqxx/largeobject.hxx
-    ${LIBRARY_DIR}/include/pqxx/nontransaction.hxx
-    ${LIBRARY_DIR}/include/pqxx/notification.hxx
-    ${LIBRARY_DIR}/include/pqxx/pipeline.hxx
-    ${LIBRARY_DIR}/include/pqxx/prepared_statement.hxx
-    ${LIBRARY_DIR}/include/pqxx/result.hxx
-    ${LIBRARY_DIR}/include/pqxx/robusttransaction.hxx
-    ${LIBRARY_DIR}/include/pqxx/row.hxx
-    ${LIBRARY_DIR}/include/pqxx/separated_list.hxx
-    ${LIBRARY_DIR}/include/pqxx/strconv.hxx
-    ${LIBRARY_DIR}/include/pqxx/stream_from.hxx
-    ${LIBRARY_DIR}/include/pqxx/stream_to.hxx
-    ${LIBRARY_DIR}/include/pqxx/subtransaction.hxx
-    ${LIBRARY_DIR}/include/pqxx/transaction.hxx
-    ${LIBRARY_DIR}/include/pqxx/transaction_base.hxx
-    ${LIBRARY_DIR}/include/pqxx/types.hxx
-    ${LIBRARY_DIR}/include/pqxx/util.hxx
-    ${LIBRARY_DIR}/include/pqxx/version.hxx
-    ${LIBRARY_DIR}/include/pqxx/zview.hxx
+    "${LIBRARY_DIR}/include/pqxx/array.hxx"
+    "${LIBRARY_DIR}/include/pqxx/binarystring.hxx"
+    "${LIBRARY_DIR}/include/pqxx/composite.hxx"
+    "${LIBRARY_DIR}/include/pqxx/connection.hxx"
+    "${LIBRARY_DIR}/include/pqxx/cursor.hxx"
+    "${LIBRARY_DIR}/include/pqxx/dbtransaction.hxx"
+    "${LIBRARY_DIR}/include/pqxx/errorhandler.hxx"
+    "${LIBRARY_DIR}/include/pqxx/except.hxx"
+    "${LIBRARY_DIR}/include/pqxx/field.hxx"
+    "${LIBRARY_DIR}/include/pqxx/isolation.hxx"
+    "${LIBRARY_DIR}/include/pqxx/largeobject.hxx"
+    "${LIBRARY_DIR}/include/pqxx/nontransaction.hxx"
+    "${LIBRARY_DIR}/include/pqxx/notification.hxx"
+    "${LIBRARY_DIR}/include/pqxx/pipeline.hxx"
+    "${LIBRARY_DIR}/include/pqxx/prepared_statement.hxx"
+    "${LIBRARY_DIR}/include/pqxx/result.hxx"
+    "${LIBRARY_DIR}/include/pqxx/robusttransaction.hxx"
+    "${LIBRARY_DIR}/include/pqxx/row.hxx"
+    "${LIBRARY_DIR}/include/pqxx/separated_list.hxx"
+    "${LIBRARY_DIR}/include/pqxx/strconv.hxx"
+    "${LIBRARY_DIR}/include/pqxx/stream_from.hxx"
+    "${LIBRARY_DIR}/include/pqxx/stream_to.hxx"
+    "${LIBRARY_DIR}/include/pqxx/subtransaction.hxx"
+    "${LIBRARY_DIR}/include/pqxx/transaction.hxx"
+    "${LIBRARY_DIR}/include/pqxx/transaction_base.hxx"
+    "${LIBRARY_DIR}/include/pqxx/types.hxx"
+    "${LIBRARY_DIR}/include/pqxx/util.hxx"
+    "${LIBRARY_DIR}/include/pqxx/version.hxx"
+    "${LIBRARY_DIR}/include/pqxx/zview.hxx"
 )

 add_library(libpqxx ${SRCS} ${HDRS})

 target_link_libraries(libpqxx PUBLIC ${LIBPQ_LIBRARY})
-target_include_directories (libpqxx PRIVATE ${LIBRARY_DIR}/include)
+target_include_directories (libpqxx PRIVATE "${LIBRARY_DIR}/include")

 # crutch
 set(CM_CONFIG_H_IN "${LIBRARY_DIR}/include/pqxx/config.h.in")
diff --git a/contrib/librdkafka b/contrib/librdkafka
index cf11d0aa36d..43491d33ca2 160000
--- a/contrib/librdkafka
+++ b/contrib/librdkafka
@@ -1 +1 @@
-Subproject commit cf11d0aa36d4738f2c9bf4377807661660f1be76
+Subproject commit 43491d33ca2826531d1e3cae70d4bf1e5249e3c9
diff --git a/contrib/librdkafka-cmake/CMakeLists.txt b/contrib/librdkafka-cmake/CMakeLists.txt
index 2b55b22cd2b..97b6a7e1ec5 100644
--- a/contrib/librdkafka-cmake/CMakeLists.txt
+++ 
b/contrib/librdkafka-cmake/CMakeLists.txt @@ -1,83 +1,83 @@ -set(RDKAFKA_SOURCE_DIR ${ClickHouse_SOURCE_DIR}/contrib/librdkafka/src) +set(RDKAFKA_SOURCE_DIR "${ClickHouse_SOURCE_DIR}/contrib/librdkafka/src") set(SRCS - ${RDKAFKA_SOURCE_DIR}/crc32c.c -# ${RDKAFKA_SOURCE_DIR}/lz4.c -# ${RDKAFKA_SOURCE_DIR}/lz4frame.c -# ${RDKAFKA_SOURCE_DIR}/lz4hc.c - ${RDKAFKA_SOURCE_DIR}/rdaddr.c - ${RDKAFKA_SOURCE_DIR}/rdavl.c - ${RDKAFKA_SOURCE_DIR}/rdbuf.c - ${RDKAFKA_SOURCE_DIR}/rdcrc32.c - ${RDKAFKA_SOURCE_DIR}/rddl.c - ${RDKAFKA_SOURCE_DIR}/rdfnv1a.c - ${RDKAFKA_SOURCE_DIR}/rdgz.c - ${RDKAFKA_SOURCE_DIR}/rdhdrhistogram.c - ${RDKAFKA_SOURCE_DIR}/rdkafka_admin.c # looks optional - ${RDKAFKA_SOURCE_DIR}/rdkafka_assignment.c - ${RDKAFKA_SOURCE_DIR}/rdkafka_assignor.c - ${RDKAFKA_SOURCE_DIR}/rdkafka_aux.c # looks optional - ${RDKAFKA_SOURCE_DIR}/rdkafka_background.c - ${RDKAFKA_SOURCE_DIR}/rdkafka_broker.c - ${RDKAFKA_SOURCE_DIR}/rdkafka_buf.c - ${RDKAFKA_SOURCE_DIR}/rdkafka.c - ${RDKAFKA_SOURCE_DIR}/rdkafka_cert.c - ${RDKAFKA_SOURCE_DIR}/rdkafka_cgrp.c - ${RDKAFKA_SOURCE_DIR}/rdkafka_conf.c - ${RDKAFKA_SOURCE_DIR}/rdkafka_coord.c - ${RDKAFKA_SOURCE_DIR}/rdkafka_error.c - ${RDKAFKA_SOURCE_DIR}/rdkafka_event.c - ${RDKAFKA_SOURCE_DIR}/rdkafka_feature.c - ${RDKAFKA_SOURCE_DIR}/rdkafka_header.c - ${RDKAFKA_SOURCE_DIR}/rdkafka_idempotence.c - ${RDKAFKA_SOURCE_DIR}/rdkafka_interceptor.c - ${RDKAFKA_SOURCE_DIR}/rdkafka_lz4.c - ${RDKAFKA_SOURCE_DIR}/rdkafka_metadata.c - ${RDKAFKA_SOURCE_DIR}/rdkafka_metadata_cache.c - ${RDKAFKA_SOURCE_DIR}/rdkafka_mock.c - ${RDKAFKA_SOURCE_DIR}/rdkafka_mock_cgrp.c - ${RDKAFKA_SOURCE_DIR}/rdkafka_mock_handlers.c - ${RDKAFKA_SOURCE_DIR}/rdkafka_msg.c - ${RDKAFKA_SOURCE_DIR}/rdkafka_msgset_reader.c - ${RDKAFKA_SOURCE_DIR}/rdkafka_msgset_writer.c - ${RDKAFKA_SOURCE_DIR}/rdkafka_offset.c - ${RDKAFKA_SOURCE_DIR}/rdkafka_op.c - ${RDKAFKA_SOURCE_DIR}/rdkafka_partition.c - ${RDKAFKA_SOURCE_DIR}/rdkafka_pattern.c - ${RDKAFKA_SOURCE_DIR}/rdkafka_plugin.c - ${RDKAFKA_SOURCE_DIR}/rdkafka_queue.c - ${RDKAFKA_SOURCE_DIR}/rdkafka_range_assignor.c - ${RDKAFKA_SOURCE_DIR}/rdkafka_request.c - ${RDKAFKA_SOURCE_DIR}/rdkafka_roundrobin_assignor.c - ${RDKAFKA_SOURCE_DIR}/rdkafka_sasl.c -# ${RDKAFKA_SOURCE_DIR}/rdkafka_sasl_cyrus.c # optionally included below -# ${RDKAFKA_SOURCE_DIR}/rdkafka_sasl_oauthbearer.c # optionally included below - ${RDKAFKA_SOURCE_DIR}/rdkafka_sasl_plain.c -# ${RDKAFKA_SOURCE_DIR}/rdkafka_sasl_scram.c # optionally included below -# ${RDKAFKA_SOURCE_DIR}/rdkafka_sasl_win32.c -# ${RDKAFKA_SOURCE_DIR}/rdkafka_ssl.c # optionally included below - ${RDKAFKA_SOURCE_DIR}/rdkafka_sticky_assignor.c - ${RDKAFKA_SOURCE_DIR}/rdkafka_subscription.c - ${RDKAFKA_SOURCE_DIR}/rdkafka_timer.c - ${RDKAFKA_SOURCE_DIR}/rdkafka_topic.c - ${RDKAFKA_SOURCE_DIR}/rdkafka_transport.c - ${RDKAFKA_SOURCE_DIR}/rdkafka_txnmgr.c - ${RDKAFKA_SOURCE_DIR}/rdkafka_zstd.c - ${RDKAFKA_SOURCE_DIR}/rdlist.c - ${RDKAFKA_SOURCE_DIR}/rdlog.c - ${RDKAFKA_SOURCE_DIR}/rdmap.c - ${RDKAFKA_SOURCE_DIR}/rdmurmur2.c - ${RDKAFKA_SOURCE_DIR}/rdports.c - ${RDKAFKA_SOURCE_DIR}/rdrand.c - ${RDKAFKA_SOURCE_DIR}/rdregex.c - ${RDKAFKA_SOURCE_DIR}/rdstring.c - ${RDKAFKA_SOURCE_DIR}/rdunittest.c - ${RDKAFKA_SOURCE_DIR}/rdvarint.c - ${RDKAFKA_SOURCE_DIR}/rdxxhash.c - # ${RDKAFKA_SOURCE_DIR}/regexp.c - ${RDKAFKA_SOURCE_DIR}/snappy.c - ${RDKAFKA_SOURCE_DIR}/tinycthread.c - ${RDKAFKA_SOURCE_DIR}/tinycthread_extra.c + "${RDKAFKA_SOURCE_DIR}/crc32c.c" +# "${RDKAFKA_SOURCE_DIR}/lz4.c" +# "${RDKAFKA_SOURCE_DIR}/lz4frame.c" +# 
"${RDKAFKA_SOURCE_DIR}/lz4hc.c" + "${RDKAFKA_SOURCE_DIR}/rdaddr.c" + "${RDKAFKA_SOURCE_DIR}/rdavl.c" + "${RDKAFKA_SOURCE_DIR}/rdbuf.c" + "${RDKAFKA_SOURCE_DIR}/rdcrc32.c" + "${RDKAFKA_SOURCE_DIR}/rddl.c" + "${RDKAFKA_SOURCE_DIR}/rdfnv1a.c" + "${RDKAFKA_SOURCE_DIR}/rdgz.c" + "${RDKAFKA_SOURCE_DIR}/rdhdrhistogram.c" + "${RDKAFKA_SOURCE_DIR}/rdkafka_admin.c" # looks optional + "${RDKAFKA_SOURCE_DIR}/rdkafka_assignment.c" + "${RDKAFKA_SOURCE_DIR}/rdkafka_assignor.c" + "${RDKAFKA_SOURCE_DIR}/rdkafka_aux.c" # looks optional + "${RDKAFKA_SOURCE_DIR}/rdkafka_background.c" + "${RDKAFKA_SOURCE_DIR}/rdkafka_broker.c" + "${RDKAFKA_SOURCE_DIR}/rdkafka_buf.c" + "${RDKAFKA_SOURCE_DIR}/rdkafka.c" + "${RDKAFKA_SOURCE_DIR}/rdkafka_cert.c" + "${RDKAFKA_SOURCE_DIR}/rdkafka_cgrp.c" + "${RDKAFKA_SOURCE_DIR}/rdkafka_conf.c" + "${RDKAFKA_SOURCE_DIR}/rdkafka_coord.c" + "${RDKAFKA_SOURCE_DIR}/rdkafka_error.c" + "${RDKAFKA_SOURCE_DIR}/rdkafka_event.c" + "${RDKAFKA_SOURCE_DIR}/rdkafka_feature.c" + "${RDKAFKA_SOURCE_DIR}/rdkafka_header.c" + "${RDKAFKA_SOURCE_DIR}/rdkafka_idempotence.c" + "${RDKAFKA_SOURCE_DIR}/rdkafka_interceptor.c" + "${RDKAFKA_SOURCE_DIR}/rdkafka_lz4.c" + "${RDKAFKA_SOURCE_DIR}/rdkafka_metadata.c" + "${RDKAFKA_SOURCE_DIR}/rdkafka_metadata_cache.c" + "${RDKAFKA_SOURCE_DIR}/rdkafka_mock.c" + "${RDKAFKA_SOURCE_DIR}/rdkafka_mock_cgrp.c" + "${RDKAFKA_SOURCE_DIR}/rdkafka_mock_handlers.c" + "${RDKAFKA_SOURCE_DIR}/rdkafka_msg.c" + "${RDKAFKA_SOURCE_DIR}/rdkafka_msgset_reader.c" + "${RDKAFKA_SOURCE_DIR}/rdkafka_msgset_writer.c" + "${RDKAFKA_SOURCE_DIR}/rdkafka_offset.c" + "${RDKAFKA_SOURCE_DIR}/rdkafka_op.c" + "${RDKAFKA_SOURCE_DIR}/rdkafka_partition.c" + "${RDKAFKA_SOURCE_DIR}/rdkafka_pattern.c" + "${RDKAFKA_SOURCE_DIR}/rdkafka_plugin.c" + "${RDKAFKA_SOURCE_DIR}/rdkafka_queue.c" + "${RDKAFKA_SOURCE_DIR}/rdkafka_range_assignor.c" + "${RDKAFKA_SOURCE_DIR}/rdkafka_request.c" + "${RDKAFKA_SOURCE_DIR}/rdkafka_roundrobin_assignor.c" + "${RDKAFKA_SOURCE_DIR}/rdkafka_sasl.c" +# "${RDKAFKA_SOURCE_DIR}/rdkafka_sasl_cyrus.c" # optionally included below +# "${RDKAFKA_SOURCE_DIR}/rdkafka_sasl_oauthbearer.c" # optionally included below + "${RDKAFKA_SOURCE_DIR}/rdkafka_sasl_plain.c" +# "${RDKAFKA_SOURCE_DIR}/rdkafka_sasl_scram.c" # optionally included below +# "${RDKAFKA_SOURCE_DIR}/rdkafka_sasl_win32.c" +# "${RDKAFKA_SOURCE_DIR}/rdkafka_ssl.c" # optionally included below + "${RDKAFKA_SOURCE_DIR}/rdkafka_sticky_assignor.c" + "${RDKAFKA_SOURCE_DIR}/rdkafka_subscription.c" + "${RDKAFKA_SOURCE_DIR}/rdkafka_timer.c" + "${RDKAFKA_SOURCE_DIR}/rdkafka_topic.c" + "${RDKAFKA_SOURCE_DIR}/rdkafka_transport.c" + "${RDKAFKA_SOURCE_DIR}/rdkafka_txnmgr.c" + "${RDKAFKA_SOURCE_DIR}/rdkafka_zstd.c" + "${RDKAFKA_SOURCE_DIR}/rdlist.c" + "${RDKAFKA_SOURCE_DIR}/rdlog.c" + "${RDKAFKA_SOURCE_DIR}/rdmap.c" + "${RDKAFKA_SOURCE_DIR}/rdmurmur2.c" + "${RDKAFKA_SOURCE_DIR}/rdports.c" + "${RDKAFKA_SOURCE_DIR}/rdrand.c" + "${RDKAFKA_SOURCE_DIR}/rdregex.c" + "${RDKAFKA_SOURCE_DIR}/rdstring.c" + "${RDKAFKA_SOURCE_DIR}/rdunittest.c" + "${RDKAFKA_SOURCE_DIR}/rdvarint.c" + "${RDKAFKA_SOURCE_DIR}/rdxxhash.c" + # "${RDKAFKA_SOURCE_DIR}/regexp.c" + "${RDKAFKA_SOURCE_DIR}/snappy.c" + "${RDKAFKA_SOURCE_DIR}/tinycthread.c" + "${RDKAFKA_SOURCE_DIR}/tinycthread_extra.c" ) if(${ENABLE_CYRUS_SASL}) @@ -96,28 +96,28 @@ if(OPENSSL_FOUND) endif() if(WITH_SSL) - list(APPEND SRCS ${RDKAFKA_SOURCE_DIR}/rdkafka_ssl.c) + list(APPEND SRCS "${RDKAFKA_SOURCE_DIR}/rdkafka_ssl.c") endif() if(WITH_SASL_CYRUS) - list(APPEND SRCS ${RDKAFKA_SOURCE_DIR}/rdkafka_sasl_cyrus.c) # needed 
to support Kerberos, requires cyrus-sasl + list(APPEND SRCS "${RDKAFKA_SOURCE_DIR}/rdkafka_sasl_cyrus.c") # needed to support Kerberos, requires cyrus-sasl endif() if(WITH_SASL_SCRAM) - list(APPEND SRCS ${RDKAFKA_SOURCE_DIR}/rdkafka_sasl_scram.c) + list(APPEND SRCS "${RDKAFKA_SOURCE_DIR}/rdkafka_sasl_scram.c") endif() if(WITH_SASL_OAUTHBEARER) - list(APPEND SRCS ${RDKAFKA_SOURCE_DIR}/rdkafka_sasl_oauthbearer.c) + list(APPEND SRCS "${RDKAFKA_SOURCE_DIR}/rdkafka_sasl_oauthbearer.c") endif() add_library(rdkafka ${SRCS}) target_compile_options(rdkafka PRIVATE -fno-sanitize=undefined) # target_include_directories(rdkafka SYSTEM PUBLIC include) -target_include_directories(rdkafka SYSTEM PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include) # for "librdkafka/rdkafka.h" +target_include_directories(rdkafka SYSTEM PUBLIC "${CMAKE_CURRENT_SOURCE_DIR}/include") # for "librdkafka/rdkafka.h" target_include_directories(rdkafka SYSTEM PUBLIC ${RDKAFKA_SOURCE_DIR}) # Because weird logic with "include_next" is used. -target_include_directories(rdkafka SYSTEM PUBLIC ${CMAKE_CURRENT_BINARY_DIR}/auxdir) # for "../config.h" -target_include_directories(rdkafka SYSTEM PRIVATE ${ZSTD_INCLUDE_DIR}/common) # Because wrong path to "zstd_errors.h" is used. +target_include_directories(rdkafka SYSTEM PUBLIC "${CMAKE_CURRENT_BINARY_DIR}/auxdir") # for "../config.h" +target_include_directories(rdkafka SYSTEM PRIVATE "${ZSTD_INCLUDE_DIR}/common") # Because wrong path to "zstd_errors.h" is used. target_link_libraries(rdkafka PRIVATE lz4 ${ZLIB_LIBRARIES} ${ZSTD_LIBRARY}) if(OPENSSL_SSL_LIBRARY AND OPENSSL_CRYPTO_LIBRARY) target_link_libraries(rdkafka PRIVATE ${OPENSSL_SSL_LIBRARY} ${OPENSSL_CRYPTO_LIBRARY}) @@ -126,7 +126,7 @@ if(${ENABLE_CYRUS_SASL}) target_link_libraries(rdkafka PRIVATE ${CYRUS_SASL_LIBRARY}) endif() -file(MAKE_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}/auxdir) +file(MAKE_DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}/auxdir") configure_file( "${CMAKE_CURRENT_SOURCE_DIR}/config.h.in" diff --git a/contrib/librdkafka-cmake/config.h.in b/contrib/librdkafka-cmake/config.h.in index 80b6ea61b6e..9fecb45e42d 100644 --- a/contrib/librdkafka-cmake/config.h.in +++ b/contrib/librdkafka-cmake/config.h.in @@ -66,7 +66,7 @@ #cmakedefine WITH_SASL_OAUTHBEARER 1 #cmakedefine WITH_SASL_CYRUS 1 // crc32chw -#if !defined(__PPC__) && (!defined(__aarch64__) || defined(__ARM_FEATURE_CRC32)) +#if !defined(__PPC__) && (!defined(__aarch64__) || defined(__ARM_FEATURE_CRC32)) && !(defined(__aarch64__) && defined(__APPLE__)) #define WITH_CRC32C_HW 1 #endif // regex @@ -75,6 +75,8 @@ #define HAVE_STRNDUP 1 // strerror_r #define HAVE_STRERROR_R 1 +// rand_r +#define HAVE_RAND_R 1 #ifdef __APPLE__ // pthread_setname_np diff --git a/contrib/libunwind-cmake/CMakeLists.txt b/contrib/libunwind-cmake/CMakeLists.txt index 3afff30eee7..1a9f5e50abd 100644 --- a/contrib/libunwind-cmake/CMakeLists.txt +++ b/contrib/libunwind-cmake/CMakeLists.txt @@ -1,27 +1,27 @@ include(CheckCCompilerFlag) include(CheckCXXCompilerFlag) -set(LIBUNWIND_SOURCE_DIR ${ClickHouse_SOURCE_DIR}/contrib/libunwind) +set(LIBUNWIND_SOURCE_DIR "${ClickHouse_SOURCE_DIR}/contrib/libunwind") set(LIBUNWIND_CXX_SOURCES - ${LIBUNWIND_SOURCE_DIR}/src/libunwind.cpp - ${LIBUNWIND_SOURCE_DIR}/src/Unwind-EHABI.cpp - ${LIBUNWIND_SOURCE_DIR}/src/Unwind-seh.cpp) + "${LIBUNWIND_SOURCE_DIR}/src/libunwind.cpp" + "${LIBUNWIND_SOURCE_DIR}/src/Unwind-EHABI.cpp" + "${LIBUNWIND_SOURCE_DIR}/src/Unwind-seh.cpp") if (APPLE) - set(LIBUNWIND_CXX_SOURCES ${LIBUNWIND_CXX_SOURCES} 
${LIBUNWIND_SOURCE_DIR}/src/Unwind_AppleExtras.cpp) + set(LIBUNWIND_CXX_SOURCES ${LIBUNWIND_CXX_SOURCES} "${LIBUNWIND_SOURCE_DIR}/src/Unwind_AppleExtras.cpp") endif () set(LIBUNWIND_C_SOURCES - ${LIBUNWIND_SOURCE_DIR}/src/UnwindLevel1.c - ${LIBUNWIND_SOURCE_DIR}/src/UnwindLevel1-gcc-ext.c - ${LIBUNWIND_SOURCE_DIR}/src/Unwind-sjlj.c + "${LIBUNWIND_SOURCE_DIR}/src/UnwindLevel1.c" + "${LIBUNWIND_SOURCE_DIR}/src/UnwindLevel1-gcc-ext.c" + "${LIBUNWIND_SOURCE_DIR}/src/Unwind-sjlj.c" # Use unw_backtrace to override libgcc's backtrace symbol for better ABI compatibility unwind-override.c) set_source_files_properties(${LIBUNWIND_C_SOURCES} PROPERTIES COMPILE_FLAGS "-std=c99") set(LIBUNWIND_ASM_SOURCES - ${LIBUNWIND_SOURCE_DIR}/src/UnwindRegistersRestore.S - ${LIBUNWIND_SOURCE_DIR}/src/UnwindRegistersSave.S) + "${LIBUNWIND_SOURCE_DIR}/src/UnwindRegistersRestore.S" + "${LIBUNWIND_SOURCE_DIR}/src/UnwindRegistersSave.S") # CMake doesn't pass the correct architecture for Apple prior to CMake 3.19 [1] # Workaround these two issues by compiling as C. diff --git a/contrib/libxml2-cmake/CMakeLists.txt b/contrib/libxml2-cmake/CMakeLists.txt index 068662c7213..8fda0399ea3 100644 --- a/contrib/libxml2-cmake/CMakeLists.txt +++ b/contrib/libxml2-cmake/CMakeLists.txt @@ -1,54 +1,54 @@ -set(LIBXML2_SOURCE_DIR ${ClickHouse_SOURCE_DIR}/contrib/libxml2) -set(LIBXML2_BINARY_DIR ${ClickHouse_BINARY_DIR}/contrib/libxml2) +set(LIBXML2_SOURCE_DIR "${ClickHouse_SOURCE_DIR}/contrib/libxml2") +set(LIBXML2_BINARY_DIR "${ClickHouse_BINARY_DIR}/contrib/libxml2") set(SRCS - ${LIBXML2_SOURCE_DIR}/SAX.c - ${LIBXML2_SOURCE_DIR}/entities.c - ${LIBXML2_SOURCE_DIR}/encoding.c - ${LIBXML2_SOURCE_DIR}/error.c - ${LIBXML2_SOURCE_DIR}/parserInternals.c - ${LIBXML2_SOURCE_DIR}/parser.c - ${LIBXML2_SOURCE_DIR}/tree.c - ${LIBXML2_SOURCE_DIR}/hash.c - ${LIBXML2_SOURCE_DIR}/list.c - ${LIBXML2_SOURCE_DIR}/xmlIO.c - ${LIBXML2_SOURCE_DIR}/xmlmemory.c - ${LIBXML2_SOURCE_DIR}/uri.c - ${LIBXML2_SOURCE_DIR}/valid.c - ${LIBXML2_SOURCE_DIR}/xlink.c - ${LIBXML2_SOURCE_DIR}/HTMLparser.c - ${LIBXML2_SOURCE_DIR}/HTMLtree.c - ${LIBXML2_SOURCE_DIR}/debugXML.c - ${LIBXML2_SOURCE_DIR}/xpath.c - ${LIBXML2_SOURCE_DIR}/xpointer.c - ${LIBXML2_SOURCE_DIR}/xinclude.c - ${LIBXML2_SOURCE_DIR}/nanohttp.c - ${LIBXML2_SOURCE_DIR}/nanoftp.c - ${LIBXML2_SOURCE_DIR}/DOCBparser.c - ${LIBXML2_SOURCE_DIR}/catalog.c - ${LIBXML2_SOURCE_DIR}/globals.c - ${LIBXML2_SOURCE_DIR}/threads.c - ${LIBXML2_SOURCE_DIR}/c14n.c - ${LIBXML2_SOURCE_DIR}/xmlstring.c - ${LIBXML2_SOURCE_DIR}/buf.c - ${LIBXML2_SOURCE_DIR}/xmlregexp.c - ${LIBXML2_SOURCE_DIR}/xmlschemas.c - ${LIBXML2_SOURCE_DIR}/xmlschemastypes.c - ${LIBXML2_SOURCE_DIR}/xmlunicode.c - ${LIBXML2_SOURCE_DIR}/triostr.c - #${LIBXML2_SOURCE_DIR}/trio.c - ${LIBXML2_SOURCE_DIR}/xmlreader.c - ${LIBXML2_SOURCE_DIR}/relaxng.c - ${LIBXML2_SOURCE_DIR}/dict.c - ${LIBXML2_SOURCE_DIR}/SAX2.c - ${LIBXML2_SOURCE_DIR}/xmlwriter.c - ${LIBXML2_SOURCE_DIR}/legacy.c - ${LIBXML2_SOURCE_DIR}/chvalid.c - ${LIBXML2_SOURCE_DIR}/pattern.c - ${LIBXML2_SOURCE_DIR}/xmlsave.c - ${LIBXML2_SOURCE_DIR}/xmlmodule.c - ${LIBXML2_SOURCE_DIR}/schematron.c - ${LIBXML2_SOURCE_DIR}/xzlib.c + "${LIBXML2_SOURCE_DIR}/SAX.c" + "${LIBXML2_SOURCE_DIR}/entities.c" + "${LIBXML2_SOURCE_DIR}/encoding.c" + "${LIBXML2_SOURCE_DIR}/error.c" + "${LIBXML2_SOURCE_DIR}/parserInternals.c" + "${LIBXML2_SOURCE_DIR}/parser.c" + "${LIBXML2_SOURCE_DIR}/tree.c" + "${LIBXML2_SOURCE_DIR}/hash.c" + "${LIBXML2_SOURCE_DIR}/list.c" + "${LIBXML2_SOURCE_DIR}/xmlIO.c" + "${LIBXML2_SOURCE_DIR}/xmlmemory.c" + 
"${LIBXML2_SOURCE_DIR}/uri.c" + "${LIBXML2_SOURCE_DIR}/valid.c" + "${LIBXML2_SOURCE_DIR}/xlink.c" + "${LIBXML2_SOURCE_DIR}/HTMLparser.c" + "${LIBXML2_SOURCE_DIR}/HTMLtree.c" + "${LIBXML2_SOURCE_DIR}/debugXML.c" + "${LIBXML2_SOURCE_DIR}/xpath.c" + "${LIBXML2_SOURCE_DIR}/xpointer.c" + "${LIBXML2_SOURCE_DIR}/xinclude.c" + "${LIBXML2_SOURCE_DIR}/nanohttp.c" + "${LIBXML2_SOURCE_DIR}/nanoftp.c" + "${LIBXML2_SOURCE_DIR}/DOCBparser.c" + "${LIBXML2_SOURCE_DIR}/catalog.c" + "${LIBXML2_SOURCE_DIR}/globals.c" + "${LIBXML2_SOURCE_DIR}/threads.c" + "${LIBXML2_SOURCE_DIR}/c14n.c" + "${LIBXML2_SOURCE_DIR}/xmlstring.c" + "${LIBXML2_SOURCE_DIR}/buf.c" + "${LIBXML2_SOURCE_DIR}/xmlregexp.c" + "${LIBXML2_SOURCE_DIR}/xmlschemas.c" + "${LIBXML2_SOURCE_DIR}/xmlschemastypes.c" + "${LIBXML2_SOURCE_DIR}/xmlunicode.c" + "${LIBXML2_SOURCE_DIR}/triostr.c" + #"${LIBXML2_SOURCE_DIR}/trio.c" + "${LIBXML2_SOURCE_DIR}/xmlreader.c" + "${LIBXML2_SOURCE_DIR}/relaxng.c" + "${LIBXML2_SOURCE_DIR}/dict.c" + "${LIBXML2_SOURCE_DIR}/SAX2.c" + "${LIBXML2_SOURCE_DIR}/xmlwriter.c" + "${LIBXML2_SOURCE_DIR}/legacy.c" + "${LIBXML2_SOURCE_DIR}/chvalid.c" + "${LIBXML2_SOURCE_DIR}/pattern.c" + "${LIBXML2_SOURCE_DIR}/xmlsave.c" + "${LIBXML2_SOURCE_DIR}/xmlmodule.c" + "${LIBXML2_SOURCE_DIR}/schematron.c" + "${LIBXML2_SOURCE_DIR}/xzlib.c" ) add_library(libxml2 ${SRCS}) @@ -57,6 +57,6 @@ if(M_LIBRARY) target_link_libraries(libxml2 PRIVATE ${M_LIBRARY}) endif() -target_include_directories(libxml2 PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/linux_x86_64/include) -target_include_directories(libxml2 PUBLIC ${LIBXML2_SOURCE_DIR}/include) +target_include_directories(libxml2 PUBLIC "${CMAKE_CURRENT_SOURCE_DIR}/linux_x86_64/include") +target_include_directories(libxml2 PUBLIC "${LIBXML2_SOURCE_DIR}/include") target_include_directories(libxml2 SYSTEM BEFORE PRIVATE ${ZLIB_INCLUDE_DIR}) diff --git a/contrib/llvm b/contrib/llvm index 8f24d507c1c..cfaf365cf96 160000 --- a/contrib/llvm +++ b/contrib/llvm @@ -1 +1 @@ -Subproject commit 8f24d507c1cfeec66d27f48fe74518fd278e2d25 +Subproject commit cfaf365cf96918999d09d976ec736b4518cf5d02 diff --git a/contrib/lz4-cmake/CMakeLists.txt b/contrib/lz4-cmake/CMakeLists.txt index 72510d72534..77e00d4295b 100644 --- a/contrib/lz4-cmake/CMakeLists.txt +++ b/contrib/lz4-cmake/CMakeLists.txt @@ -33,5 +33,5 @@ if (NOT EXTERNAL_LZ4_LIBRARY_FOUND) if (SANITIZE STREQUAL "undefined") target_compile_options (lz4 PRIVATE -fno-sanitize=undefined) endif () - target_include_directories(lz4 PUBLIC ${LIBRARY_DIR}/lib) + target_include_directories(lz4 PUBLIC "${LIBRARY_DIR}/lib") endif () diff --git a/contrib/mariadb-connector-c b/contrib/mariadb-connector-c index f4476ee7311..5f4034a3a63 160000 --- a/contrib/mariadb-connector-c +++ b/contrib/mariadb-connector-c @@ -1 +1 @@ -Subproject commit f4476ee7311b35b593750f6ae2cbdb62a4006374 +Subproject commit 5f4034a3a6376416504f17186c55fe401c6d8e5e diff --git a/contrib/nanodbc b/contrib/nanodbc new file mode 160000 index 00000000000..9fc45967551 --- /dev/null +++ b/contrib/nanodbc @@ -0,0 +1 @@ +Subproject commit 9fc459675515d491401727ec67fca38db721f28c diff --git a/contrib/nanodbc-cmake/CMakeLists.txt b/contrib/nanodbc-cmake/CMakeLists.txt new file mode 100644 index 00000000000..26a030c3995 --- /dev/null +++ b/contrib/nanodbc-cmake/CMakeLists.txt @@ -0,0 +1,18 @@ +if (NOT USE_INTERNAL_NANODBC_LIBRARY) + return () +endif () + +set (LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/nanodbc") + +if (NOT TARGET unixodbc) + message(FATAL_ERROR "Configuration error: unixodbc is not a target") +endif() + +set 
(SRCS + "${LIBRARY_DIR}/nanodbc/nanodbc.cpp" +) + +add_library(nanodbc ${SRCS}) + +target_link_libraries (nanodbc PUBLIC unixodbc) +target_include_directories (nanodbc SYSTEM PUBLIC "${LIBRARY_DIR}/") diff --git a/contrib/nuraft-cmake/CMakeLists.txt b/contrib/nuraft-cmake/CMakeLists.txt index 83137fe73bf..725e86195e1 100644 --- a/contrib/nuraft-cmake/CMakeLists.txt +++ b/contrib/nuraft-cmake/CMakeLists.txt @@ -1,30 +1,30 @@ -set(LIBRARY_DIR ${ClickHouse_SOURCE_DIR}/contrib/NuRaft) +set(LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/NuRaft") set(SRCS - ${LIBRARY_DIR}/src/handle_priority.cxx - ${LIBRARY_DIR}/src/buffer_serializer.cxx - ${LIBRARY_DIR}/src/peer.cxx - ${LIBRARY_DIR}/src/global_mgr.cxx - ${LIBRARY_DIR}/src/buffer.cxx - ${LIBRARY_DIR}/src/asio_service.cxx - ${LIBRARY_DIR}/src/handle_client_request.cxx - ${LIBRARY_DIR}/src/raft_server.cxx - ${LIBRARY_DIR}/src/snapshot.cxx - ${LIBRARY_DIR}/src/handle_commit.cxx - ${LIBRARY_DIR}/src/error_code.cxx - ${LIBRARY_DIR}/src/crc32.cxx - ${LIBRARY_DIR}/src/handle_snapshot_sync.cxx - ${LIBRARY_DIR}/src/stat_mgr.cxx - ${LIBRARY_DIR}/src/handle_join_leave.cxx - ${LIBRARY_DIR}/src/handle_user_cmd.cxx - ${LIBRARY_DIR}/src/handle_custom_notification.cxx - ${LIBRARY_DIR}/src/handle_vote.cxx - ${LIBRARY_DIR}/src/launcher.cxx - ${LIBRARY_DIR}/src/srv_config.cxx - ${LIBRARY_DIR}/src/snapshot_sync_req.cxx - ${LIBRARY_DIR}/src/handle_timeout.cxx - ${LIBRARY_DIR}/src/handle_append_entries.cxx - ${LIBRARY_DIR}/src/cluster_config.cxx + "${LIBRARY_DIR}/src/handle_priority.cxx" + "${LIBRARY_DIR}/src/buffer_serializer.cxx" + "${LIBRARY_DIR}/src/peer.cxx" + "${LIBRARY_DIR}/src/global_mgr.cxx" + "${LIBRARY_DIR}/src/buffer.cxx" + "${LIBRARY_DIR}/src/asio_service.cxx" + "${LIBRARY_DIR}/src/handle_client_request.cxx" + "${LIBRARY_DIR}/src/raft_server.cxx" + "${LIBRARY_DIR}/src/snapshot.cxx" + "${LIBRARY_DIR}/src/handle_commit.cxx" + "${LIBRARY_DIR}/src/error_code.cxx" + "${LIBRARY_DIR}/src/crc32.cxx" + "${LIBRARY_DIR}/src/handle_snapshot_sync.cxx" + "${LIBRARY_DIR}/src/stat_mgr.cxx" + "${LIBRARY_DIR}/src/handle_join_leave.cxx" + "${LIBRARY_DIR}/src/handle_user_cmd.cxx" + "${LIBRARY_DIR}/src/handle_custom_notification.cxx" + "${LIBRARY_DIR}/src/handle_vote.cxx" + "${LIBRARY_DIR}/src/launcher.cxx" + "${LIBRARY_DIR}/src/srv_config.cxx" + "${LIBRARY_DIR}/src/snapshot_sync_req.cxx" + "${LIBRARY_DIR}/src/handle_timeout.cxx" + "${LIBRARY_DIR}/src/handle_append_entries.cxx" + "${LIBRARY_DIR}/src/cluster_config.cxx" ) @@ -37,9 +37,9 @@ else() target_compile_definitions(nuraft PRIVATE USE_BOOST_ASIO=1 BOOST_ASIO_STANDALONE=1) endif() -target_include_directories (nuraft SYSTEM PRIVATE ${LIBRARY_DIR}/include/libnuraft) +target_include_directories (nuraft SYSTEM PRIVATE "${LIBRARY_DIR}/include/libnuraft") # for some reason include "asio.h" directly without "boost/" prefix. 
-target_include_directories (nuraft SYSTEM PRIVATE ${ClickHouse_SOURCE_DIR}/contrib/boost/boost) +target_include_directories (nuraft SYSTEM PRIVATE "${ClickHouse_SOURCE_DIR}/contrib/boost/boost") target_link_libraries (nuraft PRIVATE boost::headers_only boost::coroutine) @@ -47,4 +47,4 @@ if(OPENSSL_SSL_LIBRARY AND OPENSSL_CRYPTO_LIBRARY) target_link_libraries (nuraft PRIVATE ${OPENSSL_SSL_LIBRARY} ${OPENSSL_CRYPTO_LIBRARY}) endif() -target_include_directories (nuraft SYSTEM PUBLIC ${LIBRARY_DIR}/include) +target_include_directories (nuraft SYSTEM PUBLIC "${LIBRARY_DIR}/include") diff --git a/contrib/openldap-cmake/CMakeLists.txt b/contrib/openldap-cmake/CMakeLists.txt index b0a5f4048ff..0892403bb62 100644 --- a/contrib/openldap-cmake/CMakeLists.txt +++ b/contrib/openldap-cmake/CMakeLists.txt @@ -1,4 +1,4 @@ -set(OPENLDAP_SOURCE_DIR ${ClickHouse_SOURCE_DIR}/contrib/openldap) +set(OPENLDAP_SOURCE_DIR "${ClickHouse_SOURCE_DIR}/contrib/openldap") # How were these lists generated? # I compiled the original OpenLDAP with its original build system and copied the list of source files from the build commands. @@ -12,9 +12,9 @@ set(OPENLDAP_VERSION_STRING "2.5.X") macro(mkversion _lib_name) add_custom_command( - OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/${_lib_name}-version.c - COMMAND ${CMAKE_COMMAND} -E env bash -c "${OPENLDAP_SOURCE_DIR}/build/mkversion -v '${OPENLDAP_VERSION_STRING}' liblber.la > ${CMAKE_CURRENT_BINARY_DIR}/${_lib_name}-version.c" - MAIN_DEPENDENCY ${OPENLDAP_SOURCE_DIR}/build/mkversion + OUTPUT "${CMAKE_CURRENT_BINARY_DIR}/${_lib_name}-version.c" + COMMAND ${CMAKE_COMMAND} -E env bash -c "${OPENLDAP_SOURCE_DIR}/build/mkversion -v '${OPENLDAP_VERSION_STRING}' liblber.la > \"${CMAKE_CURRENT_BINARY_DIR}/${_lib_name}-version.c\"" + MAIN_DEPENDENCY "${OPENLDAP_SOURCE_DIR}/build/mkversion" WORKING_DIRECTORY ${OPENLDAP_SOURCE_DIR} VERBATIM ) @@ -37,23 +37,23 @@ endif() set(_extra_build_dir "${CMAKE_CURRENT_SOURCE_DIR}/${_system_name}_${_system_processor}") set(_lber_srcs - ${OPENLDAP_SOURCE_DIR}/libraries/liblber/assert.c - ${OPENLDAP_SOURCE_DIR}/libraries/liblber/decode.c - ${OPENLDAP_SOURCE_DIR}/libraries/liblber/encode.c - ${OPENLDAP_SOURCE_DIR}/libraries/liblber/io.c - ${OPENLDAP_SOURCE_DIR}/libraries/liblber/bprint.c - ${OPENLDAP_SOURCE_DIR}/libraries/liblber/debug.c - ${OPENLDAP_SOURCE_DIR}/libraries/liblber/memory.c - ${OPENLDAP_SOURCE_DIR}/libraries/liblber/options.c - ${OPENLDAP_SOURCE_DIR}/libraries/liblber/sockbuf.c - ${OPENLDAP_SOURCE_DIR}/libraries/liblber/stdio.c + "${OPENLDAP_SOURCE_DIR}/libraries/liblber/assert.c" + "${OPENLDAP_SOURCE_DIR}/libraries/liblber/decode.c" + "${OPENLDAP_SOURCE_DIR}/libraries/liblber/encode.c" + "${OPENLDAP_SOURCE_DIR}/libraries/liblber/io.c" + "${OPENLDAP_SOURCE_DIR}/libraries/liblber/bprint.c" + "${OPENLDAP_SOURCE_DIR}/libraries/liblber/debug.c" + "${OPENLDAP_SOURCE_DIR}/libraries/liblber/memory.c" + "${OPENLDAP_SOURCE_DIR}/libraries/liblber/options.c" + "${OPENLDAP_SOURCE_DIR}/libraries/liblber/sockbuf.c" + "${OPENLDAP_SOURCE_DIR}/libraries/liblber/stdio.c" ) mkversion(lber) add_library(lber ${_libs_type} ${_lber_srcs} - ${CMAKE_CURRENT_BINARY_DIR}/lber-version.c + "${CMAKE_CURRENT_BINARY_DIR}/lber-version.c" ) target_link_libraries(lber @@ -62,8 +62,8 @@ target_link_libraries(lber target_include_directories(lber PRIVATE ${_extra_build_dir}/include - PRIVATE ${OPENLDAP_SOURCE_DIR}/include - PRIVATE ${OPENLDAP_SOURCE_DIR}/libraries/liblber + PRIVATE "${OPENLDAP_SOURCE_DIR}/include" + PRIVATE "${OPENLDAP_SOURCE_DIR}/libraries/liblber" PRIVATE 
${OPENSSL_INCLUDE_DIR} ) @@ -72,78 +72,78 @@ target_compile_definitions(lber ) set(_ldap_srcs - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/bind.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/open.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/result.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/error.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/compare.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/search.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/controls.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/messages.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/references.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/extended.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/cyrus.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/modify.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/add.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/modrdn.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/delete.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/abandon.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/sasl.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/sbind.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/unbind.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/cancel.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/filter.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/free.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/sort.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/passwd.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/whoami.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/vc.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/getdn.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/getentry.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/getattr.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/getvalues.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/addentry.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/request.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/os-ip.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/url.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/pagectrl.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/sortctrl.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/vlvctrl.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/init.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/options.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/print.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/string.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/util-int.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/schema.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/charray.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/os-local.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/dnssrv.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/utf-8.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/utf-8-conv.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/tls2.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/tls_o.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/tls_g.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/turn.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/ppolicy.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/dds.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/txn.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/ldap_sync.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/stctrl.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/assertion.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/deref.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/ldifutil.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/ldif.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/fetch.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/lbase64.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/msctrl.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap/psearchctrl.c + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/bind.c" + 
"${OPENLDAP_SOURCE_DIR}/libraries/libldap/open.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/result.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/error.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/compare.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/search.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/controls.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/messages.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/references.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/extended.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/cyrus.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/modify.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/add.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/modrdn.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/delete.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/abandon.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/sasl.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/sbind.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/unbind.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/cancel.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/filter.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/free.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/sort.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/passwd.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/whoami.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/vc.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/getdn.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/getentry.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/getattr.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/getvalues.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/addentry.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/request.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/os-ip.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/url.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/pagectrl.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/sortctrl.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/vlvctrl.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/init.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/options.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/print.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/string.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/util-int.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/schema.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/charray.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/os-local.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/dnssrv.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/utf-8.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/utf-8-conv.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/tls2.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/tls_o.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/tls_g.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/turn.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/ppolicy.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/dds.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/txn.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/ldap_sync.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/stctrl.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/assertion.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/deref.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/ldifutil.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/ldif.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/fetch.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/lbase64.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/msctrl.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap/psearchctrl.c" ) mkversion(ldap) add_library(ldap ${_libs_type} ${_ldap_srcs} - 
${CMAKE_CURRENT_BINARY_DIR}/ldap-version.c + "${CMAKE_CURRENT_BINARY_DIR}/ldap-version.c" ) target_link_libraries(ldap @@ -153,8 +153,8 @@ target_link_libraries(ldap target_include_directories(ldap PRIVATE ${_extra_build_dir}/include - PRIVATE ${OPENLDAP_SOURCE_DIR}/include - PRIVATE ${OPENLDAP_SOURCE_DIR}/libraries/libldap + PRIVATE "${OPENLDAP_SOURCE_DIR}/include" + PRIVATE "${OPENLDAP_SOURCE_DIR}/libraries/libldap" PRIVATE ${OPENSSL_INCLUDE_DIR} ) @@ -163,16 +163,16 @@ target_compile_definitions(ldap ) set(_ldap_r_specific_srcs - ${OPENLDAP_SOURCE_DIR}/libraries/libldap_r/threads.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap_r/rdwr.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap_r/tpool.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap_r/rq.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap_r/thr_posix.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap_r/thr_thr.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap_r/thr_nt.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap_r/thr_pth.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap_r/thr_stub.c - ${OPENLDAP_SOURCE_DIR}/libraries/libldap_r/thr_debug.c + "${OPENLDAP_SOURCE_DIR}/libraries/libldap_r/threads.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap_r/rdwr.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap_r/tpool.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap_r/rq.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap_r/thr_posix.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap_r/thr_thr.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap_r/thr_nt.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap_r/thr_pth.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap_r/thr_stub.c" + "${OPENLDAP_SOURCE_DIR}/libraries/libldap_r/thr_debug.c" ) mkversion(ldap_r) @@ -180,7 +180,7 @@ mkversion(ldap_r) add_library(ldap_r ${_libs_type} ${_ldap_r_specific_srcs} ${_ldap_srcs} - ${CMAKE_CURRENT_BINARY_DIR}/ldap_r-version.c + "${CMAKE_CURRENT_BINARY_DIR}/ldap_r-version.c" ) target_link_libraries(ldap_r @@ -190,9 +190,9 @@ target_link_libraries(ldap_r target_include_directories(ldap_r PRIVATE ${_extra_build_dir}/include - PRIVATE ${OPENLDAP_SOURCE_DIR}/include - PRIVATE ${OPENLDAP_SOURCE_DIR}/libraries/libldap_r - PRIVATE ${OPENLDAP_SOURCE_DIR}/libraries/libldap + PRIVATE "${OPENLDAP_SOURCE_DIR}/include" + PRIVATE "${OPENLDAP_SOURCE_DIR}/libraries/libldap_r" + PRIVATE "${OPENLDAP_SOURCE_DIR}/libraries/libldap" PRIVATE ${OPENSSL_INCLUDE_DIR} ) diff --git a/contrib/openldap-cmake/darwin_aarch64/include/lber_types.h b/contrib/openldap-cmake/darwin_aarch64/include/lber_types.h new file mode 100644 index 00000000000..dbd59430527 --- /dev/null +++ b/contrib/openldap-cmake/darwin_aarch64/include/lber_types.h @@ -0,0 +1,63 @@ +/* include/lber_types.h. Generated from lber_types.hin by configure. */ +/* $OpenLDAP$ */ +/* This work is part of OpenLDAP Software . + * + * Copyright 1998-2020 The OpenLDAP Foundation. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted only as authorized by the OpenLDAP + * Public License. + * + * A copy of this license is available in file LICENSE in the + * top-level directory of the distribution or, alternatively, at + * . 
+ */ + +/* + * LBER types + */ + +#ifndef _LBER_TYPES_H +#define _LBER_TYPES_H + +#include <ldap_cdefs.h> + +LDAP_BEGIN_DECL + +/* LBER boolean, enum, integers (32 bits or larger) */ +#define LBER_INT_T int + +/* LBER tags (32 bits or larger) */ +#define LBER_TAG_T long + +/* LBER socket descriptor */ +#define LBER_SOCKET_T int + +/* LBER lengths (32 bits or larger) */ +#define LBER_LEN_T long + +/* ------------------------------------------------------------ */ + +/* booleans, enumerations, and integers */ +typedef LBER_INT_T ber_int_t; + +/* signed and unsigned versions */ +typedef signed LBER_INT_T ber_sint_t; +typedef unsigned LBER_INT_T ber_uint_t; + +/* tags */ +typedef unsigned LBER_TAG_T ber_tag_t; + +/* "socket" descriptors */ +typedef LBER_SOCKET_T ber_socket_t; + +/* lengths */ +typedef unsigned LBER_LEN_T ber_len_t; + +/* signed lengths */ +typedef signed LBER_LEN_T ber_slen_t; + +LDAP_END_DECL + +#endif /* _LBER_TYPES_H */ diff --git a/contrib/openldap-cmake/darwin_aarch64/include/ldap_config.h b/contrib/openldap-cmake/darwin_aarch64/include/ldap_config.h new file mode 100644 index 00000000000..89f7b40b884 --- /dev/null +++ b/contrib/openldap-cmake/darwin_aarch64/include/ldap_config.h @@ -0,0 +1,74 @@ +/* include/ldap_config.h. Generated from ldap_config.hin by configure. */ +/* $OpenLDAP$ */ +/* This work is part of OpenLDAP Software <http://www.OpenLDAP.org/>. + * + * Copyright 1998-2020 The OpenLDAP Foundation. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted only as authorized by the OpenLDAP + * Public License. + * + * A copy of this license is available in file LICENSE in the + * top-level directory of the distribution or, alternatively, at + * <http://www.OpenLDAP.org/license.html>. + */ + +/* + * This file works in conjunction with OpenLDAP configure system. + * If you do not like the values below, adjust your configure options. + */ + +#ifndef _LDAP_CONFIG_H +#define _LDAP_CONFIG_H + +/* directory separator */ +#ifndef LDAP_DIRSEP +#ifndef _WIN32 +#define LDAP_DIRSEP "/" +#else +#define LDAP_DIRSEP "\\" +#endif +#endif + +/* directory for temporary files */ +#if defined(_WIN32) +# define LDAP_TMPDIR "C:\\." /* we don't have much of a choice */ +#elif defined( _P_tmpdir ) +# define LDAP_TMPDIR _P_tmpdir +#elif defined( P_tmpdir ) +# define LDAP_TMPDIR P_tmpdir +#elif defined( _PATH_TMPDIR ) +# define LDAP_TMPDIR _PATH_TMPDIR +#else +# define LDAP_TMPDIR LDAP_DIRSEP "tmp" +#endif + +/* directories */ +#ifndef LDAP_BINDIR +#define LDAP_BINDIR "/tmp/ldap-prefix/bin" +#endif +#ifndef LDAP_SBINDIR +#define LDAP_SBINDIR "/tmp/ldap-prefix/sbin" +#endif +#ifndef LDAP_DATADIR +#define LDAP_DATADIR "/tmp/ldap-prefix/share/openldap" +#endif +#ifndef LDAP_SYSCONFDIR +#define LDAP_SYSCONFDIR "/tmp/ldap-prefix/etc/openldap" +#endif +#ifndef LDAP_LIBEXECDIR +#define LDAP_LIBEXECDIR "/tmp/ldap-prefix/libexec" +#endif +#ifndef LDAP_MODULEDIR +#define LDAP_MODULEDIR "/tmp/ldap-prefix/libexec/openldap" +#endif +#ifndef LDAP_RUNDIR +#define LDAP_RUNDIR "/tmp/ldap-prefix/var" +#endif +#ifndef LDAP_LOCALEDIR +#define LDAP_LOCALEDIR "" +#endif + + +#endif /* _LDAP_CONFIG_H */ diff --git a/contrib/openldap-cmake/darwin_aarch64/include/ldap_features.h b/contrib/openldap-cmake/darwin_aarch64/include/ldap_features.h new file mode 100644 index 00000000000..f0cc7c3626f --- /dev/null +++ b/contrib/openldap-cmake/darwin_aarch64/include/ldap_features.h @@ -0,0 +1,61 @@ +/* include/ldap_features.h. Generated from ldap_features.hin by configure. 
*/ +/* $OpenLDAP$ */ +/* This work is part of OpenLDAP Software <http://www.OpenLDAP.org/>. + * + * Copyright 1998-2020 The OpenLDAP Foundation. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted only as authorized by the OpenLDAP + * Public License. + * + * A copy of this license is available in file LICENSE in the + * top-level directory of the distribution or, alternatively, at + * <http://www.OpenLDAP.org/license.html>. + */ + +/* + * LDAP Features + */ + +#ifndef _LDAP_FEATURES_H +#define _LDAP_FEATURES_H 1 + +/* OpenLDAP API version macros */ +#define LDAP_VENDOR_VERSION 20501 +#define LDAP_VENDOR_VERSION_MAJOR 2 +#define LDAP_VENDOR_VERSION_MINOR 5 +#define LDAP_VENDOR_VERSION_PATCH X + +/* +** WORK IN PROGRESS! +** +** OpenLDAP reentrancy/thread-safeness should be dynamically +** checked using ldap_get_option(). +** +** The -lldap implementation is not thread-safe. +** +** The -lldap_r implementation is: +** LDAP_API_FEATURE_THREAD_SAFE (basic thread safety) +** but also be: +** LDAP_API_FEATURE_SESSION_THREAD_SAFE +** LDAP_API_FEATURE_OPERATION_THREAD_SAFE +** +** The preprocessor flag LDAP_API_FEATURE_X_OPENLDAP_THREAD_SAFE +** can be used to determine if -lldap_r is available at compile +** time. You must define LDAP_THREAD_SAFE if and only if you +** link with -lldap_r. +** +** If you fail to define LDAP_THREAD_SAFE when linking with +** -lldap_r or define LDAP_THREAD_SAFE when linking with -lldap, +** provided header definitions and declarations may be incorrect. +** +*/ + +/* is -lldap_r available or not */ +#define LDAP_API_FEATURE_X_OPENLDAP_THREAD_SAFE 1 + +/* LDAP v2 Referrals */ +/* #undef LDAP_API_FEATURE_X_OPENLDAP_V2_REFERRALS */ + +#endif /* LDAP_FEATURES */ diff --git a/contrib/openldap-cmake/darwin_aarch64/include/portable.h b/contrib/openldap-cmake/darwin_aarch64/include/portable.h new file mode 100644 index 00000000000..fdf4e89017e --- /dev/null +++ b/contrib/openldap-cmake/darwin_aarch64/include/portable.h @@ -0,0 +1,1169 @@ +/* include/portable.h. Generated from portable.hin by configure. */ +/* include/portable.hin. Generated from configure.in by autoheader. */ + + +/* begin of portable.h.pre */ +/* This work is part of OpenLDAP Software <http://www.OpenLDAP.org/>. + * + * Copyright 1998-2020 The OpenLDAP Foundation + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted only as authorized by the OpenLDAP + * Public License. + * + * A copy of this license is available in the file LICENSE in the + * top-level directory of the distribution or, alternatively, at + * <http://www.OpenLDAP.org/license.html>. 
+ */ + +#ifndef _LDAP_PORTABLE_H +#define _LDAP_PORTABLE_H + +/* define this if needed to get reentrant functions */ +#ifndef REENTRANT +#define REENTRANT 1 +#endif +#ifndef _REENTRANT +#define _REENTRANT 1 +#endif + +/* define this if needed to get threadsafe functions */ +#ifndef THREADSAFE +#define THREADSAFE 1 +#endif +#ifndef _THREADSAFE +#define _THREADSAFE 1 +#endif +#ifndef THREAD_SAFE +#define THREAD_SAFE 1 +#endif +#ifndef _THREAD_SAFE +#define _THREAD_SAFE 1 +#endif + +#ifndef _SGI_MP_SOURCE +#define _SGI_MP_SOURCE 1 +#endif + +/* end of portable.h.pre */ + + +/* Define if building universal (internal helper macro) */ +/* #undef AC_APPLE_UNIVERSAL_BUILD */ + +/* define to use both and */ +/* #undef BOTH_STRINGS_H */ + +/* define if cross compiling */ +/* #undef CROSS_COMPILING */ + +/* set to the number of arguments ctime_r() expects */ +#define CTIME_R_NARGS 2 + +/* define if toupper() requires islower() */ +/* #undef C_UPPER_LOWER */ + +/* define if sys_errlist is not declared in stdio.h or errno.h */ +/* #undef DECL_SYS_ERRLIST */ + +/* define to enable slapi library */ +/* #undef ENABLE_SLAPI */ + +/* defined to be the EXE extension */ +#define EXEEXT "" + +/* set to the number of arguments gethostbyaddr_r() expects */ +/* #undef GETHOSTBYADDR_R_NARGS */ + +/* set to the number of arguments gethostbyname_r() expects */ +/* #undef GETHOSTBYNAME_R_NARGS */ + +/* Define to 1 if `TIOCGWINSZ' requires . */ +/* #undef GWINSZ_IN_SYS_IOCTL */ + +/* define if you have AIX security lib */ +/* #undef HAVE_AIX_SECURITY */ + +/* Define to 1 if you have the header file. */ +#define HAVE_ARPA_INET_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_ARPA_NAMESER_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_ASSERT_H 1 + +/* Define to 1 if you have the `bcopy' function. */ +#define HAVE_BCOPY 1 + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_BITS_TYPES_H */ + +/* Define to 1 if you have the `chroot' function. */ +#define HAVE_CHROOT 1 + +/* Define to 1 if you have the `closesocket' function. */ +/* #undef HAVE_CLOSESOCKET */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_CONIO_H */ + +/* define if crypt(3) is available */ +/* #undef HAVE_CRYPT */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_CRYPT_H */ + +/* define if crypt_r() is also available */ +/* #undef HAVE_CRYPT_R */ + +/* Define to 1 if you have the `ctime_r' function. */ +#define HAVE_CTIME_R 1 + +/* define if you have Cyrus SASL */ +/* #undef HAVE_CYRUS_SASL */ + +/* define if your system supports /dev/poll */ +/* #undef HAVE_DEVPOLL */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_DIRECT_H */ + +/* Define to 1 if you have the header file, and it defines `DIR'. + */ +#define HAVE_DIRENT_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_DLFCN_H 1 + +/* Define to 1 if you don't have `vprintf' but do have `_doprnt.' */ +/* #undef HAVE_DOPRNT */ + +/* define if system uses EBCDIC instead of ASCII */ +/* #undef HAVE_EBCDIC */ + +/* Define to 1 if you have the `endgrent' function. */ +#define HAVE_ENDGRENT 1 + +/* Define to 1 if you have the `endpwent' function. */ +#define HAVE_ENDPWENT 1 + +/* define if your system supports epoll */ +/* #undef HAVE_EPOLL */ + +/* Define to 1 if you have the header file. */ +#define HAVE_ERRNO_H 1 + +/* Define to 1 if you have the `fcntl' function. */ +#define HAVE_FCNTL 1 + +/* Define to 1 if you have the header file. 
*/ +#define HAVE_FCNTL_H 1 + +/* define if you actually have FreeBSD fetch(3) */ +/* #undef HAVE_FETCH */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_FILIO_H */ + +/* Define to 1 if you have the `flock' function. */ +#define HAVE_FLOCK 1 + +/* Define to 1 if you have the `fstat' function. */ +#define HAVE_FSTAT 1 + +/* Define to 1 if you have the `gai_strerror' function. */ +#define HAVE_GAI_STRERROR 1 + +/* Define to 1 if you have the `getaddrinfo' function. */ +#define HAVE_GETADDRINFO 1 + +/* Define to 1 if you have the `getdtablesize' function. */ +#define HAVE_GETDTABLESIZE 1 + +/* Define to 1 if you have the `geteuid' function. */ +#define HAVE_GETEUID 1 + +/* Define to 1 if you have the `getgrgid' function. */ +#define HAVE_GETGRGID 1 + +/* Define to 1 if you have the `gethostbyaddr_r' function. */ +/* #undef HAVE_GETHOSTBYADDR_R */ + +/* Define to 1 if you have the `gethostbyname_r' function. */ +/* #undef HAVE_GETHOSTBYNAME_R */ + +/* Define to 1 if you have the `gethostname' function. */ +#define HAVE_GETHOSTNAME 1 + +/* Define to 1 if you have the `getnameinfo' function. */ +#define HAVE_GETNAMEINFO 1 + +/* Define to 1 if you have the `getopt' function. */ +#define HAVE_GETOPT 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_GETOPT_H 1 + +/* Define to 1 if you have the `getpassphrase' function. */ +/* #undef HAVE_GETPASSPHRASE */ + +/* Define to 1 if you have the `getpeereid' function. */ +#define HAVE_GETPEEREID 1 + +/* Define to 1 if you have the `getpeerucred' function. */ +/* #undef HAVE_GETPEERUCRED */ + +/* Define to 1 if you have the `getpwnam' function. */ +#define HAVE_GETPWNAM 1 + +/* Define to 1 if you have the `getpwuid' function. */ +#define HAVE_GETPWUID 1 + +/* Define to 1 if you have the `getspnam' function. */ +/* #undef HAVE_GETSPNAM */ + +/* Define to 1 if you have the `gettimeofday' function. */ +#define HAVE_GETTIMEOFDAY 1 + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_GMP_H */ + +/* Define to 1 if you have the `gmtime_r' function. */ +#define HAVE_GMTIME_R 1 + +/* define if you have GNUtls */ +/* #undef HAVE_GNUTLS */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_GNUTLS_GNUTLS_H */ + +/* if you have GNU Pth */ +/* #undef HAVE_GNU_PTH */ + +/* Define to 1 if you have the header file. */ +#define HAVE_GRP_H 1 + +/* Define to 1 if you have the `hstrerror' function. */ +#define HAVE_HSTRERROR 1 + +/* define to you inet_aton(3) is available */ +#define HAVE_INET_ATON 1 + +/* Define to 1 if you have the `inet_ntoa_b' function. */ +/* #undef HAVE_INET_NTOA_B */ + +/* Define to 1 if you have the `inet_ntop' function. */ +#define HAVE_INET_NTOP 1 + +/* Define to 1 if you have the `initgroups' function. */ +#define HAVE_INITGROUPS 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_INTTYPES_H 1 + +/* Define to 1 if you have the `ioctl' function. */ +#define HAVE_IOCTL 1 + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_IO_H */ + +/* define if your system supports kqueue */ +#define HAVE_KQUEUE 1 + +/* Define to 1 if you have the `gen' library (-lgen). */ +/* #undef HAVE_LIBGEN */ + +/* Define to 1 if you have the `gmp' library (-lgmp). */ +/* #undef HAVE_LIBGMP */ + +/* Define to 1 if you have the `inet' library (-linet). */ +/* #undef HAVE_LIBINET */ + +/* define if you have libtool -ltdl */ +/* #undef HAVE_LIBLTDL */ + +/* Define to 1 if you have the `net' library (-lnet). */ +/* #undef HAVE_LIBNET */ + +/* Define to 1 if you have the `nsl' library (-lnsl). 
*/ +/* #undef HAVE_LIBNSL */ + +/* Define to 1 if you have the `nsl_s' library (-lnsl_s). */ +/* #undef HAVE_LIBNSL_S */ + +/* Define to 1 if you have the `socket' library (-lsocket). */ +/* #undef HAVE_LIBSOCKET */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_LIBUTIL_H */ + +/* Define to 1 if you have the `V3' library (-lV3). */ +/* #undef HAVE_LIBV3 */ + +/* Define to 1 if you have the header file. */ +#define HAVE_LIMITS_H 1 + +/* if you have LinuxThreads */ +/* #undef HAVE_LINUX_THREADS */ + +/* Define to 1 if you have the header file. */ +#define HAVE_LOCALE_H 1 + +/* Define to 1 if you have the `localtime_r' function. */ +#define HAVE_LOCALTIME_R 1 + +/* Define to 1 if you have the `lockf' function. */ +#define HAVE_LOCKF 1 + +/* Define to 1 if the system has the type `long long'. */ +#define HAVE_LONG_LONG 1 + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_LTDL_H */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_MALLOC_H */ + +/* Define to 1 if you have the `memcpy' function. */ +#define HAVE_MEMCPY 1 + +/* Define to 1 if you have the `memmove' function. */ +#define HAVE_MEMMOVE 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_MEMORY_H 1 + +/* Define to 1 if you have the `memrchr' function. */ +/* #undef HAVE_MEMRCHR */ + +/* Define to 1 if you have the `mkstemp' function. */ +#define HAVE_MKSTEMP 1 + +/* Define to 1 if you have the `mktemp' function. */ +#define HAVE_MKTEMP 1 + +/* define this if you have mkversion */ +#define HAVE_MKVERSION 1 + +/* Define to 1 if you have the header file, and it defines `DIR'. */ +/* #undef HAVE_NDIR_H */ + +/* Define to 1 if you have the header file. */ +#define HAVE_NETINET_TCP_H 1 + +/* define if strerror_r returns char* instead of int */ +/* #undef HAVE_NONPOSIX_STRERROR_R */ + +/* if you have NT Event Log */ +/* #undef HAVE_NT_EVENT_LOG */ + +/* if you have NT Service Manager */ +/* #undef HAVE_NT_SERVICE_MANAGER */ + +/* if you have NT Threads */ +/* #undef HAVE_NT_THREADS */ + +/* define if you have OpenSSL */ +#define HAVE_OPENSSL 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_OPENSSL_BN_H 1 + +/* define if you have OpenSSL with CRL checking capability */ +#define HAVE_OPENSSL_CRL 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_OPENSSL_CRYPTO_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_OPENSSL_SSL_H 1 + +/* Define to 1 if you have the `pipe' function. */ +#define HAVE_PIPE 1 + +/* Define to 1 if you have the `poll' function. */ +#define HAVE_POLL 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_POLL_H 1 + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_PROCESS_H */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_PSAP_H */ + +/* define to pthreads API spec revision */ +#define HAVE_PTHREADS 10 + +/* define if you have pthread_detach function */ +#define HAVE_PTHREAD_DETACH 1 + +/* Define to 1 if you have the `pthread_getconcurrency' function. */ +#define HAVE_PTHREAD_GETCONCURRENCY 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_PTHREAD_H 1 + +/* Define to 1 if you have the `pthread_kill' function. */ +#define HAVE_PTHREAD_KILL 1 + +/* Define to 1 if you have the `pthread_kill_other_threads_np' function. */ +/* #undef HAVE_PTHREAD_KILL_OTHER_THREADS_NP */ + +/* define if you have pthread_rwlock_destroy function */ +#define HAVE_PTHREAD_RWLOCK_DESTROY 1 + +/* Define to 1 if you have the `pthread_setconcurrency' function. 
*/ +#define HAVE_PTHREAD_SETCONCURRENCY 1 + +/* Define to 1 if you have the `pthread_yield' function. */ +/* #undef HAVE_PTHREAD_YIELD */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_PTH_H */ + +/* Define to 1 if the system has the type `ptrdiff_t'. */ +#define HAVE_PTRDIFF_T 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_PWD_H 1 + +/* Define to 1 if you have the `read' function. */ +#define HAVE_READ 1 + +/* Define to 1 if you have the `recv' function. */ +#define HAVE_RECV 1 + +/* Define to 1 if you have the `recvfrom' function. */ +#define HAVE_RECVFROM 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_REGEX_H 1 + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_RESOLV_H */ + +/* define if you have res_query() */ +/* #undef HAVE_RES_QUERY */ + +/* define if OpenSSL needs RSAref */ +/* #undef HAVE_RSAREF */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_SASL_H */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_SASL_SASL_H */ + +/* define if your SASL library has sasl_version() */ +/* #undef HAVE_SASL_VERSION */ + +/* Define to 1 if you have the header file. */ +#define HAVE_SCHED_H 1 + +/* Define to 1 if you have the `sched_yield' function. */ +#define HAVE_SCHED_YIELD 1 + +/* Define to 1 if you have the `send' function. */ +#define HAVE_SEND 1 + +/* Define to 1 if you have the `sendmsg' function. */ +#define HAVE_SENDMSG 1 + +/* Define to 1 if you have the `sendto' function. */ +#define HAVE_SENDTO 1 + +/* Define to 1 if you have the `setegid' function. */ +#define HAVE_SETEGID 1 + +/* Define to 1 if you have the `seteuid' function. */ +#define HAVE_SETEUID 1 + +/* Define to 1 if you have the `setgid' function. */ +#define HAVE_SETGID 1 + +/* Define to 1 if you have the `setpwfile' function. */ +/* #undef HAVE_SETPWFILE */ + +/* Define to 1 if you have the `setsid' function. */ +#define HAVE_SETSID 1 + +/* Define to 1 if you have the `setuid' function. */ +#define HAVE_SETUID 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_SGTTY_H 1 + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_SHADOW_H */ + +/* Define to 1 if you have the `sigaction' function. */ +#define HAVE_SIGACTION 1 + +/* Define to 1 if you have the `signal' function. */ +#define HAVE_SIGNAL 1 + +/* Define to 1 if you have the `sigset' function. */ +#define HAVE_SIGSET 1 + +/* define if you have -lslp */ +/* #undef HAVE_SLP */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_SLP_H */ + +/* Define to 1 if you have the `snprintf' function. */ +#define HAVE_SNPRINTF 1 + +/* if you have spawnlp() */ +/* #undef HAVE_SPAWNLP */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_SQLEXT_H */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_SQL_H */ + +/* Define to 1 if you have the header file. */ +#define HAVE_STDDEF_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_STDINT_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_STDLIB_H 1 + +/* Define to 1 if you have the `strdup' function. */ +#define HAVE_STRDUP 1 + +/* Define to 1 if you have the `strerror' function. */ +#define HAVE_STRERROR 1 + +/* Define to 1 if you have the `strerror_r' function. */ +#define HAVE_STRERROR_R 1 + +/* Define to 1 if you have the `strftime' function. */ +#define HAVE_STRFTIME 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_STRINGS_H 1 + +/* Define to 1 if you have the header file. 
*/ +#define HAVE_STRING_H 1 + +/* Define to 1 if you have the `strpbrk' function. */ +#define HAVE_STRPBRK 1 + +/* Define to 1 if you have the `strrchr' function. */ +#define HAVE_STRRCHR 1 + +/* Define to 1 if you have the `strsep' function. */ +#define HAVE_STRSEP 1 + +/* Define to 1 if you have the `strspn' function. */ +#define HAVE_STRSPN 1 + +/* Define to 1 if you have the `strstr' function. */ +#define HAVE_STRSTR 1 + +/* Define to 1 if you have the `strtol' function. */ +#define HAVE_STRTOL 1 + +/* Define to 1 if you have the `strtoll' function. */ +#define HAVE_STRTOLL 1 + +/* Define to 1 if you have the `strtoq' function. */ +#define HAVE_STRTOQ 1 + +/* Define to 1 if you have the `strtoul' function. */ +#define HAVE_STRTOUL 1 + +/* Define to 1 if you have the `strtoull' function. */ +#define HAVE_STRTOULL 1 + +/* Define to 1 if you have the `strtouq' function. */ +#define HAVE_STRTOUQ 1 + +/* Define to 1 if `msg_accrightslen' is a member of `struct msghdr'. */ +/* #undef HAVE_STRUCT_MSGHDR_MSG_ACCRIGHTSLEN */ + +/* Define to 1 if `msg_control' is a member of `struct msghdr'. */ +/* #undef HAVE_STRUCT_MSGHDR_MSG_CONTROL */ + +/* Define to 1 if `pw_gecos' is a member of `struct passwd'. */ +#define HAVE_STRUCT_PASSWD_PW_GECOS 1 + +/* Define to 1 if `pw_passwd' is a member of `struct passwd'. */ +#define HAVE_STRUCT_PASSWD_PW_PASSWD 1 + +/* Define to 1 if `st_blksize' is a member of `struct stat'. */ +#define HAVE_STRUCT_STAT_ST_BLKSIZE 1 + +/* Define to 1 if `st_fstype' is a member of `struct stat'. */ +/* #undef HAVE_STRUCT_STAT_ST_FSTYPE */ + +/* define to 1 if st_fstype is char * */ +/* #undef HAVE_STRUCT_STAT_ST_FSTYPE_CHAR */ + +/* define to 1 if st_fstype is int */ +/* #undef HAVE_STRUCT_STAT_ST_FSTYPE_INT */ + +/* Define to 1 if `st_vfstype' is a member of `struct stat'. */ +/* #undef HAVE_STRUCT_STAT_ST_VFSTYPE */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_SYNCH_H */ + +/* Define to 1 if you have the `sysconf' function. */ +#define HAVE_SYSCONF 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_SYSEXITS_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_SYSLOG_H 1 + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_SYS_DEVPOLL_H */ + +/* Define to 1 if you have the header file, and it defines `DIR'. + */ +/* #undef HAVE_SYS_DIR_H */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_SYS_EPOLL_H */ + +/* define if you actually have sys_errlist in your libs */ +#define HAVE_SYS_ERRLIST 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_ERRNO_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_EVENT_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_FILE_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_FILIO_H 1 + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_SYS_FSTYP_H */ + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_IOCTL_H 1 + +/* Define to 1 if you have the header file, and it defines `DIR'. + */ +/* #undef HAVE_SYS_NDIR_H */ + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_PARAM_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_POLL_H 1 + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_SYS_PRIVGRP_H */ + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_RESOURCE_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_SELECT_H 1 + +/* Define to 1 if you have the header file. 
*/ +#define HAVE_SYS_SOCKET_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_STAT_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_SYSLOG_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_TIME_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_TYPES_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_UCRED_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_UIO_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_UN_H 1 + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_SYS_UUID_H */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_SYS_VMOUNT_H */ + +/* Define to 1 if you have that is POSIX.1 compatible. */ +#define HAVE_SYS_WAIT_H 1 + +/* define if you have -lwrap */ +/* #undef HAVE_TCPD */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_TCPD_H */ + +/* Define to 1 if you have the header file. */ +#define HAVE_TERMIOS_H 1 + +/* if you have Solaris LWP (thr) package */ +/* #undef HAVE_THR */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_THREAD_H */ + +/* Define to 1 if you have the `thr_getconcurrency' function. */ +/* #undef HAVE_THR_GETCONCURRENCY */ + +/* Define to 1 if you have the `thr_setconcurrency' function. */ +/* #undef HAVE_THR_SETCONCURRENCY */ + +/* Define to 1 if you have the `thr_yield' function. */ +/* #undef HAVE_THR_YIELD */ + +/* define if you have TLS */ +#define HAVE_TLS 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_UNISTD_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_UTIME_H 1 + +/* define if you have uuid_generate() */ +/* #undef HAVE_UUID_GENERATE */ + +/* define if you have uuid_to_str() */ +/* #undef HAVE_UUID_TO_STR */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_UUID_UUID_H */ + +/* Define to 1 if you have the `vprintf' function. */ +#define HAVE_VPRINTF 1 + +/* Define to 1 if you have the `vsnprintf' function. */ +#define HAVE_VSNPRINTF 1 + +/* Define to 1 if you have the `wait4' function. */ +#define HAVE_WAIT4 1 + +/* Define to 1 if you have the `waitpid' function. */ +#define HAVE_WAITPID 1 + +/* define if you have winsock */ +/* #undef HAVE_WINSOCK */ + +/* define if you have winsock2 */ +/* #undef HAVE_WINSOCK2 */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_WINSOCK2_H */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_WINSOCK_H */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_WIREDTIGER_H */ + +/* Define to 1 if you have the `write' function. */ +#define HAVE_WRITE 1 + +/* define if select implicitly yields */ +#define HAVE_YIELDING_SELECT 1 + +/* Define to 1 if you have the `_vsnprintf' function. 
*/ +/* #undef HAVE__VSNPRINTF */ + +/* define to 32-bit or greater integer type */ +#define LBER_INT_T int + +/* define to large integer type */ +#define LBER_LEN_T long + +/* define to socket descriptor type */ +#define LBER_SOCKET_T int + +/* define to large integer type */ +#define LBER_TAG_T long + +/* define to 1 if library is thread safe */ +#define LDAP_API_FEATURE_X_OPENLDAP_THREAD_SAFE 1 + +/* define to LDAP VENDOR VERSION */ +/* #undef LDAP_API_FEATURE_X_OPENLDAP_V2_REFERRALS */ + +/* define this to add debugging code */ +/* #undef LDAP_DEBUG */ + +/* define if LDAP libs are dynamic */ +/* #undef LDAP_LIBS_DYNAMIC */ + +/* define to support PF_INET6 */ +#define LDAP_PF_INET6 1 + +/* define to support PF_LOCAL */ +#define LDAP_PF_LOCAL 1 + +/* define this to add SLAPI code */ +/* #undef LDAP_SLAPI */ + +/* define this to add syslog code */ +/* #undef LDAP_SYSLOG */ + +/* Version */ +#define LDAP_VENDOR_VERSION 20501 + +/* Major */ +#define LDAP_VENDOR_VERSION_MAJOR 2 + +/* Minor */ +#define LDAP_VENDOR_VERSION_MINOR 5 + +/* Patch */ +#define LDAP_VENDOR_VERSION_PATCH X + +/* Define to the sub-directory where libtool stores uninstalled libraries. */ +#define LT_OBJDIR ".libs/" + +/* define if memcmp is not 8-bit clean or is otherwise broken */ +/* #undef NEED_MEMCMP_REPLACEMENT */ + +/* define if you have (or want) no threads */ +/* #undef NO_THREADS */ + +/* define to use the original debug style */ +/* #undef OLD_DEBUG */ + +/* Package */ +#define OPENLDAP_PACKAGE "OpenLDAP" + +/* Version */ +#define OPENLDAP_VERSION "2.5.X" + +/* Define to the address where bug reports for this package should be sent. */ +#define PACKAGE_BUGREPORT "" + +/* Define to the full name of this package. */ +#define PACKAGE_NAME "" + +/* Define to the full name and version of this package. */ +#define PACKAGE_STRING "" + +/* Define to the one symbol short name of this package. */ +#define PACKAGE_TARNAME "" + +/* Define to the home page for this package. */ +#define PACKAGE_URL "" + +/* Define to the version of this package. */ +#define PACKAGE_VERSION "" + +/* define if sched_yield yields the entire process */ +/* #undef REPLACE_BROKEN_YIELD */ + +/* Define as the return type of signal handlers (`int' or `void'). */ +#define RETSIGTYPE void + +/* Define to the type of arg 1 for `select'. */ +#define SELECT_TYPE_ARG1 int + +/* Define to the type of args 2, 3 and 4 for `select'. */ +#define SELECT_TYPE_ARG234 (fd_set *) + +/* Define to the type of arg 5 for `select'. */ +#define SELECT_TYPE_ARG5 (struct timeval *) + +/* The size of `int', as computed by sizeof. */ +#define SIZEOF_INT 4 + +/* The size of `long', as computed by sizeof. */ +#define SIZEOF_LONG 8 + +/* The size of `long long', as computed by sizeof. */ +#define SIZEOF_LONG_LONG 8 + +/* The size of `short', as computed by sizeof. */ +#define SIZEOF_SHORT 2 + +/* The size of `wchar_t', as computed by sizeof. 
*/ +#define SIZEOF_WCHAR_T 4 + +/* define to support per-object ACIs */ +/* #undef SLAPD_ACI_ENABLED */ + +/* define to support LDAP Async Metadirectory backend */ +/* #undef SLAPD_ASYNCMETA */ + +/* define to support cleartext passwords */ +/* #undef SLAPD_CLEARTEXT */ + +/* define to support crypt(3) passwords */ +/* #undef SLAPD_CRYPT */ + +/* define to support DNS SRV backend */ +/* #undef SLAPD_DNSSRV */ + +/* define to support LDAP backend */ +/* #undef SLAPD_LDAP */ + +/* define to support MDB backend */ +/* #undef SLAPD_MDB */ + +/* define to support LDAP Metadirectory backend */ +/* #undef SLAPD_META */ + +/* define to support modules */ +/* #undef SLAPD_MODULES */ + +/* dynamically linked module */ +#define SLAPD_MOD_DYNAMIC 2 + +/* statically linked module */ +#define SLAPD_MOD_STATIC 1 + +/* define to support cn=Monitor backend */ +/* #undef SLAPD_MONITOR */ + +/* define to support NDB backend */ +/* #undef SLAPD_NDB */ + +/* define to support NULL backend */ +/* #undef SLAPD_NULL */ + +/* define for In-Directory Access Logging overlay */ +/* #undef SLAPD_OVER_ACCESSLOG */ + +/* define for Audit Logging overlay */ +/* #undef SLAPD_OVER_AUDITLOG */ + +/* define for Automatic Certificate Authority overlay */ +/* #undef SLAPD_OVER_AUTOCA */ + +/* define for Collect overlay */ +/* #undef SLAPD_OVER_COLLECT */ + +/* define for Attribute Constraint overlay */ +/* #undef SLAPD_OVER_CONSTRAINT */ + +/* define for Dynamic Directory Services overlay */ +/* #undef SLAPD_OVER_DDS */ + +/* define for Dynamic Directory Services overlay */ +/* #undef SLAPD_OVER_DEREF */ + +/* define for Dynamic Group overlay */ +/* #undef SLAPD_OVER_DYNGROUP */ + +/* define for Dynamic List overlay */ +/* #undef SLAPD_OVER_DYNLIST */ + +/* define for Reverse Group Membership overlay */ +/* #undef SLAPD_OVER_MEMBEROF */ + +/* define for Password Policy overlay */ +/* #undef SLAPD_OVER_PPOLICY */ + +/* define for Proxy Cache overlay */ +/* #undef SLAPD_OVER_PROXYCACHE */ + +/* define for Referential Integrity overlay */ +/* #undef SLAPD_OVER_REFINT */ + +/* define for Return Code overlay */ +/* #undef SLAPD_OVER_RETCODE */ + +/* define for Rewrite/Remap overlay */ +/* #undef SLAPD_OVER_RWM */ + +/* define for Sequential Modify overlay */ +/* #undef SLAPD_OVER_SEQMOD */ + +/* define for ServerSideSort/VLV overlay */ +/* #undef SLAPD_OVER_SSSVLV */ + +/* define for Syncrepl Provider overlay */ +/* #undef SLAPD_OVER_SYNCPROV */ + +/* define for Translucent Proxy overlay */ +/* #undef SLAPD_OVER_TRANSLUCENT */ + +/* define for Attribute Uniqueness overlay */ +/* #undef SLAPD_OVER_UNIQUE */ + +/* define for Value Sorting overlay */ +/* #undef SLAPD_OVER_VALSORT */ + +/* define to support PASSWD backend */ +/* #undef SLAPD_PASSWD */ + +/* define to support PERL backend */ +/* #undef SLAPD_PERL */ + +/* define to support relay backend */ +/* #undef SLAPD_RELAY */ + +/* define to support reverse lookups */ +/* #undef SLAPD_RLOOKUPS */ + +/* define to support SHELL backend */ +/* #undef SLAPD_SHELL */ + +/* define to support SOCK backend */ +/* #undef SLAPD_SOCK */ + +/* define to support SASL passwords */ +/* #undef SLAPD_SPASSWD */ + +/* define to support SQL backend */ +/* #undef SLAPD_SQL */ + +/* define to support WiredTiger backend */ +/* #undef SLAPD_WT */ + +/* define to support run-time loadable ACL */ +/* #undef SLAP_DYNACL */ + +/* Define to 1 if you have the ANSI C header files. */ +#define STDC_HEADERS 1 + +/* Define to 1 if you can safely include both and . 
*/ +#define TIME_WITH_SYS_TIME 1 + +/* Define to 1 if your <sys/time.h> declares `struct tm'. */ +/* #undef TM_IN_SYS_TIME */ + +/* set to urandom device */ +#define URANDOM_DEVICE "/dev/urandom" + +/* define to use OpenSSL BIGNUM for MP */ +/* #undef USE_MP_BIGNUM */ + +/* define to use GMP for MP */ +/* #undef USE_MP_GMP */ + +/* define to use 'long' for MP */ +/* #undef USE_MP_LONG */ + +/* define to use 'long long' for MP */ +/* #undef USE_MP_LONG_LONG */ + +/* Define WORDS_BIGENDIAN to 1 if your processor stores words with the most + significant byte first (like Motorola and SPARC, unlike Intel). */ +#if defined AC_APPLE_UNIVERSAL_BUILD +# if defined __BIG_ENDIAN__ +# define WORDS_BIGENDIAN 1 +# endif +#else +# ifndef WORDS_BIGENDIAN +/* # undef WORDS_BIGENDIAN */ +# endif +#endif + +/* Define to the type of arg 3 for `accept'. */ +#define ber_socklen_t socklen_t + +/* Define to `char *' if <sys/types.h> does not define. */ +/* #undef caddr_t */ + +/* Define to empty if `const' does not conform to ANSI C. */ +/* #undef const */ + +/* Define to `int' if <sys/types.h> doesn't define. */ +/* #undef gid_t */ + +/* Define to `int' if <sys/types.h> does not define. */ +/* #undef mode_t */ + +/* Define to `long' if <sys/types.h> does not define. */ +/* #undef off_t */ + +/* Define to `int' if <sys/types.h> does not define. */ +/* #undef pid_t */ + +/* Define to `int' if <signal.h> does not define. */ +/* #undef sig_atomic_t */ + +/* Define to `unsigned' if <sys/types.h> does not define. */ +/* #undef size_t */ + +/* define to snprintf routine */ +/* #undef snprintf */ + +/* Define like ber_socklen_t if <sys/socket.h> does not define. */ +/* #undef socklen_t */ + +/* Define to `signed int' if <sys/types.h> does not define. */ +/* #undef ssize_t */ + +/* Define to `int' if <sys/types.h> doesn't define. */ +/* #undef uid_t */ + +/* define as empty if volatile is not supported */ +/* #undef volatile */ + +/* define to snprintf routine */ +/* #undef vsnprintf */ + + +/* begin of portable.h.post */ + +#ifdef _WIN32 +/* don't suck in all of the win32 api */ +# define WIN32_LEAN_AND_MEAN 1 +#endif + +#ifndef LDAP_NEEDS_PROTOTYPES +/* force LDAP_P to always include prototypes */ +#define LDAP_NEEDS_PROTOTYPES 1 +#endif + +#ifndef LDAP_REL_ENG +#if (LDAP_VENDOR_VERSION == 000000) && !defined(LDAP_DEVEL) +#define LDAP_DEVEL +#endif +#if defined(LDAP_DEVEL) && !defined(LDAP_TEST) +#define LDAP_TEST +#endif +#endif + +#ifdef HAVE_STDDEF_H +# include <stddef.h> +#endif + +#ifdef HAVE_EBCDIC +/* ASCII/EBCDIC converting replacements for stdio funcs + * vsnprintf and snprintf are used too, but they are already + * checked by the configure script + */ +#define fputs ber_pvt_fputs +#define fgets ber_pvt_fgets +#define printf ber_pvt_printf +#define fprintf ber_pvt_fprintf +#define vfprintf ber_pvt_vfprintf +#define vsprintf ber_pvt_vsprintf +#endif + +#include "ac/fdset.h" + +#include "ldap_cdefs.h" +#include "ldap_features.h" + +#include "ac/assert.h" +#include "ac/localize.h" + +#endif /* _LDAP_PORTABLE_H */ +/* end of portable.h.post */ + diff --git a/contrib/openldap-cmake/linux_ppc64le/include/lber_types.h b/contrib/openldap-cmake/linux_ppc64le/include/lber_types.h new file mode 100644 index 00000000000..dbd59430527 --- /dev/null +++ b/contrib/openldap-cmake/linux_ppc64le/include/lber_types.h @@ -0,0 +1,63 @@ +/* include/lber_types.h. Generated from lber_types.hin by configure. */ +/* $OpenLDAP$ */ +/* This work is part of OpenLDAP Software <http://www.openldap.org/>. + * + * Copyright 1998-2020 The OpenLDAP Foundation. + * All rights reserved.
+ * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted only as authorized by the OpenLDAP + * Public License. + * + * A copy of this license is available in file LICENSE in the + * top-level directory of the distribution or, alternatively, at + * <http://www.OpenLDAP.org/license.html>. + */ + +/* + * LBER types + */ + +#ifndef _LBER_TYPES_H +#define _LBER_TYPES_H + +#include <ldap_cdefs.h> + +LDAP_BEGIN_DECL + +/* LBER boolean, enum, integers (32 bits or larger) */ +#define LBER_INT_T int + +/* LBER tags (32 bits or larger) */ +#define LBER_TAG_T long + +/* LBER socket descriptor */ +#define LBER_SOCKET_T int + +/* LBER lengths (32 bits or larger) */ +#define LBER_LEN_T long + +/* ------------------------------------------------------------ */ + +/* booleans, enumerations, and integers */ +typedef LBER_INT_T ber_int_t; + +/* signed and unsigned versions */ +typedef signed LBER_INT_T ber_sint_t; +typedef unsigned LBER_INT_T ber_uint_t; + +/* tags */ +typedef unsigned LBER_TAG_T ber_tag_t; + +/* "socket" descriptors */ +typedef LBER_SOCKET_T ber_socket_t; + +/* lengths */ +typedef unsigned LBER_LEN_T ber_len_t; + +/* signed lengths */ +typedef signed LBER_LEN_T ber_slen_t; + +LDAP_END_DECL + +#endif /* _LBER_TYPES_H */ diff --git a/contrib/openldap-cmake/linux_ppc64le/include/ldap_config.h b/contrib/openldap-cmake/linux_ppc64le/include/ldap_config.h new file mode 100644 index 00000000000..89f7b40b884 --- /dev/null +++ b/contrib/openldap-cmake/linux_ppc64le/include/ldap_config.h @@ -0,0 +1,74 @@ +/* include/ldap_config.h. Generated from ldap_config.hin by configure. */ +/* $OpenLDAP$ */ +/* This work is part of OpenLDAP Software <http://www.openldap.org/>. + * + * Copyright 1998-2020 The OpenLDAP Foundation. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted only as authorized by the OpenLDAP + * Public License. + * + * A copy of this license is available in file LICENSE in the + * top-level directory of the distribution or, alternatively, at + * <http://www.OpenLDAP.org/license.html>. + */ + +/* + * This file works in conjunction with OpenLDAP configure system. + * If you do no like the values below, adjust your configure options. + */ + +#ifndef _LDAP_CONFIG_H +#define _LDAP_CONFIG_H + +/* directory separator */ +#ifndef LDAP_DIRSEP +#ifndef _WIN32 +#define LDAP_DIRSEP "/" +#else +#define LDAP_DIRSEP "\\" +#endif +#endif + +/* directory for temporary files */ +#if defined(_WIN32) +# define LDAP_TMPDIR "C:\\."
/* we don't have much of a choice */ +#elif defined( _P_tmpdir ) +# define LDAP_TMPDIR _P_tmpdir +#elif defined( P_tmpdir ) +# define LDAP_TMPDIR P_tmpdir +#elif defined( _PATH_TMPDIR ) +# define LDAP_TMPDIR _PATH_TMPDIR +#else +# define LDAP_TMPDIR LDAP_DIRSEP "tmp" +#endif + +/* directories */ +#ifndef LDAP_BINDIR +#define LDAP_BINDIR "/tmp/ldap-prefix/bin" +#endif +#ifndef LDAP_SBINDIR +#define LDAP_SBINDIR "/tmp/ldap-prefix/sbin" +#endif +#ifndef LDAP_DATADIR +#define LDAP_DATADIR "/tmp/ldap-prefix/share/openldap" +#endif +#ifndef LDAP_SYSCONFDIR +#define LDAP_SYSCONFDIR "/tmp/ldap-prefix/etc/openldap" +#endif +#ifndef LDAP_LIBEXECDIR +#define LDAP_LIBEXECDIR "/tmp/ldap-prefix/libexec" +#endif +#ifndef LDAP_MODULEDIR +#define LDAP_MODULEDIR "/tmp/ldap-prefix/libexec/openldap" +#endif +#ifndef LDAP_RUNDIR +#define LDAP_RUNDIR "/tmp/ldap-prefix/var" +#endif +#ifndef LDAP_LOCALEDIR +#define LDAP_LOCALEDIR "" +#endif + + +#endif /* _LDAP_CONFIG_H */ diff --git a/contrib/openldap-cmake/linux_ppc64le/include/ldap_features.h b/contrib/openldap-cmake/linux_ppc64le/include/ldap_features.h new file mode 100644 index 00000000000..f0cc7c3626f --- /dev/null +++ b/contrib/openldap-cmake/linux_ppc64le/include/ldap_features.h @@ -0,0 +1,61 @@ +/* include/ldap_features.h. Generated from ldap_features.hin by configure. */ +/* $OpenLDAP$ */ +/* This work is part of OpenLDAP Software . + * + * Copyright 1998-2020 The OpenLDAP Foundation. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted only as authorized by the OpenLDAP + * Public License. + * + * A copy of this license is available in file LICENSE in the + * top-level directory of the distribution or, alternatively, at + * . + */ + +/* + * LDAP Features + */ + +#ifndef _LDAP_FEATURES_H +#define _LDAP_FEATURES_H 1 + +/* OpenLDAP API version macros */ +#define LDAP_VENDOR_VERSION 20501 +#define LDAP_VENDOR_VERSION_MAJOR 2 +#define LDAP_VENDOR_VERSION_MINOR 5 +#define LDAP_VENDOR_VERSION_PATCH X + +/* +** WORK IN PROGRESS! +** +** OpenLDAP reentrancy/thread-safeness should be dynamically +** checked using ldap_get_option(). +** +** The -lldap implementation is not thread-safe. +** +** The -lldap_r implementation is: +** LDAP_API_FEATURE_THREAD_SAFE (basic thread safety) +** but also be: +** LDAP_API_FEATURE_SESSION_THREAD_SAFE +** LDAP_API_FEATURE_OPERATION_THREAD_SAFE +** +** The preprocessor flag LDAP_API_FEATURE_X_OPENLDAP_THREAD_SAFE +** can be used to determine if -lldap_r is available at compile +** time. You must define LDAP_THREAD_SAFE if and only if you +** link with -lldap_r. +** +** If you fail to define LDAP_THREAD_SAFE when linking with +** -lldap_r or define LDAP_THREAD_SAFE when linking with -lldap, +** provided header definitions and declarations may be incorrect. +** +*/ + +/* is -lldap_r available or not */ +#define LDAP_API_FEATURE_X_OPENLDAP_THREAD_SAFE 1 + +/* LDAP v2 Referrals */ +/* #undef LDAP_API_FEATURE_X_OPENLDAP_V2_REFERRALS */ + +#endif /* LDAP_FEATURES */ diff --git a/contrib/openldap-cmake/linux_ppc64le/include/portable.h b/contrib/openldap-cmake/linux_ppc64le/include/portable.h new file mode 100644 index 00000000000..2924b6713a4 --- /dev/null +++ b/contrib/openldap-cmake/linux_ppc64le/include/portable.h @@ -0,0 +1,1169 @@ +/* include/portable.h. Generated from portable.hin by configure. */ +/* include/portable.hin. Generated from configure.in by autoheader. */ + + +/* begin of portable.h.pre */ +/* This work is part of OpenLDAP Software . 
+ * + * Copyright 1998-2020 The OpenLDAP Foundation + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted only as authorized by the OpenLDAP + * Public License. + * + * A copy of this license is available in the file LICENSE in the + * top-level directory of the distribution or, alternatively, at + * . + */ + +#ifndef _LDAP_PORTABLE_H +#define _LDAP_PORTABLE_H + +/* define this if needed to get reentrant functions */ +#ifndef REENTRANT +#define REENTRANT 1 +#endif +#ifndef _REENTRANT +#define _REENTRANT 1 +#endif + +/* define this if needed to get threadsafe functions */ +#ifndef THREADSAFE +#define THREADSAFE 1 +#endif +#ifndef _THREADSAFE +#define _THREADSAFE 1 +#endif +#ifndef THREAD_SAFE +#define THREAD_SAFE 1 +#endif +#ifndef _THREAD_SAFE +#define _THREAD_SAFE 1 +#endif + +#ifndef _SGI_MP_SOURCE +#define _SGI_MP_SOURCE 1 +#endif + +/* end of portable.h.pre */ + + +/* Define if building universal (internal helper macro) */ +/* #undef AC_APPLE_UNIVERSAL_BUILD */ + +/* define to use both and */ +/* #undef BOTH_STRINGS_H */ + +/* define if cross compiling */ +/* #undef CROSS_COMPILING */ + +/* set to the number of arguments ctime_r() expects */ +#define CTIME_R_NARGS 2 + +/* define if toupper() requires islower() */ +/* #undef C_UPPER_LOWER */ + +/* define if sys_errlist is not declared in stdio.h or errno.h */ +/* #undef DECL_SYS_ERRLIST */ + +/* define to enable slapi library */ +/* #undef ENABLE_SLAPI */ + +/* defined to be the EXE extension */ +#define EXEEXT "" + +/* set to the number of arguments gethostbyaddr_r() expects */ +#define GETHOSTBYADDR_R_NARGS 8 + +/* set to the number of arguments gethostbyname_r() expects */ +#define GETHOSTBYNAME_R_NARGS 6 + +/* Define to 1 if `TIOCGWINSZ' requires . */ +#define GWINSZ_IN_SYS_IOCTL 1 + +/* define if you have AIX security lib */ +/* #undef HAVE_AIX_SECURITY */ + +/* Define to 1 if you have the header file. */ +#define HAVE_ARPA_INET_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_ARPA_NAMESER_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_ASSERT_H 1 + +/* Define to 1 if you have the `bcopy' function. */ +#define HAVE_BCOPY 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_BITS_TYPES_H 1 + +/* Define to 1 if you have the `chroot' function. */ +#define HAVE_CHROOT 1 + +/* Define to 1 if you have the `closesocket' function. */ +/* #undef HAVE_CLOSESOCKET */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_CONIO_H */ + +/* define if crypt(3) is available */ +/* #undef HAVE_CRYPT */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_CRYPT_H */ + +/* define if crypt_r() is also available */ +/* #undef HAVE_CRYPT_R */ + +/* Define to 1 if you have the `ctime_r' function. */ +#define HAVE_CTIME_R 1 + +/* define if you have Cyrus SASL */ +/* #undef HAVE_CYRUS_SASL */ + +/* define if your system supports /dev/poll */ +/* #undef HAVE_DEVPOLL */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_DIRECT_H */ + +/* Define to 1 if you have the header file, and it defines `DIR'. + */ +#define HAVE_DIRENT_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_DLFCN_H 1 + +/* Define to 1 if you don't have `vprintf' but do have `_doprnt.' */ +/* #undef HAVE_DOPRNT */ + +/* define if system uses EBCDIC instead of ASCII */ +/* #undef HAVE_EBCDIC */ + +/* Define to 1 if you have the `endgrent' function. 
*/ +#define HAVE_ENDGRENT 1 + +/* Define to 1 if you have the `endpwent' function. */ +#define HAVE_ENDPWENT 1 + +/* define if your system supports epoll */ +#define HAVE_EPOLL 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_ERRNO_H 1 + +/* Define to 1 if you have the `fcntl' function. */ +#define HAVE_FCNTL 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_FCNTL_H 1 + +/* define if you actually have FreeBSD fetch(3) */ +/* #undef HAVE_FETCH */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_FILIO_H */ + +/* Define to 1 if you have the `flock' function. */ +#define HAVE_FLOCK 1 + +/* Define to 1 if you have the `fstat' function. */ +#define HAVE_FSTAT 1 + +/* Define to 1 if you have the `gai_strerror' function. */ +#define HAVE_GAI_STRERROR 1 + +/* Define to 1 if you have the `getaddrinfo' function. */ +#define HAVE_GETADDRINFO 1 + +/* Define to 1 if you have the `getdtablesize' function. */ +#define HAVE_GETDTABLESIZE 1 + +/* Define to 1 if you have the `geteuid' function. */ +#define HAVE_GETEUID 1 + +/* Define to 1 if you have the `getgrgid' function. */ +#define HAVE_GETGRGID 1 + +/* Define to 1 if you have the `gethostbyaddr_r' function. */ +#define HAVE_GETHOSTBYADDR_R 1 + +/* Define to 1 if you have the `gethostbyname_r' function. */ +#define HAVE_GETHOSTBYNAME_R 1 + +/* Define to 1 if you have the `gethostname' function. */ +#define HAVE_GETHOSTNAME 1 + +/* Define to 1 if you have the `getnameinfo' function. */ +#define HAVE_GETNAMEINFO 1 + +/* Define to 1 if you have the `getopt' function. */ +#define HAVE_GETOPT 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_GETOPT_H 1 + +/* Define to 1 if you have the `getpassphrase' function. */ +/* #undef HAVE_GETPASSPHRASE */ + +/* Define to 1 if you have the `getpeereid' function. */ +/* #undef HAVE_GETPEEREID */ + +/* Define to 1 if you have the `getpeerucred' function. */ +/* #undef HAVE_GETPEERUCRED */ + +/* Define to 1 if you have the `getpwnam' function. */ +#define HAVE_GETPWNAM 1 + +/* Define to 1 if you have the `getpwuid' function. */ +#define HAVE_GETPWUID 1 + +/* Define to 1 if you have the `getspnam' function. */ +#define HAVE_GETSPNAM 1 + +/* Define to 1 if you have the `gettimeofday' function. */ +#define HAVE_GETTIMEOFDAY 1 + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_GMP_H */ + +/* Define to 1 if you have the `gmtime_r' function. */ +#define HAVE_GMTIME_R 1 + +/* define if you have GNUtls */ +/* #undef HAVE_GNUTLS */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_GNUTLS_GNUTLS_H */ + +/* if you have GNU Pth */ +/* #undef HAVE_GNU_PTH */ + +/* Define to 1 if you have the header file. */ +#define HAVE_GRP_H 1 + +/* Define to 1 if you have the `hstrerror' function. */ +#define HAVE_HSTRERROR 1 + +/* define to you inet_aton(3) is available */ +#define HAVE_INET_ATON 1 + +/* Define to 1 if you have the `inet_ntoa_b' function. */ +/* #undef HAVE_INET_NTOA_B */ + +/* Define to 1 if you have the `inet_ntop' function. */ +#define HAVE_INET_NTOP 1 + +/* Define to 1 if you have the `initgroups' function. */ +#define HAVE_INITGROUPS 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_INTTYPES_H 1 + +/* Define to 1 if you have the `ioctl' function. */ +#define HAVE_IOCTL 1 + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_IO_H */ + +/* define if your system supports kqueue */ +/* #undef HAVE_KQUEUE */ + +/* Define to 1 if you have the `gen' library (-lgen). 
*/ +/* #undef HAVE_LIBGEN */ + +/* Define to 1 if you have the `gmp' library (-lgmp). */ +/* #undef HAVE_LIBGMP */ + +/* Define to 1 if you have the `inet' library (-linet). */ +/* #undef HAVE_LIBINET */ + +/* define if you have libtool -ltdl */ +/* #undef HAVE_LIBLTDL */ + +/* Define to 1 if you have the `net' library (-lnet). */ +/* #undef HAVE_LIBNET */ + +/* Define to 1 if you have the `nsl' library (-lnsl). */ +/* #undef HAVE_LIBNSL */ + +/* Define to 1 if you have the `nsl_s' library (-lnsl_s). */ +/* #undef HAVE_LIBNSL_S */ + +/* Define to 1 if you have the `socket' library (-lsocket). */ +/* #undef HAVE_LIBSOCKET */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_LIBUTIL_H */ + +/* Define to 1 if you have the `V3' library (-lV3). */ +/* #undef HAVE_LIBV3 */ + +/* Define to 1 if you have the header file. */ +#define HAVE_LIMITS_H 1 + +/* if you have LinuxThreads */ +/* #undef HAVE_LINUX_THREADS */ + +/* Define to 1 if you have the header file. */ +#define HAVE_LOCALE_H 1 + +/* Define to 1 if you have the `localtime_r' function. */ +#define HAVE_LOCALTIME_R 1 + +/* Define to 1 if you have the `lockf' function. */ +#define HAVE_LOCKF 1 + +/* Define to 1 if the system has the type `long long'. */ +#define HAVE_LONG_LONG 1 + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_LTDL_H */ + +/* Define to 1 if you have the header file. */ +#define HAVE_MALLOC_H 1 + +/* Define to 1 if you have the `memcpy' function. */ +#define HAVE_MEMCPY 1 + +/* Define to 1 if you have the `memmove' function. */ +#define HAVE_MEMMOVE 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_MEMORY_H 1 + +/* Define to 1 if you have the `memrchr' function. */ +#define HAVE_MEMRCHR 1 + +/* Define to 1 if you have the `mkstemp' function. */ +#define HAVE_MKSTEMP 1 + +/* Define to 1 if you have the `mktemp' function. */ +#define HAVE_MKTEMP 1 + +/* define this if you have mkversion */ +#define HAVE_MKVERSION 1 + +/* Define to 1 if you have the header file, and it defines `DIR'. */ +/* #undef HAVE_NDIR_H */ + +/* Define to 1 if you have the header file. */ +#define HAVE_NETINET_TCP_H 1 + +/* define if strerror_r returns char* instead of int */ +/* #undef HAVE_NONPOSIX_STRERROR_R */ + +/* if you have NT Event Log */ +/* #undef HAVE_NT_EVENT_LOG */ + +/* if you have NT Service Manager */ +/* #undef HAVE_NT_SERVICE_MANAGER */ + +/* if you have NT Threads */ +/* #undef HAVE_NT_THREADS */ + +/* define if you have OpenSSL */ +#define HAVE_OPENSSL 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_OPENSSL_BN_H 1 + +/* define if you have OpenSSL with CRL checking capability */ +#define HAVE_OPENSSL_CRL 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_OPENSSL_CRYPTO_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_OPENSSL_SSL_H 1 + +/* Define to 1 if you have the `pipe' function. */ +#define HAVE_PIPE 1 + +/* Define to 1 if you have the `poll' function. */ +#define HAVE_POLL 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_POLL_H 1 + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_PROCESS_H */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_PSAP_H */ + +/* define to pthreads API spec revision */ +#define HAVE_PTHREADS 10 + +/* define if you have pthread_detach function */ +#define HAVE_PTHREAD_DETACH 1 + +/* Define to 1 if you have the `pthread_getconcurrency' function. */ +#define HAVE_PTHREAD_GETCONCURRENCY 1 + +/* Define to 1 if you have the header file. 
*/ +#define HAVE_PTHREAD_H 1 + +/* Define to 1 if you have the `pthread_kill' function. */ +#define HAVE_PTHREAD_KILL 1 + +/* Define to 1 if you have the `pthread_kill_other_threads_np' function. */ +/* #undef HAVE_PTHREAD_KILL_OTHER_THREADS_NP */ + +/* define if you have pthread_rwlock_destroy function */ +#define HAVE_PTHREAD_RWLOCK_DESTROY 1 + +/* Define to 1 if you have the `pthread_setconcurrency' function. */ +#define HAVE_PTHREAD_SETCONCURRENCY 1 + +/* Define to 1 if you have the `pthread_yield' function. */ +#define HAVE_PTHREAD_YIELD 1 + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_PTH_H */ + +/* Define to 1 if the system has the type `ptrdiff_t'. */ +#define HAVE_PTRDIFF_T 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_PWD_H 1 + +/* Define to 1 if you have the `read' function. */ +#define HAVE_READ 1 + +/* Define to 1 if you have the `recv' function. */ +#define HAVE_RECV 1 + +/* Define to 1 if you have the `recvfrom' function. */ +#define HAVE_RECVFROM 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_REGEX_H 1 + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_RESOLV_H */ + +/* define if you have res_query() */ +/* #undef HAVE_RES_QUERY */ + +/* define if OpenSSL needs RSAref */ +/* #undef HAVE_RSAREF */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_SASL_H */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_SASL_SASL_H */ + +/* define if your SASL library has sasl_version() */ +/* #undef HAVE_SASL_VERSION */ + +/* Define to 1 if you have the header file. */ +#define HAVE_SCHED_H 1 + +/* Define to 1 if you have the `sched_yield' function. */ +#define HAVE_SCHED_YIELD 1 + +/* Define to 1 if you have the `send' function. */ +#define HAVE_SEND 1 + +/* Define to 1 if you have the `sendmsg' function. */ +#define HAVE_SENDMSG 1 + +/* Define to 1 if you have the `sendto' function. */ +#define HAVE_SENDTO 1 + +/* Define to 1 if you have the `setegid' function. */ +#define HAVE_SETEGID 1 + +/* Define to 1 if you have the `seteuid' function. */ +#define HAVE_SETEUID 1 + +/* Define to 1 if you have the `setgid' function. */ +#define HAVE_SETGID 1 + +/* Define to 1 if you have the `setpwfile' function. */ +/* #undef HAVE_SETPWFILE */ + +/* Define to 1 if you have the `setsid' function. */ +#define HAVE_SETSID 1 + +/* Define to 1 if you have the `setuid' function. */ +#define HAVE_SETUID 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_SGTTY_H 1 + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_SHADOW_H */ + +/* Define to 1 if you have the `sigaction' function. */ +#define HAVE_SIGACTION 1 + +/* Define to 1 if you have the `signal' function. */ +#define HAVE_SIGNAL 1 + +/* Define to 1 if you have the `sigset' function. */ +#define HAVE_SIGSET 1 + +/* define if you have -lslp */ +/* #undef HAVE_SLP */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_SLP_H */ + +/* Define to 1 if you have the `snprintf' function. */ +#define HAVE_SNPRINTF 1 + +/* if you have spawnlp() */ +/* #undef HAVE_SPAWNLP */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_SQLEXT_H */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_SQL_H */ + +/* Define to 1 if you have the header file. */ +#define HAVE_STDDEF_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_STDINT_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_STDLIB_H 1 + +/* Define to 1 if you have the `strdup' function. 
*/ +#define HAVE_STRDUP 1 + +/* Define to 1 if you have the `strerror' function. */ +#define HAVE_STRERROR 1 + +/* Define to 1 if you have the `strerror_r' function. */ +#define HAVE_STRERROR_R 1 + +/* Define to 1 if you have the `strftime' function. */ +#define HAVE_STRFTIME 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_STRINGS_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_STRING_H 1 + +/* Define to 1 if you have the `strpbrk' function. */ +#define HAVE_STRPBRK 1 + +/* Define to 1 if you have the `strrchr' function. */ +#define HAVE_STRRCHR 1 + +/* Define to 1 if you have the `strsep' function. */ +#define HAVE_STRSEP 1 + +/* Define to 1 if you have the `strspn' function. */ +#define HAVE_STRSPN 1 + +/* Define to 1 if you have the `strstr' function. */ +#define HAVE_STRSTR 1 + +/* Define to 1 if you have the `strtol' function. */ +#define HAVE_STRTOL 1 + +/* Define to 1 if you have the `strtoll' function. */ +#define HAVE_STRTOLL 1 + +/* Define to 1 if you have the `strtoq' function. */ +#define HAVE_STRTOQ 1 + +/* Define to 1 if you have the `strtoul' function. */ +#define HAVE_STRTOUL 1 + +/* Define to 1 if you have the `strtoull' function. */ +#define HAVE_STRTOULL 1 + +/* Define to 1 if you have the `strtouq' function. */ +#define HAVE_STRTOUQ 1 + +/* Define to 1 if `msg_accrightslen' is a member of `struct msghdr'. */ +/* #undef HAVE_STRUCT_MSGHDR_MSG_ACCRIGHTSLEN */ + +/* Define to 1 if `msg_control' is a member of `struct msghdr'. */ +#define HAVE_STRUCT_MSGHDR_MSG_CONTROL 1 + +/* Define to 1 if `pw_gecos' is a member of `struct passwd'. */ +#define HAVE_STRUCT_PASSWD_PW_GECOS 1 + +/* Define to 1 if `pw_passwd' is a member of `struct passwd'. */ +#define HAVE_STRUCT_PASSWD_PW_PASSWD 1 + +/* Define to 1 if `st_blksize' is a member of `struct stat'. */ +#define HAVE_STRUCT_STAT_ST_BLKSIZE 1 + +/* Define to 1 if `st_fstype' is a member of `struct stat'. */ +/* #undef HAVE_STRUCT_STAT_ST_FSTYPE */ + +/* define to 1 if st_fstype is char * */ +/* #undef HAVE_STRUCT_STAT_ST_FSTYPE_CHAR */ + +/* define to 1 if st_fstype is int */ +/* #undef HAVE_STRUCT_STAT_ST_FSTYPE_INT */ + +/* Define to 1 if `st_vfstype' is a member of `struct stat'. */ +/* #undef HAVE_STRUCT_STAT_ST_VFSTYPE */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_SYNCH_H */ + +/* Define to 1 if you have the `sysconf' function. */ +#define HAVE_SYSCONF 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_SYSEXITS_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_SYSLOG_H 1 + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_SYS_DEVPOLL_H */ + +/* Define to 1 if you have the header file, and it defines `DIR'. + */ +/* #undef HAVE_SYS_DIR_H */ + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_EPOLL_H 1 + +/* define if you actually have sys_errlist in your libs */ +#define HAVE_SYS_ERRLIST 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_ERRNO_H 1 + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_SYS_EVENT_H */ + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_FILE_H 1 + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_SYS_FILIO_H */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_SYS_FSTYP_H */ + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_IOCTL_H 1 + +/* Define to 1 if you have the header file, and it defines `DIR'. + */ +/* #undef HAVE_SYS_NDIR_H */ + +/* Define to 1 if you have the header file. 
*/ +#define HAVE_SYS_PARAM_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_POLL_H 1 + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_SYS_PRIVGRP_H */ + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_RESOURCE_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_SELECT_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_SOCKET_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_STAT_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_SYSLOG_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_TIME_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_TYPES_H 1 + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_SYS_UCRED_H */ + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_UIO_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_UN_H 1 + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_SYS_UUID_H */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_SYS_VMOUNT_H */ + +/* Define to 1 if you have that is POSIX.1 compatible. */ +#define HAVE_SYS_WAIT_H 1 + +/* define if you have -lwrap */ +/* #undef HAVE_TCPD */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_TCPD_H */ + +/* Define to 1 if you have the header file. */ +#define HAVE_TERMIOS_H 1 + +/* if you have Solaris LWP (thr) package */ +/* #undef HAVE_THR */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_THREAD_H */ + +/* Define to 1 if you have the `thr_getconcurrency' function. */ +/* #undef HAVE_THR_GETCONCURRENCY */ + +/* Define to 1 if you have the `thr_setconcurrency' function. */ +/* #undef HAVE_THR_SETCONCURRENCY */ + +/* Define to 1 if you have the `thr_yield' function. */ +/* #undef HAVE_THR_YIELD */ + +/* define if you have TLS */ +#define HAVE_TLS 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_UNISTD_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_UTIME_H 1 + +/* define if you have uuid_generate() */ +/* #undef HAVE_UUID_GENERATE */ + +/* define if you have uuid_to_str() */ +/* #undef HAVE_UUID_TO_STR */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_UUID_UUID_H */ + +/* Define to 1 if you have the `vprintf' function. */ +#define HAVE_VPRINTF 1 + +/* Define to 1 if you have the `vsnprintf' function. */ +#define HAVE_VSNPRINTF 1 + +/* Define to 1 if you have the `wait4' function. */ +#define HAVE_WAIT4 1 + +/* Define to 1 if you have the `waitpid' function. */ +#define HAVE_WAITPID 1 + +/* define if you have winsock */ +/* #undef HAVE_WINSOCK */ + +/* define if you have winsock2 */ +/* #undef HAVE_WINSOCK2 */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_WINSOCK2_H */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_WINSOCK_H */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_WIREDTIGER_H */ + +/* Define to 1 if you have the `write' function. */ +#define HAVE_WRITE 1 + +/* define if select implicitly yields */ +#define HAVE_YIELDING_SELECT 1 + +/* Define to 1 if you have the `_vsnprintf' function. 
*/ +/* #undef HAVE__VSNPRINTF */ + +/* define to 32-bit or greater integer type */ +#define LBER_INT_T int + +/* define to large integer type */ +#define LBER_LEN_T long + +/* define to socket descriptor type */ +#define LBER_SOCKET_T int + +/* define to large integer type */ +#define LBER_TAG_T long + +/* define to 1 if library is thread safe */ +#define LDAP_API_FEATURE_X_OPENLDAP_THREAD_SAFE 1 + +/* define to LDAP VENDOR VERSION */ +/* #undef LDAP_API_FEATURE_X_OPENLDAP_V2_REFERRALS */ + +/* define this to add debugging code */ +/* #undef LDAP_DEBUG */ + +/* define if LDAP libs are dynamic */ +/* #undef LDAP_LIBS_DYNAMIC */ + +/* define to support PF_INET6 */ +#define LDAP_PF_INET6 1 + +/* define to support PF_LOCAL */ +#define LDAP_PF_LOCAL 1 + +/* define this to add SLAPI code */ +/* #undef LDAP_SLAPI */ + +/* define this to add syslog code */ +/* #undef LDAP_SYSLOG */ + +/* Version */ +#define LDAP_VENDOR_VERSION 20501 + +/* Major */ +#define LDAP_VENDOR_VERSION_MAJOR 2 + +/* Minor */ +#define LDAP_VENDOR_VERSION_MINOR 5 + +/* Patch */ +#define LDAP_VENDOR_VERSION_PATCH X + +/* Define to the sub-directory where libtool stores uninstalled libraries. */ +#define LT_OBJDIR ".libs/" + +/* define if memcmp is not 8-bit clean or is otherwise broken */ +/* #undef NEED_MEMCMP_REPLACEMENT */ + +/* define if you have (or want) no threads */ +/* #undef NO_THREADS */ + +/* define to use the original debug style */ +/* #undef OLD_DEBUG */ + +/* Package */ +#define OPENLDAP_PACKAGE "OpenLDAP" + +/* Version */ +#define OPENLDAP_VERSION "2.5.X" + +/* Define to the address where bug reports for this package should be sent. */ +#define PACKAGE_BUGREPORT "" + +/* Define to the full name of this package. */ +#define PACKAGE_NAME "" + +/* Define to the full name and version of this package. */ +#define PACKAGE_STRING "" + +/* Define to the one symbol short name of this package. */ +#define PACKAGE_TARNAME "" + +/* Define to the home page for this package. */ +#define PACKAGE_URL "" + +/* Define to the version of this package. */ +#define PACKAGE_VERSION "" + +/* define if sched_yield yields the entire process */ +/* #undef REPLACE_BROKEN_YIELD */ + +/* Define as the return type of signal handlers (`int' or `void'). */ +#define RETSIGTYPE void + +/* Define to the type of arg 1 for `select'. */ +#define SELECT_TYPE_ARG1 int + +/* Define to the type of args 2, 3 and 4 for `select'. */ +#define SELECT_TYPE_ARG234 (fd_set *) + +/* Define to the type of arg 5 for `select'. */ +#define SELECT_TYPE_ARG5 (struct timeval *) + +/* The size of `int', as computed by sizeof. */ +#define SIZEOF_INT 4 + +/* The size of `long', as computed by sizeof. */ +#define SIZEOF_LONG 8 + +/* The size of `long long', as computed by sizeof. */ +#define SIZEOF_LONG_LONG 8 + +/* The size of `short', as computed by sizeof. */ +#define SIZEOF_SHORT 2 + +/* The size of `wchar_t', as computed by sizeof. 
*/ +#define SIZEOF_WCHAR_T 4 + +/* define to support per-object ACIs */ +/* #undef SLAPD_ACI_ENABLED */ + +/* define to support LDAP Async Metadirectory backend */ +/* #undef SLAPD_ASYNCMETA */ + +/* define to support cleartext passwords */ +/* #undef SLAPD_CLEARTEXT */ + +/* define to support crypt(3) passwords */ +/* #undef SLAPD_CRYPT */ + +/* define to support DNS SRV backend */ +/* #undef SLAPD_DNSSRV */ + +/* define to support LDAP backend */ +/* #undef SLAPD_LDAP */ + +/* define to support MDB backend */ +/* #undef SLAPD_MDB */ + +/* define to support LDAP Metadirectory backend */ +/* #undef SLAPD_META */ + +/* define to support modules */ +/* #undef SLAPD_MODULES */ + +/* dynamically linked module */ +#define SLAPD_MOD_DYNAMIC 2 + +/* statically linked module */ +#define SLAPD_MOD_STATIC 1 + +/* define to support cn=Monitor backend */ +/* #undef SLAPD_MONITOR */ + +/* define to support NDB backend */ +/* #undef SLAPD_NDB */ + +/* define to support NULL backend */ +/* #undef SLAPD_NULL */ + +/* define for In-Directory Access Logging overlay */ +/* #undef SLAPD_OVER_ACCESSLOG */ + +/* define for Audit Logging overlay */ +/* #undef SLAPD_OVER_AUDITLOG */ + +/* define for Automatic Certificate Authority overlay */ +/* #undef SLAPD_OVER_AUTOCA */ + +/* define for Collect overlay */ +/* #undef SLAPD_OVER_COLLECT */ + +/* define for Attribute Constraint overlay */ +/* #undef SLAPD_OVER_CONSTRAINT */ + +/* define for Dynamic Directory Services overlay */ +/* #undef SLAPD_OVER_DDS */ + +/* define for Dynamic Directory Services overlay */ +/* #undef SLAPD_OVER_DEREF */ + +/* define for Dynamic Group overlay */ +/* #undef SLAPD_OVER_DYNGROUP */ + +/* define for Dynamic List overlay */ +/* #undef SLAPD_OVER_DYNLIST */ + +/* define for Reverse Group Membership overlay */ +/* #undef SLAPD_OVER_MEMBEROF */ + +/* define for Password Policy overlay */ +/* #undef SLAPD_OVER_PPOLICY */ + +/* define for Proxy Cache overlay */ +/* #undef SLAPD_OVER_PROXYCACHE */ + +/* define for Referential Integrity overlay */ +/* #undef SLAPD_OVER_REFINT */ + +/* define for Return Code overlay */ +/* #undef SLAPD_OVER_RETCODE */ + +/* define for Rewrite/Remap overlay */ +/* #undef SLAPD_OVER_RWM */ + +/* define for Sequential Modify overlay */ +/* #undef SLAPD_OVER_SEQMOD */ + +/* define for ServerSideSort/VLV overlay */ +/* #undef SLAPD_OVER_SSSVLV */ + +/* define for Syncrepl Provider overlay */ +/* #undef SLAPD_OVER_SYNCPROV */ + +/* define for Translucent Proxy overlay */ +/* #undef SLAPD_OVER_TRANSLUCENT */ + +/* define for Attribute Uniqueness overlay */ +/* #undef SLAPD_OVER_UNIQUE */ + +/* define for Value Sorting overlay */ +/* #undef SLAPD_OVER_VALSORT */ + +/* define to support PASSWD backend */ +/* #undef SLAPD_PASSWD */ + +/* define to support PERL backend */ +/* #undef SLAPD_PERL */ + +/* define to support relay backend */ +/* #undef SLAPD_RELAY */ + +/* define to support reverse lookups */ +/* #undef SLAPD_RLOOKUPS */ + +/* define to support SHELL backend */ +/* #undef SLAPD_SHELL */ + +/* define to support SOCK backend */ +/* #undef SLAPD_SOCK */ + +/* define to support SASL passwords */ +/* #undef SLAPD_SPASSWD */ + +/* define to support SQL backend */ +/* #undef SLAPD_SQL */ + +/* define to support WiredTiger backend */ +/* #undef SLAPD_WT */ + +/* define to support run-time loadable ACL */ +/* #undef SLAP_DYNACL */ + +/* Define to 1 if you have the ANSI C header files. */ +#define STDC_HEADERS 1 + +/* Define to 1 if you can safely include both and . 
*/ +#define TIME_WITH_SYS_TIME 1 + +/* Define to 1 if your <sys/time.h> declares `struct tm'. */ +/* #undef TM_IN_SYS_TIME */ + +/* set to urandom device */ +#define URANDOM_DEVICE "/dev/urandom" + +/* define to use OpenSSL BIGNUM for MP */ +/* #undef USE_MP_BIGNUM */ + +/* define to use GMP for MP */ +/* #undef USE_MP_GMP */ + +/* define to use 'long' for MP */ +/* #undef USE_MP_LONG */ + +/* define to use 'long long' for MP */ +/* #undef USE_MP_LONG_LONG */ + +/* Define WORDS_BIGENDIAN to 1 if your processor stores words with the most + significant byte first (like Motorola and SPARC, unlike Intel). */ +#if defined AC_APPLE_UNIVERSAL_BUILD +# if defined __BIG_ENDIAN__ +# define WORDS_BIGENDIAN 1 +# endif +#else +# ifndef WORDS_BIGENDIAN +/* # undef WORDS_BIGENDIAN */ +# endif +#endif + +/* Define to the type of arg 3 for `accept'. */ +#define ber_socklen_t socklen_t + +/* Define to `char *' if <sys/types.h> does not define. */ +/* #undef caddr_t */ + +/* Define to empty if `const' does not conform to ANSI C. */ +/* #undef const */ + +/* Define to `int' if <sys/types.h> doesn't define. */ +/* #undef gid_t */ + +/* Define to `int' if <sys/types.h> does not define. */ +/* #undef mode_t */ + +/* Define to `long' if <sys/types.h> does not define. */ +/* #undef off_t */ + +/* Define to `int' if <sys/types.h> does not define. */ +/* #undef pid_t */ + +/* Define to `int' if <signal.h> does not define. */ +/* #undef sig_atomic_t */ + +/* Define to `unsigned' if <sys/types.h> does not define. */ +/* #undef size_t */ + +/* define to snprintf routine */ +/* #undef snprintf */ + +/* Define like ber_socklen_t if <sys/socket.h> does not define. */ +/* #undef socklen_t */ + +/* Define to `signed int' if <sys/types.h> does not define. */ +/* #undef ssize_t */ + +/* Define to `int' if <sys/types.h> doesn't define. */ +/* #undef uid_t */ + +/* define as empty if volatile is not supported */ +/* #undef volatile */ + +/* define to snprintf routine */ +/* #undef vsnprintf */ + + +/* begin of portable.h.post */ + +#ifdef _WIN32 +/* don't suck in all of the win32 api */ +# define WIN32_LEAN_AND_MEAN 1 +#endif + +#ifndef LDAP_NEEDS_PROTOTYPES +/* force LDAP_P to always include prototypes */ +#define LDAP_NEEDS_PROTOTYPES 1 +#endif + +#ifndef LDAP_REL_ENG +#if (LDAP_VENDOR_VERSION == 000000) && !defined(LDAP_DEVEL) +#define LDAP_DEVEL +#endif +#if defined(LDAP_DEVEL) && !defined(LDAP_TEST) +#define LDAP_TEST +#endif +#endif + +#ifdef HAVE_STDDEF_H +# include <stddef.h> +#endif + +#ifdef HAVE_EBCDIC +/* ASCII/EBCDIC converting replacements for stdio funcs + * vsnprintf and snprintf are used too, but they are already + * checked by the configure script + */ +#define fputs ber_pvt_fputs +#define fgets ber_pvt_fgets +#define printf ber_pvt_printf +#define fprintf ber_pvt_fprintf +#define vfprintf ber_pvt_vfprintf +#define vsprintf ber_pvt_vsprintf +#endif + +#include "ac/fdset.h" + +#include "ldap_cdefs.h" +#include "ldap_features.h" + +#include "ac/assert.h" +#include "ac/localize.h" + +#endif /* _LDAP_PORTABLE_H */ +/* end of portable.h.post */ + diff --git a/contrib/poco b/contrib/poco index c55b91f394e..b7d9ec16ee3 160000 --- a/contrib/poco +++ b/contrib/poco @@ -1 +1 @@ -Subproject commit c55b91f394efa9c238c33957682501681ef9b716 +Subproject commit b7d9ec16ee33ca76643d5fcd907ea9a33285640a diff --git a/contrib/poco-cmake/CMakeLists.txt b/contrib/poco-cmake/CMakeLists.txt index 1d2dc7b873e..d173f35b9bf 100644 --- a/contrib/poco-cmake/CMakeLists.txt +++ b/contrib/poco-cmake/CMakeLists.txt @@ -1,4 +1,4 @@ -set (LIBRARY_DIR ${ClickHouse_SOURCE_DIR}/contrib/poco) +set (LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/poco") add_subdirectory (Crypto) add_subdirectory (Data) diff
--git a/contrib/poco-cmake/Crypto/CMakeLists.txt b/contrib/poco-cmake/Crypto/CMakeLists.txt index 1685e96728b..e93ed5cf17d 100644 --- a/contrib/poco-cmake/Crypto/CMakeLists.txt +++ b/contrib/poco-cmake/Crypto/CMakeLists.txt @@ -1,35 +1,35 @@ if (ENABLE_SSL) if (USE_INTERNAL_POCO_LIBRARY) set (SRCS - ${LIBRARY_DIR}/Crypto/src/Cipher.cpp - ${LIBRARY_DIR}/Crypto/src/CipherFactory.cpp - ${LIBRARY_DIR}/Crypto/src/CipherImpl.cpp - ${LIBRARY_DIR}/Crypto/src/CipherKey.cpp - ${LIBRARY_DIR}/Crypto/src/CipherKeyImpl.cpp - ${LIBRARY_DIR}/Crypto/src/CryptoException.cpp - ${LIBRARY_DIR}/Crypto/src/CryptoStream.cpp - ${LIBRARY_DIR}/Crypto/src/CryptoTransform.cpp - ${LIBRARY_DIR}/Crypto/src/DigestEngine.cpp - ${LIBRARY_DIR}/Crypto/src/ECDSADigestEngine.cpp - ${LIBRARY_DIR}/Crypto/src/ECKey.cpp - ${LIBRARY_DIR}/Crypto/src/ECKeyImpl.cpp - ${LIBRARY_DIR}/Crypto/src/EVPPKey.cpp - ${LIBRARY_DIR}/Crypto/src/KeyPair.cpp - ${LIBRARY_DIR}/Crypto/src/KeyPairImpl.cpp - ${LIBRARY_DIR}/Crypto/src/OpenSSLInitializer.cpp - ${LIBRARY_DIR}/Crypto/src/PKCS12Container.cpp - ${LIBRARY_DIR}/Crypto/src/RSACipherImpl.cpp - ${LIBRARY_DIR}/Crypto/src/RSADigestEngine.cpp - ${LIBRARY_DIR}/Crypto/src/RSAKey.cpp - ${LIBRARY_DIR}/Crypto/src/RSAKeyImpl.cpp - ${LIBRARY_DIR}/Crypto/src/X509Certificate.cpp + "${LIBRARY_DIR}/Crypto/src/Cipher.cpp" + "${LIBRARY_DIR}/Crypto/src/CipherFactory.cpp" + "${LIBRARY_DIR}/Crypto/src/CipherImpl.cpp" + "${LIBRARY_DIR}/Crypto/src/CipherKey.cpp" + "${LIBRARY_DIR}/Crypto/src/CipherKeyImpl.cpp" + "${LIBRARY_DIR}/Crypto/src/CryptoException.cpp" + "${LIBRARY_DIR}/Crypto/src/CryptoStream.cpp" + "${LIBRARY_DIR}/Crypto/src/CryptoTransform.cpp" + "${LIBRARY_DIR}/Crypto/src/DigestEngine.cpp" + "${LIBRARY_DIR}/Crypto/src/ECDSADigestEngine.cpp" + "${LIBRARY_DIR}/Crypto/src/ECKey.cpp" + "${LIBRARY_DIR}/Crypto/src/ECKeyImpl.cpp" + "${LIBRARY_DIR}/Crypto/src/EVPPKey.cpp" + "${LIBRARY_DIR}/Crypto/src/KeyPair.cpp" + "${LIBRARY_DIR}/Crypto/src/KeyPairImpl.cpp" + "${LIBRARY_DIR}/Crypto/src/OpenSSLInitializer.cpp" + "${LIBRARY_DIR}/Crypto/src/PKCS12Container.cpp" + "${LIBRARY_DIR}/Crypto/src/RSACipherImpl.cpp" + "${LIBRARY_DIR}/Crypto/src/RSADigestEngine.cpp" + "${LIBRARY_DIR}/Crypto/src/RSAKey.cpp" + "${LIBRARY_DIR}/Crypto/src/RSAKeyImpl.cpp" + "${LIBRARY_DIR}/Crypto/src/X509Certificate.cpp" ) add_library (_poco_crypto ${SRCS}) add_library (Poco::Crypto ALIAS _poco_crypto) target_compile_options (_poco_crypto PRIVATE -Wno-newline-eof) - target_include_directories (_poco_crypto SYSTEM PUBLIC ${LIBRARY_DIR}/Crypto/include) + target_include_directories (_poco_crypto SYSTEM PUBLIC "${LIBRARY_DIR}/Crypto/include") target_link_libraries (_poco_crypto PUBLIC Poco::Foundation ssl crypto) else () add_library (Poco::Crypto UNKNOWN IMPORTED GLOBAL) diff --git a/contrib/poco-cmake/Data/CMakeLists.txt b/contrib/poco-cmake/Data/CMakeLists.txt index 1c185df8961..4fdd755b45d 100644 --- a/contrib/poco-cmake/Data/CMakeLists.txt +++ b/contrib/poco-cmake/Data/CMakeLists.txt @@ -1,40 +1,40 @@ if (USE_INTERNAL_POCO_LIBRARY) set (SRCS - ${LIBRARY_DIR}/Data/src/AbstractBinder.cpp - ${LIBRARY_DIR}/Data/src/AbstractBinding.cpp - ${LIBRARY_DIR}/Data/src/AbstractExtraction.cpp - ${LIBRARY_DIR}/Data/src/AbstractExtractor.cpp - ${LIBRARY_DIR}/Data/src/AbstractPreparation.cpp - ${LIBRARY_DIR}/Data/src/AbstractPreparator.cpp - ${LIBRARY_DIR}/Data/src/ArchiveStrategy.cpp - ${LIBRARY_DIR}/Data/src/Bulk.cpp - ${LIBRARY_DIR}/Data/src/Connector.cpp - ${LIBRARY_DIR}/Data/src/DataException.cpp - ${LIBRARY_DIR}/Data/src/Date.cpp - 
${LIBRARY_DIR}/Data/src/DynamicLOB.cpp - ${LIBRARY_DIR}/Data/src/Limit.cpp - ${LIBRARY_DIR}/Data/src/MetaColumn.cpp - ${LIBRARY_DIR}/Data/src/PooledSessionHolder.cpp - ${LIBRARY_DIR}/Data/src/PooledSessionImpl.cpp - ${LIBRARY_DIR}/Data/src/Position.cpp - ${LIBRARY_DIR}/Data/src/Range.cpp - ${LIBRARY_DIR}/Data/src/RecordSet.cpp - ${LIBRARY_DIR}/Data/src/Row.cpp - ${LIBRARY_DIR}/Data/src/RowFilter.cpp - ${LIBRARY_DIR}/Data/src/RowFormatter.cpp - ${LIBRARY_DIR}/Data/src/RowIterator.cpp - ${LIBRARY_DIR}/Data/src/Session.cpp - ${LIBRARY_DIR}/Data/src/SessionFactory.cpp - ${LIBRARY_DIR}/Data/src/SessionImpl.cpp - ${LIBRARY_DIR}/Data/src/SessionPool.cpp - ${LIBRARY_DIR}/Data/src/SessionPoolContainer.cpp - ${LIBRARY_DIR}/Data/src/SimpleRowFormatter.cpp - ${LIBRARY_DIR}/Data/src/SQLChannel.cpp - ${LIBRARY_DIR}/Data/src/Statement.cpp - ${LIBRARY_DIR}/Data/src/StatementCreator.cpp - ${LIBRARY_DIR}/Data/src/StatementImpl.cpp - ${LIBRARY_DIR}/Data/src/Time.cpp - ${LIBRARY_DIR}/Data/src/Transaction.cpp + "${LIBRARY_DIR}/Data/src/AbstractBinder.cpp" + "${LIBRARY_DIR}/Data/src/AbstractBinding.cpp" + "${LIBRARY_DIR}/Data/src/AbstractExtraction.cpp" + "${LIBRARY_DIR}/Data/src/AbstractExtractor.cpp" + "${LIBRARY_DIR}/Data/src/AbstractPreparation.cpp" + "${LIBRARY_DIR}/Data/src/AbstractPreparator.cpp" + "${LIBRARY_DIR}/Data/src/ArchiveStrategy.cpp" + "${LIBRARY_DIR}/Data/src/Bulk.cpp" + "${LIBRARY_DIR}/Data/src/Connector.cpp" + "${LIBRARY_DIR}/Data/src/DataException.cpp" + "${LIBRARY_DIR}/Data/src/Date.cpp" + "${LIBRARY_DIR}/Data/src/DynamicLOB.cpp" + "${LIBRARY_DIR}/Data/src/Limit.cpp" + "${LIBRARY_DIR}/Data/src/MetaColumn.cpp" + "${LIBRARY_DIR}/Data/src/PooledSessionHolder.cpp" + "${LIBRARY_DIR}/Data/src/PooledSessionImpl.cpp" + "${LIBRARY_DIR}/Data/src/Position.cpp" + "${LIBRARY_DIR}/Data/src/Range.cpp" + "${LIBRARY_DIR}/Data/src/RecordSet.cpp" + "${LIBRARY_DIR}/Data/src/Row.cpp" + "${LIBRARY_DIR}/Data/src/RowFilter.cpp" + "${LIBRARY_DIR}/Data/src/RowFormatter.cpp" + "${LIBRARY_DIR}/Data/src/RowIterator.cpp" + "${LIBRARY_DIR}/Data/src/Session.cpp" + "${LIBRARY_DIR}/Data/src/SessionFactory.cpp" + "${LIBRARY_DIR}/Data/src/SessionImpl.cpp" + "${LIBRARY_DIR}/Data/src/SessionPool.cpp" + "${LIBRARY_DIR}/Data/src/SessionPoolContainer.cpp" + "${LIBRARY_DIR}/Data/src/SimpleRowFormatter.cpp" + "${LIBRARY_DIR}/Data/src/SQLChannel.cpp" + "${LIBRARY_DIR}/Data/src/Statement.cpp" + "${LIBRARY_DIR}/Data/src/StatementCreator.cpp" + "${LIBRARY_DIR}/Data/src/StatementImpl.cpp" + "${LIBRARY_DIR}/Data/src/Time.cpp" + "${LIBRARY_DIR}/Data/src/Transaction.cpp" ) add_library (_poco_data ${SRCS}) @@ -43,7 +43,7 @@ if (USE_INTERNAL_POCO_LIBRARY) if (COMPILER_GCC) target_compile_options (_poco_data PRIVATE -Wno-deprecated-copy) endif () - target_include_directories (_poco_data SYSTEM PUBLIC ${LIBRARY_DIR}/Data/include) + target_include_directories (_poco_data SYSTEM PUBLIC "${LIBRARY_DIR}/Data/include") target_link_libraries (_poco_data PUBLIC Poco::Foundation) else () # NOTE: don't know why, but the GLOBAL is required here. 
diff --git a/contrib/poco-cmake/Data/ODBC/CMakeLists.txt b/contrib/poco-cmake/Data/ODBC/CMakeLists.txt index cd7c5ef2863..a3561304541 100644 --- a/contrib/poco-cmake/Data/ODBC/CMakeLists.txt +++ b/contrib/poco-cmake/Data/ODBC/CMakeLists.txt @@ -5,27 +5,27 @@ if (ENABLE_ODBC) if (USE_INTERNAL_POCO_LIBRARY) set (SRCS - ${LIBRARY_DIR}/Data/ODBC/src/Binder.cpp - ${LIBRARY_DIR}/Data/ODBC/src/ConnectionHandle.cpp - ${LIBRARY_DIR}/Data/ODBC/src/Connector.cpp - ${LIBRARY_DIR}/Data/ODBC/src/EnvironmentHandle.cpp - ${LIBRARY_DIR}/Data/ODBC/src/Extractor.cpp - ${LIBRARY_DIR}/Data/ODBC/src/ODBCException.cpp - ${LIBRARY_DIR}/Data/ODBC/src/ODBCMetaColumn.cpp - ${LIBRARY_DIR}/Data/ODBC/src/ODBCStatementImpl.cpp - ${LIBRARY_DIR}/Data/ODBC/src/Parameter.cpp - ${LIBRARY_DIR}/Data/ODBC/src/Preparator.cpp - ${LIBRARY_DIR}/Data/ODBC/src/SessionImpl.cpp - ${LIBRARY_DIR}/Data/ODBC/src/TypeInfo.cpp - ${LIBRARY_DIR}/Data/ODBC/src/Unicode.cpp - ${LIBRARY_DIR}/Data/ODBC/src/Utility.cpp + "${LIBRARY_DIR}/Data/ODBC/src/Binder.cpp" + "${LIBRARY_DIR}/Data/ODBC/src/ConnectionHandle.cpp" + "${LIBRARY_DIR}/Data/ODBC/src/Connector.cpp" + "${LIBRARY_DIR}/Data/ODBC/src/EnvironmentHandle.cpp" + "${LIBRARY_DIR}/Data/ODBC/src/Extractor.cpp" + "${LIBRARY_DIR}/Data/ODBC/src/ODBCException.cpp" + "${LIBRARY_DIR}/Data/ODBC/src/ODBCMetaColumn.cpp" + "${LIBRARY_DIR}/Data/ODBC/src/ODBCStatementImpl.cpp" + "${LIBRARY_DIR}/Data/ODBC/src/Parameter.cpp" + "${LIBRARY_DIR}/Data/ODBC/src/Preparator.cpp" + "${LIBRARY_DIR}/Data/ODBC/src/SessionImpl.cpp" + "${LIBRARY_DIR}/Data/ODBC/src/TypeInfo.cpp" + "${LIBRARY_DIR}/Data/ODBC/src/Unicode.cpp" + "${LIBRARY_DIR}/Data/ODBC/src/Utility.cpp" ) add_library (_poco_data_odbc ${SRCS}) add_library (Poco::Data::ODBC ALIAS _poco_data_odbc) target_compile_options (_poco_data_odbc PRIVATE -Wno-unused-variable) - target_include_directories (_poco_data_odbc SYSTEM PUBLIC ${LIBRARY_DIR}/Data/ODBC/include) + target_include_directories (_poco_data_odbc SYSTEM PUBLIC "${LIBRARY_DIR}/Data/ODBC/include") target_link_libraries (_poco_data_odbc PUBLIC Poco::Data unixodbc) else () add_library (Poco::Data::ODBC UNKNOWN IMPORTED GLOBAL) diff --git a/contrib/poco-cmake/Foundation/CMakeLists.txt b/contrib/poco-cmake/Foundation/CMakeLists.txt index f4647461ec0..a9a4933873c 100644 --- a/contrib/poco-cmake/Foundation/CMakeLists.txt +++ b/contrib/poco-cmake/Foundation/CMakeLists.txt @@ -2,27 +2,27 @@ if (USE_INTERNAL_POCO_LIBRARY) # Foundation (pcre) set (SRCS_PCRE - ${LIBRARY_DIR}/Foundation/src/pcre_config.c - ${LIBRARY_DIR}/Foundation/src/pcre_byte_order.c - ${LIBRARY_DIR}/Foundation/src/pcre_chartables.c - ${LIBRARY_DIR}/Foundation/src/pcre_compile.c - ${LIBRARY_DIR}/Foundation/src/pcre_exec.c - ${LIBRARY_DIR}/Foundation/src/pcre_fullinfo.c - ${LIBRARY_DIR}/Foundation/src/pcre_globals.c - ${LIBRARY_DIR}/Foundation/src/pcre_maketables.c - ${LIBRARY_DIR}/Foundation/src/pcre_newline.c - ${LIBRARY_DIR}/Foundation/src/pcre_ord2utf8.c - ${LIBRARY_DIR}/Foundation/src/pcre_study.c - ${LIBRARY_DIR}/Foundation/src/pcre_tables.c - ${LIBRARY_DIR}/Foundation/src/pcre_dfa_exec.c - ${LIBRARY_DIR}/Foundation/src/pcre_get.c - ${LIBRARY_DIR}/Foundation/src/pcre_jit_compile.c - ${LIBRARY_DIR}/Foundation/src/pcre_refcount.c - ${LIBRARY_DIR}/Foundation/src/pcre_string_utils.c - ${LIBRARY_DIR}/Foundation/src/pcre_version.c - ${LIBRARY_DIR}/Foundation/src/pcre_ucd.c - ${LIBRARY_DIR}/Foundation/src/pcre_valid_utf8.c - ${LIBRARY_DIR}/Foundation/src/pcre_xclass.c + "${LIBRARY_DIR}/Foundation/src/pcre_config.c" + 
"${LIBRARY_DIR}/Foundation/src/pcre_byte_order.c" + "${LIBRARY_DIR}/Foundation/src/pcre_chartables.c" + "${LIBRARY_DIR}/Foundation/src/pcre_compile.c" + "${LIBRARY_DIR}/Foundation/src/pcre_exec.c" + "${LIBRARY_DIR}/Foundation/src/pcre_fullinfo.c" + "${LIBRARY_DIR}/Foundation/src/pcre_globals.c" + "${LIBRARY_DIR}/Foundation/src/pcre_maketables.c" + "${LIBRARY_DIR}/Foundation/src/pcre_newline.c" + "${LIBRARY_DIR}/Foundation/src/pcre_ord2utf8.c" + "${LIBRARY_DIR}/Foundation/src/pcre_study.c" + "${LIBRARY_DIR}/Foundation/src/pcre_tables.c" + "${LIBRARY_DIR}/Foundation/src/pcre_dfa_exec.c" + "${LIBRARY_DIR}/Foundation/src/pcre_get.c" + "${LIBRARY_DIR}/Foundation/src/pcre_jit_compile.c" + "${LIBRARY_DIR}/Foundation/src/pcre_refcount.c" + "${LIBRARY_DIR}/Foundation/src/pcre_string_utils.c" + "${LIBRARY_DIR}/Foundation/src/pcre_version.c" + "${LIBRARY_DIR}/Foundation/src/pcre_ucd.c" + "${LIBRARY_DIR}/Foundation/src/pcre_valid_utf8.c" + "${LIBRARY_DIR}/Foundation/src/pcre_xclass.c" ) add_library (_poco_foundation_pcre ${SRCS_PCRE}) @@ -33,159 +33,159 @@ if (USE_INTERNAL_POCO_LIBRARY) # Foundation set (SRCS - ${LIBRARY_DIR}/Foundation/src/AbstractObserver.cpp - ${LIBRARY_DIR}/Foundation/src/ActiveDispatcher.cpp - ${LIBRARY_DIR}/Foundation/src/ArchiveStrategy.cpp - ${LIBRARY_DIR}/Foundation/src/Ascii.cpp - ${LIBRARY_DIR}/Foundation/src/ASCIIEncoding.cpp - ${LIBRARY_DIR}/Foundation/src/AsyncChannel.cpp - ${LIBRARY_DIR}/Foundation/src/AtomicCounter.cpp - ${LIBRARY_DIR}/Foundation/src/Base32Decoder.cpp - ${LIBRARY_DIR}/Foundation/src/Base32Encoder.cpp - ${LIBRARY_DIR}/Foundation/src/Base64Decoder.cpp - ${LIBRARY_DIR}/Foundation/src/Base64Encoder.cpp - ${LIBRARY_DIR}/Foundation/src/BinaryReader.cpp - ${LIBRARY_DIR}/Foundation/src/BinaryWriter.cpp - ${LIBRARY_DIR}/Foundation/src/Bugcheck.cpp - ${LIBRARY_DIR}/Foundation/src/ByteOrder.cpp - ${LIBRARY_DIR}/Foundation/src/Channel.cpp - ${LIBRARY_DIR}/Foundation/src/Checksum.cpp - ${LIBRARY_DIR}/Foundation/src/Clock.cpp - ${LIBRARY_DIR}/Foundation/src/Condition.cpp - ${LIBRARY_DIR}/Foundation/src/Configurable.cpp - ${LIBRARY_DIR}/Foundation/src/ConsoleChannel.cpp - ${LIBRARY_DIR}/Foundation/src/CountingStream.cpp - ${LIBRARY_DIR}/Foundation/src/DateTime.cpp - ${LIBRARY_DIR}/Foundation/src/DateTimeFormat.cpp - ${LIBRARY_DIR}/Foundation/src/DateTimeFormatter.cpp - ${LIBRARY_DIR}/Foundation/src/DateTimeParser.cpp - ${LIBRARY_DIR}/Foundation/src/Debugger.cpp - ${LIBRARY_DIR}/Foundation/src/DeflatingStream.cpp - ${LIBRARY_DIR}/Foundation/src/DigestEngine.cpp - ${LIBRARY_DIR}/Foundation/src/DigestStream.cpp - ${LIBRARY_DIR}/Foundation/src/DirectoryIterator.cpp - ${LIBRARY_DIR}/Foundation/src/DirectoryIteratorStrategy.cpp - ${LIBRARY_DIR}/Foundation/src/DirectoryWatcher.cpp - ${LIBRARY_DIR}/Foundation/src/Environment.cpp - ${LIBRARY_DIR}/Foundation/src/Error.cpp - ${LIBRARY_DIR}/Foundation/src/ErrorHandler.cpp - ${LIBRARY_DIR}/Foundation/src/Event.cpp - ${LIBRARY_DIR}/Foundation/src/EventArgs.cpp - ${LIBRARY_DIR}/Foundation/src/EventChannel.cpp - ${LIBRARY_DIR}/Foundation/src/Exception.cpp - ${LIBRARY_DIR}/Foundation/src/FIFOBufferStream.cpp - ${LIBRARY_DIR}/Foundation/src/File.cpp - ${LIBRARY_DIR}/Foundation/src/FileChannel.cpp - ${LIBRARY_DIR}/Foundation/src/FileStream.cpp - ${LIBRARY_DIR}/Foundation/src/FileStreamFactory.cpp - ${LIBRARY_DIR}/Foundation/src/Format.cpp - ${LIBRARY_DIR}/Foundation/src/Formatter.cpp - ${LIBRARY_DIR}/Foundation/src/FormattingChannel.cpp - ${LIBRARY_DIR}/Foundation/src/FPEnvironment.cpp - ${LIBRARY_DIR}/Foundation/src/Glob.cpp - 
${LIBRARY_DIR}/Foundation/src/Hash.cpp - ${LIBRARY_DIR}/Foundation/src/HashStatistic.cpp - ${LIBRARY_DIR}/Foundation/src/HexBinaryDecoder.cpp - ${LIBRARY_DIR}/Foundation/src/HexBinaryEncoder.cpp - ${LIBRARY_DIR}/Foundation/src/InflatingStream.cpp - ${LIBRARY_DIR}/Foundation/src/JSONString.cpp - ${LIBRARY_DIR}/Foundation/src/Latin1Encoding.cpp - ${LIBRARY_DIR}/Foundation/src/Latin2Encoding.cpp - ${LIBRARY_DIR}/Foundation/src/Latin9Encoding.cpp - ${LIBRARY_DIR}/Foundation/src/LineEndingConverter.cpp - ${LIBRARY_DIR}/Foundation/src/LocalDateTime.cpp - ${LIBRARY_DIR}/Foundation/src/LogFile.cpp - ${LIBRARY_DIR}/Foundation/src/Logger.cpp - ${LIBRARY_DIR}/Foundation/src/LoggingFactory.cpp - ${LIBRARY_DIR}/Foundation/src/LoggingRegistry.cpp - ${LIBRARY_DIR}/Foundation/src/LogStream.cpp - ${LIBRARY_DIR}/Foundation/src/Manifest.cpp - ${LIBRARY_DIR}/Foundation/src/MD4Engine.cpp - ${LIBRARY_DIR}/Foundation/src/MD5Engine.cpp - ${LIBRARY_DIR}/Foundation/src/MemoryPool.cpp - ${LIBRARY_DIR}/Foundation/src/MemoryStream.cpp - ${LIBRARY_DIR}/Foundation/src/Message.cpp - ${LIBRARY_DIR}/Foundation/src/Mutex.cpp - ${LIBRARY_DIR}/Foundation/src/NamedEvent.cpp - ${LIBRARY_DIR}/Foundation/src/NamedMutex.cpp - ${LIBRARY_DIR}/Foundation/src/NestedDiagnosticContext.cpp - ${LIBRARY_DIR}/Foundation/src/Notification.cpp - ${LIBRARY_DIR}/Foundation/src/NotificationCenter.cpp - ${LIBRARY_DIR}/Foundation/src/NotificationQueue.cpp - ${LIBRARY_DIR}/Foundation/src/NullChannel.cpp - ${LIBRARY_DIR}/Foundation/src/NullStream.cpp - ${LIBRARY_DIR}/Foundation/src/NumberFormatter.cpp - ${LIBRARY_DIR}/Foundation/src/NumberParser.cpp - ${LIBRARY_DIR}/Foundation/src/NumericString.cpp - ${LIBRARY_DIR}/Foundation/src/Path.cpp - ${LIBRARY_DIR}/Foundation/src/PatternFormatter.cpp - ${LIBRARY_DIR}/Foundation/src/Pipe.cpp - ${LIBRARY_DIR}/Foundation/src/PipeImpl.cpp - ${LIBRARY_DIR}/Foundation/src/PipeStream.cpp - ${LIBRARY_DIR}/Foundation/src/PriorityNotificationQueue.cpp - ${LIBRARY_DIR}/Foundation/src/Process.cpp - ${LIBRARY_DIR}/Foundation/src/PurgeStrategy.cpp - ${LIBRARY_DIR}/Foundation/src/Random.cpp - ${LIBRARY_DIR}/Foundation/src/RandomStream.cpp - ${LIBRARY_DIR}/Foundation/src/RefCountedObject.cpp - ${LIBRARY_DIR}/Foundation/src/RegularExpression.cpp - ${LIBRARY_DIR}/Foundation/src/RotateStrategy.cpp - ${LIBRARY_DIR}/Foundation/src/Runnable.cpp - ${LIBRARY_DIR}/Foundation/src/RWLock.cpp - ${LIBRARY_DIR}/Foundation/src/Semaphore.cpp - ${LIBRARY_DIR}/Foundation/src/SHA1Engine.cpp - ${LIBRARY_DIR}/Foundation/src/SharedLibrary.cpp - ${LIBRARY_DIR}/Foundation/src/SharedMemory.cpp - ${LIBRARY_DIR}/Foundation/src/SignalHandler.cpp - ${LIBRARY_DIR}/Foundation/src/SimpleFileChannel.cpp - ${LIBRARY_DIR}/Foundation/src/SortedDirectoryIterator.cpp - ${LIBRARY_DIR}/Foundation/src/SplitterChannel.cpp - ${LIBRARY_DIR}/Foundation/src/Stopwatch.cpp - ${LIBRARY_DIR}/Foundation/src/StreamChannel.cpp - ${LIBRARY_DIR}/Foundation/src/StreamConverter.cpp - ${LIBRARY_DIR}/Foundation/src/StreamCopier.cpp - ${LIBRARY_DIR}/Foundation/src/StreamTokenizer.cpp - ${LIBRARY_DIR}/Foundation/src/String.cpp - ${LIBRARY_DIR}/Foundation/src/StringTokenizer.cpp - ${LIBRARY_DIR}/Foundation/src/SynchronizedObject.cpp - ${LIBRARY_DIR}/Foundation/src/SyslogChannel.cpp - ${LIBRARY_DIR}/Foundation/src/Task.cpp - ${LIBRARY_DIR}/Foundation/src/TaskManager.cpp - ${LIBRARY_DIR}/Foundation/src/TaskNotification.cpp - ${LIBRARY_DIR}/Foundation/src/TeeStream.cpp - ${LIBRARY_DIR}/Foundation/src/TemporaryFile.cpp - ${LIBRARY_DIR}/Foundation/src/TextBufferIterator.cpp - 
${LIBRARY_DIR}/Foundation/src/TextConverter.cpp - ${LIBRARY_DIR}/Foundation/src/TextEncoding.cpp - ${LIBRARY_DIR}/Foundation/src/TextIterator.cpp - ${LIBRARY_DIR}/Foundation/src/Thread.cpp - ${LIBRARY_DIR}/Foundation/src/ThreadLocal.cpp - ${LIBRARY_DIR}/Foundation/src/ThreadPool.cpp - ${LIBRARY_DIR}/Foundation/src/ThreadTarget.cpp - ${LIBRARY_DIR}/Foundation/src/TimedNotificationQueue.cpp - ${LIBRARY_DIR}/Foundation/src/Timer.cpp - ${LIBRARY_DIR}/Foundation/src/Timespan.cpp - ${LIBRARY_DIR}/Foundation/src/Timestamp.cpp - ${LIBRARY_DIR}/Foundation/src/Timezone.cpp - ${LIBRARY_DIR}/Foundation/src/Token.cpp - ${LIBRARY_DIR}/Foundation/src/Unicode.cpp - ${LIBRARY_DIR}/Foundation/src/UnicodeConverter.cpp - ${LIBRARY_DIR}/Foundation/src/URI.cpp - ${LIBRARY_DIR}/Foundation/src/URIStreamFactory.cpp - ${LIBRARY_DIR}/Foundation/src/URIStreamOpener.cpp - ${LIBRARY_DIR}/Foundation/src/UTF16Encoding.cpp - ${LIBRARY_DIR}/Foundation/src/UTF32Encoding.cpp - ${LIBRARY_DIR}/Foundation/src/UTF8Encoding.cpp - ${LIBRARY_DIR}/Foundation/src/UTF8String.cpp - ${LIBRARY_DIR}/Foundation/src/UUID.cpp - ${LIBRARY_DIR}/Foundation/src/UUIDGenerator.cpp - ${LIBRARY_DIR}/Foundation/src/Var.cpp - ${LIBRARY_DIR}/Foundation/src/VarHolder.cpp - ${LIBRARY_DIR}/Foundation/src/VarIterator.cpp - ${LIBRARY_DIR}/Foundation/src/Void.cpp - ${LIBRARY_DIR}/Foundation/src/Windows1250Encoding.cpp - ${LIBRARY_DIR}/Foundation/src/Windows1251Encoding.cpp - ${LIBRARY_DIR}/Foundation/src/Windows1252Encoding.cpp + "${LIBRARY_DIR}/Foundation/src/AbstractObserver.cpp" + "${LIBRARY_DIR}/Foundation/src/ActiveDispatcher.cpp" + "${LIBRARY_DIR}/Foundation/src/ArchiveStrategy.cpp" + "${LIBRARY_DIR}/Foundation/src/Ascii.cpp" + "${LIBRARY_DIR}/Foundation/src/ASCIIEncoding.cpp" + "${LIBRARY_DIR}/Foundation/src/AsyncChannel.cpp" + "${LIBRARY_DIR}/Foundation/src/AtomicCounter.cpp" + "${LIBRARY_DIR}/Foundation/src/Base32Decoder.cpp" + "${LIBRARY_DIR}/Foundation/src/Base32Encoder.cpp" + "${LIBRARY_DIR}/Foundation/src/Base64Decoder.cpp" + "${LIBRARY_DIR}/Foundation/src/Base64Encoder.cpp" + "${LIBRARY_DIR}/Foundation/src/BinaryReader.cpp" + "${LIBRARY_DIR}/Foundation/src/BinaryWriter.cpp" + "${LIBRARY_DIR}/Foundation/src/Bugcheck.cpp" + "${LIBRARY_DIR}/Foundation/src/ByteOrder.cpp" + "${LIBRARY_DIR}/Foundation/src/Channel.cpp" + "${LIBRARY_DIR}/Foundation/src/Checksum.cpp" + "${LIBRARY_DIR}/Foundation/src/Clock.cpp" + "${LIBRARY_DIR}/Foundation/src/Condition.cpp" + "${LIBRARY_DIR}/Foundation/src/Configurable.cpp" + "${LIBRARY_DIR}/Foundation/src/ConsoleChannel.cpp" + "${LIBRARY_DIR}/Foundation/src/CountingStream.cpp" + "${LIBRARY_DIR}/Foundation/src/DateTime.cpp" + "${LIBRARY_DIR}/Foundation/src/DateTimeFormat.cpp" + "${LIBRARY_DIR}/Foundation/src/DateTimeFormatter.cpp" + "${LIBRARY_DIR}/Foundation/src/DateTimeParser.cpp" + "${LIBRARY_DIR}/Foundation/src/Debugger.cpp" + "${LIBRARY_DIR}/Foundation/src/DeflatingStream.cpp" + "${LIBRARY_DIR}/Foundation/src/DigestEngine.cpp" + "${LIBRARY_DIR}/Foundation/src/DigestStream.cpp" + "${LIBRARY_DIR}/Foundation/src/DirectoryIterator.cpp" + "${LIBRARY_DIR}/Foundation/src/DirectoryIteratorStrategy.cpp" + "${LIBRARY_DIR}/Foundation/src/DirectoryWatcher.cpp" + "${LIBRARY_DIR}/Foundation/src/Environment.cpp" + "${LIBRARY_DIR}/Foundation/src/Error.cpp" + "${LIBRARY_DIR}/Foundation/src/ErrorHandler.cpp" + "${LIBRARY_DIR}/Foundation/src/Event.cpp" + "${LIBRARY_DIR}/Foundation/src/EventArgs.cpp" + "${LIBRARY_DIR}/Foundation/src/EventChannel.cpp" + "${LIBRARY_DIR}/Foundation/src/Exception.cpp" + 
"${LIBRARY_DIR}/Foundation/src/FIFOBufferStream.cpp" + "${LIBRARY_DIR}/Foundation/src/File.cpp" + "${LIBRARY_DIR}/Foundation/src/FileChannel.cpp" + "${LIBRARY_DIR}/Foundation/src/FileStream.cpp" + "${LIBRARY_DIR}/Foundation/src/FileStreamFactory.cpp" + "${LIBRARY_DIR}/Foundation/src/Format.cpp" + "${LIBRARY_DIR}/Foundation/src/Formatter.cpp" + "${LIBRARY_DIR}/Foundation/src/FormattingChannel.cpp" + "${LIBRARY_DIR}/Foundation/src/FPEnvironment.cpp" + "${LIBRARY_DIR}/Foundation/src/Glob.cpp" + "${LIBRARY_DIR}/Foundation/src/Hash.cpp" + "${LIBRARY_DIR}/Foundation/src/HashStatistic.cpp" + "${LIBRARY_DIR}/Foundation/src/HexBinaryDecoder.cpp" + "${LIBRARY_DIR}/Foundation/src/HexBinaryEncoder.cpp" + "${LIBRARY_DIR}/Foundation/src/InflatingStream.cpp" + "${LIBRARY_DIR}/Foundation/src/JSONString.cpp" + "${LIBRARY_DIR}/Foundation/src/Latin1Encoding.cpp" + "${LIBRARY_DIR}/Foundation/src/Latin2Encoding.cpp" + "${LIBRARY_DIR}/Foundation/src/Latin9Encoding.cpp" + "${LIBRARY_DIR}/Foundation/src/LineEndingConverter.cpp" + "${LIBRARY_DIR}/Foundation/src/LocalDateTime.cpp" + "${LIBRARY_DIR}/Foundation/src/LogFile.cpp" + "${LIBRARY_DIR}/Foundation/src/Logger.cpp" + "${LIBRARY_DIR}/Foundation/src/LoggingFactory.cpp" + "${LIBRARY_DIR}/Foundation/src/LoggingRegistry.cpp" + "${LIBRARY_DIR}/Foundation/src/LogStream.cpp" + "${LIBRARY_DIR}/Foundation/src/Manifest.cpp" + "${LIBRARY_DIR}/Foundation/src/MD4Engine.cpp" + "${LIBRARY_DIR}/Foundation/src/MD5Engine.cpp" + "${LIBRARY_DIR}/Foundation/src/MemoryPool.cpp" + "${LIBRARY_DIR}/Foundation/src/MemoryStream.cpp" + "${LIBRARY_DIR}/Foundation/src/Message.cpp" + "${LIBRARY_DIR}/Foundation/src/Mutex.cpp" + "${LIBRARY_DIR}/Foundation/src/NamedEvent.cpp" + "${LIBRARY_DIR}/Foundation/src/NamedMutex.cpp" + "${LIBRARY_DIR}/Foundation/src/NestedDiagnosticContext.cpp" + "${LIBRARY_DIR}/Foundation/src/Notification.cpp" + "${LIBRARY_DIR}/Foundation/src/NotificationCenter.cpp" + "${LIBRARY_DIR}/Foundation/src/NotificationQueue.cpp" + "${LIBRARY_DIR}/Foundation/src/NullChannel.cpp" + "${LIBRARY_DIR}/Foundation/src/NullStream.cpp" + "${LIBRARY_DIR}/Foundation/src/NumberFormatter.cpp" + "${LIBRARY_DIR}/Foundation/src/NumberParser.cpp" + "${LIBRARY_DIR}/Foundation/src/NumericString.cpp" + "${LIBRARY_DIR}/Foundation/src/Path.cpp" + "${LIBRARY_DIR}/Foundation/src/PatternFormatter.cpp" + "${LIBRARY_DIR}/Foundation/src/Pipe.cpp" + "${LIBRARY_DIR}/Foundation/src/PipeImpl.cpp" + "${LIBRARY_DIR}/Foundation/src/PipeStream.cpp" + "${LIBRARY_DIR}/Foundation/src/PriorityNotificationQueue.cpp" + "${LIBRARY_DIR}/Foundation/src/Process.cpp" + "${LIBRARY_DIR}/Foundation/src/PurgeStrategy.cpp" + "${LIBRARY_DIR}/Foundation/src/Random.cpp" + "${LIBRARY_DIR}/Foundation/src/RandomStream.cpp" + "${LIBRARY_DIR}/Foundation/src/RefCountedObject.cpp" + "${LIBRARY_DIR}/Foundation/src/RegularExpression.cpp" + "${LIBRARY_DIR}/Foundation/src/RotateStrategy.cpp" + "${LIBRARY_DIR}/Foundation/src/Runnable.cpp" + "${LIBRARY_DIR}/Foundation/src/RWLock.cpp" + "${LIBRARY_DIR}/Foundation/src/Semaphore.cpp" + "${LIBRARY_DIR}/Foundation/src/SHA1Engine.cpp" + "${LIBRARY_DIR}/Foundation/src/SharedLibrary.cpp" + "${LIBRARY_DIR}/Foundation/src/SharedMemory.cpp" + "${LIBRARY_DIR}/Foundation/src/SignalHandler.cpp" + "${LIBRARY_DIR}/Foundation/src/SimpleFileChannel.cpp" + "${LIBRARY_DIR}/Foundation/src/SortedDirectoryIterator.cpp" + "${LIBRARY_DIR}/Foundation/src/SplitterChannel.cpp" + "${LIBRARY_DIR}/Foundation/src/Stopwatch.cpp" + "${LIBRARY_DIR}/Foundation/src/StreamChannel.cpp" + 
"${LIBRARY_DIR}/Foundation/src/StreamConverter.cpp" + "${LIBRARY_DIR}/Foundation/src/StreamCopier.cpp" + "${LIBRARY_DIR}/Foundation/src/StreamTokenizer.cpp" + "${LIBRARY_DIR}/Foundation/src/String.cpp" + "${LIBRARY_DIR}/Foundation/src/StringTokenizer.cpp" + "${LIBRARY_DIR}/Foundation/src/SynchronizedObject.cpp" + "${LIBRARY_DIR}/Foundation/src/SyslogChannel.cpp" + "${LIBRARY_DIR}/Foundation/src/Task.cpp" + "${LIBRARY_DIR}/Foundation/src/TaskManager.cpp" + "${LIBRARY_DIR}/Foundation/src/TaskNotification.cpp" + "${LIBRARY_DIR}/Foundation/src/TeeStream.cpp" + "${LIBRARY_DIR}/Foundation/src/TemporaryFile.cpp" + "${LIBRARY_DIR}/Foundation/src/TextBufferIterator.cpp" + "${LIBRARY_DIR}/Foundation/src/TextConverter.cpp" + "${LIBRARY_DIR}/Foundation/src/TextEncoding.cpp" + "${LIBRARY_DIR}/Foundation/src/TextIterator.cpp" + "${LIBRARY_DIR}/Foundation/src/Thread.cpp" + "${LIBRARY_DIR}/Foundation/src/ThreadLocal.cpp" + "${LIBRARY_DIR}/Foundation/src/ThreadPool.cpp" + "${LIBRARY_DIR}/Foundation/src/ThreadTarget.cpp" + "${LIBRARY_DIR}/Foundation/src/TimedNotificationQueue.cpp" + "${LIBRARY_DIR}/Foundation/src/Timer.cpp" + "${LIBRARY_DIR}/Foundation/src/Timespan.cpp" + "${LIBRARY_DIR}/Foundation/src/Timestamp.cpp" + "${LIBRARY_DIR}/Foundation/src/Timezone.cpp" + "${LIBRARY_DIR}/Foundation/src/Token.cpp" + "${LIBRARY_DIR}/Foundation/src/Unicode.cpp" + "${LIBRARY_DIR}/Foundation/src/UnicodeConverter.cpp" + "${LIBRARY_DIR}/Foundation/src/URI.cpp" + "${LIBRARY_DIR}/Foundation/src/URIStreamFactory.cpp" + "${LIBRARY_DIR}/Foundation/src/URIStreamOpener.cpp" + "${LIBRARY_DIR}/Foundation/src/UTF16Encoding.cpp" + "${LIBRARY_DIR}/Foundation/src/UTF32Encoding.cpp" + "${LIBRARY_DIR}/Foundation/src/UTF8Encoding.cpp" + "${LIBRARY_DIR}/Foundation/src/UTF8String.cpp" + "${LIBRARY_DIR}/Foundation/src/UUID.cpp" + "${LIBRARY_DIR}/Foundation/src/UUIDGenerator.cpp" + "${LIBRARY_DIR}/Foundation/src/Var.cpp" + "${LIBRARY_DIR}/Foundation/src/VarHolder.cpp" + "${LIBRARY_DIR}/Foundation/src/VarIterator.cpp" + "${LIBRARY_DIR}/Foundation/src/Void.cpp" + "${LIBRARY_DIR}/Foundation/src/Windows1250Encoding.cpp" + "${LIBRARY_DIR}/Foundation/src/Windows1251Encoding.cpp" + "${LIBRARY_DIR}/Foundation/src/Windows1252Encoding.cpp" ) add_library (_poco_foundation ${SRCS}) @@ -221,7 +221,7 @@ if (USE_INTERNAL_POCO_LIBRARY) POCO_ENABLE_CPP11 POCO_OS_FAMILY_UNIX ) - target_include_directories (_poco_foundation SYSTEM PUBLIC ${LIBRARY_DIR}/Foundation/include) + target_include_directories (_poco_foundation SYSTEM PUBLIC "${LIBRARY_DIR}/Foundation/include") target_link_libraries (_poco_foundation PRIVATE Poco::Foundation::PCRE ${ZLIB_LIBRARIES}) else () add_library (Poco::Foundation UNKNOWN IMPORTED GLOBAL) @@ -233,3 +233,10 @@ else () message (STATUS "Using Poco::Foundation: ${LIBRARY_POCO_FOUNDATION} ${INCLUDE_POCO_FOUNDATION}") endif () + +if(OS_DARWIN AND ARCH_AARCH64) + target_compile_definitions (_poco_foundation + PRIVATE + POCO_NO_STAT64 + ) +endif() diff --git a/contrib/poco-cmake/JSON/CMakeLists.txt b/contrib/poco-cmake/JSON/CMakeLists.txt index 89054cf225d..7033b800d5d 100644 --- a/contrib/poco-cmake/JSON/CMakeLists.txt +++ b/contrib/poco-cmake/JSON/CMakeLists.txt @@ -2,7 +2,7 @@ if (USE_INTERNAL_POCO_LIBRARY) # Poco::JSON (pdjson) set (SRCS_PDJSON - ${LIBRARY_DIR}/JSON/src/pdjson.c + "${LIBRARY_DIR}/JSON/src/pdjson.c" ) add_library (_poco_json_pdjson ${SRCS_PDJSON}) @@ -11,24 +11,24 @@ if (USE_INTERNAL_POCO_LIBRARY) # Poco::JSON set (SRCS - ${LIBRARY_DIR}/JSON/src/Array.cpp - ${LIBRARY_DIR}/JSON/src/Handler.cpp - 
${LIBRARY_DIR}/JSON/src/JSONException.cpp - ${LIBRARY_DIR}/JSON/src/Object.cpp - ${LIBRARY_DIR}/JSON/src/ParseHandler.cpp - ${LIBRARY_DIR}/JSON/src/Parser.cpp - ${LIBRARY_DIR}/JSON/src/ParserImpl.cpp - ${LIBRARY_DIR}/JSON/src/PrintHandler.cpp - ${LIBRARY_DIR}/JSON/src/Query.cpp - ${LIBRARY_DIR}/JSON/src/Stringifier.cpp - ${LIBRARY_DIR}/JSON/src/Template.cpp - ${LIBRARY_DIR}/JSON/src/TemplateCache.cpp + "${LIBRARY_DIR}/JSON/src/Array.cpp" + "${LIBRARY_DIR}/JSON/src/Handler.cpp" + "${LIBRARY_DIR}/JSON/src/JSONException.cpp" + "${LIBRARY_DIR}/JSON/src/Object.cpp" + "${LIBRARY_DIR}/JSON/src/ParseHandler.cpp" + "${LIBRARY_DIR}/JSON/src/Parser.cpp" + "${LIBRARY_DIR}/JSON/src/ParserImpl.cpp" + "${LIBRARY_DIR}/JSON/src/PrintHandler.cpp" + "${LIBRARY_DIR}/JSON/src/Query.cpp" + "${LIBRARY_DIR}/JSON/src/Stringifier.cpp" + "${LIBRARY_DIR}/JSON/src/Template.cpp" + "${LIBRARY_DIR}/JSON/src/TemplateCache.cpp" ) add_library (_poco_json ${SRCS}) add_library (Poco::JSON ALIAS _poco_json) - target_include_directories (_poco_json SYSTEM PUBLIC ${LIBRARY_DIR}/JSON/include) + target_include_directories (_poco_json SYSTEM PUBLIC "${LIBRARY_DIR}/JSON/include") target_link_libraries (_poco_json PUBLIC Poco::Foundation Poco::JSON::Pdjson) else () add_library (Poco::JSON UNKNOWN IMPORTED GLOBAL) diff --git a/contrib/poco-cmake/MongoDB/CMakeLists.txt b/contrib/poco-cmake/MongoDB/CMakeLists.txt index 0d79f680a64..e3dce7ac5cd 100644 --- a/contrib/poco-cmake/MongoDB/CMakeLists.txt +++ b/contrib/poco-cmake/MongoDB/CMakeLists.txt @@ -1,32 +1,32 @@ if (USE_INTERNAL_POCO_LIBRARY) set (SRCS - ${LIBRARY_DIR}/MongoDB/src/Array.cpp - ${LIBRARY_DIR}/MongoDB/src/Binary.cpp - ${LIBRARY_DIR}/MongoDB/src/Connection.cpp - ${LIBRARY_DIR}/MongoDB/src/Cursor.cpp - ${LIBRARY_DIR}/MongoDB/src/Database.cpp - ${LIBRARY_DIR}/MongoDB/src/DeleteRequest.cpp - ${LIBRARY_DIR}/MongoDB/src/Document.cpp - ${LIBRARY_DIR}/MongoDB/src/Element.cpp - ${LIBRARY_DIR}/MongoDB/src/GetMoreRequest.cpp - ${LIBRARY_DIR}/MongoDB/src/InsertRequest.cpp - ${LIBRARY_DIR}/MongoDB/src/JavaScriptCode.cpp - ${LIBRARY_DIR}/MongoDB/src/KillCursorsRequest.cpp - ${LIBRARY_DIR}/MongoDB/src/Message.cpp - ${LIBRARY_DIR}/MongoDB/src/MessageHeader.cpp - ${LIBRARY_DIR}/MongoDB/src/ObjectId.cpp - ${LIBRARY_DIR}/MongoDB/src/QueryRequest.cpp - ${LIBRARY_DIR}/MongoDB/src/RegularExpression.cpp - ${LIBRARY_DIR}/MongoDB/src/ReplicaSet.cpp - ${LIBRARY_DIR}/MongoDB/src/RequestMessage.cpp - ${LIBRARY_DIR}/MongoDB/src/ResponseMessage.cpp - ${LIBRARY_DIR}/MongoDB/src/UpdateRequest.cpp + "${LIBRARY_DIR}/MongoDB/src/Array.cpp" + "${LIBRARY_DIR}/MongoDB/src/Binary.cpp" + "${LIBRARY_DIR}/MongoDB/src/Connection.cpp" + "${LIBRARY_DIR}/MongoDB/src/Cursor.cpp" + "${LIBRARY_DIR}/MongoDB/src/Database.cpp" + "${LIBRARY_DIR}/MongoDB/src/DeleteRequest.cpp" + "${LIBRARY_DIR}/MongoDB/src/Document.cpp" + "${LIBRARY_DIR}/MongoDB/src/Element.cpp" + "${LIBRARY_DIR}/MongoDB/src/GetMoreRequest.cpp" + "${LIBRARY_DIR}/MongoDB/src/InsertRequest.cpp" + "${LIBRARY_DIR}/MongoDB/src/JavaScriptCode.cpp" + "${LIBRARY_DIR}/MongoDB/src/KillCursorsRequest.cpp" + "${LIBRARY_DIR}/MongoDB/src/Message.cpp" + "${LIBRARY_DIR}/MongoDB/src/MessageHeader.cpp" + "${LIBRARY_DIR}/MongoDB/src/ObjectId.cpp" + "${LIBRARY_DIR}/MongoDB/src/QueryRequest.cpp" + "${LIBRARY_DIR}/MongoDB/src/RegularExpression.cpp" + "${LIBRARY_DIR}/MongoDB/src/ReplicaSet.cpp" + "${LIBRARY_DIR}/MongoDB/src/RequestMessage.cpp" + "${LIBRARY_DIR}/MongoDB/src/ResponseMessage.cpp" + "${LIBRARY_DIR}/MongoDB/src/UpdateRequest.cpp" ) add_library (_poco_mongodb ${SRCS}) 
add_library (Poco::MongoDB ALIAS _poco_mongodb) - target_include_directories (_poco_mongodb SYSTEM PUBLIC ${LIBRARY_DIR}/MongoDB/include) + target_include_directories (_poco_mongodb SYSTEM PUBLIC "${LIBRARY_DIR}/MongoDB/include") target_link_libraries (_poco_mongodb PUBLIC Poco::Net) else () add_library (Poco::MongoDB UNKNOWN IMPORTED GLOBAL) diff --git a/contrib/poco-cmake/Net/CMakeLists.txt b/contrib/poco-cmake/Net/CMakeLists.txt index 9bc06e52e05..45989af8d45 100644 --- a/contrib/poco-cmake/Net/CMakeLists.txt +++ b/contrib/poco-cmake/Net/CMakeLists.txt @@ -1,105 +1,105 @@ if (USE_INTERNAL_POCO_LIBRARY) set (SRCS - ${LIBRARY_DIR}/Net/src/AbstractHTTPRequestHandler.cpp - ${LIBRARY_DIR}/Net/src/DatagramSocket.cpp - ${LIBRARY_DIR}/Net/src/DatagramSocketImpl.cpp - ${LIBRARY_DIR}/Net/src/DialogSocket.cpp - ${LIBRARY_DIR}/Net/src/DNS.cpp - ${LIBRARY_DIR}/Net/src/FilePartSource.cpp - ${LIBRARY_DIR}/Net/src/FTPClientSession.cpp - ${LIBRARY_DIR}/Net/src/FTPStreamFactory.cpp - ${LIBRARY_DIR}/Net/src/HostEntry.cpp - ${LIBRARY_DIR}/Net/src/HTMLForm.cpp - ${LIBRARY_DIR}/Net/src/HTTPAuthenticationParams.cpp - ${LIBRARY_DIR}/Net/src/HTTPBasicCredentials.cpp - ${LIBRARY_DIR}/Net/src/HTTPBufferAllocator.cpp - ${LIBRARY_DIR}/Net/src/HTTPChunkedStream.cpp - ${LIBRARY_DIR}/Net/src/HTTPClientSession.cpp - ${LIBRARY_DIR}/Net/src/HTTPCookie.cpp - ${LIBRARY_DIR}/Net/src/HTTPCredentials.cpp - ${LIBRARY_DIR}/Net/src/HTTPDigestCredentials.cpp - ${LIBRARY_DIR}/Net/src/HTTPFixedLengthStream.cpp - ${LIBRARY_DIR}/Net/src/HTTPHeaderStream.cpp - ${LIBRARY_DIR}/Net/src/HTTPIOStream.cpp - ${LIBRARY_DIR}/Net/src/HTTPMessage.cpp - ${LIBRARY_DIR}/Net/src/HTTPRequest.cpp - ${LIBRARY_DIR}/Net/src/HTTPRequestHandler.cpp - ${LIBRARY_DIR}/Net/src/HTTPRequestHandlerFactory.cpp - ${LIBRARY_DIR}/Net/src/HTTPResponse.cpp - ${LIBRARY_DIR}/Net/src/HTTPServer.cpp - ${LIBRARY_DIR}/Net/src/HTTPServerConnection.cpp - ${LIBRARY_DIR}/Net/src/HTTPServerConnectionFactory.cpp - ${LIBRARY_DIR}/Net/src/HTTPServerParams.cpp - ${LIBRARY_DIR}/Net/src/HTTPServerRequest.cpp - ${LIBRARY_DIR}/Net/src/HTTPServerRequestImpl.cpp - ${LIBRARY_DIR}/Net/src/HTTPServerResponse.cpp - ${LIBRARY_DIR}/Net/src/HTTPServerResponseImpl.cpp - ${LIBRARY_DIR}/Net/src/HTTPServerSession.cpp - ${LIBRARY_DIR}/Net/src/HTTPSession.cpp - ${LIBRARY_DIR}/Net/src/HTTPSessionFactory.cpp - ${LIBRARY_DIR}/Net/src/HTTPSessionInstantiator.cpp - ${LIBRARY_DIR}/Net/src/HTTPStream.cpp - ${LIBRARY_DIR}/Net/src/HTTPStreamFactory.cpp - ${LIBRARY_DIR}/Net/src/ICMPClient.cpp - ${LIBRARY_DIR}/Net/src/ICMPEventArgs.cpp - ${LIBRARY_DIR}/Net/src/ICMPPacket.cpp - ${LIBRARY_DIR}/Net/src/ICMPPacketImpl.cpp - ${LIBRARY_DIR}/Net/src/ICMPSocket.cpp - ${LIBRARY_DIR}/Net/src/ICMPSocketImpl.cpp - ${LIBRARY_DIR}/Net/src/ICMPv4PacketImpl.cpp - ${LIBRARY_DIR}/Net/src/IPAddress.cpp - ${LIBRARY_DIR}/Net/src/IPAddressImpl.cpp - ${LIBRARY_DIR}/Net/src/MailMessage.cpp - ${LIBRARY_DIR}/Net/src/MailRecipient.cpp - ${LIBRARY_DIR}/Net/src/MailStream.cpp - ${LIBRARY_DIR}/Net/src/MediaType.cpp - ${LIBRARY_DIR}/Net/src/MessageHeader.cpp - ${LIBRARY_DIR}/Net/src/MulticastSocket.cpp - ${LIBRARY_DIR}/Net/src/MultipartReader.cpp - ${LIBRARY_DIR}/Net/src/MultipartWriter.cpp - ${LIBRARY_DIR}/Net/src/NameValueCollection.cpp - ${LIBRARY_DIR}/Net/src/Net.cpp - ${LIBRARY_DIR}/Net/src/NetException.cpp - ${LIBRARY_DIR}/Net/src/NetworkInterface.cpp - ${LIBRARY_DIR}/Net/src/NTPClient.cpp - ${LIBRARY_DIR}/Net/src/NTPEventArgs.cpp - ${LIBRARY_DIR}/Net/src/NTPPacket.cpp - ${LIBRARY_DIR}/Net/src/NullPartHandler.cpp - 
${LIBRARY_DIR}/Net/src/OAuth10Credentials.cpp - ${LIBRARY_DIR}/Net/src/OAuth20Credentials.cpp - ${LIBRARY_DIR}/Net/src/PartHandler.cpp - ${LIBRARY_DIR}/Net/src/PartSource.cpp - ${LIBRARY_DIR}/Net/src/PartStore.cpp - ${LIBRARY_DIR}/Net/src/PollSet.cpp - ${LIBRARY_DIR}/Net/src/POP3ClientSession.cpp - ${LIBRARY_DIR}/Net/src/QuotedPrintableDecoder.cpp - ${LIBRARY_DIR}/Net/src/QuotedPrintableEncoder.cpp - ${LIBRARY_DIR}/Net/src/RawSocket.cpp - ${LIBRARY_DIR}/Net/src/RawSocketImpl.cpp - ${LIBRARY_DIR}/Net/src/RemoteSyslogChannel.cpp - ${LIBRARY_DIR}/Net/src/RemoteSyslogListener.cpp - ${LIBRARY_DIR}/Net/src/ServerSocket.cpp - ${LIBRARY_DIR}/Net/src/ServerSocketImpl.cpp - ${LIBRARY_DIR}/Net/src/SMTPChannel.cpp - ${LIBRARY_DIR}/Net/src/SMTPClientSession.cpp - ${LIBRARY_DIR}/Net/src/Socket.cpp - ${LIBRARY_DIR}/Net/src/SocketAddress.cpp - ${LIBRARY_DIR}/Net/src/SocketAddressImpl.cpp - ${LIBRARY_DIR}/Net/src/SocketImpl.cpp - ${LIBRARY_DIR}/Net/src/SocketNotification.cpp - ${LIBRARY_DIR}/Net/src/SocketNotifier.cpp - ${LIBRARY_DIR}/Net/src/SocketReactor.cpp - ${LIBRARY_DIR}/Net/src/SocketStream.cpp - ${LIBRARY_DIR}/Net/src/StreamSocket.cpp - ${LIBRARY_DIR}/Net/src/StreamSocketImpl.cpp - ${LIBRARY_DIR}/Net/src/StringPartSource.cpp - ${LIBRARY_DIR}/Net/src/TCPServer.cpp - ${LIBRARY_DIR}/Net/src/TCPServerConnection.cpp - ${LIBRARY_DIR}/Net/src/TCPServerConnectionFactory.cpp - ${LIBRARY_DIR}/Net/src/TCPServerDispatcher.cpp - ${LIBRARY_DIR}/Net/src/TCPServerParams.cpp - ${LIBRARY_DIR}/Net/src/WebSocket.cpp - ${LIBRARY_DIR}/Net/src/WebSocketImpl.cpp + "${LIBRARY_DIR}/Net/src/AbstractHTTPRequestHandler.cpp" + "${LIBRARY_DIR}/Net/src/DatagramSocket.cpp" + "${LIBRARY_DIR}/Net/src/DatagramSocketImpl.cpp" + "${LIBRARY_DIR}/Net/src/DialogSocket.cpp" + "${LIBRARY_DIR}/Net/src/DNS.cpp" + "${LIBRARY_DIR}/Net/src/FilePartSource.cpp" + "${LIBRARY_DIR}/Net/src/FTPClientSession.cpp" + "${LIBRARY_DIR}/Net/src/FTPStreamFactory.cpp" + "${LIBRARY_DIR}/Net/src/HostEntry.cpp" + "${LIBRARY_DIR}/Net/src/HTMLForm.cpp" + "${LIBRARY_DIR}/Net/src/HTTPAuthenticationParams.cpp" + "${LIBRARY_DIR}/Net/src/HTTPBasicCredentials.cpp" + "${LIBRARY_DIR}/Net/src/HTTPBufferAllocator.cpp" + "${LIBRARY_DIR}/Net/src/HTTPChunkedStream.cpp" + "${LIBRARY_DIR}/Net/src/HTTPClientSession.cpp" + "${LIBRARY_DIR}/Net/src/HTTPCookie.cpp" + "${LIBRARY_DIR}/Net/src/HTTPCredentials.cpp" + "${LIBRARY_DIR}/Net/src/HTTPDigestCredentials.cpp" + "${LIBRARY_DIR}/Net/src/HTTPFixedLengthStream.cpp" + "${LIBRARY_DIR}/Net/src/HTTPHeaderStream.cpp" + "${LIBRARY_DIR}/Net/src/HTTPIOStream.cpp" + "${LIBRARY_DIR}/Net/src/HTTPMessage.cpp" + "${LIBRARY_DIR}/Net/src/HTTPRequest.cpp" + "${LIBRARY_DIR}/Net/src/HTTPRequestHandler.cpp" + "${LIBRARY_DIR}/Net/src/HTTPRequestHandlerFactory.cpp" + "${LIBRARY_DIR}/Net/src/HTTPResponse.cpp" + "${LIBRARY_DIR}/Net/src/HTTPServer.cpp" + "${LIBRARY_DIR}/Net/src/HTTPServerConnection.cpp" + "${LIBRARY_DIR}/Net/src/HTTPServerConnectionFactory.cpp" + "${LIBRARY_DIR}/Net/src/HTTPServerParams.cpp" + "${LIBRARY_DIR}/Net/src/HTTPServerRequest.cpp" + "${LIBRARY_DIR}/Net/src/HTTPServerRequestImpl.cpp" + "${LIBRARY_DIR}/Net/src/HTTPServerResponse.cpp" + "${LIBRARY_DIR}/Net/src/HTTPServerResponseImpl.cpp" + "${LIBRARY_DIR}/Net/src/HTTPServerSession.cpp" + "${LIBRARY_DIR}/Net/src/HTTPSession.cpp" + "${LIBRARY_DIR}/Net/src/HTTPSessionFactory.cpp" + "${LIBRARY_DIR}/Net/src/HTTPSessionInstantiator.cpp" + "${LIBRARY_DIR}/Net/src/HTTPStream.cpp" + "${LIBRARY_DIR}/Net/src/HTTPStreamFactory.cpp" + "${LIBRARY_DIR}/Net/src/ICMPClient.cpp" + 
"${LIBRARY_DIR}/Net/src/ICMPEventArgs.cpp" + "${LIBRARY_DIR}/Net/src/ICMPPacket.cpp" + "${LIBRARY_DIR}/Net/src/ICMPPacketImpl.cpp" + "${LIBRARY_DIR}/Net/src/ICMPSocket.cpp" + "${LIBRARY_DIR}/Net/src/ICMPSocketImpl.cpp" + "${LIBRARY_DIR}/Net/src/ICMPv4PacketImpl.cpp" + "${LIBRARY_DIR}/Net/src/IPAddress.cpp" + "${LIBRARY_DIR}/Net/src/IPAddressImpl.cpp" + "${LIBRARY_DIR}/Net/src/MailMessage.cpp" + "${LIBRARY_DIR}/Net/src/MailRecipient.cpp" + "${LIBRARY_DIR}/Net/src/MailStream.cpp" + "${LIBRARY_DIR}/Net/src/MediaType.cpp" + "${LIBRARY_DIR}/Net/src/MessageHeader.cpp" + "${LIBRARY_DIR}/Net/src/MulticastSocket.cpp" + "${LIBRARY_DIR}/Net/src/MultipartReader.cpp" + "${LIBRARY_DIR}/Net/src/MultipartWriter.cpp" + "${LIBRARY_DIR}/Net/src/NameValueCollection.cpp" + "${LIBRARY_DIR}/Net/src/Net.cpp" + "${LIBRARY_DIR}/Net/src/NetException.cpp" + "${LIBRARY_DIR}/Net/src/NetworkInterface.cpp" + "${LIBRARY_DIR}/Net/src/NTPClient.cpp" + "${LIBRARY_DIR}/Net/src/NTPEventArgs.cpp" + "${LIBRARY_DIR}/Net/src/NTPPacket.cpp" + "${LIBRARY_DIR}/Net/src/NullPartHandler.cpp" + "${LIBRARY_DIR}/Net/src/OAuth10Credentials.cpp" + "${LIBRARY_DIR}/Net/src/OAuth20Credentials.cpp" + "${LIBRARY_DIR}/Net/src/PartHandler.cpp" + "${LIBRARY_DIR}/Net/src/PartSource.cpp" + "${LIBRARY_DIR}/Net/src/PartStore.cpp" + "${LIBRARY_DIR}/Net/src/PollSet.cpp" + "${LIBRARY_DIR}/Net/src/POP3ClientSession.cpp" + "${LIBRARY_DIR}/Net/src/QuotedPrintableDecoder.cpp" + "${LIBRARY_DIR}/Net/src/QuotedPrintableEncoder.cpp" + "${LIBRARY_DIR}/Net/src/RawSocket.cpp" + "${LIBRARY_DIR}/Net/src/RawSocketImpl.cpp" + "${LIBRARY_DIR}/Net/src/RemoteSyslogChannel.cpp" + "${LIBRARY_DIR}/Net/src/RemoteSyslogListener.cpp" + "${LIBRARY_DIR}/Net/src/ServerSocket.cpp" + "${LIBRARY_DIR}/Net/src/ServerSocketImpl.cpp" + "${LIBRARY_DIR}/Net/src/SMTPChannel.cpp" + "${LIBRARY_DIR}/Net/src/SMTPClientSession.cpp" + "${LIBRARY_DIR}/Net/src/Socket.cpp" + "${LIBRARY_DIR}/Net/src/SocketAddress.cpp" + "${LIBRARY_DIR}/Net/src/SocketAddressImpl.cpp" + "${LIBRARY_DIR}/Net/src/SocketImpl.cpp" + "${LIBRARY_DIR}/Net/src/SocketNotification.cpp" + "${LIBRARY_DIR}/Net/src/SocketNotifier.cpp" + "${LIBRARY_DIR}/Net/src/SocketReactor.cpp" + "${LIBRARY_DIR}/Net/src/SocketStream.cpp" + "${LIBRARY_DIR}/Net/src/StreamSocket.cpp" + "${LIBRARY_DIR}/Net/src/StreamSocketImpl.cpp" + "${LIBRARY_DIR}/Net/src/StringPartSource.cpp" + "${LIBRARY_DIR}/Net/src/TCPServer.cpp" + "${LIBRARY_DIR}/Net/src/TCPServerConnection.cpp" + "${LIBRARY_DIR}/Net/src/TCPServerConnectionFactory.cpp" + "${LIBRARY_DIR}/Net/src/TCPServerDispatcher.cpp" + "${LIBRARY_DIR}/Net/src/TCPServerParams.cpp" + "${LIBRARY_DIR}/Net/src/WebSocket.cpp" + "${LIBRARY_DIR}/Net/src/WebSocketImpl.cpp" ) add_library (_poco_net ${SRCS}) @@ -125,7 +125,7 @@ if (USE_INTERNAL_POCO_LIBRARY) -Wno-deprecated -Wno-extra-semi ) - target_include_directories (_poco_net SYSTEM PUBLIC ${LIBRARY_DIR}/Net/include) + target_include_directories (_poco_net SYSTEM PUBLIC "${LIBRARY_DIR}/Net/include") target_link_libraries (_poco_net PUBLIC Poco::Foundation) else () add_library (Poco::Net UNKNOWN IMPORTED GLOBAL) diff --git a/contrib/poco-cmake/Net/SSL/CMakeLists.txt b/contrib/poco-cmake/Net/SSL/CMakeLists.txt index 7cc71f441c7..4b3adacfb8f 100644 --- a/contrib/poco-cmake/Net/SSL/CMakeLists.txt +++ b/contrib/poco-cmake/Net/SSL/CMakeLists.txt @@ -1,39 +1,39 @@ if (ENABLE_SSL) if (USE_INTERNAL_POCO_LIBRARY) set (SRCS - ${LIBRARY_DIR}/NetSSL_OpenSSL/src/AcceptCertificateHandler.cpp - ${LIBRARY_DIR}/NetSSL_OpenSSL/src/CertificateHandlerFactory.cpp - 
${LIBRARY_DIR}/NetSSL_OpenSSL/src/CertificateHandlerFactoryMgr.cpp - ${LIBRARY_DIR}/NetSSL_OpenSSL/src/ConsoleCertificateHandler.cpp - ${LIBRARY_DIR}/NetSSL_OpenSSL/src/Context.cpp - ${LIBRARY_DIR}/NetSSL_OpenSSL/src/HTTPSClientSession.cpp - ${LIBRARY_DIR}/NetSSL_OpenSSL/src/HTTPSSessionInstantiator.cpp - ${LIBRARY_DIR}/NetSSL_OpenSSL/src/HTTPSStreamFactory.cpp - ${LIBRARY_DIR}/NetSSL_OpenSSL/src/InvalidCertificateHandler.cpp - ${LIBRARY_DIR}/NetSSL_OpenSSL/src/KeyConsoleHandler.cpp - ${LIBRARY_DIR}/NetSSL_OpenSSL/src/KeyFileHandler.cpp - ${LIBRARY_DIR}/NetSSL_OpenSSL/src/PrivateKeyFactory.cpp - ${LIBRARY_DIR}/NetSSL_OpenSSL/src/PrivateKeyFactoryMgr.cpp - ${LIBRARY_DIR}/NetSSL_OpenSSL/src/PrivateKeyPassphraseHandler.cpp - ${LIBRARY_DIR}/NetSSL_OpenSSL/src/RejectCertificateHandler.cpp - ${LIBRARY_DIR}/NetSSL_OpenSSL/src/SecureServerSocket.cpp - ${LIBRARY_DIR}/NetSSL_OpenSSL/src/SecureServerSocketImpl.cpp - ${LIBRARY_DIR}/NetSSL_OpenSSL/src/SecureSMTPClientSession.cpp - ${LIBRARY_DIR}/NetSSL_OpenSSL/src/SecureSocketImpl.cpp - ${LIBRARY_DIR}/NetSSL_OpenSSL/src/SecureStreamSocket.cpp - ${LIBRARY_DIR}/NetSSL_OpenSSL/src/SecureStreamSocketImpl.cpp - ${LIBRARY_DIR}/NetSSL_OpenSSL/src/Session.cpp - ${LIBRARY_DIR}/NetSSL_OpenSSL/src/SSLException.cpp - ${LIBRARY_DIR}/NetSSL_OpenSSL/src/SSLManager.cpp - ${LIBRARY_DIR}/NetSSL_OpenSSL/src/Utility.cpp - ${LIBRARY_DIR}/NetSSL_OpenSSL/src/VerificationErrorArgs.cpp - ${LIBRARY_DIR}/NetSSL_OpenSSL/src/X509Certificate.cpp + "${LIBRARY_DIR}/NetSSL_OpenSSL/src/AcceptCertificateHandler.cpp" + "${LIBRARY_DIR}/NetSSL_OpenSSL/src/CertificateHandlerFactory.cpp" + "${LIBRARY_DIR}/NetSSL_OpenSSL/src/CertificateHandlerFactoryMgr.cpp" + "${LIBRARY_DIR}/NetSSL_OpenSSL/src/ConsoleCertificateHandler.cpp" + "${LIBRARY_DIR}/NetSSL_OpenSSL/src/Context.cpp" + "${LIBRARY_DIR}/NetSSL_OpenSSL/src/HTTPSClientSession.cpp" + "${LIBRARY_DIR}/NetSSL_OpenSSL/src/HTTPSSessionInstantiator.cpp" + "${LIBRARY_DIR}/NetSSL_OpenSSL/src/HTTPSStreamFactory.cpp" + "${LIBRARY_DIR}/NetSSL_OpenSSL/src/InvalidCertificateHandler.cpp" + "${LIBRARY_DIR}/NetSSL_OpenSSL/src/KeyConsoleHandler.cpp" + "${LIBRARY_DIR}/NetSSL_OpenSSL/src/KeyFileHandler.cpp" + "${LIBRARY_DIR}/NetSSL_OpenSSL/src/PrivateKeyFactory.cpp" + "${LIBRARY_DIR}/NetSSL_OpenSSL/src/PrivateKeyFactoryMgr.cpp" + "${LIBRARY_DIR}/NetSSL_OpenSSL/src/PrivateKeyPassphraseHandler.cpp" + "${LIBRARY_DIR}/NetSSL_OpenSSL/src/RejectCertificateHandler.cpp" + "${LIBRARY_DIR}/NetSSL_OpenSSL/src/SecureServerSocket.cpp" + "${LIBRARY_DIR}/NetSSL_OpenSSL/src/SecureServerSocketImpl.cpp" + "${LIBRARY_DIR}/NetSSL_OpenSSL/src/SecureSMTPClientSession.cpp" + "${LIBRARY_DIR}/NetSSL_OpenSSL/src/SecureSocketImpl.cpp" + "${LIBRARY_DIR}/NetSSL_OpenSSL/src/SecureStreamSocket.cpp" + "${LIBRARY_DIR}/NetSSL_OpenSSL/src/SecureStreamSocketImpl.cpp" + "${LIBRARY_DIR}/NetSSL_OpenSSL/src/Session.cpp" + "${LIBRARY_DIR}/NetSSL_OpenSSL/src/SSLException.cpp" + "${LIBRARY_DIR}/NetSSL_OpenSSL/src/SSLManager.cpp" + "${LIBRARY_DIR}/NetSSL_OpenSSL/src/Utility.cpp" + "${LIBRARY_DIR}/NetSSL_OpenSSL/src/VerificationErrorArgs.cpp" + "${LIBRARY_DIR}/NetSSL_OpenSSL/src/X509Certificate.cpp" ) add_library (_poco_net_ssl ${SRCS}) add_library (Poco::Net::SSL ALIAS _poco_net_ssl) - target_include_directories (_poco_net_ssl SYSTEM PUBLIC ${LIBRARY_DIR}/NetSSL_OpenSSL/include) + target_include_directories (_poco_net_ssl SYSTEM PUBLIC "${LIBRARY_DIR}/NetSSL_OpenSSL/include") target_link_libraries (_poco_net_ssl PUBLIC Poco::Crypto Poco::Net Poco::Util) else () add_library (Poco::Net::SSL UNKNOWN 
IMPORTED GLOBAL) diff --git a/contrib/poco-cmake/Redis/CMakeLists.txt b/contrib/poco-cmake/Redis/CMakeLists.txt index 43d0009101c..b5892addd85 100644 --- a/contrib/poco-cmake/Redis/CMakeLists.txt +++ b/contrib/poco-cmake/Redis/CMakeLists.txt @@ -1,14 +1,14 @@ if (USE_INTERNAL_POCO_LIBRARY) set (SRCS - ${LIBRARY_DIR}/Redis/src/Array.cpp - ${LIBRARY_DIR}/Redis/src/AsyncReader.cpp - ${LIBRARY_DIR}/Redis/src/Client.cpp - ${LIBRARY_DIR}/Redis/src/Command.cpp - ${LIBRARY_DIR}/Redis/src/Error.cpp - ${LIBRARY_DIR}/Redis/src/Exception.cpp - ${LIBRARY_DIR}/Redis/src/RedisEventArgs.cpp - ${LIBRARY_DIR}/Redis/src/RedisStream.cpp - ${LIBRARY_DIR}/Redis/src/Type.cpp + "${LIBRARY_DIR}/Redis/src/Array.cpp" + "${LIBRARY_DIR}/Redis/src/AsyncReader.cpp" + "${LIBRARY_DIR}/Redis/src/Client.cpp" + "${LIBRARY_DIR}/Redis/src/Command.cpp" + "${LIBRARY_DIR}/Redis/src/Error.cpp" + "${LIBRARY_DIR}/Redis/src/Exception.cpp" + "${LIBRARY_DIR}/Redis/src/RedisEventArgs.cpp" + "${LIBRARY_DIR}/Redis/src/RedisStream.cpp" + "${LIBRARY_DIR}/Redis/src/Type.cpp" ) add_library (_poco_redis ${SRCS}) @@ -18,7 +18,7 @@ if (USE_INTERNAL_POCO_LIBRARY) target_compile_options (_poco_redis PRIVATE -Wno-deprecated-copy) endif () target_compile_options (_poco_redis PRIVATE -Wno-shadow) - target_include_directories (_poco_redis SYSTEM PUBLIC ${LIBRARY_DIR}/Redis/include) + target_include_directories (_poco_redis SYSTEM PUBLIC "${LIBRARY_DIR}/Redis/include") target_link_libraries (_poco_redis PUBLIC Poco::Net) else () add_library (Poco::Redis UNKNOWN IMPORTED GLOBAL) diff --git a/contrib/poco-cmake/Util/CMakeLists.txt b/contrib/poco-cmake/Util/CMakeLists.txt index f5af3a5793c..e233e65cfea 100644 --- a/contrib/poco-cmake/Util/CMakeLists.txt +++ b/contrib/poco-cmake/Util/CMakeLists.txt @@ -1,38 +1,38 @@ if (USE_INTERNAL_POCO_LIBRARY) set (SRCS - ${LIBRARY_DIR}/Util/src/AbstractConfiguration.cpp - ${LIBRARY_DIR}/Util/src/Application.cpp - ${LIBRARY_DIR}/Util/src/ConfigurationMapper.cpp - ${LIBRARY_DIR}/Util/src/ConfigurationView.cpp - ${LIBRARY_DIR}/Util/src/FilesystemConfiguration.cpp - ${LIBRARY_DIR}/Util/src/HelpFormatter.cpp - ${LIBRARY_DIR}/Util/src/IniFileConfiguration.cpp - ${LIBRARY_DIR}/Util/src/IntValidator.cpp - ${LIBRARY_DIR}/Util/src/JSONConfiguration.cpp - ${LIBRARY_DIR}/Util/src/LayeredConfiguration.cpp - ${LIBRARY_DIR}/Util/src/LoggingConfigurator.cpp - ${LIBRARY_DIR}/Util/src/LoggingSubsystem.cpp - ${LIBRARY_DIR}/Util/src/MapConfiguration.cpp - ${LIBRARY_DIR}/Util/src/Option.cpp - ${LIBRARY_DIR}/Util/src/OptionCallback.cpp - ${LIBRARY_DIR}/Util/src/OptionException.cpp - ${LIBRARY_DIR}/Util/src/OptionProcessor.cpp - ${LIBRARY_DIR}/Util/src/OptionSet.cpp - ${LIBRARY_DIR}/Util/src/PropertyFileConfiguration.cpp - ${LIBRARY_DIR}/Util/src/RegExpValidator.cpp - ${LIBRARY_DIR}/Util/src/ServerApplication.cpp - ${LIBRARY_DIR}/Util/src/Subsystem.cpp - ${LIBRARY_DIR}/Util/src/SystemConfiguration.cpp - ${LIBRARY_DIR}/Util/src/Timer.cpp - ${LIBRARY_DIR}/Util/src/TimerTask.cpp - ${LIBRARY_DIR}/Util/src/Validator.cpp - ${LIBRARY_DIR}/Util/src/XMLConfiguration.cpp + "${LIBRARY_DIR}/Util/src/AbstractConfiguration.cpp" + "${LIBRARY_DIR}/Util/src/Application.cpp" + "${LIBRARY_DIR}/Util/src/ConfigurationMapper.cpp" + "${LIBRARY_DIR}/Util/src/ConfigurationView.cpp" + "${LIBRARY_DIR}/Util/src/FilesystemConfiguration.cpp" + "${LIBRARY_DIR}/Util/src/HelpFormatter.cpp" + "${LIBRARY_DIR}/Util/src/IniFileConfiguration.cpp" + "${LIBRARY_DIR}/Util/src/IntValidator.cpp" + "${LIBRARY_DIR}/Util/src/JSONConfiguration.cpp" + 
"${LIBRARY_DIR}/Util/src/LayeredConfiguration.cpp" + "${LIBRARY_DIR}/Util/src/LoggingConfigurator.cpp" + "${LIBRARY_DIR}/Util/src/LoggingSubsystem.cpp" + "${LIBRARY_DIR}/Util/src/MapConfiguration.cpp" + "${LIBRARY_DIR}/Util/src/Option.cpp" + "${LIBRARY_DIR}/Util/src/OptionCallback.cpp" + "${LIBRARY_DIR}/Util/src/OptionException.cpp" + "${LIBRARY_DIR}/Util/src/OptionProcessor.cpp" + "${LIBRARY_DIR}/Util/src/OptionSet.cpp" + "${LIBRARY_DIR}/Util/src/PropertyFileConfiguration.cpp" + "${LIBRARY_DIR}/Util/src/RegExpValidator.cpp" + "${LIBRARY_DIR}/Util/src/ServerApplication.cpp" + "${LIBRARY_DIR}/Util/src/Subsystem.cpp" + "${LIBRARY_DIR}/Util/src/SystemConfiguration.cpp" + "${LIBRARY_DIR}/Util/src/Timer.cpp" + "${LIBRARY_DIR}/Util/src/TimerTask.cpp" + "${LIBRARY_DIR}/Util/src/Validator.cpp" + "${LIBRARY_DIR}/Util/src/XMLConfiguration.cpp" ) add_library (_poco_util ${SRCS}) add_library (Poco::Util ALIAS _poco_util) - target_include_directories (_poco_util SYSTEM PUBLIC ${LIBRARY_DIR}/Util/include) + target_include_directories (_poco_util SYSTEM PUBLIC "${LIBRARY_DIR}/Util/include") target_link_libraries (_poco_util PUBLIC Poco::JSON Poco::XML) else () add_library (Poco::Util UNKNOWN IMPORTED GLOBAL) diff --git a/contrib/poco-cmake/XML/CMakeLists.txt b/contrib/poco-cmake/XML/CMakeLists.txt index 448b7e22c7c..af801a65f03 100644 --- a/contrib/poco-cmake/XML/CMakeLists.txt +++ b/contrib/poco-cmake/XML/CMakeLists.txt @@ -2,101 +2,101 @@ if (USE_INTERNAL_POCO_LIBRARY) # Poco::XML (expat) set (SRCS_EXPAT - ${LIBRARY_DIR}/XML/src/xmlrole.c - ${LIBRARY_DIR}/XML/src/xmltok_impl.c - ${LIBRARY_DIR}/XML/src/xmltok_ns.c - ${LIBRARY_DIR}/XML/src/xmltok.c + "${LIBRARY_DIR}/XML/src/xmlrole.c" + "${LIBRARY_DIR}/XML/src/xmltok_impl.c" + "${LIBRARY_DIR}/XML/src/xmltok_ns.c" + "${LIBRARY_DIR}/XML/src/xmltok.c" ) add_library (_poco_xml_expat ${SRCS_EXPAT}) add_library (Poco::XML::Expat ALIAS _poco_xml_expat) - target_include_directories (_poco_xml_expat PUBLIC ${LIBRARY_DIR}/XML/include) + target_include_directories (_poco_xml_expat PUBLIC "${LIBRARY_DIR}/XML/include") # Poco::XML set (SRCS - ${LIBRARY_DIR}/XML/src/AbstractContainerNode.cpp - ${LIBRARY_DIR}/XML/src/AbstractNode.cpp - ${LIBRARY_DIR}/XML/src/Attr.cpp - ${LIBRARY_DIR}/XML/src/Attributes.cpp - ${LIBRARY_DIR}/XML/src/AttributesImpl.cpp - ${LIBRARY_DIR}/XML/src/AttrMap.cpp - ${LIBRARY_DIR}/XML/src/CDATASection.cpp - ${LIBRARY_DIR}/XML/src/CharacterData.cpp - ${LIBRARY_DIR}/XML/src/ChildNodesList.cpp - ${LIBRARY_DIR}/XML/src/Comment.cpp - ${LIBRARY_DIR}/XML/src/ContentHandler.cpp - ${LIBRARY_DIR}/XML/src/DeclHandler.cpp - ${LIBRARY_DIR}/XML/src/DefaultHandler.cpp - ${LIBRARY_DIR}/XML/src/Document.cpp - ${LIBRARY_DIR}/XML/src/DocumentEvent.cpp - ${LIBRARY_DIR}/XML/src/DocumentFragment.cpp - ${LIBRARY_DIR}/XML/src/DocumentType.cpp - ${LIBRARY_DIR}/XML/src/DOMBuilder.cpp - ${LIBRARY_DIR}/XML/src/DOMException.cpp - ${LIBRARY_DIR}/XML/src/DOMImplementation.cpp - ${LIBRARY_DIR}/XML/src/DOMObject.cpp - ${LIBRARY_DIR}/XML/src/DOMParser.cpp - ${LIBRARY_DIR}/XML/src/DOMSerializer.cpp - ${LIBRARY_DIR}/XML/src/DOMWriter.cpp - ${LIBRARY_DIR}/XML/src/DTDHandler.cpp - ${LIBRARY_DIR}/XML/src/DTDMap.cpp - ${LIBRARY_DIR}/XML/src/Element.cpp - ${LIBRARY_DIR}/XML/src/ElementsByTagNameList.cpp - ${LIBRARY_DIR}/XML/src/Entity.cpp - ${LIBRARY_DIR}/XML/src/EntityReference.cpp - ${LIBRARY_DIR}/XML/src/EntityResolver.cpp - ${LIBRARY_DIR}/XML/src/EntityResolverImpl.cpp - ${LIBRARY_DIR}/XML/src/ErrorHandler.cpp - ${LIBRARY_DIR}/XML/src/Event.cpp - 
${LIBRARY_DIR}/XML/src/EventDispatcher.cpp - ${LIBRARY_DIR}/XML/src/EventException.cpp - ${LIBRARY_DIR}/XML/src/EventListener.cpp - ${LIBRARY_DIR}/XML/src/EventTarget.cpp - ${LIBRARY_DIR}/XML/src/InputSource.cpp - ${LIBRARY_DIR}/XML/src/LexicalHandler.cpp - ${LIBRARY_DIR}/XML/src/Locator.cpp - ${LIBRARY_DIR}/XML/src/LocatorImpl.cpp - ${LIBRARY_DIR}/XML/src/MutationEvent.cpp - ${LIBRARY_DIR}/XML/src/Name.cpp - ${LIBRARY_DIR}/XML/src/NamedNodeMap.cpp - ${LIBRARY_DIR}/XML/src/NamePool.cpp - ${LIBRARY_DIR}/XML/src/NamespaceStrategy.cpp - ${LIBRARY_DIR}/XML/src/NamespaceSupport.cpp - ${LIBRARY_DIR}/XML/src/Node.cpp - ${LIBRARY_DIR}/XML/src/NodeAppender.cpp - ${LIBRARY_DIR}/XML/src/NodeFilter.cpp - ${LIBRARY_DIR}/XML/src/NodeIterator.cpp - ${LIBRARY_DIR}/XML/src/NodeList.cpp - ${LIBRARY_DIR}/XML/src/Notation.cpp - ${LIBRARY_DIR}/XML/src/ParserEngine.cpp - ${LIBRARY_DIR}/XML/src/ProcessingInstruction.cpp - ${LIBRARY_DIR}/XML/src/QName.cpp - ${LIBRARY_DIR}/XML/src/SAXException.cpp - ${LIBRARY_DIR}/XML/src/SAXParser.cpp - ${LIBRARY_DIR}/XML/src/Text.cpp - ${LIBRARY_DIR}/XML/src/TreeWalker.cpp - ${LIBRARY_DIR}/XML/src/ValueTraits.cpp - ${LIBRARY_DIR}/XML/src/WhitespaceFilter.cpp - ${LIBRARY_DIR}/XML/src/XMLException.cpp - ${LIBRARY_DIR}/XML/src/XMLFilter.cpp - ${LIBRARY_DIR}/XML/src/XMLFilterImpl.cpp - ${LIBRARY_DIR}/XML/src/XMLReader.cpp - ${LIBRARY_DIR}/XML/src/XMLStreamParser.cpp - ${LIBRARY_DIR}/XML/src/XMLStreamParserException.cpp - ${LIBRARY_DIR}/XML/src/XMLString.cpp - ${LIBRARY_DIR}/XML/src/XMLWriter.cpp + "${LIBRARY_DIR}/XML/src/AbstractContainerNode.cpp" + "${LIBRARY_DIR}/XML/src/AbstractNode.cpp" + "${LIBRARY_DIR}/XML/src/Attr.cpp" + "${LIBRARY_DIR}/XML/src/Attributes.cpp" + "${LIBRARY_DIR}/XML/src/AttributesImpl.cpp" + "${LIBRARY_DIR}/XML/src/AttrMap.cpp" + "${LIBRARY_DIR}/XML/src/CDATASection.cpp" + "${LIBRARY_DIR}/XML/src/CharacterData.cpp" + "${LIBRARY_DIR}/XML/src/ChildNodesList.cpp" + "${LIBRARY_DIR}/XML/src/Comment.cpp" + "${LIBRARY_DIR}/XML/src/ContentHandler.cpp" + "${LIBRARY_DIR}/XML/src/DeclHandler.cpp" + "${LIBRARY_DIR}/XML/src/DefaultHandler.cpp" + "${LIBRARY_DIR}/XML/src/Document.cpp" + "${LIBRARY_DIR}/XML/src/DocumentEvent.cpp" + "${LIBRARY_DIR}/XML/src/DocumentFragment.cpp" + "${LIBRARY_DIR}/XML/src/DocumentType.cpp" + "${LIBRARY_DIR}/XML/src/DOMBuilder.cpp" + "${LIBRARY_DIR}/XML/src/DOMException.cpp" + "${LIBRARY_DIR}/XML/src/DOMImplementation.cpp" + "${LIBRARY_DIR}/XML/src/DOMObject.cpp" + "${LIBRARY_DIR}/XML/src/DOMParser.cpp" + "${LIBRARY_DIR}/XML/src/DOMSerializer.cpp" + "${LIBRARY_DIR}/XML/src/DOMWriter.cpp" + "${LIBRARY_DIR}/XML/src/DTDHandler.cpp" + "${LIBRARY_DIR}/XML/src/DTDMap.cpp" + "${LIBRARY_DIR}/XML/src/Element.cpp" + "${LIBRARY_DIR}/XML/src/ElementsByTagNameList.cpp" + "${LIBRARY_DIR}/XML/src/Entity.cpp" + "${LIBRARY_DIR}/XML/src/EntityReference.cpp" + "${LIBRARY_DIR}/XML/src/EntityResolver.cpp" + "${LIBRARY_DIR}/XML/src/EntityResolverImpl.cpp" + "${LIBRARY_DIR}/XML/src/ErrorHandler.cpp" + "${LIBRARY_DIR}/XML/src/Event.cpp" + "${LIBRARY_DIR}/XML/src/EventDispatcher.cpp" + "${LIBRARY_DIR}/XML/src/EventException.cpp" + "${LIBRARY_DIR}/XML/src/EventListener.cpp" + "${LIBRARY_DIR}/XML/src/EventTarget.cpp" + "${LIBRARY_DIR}/XML/src/InputSource.cpp" + "${LIBRARY_DIR}/XML/src/LexicalHandler.cpp" + "${LIBRARY_DIR}/XML/src/Locator.cpp" + "${LIBRARY_DIR}/XML/src/LocatorImpl.cpp" + "${LIBRARY_DIR}/XML/src/MutationEvent.cpp" + "${LIBRARY_DIR}/XML/src/Name.cpp" + "${LIBRARY_DIR}/XML/src/NamedNodeMap.cpp" + "${LIBRARY_DIR}/XML/src/NamePool.cpp" + 
"${LIBRARY_DIR}/XML/src/NamespaceStrategy.cpp" + "${LIBRARY_DIR}/XML/src/NamespaceSupport.cpp" + "${LIBRARY_DIR}/XML/src/Node.cpp" + "${LIBRARY_DIR}/XML/src/NodeAppender.cpp" + "${LIBRARY_DIR}/XML/src/NodeFilter.cpp" + "${LIBRARY_DIR}/XML/src/NodeIterator.cpp" + "${LIBRARY_DIR}/XML/src/NodeList.cpp" + "${LIBRARY_DIR}/XML/src/Notation.cpp" + "${LIBRARY_DIR}/XML/src/ParserEngine.cpp" + "${LIBRARY_DIR}/XML/src/ProcessingInstruction.cpp" + "${LIBRARY_DIR}/XML/src/QName.cpp" + "${LIBRARY_DIR}/XML/src/SAXException.cpp" + "${LIBRARY_DIR}/XML/src/SAXParser.cpp" + "${LIBRARY_DIR}/XML/src/Text.cpp" + "${LIBRARY_DIR}/XML/src/TreeWalker.cpp" + "${LIBRARY_DIR}/XML/src/ValueTraits.cpp" + "${LIBRARY_DIR}/XML/src/WhitespaceFilter.cpp" + "${LIBRARY_DIR}/XML/src/XMLException.cpp" + "${LIBRARY_DIR}/XML/src/XMLFilter.cpp" + "${LIBRARY_DIR}/XML/src/XMLFilterImpl.cpp" + "${LIBRARY_DIR}/XML/src/XMLReader.cpp" + "${LIBRARY_DIR}/XML/src/XMLStreamParser.cpp" + "${LIBRARY_DIR}/XML/src/XMLStreamParserException.cpp" + "${LIBRARY_DIR}/XML/src/XMLString.cpp" + "${LIBRARY_DIR}/XML/src/XMLWriter.cpp" # expat - ${LIBRARY_DIR}/XML/src/xmlparse.cpp + "${LIBRARY_DIR}/XML/src/xmlparse.cpp" ) add_library (_poco_xml ${SRCS}) add_library (Poco::XML ALIAS _poco_xml) target_compile_options (_poco_xml PRIVATE -Wno-old-style-cast) - target_include_directories (_poco_xml SYSTEM PUBLIC ${LIBRARY_DIR}/XML/include) + target_include_directories (_poco_xml SYSTEM PUBLIC "${LIBRARY_DIR}/XML/include") target_link_libraries (_poco_xml PUBLIC Poco::Foundation Poco::XML::Expat) else () add_library (Poco::XML UNKNOWN IMPORTED GLOBAL) diff --git a/contrib/protobuf-cmake/CMakeLists.txt b/contrib/protobuf-cmake/CMakeLists.txt index 1f8d9b02b3e..a4993030d04 100644 --- a/contrib/protobuf-cmake/CMakeLists.txt +++ b/contrib/protobuf-cmake/CMakeLists.txt @@ -14,4 +14,4 @@ add_subdirectory("${protobuf_SOURCE_DIR}/cmake" "${protobuf_BINARY_DIR}") # We don't want to stop compilation on warnings in protobuf's headers. 
# The following line overrides the value assigned by the command target_include_directories() in libprotobuf.cmake -set_property(TARGET libprotobuf PROPERTY INTERFACE_SYSTEM_INCLUDE_DIRECTORIES ${protobuf_SOURCE_DIR}/src) +set_property(TARGET libprotobuf PROPERTY INTERFACE_SYSTEM_INCLUDE_DIRECTORIES "${protobuf_SOURCE_DIR}/src") diff --git a/contrib/replxx b/contrib/replxx index cdb6e3f2ce4..2b24f14594d 160000 --- a/contrib/replxx +++ b/contrib/replxx @@ -1 +1 @@ -Subproject commit cdb6e3f2ce4464225daf9c8beeae7db98d590bdc +Subproject commit 2b24f14594d7606792b92544bb112a6322ba34d7 diff --git a/contrib/replxx-cmake/CMakeLists.txt b/contrib/replxx-cmake/CMakeLists.txt index df17e0ed646..07f24bae25d 100644 --- a/contrib/replxx-cmake/CMakeLists.txt +++ b/contrib/replxx-cmake/CMakeLists.txt @@ -62,7 +62,7 @@ if (NOT LIBRARY_REPLXX OR NOT INCLUDE_REPLXX OR NOT EXTERNAL_REPLXX_WORKS) ) add_library (replxx ${SRCS}) - target_include_directories(replxx SYSTEM PUBLIC ${LIBRARY_DIR}/include) + target_include_directories(replxx SYSTEM PUBLIC "${LIBRARY_DIR}/include") endif () if (COMPILER_CLANG) diff --git a/contrib/rocksdb-cmake/CMakeLists.txt b/contrib/rocksdb-cmake/CMakeLists.txt index 77a30776a4a..bccc9ed5294 100644 --- a/contrib/rocksdb-cmake/CMakeLists.txt +++ b/contrib/rocksdb-cmake/CMakeLists.txt @@ -2,15 +2,6 @@ set(ROCKSDB_SOURCE_DIR "${ClickHouse_SOURCE_DIR}/contrib/rocksdb") list(APPEND CMAKE_MODULE_PATH "${ROCKSDB_SOURCE_DIR}/cmake/modules/") -if (SANITIZE STREQUAL "undefined") - set(WITH_UBSAN ON) -elseif (SANITIZE STREQUAL "address") - set(WITH_ASAN ON) -elseif (SANITIZE STREQUAL "thread") - set(WITH_TSAN ON) -endif() - - set(PORTABLE ON) ## always disable jemalloc for rocksdb by default ## because it introduces non-standard jemalloc APIs @@ -40,7 +31,7 @@ endif() if(MSVC) option(WITH_XPRESS "build with windows built in compression" OFF) - include(${ROCKSDB_SOURCE_DIR}/thirdparty.inc) + include("${ROCKSDB_SOURCE_DIR}/thirdparty.inc") else() if(CMAKE_SYSTEM_NAME MATCHES "FreeBSD" AND NOT CMAKE_SYSTEM_NAME MATCHES "kFreeBSD") # FreeBSD has jemalloc as default malloc @@ -71,55 +62,18 @@ else() if(WITH_ZSTD) add_definitions(-DZSTD) include_directories(${ZSTD_INCLUDE_DIR}) - include_directories(${ZSTD_INCLUDE_DIR}/common) - include_directories(${ZSTD_INCLUDE_DIR}/dictBuilder) - include_directories(${ZSTD_INCLUDE_DIR}/deprecated) + include_directories("${ZSTD_INCLUDE_DIR}/common") + include_directories("${ZSTD_INCLUDE_DIR}/dictBuilder") + include_directories("${ZSTD_INCLUDE_DIR}/deprecated") list(APPEND THIRDPARTY_LIBS zstd) endif() endif() -string(TIMESTAMP TS "%Y/%m/%d %H:%M:%S" UTC) -set(GIT_DATE_TIME "${TS}" CACHE STRING "the time we first built rocksdb") - -find_package(Git) - -if(GIT_FOUND AND EXISTS "${ROCKSDB_SOURCE_DIR}/.git") - if(WIN32) - execute_process(COMMAND $ENV{COMSPEC} /C ${GIT_EXECUTABLE} -C ${ROCKSDB_SOURCE_DIR} rev-parse HEAD OUTPUT_VARIABLE GIT_SHA) - else() - execute_process(COMMAND ${GIT_EXECUTABLE} -C ${ROCKSDB_SOURCE_DIR} rev-parse HEAD OUTPUT_VARIABLE GIT_SHA) - endif() -else() - set(GIT_SHA 0) -endif() - -string(REGEX REPLACE "[^0-9a-f]+" "" GIT_SHA "${GIT_SHA}") - -set(BUILD_VERSION_CC ${CMAKE_BINARY_DIR}/rocksdb_build_version.cc) -configure_file(${ROCKSDB_SOURCE_DIR}/util/build_version.cc.in ${BUILD_VERSION_CC} @ONLY) +set(BUILD_VERSION_CC rocksdb_build_version.cc) add_library(rocksdb_build_version OBJECT ${BUILD_VERSION_CC}) -target_include_directories(rocksdb_build_version PRIVATE - ${ROCKSDB_SOURCE_DIR}/util) -if(MSVC) - set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} 
/Zi /nologo /EHsc /GS /Gd /GR /GF /fp:precise /Zc:wchar_t /Zc:forScope /errorReport:queue") - set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /FC /d2Zi+ /W4 /wd4127 /wd4800 /wd4996 /wd4351 /wd4100 /wd4204 /wd4324") -else() - set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -W -Wextra -Wall") - set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wsign-compare -Wshadow -Wno-unused-parameter -Wno-unused-variable -Woverloaded-virtual -Wnon-virtual-dtor -Wno-missing-field-initializers -Wno-strict-aliasing") - if(MINGW) - set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-format -fno-asynchronous-unwind-tables") - add_definitions(-D_POSIX_C_SOURCE=1) - endif() - if(NOT CMAKE_BUILD_TYPE STREQUAL "Debug") - set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fno-omit-frame-pointer") - include(CheckCXXCompilerFlag) - CHECK_CXX_COMPILER_FLAG("-momit-leaf-frame-pointer" HAVE_OMIT_LEAF_FRAME_POINTER) - if(HAVE_OMIT_LEAF_FRAME_POINTER) - set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -momit-leaf-frame-pointer") - endif() - endif() -endif() + +target_include_directories(rocksdb_build_version PRIVATE "${ROCKSDB_SOURCE_DIR}/util") include(CheckCCompilerFlag) if(CMAKE_SYSTEM_PROCESSOR MATCHES "^(powerpc|ppc)64") @@ -142,14 +96,14 @@ if(CMAKE_SYSTEM_PROCESSOR MATCHES "^(powerpc|ppc)64") endif(HAS_ALTIVEC) endif(CMAKE_SYSTEM_PROCESSOR MATCHES "^(powerpc|ppc)64") -if(CMAKE_SYSTEM_PROCESSOR MATCHES "aarch64|AARCH64") +if(CMAKE_SYSTEM_PROCESSOR MATCHES "aarch64|AARCH64|arm64|ARM64") CHECK_C_COMPILER_FLAG("-march=armv8-a+crc+crypto" HAS_ARMV8_CRC) if(HAS_ARMV8_CRC) message(STATUS " HAS_ARMV8_CRC yes") set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -march=armv8-a+crc+crypto -Wno-unused-function") set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -march=armv8-a+crc+crypto -Wno-unused-function") endif(HAS_ARMV8_CRC) -endif(CMAKE_SYSTEM_PROCESSOR MATCHES "aarch64|AARCH64") +endif(CMAKE_SYSTEM_PROCESSOR MATCHES "aarch64|AARCH64|arm64|ARM64") include(CheckCXXSourceCompiles) @@ -189,50 +143,7 @@ if(HAVE_THREAD_LOCAL) add_definitions(-DROCKSDB_SUPPORT_THREAD_LOCAL) endif() -option(FAIL_ON_WARNINGS "Treat compile warnings as errors" ON) -if(FAIL_ON_WARNINGS) - if(MSVC) - set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /WX") - else() # assume GCC - set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Werror") - endif() -endif() - -option(WITH_ASAN "build with ASAN" OFF) -if(WITH_ASAN) - set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -fsanitize=address") - set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fsanitize=address") - set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -fsanitize=address") - if(WITH_JEMALLOC) - message(FATAL "ASAN does not work well with JeMalloc") - endif() -endif() - -option(WITH_TSAN "build with TSAN" OFF) -if(WITH_TSAN) - set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -fsanitize=thread -pie") - set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fsanitize=thread -fPIC") - set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -fsanitize=thread -fPIC") - if(WITH_JEMALLOC) - message(FATAL "TSAN does not work well with JeMalloc") - endif() -endif() - -option(WITH_UBSAN "build with UBSAN" OFF) -if(WITH_UBSAN) - add_definitions(-DROCKSDB_UBSAN_RUN) - set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -fsanitize=undefined") - set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fsanitize=undefined") - set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -fsanitize=undefined") - if(WITH_JEMALLOC) - message(FATAL "UBSAN does not work well with JeMalloc") - endif() -endif() - - -if(CMAKE_SYSTEM_NAME MATCHES "Cygwin") - add_definitions(-fno-builtin-memcmp -DCYGWIN) -elseif(CMAKE_SYSTEM_NAME MATCHES "Darwin") +if(CMAKE_SYSTEM_NAME MATCHES "Darwin") 
add_definitions(-DOS_MACOSX) if(CMAKE_SYSTEM_PROCESSOR MATCHES arm) add_definitions(-DIOS_CROSS_COMPILE -DROCKSDB_LITE) @@ -304,9 +215,9 @@ endif() include(CheckCXXSymbolExists) if(CMAKE_SYSTEM_NAME MATCHES "^FreeBSD") - check_cxx_symbol_exists(malloc_usable_size ${ROCKSDB_SOURCE_DIR}/malloc_np.h HAVE_MALLOC_USABLE_SIZE) + check_cxx_symbol_exists(malloc_usable_size "${ROCKSDB_SOURCE_DIR}/malloc_np.h" HAVE_MALLOC_USABLE_SIZE) else() - check_cxx_symbol_exists(malloc_usable_size ${ROCKSDB_SOURCE_DIR}/malloc.h HAVE_MALLOC_USABLE_SIZE) + check_cxx_symbol_exists(malloc_usable_size "${ROCKSDB_SOURCE_DIR}/malloc.h" HAVE_MALLOC_USABLE_SIZE) endif() if(HAVE_MALLOC_USABLE_SIZE) add_definitions(-DROCKSDB_MALLOC_USABLE_SIZE) @@ -323,347 +234,316 @@ if(HAVE_AUXV_GETAUXVAL) endif() include_directories(${ROCKSDB_SOURCE_DIR}) -include_directories(${ROCKSDB_SOURCE_DIR}/include) +include_directories("${ROCKSDB_SOURCE_DIR}/include") if(WITH_FOLLY_DISTRIBUTED_MUTEX) - include_directories(${ROCKSDB_SOURCE_DIR}/third-party/folly) + include_directories("${ROCKSDB_SOURCE_DIR}/third-party/folly") endif() find_package(Threads REQUIRED) # Main library source code set(SOURCES - ${ROCKSDB_SOURCE_DIR}/cache/cache.cc - ${ROCKSDB_SOURCE_DIR}/cache/clock_cache.cc - ${ROCKSDB_SOURCE_DIR}/cache/lru_cache.cc - ${ROCKSDB_SOURCE_DIR}/cache/sharded_cache.cc - ${ROCKSDB_SOURCE_DIR}/db/arena_wrapped_db_iter.cc - ${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_addition.cc - ${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_builder.cc - ${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_cache.cc - ${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_garbage.cc - ${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_meta.cc - ${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_reader.cc - ${ROCKSDB_SOURCE_DIR}/db/blob/blob_log_format.cc - ${ROCKSDB_SOURCE_DIR}/db/blob/blob_log_sequential_reader.cc - ${ROCKSDB_SOURCE_DIR}/db/blob/blob_log_writer.cc - ${ROCKSDB_SOURCE_DIR}/db/builder.cc - ${ROCKSDB_SOURCE_DIR}/db/c.cc - ${ROCKSDB_SOURCE_DIR}/db/column_family.cc - ${ROCKSDB_SOURCE_DIR}/db/compacted_db_impl.cc - ${ROCKSDB_SOURCE_DIR}/db/compaction/compaction.cc - ${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_iterator.cc - ${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_picker.cc - ${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_job.cc - ${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_picker_fifo.cc - ${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_picker_level.cc - ${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_picker_universal.cc - ${ROCKSDB_SOURCE_DIR}/db/compaction/sst_partitioner.cc - ${ROCKSDB_SOURCE_DIR}/db/convenience.cc - ${ROCKSDB_SOURCE_DIR}/db/db_filesnapshot.cc - ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl.cc - ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_write.cc - ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_compaction_flush.cc - ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_files.cc - ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_open.cc - ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_debug.cc - ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_experimental.cc - ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_readonly.cc - ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_secondary.cc - ${ROCKSDB_SOURCE_DIR}/db/db_info_dumper.cc - ${ROCKSDB_SOURCE_DIR}/db/db_iter.cc - ${ROCKSDB_SOURCE_DIR}/db/dbformat.cc - ${ROCKSDB_SOURCE_DIR}/db/error_handler.cc - ${ROCKSDB_SOURCE_DIR}/db/event_helpers.cc - ${ROCKSDB_SOURCE_DIR}/db/experimental.cc - ${ROCKSDB_SOURCE_DIR}/db/external_sst_file_ingestion_job.cc - ${ROCKSDB_SOURCE_DIR}/db/file_indexer.cc - ${ROCKSDB_SOURCE_DIR}/db/flush_job.cc - ${ROCKSDB_SOURCE_DIR}/db/flush_scheduler.cc - 
${ROCKSDB_SOURCE_DIR}/db/forward_iterator.cc - ${ROCKSDB_SOURCE_DIR}/db/import_column_family_job.cc - ${ROCKSDB_SOURCE_DIR}/db/internal_stats.cc - ${ROCKSDB_SOURCE_DIR}/db/logs_with_prep_tracker.cc - ${ROCKSDB_SOURCE_DIR}/db/log_reader.cc - ${ROCKSDB_SOURCE_DIR}/db/log_writer.cc - ${ROCKSDB_SOURCE_DIR}/db/malloc_stats.cc - ${ROCKSDB_SOURCE_DIR}/db/memtable.cc - ${ROCKSDB_SOURCE_DIR}/db/memtable_list.cc - ${ROCKSDB_SOURCE_DIR}/db/merge_helper.cc - ${ROCKSDB_SOURCE_DIR}/db/merge_operator.cc - ${ROCKSDB_SOURCE_DIR}/db/output_validator.cc - ${ROCKSDB_SOURCE_DIR}/db/periodic_work_scheduler.cc - ${ROCKSDB_SOURCE_DIR}/db/range_del_aggregator.cc - ${ROCKSDB_SOURCE_DIR}/db/range_tombstone_fragmenter.cc - ${ROCKSDB_SOURCE_DIR}/db/repair.cc - ${ROCKSDB_SOURCE_DIR}/db/snapshot_impl.cc - ${ROCKSDB_SOURCE_DIR}/db/table_cache.cc - ${ROCKSDB_SOURCE_DIR}/db/table_properties_collector.cc - ${ROCKSDB_SOURCE_DIR}/db/transaction_log_impl.cc - ${ROCKSDB_SOURCE_DIR}/db/trim_history_scheduler.cc - ${ROCKSDB_SOURCE_DIR}/db/version_builder.cc - ${ROCKSDB_SOURCE_DIR}/db/version_edit.cc - ${ROCKSDB_SOURCE_DIR}/db/version_edit_handler.cc - ${ROCKSDB_SOURCE_DIR}/db/version_set.cc - ${ROCKSDB_SOURCE_DIR}/db/wal_edit.cc - ${ROCKSDB_SOURCE_DIR}/db/wal_manager.cc - ${ROCKSDB_SOURCE_DIR}/db/write_batch.cc - ${ROCKSDB_SOURCE_DIR}/db/write_batch_base.cc - ${ROCKSDB_SOURCE_DIR}/db/write_controller.cc - ${ROCKSDB_SOURCE_DIR}/db/write_thread.cc - ${ROCKSDB_SOURCE_DIR}/env/env.cc - ${ROCKSDB_SOURCE_DIR}/env/env_chroot.cc - ${ROCKSDB_SOURCE_DIR}/env/env_encryption.cc - ${ROCKSDB_SOURCE_DIR}/env/env_hdfs.cc - ${ROCKSDB_SOURCE_DIR}/env/file_system.cc - ${ROCKSDB_SOURCE_DIR}/env/file_system_tracer.cc - ${ROCKSDB_SOURCE_DIR}/env/mock_env.cc - ${ROCKSDB_SOURCE_DIR}/file/delete_scheduler.cc - ${ROCKSDB_SOURCE_DIR}/file/file_prefetch_buffer.cc - ${ROCKSDB_SOURCE_DIR}/file/file_util.cc - ${ROCKSDB_SOURCE_DIR}/file/filename.cc - ${ROCKSDB_SOURCE_DIR}/file/random_access_file_reader.cc - ${ROCKSDB_SOURCE_DIR}/file/read_write_util.cc - ${ROCKSDB_SOURCE_DIR}/file/readahead_raf.cc - ${ROCKSDB_SOURCE_DIR}/file/sequence_file_reader.cc - ${ROCKSDB_SOURCE_DIR}/file/sst_file_manager_impl.cc - ${ROCKSDB_SOURCE_DIR}/file/writable_file_writer.cc - ${ROCKSDB_SOURCE_DIR}/logging/auto_roll_logger.cc - ${ROCKSDB_SOURCE_DIR}/logging/event_logger.cc - ${ROCKSDB_SOURCE_DIR}/logging/log_buffer.cc - ${ROCKSDB_SOURCE_DIR}/memory/arena.cc - ${ROCKSDB_SOURCE_DIR}/memory/concurrent_arena.cc - ${ROCKSDB_SOURCE_DIR}/memory/jemalloc_nodump_allocator.cc - ${ROCKSDB_SOURCE_DIR}/memory/memkind_kmem_allocator.cc - ${ROCKSDB_SOURCE_DIR}/memtable/alloc_tracker.cc - ${ROCKSDB_SOURCE_DIR}/memtable/hash_linklist_rep.cc - ${ROCKSDB_SOURCE_DIR}/memtable/hash_skiplist_rep.cc - ${ROCKSDB_SOURCE_DIR}/memtable/skiplistrep.cc - ${ROCKSDB_SOURCE_DIR}/memtable/vectorrep.cc - ${ROCKSDB_SOURCE_DIR}/memtable/write_buffer_manager.cc - ${ROCKSDB_SOURCE_DIR}/monitoring/histogram.cc - ${ROCKSDB_SOURCE_DIR}/monitoring/histogram_windowing.cc - ${ROCKSDB_SOURCE_DIR}/monitoring/in_memory_stats_history.cc - ${ROCKSDB_SOURCE_DIR}/monitoring/instrumented_mutex.cc - ${ROCKSDB_SOURCE_DIR}/monitoring/iostats_context.cc - ${ROCKSDB_SOURCE_DIR}/monitoring/perf_context.cc - ${ROCKSDB_SOURCE_DIR}/monitoring/perf_level.cc - ${ROCKSDB_SOURCE_DIR}/monitoring/persistent_stats_history.cc - ${ROCKSDB_SOURCE_DIR}/monitoring/statistics.cc - ${ROCKSDB_SOURCE_DIR}/monitoring/thread_status_impl.cc - ${ROCKSDB_SOURCE_DIR}/monitoring/thread_status_updater.cc - ${ROCKSDB_SOURCE_DIR}/monitoring/thread_status_util.cc - 
${ROCKSDB_SOURCE_DIR}/monitoring/thread_status_util_debug.cc - ${ROCKSDB_SOURCE_DIR}/options/cf_options.cc - ${ROCKSDB_SOURCE_DIR}/options/configurable.cc - ${ROCKSDB_SOURCE_DIR}/options/customizable.cc - ${ROCKSDB_SOURCE_DIR}/options/db_options.cc - ${ROCKSDB_SOURCE_DIR}/options/options.cc - ${ROCKSDB_SOURCE_DIR}/options/options_helper.cc - ${ROCKSDB_SOURCE_DIR}/options/options_parser.cc - ${ROCKSDB_SOURCE_DIR}/port/stack_trace.cc - ${ROCKSDB_SOURCE_DIR}/table/adaptive/adaptive_table_factory.cc - ${ROCKSDB_SOURCE_DIR}/table/block_based/binary_search_index_reader.cc - ${ROCKSDB_SOURCE_DIR}/table/block_based/block.cc - ${ROCKSDB_SOURCE_DIR}/table/block_based/block_based_filter_block.cc - ${ROCKSDB_SOURCE_DIR}/table/block_based/block_based_table_builder.cc - ${ROCKSDB_SOURCE_DIR}/table/block_based/block_based_table_factory.cc - ${ROCKSDB_SOURCE_DIR}/table/block_based/block_based_table_iterator.cc - ${ROCKSDB_SOURCE_DIR}/table/block_based/block_based_table_reader.cc - ${ROCKSDB_SOURCE_DIR}/table/block_based/block_builder.cc - ${ROCKSDB_SOURCE_DIR}/table/block_based/block_prefetcher.cc - ${ROCKSDB_SOURCE_DIR}/table/block_based/block_prefix_index.cc - ${ROCKSDB_SOURCE_DIR}/table/block_based/data_block_hash_index.cc - ${ROCKSDB_SOURCE_DIR}/table/block_based/data_block_footer.cc - ${ROCKSDB_SOURCE_DIR}/table/block_based/filter_block_reader_common.cc - ${ROCKSDB_SOURCE_DIR}/table/block_based/filter_policy.cc - ${ROCKSDB_SOURCE_DIR}/table/block_based/flush_block_policy.cc - ${ROCKSDB_SOURCE_DIR}/table/block_based/full_filter_block.cc - ${ROCKSDB_SOURCE_DIR}/table/block_based/hash_index_reader.cc - ${ROCKSDB_SOURCE_DIR}/table/block_based/index_builder.cc - ${ROCKSDB_SOURCE_DIR}/table/block_based/index_reader_common.cc - ${ROCKSDB_SOURCE_DIR}/table/block_based/parsed_full_filter_block.cc - ${ROCKSDB_SOURCE_DIR}/table/block_based/partitioned_filter_block.cc - ${ROCKSDB_SOURCE_DIR}/table/block_based/partitioned_index_iterator.cc - ${ROCKSDB_SOURCE_DIR}/table/block_based/partitioned_index_reader.cc - ${ROCKSDB_SOURCE_DIR}/table/block_based/reader_common.cc - ${ROCKSDB_SOURCE_DIR}/table/block_based/uncompression_dict_reader.cc - ${ROCKSDB_SOURCE_DIR}/table/block_fetcher.cc - ${ROCKSDB_SOURCE_DIR}/table/cuckoo/cuckoo_table_builder.cc - ${ROCKSDB_SOURCE_DIR}/table/cuckoo/cuckoo_table_factory.cc - ${ROCKSDB_SOURCE_DIR}/table/cuckoo/cuckoo_table_reader.cc - ${ROCKSDB_SOURCE_DIR}/table/format.cc - ${ROCKSDB_SOURCE_DIR}/table/get_context.cc - ${ROCKSDB_SOURCE_DIR}/table/iterator.cc - ${ROCKSDB_SOURCE_DIR}/table/merging_iterator.cc - ${ROCKSDB_SOURCE_DIR}/table/meta_blocks.cc - ${ROCKSDB_SOURCE_DIR}/table/persistent_cache_helper.cc - ${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_bloom.cc - ${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_builder.cc - ${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_factory.cc - ${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_index.cc - ${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_key_coding.cc - ${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_reader.cc - ${ROCKSDB_SOURCE_DIR}/table/sst_file_dumper.cc - ${ROCKSDB_SOURCE_DIR}/table/sst_file_reader.cc - ${ROCKSDB_SOURCE_DIR}/table/sst_file_writer.cc - ${ROCKSDB_SOURCE_DIR}/table/table_factory.cc - ${ROCKSDB_SOURCE_DIR}/table/table_properties.cc - ${ROCKSDB_SOURCE_DIR}/table/two_level_iterator.cc - ${ROCKSDB_SOURCE_DIR}/test_util/sync_point.cc - ${ROCKSDB_SOURCE_DIR}/test_util/sync_point_impl.cc - ${ROCKSDB_SOURCE_DIR}/test_util/testutil.cc - ${ROCKSDB_SOURCE_DIR}/test_util/transaction_test_util.cc - 
${ROCKSDB_SOURCE_DIR}/tools/block_cache_analyzer/block_cache_trace_analyzer.cc - ${ROCKSDB_SOURCE_DIR}/tools/dump/db_dump_tool.cc - ${ROCKSDB_SOURCE_DIR}/tools/io_tracer_parser_tool.cc - ${ROCKSDB_SOURCE_DIR}/tools/ldb_cmd.cc - ${ROCKSDB_SOURCE_DIR}/tools/ldb_tool.cc - ${ROCKSDB_SOURCE_DIR}/tools/sst_dump_tool.cc - ${ROCKSDB_SOURCE_DIR}/tools/trace_analyzer_tool.cc - ${ROCKSDB_SOURCE_DIR}/trace_replay/trace_replay.cc - ${ROCKSDB_SOURCE_DIR}/trace_replay/block_cache_tracer.cc - ${ROCKSDB_SOURCE_DIR}/trace_replay/io_tracer.cc - ${ROCKSDB_SOURCE_DIR}/util/coding.cc - ${ROCKSDB_SOURCE_DIR}/util/compaction_job_stats_impl.cc - ${ROCKSDB_SOURCE_DIR}/util/comparator.cc - ${ROCKSDB_SOURCE_DIR}/util/compression_context_cache.cc - ${ROCKSDB_SOURCE_DIR}/util/concurrent_task_limiter_impl.cc - ${ROCKSDB_SOURCE_DIR}/util/crc32c.cc - ${ROCKSDB_SOURCE_DIR}/util/dynamic_bloom.cc - ${ROCKSDB_SOURCE_DIR}/util/hash.cc - ${ROCKSDB_SOURCE_DIR}/util/murmurhash.cc - ${ROCKSDB_SOURCE_DIR}/util/random.cc - ${ROCKSDB_SOURCE_DIR}/util/rate_limiter.cc - ${ROCKSDB_SOURCE_DIR}/util/slice.cc - ${ROCKSDB_SOURCE_DIR}/util/file_checksum_helper.cc - ${ROCKSDB_SOURCE_DIR}/util/status.cc - ${ROCKSDB_SOURCE_DIR}/util/string_util.cc - ${ROCKSDB_SOURCE_DIR}/util/thread_local.cc - ${ROCKSDB_SOURCE_DIR}/util/threadpool_imp.cc - ${ROCKSDB_SOURCE_DIR}/util/xxhash.cc - ${ROCKSDB_SOURCE_DIR}/utilities/backupable/backupable_db.cc - ${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_compaction_filter.cc - ${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_db.cc - ${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_db_impl.cc - ${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_db_impl_filesnapshot.cc - ${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_dump_tool.cc - ${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_file.cc - ${ROCKSDB_SOURCE_DIR}/utilities/cassandra/cassandra_compaction_filter.cc - ${ROCKSDB_SOURCE_DIR}/utilities/cassandra/format.cc - ${ROCKSDB_SOURCE_DIR}/utilities/cassandra/merge_operator.cc - ${ROCKSDB_SOURCE_DIR}/utilities/checkpoint/checkpoint_impl.cc - ${ROCKSDB_SOURCE_DIR}/utilities/compaction_filters/remove_emptyvalue_compactionfilter.cc - ${ROCKSDB_SOURCE_DIR}/utilities/debug.cc - ${ROCKSDB_SOURCE_DIR}/utilities/env_mirror.cc - ${ROCKSDB_SOURCE_DIR}/utilities/env_timed.cc - ${ROCKSDB_SOURCE_DIR}/utilities/fault_injection_env.cc - ${ROCKSDB_SOURCE_DIR}/utilities/fault_injection_fs.cc - ${ROCKSDB_SOURCE_DIR}/utilities/leveldb_options/leveldb_options.cc - ${ROCKSDB_SOURCE_DIR}/utilities/memory/memory_util.cc - ${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/bytesxor.cc - ${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/max.cc - ${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/put.cc - ${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/sortlist.cc - ${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/string_append/stringappend.cc - ${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/string_append/stringappend2.cc - ${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/uint64add.cc - ${ROCKSDB_SOURCE_DIR}/utilities/object_registry.cc - ${ROCKSDB_SOURCE_DIR}/utilities/option_change_migration/option_change_migration.cc - ${ROCKSDB_SOURCE_DIR}/utilities/options/options_util.cc - ${ROCKSDB_SOURCE_DIR}/utilities/persistent_cache/block_cache_tier.cc - ${ROCKSDB_SOURCE_DIR}/utilities/persistent_cache/block_cache_tier_file.cc - ${ROCKSDB_SOURCE_DIR}/utilities/persistent_cache/block_cache_tier_metadata.cc - ${ROCKSDB_SOURCE_DIR}/utilities/persistent_cache/persistent_cache_tier.cc - ${ROCKSDB_SOURCE_DIR}/utilities/persistent_cache/volatile_tier_impl.cc - 
${ROCKSDB_SOURCE_DIR}/utilities/simulator_cache/cache_simulator.cc - ${ROCKSDB_SOURCE_DIR}/utilities/simulator_cache/sim_cache.cc - ${ROCKSDB_SOURCE_DIR}/utilities/table_properties_collectors/compact_on_deletion_collector.cc - ${ROCKSDB_SOURCE_DIR}/utilities/trace/file_trace_reader_writer.cc - ${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/lock_manager.cc - ${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/point/point_lock_tracker.cc - ${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/point/point_lock_manager.cc - ${ROCKSDB_SOURCE_DIR}/utilities/transactions/optimistic_transaction_db_impl.cc - ${ROCKSDB_SOURCE_DIR}/utilities/transactions/optimistic_transaction.cc - ${ROCKSDB_SOURCE_DIR}/utilities/transactions/pessimistic_transaction.cc - ${ROCKSDB_SOURCE_DIR}/utilities/transactions/pessimistic_transaction_db.cc - ${ROCKSDB_SOURCE_DIR}/utilities/transactions/snapshot_checker.cc - ${ROCKSDB_SOURCE_DIR}/utilities/transactions/transaction_base.cc - ${ROCKSDB_SOURCE_DIR}/utilities/transactions/transaction_db_mutex_impl.cc - ${ROCKSDB_SOURCE_DIR}/utilities/transactions/transaction_util.cc - ${ROCKSDB_SOURCE_DIR}/utilities/transactions/write_prepared_txn.cc - ${ROCKSDB_SOURCE_DIR}/utilities/transactions/write_prepared_txn_db.cc - ${ROCKSDB_SOURCE_DIR}/utilities/transactions/write_unprepared_txn.cc - ${ROCKSDB_SOURCE_DIR}/utilities/transactions/write_unprepared_txn_db.cc - ${ROCKSDB_SOURCE_DIR}/utilities/ttl/db_ttl_impl.cc - ${ROCKSDB_SOURCE_DIR}/utilities/write_batch_with_index/write_batch_with_index.cc - ${ROCKSDB_SOURCE_DIR}/utilities/write_batch_with_index/write_batch_with_index_internal.cc + "${ROCKSDB_SOURCE_DIR}/cache/cache.cc" + "${ROCKSDB_SOURCE_DIR}/cache/clock_cache.cc" + "${ROCKSDB_SOURCE_DIR}/cache/lru_cache.cc" + "${ROCKSDB_SOURCE_DIR}/cache/sharded_cache.cc" + "${ROCKSDB_SOURCE_DIR}/db/arena_wrapped_db_iter.cc" + "${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_addition.cc" + "${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_builder.cc" + "${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_cache.cc" + "${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_garbage.cc" + "${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_meta.cc" + "${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_reader.cc" + "${ROCKSDB_SOURCE_DIR}/db/blob/blob_log_format.cc" + "${ROCKSDB_SOURCE_DIR}/db/blob/blob_log_sequential_reader.cc" + "${ROCKSDB_SOURCE_DIR}/db/blob/blob_log_writer.cc" + "${ROCKSDB_SOURCE_DIR}/db/builder.cc" + "${ROCKSDB_SOURCE_DIR}/db/c.cc" + "${ROCKSDB_SOURCE_DIR}/db/column_family.cc" + "${ROCKSDB_SOURCE_DIR}/db/compacted_db_impl.cc" + "${ROCKSDB_SOURCE_DIR}/db/compaction/compaction.cc" + "${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_iterator.cc" + "${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_picker.cc" + "${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_job.cc" + "${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_picker_fifo.cc" + "${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_picker_level.cc" + "${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_picker_universal.cc" + "${ROCKSDB_SOURCE_DIR}/db/compaction/sst_partitioner.cc" + "${ROCKSDB_SOURCE_DIR}/db/convenience.cc" + "${ROCKSDB_SOURCE_DIR}/db/db_filesnapshot.cc" + "${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl.cc" + "${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_write.cc" + "${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_compaction_flush.cc" + "${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_files.cc" + "${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_open.cc" + "${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_debug.cc" + "${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_experimental.cc" + "${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_readonly.cc" 
+ "${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_secondary.cc" + "${ROCKSDB_SOURCE_DIR}/db/db_info_dumper.cc" + "${ROCKSDB_SOURCE_DIR}/db/db_iter.cc" + "${ROCKSDB_SOURCE_DIR}/db/dbformat.cc" + "${ROCKSDB_SOURCE_DIR}/db/error_handler.cc" + "${ROCKSDB_SOURCE_DIR}/db/event_helpers.cc" + "${ROCKSDB_SOURCE_DIR}/db/experimental.cc" + "${ROCKSDB_SOURCE_DIR}/db/external_sst_file_ingestion_job.cc" + "${ROCKSDB_SOURCE_DIR}/db/file_indexer.cc" + "${ROCKSDB_SOURCE_DIR}/db/flush_job.cc" + "${ROCKSDB_SOURCE_DIR}/db/flush_scheduler.cc" + "${ROCKSDB_SOURCE_DIR}/db/forward_iterator.cc" + "${ROCKSDB_SOURCE_DIR}/db/import_column_family_job.cc" + "${ROCKSDB_SOURCE_DIR}/db/internal_stats.cc" + "${ROCKSDB_SOURCE_DIR}/db/logs_with_prep_tracker.cc" + "${ROCKSDB_SOURCE_DIR}/db/log_reader.cc" + "${ROCKSDB_SOURCE_DIR}/db/log_writer.cc" + "${ROCKSDB_SOURCE_DIR}/db/malloc_stats.cc" + "${ROCKSDB_SOURCE_DIR}/db/memtable.cc" + "${ROCKSDB_SOURCE_DIR}/db/memtable_list.cc" + "${ROCKSDB_SOURCE_DIR}/db/merge_helper.cc" + "${ROCKSDB_SOURCE_DIR}/db/merge_operator.cc" + "${ROCKSDB_SOURCE_DIR}/db/output_validator.cc" + "${ROCKSDB_SOURCE_DIR}/db/periodic_work_scheduler.cc" + "${ROCKSDB_SOURCE_DIR}/db/range_del_aggregator.cc" + "${ROCKSDB_SOURCE_DIR}/db/range_tombstone_fragmenter.cc" + "${ROCKSDB_SOURCE_DIR}/db/repair.cc" + "${ROCKSDB_SOURCE_DIR}/db/snapshot_impl.cc" + "${ROCKSDB_SOURCE_DIR}/db/table_cache.cc" + "${ROCKSDB_SOURCE_DIR}/db/table_properties_collector.cc" + "${ROCKSDB_SOURCE_DIR}/db/transaction_log_impl.cc" + "${ROCKSDB_SOURCE_DIR}/db/trim_history_scheduler.cc" + "${ROCKSDB_SOURCE_DIR}/db/version_builder.cc" + "${ROCKSDB_SOURCE_DIR}/db/version_edit.cc" + "${ROCKSDB_SOURCE_DIR}/db/version_edit_handler.cc" + "${ROCKSDB_SOURCE_DIR}/db/version_set.cc" + "${ROCKSDB_SOURCE_DIR}/db/wal_edit.cc" + "${ROCKSDB_SOURCE_DIR}/db/wal_manager.cc" + "${ROCKSDB_SOURCE_DIR}/db/write_batch.cc" + "${ROCKSDB_SOURCE_DIR}/db/write_batch_base.cc" + "${ROCKSDB_SOURCE_DIR}/db/write_controller.cc" + "${ROCKSDB_SOURCE_DIR}/db/write_thread.cc" + "${ROCKSDB_SOURCE_DIR}/env/env.cc" + "${ROCKSDB_SOURCE_DIR}/env/env_chroot.cc" + "${ROCKSDB_SOURCE_DIR}/env/env_encryption.cc" + "${ROCKSDB_SOURCE_DIR}/env/env_hdfs.cc" + "${ROCKSDB_SOURCE_DIR}/env/file_system.cc" + "${ROCKSDB_SOURCE_DIR}/env/file_system_tracer.cc" + "${ROCKSDB_SOURCE_DIR}/env/mock_env.cc" + "${ROCKSDB_SOURCE_DIR}/file/delete_scheduler.cc" + "${ROCKSDB_SOURCE_DIR}/file/file_prefetch_buffer.cc" + "${ROCKSDB_SOURCE_DIR}/file/file_util.cc" + "${ROCKSDB_SOURCE_DIR}/file/filename.cc" + "${ROCKSDB_SOURCE_DIR}/file/random_access_file_reader.cc" + "${ROCKSDB_SOURCE_DIR}/file/read_write_util.cc" + "${ROCKSDB_SOURCE_DIR}/file/readahead_raf.cc" + "${ROCKSDB_SOURCE_DIR}/file/sequence_file_reader.cc" + "${ROCKSDB_SOURCE_DIR}/file/sst_file_manager_impl.cc" + "${ROCKSDB_SOURCE_DIR}/file/writable_file_writer.cc" + "${ROCKSDB_SOURCE_DIR}/logging/auto_roll_logger.cc" + "${ROCKSDB_SOURCE_DIR}/logging/event_logger.cc" + "${ROCKSDB_SOURCE_DIR}/logging/log_buffer.cc" + "${ROCKSDB_SOURCE_DIR}/memory/arena.cc" + "${ROCKSDB_SOURCE_DIR}/memory/concurrent_arena.cc" + "${ROCKSDB_SOURCE_DIR}/memory/jemalloc_nodump_allocator.cc" + "${ROCKSDB_SOURCE_DIR}/memory/memkind_kmem_allocator.cc" + "${ROCKSDB_SOURCE_DIR}/memtable/alloc_tracker.cc" + "${ROCKSDB_SOURCE_DIR}/memtable/hash_linklist_rep.cc" + "${ROCKSDB_SOURCE_DIR}/memtable/hash_skiplist_rep.cc" + "${ROCKSDB_SOURCE_DIR}/memtable/skiplistrep.cc" + "${ROCKSDB_SOURCE_DIR}/memtable/vectorrep.cc" + "${ROCKSDB_SOURCE_DIR}/memtable/write_buffer_manager.cc" + 
"${ROCKSDB_SOURCE_DIR}/monitoring/histogram.cc" + "${ROCKSDB_SOURCE_DIR}/monitoring/histogram_windowing.cc" + "${ROCKSDB_SOURCE_DIR}/monitoring/in_memory_stats_history.cc" + "${ROCKSDB_SOURCE_DIR}/monitoring/instrumented_mutex.cc" + "${ROCKSDB_SOURCE_DIR}/monitoring/iostats_context.cc" + "${ROCKSDB_SOURCE_DIR}/monitoring/perf_context.cc" + "${ROCKSDB_SOURCE_DIR}/monitoring/perf_level.cc" + "${ROCKSDB_SOURCE_DIR}/monitoring/persistent_stats_history.cc" + "${ROCKSDB_SOURCE_DIR}/monitoring/statistics.cc" + "${ROCKSDB_SOURCE_DIR}/monitoring/thread_status_impl.cc" + "${ROCKSDB_SOURCE_DIR}/monitoring/thread_status_updater.cc" + "${ROCKSDB_SOURCE_DIR}/monitoring/thread_status_util.cc" + "${ROCKSDB_SOURCE_DIR}/monitoring/thread_status_util_debug.cc" + "${ROCKSDB_SOURCE_DIR}/options/cf_options.cc" + "${ROCKSDB_SOURCE_DIR}/options/configurable.cc" + "${ROCKSDB_SOURCE_DIR}/options/customizable.cc" + "${ROCKSDB_SOURCE_DIR}/options/db_options.cc" + "${ROCKSDB_SOURCE_DIR}/options/options.cc" + "${ROCKSDB_SOURCE_DIR}/options/options_helper.cc" + "${ROCKSDB_SOURCE_DIR}/options/options_parser.cc" + "${ROCKSDB_SOURCE_DIR}/port/stack_trace.cc" + "${ROCKSDB_SOURCE_DIR}/table/adaptive/adaptive_table_factory.cc" + "${ROCKSDB_SOURCE_DIR}/table/block_based/binary_search_index_reader.cc" + "${ROCKSDB_SOURCE_DIR}/table/block_based/block.cc" + "${ROCKSDB_SOURCE_DIR}/table/block_based/block_based_filter_block.cc" + "${ROCKSDB_SOURCE_DIR}/table/block_based/block_based_table_builder.cc" + "${ROCKSDB_SOURCE_DIR}/table/block_based/block_based_table_factory.cc" + "${ROCKSDB_SOURCE_DIR}/table/block_based/block_based_table_iterator.cc" + "${ROCKSDB_SOURCE_DIR}/table/block_based/block_based_table_reader.cc" + "${ROCKSDB_SOURCE_DIR}/table/block_based/block_builder.cc" + "${ROCKSDB_SOURCE_DIR}/table/block_based/block_prefetcher.cc" + "${ROCKSDB_SOURCE_DIR}/table/block_based/block_prefix_index.cc" + "${ROCKSDB_SOURCE_DIR}/table/block_based/data_block_hash_index.cc" + "${ROCKSDB_SOURCE_DIR}/table/block_based/data_block_footer.cc" + "${ROCKSDB_SOURCE_DIR}/table/block_based/filter_block_reader_common.cc" + "${ROCKSDB_SOURCE_DIR}/table/block_based/filter_policy.cc" + "${ROCKSDB_SOURCE_DIR}/table/block_based/flush_block_policy.cc" + "${ROCKSDB_SOURCE_DIR}/table/block_based/full_filter_block.cc" + "${ROCKSDB_SOURCE_DIR}/table/block_based/hash_index_reader.cc" + "${ROCKSDB_SOURCE_DIR}/table/block_based/index_builder.cc" + "${ROCKSDB_SOURCE_DIR}/table/block_based/index_reader_common.cc" + "${ROCKSDB_SOURCE_DIR}/table/block_based/parsed_full_filter_block.cc" + "${ROCKSDB_SOURCE_DIR}/table/block_based/partitioned_filter_block.cc" + "${ROCKSDB_SOURCE_DIR}/table/block_based/partitioned_index_iterator.cc" + "${ROCKSDB_SOURCE_DIR}/table/block_based/partitioned_index_reader.cc" + "${ROCKSDB_SOURCE_DIR}/table/block_based/reader_common.cc" + "${ROCKSDB_SOURCE_DIR}/table/block_based/uncompression_dict_reader.cc" + "${ROCKSDB_SOURCE_DIR}/table/block_fetcher.cc" + "${ROCKSDB_SOURCE_DIR}/table/cuckoo/cuckoo_table_builder.cc" + "${ROCKSDB_SOURCE_DIR}/table/cuckoo/cuckoo_table_factory.cc" + "${ROCKSDB_SOURCE_DIR}/table/cuckoo/cuckoo_table_reader.cc" + "${ROCKSDB_SOURCE_DIR}/table/format.cc" + "${ROCKSDB_SOURCE_DIR}/table/get_context.cc" + "${ROCKSDB_SOURCE_DIR}/table/iterator.cc" + "${ROCKSDB_SOURCE_DIR}/table/merging_iterator.cc" + "${ROCKSDB_SOURCE_DIR}/table/meta_blocks.cc" + "${ROCKSDB_SOURCE_DIR}/table/persistent_cache_helper.cc" + "${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_bloom.cc" + 
"${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_builder.cc" + "${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_factory.cc" + "${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_index.cc" + "${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_key_coding.cc" + "${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_reader.cc" + "${ROCKSDB_SOURCE_DIR}/table/sst_file_dumper.cc" + "${ROCKSDB_SOURCE_DIR}/table/sst_file_reader.cc" + "${ROCKSDB_SOURCE_DIR}/table/sst_file_writer.cc" + "${ROCKSDB_SOURCE_DIR}/table/table_factory.cc" + "${ROCKSDB_SOURCE_DIR}/table/table_properties.cc" + "${ROCKSDB_SOURCE_DIR}/table/two_level_iterator.cc" + "${ROCKSDB_SOURCE_DIR}/test_util/sync_point.cc" + "${ROCKSDB_SOURCE_DIR}/test_util/sync_point_impl.cc" + "${ROCKSDB_SOURCE_DIR}/test_util/testutil.cc" + "${ROCKSDB_SOURCE_DIR}/test_util/transaction_test_util.cc" + "${ROCKSDB_SOURCE_DIR}/tools/block_cache_analyzer/block_cache_trace_analyzer.cc" + "${ROCKSDB_SOURCE_DIR}/tools/dump/db_dump_tool.cc" + "${ROCKSDB_SOURCE_DIR}/tools/io_tracer_parser_tool.cc" + "${ROCKSDB_SOURCE_DIR}/tools/ldb_cmd.cc" + "${ROCKSDB_SOURCE_DIR}/tools/ldb_tool.cc" + "${ROCKSDB_SOURCE_DIR}/tools/sst_dump_tool.cc" + "${ROCKSDB_SOURCE_DIR}/tools/trace_analyzer_tool.cc" + "${ROCKSDB_SOURCE_DIR}/trace_replay/trace_replay.cc" + "${ROCKSDB_SOURCE_DIR}/trace_replay/block_cache_tracer.cc" + "${ROCKSDB_SOURCE_DIR}/trace_replay/io_tracer.cc" + "${ROCKSDB_SOURCE_DIR}/util/coding.cc" + "${ROCKSDB_SOURCE_DIR}/util/compaction_job_stats_impl.cc" + "${ROCKSDB_SOURCE_DIR}/util/comparator.cc" + "${ROCKSDB_SOURCE_DIR}/util/compression_context_cache.cc" + "${ROCKSDB_SOURCE_DIR}/util/concurrent_task_limiter_impl.cc" + "${ROCKSDB_SOURCE_DIR}/util/crc32c.cc" + "${ROCKSDB_SOURCE_DIR}/util/dynamic_bloom.cc" + "${ROCKSDB_SOURCE_DIR}/util/hash.cc" + "${ROCKSDB_SOURCE_DIR}/util/murmurhash.cc" + "${ROCKSDB_SOURCE_DIR}/util/random.cc" + "${ROCKSDB_SOURCE_DIR}/util/rate_limiter.cc" + "${ROCKSDB_SOURCE_DIR}/util/slice.cc" + "${ROCKSDB_SOURCE_DIR}/util/file_checksum_helper.cc" + "${ROCKSDB_SOURCE_DIR}/util/status.cc" + "${ROCKSDB_SOURCE_DIR}/util/string_util.cc" + "${ROCKSDB_SOURCE_DIR}/util/thread_local.cc" + "${ROCKSDB_SOURCE_DIR}/util/threadpool_imp.cc" + "${ROCKSDB_SOURCE_DIR}/util/xxhash.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/backupable/backupable_db.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_compaction_filter.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_db.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_db_impl.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_db_impl_filesnapshot.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_dump_tool.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_file.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/cassandra/cassandra_compaction_filter.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/cassandra/format.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/cassandra/merge_operator.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/checkpoint/checkpoint_impl.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/compaction_filters/remove_emptyvalue_compactionfilter.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/debug.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/env_mirror.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/env_timed.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/fault_injection_env.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/fault_injection_fs.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/leveldb_options/leveldb_options.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/memory/memory_util.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/bytesxor.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/max.cc" + 
"${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/put.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/sortlist.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/string_append/stringappend.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/string_append/stringappend2.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/uint64add.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/object_registry.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/option_change_migration/option_change_migration.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/options/options_util.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/persistent_cache/block_cache_tier.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/persistent_cache/block_cache_tier_file.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/persistent_cache/block_cache_tier_metadata.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/persistent_cache/persistent_cache_tier.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/persistent_cache/volatile_tier_impl.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/simulator_cache/cache_simulator.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/simulator_cache/sim_cache.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/table_properties_collectors/compact_on_deletion_collector.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/trace/file_trace_reader_writer.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/lock_manager.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/point/point_lock_tracker.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/point/point_lock_manager.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/transactions/optimistic_transaction_db_impl.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/transactions/optimistic_transaction.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/transactions/pessimistic_transaction.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/transactions/pessimistic_transaction_db.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/transactions/snapshot_checker.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/transactions/transaction_base.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/transactions/transaction_db_mutex_impl.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/transactions/transaction_util.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/transactions/write_prepared_txn.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/transactions/write_prepared_txn_db.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/transactions/write_unprepared_txn.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/transactions/write_unprepared_txn_db.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/ttl/db_ttl_impl.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/write_batch_with_index/write_batch_with_index.cc" + "${ROCKSDB_SOURCE_DIR}/utilities/write_batch_with_index/write_batch_with_index_internal.cc" $) if(HAVE_SSE42 AND NOT MSVC) set_source_files_properties( - ${ROCKSDB_SOURCE_DIR}/util/crc32c.cc + "${ROCKSDB_SOURCE_DIR}/util/crc32c.cc" PROPERTIES COMPILE_FLAGS "-msse4.2 -mpclmul") endif() if(CMAKE_SYSTEM_PROCESSOR MATCHES "^(powerpc|ppc)64") list(APPEND SOURCES - ${ROCKSDB_SOURCE_DIR}/util/crc32c_ppc.c - ${ROCKSDB_SOURCE_DIR}/util/crc32c_ppc_asm.S) + "${ROCKSDB_SOURCE_DIR}/util/crc32c_ppc.c" + "${ROCKSDB_SOURCE_DIR}/util/crc32c_ppc_asm.S") endif(CMAKE_SYSTEM_PROCESSOR MATCHES "^(powerpc|ppc)64") if(HAS_ARMV8_CRC) list(APPEND SOURCES - ${ROCKSDB_SOURCE_DIR}/util/crc32c_arm64.cc) + "${ROCKSDB_SOURCE_DIR}/util/crc32c_arm64.cc") endif(HAS_ARMV8_CRC) -if(WIN32) - list(APPEND SOURCES - ${ROCKSDB_SOURCE_DIR}/port/win/io_win.cc - ${ROCKSDB_SOURCE_DIR}/port/win/env_win.cc - ${ROCKSDB_SOURCE_DIR}/port/win/env_default.cc - ${ROCKSDB_SOURCE_DIR}/port/win/port_win.cc - ${ROCKSDB_SOURCE_DIR}/port/win/win_logger.cc) - if(NOT MINGW) - # Mingw only 
-    # Mingw only supports std::thread when using
-    # posix threads.
-    list(APPEND SOURCES
-      ${ROCKSDB_SOURCE_DIR}/port/win/win_thread.cc)
-  endif()
-if(WITH_XPRESS)
-  list(APPEND SOURCES
-    ${ROCKSDB_SOURCE_DIR}/port/win/xpress_win.cc)
-endif()
-
-if(WITH_JEMALLOC)
-  list(APPEND SOURCES
-    ${ROCKSDB_SOURCE_DIR}/port/win/win_jemalloc.cc)
-endif()
-
-else()
-  list(APPEND SOURCES
-    ${ROCKSDB_SOURCE_DIR}/port/port_posix.cc
-    ${ROCKSDB_SOURCE_DIR}/env/env_posix.cc
-    ${ROCKSDB_SOURCE_DIR}/env/fs_posix.cc
-    ${ROCKSDB_SOURCE_DIR}/env/io_posix.cc)
-endif()
+list(APPEND SOURCES
+  "${ROCKSDB_SOURCE_DIR}/port/port_posix.cc"
+  "${ROCKSDB_SOURCE_DIR}/env/env_posix.cc"
+  "${ROCKSDB_SOURCE_DIR}/env/fs_posix.cc"
+  "${ROCKSDB_SOURCE_DIR}/env/io_posix.cc")

 if(WITH_FOLLY_DISTRIBUTED_MUTEX)
   list(APPEND SOURCES
-    ${ROCKSDB_SOURCE_DIR}/third-party/folly/folly/detail/Futex.cpp
-    ${ROCKSDB_SOURCE_DIR}/third-party/folly/folly/synchronization/AtomicNotification.cpp
-    ${ROCKSDB_SOURCE_DIR}/third-party/folly/folly/synchronization/DistributedMutex.cpp
-    ${ROCKSDB_SOURCE_DIR}/third-party/folly/folly/synchronization/ParkingLot.cpp
-    ${ROCKSDB_SOURCE_DIR}/third-party/folly/folly/synchronization/WaitOptions.cpp)
+    "${ROCKSDB_SOURCE_DIR}/third-party/folly/folly/detail/Futex.cpp"
+    "${ROCKSDB_SOURCE_DIR}/third-party/folly/folly/synchronization/AtomicNotification.cpp"
+    "${ROCKSDB_SOURCE_DIR}/third-party/folly/folly/synchronization/DistributedMutex.cpp"
+    "${ROCKSDB_SOURCE_DIR}/third-party/folly/folly/synchronization/ParkingLot.cpp"
+    "${ROCKSDB_SOURCE_DIR}/third-party/folly/folly/synchronization/WaitOptions.cpp")
 endif()

 set(ROCKSDB_STATIC_LIB rocksdb)

-if(WIN32)
-  set(SYSTEM_LIBS ${SYSTEM_LIBS} shlwapi.lib rpcrt4.lib)
-else()
-  set(SYSTEM_LIBS ${CMAKE_THREAD_LIBS_INIT})
-endif()
-
 add_library(${ROCKSDB_STATIC_LIB} STATIC ${SOURCES})
 target_link_libraries(${ROCKSDB_STATIC_LIB} PRIVATE
   ${THIRDPARTY_LIBS} ${SYSTEM_LIBS})
diff --git a/contrib/rocksdb-cmake/rocksdb_build_version.cc b/contrib/rocksdb-cmake/rocksdb_build_version.cc
new file mode 100644
index 00000000000..8697652ae9f
--- /dev/null
+++ b/contrib/rocksdb-cmake/rocksdb_build_version.cc
@@ -0,0 +1,3 @@
+const char* rocksdb_build_git_sha = "rocksdb_build_git_sha:0";
+const char* rocksdb_build_git_date = "rocksdb_build_git_date:2000-01-01";
+const char* rocksdb_build_compile_date = "2000-01-01";
diff --git a/contrib/simdjson b/contrib/simdjson
index 3190d66a490..95b4870e20b 160000
--- a/contrib/simdjson
+++ b/contrib/simdjson
@@ -1 +1 @@
-Subproject commit 3190d66a49059092a1753dc35595923debfc1698
+Subproject commit 95b4870e20be5f97d9dcf63b23b1c6f520c366c1
diff --git a/contrib/simdjson-cmake/CMakeLists.txt b/contrib/simdjson-cmake/CMakeLists.txt
index 2fb60b905da..d3bcf6c046c 100644
--- a/contrib/simdjson-cmake/CMakeLists.txt
+++ b/contrib/simdjson-cmake/CMakeLists.txt
@@ -1,6 +1,6 @@
 set(SIMDJSON_INCLUDE_DIR "${ClickHouse_SOURCE_DIR}/contrib/simdjson/include")
 set(SIMDJSON_SRC_DIR "${ClickHouse_SOURCE_DIR}/contrib/simdjson/src")
-set(SIMDJSON_SRC ${SIMDJSON_SRC_DIR}/simdjson.cpp)
+set(SIMDJSON_SRC "${SIMDJSON_SRC_DIR}/simdjson.cpp")

 add_library(simdjson ${SIMDJSON_SRC})
 target_include_directories(simdjson SYSTEM PUBLIC "${SIMDJSON_INCLUDE_DIR}" PRIVATE "${SIMDJSON_SRC_DIR}")
diff --git a/contrib/stats-cmake/CMakeLists.txt b/contrib/stats-cmake/CMakeLists.txt
index a159e85a0e3..8279e49c3f0 100644
--- a/contrib/stats-cmake/CMakeLists.txt
+++ b/contrib/stats-cmake/CMakeLists.txt
@@ -1,7 +1,7 @@
 # The stats is a header-only library of probability density functions,
 # cumulative distribution functions, quantile functions, and random sampling methods.
-set(STATS_INCLUDE_DIR ${ClickHouse_SOURCE_DIR}/contrib/stats/include)
-set(GCEM_INCLUDE_DIR ${ClickHouse_SOURCE_DIR}/contrib/gcem/include)
+set(STATS_INCLUDE_DIR "${ClickHouse_SOURCE_DIR}/contrib/stats/include")
+set(GCEM_INCLUDE_DIR "${ClickHouse_SOURCE_DIR}/contrib/gcem/include")

 add_library(stats INTERFACE)
diff --git a/contrib/unixodbc-cmake/CMakeLists.txt b/contrib/unixodbc-cmake/CMakeLists.txt
index c971c4bdd89..c154533739c 100644
--- a/contrib/unixodbc-cmake/CMakeLists.txt
+++ b/contrib/unixodbc-cmake/CMakeLists.txt
@@ -2,7 +2,7 @@ if (NOT USE_INTERNAL_ODBC_LIBRARY)
     return()
 endif()

-set (LIBRARY_DIR ${ClickHouse_SOURCE_DIR}/contrib/unixodbc)
+set (LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/unixodbc")

 # ltdl

@@ -10,14 +10,14 @@ set (SRCS_LTDL
     # This file is generated by 'libtool' inside libltdl directory and then removed.
     linux_x86_64/libltdl/libltdlcS.c

-    ${LIBRARY_DIR}/libltdl/lt__alloc.c
-    ${LIBRARY_DIR}/libltdl/lt__strl.c
-    ${LIBRARY_DIR}/libltdl/ltdl.c
-    ${LIBRARY_DIR}/libltdl/lt_dlloader.c
-    ${LIBRARY_DIR}/libltdl/slist.c
-    ${LIBRARY_DIR}/libltdl/lt_error.c
-    ${LIBRARY_DIR}/libltdl/loaders/dlopen.c
-    ${LIBRARY_DIR}/libltdl/loaders/preopen.c
+    "${LIBRARY_DIR}/libltdl/lt__alloc.c"
+    "${LIBRARY_DIR}/libltdl/lt__strl.c"
+    "${LIBRARY_DIR}/libltdl/ltdl.c"
+    "${LIBRARY_DIR}/libltdl/lt_dlloader.c"
+    "${LIBRARY_DIR}/libltdl/slist.c"
+    "${LIBRARY_DIR}/libltdl/lt_error.c"
+    "${LIBRARY_DIR}/libltdl/loaders/dlopen.c"
+    "${LIBRARY_DIR}/libltdl/loaders/preopen.c"
 )

 add_library (ltdl ${SRCS_LTDL})
@@ -26,8 +26,8 @@ target_include_directories(ltdl
     PRIVATE
         linux_x86_64/libltdl
     PUBLIC
-        ${LIBRARY_DIR}/libltdl
-        ${LIBRARY_DIR}/libltdl/libltdl
+        "${LIBRARY_DIR}/libltdl"
+        "${LIBRARY_DIR}/libltdl/libltdl"
 )
 target_compile_definitions(ltdl PRIVATE -DHAVE_CONFIG_H -DLTDL -DLTDLOPEN=libltdlc)
 target_compile_options(ltdl PRIVATE -Wno-constant-logical-operand -Wno-unknown-warning-option -O2)
@@ -35,238 +35,238 @@ target_compile_options(ltdl PRIVATE -Wno-constant-logical-operand -Wno-unknown-w
 # odbc

 set (SRCS
-    ${LIBRARY_DIR}/DriverManager/__attribute.c
-    ${LIBRARY_DIR}/DriverManager/__connection.c
-    ${LIBRARY_DIR}/DriverManager/__handles.c
-    ${LIBRARY_DIR}/DriverManager/__info.c
-    ${LIBRARY_DIR}/DriverManager/__stats.c
-    ${LIBRARY_DIR}/DriverManager/SQLAllocConnect.c
-    ${LIBRARY_DIR}/DriverManager/SQLAllocEnv.c
-    ${LIBRARY_DIR}/DriverManager/SQLAllocHandle.c
-    ${LIBRARY_DIR}/DriverManager/SQLAllocHandleStd.c
-    ${LIBRARY_DIR}/DriverManager/SQLAllocStmt.c
-    ${LIBRARY_DIR}/DriverManager/SQLBindCol.c
-    ${LIBRARY_DIR}/DriverManager/SQLBindParam.c
-    ${LIBRARY_DIR}/DriverManager/SQLBindParameter.c
-    ${LIBRARY_DIR}/DriverManager/SQLBrowseConnect.c
-    ${LIBRARY_DIR}/DriverManager/SQLBrowseConnectW.c
-    ${LIBRARY_DIR}/DriverManager/SQLBulkOperations.c
-    ${LIBRARY_DIR}/DriverManager/SQLCancel.c
-    ${LIBRARY_DIR}/DriverManager/SQLCancelHandle.c
-    ${LIBRARY_DIR}/DriverManager/SQLCloseCursor.c
-    ${LIBRARY_DIR}/DriverManager/SQLColAttribute.c
-    ${LIBRARY_DIR}/DriverManager/SQLColAttributes.c
-    ${LIBRARY_DIR}/DriverManager/SQLColAttributesW.c
-    ${LIBRARY_DIR}/DriverManager/SQLColAttributeW.c
-    ${LIBRARY_DIR}/DriverManager/SQLColumnPrivileges.c
-    ${LIBRARY_DIR}/DriverManager/SQLColumnPrivilegesW.c
-    ${LIBRARY_DIR}/DriverManager/SQLColumns.c
-    ${LIBRARY_DIR}/DriverManager/SQLColumnsW.c
-    ${LIBRARY_DIR}/DriverManager/SQLConnect.c
-    ${LIBRARY_DIR}/DriverManager/SQLConnectW.c
-    ${LIBRARY_DIR}/DriverManager/SQLCopyDesc.c
-
${LIBRARY_DIR}/DriverManager/SQLDataSources.c - ${LIBRARY_DIR}/DriverManager/SQLDataSourcesW.c - ${LIBRARY_DIR}/DriverManager/SQLDescribeCol.c - ${LIBRARY_DIR}/DriverManager/SQLDescribeColW.c - ${LIBRARY_DIR}/DriverManager/SQLDescribeParam.c - ${LIBRARY_DIR}/DriverManager/SQLDisconnect.c - ${LIBRARY_DIR}/DriverManager/SQLDriverConnect.c - ${LIBRARY_DIR}/DriverManager/SQLDriverConnectW.c - ${LIBRARY_DIR}/DriverManager/SQLDrivers.c - ${LIBRARY_DIR}/DriverManager/SQLDriversW.c - ${LIBRARY_DIR}/DriverManager/SQLEndTran.c - ${LIBRARY_DIR}/DriverManager/SQLError.c - ${LIBRARY_DIR}/DriverManager/SQLErrorW.c - ${LIBRARY_DIR}/DriverManager/SQLExecDirect.c - ${LIBRARY_DIR}/DriverManager/SQLExecDirectW.c - ${LIBRARY_DIR}/DriverManager/SQLExecute.c - ${LIBRARY_DIR}/DriverManager/SQLExtendedFetch.c - ${LIBRARY_DIR}/DriverManager/SQLFetch.c - ${LIBRARY_DIR}/DriverManager/SQLFetchScroll.c - ${LIBRARY_DIR}/DriverManager/SQLForeignKeys.c - ${LIBRARY_DIR}/DriverManager/SQLForeignKeysW.c - ${LIBRARY_DIR}/DriverManager/SQLFreeConnect.c - ${LIBRARY_DIR}/DriverManager/SQLFreeEnv.c - ${LIBRARY_DIR}/DriverManager/SQLFreeHandle.c - ${LIBRARY_DIR}/DriverManager/SQLFreeStmt.c - ${LIBRARY_DIR}/DriverManager/SQLGetConnectAttr.c - ${LIBRARY_DIR}/DriverManager/SQLGetConnectAttrW.c - ${LIBRARY_DIR}/DriverManager/SQLGetConnectOption.c - ${LIBRARY_DIR}/DriverManager/SQLGetConnectOptionW.c - ${LIBRARY_DIR}/DriverManager/SQLGetCursorName.c - ${LIBRARY_DIR}/DriverManager/SQLGetCursorNameW.c - ${LIBRARY_DIR}/DriverManager/SQLGetData.c - ${LIBRARY_DIR}/DriverManager/SQLGetDescField.c - ${LIBRARY_DIR}/DriverManager/SQLGetDescFieldW.c - ${LIBRARY_DIR}/DriverManager/SQLGetDescRec.c - ${LIBRARY_DIR}/DriverManager/SQLGetDescRecW.c - ${LIBRARY_DIR}/DriverManager/SQLGetDiagField.c - ${LIBRARY_DIR}/DriverManager/SQLGetDiagFieldW.c - ${LIBRARY_DIR}/DriverManager/SQLGetDiagRec.c - ${LIBRARY_DIR}/DriverManager/SQLGetDiagRecW.c - ${LIBRARY_DIR}/DriverManager/SQLGetEnvAttr.c - ${LIBRARY_DIR}/DriverManager/SQLGetFunctions.c - ${LIBRARY_DIR}/DriverManager/SQLGetInfo.c - ${LIBRARY_DIR}/DriverManager/SQLGetInfoW.c - ${LIBRARY_DIR}/DriverManager/SQLGetStmtAttr.c - ${LIBRARY_DIR}/DriverManager/SQLGetStmtAttrW.c - ${LIBRARY_DIR}/DriverManager/SQLGetStmtOption.c - ${LIBRARY_DIR}/DriverManager/SQLGetTypeInfo.c - ${LIBRARY_DIR}/DriverManager/SQLGetTypeInfoW.c - ${LIBRARY_DIR}/DriverManager/SQLMoreResults.c - ${LIBRARY_DIR}/DriverManager/SQLNativeSql.c - ${LIBRARY_DIR}/DriverManager/SQLNativeSqlW.c - ${LIBRARY_DIR}/DriverManager/SQLNumParams.c - ${LIBRARY_DIR}/DriverManager/SQLNumResultCols.c - ${LIBRARY_DIR}/DriverManager/SQLParamData.c - ${LIBRARY_DIR}/DriverManager/SQLParamOptions.c - ${LIBRARY_DIR}/DriverManager/SQLPrepare.c - ${LIBRARY_DIR}/DriverManager/SQLPrepareW.c - ${LIBRARY_DIR}/DriverManager/SQLPrimaryKeys.c - ${LIBRARY_DIR}/DriverManager/SQLPrimaryKeysW.c - ${LIBRARY_DIR}/DriverManager/SQLProcedureColumns.c - ${LIBRARY_DIR}/DriverManager/SQLProcedureColumnsW.c - ${LIBRARY_DIR}/DriverManager/SQLProcedures.c - ${LIBRARY_DIR}/DriverManager/SQLProceduresW.c - ${LIBRARY_DIR}/DriverManager/SQLPutData.c - ${LIBRARY_DIR}/DriverManager/SQLRowCount.c - ${LIBRARY_DIR}/DriverManager/SQLSetConnectAttr.c - ${LIBRARY_DIR}/DriverManager/SQLSetConnectAttrW.c - ${LIBRARY_DIR}/DriverManager/SQLSetConnectOption.c - ${LIBRARY_DIR}/DriverManager/SQLSetConnectOptionW.c - ${LIBRARY_DIR}/DriverManager/SQLSetCursorName.c - ${LIBRARY_DIR}/DriverManager/SQLSetCursorNameW.c - ${LIBRARY_DIR}/DriverManager/SQLSetDescField.c - 
${LIBRARY_DIR}/DriverManager/SQLSetDescFieldW.c - ${LIBRARY_DIR}/DriverManager/SQLSetDescRec.c - ${LIBRARY_DIR}/DriverManager/SQLSetEnvAttr.c - ${LIBRARY_DIR}/DriverManager/SQLSetParam.c - ${LIBRARY_DIR}/DriverManager/SQLSetPos.c - ${LIBRARY_DIR}/DriverManager/SQLSetScrollOptions.c - ${LIBRARY_DIR}/DriverManager/SQLSetStmtAttr.c - ${LIBRARY_DIR}/DriverManager/SQLSetStmtAttrW.c - ${LIBRARY_DIR}/DriverManager/SQLSetStmtOption.c - ${LIBRARY_DIR}/DriverManager/SQLSetStmtOptionW.c - ${LIBRARY_DIR}/DriverManager/SQLSpecialColumns.c - ${LIBRARY_DIR}/DriverManager/SQLSpecialColumnsW.c - ${LIBRARY_DIR}/DriverManager/SQLStatistics.c - ${LIBRARY_DIR}/DriverManager/SQLStatisticsW.c - ${LIBRARY_DIR}/DriverManager/SQLTablePrivileges.c - ${LIBRARY_DIR}/DriverManager/SQLTablePrivilegesW.c - ${LIBRARY_DIR}/DriverManager/SQLTables.c - ${LIBRARY_DIR}/DriverManager/SQLTablesW.c - ${LIBRARY_DIR}/DriverManager/SQLTransact.c - ${LIBRARY_DIR}/ini/_iniDump.c - ${LIBRARY_DIR}/ini/_iniObjectRead.c - ${LIBRARY_DIR}/ini/_iniPropertyRead.c - ${LIBRARY_DIR}/ini/_iniScanUntilObject.c - ${LIBRARY_DIR}/ini/iniAllTrim.c - ${LIBRARY_DIR}/ini/iniAppend.c - ${LIBRARY_DIR}/ini/iniClose.c - ${LIBRARY_DIR}/ini/iniCommit.c - ${LIBRARY_DIR}/ini/iniCursor.c - ${LIBRARY_DIR}/ini/iniDelete.c - ${LIBRARY_DIR}/ini/iniElement.c - ${LIBRARY_DIR}/ini/iniElementCount.c - ${LIBRARY_DIR}/ini/iniGetBookmark.c - ${LIBRARY_DIR}/ini/iniGotoBookmark.c - ${LIBRARY_DIR}/ini/iniObject.c - ${LIBRARY_DIR}/ini/iniObjectDelete.c - ${LIBRARY_DIR}/ini/iniObjectEOL.c - ${LIBRARY_DIR}/ini/iniObjectFirst.c - ${LIBRARY_DIR}/ini/iniObjectInsert.c - ${LIBRARY_DIR}/ini/iniObjectLast.c - ${LIBRARY_DIR}/ini/iniObjectNext.c - ${LIBRARY_DIR}/ini/iniObjectSeek.c - ${LIBRARY_DIR}/ini/iniObjectSeekSure.c - ${LIBRARY_DIR}/ini/iniObjectUpdate.c - ${LIBRARY_DIR}/ini/iniOpen.c - ${LIBRARY_DIR}/ini/iniProperty.c - ${LIBRARY_DIR}/ini/iniPropertyDelete.c - ${LIBRARY_DIR}/ini/iniPropertyEOL.c - ${LIBRARY_DIR}/ini/iniPropertyFirst.c - ${LIBRARY_DIR}/ini/iniPropertyInsert.c - ${LIBRARY_DIR}/ini/iniPropertyLast.c - ${LIBRARY_DIR}/ini/iniPropertyNext.c - ${LIBRARY_DIR}/ini/iniPropertySeek.c - ${LIBRARY_DIR}/ini/iniPropertySeekSure.c - ${LIBRARY_DIR}/ini/iniPropertyUpdate.c - ${LIBRARY_DIR}/ini/iniPropertyValue.c - ${LIBRARY_DIR}/ini/iniToUpper.c - ${LIBRARY_DIR}/ini/iniValue.c - ${LIBRARY_DIR}/log/_logFreeMsg.c - ${LIBRARY_DIR}/log/logClear.c - ${LIBRARY_DIR}/log/logClose.c - ${LIBRARY_DIR}/log/logOn.c - ${LIBRARY_DIR}/log/logOpen.c - ${LIBRARY_DIR}/log/logPeekMsg.c - ${LIBRARY_DIR}/log/logPopMsg.c - ${LIBRARY_DIR}/log/logPushMsg.c - ${LIBRARY_DIR}/lst/_lstAdjustCurrent.c - ${LIBRARY_DIR}/lst/_lstDump.c - ${LIBRARY_DIR}/lst/_lstFreeItem.c - ${LIBRARY_DIR}/lst/_lstNextValidItem.c - ${LIBRARY_DIR}/lst/_lstPrevValidItem.c - ${LIBRARY_DIR}/lst/_lstVisible.c - ${LIBRARY_DIR}/lst/lstAppend.c - ${LIBRARY_DIR}/lst/lstClose.c - ${LIBRARY_DIR}/lst/lstDelete.c - ${LIBRARY_DIR}/lst/lstEOL.c - ${LIBRARY_DIR}/lst/lstFirst.c - ${LIBRARY_DIR}/lst/lstGet.c - ${LIBRARY_DIR}/lst/lstGetBookMark.c - ${LIBRARY_DIR}/lst/lstGoto.c - ${LIBRARY_DIR}/lst/lstGotoBookMark.c - ${LIBRARY_DIR}/lst/lstInsert.c - ${LIBRARY_DIR}/lst/lstLast.c - ${LIBRARY_DIR}/lst/lstNext.c - ${LIBRARY_DIR}/lst/lstOpen.c - ${LIBRARY_DIR}/lst/lstOpenCursor.c - ${LIBRARY_DIR}/lst/lstPrev.c - ${LIBRARY_DIR}/lst/lstSeek.c - ${LIBRARY_DIR}/lst/lstSeekItem.c - ${LIBRARY_DIR}/lst/lstSet.c - ${LIBRARY_DIR}/lst/lstSetFreeFunc.c - ${LIBRARY_DIR}/odbcinst/_logging.c - ${LIBRARY_DIR}/odbcinst/_odbcinst_ConfigModeINI.c - 
${LIBRARY_DIR}/odbcinst/_odbcinst_GetEntries.c - ${LIBRARY_DIR}/odbcinst/_odbcinst_GetSections.c - ${LIBRARY_DIR}/odbcinst/_odbcinst_SystemINI.c - ${LIBRARY_DIR}/odbcinst/_odbcinst_UserINI.c - ${LIBRARY_DIR}/odbcinst/_SQLDriverConnectPrompt.c - ${LIBRARY_DIR}/odbcinst/_SQLGetInstalledDrivers.c - ${LIBRARY_DIR}/odbcinst/_SQLWriteInstalledDrivers.c - ${LIBRARY_DIR}/odbcinst/ODBCINSTConstructProperties.c - ${LIBRARY_DIR}/odbcinst/ODBCINSTDestructProperties.c - ${LIBRARY_DIR}/odbcinst/ODBCINSTSetProperty.c - ${LIBRARY_DIR}/odbcinst/ODBCINSTValidateProperties.c - ${LIBRARY_DIR}/odbcinst/ODBCINSTValidateProperty.c - ${LIBRARY_DIR}/odbcinst/SQLConfigDataSource.c - ${LIBRARY_DIR}/odbcinst/SQLConfigDriver.c - ${LIBRARY_DIR}/odbcinst/SQLCreateDataSource.c - ${LIBRARY_DIR}/odbcinst/SQLGetAvailableDrivers.c - ${LIBRARY_DIR}/odbcinst/SQLGetConfigMode.c - ${LIBRARY_DIR}/odbcinst/SQLGetInstalledDrivers.c - ${LIBRARY_DIR}/odbcinst/SQLGetPrivateProfileString.c - ${LIBRARY_DIR}/odbcinst/SQLGetTranslator.c - ${LIBRARY_DIR}/odbcinst/SQLInstallDriverEx.c - ${LIBRARY_DIR}/odbcinst/SQLInstallDriverManager.c - ${LIBRARY_DIR}/odbcinst/SQLInstallerError.c - ${LIBRARY_DIR}/odbcinst/SQLInstallODBC.c - ${LIBRARY_DIR}/odbcinst/SQLInstallTranslatorEx.c - ${LIBRARY_DIR}/odbcinst/SQLManageDataSources.c - ${LIBRARY_DIR}/odbcinst/SQLPostInstallerError.c - ${LIBRARY_DIR}/odbcinst/SQLReadFileDSN.c - ${LIBRARY_DIR}/odbcinst/SQLRemoveDriver.c - ${LIBRARY_DIR}/odbcinst/SQLRemoveDriverManager.c - ${LIBRARY_DIR}/odbcinst/SQLRemoveDSNFromIni.c - ${LIBRARY_DIR}/odbcinst/SQLRemoveTranslator.c - ${LIBRARY_DIR}/odbcinst/SQLSetConfigMode.c - ${LIBRARY_DIR}/odbcinst/SQLValidDSN.c - ${LIBRARY_DIR}/odbcinst/SQLWriteDSNToIni.c - ${LIBRARY_DIR}/odbcinst/SQLWriteFileDSN.c - ${LIBRARY_DIR}/odbcinst/SQLWritePrivateProfileString.c + "${LIBRARY_DIR}/DriverManager/__attribute.c" + "${LIBRARY_DIR}/DriverManager/__connection.c" + "${LIBRARY_DIR}/DriverManager/__handles.c" + "${LIBRARY_DIR}/DriverManager/__info.c" + "${LIBRARY_DIR}/DriverManager/__stats.c" + "${LIBRARY_DIR}/DriverManager/SQLAllocConnect.c" + "${LIBRARY_DIR}/DriverManager/SQLAllocEnv.c" + "${LIBRARY_DIR}/DriverManager/SQLAllocHandle.c" + "${LIBRARY_DIR}/DriverManager/SQLAllocHandleStd.c" + "${LIBRARY_DIR}/DriverManager/SQLAllocStmt.c" + "${LIBRARY_DIR}/DriverManager/SQLBindCol.c" + "${LIBRARY_DIR}/DriverManager/SQLBindParam.c" + "${LIBRARY_DIR}/DriverManager/SQLBindParameter.c" + "${LIBRARY_DIR}/DriverManager/SQLBrowseConnect.c" + "${LIBRARY_DIR}/DriverManager/SQLBrowseConnectW.c" + "${LIBRARY_DIR}/DriverManager/SQLBulkOperations.c" + "${LIBRARY_DIR}/DriverManager/SQLCancel.c" + "${LIBRARY_DIR}/DriverManager/SQLCancelHandle.c" + "${LIBRARY_DIR}/DriverManager/SQLCloseCursor.c" + "${LIBRARY_DIR}/DriverManager/SQLColAttribute.c" + "${LIBRARY_DIR}/DriverManager/SQLColAttributes.c" + "${LIBRARY_DIR}/DriverManager/SQLColAttributesW.c" + "${LIBRARY_DIR}/DriverManager/SQLColAttributeW.c" + "${LIBRARY_DIR}/DriverManager/SQLColumnPrivileges.c" + "${LIBRARY_DIR}/DriverManager/SQLColumnPrivilegesW.c" + "${LIBRARY_DIR}/DriverManager/SQLColumns.c" + "${LIBRARY_DIR}/DriverManager/SQLColumnsW.c" + "${LIBRARY_DIR}/DriverManager/SQLConnect.c" + "${LIBRARY_DIR}/DriverManager/SQLConnectW.c" + "${LIBRARY_DIR}/DriverManager/SQLCopyDesc.c" + "${LIBRARY_DIR}/DriverManager/SQLDataSources.c" + "${LIBRARY_DIR}/DriverManager/SQLDataSourcesW.c" + "${LIBRARY_DIR}/DriverManager/SQLDescribeCol.c" + "${LIBRARY_DIR}/DriverManager/SQLDescribeColW.c" + "${LIBRARY_DIR}/DriverManager/SQLDescribeParam.c" + 
"${LIBRARY_DIR}/DriverManager/SQLDisconnect.c" + "${LIBRARY_DIR}/DriverManager/SQLDriverConnect.c" + "${LIBRARY_DIR}/DriverManager/SQLDriverConnectW.c" + "${LIBRARY_DIR}/DriverManager/SQLDrivers.c" + "${LIBRARY_DIR}/DriverManager/SQLDriversW.c" + "${LIBRARY_DIR}/DriverManager/SQLEndTran.c" + "${LIBRARY_DIR}/DriverManager/SQLError.c" + "${LIBRARY_DIR}/DriverManager/SQLErrorW.c" + "${LIBRARY_DIR}/DriverManager/SQLExecDirect.c" + "${LIBRARY_DIR}/DriverManager/SQLExecDirectW.c" + "${LIBRARY_DIR}/DriverManager/SQLExecute.c" + "${LIBRARY_DIR}/DriverManager/SQLExtendedFetch.c" + "${LIBRARY_DIR}/DriverManager/SQLFetch.c" + "${LIBRARY_DIR}/DriverManager/SQLFetchScroll.c" + "${LIBRARY_DIR}/DriverManager/SQLForeignKeys.c" + "${LIBRARY_DIR}/DriverManager/SQLForeignKeysW.c" + "${LIBRARY_DIR}/DriverManager/SQLFreeConnect.c" + "${LIBRARY_DIR}/DriverManager/SQLFreeEnv.c" + "${LIBRARY_DIR}/DriverManager/SQLFreeHandle.c" + "${LIBRARY_DIR}/DriverManager/SQLFreeStmt.c" + "${LIBRARY_DIR}/DriverManager/SQLGetConnectAttr.c" + "${LIBRARY_DIR}/DriverManager/SQLGetConnectAttrW.c" + "${LIBRARY_DIR}/DriverManager/SQLGetConnectOption.c" + "${LIBRARY_DIR}/DriverManager/SQLGetConnectOptionW.c" + "${LIBRARY_DIR}/DriverManager/SQLGetCursorName.c" + "${LIBRARY_DIR}/DriverManager/SQLGetCursorNameW.c" + "${LIBRARY_DIR}/DriverManager/SQLGetData.c" + "${LIBRARY_DIR}/DriverManager/SQLGetDescField.c" + "${LIBRARY_DIR}/DriverManager/SQLGetDescFieldW.c" + "${LIBRARY_DIR}/DriverManager/SQLGetDescRec.c" + "${LIBRARY_DIR}/DriverManager/SQLGetDescRecW.c" + "${LIBRARY_DIR}/DriverManager/SQLGetDiagField.c" + "${LIBRARY_DIR}/DriverManager/SQLGetDiagFieldW.c" + "${LIBRARY_DIR}/DriverManager/SQLGetDiagRec.c" + "${LIBRARY_DIR}/DriverManager/SQLGetDiagRecW.c" + "${LIBRARY_DIR}/DriverManager/SQLGetEnvAttr.c" + "${LIBRARY_DIR}/DriverManager/SQLGetFunctions.c" + "${LIBRARY_DIR}/DriverManager/SQLGetInfo.c" + "${LIBRARY_DIR}/DriverManager/SQLGetInfoW.c" + "${LIBRARY_DIR}/DriverManager/SQLGetStmtAttr.c" + "${LIBRARY_DIR}/DriverManager/SQLGetStmtAttrW.c" + "${LIBRARY_DIR}/DriverManager/SQLGetStmtOption.c" + "${LIBRARY_DIR}/DriverManager/SQLGetTypeInfo.c" + "${LIBRARY_DIR}/DriverManager/SQLGetTypeInfoW.c" + "${LIBRARY_DIR}/DriverManager/SQLMoreResults.c" + "${LIBRARY_DIR}/DriverManager/SQLNativeSql.c" + "${LIBRARY_DIR}/DriverManager/SQLNativeSqlW.c" + "${LIBRARY_DIR}/DriverManager/SQLNumParams.c" + "${LIBRARY_DIR}/DriverManager/SQLNumResultCols.c" + "${LIBRARY_DIR}/DriverManager/SQLParamData.c" + "${LIBRARY_DIR}/DriverManager/SQLParamOptions.c" + "${LIBRARY_DIR}/DriverManager/SQLPrepare.c" + "${LIBRARY_DIR}/DriverManager/SQLPrepareW.c" + "${LIBRARY_DIR}/DriverManager/SQLPrimaryKeys.c" + "${LIBRARY_DIR}/DriverManager/SQLPrimaryKeysW.c" + "${LIBRARY_DIR}/DriverManager/SQLProcedureColumns.c" + "${LIBRARY_DIR}/DriverManager/SQLProcedureColumnsW.c" + "${LIBRARY_DIR}/DriverManager/SQLProcedures.c" + "${LIBRARY_DIR}/DriverManager/SQLProceduresW.c" + "${LIBRARY_DIR}/DriverManager/SQLPutData.c" + "${LIBRARY_DIR}/DriverManager/SQLRowCount.c" + "${LIBRARY_DIR}/DriverManager/SQLSetConnectAttr.c" + "${LIBRARY_DIR}/DriverManager/SQLSetConnectAttrW.c" + "${LIBRARY_DIR}/DriverManager/SQLSetConnectOption.c" + "${LIBRARY_DIR}/DriverManager/SQLSetConnectOptionW.c" + "${LIBRARY_DIR}/DriverManager/SQLSetCursorName.c" + "${LIBRARY_DIR}/DriverManager/SQLSetCursorNameW.c" + "${LIBRARY_DIR}/DriverManager/SQLSetDescField.c" + "${LIBRARY_DIR}/DriverManager/SQLSetDescFieldW.c" + "${LIBRARY_DIR}/DriverManager/SQLSetDescRec.c" + "${LIBRARY_DIR}/DriverManager/SQLSetEnvAttr.c" + 
"${LIBRARY_DIR}/DriverManager/SQLSetParam.c" + "${LIBRARY_DIR}/DriverManager/SQLSetPos.c" + "${LIBRARY_DIR}/DriverManager/SQLSetScrollOptions.c" + "${LIBRARY_DIR}/DriverManager/SQLSetStmtAttr.c" + "${LIBRARY_DIR}/DriverManager/SQLSetStmtAttrW.c" + "${LIBRARY_DIR}/DriverManager/SQLSetStmtOption.c" + "${LIBRARY_DIR}/DriverManager/SQLSetStmtOptionW.c" + "${LIBRARY_DIR}/DriverManager/SQLSpecialColumns.c" + "${LIBRARY_DIR}/DriverManager/SQLSpecialColumnsW.c" + "${LIBRARY_DIR}/DriverManager/SQLStatistics.c" + "${LIBRARY_DIR}/DriverManager/SQLStatisticsW.c" + "${LIBRARY_DIR}/DriverManager/SQLTablePrivileges.c" + "${LIBRARY_DIR}/DriverManager/SQLTablePrivilegesW.c" + "${LIBRARY_DIR}/DriverManager/SQLTables.c" + "${LIBRARY_DIR}/DriverManager/SQLTablesW.c" + "${LIBRARY_DIR}/DriverManager/SQLTransact.c" + "${LIBRARY_DIR}/ini/_iniDump.c" + "${LIBRARY_DIR}/ini/_iniObjectRead.c" + "${LIBRARY_DIR}/ini/_iniPropertyRead.c" + "${LIBRARY_DIR}/ini/_iniScanUntilObject.c" + "${LIBRARY_DIR}/ini/iniAllTrim.c" + "${LIBRARY_DIR}/ini/iniAppend.c" + "${LIBRARY_DIR}/ini/iniClose.c" + "${LIBRARY_DIR}/ini/iniCommit.c" + "${LIBRARY_DIR}/ini/iniCursor.c" + "${LIBRARY_DIR}/ini/iniDelete.c" + "${LIBRARY_DIR}/ini/iniElement.c" + "${LIBRARY_DIR}/ini/iniElementCount.c" + "${LIBRARY_DIR}/ini/iniGetBookmark.c" + "${LIBRARY_DIR}/ini/iniGotoBookmark.c" + "${LIBRARY_DIR}/ini/iniObject.c" + "${LIBRARY_DIR}/ini/iniObjectDelete.c" + "${LIBRARY_DIR}/ini/iniObjectEOL.c" + "${LIBRARY_DIR}/ini/iniObjectFirst.c" + "${LIBRARY_DIR}/ini/iniObjectInsert.c" + "${LIBRARY_DIR}/ini/iniObjectLast.c" + "${LIBRARY_DIR}/ini/iniObjectNext.c" + "${LIBRARY_DIR}/ini/iniObjectSeek.c" + "${LIBRARY_DIR}/ini/iniObjectSeekSure.c" + "${LIBRARY_DIR}/ini/iniObjectUpdate.c" + "${LIBRARY_DIR}/ini/iniOpen.c" + "${LIBRARY_DIR}/ini/iniProperty.c" + "${LIBRARY_DIR}/ini/iniPropertyDelete.c" + "${LIBRARY_DIR}/ini/iniPropertyEOL.c" + "${LIBRARY_DIR}/ini/iniPropertyFirst.c" + "${LIBRARY_DIR}/ini/iniPropertyInsert.c" + "${LIBRARY_DIR}/ini/iniPropertyLast.c" + "${LIBRARY_DIR}/ini/iniPropertyNext.c" + "${LIBRARY_DIR}/ini/iniPropertySeek.c" + "${LIBRARY_DIR}/ini/iniPropertySeekSure.c" + "${LIBRARY_DIR}/ini/iniPropertyUpdate.c" + "${LIBRARY_DIR}/ini/iniPropertyValue.c" + "${LIBRARY_DIR}/ini/iniToUpper.c" + "${LIBRARY_DIR}/ini/iniValue.c" + "${LIBRARY_DIR}/log/_logFreeMsg.c" + "${LIBRARY_DIR}/log/logClear.c" + "${LIBRARY_DIR}/log/logClose.c" + "${LIBRARY_DIR}/log/logOn.c" + "${LIBRARY_DIR}/log/logOpen.c" + "${LIBRARY_DIR}/log/logPeekMsg.c" + "${LIBRARY_DIR}/log/logPopMsg.c" + "${LIBRARY_DIR}/log/logPushMsg.c" + "${LIBRARY_DIR}/lst/_lstAdjustCurrent.c" + "${LIBRARY_DIR}/lst/_lstDump.c" + "${LIBRARY_DIR}/lst/_lstFreeItem.c" + "${LIBRARY_DIR}/lst/_lstNextValidItem.c" + "${LIBRARY_DIR}/lst/_lstPrevValidItem.c" + "${LIBRARY_DIR}/lst/_lstVisible.c" + "${LIBRARY_DIR}/lst/lstAppend.c" + "${LIBRARY_DIR}/lst/lstClose.c" + "${LIBRARY_DIR}/lst/lstDelete.c" + "${LIBRARY_DIR}/lst/lstEOL.c" + "${LIBRARY_DIR}/lst/lstFirst.c" + "${LIBRARY_DIR}/lst/lstGet.c" + "${LIBRARY_DIR}/lst/lstGetBookMark.c" + "${LIBRARY_DIR}/lst/lstGoto.c" + "${LIBRARY_DIR}/lst/lstGotoBookMark.c" + "${LIBRARY_DIR}/lst/lstInsert.c" + "${LIBRARY_DIR}/lst/lstLast.c" + "${LIBRARY_DIR}/lst/lstNext.c" + "${LIBRARY_DIR}/lst/lstOpen.c" + "${LIBRARY_DIR}/lst/lstOpenCursor.c" + "${LIBRARY_DIR}/lst/lstPrev.c" + "${LIBRARY_DIR}/lst/lstSeek.c" + "${LIBRARY_DIR}/lst/lstSeekItem.c" + "${LIBRARY_DIR}/lst/lstSet.c" + "${LIBRARY_DIR}/lst/lstSetFreeFunc.c" + "${LIBRARY_DIR}/odbcinst/_logging.c" + 
"${LIBRARY_DIR}/odbcinst/_odbcinst_ConfigModeINI.c" + "${LIBRARY_DIR}/odbcinst/_odbcinst_GetEntries.c" + "${LIBRARY_DIR}/odbcinst/_odbcinst_GetSections.c" + "${LIBRARY_DIR}/odbcinst/_odbcinst_SystemINI.c" + "${LIBRARY_DIR}/odbcinst/_odbcinst_UserINI.c" + "${LIBRARY_DIR}/odbcinst/_SQLDriverConnectPrompt.c" + "${LIBRARY_DIR}/odbcinst/_SQLGetInstalledDrivers.c" + "${LIBRARY_DIR}/odbcinst/_SQLWriteInstalledDrivers.c" + "${LIBRARY_DIR}/odbcinst/ODBCINSTConstructProperties.c" + "${LIBRARY_DIR}/odbcinst/ODBCINSTDestructProperties.c" + "${LIBRARY_DIR}/odbcinst/ODBCINSTSetProperty.c" + "${LIBRARY_DIR}/odbcinst/ODBCINSTValidateProperties.c" + "${LIBRARY_DIR}/odbcinst/ODBCINSTValidateProperty.c" + "${LIBRARY_DIR}/odbcinst/SQLConfigDataSource.c" + "${LIBRARY_DIR}/odbcinst/SQLConfigDriver.c" + "${LIBRARY_DIR}/odbcinst/SQLCreateDataSource.c" + "${LIBRARY_DIR}/odbcinst/SQLGetAvailableDrivers.c" + "${LIBRARY_DIR}/odbcinst/SQLGetConfigMode.c" + "${LIBRARY_DIR}/odbcinst/SQLGetInstalledDrivers.c" + "${LIBRARY_DIR}/odbcinst/SQLGetPrivateProfileString.c" + "${LIBRARY_DIR}/odbcinst/SQLGetTranslator.c" + "${LIBRARY_DIR}/odbcinst/SQLInstallDriverEx.c" + "${LIBRARY_DIR}/odbcinst/SQLInstallDriverManager.c" + "${LIBRARY_DIR}/odbcinst/SQLInstallerError.c" + "${LIBRARY_DIR}/odbcinst/SQLInstallODBC.c" + "${LIBRARY_DIR}/odbcinst/SQLInstallTranslatorEx.c" + "${LIBRARY_DIR}/odbcinst/SQLManageDataSources.c" + "${LIBRARY_DIR}/odbcinst/SQLPostInstallerError.c" + "${LIBRARY_DIR}/odbcinst/SQLReadFileDSN.c" + "${LIBRARY_DIR}/odbcinst/SQLRemoveDriver.c" + "${LIBRARY_DIR}/odbcinst/SQLRemoveDriverManager.c" + "${LIBRARY_DIR}/odbcinst/SQLRemoveDSNFromIni.c" + "${LIBRARY_DIR}/odbcinst/SQLRemoveTranslator.c" + "${LIBRARY_DIR}/odbcinst/SQLSetConfigMode.c" + "${LIBRARY_DIR}/odbcinst/SQLValidDSN.c" + "${LIBRARY_DIR}/odbcinst/SQLWriteDSNToIni.c" + "${LIBRARY_DIR}/odbcinst/SQLWriteFileDSN.c" + "${LIBRARY_DIR}/odbcinst/SQLWritePrivateProfileString.c" ) add_library (unixodbc ${SRCS}) @@ -280,7 +280,7 @@ target_include_directories (unixodbc linux_x86_64/private PUBLIC linux_x86_64 - ${LIBRARY_DIR}/include + "${LIBRARY_DIR}/include" ) target_compile_definitions (unixodbc PRIVATE -DHAVE_CONFIG_H) target_compile_options (unixodbc diff --git a/contrib/zlib-ng b/contrib/zlib-ng index 6fd1846c8b8..5cc4d232020 160000 --- a/contrib/zlib-ng +++ b/contrib/zlib-ng @@ -1 +1 @@ -Subproject commit 6fd1846c8b8f59436fe2dd752d0f316ddbb64df6 +Subproject commit 5cc4d232020dc66d1d6c5438834457e2a2f6127b diff --git a/contrib/zstd-cmake/CMakeLists.txt b/contrib/zstd-cmake/CMakeLists.txt index 58a827761ea..d74dcdffd9c 100644 --- a/contrib/zstd-cmake/CMakeLists.txt +++ b/contrib/zstd-cmake/CMakeLists.txt @@ -39,108 +39,108 @@ function(GetLibraryVersion _content _outputVar1 _outputVar2 _outputVar3) endfunction() # Define library directory, where sources and header files are located -SET(LIBRARY_DIR ${ClickHouse_SOURCE_DIR}/contrib/zstd/lib) -INCLUDE_DIRECTORIES(BEFORE ${LIBRARY_DIR} ${LIBRARY_DIR}/common) +SET(LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/zstd/lib") +INCLUDE_DIRECTORIES(BEFORE ${LIBRARY_DIR} "${LIBRARY_DIR}/common") # Read file content -FILE(READ ${LIBRARY_DIR}/zstd.h HEADER_CONTENT) +FILE(READ "${LIBRARY_DIR}/zstd.h" HEADER_CONTENT) # Parse version GetLibraryVersion("${HEADER_CONTENT}" LIBVER_MAJOR LIBVER_MINOR LIBVER_RELEASE) MESSAGE(STATUS "ZSTD VERSION ${LIBVER_MAJOR}.${LIBVER_MINOR}.${LIBVER_RELEASE}") # cd contrib/zstd/lib -# find . -name '*.c' | grep -vP 'deprecated|legacy' | sort | sed 's/^\./ ${LIBRARY_DIR}/' +# find . 
-name '*.c' | grep -vP 'deprecated|legacy' | sort | sed 's/^\./ "${LIBRARY_DIR}/"' SET(Sources - ${LIBRARY_DIR}/common/debug.c - ${LIBRARY_DIR}/common/entropy_common.c - ${LIBRARY_DIR}/common/error_private.c - ${LIBRARY_DIR}/common/fse_decompress.c - ${LIBRARY_DIR}/common/pool.c - ${LIBRARY_DIR}/common/threading.c - ${LIBRARY_DIR}/common/xxhash.c - ${LIBRARY_DIR}/common/zstd_common.c - ${LIBRARY_DIR}/compress/fse_compress.c - ${LIBRARY_DIR}/compress/hist.c - ${LIBRARY_DIR}/compress/huf_compress.c - ${LIBRARY_DIR}/compress/zstd_compress.c - ${LIBRARY_DIR}/compress/zstd_compress_literals.c - ${LIBRARY_DIR}/compress/zstd_compress_sequences.c - ${LIBRARY_DIR}/compress/zstd_double_fast.c - ${LIBRARY_DIR}/compress/zstd_fast.c - ${LIBRARY_DIR}/compress/zstd_lazy.c - ${LIBRARY_DIR}/compress/zstd_ldm.c - ${LIBRARY_DIR}/compress/zstdmt_compress.c - ${LIBRARY_DIR}/compress/zstd_opt.c - ${LIBRARY_DIR}/decompress/huf_decompress.c - ${LIBRARY_DIR}/decompress/zstd_ddict.c - ${LIBRARY_DIR}/decompress/zstd_decompress_block.c - ${LIBRARY_DIR}/decompress/zstd_decompress.c - ${LIBRARY_DIR}/dictBuilder/cover.c - ${LIBRARY_DIR}/dictBuilder/divsufsort.c - ${LIBRARY_DIR}/dictBuilder/fastcover.c - ${LIBRARY_DIR}/dictBuilder/zdict.c) + "${LIBRARY_DIR}/common/debug.c" + "${LIBRARY_DIR}/common/entropy_common.c" + "${LIBRARY_DIR}/common/error_private.c" + "${LIBRARY_DIR}/common/fse_decompress.c" + "${LIBRARY_DIR}/common/pool.c" + "${LIBRARY_DIR}/common/threading.c" + "${LIBRARY_DIR}/common/xxhash.c" + "${LIBRARY_DIR}/common/zstd_common.c" + "${LIBRARY_DIR}/compress/fse_compress.c" + "${LIBRARY_DIR}/compress/hist.c" + "${LIBRARY_DIR}/compress/huf_compress.c" + "${LIBRARY_DIR}/compress/zstd_compress.c" + "${LIBRARY_DIR}/compress/zstd_compress_literals.c" + "${LIBRARY_DIR}/compress/zstd_compress_sequences.c" + "${LIBRARY_DIR}/compress/zstd_double_fast.c" + "${LIBRARY_DIR}/compress/zstd_fast.c" + "${LIBRARY_DIR}/compress/zstd_lazy.c" + "${LIBRARY_DIR}/compress/zstd_ldm.c" + "${LIBRARY_DIR}/compress/zstdmt_compress.c" + "${LIBRARY_DIR}/compress/zstd_opt.c" + "${LIBRARY_DIR}/decompress/huf_decompress.c" + "${LIBRARY_DIR}/decompress/zstd_ddict.c" + "${LIBRARY_DIR}/decompress/zstd_decompress_block.c" + "${LIBRARY_DIR}/decompress/zstd_decompress.c" + "${LIBRARY_DIR}/dictBuilder/cover.c" + "${LIBRARY_DIR}/dictBuilder/divsufsort.c" + "${LIBRARY_DIR}/dictBuilder/fastcover.c" + "${LIBRARY_DIR}/dictBuilder/zdict.c") # cd contrib/zstd/lib -# find . -name '*.h' | grep -vP 'deprecated|legacy' | sort | sed 's/^\./ ${LIBRARY_DIR}/' +# find . 
-name '*.h' | grep -vP 'deprecated|legacy' | sort | sed 's/^\./ "${LIBRARY_DIR}/"' SET(Headers - ${LIBRARY_DIR}/common/bitstream.h - ${LIBRARY_DIR}/common/compiler.h - ${LIBRARY_DIR}/common/cpu.h - ${LIBRARY_DIR}/common/debug.h - ${LIBRARY_DIR}/common/error_private.h - ${LIBRARY_DIR}/common/fse.h - ${LIBRARY_DIR}/common/huf.h - ${LIBRARY_DIR}/common/mem.h - ${LIBRARY_DIR}/common/pool.h - ${LIBRARY_DIR}/common/threading.h - ${LIBRARY_DIR}/common/xxhash.h - ${LIBRARY_DIR}/common/zstd_errors.h - ${LIBRARY_DIR}/common/zstd_internal.h - ${LIBRARY_DIR}/compress/hist.h - ${LIBRARY_DIR}/compress/zstd_compress_internal.h - ${LIBRARY_DIR}/compress/zstd_compress_literals.h - ${LIBRARY_DIR}/compress/zstd_compress_sequences.h - ${LIBRARY_DIR}/compress/zstd_cwksp.h - ${LIBRARY_DIR}/compress/zstd_double_fast.h - ${LIBRARY_DIR}/compress/zstd_fast.h - ${LIBRARY_DIR}/compress/zstd_lazy.h - ${LIBRARY_DIR}/compress/zstd_ldm.h - ${LIBRARY_DIR}/compress/zstdmt_compress.h - ${LIBRARY_DIR}/compress/zstd_opt.h - ${LIBRARY_DIR}/decompress/zstd_ddict.h - ${LIBRARY_DIR}/decompress/zstd_decompress_block.h - ${LIBRARY_DIR}/decompress/zstd_decompress_internal.h - ${LIBRARY_DIR}/dictBuilder/cover.h - ${LIBRARY_DIR}/dictBuilder/divsufsort.h - ${LIBRARY_DIR}/dictBuilder/zdict.h - ${LIBRARY_DIR}/zstd.h) + "${LIBRARY_DIR}/common/bitstream.h" + "${LIBRARY_DIR}/common/compiler.h" + "${LIBRARY_DIR}/common/cpu.h" + "${LIBRARY_DIR}/common/debug.h" + "${LIBRARY_DIR}/common/error_private.h" + "${LIBRARY_DIR}/common/fse.h" + "${LIBRARY_DIR}/common/huf.h" + "${LIBRARY_DIR}/common/mem.h" + "${LIBRARY_DIR}/common/pool.h" + "${LIBRARY_DIR}/common/threading.h" + "${LIBRARY_DIR}/common/xxhash.h" + "${LIBRARY_DIR}/common/zstd_errors.h" + "${LIBRARY_DIR}/common/zstd_internal.h" + "${LIBRARY_DIR}/compress/hist.h" + "${LIBRARY_DIR}/compress/zstd_compress_internal.h" + "${LIBRARY_DIR}/compress/zstd_compress_literals.h" + "${LIBRARY_DIR}/compress/zstd_compress_sequences.h" + "${LIBRARY_DIR}/compress/zstd_cwksp.h" + "${LIBRARY_DIR}/compress/zstd_double_fast.h" + "${LIBRARY_DIR}/compress/zstd_fast.h" + "${LIBRARY_DIR}/compress/zstd_lazy.h" + "${LIBRARY_DIR}/compress/zstd_ldm.h" + "${LIBRARY_DIR}/compress/zstdmt_compress.h" + "${LIBRARY_DIR}/compress/zstd_opt.h" + "${LIBRARY_DIR}/decompress/zstd_ddict.h" + "${LIBRARY_DIR}/decompress/zstd_decompress_block.h" + "${LIBRARY_DIR}/decompress/zstd_decompress_internal.h" + "${LIBRARY_DIR}/dictBuilder/cover.h" + "${LIBRARY_DIR}/dictBuilder/divsufsort.h" + "${LIBRARY_DIR}/dictBuilder/zdict.h" + "${LIBRARY_DIR}/zstd.h") SET(ZSTD_LEGACY_SUPPORT true) IF (ZSTD_LEGACY_SUPPORT) - SET(LIBRARY_LEGACY_DIR ${LIBRARY_DIR}/legacy) + SET(LIBRARY_LEGACY_DIR "${LIBRARY_DIR}/legacy") INCLUDE_DIRECTORIES(BEFORE ${LIBRARY_LEGACY_DIR}) ADD_DEFINITIONS(-D ZSTD_LEGACY_SUPPORT=1) SET(Sources ${Sources} - ${LIBRARY_LEGACY_DIR}/zstd_v01.c - ${LIBRARY_LEGACY_DIR}/zstd_v02.c - ${LIBRARY_LEGACY_DIR}/zstd_v03.c - ${LIBRARY_LEGACY_DIR}/zstd_v04.c - ${LIBRARY_LEGACY_DIR}/zstd_v05.c - ${LIBRARY_LEGACY_DIR}/zstd_v06.c - ${LIBRARY_LEGACY_DIR}/zstd_v07.c) + "${LIBRARY_LEGACY_DIR}/zstd_v01.c" + "${LIBRARY_LEGACY_DIR}/zstd_v02.c" + "${LIBRARY_LEGACY_DIR}/zstd_v03.c" + "${LIBRARY_LEGACY_DIR}/zstd_v04.c" + "${LIBRARY_LEGACY_DIR}/zstd_v05.c" + "${LIBRARY_LEGACY_DIR}/zstd_v06.c" + "${LIBRARY_LEGACY_DIR}/zstd_v07.c") SET(Headers ${Headers} - ${LIBRARY_LEGACY_DIR}/zstd_legacy.h - ${LIBRARY_LEGACY_DIR}/zstd_v01.h - ${LIBRARY_LEGACY_DIR}/zstd_v02.h - ${LIBRARY_LEGACY_DIR}/zstd_v03.h - ${LIBRARY_LEGACY_DIR}/zstd_v04.h - 
${LIBRARY_LEGACY_DIR}/zstd_v05.h
-    ${LIBRARY_LEGACY_DIR}/zstd_v06.h
-    ${LIBRARY_LEGACY_DIR}/zstd_v07.h)
+    "${LIBRARY_LEGACY_DIR}/zstd_legacy.h"
+    "${LIBRARY_LEGACY_DIR}/zstd_v01.h"
+    "${LIBRARY_LEGACY_DIR}/zstd_v02.h"
+    "${LIBRARY_LEGACY_DIR}/zstd_v03.h"
+    "${LIBRARY_LEGACY_DIR}/zstd_v04.h"
+    "${LIBRARY_LEGACY_DIR}/zstd_v05.h"
+    "${LIBRARY_LEGACY_DIR}/zstd_v06.h"
+    "${LIBRARY_LEGACY_DIR}/zstd_v07.h")
 ENDIF (ZSTD_LEGACY_SUPPORT)

 ADD_LIBRARY(zstd ${Sources} ${Headers})
diff --git a/debian/changelog b/debian/changelog
index 23d63b41099..8b6626416a9 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,5 +1,5 @@
-clickhouse (21.4.1.1) unstable; urgency=low
+clickhouse (21.6.1.1) unstable; urgency=low

   * Modified source code

- -- clickhouse-release  Sat, 06 Mar 2021 14:43:27 +0300
+ -- clickhouse-release  Tue, 20 Apr 2021 01:48:16 +0300
diff --git a/debian/clickhouse-client.postinst b/debian/clickhouse-client.postinst
deleted file mode 100644
index 480bf2f5c67..00000000000
--- a/debian/clickhouse-client.postinst
+++ /dev/null
@@ -1,8 +0,0 @@
-#!/bin/sh
-set -e
-
-CLICKHOUSE_USER=${CLICKHOUSE_USER=clickhouse}
-
-mkdir -p /etc/clickhouse-client/conf.d
-
-#DEBHELPER#
diff --git a/debian/clickhouse-common-static.install b/debian/clickhouse-common-static.install
index f1cbf0848d3..087a6dbba8f 100644
--- a/debian/clickhouse-common-static.install
+++ b/debian/clickhouse-common-static.install
@@ -1,4 +1,5 @@
 usr/bin/clickhouse
 usr/bin/clickhouse-odbc-bridge
+usr/bin/clickhouse-library-bridge
 usr/bin/clickhouse-extract-from-config
-etc/security/limits.d/clickhouse.conf
+usr/share/bash-completion/completions
diff --git a/debian/clickhouse-server.config b/debian/clickhouse-server.config
deleted file mode 100644
index 636ff7f4da7..00000000000
--- a/debian/clickhouse-server.config
+++ /dev/null
@@ -1,16 +0,0 @@
-#!/bin/sh -e
-
-test -f /usr/share/debconf/confmodule && . /usr/share/debconf/confmodule
-
-db_fget clickhouse-server/default-password seen || true
-password_seen="$RET"
-
-if [ "$1" = "reconfigure" ]; then
-    password_seen=false
-fi
-
-if [ "$password_seen" != "true" ]; then
-    db_input high clickhouse-server/default-password || true
-    db_go || true
-fi
-db_go || true
diff --git a/debian/clickhouse-server.postinst b/debian/clickhouse-server.postinst
index dc876f45954..419c13e3daf 100644
--- a/debian/clickhouse-server.postinst
+++ b/debian/clickhouse-server.postinst
@@ -23,11 +23,13 @@ if [ ! -f "/etc/debian_version" ]; then
 fi

 if [ "$1" = configure ] || [ -n "$not_deb_os" ]; then
+
+    ${CLICKHOUSE_GENERIC_PROGRAM} install --user "${CLICKHOUSE_USER}" --group "${CLICKHOUSE_GROUP}" --pid-path "${CLICKHOUSE_PIDDIR}" --config-path "${CLICKHOUSE_CONFDIR}" --binary-path "${CLICKHOUSE_BINDIR}" --log-path "${CLICKHOUSE_LOGDIR}" --data-path "${CLICKHOUSE_DATADIR}"
+
     if [ -x "/bin/systemctl" ] && [ -f /etc/systemd/system/clickhouse-server.service ] && [ -d /run/systemd/system ]; then
         # if old rc.d service present - remove it
         if [ -x "/etc/init.d/clickhouse-server" ] && [ -x "/usr/sbin/update-rc.d" ]; then
             /usr/sbin/update-rc.d clickhouse-server remove
-            echo "ClickHouse init script has migrated to systemd. Please manually stop old server and restart the service: sudo killall clickhouse-server && sleep 5 && sudo service clickhouse-server restart"
         fi

         /bin/systemctl daemon-reload
@@ -38,10 +40,8 @@ if [ "$1" = configure ] || [ -n "$not_deb_os" ]; then
             if [ -x "/usr/sbin/update-rc.d" ]; then
                 /usr/sbin/update-rc.d clickhouse-server defaults 19 19 >/dev/null || exit $?
else - echo # TODO [ "$OS" = "rhel" ] || [ "$OS" = "centos" ] || [ "$OS" = "fedora" ] + echo # Other OS fi fi fi - - ${CLICKHOUSE_GENERIC_PROGRAM} install --user "${CLICKHOUSE_USER}" --group "${CLICKHOUSE_GROUP}" --pid-path "${CLICKHOUSE_PIDDIR}" --config-path "${CLICKHOUSE_CONFDIR}" --binary-path "${CLICKHOUSE_BINDIR}" --log-path "${CLICKHOUSE_LOGDIR}" --data-path "${CLICKHOUSE_DATADIR}" fi diff --git a/debian/clickhouse-server.preinst b/debian/clickhouse-server.preinst deleted file mode 100644 index 3529aefa7da..00000000000 --- a/debian/clickhouse-server.preinst +++ /dev/null @@ -1,8 +0,0 @@ -#!/bin/sh - -if [ "$1" = "upgrade" ]; then - # Return etc/cron.d/clickhouse-server to original state - service clickhouse-server disable_cron ||: -fi - -#DEBHELPER# diff --git a/debian/clickhouse-server.prerm b/debian/clickhouse-server.prerm deleted file mode 100644 index 02e855a7125..00000000000 --- a/debian/clickhouse-server.prerm +++ /dev/null @@ -1,6 +0,0 @@ -#!/bin/sh - -if [ "$1" = "upgrade" ] || [ "$1" = "remove" ]; then - # Return etc/cron.d/clickhouse-server to original state - service clickhouse-server disable_cron ||: -fi diff --git a/debian/clickhouse-server.templates b/debian/clickhouse-server.templates deleted file mode 100644 index dd55824e15c..00000000000 --- a/debian/clickhouse-server.templates +++ /dev/null @@ -1,3 +0,0 @@ -Template: clickhouse-server/default-password -Type: password -Description: Enter password for default user: diff --git a/debian/clickhouse.limits b/debian/clickhouse.limits deleted file mode 100644 index aca44082c4e..00000000000 --- a/debian/clickhouse.limits +++ /dev/null @@ -1,2 +0,0 @@ -clickhouse soft nofile 262144 -clickhouse hard nofile 262144 diff --git a/debian/pbuilder-hooks/A00ccache b/debian/pbuilder-hooks/A00ccache deleted file mode 100755 index 575358f31eb..00000000000 --- a/debian/pbuilder-hooks/A00ccache +++ /dev/null @@ -1,22 +0,0 @@ -#!/bin/sh - -# set -x - -# CCACHEDIR - for pbuilder ; CCACHE_DIR - for ccache - -echo "CCACHEDIR=$CCACHEDIR CCACHE_DIR=$CCACHE_DIR SET_CCACHEDIR=$SET_CCACHEDIR" - -[ -z "$CCACHE_DIR" ] && export CCACHE_DIR=${CCACHEDIR:=${SET_CCACHEDIR=/var/cache/pbuilder/ccache}} - -if [ -n "$CCACHE_DIR" ]; then - mkdir -p $CCACHE_DIR $DISTCC_DIR ||: - chown -R $BUILDUSERID:$BUILDUSERID $CCACHE_DIR $DISTCC_DIR ||: - chmod -R a+rwx $CCACHE_DIR $DISTCC_DIR ||: -fi - -[ $CCACHE_PREFIX = 'distcc' ] && mkdir -p $DISTCC_DIR && echo "localhost/`nproc`" >> $DISTCC_DIR/hosts && distcc --show-hosts - -df -h -ccache --show-stats -ccache --zero-stats -ccache --max-size=${CCACHE_SIZE:=32G} diff --git a/debian/pbuilder-hooks/A01xlocale b/debian/pbuilder-hooks/A01xlocale deleted file mode 100755 index 0e90f4ee71c..00000000000 --- a/debian/pbuilder-hooks/A01xlocale +++ /dev/null @@ -1,5 +0,0 @@ -#!/bin/sh - -# https://github.com/llvm-mirror/libcxx/commit/6e02e89f65ca1ca1d6ce30fbc557563164dd327e - -touch /usr/include/xlocale.h diff --git a/debian/pbuilder-hooks/B00ccache-stat b/debian/pbuilder-hooks/B00ccache-stat deleted file mode 100755 index fdf6db1b7e7..00000000000 --- a/debian/pbuilder-hooks/B00ccache-stat +++ /dev/null @@ -1,3 +0,0 @@ -#!/bin/sh - -ccache --show-stats diff --git a/debian/pbuilder-hooks/B90test-server b/debian/pbuilder-hooks/B90test-server deleted file mode 100755 index e36c255f9fc..00000000000 --- a/debian/pbuilder-hooks/B90test-server +++ /dev/null @@ -1,85 +0,0 @@ -#!/usr/bin/env bash -set -e -set -x - -TEST_CONNECT=${TEST_CONNECT=1} -TEST_SSL=${TEST_SSL=1} -PACKAGE_INSTALL=${PACKAGE_INSTALL=1} 
-TEST_PORT_RANDOM=${TEST_PORT_RANDOM=1} - -if [ "${PACKAGE_INSTALL}" ]; then - dpkg --auto-deconfigure -i /tmp/buildd/*.deb ||: - apt install -y -f --allow-downgrades ||: - dpkg -l | grep clickhouse ||: - - # Second install to replace debian versions - dpkg --auto-deconfigure -i /tmp/buildd/*.deb ||: - dpkg -l | grep clickhouse ||: - - # Some test references uses specific timezone - ln -fs /usr/share/zoneinfo/Europe/Moscow /etc/localtime - echo 'Europe/Moscow' > /etc/timezone - dpkg-reconfigure -f noninteractive tzdata -fi - -mkdir -p /etc/clickhouse-server/config.d /etc/clickhouse-client/config.d - -if [ "${TEST_PORT_RANDOM}" ]; then - CLICKHOUSE_PORT_BASE=${CLICKHOUSE_PORT_BASE:=$(( ( RANDOM % 50000 ) + 10000 ))} - CLICKHOUSE_PORT_TCP=${CLICKHOUSE_PORT_TCP:=$(($CLICKHOUSE_PORT_BASE + 1))} - CLICKHOUSE_PORT_HTTP=${CLICKHOUSE_PORT_HTTP:=$(($CLICKHOUSE_PORT_BASE + 2))} - CLICKHOUSE_PORT_INTERSERVER=${CLICKHOUSE_PORT_INTERSERVER:=$(($CLICKHOUSE_PORT_BASE + 3))} - CLICKHOUSE_PORT_TCP_SECURE=${CLICKHOUSE_PORT_TCP_SECURE:=$(($CLICKHOUSE_PORT_BASE + 4))} - CLICKHOUSE_PORT_HTTPS=${CLICKHOUSE_PORT_HTTPS:=$(($CLICKHOUSE_PORT_BASE + 5))} -fi - -export CLICKHOUSE_PORT_TCP=${CLICKHOUSE_PORT_TCP:=9000} -export CLICKHOUSE_PORT_HTTP=${CLICKHOUSE_PORT_HTTP:=8123} -export CLICKHOUSE_PORT_INTERSERVER=${CLICKHOUSE_PORT_INTERSERVER:=9009} -export CLICKHOUSE_PORT_TCP_SECURE=${CLICKHOUSE_PORT_TCP_SECURE:=9440} -export CLICKHOUSE_PORT_HTTPS=${CLICKHOUSE_PORT_HTTPS:=8443} - -if [ "${TEST_CONNECT}" ]; then - [ "${TEST_PORT_RANDOM}" ] && echo "${CLICKHOUSE_PORT_HTTP}${CLICKHOUSE_PORT_TCP}${CLICKHOUSE_PORT_INTERSERVER}" > /etc/clickhouse-server/config.d/port.xml - - if [ "${TEST_SSL}" ]; then - CLICKHOUSE_SSL_CONFIG="noneAcceptCertificateHandler" - echo "${CLICKHOUSE_PORT_HTTPS}${CLICKHOUSE_PORT_TCP_SECURE}${CLICKHOUSE_SSL_CONFIG}" > /etc/clickhouse-server/config.d/ssl.xml - echo "${CLICKHOUSE_PORT_TCP}${CLICKHOUSE_PORT_TCP_SECURE}${CLICKHOUSE_SSL_CONFIG}" > /etc/clickhouse-client/config.xml - openssl dhparam -out /etc/clickhouse-server/dhparam.pem 256 - openssl req -subj "/CN=localhost" -new -newkey rsa:2048 -days 365 -nodes -x509 -keyout /etc/clickhouse-server/server.key -out /etc/clickhouse-server/server.crt - chmod -f a+r /etc/clickhouse-server/* /etc/clickhouse-client/* ||: - CLIENT_ADD+="--secure --port ${CLICKHOUSE_PORT_TCP_SECURE}" - else - CLIENT_ADD+="--port ${CLICKHOUSE_PORT_TCP}" - fi - - # For debug - # tail -n +1 -- /etc/clickhouse-server/*.xml /etc/clickhouse-server/config.d/*.xml ||: - - function finish { - service clickhouse-server stop - tail -n 100 /var/log/clickhouse-server/*.log ||: - sleep 1 - killall -9 clickhouse-server ||: - } - trap finish EXIT SIGINT SIGQUIT SIGTERM - - service clickhouse-server start - sleep ${TEST_SERVER_STARTUP_WAIT:=5} - service clickhouse-server status - - # TODO: remove me or make only on error: - tail -n100 /var/log/clickhouse-server/*.log ||: - - clickhouse-client --port $CLICKHOUSE_PORT_TCP -q "SELECT * from system.build_options;" - clickhouse-client ${CLIENT_ADD} -q "SELECT toDateTime(1);" - - ( [ "${TEST_RUN}" ] && clickhouse-test --queries /usr/share/clickhouse-test/queries --tmp /tmp/clickhouse-test/ ${TEST_OPT} ) || ${TEST_TRUE:=true} - - service clickhouse-server stop - -fi - -# Test debug symbols -# gdb -ex quit --args /usr/bin/clickhouse-server diff --git a/debian/pbuilder-hooks/C99kill-make b/debian/pbuilder-hooks/C99kill-make deleted file mode 100755 index 2068e75dc40..00000000000 --- a/debian/pbuilder-hooks/C99kill-make +++ /dev/null @@ -1,5 +0,0 @@ 
-#!/bin/sh - -# Try stop parallel build after timeout - -killall make gcc gcc-8 g++-8 gcc-9 g++-9 clang clang-6.0 clang++-6.0 clang-7 clang++-7 ||: diff --git a/debian/rules b/debian/rules index 8eb47e95389..73d1f3d3b34 100755 --- a/debian/rules +++ b/debian/rules @@ -113,9 +113,6 @@ override_dh_install: ln -sf clickhouse-server.docs debian/clickhouse-client.docs ln -sf clickhouse-server.docs debian/clickhouse-common-static.docs - mkdir -p $(DESTDIR)/etc/security/limits.d - cp debian/clickhouse.limits $(DESTDIR)/etc/security/limits.d/clickhouse.conf - # systemd compatibility mkdir -p $(DESTDIR)/etc/systemd/system/ cp debian/clickhouse-server.service $(DESTDIR)/etc/systemd/system/ diff --git a/debian/watch b/debian/watch index 7ad4cedf713..ed3cab97ade 100644 --- a/debian/watch +++ b/debian/watch @@ -1,6 +1,6 @@ version=4 opts="filenamemangle=s%(?:.*?)?v?(\d[\d.]*)-stable\.tar\.gz%clickhouse-$1.tar.gz%" \ - https://github.com/yandex/clickhouse/tags \ + https://github.com/ClickHouse/ClickHouse/tags \ (?:.*?/)?v?(\d[\d.]*)-stable\.tar\.gz debian uupdate diff --git a/docker/client/Dockerfile b/docker/client/Dockerfile index 8443eae691b..569025dec1c 100644 --- a/docker/client/Dockerfile +++ b/docker/client/Dockerfile @@ -1,7 +1,7 @@ FROM ubuntu:18.04 ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/" -ARG version=21.4.1.* +ARG version=21.6.1.* RUN apt-get update \ && apt-get install --yes --no-install-recommends \ @@ -18,6 +18,7 @@ RUN apt-get update \ clickhouse-client=$version \ clickhouse-common-static=$version \ locales \ + tzdata \ && rm -rf /var/lib/apt/lists/* /var/cache/debconf \ && apt-get clean diff --git a/docker/images.json b/docker/images.json index 303bd159ce4..e2e22468596 100644 --- a/docker/images.json +++ b/docker/images.json @@ -138,7 +138,8 @@ "docker/test/stateless_unbundled", "docker/test/stateless_pytest", "docker/test/integration/base", - "docker/test/fuzzer" + "docker/test/fuzzer", + "docker/test/keeper-jepsen" ] }, "docker/packager/unbundled": { @@ -159,5 +160,9 @@ "docker/test/sqlancer": { "name": "yandex/clickhouse-sqlancer-test", "dependent": [] + }, + "docker/test/keeper-jepsen": { + "name": "yandex/clickhouse-keeper-jepsen-test", + "dependent": [] } } diff --git a/docker/packager/README.md b/docker/packager/README.md index 9fbc2d7f8b5..a745f6225fa 100644 --- a/docker/packager/README.md +++ b/docker/packager/README.md @@ -3,10 +3,10 @@ compilers and build settings. 
Correctly configured Docker daemon is single depen Usage: -Build deb package with `gcc-9` in `debug` mode: +Build deb package with `clang-11` in `debug` mode: ``` $ mkdir deb/test_output -$ ./packager --output-dir deb/test_output/ --package-type deb --compiler=gcc-9 --build-type=debug +$ ./packager --output-dir deb/test_output/ --package-type deb --compiler=clang-11 --build-type=debug $ ls -l deb/test_output -rw-r--r-- 1 root root 3730 clickhouse-client_18.14.2+debug_all.deb -rw-r--r-- 1 root root 84221888 clickhouse-common-static_18.14.2+debug_amd64.deb @@ -18,11 +18,11 @@ $ ls -l deb/test_output ``` -Build ClickHouse binary with `clang-10` and `address` sanitizer in `relwithdebuginfo` +Build ClickHouse binary with `clang-11` and `address` sanitizer in `relwithdebuginfo` mode: ``` $ mkdir $HOME/some_clickhouse -$ ./packager --output-dir=$HOME/some_clickhouse --package-type binary --compiler=clang-10 --sanitizer=address +$ ./packager --output-dir=$HOME/some_clickhouse --package-type binary --compiler=clang-11 --sanitizer=address $ ls -l $HOME/some_clickhouse -rwxr-xr-x 1 root root 787061952 clickhouse lrwxrwxrwx 1 root root 10 clickhouse-benchmark -> clickhouse diff --git a/docker/packager/binary/Dockerfile b/docker/packager/binary/Dockerfile index 74de1a3e9bd..56b2af5cf84 100644 --- a/docker/packager/binary/Dockerfile +++ b/docker/packager/binary/Dockerfile @@ -4,14 +4,22 @@ FROM ubuntu:20.04 ENV DEBIAN_FRONTEND=noninteractive LLVM_VERSION=11 RUN apt-get update \ - && apt-get install ca-certificates lsb-release wget gnupg apt-transport-https \ + && apt-get install \ + apt-transport-https \ + apt-utils \ + ca-certificates \ + dnsutils \ + gnupg \ + iputils-ping \ + lsb-release \ + wget \ --yes --no-install-recommends --verbose-versions \ && export LLVM_PUBKEY_HASH="bda960a8da687a275a2078d43c111d66b1c6a893a3275271beedf266c1ff4a0cdecb429c7a5cccf9f486ea7aa43fd27f" \ && wget -nv -O /tmp/llvm-snapshot.gpg.key https://apt.llvm.org/llvm-snapshot.gpg.key \ && echo "${LLVM_PUBKEY_HASH} /tmp/llvm-snapshot.gpg.key" | sha384sum -c \ && apt-key add /tmp/llvm-snapshot.gpg.key \ && export CODENAME="$(lsb_release --codename --short | tr 'A-Z' 'a-z')" \ - && echo "deb [trusted=yes] http://apt.llvm.org/${CODENAME}/ llvm-toolchain-${CODENAME}-${LLVM_VERSION} main" >> \ + && echo "deb [trusted=yes] https://apt.llvm.org/${CODENAME}/ llvm-toolchain-${CODENAME}-${LLVM_VERSION} main" >> \ /etc/apt/sources.list # initial packages @@ -27,35 +35,27 @@ RUN apt-get update \ RUN apt-get update \ && apt-get install \ bash \ - cmake \ + build-essential \ ccache \ - curl \ - gcc-9 \ - g++-9 \ - clang-10 \ - clang-tidy-10 \ - lld-10 \ - llvm-10 \ - llvm-10-dev \ clang-11 \ clang-tidy-11 \ + cmake \ + curl \ + g++-10 \ + gcc-10 \ + gdb \ + git \ + gperf \ + libicu-dev \ + libreadline-dev \ lld-11 \ llvm-11 \ llvm-11-dev \ - libicu-dev \ - libreadline-dev \ + moreutils \ ninja-build \ - gperf \ - git \ - opencl-headers \ - ocl-icd-libopencl1 \ - intel-opencl-icd \ - tzdata \ - gperf \ - cmake \ - gdb \ + pigz \ rename \ - build-essential \ + tzdata \ --yes --no-install-recommends # This symlink required by gcc to find lld compiler @@ -103,4 +103,4 @@ RUN rm /etc/apt/sources.list.d/proposed-repositories.list && apt-get update COPY build.sh / -CMD ["/bin/bash", "/build.sh"] +CMD ["bash", "-c", "/build.sh 2>&1 | ts"] diff --git a/docker/packager/binary/build.sh b/docker/packager/binary/build.sh index a42789c6186..cf74105fbbb 100755 --- a/docker/packager/binary/build.sh +++ b/docker/packager/binary/build.sh @@ -11,17 +11,28 @@ 
tar xJf gcc-arm-8.3-2019.03-x86_64-aarch64-linux-gnu.tar.xz -C build/cmake/toolc mkdir -p build/cmake/toolchain/freebsd-x86_64 tar xJf freebsd-11.3-toolchain.tar.xz -C build/cmake/toolchain/freebsd-x86_64 --strip-components=1 +# Uncomment to debug ccache. Don't put ccache log in /output right away, or it +# will be confusingly packed into the "performance" package. +# export CCACHE_LOGFILE=/build/ccache.log +# export CCACHE_DEBUG=1 + mkdir -p build/build_docker cd build/build_docker -ccache --show-stats ||: -ccache --zero-stats ||: -ln -s /usr/lib/x86_64-linux-gnu/libOpenCL.so.1.0.0 /usr/lib/libOpenCL.so ||: rm -f CMakeCache.txt # Read cmake arguments into array (possibly empty) read -ra CMAKE_FLAGS <<< "${CMAKE_FLAGS:-}" cmake --debug-trycompile --verbose=1 -DCMAKE_VERBOSE_MAKEFILE=1 -LA "-DCMAKE_BUILD_TYPE=$BUILD_TYPE" "-DSANITIZE=$SANITIZER" -DENABLE_CHECK_HEAVY_BUILDS=1 "${CMAKE_FLAGS[@]}" .. + +ccache --show-config ||: +ccache --show-stats ||: +ccache --zero-stats ||: + # shellcheck disable=SC2086 # No quotes because I want it to expand to nothing if empty. ninja $NINJA_FLAGS clickhouse-bundle + +ccache --show-config ||: +ccache --show-stats ||: + mv ./programs/clickhouse* /output mv ./src/unit_tests_dbms /output ||: # may not exist for some binary builds find . -name '*.so' -print -exec mv '{}' /output \; @@ -65,8 +76,21 @@ then cp ../programs/server/config.xml /output/config cp ../programs/server/users.xml /output/config cp -r --dereference ../programs/server/config.d /output/config - tar -czvf "$COMBINED_OUTPUT.tgz" /output + tar -cv -I pigz -f "$COMBINED_OUTPUT.tgz" /output rm -r /output/* mv "$COMBINED_OUTPUT.tgz" /output fi -ccache --show-stats ||: + +if [ "${CCACHE_DEBUG:-}" == "1" ] +then + find . -name '*.ccache-*' -print0 \ + | tar -c -I pixz -f /output/ccache-debug.txz --null -T - +fi + +if [ -n "$CCACHE_LOGFILE" ] +then + # Compress the log as well, or else the CI will try to compress all log + # files in place, and will fail because this directory is not writable. + tar -cv -I pixz -f /output/ccache.log.txz "$CCACHE_LOGFILE" +fi + diff --git a/docker/packager/deb/Dockerfile b/docker/packager/deb/Dockerfile index 8fd89d60f85..2f1d28efe61 100644 --- a/docker/packager/deb/Dockerfile +++ b/docker/packager/deb/Dockerfile @@ -34,31 +34,25 @@ RUN curl -O https://clickhouse-builds.s3.yandex.net/utils/1/dpkg-deb \ # Libraries from OS are only needed to test the "unbundled" build (this is not used in production). RUN apt-get update \ && apt-get install \ - gcc-9 \ - g++-9 \ + alien \ clang-11 \ clang-tidy-11 \ + cmake \ + debhelper \ + devscripts \ + gdb \ + git \ + gperf \ lld-11 \ llvm-11 \ llvm-11-dev \ - clang-10 \ - clang-tidy-10 \ - lld-10 \ - llvm-10 \ - llvm-10-dev \ + moreutils \ ninja-build \ perl \ - pkg-config \ - devscripts \ - debhelper \ - git \ - tzdata \ - gperf \ - alien \ - cmake \ - gdb \ - moreutils \ pigz \ + pixz \ + pkg-config \ + tzdata \ --yes --no-install-recommends # NOTE: For some reason we have outdated version of gcc-10 in ubuntu 20.04 stable. diff --git a/docker/packager/deb/build.sh b/docker/packager/deb/build.sh index 6450e21d289..4e14574b738 100755 --- a/docker/packager/deb/build.sh +++ b/docker/packager/deb/build.sh @@ -2,10 +2,16 @@ set -x -e +# Uncomment to debug ccache. 
+# export CCACHE_LOGFILE=/build/ccache.log +# export CCACHE_DEBUG=1 + +ccache --show-config ||: ccache --show-stats ||: ccache --zero-stats ||: + read -ra ALIEN_PKGS <<< "${ALIEN_PKGS:-}" -build/release --no-pbuilder "${ALIEN_PKGS[@]}" | ts '%Y-%m-%d %H:%M:%S' +build/release "${ALIEN_PKGS[@]}" | ts '%Y-%m-%d %H:%M:%S' mv /*.deb /output mv -- *.changes /output mv -- *.buildinfo /output @@ -22,5 +28,19 @@ then mv /build/obj-*/src/unit_tests_dbms /output/binary fi fi + +ccache --show-config ||: ccache --show-stats ||: -ln -s /usr/lib/x86_64-linux-gnu/libOpenCL.so.1.0.0 /usr/lib/libOpenCL.so ||: + +if [ "${CCACHE_DEBUG:-}" == "1" ] +then + find /build -name '*.ccache-*' -print0 \ + | tar -c -I pixz -f /output/ccache-debug.txz --null -T - +fi + +if [ -n "$CCACHE_LOGFILE" ] +then + # Compress the log as well, or else the CI will try to compress all log + # files in place, and will fail because this directory is not writable. + tar -cv -I pixz -f /output/ccache.log.txz "$CCACHE_LOGFILE" +fi diff --git a/docker/packager/packager b/docker/packager/packager index 65c03cc10e3..836f30dec42 100755 --- a/docker/packager/packager +++ b/docker/packager/packager @@ -143,8 +143,7 @@ def parse_env_variables(build_type, compiler, sanitizer, package_type, image_typ cmake_flags.append('-DUSE_GTEST=1') if unbundled: - # TODO: fix build with ENABLE_RDKAFKA - cmake_flags.append('-DUNBUNDLED=1 -DUSE_INTERNAL_RDKAFKA_LIBRARY=1 -DENABLE_ARROW=0 -DENABLE_ORC=0 -DENABLE_PARQUET=0') + cmake_flags.append('-DUNBUNDLED=1 -DUSE_INTERNAL_RDKAFKA_LIBRARY=1 -DENABLE_ARROW=0 -DENABLE_AVRO=0 -DENABLE_ORC=0 -DENABLE_PARQUET=0') if split_binary: cmake_flags.append('-DUSE_STATIC_LIBRARIES=0 -DSPLIT_SHARED_LIBRARIES=1 -DCLICKHOUSE_SPLIT_BINARY=1') @@ -182,9 +181,8 @@ if __name__ == "__main__": parser.add_argument("--clickhouse-repo-path", default=os.path.join(os.path.dirname(os.path.abspath(__file__)), os.pardir, os.pardir)) parser.add_argument("--output-dir", required=True) parser.add_argument("--build-type", choices=("debug", ""), default="") - parser.add_argument("--compiler", choices=("clang-10", "clang-10-darwin", "clang-10-aarch64", "clang-10-freebsd", - "clang-11", "clang-11-darwin", "clang-11-aarch64", "clang-11-freebsd", - "gcc-9", "gcc-10"), default="gcc-9") + parser.add_argument("--compiler", choices=("clang-11", "clang-11-darwin", "clang-11-aarch64", "clang-11-freebsd", + "gcc-10"), default="clang-11") parser.add_argument("--sanitizer", choices=("address", "thread", "memory", "undefined", ""), default="") parser.add_argument("--unbundled", action="store_true") parser.add_argument("--split-binary", action="store_true") diff --git a/docker/packager/unbundled/Dockerfile b/docker/packager/unbundled/Dockerfile index f640c595f14..4dd6dbc61d8 100644 --- a/docker/packager/unbundled/Dockerfile +++ b/docker/packager/unbundled/Dockerfile @@ -35,9 +35,6 @@ RUN apt-get update \ libjemalloc-dev \ libmsgpack-dev \ libcurl4-openssl-dev \ - opencl-headers \ - ocl-icd-libopencl1 \ - intel-opencl-icd \ unixodbc-dev \ odbcinst \ tzdata \ diff --git a/docker/packager/unbundled/build.sh b/docker/packager/unbundled/build.sh index 54575ab977c..c43c6b5071e 100755 --- a/docker/packager/unbundled/build.sh +++ b/docker/packager/unbundled/build.sh @@ -5,7 +5,7 @@ set -x -e ccache --show-stats ||: ccache --zero-stats ||: read -ra ALIEN_PKGS <<< "${ALIEN_PKGS:-}" -build/release --no-pbuilder "${ALIEN_PKGS[@]}" | ts '%Y-%m-%d %H:%M:%S' +build/release "${ALIEN_PKGS[@]}" | ts '%Y-%m-%d %H:%M:%S' mv /*.deb /output mv -- *.changes /output mv -- *.buildinfo 
/output @@ -13,4 +13,3 @@ mv /*.rpm /output ||: # if exists mv /*.tgz /output ||: # if exists ccache --show-stats ||: -ln -s /usr/lib/x86_64-linux-gnu/libOpenCL.so.1.0.0 /usr/lib/libOpenCL.so ||: diff --git a/docker/server/Dockerfile b/docker/server/Dockerfile index 295784a6184..d302fec7417 100644 --- a/docker/server/Dockerfile +++ b/docker/server/Dockerfile @@ -1,9 +1,24 @@ FROM ubuntu:20.04 ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/" -ARG version=21.4.1.* +ARG version=21.6.1.* ARG gosu_ver=1.10 +# set a non-empty deb_location_url to create a docker image +# from debs created by a CI build, for example: +# docker build . --network host --build-arg version="21.4.1.6282" --build-arg deb_location_url="https://clickhouse-builds.s3.yandex.net/21852/069cfbff388b3d478d1a16dc7060b48073f5d522/clickhouse_build_check/clang-11_relwithdebuginfo_none_bundled_unsplitted_disable_False_deb/" -t filimonovq/clickhouse-server:pr21852 +ARG deb_location_url="" + +# set a non-empty single_binary_location_url to create a docker image +# from a single binary URL (useful for non-standard builds - with sanitizers, or for arm64). +# for example (run on an aarch64 server): +# docker build . --network host --build-arg single_binary_location_url="https://builds.clickhouse.tech/master/aarch64/clickhouse" -t altinity/clickhouse-server:master-testing-arm +# note: clickhouse-odbc-bridge is not supported in this case. +ARG single_binary_location_url="" + +# see https://github.com/moby/moby/issues/4032#issuecomment-192327844 +ARG DEBIAN_FRONTEND=noninteractive + # user/group precreated explicitly with fixed uid/gid on purpose. # It is especially important for rootless containers: in that case the entrypoint # can't do chown, and owners of mounted volumes should be configured externally.
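The `ARG DEBIAN_FRONTEND=noninteractive` added above follows the linked moby issue: unlike `ENV`, a build-time `ARG` silences debconf prompts while the image is being built but is not baked into the final image. A quick way to check this behaviour, sketched with a hypothetical local tag (not part of the patch):

```bash
# Build the image; ARG DEBIAN_FRONTEND only applies while RUN steps execute.
docker build -t clickhouse-server:test .
# The variable is absent from the resulting image's environment:
docker run --rm clickhouse-server:test env | grep DEBIAN_FRONTEND \
    || echo "DEBIAN_FRONTEND is not persisted into the image"
```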
@@ -19,19 +34,39 @@ RUN groupadd -r clickhouse --gid=101 \ ca-certificates \ dirmngr \ gnupg \ + locales \ + wget \ + tzdata \ && mkdir -p /etc/apt/sources.list.d \ && apt-key adv --keyserver keyserver.ubuntu.com --recv E0C56BD4 \ && echo $repository > /etc/apt/sources.list.d/clickhouse.list \ - && apt-get update \ - && env DEBIAN_FRONTEND=noninteractive \ - apt-get --yes -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold" upgrade \ - && env DEBIAN_FRONTEND=noninteractive \ - apt-get install --allow-unauthenticated --yes --no-install-recommends \ - clickhouse-common-static=$version \ - clickhouse-client=$version \ - clickhouse-server=$version \ - locales \ - wget \ + && if [ -n "$deb_location_url" ]; then \ + echo "installing from custom url with deb packages: $deb_location_url" \ + && rm -rf /tmp/clickhouse_debs \ + && mkdir -p /tmp/clickhouse_debs \ + && wget --progress=bar:force:noscroll "${deb_location_url}/clickhouse-common-static_${version}_amd64.deb" -P /tmp/clickhouse_debs \ + && wget --progress=bar:force:noscroll "${deb_location_url}/clickhouse-client_${version}_all.deb" -P /tmp/clickhouse_debs \ + && wget --progress=bar:force:noscroll "${deb_location_url}/clickhouse-server_${version}_all.deb" -P /tmp/clickhouse_debs \ + && dpkg -i /tmp/clickhouse_debs/*.deb ; \ + elif [ -n "$single_binary_location_url" ]; then \ + echo "installing from single binary url: $single_binary_location_url" \ + && rm -rf /tmp/clickhouse_binary \ + && mkdir -p /tmp/clickhouse_binary \ + && wget --progress=bar:force:noscroll "$single_binary_location_url" -O /tmp/clickhouse_binary/clickhouse \ + && chmod +x /tmp/clickhouse_binary/clickhouse \ + && /tmp/clickhouse_binary/clickhouse install --user "clickhouse" --group "clickhouse" ; \ + else \ + echo "installing from repository: $repository" \ + && apt-get update \ + && apt-get --yes -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold" upgrade \ + && apt-get install --allow-unauthenticated --yes --no-install-recommends \ + clickhouse-common-static=$version \ + clickhouse-client=$version \ + clickhouse-server=$version ; \ + fi \ + && wget --progress=bar:force:noscroll "https://github.com/tianon/gosu/releases/download/$gosu_ver/gosu-$(dpkg --print-architecture)" -O /bin/gosu \ + && chmod +x /bin/gosu \ + && clickhouse-local -q 'SELECT * FROM system.build_options' \ && rm -rf \ /var/lib/apt/lists/* \ /var/cache/debconf \ @@ -43,8 +78,6 @@ RUN groupadd -r clickhouse --gid=101 \ # we need to allow "others" access to clickhouse folder, because docker container # can be started with arbitrary uid (openshift usecase) -ADD https://github.com/tianon/gosu/releases/download/$gosu_ver/gosu-amd64 /bin/gosu - RUN locale-gen en_US.UTF-8 ENV LANG en_US.UTF-8 ENV LANGUAGE en_US:en @@ -55,10 +88,7 @@ RUN mkdir /docker-entrypoint-initdb.d COPY docker_related_config.xml /etc/clickhouse-server/config.d/ COPY entrypoint.sh /entrypoint.sh - -RUN chmod +x \ - /entrypoint.sh \ - /bin/gosu +RUN chmod +x /entrypoint.sh EXPOSE 9000 8123 9009 VOLUME /var/lib/clickhouse diff --git a/docker/server/Dockerfile.alpine b/docker/server/Dockerfile.alpine index 0f9de1996ab..cd192c0c9da 100644 --- a/docker/server/Dockerfile.alpine +++ b/docker/server/Dockerfile.alpine @@ -21,7 +21,9 @@ RUN addgroup -S -g 101 clickhouse \ && chown clickhouse:clickhouse /var/lib/clickhouse \ && chown root:clickhouse /var/log/clickhouse-server \ && chmod +x /entrypoint.sh \ - && apk add --no-cache su-exec bash \ + && apk add --no-cache su-exec bash tzdata \ + && cp
/usr/share/zoneinfo/UTC /etc/localtime \ + && echo "UTC" > /etc/timezone \ && chmod ugo+Xrw -R /var/lib/clickhouse /var/log/clickhouse-server /etc/clickhouse-server /etc/clickhouse-client # we need to allow "others" access to clickhouse folder, because docker container diff --git a/docker/server/entrypoint.sh b/docker/server/entrypoint.sh index 0138a165505..4486b0d9d7f 100755 --- a/docker/server/entrypoint.sh +++ b/docker/server/entrypoint.sh @@ -38,17 +38,16 @@ if ! $gosu test -f "$CLICKHOUSE_CONFIG" -a -r "$CLICKHOUSE_CONFIG"; then exit 1 fi -# port is needed to check if clickhouse-server is ready for connections -HTTP_PORT="$(clickhouse extract-from-config --config-file "$CLICKHOUSE_CONFIG" --key=http_port)" - # get CH directories locations DATA_DIR="$(clickhouse extract-from-config --config-file "$CLICKHOUSE_CONFIG" --key=path || true)" TMP_DIR="$(clickhouse extract-from-config --config-file "$CLICKHOUSE_CONFIG" --key=tmp_path || true)" USER_PATH="$(clickhouse extract-from-config --config-file "$CLICKHOUSE_CONFIG" --key=user_files_path || true)" LOG_PATH="$(clickhouse extract-from-config --config-file "$CLICKHOUSE_CONFIG" --key=logger.log || true)" -LOG_DIR="$(dirname "$LOG_PATH" || true)" +LOG_DIR="" +if [ -n "$LOG_PATH" ]; then LOG_DIR="$(dirname "$LOG_PATH")"; fi ERROR_LOG_PATH="$(clickhouse extract-from-config --config-file "$CLICKHOUSE_CONFIG" --key=logger.errorlog || true)" -ERROR_LOG_DIR="$(dirname "$ERROR_LOG_PATH" || true)" +ERROR_LOG_DIR="" +if [ -n "$ERROR_LOG_PATH" ]; then ERROR_LOG_DIR="$(dirname "$ERROR_LOG_PATH")"; fi FORMAT_SCHEMA_PATH="$(clickhouse extract-from-config --config-file "$CLICKHOUSE_CONFIG" --key=format_schema_path || true)" CLICKHOUSE_USER="${CLICKHOUSE_USER:-default}" @@ -106,6 +105,9 @@ EOT fi if [ -n "$(ls /docker-entrypoint-initdb.d/)" ] || [ -n "$CLICKHOUSE_DB" ]; then + # port is needed to check if clickhouse-server is ready for connections + HTTP_PORT="$(clickhouse extract-from-config --config-file "$CLICKHOUSE_CONFIG" --key=http_port)" + # Listen only on localhost until the initialization is done $gosu /usr/bin/clickhouse-server --config-file="$CLICKHOUSE_CONFIG" -- --listen_host=127.0.0.1 & pid="$!" 
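The `LOG_DIR`/`ERROR_LOG_DIR` guards added to the entrypoint above work around a `dirname` quirk: `dirname ""` prints `.`, so a logger path that is absent from the config used to resolve to the current directory instead of staying empty. A standalone sketch of the pitfall and the guard (illustrative only, not part of the patch):

```bash
#!/usr/bin/env bash
LOG_PATH=""                                   # e.g. logger.log is missing from the config
echo "unguarded: '$(dirname "$LOG_PATH")'"    # prints '.', a bogus directory
LOG_DIR=""
if [ -n "$LOG_PATH" ]; then LOG_DIR="$(dirname "$LOG_PATH")"; fi
echo "guarded: '$LOG_DIR'"                    # stays empty, so nothing is created or chowned
```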
diff --git a/docker/test/Dockerfile b/docker/test/Dockerfile index e727d2a3ecf..0e4646386ce 100644 --- a/docker/test/Dockerfile +++ b/docker/test/Dockerfile @@ -1,7 +1,7 @@ FROM ubuntu:18.04 ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/" -ARG version=21.4.1.* +ARG version=21.6.1.* RUN apt-get update && \ apt-get install -y apt-transport-https dirmngr && \ diff --git a/docker/test/base/Dockerfile b/docker/test/base/Dockerfile index e8653c2122e..44b9d42d6a1 100644 --- a/docker/test/base/Dockerfile +++ b/docker/test/base/Dockerfile @@ -51,13 +51,13 @@ RUN apt-get update \ # Sanitizer options for services (clickhouse-server) RUN echo "TSAN_OPTIONS='verbosity=1000 halt_on_error=1 history_size=7'" >> /etc/environment; \ echo "UBSAN_OPTIONS='print_stacktrace=1'" >> /etc/environment; \ - echo "MSAN_OPTIONS='abort_on_error=1'" >> /etc/environment; \ + echo "MSAN_OPTIONS='abort_on_error=1 poison_in_dtor=1'" >> /etc/environment; \ echo "LSAN_OPTIONS='suppressions=/usr/share/clickhouse-test/config/lsan_suppressions.txt'" >> /etc/environment; \ ln -s /usr/lib/llvm-${LLVM_VERSION}/bin/llvm-symbolizer /usr/bin/llvm-symbolizer; # Sanitizer options for current shell (not current, but the one that will be spawned on "docker run") # (but w/o verbosity for TSAN, otherwise test.reference will not match) ENV TSAN_OPTIONS='halt_on_error=1 history_size=7' ENV UBSAN_OPTIONS='print_stacktrace=1' -ENV MSAN_OPTIONS='abort_on_error=1' +ENV MSAN_OPTIONS='abort_on_error=1 poison_in_dtor=1' CMD sleep 1 diff --git a/docker/test/fasttest/Dockerfile b/docker/test/fasttest/Dockerfile index 64be52d8e30..2864f7fc4da 100644 --- a/docker/test/fasttest/Dockerfile +++ b/docker/test/fasttest/Dockerfile @@ -1,7 +1,7 @@ # docker build -t yandex/clickhouse-fasttest . FROM ubuntu:20.04 -ENV DEBIAN_FRONTEND=noninteractive LLVM_VERSION=10 +ENV DEBIAN_FRONTEND=noninteractive LLVM_VERSION=11 RUN apt-get update \ && apt-get install ca-certificates lsb-release wget gnupg apt-transport-https \ @@ -43,20 +43,20 @@ RUN apt-get update \ clang-tidy-${LLVM_VERSION} \ cmake \ curl \ - lsof \ expect \ fakeroot \ - git \ gdb \ + git \ gperf \ lld-${LLVM_VERSION} \ llvm-${LLVM_VERSION} \ + lsof \ moreutils \ ninja-build \ psmisc \ python3 \ - python3-pip \ python3-lxml \ + python3-pip \ python3-requests \ python3-termcolor \ rename \ diff --git a/docker/test/fasttest/run.sh b/docker/test/fasttest/run.sh index 649f9f812e1..a7cc398e5c9 100755 --- a/docker/test/fasttest/run.sh +++ b/docker/test/fasttest/run.sh @@ -8,6 +8,9 @@ trap 'kill $(jobs -pr) ||:' EXIT # that we can run the "everything else" stage from the cloned source. stage=${stage:-} +# Compiler version, normally set by Dockerfile +export LLVM_VERSION=${LLVM_VERSION:-11} + # A variable to pass additional flags to CMake. # Here we explicitly default it to nothing so that bash doesn't complain about # it being undefined. Also read it as array so that we can pass an empty list @@ -70,7 +73,7 @@ function start_server --path "$FASTTEST_DATA" --user_files_path "$FASTTEST_DATA/user_files" --top_level_domains_path "$FASTTEST_DATA/top_level_domains" - --test_keeper_server.log_storage_path "$FASTTEST_DATA/coordination" + --keeper_server.log_storage_path "$FASTTEST_DATA/coordination" ) clickhouse-server "${opts[@]}" &>> "$FASTTEST_OUTPUT/server.log" & server_pid=$! 
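The `export LLVM_VERSION=${LLVM_VERSION:-11}` line above relies on bash default expansion: a value provided by the Dockerfile (or the caller's environment) wins, and `11` is only the fallback. A minimal sketch of the idiom (illustrative only):

```bash
#!/usr/bin/env bash
unset LLVM_VERSION
echo "${LLVM_VERSION:-11}"    # -> 11: unset or empty falls back to the default

LLVM_VERSION=12
echo "${LLVM_VERSION:-11}"    # -> 12: an existing non-empty value is kept
```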
@@ -124,22 +127,26 @@ continue function clone_root { - git clone https://github.com/ClickHouse/ClickHouse.git -- "$FASTTEST_SOURCE" | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/clone_log.txt" + git clone --depth 1 https://github.com/ClickHouse/ClickHouse.git -- "$FASTTEST_SOURCE" 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/clone_log.txt" ( cd "$FASTTEST_SOURCE" if [ "$PULL_REQUEST_NUMBER" != "0" ]; then - if git fetch origin "+refs/pull/$PULL_REQUEST_NUMBER/merge"; then + if git fetch --depth 1 origin "+refs/pull/$PULL_REQUEST_NUMBER/merge"; then git checkout FETCH_HEAD - echo 'Clonned merge head' + echo "Checked out pull/$PULL_REQUEST_NUMBER/merge ($(git rev-parse FETCH_HEAD))" else - git fetch origin "+refs/pull/$PULL_REQUEST_NUMBER/head" + git fetch --depth 1 origin "+refs/pull/$PULL_REQUEST_NUMBER/head" git checkout "$COMMIT_SHA" - echo 'Checked out to commit' + echo "Checked out nominal SHA $COMMIT_SHA for PR $PULL_REQUEST_NUMBER" fi else if [ -v COMMIT_SHA ]; then + git fetch --depth 1 origin "$COMMIT_SHA" git checkout "$COMMIT_SHA" + echo "Checked out nominal SHA $COMMIT_SHA for master" + else + echo "Using default repository head $(git rev-parse HEAD)" fi fi ) @@ -181,7 +188,7 @@ function clone_submodules ) git submodule sync - git submodule update --init --recursive "${SUBMODULES_TO_UPDATE[@]}" + git submodule update --depth 1 --init --recursive "${SUBMODULES_TO_UPDATE[@]}" git submodule foreach git reset --hard git submodule foreach git checkout @ -f git submodule foreach git clean -xfd @@ -215,7 +222,7 @@ function run_cmake ( cd "$FASTTEST_BUILD" - cmake "$FASTTEST_SOURCE" -DCMAKE_CXX_COMPILER=clang++-10 -DCMAKE_C_COMPILER=clang-10 "${CMAKE_LIBS_CONFIG[@]}" "${FASTTEST_CMAKE_FLAGS[@]}" | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/cmake_log.txt" + cmake "$FASTTEST_SOURCE" -DCMAKE_CXX_COMPILER="clang++-${LLVM_VERSION}" -DCMAKE_C_COMPILER="clang-${LLVM_VERSION}" "${CMAKE_LIBS_CONFIG[@]}" "${FASTTEST_CMAKE_FLAGS[@]}" 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/cmake_log.txt" ) } @@ -223,7 +230,7 @@ function build { ( cd "$FASTTEST_BUILD" - time ninja clickhouse-bundle | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/build_log.txt" + time ninja clickhouse-bundle 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/build_log.txt" if [ "$COPY_CLICKHOUSE_BINARY_TO_OUTPUT" -eq "1" ]; then cp programs/clickhouse "$FASTTEST_OUTPUT/clickhouse" fi @@ -292,6 +299,8 @@ function run_tests 01318_decrypt # Depends on OpenSSL 01663_aes_msan # Depends on OpenSSL 01667_aes_args_check # Depends on OpenSSL + 01776_decrypt_aead_size_check # Depends on OpenSSL + 01811_filter_by_null # Depends on OpenSSL 01281_unsucceeded_insert_select_queries_counter 01292_create_user 01294_lazy_database_concurrent @@ -299,10 +308,8 @@ function run_tests 01354_order_by_tuple_collate_const 01355_ilike 01411_bayesian_ab_testing - 01532_collate_in_low_cardinality - 01533_collate_in_nullable - 01542_collate_in_array - 01543_collate_in_tuple + collate + collation _orc_ arrow avro @@ -357,6 +364,12 @@ function run_tests # JSON functions 01666_blns + + # Requires postgresql-client + 01802_test_postgresql_protocol_with_row_policy + + # Depends on AWS + 01801_s3_cluster ) (time clickhouse-test --hung-check -j 8 --order=random --use-skip-list --no-long --testname --shard --zookeeper --skip "${TESTS_TO_SKIP[@]}" -- "$FASTTEST_FOCUS" 2>&1 ||:) | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/test_log.txt" @@ -419,7 +432,7 @@ case "$stage" in # See the compatibility hacks in `clone_root` stage above. 
Remove at the same time, # after Nov 1, 2020. cd "$FASTTEST_WORKSPACE" - clone_submodules | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/submodule_log.txt" + clone_submodules 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/submodule_log.txt" ;& "run_cmake") run_cmake @@ -430,7 +443,7 @@ case "$stage" in "configure") # The `install_log.txt` is also needed for compatibility with old CI task -- # if there is no log, it will decide that build failed. - configure | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/install_log.txt" + configure 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/install_log.txt" ;& "run_tests") run_tests diff --git a/docker/test/fuzzer/run-fuzzer.sh b/docker/test/fuzzer/run-fuzzer.sh index 766fec76179..626bedb453c 100755 --- a/docker/test/fuzzer/run-fuzzer.sh +++ b/docker/test/fuzzer/run-fuzzer.sh @@ -4,7 +4,9 @@ set -eux set -o pipefail trap "exit" INT TERM -trap 'kill $(jobs -pr) ||:' EXIT +# The watchdog is in the separate process group, so we have to kill it separately +# if the script terminates earlier. +trap 'kill $(jobs -pr) ${watchdog_pid:-} ||:' EXIT stage=${stage:-} script_dir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )" @@ -14,35 +16,28 @@ BINARY_TO_DOWNLOAD=${BINARY_TO_DOWNLOAD:="clang-11_debug_none_bundled_unsplitted function clone { -( + # The download() function is dependent on CI binaries anyway, so we can take + # the repo from the CI as well. For local runs, start directly from the "fuzz" + # stage. rm -rf ch ||: - mkdir ch - cd ch - - git init - git remote add origin https://github.com/ClickHouse/ClickHouse - - # Network is unreliable. GitHub neither. - for _ in {1..100}; do git fetch --depth=100 origin "$SHA_TO_TEST" && break; sleep 1; done - # Used to obtain the list of modified or added tests - for _ in {1..100}; do git fetch --depth=100 origin master && break; sleep 1; done - - # If not master, try to fetch pull/.../{head,merge} - if [ "$PR_TO_TEST" != "0" ] - then - for _ in {1..100}; do git fetch --depth=100 origin "refs/pull/$PR_TO_TEST/*:refs/heads/pull/$PR_TO_TEST/*" && break; sleep 1; done - fi - - git checkout "$SHA_TO_TEST" -) + mkdir ch ||: + wget -nv -nd -c "https://clickhouse-test-reports.s3.yandex.net/$PR_TO_TEST/$SHA_TO_TEST/repo/clickhouse_no_subs.tar.gz" + tar -C ch --strip-components=1 -xf clickhouse_no_subs.tar.gz + ls -lath ||: } function download { - wget -nv -nd -c "https://clickhouse-builds.s3.yandex.net/$PR_TO_TEST/$SHA_TO_TEST/clickhouse_build_check/$BINARY_TO_DOWNLOAD/clickhouse" + wget -nv -nd -c "https://clickhouse-builds.s3.yandex.net/$PR_TO_TEST/$SHA_TO_TEST/clickhouse_build_check/$BINARY_TO_DOWNLOAD/clickhouse" & + wget -nv -nd -c "https://clickhouse-test-reports.s3.yandex.net/$PR_TO_TEST/$SHA_TO_TEST/repo/ci-changed-files.txt" & + wait + chmod +x clickhouse ln -s ./clickhouse ./clickhouse-server ln -s ./clickhouse ./clickhouse-client + + # clickhouse-server is in the current dir + export PATH="$PWD:$PATH" } function configure @@ -74,25 +69,38 @@ function watchdog killall -9 clickhouse-client ||: } +function filter_exists +{ + local path + for path in "$@"; do + if [ -e "$path" ]; then + echo "$path" + else + echo "'$path' does not exist" >&2 + fi + done +} + function fuzz { # Obtain the list of newly added tests. They will be fuzzed in more extreme way than other tests. - cd ch - NEW_TESTS=$(git diff --name-only "$(git merge-base origin/master "$SHA_TO_TEST"~)" "$SHA_TO_TEST" | grep -P 'tests/queries/0_stateless/.*\.sql' | sed -r -e 's!^!ch/!' | sort -R) - cd ..
+ # Don't overwrite the NEW_TESTS_OPT so that it can be set from the environment. + NEW_TESTS="$(grep -P 'tests/queries/0_stateless/.*\.sql' ci-changed-files.txt | sed -r -e 's!^!ch/!' | sort -R)" + # ci-changed-files.txt also contains files that have been deleted/renamed, so filter them out. + NEW_TESTS="$(filter_exists $NEW_TESTS)" if [[ -n "$NEW_TESTS" ]] then - NEW_TESTS_OPT="--interleave-queries-file ${NEW_TESTS}" + NEW_TESTS_OPT="${NEW_TESTS_OPT:---interleave-queries-file ${NEW_TESTS}}" else - NEW_TESTS_OPT="" + NEW_TESTS_OPT="${NEW_TESTS_OPT:-}" fi - ./clickhouse-server --config-file db/config.xml -- --path db 2>&1 | tail -100000 > server.log & + clickhouse-server --config-file db/config.xml -- --path db 2>&1 | tail -100000 > server.log & server_pid=$! kill -0 $server_pid - while ! ./clickhouse-client --query "select 1" && kill -0 $server_pid ; do echo . ; sleep 1 ; done - ./clickhouse-client --query "select 1" + while ! clickhouse-client --query "select 1" && kill -0 $server_pid ; do echo . ; sleep 1 ; done + clickhouse-client --query "select 1" kill -0 $server_pid echo Server started @@ -111,14 +119,14 @@ continue # SC2012: Use find instead of ls to better handle non-alphanumeric filenames. They are all alphanumeric. # SC2046: Quote this to prevent word splitting. Actually I need word splitting. # shellcheck disable=SC2012,SC2046 - ./clickhouse-client --query-fuzzer-runs=1000 --queries-file $(ls -1 ch/tests/queries/0_stateless/*.sql | sort -R) $NEW_TESTS_OPT \ + clickhouse-client --query-fuzzer-runs=1000 --queries-file $(ls -1 ch/tests/queries/0_stateless/*.sql | sort -R) $NEW_TESTS_OPT \ > >(tail -n 100000 > fuzzer.log) \ 2>&1 \ || fuzzer_exit_code=$? echo "Fuzzer exit code is $fuzzer_exit_code" - ./clickhouse-client --query "select elapsed, query from system.processes" ||: + clickhouse-client --query "select elapsed, query from system.processes" ||: killall clickhouse-server ||: for _ in {1..10} do @@ -190,7 +198,7 @@ case "$stage" in # Lost connection to the server. This probably means that the server died # with abort. echo "failure" > status.txt - if ! grep -ao "Received signal.*\|Logical error.*\|Assertion.*failed\|Failed assertion.*\|.*runtime error: .*\|.*is located.*\|SUMMARY: MemorySanitizer:.*\|SUMMARY: ThreadSanitizer:.*\|.*_LIBCPP_ASSERT.*" server.log > description.txt + if ! grep -ao "Received signal.*\|Logical error.*\|Assertion.*failed\|Failed assertion.*\|.*runtime error: .*\|.*is located.*\|SUMMARY: AddressSanitizer:.*\|SUMMARY: MemorySanitizer:.*\|SUMMARY: ThreadSanitizer:.*\|.*_LIBCPP_ASSERT.*" server.log > description.txt then echo "Lost connection to server. See the logs."
> description.txt fi diff --git a/docker/test/integration/base/Dockerfile b/docker/test/integration/base/Dockerfile index 938d8d45ffd..1c962f1bf8f 100644 --- a/docker/test/integration/base/Dockerfile +++ b/docker/test/integration/base/Dockerfile @@ -19,7 +19,8 @@ RUN apt-get update \ tar \ krb5-user \ iproute2 \ - lsof + lsof \ + g++ RUN rm -rf \ /var/lib/apt/lists/* \ /var/cache/debconf \ diff --git a/docker/test/integration/runner/Dockerfile b/docker/test/integration/runner/Dockerfile index e0e5e36a3d6..783e689ed01 100644 --- a/docker/test/integration/runner/Dockerfile +++ b/docker/test/integration/runner/Dockerfile @@ -31,6 +31,7 @@ RUN apt-get update \ software-properties-common \ libkrb5-dev \ krb5-user \ + g++ \ && rm -rf \ /var/lib/apt/lists/* \ /var/cache/debconf \ diff --git a/docker/test/integration/runner/compose/docker_compose_mysql_8_0_for_materialize_mysql.yml b/docker/test/integration/runner/compose/docker_compose_mysql_8_0_for_materialize_mysql.yml index 7c8a930c84e..93bfee35caf 100644 --- a/docker/test/integration/runner/compose/docker_compose_mysql_8_0_for_materialize_mysql.yml +++ b/docker/test/integration/runner/compose/docker_compose_mysql_8_0_for_materialize_mysql.yml @@ -6,7 +6,7 @@ services: environment: MYSQL_ROOT_PASSWORD: clickhouse ports: - - 33308:3306 + - 3309:3306 command: --server_id=100 --log-bin='mysql-bin-1.log' --default_authentication_plugin='mysql_native_password' --default-time-zone='+3:00' diff --git a/docker/test/integration/runner/compose/docker_compose_mysql_cluster.yml b/docker/test/integration/runner/compose/docker_compose_mysql_cluster.yml new file mode 100644 index 00000000000..d0674362709 --- /dev/null +++ b/docker/test/integration/runner/compose/docker_compose_mysql_cluster.yml @@ -0,0 +1,23 @@ +version: '2.3' +services: + mysql2: + image: mysql:5.7 + restart: always + environment: + MYSQL_ROOT_PASSWORD: clickhouse + ports: + - 3348:3306 + mysql3: + image: mysql:5.7 + restart: always + environment: + MYSQL_ROOT_PASSWORD: clickhouse + ports: + - 3388:3306 + mysql4: + image: mysql:5.7 + restart: always + environment: + MYSQL_ROOT_PASSWORD: clickhouse + ports: + - 3368:3306 diff --git a/docker/test/integration/runner/compose/docker_compose_postgres_cluster.yml b/docker/test/integration/runner/compose/docker_compose_postgres_cluster.yml new file mode 100644 index 00000000000..d04c8a2f3a6 --- /dev/null +++ b/docker/test/integration/runner/compose/docker_compose_postgres_cluster.yml @@ -0,0 +1,23 @@ +version: '2.3' +services: + postgres2: + image: postgres + restart: always + environment: + POSTGRES_PASSWORD: mysecretpassword + ports: + - 5421:5432 + postgres3: + image: postgres + restart: always + environment: + POSTGRES_PASSWORD: mysecretpassword + ports: + - 5441:5432 + postgres4: + image: postgres + restart: always + environment: + POSTGRES_PASSWORD: mysecretpassword + ports: + - 5461:5432 diff --git a/docker/test/integration/runner/dockerd-entrypoint.sh b/docker/test/integration/runner/dockerd-entrypoint.sh index c0255d3d706..9b100d947b0 100755 --- a/docker/test/integration/runner/dockerd-entrypoint.sh +++ b/docker/test/integration/runner/dockerd-entrypoint.sh @@ -1,6 +1,17 @@ #!/bin/bash set -e +mkdir -p /etc/docker/ +cat > /etc/docker/daemon.json << EOF +{ + "ipv6": true, + "fixed-cidr-v6": "fd00::/8", + "ip-forward": true, + "insecure-registries" : ["dockerhub-proxy.sas.yp-c.yandex.net:5000"], + "registry-mirrors" : ["http://dockerhub-proxy.sas.yp-c.yandex.net:5000"] +} +EOF + dockerd --host=unix:///var/run/docker.sock 
--host=tcp://0.0.0.0:2375 &>/var/log/somefile & set +e @@ -21,6 +32,7 @@ export CLICKHOUSE_TESTS_SERVER_BIN_PATH=/clickhouse export CLICKHOUSE_TESTS_CLIENT_BIN_PATH=/clickhouse export CLICKHOUSE_TESTS_BASE_CONFIG_DIR=/clickhouse-config export CLICKHOUSE_ODBC_BRIDGE_BINARY_PATH=/clickhouse-odbc-bridge +export CLICKHOUSE_LIBRARY_BRIDGE_BINARY_PATH=/clickhouse-library-bridge export DOCKER_MYSQL_GOLANG_CLIENT_TAG=${DOCKER_MYSQL_GOLANG_CLIENT_TAG:=latest} export DOCKER_MYSQL_JAVA_CLIENT_TAG=${DOCKER_MYSQL_JAVA_CLIENT_TAG:=latest} diff --git a/docker/test/keeper-jepsen/Dockerfile b/docker/test/keeper-jepsen/Dockerfile new file mode 100644 index 00000000000..1a62d5e793f --- /dev/null +++ b/docker/test/keeper-jepsen/Dockerfile @@ -0,0 +1,39 @@ +# docker build -t yandex/clickhouse-keeper-jepsen-test . +FROM yandex/clickhouse-test-base + +ENV DEBIAN_FRONTEND=noninteractive +ENV CLOJURE_VERSION=1.10.3.814 + +# arguments +ENV PR_TO_TEST="" +ENV SHA_TO_TEST="" + +ENV NODES_USERNAME="root" +ENV NODES_PASSWORD="" +ENV TESTS_TO_RUN="30" +ENV TIME_LIMIT="30" + + +# volumes +ENV NODES_FILE_PATH="/nodes.txt" +ENV TEST_OUTPUT="/test_output" + +RUN mkdir "/root/.ssh" +RUN touch "/root/.ssh/known_hosts" + +# install java +RUN apt-get update && apt-get install default-jre default-jdk libjna-java libjna-jni ssh gnuplot graphviz --yes --no-install-recommends + +# install clojure +RUN curl -O "https://download.clojure.org/install/linux-install-${CLOJURE_VERSION}.sh" && \ + chmod +x "linux-install-${CLOJURE_VERSION}.sh" && \ + bash "./linux-install-${CLOJURE_VERSION}.sh" + +# install leiningen +RUN curl -O "https://raw.githubusercontent.com/technomancy/leiningen/stable/bin/lein" && \ + chmod +x ./lein && \ + mv ./lein /usr/bin + +COPY run.sh / + +CMD ["/bin/bash", "/run.sh"] diff --git a/docker/test/keeper-jepsen/run.sh b/docker/test/keeper-jepsen/run.sh new file mode 100644 index 00000000000..352585e16e3 --- /dev/null +++ b/docker/test/keeper-jepsen/run.sh @@ -0,0 +1,22 @@ +#!/usr/bin/env bash +set -euo pipefail + + +CLICKHOUSE_PACKAGE=${CLICKHOUSE_PACKAGE:="https://clickhouse-builds.s3.yandex.net/$PR_TO_TEST/$SHA_TO_TEST/clickhouse_build_check/clang-11_relwithdebuginfo_none_bundled_unsplitted_disable_False_binary/clickhouse"} +CLICKHOUSE_REPO_PATH=${CLICKHOUSE_REPO_PATH:=""} + + +if [ -z "$CLICKHOUSE_REPO_PATH" ]; then + CLICKHOUSE_REPO_PATH=ch + rm -rf ch ||: + mkdir ch ||: + wget -nv -nd -c "https://clickhouse-test-reports.s3.yandex.net/$PR_TO_TEST/$SHA_TO_TEST/repo/clickhouse_no_subs.tar.gz" + tar -C ch --strip-components=1 -xf clickhouse_no_subs.tar.gz + ls -lath ||: +fi + +cd "$CLICKHOUSE_REPO_PATH/tests/jepsen.clickhouse-keeper" + +(lein run test-all --nodes-file "$NODES_FILE_PATH" --username "$NODES_USERNAME" --logging-json --password "$NODES_PASSWORD" --time-limit "$TIME_LIMIT" --concurrency 50 -r 50 --snapshot-distance 100 --stale-log-gap 100 --reserved-log-items 10 --lightweight-run --clickhouse-source "$CLICKHOUSE_PACKAGE" -q --test-count "$TESTS_TO_RUN" || true) | tee "$TEST_OUTPUT/jepsen_run_all_tests.log" + +mv store "$TEST_OUTPUT/" diff --git a/docker/test/performance-comparison/compare.sh b/docker/test/performance-comparison/compare.sh index 4d862cf987e..093629e61fc 100755 --- a/docker/test/performance-comparison/compare.sh +++ b/docker/test/performance-comparison/compare.sh @@ -2,7 +2,9 @@ set -exu set -o pipefail trap "exit" INT TERM -trap 'kill $(jobs -pr) ||:' EXIT +# The watchdog is in the separate process group, so we have to kill it separately +# if the script terminates earlier. 
+trap 'kill $(jobs -pr) ${watchdog_pid:-} ||:' EXIT stage=${stage:-} script_dir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )" @@ -241,9 +243,12 @@ function run_tests profile_seconds_left=600 # Run the tests. + total_tests=$(echo "$test_files" | wc -w) + current_test=0 test_name="" for test in $test_files do + echo "$current_test of $total_tests tests complete" > status.txt # Check that both servers are alive, and restart them if they die. clickhouse-client --port $LEFT_SERVER_PORT --query "select 1 format Null" \ || { echo $test_name >> left-server-died.log ; restart ; } @@ -271,6 +276,7 @@ function run_tests profile_seconds_left=$(awk -F' ' \ 'BEGIN { s = '$profile_seconds_left'; } /^profile-total/ { s -= $2 } END { print s }' \ "$test_name-raw.tsv") + current_test=$((current_test + 1)) done unset TIMEFORMAT @@ -758,7 +764,7 @@ create view test_times_view as total_client_time, queries, query_max, - real / queries avg_real_per_query, + real / if(queries > 0, queries, 1) avg_real_per_query, query_min, runs from test_time @@ -779,7 +785,7 @@ create view test_times_view_total as sum(total_client_time), sum(queries), max(query_max), - sum(real) / sum(queries) avg_real_per_query, + sum(real) / if(sum(queries) > 0, sum(queries), 1) avg_real_per_query, min(query_min), -- Totaling the number of runs doesn't make sense, but use the max so -- that the reporting script doesn't complain about queries being too @@ -1013,6 +1019,7 @@ done wait # Create per-query flamegraphs +touch report/query-files.txt IFS=$'\n' for version in {right,left} do @@ -1147,20 +1154,21 @@ function upload_results return 0 fi - # Surprisingly, clickhouse-client doesn't understand --host 127.0.0.1:9000 - # so I have to extract host and port with clickhouse-local. I tried to use - # Poco URI parser to support this in the client, but it's broken and can't - # parse host:port. set +x # Don't show password in the log - clickhouse-client \ - $(clickhouse-local --query "with '${CHPC_DATABASE_URL}' as url select '--host ' || domain(url) || ' --port ' || toString(port(url)) format TSV") \ - --secure \ - --user "${CHPC_DATABASE_USER}" \ - --password "${CHPC_DATABASE_PASSWORD}" \ - --config "right/config/client_config.xml" \ - --database perftest \ - --date_time_input_format=best_effort \ - --query " + client=(clickhouse-client + # Surprisingly, clickhouse-client doesn't understand --host 127.0.0.1:9000 + # so I have to extract host and port with clickhouse-local. I tried to use + # Poco URI parser to support this in the client, but it's broken and can't + # parse host:port. + $(clickhouse-local --query "with '${CHPC_DATABASE_URL}' as url select '--host ' || domain(url) || ' --port ' || toString(port(url)) format TSV") + --secure + --user "${CHPC_DATABASE_USER}" + --password "${CHPC_DATABASE_PASSWORD}" + --config "right/config/client_config.xml" + --database perftest + --date_time_input_format=best_effort) + + "${client[@]}" --query " insert into query_metrics_v2 select toDate(event_time) event_date, @@ -1183,6 +1191,31 @@ function upload_results format TSV settings date_time_input_format='best_effort' " < report/all-query-metrics.tsv # Don't leave whitespace after INSERT: https://github.com/ClickHouse/ClickHouse/issues/16652 + + # Upload some run attributes. I use this weird form because it is the same + # form that can be used for historical data when you only have compare.log. 
+ cat compare.log \ + | sed -n ' + s/.*Model name:[[:space:]]\+\(.*\)$/metric lscpu-model-name \1/p; + s/.*L1d cache:[[:space:]]\+\(.*\)$/metric lscpu-l1d-cache \1/p; + s/.*L1i cache:[[:space:]]\+\(.*\)$/metric lscpu-l1i-cache \1/p; + s/.*L2 cache:[[:space:]]\+\(.*\)$/metric lscpu-l2-cache \1/p; + s/.*L3 cache:[[:space:]]\+\(.*\)$/metric lscpu-l3-cache \1/p; + s/.*left_sha=\(.*\)$/old-sha \1/p; + s/.*right_sha=\(.*\)/new-sha \1/p' \ + | awk ' + BEGIN { FS = "\t"; OFS = "\t" } + /^old-sha/ { old_sha=$2 } + /^new-sha/ { new_sha=$2 } + /^metric/ { print old_sha, new_sha, $2, $3 }' \ + | "${client[@]}" --query "INSERT INTO run_attributes_v1 FORMAT TSV" + + # Grepping numactl results from log is too crazy, I'll just call it again. + "${client[@]}" --query "INSERT INTO run_attributes_v1 FORMAT TSV" < + - + :: diff --git a/docker/test/performance-comparison/config/users.d/perf-comparison-tweaks-users.xml b/docker/test/performance-comparison/config/users.d/perf-comparison-tweaks-users.xml index 41bc7f777bf..63e23d8453c 100644 --- a/docker/test/performance-comparison/config/users.d/perf-comparison-tweaks-users.xml +++ b/docker/test/performance-comparison/config/users.d/perf-comparison-tweaks-users.xml @@ -17,6 +17,9 @@ 12 + + + 64Mi diff --git a/docker/test/performance-comparison/perf.py b/docker/test/performance-comparison/perf.py index f1c5df146aa..8231caceca8 100755 --- a/docker/test/performance-comparison/perf.py +++ b/docker/test/performance-comparison/perf.py @@ -66,7 +66,12 @@ reportStageEnd('parse') subst_elems = root.findall('substitutions/substitution') available_parameters = {} # { 'table': ['hits_10m', 'hits_100m'], ... } for e in subst_elems: - available_parameters[e.find('name').text] = [v.text for v in e.findall('values/value')] + name = e.find('name').text + values = [v.text for v in e.findall('values/value')] + if not values: + raise Exception(f'No values given for substitution {{{name}}}') + + available_parameters[name] = values # Takes parallel lists of templates, substitutes them with all combos of # parameters. The set of parameters is determined based on the first list. @@ -76,7 +81,10 @@ def substitute_parameters(query_templates, other_templates = []): query_results = [] other_results = [[]] * (len(other_templates)) for i, q in enumerate(query_templates): - keys = set(n for _, n, _, _ in string.Formatter().parse(q) if n) + # We need stable order of keys here, so that the order of substitutions + # is always the same, and the query indexes are consistent across test + # runs. + keys = sorted(set(n for _, n, _, _ in string.Formatter().parse(q) if n)) values = [available_parameters[k] for k in keys] combos = itertools.product(*values) for c in combos: @@ -263,8 +271,16 @@ for query_index in queries_to_run: for conn_index, c in enumerate(all_connections): try: prewarm_id = f'{query_prefix}.prewarm0' - # Will also detect too long queries during warmup stage - res = c.execute(q, query_id = prewarm_id, settings = {'max_execution_time': 10}) + + try: + # Will also detect too long queries during warmup stage + res = c.execute(q, query_id = prewarm_id, settings = {'max_execution_time': args.max_query_seconds}) + except clickhouse_driver.errors.Error as e: + # Add query id to the exception to make debugging easier. 
+ e.args = (prewarm_id, *e.args) + e.message = prewarm_id + ': ' + e.message + raise + print(f'prewarm\t{query_index}\t{prewarm_id}\t{conn_index}\t{c.last_query.elapsed}') except KeyboardInterrupt: raise @@ -311,8 +327,8 @@ for query_index in queries_to_run: for conn_index, c in enumerate(this_query_connections): try: - res = c.execute(q, query_id = run_id) - except Exception as e: + res = c.execute(q, query_id = run_id, settings = {'max_execution_time': args.max_query_seconds}) + except clickhouse_driver.errors.Error as e: # Add query id to the exception to make debugging easier. e.args = (run_id, *e.args) e.message = run_id + ': ' + e.message @@ -389,7 +405,7 @@ for query_index in queries_to_run: try: res = c.execute(q, query_id = run_id, settings = {'query_profiler_real_time_period_ns': 10000000}) print(f'profile\t{query_index}\t{run_id}\t{conn_index}\t{c.last_query.elapsed}') - except Exception as e: + except clickhouse_driver.errors.Error as e: # Add query id to the exception to make debugging easier. e.args = (run_id, *e.args) e.message = run_id + ': ' + e.message diff --git a/docker/test/performance-comparison/report.py b/docker/test/performance-comparison/report.py index 9d3ccabb788..42490971127 100755 --- a/docker/test/performance-comparison/report.py +++ b/docker/test/performance-comparison/report.py @@ -520,12 +520,13 @@ if args.report == 'main': for t in tables: print(t) - print(""" + print(f""" @@ -638,12 +639,13 @@ elif args.report == 'all-queries': for t in tables: print(t) - print(""" + print(f""" diff --git a/docker/test/pvs/Dockerfile b/docker/test/pvs/Dockerfile index 382b486dda3..2983be2305f 100644 --- a/docker/test/pvs/Dockerfile +++ b/docker/test/pvs/Dockerfile @@ -41,6 +41,6 @@ CMD echo "Running PVS version $PKG_VERSION" && cd /repo_folder && pvs-studio-ana && cmake . 
-D"ENABLE_EMBEDDED_COMPILER"=OFF -D"USE_INTERNAL_PROTOBUF_LIBRARY"=OFF -D"USE_INTERNAL_GRPC_LIBRARY"=OFF \ && ninja re2_st clickhouse_grpc_protos \ && pvs-studio-analyzer analyze -o pvs-studio.log -e contrib -j 4 -l ./licence.lic; \ + cp /repo_folder/pvs-studio.log /test_output; \ plog-converter -a GA:1,2 -t fullhtml -o /test_output/pvs-studio-html-report pvs-studio.log; \ plog-converter -a GA:1,2 -t tasklist -o /test_output/pvs-studio-task-report.txt pvs-studio.log - diff --git a/docker/test/split_build_smoke_test/Dockerfile b/docker/test/split_build_smoke_test/Dockerfile index c77db1c6c88..54a9eb17868 100644 --- a/docker/test/split_build_smoke_test/Dockerfile +++ b/docker/test/split_build_smoke_test/Dockerfile @@ -2,5 +2,6 @@ FROM yandex/clickhouse-binary-builder COPY run.sh /run.sh +COPY process_split_build_smoke_test_result.py / CMD /run.sh diff --git a/docker/test/split_build_smoke_test/process_split_build_smoke_test_result.py b/docker/test/split_build_smoke_test/process_split_build_smoke_test_result.py new file mode 100755 index 00000000000..58d6ba8c62a --- /dev/null +++ b/docker/test/split_build_smoke_test/process_split_build_smoke_test_result.py @@ -0,0 +1,61 @@ +#!/usr/bin/env python3 + +import os +import logging +import argparse +import csv + +RESULT_LOG_NAME = "run.log" + +def process_result(result_folder): + + status = "success" + description = 'Server started and responded' + summary = [("Smoke test", "OK")] + with open(os.path.join(result_folder, RESULT_LOG_NAME), 'r') as run_log: + lines = run_log.read().split('\n') + if not lines or lines[0].strip() != 'OK': + status = "failure" + logging.info("Lines is not ok: %s", str('\n'.join(lines))) + summary = [("Smoke test", "FAIL")] + description = 'Server failed to respond, see result in logs' + + result_logs = [] + server_log_path = os.path.join(result_folder, "clickhouse-server.log") + stderr_log_path = os.path.join(result_folder, "stderr.log") + client_stderr_log_path = os.path.join(result_folder, "clientstderr.log") + + if os.path.exists(server_log_path): + result_logs.append(server_log_path) + + if os.path.exists(stderr_log_path): + result_logs.append(stderr_log_path) + + if os.path.exists(client_stderr_log_path): + result_logs.append(client_stderr_log_path) + + return status, description, summary, result_logs + + +def write_results(results_file, status_file, results, status): + with open(results_file, 'w') as f: + out = csv.writer(f, delimiter='\t') + out.writerows(results) + with open(status_file, 'w') as f: + out = csv.writer(f, delimiter='\t') + out.writerow(status) + + +if __name__ == "__main__": + logging.basicConfig(level=logging.INFO, format='%(asctime)s %(message)s') + parser = argparse.ArgumentParser(description="ClickHouse script for parsing results of split build smoke test") + parser.add_argument("--in-results-dir", default='/test_output/') + parser.add_argument("--out-results-file", default='/test_output/test_results.tsv') + parser.add_argument("--out-status-file", default='/test_output/check_status.tsv') + args = parser.parse_args() + + state, description, test_results, logs = process_result(args.in_results_dir) + logging.info("Result parsed") + status = (state, description) + write_results(args.out_results_file, args.out_status_file, test_results, status) + logging.info("Result written") diff --git a/docker/test/split_build_smoke_test/run.sh b/docker/test/split_build_smoke_test/run.sh index eac9848030e..b565d7a481e 100755 --- a/docker/test/split_build_smoke_test/run.sh +++ 
b/docker/test/split_build_smoke_test/run.sh @@ -5,16 +5,18 @@ set -x install_and_run_server() { mkdir /unpacked tar -xzf /package_folder/shared_build.tgz -C /unpacked --strip 1 - LD_LIBRARY_PATH=/unpacked /unpacked/clickhouse-server --config /unpacked/config/config.xml >/var/log/clickhouse-server/stderr.log 2>&1 & + LD_LIBRARY_PATH=/unpacked /unpacked/clickhouse-server --config /unpacked/config/config.xml >/test_output/stderr.log 2>&1 & } run_client() { for i in {1..100}; do sleep 1 - LD_LIBRARY_PATH=/unpacked /unpacked/clickhouse-client --query "select 'OK'" 2>/var/log/clickhouse-server/clientstderr.log && break + LD_LIBRARY_PATH=/unpacked /unpacked/clickhouse-client --query "select 'OK'" > /test_output/run.log 2> /test_output/clientstderr.log && break [[ $i == 100 ]] && echo 'FAIL' done } install_and_run_server run_client +mv /var/log/clickhouse-server/clickhouse-server.log /test_output/clickhouse-server.log +/process_split_build_smoke_test_result.py || echo -e "failure\tCannot parse results" > /test_output/check_status.tsv diff --git a/docker/test/sqlancer/Dockerfile b/docker/test/sqlancer/Dockerfile index 38a773e65ad..6bcdc3df5cd 100644 --- a/docker/test/sqlancer/Dockerfile +++ b/docker/test/sqlancer/Dockerfile @@ -1,7 +1,7 @@ # docker build -t yandex/clickhouse-sqlancer-test . FROM ubuntu:20.04 -RUN apt-get update --yes && env DEBIAN_FRONTEND=noninteractive apt-get install wget unzip git openjdk-14-jdk maven --yes --no-install-recommends +RUN apt-get update --yes && env DEBIAN_FRONTEND=noninteractive apt-get install wget unzip git openjdk-14-jdk maven python3 --yes --no-install-recommends RUN wget https://github.com/sqlancer/sqlancer/archive/master.zip -O /sqlancer.zip RUN mkdir /sqlancer && \ @@ -10,4 +10,5 @@ RUN mkdir /sqlancer && \ RUN cd /sqlancer/sqlancer-master && mvn package -DskipTests COPY run.sh / +COPY process_sqlancer_result.py / CMD ["/bin/bash", "/run.sh"] diff --git a/docker/test/sqlancer/process_sqlancer_result.py b/docker/test/sqlancer/process_sqlancer_result.py new file mode 100755 index 00000000000..ede3cabc1c5 --- /dev/null +++ b/docker/test/sqlancer/process_sqlancer_result.py @@ -0,0 +1,75 @@ +#!/usr/bin/env python3 + +import os +import logging +import argparse +import csv + + +def process_result(result_folder): + status = "success" + summary = [] + paths = [] + tests = ["TLPWhere", "TLPGroupBy", "TLPHaving", "TLPWhereGroupBy", "TLPDistinct", "TLPAggregate"] + + for test in tests: + err_path = '{}/{}.err'.format(result_folder, test) + out_path = '{}/{}.out'.format(result_folder, test) + if not os.path.exists(err_path): + logging.info("No output err on path %s", err_path) + summary.append((test, "SKIPPED")) + elif not os.path.exists(out_path): + logging.info("No output log on path %s", out_path) + else: + paths.append(err_path) + paths.append(out_path) + with open(err_path, 'r') as f: + if 'AssertionError' in f.read(): + summary.append((test, "FAIL")) + status = 'failure' + else: + summary.append((test, "OK")) + + logs_path = '{}/logs.tar.gz'.format(result_folder) + if not os.path.exists(logs_path): + logging.info("No logs tar on path %s", logs_path) + else: + paths.append(logs_path) + stdout_path = '{}/stdout.log'.format(result_folder) + if not os.path.exists(stdout_path): + logging.info("No stdout log on path %s", stdout_path) + else: + paths.append(stdout_path) + stderr_path = '{}/stderr.log'.format(result_folder) + if not os.path.exists(stderr_path): + logging.info("No stderr log on path %s", stderr_path) + else: + paths.append(stderr_path) + + description = 
"SQLancer test run. See report" + + return status, description, summary, paths + + +def write_results(results_file, status_file, results, status): + with open(results_file, 'w') as f: + out = csv.writer(f, delimiter='\t') + out.writerows(results) + with open(status_file, 'w') as f: + out = csv.writer(f, delimiter='\t') + out.writerow(status) + + +if __name__ == "__main__": + logging.basicConfig(level=logging.INFO, format='%(asctime)s %(message)s') + parser = argparse.ArgumentParser(description="ClickHouse script for parsing results of sqlancer test") + parser.add_argument("--in-results-dir", default='/test_output/') + parser.add_argument("--out-results-file", default='/test_output/test_results.tsv') + parser.add_argument("--out-status-file", default='/test_output/check_status.tsv') + args = parser.parse_args() + + state, description, test_results, logs = process_result(args.in_results_dir) + logging.info("Result parsed") + status = (state, description) + write_results(args.out_results_file, args.out_status_file, test_results, status) + logging.info("Result written") diff --git a/docker/test/sqlancer/run.sh b/docker/test/sqlancer/run.sh index ffe0afd98a8..e465ba1c993 100755 --- a/docker/test/sqlancer/run.sh +++ b/docker/test/sqlancer/run.sh @@ -11,7 +11,7 @@ service clickhouse-server start && sleep 5 cd /sqlancer/sqlancer-master -export TIMEOUT=60 +export TIMEOUT=300 export NUM_QUERIES=1000 ( java -jar target/sqlancer-*.jar --num-threads 10 --timeout-seconds $TIMEOUT --num-queries $NUM_QUERIES --username default --password "" clickhouse --oracle TLPWhere | tee /test_output/TLPWhere.out ) 3>&1 1>&2 2>&3 | tee /test_output/TLPWhere.err @@ -29,4 +29,5 @@ tail -n 1000 /var/log/clickhouse-server/stderr.log > /test_output/stderr.log tail -n 1000 /var/log/clickhouse-server/stdout.log > /test_output/stdout.log tail -n 1000 /var/log/clickhouse-server/clickhouse-server.log > /test_output/clickhouse-server.log +/process_sqlancer_result.py || echo -e "failure\tCannot parse results" > /test_output/check_status.tsv ls /test_output diff --git a/docker/test/stateful/run.sh b/docker/test/stateful/run.sh index 7779f0e9dc2..8d865431570 100755 --- a/docker/test/stateful/run.sh +++ b/docker/test/stateful/run.sh @@ -13,6 +13,25 @@ dpkg -i package_folder/clickhouse-test_*.deb function start() { + if [[ -n "$USE_DATABASE_REPLICATED" ]] && [[ "$USE_DATABASE_REPLICATED" -eq 1 ]]; then + # NOTE We run "clickhouse server" instead of "clickhouse-server" + # to make "pidof clickhouse-server" return single pid of the main instance. 
+    # We will run main instance using "service clickhouse-server start"
+    sudo -E -u clickhouse /usr/bin/clickhouse server --config /etc/clickhouse-server1/config.xml --daemon \
+    -- --path /var/lib/clickhouse1/ --logger.stderr /var/log/clickhouse-server/stderr1.log \
+    --logger.log /var/log/clickhouse-server/clickhouse-server1.log --logger.errorlog /var/log/clickhouse-server/clickhouse-server1.err.log \
+    --tcp_port 19000 --tcp_port_secure 19440 --http_port 18123 --https_port 18443 --interserver_http_port 19009 --tcp_with_proxy_port 19010 \
+    --mysql_port 19004 --postgresql_port 19005 \
+    --keeper_server.tcp_port 19181 --keeper_server.server_id 2
+
+    sudo -E -u clickhouse /usr/bin/clickhouse server --config /etc/clickhouse-server2/config.xml --daemon \
+    -- --path /var/lib/clickhouse2/ --logger.stderr /var/log/clickhouse-server/stderr2.log \
+    --logger.log /var/log/clickhouse-server/clickhouse-server2.log --logger.errorlog /var/log/clickhouse-server/clickhouse-server2.err.log \
+    --tcp_port 29000 --tcp_port_secure 29440 --http_port 28123 --https_port 28443 --interserver_http_port 29009 --tcp_with_proxy_port 29010 \
+    --mysql_port 29004 --postgresql_port 29005 \
+    --keeper_server.tcp_port 29181 --keeper_server.server_id 3
+    fi
+
    counter=0
    until clickhouse-client --query "SELECT 1"
    do
@@ -35,9 +54,8 @@ start
/s3downloader --dataset-names $DATASETS
chmod 777 -R /var/lib/clickhouse
clickhouse-client --query "SHOW DATABASES"
-clickhouse-client --query "ATTACH DATABASE datasets ENGINE = Ordinary"
-clickhouse-client --query "CREATE DATABASE test"
+clickhouse-client --query "ATTACH DATABASE datasets ENGINE = Ordinary"
service clickhouse-server restart
# Wait for server to start accepting connections
@@ -47,21 +65,61 @@ for _ in {1..120}; do
done
clickhouse-client --query "SHOW TABLES FROM datasets"
-clickhouse-client --query "SHOW TABLES FROM test"
-clickhouse-client --query "RENAME TABLE datasets.hits_v1 TO test.hits"
-clickhouse-client --query "RENAME TABLE datasets.visits_v1 TO test.visits"
-clickhouse-client --query "SHOW TABLES FROM test"
-
-if grep -q -- "--use-skip-list" /usr/bin/clickhouse-test ; then
-    SKIP_LIST_OPT="--use-skip-list"
-fi
-
-# We can have several additional options so we path them as array because it's
-# more idiologically correct.
-read -ra ADDITIONAL_OPTIONS <<< "${ADDITIONAL_OPTIONS:-}"
if [[ -n "$USE_DATABASE_REPLICATED" ]] && [[ "$USE_DATABASE_REPLICATED" -eq 1 ]]; then
-    ADDITIONAL_OPTIONS+=('--replicated-database')
+    clickhouse-client --query "CREATE DATABASE test ON CLUSTER 'test_cluster_database_replicated'
+        ENGINE=Replicated('/test/clickhouse/db/test', '{shard}', '{replica}')"
+
+    clickhouse-client --query "CREATE TABLE test.hits AS datasets.hits_v1"
+    clickhouse-client --query "CREATE TABLE test.visits AS datasets.visits_v1"
+
+    clickhouse-client --query "INSERT INTO test.hits SELECT * FROM datasets.hits_v1"
+    clickhouse-client --query "INSERT INTO test.visits SELECT * FROM datasets.visits_v1"
+
+    clickhouse-client --query "DROP TABLE datasets.hits_v1"
+    clickhouse-client --query "DROP TABLE datasets.visits_v1"
+
+    MAX_RUN_TIME=$((MAX_RUN_TIME < 9000 ? MAX_RUN_TIME : 9000)) # min(MAX_RUN_TIME, 2.5 hours)
+    MAX_RUN_TIME=$((MAX_RUN_TIME != 0 ? MAX_RUN_TIME : 9000)) # set to 2.5 hours if 0 (unlimited)
+else
+    clickhouse-client --query "CREATE DATABASE test"
+    clickhouse-client --query "SHOW TABLES FROM test"
+    clickhouse-client --query "RENAME TABLE datasets.hits_v1 TO test.hits"
+    clickhouse-client --query "RENAME TABLE datasets.visits_v1 TO test.visits"
fi

-clickhouse-test --testname --shard --zookeeper --no-stateless --hung-check --print-time "$SKIP_LIST_OPT" "${ADDITIONAL_OPTIONS[@]}" "$SKIP_TESTS_OPTION" 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee test_output/test_result.txt
+clickhouse-client --query "SHOW TABLES FROM test"
+clickhouse-client --query "SELECT count() FROM test.hits"
+clickhouse-client --query "SELECT count() FROM test.visits"
+
+function run_tests()
+{
+    set -x
+    # We can have several additional options so we pass them as an array because it's
+    # more ideologically correct.
+    read -ra ADDITIONAL_OPTIONS <<< "${ADDITIONAL_OPTIONS:-}"
+
+    if [[ -n "$USE_DATABASE_REPLICATED" ]] && [[ "$USE_DATABASE_REPLICATED" -eq 1 ]]; then
+        ADDITIONAL_OPTIONS+=('--replicated-database')
+    fi
+
+    clickhouse-test --testname --shard --zookeeper --no-stateless --hung-check --use-skip-list --print-time "${ADDITIONAL_OPTIONS[@]}" \
+        "$SKIP_TESTS_OPTION" 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee test_output/test_result.txt
+}
+
+export -f run_tests
+timeout "$MAX_RUN_TIME" bash -c run_tests ||:
+
+./process_functional_tests_result.py || echo -e "failure\tCannot parse results" > /test_output/check_status.tsv
+
+pigz < /var/log/clickhouse-server/clickhouse-server.log > /test_output/clickhouse-server.log.gz ||:
+mv /var/log/clickhouse-server/stderr.log /test_output/ ||:
+if [[ -n "$WITH_COVERAGE" ]] && [[ "$WITH_COVERAGE" -eq 1 ]]; then
+    tar -chf /test_output/clickhouse_coverage.tar.gz /profraw ||:
+fi
+if [[ -n "$USE_DATABASE_REPLICATED" ]] && [[ "$USE_DATABASE_REPLICATED" -eq 1 ]]; then
+    pigz < /var/log/clickhouse-server/clickhouse-server1.log > /test_output/clickhouse-server1.log.gz ||:
+    pigz < /var/log/clickhouse-server/clickhouse-server2.log > /test_output/clickhouse-server2.log.gz ||:
+    mv /var/log/clickhouse-server/stderr1.log /test_output/ ||:
+    mv /var/log/clickhouse-server/stderr2.log /test_output/ ||:
+fi
diff --git a/docker/test/stateless/Dockerfile b/docker/test/stateless/Dockerfile
index 2437415d17c..658ae1f27ba 100644
--- a/docker/test/stateless/Dockerfile
+++ b/docker/test/stateless/Dockerfile
@@ -28,7 +28,8 @@ RUN apt-get update -y \
    tree \
    unixodbc \
    wget \
-    mysql-client=5.7*
+    mysql-client=5.7* \
+    postgresql-client

RUN pip3 install numpy scipy pandas
@@ -46,4 +47,5 @@ ENV NUM_TRIES=1
ENV MAX_RUN_TIME=0

COPY run.sh /
+COPY process_functional_tests_result.py /
CMD ["/bin/bash", "/run.sh"]
diff --git a/docker/test/stateless/process_functional_tests_result.py b/docker/test/stateless/process_functional_tests_result.py
new file mode 100755
index 00000000000..02adf108212
--- /dev/null
+++ b/docker/test/stateless/process_functional_tests_result.py
@@ -0,0 +1,126 @@
+#!/usr/bin/env python3
+
+import os
+import logging
+import argparse
+import csv
+
+OK_SIGN = "[ OK "
+FAIL_SIGN = "[ FAIL "
+TIMEOUT_SIGN = "[ Timeout! "
" +UNKNOWN_SIGN = "[ UNKNOWN " +SKIPPED_SIGN = "[ SKIPPED " +HUNG_SIGN = "Found hung queries in processlist" + +NO_TASK_TIMEOUT_SIGN = "All tests have finished" + +def process_test_log(log_path): + total = 0 + skipped = 0 + unknown = 0 + failed = 0 + success = 0 + hung = False + task_timeout = True + test_results = [] + with open(log_path, 'r') as test_file: + for line in test_file: + line = line.strip() + if NO_TASK_TIMEOUT_SIGN in line: + task_timeout = False + if HUNG_SIGN in line: + hung = True + if any(sign in line for sign in (OK_SIGN, FAIL_SING, UNKNOWN_SIGN, SKIPPED_SIGN)): + test_name = line.split(' ')[2].split(':')[0] + + test_time = '' + try: + time_token = line.split(']')[1].strip().split()[0] + float(time_token) + test_time = time_token + except: + pass + + total += 1 + if TIMEOUT_SING in line: + failed += 1 + test_results.append((test_name, "Timeout", test_time)) + elif FAIL_SING in line: + failed += 1 + test_results.append((test_name, "FAIL", test_time)) + elif UNKNOWN_SIGN in line: + unknown += 1 + test_results.append((test_name, "FAIL", test_time)) + elif SKIPPED_SIGN in line: + skipped += 1 + test_results.append((test_name, "SKIPPED", test_time)) + else: + success += int(OK_SIGN in line) + test_results.append((test_name, "OK", test_time)) + return total, skipped, unknown, failed, success, hung, task_timeout, test_results + +def process_result(result_path): + test_results = [] + state = "success" + description = "" + files = os.listdir(result_path) + if files: + logging.info("Find files in result folder %s", ','.join(files)) + result_path = os.path.join(result_path, 'test_result.txt') + else: + result_path = None + description = "No output log" + state = "error" + + if result_path and os.path.exists(result_path): + total, skipped, unknown, failed, success, hung, task_timeout, test_results = process_test_log(result_path) + is_flacky_check = 1 < int(os.environ.get('NUM_TRIES', 1)) + # If no tests were run (success == 0) it indicates an error (e.g. server did not start or crashed immediately) + # But it's Ok for "flaky checks" - they can contain just one test for check which is marked as skipped. 
+        if failed != 0 or unknown != 0 or (success == 0 and (not is_flaky_check)):
+            state = "failure"
+
+        if hung:
+            description = "Some queries hung, "
+            state = "failure"
+        elif task_timeout:
+            description = "Timeout, "
+            state = "failure"
+        else:
+            description = ""
+
+        description += "fail: {}, passed: {}".format(failed, success)
+        if skipped != 0:
+            description += ", skipped: {}".format(skipped)
+        if unknown != 0:
+            description += ", unknown: {}".format(unknown)
+    else:
+        state = "failure"
+        description = "Output log doesn't exist"
+        test_results = []
+
+    return state, description, test_results
+
+
+def write_results(results_file, status_file, results, status):
+    with open(results_file, 'w') as f:
+        out = csv.writer(f, delimiter='\t')
+        out.writerows(results)
+    with open(status_file, 'w') as f:
+        out = csv.writer(f, delimiter='\t')
+        out.writerow(status)
+
+
+if __name__ == "__main__":
+    logging.basicConfig(level=logging.INFO, format='%(asctime)s %(message)s')
+    parser = argparse.ArgumentParser(description="ClickHouse script for parsing results of functional tests")
+    parser.add_argument("--in-results-dir", default='/test_output/')
+    parser.add_argument("--out-results-file", default='/test_output/test_results.tsv')
+    parser.add_argument("--out-status-file", default='/test_output/check_status.tsv')
+    args = parser.parse_args()
+
+    state, description, test_results = process_result(args.in_results_dir)
+    logging.info("Result parsed")
+    status = (state, description)
+    write_results(args.out_results_file, args.out_status_file, test_results, status)
+    logging.info("Result written")
diff --git a/docker/test/stateless/run.sh b/docker/test/stateless/run.sh
index d078f3739fd..8440b1548a5 100755
--- a/docker/test/stateless/run.sh
+++ b/docker/test/stateless/run.sh
@@ -34,36 +34,61 @@ if [ "$NUM_TRIES" -gt "1" ]; then
    # simpliest way to forward env variables to server
    sudo -E -u clickhouse /usr/bin/clickhouse-server --config /etc/clickhouse-server/config.xml --daemon
-    sleep 5
else
-    service clickhouse-server start && sleep 5
+    service clickhouse-server start
fi

-if grep -q -- "--use-skip-list" /usr/bin/clickhouse-test; then
-    SKIP_LIST_OPT="--use-skip-list"
+if [[ -n "$USE_DATABASE_REPLICATED" ]] && [[ "$USE_DATABASE_REPLICATED" -eq 1 ]]; then
+
+    sudo -E -u clickhouse /usr/bin/clickhouse server --config /etc/clickhouse-server1/config.xml --daemon \
+    -- --path /var/lib/clickhouse1/ --logger.stderr /var/log/clickhouse-server/stderr1.log \
+    --logger.log /var/log/clickhouse-server/clickhouse-server1.log --logger.errorlog /var/log/clickhouse-server/clickhouse-server1.err.log \
+    --tcp_port 19000 --tcp_port_secure 19440 --http_port 18123 --https_port 18443 --interserver_http_port 19009 --tcp_with_proxy_port 19010 \
+    --mysql_port 19004 --postgresql_port 19005 \
+    --keeper_server.tcp_port 19181 --keeper_server.server_id 2 \
+    --macros.replica r2   # It doesn't work :(
+
+    sudo -E -u clickhouse /usr/bin/clickhouse server --config /etc/clickhouse-server2/config.xml --daemon \
+    -- --path /var/lib/clickhouse2/ --logger.stderr /var/log/clickhouse-server/stderr2.log \
+    --logger.log /var/log/clickhouse-server/clickhouse-server2.log --logger.errorlog /var/log/clickhouse-server/clickhouse-server2.err.log \
+    --tcp_port 29000 --tcp_port_secure 29440 --http_port 28123 --https_port 28443 --interserver_http_port 29009 --tcp_with_proxy_port 29010 \
+    --mysql_port 29004 --postgresql_port 29005 \
+    --keeper_server.tcp_port 29181 --keeper_server.server_id 3 \
+    --macros.shard s2   # It doesn't work :(
+
+    MAX_RUN_TIME=$((MAX_RUN_TIME < 9000 ? MAX_RUN_TIME : 9000)) # min(MAX_RUN_TIME, 2.5 hours)
+    MAX_RUN_TIME=$((MAX_RUN_TIME != 0 ? MAX_RUN_TIME : 9000)) # set to 2.5 hours if 0 (unlimited)
fi

+sleep 5
+
function run_tests()
{
+    set -x
    # We can have several additional options so we path them as array because it's
    # more idiologically correct.
    read -ra ADDITIONAL_OPTIONS <<< "${ADDITIONAL_OPTIONS:-}"

    # Skip these tests, because they fail when we rerun them multiple times
    if [ "$NUM_TRIES" -gt "1" ]; then
+        ADDITIONAL_OPTIONS+=('--order=random')
        ADDITIONAL_OPTIONS+=('--skip')
        ADDITIONAL_OPTIONS+=('00000_no_tests_to_skip')
-        ADDITIONAL_OPTIONS+=('--jobs')
-        ADDITIONAL_OPTIONS+=('4')
+        # Note that flaky check must be run in parallel, but for now we run
+        # everything in parallel except DatabaseReplicated. See below.
    fi

    if [[ -n "$USE_DATABASE_REPLICATED" ]] && [[ "$USE_DATABASE_REPLICATED" -eq 1 ]]; then
        ADDITIONAL_OPTIONS+=('--replicated-database')
+    else
+        # Too many tests fail for DatabaseReplicated in parallel. All other
+        # configurations are OK.
+        ADDITIONAL_OPTIONS+=('--jobs')
+        ADDITIONAL_OPTIONS+=('8')
    fi

    clickhouse-test --testname --shard --zookeeper --hung-check --print-time \
-        --test-runs "$NUM_TRIES" \
-        "$SKIP_LIST_OPT" "${ADDITIONAL_OPTIONS[@]}" 2>&1 \
+        --use-skip-list --test-runs "$NUM_TRIES" "${ADDITIONAL_OPTIONS[@]}" 2>&1 \
        | ts '%Y-%m-%d %H:%M:%S' \
        | tee -a test_output/test_result.txt
}
@@ -72,5 +97,51 @@ export -f run_tests

timeout "$MAX_RUN_TIME" bash -c run_tests ||:

+./process_functional_tests_result.py || echo -e "failure\tCannot parse results" > /test_output/check_status.tsv
+
+clickhouse-client -q "system flush logs" ||:
+
+pigz < /var/log/clickhouse-server/clickhouse-server.log > /test_output/clickhouse-server.log.gz &
+clickhouse-client -q "select * from system.query_log format TSVWithNamesAndTypes" | pigz > /test_output/query-log.tsv.gz &
+clickhouse-client -q "select * from system.query_thread_log format TSVWithNamesAndTypes" | pigz > /test_output/query-thread-log.tsv.gz &
+clickhouse-client --allow_introspection_functions=1 -q "
+    WITH
+        arrayMap(x -> concat(demangle(addressToSymbol(x)), ':', addressToLine(x)), trace) AS trace_array,
+        arrayStringConcat(trace_array, '\n') AS trace_string
+    SELECT * EXCEPT(trace), trace_string FROM system.trace_log FORMAT TSVWithNamesAndTypes
+" | pigz > /test_output/trace-log.tsv.gz &
+
+# Also export trace log in flamegraph-friendly format.
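+# (Each row produced below is the folded-stacks form `frame1;frame2;...;frameN<TAB>samples`
+# that flamegraph tooling consumes directly; one file is written per trace type.)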
+for trace_type in CPU Memory Real +do + clickhouse-client -q " + select + arrayStringConcat((arrayMap(x -> concat(splitByChar('/', addressToLine(x))[-1], '#', demangle(addressToSymbol(x)) ), trace)), ';') AS stack, + count(*) AS samples + from system.trace_log + where trace_type = '$trace_type' + group by trace + order by samples desc + settings allow_introspection_functions = 1 + format TabSeparated" \ + | pigz > "/test_output/trace-log-$trace_type-flamegraph.tsv.gz" & +done + +wait ||: + +mv /var/log/clickhouse-server/stderr.log /test_output/ ||: +if [[ -n "$WITH_COVERAGE" ]] && [[ "$WITH_COVERAGE" -eq 1 ]]; then + tar -chf /test_output/clickhouse_coverage.tar.gz /profraw ||: +fi tar -chf /test_output/text_log_dump.tar /var/lib/clickhouse/data/system/text_log ||: tar -chf /test_output/query_log_dump.tar /var/lib/clickhouse/data/system/query_log ||: +tar -chf /test_output/coordination.tar /var/lib/clickhouse/coordination ||: + +if [[ -n "$USE_DATABASE_REPLICATED" ]] && [[ "$USE_DATABASE_REPLICATED" -eq 1 ]]; then + pigz < /var/log/clickhouse-server/clickhouse-server1.log > /test_output/clickhouse-server1.log.gz ||: + pigz < /var/log/clickhouse-server/clickhouse-server2.log > /test_output/clickhouse-server2.log.gz ||: + mv /var/log/clickhouse-server/stderr1.log /test_output/ ||: + mv /var/log/clickhouse-server/stderr2.log /test_output/ ||: + tar -chf /test_output/coordination1.tar /var/lib/clickhouse1/coordination ||: + tar -chf /test_output/coordination2.tar /var/lib/clickhouse2/coordination ||: +fi diff --git a/docker/test/stateless_unbundled/Dockerfile b/docker/test/stateless_unbundled/Dockerfile index 9efe08dbf23..c5463ac447d 100644 --- a/docker/test/stateless_unbundled/Dockerfile +++ b/docker/test/stateless_unbundled/Dockerfile @@ -14,9 +14,7 @@ RUN apt-get --allow-unauthenticated update -y \ expect \ gdb \ gperf \ - gperf \ heimdal-multidev \ - intel-opencl-icd \ libboost-filesystem-dev \ libboost-iostreams-dev \ libboost-program-options-dev \ @@ -50,9 +48,7 @@ RUN apt-get --allow-unauthenticated update -y \ moreutils \ ncdu \ netcat-openbsd \ - ocl-icd-libopencl1 \ odbcinst \ - opencl-headers \ openssl \ perl \ pigz \ diff --git a/docker/test/stress/run.sh b/docker/test/stress/run.sh index 817a5082bb9..43a92fdeebe 100755 --- a/docker/test/stress/run.sh +++ b/docker/test/stress/run.sh @@ -20,6 +20,14 @@ function configure() # since we run clickhouse from root sudo chown root: /var/lib/clickhouse + + # Set more frequent update period of asynchronous metrics to more frequently update information about real memory usage (less chance of OOM). + echo "1" \ + > /etc/clickhouse-server/config.d/asynchronous_metrics_update_period_s.xml + + # Set maximum memory usage as half of total memory (less chance of OOM). + echo "0.5" \ + > /etc/clickhouse-server/config.d/max_server_memory_usage_to_ram_ratio.xml } function stop() @@ -53,10 +61,14 @@ handle SIGBUS stop print handle SIGABRT stop print continue thread apply all backtrace -continue +detach +quit " > script.gdb - gdb -batch -command script.gdb -p "$(cat /var/run/clickhouse-server/clickhouse-server.pid)" & + # FIXME Hung check may work incorrectly because of attached gdb + # 1. False positives are possible + # 2. 
We cannot attach another gdb to get stacktraces if some queries hung
+    gdb -batch -command script.gdb -p "$(cat /var/run/clickhouse-server/clickhouse-server.pid)" >> /test_output/gdb.log &
}

configure
@@ -78,11 +90,62 @@ clickhouse-client --query "RENAME TABLE datasets.hits_v1 TO test.hits"
clickhouse-client --query "RENAME TABLE datasets.visits_v1 TO test.visits"
clickhouse-client --query "SHOW TABLES FROM test"

-./stress --hung-check --output-folder test_output --skip-func-tests "$SKIP_TESTS_OPTION" && echo "OK" > /test_output/script_exit_code.txt || echo "FAIL" > /test_output/script_exit_code.txt
+./stress --hung-check --output-folder test_output --skip-func-tests "$SKIP_TESTS_OPTION" \
+    && echo -e 'Test script exit code\tOK' >> /test_output/test_results.tsv \
+    || echo -e 'Test script failed\tFAIL' >> /test_output/test_results.tsv

stop
start

-clickhouse-client --query "SELECT 'Server successfuly started'" > /test_output/alive_check.txt || echo 'Server failed to start' > /test_output/alive_check.txt
+clickhouse-client --query "SELECT 'Server successfully started', 'OK'" >> /test_output/test_results.tsv \
+    || echo -e 'Server failed to start\tFAIL' >> /test_output/test_results.tsv
+
+[ -f /var/log/clickhouse-server/clickhouse-server.log ] || echo -e "Server log does not exist\tFAIL"
+[ -f /var/log/clickhouse-server/stderr.log ] || echo -e "Stderr log does not exist\tFAIL"
+
+# Print Fatal log messages to stdout
+zgrep -Fa " <Fatal> " /var/log/clickhouse-server/clickhouse-server.log
+
+# Grep logs for sanitizer asserts, crashes and other critical errors
+
+# Sanitizer asserts
+zgrep -Fa "==================" /var/log/clickhouse-server/stderr.log >> /test_output/tmp
+zgrep -Fa "WARNING" /var/log/clickhouse-server/stderr.log >> /test_output/tmp
+zgrep -Fav "ASan doesn't fully support makecontext/swapcontext functions" /test_output/tmp > /dev/null \
+    && echo -e 'Sanitizer assert (in stderr.log)\tFAIL' >> /test_output/test_results.tsv \
+    || echo -e 'No sanitizer asserts\tOK' >> /test_output/test_results.tsv
+rm -f /test_output/tmp
+
+# OOM
+zgrep -Fa " Application: Child process was terminated by signal 9" /var/log/clickhouse-server/clickhouse-server.log > /dev/null \
+    && echo -e 'OOM killer (or signal 9) in clickhouse-server.log\tFAIL' >> /test_output/test_results.tsv \
+    || echo -e 'No OOM messages in clickhouse-server.log\tOK' >> /test_output/test_results.tsv
+
+# Logical errors
+zgrep -Fa "Code: 49, e.displayText() = DB::Exception:" /var/log/clickhouse-server/clickhouse-server.log > /dev/null \
+    && echo -e 'Logical error thrown (see clickhouse-server.log)\tFAIL' >> /test_output/test_results.tsv \
+    || echo -e 'No logical errors\tOK' >> /test_output/test_results.tsv
+
+# Crash
+zgrep -Fa "########################################" /var/log/clickhouse-server/clickhouse-server.log > /dev/null \
+    && echo -e 'Killed by signal (in clickhouse-server.log)\tFAIL' >> /test_output/test_results.tsv \
+    || echo -e 'Not crashed\tOK' >> /test_output/test_results.tsv
+
+# It also checks for crash without stacktrace (printed by watchdog)
+zgrep -Fa " <Fatal> " /var/log/clickhouse-server/clickhouse-server.log > /dev/null \
+    && echo -e 'Fatal message in clickhouse-server.log\tFAIL' >> /test_output/test_results.tsv \
+    || echo -e 'No fatal messages in clickhouse-server.log\tOK' >> /test_output/test_results.tsv
+
+zgrep -Fa "########################################" /test_output/* > /dev/null \
+    && echo -e 'Killed by signal (output files)\tFAIL' >> /test_output/test_results.tsv
+
+# Put logs into /test_output/
+pigz < /var/log/clickhouse-server/clickhouse-server.log > /test_output/clickhouse-server.log.gz
tar -chf /test_output/coordination.tar /var/lib/clickhouse/coordination ||:
+mv /var/log/clickhouse-server/stderr.log /test_output/
+tar -chf /test_output/query_log_dump.tar /var/lib/clickhouse/data/system/query_log ||:
+tar -chf /test_output/trace_log_dump.tar /var/lib/clickhouse/data/system/trace_log ||:
+
+# Write check result into check_status.tsv
+clickhouse-local --structure "test String, res String" -q "SELECT 'failure', test FROM table WHERE res != 'OK' order by (lower(test) like '%hung%') LIMIT 1" < /test_output/test_results.tsv > /test_output/check_status.tsv
+[ -s /test_output/check_status.tsv ] || echo -e "success\tNo errors found" > /test_output/check_status.tsv
diff --git a/docker/test/stress/stress b/docker/test/stress/stress
index 841556cf090..4fbedceb0b8 100755
--- a/docker/test/stress/stress
+++ b/docker/test/stress/stress
@@ -1,7 +1,7 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from multiprocessing import cpu_count
-from subprocess import Popen, call, STDOUT
+from subprocess import Popen, call, check_output, STDOUT
import os
import sys
import shutil
@@ -58,6 +58,54 @@ def run_func_test(cmd, output_prefix, num_processes, skip_tests_option, global_t
        time.sleep(0.5)
    return pipes

+def prepare_for_hung_check():
+    # FIXME this function should not exist, but...
+
+    # We attach gdb to clickhouse-server before running tests
+    # to print stacktraces of all crashes even if clickhouse cannot print it for some reason.
+    # However, it obstructs checking for hung queries.
+    logging.info("Will terminate gdb (if any)")
+    call("kill -TERM $(pidof gdb)", shell=True, stderr=STDOUT)
+
+    # Some tests set too low a memory limit for the default user and forget to reset it back.
+    # It may cause SYSTEM queries to fail, let's disable memory limit.
+    call("clickhouse client --max_memory_usage_for_user=0 -q 'SELECT 1 FORMAT Null'", shell=True, stderr=STDOUT)
+
+    # Some tests execute SYSTEM STOP MERGES or similar queries.
+    # It may cause some ALTERs to hang.
+    # Possibly we should fix tests and forbid using such queries without specifying a table.
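+    # (Note: SYSTEM START/STOP queries also accept an optional `db.table` argument,
+    # but tests don't record which tables they stopped, so everything is re-enabled
+    # globally below.)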
+ call("clickhouse client -q 'SYSTEM START MERGES'", shell=True, stderr=STDOUT) + call("clickhouse client -q 'SYSTEM START DISTRIBUTED SENDS'", shell=True, stderr=STDOUT) + call("clickhouse client -q 'SYSTEM START TTL MERGES'", shell=True, stderr=STDOUT) + call("clickhouse client -q 'SYSTEM START MOVES'", shell=True, stderr=STDOUT) + call("clickhouse client -q 'SYSTEM START FETCHES'", shell=True, stderr=STDOUT) + call("clickhouse client -q 'SYSTEM START REPLICATED SENDS'", shell=True, stderr=STDOUT) + call("clickhouse client -q 'SYSTEM START REPLICATION QUEUES'", shell=True, stderr=STDOUT) + + # Issue #21004, live views are experimental, so let's just suppress it + call("""clickhouse client -q "KILL QUERY WHERE upper(query) LIKE 'WATCH %'" """, shell=True, stderr=STDOUT) + + # Kill other queries which known to be slow + # It's query from 01232_preparing_sets_race_condition_long, it may take up to 1000 seconds in slow builds + call("""clickhouse client -q "KILL QUERY WHERE query LIKE 'insert into tableB select %'" """, shell=True, stderr=STDOUT) + # Long query from 00084_external_agregation + call("""clickhouse client -q "KILL QUERY WHERE query LIKE 'SELECT URL, uniq(SearchPhrase) AS u FROM test.hits GROUP BY URL ORDER BY u %'" """, shell=True, stderr=STDOUT) + + # Wait for last queries to finish if any, not longer than 300 seconds + call("""clickhouse client -q "select sleepEachRow(( + select maxOrDefault(300 - elapsed) + 1 from system.processes where query not like '%from system.processes%' and elapsed < 300 + ) / 300) from numbers(300) format Null" """, shell=True, stderr=STDOUT) + + # Even if all clickhouse-test processes are finished, there are probably some sh scripts, + # which still run some new queries. Let's ignore them. + try: + query = """clickhouse client -q "SELECT count() FROM system.processes where where elapsed > 300" """ + output = check_output(query, shell=True, stderr=STDOUT).decode('utf-8').strip() + if int(output) == 0: + return False + except: + pass + return True if __name__ == "__main__": logging.basicConfig(level=logging.INFO, format='%(asctime)s %(message)s') @@ -88,11 +136,14 @@ if __name__ == "__main__": logging.info("All processes finished") if args.hung_check: + have_long_running_queries = prepare_for_hung_check() logging.info("Checking if some queries hung") cmd = "{} {} {}".format(args.test_cmd, "--hung-check", "00001_select_1") res = call(cmd, shell=True, stderr=STDOUT) - if res != 0: + hung_check_status = "No queries hung\tOK\n" + if res != 0 and have_long_running_queries: logging.info("Hung check failed with exit code {}".format(res)) - sys.exit(1) + hung_check_status = "Hung check failed\tFAIL\n" + open(os.path.join(args.output_folder, "test_results.tsv"), 'w+').write(hung_check_status) logging.info("Stress test finished") diff --git a/docker/test/style/Dockerfile b/docker/test/style/Dockerfile index e70f9e05679..86595a77a54 100644 --- a/docker/test/style/Dockerfile +++ b/docker/test/style/Dockerfile @@ -10,14 +10,6 @@ RUN apt-get update && env DEBIAN_FRONTEND=noninteractive apt-get install --yes \ yamllint \ && pip3 install codespell - -# For |& syntax -SHELL ["bash", "-c"] - -CMD cd /ClickHouse/utils/check-style && \ - ./check-style -n |& tee /test_output/style_output.txt && \ - ./check-typos |& tee /test_output/typos_output.txt && \ - ./check-whitespaces -n |& tee /test_output/whitespaces_output.txt && \ - ./check-duplicate-includes.sh |& tee /test_output/duplicate_output.txt && \ - ./shellcheck-run.sh |& tee /test_output/shellcheck_output.txt && \ - 
-    true
+COPY run.sh /
+COPY process_style_check_result.py /
+CMD ["/bin/bash", "/run.sh"]
diff --git a/docker/test/style/process_style_check_result.py b/docker/test/style/process_style_check_result.py
new file mode 100755
index 00000000000..61b1e0f05c5
--- /dev/null
+++ b/docker/test/style/process_style_check_result.py
@@ -0,0 +1,96 @@
+#!/usr/bin/env python3
+
+import os
+import logging
+import argparse
+import csv
+
+
+def process_result(result_folder):
+    status = "success"
+    description = ""
+    test_results = []
+
+    style_log_path = '{}/style_output.txt'.format(result_folder)
+    if not os.path.exists(style_log_path):
+        logging.info("No style check log on path %s", style_log_path)
+        return "exception", "No style check log", []
+    elif os.stat(style_log_path).st_size != 0:
+        description += "Style check failed. "
+        test_results.append(("Style check", "FAIL"))
+        status = "failure" # Disabled for now
+    else:
+        test_results.append(("Style check", "OK"))
+
+    typos_log_path = '{}/typos_output.txt'.format(result_folder)
+    if not os.path.exists(typos_log_path):
+        logging.info("No typos check log on path %s", typos_log_path)
+        return "exception", "No typos check log", []
+    elif os.stat(typos_log_path).st_size != 0:
+        description += "Typos check failed. "
+        test_results.append(("Typos check", "FAIL"))
+        status = "failure"
+    else:
+        test_results.append(("Typos check", "OK"))
+
+    whitespaces_log_path = '{}/whitespaces_output.txt'.format(result_folder)
+    if not os.path.exists(whitespaces_log_path):
+        logging.info("No whitespaces check log on path %s", whitespaces_log_path)
+        return "exception", "No whitespaces check log", []
+    elif os.stat(whitespaces_log_path).st_size != 0:
+        description += "Whitespaces check failed. "
+        test_results.append(("Whitespaces check", "FAIL"))
+        status = "failure"
+    else:
+        test_results.append(("Whitespaces check", "OK"))
+
+    duplicate_log_path = '{}/duplicate_output.txt'.format(result_folder)
+    if not os.path.exists(duplicate_log_path):
+        logging.info("No header duplicates check log on path %s", duplicate_log_path)
+        return "exception", "No header duplicates check log", []
+    elif os.stat(duplicate_log_path).st_size != 0:
+        description += " Header duplicates check failed. "
+        test_results.append(("Header duplicates check", "FAIL"))
+        status = "failure"
+    else:
+        test_results.append(("Header duplicates check", "OK"))
+
+    shellcheck_log_path = '{}/shellcheck_output.txt'.format(result_folder)
+    if not os.path.exists(shellcheck_log_path):
+        logging.info("No shellcheck log on path %s", shellcheck_log_path)
+        return "exception", "No shellcheck log", []
+    elif os.stat(shellcheck_log_path).st_size != 0:
+        description += " Shellcheck failed. 
" + test_results.append(("Shellcheck ", "FAIL")) + status = "failure" + else: + test_results.append(("Shellcheck", "OK")) + + if not description: + description += "Style check success" + + return status, description, test_results + + +def write_results(results_file, status_file, results, status): + with open(results_file, 'w') as f: + out = csv.writer(f, delimiter='\t') + out.writerows(results) + with open(status_file, 'w') as f: + out = csv.writer(f, delimiter='\t') + out.writerow(status) + + +if __name__ == "__main__": + logging.basicConfig(level=logging.INFO, format='%(asctime)s %(message)s') + parser = argparse.ArgumentParser(description="ClickHouse script for parsing results of style check") + parser.add_argument("--in-results-dir", default='/test_output/') + parser.add_argument("--out-results-file", default='/test_output/test_results.tsv') + parser.add_argument("--out-status-file", default='/test_output/check_status.tsv') + args = parser.parse_args() + + state, description, test_results = process_result(args.in_results_dir) + logging.info("Result parsed") + status = (state, description) + write_results(args.out_results_file, args.out_status_file, test_results, status) + logging.info("Result written") diff --git a/docker/test/style/run.sh b/docker/test/style/run.sh new file mode 100755 index 00000000000..424bfe71b15 --- /dev/null +++ b/docker/test/style/run.sh @@ -0,0 +1,9 @@ +#!/bin/bash + +cd /ClickHouse/utils/check-style || echo -e "failure\tRepo not found" > /test_output/check_status.tsv +./check-style -n |& tee /test_output/style_output.txt +./check-typos |& tee /test_output/typos_output.txt +./check-whitespaces -n |& tee /test_output/whitespaces_output.txt +./check-duplicate-includes.sh |& tee /test_output/duplicate_output.txt +./shellcheck-run.sh |& tee /test_output/shellcheck_output.txt +/process_style_check_result.py || echo -e "failure\tCannot parse results" > /test_output/check_status.tsv diff --git a/docker/test/testflows/runner/Dockerfile b/docker/test/testflows/runner/Dockerfile index 4139fb9e044..bd7eee4c166 100644 --- a/docker/test/testflows/runner/Dockerfile +++ b/docker/test/testflows/runner/Dockerfile @@ -35,7 +35,7 @@ RUN apt-get update \ ENV TZ=Europe/Moscow RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone -RUN pip3 install urllib3 testflows==1.6.72 docker-compose docker dicttoxml kazoo tzlocal +RUN pip3 install urllib3 testflows==1.6.74 docker-compose docker dicttoxml kazoo tzlocal ENV DOCKER_CHANNEL stable ENV DOCKER_VERSION 17.09.1-ce @@ -61,6 +61,7 @@ RUN set -eux; \ COPY modprobe.sh /usr/local/bin/modprobe COPY dockerd-entrypoint.sh /usr/local/bin/ +COPY process_testflows_result.py /usr/local/bin/ RUN set -x \ && addgroup --system dockremap \ @@ -72,5 +73,5 @@ RUN set -x \ VOLUME /var/lib/docker EXPOSE 2375 ENTRYPOINT ["dockerd-entrypoint.sh"] -CMD ["sh", "-c", "python3 regression.py --no-color -o classic --local --clickhouse-binary-path ${CLICKHOUSE_TESTS_SERVER_BIN_PATH} --log test.log ${TESTFLOWS_OPTS}; cat test.log | tfs report results --format json > results.json"] +CMD ["sh", "-c", "python3 regression.py --no-color -o classic --local --clickhouse-binary-path ${CLICKHOUSE_TESTS_SERVER_BIN_PATH} --log test.log ${TESTFLOWS_OPTS}; cat test.log | tfs report results --format json > results.json; /usr/local/bin/process_testflows_result.py || echo -e 'failure\tCannot parse results' > check_status.tsv"] diff --git a/docker/test/testflows/runner/dockerd-entrypoint.sh b/docker/test/testflows/runner/dockerd-entrypoint.sh index 
1bac94a9df2..01593488648 100755 --- a/docker/test/testflows/runner/dockerd-entrypoint.sh +++ b/docker/test/testflows/runner/dockerd-entrypoint.sh @@ -16,6 +16,14 @@ while true; do done set -e +echo "Configure to use Yandex dockerhub-proxy" +cat > /etc/docker/daemon.json << EOF +{ + "insecure-registries": ["dockerhub-proxy.sas.yp-c.yandex.net:5000"], + "registry-mirrors": ["dockerhub-proxy.sas.yp-c.yandex.net:5000"] +} +EOF + echo "Start tests" export CLICKHOUSE_TESTS_SERVER_BIN_PATH=/clickhouse export CLICKHOUSE_TESTS_CLIENT_BIN_PATH=/clickhouse diff --git a/docker/test/testflows/runner/process_testflows_result.py b/docker/test/testflows/runner/process_testflows_result.py new file mode 100755 index 00000000000..37d0b6a69d1 --- /dev/null +++ b/docker/test/testflows/runner/process_testflows_result.py @@ -0,0 +1,67 @@ +#!/usr/bin/env python3 + +import os +import logging +import argparse +import csv +import json + + +def process_result(result_folder): + json_path = os.path.join(result_folder, "results.json") + if not os.path.exists(json_path): + return "success", "No testflows in branch", None, [] + + test_binary_log = os.path.join(result_folder, "test.log") + with open(json_path) as source: + results = json.loads(source.read()) + + total_tests = 0 + total_ok = 0 + total_fail = 0 + total_other = 0 + test_results = [] + for test in results["tests"]: + test_name = test['test']['test_name'] + test_result = test['result']['result_type'].upper() + test_time = str(test['result']['message_rtime']) + total_tests += 1 + if test_result == "OK": + total_ok += 1 + elif test_result == "FAIL" or test_result == "ERROR": + total_fail += 1 + else: + total_other += 1 + + test_results.append((test_name, test_result, test_time)) + if total_fail != 0: + status = "failure" + else: + status = "success" + + description = "failed: {}, passed: {}, other: {}".format(total_fail, total_ok, total_other) + return status, description, test_results, [json_path, test_binary_log] + + +def write_results(results_file, status_file, results, status): + with open(results_file, 'w') as f: + out = csv.writer(f, delimiter='\t') + out.writerows(results) + with open(status_file, 'w') as f: + out = csv.writer(f, delimiter='\t') + out.writerow(status) + +if __name__ == "__main__": + logging.basicConfig(level=logging.INFO, format='%(asctime)s %(message)s') + parser = argparse.ArgumentParser(description="ClickHouse script for parsing results of Testflows tests") + parser.add_argument("--in-results-dir", default='./') + parser.add_argument("--out-results-file", default='./test_results.tsv') + parser.add_argument("--out-status-file", default='./check_status.tsv') + args = parser.parse_args() + + state, description, test_results, logs = process_result(args.in_results_dir) + logging.info("Result parsed") + status = (state, description) + write_results(args.out_results_file, args.out_status_file, test_results, status) + logging.info("Result written") + diff --git a/docker/test/unit/Dockerfile b/docker/test/unit/Dockerfile index f01ed613918..e2f4a691939 100644 --- a/docker/test/unit/Dockerfile +++ b/docker/test/unit/Dockerfile @@ -5,6 +5,6 @@ ENV TZ=Europe/Moscow RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone RUN apt-get install gdb -CMD service zookeeper start && sleep 7 && /usr/share/zookeeper/bin/zkCli.sh -server localhost:2181 -create create /clickhouse_test ''; \ - gdb -q -ex 'set print inferior-events off' -ex 'set confirm off' -ex 'set print thread-events off' -ex run -ex bt -ex quit --args ./unit_tests_dbms 
| tee test_output/test_result.txt - +COPY run.sh / +COPY process_unit_tests_result.py / +CMD ["/bin/bash", "/run.sh"] diff --git a/docker/test/unit/process_unit_tests_result.py b/docker/test/unit/process_unit_tests_result.py new file mode 100755 index 00000000000..7219aa13b82 --- /dev/null +++ b/docker/test/unit/process_unit_tests_result.py @@ -0,0 +1,96 @@ +#!/usr/bin/env python3 + +import os +import logging +import argparse +import csv + +OK_SIGN = 'OK ]' +FAILED_SIGN = 'FAILED ]' +SEGFAULT = 'Segmentation fault' +SIGNAL = 'received signal SIG' +PASSED = 'PASSED' + +def get_test_name(line): + elements = reversed(line.split(' ')) + for element in elements: + if '(' not in element and ')' not in element: + return element + raise Exception("No test name in line '{}'".format(line)) + +def process_result(result_folder): + summary = [] + total_counter = 0 + failed_counter = 0 + result_log_path = '{}/test_result.txt'.format(result_folder) + if not os.path.exists(result_log_path): + logging.info("No output log on path %s", result_log_path) + return "exception", "No output log", [] + + status = "success" + description = "" + passed = False + with open(result_log_path, 'r') as test_result: + for line in test_result: + if OK_SIGN in line: + logging.info("Found ok line: '%s'", line) + test_name = get_test_name(line.strip()) + logging.info("Test name: '%s'", test_name) + summary.append((test_name, "OK")) + total_counter += 1 + elif FAILED_SIGN in line and 'listed below' not in line and 'ms)' in line: + logging.info("Found fail line: '%s'", line) + test_name = get_test_name(line.strip()) + logging.info("Test name: '%s'", test_name) + summary.append((test_name, "FAIL")) + total_counter += 1 + failed_counter += 1 + elif SEGFAULT in line: + logging.info("Found segfault line: '%s'", line) + status = "failure" + description += "Segmentation fault. " + break + elif SIGNAL in line: + logging.info("Received signal line: '%s'", line) + status = "failure" + description += "Exit on signal. " + break + elif PASSED in line: + logging.info("PASSED record found: '%s'", line) + passed = True + + if not passed: + status = "failure" + description += "PASSED record not found. 
" + + if failed_counter != 0: + status = "failure" + + if not description: + description += "fail: {}, passed: {}".format(failed_counter, total_counter - failed_counter) + + return status, description, summary + + +def write_results(results_file, status_file, results, status): + with open(results_file, 'w') as f: + out = csv.writer(f, delimiter='\t') + out.writerows(results) + with open(status_file, 'w') as f: + out = csv.writer(f, delimiter='\t') + out.writerow(status) + +if __name__ == "__main__": + logging.basicConfig(level=logging.INFO, format='%(asctime)s %(message)s') + parser = argparse.ArgumentParser(description="ClickHouse script for parsing results of unit tests") + parser.add_argument("--in-results-dir", default='/test_output/') + parser.add_argument("--out-results-file", default='/test_output/test_results.tsv') + parser.add_argument("--out-status-file", default='/test_output/check_status.tsv') + args = parser.parse_args() + + state, description, test_results = process_result(args.in_results_dir) + logging.info("Result parsed") + status = (state, description) + write_results(args.out_results_file, args.out_status_file, test_results, status) + logging.info("Result written") + diff --git a/docker/test/unit/run.sh b/docker/test/unit/run.sh new file mode 100644 index 00000000000..abc35fa40d2 --- /dev/null +++ b/docker/test/unit/run.sh @@ -0,0 +1,7 @@ +#!/bin/bash + +set -x + +service zookeeper start && sleep 7 && /usr/share/zookeeper/bin/zkCli.sh -server localhost:2181 -create create /clickhouse_test ''; +gdb -q -ex 'set print inferior-events off' -ex 'set confirm off' -ex 'set print thread-events off' -ex run -ex bt -ex quit --args ./unit_tests_dbms | tee test_output/test_result.txt +./process_unit_tests_result.py || echo -e "failure\tCannot parse results" > /test_output/check_status.tsv diff --git a/docs/.gitignore b/docs/.gitignore new file mode 100644 index 00000000000..378eac25d31 --- /dev/null +++ b/docs/.gitignore @@ -0,0 +1 @@ +build diff --git a/docs/README.md b/docs/README.md index 8b3066501bf..a4df023a6ad 100644 --- a/docs/README.md +++ b/docs/README.md @@ -126,7 +126,13 @@ Contribute all new information in English language. Other languages are translat ### Adding a New File -When adding a new file: +When you add a new file, it should end with a link like: + +`[Original article](https://clickhouse.tech/docs/) ` + +and there should be **a new empty line** after it. + +{## When adding a new file: - Make symbolic links for all other languages. You can use the following commands: @@ -134,7 +140,7 @@ When adding a new file: $ cd /ClickHouse/clone/directory/docs $ ln -sr en/new/file.md lang/new/file.md ``` - +##} ### Adding a New Language @@ -195,8 +201,11 @@ Templates: - [Function](_description_templates/template-function.md) - [Setting](_description_templates/template-setting.md) +- [Server Setting](_description_templates/template-server-setting.md) - [Database or Table engine](_description_templates/template-engine.md) - [System table](_description_templates/template-system-table.md) +- [Data type](_description_templates/data-type.md) +- [Statement](_description_templates/statement.md) diff --git a/docs/en/commercial/support.md b/docs/en/commercial/support.md index 37bc54e3e8b..1a3d1b71869 100644 --- a/docs/en/commercial/support.md +++ b/docs/en/commercial/support.md @@ -7,6 +7,10 @@ toc_title: Support !!! 
info "Info" If you have launched a ClickHouse commercial support service, feel free to [open a pull-request](https://github.com/ClickHouse/ClickHouse/edit/master/docs/en/commercial/support.md) adding it to the following list. + +## Yandex.Cloud + +ClickHouse worldwide support from the authors of ClickHouse. Supports on-premise and cloud deployments. Ask details on clickhouse-support@yandex-team.com ## Altinity {#altinity} diff --git a/docs/en/development/build-osx.md b/docs/en/development/build-osx.md index e0b1be710f1..f34107ca3d3 100644 --- a/docs/en/development/build-osx.md +++ b/docs/en/development/build-osx.md @@ -5,44 +5,80 @@ toc_title: Build on Mac OS X # How to Build ClickHouse on Mac OS X {#how-to-build-clickhouse-on-mac-os-x} -Build should work on Mac OS X 10.15 (Catalina). +Build should work on x86_64 (Intel) and arm64 (Apple Silicon) based macOS 10.15 (Catalina) and higher with recent Xcode's native AppleClang, or Homebrew's vanilla Clang or GCC compilers. ## Install Homebrew {#install-homebrew} ``` bash -$ /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)" +/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)" +# ...and follow the printed instructions on any additional steps required to complete the installation. ``` +## Install Xcode and Command Line Tools {#install-xcode-and-command-line-tools} + +Install the latest [Xcode](https://apps.apple.com/am/app/xcode/id497799835?mt=12) from App Store. + +Open it at least once to accept the end-user license agreement and automatically install the required components. + +Then, make sure that the latest Comman Line Tools are installed and selected in the system: + +``` bash +sudo rm -rf /Library/Developer/CommandLineTools +sudo xcode-select --install +``` + +Reboot. + ## Install Required Compilers, Tools, and Libraries {#install-required-compilers-tools-and-libraries} ``` bash -$ brew install cmake ninja libtool gettext llvm +brew update +brew install cmake ninja libtool gettext llvm gcc ``` ## Checkout ClickHouse Sources {#checkout-clickhouse-sources} ``` bash -$ git clone --recursive git@github.com:ClickHouse/ClickHouse.git -``` - -or - -``` bash -$ git clone --recursive https://github.com/ClickHouse/ClickHouse.git - -$ cd ClickHouse +git clone --recursive git@github.com:ClickHouse/ClickHouse.git +# ...alternatively, you can use https://github.com/ClickHouse/ClickHouse.git as the repo URL. ``` ## Build ClickHouse {#build-clickhouse} -> Please note: ClickHouse doesn't support build with native Apple Clang compiler, we need use clang from LLVM. +To build using Xcode's native AppleClang compiler: ``` bash -$ mkdir build -$ cd build -$ cmake .. -DCMAKE_C_COMPILER=`brew --prefix llvm`/bin/clang -DCMAKE_CXX_COMPILER=`brew --prefix llvm`/bin/clang++ -DCMAKE_PREFIX_PATH=`brew --prefix llvm` -$ ninja -$ cd .. +cd ClickHouse +rm -rf build +mkdir build +cd build +cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo .. +cmake --build . --config RelWithDebInfo +cd .. +``` + +To build using Homebrew's vanilla Clang compiler: + +``` bash +cd ClickHouse +rm -rf build +mkdir build +cd build +cmake -DCMAKE_C_COMPILER=$(brew --prefix llvm)/bin/clang -DCMAKE_CXX_COMPILER=$(brew --prefix llvm)/bin/clang++ -DCMAKE_BUILD_TYPE=RelWithDebInfo .. +cmake --build . --config RelWithDebInfo +cd .. 
+```
+
+To build using Homebrew's vanilla GCC compiler:
+
+``` bash
+cd ClickHouse
+rm -rf build
+mkdir build
+cd build
+cmake -DCMAKE_C_COMPILER=$(brew --prefix gcc)/bin/gcc-10 -DCMAKE_CXX_COMPILER=$(brew --prefix gcc)/bin/g++-10 -DCMAKE_BUILD_TYPE=RelWithDebInfo ..
+cmake --build . --config RelWithDebInfo
+cd ..
```

## Caveats {#caveats}
@@ -81,11 +117,18 @@ To do so, create the `/Library/LaunchDaemons/limit.maxfiles.plist` file with the
Execute the following command:

``` bash
-$ sudo chown root:wheel /Library/LaunchDaemons/limit.maxfiles.plist
+sudo chown root:wheel /Library/LaunchDaemons/limit.maxfiles.plist
```

Reboot.

To check if it’s working, you can use `ulimit -n` command.

+## Run ClickHouse server:
+
+```
+cd ClickHouse
+./build/programs/clickhouse-server --config-file ./programs/server/config.xml
+```
+
[Original article](https://clickhouse.tech/docs/en/development/build_osx/)
diff --git a/docs/en/development/build.md b/docs/en/development/build.md
index 3181f26800d..b6cb68f7ff8 100644
--- a/docs/en/development/build.md
+++ b/docs/en/development/build.md
@@ -27,53 +27,20 @@ Or cmake3 instead of cmake on older systems.

On Ubuntu/Debian you can use the automatic installation script (check [official webpage](https://apt.llvm.org/))

-```bash
+```bash 
sudo bash -c "$(wget -O - https://apt.llvm.org/llvm.sh)"
```

For other Linux distribution - check the availability of the [prebuild packages](https://releases.llvm.org/download.html) or build clang [from sources](https://clang.llvm.org/get_started.html).

-#### Use clang-11 for Builds {#use-gcc-10-for-builds}
+#### Use clang-11 for Builds

``` bash
$ export CC=clang-11
$ export CXX=clang++-11
```

-### Install GCC 10 {#install-gcc-10}
-
-We recommend building ClickHouse with clang-11, GCC-10 also supported, but it is not used for production builds.
-
-If you want to use GCC-10 there are several ways to install it.
-
-#### Install from Repository {#install-from-repository}
-
-On Ubuntu 19.10 or newer:
-
-    $ sudo apt-get update
-    $ sudo apt-get install gcc-10 g++-10
-
-#### Install from a PPA Package {#install-from-a-ppa-package}
-
-On older Ubuntu:
-
-``` bash
-$ sudo apt-get install software-properties-common
-$ sudo apt-add-repository ppa:ubuntu-toolchain-r/test
-$ sudo apt-get update
-$ sudo apt-get install gcc-10 g++-10
-```
-
-#### Install from Sources {#install-from-sources}
-
-See [utils/ci/build-gcc-from-sources.sh](https://github.com/ClickHouse/ClickHouse/blob/master/utils/ci/build-gcc-from-sources.sh)
-
-#### Use GCC 10 for Builds {#use-gcc-10-for-builds}
-
-``` bash
-$ export CC=gcc-10
-$ export CXX=g++-10
-```
+GCC can also be used, though it is discouraged.

### Checkout ClickHouse Sources {#checkout-clickhouse-sources}
@@ -106,9 +73,9 @@ The build requires the following components:

- Git (is used only to checkout the sources, it’s not needed for the build)
- CMake 3.10 or newer
-- Ninja (recommended) or Make
-- C++ compiler: gcc 10 or clang 8 or newer
-- Linker: lld or gold (the classic GNU ld won’t work)
+- Ninja
+- C++ compiler: clang-11 or newer
+- Linker: lld
- Python (is only used inside LLVM build and it is optional)

If all the components are installed, you may build in the same way as the steps above.
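The prerequisite list above can also be checked mechanically before configuring. The following is a minimal illustrative sketch (not part of the repository; the `clang-11`/`clang++-11` binary names are assumptions taken from the list above and may differ on your distribution):

``` python
#!/usr/bin/env python3
"""Rough pre-flight check for the build prerequisites listed above."""
import shutil
import subprocess

# Tool names assumed from the prerequisite list; adjust for your distribution.
REQUIRED = ["git", "cmake", "ninja", "clang-11", "clang++-11", "lld"]

missing = []
for tool in REQUIRED:
    path = shutil.which(tool)
    if path is None:
        missing.append(tool)
        continue
    # Most of these tools answer --version; print the first line as a quick overview.
    proc = subprocess.run([tool, "--version"], capture_output=True, text=True)
    lines = (proc.stdout or proc.stderr).splitlines()
    print(f"{tool}: {lines[0] if lines else path}")

if missing:
    raise SystemExit("missing tools: " + ", ".join(missing))
print("All prerequisites found.")
```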
@@ -116,7 +83,7 @@ If all the components are installed, you may build in the same way as the steps Example for Ubuntu Eoan: ``` bash sudo apt update -sudo apt install git cmake ninja-build g++ python +sudo apt install git cmake ninja-build clang++ python git clone --recursive https://github.com/ClickHouse/ClickHouse.git mkdir build && cd build cmake ../ClickHouse @@ -125,7 +92,7 @@ ninja Example for OpenSUSE Tumbleweed: ``` bash -sudo zypper install git cmake ninja gcc-c++ python lld +sudo zypper install git cmake ninja clang-c++ python lld git clone --recursive https://github.com/ClickHouse/ClickHouse.git mkdir build && cd build cmake ../ClickHouse @@ -135,7 +102,7 @@ ninja Example for Fedora Rawhide: ``` bash sudo yum update -yum --nogpg install git cmake make gcc-c++ python3 +yum --nogpg install git cmake make clang-c++ python3 git clone --recursive https://github.com/ClickHouse/ClickHouse.git mkdir build && cd build cmake ../ClickHouse @@ -145,11 +112,11 @@ make -j $(nproc) ## How to Build ClickHouse Debian Package {#how-to-build-clickhouse-debian-package} -### Install Git and Pbuilder {#install-git-and-pbuilder} +### Install Git {#install-git} ``` bash $ sudo apt-get update -$ sudo apt-get install git python pbuilder debhelper lsb-release fakeroot sudo debian-archive-keyring debian-keyring +$ sudo apt-get install git python debhelper lsb-release fakeroot sudo debian-archive-keyring debian-keyring ``` ### Checkout ClickHouse Sources {#checkout-clickhouse-sources-1} diff --git a/docs/en/development/contrib.md b/docs/en/development/contrib.md index 76a2f647231..64ca2387029 100644 --- a/docs/en/development/contrib.md +++ b/docs/en/development/contrib.md @@ -5,36 +5,87 @@ toc_title: Third-Party Libraries Used # Third-Party Libraries Used {#third-party-libraries-used} -| Library | License | -|---------------------|----------------------------------------------------------------------------------------------------------------------------------------------| -| base64 | [BSD 2-Clause License](https://github.com/aklomp/base64/blob/a27c565d1b6c676beaf297fe503c4518185666f7/LICENSE) | -| boost | [Boost Software License 1.0](https://github.com/ClickHouse-Extras/boost-extra/blob/6883b40449f378019aec792f9983ce3afc7ff16e/LICENSE_1_0.txt) | -| brotli | [MIT](https://github.com/google/brotli/blob/master/LICENSE) | -| capnproto | [MIT](https://github.com/capnproto/capnproto/blob/master/LICENSE) | -| cctz | [Apache License 2.0](https://github.com/google/cctz/blob/4f9776a310f4952454636363def82c2bf6641d5f/LICENSE.txt) | -| double-conversion | [BSD 3-Clause License](https://github.com/google/double-conversion/blob/cf2f0f3d547dc73b4612028a155b80536902ba02/LICENSE) | -| FastMemcpy | [MIT](https://github.com/ClickHouse/ClickHouse/blob/master/libs/libmemcpy/impl/LICENSE) | -| googletest | [BSD 3-Clause License](https://github.com/google/googletest/blob/master/LICENSE) | -| h3 | [Apache License 2.0](https://github.com/uber/h3/blob/master/LICENSE) | -| hyperscan | [BSD 3-Clause License](https://github.com/intel/hyperscan/blob/master/LICENSE) | -| libcxxabi | [BSD + MIT](https://github.com/ClickHouse/ClickHouse/blob/master/libs/libglibc-compatibility/libcxxabi/LICENSE.TXT) | -| libdivide | [Zlib License](https://github.com/ClickHouse/ClickHouse/blob/master/contrib/libdivide/LICENSE.txt) | -| libgsasl | [LGPL v2.1](https://github.com/ClickHouse-Extras/libgsasl/blob/3b8948a4042e34fb00b4fb987535dc9e02e39040/LICENSE) | -| libhdfs3 | [Apache License 
2.0](https://github.com/ClickHouse-Extras/libhdfs3/blob/bd6505cbb0c130b0db695305b9a38546fa880e5a/LICENSE.txt) | -| libmetrohash | [Apache License 2.0](https://github.com/ClickHouse/ClickHouse/blob/master/contrib/libmetrohash/LICENSE) | -| libpcg-random | [Apache License 2.0](https://github.com/ClickHouse/ClickHouse/blob/master/contrib/libpcg-random/LICENSE-APACHE.txt) | -| libressl | [OpenSSL License](https://github.com/ClickHouse-Extras/ssl/blob/master/COPYING) | -| librdkafka | [BSD 2-Clause License](https://github.com/edenhill/librdkafka/blob/363dcad5a23dc29381cc626620e68ae418b3af19/LICENSE) | -| libwidechar_width | [CC0 1.0 Universal](https://github.com/ClickHouse/ClickHouse/blob/master/libs/libwidechar_width/LICENSE) | -| llvm | [BSD 3-Clause License](https://github.com/ClickHouse-Extras/llvm/blob/163def217817c90fb982a6daf384744d8472b92b/llvm/LICENSE.TXT) | -| lz4 | [BSD 2-Clause License](https://github.com/lz4/lz4/blob/c10863b98e1503af90616ae99725ecd120265dfb/LICENSE) | -| mariadb-connector-c | [LGPL v2.1](https://github.com/ClickHouse-Extras/mariadb-connector-c/blob/3.1/COPYING.LIB) | -| murmurhash | [Public Domain](https://github.com/ClickHouse/ClickHouse/blob/master/contrib/murmurhash/LICENSE) | -| pdqsort | [Zlib License](https://github.com/ClickHouse/ClickHouse/blob/master/contrib/pdqsort/license.txt) | -| poco | [Boost Software License - Version 1.0](https://github.com/ClickHouse-Extras/poco/blob/fe5505e56c27b6ecb0dcbc40c49dc2caf4e9637f/LICENSE) | -| protobuf | [BSD 3-Clause License](https://github.com/ClickHouse-Extras/protobuf/blob/12735370922a35f03999afff478e1c6d7aa917a4/LICENSE) | -| re2 | [BSD 3-Clause License](https://github.com/google/re2/blob/7cf8b88e8f70f97fd4926b56aa87e7f53b2717e0/LICENSE) | -| sentry-native | [MIT License](https://github.com/getsentry/sentry-native/blob/master/LICENSE) | -| UnixODBC | [LGPL v2.1](https://github.com/ClickHouse-Extras/UnixODBC/tree/b0ad30f7f6289c12b76f04bfb9d466374bb32168) | -| zlib-ng | [Zlib License](https://github.com/ClickHouse-Extras/zlib-ng/blob/develop/LICENSE.md) | -| zstd | [BSD 3-Clause License](https://github.com/facebook/zstd/blob/dev/LICENSE) | +The list of third-party libraries can be obtained by the following query: + +``` +SELECT library_name, license_type, license_path FROM system.licenses ORDER BY library_name COLLATE 'en' +``` + +[Example](https://gh-api.clickhouse.tech/play?user=play#U0VMRUNUIGxpYnJhcnlfbmFtZSwgbGljZW5zZV90eXBlLCBsaWNlbnNlX3BhdGggRlJPTSBzeXN0ZW0ubGljZW5zZXMgT1JERVIgQlkgbGlicmFyeV9uYW1lIENPTExBVEUgJ2VuJw==) + +| library_name | license_type | license_path | +|:-|:-|:-| +| abseil-cpp | Apache | /contrib/abseil-cpp/LICENSE | +| AMQP-CPP | Apache | /contrib/AMQP-CPP/LICENSE | +| arrow | Apache | /contrib/arrow/LICENSE.txt | +| avro | Apache | /contrib/avro/LICENSE.txt | +| aws | Apache | /contrib/aws/LICENSE.txt | +| aws-c-common | Apache | /contrib/aws-c-common/LICENSE | +| aws-c-event-stream | Apache | /contrib/aws-c-event-stream/LICENSE | +| aws-checksums | Apache | /contrib/aws-checksums/LICENSE | +| base64 | BSD 2-clause | /contrib/base64/LICENSE | +| boost | Boost | /contrib/boost/LICENSE_1_0.txt | +| boringssl | BSD | /contrib/boringssl/LICENSE | +| brotli | MIT | /contrib/brotli/LICENSE | +| capnproto | MIT | /contrib/capnproto/LICENSE | +| cassandra | Apache | /contrib/cassandra/LICENSE.txt | +| cctz | Apache | /contrib/cctz/LICENSE.txt | +| cityhash102 | MIT | /contrib/cityhash102/COPYING | +| cppkafka | BSD 2-clause | /contrib/cppkafka/LICENSE | +| croaring | Apache | /contrib/croaring/LICENSE 
| +| curl | Apache | /contrib/curl/docs/LICENSE-MIXING.md | +| cyrus-sasl | BSD 2-clause | /contrib/cyrus-sasl/COPYING | +| double-conversion | BSD 3-clause | /contrib/double-conversion/LICENSE | +| dragonbox | Apache | /contrib/dragonbox/LICENSE-Apache2-LLVM | +| fast_float | Apache | /contrib/fast_float/LICENSE | +| fastops | MIT | /contrib/fastops/LICENSE | +| flatbuffers | Apache | /contrib/flatbuffers/LICENSE.txt | +| fmtlib | Unknown | /contrib/fmtlib/LICENSE.rst | +| gcem | Apache | /contrib/gcem/LICENSE | +| googletest | BSD 3-clause | /contrib/googletest/LICENSE | +| grpc | Apache | /contrib/grpc/LICENSE | +| h3 | Apache | /contrib/h3/LICENSE | +| hyperscan | Boost | /contrib/hyperscan/LICENSE | +| icu | Public Domain | /contrib/icu/icu4c/LICENSE | +| icudata | Public Domain | /contrib/icudata/LICENSE | +| jemalloc | BSD 2-clause | /contrib/jemalloc/COPYING | +| krb5 | MIT | /contrib/krb5/src/lib/gssapi/LICENSE | +| libc-headers | LGPL | /contrib/libc-headers/LICENSE | +| libcpuid | BSD 2-clause | /contrib/libcpuid/COPYING | +| libcxx | Apache | /contrib/libcxx/LICENSE.TXT | +| libcxxabi | Apache | /contrib/libcxxabi/LICENSE.TXT | +| libdivide | zLib | /contrib/libdivide/LICENSE.txt | +| libfarmhash | MIT | /contrib/libfarmhash/COPYING | +| libgsasl | LGPL | /contrib/libgsasl/LICENSE | +| libhdfs3 | Apache | /contrib/libhdfs3/LICENSE.txt | +| libmetrohash | Apache | /contrib/libmetrohash/LICENSE | +| libpq | Unknown | /contrib/libpq/COPYRIGHT | +| libpqxx | BSD 3-clause | /contrib/libpqxx/COPYING | +| librdkafka | MIT | /contrib/librdkafka/LICENSE.murmur2 | +| libunwind | Apache | /contrib/libunwind/LICENSE.TXT | +| libuv | BSD | /contrib/libuv/LICENSE | +| llvm | Apache | /contrib/llvm/llvm/LICENSE.TXT | +| lz4 | BSD | /contrib/lz4/LICENSE | +| mariadb-connector-c | LGPL | /contrib/mariadb-connector-c/COPYING.LIB | +| miniselect | Boost | /contrib/miniselect/LICENSE_1_0.txt | +| msgpack-c | Boost | /contrib/msgpack-c/LICENSE_1_0.txt | +| murmurhash | Public Domain | /contrib/murmurhash/LICENSE | +| NuRaft | Apache | /contrib/NuRaft/LICENSE | +| openldap | Unknown | /contrib/openldap/LICENSE | +| orc | Apache | /contrib/orc/LICENSE | +| poco | Boost | /contrib/poco/LICENSE | +| protobuf | BSD 3-clause | /contrib/protobuf/LICENSE | +| rapidjson | MIT | /contrib/rapidjson/bin/jsonschema/LICENSE | +| re2 | BSD 3-clause | /contrib/re2/LICENSE | +| replxx | BSD 3-clause | /contrib/replxx/LICENSE.md | +| rocksdb | BSD 3-clause | /contrib/rocksdb/LICENSE.leveldb | +| sentry-native | MIT | /contrib/sentry-native/LICENSE | +| simdjson | Apache | /contrib/simdjson/LICENSE | +| snappy | Public Domain | /contrib/snappy/COPYING | +| sparsehash-c11 | BSD 3-clause | /contrib/sparsehash-c11/LICENSE | +| stats | Apache | /contrib/stats/LICENSE | +| thrift | Apache | /contrib/thrift/LICENSE | +| unixodbc | LGPL | /contrib/unixodbc/COPYING | +| xz | Public Domain | /contrib/xz/COPYING | +| zlib-ng | zLib | /contrib/zlib-ng/LICENSE.md | +| zstd | BSD | /contrib/zstd/LICENSE | diff --git a/docs/en/development/developer-instruction.md b/docs/en/development/developer-instruction.md index 5511e8e19c7..35ca4725af8 100644 --- a/docs/en/development/developer-instruction.md +++ b/docs/en/development/developer-instruction.md @@ -131,17 +131,18 @@ ClickHouse uses several external libraries for building. All of them do not need ## C++ Compiler {#c-compiler} -Compilers GCC starting from version 10 and Clang version 8 or above are supported for building ClickHouse. 
+Clang starting from version 11 is supported for building ClickHouse.

-Official Yandex builds currently use GCC because it generates machine code of slightly better performance (yielding a difference of up to several percent according to our benchmarks). And Clang is more convenient for development usually. Though, our continuous integration (CI) platform runs checks for about a dozen of build combinations.
+Clang should be used instead of gcc. Our continuous integration (CI) platform runs checks for about a dozen build combinations.

-To install GCC on Ubuntu run: `sudo apt install gcc g++`
+On Ubuntu/Debian you can use the automatic installation script (check [official webpage](https://apt.llvm.org/))

-Check the version of gcc: `gcc --version`. If it is below 10, then follow the instruction here: https://clickhouse.tech/docs/en/development/build/#install-gcc-10.
+```bash
+sudo bash -c "$(wget -O - https://apt.llvm.org/llvm.sh)"
+```

-Mac OS X build is supported only for Clang. Just run `brew install llvm`
+Mac OS X build is also supported. Just run `brew install llvm`.

-If you decide to use Clang, you can also install `libc++` and `lld`, if you know what it is. Using `ccache` is also recommended.

 ## The Building Process {#the-building-process}

@@ -152,14 +153,7 @@ Now that you are ready to build ClickHouse we recommend you to create a separate

 You can have several different directories (build_release, build_debug, etc.) for different types of build.

-While inside the `build` directory, configure your build by running CMake. Before the first run, you need to define environment variables that specify compiler (version 10 gcc compiler in this example).
-
-Linux:
-
-    export CC=gcc-10 CXX=g++-10
-    cmake ..
-
-Mac OS X:
+While inside the `build` directory, configure your build by running CMake. Before the first run, you need to define environment variables that specify the compiler.

     export CC=clang CXX=clang++
     cmake ..

diff --git a/docs/en/development/style.md b/docs/en/development/style.md
index 4c620c44aef..b27534d9890 100644
--- a/docs/en/development/style.md
+++ b/docs/en/development/style.md
@@ -701,7 +701,7 @@ But other things being equal, cross-platform or portable code is preferred.

 **2.** Language: C++20 (see the list of available [C++20 features](https://en.cppreference.com/w/cpp/compiler_support#C.2B.2B20_features)).

-**3.** Compiler: `gcc`. At this time (August 2020), the code is compiled using version 9.3. (It can also be compiled using `clang 8`.)
+**3.** Compiler: `clang`. At this time (April 2021), the code is compiled using clang version 11. (It can also be compiled using `gcc` version 10, but it is untested and not suitable for production usage.)

 The standard library is used (`libc++`).

@@ -711,7 +711,7 @@ The standard library is used (`libc++`).

 The CPU instruction set is the minimum supported set among our servers. Currently, it is SSE 4.2.

-**6.** Use `-Wall -Wextra -Werror` compilation flags.
+**6.** Use `-Wall -Wextra -Werror` compilation flags. Also, `-Weverything` is used with a few exceptions.

 **7.** Use static linking with all libraries except those that are difficult to connect to statically (see the output of the `ldd` command).

diff --git a/docs/en/development/tests.md b/docs/en/development/tests.md
index fb453e55417..7547497b9af 100644
--- a/docs/en/development/tests.md
+++ b/docs/en/development/tests.md
@@ -233,7 +233,7 @@ Google OSS-Fuzz can be found at `docker/fuzz`.
 We also use simple fuzz test to generate random SQL queries and to check that the server doesn’t die executing them. You can find it in `00746_sql_fuzzy.pl`. This test should be run continuously (overnight and longer).

-We also use sophisticated AST-based query fuzzer that is able to find huge amount of corner cases. It does random permutations and substitutions in queries AST. It remembers AST nodes from previous tests to use them for fuzzing of subsequent tests while processing them in random order.
+We also use a sophisticated AST-based query fuzzer that is able to find a huge amount of corner cases. It does random permutations and substitutions in the query AST. It remembers AST nodes from previous tests to use them for fuzzing of subsequent tests while processing them in random order. You can learn more about this fuzzer in [this blog article](https://clickhouse.tech/blog/en/2021/fuzzing-clickhouse/).

 ## Stress test

diff --git a/docs/en/engines/database-engines/atomic.md b/docs/en/engines/database-engines/atomic.md
index f019b94a00b..d897631dd6e 100644
--- a/docs/en/engines/database-engines/atomic.md
+++ b/docs/en/engines/database-engines/atomic.md
@@ -3,15 +3,52 @@ toc_priority: 32
 toc_title: Atomic
 ---

-
 # Atomic {#atomic}

-It is supports non-blocking `DROP` and `RENAME TABLE` queries and atomic `EXCHANGE TABLES t1 AND t2` queries. Atomic database engine is used by default.
+It supports non-blocking [DROP TABLE](#drop-detach-table) and [RENAME TABLE](#rename-table) queries and atomic [EXCHANGE TABLES t1 AND t2](#exchange-tables) queries. The `Atomic` database engine is used by default.

 ## Creating a Database {#creating-a-database}

-```sql
-CREATE DATABASE test ENGINE = Atomic;
+``` sql
+CREATE DATABASE test [ENGINE = Atomic];
 ```

-[Original article](https://clickhouse.tech/docs/en/engines/database_engines/atomic/)
+## Specifics and recommendations {#specifics-and-recommendations}
+
+### Table UUID {#table-uuid}
+
+All tables in database `Atomic` have a persistent [UUID](../../sql-reference/data-types/uuid.md) and store data in the directory `/clickhouse_path/store/xxx/xxxyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy/`, where `xxxyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy` is the UUID of the table.
+Usually the UUID is generated automatically, but you can also specify it explicitly when creating the table (this is not recommended). To display the UUID in the `SHOW CREATE` query output, use the [show_table_uuid_in_table_create_query_if_not_nil](../../operations/settings/settings.md#show_table_uuid_in_table_create_query_if_not_nil) setting. For example:
+
+```sql
+CREATE TABLE name UUID '28f1c61c-2970-457a-bffe-454156ddcfef' (n UInt64) ENGINE = ...;
+```
+
+### RENAME TABLE {#rename-table}
+
+`RENAME` queries are performed without changing the UUID or moving table data. These queries do not wait for the completion of queries using the table and are executed instantly.
+
+### DROP/DETACH TABLE {#drop-detach-table}
+
+On `DROP TABLE` no data is removed: database `Atomic` just marks the table as dropped by moving its metadata to `/clickhouse_path/metadata_dropped/` and notifies a background thread. The delay before the final table data deletion is specified by the [database_atomic_delay_before_drop_table_sec](../../operations/server-configuration-parameters/settings.md#database_atomic_delay_before_drop_table_sec) setting.
+You can specify synchronous mode using the `SYNC` modifier. Use the [database_atomic_wait_for_drop_and_detach_synchronously](../../operations/settings/settings.md#database_atomic_wait_for_drop_and_detach_synchronously) setting to do this. In this case `DROP` waits for running `SELECT`, `INSERT` and other queries which are using the table to finish. The table will actually be removed when it is not in use.
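To make the two modes concrete, here is a quick sketch (the table names are placeholders, not from the original document):

``` sql
-- Default (asynchronous) mode: the table is marked as dropped immediately;
-- its data is deleted later, after database_atomic_delay_before_drop_table_sec.
DROP TABLE test.visits_v1;

-- Synchronous mode: wait for queries that use the table to finish
-- and for the data to be removed before returning.
DROP TABLE test.visits_v2 SYNC;
```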
+
+### EXCHANGE TABLES {#exchange-tables}
+
+The `EXCHANGE` query swaps tables atomically. So instead of this non-atomic operation:
+
+```sql
+RENAME TABLE new_table TO tmp, old_table TO new_table, tmp TO old_table;
+```
+
+you can use one atomic query:
+
+``` sql
+EXCHANGE TABLES new_table AND old_table;
+```
+
+### ReplicatedMergeTree in Atomic Database {#replicatedmergetree-in-atomic-database}
+
+For [ReplicatedMergeTree](../table-engines/mergetree-family/replication.md#table_engines-replication) tables, it is recommended not to specify the engine parameters (the path in ZooKeeper and the replica name). In this case the configuration parameters [default_replica_path](../../operations/server-configuration-parameters/settings.md#default_replica_path) and [default_replica_name](../../operations/server-configuration-parameters/settings.md#default_replica_name) are used. If you want to specify the engine parameters explicitly, it is recommended to use the `{uuid}` macros, so that unique paths are automatically generated for each table in ZooKeeper.
+
+## See Also
+
+- [system.databases](../../operations/system-tables/databases.md) system table

diff --git a/docs/en/engines/database-engines/index.md b/docs/en/engines/database-engines/index.md
index 2db11998483..b6892099378 100644
--- a/docs/en/engines/database-engines/index.md
+++ b/docs/en/engines/database-engines/index.md
@@ -18,4 +18,8 @@ You can also use the following database engines:

 - [Lazy](../../engines/database-engines/lazy.md)

+- [Atomic](../../engines/database-engines/atomic.md)
+
+- [PostgreSQL](../../engines/database-engines/postgresql.md)
+
 [Original article](https://clickhouse.tech/docs/en/database_engines/)

diff --git a/docs/en/engines/database-engines/materialize-mysql.md b/docs/en/engines/database-engines/materialize-mysql.md
index 2e361cc82f0..69d3122c268 100644
--- a/docs/en/engines/database-engines/materialize-mysql.md
+++ b/docs/en/engines/database-engines/materialize-mysql.md
@@ -69,7 +69,7 @@ MySQL DDL queries are converted into the corresponding ClickHouse DDL queries ([

 - MySQL `INSERT` query is converted into `INSERT` with `_sign=1`.

-- MySQl `DELETE` query is converted into `INSERT` with `_sign=-1`.
+- MySQL `DELETE` query is converted into `INSERT` with `_sign=-1`.

 - MySQL `UPDATE` query is converted into `INSERT` with `_sign=-1` and `INSERT` with `_sign=1`.

diff --git a/docs/en/engines/database-engines/postgresql.md b/docs/en/engines/database-engines/postgresql.md
new file mode 100644
index 00000000000..1fa86b7ac21
--- /dev/null
+++ b/docs/en/engines/database-engines/postgresql.md
@@ -0,0 +1,138 @@
+---
+toc_priority: 35
+toc_title: PostgreSQL
+---
+
+# PostgreSQL {#postgresql}
+
+Allows connecting to databases on a remote [PostgreSQL](https://www.postgresql.org) server. Supports read and write operations (`SELECT` and `INSERT` queries) to exchange data between ClickHouse and PostgreSQL.
+
+Gives real-time access to the table list and table structure of the remote PostgreSQL with the help of `SHOW TABLES` and `DESCRIBE TABLE` queries.
+
+Supports table structure modifications (`ALTER TABLE ... ADD|DROP COLUMN`). If the `use_table_cache` parameter (see the Engine Parameters below) is set to `1`, the table structure is cached and not checked for modifications, but it can be updated with `DETACH` and `ATTACH` queries.
+
+## Creating a Database {#creating-a-database}
+
+``` sql
+CREATE DATABASE test_database
+ENGINE = PostgreSQL('host:port', 'database', 'user', 'password'[, `use_table_cache`]);
+```
+
+**Engine Parameters**
+
+- `host:port` — PostgreSQL server address.
+- `database` — Remote database name.
+- `user` — PostgreSQL user.
+- `password` — User password.
+- `use_table_cache` — Defines if the database table structure is cached or not. Optional. Default value: `0`.
+
+## Data Types Support {#data_types-support}
+
+| PostgreSQL       | ClickHouse                                                     |
+|------------------|----------------------------------------------------------------|
+| DATE             | [Date](../../sql-reference/data-types/date.md)                 |
+| TIMESTAMP        | [DateTime](../../sql-reference/data-types/datetime.md)         |
+| REAL             | [Float32](../../sql-reference/data-types/float.md)             |
+| DOUBLE           | [Float64](../../sql-reference/data-types/float.md)             |
+| DECIMAL, NUMERIC | [Decimal](../../sql-reference/data-types/decimal.md)           |
+| SMALLINT         | [Int16](../../sql-reference/data-types/int-uint.md)            |
+| INTEGER          | [Int32](../../sql-reference/data-types/int-uint.md)            |
+| BIGINT           | [Int64](../../sql-reference/data-types/int-uint.md)            |
+| SERIAL           | [UInt32](../../sql-reference/data-types/int-uint.md)           |
+| BIGSERIAL        | [UInt64](../../sql-reference/data-types/int-uint.md)           |
+| TEXT, CHAR       | [String](../../sql-reference/data-types/string.md)             |
+| INTEGER          | Nullable([Int32](../../sql-reference/data-types/int-uint.md))  |
+| ARRAY            | [Array](../../sql-reference/data-types/array.md)               |
+
+## Examples of Use {#examples-of-use}
+
+Database in ClickHouse, exchanging data with the PostgreSQL server:
+
+``` sql
+CREATE DATABASE test_database
+ENGINE = PostgreSQL('postgres1:5432', 'test_database', 'postgres', 'mysecretpassword', 1);
+```
+
+``` sql
+SHOW DATABASES;
+```
+
+``` text
+┌─name──────────┐
+│ default       │
+│ test_database │
+│ system        │
+└───────────────┘
+```
+
+``` sql
+SHOW TABLES FROM test_database;
+```
+
+``` text
+┌─name───────┐
+│ test_table │
+└────────────┘
+```
+
+Reading data from the PostgreSQL table:
+
+``` sql
+SELECT * FROM test_database.test_table;
+```
+
+``` text
+┌─id─┬─value─┐
+│  1 │     2 │
+└────┴───────┘
+```
+
+Writing data to the PostgreSQL table:
+
+``` sql
+INSERT INTO test_database.test_table VALUES (3,4);
+SELECT * FROM test_database.test_table;
+```
+
+``` text
+┌─int_id─┬─value─┐
+│      1 │     2 │
+│      3 │     4 │
+└────────┴───────┘
+```
+
+Suppose the table structure was modified in PostgreSQL:
+
+``` sql
+postgre> ALTER TABLE test_table ADD COLUMN data Text
+```
+
+As the `use_table_cache` parameter was set to `1` when the database was created, the table structure in ClickHouse was cached and therefore not modified:
+
+``` sql
+DESCRIBE TABLE test_database.test_table;
+```
+``` text
+┌─name───┬─type──────────────┐
+│ id     │ Nullable(Integer) │
+│ value  │ Nullable(Integer) │
+└────────┴───────────────────┘
+```
+
+After detaching the table and attaching it again, the structure was updated:
+
+``` sql
+DETACH TABLE test_database.test_table;
+ATTACH TABLE test_database.test_table;
+DESCRIBE TABLE test_database.test_table;
+```
+``` text
+┌─name───┬─type──────────────┐
+│ id     │ Nullable(Integer) │
+│ value  │ Nullable(Integer) │
+│ data   │ Nullable(String)  │
+└────────┴───────────────────┘
+```
+
+[Original article](https://clickhouse.tech/docs/en/database-engines/postgresql/)

diff
--git a/docs/en/engines/table-engines/index.md b/docs/en/engines/table-engines/index.md index 546557beb57..eb4fc583f88 100644 --- a/docs/en/engines/table-engines/index.md +++ b/docs/en/engines/table-engines/index.md @@ -47,12 +47,17 @@ Engines for communicating with other data storage and processing systems. Engines in the family: -- [Kafka](../../engines/table-engines/integrations/kafka.md#kafka) -- [MySQL](../../engines/table-engines/integrations/mysql.md#mysql) -- [ODBC](../../engines/table-engines/integrations/odbc.md#table-engine-odbc) -- [JDBC](../../engines/table-engines/integrations/jdbc.md#table-engine-jdbc) -- [HDFS](../../engines/table-engines/integrations/hdfs.md#hdfs) -- [S3](../../engines/table-engines/integrations/s3.md#table_engines-s3) + +- [ODBC](../../engines/table-engines/integrations/odbc.md) +- [JDBC](../../engines/table-engines/integrations/jdbc.md) +- [MySQL](../../engines/table-engines/integrations/mysql.md) +- [MongoDB](../../engines/table-engines/integrations/mongodb.md) +- [HDFS](../../engines/table-engines/integrations/hdfs.md) +- [S3](../../engines/table-engines/integrations/s3.md) +- [Kafka](../../engines/table-engines/integrations/kafka.md) +- [EmbeddedRocksDB](../../engines/table-engines/integrations/embedded-rocksdb.md) +- [RabbitMQ](../../engines/table-engines/integrations/rabbitmq.md) +- [PostgreSQL](../../engines/table-engines/integrations/postgresql.md) ### Special Engines {#special-engines} diff --git a/docs/en/engines/table-engines/integrations/embedded-rocksdb.md b/docs/en/engines/table-engines/integrations/embedded-rocksdb.md index 6e864751cc3..88c8973eeab 100644 --- a/docs/en/engines/table-engines/integrations/embedded-rocksdb.md +++ b/docs/en/engines/table-engines/integrations/embedded-rocksdb.md @@ -1,5 +1,5 @@ --- -toc_priority: 6 +toc_priority: 9 toc_title: EmbeddedRocksDB --- @@ -39,4 +39,4 @@ ENGINE = EmbeddedRocksDB PRIMARY KEY key ``` -[Original article](https://clickhouse.tech/docs/en/operations/table_engines/embedded-rocksdb/) +[Original article](https://clickhouse.tech/docs/en/engines/table-engines/integrations/embedded-rocksdb/) diff --git a/docs/en/engines/table-engines/integrations/hdfs.md b/docs/en/engines/table-engines/integrations/hdfs.md index 5c36e3f1c21..cf4bb5ecbf7 100644 --- a/docs/en/engines/table-engines/integrations/hdfs.md +++ b/docs/en/engines/table-engines/integrations/hdfs.md @@ -1,11 +1,11 @@ --- -toc_priority: 4 +toc_priority: 6 toc_title: HDFS --- # HDFS {#table_engines-hdfs} -This engine provides integration with [Apache Hadoop](https://en.wikipedia.org/wiki/Apache_Hadoop) ecosystem by allowing to manage data on [HDFS](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html)via ClickHouse. This engine is similar +This engine provides integration with [Apache Hadoop](https://en.wikipedia.org/wiki/Apache_Hadoop) ecosystem by allowing to manage data on [HDFS](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html) via ClickHouse. This engine is similar to the [File](../../../engines/table-engines/special/file.md#table_engines-file) and [URL](../../../engines/table-engines/special/url.md#table_engines-url) engines, but provides Hadoop-specific features. 
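For a quick sense of the interface before the detailed usage below, a minimal sketch of such a table (the HDFS URI and the format are illustrative):

``` sql
-- An HDFS-backed table; data lives in the given HDFS path, not in ClickHouse
CREATE TABLE hdfs_engine_table (name String, value UInt32)
ENGINE = HDFS('hdfs://hdfs1:9000/other_storage', 'TSV');
```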
## Usage {#usage} @@ -174,7 +174,7 @@ Similar to GraphiteMergeTree, the HDFS engine supports extended configuration us | dfs\_domain\_socket\_path | "" | -[HDFS Configuration Reference ](https://hawq.apache.org/docs/userguide/2.3.0.0-incubating/reference/HDFSConfigurationParameterReference.html) might explain some parameters. +[HDFS Configuration Reference](https://hawq.apache.org/docs/userguide/2.3.0.0-incubating/reference/HDFSConfigurationParameterReference.html) might explain some parameters. #### ClickHouse extras {#clickhouse-extras} @@ -185,7 +185,6 @@ Similar to GraphiteMergeTree, the HDFS engine supports extended configuration us |hadoop\_kerberos\_kinit\_command | kinit | #### Limitations {#limitations} - * hadoop\_security\_kerberos\_ticket\_cache\_path can be global only, not user specific ## Kerberos support {#kerberos-support} @@ -207,4 +206,4 @@ If hadoop\_kerberos\_keytab, hadoop\_kerberos\_principal or hadoop\_kerberos\_ki - [Virtual columns](../../../engines/table-engines/index.md#table_engines-virtual_columns) -[Original article](https://clickhouse.tech/docs/en/operations/table_engines/hdfs/) +[Original article](https://clickhouse.tech/docs/en/engines/table-engines/integrations/hdfs/) diff --git a/docs/en/engines/table-engines/integrations/index.md b/docs/en/engines/table-engines/integrations/index.md index 288c9c3cd56..eb1c5411e18 100644 --- a/docs/en/engines/table-engines/integrations/index.md +++ b/docs/en/engines/table-engines/integrations/index.md @@ -1,6 +1,6 @@ --- toc_folder_title: Integrations -toc_priority: 30 +toc_priority: 1 --- # Table Engines for Integrations {#table-engines-for-integrations} @@ -18,3 +18,4 @@ List of supported integrations: - [Kafka](../../../engines/table-engines/integrations/kafka.md) - [EmbeddedRocksDB](../../../engines/table-engines/integrations/embedded-rocksdb.md) - [RabbitMQ](../../../engines/table-engines/integrations/rabbitmq.md) +- [PostgreSQL](../../../engines/table-engines/integrations/postgresql.md) diff --git a/docs/en/engines/table-engines/integrations/jdbc.md b/docs/en/engines/table-engines/integrations/jdbc.md index 2144be9f1e3..82efb842ae7 100644 --- a/docs/en/engines/table-engines/integrations/jdbc.md +++ b/docs/en/engines/table-engines/integrations/jdbc.md @@ -1,5 +1,5 @@ --- -toc_priority: 2 +toc_priority: 3 toc_title: JDBC --- @@ -85,4 +85,4 @@ FROM jdbc_table - [JDBC table function](../../../sql-reference/table-functions/jdbc.md). 
-[Original article](https://clickhouse.tech/docs/en/operations/table_engines/jdbc/)
+[Original article](https://clickhouse.tech/docs/en/engines/table-engines/integrations/jdbc/)

diff --git a/docs/en/engines/table-engines/integrations/kafka.md b/docs/en/engines/table-engines/integrations/kafka.md
index fb1df62bb15..2eebf5bdb92 100644
--- a/docs/en/engines/table-engines/integrations/kafka.md
+++ b/docs/en/engines/table-engines/integrations/kafka.md
@@ -1,5 +1,5 @@
 ---
-toc_priority: 5
+toc_priority: 8
 toc_title: Kafka
 ---

@@ -194,4 +194,4 @@ Example:
 - [Virtual columns](../../../engines/table-engines/index.md#table_engines-virtual_columns)
 - [background_schedule_pool_size](../../../operations/settings/settings.md#background_schedule_pool_size)

-[Original article](https://clickhouse.tech/docs/en/operations/table_engines/kafka/)
+[Original article](https://clickhouse.tech/docs/en/engines/table-engines/integrations/kafka/)

diff --git a/docs/en/engines/table-engines/integrations/mongodb.md b/docs/en/engines/table-engines/integrations/mongodb.md
index e648a13b5e0..a378ab03f55 100644
--- a/docs/en/engines/table-engines/integrations/mongodb.md
+++ b/docs/en/engines/table-engines/integrations/mongodb.md
@@ -1,5 +1,5 @@
 ---
-toc_priority: 7
+toc_priority: 5
 toc_title: MongoDB
 ---

@@ -54,4 +54,4 @@ SELECT COUNT() FROM mongo_table;
 └─────────┘
 ```

-[Original article](https://clickhouse.tech/docs/en/operations/table_engines/integrations/mongodb/)
+[Original article](https://clickhouse.tech/docs/en/engines/table-engines/integrations/mongodb/)

diff --git a/docs/en/engines/table-engines/integrations/mysql.md b/docs/en/engines/table-engines/integrations/mysql.md
index 2cb1facce91..3847e7a9e0e 100644
--- a/docs/en/engines/table-engines/integrations/mysql.md
+++ b/docs/en/engines/table-engines/integrations/mysql.md
@@ -1,5 +1,5 @@
 ---
-toc_priority: 3
+toc_priority: 4
 toc_title: MySQL
 ---

@@ -24,6 +24,7 @@ The table structure can differ from the original MySQL table structure:

 - Column names should be the same as in the original MySQL table, but you can use just some of these columns and in any order.
 - Column types may differ from those in the original MySQL table. ClickHouse tries to [cast](../../../sql-reference/functions/type-conversion-functions.md#type_conversion_function-cast) values to the ClickHouse data types.
+- The `external_table_functions_use_nulls` setting defines how to handle Nullable columns. Default is `true`; if `false`, the table function does not make Nullable columns and inserts default values instead of NULLs. This also applies to NULL values inside arrays (see the sketch below).
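A brief illustration of the setting from the last bullet above (the session-level toggle shown is a sketch):

``` sql
-- With the setting disabled, NULLs from MySQL arrive as the column type's
-- default values (0 for numbers, '' for strings) rather than as Nullable values.
SET external_table_functions_use_nulls = 0;
```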
 **Engine Parameters**

@@ -100,4 +101,4 @@ SELECT * FROM mysql_table
 - [The ‘mysql’ table function](../../../sql-reference/table-functions/mysql.md)
 - [Using MySQL as a source of external dictionary](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md#dicts-external_dicts_dict_sources-mysql)

-[Original article](https://clickhouse.tech/docs/en/operations/table_engines/mysql/)
+[Original article](https://clickhouse.tech/docs/en/engines/table-engines/integrations/mysql/)

diff --git a/docs/en/engines/table-engines/integrations/odbc.md b/docs/en/engines/table-engines/integrations/odbc.md
index fffc125b0ff..26bfb6aeb0d 100644
--- a/docs/en/engines/table-engines/integrations/odbc.md
+++ b/docs/en/engines/table-engines/integrations/odbc.md
@@ -1,5 +1,5 @@
 ---
-toc_priority: 1
+toc_priority: 2
 toc_title: ODBC
 ---

@@ -29,6 +29,7 @@ The table structure can differ from the source table structure:

 - Column names should be the same as in the source table, but you can use just some of these columns and in any order.
 - Column types may differ from those in the source table. ClickHouse tries to [cast](../../../sql-reference/functions/type-conversion-functions.md#type_conversion_function-cast) values to the ClickHouse data types.
+- The `external_table_functions_use_nulls` setting defines how to handle Nullable columns. Default is `true`; if `false`, the table function does not make Nullable columns and inserts default values instead of NULLs. This also applies to NULL values inside arrays.

 **Engine Parameters**

@@ -127,4 +128,4 @@ SELECT * FROM odbc_t
 - [ODBC external dictionaries](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md#dicts-external_dicts_dict_sources-odbc)
 - [ODBC table function](../../../sql-reference/table-functions/odbc.md)

-[Original article](https://clickhouse.tech/docs/en/operations/table_engines/odbc/)
+[Original article](https://clickhouse.tech/docs/en/engines/table-engines/integrations/odbc/)

diff --git a/docs/en/engines/table-engines/integrations/postgresql.md b/docs/en/engines/table-engines/integrations/postgresql.md
new file mode 100644
index 00000000000..4474b764d2e
--- /dev/null
+++ b/docs/en/engines/table-engines/integrations/postgresql.md
@@ -0,0 +1,145 @@
+---
+toc_priority: 11
+toc_title: PostgreSQL
+---
+
+# PostgreSQL {#postgresql}
+
+The PostgreSQL engine allows performing `SELECT` and `INSERT` queries on data that is stored on a remote PostgreSQL server.
+
+## Creating a Table {#creating-a-table}
+
+``` sql
+CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
+(
+    name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1] [TTL expr1],
+    name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2] [TTL expr2],
+    ...
+) ENGINE = PostgreSQL('host:port', 'database', 'table', 'user', 'password'[, `schema`]);
+```
+
+See a detailed description of the [CREATE TABLE](../../../sql-reference/statements/create/table.md#create-table-query) query.
+
+The table structure can differ from the original PostgreSQL table structure:
+
+- Column names should be the same as in the original PostgreSQL table, but you can use just some of these columns and in any order.
+- Column types may differ from those in the original PostgreSQL table. ClickHouse tries to [cast](../../../sql-reference/functions/type-conversion-functions.md#type_conversion_function-cast) values to the ClickHouse data types.
+- The `external_table_functions_use_nulls` setting defines how to handle Nullable columns. Default is `1`; if set to `0`, the table function does not make Nullable columns and inserts default values instead of NULLs. This also applies to NULL values inside arrays.
+
+**Engine Parameters**
+
+- `host:port` — PostgreSQL server address.
+- `database` — Remote database name.
+- `table` — Remote table name.
+- `user` — PostgreSQL user.
+- `password` — User password.
+- `schema` — Non-default table schema. Optional.
+
+## Implementation Details {#implementation-details}
+
+`SELECT` queries on the PostgreSQL side run as `COPY (SELECT ...) TO STDOUT` inside a read-only PostgreSQL transaction with a commit after each `SELECT` query.
+
+Simple `WHERE` clauses such as `=`, `!=`, `>`, `>=`, `<`, `<=`, and `IN` are executed on the PostgreSQL server.
+
+All joins, aggregations, sorting, `IN [ array ]` conditions and the `LIMIT` sampling constraint are executed in ClickHouse only after the query to PostgreSQL finishes.
+
+`INSERT` queries on the PostgreSQL side run as `COPY "table_name" (field1, field2, ... fieldN) FROM STDIN` inside a PostgreSQL transaction with auto-commit after each `INSERT` statement.
+
+PostgreSQL `Array` types are converted into ClickHouse arrays.
+
+!!! info "Note"
+    Be careful: in PostgreSQL, array data created like `type_name[]` may contain multidimensional arrays of different dimensions in different rows of the same column. In ClickHouse, all rows of the same column must contain multidimensional arrays with the same number of dimensions.
+
+Replica priority for the PostgreSQL dictionary source is supported. The bigger the number in the map, the lower the priority. The highest priority is `0`.
+
+In the example below replica `example01-1` has the highest priority:
+
+```xml
+<postgresql>
+    <port>5432</port>
+    <user>clickhouse</user>
+    <password>qwerty</password>
+    <replica>
+        <host>example01-1</host>
+        <priority>1</priority>
+    </replica>
+    <replica>
+        <host>example01-2</host>
+        <priority>2</priority>
+    </replica>
+    <db>db_name</db>
+    <table>table_name</table>
+    <where>id=10</where>
+    <invalidate_query>SQL_QUERY</invalidate_query>
+</postgresql>
+```
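For context, a minimal sketch of a dictionary that could be defined over such a source; the attribute names, layout and lifetime below are illustrative, not from the original:

``` sql
CREATE DICTIONARY postgres_dict
(
    id UInt32,
    value String
)
PRIMARY KEY id
SOURCE(POSTGRESQL(port 5432 host 'postgres1' user 'clickhouse' password 'qwerty' db 'db_name' table 'table_name'))
LAYOUT(HASHED())
LIFETIME(MIN 300 MAX 360);
```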
+
+## Usage Example {#usage-example}
+
+Table in PostgreSQL:
+
+``` text
+postgres=# CREATE TABLE "public"."test" (
+"int_id" SERIAL,
+"int_nullable" INT NULL DEFAULT NULL,
+"float" FLOAT NOT NULL,
+"str" VARCHAR(100) NOT NULL DEFAULT '',
+"float_nullable" FLOAT NULL DEFAULT NULL,
+PRIMARY KEY (int_id));
+
+CREATE TABLE
+
+postgres=# INSERT INTO test (int_id, str, "float") VALUES (1,'test',2);
+INSERT 0 1
+
+postgres=# SELECT * FROM test;
+ int_id | int_nullable | float | str  | float_nullable
+--------+--------------+-------+------+----------------
+      1 |              |     2 | test |
+(1 row)
+```
+
+Table in ClickHouse, retrieving data from the PostgreSQL table created above:
+
+``` sql
+CREATE TABLE default.postgresql_table
+(
+    `float_nullable` Nullable(Float32),
+    `str` String,
+    `int_id` Int32
+)
+ENGINE = PostgreSQL('localhost:5432', 'public', 'test', 'postgres_user', 'postgres_password');
+```
+
+``` sql
+SELECT * FROM postgresql_table WHERE str IN ('test');
+```
+
+``` text
+┌─float_nullable─┬─str──┬─int_id─┐
+│           ᴺᵁᴸᴸ │ test │      1 │
+└────────────────┴──────┴────────┘
+```
+
+Using a non-default schema:
+
+```text
+postgres=# CREATE SCHEMA "nice.schema";
+
+postgres=# CREATE TABLE "nice.schema"."nice.table" (a integer);
+
+postgres=# INSERT INTO "nice.schema"."nice.table" SELECT i FROM generate_series(0, 99) as t(i)
+```
+
+```sql
+CREATE TABLE pg_table_schema_with_dots (a UInt32)
+    ENGINE = PostgreSQL('localhost:5432', 'clickhouse', 'nice.table', 'postgresql_user', 'password', 'nice.schema');
+```
+
+**See Also**
+
+- [The `postgresql` table function](../../../sql-reference/table-functions/postgresql.md)
+- [Using PostgreSQL as a source of external dictionary](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md#dicts-external_dicts_dict_sources-postgresql)
+
+[Original article](https://clickhouse.tech/docs/en/engines/table-engines/integrations/postgresql/)

diff --git a/docs/en/engines/table-engines/integrations/rabbitmq.md b/docs/en/engines/table-engines/integrations/rabbitmq.md
index 4a0550275ca..5fb9ce5b151 100644
--- a/docs/en/engines/table-engines/integrations/rabbitmq.md
+++ b/docs/en/engines/table-engines/integrations/rabbitmq.md
@@ -1,5 +1,5 @@
 ---
-toc_priority: 6
+toc_priority: 10
 toc_title: RabbitMQ
 ---

@@ -163,3 +163,5 @@ Example:
 - `_redelivered` - `redelivered` flag of the message.
 - `_message_id` - messageID of the received message; non-empty if was set, when message was published.
 - `_timestamp` - timestamp of the received message; non-empty if was set, when message was published.
+
+[Original article](https://clickhouse.tech/docs/en/engines/table-engines/integrations/rabbitmq/)

diff --git a/docs/en/engines/table-engines/integrations/s3.md b/docs/en/engines/table-engines/integrations/s3.md
index 5858a0803e6..eb0d92b7738 100644
--- a/docs/en/engines/table-engines/integrations/s3.md
+++ b/docs/en/engines/table-engines/integrations/s3.md
@@ -1,52 +1,58 @@
 ---
-toc_priority: 4
+toc_priority: 7
 toc_title: S3
 ---

-# S3 {#table_engines-s3}
+# S3 Table Engine {#table-engine-s3}

-This engine provides integration with [Amazon S3](https://aws.amazon.com/s3/) ecosystem. This engine is similar
-to the [HDFS](../../../engines/table-engines/special/file.md#table_engines-hdfs) engine, but provides S3-specific features.
+This engine provides integration with the [Amazon S3](https://aws.amazon.com/s3/) ecosystem. This engine is similar to the [HDFS](../../../engines/table-engines/special/file.md#table_engines-hdfs) engine, but provides S3-specific features.
-## Usage {#usage}
+## Create Table {#creating-a-table}

 ``` sql
-ENGINE = S3(path, [aws_access_key_id, aws_secret_access_key,] format, structure, [compression])
+CREATE TABLE s3_engine_table (name String, value UInt32)
+ENGINE = S3(path, [aws_access_key_id, aws_secret_access_key,] format, [compression])
 ```

-**Input parameters**
+**Engine parameters**

-- `path` — Bucket url with path to file. Supports following wildcards in readonly mode: *, ?, {abc,def} and {N..M} where N, M — numbers, `’abc’, ‘def’ — strings.
+- `path` — Bucket URL with a path to the file. Supports the following wildcards in read-only mode: `*`, `?`, `{abc,def}` and `{N..M}` where `N`, `M` — numbers, `'abc'`, `'def'` — strings. For more information see [below](#wildcards-in-path).
 - `format` — The [format](../../../interfaces/formats.md#formats) of the file.
-- `structure` — Structure of the table. Format `'column1_name column1_type, column2_name column2_type, ...'`.
-- `compression` — Parameter is optional. Supported values: none, gzip/gz, brotli/br, xz/LZMA, zstd/zst. By default, it will autodetect compression by file extension.
+- `aws_access_key_id`, `aws_secret_access_key` — Long-term credentials for the [AWS](https://aws.amazon.com/) account user. You can use these to authenticate your requests. Parameter is optional. If credentials are not specified, they are taken from the configuration file. For more information see [Using S3 for Data Storage](../mergetree-family/mergetree.md#table_engine-mergetree-s3).
+- `compression` — Compression type. Supported values: `none`, `gzip/gz`, `brotli/br`, `xz/LZMA`, `zstd/zst`. Parameter is optional. By default, it will autodetect compression by file extension.

-**Example:**
+**Example**

-**1.** Set up the `s3_engine_table` table:
+1. Set up the `s3_engine_table` table:

 ``` sql
-CREATE TABLE s3_engine_table (name String, value UInt32) ENGINE=S3('https://storage.yandexcloud.net/my-test-bucket-768/test-data.csv.gz', 'CSV', 'name String, value UInt32', 'gzip')
+CREATE TABLE s3_engine_table (name String, value UInt32) ENGINE=S3('https://storage.yandexcloud.net/my-test-bucket-768/test-data.csv.gz', 'CSV', 'gzip');
 ```

-**2.** Fill file:
+2. Fill the file:

 ``` sql
-INSERT INTO s3_engine_table VALUES ('one', 1), ('two', 2), ('three', 3)
+INSERT INTO s3_engine_table VALUES ('one', 1), ('two', 2), ('three', 3);
 ```

-**3.** Query the data:
+3. Query the data:

 ``` sql
-SELECT * FROM s3_engine_table LIMIT 2
+SELECT * FROM s3_engine_table LIMIT 2;
 ```

-``` text
+```text
 ┌─name─┬─value─┐
 │ one  │     1 │
 │ two  │     2 │
 └──────┴───────┘
 ```

+## Virtual columns {#virtual-columns}
+
+- `_path` — Path to the file.
+- `_file` — Name of the file.
+
+For more information about virtual columns see [here](../../../engines/table-engines/index.md#table_engines-virtual_columns).

 ## Implementation Details {#implementation-details}

@@ -56,9 +62,9 @@ SELECT * FROM s3_engine_table LIMIT 2
 - Indexes.
 - Replication.

-**Globs in path**
+## Wildcards In Path {#wildcards-in-path}

-Multiple path components can have globs. For being processed file should exist and match to the whole path pattern. Listing of files determines during `SELECT` (not at `CREATE` moment).
+The `path` argument can specify multiple files using bash-like wildcards. To be processed, a file should exist and match the whole path pattern. The listing of files is determined during `SELECT` (not at `CREATE` time).

 - `*` — Substitutes any number of any characters except `/` including empty string.
 - `?` — Substitutes any single character.
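Since the wildcards and the virtual columns above compose naturally, a small sketch (the table name is a placeholder; the bucket URL is the illustrative one used throughout this page):

``` sql
CREATE TABLE wildcard_table (name String, value UInt32)
ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/*', 'CSV');

-- _path and _file identify which S3 object each row came from
SELECT _file, count() FROM wildcard_table GROUP BY _file;
```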
@@ -69,7 +75,7 @@ Constructions with `{}` are similar to the [remote](../../../sql-reference/table **Example** -1. Suppose we have several files in TSV format with the following URIs on HDFS: +1. Suppose we have several files in CSV format with the following URIs on S3: - ‘https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_1.csv’ - ‘https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_2.csv’ @@ -78,35 +84,34 @@ Constructions with `{}` are similar to the [remote](../../../sql-reference/table - ‘https://storage.yandexcloud.net/my-test-bucket-768/another_prefix/some_file_2.csv’ - ‘https://storage.yandexcloud.net/my-test-bucket-768/another_prefix/some_file_3.csv’ -2. There are several ways to make a table consisting of all six files: +There are several ways to make a table consisting of all six files: - +The first way: ``` sql -CREATE TABLE table_with_range (name String, value UInt32) ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/{some,another}_prefix/some_file_{1..3}', 'CSV') +CREATE TABLE table_with_range (name String, value UInt32) ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/{some,another}_prefix/some_file_{1..3}', 'CSV'); ``` -3. Another way: +Another way: ``` sql -CREATE TABLE table_with_question_mark (name String, value UInt32) ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/{some,another}_prefix/some_file_?', 'CSV') +CREATE TABLE table_with_question_mark (name String, value UInt32) ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/{some,another}_prefix/some_file_?', 'CSV'); ``` -4. Table consists of all the files in both directories (all files should satisfy format and schema described in query): +Table consists of all the files in both directories (all files should satisfy format and schema described in query): ``` sql -CREATE TABLE table_with_asterisk (name String, value UInt32) ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/{some,another}_prefix/*', 'CSV') +CREATE TABLE table_with_asterisk (name String, value UInt32) ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/{some,another}_prefix/*', 'CSV'); ``` -!!! warning "Warning" - If the listing of files contains number ranges with leading zeros, use the construction with braces for each digit separately or use `?`. +If the listing of files contains number ranges with leading zeros, use the construction with braces for each digit separately or use `?`. **Example** Create table with files named `file-000.csv`, `file-001.csv`, … , `file-999.csv`: ``` sql -CREATE TABLE big_table (name String, value UInt32) ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/big_prefix/file-{000..999}.csv', 'CSV') +CREATE TABLE big_table (name String, value UInt32) ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/big_prefix/file-{000..999}.csv', 'CSV'); ``` ## Virtual Columns {#virtual-columns} @@ -122,35 +127,82 @@ CREATE TABLE big_table (name String, value UInt32) ENGINE = S3('https://storage. The following settings can be set before query execution or placed into configuration file. -- `s3_max_single_part_upload_size` — Default value is `64Mb`. The maximum size of object to upload using singlepart upload to S3. -- `s3_min_upload_part_size` — Default value is `512Mb`. The minimum size of part to upload during multipart upload to [S3 Multipart upload](https://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html). -- `s3_max_redirects` — Default value is `10`. 
Max number of S3 redirects hops allowed.
+- `s3_max_single_part_upload_size` — The maximum size of object to upload using singlepart upload to S3. Default value is `64Mb`.
+- `s3_min_upload_part_size` — The minimum size of part to upload during multipart upload to [S3 Multipart upload](https://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html). Default value is `512Mb`.
+- `s3_max_redirects` — Max number of S3 redirects hops allowed. Default value is `10`.

 Security consideration: if malicious user can specify arbitrary S3 URLs, `s3_max_redirects` must be set to zero to avoid [SSRF](https://en.wikipedia.org/wiki/Server-side_request_forgery) attacks; or alternatively, `remote_host_filter` must be specified in server configuration.

-### Endpoint-based settings {#endpointsettings}
+## Endpoint-based Settings {#endpoint-settings}

 The following settings can be specified in configuration file for given endpoint (which will be matched by exact prefix of a URL):

-- `endpoint` — Mandatory. Specifies prefix of an endpoint.
-- `access_key_id` and `secret_access_key` — Optional. Specifies credentials to use with given endpoint.
-- `use_environment_credentials` — Optional, default value is `false`. If set to `true`, S3 client will try to obtain credentials from environment variables and Amazon EC2 metadata for given endpoint.
-- `header` — Optional, can be speficied multiple times. Adds specified HTTP header to a request to given endpoint.
-- `server_side_encryption_customer_key_base64` — Optional. If specified, required headers for accessing S3 objects with SSE-C encryption will be set.
+- `endpoint` — Specifies the prefix of an endpoint. Mandatory.
+- `access_key_id` and `secret_access_key` — Specifies credentials to use with the given endpoint. Optional.
+- `use_environment_credentials` — If set to `true`, the S3 client will try to obtain credentials from environment variables and Amazon EC2 metadata for the given endpoint. Optional, default value is `false`.
+- `use_insecure_imds_request` — If set to `true`, the S3 client will use an insecure IMDS request while obtaining credentials from Amazon EC2 metadata. Optional, default value is `false`.
+- `header` — Adds the specified HTTP header to a request to the given endpoint. Optional, can be specified multiple times.
+- `server_side_encryption_customer_key_base64` — If specified, required headers for accessing S3 objects with SSE-C encryption will be set. Optional.

-Example:
+**Example:**

-```
+``` xml
 <s3>
     <endpoint-name>
         <endpoint>https://storage.yandexcloud.net/my-test-bucket-768/</endpoint>
         <!-- the optional settings listed above (access_key_id, secret_access_key,
              use_environment_credentials, use_insecure_imds_request, header,
              server_side_encryption_customer_key_base64) can be specified here -->
+    </endpoint-name>
+</s3>
 ```

-[Original article](https://clickhouse.tech/docs/en/operations/table_engines/s3/)
+## Usage {#usage-examples}
+
+Suppose we have several files in CSV format with the following URIs on S3:
+
+- 'https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_1.csv'
+- 'https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_2.csv'
+- 'https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_3.csv'
+- 'https://storage.yandexcloud.net/my-test-bucket-768/another_prefix/some_file_1.csv'
+- 'https://storage.yandexcloud.net/my-test-bucket-768/another_prefix/some_file_2.csv'
+- 'https://storage.yandexcloud.net/my-test-bucket-768/another_prefix/some_file_3.csv'
+
+
+1. There are several ways to make a table consisting of all six files:
+
+``` sql
+CREATE TABLE table_with_range (name String, value UInt32)
+ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/{some,another}_prefix/some_file_{1..3}', 'CSV');
+```
+
+2.
Another way: + +``` sql +CREATE TABLE table_with_question_mark (name String, value UInt32) +ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/{some,another}_prefix/some_file_?', 'CSV'); +``` + +3. Table consists of all the files in both directories (all files should satisfy format and schema described in query): + +``` sql +CREATE TABLE table_with_asterisk (name String, value UInt32) +ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/{some,another}_prefix/*', 'CSV'); +``` + +!!! warning "Warning" + If the listing of files contains number ranges with leading zeros, use the construction with braces for each digit separately or use `?`. + +4. Create table with files named `file-000.csv`, `file-001.csv`, … , `file-999.csv`: + +``` sql +CREATE TABLE big_table (name String, value UInt32) +ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/big_prefix/file-{000..999}.csv', 'CSV'); +``` + +## See also + +- [S3 table function](../../../sql-reference/table-functions/s3.md) diff --git a/docs/en/engines/table-engines/mergetree-family/aggregatingmergetree.md b/docs/en/engines/table-engines/mergetree-family/aggregatingmergetree.md index 1a997b6b237..818830646cb 100644 --- a/docs/en/engines/table-engines/mergetree-family/aggregatingmergetree.md +++ b/docs/en/engines/table-engines/mergetree-family/aggregatingmergetree.md @@ -3,7 +3,7 @@ toc_priority: 35 toc_title: AggregatingMergeTree --- -# Aggregatingmergetree {#aggregatingmergetree} +# AggregatingMergeTree {#aggregatingmergetree} The engine inherits from [MergeTree](../../../engines/table-engines/mergetree-family/mergetree.md#table_engines-mergetree), altering the logic for data parts merging. ClickHouse replaces all rows with the same primary key (or more accurately, with the same [sorting key](../../../engines/table-engines/mergetree-family/mergetree.md)) with a single row (within a one data part) that stores a combination of states of aggregate functions. diff --git a/docs/en/engines/table-engines/mergetree-family/mergetree.md b/docs/en/engines/table-engines/mergetree-family/mergetree.md index 753859b46d2..dfe0ca84723 100644 --- a/docs/en/engines/table-engines/mergetree-family/mergetree.md +++ b/docs/en/engines/table-engines/mergetree-family/mergetree.md @@ -17,7 +17,7 @@ Main features: - Partitions can be used if the [partitioning key](../../../engines/table-engines/mergetree-family/custom-partitioning-key.md) is specified. - ClickHouse supports certain operations with partitions that are more effective than general operations on the same data with the same result. ClickHouse also automatically cuts off the partition data where the partitioning key is specified in the query. This also improves query performance. + ClickHouse supports certain operations with partitions that are more effective than general operations on the same data with the same result. ClickHouse also automatically cuts off the partition data where the partitioning key is specified in the query. - Data replication support. @@ -191,9 +191,7 @@ Sparse indexes allow you to work with a very large number of table rows, because ClickHouse does not require a unique primary key. You can insert multiple rows with the same primary key. -You can use `Nullable`-typed expressions in the `PRIMARY KEY` and `ORDER BY` clauses. To allow this feature, turn on the [allow_nullable_key](../../../operations/settings/settings.md#allow-nullable-key) setting. 
- -The [NULLS_LAST](../../../sql-reference/statements/select/order-by.md#sorting-of-special-values) principle applies for `NULL` values in the `ORDER BY` clause. +You can use `Nullable`-typed expressions in the `PRIMARY KEY` and `ORDER BY` clauses but it is strongly discouraged. To allow this feature, turn on the [allow_nullable_key](../../../operations/settings/settings.md#allow-nullable-key) setting. The [NULLS_LAST](../../../sql-reference/statements/select/order-by.md#sorting-of-special-values) principle applies for `NULL` values in the `ORDER BY` clause. ### Selecting the Primary Key {#selecting-the-primary-key} @@ -353,7 +351,7 @@ The `set` index can be used with all functions. Function subsets for other index | Function (operator) / Index | primary key | minmax | ngrambf_v1 | tokenbf_v1 | bloom_filter | |------------------------------------------------------------------------------------------------------------|-------------|--------|-------------|-------------|---------------| | [equals (=, ==)](../../../sql-reference/functions/comparison-functions.md#function-equals) | ✔ | ✔ | ✔ | ✔ | ✔ | -| [notEquals(!=, \<\>)](../../../sql-reference/functions/comparison-functions.md#function-notequals) | ✔ | ✔ | ✔ | ✔ | ✔ | +| [notEquals(!=, <>)](../../../sql-reference/functions/comparison-functions.md#function-notequals) | ✔ | ✔ | ✔ | ✔ | ✔ | | [like](../../../sql-reference/functions/string-search-functions.md#function-like) | ✔ | ✔ | ✔ | ✔ | ✗ | | [notLike](../../../sql-reference/functions/string-search-functions.md#function-notlike) | ✔ | ✔ | ✔ | ✔ | ✗ | | [startsWith](../../../sql-reference/functions/string-functions.md#startswith) | ✔ | ✔ | ✔ | ✔ | ✗ | @@ -361,10 +359,10 @@ The `set` index can be used with all functions. Function subsets for other index | [multiSearchAny](../../../sql-reference/functions/string-search-functions.md#function-multisearchany) | ✗ | ✗ | ✔ | ✗ | ✗ | | [in](../../../sql-reference/functions/in-functions.md#in-functions) | ✔ | ✔ | ✔ | ✔ | ✔ | | [notIn](../../../sql-reference/functions/in-functions.md#in-functions) | ✔ | ✔ | ✔ | ✔ | ✔ | -| [less (\<)](../../../sql-reference/functions/comparison-functions.md#function-less) | ✔ | ✔ | ✗ | ✗ | ✗ | -| [greater (\>)](../../../sql-reference/functions/comparison-functions.md#function-greater) | ✔ | ✔ | ✗ | ✗ | ✗ | -| [lessOrEquals (\<=)](../../../sql-reference/functions/comparison-functions.md#function-lessorequals) | ✔ | ✔ | ✗ | ✗ | ✗ | -| [greaterOrEquals (\>=)](../../../sql-reference/functions/comparison-functions.md#function-greaterorequals) | ✔ | ✔ | ✗ | ✗ | ✗ | +| [less (<)](../../../sql-reference/functions/comparison-functions.md#function-less) | ✔ | ✔ | ✗ | ✗ | ✗ | +| [greater (>)](../../../sql-reference/functions/comparison-functions.md#function-greater) | ✔ | ✔ | ✗ | ✗ | ✗ | +| [lessOrEquals (<=)](../../../sql-reference/functions/comparison-functions.md#function-lessorequals) | ✔ | ✔ | ✗ | ✗ | ✗ | +| [greaterOrEquals (>=)](../../../sql-reference/functions/comparison-functions.md#function-greaterorequals) | ✔ | ✔ | ✗ | ✗ | ✗ | | [empty](../../../sql-reference/functions/array-functions.md#function-empty) | ✔ | ✔ | ✗ | ✗ | ✗ | | [notEmpty](../../../sql-reference/functions/array-functions.md#function-notempty) | ✔ | ✔ | ✗ | ✗ | ✗ | | hasToken | ✗ | ✗ | ✗ | ✔ | ✗ | @@ -529,7 +527,7 @@ CREATE TABLE table_for_aggregation y Int ) ENGINE = MergeTree -ORDER BY k1, k2 +ORDER BY (k1, k2) TTL d + INTERVAL 1 MONTH GROUP BY k1, k2 SET x = max(x), y = min(y); ``` @@ -701,6 +699,32 @@ The `default` storage policy implies using only 
one volume, which consists of on The number of threads performing background moves of data parts can be changed by [background_move_pool_size](../../../operations/settings/settings.md#background_move_pool_size) setting. +### Details {#details} + +In the case of `MergeTree` tables, data is getting to disk in different ways: + +- As a result of an insert (`INSERT` query). +- During background merges and [mutations](../../../sql-reference/statements/alter/index.md#alter-mutations). +- When downloading from another replica. +- As a result of partition freezing [ALTER TABLE … FREEZE PARTITION](../../../sql-reference/statements/alter/partition.md#alter_freeze-partition). + +In all these cases except for mutations and partition freezing, a part is stored on a volume and a disk according to the given storage policy: + +1. The first volume (in the order of definition) that has enough disk space for storing a part (`unreserved_space > current_part_size`) and allows for storing parts of a given size (`max_data_part_size_bytes > current_part_size`) is chosen. +2. Within this volume, that disk is chosen that follows the one, which was used for storing the previous chunk of data, and that has free space more than the part size (`unreserved_space - keep_free_space_bytes > current_part_size`). + +Under the hood, mutations and partition freezing make use of [hard links](https://en.wikipedia.org/wiki/Hard_link). Hard links between different disks are not supported, therefore in such cases the resulting parts are stored on the same disks as the initial ones. + +In the background, parts are moved between volumes on the basis of the amount of free space (`move_factor` parameter) according to the order the volumes are declared in the configuration file. +Data is never transferred from the last one and into the first one. One may use system tables [system.part_log](../../../operations/system-tables/part_log.md#system_tables-part-log) (field `type = MOVE_PART`) and [system.parts](../../../operations/system-tables/parts.md#system_tables-parts) (fields `path` and `disk`) to monitor background moves. Also, the detailed information can be found in server logs. + +User can force moving a part or a partition from one volume to another using the query [ALTER TABLE … MOVE PART\|PARTITION … TO VOLUME\|DISK …](../../../sql-reference/statements/alter/partition.md#alter_move-partition), all the restrictions for background operations are taken into account. The query initiates a move on its own and does not wait for background operations to be completed. User will get an error message if not enough free space is available or if any of the required conditions are not met. + +Moving data does not interfere with data replication. Therefore, different storage policies can be specified for the same table on different replicas. + +After the completion of background merges and mutations, old parts are removed only after a certain amount of time (`old_parts_lifetime`). +During this time, they are not moved to other volumes or disks. Therefore, until the parts are finally removed, they are still taken into account for evaluation of the occupied disk space. + ## Using S3 for Data Storage {#table_engine-mergetree-s3} `MergeTree` family table engines is able to store data to [S3](https://aws.amazon.com/s3/) using a disk with type `s3`. 
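A sketch of how a table is placed on such a disk, assuming a storage policy named `s3_main` referencing an `s3`-type disk has been declared in the server configuration (the policy and table names are illustrative):

``` sql
-- Parts of this table are written to the S3-backed disk selected by the policy
CREATE TABLE s3_backed_table
(
    id UInt64,
    value String
)
ENGINE = MergeTree
ORDER BY id
SETTINGS storage_policy = 's3_main';
```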
 ## Using S3 for Data Storage {#table_engine-mergetree-s3}

 `MergeTree` family table engines are able to store data in [S3](https://aws.amazon.com/s3/) using a disk with type `s3`.
@@ -722,7 +746,6 @@ Configuration markup:
         <connect_timeout_ms>10000</connect_timeout_ms>
         <request_timeout_ms>5000</request_timeout_ms>
-        <max_connections>100</max_connections>
         <retry_attempts>10</retry_attempts>
         <min_bytes_for_seek>1000</min_bytes_for_seek>
         <metadata_path>/var/lib/clickhouse/disks/s3/</metadata_path>
@@ -742,10 +765,10 @@ Required parameters:

 Optional parameters:

 - `use_environment_credentials` — Reads AWS credentials from the environment variables `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_SESSION_TOKEN` if they exist. Default value is `false`.
+- `use_insecure_imds_request` — If set to `true`, the S3 client will use an insecure IMDS request while obtaining credentials from Amazon EC2 metadata. Default value is `false`.
 - `proxy` — Proxy configuration for S3 endpoint. Each `uri` element inside `proxy` block should contain a proxy URL.
 - `connect_timeout_ms` — Socket connect timeout in milliseconds. Default value is `10 seconds`.
 - `request_timeout_ms` — Request timeout in milliseconds. Default value is `5 seconds`.
-- `max_connections` — S3 connections pool size. Default value is `100`.
 - `retry_attempts` — Number of retry attempts in case of failed request. Default value is `10`.
 - `min_bytes_for_seek` — Minimal number of bytes to use seek operation instead of sequential read. Default value is `1 Mb`.
 - `metadata_path` — Path on local FS to store metadata files for S3. Default value is `/var/lib/clickhouse/disks/<disk_name>/`.
@@ -793,30 +816,4 @@ S3 disk can be configured as `main` or `cold` storage:

 In case of the `cold` option, data can be moved to S3 if the local disk's free space is smaller than `move_factor * disk_size`, or by a TTL move rule.

-### Details {#details}
-
-In the case of `MergeTree` tables, data is getting to disk in different ways:
-
-- As a result of an insert (`INSERT` query).
-- During background merges and [mutations](../../../sql-reference/statements/alter/index.md#alter-mutations).
-- When downloading from another replica.
-- As a result of partition freezing [ALTER TABLE … FREEZE PARTITION](../../../sql-reference/statements/alter/partition.md#alter_freeze-partition).
-
-In all these cases except for mutations and partition freezing, a part is stored on a volume and a disk according to the given storage policy:
-
-1. The first volume (in the order of definition) that has enough disk space for storing a part (`unreserved_space > current_part_size`) and allows for storing parts of a given size (`max_data_part_size_bytes > current_part_size`) is chosen.
-2. Within this volume, that disk is chosen that follows the one, which was used for storing the previous chunk of data, and that has free space more than the part size (`unreserved_space - keep_free_space_bytes > current_part_size`).
-
-Under the hood, mutations and partition freezing make use of [hard links](https://en.wikipedia.org/wiki/Hard_link). Hard links between different disks are not supported, therefore in such cases the resulting parts are stored on the same disks as the initial ones.
-
-In the background, parts are moved between volumes on the basis of the amount of free space (`move_factor` parameter) according to the order the volumes are declared in the configuration file.
-Data is never transferred from the last one and into the first one. One may use system tables [system.part_log](../../../operations/system-tables/part_log.md#system_tables-part-log) (field `type = MOVE_PART`) and [system.parts](../../../operations/system-tables/parts.md#system_tables-parts) (fields `path` and `disk`) to monitor background moves. Also, the detailed information can be found in server logs.
-
-User can force moving a part or a partition from one volume to another using the query [ALTER TABLE … MOVE PART\|PARTITION … TO VOLUME\|DISK …](../../../sql-reference/statements/alter/partition.md#alter_move-partition), all the restrictions for background operations are taken into account. The query initiates a move on its own and does not wait for background operations to be completed. User will get an error message if not enough free space is available or if any of the required conditions are not met.
-
-Moving data does not interfere with data replication. Therefore, different storage policies can be specified for the same table on different replicas.
-
-After the completion of background merges and mutations, old parts are removed only after a certain amount of time (`old_parts_lifetime`).
-During this time, they are not moved to other volumes or disks. Therefore, until the parts are finally removed, they are still taken into account for evaluation of the occupied disk space.
-
 [Original article](https://clickhouse.tech/docs/ru/operations/table_engines/mergetree/)
diff --git a/docs/en/engines/table-engines/special/buffer.md b/docs/en/engines/table-engines/special/buffer.md
index bf6c08f8f6c..8245cd19e8c 100644
--- a/docs/en/engines/table-engines/special/buffer.md
+++ b/docs/en/engines/table-engines/special/buffer.md
@@ -18,11 +18,17 @@ Engine parameters:

 - `num_layers` – Parallelism layer. Physically, the table will be represented as `num_layers` of independent buffers. Recommended value: 16.
 - `min_time`, `max_time`, `min_rows`, `max_rows`, `min_bytes`, and `max_bytes` – Conditions for flushing data from the buffer.

+Optional engine parameters:
+
+- `flush_time`, `flush_rows`, `flush_bytes` – Conditions for flushing data from the buffer that happen only in the background (omitted or zero means no `flush*` conditions).
+
 Data is flushed from the buffer and written to the destination table if all the `min*` conditions or at least one `max*` condition are met.

-- `min_time`, `max_time` – Condition for the time in seconds from the moment of the first write to the buffer.
-- `min_rows`, `max_rows` – Condition for the number of rows in the buffer.
-- `min_bytes`, `max_bytes` – Condition for the number of bytes in the buffer.
+Also, if at least one `flush*` condition is met, a flush is initiated in the background. This is different from `max*`, since `flush*` allows you to configure background flushes separately to avoid adding latency for `INSERT` (into `Buffer`) queries.
+
+- `min_time`, `max_time`, `flush_time` – Condition for the time in seconds from the moment of the first write to the buffer.
+- `min_rows`, `max_rows`, `flush_rows` – Condition for the number of rows in the buffer.
+- `min_bytes`, `max_bytes`, `flush_bytes` – Condition for the number of bytes in the buffer.

 During the write operation, data is inserted into one of the `num_layers` random buffers. Or, if the data part to insert is large enough (greater than `max_rows` or `max_bytes`), it is written directly to the destination table, omitting the buffer.
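A minimal sketch of an engine definition using these parameters (the `merge.hits` destination table is hypothetical, and the trailing three values illustrating the optional `flush*` parameters are assumed to follow `max_bytes`, in the order listed above):

``` sql
CREATE TABLE merge.hits_buffer AS merge.hits
ENGINE = Buffer(merge, hits, 16,       -- destination database, table, num_layers
                10, 100,               -- min_time, max_time (seconds)
                10000, 1000000,        -- min_rows, max_rows
                10000000, 100000000,   -- min_bytes, max_bytes
                60, 500000, 50000000); -- flush_time, flush_rows, flush_bytes (optional)
```

With such values, a flush happens when all `min*` thresholds are reached or any single `max*` threshold is exceeded, while crossing any `flush*` threshold triggers a background-only flush that does not delay `INSERT` queries.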
diff --git a/docs/en/faq/integration/json-import.md b/docs/en/faq/integration/json-import.md
index 7038cc539d2..3fa026c794a 100644
--- a/docs/en/faq/integration/json-import.md
+++ b/docs/en/faq/integration/json-import.md
@@ -19,7 +19,7 @@ $ echo '{"foo":"bar"}' | curl 'http://localhost:8123/?query=INSERT%20INTO%20test
 Using [CLI interface](../../interfaces/cli.md):

 ``` bash
-$ echo '{"foo":"bar"}' | clickhouse-client ---query="INSERT INTO test FORMAT JSONEachRow"
+$ echo '{"foo":"bar"}' | clickhouse-client --query="INSERT INTO test FORMAT JSONEachRow"
 ```

 Instead of inserting data manually, you might consider using one of the [client libraries](../../interfaces/index.md).
diff --git a/docs/en/getting-started/example-datasets/cell-towers.md b/docs/en/getting-started/example-datasets/cell-towers.md
index 76effdd4c62..7028b650ad1 100644
--- a/docs/en/getting-started/example-datasets/cell-towers.md
+++ b/docs/en/getting-started/example-datasets/cell-towers.md
@@ -3,31 +3,31 @@ toc_priority: 21
 toc_title: Cell Towers
 ---

-# Cell Towers
+# Cell Towers {#cell-towers}

 This dataset is from [OpenCellid](https://www.opencellid.org/) - The world's largest Open Database of Cell Towers.

-As of 2021 it contains more than 40 million records about cell towers (GSM, LTE, UMTS, etc.) around the world with their geographical coordinates and metadata (country code, network, etc).
+As of 2021, it contains more than 40 million records about cell towers (GSM, LTE, UMTS, etc.) around the world with their geographical coordinates and metadata (country code, network, etc).

-OpenCelliD Project is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License, and we redistribute a snapshot of this dataset under the terms of the same license. The up to date version of the dataset is available to download after sign in.
+OpenCelliD Project is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License, and we redistribute a snapshot of this dataset under the terms of the same license. The up-to-date version of the dataset is available to download after signing in.

-## Get the Dataset
+## Get the Dataset {#get-the-dataset}

-Download the snapshot of the dataset from Feb 2021: [https://datasets.clickhouse.tech/cell_towers.csv.xz] (729 MB).
+1. Download the snapshot of the dataset from February 2021: [https://datasets.clickhouse.tech/cell_towers.csv.xz] (729 MB).

-Optionally validate the integrity:
+2. Validate the integrity (optional step):

 ```
 md5sum cell_towers.csv.xz
 8cf986f4a0d9f12c6f384a0e9192c908  cell_towers.csv.xz
 ```

-Decompress it with the following command:
+3. Decompress it with the following command:

 ```
 xz -d cell_towers.csv.xz
 ```

-Create a table:
+4. Create a table:

 ```
 CREATE TABLE cell_towers
@@ -50,15 +50,15 @@ CREATE TABLE cell_towers
 ENGINE = MergeTree ORDER BY (radio, mcc, net, created);
 ```

-Insert the dataset:
+5. Insert the dataset:

 ```
 clickhouse-client --query "INSERT INTO cell_towers FORMAT CSVWithNames" < cell_towers.csv
 ```

+## Examples {#examples}

-## Run some queries
+1. The number of cell towers by type:

-Number of cell towers by type:
 ```
 SELECT radio, count() AS c FROM cell_towers GROUP BY radio ORDER BY c DESC
@@ -73,7 +73,8 @@ SELECT radio, count() AS c FROM cell_towers GROUP BY radio ORDER BY c DESC

 5 rows in set. Elapsed: 0.011 sec. Processed 43.28 million rows, 43.28 MB (3.83 billion rows/s., 3.83 GB/s.)
 ```

-Cell towers by mobile country code (MCC):
+2. 
Cell towers by [mobile country code (MCC)](https://en.wikipedia.org/wiki/Mobile_country_code): + ``` SELECT mcc, count() FROM cell_towers GROUP BY mcc ORDER BY count() DESC LIMIT 10 @@ -93,28 +94,28 @@ SELECT mcc, count() FROM cell_towers GROUP BY mcc ORDER BY count() DESC LIMIT 10 10 rows in set. Elapsed: 0.019 sec. Processed 43.28 million rows, 86.55 MB (2.33 billion rows/s., 4.65 GB/s.) ``` -See the dictionary here: [https://en.wikipedia.org/wiki/Mobile_country_code](https://en.wikipedia.org/wiki/Mobile_country_code). +So, the top countries are: the USA, Germany, and Russia. -So, the top countries are USA, Germany and Russia. - -You may want to create an [External Dictionary](../../sql-reference/dictionaries/external-dictionaries/external-dicts/) in ClickHouse to decode these values. +You may want to create an [External Dictionary](../../sql-reference/dictionaries/external-dictionaries/external-dicts.md) in ClickHouse to decode these values. -### Example of using `pointInPolygon` function +## Use case {#use-case} -Create a table where we will store polygons: +Using `pointInPolygon` function. + +1. Create a table where we will store polygons: ``` CREATE TEMPORARY TABLE moscow (polygon Array(Tuple(Float64, Float64))); ``` -This is a rough shape of Moscow (without "new Moscow"): +2. This is a rough shape of Moscow (without "new Moscow"): ``` INSERT INTO moscow VALUES ([(37.84172564285271, 55.78000432402266), (37.8381207618713, 55.775874525970494), (37.83979446823122, 55.775626746008065), (37.84243326983639, 55.77446586811748), (37.84262672750849, 55.771974101091104), (37.84153238623039, 55.77114545193181), (37.841124690460184, 55.76722010265554), (37.84239076983644, 55.76654891107098), (37.842283558197025, 55.76258709833121), (37.8421759312134, 55.758073999993734), (37.84198330422974, 55.75381499999371), (37.8416827275085, 55.749277102484484), (37.84157576190186, 55.74794544108413), (37.83897929098507, 55.74525257875241), (37.83739676451868, 55.74404373042019), (37.838732481460525, 55.74298009816793), (37.841183997352545, 55.743060321833575), (37.84097476190185, 55.73938799999373), (37.84048155819702, 55.73570799999372), (37.840095812164286, 55.73228210777237), (37.83983814285274, 55.73080491981639), (37.83846476321406, 55.729799917464675), (37.83835745269769, 55.72919751082619), (37.838636380279524, 55.72859509486539), (37.8395161005249, 55.727705075632784), (37.83897964285276, 55.722727886185154), (37.83862557539366, 55.72034817326636), (37.83559735744853, 55.71944437307499), (37.835370708803126, 55.71831419154461), (37.83738169402022, 55.71765218986692), (37.83823396494291, 55.71691750159089), (37.838056931213345, 55.71547311301385), (37.836812846557606, 55.71221445615604), (37.83522525396725, 55.709331054395555), (37.83269301586908, 55.70953687463627), (37.829667367706236, 55.70903403789297), (37.83311126588435, 55.70552351822608), (37.83058993121339, 55.70041317726053), (37.82983872750851, 55.69883771404813), (37.82934501586913, 55.69718947487017), (37.828926414016685, 55.69504441658371), (37.82876530422971, 55.69287499999378), (37.82894754100031, 55.690759754047335), (37.827697554878185, 55.68951421135665), (37.82447346292115, 55.68965045405069), (37.83136543914793, 55.68322046195302), (37.833554015869154, 55.67814012759211), (37.83544184655761, 55.67295011628339), (37.837480388885474, 55.6672498719639), (37.838960677246064, 55.66316274139358), (37.83926093121332, 55.66046999999383), (37.839025050262435, 55.65869897264431), (37.83670784390257, 55.65794084879904), (37.835656529083245, 
55.65694309303843), (37.83704060449217, 55.65689306460552), (37.83696819873806, 55.65550363526252), (37.83760389616388, 55.65487847246661), (37.83687972750851, 55.65356745541324), (37.83515216004943, 55.65155951234079), (37.83312418518067, 55.64979413590619), (37.82801726983639, 55.64640836412121), (37.820614174591, 55.64164525405531), (37.818908190475426, 55.6421883258084), (37.81717543386075, 55.64112490388471), (37.81690987037274, 55.63916106913107), (37.815099354492155, 55.637925371757085), (37.808769150787356, 55.633798276884455), (37.80100123544311, 55.62873670012244), (37.79598013491824, 55.62554336109055), (37.78634567724606, 55.62033499605651), (37.78334147619623, 55.618768681480326), (37.77746201055901, 55.619855533402706), (37.77527329626457, 55.61909966711279), (37.77801986242668, 55.618770300976294), (37.778212973541216, 55.617257701952106), (37.77784818518065, 55.61574504433011), (37.77016867724609, 55.61148576294007), (37.760191219573976, 55.60599579539028), (37.75338926983641, 55.60227892751446), (37.746329965606634, 55.59920577639331), (37.73939925396728, 55.59631430313617), (37.73273665739439, 55.5935318803559), (37.7299954450912, 55.59350760316188), (37.7268679946899, 55.59469840523759), (37.72626726983634, 55.59229549697373), (37.7262673598022, 55.59081598950582), (37.71897193121335, 55.5877595845419), (37.70871550793456, 55.58393177431724), (37.700497489410374, 55.580917323756644), (37.69204305026244, 55.57778089778455), (37.68544477378839, 55.57815154690915), (37.68391050793454, 55.57472945079756), (37.678803592590306, 55.57328235936491), (37.6743402539673, 55.57255251445782), (37.66813862698363, 55.57216388774464), (37.617927457672096, 55.57505691895805), (37.60443099999999, 55.5757737568051), (37.599683515869145, 55.57749105910326), (37.59754177842709, 55.57796291823627), (37.59625834786988, 55.57906686095235), (37.59501783265684, 55.57746616444403), (37.593090671936025, 55.57671634534502), (37.587018007904, 55.577944600233785), (37.578692203704804, 55.57982895000019), (37.57327546607398, 55.58116294118248), (37.57385012109279, 55.581550362779), (37.57399562266922, 55.5820107079112), (37.5735356072979, 55.58226289171689), (37.57290393054962, 55.582393529795155), (37.57037722355653, 55.581919415056234), (37.5592298306885, 55.584471614867844), (37.54189249206543, 55.58867650795186), (37.5297256269836, 55.59158133551745), (37.517837865081766, 55.59443656218868), (37.51200186508174, 55.59635625174229), (37.506808949737554, 55.59907823904434), (37.49820432275389, 55.6062944994944), (37.494406071441674, 55.60967103463367), (37.494760001358024, 55.61066689753365), (37.49397137107085, 55.61220931698269), (37.49016528606031, 55.613417718449064), (37.48773249206542, 55.61530616333343), (37.47921386508177, 55.622640129112334), (37.470652153442394, 55.62993723476164), (37.46273446298218, 55.6368075123157), (37.46350692265317, 55.64068225239439), (37.46050283203121, 55.640794546982576), (37.457627470916734, 55.64118904154646), (37.450718034393326, 55.64690488145138), (37.44239252645875, 55.65397824729769), (37.434587576721185, 55.66053543155961), (37.43582144975277, 55.661693766520735), (37.43576786245721, 55.662755031737014), (37.430982915344174, 55.664610641628116), (37.428547447097685, 55.66778515273695), (37.42945134592044, 55.668633314343566), (37.42859571562949, 55.66948145750025), (37.4262836402282, 55.670813882451405), (37.418709037048295, 55.6811141674414), (37.41922139651101, 55.68235377885389), (37.419218771842885, 55.68359335082235), (37.417196501327446, 
55.684375235224735), (37.41607020370478, 55.68540557585352), (37.415640857147146, 55.68686637150793), (37.414632153442334, 55.68903015131686), (37.413344899475064, 55.690896881757396), (37.41171432275391, 55.69264232162232), (37.40948282275393, 55.69455101638112), (37.40703674603271, 55.69638690385348), (37.39607169577025, 55.70451821283731), (37.38952706878662, 55.70942491932811), (37.387778313491815, 55.71149057784176), (37.39049275399779, 55.71419814298992), (37.385557272491454, 55.7155489617061), (37.38388335714726, 55.71849856042102), (37.378368238098155, 55.7292763261685), (37.37763597123337, 55.730845879211614), (37.37890062088197, 55.73167906388319), (37.37750451918789, 55.734703664681774), (37.375610832015965, 55.734851959522246), (37.3723813571472, 55.74105626086403), (37.37014935714723, 55.746115620904355), (37.36944173016362, 55.750883999993725), (37.36975304365541, 55.76335905525834), (37.37244070571134, 55.76432079697595), (37.3724259757175, 55.76636979670426), (37.369922155757884, 55.76735417953104), (37.369892695770275, 55.76823419316575), (37.370214730163575, 55.782312184391266), (37.370493611114505, 55.78436801120489), (37.37120164550783, 55.78596427165359), (37.37284851456452, 55.7874378183096), (37.37608325135799, 55.7886695054807), (37.3764587460632, 55.78947647305964), (37.37530000265506, 55.79146512926804), (37.38235915344241, 55.79899647809345), (37.384344043655396, 55.80113596939471), (37.38594269577028, 55.80322699999366), (37.38711208598329, 55.804919036911976), (37.3880239841309, 55.806610999993666), (37.38928977249147, 55.81001864976979), (37.39038389947512, 55.81348641242801), (37.39235781481933, 55.81983538336746), (37.393709457672124, 55.82417822811877), (37.394685720901464, 55.82792275755836), (37.39557615344238, 55.830447148154136), (37.39844478226658, 55.83167107969975), (37.40019761214057, 55.83151823557964), (37.400398790382326, 55.83264967594742), (37.39659544313046, 55.83322180909622), (37.39667059524539, 55.83402792148566), (37.39682089947515, 55.83638877400216), (37.39643489154053, 55.83861656112751), (37.3955338994751, 55.84072348043264), (37.392680272491454, 55.84502158126453), (37.39241188227847, 55.84659117913199), (37.392529730163616, 55.84816071336481), (37.39486835714723, 55.85288092980303), (37.39873052645878, 55.859893456073635), (37.40272161111449, 55.86441833633205), (37.40697072750854, 55.867579567544375), (37.410007082016016, 55.868369880337), (37.4120992989502, 55.86920843741314), (37.412668021163924, 55.87055369615854), (37.41482461111453, 55.87170587948249), (37.41862266137694, 55.873183961039565), (37.42413732540892, 55.874879126654704), (37.4312182698669, 55.875614937236705), (37.43111093783558, 55.8762723478417), (37.43332105622856, 55.87706546369396), (37.43385747619623, 55.87790681284802), (37.441303050262405, 55.88027084462084), (37.44747234260555, 55.87942070143253), (37.44716141796871, 55.88072960917233), (37.44769797085568, 55.88121221323979), (37.45204320500181, 55.882080694420715), (37.45673176190186, 55.882346110794586), (37.463383999999984, 55.88252729504517), (37.46682797486874, 55.88294937719063), (37.470014457672086, 55.88361266759345), (37.47751410450743, 55.88546991372396), (37.47860317658232, 55.88534929207307), (37.48165826025772, 55.882563306475106), (37.48316434442331, 55.8815803226785), (37.483831555817645, 55.882427612793315), (37.483182967125686, 55.88372791409729), (37.483092277908824, 55.88495581062434), (37.4855716508179, 55.8875561994203), (37.486440636245746, 55.887827444039566), (37.49014203439328, 
55.88897899871799), (37.493210285705544, 55.890208937135604), (37.497512451065035, 55.891342397444696), (37.49780744510645, 55.89174030252967), (37.49940333499519, 55.89239745507079), (37.50018383334346, 55.89339220941865), (37.52421672750851, 55.903869074155224), (37.52977457672118, 55.90564076517974), (37.53503220370484, 55.90661661218259), (37.54042858064267, 55.90714113744566), (37.54320461007303, 55.905645048442985), (37.545686966066306, 55.906608607018505), (37.54743976120755, 55.90788552162358), (37.55796999999999, 55.90901557907218), (37.572711542327866, 55.91059395704873), (37.57942799999998, 55.91073854155573), (37.58502865872187, 55.91009969268444), (37.58739968913264, 55.90794809960554), (37.59131567193598, 55.908713267595054), (37.612687423278814, 55.902866854295375), (37.62348079629517, 55.90041967242986), (37.635797880950896, 55.898141151686396), (37.649487626983664, 55.89639275532968), (37.65619302513125, 55.89572360207488), (37.66294133862307, 55.895295577183965), (37.66874564418033, 55.89505457604897), (37.67375601586915, 55.89254677027454), (37.67744661901856, 55.8947775867987), (37.688347, 55.89450045676125), (37.69480554232789, 55.89422926332761), (37.70107096560668, 55.89322256101114), (37.705962965606716, 55.891763491662616), (37.711885134918205, 55.889110234998974), (37.71682005026245, 55.886577568759876), (37.7199315476074, 55.88458159806678), (37.72234560316464, 55.882281005794134), (37.72364385977171, 55.8809452036196), (37.725371142837474, 55.8809722706006), (37.727870902099546, 55.88037213862385), (37.73394330422971, 55.877941504088696), (37.745339592590376, 55.87208120378722), (37.75525267724611, 55.86703807949492), (37.76919976190188, 55.859821640197474), (37.827835219574, 55.82962968399116), (37.83341438888553, 55.82575289922351), (37.83652584655761, 55.82188784027888), (37.83809213491821, 55.81612575504693), (37.83605359521481, 55.81460347077685), (37.83632178569025, 55.81276696067908), (37.838623105812026, 55.811486181656385), (37.83912198147584, 55.807329380532785), (37.839079078033414, 55.80510270463816), (37.83965844708251, 55.79940712529036), (37.840581150787344, 55.79131399999368), (37.84172564285271, 55.78000432402266)]); ``` -Check how many cell towers are in Moscow: +3. Check how many cell towers are in Moscow: ``` SELECT count() FROM cell_towers WHERE pointInPolygon((lon, lat), (SELECT * FROM moscow)) @@ -128,6 +129,4 @@ SELECT count() FROM cell_towers WHERE pointInPolygon((lon, lat), (SELECT * FROM The data is also available for interactive queries in the [Playground](https://gh-api.clickhouse.tech/play?user=play), [example](https://gh-api.clickhouse.tech/play?user=play#U0VMRUNUIG1jYywgY291bnQoKSBGUk9NIGNlbGxfdG93ZXJzIEdST1VQIEJZIG1jYyBPUkRFUiBCWSBjb3VudCgpIERFU0M=). -Although you cannot create temporary tables there. - -[Original article](https://clickhouse.tech/docs/en/getting_started/example_datasets/cell-towers/) +Although you cannot create temporary tables there. 
\ No newline at end of file diff --git a/docs/en/getting-started/example-datasets/ontime.md b/docs/en/getting-started/example-datasets/ontime.md index 83673cdceb6..f18acc6fd50 100644 --- a/docs/en/getting-started/example-datasets/ontime.md +++ b/docs/en/getting-started/example-datasets/ontime.md @@ -21,120 +21,121 @@ echo https://transtats.bts.gov/PREZIP/On_Time_Reporting_Carrier_On_Time_Performa Creating a table: ``` sql -CREATE TABLE `ontime` ( - `Year` UInt16, - `Quarter` UInt8, - `Month` UInt8, - `DayofMonth` UInt8, - `DayOfWeek` UInt8, - `FlightDate` Date, - `UniqueCarrier` FixedString(7), - `AirlineID` Int32, - `Carrier` FixedString(2), - `TailNum` String, - `FlightNum` String, - `OriginAirportID` Int32, - `OriginAirportSeqID` Int32, - `OriginCityMarketID` Int32, - `Origin` FixedString(5), - `OriginCityName` String, - `OriginState` FixedString(2), - `OriginStateFips` String, - `OriginStateName` String, - `OriginWac` Int32, - `DestAirportID` Int32, - `DestAirportSeqID` Int32, - `DestCityMarketID` Int32, - `Dest` FixedString(5), - `DestCityName` String, - `DestState` FixedString(2), - `DestStateFips` String, - `DestStateName` String, - `DestWac` Int32, - `CRSDepTime` Int32, - `DepTime` Int32, - `DepDelay` Int32, - `DepDelayMinutes` Int32, - `DepDel15` Int32, - `DepartureDelayGroups` String, - `DepTimeBlk` String, - `TaxiOut` Int32, - `WheelsOff` Int32, - `WheelsOn` Int32, - `TaxiIn` Int32, - `CRSArrTime` Int32, - `ArrTime` Int32, - `ArrDelay` Int32, - `ArrDelayMinutes` Int32, - `ArrDel15` Int32, - `ArrivalDelayGroups` Int32, - `ArrTimeBlk` String, - `Cancelled` UInt8, - `CancellationCode` FixedString(1), - `Diverted` UInt8, - `CRSElapsedTime` Int32, - `ActualElapsedTime` Int32, - `AirTime` Int32, - `Flights` Int32, - `Distance` Int32, - `DistanceGroup` UInt8, - `CarrierDelay` Int32, - `WeatherDelay` Int32, - `NASDelay` Int32, - `SecurityDelay` Int32, - `LateAircraftDelay` Int32, - `FirstDepTime` String, - `TotalAddGTime` String, - `LongestAddGTime` String, - `DivAirportLandings` String, - `DivReachedDest` String, - `DivActualElapsedTime` String, - `DivArrDelay` String, - `DivDistance` String, - `Div1Airport` String, - `Div1AirportID` Int32, - `Div1AirportSeqID` Int32, - `Div1WheelsOn` String, - `Div1TotalGTime` String, - `Div1LongestGTime` String, - `Div1WheelsOff` String, - `Div1TailNum` String, - `Div2Airport` String, - `Div2AirportID` Int32, - `Div2AirportSeqID` Int32, - `Div2WheelsOn` String, - `Div2TotalGTime` String, - `Div2LongestGTime` String, - `Div2WheelsOff` String, - `Div2TailNum` String, - `Div3Airport` String, - `Div3AirportID` Int32, - `Div3AirportSeqID` Int32, - `Div3WheelsOn` String, - `Div3TotalGTime` String, - `Div3LongestGTime` String, - `Div3WheelsOff` String, - `Div3TailNum` String, - `Div4Airport` String, - `Div4AirportID` Int32, - `Div4AirportSeqID` Int32, - `Div4WheelsOn` String, - `Div4TotalGTime` String, - `Div4LongestGTime` String, - `Div4WheelsOff` String, - `Div4TailNum` String, - `Div5Airport` String, - `Div5AirportID` Int32, - `Div5AirportSeqID` Int32, - `Div5WheelsOn` String, - `Div5TotalGTime` String, - `Div5LongestGTime` String, - `Div5WheelsOff` String, - `Div5TailNum` String +CREATE TABLE `ontime` +( + `Year` UInt16, + `Quarter` UInt8, + `Month` UInt8, + `DayofMonth` UInt8, + `DayOfWeek` UInt8, + `FlightDate` Date, + `Reporting_Airline` String, + `DOT_ID_Reporting_Airline` Int32, + `IATA_CODE_Reporting_Airline` String, + `Tail_Number` Int32, + `Flight_Number_Reporting_Airline` String, + `OriginAirportID` Int32, + `OriginAirportSeqID` Int32, + 
`OriginCityMarketID` Int32, + `Origin` FixedString(5), + `OriginCityName` String, + `OriginState` FixedString(2), + `OriginStateFips` String, + `OriginStateName` String, + `OriginWac` Int32, + `DestAirportID` Int32, + `DestAirportSeqID` Int32, + `DestCityMarketID` Int32, + `Dest` FixedString(5), + `DestCityName` String, + `DestState` FixedString(2), + `DestStateFips` String, + `DestStateName` String, + `DestWac` Int32, + `CRSDepTime` Int32, + `DepTime` Int32, + `DepDelay` Int32, + `DepDelayMinutes` Int32, + `DepDel15` Int32, + `DepartureDelayGroups` String, + `DepTimeBlk` String, + `TaxiOut` Int32, + `WheelsOff` Int32, + `WheelsOn` Int32, + `TaxiIn` Int32, + `CRSArrTime` Int32, + `ArrTime` Int32, + `ArrDelay` Int32, + `ArrDelayMinutes` Int32, + `ArrDel15` Int32, + `ArrivalDelayGroups` Int32, + `ArrTimeBlk` String, + `Cancelled` UInt8, + `CancellationCode` FixedString(1), + `Diverted` UInt8, + `CRSElapsedTime` Int32, + `ActualElapsedTime` Int32, + `AirTime` Nullable(Int32), + `Flights` Int32, + `Distance` Int32, + `DistanceGroup` UInt8, + `CarrierDelay` Int32, + `WeatherDelay` Int32, + `NASDelay` Int32, + `SecurityDelay` Int32, + `LateAircraftDelay` Int32, + `FirstDepTime` String, + `TotalAddGTime` String, + `LongestAddGTime` String, + `DivAirportLandings` String, + `DivReachedDest` String, + `DivActualElapsedTime` String, + `DivArrDelay` String, + `DivDistance` String, + `Div1Airport` String, + `Div1AirportID` Int32, + `Div1AirportSeqID` Int32, + `Div1WheelsOn` String, + `Div1TotalGTime` String, + `Div1LongestGTime` String, + `Div1WheelsOff` String, + `Div1TailNum` String, + `Div2Airport` String, + `Div2AirportID` Int32, + `Div2AirportSeqID` Int32, + `Div2WheelsOn` String, + `Div2TotalGTime` String, + `Div2LongestGTime` String, + `Div2WheelsOff` String, + `Div2TailNum` String, + `Div3Airport` String, + `Div3AirportID` Int32, + `Div3AirportSeqID` Int32, + `Div3WheelsOn` String, + `Div3TotalGTime` String, + `Div3LongestGTime` String, + `Div3WheelsOff` String, + `Div3TailNum` String, + `Div4Airport` String, + `Div4AirportID` Int32, + `Div4AirportSeqID` Int32, + `Div4WheelsOn` String, + `Div4TotalGTime` String, + `Div4LongestGTime` String, + `Div4WheelsOff` String, + `Div4TailNum` String, + `Div5Airport` String, + `Div5AirportID` Int32, + `Div5AirportSeqID` Int32, + `Div5WheelsOn` String, + `Div5TotalGTime` String, + `Div5LongestGTime` String, + `Div5WheelsOff` String, + `Div5TailNum` String ) ENGINE = MergeTree -PARTITION BY Year -ORDER BY (Carrier, FlightDate) -SETTINGS index_granularity = 8192; + PARTITION BY Year + ORDER BY (IATA_CODE_Reporting_Airline, FlightDate) + SETTINGS index_granularity = 8192; ``` Loading data with multiple threads: @@ -206,7 +207,7 @@ LIMIT 10; Q4. 
The number of delays by carrier for 2007

``` sql
-SELECT Carrier, count(*)
+SELECT IATA_CODE_Reporting_Airline AS Carrier, count(*)
 FROM ontime
 WHERE DepDelay>10 AND Year=2007
 GROUP BY Carrier
@@ -220,29 +221,29 @@ SELECT Carrier, c, c2, c*100/c2 as c3
 FROM
 (
     SELECT
-        Carrier,
+        IATA_CODE_Reporting_Airline AS Carrier,
         count(*) AS c
     FROM ontime
     WHERE DepDelay>10 AND Year=2007
     GROUP BY Carrier
-)
+) q
 JOIN
 (
     SELECT
-        Carrier,
+        IATA_CODE_Reporting_Airline AS Carrier,
         count(*) AS c2
     FROM ontime
     WHERE Year=2007
     GROUP BY Carrier
-) USING Carrier
+) qq USING Carrier
 ORDER BY c3 DESC;
```

Better version of the same query:

``` sql
-SELECT Carrier, avg(DepDelay>10)*100 AS c3
+SELECT IATA_CODE_Reporting_Airline AS Carrier, avg(DepDelay>10)*100 AS c3
 FROM ontime
 WHERE Year=2007
 GROUP BY Carrier
@@ -256,29 +257,29 @@ SELECT Carrier, c, c2, c*100/c2 as c3
 FROM
 (
     SELECT
-        Carrier,
+        IATA_CODE_Reporting_Airline AS Carrier,
         count(*) AS c
     FROM ontime
     WHERE DepDelay>10 AND Year>=2000 AND Year<=2008
     GROUP BY Carrier
-)
+) q
 JOIN
 (
     SELECT
-        Carrier,
+        IATA_CODE_Reporting_Airline AS Carrier,
         count(*) AS c2
     FROM ontime
     WHERE Year>=2000 AND Year<=2008
     GROUP BY Carrier
-) USING Carrier
+) qq USING Carrier
 ORDER BY c3 DESC;
```

Better version of the same query:

``` sql
-SELECT Carrier, avg(DepDelay>10)*100 AS c3
+SELECT IATA_CODE_Reporting_Airline AS Carrier, avg(DepDelay>10)*100 AS c3
 FROM ontime
 WHERE Year>=2000 AND Year<=2008
 GROUP BY Carrier
@@ -297,7 +298,7 @@ FROM
     from ontime
     WHERE DepDelay>10
     GROUP BY Year
-)
+) q
 JOIN
 (
     select
@@ -305,7 +306,7 @@ JOIN
     count(*) as c2
     from ontime
     GROUP BY Year
-) USING (Year)
+) qq USING (Year)
 ORDER BY Year;
```

@@ -340,7 +341,7 @@ Q10.

``` sql
 SELECT
-    min(Year), max(Year), Carrier, count(*) AS cnt,
+    min(Year), max(Year), IATA_CODE_Reporting_Airline AS Carrier, count(*) AS cnt,
     sum(ArrDelayMinutes>30) AS flights_delayed,
     round(sum(ArrDelayMinutes>30)/count(*),2) AS rate
 FROM ontime
diff --git a/docs/en/getting-started/example-datasets/recipes.md b/docs/en/getting-started/example-datasets/recipes.md
index b3c7d82f485..0f4e81c8470 100644
--- a/docs/en/getting-started/example-datasets/recipes.md
+++ b/docs/en/getting-started/example-datasets/recipes.md
@@ -7,15 +7,17 @@ toc_title: Recipes Dataset

 The RecipeNLG dataset is available for download [here](https://recipenlg.cs.put.poznan.pl/dataset). It contains 2.2 million recipes. The size is slightly less than 1 GB.

-## Download and unpack the dataset
+## Download and Unpack the Dataset

-Accept Terms and Conditions and download it [here](https://recipenlg.cs.put.poznan.pl/dataset). Unpack the zip file with `unzip`. You will get the `full_dataset.csv` file.
+1. Go to the download page [https://recipenlg.cs.put.poznan.pl/dataset](https://recipenlg.cs.put.poznan.pl/dataset).
+1. Accept Terms and Conditions and download the zip file.
+1. Unpack the zip file with `unzip`. You will get the `full_dataset.csv` file.

-## Create a table
+## Create a Table

 Run clickhouse-client and execute the following CREATE query:

-```
+``` sql
 CREATE TABLE recipes
 (
     title String,
@@ -27,11 +29,11 @@ CREATE TABLE recipes
 ) ENGINE = MergeTree ORDER BY title;
 ```

-## Insert the data
+## Insert the Data

 Run the following command:

-```
+``` bash
 clickhouse-client --query "
     INSERT INTO recipes
     SELECT
@@ -49,32 +51,41 @@ clickhouse-client --query "

This is a showcase of how to parse custom CSV, as it requires several format tweaks. 
Explanation:

-- the dataset is in CSV format, but it requires some preprocessing on insertion; we use table function [input](../../sql-reference/table-functions/input/) to perform preprocessing;
-- the structure of CSV file is specified in the argument of the table function `input`;
-- the field `num` (row number) is unneeded - we parse it from file and ignore;
-- we use `FORMAT CSVWithNames` but the header in CSV will be ignored (by command line parameter `--input_format_with_names_use_header 0`), because the header does not contain the name for the first field;
-- file is using only double quotes to enclose CSV strings; some strings are not enclosed in double quotes, and single quote must not be parsed as the string enclosing - that's why we also add the `--format_csv_allow_single_quote 0` parameter;
-- some strings from CSV cannot parse, because they contain `\M/` sequence at the beginning of the value; the only value starting with backslash in CSV can be `\N` that is parsed as SQL NULL. We add `--input_format_allow_errors_num 10` parameter and up to ten malformed records can be skipped;
-- there are arrays for ingredients, directions and NER fields; these arrays are represented in unusual form: they are serialized into string as JSON and then placed in CSV - we parse them as String and then use [JSONExtract](../../sql-reference/functions/json-functions/) function to transform it to Array.
+- The dataset is in CSV format, but it requires some preprocessing on insertion; we use the table function [input](../../sql-reference/table-functions/input.md) to perform the preprocessing;
+- The structure of the CSV file is specified in the argument of the table function `input`;
+- The field `num` (row number) is unneeded - we parse it from the file and ignore it;
+- We use `FORMAT CSVWithNames`, but the header in the CSV will be ignored (via the command line parameter `--input_format_with_names_use_header 0`), because the header does not contain the name of the first field;
+- The file uses only double quotes to enclose CSV strings; some strings are not enclosed in double quotes, and a single quote must not be parsed as a string delimiter - that's why we also add the `--format_csv_allow_single_quote 0` parameter;
+- Some strings from the CSV cannot be parsed, because they contain the `\M/` sequence at the beginning of the value; the only value that can start with a backslash in CSV is `\N`, which is parsed as SQL NULL. We add the `--input_format_allow_errors_num 10` parameter so that up to ten malformed records can be skipped;
+- There are arrays for the ingredients, directions, and NER fields; these arrays are represented in an unusual form: they are serialized into a string as JSON and then placed in the CSV - we parse them as String and then use the [JSONExtract](../../sql-reference/functions/json-functions/) function to transform them to Array.

-## Validate the inserted data
+## Validate the Inserted Data

By checking the row count:

-```
-SELECT count() FROM recipes
+Query:

+``` sql
+SELECT count() FROM recipes;
+```
+
+Result:
+
+``` text
 ┌─count()─┐
 │ 2231141 │
 └─────────┘
 ```

+## Example Queries

-## Example queries

-### Top components by the number of recipes:
+### Top Components by the Number of Recipes:

+In this example, we learn how to use the [arrayJoin](../../sql-reference/functions/array-join/) function to expand an array into a set of rows. 
-```
+Query:
+
+``` sql
 SELECT
     arrayJoin(NER) AS k,
     count() AS c
@@ -82,7 +93,11 @@ FROM recipes
 GROUP BY k
 ORDER BY c DESC
 LIMIT 50
+```

+Result:
+
+``` text
 ┌─k────────────────────┬──────c─┐
 │ salt                 │ 890741 │
 │ sugar                │ 620027 │
@@ -139,11 +154,9 @@ LIMIT 50

 50 rows in set. Elapsed: 0.112 sec. Processed 2.23 million rows, 361.57 MB (19.99 million rows/s., 3.24 GB/s.)
 ```

-In this example we learn how to use [arrayJoin](../../sql-reference/functions/array-join/) function to multiply data by array elements.
+### The Most Complex Recipes with Strawberry

-### The most complex recipes with strawberry
-
-```
+``` sql
 SELECT
     title,
     length(NER),
@@ -152,7 +165,11 @@ FROM recipes
 WHERE has(NER, 'strawberry')
 ORDER BY length(directions) DESC
 LIMIT 10
+```

+Result:
+
+``` text
 ┌─title────────────────────────────────────────────────────────────┬─length(NER)─┬─length(directions)─┐
 │ Chocolate-Strawberry-Orange Wedding Cake                         │          24 │                126 │
 │ Strawberry Cream Cheese Crumble Tart                             │          19 │                 47 │
@@ -171,15 +188,19 @@ LIMIT 10

 In this example, we use the [has](../../sql-reference/functions/array-functions/#hasarr-elem) function to filter by array elements and sort by the number of directions.

-There is a wedding cake that requires the whole 126 steps to produce!
+There is a wedding cake that requires a full 126 steps to produce! Show the directions:

-Show that directions:
+Query:

-```
+``` sql
 SELECT arrayJoin(directions)
 FROM recipes
 WHERE title = 'Chocolate-Strawberry-Orange Wedding Cake'
+```

+Result:
+
+``` text
 ┌─arrayJoin(directions)───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
 │ Position 1 rack in center and 1 rack in bottom third of oven and preheat to 350F.                                                                                            │
 │ Butter one 5-inch-diameter cake pan with 2-inch-high sides, one 8-inch-diameter cake pan with 2-inch-high sides and one 12-inch-diameter cake pan with 2-inch-high sides.    │
@@ -312,6 +333,8 @@ WHERE title = 'Chocolate-Strawberry-Orange Wedding Cake'

 126 rows in set. Elapsed: 0.011 sec. Processed 8.19 thousand rows, 5.34 MB (737.75 thousand rows/s., 480.59 MB/s.)
 ```

-### Online playground
+### Online Playground

-The dataset is also available in the [Playground](https://gh-api.clickhouse.tech/play?user=play#U0VMRUNUCiAgICBhcnJheUpvaW4oTkVSKSBBUyBrLAogICAgY291bnQoKSBBUyBjCkZST00gcmVjaXBlcwpHUk9VUCBCWSBrCk9SREVSIEJZIGMgREVTQwpMSU1JVCA1MA==).
+The dataset is also available in the [Online Playground](https://gh-api.clickhouse.tech/play?user=play#U0VMRUNUCiAgICBhcnJheUpvaW4oTkVSKSBBUyBrLAogICAgY291bnQoKSBBUyBjCkZST00gcmVjaXBlcwpHUk9VUCBCWSBrCk9SREVSIEJZIGMgREVTQwpMSU1JVCA1MA==).
+
+[Original article](https://clickhouse.tech/docs/en/getting-started/example-datasets/recipes/)
diff --git a/docs/en/getting-started/install.md b/docs/en/getting-started/install.md
index a8753de6abd..2134de9d0f3 100644
--- a/docs/en/getting-started/install.md
+++ b/docs/en/getting-started/install.md
@@ -102,7 +102,9 @@ For non-Linux operating systems and for AArch64 CPU architecture, ClickHouse buil

 - [FreeBSD](https://builds.clickhouse.tech/master/freebsd/clickhouse) — `curl -O 'https://builds.clickhouse.tech/master/freebsd/clickhouse' && chmod a+x ./clickhouse`
 - [AArch64](https://builds.clickhouse.tech/master/aarch64/clickhouse) — `curl -O 'https://builds.clickhouse.tech/master/aarch64/clickhouse' && chmod a+x ./clickhouse`

-After downloading, you can use the `clickhouse client` to connect to the server, or `clickhouse local` to process local data. 
To run `clickhouse server`, you have to additionally download [server](https://github.com/ClickHouse/ClickHouse/blob/master/programs/server/config.xml) and [users](https://github.com/ClickHouse/ClickHouse/blob/master/programs/server/users.xml) configuration files from GitHub.
+After downloading, you can use the `clickhouse client` to connect to the server, or `clickhouse local` to process local data.
+
+Run `sudo ./clickhouse install` if you want to install ClickHouse system-wide (along with the needed configuration files, user configuration, etc.). After that, run the `clickhouse start` command to start the clickhouse-server and `clickhouse-client` to connect to it.

 These builds are not recommended for use in production environments because they are less thoroughly tested, but you can do so at your own risk. They also have only a subset of ClickHouse features available.
diff --git a/docs/en/getting-started/playground.md b/docs/en/getting-started/playground.md
index 7838dad14ea..9adf0423cf3 100644
--- a/docs/en/getting-started/playground.md
+++ b/docs/en/getting-started/playground.md
@@ -38,10 +38,10 @@ The queries are executed as a read-only user. It implies some limitations:

 The following settings are also enforced:

-- [max_result_bytes=10485760](../operations/settings/query_complexity/#max-result-bytes)
-- [max_result_rows=2000](../operations/settings/query_complexity/#setting-max_result_rows)
-- [result_overflow_mode=break](../operations/settings/query_complexity/#result-overflow-mode)
-- [max_execution_time=60000](../operations/settings/query_complexity/#max-execution-time)
+- [max_result_bytes=10485760](../operations/settings/query-complexity/#max-result-bytes)
+- [max_result_rows=2000](../operations/settings/query-complexity/#setting-max_result_rows)
+- [result_overflow_mode=break](../operations/settings/query-complexity/#result-overflow-mode)
+- [max_execution_time=60000](../operations/settings/query-complexity/#max-execution-time)

 ## Examples {#examples}
diff --git a/docs/en/guides/apply-catboost-model.md b/docs/en/guides/apply-catboost-model.md
index f614b121714..7c2c8a575ec 100644
--- a/docs/en/guides/apply-catboost-model.md
+++ b/docs/en/guides/apply-catboost-model.md
@@ -159,6 +159,9 @@ The fastest way to evaluate a CatBoost model is compile `libcatboostmodel.
 <models_config>/home/catboost/models/*_model.xml</models_config>
 ```

+!!! note "Note"
+    You can change the path to the CatBoost model configuration later without restarting the server.
+
 ## 4. Run the Model Inference from SQL {#run-model-inference}

 To test the model, run the ClickHouse client `$ clickhouse client`.
diff --git a/docs/en/interfaces/formats.md b/docs/en/interfaces/formats.md
index 33bf90a8b52..5987ba0f676 100644
--- a/docs/en/interfaces/formats.md
+++ b/docs/en/interfaces/formats.md
@@ -50,7 +50,7 @@ The supported formats are:

 | [Parquet](#data-format-parquet) | ✔ | ✔ |
 | [Arrow](#data-format-arrow) | ✔ | ✔ |
 | [ArrowStream](#data-format-arrow-stream) | ✔ | ✔ |
-| [ORC](#data-format-orc) | ✔ | ✗ |
+| [ORC](#data-format-orc) | ✔ | ✔ |
 | [RowBinary](#rowbinary) | ✔ | ✔ |
 | [RowBinaryWithNamesAndTypes](#rowbinarywithnamesandtypes) | ✔ | ✔ |
 | [Native](#native) | ✔ | ✔ |
@@ -1254,7 +1254,7 @@ ClickHouse supports configurable precision of `Decimal` type. The `INSERT` query

 Unsupported Parquet data types: `DATE32`, `TIME32`, `FIXED_SIZE_BINARY`, `JSON`, `UUID`, `ENUM`.

-Data types of ClickHouse table columns can differ from the corresponding fields of the Parquet data inserted. 
When inserting data, ClickHouse interprets data types according to the table above and then [cast](../query_language/functions/type_conversion_functions/#type_conversion_function-cast) the data to that data type which is set for the ClickHouse table column. +Data types of ClickHouse table columns can differ from the corresponding fields of the Parquet data inserted. When inserting data, ClickHouse interprets data types according to the table above and then [cast](../sql-reference/functions/type-conversion-functions/#type_conversion_function-cast) the data to that data type which is set for the ClickHouse table column. ### Inserting and Selecting Data {#inserting-and-selecting-data} @@ -1284,32 +1284,33 @@ To exchange data with Hadoop, you can use [HDFS table engine](../engines/table-e ## ORC {#data-format-orc} -[Apache ORC](https://orc.apache.org/) is a columnar storage format widespread in the Hadoop ecosystem. You can only insert data in this format to ClickHouse. +[Apache ORC](https://orc.apache.org/) is a columnar storage format widespread in the [Hadoop](https://hadoop.apache.org/) ecosystem. ### Data Types Matching {#data_types-matching-3} -The table below shows supported data types and how they match ClickHouse [data types](../sql-reference/data-types/index.md) in `INSERT` queries. +The table below shows supported data types and how they match ClickHouse [data types](../sql-reference/data-types/index.md) in `INSERT` and `SELECT` queries. -| ORC data type (`INSERT`) | ClickHouse data type | -|--------------------------|-----------------------------------------------------| -| `UINT8`, `BOOL` | [UInt8](../sql-reference/data-types/int-uint.md) | -| `INT8` | [Int8](../sql-reference/data-types/int-uint.md) | -| `UINT16` | [UInt16](../sql-reference/data-types/int-uint.md) | -| `INT16` | [Int16](../sql-reference/data-types/int-uint.md) | -| `UINT32` | [UInt32](../sql-reference/data-types/int-uint.md) | -| `INT32` | [Int32](../sql-reference/data-types/int-uint.md) | -| `UINT64` | [UInt64](../sql-reference/data-types/int-uint.md) | -| `INT64` | [Int64](../sql-reference/data-types/int-uint.md) | -| `FLOAT`, `HALF_FLOAT` | [Float32](../sql-reference/data-types/float.md) | -| `DOUBLE` | [Float64](../sql-reference/data-types/float.md) | -| `DATE32` | [Date](../sql-reference/data-types/date.md) | -| `DATE64`, `TIMESTAMP` | [DateTime](../sql-reference/data-types/datetime.md) | -| `STRING`, `BINARY` | [String](../sql-reference/data-types/string.md) | -| `DECIMAL` | [Decimal](../sql-reference/data-types/decimal.md) | +| ORC data type (`INSERT`) | ClickHouse data type | ORC data type (`SELECT`) | +|--------------------------|-----------------------------------------------------|--------------------------| +| `UINT8`, `BOOL` | [UInt8](../sql-reference/data-types/int-uint.md) | `UINT8` | +| `INT8` | [Int8](../sql-reference/data-types/int-uint.md) | `INT8` | +| `UINT16` | [UInt16](../sql-reference/data-types/int-uint.md) | `UINT16` | +| `INT16` | [Int16](../sql-reference/data-types/int-uint.md) | `INT16` | +| `UINT32` | [UInt32](../sql-reference/data-types/int-uint.md) | `UINT32` | +| `INT32` | [Int32](../sql-reference/data-types/int-uint.md) | `INT32` | +| `UINT64` | [UInt64](../sql-reference/data-types/int-uint.md) | `UINT64` | +| `INT64` | [Int64](../sql-reference/data-types/int-uint.md) | `INT64` | +| `FLOAT`, `HALF_FLOAT` | [Float32](../sql-reference/data-types/float.md) | `FLOAT` | +| `DOUBLE` | [Float64](../sql-reference/data-types/float.md) | `DOUBLE` | +| `DATE32` | 
[Date](../sql-reference/data-types/date.md)         | `DATE32`                 |
+| `DATE64`, `TIMESTAMP`    | [DateTime](../sql-reference/data-types/datetime.md) | `TIMESTAMP`              |
+| `STRING`, `BINARY`       | [String](../sql-reference/data-types/string.md)     | `BINARY`                 |
+| `DECIMAL`                | [Decimal](../sql-reference/data-types/decimal.md)   | `DECIMAL`                |
+| `-`                      | [Array](../sql-reference/data-types/array.md)       | `LIST`                   |

 ClickHouse supports configurable precision of the `Decimal` type. The `INSERT` query treats the ORC `DECIMAL` type as the ClickHouse `Decimal128` type.

-Unsupported ORC data types: `DATE32`, `TIME32`, `FIXED_SIZE_BINARY`, `JSON`, `UUID`, `ENUM`.
+Unsupported ORC data types: `TIME32`, `FIXED_SIZE_BINARY`, `JSON`, `UUID`, `ENUM`.

 The data types of ClickHouse table columns don’t have to match the corresponding ORC data fields. When inserting data, ClickHouse interprets data types according to the table above and then [casts](../sql-reference/functions/type-conversion-functions.md#type_conversion_function-cast) the data to the data type set for the ClickHouse table column.
@@ -1321,6 +1322,14 @@ You can insert ORC data from a file into ClickHouse table by the following comma

 ``` bash
 $ cat filename.orc | clickhouse-client --query="INSERT INTO some_table FORMAT ORC"
 ```

+### Selecting Data {#selecting-data-2}
+
+You can select data from a ClickHouse table and save it to a file in the ORC format with the following command:
+
+``` bash
+$ clickhouse-client --query="SELECT * FROM {some_table} FORMAT ORC" > {filename.orc}
+```
+
 To exchange data with Hadoop, you can use [HDFS table engine](../engines/table-engines/integrations/hdfs.md).

 ## LineAsString {#lineasstring}
@@ -1359,15 +1368,15 @@ When working with the `Regexp` format, you can use the following settings:

     - Escaped (similarly to [TSV](#tabseparated))
     - Quoted (similarly to [Values](#data-format-values))
     - Raw (extracts subpatterns as a whole, no escaping rules)
-- `format_regexp_skip_unmatched` — [UInt8](../sql-reference/data-types/int-uint.md). Defines the need to throw an exeption in case the `format_regexp` expression does not match the imported data. Can be set to `0` or `1`.
+- `format_regexp_skip_unmatched` — [UInt8](../sql-reference/data-types/int-uint.md). Defines the need to throw an exception if the `format_regexp` expression does not match the imported data. Can be set to `0` or `1`.

-**Usage**
+**Usage**

-The regular expression from `format_regexp` setting is applied to every line of imported data. The number of subpatterns in the regular expression must be equal to the number of columns in imported dataset.
+The regular expression from the `format_regexp` setting is applied to every line of imported data. The number of subpatterns in the regular expression must be equal to the number of columns in the imported dataset.

-Lines of the imported data must be separated by newline character `'\n'` or DOS-style newline `"\r\n"`.
+Lines of the imported data must be separated by the newline character `'\n'` or the DOS-style newline `"\r\n"`.

-The content of every matched subpattern is parsed with the method of corresponding data type, according to `format_regexp_escaping_rule` setting.
+The content of every matched subpattern is parsed with the method of the corresponding data type, according to the `format_regexp_escaping_rule` setting.

 If the regular expression does not match the line and `format_regexp_skip_unmatched` is set to 1, the line is silently skipped. If `format_regexp_skip_unmatched` is set to 0, an exception is thrown.
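As an illustration of these settings, here is a minimal sketch of importing lines with the `Regexp` format (the `logs` table and its three columns are hypothetical; note that the regular expression has exactly three subpatterns, one per column, as required):

``` bash
# Assumes a table like: CREATE TABLE logs (date Date, level String, message String) ENGINE = MergeTree ORDER BY date
$ echo '2021-04-12 [error] disk full' | clickhouse-client \
    --query="INSERT INTO logs FORMAT Regexp" \
    --format_regexp='(\d{4}-\d{2}-\d{2}) \[(\w+)\] (.*)' \
    --format_regexp_escaping_rule='Escaped' \
    --format_regexp_skip_unmatched=0   # an exception is thrown for non-matching lines
```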
diff --git a/docs/en/interfaces/third-party/client-libraries.md b/docs/en/interfaces/third-party/client-libraries.md
index c08eec61b1c..f5c85289171 100644
--- a/docs/en/interfaces/third-party/client-libraries.md
+++ b/docs/en/interfaces/third-party/client-libraries.md
@@ -23,6 +23,7 @@ toc_title: Client Libraries
     - [SeasClick C++ client](https://github.com/SeasX/SeasClick)
     - [one-ck](https://github.com/lizhichao/one-ck)
     - [glushkovds/phpclickhouse-laravel](https://packagist.org/packages/glushkovds/phpclickhouse-laravel)
+    - [kolya7k ClickHouse PHP extension](https://github.com/kolya7k/clickhouse-php)
 - Go
     - [clickhouse](https://github.com/kshvakov/clickhouse/)
     - [go-clickhouse](https://github.com/roistat/go-clickhouse)
diff --git a/docs/en/interfaces/third-party/gui.md b/docs/en/interfaces/third-party/gui.md
index fa123d8b23d..fffe0c87a53 100644
--- a/docs/en/interfaces/third-party/gui.md
+++ b/docs/en/interfaces/third-party/gui.md
@@ -167,4 +167,25 @@ Features:

 [How to configure ClickHouse in Looker.](https://docs.looker.com/setup-and-management/database-config/clickhouse)

+### SeekTable {#seektable}
+
+[SeekTable](https://www.seektable.com) is a self-service BI tool for data exploration and operational reporting. It is available both as a cloud service and as a self-hosted version. Reports from SeekTable may be embedded into any web app.
+
+Features:
+
+- Business-user-friendly report builder.
+- Powerful report parameters for SQL filtering and report-specific query customizations.
+- Can connect to ClickHouse both with a native TCP/IP endpoint and an HTTP(S) interface (two different drivers).
+- It is possible to use the full power of the ClickHouse SQL dialect in dimension/measure definitions.
+- [Web API](https://www.seektable.com/help/web-api-integration) for automated report generation.
+- Supports a report development flow with account data [backup/restore](https://www.seektable.com/help/self-hosted-backup-restore); data model (cube) and report configuration is human-readable XML that can be stored under a version control system.
+
+SeekTable is [free](https://www.seektable.com/help/cloud-pricing) for personal/individual usage.
+
+[How to configure ClickHouse connection in SeekTable.](https://www.seektable.com/help/clickhouse-pivot-table)
+
+### Chadmin {#chadmin}
+
+[Chadmin](https://github.com/bun4uk/chadmin) is a simple UI where you can visualize the queries currently running on your ClickHouse cluster, view information about them, and kill them if needed.
+
 [Original article](https://clickhouse.tech/docs/en/interfaces/third-party/gui/)
diff --git a/docs/en/introduction/adopters.md b/docs/en/introduction/adopters.md
index 454d856f779..fa257a84173 100644
--- a/docs/en/introduction/adopters.md
+++ b/docs/en/introduction/adopters.md
@@ -12,9 +12,13 @@ toc_title: Adopters
|---------|----------|---------|--------------|------------------------------------------------------------------------------|-----------|
|
2gis | Maps | Monitoring | — | — | [Talk in Russian, July 2019](https://youtu.be/58sPkXfq6nw) | | Admiral | Martech | Engagement Management | — | — | [Webinar Slides, June 2020](https://altinity.com/presentations/2020/06/16/big-data-in-real-time-how-clickhouse-powers-admirals-visitor-relationships-for-publishers) | +| AdScribe | Ads | TV Analytics | — | — | [A quote from CTO](https://altinity.com/24x7-support/) | +| Ahrefs | SEO | Analytics | — | — | [Job listing](https://ahrefs.com/jobs/data-scientist-search) | | Alibaba Cloud | Cloud | Managed Service | — | — | [Official Website](https://help.aliyun.com/product/144466.html) | | Aloha Browser | Mobile App | Browser backend | — | — | [Slides in Russian, May 2019](https://presentations.clickhouse.tech/meetup22/aloha.pdf) | +| Altinity | Cloud, SaaS | Main product | — | — | [Official Website](https://altinity.com/) | | Amadeus | Travel | Analytics | — | — | [Press Release, April 2018](https://www.altinity.com/blog/2018/4/5/amadeus-technologies-launches-investment-and-insights-tool-based-on-machine-learning-and-strategy-algorithms) | +| ApiRoad | API marketplace | Analytics | — | — | [Blog post, Nov 2018, Mar 2020](https://pixeljets.com/blog/clickhouse-vs-elasticsearch/) | | Appsflyer | Mobile analytics | Main product | — | — | [Talk in Russian, July 2019](https://www.youtube.com/watch?v=M3wbRlcpBbY) | | ArenaData | Data Platform | Main product | — | — | [Slides in Russian, December 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup38/indexes.pdf) | | Avito | Classifieds | Monitoring | — | — | [Meetup, April 2020](https://www.youtube.com/watch?v=n1tm4j4W8ZQ) | @@ -37,23 +41,27 @@ toc_title: Adopters | CraiditX 氪信 | Finance AI | Analysis | — | — | [Slides in English, November 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup33/udf.pptx) | | Crazypanda | Games | | — | — | Live session on ClickHouse meetup | | Criteo | Retail | Main product | — | — | [Slides in English, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup18/3_storetail.pptx) | +| Cryptology | Digital Assets Trading Platform | — | — | — | [Job advertisement, March 2021](https://career.habr.com/companies/cryptology/vacancies) | | Dataliance for China Telecom | Telecom | Analytics | — | — | [Slides in Chinese, January 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup12/telecom.pdf) | | Deutsche Bank | Finance | BI Analytics | — | — | [Slides in English, October 2019](https://bigdatadays.ru/wp-content/uploads/2019/10/D2-H3-3_Yakunin-Goihburg.pdf) | | Deeplay | Gaming Analytics | — | — | — | [Job advertisement, 2020](https://career.habr.com/vacancies/1000062568) | | Diva-e | Digital consulting | Main Product | — | — | [Slides in English, September 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup29/ClickHouse-MeetUp-Unusual-Applications-sd-2019-09-17.pdf) | | Ecwid | E-commerce SaaS | Metrics, Logging | — | — | [Slides in Russian, April 2019](https://nastachku.ru/var/files/1/presentation/backend/2_Backend_6.pdf) | | eBay | E-commerce | Logs, Metrics and Events | — | — | [Official website, Sep 2020](https://tech.ebayinc.com/engineering/ou-online-analytical-processing/) | -| Exness | Trading | Metrics, Logging | — | — | [Talk in Russian, May 2019](https://youtu.be/_rpU-TvSfZ8?t=3215) | +| Exness | Trading | Metrics, Logging | — | — | [Talk in Russian, May 2019](https://youtu.be/_rpU-TvSfZ8?t=3215) | +| EventBunker.io | Serverless Data 
Processing | — | — | — | [Tweet, April 2021](https://twitter.com/Halil_D_/status/1379839133472985091) | | FastNetMon | DDoS Protection | Main Product | | — | [Official website](https://fastnetmon.com/docs-fnm-advanced/fastnetmon-advanced-traffic-persistency/) | | Flipkart | e-Commerce | — | — | — | [Talk in English, July 2020](https://youtu.be/GMiXCMFDMow?t=239) | | FunCorp | Games | | — | 14 bn records/day as of Jan 2021 | [Article](https://www.altinity.com/blog/migrating-from-redshift-to-clickhouse) | | Geniee | Ad network | Main product | — | — | [Blog post in Japanese, July 2017](https://tech.geniee.co.jp/entry/2017/07/20/160100) | | Genotek | Bioinformatics | Main product | — | — | [Video, August 2020](https://youtu.be/v3KyZbz9lEE) | +| Glaber | Monitoring | Main product | — | — | [Website](https://glaber.io/) | | HUYA | Video Streaming | Analytics | — | — | [Slides in Chinese, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup19/7.%20ClickHouse万亿数据分析实践%20李本旺(sundy-li)%20虎牙.pdf) | | ICA | FinTech | Risk Management | — | — | [Blog Post in English, Sep 2020](https://altinity.com/blog/clickhouse-vs-redshift-performance-for-fintech-risk-management?utm_campaign=ClickHouse%20vs%20RedShift&utm_content=143520807&utm_medium=social&utm_source=twitter&hss_channel=tw-3894792263) | | Idealista | Real Estate | Analytics | — | — | [Blog Post in English, April 2019](https://clickhouse.tech/blog/en/clickhouse-meetup-in-madrid-on-april-2-2019) | | Infovista | Networks | Analytics | — | — | [Slides in English, October 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup30/infovista.pdf) | | InnoGames | Games | Metrics, Logging | — | — | [Slides in Russian, September 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup28/graphite_and_clickHouse.pdf) | +| Instabug | APM Platform | Main product | — | — | [A quote from Co-Founder](https://altinity.com/) | | Instana | APM Platform | Main product | — | — | [Twitter post](https://twitter.com/mieldonkers/status/1248884119158882304) | | Integros | Platform for video services | Analytics | — | — | [Slides in Russian, May 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup22/strategies.pdf) | | Ippon Technologies | Technology Consulting | — | — | — | [Talk in English, July 2020](https://youtu.be/GMiXCMFDMow?t=205) | @@ -65,15 +73,20 @@ toc_title: Adopters | Lawrence Berkeley National Laboratory | Research | Traffic analysis | 1 server | 11.8 TiB | [Slides in English, April 2019](https://www.smitasin.com/presentations/2019-04-17_DOE-NSM.pdf) | | LifeStreet | Ad network | Main product | 75 servers (3 replicas) | 5.27 PiB | [Blog post in Russian, February 2017](https://habr.com/en/post/322620/) | | Mail.ru Cloud Solutions | Cloud services | Main product | — | — | [Article in Russian](https://mcs.mail.ru/help/db-create/clickhouse#) | +| MAXILECT | Ad Tech, Blockchain, ML, AI | — | — | — | [Job advertisement, 2021](https://www.linkedin.com/feed/update/urn:li:activity:6780842017229430784/) | | Marilyn | Advertising | Statistics | — | — | [Talk in Russian, June 2017](https://www.youtube.com/watch?v=iXlIgx2khwc) | | Mello | Marketing | Analytics | 1 server | — | [Article, Oct 2020](https://vc.ru/marketing/166180-razrabotka-tipovogo-otcheta-skvoznoy-analitiki) | | MessageBird | Telecommunications | Statistics | — | — | [Slides in English, November 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup20/messagebird.pdf) | -| 
MindsDB | Machine Learning | Main Product | — | — | [Official Website](https://www.mindsdb.com/blog/machine-learning-models-as-tables-in-ch) |x +| Microsoft | Web Analytics | Clarity (Main Product) | — | — | [A question on GitHub](https://github.com/ClickHouse/ClickHouse/issues/21556) | +| MindsDB | Machine Learning | Main Product | — | — | [Official Website](https://www.mindsdb.com/blog/machine-learning-models-as-tables-in-ch) | | MUX | Online Video | Video Analytics | — | — | [Talk in English, August 2019](https://altinity.com/presentations/2019/8/13/how-clickhouse-became-the-default-analytics-database-for-mux/) | | MGID | Ad network | Web-analytics | — | — | [Blog post in Russian, April 2020](http://gs-studio.com/news-about-it/32777----clickhouse---c) | +| Netskope | Network Security | — | — | — | [Job advertisement, March 2021](https://www.mendeley.com/careers/job/senior-software-developer-backend-developer-1346348) | +| NIC Labs | Network Monitoring | RaTA-DNS | — | — | [Blog post, March 2021](https://niclabs.cl/ratadns/2021/03/Clickhouse) | | NOC Project | Network Monitoring | Analytics | Main Product | — | [Official Website](https://getnoc.com/features/big-data/) | | Nuna Inc. | Health Data Analytics | — | — | — | [Talk in English, July 2020](https://youtu.be/GMiXCMFDMow?t=170) | | OneAPM | Monitorings and Data Analysis | Main product | — | — | [Slides in Chinese, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup19/8.%20clickhouse在OneAPM的应用%20杜龙.pdf) | +| OZON | E-commerce | — | — | — | [Official website](https://job.ozon.ru/vacancy/razrabotchik-clickhouse-ekspluatatsiya-40991870/) | | Panelbear | Analytics | Monitoring and Analytics | — | — | [Tech Stack, November 2020](https://panelbear.com/blog/tech-stack/) | | Percent 百分点 | Analytics | Main Product | — | — | [Slides in Chinese, June 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup24/4.%20ClickHouse万亿数据双中心的设计与实践%20.pdf) | | Percona | Performance analysis | Percona Monitoring and Management | — | — | [Official website, Mar 2020](https://www.percona.com/blog/2020/03/30/advanced-query-analysis-in-percona-monitoring-and-management-with-direct-clickhouse-access/) | @@ -90,14 +103,17 @@ toc_title: Adopters | Rspamd | Antispam | Analytics | — | — | [Official Website](https://rspamd.com/doc/modules/clickhouse.html) | | RuSIEM | SIEM | Main Product | — | — | [Official Website](https://rusiem.com/en/products/architecture) | | S7 Airlines | Airlines | Metrics, Logging | — | — | [Talk in Russian, March 2019](https://www.youtube.com/watch?v=nwG68klRpPg&t=15s) | +| Sber | Banking, Fintech, Retail, Cloud, Media | — | — | — | [Job advertisement, March 2021](https://career.habr.com/vacancies/1000073536) | | scireum GmbH | e-Commerce | Main product | — | — | [Talk in German, February 2020](https://www.youtube.com/watch?v=7QWAn5RbyR4) | | Segment | Data processing | Main product | 9 * i3en.3xlarge nodes 7.5TB NVME SSDs, 96GB Memory, 12 vCPUs | — | [Slides, 2019](https://slides.com/abraithwaite/segment-clickhouse) | +| sembot.io | Shopping Ads | — | — | — | A comment on LinkedIn, 2020 | | SEMrush | Marketing | Main product | — | — | [Slides in Russian, August 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup17/5_semrush.pdf) | | Sentry | Software Development | Main product | — | — | [Blog Post in English, May 2019](https://blog.sentry.io/2019/05/16/introducing-snuba-sentrys-new-search-infrastructure) | | seo.do | Analytics | Main product | — | — | 
[Slides in English, November 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup35/CH%20Presentation-%20Metehan%20Çetinkaya.pdf) |
| SGK | Government Social Security | Analytics | — | — | [Slides in English, November 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup35/ClickHouse%20Meetup-Ramazan%20POLAT.pdf) |
| Sina | News | — | — | — | [Slides in Chinese, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup19/6.%20ClickHouse最佳实践%20高鹏_新浪.pdf) |
| SMI2 | News | Analytics | — | — | [Blog Post in Russian, November 2017](https://habr.com/ru/company/smi2/blog/314558/) |
+| Spark New Zealand | Telecommunications | Security Operations | — | — | [Blog Post, Feb 2020](https://blog.n0p.me/2020/02/2020-02-05-dnsmonster/) |
| Splunk | Business Analytics | Main product | — | — | [Slides in English, January 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup12/splunk.pdf) |
| Spotify | Music | Experimentation | — | — | [Slides, July 2018](https://www.slideshare.net/glebus/using-clickhouse-for-experimentation-104247173) |
| Staffcop | Information Security | Main Product | — | — | [Official website, Documentation](https://www.staffcop.ru/sce43) |
@@ -106,22 +122,31 @@ toc_title: Adopters
| Tencent | Big Data | Data processing | — | — | [Slides in Chinese, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup19/5.%20ClickHouse大数据集群应用_李俊飞腾讯网媒事业部.pdf) |
| Tencent | Messaging | Logging | — | — | [Talk in Chinese, November 2019](https://youtu.be/T-iVQRuw-QY?t=5050) |
| Tencent Music Entertainment (TME) | BigData | Data processing | — | — | [Blog in Chinese, June 2020](https://cloud.tencent.com/developer/article/1637840) |
+| Tinybird | Real-time Data Products | Data processing | — | — | [Official website](https://www.tinybird.co/) |
| Traffic Stars | AD network | — | — | — | [Slides in Russian, May 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup15/lightning/ninja.pdf) |
| Uber | Taxi | Logging | — | — | [Slides, February 2020](https://presentations.clickhouse.tech/meetup40/uber.pdf) |
+| UTMSTAT | Analytics | Main product | — | — | [Blog post, June 2020](https://vc.ru/tribuna/133956-striming-dannyh-iz-servisa-skvoznoy-analitiki-v-clickhouse) |
| VKontakte | Social Network | Statistics, Logging | — | — | [Slides in Russian, August 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup17/3_vk.pdf) |
+| VMWare | Cloud | VeloCloud, SDN | — | — | [Product documentation](https://docs.vmware.com/en/vRealize-Operations-Manager/8.3/com.vmware.vcom.metrics.doc/GUID-A9AD72E1-C948-4CA2-971B-919385AB3CA8.html) |
| Walmart Labs | Internet, Retail | — | — | — | [Talk in English, July 2020](https://youtu.be/GMiXCMFDMow?t=144) |
| Wargaming | Games | | — | — | [Interview](https://habr.com/en/post/496954/) |
+| Wildberries | E-commerce | | — | — | [Official website](https://it.wildberries.ru/) |
| Wisebits | IT Solutions | Analytics | — | — | [Slides in Russian, May 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup22/strategies.pdf) |
| Workato | Automation Software | — | — | — | [Talk in English, July 2020](https://youtu.be/GMiXCMFDMow?t=334) |
+| Xenoss | Marketing, Advertising | — | — | — | [Instagram, March 2021](https://www.instagram.com/p/CNATV7qBgB1/) |
| Xiaoxin Tech | Education | Common purpose | — | — | [Slides in English, November 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup33/sync-clickhouse-with-mysql-mongodb.pptx) |
| Ximalaya | Audio sharing | OLAP | — | — | [Slides in English, November 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup33/ximalaya.pdf) |
| Yandex Cloud | Public Cloud | Main product | — | — | [Talk in Russian, December 2019](https://www.youtube.com/watch?v=pgnak9e_E0o) |
| Yandex DataLens | Business Intelligence | Main product | — | — | [Slides in Russian, December 2019](https://presentations.clickhouse.tech/meetup38/datalens.pdf) |
| Yandex Market | e-Commerce | Metrics, Logging | — | — | [Talk in Russian, January 2019](https://youtu.be/_l1qP0DyBcA?t=478) |
| Yandex Metrica | Web analytics | Main product | 630 servers in one cluster, 360 servers in another cluster, 1862 servers in one department | 133 PiB / 8.31 PiB / 120 trillion records | [Slides, February 2020](https://presentations.clickhouse.tech/meetup40/introduction/#13) |
+| Yotascale | Cloud | Data pipeline | — | 2 bn records/day | [LinkedIn (Accomplishments)](https://www.linkedin.com/in/adilsaleem/) |
| ЦВТ | Software Development | Metrics, Logging | — | — | [Blog Post, March 2019, in Russian](https://vc.ru/dev/62715-kak-my-stroili-monitoring-na-prometheus-clickhouse-i-elk) |
| МКБ | Bank | Web-system monitoring | — | — | [Slides in Russian, September 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup28/mkb.pdf) |
| ЦФТ | Banking, Financial products, Payments | — | — | — | [Meetup in Russian, April 2020](https://team.cft.ru/events/162) |
+| Цифровой Рабочий | Industrial IoT, Analytics | — | — | — | [Blog post in Russian, March 2021](https://habr.com/en/company/croc/blog/548018/) |
| kakaocorp | Internet company | — | — | — | [if(kakao)2020 conference](https://if.kakao.com/session/117) |
+| ООО «МПЗ Богородский» | Agriculture | — | — | — | [Article in Russian, November 2020](https://cloud.yandex.ru/cases/okraina) |
+| Tesla | Electric vehicle and clean energy company | — | — | — | [Vacancy description, March 2021](https://news.ycombinator.com/item?id=26306170) |

[Original article](https://clickhouse.tech/docs/en/introduction/adopters/)
diff --git a/docs/en/operations/access-rights.md b/docs/en/operations/access-rights.md
index 32f8fdcb642..9f7d2a0b95b 100644
--- a/docs/en/operations/access-rights.md
+++ b/docs/en/operations/access-rights.md
@@ -101,6 +101,9 @@ Privileges can be granted to a role by the [GRANT](../sql-reference/statements/g
Row policy is a filter that defines which of the rows are available to a user or a role. Row policy contains filters for one particular table, as well as a list of roles and/or users which should use this row policy.
+!!! note "Warning"
+    Row policies make sense only for users with readonly access. If a user can modify a table or copy partitions between tables, it defeats the restrictions of row policies.
+
Management queries:
- [CREATE ROW POLICY](../sql-reference/statements/create/row-policy.md)
diff --git a/docs/en/operations/external-authenticators/index.md b/docs/en/operations/external-authenticators/index.md
index 95f80f192f5..aa220f50ef8 100644
--- a/docs/en/operations/external-authenticators/index.md
+++ b/docs/en/operations/external-authenticators/index.md
@@ -11,3 +11,6 @@ ClickHouse supports authenticating and managing users using external services.
The following external authenticators and directories are supported:

- [LDAP](./ldap.md#external-authenticators-ldap) [Authenticator](./ldap.md#ldap-external-authenticator) and [Directory](./ldap.md#ldap-external-user-directory)
+- Kerberos [Authenticator](./kerberos.md#external-authenticators-kerberos)
+
+[Original article](https://clickhouse.tech/docs/en/operations/external-authenticators/index/)
diff --git a/docs/en/operations/external-authenticators/kerberos.md b/docs/en/operations/external-authenticators/kerberos.md
new file mode 100644
index 00000000000..5fe0b2bfc37
--- /dev/null
+++ b/docs/en/operations/external-authenticators/kerberos.md
@@ -0,0 +1,115 @@
+# Kerberos {#external-authenticators-kerberos}
+
+Existing and properly configured ClickHouse users can be authenticated via the Kerberos authentication protocol.
+
+Currently, Kerberos can only be used as an external authenticator for existing users, which are defined in `users.xml` or in local access control paths. Those users may only use HTTP requests and must be able to authenticate using the GSS-SPNEGO mechanism.
+
+For this approach, Kerberos must be configured in the system and must be enabled in the ClickHouse config.
+
+
+## Enabling Kerberos in ClickHouse {#enabling-kerberos-in-clickhouse}
+
+To enable Kerberos, include the `kerberos` section in `config.xml`. This section may contain additional parameters.
+
+#### Parameters:
+
+- `principal` - canonical service principal name that will be acquired and used when accepting security contexts.
+    - This parameter is optional; if omitted, the default principal will be used.
+
+
+- `realm` - a realm that will be used to restrict authentication to only those requests whose initiator's realm matches it.
+    - This parameter is optional; if omitted, no additional filtering by realm will be applied.
+
+Example (goes into `config.xml`):
+
+```xml
+<yandex>
+    <!-- ... -->
+    <kerberos />
+</yandex>
+```
+
+With principal specification:
+
+```xml
+<yandex>
+    <!-- ... -->
+    <kerberos>
+        <principal>HTTP/clickhouse.example.com@EXAMPLE.COM</principal>
+    </kerberos>
+</yandex>
+```
+
+With filtering by realm:
+
+```xml
+<yandex>
+    <!-- ... -->
+    <kerberos>
+        <realm>EXAMPLE.COM</realm>
+    </kerberos>
+</yandex>
+```
+
+!!! warning "Note"
+    You can define only one `kerberos` section. The presence of multiple `kerberos` sections will force ClickHouse to disable Kerberos authentication.
+
+!!! warning "Note"
+    `principal` and `realm` sections cannot be specified at the same time. The presence of both `principal` and `realm` sections will force ClickHouse to disable Kerberos authentication.
+
+
+## Kerberos as an external authenticator for existing users {#kerberos-as-an-external-authenticator-for-existing-users}
+
+Kerberos can be used as a method for verifying the identity of locally defined users (users defined in `users.xml` or in local access control paths). Currently, **only** requests over the HTTP interface can be *kerberized* (via the GSS-SPNEGO mechanism).
+
+Kerberos principal name format usually follows this pattern:
+
+- *primary/instance@REALM*
+
+The */instance* part may occur zero or more times. **The *primary* part of the canonical principal name of the initiator is expected to match the kerberized user name for authentication to succeed**.
+
+### Enabling Kerberos in `users.xml` {#enabling-kerberos-in-users-xml}
+
+In order to enable Kerberos authentication for the user, specify the `kerberos` section instead of the `password` or similar sections in the user definition.
+
+Parameters:
+
+- `realm` - a realm that will be used to restrict authentication to only those requests whose initiator's realm matches it.
+    - This parameter is optional; if omitted, no additional filtering by realm will be applied.
+
+Example (goes into `users.xml`):
+
+```xml
+<yandex>
+    <!-- ... -->
+    <users>
+        <!-- ... -->
+        <my_user>
+            <!-- ... -->
+            <kerberos>
+                <realm>EXAMPLE.COM</realm>
+            </kerberos>
+        </my_user>
+    </users>
+</yandex>
+```
+
+!!! warning "Warning"
+    Note that Kerberos authentication cannot be used alongside any other authentication mechanism. The presence of any other sections like `password` alongside `kerberos` will force ClickHouse to shut down.
+
+!!! info "Reminder"
+    Note that once user `my_user` uses `kerberos`, Kerberos must be enabled in the main `config.xml` file as described previously.
+
+### Enabling Kerberos using SQL {#enabling-kerberos-using-sql}
+
+When [SQL-driven Access Control and Account Management](../access-rights.md#access-control) is enabled in ClickHouse, users identified by Kerberos can also be created using SQL statements.
+
+```sql
+CREATE USER my_user IDENTIFIED WITH kerberos REALM 'EXAMPLE.COM'
+```
+
+...or, without filtering by realm:
+
+```sql
+CREATE USER my_user IDENTIFIED WITH kerberos
+```
diff --git a/docs/en/operations/external-authenticators/ldap.md b/docs/en/operations/external-authenticators/ldap.md
index 5c06ad7daed..1b65ecc968b 100644
--- a/docs/en/operations/external-authenticators/ldap.md
+++ b/docs/en/operations/external-authenticators/ldap.md
@@ -2,14 +2,16 @@
LDAP server can be used to authenticate ClickHouse users. There are two different approaches for doing this:
-- use LDAP as an external authenticator for existing users, which are defined in `users.xml` or in local access control paths
-- use LDAP as an external user directory and allow locally undefined users to be authenticated if they exist on the LDAP server
+- Use LDAP as an external authenticator for existing users, which are defined in `users.xml` or in local access control paths.
+- Use LDAP as an external user directory and allow locally undefined users to be authenticated if they exist on the LDAP server.
-For both of these approaches, an internally named LDAP server must be defined in the ClickHouse config so that other parts of config are able to refer to it.
+For both of these approaches, an internally named LDAP server must be defined in the ClickHouse config so that other parts of the config can refer to it.
## LDAP Server Definition {#ldap-server-definition}
-To define LDAP server you must add `ldap_servers` section to the `config.xml`. For example,
+To define an LDAP server, add the `ldap_servers` section to `config.xml`.
+
+**Example**
```xml
@@ -35,38 +37,35 @@ To define LDAP server you must add `ldap_servers` section to the `config.xml`. F
Note that you can define multiple LDAP servers inside the `ldap_servers` section using distinct names.
-Parameters:
+**Parameters**
-- `host` - LDAP server hostname or IP, this parameter is mandatory and cannot be empty.
-- `port` - LDAP server port, default is `636` if `enable_tls` is set to `true`, `389` otherwise.
-- `bind_dn` - template used to construct the DN to bind to.
-    - The resulting DN will be constructed by replacing all `{user_name}` substrings of the
-      template with the actual user name during each authentication attempt.
-- `verification_cooldown` - a period of time, in seconds, after a successful bind attempt,
-  during which the user will be assumed to be successfully authenticated for all consecutive
-  requests without contacting the LDAP server.
+- `host` — LDAP server hostname or IP; this parameter is mandatory and cannot be empty.
+- `port` — LDAP server port; the default is `636` if `enable_tls` is set to `true`, `389` otherwise.
+- `bind_dn` — Template used to construct the DN to bind to.
+    - The resulting DN will be constructed by replacing all `{user_name}` substrings of the template with the actual user name during each authentication attempt.
+- `verification_cooldown` — A period of time, in seconds, after a successful bind attempt, during which the user will be assumed to be successfully authenticated for all consecutive requests without contacting the LDAP server.
    - Specify `0` (the default) to disable caching and force contacting the LDAP server for each authentication request.
-- `enable_tls` - flag to trigger use of secure connection to the LDAP server.
+- `enable_tls` — A flag to trigger the use of a secure connection to the LDAP server.
    - Specify `no` for plain text `ldap://` protocol (not recommended).
    - Specify `yes` for LDAP over SSL/TLS `ldaps://` protocol (recommended, the default).
    - Specify `starttls` for legacy StartTLS protocol (plain text `ldap://` protocol, upgraded to TLS).
-- `tls_minimum_protocol_version` - the minimum protocol version of SSL/TLS.
+- `tls_minimum_protocol_version` — The minimum protocol version of SSL/TLS.
    - Accepted values are: `ssl2`, `ssl3`, `tls1.0`, `tls1.1`, `tls1.2` (the default).
-- `tls_require_cert` - SSL/TLS peer certificate verification behavior.
+- `tls_require_cert` — SSL/TLS peer certificate verification behavior.
    - Accepted values are: `never`, `allow`, `try`, `demand` (the default).
-- `tls_cert_file` - path to certificate file.
-- `tls_key_file` - path to certificate key file.
-- `tls_ca_cert_file` - path to CA certificate file.
-- `tls_ca_cert_dir` - path to the directory containing CA certificates.
-- `tls_cipher_suite` - allowed cipher suite (in OpenSSL notation).
+- `tls_cert_file` — Path to certificate file.
+- `tls_key_file` — Path to certificate key file.
+- `tls_ca_cert_file` — Path to CA certificate file.
+- `tls_ca_cert_dir` — Path to the directory containing CA certificates.
+- `tls_cipher_suite` — Allowed cipher suite (in OpenSSL notation).
## LDAP External Authenticator {#ldap-external-authenticator}
-A remote LDAP server can be used as a method for verifying passwords for locally defined users (users defined in `users.xml` or in local access control paths). In order to achieve this, specify previously defined LDAP server name instead of `password` or similar sections in the user definition.
+A remote LDAP server can be used as a method for verifying passwords for locally defined users (users defined in `users.xml` or in local access control paths). To achieve this, specify the previously defined LDAP server name instead of the `password` or similar sections in the user definition.
-At each login attempt, ClickHouse will try to "bind" to the specified DN defined by the `bind_dn` parameter in the [LDAP server definition](#ldap-server-definition) using the provided credentials, and if successful, the user will be considered authenticated. This is often called a "simple bind" method.
+At each login attempt, ClickHouse tries to "bind" to the specified DN defined by the `bind_dn` parameter in the [LDAP server definition](#ldap-server-definition) using the provided credentials, and if successful, the user is considered authenticated. This is often called a "simple bind" method.
-For example,
+**Example**
```xml
@@ -85,19 +84,24 @@ For example,
Note that user `my_user` refers to `my_ldap_server`. This LDAP server must be configured in the main `config.xml` file as described previously.
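+
+A quick way to verify how such a user ends up being authenticated is the `system.users` system table; this is a sketch (the `auth_type` column is assumed to report `ldap` for users configured as above):
+
+```sql
+SELECT name, auth_type FROM system.users WHERE name = 'my_user';
+```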
-When SQL-driven [Access Control and Account Management](../access-rights.md#access-control) is enabled in ClickHouse, users that are authenticated by LDAP servers can also be created using the [CRATE USER](../../sql-reference/statements/create/user.md#create-user-statement) statement.
+When SQL-driven [Access Control and Account Management](../access-rights.md#access-control) is enabled, users that are authenticated by LDAP servers can also be created using the [CREATE USER](../../sql-reference/statements/create/user.md#create-user-statement) statement.
+
+
+Query:
```sql
-CREATE USER my_user IDENTIFIED WITH ldap SERVER 'my_ldap_server'
+CREATE USER my_user IDENTIFIED WITH ldap SERVER 'my_ldap_server';
```
## LDAP External User Directory {#ldap-external-user-directory}
-In addition to the locally defined users, a remote LDAP server can be used as a source of user definitions. In order to achieve this, specify previously defined LDAP server name (see [LDAP Server Definition](#ldap-server-definition)) in an `ldap` section inside the `users_directories` section of the `config.xml` file.
+In addition to the locally defined users, a remote LDAP server can be used as a source of user definitions. To achieve this, specify the previously defined LDAP server name (see [LDAP Server Definition](#ldap-server-definition)) in the `ldap` section inside the `user_directories` section of the `config.xml` file.
-At each login attempt, ClickHouse will try to find the user definition locally and authenticate it as usual, but if the user is not defined, ClickHouse will assume it exists in the external LDAP directory, and will try to "bind" to the specified DN at the LDAP server using the provided credentials. If successful, the user will be considered existing and authenticated. The user will be assigned roles from the list specified in the `roles` section. Additionally, LDAP "search" can be performed and results can be transformed and treated as role names and then be assigned to the user if the `role_mapping` section is also configured. All this implies that the SQL-driven [Access Control and Account Management](../access-rights.md#access-control) is enabled and roles are created using the [CREATE ROLE](../../sql-reference/statements/create/role.md#create-role-statement) statement.
+At each login attempt, ClickHouse tries to find the user definition locally and authenticate it as usual. If the user is not defined, ClickHouse will assume the definition exists in the external LDAP directory and will try to "bind" to the specified DN at the LDAP server using the provided credentials. If successful, the user will be considered existing and authenticated. The user will be assigned roles from the list specified in the `roles` section. Additionally, LDAP "search" can be performed and results can be transformed and treated as role names and then be assigned to the user if the `role_mapping` section is also configured. All this implies that the SQL-driven [Access Control and Account Management](../access-rights.md#access-control) is enabled and roles are created using the [CREATE ROLE](../../sql-reference/statements/create/role.md#create-role-statement) statement.
-Example (goes into `config.xml`):
+**Example**
+
+Goes into `config.xml`.
```xml
@@ -122,33 +126,24 @@ Example (goes into `config.xml`):
```
-Note that `my_ldap_server` referred in the `ldap` section inside the `user_directories` section must be a previously
-defined LDAP server that is configured in the `config.xml` (see [LDAP Server Definition](#ldap-server-definition)).
+Note that `my_ldap_server` referred to in the `ldap` section inside the `user_directories` section must be a previously defined LDAP server that is configured in the `config.xml` (see [LDAP Server Definition](#ldap-server-definition)).
-Parameters:
+**Parameters**
-- `server` - one of LDAP server names defined in the `ldap_servers` config section above.
-  This parameter is mandatory and cannot be empty.
-- `roles` - section with a list of locally defined roles that will be assigned to each user retrieved from the LDAP server.
-    - If no roles are specified here or assigned during role mapping (below), user will not be able
-      to perform any actions after authentication.
-- `role_mapping` - section with LDAP search parameters and mapping rules.
-    - When a user authenticates, while still bound to LDAP, an LDAP search is performed using `search_filter`
-      and the name of the logged in user. For each entry found during that search, the value of the specified
-      attribute is extracted. For each attribute value that has the specified prefix, the prefix is removed,
-      and the rest of the value becomes the name of a local role defined in ClickHouse,
-      which is expected to be created beforehand by the [CREATE ROLE](../../sql-reference/statements/create/role.md#create-role-statement) statement.
+- `server` — One of the LDAP server names defined in the `ldap_servers` config section above. This parameter is mandatory and cannot be empty.
+- `roles` — Section with a list of locally defined roles that will be assigned to each user retrieved from the LDAP server.
+    - If no roles are specified here or assigned during role mapping (below), the user will not be able to perform any actions after authentication.
+- `role_mapping` — Section with LDAP search parameters and mapping rules.
+    - When a user authenticates, while still bound to LDAP, an LDAP search is performed using `search_filter` and the name of the logged-in user. For each entry found during that search, the value of the specified attribute is extracted. For each attribute value that has the specified prefix, the prefix is removed, and the rest of the value becomes the name of a local role defined in ClickHouse, which is expected to be created beforehand by the [CREATE ROLE](../../sql-reference/statements/create/role.md#create-role-statement) statement.
    - There can be multiple `role_mapping` sections defined inside the same `ldap` section. All of them will be applied.
-    - `base_dn` - template used to construct the base DN for the LDAP search.
-        - The resulting DN will be constructed by replacing all `{user_name}` and `{bind_dn}`
-          substrings of the template with the actual user name and bind DN during each LDAP search.
-    - `scope` - scope of the LDAP search.
+    - `base_dn` — Template used to construct the base DN for the LDAP search.
+        - The resulting DN will be constructed by replacing all `{user_name}` and `{bind_dn}` substrings of the template with the actual user name and bind DN during each LDAP search.
+    - `scope` — Scope of the LDAP search.
    - Accepted values are: `base`, `one_level`, `children`, `subtree` (the default).
-    - `search_filter` - template used to construct the search filter for the LDAP search.
-        - The resulting filter will be constructed by replacing all `{user_name}`, `{bind_dn}`, and `{base_dn}`
-          substrings of the template with the actual user name, bind DN, and base DN during each LDAP search.
+    - `search_filter` — Template used to construct the search filter for the LDAP search.
+        - The resulting filter will be constructed by replacing all `{user_name}`, `{bind_dn}` and `{base_dn}` substrings of the template with the actual user name, bind DN and base DN during each LDAP search.
    - Note that the special characters must be escaped properly in XML.
-    - `attribute` - attribute name whose values will be returned by the LDAP search.
-    - `prefix` - prefix, that will be expected to be in front of each string in the original
-      list of strings returned by the LDAP search. Prefix will be removed from the original
-      strings and resulting strings will be treated as local role names. Empty, by default.
+    - `attribute` — Attribute name whose values will be returned by the LDAP search.
+    - `prefix` — Prefix that is expected to be in front of each string in the original list of strings returned by the LDAP search. The prefix will be removed from the original strings and the resulting strings will be treated as local role names. Empty by default.
+
+[Original article](https://clickhouse.tech/docs/en/operations/external-authenticators/ldap/)
diff --git a/docs/en/operations/performance-test.md b/docs/en/operations/performance-test.md
index ca805923ba9..a808ffd0a85 100644
--- a/docs/en/operations/performance-test.md
+++ b/docs/en/operations/performance-test.md
@@ -12,6 +12,7 @@ With this instruction you can run basic ClickHouse performance test on any serve
3. Copy the link to `clickhouse` binary for amd64 or aarch64.
4. ssh to the server and download it with wget:
```bash
+# These links are outdated; please obtain the fresh link from the "commits" page.
# For amd64:
wget https://clickhouse-builds.s3.yandex.net/0/e29c4c3cc47ab2a6c4516486c1b77d57e7d42643/clickhouse_build_check/gcc-10_relwithdebuginfo_none_bundled_unsplitted_disable_False_binary/clickhouse
# For aarch64:
diff --git a/docs/en/operations/server-configuration-parameters/settings.md b/docs/en/operations/server-configuration-parameters/settings.md
index 89fcbafe663..19671b523e3 100644
--- a/docs/en/operations/server-configuration-parameters/settings.md
+++ b/docs/en/operations/server-configuration-parameters/settings.md
@@ -100,6 +100,11 @@ Default value: `1073741824` (1 GB).
<core_dump>
    <size_limit>1073741824</size_limit>
</core_dump>
```
+## database_atomic_delay_before_drop_table_sec {#database_atomic_delay_before_drop_table_sec}
+
+Sets the delay, in seconds, before table data is removed. If the query has the `SYNC` modifier, this setting is ignored.
+
+Default value: `480` (8 minutes).
## default_database {#default-database}
@@ -125,6 +130,25 @@ Settings profiles are located in the file specified in the parameter `user_confi
<default_profile>default</default_profile>
```
+## default_replica_path {#default_replica_path}
+
+The path to the table in ZooKeeper.
+
+**Example**
+
+``` xml
+<default_replica_path>/clickhouse/tables/{uuid}/{shard}</default_replica_path>
+```
+
+## default_replica_name {#default_replica_name}
+
+The replica name in ZooKeeper.
+
+**Example**
+
+``` xml
+<default_replica_name>{replica}</default_replica_name>
+```
+
## dictionaries_config {#server_configuration_parameters-dictionaries_config}

The path to the config file for external dictionaries.
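+
+After editing the dictionaries config, the dictionaries can be reloaded without a server restart; a minimal sketch using the standard reload statement:
+
+```sql
+SYSTEM RELOAD DICTIONARIES;
+```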
@@ -321,7 +345,8 @@ Similar to `interserver_http_host`, except that this hostname can be used by oth
The username and password used to authenticate during [replication](../../engines/table-engines/mergetree-family/replication.md) with the Replicated\* engines. These credentials are used only for communication between replicas and are unrelated to credentials for ClickHouse clients. The server checks these credentials for connecting replicas and uses the same credentials when connecting to other replicas. So, these credentials should be set the same for all replicas in a cluster.
By default, the authentication is not used.
-**Note:** These credentials are common for replication through `HTTP` and `HTTPS`.
+!!! note "Note"
+    These credentials are common for replication through `HTTP` and `HTTPS`.
This section contains the following parameters:
@@ -405,7 +430,7 @@ Keys for syslog:
Default value: `LOG_USER` if `address` is specified, `LOG_DAEMON` otherwise.
- format – Message format. Possible values: `bsd` and `syslog.`
-## send_crash_reports {#server_configuration_parameters-logger}
+## send_crash_reports {#server_configuration_parameters-send_crash_reports}
Settings for opt-in sending crash reports to the ClickHouse core developers team via [Sentry](https://sentry.io).
Enabling it, especially in pre-production environments, is highly appreciated.
@@ -502,7 +527,15 @@ On hosts with low RAM and swap, you possibly need setting `max_server_memory_usa
## max_concurrent_queries {#max-concurrent-queries}
-The maximum number of simultaneously processed requests.
+The maximum number of simultaneously processed queries related to MergeTree tables. Queries may be limited by other settings: [max_concurrent_queries_for_all_users](#max-concurrent-queries-for-all-users), [min_marks_to_honor_max_concurrent_queries](#min-marks-to-honor-max-concurrent-queries).
+
+!!! info "Note"
+    These settings can be modified at runtime and will take effect immediately. Queries that are already running will remain unchanged.
+
+Possible values:
+
+- Positive integer.
+- 0 — Disabled.
**Example**
@@ -530,6 +563,21 @@ Default value: `0` that means no limit.
- [max_concurrent_queries](#max-concurrent-queries)
+## min_marks_to_honor_max_concurrent_queries {#min-marks-to-honor-max-concurrent-queries}
+
+The minimal number of marks read by the query for applying the [max_concurrent_queries](#max-concurrent-queries) setting.
+
+Possible values:
+
+- Positive integer.
+- 0 — Disabled.
+
+**Example**
+
+``` xml
+<min_marks_to_honor_max_concurrent_queries>10</min_marks_to_honor_max_concurrent_queries>
+```
+
## max_connections {#max-connections}
The maximum number of inbound connections.
diff --git a/docs/en/operations/settings/merge-tree-settings.md b/docs/en/operations/settings/merge-tree-settings.md
index 77b68715ba9..b2470207dcc 100644
--- a/docs/en/operations/settings/merge-tree-settings.md
+++ b/docs/en/operations/settings/merge-tree-settings.md
@@ -56,6 +56,26 @@ Default value: 150.
ClickHouse artificially executes `INSERT` longer (adds ‘sleep’) so that the background merge process can merge parts faster than they are added.
+## inactive_parts_to_throw_insert {#inactive-parts-to-throw-insert}
+
+If the number of inactive parts in a single partition is more than the `inactive_parts_to_throw_insert` value, `INSERT` is interrupted with the "Too many inactive parts (N). Parts cleaning are processing significantly slower than inserts" exception.
+
+Possible values:
+
+- Any positive integer.
+
+Default value: 0 (unlimited).
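+
+**Example**
+
+A sketch of overriding this setting for a single MergeTree table (the table name `t` and the value `100` are illustrative):
+
+``` sql
+ALTER TABLE t MODIFY SETTING inactive_parts_to_throw_insert = 100;
+```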
+
+## inactive_parts_to_delay_insert {#inactive-parts-to-delay-insert}
+
+If the number of inactive parts in a single partition in the table is at least the `inactive_parts_to_delay_insert` value, an `INSERT` is artificially slowed down. It is useful when a server fails to clean up parts quickly enough.
+
+Possible values:
+
+- Any positive integer.
+
+Default value: 0 (unlimited).
+
## max_delay_to_insert {#max-delay-to-insert}
The value in seconds, which is used to calculate the `INSERT` delay, if the number of active parts in a single partition exceeds the [parts_to_delay_insert](#parts-to-delay-insert) value.
diff --git a/docs/en/operations/settings/settings.md b/docs/en/operations/settings/settings.md
index 3c343e09fd3..b0c879af931 100644
--- a/docs/en/operations/settings/settings.md
+++ b/docs/en/operations/settings/settings.md
@@ -143,6 +143,16 @@ Possible values:
Default value: 0.
+## http_max_uri_size {#http-max-uri-size}
+
+Sets the maximum URI length of an HTTP request.
+
+Possible values:
+
+- Positive integer.
+
+Default value: 1048576.
+
## send_progress_in_http_headers {#settings-send_progress_in_http_headers}
Enables or disables `X-ClickHouse-Progress` HTTP response headers in `clickhouse-server` responses.
@@ -769,6 +779,38 @@ Example:
log_query_threads=1
```
+## log_comment {#settings-log-comment}
+
+Specifies the value for the `log_comment` field of the [system.query_log](../system-tables/query_log.md) table and comment text for the server log.
+
+It can be used to improve the readability of server logs. Additionally, it helps to select queries related to a test from the `system.query_log` after running [clickhouse-test](../../development/tests.md).
+
+Possible values:
+
+- Any string no longer than [max_query_size](#settings-max_query_size). If the length is exceeded, the server throws an exception.
+
+Default value: empty string.
+
+**Example**
+
+Query:
+
+``` sql
+SET log_comment = 'log_comment test', log_queries = 1;
+SELECT 1;
+SYSTEM FLUSH LOGS;
+SELECT type, query FROM system.query_log WHERE log_comment = 'log_comment test' AND event_date >= yesterday() ORDER BY event_time DESC LIMIT 2;
+```
+
+Result:
+
+``` text
+┌─type────────┬─query─────┐
+│ QueryStart  │ SELECT 1; │
+│ QueryFinish │ SELECT 1; │
+└─────────────┴───────────┘
+```
+
## max_insert_block_size {#settings-max_insert_block_size}
The size of blocks (in a count of rows) to form for insertion into a table.
@@ -822,8 +864,6 @@ For example, when reading from a table, if it is possible to evaluate expression
Default value: the number of physical CPU cores.
-If less than one SELECT query is normally run on a server at a time, set this parameter to a value slightly less than the actual number of processor cores.
-
For queries that are completed quickly because of a LIMIT, you can set a lower ‘max_threads’. For example, if the necessary number of entries are located in every block and max_threads = 8, then 8 blocks are retrieved, although it would have been enough to read just one.
The smaller the `max_threads` value, the less memory is consumed.
@@ -1097,14 +1137,25 @@ See the section “WITH TOTALS modifier”.
## max_parallel_replicas {#settings-max_parallel_replicas}
-The maximum number of replicas for each shard when executing a query. In limited circumstances, this can make a query faster by executing it on more servers. This setting is only useful for replicated tables with a sampling key. There are cases where performance will not improve or even worsen:
+The maximum number of replicas for each shard when executing a query.
-- the position of the sampling key in the partitioning key's order doesn't allow efficient range scans
-- adding a sampling key to the table makes filtering by other columns less efficient
-- the sampling key is an expression that is expensive to calculate
-- the cluster's latency distribution has a long tail, so that querying more servers increases the query's overall latency
+Possible values:
-In addition, this setting will produce incorrect results when joins or subqueries are involved, and all tables don't meet certain conditions. See [Distributed Subqueries and max_parallel_replicas](../../sql-reference/operators/in.md#max_parallel_replica-subqueries) for more details.
+- Positive integer.
+
+Default value: `1`.
+
+**Additional Info**
+
+This setting is useful for replicated tables with a sampling key. A query may be processed faster if it is executed on several servers in parallel. But the query performance may degrade in the following cases:
+
+- The position of the sampling key in the partitioning key doesn't allow efficient range scans.
+- Adding a sampling key to the table makes filtering by other columns less efficient.
+- The sampling key is an expression that is expensive to calculate.
+- The cluster latency distribution has a long tail, so that querying more servers increases the query overall latency.
+
+!!! warning "Warning"
+    This setting will produce incorrect results when joins or subqueries are involved, and all tables don't meet certain requirements. See [Distributed Subqueries and max_parallel_replicas](../../sql-reference/operators/in.md#max_parallel_replica-subqueries) for more details.
## compile {#compile}
@@ -1503,6 +1554,14 @@ FORMAT PrettyCompactMonoBlock
Default value: 0
+## optimize_skip_unused_shards_limit {#optimize-skip-unused-shards-limit}
+
+Limit for the number of sharding key values; turns off `optimize_skip_unused_shards` if the limit is reached.
+
+Too many values may require a significant amount of resources for processing, while the benefit is doubtful: if there is a huge number of values in `IN (...)`, the query will most likely be sent to all shards anyway.
+
+Default value: 1000
+
## optimize_skip_unused_shards {#optimize-skip-unused-shards}

Enables or disables skipping of unused shards for [SELECT](../../sql-reference/statements/select/index.md) queries that have sharding key condition in `WHERE/PREWHERE` (assuming that the data is distributed by sharding key, otherwise does nothing).

@@ -1514,6 +1573,17 @@ Possible values:
Default value: 0
+## optimize_skip_unused_shards_rewrite_in {#optimize-skip-unused-shards-rewrite-in}
+
+Rewrites `IN` in the query for remote shards to exclude values that do not belong to the shard (requires `optimize_skip_unused_shards`).
+
+Possible values:
+
+- 0 — Disabled.
+- 1 — Enabled.
+
+Default value: 1 (since it requires `optimize_skip_unused_shards` anyway, which is `0` by default)
+
## allow_nondeterministic_optimize_skip_unused_shards {#allow-nondeterministic-optimize-skip-unused-shards}
Allow nondeterministic functions (like `rand` or `dictGet`, since the latter has some caveats with updates) in the sharding key.
@@ -1863,7 +1933,7 @@ Default value: `0`.
Enables or disables random shard insertion into a [Distributed](../../engines/table-engines/special/distributed.md#distributed) table when there is no distributed key.
-By default, when inserting data into a `Distributed` table with more than one shard, the ClickHouse server will any insertion request if there is no distributed key. When `insert_distributed_one_random_shard = 1`, insertions are allowed and data is forwarded randomly among all shards.
+By default, when inserting data into a `Distributed` table with more than one shard, the ClickHouse server will reject any insertion request if there is no distributed key. When `insert_distributed_one_random_shard = 1`, insertions are allowed and data is forwarded randomly among all shards.
Possible values:
@@ -1872,6 +1942,53 @@ Possible values:
Default value: `0`.
+## insert_shard_id {#insert_shard_id}
+
+If not `0`, specifies the shard of the [Distributed](../../engines/table-engines/special/distributed.md#distributed) table into which the data will be inserted synchronously.
+
+If the `insert_shard_id` value is incorrect, the server will throw an exception.
+
+To get the number of shards on `requested_cluster`, you can check the server config or use this query:
+
+``` sql
+SELECT uniq(shard_num) FROM system.clusters WHERE cluster = 'requested_cluster';
+```
+
+Possible values:
+
+- 0 — Disabled.
+- Any number from `1` to `shards_num` of the corresponding [Distributed](../../engines/table-engines/special/distributed.md#distributed) table.
+
+Default value: `0`.
+
+**Example**
+
+Query:
+
+```sql
+CREATE TABLE x AS system.numbers ENGINE = MergeTree ORDER BY number;
+CREATE TABLE x_dist AS x ENGINE = Distributed('test_cluster_two_shards_localhost', currentDatabase(), x);
+INSERT INTO x_dist SELECT * FROM numbers(5) SETTINGS insert_shard_id = 1;
+SELECT * FROM x_dist ORDER BY number ASC;
+```
+
+Result:
+
+``` text
+┌─number─┐
+│      0 │
+│      0 │
+│      1 │
+│      1 │
+│      2 │
+│      2 │
+│      3 │
+│      3 │
+│      4 │
+│      4 │
+└────────┘
+```
+
## use_compact_format_in_distributed_parts_names {#use_compact_format_in_distributed_parts_names}
Uses compact format for storing blocks for async (`insert_distributed_sync`) INSERT into tables with `Distributed` engine.
@@ -2670,11 +2787,11 @@ Default value: `0`.
## engine_file_truncate_on_insert {#engine-file-truncate-on-insert}
-Enables or disables truncate before insert in file engine tables.
+Enables or disables truncate before insert in [File](../../engines/table-engines/special/file.md) engine tables.
Possible values:
-- 0 — Disabled.
-- 1 — Enabled.
+- 0 — `INSERT` query appends new data to the end of the file.
+- 1 — `INSERT` replaces existing content of the file with the new data.
Default value: `0`.
@@ -2689,4 +2806,165 @@ Possible values:
Default value: `0`.
+## database_atomic_wait_for_drop_and_detach_synchronously {#database_atomic_wait_for_drop_and_detach_synchronously}
+
+Adds a modifier `SYNC` to all `DROP` and `DETACH` queries.
+
+Possible values:
+
+- 0 — Queries will be executed with delay.
+- 1 — Queries will be executed without delay.
+
+Default value: `0`.
+
+## show_table_uuid_in_table_create_query_if_not_nil {#show_table_uuid_in_table_create_query_if_not_nil}
+
+Sets the `SHOW TABLE` query display.
+
+Possible values:
+
+- 0 — The query will be displayed without table UUID.
+- 1 — The query will be displayed with table UUID.
+
+Default value: `0`.
+
+## allow_experimental_live_view {#allow-experimental-live-view}
+
+Allows creation of experimental [live views](../../sql-reference/statements/create/view.md#live-view).
+
+Possible values:
+
+- 0 — Working with live views is disabled.
+- 1 — Working with live views is enabled.
+
+Default value: `0`.
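+
+**Example**
+
+A minimal sketch of enabling and watching a live view (the table name `visits` and the view name `lv` are illustrative):
+
+```sql
+SET allow_experimental_live_view = 1;
+CREATE LIVE VIEW lv AS SELECT count() FROM visits;
+WATCH lv;
+```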
+
+## live_view_heartbeat_interval {#live-view-heartbeat-interval}
+
+Sets the heartbeat interval in seconds to indicate that the [live view](../../sql-reference/statements/create/view.md#live-view) is alive.
+
+Default value: `15`.
+
+## max_live_view_insert_blocks_before_refresh {#max-live-view-insert-blocks-before-refresh}
+
+Sets the maximum number of inserted blocks after which mergeable blocks are dropped and the query for the [live view](../../sql-reference/statements/create/view.md#live-view) is re-executed.
+
+Default value: `64`.
+
+## temporary_live_view_timeout {#temporary-live-view-timeout}
+
+Sets the interval in seconds after which a [live view](../../sql-reference/statements/create/view.md#live-view) with timeout is deleted.
+
+Default value: `5`.
+
+## periodic_live_view_refresh {#periodic-live-view-refresh}
+
+Sets the interval in seconds after which a periodically refreshed [live view](../../sql-reference/statements/create/view.md#live-view) is forced to refresh.
+
+Default value: `60`.
+
+## check_query_single_value_result {#check_query_single_value_result}
+
+Defines the level of detail for the [CHECK TABLE](../../sql-reference/statements/check-table.md#checking-mergetree-tables) query result for `MergeTree` family engines.
+
+Possible values:
+
+- 0 — the query shows a check status for every individual data part of a table.
+- 1 — the query shows the general table check status.
+
+Default value: `0`.
+
+## prefer_column_name_to_alias {#prefer-column-name-to-alias}
+
+Enables or disables using the original column names instead of aliases in query expressions and clauses. It especially matters when an alias is the same as the column name, see [Expression Aliases](../../sql-reference/syntax.md#notes-on-usage). Enable this setting to make alias syntax rules in ClickHouse more compatible with most other database engines.
+
+Possible values:
+
+- 0 — The column name is substituted with the alias.
+- 1 — The column name is not substituted with the alias.
+
+Default value: `0`.
+
+**Example**
+
+The difference between enabled and disabled:
+
+Query:
+
+```sql
+SET prefer_column_name_to_alias = 0;
+SELECT avg(number) AS number, max(number) FROM numbers(10);
+```
+
+Result:
+
+```text
+Received exception from server (version 21.5.1):
+Code: 184. DB::Exception: Received from localhost:9000. DB::Exception: Aggregate function avg(number) is found inside another aggregate function in query: While processing avg(number) AS number.
+```
+
+Query:
+
+```sql
+SET prefer_column_name_to_alias = 1;
+SELECT avg(number) AS number, max(number) FROM numbers(10);
+```
+
+Result:
+
+```text
+┌─number─┬─max(number)─┐
+│    4.5 │           9 │
+└────────┴─────────────┘
+```
+
+## limit {#limit}
+
+Sets the maximum number of rows to get from the query result. It adjusts the value set by the [LIMIT](../../sql-reference/statements/select/limit.md#limit-clause) clause, so that the limit specified in the query cannot exceed the limit set by this setting.
+
+Possible values:
+
+- 0 — The number of rows is not limited.
+- Positive integer.
+
+Default value: `0`.
+
+## offset {#offset}
+
+Sets the number of rows to skip before starting to return rows from the query. It adjusts the offset set by the [OFFSET](../../sql-reference/statements/select/offset.md#offset-fetch) clause, so that the two values are added together.
+
+Possible values:
+
+- 0 — No rows are skipped.
+- Positive integer.
+
+Default value: `0`.
+
+**Example**
+
+Input table:
+
+``` sql
+CREATE TABLE test (i UInt64) ENGINE = MergeTree() ORDER BY i;
+INSERT INTO test SELECT number FROM numbers(500);
+```
+
+Query:
+
+``` sql
+SET limit = 5;
+SET offset = 7;
+SELECT * FROM test LIMIT 10 OFFSET 100;
+```
+
+Result:
+
+``` text
+┌───i─┐
+│ 107 │
+│ 108 │
+│ 109 │
+└─────┘
+```
+
[Original article](https://clickhouse.tech/docs/en/operations/settings/settings/)
diff --git a/docs/en/operations/system-tables/clusters.md b/docs/en/operations/system-tables/clusters.md
index cba52586e93..096eca12e7d 100644
--- a/docs/en/operations/system-tables/clusters.md
+++ b/docs/en/operations/system-tables/clusters.md
@@ -4,63 +4,68 @@
Contains information about clusters available in the config file and the servers in them.
Columns:
-- `cluster` (String) — The cluster name.
-- `shard_num` (UInt32) — The shard number in the cluster, starting from 1.
-- `shard_weight` (UInt32) — The relative weight of the shard when writing data.
-- `replica_num` (UInt32) — The replica number in the shard, starting from 1.
-- `host_name` (String) — The host name, as specified in the config.
-- `host_address` (String) — The host IP address obtained from DNS.
-- `port` (UInt16) — The port to use for connecting to the server.
-- `user` (String) — The name of the user for connecting to the server.
-- `errors_count` (UInt32) - number of times this host failed to reach replica.
-- `estimated_recovery_time` (UInt32) - seconds left until replica error count is zeroed and it is considered to be back to normal.
+- `cluster` ([String](../../sql-reference/data-types/string.md)) — The cluster name.
+- `shard_num` ([UInt32](../../sql-reference/data-types/int-uint.md)) — The shard number in the cluster, starting from 1.
+- `shard_weight` ([UInt32](../../sql-reference/data-types/int-uint.md)) — The relative weight of the shard when writing data.
+- `replica_num` ([UInt32](../../sql-reference/data-types/int-uint.md)) — The replica number in the shard, starting from 1.
+- `host_name` ([String](../../sql-reference/data-types/string.md)) — The host name, as specified in the config.
+- `host_address` ([String](../../sql-reference/data-types/string.md)) — The host IP address obtained from DNS.
+- `port` ([UInt16](../../sql-reference/data-types/int-uint.md)) — The port to use for connecting to the server.
+- `is_local` ([UInt8](../../sql-reference/data-types/int-uint.md)) — Flag that indicates whether the host is local.
+- `user` ([String](../../sql-reference/data-types/string.md)) — The name of the user for connecting to the server.
+- `default_database` ([String](../../sql-reference/data-types/string.md)) — The default database name.
+- `errors_count` ([UInt32](../../sql-reference/data-types/int-uint.md)) — The number of times this host failed to reach a replica.
+- `slowdowns_count` ([UInt32](../../sql-reference/data-types/int-uint.md)) — The number of slowdowns that led to changing the replica when establishing a connection with hedged requests.
+- `estimated_recovery_time` ([UInt32](../../sql-reference/data-types/int-uint.md)) — Seconds remaining until the replica error count is zeroed and it is considered to be back to normal.
-Please note that `errors_count` is updated once per query to the cluster, but `estimated_recovery_time` is recalculated on-demand. So there could be a case of non-zero `errors_count` and zero `estimated_recovery_time`, that next query will zero `errors_count` and try to use replica as if it has no errors.
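+
+For day-to-day monitoring it can be handier to select only the error-related columns; a sketch:
+
+```sql
+SELECT cluster, host_name, errors_count, slowdowns_count, estimated_recovery_time
+FROM system.clusters
+WHERE errors_count > 0;
+```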
+
+**Example**
-**See also**
+Query:
+
+```sql
+SELECT * FROM system.clusters LIMIT 2 FORMAT Vertical;
+```
+
+Result:
+
+```text
+Row 1:
+──────
+cluster: test_cluster_two_shards
+shard_num: 1
+shard_weight: 1
+replica_num: 1
+host_name: 127.0.0.1
+host_address: 127.0.0.1
+port: 9000
+is_local: 1
+user: default
+default_database:
+errors_count: 0
+slowdowns_count: 0
+estimated_recovery_time: 0
+
+Row 2:
+──────
+cluster: test_cluster_two_shards
+shard_num: 2
+shard_weight: 1
+replica_num: 1
+host_name: 127.0.0.2
+host_address: 127.0.0.2
+port: 9000
+is_local: 0
+user: default
+default_database:
+errors_count: 0
+slowdowns_count: 0
+estimated_recovery_time: 0
+```
+
+**See Also**

- [Table engine Distributed](../../engines/table-engines/special/distributed.md)
- [distributed_replica_error_cap setting](../../operations/settings/settings.md#settings-distributed_replica_error_cap)
- [distributed_replica_error_half_life setting](../../operations/settings/settings.md#settings-distributed_replica_error_half_life)
-**Example**
-
-```sql
-:) SELECT * FROM system.clusters LIMIT 2 FORMAT Vertical;
-```
-
-```text
-Row 1:
-──────
-cluster: test_cluster
-shard_num: 1
-shard_weight: 1
-replica_num: 1
-host_name: clickhouse01
-host_address: 172.23.0.11
-port: 9000
-is_local: 1
-user: default
-default_database:
-errors_count: 0
-estimated_recovery_time: 0
-
-Row 2:
-──────
-cluster: test_cluster
-shard_num: 1
-shard_weight: 1
-replica_num: 2
-host_name: clickhouse02
-host_address: 172.23.0.12
-port: 9000
-is_local: 0
-user: default
-default_database:
-errors_count: 0
-estimated_recovery_time: 0
-
-2 rows in set. Elapsed: 0.002 sec.
-```
-
[Original article](https://clickhouse.tech/docs/en/operations/system_tables/clusters)
diff --git a/docs/en/operations/system-tables/columns.md b/docs/en/operations/system-tables/columns.md
index 92a6315d06b..9160dca9a1a 100644
--- a/docs/en/operations/system-tables/columns.md
+++ b/docs/en/operations/system-tables/columns.md
@@ -4,7 +4,9 @@
Contains information about columns in all the tables.
You can use this table to get information similar to the [DESCRIBE TABLE](../../sql-reference/statements/misc.md#misc-describe-table) query, but for multiple tables at once.
-The `system.columns` table contains the following columns (the column type is shown in brackets):
+Columns from [temporary tables](../../sql-reference/statements/create/table.md#temporary-tables) are visible in the `system.columns` only in those sessions where they have been created. They are shown with the empty `database` field.
+
+Columns:
- `database` ([String](../../sql-reference/data-types/string.md)) — Database name.
- `table` ([String](../../sql-reference/data-types/string.md)) — Table name.
@@ -26,7 +28,7 @@ The `system.columns` table contains the following columns (the column type is sh
**Example**
```sql
-:) select * from system.columns LIMIT 2 FORMAT Vertical;
+SELECT * FROM system.columns LIMIT 2 FORMAT Vertical;
```
```text
@@ -65,8 +67,6 @@ is_in_sorting_key: 0
is_in_primary_key: 0
is_in_sampling_key: 0
compression_codec:
-
-2 rows in set. Elapsed: 0.002 sec.
```
 
 [Original article](https://clickhouse.tech/docs/en/operations/system_tables/columns)
diff --git a/docs/en/operations/system-tables/data_type_families.md b/docs/en/operations/system-tables/data_type_families.md
index ddda91ed151..4e439f13aa5 100644
--- a/docs/en/operations/system-tables/data_type_families.md
+++ b/docs/en/operations/system-tables/data_type_families.md
@@ -1,6 +1,6 @@
 # system.data_type_families {#system_tables-data_type_families}
 
-Contains information about supported [data types](../../sql-reference/data-types/).
+Contains information about supported [data types](../../sql-reference/data-types/index.md).
 
 Columns:
 
diff --git a/docs/en/operations/system-tables/distribution_queue.md b/docs/en/operations/system-tables/distribution_queue.md
index fdc6a134da2..3b09c20874c 100644
--- a/docs/en/operations/system-tables/distribution_queue.md
+++ b/docs/en/operations/system-tables/distribution_queue.md
@@ -18,6 +18,10 @@ Columns:
 
 - `data_compressed_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Size of compressed data in local files, in bytes.
 
+- `broken_data_files` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Number of files that have been marked as broken (due to an error).
+
+- `broken_data_compressed_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Size of compressed data in broken files, in bytes.
+
 - `last_exception` ([String](../../sql-reference/data-types/string.md)) — Text message about the last error that occurred (if any).
 
 **Example**
 
diff --git a/docs/en/operations/system-tables/errors.md b/docs/en/operations/system-tables/errors.md
index ec874efd711..583cce88ca4 100644
--- a/docs/en/operations/system-tables/errors.md
+++ b/docs/en/operations/system-tables/errors.md
@@ -7,11 +7,15 @@ Columns:
 
 - `name` ([String](../../sql-reference/data-types/string.md)) — name of the error (`errorCodeToName`).
 - `code` ([Int32](../../sql-reference/data-types/int-uint.md)) — code number of the error.
 - `value` ([UInt64](../../sql-reference/data-types/int-uint.md)) — the number of times this error happened.
+- `last_error_time` ([DateTime](../../sql-reference/data-types/datetime.md)) — time when the last error happened.
+- `last_error_message` ([String](../../sql-reference/data-types/string.md)) — message for the last error.
+- `last_error_trace` ([Array(UInt64)](../../sql-reference/data-types/array.md)) — A [stack trace](https://en.wikipedia.org/wiki/Stack_trace) which represents a list of physical addresses where the called methods are stored.
+- `remote` ([UInt8](../../sql-reference/data-types/int-uint.md)) — remote exception (i.e. received during one of the distributed queries).
**Example**
 
``` sql
-SELECT *
+SELECT name, code, value
 FROM system.errors
 WHERE value > 0
 ORDER BY code ASC
@@ -21,3 +25,12 @@ LIMIT 1
 │ CANNOT_OPEN_FILE │ 76 │ 1 │
 └──────────────────┴──────┴───────┘
 ```
+
+``` sql
+WITH arrayMap(x -> demangle(addressToSymbol(x)), last_error_trace) AS all
+SELECT name, arrayStringConcat(all, '\n') AS res
+FROM system.errors
+LIMIT 1
+SETTINGS allow_introspection_functions = 1 FORMAT Vertical;
+```
+
diff --git a/docs/en/operations/system-tables/query_log.md b/docs/en/operations/system-tables/query_log.md
index 32b2bdf2133..6cf87ee1f17 100644
--- a/docs/en/operations/system-tables/query_log.md
+++ b/docs/en/operations/system-tables/query_log.md
@@ -44,9 +44,15 @@ Columns:
 - `result_rows` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Number of rows in a result of the `SELECT` query, or a number of rows in the `INSERT` query.
 - `result_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — RAM volume in bytes used to store a query result.
 - `memory_usage` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Memory consumption by the query.
+- `current_database` ([String](../../sql-reference/data-types/string.md)) — Name of the current database.
 - `query` ([String](../../sql-reference/data-types/string.md)) — Query string.
-- `exception` ([String](../../sql-reference/data-types/string.md)) — Exception message.
+- `normalized_query_hash` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — A hash value that is identical for queries that differ only in the values of literals.
+- `query_kind` ([LowCardinality(String)](../../sql-reference/data-types/lowcardinality.md)) — Type of the query.
+- `databases` ([Array](../../sql-reference/data-types/array.md)([LowCardinality(String)](../../sql-reference/data-types/lowcardinality.md))) — Names of the databases present in the query.
+- `tables` ([Array](../../sql-reference/data-types/array.md)([LowCardinality(String)](../../sql-reference/data-types/lowcardinality.md))) — Names of the tables present in the query.
+- `columns` ([Array](../../sql-reference/data-types/array.md)([LowCardinality(String)](../../sql-reference/data-types/lowcardinality.md))) — Names of the columns present in the query.
 - `exception_code` ([Int32](../../sql-reference/data-types/int-uint.md)) — Code of an exception.
+- `exception` ([String](../../sql-reference/data-types/string.md)) — Exception message.
 - `stack_trace` ([String](../../sql-reference/data-types/string.md)) — [Stack trace](https://en.wikipedia.org/wiki/Stack_trace). An empty string, if the query was completed successfully.
 - `is_initial_query` ([UInt8](../../sql-reference/data-types/int-uint.md)) — Query type. Possible values:
- 1 — Query was initiated by the client.
@@ -73,69 +79,98 @@ Columns:
- 0 — The query was launched from the TCP interface.
- 1 — `GET` method was used.
- 2 — `POST` method was used.
-- `http_user_agent` ([String](../../sql-reference/data-types/string.md)) — The `UserAgent` header passed in the HTTP request.
-- `quota_key` ([String](../../sql-reference/data-types/string.md)) — The “quota key” specified in the [quotas](../../operations/quotas.md) setting (see `keyed`).
+- `http_user_agent` ([String](../../sql-reference/data-types/string.md)) — HTTP header `UserAgent` passed in the HTTP request.
+- `http_referer` ([String](../../sql-reference/data-types/string.md)) — HTTP header `Referer` passed in the HTTP request (contains an absolute or partial address of the page making the query).
+- `forwarded_for` ([String](../../sql-reference/data-types/string.md)) — HTTP header `X-Forwarded-For` passed in the HTTP request.
+- `quota_key` ([String](../../sql-reference/data-types/string.md)) — The `quota key` specified in the [quotas](../../operations/quotas.md) setting (see `keyed`).
 - `revision` ([UInt32](../../sql-reference/data-types/int-uint.md)) — ClickHouse revision.
-- `thread_numbers` ([Array(UInt32)](../../sql-reference/data-types/array.md)) — Number of threads that are participating in query execution.
+- `log_comment` ([String](../../sql-reference/data-types/string.md)) — Log comment. It can be set to an arbitrary string no longer than [max_query_size](../../operations/settings/settings.md#settings-max_query_size). An empty string if it is not defined.
+- `thread_ids` ([Array(UInt64)](../../sql-reference/data-types/array.md)) — Thread IDs that are participating in query execution.
 - `ProfileEvents.Names` ([Array(String)](../../sql-reference/data-types/array.md)) — Counters that measure different metrics. The description of them could be found in the table [system.events](../../operations/system-tables/events.md#system_tables-events)
 - `ProfileEvents.Values` ([Array(UInt64)](../../sql-reference/data-types/array.md)) — Values of metrics that are listed in the `ProfileEvents.Names` column.
 - `Settings.Names` ([Array(String)](../../sql-reference/data-types/array.md)) — Names of settings that were changed when the client ran the query. To enable logging changes to settings, set the `log_query_settings` parameter to 1.
 - `Settings.Values` ([Array(String)](../../sql-reference/data-types/array.md)) — Values of settings that are listed in the `Settings.Names` column.
+- `used_aggregate_functions` ([Array(String)](../../sql-reference/data-types/array.md)) — Canonical names of `aggregate functions` that were used during query execution.
+- `used_aggregate_function_combinators` ([Array(String)](../../sql-reference/data-types/array.md)) — Canonical names of `aggregate functions combinators` that were used during query execution.
+- `used_database_engines` ([Array(String)](../../sql-reference/data-types/array.md)) — Canonical names of `database engines` that were used during query execution.
+- `used_data_type_families` ([Array(String)](../../sql-reference/data-types/array.md)) — Canonical names of `data type families` that were used during query execution.
+- `used_dictionaries` ([Array(String)](../../sql-reference/data-types/array.md)) — Canonical names of `dictionaries` that were used during query execution.
+- `used_formats` ([Array(String)](../../sql-reference/data-types/array.md)) — Canonical names of `formats` that were used during query execution.
+- `used_functions` ([Array(String)](../../sql-reference/data-types/array.md)) — Canonical names of `functions` that were used during query execution.
+- `used_storages` ([Array(String)](../../sql-reference/data-types/array.md)) — Canonical names of `storages` that were used during query execution.
+- `used_table_functions` ([Array(String)](../../sql-reference/data-types/array.md)) — Canonical names of `table functions` that were used during query execution.
**Example** ``` sql -SELECT * FROM system.query_log LIMIT 1 \G +SELECT * FROM system.query_log WHERE type = 'QueryFinish' AND (query LIKE '%toDate(\'2000-12-05\')%') ORDER BY query_start_time DESC LIMIT 1 FORMAT Vertical; ``` ``` text Row 1: ────── -type: QueryStart -event_date: 2020-09-11 -event_time: 2020-09-11 10:08:17 -event_time_microseconds: 2020-09-11 10:08:17.063321 -query_start_time: 2020-09-11 10:08:17 -query_start_time_microseconds: 2020-09-11 10:08:17.063321 -query_duration_ms: 0 -read_rows: 0 -read_bytes: 0 -written_rows: 0 -written_bytes: 0 -result_rows: 0 -result_bytes: 0 -memory_usage: 0 -current_database: default -query: INSERT INTO test1 VALUES -exception_code: 0 -exception: -stack_trace: -is_initial_query: 1 -user: default -query_id: 50a320fd-85a8-49b8-8761-98a86bcbacef -address: ::ffff:127.0.0.1 -port: 33452 -initial_user: default -initial_query_id: 50a320fd-85a8-49b8-8761-98a86bcbacef -initial_address: ::ffff:127.0.0.1 -initial_port: 33452 -interface: 1 -os_user: bharatnc -client_hostname: tower -client_name: ClickHouse -client_revision: 54437 -client_version_major: 20 -client_version_minor: 7 -client_version_patch: 2 -http_method: 0 -http_user_agent: -quota_key: -revision: 54440 -thread_ids: [] -ProfileEvents.Names: [] -ProfileEvents.Values: [] -Settings.Names: ['use_uncompressed_cache','load_balancing','log_queries','max_memory_usage','allow_introspection_functions'] -Settings.Values: ['0','random','1','10000000000','1'] +type: QueryFinish +event_date: 2021-03-18 +event_time: 2021-03-18 20:54:18 +event_time_microseconds: 2021-03-18 20:54:18.676686 +query_start_time: 2021-03-18 20:54:18 +query_start_time_microseconds: 2021-03-18 20:54:18.673934 +query_duration_ms: 2 +read_rows: 100 +read_bytes: 800 +written_rows: 0 +written_bytes: 0 +result_rows: 2 +result_bytes: 4858 +memory_usage: 0 +current_database: default +query: SELECT uniqArray([1, 1, 2]), SUBSTRING('Hello, world', 7, 5), flatten([[[BIT_AND(123)]], [[mod(3, 2)], [CAST('1' AS INTEGER)]]]), week(toDate('2000-12-05')), CAST(arrayJoin([NULL, NULL]) AS Nullable(TEXT)), avgOrDefaultIf(number, number % 2), sumOrNull(number), toTypeName(sumOrNull(number)), countIf(toDate('2000-12-05') + number as d, toDayOfYear(d) % 2) FROM numbers(100) +normalized_query_hash: 17858008518552525706 +query_kind: Select +databases: ['_table_function'] +tables: ['_table_function.numbers'] +columns: ['_table_function.numbers.number'] +exception_code: 0 +exception: +stack_trace: +is_initial_query: 1 +user: default +query_id: 58f3d392-0fa0-4663-ae1d-29917a1a9c9c +address: ::ffff:127.0.0.1 +port: 37486 +initial_user: default +initial_query_id: 58f3d392-0fa0-4663-ae1d-29917a1a9c9c +initial_address: ::ffff:127.0.0.1 +initial_port: 37486 +interface: 1 +os_user: sevirov +client_hostname: clickhouse.ru-central1.internal +client_name: ClickHouse +client_revision: 54447 +client_version_major: 21 +client_version_minor: 4 +client_version_patch: 1 +http_method: 0 +http_user_agent: +http_referer: +forwarded_for: +quota_key: +revision: 54449 +log_comment: +thread_ids: [587,11939] +ProfileEvents.Names: ['Query','SelectQuery','ReadCompressedBytes','CompressedReadBufferBlocks','CompressedReadBufferBytes','IOBufferAllocs','IOBufferAllocBytes','ArenaAllocChunks','ArenaAllocBytes','FunctionExecute','TableFunctionExecute','NetworkSendElapsedMicroseconds','SelectedRows','SelectedBytes','ContextLock','RWLockAcquiredReadLocks','RealTimeMicroseconds','UserTimeMicroseconds','SystemTimeMicroseconds','SoftPageFaults','OSCPUVirtualTimeMicroseconds','OSWriteBytes'] 
+ProfileEvents.Values: [1,1,36,1,10,2,1048680,1,4096,36,1,110,100,800,77,1,3137,1476,1101,8,2577,8192] +Settings.Names: ['load_balancing','max_memory_usage'] +Settings.Values: ['random','10000000000'] +used_aggregate_functions: ['groupBitAnd','avg','sum','count','uniq'] +used_aggregate_function_combinators: ['OrDefault','If','OrNull','Array'] +used_database_engines: [] +used_data_type_families: ['String','Array','Int32','Nullable'] +used_dictionaries: [] +used_formats: [] +used_functions: ['toWeek','CAST','arrayFlatten','toTypeName','toDayOfYear','addDays','array','toDate','modulo','substring','plus'] +used_storages: [] +used_table_functions: ['numbers'] ``` **See Also** @@ -143,4 +178,3 @@ Settings.Values: ['0','random','1','10000000000','1'] - [system.query_thread_log](../../operations/system-tables/query_thread_log.md#system_tables-query_thread_log) — This table contains information about each query execution thread. [Original article](https://clickhouse.tech/docs/en/operations/system_tables/query_log) - diff --git a/docs/en/operations/system-tables/quota_limits.md b/docs/en/operations/system-tables/quota_limits.md index c2dcb4db34d..11616990206 100644 --- a/docs/en/operations/system-tables/quota_limits.md +++ b/docs/en/operations/system-tables/quota_limits.md @@ -17,5 +17,3 @@ Columns: - `max_read_rows` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Maximum number of rows read from all tables and table functions participated in queries. - `max_read_bytes` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Maximum number of bytes read from all tables and table functions participated in queries. - `max_execution_time` ([Nullable](../../sql-reference/data-types/nullable.md)([Float64](../../sql-reference/data-types/float.md))) — Maximum of the query execution time, in seconds. - -[Original article](https://clickhouse.tech/docs/en/operations/system_tables/quota_limits) diff --git a/docs/en/operations/system-tables/quota_usage.md b/docs/en/operations/system-tables/quota_usage.md index 17af9ad9a30..89fdfe70069 100644 --- a/docs/en/operations/system-tables/quota_usage.md +++ b/docs/en/operations/system-tables/quota_usage.md @@ -28,5 +28,3 @@ Columns: ## See Also {#see-also} - [SHOW QUOTA](../../sql-reference/statements/show.md#show-quota-statement) - -[Original article](https://clickhouse.tech/docs/en/operations/system_tables/quota_usage) diff --git a/docs/en/operations/system-tables/quotas_usage.md b/docs/en/operations/system-tables/quotas_usage.md index 31aafd3e697..04cf91cb990 100644 --- a/docs/en/operations/system-tables/quotas_usage.md +++ b/docs/en/operations/system-tables/quotas_usage.md @@ -30,6 +30,4 @@ Columns: ## See Also {#see-also} -- [SHOW QUOTA](../../sql-reference/statements/show.md#show-quota-statement) - -[Original article](https://clickhouse.tech/docs/en/operations/system_tables/quotas_usage) +- [SHOW QUOTA](../../sql-reference/statements/show.md#show-quota-statement) \ No newline at end of file diff --git a/docs/en/operations/system-tables/replication_queue.md b/docs/en/operations/system-tables/replication_queue.md index aa379caa46c..539a29432ac 100644 --- a/docs/en/operations/system-tables/replication_queue.md +++ b/docs/en/operations/system-tables/replication_queue.md @@ -14,7 +14,17 @@ Columns: - `node_name` ([String](../../sql-reference/data-types/string.md)) — Node name in ZooKeeper. 
-- `type` ([String](../../sql-reference/data-types/string.md)) — Type of the task in the queue: `GET_PARTS`, `MERGE_PARTS`, `DETACH_PARTS`, `DROP_PARTS`, or `MUTATE_PARTS`.
+- `type` ([String](../../sql-reference/data-types/string.md)) — Type of the task in the queue, one of:
+
+    - `GET_PART` — Get the part from another replica.
+    - `ATTACH_PART` — Attach the part, possibly from our own replica (if found in the `detached` folder). You may think of it as a `GET_PART` with some optimizations as they're nearly identical.
+    - `MERGE_PARTS` — Merge the parts.
+    - `DROP_RANGE` — Delete the parts in the specified partition in the specified number range.
+    - `CLEAR_COLUMN` — NOTE: Deprecated. Drop specific column from specified partition.
+    - `CLEAR_INDEX` — NOTE: Deprecated. Drop specific index from specified partition.
+    - `REPLACE_RANGE` — Drop a certain range of parts and replace them with new ones.
+    - `MUTATE_PART` — Apply one or several mutations to the part.
+    - `ALTER_METADATA` — Apply alter modification according to global /metadata and /columns paths.
 
 - `create_time` ([Datetime](../../sql-reference/data-types/datetime.md)) — Date and time when the task was submitted for execution.
 
@@ -70,12 +80,12 @@ num_tries: 36
 last_exception: Code: 226, e.displayText() = DB::Exception: Marks file '/opt/clickhouse/data/merge/visits_v2/tmp_fetch_20201130_121373_121384_2/CounterID.mrk' doesn't exist (version 20.8.7.15 (official build))
 last_attempt_time: 2020-12-08 17:35:54
 num_postponed: 0
-postpone_reason:
+postpone_reason:
 last_postpone_time: 1970-01-01 03:00:00
 ```
 
 **See Also**
 
-- [Managing ReplicatedMergeTree Tables](../../sql-reference/statements/system.md/#query-language-system-replicated)
+- [Managing ReplicatedMergeTree Tables](../../sql-reference/statements/system.md#query-language-system-replicated)
 
 [Original article](https://clickhouse.tech/docs/en/operations/system_tables/replication_queue)
diff --git a/docs/en/operations/system-tables/tables.md b/docs/en/operations/system-tables/tables.md
index 6ad1425e032..ccc9ab94f8b 100644
--- a/docs/en/operations/system-tables/tables.md
+++ b/docs/en/operations/system-tables/tables.md
@@ -1,59 +1,65 @@
 # system.tables {#system-tables}
 
-Contains metadata of each table that the server knows about. Detached tables are not shown in `system.tables`.
+Contains metadata of each table that the server knows about.
 
-This table contains the following columns (the column type is shown in brackets):
+[Detached](../../sql-reference/statements/detach.md) tables are not shown in `system.tables`.
 
-- `database` (String) — The name of the database the table is in.
+[Temporary tables](../../sql-reference/statements/create/table.md#temporary-tables) are visible in `system.tables` only in those sessions where they have been created. They are shown with an empty `database` field and with the `is_temporary` flag switched on.
 
-- `name` (String) — Table name.
+Columns:
 
-- `engine` (String) — Table engine name (without parameters).
+- `database` ([String](../../sql-reference/data-types/string.md)) — The name of the database the table is in.
 
-- `is_temporary` (UInt8) - Flag that indicates whether the table is temporary.
+- `name` ([String](../../sql-reference/data-types/string.md)) — Table name.
 
-- `data_path` (String) - Path to the table data in the file system.
+- `engine` ([String](../../sql-reference/data-types/string.md)) — Table engine name (without parameters).
 
-- `metadata_path` (String) - Path to the table metadata in the file system.
+- `is_temporary` ([UInt8](../../sql-reference/data-types/int-uint.md)) - Flag that indicates whether the table is temporary.
 
-- `metadata_modification_time` (DateTime) - Time of latest modification of the table metadata.
+- `data_path` ([String](../../sql-reference/data-types/string.md)) - Path to the table data in the file system.
 
-- `dependencies_database` (Array(String)) - Database dependencies.
+- `metadata_path` ([String](../../sql-reference/data-types/string.md)) - Path to the table metadata in the file system.
 
-- `dependencies_table` (Array(String)) - Table dependencies ([MaterializedView](../../engines/table-engines/special/materializedview.md) tables based on the current table).
+- `metadata_modification_time` ([DateTime](../../sql-reference/data-types/datetime.md)) - Time of latest modification of the table metadata.
 
-- `create_table_query` (String) - The query that was used to create the table.
+- `dependencies_database` ([Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md))) - Database dependencies.
 
-- `engine_full` (String) - Parameters of the table engine.
+- `dependencies_table` ([Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md))) - Table dependencies ([MaterializedView](../../engines/table-engines/special/materializedview.md) tables based on the current table).
 
-- `partition_key` (String) - The partition key expression specified in the table.
+- `create_table_query` ([String](../../sql-reference/data-types/string.md)) - The query that was used to create the table.
 
-- `sorting_key` (String) - The sorting key expression specified in the table.
+- `engine_full` ([String](../../sql-reference/data-types/string.md)) - Parameters of the table engine.
 
-- `primary_key` (String) - The primary key expression specified in the table.
+- `partition_key` ([String](../../sql-reference/data-types/string.md)) - The partition key expression specified in the table.
 
-- `sampling_key` (String) - The sampling key expression specified in the table.
+- `sorting_key` ([String](../../sql-reference/data-types/string.md)) - The sorting key expression specified in the table.
 
-- `storage_policy` (String) - The storage policy:
+- `primary_key` ([String](../../sql-reference/data-types/string.md)) - The primary key expression specified in the table.
+
+- `sampling_key` ([String](../../sql-reference/data-types/string.md)) - The sampling key expression specified in the table.
+
+- `storage_policy` ([String](../../sql-reference/data-types/string.md)) - The storage policy:
 
     - [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-multiple-volumes)
     - [Distributed](../../engines/table-engines/special/distributed.md#distributed)
 
-- `total_rows` (Nullable(UInt64)) - Total number of rows, if it is possible to quickly determine exact number of rows in the table, otherwise `Null` (including underying `Buffer` table).
+- `total_rows` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) - Total number of rows, if it is possible to quickly determine exact number of rows in the table, otherwise `NULL` (including underlying `Buffer` table).
 
-- `total_bytes` (Nullable(UInt64)) - Total number of bytes, if it is possible to quickly determine exact number of bytes for the table on storage, otherwise `Null` (**does not** includes any underlying storage).
+- `total_bytes` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) - Total number of bytes, if it is possible to quickly determine exact number of bytes for the table on storage, otherwise `NULL` (does not include any underlying storage).
 
     - If the table stores data on disk, returns used space on disk (i.e. compressed).
     - If the table stores data in memory, returns approximated number of used bytes in memory.
 
-- `lifetime_rows` (Nullable(UInt64)) - Total number of rows INSERTed since server start (only for `Buffer` tables).
+- `lifetime_rows` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) - Total number of rows INSERTed since server start (only for `Buffer` tables).
 
-- `lifetime_bytes` (Nullable(UInt64)) - Total number of bytes INSERTed since server start (only for `Buffer` tables).
+- `lifetime_bytes` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) - Total number of bytes INSERTed since server start (only for `Buffer` tables).
 
 The `system.tables` table is used in `SHOW TABLES` query implementation.
 
+**Example**
+
 ```sql
-:) SELECT * FROM system.tables LIMIT 2 FORMAT Vertical;
+SELECT * FROM system.tables LIMIT 2 FORMAT Vertical;
 ```
 
 ```text
@@ -100,8 +106,6 @@ sampling_key:
 storage_policy:
 total_rows: ᴺᵁᴸᴸ
 total_bytes: ᴺᵁᴸᴸ
-
-2 rows in set. Elapsed: 0.004 sec.
 ```
 
 [Original article](https://clickhouse.tech/docs/en/operations/system_tables/tables)
diff --git a/docs/en/operations/system-tables/trace_log.md b/docs/en/operations/system-tables/trace_log.md
index b3b04795a60..e4c01a65d9d 100644
--- a/docs/en/operations/system-tables/trace_log.md
+++ b/docs/en/operations/system-tables/trace_log.md
@@ -20,10 +20,12 @@ Columns:
 
     When connecting to the server by `clickhouse-client`, you see the string similar to `Connected to ClickHouse server version 19.18.1 revision 54429.`. This field contains the `revision`, but not the `version` of a server.
 
-- `timer_type` ([Enum8](../../sql-reference/data-types/enum.md)) — Timer type:
+- `trace_type` ([Enum8](../../sql-reference/data-types/enum.md)) — Trace type:
 
-    - `Real` represents wall-clock time.
-    - `CPU` represents CPU time.
+    - `Real` represents collecting stack traces by wall-clock time.
+    - `CPU` represents collecting stack traces by CPU time.
+    - `Memory` represents collecting allocations and deallocations when memory allocation exceeds the subsequent watermark.
+    - `MemorySample` represents collecting random allocations and deallocations.
 
 - `thread_number` ([UInt32](../../sql-reference/data-types/int-uint.md)) — Thread identifier.
 
diff --git a/docs/en/operations/tips.md b/docs/en/operations/tips.md
index e62dea0b04e..865fe58d7cd 100644
--- a/docs/en/operations/tips.md
+++ b/docs/en/operations/tips.md
@@ -191,8 +191,9 @@ dynamicConfigFile=/etc/zookeeper-{{ '{{' }} cluster['name'] {{ '}}' }}/conf/zoo.
 Java version:
 
 ``` text
-Java(TM) SE Runtime Environment (build 1.8.0_25-b17)
-Java HotSpot(TM) 64-Bit Server VM (build 25.25-b02, mixed mode)
+openjdk 11.0.5-shenandoah 2019-10-15
+OpenJDK Runtime Environment (build 11.0.5-shenandoah+10-adhoc.heretic.src)
+OpenJDK 64-Bit Server VM (build 11.0.5-shenandoah+10-adhoc.heretic.src, mixed mode)
 ```
 
 JVM parameters:
@@ -204,7 +205,7 @@ ZOOCFGDIR=/etc/$NAME/conf
 
 # TODO this is really ugly
 # How to find out, which jars are needed?
# seems, that log4j requires the log4j.properties file to be in the classpath -CLASSPATH="$ZOOCFGDIR:/usr/build/classes:/usr/build/lib/*.jar:/usr/share/zookeeper/zookeeper-3.5.1-metrika.jar:/usr/share/zookeeper/slf4j-log4j12-1.7.5.jar:/usr/share/zookeeper/slf4j-api-1.7.5.jar:/usr/share/zookeeper/servlet-api-2.5-20081211.jar:/usr/share/zookeeper/netty-3.7.0.Final.jar:/usr/share/zookeeper/log4j-1.2.16.jar:/usr/share/zookeeper/jline-2.11.jar:/usr/share/zookeeper/jetty-util-6.1.26.jar:/usr/share/zookeeper/jetty-6.1.26.jar:/usr/share/zookeeper/javacc.jar:/usr/share/zookeeper/jackson-mapper-asl-1.9.11.jar:/usr/share/zookeeper/jackson-core-asl-1.9.11.jar:/usr/share/zookeeper/commons-cli-1.2.jar:/usr/src/java/lib/*.jar:/usr/etc/zookeeper" +CLASSPATH="$ZOOCFGDIR:/usr/build/classes:/usr/build/lib/*.jar:/usr/share/zookeeper-3.6.2/lib/audience-annotations-0.5.0.jar:/usr/share/zookeeper-3.6.2/lib/commons-cli-1.2.jar:/usr/share/zookeeper-3.6.2/lib/commons-lang-2.6.jar:/usr/share/zookeeper-3.6.2/lib/jackson-annotations-2.10.3.jar:/usr/share/zookeeper-3.6.2/lib/jackson-core-2.10.3.jar:/usr/share/zookeeper-3.6.2/lib/jackson-databind-2.10.3.jar:/usr/share/zookeeper-3.6.2/lib/javax.servlet-api-3.1.0.jar:/usr/share/zookeeper-3.6.2/lib/jetty-http-9.4.24.v20191120.jar:/usr/share/zookeeper-3.6.2/lib/jetty-io-9.4.24.v20191120.jar:/usr/share/zookeeper-3.6.2/lib/jetty-security-9.4.24.v20191120.jar:/usr/share/zookeeper-3.6.2/lib/jetty-server-9.4.24.v20191120.jar:/usr/share/zookeeper-3.6.2/lib/jetty-servlet-9.4.24.v20191120.jar:/usr/share/zookeeper-3.6.2/lib/jetty-util-9.4.24.v20191120.jar:/usr/share/zookeeper-3.6.2/lib/jline-2.14.6.jar:/usr/share/zookeeper-3.6.2/lib/json-simple-1.1.1.jar:/usr/share/zookeeper-3.6.2/lib/log4j-1.2.17.jar:/usr/share/zookeeper-3.6.2/lib/metrics-core-3.2.5.jar:/usr/share/zookeeper-3.6.2/lib/netty-buffer-4.1.50.Final.jar:/usr/share/zookeeper-3.6.2/lib/netty-codec-4.1.50.Final.jar:/usr/share/zookeeper-3.6.2/lib/netty-common-4.1.50.Final.jar:/usr/share/zookeeper-3.6.2/lib/netty-handler-4.1.50.Final.jar:/usr/share/zookeeper-3.6.2/lib/netty-resolver-4.1.50.Final.jar:/usr/share/zookeeper-3.6.2/lib/netty-transport-4.1.50.Final.jar:/usr/share/zookeeper-3.6.2/lib/netty-transport-native-epoll-4.1.50.Final.jar:/usr/share/zookeeper-3.6.2/lib/netty-transport-native-unix-common-4.1.50.Final.jar:/usr/share/zookeeper-3.6.2/lib/simpleclient-0.6.0.jar:/usr/share/zookeeper-3.6.2/lib/simpleclient_common-0.6.0.jar:/usr/share/zookeeper-3.6.2/lib/simpleclient_hotspot-0.6.0.jar:/usr/share/zookeeper-3.6.2/lib/simpleclient_servlet-0.6.0.jar:/usr/share/zookeeper-3.6.2/lib/slf4j-api-1.7.25.jar:/usr/share/zookeeper-3.6.2/lib/slf4j-log4j12-1.7.25.jar:/usr/share/zookeeper-3.6.2/lib/snappy-java-1.1.7.jar:/usr/share/zookeeper-3.6.2/lib/zookeeper-3.6.2.jar:/usr/share/zookeeper-3.6.2/lib/zookeeper-jute-3.6.2.jar:/usr/share/zookeeper-3.6.2/lib/zookeeper-prometheus-metrics-3.6.2.jar:/usr/share/zookeeper-3.6.2/etc" ZOOCFG="$ZOOCFGDIR/zoo.cfg" ZOO_LOG_DIR=/var/log/$NAME @@ -213,27 +214,17 @@ GROUP=zookeeper PIDDIR=/var/run/$NAME PIDFILE=$PIDDIR/$NAME.pid SCRIPTNAME=/etc/init.d/$NAME -JAVA=/usr/bin/java +JAVA=/usr/local/jdk-11/bin/java ZOOMAIN="org.apache.zookeeper.server.quorum.QuorumPeerMain" ZOO_LOG4J_PROP="INFO,ROLLINGFILE" JMXLOCALONLY=false JAVA_OPTS="-Xms{{ '{{' }} cluster.get('xms','128M') {{ '}}' }} \ -Xmx{{ '{{' }} cluster.get('xmx','1G') {{ '}}' }} \ - -Xloggc:/var/log/$NAME/zookeeper-gc.log \ - -XX:+UseGCLogFileRotation \ - -XX:NumberOfGCLogFiles=16 \ - -XX:GCLogFileSize=16M \ + 
-Xlog:safepoint,gc*=info,age*=debug:file=/var/log/$NAME/zookeeper-gc.log:time,level,tags:filecount=16,filesize=16M
 -verbose:gc \
-    -XX:+PrintGCTimeStamps \
-    -XX:+PrintGCDateStamps \
-    -XX:+PrintGCDetails
-    -XX:+PrintTenuringDistribution \
-    -XX:+PrintGCApplicationStoppedTime \
-    -XX:+PrintGCApplicationConcurrentTime \
-    -XX:+PrintSafepointStatistics \
-    -XX:+UseParNewGC \
-    -XX:+UseConcMarkSweepGC \
--XX:+CMSParallelRemarkEnabled"
+    -XX:+UseG1GC \
+    -Djute.maxbuffer=8388608 \
+    -XX:MaxGCPauseMillis=50"
 ```
 
 Salt init:
diff --git a/docs/en/operations/update.md b/docs/en/operations/update.md
index 9fa9c44e130..dbcf9ae2b3e 100644
--- a/docs/en/operations/update.md
+++ b/docs/en/operations/update.md
@@ -15,7 +15,8 @@ $ sudo service clickhouse-server restart
 
 If you installed ClickHouse using something other than the recommended `deb` packages, use the appropriate update method.
 
-ClickHouse does not support a distributed update. The operation should be performed consecutively on each separate server. Do not update all the servers on a cluster simultaneously, or the cluster will be unavailable for some time.
+!!! note "Note"
+    You can update multiple servers at once as long as there is no moment when all replicas of one shard are offline.
 
 The upgrade of older version of ClickHouse to specific version:
 
@@ -28,7 +29,3 @@ $ sudo apt-get update
 $ sudo apt-get install clickhouse-server=xx.yy.a.b clickhouse-client=xx.yy.a.b clickhouse-common-static=xx.yy.a.b
 $ sudo service clickhouse-server restart
 ```
-
-
-
-
diff --git a/docs/en/sql-reference/aggregate-functions/combinators.md b/docs/en/sql-reference/aggregate-functions/combinators.md
index 015c90e90c7..259202805d3 100644
--- a/docs/en/sql-reference/aggregate-functions/combinators.md
+++ b/docs/en/sql-reference/aggregate-functions/combinators.md
@@ -27,7 +27,37 @@ Example 2: `uniqArray(arr)` – Counts the number of unique elements in all ‘a
 
 ## -SimpleState {#agg-functions-combinator-simplestate}
 
-If you apply this combinator, the aggregate function returns the same value but with a different type. This is an `SimpleAggregateFunction(...)` that can be stored in a table to work with [AggregatingMergeTree](../../engines/table-engines/mergetree-family/aggregatingmergetree.md) table engines.
+If you apply this combinator, the aggregate function returns the same value but with a different type. This is a [SimpleAggregateFunction(...)](../../sql-reference/data-types/simpleaggregatefunction.md) that can be stored in a table to work with [AggregatingMergeTree](../../engines/table-engines/mergetree-family/aggregatingmergetree.md) tables.
+
+**Syntax**
+
+``` sql
+SimpleState(x)
+```
+
+**Arguments**
+
+- `x` — Aggregate function parameters.
+
+**Returned values**
+
+The value of an aggregate function with the `SimpleAggregateFunction(...)` type.
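+
+In practice the combinator is mostly used to fill [AggregatingMergeTree](../../engines/table-engines/mergetree-family/aggregatingmergetree.md) tables; a minimal sketch, assuming a hypothetical table `simple_agg`:
+
+``` sql
+CREATE TABLE simple_agg
+(
+    k UInt64,
+    v SimpleAggregateFunction(any, UInt64)
+)
+ENGINE = AggregatingMergeTree() ORDER BY k;
+
+-- anySimpleState produces values of the matching SimpleAggregateFunction(any, UInt64) type
+INSERT INTO simple_agg SELECT number % 2 AS k, anySimpleState(number) AS v FROM numbers(10) GROUP BY k;
+```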
+
+**Example**
+
+Query:
+
+``` sql
+WITH anySimpleState(number) AS c SELECT toTypeName(c), c FROM numbers(1);
+```
+
+Result:
+
+``` text
+┌─toTypeName(c)────────────────────────┬─c─┐
+│ SimpleAggregateFunction(any, UInt64) │ 0 │
+└──────────────────────────────────────┴───┘
+```
 
 ## -State {#agg-functions-combinator-state}
 
@@ -249,5 +279,3 @@ FROM people
 └────────┴───────────────────────────┘
 ```
 
-
-[Original article](https://clickhouse.tech/docs/en/query_language/agg_functions/combinators/)
diff --git a/docs/en/sql-reference/aggregate-functions/index.md b/docs/en/sql-reference/aggregate-functions/index.md
index 543a5d3fed8..d2b46f6de53 100644
--- a/docs/en/sql-reference/aggregate-functions/index.md
+++ b/docs/en/sql-reference/aggregate-functions/index.md
@@ -59,4 +59,3 @@ SELECT groupArray(y) FROM t_null_big
 
 `groupArray` does not include `NULL` in the resulting array.
 
-[Original article](https://clickhouse.tech/docs/en/query_language/agg_functions/)
diff --git a/docs/en/sql-reference/aggregate-functions/parametric-functions.md b/docs/en/sql-reference/aggregate-functions/parametric-functions.md
index c6c97b5428b..b9d504241db 100644
--- a/docs/en/sql-reference/aggregate-functions/parametric-functions.md
+++ b/docs/en/sql-reference/aggregate-functions/parametric-functions.md
@@ -243,7 +243,7 @@ The function works according to the algorithm:
 
 **Syntax**
 
 ``` sql
-windowFunnel(window, [mode])(timestamp, cond1, cond2, ..., condN)
+windowFunnel(window, [mode, [mode, ... ]])(timestamp, cond1, cond2, ..., condN)
 ```
 
 **Arguments**
 
@@ -253,9 +253,11 @@ windowFunnel(window, [mode])(timestamp, cond1, cond2, ..., condN)
 
 **Parameters**
 
-- `window` — Length of the sliding window. The unit of `window` depends on the `timestamp` itself and varies. Determined using the expression `timestamp of cond2 <= timestamp of cond1 + window`.
-- `mode` - It is an optional argument.
-    - `'strict'` - When the `'strict'` is set, the windowFunnel() applies conditions only for the unique values.
+- `window` — Length of the sliding window; it is the time interval between the first and the last condition. The unit of `window` depends on the `timestamp` itself and varies. Determined using the expression `timestamp of cond1 <= timestamp of cond2 <= ... <= timestamp of condN <= timestamp of cond1 + window`.
+- `mode` — An optional argument. One or more modes can be set.
+    - `'strict'` — If the same condition holds for a sequence of events, such repeated events are skipped.
+    - `'strict_order'` — Do not allow other events to intervene. E.g. in the case of `A->B->D->C`, finding `A->B->C` stops at `D` and the maximum event level is 2.
+    - `'strict_increase'` — Apply conditions only to events with strictly increasing timestamps.
 
 **Returned value**
 
@@ -336,14 +338,14 @@ retention(cond1, cond2, ..., cond32);
 
 **Arguments**
 
-- `cond` — an expression that returns a `UInt8` result (1 or 0).
+- `cond` — An expression that returns a `UInt8` result (1 or 0).
 
 **Returned value**
 
 The array of 1 or 0.
 
-- 1 — condition was met for the event.
-- 0 — condition wasn’t met for the event.
+- 1 — Condition was met for the event.
+- 0 — Condition wasn’t met for the event.
 
 Type: `UInt8`.
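+
+For instance, on a hypothetical table `events(uid, date)`, per-user retention over two days could be computed like this (a sketch, not part of this patch):
+
+``` sql
+SELECT uid, retention(date = '2020-01-01', date = '2020-01-02') AS r
+FROM events
+WHERE date IN ('2020-01-01', '2020-01-02')
+GROUP BY uid;
+```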
@@ -500,7 +502,6 @@ Problem: Generate a report that shows only keywords that produced at least 5 uni Solution: Write in the GROUP BY query SearchPhrase HAVING uniqUpTo(4)(UserID) >= 5 ``` -[Original article](https://clickhouse.tech/docs/en/query_language/agg_functions/parametric_functions/) ## sumMapFiltered(keys_to_keep)(keys, values) {#summapfilteredkeys-to-keepkeys-values} diff --git a/docs/en/sql-reference/aggregate-functions/reference/argmax.md b/docs/en/sql-reference/aggregate-functions/reference/argmax.md index 72aa607a751..0630e2f585e 100644 --- a/docs/en/sql-reference/aggregate-functions/reference/argmax.md +++ b/docs/en/sql-reference/aggregate-functions/reference/argmax.md @@ -6,20 +6,12 @@ toc_priority: 106 Calculates the `arg` value for a maximum `val` value. If there are several different values of `arg` for maximum values of `val`, returns the first of these values encountered. -Tuple version of this function will return the tuple with the maximum `val` value. It is convenient for use with [SimpleAggregateFunction](../../../sql-reference/data-types/simpleaggregatefunction.md). - **Syntax** ``` sql argMax(arg, val) ``` -or - -``` sql -argMax(tuple(arg, val)) -``` - **Arguments** - `arg` — Argument. @@ -29,13 +21,7 @@ argMax(tuple(arg, val)) - `arg` value that corresponds to maximum `val` value. -Type: matches `arg` type. - -For tuple in the input: - -- Tuple `(arg, val)`, where `val` is the maximum value and `arg` is a corresponding value. - -Type: [Tuple](../../../sql-reference/data-types/tuple.md). +Type: matches `arg` type. **Example** @@ -52,15 +38,13 @@ Input table: Query: ``` sql -SELECT argMax(user, salary), argMax(tuple(user, salary), salary), argMax(tuple(user, salary)) FROM salary; +SELECT argMax(user, salary) FROM salary; ``` Result: ``` text -┌─argMax(user, salary)─┬─argMax(tuple(user, salary), salary)─┬─argMax(tuple(user, salary))─┐ -│ director │ ('director',5000) │ ('director',5000) │ -└──────────────────────┴─────────────────────────────────────┴─────────────────────────────┘ +┌─argMax(user, salary)─┐ +│ director │ +└──────────────────────┘ ``` - -[Original article](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/argmax/) diff --git a/docs/en/sql-reference/aggregate-functions/reference/argmin.md b/docs/en/sql-reference/aggregate-functions/reference/argmin.md index 7ddc38cd28a..a259a76b7d7 100644 --- a/docs/en/sql-reference/aggregate-functions/reference/argmin.md +++ b/docs/en/sql-reference/aggregate-functions/reference/argmin.md @@ -6,20 +6,12 @@ toc_priority: 105 Calculates the `arg` value for a minimum `val` value. If there are several different values of `arg` for minimum values of `val`, returns the first of these values encountered. -Tuple version of this function will return the tuple with the minimum `val` value. It is convenient for use with [SimpleAggregateFunction](../../../sql-reference/data-types/simpleaggregatefunction.md). - **Syntax** ``` sql argMin(arg, val) ``` -or - -``` sql -argMin(tuple(arg, val)) -``` - **Arguments** - `arg` — Argument. @@ -29,13 +21,7 @@ argMin(tuple(arg, val)) - `arg` value that corresponds to minimum `val` value. -Type: matches `arg` type. - -For tuple in the input: - -- Tuple `(arg, val)`, where `val` is the minimum value and `arg` is a corresponding value. - -Type: [Tuple](../../../sql-reference/data-types/tuple.md). +Type: matches `arg` type. 
**Example**
 
@@ -52,15 +38,13 @@ Input table:
 
 Query:
 
 ``` sql
-SELECT argMin(user, salary), argMin(tuple(user, salary)) FROM salary;
+SELECT argMin(user, salary) FROM salary;
 ```
 
 Result:
 
 ``` text
-┌─argMin(user, salary)─┬─argMin(tuple(user, salary))─┐
-│ worker │ ('worker',1000) │
-└──────────────────────┴─────────────────────────────┘
+┌─argMin(user, salary)─┐
+│ worker │
+└──────────────────────┘
 ```
-
-[Original article](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/argmin/)
diff --git a/docs/en/sql-reference/aggregate-functions/reference/avg.md b/docs/en/sql-reference/aggregate-functions/reference/avg.md
index d53a47a36a3..cbd409ccab6 100644
--- a/docs/en/sql-reference/aggregate-functions/reference/avg.md
+++ b/docs/en/sql-reference/aggregate-functions/reference/avg.md
@@ -14,26 +14,19 @@ avg(x)
 
 **Arguments**
 
-- `x` — Values.
-
-`x` must be
-[Integer](../../../sql-reference/data-types/int-uint.md),
-[floating-point](../../../sql-reference/data-types/float.md), or
-[Decimal](../../../sql-reference/data-types/decimal.md).
+- `x` — Input values, must be [Integer](../../../sql-reference/data-types/int-uint.md), [Float](../../../sql-reference/data-types/float.md), or [Decimal](../../../sql-reference/data-types/decimal.md).
 
 **Returned value**
 
-- `NaN` if the supplied parameter is empty.
-- Mean otherwise.
-
-**Return type** is always [Float64](../../../sql-reference/data-types/float.md).
+- The arithmetic mean, always as [Float64](../../../sql-reference/data-types/float.md).
+- `NaN` if the input parameter `x` is empty.
 
 **Example**
 
 Query:
 
 ``` sql
-SELECT avg(x) FROM values('x Int8', 0, 1, 2, 3, 4, 5)
+SELECT avg(x) FROM values('x Int8', 0, 1, 2, 3, 4, 5);
 ```
 
 Result:
 
@@ -46,11 +39,20 @@ Result:
 
 **Example**
 
+Create a temporary table:
+
 Query:
 
 ``` sql
 CREATE TABLE test (t UInt8) ENGINE = Memory;
-SELECT avg(t) FROM test
+```
+
+Get the arithmetic mean:
+
+Query:
+
+``` sql
+SELECT avg(t) FROM test;
 ```
 
 Result:
 
@@ -60,3 +62,5 @@ Result:
 │ nan │
 └────────┘
 ```
+
+[Original article](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/avg/)
diff --git a/docs/en/sql-reference/aggregate-functions/reference/count.md b/docs/en/sql-reference/aggregate-functions/reference/count.md
index 0a5aef2fe97..48c6f3f8c05 100644
--- a/docs/en/sql-reference/aggregate-functions/reference/count.md
+++ b/docs/en/sql-reference/aggregate-functions/reference/count.md
@@ -7,8 +7,9 @@ toc_priority: 1
 Counts the number of rows or not-NULL values.
 
 ClickHouse supports the following syntaxes for `count`:
-- `count(expr)` or `COUNT(DISTINCT expr)`.
-- `count()` or `COUNT(*)`. The `count()` syntax is ClickHouse-specific.
+
+- `count(expr)` or `COUNT(DISTINCT expr)`.
+- `count()` or `COUNT(*)`. The `count()` syntax is ClickHouse-specific.
 
 **Arguments**
 
diff --git a/docs/en/sql-reference/aggregate-functions/reference/deltasum.md b/docs/en/sql-reference/aggregate-functions/reference/deltasum.md
index bb6f802ccaf..c40f2372033 100644
--- a/docs/en/sql-reference/aggregate-functions/reference/deltasum.md
+++ b/docs/en/sql-reference/aggregate-functions/reference/deltasum.md
@@ -4,16 +4,70 @@ toc_priority: 141
 
 # deltaSum {#agg_functions-deltasum}
 
-Syntax: `deltaSum(value)`
+Sums the arithmetic difference between consecutive rows. If the difference is negative, it is ignored.
 
-Adds the differences between consecutive rows. If the difference is negative, it is ignored.
-`value` must be some integer or floating point type.
+Note that the underlying data must be sorted in order for this function to work properly.
+If you would like to use this function in a materialized view, you most likely want to use the
+[deltaSumTimestamp](deltasumtimestamp.md) method instead.
 
-Example:
+**Syntax**
 
-```sql
-select deltaSum(arrayJoin([1, 2, 3])); -- => 2
-select deltaSum(arrayJoin([1, 2, 3, 0, 3, 4, 2, 3])); -- => 7
-select deltaSum(arrayJoin([2.25, 3, 4.5])); -- => 2.25
+``` sql
+deltaSum(value)
 ```
 
+**Arguments**
+
+- `value` — Input values, must be [Integer](../../data-types/int-uint.md) or [Float](../../data-types/float.md) type.
+
+**Returned value**
+
+- The accumulated arithmetic difference, of the `Integer` or `Float` type.
+
+**Examples**
+
+Query:
+
+``` sql
+SELECT deltaSum(arrayJoin([1, 2, 3]));
+```
+
+Result:
+
+``` text
+┌─deltaSum(arrayJoin([1, 2, 3]))─┐
+│ 2 │
+└────────────────────────────────┘
+```
+
+Query:
+
+``` sql
+SELECT deltaSum(arrayJoin([1, 2, 3, 0, 3, 4, 2, 3]));
+```
+
+Result:
+
+``` text
+┌─deltaSum(arrayJoin([1, 2, 3, 0, 3, 4, 2, 3]))─┐
+│ 7 │
+└───────────────────────────────────────────────┘
+```
+
+Query:
+
+``` sql
+SELECT deltaSum(arrayJoin([2.25, 3, 4.5]));
+```
+
+Result:
+
+``` text
+┌─deltaSum(arrayJoin([2.25, 3, 4.5]))─┐
+│ 2.25 │
+└─────────────────────────────────────┘
+```
+
+## See Also {#see-also}
+
+- [runningDifference](../../functions/other-functions.md#other_functions-runningdifference)
diff --git a/docs/en/sql-reference/aggregate-functions/reference/deltasumtimestamp.md b/docs/en/sql-reference/aggregate-functions/reference/deltasumtimestamp.md
new file mode 100644
index 00000000000..2bfafdc81d1
--- /dev/null
+++ b/docs/en/sql-reference/aggregate-functions/reference/deltasumtimestamp.md
@@ -0,0 +1,41 @@
+---
+toc_priority: 141
+---
+
+# deltaSumTimestamp {#agg_functions-deltasumtimestamp}
+
+Syntax: `deltaSumTimestamp(value, timestamp)`
+
+Adds the differences between consecutive rows. If the difference is negative, it is ignored.
+Uses `timestamp` to order values.
+
+This function is primarily for materialized views that are ordered by some time-bucket-aligned
+timestamp, for example a `toStartOfMinute` bucket. Because the rows in such a materialized view
+will all have the same timestamp, it is impossible for them to be merged in the "right" order. This
+function keeps track of the `timestamp` of the values it's seen, so it's possible to order the states
+correctly during merging.
+
+To calculate the delta sum across an ordered collection you can simply use the
+[deltaSum](./deltasum.md) function.
+
+**Arguments**
+
+- `value` — Must be some [Integer](../../data-types/int-uint.md) type or [Float](../../data-types/float.md) type or a [Date](../../data-types/date.md) or [DateTime](../../data-types/datetime.md).
+- `timestamp` — Must be some [Integer](../../data-types/int-uint.md) type or [Float](../../data-types/float.md) type or a [Date](../../data-types/date.md) or [DateTime](../../data-types/datetime.md).
+
+**Returned value**
+
+- Accumulated differences between consecutive values, ordered by the `timestamp` parameter.
+ +**Example** + +```sql +SELECT deltaSumTimestamp(value, timestamp) +FROM (select number as timestamp, [0, 4, 8, 3, 0, 0, 0, 1, 3, 5][number] as value from numbers(1, 10)) +``` + +``` text +┌─deltaSumTimestamp(value, timestamp)─┐ +│ 13 │ +└─────────────────────────────────────┘ +``` diff --git a/docs/en/sql-reference/aggregate-functions/reference/grouparrayinsertat.md b/docs/en/sql-reference/aggregate-functions/reference/grouparrayinsertat.md index 68456bf7844..d29550b007e 100644 --- a/docs/en/sql-reference/aggregate-functions/reference/grouparrayinsertat.md +++ b/docs/en/sql-reference/aggregate-functions/reference/grouparrayinsertat.md @@ -9,7 +9,7 @@ Inserts a value into the array at the specified position. **Syntax** ``` sql -groupArrayInsertAt(default_x, size)(x, pos); +groupArrayInsertAt(default_x, size)(x, pos) ``` If in one query several values are inserted into the same position, the function behaves in the following ways: @@ -21,8 +21,8 @@ If in one query several values are inserted into the same position, the function - `x` — Value to be inserted. [Expression](../../../sql-reference/syntax.md#syntax-expressions) resulting in one of the [supported data types](../../../sql-reference/data-types/index.md). - `pos` — Position at which the specified element `x` is to be inserted. Index numbering in the array starts from zero. [UInt32](../../../sql-reference/data-types/int-uint.md#uint-ranges). -- `default_x`— Default value for substituting in empty positions. Optional parameter. [Expression](../../../sql-reference/syntax.md#syntax-expressions) resulting in the data type configured for the `x` parameter. If `default_x` is not defined, the [default values](../../../sql-reference/statements/create/table.md#create-default-values) are used. -- `size`— Length of the resulting array. Optional parameter. When using this parameter, the default value `default_x` must be specified. [UInt32](../../../sql-reference/data-types/int-uint.md#uint-ranges). +- `default_x` — Default value for substituting in empty positions. Optional parameter. [Expression](../../../sql-reference/syntax.md#syntax-expressions) resulting in the data type configured for the `x` parameter. If `default_x` is not defined, the [default values](../../../sql-reference/statements/create/table.md#create-default-values) are used. +- `size` — Length of the resulting array. Optional parameter. When using this parameter, the default value `default_x` must be specified. [UInt32](../../../sql-reference/data-types/int-uint.md#uint-ranges). **Returned value** diff --git a/docs/en/sql-reference/aggregate-functions/reference/groupbitmapor.md b/docs/en/sql-reference/aggregate-functions/reference/groupbitmapor.md index a4d99fd29e3..d3f40f63f65 100644 --- a/docs/en/sql-reference/aggregate-functions/reference/groupbitmapor.md +++ b/docs/en/sql-reference/aggregate-functions/reference/groupbitmapor.md @@ -14,7 +14,7 @@ groupBitmapOr(expr) `expr` – An expression that results in `AggregateFunction(groupBitmap, UInt*)` type. -**Return value** +**Returned value** Value of the `UInt64` type. diff --git a/docs/en/sql-reference/aggregate-functions/reference/groupbitmapxor.md b/docs/en/sql-reference/aggregate-functions/reference/groupbitmapxor.md index 834f088d02f..cbe01e08145 100644 --- a/docs/en/sql-reference/aggregate-functions/reference/groupbitmapxor.md +++ b/docs/en/sql-reference/aggregate-functions/reference/groupbitmapxor.md @@ -14,7 +14,7 @@ groupBitmapOr(expr) `expr` – An expression that results in `AggregateFunction(groupBitmap, UInt*)` type. 
-**Return value** +**Returned value** Value of the `UInt64` type. diff --git a/docs/en/sql-reference/aggregate-functions/reference/groupbitor.md b/docs/en/sql-reference/aggregate-functions/reference/groupbitor.md index e427a9ad970..24077de0adc 100644 --- a/docs/en/sql-reference/aggregate-functions/reference/groupbitor.md +++ b/docs/en/sql-reference/aggregate-functions/reference/groupbitor.md @@ -14,7 +14,7 @@ groupBitOr(expr) `expr` – An expression that results in `UInt*` type. -**Return value** +**Returned value** Value of the `UInt*` type. diff --git a/docs/en/sql-reference/aggregate-functions/reference/initializeAggregation.md b/docs/en/sql-reference/aggregate-functions/reference/initializeAggregation.md index 313d6bf81f5..c8fb535089b 100644 --- a/docs/en/sql-reference/aggregate-functions/reference/initializeAggregation.md +++ b/docs/en/sql-reference/aggregate-functions/reference/initializeAggregation.md @@ -10,7 +10,7 @@ Use it for tests or to process columns of types `AggregateFunction` and `Aggrega **Syntax** ``` sql -initializeAggregation (aggregate_function, column_1, column_2); +initializeAggregation (aggregate_function, column_1, column_2) ``` **Arguments** diff --git a/docs/en/sql-reference/aggregate-functions/reference/kurtpop.md b/docs/en/sql-reference/aggregate-functions/reference/kurtpop.md index db402c99663..c51c4b92e74 100644 --- a/docs/en/sql-reference/aggregate-functions/reference/kurtpop.md +++ b/docs/en/sql-reference/aggregate-functions/reference/kurtpop.md @@ -21,5 +21,5 @@ The kurtosis of the given distribution. Type — [Float64](../../../sql-referenc **Example** ``` sql -SELECT kurtPop(value) FROM series_with_value_column +SELECT kurtPop(value) FROM series_with_value_column; ``` diff --git a/docs/en/sql-reference/aggregate-functions/reference/kurtsamp.md b/docs/en/sql-reference/aggregate-functions/reference/kurtsamp.md index 4bb9f76763b..0ee40138adc 100644 --- a/docs/en/sql-reference/aggregate-functions/reference/kurtsamp.md +++ b/docs/en/sql-reference/aggregate-functions/reference/kurtsamp.md @@ -23,5 +23,5 @@ The kurtosis of the given distribution. Type — [Float64](../../../sql-referenc **Example** ``` sql -SELECT kurtSamp(value) FROM series_with_value_column +SELECT kurtSamp(value) FROM series_with_value_column; ``` diff --git a/docs/en/sql-reference/aggregate-functions/reference/mannwhitneyutest.md b/docs/en/sql-reference/aggregate-functions/reference/mannwhitneyutest.md index dc5fc45b878..34e8188299c 100644 --- a/docs/en/sql-reference/aggregate-functions/reference/mannwhitneyutest.md +++ b/docs/en/sql-reference/aggregate-functions/reference/mannwhitneyutest.md @@ -27,7 +27,7 @@ The null hypothesis is that two populations are stochastically equal. Also one-s - `'two-sided'`; - `'greater'`; - `'less'`. -- `continuity_correction` - if not 0 then continuity correction in the normal approximation for the p-value is applied. (Optional, default: 1.) [UInt64](../../../sql-reference/data-types/int-uint.md). +- `continuity_correction` — if not 0 then continuity correction in the normal approximation for the p-value is applied. (Optional, default: 1.) [UInt64](../../../sql-reference/data-types/int-uint.md). 
**Returned values**
 
diff --git a/docs/en/sql-reference/aggregate-functions/reference/max.md b/docs/en/sql-reference/aggregate-functions/reference/max.md
index c462dd590a6..25173a48906 100644
--- a/docs/en/sql-reference/aggregate-functions/reference/max.md
+++ b/docs/en/sql-reference/aggregate-functions/reference/max.md
@@ -4,4 +4,21 @@ toc_priority: 3
 
 # max {#agg_function-max}
 
-Calculates the maximum.
+Aggregate function that calculates the maximum across a group of values.
+
+Example:
+
+``` sql
+SELECT max(salary) FROM employees;
+```
+
+``` sql
+SELECT department, max(salary) FROM employees GROUP BY department;
+```
+
+If you need a non-aggregate function to choose the maximum of two values, see `greatest`:
+
+``` sql
+SELECT greatest(a, b) FROM table;
+```
+
diff --git a/docs/en/sql-reference/aggregate-functions/reference/min.md b/docs/en/sql-reference/aggregate-functions/reference/min.md
index 56b03468243..64b155857f8 100644
--- a/docs/en/sql-reference/aggregate-functions/reference/min.md
+++ b/docs/en/sql-reference/aggregate-functions/reference/min.md
@@ -4,4 +4,20 @@ toc_priority: 2
 
 ## min {#agg_function-min}
 
-Calculates the minimum.
+Aggregate function that calculates the minimum across a group of values.
+
+Example:
+
+``` sql
+SELECT min(salary) FROM employees;
+```
+
+``` sql
+SELECT department, min(salary) FROM employees GROUP BY department;
+```
+
+If you need a non-aggregate function to choose the minimum of two values, see `least`:
+
+``` sql
+SELECT least(a, b) FROM table;
+```
diff --git a/docs/en/sql-reference/aggregate-functions/reference/quantiletdigest.md b/docs/en/sql-reference/aggregate-functions/reference/quantiletdigest.md
index dcc665a68af..dd0d59978d1 100644
--- a/docs/en/sql-reference/aggregate-functions/reference/quantiletdigest.md
+++ b/docs/en/sql-reference/aggregate-functions/reference/quantiletdigest.md
@@ -6,7 +6,7 @@ toc_priority: 207
 
 Computes an approximate [quantile](https://en.wikipedia.org/wiki/Quantile) of a numeric data sequence using the [t-digest](https://github.com/tdunning/t-digest/blob/master/docs/t-digest-paper/histo.pdf) algorithm.
 
-The maximum error is 1%. Memory consumption is `log(n)`, where `n` is a number of values. The result depends on the order of running the query, and is nondeterministic.
+Memory consumption is `log(n)`, where `n` is the number of values. The result depends on the order of running the query, and is nondeterministic.
 
 The performance of the function is lower than performance of [quantile](../../../sql-reference/aggregate-functions/reference/quantile.md#quantile) or [quantileTiming](../../../sql-reference/aggregate-functions/reference/quantiletiming.md#quantiletiming). In terms of the ratio of State size to precision, this function is much better than `quantile`.
 
diff --git a/docs/en/sql-reference/aggregate-functions/reference/skewpop.md b/docs/en/sql-reference/aggregate-functions/reference/skewpop.md
index b9dfc390f9d..f84f8897a35 100644
--- a/docs/en/sql-reference/aggregate-functions/reference/skewpop.md
+++ b/docs/en/sql-reference/aggregate-functions/reference/skewpop.md
@@ -21,5 +21,5 @@ The skewness of the given distribution. 
Type — [Float64](../../../sql-referenc **Example** ``` sql -SELECT skewPop(value) FROM series_with_value_column +SELECT skewPop(value) FROM series_with_value_column; ``` diff --git a/docs/en/sql-reference/aggregate-functions/reference/skewsamp.md b/docs/en/sql-reference/aggregate-functions/reference/skewsamp.md index f7a6df8f507..48a049ca69d 100644 --- a/docs/en/sql-reference/aggregate-functions/reference/skewsamp.md +++ b/docs/en/sql-reference/aggregate-functions/reference/skewsamp.md @@ -23,5 +23,5 @@ The skewness of the given distribution. Type — [Float64](../../../sql-referenc **Example** ``` sql -SELECT skewSamp(value) FROM series_with_value_column +SELECT skewSamp(value) FROM series_with_value_column; ``` diff --git a/docs/en/sql-reference/aggregate-functions/reference/studentttest.md b/docs/en/sql-reference/aggregate-functions/reference/studentttest.md index a1d7ae33fe1..3398fc1ca8c 100644 --- a/docs/en/sql-reference/aggregate-functions/reference/studentttest.md +++ b/docs/en/sql-reference/aggregate-functions/reference/studentttest.md @@ -18,8 +18,8 @@ The null hypothesis is that means of populations are equal. Normal distribution **Arguments** -- `sample_data` — sample data. [Integer](../../../sql-reference/data-types/int-uint.md), [Float](../../../sql-reference/data-types/float.md) or [Decimal](../../../sql-reference/data-types/decimal.md). -- `sample_index` — sample index. [Integer](../../../sql-reference/data-types/int-uint.md). +- `sample_data` — Sample data. [Integer](../../../sql-reference/data-types/int-uint.md), [Float](../../../sql-reference/data-types/float.md) or [Decimal](../../../sql-reference/data-types/decimal.md). +- `sample_index` — Sample index. [Integer](../../../sql-reference/data-types/int-uint.md). **Returned values** diff --git a/docs/en/sql-reference/aggregate-functions/reference/topk.md b/docs/en/sql-reference/aggregate-functions/reference/topk.md index b3e79803ba1..b9bea013ea8 100644 --- a/docs/en/sql-reference/aggregate-functions/reference/topk.md +++ b/docs/en/sql-reference/aggregate-functions/reference/topk.md @@ -18,13 +18,13 @@ We recommend using the `N < 10` value; performance is reduced with large `N` val **Arguments** -- ‘N’ is the number of elements to return. +- `N` – The number of elements to return. If the parameter is omitted, default value 10 is used. **Arguments** -- ’ x ’ – The value to calculate frequency. +- `x` – The value to calculate frequency. **Example** diff --git a/docs/en/sql-reference/aggregate-functions/reference/topkweighted.md b/docs/en/sql-reference/aggregate-functions/reference/topkweighted.md index 02b9f77ea6f..8562336c829 100644 --- a/docs/en/sql-reference/aggregate-functions/reference/topkweighted.md +++ b/docs/en/sql-reference/aggregate-functions/reference/topkweighted.md @@ -18,7 +18,7 @@ topKWeighted(N)(x, weight) **Arguments** -- `x` – The value. +- `x` — The value. - `weight` — The weight. [UInt8](../../../sql-reference/data-types/int-uint.md). **Returned value** diff --git a/docs/en/sql-reference/aggregate-functions/reference/uniqhll12.md b/docs/en/sql-reference/aggregate-functions/reference/uniqhll12.md index 5b23ea81eae..4983220ed7f 100644 --- a/docs/en/sql-reference/aggregate-functions/reference/uniqhll12.md +++ b/docs/en/sql-reference/aggregate-functions/reference/uniqhll12.md @@ -26,7 +26,7 @@ Function: - Uses the HyperLogLog algorithm to approximate the number of different argument values. - 212 5-bit cells are used. The size of the state is slightly more than 2.5 KB. 
The result is not very accurate (up to ~10% error) for small data sets (<10K elements). However, the result is fairly accurate for high-cardinality data sets (10K-100M), with a maximum error of ~1.6%. Starting from 100M, the estimation error increases, and the function will return very inaccurate results for data sets with extremely high cardinality (1B+ elements). + 2^12 5-bit cells are used. The size of the state is slightly more than 2.5 KB. The result is not very accurate (up to ~10% error) for small data sets (<10K elements). However, the result is fairly accurate for high-cardinality data sets (10K-100M), with a maximum error of ~1.6%. Starting from 100M, the estimation error increases, and the function will return very inaccurate results for data sets with extremely high cardinality (1B+ elements). - Provides a deterministic result (it doesn’t depend on the query processing order). diff --git a/docs/en/sql-reference/aggregate-functions/reference/welchttest.md b/docs/en/sql-reference/aggregate-functions/reference/welchttest.md index b391fb1d979..02238de42ef 100644 --- a/docs/en/sql-reference/aggregate-functions/reference/welchttest.md +++ b/docs/en/sql-reference/aggregate-functions/reference/welchttest.md @@ -18,8 +18,8 @@ The null hypothesis is that means of populations are equal. Normal distribution **Arguments** -- `sample_data` — sample data. [Integer](../../../sql-reference/data-types/int-uint.md), [Float](../../../sql-reference/data-types/float.md) or [Decimal](../../../sql-reference/data-types/decimal.md). -- `sample_index` — sample index. [Integer](../../../sql-reference/data-types/int-uint.md). +- `sample_data` — Sample data. [Integer](../../../sql-reference/data-types/int-uint.md), [Float](../../../sql-reference/data-types/float.md) or [Decimal](../../../sql-reference/data-types/decimal.md). +- `sample_index` — Sample index. [Integer](../../../sql-reference/data-types/int-uint.md). **Returned values** diff --git a/docs/en/sql-reference/data-types/date.md b/docs/en/sql-reference/data-types/date.md index 886e93f433c..0cfac4d59fe 100644 --- a/docs/en/sql-reference/data-types/date.md +++ b/docs/en/sql-reference/data-types/date.md @@ -5,7 +5,7 @@ toc_title: Date # Date {#data_type-date} -A date. Stored in two bytes as the number of days since 1970-01-01 (unsigned). Allows storing values from just after the beginning of the Unix Epoch to the upper threshold defined by a constant at the compilation stage (currently, this is until the year 2106, but the final fully-supported year is 2105). +A date. Stored in two bytes as the number of days since 1970-01-01 (unsigned). Allows storing values from just after the beginning of the Unix Epoch to the upper threshold defined by a constant at the compilation stage (currently, this is until the year 2149, but the final fully-supported year is 2148). The date value is stored without the time zone. diff --git a/docs/en/sql-reference/data-types/datetime.md b/docs/en/sql-reference/data-types/datetime.md index d95abe57510..ed07f599b91 100644 --- a/docs/en/sql-reference/data-types/datetime.md +++ b/docs/en/sql-reference/data-types/datetime.md @@ -23,7 +23,7 @@ The point in time is saved as a [Unix timestamp](https://en.wikipedia.org/wiki/U Timezone agnostic unix timestamp is stored in tables, and the timezone is used to transform it to text format or back during data import/export or to make calendar calculations on the values (example: `toDate`, `toHour` functions et cetera).
The time zone is not stored in the rows of the table (or in resultset), but is stored in the column metadata. -A list of supported time zones can be found in the [IANA Time Zone Database](https://www.iana.org/time-zones) and also can be queried by `SELECT * FROM system.time_zones`. +A list of supported time zones can be found in the [IANA Time Zone Database](https://www.iana.org/time-zones) and can also be queried by `SELECT * FROM system.time_zones`. [The list](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) is also available at Wikipedia. You can explicitly set a time zone for `DateTime`-type columns when creating a table. Example: `DateTime('UTC')`. If the time zone isn’t set, ClickHouse uses the value of the [timezone](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) parameter in the server settings or the operating system settings at the moment of the ClickHouse server start. diff --git a/docs/en/sql-reference/data-types/datetime64.md b/docs/en/sql-reference/data-types/datetime64.md index 5cba8315090..1d3725b9fb3 100644 --- a/docs/en/sql-reference/data-types/datetime64.md +++ b/docs/en/sql-reference/data-types/datetime64.md @@ -9,7 +9,7 @@ Allows to store an instant in time, that can be expressed as a calendar date and Tick size (precision): 10<sup>-precision</sup> seconds -Syntax: +**Syntax:** ``` sql DateTime64(precision, [timezone]) @@ -17,9 +17,11 @@ DateTime64(precision, [timezone]) Internally, stores data as a number of ‘ticks’ since epoch start (1970-01-01 00:00:00 UTC) as Int64. The tick resolution is determined by the precision parameter. Additionally, the `DateTime64` type can store time zone that is the same for the entire column, that affects how the values of the `DateTime64` type values are displayed in text format and how the values specified as strings are parsed (‘2020-01-01 05:00:01.000’). The time zone is not stored in the rows of the table (or in resultset), but is stored in the column metadata. See details in [DateTime](../../sql-reference/data-types/datetime.md). +The supported range is from January 1, 1925 to December 31, 2283. + ## Examples {#examples} -**1.** Creating a table with `DateTime64`-type column and inserting data into it: +1. Creating a table with `DateTime64`-type column and inserting data into it: ``` sql CREATE TABLE dt @@ -27,15 +29,15 @@ CREATE TABLE dt `timestamp` DateTime64(3, 'Europe/Moscow'), `event_id` UInt8 ) -ENGINE = TinyLog +ENGINE = TinyLog; ``` ``` sql -INSERT INTO dt Values (1546300800000, 1), ('2019-01-01 00:00:00', 2) +INSERT INTO dt Values (1546300800000, 1), ('2019-01-01 00:00:00', 2); ``` ``` sql -SELECT * FROM dt +SELECT * FROM dt; ``` ``` text @@ -45,13 +47,13 @@ SELECT * FROM dt └─────────────────────────┴──────────┘ ``` -- When inserting datetime as an integer, it is treated as an appropriately scaled Unix Timestamp (UTC). `1546300800000` (with precision 3) represents `'2019-01-01 00:00:00'` UTC. However, as `timestamp` column has `Europe/Moscow` (UTC+3) timezone specified, when outputting as a string the value will be shown as `'2019-01-01 03:00:00'` +- When inserting datetime as an integer, it is treated as an appropriately scaled Unix Timestamp (UTC). `1546300800000` (with precision 3) represents `'2019-01-01 00:00:00'` UTC. However, as `timestamp` column has `Europe/Moscow` (UTC+3) timezone specified, when outputting as a string the value will be shown as `'2019-01-01 03:00:00'`. - When inserting string value as datetime, it is treated as being in column timezone.
`'2019-01-01 00:00:00'` will be treated as being in `Europe/Moscow` timezone and stored as `1546290000000`. -**2.** Filtering on `DateTime64` values +2. Filtering on `DateTime64` values ``` sql -SELECT * FROM dt WHERE timestamp = toDateTime64('2019-01-01 00:00:00', 3, 'Europe/Moscow') +SELECT * FROM dt WHERE timestamp = toDateTime64('2019-01-01 00:00:00', 3, 'Europe/Moscow'); ``` ``` text @@ -60,12 +62,12 @@ SELECT * FROM dt WHERE timestamp = toDateTime64('2019-01-01 00:00:00', 3, 'Europ └─────────────────────────┴──────────┘ ``` -Unlike `DateTime`, `DateTime64` values are not converted from `String` automatically +Unlike `DateTime`, `DateTime64` values are not converted from `String` automatically. -**3.** Getting a time zone for a `DateTime64`-type value: +3. Getting a time zone for a `DateTime64`-type value: ``` sql -SELECT toDateTime64(now(), 3, 'Europe/Moscow') AS column, toTypeName(column) AS x +SELECT toDateTime64(now(), 3, 'Europe/Moscow') AS column, toTypeName(column) AS x; ``` ``` text @@ -74,13 +76,13 @@ SELECT toDateTime64(now(), 3, 'Europe/Moscow') AS column, toTypeName(column) AS └─────────────────────────┴────────────────────────────────┘ ``` -**4.** Timezone conversion +4. Timezone conversion ``` sql SELECT toDateTime64(timestamp, 3, 'Europe/London') as lon_time, toDateTime64(timestamp, 3, 'Europe/Moscow') as mos_time -FROM dt +FROM dt; ``` ``` text @@ -90,7 +92,7 @@ FROM dt └─────────────────────────┴─────────────────────────┘ ``` -## See Also {#see-also} +**See Also** - [Type conversion functions](../../sql-reference/functions/type-conversion-functions.md) - [Functions for working with dates and times](../../sql-reference/functions/date-time-functions.md) diff --git a/docs/en/sql-reference/data-types/simpleaggregatefunction.md b/docs/en/sql-reference/data-types/simpleaggregatefunction.md index 244779c5ca8..f3a245e9627 100644 --- a/docs/en/sql-reference/data-types/simpleaggregatefunction.md +++ b/docs/en/sql-reference/data-types/simpleaggregatefunction.md @@ -2,6 +2,8 @@ `SimpleAggregateFunction(name, types_of_arguments…)` data type stores current value of the aggregate function, and does not store its full state as [`AggregateFunction`](../../sql-reference/data-types/aggregatefunction.md) does. This optimization can be applied to functions for which the following property holds: the result of applying a function `f` to a row set `S1 UNION ALL S2` can be obtained by applying `f` to parts of the row set separately, and then again applying `f` to the results: `f(S1 UNION ALL S2) = f(f(S1) UNION ALL f(S2))`. This property guarantees that partial aggregation results are enough to compute the combined one, so we don’t have to store and process any extra data. +The common way to produce an aggregate function value is by calling the aggregate function with the [-SimpleState](../../sql-reference/aggregate-functions/combinators.md#agg-functions-combinator-simplestate) suffix. 
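As a quick, hedged illustration of the `-SimpleState` combinator mentioned above (a minimal sketch; the query and values are arbitrary examples, not part of the patched file):

``` sql
-- The -SimpleState combinator makes an aggregate function return a value of
-- SimpleAggregateFunction type instead of an opaque aggregation state.
SELECT toTypeName(maxSimpleState(number)) AS type
FROM numbers(10);

-- Assuming a server version that supports -SimpleState, the expected type is:
-- SimpleAggregateFunction(max, UInt64)
```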
+ The following aggregate functions are supported: - [`any`](../../sql-reference/aggregate-functions/reference/any.md#agg_function-any) @@ -18,8 +20,6 @@ The following aggregate functions are supported: - [`sumMap`](../../sql-reference/aggregate-functions/reference/summap.md#agg_functions-summap) - [`minMap`](../../sql-reference/aggregate-functions/reference/minmap.md#agg_functions-minmap) - [`maxMap`](../../sql-reference/aggregate-functions/reference/maxmap.md#agg_functions-maxmap) -- [`argMin`](../../sql-reference/aggregate-functions/reference/argmin.md) -- [`argMax`](../../sql-reference/aggregate-functions/reference/argmax.md) !!! note "Note" diff --git a/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-hierarchical.md b/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-hierarchical.md index a5e105d2e13..08d3b8d8ad0 100644 --- a/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-hierarchical.md +++ b/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-hierarchical.md @@ -65,4 +65,3 @@ For our example, the structure of dictionary can be the following: ``` -[Original article](https://clickhouse.tech/docs/en/query_language/dicts/external_dicts_dict_hierarchical/) diff --git a/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-layout.md b/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-layout.md index efef91b4b09..c5889a8c185 100644 --- a/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-layout.md +++ b/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-layout.md @@ -7,9 +7,9 @@ toc_title: Storing Dictionaries in Memory There are a variety of ways to store dictionaries in memory. -We recommend [flat](#flat), [hashed](#dicts-external_dicts_dict_layout-hashed) and [complex_key_hashed](#complex-key-hashed). which provide optimal processing speed. +We recommend [flat](#flat), [hashed](#dicts-external_dicts_dict_layout-hashed) and [complex_key_hashed](#complex-key-hashed), which provide optimal processing speed. -Caching is not recommended because of potentially poor performance and difficulties in selecting optimal parameters. Read more in the section “[cache](#cache)”. +Caching is not recommended because of potentially poor performance and difficulties in selecting optimal parameters. Read more in the section [cache](#cache). There are several ways to improve dictionary performance: @@ -68,9 +68,9 @@ LAYOUT(LAYOUT_TYPE(param value)) -- layout settings The dictionary is completely stored in memory in the form of flat arrays. How much memory does the dictionary use? The amount is proportional to the size of the largest key (in space used). -The dictionary key has the `UInt64` type and the value is limited to 500,000. If a larger key is discovered when creating the dictionary, ClickHouse throws an exception and does not create the dictionary. +The dictionary key has the [UInt64](../../../sql-reference/data-types/int-uint.md) type and the value is limited to `max_array_size` (by default — 500,000). If a larger key is discovered when creating the dictionary, ClickHouse throws an exception and does not create the dictionary. The initial size of the dictionary flat arrays is controlled by the `initial_array_size` setting (by default — 1024). -All types of sources are supported. When updating, data (from a file or from a table) is read in its entirety. +All types of sources are supported.
When updating, data (from a file or from a table) is read in its entirety. This method provides the best performance among all available methods of storing the dictionary. @@ -78,14 +78,17 @@ Configuration example: ``` xml <layout> - <flat /> + <flat> + <initial_array_size>50000</initial_array_size> + <max_array_size>5000000</max_array_size> + </flat> </layout> ``` or ``` sql -LAYOUT(FLAT()) +LAYOUT(FLAT(INITIAL_ARRAY_SIZE 50000 MAX_ARRAY_SIZE 5000000)) ``` ### hashed {#dicts-external_dicts_dict_layout-hashed} @@ -320,8 +323,6 @@ Similar to `cache`, but stores data on SSD and index in RAM. <write_buffer_size>1048576</write_buffer_size> <path>/var/lib/clickhouse/clickhouse_dictionaries/test_dict</path> - - <max_stored_keys>1048576</max_stored_keys> ``` @@ -329,8 +330,8 @@ Similar to `cache`, but stores data on SSD and index in RAM. or ``` sql -LAYOUT(CACHE(BLOCK_SIZE 4096 FILE_SIZE 16777216 READ_BUFFER_SIZE 1048576 - PATH /var/lib/clickhouse/clickhouse_dictionaries/test_dict MAX_STORED_KEYS 1048576)) +LAYOUT(SSD_CACHE(BLOCK_SIZE 4096 FILE_SIZE 16777216 READ_BUFFER_SIZE 1048576 + PATH /var/lib/clickhouse/clickhouse_dictionaries/test_dict)) ``` ### complex_key_ssd_cache {#complex-key-ssd-cache} @@ -445,4 +446,3 @@ Other types are not supported yet. The function returns the attribute for the pr Data must completely fit into RAM. -[Original article](https://clickhouse.tech/docs/en/query_language/dicts/external_dicts_dict_layout/) diff --git a/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-lifetime.md b/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-lifetime.md index 20486ebbcc8..081cc5b0b69 100644 --- a/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-lifetime.md +++ b/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-lifetime.md @@ -19,6 +19,8 @@ Example of settings: ``` +or + ``` sql CREATE DICTIONARY (...) ... @@ -58,7 +60,7 @@ When upgrading the dictionaries, the ClickHouse server applies different logic d - For MySQL source, the time of modification is checked using a `SHOW TABLE STATUS` query (in case of MySQL 8 you need to disable meta-information caching in MySQL by `set global information_schema_stats_expiry=0`. - Dictionaries from other sources are updated every time by default. -For other sources (ODBC, ClickHouse, etc), you can set up a query that will update the dictionaries only if they really changed, rather than each time. To do this, follow these steps: +For other sources (ODBC, PostgreSQL, ClickHouse, etc.), you can set up a query that will update the dictionaries only if they really changed, rather than each time. To do this, follow these steps: - The dictionary table must have a field that always changes when the source data is updated. - The settings of the source must specify a query that retrieves the changing field. The ClickHouse server interprets the query result as a row, and if this row has changed relative to its previous state, the dictionary is updated. Specify the query in the `<invalidate_query>` field in the settings for the [source](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md). @@ -84,4 +86,3 @@ SOURCE(ODBC(... invalidate_query 'SELECT update_time FROM dictionary_source where id = 1')) ...
``` -[Original article](https://clickhouse.tech/docs/en/query_language/dicts/external_dicts_dict_lifetime/) diff --git a/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md b/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md index 7cd26a9dffb..dc0b6e17198 100644 --- a/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md +++ b/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md @@ -65,9 +65,12 @@ Types of sources (`source_type`): - DBMS - [ODBC](#dicts-external_dicts_dict_sources-odbc) - [MySQL](#dicts-external_dicts_dict_sources-mysql) + - [PostgreSQL](#dicts-external_dicts_dict_sources-postgresql) - [ClickHouse](#dicts-external_dicts_dict_sources-clickhouse) - [MongoDB](#dicts-external_dicts_dict_sources-mongodb) - [Redis](#dicts-external_dicts_dict_sources-redis) + - [Cassandra](#dicts-external_dicts_dict_sources-cassandra) ## Local File {#dicts-external_dicts_dict_sources-local_file} @@ -659,7 +662,7 @@ Example of settings: Setting fields: - `host` – The Cassandra host or comma-separated list of hosts. -- `port` – The port on the Cassandra servers. If not specified, default port is used. +- `port` – The port on the Cassandra servers. If not specified, default port 9042 is used. - `user` – Name of the Cassandra user. - `password` – Password of the Cassandra user. - `keyspace` – Name of the keyspace (database). @@ -673,4 +676,52 @@ Default value is 1 (the first key column is a partition key and other key column - `where` – Optional selection criteria. - `max_threads` – The maximum number of threads to use for loading data from multiple partitions in composite key dictionaries. -[Original article](https://clickhouse.tech/docs/en/query_language/dicts/external_dicts_dict_sources/) +### PostgreSQL {#dicts-external_dicts_dict_sources-postgresql} + +Example of settings: + +``` xml + <source> + <postgresql> + <port>5432</port> + <user>clickhouse</user> + <password>qwerty</password> + <db>db_name</db> + <table>table_name</table>
+ <where>id=10</where> + <invalidate_query>SQL_QUERY</invalidate_query>
+ </postgresql> + </source> +``` + +or + +``` sql +SOURCE(POSTGRESQL( + port 5432 + host 'postgresql-hostname' + user 'postgres_user' + password 'postgres_password' + db 'db_name' + table 'table_name' + replica(host 'example01-1' port 5432 priority 1) + replica(host 'example01-2' port 5432 priority 2) + where 'id=10' + invalidate_query 'SQL_QUERY' +)) +``` + +Setting fields: + +- `host` – The host on the PostgreSQL server. You can specify it for all replicas, or for each one individually (inside `<replica>`). +- `port` – The port on the PostgreSQL server. You can specify it for all replicas, or for each one individually (inside `<replica>`). +- `user` – Name of the PostgreSQL user. You can specify it for all replicas, or for each one individually (inside `<replica>`). +- `password` – Password of the PostgreSQL user. You can specify it for all replicas, or for each one individually (inside `<replica>`). +- `replica` – Section of replica configurations. There can be multiple sections. + - `replica/host` – The PostgreSQL host. + - `replica/port` – The PostgreSQL port. + - `replica/priority` – The replica priority. When attempting to connect, ClickHouse traverses the replicas in order of priority. The lower the number, the higher the priority. +- `db` – Name of the database. +- `table` – Name of the table. +- `where` – The selection criteria. The syntax for conditions is the same as for the `WHERE` clause in PostgreSQL, for example, `id > 10 AND id < 20`. Optional parameter. +- `invalidate_query` – Query for checking the dictionary status. Optional parameter. Read more in the section [Updating dictionaries](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-lifetime.md). diff --git a/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-structure.md b/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-structure.md index e25b3ab78c3..f22d2a0b59e 100644 --- a/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-structure.md +++ b/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-structure.md @@ -159,15 +159,14 @@ Configuration fields: | Tag | Description | Required | |------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------| | `name` | Column name. | Yes | -| `type` | ClickHouse data type.<br/>ClickHouse tries to cast value from dictionary to the specified data type. For example, for MySQL, the field might be `TEXT`, `VARCHAR`, or `BLOB` in the MySQL source table, but it can be uploaded as `String` in ClickHouse.<br/>[Nullable](../../../sql-reference/data-types/nullable.md) is not supported. | Yes | -| `null_value` | Default value for a non-existing element.<br/>In the example, it is an empty string. You cannot use `NULL` in this field. | Yes | +| `type` | ClickHouse data type.<br/>ClickHouse tries to cast value from dictionary to the specified data type. For example, for MySQL, the field might be `TEXT`, `VARCHAR`, or `BLOB` in the MySQL source table, but it can be uploaded as `String` in ClickHouse.<br/>[Nullable](../../../sql-reference/data-types/nullable.md) is currently supported for [Flat](external-dicts-dict-layout.md#flat), [Hashed](external-dicts-dict-layout.md#dicts-external_dicts_dict_layout-hashed), [ComplexKeyHashed](external-dicts-dict-layout.md#complex-key-hashed), [Direct](external-dicts-dict-layout.md#direct), [ComplexKeyDirect](external-dicts-dict-layout.md#complex-key-direct), [RangeHashed](external-dicts-dict-layout.md#range-hashed), [Polygon](external-dicts-dict-polygon.md) dictionaries. In [Cache](external-dicts-dict-layout.md#cache), [ComplexKeyCache](external-dicts-dict-layout.md#complex-key-cache), [SSDCache](external-dicts-dict-layout.md#ssd-cache), [SSDComplexKeyCache](external-dicts-dict-layout.md#complex-key-ssd-cache), [IPTrie](external-dicts-dict-layout.md#ip-trie) dictionaries `Nullable` types are not supported. | Yes | +| `null_value` | Default value for a non-existing element.<br/>In the example, it is an empty string. [NULL](../../syntax.md#null-literal) value can be used only for the `Nullable` types (see the previous line with types description). | Yes | | `expression` | [Expression](../../../sql-reference/syntax.md#syntax-expressions) that ClickHouse executes on the value.<br/>The expression can be a column name in the remote SQL database. Thus, you can use it to create an alias for the remote column.<br/><br/>Default value: no expression. | No | | `hierarchical` | If `true`, the attribute contains the value of a parent key for the current key. See [Hierarchical Dictionaries](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-hierarchical.md).<br/><br/>Default value: `false`. | No | | `injective` | Flag that shows whether the `id -> attribute` image is [injective](https://en.wikipedia.org/wiki/Injective_function).<br/>If `true`, ClickHouse can automatically place after the `GROUP BY` clause the requests to dictionaries with injection. Usually it significantly reduces the amount of such requests.<br/><br/>Default value: `false`. | No | | `is_object_id` | Flag that shows whether the query is executed for a MongoDB document by `ObjectID`.<br/><br/>Default value: `false`. | No | -## See Also {#see-also} +**See Also** - [Functions for working with external dictionaries](../../../sql-reference/functions/ext-dict-functions.md). -[Original article](https://clickhouse.tech/docs/en/query_language/dicts/external_dicts_dict_structure/) diff --git a/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict.md b/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict.md index 17ad110aa19..e15d944130e 100644 --- a/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict.md +++ b/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict.md @@ -48,4 +48,3 @@ LIFETIME(...) -- Lifetime of dictionary in memory - [structure](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-structure.md) — Structure of the dictionary. A key and attributes that can be retrieved by this key. - [lifetime](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-lifetime.md) — Frequency of dictionary updates. -[Original article](https://clickhouse.tech/docs/en/query_language/dicts/external_dicts_dict/) diff --git a/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts.md b/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts.md index 99a62002822..8217fb8da3a 100644 --- a/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts.md +++ b/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts.md @@ -57,4 +57,3 @@ You can [configure](../../../sql-reference/dictionaries/external-dictionaries/ex - [Dictionary Key and Fields](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-structure.md) - [Functions for Working with External Dictionaries](../../../sql-reference/functions/ext-dict-functions.md) -[Original article](https://clickhouse.tech/docs/en/query_language/dicts/external_dicts/) diff --git a/docs/en/sql-reference/dictionaries/index.md b/docs/en/sql-reference/dictionaries/index.md index 420182642bb..22f4182a1c0 100644 --- a/docs/en/sql-reference/dictionaries/index.md +++ b/docs/en/sql-reference/dictionaries/index.md @@ -10,11 +10,8 @@ A dictionary is a mapping (`key -> attributes`) that is convenient for various t ClickHouse supports special functions for working with dictionaries that can be used in queries. It is easier and more efficient to use dictionaries with functions than a `JOIN` with reference tables. -[NULL](../../sql-reference/syntax.md#null-literal) values can’t be stored in a dictionary. ClickHouse supports: - [Built-in dictionaries](../../sql-reference/dictionaries/internal-dicts.md#internal_dicts) with a specific [set of functions](../../sql-reference/functions/ym-dict-functions.md). - [Plug-in (external) dictionaries](../../sql-reference/dictionaries/external-dictionaries/external-dicts.md#dicts-external-dicts) with a [set of functions](../../sql-reference/functions/ext-dict-functions.md). -[Original article](https://clickhouse.tech/docs/en/query_language/dicts/) diff --git a/docs/en/sql-reference/dictionaries/internal-dicts.md b/docs/en/sql-reference/dictionaries/internal-dicts.md index 7d657d4177f..472351a19a4 100644 --- a/docs/en/sql-reference/dictionaries/internal-dicts.md +++ b/docs/en/sql-reference/dictionaries/internal-dicts.md @@ -50,4 +50,3 @@ We recommend periodically updating the dictionaries with the geobase.
During an update, generate new files and write them to a separate location. When everything is ready, rename them to the files used by the server. There are also functions for working with OS identifiers and Yandex.Metrica search engines, but they shouldn’t be used. -[Original article](https://clickhouse.tech/docs/en/query_language/dicts/internal_dicts/) diff --git a/docs/en/sql-reference/functions/arithmetic-functions.md b/docs/en/sql-reference/functions/arithmetic-functions.md index c4b151f59ce..faa03dfc9d3 100644 --- a/docs/en/sql-reference/functions/arithmetic-functions.md +++ b/docs/en/sql-reference/functions/arithmetic-functions.md @@ -82,4 +82,3 @@ An exception is thrown when dividing by zero or when dividing a minimal negative Returns the least common multiple of the numbers. An exception is thrown when dividing by zero or when dividing a minimal negative number by minus one. -[Original article](https://clickhouse.tech/docs/en/query_language/functions/arithmetic_functions/) diff --git a/docs/en/sql-reference/functions/array-functions.md b/docs/en/sql-reference/functions/array-functions.md index c9c418d57a4..499376a70d4 100644 --- a/docs/en/sql-reference/functions/array-functions.md +++ b/docs/en/sql-reference/functions/array-functions.md @@ -245,7 +245,7 @@ Elements set to `NULL` are handled as normal values. Returns the number of elements in the arr array for which func returns something other than 0. If ‘func’ is not specified, it returns the number of non-zero elements in the array. -Note that the `arrayCount` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You can pass a lambda function to it as the first argument. +Note that the `arrayCount` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You can pass a lambda function to it as the first argument. ## countEqual(arr, x) {#countequalarr-x} @@ -376,7 +376,7 @@ arrayPopBack(array) **Example** ``` sql -SELECT arrayPopBack([1, 2, 3]) AS res +SELECT arrayPopBack([1, 2, 3]) AS res; ``` ``` text @@ -400,7 +400,7 @@ arrayPopFront(array) **Example** ``` sql -SELECT arrayPopFront([1, 2, 3]) AS res +SELECT arrayPopFront([1, 2, 3]) AS res; ``` ``` text @@ -425,7 +425,7 @@ arrayPushBack(array, single_value) **Example** ``` sql -SELECT arrayPushBack(['a'], 'b') AS res +SELECT arrayPushBack(['a'], 'b') AS res; ``` ``` text @@ -450,7 +450,7 @@ arrayPushFront(array, single_value) **Example** ``` sql -SELECT arrayPushFront(['b'], 'a') AS res +SELECT arrayPushFront(['b'], 'a') AS res; ``` ``` text @@ -482,7 +482,7 @@ An array of length `size`. **Examples of calls** ``` sql -SELECT arrayResize([1], 3) +SELECT arrayResize([1], 3); ``` ``` text @@ -492,7 +492,7 @@ SELECT arrayResize([1], 3) ``` ``` sql -SELECT arrayResize([1], 3, NULL) +SELECT arrayResize([1], 3, NULL); ``` ``` text @@ -513,12 +513,12 @@ arraySlice(array, offset[, length]) - `array` – Array of data. - `offset` – Indent from the edge of the array. A positive value indicates an offset on the left, and a negative value is an indent on the right. Numbering of the array items begins with 1. -- `length` - The length of the required slice. If you specify a negative value, the function returns an open slice `[offset, array_length - length)`. If you omit the value, the function returns the slice `[offset, the_end_of_array]`. +- `length` – The length of the required slice. If you specify a negative value, the function returns an open slice `[offset, array_length - length)`. If you omit the value, the function returns the slice `[offset, the_end_of_array]`.
**Example** ``` sql -SELECT arraySlice([1, 2, NULL, 4, 5], 2, 3) AS res +SELECT arraySlice([1, 2, NULL, 4, 5], 2, 3) AS res; ``` ``` text @@ -766,7 +766,7 @@ Type: [UInt\*](https://clickhouse.tech/docs/en/data_types/int_uint/#uint-ranges) Query: ``` sql -SELECT arrayDifference([1, 2, 3, 4]) +SELECT arrayDifference([1, 2, 3, 4]); ``` Result: @@ -782,7 +782,7 @@ Example of the overflow due to result type Int64: Query: ``` sql -SELECT arrayDifference([0, 10000000000000000000]) +SELECT arrayDifference([0, 10000000000000000000]); ``` Result: @@ -816,7 +816,7 @@ Returns an array containing the distinct elements. Query: ``` sql -SELECT arrayDistinct([1, 2, 2, 3, 1]) +SELECT arrayDistinct([1, 2, 2, 3, 1]); ``` Result: @@ -883,7 +883,7 @@ arrayReduce(agg_func, arr1, arr2, ..., arrN) Query: ``` sql -SELECT arrayReduce('max', [1, 2, 3]) +SELECT arrayReduce('max', [1, 2, 3]); ``` Result: @@ -899,7 +899,7 @@ If an aggregate function takes multiple arguments, then this function must be ap Query: ``` sql -SELECT arrayReduce('maxIf', [3, 5], [1, 0]) +SELECT arrayReduce('maxIf', [3, 5], [1, 0]); ``` Result: @@ -915,7 +915,7 @@ Example with a parametric aggregate function: Query: ``` sql -SELECT arrayReduce('uniqUpTo(3)', [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]) +SELECT arrayReduce('uniqUpTo(3)', [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]); ``` Result: @@ -1014,7 +1014,7 @@ Alias: `flatten`. **Examples** ``` sql -SELECT flatten([[[1]], [[2], [3]]]) +SELECT flatten([[[1]], [[2], [3]]]); ``` ``` text @@ -1048,7 +1048,7 @@ Type: `Array`. Query: ``` sql -SELECT arrayCompact([1, 1, nan, nan, 2, 3, 3, 3]) +SELECT arrayCompact([1, 1, nan, nan, 2, 3, 3, 3]); ``` Result: @@ -1086,7 +1086,7 @@ Type: [Array](../../sql-reference/data-types/array.md). Query: ``` sql -SELECT arrayZip(['a', 'b', 'c'], [5, 2, 1]) +SELECT arrayZip(['a', 'b', 'c'], [5, 2, 1]); ``` Result: @@ -1108,17 +1108,20 @@ arrayAUC(arr_scores, arr_labels) ``` **Arguments** + - `arr_scores` — scores that the prediction model gives. - `arr_labels` — labels of samples, usually 1 for positive sample and 0 for negative sample. **Returned value** + Returns AUC value with type Float64. **Example** + Query: ``` sql -select arrayAUC([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1]) +select arrayAUC([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1]); ``` Result: @@ -1226,7 +1229,7 @@ SELECT arrayReverseFill(x -> not isNull(x), [1, null, 3, 11, 12, null, null, 5, └────────────────────────────────────┘ ``` -Note that the `arrayReverseFilter` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You must pass a lambda function to it as the first argument, and it can’t be omitted. +Note that the `arrayReverseFill` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You must pass a lambda function to it as the first argument, and it can’t be omitted. ## arraySplit(func, arr1, …) {#array-split} @@ -1290,7 +1293,7 @@ Note that the `arrayFirstIndex` is a [higher-order function](../../sql-reference ## arrayMin {#array-min} -Returns the minimum of elements in the source array. +Returns the minimum of elements in the source array. If the `func` function is specified, returns the minimum of elements converted by this function. @@ -1309,9 +1312,9 @@ arrayMin([func,] arr) **Returned value** -- The minimum of function values (or the array minimum). +- The minimum of function values (or the array minimum). -Type: if `func` is specified, matches `func` return value type, else matches the array elements type.
+Type: if `func` is specified, matches `func` return value type, else matches the array elements type. **Examples** @@ -1345,7 +1348,7 @@ Result: ## arrayMax {#array-max} -Returns the maximum of elements in the source array. +Returns the maximum of elements in the source array. If the `func` function is specified, returns the maximum of elements converted by this function. @@ -1364,9 +1367,9 @@ arrayMax([func,] arr) **Returned value** -- The maximum of function values (or the array maximum). +- The maximum of function values (or the array maximum). -Type: if `func` is specified, matches `func` return value type, else matches the array elements type. +Type: if `func` is specified, matches `func` return value type, else matches the array elements type. **Examples** @@ -1400,7 +1403,7 @@ Result: ## arraySum {#array-sum} -Returns the sum of elements in the source array. +Returns the sum of elements in the source array. If the `func` function is specified, returns the sum of elements converted by this function. @@ -1415,7 +1418,7 @@ arraySum([func,] arr) **Arguments** - `func` — Function. [Expression](../../sql-reference/data-types/special-data-types/expression.md). -- `arr` — Array. [Array](../../sql-reference/data-types/array.md). +- `arr` — Array. [Array](../../sql-reference/data-types/array.md). **Returned value** @@ -1455,7 +1458,7 @@ Result: ## arrayAvg {#array-avg} -Returns the average of elements in the source array. +Returns the average of elements in the source array. If the `func` function is specified, returns the average of elements converted by this function. @@ -1470,7 +1473,7 @@ arrayAvg([func,] arr) **Arguments** - `func` — Function. [Expression](../../sql-reference/data-types/special-data-types/expression.md). -- `arr` — Array. [Array](../../sql-reference/data-types/array.md). +- `arr` — Array. [Array](../../sql-reference/data-types/array.md). **Returned value** @@ -1541,4 +1544,3 @@ SELECT arrayCumSumNonNegative([1, 1, -4, 1]) AS res ``` Note that the `arrayCumSumNonNegative` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You can pass a lambda function to it as the first argument. -[Original article](https://clickhouse.tech/docs/en/query_language/functions/array_functions/) diff --git a/docs/en/sql-reference/functions/array-join.md b/docs/en/sql-reference/functions/array-join.md index f1f9a545366..f35e0d10117 100644 --- a/docs/en/sql-reference/functions/array-join.md +++ b/docs/en/sql-reference/functions/array-join.md @@ -32,4 +32,3 @@ SELECT arrayJoin([1, 2, 3] AS src) AS dst, 'Hello', src └─────┴───────────┴─────────┘ ``` -[Original article](https://clickhouse.tech/docs/en/query_language/functions/array_join/) diff --git a/docs/en/sql-reference/functions/bit-functions.md b/docs/en/sql-reference/functions/bit-functions.md index a3d0c82d8ab..e07f28c0f24 100644 --- a/docs/en/sql-reference/functions/bit-functions.md +++ b/docs/en/sql-reference/functions/bit-functions.md @@ -37,8 +37,8 @@ SELECT bitTest(number, index) **Arguments** -- `number` – integer number. -- `index` – position of bit. +- `number` – Integer number. +- `index` – Position of bit. **Returned values** @@ -53,7 +53,7 @@ For example, the number 43 in base-2 (binary) numeral system is 101011. Query: ``` sql -SELECT bitTest(43, 1) +SELECT bitTest(43, 1); ``` Result: @@ -69,7 +69,7 @@ Another example: Query: ``` sql -SELECT bitTest(43, 2) +SELECT bitTest(43, 2); ``` Result: @@ -102,8 +102,8 @@ SELECT bitTestAll(number, index1, index2, index3, index4, ...)
**Arguments** -- `number` – integer number. -- `index1`, `index2`, `index3`, `index4` – positions of bit. For example, for set of positions (`index1`, `index2`, `index3`, `index4`) is true if and only if all of its positions are true (`index1` ⋀ `index2`, ⋀ `index3` ⋀ `index4`). +- `number` – Integer number. +- `index1`, `index2`, `index3`, `index4` – Positions of bit. For example, the set of positions (`index1`, `index2`, `index3`, `index4`) is true if and only if all of its positions are true (`index1` ⋀ `index2` ⋀ `index3` ⋀ `index4`). **Returned values** @@ -118,7 +118,7 @@ For example, the number 43 in base-2 (binary) numeral system is 101011. Query: ``` sql -SELECT bitTestAll(43, 0, 1, 3, 5) +SELECT bitTestAll(43, 0, 1, 3, 5); ``` Result: @@ -134,7 +134,7 @@ Another example: Query: ``` sql -SELECT bitTestAll(43, 0, 1, 3, 5, 2) +SELECT bitTestAll(43, 0, 1, 3, 5, 2); ``` Result: @@ -167,8 +167,8 @@ SELECT bitTestAny(number, index1, index2, index3, index4, ...) **Arguments** -- `number` – integer number. -- `index1`, `index2`, `index3`, `index4` – positions of bit. +- `number` – Integer number. +- `index1`, `index2`, `index3`, `index4` – Positions of bit. **Returned values** @@ -183,7 +183,7 @@ For example, the number 43 in base-2 (binary) numeral system is 101011. Query: ``` sql -SELECT bitTestAny(43, 0, 2) +SELECT bitTestAny(43, 0, 2); ``` Result: @@ -199,7 +199,7 @@ Another example: Query: ``` sql -SELECT bitTestAny(43, 4, 2) +SELECT bitTestAny(43, 4, 2); ``` Result: @@ -239,7 +239,7 @@ Take for example the number 333. Its binary representation: 0000000101001101. Query: ``` sql -SELECT bitCount(333) +SELECT bitCount(333); ``` Result: @@ -250,4 +250,53 @@ Result: └───────────────┘ ``` -[Original article](https://clickhouse.tech/docs/en/query_language/functions/bit_functions/) +## bitHammingDistance {#bithammingdistance} + +Returns the [Hamming Distance](https://en.wikipedia.org/wiki/Hamming_distance) between the bit representations of two integer values. Can be used with [SimHash](../../sql-reference/functions/hash-functions.md#ngramsimhash) functions for detection of semi-duplicate strings. The smaller the distance, the more likely those strings are the same. + +**Syntax** + +``` sql +bitHammingDistance(int1, int2) +``` + +**Arguments** + +- `int1` — First integer value. [Int64](../../sql-reference/data-types/int-uint.md). +- `int2` — Second integer value. [Int64](../../sql-reference/data-types/int-uint.md). + +**Returned value** + +- The Hamming distance. + +Type: [UInt8](../../sql-reference/data-types/int-uint.md). + +**Examples** + +Query: + +``` sql +SELECT bitHammingDistance(111, 121); +``` + +Result: + +``` text +┌─bitHammingDistance(111, 121)─┐ +│ 3 │ +└──────────────────────────────┘ +``` + +With [SimHash](../../sql-reference/functions/hash-functions.md#ngramsimhash): + +``` sql +SELECT bitHammingDistance(ngramSimHash('cat ate rat'), ngramSimHash('rat ate cat')); +``` + +Result: + +``` text +┌─bitHammingDistance(ngramSimHash('cat ate rat'), ngramSimHash('rat ate cat'))─┐ +│ 5 │ +└──────────────────────────────────────────────────────────────────────────────┘ +``` diff --git a/docs/en/sql-reference/functions/bitmap-functions.md b/docs/en/sql-reference/functions/bitmap-functions.md index bfff70576f2..4875532605e 100644 --- a/docs/en/sql-reference/functions/bitmap-functions.md +++ b/docs/en/sql-reference/functions/bitmap-functions.md @@ -23,17 +23,17 @@ bitmapBuild(array) **Arguments** -- `array` – unsigned integer array. +- `array` – Unsigned integer array.
**Example** ``` sql -SELECT bitmapBuild([1, 2, 3, 4, 5]) AS res, toTypeName(res) +SELECT bitmapBuild([1, 2, 3, 4, 5]) AS res, toTypeName(res); ``` ``` text ┌─res─┬─toTypeName(bitmapBuild([1, 2, 3, 4, 5]))─────┐ -│  │ AggregateFunction(groupBitmap, UInt8) │ +│ │ AggregateFunction(groupBitmap, UInt8) │ └─────┴──────────────────────────────────────────────┘ ``` @@ -47,12 +47,12 @@ bitmapToArray(bitmap) **Arguments** -- `bitmap` – bitmap object. +- `bitmap` – Bitmap object. **Example** ``` sql -SELECT bitmapToArray(bitmapBuild([1, 2, 3, 4, 5])) AS res +SELECT bitmapToArray(bitmapBuild([1, 2, 3, 4, 5])) AS res; ``` ``` text @@ -72,13 +72,13 @@ bitmapSubsetInRange(bitmap, range_start, range_end) **Arguments** - `bitmap` – [Bitmap object](#bitmap_functions-bitmapbuild). -- `range_start` – range start point. Type: [UInt32](../../sql-reference/data-types/int-uint.md). -- `range_end` – range end point(excluded). Type: [UInt32](../../sql-reference/data-types/int-uint.md). +- `range_start` – Range start point. Type: [UInt32](../../sql-reference/data-types/int-uint.md). +- `range_end` – Range end point (excluded). Type: [UInt32](../../sql-reference/data-types/int-uint.md). **Example** ``` sql -SELECT bitmapToArray(bitmapSubsetInRange(bitmapBuild([0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,100,200,500]), toUInt32(30), toUInt32(200))) AS res +SELECT bitmapToArray(bitmapSubsetInRange(bitmapBuild([0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,100,200,500]), toUInt32(30), toUInt32(200))) AS res; ``` ``` text @@ -114,7 +114,7 @@ Type: `Bitmap object`. Query: ``` sql -SELECT bitmapToArray(bitmapSubsetLimit(bitmapBuild([0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,100,200,500]), toUInt32(30), toUInt32(200))) AS res +SELECT bitmapToArray(bitmapSubsetLimit(bitmapBuild([0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,100,200,500]), toUInt32(30), toUInt32(200))) AS res; ``` Result: @@ -148,7 +148,7 @@ Type: `UInt8`. **Example** ``` sql -SELECT bitmapContains(bitmapBuild([1,5,7,9]), toUInt32(9)) AS res +SELECT bitmapContains(bitmapBuild([1,5,7,9]), toUInt32(9)) AS res; ``` ``` text @@ -169,7 +169,7 @@ If you are sure that `bitmap2` contains strictly one element, consider using the **Arguments** -- `bitmap*` – bitmap object. +- `bitmap*` – Bitmap object. **Return values** @@ -179,7 +179,7 @@ If you are sure that `bitmap2` contains strictly one element, consider using the **Example** ``` sql -SELECT bitmapHasAny(bitmapBuild([1,2,3]),bitmapBuild([3,4,5])) AS res +SELECT bitmapHasAny(bitmapBuild([1,2,3]),bitmapBuild([3,4,5])) AS res; ``` ``` text @@ -199,12 +199,12 @@ bitmapHasAll(bitmap,bitmap) **Arguments** -- `bitmap` – bitmap object. +- `bitmap` – Bitmap object. **Example** ``` sql -SELECT bitmapHasAll(bitmapBuild([1,2,3]),bitmapBuild([3,4,5])) AS res +SELECT bitmapHasAll(bitmapBuild([1,2,3]),bitmapBuild([3,4,5])) AS res; ``` ``` text @@ -223,12 +223,12 @@ bitmapCardinality(bitmap) **Arguments** -- `bitmap` – bitmap object. +- `bitmap` – Bitmap object. **Example** ``` sql -SELECT bitmapCardinality(bitmapBuild([1, 2, 3, 4, 5])) AS res +SELECT bitmapCardinality(bitmapBuild([1, 2, 3, 4, 5])) AS res; ``` ``` text @@ -245,17 +245,19 @@ Return the smallest value of type UInt64 in the set, UINT32_MAX if the set is empty. **Arguments** -- `bitmap` – bitmap object. +- `bitmap` – Bitmap object.
**Example** ``` sql -SELECT bitmapMin(bitmapBuild([1, 2, 3, 4, 5])) AS res +SELECT bitmapMin(bitmapBuild([1, 2, 3, 4, 5])) AS res; ``` - ┌─res─┐ - │ 1 │ - └─────┘ +``` text + ┌─res─┐ + │ 1 │ + └─────┘ +``` ## bitmapMax {#bitmapmax} Return the greatest value of type UInt64 in the set, 0 if the set is empty. **Arguments** -- `bitmap` – bitmap object. +- `bitmap` – Bitmap object. **Example** ``` sql -SELECT bitmapMax(bitmapBuild([1, 2, 3, 4, 5])) AS res +SELECT bitmapMax(bitmapBuild([1, 2, 3, 4, 5])) AS res; ``` - ┌─res─┐ - │ 5 │ - └─────┘ +``` text + ┌─res─┐ + │ 5 │ + └─────┘ +``` ## bitmapTransform {#bitmaptransform} Transform an array of values in a bitmap to another array of values, the result will be a new bitmap. **Arguments** -- `bitmap` – bitmap object. +- `bitmap` – Bitmap object. - `from_array` – UInt32 array. For idx in range \[0, from_array.size()), if bitmap contains from_array\[idx\], then replace it with to_array\[idx\]. Note that the result depends on array ordering if there are common elements between from_array and to_array. - `to_array` – UInt32 array, its size shall be the same as from_array. **Example** ``` sql -SELECT bitmapToArray(bitmapTransform(bitmapBuild([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]), cast([5,999,2] as Array(UInt32)), cast([2,888,20] as Array(UInt32)))) AS res +SELECT bitmapToArray(bitmapTransform(bitmapBuild([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]), cast([5,999,2] as Array(UInt32)), cast([2,888,20] as Array(UInt32)))) AS res; ``` - ┌─res───────────────────┐ - │ [1,3,4,6,7,8,9,10,20] │ - └───────────────────────┘ +``` text + ┌─res───────────────────┐ + │ [1,3,4,6,7,8,9,10,20] │ + └───────────────────────┘ +``` ## bitmapAnd {#bitmapand} ``` sql bitmapAnd(bitmap,bitmap) ``` **Arguments** -- `bitmap` – bitmap object. +- `bitmap` – Bitmap object. **Example** ``` sql -SELECT bitmapToArray(bitmapAnd(bitmapBuild([1,2,3]),bitmapBuild([3,4,5]))) AS res +SELECT bitmapToArray(bitmapAnd(bitmapBuild([1,2,3]),bitmapBuild([3,4,5]))) AS res; ``` ``` text @@ -333,12 +339,12 @@ bitmapOr(bitmap,bitmap) **Arguments** -- `bitmap` – bitmap object. +- `bitmap` – Bitmap object. **Example** ``` sql -SELECT bitmapToArray(bitmapOr(bitmapBuild([1,2,3]),bitmapBuild([3,4,5]))) AS res +SELECT bitmapToArray(bitmapOr(bitmapBuild([1,2,3]),bitmapBuild([3,4,5]))) AS res; ``` ``` text @@ -357,12 +363,12 @@ bitmapXor(bitmap,bitmap) **Arguments** -- `bitmap` – bitmap object. +- `bitmap` – Bitmap object. **Example** ``` sql -SELECT bitmapToArray(bitmapXor(bitmapBuild([1,2,3]),bitmapBuild([3,4,5]))) AS res +SELECT bitmapToArray(bitmapXor(bitmapBuild([1,2,3]),bitmapBuild([3,4,5]))) AS res; ``` ``` text @@ -381,12 +387,12 @@ bitmapAndnot(bitmap,bitmap) **Arguments** -- `bitmap` – bitmap object. +- `bitmap` – Bitmap object. **Example** ``` sql -SELECT bitmapToArray(bitmapAndnot(bitmapBuild([1,2,3]),bitmapBuild([3,4,5]))) AS res +SELECT bitmapToArray(bitmapAndnot(bitmapBuild([1,2,3]),bitmapBuild([3,4,5]))) AS res; ``` ``` text @@ -405,7 +411,7 @@ bitmapAndCardinality(bitmap,bitmap) **Arguments** -- `bitmap` – bitmap object. +- `bitmap` – Bitmap object. **Example** @@ -429,7 +435,7 @@ bitmapOrCardinality(bitmap,bitmap) **Arguments** -- `bitmap` – bitmap object. +- `bitmap` – Bitmap object. **Example** @@ -453,7 +459,7 @@ bitmapXorCardinality(bitmap,bitmap) **Arguments** -- `bitmap` – bitmap object. +- `bitmap` – Bitmap object. **Example** @@ -477,7 +483,7 @@ bitmapAndnotCardinality(bitmap,bitmap) **Arguments** -- `bitmap` – bitmap object. +- `bitmap` – Bitmap object.
**Example** @@ -491,4 +497,3 @@ SELECT bitmapAndnotCardinality(bitmapBuild([1,2,3]),bitmapBuild([3,4,5])) AS res └─────┘ ``` -[Original article](https://clickhouse.tech/docs/en/query_language/functions/bitmap_functions/) diff --git a/docs/en/sql-reference/functions/comparison-functions.md b/docs/en/sql-reference/functions/comparison-functions.md index 0b6d8b6e36e..edaf0a01c73 100644 --- a/docs/en/sql-reference/functions/comparison-functions.md +++ b/docs/en/sql-reference/functions/comparison-functions.md @@ -32,4 +32,3 @@ Strings are compared by bytes. A shorter string is smaller than all strings that ## greaterOrEquals, \>= operator {#function-greaterorequals} -[Original article](https://clickhouse.tech/docs/en/query_language/functions/comparison_functions/) diff --git a/docs/en/sql-reference/functions/conditional-functions.md b/docs/en/sql-reference/functions/conditional-functions.md index 2d57cbb3bd5..a23da82a9c6 100644 --- a/docs/en/sql-reference/functions/conditional-functions.md +++ b/docs/en/sql-reference/functions/conditional-functions.md @@ -20,8 +20,8 @@ If the condition `cond` evaluates to a non-zero value, returns the result of the **Arguments** - `cond` – The condition for evaluation that can be zero or not. The type is UInt8, Nullable(UInt8) or NULL. -- `then` - The expression to return if condition is met. -- `else` - The expression to return if condition is not met. +- `then` – The expression to return if condition is met. +- `else` – The expression to return if condition is not met. **Returned values** @@ -32,7 +32,7 @@ The function executes `then` and `else` expressions and returns its result, depe Query: ``` sql -SELECT if(1, plus(2, 2), plus(2, 6)) +SELECT if(1, plus(2, 2), plus(2, 6)); ``` Result: @@ -46,7 +46,7 @@ Result: Query: ``` sql -SELECT if(0, plus(2, 2), plus(2, 6)) +SELECT if(0, plus(2, 2), plus(2, 6)); ``` Result: @@ -202,4 +202,3 @@ FROM LEFT_RIGHT └──────┴───────┴──────────────────┘ ``` -[Original article](https://clickhouse.tech/docs/en/query_language/functions/conditional_functions/) diff --git a/docs/en/sql-reference/functions/date-time-functions.md b/docs/en/sql-reference/functions/date-time-functions.md index 304371f44eb..69baf64ef55 100644 --- a/docs/en/sql-reference/functions/date-time-functions.md +++ b/docs/en/sql-reference/functions/date-time-functions.md @@ -5,7 +5,7 @@ toc_title: Dates and Times # Functions for Working with Dates and Times {#functions-for-working-with-dates-and-times} -Support for time zones +Support for time zones. All functions for working with the date and time that have a logical use for the time zone can accept a second optional time zone argument. Example: Asia/Yekaterinburg. In this case, they use the specified time zone instead of the local (default) one. @@ -23,13 +23,53 @@ SELECT └─────────────────────┴────────────┴────────────┴─────────────────────┘ ``` +## timeZone {#timezone} + +Returns the timezone of the server. + +**Syntax** + +``` sql +timeZone() +``` + +Alias: `timezone`. + +**Returned value** + +- Timezone. + +Type: [String](../../sql-reference/data-types/string.md). + ## toTimeZone {#totimezone} -Convert time or date and time to the specified time zone. The time zone is an attribute of the Date/DateTime types. The internal value (number of seconds) of the table field or of the resultset's column does not change, the column's type changes and its string representation changes accordingly. +Converts time or date and time to the specified time zone. 
The time zone is an attribute of the `Date` and `DateTime` data types. The internal value (number of seconds) of the table field or of the resultset's column does not change, the column's type changes and its string representation changes accordingly. + +**Syntax** + +``` sql +toTimeZone(value, timezone) +``` + +Alias: `toTimezone`. + +**Arguments** + +- `value` — Time or date and time. [DateTime64](../../sql-reference/data-types/datetime64.md). +- `timezone` — Timezone for the returned value. [String](../../sql-reference/data-types/string.md). + +**Returned value** + +- Date and time. + +Type: [DateTime](../../sql-reference/data-types/datetime.md). + +**Example** + +Query: ```sql -SELECT - toDateTime('2019-01-01 00:00:00', 'UTC') AS time_utc, +SELECT toDateTime('2019-01-01 00:00:00', 'UTC') AS time_utc, toTypeName(time_utc) AS type_utc, toInt32(time_utc) AS int32utc, toTimeZone(time_utc, 'Asia/Yekaterinburg') AS time_yekat, @@ -40,6 +80,7 @@ SELECT toInt32(time_samoa) AS int32samoa FORMAT Vertical; ``` +Result: ```text Row 1: @@ -57,6 +98,82 @@ int32samoa: 1546300800 `toTimeZone(time_utc, 'Asia/Yekaterinburg')` changes the `DateTime('UTC')` type to `DateTime('Asia/Yekaterinburg')`. The value (Unixtimestamp) 1546300800 stays the same, but the string representation (the result of the toString() function) changes from `time_utc: 2019-01-01 00:00:00` to `time_yekat: 2019-01-01 05:00:00`. +## timeZoneOf {#timezoneof} + +Returns the timezone name of [DateTime](../../sql-reference/data-types/datetime.md) or [DateTime64](../../sql-reference/data-types/datetime64.md) data types. + +**Syntax** + +``` sql +timeZoneOf(value) +``` + +Alias: `timezoneOf`. + +**Arguments** + +- `value` — Date and time. [DateTime](../../sql-reference/data-types/datetime.md) or [DateTime64](../../sql-reference/data-types/datetime64.md). + +**Returned value** + +- Timezone name. + +Type: [String](../../sql-reference/data-types/string.md). + +**Example** + +Query: +``` sql +SELECT timezoneOf(now()); +``` + +Result: +``` text +┌─timezoneOf(now())─┐ +│ Etc/UTC │ +└───────────────────┘ +``` + +## timeZoneOffset {#timezoneoffset} + +Returns a timezone offset in seconds from [UTC](https://en.wikipedia.org/wiki/Coordinated_Universal_Time). The function takes into account [daylight saving time](https://en.wikipedia.org/wiki/Daylight_saving_time) and historical timezone changes at the specified date and time. +[IANA timezone database](https://www.iana.org/time-zones) is used to calculate the offset. + +**Syntax** + +``` sql +timeZoneOffset(value) +``` + +Alias: `timezoneOffset`. + +**Arguments** + +- `value` — Date and time. [DateTime](../../sql-reference/data-types/datetime.md) or [DateTime64](../../sql-reference/data-types/datetime64.md). + +**Returned value** + +- Offset from UTC in seconds. + +Type: [Int32](../../sql-reference/data-types/int-uint.md). + +**Example** + +Query: + +``` sql +SELECT toDateTime('2021-04-21 10:20:30', 'America/New_York') AS Time, toTypeName(Time) AS Type, + timeZoneOffset(Time) AS Offset_in_seconds, (Offset_in_seconds / 3600) AS Offset_in_hours; +``` + +Result: + +``` text +┌────────────────Time─┬─Type─────────────────────────┬─Offset_in_seconds─┬─Offset_in_hours─┐ +│ 2021-04-21 10:20:30 │ DateTime('America/New_York') │ -14400 │ -4 │ +└─────────────────────┴──────────────────────────────┴───────────────────┴─────────────────┘ +``` + ## toYear {#toyear} Converts a date or date with time to a UInt16 number containing the year number (AD). @@ -147,6 +264,9 @@ Result: └────────────────┘ ``` +!!!
attention "Attention" + The return type of the `toStartOf*` functions described below is `Date` or `DateTime`. Though these functions can take `DateTime64` as an argument, passing them a `DateTime64` that is out of normal range (years 1970 - 2105) will give an incorrect result. + ## toStartOfYear {#tostartofyear} Rounds down a date or date with time to the first day of the year. @@ -388,13 +508,13 @@ SELECT toDate('2016-12-27') AS date, toYearWeek(date) AS yearWeek0, toYearWeek(d Truncates date and time data to the specified part of date. -**Syntax** +**Syntax** ``` sql date_trunc(unit, value[, timezone]) ``` -Alias: `dateTrunc`. +Alias: `dateTrunc`. **Arguments** @@ -457,13 +577,13 @@ Result: Adds the time interval or date interval to the provided date or date with time. -**Syntax** +**Syntax** ``` sql date_add(unit, value, date) ``` -Aliases: `dateAdd`, `DATE_ADD`. +Aliases: `dateAdd`, `DATE_ADD`. **Arguments** - `unit` — The type of interval to add. [String](../../sql-reference/data-types/string.md). Possible values: - `second` - `minute` - `hour` - `day` - `week` - `month` - `quarter` - `year` - + - `value` — Value of interval to add. [Int](../../sql-reference/data-types/int-uint.md). - `date` — The date or date with time to which `value` is added. [Date](../../sql-reference/data-types/date.md) or [DateTime](../../sql-reference/data-types/datetime.md). @@ -583,7 +703,7 @@ Aliases: `dateSub`, `DATE_SUB`. - `month` - `quarter` - `year` - + - `value` — Value of interval to subtract. [Int](../../sql-reference/data-types/int-uint.md). - `date` — The date or date with time from which `value` is subtracted. [Date](../../sql-reference/data-types/date.md) or [DateTime](../../sql-reference/data-types/datetime.md). @@ -613,16 +733,16 @@ Result: Adds the specified time value with the provided date or date time value. -**Syntax** +**Syntax** ``` sql timestamp_add(date, INTERVAL value unit) ``` -Aliases: `timeStampAdd`, `TIMESTAMP_ADD`. +Aliases: `timeStampAdd`, `TIMESTAMP_ADD`. **Arguments** - - `date` — Date or date with time. [Date](../../sql-reference/data-types/date.md) or [DateTime](../../sql-reference/data-types/datetime.md). - `value` — Value of interval to add. [Int](../../sql-reference/data-types/int-uint.md). - `unit` — The type of interval to add. [String](../../sql-reference/data-types/string.md). @@ -642,7 +762,7 @@ Aliases: `timeStampAdd`, `TIMESTAMP_ADD`. Date or date with time with the specified `value` expressed in `unit` added to `date`. Type: [Date](../../sql-reference/data-types/date.md) or [DateTime](../../sql-reference/data-types/datetime.md). - **Example** Query: @@ -663,13 +783,13 @@ Result: Subtracts the time interval from the provided date or date with time. -**Syntax** +**Syntax** ``` sql timestamp_sub(unit, value, date) ``` -Aliases: `timeStampSub`, `TIMESTAMP_SUB`. +Aliases: `timeStampSub`, `TIMESTAMP_SUB`. **Arguments** - `unit` — The type of interval to subtract. [String](../../sql-reference/data-types/string.md). Possible values: - `second` - `minute` - `hour` - `day` - `week` - `month` - `quarter` - `year` - + - `value` — Value of interval to subtract. [Int](../../sql-reference/data-types/int-uint.md). - `date` — Date or date with time. [Date](../../sql-reference/data-types/date.md) or [DateTime](../../sql-reference/data-types/datetime.md). @@ -709,12 +829,12 @@ Result: │ 2018-07-18 01:02:03 │ └──────────────────────────────────────────────────────────────┘ ``` - ## now {#now} -Returns the current date and time. +Returns the current date and time. -**Syntax** +**Syntax** ``` sql now([timezone]) ``` @@ -853,7 +973,7 @@ Using replacement fields, you can define a pattern for the resulting string. “Example” column shows formatting result for `2018-01-02 22:33:44`.
| %C | year divided by 100 and truncated to integer (00-99) | 20 |
| %d | day of the month, zero-padded (01-31) | 02 |
| %D | Short MM/DD/YY date, equivalent to %m/%d/%y | 01/02/18 |
-| %e | day of the month, space-padded ( 1-31) | 2 |
+| %e | day of the month, space-padded ( 1-31) |  2 |
| %F | short YYYY-MM-DD date, equivalent to %Y-%m-%d | 2018-01-02 |
| %G | four-digit year format for ISO week number, calculated from the week-based year [defined by the ISO 8601](https://en.wikipedia.org/wiki/ISO_8601#Week_dates) standard, normally useful only with %V | 2018 |
| %g | two-digit year format, aligned to ISO 8601, abbreviated from four-digit notation | 18 |
@@ -1069,5 +1189,3 @@ Result:
 │ 2020-01-01 │
 └────────────────────────────────────┘
 ```
-
-[Original article](https://clickhouse.tech/docs/en/query_language/functions/date_time_functions/)
diff --git a/docs/en/sql-reference/functions/encoding-functions.md b/docs/en/sql-reference/functions/encoding-functions.md
index c1013ebb0e1..6b72d3c2269 100644
--- a/docs/en/sql-reference/functions/encoding-functions.md
+++ b/docs/en/sql-reference/functions/encoding-functions.md
@@ -30,7 +30,7 @@ Type: `String`.
 Query:
 
 ``` sql
-SELECT char(104.1, 101, 108.9, 108.9, 111) AS hello
+SELECT char(104.1, 101, 108.9, 108.9, 111) AS hello;
 ```
 
 Result:
@@ -172,4 +172,3 @@ Accepts an integer. Returns a string containing the list of powers of two that t
 
 Accepts an integer. Returns an array of UInt64 numbers containing the list of powers of two that total the source number when summed. Numbers in the array are in ascending order.
 
-[Original article](https://clickhouse.tech/docs/en/query_language/functions/encoding_functions/)
diff --git a/docs/en/sql-reference/functions/ext-dict-functions.md b/docs/en/sql-reference/functions/ext-dict-functions.md
index 834fcdf8282..5fc146f603f 100644
--- a/docs/en/sql-reference/functions/ext-dict-functions.md
+++ b/docs/en/sql-reference/functions/ext-dict-functions.md
@@ -203,4 +203,3 @@ dictGet[Type]OrDefault('dict_name', 'attr_name', id_expr, default_value_expr)
 
 ClickHouse throws an exception if it cannot parse the value of the attribute or the value doesn’t match the attribute data type.
 
-[Original article](https://clickhouse.tech/docs/en/query_language/functions/ext_dict_functions/)
diff --git a/docs/en/sql-reference/functions/files.md b/docs/en/sql-reference/functions/files.md
new file mode 100644
index 00000000000..9cbf8932465
--- /dev/null
+++ b/docs/en/sql-reference/functions/files.md
@@ -0,0 +1,35 @@
+---
+toc_priority: 43
+toc_title: Files
+---
+
+# Functions for Working with Files {#functions-for-working-with-files}
+
+## file {#file}
+
+Reads a file as a String. The file content is not parsed; it is read as a single string and placed into the specified column.
+
+**Syntax**
+
+``` sql
+file(path)
+```
+
+**Arguments**
+
+- `path` — The relative path to the file from [user_files_path](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-user_files_path). The path to the file supports the following wildcards: `*`, `?`, `{abc,def}` and `{N..M}`, where `N`, `M` are numbers and `'abc'`, `'def'` are strings.
+ +**Example** + +Inserting data from files a.txt and b.txt into a table as strings: + +Query: + +``` sql +INSERT INTO table SELECT file('a.txt'), file('b.txt'); +``` + +**See Also** + +- [user_files_path](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-user_files_path) +- [file](../table-functions/file.md) diff --git a/docs/en/sql-reference/functions/functions-for-nulls.md b/docs/en/sql-reference/functions/functions-for-nulls.md index f57f0f7e27d..c06711b3cd2 100644 --- a/docs/en/sql-reference/functions/functions-for-nulls.md +++ b/docs/en/sql-reference/functions/functions-for-nulls.md @@ -38,7 +38,7 @@ Input table Query ``` sql -SELECT x FROM t_null WHERE isNull(y) +SELECT x FROM t_null WHERE isNull(y); ``` ``` text @@ -78,7 +78,7 @@ Input table Query ``` sql -SELECT x FROM t_null WHERE isNotNull(y) +SELECT x FROM t_null WHERE isNotNull(y); ``` ``` text @@ -120,7 +120,7 @@ The `mail` and `phone` fields are of type String, but the `icq` field is `UInt32 Get the first available contact method for the customer from the contact list: ``` sql -SELECT coalesce(mail, phone, CAST(icq,'Nullable(String)')) FROM aBook +SELECT coalesce(mail, phone, CAST(icq,'Nullable(String)')) FROM aBook; ``` ``` text @@ -151,7 +151,7 @@ ifNull(x,alt) **Example** ``` sql -SELECT ifNull('a', 'b') +SELECT ifNull('a', 'b'); ``` ``` text @@ -161,7 +161,7 @@ SELECT ifNull('a', 'b') ``` ``` sql -SELECT ifNull(NULL, 'b') +SELECT ifNull(NULL, 'b'); ``` ``` text @@ -190,7 +190,7 @@ nullIf(x, y) **Example** ``` sql -SELECT nullIf(1, 1) +SELECT nullIf(1, 1); ``` ``` text @@ -200,7 +200,7 @@ SELECT nullIf(1, 1) ``` ``` sql -SELECT nullIf(1, 2) +SELECT nullIf(1, 2); ``` ``` text @@ -224,14 +224,14 @@ assumeNotNull(x) **Returned values** - The original value from the non-`Nullable` type, if it is not `NULL`. -- The default value for the non-`Nullable` type if the original value was `NULL`. +- Implementation specific result if the original value was `NULL`. **Example** Consider the `t_null` table. ``` sql -SHOW CREATE TABLE t_null +SHOW CREATE TABLE t_null; ``` ``` text @@ -250,7 +250,7 @@ SHOW CREATE TABLE t_null Apply the `assumeNotNull` function to the `y` column. ``` sql -SELECT assumeNotNull(y) FROM t_null +SELECT assumeNotNull(y) FROM t_null; ``` ``` text @@ -261,7 +261,7 @@ SELECT assumeNotNull(y) FROM t_null ``` ``` sql -SELECT toTypeName(assumeNotNull(y)) FROM t_null +SELECT toTypeName(assumeNotNull(y)) FROM t_null; ``` ``` text @@ -290,7 +290,7 @@ toNullable(x) **Example** ``` sql -SELECT toTypeName(10) +SELECT toTypeName(10); ``` ``` text @@ -300,7 +300,7 @@ SELECT toTypeName(10) ``` ``` sql -SELECT toTypeName(toNullable(10)) +SELECT toTypeName(toNullable(10)); ``` ``` text @@ -309,4 +309,3 @@ SELECT toTypeName(toNullable(10)) └────────────────────────────┘ ``` -[Original article](https://clickhouse.tech/docs/en/query_language/functions/functions_for_nulls/) diff --git a/docs/en/sql-reference/functions/geo/geohash.md b/docs/en/sql-reference/functions/geo/geohash.md index c27eab0b421..cfe35746809 100644 --- a/docs/en/sql-reference/functions/geo/geohash.md +++ b/docs/en/sql-reference/functions/geo/geohash.md @@ -29,7 +29,7 @@ geohashEncode(longitude, latitude, [precision]) **Example** ``` sql -SELECT geohashEncode(-5.60302734375, 42.593994140625, 0) AS res +SELECT geohashEncode(-5.60302734375, 42.593994140625, 0) AS res; ``` ``` text @@ -53,7 +53,7 @@ Decodes any [geohash](#geohash)-encoded string into longitude and latitude. 
**Example** ``` sql -SELECT geohashDecode('ezs42') AS res +SELECT geohashDecode('ezs42') AS res; ``` ``` text @@ -98,8 +98,9 @@ Type: [Array](../../../sql-reference/data-types/array.md)([String](../../../sql- Query: ``` sql -SELECT geohashesInBox(24.48, 40.56, 24.785, 40.81, 4) AS thasos +SELECT geohashesInBox(24.48, 40.56, 24.785, 40.81, 4) AS thasos; ``` + Result: ``` text diff --git a/docs/en/sql-reference/functions/geo/h3.md b/docs/en/sql-reference/functions/geo/h3.md index 9dda947b3a7..20dc7b29902 100644 --- a/docs/en/sql-reference/functions/geo/h3.md +++ b/docs/en/sql-reference/functions/geo/h3.md @@ -40,8 +40,9 @@ Type: [UInt8](../../../sql-reference/data-types/int-uint.md). Query: ``` sql -SELECT h3IsValid(630814730351855103) as h3IsValid +SELECT h3IsValid(630814730351855103) as h3IsValid; ``` + Result: ``` text @@ -76,8 +77,9 @@ Type: [UInt8](../../../sql-reference/data-types/int-uint.md). Query: ``` sql -SELECT h3GetResolution(639821929606596015) as resolution +SELECT h3GetResolution(639821929606596015) as resolution; ``` + Result: ``` text @@ -109,8 +111,9 @@ h3EdgeAngle(resolution) Query: ``` sql -SELECT h3EdgeAngle(10) as edgeAngle +SELECT h3EdgeAngle(10) as edgeAngle; ``` + Result: ``` text @@ -142,8 +145,9 @@ h3EdgeLengthM(resolution) Query: ``` sql -SELECT h3EdgeLengthM(15) as edgeLengthM +SELECT h3EdgeLengthM(15) as edgeLengthM; ``` + Result: ``` text @@ -180,7 +184,7 @@ Type: [UInt64](../../../sql-reference/data-types/int-uint.md). Query: ``` sql -SELECT geoToH3(37.79506683, 55.71290588, 15) as h3Index +SELECT geoToH3(37.79506683, 55.71290588, 15) as h3Index; ``` Result: @@ -217,8 +221,9 @@ Type: [Array](../../../sql-reference/data-types/array.md)([UInt64](../../../sql- Query: ``` sql -SELECT arrayJoin(h3kRing(644325529233966508, 1)) AS h3index +SELECT arrayJoin(h3kRing(644325529233966508, 1)) AS h3index; ``` + Result: ``` text diff --git a/docs/en/sql-reference/functions/hash-functions.md b/docs/en/sql-reference/functions/hash-functions.md index 465ad01527f..0ea4cfd6fbe 100644 --- a/docs/en/sql-reference/functions/hash-functions.md +++ b/docs/en/sql-reference/functions/hash-functions.md @@ -7,6 +7,8 @@ toc_title: Hash Hash functions can be used for the deterministic pseudo-random shuffling of elements. +Simhash is a hash function, which returns close hash values for close (similar) arguments. + ## halfMD5 {#hash-functions-halfmd5} [Interprets](../../sql-reference/functions/type-conversion-functions.md#type_conversion_functions-reinterpretAsString) all the input parameters as strings and calculates the [MD5](https://en.wikipedia.org/wiki/MD5) hash value for each of them. Then combines hashes, takes the first 8 bytes of the hash of the resulting string, and interprets them as `UInt64` in big-endian byte order. @@ -29,7 +31,7 @@ A [UInt64](../../sql-reference/data-types/int-uint.md) data type hash value. **Example** ``` sql -SELECT halfMD5(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:00')) AS halfMD5hash, toTypeName(halfMD5hash) AS type +SELECT halfMD5(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:00')) AS halfMD5hash, toTypeName(halfMD5hash) AS type; ``` ``` text @@ -72,7 +74,7 @@ A [UInt64](../../sql-reference/data-types/int-uint.md) data type hash value. 
**Example** ``` sql -SELECT sipHash64(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:00')) AS SipHash, toTypeName(SipHash) AS type +SELECT sipHash64(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:00')) AS SipHash, toTypeName(SipHash) AS type; ``` ``` text @@ -110,7 +112,7 @@ A [UInt64](../../sql-reference/data-types/int-uint.md) data type hash value. Call example: ``` sql -SELECT cityHash64(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:00')) AS CityHash, toTypeName(CityHash) AS type +SELECT cityHash64(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:00')) AS CityHash, toTypeName(CityHash) AS type; ``` ``` text @@ -177,7 +179,7 @@ A [UInt64](../../sql-reference/data-types/int-uint.md) data type hash value. **Example** ``` sql -SELECT farmHash64(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:00')) AS FarmHash, toTypeName(FarmHash) AS type +SELECT farmHash64(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:00')) AS FarmHash, toTypeName(FarmHash) AS type; ``` ``` text @@ -193,7 +195,7 @@ Calculates [JavaHash](http://hg.openjdk.java.net/jdk8u/jdk8u/jdk/file/478a4add97 **Syntax** ``` sql -SELECT javaHash(''); +SELECT javaHash('') ``` **Returned value** @@ -241,7 +243,7 @@ Correct query with UTF-16LE encoded string. Query: ``` sql -SELECT javaHashUTF16LE(convertCharset('test', 'utf-8', 'utf-16le')) +SELECT javaHashUTF16LE(convertCharset('test', 'utf-8', 'utf-16le')); ``` Result: @@ -257,7 +259,7 @@ Result: Calculates `HiveHash` from a string. ``` sql -SELECT hiveHash(''); +SELECT hiveHash('') ``` This is just [JavaHash](#hash_functions-javahash) with zeroed out sign bit. This function is used in [Apache Hive](https://en.wikipedia.org/wiki/Apache_Hive) for versions before 3.0. This hash function is neither fast nor having a good quality. The only reason to use it is when this algorithm is already used in another system and you have to calculate exactly the same result. @@ -303,7 +305,7 @@ A [UInt64](../../sql-reference/data-types/int-uint.md) data type hash value. **Example** ``` sql -SELECT metroHash64(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:00')) AS MetroHash, toTypeName(MetroHash) AS type +SELECT metroHash64(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:00')) AS MetroHash, toTypeName(MetroHash) AS type; ``` ``` text @@ -339,7 +341,7 @@ Both functions take a variable number of input parameters. Arguments can be any **Example** ``` sql -SELECT murmurHash2_64(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:00')) AS MurmurHash2, toTypeName(MurmurHash2) AS type +SELECT murmurHash2_64(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:00')) AS MurmurHash2, toTypeName(MurmurHash2) AS type; ``` ``` text @@ -355,7 +357,7 @@ Calculates a 64-bit [MurmurHash2](https://github.com/aappleby/smhasher) hash val **Syntax** ``` sql -gccMurmurHash(par1, ...); +gccMurmurHash(par1, ...) ``` **Arguments** @@ -407,7 +409,7 @@ Both functions take a variable number of input parameters. 
Arguments can be any
**Example**

``` sql
-SELECT murmurHash3_32(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:00')) AS MurmurHash3, toTypeName(MurmurHash3) AS type
+SELECT murmurHash3_32(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:00')) AS MurmurHash3, toTypeName(MurmurHash3) AS type;
```

``` text
@@ -435,13 +437,13 @@ A [FixedString(16)](../../sql-reference/data-types/fixedstring.md) data type has
 
 **Example**
 
 ``` sql
-SELECT murmurHash3_128('example_string') AS MurmurHash3, toTypeName(MurmurHash3) AS type
+SELECT hex(murmurHash3_128('example_string')) AS MurmurHash3, toTypeName(MurmurHash3) AS type;
 ```
 
 ``` text
-┌─MurmurHash3──────┬─type────────────┐
-│ 6�1�4"S5KT�~~q │ FixedString(16) │
-└──────────────────┴─────────────────┘
+┌─MurmurHash3──────────────────────┬─type───┐
+│ 368A1A311CB7342253354B548E7E7E71 │ String │
+└──────────────────────────────────┴────────┘
 ```
 
## xxHash32, xxHash64 {#hash-functions-xxhash32}

Calculates `xxHash` from a string. It is proposed in two flavors, 32 and 64 bits.

``` sql
-SELECT xxHash32('');
+SELECT xxHash32('')

OR

-SELECT xxHash64('');
+SELECT xxHash64('')
```

**Returned value**
@@ -482,4 +484,938 @@ Result:

- [xxHash](http://cyan4973.github.io/xxHash/).

-[Original article](https://clickhouse.tech/docs/en/query_language/functions/hash_functions/)
+## ngramSimHash {#ngramsimhash}
+
+Splits an ASCII string into n-grams of `ngramsize` symbols and returns the n-gram `simhash`. Is case sensitive.
+
+Can be used for detection of semi-duplicate strings with [bitHammingDistance](../../sql-reference/functions/bit-functions.md#bithammingdistance). The smaller is the [Hamming Distance](https://en.wikipedia.org/wiki/Hamming_distance) of the calculated `simhashes` of two strings, the more likely these strings are the same.
+
+**Syntax**
+
+``` sql
+ngramSimHash(string[, ngramsize])
+```
+
+**Arguments**
+
+- `string` — String. [String](../../sql-reference/data-types/string.md).
+- `ngramsize` — The size of an n-gram. Optional. Possible values: any number from `1` to `25`. Default value: `3`. [UInt8](../../sql-reference/data-types/int-uint.md).
+
+**Returned value**
+
+- Hash value.
+
+Type: [UInt64](../../sql-reference/data-types/int-uint.md).
+
+**Example**
+
+Query:
+
+``` sql
+SELECT ngramSimHash('ClickHouse') AS Hash;
+```
+
+Result:
+
+``` text
+┌───────Hash─┐
+│ 1627567969 │
+└────────────┘
+```
+
+## ngramSimHashCaseInsensitive {#ngramsimhashcaseinsensitive}
+
+Splits an ASCII string into n-grams of `ngramsize` symbols and returns the n-gram `simhash`. Is case insensitive.
+
+Can be used for detection of semi-duplicate strings with [bitHammingDistance](../../sql-reference/functions/bit-functions.md#bithammingdistance). The smaller is the [Hamming Distance](https://en.wikipedia.org/wiki/Hamming_distance) of the calculated `simhashes` of two strings, the more likely these strings are the same.
+
+**Syntax**
+
+``` sql
+ngramSimHashCaseInsensitive(string[, ngramsize])
+```
+
+**Arguments**
+
+- `string` — String. [String](../../sql-reference/data-types/string.md).
+- `ngramsize` — The size of an n-gram. Optional. Possible values: any number from `1` to `25`. Default value: `3`. [UInt8](../../sql-reference/data-types/int-uint.md).
+
+**Returned value**
+
+- Hash value.
+
+Type: [UInt64](../../sql-reference/data-types/int-uint.md).
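+
+As a minimal sketch of the comparison described above (the input strings below are arbitrary illustrations, not part of the reference output), the simhashes can be fed directly to [bitHammingDistance](../../sql-reference/functions/bit-functions.md#bithammingdistance):
+
+``` sql
+-- hypothetical near-duplicate inputs; a small distance suggests semi-duplicates
+SELECT bitHammingDistance(ngramSimHashCaseInsensitive('ClickHouse'), ngramSimHashCaseInsensitive('CLICKHOUSE!')) AS HammingDistance;
+```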
+ +**Example** + +Query: + +``` sql +SELECT ngramSimHashCaseInsensitive('ClickHouse') AS Hash; +``` + +Result: + +``` text +┌──────Hash─┐ +│ 562180645 │ +└───────────┘ +``` + +## ngramSimHashUTF8 {#ngramsimhashutf8} + +Splits a UTF-8 string into n-grams of `ngramsize` symbols and returns the n-gram `simhash`. Is case sensitive. + +Can be used for detection of semi-duplicate strings with [bitHammingDistance](../../sql-reference/functions/bit-functions.md#bithammingdistance). The smaller is the [Hamming Distance](https://en.wikipedia.org/wiki/Hamming_distance) of the calculated `simhashes` of two strings, the more likely these strings are the same. + +**Syntax** + +``` sql +ngramSimHashUTF8(string[, ngramsize]) +``` + +**Arguments** + +- `string` — String. [String](../../sql-reference/data-types/string.md). +- `ngramsize` — The size of an n-gram. Optional. Possible values: any number from `1` to `25`. Default value: `3`. [UInt8](../../sql-reference/data-types/int-uint.md). + +**Returned value** + +- Hash value. + +Type: [UInt64](../../sql-reference/data-types/int-uint.md). + +**Example** + +Query: + +``` sql +SELECT ngramSimHashUTF8('ClickHouse') AS Hash; +``` + +Result: + +``` text +┌───────Hash─┐ +│ 1628157797 │ +└────────────┘ +``` + +## ngramSimHashCaseInsensitiveUTF8 {#ngramsimhashcaseinsensitiveutf8} + +Splits a UTF-8 string into n-grams of `ngramsize` symbols and returns the n-gram `simhash`. Is case insensitive. + +Can be used for detection of semi-duplicate strings with [bitHammingDistance](../../sql-reference/functions/bit-functions.md#bithammingdistance). The smaller is the [Hamming Distance](https://en.wikipedia.org/wiki/Hamming_distance) of the calculated `simhashes` of two strings, the more likely these strings are the same. + +**Syntax** + +``` sql +ngramSimHashCaseInsensitiveUTF8(string[, ngramsize]) +``` + +**Arguments** + +- `string` — String. [String](../../sql-reference/data-types/string.md). +- `ngramsize` — The size of an n-gram. Optional. Possible values: any number from `1` to `25`. Default value: `3`. [UInt8](../../sql-reference/data-types/int-uint.md). + +**Returned value** + +- Hash value. + +Type: [UInt64](../../sql-reference/data-types/int-uint.md). + +**Example** + +Query: + +``` sql +SELECT ngramSimHashCaseInsensitiveUTF8('ClickHouse') AS Hash; +``` + +Result: + +``` text +┌───────Hash─┐ +│ 1636742693 │ +└────────────┘ +``` + +## wordShingleSimHash {#wordshinglesimhash} + +Splits a ASCII string into parts (shingles) of `shinglesize` words and returns the word shingle `simhash`. Is case sensitive. + +Can be used for detection of semi-duplicate strings with [bitHammingDistance](../../sql-reference/functions/bit-functions.md#bithammingdistance). The smaller is the [Hamming Distance](https://en.wikipedia.org/wiki/Hamming_distance) of the calculated `simhashes` of two strings, the more likely these strings are the same. + +**Syntax** + +``` sql +wordShingleSimHash(string[, shinglesize]) +``` + +**Arguments** + +- `string` — String. [String](../../sql-reference/data-types/string.md). +- `shinglesize` — The size of a word shingle. Optional. Possible values: any number from `1` to `25`. Default value: `3`. [UInt8](../../sql-reference/data-types/int-uint.md). + +**Returned value** + +- Hash value. + +Type: [UInt64](../../sql-reference/data-types/int-uint.md). 
+
+**Example**
+
+Query:
+
+``` sql
+SELECT wordShingleSimHash('ClickHouse® is a column-oriented database management system (DBMS) for online analytical processing of queries (OLAP).') AS Hash;
+```
+
+Result:
+
+``` text
+┌───────Hash─┐
+│ 2328277067 │
+└────────────┘
+```
+
+## wordShingleSimHashCaseInsensitive {#wordshinglesimhashcaseinsensitive}
+
+Splits an ASCII string into parts (shingles) of `shinglesize` words and returns the word shingle `simhash`. Is case insensitive.
+
+Can be used for detection of semi-duplicate strings with [bitHammingDistance](../../sql-reference/functions/bit-functions.md#bithammingdistance). The smaller is the [Hamming Distance](https://en.wikipedia.org/wiki/Hamming_distance) of the calculated `simhashes` of two strings, the more likely these strings are the same.
+
+**Syntax**
+
+``` sql
+wordShingleSimHashCaseInsensitive(string[, shinglesize])
+```
+
+**Arguments**
+
+- `string` — String. [String](../../sql-reference/data-types/string.md).
+- `shinglesize` — The size of a word shingle. Optional. Possible values: any number from `1` to `25`. Default value: `3`. [UInt8](../../sql-reference/data-types/int-uint.md).
+
+**Returned value**
+
+- Hash value.
+
+Type: [UInt64](../../sql-reference/data-types/int-uint.md).
+
+**Example**
+
+Query:
+
+``` sql
+SELECT wordShingleSimHashCaseInsensitive('ClickHouse® is a column-oriented database management system (DBMS) for online analytical processing of queries (OLAP).') AS Hash;
+```
+
+Result:
+
+``` text
+┌───────Hash─┐
+│ 2194812424 │
+└────────────┘
+```
+
+## wordShingleSimHashUTF8 {#wordshinglesimhashutf8}
+
+Splits a UTF-8 string into parts (shingles) of `shinglesize` words and returns the word shingle `simhash`. Is case sensitive.
+
+Can be used for detection of semi-duplicate strings with [bitHammingDistance](../../sql-reference/functions/bit-functions.md#bithammingdistance). The smaller is the [Hamming Distance](https://en.wikipedia.org/wiki/Hamming_distance) of the calculated `simhashes` of two strings, the more likely these strings are the same.
+
+**Syntax**
+
+``` sql
+wordShingleSimHashUTF8(string[, shinglesize])
+```
+
+**Arguments**
+
+- `string` — String. [String](../../sql-reference/data-types/string.md).
+- `shinglesize` — The size of a word shingle. Optional. Possible values: any number from `1` to `25`. Default value: `3`. [UInt8](../../sql-reference/data-types/int-uint.md).
+
+**Returned value**
+
+- Hash value.
+
+Type: [UInt64](../../sql-reference/data-types/int-uint.md).
+
+**Example**
+
+Query:
+
+``` sql
+SELECT wordShingleSimHashUTF8('ClickHouse® is a column-oriented database management system (DBMS) for online analytical processing of queries (OLAP).') AS Hash;
+```
+
+Result:
+
+``` text
+┌───────Hash─┐
+│ 2328277067 │
+└────────────┘
+```
+
+## wordShingleSimHashCaseInsensitiveUTF8 {#wordshinglesimhashcaseinsensitiveutf8}
+
+Splits a UTF-8 string into parts (shingles) of `shinglesize` words and returns the word shingle `simhash`. Is case insensitive.
+
+Can be used for detection of semi-duplicate strings with [bitHammingDistance](../../sql-reference/functions/bit-functions.md#bithammingdistance). The smaller is the [Hamming Distance](https://en.wikipedia.org/wiki/Hamming_distance) of the calculated `simhashes` of two strings, the more likely these strings are the same.
+
+**Syntax**
+
+``` sql
+wordShingleSimHashCaseInsensitiveUTF8(string[, shinglesize])
+```
+
+**Arguments**
+
+- `string` — String. [String](../../sql-reference/data-types/string.md).
+- `shinglesize` — The size of a word shingle. Optional. Possible values: any number from `1` to `25`. Default value: `3`. [UInt8](../../sql-reference/data-types/int-uint.md). + +**Returned value** + +- Hash value. + +Type: [UInt64](../../sql-reference/data-types/int-uint.md). + +**Example** + +Query: + +``` sql +SELECT wordShingleSimHashCaseInsensitiveUTF8('ClickHouse® is a column-oriented database management system (DBMS) for online analytical processing of queries (OLAP).') AS Hash; +``` + +Result: + +``` text +┌───────Hash─┐ +│ 2194812424 │ +└────────────┘ +``` + +## ngramMinHash {#ngramminhash} + +Splits a ASCII string into n-grams of `ngramsize` symbols and calculates hash values for each n-gram. Uses `hashnum` minimum hashes to calculate the minimum hash and `hashnum` maximum hashes to calculate the maximum hash. Returns a tuple with these hashes. Is case sensitive. + +Can be used for detection of semi-duplicate strings with [tupleHammingDistance](../../sql-reference/functions/tuple-functions.md#tuplehammingdistance). For two strings: if one of the returned hashes is the same for both strings, we think that those strings are the same. + +**Syntax** + +``` sql +ngramMinHash(string[, ngramsize, hashnum]) +``` + +**Arguments** + +- `string` — String. [String](../../sql-reference/data-types/string.md). +- `ngramsize` — The size of an n-gram. Optional. Possible values: any number from `1` to `25`. Default value: `3`. [UInt8](../../sql-reference/data-types/int-uint.md). +- `hashnum` — The number of minimum and maximum hashes used to calculate the result. Optional. Possible values: any number from `1` to `25`. Default value: `6`. [UInt8](../../sql-reference/data-types/int-uint.md). + +**Returned value** + +- Tuple with two hashes — the minimum and the maximum. + +Type: [Tuple](../../sql-reference/data-types/tuple.md)([UInt64](../../sql-reference/data-types/int-uint.md), [UInt64](../../sql-reference/data-types/int-uint.md)). + +**Example** + +Query: + +``` sql +SELECT ngramMinHash('ClickHouse') AS Tuple; +``` + +Result: + +``` text +┌─Tuple──────────────────────────────────────┐ +│ (18333312859352735453,9054248444481805918) │ +└────────────────────────────────────────────┘ +``` + +## ngramMinHashCaseInsensitive {#ngramminhashcaseinsensitive} + +Splits a ASCII string into n-grams of `ngramsize` symbols and calculates hash values for each n-gram. Uses `hashnum` minimum hashes to calculate the minimum hash and `hashnum` maximum hashes to calculate the maximum hash. Returns a tuple with these hashes. Is case insensitive. + +Can be used for detection of semi-duplicate strings with [tupleHammingDistance](../../sql-reference/functions/tuple-functions.md#tuplehammingdistance). For two strings: if one of the returned hashes is the same for both strings, we think that those strings are the same. + +**Syntax** + +``` sql +ngramMinHashCaseInsensitive(string[, ngramsize, hashnum]) +``` + +**Arguments** + +- `string` — String. [String](../../sql-reference/data-types/string.md). +- `ngramsize` — The size of an n-gram. Optional. Possible values: any number from `1` to `25`. Default value: `3`. [UInt8](../../sql-reference/data-types/int-uint.md). +- `hashnum` — The number of minimum and maximum hashes used to calculate the result. Optional. Possible values: any number from `1` to `25`. Default value: `6`. [UInt8](../../sql-reference/data-types/int-uint.md). + +**Returned value** + +- Tuple with two hashes — the minimum and the maximum. 
+ +Type: [Tuple](../../sql-reference/data-types/tuple.md)([UInt64](../../sql-reference/data-types/int-uint.md), [UInt64](../../sql-reference/data-types/int-uint.md)). + +**Example** + +Query: + +``` sql +SELECT ngramMinHashCaseInsensitive('ClickHouse') AS Tuple; +``` + +Result: + +``` text +┌─Tuple──────────────────────────────────────┐ +│ (2106263556442004574,13203602793651726206) │ +└────────────────────────────────────────────┘ +``` + +## ngramMinHashUTF8 {#ngramminhashutf8} + +Splits a UTF-8 string into n-grams of `ngramsize` symbols and calculates hash values for each n-gram. Uses `hashnum` minimum hashes to calculate the minimum hash and `hashnum` maximum hashes to calculate the maximum hash. Returns a tuple with these hashes. Is case sensitive. + +Can be used for detection of semi-duplicate strings with [tupleHammingDistance](../../sql-reference/functions/tuple-functions.md#tuplehammingdistance). For two strings: if one of the returned hashes is the same for both strings, we think that those strings are the same. + +**Syntax** + +``` sql +ngramMinHashUTF8(string[, ngramsize, hashnum]) +``` + +**Arguments** + +- `string` — String. [String](../../sql-reference/data-types/string.md). +- `ngramsize` — The size of an n-gram. Optional. Possible values: any number from `1` to `25`. Default value: `3`. [UInt8](../../sql-reference/data-types/int-uint.md). +- `hashnum` — The number of minimum and maximum hashes used to calculate the result. Optional. Possible values: any number from `1` to `25`. Default value: `6`. [UInt8](../../sql-reference/data-types/int-uint.md). + +**Returned value** + +- Tuple with two hashes — the minimum and the maximum. + +Type: [Tuple](../../sql-reference/data-types/tuple.md)([UInt64](../../sql-reference/data-types/int-uint.md), [UInt64](../../sql-reference/data-types/int-uint.md)). + +**Example** + +Query: + +``` sql +SELECT ngramMinHashUTF8('ClickHouse') AS Tuple; +``` + +Result: + +``` text +┌─Tuple──────────────────────────────────────┐ +│ (18333312859352735453,6742163577938632877) │ +└────────────────────────────────────────────┘ +``` + +## ngramMinHashCaseInsensitiveUTF8 {#ngramminhashcaseinsensitiveutf8} + +Splits a UTF-8 string into n-grams of `ngramsize` symbols and calculates hash values for each n-gram. Uses `hashnum` minimum hashes to calculate the minimum hash and `hashnum` maximum hashes to calculate the maximum hash. Returns a tuple with these hashes. Is case insensitive. + +Can be used for detection of semi-duplicate strings with [tupleHammingDistance](../../sql-reference/functions/tuple-functions.md#tuplehammingdistance). For two strings: if one of the returned hashes is the same for both strings, we think that those strings are the same. + +**Syntax** + +``` sql +ngramMinHashCaseInsensitiveUTF8(string [, ngramsize, hashnum]) +``` + +**Arguments** + +- `string` — String. [String](../../sql-reference/data-types/string.md). +- `ngramsize` — The size of an n-gram. Optional. Possible values: any number from `1` to `25`. Default value: `3`. [UInt8](../../sql-reference/data-types/int-uint.md). +- `hashnum` — The number of minimum and maximum hashes used to calculate the result. Optional. Possible values: any number from `1` to `25`. Default value: `6`. [UInt8](../../sql-reference/data-types/int-uint.md). + +**Returned value** + +- Tuple with two hashes — the minimum and the maximum. + +Type: [Tuple](../../sql-reference/data-types/tuple.md)([UInt64](../../sql-reference/data-types/int-uint.md), [UInt64](../../sql-reference/data-types/int-uint.md)). 
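+
+As a minimal sketch of the comparison described above (the input strings below are arbitrary illustrations), the returned tuples can be passed to [tupleHammingDistance](../../sql-reference/functions/tuple-functions.md#tuplehammingdistance):
+
+``` sql
+-- arbitrary example strings; a result of 0 or 1 means at least one of the two hashes matched
+SELECT tupleHammingDistance(ngramMinHashCaseInsensitiveUTF8('ClickHouse'), ngramMinHashCaseInsensitiveUTF8('CLICKHOUSE!')) AS HammingDistance;
+```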
+ +**Example** + +Query: + +``` sql +SELECT ngramMinHashCaseInsensitiveUTF8('ClickHouse') AS Tuple; +``` + +Result: + +``` text +┌─Tuple───────────────────────────────────────┐ +│ (12493625717655877135,13203602793651726206) │ +└─────────────────────────────────────────────┘ +``` + +## ngramMinHashArg {#ngramminhasharg} + +Splits a ASCII string into n-grams of `ngramsize` symbols and returns the n-grams with minimum and maximum hashes, calculated by the [ngramMinHash](#ngramminhash) function with the same input. Is case sensitive. + +**Syntax** + +``` sql +ngramMinHashArg(string[, ngramsize, hashnum]) +``` + +**Arguments** + +- `string` — String. [String](../../sql-reference/data-types/string.md). +- `ngramsize` — The size of an n-gram. Optional. Possible values: any number from `1` to `25`. Default value: `3`. [UInt8](../../sql-reference/data-types/int-uint.md). +- `hashnum` — The number of minimum and maximum hashes used to calculate the result. Optional. Possible values: any number from `1` to `25`. Default value: `6`. [UInt8](../../sql-reference/data-types/int-uint.md). + +**Returned value** + +- Tuple with two tuples with `hashnum` n-grams each. + +Type: [Tuple](../../sql-reference/data-types/tuple.md)([Tuple](../../sql-reference/data-types/tuple.md)([String](../../sql-reference/data-types/string.md)), [Tuple](../../sql-reference/data-types/tuple.md)([String](../../sql-reference/data-types/string.md))). + +**Example** + +Query: + +``` sql +SELECT ngramMinHashArg('ClickHouse') AS Tuple; +``` + +Result: + +``` text +┌─Tuple─────────────────────────────────────────────────────────────────────────┐ +│ (('ous','ick','lic','Hou','kHo','use'),('Hou','lic','ick','ous','ckH','Cli')) │ +└───────────────────────────────────────────────────────────────────────────────┘ +``` + +## ngramMinHashArgCaseInsensitive {#ngramminhashargcaseinsensitive} + +Splits a ASCII string into n-grams of `ngramsize` symbols and returns the n-grams with minimum and maximum hashes, calculated by the [ngramMinHashCaseInsensitive](#ngramminhashcaseinsensitive) function with the same input. Is case insensitive. + +**Syntax** + +``` sql +ngramMinHashArgCaseInsensitive(string[, ngramsize, hashnum]) +``` + +**Arguments** + +- `string` — String. [String](../../sql-reference/data-types/string.md). +- `ngramsize` — The size of an n-gram. Optional. Possible values: any number from `1` to `25`. Default value: `3`. [UInt8](../../sql-reference/data-types/int-uint.md). +- `hashnum` — The number of minimum and maximum hashes used to calculate the result. Optional. Possible values: any number from `1` to `25`. Default value: `6`. [UInt8](../../sql-reference/data-types/int-uint.md). + +**Returned value** + +- Tuple with two tuples with `hashnum` n-grams each. + +Type: [Tuple](../../sql-reference/data-types/tuple.md)([Tuple](../../sql-reference/data-types/tuple.md)([String](../../sql-reference/data-types/string.md)), [Tuple](../../sql-reference/data-types/tuple.md)([String](../../sql-reference/data-types/string.md))). 
+ +**Example** + +Query: + +``` sql +SELECT ngramMinHashArgCaseInsensitive('ClickHouse') AS Tuple; +``` + +Result: + +``` text +┌─Tuple─────────────────────────────────────────────────────────────────────────┐ +│ (('ous','ick','lic','kHo','use','Cli'),('kHo','lic','ick','ous','ckH','Hou')) │ +└───────────────────────────────────────────────────────────────────────────────┘ +``` + +## ngramMinHashArgUTF8 {#ngramminhashargutf8} + +Splits a UTF-8 string into n-grams of `ngramsize` symbols and returns the n-grams with minimum and maximum hashes, calculated by the [ngramMinHashUTF8](#ngramminhashutf8) function with the same input. Is case sensitive. + +**Syntax** + +``` sql +ngramMinHashArgUTF8(string[, ngramsize, hashnum]) +``` + +**Arguments** + +- `string` — String. [String](../../sql-reference/data-types/string.md). +- `ngramsize` — The size of an n-gram. Optional. Possible values: any number from `1` to `25`. Default value: `3`. [UInt8](../../sql-reference/data-types/int-uint.md). +- `hashnum` — The number of minimum and maximum hashes used to calculate the result. Optional. Possible values: any number from `1` to `25`. Default value: `6`. [UInt8](../../sql-reference/data-types/int-uint.md). + +**Returned value** + +- Tuple with two tuples with `hashnum` n-grams each. + +Type: [Tuple](../../sql-reference/data-types/tuple.md)([Tuple](../../sql-reference/data-types/tuple.md)([String](../../sql-reference/data-types/string.md)), [Tuple](../../sql-reference/data-types/tuple.md)([String](../../sql-reference/data-types/string.md))). + +**Example** + +Query: + +``` sql +SELECT ngramMinHashArgUTF8('ClickHouse') AS Tuple; +``` + +Result: + +``` text +┌─Tuple─────────────────────────────────────────────────────────────────────────┐ +│ (('ous','ick','lic','Hou','kHo','use'),('kHo','Hou','lic','ick','ous','ckH')) │ +└───────────────────────────────────────────────────────────────────────────────┘ +``` + +## ngramMinHashArgCaseInsensitiveUTF8 {#ngramminhashargcaseinsensitiveutf8} + +Splits a UTF-8 string into n-grams of `ngramsize` symbols and returns the n-grams with minimum and maximum hashes, calculated by the [ngramMinHashCaseInsensitiveUTF8](#ngramminhashcaseinsensitiveutf8) function with the same input. Is case insensitive. + +**Syntax** + +``` sql +ngramMinHashArgCaseInsensitiveUTF8(string[, ngramsize, hashnum]) +``` + +**Arguments** + +- `string` — String. [String](../../sql-reference/data-types/string.md). +- `ngramsize` — The size of an n-gram. Optional. Possible values: any number from `1` to `25`. Default value: `3`. [UInt8](../../sql-reference/data-types/int-uint.md). +- `hashnum` — The number of minimum and maximum hashes used to calculate the result. Optional. Possible values: any number from `1` to `25`. Default value: `6`. [UInt8](../../sql-reference/data-types/int-uint.md). + +**Returned value** + +- Tuple with two tuples with `hashnum` n-grams each. + +Type: [Tuple](../../sql-reference/data-types/tuple.md)([Tuple](../../sql-reference/data-types/tuple.md)([String](../../sql-reference/data-types/string.md)), [Tuple](../../sql-reference/data-types/tuple.md)([String](../../sql-reference/data-types/string.md))). 
+ +**Example** + +Query: + +``` sql +SELECT ngramMinHashArgCaseInsensitiveUTF8('ClickHouse') AS Tuple; +``` + +Result: + +``` text +┌─Tuple─────────────────────────────────────────────────────────────────────────┐ +│ (('ckH','ous','ick','lic','kHo','use'),('kHo','lic','ick','ous','ckH','Hou')) │ +└───────────────────────────────────────────────────────────────────────────────┘ +``` + +## wordShingleMinHash {#wordshingleminhash} + +Splits a ASCII string into parts (shingles) of `shinglesize` words and calculates hash values for each word shingle. Uses `hashnum` minimum hashes to calculate the minimum hash and `hashnum` maximum hashes to calculate the maximum hash. Returns a tuple with these hashes. Is case sensitive. + +Can be used for detection of semi-duplicate strings with [tupleHammingDistance](../../sql-reference/functions/tuple-functions.md#tuplehammingdistance). For two strings: if one of the returned hashes is the same for both strings, we think that those strings are the same. + +**Syntax** + +``` sql +wordShingleMinHash(string[, shinglesize, hashnum]) +``` + +**Arguments** + +- `string` — String. [String](../../sql-reference/data-types/string.md). +- `shinglesize` — The size of a word shingle. Optional. Possible values: any number from `1` to `25`. Default value: `3`. [UInt8](../../sql-reference/data-types/int-uint.md). +- `hashnum` — The number of minimum and maximum hashes used to calculate the result. Optional. Possible values: any number from `1` to `25`. Default value: `6`. [UInt8](../../sql-reference/data-types/int-uint.md). + +**Returned value** + +- Tuple with two hashes — the minimum and the maximum. + +Type: [Tuple](../../sql-reference/data-types/tuple.md)([UInt64](../../sql-reference/data-types/int-uint.md), [UInt64](../../sql-reference/data-types/int-uint.md)). + +**Example** + +Query: + +``` sql +SELECT wordShingleMinHash('ClickHouse® is a column-oriented database management system (DBMS) for online analytical processing of queries (OLAP).') AS Tuple; +``` + +Result: + +``` text +┌─Tuple──────────────────────────────────────┐ +│ (16452112859864147620,5844417301642981317) │ +└────────────────────────────────────────────┘ +``` + +## wordShingleMinHashCaseInsensitive {#wordshingleminhashcaseinsensitive} + +Splits a ASCII string into parts (shingles) of `shinglesize` words and calculates hash values for each word shingle. Uses `hashnum` minimum hashes to calculate the minimum hash and `hashnum` maximum hashes to calculate the maximum hash. Returns a tuple with these hashes. Is case insensitive. + +Can be used for detection of semi-duplicate strings with [tupleHammingDistance](../../sql-reference/functions/tuple-functions.md#tuplehammingdistance). For two strings: if one of the returned hashes is the same for both strings, we think that those strings are the same. + +**Syntax** + +``` sql +wordShingleMinHashCaseInsensitive(string[, shinglesize, hashnum]) +``` + +**Arguments** + +- `string` — String. [String](../../sql-reference/data-types/string.md). +- `shinglesize` — The size of a word shingle. Optional. Possible values: any number from `1` to `25`. Default value: `3`. [UInt8](../../sql-reference/data-types/int-uint.md). +- `hashnum` — The number of minimum and maximum hashes used to calculate the result. Optional. Possible values: any number from `1` to `25`. Default value: `6`. [UInt8](../../sql-reference/data-types/int-uint.md). + +**Returned value** + +- Tuple with two hashes — the minimum and the maximum. 
+ +Type: [Tuple](../../sql-reference/data-types/tuple.md)([UInt64](../../sql-reference/data-types/int-uint.md), [UInt64](../../sql-reference/data-types/int-uint.md)). + +**Example** + +Query: + +``` sql +SELECT wordShingleMinHashCaseInsensitive('ClickHouse® is a column-oriented database management system (DBMS) for online analytical processing of queries (OLAP).') AS Tuple; +``` + +Result: + +``` text +┌─Tuple─────────────────────────────────────┐ +│ (3065874883688416519,1634050779997673240) │ +└───────────────────────────────────────────┘ +``` + +## wordShingleMinHashUTF8 {#wordshingleminhashutf8} + +Splits a UTF-8 string into parts (shingles) of `shinglesize` words and calculates hash values for each word shingle. Uses `hashnum` minimum hashes to calculate the minimum hash and `hashnum` maximum hashes to calculate the maximum hash. Returns a tuple with these hashes. Is case sensitive. + +Can be used for detection of semi-duplicate strings with [tupleHammingDistance](../../sql-reference/functions/tuple-functions.md#tuplehammingdistance). For two strings: if one of the returned hashes is the same for both strings, we think that those strings are the same. + +**Syntax** + +``` sql +wordShingleMinHashUTF8(string[, shinglesize, hashnum]) +``` + +**Arguments** + +- `string` — String. [String](../../sql-reference/data-types/string.md). +- `shinglesize` — The size of a word shingle. Optional. Possible values: any number from `1` to `25`. Default value: `3`. [UInt8](../../sql-reference/data-types/int-uint.md). +- `hashnum` — The number of minimum and maximum hashes used to calculate the result. Optional. Possible values: any number from `1` to `25`. Default value: `6`. [UInt8](../../sql-reference/data-types/int-uint.md). + +**Returned value** + +- Tuple with two hashes — the minimum and the maximum. + +Type: [Tuple](../../sql-reference/data-types/tuple.md)([UInt64](../../sql-reference/data-types/int-uint.md), [UInt64](../../sql-reference/data-types/int-uint.md)). + +**Example** + +Query: + +``` sql +SELECT wordShingleMinHashUTF8('ClickHouse® is a column-oriented database management system (DBMS) for online analytical processing of queries (OLAP).') AS Tuple; +``` + +Result: + +``` text +┌─Tuple──────────────────────────────────────┐ +│ (16452112859864147620,5844417301642981317) │ +└────────────────────────────────────────────┘ +``` + +## wordShingleMinHashCaseInsensitiveUTF8 {#wordshingleminhashcaseinsensitiveutf8} + +Splits a UTF-8 string into parts (shingles) of `shinglesize` words and calculates hash values for each word shingle. Uses `hashnum` minimum hashes to calculate the minimum hash and `hashnum` maximum hashes to calculate the maximum hash. Returns a tuple with these hashes. Is case insensitive. + +Can be used for detection of semi-duplicate strings with [tupleHammingDistance](../../sql-reference/functions/tuple-functions.md#tuplehammingdistance). For two strings: if one of the returned hashes is the same for both strings, we think that those strings are the same. + +**Syntax** + +``` sql +wordShingleMinHashCaseInsensitiveUTF8(string[, shinglesize, hashnum]) +``` + +**Arguments** + +- `string` — String. [String](../../sql-reference/data-types/string.md). +- `shinglesize` — The size of a word shingle. Optional. Possible values: any number from `1` to `25`. Default value: `3`. [UInt8](../../sql-reference/data-types/int-uint.md). +- `hashnum` — The number of minimum and maximum hashes used to calculate the result. Optional. Possible values: any number from `1` to `25`. Default value: `6`. 
[UInt8](../../sql-reference/data-types/int-uint.md). + +**Returned value** + +- Tuple with two hashes — the minimum and the maximum. + +Type: [Tuple](../../sql-reference/data-types/tuple.md)([UInt64](../../sql-reference/data-types/int-uint.md), [UInt64](../../sql-reference/data-types/int-uint.md)). + +**Example** + +Query: + +``` sql +SELECT wordShingleMinHashCaseInsensitiveUTF8('ClickHouse® is a column-oriented database management system (DBMS) for online analytical processing of queries (OLAP).') AS Tuple; +``` + +Result: + +``` text +┌─Tuple─────────────────────────────────────┐ +│ (3065874883688416519,1634050779997673240) │ +└───────────────────────────────────────────┘ +``` + +## wordShingleMinHashArg {#wordshingleminhasharg} + +Splits a ASCII string into parts (shingles) of `shinglesize` words each and returns the shingles with minimum and maximum word hashes, calculated by the [wordshingleMinHash](#wordshingleminhash) function with the same input. Is case sensitive. + +**Syntax** + +``` sql +wordShingleMinHashArg(string[, shinglesize, hashnum]) +``` + +**Arguments** + +- `string` — String. [String](../../sql-reference/data-types/string.md). +- `shinglesize` — The size of a word shingle. Optional. Possible values: any number from `1` to `25`. Default value: `3`. [UInt8](../../sql-reference/data-types/int-uint.md). +- `hashnum` — The number of minimum and maximum hashes used to calculate the result. Optional. Possible values: any number from `1` to `25`. Default value: `6`. [UInt8](../../sql-reference/data-types/int-uint.md). + +**Returned value** + +- Tuple with two tuples with `hashnum` word shingles each. + +Type: [Tuple](../../sql-reference/data-types/tuple.md)([Tuple](../../sql-reference/data-types/tuple.md)([String](../../sql-reference/data-types/string.md)), [Tuple](../../sql-reference/data-types/tuple.md)([String](../../sql-reference/data-types/string.md))). + +**Example** + +Query: + +``` sql +SELECT wordShingleMinHashArg('ClickHouse® is a column-oriented database management system (DBMS) for online analytical processing of queries (OLAP).', 1, 3) AS Tuple; +``` + +Result: + +``` text +┌─Tuple─────────────────────────────────────────────────────────────────┐ +│ (('OLAP','database','analytical'),('online','oriented','processing')) │ +└───────────────────────────────────────────────────────────────────────┘ +``` + +## wordShingleMinHashArgCaseInsensitive {#wordshingleminhashargcaseinsensitive} + +Splits a ASCII string into parts (shingles) of `shinglesize` words each and returns the shingles with minimum and maximum word hashes, calculated by the [wordShingleMinHashCaseInsensitive](#wordshingleminhashcaseinsensitive) function with the same input. Is case insensitive. + +**Syntax** + +``` sql +wordShingleMinHashArgCaseInsensitive(string[, shinglesize, hashnum]) +``` + +**Arguments** + +- `string` — String. [String](../../sql-reference/data-types/string.md). +- `shinglesize` — The size of a word shingle. Optional. Possible values: any number from `1` to `25`. Default value: `3`. [UInt8](../../sql-reference/data-types/int-uint.md). +- `hashnum` — The number of minimum and maximum hashes used to calculate the result. Optional. Possible values: any number from `1` to `25`. Default value: `6`. [UInt8](../../sql-reference/data-types/int-uint.md). + +**Returned value** + +- Tuple with two tuples with `hashnum` word shingles each. 
+ +Type: [Tuple](../../sql-reference/data-types/tuple.md)([Tuple](../../sql-reference/data-types/tuple.md)([String](../../sql-reference/data-types/string.md)), [Tuple](../../sql-reference/data-types/tuple.md)([String](../../sql-reference/data-types/string.md))). + +**Example** + +Query: + +``` sql +SELECT wordShingleMinHashArgCaseInsensitive('ClickHouse® is a column-oriented database management system (DBMS) for online analytical processing of queries (OLAP).', 1, 3) AS Tuple; +``` + +Result: + +``` text +┌─Tuple──────────────────────────────────────────────────────────────────┐ +│ (('queries','database','analytical'),('oriented','processing','DBMS')) │ +└────────────────────────────────────────────────────────────────────────┘ +``` + +## wordShingleMinHashArgUTF8 {#wordshingleminhashargutf8} + +Splits a UTF-8 string into parts (shingles) of `shinglesize` words each and returns the shingles with minimum and maximum word hashes, calculated by the [wordShingleMinHashUTF8](#wordshingleminhashutf8) function with the same input. Is case sensitive. + +**Syntax** + +``` sql +wordShingleMinHashArgUTF8(string[, shinglesize, hashnum]) +``` + +**Arguments** + +- `string` — String. [String](../../sql-reference/data-types/string.md). +- `shinglesize` — The size of a word shingle. Optional. Possible values: any number from `1` to `25`. Default value: `3`. [UInt8](../../sql-reference/data-types/int-uint.md). +- `hashnum` — The number of minimum and maximum hashes used to calculate the result. Optional. Possible values: any number from `1` to `25`. Default value: `6`. [UInt8](../../sql-reference/data-types/int-uint.md). + +**Returned value** + +- Tuple with two tuples with `hashnum` word shingles each. + +Type: [Tuple](../../sql-reference/data-types/tuple.md)([Tuple](../../sql-reference/data-types/tuple.md)([String](../../sql-reference/data-types/string.md)), [Tuple](../../sql-reference/data-types/tuple.md)([String](../../sql-reference/data-types/string.md))). + +**Example** + +Query: + +``` sql +SELECT wordShingleMinHashArgUTF8('ClickHouse® is a column-oriented database management system (DBMS) for online analytical processing of queries (OLAP).', 1, 3) AS Tuple; +``` + +Result: + +``` text +┌─Tuple─────────────────────────────────────────────────────────────────┐ +│ (('OLAP','database','analytical'),('online','oriented','processing')) │ +└───────────────────────────────────────────────────────────────────────┘ +``` + +## wordShingleMinHashArgCaseInsensitiveUTF8 {#wordshingleminhashargcaseinsensitiveutf8} + +Splits a UTF-8 string into parts (shingles) of `shinglesize` words each and returns the shingles with minimum and maximum word hashes, calculated by the [wordShingleMinHashCaseInsensitiveUTF8](#wordshingleminhashcaseinsensitiveutf8) function with the same input. Is case insensitive. + +**Syntax** + +``` sql +wordShingleMinHashArgCaseInsensitiveUTF8(string[, shinglesize, hashnum]) +``` + +**Arguments** + +- `string` — String. [String](../../sql-reference/data-types/string.md). +- `shinglesize` — The size of a word shingle. Optional. Possible values: any number from `1` to `25`. Default value: `3`. [UInt8](../../sql-reference/data-types/int-uint.md). +- `hashnum` — The number of minimum and maximum hashes used to calculate the result. Optional. Possible values: any number from `1` to `25`. Default value: `6`. [UInt8](../../sql-reference/data-types/int-uint.md). + +**Returned value** + +- Tuple with two tuples with `hashnum` word shingles each. 
+ +Type: [Tuple](../../sql-reference/data-types/tuple.md)([Tuple](../../sql-reference/data-types/tuple.md)([String](../../sql-reference/data-types/string.md)), [Tuple](../../sql-reference/data-types/tuple.md)([String](../../sql-reference/data-types/string.md))). + +**Example** + +Query: + +``` sql +SELECT wordShingleMinHashArgCaseInsensitiveUTF8('ClickHouse® is a column-oriented database management system (DBMS) for online analytical processing of queries (OLAP).', 1, 3) AS Tuple; +``` + +Result: + +``` text +┌─Tuple──────────────────────────────────────────────────────────────────┐ +│ (('queries','database','analytical'),('oriented','processing','DBMS')) │ +└────────────────────────────────────────────────────────────────────────┘ +``` diff --git a/docs/en/sql-reference/functions/in-functions.md b/docs/en/sql-reference/functions/in-functions.md index dd3c1900fdc..c8936e74954 100644 --- a/docs/en/sql-reference/functions/in-functions.md +++ b/docs/en/sql-reference/functions/in-functions.md @@ -9,4 +9,3 @@ toc_title: IN Operator See the section [IN operators](../../sql-reference/operators/in.md#select-in-operators). -[Original article](https://clickhouse.tech/docs/en/query_language/functions/in_functions/) diff --git a/docs/en/sql-reference/functions/index.md b/docs/en/sql-reference/functions/index.md index 1a0b9d83b5f..32408759b98 100644 --- a/docs/en/sql-reference/functions/index.md +++ b/docs/en/sql-reference/functions/index.md @@ -84,4 +84,3 @@ Another example is the `hostName` function, which returns the name of the server If a function in a query is performed on the requestor server, but you need to perform it on remote servers, you can wrap it in an ‘any’ aggregate function or add it to a key in `GROUP BY`. -[Original article](https://clickhouse.tech/docs/en/query_language/functions/) diff --git a/docs/en/sql-reference/functions/introspection.md b/docs/en/sql-reference/functions/introspection.md index 964265a461b..44685e3cb67 100644 --- a/docs/en/sql-reference/functions/introspection.md +++ b/docs/en/sql-reference/functions/introspection.md @@ -53,13 +53,13 @@ Type: [String](../../sql-reference/data-types/string.md). Enabling introspection functions: ``` sql -SET allow_introspection_functions=1 +SET allow_introspection_functions=1; ``` Selecting the first string from the `trace_log` system table: ``` sql -SELECT * FROM system.trace_log LIMIT 1 \G +SELECT * FROM system.trace_log LIMIT 1 \G; ``` ``` text @@ -79,7 +79,7 @@ The `trace` field contains the stack trace at the moment of sampling. Getting the source code filename and the line number for a single address: ``` sql -SELECT addressToLine(94784076370703) \G +SELECT addressToLine(94784076370703) \G; ``` ``` text @@ -139,13 +139,13 @@ Type: [String](../../sql-reference/data-types/string.md). Enabling introspection functions: ``` sql -SET allow_introspection_functions=1 +SET allow_introspection_functions=1; ``` Selecting the first string from the `trace_log` system table: ``` sql -SELECT * FROM system.trace_log LIMIT 1 \G +SELECT * FROM system.trace_log LIMIT 1 \G; ``` ``` text @@ -165,7 +165,7 @@ The `trace` field contains the stack trace at the moment of sampling. Getting a symbol for a single address: ``` sql -SELECT addressToSymbol(94138803686098) \G +SELECT addressToSymbol(94138803686098) \G; ``` ``` text @@ -236,13 +236,13 @@ Type: [String](../../sql-reference/data-types/string.md). 
Enabling introspection functions: ``` sql -SET allow_introspection_functions=1 +SET allow_introspection_functions=1; ``` Selecting the first string from the `trace_log` system table: ``` sql -SELECT * FROM system.trace_log LIMIT 1 \G +SELECT * FROM system.trace_log LIMIT 1 \G; ``` ``` text @@ -262,7 +262,7 @@ The `trace` field contains the stack trace at the moment of sampling. Getting a function name for a single address: ``` sql -SELECT demangle(addressToSymbol(94138803686098)) \G +SELECT demangle(addressToSymbol(94138803686098)) \G; ``` ``` text @@ -335,6 +335,7 @@ Result: │ 3878 │ └───────┘ ``` + ## logTrace {#logtrace} Emits trace log message to server log for each [Block](https://clickhouse.tech/docs/en/development/architecture/#block). @@ -369,4 +370,3 @@ Result: └──────────────────────────────┘ ``` -[Original article](https://clickhouse.tech/docs/en/query_language/functions/introspection/) diff --git a/docs/en/sql-reference/functions/ip-address-functions.md b/docs/en/sql-reference/functions/ip-address-functions.md index 64457627cce..0b5dd7160b8 100644 --- a/docs/en/sql-reference/functions/ip-address-functions.md +++ b/docs/en/sql-reference/functions/ip-address-functions.md @@ -60,7 +60,7 @@ Alias: `INET6_NTOA`. Examples: ``` sql -SELECT IPv6NumToString(toFixedString(unhex('2A0206B8000000000000000000000011'), 16)) AS addr +SELECT IPv6NumToString(toFixedString(unhex('2A0206B8000000000000000000000011'), 16)) AS addr; ``` ``` text @@ -164,7 +164,7 @@ Result: └────────────┴──────────────────────────────────────┘ ``` -**See also** +**See Also** - [cutIPv6](#cutipv6x-bytestocutforipv6-bytestocutforipv4). @@ -173,7 +173,7 @@ Result: Takes a `UInt32` number. Interprets it as an IPv4 address in [big endian](https://en.wikipedia.org/wiki/Endianness). Returns a `FixedString(16)` value containing the IPv6 address in binary format. Examples: ``` sql -SELECT IPv6NumToString(IPv4ToIPv6(IPv4StringToNum('192.168.0.1'))) AS addr +SELECT IPv6NumToString(IPv4ToIPv6(IPv4StringToNum('192.168.0.1'))) AS addr; ``` ``` text @@ -206,7 +206,7 @@ SELECT Accepts an IPv4 and an UInt8 value containing the [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing). Return a tuple with two IPv4 containing the lower range and the higher range of the subnet. ``` sql -SELECT IPv4CIDRToRange(toIPv4('192.168.5.2'), 16) +SELECT IPv4CIDRToRange(toIPv4('192.168.5.2'), 16); ``` ``` text @@ -342,7 +342,7 @@ Type: [UInt8](../../sql-reference/data-types/int-uint.md). Query: ```sql -SELECT addr, isIPv4String(addr) FROM ( SELECT ['0.0.0.0', '127.0.0.1', '::ffff:127.0.0.1'] AS addr ) ARRAY JOIN addr +SELECT addr, isIPv4String(addr) FROM ( SELECT ['0.0.0.0', '127.0.0.1', '::ffff:127.0.0.1'] AS addr ) ARRAY JOIN addr; ``` Result: @@ -380,7 +380,7 @@ Type: [UInt8](../../sql-reference/data-types/int-uint.md). Query: ``` sql -SELECT addr, isIPv6String(addr) FROM ( SELECT ['::', '1111::ffff', '::ffff:127.0.0.1', '127.0.0.1'] AS addr ) ARRAY JOIN addr +SELECT addr, isIPv6String(addr) FROM ( SELECT ['::', '1111::ffff', '::ffff:127.0.0.1', '127.0.0.1'] AS addr ) ARRAY JOIN addr; ``` Result: @@ -394,4 +394,55 @@ Result: └──────────────────┴────────────────────┘ ``` -[Original article](https://clickhouse.tech/docs/en/query_language/functions/ip_address_functions/) +## isIPAddressInRange {#isipaddressinrange} + +Determines if an IP address is contained in a network represented in the [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) notation. Returns `1` if true, or `0` otherwise. 
+ +**Syntax** + +``` sql +isIPAddressInRange(address, prefix) +``` + +This function accepts both IPv4 and IPv6 addresses (and networks) represented as strings. It returns `0` if the IP version of the address and the CIDR don't match. + +**Arguments** + +- `address` — An IPv4 or IPv6 address. [String](../../sql-reference/data-types/string.md). +- `prefix` — An IPv4 or IPv6 network prefix in CIDR. [String](../../sql-reference/data-types/string.md). + +**Returned value** + +- `1` or `0`. + +Type: [UInt8](../../sql-reference/data-types/int-uint.md). + +**Example** + +Query: + +``` sql +SELECT isIPAddressInRange('127.0.0.1', '127.0.0.0/8') +``` + +Result: + +``` text +┌─isIPAddressInRange('127.0.0.1', '127.0.0.0/8')─┐ +│ 1 │ +└────────────────────────────────────────────────┘ +``` + +Query: + +``` sql +SELECT isIPAddressInRange('127.0.0.1', 'ffff::/16') +``` + +Result: + +``` text +┌─isIPAddressInRange('127.0.0.1', 'ffff::/16')─┐ +│ 0 │ +└──────────────────────────────────────────────┘ +``` diff --git a/docs/en/sql-reference/functions/json-functions.md b/docs/en/sql-reference/functions/json-functions.md index edee048eb77..d545a0ae4e6 100644 --- a/docs/en/sql-reference/functions/json-functions.md +++ b/docs/en/sql-reference/functions/json-functions.md @@ -16,46 +16,60 @@ The following assumptions are made: ## visitParamHas(params, name) {#visitparamhasparams-name} -Checks whether there is a field with the ‘name’ name. +Checks whether there is a field with the `name` name. + +Alias: `simpleJSONHas`. ## visitParamExtractUInt(params, name) {#visitparamextractuintparams-name} -Parses UInt64 from the value of the field named ‘name’. If this is a string field, it tries to parse a number from the beginning of the string. If the field doesn’t exist, or it exists but doesn’t contain a number, it returns 0. +Parses UInt64 from the value of the field named `name`. If this is a string field, it tries to parse a number from the beginning of the string. If the field doesn’t exist, or it exists but doesn’t contain a number, it returns 0. + +Alias: `simpleJSONExtractUInt`. ## visitParamExtractInt(params, name) {#visitparamextractintparams-name} The same as for Int64. +Alias: `simpleJSONExtractInt`. + ## visitParamExtractFloat(params, name) {#visitparamextractfloatparams-name} The same as for Float64. +Alias: `simpleJSONExtractFloat`. + ## visitParamExtractBool(params, name) {#visitparamextractboolparams-name} Parses a true/false value. The result is UInt8. +Alias: `simpleJSONExtractBool`. + ## visitParamExtractRaw(params, name) {#visitparamextractrawparams-name} Returns the value of a field, including separators. +Alias: `simpleJSONExtractRaw`. + Examples: ``` sql -visitParamExtractRaw('{"abc":"\\n\\u0000"}', 'abc') = '"\\n\\u0000"' -visitParamExtractRaw('{"abc":{"def":[1,2,3]}}', 'abc') = '{"def":[1,2,3]}' +visitParamExtractRaw('{"abc":"\\n\\u0000"}', 'abc') = '"\\n\\u0000"'; +visitParamExtractRaw('{"abc":{"def":[1,2,3]}}', 'abc') = '{"def":[1,2,3]}'; ``` ## visitParamExtractString(params, name) {#visitparamextractstringparams-name} Parses the string in double quotes. The value is unescaped. If unescaping failed, it returns an empty string. +Alias: `simpleJSONExtractString`. 
+
Examples:

``` sql
-visitParamExtractString('{"abc":"\\n\\u0000"}', 'abc') = '\n\0'
-visitParamExtractString('{"abc":"\\u263a"}', 'abc') = '☺'
-visitParamExtractString('{"abc":"\\u263"}', 'abc') = ''
-visitParamExtractString('{"abc":"hello}', 'abc') = ''
+visitParamExtractString('{"abc":"\\n\\u0000"}', 'abc') = '\n\0';
+visitParamExtractString('{"abc":"\\u263a"}', 'abc') = '☺';
+visitParamExtractString('{"abc":"\\u263"}', 'abc') = '';
+visitParamExtractString('{"abc":"hello}', 'abc') = '';
```

There is currently no support for code points in the format `\uXXXX\uYYYY` that are not from the basic multilingual plane (they are converted to CESU-8 instead of UTF-8).
@@ -199,7 +213,7 @@ Parses key-value pairs from a JSON where the values are of the given ClickHouse
 Example:

``` sql
-SELECT JSONExtractKeysAndValues('{"x": {"a": 5, "b": 7, "c": 11}}', 'x', 'Int8') = [('a',5),('b',7),('c',11)]
+SELECT JSONExtractKeysAndValues('{"x": {"a": 5, "b": 7, "c": 11}}', 'x', 'Int8') = [('a',5),('b',7),('c',11)];
```

## JSONExtractRaw(json\[, indices_or_keys\]…) {#jsonextractrawjson-indices-or-keys}
@@ -211,7 +225,7 @@ If the part does not exist or has a wrong type, an empty string will be returned
 Example:

``` sql
-SELECT JSONExtractRaw('{"a": "hello", "b": [-100, 200.0, 300]}', 'b') = '[-100, 200.0, 300]'
+SELECT JSONExtractRaw('{"a": "hello", "b": [-100, 200.0, 300]}', 'b') = '[-100, 200.0, 300]';
```

## JSONExtractArrayRaw(json\[, indices_or_keys…\]) {#jsonextractarrayrawjson-indices-or-keys}
@@ -223,7 +237,7 @@ If the part does not exist or isn’t array, an empty array will be returned.
 Example:

``` sql
-SELECT JSONExtractArrayRaw('{"a": "hello", "b": [-100, 200.0, "hello"]}', 'b') = ['-100', '200.0', '"hello"']'
+SELECT JSONExtractArrayRaw('{"a": "hello", "b": [-100, 200.0, "hello"]}', 'b') = ['-100', '200.0', '"hello"'];
```

## JSONExtractKeysAndValuesRaw {#json-extract-keys-and-values-raw}
@@ -253,7 +267,7 @@ Type: [Array](../../sql-reference/data-types/array.md)([Tuple](../../sql-referen
 Query:

``` sql
-SELECT JSONExtractKeysAndValuesRaw('{"a": [-100, 200.0], "b":{"c": {"d": "hello", "f": "world"}}}')
+SELECT JSONExtractKeysAndValuesRaw('{"a": [-100, 200.0], "b":{"c": {"d": "hello", "f": "world"}}}');
```

Result:
@@ -267,7 +281,7 @@ Result:
 Query:

``` sql
-SELECT JSONExtractKeysAndValuesRaw('{"a": [-100, 200.0], "b":{"c": {"d": "hello", "f": "world"}}}', 'b')
+SELECT JSONExtractKeysAndValuesRaw('{"a": [-100, 200.0], "b":{"c": {"d": "hello", "f": "world"}}}', 'b');
```

Result:
@@ -281,7 +295,7 @@ Result:
 Query:

``` sql
-SELECT JSONExtractKeysAndValuesRaw('{"a": [-100, 200.0], "b":{"c": {"d": "hello", "f": "world"}}}', -1, 'c')
+SELECT JSONExtractKeysAndValuesRaw('{"a": [-100, 200.0], "b":{"c": {"d": "hello", "f": "world"}}}', -1, 'c');
```

Result:
@@ -292,4 +306,3 @@ Result:
 └───────────────────────────────────────────────────────────────────────────────────────────────────────┘
```

-[Original article](https://clickhouse.tech/docs/en/query_language/functions/json_functions/)
diff --git a/docs/en/sql-reference/functions/logical-functions.md b/docs/en/sql-reference/functions/logical-functions.md
index 13452f88a85..6cce0e4fff5 100644
--- a/docs/en/sql-reference/functions/logical-functions.md
+++ b/docs/en/sql-reference/functions/logical-functions.md
@@ -17,4 +17,3 @@ Zero as an argument is considered “false,” while any non-zero value is consi

## xor {#xor}

-[Original article](https://clickhouse.tech/docs/en/query_language/functions/logical_functions/)
diff --git
a/docs/en/sql-reference/functions/machine-learning-functions.md b/docs/en/sql-reference/functions/machine-learning-functions.md index f103a4ea421..60dabd73781 100644 --- a/docs/en/sql-reference/functions/machine-learning-functions.md +++ b/docs/en/sql-reference/functions/machine-learning-functions.md @@ -9,7 +9,7 @@ toc_title: Machine Learning Prediction using fitted regression models uses `evalMLMethod` function. See link in `linearRegression`. -## stochasticLinearRegressionn {#stochastic-linear-regression} +## stochasticLinearRegression {#stochastic-linear-regression} The [stochasticLinearRegression](../../sql-reference/aggregate-functions/reference/stochasticlinearregression.md#agg_functions-stochasticlinearregression) aggregate function implements stochastic gradient descent method using linear model and MSE loss function. Uses `evalMLMethod` to predict on new data. @@ -36,14 +36,14 @@ bayesAB(distribution_name, higher_is_better, variant_names, x, y) - `higher_is_better` — Boolean flag. [Boolean](../../sql-reference/data-types/boolean.md). Possible values: - - `0` - lower values are considered to be better than higher - - `1` - higher values are considered to be better than lower + - `0` — lower values are considered to be better than higher + - `1` — higher values are considered to be better than lower -- `variant_names` - Variant names. [Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md)). +- `variant_names` — Variant names. [Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md)). -- `x` - Numbers of tests for the corresponding variants. [Array](../../sql-reference/data-types/array.md)([Float64](../../sql-reference/data-types/float.md)). +- `x` — Numbers of tests for the corresponding variants. [Array](../../sql-reference/data-types/array.md)([Float64](../../sql-reference/data-types/float.md)). -- `y` - Numbers of successful tests for the corresponding variants. [Array](../../sql-reference/data-types/array.md)([Float64](../../sql-reference/data-types/float.md)). +- `y` — Numbers of successful tests for the corresponding variants. [Array](../../sql-reference/data-types/array.md)([Float64](../../sql-reference/data-types/float.md)). !!! note "Note" All three arrays must have the same size. All `x` and `y` values must be non-negative constant numbers. `y` cannot be larger than `x`. @@ -51,8 +51,8 @@ bayesAB(distribution_name, higher_is_better, variant_names, x, y) **Returned values** For each variant the function calculates: -- `beats_control` - long-term probability to out-perform the first (control) variant -- `to_be_best` - long-term probability to out-perform all other variants +- `beats_control` — long-term probability to out-perform the first (control) variant +- `to_be_best` — long-term probability to out-perform all other variants Type: JSON. 
@@ -94,4 +94,3 @@ Result:
 }
```

-[Original article](https://clickhouse.tech/docs/en/query_language/functions/machine-learning-functions/)
diff --git a/docs/en/sql-reference/functions/math-functions.md b/docs/en/sql-reference/functions/math-functions.md
index f56a721c0c0..2b3c000bc19 100644
--- a/docs/en/sql-reference/functions/math-functions.md
+++ b/docs/en/sql-reference/functions/math-functions.md
@@ -54,7 +54,7 @@ If ‘x’ is non-negative, then `erf(x / σ√2)` is the probability that a ran
 Example (three sigma rule):

``` sql
-SELECT erf(3 / sqrt(2))
+SELECT erf(3 / sqrt(2));
```

``` text
@@ -415,7 +415,7 @@ Result:

## sign(x) {#signx}

-The `sign` function can extract the sign of a real number.
+Returns the sign of a real number.

**Syntax**

@@ -433,9 +433,9 @@ sign(x)
- 0 for `x = 0`
- 1 for `x > 0`

-**Example**
+**Examples**

-Query:
+Sign for the zero value:

``` sql
SELECT sign(0);
@@ -449,7 +449,7 @@ Result:
 └─────────┘
```

-Query:
+Sign for the positive value:

``` sql
SELECT sign(1);
@@ -463,7 +463,7 @@ Result:
 └─────────┘
```

-Query:
+Sign for the negative value:

``` sql
SELECT sign(-1);
@@ -477,4 +477,3 @@ Result:
 └──────────┘
```

-[Original article](https://clickhouse.tech/docs/en/query_language/functions/math_functions/)
diff --git a/docs/en/sql-reference/functions/other-functions.md b/docs/en/sql-reference/functions/other-functions.md
index 2c7f8da881e..ecb7b982157 100644
--- a/docs/en/sql-reference/functions/other-functions.md
+++ b/docs/en/sql-reference/functions/other-functions.md
@@ -696,10 +696,6 @@ Returns the server’s uptime in seconds.

Returns the version of the server as a string.

-## timezone() {#timezone}
-
-Returns the timezone of the server.
-
## blockNumber {#blocknumber}

Returns the sequence number of the data block where the row is located.
@@ -907,66 +903,64 @@ WHERE diff != 1

## runningDifferenceStartingWithFirstValue {#runningdifferencestartingwithfirstvalue}

-Same as for [runningDifference](../../sql-reference/functions/other-functions.md#other_functions-runningdifference), the difference is the value of the first row, returned the value of the first row, and each subsequent row returns the difference from the previous row.
+Same as [runningDifference](./other-functions.md#other_functions-runningdifference), except that the first row returns its own value, and each subsequent row returns the difference from the previous row.

## runningConcurrency {#runningconcurrency}

-Given a series of beginning time and ending time of events, this function calculates concurrency of the events at each of the data point, that is, the beginning time.
+Calculates the number of concurrent events.
+Each event has a start time and an end time. The start time is included in the event, while the end time is excluded. Columns with a start time and an end time must be of the same data type.
+The function calculates the total number of active (concurrent) events for each event start time.
+
 !!! warning "Warning"
-    Events spanning multiple data blocks will not be processed correctly. The function resets its state for each new data block.
-
-The result of the function depends on the order of data in the block. It assumes the beginning time is sorted in ascending order.
+    Events must be ordered by the start time in ascending order. If this requirement is violated, the function raises an exception.
+    Every data block is processed separately. If events from different data blocks overlap, they cannot be processed correctly.
**Syntax**

``` sql
-runningConcurrency(begin, end)
+runningConcurrency(start, end)
```

**Arguments**

-- `begin` — A column for the beginning time of events (inclusive). [Date](../../sql-reference/data-types/date.md), [DateTime](../../sql-reference/data-types/datetime.md), or [DateTime64](../../sql-reference/data-types/datetime64.md).
-- `end` — A column for the ending time of events (exclusive). [Date](../../sql-reference/data-types/date.md), [DateTime](../../sql-reference/data-types/datetime.md), or [DateTime64](../../sql-reference/data-types/datetime64.md).
-
-Note that two columns `begin` and `end` must have the same type.
+- `start` — A column with the start time of events. [Date](../../sql-reference/data-types/date.md), [DateTime](../../sql-reference/data-types/datetime.md), or [DateTime64](../../sql-reference/data-types/datetime64.md).
+- `end` — A column with the end time of events. [Date](../../sql-reference/data-types/date.md), [DateTime](../../sql-reference/data-types/datetime.md), or [DateTime64](../../sql-reference/data-types/datetime64.md).

**Returned values**

-- The concurrency of events at the data point.
+- The number of concurrent events at each event start time.

Type: [UInt32](../../sql-reference/data-types/int-uint.md)

**Example**

-Input table:
+Consider the table:

``` text
-┌───────────────begin─┬─────────────────end─┐
-│ 2020-12-01 00:00:00 │ 2020-12-01 00:59:59 │
-│ 2020-12-01 00:30:00 │ 2020-12-01 00:59:59 │
-│ 2020-12-01 00:40:00 │ 2020-12-01 01:30:30 │
-│ 2020-12-01 01:10:00 │ 2020-12-01 01:30:30 │
-│ 2020-12-01 01:50:00 │ 2020-12-01 01:59:59 │
-└─────────────────────┴─────────────────────┘
+┌──────start─┬────────end─┐
+│ 2021-03-03 │ 2021-03-11 │
+│ 2021-03-06 │ 2021-03-12 │
+│ 2021-03-07 │ 2021-03-08 │
+│ 2021-03-11 │ 2021-03-12 │
+└────────────┴────────────┘
```

Query:

``` sql
-SELECT runningConcurrency(begin, end) FROM example
+SELECT start, runningConcurrency(start, end) FROM example_table;
```

Result:

``` text
-┌─runningConcurrency(begin, end)─┐
-│ 1 │
-│ 2 │
-│ 3 │
-│ 2 │
-│ 1 │
-└────────────────────────────────┘
+┌──────start─┬─runningConcurrency(start, end)─┐
+│ 2021-03-03 │ 1 │
+│ 2021-03-06 │ 2 │
+│ 2021-03-07 │ 3 │
+│ 2021-03-11 │ 2 │
+└────────────┴────────────────────────────────┘
```

## MACNumToString(num) {#macnumtostringnum}
@@ -1194,6 +1188,109 @@ SELECT defaultValueOfTypeName('Nullable(Int8)')
 └──────────────────────────────────────────┘
```

+## indexHint {#indexhint}
+
+The function is intended for debugging and introspection. It ignores its argument and always returns 1; the argument is not even evaluated.
+
+But for the purpose of index analysis, the argument of this function is analyzed as if it were present directly, without being wrapped inside the `indexHint` function. This allows selecting data in index ranges matching the corresponding condition, but without further filtering by this condition. Since the index in ClickHouse is sparse, using `indexHint` will yield more data than specifying the same condition directly.

+**Syntax**
+
+```sql
+SELECT * FROM table WHERE indexHint(<expression>)
+```
+
+**Returned value**
+
+- `1`. Type: [UInt8](../../sql-reference/data-types/int-uint.md).
+
+**Example**
+
+Here is an example using test data from the table [ontime](../../getting-started/example-datasets/ontime.md).
+
+Input table:
+
+```sql
+SELECT count() FROM ontime
+```
+
+```text
+┌─count()─┐
+│ 4276457 │
+└─────────┘
+```
+
+The table has indexes on the fields `(FlightDate, (Year, FlightDate))`.
+
+Consider a query that does not use the index.
+
+Query:
+
+```sql
+SELECT FlightDate AS k, count() FROM ontime GROUP BY k ORDER BY k
+```
+
+ClickHouse processed the entire table (`Processed 4.28 million rows`).
+
+Result:
+
+```text
+┌──────────k─┬─count()─┐
+│ 2017-01-01 │ 13970 │
+│ 2017-01-02 │ 15882 │
+........................
+│ 2017-09-28 │ 16411 │
+│ 2017-09-29 │ 16384 │
+│ 2017-09-30 │ 12520 │
+└────────────┴─────────┘
+```
+
+To apply the index, select a specific date.
+
+Query:
+
+```sql
+SELECT FlightDate AS k, count() FROM ontime WHERE k = '2017-09-15' GROUP BY k ORDER BY k
+```
+
+By using the index, ClickHouse processed a significantly smaller number of rows (`Processed 32.74 thousand rows`).
+
+Result:
+
+```text
+┌──────────k─┬─count()─┐
+│ 2017-09-15 │ 16428 │
+└────────────┴─────────┘
+```
+
+Now wrap the expression `k = '2017-09-15'` in the `indexHint` function.
+
+Query:
+
+```sql
+SELECT
+    FlightDate AS k,
+    count()
+FROM ontime
+WHERE indexHint(k = '2017-09-15')
+GROUP BY k
+ORDER BY k ASC
+```
+
+ClickHouse used the index in the same way as the previous time (`Processed 32.74 thousand rows`).
+The expression `k = '2017-09-15'` was not used when generating the result.
+In this example, the `indexHint` function makes it possible to see the adjacent dates.
+
+Result:
+
+```text
+┌──────────k─┬─count()─┐
+│ 2017-09-14 │ 7071 │
+│ 2017-09-15 │ 16428 │
+│ 2017-09-16 │ 1077 │
+│ 2017-09-30 │ 8167 │
+└────────────┴─────────┘
+```
+
## replicate {#other-functions-replicate}

Creates an array with a single value.
@@ -1762,7 +1859,6 @@ Result:

```

-
## randomStringUTF8 {#randomstringutf8}

Generates a random string of a specified length. Result string contains valid UTF-8 code points. The value of code points may be outside of the range of assigned Unicode.
@@ -1971,4 +2067,3 @@ Result:

- [tcp_port](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-tcp_port)

-[Original article](https://clickhouse.tech/docs/en/query_language/functions/other_functions/)
diff --git a/docs/en/sql-reference/functions/random-functions.md b/docs/en/sql-reference/functions/random-functions.md
index 2b9846344e4..aab9483de45 100644
--- a/docs/en/sql-reference/functions/random-functions.md
+++ b/docs/en/sql-reference/functions/random-functions.md
@@ -102,4 +102,3 @@ FROM numbers(3)
 │ aeca2A │
 └───────────────────────────────────────┘

-[Original article](https://clickhouse.tech/docs/en/query_language/functions/random_functions/)
diff --git a/docs/en/sql-reference/functions/rounding-functions.md b/docs/en/sql-reference/functions/rounding-functions.md
index 83db1975366..c0bd44a6467 100644
--- a/docs/en/sql-reference/functions/rounding-functions.md
+++ b/docs/en/sql-reference/functions/rounding-functions.md
@@ -35,7 +35,7 @@ The function returns the nearest number of the specified order. In case when giv
 round(expression [, decimal_places])
```

-**Arguments:**
+**Arguments**

- `expression` — A number to be rounded. Can be any [expression](../../sql-reference/syntax.md#syntax-expressions) returning the numeric [data type](../../sql-reference/data-types/index.md#data_types).
- `decimal-places` — An integer value.
@@ -185,4 +185,3 @@ Accepts a number. If the number is less than 18, it returns 0. Otherwise, it rou

 Accepts a number and rounds it down to an element in the specified array. If the value is less than the lowest bound, the lowest bound is returned.
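+
+For example, the following query sketch (illustrative, not part of the original page) rounds each number down to the nearest array element:
+
+``` sql
+SELECT number, roundDown(number, [3, 4, 5]) FROM numbers(7);
+```
+
+Here `0`, `1` and `2` are below the lowest bound and return `3`; `3` and `4` return themselves; `5` and `6` return `5`.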
-[Original article](https://clickhouse.tech/docs/en/query_language/functions/rounding_functions/) diff --git a/docs/en/sql-reference/functions/splitting-merging-functions.md b/docs/en/sql-reference/functions/splitting-merging-functions.md index c70ee20f076..bd7e209549c 100644 --- a/docs/en/sql-reference/functions/splitting-merging-functions.md +++ b/docs/en/sql-reference/functions/splitting-merging-functions.md @@ -150,4 +150,3 @@ Result: └───────────────────────────────────────────────────────────────────────┘ ``` -[Original article](https://clickhouse.tech/docs/en/query_language/functions/splitting_merging_functions/) diff --git a/docs/en/sql-reference/functions/string-functions.md b/docs/en/sql-reference/functions/string-functions.md index 2c08fa3acb7..85570cb408d 100644 --- a/docs/en/sql-reference/functions/string-functions.md +++ b/docs/en/sql-reference/functions/string-functions.md @@ -73,19 +73,19 @@ Returns 1, if the set of bytes is valid UTF-8 encoded, otherwise 0. Replaces invalid UTF-8 characters by the `�` (U+FFFD) character. All running in a row invalid characters are collapsed into the one replacement character. ``` sql -toValidUTF8( input_string ) +toValidUTF8(input_string) ``` **Arguments** -- input_string — Any set of bytes represented as the [String](../../sql-reference/data-types/string.md) data type object. +- `input_string` — Any set of bytes represented as the [String](../../sql-reference/data-types/string.md) data type object. Returned value: Valid UTF-8 string. **Example** ``` sql -SELECT toValidUTF8('\x61\xF0\x80\x80\x80b') +SELECT toValidUTF8('\x61\xF0\x80\x80\x80b'); ``` ``` text @@ -122,7 +122,7 @@ Type: `String`. Query: ``` sql -SELECT repeat('abc', 10) +SELECT repeat('abc', 10); ``` Result: @@ -190,7 +190,7 @@ If any of argument values is `NULL`, `concat` returns `NULL`. Query: ``` sql -SELECT concat('Hello, ', 'World!') +SELECT concat('Hello, ', 'World!'); ``` Result: @@ -245,7 +245,7 @@ SELECT * from key_val; Query: ``` sql -SELECT concat(key1, key2), sum(value) FROM key_val GROUP BY concatAssumeInjective(key1, key2) +SELECT concat(key1, key2), sum(value) FROM key_val GROUP BY concatAssumeInjective(key1, key2); ``` Result: @@ -336,8 +336,8 @@ trim([[LEADING|TRAILING|BOTH] trim_character FROM] input_string) **Arguments** -- `trim_character` — specified characters for trim. [String](../../sql-reference/data-types/string.md). -- `input_string` — string for trim. [String](../../sql-reference/data-types/string.md). +- `trim_character` — Specified characters for trim. [String](../../sql-reference/data-types/string.md). +- `input_string` — String for trim. [String](../../sql-reference/data-types/string.md). **Returned value** @@ -350,7 +350,7 @@ Type: `String`. Query: ``` sql -SELECT trim(BOTH ' ()' FROM '( Hello, world! )') +SELECT trim(BOTH ' ()' FROM '( Hello, world! )'); ``` Result: @@ -388,7 +388,7 @@ Type: `String`. Query: ``` sql -SELECT trimLeft(' Hello, world! ') +SELECT trimLeft(' Hello, world! '); ``` Result: @@ -426,7 +426,7 @@ Type: `String`. Query: ``` sql -SELECT trimRight(' Hello, world! ') +SELECT trimRight(' Hello, world! '); ``` Result: @@ -464,7 +464,7 @@ Type: `String`. Query: ``` sql -SELECT trimBoth(' Hello, world! ') +SELECT trimBoth(' Hello, world! '); ``` Result: @@ -497,7 +497,8 @@ The result type is UInt64. Replaces literals, sequences of literals and complex aliases with placeholders. 
-**Syntax**
+**Syntax**
+
``` sql
normalizeQuery(x)
```
@@ -617,7 +618,7 @@ This function also replaces numeric character references with Unicode characters
 decodeXMLComponent(x)
```

-**Parameters**
+**Arguments**

- `x` — A sequence of characters. [String](../../sql-reference/data-types/string.md).
@@ -648,4 +649,65 @@ Result:

- [List of XML and HTML character entity references](https://en.wikipedia.org/wiki/List_of_XML_and_HTML_character_entity_references)

-[Original article](https://clickhouse.tech/docs/en/query_language/functions/string_functions/)
+
+## extractTextFromHTML {#extracttextfromhtml}
+
+A function to extract text from HTML or XHTML.
+It does not necessarily 100% conform to any of the HTML, XML or XHTML standards, but the implementation is reasonably accurate and it is fast. The rules are the following:
+
+1. Comments are skipped. Example: `<!-- test -->`. Comment must end with `-->`. Nested comments are not possible.
+Note: constructions like `<!-->` and `<!--!>` are not valid comments in HTML but they are skipped by other rules.
+2. CDATA is pasted verbatim. Note: CDATA is XML/XHTML specific. But it is processed for "best-effort" approach.
+3. `script` and `style` elements are removed with all their content. Note: it is assumed that closing tag cannot appear inside content. For example, in JS string literal has to be escaped like `"<\/script>"`.
+Note: comments and CDATA are possible inside `script` or `style` - then closing tags are not searched inside CDATA. Example: `<script><![CDATA[</script>]]></script>`. But they are still searched inside comments. Sometimes it becomes complicated: `<script>var x = "<!--xx"; </script> var y = "-->"; alert(x + y);</script>`
+Note: `script` and `style` can be the names of XML namespaces - then they are not treated like usual `script` or `style` elements. Example: `<script:a>Hello</script:a>`.
+Note: whitespaces are possible after closing tag name: `</script >` but not before: `< / script>`.
+4. Other tags or tag-like elements are skipped without inner content. Example: `<a>.</a>`
+Note: it is expected that this HTML is illegal: `<a test=">"></a>`
+Note: it also skips something like tags: `<>`, `<!>`, etc.
+Note: tag without end is skipped to the end of input: `<hello `
+5. HTML and XML entities are not decoded. They must be processed by a separate function.
+6. Whitespaces in the text are collapsed or inserted by specific rules.
+- Whitespaces at the beginning and at the end are removed.
+- Consecutive whitespaces are collapsed.
+- But if the text is separated by other elements and there is no whitespace, it is inserted.
+- It may cause unnatural examples: `Hello<b>world</b>`, `Hello<!-- -->world` - there is no whitespace in HTML, but the function inserts it. Also consider: `Hello<p>world</p>`, `Hello<br>world`. This behavior is reasonable for data analysis, e.g. to convert HTML to a bag of words.
+7. Also note that correct handling of whitespaces requires the support of `<pre></pre>` and CSS `display` and `white-space` properties.
+
+**Syntax**
+
+``` sql
+extractTextFromHTML(x)
+```
+
+**Arguments**
+
+-   `x` — Input text. [String](../../sql-reference/data-types/string.md).
+
+**Returned value**
+
+-   Extracted text.
+
+Type: [String](../../sql-reference/data-types/string.md).
+
+**Example**
+
+The first example contains several tags and a comment and also shows whitespace processing.
+The second example shows `CDATA` and `script` tag processing.
+In the third example text is extracted from the full HTML response received by the [url](../../sql-reference/table-functions/url.md) function.
+
+Query:
+
+``` sql
+SELECT extractTextFromHTML(' <p> A text <i>with</i><b>tags</b>. <!-- comments --> </p> ');
+SELECT extractTextFromHTML('<![CDATA[The content within <b>CDATA</b>]]> <script>alert("Script");</script>');
+SELECT extractTextFromHTML(html) FROM url('http://www.donothingfor2minutes.com/', RawBLOB, 'html String');
+```
+
+Result:
+
+``` text
+A text with tags .
+The content within CDATA
+Do Nothing for 2 Minutes 2:00&nbsp;
+```
diff --git a/docs/en/sql-reference/functions/string-replace-functions.md b/docs/en/sql-reference/functions/string-replace-functions.md
index 8905500995c..144b4fbc1da 100644
--- a/docs/en/sql-reference/functions/string-replace-functions.md
+++ b/docs/en/sql-reference/functions/string-replace-functions.md
@@ -92,4 +92,3 @@ Predefined characters: `\0`, `\\`, `|`, `(`, `)`, `^`, `$`, `.`, `[`, `]`, `?`,
 This implementation slightly differs from re2::RE2::QuoteMeta. It escapes zero byte as `\0` instead of `\x00` and it escapes only required characters.
 For more information, see the link: [RE2](https://github.com/google/re2/blob/master/re2/re2.cc#L473)

-[Original article](https://clickhouse.tech/docs/en/query_language/functions/string_replace_functions/)
diff --git a/docs/en/sql-reference/functions/string-search-functions.md b/docs/en/sql-reference/functions/string-search-functions.md
index 83b0edea438..01b1dd2d004 100644
--- a/docs/en/sql-reference/functions/string-search-functions.md
+++ b/docs/en/sql-reference/functions/string-search-functions.md
@@ -12,7 +12,9 @@ The search is case-sensitive by default in all these functions. There are separa

## position(haystack, needle), locate(haystack, needle) {#position}

-Returns the position (in bytes) of the found substring in the string, starting from 1.
+Searches for the substring `needle` in the string `haystack`.
+
+Returns the position (in bytes) of the found substring in the string, starting from 1.

For a case-insensitive search, use the function [positionCaseInsensitive](#positioncaseinsensitive).

@@ -20,15 +22,22 @@ For a case-insensitive search, use the function [positionCaseInsensitive](#posit

``` sql
 position(haystack, needle[, start_pos])
-```
+```
+
+``` sql
+position(needle IN haystack)
+```

Alias: `locate(haystack, needle[, start_pos])`.

+!!! note "Note"
+    Syntax of `position(needle IN haystack)` provides SQL-compatibility; the function works the same way as `position(haystack, needle)`.
+
**Arguments**

-- `haystack` — string, in which substring will to be searched. [String](../../sql-reference/syntax.md#syntax-string-literal).
-- `needle` — substring to be searched. [String](../../sql-reference/syntax.md#syntax-string-literal).
-- `start_pos` – Optional parameter, position of the first character in the string to start search. [UInt](../../sql-reference/data-types/int-uint.md)
+- `haystack` — String in which the substring will be searched. [String](../../sql-reference/syntax.md#syntax-string-literal).
+- `needle` — Substring to be searched. [String](../../sql-reference/syntax.md#syntax-string-literal).
+- `start_pos` — Position of the first character in the string to start the search. [UInt](../../sql-reference/data-types/int-uint.md). Optional.
**Returned values**
@@ -44,7 +53,7 @@ The phrase “Hello, world!” contains a set of bytes representing a single-byt
 Query:

``` sql
-SELECT position('Hello, world!', '!')
+SELECT position('Hello, world!', '!');
```

Result:
@@ -72,7 +81,7 @@ The same phrase in Russian contains characters which can’t be represented usin
 Query:

``` sql
-SELECT position('Привет, мир!', '!')
+SELECT position('Привет, мир!', '!');
```

Result:
@@ -83,6 +92,36 @@ Result:
 └───────────────────────────────┘
```

+**Examples for POSITION(needle IN haystack) syntax**
+
+Query:
+
+```sql
+SELECT 3 = position('c' IN 'abc');
+```
+
+Result:
+
+```text
+┌─equals(3, position('abc', 'c'))─┐
+│ 1 │
+└─────────────────────────────────┘
+```
+
+Query:
+
+```sql
+SELECT 6 = position('/' IN s) FROM (SELECT 'Hello/World' AS s);
+```
+
+Result:
+
+```text
+┌─equals(6, position(s, '/'))─┐
+│ 1 │
+└─────────────────────────────┘
+```
+
## positionCaseInsensitive {#positioncaseinsensitive}

The same as [position](#position) returns the position (in bytes) of the found substring in the string, starting from 1. Use the function for a case-insensitive search.
@@ -97,9 +136,9 @@ positionCaseInsensitive(haystack, needle[, start_pos])

**Arguments**

-- `haystack` — string, in which substring will to be searched. [String](../../sql-reference/syntax.md#syntax-string-literal).
-- `needle` — substring to be searched. [String](../../sql-reference/syntax.md#syntax-string-literal).
-- `start_pos` – Optional parameter, position of the first character in the string to start search. [UInt](../../sql-reference/data-types/int-uint.md)
+- `haystack` — String in which the substring will be searched. [String](../../sql-reference/syntax.md#syntax-string-literal).
+- `needle` — Substring to be searched. [String](../../sql-reference/syntax.md#syntax-string-literal).
+- `start_pos` — Optional parameter, position of the first character in the string to start the search. [UInt](../../sql-reference/data-types/int-uint.md).

**Returned values**
@@ -113,7 +152,7 @@ Type: `Integer`.
 Query:

``` sql
-SELECT positionCaseInsensitive('Hello, world!', 'hello')
+SELECT positionCaseInsensitive('Hello, world!', 'hello');
```

Result:
@@ -140,9 +179,9 @@ positionUTF8(haystack, needle[, start_pos])

**Arguments**

-- `haystack` — string, in which substring will to be searched. [String](../../sql-reference/syntax.md#syntax-string-literal).
-- `needle` — substring to be searched. [String](../../sql-reference/syntax.md#syntax-string-literal).
-- `start_pos` – Optional parameter, position of the first character in the string to start search. [UInt](../../sql-reference/data-types/int-uint.md)
+- `haystack` — String in which the substring will be searched. [String](../../sql-reference/syntax.md#syntax-string-literal).
+- `needle` — Substring to be searched. [String](../../sql-reference/syntax.md#syntax-string-literal).
+- `start_pos` — Optional parameter, position of the first character in the string to start the search. 
[UInt](../../sql-reference/data-types/int-uint.md)

**Returned values**
@@ -158,7 +197,7 @@ The phrase “Hello, world!” in Russian contains a set of Unicode points repre
 Query:

``` sql
-SELECT positionUTF8('Привет, мир!', '!')
+SELECT positionUTF8('Привет, мир!', '!');
```

Result:
@@ -174,7 +213,7 @@ The phrase “Salut, étudiante!”, where character `é` can be represented usi
 Query for the letter `é`, which is represented one Unicode point `U+00E9`:

``` sql
-SELECT positionUTF8('Salut, étudiante!', '!')
+SELECT positionUTF8('Salut, étudiante!', '!');
```

Result:
@@ -188,7 +227,7 @@ Result:
 Query for the letter `é`, which is represented two Unicode points `U+0065U+0301`:

``` sql
-SELECT positionUTF8('Salut, étudiante!', '!')
+SELECT positionUTF8('Salut, étudiante!', '!');
```

Result:
@@ -213,9 +252,9 @@ positionCaseInsensitiveUTF8(haystack, needle[, start_pos])

**Arguments**

-- `haystack` — string, in which substring will to be searched. [String](../../sql-reference/syntax.md#syntax-string-literal).
-- `needle` — substring to be searched. [String](../../sql-reference/syntax.md#syntax-string-literal).
-- `start_pos` – Optional parameter, position of the first character in the string to start search. [UInt](../../sql-reference/data-types/int-uint.md)
+- `haystack` — String in which the substring will be searched. [String](../../sql-reference/syntax.md#syntax-string-literal).
+- `needle` — Substring to be searched. [String](../../sql-reference/syntax.md#syntax-string-literal).
+- `start_pos` — Optional parameter, position of the first character in the string to start the search. [UInt](../../sql-reference/data-types/int-uint.md)

**Returned value**
@@ -229,7 +268,7 @@ Type: `Integer`.
 Query:

``` sql
-SELECT positionCaseInsensitiveUTF8('Привет, мир!', 'Мир')
+SELECT positionCaseInsensitiveUTF8('Привет, мир!', 'Мир');
```

Result:
@@ -258,8 +297,8 @@ multiSearchAllPositions(haystack, [needle1, needle2, ..., needlen])

**Arguments**

-- `haystack` — string, in which substring will to be searched. [String](../../sql-reference/syntax.md#syntax-string-literal).
-- `needle` — substring to be searched. [String](../../sql-reference/syntax.md#syntax-string-literal).
+- `haystack` — String in which the substring will be searched. [String](../../sql-reference/syntax.md#syntax-string-literal).
+- `needle` — Substring to be searched. [String](../../sql-reference/syntax.md#syntax-string-literal).

**Returned values**
@@ -270,7 +309,7 @@ multiSearchAllPositions(haystack, [needle1, needle2, ..., needlen])
 Query:

``` sql
-SELECT multiSearchAllPositions('Hello, World!', ['hello', '!', 'world'])
+SELECT multiSearchAllPositions('Hello, World!', ['hello', '!', 'world']);
```

Result:
@@ -387,7 +426,7 @@ If `haystack` doesn’t match the `pattern` regex, an array of empty arrays is r
 Query:

``` sql
-SELECT extractAllGroupsHorizontal('abc=111, def=222, ghi=333', '("[^"]+"|\\w+)=("[^"]+"|\\w+)')
+SELECT extractAllGroupsHorizontal('abc=111, def=222, ghi=333', '("[^"]+"|\\w+)=("[^"]+"|\\w+)');
```

Result:
@@ -428,7 +467,7 @@ If `haystack` doesn’t match the `pattern` regex, an empty array is returned.
Query:

``` sql
-SELECT extractAllGroupsVertical('abc=111, def=222, ghi=333', '("[^"]+"|\\w+)=("[^"]+"|\\w+)')
+SELECT extractAllGroupsVertical('abc=111, def=222, ghi=333', '("[^"]+"|\\w+)=("[^"]+"|\\w+)');
```

Result:
@@ -506,7 +545,7 @@ Input table:
 Query:

``` sql
-SELECT * FROM Months WHERE ilike(name, '%j%')
+SELECT * FROM Months WHERE ilike(name, '%j%');
```

Result:
@@ -618,7 +657,7 @@ countSubstringsCaseInsensitive(haystack, needle[, start_pos])

- `haystack` — The string to search in. [String](../../sql-reference/syntax.md#syntax-string-literal).
- `needle` — The substring to search for. [String](../../sql-reference/syntax.md#syntax-string-literal).
-- `start_pos` – Position of the first character in the string to start search. Optional. [UInt](../../sql-reference/data-types/int-uint.md).
+- `start_pos` — Position of the first character in the string to start search. Optional. [UInt](../../sql-reference/data-types/int-uint.md).

**Returned values**

- Number of occurrences.

Type: [UInt64](../../sql-reference/data-types/int-uint.md).

**Examples**

Query:

``` sql
-select countSubstringsCaseInsensitive('aba', 'B');
+SELECT countSubstringsCaseInsensitive('aba', 'B');
```

Result:
@@ -684,7 +723,7 @@ SELECT countSubstringsCaseInsensitiveUTF8(haystack, needle[, start_pos])

- `haystack` — The string to search in. [String](../../sql-reference/syntax.md#syntax-string-literal).
- `needle` — The substring to search for. [String](../../sql-reference/syntax.md#syntax-string-literal).
-- `start_pos` – Position of the first character in the string to start search. Optional. [UInt](../../sql-reference/data-types/int-uint.md).
+- `start_pos` — Position of the first character in the string to start search. Optional. [UInt](../../sql-reference/data-types/int-uint.md).

**Returned values**
@@ -772,5 +811,3 @@ Result:
 │ 2 │
 └───────────────────────────────┘
```
-
-[Original article](https://clickhouse.tech/docs/en/query_language/functions/string_search_functions/)
diff --git a/docs/en/sql-reference/functions/tuple-functions.md b/docs/en/sql-reference/functions/tuple-functions.md
index 1006b68b8ee..86442835425 100644
--- a/docs/en/sql-reference/functions/tuple-functions.md
+++ b/docs/en/sql-reference/functions/tuple-functions.md
@@ -47,7 +47,7 @@ You can use the `EXCEPT` expression to skip columns as a result of the query.

**Arguments**

-- `x` - A `tuple` function, column, or tuple of elements. [Tuple](../../sql-reference/data-types/tuple.md).
+- `x` — A `tuple` function, column, or tuple of elements. [Tuple](../../sql-reference/data-types/tuple.md).

**Returned value**
@@ -111,4 +111,55 @@ Result:

- [Tuple](../../sql-reference/data-types/tuple.md)

-[Original article](https://clickhouse.tech/docs/en/sql-reference/functions/tuple-functions/)
+## tupleHammingDistance {#tuplehammingdistance}
+
+Returns the [Hamming Distance](https://en.wikipedia.org/wiki/Hamming_distance) between two tuples of the same size.
+
+**Syntax**
+
+``` sql
+tupleHammingDistance(tuple1, tuple2)
+```
+
+**Arguments**
+
+- `tuple1` — First tuple. [Tuple](../../sql-reference/data-types/tuple.md).
+- `tuple2` — Second tuple. [Tuple](../../sql-reference/data-types/tuple.md).
+
+Tuples should have the same types of elements.
+
+**Returned value**
+
+- The Hamming distance.
+
+Type: [UInt8](../../sql-reference/data-types/int-uint.md).
+
+**Examples**
+
+Query:
+
+``` sql
+SELECT tupleHammingDistance((1, 2, 3), (3, 2, 1)) AS HammingDistance;
+```
+
+Result:
+
+``` text
+┌─HammingDistance─┐
+│ 2 │
+└─────────────────┘
+```
+
+Can be used with [MinHash](../../sql-reference/functions/hash-functions.md#ngramminhash) functions for detection of semi-duplicate strings:
+
+``` sql
+SELECT tupleHammingDistance(wordShingleMinHash(string), wordShingleMinHashCaseInsensitive(string)) as HammingDistance FROM (SELECT 'Clickhouse is a column-oriented database management system for online analytical processing of queries.' AS string);
+```
+
+Result:
+
+``` text
+┌─HammingDistance─┐
+│ 2 │
+└─────────────────┘
+```
diff --git a/docs/en/sql-reference/functions/tuple-map-functions.md b/docs/en/sql-reference/functions/tuple-map-functions.md
index 1d4839cbbf9..8b0710c0182 100644
--- a/docs/en/sql-reference/functions/tuple-map-functions.md
+++ b/docs/en/sql-reference/functions/tuple-map-functions.md
@@ -66,7 +66,6 @@ Result:

- [Map(key, value)](../../sql-reference/data-types/map.md) data type

-
## mapAdd {#function-mapadd}

Collect all the keys and sum corresponding values.
diff --git a/docs/en/sql-reference/functions/type-conversion-functions.md b/docs/en/sql-reference/functions/type-conversion-functions.md
index 8a793b99ac9..d8d13d81d97 100644
--- a/docs/en/sql-reference/functions/type-conversion-functions.md
+++ b/docs/en/sql-reference/functions/type-conversion-functions.md
@@ -381,12 +381,10 @@ This function accepts 16 bytes string, and returns UUID containing bytes represe
 reinterpretAsUUID(fixed_string)
```

-**Parameters**
+**Arguments**

- `fixed_string` — Big-endian byte string. [FixedString](../../sql-reference/data-types/fixedstring.md#fixedstring).

-## reinterpret(x, T) {#type_conversion_function-reinterpret}
-
**Returned value**

- The UUID type value. [UUID](../../sql-reference/data-types/uuid.md#uuid-data-type).
@@ -398,9 +396,7 @@ String to UUID.
 Query:

``` sql
-SELECT reinterpret(toInt8(-1), 'UInt8') as int_to_uint,
-    reinterpret(toInt8(1), 'Float32') as int_to_float,
-    reinterpret('1', 'UInt32') as string_to_int;
+SELECT reinterpretAsUUID(reverse(unhex('000102030405060708090a0b0c0d0e0f')));
```

Result:
@@ -431,15 +427,51 @@ Result:
 └─────────────────────┘
```

+## reinterpret(x, T) {#type_conversion_function-reinterpret}
+
+Uses the same source in-memory byte sequence for the `x` value and reinterprets it to the destination type.
+
+Query:
+```sql
+SELECT reinterpret(toInt8(-1), 'UInt8') as int_to_uint,
+    reinterpret(toInt8(1), 'Float32') as int_to_float,
+    reinterpret('1', 'UInt32') as string_to_int;
+```
+
+Result:
+
+```
+┌─int_to_uint─┬─int_to_float─┬─string_to_int─┐
+│ 255 │ 1e-45 │ 49 │
+└─────────────┴──────────────┴───────────────┘
+```
+
## CAST(x, T) {#type_conversion_function-cast}

-Converts input value `x` to the `T` data type.
+Converts input value `x` to the `T` data type. Unlike the `reinterpret` function, it uses the external representation of the `x` value.

The syntax `CAST(x AS t)` is also supported.

Note, that if value `x` does not fit the bounds of type T, the function overflows. For example, CAST(-1, 'UInt8') returns 255.
-**Example**
+**Examples**
+
+Query:
+
+```sql
+SELECT
+    cast(toInt8(-1), 'UInt8') AS cast_int_to_uint,
+    cast(toInt8(1), 'Float32') AS cast_int_to_float,
+    cast('1', 'UInt32') AS cast_string_to_int
+```
+
+Result:
+
+```
+┌─cast_int_to_uint─┬─cast_int_to_float─┬─cast_string_to_int─┐
+│ 255 │ 1 │ 1 │
+└──────────────────┴───────────────────┴────────────────────┘
+```

Query:

@@ -634,6 +666,7 @@ Result:
```

## parseDateTimeBestEffort {#parsedatetimebesteffort}
+## parseDateTime32BestEffort {#parsedatetime32besteffort}

Converts a date and time in the [String](../../sql-reference/data-types/string.md) representation to [DateTime](../../sql-reference/data-types/datetime.md#data_type-datetime) data type.
@@ -822,10 +855,12 @@ Result:
```

## parseDateTimeBestEffortOrNull {#parsedatetimebesteffortornull}
+## parseDateTime32BestEffortOrNull {#parsedatetime32besteffortornull}

-Same as for [parseDateTimeBestEffort](#parsedatetimebesteffort) except that it returns null when it encounters a date format that cannot be processed.
+Same as for [parseDateTimeBestEffort](#parsedatetimebesteffort) except that it returns `NULL` when it encounters a date format that cannot be processed.

## parseDateTimeBestEffortOrZero {#parsedatetimebesteffortorzero}
+## parseDateTime32BestEffortOrZero {#parsedatetime32besteffortorzero}

Same as for [parseDateTimeBestEffort](#parsedatetimebesteffort) except that it returns zero date or zero date time when it encounters a date format that cannot be processed.
@@ -1001,6 +1036,57 @@ Result:
 └─────────────────────────────────┘
```

+## parseDateTime64BestEffort {#parsedatetime64besteffort}
+
+Same as the [parseDateTimeBestEffort](#parsedatetimebesteffort) function, but it also parses milliseconds and microseconds and returns the `DateTime64(3)` or `DateTime64(6)` data type.
+
+**Syntax**
+
+``` sql
+parseDateTime64BestEffort(time_string [, precision [, time_zone]])
+```
+
+**Arguments**
+
+- `time_string` — String containing a date or date with time to convert. [String](../../sql-reference/data-types/string.md).
+- `precision` — `3` for milliseconds, `6` for microseconds. Default `3`. Optional. [UInt8](../../sql-reference/data-types/int-uint.md).
+- `time_zone` — [Timezone](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone). The function parses `time_string` according to the timezone. Optional. [String](../../sql-reference/data-types/string.md).
+
+**Examples**
+
+Query:
+
+```sql
+SELECT parseDateTime64BestEffort('2021-01-01') AS a, toTypeName(a) AS t
+UNION ALL
+SELECT parseDateTime64BestEffort('2021-01-01 01:01:00.12346') AS a, toTypeName(a) AS t
+UNION ALL
+SELECT parseDateTime64BestEffort('2021-01-01 01:01:00.12346',6) AS a, toTypeName(a) AS t
+UNION ALL
+SELECT parseDateTime64BestEffort('2021-01-01 01:01:00.12346',3,'Europe/Moscow') AS a, toTypeName(a) AS t
+FORMAT PrettyCompactMonoBlock
+```
+
+Result:
+
+```
+┌──────────────────────────a─┬─t──────────────────────────────┐
+│ 2021-01-01 01:01:00.123000 │ DateTime64(3)                  │
+│ 2021-01-01 00:00:00.000000 │ DateTime64(3)                  │
+│ 2021-01-01 01:01:00.123460 │ DateTime64(6)                  │
+│ 2020-12-31 22:01:00.123000 │ DateTime64(3, 'Europe/Moscow') │
+└────────────────────────────┴────────────────────────────────┘
+```
+
+## parseDateTime64BestEffortOrNull {#parsedatetime64besteffortornull}
+
+Same as for [parseDateTime64BestEffort](#parsedatetime64besteffort) except that it returns `NULL` when it encounters a date format that cannot be processed. 
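+
+For example, the following illustrative query (a sketch; output formatting may vary) returns a parsed `DateTime64` value for a valid string and `NULL` for an unparsable one:
+
+``` sql
+SELECT parseDateTime64BestEffortOrNull('2021-01-01 01:01:00.123'), parseDateTime64BestEffortOrNull('unparsable');
+```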
+
+## parseDateTime64BestEffortOrZero {#parsedatetime64besteffortorzero}
+
+Same as for [parseDateTime64BestEffort](#parsedatetime64besteffort) except that it returns zero date or zero date time when it encounters a date format that cannot be processed.
+
## toLowCardinality {#tolowcardinality}

Converts input parameter to the [LowCardianlity](../../sql-reference/data-types/lowcardinality.md) version of same data type.

To convert data from the `LowCardinality` data type use the [CAST](#type_conversion_function-cast) function. For example, `CAST(x as String)`.

**Syntax**

-``` sql
+```sql
toLowCardinality(expr)
```

**Arguments**

- `expr` — [Expression](../../sql-reference/syntax.md#syntax-expressions) resulting in one of the [supported data types](../../sql-reference/data-types/index.md#data_types).

**Returned values**

- Result of `expr`.

Type: `LowCardinality(expr_result_type)`

**Example**

Query:

-``` sql
+```sql
SELECT toLowCardinality('1');
```

Result:
@@ -1045,7 +1131,8 @@ Result:

## toUnixTimestamp64Nano {#tounixtimestamp64nano}

-Converts a `DateTime64` to a `Int64` value with fixed sub-second precision. Input value is scaled up or down appropriately depending on it precision. Please note that output value is a timestamp in UTC, not in timezone of `DateTime64`.
+Converts a `DateTime64` to an `Int64` value with fixed sub-second precision.
+Input value is scaled up or down appropriately depending on its precision. Please note that output value is a timestamp in UTC, not in timezone of `DateTime64`.

**Syntax**

@@ -1078,6 +1165,8 @@ Result:
 └──────────────────────────────┘
```

+Query:
+
``` sql
WITH toDateTime64('2019-09-16 19:20:12.345678910', 6) AS dt64
SELECT toUnixTimestamp64Nano(dt64);
@@ -1210,4 +1299,3 @@ Result:
 └───────────────────────────────────────────┘
```

-[Original article](https://clickhouse.tech/docs/en/query_language/functions/type_conversion_functions/)
diff --git a/docs/en/sql-reference/functions/url-functions.md b/docs/en/sql-reference/functions/url-functions.md
index 9e79ef2d0cb..9feb7a3c711 100644
--- a/docs/en/sql-reference/functions/url-functions.md
+++ b/docs/en/sql-reference/functions/url-functions.md
@@ -55,7 +55,7 @@ Type: `String`.
 **Example**

``` sql
-SELECT domain('svn+ssh://some.svn-hosting.com:80/repo/trunk')
+SELECT domain('svn+ssh://some.svn-hosting.com:80/repo/trunk');
```

``` text
@@ -98,7 +98,7 @@ Type: `String`.
 **Example**

``` sql
-SELECT topLevelDomain('svn+ssh://www.some.svn-hosting.com:80/repo/trunk')
+SELECT topLevelDomain('svn+ssh://www.some.svn-hosting.com:80/repo/trunk');
```

``` text
@@ -420,4 +420,3 @@ Removes the query string and fragment identifier. The question mark and number s
 Removes the ‘name’ URL parameter, if present. This function works under the assumption that the parameter name is encoded in the URL exactly the same way as in the passed argument.

-[Original article](https://clickhouse.tech/docs/en/query_language/functions/url_functions/)
diff --git a/docs/en/sql-reference/functions/uuid-functions.md b/docs/en/sql-reference/functions/uuid-functions.md
index 01a61c65b67..e7e55c699cd 100644
--- a/docs/en/sql-reference/functions/uuid-functions.md
+++ b/docs/en/sql-reference/functions/uuid-functions.md
@@ -165,4 +165,3 @@ SELECT

- [dictGetUUID](../../sql-reference/functions/ext-dict-functions.md#ext_dict_functions-other)

-[Original article](https://clickhouse.tech/docs/en/query_language/functions/uuid_function/)
diff --git a/docs/en/sql-reference/functions/ym-dict-functions.md b/docs/en/sql-reference/functions/ym-dict-functions.md
index 56530b5e83b..941f75ff006 100644
--- a/docs/en/sql-reference/functions/ym-dict-functions.md
+++ b/docs/en/sql-reference/functions/ym-dict-functions.md
@@ -112,7 +112,7 @@ Finds the highest continent in the hierarchy for the region.
**Syntax**

``` sql
-regionToTopContinent(id[, geobase]);
+regionToTopContinent(id[, geobase])
```

**Arguments**
@@ -150,4 +150,3 @@ Accepts a UInt32 number – the region ID from the Yandex geobase. A string with
 `ua` and `uk` both mean Ukrainian.

-[Original article](https://clickhouse.tech/docs/en/query_language/functions/ym_dict_functions/)
diff --git a/docs/en/sql-reference/operators/in.md b/docs/en/sql-reference/operators/in.md
index 34866f3d09a..0abeabc7f57 100644
--- a/docs/en/sql-reference/operators/in.md
+++ b/docs/en/sql-reference/operators/in.md
@@ -221,7 +221,7 @@ It also makes sense to specify a local table in the `GLOBAL IN` clause, in case
 When max_parallel_replicas is greater than 1, distributed queries are further transformed. For example, the following:

```sql
-SEELECT CounterID, count() FROM distributed_table_1 WHERE UserID IN (SELECT UserID FROM local_table_2 WHERE CounterID < 100)
+SELECT CounterID, count() FROM distributed_table_1 WHERE UserID IN (SELECT UserID FROM local_table_2 WHERE CounterID < 100)
 SETTINGS max_parallel_replicas=3
```
diff --git a/docs/en/sql-reference/operators/index.md b/docs/en/sql-reference/operators/index.md
index 274f7269bc8..e073d5f23f0 100644
--- a/docs/en/sql-reference/operators/index.md
+++ b/docs/en/sql-reference/operators/index.md
@@ -296,4 +296,3 @@ SELECT * FROM t_null WHERE y IS NOT NULL
 └───┴───┘
```

-[Original article](https://clickhouse.tech/docs/en/query_language/operators/)
diff --git a/docs/en/sql-reference/statements/alter/column.md b/docs/en/sql-reference/statements/alter/column.md
index 16aa266ebf9..d661bd4cd59 100644
--- a/docs/en/sql-reference/statements/alter/column.md
+++ b/docs/en/sql-reference/statements/alter/column.md
@@ -74,6 +74,9 @@ Deletes the column with the name `name`. If the `IF EXISTS` clause is specified,

 Deletes data from the file system. Since this deletes entire files, the query is completed almost instantly.

+!!! warning "Warning"
+    You can’t delete a column if it is referenced by a [materialized view](../../../sql-reference/statements/create/view.md#materialized). Such a query returns an error.
+
 Example:

``` sql
@@ -144,7 +147,7 @@ This query changes the `name` column properties:

-   TTL

- For examples of columns TTL modifying, see [Column TTL](../../engines/table_engines/mergetree_family/mergetree.md#mergetree-column-ttl).
+For examples of modifying column TTL, see [Column TTL](../../../engines/table-engines/mergetree-family/mergetree.md#mergetree-column-ttl).

 If the `IF EXISTS` clause is specified, the query won’t return an error if the column doesn’t exist.

@@ -180,7 +183,7 @@ ALTER TABLE table_name MODIFY column_name REMOVE property;
 ALTER TABLE table_with_ttl MODIFY COLUMN column_ttl REMOVE TTL;
```

-## See Also
+**See Also**

- [REMOVE TTL](ttl.md).

@@ -191,7 +194,7 @@ Renames an existing column.
 Syntax:

```sql
-ALTER TABLE table_name RENAME COLUMN column_name TO new_column_name;
+ALTER TABLE table_name RENAME COLUMN column_name TO new_column_name
```

**Example**
diff --git a/docs/en/sql-reference/statements/alter/index.md b/docs/en/sql-reference/statements/alter/index.md
index 30603122096..71333e6fcce 100644
--- a/docs/en/sql-reference/statements/alter/index.md
+++ b/docs/en/sql-reference/statements/alter/index.md
@@ -47,4 +47,3 @@ For `ALTER ... ATTACH|DETACH|DROP` queries, you can use the `replication_alter_p

 For `ALTER TABLE ... UPDATE|DELETE` queries the synchronicity is defined by the [mutations_sync](../../../operations/settings/settings.md#mutations_sync) setting.
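+
+For example, the following illustrative query (a sketch; the table name and filter are hypothetical) waits until the mutation finishes on all replicas:
+
+``` sql
+ALTER TABLE test_table DELETE WHERE id = 1 SETTINGS mutations_sync = 2;
+```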
-[Original article](https://clickhouse.tech/docs/en/query_language/alter/) diff --git a/docs/en/sql-reference/statements/alter/partition.md b/docs/en/sql-reference/statements/alter/partition.md index 42396223b86..b22f89928b9 100644 --- a/docs/en/sql-reference/statements/alter/partition.md +++ b/docs/en/sql-reference/statements/alter/partition.md @@ -16,7 +16,7 @@ The following operations with [partitions](../../../engines/table-engines/merget - [CLEAR COLUMN IN PARTITION](#alter_clear-column-partition) — Resets the value of a specified column in a partition. - [CLEAR INDEX IN PARTITION](#alter_clear-index-partition) — Resets the specified secondary index in a partition. - [FREEZE PARTITION](#alter_freeze-partition) — Creates a backup of a partition. -- [FETCH PARTITION](#alter_fetch-partition) — Downloads a partition from another server. +- [FETCH PARTITION\|PART](#alter_fetch-partition) — Downloads a part or partition from another server. - [MOVE PARTITION\|PART](#alter_move-partition) — Move partition/data part to another disk or volume. @@ -40,7 +40,7 @@ Read about setting the partition expression in a section [How to specify the par After the query is executed, you can do whatever you want with the data in the `detached` directory — delete it from the file system, or just leave it. -This query is replicated – it moves the data to the `detached` directory on all replicas. Note that you can execute this query only on a leader replica. To find out if a replica is a leader, perform the `SELECT` query to the [system.replicas](../../../operations/system-tables/replicas.md#system_tables-replicas) table. Alternatively, it is easier to make a `DETACH` query on all replicas - all the replicas throw an exception, except the leader replica. +This query is replicated – it moves the data to the `detached` directory on all replicas. Note that you can execute this query only on a leader replica. To find out if a replica is a leader, perform the `SELECT` query to the [system.replicas](../../../operations/system-tables/replicas.md#system_tables-replicas) table. Alternatively, it is easier to make a `DETACH` query on all replicas - all the replicas throw an exception, except the leader replicas (as multiple leaders are allowed). ## DROP PARTITION\|PART {#alter_drop-partition} @@ -85,9 +85,13 @@ ALTER TABLE visits ATTACH PART 201901_2_2_0; Read more about setting the partition expression in a section [How to specify the partition expression](#alter-how-to-specify-part-expr). -This query is replicated. The replica-initiator checks whether there is data in the `detached` directory. If data exists, the query checks its integrity. If everything is correct, the query adds the data to the table. All other replicas download the data from the replica-initiator. +This query is replicated. The replica-initiator checks whether there is data in the `detached` directory. +If data exists, the query checks its integrity. If everything is correct, the query adds the data to the table. -So you can put data to the `detached` directory on one replica, and use the `ALTER ... ATTACH` query to add it to the table on all replicas. +If the non-initiator replica, receiving the attach command, finds the part with the correct checksums in its own `detached` folder, it attaches the data without fetching it from other replicas. +If there is no part with the correct checksums, the data is downloaded from any replica having the part. + +You can put data to the `detached` directory on one replica and use the `ALTER ... 
ATTACH` query to add it to the table on all replicas. ## ATTACH PARTITION FROM {#alter_attach-partition-from} @@ -95,7 +99,8 @@ So you can put data to the `detached` directory on one replica, and use the `ALT ALTER TABLE table2 ATTACH PARTITION partition_expr FROM table1 ``` -This query copies the data partition from the `table1` to `table2` adds data to exsisting in the `table2`. Note that data won’t be deleted from `table1`. +This query copies the data partition from `table1` to `table2`. +Note that data will be deleted neither from `table1` nor from `table2`. For the query to run successfully, the following conditions must be met: @@ -191,29 +196,35 @@ ALTER TABLE table_name CLEAR INDEX index_name IN PARTITION partition_expr The query works similar to `CLEAR COLUMN`, but it resets an index instead of a column data. -## FETCH PARTITION {#alter_fetch-partition} +## FETCH PARTITION|PART {#alter_fetch-partition} ``` sql -ALTER TABLE table_name FETCH PARTITION partition_expr FROM 'path-in-zookeeper' +ALTER TABLE table_name FETCH PARTITION|PART partition_expr FROM 'path-in-zookeeper' ``` Downloads a partition from another server. This query only works for the replicated tables. The query does the following: -1. Downloads the partition from the specified shard. In ‘path-in-zookeeper’ you must specify a path to the shard in ZooKeeper. +1. Downloads the partition|part from the specified shard. In ‘path-in-zookeeper’ you must specify a path to the shard in ZooKeeper. 2. Then the query puts the downloaded data to the `detached` directory of the `table_name` table. Use the [ATTACH PARTITION\|PART](#alter_attach-partition) query to add the data to the table. For example: +1. FETCH PARTITION ``` sql ALTER TABLE users FETCH PARTITION 201902 FROM '/clickhouse/tables/01-01/visits'; ALTER TABLE users ATTACH PARTITION 201902; ``` +2. FETCH PART +``` sql +ALTER TABLE users FETCH PART 201901_2_2_0 FROM '/clickhouse/tables/01-01/visits'; +ALTER TABLE users ATTACH PART 201901_2_2_0; +``` Note that: -- The `ALTER ... FETCH PARTITION` query isn’t replicated. It places the partition to the `detached` directory only on the local server. +- The `ALTER ... FETCH PARTITION|PART` query isn’t replicated. It places the part or partition to the `detached` directory only on the local server. - The `ALTER TABLE ... ATTACH` query is replicated. It adds the data to all replicas. The data is added to one of the replicas from the `detached` directory, and to the others - from neighboring replicas. Before downloading, the system checks if the partition exists and the table structure matches. The most appropriate replica is selected automatically from the healthy replicas. 
diff --git a/docs/en/sql-reference/statements/alter/quota.md b/docs/en/sql-reference/statements/alter/quota.md
index a43b5255598..05130a569ab 100644
--- a/docs/en/sql-reference/statements/alter/quota.md
+++ b/docs/en/sql-reference/statements/alter/quota.md
@@ -36,4 +36,4 @@ For the default user limit the maximum execution time with half a second in 30 m
``` sql
ALTER QUOTA IF EXISTS qB FOR INTERVAL 30 minute MAX execution_time = 0.5, FOR INTERVAL 5 quarter MAX queries = 321, errors = 10 TO default;
-```
+```
\ No newline at end of file
diff --git a/docs/en/sql-reference/statements/alter/ttl.md b/docs/en/sql-reference/statements/alter/ttl.md
index e8bfb78ec68..9cd63d3b8fe 100644
--- a/docs/en/sql-reference/statements/alter/ttl.md
+++ b/docs/en/sql-reference/statements/alter/ttl.md
@@ -18,7 +18,7 @@ ALTER TABLE table_name MODIFY TTL ttl_expression;
TTL-property can be removed from table with the following query:
```sql
-ALTER TABLE table_name REMOVE TTL
+ALTER TABLE table_name REMOVE TTL
```
**Example**
@@ -79,7 +79,7 @@ The `TTL` is no longer there, so the second row is not deleted:
└───────────────────────┴─────────┴──────────────┘
```
-### See Also
+**See Also**
-- More about the [TTL-expression](../../../../sql-reference/statements/create/table#ttl-expression).
-- Modify column [with TTL](../../../../sql-reference/statements/alter/column#alter_modify-column).
+- More about the [TTL-expression](../../../sql-reference/statements/create/table.md#ttl-expression).
+- Modify column [with TTL](../../../sql-reference/statements/alter/column.md#alter_modify-column).
diff --git a/docs/en/sql-reference/statements/attach.md b/docs/en/sql-reference/statements/attach.md
index 035441ef5f1..01783e9cb2f 100644
--- a/docs/en/sql-reference/statements/attach.md
+++ b/docs/en/sql-reference/statements/attach.md
@@ -5,16 +5,55 @@ toc_title: ATTACH
# ATTACH Statement {#attach}
-This query is exactly the same as [CREATE](../../sql-reference/statements/create/table.md), but
+Attaches the table, for example, when moving a database to another server.
-- Instead of the word `CREATE` it uses the word `ATTACH`.
-- The query does not create data on the disk, but assumes that data is already in the appropriate places, and just adds information about the table to the server.
-  After executing an ATTACH query, the server will know about the existence of the table.
+The query does not create data on the disk, but assumes that data is already in the appropriate places, and just adds information about the table to the server. After executing an `ATTACH` query, the server will know about the existence of the table.
-If the table was previously detached ([DETACH](../../sql-reference/statements/detach.md)), meaning that its structure is known, you can use shorthand without defining the structure.
+If the table was previously detached using the [DETACH](../../sql-reference/statements/detach.md) query, meaning that its structure is known, you can use shorthand without defining the structure.
+
+## Syntax Forms {#syntax-forms}
+### Attach Existing Table {#attach-existing-table}
``` sql
ATTACH TABLE [IF NOT EXISTS] [db.]name [ON CLUSTER cluster]
```
-This query is used when starting the server. The server stores table metadata as files with `ATTACH` queries, which it simply runs at launch (with the exception of system tables, which are explicitly created on the server).
+This query is used when starting the server.
The server stores table metadata as files with `ATTACH` queries, which it simply runs at launch (with the exception of some system tables, which are explicitly created on the server).
+
+If the table was detached permanently, it won't be reattached at the server start, so you need to use the `ATTACH` query explicitly.
+
+### Create New Table And Attach Data {#create-new-table-and-attach-data}
+
+**With specified path to table data**
+
+```sql
+ATTACH TABLE name FROM 'path/to/data/' (col1 Type1, ...)
+```
+
+It creates a new table with the provided structure and attaches table data from the provided directory in `user_files`.
+
+**Example**
+
+Query:
+
+```sql
+DROP TABLE IF EXISTS test;
+INSERT INTO TABLE FUNCTION file('01188_attach/test/data.TSV', 'TSV', 's String, n UInt8') VALUES ('test', 42);
+ATTACH TABLE test FROM '01188_attach/test' (s String, n UInt8) ENGINE = File(TSV);
+SELECT * FROM test;
+```
+Result:
+
+```text
+┌─s────┬──n─┐
+│ test │ 42 │
+└──────┴────┘
+```
+
+**With specified table UUID** (only for `Atomic` database)
+
+```sql
+ATTACH TABLE name UUID '<uuid>' (col1 Type1, ...)
+```
+
+It creates a new table with the provided structure and attaches data from the table with the specified UUID.
\ No newline at end of file
diff --git a/docs/en/sql-reference/statements/check-table.md b/docs/en/sql-reference/statements/check-table.md
index 450447acaf8..65e6238ebbc 100644
--- a/docs/en/sql-reference/statements/check-table.md
+++ b/docs/en/sql-reference/statements/check-table.md
@@ -30,9 +30,36 @@ Performed over the tables with another table engines causes an exception.
Engines from the `*Log` family don’t provide automatic data recovery on failure. Use the `CHECK TABLE` query to track data loss in a timely manner.
-For `MergeTree` family engines, the `CHECK TABLE` query shows a check status for every individual data part of a table on the local server.
+## Checking the MergeTree Family Tables {#checking-mergetree-tables}
-**If the data is corrupted**
+For `MergeTree` family engines, if [check_query_single_value_result](../../operations/settings/settings.md#check_query_single_value_result) = 0, the `CHECK TABLE` query shows a check status for every individual data part of a table on the local server.
+
+```sql
+SET check_query_single_value_result = 0;
+CHECK TABLE test_table;
+```
+
+```text
+┌─part_path─┬─is_passed─┬─message─┐
+│ all_1_4_1 │         1 │         │
+│ all_1_4_2 │         1 │         │
+└───────────┴───────────┴─────────┘
+```
+
+If `check_query_single_value_result` = 1, the `CHECK TABLE` query shows the general table check status.
+
+```sql
+SET check_query_single_value_result = 1;
+CHECK TABLE test_table;
+```
+
+```text
+┌─result─┐
+│      1 │
+└────────┘
+```
+
+## If the Data Is Corrupted {#if-data-is-corrupted}
If the table is corrupted, you can copy the non-corrupted data to another table. To do this:
diff --git a/docs/en/sql-reference/statements/create/quota.md b/docs/en/sql-reference/statements/create/quota.md
index 71416abf588..0698d9bede5 100644
--- a/docs/en/sql-reference/statements/create/quota.md
+++ b/docs/en/sql-reference/statements/create/quota.md
@@ -18,7 +18,7 @@ CREATE QUOTA [IF NOT EXISTS | OR REPLACE] name [ON CLUSTER cluster_name]
    [TO {role [,...] | ALL | ALL EXCEPT role [,...]}]
```
-Keys `user_name`, `ip_address`, `client_key`, `client_key, user_name` and `client_key, ip_address` correspond to the fields in the [system.quotas](../../../operations/system-tables/quotas.md) table.
+Keys `user_name`, `ip_address`, `client_key`, `client_key, user_name` and `client_key, ip_address` correspond to the fields in the [system.quotas](../../../operations/system-tables/quotas.md) table.
Parameters `queries`, `query_selects`, `query_inserts`, `errors`, `result_rows`, `result_bytes`, `read_rows`, `read_bytes`, `execution_time` correspond to the fields in the [system.quotas_usage](../../../operations/system-tables/quotas_usage.md) table.
diff --git a/docs/en/sql-reference/statements/create/row-policy.md b/docs/en/sql-reference/statements/create/row-policy.md
index cbe639c6fc5..1df7cc36995 100644
--- a/docs/en/sql-reference/statements/create/row-policy.md
+++ b/docs/en/sql-reference/statements/create/row-policy.md
@@ -5,39 +5,84 @@ toc_title: ROW POLICY
# CREATE ROW POLICY {#create-row-policy-statement}
-Creates [filters for rows](../../../operations/access-rights.md#row-policy-management), which a user can read from a table.
+Creates a [row policy](../../../operations/access-rights.md#row-policy-management), i.e. a filter used to determine which rows a user can read from a table.
+
+!!! note "Warning"
+    Row policies make sense only for users with readonly access. If a user can modify a table or copy partitions between tables, it defeats the restrictions of row policies.
Syntax:
``` sql
CREATE [ROW] POLICY [IF NOT EXISTS | OR REPLACE] policy_name1 [ON CLUSTER cluster_name1] ON [db1.]table1
[, policy_name2 [ON CLUSTER cluster_name2] ON [db2.]table2 ...]
+    [FOR SELECT] USING condition
[AS {PERMISSIVE | RESTRICTIVE}]
-    [FOR SELECT]
-    [USING condition]
[TO {role1 [, role2 ...] | ALL | ALL EXCEPT role1 [, role2 ...]}]
```
-`ON CLUSTER` clause allows creating row policies on a cluster, see [Distributed DDL](../../../sql-reference/distributed-ddl.md).
+## USING Clause {#create-row-policy-using}
-## AS Clause {#create-row-policy-as}
-
-Using this section you can create permissive or restrictive policies.
-
-Permissive policy grants access to rows. Permissive policies which apply to the same table are combined together using the boolean `OR` operator. Policies are permissive by default.
-
-Restrictive policy restricts access to rows. Restrictive policies which apply to the same table are combined together using the boolean `AND` operator.
-
-Restrictive policies apply to rows that passed the permissive filters. If you set restrictive policies but no permissive policies, the user can’t get any row from the table.
+Allows specifying a condition to filter rows. A user will see a row if the condition evaluates to non-zero for the row.
## TO Clause {#create-row-policy-to}
-In the section `TO` you can provide a mixed list of roles and users, for example, `CREATE ROW POLICY ... TO accountant, john@localhost`.
+In the section `TO` you can provide a list of users and roles this policy should work for. For example, `CREATE ROW POLICY ... TO accountant, john@localhost`.
-Keyword `ALL` means all the ClickHouse users including current user. Keywords `ALL EXCEPT` allow to exclude some users from the all users list, for example, `CREATE ROW POLICY ... TO ALL EXCEPT accountant, john@localhost`
+Keyword `ALL` means all the ClickHouse users, including the current user. Keyword `ALL EXCEPT` allows excluding some users from the all-users list, for example, `CREATE ROW POLICY ... TO ALL EXCEPT accountant, john@localhost`
-## Examples {#examples}
+!!! note "Note"
+    If there are no row policies defined for a table then any user can `SELECT` all the rows from the table.
Defining one or more row policies for the table makes access to the table depend on the row policies, no matter whether those row policies are defined for the current user or not. For example, the following policy
+
+    `CREATE ROW POLICY pol1 ON mydb.table1 USING b=1 TO mira, peter`
-`CREATE ROW POLICY filter ON mydb.mytable FOR SELECT USING a<1000 TO accountant, john@localhost`
+
+    forbids the users `mira` and `peter` from seeing the rows with `b != 1`, and any non-mentioned user (e.g., the user `paul`) will see no rows from `mydb.table1` at all.
+
+    If that's not desirable, it can be fixed by adding one more row policy, like the following:
-`CREATE ROW POLICY filter ON mydb.mytable FOR SELECT USING a<1000 TO ALL EXCEPT mira`
+
+    `CREATE ROW POLICY pol2 ON mydb.table1 USING 1 TO ALL EXCEPT mira, peter`
+
+## AS Clause {#create-row-policy-as}
+
+It's allowed to have more than one policy enabled on the same table for the same user at the same time, so we need a way to combine the conditions from multiple policies.
+
+By default policies are combined using the boolean `OR` operator. For example, the following policies
+
+``` sql
+CREATE ROW POLICY pol1 ON mydb.table1 USING b=1 TO mira, peter
+CREATE ROW POLICY pol2 ON mydb.table1 USING c=2 TO peter, antonio
+```
+
+enable the user `peter` to see rows with either `b=1` or `c=2`.
+
+The `AS` clause specifies how policies should be combined with other policies. Policies can be either permissive or restrictive. By default policies are permissive, which means they are combined using the boolean `OR` operator.
+
+A policy can be defined as restrictive as an alternative. Restrictive policies are combined using the boolean `AND` operator.
+
+Here is the general formula:
+
+```
+row_is_visible = (one or more of the permissive policies' conditions are non-zero) AND
+                 (all of the restrictive policies' conditions are non-zero)
+```
+
+For example, the following policies
+
+``` sql
+CREATE ROW POLICY pol1 ON mydb.table1 USING b=1 TO mira, peter
+CREATE ROW POLICY pol2 ON mydb.table1 USING c=2 AS RESTRICTIVE TO peter, antonio
+```
+
+enable the user `peter` to see rows only if both `b=1` AND `c=2`.
+
+## ON CLUSTER Clause {#create-row-policy-on-cluster}
+
+Allows creating row policies on a cluster, see [Distributed DDL](../../../sql-reference/distributed-ddl.md).
+
+
+## Examples
+
+`CREATE ROW POLICY filter1 ON mydb.mytable USING a<1000 TO accountant, john@localhost`
+
+`CREATE ROW POLICY filter2 ON mydb.mytable USING a<1000 AND b=5 TO ALL EXCEPT mira`
+
+`CREATE ROW POLICY filter3 ON mydb.mytable USING 1 TO admin`
diff --git a/docs/en/sql-reference/statements/create/table.md b/docs/en/sql-reference/statements/create/table.md
index 0090eec14b7..5f1f0151350 100644
--- a/docs/en/sql-reference/statements/create/table.md
+++ b/docs/en/sql-reference/statements/create/table.md
@@ -47,19 +47,38 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name AS table_function()
Creates a table with the same result as that of the [table function](../../../sql-reference/table-functions/index.md#table-functions) specified. The created table will also work in the same way as the corresponding table function that was specified.
+### From SELECT query {#from-select-query}
+
``` sql
-CREATE TABLE [IF NOT EXISTS] [db.]table_name ENGINE = engine AS SELECT ...
+CREATE TABLE [IF NOT EXISTS] [db.]table_name[(name1 [type1], name2 [type2], ...)] ENGINE = engine AS SELECT ...
```
-Creates a table with a structure like the result of the `SELECT` query, with the `engine` engine, and fills it with data from SELECT.
+Creates a table with a structure like the result of the `SELECT` query, with the `engine` engine, and fills it with data from `SELECT`. You can also explicitly specify the columns description.
-In all cases, if `IF NOT EXISTS` is specified, the query won’t return an error if the table already exists. In this case, the query won’t do anything.
+If the table already exists and `IF NOT EXISTS` is specified, the query won’t do anything.
There can be other clauses after the `ENGINE` clause in the query. See detailed documentation on how to create tables in the descriptions of [table engines](../../../engines/table-engines/index.md#table_engines).
+
+**Example**
+
+Query:
+
+``` sql
+CREATE TABLE t1 (x String) ENGINE = Memory AS SELECT 1;
+SELECT x, toTypeName(x) FROM t1;
+```
+
+Result:
+
+```text
+┌─x─┬─toTypeName(x)─┐
+│ 1 │ String        │
+└───┴───────────────┘
+```
+
## NULL Or NOT NULL Modifiers {#null-modifiers}
-`NULL` and `NOT NULL` modifiers after data type in column definition allow or do not allow it to be [Nullable](../../../sql-reference/data-types/nullable.md#data_type-nullable).
+`NULL` and `NOT NULL` modifiers after data type in column definition allow or do not allow it to be [Nullable](../../../sql-reference/data-types/nullable.md#data_type-nullable). If the type is not `Nullable` and `NULL` is specified, the column will be treated as `Nullable`; if `NOT NULL` is specified, it will not. For example, `INT NULL` is the same as `Nullable(INT)`. If the type is `Nullable` and the `NULL` or `NOT NULL` modifiers are specified, an exception will be thrown.
@@ -109,16 +128,16 @@ It is not possible to set default values for elements in nested data structures.
## Primary Key {#primary-key}
-You can define a [primary key](../../../engines/table-engines/mergetree-family/mergetree.md#primary-keys-and-indexes-in-queries) when creating a table. Primary key can be specified in two ways:
+You can define a [primary key](../../../engines/table-engines/mergetree-family/mergetree.md#primary-keys-and-indexes-in-queries) when creating a table. Primary key can be specified in two ways:
- Inside the column list
``` sql
-CREATE TABLE db.table_name
-(
-    name1 type1, name2 type2, ...,
    PRIMARY KEY(expr1[, expr2,...])]
-)
+CREATE TABLE db.table_name
+(
+    name1 type1, name2 type2, ...,
    PRIMARY KEY(expr1[, expr2,...])]
+)
ENGINE = engine;
```
- Outside the column list
``` sql
CREATE TABLE db.table_name
-(
+(
    name1 type1, name2 type2, ...
-)
+)
ENGINE = engine
PRIMARY KEY(expr1[, expr2,...]);
```
@@ -285,7 +304,9 @@ REPLACE TABLE myOldTable SELECT * FROM myOldTable WHERE CounterID <12345;
### Syntax
-{CREATE [OR REPLACE]|REPLACE} TABLE [db.]table_name
+``` sql
+{CREATE [OR REPLACE] | REPLACE} TABLE [db.]table_name
+```
All syntax forms for `CREATE` query also work for this query.
`REPLACE` for a non-existent table will cause an error.
@@ -333,5 +354,3 @@ SELECT * FROM base.t1;
│ 3 │
└───┘
```
-
-
[Original article](https://clickhouse.tech/docs/en/sql-reference/statements/create/table)
diff --git a/docs/en/sql-reference/statements/create/view.md b/docs/en/sql-reference/statements/create/view.md
index 8acd58f4338..633db355d4a 100644
--- a/docs/en/sql-reference/statements/create/view.md
+++ b/docs/en/sql-reference/statements/create/view.md
@@ -62,13 +62,13 @@ Note that materialized view is influenced by [optimize_on_insert](../../../opera
Views look the same as normal tables.
For example, they are listed in the result of the `SHOW TABLES` query.
-There isn’t a separate query for deleting views. To delete a view, use [DROP TABLE](../../../sql-reference/statements/drop.md).
+To delete a view, use [DROP VIEW](../../../sql-reference/statements/drop.md#drop-view), although `DROP TABLE` also works for views.
## Live View (Experimental) {#live-view}
!!! important "Important"
    This is an experimental feature that may change in backwards-incompatible ways in the future releases.
-    Enable usage of live views and `WATCH` query using `set allow_experimental_live_view = 1`.
+    Enable usage of live views and the `WATCH` query using the [allow_experimental_live_view](../../../operations/settings/settings.md#allow-experimental-live-view) setting. Run the command `SET allow_experimental_live_view = 1`.
```sql
@@ -90,7 +90,9 @@ Live views work similarly to how a query in a distributed table works. But inste
See [WITH REFRESH](#live-view-with-refresh) to force periodic updates of a live view that in some cases can be used as a workaround.
-You can watch for changes in the live view query result using the [WATCH](../../../sql-reference/statements/watch.md) query
+### Monitoring Changes {#live-view-monitoring}
+
+You can monitor changes in the `LIVE VIEW` query result using the [WATCH](../../../sql-reference/statements/watch.md) query.
```sql
WATCH [db.]live_view
```
@@ -102,11 +104,10 @@ WATCH [db.]live_view
CREATE TABLE mt (x Int8) Engine = MergeTree ORDER BY x;
CREATE LIVE VIEW lv AS SELECT sum(x) FROM mt;
```
-
Watch a live view while doing a parallel insert into the source table.
```sql
-WATCH lv
+WATCH lv;
```
```bash
@@ -128,16 +129,16 @@ INSERT INTO mt VALUES (2);
INSERT INTO mt VALUES (3);
```
-or add [EVENTS](../../../sql-reference/statements/watch.md#events-clause) clause to just get change events.
+Or add the [EVENTS](../../../sql-reference/statements/watch.md#events-clause) clause to just get change events.
```sql
-WATCH [db.]live_view EVENTS
+WATCH [db.]live_view EVENTS;
```
**Example:**
```sql
-WATCH lv EVENTS
+WATCH lv EVENTS;
```
```bash
@@ -163,15 +164,15 @@ SELECT * FROM [db.]live_view WHERE ...
You can force live view refresh using the `ALTER LIVE VIEW [db.]table_name REFRESH` statement.
-### With Timeout {#live-view-with-timeout}
+### WITH TIMEOUT Clause {#live-view-with-timeout}
-When a live view is create with a `WITH TIMEOUT` clause then the live view will be dropped automatically after the specified number of seconds elapse since the end of the last [WATCH](../../../sql-reference/statements/watch.md) query that was watching the live view.
+When a live view is created with a `WITH TIMEOUT` clause then the live view will be dropped automatically after the specified number of seconds elapse since the end of the last [WATCH](../../../sql-reference/statements/watch.md) query that was watching the live view.
```sql
CREATE LIVE VIEW [db.]table_name WITH TIMEOUT [value_in_sec] AS SELECT ...
```
-If the timeout value is not specified then the value specified by the `temporary_live_view_timeout` setting is used.
+If the timeout value is not specified then the value specified by the [temporary_live_view_timeout](../../../operations/settings/settings.md#temporary-live-view-timeout) setting is used.
**Example:** @@ -180,7 +181,7 @@ CREATE TABLE mt (x Int8) Engine = MergeTree ORDER BY x; CREATE LIVE VIEW lv WITH TIMEOUT 15 AS SELECT sum(x) FROM mt; ``` -### With Refresh {#live-view-with-refresh} +### WITH REFRESH Clause {#live-view-with-refresh} When a live view is created with a `WITH REFRESH` clause then it will be automatically refreshed after the specified number of seconds elapse since the last refresh or trigger. @@ -188,7 +189,7 @@ When a live view is created with a `WITH REFRESH` clause then it will be automat CREATE LIVE VIEW [db.]table_name WITH REFRESH [value_in_sec] AS SELECT ... ``` -If the refresh value is not specified then the value specified by the `periodic_live_view_refresh` setting is used. +If the refresh value is not specified then the value specified by the [periodic_live_view_refresh](../../../operations/settings/settings.md#periodic-live-view-refresh) setting is used. **Example:** @@ -231,7 +232,7 @@ WATCH lv Code: 60. DB::Exception: Received from localhost:9000. DB::Exception: Table default.lv doesn't exist.. ``` -### Usage +### Usage {#live-view-usage} Most common uses of live view tables include: @@ -240,15 +241,4 @@ Most common uses of live view tables include: - Watching for table changes and triggering a follow-up select queries. - Watching metrics from system tables using periodic refresh. -### Settings {#live-view-settings} - -You can use the following settings to control the behaviour of live views. - -- `allow_experimental_live_view` - enable live views. Default is `0`. -- `live_view_heartbeat_interval` - the heartbeat interval in seconds to indicate live query is alive. Default is `15` seconds. -- `max_live_view_insert_blocks_before_refresh` - maximum number of inserted blocks after which - mergeable blocks are dropped and query is re-executed. Default is `64` inserts. -- `temporary_live_view_timeout` - interval after which live view with timeout is deleted. Default is `5` seconds. -- `periodic_live_view_refresh` - interval after which periodically refreshed live view is forced to refresh. Default is `60` seconds. - [Original article](https://clickhouse.tech/docs/en/sql-reference/statements/create/view/) diff --git a/docs/en/sql-reference/statements/detach.md b/docs/en/sql-reference/statements/detach.md index 62a7c0cc1e0..59f5b297ece 100644 --- a/docs/en/sql-reference/statements/detach.md +++ b/docs/en/sql-reference/statements/detach.md @@ -5,12 +5,66 @@ toc_title: DETACH # DETACH Statement {#detach} -Deletes information about the ‘name’ table from the server. The server stops knowing about the table’s existence. +Makes the server "forget" about the existence of the table or materialized view. + +Syntax: ``` sql -DETACH TABLE [IF EXISTS] [db.]name [ON CLUSTER cluster] +DETACH TABLE|VIEW [IF EXISTS] [db.]name [ON CLUSTER cluster] [PERMANENTLY] ``` -This does not delete the table’s data or metadata. On the next server launch, the server will read the metadata and find out about the table again. +Detaching does not delete the data or metadata for the table or materialized view. If the table or view was not detached `PERMANENTLY`, on the next server launch the server will read the metadata and recall the table/view again. If the table or view was detached `PERMANENTLY`, there will be no automatic recall. -Similarly, a “detached” table can be re-attached using the `ATTACH` query (with the exception of system tables, which do not have metadata stored for them). 
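+For instance, a minimal hedged sketch of a permanent detach, following the syntax above — the table name is hypothetical:
+
+``` sql
+DETACH TABLE test PERMANENTLY;
+```
+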
+Whether the table was detached permanently or not, in both cases you can reattach it using the [ATTACH](../../sql-reference/statements/attach.md) query. System log tables (e.g. `query_log`, `text_log`) can also be attached back. Other system tables can't be reattached. On the next server launch the server will recall those tables again.
+
+`ATTACH MATERIALIZED VIEW` doesn't work with short syntax (without `SELECT`), but you can attach it using the `ATTACH TABLE` query.
+
+Note that you cannot permanently detach a table which is already detached (temporarily). But you can attach it back and then detach it permanently again.
+
+Also, you cannot [DROP](../../sql-reference/statements/drop.md#drop-table) a permanently detached table, [CREATE TABLE](../../sql-reference/statements/create/table.md) with the same name as a permanently detached table, or replace it with another table using the [RENAME TABLE](../../sql-reference/statements/rename.md) query.
+
+**Example**
+
+Creating a table:
+
+Query:
+
+``` sql
+CREATE TABLE test ENGINE = Log AS SELECT * FROM numbers(10);
+SELECT * FROM test;
+```
+
+Result:
+
+``` text
+┌─number─┐
+│      0 │
+│      1 │
+│      2 │
+│      3 │
+│      4 │
+│      5 │
+│      6 │
+│      7 │
+│      8 │
+│      9 │
+└────────┘
+```
+
+Detaching the table:
+
+Query:
+
+``` sql
+DETACH TABLE test;
+SELECT * FROM test;
+```
+
+Result:
+
+``` text
+Received exception from server (version 21.4.1):
+Code: 60. DB::Exception: Received from localhost:9000. DB::Exception: Table default.test doesn't exist.
+```
+
+[Original article](https://clickhouse.tech/docs/en/sql-reference/statements/detach/)
diff --git a/docs/en/sql-reference/statements/explain.md b/docs/en/sql-reference/statements/explain.md
index 3cca29801dd..8083fa160fa 100644
--- a/docs/en/sql-reference/statements/explain.md
+++ b/docs/en/sql-reference/statements/explain.md
@@ -5,7 +5,7 @@ toc_title: EXPLAIN
# EXPLAIN Statement {#explain}
-Show the execution plan of a statement.
+Shows the execution plan of a statement.
Syntax:
@@ -65,7 +65,7 @@ SelectWithUnionQuery (children 1)
### EXPLAIN SYNTAX {#explain-syntax}
-Return query after syntax optimizations.
+Returns query after syntax optimizations.
Example:
@@ -88,15 +88,16 @@ FROM
) AS `--.s`
CROSS JOIN system.numbers AS c
```
+
### EXPLAIN PLAN {#explain-plan}
Dump query plan steps.
Settings:
-- `header` — Print output header for step. Default: 0.
-- `description` — Print step description. Default: 1.
-- `actions` — Print detailed information about step actions. Default: 0.
+- `header` — Prints output header for step. Default: 0.
+- `description` — Prints step description. Default: 1.
+- `actions` — Prints detailed information about step actions. Default: 0.
Example:
@@ -115,15 +116,16 @@ Union
```
!!! note "Note"
-    Step and query cost estimation is not supported.
+    Step and query cost estimation is not supported.
### EXPLAIN PIPELINE {#explain-pipeline}
Settings:
-- `header` — Print header for each output port. Default: 0.
-- `graph` — Use DOT graph description language. Default: 0.
-- `compact` — Print graph in compact mode if graph is enabled. Default: 1.
+- `header` — Prints header for each output port. Default: 0.
+- `graph` — Prints a graph described in the [DOT](https://en.wikipedia.org/wiki/DOT_(graph_description_language)) graph description language. Default: 0.
+- `compact` — Prints graph in compact mode if `graph` setting is enabled. Default: 1.
+- `indexes` — Shows used indexes, the number of filtered parts, and granules for every index applied. Default: 0.
Supported for [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) tables. Example: diff --git a/docs/en/sql-reference/statements/grant.md b/docs/en/sql-reference/statements/grant.md index f3829de2fbb..89f35b5f701 100644 --- a/docs/en/sql-reference/statements/grant.md +++ b/docs/en/sql-reference/statements/grant.md @@ -91,7 +91,7 @@ Hierarchy of privileges: - `ALTER ADD CONSTRAINT` - `ALTER DROP CONSTRAINT` - `ALTER TTL` - - `ALTER MATERIALIZE TTL` + - `ALTER MATERIALIZE TTL` - `ALTER SETTINGS` - `ALTER MOVE PARTITION` - `ALTER FETCH PARTITION` @@ -102,9 +102,9 @@ Hierarchy of privileges: - [CREATE](#grant-create) - `CREATE DATABASE` - `CREATE TABLE` + - `CREATE TEMPORARY TABLE` - `CREATE VIEW` - `CREATE DICTIONARY` - - `CREATE TEMPORARY TABLE` - [DROP](#grant-drop) - `DROP DATABASE` - `DROP TABLE` @@ -150,7 +150,7 @@ Hierarchy of privileges: - `SYSTEM RELOAD` - `SYSTEM RELOAD CONFIG` - `SYSTEM RELOAD DICTIONARY` - - `SYSTEM RELOAD EMBEDDED DICTIONARIES` + - `SYSTEM RELOAD EMBEDDED DICTIONARIES` - `SYSTEM MERGES` - `SYSTEM TTL MERGES` - `SYSTEM FETCHES` @@ -276,10 +276,10 @@ Allows executing [ALTER](../../sql-reference/statements/alter/index.md) queries - `ALTER ADD CONSTRAINT`. Level: `TABLE`. Aliases: `ADD CONSTRAINT` - `ALTER DROP CONSTRAINT`. Level: `TABLE`. Aliases: `DROP CONSTRAINT` - `ALTER TTL`. Level: `TABLE`. Aliases: `ALTER MODIFY TTL`, `MODIFY TTL` - - `ALTER MATERIALIZE TTL`. Level: `TABLE`. Aliases: `MATERIALIZE TTL` + - `ALTER MATERIALIZE TTL`. Level: `TABLE`. Aliases: `MATERIALIZE TTL` - `ALTER SETTINGS`. Level: `TABLE`. Aliases: `ALTER SETTING`, `ALTER MODIFY SETTING`, `MODIFY SETTING` - `ALTER MOVE PARTITION`. Level: `TABLE`. Aliases: `ALTER MOVE PART`, `MOVE PARTITION`, `MOVE PART` - - `ALTER FETCH PARTITION`. Level: `TABLE`. Aliases: `FETCH PARTITION` + - `ALTER FETCH PARTITION`. Level: `TABLE`. Aliases: `ALTER FETCH PART`, `FETCH PARTITION`, `FETCH PART` - `ALTER FREEZE PARTITION`. Level: `TABLE`. Aliases: `FREEZE PARTITION` - `ALTER VIEW` Level: `GROUP` - `ALTER VIEW REFRESH`. Level: `VIEW`. Aliases: `ALTER LIVE VIEW REFRESH`, `REFRESH VIEW` @@ -304,9 +304,9 @@ Allows executing [CREATE](../../sql-reference/statements/create/index.md) and [A - `CREATE`. Level: `GROUP` - `CREATE DATABASE`. Level: `DATABASE` - `CREATE TABLE`. Level: `TABLE` + - `CREATE TEMPORARY TABLE`. Level: `GLOBAL` - `CREATE VIEW`. Level: `VIEW` - `CREATE DICTIONARY`. Level: `DICTIONARY` - - `CREATE TEMPORARY TABLE`. Level: `GLOBAL` **Notes** @@ -401,7 +401,7 @@ Allows a user to execute [SYSTEM](../../sql-reference/statements/system.md) quer - `SYSTEM RELOAD`. Level: `GROUP` - `SYSTEM RELOAD CONFIG`. Level: `GLOBAL`. Aliases: `RELOAD CONFIG` - `SYSTEM RELOAD DICTIONARY`. Level: `GLOBAL`. Aliases: `SYSTEM RELOAD DICTIONARIES`, `RELOAD DICTIONARY`, `RELOAD DICTIONARIES` - - `SYSTEM RELOAD EMBEDDED DICTIONARIES`. Level: `GLOBAL`. Aliases: R`ELOAD EMBEDDED DICTIONARIES` + - `SYSTEM RELOAD EMBEDDED DICTIONARIES`. Level: `GLOBAL`. Aliases: `RELOAD EMBEDDED DICTIONARIES` - `SYSTEM MERGES`. Level: `TABLE`. Aliases: `SYSTEM STOP MERGES`, `SYSTEM START MERGES`, `STOP MERGES`, `START MERGES` - `SYSTEM TTL MERGES`. Level: `TABLE`. Aliases: `SYSTEM STOP TTL MERGES`, `SYSTEM START TTL MERGES`, `STOP TTL MERGES`, `START TTL MERGES` - `SYSTEM FETCHES`. Level: `TABLE`. Aliases: `SYSTEM STOP FETCHES`, `SYSTEM START FETCHES`, `STOP FETCHES`, `START FETCHES` @@ -473,4 +473,3 @@ Doesn’t grant any privileges. The `ADMIN OPTION` privilege allows a user to grant their role to another user. 
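+
+For example, a brief hedged sketch of granting with admin option — the role and user names are hypothetical:
+
+``` sql
+-- Let john grant the accountant role to other users.
+GRANT accountant TO john WITH ADMIN OPTION;
+```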
-[Original article](https://clickhouse.tech/docs/en/query_language/grant/)
diff --git a/docs/en/sql-reference/statements/insert-into.md b/docs/en/sql-reference/statements/insert-into.md
index c517a515ab7..66effcccc3f 100644
--- a/docs/en/sql-reference/statements/insert-into.md
+++ b/docs/en/sql-reference/statements/insert-into.md
@@ -117,4 +117,3 @@ Performance will not decrease if:
- Data is added in real time.
- You upload data that is usually sorted by time.
-[Original article](https://clickhouse.tech/docs/en/query_language/insert_into/)
diff --git a/docs/en/sql-reference/statements/optimize.md b/docs/en/sql-reference/statements/optimize.md
index 9b16a12d2e2..247252d3f4e 100644
--- a/docs/en/sql-reference/statements/optimize.md
+++ b/docs/en/sql-reference/statements/optimize.md
@@ -5,20 +5,89 @@ toc_title: OPTIMIZE
# OPTIMIZE Statement {#misc_operations-optimize}
+This query tries to initialize an unscheduled merge of data parts for tables.
+
+!!! warning "Warning"
+    `OPTIMIZE` can’t fix the `Too many parts` error.
+
+**Syntax**
+
``` sql
-OPTIMIZE TABLE [db.]name [ON CLUSTER cluster] [PARTITION partition | PARTITION ID 'partition_id'] [FINAL] [DEDUPLICATE]
+OPTIMIZE TABLE [db.]name [ON CLUSTER cluster] [PARTITION partition | PARTITION ID 'partition_id'] [FINAL] [DEDUPLICATE [BY expression]]
```
-This query tries to initialize an unscheduled merge of data parts for tables with a table engine from the [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) family.
-
-The `OPTMIZE` query is also supported for the [MaterializedView](../../engines/table-engines/special/materializedview.md) and the [Buffer](../../engines/table-engines/special/buffer.md) engines. Other table engines aren’t supported.
+The `OPTIMIZE` query is supported for the [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) family, the [MaterializedView](../../engines/table-engines/special/materializedview.md) and the [Buffer](../../engines/table-engines/special/buffer.md) engines. Other table engines aren’t supported.
When `OPTIMIZE` is used with the [ReplicatedMergeTree](../../engines/table-engines/mergetree-family/replication.md) family of table engines, ClickHouse creates a task for merging and waits for execution on all nodes (if the `replication_alter_partitions_sync` setting is enabled).
- If `OPTIMIZE` doesn’t perform a merge for any reason, it doesn’t notify the client. To enable notifications, use the [optimize_throw_if_noop](../../operations/settings/settings.md#setting-optimize_throw_if_noop) setting.
- If you specify a `PARTITION`, only the specified partition is optimized. [How to set partition expression](../../sql-reference/statements/alter/index.md#alter-how-to-specify-part-expr).
- If you specify `FINAL`, optimization is performed even when all the data is already in one part. Also merge is forced even if concurrent merges are performed.
-- If you specify `DEDUPLICATE`, then completely identical rows will be deduplicated (all columns are compared), it makes sense only for the MergeTree engine.
+- If you specify `DEDUPLICATE`, then completely identical rows will be deduplicated (all columns are compared), unless a `BY` clause is specified; this makes sense only for the MergeTree engine (see the sketch after this list).
-!!! warning "Warning"
-    `OPTIMIZE` can’t fix the “Too many parts” error.
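+For example, a minimal hedged sketch combining these options — the table name `hits` and the partition value are hypothetical:
+
+``` sql
+-- Raise an exception instead of silently doing nothing if no merge is performed.
+SET optimize_throw_if_noop = 1;
+-- Force a merge of one partition and drop fully identical rows.
+OPTIMIZE TABLE hits PARTITION 201902 FINAL DEDUPLICATE;
+```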
+
+## BY expression {#by-expression}
+
+If you want to perform deduplication on a custom set of columns rather than on all of them, you can specify the list of columns explicitly or use any combination of [`*`](../../sql-reference/statements/select/index.md#asterisk), [`COLUMNS`](../../sql-reference/statements/select/index.md#columns-expression) or [`EXCEPT`](../../sql-reference/statements/select/index.md#except-modifier) expressions. The explicitly written or implicitly expanded list of columns must include all columns specified in the row ordering expression (both primary and sorting keys) and the partitioning expression (partitioning key).
+
+!!! note "Note"
+    Notice that `*` behaves just like in `SELECT`: `MATERIALIZED` and `ALIAS` columns are not used for expansion.
+    Also, it is an error to specify an empty list of columns, to write an expression that results in an empty list of columns, or to deduplicate by an `ALIAS` column.
+
+``` sql
+OPTIMIZE TABLE table DEDUPLICATE; -- the old one
+OPTIMIZE TABLE table DEDUPLICATE BY *; -- not the same as the old one, excludes MATERIALIZED columns (see the note above)
+OPTIMIZE TABLE table DEDUPLICATE BY * EXCEPT colX;
+OPTIMIZE TABLE table DEDUPLICATE BY * EXCEPT (colX, colY);
+OPTIMIZE TABLE table DEDUPLICATE BY col1,col2,col3;
+OPTIMIZE TABLE table DEDUPLICATE BY COLUMNS('column-matched-by-regex');
+OPTIMIZE TABLE table DEDUPLICATE BY COLUMNS('column-matched-by-regex') EXCEPT colX;
+OPTIMIZE TABLE table DEDUPLICATE BY COLUMNS('column-matched-by-regex') EXCEPT (colX, colY);
+```
+
+**Examples**
+
+Create a table:
+
+``` sql
+CREATE TABLE example (
+    primary_key Int32,
+    secondary_key Int32,
+    value UInt32,
+    partition_key UInt32,
+    materialized_value UInt32 MATERIALIZED 12345,
+    aliased_value UInt32 ALIAS 2,
+    PRIMARY KEY primary_key
+) ENGINE=MergeTree
+PARTITION BY partition_key
+ORDER BY (primary_key, secondary_key);
+```
+
+The 'old' deduplication: all columns are taken into account, i.e. a row is removed only if all values in all columns are equal to the corresponding values in the previous row.
+
+``` sql
+OPTIMIZE TABLE example FINAL DEDUPLICATE;
+```
+
+Deduplicate by all columns that are not `ALIAS` or `MATERIALIZED`: the `primary_key`, `secondary_key`, `value`, and `partition_key` columns.
+
+``` sql
+OPTIMIZE TABLE example FINAL DEDUPLICATE BY *;
+```
+
+Deduplicate by all columns that are not `ALIAS` or `MATERIALIZED` and explicitly not `materialized_value`: the `primary_key`, `secondary_key`, `value`, and `partition_key` columns.
+
+``` sql
+OPTIMIZE TABLE example FINAL DEDUPLICATE BY * EXCEPT materialized_value;
+```
+
+Deduplicate explicitly by the `primary_key`, `secondary_key`, and `partition_key` columns.
+``` sql
+OPTIMIZE TABLE example FINAL DEDUPLICATE BY primary_key, secondary_key, partition_key;
+```
+
+Deduplicate by any column matching a regex: the `primary_key`, `secondary_key`, and `partition_key` columns.
+
+``` sql
+OPTIMIZE TABLE example FINAL DEDUPLICATE BY COLUMNS('.*_key');
+```
diff --git a/docs/en/sql-reference/statements/rename.md b/docs/en/sql-reference/statements/rename.md
index 4f14ad016a3..a9dda6ed3b2 100644
--- a/docs/en/sql-reference/statements/rename.md
+++ b/docs/en/sql-reference/statements/rename.md
@@ -5,6 +5,14 @@ toc_title: RENAME
# RENAME Statement {#misc_operations-rename}
+## RENAME DATABASE {#misc_operations-rename_database}
+Renames a database. Supported only for the Atomic database engine.
+
+``` sql
+RENAME DATABASE atomic_database1 TO atomic_database2 [ON CLUSTER cluster]
+```
+
+## RENAME TABLE {#misc_operations-rename_table}
Renames one or more tables.
``` sql
diff --git a/docs/en/sql-reference/statements/select/index.md b/docs/en/sql-reference/statements/select/index.md
index e99ebef838c..0712ea8daa7 100644
--- a/docs/en/sql-reference/statements/select/index.md
+++ b/docs/en/sql-reference/statements/select/index.md
@@ -47,6 +47,7 @@ Specifics of each optional clause are covered in separate sections, which are li
- [SELECT clause](#select-clause)
- [DISTINCT clause](../../../sql-reference/statements/select/distinct.md)
- [LIMIT clause](../../../sql-reference/statements/select/limit.md)
+- [OFFSET clause](../../../sql-reference/statements/select/offset.md)
- [UNION clause](../../../sql-reference/statements/select/union.md)
- [INTO OUTFILE clause](../../../sql-reference/statements/select/into-outfile.md)
- [FORMAT clause](../../../sql-reference/statements/select/format.md)
@@ -57,6 +58,9 @@ Specifics of each optional clause are covered in separate sections, which are li
If you want to include all columns in the result, use the asterisk (`*`) symbol. For example, `SELECT * FROM ...`.
+
+### COLUMNS expression {#columns-expression}
+
To match some columns in the result with a [re2](https://en.wikipedia.org/wiki/RE2_(software)) regular expression, you can use the `COLUMNS` expression.
``` sql
diff --git a/docs/en/sql-reference/statements/select/limit.md b/docs/en/sql-reference/statements/select/limit.md
index 4b25efbe95a..6ed38b2dd64 100644
--- a/docs/en/sql-reference/statements/select/limit.md
+++ b/docs/en/sql-reference/statements/select/limit.md
@@ -12,6 +12,9 @@ toc_title: LIMIT
If there is no [ORDER BY](../../../sql-reference/statements/select/order-by.md) clause that explicitly sorts results, the choice of rows for the result may be arbitrary and non-deterministic.
+!!! note "Note"
+    The number of rows in the result set can also depend on the [limit](../../../operations/settings/settings.md#limit) setting.
+
## LIMIT … WITH TIES Modifier {#limit-with-ties}
When you set `WITH TIES` modifier for `LIMIT n[,m]` and specify `ORDER BY expr_list`, you will get in result first `n` or `n,m` rows and all rows with same `ORDER BY` fields values equal to row at position `n` for `LIMIT n` and `m` for `LIMIT n,m`.
diff --git a/docs/en/sql-reference/statements/select/offset.md b/docs/en/sql-reference/statements/select/offset.md
new file mode 100644
index 00000000000..3efd916bcb8
--- /dev/null
+++ b/docs/en/sql-reference/statements/select/offset.md
@@ -0,0 +1,86 @@
+---
+toc_title: OFFSET
+---
+
+# OFFSET FETCH Clause {#offset-fetch}
+
+`OFFSET` and `FETCH` allow you to retrieve data by portions. They specify a row block which you want to get by a single query.
+
+``` sql
+OFFSET offset_row_count {ROW | ROWS} [FETCH {FIRST | NEXT} fetch_row_count {ROW | ROWS} {ONLY | WITH TIES}]
+```
+
+The `offset_row_count` or `fetch_row_count` value can be a number or a literal constant.
You can omit `fetch_row_count`; by default, it equals 1.
+
+`OFFSET` specifies the number of rows to skip before starting to return rows from the query result set.
+
+The `FETCH` specifies the maximum number of rows that can be in the result of a query.
+
+The `ONLY` option is used to return rows that immediately follow the rows omitted by the `OFFSET`. In this case the `FETCH` is an alternative to the [LIMIT](../../../sql-reference/statements/select/limit.md) clause. For example, the following query
+
+``` sql
+SELECT * FROM test_fetch ORDER BY a OFFSET 1 ROW FETCH FIRST 3 ROWS ONLY;
+```
+
+is identical to the query
+
+``` sql
+SELECT * FROM test_fetch ORDER BY a LIMIT 3 OFFSET 1;
+```
+
+The `WITH TIES` option is used to return any additional rows that tie for the last place in the result set according to the `ORDER BY` clause. For example, if `fetch_row_count` is set to 5 but two additional rows match the values of the `ORDER BY` columns in the fifth row, the result set will contain seven rows.
+
+!!! note "Note"
+    According to the standard, the `OFFSET` clause must come before the `FETCH` clause if both are present.
+
+!!! note "Note"
+    The real offset can also depend on the [offset](../../../operations/settings/settings.md#offset) setting.
+
+## Examples {#examples}
+
+Input table:
+
+``` text
+┌─a─┬─b─┐
+│ 1 │ 1 │
+│ 2 │ 1 │
+│ 3 │ 4 │
+│ 1 │ 3 │
+│ 5 │ 4 │
+│ 0 │ 6 │
+│ 5 │ 7 │
+└───┴───┘
+```
+
+Usage of the `ONLY` option:
+
+``` sql
+SELECT * FROM test_fetch ORDER BY a OFFSET 3 ROW FETCH FIRST 3 ROWS ONLY;
+```
+
+Result:
+
+``` text
+┌─a─┬─b─┐
+│ 2 │ 1 │
+│ 3 │ 4 │
+│ 5 │ 4 │
+└───┴───┘
+```
+
+Usage of the `WITH TIES` option:
+
+``` sql
+SELECT * FROM test_fetch ORDER BY a OFFSET 3 ROW FETCH FIRST 3 ROWS WITH TIES;
+```
+
+Result:
+
+``` text
+┌─a─┬─b─┐
+│ 2 │ 1 │
+│ 3 │ 4 │
+│ 5 │ 4 │
+│ 5 │ 7 │
+└───┴───┘
+```
diff --git a/docs/en/sql-reference/statements/select/order-by.md b/docs/en/sql-reference/statements/select/order-by.md
index fb1df445db1..f19a785c6b7 100644
--- a/docs/en/sql-reference/statements/select/order-by.md
+++ b/docs/en/sql-reference/statements/select/order-by.md
@@ -400,84 +400,4 @@ returns
└────────────┴────────────┴──────────┘
```
-## OFFSET FETCH Clause {#offset-fetch}
-
-`OFFSET` and `FETCH` allow you to retrieve data by portions. They specify a row block which you want to get by a single query.
-
-``` sql
-OFFSET offset_row_count {ROW | ROWS}] [FETCH {FIRST | NEXT} fetch_row_count {ROW | ROWS} {ONLY | WITH TIES}]
-```
-
-The `offset_row_count` or `fetch_row_count` value can be a number or a literal constant. You can omit `fetch_row_count`; by default, it equals 1.
-
-`OFFSET` specifies the number of rows to skip before starting to return rows from the query.
-
-The `FETCH` specifies the maximum number of rows that can be in the result of a query.
-
-The `ONLY` option is used to return rows that immediately follow the rows omitted by the `OFFSET`. In this case the `FETCH` is an alternative to the [LIMIT](../../../sql-reference/statements/select/limit.md) clause. For example, the following query
-
-``` sql
-SELECT * FROM test_fetch ORDER BY a OFFSET 1 ROW FETCH FIRST 3 ROWS ONLY;
-```
-
-is identical to the query
-
-``` sql
-SELECT * FROM test_fetch ORDER BY a LIMIT 3 OFFSET 1;
-```
-
-The `WITH TIES` option is used to return any additional rows that tie for the last place in the result set according to the `ORDER BY` clause.
For example, if `fetch_row_count` is set to 5 but two additional rows match the values of the `ORDER BY` columns in the fifth row, the result set will contain seven rows.
-
-!!! note "Note"
-    According to the standard, the `OFFSET` clause must come before the `FETCH` clause if both are present.
-
-### Examples {#examples}
-
-Input table:
-
-``` text
-┌─a─┬─b─┐
-│ 1 │ 1 │
-│ 2 │ 1 │
-│ 3 │ 4 │
-│ 1 │ 3 │
-│ 5 │ 4 │
-│ 0 │ 6 │
-│ 5 │ 7 │
-└───┴───┘
-```
-
-Usage of the `ONLY` option:
-
-``` sql
-SELECT * FROM test_fetch ORDER BY a OFFSET 3 ROW FETCH FIRST 3 ROWS ONLY;
-```
-
-Result:
-
-``` text
-┌─a─┬─b─┐
-│ 2 │ 1 │
-│ 3 │ 4 │
-│ 5 │ 4 │
-└───┴───┘
-```
-
-Usage of the `WITH TIES` option:
-
-``` sql
-SELECT * FROM test_fetch ORDER BY a OFFSET 3 ROW FETCH FIRST 3 ROWS WITH TIES;
-```
-
-Result:
-
-``` text
-┌─a─┬─b─┐
-│ 2 │ 1 │
-│ 3 │ 4 │
-│ 5 │ 4 │
-│ 5 │ 7 │
-└───┴───┘
-```
-
[Original article](https://clickhouse.tech/docs/en/sql-reference/statements/select/order-by/)
diff --git a/docs/en/sql-reference/statements/system.md b/docs/en/sql-reference/statements/system.md
index bb279703cc2..7871894ccac 100644
--- a/docs/en/sql-reference/statements/system.md
+++ b/docs/en/sql-reference/statements/system.md
@@ -169,7 +169,7 @@ SYSTEM START MERGES [ON VOLUME | [db.]merge_tree_family_table_name
### STOP TTL MERGES {#query_language-stop-ttl-merges}
Provides possibility to stop background delete old data according to [TTL expression](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-ttl) for tables in the MergeTree family:
-Return `Ok.` even table doesn’t exists or table have not MergeTree engine. Return error when database doesn’t exists:
+Returns `Ok.` even if the table doesn’t exist or the table does not have a MergeTree engine. Returns an error when the database doesn’t exist:
``` sql
SYSTEM STOP TTL MERGES [[db.]merge_tree_family_table_name]
```
@@ -178,7 +178,7 @@ SYSTEM STOP TTL MERGES [[db.]merge_tree_family_table_name]
### START TTL MERGES {#query_language-start-ttl-merges}
Provides possibility to start background delete old data according to [TTL expression](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-ttl) for tables in the MergeTree family:
-Return `Ok.` even table doesn’t exists. Return error when database doesn’t exists:
+Returns `Ok.` even if the table doesn’t exist. Returns an error when the database doesn’t exist:
``` sql
SYSTEM START TTL MERGES [[db.]merge_tree_family_table_name]
```
@@ -187,7 +187,7 @@ SYSTEM START TTL MERGES [[db.]merge_tree_family_table_name]
### STOP MOVES {#query_language-stop-moves}
Provides possibility to stop background move data according to [TTL table expression with TO VOLUME or TO DISK clause](../../engines/table-engines/mergetree-family/mergetree.md#mergetree-table-ttl) for tables in the MergeTree family:
-Return `Ok.` even table doesn’t exists. Return error when database doesn’t exists:
+Returns `Ok.` even if the table doesn’t exist. Returns an error when the database doesn’t exist:
``` sql
SYSTEM STOP MOVES [[db.]merge_tree_family_table_name]
```
@@ -196,7 +196,7 @@ SYSTEM STOP MOVES [[db.]merge_tree_family_table_name]
### START MOVES {#query_language-start-moves}
Provides possibility to start background move data according to [TTL table expression with TO VOLUME and TO DISK clause](../../engines/table-engines/mergetree-family/mergetree.md#mergetree-table-ttl) for tables in the MergeTree family:
-Return `Ok.` even table doesn’t exists. Return error when database doesn’t exists:
+Returns `Ok.` even if the table doesn’t exist.
Returns an error when the database doesn’t exist:
``` sql
SYSTEM STOP MOVES [[db.]merge_tree_family_table_name]
```
@@ -209,7 +209,7 @@ ClickHouse can manage background replication related processes in [ReplicatedMer
### STOP FETCHES {#query_language-system-stop-fetches}
Provides possibility to stop background fetches for inserted parts for tables in the `ReplicatedMergeTree` family:
-Always returns `Ok.` regardless of the table engine and even table or database doesn’t exists.
+Always returns `Ok.` regardless of the table engine and even if the table or database doesn’t exist.
``` sql
SYSTEM STOP FETCHES [[db.]replicated_merge_tree_family_table_name]
```
@@ -218,7 +218,7 @@ SYSTEM STOP FETCHES [[db.]replicated_merge_tree_family_table_name]
### START FETCHES {#query_language-system-start-fetches}
Provides possibility to start background fetches for inserted parts for tables in the `ReplicatedMergeTree` family:
-Always returns `Ok.` regardless of the table engine and even table or database doesn’t exists.
+Always returns `Ok.` regardless of the table engine and even if the table or database doesn’t exist.
``` sql
SYSTEM START FETCHES [[db.]replicated_merge_tree_family_table_name]
```
@@ -264,6 +264,8 @@ Wait until a `ReplicatedMergeTree` table will be synced with other replicas in a
SYSTEM SYNC REPLICA [db.]replicated_merge_tree_family_table_name
```
+After running this statement the `[db.]replicated_merge_tree_family_table_name` fetches commands from the common replicated log into its own replication queue, and then the query waits until the replica processes all of the fetched commands.
+
### RESTART REPLICA {#query_language-system-restart-replica}
Provides possibility to reinitialize Zookeeper sessions state for `ReplicatedMergeTree` table, will compare current state with Zookeeper as source of true and add tasks to Zookeeper queue if needed
@@ -276,5 +278,3 @@ SYSTEM RESTART REPLICA [db.]replicated_merge_tree_family_table_name
### RESTART REPLICAS {#query_language-system-restart-replicas}
Provides possibility to reinitialize Zookeeper sessions state for all `ReplicatedMergeTree` tables, will compare current state with Zookeeper as source of true and add tasks to Zookeeper queue if needed
-
-[Original article](https://clickhouse.tech/docs/en/query_language/system/)
diff --git a/docs/en/sql-reference/statements/watch.md b/docs/en/sql-reference/statements/watch.md
index 761bc8a041e..be793d30f3d 100644
--- a/docs/en/sql-reference/statements/watch.md
+++ b/docs/en/sql-reference/statements/watch.md
@@ -17,19 +17,21 @@ WATCH [db.]live_view
[FORMAT format]
```
-The `WATCH` query performs continuous data retrieval from a [live view](./create/view.md#live-view) table. Unless the `LIMIT` clause is specified it provides an infinite stream of query results from a [live view](./create/view.md#live-view).
+The `WATCH` query performs continuous data retrieval from a [LIVE VIEW](./create/view.md#live-view) table. Unless the `LIMIT` clause is specified it provides an infinite stream of query results from a [LIVE VIEW](./create/view.md#live-view).
```sql
-WATCH [db.]live_view
+WATCH [db.]live_view
[EVENTS]
[LIMIT n]
[FORMAT format]
```
+
+## Virtual columns {#watch-virtual-columns}
+
The virtual `_version` column in the query result indicates the current result version.
**Example:**
```sql
CREATE LIVE VIEW lv WITH REFRESH 5 AS SELECT now();
-WATCH lv
+WATCH lv;
```
```bash
@@ -47,6 +49,8 @@ WATCH lv
By default, the requested data is returned to the client, while in conjunction with [INSERT INTO](../../sql-reference/statements/insert-into.md) it can be forwarded to a different table.
+**Example:**
+
```sql
INSERT INTO [db.]table WATCH [db.]live_view ...
```
@@ -56,14 +60,14 @@ INSERT INTO [db.]table WATCH [db.]live_view ...
The `EVENTS` clause can be used to obtain a short form of the `WATCH` query where instead of the query result you will just get the latest query result version.
```sql
-WATCH [db.]live_view EVENTS
+WATCH [db.]live_view EVENTS;
```
**Example:**
```sql
CREATE LIVE VIEW lv WITH REFRESH 5 AS SELECT now();
-WATCH lv EVENTS
+WATCH lv EVENTS;
```
```bash
@@ -78,17 +82,17 @@ WATCH lv EVENTS
## LIMIT Clause {#limit-clause}
-The `LIMIT n` clause species the number of updates the `WATCH` query should wait for before terminating. By default there is no limit on the number of updates and therefore the query will not terminate. The value of `0` indicates that the `WATCH` query should not wait for any new query results and therefore will return immediately once query is evaluated.
+The `LIMIT n` clause specifies the number of updates the `WATCH` query should wait for before terminating. By default there is no limit on the number of updates and therefore the query will not terminate. The value of `0` indicates that the `WATCH` query should not wait for any new query results and therefore will return immediately once the query result is evaluated.
```sql
-WATCH [db.]live_view LIMIT 1
+WATCH [db.]live_view LIMIT 1;
```
**Example:**
```sql
CREATE LIVE VIEW lv WITH REFRESH 5 AS SELECT now();
-WATCH lv EVENTS LIMIT 1
+WATCH lv EVENTS LIMIT 1;
```
```bash
@@ -102,5 +106,4 @@ WATCH lv EVENTS LIMIT 1
The `FORMAT` clause works the same way as for the [SELECT](../../sql-reference/statements/select/format.md#format-clause).
!!! info "Note"
-    The [JSONEachRowWithProgress](../../../interfaces/formats/#jsoneachrowwithprogress) format should be used when watching [live view](./create/view.md#live-view) tables over the HTTP interface. The progress messages will be added to the output to keep the long-lived HTTP connection alive until the query result changes. The interval between progress messages is controlled using the [live_view_heartbeat_interval](./create/view.md#live-view-settings) setting.
-
+    The [JSONEachRowWithProgress](../../interfaces/formats.md#jsoneachrowwithprogress) format should be used when watching [LIVE VIEW](./create/view.md#live-view) tables over the HTTP interface. The progress messages will be added to the output to keep the long-lived HTTP connection alive until the query result changes. The interval between progress messages is controlled using the [live_view_heartbeat_interval](./create/view.md#live-view-settings) setting.
diff --git a/docs/en/sql-reference/syntax.md b/docs/en/sql-reference/syntax.md
index 5d0eee76393..573e35d2f71 100644
--- a/docs/en/sql-reference/syntax.md
+++ b/docs/en/sql-reference/syntax.md
@@ -171,7 +171,7 @@ Received exception from server (version 18.14.17):
Code: 184. DB::Exception: Received from localhost:9000, 127.0.0.1. DB::Exception: Aggregate function sum(b) is found inside another aggregate function in query.
```
-In this example, we declared table `t` with column `b`. Then, when selecting data, we defined the `sum(b) AS b` alias.
As aliases are global, ClickHouse substituted the literal `b` in the expression `argMax(a, b)` with the expression `sum(b)`. This substitution caused the exception. +In this example, we declared table `t` with column `b`. Then, when selecting data, we defined the `sum(b) AS b` alias. As aliases are global, ClickHouse substituted the literal `b` in the expression `argMax(a, b)` with the expression `sum(b)`. This substitution caused the exception. You can change this default behavior by setting [prefer_column_name_to_alias](../operations/settings/settings.md#prefer_column_name_to_alias) to `1`. ## Asterisk {#asterisk} diff --git a/docs/en/sql-reference/table-functions/file.md b/docs/en/sql-reference/table-functions/file.md index da0999e66eb..e1459b5e254 100644 --- a/docs/en/sql-reference/table-functions/file.md +++ b/docs/en/sql-reference/table-functions/file.md @@ -124,6 +124,6 @@ SELECT count(*) FROM file('big_dir/file{0..9}{0..9}{0..9}', 'CSV', 'name String, **See Also** -- [Virtual columns](index.md#table_engines-virtual_columns) +- [Virtual columns](../../engines/table-engines/index.md#table_engines-virtual_columns) [Original article](https://clickhouse.tech/docs/en/sql-reference/table-functions/file/) diff --git a/docs/en/sql-reference/table-functions/generate.md b/docs/en/sql-reference/table-functions/generate.md index be6ba2b8bc4..ae22e1a1b88 100644 --- a/docs/en/sql-reference/table-functions/generate.md +++ b/docs/en/sql-reference/table-functions/generate.md @@ -10,7 +10,7 @@ Allows to populate test tables with data. Supports all data types that can be stored in table except `LowCardinality` and `AggregateFunction`. ``` sql -generateRandom('name TypeName[, name TypeName]...', [, 'random_seed'[, 'max_string_length'[, 'max_array_length']]]); +generateRandom('name TypeName[, name TypeName]...', [, 'random_seed'[, 'max_string_length'[, 'max_array_length']]]) ``` **Arguments** @@ -39,4 +39,3 @@ SELECT * FROM generateRandom('a Array(Int8), d Decimal32(4), c Tuple(DateTime64( └──────────┴──────────────┴────────────────────────────────────────────────────────────────────┘ ``` -[Original article](https://clickhouse.tech/docs/en/query_language/table_functions/generate/) diff --git a/docs/en/sql-reference/table-functions/hdfs.md b/docs/en/sql-reference/table-functions/hdfs.md index 512f47a2b46..a7c3baca299 100644 --- a/docs/en/sql-reference/table-functions/hdfs.md +++ b/docs/en/sql-reference/table-functions/hdfs.md @@ -97,6 +97,5 @@ FROM hdfs('hdfs://hdfs1:9000/big_dir/file{0..9}{0..9}{0..9}', 'CSV', 'name Strin **See Also** -- [Virtual columns](https://clickhouse.tech/docs/en/operations/table_engines/#table_engines-virtual_columns) +- [Virtual columns](../../engines/table-engines/index.md#table_engines-virtual_columns) -[Original article](https://clickhouse.tech/docs/en/query_language/table_functions/hdfs/) diff --git a/docs/en/sql-reference/table-functions/index.md b/docs/en/sql-reference/table-functions/index.md index 691687dea25..d65a18ab985 100644 --- a/docs/en/sql-reference/table-functions/index.md +++ b/docs/en/sql-reference/table-functions/index.md @@ -21,17 +21,18 @@ You can use table functions in: !!! warning "Warning" You can’t use table functions if the [allow_ddl](../../operations/settings/permissions-for-queries.md#settings_allow_ddl) setting is disabled. 
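+
+For example, a quick hedged sketch using the `numbers` table function from the list below:
+
+``` sql
+-- numbers(5) generates a single-column table with 5 rows (0..4).
+SELECT number * 2 AS even FROM numbers(5);
+```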
-| Function | Description |
-|-----------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------|
-| [file](../../sql-reference/table-functions/file.md) | Creates a [File](../../engines/table-engines/special/file.md)-engine table. |
-| [merge](../../sql-reference/table-functions/merge.md) | Creates a [Merge](../../engines/table-engines/special/merge.md)-engine table. |
-| [numbers](../../sql-reference/table-functions/numbers.md) | Creates a table with a single column filled with integer numbers. |
-| [remote](../../sql-reference/table-functions/remote.md) | Allows you to access remote servers without creating a [Distributed](../../engines/table-engines/special/distributed.md)-engine table. |
-| [url](../../sql-reference/table-functions/url.md) | Creates a [Url](../../engines/table-engines/special/url.md)-engine table. |
-| [mysql](../../sql-reference/table-functions/mysql.md) | Creates a [MySQL](../../engines/table-engines/integrations/mysql.md)-engine table. |
-| [jdbc](../../sql-reference/table-functions/jdbc.md) | Creates a [JDBC](../../engines/table-engines/integrations/jdbc.md)-engine table. |
-| [odbc](../../sql-reference/table-functions/odbc.md) | Creates a [ODBC](../../engines/table-engines/integrations/odbc.md)-engine table. |
-| [hdfs](../../sql-reference/table-functions/hdfs.md) | Creates a [HDFS](../../engines/table-engines/integrations/hdfs.md)-engine table. |
-| [s3](../../sql-reference/table-functions/s3.md) | Creates a [S3](../../engines/table-engines/integrations/s3.md)-engine table. |
+| Function | Description |
+|------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------|
+| [file](../../sql-reference/table-functions/file.md) | Creates a [File](../../engines/table-engines/special/file.md)-engine table. |
+| [merge](../../sql-reference/table-functions/merge.md) | Creates a [Merge](../../engines/table-engines/special/merge.md)-engine table. |
+| [numbers](../../sql-reference/table-functions/numbers.md) | Creates a table with a single column filled with integer numbers. |
+| [remote](../../sql-reference/table-functions/remote.md) | Allows you to access remote servers without creating a [Distributed](../../engines/table-engines/special/distributed.md)-engine table. |
+| [url](../../sql-reference/table-functions/url.md) | Creates a [URL](../../engines/table-engines/special/url.md)-engine table. |
+| [mysql](../../sql-reference/table-functions/mysql.md) | Creates a [MySQL](../../engines/table-engines/integrations/mysql.md)-engine table. |
+| [postgresql](../../sql-reference/table-functions/postgresql.md) | Creates a [PostgreSQL](../../engines/table-engines/integrations/postgresql.md)-engine table. |
+| [jdbc](../../sql-reference/table-functions/jdbc.md) | Creates a [JDBC](../../engines/table-engines/integrations/jdbc.md)-engine table. |
+| [odbc](../../sql-reference/table-functions/odbc.md) | Creates an [ODBC](../../engines/table-engines/integrations/odbc.md)-engine table. |
+| [hdfs](../../sql-reference/table-functions/hdfs.md) | Creates an [HDFS](../../engines/table-engines/integrations/hdfs.md)-engine table. |
+| [s3](../../sql-reference/table-functions/s3.md) | Creates an [S3](../../engines/table-engines/integrations/s3.md)-engine table.
| -[Original article](https://clickhouse.tech/docs/en/query_language/table_functions/) +[Original article](https://clickhouse.tech/docs/en/sql-reference/table-functions/) diff --git a/docs/en/sql-reference/table-functions/input.md b/docs/en/sql-reference/table-functions/input.md index 40f9f4f7f6f..17707b798d6 100644 --- a/docs/en/sql-reference/table-functions/input.md +++ b/docs/en/sql-reference/table-functions/input.md @@ -42,4 +42,3 @@ $ cat data.csv | clickhouse-client --query="INSERT INTO test FORMAT CSV" $ cat data.csv | clickhouse-client --query="INSERT INTO test SELECT * FROM input('test_structure') FORMAT CSV" ``` -[Original article](https://clickhouse.tech/docs/en/query_language/table_functions/input/) diff --git a/docs/en/sql-reference/table-functions/jdbc.md b/docs/en/sql-reference/table-functions/jdbc.md index 6fd53b0e794..c6df022c342 100644 --- a/docs/en/sql-reference/table-functions/jdbc.md +++ b/docs/en/sql-reference/table-functions/jdbc.md @@ -24,4 +24,3 @@ SELECT * FROM jdbc('mysql://localhost:3306/?user=root&password=root', 'schema', SELECT * FROM jdbc('datasource://mysql-local', 'schema', 'table') ``` -[Original article](https://clickhouse.tech/docs/en/query_language/table_functions/jdbc/) diff --git a/docs/en/sql-reference/table-functions/merge.md b/docs/en/sql-reference/table-functions/merge.md index 7b3d88f6266..a5c74b71069 100644 --- a/docs/en/sql-reference/table-functions/merge.md +++ b/docs/en/sql-reference/table-functions/merge.md @@ -9,4 +9,3 @@ toc_title: merge The table structure is taken from the first table encountered that matches the regular expression. -[Original article](https://clickhouse.tech/docs/en/query_language/table_functions/merge/) diff --git a/docs/en/sql-reference/table-functions/numbers.md b/docs/en/sql-reference/table-functions/numbers.md index 53e4e42a2f8..f9735056b05 100644 --- a/docs/en/sql-reference/table-functions/numbers.md +++ b/docs/en/sql-reference/table-functions/numbers.md @@ -25,4 +25,3 @@ Examples: select toDate('2010-01-01') + number as d FROM numbers(365); ``` -[Original article](https://clickhouse.tech/docs/en/query_language/table_functions/numbers/) diff --git a/docs/en/sql-reference/table-functions/odbc.md b/docs/en/sql-reference/table-functions/odbc.md index ea79cd44a93..a8481fbfd68 100644 --- a/docs/en/sql-reference/table-functions/odbc.md +++ b/docs/en/sql-reference/table-functions/odbc.md @@ -102,5 +102,3 @@ SELECT * FROM odbc('DSN=mysqlconn', 'test', 'test') - [ODBC external dictionaries](../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md#dicts-external_dicts_dict_sources-odbc) - [ODBC table engine](../../engines/table-engines/integrations/odbc.md). - -[Original article](https://clickhouse.tech/docs/en/query_language/table_functions/jdbc/) diff --git a/docs/en/sql-reference/table-functions/postgresql.md b/docs/en/sql-reference/table-functions/postgresql.md new file mode 100644 index 00000000000..3eab572ac12 --- /dev/null +++ b/docs/en/sql-reference/table-functions/postgresql.md @@ -0,0 +1,120 @@ +--- +toc_priority: 42 +toc_title: postgresql +--- + +# postgresql {#postgresql} + +Allows `SELECT` and `INSERT` queries to be performed on data that is stored on a remote PostgreSQL server. + +**Syntax** + +``` sql +postgresql('host:port', 'database', 'table', 'user', 'password'[, `schema`]) +``` + +**Arguments** + +- `host:port` — PostgreSQL server address. +- `database` — Remote database name. +- `table` — Remote table name. +- `user` — PostgreSQL user. +- `password` — User password. 
+- `schema` — Non-default table schema. Optional.
+
+**Returned Value**
+
+A table object with the same columns as the original PostgreSQL table.
+
+!!! info "Note"
+    In the `INSERT` query, to distinguish the table function `postgresql(...)` from a table name with a list of column names, you must use the keywords `FUNCTION` or `TABLE FUNCTION`. See the examples below.
+
+## Implementation Details {#implementation-details}
+
+`SELECT` queries on the PostgreSQL side run as `COPY (SELECT ...) TO STDOUT` inside a read-only PostgreSQL transaction, with a commit after each `SELECT` query.
+
+Simple `WHERE` clauses such as `=`, `!=`, `>`, `>=`, `<`, `<=`, and `IN` are executed on the PostgreSQL server.
+
+All joins, aggregations, sorting, `IN [ array ]` conditions and the `LIMIT` sampling constraint are executed in ClickHouse only after the query to PostgreSQL finishes.
+
+`INSERT` queries on the PostgreSQL side run as `COPY "table_name" (field1, field2, ... fieldN) FROM STDIN` inside a PostgreSQL transaction, with an auto-commit after each `INSERT` statement.
+
+PostgreSQL `Array` types are converted into ClickHouse arrays.
+
+!!! info "Note"
+    Be careful: in PostgreSQL, an array column such as `Integer[]` may contain arrays of different dimensions in different rows, but in ClickHouse only multidimensional arrays with the same number of dimensions in all rows are allowed.
+
+Replica priority is supported for the PostgreSQL dictionary source. The bigger the number in the map, the lower the priority. The highest priority is `0`.
+
+**Examples**
+
+Table in PostgreSQL:
+
+``` text
+postgres=# CREATE TABLE "public"."test" (
+"int_id" SERIAL,
+"int_nullable" INT NULL DEFAULT NULL,
+"float" FLOAT NOT NULL,
+"str" VARCHAR(100) NOT NULL DEFAULT '',
+"float_nullable" FLOAT NULL DEFAULT NULL,
+PRIMARY KEY (int_id));
+
+CREATE TABLE
+
+postgres=# INSERT INTO test (int_id, str, "float") VALUES (1,'test',2);
+INSERT 0 1
+
+postgres=# SELECT * FROM test;
+ int_id | int_nullable | float | str  | float_nullable
+--------+--------------+-------+------+----------------
+      1 |              |     2 | test |
+(1 row)
+```
+
+Selecting data from ClickHouse:
+
+```sql
+SELECT * FROM postgresql('localhost:5432', 'test', 'test', 'postgresql_user', 'password') WHERE str IN ('test');
+```
+
+``` text
+┌─int_id─┬─int_nullable─┬─float─┬─str──┬─float_nullable─┐
+│      1 │         ᴺᵁᴸᴸ │     2 │ test │           ᴺᵁᴸᴸ │
+└────────┴──────────────┴───────┴──────┴────────────────┘
+```
+
+Inserting:
+
+```sql
+INSERT INTO TABLE FUNCTION postgresql('localhost:5432', 'test', 'test', 'postgresql_user', 'password') (int_id, float) VALUES (2, 3);
+SELECT * FROM postgresql('localhost:5432', 'test', 'test', 'postgresql_user', 'password');
+```
+
+``` text
+┌─int_id─┬─int_nullable─┬─float─┬─str──┬─float_nullable─┐
+│      1 │         ᴺᵁᴸᴸ │     2 │ test │           ᴺᵁᴸᴸ │
+│      2 │         ᴺᵁᴸᴸ │     3 │      │           ᴺᵁᴸᴸ │
+└────────┴──────────────┴───────┴──────┴────────────────┘
+```
+
+Using a non-default schema:
+
+```text
+postgres=# CREATE SCHEMA "nice.schema";
+
+postgres=# CREATE TABLE "nice.schema"."nice.table" (a integer);
+
+postgres=# INSERT INTO "nice.schema"."nice.table" SELECT i FROM generate_series(0, 99) as t(i);
+```
+
+```sql
+CREATE TABLE pg_table_schema_with_dots (a UInt32)
+        ENGINE PostgreSQL('localhost:5432', 'clickhouse', 'nice.table', 'postgresql_user', 'password', 'nice.schema');
+```
+
+**See Also**
+
+- [The PostgreSQL table engine](../../engines/table-engines/integrations/postgresql.md)
+- [Using PostgreSQL as a source of an external dictionary](../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md#dicts-external_dicts_dict_sources-postgresql)
+
+[Original article](https://clickhouse.tech/docs/en/sql-reference/table-functions/postgresql/)
diff --git a/docs/en/sql-reference/table-functions/s3.md b/docs/en/sql-reference/table-functions/s3.md
index 76a0e042ea4..285ec862aab 100644
--- a/docs/en/sql-reference/table-functions/s3.md
+++ b/docs/en/sql-reference/table-functions/s3.md
@@ -3,33 +3,35 @@ toc_priority: 45
toc_title: s3
---

-# s3 {#s3}
+# S3 Table Function {#s3-table-function}

-Provides table-like interface to select/insert files in S3. This table function is similar to [hdfs](../../sql-reference/table-functions/hdfs.md).
+Provides a table-like interface to select/insert files in [Amazon S3](https://aws.amazon.com/s3/). This table function is similar to [hdfs](../../sql-reference/table-functions/hdfs.md), but provides S3-specific features.
+
+**Syntax**

``` sql
s3(path, [aws_access_key_id, aws_secret_access_key,] format, structure, [compression])
```

-**Input parameters**
+**Arguments**

-- `path` — Bucket url with path to file. Supports following wildcards in readonly mode: *, ?, {abc,def} and {N..M} where N, M — numbers, `’abc’, ‘def’ — strings.
+- `path` — Bucket URL with a path to the file. Supports the following wildcards in read-only mode: `*`, `?`, `{abc,def}` and `{N..M}` where `N`, `M` — numbers, `'abc'`, `'def'` — strings. For more information, see [here](../../engines/table-engines/integrations/s3.md#wildcards-in-path).
- `format` — The [format](../../interfaces/formats.md#formats) of the file.
- `structure` — Structure of the table. Format `'column1_name column1_type, column2_name column2_type, ...'`.
-- `compression` — Parameter is optional. Supported values: none, gzip/gz, brotli/br, xz/LZMA, zstd/zst. By default, it will autodetect compression by file extension.
+- `compression` — Parameter is optional. Supported values: `none`, `gzip/gz`, `brotli/br`, `xz/LZMA`, `zstd/zst`. By default, it autodetects compression by the file extension.

**Returned value**

A table with the specified structure for reading or writing data in the specified file.

-**Example**
+**Examples**

-Table from S3 file `https://storage.yandexcloud.net/my-test-bucket-768/data.csv` and selection of the first two rows from it:
+Selecting the first two rows from the table from the S3 file `https://storage.yandexcloud.net/my-test-bucket-768/data.csv`:

``` sql
SELECT *
FROM s3('https://storage.yandexcloud.net/my-test-bucket-768/data.csv', 'CSV', 'column1 UInt32, column2 UInt32, column3 UInt32')
-LIMIT 2
+LIMIT 2;
```

``` text
@@ -44,7 +46,7 @@ The similar but from file with `gzip` compression:

``` sql
SELECT *
FROM s3('https://storage.yandexcloud.net/my-test-bucket-768/data.csv.gz', 'CSV', 'column1 UInt32, column2 UInt32, column3 UInt32', 'gzip')
-LIMIT 2
+LIMIT 2;
```

``` text
@@ -54,33 +56,20 @@ LIMIT 2
└─────────┴─────────┴─────────┘
```

-**Globs in path**
+## Usage {#usage-examples}

-Multiple path components can have globs. For being processed file should exists and matches to the whole path pattern (not only suffix or prefix).
+Suppose that we have several files with the following URIs on S3:

-- `*` — Substitutes any number of any characters except `/` including empty string.
-- `?` — Substitutes any single character.
-- `{some_string,another_string,yet_another_one}` — Substitutes any of strings `'some_string', 'another_string', 'yet_another_one'`.
-- `{N..M}` — Substitutes any number in range from N to M including both borders. N and M can have leading zeroes e.g. `000..078`.
+- 'https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_1.csv'
+- 'https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_2.csv'
+- 'https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_3.csv'
+- 'https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_4.csv'
+- 'https://storage.yandexcloud.net/my-test-bucket-768/another_prefix/some_file_1.csv'
+- 'https://storage.yandexcloud.net/my-test-bucket-768/another_prefix/some_file_2.csv'
+- 'https://storage.yandexcloud.net/my-test-bucket-768/another_prefix/some_file_3.csv'
+- 'https://storage.yandexcloud.net/my-test-bucket-768/another_prefix/some_file_4.csv'

-Constructions with `{}` are similar to the [remote table function](../../sql-reference/table-functions/remote.md)).
-
-**Example**
-
-1. Suppose that we have several files with following URIs on S3:
-
-- ‘https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_1.csv’
-- ‘https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_2.csv’
-- ‘https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_3.csv’
-- ‘https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_4.csv’
-- ‘https://storage.yandexcloud.net/my-test-bucket-768/another_prefix/some_file_1.csv’
-- ‘https://storage.yandexcloud.net/my-test-bucket-768/another_prefix/some_file_2.csv’
-- ‘https://storage.yandexcloud.net/my-test-bucket-768/another_prefix/some_file_3.csv’
-- ‘https://storage.yandexcloud.net/my-test-bucket-768/another_prefix/some_file_4.csv’
-
-2. Query the amount of rows in files end with number from 1 to 3:
-
-
+Count the number of rows in files ending with numbers from 1 to 3:

``` sql
SELECT count(*)
@@ -93,9 +82,7 @@ FROM s3('https://storage.yandexcloud.net/my-test-bucket-768/{some,another}_prefi
└─────────┘
```

-3. Query the amount of rows in all files of these two directories:
-
-
+Count the total number of rows in all files in these two directories:

``` sql
SELECT count(*)
@@ -108,17 +95,14 @@ FROM s3('https://storage.yandexcloud.net/my-test-bucket-768/{some,another}_prefi
└─────────┘
```

-
!!! warning "Warning"
    If your listing of files contains number ranges with leading zeros, use the construction with braces for each digit separately or use `?`.

-**Example**
-
-Query the data from files named `file-000.csv`, `file-001.csv`, … , `file-999.csv`:
+Count the total number of rows in files named `file-000.csv`, `file-001.csv`, … , `file-999.csv`:

``` sql
SELECT count(*)
-FROM s3('https://storage.yandexcloud.net/my-test-bucket-768/big_prefix/file-{000..999}.csv', 'CSV', 'name String, value UInt32')
+FROM s3('https://storage.yandexcloud.net/my-test-bucket-768/big_prefix/file-{000..999}.csv', 'CSV', 'name String, value UInt32');
```

``` text
@@ -127,43 +111,22 @@ FROM s3('https://storage.yandexcloud.net/my-test-bucket-768/big_prefix/file-{000
└─────────┘
```

-**Data insert**
-
-The S3 table function may be used for data insert as well.
-
-**Example**
-
-Insert a data into file `test-data.csv.gz`:
+Insert data into file `test-data.csv.gz`:

``` sql
INSERT INTO s3('https://storage.yandexcloud.net/my-test-bucket-768/test-data.csv.gz', 'CSV', 'name String, value UInt32', 'gzip')
-VALUES ('test-data', 1), ('test-data-2', 2)
+VALUES ('test-data', 1), ('test-data-2', 2);
```

-Insert a data into file `test-data.csv.gz` from existing table:
+Insert data into file `test-data.csv.gz` from an existing table:

``` sql
INSERT INTO s3('https://storage.yandexcloud.net/my-test-bucket-768/test-data.csv.gz', 'CSV', 'name String, value UInt32', 'gzip')
-SELECT name, value FROM existing_table
+SELECT name, value FROM existing_table;
```

-## Virtual Columns {#virtual-columns}
-
-- `_path` — Path to the file.
-- `_file` — Name of the file.
-
-## S3-related settings {#settings}
-
-The following settings can be set before query execution or placed into configuration file.
-
-- `s3_max_single_part_upload_size` — Default value is `64Mb`. The maximum size of object to upload using singlepart upload to S3.
-- `s3_min_upload_part_size` — Default value is `512Mb`. The minimum size of part to upload during multipart upload to [S3 Multipart upload](https://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html).
-- `s3_max_redirects` — Default value is `10`. Max number of S3 redirects hops allowed.
-
-Security consideration: if malicious user can specify arbitrary S3 URLs, `s3_max_redirects` must be set to zero to avoid [SSRF](https://en.wikipedia.org/wiki/Server-side_request_forgery) attacks; or alternatively, `remote_host_filter` must be specified in server configuration.
-
**See Also**

-- [Virtual columns](https://clickhouse.tech/docs/en/operations/table_engines/#table_engines-virtual_columns)
+- [S3 engine](../../engines/table-engines/integrations/s3.md)

-[Original article](https://clickhouse.tech/docs/en/query_language/table_functions/s3/)
+[Original article](https://clickhouse.tech/docs/en/sql-reference/table-functions/s3/)
diff --git a/docs/en/sql-reference/table-functions/url.md b/docs/en/sql-reference/table-functions/url.md
index 63b0ff0e152..2192b69d006 100644
--- a/docs/en/sql-reference/table-functions/url.md
+++ b/docs/en/sql-reference/table-functions/url.md
@@ -27,7 +27,7 @@ A table with the specified format and structure and with data from the defined `

**Examples**

-Getting the first 3 lines of a table that contains columns of `String` and [UInt32](../../sql-reference/data-types/int-uint.md) type from HTTP-server which answers in [CSV](../../interfaces/formats.md/#csv) format.
+Getting the first 3 lines of a table that contains columns of `String` and [UInt32](../../sql-reference/data-types/int-uint.md) type from an HTTP server that answers in [CSV](../../interfaces/formats.md#csv) format.
``` sql SELECT * FROM url('http://127.0.0.1:12345/', CSV, 'column1 String, column2 UInt32') LIMIT 3; diff --git a/docs/en/sql-reference/table-functions/view.md b/docs/en/sql-reference/table-functions/view.md index 08096c2b019..e49a9f5218b 100644 --- a/docs/en/sql-reference/table-functions/view.md +++ b/docs/en/sql-reference/table-functions/view.md @@ -37,7 +37,7 @@ Input table: Query: ``` sql -SELECT * FROM view(SELECT name FROM months) +SELECT * FROM view(SELECT name FROM months); ``` Result: @@ -54,14 +54,15 @@ Result: You can use the `view` function as a parameter of the [remote](https://clickhouse.tech/docs/en/sql-reference/table-functions/remote/#remote-remotesecure) and [cluster](https://clickhouse.tech/docs/en/sql-reference/table-functions/cluster/#cluster-clusterallreplicas) table functions: ``` sql -SELECT * FROM remote(`127.0.0.1`, view(SELECT a, b, c FROM table_name)) +SELECT * FROM remote(`127.0.0.1`, view(SELECT a, b, c FROM table_name)); ``` ``` sql -SELECT * FROM cluster(`cluster_name`, view(SELECT a, b, c FROM table_name)) +SELECT * FROM cluster(`cluster_name`, view(SELECT a, b, c FROM table_name)); ``` **See Also** - [View Table Engine](https://clickhouse.tech/docs/en/engines/table-engines/special/view/) -[Original article](https://clickhouse.tech/docs/en/query_language/table_functions/view/) \ No newline at end of file + +[Original article](https://clickhouse.tech/docs/en/sql-reference/table-functions/view/) diff --git a/docs/en/sql-reference/window-functions/index.md b/docs/en/sql-reference/window-functions/index.md index cbf03a44d46..a646347ea60 100644 --- a/docs/en/sql-reference/window-functions/index.md +++ b/docs/en/sql-reference/window-functions/index.md @@ -23,7 +23,9 @@ ClickHouse supports the standard grammar for defining windows and window functio | `GROUPS` frame | not supported | | Calculating aggregate functions over a frame (`sum(value) over (order by time)`) | all aggregate functions are supported | | `rank()`, `dense_rank()`, `row_number()` | supported | -| `lag/lead(value, offset)` | not supported, replace with `any(value) over (.... rows between preceding and preceding)`, or `following` for `lead`| +| `lag/lead(value, offset)` | Not supported. Workarounds: | +| | 1) replace with `any(value) over (.... rows between preceding and preceding)`, or `following` for `lead`| +| | 2) use `lagInFrame/leadInFrame`, which are analogous, but respect the window frame. To get behavior identical to `lag/lead`, use `rows between unbounded preceding and unbounded following` | ## References diff --git a/docs/es/commercial/cloud.md b/docs/es/commercial/cloud.md deleted file mode 100644 index bc593a82ad7..00000000000 --- a/docs/es/commercial/cloud.md +++ /dev/null @@ -1,23 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 1 -toc_title: Nube ---- - -# Proveedores de servicios en la nube de ClickHouse {#clickhouse-cloud-service-providers} - -!!! info "INFO" - Si ha lanzado una nube pública con el servicio ClickHouse administrado, no dude en [abrir una solicitud de extracción](https://github.com/ClickHouse/ClickHouse/edit/master/docs/en/commercial/cloud.md) añadiéndolo a la siguiente lista. 
- -## Nube de Yandex {#yandex-cloud} - -[Servicio administrado de Yandex para ClickHouse](https://cloud.yandex.com/services/managed-clickhouse?utm_source=referrals&utm_medium=clickhouseofficialsite&utm_campaign=link3) proporciona las siguientes características clave: - -- Servicio ZooKeeper totalmente gestionado para [Replicación de ClickHouse](../engines/table-engines/mergetree-family/replication.md) -- Múltiples opciones de tipo de almacenamiento -- Réplicas en diferentes zonas de disponibilidad -- Cifrado y aislamiento -- Mantenimiento automatizado - -{## [Artículo Original](https://clickhouse.tech/docs/en/commercial/cloud/) ##} diff --git a/docs/es/commercial/index.md b/docs/es/commercial/index.md deleted file mode 100644 index b367631ae1c..00000000000 --- a/docs/es/commercial/index.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_folder_title: Comercial -toc_priority: 70 -toc_title: Comercial ---- - - diff --git a/docs/es/commercial/support.md b/docs/es/commercial/support.md deleted file mode 100644 index a817d90dcb5..00000000000 --- a/docs/es/commercial/support.md +++ /dev/null @@ -1,23 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 3 -toc_title: Apoyo ---- - -# Proveedores de servicios de soporte comercial ClickHouse {#clickhouse-commercial-support-service-providers} - -!!! info "INFO" - Si ha lanzado un servicio de soporte comercial ClickHouse, no dude en [abrir una solicitud de extracción](https://github.com/ClickHouse/ClickHouse/edit/master/docs/en/commercial/support.md) añadiéndolo a la siguiente lista. - -## Altinidad {#altinity} - -Altinity ha ofrecido soporte y servicios empresariales ClickHouse desde 2017. Los clientes de Altinity van desde empresas Fortune 100 hasta startups. Visitar [Más información](https://www.altinity.com/) para más información. - -## Mafiree {#mafiree} - -[Descripción del servicio](http://mafiree.com/clickhouse-analytics-services.php) - -## MinervaDB {#minervadb} - -[Descripción del servicio](https://minervadb.com/index.php/clickhouse-consulting-and-support-by-minervadb/) diff --git a/docs/es/development/architecture.md b/docs/es/development/architecture.md deleted file mode 100644 index 1620a58a3a0..00000000000 --- a/docs/es/development/architecture.md +++ /dev/null @@ -1,203 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 62 -toc_title: "Descripci\xF3n general de la arquitectura ClickHouse" ---- - -# Descripción general de la arquitectura ClickHouse {#overview-of-clickhouse-architecture} - -ClickHouse es un verdadero DBMS orientado a columnas. Los datos se almacenan por columnas y durante la ejecución de matrices (vectores o fragmentos de columnas). Siempre que sea posible, las operaciones se envían en matrices, en lugar de en valores individuales. Se llama “vectorized query execution,” y ayuda a reducir el costo del procesamiento de datos real. - -> Esta idea no es nada nuevo. Se remonta a la `APL` lenguaje de programación y sus descendientes: `A +`, `J`, `K`, y `Q`. La programación de matrices se utiliza en el procesamiento de datos científicos. Tampoco es esta idea algo nuevo en las bases de datos relacionales: por ejemplo, se usa en el `Vectorwise` sistema. 
- -Existen dos enfoques diferentes para acelerar el procesamiento de consultas: la ejecución de consultas vectorizadas y la generación de código en tiempo de ejecución. Este último elimina toda la indirección y el despacho dinámico. Ninguno de estos enfoques es estrictamente mejor que el otro. La generación de código de tiempo de ejecución puede ser mejor cuando fusiona muchas operaciones, utilizando así las unidades de ejecución de la CPU y la canalización. La ejecución de consultas vectorizadas puede ser menos práctica porque implica vectores temporales que deben escribirse en la memoria caché y leerse. Si los datos temporales no caben en la memoria caché L2, esto se convierte en un problema. Pero la ejecución de consultas vectorizadas utiliza más fácilmente las capacidades SIMD de la CPU. Un [documento de investigación](http://15721.courses.cs.cmu.edu/spring2016/papers/p5-sompolski.pdf) escrito por nuestros amigos muestra que es mejor combinar ambos enfoques. ClickHouse utiliza la ejecución de consultas vectorizadas y tiene un soporte inicial limitado para la generación de código en tiempo de ejecución. - -## Columna {#columns} - -`IColumn` interfaz se utiliza para representar columnas en la memoria (en realidad, fragmentos de columnas). Esta interfaz proporciona métodos auxiliares para la implementación de varios operadores relacionales. Casi todas las operaciones son inmutables: no modifican la columna original, sino que crean una nueva modificada. Por ejemplo, el `IColumn :: filter` método acepta una máscara de bytes de filtro. Se utiliza para el `WHERE` y `HAVING` operadores relacionales. Ejemplos adicionales: el `IColumn :: permute` para apoyar `ORDER BY`, el `IColumn :: cut` para apoyar `LIMIT`. - -Diversos `IColumn` aplicación (`ColumnUInt8`, `ColumnString`, y así sucesivamente) son responsables del diseño de memoria de las columnas. El diseño de memoria suele ser una matriz contigua. Para el tipo entero de columnas, es solo una matriz contigua, como `std :: vector`. Para `String` y `Array` columnas, son dos vectores: uno para todos los elementos de la matriz, colocados contiguamente, y un segundo para los desplazamientos al comienzo de cada matriz. También hay `ColumnConst` que almacena solo un valor en la memoria, pero parece una columna. - -## Campo {#field} - -Sin embargo, también es posible trabajar con valores individuales. Para representar un valor individual, el `Field` se utiliza. `Field` es sólo una unión discriminada de `UInt64`, `Int64`, `Float64`, `String` y `Array`. `IColumn` tiene el `operator[]` para obtener el valor n-ésimo como un `Field` y el `insert` método para agregar un `Field` al final de una columna. Estos métodos no son muy eficientes, ya que requieren tratar con temporal `Field` objetos que representan un valor individual. Hay métodos más eficientes, tales como `insertFrom`, `insertRangeFrom` y así sucesivamente. - -`Field` no tiene suficiente información sobre un tipo de datos específico para una tabla. Por ejemplo, `UInt8`, `UInt16`, `UInt32`, y `UInt64` todos están representados como `UInt64` en una `Field`. - -## Abstracciones con fugas {#leaky-abstractions} - -`IColumn` tiene métodos para transformaciones relacionales comunes de datos, pero no satisfacen todas las necesidades. Por ejemplo, `ColumnUInt64` no tiene un método para calcular la suma de dos columnas, y `ColumnString` no tiene un método para ejecutar una búsqueda de subcadena. Estas innumerables rutinas se implementan fuera de `IColumn`. 
- -Varias funciones en columnas se pueden implementar de una manera genérica, no eficiente utilizando `IColumn` para extraer `Field` valores, o de una manera especializada utilizando el conocimiento del diseño de la memoria interna de los datos en un `IColumn` aplicación. Se implementa mediante la conversión de funciones a un `IColumn` escriba y trate con la representación interna directamente. Por ejemplo, `ColumnUInt64` tiene el `getData` método que devuelve una referencia a una matriz interna, luego una rutina separada lee o llena esa matriz directamente. Tenemos “leaky abstractions” para permitir especializaciones eficientes de varias rutinas. - -## Tipos de datos {#data_types} - -`IDataType` es responsable de la serialización y deserialización: para leer y escribir fragmentos de columnas o valores individuales en formato binario o de texto. `IDataType` corresponde directamente a los tipos de datos en las tablas. Por ejemplo, hay `DataTypeUInt32`, `DataTypeDateTime`, `DataTypeString` y así sucesivamente. - -`IDataType` y `IColumn` están vagamente relacionados entre sí. Diferentes tipos de datos se pueden representar en la memoria por el mismo `IColumn` aplicación. Por ejemplo, `DataTypeUInt32` y `DataTypeDateTime` están representados por `ColumnUInt32` o `ColumnConstUInt32`. Además, el mismo tipo de datos se puede representar mediante `IColumn` aplicación. Por ejemplo, `DataTypeUInt8` puede ser representado por `ColumnUInt8` o `ColumnConstUInt8`. - -`IDataType` sólo almacena metadatos. Por ejemplo, `DataTypeUInt8` no almacena nada en absoluto (excepto vptr) y `DataTypeFixedString` tiendas solo `N` (el tamaño de las cadenas de tamaño fijo). - -`IDataType` tiene métodos auxiliares para varios formatos de datos. Los ejemplos son métodos para serializar un valor con posibles citas, para serializar un valor para JSON y para serializar un valor como parte del formato XML. No hay correspondencia directa con los formatos de datos. Por ejemplo, los diferentes formatos de datos `Pretty` y `TabSeparated` puede utilizar el mismo `serializeTextEscaped` método de ayuda de la `IDataType` interfaz. - -## Bloque {#block} - -A `Block` es un contenedor que representa un subconjunto (porción) de una tabla en la memoria. Es sólo un conjunto de triples: `(IColumn, IDataType, column name)`. Durante la ejecución de la consulta, los datos son procesados por `Block`s. Si tenemos un `Block`, tenemos datos (en el `IColumn` objeto), tenemos información sobre su tipo (en `IDataType`) que nos dice cómo lidiar con esa columna, y tenemos el nombre de la columna. Podría ser el nombre de columna original de la tabla o algún nombre artificial asignado para obtener resultados temporales de los cálculos. - -Cuando calculamos alguna función sobre columnas en un bloque, agregamos otra columna con su resultado al bloque, y no tocamos columnas para argumentos de la función porque las operaciones son inmutables. Más tarde, las columnas innecesarias se pueden eliminar del bloque, pero no se pueden modificar. Es conveniente para la eliminación de subexpresiones comunes. - -Se crean bloques para cada fragmento de datos procesado. Tenga en cuenta que para el mismo tipo de cálculo, los nombres y tipos de columna siguen siendo los mismos para diferentes bloques y solo cambian los datos de columna. Es mejor dividir los datos del bloque desde el encabezado del bloque porque los tamaños de bloque pequeños tienen una gran sobrecarga de cadenas temporales para copiar shared_ptrs y nombres de columna. 
- -## Bloquear flujos {#block-streams} - -Los flujos de bloques son para procesar datos. Usamos flujos de bloques para leer datos de algún lugar, realizar transformaciones de datos o escribir datos en algún lugar. `IBlockInputStream` tiene el `read` método para buscar el siguiente bloque mientras esté disponible. `IBlockOutputStream` tiene el `write` método para empujar el bloque en alguna parte. - -Los flujos son responsables de: - -1. Leer o escribir en una mesa. La tabla solo devuelve una secuencia para leer o escribir bloques. -2. Implementación de formatos de datos. Por ejemplo, si desea enviar datos a un terminal en `Pretty` formato, crea un flujo de salida de bloque donde presiona bloques y los formatea. -3. Realización de transformaciones de datos. Digamos que tienes `IBlockInputStream` y desea crear una secuencia filtrada. Usted crea `FilterBlockInputStream` e inicializarlo con su transmisión. Luego, cuando tiras de un bloque de `FilterBlockInputStream`, extrae un bloque de su flujo, lo filtra y le devuelve el bloque filtrado. Las canalizaciones de ejecución de consultas se representan de esta manera. - -Hay transformaciones más sofisticadas. Por ejemplo, cuando tiras de `AggregatingBlockInputStream`, lee todos los datos de su origen, los agrega y, a continuación, devuelve un flujo de datos agregados para usted. Otro ejemplo: `UnionBlockInputStream` acepta muchas fuentes de entrada en el constructor y también una serie de subprocesos. Lanza múltiples hilos y lee de múltiples fuentes en paralelo. - -> Las secuencias de bloques usan el “pull” enfoque para controlar el flujo: cuando extrae un bloque de la primera secuencia, en consecuencia extrae los bloques requeridos de las secuencias anidadas, y toda la tubería de ejecución funcionará. Ni “pull” ni “push” es la mejor solución, porque el flujo de control está implícito y eso limita la implementación de varias características, como la ejecución simultánea de múltiples consultas (fusionando muchas tuberías). Esta limitación podría superarse con coroutines o simplemente ejecutando hilos adicionales que se esperan el uno al otro. Podemos tener más posibilidades si hacemos explícito el flujo de control: si localizamos la lógica para pasar datos de una unidad de cálculo a otra fuera de esas unidades de cálculo. Lea esto [artículo](http://journal.stuffwithstuff.com/2013/01/13/iteration-inside-and-out/) para más pensamientos. - -Debemos tener en cuenta que la canalización de ejecución de consultas crea datos temporales en cada paso. Tratamos de mantener el tamaño del bloque lo suficientemente pequeño para que los datos temporales se ajusten a la memoria caché de la CPU. Con esa suposición, escribir y leer datos temporales es casi gratis en comparación con otros cálculos. Podríamos considerar una alternativa, que es fusionar muchas operaciones en la tubería. Podría hacer que la tubería sea lo más corta posible y eliminar gran parte de los datos temporales, lo que podría ser una ventaja, pero también tiene inconvenientes. Por ejemplo, una canalización dividida facilita la implementación de almacenamiento en caché de datos intermedios, el robo de datos intermedios de consultas similares que se ejecutan al mismo tiempo y la fusión de canalizaciones para consultas similares. - -## Formato {#formats} - -Los formatos de datos se implementan con flujos de bloques. Hay “presentational” sólo es adecuado para la salida de datos al cliente, tales como `Pretty` formato, que proporciona sólo `IBlockOutputStream`. 
Y hay formatos de entrada / salida, como `TabSeparated` o `JSONEachRow`. - -También hay secuencias de filas: `IRowInputStream` y `IRowOutputStream`. Permiten pull/push datos por filas individuales, no por bloques. Y solo son necesarios para simplificar la implementación de formatos orientados a filas. Envoltura `BlockInputStreamFromRowInputStream` y `BlockOutputStreamFromRowOutputStream` le permite convertir flujos orientados a filas en flujos regulares orientados a bloques. - -## I/O {#io} - -Para la entrada / salida orientada a bytes, hay `ReadBuffer` y `WriteBuffer` clases abstractas. Se usan en lugar de C ++ `iostream`s. No se preocupe: cada proyecto maduro de C ++ está usando algo más que `iostream`s por buenas razones. - -`ReadBuffer` y `WriteBuffer` son solo un búfer contiguo y un cursor apuntando a la posición en ese búfer. Las implementaciones pueden poseer o no la memoria del búfer. Hay un método virtual para llenar el búfer con los siguientes datos (para `ReadBuffer`) o para vaciar el búfer en algún lugar (para `WriteBuffer`). Los métodos virtuales rara vez se llaman. - -Implementaciones de `ReadBuffer`/`WriteBuffer` se utilizan para trabajar con archivos y descriptores de archivos y sockets de red, para implementar la compresión (`CompressedWriteBuffer` is initialized with another WriteBuffer and performs compression before writing data to it), and for other purposes – the names `ConcatReadBuffer`, `LimitReadBuffer`, y `HashingWriteBuffer` hablar por sí mismos. - -Read / WriteBuffers solo se ocupan de bytes. Hay funciones de `ReadHelpers` y `WriteHelpers` archivos de encabezado para ayudar con el formato de entrada / salida. Por ejemplo, hay ayudantes para escribir un número en formato decimal. - -Veamos qué sucede cuando quieres escribir un conjunto de resultados en `JSON` formato a stdout. Tiene un conjunto de resultados listo para ser recuperado de `IBlockInputStream`. Usted crea `WriteBufferFromFileDescriptor(STDOUT_FILENO)` para escribir bytes en stdout. Usted crea `JSONRowOutputStream`, inicializado con eso `WriteBuffer` para escribir filas en `JSON` a stdout. Usted crea `BlockOutputStreamFromRowOutputStream` encima de él, para representarlo como `IBlockOutputStream`. Entonces usted llama `copyData` para transferir datos desde `IBlockInputStream` a `IBlockOutputStream` y todo funciona. Internamente, `JSONRowOutputStream` escribirá varios delimitadores JSON y llamará al `IDataType::serializeTextJSON` con una referencia a `IColumn` y el número de fila como argumentos. Consecuentemente, `IDataType::serializeTextJSON` llamará a un método de `WriteHelpers.h`: por ejemplo, `writeText` para tipos numéricos y `writeJSONString` para `DataTypeString`. - -## Tabla {#tables} - -El `IStorage` interfaz representa tablas. Las diferentes implementaciones de esa interfaz son diferentes motores de tabla. Los ejemplos son `StorageMergeTree`, `StorageMemory` y así sucesivamente. Las instancias de estas clases son solo tablas. - -Clave `IStorage` son `read` y `write`. También hay `alter`, `rename`, `drop` y así sucesivamente. El `read` método acepta los siguientes argumentos: el conjunto de columnas para leer de una tabla, el `AST` consulta a considerar, y el número deseado de flujos para devolver. Devuelve uno o varios `IBlockInputStream` objetos e información sobre la etapa de procesamiento de datos que se completó dentro de un motor de tablas durante la ejecución de la consulta. 
- -En la mayoría de los casos, el método de lectura solo es responsable de leer las columnas especificadas de una tabla, no de ningún procesamiento de datos adicional. Todo el procesamiento de datos adicional es realizado por el intérprete de consultas y está fuera de la responsabilidad de `IStorage`. - -Pero hay excepciones notables: - -- La consulta AST se pasa al `read` método, y el motor de tablas puede usarlo para derivar el uso del índice y leer menos datos de una tabla. -- A veces, el motor de tablas puede procesar los datos a una etapa específica. Por ejemplo, `StorageDistributed` puede enviar una consulta a servidores remotos, pedirles que procesen datos a una etapa donde se puedan fusionar datos de diferentes servidores remotos y devolver esos datos preprocesados. El intérprete de consultas termina de procesar los datos. - -Tabla `read` método puede devolver múltiples `IBlockInputStream` objetos para permitir el procesamiento de datos en paralelo. Estos flujos de entrada de bloques múltiples pueden leer de una tabla en paralelo. A continuación, puede ajustar estas secuencias con varias transformaciones (como la evaluación de expresiones o el filtrado) que se pueden calcular de forma independiente y crear un `UnionBlockInputStream` encima de ellos, para leer desde múltiples flujos en paralelo. - -También hay `TableFunction`s. Estas son funciones que devuelven un `IStorage` objeto a utilizar en el `FROM` cláusula de una consulta. - -Para tener una idea rápida de cómo implementar su motor de tabla, vea algo simple, como `StorageMemory` o `StorageTinyLog`. - -> Como resultado de la `read` método, `IStorage` devoluciones `QueryProcessingStage` – information about what parts of the query were already calculated inside storage. - -## Analizador {#parsers} - -Un analizador de descenso recursivo escrito a mano analiza una consulta. Por ejemplo, `ParserSelectQuery` simplemente llama recursivamente a los analizadores subyacentes para varias partes de la consulta. Los analizadores crean un `AST`. El `AST` está representado por nodos, que son instancias de `IAST`. - -> Los generadores de analizadores no se utilizan por razones históricas. - -## Interprete {#interpreters} - -Los intérpretes son responsables de crear la canalización de ejecución de consultas `AST`. Hay intérpretes simples, como `InterpreterExistsQuery` y `InterpreterDropQuery` o el más sofisticado `InterpreterSelectQuery`. La canalización de ejecución de consultas es una combinación de flujos de entrada o salida de bloques. Por ejemplo, el resultado de interpretar el `SELECT` la consulta es la `IBlockInputStream` para leer el conjunto de resultados; el resultado de la consulta INSERT es el `IBlockOutputStream` para escribir datos para su inserción, y el resultado de interpretar el `INSERT SELECT` la consulta es la `IBlockInputStream` que devuelve un conjunto de resultados vacío en la primera lectura, pero que copia datos de `SELECT` a `INSERT` al mismo tiempo. - -`InterpreterSelectQuery` utilizar `ExpressionAnalyzer` y `ExpressionActions` maquinaria para el análisis de consultas y transformaciones. Aquí es donde se realizan la mayoría de las optimizaciones de consultas basadas en reglas. `ExpressionAnalyzer` es bastante complicado y debe reescribirse: se deben extraer varias transformaciones de consultas y optimizaciones para separar clases para permitir transformaciones modulares o consultas. - -## Función {#functions} - -Hay funciones ordinarias y funciones agregadas. 
Para las funciones agregadas, consulte la siguiente sección. - -Ordinary functions don't change the number of rows – they work as if they are processing each row independently. In fact, functions are not called for individual rows, but for `Block`de datos para implementar la ejecución de consultas vectorizadas. - -Hay algunas funciones diversas, como [BlockSize](../sql-reference/functions/other-functions.md#function-blocksize), [rowNumberInBlock](../sql-reference/functions/other-functions.md#function-rownumberinblock), y [runningAccumulate](../sql-reference/functions/other-functions.md#function-runningaccumulate), que explotan el procesamiento de bloques y violan la independencia de las filas. - -ClickHouse tiene una tipificación fuerte, por lo que no hay conversión de tipo implícita. Si una función no admite una combinación específica de tipos, produce una excepción. Pero las funciones pueden funcionar (estar sobrecargadas) para muchas combinaciones diferentes de tipos. Por ejemplo, el `plus` función (para implementar el `+` operador) funciona para cualquier combinación de tipos numéricos: `UInt8` + `Float32`, `UInt16` + `Int8` y así sucesivamente. Además, algunas funciones variadas pueden aceptar cualquier número de argumentos, como el `concat` función. - -Implementar una función puede ser un poco inconveniente porque una función distribuye explícitamente tipos de datos compatibles y `IColumns`. Por ejemplo, el `plus` La función tiene código generado por la creación de instancias de una plantilla de C ++ para cada combinación de tipos numéricos y argumentos izquierdo y derecho constantes o no constantes. - -Es un excelente lugar para implementar la generación de código en tiempo de ejecución para evitar la hinchazón del código de plantilla. Además, permite agregar funciones fusionadas como multiplicar-agregar fusionado o hacer comparaciones múltiples en una iteración de bucle. - -Debido a la ejecución de consultas vectorizadas, las funciones no se cortocircuitan. Por ejemplo, si escribe `WHERE f(x) AND g(y)`, ambos lados se calculan, incluso para las filas, cuando `f(x)` es cero (excepto cuando `f(x)` es una expresión constante cero). Pero si la selectividad del `f(x)` la condición es alta, y el cálculo de `f(x)` es mucho más barato que `g(y)`, es mejor implementar el cálculo de paso múltiple. Primero calcularía `f(x)`, a continuación, filtrar columnas por el resultado, y luego calcular `g(y)` solo para trozos de datos más pequeños y filtrados. - -## Funciones agregadas {#aggregate-functions} - -Las funciones agregadas son funciones con estado. Acumulan valores pasados en algún estado y le permiten obtener resultados de ese estado. Se gestionan con el `IAggregateFunction` interfaz. Los estados pueden ser bastante simples (el estado para `AggregateFunctionCount` es sólo una sola `UInt64` valor) o bastante complejo (el estado de `AggregateFunctionUniqCombined` es una combinación de una matriz lineal, una tabla hash, y un `HyperLogLog` estructura de datos probabilística). - -Los Estados están asignados en `Arena` (un grupo de memoria) para tratar con múltiples estados mientras se ejecuta una alta cardinalidad `GROUP BY` consulta. Los estados pueden tener un constructor y destructor no trivial: por ejemplo, los estados de agregación complicados pueden asignar memoria adicional ellos mismos. Requiere cierta atención a la creación y destrucción de estados y a la adecuada aprobación de su orden de propiedad y destrucción. 
- -Los estados de agregación se pueden serializar y deserializar para pasar a través de la red durante la ejecución de consultas distribuidas o para escribirlos en el disco donde no hay suficiente RAM. Incluso se pueden almacenar en una tabla con el `DataTypeAggregateFunction` para permitir la agregación incremental de datos. - -> El formato de datos serializados para los estados de función agregados no tiene versiones en este momento. Está bien si los estados agregados solo se almacenan temporalmente. Pero tenemos el `AggregatingMergeTree` motor de tabla para la agregación incremental, y la gente ya lo está utilizando en producción. Es la razón por la que se requiere compatibilidad con versiones anteriores al cambiar el formato serializado para cualquier función agregada en el futuro. - -## Servidor {#server} - -El servidor implementa varias interfaces diferentes: - -- Una interfaz HTTP para cualquier cliente extranjero. -- Una interfaz TCP para el cliente nativo de ClickHouse y para la comunicación entre servidores durante la ejecución de consultas distribuidas. -- Una interfaz para transferir datos para la replicación. - -Internamente, es solo un servidor multiproceso primitivo sin corutinas ni fibras. Dado que el servidor no está diseñado para procesar una alta tasa de consultas simples, sino para procesar una tasa relativamente baja de consultas complejas, cada uno de ellos puede procesar una gran cantidad de datos para análisis. - -El servidor inicializa el `Context` clase con el entorno necesario para la ejecución de consultas: la lista de bases de datos disponibles, usuarios y derechos de acceso, configuración, clústeres, la lista de procesos, el registro de consultas, etc. Los intérpretes utilizan este entorno. - -Mantenemos una compatibilidad total con versiones anteriores y posteriores para el protocolo TCP del servidor: los clientes antiguos pueden hablar con servidores nuevos y los nuevos clientes pueden hablar con servidores antiguos. Pero no queremos mantenerlo eternamente, y estamos eliminando el soporte para versiones antiguas después de aproximadamente un año. - -!!! note "Nota" - Para la mayoría de las aplicaciones externas, recomendamos usar la interfaz HTTP porque es simple y fácil de usar. El protocolo TCP está más estrechamente vinculado a las estructuras de datos internas: utiliza un formato interno para pasar bloques de datos y utiliza marcos personalizados para datos comprimidos. No hemos lanzado una biblioteca C para ese protocolo porque requiere vincular la mayor parte de la base de código ClickHouse, lo cual no es práctico. - -## Ejecución de consultas distribuidas {#distributed-query-execution} - -Los servidores de una configuración de clúster son en su mayoría independientes. Puede crear un `Distributed` en uno o todos los servidores de un clúster. El `Distributed` table does not store data itself – it only provides a “view” a todas las tablas locales en varios nodos de un clúster. Cuando se SELECCIONA desde un `Distributed` tabla, reescribe esa consulta, elige nodos remotos de acuerdo con la configuración de equilibrio de carga y les envía la consulta. El `Distributed` table solicita a los servidores remotos que procesen una consulta hasta una etapa en la que se pueden fusionar resultados intermedios de diferentes servidores. Luego recibe los resultados intermedios y los fusiona. La tabla distribuida intenta distribuir tanto trabajo como sea posible a servidores remotos y no envía muchos datos intermedios a través de la red. 
- -Las cosas se vuelven más complicadas cuando tiene subconsultas en cláusulas IN o JOIN, y cada una de ellas usa un `Distributed` tabla. Tenemos diferentes estrategias para la ejecución de estas consultas. - -No existe un plan de consulta global para la ejecución de consultas distribuidas. Cada nodo tiene su plan de consulta local para su parte del trabajo. Solo tenemos una ejecución simple de consultas distribuidas de un solo paso: enviamos consultas para nodos remotos y luego fusionamos los resultados. Pero esto no es factible para consultas complicadas con alta cardinalidad GROUP BY o con una gran cantidad de datos temporales para JOIN. En tales casos, necesitamos “reshuffle” datos entre servidores, lo que requiere una coordinación adicional. ClickHouse no admite ese tipo de ejecución de consultas, y tenemos que trabajar en ello. - -## Árbol de fusión {#merge-tree} - -`MergeTree` es una familia de motores de almacenamiento que admite la indexación por clave principal. La clave principal puede ser una tupla arbitraria de columnas o expresiones. Datos en un `MergeTree` se almacena en “parts”. Cada parte almacena los datos en el orden de la clave principal, por lo que la tupla de la clave principal ordena los datos lexicográficamente. Todas las columnas de la tabla se almacenan en `column.bin` archivos en estas partes. Los archivos consisten en bloques comprimidos. Cada bloque suele ser de 64 KB a 1 MB de datos sin comprimir, dependiendo del tamaño del valor promedio. Los bloques constan de valores de columna colocados contiguamente uno tras otro. Los valores de columna están en el mismo orden para cada columna (la clave principal define el orden), por lo que cuando itera por muchas columnas, obtiene valores para las filas correspondientes. - -La clave principal en sí es “sparse”. No aborda cada fila, sino solo algunos rangos de datos. Separado `primary.idx` file tiene el valor de la clave principal para cada fila N-ésima, donde se llama N `index_granularity` (generalmente, N = 8192). Además, para cada columna, tenemos `column.mrk` archivos con “marks,” que son desplazamientos a cada fila N-ésima en el archivo de datos. Cada marca es un par: el desplazamiento en el archivo al comienzo del bloque comprimido y el desplazamiento en el bloque descomprimido al comienzo de los datos. Por lo general, los bloques comprimidos están alineados por marcas, y el desplazamiento en el bloque descomprimido es cero. Datos para `primary.idx` siempre reside en la memoria, y los datos para `column.mrk` archivos se almacena en caché. - -Cuando vamos a leer algo de una parte en `MergeTree` miramos `primary.idx` datos y localice rangos que podrían contener datos solicitados, luego mire `column.mrk` datos y calcular compensaciones para dónde comenzar a leer esos rangos. Debido a la escasez, el exceso de datos puede ser leído. ClickHouse no es adecuado para una gran carga de consultas de puntos simples, porque todo el rango con `index_granularity` se deben leer filas para cada clave, y todo el bloque comprimido debe descomprimirse para cada columna. Hicimos que el índice sea disperso porque debemos poder mantener billones de filas por único servidor sin un consumo de memoria notable para el índice. Además, debido a que la clave principal es escasa, no es única: no puede verificar la existencia de la clave en la tabla en el momento de INSERTAR. Podría tener muchas filas con la misma clave en una tabla. 
- -Cuando `INSERT` un montón de datos en `MergeTree`, ese grupo está ordenado por orden de clave primaria y forma una nueva parte. Hay subprocesos de fondo que seleccionan periódicamente algunas partes y las fusionan en una sola parte ordenada para mantener el número de partes relativamente bajo. Es por eso que se llama `MergeTree`. Por supuesto, la fusión conduce a “write amplification”. Todas las partes son inmutables: solo se crean y eliminan, pero no se modifican. Cuando se ejecuta SELECT, contiene una instantánea de la tabla (un conjunto de partes). Después de la fusión, también mantenemos las piezas viejas durante algún tiempo para facilitar la recuperación después de la falla, por lo que si vemos que alguna parte fusionada probablemente esté rota, podemos reemplazarla con sus partes de origen. - -`MergeTree` no es un árbol de LSM porque no contiene “memtable” y “log”: inserted data is written directly to the filesystem. This makes it suitable only to INSERT data in batches, not by individual row and not very frequently – about once per second is ok, but a thousand times a second is not. We did it this way for simplicity's sake, and because we are already inserting data in batches in our applications. - -> Las tablas MergeTree solo pueden tener un índice (primario): no hay índices secundarios. Sería bueno permitir múltiples representaciones físicas bajo una tabla lógica, por ejemplo, para almacenar datos en más de un orden físico o incluso para permitir representaciones con datos preagregados junto con datos originales. - -Hay motores MergeTree que están haciendo un trabajo adicional durante las fusiones en segundo plano. Los ejemplos son `CollapsingMergeTree` y `AggregatingMergeTree`. Esto podría tratarse como soporte especial para actualizaciones. Tenga en cuenta que estas no son actualizaciones reales porque los usuarios generalmente no tienen control sobre el tiempo en que se ejecutan las fusiones en segundo plano y los datos en un `MergeTree` casi siempre se almacena en más de una parte, no en forma completamente fusionada. - -## Replicación {#replication} - -La replicación en ClickHouse se puede configurar por tabla. Podría tener algunas tablas replicadas y otras no replicadas en el mismo servidor. También puede tener tablas replicadas de diferentes maneras, como una tabla con replicación de dos factores y otra con replicación de tres factores. - -La replicación se implementa en el `ReplicatedMergeTree` motor de almacenamiento. El camino en `ZooKeeper` se especifica como un parámetro para el motor de almacenamiento. Todas las tablas con la misma ruta en `ZooKeeper` se convierten en réplicas entre sí: sincronizan sus datos y mantienen la coherencia. Las réplicas se pueden agregar y eliminar dinámicamente simplemente creando o soltando una tabla. - -La replicación utiliza un esquema multi-maestro asíncrono. Puede insertar datos en cualquier réplica que tenga una sesión con `ZooKeeper`, y los datos se replican en todas las demás réplicas de forma asíncrona. Como ClickHouse no admite UPDATE, la replicación está libre de conflictos. Como no hay reconocimiento de quórum de inserciones, los datos recién insertados pueden perderse si un nodo falla. - -Los metadatos para la replicación se almacenan en ZooKeeper. Hay un registro de replicación que enumera las acciones que se deben realizar. Las acciones son: obtener parte; fusionar partes; soltar una partición, etc. Cada réplica copia el registro de replicación en su cola y, a continuación, ejecuta las acciones desde la cola. 
-For example, on insertion, the "get the part" action is created in the log, and every replica downloads that part. Merges are coordinated between replicas to get byte-identical results. All parts are merged the same way on all replicas. This is achieved by electing one replica as the leader; that replica initiates merges and writes "merge parts" actions to the log.
-
-Replication is physical: only compressed parts are transferred between nodes, not queries. Merges are processed on each replica independently in most cases, to lower network costs by avoiding network amplification. Large merged parts are sent over the network only in cases of significant replication lag.
-
-Besides, each replica stores its state in ZooKeeper as the set of parts and its checksums. When the state on the local filesystem diverges from the reference state in ZooKeeper, the replica restores its consistency by downloading missing and broken parts from other replicas. When there is some unexpected or broken data in the local filesystem, ClickHouse does not remove it, but moves it to a separate directory and forgets about it.
-
-!!! note "Note"
-    The ClickHouse cluster consists of independent shards, and each shard consists of replicas. The cluster is **not elastic**, so after adding a new shard, data is not rebalanced between shards automatically. Instead, the cluster load is supposed to be adjusted to be uneven. This implementation gives you more control, and it is ok for relatively small clusters, such as tens of nodes. But for clusters with hundreds of nodes that we are using in production, this approach becomes a significant drawback. We should implement a table engine that spans the cluster, with dynamically replicated regions that could be split and balanced between clusters automatically.
-
-{## [Original article](https://clickhouse.tech/docs/en/development/architecture/) ##}
diff --git a/docs/es/development/browse-code.md b/docs/es/development/browse-code.md
deleted file mode 100644
index ca031ad03f3..00000000000
--- a/docs/es/development/browse-code.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_priority: 63
-toc_title: "Browse Source Code"
----
-
-# Browse ClickHouse Source Code {#browse-clickhouse-source-code}
-
-You can use the **Woboq** online code browser available [here](https://clickhouse.tech/codebrowser/html_report/ClickHouse/src/index.html). It provides code navigation and semantic highlighting, search, and indexing. The code snapshot is updated daily.
-
-Also, you can browse sources on [GitHub](https://github.com/ClickHouse/ClickHouse) as usual.
-
-If you are interested in which IDE to use, we recommend CLion, QT Creator, VS Code, and KDevelop (with caveats). You can use any favorite IDE. Vim and Emacs also count.
diff --git a/docs/es/development/build-cross-arm.md b/docs/es/development/build-cross-arm.md
deleted file mode 100644
index 2758e9a0e94..00000000000
--- a/docs/es/development/build-cross-arm.md
+++ /dev/null
@@ -1,43 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_priority: 67
-toc_title: "How to Build ClickHouse on Linux for AARCH64 (ARM64)"
----
-
-# How to Build ClickHouse on Linux for the AARCH64 (ARM64) Architecture {#how-to-build-clickhouse-on-linux-for-aarch64-arm64-architecture}
-
-This is for the case when you have a Linux machine and want to use it to build the `clickhouse` binary that will run on another Linux machine with the AARCH64 CPU architecture. This is intended for continuous integration checks that run on Linux servers.
-
-The cross-build for AARCH64 is based on the [Build instructions](build.md); follow them first.
-
-# Install Clang-8 {#install-clang-8}
-
-Follow the instructions from https://apt.llvm.org/ for your Ubuntu or Debian setup.
-For example, in Ubuntu Bionic you can use the following commands:
-
-``` bash
-echo "deb [trusted=yes] http://apt.llvm.org/bionic/ llvm-toolchain-bionic-8 main" | sudo tee /etc/apt/sources.list.d/llvm.list
-sudo apt-get update
-sudo apt-get install clang-8
-```
-
-# Install Cross-Compilation Toolset {#install-cross-compilation-toolset}
-
-``` bash
-cd ClickHouse
-mkdir -p build-aarch64/cmake/toolchain/linux-aarch64
-wget 'https://developer.arm.com/-/media/Files/downloads/gnu-a/8.3-2019.03/binrel/gcc-arm-8.3-2019.03-x86_64-aarch64-linux-gnu.tar.xz?revision=2e88a73f-d233-4f96-b1f4-d8b36e9bb0b9&la=en' -O gcc-arm-8.3-2019.03-x86_64-aarch64-linux-gnu.tar.xz
-tar xJf gcc-arm-8.3-2019.03-x86_64-aarch64-linux-gnu.tar.xz -C build-aarch64/cmake/toolchain/linux-aarch64 --strip-components=1
-```
-
-# Build ClickHouse {#build-clickhouse}
-
-``` bash
-cd ClickHouse
-mkdir build-arm64
-CC=clang-8 CXX=clang++-8 cmake . -Bbuild-arm64 -DCMAKE_TOOLCHAIN_FILE=cmake/linux/toolchain-aarch64.cmake
-ninja -C build-arm64
-```
-
-The resulting binary will run only on Linux with the AARCH64 CPU architecture.
diff --git a/docs/es/development/build-cross-osx.md b/docs/es/development/build-cross-osx.md
deleted file mode 100644
index d00e57c5d31..00000000000
--- a/docs/es/development/build-cross-osx.md
+++ /dev/null
@@ -1,64 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_priority: 66
-toc_title: "How to Build ClickHouse on Linux for Mac OS X"
----
-
-# How to Build ClickHouse on Linux for Mac OS X {#how-to-build-clickhouse-on-linux-for-mac-os-x}
-
-This is for the case when you have a Linux machine and want to use it to build the `clickhouse` binary that will run on Mac OS X. This is intended for continuous integration checks that run on Linux servers. If you want to build ClickHouse directly on Mac OS X, then proceed with [another instruction](build-osx.md).
-
-The cross-build for Mac OS X is based on the [Build instructions](build.md); follow them first.
-
-# Install Clang-8 {#install-clang-8}
-
-Follow the instructions from https://apt.llvm.org/ for your Ubuntu or Debian setup.
-For example, the commands for Bionic are like:
-
-``` bash
-sudo echo "deb [trusted=yes] http://apt.llvm.org/bionic/ llvm-toolchain-bionic-8 main" >> /etc/apt/sources.list
-sudo apt-get install clang-8
-```
-
-# Install Cross-Compilation Toolset {#install-cross-compilation-toolset}
-
-Let's remember the path where we install `cctools` as ${CCTOOLS}
-
-``` bash
-mkdir ${CCTOOLS}
-
-git clone https://github.com/tpoechtrager/apple-libtapi.git
-cd apple-libtapi
-INSTALLPREFIX=${CCTOOLS} ./build.sh
-./install.sh
-cd ..
-
-git clone https://github.com/tpoechtrager/cctools-port.git
-cd cctools-port/cctools
-./configure --prefix=${CCTOOLS} --with-libtapi=${CCTOOLS} --target=x86_64-apple-darwin
-make install
-```
-
-Also, we need to download the macOS X SDK into the working tree.
-
-``` bash
-cd ClickHouse
-wget 'https://github.com/phracker/MacOSX-SDKs/releases/download/10.15/MacOSX10.15.sdk.tar.xz'
-mkdir -p build-darwin/cmake/toolchain/darwin-x86_64
-tar xJf MacOSX10.15.sdk.tar.xz -C build-darwin/cmake/toolchain/darwin-x86_64 --strip-components=1
-```
-
-# Build ClickHouse {#build-clickhouse}
-
-``` bash
-cd ClickHouse
-mkdir build-osx
-CC=clang-8 CXX=clang++-8 cmake . -Bbuild-osx -DCMAKE_TOOLCHAIN_FILE=cmake/darwin/toolchain-x86_64.cmake \
-    -DCMAKE_AR:FILEPATH=${CCTOOLS}/bin/x86_64-apple-darwin-ar \
-    -DCMAKE_RANLIB:FILEPATH=${CCTOOLS}/bin/x86_64-apple-darwin-ranlib \
-    -DLINKER_NAME=${CCTOOLS}/bin/x86_64-apple-darwin-ld
-ninja -C build-osx
-```
-
-The resulting binary will have a Mach-O executable format and cannot be run on Linux.
diff --git a/docs/es/development/build-osx.md b/docs/es/development/build-osx.md
deleted file mode 100644
index 39eba389798..00000000000
--- a/docs/es/development/build-osx.md
+++ /dev/null
@@ -1,93 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_priority: 65
-toc_title: "How to Build ClickHouse on Mac OS X"
----
-
-# How to Build ClickHouse on Mac OS X {#how-to-build-clickhouse-on-mac-os-x}
-
-Build should work on Mac OS X 10.15 (Catalina).
-
-## Install Homebrew {#install-homebrew}
-
-``` bash
-$ /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
-```
-
-## Install Required Compilers, Tools, and Libraries {#install-required-compilers-tools-and-libraries}
-
-``` bash
-$ brew install cmake ninja libtool gettext
-```
-
-## Checkout ClickHouse Sources {#checkout-clickhouse-sources}
-
-``` bash
-$ git clone --recursive git@github.com:ClickHouse/ClickHouse.git
-```
-
-or
-
-``` bash
-$ git clone --recursive https://github.com/ClickHouse/ClickHouse.git
-
-$ cd ClickHouse
-```
-
-## Build ClickHouse {#build-clickhouse}
-
-``` bash
-$ mkdir build
-$ cd build
-$ cmake .. -DCMAKE_CXX_COMPILER=`which clang++` -DCMAKE_C_COMPILER=`which clang`
-$ ninja
-$ cd ..
-```
-
-## Caveats {#caveats}
-
-If you intend to run clickhouse-server, make sure to increase the system's maxfiles variable.
-
-!!! info "Note"
-    You'll need to use sudo.
-
-To do so, create the following file:
-
-/Library/LaunchDaemons/limit.maxfiles.plist:
-
-``` xml
-<?xml version="1.0" encoding="UTF-8"?>
-<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
-        "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
-<plist version="1.0">
-  <dict>
-    <key>Label</key>
-    <string>limit.maxfiles</string>
-    <key>ProgramArguments</key>
-    <array>
-      <string>launchctl</string>
-      <string>limit</string>
-      <string>maxfiles</string>
-      <string>524288</string>
-      <string>524288</string>
-    </array>
-    <key>RunAtLoad</key>
-    <true/>
-    <key>ServiceIPC</key>
-    <false/>
-  </dict>
-</plist>
-```
-
-Execute the following command:
-
-``` bash
-$ sudo chown root:wheel /Library/LaunchDaemons/limit.maxfiles.plist
-```
-
-Reboot.
-
-To check if it's working, you can use the `ulimit -n` command.
-
-[Original article](https://clickhouse.tech/docs/en/development/build_osx/)
diff --git a/docs/es/development/build.md b/docs/es/development/build.md
deleted file mode 100644
index 42cd9b5433f..00000000000
--- a/docs/es/development/build.md
+++ /dev/null
@@ -1,141 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_priority: 64
-toc_title: "How to Build ClickHouse on Linux"
----
-
-# How to Build ClickHouse for Development {#how-to-build-clickhouse-for-development}
-
-The following tutorial is based on the Ubuntu Linux system.
-With appropriate changes, it should also work on any other Linux distribution.
-Supported platforms: x86_64 and AArch64. Support for Power9 is experimental.
-
-## Install Git, CMake, Python and Ninja {#install-git-cmake-python-and-ninja}
-
-``` bash
-$ sudo apt-get install git cmake python ninja-build
-```
-
-Or cmake3 instead of cmake on older systems.
-
-## Install GCC 10 {#install-gcc-10}
-
-There are several ways to do this.
-
-### Install from a PPA Package {#install-from-a-ppa-package}
-
-``` bash
-$ sudo apt-get install software-properties-common
-$ sudo apt-add-repository ppa:ubuntu-toolchain-r/test
-$ sudo apt-get update
-$ sudo apt-get install gcc-10 g++-10
-```
-
-### Install from Sources {#install-from-sources}
-
-See [utils/ci/build-gcc-from-sources.sh](https://github.com/ClickHouse/ClickHouse/blob/master/utils/ci/build-gcc-from-sources.sh)
-
-## Use GCC 10 for Builds {#use-gcc-10-for-builds}
-
-``` bash
-$ export CC=gcc-10
-$ export CXX=g++-10
-```
-
-## Checkout ClickHouse Sources {#checkout-clickhouse-sources}
-
-``` bash
-$ git clone --recursive git@github.com:ClickHouse/ClickHouse.git
-```
-
-or
-
-``` bash
-$ git clone --recursive https://github.com/ClickHouse/ClickHouse.git
-```
-
-## Build ClickHouse {#build-clickhouse}
-
-``` bash
-$ cd ClickHouse
-$ mkdir build
-$ cd build
-$ cmake ..
-$ ninja
-$ cd ..
-```
-
-To create an executable, run `ninja clickhouse`.
-This will create the `programs/clickhouse` executable, which can be used with the `client` or `server` argument.
-
-# How to Build ClickHouse on Any Linux {#how-to-build-clickhouse-on-any-linux}
-
-The build requires the following components:
-
-- Git (used only to checkout the sources, not needed for the build)
-- CMake 3.10 or newer
-- Ninja (recommended) or Make
-- C++ compiler: gcc 10 or clang 8 or newer
-- Linker: lld or gold (the classic GNU ld won't work)
-- Python (only used inside the LLVM build, and it is optional)
-
-If all the components are installed, you may build it in the same way as the steps above.
-
-Example for Ubuntu Eoan:
-
-    sudo apt update
-    sudo apt install git cmake ninja-build g++ python
-    git clone --recursive https://github.com/ClickHouse/ClickHouse.git
-    mkdir build && cd build
-    cmake ../ClickHouse
-    ninja
-
-Example for OpenSUSE Tumbleweed:
-
-    sudo zypper install git cmake ninja gcc-c++ python lld
-    git clone --recursive https://github.com/ClickHouse/ClickHouse.git
-    mkdir build && cd build
-    cmake ../ClickHouse
-    ninja
-
-Example for Fedora Rawhide:
-
-    sudo yum update
-    yum --nogpg install git cmake make gcc-c++ python3
-    git clone --recursive https://github.com/ClickHouse/ClickHouse.git
-    mkdir build && cd build
-    cmake ../ClickHouse
-    make -j $(nproc)
-
-# You Don't Have to Build ClickHouse {#you-dont-have-to-build-clickhouse}
-
-ClickHouse is available in pre-built binaries and packages.
-Binaries are portable and can be run on any Linux flavour.
-
-They are built for stable, prestable, and testing releases, as well as for every commit to master and for every pull request.
-
-To find the freshest build from `master`, go to the [commits page](https://github.com/ClickHouse/ClickHouse/commits/master), click on the first green check mark or red cross near a commit, and click the "Details" link right after "ClickHouse Build Check".
-
-# How to Build the ClickHouse Debian Package {#how-to-build-clickhouse-debian-package}
-
-## Install Git and Pbuilder {#install-git-and-pbuilder}
-
-``` bash
-$ sudo apt-get update
-$ sudo apt-get install git python pbuilder debhelper lsb-release fakeroot sudo debian-archive-keyring debian-keyring
-```
-
-## Checkout ClickHouse Sources {#checkout-clickhouse-sources-1}
-
-``` bash
-$ git clone --recursive --branch master https://github.com/ClickHouse/ClickHouse.git
-$ cd ClickHouse
-```
-
-## Run Release Script {#run-release-script}
-
-``` bash
-$ ./release
-```
-
-[Original article](https://clickhouse.tech/docs/en/development/build/)
diff --git a/docs/es/development/contrib.md b/docs/es/development/contrib.md
deleted file mode 100644
index 3f3013570e5..00000000000
--- a/docs/es/development/contrib.md
+++ /dev/null
@@ -1,41 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_priority: 70
-toc_title: Third-Party Libraries Used
----
-
-# Third-Party Libraries Used {#third-party-libraries-used}
-
-| Library             | License |
-|---------------------|--------------------------------------------------------------------------------------------------------------------------------------------------|
-| base64              | [BSD 2-Clause License](https://github.com/aklomp/base64/blob/a27c565d1b6c676beaf297fe503c4518185666f7/LICENSE) |
-| boost               | [Boost Software License 1.0](https://github.com/ClickHouse-Extras/boost-extra/blob/6883b40449f378019aec792f9983ce3afc7ff16e/LICENSE_1_0.txt) |
-| brotli              | [MIT](https://github.com/google/brotli/blob/master/LICENSE) |
-| capnproto           | [MIT](https://github.com/capnproto/capnproto/blob/master/LICENSE) |
-| cctz                | [Apache License 2.0](https://github.com/google/cctz/blob/4f9776a310f4952454636363def82c2bf6641d5f/LICENSE.txt) |
-| double-conversion   | [BSD 3-Clause License](https://github.com/google/double-conversion/blob/cf2f0f3d547dc73b4612028a155b80536902ba02/LICENSE) |
-| FastMemcpy          | [MIT](https://github.com/ClickHouse/ClickHouse/blob/master/libs/libmemcpy/impl/LICENSE) |
-| googletest          | [BSD 3-Clause License](https://github.com/google/googletest/blob/master/LICENSE) |
-| h3                  | [Apache License 2.0](https://github.com/uber/h3/blob/master/LICENSE) |
-| hyperscan           | [BSD 3-Clause License](https://github.com/intel/hyperscan/blob/master/LICENSE) |
-| libcxxabi           | [BSD + MIT](https://github.com/ClickHouse/ClickHouse/blob/master/libs/libglibc-compatibility/libcxxabi/LICENSE.TXT) |
-| libdivide           | [Zlib License](https://github.com/ClickHouse/ClickHouse/blob/master/contrib/libdivide/LICENSE.txt) |
-| libgsasl            | [LGPL v2.1](https://github.com/ClickHouse-Extras/libgsasl/blob/3b8948a4042e34fb00b4fb987535dc9e02e39040/LICENSE) |
-| libhdfs3            | [Apache License 2.0](https://github.com/ClickHouse-Extras/libhdfs3/blob/bd6505cbb0c130b0db695305b9a38546fa880e5a/LICENSE.txt) |
-| libmetrohash        | [Apache License 2.0](https://github.com/ClickHouse/ClickHouse/blob/master/contrib/libmetrohash/LICENSE) |
-| libpcg-random       | [Apache License 2.0](https://github.com/ClickHouse/ClickHouse/blob/master/contrib/libpcg-random/LICENSE-APACHE.txt) |
-| libressl            | [OpenSSL License](https://github.com/ClickHouse-Extras/ssl/blob/master/COPYING) |
-| librdkafka          | [BSD 2-Clause License](https://github.com/edenhill/librdkafka/blob/363dcad5a23dc29381cc626620e68ae418b3af19/LICENSE) |
-| libwidechar_width   | [CC0 1.0 Universal](https://github.com/ClickHouse/ClickHouse/blob/master/libs/libwidechar_width/LICENSE) |
-| llvm                | [BSD 3-Clause License](https://github.com/ClickHouse-Extras/llvm/blob/163def217817c90fb982a6daf384744d8472b92b/llvm/LICENSE.TXT) |
-| lz4                 | [BSD 2-Clause License](https://github.com/lz4/lz4/blob/c10863b98e1503af90616ae99725ecd120265dfb/LICENSE) |
-| mariadb-connector-c | [LGPL v2.1](https://github.com/ClickHouse-Extras/mariadb-connector-c/blob/3.1/COPYING.LIB) |
-| murmurhash          | [Public Domain](https://github.com/ClickHouse/ClickHouse/blob/master/contrib/murmurhash/LICENSE) |
-| pdqsort             | [Zlib License](https://github.com/ClickHouse/ClickHouse/blob/master/contrib/pdqsort/license.txt) |
-| poco                | [Boost Software License - Version 1.0](https://github.com/ClickHouse-Extras/poco/blob/fe5505e56c27b6ecb0dcbc40c49dc2caf4e9637f/LICENSE) |
-| protobuf            | [BSD 3-Clause License](https://github.com/ClickHouse-Extras/protobuf/blob/12735370922a35f03999afff478e1c6d7aa917a4/LICENSE) |
-| re2                 | [BSD 3-Clause License](https://github.com/google/re2/blob/7cf8b88e8f70f97fd4926b56aa87e7f53b2717e0/LICENSE) |
-| unixodbc            | [LGPL v2.1](https://github.com/ClickHouse-Extras/UnixODBC/tree/b0ad30f7f6289c12b76f04bfb9d466374bb32168) |
-| zlib-ng             | [Zlib License](https://github.com/ClickHouse-Extras/zlib-ng/blob/develop/LICENSE.md) |
-| zstd                | [BSD 3-Clause License](https://github.com/facebook/zstd/blob/dev/LICENSE) |
diff --git a/docs/es/development/developer-instruction.md b/docs/es/development/developer-instruction.md
deleted file mode 100644
index 0ce5d0b457a..00000000000
--- a/docs/es/development/developer-instruction.md
+++ /dev/null
@@ -1,287 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_priority: 61
-toc_title: "The Beginner ClickHouse Developer Instruction"
----
-
-Building of ClickHouse is supported on Linux, FreeBSD and Mac OS X.
-
-# If You Use Windows {#if-you-use-windows}
-
-If you use Windows, you need to create a virtual machine with Ubuntu. To start working with a virtual machine, please install VirtualBox. You can download Ubuntu from the website: https://www.ubuntu.com/#download. Please create a virtual machine from the downloaded image (you should reserve at least 4GB of RAM for it). To run a command-line terminal in Ubuntu, please locate a program containing the word "terminal" in its name (gnome-terminal, konsole etc.) or just press Ctrl+Alt+T.
-
-# If You Use a 32-bit System {#if-you-use-a-32-bit-system}
-
-ClickHouse cannot work or build on a 32-bit system. You should acquire access to a 64-bit system and you can continue reading.
-
-# Creating a Repository on GitHub {#creating-a-repository-on-github}
-
-To start working with the ClickHouse repository you will need a GitHub account.
-
-You probably already have one, but if you don't, please register at https://github.com .
-In case you do not have SSH keys, you should generate them and then upload them to GitHub. They are required for sending over your patches. It is also possible to use the same SSH keys that you use with any other SSH servers - probably you already have those.
-
-Create a fork of the ClickHouse repository. To do that, please click the "fork" button in the upper right corner at https://github.com/ClickHouse/ClickHouse . It will fork your own copy of ClickHouse/ClickHouse to your account.
-
-The development process consists of first committing the intended changes into your fork of ClickHouse and then creating a "pull request" for these changes to be accepted into the main repository (ClickHouse/ClickHouse).
-
-To work with git repositories, please install `git`.
-
-To do that in Ubuntu you would run in the command-line terminal:
-
-    sudo apt update
-    sudo apt install git
-
-A brief manual on using Git can be found here: https://education.github.com/git-cheat-sheet-education.pdf .
-For a detailed manual on Git see https://git-scm.com/book/en/v2 .
-
-# Cloning a Repository to Your Development Machine {#cloning-a-repository-to-your-development-machine}
-
-Next, you need to download the source files onto your working machine. This is called "to clone a repository" because it creates a local copy of the repository on your working machine.
-
-In the command-line terminal run:
-
-    git clone --recursive git@github.com:your_github_username/ClickHouse.git
-    cd ClickHouse
-
-Note: please, substitute *your_github_username* with what is appropriate!
-
-This command will create a directory `ClickHouse` containing the working copy of the project.
-
-It is important that the path to the working directory contains no whitespace, as it may lead to problems with running the build system.
-
-Please note that the ClickHouse repository uses `submodules`. That is what the references to additional repositories are called (i.e. external libraries on which the project depends). It means that when cloning the repository you need to specify the `--recursive` flag as in the example above. If the repository has been cloned without submodules, to download them you need to run the following:
-
-    git submodule init
-    git submodule update
-
-You can check the status with the command: `git submodule status`.
-
-If you get the following error message:
-
-    Permission denied (publickey).
-    fatal: Could not read from remote repository.
-
-    Please make sure you have the correct access rights
-    and the repository exists.
-
-It generally means that the SSH keys for connecting to GitHub are missing. These keys are normally located in `~/.ssh`. For SSH keys to be accepted you need to upload them in the settings section of the GitHub UI.
-
-You can also clone the repository via the https protocol:
-
-    git clone https://github.com/ClickHouse/ClickHouse.git
-
-This, however, will not let you send your changes to the server. You can still use it temporarily and add the SSH keys later, replacing the remote address of the repository with the `git remote` command.
-
-You can also add the original ClickHouse repo's address to your local repository to pull updates from there:
-
-    git remote add upstream git@github.com:ClickHouse/ClickHouse.git
-
-After successfully running this command you will be able to pull updates from the main ClickHouse repo by running `git pull upstream master`.
-
-## Working with Submodules {#working-with-submodules}
-
-Working with submodules in git could be painful. The following commands will help to manage it:
-
-    # ! each command accepts --recursive
-    # Update remote URLs for submodules. Barely rare case
-    git submodule sync
-    # Add new submodules
-    git submodule init
-    # Update existing submodules to the current state
-    git submodule update
-    # Two last commands could be merged together
-    git submodule update --init
-
-The following commands would help you to reset all submodules to the initial state (!WARNING! - any changes inside will be deleted):
-
-    # Synchronizes submodules' remote URL with .gitmodules
-    git submodule sync --recursive
-    # Update the registered submodules with initialize not yet initialized
-    git submodule update --init --recursive
-    # Reset all changes done after HEAD
-    git submodule foreach git reset --hard
-    # Clean files from .gitignore
-    git submodule foreach git clean -xfd
-    # Repeat last 4 commands for all submodule
-    git submodule foreach git submodule sync --recursive
-    git submodule foreach git submodule update --init --recursive
-    git submodule foreach git submodule foreach git reset --hard
-    git submodule foreach git submodule foreach git clean -xfd
-
-# Build System {#build-system}
-
-ClickHouse uses CMake and Ninja for building.
-
-CMake - a meta-build system that can generate Ninja files (build tasks).
-Ninja - a smaller build system with a focus on speed, used to execute those cmake-generated tasks.
-
-To install on Ubuntu, Debian or Mint, run `sudo apt install cmake ninja-build`.
-
-On CentOS, RedHat, run `sudo yum install cmake ninja-build`.
-
-If you use Arch or Gentoo, you probably know yourself how to install CMake.
-
-For installing CMake and Ninja on Mac OS X, first install Homebrew and then install everything else via brew:
-
-    /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
-    brew install cmake ninja
-
-Next, check the version of CMake: `cmake --version`. If it is below 3.3, you should install a newer version from the website: https://cmake.org/download/.
-
-# Optional External Libraries {#optional-external-libraries}
-
-ClickHouse uses several external libraries for building. None of them need to be installed separately, as they are built together with ClickHouse from the sources located in the submodules. You can check the list in `contrib`.
-
-# C++ Compiler {#c-compiler}
-
-Compilers GCC starting from version 10 and Clang version 8 or above are supported for building ClickHouse.
-
-Official Yandex builds currently use GCC because it generates machine code of slightly better performance (yielding a difference of up to several percent according to our benchmarks). And Clang is usually more convenient for development.
-Though, our continuous integration (CI) platform runs checks for about a dozen build combinations.
-
-To install GCC on Ubuntu, run: `sudo apt install gcc g++`
-
-Check the version of gcc: `gcc --version`. If it is below 10, then follow the instructions here: https://clickhouse.tech/docs/es/development/build/#install-gcc-10.
-
-Mac OS X build is supported only for Clang. Just run `brew install llvm`.
-
-If you decide to use Clang, you can also install `libc++` and `lld`, if you know what they are. Using `ccache` is also recommended.
-
-# The Building Process {#the-building-process}
-
-Now that you are ready to build ClickHouse, we recommend you to create a separate directory `build` inside `ClickHouse` that will contain all of the build artefacts:
-
-    mkdir build
-    cd build
-
-You can have several different directories (build_release, build_debug, etc.) for different types of builds.
-
-While inside the `build` directory, configure your build by running CMake. Before the first run, you need to define environment variables that specify the compiler (version 10 gcc compiler in this example).
-
-Linux:
-
-    export CC=gcc-10 CXX=g++-10
-    cmake ..
-
-Mac OS X:
-
-    export CC=clang CXX=clang++
-    cmake ..
-
-The `CC` variable specifies the compiler for C (short for C Compiler), and the `CXX` variable instructs which C++ compiler is to be used for building.
-
-For a faster build, you can resort to the `debug` build type - a build with no optimizations. For that, supply the following parameter `-D CMAKE_BUILD_TYPE=Debug`:
-
-    cmake -D CMAKE_BUILD_TYPE=Debug ..
-
-You can change the type of build by running this command in the `build` directory.
-
-Run ninja to build:
-
-    ninja clickhouse-server clickhouse-client
-
-Only the required binaries are going to be built in this example.
-
-If you need to build all the binaries (utilities and tests), you should run ninja with no parameters:
-
-    ninja
-
-A full build requires about 30GB of free disk space, or 15GB to build the main binaries.
-
-When the amount of RAM available on the build machine is limited, you should restrict the number of build tasks run in parallel with the `-j` param:
-
-    ninja -j 1 clickhouse-server clickhouse-client
-
-On machines with 4GB of RAM, it is recommended to specify 1; for 8GB of RAM, `-j 2` is recommended.
-
-If you get the message: `ninja: error: loading 'build.ninja': No such file or directory`, it means that generating a build configuration has failed and you need to inspect the message above.
-
-Upon the successful start of the building process, you'll see the build progress - the number of processed tasks and the total number of tasks.
-
-While building, messages about protobuf files in the libhdfs2 library like `libprotobuf WARNING` may show up. They affect nothing and are safe to ignore.
-
-Upon a successful build, you get an executable file `ClickHouse/<build_dir>/programs/clickhouse`:
-
-    ls -l programs/clickhouse
-
-# Running the Built Executable of ClickHouse {#running-the-built-executable-of-clickhouse}
-
-To run the server under the current user, you need to navigate to `ClickHouse/programs/server/` (located outside of `build`) and run:
-
-    ../../build/programs/clickhouse server
-
-In this case, ClickHouse will use config files located in the current directory. You can run `clickhouse server` from any directory, specifying the path to a config file as a command-line parameter `--config-file`.
-
-To connect to ClickHouse with clickhouse-client in another terminal, navigate to `ClickHouse/build/programs/` and run `./clickhouse client`.
-
-If you get a `Connection refused` message on Mac OS X or FreeBSD, try specifying the host address 127.0.0.1:
-
-    clickhouse client --host 127.0.0.1
-
-You can replace the production version of the ClickHouse binary installed on your system with your custom-built ClickHouse binary. To do that, install ClickHouse on your machine following the instructions from the official website. Next, run the following:
-
-    sudo service clickhouse-server stop
-    sudo cp ClickHouse/build/programs/clickhouse /usr/bin/
-    sudo service clickhouse-server start
-
-Note that `clickhouse-client`, `clickhouse-server` and others are symlinks to the commonly shared `clickhouse` binary.
-
-You can also run your custom-built ClickHouse binary with the config file from the ClickHouse package installed on your system:
-
-    sudo service clickhouse-server stop
-    sudo -u clickhouse ClickHouse/build/programs/clickhouse server --config-file /etc/clickhouse-server/config.xml
-
-# IDE (Integrated Development Environment) {#ide-integrated-development-environment}
-
-If you do not know which IDE to use, we recommend that you use CLion. CLion is commercial software, but it offers a 30-day free trial period. It is also free of charge for students. CLion can be used both on Linux and on Mac OS X.
-
-KDevelop and QTCreator are other great alternative IDEs for developing ClickHouse. KDevelop comes in as a very handy IDE, although unstable. If KDevelop crashes after a while upon opening a project, you should click the "Stop All" button as soon as it has opened the list of the project's files. After doing so, KDevelop should be fine to work with.
-
-As simple code editors, you can use Sublime Text or Visual Studio Code, or Kate (all of which are available on Linux).
-
-Just in case, it is worth mentioning that CLion creates the `build` path on its own, it also on its own selects `debug` for the build type, for configuration it uses a version of CMake that is defined in CLion and not the one installed by you, and finally, CLion will use `make` to run build tasks instead of `ninja`. This is normal behaviour, just keep that in mind to avoid confusion.
-
-# Writing Code {#writing-code}
-
-The description of the ClickHouse architecture can be found here: https://clickhouse.tech/docs/en/development/architecture/
-
-The Code Style Guide: https://clickhouse.tech/docs/en/development/style/
-
-Writing tests: https://clickhouse.tech/docs/en/development/tests/
-
-List of tasks: https://github.com/ClickHouse/ClickHouse/issues?q=is%3Aopen+is%3Aissue+label%3A%22easy+task%22
-
-# Test Data {#test-data}
-
-Developing ClickHouse often requires loading realistic datasets. This is particularly important for performance testing. We have a specially prepared set of anonymized data from Yandex.Metrica. It additionally requires some 3GB of free disk space. Note that this data is not required to accomplish most of the development tasks.
-
-    sudo apt install wget xz-utils
-
-    wget https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz
-    wget https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz
-
-    xz -v -d hits_v1.tsv.xz
-    xz -v -d visits_v1.tsv.xz
-
-    clickhouse-client
-
-    CREATE DATABASE IF NOT EXISTS test
-
-    CREATE TABLE test.hits ( WatchID UInt64, JavaEnable UInt8, Title String, GoodEvent Int16, EventTime DateTime, EventDate Date, CounterID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RegionID UInt32, UserID UInt64, CounterClass Int8, OS UInt8, UserAgent UInt8, URL String, Referer String, URLDomain String, RefererDomain String, Refresh UInt8, IsRobot UInt8, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), ResolutionWidth UInt16, ResolutionHeight UInt16, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, FlashMinor2 String, NetMajor UInt8, NetMinor UInt8, UserAgentMajor UInt16, UserAgentMinor FixedString(2), CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, MobilePhone UInt8, MobilePhoneModel String, Params String, IPNetworkID UInt32, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, IsArtifical UInt8, WindowClientWidth UInt16, WindowClientHeight UInt16, ClientTimeZone Int16, ClientEventTime DateTime, SilverlightVersion1 UInt8, SilverlightVersion2 UInt8, SilverlightVersion3 UInt32, SilverlightVersion4 UInt16, PageCharset String, CodeVersion UInt32, IsLink UInt8, IsDownload UInt8, IsNotBounce UInt8, FUniqID UInt64, HID UInt32, IsOldCounter UInt8, IsEvent UInt8, IsParameter UInt8, DontCountHits UInt8, WithHash UInt8, HitColor FixedString(1), UTCEventTime DateTime, Age UInt8, Sex UInt8, Income UInt8, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), RemoteIP UInt32, RemoteIP6 FixedString(16), WindowName Int32, OpenerName Int32, HistoryLength Int16, BrowserLanguage FixedString(2), BrowserCountry FixedString(2), SocialNetwork String, SocialAction String, HTTPError UInt16, SendTiming Int32, DNSTiming Int32, ConnectTiming Int32, ResponseStartTiming Int32, ResponseEndTiming Int32, FetchTiming Int32, RedirectTiming Int32, DOMInteractiveTiming Int32, DOMContentLoadedTiming Int32, DOMCompleteTiming Int32, LoadEventStartTiming Int32, LoadEventEndTiming Int32, NSToDOMContentLoadedTiming Int32, FirstPaintTiming Int32, RedirectCount Int8, SocialSourceNetworkID UInt8, SocialSourcePage String, ParamPrice Int64, ParamOrderID String, ParamCurrency FixedString(3), ParamCurrencyID UInt16, GoalsReached Array(UInt32), OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String,
UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, RefererHash UInt64, URLHash UInt64, CLID UInt32, YCLID UInt64, ShareService String, ShareURL String, ShareTitle String, `ParsedParams.Key1` Array(String), `ParsedParams.Key2` Array(String), `ParsedParams.Key3` Array(String), `ParsedParams.Key4` Array(String), `ParsedParams.Key5` Array(String), `ParsedParams.ValueDouble` Array(Float64), IslandID FixedString(16), RequestNum UInt32, RequestTry UInt8) ENGINE = MergeTree PARTITION BY toYYYYMM(EventDate) SAMPLE BY intHash32(UserID) ORDER BY (CounterID, EventDate, intHash32(UserID), EventTime); - - CREATE TABLE test.visits ( CounterID UInt32, StartDate Date, Sign Int8, IsNew UInt8, VisitID UInt64, UserID UInt64, StartTime DateTime, Duration UInt32, UTCStartTime DateTime, PageViews Int32, Hits Int32, IsBounce UInt8, Referer String, StartURL String, RefererDomain String, StartURLDomain String, EndURL String, LinkURL String, IsDownload UInt8, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, PlaceID Int32, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), IsYandex UInt8, GoalReachesDepth Int32, GoalReachesURL Int32, GoalReachesAny Int32, SocialSourceNetworkID UInt8, SocialSourcePage String, MobilePhoneModel String, ClientEventTime DateTime, RegionID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RemoteIP UInt32, RemoteIP6 FixedString(16), IPNetworkID UInt32, SilverlightVersion3 UInt32, CodeVersion UInt32, ResolutionWidth UInt16, ResolutionHeight UInt16, UserAgentMajor UInt16, UserAgentMinor UInt16, WindowClientWidth UInt16, WindowClientHeight UInt16, SilverlightVersion2 UInt8, SilverlightVersion4 UInt16, FlashVersion3 UInt16, FlashVersion4 UInt16, ClientTimeZone Int16, OS UInt8, UserAgent UInt8, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, NetMajor UInt8, NetMinor UInt8, MobilePhone UInt8, SilverlightVersion1 UInt8, Age UInt8, Sex UInt8, Income UInt8, JavaEnable UInt8, CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, BrowserLanguage UInt16, BrowserCountry UInt16, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), Params Array(String), `Goals.ID` Array(UInt32), `Goals.Serial` Array(UInt32), `Goals.EventTime` Array(DateTime), `Goals.Price` Array(Int64), `Goals.OrderID` Array(String), `Goals.CurrencyID` Array(UInt32), WatchIDs Array(UInt64), ParamSumPrice Int64, ParamCurrency FixedString(3), ParamCurrencyID UInt16, ClickLogID UInt64, ClickEventID Int32, ClickGoodEvent Int32, ClickEventTime DateTime, ClickPriorityID Int32, ClickPhraseID Int32, ClickPageID Int32, ClickPlaceID Int32, ClickTypeID Int32, ClickResourceID Int32, ClickCost UInt32, ClickClientIP UInt32, ClickDomainID UInt32, ClickURL String, ClickAttempt UInt8, ClickOrderID UInt32, ClickBannerID UInt32, ClickMarketCategoryID UInt32, ClickMarketPP UInt32, ClickMarketCategoryName String, ClickMarketPPName String, ClickAWAPSCampaignName String, ClickPageName String, ClickTargetType UInt16, ClickTargetPhraseID UInt64, ClickContextType UInt8, ClickSelectType Int8, ClickOptions String, ClickGroupBannerID Int32, OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, FirstVisit DateTime, PredLastVisit Date, LastVisit Date, TotalVisits UInt32, `TraficSource.ID` Array(Int8), 
`TraficSource.SearchEngineID` Array(UInt16), `TraficSource.AdvEngineID` Array(UInt8), `TraficSource.PlaceID` Array(UInt16), `TraficSource.SocialSourceNetworkID` Array(UInt8), `TraficSource.Domain` Array(String), `TraficSource.SearchPhrase` Array(String), `TraficSource.SocialSourcePage` Array(String), Attendance FixedString(16), CLID UInt32, YCLID UInt64, NormalizedRefererHash UInt64, SearchPhraseHash UInt64, RefererDomainHash UInt64, NormalizedStartURLHash UInt64, StartURLDomainHash UInt64, NormalizedEndURLHash UInt64, TopLevelDomain UInt64, URLScheme UInt64, OpenstatServiceNameHash UInt64, OpenstatCampaignIDHash UInt64, OpenstatAdIDHash UInt64, OpenstatSourceIDHash UInt64, UTMSourceHash UInt64, UTMMediumHash UInt64, UTMCampaignHash UInt64, UTMContentHash UInt64, UTMTermHash UInt64, FromHash UInt64, WebVisorEnabled UInt8, WebVisorActivity UInt32, `ParsedParams.Key1` Array(String), `ParsedParams.Key2` Array(String), `ParsedParams.Key3` Array(String), `ParsedParams.Key4` Array(String), `ParsedParams.Key5` Array(String), `ParsedParams.ValueDouble` Array(Float64), `Market.Type` Array(UInt8), `Market.GoalID` Array(UInt32), `Market.OrderID` Array(String), `Market.OrderPrice` Array(Int64), `Market.PP` Array(UInt32), `Market.DirectPlaceID` Array(UInt32), `Market.DirectOrderID` Array(UInt32), `Market.DirectBannerID` Array(UInt32), `Market.GoodID` Array(String), `Market.GoodName` Array(String), `Market.GoodQuantity` Array(Int32), `Market.GoodPrice` Array(Int64), IslandID FixedString(16)) ENGINE = CollapsingMergeTree(Sign) PARTITION BY toYYYYMM(StartDate) SAMPLE BY intHash32(UserID) ORDER BY (CounterID, StartDate, intHash32(UserID), VisitID);
-
-    clickhouse-client --max_insert_block_size 100000 --query "INSERT INTO test.hits FORMAT TSV" < hits_v1.tsv
-    clickhouse-client --max_insert_block_size 100000 --query "INSERT INTO test.visits FORMAT TSV" < visits_v1.tsv
-
-# Creating Pull Request {#creating-pull-request}
-
-Navigate to your fork repository in GitHub's UI. If you have been developing in a branch, you need to select that branch. There will be a "Pull request" button located on the screen. In essence, this means "create a request for accepting my changes into the main repository".
-
-A pull request can be created even if the work is not completed yet. In this case, please put the word "WIP" (work in progress) at the beginning of the title; it can be changed later. This is useful for cooperative reviewing and discussion of changes, as well as for running all of the available tests. It is important that you provide a brief description of your changes; it will later be used for generating release changelogs.
-
-Testing will commence as soon as Yandex employees label your PR with a tag "can be tested". The results of some first checks (e.g. code style) will come in within several minutes. Build check results will arrive within half an hour. And the main set of tests will report itself within an hour.
-
-The system will prepare ClickHouse binary builds for your pull request individually. To retrieve these builds, click the "Details" link next to the "ClickHouse build check" entry in the list of checks. There you will find direct links to the built .deb packages of ClickHouse, which you can deploy even on your production servers (if you have no fear).
-
-Most probably some of the builds will fail at first times.
-This is due to the fact that we check builds both with gcc as well as with clang, with almost all of the existing warnings (always with the `-Werror` flag) enabled for clang. On that same page, you can find all of the build logs, so you do not have to build ClickHouse in all of the possible ways.
diff --git a/docs/es/development/index.md b/docs/es/development/index.md
deleted file mode 100644
index 6f96f9b3f02..00000000000
--- a/docs/es/development/index.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_folder_title: Development
-toc_hidden: true
-toc_priority: 58
-toc_title: hidden
----
-
-# ClickHouse Development {#clickhouse-development}
-
-[Original article](https://clickhouse.tech/docs/en/development/)
diff --git a/docs/es/development/style.md b/docs/es/development/style.md
deleted file mode 100644
index ec55516fe2c..00000000000
--- a/docs/es/development/style.md
+++ /dev/null
@@ -1,841 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_priority: 68
-toc_title: "How to Write C++ Code"
----
-
-# How to Write C++ Code {#how-to-write-c-code}
-
-## General Recommendations {#general-recommendations}
-
-**1.** The following are recommendations, not requirements.
-
-**2.** If you are editing code, it makes sense to follow the formatting of the existing code.
-
-**3.** Code style is needed for consistency. Consistency makes it easier to read the code, and it also makes it easier to search the code.
-
-**4.** Many of the rules do not have logical reasons; they are dictated by established practices.
-
-## Formatting {#formatting}
-
-**1.** Most of the formatting is done automatically by `clang-format`.
-
-**2.** Indents are 4 spaces. Configure your development environment so that a tab adds four spaces.
-
-**3.** Opening and closing curly brackets must be on a separate line.
-
-``` cpp
-inline void readBoolText(bool & x, ReadBuffer & buf)
-{
-    char tmp = '0';
-    readChar(tmp, buf);
-    x = tmp != '0';
-}
-```
-
-**4.** If the entire function body is a single `statement`, it can be placed on a single line. Place spaces around curly braces (besides the space at the end of the line).
-
-``` cpp
-inline size_t mask() const { return buf_size() - 1; }
-inline size_t place(HashValue x) const { return x & mask(); }
-```
-
-**5.** For functions, don't put spaces around brackets.
-
-``` cpp
-void reinsert(const Value & x)
-```
-
-``` cpp
-memcpy(&buf[place_value], &x, sizeof(x));
-```
-
-**6.** In `if`, `for`, `while` and other expressions, a space is inserted in front of the opening bracket (as opposed to function calls).
-
-``` cpp
-for (size_t i = 0; i < rows; i += storage.index_granularity)
-```
-
-**7.** Add spaces around binary operators (`+`, `-`, `*`, `/`, `%`, …) and the ternary operator `?:`.
-
-``` cpp
-UInt16 year = (s[0] - '0') * 1000 + (s[1] - '0') * 100 + (s[2] - '0') * 10 + (s[3] - '0');
-UInt8 month = (s[5] - '0') * 10 + (s[6] - '0');
-UInt8 day = (s[8] - '0') * 10 + (s[9] - '0');
-```
-
-**8.** If a line feed is entered, put the operator on a new line and increase the indent before it.
-
-``` cpp
-if (elapsed_ns)
-    message << " ("
-        << rows_read_on_server * 1000000000 / elapsed_ns << " rows/s., "
-        << bytes_read_on_server * 1000.0 / elapsed_ns << " MB/s.) ";
-```
"; -``` - -**9.** Puede utilizar espacios para la alineación dentro de una línea, si lo desea. - -``` cpp -dst.ClickLogID = click.LogID; -dst.ClickEventID = click.EventID; -dst.ClickGoodEvent = click.GoodEvent; -``` - -**10.** No use espacios alrededor de los operadores `.`, `->`. - -Si es necesario, el operador se puede envolver a la siguiente línea. En este caso, el desplazamiento frente a él aumenta. - -**11.** No utilice un espacio para separar los operadores unarios (`--`, `++`, `*`, `&`, …) from the argument. - -**12.** Pon un espacio después de una coma, pero no antes. La misma regla se aplica a un punto y coma dentro de un `for` expresion. - -**13.** No utilice espacios para separar el `[]` operador. - -**14.** En un `template <...>` expresión, use un espacio entre `template` y `<`; sin espacios después de `<` o antes `>`. - -``` cpp -template -struct AggregatedStatElement -{} -``` - -**15.** En clases y estructuras, escribe `public`, `private`, y `protected` en el mismo nivel que `class/struct`, y sangrar el resto del código. - -``` cpp -template -class MultiVersion -{ -public: - /// Version of object for usage. shared_ptr manage lifetime of version. - using Version = std::shared_ptr; - ... -} -``` - -**16.** Si el mismo `namespace` se usa para todo el archivo, y no hay nada más significativo, no es necesario un desplazamiento dentro `namespace`. - -**17.** Si el bloque para un `if`, `for`, `while`, u otra expresión consiste en una sola `statement`, las llaves son opcionales. Coloque el `statement` en una línea separada, en su lugar. Esta regla también es válida para `if`, `for`, `while`, … - -Pero si el interior `statement` contiene llaves o `else`, el bloque externo debe escribirse entre llaves. - -``` cpp -/// Finish write. -for (auto & stream : streams) - stream.second->finalize(); -``` - -**18.** No debería haber espacios al final de las líneas. - -**19.** Los archivos de origen están codificados en UTF-8. - -**20.** Los caracteres no ASCII se pueden usar en literales de cadena. - -``` cpp -<< ", " << (timer.elapsed() / chunks_stats.hits) << " μsec/hit."; -``` - -**21.** No escriba varias expresiones en una sola línea. - -**22.** Agrupe secciones de código dentro de las funciones y sepárelas con no más de una línea vacía. - -**23.** Separe funciones, clases, etc. con una o dos líneas vacías. - -**24.** `A const` (relacionado con un valor) debe escribirse antes del nombre del tipo. - -``` cpp -//correct -const char * pos -const std::string & s -//incorrect -char const * pos -``` - -**25.** Al declarar un puntero o referencia, el `*` y `&` Los símbolos deben estar separados por espacios en ambos lados. - -``` cpp -//correct -const char * pos -//incorrect -const char* pos -const char *pos -``` - -**26.** Cuando utilice tipos de plantilla, alias con el `using` palabra clave (excepto en los casos más simples). - -En otras palabras, los parámetros de la plantilla se especifican solo en `using` y no se repiten en el código. - -`using` se puede declarar localmente, como dentro de una función. - -``` cpp -//correct -using FileStreams = std::map>; -FileStreams streams; -//incorrect -std::map> streams; -``` - -**27.** No declare varias variables de diferentes tipos en una instrucción. - -``` cpp -//incorrect -int x, *y; -``` - -**28.** No utilice moldes de estilo C. 
-
-``` cpp
-//incorrect
-std::cerr << (int)c << std::endl;
-//correct
-std::cerr << static_cast<int>(c) << std::endl;
-```
-
-**29.** In classes and structs, group members and functions separately inside each visibility scope.
-
-**30.** For small classes and structs, it is not necessary to separate the method declaration from the implementation.
-
-The same is true for small methods in any classes or structs.
-
-For templated classes and structs, do not separate the method declarations from the implementation (because otherwise they must be defined in the same translation unit).
-
-**31.** You can wrap lines at 140 characters, instead of 80.
-
-**32.** Always use the prefix increment/decrement operators if postfix is not required.
-
-``` cpp
-for (Names::const_iterator it = column_names.begin(); it != column_names.end(); ++it)
-```
-
-## Comments {#comments}
-
-**1.** Be sure to add comments for all non-trivial parts of code.
-
-This is very important. Writing a comment might help you realize that the code isn't necessary, or that it is designed wrong.
-
-``` cpp
-/** Part of piece of memory, that can be used.
-  * For example, if internal_buffer is 1MB, and there was only 10 bytes loaded to buffer from file for reading,
-  * then working_buffer will have size of only 10 bytes
-  * (working_buffer.end() will point to position right after those 10 bytes available for read).
-  */
-```
-
-**2.** Comments can be as detailed as necessary.
-
-**3.** Place comments before the code they describe. In rare cases, comments can come after the code, on the same line.
-
-``` cpp
-/** Parses and executes the query.
-*/
-void executeQuery(
-    ReadBuffer & istr, /// Where to read the query from (and data for INSERT, if applicable)
-    WriteBuffer & ostr, /// Where to write the result
-    Context & context, /// DB, tables, data types, engines, functions, aggregate functions...
-    BlockInputStreamPtr & query_plan, /// Here could be written the description on how query was executed
-    QueryProcessingStage::Enum stage = QueryProcessingStage::Complete /// Up to which stage process the SELECT query
-    )
-```
-
-**4.** Comments should be written in English only.
-
-**5.** If you are writing a library, include detailed comments explaining it in the main header file.
-
-**6.** Do not add comments that do not provide additional information. In particular, do not leave empty comments like this:
-
-``` cpp
-/*
-* Procedure Name:
-* Original procedure name:
-* Author:
-* Date of creation:
-* Dates of modification:
-* Modification authors:
-* Original file name:
-* Purpose:
-* Intent:
-* Designation:
-* Classes used:
-* Constants:
-* Local variables:
-* Parameters:
-* Date of creation:
-* Purpose:
-*/
-```
-
-The example is borrowed from the resource http://home.tamk.fi/~jaalto/course/coding-style/doc/unmaintainable-code/.
-
-**7.** Do not write garbage comments (author, creation date, etc.) at the beginning of each file.
-
-**8.** Single-line comments begin with three slashes: `///` and multi-line comments begin with `/**`. These comments are considered "documentation".
-
-Note: you can use Doxygen to generate documentation from these comments. But Doxygen is not generally used, because it is more convenient to navigate the code in the IDE.
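-
-As a minimal illustration of the two styles (the class and member names here are invented for the example):
-
-``` cpp
-/** Multi-line documentation comment.
-  * Describes what the entity is for and any non-obvious details.
-  */
-class ExampleReader
-{
-public:
-    /// Single-line documentation comment for a member.
-    size_t read(char * to, size_t n);
-};
-```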
-
-**9.** Multi-line comments must not have empty lines at the beginning and end (except the line that closes a multi-line comment).
-
-**10.** For commenting out code, use basic comments, not "documenting" comments.
-
-**11.** Delete the commented-out parts of the code before committing.
-
-**12.** Do not use profanity in comments or code.
-
-**13.** Do not use uppercase letters. Do not use excessive punctuation.
-
-``` cpp
-/// WHAT THE FAIL???
-```
-
-**14.** Do not use comments to make delimiters.
-
-``` cpp
-///******************************************************
-```
-
-**15.** Do not start discussions in comments.
-
-``` cpp
-/// Why did you do this stuff?
-```
-
-**16.** There's no need to write a comment at the end of a block describing what it was about.
-
-``` cpp
-/// for
-```
-
-## Names {#names}
-
-**1.** Use lowercase letters with underscores in the names of variables and class members.
-
-``` cpp
-size_t max_block_size;
-```
-
-**2.** For the names of functions (methods), use camelCase beginning with a lowercase letter.
-
-``` cpp
-std::string getName() const override { return "Memory"; }
-```
-
-**3.** For the names of classes (structs), use CamelCase beginning with an uppercase letter. Prefixes other than I are not used for interfaces.
-
-``` cpp
-class StorageMemory : public IStorage
-```
-
-**4.** `using` aliases are named the same way as classes, or with `_t` at the end.
-
-**5.** Names of template type arguments: in simple cases, use `T`; `T`, `U`; `T1`, `T2`.
-
-For more complex cases, either follow the rules for class names, or add the prefix `T`.
-
-``` cpp
-template <typename TKey, typename TValue>
-struct AggregatedStatElement
-```
-
-**6.** Names of template constant arguments: either follow the rules for variable names, or use `N` in simple cases.
-
-``` cpp
-template <bool without_www>
-struct ExtractDomain
-```
-
-**7.** For abstract classes (interfaces) you can add the `I` prefix.
-
-``` cpp
-class IBlockInputStream
-```
-
-**8.** If you use a variable locally, you can use the short name.
-
-In all other cases, use a name that describes the meaning.
-
-``` cpp
-bool info_successfully_loaded = false;
-```
-
-**9.** Names of `define`s and global constants use ALL_CAPS with underscores.
-
-``` cpp
-#define MAX_SRC_TABLE_NAMES_TO_STORE 1000
-```
-
-**10.** File names should use the same style as their contents.
-
-If a file contains a single class, name the file the same way as the class (CamelCase).
-
-If the file contains a single function, name the file the same way as the function (camelCase).
-
-**11.** If the name contains an abbreviation, then:
-
-- For variable names, the abbreviation should use lowercase letters `mysql_connection` (not `mySQL_connection`).
-- For names of classes and functions, keep the uppercase letters in the abbreviation `MySQLConnection` (not `MySqlConnection`).
-
-**12.** Constructor arguments that are used just to initialize the class members should be named the same way as the class members, but with an underscore at the end.
-
-``` cpp
-FileQueueProcessor(
-    const std::string & path_,
-    const std::string & prefix_,
-    std::shared_ptr<FileHandler> handler_)
-    : path(path_),
-    prefix(prefix_),
-    handler(handler_),
-    log(&Logger::get("FileQueueProcessor"))
-{
-}
-```
-
-The underscore suffix can be omitted if the argument is not used in the constructor body.
**13.** There is no difference in the names of local variables and class members (no prefixes required).

``` cpp
timer (not m_timer)
```

**14.** For the constants in an `enum`, use CamelCase with a capital letter. ALL_CAPS is also acceptable. If the `enum` is non-local, use an `enum class`.

``` cpp
enum class CompressionMethod
{
    QuickLZ = 0,
    LZ4 = 1,
};
```

**15.** All names must be in English. Transliteration of Russian words is not allowed.

    not Stroka

**16.** Abbreviations are acceptable if they are well known (when you can easily find the meaning of the abbreviation in Wikipedia or in a search engine).

    `AST`, `SQL`.

    Not `NVDH` (some random letters)

Incomplete words are acceptable if the shortened version is in common use.

You can also use an abbreviation if the full name is included next to it in the comments.

**17.** File names with C++ source code must have the `.cpp` extension. Header files must have the `.h` extension.

## How to Write Code {#how-to-write-code}

**1.** Memory management.

Manual memory deallocation (`delete`) can only be used in library code.

In library code, the `delete` operator can only be used in destructors.

In application code, memory must be freed by the object that owns it.

Examples:

- The easiest way is to place an object on the stack, or make it a member of another class.
- For a large number of small objects, use containers.
- For automatic deallocation of a small number of objects that reside in the heap, use `shared_ptr/unique_ptr`.

**2.** Resource management.

Use `RAII` and see above.

**3.** Error handling.

Use exceptions. In most cases, you only need to throw an exception and do not need to catch it (because of `RAII`).

In offline data processing applications, it is often acceptable to not catch exceptions.

In servers that handle user requests, it is usually enough to catch exceptions at the top level of the connection handler.

In thread functions, you should catch and keep all exceptions to rethrow them in the main thread after `join`.

``` cpp
/// If there weren't any calculations yet, calculate the first block synchronously
if (!started)
{
    calculate();
    started = true;
}
else /// If calculations are already in progress, wait for the result
    pool.wait();

if (exception)
    exception->rethrow();
```

Never hide exceptions without handling. Never just blindly put all exceptions to log.

``` cpp
//Not correct
catch (...) {}
```

If you need to ignore certain exceptions, do so only for specific ones and rethrow the rest.

``` cpp
catch (const DB::Exception & e)
{
    if (e.code() == ErrorCodes::UNKNOWN_AGGREGATE_FUNCTION)
        return nullptr;
    else
        throw;
}
```

When using functions with response codes or `errno`, always check the result and throw an exception in case of error.

``` cpp
if (0 != close(fd))
    throwFromErrno("Cannot close file " + file_name, ErrorCodes::CANNOT_CLOSE_FILE);
```

Do not use `assert`.

**4.** Exception types.

There is no need to use a complex exception hierarchy in application code. The exception text should be understandable to a system administrator.
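A short sketch of what that looks like in practice, mirroring the `throwFromErrno` pattern above (the path variable and the `CANNOT_UNLINK` error code are illustrative assumptions):

``` cpp
/// Good: names the operation and the exact file involved, so an administrator can act on it.
if (0 != unlink(tmp_path.c_str()))
    throwFromErrno("Cannot remove temporary file " + tmp_path, ErrorCodes::CANNOT_UNLINK);
```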
**5.** Throwing exceptions from destructors.

This is not recommended, but it is allowed.

Use the following options:

- Create a function (`done()` or `finalize()`) that will do all the work in advance that might lead to an exception. If that function was called, there should be no exceptions in the destructor later.
- Tasks that are too complex (such as sending messages over the network) can be put in a separate method that the class user will have to call before destruction.
- If there is an exception in the destructor, it is better to log it than to hide it (if the logger is available).
- In simple applications, it is acceptable to rely on `std::terminate` (for cases of `noexcept` by default in C++11) to handle exceptions.

**6.** Anonymous code blocks.

You can create a separate code block inside a single function in order to make certain variables local, so that the destructors are called when exiting the block.

``` cpp
Block block = data.in->read();

{
    std::lock_guard lock(mutex);
    data.ready = true;
    data.block = block;
}

ready_any.set();
```

**7.** Multithreading.

In offline data processing programs:

- Try to get the best possible performance on a single CPU core. You can then parallelize your code if necessary.

In server applications:

- Use the thread pool to process requests. At this point, we have not had any tasks that required userspace context switching.

Fork is not used for parallelization.

**8.** Syncing threads.

Often it is possible to make different threads use different memory cells (even better: different cache lines) and to not use any thread synchronization (except `joinAll`).

If synchronization is required, in most cases it is enough to use a mutex under `lock_guard`.

In other cases, use system synchronization primitives. Do not use busy waiting.

Atomic operations should be used only in the simplest cases.

Do not try to implement lock-free data structures unless it is your primary area of expertise.

**9.** Pointers vs references.

In most cases, prefer references.

**10.** const.

Use constant references, pointers to constants, `const_iterator`, and const methods.

Consider `const` to be the default and use non-`const` only when necessary.

When passing variables by value, using `const` usually does not make sense.

**11.** unsigned.

Use `unsigned` if necessary.

**12.** Numeric types.

Use the types `UInt8`, `UInt16`, `UInt32`, `UInt64`, `Int8`, `Int16`, `Int32`, and `Int64`, as well as `size_t`, `ssize_t`, and `ptrdiff_t`.

Do not use these types for numbers: `signed/unsigned long`, `long long`, `short`, `signed/unsigned char`, `char`.

**13.** Passing arguments.

Pass complex values by reference (including `std::string`).

If a function captures ownership of an object created in the heap, make the argument type `shared_ptr` or `unique_ptr`.

**14.** Return values.

In most cases, just use `return`. Do not write `return std::move(res)`.

If the function allocates an object on the heap and returns it, use `shared_ptr` or `unique_ptr`.
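A minimal illustration of the `return` guidance above (hypothetical function):

``` cpp
std::vector<UInt64> collectIds()
{
    std::vector<UInt64> res;
    res.push_back(42);
    return res;                /// right: enables copy elision, falls back to a move
    /// return std::move(res); /// wrong: suppresses copy elision
}
```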
In rare cases you might need to return the value via an argument. In this case, the argument should be a reference.

``` cpp
using AggregateFunctionPtr = std::shared_ptr<IAggregateFunction>;

/** Allows creating an aggregate function by its name.
  */
class AggregateFunctionFactory
{
public:
    AggregateFunctionFactory();
    AggregateFunctionPtr get(const String & name, const DataTypes & argument_types) const;
```

**15.** namespace.

There is no need to use a separate `namespace` for application code.

Small libraries do not need this, either.

For medium to large libraries, put everything in a `namespace`.

In the library's `.h` file, you can use `namespace detail` to hide implementation details not needed for the application code.

In a `.cpp` file, you can use a `static` or anonymous namespace to hide symbols.

Also, a `namespace` can be used for an `enum` to prevent the corresponding names from falling into an external `namespace` (but it is better to use an `enum class`).

**16.** Deferred initialization.

If arguments are required for initialization, then you normally should not write a default constructor.

If later you will need to delay initialization, you can add a default constructor that will create an invalid object. Or, for a small number of objects, you can use `shared_ptr/unique_ptr`.

``` cpp
Loader(DB::Connection * connection_, const std::string & query, size_t max_block_size_);

/// For deferred initialization
Loader() {}
```

**17.** Virtual functions.

If the class is not intended for polymorphic use, you do not need to make functions virtual. This also applies to the destructor.

**18.** Encodings.

Use UTF-8 everywhere. Use `std::string` and `char *`. Do not use `std::wstring` and `wchar_t`.

**19.** Logging.

See the examples everywhere in the code.

Before committing, delete all meaningless and debug logging, and any other types of debug output.

Logging in cycles should be avoided, even on the Trace level.

Logs must be readable at any logging level.

Logging should only be used in application code, for the most part.

Log messages must be written in English.

The log should preferably be understandable for the system administrator.

Do not use profanity in the log.

Use UTF-8 encoding in the log. In rare cases you can use non-ASCII characters in the log.

**20.** Input-output.

Do not use `iostreams` in internal cycles that are critical for application performance (and never use `stringstream`).

Use the `DB/IO` library instead.

**21.** Date and time.

See the `DateLUT` library.

**22.** include.

Always use `#pragma once` instead of include guards.

**23.** using.

`using namespace` is not used. You can use `using` with something specific. But make it local inside a class or function.

**24.** Do not use `trailing return type` for functions unless necessary.

``` cpp
auto f() -> void
```

**25.** Declaration and initialization of variables.
``` cpp
//right way
std::string s = "Hello";
std::string s{"Hello"};

//wrong way
auto s = std::string{"Hello"};
```

**26.** For virtual functions, write `virtual` in the base class, but write `override` instead of `virtual` in descendant classes.

## Unused Features of C++ {#unused-features-of-c}

**1.** Virtual inheritance is not used.

**2.** Exception specifiers from C++03 are not used.

## Platform {#platform}

**1.** We write code for a specific platform.

But other things being equal, cross-platform or portable code is preferred.

**2.** Language: C++20.

**3.** Compiler: `gcc`. At this time (August 2020), the code is compiled using version 9.3. (It can also be compiled using `clang 8`.)

The standard library is used (`libc++`).

**4.** OS: Linux Ubuntu, not older than Precise.

**5.** Code is written for the x86_64 CPU architecture.

The CPU instruction set is the minimum supported set among our servers. Currently, it is SSE 4.2.

**6.** Use the `-Wall -Wextra -Werror` compilation flags.

**7.** Use static linking with all libraries, except those that are difficult to link to statically (see the output of the `ldd` command).

**8.** Code is developed and debugged with release settings.

## Tools {#tools}

**1.** KDevelop is a good IDE.

**2.** For debugging, use `gdb`, `valgrind` (`memcheck`), `strace`, `-fsanitize=...`, or `tcmalloc_minimal_debug`.

**3.** For profiling, use `Linux Perf`, `valgrind` (`callgrind`), or `strace -cf`.

**4.** Sources are in Git.

**5.** `CMake` is used for the build.

**6.** Programs are released using `deb` packages.

**7.** Commits to master must not break the build.

Though only selected revisions are considered workable.

**8.** Make commits as often as possible, even if the code is only partially ready.

Use branches for this purpose.

If your code in the `master` branch is not buildable yet, exclude it from the build before the `push`. You will need to finish it or remove it within a few days.

**9.** For non-trivial changes, use branches and publish them on the server.

**10.** Unused code is removed from the repository.

## Libraries {#libraries}

**1.** The C++20 standard library is used (experimental extensions are allowed), as well as the `boost` and `Poco` frameworks.

**2.** If necessary, you can use any well-known libraries available in the OS package.

If there is a good solution already available, then use it, even if it means you have to install another library.

(But be prepared to remove bad libraries from code.)

**3.** You can install a library that is not in the packages, if the packages do not have what you need, or have an outdated version or the wrong type of compilation.

**4.** If the library is small and does not have its own complex build system, put the source files in the `contrib` folder.

**5.** Preference is always given to libraries that are already in use.

## General Recommendations {#general-recommendations-1}

**1.** Write as little code as possible.

**2.** Try the simplest solution.

**3.** Do not write code until you know how it is going to work and how the inner loop will function.
**4.** In the simplest cases, use `using` instead of classes or structs.

**5.** If possible, do not write copy constructors, assignment operators, destructors (other than a virtual one, if the class contains at least one virtual function), move constructors or move assignment operators. In other words, the compiler-generated functions must work correctly. You can use `default`.

**6.** Code simplification is encouraged. Reduce the size of your code where possible.

## Additional Recommendations {#additional-recommendations}

**1.** Explicitly specifying `std::` for types from `stddef.h` is not recommended. In other words, we recommend writing `size_t` instead of `std::size_t`, because it is shorter.

It is acceptable to add `std::`.

**2.** Explicitly specifying `std::` for functions from the standard C library is not recommended. In other words, write `memcpy` instead of `std::memcpy`.

The reason is that there are similar non-standard functions, such as `memmem`. We do use these functions on occasion. These functions do not exist in `namespace std`.

If you write `std::memcpy` instead of `memcpy` everywhere, then `memmem` without `std::` will look odd.

Nevertheless, you can still use `std::` if you prefer it.

**3.** Using functions from C when the same ones are available in the standard C++ library.

This is acceptable if it is more efficient.

For example, use `memcpy` instead of `std::copy` for copying large chunks of memory.

**4.** Multiline function arguments.

Any of the following wrapping styles are allowed:

``` cpp
function(
    T1 x1,
    T2 x2)
```

``` cpp
function(
    size_t left, size_t right,
    const & RangesInDataParts ranges,
    size_t limit)
```

``` cpp
function(size_t left, size_t right,
    const & RangesInDataParts ranges,
    size_t limit)
```

``` cpp
function(size_t left, size_t right,
        const & RangesInDataParts ranges,
        size_t limit)
```

``` cpp
function(
    size_t left,
    size_t right,
    const & RangesInDataParts ranges,
    size_t limit)
```

[Original article](https://clickhouse.tech/docs/en/development/style/)

diff --git a/docs/es/development/tests.md b/docs/es/development/tests.md
deleted file mode 120000
index c03d36c3916..00000000000
--- a/docs/es/development/tests.md
+++ /dev/null
@@ -1 +0,0 @@
-../../en/development/tests.md
\ No newline at end of file

diff --git a/docs/es/engines/database-engines/atomic.md b/docs/es/engines/database-engines/atomic.md
deleted file mode 100644
index f019b94a00b..00000000000
--- a/docs/es/engines/database-engines/atomic.md
+++ /dev/null
@@ -1,17 +0,0 @@
---
toc_priority: 32
toc_title: Atomic
---

# Atomic {#atomic}

It supports non-blocking `DROP` and `RENAME TABLE` queries and atomic `EXCHANGE TABLES t1 AND t2` queries. The `Atomic` database engine is used by default.
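For instance, two tables can be swapped in a single atomic step (a sketch that assumes both tables already exist in an `Atomic` database):

``` sql
EXCHANGE TABLES new_data AND old_data;
```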
## Creating a Database {#creating-a-database}

``` sql
CREATE DATABASE test ENGINE = Atomic;
```

[Original article](https://clickhouse.tech/docs/en/engines/database_engines/atomic/)

diff --git a/docs/es/engines/database-engines/index.md b/docs/es/engines/database-engines/index.md
deleted file mode 100644
index 8784b9bd02b..00000000000
--- a/docs/es/engines/database-engines/index.md
+++ /dev/null
@@ -1,21 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_folder_title: Database Engines
toc_priority: 27
toc_title: Introduction
---

# Database Engines {#database-engines}

Database engines let you work with tables.

By default, ClickHouse uses its native database engine, which provides configurable [table engines](../../engines/table-engines/index.md) and an [SQL dialect](../../sql-reference/syntax.md).

You can also use the following database engines:

- [MySQL](mysql.md)

- [Lazy](lazy.md)

[Original article](https://clickhouse.tech/docs/en/database_engines/)

diff --git a/docs/es/engines/database-engines/lazy.md b/docs/es/engines/database-engines/lazy.md
deleted file mode 100644
index 0988c4cb395..00000000000
--- a/docs/es/engines/database-engines/lazy.md
+++ /dev/null
@@ -1,18 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_priority: 31
toc_title: Lazy
---

# Lazy {#lazy}

Keeps tables in RAM only for `expiration_time_in_seconds` seconds after the last access. Can be used only with \*Log tables.

It is optimized for storing many small \*Log tables, for which there is a long time interval between accesses.

## Creating a Database {#creating-a-database}

    CREATE DATABASE testlazy ENGINE = Lazy(expiration_time_in_seconds);

[Original article](https://clickhouse.tech/docs/en/database_engines/lazy/)

diff --git a/docs/es/engines/database-engines/mysql.md b/docs/es/engines/database-engines/mysql.md
deleted file mode 100644
index 5f1dec97f35..00000000000
--- a/docs/es/engines/database-engines/mysql.md
+++ /dev/null
@@ -1,135 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_priority: 30
toc_title: MySQL
---

# MySQL {#mysql}

Allows connecting to databases on a remote MySQL server and performing `INSERT` and `SELECT` queries to exchange data between ClickHouse and MySQL.

The `MySQL` database engine translates queries to the MySQL server, so you can perform operations such as `SHOW TABLES` or `SHOW CREATE TABLE`.

You cannot perform the following queries:

- `RENAME`
- `CREATE TABLE`
- `ALTER`

## Creating a Database {#creating-a-database}

``` sql
CREATE DATABASE [IF NOT EXISTS] db_name [ON CLUSTER cluster]
ENGINE = MySQL('host:port', ['database' | database], 'user', 'password')
```

**Engine Parameters**

- `host:port` — MySQL server address.
- `database` — Remote database name.
- `user` — MySQL user.
- `password` — User password.
## Data Types Support {#data_types-support}

| MySQL                            | ClickHouse                                                   |
|----------------------------------|--------------------------------------------------------------|
| UNSIGNED TINYINT                 | [UInt8](../../sql-reference/data-types/int-uint.md)          |
| TINYINT                          | [Int8](../../sql-reference/data-types/int-uint.md)           |
| UNSIGNED SMALLINT                | [UInt16](../../sql-reference/data-types/int-uint.md)         |
| SMALLINT                         | [Int16](../../sql-reference/data-types/int-uint.md)          |
| UNSIGNED INT, UNSIGNED MEDIUMINT | [UInt32](../../sql-reference/data-types/int-uint.md)         |
| INT, MEDIUMINT                   | [Int32](../../sql-reference/data-types/int-uint.md)          |
| UNSIGNED BIGINT                  | [UInt64](../../sql-reference/data-types/int-uint.md)         |
| BIGINT                           | [Int64](../../sql-reference/data-types/int-uint.md)          |
| FLOAT                            | [Float32](../../sql-reference/data-types/float.md)           |
| DOUBLE                           | [Float64](../../sql-reference/data-types/float.md)           |
| DATE                             | [Date](../../sql-reference/data-types/date.md)               |
| DATETIME, TIMESTAMP              | [DateTime](../../sql-reference/data-types/datetime.md)       |
| BINARY                           | [FixedString](../../sql-reference/data-types/fixedstring.md) |

All other MySQL data types are converted into [String](../../sql-reference/data-types/string.md).

[Nullable](../../sql-reference/data-types/nullable.md) is supported.

## Examples of Use {#examples-of-use}

Table in MySQL:

``` text
mysql> USE test;
Database changed

mysql> CREATE TABLE `mysql_table` (
    ->   `int_id` INT NOT NULL AUTO_INCREMENT,
    ->   `float` FLOAT NOT NULL,
    ->   PRIMARY KEY (`int_id`));
Query OK, 0 rows affected (0,09 sec)

mysql> insert into mysql_table (`int_id`, `float`) VALUES (1,2);
Query OK, 1 row affected (0,00 sec)

mysql> select * from mysql_table;
+--------+-------+
| int_id | value |
+--------+-------+
|      1 |     2 |
+--------+-------+
1 row in set (0,00 sec)
```

Database in ClickHouse, exchanging data with the MySQL server:

``` sql
CREATE DATABASE mysql_db ENGINE = MySQL('localhost:3306', 'test', 'my_user', 'user_password')
```

``` sql
SHOW DATABASES
```

``` text
┌─name─────┐
│ default  │
│ mysql_db │
│ system   │
└──────────┘
```

``` sql
SHOW TABLES FROM mysql_db
```

``` text
┌─name─────────┐
│  mysql_table │
└──────────────┘
```

``` sql
SELECT * FROM mysql_db.mysql_table
```

``` text
┌─int_id─┬─value─┐
│      1 │     2 │
└────────┴───────┘
```

``` sql
INSERT INTO mysql_db.mysql_table VALUES (3,4)
```

``` sql
SELECT * FROM mysql_db.mysql_table
```

``` text
┌─int_id─┬─value─┐
│      1 │     2 │
│      3 │     4 │
└────────┴───────┘
```

[Original article](https://clickhouse.tech/docs/en/database_engines/mysql/)

diff --git a/docs/es/engines/index.md b/docs/es/engines/index.md
deleted file mode 100644
index 03e4426dd8d..00000000000
--- a/docs/es/engines/index.md
+++ /dev/null
@@ -1,8 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_folder_title: Engines
toc_priority: 25
---

diff --git a/docs/es/engines/table-engines/index.md b/docs/es/engines/table-engines/index.md
deleted file mode 100644
index 7be315e3ee3..00000000000
--- a/docs/es/engines/table-engines/index.md
+++ /dev/null
@@ -1,85 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_folder_title: Table Engines
toc_priority: 26
toc_title: Introduction
---

# Table Engines {#table_engines}

The table engine (type of table) determines:

- How and where data is stored, where to write it to, and where to read it from.
- Which queries are supported, and how.
- Concurrent data access.
- Use of indexes, if present.
- Whether multithreaded request execution is possible.
- Data replication parameters.

## Engine Families {#engine-families}

### MergeTree {#mergetree}

The most universal and functional table engines for high-load tasks. The property shared by these engines is quick data insertion with subsequent background data processing. `MergeTree` family engines support data replication (with [Replicated\*](mergetree-family/replication.md#table_engines-replication) versions of engines), partitioning, and other features not supported in other engines.

Engines in the family:

- [MergeTree](mergetree-family/mergetree.md#mergetree)
- [ReplacingMergeTree](mergetree-family/replacingmergetree.md#replacingmergetree)
- [SummingMergeTree](mergetree-family/summingmergetree.md#summingmergetree)
- [AggregatingMergeTree](mergetree-family/aggregatingmergetree.md#aggregatingmergetree)
- [CollapsingMergeTree](mergetree-family/collapsingmergetree.md#table_engine-collapsingmergetree)
- [VersionedCollapsingMergeTree](mergetree-family/versionedcollapsingmergetree.md#versionedcollapsingmergetree)
- [GraphiteMergeTree](mergetree-family/graphitemergetree.md#graphitemergetree)

### Log {#log}

Lightweight [engines](log-family/index.md) with minimum functionality. They are the most effective when you need to quickly write many small tables (up to approximately 1 million rows) and read them later as a whole.

Engines in the family:

- [TinyLog](log-family/tinylog.md#tinylog)
- [StripeLog](log-family/stripelog.md#stripelog)
- [Log](log-family/log.md#log)

### Integration Engines {#integration-engines}

Engines for communicating with other data storage and processing systems.

Engines in the family:

- [Kafka](integrations/kafka.md#kafka)
- [MySQL](integrations/mysql.md#mysql)
- [ODBC](integrations/odbc.md#table-engine-odbc)
- [JDBC](integrations/jdbc.md#table-engine-jdbc)
- [HDFS](integrations/hdfs.md#hdfs)

### Special Engines {#special-engines}

Engines in the family:

- [Distributed](special/distributed.md#distributed)
- [MaterializedView](special/materializedview.md#materializedview)
- [Dictionary](special/dictionary.md#dictionary)
- [Merge](special/merge.md#merge)
- [File](special/file.md#file)
- [Null](special/null.md#null)
- [Set](special/set.md#set)
- [Join](special/join.md#join)
- [URL](special/url.md#table_engines-url)
- [View](special/view.md#table_engines-view)
- [Memory](special/memory.md#memory)
- [Buffer](special/buffer.md#buffer)

## Virtual Columns {#table_engines-virtual_columns}

Virtual column is an integral table engine attribute that is defined in the engine source code.

You should not specify virtual columns in the `CREATE TABLE` query, and you cannot see them in `SHOW CREATE TABLE` and `DESCRIBE TABLE` query results. Virtual columns are also read-only, so you cannot insert data into virtual columns.

To select data from a virtual column, you must specify its name in the `SELECT` query. `SELECT *` does not return values from virtual columns.

If you create a table with a column that has the same name as one of the table virtual columns, the virtual column becomes inaccessible.
We do not recommend doing this. To help avoid conflicts, virtual column names are usually prefixed with an underscore.

[Original article](https://clickhouse.tech/docs/en/operations/table_engines/)

diff --git a/docs/es/engines/table-engines/integrations/hdfs.md b/docs/es/engines/table-engines/integrations/hdfs.md
deleted file mode 100644
index 5e0211660f5..00000000000
--- a/docs/es/engines/table-engines/integrations/hdfs.md
+++ /dev/null
@@ -1,123 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_priority: 36
toc_title: HDFS
---

# HDFS {#table_engines-hdfs}

This engine provides integration with the [Apache Hadoop](https://en.wikipedia.org/wiki/Apache_Hadoop) ecosystem by allowing to manage data on [HDFS](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html) via ClickHouse. This engine is similar
to the [File](../special/file.md#table_engines-file) and [URL](../special/url.md#table_engines-url) engines, but provides Hadoop-specific features.

## Usage {#usage}

``` sql
ENGINE = HDFS(URI, format)
```

The `URI` parameter is the whole file URI in HDFS.
The `format` parameter specifies one of the available file formats. To perform
`SELECT` queries, the format must be supported for input, and to perform
`INSERT` queries — for output. The available formats are listed in the
[Formats](../../../interfaces/formats.md#formats) section.
The path part of `URI` may contain globs. In this case the table is read-only.

**Example:**

**1.** Set up the `hdfs_engine_table` table:

``` sql
CREATE TABLE hdfs_engine_table (name String, value UInt32) ENGINE=HDFS('hdfs://hdfs1:9000/other_storage', 'TSV')
```

**2.** Fill the file:

``` sql
INSERT INTO hdfs_engine_table VALUES ('one', 1), ('two', 2), ('three', 3)
```

**3.** Query the data:

``` sql
SELECT * FROM hdfs_engine_table LIMIT 2
```

``` text
┌─name─┬─value─┐
│ one  │     1 │
│ two  │     2 │
└──────┴───────┘
```

## Implementation Details {#implementation-details}

- Reads and writes can be parallel.
- Not supported:
    - `ALTER` and `SELECT...SAMPLE` operations.
    - Indexes.
    - Replication.

**Globs in path**

Multiple path components can have globs. To be processed, a file must exist and match the whole path pattern. The listing of files is determined during `SELECT` (not at `CREATE` time).

- `*` — Substitutes any number of any characters except `/`, including the empty string.
- `?` — Substitutes any single character.
- `{some_string,another_string,yet_another_one}` — Substitutes any of strings `'some_string', 'another_string', 'yet_another_one'`.
- `{N..M}` — Substitutes any number in range from N to M including both borders.

Constructions with `{}` are similar to the [remote](../../../sql-reference/table-functions/remote.md) table function.

**Example**

1. Suppose we have several files in TSV format with the following URIs on HDFS:

- ‘hdfs://hdfs1:9000/some_dir/some_file_1’
- ‘hdfs://hdfs1:9000/some_dir/some_file_2’
- ‘hdfs://hdfs1:9000/some_dir/some_file_3’
- ‘hdfs://hdfs1:9000/another_dir/some_file_1’
- ‘hdfs://hdfs1:9000/another_dir/some_file_2’
- ‘hdfs://hdfs1:9000/another_dir/some_file_3’
2. There are several ways to make a table consisting of all six files:

``` sql
CREATE TABLE table_with_range (name String, value UInt32) ENGINE = HDFS('hdfs://hdfs1:9000/{some,another}_dir/some_file_{1..3}', 'TSV')
```

Another way:

``` sql
CREATE TABLE table_with_question_mark (name String, value UInt32) ENGINE = HDFS('hdfs://hdfs1:9000/{some,another}_dir/some_file_?', 'TSV')
```

A table consisting of all the files in both directories (all files should satisfy the format and schema described in the query):

``` sql
CREATE TABLE table_with_asterisk (name String, value UInt32) ENGINE = HDFS('hdfs://hdfs1:9000/{some,another}_dir/*', 'TSV')
```

!!! warning "Warning"
    If the listing of files contains number ranges with leading zeros, use the construction with braces for each digit separately, or use `?`.

**Example**

Create a table with files named `file000`, `file001`, … , `file999`:

``` sql
CREATE TABLE big_table (name String, value UInt32) ENGINE = HDFS('hdfs://hdfs1:9000/big_dir/file{0..9}{0..9}{0..9}', 'CSV')
```

## Virtual Columns {#virtual-columns}

- `_path` — Path to the file.
- `_file` — Name of the file.

**See Also**

- [Virtual columns](../index.md#table_engines-virtual_columns)

[Original article](https://clickhouse.tech/docs/en/operations/table_engines/hdfs/)

diff --git a/docs/es/engines/table-engines/integrations/index.md b/docs/es/engines/table-engines/integrations/index.md
deleted file mode 100644
index e57aaf88744..00000000000
--- a/docs/es/engines/table-engines/integrations/index.md
+++ /dev/null
@@ -1,8 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_folder_title: Integration
toc_priority: 30
---

diff --git a/docs/es/engines/table-engines/integrations/jdbc.md b/docs/es/engines/table-engines/integrations/jdbc.md
deleted file mode 100644
index fd3450cef7c..00000000000
--- a/docs/es/engines/table-engines/integrations/jdbc.md
+++ /dev/null
@@ -1,90 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_priority: 34
toc_title: JDBC
---

# JDBC {#table-engine-jdbc}

Allows ClickHouse to connect to external databases via [JDBC](https://en.wikipedia.org/wiki/Java_Database_Connectivity).

To implement the JDBC connection, ClickHouse uses the separate program [clickhouse-jdbc-bridge](https://github.com/alex-krash/clickhouse-jdbc-bridge) that should run as a daemon.

This engine supports the [Nullable](../../../sql-reference/data-types/nullable.md) data type.

## Creating a Table {#creating-a-table}

``` sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name
(
    columns list...
)
ENGINE = JDBC(dbms_uri, external_database, external_table)
```

**Engine Parameters**

- `dbms_uri` — URI of an external DBMS.

    Format: `jdbc:<driver_name>://<host_name>:<port>/?user=<username>&password=<password>`.
    Example for MySQL: `jdbc:mysql://localhost:3306/?user=root&password=root`.

- `external_database` — Database in an external DBMS.

- `external_table` — Name of the table in `external_database`.
## Usage Example {#usage-example}

Creating a table in MySQL server by connecting directly with its console client:

``` text
mysql> CREATE TABLE `test`.`test` (
    ->   `int_id` INT NOT NULL AUTO_INCREMENT,
    ->   `int_nullable` INT NULL DEFAULT NULL,
    ->   `float` FLOAT NOT NULL,
    ->   `float_nullable` FLOAT NULL DEFAULT NULL,
    ->   PRIMARY KEY (`int_id`));
Query OK, 0 rows affected (0,09 sec)

mysql> insert into test (`int_id`, `float`) VALUES (1,2);
Query OK, 1 row affected (0,00 sec)

mysql> select * from test;
+--------+--------------+-------+----------------+
| int_id | int_nullable | float | float_nullable |
+--------+--------------+-------+----------------+
|      1 |         NULL |     2 |           NULL |
+--------+--------------+-------+----------------+
1 row in set (0,00 sec)
```

Creating a table in ClickHouse server and selecting data from it:

``` sql
CREATE TABLE jdbc_table
(
    `int_id` Int32,
    `int_nullable` Nullable(Int32),
    `float` Float32,
    `float_nullable` Nullable(Float32)
)
ENGINE JDBC('jdbc:mysql://localhost:3306/?user=root&password=root', 'test', 'test')
```

``` sql
SELECT *
FROM jdbc_table
```

``` text
┌─int_id─┬─int_nullable─┬─float─┬─float_nullable─┐
│      1 │         ᴺᵁᴸᴸ │     2 │           ᴺᵁᴸᴸ │
└────────┴──────────────┴───────┴────────────────┘
```

## See Also {#see-also}

- [JDBC table function](../../../sql-reference/table-functions/jdbc.md).

[Original article](https://clickhouse.tech/docs/en/operations/table_engines/jdbc/)

diff --git a/docs/es/engines/table-engines/integrations/kafka.md b/docs/es/engines/table-engines/integrations/kafka.md
deleted file mode 100644
index 54250aae82a..00000000000
--- a/docs/es/engines/table-engines/integrations/kafka.md
+++ /dev/null
@@ -1,180 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_priority: 32
toc_title: Kafka
---

# Kafka {#kafka}

This engine works with [Apache Kafka](http://kafka.apache.org/).

Kafka lets you:

- Publish or subscribe to data flows.
- Organize fault-tolerant storage.
- Process streams as they become available.

## Creating a Table {#table_engine-kafka-creating-a-table}

``` sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
    name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
    name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2],
    ...
) ENGINE = Kafka()
SETTINGS
    kafka_broker_list = 'host:port',
    kafka_topic_list = 'topic1,topic2,...',
    kafka_group_name = 'group_name',
    kafka_format = 'data_format'[,]
    [kafka_row_delimiter = 'delimiter_symbol',]
    [kafka_schema = '',]
    [kafka_num_consumers = N,]
    [kafka_max_block_size = 0,]
    [kafka_skip_broken_messages = N,]
    [kafka_commit_every_batch = 0]
```

Required parameters:

- `kafka_broker_list` — A comma-separated list of brokers (for example, `localhost:9092`).
- `kafka_topic_list` — A list of Kafka topics.
- `kafka_group_name` — A group of Kafka consumers. Reading margins are tracked for each group separately. If you do not want messages to be duplicated in the cluster, use the same group name everywhere.
- `kafka_format` — Message format. Uses the same notation as the SQL `FORMAT` function, such as `JSONEachRow`. For more information, see the [Formats](../../../interfaces/formats.md) section.

Optional parameters:

- `kafka_row_delimiter` — Delimiter character, which ends the message.
- `kafka_schema` — Parameter that must be used if the format requires a schema definition.
  For example, [Cap'n Proto](https://capnproto.org/) requires the path to the schema file and the name of the root `schema.capnp:Message` object.
- `kafka_num_consumers` — The number of consumers per table. Default: `1`. Specify more consumers if the throughput of one consumer is insufficient. The total number of consumers should not exceed the number of partitions in the topic, since only one consumer can be assigned per partition.
- `kafka_max_block_size` — The maximum batch size (in messages) for poll (default: `max_block_size`).
- `kafka_skip_broken_messages` — Kafka message parser tolerance to schema-incompatible messages per block. Default: `0`. If `kafka_skip_broken_messages = N`, then the engine skips *N* Kafka messages that cannot be parsed (a message equals a row of data).
- `kafka_commit_every_batch` — Commit every consumed and handled batch instead of a single commit after writing a whole block (default: `0`).

Examples:

``` sql
  CREATE TABLE queue (
    timestamp UInt64,
    level String,
    message String
  ) ENGINE = Kafka('localhost:9092', 'topic', 'group1', 'JSONEachRow');

  SELECT * FROM queue LIMIT 5;

  CREATE TABLE queue2 (
    timestamp UInt64,
    level String,
    message String
  ) ENGINE = Kafka SETTINGS kafka_broker_list = 'localhost:9092',
                            kafka_topic_list = 'topic',
                            kafka_group_name = 'group1',
                            kafka_format = 'JSONEachRow',
                            kafka_num_consumers = 4;

  CREATE TABLE queue2 (
    timestamp UInt64,
    level String,
    message String
  ) ENGINE = Kafka('localhost:9092', 'topic', 'group1')
              SETTINGS kafka_format = 'JSONEachRow',
                       kafka_num_consumers = 4;
```
Deprecated Method for Creating a Table

!!! attention "Attention"
    Do not use this method in new projects. If possible, switch old projects to the method described above.

``` sql
Kafka(kafka_broker_list, kafka_topic_list, kafka_group_name, kafka_format
      [, kafka_row_delimiter, kafka_schema, kafka_num_consumers, kafka_skip_broken_messages])
```
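As a sketch, the deprecated positional form maps onto the `SETTINGS` form like this (hypothetical values, matching the examples above):

``` sql
-- deprecated positional syntax
ENGINE = Kafka('localhost:9092', 'topic', 'group1', 'JSONEachRow')

-- equivalent SETTINGS syntax
ENGINE = Kafka SETTINGS kafka_broker_list = 'localhost:9092',
                        kafka_topic_list = 'topic',
                        kafka_group_name = 'group1',
                        kafka_format = 'JSONEachRow'
```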
## Description {#description}

The delivered messages are tracked automatically, so each message in a group is only counted once. If you want to get the data twice, create a copy of the table with another group name.

Groups are flexible and synced on the cluster. For instance, if you have 10 topics and 5 copies of a table in a cluster, then each copy gets 2 topics. If the number of copies changes, the topics are redistributed across the copies automatically. Read more about this at http://kafka.apache.org/intro.

`SELECT` is not particularly useful for reading messages (except for debugging), because each message can be read only once. It is more practical to create real-time threads using materialized views. To do this:

1. Use the engine to create a Kafka consumer and consider it a data stream.
2. Create a table with the desired structure.
3. Create a materialized view that converts data from the engine and puts it into a previously created table.

When the `MATERIALIZED VIEW` joins the engine, it starts collecting data in the background. This allows you to continually receive messages from Kafka and convert them to the required format using `SELECT`.
One Kafka table can have as many materialized views as you like. They do not read data from the Kafka table directly, but receive new records (in blocks); this way you can write to several tables with different levels of detail (with grouping - aggregation - and without).

Example:

``` sql
  CREATE TABLE queue (
    timestamp UInt64,
    level String,
    message String
  ) ENGINE = Kafka('localhost:9092', 'topic', 'group1', 'JSONEachRow');

  CREATE TABLE daily (
    day Date,
    level String,
    total UInt64
  ) ENGINE = SummingMergeTree(day, (day, level), 8192);

  CREATE MATERIALIZED VIEW consumer TO daily
    AS SELECT toDate(toDateTime(timestamp)) AS day, level, count() as total
    FROM queue GROUP BY day, level;

  SELECT level, sum(total) FROM daily GROUP BY level;
```

To improve performance, received messages are grouped into blocks the size of [max_insert_block_size](../../../operations/server-configuration-parameters/settings.md#settings-max_insert_block_size). If the block was not formed within [stream_flush_interval_ms](../../../operations/server-configuration-parameters/settings.md) milliseconds, the data will be flushed to the table regardless of the completeness of the block.

To stop receiving topic data or to change the conversion logic, detach the materialized view:

``` sql
  DETACH TABLE consumer;
  ATTACH TABLE consumer;
```

If you want to change the target table by using `ALTER`, we recommend disabling the materialized view to avoid discrepancies between the target table and the data from the view.

## Configuration {#configuration}

Similar to GraphiteMergeTree, the Kafka engine supports extended configuration using the ClickHouse config file. There are two configuration keys that you can use: global (`kafka`) and topic-level (`kafka_*`). The global configuration is applied first, and then the topic-level configuration is applied (if it exists).

``` xml
  <kafka>
    <debug>cgrp</debug>
    <auto_offset_reset>smallest</auto_offset_reset>
  </kafka>

  <kafka_logs>
    <retry_backoff_ms>250</retry_backoff_ms>
    <fetch_min_bytes>100000</fetch_min_bytes>
  </kafka_logs>
```

For a list of possible configuration options, see the [librdkafka configuration reference](https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md).
Use the underscore (`_`) instead of a dot in the ClickHouse configuration. For example, `check.crcs=true` becomes `<check_crcs>true</check_crcs>`.

## Virtual Columns {#virtual-columns}

- `_topic` — Kafka topic.
- `_key` — Key of the message.
- `_offset` — Offset of the message.
- `_timestamp` — Timestamp of the message.
- `_partition` — Partition of Kafka topic.

**See Also**

- [Virtual columns](../index.md#table_engines-virtual_columns)

[Original article](https://clickhouse.tech/docs/en/operations/table_engines/kafka/)

diff --git a/docs/es/engines/table-engines/integrations/mysql.md b/docs/es/engines/table-engines/integrations/mysql.md
deleted file mode 100644
index 52799117255..00000000000
--- a/docs/es/engines/table-engines/integrations/mysql.md
+++ /dev/null
@@ -1,105 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_priority: 33
toc_title: MySQL
---

# MySQL {#mysql}

The MySQL engine allows you to perform `SELECT` queries on data that is stored on a remote MySQL server.

## Creating a Table {#creating-a-table}

``` sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
    name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1] [TTL expr1],
    name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2] [TTL expr2],
    ...
) ENGINE = MySQL('host:port', 'database', 'table', 'user', 'password'[, replace_query, 'on_duplicate_clause']);
```

See a detailed description of the [CREATE TABLE](../../../sql-reference/statements/create.md#create-table-query) query.

The table structure can differ from the original MySQL table structure:

- Column names should be the same as in the original MySQL table, but you can use just some of these columns and in any order.
- Column types may differ from those in the original MySQL table. ClickHouse tries to [cast](../../../sql-reference/functions/type-conversion-functions.md#type_conversion_function-cast) values to the ClickHouse data types.

**Engine Parameters**

- `host:port` — MySQL server address.

- `database` — Remote database name.

- `table` — Remote table name.

- `user` — MySQL user.

- `password` — User password.

- `replace_query` — Flag that converts `INSERT INTO` queries to `REPLACE INTO`. If `replace_query=1`, the query is substituted.

- `on_duplicate_clause` — The `ON DUPLICATE KEY on_duplicate_clause` expression that is added to the `INSERT` query.

    Example: `INSERT INTO t (c1,c2) VALUES ('a', 2) ON DUPLICATE KEY UPDATE c2 = c2 + 1`, where `on_duplicate_clause` is `UPDATE c2 = c2 + 1`. See the [MySQL documentation](https://dev.mysql.com/doc/refman/8.0/en/insert-on-duplicate.html) to find which `on_duplicate_clause` you can use with the `ON DUPLICATE KEY` clause.

    To specify `on_duplicate_clause` you need to pass `0` to the `replace_query` parameter. If you simultaneously pass `replace_query = 1` and `on_duplicate_clause`, ClickHouse generates an exception.

Simple `WHERE` clauses such as `=, !=, >, >=, <, <=` are executed on the MySQL server.

The rest of the conditions and the `LIMIT` sampling constraint are executed in ClickHouse only after the query to MySQL finishes.
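As an illustration (a sketch assuming the `mysql_table` defined in the example below), the equality condition in the following query can be executed on the MySQL side, while the function-based condition and the `LIMIT` are applied by ClickHouse after the rows arrive:

``` sql
SELECT * FROM mysql_table WHERE int_id = 1 AND sqrt(float_nullable) > 1 LIMIT 10
```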
## Usage Example {#usage-example}

Table in MySQL:

``` text
mysql> CREATE TABLE `test`.`test` (
    ->   `int_id` INT NOT NULL AUTO_INCREMENT,
    ->   `int_nullable` INT NULL DEFAULT NULL,
    ->   `float` FLOAT NOT NULL,
    ->   `float_nullable` FLOAT NULL DEFAULT NULL,
    ->   PRIMARY KEY (`int_id`));
Query OK, 0 rows affected (0,09 sec)

mysql> insert into test (`int_id`, `float`) VALUES (1,2);
Query OK, 1 row affected (0,00 sec)

mysql> select * from test;
+--------+--------------+-------+----------------+
| int_id | int_nullable | float | float_nullable |
+--------+--------------+-------+----------------+
|      1 |         NULL |     2 |           NULL |
+--------+--------------+-------+----------------+
1 row in set (0,00 sec)
```

Table in ClickHouse, retrieving data from the MySQL table created above:

``` sql
CREATE TABLE mysql_table
(
    `float_nullable` Nullable(Float32),
    `int_id` Int32
)
ENGINE = MySQL('localhost:3306', 'test', 'test', 'bayonet', '123')
```

``` sql
SELECT * FROM mysql_table
```

``` text
┌─float_nullable─┬─int_id─┐
│           ᴺᵁᴸᴸ │      1 │
└────────────────┴────────┘
```

## See Also {#see-also}

- [The ‘mysql’ table function](../../../sql-reference/table-functions/mysql.md)
- [Using MySQL as a source of external dictionary](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md#dicts-external_dicts_dict_sources-mysql)

[Original article](https://clickhouse.tech/docs/en/operations/table_engines/mysql/)

diff --git a/docs/es/engines/table-engines/integrations/odbc.md b/docs/es/engines/table-engines/integrations/odbc.md
deleted file mode 100644
index 75c79484d61..00000000000
--- a/docs/es/engines/table-engines/integrations/odbc.md
+++ /dev/null
@@ -1,132 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_priority: 35
toc_title: ODBC
---

# ODBC {#table-engine-odbc}

Allows ClickHouse to connect to external databases via [ODBC](https://en.wikipedia.org/wiki/Open_Database_Connectivity).

To safely implement ODBC connections, ClickHouse uses a separate program `clickhouse-odbc-bridge`. If the ODBC driver is loaded directly from `clickhouse-server`, driver problems can crash the ClickHouse server. ClickHouse automatically starts `clickhouse-odbc-bridge` when it is required. The ODBC bridge program is installed from the same package as the `clickhouse-server`.

This engine supports the [Nullable](../../../sql-reference/data-types/nullable.md) data type.

## Creating a Table {#creating-a-table}

``` sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
    name1 [type1],
    name2 [type2],
    ...
)
ENGINE = ODBC(connection_settings, external_database, external_table)
```

See a detailed description of the [CREATE TABLE](../../../sql-reference/statements/create.md#create-table-query) query.

The table structure can differ from the source table structure:

- Column names should be the same as in the source table, but you can use just some of these columns and in any order.
- Column types may differ from those in the source table. ClickHouse tries to [cast](../../../sql-reference/functions/type-conversion-functions.md#type_conversion_function-cast) values to the ClickHouse data types.

**Engine Parameters**

- `connection_settings` — Name of the section with connection settings in the `odbc.ini` file.
- `external_database` — Name of a database in an external DBMS.
- `external_table` — Name of a table in the `external_database`.

## Usage Example {#usage-example}

**Retrieving data from the local MySQL installation via ODBC**

This example is checked for Ubuntu Linux 18.04 and MySQL server 5.7.

Ensure that unixODBC and MySQL Connector are installed.

By default (if installed from packages), ClickHouse starts as user `clickhouse`. Thus, you need to create and configure this user in the MySQL server.

``` bash
$ sudo mysql
```

``` sql
mysql> CREATE USER 'clickhouse'@'localhost' IDENTIFIED BY 'clickhouse';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'clickhouse'@'clickhouse' WITH GRANT OPTION;
```

Then configure the connection in `/etc/odbc.ini`.

``` bash
$ cat /etc/odbc.ini
[mysqlconn]
DRIVER = /usr/local/lib/libmyodbc5w.so
SERVER = 127.0.0.1
PORT = 3306
DATABASE = test
USERNAME = clickhouse
PASSWORD = clickhouse
```

You can check the connection using the `isql` utility from the unixODBC installation.

``` bash
$ isql -v mysqlconn
+-------------------------+
| Connected!              |
|                         |
...
```

Table in MySQL:

``` text
mysql> CREATE TABLE `test`.`test` (
    ->   `int_id` INT NOT NULL AUTO_INCREMENT,
    ->   `int_nullable` INT NULL DEFAULT NULL,
    ->   `float` FLOAT NOT NULL,
    ->   `float_nullable` FLOAT NULL DEFAULT NULL,
    ->   PRIMARY KEY (`int_id`));
Query OK, 0 rows affected (0,09 sec)

mysql> insert into test (`int_id`, `float`) VALUES (1,2);
Query OK, 1 row affected (0,00 sec)

mysql> select * from test;
+--------+--------------+-------+----------------+
| int_id | int_nullable | float | float_nullable |
+--------+--------------+-------+----------------+
|      1 |         NULL |     2 |           NULL |
+--------+--------------+-------+----------------+
1 row in set (0,00 sec)
```

Table in ClickHouse, retrieving data from the MySQL table:

``` sql
CREATE TABLE odbc_t
(
    `int_id` Int32,
    `float_nullable` Nullable(Float32)
)
ENGINE = ODBC('DSN=mysqlconn', 'test', 'test')
```

``` sql
SELECT * FROM odbc_t
```

``` text
┌─int_id─┬─float_nullable─┐
│      1 │           ᴺᵁᴸᴸ │
└────────┴────────────────┘
```

## See Also {#see-also}

- [ODBC external dictionaries](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md#dicts-external_dicts_dict_sources-odbc)
- [ODBC table function](../../../sql-reference/table-functions/odbc.md)

[Original article](https://clickhouse.tech/docs/en/operations/table_engines/odbc/)

diff --git a/docs/es/engines/table-engines/log-family/index.md b/docs/es/engines/table-engines/log-family/index.md
deleted file mode 100644
index a7a3016f967..00000000000
--- a/docs/es/engines/table-engines/log-family/index.md
+++ /dev/null
@@ -1,47 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_folder_title: Log Family
toc_priority: 29
toc_title: Introduction
---

# Log Engine Family {#log-engine-family}

These engines were developed for scenarios when you need to quickly write many small tables (up to about 1 million rows) and read them later as a whole.

Engines of the family:

- [StripeLog](stripelog.md)
- [Log](log.md)
- [TinyLog](tinylog.md)

## Common Properties {#common-properties}

Engines:

- Store data on a disk.

- Append data to the end of the file when writing.

- Support locks for concurrent data access.
    During `INSERT` queries, the table is locked, and other queries for reading and writing data both wait for the table to unlock. If there are no data writing queries, any number of data reading queries can be performed concurrently.

- Do not support [mutation](../../../sql-reference/statements/alter.md#alter-mutations) operations.

- Do not support indexes.

    This means that `SELECT` queries for ranges of data are not efficient.

- Do not write data atomically.

    You can get a table with corrupted data if something breaks the write operation, for example, abnormal server shutdown.

## Differences {#differences}

The `TinyLog` engine is the simplest in the family and provides the poorest functionality and the lowest efficiency. The `TinyLog` engine does not support parallel data reading by several threads. It reads data more slowly than other engines in the family that support parallel reading, and it uses almost as many descriptors as the `Log` engine because it stores each column in a separate file. Use it in simple low-load scenarios.

The `Log` and `StripeLog` engines support parallel data reading. When reading data, ClickHouse uses multiple threads. Each thread processes a separate data block. The `Log` engine uses a separate file for each column of the table. `StripeLog` stores all the data in one file. As a result, the `StripeLog` engine uses fewer descriptors in the operating system, but the `Log` engine provides higher efficiency when reading data.

[Original article](https://clickhouse.tech/docs/en/operations/table_engines/log_family/)

diff --git a/docs/es/engines/table-engines/log-family/log.md b/docs/es/engines/table-engines/log-family/log.md
deleted file mode 100644
index 1db374390e4..00000000000
--- a/docs/es/engines/table-engines/log-family/log.md
+++ /dev/null
@@ -1,16 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_priority: 33
toc_title: Log
---

# Log {#log}

The engine belongs to the family of log engines. See the common properties of log engines and their differences in the [Log Engine Family](index.md) article.

Log differs from [TinyLog](tinylog.md) in that a small file of “marks” resides with the column files. These marks are written on every data block and contain offsets that indicate where to start reading the file in order to skip the specified number of rows. This makes it possible to read table data in multiple threads.
For concurrent data access, the read operations can be performed simultaneously, while write operations block reads and each other.
The Log engine does not support indexes. Similarly, if writing to a table failed, the table is broken, and reading from it returns an error. The Log engine is appropriate for temporary data, write-once tables, and for testing or demonstration purposes.
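A minimal usage sketch (hypothetical table and data):

``` sql
CREATE TABLE log_example (ts DateTime, message String) ENGINE = Log;

INSERT INTO log_example VALUES (now(), 'first message'), (now(), 'second message');

SELECT * FROM log_example;
```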
[Original article](https://clickhouse.tech/docs/en/operations/table_engines/log/)

diff --git a/docs/es/engines/table-engines/log-family/stripelog.md b/docs/es/engines/table-engines/log-family/stripelog.md
deleted file mode 100644
index 0965e9a987c..00000000000
--- a/docs/es/engines/table-engines/log-family/stripelog.md
+++ /dev/null
@@ -1,95 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_priority: 32
toc_title: StripeLog
---

# StripeLog {#stripelog}

This engine belongs to the family of log engines. See the common properties of log engines and their differences in the [Log Engine Family](index.md) article.

Use this engine in scenarios when you need to write many tables with a small amount of data (less than 1 million rows).

## Creating a Table {#table_engines-stripelog-creating-a-table}

``` sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
    column1_name [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
    column2_name [type2] [DEFAULT|MATERIALIZED|ALIAS expr2],
    ...
) ENGINE = StripeLog
```

See the detailed description of the [CREATE TABLE](../../../sql-reference/statements/create.md#create-table-query) query.

## Writing the Data {#table_engines-stripelog-writing-the-data}

The `StripeLog` engine stores all the columns in one file. For each `INSERT` query, ClickHouse appends the data block to the end of a table file, writing columns one by one.

For each table ClickHouse writes the files:

- `data.bin` — Data file.
- `index.mrk` — File with marks. Marks contain offsets for each column of each data block inserted.

The `StripeLog` engine does not support the `ALTER UPDATE` and `ALTER DELETE` operations.

## Reading the Data {#table_engines-stripelog-reading-the-data}

The file with marks allows ClickHouse to parallelize the reading of data. This means that a `SELECT` query returns rows in an unpredictable order. Use the `ORDER BY` clause to sort rows.

## Example of Use {#table_engines-stripelog-example-of-use}

Creating a table:

``` sql
CREATE TABLE stripe_log_table
(
    timestamp DateTime,
    message_type String,
    message String
)
ENGINE = StripeLog
```

Inserting data:

``` sql
INSERT INTO stripe_log_table VALUES (now(),'REGULAR','The first regular message')
INSERT INTO stripe_log_table VALUES (now(),'REGULAR','The second regular message'),(now(),'WARNING','The first warning message')
```

We used two `INSERT` queries to create two data blocks inside the `data.bin` file.

ClickHouse uses multiple threads when selecting data. Each thread reads a separate data block and returns the resulting rows independently as it finishes. As a result, the order of the blocks of rows in the output does not match the order of the same blocks in the input in most cases.
-For example:
-
-``` sql
-SELECT * FROM stripe_log_table
-```
-
-``` text
-┌───────────timestamp─┬─message_type─┬─message────────────────────┐
-│ 2019-01-18 14:27:32 │ REGULAR      │ The second regular message │
-│ 2019-01-18 14:34:53 │ WARNING      │ The first warning message  │
-└─────────────────────┴──────────────┴────────────────────────────┘
-┌───────────timestamp─┬─message_type─┬─message───────────────────┐
-│ 2019-01-18 14:23:43 │ REGULAR      │ The first regular message │
-└─────────────────────┴──────────────┴───────────────────────────┘
-```
-
-Sorting the results (ascending order by default):
-
-``` sql
-SELECT * FROM stripe_log_table ORDER BY timestamp
-```
-
-``` text
-┌───────────timestamp─┬─message_type─┬─message────────────────────┐
-│ 2019-01-18 14:23:43 │ REGULAR      │ The first regular message  │
-│ 2019-01-18 14:27:32 │ REGULAR      │ The second regular message │
-│ 2019-01-18 14:34:53 │ WARNING      │ The first warning message  │
-└─────────────────────┴──────────────┴────────────────────────────┘
-```
-
-[Original article](https://clickhouse.tech/docs/en/operations/table_engines/stripelog/)
diff --git a/docs/es/engines/table-engines/log-family/tinylog.md b/docs/es/engines/table-engines/log-family/tinylog.md
deleted file mode 100644
index a2cbf7257b6..00000000000
--- a/docs/es/engines/table-engines/log-family/tinylog.md
+++ /dev/null
@@ -1,16 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_priority: 34
-toc_title: TinyLog
----
-
-# TinyLog {#tinylog}
-
-The engine belongs to the log engine family. See [Log Engine Family](index.md) for the common properties of log engines and their differences.
-
-This table engine is typically used with the write-once method: write data one time, then read it as many times as necessary. For example, you can use `TinyLog`-type tables for intermediary data that is processed in small batches. Note that storing data in a large number of small tables is inefficient.
-
-Queries are executed in a single stream. In other words, this engine is intended for relatively small tables (up to about 1,000,000 rows). It makes sense to use this table engine if you have many small tables, since it is simpler than the [Log](log.md) engine (fewer files need to be opened).
-
-[Original article](https://clickhouse.tech/docs/en/operations/table_engines/tinylog/)
diff --git a/docs/es/engines/table-engines/mergetree-family/aggregatingmergetree.md b/docs/es/engines/table-engines/mergetree-family/aggregatingmergetree.md
deleted file mode 100644
index 2aedfbd2317..00000000000
--- a/docs/es/engines/table-engines/mergetree-family/aggregatingmergetree.md
+++ /dev/null
@@ -1,105 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_priority: 35
-toc_title: AggregatingMergeTree
----
-
-# AggregatingMergeTree {#aggregatingmergetree}
-
-The engine inherits from [MergeTree](mergetree.md#table_engines-mergetree), altering the logic for data parts merging. ClickHouse replaces all rows with the same primary key (or, more precisely, with the same [sorting key](mergetree.md)) with a single row (within one data part) that stores a combination of states of aggregate functions.
-
-You can use `AggregatingMergeTree` tables for incremental data aggregation, including for aggregated materialized views.
-
-The engine processes all columns with the following types:
-
-- [AggregateFunction](../../../sql-reference/data-types/aggregatefunction.md)
-- [SimpleAggregateFunction](../../../sql-reference/data-types/simpleaggregatefunction.md)
-
-It is appropriate to use `AggregatingMergeTree` if it reduces the number of rows by orders of magnitude.
-
-## Creating a Table {#creating-a-table}
-
-``` sql
-CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
-(
-    name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
-    name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2],
-    ...
-) ENGINE = AggregatingMergeTree()
-[PARTITION BY expr]
-[ORDER BY expr]
-[SAMPLE BY expr]
-[TTL expr]
-[SETTINGS name=value, ...]
-```
-
-For a description of the request parameters, see the [request description](../../../sql-reference/statements/create.md).
-
-**Query clauses**
-
-When creating an `AggregatingMergeTree` table, the same [clauses](mergetree.md) are required as when creating a `MergeTree` table.
-
-<details markdown="1">
-
-<summary>Deprecated Method for Creating a Table</summary>
-
-!!! attention "Attention"
-    Do not use this method in new projects and, if possible, switch old projects to the method described above.
-
-``` sql
-CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
-(
-    name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
-    name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2],
-    ...
-) ENGINE [=] AggregatingMergeTree(date-column [, sampling_expression], (primary, key), index_granularity)
-```
-
-All of the parameters have the same meaning as in `MergeTree`.
-
-</details>
-
-## SELECT and INSERT {#select-and-insert}
-
-To insert data, use an [INSERT SELECT](../../../sql-reference/statements/insert-into.md) query with aggregate `-State` functions.
-When selecting data from an `AggregatingMergeTree` table, use a `GROUP BY` clause and the same aggregate functions as when inserting data, but with the `-Merge` suffix.
-
-In the results of a `SELECT` query, the values of the `AggregateFunction` type have an implementation-specific binary representation for all of the ClickHouse output formats. If you dump data into, for example, the `TabSeparated` format with a `SELECT` query, then this dump can be loaded back using an `INSERT` query.
-
-## Example of an Aggregated Materialized View {#example-of-an-aggregated-materialized-view}
-
-An `AggregatingMergeTree` materialized view that watches the `test.visits` table:
-
-``` sql
-CREATE MATERIALIZED VIEW test.basic
-ENGINE = AggregatingMergeTree() PARTITION BY toYYYYMM(StartDate) ORDER BY (CounterID, StartDate)
-AS SELECT
-    CounterID,
-    StartDate,
-    sumState(Sign)    AS Visits,
-    uniqState(UserID) AS Users
-FROM test.visits
-GROUP BY CounterID, StartDate;
-```
-
-Inserting data into the `test.visits` table.
-
-``` sql
-INSERT INTO test.visits ...
-```
-
-The data is inserted in both the table and the view `test.basic`, which will perform the aggregation.
-
-To get the aggregated data, we need to execute a query such as `SELECT ... GROUP BY ...` from the view `test.basic`:
-
-``` sql
-SELECT
-    StartDate,
-    sumMerge(Visits) AS Visits,
-    uniqMerge(Users) AS Users
-FROM test.basic
-GROUP BY StartDate
-ORDER BY StartDate;
-```
-
-[Original article](https://clickhouse.tech/docs/en/operations/table_engines/aggregatingmergetree/)
diff --git a/docs/es/engines/table-engines/mergetree-family/collapsingmergetree.md b/docs/es/engines/table-engines/mergetree-family/collapsingmergetree.md
deleted file mode 100644
index 027d5c2adf7..00000000000
--- a/docs/es/engines/table-engines/mergetree-family/collapsingmergetree.md
+++ /dev/null
@@ -1,306 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_priority: 36
-toc_title: CollapsingMergeTree
----
-
-# CollapsingMergeTree {#table_engine-collapsingmergetree}
-
-The engine inherits from [MergeTree](mergetree.md) and adds the logic of rows collapsing to the data parts merge algorithm.
-
-`CollapsingMergeTree` asynchronously deletes (collapses) pairs of rows if all of the fields in a sorting key (`ORDER BY`) are equivalent except the particular field `Sign`, which can have `1` and `-1` values. Rows without a pair are kept. For more details, see the [Collapsing](#table_engine-collapsingmergetree-collapsing) section of the document.
-
-The engine may significantly reduce the volume of storage and increase the efficiency of `SELECT` queries as a consequence.
-
-## Creating a Table {#creating-a-table}
-
-``` sql
-CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
-(
-    name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
-    name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2],
-    ...
-) ENGINE = CollapsingMergeTree(sign)
-[PARTITION BY expr]
-[ORDER BY expr]
-[SAMPLE BY expr]
-[SETTINGS name=value, ...]
-```
-
-For a description of the query parameters, see the [query description](../../../sql-reference/statements/create.md).
-
-**CollapsingMergeTree Parameters**
-
-- `sign` — Name of the column with the type of row: `1` is a “state” row, `-1` is a “cancel” row.
-
-    Column data type — `Int8`.
-
-**Query clauses**
-
-When creating a `CollapsingMergeTree` table, the same [query clauses](mergetree.md#table_engine-mergetree-creating-a-table) are required as when creating a `MergeTree` table.
-
-<details markdown="1">
-
-<summary>Deprecated Method for Creating a Table</summary>
-
-!!! attention "Attention"
-    Do not use this method in new projects and, if possible, switch old projects to the method described above.
-
-``` sql
-CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
-(
-    name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
-    name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2],
-    ...
-) ENGINE [=] CollapsingMergeTree(date-column [, sampling_expression], (primary, key), index_granularity, sign)
```
-
-All of the parameters except `sign` have the same meaning as in `MergeTree`.
-
-- `sign` — Name of the column with the type of row: `1` — “state” row, `-1` — “cancel” row.
-
-    Column Data Type — `Int8`.
-
-</details>
-
-## Collapsing {#table_engine-collapsingmergetree-collapsing}
-
-### Data {#data}
-
-Consider the situation where you need to save continually changing data for some object. It sounds logical to have one row per object and update it on any change, but the update operation is expensive and slow for a DBMS because it requires rewriting the data in the storage. If you need to write data quickly, updating is not acceptable, but you can write the changes of an object sequentially as follows.
-
-Use the particular column `Sign`. If `Sign = 1`, it means that the row is a state of an object; let's call it the “state” row. If `Sign = -1`, it means the cancellation of the state of an object with the same attributes; let's call it the “cancel” row.
-
-For example, we want to calculate how many pages users checked on some site and how long they were there. At some moment we write the following row with the state of user activity:
-
-``` text
-┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┐
-│ 4324182021466249494 │         5 │      146 │    1 │
-└─────────────────────┴───────────┴──────────┴──────┘
-```
-
-At some moment later we register the change of user activity and write it with the following two rows.
-
-``` text
-┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┐
-│ 4324182021466249494 │         5 │      146 │   -1 │
-│ 4324182021466249494 │         6 │      185 │    1 │
-└─────────────────────┴───────────┴──────────┴──────┘
-```
-
-The first row cancels the previous state of the object (user). It should copy the sorting key fields of the cancelled state except `Sign`.
-
-The second row contains the current state.
-
-As we need only the last state of user activity, the rows
-
-``` text
-┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┐
-│ 4324182021466249494 │         5 │      146 │    1 │
-│ 4324182021466249494 │         5 │      146 │   -1 │
-└─────────────────────┴───────────┴──────────┴──────┘
-```
-
-can be deleted, collapsing the invalid (old) state of the object. `CollapsingMergeTree` does this while merging the data parts.
-
-Why we need two rows for each change is explained in the [Algorithm](#table_engine-collapsingmergetree-collapsing-algorithm) paragraph.
-
-**Peculiar properties of such an approach**
-
-1. The program that writes the data should remember the state of an object to be able to cancel it. The “cancel” row should contain copies of the sorting key fields of the “state” row and the opposite `Sign`. It increases the initial size of storage but allows writing the data quickly.
-2. Long growing arrays in columns reduce the efficiency of the engine due to the load for writing. The more straightforward the data, the higher the efficiency.
-3. The `SELECT` results depend strongly on the consistency of the object change history. Be accurate when preparing data for inserting. You can get unpredictable results with inconsistent data, for example, negative values for non-negative metrics such as session depth.
-
-### Algorithm {#table_engine-collapsingmergetree-collapsing-algorithm}
-
-When ClickHouse merges data parts, each group of consecutive rows with the same sorting key (`ORDER BY`) is reduced to no more than two rows, one with `Sign = 1` (the “state” row) and another with `Sign = -1` (the “cancel” row). In other words, entries collapse.
-
-For each resulting data part, ClickHouse saves:
-
-1. The first “cancel” and the last “state” rows, if the number of “state” and “cancel” rows matches and the last row is a “state” row.
-2. The last “state” row, if there are more “state” rows than “cancel” rows.
-3. The first “cancel” row, if there are more “cancel” rows than “state” rows.
-4. None of the rows, in all other cases.
-
-Also, when there are at least 2 more “state” rows than “cancel” rows, or at least 2 more “cancel” rows than “state” rows, the merge continues, but ClickHouse treats this situation as a logical error and records it in the server log. This error can occur if the same data were inserted more than once.
-
-Thus, collapsing should not change the results of calculating statistics.
-Changes gradually collapse so that in the end only the last state of almost every object is left.
-
-The `Sign` is required because the merging algorithm does not guarantee that all of the rows with the same sorting key will be in the same resulting data part, or even on the same physical server. ClickHouse processes `SELECT` queries with multiple threads, and it cannot predict the order of rows in the result. Aggregation is required if there is a need to get completely “collapsed” data from a `CollapsingMergeTree` table.
-
-To finalize collapsing, write a query with a `GROUP BY` clause and aggregate functions that account for the sign. For example, to calculate quantity, use `sum(Sign)` instead of `count()`. To calculate the sum of something, use `sum(Sign * x)` instead of `sum(x)`, and so on, and also add `HAVING sum(Sign) > 0`.
-
-The aggregates `count`, `sum` and `avg` can be calculated this way. The aggregate `uniq` can be calculated if an object has at least one uncollapsed state. The aggregates `min` and `max` cannot be calculated because `CollapsingMergeTree` does not save the history of values of the collapsed states.
-
-If you need to extract data without aggregation (for example, to check whether rows are present whose newest values match certain conditions), you can use the `FINAL` modifier for the `FROM` clause. This approach is significantly less efficient.
-
-## Example of Use {#example-of-use}
-
-Example data:
-
-``` text
-┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┐
-│ 4324182021466249494 │         5 │      146 │    1 │
-│ 4324182021466249494 │         5 │      146 │   -1 │
-│ 4324182021466249494 │         6 │      185 │    1 │
-└─────────────────────┴───────────┴──────────┴──────┘
-```
-
-Creating the table:
-
-``` sql
-CREATE TABLE UAct
-(
-    UserID UInt64,
-    PageViews UInt8,
-    Duration UInt8,
-    Sign Int8
-)
-ENGINE = CollapsingMergeTree(Sign)
-ORDER BY UserID
-```
-
-Inserting the data:
-
-``` sql
-INSERT INTO UAct VALUES (4324182021466249494, 5, 146, 1)
-```
-
-``` sql
-INSERT INTO UAct VALUES (4324182021466249494, 5, 146, -1),(4324182021466249494, 6, 185, 1)
-```
-
-We use two `INSERT` queries to create two different data parts. If we insert the data with one query, ClickHouse creates one data part and will never perform any merge.
-
-Getting the data:
-
-``` sql
-SELECT * FROM UAct
-```
-
-``` text
-┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┐
-│ 4324182021466249494 │         5 │      146 │   -1 │
-│ 4324182021466249494 │         6 │      185 │    1 │
-└─────────────────────┴───────────┴──────────┴──────┘
-┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┐
-│ 4324182021466249494 │         5 │      146 │    1 │
-└─────────────────────┴───────────┴──────────┴──────┘
-```
-
-What do we see, and where is the collapsing?
-
-With two `INSERT` queries, we created two data parts. The `SELECT` query was performed in two threads, and we got a random order of rows. Collapsing did not occur because the data parts have not been merged yet. ClickHouse merges data parts at an unknown moment which we cannot predict.
-
-Thus we need aggregation:
-
-``` sql
-SELECT
-    UserID,
-    sum(PageViews * Sign) AS PageViews,
-    sum(Duration * Sign) AS Duration
-FROM UAct
-GROUP BY UserID
-HAVING sum(Sign) > 0
-```
-
-``` text
-┌──────────────UserID─┬─PageViews─┬─Duration─┐
-│ 4324182021466249494 │         6 │      185 │
-└─────────────────────┴───────────┴──────────┘
-```
-
-If we do not need aggregation and want to force collapsing, we can use the `FINAL` modifier for the `FROM` clause.
-
-``` sql
-SELECT * FROM UAct FINAL
-```
-
-``` text
-┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┐
-│ 4324182021466249494 │         6 │      185 │    1 │
-└─────────────────────┴───────────┴──────────┴──────┘
-```
-
-This way of selecting the data is very inefficient. Don't use it for big tables.
-
-## Example of Another Approach {#example-of-another-approach}
-
-Example data:
-
-``` text
-┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┐
-│ 4324182021466249494 │         5 │      146 │    1 │
-│ 4324182021466249494 │        -5 │     -146 │   -1 │
-│ 4324182021466249494 │         6 │      185 │    1 │
-└─────────────────────┴───────────┴──────────┴──────┘
-```
-
-The idea is that merges take into account only the key fields. And in the “cancel” row we can specify negative values that equalize the previous version of the row when summing without using the `Sign` column. For this approach, it is necessary to change the data type of `PageViews` and `Duration` from UInt8 to Int16 to store negative values.
-
-``` sql
-CREATE TABLE UAct
-(
-    UserID UInt64,
-    PageViews Int16,
-    Duration Int16,
-    Sign Int8
-)
-ENGINE = CollapsingMergeTree(Sign)
-ORDER BY UserID
-```
-
-Let's test the approach:
-
-``` sql
-insert into UAct values(4324182021466249494, 5, 146, 1);
-insert into UAct values(4324182021466249494, -5, -146, -1);
-insert into UAct values(4324182021466249494, 6, 185, 1);
-
-select * from UAct final; // avoid using final in production (just for a test or small tables)
-```
-
-``` text
-┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┐
-│ 4324182021466249494 │         6 │      185 │    1 │
-└─────────────────────┴───────────┴──────────┴──────┘
-```
-
-``` sql
-SELECT
-    UserID,
-    sum(PageViews) AS PageViews,
-    sum(Duration) AS Duration
-FROM UAct
-GROUP BY UserID
-```
-
-``` text
-┌──────────────UserID─┬─PageViews─┬─Duration─┐
-│ 4324182021466249494 │         6 │      185 │
-└─────────────────────┴───────────┴──────────┘
-```
-
-``` sql
-select count() FROM UAct
-```
-
-``` text
-┌─count()─┐
-│       3 │
-└─────────┘
-```
-
-``` sql
-optimize table UAct final;
-
-select * FROM UAct
-```
-
-``` text
-┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┐
-│ 4324182021466249494 │         6 │      185 │    1 │
-└─────────────────────┴───────────┴──────────┴──────┘
-```
-
-[Original article](https://clickhouse.tech/docs/en/operations/table_engines/collapsingmergetree/)
diff --git a/docs/es/engines/table-engines/mergetree-family/custom-partitioning-key.md b/docs/es/engines/table-engines/mergetree-family/custom-partitioning-key.md
deleted file mode 100644
index 6cbc0a9192e..00000000000
--- a/docs/es/engines/table-engines/mergetree-family/custom-partitioning-key.md
+++ /dev/null
@@ -1,127 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_priority: 32
-toc_title: Custom Partitioning Key
----
-
-# Custom Partitioning Key {#custom-partitioning-key}
-
-Partitioning is available for the [MergeTree](mergetree.md) family tables (including [replicated](replication.md) tables). [Materialized views](../special/materializedview.md#materializedview) based on MergeTree tables support partitioning as well.
-
-A partition is a logical combination of records in a table by a specified criterion. You can set a partition by an arbitrary criterion, such as by month, by day, or by event type. Each partition is stored separately to simplify manipulations of this data. When accessing the data, ClickHouse uses the smallest subset of partitions possible.
-
-The partition is specified in the `PARTITION BY expr` clause when [creating a table](mergetree.md#table_engine-mergetree-creating-a-table). The partition key can be any expression from the table columns. For example, to specify partitioning by month, use the expression `toYYYYMM(date_column)`:
-
-``` sql
-CREATE TABLE visits
-(
-    VisitDate Date,
-    Hour UInt8,
-    ClientID UUID
-)
-ENGINE = MergeTree()
-PARTITION BY toYYYYMM(VisitDate)
-ORDER BY Hour;
-```
-
-The partition key can also be a tuple of expressions (similar to the [primary key](mergetree.md#primary-keys-and-indexes-in-queries)). For example:
-
-``` sql
-ENGINE = ReplicatedCollapsingMergeTree('/clickhouse/tables/name', 'replica1', Sign)
-PARTITION BY (toMonday(StartDate), EventType)
-ORDER BY (CounterID, StartDate, intHash32(UserID));
-```
-
-In this example, we set partitioning by the event types that occurred during the current week.
-
-When inserting new data into a table, this data is stored as a separate part (chunk) sorted by the primary key. In 10-15 minutes after inserting, the parts of the same partition are merged into the entire part.
-
-!!! info "Info"
-    A merge only works for data parts that have the same value for the partitioning expression. This means **you shouldn't make overly granular partitions** (more than about a thousand partitions). Otherwise, the `SELECT` query performs poorly because of an unreasonably large number of files in the file system and open file descriptors.
-
-Use the [system.parts](../../../operations/system-tables.md#system_tables-parts) table to view the table parts and partitions. For example, let's assume that we have a `visits` table with partitioning by month. Let's perform the `SELECT` query for the `system.parts` table:
-
-``` sql
-SELECT
-    partition,
-    name,
-    active
-FROM system.parts
-WHERE table = 'visits'
-```
-
-``` text
-┌─partition─┬─name───────────┬─active─┐
-│ 201901    │ 201901_1_3_1   │      0 │
-│ 201901    │ 201901_1_9_2   │      1 │
-│ 201901    │ 201901_8_8_0   │      0 │
-│ 201901    │ 201901_9_9_0   │      0 │
-│ 201902    │ 201902_4_6_1   │      1 │
-│ 201902    │ 201902_10_10_0 │      1 │
-│ 201902    │ 201902_11_11_0 │      1 │
-└───────────┴────────────────┴────────┘
-```
-
-The `partition` column contains the names of the partitions. There are two partitions in this example: `201901` and `201902`. You can use this column value to specify the partition name in [ALTER … PARTITION](#alter_manipulations-with-partitions) queries.
-
-The `name` column contains the names of the partition data parts.
-You can use this column to specify the name of the part in the [ALTER ATTACH PART](#alter_attach-partition) query.
-
-Let's break down the name of the first part: `201901_1_3_1`:
-
-- `201901` is the partition name.
-- `1` is the minimum number of the data block.
-- `3` is the maximum number of the data block.
-- `1` is the chunk level (the depth of the merge tree it is formed from).
-
-!!! info "Info"
-    The parts of old-type tables have the name: `20190117_20190123_2_2_0` (minimum date - maximum date - minimum block number - maximum block number - level).
-
-The `active` column shows the status of the part. `1` is active; `0` is inactive. Inactive parts are, for example, source parts remaining after merging to a larger part. Corrupted data parts are also indicated as inactive.
-
-As you can see in the example, there are several separated parts of the same partition (for example, `201901_1_3_1` and `201901_1_9_2`). This means that these parts are not merged yet. ClickHouse merges the inserted parts of data periodically, approximately 15 minutes after inserting. In addition, you can perform a non-scheduled merge using the [OPTIMIZE](../../../sql-reference/statements/misc.md#misc_operations-optimize) query. Example:
-
-``` sql
-OPTIMIZE TABLE visits PARTITION 201902;
-```
-
-``` text
-┌─partition─┬─name───────────┬─active─┐
-│ 201901    │ 201901_1_3_1   │      0 │
-│ 201901    │ 201901_1_9_2   │      1 │
-│ 201901    │ 201901_8_8_0   │      0 │
-│ 201901    │ 201901_9_9_0   │      0 │
-│ 201902    │ 201902_4_6_1   │      0 │
-│ 201902    │ 201902_4_11_2  │      1 │
-│ 201902    │ 201902_10_10_0 │      0 │
-│ 201902    │ 201902_11_11_0 │      0 │
-└───────────┴────────────────┴────────┘
-```
-
-Inactive parts will be deleted approximately 10 minutes after merging.
-
-Another way to view a set of parts and partitions is to go into the directory of the table: `/var/lib/clickhouse/data/<database>/<table>/`. For example:
-
-``` bash
-/var/lib/clickhouse/data/default/visits$ ls -l
-total 40
-drwxr-xr-x 2 clickhouse clickhouse 4096 Feb  1 16:48 201901_1_3_1
-drwxr-xr-x 2 clickhouse clickhouse 4096 Feb  5 16:17 201901_1_9_2
-drwxr-xr-x 2 clickhouse clickhouse 4096 Feb  5 15:52 201901_8_8_0
-drwxr-xr-x 2 clickhouse clickhouse 4096 Feb  5 15:52 201901_9_9_0
-drwxr-xr-x 2 clickhouse clickhouse 4096 Feb  5 16:17 201902_10_10_0
-drwxr-xr-x 2 clickhouse clickhouse 4096 Feb  5 16:17 201902_11_11_0
-drwxr-xr-x 2 clickhouse clickhouse 4096 Feb  5 16:19 201902_4_11_2
-drwxr-xr-x 2 clickhouse clickhouse 4096 Feb  5 12:09 201902_4_6_1
-drwxr-xr-x 2 clickhouse clickhouse 4096 Feb  1 16:48 detached
-```
-
-The folders ‘201901_1_1_0’, ‘201901_1_7_1’ and so on are the directories of the parts. Each part relates to a corresponding partition and contains data just for a certain month (the table in this example has partitioning by month).
-
-The `detached` directory contains parts that were detached from the table using the [DETACH](../../../sql-reference/statements/alter.md#alter_detach-partition) query. Corrupted parts are also moved to this directory, instead of being deleted. The server does not use the parts from the `detached` directory. You can add, delete, or modify the data in this directory at any time – the server will not know about this until you run the [ATTACH](../../../sql-reference/statements/alter.md#alter_attach-partition) query.
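-
-For instance, a detached partition from the example above could be brought back with a query like the following (a brief sketch; the `visits` table and the `201901` partition are the ones used throughout this article):
-
-``` sql
-ALTER TABLE visits ATTACH PARTITION 201901;
-```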
-
-Note that on the operating server, you cannot manually change the set of parts or their data on the file system, since the server will not know about it. For non-replicated tables, you can do this when the server is stopped, but it isn't recommended. For replicated tables, the set of parts cannot be changed in any case.
-
-ClickHouse allows you to perform operations with the partitions: delete them, copy from one table to another, or create a backup. See the list of all operations in the section [Manipulations with Partitions and Parts](../../../sql-reference/statements/alter.md#alter_manipulations-with-partitions).
-
-[Original article](https://clickhouse.tech/docs/en/operations/table_engines/custom_partitioning_key/)
diff --git a/docs/es/engines/table-engines/mergetree-family/graphitemergetree.md b/docs/es/engines/table-engines/mergetree-family/graphitemergetree.md
deleted file mode 100644
index d33ddcebac2..00000000000
--- a/docs/es/engines/table-engines/mergetree-family/graphitemergetree.md
+++ /dev/null
@@ -1,174 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_priority: 38
-toc_title: GraphiteMergeTree
----
-
-# GraphiteMergeTree {#graphitemergetree}
-
-This engine is designed for thinning and aggregating/averaging (rollup) [Graphite](http://graphite.readthedocs.io/en/latest/index.html) data. It may be helpful to developers who want to use ClickHouse as a data store for Graphite.
-
-You can use any ClickHouse table engine to store the Graphite data if you do not need rollup, but if you need a rollup, use `GraphiteMergeTree`. The engine reduces the volume of storage and increases the efficiency of queries from Graphite.
-
-The engine inherits properties from [MergeTree](mergetree.md).
-
-## Creating a Table {#creating-table}
-
-``` sql
-CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
-(
-    Path String,
-    Time DateTime,
-    Value <Numeric_type>,
-    Version <Numeric_type>
-    ...
-) ENGINE = GraphiteMergeTree(config_section)
-[PARTITION BY expr]
-[ORDER BY expr]
-[SAMPLE BY expr]
-[SETTINGS name=value, ...]
```
-
-See a detailed description of the [CREATE TABLE](../../../sql-reference/statements/create.md#create-table-query) query.
-
-A table for the Graphite data should have the following columns for the following data:
-
-- Metric name (Graphite sensor). Data type: `String`.
-
-- Time of measuring the metric. Data type: `DateTime`.
-
-- Value of the metric. Data type: any numeric.
-
-- Version of the metric. Data type: any numeric.
-
-    ClickHouse saves the rows with the highest version or the last written if the versions are the same. Other rows are deleted during the merge of data parts.
-
-The names of these columns should be set in the rollup configuration.
-
-**GraphiteMergeTree parameters**
-
-- `config_section` — Name of the section in the configuration file, where are the rules of rollup set.
-
-**Query clauses**
-
-When creating a `GraphiteMergeTree` table, the same [clauses](mergetree.md#table_engine-mergetree-creating-a-table) are required as when creating a `MergeTree` table.
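-
-As a concrete sketch (the table name, the numeric types, and the `graphite_rollup` section name below are illustrative assumptions, not part of the original article), such a table might be declared as:
-
-``` sql
-CREATE TABLE graphite_data
-(
-    Path String,
-    Time DateTime,
-    Value Float64,
-    Version UInt32
-) ENGINE = GraphiteMergeTree('graphite_rollup')
-PARTITION BY toYYYYMM(Time)
-ORDER BY (Path, Time);
-```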
-
-<details markdown="1">
-
-<summary>Deprecated Method for Creating a Table</summary>
-
-!!! attention "Attention"
-    Do not use this method in new projects and, if possible, switch old projects to the method described above.
-
-``` sql
-CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
-(
-    EventDate Date,
-    Path String,
-    Time DateTime,
-    Value <Numeric_type>,
-    Version <Numeric_type>
-    ...
-) ENGINE [=] GraphiteMergeTree(date-column [, sampling_expression], (primary, key), index_granularity, config_section)
-```
-
-All of the parameters except `config_section` have the same meaning as in `MergeTree`.
-
-- `config_section` — Name of the section in the configuration file, where are the rules of rollup set.
-
-</details>
-
-## Rollup Configuration {#rollup-configuration}
-
-The settings for rollup are defined by the [graphite_rollup](../../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-graphite) parameter in the server configuration. The name of the parameter could be any. You can create several configurations and use them for different tables.
-
-Rollup configuration structure:
-
-      required-columns
-      patterns
-
-### Required Columns {#required-columns}
-
-- `path_column_name` — The name of the column storing the metric name (Graphite sensor). Default value: `Path`.
-- `time_column_name` — The name of the column storing the time of measuring the metric. Default value: `Time`.
-- `value_column_name` — The name of the column storing the value of the metric at the time set in `time_column_name`. Default value: `Value`.
-- `version_column_name` — The name of the column storing the version of the metric. Default value: `Timestamp`.
-
-### Patterns {#patterns}
-
-Structure of the `patterns` section:
-
-``` text
-pattern
-    regexp
-    function
-pattern
-    regexp
-    age + precision
-    ...
-pattern
-    regexp
-    function
-    age + precision
-    ...
-pattern
-    ...
-default
-    function
-    age + precision
-    ...
-```
-
-!!! warning "Attention"
-    Patterns must be strictly ordered:
-
-      1. Patterns without `function` or `retention`.
-      1. Patterns with both `function` and `retention`.
-      1. Pattern `default`.
-
-When processing a row, ClickHouse checks the rules in the `pattern` sections. Each of the `pattern` (including `default`) sections can contain the `function` parameter for aggregation, `retention` parameters, or both. If the metric name matches the `regexp`, the rules from the `pattern` section (or sections) are applied; otherwise, the rules from the `default` section are used.
-
-Fields for the `pattern` and `default` sections:
-
-- `regexp` – A pattern for the metric name.
-- `age` – The minimum age of the data in seconds.
-- `precision` – How precisely to define the age of the data in seconds. Should be a divisor for 86400 (seconds in a day).
-- `function` – The name of the aggregating function to apply to data whose age falls within the range `[age, age + precision]`.
-
-### Configuration Example {#configuration-example}
-
-``` xml
-<graphite_rollup>
-    <version_column_name>Version</version_column_name>
-    <pattern>
-        <regexp>click_cost</regexp>
-        <function>any</function>
-        <retention>
-            <age>0</age>
-            <precision>5</precision>
-        </retention>
-        <retention>
-            <age>86400</age>
-            <precision>60</precision>
-        </retention>
-    </pattern>
-    <default>
-        <function>max</function>
-        <retention>
-            <age>0</age>
-            <precision>60</precision>
-        </retention>
-        <retention>
-            <age>3600</age>
-            <precision>300</precision>
-        </retention>
-        <retention>
-            <age>86400</age>
-            <precision>3600</precision>
-        </retention>
-    </default>
-</graphite_rollup>
-```
-
-[Original article](https://clickhouse.tech/docs/en/operations/table_engines/graphitemergetree/)
diff --git a/docs/es/engines/table-engines/mergetree-family/index.md b/docs/es/engines/table-engines/mergetree-family/index.md
deleted file mode 100644
index 359d58b2ff1..00000000000
--- a/docs/es/engines/table-engines/mergetree-family/index.md
+++ /dev/null
@@ -1,8 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_folder_title: MergeTree Family
-toc_priority: 28
----
-
diff --git a/docs/es/engines/table-engines/mergetree-family/mergetree.md b/docs/es/engines/table-engines/mergetree-family/mergetree.md
deleted file mode 100644
index a4bab840b52..00000000000
--- a/docs/es/engines/table-engines/mergetree-family/mergetree.md
+++ /dev/null
@@ -1,654 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_priority: 30
-toc_title: MergeTree
----
-
-# MergeTree {#table_engines-mergetree}
-
-The `MergeTree` engine and other engines of this family (`*MergeTree`) are the most robust ClickHouse table engines.
-
-Engines in the `MergeTree` family are designed for inserting a very large amount of data into a table. The data is quickly written to the table part by part, then rules are applied for merging the parts in the background. This method is much more efficient than continually rewriting the data in storage during insert.
-
-Main features:
-
-- Stores data sorted by primary key.
-
-    This allows you to create a small sparse index that helps find data faster.
-
-- Partitions can be used if the [partitioning key](custom-partitioning-key.md) is specified.
-
-    ClickHouse supports certain operations with partitions that are more effective than general operations on the same data with the same result. ClickHouse also automatically cuts off the partition data where the partitioning key is specified in the query. This also improves query performance.
-
-- Data replication support.
-
-    The family of `ReplicatedMergeTree` tables provides data replication. For more information, see [Data replication](replication.md).
-
-- Data sampling support.
-
-    If necessary, you can set the data sampling method in the table.
-
-!!! info "Info"
-    The [Merge](../special/merge.md#merge) engine does not belong to the `*MergeTree` family.
-
-## Creating a Table {#table_engine-mergetree-creating-a-table}
-
-``` sql
-CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
-(
-    name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1] [TTL expr1],
-    name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2] [TTL expr2],
-    ...
-    INDEX index_name1 expr1 TYPE type1(...) GRANULARITY value1,
-    INDEX index_name2 expr2 TYPE type2(...) GRANULARITY value2
-) ENGINE = MergeTree()
-[PARTITION BY expr]
-[ORDER BY expr]
-[PRIMARY KEY expr]
-[SAMPLE BY expr]
-[TTL expr [DELETE|TO DISK 'xxx'|TO VOLUME 'xxx'], ...]
-[SETTINGS name=value, ...]
-```
-
-For a description of parameters, see the [CREATE query description](../../../sql-reference/statements/create.md).
-
note "Nota" - `INDEX` es una característica experimental, ver [Índices de saltos de datos](#table_engine-mergetree-data_skipping-indexes). - -### Cláusulas de consulta {#mergetree-query-clauses} - -- `ENGINE` — Name and parameters of the engine. `ENGINE = MergeTree()`. El `MergeTree` el motor no tiene parámetros. - -- `PARTITION BY` — The [clave de partición](custom-partitioning-key.md). - - Para particionar por mes, utilice el `toYYYYMM(date_column)` expresión, donde `date_column` es una columna con una fecha del tipo [Fecha](../../../sql-reference/data-types/date.md). Los nombres de partición aquí tienen el `"YYYYMM"` formato. - -- `ORDER BY` — The sorting key. - - Una tupla de columnas o expresiones arbitrarias. Ejemplo: `ORDER BY (CounterID, EventDate)`. - -- `PRIMARY KEY` — The primary key if it [difiere de la clave de clasificación](#choosing-a-primary-key-that-differs-from-the-sorting-key). - - De forma predeterminada, la clave principal es la misma que la clave de ordenación (que se especifica `ORDER BY` clausula). Por lo tanto, en la mayoría de los casos no es necesario especificar un `PRIMARY KEY` clausula. - -- `SAMPLE BY` — An expression for sampling. - - Si se utiliza una expresión de muestreo, la clave principal debe contenerla. Ejemplo: `SAMPLE BY intHash32(UserID) ORDER BY (CounterID, EventDate, intHash32(UserID))`. - -- `TTL` — A list of rules specifying storage duration of rows and defining logic of automatic parts movement [entre discos y volúmenes](#table_engine-mergetree-multiple-volumes). - - La expresión debe tener una `Date` o `DateTime` columna como resultado. Ejemplo: - `TTL date + INTERVAL 1 DAY` - - Tipo de regla `DELETE|TO DISK 'xxx'|TO VOLUME 'xxx'` especifica una acción que debe realizarse con la pieza si la expresión está satisfecha (alcanza la hora actual): eliminación de filas caducadas, mover una pieza (si la expresión está satisfecha para todas las filas de una pieza) al disco especificado (`TO DISK 'xxx'`) o al volumen (`TO VOLUME 'xxx'`). El tipo predeterminado de la regla es la eliminación (`DELETE`). Se puede especificar una lista de varias reglas, pero no debe haber más de una `DELETE` regla. - - Para obtener más información, consulte [TTL para columnas y tablas](#table_engine-mergetree-ttl) - -- `SETTINGS` — Additional parameters that control the behavior of the `MergeTree`: - - - `index_granularity` — Maximum number of data rows between the marks of an index. Default value: 8192. See [Almacenamiento de datos](#mergetree-data-storage). - - `index_granularity_bytes` — Maximum size of data granules in bytes. Default value: 10Mb. To restrict the granule size only by number of rows, set to 0 (not recommended). See [Almacenamiento de datos](#mergetree-data-storage). - - `enable_mixed_granularity_parts` — Enables or disables transitioning to control the granule size with the `index_granularity_bytes` configuración. Antes de la versión 19.11, sólo existía el `index_granularity` ajuste para restringir el tamaño del gránulo. El `index_granularity_bytes` mejora el rendimiento de ClickHouse al seleccionar datos de tablas con filas grandes (decenas y cientos de megabytes). Si tiene tablas con filas grandes, puede habilitar esta configuración para que las tablas mejoren la eficiencia de `SELECT` consulta. - - `use_minimalistic_part_header_in_zookeeper` — Storage method of the data parts headers in ZooKeeper. If `use_minimalistic_part_header_in_zookeeper=1`, entonces ZooKeeper almacena menos datos. 
-      For more information, see the [setting description](../../../operations/server-configuration-parameters/settings.md#server-settings-use_minimalistic_part_header_in_zookeeper) in “Server configuration parameters”.
-    - `min_merge_bytes_to_use_direct_io` — The minimum data volume for merge operation that is required for using direct I/O access to the storage disk. When merging data parts, ClickHouse calculates the total storage volume of all the data to be merged. If the volume exceeds `min_merge_bytes_to_use_direct_io` bytes, ClickHouse reads and writes the data to the storage disk using the direct I/O interface (`O_DIRECT` option). If `min_merge_bytes_to_use_direct_io = 0`, then direct I/O is disabled. Default value: `10 * 1024 * 1024 * 1024` bytes.
-
-    - `merge_with_ttl_timeout` — Minimum delay in seconds before repeating a merge with TTL. Default value: 86400 (1 day).
-    - `write_final_mark` — Enables or disables writing the final index mark at the end of data part (after the last byte). Default value: 1. Don't turn it off.
-    - `merge_max_block_size` — Maximum number of rows in block for merge operations. Default value: 8192.
-    - `storage_policy` — Storage policy. See [Using Multiple Block Devices for Data Storage](#table_engine-mergetree-multiple-volumes).
-
-**Example of Sections Setting**
-
-``` sql
-ENGINE MergeTree() PARTITION BY toYYYYMM(EventDate) ORDER BY (CounterID, EventDate, intHash32(UserID)) SETTINGS index_granularity=8192
-```
-
-In the example, we set partitioning by month.
-
-We also set an expression for sampling as a hash by the user ID. This allows you to pseudorandomize the data in the table for each `CounterID` and `EventDate`. If you define a [SAMPLE](../../../sql-reference/statements/select/sample.md#select-sample-clause) clause when selecting the data, ClickHouse will return an evenly pseudorandom data sample for a subset of users.
-
-The `index_granularity` setting can be omitted because 8192 is the default value.
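-
-For instance, with the sampling key configured above, a query like the following would read roughly a tenth of the data and extrapolate the result (a brief sketch; the table name `hits_table` is a placeholder, not part of the original article):
-
-``` sql
-SELECT CounterID, count() * 10 AS EstimatedVisits
-FROM hits_table
-SAMPLE 1/10
-GROUP BY CounterID
-ORDER BY EstimatedVisits DESC;
-```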
-
-<details markdown="1">
-
-<summary>Deprecated Method for Creating a Table</summary>
-
-!!! attention "Attention"
-    Do not use this method in new projects. If possible, switch old projects to the method described above.
-
-``` sql
-CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
-(
-    name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
-    name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2],
-    ...
-) ENGINE [=] MergeTree(date-column [, sampling_expression], (primary, key), index_granularity)
-```
-
-**MergeTree() Parameters**
-
-- `date-column` — The name of a column of the [Date](../../../sql-reference/data-types/date.md) type. ClickHouse automatically creates partitions by month based on this column. The partition names are in the `"YYYYMM"` format.
-- `sampling_expression` — An expression for sampling.
-- `(primary, key)` — Primary key. Type: [Tuple()](../../../sql-reference/data-types/tuple.md)
-- `index_granularity` — The granularity of an index. The number of data rows between the “marks” of an index. The value 8192 is appropriate for most tasks.
- -## Almacenamiento de datos {#mergetree-data-storage} - -Una tabla consta de partes de datos ordenadas por clave principal. - -Cuando se insertan datos en una tabla, se crean partes de datos separadas y cada una de ellas se ordena lexicográficamente por clave principal. Por ejemplo, si la clave principal es `(CounterID, Date)`, los datos en la parte se ordenan por `CounterID`, y dentro de cada `CounterID` es ordenado por `Date`. - -Los datos que pertenecen a diferentes particiones se separan en diferentes partes. En el fondo, ClickHouse combina partes de datos para un almacenamiento más eficiente. Las piezas que pertenecen a particiones diferentes no se fusionan. El mecanismo de combinación no garantiza que todas las filas con la misma clave principal estén en la misma parte de datos. - -Cada parte de datos se divide lógicamente en gránulos. Un gránulo es el conjunto de datos indivisibles más pequeño que ClickHouse lee al seleccionar datos. ClickHouse no divide filas o valores, por lo que cada gránulo siempre contiene un número entero de filas. La primera fila de un gránulo está marcada con el valor de la clave principal de la fila. Para cada parte de datos, ClickHouse crea un archivo de índice que almacena las marcas. Para cada columna, ya sea en la clave principal o no, ClickHouse también almacena las mismas marcas. Estas marcas le permiten encontrar datos directamente en archivos de columnas. - -El tamaño del gránulo es restringido por `index_granularity` y `index_granularity_bytes` configuración del motor de tabla. El número de filas en un gránulo se encuentra en el `[1, index_granularity]` rango, dependiendo del tamaño de las filas. El tamaño de un gránulo puede exceder `index_granularity_bytes` si el tamaño de una sola fila es mayor que el valor de la configuración. En este caso, el tamaño del gránulo es igual al tamaño de la fila. - -## Claves e índices principales en consultas {#primary-keys-and-indexes-in-queries} - -Tome el `(CounterID, Date)` clave primaria como ejemplo. En este caso, la clasificación y el índice se pueden ilustrar de la siguiente manera: - - Whole data: [---------------------------------------------] - CounterID: [aaaaaaaaaaaaaaaaaabbbbcdeeeeeeeeeeeeefgggggggghhhhhhhhhiiiiiiiiikllllllll] - Date: [1111111222222233331233211111222222333211111112122222223111112223311122333] - Marks: | | | | | | | | | | | - a,1 a,2 a,3 b,3 e,2 e,3 g,1 h,2 i,1 i,3 l,3 - Marks numbers: 0 1 2 3 4 5 6 7 8 9 10 - -Si la consulta de datos especifica: - -- `CounterID in ('a', 'h')`, el servidor lee los datos en los rangos de marcas `[0, 3)` y `[6, 8)`. -- `CounterID IN ('a', 'h') AND Date = 3`, el servidor lee los datos en los rangos de marcas `[1, 3)` y `[7, 8)`. -- `Date = 3`, el servidor lee los datos en el rango de marcas `[1, 10]`. - -Los ejemplos anteriores muestran que siempre es más efectivo usar un índice que un análisis completo. - -Un índice disperso permite leer datos adicionales. Al leer un único rango de la clave primaria, hasta `index_granularity * 2` se pueden leer filas adicionales en cada bloque de datos. - -Los índices dispersos le permiten trabajar con una gran cantidad de filas de tabla, porque en la mayoría de los casos, dichos índices caben en la RAM de la computadora. - -ClickHouse no requiere una clave principal única. Puede insertar varias filas con la misma clave principal. - -### Selección de la clave principal {#selecting-the-primary-key} - -El número de columnas en la clave principal no está explícitamente limitado. 
-Depending on the data structure, you can include more or fewer columns in the primary key. This may:
-
-- Improve the performance of an index.
-
-    If the primary key is `(a, b)`, then adding another column `c` will improve the performance if the following conditions are met:
-
-    - There are queries with a condition on column `c`.
-    - Long data ranges (several times longer than the `index_granularity`) with identical values for `(a, b)` are common. In other words, when adding another column allows you to skip quite long data ranges.
-
-- Improve data compression.
-
-    ClickHouse sorts data by primary key, so the higher the consistency, the better the compression.
-
-- Provide additional logic when merging data parts in the [CollapsingMergeTree](collapsingmergetree.md#table_engine-collapsingmergetree) and [SummingMergeTree](summingmergetree.md) engines.
-
-    In this case it makes sense to specify the *sorting key* that is different from the primary key.
-
-A long primary key will negatively affect the insert performance and memory consumption, but extra columns in the primary key do not affect ClickHouse performance during `SELECT` queries.
-
-### Choosing a Primary Key that Differs from the Sorting Key {#choosing-a-primary-key-that-differs-from-the-sorting-key}
-
-It is possible to specify a primary key (an expression with values that are written in the index file for each mark) that is different from the sorting key (an expression for sorting the rows in data parts). In this case the primary key expression tuple must be a prefix of the sorting key expression tuple.
-
-This feature is helpful when using the [SummingMergeTree](summingmergetree.md) and [AggregatingMergeTree](aggregatingmergetree.md) table engines. In a common case when using these engines, the table has two types of columns: *dimensions* and *measures*. Typical queries aggregate values of measure columns with a `GROUP BY` and filtering by dimensions. Because SummingMergeTree and AggregatingMergeTree aggregate rows with the same value of the sorting key, it is natural to add all dimensions to it. As a result, the key expression consists of a long list of columns and this list must be frequently updated with newly added dimensions.
-
-In this case it makes sense to leave only a few columns in the primary key that will provide efficient range scans and add the remaining dimension columns to the sorting key tuple.
-
-[ALTER](../../../sql-reference/statements/alter.md) of the sorting key is a lightweight operation because when a new column is simultaneously added to the table and to the sorting key, existing data parts do not need to be changed. Since the old sorting key is a prefix of the new sorting key and there is no data in the newly added column, the data is sorted by both the old and new sorting keys at the moment of table modification.
-
-### Use of Indexes and Partitions in Queries {#use-of-indexes-and-partitions-in-queries}
-
-For `SELECT` queries, ClickHouse analyzes whether an index can be used.
-An index can be used if the `WHERE/PREWHERE` clause has an expression (as one of the conjunction elements, or entirely) that represents an equality or inequality comparison operation, or if it has `IN` or `LIKE` with a fixed prefix on columns or expressions that are in the primary key or partitioning key, or on certain partially repetitive functions of these columns, or logical relationships of these expressions.
-
-Thus, it is possible to quickly run queries on one or many ranges of the primary key. In this example, queries will be fast when run for a specific tracking tag, for a specific tag and date range, for a specific tag and date, for multiple tags with a date range, and so on.
-
-Let's look at the engine configured as follows:
-
-    ENGINE MergeTree() PARTITION BY toYYYYMM(EventDate) ORDER BY (CounterID, EventDate) SETTINGS index_granularity=8192
-
-In this case, in queries:
-
-``` sql
-SELECT count() FROM table WHERE EventDate = toDate(now()) AND CounterID = 34
-SELECT count() FROM table WHERE EventDate = toDate(now()) AND (CounterID = 34 OR CounterID = 42)
-SELECT count() FROM table WHERE ((EventDate >= toDate('2014-01-01') AND EventDate <= toDate('2014-01-31')) OR EventDate = toDate('2014-05-01')) AND CounterID IN (101500, 731962, 160656) AND (CounterID = 101500 OR EventDate != toDate('2014-05-01'))
-```
-
-ClickHouse will use the primary key index to trim improper data and the monthly partitioning key to trim partitions that are in improper date ranges.
-
-The queries above show that the index is used even for complex expressions. Reading from the table is organized so that using the index cannot be slower than a full scan.
-
-In the example below, the index cannot be used.
-
-``` sql
-SELECT count() FROM table WHERE CounterID = 34 OR URL LIKE '%upyachka%'
-```
-
-To check whether ClickHouse can use the index when running a query, use the settings [force_index_by_date](../../../operations/settings/settings.md#settings-force_index_by_date) and [force_primary_key](../../../operations/settings/settings.md).
-
-The key for partitioning by month allows reading only those data blocks which contain dates from the proper range. In this case, the data block may contain data for many dates (up to an entire month). Within a block, data is sorted by primary key, which might not contain the date as the first column. Because of this, using a query with only a date condition that does not specify the primary key prefix will cause more data to be read than for a single date.
-
-### Use of Index for Partially-monotonic Primary Keys {#use-of-index-for-partially-monotonic-primary-keys}
-
-Consider, for example, the days of the month. They form a [monotonic sequence](https://en.wikipedia.org/wiki/Monotonic_function) for one month, but are not monotonic for more extended periods. This is a partially-monotonic sequence. If a user creates the table with a partially-monotonic primary key, ClickHouse creates a sparse index as usual. When a user selects data from this kind of table, ClickHouse analyzes the query conditions.
-If the user wants to get data between two marks of the index and both these marks fall within one month, ClickHouse can use the index in this particular case because it can calculate the distance between the parameters of a query and the index marks.
-
-ClickHouse cannot use an index if the values of the primary key in the query parameter range do not represent a monotonic sequence. In this case, ClickHouse uses the full scan method.
-
-ClickHouse uses this logic not only for days-of-the-month sequences, but for any primary key that represents a partially-monotonic sequence.
-
-### Data Skipping Indexes (experimental) {#table_engine-mergetree-data_skipping-indexes}
-
-The index declaration is in the columns section of the `CREATE` query.
-
-``` sql
-INDEX index_name expr TYPE type(...) GRANULARITY granularity_value
-```
-
-For tables from the `*MergeTree` family, data skipping indices can be specified.
-
-These indices aggregate some information about the specified expression on blocks, which consist of `granularity_value` granules (the size of the granule is specified using the `index_granularity` setting in the table engine). Then these aggregates are used in `SELECT` queries for reducing the amount of data to read from the disk by skipping big blocks of data where the `where` query cannot be satisfied.
-
-**Example**
-
-``` sql
-CREATE TABLE table_name
-(
-    u64 UInt64,
-    i32 Int32,
-    s String,
-    ...
-    INDEX a (u64 * i32, s) TYPE minmax GRANULARITY 3,
-    INDEX b (u64 * length(s)) TYPE set(1000) GRANULARITY 4
-) ENGINE = MergeTree()
-...
-```
-
-Indices from the example can be used by ClickHouse to reduce the amount of data to read from disk in the following queries:
-
-``` sql
-SELECT count() FROM table WHERE s < 'z'
-SELECT count() FROM table WHERE u64 * i32 == 10 AND u64 * length(s) >= 1234
-```
-
-#### Available Types of Indices {#available-types-of-indices}
-
-- `minmax`
-
-    Stores extremes of the specified expression (if the expression is `tuple`, then it stores extremes for each element of `tuple`), uses stored info for skipping blocks of data like the primary key.
-
-- `set(max_rows)`
-
-    Stores unique values of the specified expression (no more than `max_rows` rows, `max_rows=0` means “no limits”). Uses the values to check if the `WHERE` expression is not satisfiable on a block of data.
-
-- `ngrambf_v1(n, size_of_bloom_filter_in_bytes, number_of_hash_functions, random_seed)`
-
-    Stores a [Bloom filter](https://en.wikipedia.org/wiki/Bloom_filter) that contains all ngrams from a block of data. Works only with strings. Can be used for optimization of `equals`, `like` and `in` expressions.
-
-    - `n` — ngram size,
-    - `size_of_bloom_filter_in_bytes` — Bloom filter size in bytes (you can use large values here, for example, 256 or 512, because it can be compressed well).
-    - `number_of_hash_functions` — The number of hash functions used in the Bloom filter.
-    - `random_seed` — The seed for Bloom filter hash functions.
-
-- `tokenbf_v1(size_of_bloom_filter_in_bytes, number_of_hash_functions, random_seed)`
-
-    The same as `ngrambf_v1`, but stores tokens instead of ngrams. Tokens are sequences separated by non-alphanumeric characters.
-
-- `bloom_filter([false_positive])` — Stores a [Bloom filter](https://en.wikipedia.org/wiki/Bloom_filter) for the specified columns.
-
-Examples of index declaration:
-
-``` sql
-INDEX sample_index (u64 * length(s)) TYPE minmax GRANULARITY 4
-INDEX sample_index2 (u64 * length(str), i32 + f64 * 100, date, str) TYPE set(100) GRANULARITY 4
-INDEX sample_index3 (lower(str), str) TYPE ngrambf_v1(3, 256, 2, 0) GRANULARITY 4
-```
-
-#### Functions Support {#functions-support}
-
-Conditions in the `WHERE` clause contain calls of functions that operate with columns. If the column is a part of an index, ClickHouse tries to use this index when performing the functions. ClickHouse supports different subsets of functions for using indexes.
-
-The `set` index can be used with all functions. Function subsets for the other indexes are shown in the table below.
-
-| Function (operator) / Index | primary key | minmax | ngrambf_v1 | tokenbf_v1 | bloom_filter |
-|----------------------------------------------------------------------------------------------------------|-------------|--------|------------|------------|--------------|
-| [equals (=, ==)](../../../sql-reference/functions/comparison-functions.md#function-equals) | ✔ | ✔ | ✔ | ✔ | ✔ |
-| [notEquals (!=, \<\>)](../../../sql-reference/functions/comparison-functions.md#function-notequals) | ✔ | ✔ | ✔ | ✔ | ✔ |
-| [like](../../../sql-reference/functions/string-search-functions.md#function-like) | ✔ | ✔ | ✔ | ✗ | ✗ |
-| [notLike](../../../sql-reference/functions/string-search-functions.md#function-notlike) | ✔ | ✔ | ✔ | ✗ | ✗ |
-| [startsWith](../../../sql-reference/functions/string-functions.md#startswith) | ✔ | ✔ | ✔ | ✔ | ✗ |
-| [endsWith](../../../sql-reference/functions/string-functions.md#endswith) | ✗ | ✗ | ✔ | ✔ | ✗ |
-| [multiSearchAny](../../../sql-reference/functions/string-search-functions.md#function-multisearchany) | ✗ | ✗ | ✔ | ✗ | ✗ |
-| [in](../../../sql-reference/functions/in-functions.md#in-functions) | ✔ | ✔ | ✔ | ✔ | ✔ |
-| [notIn](../../../sql-reference/functions/in-functions.md#in-functions) | ✔ | ✔ | ✔ | ✔ | ✔ |
-| [less (\<)](../../../sql-reference/functions/comparison-functions.md#function-less) | ✔ | ✔ | ✗ | ✗ | ✗ |
-| [greater (\>)](../../../sql-reference/functions/comparison-functions.md#function-greater) | ✔ | ✔ | ✗ | ✗ | ✗ |
-| [lessOrEquals (\<=)](../../../sql-reference/functions/comparison-functions.md#function-lessorequals) | ✔ | ✔ | ✗ | ✗ | ✗ |
-| [greaterOrEquals (\>=)](../../../sql-reference/functions/comparison-functions.md#function-greaterorequals) | ✔ | ✔ | ✗ | ✗ | ✗ |
-| [empty](../../../sql-reference/functions/array-functions.md#function-empty) | ✔ | ✔ | ✗ | ✗ | ✗ |
-| [notEmpty](../../../sql-reference/functions/array-functions.md#function-notempty) | ✔ | ✔ | ✗ | ✗ | ✗ |
-| hasToken | ✗ | ✗ | ✗ | ✔ | ✗ |
-
-Functions with a constant argument that is less than the ngram size can't be used by `ngrambf_v1` for query optimization.
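-
-For instance, a minimal sketch of a query that a token-based index can serve (assuming a hypothetical index declared as `INDEX tok (s) TYPE tokenbf_v1(512, 3, 0) GRANULARITY 4` on the `table_name` example above):
-
-``` sql
--- hasToken is supported only by tokenbf_v1 (see the table above),
--- so blocks that certainly do not contain the token are skipped
-SELECT count() FROM table_name WHERE hasToken(s, 'needle');
-```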
-
-Bloom filters can have false positive matches, so the `ngrambf_v1`, `tokenbf_v1` and `bloom_filter` indexes cannot be used for optimizing queries where the result of a function is expected to be false, for example:
-
-- Can be optimized:
-    - `s LIKE '%test%'`
-    - `NOT s NOT LIKE '%test%'`
-    - `s = 1`
-    - `NOT s != 1`
-    - `startsWith(s, 'test')`
-- Can't be optimized:
-    - `NOT s LIKE '%test%'`
-    - `s NOT LIKE '%test%'`
-    - `NOT s = 1`
-    - `s != 1`
-    - `NOT startsWith(s, 'test')`
-
-## Concurrent Data Access {#concurrent-data-access}
-
-For concurrent table access, we use multi-versioning. In other words, when a table is simultaneously read and updated, data is read from a set of parts that is current at the time of the query. There are no lengthy locks. Inserts do not get in the way of read operations.
-
-Reading from a table is automatically parallelized.
-
-## TTL for Columns and Tables {#table_engine-mergetree-ttl}
-
-Determines the lifetime of values.
-
-The `TTL` clause can be set for the whole table and for each individual column. Table-level TTL can also specify the logic of automatic data movement between disks and volumes.
-
-Expressions must evaluate to the [Date](../../../sql-reference/data-types/date.md) or [DateTime](../../../sql-reference/data-types/datetime.md) data type.
-
-Example:
-
-``` sql
-TTL time_column
-TTL time_column + interval
-```
-
-To define `interval`, use [time interval](../../../sql-reference/operators/index.md#operators-datetime) operators.
-
-``` sql
-TTL date_time + INTERVAL 1 MONTH
-TTL date_time + INTERVAL 15 HOUR
-```
-
-### Column TTL {#mergetree-column-ttl}
-
-When the values in the column expire, ClickHouse replaces them with the default values for the column data type. If all the column values in the data part expire, ClickHouse deletes this column from the data part in the filesystem.
-
-The `TTL` clause can't be used for key columns.
-
-Examples:
-
-Creating a table with TTL
-
-``` sql
-CREATE TABLE example_table
-(
-    d DateTime,
-    a Int TTL d + INTERVAL 1 MONTH,
-    b Int TTL d + INTERVAL 1 MONTH,
-    c String
-)
-ENGINE = MergeTree
-PARTITION BY toYYYYMM(d)
-ORDER BY d;
-```
-
-Adding TTL to a column of an existing table
-
-``` sql
-ALTER TABLE example_table
-    MODIFY COLUMN
-    c String TTL d + INTERVAL 1 DAY;
-```
-
-Altering TTL of the column
-
-``` sql
-ALTER TABLE example_table
-    MODIFY COLUMN
-    c String TTL d + INTERVAL 1 MONTH;
-```
-
-### Table TTL {#mergetree-table-ttl}
-
-The table can have an expression for the removal of expired rows, and multiple expressions for the automatic movement of parts between [disks or volumes](#table_engine-mergetree-multiple-volumes). When rows in the table expire, ClickHouse deletes all corresponding rows. For the part-moving feature, all rows of a part must satisfy the movement expression criteria.
-
-``` sql
-TTL expr [DELETE|TO DISK 'aaa'|TO VOLUME 'bbb'], ...
-```
-
-A type of TTL rule may follow each TTL expression. It affects the action which is to be done once the expression is satisfied (reaches current time):
-
-- `DELETE` - delete expired rows (default action);
-- `TO DISK 'aaa'` - move the part to the disk `aaa`;
-- `TO VOLUME 'bbb'` - move the part to the volume `bbb`.
-
-Examples:
-
-Creating a table with TTL
-
-``` sql
-CREATE TABLE example_table
-(
-    d DateTime,
-    a Int
-)
-ENGINE = MergeTree
-PARTITION BY toYYYYMM(d)
-ORDER BY d
-TTL d + INTERVAL 1 MONTH [DELETE],
-    d + INTERVAL 1 WEEK TO VOLUME 'aaa',
-    d + INTERVAL 2 WEEK TO DISK 'bbb';
-```
-
-Altering TTL of the table
-
-``` sql
-ALTER TABLE example_table
-    MODIFY TTL d + INTERVAL 1 DAY;
-```
-
-**Removing Data**
-
-Data with an expired TTL is removed when ClickHouse merges data parts.
-
-When ClickHouse sees that data is expired, it performs an off-schedule merge. To control the frequency of such merges, you can set `merge_with_ttl_timeout`. If the value is too low, it will perform many off-schedule merges that may consume a lot of resources.
-
-If you perform the `SELECT` query between merges, you may get expired data. To avoid it, use the [OPTIMIZE](../../../sql-reference/statements/misc.md#misc_operations-optimize) query before `SELECT`.
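-
-A minimal sketch of that last point, reusing `example_table` from above (note that `OPTIMIZE ... FINAL` rewrites data parts and can be expensive on large tables):
-
-``` sql
--- force an off-schedule merge so rows with an expired TTL are dropped
-OPTIMIZE TABLE example_table FINAL;
-SELECT count() FROM example_table;
-```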
-
-## Using Multiple Block Devices for Data Storage {#table_engine-mergetree-multiple-volumes}
-
-### Introduction {#introduction}
-
-`MergeTree`-family table engines can store data on multiple block devices. For example, it can be useful when the data of a certain table are implicitly split into “hot” and “cold”. The most recent data is regularly requested but requires only a small amount of space. On the contrary, the fat-tailed historical data is requested rarely. If several disks are available, the “hot” data may be located on fast disks (for example, NVMe SSDs or in memory), while the “cold” data — on relatively slow ones (for example, HDD).
-
-The data part is the minimum movable unit for `MergeTree`-engine tables. The data belonging to one part is stored on one disk. Data parts can be moved between disks in the background (according to user settings) as well as by means of the [ALTER](../../../sql-reference/statements/alter.md#alter_move-partition) query.
-
-### Terms {#terms}
-
-- Disk — Block device mounted to the filesystem.
-- Default disk — Disk that stores the path specified in the [path](../../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-path) server setting.
-- Volume — Ordered set of equal disks (similar to [JBOD](https://en.wikipedia.org/wiki/Non-RAID_drive_architectures)).
-- Storage policy — Set of volumes and the rules for moving data between them.
-
-The names given to the described entities can be found in the system tables [system.storage_policies](../../../operations/system-tables.md#system_tables-storage_policies) and [system.disks](../../../operations/system-tables.md#system_tables-disks). To apply one of the configured storage policies to a table, use the `storage_policy` setting of `MergeTree`-engine family tables.
-
-### Configuration {#table_engine-mergetree-multiple-volumes_configure}
-
-Disks, volumes and storage policies should be declared inside the `<storage_configuration>` tag, either in the main file `config.xml` or in a distinct file in the `config.d` directory.
-
-Configuration structure:
-
-``` xml
-<storage_configuration>
-    <disks>
-        <disk_name_1> <!-- disk name -->
-            <path>/mnt/fast_ssd/clickhouse/</path>
-        </disk_name_1>
-        <disk_name_2>
-            <path>/mnt/hdd1/clickhouse/</path>
-            <keep_free_space_bytes>10485760</keep_free_space_bytes>
-        </disk_name_2>
-        <disk_name_3>
-            <path>/mnt/hdd2/clickhouse/</path>
-            <keep_free_space_bytes>10485760</keep_free_space_bytes>
-        </disk_name_3>
-
-        ...
-    </disks>
-
-    ...
-</storage_configuration>
-```
-
-Tags:
-
-- `<disk_name_N>` — Disk name. Names must be different for all disks.
-- `path` — path under which a server will store data (`data` and `shadow` folders), should be terminated with ‘/’.
-- `keep_free_space_bytes` — the amount of free disk space to be reserved.
-
-The order of the disk definition is not important.
-
-Storage policies configuration markup:
-
-``` xml
-<storage_configuration>
-    ...
-    <policies>
-        <policy_name_1>
-            <volumes>
-                <volume_name_1>
-                    <disk>disk_name_from_disks_configuration</disk>
-                    <max_data_part_size_bytes>1073741824</max_data_part_size_bytes>
-                </volume_name_1>
-                <volume_name_2>
-                    <!-- configuration -->
-                </volume_name_2>
-            </volumes>
-            <move_factor>0.2</move_factor>
-        </policy_name_1>
-        <policy_name_2>
-            <!-- configuration -->
-        </policy_name_2>
-
-        ...
-    </policies>
-</storage_configuration>
-```
-
-Tags:
-
-- `policy_name_N` — Policy name. Policy names must be unique.
-- `volume_name_N` — Volume name. Volume names must be unique.
-- `disk` — a disk within a volume.
-- `max_data_part_size_bytes` — the maximum size of a part that can be stored on any of the volume's disks.
-- `move_factor` — when the amount of available space gets lower than this factor, data automatically starts to move to the next volume, if any (by default, 0.1).
-
-Configuration examples:
-
-``` xml
-<storage_configuration>
-    ...
-    <policies>
-        <hdd_in_order> <!-- policy name -->
-            <volumes>
-                <single> <!-- volume name -->
-                    <disk>disk1</disk>
-                    <disk>disk2</disk>
-                </single>
-            </volumes>
-        </hdd_in_order>
-
-        <moving_from_ssd_to_hdd>
-            <volumes>
-                <hot>
-                    <disk>fast_ssd</disk>
-                    <max_data_part_size_bytes>1073741824</max_data_part_size_bytes>
-                </hot>
-                <cold>
-                    <disk>disk1</disk>
-                </cold>
-            </volumes>
-            <move_factor>0.2</move_factor>
-        </moving_from_ssd_to_hdd>
-    </policies>
-    ...
-</storage_configuration>
-```
-
-In the given example, the `hdd_in_order` policy implements the [round-robin](https://en.wikipedia.org/wiki/Round-robin_scheduling) approach. Thus this policy defines only one volume (`single`), and the data parts are stored on all its disks in circular order. Such a policy can be quite useful if there are several similar disks mounted to the system, but RAID is not configured. Keep in mind that each individual disk drive is not reliable, and you might want to compensate for that with a replication factor of 3 or more.
-
-If there are different kinds of disks available in the system, the `moving_from_ssd_to_hdd` policy can be used instead. The volume `hot` consists of an SSD disk (`fast_ssd`), and the maximum size of a part that can be stored on this volume is 1GB. All parts with a size larger than 1GB will be stored directly on the `cold` volume, which contains an HDD disk `disk1`.
-Also, once the disk `fast_ssd` gets filled by more than 80%, data will be transferred to `disk1` by a background process.
-
-The order of volume enumeration within a storage policy is important. Once a volume is overfilled, data is moved to the next one. The order of disk enumeration is important as well, because data is stored on them in turns.
-
-When creating a table, one can apply one of the configured storage policies to it:
-
-``` sql
-CREATE TABLE table_with_non_default_policy (
-    EventDate Date,
-    OrderID UInt64,
-    BannerID UInt64,
-    SearchPhrase String
-) ENGINE = MergeTree
-ORDER BY (OrderID, BannerID)
-PARTITION BY toYYYYMM(EventDate)
-SETTINGS storage_policy = 'moving_from_ssd_to_hdd'
-```
-
-The `default` storage policy implies using only one volume, which consists of only one disk given in `<path>`. Once a table is created, its storage policy cannot be changed.
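-
-To verify what the server actually loaded, one can query the system tables mentioned above (a sketch; the column lists are abbreviated):
-
-``` sql
-SELECT policy_name, volume_name, disks FROM system.storage_policies;
-SELECT name, path, free_space, total_space FROM system.disks;
-```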
-
-### Details {#details}
-
-In the case of `MergeTree` tables, data is getting to disk in different ways:
-
-- As a result of an insert (`INSERT` query).
-- During background merges and [mutations](../../../sql-reference/statements/alter.md#alter-mutations).
-- When downloading from another replica.
-- As a result of partition freezing [ALTER TABLE … FREEZE PARTITION](../../../sql-reference/statements/alter.md#alter_freeze-partition).
-
-In all these cases except for mutations and partition freezing, a part is stored on a volume and a disk according to the given storage policy:
-
-1. The first volume (in the order of definition) that has enough disk space for storing a part (`unreserved_space > current_part_size`) and allows for storing parts of a given size (`max_data_part_size_bytes > current_part_size`) is chosen.
-2. Within this volume, that disk is chosen that follows the one which was used for storing the previous chunk of data, and that has free space more than the part size (`unreserved_space - keep_free_space_bytes > current_part_size`).
-
-Under the hood, mutations and partition freezing make use of [hard links](https://en.wikipedia.org/wiki/Hard_link). Hard links between different disks are not supported, therefore in such cases the resulting parts are stored on the same disks as the initial ones.
-
-In the background, parts are moved between volumes on the basis of the amount of free space (the `move_factor` parameter) according to the order the volumes are declared in the configuration file.
-Data is never transferred from the last volume or into the first one. One may use the system tables [system.part_log](../../../operations/system-tables.md#system_tables-part-log) (field `type = MOVE_PART`) and [system.parts](../../../operations/system-tables.md#system_tables-parts) (fields `path` and `disk`) to monitor background moves. Also, detailed information can be found in server logs.
-
-The user can force moving a part or a partition from one volume to another using the query [ALTER TABLE … MOVE PART\|PARTITION … TO VOLUME\|DISK …](../../../sql-reference/statements/alter.md#alter_move-partition); all the restrictions for background operations are taken into account. The query initiates a move on its own and does not wait for background operations to be completed. The user will get an error message if not enough free space is available or if any of the required conditions are not met.
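-
-A minimal sketch of such a manual move, reusing the `moving_from_ssd_to_hdd` policy and the table from the configuration example above (the partition value is hypothetical):
-
-``` sql
--- move one monthly partition to the HDD-backed volume by hand
-ALTER TABLE table_with_non_default_policy MOVE PARTITION 201403 TO VOLUME 'cold';
-```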
-
-Moving data does not interfere with data replication. Therefore, different storage policies can be specified for the same table on different replicas.
-
-After the completion of background merges and mutations, old parts are removed only after a certain amount of time (`old_parts_lifetime`).
-During this time, they are not moved to other volumes or disks. Therefore, until the parts are finally removed, they are still taken into account for evaluation of the occupied disk space.
-
-[Original article](https://clickhouse.tech/docs/ru/operations/table_engines/mergetree/)
diff --git a/docs/es/engines/table-engines/mergetree-family/replacingmergetree.md b/docs/es/engines/table-engines/mergetree-family/replacingmergetree.md
deleted file mode 100644
index a1e95c5b5f4..00000000000
--- a/docs/es/engines/table-engines/mergetree-family/replacingmergetree.md
+++ /dev/null
@@ -1,69 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_priority: 33
-toc_title: ReplacingMergeTree
----
-
-# ReplacingMergeTree {#replacingmergetree}
-
-The engine differs from [MergeTree](mergetree.md#table_engines-mergetree) in that it removes duplicate entries with the same primary key value (or, more precisely, with the same [sorting key](mergetree.md) value).
-
-Data deduplication occurs only during a merge. Merging happens in the background at an unknown time, so you can't plan for it. Some of the data may remain unprocessed. Although you can run an unscheduled merge using the `OPTIMIZE` query, don't count on using it, because the `OPTIMIZE` query will read and write a large amount of data.
-
-Thus, `ReplacingMergeTree` is suitable for clearing out duplicate data in the background in order to save space, but it doesn't guarantee the absence of duplicates.
-
-## Creating a Table {#creating-a-table}
-
-``` sql
-CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
-(
-    name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
-    name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2],
-    ...
-) ENGINE = ReplacingMergeTree([ver])
-[PARTITION BY expr]
-[ORDER BY expr]
-[PRIMARY KEY expr]
-[SAMPLE BY expr]
-[SETTINGS name=value, ...]
-```
-
-For a description of request parameters, see the [statement description](../../../sql-reference/statements/create.md).
-
-**ReplacingMergeTree Parameters**
-
-- `ver` — column with version. Type `UInt*`, `Date` or `DateTime`. Optional parameter.
-
-    When merging, `ReplacingMergeTree` leaves only one row from all the rows with the same primary key:
-
-    - The last one in the selection, if `ver` is not set.
-    - The one with the maximum version, if `ver` is specified.
-
-**Query clauses**
-
-When creating a `ReplacingMergeTree` table, the same [clauses](mergetree.md) are required as when creating a `MergeTree` table.
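-
-A small illustrative schema (the table and column names are hypothetical, not part of the original page): keep only the most recently updated row per `UserID`:
-
-``` sql
-CREATE TABLE user_state
-(
-    UserID UInt64,
-    Updated DateTime,
-    PageViews UInt32
-) ENGINE = ReplacingMergeTree(Updated)
-ORDER BY UserID;
-
--- duplicates may linger until parts merge; force a merge just for the demo
-OPTIMIZE TABLE user_state FINAL;
-```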
-
-**Deprecated Method for Creating a Table**
-
-!!! attention "Attention"
-    Do not use this method in new projects and, if possible, switch old projects to the method described above.
-
-``` sql
-CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
-(
-    name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
-    name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2],
-    ...
-) ENGINE [=] ReplacingMergeTree(date-column [, sampling_expression], (primary, key), index_granularity, [ver])
-```
-
-All of the parameters except `ver` have the same meaning as in `MergeTree`.
-
-- `ver` - column with the version. Optional parameter. For a description, see the text above.
-
-[Original article](https://clickhouse.tech/docs/en/operations/table_engines/replacingmergetree/)
diff --git a/docs/es/engines/table-engines/mergetree-family/replication.md b/docs/es/engines/table-engines/mergetree-family/replication.md
deleted file mode 100644
index 505f5223800..00000000000
--- a/docs/es/engines/table-engines/mergetree-family/replication.md
+++ /dev/null
@@ -1,218 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_priority: 31
-toc_title: "Replicaci\xF3n de datos"
----
-
-# Data Replication {#table_engines-replication}
-
-Replication is only supported for tables in the MergeTree family:
-
-- ReplicatedMergeTree
-- ReplicatedSummingMergeTree
-- ReplicatedReplacingMergeTree
-- ReplicatedAggregatingMergeTree
-- ReplicatedCollapsingMergeTree
-- ReplicatedVersionedCollapsingMergetree
-- ReplicatedGraphiteMergeTree
-
-Replication works at the level of an individual table, not the entire server. A server can store both replicated and non-replicated tables at the same time.
-
-Replication does not depend on sharding. Each shard has its own independent replication.
-
-Compressed data for `INSERT` and `ALTER` queries is replicated (for more information, see the documentation for [ALTER](../../../sql-reference/statements/alter.md#query_language_queries_alter)).
-
-`CREATE`, `DROP`, `ATTACH`, `DETACH` and `RENAME` queries are executed on a single server and are not replicated:
-
-- The `CREATE TABLE` query creates a new replicatable table on the server where the query is run. If this table already exists on other servers, it adds a new replica.
-- The `DROP TABLE` query deletes the replica located on the server where the query is run.
-- The `RENAME` query renames the table on one of the replicas. In other words, replicated tables can have different names on different replicas.
-
-ClickHouse uses [Apache ZooKeeper](https://zookeeper.apache.org) for storing replicas meta information. Use ZooKeeper version 3.4.5 or newer.
-
-To use replication, set the parameters in the [zookeeper](../../../operations/server-configuration-parameters/settings.md#server-settings_zookeeper) server configuration section.
-
-!!! attention "Attention"
-    Don't neglect the security setting. ClickHouse supports the `digest` [ACL scheme](https://zookeeper.apache.org/doc/current/zookeeperProgrammers.html#sc_ZooKeeperAccessControl) of the ZooKeeper security subsystem.
-
-Example of setting the addresses of the ZooKeeper cluster:
-
-``` xml
-<zookeeper>
-    <node index="1">
-        <host>example1</host>
-        <port>2181</port>
-    </node>
-    <node index="2">
-        <host>example2</host>
-        <port>2181</port>
-    </node>
-    <node index="3">
-        <host>example3</host>
-        <port>2181</port>
-    </node>
-</zookeeper>
-```
-
-You can specify any existing ZooKeeper cluster, and the system will use a directory on it for its own data (the directory is specified when creating a replicatable table).
-
-If ZooKeeper isn't set in the config file, you can't create replicated tables, and any existing replicated tables will be read-only.
-
-ZooKeeper is not used in `SELECT` queries, because replication does not affect the performance of `SELECT` and queries run just as fast as they do for non-replicated tables. When querying distributed replicated tables, ClickHouse behavior is controlled by the settings [max_replica_delay_for_distributed_queries](../../../operations/settings/settings.md#settings-max_replica_delay_for_distributed_queries) and [fallback_to_stale_replicas_for_distributed_queries](../../../operations/settings/settings.md#settings-fallback_to_stale_replicas_for_distributed_queries).
-
-For each `INSERT` query, approximately ten entries are added to ZooKeeper through several transactions. (To be more precise, this is for each inserted block of data; an INSERT query contains one block, or one block per `max_insert_block_size = 1048576` rows.) This leads to slightly longer latencies for `INSERT` compared to non-replicated tables. But if you follow the recommendations to insert data in batches of no more than one `INSERT` per second, it doesn't create any problems. The entire ClickHouse cluster used for coordinating one ZooKeeper cluster has a total of several hundred `INSERTs` per second. The throughput on data inserts (the number of rows per second) is just as high as for non-replicated data.
-
-For very large clusters, you can use different ZooKeeper clusters for different shards. However, this hasn't proven necessary on the Yandex.Metrica cluster (approximately 300 servers).
-
-Replication is asynchronous and multi-master. `INSERT` queries (as well as `ALTER`) can be sent to any available server. Data is inserted on the server where the query is run, and then it is copied to the other servers. Because it is asynchronous, recently inserted data appears on the other replicas with some latency. If part of the replicas is not available, the data is written when they become available. If a replica is available, the latency is the amount of time it takes to transfer the block of compressed data over the network.
-
-By default, an INSERT query waits for confirmation of writing the data from only one replica. If the data was successfully written to only one replica and the server with this replica ceases to exist, the stored data will be lost. To enable getting confirmation of data writes from multiple replicas, use the `insert_quorum` option.
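-
-A sketch of what that looks like in a session (the table is the one created in the example further below):
-
-``` sql
--- require the write to be acknowledged by two replicas before INSERT returns
-SET insert_quorum = 2;
-INSERT INTO table_name VALUES ('2014-03-17 12:00:00', 34, 1);
-```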
-
-Each block of data is written atomically. The INSERT query is divided into blocks of up to `max_insert_block_size = 1048576` rows. In other words, if the `INSERT` query has less than 1048576 rows, it is made atomically.
-
-Data blocks are deduplicated. For multiple writes of the same data block (data blocks of the same size containing the same rows in the same order), the block is only written once. The reason for this is network failures, when the client application doesn't know whether the data was written to the DB, so the `INSERT` query can simply be repeated. It doesn't matter which replica INSERTs with identical data were sent to. `INSERTs` are idempotent. Deduplication parameters are controlled by the [merge_tree](../../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-merge_tree) server settings.
-
-During replication, only the source data to insert is transferred over the network. Further data transformation (merging) is coordinated and performed on all the replicas in the same way. This minimizes network usage, which means that replication works well when replicas reside in different datacenters. (Note that duplicating data in different datacenters is the main goal of replication.)
-
-You can have any number of replicas of the same data. Yandex.Metrica uses double replication in production. Each server uses RAID-5 or RAID-6, and RAID-10 in some cases. This is a relatively reliable and convenient solution.
-
-The system monitors data synchronicity on replicas and is able to recover after a failure. Failover is automatic (for small differences in data) or semi-automatic (when data differs too much, which may indicate a configuration error).
-
-## Creating Replicated Tables {#creating-replicated-tables}
-
-The `Replicated` prefix is added to the table engine name. For example: `ReplicatedMergeTree`.
-
-**Replicated\*MergeTree parameters**
-
-- `zoo_path` — The path to the table in ZooKeeper.
-- `replica_name` — The replica name in ZooKeeper.
-
-Example:
-
-``` sql
-CREATE TABLE table_name
-(
-    EventDate DateTime,
-    CounterID UInt32,
-    UserID UInt32
-) ENGINE = ReplicatedMergeTree('/clickhouse/tables/{layer}-{shard}/table_name', '{replica}')
-PARTITION BY toYYYYMM(EventDate)
-ORDER BY (CounterID, EventDate, intHash32(UserID))
-SAMPLE BY intHash32(UserID)
-```
-
-Example in deprecated syntax:
-
-``` sql
-CREATE TABLE table_name
-(
-    EventDate DateTime,
-    CounterID UInt32,
-    UserID UInt32
-) ENGINE = ReplicatedMergeTree('/clickhouse/tables/{layer}-{shard}/table_name', '{replica}', EventDate, intHash32(UserID), (CounterID, EventDate, intHash32(UserID), EventTime), 8192)
-```
-
-As the example shows, these parameters can contain substitutions in curly brackets. The substituted values are taken from the ‘macros’ section of the configuration file. Example:
-
-``` xml
-<macros>
-    <layer>05</layer>
-    <shard>02</shard>
-    <replica>example05-02-1.yandex.ru</replica>
-</macros>
-```
-
-The path to the table in ZooKeeper should be unique for each replicated table. Tables on different shards should have different paths.
-In this case, the path consists of the following parts:
-
-`/clickhouse/tables/` is the common prefix. We recommend using exactly this one.
-
-`{layer}-{shard}` is the shard identifier. In this example it consists of two parts, since the Yandex.Metrica cluster uses two-level sharding. For most tasks, you can leave just the {shard} substitution, which will be expanded to the shard identifier.
-
-`table_name` is the name of the node for the table in ZooKeeper. It is a good idea to make it the same as the table name. It is defined explicitly, because in contrast to the table name, it doesn't change after a RENAME query.
-*HINT*: you could add a database name in front of `table_name` as well, e.g. `db_name.table_name`.
-
-The replica name identifies different replicas of the same table. You can use the server name for this, as in the example. The name only needs to be unique within each shard.
-
-You can define the parameters explicitly instead of using substitutions. This might be convenient for testing and for configuring small clusters. However, you can't use distributed DDL queries (`ON CLUSTER`) in this case.
-
-When working with large clusters, we recommend using substitutions because they reduce the probability of error.
-
-Run the `CREATE TABLE` query on each replica. This query creates a new replicated table, or adds a new replica to an existing one.
-
-If you add a new replica after the table already contains some data on other replicas, the data will be copied from the other replicas to the new one after running the query. In other words, the new replica syncs itself with the others.
-
-To delete a replica, run `DROP TABLE`. However, only one replica is deleted – the one that resides on the server where you run the query.
-
-## Recovery After Failures {#recovery-after-failures}
-
-If ZooKeeper is unavailable when a server starts, replicated tables switch to read-only mode. The system periodically attempts to connect to ZooKeeper.
-
-If ZooKeeper is unavailable during an `INSERT`, or an error occurs when interacting with ZooKeeper, an exception is thrown.
-
-After connecting to ZooKeeper, the system checks whether the set of data in the local file system matches the expected set of data (ZooKeeper stores this information). If there are minor inconsistencies, the system resolves them by syncing data with the replicas.
-
-If the system detects broken data parts (with the wrong size of files) or unrecognized parts (parts written to the file system but not recorded in ZooKeeper), it moves them to the `detached` subdirectory (they are not deleted). Any missing parts are copied from the replicas.
-
-Note that ClickHouse does not perform any destructive actions, such as automatically deleting a large amount of data.
-
-When the server starts (or establishes a new session with ZooKeeper), it only checks the quantity and sizes of all files. If the file sizes match but bytes have been changed somewhere in the middle, this is not detected immediately, but only when attempting to read the data for a `SELECT` query. The query throws an exception about a non-matching checksum or the size of a compressed block. In this case, data parts are added to the verification queue and copied from the replicas if necessary.
-
-If the local set of data differs too much from the expected one, a safety mechanism is triggered. The server enters this in the log and refuses to launch. The reason for this is that this case may indicate a configuration error, such as a replica on one shard being accidentally configured like a replica on a different shard. However, the thresholds for this mechanism are set fairly low, and this situation might occur during normal failure recovery. In this case, data is restored semi-automatically — by “pushing a button”.
-
-To start recovery, create the node `/path_to_table/replica_name/flags/force_restore_data` in ZooKeeper with any content, or run the command to restore all replicated tables:
-
-``` bash
-sudo -u clickhouse touch /var/lib/clickhouse/flags/force_restore_data
-```
-
-Then restart the server. On start, the server deletes these flags and starts recovery.
-
-## Recovery After Complete Data Loss {#recovery-after-complete-data-loss}
-
-If all data and metadata disappeared from one of the servers, follow these steps for recovery:
-
-1. Install ClickHouse on the server. Define the substitutions correctly in the config file that contains the shard identifier and replicas, if you use them.
-2. If you had unreplicated tables that must be manually duplicated on the servers, copy their data from a replica (in the directory `/var/lib/clickhouse/data/db_name/table_name/`).
-3. Copy the table definitions located in `/var/lib/clickhouse/metadata/` from a replica. If a shard or replica identifier is defined explicitly in the table definitions, correct it so that it corresponds to this replica. (Alternatively, start the server and make all the `ATTACH TABLE` queries that should have been in the .sql files in `/var/lib/clickhouse/metadata/`.)
-4. To start recovery, create the ZooKeeper node `/path_to_table/replica_name/flags/force_restore_data` with any content, or run the command to restore all replicated tables: `sudo -u clickhouse touch /var/lib/clickhouse/flags/force_restore_data`
-
-Then start the server (restart it if it is already running). Data will be downloaded from the replicas.
-
-An alternative recovery option is to delete the information about the lost replica from ZooKeeper (`/path_to_table/replica_name`), then create the replica again as described in “[Creating Replicated Tables](#creating-replicated-tables)”.
-
-There is no restriction on network bandwidth during recovery. Keep this in mind if you are restoring many replicas at once.
-
-## Converting from MergeTree to ReplicatedMergeTree {#converting-from-mergetree-to-replicatedmergetree}
-
-We use the term `MergeTree` to refer to all the table engines in the `MergeTree family`, the same as for `ReplicatedMergeTree`.
-
-If you had a `MergeTree` table that was manually replicated, you can convert it to a replicated table.
-You might need to do this if you have already collected a large amount of data in a `MergeTree` table and now you want to enable replication.
-
-If the data differs on various replicas, first sync it, or delete this data on all the replicas except one.
-
-Rename the existing MergeTree table, then create a `ReplicatedMergeTree` table with the old name.
-Move the data from the old table to the `detached` subdirectory inside the directory with the new table data (`/var/lib/clickhouse/data/db_name/table_name/`).
-Then run `ALTER TABLE ATTACH PARTITION` on one of the replicas to add these data parts to the working set.
-
-## Converting from ReplicatedMergeTree to MergeTree {#converting-from-replicatedmergetree-to-mergetree}
-
-Create a MergeTree table with a different name. Move all the data from the directory with the `ReplicatedMergeTree` table data to the data directory of the new table. Then delete the `ReplicatedMergeTree` table and restart the server.
-
-If you want to get rid of a `ReplicatedMergeTree` table without launching the server:
-
-- Delete the corresponding `.sql` file in the metadata directory (`/var/lib/clickhouse/metadata/`).
-- Delete the corresponding path in ZooKeeper (`/path_to_table/replica_name`).
-
-After this, you can launch the server, create a `MergeTree` table, move the data to its directory, and then restart the server.
-
-## Recovery When Metadata in the ZooKeeper Cluster Is Lost or Damaged {#recovery-when-metadata-in-the-zookeeper-cluster-is-lost-or-damaged}
-
-If the data in ZooKeeper was lost or damaged, you can save the data by moving it to an unreplicated table as described above.
-
-[Original article](https://clickhouse.tech/docs/en/operations/table_engines/replication/)
diff --git a/docs/es/engines/table-engines/mergetree-family/summingmergetree.md b/docs/es/engines/table-engines/mergetree-family/summingmergetree.md
deleted file mode 100644
index 3ae9a1515c0..00000000000
--- a/docs/es/engines/table-engines/mergetree-family/summingmergetree.md
+++ /dev/null
@@ -1,141 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_priority: 34
-toc_title: SummingMergeTree
----
-
-# SummingMergeTree {#summingmergetree}
-
-The engine inherits from [MergeTree](mergetree.md#table_engines-mergetree). The difference is that when merging data parts for `SummingMergeTree` tables, ClickHouse replaces all the rows with the same primary key (or, more precisely, with the same [sorting key](mergetree.md)) with one row which contains summarized values for the columns with a numeric data type. If the sorting key is composed in a way that a single key value corresponds to a large number of rows, this significantly reduces storage volume and speeds up data selection.
-
-We recommend using the engine together with `MergeTree`. Store complete data in a `MergeTree` table, and use `SummingMergeTree` for storing aggregated data, for example, when preparing reports. Such an approach will prevent you from losing valuable data due to an incorrectly composed primary key.
-
-## Creating a Table {#creating-a-table}
-
-``` sql
-CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
-(
-    name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
-    name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2],
-    ...
-) ENGINE = SummingMergeTree([columns])
-[PARTITION BY expr]
-[ORDER BY expr]
-[SAMPLE BY expr]
-[SETTINGS name=value, ...]
-```
-
-For a description of request parameters, see the [request description](../../../sql-reference/statements/create.md).
-
-**Parameters of SummingMergeTree**
-
-- `columns` - a tuple with the names of columns where values will be summarized. Optional parameter.
-    The columns must be of a numeric type and must not be in the primary key.
-
-    If `columns` is not specified, ClickHouse summarizes the values in all columns with a numeric data type that are not in the primary key.
-
-**Query clauses**
-
-When creating a `SummingMergeTree` table, the same [clauses](mergetree.md) are required as when creating a `MergeTree` table.
-
-**Deprecated Method for Creating a Table**
-
-!!! attention "Attention"
-    Do not use this method in new projects and, if possible, switch old projects to the method described above.
-
-``` sql
-CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
-(
-    name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
-    name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2],
-    ...
-) ENGINE [=] SummingMergeTree(date-column [, sampling_expression], (primary, key), index_granularity, [columns])
-```
-
-All of the parameters except `columns` have the same meaning as in `MergeTree`.
-
-- `columns` — tuple with names of columns whose values will be summarized. Optional parameter. For a description, see the text above.
-
-## Usage Example {#usage-example}
-
-Consider the following table:
-
-``` sql
-CREATE TABLE summtt
-(
-    key UInt32,
-    value UInt32
-)
-ENGINE = SummingMergeTree()
-ORDER BY key
-```
-
-Insert data into it:
-
-``` sql
-INSERT INTO summtt Values(1,1),(1,2),(2,1)
-```
-
-ClickHouse may sum all the rows not completely ([see below](#data-processing)), so we use the aggregate function `sum` and a `GROUP BY` clause in the query.
-
-``` sql
-SELECT key, sum(value) FROM summtt GROUP BY key
-```
-
-``` text
-┌─key─┬─sum(value)─┐
-│   2 │          1 │
-│   1 │          3 │
-└─────┴────────────┘
-```
-
-## Data Processing {#data-processing}
-
-When data is inserted into a table, it is saved as-is. ClickHouse merges the inserted parts of data periodically, and this is when rows with the same primary key are summed and replaced with one row for each resulting part of data.
-
-ClickHouse can merge the data parts in such a way that different resulting parts can consist of rows with the same primary key, i.e. the summation will be incomplete. Therefore, in a (`SELECT`) query, the aggregate function [sum()](../../../sql-reference/aggregate-functions/reference.md#agg_function-sum) and a `GROUP BY` clause should be used, as described in the example above.
-
-### Common Rules for Summation {#common-rules-for-summation}
-
-The values in the columns with a numeric data type are summarized. The set of columns is defined by the parameter `columns`.
-
-If the values were 0 in all of the columns for summation, the row is deleted.
-
-If a column is not in the primary key and is not summarized, an arbitrary value is selected from the existing ones.
-
-The values are not summarized for columns in the primary key.
-
-### The Summation in the AggregateFunction Columns {#the-summation-in-the-aggregatefunction-columns}
-
-For columns of the [AggregateFunction type](../../../sql-reference/data-types/aggregatefunction.md), ClickHouse behaves as the [AggregatingMergeTree](aggregatingmergetree.md) engine, aggregating according to the function.
-
-### Nested Structures {#nested-structures}
-
-The table can have nested data structures that are processed in a special way.
-
-If the name of a nested table ends with `Map` and it contains at least two columns that meet the following criteria:
-
-- the first column is numeric `(*Int*, Date, DateTime)` or a string `(String, FixedString)`, let's call it `key`,
-- the other columns are arithmetic `(*Int*, Float32/64)`, let's call them `(values...)`,
-
-then this nested table is interpreted as a mapping of `key => (values...)`, and when merging its rows, the elements of two data sets are merged by `key` with a summation of the corresponding `(values...)`.
-
-Examples:
-
-``` text
-[(1, 100)] + [(2, 150)] -> [(1, 100), (2, 150)]
-[(1, 100)] + [(1, 150)] -> [(1, 250)]
-[(1, 100)] + [(1, 150), (2, 150)] -> [(1, 250), (2, 150)]
-[(1, 100), (2, 150)] + [(1, -100)] -> [(2, 150)]
-```
-
-When requesting data, use the [sumMap(key, value)](../../../sql-reference/aggregate-functions/reference.md) function for the aggregation of `Map`.
-
-For a nested data structure, you do not need to specify its columns in the tuple of columns for summation.
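-
-A hedged sketch of such a nested structure (the table and column names are hypothetical): the nested table name ends with `Map`, its first column acts as the key, and the second one is summed per key during merges:
-
-``` sql
-CREATE TABLE summtt_nested
-(
-    key UInt32,
-    StatsMap Nested(
-        id UInt16,
-        hits UInt64
-    )
-) ENGINE = SummingMergeTree()
-ORDER BY key;
-
--- finalize the per-key summation at query time
-SELECT key, sumMap(StatsMap.id, StatsMap.hits) FROM summtt_nested GROUP BY key;
-```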
-
-[Original article](https://clickhouse.tech/docs/en/operations/table_engines/summingmergetree/)
diff --git a/docs/es/engines/table-engines/mergetree-family/versionedcollapsingmergetree.md b/docs/es/engines/table-engines/mergetree-family/versionedcollapsingmergetree.md
deleted file mode 100644
index d69bfe9440e..00000000000
--- a/docs/es/engines/table-engines/mergetree-family/versionedcollapsingmergetree.md
+++ /dev/null
@@ -1,238 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_priority: 37
-toc_title: VersionedCollapsingMergeTree
----
-
-# VersionedCollapsingMergeTree {#versionedcollapsingmergetree}
-
-This engine:
-
-- Allows quick writing of object states that are continually changing.
-- Deletes old object states in the background. This significantly reduces the volume of storage.
-
-See the section [Collapsing](#table_engines_versionedcollapsingmergetree) for details.
-
-The engine inherits from [MergeTree](mergetree.md#table_engines-mergetree) and adds the logic for collapsing rows to the algorithm for merging data parts. `VersionedCollapsingMergeTree` serves the same purpose as [CollapsingMergeTree](collapsingmergetree.md) but uses a different collapsing algorithm that allows inserting the data in any order with multiple threads. In particular, the `Version` column helps to collapse the rows properly even if they are inserted in the wrong order. In contrast, `CollapsingMergeTree` allows only strictly consecutive insertion.
-
-## Creating a Table {#creating-a-table}
-
-``` sql
-CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
-(
-    name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
-    name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2],
-    ...
-) ENGINE = VersionedCollapsingMergeTree(sign, version)
-[PARTITION BY expr]
-[ORDER BY expr]
-[SAMPLE BY expr]
-[SETTINGS name=value, ...]
-```
-
-For a description of query parameters, see the [query description](../../../sql-reference/statements/create.md).
-
-**Engine Parameters**
-
-``` sql
-VersionedCollapsingMergeTree(sign, version)
-```
-
-- `sign` — Name of the column with the type of row: `1` is a “state” row, `-1` is a “cancel” row.
-
-    The column data type should be `Int8`.
-
-- `version` — Name of the column with the version of the object state.
-
-    The column data type should be `UInt*`.
-
-**Query Clauses**
-
-When creating a `VersionedCollapsingMergeTree` table, the same [clauses](mergetree.md) are required as when creating a `MergeTree` table.
-
-**Deprecated Method for Creating a Table**
-
-!!! attention "Attention"
-    Do not use this method in new projects. If possible, switch old projects to the method described above.
-
-``` sql
-CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
-(
-    name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
-    name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2],
-    ...
-) ENGINE [=] VersionedCollapsingMergeTree(date-column [, sampling_expression], (primary, key), index_granularity, sign, version)
-```
-
-All of the parameters except `sign` and `version` have the same meaning as in `MergeTree`.
-
-- `sign` — Name of the column with the type of row: `1` is a “state” row, `-1` is a “cancel” row.
-
-    Column Data Type — `Int8`.
-
-- `version` — Name of the column with the version of the object state.
-
-    The column data type should be `UInt*`.
-
-## Collapsing {#table_engines_versionedcollapsingmergetree}
-
-### Data {#data}
-
-Consider a situation where you need to save continually changing data for some object. It is reasonable to have one row for an object and update the row whenever there are changes. However, the update operation is expensive and slow for a DBMS because it requires rewriting the data in the storage. Updating is not acceptable if you need to write data quickly, but you can write the changes to an object sequentially as follows.
-
-Use the `Sign` column when writing the row. If `Sign = 1`, it means that the row is a state of an object (let's call it the “state” row). If `Sign = -1`, it indicates the cancellation of the state of an object with the same attributes (let's call it the “cancel” row). Also use the `Version` column, which should identify each state of an object with a separate number.
-
-For example, we want to calculate how many pages users visited on some site and how long they were there. At some point in time we write the following row with the state of user activity:
-
-``` text
-┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┬─Version─┐
-│ 4324182021466249494 │         5 │      146 │    1 │       1 │
-└─────────────────────┴───────────┴──────────┴──────┴─────────┘
-```
-
-At some moment later we register the change of user activity and write it with the following two rows.
-
-``` text
-┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┬─Version─┐
-│ 4324182021466249494 │         5 │      146 │   -1 │       1 │
-│ 4324182021466249494 │         6 │      185 │    1 │       2 │
-└─────────────────────┴───────────┴──────────┴──────┴─────────┘
-```
-
-The first row cancels the previous state of the object (user). It should copy all of the fields of the canceled state except `Sign`.
-
-The second row contains the current state.
-
-Because we need only the last state of user activity, the rows
-
-``` text
-┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┬─Version─┐
-│ 4324182021466249494 │         5 │      146 │    1 │       1 │
-│ 4324182021466249494 │         5 │      146 │   -1 │       1 │
-└─────────────────────┴───────────┴──────────┴──────┴─────────┘
-```
-
-can be deleted, collapsing the invalid (old) state of the object. `VersionedCollapsingMergeTree` does this while merging the data parts.
-
-To find out why we need two rows for each change, see [Algorithm](#table_engines-versionedcollapsingmergetree-algorithm).
-
-**Notes on Usage**
-
-1. The program that writes the data should remember the state of an object in order to cancel it. The “cancel” row should be a copy of the “state” row with the opposite `Sign`. This increases the initial size of storage but allows you to write the data quickly.
-2. Long growing arrays in columns reduce the efficiency of the engine due to the load for writing. The more straightforward the data, the better the efficiency.
-3. `SELECT` results depend strongly on the consistency of the history of object changes. Be accurate when preparing data for inserting. You can get unpredictable results with inconsistent data, such as negative values for non-negative metrics like session depth.
-
-### Algorithm {#table_engines-versionedcollapsingmergetree-algorithm}
-
-When ClickHouse merges data parts, it deletes each pair of rows that have the same primary key and version but a different `Sign`. The order of rows does not matter.
-
-When ClickHouse inserts data, it orders rows by the primary key. If the `Version` column is not in the primary key, ClickHouse adds it to the primary key implicitly as the last field and uses it for ordering.
-
-## Selecting Data {#selecting-data}
-
-ClickHouse does not guarantee that all of the rows with the same primary key will be in the same resulting data part or even on the same physical server. This is true both for writing the data and for the subsequent merging of the data parts. In addition, ClickHouse processes `SELECT` queries with multiple threads, and it cannot predict the order of rows in the result. This means that aggregation is required if there is a need to get completely “collapsed” data from a `VersionedCollapsingMergeTree` table.
-
-To finalize collapsing, write a query with a `GROUP BY` clause and aggregate functions that account for the sign. For example, to calculate quantity, use `sum(Sign)` instead of `count()`. To calculate the sum of something, use `sum(Sign * x)` instead of `sum(x)`, and add `HAVING sum(Sign) > 0`.
-
-The aggregates `count`, `sum` and `avg` can be calculated this way. The aggregate `uniq` can be calculated if an object has at least one non-collapsed state. The aggregates `min` and `max` can't be calculated, because `VersionedCollapsingMergeTree` does not save the history of values of collapsed states.
-
-If you need to extract the data with “collapsing” but without aggregation (for example, to check whether rows are present whose newest values match certain conditions), you can use the `FINAL` modifier for the `FROM` clause. This approach is inefficient and should not be used with large tables.
-
-## Example of Use {#example-of-use}
-
-Example data:
-
-``` text
-┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┬─Version─┐
-│ 4324182021466249494 │         5 │      146 │    1 │       1 │
-│ 4324182021466249494 │         5 │      146 │   -1 │       1 │
-│ 4324182021466249494 │         6 │      185 │    1 │       2 │
-└─────────────────────┴───────────┴──────────┴──────┴─────────┘
-```
-
-Creating the table:
-
-``` sql
-CREATE TABLE UAct
-(
-    UserID UInt64,
-    PageViews UInt8,
-    Duration UInt8,
-    Sign Int8,
-    Version UInt8
-)
-ENGINE = VersionedCollapsingMergeTree(Sign, Version)
-ORDER BY UserID
-```
-
-Inserting the data:
-
-``` sql
-INSERT INTO UAct VALUES (4324182021466249494, 5, 146, 1, 1)
-```
-
-``` sql
-INSERT INTO UAct VALUES (4324182021466249494, 5, 146, -1, 1),(4324182021466249494, 6, 185, 1, 2)
-```
-
-We use two `INSERT` queries to create two different data parts. If we insert the data with a single query, ClickHouse creates one data part and will never perform any merge.
-
-Getting the data:
-
-``` sql
-SELECT * FROM UAct
-```
-
-``` text
-┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┬─Version─┐
-│ 4324182021466249494 │         5 │      146 │    1 │       1 │
-└─────────────────────┴───────────┴──────────┴──────┴─────────┘
-┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┬─Version─┐
-│ 4324182021466249494 │         5 │      146 │   -1 │       1 │
-│ 4324182021466249494 │         6 │      185 │    1 │       2 │
-└─────────────────────┴───────────┴──────────┴──────┴─────────┘
-```
-
-What do we see here, and where are the collapsed parts?
-We created two data parts using two `INSERT` queries. The `SELECT` query was performed in two threads, and the result is a random order of rows.
-Collapsing did not occur because the data parts have not been merged yet. ClickHouse merges data parts at an unknown point in time which we cannot predict.
-
-This is why we need aggregation:
-
-``` sql
-SELECT
-    UserID,
-    sum(PageViews * Sign) AS PageViews,
-    sum(Duration * Sign) AS Duration,
-    Version
-FROM UAct
-GROUP BY UserID, Version
-HAVING sum(Sign) > 0
-```
-
-``` text
-┌──────────────UserID─┬─PageViews─┬─Duration─┬─Version─┐
-│ 4324182021466249494 │         6 │      185 │       2 │
-└─────────────────────┴───────────┴──────────┴─────────┘
-```
-
-If we don't need aggregation and want to force collapsing, we can use the `FINAL` modifier for the `FROM` clause.
-
-``` sql
-SELECT * FROM UAct FINAL
-```
-
-``` text
-┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┬─Version─┐
-│ 4324182021466249494 │         6 │      185 │    1 │       2 │
-└─────────────────────┴───────────┴──────────┴──────┴─────────┘
-```
-
-This is a very inefficient way to select data. Don't use it for large tables.
-
-[Original article](https://clickhouse.tech/docs/en/operations/table_engines/versionedcollapsingmergetree/)
diff --git a/docs/es/engines/table-engines/special/buffer.md b/docs/es/engines/table-engines/special/buffer.md
deleted file mode 100644
index b3a26ff356a..00000000000
--- a/docs/es/engines/table-engines/special/buffer.md
+++ /dev/null
@@ -1,71 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_priority: 45
-toc_title: "B\xFAfer"
----
-
-# Buffer {#buffer}
-
-Buffers the data to write in RAM, periodically flushing it to another table. During the read operation, data is read from the buffer and the other table simultaneously.
-
-``` sql
-Buffer(database, table, num_layers, min_time, max_time, min_rows, max_rows, min_bytes, max_bytes)
-```
-
-Engine parameters:
-
-- `database` – Database name. Instead of the database name, you can use a constant expression that returns a string.
-- `table` – Table to flush data to.
-- `num_layers` – Parallelism layer. Physically, the table will be represented as `num_layers` independent buffers. Recommended value: 16.
-- `min_time`, `max_time`, `min_rows`, `max_rows`, `min_bytes`, and `max_bytes` – Conditions for flushing data from the buffer.
-
-Data is flushed from the buffer and written to the destination table if all the `min*` conditions or at least one `max*` condition are met.
-
-- `min_time`, `max_time` – Condition for the time in seconds from the moment of the first write to the buffer.
-- `min_rows`, `max_rows` – Condition for the number of rows in the buffer.
-- `min_bytes`, `max_bytes` – Condition for the number of bytes in the buffer.
-
-During the write operation, data is inserted into one of the `num_layers` random buffers. Or, if the data part to insert is large enough (greater than `max_rows` or `max_bytes`), it is written directly to the destination table, omitting the buffer.
-
-The conditions for flushing the data are calculated separately for each of the `num_layers` buffers. For example, if `num_layers = 16` and `max_bytes = 100000000`, the maximum RAM consumption is 1.6 GB.
-
-Example:
-
-``` sql
-CREATE TABLE merge.hits_buffer AS merge.hits ENGINE = Buffer(merge, hits, 16, 10, 100, 10000, 1000000, 10000000, 100000000)
-```
-
-This creates a ‘merge.hits_buffer’ table with the same structure as ‘merge.hits’, using the Buffer engine. When writing to this table, data is buffered in RAM and later written to the ‘merge.hits’ table. 16 buffers are created. The data in each of them is flushed if either 100 seconds have passed, or one million rows have been written, or 100 MB of data have been written; or if simultaneously 10 seconds have passed, 10,000 rows have been written, and 10 MB of data have been written. For example, if just one row has been written, after 100 seconds it will be flushed no matter what. But if many rows have been written, the data will be flushed sooner.
-
-When the server is stopped, with DROP TABLE or DETACH TABLE, buffered data is also flushed to the destination table.
-
-You can set empty strings in single quotation marks for the database and table name. This indicates the absence of a destination table. In this case, when the data flush conditions are reached, the buffer is simply cleared. This may be useful for keeping a window of data in memory.
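-
-A hedged sketch of that in-memory-window pattern (the table name is hypothetical; the structure must be declared explicitly because there is no destination table to copy it from):
-
-``` sql
--- no destination table: flushed data is simply discarded
-CREATE TABLE recent_events (ts DateTime, msg String)
-ENGINE = Buffer('', '', 16, 10, 100, 10000, 1000000, 10000000, 100000000)
-```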
Los datos de cada uno de ellos se vacían si han pasado 100 segundos o se han escrito un millón de filas o se han escrito 100 MB de datos; o si simultáneamente han pasado 10 segundos y se han escrito 10.000 filas y 10 MB de datos. Por ejemplo, si solo se ha escrito una fila, después de 100 segundos se vaciará, pase lo que pase. Pero si se han escrito muchas filas, los datos se vaciarán antes. - -Cuando se detiene el servidor, con DROP TABLE o DETACH TABLE, los datos del búfer también se vacían a la tabla de destino. - -Puede establecer cadenas vacías entre comillas simples para la base de datos y el nombre de la tabla. Esto indica la ausencia de una tabla de destino. En este caso, cuando se alcanzan las condiciones de descarga de datos, el búfer simplemente se borra. Esto puede ser útil para mantener una ventana de datos en la memoria. - -Al leer desde una tabla de búfer, los datos se procesan tanto desde el búfer como desde la tabla de destino (si hay uno). -Tenga en cuenta que las tablas Buffer no admiten un índice. En otras palabras, los datos del búfer se analizan por completo, lo que puede ser lento para los búferes grandes. (Para los datos de una tabla subordinada, se utilizará el índice que admite.) - -Si el conjunto de columnas de la tabla Buffer no coincide con el conjunto de columnas de una tabla subordinada, se inserta un subconjunto de columnas que existen en ambas tablas. - -Si los tipos no coinciden con una de las columnas de la tabla Búfer y una tabla subordinada, se escribe un mensaje de error en el registro del servidor y se borra el búfer. -Lo mismo sucede si la tabla subordinada no existe cuando se vacía el búfer. - -Si necesita ejecutar ALTER para una tabla subordinada y la tabla de búfer, se recomienda eliminar primero la tabla de búfer, ejecutar ALTER para la tabla subordinada y, a continuación, crear la tabla de búfer de nuevo. - -Si el servidor se reinicia de forma anormal, se pierden los datos del búfer. - -FINAL y SAMPLE no funcionan correctamente para las tablas Buffer. Estas condiciones se pasan a la tabla de destino, pero no se utilizan para procesar datos en el búfer. Si se requieren estas características, recomendamos usar solo la tabla Buffer para escribir, mientras lee desde la tabla de destino. - -Al agregar datos a un búfer, uno de los búferes está bloqueado. Esto provoca retrasos si se realiza una operación de lectura simultáneamente desde la tabla. - -Los datos que se insertan en una tabla de búfer pueden terminar en la tabla subordinada en un orden diferente y en bloques diferentes. Debido a esto, una tabla Buffer es difícil de usar para escribir en un CollapsingMergeTree correctamente. Para evitar problemas, puede establecer ‘num_layers’ a 1. - -Si se replica la tabla de destino, se pierden algunas características esperadas de las tablas replicadas al escribir en una tabla de búfer. Los cambios aleatorios en el orden de las filas y los tamaños de las partes de datos hacen que la desduplicación de datos deje de funcionar, lo que significa que no es posible tener un ‘exactly once’ escribir en tablas replicadas. - -Debido a estas desventajas, solo podemos recomendar el uso de una tabla Buffer en casos raros. - -Una tabla de búfer se usa cuando se reciben demasiados INSERT de un gran número de servidores durante una unidad de tiempo y los datos no se pueden almacenar en búfer antes de la inserción, lo que significa que los INSERT no pueden ejecutarse lo suficientemente rápido. 
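-
-The in-memory window case mentioned above, where empty strings stand in for the destination database and table so the buffer is simply cleared on flush, might look like the following sketch (`recent_events` is a hypothetical name):
-
-``` sql
-CREATE TABLE recent_events (ts DateTime, message String)
-ENGINE = Buffer('', '', 1, 10, 100, 10000, 1000000, 10000000, 100000000)
-```
-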
- -Tenga en cuenta que no tiene sentido insertar datos una fila a la vez, incluso para las tablas de búfer. Esto solo producirá una velocidad de unos pocos miles de filas por segundo, mientras que la inserción de bloques de datos más grandes puede producir más de un millón de filas por segundo (consulte la sección “Performance”). - -[Artículo Original](https://clickhouse.tech/docs/en/operations/table_engines/buffer/) diff --git a/docs/es/engines/table-engines/special/dictionary.md b/docs/es/engines/table-engines/special/dictionary.md deleted file mode 100644 index 6d9136a6a23..00000000000 --- a/docs/es/engines/table-engines/special/dictionary.md +++ /dev/null @@ -1,97 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 35 -toc_title: Diccionario ---- - -# Diccionario {#dictionary} - -El `Dictionary` el motor muestra el [diccionario](../../../sql-reference/dictionaries/external-dictionaries/external-dicts.md) datos como una tabla ClickHouse. - -Como ejemplo, considere un diccionario de `products` con la siguiente configuración: - -``` xml - - - products - - -
        <table>products</table>
- DSN=some-db-server - - - - 300 - 360 - - - - - - - product_id - - - title - String - - - - - -``` - -Consultar los datos del diccionario: - -``` sql -SELECT - name, - type, - key, - attribute.names, - attribute.types, - bytes_allocated, - element_count, - source -FROM system.dictionaries -WHERE name = 'products' -``` - -``` text -┌─name─────┬─type─┬─key────┬─attribute.names─┬─attribute.types─┬─bytes_allocated─┬─element_count─┬─source──────────┐ -│ products │ Flat │ UInt64 │ ['title'] │ ['String'] │ 23065376 │ 175032 │ ODBC: .products │ -└──────────┴──────┴────────┴─────────────────┴─────────────────┴─────────────────┴───────────────┴─────────────────┘ -``` - -Puede usar el [dictGet\*](../../../sql-reference/functions/ext-dict-functions.md#ext_dict_functions) función para obtener los datos del diccionario en este formato. - -Esta vista no es útil cuando necesita obtener datos sin procesar o cuando `JOIN` operación. Para estos casos, puede usar el `Dictionary` motor, que muestra los datos del diccionario en una tabla. - -Sintaxis: - -``` sql -CREATE TABLE %table_name% (%fields%) engine = Dictionary(%dictionary_name%)` -``` - -Ejemplo de uso: - -``` sql -create table products (product_id UInt64, title String) Engine = Dictionary(products); -``` - - Ok - -Echa un vistazo a lo que hay en la mesa. - -``` sql -select * from products limit 1; -``` - -``` text -┌────product_id─┬─title───────────┐ -│ 152689 │ Some item │ -└───────────────┴─────────────────┘ -``` - -[Artículo Original](https://clickhouse.tech/docs/en/operations/table_engines/dictionary/) diff --git a/docs/es/engines/table-engines/special/distributed.md b/docs/es/engines/table-engines/special/distributed.md deleted file mode 100644 index bac407a651a..00000000000 --- a/docs/es/engines/table-engines/special/distributed.md +++ /dev/null @@ -1,152 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 33 -toc_title: Distribuido ---- - -# Distribuido {#distributed} - -**Las tablas con motor distribuido no almacenan ningún dato por sí mismas**, pero permite el procesamiento de consultas distribuidas en varios servidores. -La lectura se paralela automáticamente. Durante una lectura, se utilizan los índices de tabla en servidores remotos, si los hay. - -El motor distribuido acepta parámetros: - -- el nombre del clúster en el archivo de configuración del servidor - -- el nombre de una base de datos remota - -- el nombre de una tabla remota - -- (opcionalmente) clave de fragmentación - -- nombre de política (opcionalmente), se usará para almacenar archivos temporales para el envío asíncrono - - Ver también: - - - `insert_distributed_sync` configuración - - [Método de codificación de datos:](../mergetree-family/mergetree.md#table_engine-mergetree-multiple-volumes) para los ejemplos - -Ejemplo: - -``` sql -Distributed(logs, default, hits[, sharding_key[, policy_name]]) -``` - -Los datos se leerán desde todos los servidores ‘logs’ clúster, desde el valor predeterminado.tabla de éxitos ubicada en cada servidor del clúster. -Los datos no solo se leen sino que se procesan parcialmente en los servidores remotos (en la medida en que esto sea posible). -Por ejemplo, para una consulta con GROUP BY, los datos se agregarán en servidores remotos y los estados intermedios de las funciones agregadas se enviarán al servidor solicitante. Luego, los datos se agregarán más. - -En lugar del nombre de la base de datos, puede usar una expresión constante que devuelva una cadena. 
Por ejemplo: currentDatabase(). - -logs – The cluster name in the server's config file. - -Los clústeres se establecen así: - -``` xml - - - - - 1 - - false - - example01-01-1 - 9000 - - - example01-01-2 - 9000 - - - - 2 - false - - example01-02-1 - 9000 - - - example01-02-2 - 1 - 9440 - - - - -``` - -Aquí se define un clúster con el nombre ‘logs’ que consta de dos fragmentos, cada uno de los cuales contiene dos réplicas. -Los fragmentos se refieren a los servidores que contienen diferentes partes de los datos (para leer todos los datos, debe acceder a todos los fragmentos). -Las réplicas están duplicando servidores (para leer todos los datos, puede acceder a los datos en cualquiera de las réplicas). - -Los nombres de clúster no deben contener puntos. - -Los parámetros `host`, `port`, y opcionalmente `user`, `password`, `secure`, `compression` se especifican para cada servidor: -- `host` – The address of the remote server. You can use either the domain or the IPv4 or IPv6 address. If you specify the domain, the server makes a DNS request when it starts, and the result is stored as long as the server is running. If the DNS request fails, the server doesn't start. If you change the DNS record, restart the server. -- `port` – The TCP port for messenger activity (‘tcp_port’ en la configuración, generalmente establecido en 9000). No lo confundas con http_port. -- `user` – Name of the user for connecting to a remote server. Default value: default. This user must have access to connect to the specified server. Access is configured in the users.xml file. For more information, see the section [Derechos de acceso](../../../operations/access-rights.md). -- `password` – The password for connecting to a remote server (not masked). Default value: empty string. -- `secure` - Use ssl para la conexión, por lo general también debe definir `port` = 9440. El servidor debe escuchar en `9440` y tener certificados correctos. -- `compression` - Utilice la compresión de datos. Valor predeterminado: true. - -When specifying replicas, one of the available replicas will be selected for each of the shards when reading. You can configure the algorithm for load balancing (the preference for which replica to access) – see the [load_balancing](../../../operations/settings/settings.md#settings-load_balancing) configuración. -Si no se establece la conexión con el servidor, habrá un intento de conectarse con un breve tiempo de espera. Si la conexión falla, se seleccionará la siguiente réplica, y así sucesivamente para todas las réplicas. Si el intento de conexión falló para todas las réplicas, el intento se repetirá de la misma manera, varias veces. -Esto funciona a favor de la resiliencia, pero no proporciona una tolerancia completa a errores: un servidor remoto podría aceptar la conexión, pero podría no funcionar o funcionar mal. - -Puede especificar solo uno de los fragmentos (en este caso, el procesamiento de consultas debe denominarse remoto, en lugar de distribuido) o hasta cualquier número de fragmentos. En cada fragmento, puede especificar entre una y cualquier número de réplicas. Puede especificar un número diferente de réplicas para cada fragmento. - -Puede especificar tantos clústeres como desee en la configuración. - -Para ver los clústeres, utilice el ‘system.clusters’ tabla. - -El motor distribuido permite trabajar con un clúster como un servidor local. 
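-
-Putting the pieces together, a Distributed table over the ‘logs’ cluster defined above might be created like this (a sketch; `hits_all` is a hypothetical name, and the structure is copied from the local `default.hits` table):
-
-``` sql
-CREATE TABLE hits_all AS default.hits
-ENGINE = Distributed(logs, default, hits, rand())
-```
-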
Sin embargo, el clúster es inextensible: debe escribir su configuración en el archivo de configuración del servidor (mejor aún, para todos los servidores del clúster). - -The Distributed engine requires writing clusters to the config file. Clusters from the config file are updated on the fly, without restarting the server. If you need to send a query to an unknown set of shards and replicas each time, you don't need to create a Distributed table – use the ‘remote’ función de tabla en su lugar. Vea la sección [Funciones de tabla](../../../sql-reference/table-functions/index.md). - -Hay dos métodos para escribir datos en un clúster: - -Primero, puede definir a qué servidores escribir en qué datos y realizar la escritura directamente en cada fragmento. En otras palabras, realice INSERT en las tablas que la tabla distribuida “looks at”. Esta es la solución más flexible, ya que puede usar cualquier esquema de fragmentación, que podría ser no trivial debido a los requisitos del área temática. Esta es también la solución más óptima ya que los datos se pueden escribir en diferentes fragmentos de forma completamente independiente. - -En segundo lugar, puede realizar INSERT en una tabla distribuida. En este caso, la tabla distribuirá los datos insertados a través de los propios servidores. Para escribir en una tabla distribuida, debe tener un conjunto de claves de fragmentación (el último parámetro). Además, si solo hay un fragmento, la operación de escritura funciona sin especificar la clave de fragmentación, ya que no significa nada en este caso. - -Cada fragmento puede tener un peso definido en el archivo de configuración. Por defecto, el peso es igual a uno. Los datos se distribuyen entre fragmentos en la cantidad proporcional al peso del fragmento. Por ejemplo, si hay dos fragmentos y el primero tiene un peso de 9 mientras que el segundo tiene un peso de 10, el primero se enviará 9 / 19 partes de las filas, y el segundo se enviará 10 / 19. - -Cada fragmento puede tener el ‘internal_replication’ parámetro definido en el archivo de configuración. - -Si este parámetro se establece en ‘true’, la operación de escritura selecciona la primera réplica en buen estado y escribe datos en ella. Utilice esta alternativa si la tabla Distribuida “looks at” tablas replicadas. En otras palabras, si la tabla donde se escribirán los datos los replicará por sí misma. - -Si se establece en ‘false’ (el valor predeterminado), los datos se escriben en todas las réplicas. En esencia, esto significa que la tabla distribuida replica los datos en sí. Esto es peor que usar tablas replicadas, porque no se verifica la consistencia de las réplicas y, con el tiempo, contendrán datos ligeramente diferentes. - -Para seleccionar el fragmento al que se envía una fila de datos, se analiza la expresión de fragmentación y su resto se toma de dividirlo por el peso total de los fragmentos. La fila se envía al fragmento que corresponde al medio intervalo de los restos de ‘prev_weight’ a ‘prev_weights + weight’, donde ‘prev_weights’ es el peso total de los fragmentos con el número más pequeño, y ‘weight’ es el peso de este fragmento. Por ejemplo, si hay dos fragmentos, y el primero tiene un peso de 9 mientras que el segundo tiene un peso de 10, la fila se enviará al primer fragmento para los restos del rango \[0, 9), y al segundo para los restos del rango \[9, 19). - -La expresión de fragmentación puede ser cualquier expresión de constantes y columnas de tabla que devuelva un entero. 
Por ejemplo, puede usar la expresión ‘rand()’ para la distribución aleatoria de datos, o ‘UserID’ para la distribución por el resto de dividir la ID del usuario (entonces los datos de un solo usuario residirán en un solo fragmento, lo que simplifica la ejecución de IN y JOIN por los usuarios). Si una de las columnas no se distribuye lo suficientemente uniformemente, puede envolverla en una función hash: intHash64(UserID) . - -Un simple recordatorio de la división es una solución limitada para sharding y no siempre es apropiado. Funciona para volúmenes medianos y grandes de datos (docenas de servidores), pero no para volúmenes muy grandes de datos (cientos de servidores o más). En este último caso, use el esquema de fragmentación requerido por el área asunto, en lugar de usar entradas en Tablas distribuidas. - -SELECT queries are sent to all the shards and work regardless of how data is distributed across the shards (they can be distributed completely randomly). When you add a new shard, you don't have to transfer the old data to it. You can write new data with a heavier weight – the data will be distributed slightly unevenly, but queries will work correctly and efficiently. - -Debería preocuparse por el esquema de fragmentación en los siguientes casos: - -- Se utilizan consultas que requieren unir datos (IN o JOIN) mediante una clave específica. Si esta clave fragmenta datos, puede usar IN local o JOIN en lugar de GLOBAL IN o GLOBAL JOIN, que es mucho más eficiente. -- Se usa una gran cantidad de servidores (cientos o más) con una gran cantidad de consultas pequeñas (consultas de clientes individuales: sitios web, anunciantes o socios). Para que las pequeñas consultas no afecten a todo el clúster, tiene sentido ubicar datos para un solo cliente en un solo fragmento. Alternativamente, como lo hemos hecho en Yandex.Metrica, puede configurar sharding de dos niveles: divida todo el clúster en “layers”, donde una capa puede consistir en varios fragmentos. Los datos de un único cliente se encuentran en una sola capa, pero los fragmentos se pueden agregar a una capa según sea necesario y los datos se distribuyen aleatoriamente dentro de ellos. Las tablas distribuidas se crean para cada capa y se crea una única tabla distribuida compartida para consultas globales. - -Los datos se escriben de forma asíncrona. Cuando se inserta en la tabla, el bloque de datos se acaba de escribir en el sistema de archivos local. Los datos se envían a los servidores remotos en segundo plano tan pronto como sea posible. El período de envío de datos está gestionado por el [Distributed_directory_monitor_sleep_time_ms](../../../operations/settings/settings.md#distributed_directory_monitor_sleep_time_ms) y [Distributed_directory_monitor_max_sleep_time_ms](../../../operations/settings/settings.md#distributed_directory_monitor_max_sleep_time_ms) configuración. El `Distributed` el motor envía cada archivo con datos insertados por separado, pero puede habilitar el envío por lotes de archivos [distributed_directory_monitor_batch_inserts](../../../operations/settings/settings.md#distributed_directory_monitor_batch_inserts) configuración. Esta configuración mejora el rendimiento del clúster al utilizar mejor los recursos de red y servidor local. Debe comprobar si los datos se envían correctamente comprobando la lista de archivos (datos en espera de ser enviados) en el directorio de la tabla: `/var/lib/clickhouse/data/database/table/`. 
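-
-If data must not wait for the background send, it can also be flushed to the remote servers explicitly; a sketch, assuming the hypothetical `hits_all` Distributed table from the earlier example:
-
-``` sql
-SYSTEM FLUSH DISTRIBUTED hits_all
-```
-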
- -Si el servidor dejó de existir o tuvo un reinicio aproximado (por ejemplo, después de un error de dispositivo) después de un INSERT en una tabla distribuida, es posible que se pierdan los datos insertados. Si se detecta un elemento de datos dañado en el directorio de la tabla, se transfiere al ‘broken’ subdirectorio y ya no se utiliza. - -Cuando la opción max_parallel_replicas está habilitada, el procesamiento de consultas se paralela en todas las réplicas dentro de un solo fragmento. Para obtener más información, consulte la sección [max_parallel_replicas](../../../operations/settings/settings.md#settings-max_parallel_replicas). - -## Virtual Columnas {#virtual-columns} - -- `_shard_num` — Contains the `shard_num` (de `system.clusters`). Tipo: [UInt32](../../../sql-reference/data-types/int-uint.md). - -!!! note "Nota" - Ya [`remote`](../../../sql-reference/table-functions/remote.md)/`cluster` funciones de tabla crean internamente instancia temporal del mismo motor distribuido, `_shard_num` está disponible allí también. - -**Ver también** - -- [Virtual columnas](index.md#table_engines-virtual_columns) - -[Artículo Original](https://clickhouse.tech/docs/en/operations/table_engines/distributed/) diff --git a/docs/es/engines/table-engines/special/external-data.md b/docs/es/engines/table-engines/special/external-data.md deleted file mode 100644 index f2ce4abbb0f..00000000000 --- a/docs/es/engines/table-engines/special/external-data.md +++ /dev/null @@ -1,68 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 34 -toc_title: Datos externos ---- - -# Datos externos para el procesamiento de consultas {#external-data-for-query-processing} - -ClickHouse permite enviar a un servidor los datos necesarios para procesar una consulta, junto con una consulta SELECT. Estos datos se colocan en una tabla temporal (consulte la sección “Temporary tables”) y se puede utilizar en la consulta (por ejemplo, en operadores IN). - -Por ejemplo, si tiene un archivo de texto con identificadores de usuario importantes, puede cargarlo en el servidor junto con una consulta que utilice la filtración de esta lista. - -Si necesita ejecutar más de una consulta con un gran volumen de datos externos, no utilice esta función. Es mejor cargar los datos a la base de datos con anticipación. - -Los datos externos se pueden cargar mediante el cliente de línea de comandos (en modo no interactivo) o mediante la interfaz HTTP. - -En el cliente de línea de comandos, puede especificar una sección de parámetros en el formato - -``` bash ---external --file=... [--name=...] [--format=...] [--types=...|--structure=...] -``` - -Puede tener varias secciones como esta, para el número de tablas que se transmiten. - -**–external** – Marks the beginning of a clause. -**–file** – Path to the file with the table dump, or -, which refers to stdin. -Solo se puede recuperar una sola tabla de stdin. - -Los siguientes parámetros son opcionales: **–name**– Name of the table. If omitted, _data is used. -**–format** – Data format in the file. If omitted, TabSeparated is used. - -Se requiere uno de los siguientes parámetros:**–types** – A list of comma-separated column types. For example: `UInt64,String`. The columns will be named _1, _2, … -**–structure**– The table structure in the format`UserID UInt64`, `URL String`. Define los nombres y tipos de columna. 
- -Los archivos especificados en ‘file’ se analizará mediante el formato especificado en ‘format’ utilizando los tipos de datos especificados en ‘types’ o ‘structure’. La mesa será cargado en el servidor y accesibles, como una tabla temporal con el nombre de ‘name’. - -Ejemplos: - -``` bash -$ echo -ne "1\n2\n3\n" | clickhouse-client --query="SELECT count() FROM test.visits WHERE TraficSourceID IN _data" --external --file=- --types=Int8 -849897 -$ cat /etc/passwd | sed 's/:/\t/g' | clickhouse-client --query="SELECT shell, count() AS c FROM passwd GROUP BY shell ORDER BY c DESC" --external --file=- --name=passwd --structure='login String, unused String, uid UInt16, gid UInt16, comment String, home String, shell String' -/bin/sh 20 -/bin/false 5 -/bin/bash 4 -/usr/sbin/nologin 1 -/bin/sync 1 -``` - -Cuando se utiliza la interfaz HTTP, los datos externos se pasan en el formato multipart/form-data. Cada tabla se transmite como un archivo separado. El nombre de la tabla se toma del nombre del archivo. El ‘query_string’ se pasa los parámetros ‘name_format’, ‘name_types’, y ‘name_structure’, donde ‘name’ es el nombre de la tabla a la que corresponden estos parámetros. El significado de los parámetros es el mismo que cuando se usa el cliente de línea de comandos. - -Ejemplo: - -``` bash -$ cat /etc/passwd | sed 's/:/\t/g' > passwd.tsv - -$ curl -F 'passwd=@passwd.tsv;' 'http://localhost:8123/?query=SELECT+shell,+count()+AS+c+FROM+passwd+GROUP+BY+shell+ORDER+BY+c+DESC&passwd_structure=login+String,+unused+String,+uid+UInt16,+gid+UInt16,+comment+String,+home+String,+shell+String' -/bin/sh 20 -/bin/false 5 -/bin/bash 4 -/usr/sbin/nologin 1 -/bin/sync 1 -``` - -Para el procesamiento de consultas distribuidas, las tablas temporales se envían a todos los servidores remotos. - -[Artículo Original](https://clickhouse.tech/docs/en/operations/table_engines/external_data/) diff --git a/docs/es/engines/table-engines/special/file.md b/docs/es/engines/table-engines/special/file.md deleted file mode 100644 index fb739506a22..00000000000 --- a/docs/es/engines/table-engines/special/file.md +++ /dev/null @@ -1,90 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 37 -toc_title: File ---- - -# File {#table_engines-file} - -El motor de tabla de archivos mantiene los datos en un archivo en uno de los [file -formato](../../../interfaces/formats.md#formats) (TabSeparated, Native, etc.). - -Ejemplos de uso: - -- Exportación de datos de ClickHouse a archivo. -- Convertir datos de un formato a otro. -- Actualización de datos en ClickHouse mediante la edición de un archivo en un disco. - -## Uso en el servidor ClickHouse {#usage-in-clickhouse-server} - -``` sql -File(Format) -``` - -El `Format` parámetro especifica uno de los formatos de archivo disponibles. Realizar -`SELECT` consultas, el formato debe ser compatible para la entrada, y para realizar -`INSERT` queries – for output. The available formats are listed in the -[Formato](../../../interfaces/formats.md#formats) apartado. - -ClickHouse no permite especificar la ruta del sistema de archivos para`File`. Utilizará la carpeta definida por [camino](../../../operations/server-configuration-parameters/settings.md) configuración en la configuración del servidor. - -Al crear una tabla usando `File(Format)` crea un subdirectorio vacío en esa carpeta. Cuando los datos se escriben en esa tabla, se colocan en `data.Format` en ese subdirectorio. 
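-
-For instance, inserting through the engine creates that file automatically; a sketch with a hypothetical table name (the path assumes the default data directory):
-
-``` sql
-CREATE TABLE csv_export (name String, value UInt32) ENGINE = File(CSV);
-INSERT INTO csv_export VALUES ('one', 1);
--- the rows now live in /var/lib/clickhouse/data/default/csv_export/data.CSV
-```
-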
- -Puede crear manualmente esta subcarpeta y archivo en el sistema de archivos del servidor y luego [ATTACH](../../../sql-reference/statements/misc.md) para mostrar información con el nombre coincidente, para que pueda consultar datos desde ese archivo. - -!!! warning "Advertencia" - Tenga cuidado con esta funcionalidad, ya que ClickHouse no realiza un seguimiento de los cambios externos en dichos archivos. El resultado de las escrituras simultáneas a través de ClickHouse y fuera de ClickHouse no está definido. - -**Ejemplo:** - -**1.** Configurar el `file_engine_table` tabla: - -``` sql -CREATE TABLE file_engine_table (name String, value UInt32) ENGINE=File(TabSeparated) -``` - -Por defecto, ClickHouse creará una carpeta `/var/lib/clickhouse/data/default/file_engine_table`. - -**2.** Crear manualmente `/var/lib/clickhouse/data/default/file_engine_table/data.TabSeparated` contener: - -``` bash -$ cat data.TabSeparated -one 1 -two 2 -``` - -**3.** Consultar los datos: - -``` sql -SELECT * FROM file_engine_table -``` - -``` text -┌─name─┬─value─┐ -│ one │ 1 │ -│ two │ 2 │ -└──────┴───────┘ -``` - -## Uso en ClickHouse-local {#usage-in-clickhouse-local} - -En [Sistema abierto.](../../../operations/utilities/clickhouse-local.md#clickhouse-local) El motor de archivos acepta la ruta del archivo además de `Format`. Los flujos de entrada / salida predeterminados se pueden especificar utilizando nombres numéricos o legibles por humanos como `0` o `stdin`, `1` o `stdout`. -**Ejemplo:** - -``` bash -$ echo -e "1,2\n3,4" | clickhouse-local -q "CREATE TABLE table (a Int64, b Int64) ENGINE = File(CSV, stdin); SELECT a, b FROM table; DROP TABLE table" -``` - -## Detalles de la implementación {#details-of-implementation} - -- Multiple `SELECT` las consultas se pueden realizar simultáneamente, pero `INSERT` las consultas se esperarán entre sí. -- Apoyado la creación de nuevos archivos por `INSERT` consulta. -- Si el archivo existe, `INSERT` añadiría nuevos valores en él. -- No soportado: - - `ALTER` - - `SELECT ... SAMPLE` - - Indice - - Replicación - -[Artículo Original](https://clickhouse.tech/docs/en/operations/table_engines/file/) diff --git a/docs/es/engines/table-engines/special/generate.md b/docs/es/engines/table-engines/special/generate.md deleted file mode 100644 index 67e664284b4..00000000000 --- a/docs/es/engines/table-engines/special/generate.md +++ /dev/null @@ -1,61 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 46 -toc_title: GenerateRandom ---- - -# Generaterandom {#table_engines-generate} - -El motor de tabla GenerateRandom produce datos aleatorios para el esquema de tabla determinado. - -Ejemplos de uso: - -- Se usa en la prueba para poblar una tabla grande reproducible. -- Generar entrada aleatoria para pruebas de fuzzing. - -## Uso en el servidor ClickHouse {#usage-in-clickhouse-server} - -``` sql -ENGINE = GenerateRandom(random_seed, max_string_length, max_array_length) -``` - -El `max_array_length` y `max_string_length` parámetros especifican la longitud máxima de todos -columnas y cadenas de matriz correspondientemente en los datos generados. - -Generar motor de tabla sólo admite `SELECT` consulta. - -Es compatible con todos [Tipos de datos](../../../sql-reference/data-types/index.md) que se pueden almacenar en una tabla excepto `LowCardinality` y `AggregateFunction`. 
- -**Ejemplo:** - -**1.** Configurar el `generate_engine_table` tabla: - -``` sql -CREATE TABLE generate_engine_table (name String, value UInt32) ENGINE = GenerateRandom(1, 5, 3) -``` - -**2.** Consultar los datos: - -``` sql -SELECT * FROM generate_engine_table LIMIT 3 -``` - -``` text -┌─name─┬──────value─┐ -│ c4xJ │ 1412771199 │ -│ r │ 1791099446 │ -│ 7#$ │ 124312908 │ -└──────┴────────────┘ -``` - -## Detalles de la implementación {#details-of-implementation} - -- No soportado: - - `ALTER` - - `SELECT ... SAMPLE` - - `INSERT` - - Indice - - Replicación - -[Artículo Original](https://clickhouse.tech/docs/en/operations/table_engines/generate/) diff --git a/docs/es/engines/table-engines/special/index.md b/docs/es/engines/table-engines/special/index.md deleted file mode 100644 index 9927a1f61d9..00000000000 --- a/docs/es/engines/table-engines/special/index.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_folder_title: Especial -toc_priority: 31 ---- - - diff --git a/docs/es/engines/table-engines/special/join.md b/docs/es/engines/table-engines/special/join.md deleted file mode 100644 index 83e21b7c8cc..00000000000 --- a/docs/es/engines/table-engines/special/join.md +++ /dev/null @@ -1,111 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 40 -toc_title: Unir ---- - -# Unir {#join} - -Estructura de datos preparada para usar en [JOIN](../../../sql-reference/statements/select/join.md#select-join) operación. - -## Creación de una tabla {#creating-a-table} - -``` sql -CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster] -( - name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1] [TTL expr1], - name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2] [TTL expr2], -) ENGINE = Join(join_strictness, join_type, k1[, k2, ...]) -``` - -Vea la descripción detallada del [CREATE TABLE](../../../sql-reference/statements/create.md#create-table-query) consulta. - -**Parámetros del motor** - -- `join_strictness` – [ÚNETE a la rigurosidad](../../../sql-reference/statements/select/join.md#select-join-types). -- `join_type` – [Tipo de unión](../../../sql-reference/statements/select/join.md#select-join-types). -- `k1[, k2, ...]` – Key columns from the `USING` cláusula que el `JOIN` operación se hace con. - -Entrar `join_strictness` y `join_type` parámetros sin comillas, por ejemplo, `Join(ANY, LEFT, col1)`. Deben coincidir con el `JOIN` operación para la que se utilizará la tabla. Si los parámetros no coinciden, ClickHouse no lanza una excepción y puede devolver datos incorrectos. 
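-
-For example (a sketch with hypothetical names), a table declared with `Join(ANY, LEFT, k)` should only take part in `ANY LEFT JOIN ... USING (k)` queries; running `ALL LEFT JOIN` against it would not fail, but the result may be wrong:
-
-``` sql
-CREATE TABLE t (k UInt32) ENGINE = TinyLog;
-CREATE TABLE j (k UInt32, v String) ENGINE = Join(ANY, LEFT, k);
--- matches the engine parameters (ANY, LEFT, k):
-SELECT * FROM t ANY LEFT JOIN j USING (k);
-```
-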
- -## Uso de la tabla {#table-usage} - -### Ejemplo {#example} - -Creación de la tabla del lado izquierdo: - -``` sql -CREATE TABLE id_val(`id` UInt32, `val` UInt32) ENGINE = TinyLog -``` - -``` sql -INSERT INTO id_val VALUES (1,11)(2,12)(3,13) -``` - -Creando el lado derecho `Join` tabla: - -``` sql -CREATE TABLE id_val_join(`id` UInt32, `val` UInt8) ENGINE = Join(ANY, LEFT, id) -``` - -``` sql -INSERT INTO id_val_join VALUES (1,21)(1,22)(3,23) -``` - -Unirse a las tablas: - -``` sql -SELECT * FROM id_val ANY LEFT JOIN id_val_join USING (id) SETTINGS join_use_nulls = 1 -``` - -``` text -┌─id─┬─val─┬─id_val_join.val─┐ -│ 1 │ 11 │ 21 │ -│ 2 │ 12 │ ᴺᵁᴸᴸ │ -│ 3 │ 13 │ 23 │ -└────┴─────┴─────────────────┘ -``` - -Como alternativa, puede recuperar datos del `Join` tabla, especificando el valor de la clave de unión: - -``` sql -SELECT joinGet('id_val_join', 'val', toUInt32(1)) -``` - -``` text -┌─joinGet('id_val_join', 'val', toUInt32(1))─┐ -│ 21 │ -└────────────────────────────────────────────┘ -``` - -### Selección e inserción de datos {#selecting-and-inserting-data} - -Usted puede utilizar `INSERT` consultas para agregar datos al `Join`-mesas de motor. Si la tabla se creó con el `ANY` estricta, se ignoran los datos de las claves duplicadas. Con el `ALL` estricta, se agregan todas las filas. - -No se puede realizar una `SELECT` consulta directamente desde la tabla. En su lugar, use uno de los siguientes métodos: - -- Coloque la mesa hacia el lado derecho en un `JOIN` clausula. -- Llame al [joinGet](../../../sql-reference/functions/other-functions.md#joinget) función, que le permite extraer datos de la tabla de la misma manera que de un diccionario. - -### Limitaciones y ajustes {#join-limitations-and-settings} - -Al crear una tabla, se aplican los siguientes valores: - -- [Sistema abierto.](../../../operations/settings/settings.md#join_use_nulls) -- [Método de codificación de datos:](../../../operations/settings/query-complexity.md#settings-max_rows_in_join) -- [Método de codificación de datos:](../../../operations/settings/query-complexity.md#settings-max_bytes_in_join) -- [join_overflow_mode](../../../operations/settings/query-complexity.md#settings-join_overflow_mode) -- [join_any_take_last_row](../../../operations/settings/settings.md#settings-join_any_take_last_row) - -El `Join`-las tablas del motor no se pueden usar en `GLOBAL JOIN` operación. - -El `Join`-motor permite el uso [Sistema abierto.](../../../operations/settings/settings.md#join_use_nulls) ajuste en el `CREATE TABLE` instrucción. Y [SELECT](../../../sql-reference/statements/select/index.md) consulta permite el uso `join_use_nulls` demasiado. Si tienes diferentes `join_use_nulls` configuración, puede obtener un error al unirse a la tabla. Depende del tipo de JOIN. Cuando se utiliza [joinGet](../../../sql-reference/functions/other-functions.md#joinget) función, usted tiene que utilizar el mismo `join_use_nulls` ajuste en `CRATE TABLE` y `SELECT` instrucción. - -## Almacenamiento de datos {#data-storage} - -`Join` datos de la tabla siempre se encuentra en la memoria RAM. Al insertar filas en una tabla, ClickHouse escribe bloques de datos en el directorio del disco para que puedan restaurarse cuando se reinicie el servidor. - -Si el servidor se reinicia incorrectamente, el bloque de datos en el disco puede perderse o dañarse. En este caso, es posible que deba eliminar manualmente el archivo con datos dañados. 
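-
-As a sketch of the `join_use_nulls` consistency requirement from the settings section above (hypothetical table name), the setting can be pinned at creation time so that later `SELECT` queries know what to match:
-
-``` sql
-CREATE TABLE id_val_join_nulls (`id` UInt32, `val` UInt8)
-ENGINE = Join(ANY, LEFT, id)
-SETTINGS join_use_nulls = 1
-```
-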
- -[Artículo Original](https://clickhouse.tech/docs/en/operations/table_engines/join/) diff --git a/docs/es/engines/table-engines/special/materializedview.md b/docs/es/engines/table-engines/special/materializedview.md deleted file mode 100644 index 87e5218eb6a..00000000000 --- a/docs/es/engines/table-engines/special/materializedview.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 43 -toc_title: "M\xE9todo de codificaci\xF3n de datos:" ---- - -# Método de codificación de datos: {#materializedview} - -Se utiliza para implementar vistas materializadas (para obtener más información, consulte [CREATE TABLE](../../../sql-reference/statements/create.md#create-table-query)). Para almacenar datos, utiliza un motor diferente que se especificó al crear la vista. Al leer desde una tabla, solo usa este motor. - -[Artículo Original](https://clickhouse.tech/docs/en/operations/table_engines/materializedview/) diff --git a/docs/es/engines/table-engines/special/memory.md b/docs/es/engines/table-engines/special/memory.md deleted file mode 100644 index 3d4f8ddff54..00000000000 --- a/docs/es/engines/table-engines/special/memory.md +++ /dev/null @@ -1,19 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 44 -toc_title: Memoria ---- - -# Memoria {#memory} - -El motor de memoria almacena datos en RAM, en forma sin comprimir. Los datos se almacenan exactamente en la misma forma en que se reciben cuando se leen. En otras palabras, la lectura de esta tabla es completamente gratuita. -El acceso a los datos simultáneos está sincronizado. Los bloqueos son cortos: las operaciones de lectura y escritura no se bloquean entre sí. -Los índices no son compatibles. La lectura está paralelizada. -La productividad máxima (más de 10 GB/s) se alcanza en consultas simples, porque no hay lectura del disco, descomprimir o deserializar datos. (Cabe señalar que, en muchos casos, la productividad del motor MergeTree es casi tan alta.) -Al reiniciar un servidor, los datos desaparecen de la tabla y la tabla queda vacía. -Normalmente, el uso de este motor de tabla no está justificado. Sin embargo, se puede usar para pruebas y para tareas donde se requiere la velocidad máxima en un número relativamente pequeño de filas (hasta aproximadamente 100,000,000). - -El sistema utiliza el motor de memoria para tablas temporales con datos de consulta externos (consulte la sección “External data for processing a query”), y para la implementación de GLOBAL IN (véase la sección “IN operators”). - -[Artículo Original](https://clickhouse.tech/docs/en/operations/table_engines/memory/) diff --git a/docs/es/engines/table-engines/special/merge.md b/docs/es/engines/table-engines/special/merge.md deleted file mode 100644 index 6ed2c272914..00000000000 --- a/docs/es/engines/table-engines/special/merge.md +++ /dev/null @@ -1,70 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 36 -toc_title: Fusionar ---- - -# Fusionar {#merge} - -El `Merge` motor (no debe confundirse con `MergeTree`) no almacena datos en sí, pero permite leer de cualquier número de otras tablas simultáneamente. -La lectura se paralela automáticamente. No se admite la escritura en una tabla. Al leer, se usan los índices de las tablas que realmente se están leyendo, si existen. 
-El `Merge` engine acepta parámetros: el nombre de la base de datos y una expresión regular para las tablas. - -Ejemplo: - -``` sql -Merge(hits, '^WatchLog') -``` - -Los datos se leerán de las tablas en el `hits` base de datos que tienen nombres que coinciden con la expresión regular ‘`^WatchLog`’. - -En lugar del nombre de la base de datos, puede usar una expresión constante que devuelva una cadena. Por ejemplo, `currentDatabase()`. - -Regular expressions — [Re2](https://github.com/google/re2) (soporta un subconjunto de PCRE), sensible a mayúsculas y minúsculas. -Vea las notas sobre los símbolos de escape en expresiones regulares en el “match” apartado. - -Al seleccionar tablas para leer, el `Merge` no se seleccionará la tabla en sí, incluso si coincide con la expresión regular. Esto es para evitar bucles. -Es posible crear dos `Merge` tablas que intentarán interminablemente leer los datos de los demás, pero esta no es una buena idea. - -La forma típica de usar el `Merge` para trabajar con un gran número de `TinyLog` tablas como si con una sola tabla. - -Ejemplo 2: - -Digamos que tiene una tabla antigua (WatchLog_old) y decidió cambiar la partición sin mover datos a una nueva tabla (WatchLog_new) y necesita ver datos de ambas tablas. - -``` sql -CREATE TABLE WatchLog_old(date Date, UserId Int64, EventType String, Cnt UInt64) -ENGINE=MergeTree(date, (UserId, EventType), 8192); -INSERT INTO WatchLog_old VALUES ('2018-01-01', 1, 'hit', 3); - -CREATE TABLE WatchLog_new(date Date, UserId Int64, EventType String, Cnt UInt64) -ENGINE=MergeTree PARTITION BY date ORDER BY (UserId, EventType) SETTINGS index_granularity=8192; -INSERT INTO WatchLog_new VALUES ('2018-01-02', 2, 'hit', 3); - -CREATE TABLE WatchLog as WatchLog_old ENGINE=Merge(currentDatabase(), '^WatchLog'); - -SELECT * -FROM WatchLog -``` - -``` text -┌───────date─┬─UserId─┬─EventType─┬─Cnt─┐ -│ 2018-01-01 │ 1 │ hit │ 3 │ -└────────────┴────────┴───────────┴─────┘ -┌───────date─┬─UserId─┬─EventType─┬─Cnt─┐ -│ 2018-01-02 │ 2 │ hit │ 3 │ -└────────────┴────────┴───────────┴─────┘ -``` - -## Virtual Columnas {#virtual-columns} - -- `_table` — Contains the name of the table from which data was read. Type: [Cadena](../../../sql-reference/data-types/string.md). - - Puede establecer las condiciones constantes en `_table` en el `WHERE/PREWHERE` cláusula (por ejemplo, `WHERE _table='xyz'`). En este caso, la operación de lectura se realiza sólo para las tablas donde la condición en `_table` está satisfecho, por lo que el `_table` columna actúa como un índice. - -**Ver también** - -- [Virtual columnas](index.md#table_engines-virtual_columns) - -[Artículo Original](https://clickhouse.tech/docs/en/operations/table_engines/merge/) diff --git a/docs/es/engines/table-engines/special/null.md b/docs/es/engines/table-engines/special/null.md deleted file mode 100644 index cc05e7839c9..00000000000 --- a/docs/es/engines/table-engines/special/null.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 38 -toc_title: Nulo ---- - -# Nulo {#null} - -Al escribir en una tabla Null, los datos se ignoran. Al leer desde una tabla Null, la respuesta está vacía. - -Sin embargo, puede crear una vista materializada en una tabla Null. Entonces los datos escritos en la tabla terminarán en la vista. 
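-
-A minimal sketch of that pattern (hypothetical names): rows written to the `Null` table are discarded, but the materialized view still sees them and keeps the aggregates:
-
-``` sql
-CREATE TABLE raw_events (ts DateTime, user UInt64) ENGINE = Null;
-
-CREATE MATERIALIZED VIEW events_per_minute
-ENGINE = SummingMergeTree() ORDER BY minute AS
-SELECT toStartOfMinute(ts) AS minute, count() AS events
-FROM raw_events
-GROUP BY minute;
-
-INSERT INTO raw_events VALUES (now(), 1);
--- SELECT * FROM raw_events returns nothing, while events_per_minute keeps the counts
-```
-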
- -[Artículo Original](https://clickhouse.tech/docs/en/operations/table_engines/null/) diff --git a/docs/es/engines/table-engines/special/set.md b/docs/es/engines/table-engines/special/set.md deleted file mode 100644 index 4ff23202443..00000000000 --- a/docs/es/engines/table-engines/special/set.md +++ /dev/null @@ -1,19 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 39 -toc_title: Establecer ---- - -# Establecer {#set} - -Un conjunto de datos que siempre está en la memoria RAM. Está diseñado para su uso en el lado derecho del operador IN (consulte la sección “IN operators”). - -Puede usar INSERT para insertar datos en la tabla. Se agregarán nuevos elementos al conjunto de datos, mientras que los duplicados se ignorarán. -Pero no puede realizar SELECT desde la tabla. La única forma de recuperar datos es usándolos en la mitad derecha del operador IN. - -Los datos siempre se encuentran en la memoria RAM. Para INSERT, los bloques de datos insertados también se escriben en el directorio de tablas en el disco. Al iniciar el servidor, estos datos se cargan en la RAM. En otras palabras, después de reiniciar, los datos permanecen en su lugar. - -Para un reinicio aproximado del servidor, el bloque de datos en el disco puede perderse o dañarse. En este último caso, es posible que deba eliminar manualmente el archivo con datos dañados. - -[Artículo Original](https://clickhouse.tech/docs/en/operations/table_engines/set/) diff --git a/docs/es/engines/table-engines/special/url.md b/docs/es/engines/table-engines/special/url.md deleted file mode 100644 index 654b8e99a4e..00000000000 --- a/docs/es/engines/table-engines/special/url.md +++ /dev/null @@ -1,82 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 41 -toc_title: URL ---- - -# URL(URL, Formato) {#table_engines-url} - -Administra datos en un servidor HTTP/HTTPS remoto. Este motor es similar -a la [File](file.md) motor. - -## Uso del motor en el servidor ClickHouse {#using-the-engine-in-the-clickhouse-server} - -El `format` debe ser uno que ClickHouse pueda usar en -`SELECT` consultas y, si es necesario, en `INSERTs`. Para obtener la lista completa de formatos admitidos, consulte -[Formato](../../../interfaces/formats.md#formats). - -El `URL` debe ajustarse a la estructura de un localizador uniforme de recursos. La dirección URL especificada debe apuntar a un servidor -que utiliza HTTP o HTTPS. Esto no requiere ningún -encabezados adicionales para obtener una respuesta del servidor. - -`INSERT` y `SELECT` las consultas se transforman en `POST` y `GET` peticiones, -respectivamente. Para el procesamiento `POST` solicitudes, el servidor remoto debe admitir -[Codificación de transferencia fragmentada](https://en.wikipedia.org/wiki/Chunked_transfer_encoding). - -Puede limitar el número máximo de saltos de redirección HTTP GET utilizando el [Nombre de la red inalámbrica (SSID):](../../../operations/settings/settings.md#setting-max_http_get_redirects) configuración. 
- -**Ejemplo:** - -**1.** Crear un `url_engine_table` tabla en el servidor : - -``` sql -CREATE TABLE url_engine_table (word String, value UInt64) -ENGINE=URL('http://127.0.0.1:12345/', CSV) -``` - -**2.** Cree un servidor HTTP básico utilizando las herramientas estándar de Python 3 y -comenzarlo: - -``` python3 -from http.server import BaseHTTPRequestHandler, HTTPServer - -class CSVHTTPServer(BaseHTTPRequestHandler): - def do_GET(self): - self.send_response(200) - self.send_header('Content-type', 'text/csv') - self.end_headers() - - self.wfile.write(bytes('Hello,1\nWorld,2\n', "utf-8")) - -if __name__ == "__main__": - server_address = ('127.0.0.1', 12345) - HTTPServer(server_address, CSVHTTPServer).serve_forever() -``` - -``` bash -$ python3 server.py -``` - -**3.** Solicitar datos: - -``` sql -SELECT * FROM url_engine_table -``` - -``` text -┌─word──┬─value─┐ -│ Hello │ 1 │ -│ World │ 2 │ -└───────┴───────┘ -``` - -## Detalles de la implementación {#details-of-implementation} - -- Las lecturas y escrituras pueden ser paralelas -- No soportado: - - `ALTER` y `SELECT...SAMPLE` operación. - - Índices. - - Replicación. - -[Artículo Original](https://clickhouse.tech/docs/en/operations/table_engines/url/) diff --git a/docs/es/engines/table-engines/special/view.md b/docs/es/engines/table-engines/special/view.md deleted file mode 100644 index dbb496bcca4..00000000000 --- a/docs/es/engines/table-engines/special/view.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 42 -toc_title: Vista ---- - -# Vista {#table_engines-view} - -Se utiliza para implementar vistas (para obtener más información, consulte `CREATE VIEW query`). No almacena datos, pero solo almacena los datos especificados `SELECT` consulta. Al leer desde una tabla, ejecuta esta consulta (y elimina todas las columnas innecesarias de la consulta). - -[Artículo Original](https://clickhouse.tech/docs/en/operations/table_engines/view/) diff --git a/docs/es/faq/general.md b/docs/es/faq/general.md deleted file mode 100644 index f8446e99152..00000000000 --- a/docs/es/faq/general.md +++ /dev/null @@ -1,60 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 78 -toc_title: Preguntas generales ---- - -# Preguntas generales {#general-questions} - -## ¿Por qué no usar algo como MapReduce? {#why-not-use-something-like-mapreduce} - -Podemos referirnos a sistemas como MapReduce como sistemas informáticos distribuidos en los que la operación de reducción se basa en la clasificación distribuida. La solución de código abierto más común en esta clase es [Acerca de nosotros](http://hadoop.apache.org). Yandex utiliza su solución interna, YT. - -Estos sistemas no son apropiados para consultas en línea debido a su alta latencia. En otras palabras, no se pueden usar como back-end para una interfaz web. Estos tipos de sistemas no son útiles para actualizaciones de datos en tiempo real. La clasificación distribuida no es la mejor manera de realizar operaciones de reducción si el resultado de la operación y todos los resultados intermedios (si los hay) se encuentran en la RAM de un único servidor, que generalmente es el caso de las consultas en línea. En tal caso, una tabla hash es una forma óptima de realizar operaciones de reducción. Un enfoque común para optimizar las tareas de reducción de mapas es la preagregación (reducción parcial) utilizando una tabla hash en RAM. 
El usuario realiza esta optimización manualmente. La clasificación distribuida es una de las principales causas de un rendimiento reducido cuando se ejecutan tareas simples de reducción de mapas. - -La mayoría de las implementaciones de MapReduce le permiten ejecutar código arbitrario en un clúster. Pero un lenguaje de consulta declarativo es más adecuado para OLAP para ejecutar experimentos rápidamente. Por ejemplo, Hadoop tiene Hive y Pig. También considere Cloudera Impala o Shark (obsoleto) para Spark, así como Spark SQL, Presto y Apache Drill. El rendimiento cuando se ejecutan tales tareas es muy subóptimo en comparación con los sistemas especializados, pero la latencia relativamente alta hace que sea poco realista utilizar estos sistemas como back-end para una interfaz web. - -## ¿Qué sucede si tengo un problema con las codificaciones al usar Oracle a través de ODBC? {#oracle-odbc-encodings} - -Si utiliza Oracle a través del controlador ODBC como fuente de diccionarios externos, debe establecer el valor `NLS_LANG` variable de entorno en `/etc/default/clickhouse`. Para obtener más información, consulte [Oracle NLS_LANG Preguntas frecuentes](https://www.oracle.com/technetwork/products/globalization/nls-lang-099431.html). - -**Ejemplo** - -``` sql -NLS_LANG=RUSSIAN_RUSSIA.UTF8 -``` - -## Cómo exporto datos de ClickHouse a un archivo? {#how-to-export-to-file} - -### Uso de la cláusula INTO OUTFILE {#using-into-outfile-clause} - -Añadir un [INTO OUTFILE](../sql-reference/statements/select/into-outfile.md#into-outfile-clause) cláusula a su consulta. - -Por ejemplo: - -``` sql -SELECT * FROM table INTO OUTFILE 'file' -``` - -De forma predeterminada, ClickHouse usa el [TabSeparated](../interfaces/formats.md#tabseparated) formato de datos de salida. Para seleccionar el [formato de datos](../interfaces/formats.md), utilizar el [Cláusula FORMAT](../sql-reference/statements/select/format.md#format-clause). - -Por ejemplo: - -``` sql -SELECT * FROM table INTO OUTFILE 'file' FORMAT CSV -``` - -### Uso de una tabla de motor de archivo {#using-a-file-engine-table} - -Ver [File](../engines/table-engines/special/file.md). - -### Uso de la redirección de línea de comandos {#using-command-line-redirection} - -``` sql -$ clickhouse-client --query "SELECT * from table" --format FormatName > result.txt -``` - -Ver [Casa de clics-cliente](../interfaces/cli.md). - -{## [Artículo Original](https://clickhouse.tech/docs/en/faq/general/) ##} diff --git a/docs/es/faq/index.md b/docs/es/faq/index.md deleted file mode 100644 index a44dbb31e89..00000000000 --- a/docs/es/faq/index.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_folder_title: F.A.Q. -toc_priority: 76 ---- - - diff --git a/docs/es/getting-started/example-datasets/amplab-benchmark.md b/docs/es/getting-started/example-datasets/amplab-benchmark.md deleted file mode 100644 index 066bf036266..00000000000 --- a/docs/es/getting-started/example-datasets/amplab-benchmark.md +++ /dev/null @@ -1,129 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 17 -toc_title: Referencia de Big Data de AMPLab ---- - -# Referencia de Big Data de AMPLab {#amplab-big-data-benchmark} - -Ver https://amplab.cs.berkeley.edu/benchmark/ - -Regístrese para obtener una cuenta gratuita en https://aws.amazon.com. Requiere una tarjeta de crédito, correo electrónico y número de teléfono. 
Obtenga una nueva clave de acceso en https://console.aws.amazon.com/iam/home?nc2=h_m_sc#security_credential - -Ejecute lo siguiente en la consola: - -``` bash -$ sudo apt-get install s3cmd -$ mkdir tiny; cd tiny; -$ s3cmd sync s3://big-data-benchmark/pavlo/text-deflate/tiny/ . -$ cd .. -$ mkdir 1node; cd 1node; -$ s3cmd sync s3://big-data-benchmark/pavlo/text-deflate/1node/ . -$ cd .. -$ mkdir 5nodes; cd 5nodes; -$ s3cmd sync s3://big-data-benchmark/pavlo/text-deflate/5nodes/ . -$ cd .. -``` - -Ejecute las siguientes consultas de ClickHouse: - -``` sql -CREATE TABLE rankings_tiny -( - pageURL String, - pageRank UInt32, - avgDuration UInt32 -) ENGINE = Log; - -CREATE TABLE uservisits_tiny -( - sourceIP String, - destinationURL String, - visitDate Date, - adRevenue Float32, - UserAgent String, - cCode FixedString(3), - lCode FixedString(6), - searchWord String, - duration UInt32 -) ENGINE = MergeTree(visitDate, visitDate, 8192); - -CREATE TABLE rankings_1node -( - pageURL String, - pageRank UInt32, - avgDuration UInt32 -) ENGINE = Log; - -CREATE TABLE uservisits_1node -( - sourceIP String, - destinationURL String, - visitDate Date, - adRevenue Float32, - UserAgent String, - cCode FixedString(3), - lCode FixedString(6), - searchWord String, - duration UInt32 -) ENGINE = MergeTree(visitDate, visitDate, 8192); - -CREATE TABLE rankings_5nodes_on_single -( - pageURL String, - pageRank UInt32, - avgDuration UInt32 -) ENGINE = Log; - -CREATE TABLE uservisits_5nodes_on_single -( - sourceIP String, - destinationURL String, - visitDate Date, - adRevenue Float32, - UserAgent String, - cCode FixedString(3), - lCode FixedString(6), - searchWord String, - duration UInt32 -) ENGINE = MergeTree(visitDate, visitDate, 8192); -``` - -Volver a la consola: - -``` bash -$ for i in tiny/rankings/*.deflate; do echo $i; zlib-flate -uncompress < $i | clickhouse-client --host=example-perftest01j --query="INSERT INTO rankings_tiny FORMAT CSV"; done -$ for i in tiny/uservisits/*.deflate; do echo $i; zlib-flate -uncompress < $i | clickhouse-client --host=example-perftest01j --query="INSERT INTO uservisits_tiny FORMAT CSV"; done -$ for i in 1node/rankings/*.deflate; do echo $i; zlib-flate -uncompress < $i | clickhouse-client --host=example-perftest01j --query="INSERT INTO rankings_1node FORMAT CSV"; done -$ for i in 1node/uservisits/*.deflate; do echo $i; zlib-flate -uncompress < $i | clickhouse-client --host=example-perftest01j --query="INSERT INTO uservisits_1node FORMAT CSV"; done -$ for i in 5nodes/rankings/*.deflate; do echo $i; zlib-flate -uncompress < $i | clickhouse-client --host=example-perftest01j --query="INSERT INTO rankings_5nodes_on_single FORMAT CSV"; done -$ for i in 5nodes/uservisits/*.deflate; do echo $i; zlib-flate -uncompress < $i | clickhouse-client --host=example-perftest01j --query="INSERT INTO uservisits_5nodes_on_single FORMAT CSV"; done -``` - -Consultas para obtener muestras de datos: - -``` sql -SELECT pageURL, pageRank FROM rankings_1node WHERE pageRank > 1000 - -SELECT substring(sourceIP, 1, 8), sum(adRevenue) FROM uservisits_1node GROUP BY substring(sourceIP, 1, 8) - -SELECT - sourceIP, - sum(adRevenue) AS totalRevenue, - avg(pageRank) AS pageRank -FROM rankings_1node ALL INNER JOIN -( - SELECT - sourceIP, - destinationURL AS pageURL, - adRevenue - FROM uservisits_1node - WHERE (visitDate > '1980-01-01') AND (visitDate < '1980-04-01') -) USING pageURL -GROUP BY sourceIP -ORDER BY totalRevenue DESC -LIMIT 1 -``` - -[Artículo 
Original](https://clickhouse.tech/docs/en/getting_started/example_datasets/amplab_benchmark/) diff --git a/docs/es/getting-started/example-datasets/criteo.md b/docs/es/getting-started/example-datasets/criteo.md deleted file mode 100644 index 79203b0276d..00000000000 --- a/docs/es/getting-started/example-datasets/criteo.md +++ /dev/null @@ -1,81 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 19 -toc_title: Registros de clics de Terabyte de Criteo ---- - -# Terabyte de registros de clics de Criteo {#terabyte-of-click-logs-from-criteo} - -Descargue los datos de http://labs.criteo.com/downloads/download-terabyte-click-logs/ - -Cree una tabla para importar el registro: - -``` sql -CREATE TABLE criteo_log (date Date, clicked UInt8, int1 Int32, int2 Int32, int3 Int32, int4 Int32, int5 Int32, int6 Int32, int7 Int32, int8 Int32, int9 Int32, int10 Int32, int11 Int32, int12 Int32, int13 Int32, cat1 String, cat2 String, cat3 String, cat4 String, cat5 String, cat6 String, cat7 String, cat8 String, cat9 String, cat10 String, cat11 String, cat12 String, cat13 String, cat14 String, cat15 String, cat16 String, cat17 String, cat18 String, cat19 String, cat20 String, cat21 String, cat22 String, cat23 String, cat24 String, cat25 String, cat26 String) ENGINE = Log -``` - -Descargar los datos: - -``` bash -$ for i in {00..23}; do echo $i; zcat datasets/criteo/day_${i#0}.gz | sed -r 's/^/2000-01-'${i/00/24}'\t/' | clickhouse-client --host=example-perftest01j --query="INSERT INTO criteo_log FORMAT TabSeparated"; done -``` - -Crear una tabla para los datos convertidos: - -``` sql -CREATE TABLE criteo -( - date Date, - clicked UInt8, - int1 Int32, - int2 Int32, - int3 Int32, - int4 Int32, - int5 Int32, - int6 Int32, - int7 Int32, - int8 Int32, - int9 Int32, - int10 Int32, - int11 Int32, - int12 Int32, - int13 Int32, - icat1 UInt32, - icat2 UInt32, - icat3 UInt32, - icat4 UInt32, - icat5 UInt32, - icat6 UInt32, - icat7 UInt32, - icat8 UInt32, - icat9 UInt32, - icat10 UInt32, - icat11 UInt32, - icat12 UInt32, - icat13 UInt32, - icat14 UInt32, - icat15 UInt32, - icat16 UInt32, - icat17 UInt32, - icat18 UInt32, - icat19 UInt32, - icat20 UInt32, - icat21 UInt32, - icat22 UInt32, - icat23 UInt32, - icat24 UInt32, - icat25 UInt32, - icat26 UInt32 -) ENGINE = MergeTree(date, intHash32(icat1), (date, intHash32(icat1)), 8192) -``` - -Transforme los datos del registro sin procesar y colóquelos en la segunda tabla: - -``` sql -INSERT INTO criteo SELECT date, clicked, int1, int2, int3, int4, int5, int6, int7, int8, int9, int10, int11, int12, int13, reinterpretAsUInt32(unhex(cat1)) AS icat1, reinterpretAsUInt32(unhex(cat2)) AS icat2, reinterpretAsUInt32(unhex(cat3)) AS icat3, reinterpretAsUInt32(unhex(cat4)) AS icat4, reinterpretAsUInt32(unhex(cat5)) AS icat5, reinterpretAsUInt32(unhex(cat6)) AS icat6, reinterpretAsUInt32(unhex(cat7)) AS icat7, reinterpretAsUInt32(unhex(cat8)) AS icat8, reinterpretAsUInt32(unhex(cat9)) AS icat9, reinterpretAsUInt32(unhex(cat10)) AS icat10, reinterpretAsUInt32(unhex(cat11)) AS icat11, reinterpretAsUInt32(unhex(cat12)) AS icat12, reinterpretAsUInt32(unhex(cat13)) AS icat13, reinterpretAsUInt32(unhex(cat14)) AS icat14, reinterpretAsUInt32(unhex(cat15)) AS icat15, reinterpretAsUInt32(unhex(cat16)) AS icat16, reinterpretAsUInt32(unhex(cat17)) AS icat17, reinterpretAsUInt32(unhex(cat18)) AS icat18, reinterpretAsUInt32(unhex(cat19)) AS icat19, reinterpretAsUInt32(unhex(cat20)) AS icat20, reinterpretAsUInt32(unhex(cat21)) AS 
icat21, reinterpretAsUInt32(unhex(cat22)) AS icat22, reinterpretAsUInt32(unhex(cat23)) AS icat23, reinterpretAsUInt32(unhex(cat24)) AS icat24, reinterpretAsUInt32(unhex(cat25)) AS icat25, reinterpretAsUInt32(unhex(cat26)) AS icat26 FROM criteo_log; - -DROP TABLE criteo_log; -``` - -[Original article](https://clickhouse.tech/docs/en/getting_started/example_datasets/criteo/) diff --git a/docs/es/getting-started/example-datasets/index.md b/docs/es/getting-started/example-datasets/index.md deleted file mode 100644 index 28e06987af1..00000000000 --- a/docs/es/getting-started/example-datasets/index.md +++ /dev/null @@ -1,22 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_folder_title: Example Datasets -toc_priority: 12 -toc_title: Introduction --- - -# Example Datasets {#example-datasets} - -This section describes how to obtain example datasets and import them into ClickHouse. -For some datasets, example queries are also available. - -- [Anonymized Yandex.Metrica Dataset](metrica.md) -- [Star Schema Benchmark](star-schema.md) -- [WikiStat](wikistat.md) -- [Terabyte of Click Logs from Criteo](criteo.md) -- [AMPLab Big Data Benchmark](amplab-benchmark.md) -- [New York Taxi Data](nyc-taxi.md) -- [OnTime](ontime.md) - -[Original article](https://clickhouse.tech/docs/en/getting_started/example_datasets) diff --git a/docs/es/getting-started/example-datasets/metrica.md b/docs/es/getting-started/example-datasets/metrica.md deleted file mode 100644 index 0b3bc8b6833..00000000000 --- a/docs/es/getting-started/example-datasets/metrica.md +++ /dev/null @@ -1,70 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 14 -toc_title: Yandex.Metrica Data --- - -# Anonymized Yandex.Metrica Data {#anonymized-yandex-metrica-data} - -The dataset consists of two tables containing anonymized data about hits (`hits_v1`) and visits (`visits_v1`) of Yandex.Metrica. You can read more about Yandex.Metrica in the [ClickHouse history](../../introduction/history.md) section. - -The dataset consists of two tables; either of them can be downloaded as a compressed `tsv.xz` file or as prepared partitions. In addition, an extended version of the `hits` table containing 100 million rows is available as TSV at https://datasets.clickhouse.tech/hits/tsv/hits_100m_obfuscated_v1.tsv.xz and as prepared partitions at https://datasets.clickhouse.tech/hits/partitions/hits_100m_obfuscated_v1.tar.xz.
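For orientation, here is a hedged sketch of fetching that extended 100-million-row variant with the same streaming pattern used for `hits_v1` below (the URL is the one quoted in the paragraph above; a matching table schema still has to be created before importing):

``` bash
# Sketch only: stream-decompress the extended 100M-row hits table.
# Assumes unxz and nproc are available, as in the imports shown below.
curl https://datasets.clickhouse.tech/hits/tsv/hits_100m_obfuscated_v1.tsv.xz | unxz --threads=`nproc` > hits_100m_obfuscated_v1.tsv
```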
- -## Obtención de tablas a partir de particiones preparadas {#obtaining-tables-from-prepared-partitions} - -Descargar e importar tabla de hits: - -``` bash -curl -O https://datasets.clickhouse.tech/hits/partitions/hits_v1.tar -tar xvf hits_v1.tar -C /var/lib/clickhouse # path to ClickHouse data directory -# check permissions on unpacked data, fix if required -sudo service clickhouse-server restart -clickhouse-client --query "SELECT COUNT(*) FROM datasets.hits_v1" -``` - -Descargar e importar visitas: - -``` bash -curl -O https://datasets.clickhouse.tech/visits/partitions/visits_v1.tar -tar xvf visits_v1.tar -C /var/lib/clickhouse # path to ClickHouse data directory -# check permissions on unpacked data, fix if required -sudo service clickhouse-server restart -clickhouse-client --query "SELECT COUNT(*) FROM datasets.visits_v1" -``` - -## Obtención de tablas a partir de un archivo TSV comprimido {#obtaining-tables-from-compressed-tsv-file} - -Descargar e importar hits desde un archivo TSV comprimido: - -``` bash -curl https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv -# now create table -clickhouse-client --query "CREATE DATABASE IF NOT EXISTS datasets" -clickhouse-client --query "CREATE TABLE datasets.hits_v1 ( WatchID UInt64, JavaEnable UInt8, Title String, GoodEvent Int16, EventTime DateTime, EventDate Date, CounterID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RegionID UInt32, UserID UInt64, CounterClass Int8, OS UInt8, UserAgent UInt8, URL String, Referer String, URLDomain String, RefererDomain String, Refresh UInt8, IsRobot UInt8, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), ResolutionWidth UInt16, ResolutionHeight UInt16, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, FlashMinor2 String, NetMajor UInt8, NetMinor UInt8, UserAgentMajor UInt16, UserAgentMinor FixedString(2), CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, MobilePhone UInt8, MobilePhoneModel String, Params String, IPNetworkID UInt32, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, IsArtifical UInt8, WindowClientWidth UInt16, WindowClientHeight UInt16, ClientTimeZone Int16, ClientEventTime DateTime, SilverlightVersion1 UInt8, SilverlightVersion2 UInt8, SilverlightVersion3 UInt32, SilverlightVersion4 UInt16, PageCharset String, CodeVersion UInt32, IsLink UInt8, IsDownload UInt8, IsNotBounce UInt8, FUniqID UInt64, HID UInt32, IsOldCounter UInt8, IsEvent UInt8, IsParameter UInt8, DontCountHits UInt8, WithHash UInt8, HitColor FixedString(1), UTCEventTime DateTime, Age UInt8, Sex UInt8, Income UInt8, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), RemoteIP UInt32, RemoteIP6 FixedString(16), WindowName Int32, OpenerName Int32, HistoryLength Int16, BrowserLanguage FixedString(2), BrowserCountry FixedString(2), SocialNetwork String, SocialAction String, HTTPError UInt16, SendTiming Int32, DNSTiming Int32, ConnectTiming Int32, ResponseStartTiming Int32, ResponseEndTiming Int32, FetchTiming Int32, RedirectTiming Int32, DOMInteractiveTiming Int32, DOMContentLoadedTiming Int32, DOMCompleteTiming Int32, LoadEventStartTiming Int32, LoadEventEndTiming Int32, NSToDOMContentLoadedTiming Int32, FirstPaintTiming Int32, RedirectCount Int8, SocialSourceNetworkID UInt8, SocialSourcePage String, ParamPrice Int64, ParamOrderID String, ParamCurrency FixedString(3), ParamCurrencyID UInt16, GoalsReached Array(UInt32), OpenstatServiceName String, 
OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, RefererHash UInt64, URLHash UInt64, CLID UInt32, YCLID UInt64, ShareService String, ShareURL String, ShareTitle String, ParsedParams Nested(Key1 String, Key2 String, Key3 String, Key4 String, Key5 String, ValueDouble Float64), IslandID FixedString(16), RequestNum UInt32, RequestTry UInt8) ENGINE = MergeTree() PARTITION BY toYYYYMM(EventDate) ORDER BY (CounterID, EventDate, intHash32(UserID)) SAMPLE BY intHash32(UserID) SETTINGS index_granularity = 8192" -# import data -cat hits_v1.tsv | clickhouse-client --query "INSERT INTO datasets.hits_v1 FORMAT TSV" --max_insert_block_size=100000 -# optionally you can optimize table -clickhouse-client --query "OPTIMIZE TABLE datasets.hits_v1 FINAL" -clickhouse-client --query "SELECT COUNT(*) FROM datasets.hits_v1" -``` - -Descargue e importe visitas desde un archivo tsv comprimido: - -``` bash -curl https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv -# now create table -clickhouse-client --query "CREATE DATABASE IF NOT EXISTS datasets" -clickhouse-client --query "CREATE TABLE datasets.visits_v1 ( CounterID UInt32, StartDate Date, Sign Int8, IsNew UInt8, VisitID UInt64, UserID UInt64, StartTime DateTime, Duration UInt32, UTCStartTime DateTime, PageViews Int32, Hits Int32, IsBounce UInt8, Referer String, StartURL String, RefererDomain String, StartURLDomain String, EndURL String, LinkURL String, IsDownload UInt8, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, PlaceID Int32, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), IsYandex UInt8, GoalReachesDepth Int32, GoalReachesURL Int32, GoalReachesAny Int32, SocialSourceNetworkID UInt8, SocialSourcePage String, MobilePhoneModel String, ClientEventTime DateTime, RegionID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RemoteIP UInt32, RemoteIP6 FixedString(16), IPNetworkID UInt32, SilverlightVersion3 UInt32, CodeVersion UInt32, ResolutionWidth UInt16, ResolutionHeight UInt16, UserAgentMajor UInt16, UserAgentMinor UInt16, WindowClientWidth UInt16, WindowClientHeight UInt16, SilverlightVersion2 UInt8, SilverlightVersion4 UInt16, FlashVersion3 UInt16, FlashVersion4 UInt16, ClientTimeZone Int16, OS UInt8, UserAgent UInt8, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, NetMajor UInt8, NetMinor UInt8, MobilePhone UInt8, SilverlightVersion1 UInt8, Age UInt8, Sex UInt8, Income UInt8, JavaEnable UInt8, CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, BrowserLanguage UInt16, BrowserCountry UInt16, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), Params Array(String), Goals Nested(ID UInt32, Serial UInt32, EventTime DateTime, Price Int64, OrderID String, CurrencyID UInt32), WatchIDs Array(UInt64), ParamSumPrice Int64, ParamCurrency FixedString(3), ParamCurrencyID UInt16, ClickLogID UInt64, ClickEventID Int32, ClickGoodEvent Int32, ClickEventTime DateTime, ClickPriorityID Int32, ClickPhraseID Int32, ClickPageID Int32, ClickPlaceID Int32, ClickTypeID Int32, ClickResourceID Int32, ClickCost UInt32, ClickClientIP UInt32, ClickDomainID UInt32, ClickURL String, ClickAttempt UInt8, ClickOrderID UInt32, ClickBannerID UInt32, ClickMarketCategoryID UInt32, ClickMarketPP UInt32, ClickMarketCategoryName String, ClickMarketPPName String, 
ClickAWAPSCampaignName String, ClickPageName String, ClickTargetType UInt16, ClickTargetPhraseID UInt64, ClickContextType UInt8, ClickSelectType Int8, ClickOptions String, ClickGroupBannerID Int32, OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, FirstVisit DateTime, PredLastVisit Date, LastVisit Date, TotalVisits UInt32, TraficSource Nested(ID Int8, SearchEngineID UInt16, AdvEngineID UInt8, PlaceID UInt16, SocialSourceNetworkID UInt8, Domain String, SearchPhrase String, SocialSourcePage String), Attendance FixedString(16), CLID UInt32, YCLID UInt64, NormalizedRefererHash UInt64, SearchPhraseHash UInt64, RefererDomainHash UInt64, NormalizedStartURLHash UInt64, StartURLDomainHash UInt64, NormalizedEndURLHash UInt64, TopLevelDomain UInt64, URLScheme UInt64, OpenstatServiceNameHash UInt64, OpenstatCampaignIDHash UInt64, OpenstatAdIDHash UInt64, OpenstatSourceIDHash UInt64, UTMSourceHash UInt64, UTMMediumHash UInt64, UTMCampaignHash UInt64, UTMContentHash UInt64, UTMTermHash UInt64, FromHash UInt64, WebVisorEnabled UInt8, WebVisorActivity UInt32, ParsedParams Nested(Key1 String, Key2 String, Key3 String, Key4 String, Key5 String, ValueDouble Float64), Market Nested(Type UInt8, GoalID UInt32, OrderID String, OrderPrice Int64, PP UInt32, DirectPlaceID UInt32, DirectOrderID UInt32, DirectBannerID UInt32, GoodID String, GoodName String, GoodQuantity Int32, GoodPrice Int64), IslandID FixedString(16)) ENGINE = CollapsingMergeTree(Sign) PARTITION BY toYYYYMM(StartDate) ORDER BY (CounterID, StartDate, intHash32(UserID), VisitID) SAMPLE BY intHash32(UserID) SETTINGS index_granularity = 8192" -# import data -cat visits_v1.tsv | clickhouse-client --query "INSERT INTO datasets.visits_v1 FORMAT TSV" --max_insert_block_size=100000 -# optionally you can optimize table -clickhouse-client --query "OPTIMIZE TABLE datasets.visits_v1 FINAL" -clickhouse-client --query "SELECT COUNT(*) FROM datasets.visits_v1" -``` - -## Example Queries {#example-queries} - -The [ClickHouse tutorial](../../getting-started/tutorial.md) is based on the Yandex.Metrica dataset, and the recommended way to get started with this dataset is simply to go through the tutorial. - -Additional examples of queries to these tables can be found among the [stateful tests](https://github.com/ClickHouse/ClickHouse/tree/master/tests/queries/1_stateful) of ClickHouse (they are named `test.hits` and `test.visits` there). diff --git a/docs/es/getting-started/example-datasets/nyc-taxi.md b/docs/es/getting-started/example-datasets/nyc-taxi.md deleted file mode 100644 index c6441311c96..00000000000 --- a/docs/es/getting-started/example-datasets/nyc-taxi.md +++ /dev/null @@ -1,390 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 16 -toc_title: New York Taxi Data --- - -# New York Taxi Data {#new-york-taxi-data} - -This dataset can be obtained in two ways: - -- import of raw data -- download of prepared partitions - -## How to import the raw data {#how-to-import-the-raw-data} - -See https://github.com/toddwschneider/nyc-taxi-data and http://tech.marksblogg.com/billion-nyc-taxi-rides-redshift.html for the description of the dataset and instructions for downloading it.
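As a starting point, a hedged sketch of fetching that helper repository (the script layout inside it may change over time, so consult its README; only the clone URL is taken from the text above):

``` bash
# Sketch: fetch the helper repo and inspect its scripts before running anything.
# initialize_database.sh, used further below, lives in this repository.
git clone https://github.com/toddwschneider/nyc-taxi-data.git
cd nyc-taxi-data
ls *.sh
```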
- -Downloading will result in about 227 GB of uncompressed data in CSV files. The download takes about an hour over a 1 Gbit connection (parallel downloading from s3.amazonaws.com recovers at least half of a 1 Gbit channel). -Some of the files might not download fully. Check the file sizes and re-download any that seem doubtful. - -Some of the files might contain invalid rows. You can fix them as follows: - -``` bash -sed -E '/(.*,){18,}/d' data/yellow_tripdata_2010-02.csv > data/yellow_tripdata_2010-02.csv_ -sed -E '/(.*,){18,}/d' data/yellow_tripdata_2010-03.csv > data/yellow_tripdata_2010-03.csv_ -mv data/yellow_tripdata_2010-02.csv_ data/yellow_tripdata_2010-02.csv -mv data/yellow_tripdata_2010-03.csv_ data/yellow_tripdata_2010-03.csv -``` - -Then the data must be pre-processed in PostgreSQL. This creates selections of points in the polygons (to match points on the map with the boroughs of New York City) and combines all the data into a single denormalized flat table by using a JOIN. To do this, you will need to install PostgreSQL with PostGIS support. - -Be careful when running `initialize_database.sh` and manually re-check that all the tables were created correctly. - -It takes about 20-30 minutes to process each month's worth of data in PostgreSQL, for a total of about 48 hours. - -You can check the number of downloaded rows as follows: - -``` bash -$ time psql nyc-taxi-data -c "SELECT count(*) FROM trips;" -## Count - 1298979494 -(1 row) - -real 7m9.164s -``` - -(This is slightly more than the 1.1 billion rows reported by Mark Litwintschik in a series of blog posts.) - -The data in PostgreSQL uses 370 GB of space.
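A hedged way to reproduce that size figure on your own instance (assuming the database is named `nyc-taxi-data`, as in the `psql` invocation above):

``` bash
# Sketch: report the on-disk size of the PostgreSQL database.
psql nyc-taxi-data -c "SELECT pg_size_pretty(pg_database_size('nyc-taxi-data'));"
```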
- -Exportación de los datos de PostgreSQL: - -``` sql -COPY -( - SELECT trips.id, - trips.vendor_id, - trips.pickup_datetime, - trips.dropoff_datetime, - trips.store_and_fwd_flag, - trips.rate_code_id, - trips.pickup_longitude, - trips.pickup_latitude, - trips.dropoff_longitude, - trips.dropoff_latitude, - trips.passenger_count, - trips.trip_distance, - trips.fare_amount, - trips.extra, - trips.mta_tax, - trips.tip_amount, - trips.tolls_amount, - trips.ehail_fee, - trips.improvement_surcharge, - trips.total_amount, - trips.payment_type, - trips.trip_type, - trips.pickup, - trips.dropoff, - - cab_types.type cab_type, - - weather.precipitation_tenths_of_mm rain, - weather.snow_depth_mm, - weather.snowfall_mm, - weather.max_temperature_tenths_degrees_celsius max_temp, - weather.min_temperature_tenths_degrees_celsius min_temp, - weather.average_wind_speed_tenths_of_meters_per_second wind, - - pick_up.gid pickup_nyct2010_gid, - pick_up.ctlabel pickup_ctlabel, - pick_up.borocode pickup_borocode, - pick_up.boroname pickup_boroname, - pick_up.ct2010 pickup_ct2010, - pick_up.boroct2010 pickup_boroct2010, - pick_up.cdeligibil pickup_cdeligibil, - pick_up.ntacode pickup_ntacode, - pick_up.ntaname pickup_ntaname, - pick_up.puma pickup_puma, - - drop_off.gid dropoff_nyct2010_gid, - drop_off.ctlabel dropoff_ctlabel, - drop_off.borocode dropoff_borocode, - drop_off.boroname dropoff_boroname, - drop_off.ct2010 dropoff_ct2010, - drop_off.boroct2010 dropoff_boroct2010, - drop_off.cdeligibil dropoff_cdeligibil, - drop_off.ntacode dropoff_ntacode, - drop_off.ntaname dropoff_ntaname, - drop_off.puma dropoff_puma - FROM trips - LEFT JOIN cab_types - ON trips.cab_type_id = cab_types.id - LEFT JOIN central_park_weather_observations_raw weather - ON weather.date = trips.pickup_datetime::date - LEFT JOIN nyct2010 pick_up - ON pick_up.gid = trips.pickup_nyct2010_gid - LEFT JOIN nyct2010 drop_off - ON drop_off.gid = trips.dropoff_nyct2010_gid -) TO '/opt/milovidov/nyc-taxi-data/trips.tsv'; -``` - -La instantánea de datos se crea a una velocidad de aproximadamente 50 MB por segundo. Al crear la instantánea, PostgreSQL lee desde el disco a una velocidad de aproximadamente 28 MB por segundo. -Esto toma alrededor de 5 horas. El archivo TSV resultante es 590612904969 bytes. 
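Before importing, it is worth confirming that the export really completed; a small sketch (the path is the one used in the `COPY` statement above, and the expected size is the byte count just quoted):

``` bash
# Sketch: verify the exported TSV matches the expected size.
ls -l /opt/milovidov/nyc-taxi-data/trips.tsv   # expect 590612904969 bytes
```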
- -Crear una tabla temporal en ClickHouse: - -``` sql -CREATE TABLE trips -( -trip_id UInt32, -vendor_id String, -pickup_datetime DateTime, -dropoff_datetime Nullable(DateTime), -store_and_fwd_flag Nullable(FixedString(1)), -rate_code_id Nullable(UInt8), -pickup_longitude Nullable(Float64), -pickup_latitude Nullable(Float64), -dropoff_longitude Nullable(Float64), -dropoff_latitude Nullable(Float64), -passenger_count Nullable(UInt8), -trip_distance Nullable(Float64), -fare_amount Nullable(Float32), -extra Nullable(Float32), -mta_tax Nullable(Float32), -tip_amount Nullable(Float32), -tolls_amount Nullable(Float32), -ehail_fee Nullable(Float32), -improvement_surcharge Nullable(Float32), -total_amount Nullable(Float32), -payment_type Nullable(String), -trip_type Nullable(UInt8), -pickup Nullable(String), -dropoff Nullable(String), -cab_type Nullable(String), -precipitation Nullable(UInt8), -snow_depth Nullable(UInt8), -snowfall Nullable(UInt8), -max_temperature Nullable(UInt8), -min_temperature Nullable(UInt8), -average_wind_speed Nullable(UInt8), -pickup_nyct2010_gid Nullable(UInt8), -pickup_ctlabel Nullable(String), -pickup_borocode Nullable(UInt8), -pickup_boroname Nullable(String), -pickup_ct2010 Nullable(String), -pickup_boroct2010 Nullable(String), -pickup_cdeligibil Nullable(FixedString(1)), -pickup_ntacode Nullable(String), -pickup_ntaname Nullable(String), -pickup_puma Nullable(String), -dropoff_nyct2010_gid Nullable(UInt8), -dropoff_ctlabel Nullable(String), -dropoff_borocode Nullable(UInt8), -dropoff_boroname Nullable(String), -dropoff_ct2010 Nullable(String), -dropoff_boroct2010 Nullable(String), -dropoff_cdeligibil Nullable(String), -dropoff_ntacode Nullable(String), -dropoff_ntaname Nullable(String), -dropoff_puma Nullable(String) -) ENGINE = Log; -``` - -Es necesario para convertir campos a tipos de datos más correctos y, si es posible, para eliminar NULL. - -``` bash -$ time clickhouse-client --query="INSERT INTO trips FORMAT TabSeparated" < trips.tsv - -real 75m56.214s -``` - -Los datos se leen a una velocidad de 112-140 Mb / segundo. -La carga de datos en una tabla de tipos de registro en una secuencia tardó 76 minutos. -Los datos de esta tabla utilizan 142 GB. - -(Importar datos directamente desde Postgres también es posible usando `COPY ... TO PROGRAM`.) - -Unfortunately, all the fields associated with the weather (precipitation…average_wind_speed) were filled with NULL. Because of this, we will remove them from the final data set. - -Para empezar, crearemos una tabla en un único servidor. Posteriormente haremos la mesa distribuida. 
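One hedged way to double-check the observation above that the weather columns were filled with NULL before dropping them (using `precipitation` as a representative column; in ClickHouse, `count(expr)` counts only non-NULL values):

``` bash
# Sketch: if the weather columns are entirely NULL, non_null should be 0.
clickhouse-client --query "SELECT count() AS total, count(precipitation) AS non_null FROM trips"
```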
- -Crear y rellenar una tabla de resumen: - -``` sql -CREATE TABLE trips_mergetree -ENGINE = MergeTree(pickup_date, pickup_datetime, 8192) -AS SELECT - -trip_id, -CAST(vendor_id AS Enum8('1' = 1, '2' = 2, 'CMT' = 3, 'VTS' = 4, 'DDS' = 5, 'B02512' = 10, 'B02598' = 11, 'B02617' = 12, 'B02682' = 13, 'B02764' = 14)) AS vendor_id, -toDate(pickup_datetime) AS pickup_date, -ifNull(pickup_datetime, toDateTime(0)) AS pickup_datetime, -toDate(dropoff_datetime) AS dropoff_date, -ifNull(dropoff_datetime, toDateTime(0)) AS dropoff_datetime, -assumeNotNull(store_and_fwd_flag) IN ('Y', '1', '2') AS store_and_fwd_flag, -assumeNotNull(rate_code_id) AS rate_code_id, -assumeNotNull(pickup_longitude) AS pickup_longitude, -assumeNotNull(pickup_latitude) AS pickup_latitude, -assumeNotNull(dropoff_longitude) AS dropoff_longitude, -assumeNotNull(dropoff_latitude) AS dropoff_latitude, -assumeNotNull(passenger_count) AS passenger_count, -assumeNotNull(trip_distance) AS trip_distance, -assumeNotNull(fare_amount) AS fare_amount, -assumeNotNull(extra) AS extra, -assumeNotNull(mta_tax) AS mta_tax, -assumeNotNull(tip_amount) AS tip_amount, -assumeNotNull(tolls_amount) AS tolls_amount, -assumeNotNull(ehail_fee) AS ehail_fee, -assumeNotNull(improvement_surcharge) AS improvement_surcharge, -assumeNotNull(total_amount) AS total_amount, -CAST((assumeNotNull(payment_type) AS pt) IN ('CSH', 'CASH', 'Cash', 'CAS', 'Cas', '1') ? 'CSH' : (pt IN ('CRD', 'Credit', 'Cre', 'CRE', 'CREDIT', '2') ? 'CRE' : (pt IN ('NOC', 'No Charge', 'No', '3') ? 'NOC' : (pt IN ('DIS', 'Dispute', 'Dis', '4') ? 'DIS' : 'UNK'))) AS Enum8('CSH' = 1, 'CRE' = 2, 'UNK' = 0, 'NOC' = 3, 'DIS' = 4)) AS payment_type_, -assumeNotNull(trip_type) AS trip_type, -ifNull(toFixedString(unhex(pickup), 25), toFixedString('', 25)) AS pickup, -ifNull(toFixedString(unhex(dropoff), 25), toFixedString('', 25)) AS dropoff, -CAST(assumeNotNull(cab_type) AS Enum8('yellow' = 1, 'green' = 2, 'uber' = 3)) AS cab_type, - -assumeNotNull(pickup_nyct2010_gid) AS pickup_nyct2010_gid, -toFloat32(ifNull(pickup_ctlabel, '0')) AS pickup_ctlabel, -assumeNotNull(pickup_borocode) AS pickup_borocode, -CAST(assumeNotNull(pickup_boroname) AS Enum8('Manhattan' = 1, 'Queens' = 4, 'Brooklyn' = 3, '' = 0, 'Bronx' = 2, 'Staten Island' = 5)) AS pickup_boroname, -toFixedString(ifNull(pickup_ct2010, '000000'), 6) AS pickup_ct2010, -toFixedString(ifNull(pickup_boroct2010, '0000000'), 7) AS pickup_boroct2010, -CAST(assumeNotNull(ifNull(pickup_cdeligibil, ' ')) AS Enum8(' ' = 0, 'E' = 1, 'I' = 2)) AS pickup_cdeligibil, -toFixedString(ifNull(pickup_ntacode, '0000'), 4) AS pickup_ntacode, - -CAST(assumeNotNull(pickup_ntaname) AS Enum16('' = 0, 'Airport' = 1, 'Allerton-Pelham Gardens' = 2, 'Annadale-Huguenot-Prince\'s Bay-Eltingville' = 3, 'Arden Heights' = 4, 'Astoria' = 5, 'Auburndale' = 6, 'Baisley Park' = 7, 'Bath Beach' = 8, 'Battery Park City-Lower Manhattan' = 9, 'Bay Ridge' = 10, 'Bayside-Bayside Hills' = 11, 'Bedford' = 12, 'Bedford Park-Fordham North' = 13, 'Bellerose' = 14, 'Belmont' = 15, 'Bensonhurst East' = 16, 'Bensonhurst West' = 17, 'Borough Park' = 18, 'Breezy Point-Belle Harbor-Rockaway Park-Broad Channel' = 19, 'Briarwood-Jamaica Hills' = 20, 'Brighton Beach' = 21, 'Bronxdale' = 22, 'Brooklyn Heights-Cobble Hill' = 23, 'Brownsville' = 24, 'Bushwick North' = 25, 'Bushwick South' = 26, 'Cambria Heights' = 27, 'Canarsie' = 28, 'Carroll Gardens-Columbia Street-Red Hook' = 29, 'Central Harlem North-Polo Grounds' = 30, 'Central Harlem South' = 31, 'Charleston-Richmond Valley-Tottenville' = 32, 
'Chinatown' = 33, 'Claremont-Bathgate' = 34, 'Clinton' = 35, 'Clinton Hill' = 36, 'Co-op City' = 37, 'College Point' = 38, 'Corona' = 39, 'Crotona Park East' = 40, 'Crown Heights North' = 41, 'Crown Heights South' = 42, 'Cypress Hills-City Line' = 43, 'DUMBO-Vinegar Hill-Downtown Brooklyn-Boerum Hill' = 44, 'Douglas Manor-Douglaston-Little Neck' = 45, 'Dyker Heights' = 46, 'East Concourse-Concourse Village' = 47, 'East Elmhurst' = 48, 'East Flatbush-Farragut' = 49, 'East Flushing' = 50, 'East Harlem North' = 51, 'East Harlem South' = 52, 'East New York' = 53, 'East New York (Pennsylvania Ave)' = 54, 'East Tremont' = 55, 'East Village' = 56, 'East Williamsburg' = 57, 'Eastchester-Edenwald-Baychester' = 58, 'Elmhurst' = 59, 'Elmhurst-Maspeth' = 60, 'Erasmus' = 61, 'Far Rockaway-Bayswater' = 62, 'Flatbush' = 63, 'Flatlands' = 64, 'Flushing' = 65, 'Fordham South' = 66, 'Forest Hills' = 67, 'Fort Greene' = 68, 'Fresh Meadows-Utopia' = 69, 'Ft. Totten-Bay Terrace-Clearview' = 70, 'Georgetown-Marine Park-Bergen Beach-Mill Basin' = 71, 'Glen Oaks-Floral Park-New Hyde Park' = 72, 'Glendale' = 73, 'Gramercy' = 74, 'Grasmere-Arrochar-Ft. Wadsworth' = 75, 'Gravesend' = 76, 'Great Kills' = 77, 'Greenpoint' = 78, 'Grymes Hill-Clifton-Fox Hills' = 79, 'Hamilton Heights' = 80, 'Hammels-Arverne-Edgemere' = 81, 'Highbridge' = 82, 'Hollis' = 83, 'Homecrest' = 84, 'Hudson Yards-Chelsea-Flatiron-Union Square' = 85, 'Hunters Point-Sunnyside-West Maspeth' = 86, 'Hunts Point' = 87, 'Jackson Heights' = 88, 'Jamaica' = 89, 'Jamaica Estates-Holliswood' = 90, 'Kensington-Ocean Parkway' = 91, 'Kew Gardens' = 92, 'Kew Gardens Hills' = 93, 'Kingsbridge Heights' = 94, 'Laurelton' = 95, 'Lenox Hill-Roosevelt Island' = 96, 'Lincoln Square' = 97, 'Lindenwood-Howard Beach' = 98, 'Longwood' = 99, 'Lower East Side' = 100, 'Madison' = 101, 'Manhattanville' = 102, 'Marble Hill-Inwood' = 103, 'Mariner\'s Harbor-Arlington-Port Ivory-Graniteville' = 104, 'Maspeth' = 105, 'Melrose South-Mott Haven North' = 106, 'Middle Village' = 107, 'Midtown-Midtown South' = 108, 'Midwood' = 109, 'Morningside Heights' = 110, 'Morrisania-Melrose' = 111, 'Mott Haven-Port Morris' = 112, 'Mount Hope' = 113, 'Murray Hill' = 114, 'Murray Hill-Kips Bay' = 115, 'New Brighton-Silver Lake' = 116, 'New Dorp-Midland Beach' = 117, 'New Springville-Bloomfield-Travis' = 118, 'North Corona' = 119, 'North Riverdale-Fieldston-Riverdale' = 120, 'North Side-South Side' = 121, 'Norwood' = 122, 'Oakland Gardens' = 123, 'Oakwood-Oakwood Beach' = 124, 'Ocean Hill' = 125, 'Ocean Parkway South' = 126, 'Old Astoria' = 127, 'Old Town-Dongan Hills-South Beach' = 128, 'Ozone Park' = 129, 'Park Slope-Gowanus' = 130, 'Parkchester' = 131, 'Pelham Bay-Country Club-City Island' = 132, 'Pelham Parkway' = 133, 'Pomonok-Flushing Heights-Hillcrest' = 134, 'Port Richmond' = 135, 'Prospect Heights' = 136, 'Prospect Lefferts Gardens-Wingate' = 137, 'Queens Village' = 138, 'Queensboro Hill' = 139, 'Queensbridge-Ravenswood-Long Island City' = 140, 'Rego Park' = 141, 'Richmond Hill' = 142, 'Ridgewood' = 143, 'Rikers Island' = 144, 'Rosedale' = 145, 'Rossville-Woodrow' = 146, 'Rugby-Remsen Village' = 147, 'Schuylerville-Throgs Neck-Edgewater Park' = 148, 'Seagate-Coney Island' = 149, 'Sheepshead Bay-Gerritsen Beach-Manhattan Beach' = 150, 'SoHo-TriBeCa-Civic Center-Little Italy' = 151, 'Soundview-Bruckner' = 152, 'Soundview-Castle Hill-Clason Point-Harding Park' = 153, 'South Jamaica' = 154, 'South Ozone Park' = 155, 'Springfield Gardens North' = 156, 'Springfield Gardens South-Brookville' = 
157, 'Spuyten Duyvil-Kingsbridge' = 158, 'St. Albans' = 159, 'Stapleton-Rosebank' = 160, 'Starrett City' = 161, 'Steinway' = 162, 'Stuyvesant Heights' = 163, 'Stuyvesant Town-Cooper Village' = 164, 'Sunset Park East' = 165, 'Sunset Park West' = 166, 'Todt Hill-Emerson Hill-Heartland Village-Lighthouse Hill' = 167, 'Turtle Bay-East Midtown' = 168, 'University Heights-Morris Heights' = 169, 'Upper East Side-Carnegie Hill' = 170, 'Upper West Side' = 171, 'Van Cortlandt Village' = 172, 'Van Nest-Morris Park-Westchester Square' = 173, 'Washington Heights North' = 174, 'Washington Heights South' = 175, 'West Brighton' = 176, 'West Concourse' = 177, 'West Farms-Bronx River' = 178, 'West New Brighton-New Brighton-St. George' = 179, 'West Village' = 180, 'Westchester-Unionport' = 181, 'Westerleigh' = 182, 'Whitestone' = 183, 'Williamsbridge-Olinville' = 184, 'Williamsburg' = 185, 'Windsor Terrace' = 186, 'Woodhaven' = 187, 'Woodlawn-Wakefield' = 188, 'Woodside' = 189, 'Yorkville' = 190, 'park-cemetery-etc-Bronx' = 191, 'park-cemetery-etc-Brooklyn' = 192, 'park-cemetery-etc-Manhattan' = 193, 'park-cemetery-etc-Queens' = 194, 'park-cemetery-etc-Staten Island' = 195)) AS pickup_ntaname, - -toUInt16(ifNull(pickup_puma, '0')) AS pickup_puma, - -assumeNotNull(dropoff_nyct2010_gid) AS dropoff_nyct2010_gid, -toFloat32(ifNull(dropoff_ctlabel, '0')) AS dropoff_ctlabel, -assumeNotNull(dropoff_borocode) AS dropoff_borocode, -CAST(assumeNotNull(dropoff_boroname) AS Enum8('Manhattan' = 1, 'Queens' = 4, 'Brooklyn' = 3, '' = 0, 'Bronx' = 2, 'Staten Island' = 5)) AS dropoff_boroname, -toFixedString(ifNull(dropoff_ct2010, '000000'), 6) AS dropoff_ct2010, -toFixedString(ifNull(dropoff_boroct2010, '0000000'), 7) AS dropoff_boroct2010, -CAST(assumeNotNull(ifNull(dropoff_cdeligibil, ' ')) AS Enum8(' ' = 0, 'E' = 1, 'I' = 2)) AS dropoff_cdeligibil, -toFixedString(ifNull(dropoff_ntacode, '0000'), 4) AS dropoff_ntacode, - -CAST(assumeNotNull(dropoff_ntaname) AS Enum16('' = 0, 'Airport' = 1, 'Allerton-Pelham Gardens' = 2, 'Annadale-Huguenot-Prince\'s Bay-Eltingville' = 3, 'Arden Heights' = 4, 'Astoria' = 5, 'Auburndale' = 6, 'Baisley Park' = 7, 'Bath Beach' = 8, 'Battery Park City-Lower Manhattan' = 9, 'Bay Ridge' = 10, 'Bayside-Bayside Hills' = 11, 'Bedford' = 12, 'Bedford Park-Fordham North' = 13, 'Bellerose' = 14, 'Belmont' = 15, 'Bensonhurst East' = 16, 'Bensonhurst West' = 17, 'Borough Park' = 18, 'Breezy Point-Belle Harbor-Rockaway Park-Broad Channel' = 19, 'Briarwood-Jamaica Hills' = 20, 'Brighton Beach' = 21, 'Bronxdale' = 22, 'Brooklyn Heights-Cobble Hill' = 23, 'Brownsville' = 24, 'Bushwick North' = 25, 'Bushwick South' = 26, 'Cambria Heights' = 27, 'Canarsie' = 28, 'Carroll Gardens-Columbia Street-Red Hook' = 29, 'Central Harlem North-Polo Grounds' = 30, 'Central Harlem South' = 31, 'Charleston-Richmond Valley-Tottenville' = 32, 'Chinatown' = 33, 'Claremont-Bathgate' = 34, 'Clinton' = 35, 'Clinton Hill' = 36, 'Co-op City' = 37, 'College Point' = 38, 'Corona' = 39, 'Crotona Park East' = 40, 'Crown Heights North' = 41, 'Crown Heights South' = 42, 'Cypress Hills-City Line' = 43, 'DUMBO-Vinegar Hill-Downtown Brooklyn-Boerum Hill' = 44, 'Douglas Manor-Douglaston-Little Neck' = 45, 'Dyker Heights' = 46, 'East Concourse-Concourse Village' = 47, 'East Elmhurst' = 48, 'East Flatbush-Farragut' = 49, 'East Flushing' = 50, 'East Harlem North' = 51, 'East Harlem South' = 52, 'East New York' = 53, 'East New York (Pennsylvania Ave)' = 54, 'East Tremont' = 55, 'East Village' = 56, 'East Williamsburg' = 57, 
'Eastchester-Edenwald-Baychester' = 58, 'Elmhurst' = 59, 'Elmhurst-Maspeth' = 60, 'Erasmus' = 61, 'Far Rockaway-Bayswater' = 62, 'Flatbush' = 63, 'Flatlands' = 64, 'Flushing' = 65, 'Fordham South' = 66, 'Forest Hills' = 67, 'Fort Greene' = 68, 'Fresh Meadows-Utopia' = 69, 'Ft. Totten-Bay Terrace-Clearview' = 70, 'Georgetown-Marine Park-Bergen Beach-Mill Basin' = 71, 'Glen Oaks-Floral Park-New Hyde Park' = 72, 'Glendale' = 73, 'Gramercy' = 74, 'Grasmere-Arrochar-Ft. Wadsworth' = 75, 'Gravesend' = 76, 'Great Kills' = 77, 'Greenpoint' = 78, 'Grymes Hill-Clifton-Fox Hills' = 79, 'Hamilton Heights' = 80, 'Hammels-Arverne-Edgemere' = 81, 'Highbridge' = 82, 'Hollis' = 83, 'Homecrest' = 84, 'Hudson Yards-Chelsea-Flatiron-Union Square' = 85, 'Hunters Point-Sunnyside-West Maspeth' = 86, 'Hunts Point' = 87, 'Jackson Heights' = 88, 'Jamaica' = 89, 'Jamaica Estates-Holliswood' = 90, 'Kensington-Ocean Parkway' = 91, 'Kew Gardens' = 92, 'Kew Gardens Hills' = 93, 'Kingsbridge Heights' = 94, 'Laurelton' = 95, 'Lenox Hill-Roosevelt Island' = 96, 'Lincoln Square' = 97, 'Lindenwood-Howard Beach' = 98, 'Longwood' = 99, 'Lower East Side' = 100, 'Madison' = 101, 'Manhattanville' = 102, 'Marble Hill-Inwood' = 103, 'Mariner\'s Harbor-Arlington-Port Ivory-Graniteville' = 104, 'Maspeth' = 105, 'Melrose South-Mott Haven North' = 106, 'Middle Village' = 107, 'Midtown-Midtown South' = 108, 'Midwood' = 109, 'Morningside Heights' = 110, 'Morrisania-Melrose' = 111, 'Mott Haven-Port Morris' = 112, 'Mount Hope' = 113, 'Murray Hill' = 114, 'Murray Hill-Kips Bay' = 115, 'New Brighton-Silver Lake' = 116, 'New Dorp-Midland Beach' = 117, 'New Springville-Bloomfield-Travis' = 118, 'North Corona' = 119, 'North Riverdale-Fieldston-Riverdale' = 120, 'North Side-South Side' = 121, 'Norwood' = 122, 'Oakland Gardens' = 123, 'Oakwood-Oakwood Beach' = 124, 'Ocean Hill' = 125, 'Ocean Parkway South' = 126, 'Old Astoria' = 127, 'Old Town-Dongan Hills-South Beach' = 128, 'Ozone Park' = 129, 'Park Slope-Gowanus' = 130, 'Parkchester' = 131, 'Pelham Bay-Country Club-City Island' = 132, 'Pelham Parkway' = 133, 'Pomonok-Flushing Heights-Hillcrest' = 134, 'Port Richmond' = 135, 'Prospect Heights' = 136, 'Prospect Lefferts Gardens-Wingate' = 137, 'Queens Village' = 138, 'Queensboro Hill' = 139, 'Queensbridge-Ravenswood-Long Island City' = 140, 'Rego Park' = 141, 'Richmond Hill' = 142, 'Ridgewood' = 143, 'Rikers Island' = 144, 'Rosedale' = 145, 'Rossville-Woodrow' = 146, 'Rugby-Remsen Village' = 147, 'Schuylerville-Throgs Neck-Edgewater Park' = 148, 'Seagate-Coney Island' = 149, 'Sheepshead Bay-Gerritsen Beach-Manhattan Beach' = 150, 'SoHo-TriBeCa-Civic Center-Little Italy' = 151, 'Soundview-Bruckner' = 152, 'Soundview-Castle Hill-Clason Point-Harding Park' = 153, 'South Jamaica' = 154, 'South Ozone Park' = 155, 'Springfield Gardens North' = 156, 'Springfield Gardens South-Brookville' = 157, 'Spuyten Duyvil-Kingsbridge' = 158, 'St. 
Albans' = 159, 'Stapleton-Rosebank' = 160, 'Starrett City' = 161, 'Steinway' = 162, 'Stuyvesant Heights' = 163, 'Stuyvesant Town-Cooper Village' = 164, 'Sunset Park East' = 165, 'Sunset Park West' = 166, 'Todt Hill-Emerson Hill-Heartland Village-Lighthouse Hill' = 167, 'Turtle Bay-East Midtown' = 168, 'University Heights-Morris Heights' = 169, 'Upper East Side-Carnegie Hill' = 170, 'Upper West Side' = 171, 'Van Cortlandt Village' = 172, 'Van Nest-Morris Park-Westchester Square' = 173, 'Washington Heights North' = 174, 'Washington Heights South' = 175, 'West Brighton' = 176, 'West Concourse' = 177, 'West Farms-Bronx River' = 178, 'West New Brighton-New Brighton-St. George' = 179, 'West Village' = 180, 'Westchester-Unionport' = 181, 'Westerleigh' = 182, 'Whitestone' = 183, 'Williamsbridge-Olinville' = 184, 'Williamsburg' = 185, 'Windsor Terrace' = 186, 'Woodhaven' = 187, 'Woodlawn-Wakefield' = 188, 'Woodside' = 189, 'Yorkville' = 190, 'park-cemetery-etc-Bronx' = 191, 'park-cemetery-etc-Brooklyn' = 192, 'park-cemetery-etc-Manhattan' = 193, 'park-cemetery-etc-Queens' = 194, 'park-cemetery-etc-Staten Island' = 195)) AS dropoff_ntaname, - -toUInt16(ifNull(dropoff_puma, '0')) AS dropoff_puma - -FROM trips -``` - -Esto toma 3030 segundos a una velocidad de aproximadamente 428,000 filas por segundo. -Para cargarlo más rápido, puede crear la tabla con el `Log` motor en lugar de `MergeTree`. En este caso, la descarga funciona más rápido que 200 segundos. - -La tabla utiliza 126 GB de espacio en disco. - -``` sql -SELECT formatReadableSize(sum(bytes)) FROM system.parts WHERE table = 'trips_mergetree' AND active -``` - -``` text -┌─formatReadableSize(sum(bytes))─┐ -│ 126.18 GiB │ -└────────────────────────────────┘ -``` - -Entre otras cosas, puede ejecutar la consulta OPTIMIZE en MergeTree. Pero no es necesario ya que todo estará bien sin él. - -## Descarga de Prepared Partitions {#download-of-prepared-partitions} - -``` bash -$ curl -O https://datasets.clickhouse.tech/trips_mergetree/partitions/trips_mergetree.tar -$ tar xvf trips_mergetree.tar -C /var/lib/clickhouse # path to ClickHouse data directory -$ # check permissions of unpacked data, fix if required -$ sudo service clickhouse-server restart -$ clickhouse-client --query "select count(*) from datasets.trips_mergetree" -``` - -!!! info "INFO" - Si va a ejecutar las consultas que se describen a continuación, debe usar el nombre completo de la tabla, `datasets.trips_mergetree`. - -## Resultados en un solo servidor {#results-on-single-server} - -Q1: - -``` sql -SELECT cab_type, count(*) FROM trips_mergetree GROUP BY cab_type -``` - -0.490 segundos. - -Q2: - -``` sql -SELECT passenger_count, avg(total_amount) FROM trips_mergetree GROUP BY passenger_count -``` - -1.224 segundos. - -Q3: - -``` sql -SELECT passenger_count, toYear(pickup_date) AS year, count(*) FROM trips_mergetree GROUP BY passenger_count, year -``` - -2.104 segundos. - -Q4: - -``` sql -SELECT passenger_count, toYear(pickup_date) AS year, round(trip_distance) AS distance, count(*) -FROM trips_mergetree -GROUP BY passenger_count, year, distance -ORDER BY year, count(*) DESC -``` - -3.593 segundos. - -Se utilizó el siguiente servidor: - -Dos CPU Intel (R) Xeon (R) E5-2650 v2 @ 2.60GHz, 16 núcleos físicos en total, 128 GiB RAM, 8x6 TB HD en hardware RAID-5 - -El tiempo de ejecución es el mejor de tres carreras. Pero a partir de la segunda ejecución, las consultas leen datos de la memoria caché del sistema de archivos. 
No se produce más almacenamiento en caché: los datos se leen y procesan en cada ejecución. - -Creación de una tabla en tres servidores: - -En cada servidor: - -``` sql -CREATE TABLE default.trips_mergetree_third ( trip_id UInt32, vendor_id Enum8('1' = 1, '2' = 2, 'CMT' = 3, 'VTS' = 4, 'DDS' = 5, 'B02512' = 10, 'B02598' = 11, 'B02617' = 12, 'B02682' = 13, 'B02764' = 14), pickup_date Date, pickup_datetime DateTime, dropoff_date Date, dropoff_datetime DateTime, store_and_fwd_flag UInt8, rate_code_id UInt8, pickup_longitude Float64, pickup_latitude Float64, dropoff_longitude Float64, dropoff_latitude Float64, passenger_count UInt8, trip_distance Float64, fare_amount Float32, extra Float32, mta_tax Float32, tip_amount Float32, tolls_amount Float32, ehail_fee Float32, improvement_surcharge Float32, total_amount Float32, payment_type_ Enum8('UNK' = 0, 'CSH' = 1, 'CRE' = 2, 'NOC' = 3, 'DIS' = 4), trip_type UInt8, pickup FixedString(25), dropoff FixedString(25), cab_type Enum8('yellow' = 1, 'green' = 2, 'uber' = 3), pickup_nyct2010_gid UInt8, pickup_ctlabel Float32, pickup_borocode UInt8, pickup_boroname Enum8('' = 0, 'Manhattan' = 1, 'Bronx' = 2, 'Brooklyn' = 3, 'Queens' = 4, 'Staten Island' = 5), pickup_ct2010 FixedString(6), pickup_boroct2010 FixedString(7), pickup_cdeligibil Enum8(' ' = 0, 'E' = 1, 'I' = 2), pickup_ntacode FixedString(4), pickup_ntaname Enum16('' = 0, 'Airport' = 1, 'Allerton-Pelham Gardens' = 2, 'Annadale-Huguenot-Prince\'s Bay-Eltingville' = 3, 'Arden Heights' = 4, 'Astoria' = 5, 'Auburndale' = 6, 'Baisley Park' = 7, 'Bath Beach' = 8, 'Battery Park City-Lower Manhattan' = 9, 'Bay Ridge' = 10, 'Bayside-Bayside Hills' = 11, 'Bedford' = 12, 'Bedford Park-Fordham North' = 13, 'Bellerose' = 14, 'Belmont' = 15, 'Bensonhurst East' = 16, 'Bensonhurst West' = 17, 'Borough Park' = 18, 'Breezy Point-Belle Harbor-Rockaway Park-Broad Channel' = 19, 'Briarwood-Jamaica Hills' = 20, 'Brighton Beach' = 21, 'Bronxdale' = 22, 'Brooklyn Heights-Cobble Hill' = 23, 'Brownsville' = 24, 'Bushwick North' = 25, 'Bushwick South' = 26, 'Cambria Heights' = 27, 'Canarsie' = 28, 'Carroll Gardens-Columbia Street-Red Hook' = 29, 'Central Harlem North-Polo Grounds' = 30, 'Central Harlem South' = 31, 'Charleston-Richmond Valley-Tottenville' = 32, 'Chinatown' = 33, 'Claremont-Bathgate' = 34, 'Clinton' = 35, 'Clinton Hill' = 36, 'Co-op City' = 37, 'College Point' = 38, 'Corona' = 39, 'Crotona Park East' = 40, 'Crown Heights North' = 41, 'Crown Heights South' = 42, 'Cypress Hills-City Line' = 43, 'DUMBO-Vinegar Hill-Downtown Brooklyn-Boerum Hill' = 44, 'Douglas Manor-Douglaston-Little Neck' = 45, 'Dyker Heights' = 46, 'East Concourse-Concourse Village' = 47, 'East Elmhurst' = 48, 'East Flatbush-Farragut' = 49, 'East Flushing' = 50, 'East Harlem North' = 51, 'East Harlem South' = 52, 'East New York' = 53, 'East New York (Pennsylvania Ave)' = 54, 'East Tremont' = 55, 'East Village' = 56, 'East Williamsburg' = 57, 'Eastchester-Edenwald-Baychester' = 58, 'Elmhurst' = 59, 'Elmhurst-Maspeth' = 60, 'Erasmus' = 61, 'Far Rockaway-Bayswater' = 62, 'Flatbush' = 63, 'Flatlands' = 64, 'Flushing' = 65, 'Fordham South' = 66, 'Forest Hills' = 67, 'Fort Greene' = 68, 'Fresh Meadows-Utopia' = 69, 'Ft. Totten-Bay Terrace-Clearview' = 70, 'Georgetown-Marine Park-Bergen Beach-Mill Basin' = 71, 'Glen Oaks-Floral Park-New Hyde Park' = 72, 'Glendale' = 73, 'Gramercy' = 74, 'Grasmere-Arrochar-Ft. 
Wadsworth' = 75, 'Gravesend' = 76, 'Great Kills' = 77, 'Greenpoint' = 78, 'Grymes Hill-Clifton-Fox Hills' = 79, 'Hamilton Heights' = 80, 'Hammels-Arverne-Edgemere' = 81, 'Highbridge' = 82, 'Hollis' = 83, 'Homecrest' = 84, 'Hudson Yards-Chelsea-Flatiron-Union Square' = 85, 'Hunters Point-Sunnyside-West Maspeth' = 86, 'Hunts Point' = 87, 'Jackson Heights' = 88, 'Jamaica' = 89, 'Jamaica Estates-Holliswood' = 90, 'Kensington-Ocean Parkway' = 91, 'Kew Gardens' = 92, 'Kew Gardens Hills' = 93, 'Kingsbridge Heights' = 94, 'Laurelton' = 95, 'Lenox Hill-Roosevelt Island' = 96, 'Lincoln Square' = 97, 'Lindenwood-Howard Beach' = 98, 'Longwood' = 99, 'Lower East Side' = 100, 'Madison' = 101, 'Manhattanville' = 102, 'Marble Hill-Inwood' = 103, 'Mariner\'s Harbor-Arlington-Port Ivory-Graniteville' = 104, 'Maspeth' = 105, 'Melrose South-Mott Haven North' = 106, 'Middle Village' = 107, 'Midtown-Midtown South' = 108, 'Midwood' = 109, 'Morningside Heights' = 110, 'Morrisania-Melrose' = 111, 'Mott Haven-Port Morris' = 112, 'Mount Hope' = 113, 'Murray Hill' = 114, 'Murray Hill-Kips Bay' = 115, 'New Brighton-Silver Lake' = 116, 'New Dorp-Midland Beach' = 117, 'New Springville-Bloomfield-Travis' = 118, 'North Corona' = 119, 'North Riverdale-Fieldston-Riverdale' = 120, 'North Side-South Side' = 121, 'Norwood' = 122, 'Oakland Gardens' = 123, 'Oakwood-Oakwood Beach' = 124, 'Ocean Hill' = 125, 'Ocean Parkway South' = 126, 'Old Astoria' = 127, 'Old Town-Dongan Hills-South Beach' = 128, 'Ozone Park' = 129, 'Park Slope-Gowanus' = 130, 'Parkchester' = 131, 'Pelham Bay-Country Club-City Island' = 132, 'Pelham Parkway' = 133, 'Pomonok-Flushing Heights-Hillcrest' = 134, 'Port Richmond' = 135, 'Prospect Heights' = 136, 'Prospect Lefferts Gardens-Wingate' = 137, 'Queens Village' = 138, 'Queensboro Hill' = 139, 'Queensbridge-Ravenswood-Long Island City' = 140, 'Rego Park' = 141, 'Richmond Hill' = 142, 'Ridgewood' = 143, 'Rikers Island' = 144, 'Rosedale' = 145, 'Rossville-Woodrow' = 146, 'Rugby-Remsen Village' = 147, 'Schuylerville-Throgs Neck-Edgewater Park' = 148, 'Seagate-Coney Island' = 149, 'Sheepshead Bay-Gerritsen Beach-Manhattan Beach' = 150, 'SoHo-TriBeCa-Civic Center-Little Italy' = 151, 'Soundview-Bruckner' = 152, 'Soundview-Castle Hill-Clason Point-Harding Park' = 153, 'South Jamaica' = 154, 'South Ozone Park' = 155, 'Springfield Gardens North' = 156, 'Springfield Gardens South-Brookville' = 157, 'Spuyten Duyvil-Kingsbridge' = 158, 'St. Albans' = 159, 'Stapleton-Rosebank' = 160, 'Starrett City' = 161, 'Steinway' = 162, 'Stuyvesant Heights' = 163, 'Stuyvesant Town-Cooper Village' = 164, 'Sunset Park East' = 165, 'Sunset Park West' = 166, 'Todt Hill-Emerson Hill-Heartland Village-Lighthouse Hill' = 167, 'Turtle Bay-East Midtown' = 168, 'University Heights-Morris Heights' = 169, 'Upper East Side-Carnegie Hill' = 170, 'Upper West Side' = 171, 'Van Cortlandt Village' = 172, 'Van Nest-Morris Park-Westchester Square' = 173, 'Washington Heights North' = 174, 'Washington Heights South' = 175, 'West Brighton' = 176, 'West Concourse' = 177, 'West Farms-Bronx River' = 178, 'West New Brighton-New Brighton-St. 
George' = 179, 'West Village' = 180, 'Westchester-Unionport' = 181, 'Westerleigh' = 182, 'Whitestone' = 183, 'Williamsbridge-Olinville' = 184, 'Williamsburg' = 185, 'Windsor Terrace' = 186, 'Woodhaven' = 187, 'Woodlawn-Wakefield' = 188, 'Woodside' = 189, 'Yorkville' = 190, 'park-cemetery-etc-Bronx' = 191, 'park-cemetery-etc-Brooklyn' = 192, 'park-cemetery-etc-Manhattan' = 193, 'park-cemetery-etc-Queens' = 194, 'park-cemetery-etc-Staten Island' = 195), pickup_puma UInt16, dropoff_nyct2010_gid UInt8, dropoff_ctlabel Float32, dropoff_borocode UInt8, dropoff_boroname Enum8('' = 0, 'Manhattan' = 1, 'Bronx' = 2, 'Brooklyn' = 3, 'Queens' = 4, 'Staten Island' = 5), dropoff_ct2010 FixedString(6), dropoff_boroct2010 FixedString(7), dropoff_cdeligibil Enum8(' ' = 0, 'E' = 1, 'I' = 2), dropoff_ntacode FixedString(4), dropoff_ntaname Enum16('' = 0, 'Airport' = 1, 'Allerton-Pelham Gardens' = 2, 'Annadale-Huguenot-Prince\'s Bay-Eltingville' = 3, 'Arden Heights' = 4, 'Astoria' = 5, 'Auburndale' = 6, 'Baisley Park' = 7, 'Bath Beach' = 8, 'Battery Park City-Lower Manhattan' = 9, 'Bay Ridge' = 10, 'Bayside-Bayside Hills' = 11, 'Bedford' = 12, 'Bedford Park-Fordham North' = 13, 'Bellerose' = 14, 'Belmont' = 15, 'Bensonhurst East' = 16, 'Bensonhurst West' = 17, 'Borough Park' = 18, 'Breezy Point-Belle Harbor-Rockaway Park-Broad Channel' = 19, 'Briarwood-Jamaica Hills' = 20, 'Brighton Beach' = 21, 'Bronxdale' = 22, 'Brooklyn Heights-Cobble Hill' = 23, 'Brownsville' = 24, 'Bushwick North' = 25, 'Bushwick South' = 26, 'Cambria Heights' = 27, 'Canarsie' = 28, 'Carroll Gardens-Columbia Street-Red Hook' = 29, 'Central Harlem North-Polo Grounds' = 30, 'Central Harlem South' = 31, 'Charleston-Richmond Valley-Tottenville' = 32, 'Chinatown' = 33, 'Claremont-Bathgate' = 34, 'Clinton' = 35, 'Clinton Hill' = 36, 'Co-op City' = 37, 'College Point' = 38, 'Corona' = 39, 'Crotona Park East' = 40, 'Crown Heights North' = 41, 'Crown Heights South' = 42, 'Cypress Hills-City Line' = 43, 'DUMBO-Vinegar Hill-Downtown Brooklyn-Boerum Hill' = 44, 'Douglas Manor-Douglaston-Little Neck' = 45, 'Dyker Heights' = 46, 'East Concourse-Concourse Village' = 47, 'East Elmhurst' = 48, 'East Flatbush-Farragut' = 49, 'East Flushing' = 50, 'East Harlem North' = 51, 'East Harlem South' = 52, 'East New York' = 53, 'East New York (Pennsylvania Ave)' = 54, 'East Tremont' = 55, 'East Village' = 56, 'East Williamsburg' = 57, 'Eastchester-Edenwald-Baychester' = 58, 'Elmhurst' = 59, 'Elmhurst-Maspeth' = 60, 'Erasmus' = 61, 'Far Rockaway-Bayswater' = 62, 'Flatbush' = 63, 'Flatlands' = 64, 'Flushing' = 65, 'Fordham South' = 66, 'Forest Hills' = 67, 'Fort Greene' = 68, 'Fresh Meadows-Utopia' = 69, 'Ft. Totten-Bay Terrace-Clearview' = 70, 'Georgetown-Marine Park-Bergen Beach-Mill Basin' = 71, 'Glen Oaks-Floral Park-New Hyde Park' = 72, 'Glendale' = 73, 'Gramercy' = 74, 'Grasmere-Arrochar-Ft. 
Wadsworth' = 75, 'Gravesend' = 76, 'Great Kills' = 77, 'Greenpoint' = 78, 'Grymes Hill-Clifton-Fox Hills' = 79, 'Hamilton Heights' = 80, 'Hammels-Arverne-Edgemere' = 81, 'Highbridge' = 82, 'Hollis' = 83, 'Homecrest' = 84, 'Hudson Yards-Chelsea-Flatiron-Union Square' = 85, 'Hunters Point-Sunnyside-West Maspeth' = 86, 'Hunts Point' = 87, 'Jackson Heights' = 88, 'Jamaica' = 89, 'Jamaica Estates-Holliswood' = 90, 'Kensington-Ocean Parkway' = 91, 'Kew Gardens' = 92, 'Kew Gardens Hills' = 93, 'Kingsbridge Heights' = 94, 'Laurelton' = 95, 'Lenox Hill-Roosevelt Island' = 96, 'Lincoln Square' = 97, 'Lindenwood-Howard Beach' = 98, 'Longwood' = 99, 'Lower East Side' = 100, 'Madison' = 101, 'Manhattanville' = 102, 'Marble Hill-Inwood' = 103, 'Mariner\'s Harbor-Arlington-Port Ivory-Graniteville' = 104, 'Maspeth' = 105, 'Melrose South-Mott Haven North' = 106, 'Middle Village' = 107, 'Midtown-Midtown South' = 108, 'Midwood' = 109, 'Morningside Heights' = 110, 'Morrisania-Melrose' = 111, 'Mott Haven-Port Morris' = 112, 'Mount Hope' = 113, 'Murray Hill' = 114, 'Murray Hill-Kips Bay' = 115, 'New Brighton-Silver Lake' = 116, 'New Dorp-Midland Beach' = 117, 'New Springville-Bloomfield-Travis' = 118, 'North Corona' = 119, 'North Riverdale-Fieldston-Riverdale' = 120, 'North Side-South Side' = 121, 'Norwood' = 122, 'Oakland Gardens' = 123, 'Oakwood-Oakwood Beach' = 124, 'Ocean Hill' = 125, 'Ocean Parkway South' = 126, 'Old Astoria' = 127, 'Old Town-Dongan Hills-South Beach' = 128, 'Ozone Park' = 129, 'Park Slope-Gowanus' = 130, 'Parkchester' = 131, 'Pelham Bay-Country Club-City Island' = 132, 'Pelham Parkway' = 133, 'Pomonok-Flushing Heights-Hillcrest' = 134, 'Port Richmond' = 135, 'Prospect Heights' = 136, 'Prospect Lefferts Gardens-Wingate' = 137, 'Queens Village' = 138, 'Queensboro Hill' = 139, 'Queensbridge-Ravenswood-Long Island City' = 140, 'Rego Park' = 141, 'Richmond Hill' = 142, 'Ridgewood' = 143, 'Rikers Island' = 144, 'Rosedale' = 145, 'Rossville-Woodrow' = 146, 'Rugby-Remsen Village' = 147, 'Schuylerville-Throgs Neck-Edgewater Park' = 148, 'Seagate-Coney Island' = 149, 'Sheepshead Bay-Gerritsen Beach-Manhattan Beach' = 150, 'SoHo-TriBeCa-Civic Center-Little Italy' = 151, 'Soundview-Bruckner' = 152, 'Soundview-Castle Hill-Clason Point-Harding Park' = 153, 'South Jamaica' = 154, 'South Ozone Park' = 155, 'Springfield Gardens North' = 156, 'Springfield Gardens South-Brookville' = 157, 'Spuyten Duyvil-Kingsbridge' = 158, 'St. Albans' = 159, 'Stapleton-Rosebank' = 160, 'Starrett City' = 161, 'Steinway' = 162, 'Stuyvesant Heights' = 163, 'Stuyvesant Town-Cooper Village' = 164, 'Sunset Park East' = 165, 'Sunset Park West' = 166, 'Todt Hill-Emerson Hill-Heartland Village-Lighthouse Hill' = 167, 'Turtle Bay-East Midtown' = 168, 'University Heights-Morris Heights' = 169, 'Upper East Side-Carnegie Hill' = 170, 'Upper West Side' = 171, 'Van Cortlandt Village' = 172, 'Van Nest-Morris Park-Westchester Square' = 173, 'Washington Heights North' = 174, 'Washington Heights South' = 175, 'West Brighton' = 176, 'West Concourse' = 177, 'West Farms-Bronx River' = 178, 'West New Brighton-New Brighton-St. 
George' = 179, 'West Village' = 180, 'Westchester-Unionport' = 181, 'Westerleigh' = 182, 'Whitestone' = 183, 'Williamsbridge-Olinville' = 184, 'Williamsburg' = 185, 'Windsor Terrace' = 186, 'Woodhaven' = 187, 'Woodlawn-Wakefield' = 188, 'Woodside' = 189, 'Yorkville' = 190, 'park-cemetery-etc-Bronx' = 191, 'park-cemetery-etc-Brooklyn' = 192, 'park-cemetery-etc-Manhattan' = 193, 'park-cemetery-etc-Queens' = 194, 'park-cemetery-etc-Staten Island' = 195), dropoff_puma UInt16) ENGINE = MergeTree(pickup_date, pickup_datetime, 8192) -``` - -En el servidor de origen: - -``` sql -CREATE TABLE trips_mergetree_x3 AS trips_mergetree_third ENGINE = Distributed(perftest, default, trips_mergetree_third, rand()) -``` - -La siguiente consulta redistribuye los datos: - -``` sql -INSERT INTO trips_mergetree_x3 SELECT * FROM trips_mergetree -``` - -Esto tarda 2454 segundos. - -En tres servidores: - -Q1: 0.212 segundos. -Q2: 0.438 segundos. -Q3: 0.733 segundos. -Q4: 1.241 segundos. - -No hay sorpresas aquí, ya que las consultas se escalan linealmente. - -También tenemos los resultados de un clúster de 140 servidores: - -Q1: 0,028 seg. -Q2: 0,043 seg. -Q3: 0,051 seg. -Q4: 0,072 seg. - -En este caso, el tiempo de procesamiento de la consulta está determinado sobre todo por la latencia de la red. -Ejecutamos consultas utilizando un cliente ubicado en un centro de datos de Yandex en Finlandia en un clúster en Rusia, que agregó aproximadamente 20 ms de latencia. - -## Resumen {#summary} - -| servidor | Q1 | Q2 | Q3 | Q4 | -|----------|-------|-------|-------|-------| -| 1 | 0.490 | 1.224 | 2.104 | 3.593 | -| 3 | 0.212 | 0.438 | 0.733 | 1.241 | -| 140 | 0.028 | 0.043 | 0.051 | 0.072 | - -[Artículo Original](https://clickhouse.tech/docs/en/getting_started/example_datasets/nyc_taxi/) diff --git a/docs/es/getting-started/example-datasets/ontime.md b/docs/es/getting-started/example-datasets/ontime.md deleted file mode 100644 index f89d74048bd..00000000000 --- a/docs/es/getting-started/example-datasets/ontime.md +++ /dev/null @@ -1,412 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 15 -toc_title: A tiempo ---- - -# A tiempo {#ontime} - -Este conjunto de datos se puede obtener de dos maneras: - -- importación de datos sin procesar -- descarga de particiones preparadas - -## Importar desde datos sin procesar {#import-from-raw-data} - -Descarga de datos: - -``` bash -for s in `seq 1987 2018` -do -for m in `seq 1 12` -do -wget https://transtats.bts.gov/PREZIP/On_Time_Reporting_Carrier_On_Time_Performance_1987_present_${s}_${m}.zip -done -done -``` - -(a partir de https://github.com/Percona-Lab/ontime-airline-performance/blob/master/download.sh ) - -Creación de una tabla: - -``` sql -CREATE TABLE `ontime` ( - `Year` UInt16, - `Quarter` UInt8, - `Month` UInt8, - `DayofMonth` UInt8, - `DayOfWeek` UInt8, - `FlightDate` Date, - `UniqueCarrier` FixedString(7), - `AirlineID` Int32, - `Carrier` FixedString(2), - `TailNum` String, - `FlightNum` String, - `OriginAirportID` Int32, - `OriginAirportSeqID` Int32, - `OriginCityMarketID` Int32, - `Origin` FixedString(5), - `OriginCityName` String, - `OriginState` FixedString(2), - `OriginStateFips` String, - `OriginStateName` String, - `OriginWac` Int32, - `DestAirportID` Int32, - `DestAirportSeqID` Int32, - `DestCityMarketID` Int32, - `Dest` FixedString(5), - `DestCityName` String, - `DestState` FixedString(2), - `DestStateFips` String, - `DestStateName` String, - `DestWac` Int32, - `CRSDepTime` Int32, - `DepTime` 
Int32, - `DepDelay` Int32, - `DepDelayMinutes` Int32, - `DepDel15` Int32, - `DepartureDelayGroups` String, - `DepTimeBlk` String, - `TaxiOut` Int32, - `WheelsOff` Int32, - `WheelsOn` Int32, - `TaxiIn` Int32, - `CRSArrTime` Int32, - `ArrTime` Int32, - `ArrDelay` Int32, - `ArrDelayMinutes` Int32, - `ArrDel15` Int32, - `ArrivalDelayGroups` Int32, - `ArrTimeBlk` String, - `Cancelled` UInt8, - `CancellationCode` FixedString(1), - `Diverted` UInt8, - `CRSElapsedTime` Int32, - `ActualElapsedTime` Int32, - `AirTime` Int32, - `Flights` Int32, - `Distance` Int32, - `DistanceGroup` UInt8, - `CarrierDelay` Int32, - `WeatherDelay` Int32, - `NASDelay` Int32, - `SecurityDelay` Int32, - `LateAircraftDelay` Int32, - `FirstDepTime` String, - `TotalAddGTime` String, - `LongestAddGTime` String, - `DivAirportLandings` String, - `DivReachedDest` String, - `DivActualElapsedTime` String, - `DivArrDelay` String, - `DivDistance` String, - `Div1Airport` String, - `Div1AirportID` Int32, - `Div1AirportSeqID` Int32, - `Div1WheelsOn` String, - `Div1TotalGTime` String, - `Div1LongestGTime` String, - `Div1WheelsOff` String, - `Div1TailNum` String, - `Div2Airport` String, - `Div2AirportID` Int32, - `Div2AirportSeqID` Int32, - `Div2WheelsOn` String, - `Div2TotalGTime` String, - `Div2LongestGTime` String, - `Div2WheelsOff` String, - `Div2TailNum` String, - `Div3Airport` String, - `Div3AirportID` Int32, - `Div3AirportSeqID` Int32, - `Div3WheelsOn` String, - `Div3TotalGTime` String, - `Div3LongestGTime` String, - `Div3WheelsOff` String, - `Div3TailNum` String, - `Div4Airport` String, - `Div4AirportID` Int32, - `Div4AirportSeqID` Int32, - `Div4WheelsOn` String, - `Div4TotalGTime` String, - `Div4LongestGTime` String, - `Div4WheelsOff` String, - `Div4TailNum` String, - `Div5Airport` String, - `Div5AirportID` Int32, - `Div5AirportSeqID` Int32, - `Div5WheelsOn` String, - `Div5TotalGTime` String, - `Div5LongestGTime` String, - `Div5WheelsOff` String, - `Div5TailNum` String -) ENGINE = MergeTree -PARTITION BY Year -ORDER BY (Carrier, FlightDate) -SETTINGS index_granularity = 8192; -``` - -Loading the data: - -``` bash -$ for i in *.zip; do echo $i; unzip -cq $i '*.csv' | sed 's/\.00//g' | clickhouse-client --host=example-perftest01j --query="INSERT INTO ontime FORMAT CSVWithNames"; done -``` - -## Download of Prepared Partitions {#download-of-prepared-partitions} - -``` bash -$ curl -O https://datasets.clickhouse.tech/ontime/partitions/ontime.tar -$ tar xvf ontime.tar -C /var/lib/clickhouse # path to ClickHouse data directory -$ # check permissions of unpacked data, fix if required -$ sudo service clickhouse-server restart -$ clickhouse-client --query "select count(*) from datasets.ontime" -``` - -!!! info "INFO" - If you are going to run the queries described below, you must use the fully qualified table name, `datasets.ontime`. - -## Queries {#queries} - -Q0. - -``` sql -SELECT avg(c1) -FROM -( - SELECT Year, Month, count(*) AS c1 - FROM ontime - GROUP BY Year, Month -); -``` - -Q1. The number of flights per day from 2000 to 2008 - -``` sql -SELECT DayOfWeek, count(*) AS c -FROM ontime -WHERE Year>=2000 AND Year<=2008 -GROUP BY DayOfWeek -ORDER BY c DESC; -``` - -Q2. The number of flights delayed by more than 10 minutes, grouped by the day of the week, for 2000-2008 - -``` sql -SELECT DayOfWeek, count(*) AS c -FROM ontime -WHERE DepDelay>10 AND Year>=2000 AND Year<=2008 -GROUP BY DayOfWeek -ORDER BY c DESC; -``` - -Q3. The number of delays by airport for 2000-2008 - -``` sql -SELECT Origin, count(*) AS c -FROM ontime -WHERE DepDelay>10 AND Year>=2000 AND Year<=2008 -GROUP BY Origin -ORDER BY c DESC -LIMIT 10; -``` - -Q4. The number of delays by carrier for 2007 - -``` sql -SELECT Carrier, count(*) -FROM ontime -WHERE DepDelay>10 AND Year=2007 -GROUP BY Carrier -ORDER BY count(*) DESC; -``` - -Q5. The percentage of delays by carrier for 2007 - -``` sql -SELECT Carrier, c, c2, c*100/c2 as c3 -FROM -( - SELECT - Carrier, - count(*) AS c - FROM ontime - WHERE DepDelay>10 - AND Year=2007 - GROUP BY Carrier -) -JOIN -( - SELECT - Carrier, - count(*) AS c2 - FROM ontime - WHERE Year=2007 - GROUP BY Carrier -) USING Carrier -ORDER BY c3 DESC; -``` - -Better version of the same query: - -``` sql -SELECT Carrier, avg(DepDelay>10)*100 AS c3 -FROM ontime -WHERE Year=2007 -GROUP BY Carrier -ORDER BY c3 DESC -``` - -Q6. The previous request for a broader range of years, 2000-2008 - -``` sql -SELECT Carrier, c, c2, c*100/c2 as c3 -FROM -( - SELECT - Carrier, - count(*) AS c - FROM ontime - WHERE DepDelay>10 - AND Year>=2000 AND Year<=2008 - GROUP BY Carrier -) -JOIN -( - SELECT - Carrier, - count(*) AS c2 - FROM ontime - WHERE Year>=2000 AND Year<=2008 - GROUP BY Carrier -) USING Carrier -ORDER BY c3 DESC; -``` - -Better version of the same query: - -``` sql -SELECT Carrier, avg(DepDelay>10)*100 AS c3 -FROM ontime -WHERE Year>=2000 AND Year<=2008 -GROUP BY Carrier -ORDER BY c3 DESC; -``` - -Q7. Percentage of flights delayed for more than 10 minutes, by year - -``` sql -SELECT Year, c1/c2 -FROM -( - select - Year, - count(*)*100 as c1 - from ontime - WHERE DepDelay>10 - GROUP BY Year -) -JOIN -( - select - Year, - count(*) as c2 - from ontime - GROUP BY Year -) USING (Year) -ORDER BY Year; -``` - -Better version of the same query: - -``` sql -SELECT Year, avg(DepDelay>10)*100 -FROM ontime -GROUP BY Year -ORDER BY Year; -``` - -Q8. The most popular destinations by the number of directly connected cities for various year ranges - -``` sql -SELECT DestCityName, uniqExact(OriginCityName) AS u -FROM ontime -WHERE Year >= 2000 and Year <= 2010 -GROUP BY DestCityName -ORDER BY u DESC LIMIT 10; -``` - -Q9. - -``` sql -SELECT Year, count(*) AS c1 -FROM ontime -GROUP BY Year; -``` - -Q10. - -``` sql -SELECT - min(Year), max(Year), Carrier, count(*) AS cnt, - sum(ArrDelayMinutes>30) AS flights_delayed, - round(sum(ArrDelayMinutes>30)/count(*),2) AS rate -FROM ontime -WHERE - DayOfWeek NOT IN (6,7) AND OriginState NOT IN ('AK', 'HI', 'PR', 'VI') - AND DestState NOT IN ('AK', 'HI', 'PR', 'VI') - AND FlightDate < '2010-01-01' -GROUP by Carrier -HAVING cnt>100000 and max(Year)>1990 -ORDER by rate DESC -LIMIT 1000; -``` - -Bonus: - -``` sql -SELECT avg(cnt) -FROM -( - SELECT Year,Month,count(*) AS cnt - FROM ontime - WHERE DepDel15=1 - GROUP BY Year,Month -); - -SELECT avg(c1) FROM -( - SELECT Year,Month,count(*) AS c1 - FROM ontime - GROUP BY Year,Month -); - -SELECT DestCityName, uniqExact(OriginCityName) AS u -FROM ontime -GROUP BY DestCityName -ORDER BY u DESC -LIMIT 10; - -SELECT OriginCityName, DestCityName, count() AS c -FROM ontime -GROUP BY OriginCityName, DestCityName -ORDER BY c DESC -LIMIT 10; - -SELECT OriginCityName, count() AS c -FROM ontime -GROUP BY OriginCityName -ORDER BY c DESC -LIMIT 10; -``` - -This performance test was created by Vadim Tkachenko. 
Ver: - -- https://www.percona.com/blog/2009/10/02/analyzing-air-traffic-performance-with-infobright-and-monetdb/ -- https://www.percona.com/blog/2009/10/26/air-traffic-queries-in-luciddb/ -- https://www.percona.com/blog/2009/11/02/air-traffic-queries-in-infinidb-early-alpha/ -- https://www.percona.com/blog/2014/04/21/using-apache-hadoop-and-impala-together-with-mysql-for-data-analysis/ -- https://www.percona.com/blog/2016/01/07/apache-spark-with-air-ontime-performance-data/ -- http://nickmakos.blogspot.ru/2012/08/analyzing-air-traffic-performance-with.html - -[Artículo Original](https://clickhouse.tech/docs/en/getting_started/example_datasets/ontime/) diff --git a/docs/es/getting-started/example-datasets/star-schema.md b/docs/es/getting-started/example-datasets/star-schema.md deleted file mode 100644 index 43f878eb205..00000000000 --- a/docs/es/getting-started/example-datasets/star-schema.md +++ /dev/null @@ -1,370 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 20 -toc_title: Estrella Schema Benchmark ---- - -# Estrella Schema Benchmark {#star-schema-benchmark} - -Compilación de dbgen: - -``` bash -$ git clone git@github.com:vadimtk/ssb-dbgen.git -$ cd ssb-dbgen -$ make -``` - -Generación de datos: - -!!! warning "Atención" - Con `-s 100` dbgen genera 600 millones de filas (67 GB), mientras que `-s 1000` genera 6 mil millones de filas (lo que lleva mucho tiempo) - -``` bash -$ ./dbgen -s 1000 -T c -$ ./dbgen -s 1000 -T l -$ ./dbgen -s 1000 -T p -$ ./dbgen -s 1000 -T s -$ ./dbgen -s 1000 -T d -``` - -Creación de tablas en ClickHouse: - -``` sql -CREATE TABLE customer -( - C_CUSTKEY UInt32, - C_NAME String, - C_ADDRESS String, - C_CITY LowCardinality(String), - C_NATION LowCardinality(String), - C_REGION LowCardinality(String), - C_PHONE String, - C_MKTSEGMENT LowCardinality(String) -) -ENGINE = MergeTree ORDER BY (C_CUSTKEY); - -CREATE TABLE lineorder -( - LO_ORDERKEY UInt32, - LO_LINENUMBER UInt8, - LO_CUSTKEY UInt32, - LO_PARTKEY UInt32, - LO_SUPPKEY UInt32, - LO_ORDERDATE Date, - LO_ORDERPRIORITY LowCardinality(String), - LO_SHIPPRIORITY UInt8, - LO_QUANTITY UInt8, - LO_EXTENDEDPRICE UInt32, - LO_ORDTOTALPRICE UInt32, - LO_DISCOUNT UInt8, - LO_REVENUE UInt32, - LO_SUPPLYCOST UInt32, - LO_TAX UInt8, - LO_COMMITDATE Date, - LO_SHIPMODE LowCardinality(String) -) -ENGINE = MergeTree PARTITION BY toYear(LO_ORDERDATE) ORDER BY (LO_ORDERDATE, LO_ORDERKEY); - -CREATE TABLE part -( - P_PARTKEY UInt32, - P_NAME String, - P_MFGR LowCardinality(String), - P_CATEGORY LowCardinality(String), - P_BRAND LowCardinality(String), - P_COLOR LowCardinality(String), - P_TYPE LowCardinality(String), - P_SIZE UInt8, - P_CONTAINER LowCardinality(String) -) -ENGINE = MergeTree ORDER BY P_PARTKEY; - -CREATE TABLE supplier -( - S_SUPPKEY UInt32, - S_NAME String, - S_ADDRESS String, - S_CITY LowCardinality(String), - S_NATION LowCardinality(String), - S_REGION LowCardinality(String), - S_PHONE String -) -ENGINE = MergeTree ORDER BY S_SUPPKEY; -``` - -Insertar datos: - -``` bash -$ clickhouse-client --query "INSERT INTO customer FORMAT CSV" < customer.tbl -$ clickhouse-client --query "INSERT INTO part FORMAT CSV" < part.tbl -$ clickhouse-client --query "INSERT INTO supplier FORMAT CSV" < supplier.tbl -$ clickhouse-client --query "INSERT INTO lineorder FORMAT CSV" < lineorder.tbl -``` - -Conversión “star schema” a desnormalizado “flat schema”: - -``` sql -SET max_memory_usage = 20000000000; - -CREATE TABLE lineorder_flat -ENGINE = MergeTree 
-PARTITION BY toYear(LO_ORDERDATE) -ORDER BY (LO_ORDERDATE, LO_ORDERKEY) AS -SELECT - l.LO_ORDERKEY AS LO_ORDERKEY, - l.LO_LINENUMBER AS LO_LINENUMBER, - l.LO_CUSTKEY AS LO_CUSTKEY, - l.LO_PARTKEY AS LO_PARTKEY, - l.LO_SUPPKEY AS LO_SUPPKEY, - l.LO_ORDERDATE AS LO_ORDERDATE, - l.LO_ORDERPRIORITY AS LO_ORDERPRIORITY, - l.LO_SHIPPRIORITY AS LO_SHIPPRIORITY, - l.LO_QUANTITY AS LO_QUANTITY, - l.LO_EXTENDEDPRICE AS LO_EXTENDEDPRICE, - l.LO_ORDTOTALPRICE AS LO_ORDTOTALPRICE, - l.LO_DISCOUNT AS LO_DISCOUNT, - l.LO_REVENUE AS LO_REVENUE, - l.LO_SUPPLYCOST AS LO_SUPPLYCOST, - l.LO_TAX AS LO_TAX, - l.LO_COMMITDATE AS LO_COMMITDATE, - l.LO_SHIPMODE AS LO_SHIPMODE, - c.C_NAME AS C_NAME, - c.C_ADDRESS AS C_ADDRESS, - c.C_CITY AS C_CITY, - c.C_NATION AS C_NATION, - c.C_REGION AS C_REGION, - c.C_PHONE AS C_PHONE, - c.C_MKTSEGMENT AS C_MKTSEGMENT, - s.S_NAME AS S_NAME, - s.S_ADDRESS AS S_ADDRESS, - s.S_CITY AS S_CITY, - s.S_NATION AS S_NATION, - s.S_REGION AS S_REGION, - s.S_PHONE AS S_PHONE, - p.P_NAME AS P_NAME, - p.P_MFGR AS P_MFGR, - p.P_CATEGORY AS P_CATEGORY, - p.P_BRAND AS P_BRAND, - p.P_COLOR AS P_COLOR, - p.P_TYPE AS P_TYPE, - p.P_SIZE AS P_SIZE, - p.P_CONTAINER AS P_CONTAINER -FROM lineorder AS l -INNER JOIN customer AS c ON c.C_CUSTKEY = l.LO_CUSTKEY -INNER JOIN supplier AS s ON s.S_SUPPKEY = l.LO_SUPPKEY -INNER JOIN part AS p ON p.P_PARTKEY = l.LO_PARTKEY; -``` - -Las consultas: - -Q1.1 - -``` sql -SELECT sum(LO_EXTENDEDPRICE * LO_DISCOUNT) AS revenue -FROM lineorder_flat -WHERE toYear(LO_ORDERDATE) = 1993 AND LO_DISCOUNT BETWEEN 1 AND 3 AND LO_QUANTITY < 25; -``` - -Q1.2 - -``` sql -SELECT sum(LO_EXTENDEDPRICE * LO_DISCOUNT) AS revenue -FROM lineorder_flat -WHERE toYYYYMM(LO_ORDERDATE) = 199401 AND LO_DISCOUNT BETWEEN 4 AND 6 AND LO_QUANTITY BETWEEN 26 AND 35; -``` - -Q1.3 - -``` sql -SELECT sum(LO_EXTENDEDPRICE * LO_DISCOUNT) AS revenue -FROM lineorder_flat -WHERE toISOWeek(LO_ORDERDATE) = 6 AND toYear(LO_ORDERDATE) = 1994 - AND LO_DISCOUNT BETWEEN 5 AND 7 AND LO_QUANTITY BETWEEN 26 AND 35; -``` - -Q2.1 - -``` sql -SELECT - sum(LO_REVENUE), - toYear(LO_ORDERDATE) AS year, - P_BRAND -FROM lineorder_flat -WHERE P_CATEGORY = 'MFGR#12' AND S_REGION = 'AMERICA' -GROUP BY - year, - P_BRAND -ORDER BY - year, - P_BRAND; -``` - -Q2.2 - -``` sql -SELECT - sum(LO_REVENUE), - toYear(LO_ORDERDATE) AS year, - P_BRAND -FROM lineorder_flat -WHERE P_BRAND >= 'MFGR#2221' AND P_BRAND <= 'MFGR#2228' AND S_REGION = 'ASIA' -GROUP BY - year, - P_BRAND -ORDER BY - year, - P_BRAND; -``` - -Q2.3 - -``` sql -SELECT - sum(LO_REVENUE), - toYear(LO_ORDERDATE) AS year, - P_BRAND -FROM lineorder_flat -WHERE P_BRAND = 'MFGR#2239' AND S_REGION = 'EUROPE' -GROUP BY - year, - P_BRAND -ORDER BY - year, - P_BRAND; -``` - -Q3.1 - -``` sql -SELECT - C_NATION, - S_NATION, - toYear(LO_ORDERDATE) AS year, - sum(LO_REVENUE) AS revenue -FROM lineorder_flat -WHERE C_REGION = 'ASIA' AND S_REGION = 'ASIA' AND year >= 1992 AND year <= 1997 -GROUP BY - C_NATION, - S_NATION, - year -ORDER BY - year ASC, - revenue DESC; -``` - -Q3.2 - -``` sql -SELECT - C_CITY, - S_CITY, - toYear(LO_ORDERDATE) AS year, - sum(LO_REVENUE) AS revenue -FROM lineorder_flat -WHERE C_NATION = 'UNITED STATES' AND S_NATION = 'UNITED STATES' AND year >= 1992 AND year <= 1997 -GROUP BY - C_CITY, - S_CITY, - year -ORDER BY - year ASC, - revenue DESC; -``` - -Q3.3 - -``` sql -SELECT - C_CITY, - S_CITY, - toYear(LO_ORDERDATE) AS year, - sum(LO_REVENUE) AS revenue -FROM lineorder_flat -WHERE (C_CITY = 'UNITED KI1' OR C_CITY = 'UNITED KI5') AND (S_CITY = 'UNITED KI1' OR 
S_CITY = 'UNITED KI5') AND year >= 1992 AND year <= 1997
-GROUP BY
-    C_CITY,
-    S_CITY,
-    year
-ORDER BY
-    year ASC,
-    revenue DESC;
-```
-
-Q3.4
-
-``` sql
-SELECT
-    C_CITY,
-    S_CITY,
-    toYear(LO_ORDERDATE) AS year,
-    sum(LO_REVENUE) AS revenue
-FROM lineorder_flat
-WHERE (C_CITY = 'UNITED KI1' OR C_CITY = 'UNITED KI5') AND (S_CITY = 'UNITED KI1' OR S_CITY = 'UNITED KI5') AND toYYYYMM(LO_ORDERDATE) = 199712
-GROUP BY
-    C_CITY,
-    S_CITY,
-    year
-ORDER BY
-    year ASC,
-    revenue DESC;
-```
-
-Q4.1
-
-``` sql
-SELECT
-    toYear(LO_ORDERDATE) AS year,
-    C_NATION,
-    sum(LO_REVENUE - LO_SUPPLYCOST) AS profit
-FROM lineorder_flat
-WHERE C_REGION = 'AMERICA' AND S_REGION = 'AMERICA' AND (P_MFGR = 'MFGR#1' OR P_MFGR = 'MFGR#2')
-GROUP BY
-    year,
-    C_NATION
-ORDER BY
-    year ASC,
-    C_NATION ASC;
-```
-
-Q4.2
-
-``` sql
-SELECT
-    toYear(LO_ORDERDATE) AS year,
-    S_NATION,
-    P_CATEGORY,
-    sum(LO_REVENUE - LO_SUPPLYCOST) AS profit
-FROM lineorder_flat
-WHERE C_REGION = 'AMERICA' AND S_REGION = 'AMERICA' AND (year = 1997 OR year = 1998) AND (P_MFGR = 'MFGR#1' OR P_MFGR = 'MFGR#2')
-GROUP BY
-    year,
-    S_NATION,
-    P_CATEGORY
-ORDER BY
-    year ASC,
-    S_NATION ASC,
-    P_CATEGORY ASC;
-```
-
-Q4.3
-
-``` sql
-SELECT
-    toYear(LO_ORDERDATE) AS year,
-    S_CITY,
-    P_BRAND,
-    sum(LO_REVENUE - LO_SUPPLYCOST) AS profit
-FROM lineorder_flat
-WHERE S_NATION = 'UNITED STATES' AND (year = 1997 OR year = 1998) AND P_CATEGORY = 'MFGR#14'
-GROUP BY
-    year,
-    S_CITY,
-    P_BRAND
-ORDER BY
-    year ASC,
-    S_CITY ASC,
-    P_BRAND ASC;
-```
-
-[Original article](https://clickhouse.tech/docs/en/getting_started/example_datasets/star_schema/)
diff --git a/docs/es/getting-started/example-datasets/wikistat.md b/docs/es/getting-started/example-datasets/wikistat.md
deleted file mode 100644
index 49d7263cdec..00000000000
--- a/docs/es/getting-started/example-datasets/wikistat.md
+++ /dev/null
@@ -1,35 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_priority: 18
-toc_title: WikiStat
----
-
-# WikiStat {#wikistat}
-
-See: http://dumps.wikimedia.org/other/pagecounts-raw/
-
-Creating a table:
-
-``` sql
-CREATE TABLE wikistat
-(
-    date Date,
-    time DateTime,
-    project String,
-    subproject String,
-    path String,
-    hits UInt64,
-    size UInt64
-) ENGINE = MergeTree(date, (path, time), 8192);
-```
-
-Loading data:
-
-``` bash
-$ for i in {2007..2016}; do for j in {01..12}; do echo $i-$j >&2; curl -sSL "http://dumps.wikimedia.org/other/pagecounts-raw/$i/$i-$j/" | grep -oE 'pagecounts-[0-9]+-[0-9]+\.gz'; done; done | sort | uniq | tee links.txt
-$ cat links.txt | while read link; do wget http://dumps.wikimedia.org/other/pagecounts-raw/$(echo $link | sed -r 's/pagecounts-([0-9]{4})([0-9]{2})[0-9]{2}-[0-9]+\.gz/\1/')/$(echo $link | sed -r 's/pagecounts-([0-9]{4})([0-9]{2})[0-9]{2}-[0-9]+\.gz/\1-\2/')/$link; done
-$ ls -1 /opt/wikistat/ | grep gz | while read i; do echo $i; gzip -cd /opt/wikistat/$i | ./wikistat-loader --time="$(echo -n $i | sed -r 's/pagecounts-([0-9]{4})([0-9]{2})([0-9]{2})-([0-9]{2})([0-9]{2})([0-9]{2})\.gz/\1-\2-\3 \4-00-00/')" | clickhouse-client --query="INSERT INTO wikistat FORMAT TabSeparated"; done
-```
-
-[Original article](https://clickhouse.tech/docs/en/getting_started/example_datasets/wikistat/)
diff --git a/docs/es/getting-started/index.md b/docs/es/getting-started/index.md
deleted file mode 100644
index 681c2017ac1..00000000000
--- a/docs/es/getting-started/index.md
+++ /dev/null
@@ -1,17 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_folder_title: Getting Started
-toc_hidden: true
-toc_priority: 8
-toc_title: hidden
----
-
-# Getting Started {#getting-started}
-
-If you are new to ClickHouse and want to get a hands-on feeling of its performance, first of all, you need to go through the [installation process](install.md). After that you can:
-
-- [Go through the detailed tutorial](tutorial.md)
-- [Experiment with example datasets](example-datasets/ontime.md)
-
-[Original article](https://clickhouse.tech/docs/en/getting_started/)
diff --git a/docs/es/getting-started/install.md b/docs/es/getting-started/install.md
deleted file mode 100644
index 092ef47b2f7..00000000000
--- a/docs/es/getting-started/install.md
+++ /dev/null
@@ -1,182 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_priority: 11
-toc_title: Installation
----
-
-# Installation {#installation}
-
-## System Requirements {#system-requirements}
-
-ClickHouse can run on any Linux, FreeBSD, or Mac OS X with x86_64, AArch64, or PowerPC64LE CPU architecture.
-
-Official pre-built binaries are typically compiled for x86_64 and leverage the SSE 4.2 instruction set, so unless otherwise stated, usage of a CPU that supports it becomes an additional system requirement. Here is the command to check if the current CPU has support for SSE 4.2:
-
-``` bash
-$ grep -q sse4_2 /proc/cpuinfo && echo "SSE 4.2 supported" || echo "SSE 4.2 not supported"
-```
-
-To run ClickHouse on processors that do not support SSE 4.2 or have AArch64 or PowerPC64LE architecture, you should [build ClickHouse from sources](#from-sources) with proper configuration adjustments.
-
-## Available Installation Options {#available-installation-options}
-
-### From DEB Packages {#install-from-deb-packages}
-
-It is recommended to use official pre-compiled `deb` packages for Debian or Ubuntu. Run these commands to install packages:
-
-``` bash
-{% include 'install/deb.sh' %}
-```
-
-If you want to use the most recent version, replace `stable` with `testing` (this is recommended for your testing environments).
-
-You can also download and install packages manually from [here](https://repo.clickhouse.tech/deb/stable/main/).
-
-#### Packages {#packages}
-
-- `clickhouse-common-static` — Installs ClickHouse compiled binary files.
-- `clickhouse-server` — Creates a symbolic link for `clickhouse-server` and installs the default server configuration.
-- `clickhouse-client` — Creates a symbolic link for `clickhouse-client` and other client-related tools, and installs client configuration files.
-- `clickhouse-common-static-dbg` — Installs ClickHouse compiled binary files with debug info.
-
-### From RPM Packages {#from-rpm-packages}
-
-It is recommended to use official pre-compiled `rpm` packages for CentOS, RedHat, and all other rpm-based Linux distributions.
-
-First, you need to add the official repository:
-
-``` bash
-sudo yum install yum-utils
-sudo rpm --import https://repo.clickhouse.tech/CLICKHOUSE-KEY.GPG
-sudo yum-config-manager --add-repo https://repo.clickhouse.tech/rpm/stable/x86_64
-```
-
-If you want to use the most recent version, replace `stable` with `testing` (this is recommended for your testing environments).
The `prestable` tag is sometimes also available.
-
-Then run these commands to install packages:
-
-``` bash
-sudo yum install clickhouse-server clickhouse-client
-```
-
-You can also download and install packages manually from [here](https://repo.clickhouse.tech/rpm/stable/x86_64).
-
-### From Tgz Archives {#from-tgz-archives}
-
-It is recommended to use official pre-compiled `tgz` archives for all Linux distributions where installation of `deb` or `rpm` packages is not possible.
-
-The required version can be downloaded with `curl` or `wget` from the repository https://repo.clickhouse.tech/tgz/.
-After that, the downloaded archives should be unpacked and installed with installation scripts. Example for the latest version:
-
-``` bash
-export LATEST_VERSION=`curl https://api.github.com/repos/ClickHouse/ClickHouse/tags 2>/dev/null | grep -Eo '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' | head -n 1`
-curl -O https://repo.clickhouse.tech/tgz/clickhouse-common-static-$LATEST_VERSION.tgz
-curl -O https://repo.clickhouse.tech/tgz/clickhouse-common-static-dbg-$LATEST_VERSION.tgz
-curl -O https://repo.clickhouse.tech/tgz/clickhouse-server-$LATEST_VERSION.tgz
-curl -O https://repo.clickhouse.tech/tgz/clickhouse-client-$LATEST_VERSION.tgz
-
-tar -xzvf clickhouse-common-static-$LATEST_VERSION.tgz
-sudo clickhouse-common-static-$LATEST_VERSION/install/doinst.sh
-
-tar -xzvf clickhouse-common-static-dbg-$LATEST_VERSION.tgz
-sudo clickhouse-common-static-dbg-$LATEST_VERSION/install/doinst.sh
-
-tar -xzvf clickhouse-server-$LATEST_VERSION.tgz
-sudo clickhouse-server-$LATEST_VERSION/install/doinst.sh
-sudo /etc/init.d/clickhouse-server start
-
-tar -xzvf clickhouse-client-$LATEST_VERSION.tgz
-sudo clickhouse-client-$LATEST_VERSION/install/doinst.sh
-```
-
-For production environments, it is recommended to use the latest `stable` version. You can find its number on the GitHub page https://github.com/ClickHouse/ClickHouse/tags with postfix `-stable`.
-
-### From Docker Image {#from-docker-image}
-
-To run ClickHouse inside Docker, follow the guide on [Docker Hub](https://hub.docker.com/r/yandex/clickhouse-server/). Those images use official `deb` packages inside.
-
-### From Sources {#from-sources}
-
-To manually compile ClickHouse, follow the instructions for [Linux](../development/build.md) or [Mac OS X](../development/build-osx.md).
-
-You can compile packages and install them, or use programs without installing packages. Also, by building manually you can disable the SSE 4.2 requirement or build for AArch64 CPUs.
-
-    Client: programs/clickhouse-client
-    Server: programs/clickhouse-server
-
-You'll need to create data and metadata folders and `chown` them for the desired user. Their paths can be changed in the server config (src/programs/server/config.xml); by default they are:
-
-    /opt/clickhouse/data/default/
-    /opt/clickhouse/metadata/default/
-
-On Gentoo, you can just use `emerge clickhouse` to install ClickHouse from sources.
-
-## Launch {#launch}
-
-To start the server as a daemon, run:
-
-``` bash
-$ sudo service clickhouse-server start
-```
-
-If you do not have the `service` command, run as
-
-``` bash
-$ sudo /etc/init.d/clickhouse-server start
-```
-
-See the logs in the `/var/log/clickhouse-server/` directory.
-
-If the server does not start, check the configurations in the file `/etc/clickhouse-server/config.xml`.
-
-You can also manually launch the server from the console:
-
-``` bash
-$ clickhouse-server --config-file=/etc/clickhouse-server/config.xml
-```
-
-In this case, the log will be printed to the console, which is convenient during development.
-If the configuration file is in the current directory, you do not need to specify the `--config-file` parameter. By default, it uses `./config.xml`.
-
-ClickHouse supports access restriction settings. They are located in the `users.xml` file (next to `config.xml`).
-By default, access is allowed from anywhere for the `default` user, without a password. See `user/default/networks`.
-For more information, see the section [“Configuration Files”](../operations/configuration-files.md).
-
-After launching the server, you can use the command-line client to connect to it:
-
-``` bash
-$ clickhouse-client
-```
-
-By default, it connects to `localhost:9000` on behalf of the user `default` without a password. It can also be used to connect to a remote server using the `--host` argument.
-
-The terminal must use UTF-8 encoding.
-For more information, see the section [“Command-line client”](../interfaces/cli.md).
-
-Example:
-
-``` bash
-$ ./clickhouse-client
-ClickHouse client version 0.0.18749.
-Connecting to localhost:9000.
-Connected to ClickHouse server version 0.0.18749.
-
-:) SELECT 1
-
-SELECT 1
-
-┌─1─┐
-│ 1 │
-└───┘
-
-1 rows in set. Elapsed: 0.003 sec.
-
-:)
-```
-
-**Congratulations, the system works!**
-
-To continue experimenting, you can download one of the test datasets or go through the [tutorial](https://clickhouse.tech/tutorial.html).
-
-[Original article](https://clickhouse.tech/docs/en/getting_started/install/)
diff --git a/docs/es/getting-started/playground.md b/docs/es/getting-started/playground.md
deleted file mode 100644
index 1ab7246e2d4..00000000000
--- a/docs/es/getting-started/playground.md
+++ /dev/null
@@ -1,48 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_priority: 14
-toc_title: Playground
----
-
-# ClickHouse Playground {#clickhouse-playground}
-
-[ClickHouse Playground](https://play.clickhouse.tech?file=welcome) allows people to experiment with ClickHouse by running queries instantly, without setting up their own server or cluster.
-Several example datasets are available in Playground, as well as sample queries that show ClickHouse features.
-
-Queries are executed as a read-only user. It implies some limitations:
-
-- DDL queries are not allowed
-- INSERT queries are not allowed
-
-The following settings are also enforced:
-- [`max_result_bytes=10485760`](../operations/settings/query_complexity/#max-result-bytes)
-- [`max_result_rows=2000`](../operations/settings/query_complexity/#setting-max_result_rows)
-- [`result_overflow_mode=break`](../operations/settings/query_complexity/#result-overflow-mode)
-- [`max_execution_time=60000`](../operations/settings/query_complexity/#max-execution-time)
-
-ClickHouse Playground gives the experience of an m2.small
-[Managed Service for ClickHouse](https://cloud.yandex.com/services/managed-clickhouse)
-instance hosted in [Yandex.Cloud](https://cloud.yandex.com/).
-More information about [cloud providers](../commercial/cloud.md).
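-
-As a quick illustration of what fits within those read-only limits — this is only a sketch, and it assumes the `datasets.ontime` sample table from the example datasets is loaded in the Playground instance — a plain aggregation works fine:
-
-``` sql
--- Read-only aggregation; no DDL or INSERT required.
-SELECT Year, count() AS flights
-FROM datasets.ontime
-GROUP BY Year
-ORDER BY Year
-LIMIT 5;
-```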
-
-The ClickHouse Playground web interface makes requests via the ClickHouse [HTTP API](../interfaces/http.md).
-The Playground backend is just a ClickHouse cluster without any additional server-side application.
-The ClickHouse HTTPS endpoint is also available as a part of the Playground.
-
-You can make queries to the Playground using any HTTP client, for example [curl](https://curl.haxx.se) or [wget](https://www.gnu.org/software/wget/), or set up a connection using the [JDBC](../interfaces/jdbc.md) or [ODBC](../interfaces/odbc.md) driver.
-More information about software products that support ClickHouse is available [here](../interfaces/index.md).
-
-| Parameter | Value                                 |
-|:----------|:--------------------------------------|
-| Endpoint  | https://play-api.clickhouse.tech:8443 |
-| User      | `playground`                          |
-| Password  | `clickhouse`                          |
-
-Note that this endpoint requires a secure connection.
-
-Example:
-
-``` bash
-curl "https://play-api.clickhouse.tech:8443/?query=SELECT+'Play+ClickHouse!';&user=playground&password=clickhouse&database=datasets"
-```
diff --git a/docs/es/getting-started/tutorial.md b/docs/es/getting-started/tutorial.md
deleted file mode 100644
index 2cc9339f954..00000000000
--- a/docs/es/getting-started/tutorial.md
+++ /dev/null
@@ -1,664 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_priority: 12
-toc_title: Tutorial
----
-
-# ClickHouse Tutorial {#clickhouse-tutorial}
-
-## What to Expect from This Tutorial? {#what-to-expect-from-this-tutorial}
-
-By going through this tutorial, you'll learn how to set up a simple ClickHouse cluster. It'll be small, but fault-tolerant and scalable. Then we'll use one of the example datasets to fill it with data and execute some demo queries.
-
-## Single Node Setup {#single-node-setup}
-
-To postpone the complexities of a distributed environment, we'll start with deploying ClickHouse on a single server or virtual machine. ClickHouse is usually installed from [deb](install.md#install-from-deb-packages) or [rpm](install.md#from-rpm-packages) packages, but there are [alternatives](install.md#from-docker-image) for the operating systems that do not support them.
-
-For example, you have chosen `deb` packages and executed:
-
-``` bash
-{% include 'install/deb.sh' %}
-```
-
-What do we have in the packages that got installed:
-
-- `clickhouse-client` package contains the [clickhouse-client](../interfaces/cli.md) application, an interactive ClickHouse console client.
-- `clickhouse-common` package contains a ClickHouse executable file.
-- `clickhouse-server` package contains configuration files to run ClickHouse as a server.
-
-Server config files are located in `/etc/clickhouse-server/`. Before going further, please notice the `<path>` element in `config.xml`. Path determines the location for data storage, so it should be located on a volume with large disk capacity; the default value is `/var/lib/clickhouse/`. If you want to adjust the configuration, it's not handy to directly edit the `config.xml` file, considering it might get rewritten on future package updates. The recommended way to override the config elements is to create [files in the config.d directory](../operations/configuration-files.md) which serve as “patches” to config.xml.
-
-As you might have noticed, `clickhouse-server` is not launched automatically after package installation. It won't be automatically restarted after updates, either. The way you start the server depends on your init system; usually, it is:
-
-``` bash
-sudo service clickhouse-server start
-```
-
-or
-
-``` bash
-sudo /etc/init.d/clickhouse-server start
-```
-
-The default location for server logs is `/var/log/clickhouse-server/`. The server is ready to handle client connections once it logs the `Ready for connections` message.
-
-Once the `clickhouse-server` is up and running, we can use `clickhouse-client` to connect to the server and run some test queries like `SELECT "Hello, world!";`.
-
-<details markdown="1">
-
-<summary>Quick tips for clickhouse-client</summary>
-
-Interactive mode:
-
-``` bash
-clickhouse-client
-clickhouse-client --host=... --port=... --user=... --password=...
-```
-
-Enable multiline queries:
-
-``` bash
-clickhouse-client -m
-clickhouse-client --multiline
-```
-
-Run queries in batch-mode:
-
-``` bash
-clickhouse-client --query='SELECT 1'
-echo 'SELECT 1' | clickhouse-client
-clickhouse-client <<< 'SELECT 1'
-```
-
-Insert data from a file in specified format:
-
-``` bash
-clickhouse-client --query='INSERT INTO table VALUES' < data.txt
-clickhouse-client --query='INSERT INTO table FORMAT TabSeparated' < data.tsv
-```
-
-</details>
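-
-Once connected, a couple of no-table sanity checks are handy. This is just a sketch — any query works — but `version()` and `now()` are built-in functions that need no data at all:
-
-``` sql
-SELECT version();  -- which server version the client is talking to
-SELECT now();      -- current server time, confirms the query round-trip
-```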
-
-## Import Sample Dataset {#import-sample-dataset}
-
-Now it's time to fill our ClickHouse server with some sample data. In this tutorial, we'll use the anonymized data of Yandex.Metrica, the first service that ran ClickHouse in production way before it became open-source (more on that in the [history section](../introduction/history.md)). There are [multiple ways to import the Yandex.Metrica dataset](example-datasets/metrica.md), and for the sake of the tutorial, we'll go with the most realistic one.
-
-### Download and Extract Table Data {#download-and-extract-table-data}
-
-``` bash
-curl https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv
-curl https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv
-```
-
-The extracted files are about 10 GB in size.
-
-### Create Tables {#create-tables}
-
-As in most database management systems, ClickHouse logically groups tables into “databases”. There's a `default` database, but we'll create a new one named `tutorial`:
-
-``` bash
-clickhouse-client --query "CREATE DATABASE IF NOT EXISTS tutorial"
-```
-
-The syntax for creating tables is way more complicated compared to databases (see the [reference](../sql-reference/statements/create.md)). In general, a `CREATE TABLE` statement has to specify three key things (a minimal sketch illustrating them follows below):
-
-1. Name of the table to create.
-2. Table schema, i.e. list of columns and their [data types](../sql-reference/data-types/index.md).
-3. [Table engine](../engines/table-engines/index.md) and its settings, which determine all the details on how queries to this table will be physically executed.
-
-Yandex.Metrica is a web analytics service, and the sample dataset doesn't cover its full functionality, so there are only two tables to create:
-
-- `hits` is a table with every action done by all users on all websites covered by the service.
-- `visits` is a table that contains pre-built sessions instead of individual actions.
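-
-To make those three parts concrete, here is a minimal sketch; the table and columns are made up for illustration and are not part of the tutorial dataset:
-
-``` sql
-CREATE TABLE tutorial.example_events      -- 1. table name
-(
-    `EventDate` Date,                     -- 2. schema: columns and data types
-    `UserID` UInt64,
-    `Comment` String
-)
-ENGINE = MergeTree()                      -- 3. engine and its settings
-ORDER BY (EventDate, UserID);
-```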
- -Veamos y ejecutemos las consultas de tabla de creación real para estas tablas: - -``` sql -CREATE TABLE tutorial.hits_v1 -( - `WatchID` UInt64, - `JavaEnable` UInt8, - `Title` String, - `GoodEvent` Int16, - `EventTime` DateTime, - `EventDate` Date, - `CounterID` UInt32, - `ClientIP` UInt32, - `ClientIP6` FixedString(16), - `RegionID` UInt32, - `UserID` UInt64, - `CounterClass` Int8, - `OS` UInt8, - `UserAgent` UInt8, - `URL` String, - `Referer` String, - `URLDomain` String, - `RefererDomain` String, - `Refresh` UInt8, - `IsRobot` UInt8, - `RefererCategories` Array(UInt16), - `URLCategories` Array(UInt16), - `URLRegions` Array(UInt32), - `RefererRegions` Array(UInt32), - `ResolutionWidth` UInt16, - `ResolutionHeight` UInt16, - `ResolutionDepth` UInt8, - `FlashMajor` UInt8, - `FlashMinor` UInt8, - `FlashMinor2` String, - `NetMajor` UInt8, - `NetMinor` UInt8, - `UserAgentMajor` UInt16, - `UserAgentMinor` FixedString(2), - `CookieEnable` UInt8, - `JavascriptEnable` UInt8, - `IsMobile` UInt8, - `MobilePhone` UInt8, - `MobilePhoneModel` String, - `Params` String, - `IPNetworkID` UInt32, - `TraficSourceID` Int8, - `SearchEngineID` UInt16, - `SearchPhrase` String, - `AdvEngineID` UInt8, - `IsArtifical` UInt8, - `WindowClientWidth` UInt16, - `WindowClientHeight` UInt16, - `ClientTimeZone` Int16, - `ClientEventTime` DateTime, - `SilverlightVersion1` UInt8, - `SilverlightVersion2` UInt8, - `SilverlightVersion3` UInt32, - `SilverlightVersion4` UInt16, - `PageCharset` String, - `CodeVersion` UInt32, - `IsLink` UInt8, - `IsDownload` UInt8, - `IsNotBounce` UInt8, - `FUniqID` UInt64, - `HID` UInt32, - `IsOldCounter` UInt8, - `IsEvent` UInt8, - `IsParameter` UInt8, - `DontCountHits` UInt8, - `WithHash` UInt8, - `HitColor` FixedString(1), - `UTCEventTime` DateTime, - `Age` UInt8, - `Sex` UInt8, - `Income` UInt8, - `Interests` UInt16, - `Robotness` UInt8, - `GeneralInterests` Array(UInt16), - `RemoteIP` UInt32, - `RemoteIP6` FixedString(16), - `WindowName` Int32, - `OpenerName` Int32, - `HistoryLength` Int16, - `BrowserLanguage` FixedString(2), - `BrowserCountry` FixedString(2), - `SocialNetwork` String, - `SocialAction` String, - `HTTPError` UInt16, - `SendTiming` Int32, - `DNSTiming` Int32, - `ConnectTiming` Int32, - `ResponseStartTiming` Int32, - `ResponseEndTiming` Int32, - `FetchTiming` Int32, - `RedirectTiming` Int32, - `DOMInteractiveTiming` Int32, - `DOMContentLoadedTiming` Int32, - `DOMCompleteTiming` Int32, - `LoadEventStartTiming` Int32, - `LoadEventEndTiming` Int32, - `NSToDOMContentLoadedTiming` Int32, - `FirstPaintTiming` Int32, - `RedirectCount` Int8, - `SocialSourceNetworkID` UInt8, - `SocialSourcePage` String, - `ParamPrice` Int64, - `ParamOrderID` String, - `ParamCurrency` FixedString(3), - `ParamCurrencyID` UInt16, - `GoalsReached` Array(UInt32), - `OpenstatServiceName` String, - `OpenstatCampaignID` String, - `OpenstatAdID` String, - `OpenstatSourceID` String, - `UTMSource` String, - `UTMMedium` String, - `UTMCampaign` String, - `UTMContent` String, - `UTMTerm` String, - `FromTag` String, - `HasGCLID` UInt8, - `RefererHash` UInt64, - `URLHash` UInt64, - `CLID` UInt32, - `YCLID` UInt64, - `ShareService` String, - `ShareURL` String, - `ShareTitle` String, - `ParsedParams` Nested( - Key1 String, - Key2 String, - Key3 String, - Key4 String, - Key5 String, - ValueDouble Float64), - `IslandID` FixedString(16), - `RequestNum` UInt32, - `RequestTry` UInt8 -) -ENGINE = MergeTree() -PARTITION BY toYYYYMM(EventDate) -ORDER BY (CounterID, EventDate, intHash32(UserID)) -SAMPLE BY intHash32(UserID) 
-``` - -``` sql -CREATE TABLE tutorial.visits_v1 -( - `CounterID` UInt32, - `StartDate` Date, - `Sign` Int8, - `IsNew` UInt8, - `VisitID` UInt64, - `UserID` UInt64, - `StartTime` DateTime, - `Duration` UInt32, - `UTCStartTime` DateTime, - `PageViews` Int32, - `Hits` Int32, - `IsBounce` UInt8, - `Referer` String, - `StartURL` String, - `RefererDomain` String, - `StartURLDomain` String, - `EndURL` String, - `LinkURL` String, - `IsDownload` UInt8, - `TraficSourceID` Int8, - `SearchEngineID` UInt16, - `SearchPhrase` String, - `AdvEngineID` UInt8, - `PlaceID` Int32, - `RefererCategories` Array(UInt16), - `URLCategories` Array(UInt16), - `URLRegions` Array(UInt32), - `RefererRegions` Array(UInt32), - `IsYandex` UInt8, - `GoalReachesDepth` Int32, - `GoalReachesURL` Int32, - `GoalReachesAny` Int32, - `SocialSourceNetworkID` UInt8, - `SocialSourcePage` String, - `MobilePhoneModel` String, - `ClientEventTime` DateTime, - `RegionID` UInt32, - `ClientIP` UInt32, - `ClientIP6` FixedString(16), - `RemoteIP` UInt32, - `RemoteIP6` FixedString(16), - `IPNetworkID` UInt32, - `SilverlightVersion3` UInt32, - `CodeVersion` UInt32, - `ResolutionWidth` UInt16, - `ResolutionHeight` UInt16, - `UserAgentMajor` UInt16, - `UserAgentMinor` UInt16, - `WindowClientWidth` UInt16, - `WindowClientHeight` UInt16, - `SilverlightVersion2` UInt8, - `SilverlightVersion4` UInt16, - `FlashVersion3` UInt16, - `FlashVersion4` UInt16, - `ClientTimeZone` Int16, - `OS` UInt8, - `UserAgent` UInt8, - `ResolutionDepth` UInt8, - `FlashMajor` UInt8, - `FlashMinor` UInt8, - `NetMajor` UInt8, - `NetMinor` UInt8, - `MobilePhone` UInt8, - `SilverlightVersion1` UInt8, - `Age` UInt8, - `Sex` UInt8, - `Income` UInt8, - `JavaEnable` UInt8, - `CookieEnable` UInt8, - `JavascriptEnable` UInt8, - `IsMobile` UInt8, - `BrowserLanguage` UInt16, - `BrowserCountry` UInt16, - `Interests` UInt16, - `Robotness` UInt8, - `GeneralInterests` Array(UInt16), - `Params` Array(String), - `Goals` Nested( - ID UInt32, - Serial UInt32, - EventTime DateTime, - Price Int64, - OrderID String, - CurrencyID UInt32), - `WatchIDs` Array(UInt64), - `ParamSumPrice` Int64, - `ParamCurrency` FixedString(3), - `ParamCurrencyID` UInt16, - `ClickLogID` UInt64, - `ClickEventID` Int32, - `ClickGoodEvent` Int32, - `ClickEventTime` DateTime, - `ClickPriorityID` Int32, - `ClickPhraseID` Int32, - `ClickPageID` Int32, - `ClickPlaceID` Int32, - `ClickTypeID` Int32, - `ClickResourceID` Int32, - `ClickCost` UInt32, - `ClickClientIP` UInt32, - `ClickDomainID` UInt32, - `ClickURL` String, - `ClickAttempt` UInt8, - `ClickOrderID` UInt32, - `ClickBannerID` UInt32, - `ClickMarketCategoryID` UInt32, - `ClickMarketPP` UInt32, - `ClickMarketCategoryName` String, - `ClickMarketPPName` String, - `ClickAWAPSCampaignName` String, - `ClickPageName` String, - `ClickTargetType` UInt16, - `ClickTargetPhraseID` UInt64, - `ClickContextType` UInt8, - `ClickSelectType` Int8, - `ClickOptions` String, - `ClickGroupBannerID` Int32, - `OpenstatServiceName` String, - `OpenstatCampaignID` String, - `OpenstatAdID` String, - `OpenstatSourceID` String, - `UTMSource` String, - `UTMMedium` String, - `UTMCampaign` String, - `UTMContent` String, - `UTMTerm` String, - `FromTag` String, - `HasGCLID` UInt8, - `FirstVisit` DateTime, - `PredLastVisit` Date, - `LastVisit` Date, - `TotalVisits` UInt32, - `TraficSource` Nested( - ID Int8, - SearchEngineID UInt16, - AdvEngineID UInt8, - PlaceID UInt16, - SocialSourceNetworkID UInt8, - Domain String, - SearchPhrase String, - SocialSourcePage String), - `Attendance` FixedString(16), - 
`CLID` UInt32,
-    `YCLID` UInt64,
-    `NormalizedRefererHash` UInt64,
-    `SearchPhraseHash` UInt64,
-    `RefererDomainHash` UInt64,
-    `NormalizedStartURLHash` UInt64,
-    `StartURLDomainHash` UInt64,
-    `NormalizedEndURLHash` UInt64,
-    `TopLevelDomain` UInt64,
-    `URLScheme` UInt64,
-    `OpenstatServiceNameHash` UInt64,
-    `OpenstatCampaignIDHash` UInt64,
-    `OpenstatAdIDHash` UInt64,
-    `OpenstatSourceIDHash` UInt64,
-    `UTMSourceHash` UInt64,
-    `UTMMediumHash` UInt64,
-    `UTMCampaignHash` UInt64,
-    `UTMContentHash` UInt64,
-    `UTMTermHash` UInt64,
-    `FromHash` UInt64,
-    `WebVisorEnabled` UInt8,
-    `WebVisorActivity` UInt32,
-    `ParsedParams` Nested(
-        Key1 String,
-        Key2 String,
-        Key3 String,
-        Key4 String,
-        Key5 String,
-        ValueDouble Float64),
-    `Market` Nested(
-        Type UInt8,
-        GoalID UInt32,
-        OrderID String,
-        OrderPrice Int64,
-        PP UInt32,
-        DirectPlaceID UInt32,
-        DirectOrderID UInt32,
-        DirectBannerID UInt32,
-        GoodID String,
-        GoodName String,
-        GoodQuantity Int32,
-        GoodPrice Int64),
-    `IslandID` FixedString(16)
-)
-ENGINE = CollapsingMergeTree(Sign)
-PARTITION BY toYYYYMM(StartDate)
-ORDER BY (CounterID, StartDate, intHash32(UserID), VisitID)
-SAMPLE BY intHash32(UserID)
-```
-
-You can execute those queries using the interactive mode of `clickhouse-client` (just launch it in a terminal without specifying a query in advance) or try some [alternative interface](../interfaces/index.md) if you want.
-
-As we can see, `hits_v1` uses the [basic MergeTree engine](../engines/table-engines/mergetree-family/mergetree.md), while `visits_v1` uses the [CollapsingMergeTree](../engines/table-engines/mergetree-family/collapsingmergetree.md) variant.
-
-### Import Data {#import-data}
-
-Data import to ClickHouse is done via an [INSERT INTO](../sql-reference/statements/insert-into.md) query, like in many other SQL databases. However, data is usually provided in one of the [supported serialization formats](../interfaces/formats.md) instead of the `VALUES` clause (which is also supported).
-
-The files we downloaded earlier are in tab-separated format, so here's how to import them via the console client:
-
-``` bash
-clickhouse-client --query "INSERT INTO tutorial.hits_v1 FORMAT TSV" --max_insert_block_size=100000 < hits_v1.tsv
-clickhouse-client --query "INSERT INTO tutorial.visits_v1 FORMAT TSV" --max_insert_block_size=100000 < visits_v1.tsv
-```
-
-ClickHouse has a lot of [settings to tune](../operations/settings/index.md), and one way to specify them in the console client is via arguments, as we can see with `--max_insert_block_size`. The easiest way to figure out what settings are available, what they mean, and what the defaults are is to query the `system.settings` table:
-
-``` sql
-SELECT name, value, changed, description
-FROM system.settings
-WHERE name LIKE '%max_insert_b%'
-FORMAT TSV
-
-max_insert_block_size    1048576    0    "The maximum block size for insertion, if we control the creation of blocks for insertion."
-```
-
-Optionally you can [OPTIMIZE](../sql-reference/statements/misc.md#misc_operations-optimize) the tables after import. Tables that are configured with an engine from the MergeTree family always do merges of data parts in the background to optimize data storage (or at least check if it makes sense).
These queries force the table engine to do storage optimization right now instead of some time later:
-
-``` bash
-clickhouse-client --query "OPTIMIZE TABLE tutorial.hits_v1 FINAL"
-clickhouse-client --query "OPTIMIZE TABLE tutorial.visits_v1 FINAL"
-```
-
-These queries start an I/O- and CPU-intensive operation, so if the table consistently receives new data, it's better to leave it alone and let merges run in the background.
-
-Now we can check if the table import was successful:
-
-``` bash
-clickhouse-client --query "SELECT COUNT(*) FROM tutorial.hits_v1"
-clickhouse-client --query "SELECT COUNT(*) FROM tutorial.visits_v1"
-```
-
-## Example Queries {#example-queries}
-
-``` sql
-SELECT
-    StartURL AS URL,
-    AVG(Duration) AS AvgDuration
-FROM tutorial.visits_v1
-WHERE StartDate BETWEEN '2014-03-23' AND '2014-03-30'
-GROUP BY URL
-ORDER BY AvgDuration DESC
-LIMIT 10
-```
-
-``` sql
-SELECT
-    sum(Sign) AS visits,
-    sumIf(Sign, has(Goals.ID, 1105530)) AS goal_visits,
-    (100. * goal_visits) / visits AS goal_percent
-FROM tutorial.visits_v1
-WHERE (CounterID = 912887) AND (toYYYYMM(StartDate) = 201403) AND (domain(StartURL) = 'yandex.ru')
-```
-
-## Cluster Deployment {#cluster-deployment}
-
-A ClickHouse cluster is a homogeneous cluster. Steps to set up:
-
-1. Install ClickHouse server on all machines of the cluster
-2. Set up cluster configs in configuration files
-3. Create local tables on each instance
-4. Create a [Distributed table](../engines/table-engines/special/distributed.md)
-
-A [Distributed table](../engines/table-engines/special/distributed.md) is actually a kind of “view” to the local tables of a ClickHouse cluster. A SELECT query from a distributed table executes using resources of all cluster's shards. You may specify configs for multiple clusters and create multiple distributed tables that provide views to different clusters.
-
-Example config for a cluster with three shards, one replica each (the XML element names below are restored from the standard tutorial config, since the markup was lost in this copy):
-
-``` xml
-<remote_servers>
-    <perftest_3shards_1replicas>
-        <shard>
-            <replica>
-                <host>example-perftest01j.yandex.ru</host>
-                <port>9000</port>
-            </replica>
-        </shard>
-        <shard>
-            <replica>
-                <host>example-perftest02j.yandex.ru</host>
-                <port>9000</port>
-            </replica>
-        </shard>
-        <shard>
-            <replica>
-                <host>example-perftest03j.yandex.ru</host>
-                <port>9000</port>
-            </replica>
-        </shard>
-    </perftest_3shards_1replicas>
-</remote_servers>
-```
-
-For further demonstration, let's create a new local table with the same `CREATE TABLE` query that we used for `hits_v1`, but a different table name:
-
-``` sql
-CREATE TABLE tutorial.hits_local (...) ENGINE = MergeTree() ...
-```
-
-Creating a distributed table that provides a view into the local tables of the cluster:
-
-``` sql
-CREATE TABLE tutorial.hits_all AS tutorial.hits_local
-ENGINE = Distributed(perftest_3shards_1replicas, tutorial, hits_local, rand());
-```
-
-A common practice is to create similar distributed tables on all machines of the cluster. It allows running distributed queries on any machine of the cluster. There's also an alternative option to create a temporary distributed table for a given SELECT query using the [remote](../sql-reference/table-functions/remote.md) table function.
-
-Let's run [INSERT SELECT](../sql-reference/statements/insert-into.md) into the Distributed table to spread the table to multiple servers.
-
-``` sql
-INSERT INTO tutorial.hits_all SELECT * FROM tutorial.hits_v1;
-```
-
-!!! warning "Notice"
-    This approach is not suitable for the sharding of large tables.
    There is a separate tool, [clickhouse-copier](../operations/utilities/clickhouse-copier.md), that can re-shard arbitrary large tables.
-
-As you could expect, computationally heavy queries run N times faster if they utilize 3 servers instead of one.
-
-In this case, we have used a cluster with 3 shards, and each contains a single replica.
-
-To provide resilience in a production environment, we recommend that each shard should contain 2-3 replicas spread between multiple availability zones or datacenters (or at least racks). Note that ClickHouse supports an unlimited number of replicas.
-
-Example config for a cluster of one shard containing three replicas:
-
-``` xml
-<remote_servers>
-    ...
-    <perftest_1shards_3replicas>
-        <shard>
-            <replica>
-                <host>example-perftest01j.yandex.ru</host>
-                <port>9000</port>
-            </replica>
-            <replica>
-                <host>example-perftest02j.yandex.ru</host>
-                <port>9000</port>
-            </replica>
-            <replica>
-                <host>example-perftest03j.yandex.ru</host>
-                <port>9000</port>
-            </replica>
-        </shard>
-    </perftest_1shards_3replicas>
-</remote_servers>
-```
-
-To enable native replication, [ZooKeeper](http://zookeeper.apache.org/) is required. ClickHouse takes care of data consistency on all replicas and runs a restore procedure after failure automatically. It's recommended to deploy the ZooKeeper cluster on separate servers (where no other processes, including ClickHouse, are running).
-
-!!! note "Note"
-    ZooKeeper is not a strict requirement: in some simple cases, you can duplicate the data by writing it into all the replicas from your application code. This approach is **not** recommended; in this case, ClickHouse won't be able to guarantee data consistency on all replicas. Thus, it becomes the responsibility of your application.
-
-ZooKeeper locations are specified in the configuration file:
-
-``` xml
-<zookeeper>
-    <node>
-        <host>zoo01.yandex.ru</host>
-        <port>2181</port>
-    </node>
-    <node>
-        <host>zoo02.yandex.ru</host>
-        <port>2181</port>
-    </node>
-    <node>
-        <host>zoo03.yandex.ru</host>
-        <port>2181</port>
-    </node>
-</zookeeper>
-```
-
-Also, we need to set macros for identifying each shard and replica that are used on table creation:
-
-``` xml
-<macros>
-    <shard>01</shard>
-    <replica>01</replica>
-</macros>
-```
-
-If there are no replicas at the moment of replicated table creation, a new first replica is instantiated. If there are already live replicas, the new replica clones data from the existing ones. You have the option to create all replicated tables first, and then insert data into them. Another option is to create some replicas and add the others after or during data insertion.
-
-``` sql
-CREATE TABLE tutorial.hits_replica (...)
-ENGINE = ReplicatedMergeTree(
-    '/clickhouse_perftest/tables/{shard}/hits',
-    '{replica}'
-)
-...
-```
-
-Here we use the [ReplicatedMergeTree](../engines/table-engines/mergetree-family/replication.md) table engine. In the parameters, we specify the ZooKeeper path containing shard and replica identifiers.
-
-``` sql
-INSERT INTO tutorial.hits_replica SELECT * FROM tutorial.hits_local;
-```
-
-Replication works in multi-master mode. Data can be loaded into any replica, and the system then syncs it with the other instances automatically. Replication is asynchronous, so at a given moment, not all replicas may contain recently inserted data. At least one replica should be up to allow data ingestion. Others will sync up data and repair consistency once they become active again. Note that this approach allows for a low possibility of loss of recently inserted data.
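-
-One way to keep an eye on that synchronization — sketched here on the assumption that the default `system.replicas` system table is available — is to query it on any server:
-
-``` sql
--- One row per Replicated* table on this server.
-SELECT database, table, is_readonly, absolute_delay, queue_size
-FROM system.replicas;
-```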
- -[Artículo Original](https://clickhouse.tech/docs/en/getting_started/tutorial/) diff --git a/docs/es/guides/apply-catboost-model.md b/docs/es/guides/apply-catboost-model.md deleted file mode 100644 index b1fe50f3276..00000000000 --- a/docs/es/guides/apply-catboost-model.md +++ /dev/null @@ -1,239 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 41 -toc_title: "Aplicaci\xF3n de modelos CatBoost" ---- - -# Aplicación de un modelo Catboost en ClickHouse {#applying-catboost-model-in-clickhouse} - -[CatBoost](https://catboost.ai) es una biblioteca de impulso de gradiente libre y de código abierto desarrollada en [Yandex](https://yandex.com/company/) para el aprendizaje automático. - -Con esta instrucción, aprenderá a aplicar modelos preentrenados en ClickHouse ejecutando la inferencia de modelos desde SQL. - -Para aplicar un modelo CatBoost en ClickHouse: - -1. [Crear una tabla](#create-table). -2. [Insertar los datos en la tabla](#insert-data-to-table). -3. [Integrar CatBoost en ClickHouse](#integrate-catboost-into-clickhouse) (Paso opcional). -4. [Ejecute la inferencia del modelo desde SQL](#run-model-inference). - -Para obtener más información sobre la formación de modelos CatBoost, consulte [Entrenamiento y aplicación de modelos](https://catboost.ai/docs/features/training.html#training). - -## Requisito {#prerequisites} - -Si no tienes el [Acoplador](https://docs.docker.com/install/) sin embargo, instalarlo. - -!!! note "Nota" - [Acoplador](https://www.docker.com) es una plataforma de software que le permite crear contenedores que aíslan una instalación de CatBoost y ClickHouse del resto del sistema. - -Antes de aplicar un modelo CatBoost: - -**1.** Tire de la [Imagen de acoplador](https://hub.docker.com/r/yandex/tutorial-catboost-clickhouse) del registro: - -``` bash -$ docker pull yandex/tutorial-catboost-clickhouse -``` - -Esta imagen de Docker contiene todo lo que necesita para ejecutar CatBoost y ClickHouse: código, tiempo de ejecución, bibliotecas, variables de entorno y archivos de configuración. - -**2.** Asegúrese de que la imagen de Docker se haya extraído correctamente: - -``` bash -$ docker image ls -REPOSITORY TAG IMAGE ID CREATED SIZE -yandex/tutorial-catboost-clickhouse latest 622e4d17945b 22 hours ago 1.37GB -``` - -**3.** Inicie un contenedor Docker basado en esta imagen: - -``` bash -$ docker run -it -p 8888:8888 yandex/tutorial-catboost-clickhouse -``` - -## 1. Crear una tabla {#create-table} - -Para crear una tabla ClickHouse para el ejemplo de capacitación: - -**1.** Inicie el cliente de consola ClickHouse en el modo interactivo: - -``` bash -$ clickhouse client -``` - -!!! note "Nota" - El servidor ClickHouse ya se está ejecutando dentro del contenedor Docker. - -**2.** Cree la tabla usando el comando: - -``` sql -:) CREATE TABLE amazon_train -( - date Date MATERIALIZED today(), - ACTION UInt8, - RESOURCE UInt32, - MGR_ID UInt32, - ROLE_ROLLUP_1 UInt32, - ROLE_ROLLUP_2 UInt32, - ROLE_DEPTNAME UInt32, - ROLE_TITLE UInt32, - ROLE_FAMILY_DESC UInt32, - ROLE_FAMILY UInt32, - ROLE_CODE UInt32 -) -ENGINE = MergeTree ORDER BY date -``` - -**3.** Salir del cliente de la consola ClickHouse: - -``` sql -:) exit -``` - -## 2. 
Insertar los datos en la tabla {#insert-data-to-table} - -Para insertar los datos: - -**1.** Ejecute el siguiente comando: - -``` bash -$ clickhouse client --host 127.0.0.1 --query 'INSERT INTO amazon_train FORMAT CSVWithNames' < ~/amazon/train.csv -``` - -**2.** Inicie el cliente de consola ClickHouse en el modo interactivo: - -``` bash -$ clickhouse client -``` - -**3.** Asegúrese de que los datos se hayan cargado: - -``` sql -:) SELECT count() FROM amazon_train - -SELECT count() -FROM amazon_train - -+-count()-+ -| 65538 | -+-------+ -``` - -## 3. Integrar CatBoost en ClickHouse {#integrate-catboost-into-clickhouse} - -!!! note "Nota" - **Paso opcional.** La imagen de Docker contiene todo lo que necesita para ejecutar CatBoost y ClickHouse. - -Para integrar CatBoost en ClickHouse: - -**1.** Construir la biblioteca de evaluación. - -La forma más rápida de evaluar un modelo CatBoost es compilar `libcatboostmodel.` biblioteca. Para obtener más información acerca de cómo construir la biblioteca, vea [Documentación de CatBoost](https://catboost.ai/docs/concepts/c-plus-plus-api_dynamic-c-pluplus-wrapper.html). - -**2.** Cree un nuevo directorio en cualquier lugar y con cualquier nombre, por ejemplo, `data` y poner la biblioteca creada en ella. La imagen de Docker ya contiene la biblioteca `data/libcatboostmodel.so`. - -**3.** Cree un nuevo directorio para el modelo de configuración en cualquier lugar y con cualquier nombre, por ejemplo, `models`. - -**4.** Cree un archivo de configuración de modelo con cualquier nombre, por ejemplo, `models/amazon_model.xml`. - -**5.** Describir la configuración del modelo: - -``` xml - - - - catboost - - amazon - - /home/catboost/tutorial/catboost_model.bin - - 0 - - -``` - -**6.** Agregue la ruta de acceso a CatBoost y la configuración del modelo a la configuración de ClickHouse: - -``` xml - -/home/catboost/data/libcatboostmodel.so -/home/catboost/models/*_model.xml -``` - -## 4. Ejecute la inferencia del modelo desde SQL {#run-model-inference} - -Para el modelo de prueba, ejecute el cliente ClickHouse `$ clickhouse client`. - -Asegurémonos de que el modelo esté funcionando: - -``` sql -:) SELECT - modelEvaluate('amazon', - RESOURCE, - MGR_ID, - ROLE_ROLLUP_1, - ROLE_ROLLUP_2, - ROLE_DEPTNAME, - ROLE_TITLE, - ROLE_FAMILY_DESC, - ROLE_FAMILY, - ROLE_CODE) > 0 AS prediction, - ACTION AS target -FROM amazon_train -LIMIT 10 -``` - -!!! note "Nota" - Función [modelEvaluar](../sql-reference/functions/other-functions.md#function-modelevaluate) devuelve tupla con predicciones sin procesar por clase para modelos multiclase. - -Vamos a predecir la probabilidad: - -``` sql -:) SELECT - modelEvaluate('amazon', - RESOURCE, - MGR_ID, - ROLE_ROLLUP_1, - ROLE_ROLLUP_2, - ROLE_DEPTNAME, - ROLE_TITLE, - ROLE_FAMILY_DESC, - ROLE_FAMILY, - ROLE_CODE) AS prediction, - 1. / (1 + exp(-prediction)) AS probability, - ACTION AS target -FROM amazon_train -LIMIT 10 -``` - -!!! note "Nota" - Más información sobre [exp()](../sql-reference/functions/math-functions.md) función. - -Vamos a calcular LogLoss en la muestra: - -``` sql -:) SELECT -avg(tg * log(prob) + (1 - tg) * log(1 - prob)) AS logloss -FROM -( - SELECT - modelEvaluate('amazon', - RESOURCE, - MGR_ID, - ROLE_ROLLUP_1, - ROLE_ROLLUP_2, - ROLE_DEPTNAME, - ROLE_TITLE, - ROLE_FAMILY_DESC, - ROLE_FAMILY, - ROLE_CODE) AS prediction, - 1. / (1. + exp(-prediction)) AS prob, - ACTION AS tg - FROM amazon_train -) -``` - -!!! 
note "Nota" - Más información sobre [avg()](../sql-reference/aggregate-functions/reference.md#agg_function-avg) y [registro()](../sql-reference/functions/math-functions.md) función. - -[Artículo Original](https://clickhouse.tech/docs/en/guides/apply_catboost_model/) diff --git a/docs/es/guides/index.md b/docs/es/guides/index.md deleted file mode 100644 index c8332ac7846..00000000000 --- a/docs/es/guides/index.md +++ /dev/null @@ -1,16 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_folder_title: Guiar -toc_priority: 38 -toc_title: "Descripci\xF3n" ---- - -# Guías de ClickHouse {#clickhouse-guides} - -Lista de instrucciones detalladas paso a paso que ayudan a resolver varias tareas usando ClickHouse: - -- [Tutorial sobre la configuración simple del clúster](../getting-started/tutorial.md) -- [Aplicación de un modelo CatBoost en ClickHouse](apply-catboost-model.md) - -[Artículo Original](https://clickhouse.tech/docs/en/guides/) diff --git a/docs/es/images/column-oriented.gif b/docs/es/images/column-oriented.gif deleted file mode 100644 index d5ac7c82848..00000000000 Binary files a/docs/es/images/column-oriented.gif and /dev/null differ diff --git a/docs/es/images/logo.svg b/docs/es/images/logo.svg deleted file mode 100644 index b5ab923ff65..00000000000 --- a/docs/es/images/logo.svg +++ /dev/null @@ -1 +0,0 @@ - \ No newline at end of file diff --git a/docs/es/images/row-oriented.gif b/docs/es/images/row-oriented.gif deleted file mode 100644 index 41395b5693e..00000000000 Binary files a/docs/es/images/row-oriented.gif and /dev/null differ diff --git a/docs/es/index.md b/docs/es/index.md deleted file mode 100644 index c76fe32e33b..00000000000 --- a/docs/es/index.md +++ /dev/null @@ -1,97 +0,0 @@ ---- -machine_translated: false -machine_translated_rev: -toc_priority: 0 -toc_title: "Descripción" ---- - -# ¿Qué es ClickHouse? {#what-is-clickhouse} - -ClickHouse es un sistema de gestión de bases de datos (DBMS), orientado a columnas, para el procesamiento analítico de consultas en línea (OLAP). - -En un DBMS “normal”, orientado a filas, los datos se almacenan en este orden: - -| Fila | Argumento | JavaEnable | Titular | GoodEvent | EventTime | -|------|-------------|------------|---------------------------|-----------|---------------------| -| #0 | 89354350662 | 1 | Relaciones con inversores | 1 | 2016-05-18 05:19:20 | -| #1 | 90329509958 | 0 | Contáctenos | 1 | 2016-05-18 08:10:20 | -| #2 | 89953706054 | 1 | Mision | 1 | 2016-05-18 07:38:00 | -| #N | … | … | … | … | … | - -En otras palabras, todos los valores relacionados con una fila se almacenan físicamente uno junto al otro. - -Ejemplos de un DBMS orientado a filas son MySQL, Postgres y MS SQL Server. - -En un DBMS orientado a columnas, los datos se almacenan así: - -| Fila: | #0 | #1 | #2 | #N | -|-------------|---------------------------|---------------------|---------------------|-----| -| Argumento: | 89354350662 | 90329509958 | 89953706054 | … | -| JavaEnable: | 1 | 0 | 1 | … | -| Titular: | Relaciones con inversores | Contáctenos | Mision | … | -| GoodEvent: | 1 | 1 | 1 | … | -| EventTime: | 2016-05-18 05:19:20 | 2016-05-18 08:10:20 | 2016-05-18 07:38:00 | … | - -Estos ejemplos solo muestran el orden en el que se organizan los datos. Los valores de diferentes columnas se almacenan por separado y los datos de la misma columna se almacenan juntos. 
- -Ejemplos de un DBMS orientado a columnas: Vertica, Paraccel (Actian Matrix y Amazon Redshift), Sybase IQ, Exasol, Infobright, InfiniDB, MonetDB (VectorWise y Actian Vector), LucidDB, SAP HANA, Google Dremel, Google PowerDrill, Druid y kdb+. - -Los diferentes modos de ordenar los datos al guardarlos se adecúan mejor a diferentes escenarios. El escenario de acceso a los datos se refiere a qué consultas se hacen, con qué frecuencia y en qué proporción; cuántos datos se leen para cada tipo de consulta - filas, columnas y bytes; la relación entre lectura y actualización de datos; el tamaño de trabajo de los datos y qué tan localmente son usados; si se usan transacciones y qué tan aisladas están;requerimientos de replicación de los datos y de integridad lógica, requerimientos de latencia y caudal (throughput) para cada tipo de consulta, y cosas por el estilo. - -Cuanto mayor sea la carga en el sistema, más importante es personalizar el sistema configurado para que coincida con los requisitos del escenario de uso, y más fino será esta personalización. No existe un sistema que sea igualmente adecuado para escenarios significativamente diferentes. Si un sistema es adaptable a un amplio conjunto de escenarios, bajo una carga alta, el sistema manejará todos los escenarios igualmente mal, o funcionará bien para solo uno o algunos de los escenarios posibles. - -## Propiedades clave del escenario OLAP {#key-properties-of-olap-scenario} - -- La gran mayoría de las solicitudes son para acceso de lectura. -- Los datos se actualizan en lotes bastante grandes (\> 1000 filas), no por filas individuales; o no se actualiza en absoluto. -- Los datos se agregan a la base de datos pero no se modifican. -- Para las lecturas, se extrae un número bastante grande de filas de la base de datos, pero solo un pequeño subconjunto de columnas. -- Las tablas son “wide,” lo que significa que contienen un gran número de columnas. -- Las consultas son relativamente raras (generalmente cientos de consultas por servidor o menos por segundo). -- Para consultas simples, se permiten latencias de alrededor de 50 ms. -- Los valores de columna son bastante pequeños: números y cadenas cortas (por ejemplo, 60 bytes por URL). -- Requiere un alto rendimiento al procesar una sola consulta (hasta miles de millones de filas por segundo por servidor). -- Las transacciones no son necesarias. -- Bajos requisitos para la coherencia de los datos. -- Hay una tabla grande por consulta. Todas las mesas son pequeñas, excepto una. -- Un resultado de consulta es significativamente menor que los datos de origen. En otras palabras, los datos se filtran o se agregan, por lo que el resultado se ajusta a la RAM de un solo servidor. - -Es fácil ver que el escenario OLAP es muy diferente de otros escenarios populares (como el acceso OLTP o Key-Value). Por lo tanto, no tiene sentido intentar usar OLTP o una base de datos de valor clave para procesar consultas analíticas si desea obtener un rendimiento decente. Por ejemplo, si intenta usar MongoDB o Redis para análisis, obtendrá un rendimiento muy bajo en comparación con las bases de datos OLAP. - -## Por qué las bases de datos orientadas a columnas funcionan mejor en el escenario OLAP {#why-column-oriented-databases-work-better-in-the-olap-scenario} - -Las bases de datos orientadas a columnas son más adecuadas para los escenarios OLAP: son al menos 100 veces más rápidas en el procesamiento de la mayoría de las consultas. 
Las razones se explican en detalle a continuación, pero el hecho es más fácil de demostrar visualmente: - -**DBMS orientado a filas** - -![Row-oriented](images/row-oriented.gif#) - -**DBMS orientado a columnas** - -![Column-oriented](images/column-oriented.gif#) - -Ver la diferencia? - -### Entrada/salida {#inputoutput} - -1. Para una consulta analítica, solo es necesario leer un pequeño número de columnas de tabla. En una base de datos orientada a columnas, puede leer solo los datos que necesita. Por ejemplo, si necesita 5 columnas de 100, puede esperar una reducción de 20 veces en E/S. -2. Dado que los datos se leen en paquetes, es más fácil de comprimir. Los datos en columnas también son más fáciles de comprimir. Esto reduce aún más el volumen de E/S. -3. Debido a la reducción de E / S, más datos se ajustan a la memoria caché del sistema. - -Por ejemplo, la consulta “count the number of records for each advertising platform” requiere leer uno “advertising platform ID” columna, que ocupa 1 byte sin comprimir. Si la mayor parte del tráfico no proviene de plataformas publicitarias, puede esperar al menos una compresión de 10 veces de esta columna. Cuando se utiliza un algoritmo de compresión rápida, la descompresión de datos es posible a una velocidad de al menos varios gigabytes de datos sin comprimir por segundo. En otras palabras, esta consulta se puede procesar a una velocidad de aproximadamente varios miles de millones de filas por segundo en un único servidor. Esta velocidad se logra realmente en la práctica. - -### CPU {#cpu} - -Dado que la ejecución de una consulta requiere procesar un gran número de filas, ayuda enviar todas las operaciones para vectores completos en lugar de para filas separadas, o implementar el motor de consultas para que casi no haya costo de envío. Si no hace esto, con cualquier subsistema de disco medio decente, el intérprete de consultas inevitablemente detiene la CPU. Tiene sentido almacenar datos en columnas y procesarlos, cuando sea posible, por columnas. - -Hay dos formas de hacer esto: - -1. Un vector motor. Todas las operaciones se escriben para vectores, en lugar de para valores separados. Esto significa que no necesita llamar a las operaciones con mucha frecuencia, y los costos de envío son insignificantes. El código de operación contiene un ciclo interno optimizado. - -2. Generación de código. El código generado para la consulta tiene todas las llamadas indirectas. - -Esto no se hace en “normal” bases de datos, porque no tiene sentido cuando se ejecutan consultas simples. Sin embargo, hay excepciones. Por ejemplo, MemSQL utiliza la generación de código para reducir la latencia al procesar consultas SQL. (A modo de comparación, los DBMS analíticos requieren la optimización del rendimiento, no la latencia.) - -Tenga en cuenta que para la eficiencia de la CPU, el lenguaje de consulta debe ser declarativo (SQL o MDX), o al menos un vector (J, K). La consulta solo debe contener bucles implícitos, lo que permite la optimización. 
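Como ilustración de los dos puntos anteriores (E/S reducida y procesamiento por columnas), un esbozo, con nombres hipotéticos de tabla y columna, de la consulta "contar el número de registros para cada plataforma publicitaria" mencionada más arriba:

``` bash
# La consulta solo toca una columna estrecha (AdvEngineID, nombre supuesto),
# por lo que la E/S es mínima y la agregación puede procesarse por vectores.
clickhouse-client --query "
    SELECT AdvEngineID, count() AS c
    FROM hits
    GROUP BY AdvEngineID
    ORDER BY c DESC"
```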
- -{## [Artículo Original](https://clickhouse.tech/docs/en/) ##} diff --git a/docs/es/interfaces/cli.md b/docs/es/interfaces/cli.md deleted file mode 100644 index 395f9831a4e..00000000000 --- a/docs/es/interfaces/cli.md +++ /dev/null @@ -1,149 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 17 -toc_title: "Cliente de l\xEDnea de comandos" ---- - -# Cliente de línea de comandos {#command-line-client} - -ClickHouse proporciona un cliente de línea de comandos nativo: `clickhouse-client`. El cliente admite opciones de línea de comandos y archivos de configuración. Para obtener más información, consulte [Configuración](#interfaces_cli_configuration). - -[Instalar](../getting-started/index.md) desde el `clickhouse-client` paquete y ejecútelo con el comando `clickhouse-client`. - -``` bash -$ clickhouse-client -ClickHouse client version 19.17.1.1579 (official build). -Connecting to localhost:9000 as user default. -Connected to ClickHouse server version 19.17.1 revision 54428. - -:) -``` - -Las diferentes versiones de cliente y servidor son compatibles entre sí, pero es posible que algunas funciones no estén disponibles en clientes anteriores. Se recomienda utilizar la misma versión del cliente que la aplicación de servidor. Cuando intenta usar un cliente de la versión anterior, entonces el servidor, `clickhouse-client` muestra el mensaje: - - ClickHouse client version is older than ClickHouse server. It may lack support for new features. - -## Uso {#cli_usage} - -El cliente se puede utilizar en modo interactivo y no interactivo (por lotes). Para utilizar el modo por lotes, especifique el ‘query’ parámetro, o enviar datos a ‘stdin’ (verifica que ‘stdin’ no es un terminal), o ambos. Similar a la interfaz HTTP, cuando se utiliza el ‘query’ parámetro y el envío de datos a ‘stdin’ la solicitud es una concatenación de la ‘query’ parámetro, un avance de línea y los datos en ‘stdin’. Esto es conveniente para grandes consultas INSERT. - -Ejemplo de uso del cliente para insertar datos: - -``` bash -$ echo -ne "1, 'some text', '2016-08-14 00:00:00'\n2, 'some more text', '2016-08-14 00:00:01'" | clickhouse-client --database=test --query="INSERT INTO test FORMAT CSV"; - -$ cat <<_EOF | clickhouse-client --database=test --query="INSERT INTO test FORMAT CSV"; -3, 'some text', '2016-08-14 00:00:00' -4, 'some more text', '2016-08-14 00:00:01' -_EOF - -$ cat file.csv | clickhouse-client --database=test --query="INSERT INTO test FORMAT CSV"; -``` - -En el modo por lotes, el formato de datos predeterminado es TabSeparated. Puede establecer el formato en la cláusula FORMAT de la consulta. - -De forma predeterminada, solo puede procesar una única consulta en modo por lotes. Para realizar múltiples consultas desde un “script,” utilizar el `--multiquery` parámetro. Esto funciona para todas las consultas excepto INSERT . Los resultados de la consulta se generan consecutivamente sin separadores adicionales. Del mismo modo, para procesar un gran número de consultas, puede ejecutar ‘clickhouse-client’ para cada consulta. Tenga en cuenta que puede tomar decenas de milisegundos para iniciar el ‘clickhouse-client’ programa. - -En el modo interactivo, obtiene una línea de comandos donde puede ingresar consultas. - -Si ‘multiline’ no se especifica (el valor predeterminado): Para ejecutar la consulta, pulse Intro. El punto y coma no es necesario al final de la consulta. 
Para introducir una consulta de varias líneas, introduzca una barra invertida `\` antes de la alimentación de línea. Después de presionar Enter, se le pedirá que ingrese la siguiente línea de la consulta. - -Si se especifica multilínea: Para ejecutar una consulta, finalícela con un punto y coma y presione Intro. Si se omitió el punto y coma al final de la línea ingresada, se le pedirá que ingrese la siguiente línea de la consulta. - -Solo se ejecuta una sola consulta, por lo que se ignora todo después del punto y coma. - -Puede especificar `\G` en lugar o después del punto y coma. Esto indica el formato vertical. En este formato, cada valor se imprime en una línea separada, lo cual es conveniente para tablas anchas. Esta característica inusual se agregó por compatibilidad con la CLI de MySQL. - -La línea de comandos se basa en ‘replxx’ (similar a ‘readline’). En otras palabras, utiliza los atajos de teclado familiares y mantiene un historial. La historia está escrita para `~/.clickhouse-client-history`. - -De forma predeterminada, el formato utilizado es PrettyCompact. Puede cambiar el formato en la cláusula FORMAT de la consulta o especificando `\G` al final de la consulta, utilizando el `--format` o `--vertical` en la línea de comandos, o utilizando el archivo de configuración del cliente. - -Para salir del cliente, presione Ctrl+D o introduzca una de las siguientes opciones en lugar de una consulta: “exit”, “quit”, “logout”, “exit;”, “quit;”, “logout;”, “q”, “Q”, “:q” - -Al procesar una consulta, el cliente muestra: - -1. Progreso, que se actualiza no más de 10 veces por segundo (de forma predeterminada). Para consultas rápidas, es posible que el progreso no tenga tiempo para mostrarse. -2. La consulta con formato después del análisis, para la depuración. -3. El resultado en el formato especificado. -4. El número de líneas en el resultado, el tiempo transcurrido y la velocidad promedio de procesamiento de consultas. - -Puede cancelar una consulta larga presionando Ctrl + C. Sin embargo, aún tendrá que esperar un poco para que el servidor aborte la solicitud. No es posible cancelar una consulta en determinadas etapas. Si no espera y presiona Ctrl + C por segunda vez, el cliente saldrá. - -El cliente de línea de comandos permite pasar datos externos (tablas temporales externas) para consultar. Para obtener más información, consulte la sección “External data for query processing”. - -### Consultas con parámetros {#cli-queries-with-parameters} - -Puede crear una consulta con parámetros y pasarles valores desde la aplicación cliente. Esto permite evitar formatear consultas con valores dinámicos específicos en el lado del cliente. Por ejemplo: - -``` bash -$ clickhouse-client --param_parName="[1, 2]" -q "SELECT * FROM table WHERE a = {parName:Array(UInt16)}" -``` - -#### Sintaxis de consulta {#cli-queries-with-parameters-syntax} - -Formatee una consulta como de costumbre, luego coloque los valores que desea pasar de los parámetros de la aplicación a la consulta entre llaves en el siguiente formato: - -``` sql -{:} -``` - -- `name` — Placeholder identifier. In the console client it should be used in app parameters as `--param_ = value`. -- `data type` — [Tipo de datos](../sql-reference/data-types/index.md) del valor del parámetro de la aplicación. Por ejemplo, una estructura de datos como `(integer, ('string', integer))` puede tener el `Tuple(UInt8, Tuple(String, UInt8))` tipo de datos (también puede usar otro [entero](../sql-reference/data-types/int-uint.md) tipo). 
- -#### Ejemplo {#example} - -``` bash -$ clickhouse-client --param_tuple_in_tuple="(10, ('dt', 10))" -q "SELECT * FROM table WHERE val = {tuple_in_tuple:Tuple(UInt8, Tuple(String, UInt8))}" -``` - -## Configuración {#interfaces_cli_configuration} - -Puede pasar parámetros a `clickhouse-client` (todos los parámetros tienen un valor predeterminado) usando: - -- Desde la línea de comandos - - Las opciones de la línea de comandos anulan los valores y valores predeterminados de los archivos de configuración. - -- Archivos de configuración. - - Los valores de los archivos de configuración anulan los valores predeterminados. - -### Opciones de línea de comandos {#command-line-options} - -- `--host, -h` -– The server name, ‘localhost’ predeterminada. Puede utilizar el nombre o la dirección IPv4 o IPv6. -- `--port` – The port to connect to. Default value: 9000. Note that the HTTP interface and the native interface use different ports. -- `--user, -u` – The username. Default value: default. -- `--password` – The password. Default value: empty string. -- `--query, -q` – The query to process when using non-interactive mode. -- `--database, -d` – Select the current default database. Default value: the current database from the server settings (‘default’ predeterminada). -- `--multiline, -m` – If specified, allow multiline queries (do not send the query on Enter). -- `--multiquery, -n` – If specified, allow processing multiple queries separated by semicolons. -- `--format, -f` – Use the specified default format to output the result. -- `--vertical, -E` – If specified, use the Vertical format by default to output the result. This is the same as ‘–format=Vertical’. En este formato, cada valor se imprime en una línea separada, lo que es útil cuando se muestran tablas anchas. -- `--time, -t` – If specified, print the query execution time to ‘stderr’ en modo no interactivo. -- `--stacktrace` – If specified, also print the stack trace if an exception occurs. -- `--config-file` – The name of the configuration file. -- `--secure` – If specified, will connect to server over secure connection. -- `--param_` — Value for a [consulta con parámetros](#cli-queries-with-parameters). - -### Archivos de configuración {#configuration_files} - -`clickhouse-client` utiliza el primer archivo existente de los siguientes: - -- Definido en el `--config-file` parámetro. -- `./clickhouse-client.xml` -- `~/.clickhouse-client/config.xml` -- `/etc/clickhouse-client/config.xml` - -Ejemplo de un archivo de configuración: - -``` xml - - username - password - False - -``` - -[Artículo Original](https://clickhouse.tech/docs/en/interfaces/cli/) diff --git a/docs/es/interfaces/cpp.md b/docs/es/interfaces/cpp.md deleted file mode 100644 index bc5dc3dbc24..00000000000 --- a/docs/es/interfaces/cpp.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 24 -toc_title: Biblioteca de clientes de C++ ---- - -# Biblioteca de clientes de C++ {#c-client-library} - -Ver README en [Bienvenidos](https://github.com/ClickHouse/clickhouse-cpp) repositorio. 
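Como orientación, un esbozo no verificado de compilación con CMake (suponiendo un toolchain de C++ moderno instalado); el README del repositorio clickhouse-cpp es la referencia autorizada:

``` bash
# Pasos hipotéticos de compilación; consulte el README del repositorio.
git clone https://github.com/ClickHouse/clickhouse-cpp.git
cd clickhouse-cpp
mkdir build && cd build
cmake ..
make
```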
[Artículo Original](https://clickhouse.tech/docs/en/interfaces/cpp/)

diff --git a/docs/es/interfaces/formats.md b/docs/es/interfaces/formats.md
deleted file mode 100644
index 03c1873d306..00000000000
--- a/docs/es/interfaces/formats.md
+++ /dev/null
@@ -1,1212 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_priority: 21
toc_title: Formatos de entrada y salida
---

# Formatos para datos de entrada y salida {#formats}

ClickHouse puede aceptar y devolver datos en varios formatos. Un formato admitido para la entrada puede usarse para analizar los datos proporcionados a un `INSERT`, para ejecutar un `SELECT` sobre una tabla respaldada por archivos (como File, URL o HDFS) o para leer un diccionario externo. Un formato admitido para la salida puede usarse para organizar los resultados de un `SELECT` y para realizar un `INSERT` en una tabla respaldada por archivos.

Los formatos soportados son:

| Formato                                                         | Entrada | Salida |
|-----------------------------------------------------------------|---------|--------|
| [TabSeparated](#tabseparated)                                   | ✔       | ✔      |
| [TabSeparatedRaw](#tabseparatedraw)                             | ✗       | ✔      |
| [TabSeparatedWithNames](#tabseparatedwithnames)                 | ✔       | ✔      |
| [TabSeparatedWithNamesAndTypes](#tabseparatedwithnamesandtypes) | ✔       | ✔      |
| [Plantilla](#format-template)                                   | ✔       | ✔      |
| [TemplateIgnoreSpaces](#templateignorespaces)                   | ✔       | ✗      |
| [CSV](#csv)                                                     | ✔       | ✔      |
| [CSVWithNames](#csvwithnames)                                   | ✔       | ✔      |
| [CustomSeparated](#format-customseparated)                      | ✔       | ✔      |
| [Valor](#data-format-values)                                    | ✔       | ✔      |
| [Vertical](#vertical)                                           | ✗       | ✔      |
| [VerticalRaw](#verticalraw)                                     | ✗       | ✔      |
| [JSON](#json)                                                   | ✗       | ✔      |
| [JSONCompact](#jsoncompact)                                     | ✗       | ✔      |
| [JSONEachRow](#jsoneachrow)                                     | ✔       | ✔      |
| [TSKV](#tskv)                                                   | ✔       | ✔      |
| [Bastante](#pretty)                                             | ✗       | ✔      |
| [PrettyCompact](#prettycompact)                                 | ✗       | ✔      |
| [PrettyCompactMonoBlock](#prettycompactmonoblock)               | ✗       | ✔      |
| [PrettyNoEscapes](#prettynoescapes)                             | ✗       | ✔      |
| [PrettySpace](#prettyspace)                                     | ✗       | ✔      |
| [Protobuf](#protobuf)                                           | ✔       | ✔      |
| [Avro](#data-format-avro)                                       | ✔       | ✔      |
| [AvroConfluent](#data-format-avro-confluent)                    | ✔       | ✗      |
| [Parquet](#data-format-parquet)                                 | ✔       | ✔      |
| [ORC](#data-format-orc)                                         | ✔       | ✗      |
| [RowBinary](#rowbinary)                                         | ✔       | ✔      |
| [RowBinaryWithNamesAndTypes](#rowbinarywithnamesandtypes)       | ✔       | ✔      |
| [Nativo](#native)                                               | ✔       | ✔      |
| [Nulo](#null)                                                   | ✗       | ✔      |
| [XML](#xml)                                                     | ✗       | ✔      |
| [CapnProto](#capnproto)                                         | ✔       | ✗      |

Puede controlar algunos parámetros de procesamiento de formato con la configuración de ClickHouse. Para obtener más información, lea el apartado [Configuración](../operations/settings/settings.md).

## TabSeparated {#tabseparated}

En el formato TabSeparated, los datos se escriben por fila. Cada fila contiene valores separados por tabulaciones. Cada valor va seguido de una tabulación, excepto el último valor de la fila, que va seguido de un avance de línea. Se asumen estrictamente avances de línea de Unix (LF) en todas partes. La última fila también debe contener un avance de línea al final. Los valores se escriben en formato de texto, sin comillas y con los caracteres especiales escapados.

Este formato también está disponible bajo el nombre `TSV`.

El formato `TabSeparated` es conveniente para procesar datos con programas y scripts personalizados. Se usa de forma predeterminada en la interfaz HTTP y en el modo por lotes del cliente de línea de comandos. Este formato también permite transferir datos entre diferentes DBMS.
Por ejemplo, puede obtener un volcado de MySQL y subirlo a ClickHouse, o viceversa. - -El `TabSeparated` el formato admite la salida de valores totales (cuando se usa WITH TOTALS) y valores extremos (cuando ‘extremes’ se establece en 1). En estos casos, los valores totales y los extremos se emiten después de los datos principales. El resultado principal, los valores totales y los extremos están separados entre sí por una línea vacía. Ejemplo: - -``` sql -SELECT EventDate, count() AS c FROM test.hits GROUP BY EventDate WITH TOTALS ORDER BY EventDate FORMAT TabSeparated`` -``` - -``` text -2014-03-17 1406958 -2014-03-18 1383658 -2014-03-19 1405797 -2014-03-20 1353623 -2014-03-21 1245779 -2014-03-22 1031592 -2014-03-23 1046491 - -1970-01-01 8873898 - -2014-03-17 1031592 -2014-03-23 1406958 -``` - -### Formato de datos {#data-formatting} - -Los números enteros se escriben en forma decimal. Los números pueden contener un extra “+” carácter al principio (ignorado al analizar y no grabado al formatear). Los números no negativos no pueden contener el signo negativo. Al leer, se permite analizar una cadena vacía como cero, o (para tipos con signo) una cadena que consiste en solo un signo menos como cero. Los números que no encajan en el tipo de datos correspondiente se pueden analizar como un número diferente, sin un mensaje de error. - -Los números de punto flotante se escriben en forma decimal. El punto se usa como separador decimal. Las entradas exponenciales son compatibles, al igual que ‘inf’, ‘+inf’, ‘-inf’, y ‘nan’. Una entrada de números de coma flotante puede comenzar o terminar con un punto decimal. -Durante el formateo, la precisión puede perderse en los números de coma flotante. -Durante el análisis, no es estrictamente necesario leer el número representable de la máquina más cercano. - -Las fechas se escriben en formato AAAA-MM-DD y se analizan en el mismo formato, pero con los caracteres como separadores. -Las fechas con horas se escriben en el formato `YYYY-MM-DD hh:mm:ss` y analizado en el mismo formato, pero con cualquier carácter como separadores. -Todo esto ocurre en la zona horaria del sistema en el momento en que se inicia el cliente o servidor (dependiendo de cuál de ellos formatea los datos). Para fechas con horarios, no se especifica el horario de verano. Por lo tanto, si un volcado tiene tiempos durante el horario de verano, el volcado no coincide inequívocamente con los datos, y el análisis seleccionará una de las dos veces. -Durante una operación de lectura, las fechas incorrectas y las fechas con horas se pueden analizar con desbordamiento natural o como fechas y horas nulas, sin un mensaje de error. - -Como excepción, el análisis de fechas con horas también se admite en el formato de marca de tiempo Unix, si consta de exactamente 10 dígitos decimales. El resultado no depende de la zona horaria. Los formatos AAAA-MM-DD hh:mm:ss y NNNNNNNNNN se diferencian automáticamente. - -Las cadenas se generan con caracteres especiales de escape de barra invertida. Las siguientes secuencias de escape se utilizan para la salida: `\b`, `\f`, `\r`, `\n`, `\t`, `\0`, `\'`, `\\`. El análisis también admite las secuencias `\a`, `\v`, y `\xHH` (secuencias de escape hexagonales) y cualquier `\c` secuencias, donde `c` es cualquier carácter (estas secuencias se convierten en `c`). Por lo tanto, la lectura de datos admite formatos donde un avance de línea se puede escribir como `\n` o `\` o como un avance de línea. 
Por ejemplo, la cadena `Hello world` con un avance de línea entre las palabras en lugar de espacio se puede analizar en cualquiera de las siguientes variaciones: - -``` text -Hello\nworld - -Hello\ -world -``` - -La segunda variante es compatible porque MySQL la usa al escribir volcados separados por tabuladores. - -El conjunto mínimo de caracteres que debe escapar al pasar datos en formato TabSeparated: tabulación, salto de línea (LF) y barra invertida. - -Solo se escapa un pequeño conjunto de símbolos. Puede tropezar fácilmente con un valor de cadena que su terminal arruinará en la salida. - -Las matrices se escriben como una lista de valores separados por comas entre corchetes. Los elementos numéricos de la matriz tienen el formato normal. `Date` y `DateTime` están escritos entre comillas simples. Las cadenas se escriben entre comillas simples con las mismas reglas de escape que las anteriores. - -[NULL](../sql-reference/syntax.md) se formatea como `\N`. - -Cada elemento de [Anidar](../sql-reference/data-types/nested-data-structures/nested.md) estructuras se representa como una matriz. - -Por ejemplo: - -``` sql -CREATE TABLE nestedt -( - `id` UInt8, - `aux` Nested( - a UInt8, - b String - ) -) -ENGINE = TinyLog -``` - -``` sql -INSERT INTO nestedt Values ( 1, [1], ['a']) -``` - -``` sql -SELECT * FROM nestedt FORMAT TSV -``` - -``` text -1 [1] ['a'] -``` - -## TabSeparatedRaw {#tabseparatedraw} - -Difiere de `TabSeparated` formato en que las filas se escriben sin escapar. -Este formato solo es apropiado para generar un resultado de consulta, pero no para analizar (recuperar datos para insertar en una tabla). - -Este formato también está disponible bajo el nombre `TSVRaw`. - -## TabSeparatedWithNames {#tabseparatedwithnames} - -Difiere de la `TabSeparated` formato en que los nombres de columna se escriben en la primera fila. -Durante el análisis, la primera fila se ignora por completo. No puede usar nombres de columna para determinar su posición o para comprobar su corrección. -(Se puede agregar soporte para analizar la fila de encabezado en el futuro.) - -Este formato también está disponible bajo el nombre `TSVWithNames`. - -## TabSeparatedWithNamesAndTypes {#tabseparatedwithnamesandtypes} - -Difiere de la `TabSeparated` formato en que los nombres de columna se escriben en la primera fila, mientras que los tipos de columna están en la segunda fila. -Durante el análisis, la primera y la segunda filas se ignoran por completo. - -Este formato también está disponible bajo el nombre `TSVWithNamesAndTypes`. - -## Plantilla {#format-template} - -Este formato permite especificar una cadena de formato personalizado con marcadores de posición para los valores con una regla de escape especificada. - -Utiliza la configuración `format_template_resultset`, `format_template_row`, `format_template_rows_between_delimiter` and some settings of other formats (e.g. `output_format_json_quote_64bit_integers` cuando se utiliza `JSON` escapar, ver más) - -Configuración `format_template_row` especifica la ruta de acceso al archivo, que contiene una cadena de formato para las filas con la siguiente sintaxis: - -`delimiter_1${column_1:serializeAs_1}delimiter_2${column_2:serializeAs_2} ... delimiter_N`, - -donde `delimiter_i` es un delimitador entre valores (`$` símbolo se puede escapar como `$$`), -`column_i` es un nombre o índice de una columna cuyos valores se deben seleccionar o insertar (si está vacío, se omitirá la columna), -`serializeAs_i` es una regla de escape para los valores de columna. 
Se admiten las siguientes reglas de escape: - -- `CSV`, `JSON`, `XML` (similar a los formatos de los mismos nombres) -- `Escaped` (similar a `TSV`) -- `Quoted` (similar a `Values`) -- `Raw` (sin escapar, de manera similar a `TSVRaw`) -- `None` (sin regla de escape, ver más) - -Si se omite una regla de escape, entonces `None` se utilizará. `XML` y `Raw` son adecuados sólo para la salida. - -Entonces, para la siguiente cadena de formato: - - `Search phrase: ${SearchPhrase:Quoted}, count: ${c:Escaped}, ad price: $$${price:JSON};` - -los valores de `SearchPhrase`, `c` y `price` columnas, que se escapan como `Quoted`, `Escaped` y `JSON` se imprimirá (para seleccionar) o se esperará (para insertar) entre `Search phrase:`, `, count:`, `, ad price: $` y `;` delimitadores respectivamente. Por ejemplo: - -`Search phrase: 'bathroom interior design', count: 2166, ad price: $3;` - -El `format_template_rows_between_delimiter` setting especifica el delimitador entre filas, que se imprime (o se espera) después de cada fila, excepto la última (`\n` predeterminada) - -Configuración `format_template_resultset` especifica la ruta al archivo, que contiene una cadena de formato para el conjunto de resultados. La cadena de formato para el conjunto de resultados tiene la misma sintaxis que una cadena de formato para la fila y permite especificar un prefijo, un sufijo y una forma de imprimir información adicional. Contiene los siguientes marcadores de posición en lugar de nombres de columna: - -- `data` son las filas con datos en `format_template_row` formato, separados por `format_template_rows_between_delimiter`. Este marcador de posición debe ser el primer marcador de posición en la cadena de formato. -- `totals` es la fila con valores totales en `format_template_row` formato (cuando se usa WITH TOTALS) -- `min` es la fila con valores mínimos en `format_template_row` formato (cuando los extremos se establecen en 1) -- `max` es la fila con valores máximos en `format_template_row` formato (cuando los extremos se establecen en 1) -- `rows` es el número total de filas de salida -- `rows_before_limit` es el número mínimo de filas que habría habido sin LIMIT. Salida solo si la consulta contiene LIMIT. Si la consulta contiene GROUP BY, rows_before_limit_at_least es el número exacto de filas que habría habido sin un LIMIT . -- `time` es el tiempo de ejecución de la solicitud en segundos -- `rows_read` es el número de filas que se ha leído -- `bytes_read` es el número de bytes (sin comprimir) que se ha leído - -Marcador `data`, `totals`, `min` y `max` no debe tener una regla de escape especificada (o `None` debe especificarse explícitamente). Los marcadores de posición restantes pueden tener cualquier regla de escape especificada. -Si el `format_template_resultset` valor es una cadena vacía, `${data}` se utiliza como valor predeterminado. -Para el formato de consultas de inserción permite omitir algunas columnas o algunos campos si prefijo o sufijo (ver ejemplo). - -Seleccionar ejemplo: - -``` sql -SELECT SearchPhrase, count() AS c FROM test.hits GROUP BY SearchPhrase ORDER BY c DESC LIMIT 5 FORMAT Template SETTINGS -format_template_resultset = '/some/path/resultset.format', format_template_row = '/some/path/row.format', format_template_rows_between_delimiter = '\n ' -``` - -`/some/path/resultset.format`: - -``` text - - Search phrases - - - - ${data} -
  </table>
  <table border="1"> <caption>Max</caption>
    ${max}
  </table>
 <b>Processed ${rows_read:XML} rows in ${time:XML} sec</b>
 </body>
</html>
```

`/some/path/row.format`:

``` text
<tr> <td>${0:XML}</td> <td>${1:XML}</td> </tr>
```

Resultado:

``` html
<!DOCTYPE HTML>
<html> <head> <title>Search phrases</title> </head>
 <body>
  <table border="1"> <caption>Search phrases</caption>
    <tr> <th>Search phrase</th> <th>Count</th> </tr>
    <tr> <td></td> <td>8267016</td> </tr>
    <tr> <td>bathroom interior design</td> <td>2166</td> </tr>
    <tr> <td>yandex</td> <td>1655</td> </tr>
    <tr> <td>spring 2014 fashion</td> <td>1549</td> </tr>
    <tr> <td>freeform photos</td> <td>1480</td> </tr>
  </table>
  <table border="1"> <caption>Max</caption>
    <tr> <td>8873898</td> </tr>
  </table>
- Processed 3095973 rows in 0.1569913 sec - - -``` - -Insertar ejemplo: - -``` text -Some header -Page views: 5, User id: 4324182021466249494, Useless field: hello, Duration: 146, Sign: -1 -Page views: 6, User id: 4324182021466249494, Useless field: world, Duration: 185, Sign: 1 -Total rows: 2 -``` - -``` sql -INSERT INTO UserActivity FORMAT Template SETTINGS -format_template_resultset = '/some/path/resultset.format', format_template_row = '/some/path/row.format' -``` - -`/some/path/resultset.format`: - -``` text -Some header\n${data}\nTotal rows: ${:CSV}\n -``` - -`/some/path/row.format`: - -``` text -Page views: ${PageViews:CSV}, User id: ${UserID:CSV}, Useless field: ${:CSV}, Duration: ${Duration:CSV}, Sign: ${Sign:CSV} -``` - -`PageViews`, `UserID`, `Duration` y `Sign` dentro de los marcadores de posición son nombres de columnas en la tabla. Valores después `Useless field` en filas y después `\nTotal rows:` en el sufijo será ignorado. -Todos los delimitadores de los datos de entrada deben ser estrictamente iguales a los delimitadores de las cadenas de formato especificadas. - -## TemplateIgnoreSpaces {#templateignorespaces} - -Este formato es adecuado sólo para la entrada. -Similar a `Template`, pero omite caracteres de espacio en blanco entre delimitadores y valores en la secuencia de entrada. Sin embargo, si las cadenas de formato contienen caracteres de espacio en blanco, se esperarán estos caracteres en la secuencia de entrada. También permite especificar marcadores de posición vacíos (`${}` o `${:None}`) para dividir algún delimitador en partes separadas para ignorar los espacios entre ellos. Dichos marcadores de posición se usan solo para omitir caracteres de espacio en blanco. -Es posible leer `JSON` usando este formato, si los valores de las columnas tienen el mismo orden en todas las filas. Por ejemplo, la siguiente solicitud se puede utilizar para insertar datos del ejemplo de salida de formato [JSON](#json): - -``` sql -INSERT INTO table_name FORMAT TemplateIgnoreSpaces SETTINGS -format_template_resultset = '/some/path/resultset.format', format_template_row = '/some/path/row.format', format_template_rows_between_delimiter = ',' -``` - -`/some/path/resultset.format`: - -``` text -{${}"meta"${}:${:JSON},${}"data"${}:${}[${data}]${},${}"totals"${}:${:JSON},${}"extremes"${}:${:JSON},${}"rows"${}:${:JSON},${}"rows_before_limit_at_least"${}:${:JSON}${}} -``` - -`/some/path/row.format`: - -``` text -{${}"SearchPhrase"${}:${}${phrase:JSON}${},${}"c"${}:${}${cnt:JSON}${}} -``` - -## TSKV {#tskv} - -Similar a TabSeparated , pero genera un valor en formato name=value . Los nombres se escapan de la misma manera que en el formato TabSeparated, y el símbolo = también se escapa. - -``` text -SearchPhrase= count()=8267016 -SearchPhrase=bathroom interior design count()=2166 -SearchPhrase=yandex count()=1655 -SearchPhrase=2014 spring fashion count()=1549 -SearchPhrase=freeform photos count()=1480 -SearchPhrase=angelina jolie count()=1245 -SearchPhrase=omsk count()=1112 -SearchPhrase=photos of dog breeds count()=1091 -SearchPhrase=curtain designs count()=1064 -SearchPhrase=baku count()=1000 -``` - -[NULL](../sql-reference/syntax.md) se formatea como `\N`. - -``` sql -SELECT * FROM t_null FORMAT TSKV -``` - -``` text -x=1 y=\N -``` - -Cuando hay una gran cantidad de columnas pequeñas, este formato no es efectivo y generalmente no hay razón para usarlo. Sin embargo, no es peor que JSONEachRow en términos de eficiencia. - -Both data output and parsing are supported in this format. 
For parsing, any order is supported for the values of different columns. It is acceptable for some values to be omitted – they are treated as equal to their default values. In this case, zeros and blank rows are used as default values. Complex values that could be specified in the table are not supported as defaults. - -El análisis permite la presencia del campo adicional `tskv` sin el signo igual o un valor. Este campo se ignora. - -## CSV {#csv} - -Formato de valores separados por comas ([RFC](https://tools.ietf.org/html/rfc4180)). - -Al formatear, las filas están encerradas en comillas dobles. Una comilla doble dentro de una cadena se genera como dos comillas dobles en una fila. No hay otras reglas para escapar de los personajes. Fecha y fecha-hora están encerrados en comillas dobles. Los números se emiten sin comillas. Los valores están separados por un carácter delimitador, que es `,` predeterminada. El carácter delimitador se define en la configuración [Formato_csv_delimiter](../operations/settings/settings.md#settings-format_csv_delimiter). Las filas se separan usando el avance de línea Unix (LF). Las matrices se serializan en CSV de la siguiente manera: primero, la matriz se serializa en una cadena como en el formato TabSeparated, y luego la cadena resultante se envía a CSV en comillas dobles. Las tuplas en formato CSV se serializan como columnas separadas (es decir, se pierde su anidamiento en la tupla). - -``` bash -$ clickhouse-client --format_csv_delimiter="|" --query="INSERT INTO test.csv FORMAT CSV" < data.csv -``` - -\*De forma predeterminada, el delimitador es `,`. Ver el [Formato_csv_delimiter](../operations/settings/settings.md#settings-format_csv_delimiter) para obtener más información. - -Al analizar, todos los valores se pueden analizar con o sin comillas. Ambas comillas dobles y simples son compatibles. Las filas también se pueden organizar sin comillas. En este caso, se analizan hasta el carácter delimitador o el avance de línea (CR o LF). En violación del RFC, al analizar filas sin comillas, se ignoran los espacios y pestañas iniciales y finales. Para el avance de línea, se admiten los tipos Unix (LF), Windows (CR LF) y Mac OS Classic (CR LF). - -Los valores de entrada vacíos sin comillas se sustituyen por valores predeterminados para las columnas respectivas, si -[Entrada_format_defaults_for_omitted_fields](../operations/settings/settings.md#session_settings-input_format_defaults_for_omitted_fields) -está habilitado. - -`NULL` se formatea como `\N` o `NULL` o una cadena vacía sin comillas (consulte la configuración [input_format_csv_unquoted_null_literal_as_null](../operations/settings/settings.md#settings-input_format_csv_unquoted_null_literal_as_null) y [Entrada_format_defaults_for_omitted_fields](../operations/settings/settings.md#session_settings-input_format_defaults_for_omitted_fields)). - -El formato CSV admite la salida de totales y extremos de la misma manera que `TabSeparated`. - -## CSVWithNames {#csvwithnames} - -También imprime la fila del encabezado, similar a `TabSeparatedWithNames`. 
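Un esbozo con una tabla hipotética `test.visits`: la primera línea de la salida contiene los nombres de las columnas y las líneas restantes, los datos:

``` bash
# CSVWithNames antepone una fila de cabecera con los nombres de columna.
clickhouse-client --query="SELECT * FROM test.visits LIMIT 2 FORMAT CSVWithNames"
```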
- -## CustomSeparated {#format-customseparated} - -Similar a [Plantilla](#format-template), pero imprime o lee todas las columnas y usa la regla de escape de la configuración `format_custom_escaping_rule` y delimitadores desde la configuración `format_custom_field_delimiter`, `format_custom_row_before_delimiter`, `format_custom_row_after_delimiter`, `format_custom_row_between_delimiter`, `format_custom_result_before_delimiter` y `format_custom_result_after_delimiter`, no de cadenas de formato. -También hay `CustomSeparatedIgnoreSpaces` formato, que es similar a `TemplateIgnoreSpaces`. - -## JSON {#json} - -Salida de datos en formato JSON. Además de las tablas de datos, también genera nombres y tipos de columnas, junto con información adicional: el número total de filas de salida y el número de filas que podrían haberse generado si no hubiera un LIMIT . Ejemplo: - -``` sql -SELECT SearchPhrase, count() AS c FROM test.hits GROUP BY SearchPhrase WITH TOTALS ORDER BY c DESC LIMIT 5 FORMAT JSON -``` - -``` json -{ - "meta": - [ - { - "name": "SearchPhrase", - "type": "String" - }, - { - "name": "c", - "type": "UInt64" - } - ], - - "data": - [ - { - "SearchPhrase": "", - "c": "8267016" - }, - { - "SearchPhrase": "bathroom interior design", - "c": "2166" - }, - { - "SearchPhrase": "yandex", - "c": "1655" - }, - { - "SearchPhrase": "spring 2014 fashion", - "c": "1549" - }, - { - "SearchPhrase": "freeform photos", - "c": "1480" - } - ], - - "totals": - { - "SearchPhrase": "", - "c": "8873898" - }, - - "extremes": - { - "min": - { - "SearchPhrase": "", - "c": "1480" - }, - "max": - { - "SearchPhrase": "", - "c": "8267016" - } - }, - - "rows": 5, - - "rows_before_limit_at_least": 141137 -} -``` - -El JSON es compatible con JavaScript. Para garantizar esto, algunos caracteres se escapan adicionalmente: la barra inclinada `/` se escapa como `\/`; saltos de línea alternativos `U+2028` y `U+2029`, que rompen algunos navegadores, se escapan como `\uXXXX`. Los caracteres de control ASCII se escapan: retroceso, avance de formulario, avance de línea, retorno de carro y tabulación horizontal se reemplazan con `\b`, `\f`, `\n`, `\r`, `\t` , así como los bytes restantes en el rango 00-1F usando `\uXXXX` sequences. Invalid UTF-8 sequences are changed to the replacement character � so the output text will consist of valid UTF-8 sequences. For compatibility with JavaScript, Int64 and UInt64 integers are enclosed in double-quotes by default. To remove the quotes, you can set the configuration parameter [output_format_json_quote_64bit_integers](../operations/settings/settings.md#session_settings-output_format_json_quote_64bit_integers) a 0. - -`rows` – The total number of output rows. - -`rows_before_limit_at_least` El número mínimo de filas habría sido sin LIMIT . Salida solo si la consulta contiene LIMIT. -Si la consulta contiene GROUP BY, rows_before_limit_at_least es el número exacto de filas que habría habido sin un LIMIT . - -`totals` – Total values (when using WITH TOTALS). - -`extremes` – Extreme values (when extremes are set to 1). - -Este formato solo es apropiado para generar un resultado de consulta, pero no para analizar (recuperar datos para insertar en una tabla). - -Soporta ClickHouse [NULL](../sql-reference/syntax.md), que se muestra como `null` en la salida JSON. - -Ver también el [JSONEachRow](#jsoneachrow) formato. - -## JSONCompact {#jsoncompact} - -Difiere de JSON solo en que las filas de datos se generan en matrices, no en objetos. 
- -Ejemplo: - -``` json -{ - "meta": - [ - { - "name": "SearchPhrase", - "type": "String" - }, - { - "name": "c", - "type": "UInt64" - } - ], - - "data": - [ - ["", "8267016"], - ["bathroom interior design", "2166"], - ["yandex", "1655"], - ["fashion trends spring 2014", "1549"], - ["freeform photo", "1480"] - ], - - "totals": ["","8873898"], - - "extremes": - { - "min": ["","1480"], - "max": ["","8267016"] - }, - - "rows": 5, - - "rows_before_limit_at_least": 141137 -} -``` - -Este formato solo es apropiado para generar un resultado de consulta, pero no para analizar (recuperar datos para insertar en una tabla). -Ver también el `JSONEachRow` formato. - -## JSONEachRow {#jsoneachrow} - -Al usar este formato, ClickHouse genera filas como objetos JSON separados, delimitados por nuevas líneas, pero los datos en su conjunto no son JSON válidos. - -``` json -{"SearchPhrase":"curtain designs","count()":"1064"} -{"SearchPhrase":"baku","count()":"1000"} -{"SearchPhrase":"","count()":"8267016"} -``` - -Al insertar los datos, debe proporcionar un objeto JSON independiente para cada fila. - -### Insertar datos {#inserting-data} - -``` sql -INSERT INTO UserActivity FORMAT JSONEachRow {"PageViews":5, "UserID":"4324182021466249494", "Duration":146,"Sign":-1} {"UserID":"4324182021466249494","PageViews":6,"Duration":185,"Sign":1} -``` - -ClickHouse permite: - -- Cualquier orden de pares clave-valor en el objeto. -- Omitiendo algunos valores. - -ClickHouse ignora los espacios entre los elementos y las comas después de los objetos. Puede pasar todos los objetos en una línea. No tiene que separarlos con saltos de línea. - -**Procesamiento de valores omitidos** - -ClickHouse sustituye los valores omitidos por los valores predeterminados para el [tipos de datos](../sql-reference/data-types/index.md). - -Si `DEFAULT expr` se especifica, ClickHouse utiliza diferentes reglas de sustitución dependiendo de la [Entrada_format_defaults_for_omitted_fields](../operations/settings/settings.md#session_settings-input_format_defaults_for_omitted_fields) configuración. - -Considere la siguiente tabla: - -``` sql -CREATE TABLE IF NOT EXISTS example_table -( - x UInt32, - a DEFAULT x * 2 -) ENGINE = Memory; -``` - -- Si `input_format_defaults_for_omitted_fields = 0`, entonces el valor predeterminado para `x` y `a` igual `0` (como el valor predeterminado para el `UInt32` tipo de datos). -- Si `input_format_defaults_for_omitted_fields = 1`, entonces el valor predeterminado para `x` igual `0` pero el valor predeterminado de `a` igual `x * 2`. - -!!! note "Advertencia" - Al insertar datos con `insert_sample_with_metadata = 1`, ClickHouse consume más recursos computacionales, en comparación con la inserción con `insert_sample_with_metadata = 0`. - -### Selección de datos {#selecting-data} - -Considere el `UserActivity` tabla como un ejemplo: - -``` text -┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┐ -│ 4324182021466249494 │ 5 │ 146 │ -1 │ -│ 4324182021466249494 │ 6 │ 185 │ 1 │ -└─────────────────────┴───────────┴──────────┴──────┘ -``` - -Consulta `SELECT * FROM UserActivity FORMAT JSONEachRow` devoluciones: - -``` text -{"UserID":"4324182021466249494","PageViews":5,"Duration":146,"Sign":-1} -{"UserID":"4324182021466249494","PageViews":6,"Duration":185,"Sign":1} -``` - -A diferencia de la [JSON](#json) formato, no hay sustitución de secuencias UTF-8 no válidas. Los valores se escapan de la misma manera que para `JSON`. - -!!! note "Nota" - Cualquier conjunto de bytes se puede generar en las cadenas. 
Utilice el `JSONEachRow` si está seguro de que los datos de la tabla se pueden formatear como JSON sin perder ninguna información. - -### Uso de estructuras anidadas {#jsoneachrow-nested} - -Si tienes una mesa con [Anidar](../sql-reference/data-types/nested-data-structures/nested.md) columnas de tipo de datos, puede insertar datos JSON con la misma estructura. Habilite esta función con el [Entrada_format_import_nested_json](../operations/settings/settings.md#settings-input_format_import_nested_json) configuración. - -Por ejemplo, considere la siguiente tabla: - -``` sql -CREATE TABLE json_each_row_nested (n Nested (s String, i Int32) ) ENGINE = Memory -``` - -Como se puede ver en el `Nested` descripción del tipo de datos, ClickHouse trata cada componente de la estructura anidada como una columna separada (`n.s` y `n.i` para nuestra mesa). Puede insertar datos de la siguiente manera: - -``` sql -INSERT INTO json_each_row_nested FORMAT JSONEachRow {"n.s": ["abc", "def"], "n.i": [1, 23]} -``` - -Para insertar datos como un objeto JSON jerárquico, establezca [input_format_import_nested_json=1](../operations/settings/settings.md#settings-input_format_import_nested_json). - -``` json -{ - "n": { - "s": ["abc", "def"], - "i": [1, 23] - } -} -``` - -Sin esta configuración, ClickHouse produce una excepción. - -``` sql -SELECT name, value FROM system.settings WHERE name = 'input_format_import_nested_json' -``` - -``` text -┌─name────────────────────────────┬─value─┐ -│ input_format_import_nested_json │ 0 │ -└─────────────────────────────────┴───────┘ -``` - -``` sql -INSERT INTO json_each_row_nested FORMAT JSONEachRow {"n": {"s": ["abc", "def"], "i": [1, 23]}} -``` - -``` text -Code: 117. DB::Exception: Unknown field found while parsing JSONEachRow format: n: (at row 1) -``` - -``` sql -SET input_format_import_nested_json=1 -INSERT INTO json_each_row_nested FORMAT JSONEachRow {"n": {"s": ["abc", "def"], "i": [1, 23]}} -SELECT * FROM json_each_row_nested -``` - -``` text -┌─n.s───────────┬─n.i────┐ -│ ['abc','def'] │ [1,23] │ -└───────────────┴────────┘ -``` - -## Nativo {#native} - -El formato más eficiente. Los datos son escritos y leídos por bloques en formato binario. Para cada bloque, el número de filas, número de columnas, nombres y tipos de columnas y partes de columnas de este bloque se registran una tras otra. En otras palabras, este formato es “columnar” – it doesn't convert columns to rows. This is the format used in the native interface for interaction between servers, for using the command-line client, and for C++ clients. - -Puede utilizar este formato para generar rápidamente volcados que sólo pueden ser leídos por el DBMS de ClickHouse. No tiene sentido trabajar con este formato usted mismo. - -## Nulo {#null} - -Nada es salida. Sin embargo, la consulta se procesa y, cuando se utiliza el cliente de línea de comandos, los datos se transmiten al cliente. Esto se usa para pruebas, incluidas las pruebas de rendimiento. -Obviamente, este formato solo es apropiado para la salida, no para el análisis. - -## Bastante {#pretty} - -Salidas de datos como tablas de arte Unicode, también utilizando secuencias de escape ANSI para establecer colores en el terminal. -Se dibuja una cuadrícula completa de la tabla, y cada fila ocupa dos líneas en la terminal. -Cada bloque de resultados se muestra como una tabla separada. 
Esto es necesario para que los bloques se puedan generar sin resultados de almacenamiento en búfer (el almacenamiento en búfer sería necesario para calcular previamente el ancho visible de todos los valores). - -[NULL](../sql-reference/syntax.md) se emite como `ᴺᵁᴸᴸ`. - -Ejemplo (mostrado para el [PrettyCompact](#prettycompact) formato): - -``` sql -SELECT * FROM t_null -``` - -``` text -┌─x─┬────y─┐ -│ 1 │ ᴺᵁᴸᴸ │ -└───┴──────┘ -``` - -Las filas no se escapan en formatos Pretty \*. Se muestra un ejemplo para el [PrettyCompact](#prettycompact) formato: - -``` sql -SELECT 'String with \'quotes\' and \t character' AS Escaping_test -``` - -``` text -┌─Escaping_test────────────────────────┐ -│ String with 'quotes' and character │ -└──────────────────────────────────────┘ -``` - -Para evitar volcar demasiados datos al terminal, solo se imprimen las primeras 10.000 filas. Si el número de filas es mayor o igual que 10.000, el mensaje “Showed first 10 000” se imprime. -Este formato solo es apropiado para generar un resultado de consulta, pero no para analizar (recuperar datos para insertar en una tabla). - -El formato Pretty admite la salida de valores totales (cuando se usa WITH TOTALS) y extremos (cuando ‘extremes’ se establece en 1). En estos casos, los valores totales y los valores extremos se generan después de los datos principales, en tablas separadas. Ejemplo (mostrado para el [PrettyCompact](#prettycompact) formato): - -``` sql -SELECT EventDate, count() AS c FROM test.hits GROUP BY EventDate WITH TOTALS ORDER BY EventDate FORMAT PrettyCompact -``` - -``` text -┌──EventDate─┬───────c─┐ -│ 2014-03-17 │ 1406958 │ -│ 2014-03-18 │ 1383658 │ -│ 2014-03-19 │ 1405797 │ -│ 2014-03-20 │ 1353623 │ -│ 2014-03-21 │ 1245779 │ -│ 2014-03-22 │ 1031592 │ -│ 2014-03-23 │ 1046491 │ -└────────────┴─────────┘ - -Totals: -┌──EventDate─┬───────c─┐ -│ 1970-01-01 │ 8873898 │ -└────────────┴─────────┘ - -Extremes: -┌──EventDate─┬───────c─┐ -│ 2014-03-17 │ 1031592 │ -│ 2014-03-23 │ 1406958 │ -└────────────┴─────────┘ -``` - -## PrettyCompact {#prettycompact} - -Difiere de [Bastante](#pretty) en que la cuadrícula se dibuja entre filas y el resultado es más compacto. -Este formato se usa de forma predeterminada en el cliente de línea de comandos en modo interactivo. - -## PrettyCompactMonoBlock {#prettycompactmonoblock} - -Difiere de [PrettyCompact](#prettycompact) en que hasta 10,000 filas se almacenan en búfer, luego se salen como una sola tabla, no por bloques. - -## PrettyNoEscapes {#prettynoescapes} - -Difiere de Pretty en que las secuencias de escape ANSI no se usan. Esto es necesario para mostrar este formato en un navegador, así como para usar el ‘watch’ utilidad de línea de comandos. - -Ejemplo: - -``` bash -$ watch -n1 "clickhouse-client --query='SELECT event, value FROM system.events FORMAT PrettyCompactNoEscapes'" -``` - -Puede usar la interfaz HTTP para mostrar en el navegador. - -### PrettyCompactNoEscapes {#prettycompactnoescapes} - -Lo mismo que el ajuste anterior. - -### PrettySpaceNoEscapes {#prettyspacenoescapes} - -Lo mismo que el ajuste anterior. - -## Bienvenido a WordPress {#prettyspace} - -Difiere de [PrettyCompact](#prettycompact) en ese espacio en blanco (caracteres de espacio) se usa en lugar de la cuadrícula. - -## RowBinary {#rowbinary} - -Formatea y analiza datos por fila en formato binario. Las filas y los valores se enumeran consecutivamente, sin separadores. -Este formato es menos eficiente que el formato nativo, ya que está basado en filas. 
- -Los integradores usan una representación little-endian de longitud fija. Por ejemplo, UInt64 usa 8 bytes. -DateTime se representa como UInt32 que contiene la marca de tiempo Unix como el valor. -Date se representa como un objeto UInt16 que contiene el número de días desde 1970-01-01 como el valor. -La cadena se representa como una longitud varint (sin signo [LEB128](https://en.wikipedia.org/wiki/LEB128)), seguido de los bytes de la cadena. -FixedString se representa simplemente como una secuencia de bytes. - -La matriz se representa como una longitud varint (sin signo [LEB128](https://en.wikipedia.org/wiki/LEB128)), seguido de elementos sucesivos de la matriz. - -Para [NULL](../sql-reference/syntax.md#null-literal) soporte, se añade un byte adicional que contiene 1 o 0 antes de cada [NULL](../sql-reference/data-types/nullable.md) valor. Si 1, entonces el valor es `NULL` y este byte se interpreta como un valor separado. Si es 0, el valor después del byte no es `NULL`. - -## RowBinaryWithNamesAndTypes {#rowbinarywithnamesandtypes} - -Similar a [RowBinary](#rowbinary), pero con encabezado añadido: - -- [LEB128](https://en.wikipedia.org/wiki/LEB128)-número codificado de columnas (N) -- N `String`s especificando nombres de columna -- N `String`s especificando tipos de columna - -## Valor {#data-format-values} - -Imprime cada fila entre paréntesis. Las filas están separadas por comas. No hay coma después de la última fila. Los valores dentro de los corchetes también están separados por comas. Los números se emiten en formato decimal sin comillas. Las matrices se emiten entre corchetes. Las cadenas, fechas y fechas con horas se generan entre comillas. Las reglas de escape y el análisis son similares a las [TabSeparated](#tabseparated) formato. Durante el formateo, los espacios adicionales no se insertan, pero durante el análisis, se permiten y omiten (excepto los espacios dentro de los valores de la matriz, que no están permitidos). [NULL](../sql-reference/syntax.md) se representa como `NULL`. - -The minimum set of characters that you need to escape when passing data in Values ​​format: single quotes and backslashes. - -Este es el formato que se utiliza en `INSERT INTO t VALUES ...`, pero también puede usarlo para formatear los resultados de la consulta. - -Ver también: [input_format_values_interpret_expressions](../operations/settings/settings.md#settings-input_format_values_interpret_expressions) y [input_format_values_deduce_templates_of_expressions](../operations/settings/settings.md#settings-input_format_values_deduce_templates_of_expressions) configuración. - -## Vertical {#vertical} - -Imprime cada valor en una línea independiente con el nombre de columna especificado. Este formato es conveniente para imprimir solo una o varias filas si cada fila consta de un gran número de columnas. - -[NULL](../sql-reference/syntax.md) se emite como `ᴺᵁᴸᴸ`. - -Ejemplo: - -``` sql -SELECT * FROM t_null FORMAT Vertical -``` - -``` text -Row 1: -────── -x: 1 -y: ᴺᵁᴸᴸ -``` - -Las filas no se escapan en formato vertical: - -``` sql -SELECT 'string with \'quotes\' and \t with some special \n characters' AS test FORMAT Vertical -``` - -``` text -Row 1: -────── -test: string with 'quotes' and with some special - characters -``` - -Este formato solo es apropiado para generar un resultado de consulta, pero no para analizar (recuperar datos para insertar en una tabla). - -## VerticalRaw {#verticalraw} - -Similar a [Vertical](#vertical), pero con escapar deshabilitado. 
Este formato solo es adecuado para generar resultados de consultas, no para analizar (recibir datos e insertarlos en la tabla). - -## XML {#xml} - -El formato XML es adecuado solo para la salida, no para el análisis. Ejemplo: - -``` xml - - - - - - SearchPhrase - String - - - count() - UInt64 - - - - - - - 8267016 - - - bathroom interior design - 2166 - - - yandex - 1655 - - - 2014 spring fashion - 1549 - - - freeform photos - 1480 - - - angelina jolie - 1245 - - - omsk - 1112 - - - photos of dog breeds - 1091 - - - curtain designs - 1064 - - - baku - 1000 - - - 10 - 141137 - -``` - -Si el nombre de la columna no tiene un formato aceptable, simplemente ‘field’ se utiliza como el nombre del elemento. En general, la estructura XML sigue la estructura JSON. -Just as for JSON, invalid UTF-8 sequences are changed to the replacement character � so the output text will consist of valid UTF-8 sequences. - -En los valores de cadena, los caracteres `<` y `&` se escaparon como `<` y `&`. - -Las matrices se emiten como `HelloWorld...`y tuplas como `HelloWorld...`. - -## CapnProto {#capnproto} - -Cap'n Proto es un formato de mensaje binario similar a Protocol Buffers y Thrift, pero no como JSON o MessagePack. - -Los mensajes de Cap'n Proto están estrictamente escritos y no autodescribidos, lo que significa que necesitan una descripción de esquema externo. El esquema se aplica sobre la marcha y se almacena en caché para cada consulta. - -``` bash -$ cat capnproto_messages.bin | clickhouse-client --query "INSERT INTO test.hits FORMAT CapnProto SETTINGS format_schema='schema:Message'" -``` - -Donde `schema.capnp` se ve así: - -``` capnp -struct Message { - SearchPhrase @0 :Text; - c @1 :Uint64; -} -``` - -La deserialización es efectiva y generalmente no aumenta la carga del sistema. - -Ver también [Esquema de formato](#formatschema). - -## Protobuf {#protobuf} - -Protobuf - es un [Búferes de protocolo](https://developers.google.com/protocol-buffers/) formato. - -Este formato requiere un esquema de formato externo. El esquema se almacena en caché entre las consultas. -ClickHouse soporta ambos `proto2` y `proto3` sintaxis. Se admiten campos repetidos / opcionales / requeridos. - -Ejemplos de uso: - -``` sql -SELECT * FROM test.table FORMAT Protobuf SETTINGS format_schema = 'schemafile:MessageType' -``` - -``` bash -cat protobuf_messages.bin | clickhouse-client --query "INSERT INTO test.table FORMAT Protobuf SETTINGS format_schema='schemafile:MessageType'" -``` - -donde el archivo `schemafile.proto` se ve así: - -``` capnp -syntax = "proto3"; - -message MessageType { - string name = 1; - string surname = 2; - uint32 birthDate = 3; - repeated string phoneNumbers = 4; -}; -``` - -Para encontrar la correspondencia entre las columnas de la tabla y los campos del tipo de mensaje de Protocol Buffers, ClickHouse compara sus nombres. -Esta comparación no distingue entre mayúsculas y minúsculas y los caracteres `_` (subrayado) y `.` (punto) se consideran iguales. -Si los tipos de una columna y un campo del mensaje de Protocol Buffers son diferentes, se aplica la conversión necesaria. - -Los mensajes anidados son compatibles. Por ejemplo, para el campo `z` en el siguiente tipo de mensaje - -``` capnp -message MessageType { - message XType { - message YType { - int32 z; - }; - repeated YType y; - }; - XType x; -}; -``` - -ClickHouse intenta encontrar una columna llamada `x.y.z` (o `x_y_z` o `X.y_Z` y así sucesivamente). 
Los mensajes anidados son adecuados para [estructuras de datos anidados](../sql-reference/data-types/nested-data-structures/nested.md).

Los valores predeterminados definidos en un esquema protobuf como este

``` capnp
syntax = "proto2";

message MessageType {
  optional int32 result_per_page = 3 [default = 10];
}
```

no se aplican; en su lugar se utilizan los [valores predeterminados de la tabla](../sql-reference/statements/create.md#create-default-values).

ClickHouse recibe y emite los mensajes protobuf en el formato `length-delimited`.
Esto significa que antes de cada mensaje debe escribirse su longitud como un [varint](https://developers.google.com/protocol-buffers/docs/encoding#varints).
Ver también [cómo leer/escribir mensajes protobuf delimitados por longitud en lenguajes populares](https://cwiki.apache.org/confluence/display/GEODE/Delimiting+Protobuf+Messages).

## Avro {#data-format-avro}

[Apache Avro](http://avro.apache.org/) es un marco de serialización de datos orientado a filas desarrollado dentro del proyecto Hadoop de Apache.

El formato Avro de ClickHouse admite la lectura y escritura de [archivos de datos Avro](http://avro.apache.org/docs/current/spec.html#Object+Container+Files).

### Coincidencia de tipos de datos {#data_types-matching}

La siguiente tabla muestra los tipos de datos admitidos y cómo se corresponden con los [tipos de datos](../sql-reference/data-types/index.md) de ClickHouse en las consultas `INSERT` y `SELECT`.

| Tipo de datos Avro `INSERT`                 | Tipo de datos ClickHouse                                                                                                | Tipo de datos Avro `SELECT`  |
|---------------------------------------------|-------------------------------------------------------------------------------------------------------------------------|------------------------------|
| `boolean`, `int`, `long`, `float`, `double` | [Int (8\|16\|32)](../sql-reference/data-types/int-uint.md), [UInt (8\|16\|32)](../sql-reference/data-types/int-uint.md) | `int`                        |
| `boolean`, `int`, `long`, `float`, `double` | [Int64](../sql-reference/data-types/int-uint.md), [UInt64](../sql-reference/data-types/int-uint.md)                     | `long`                       |
| `boolean`, `int`, `long`, `float`, `double` | [Float32](../sql-reference/data-types/float.md)                                                                          | `float`                      |
| `boolean`, `int`, `long`, `float`, `double` | [Float64](../sql-reference/data-types/float.md)                                                                          | `double`                     |
| `bytes`, `string`, `fixed`, `enum`          | [String](../sql-reference/data-types/string.md)                                                                          | `bytes`                      |
| `bytes`, `string`, `fixed`                  | [FixedString (N)](../sql-reference/data-types/fixedstring.md)                                                            | `fixed(N)`                   |
| `enum`                                      | [Enum (8\|16)](../sql-reference/data-types/enum.md)                                                                      | `enum`                       |
| `array(T)`                                  | [Array (T)](../sql-reference/data-types/array.md)                                                                        | `array(T)`                   |
| `union(null, T)`, `union(T, null)`          | [Nullable (T)](../sql-reference/data-types/date.md)                                                                      | `union(null, T)`             |
| `null`                                      | [Nullable (Nothing)](../sql-reference/data-types/special-data-types/nothing.md)                                          | `null`                       |
| `int (date)` \*                             | [Date](../sql-reference/data-types/date.md)                                                                              | `int (date)` \*              |
| `long (timestamp-millis)` \*                | [DateTime64 (3)](../sql-reference/data-types/datetime.md)                                                                | `long (timestamp-millis)` \* |
| `long (timestamp-micros)` \*                | [DateTime64 (6)](../sql-reference/data-types/datetime.md)                                                                | `long (timestamp-micros)` \* |

\* [Tipos lógicos Avro](http://avro.apache.org/docs/current/spec.html#Logical+Types)

Tipos de datos Avro no admitidos: `record` (no raíz), `map`

Tipos de datos lógicos Avro no admitidos: `uuid`, `time-millis`, `time-micros`, `duration`

### Insertar datos {#inserting-data-1}
-
-### Insertar datos {#inserting-data-1}
-
-Para insertar datos de un archivo Avro en la tabla ClickHouse:
-
-``` bash
-$ cat file.avro | clickhouse-client --query="INSERT INTO {some_table} FORMAT Avro"
-```
-
-El esquema raíz del archivo Avro de entrada debe ser de tipo `record`.
-
-Para encontrar la correspondencia entre las columnas de la tabla y los campos del esquema Avro, ClickHouse compara sus nombres. Esta comparación distingue entre mayúsculas y minúsculas.
-Los campos no utilizados se omiten.
-
-Los tipos de datos de las columnas de tabla ClickHouse pueden diferir de los campos correspondientes de los datos de Avro insertados. Al insertar datos, ClickHouse interpreta los tipos de datos de acuerdo con la tabla anterior y luego [convierte](../sql-reference/functions/type-conversion-functions.md#type_conversion_function-cast) los datos al tipo de columna correspondiente.
-
-### Selección de datos {#selecting-data-1}
-
-Para seleccionar datos de la tabla ClickHouse en un archivo Avro:
-
-``` bash
-$ clickhouse-client --query="SELECT * FROM {some_table} FORMAT Avro" > file.avro
-```
-
-Los nombres de columna deben:
-
-- comenzar con `[A-Za-z_]`
-- posteriormente contener sólo `[A-Za-z0-9_]`
-
-La compresión de los archivos Avro de salida y el intervalo de sincronización se pueden configurar con [output_format_avro_codec](../operations/settings/settings.md#settings-output_format_avro_codec) y [output_format_avro_sync_interval](../operations/settings/settings.md#settings-output_format_avro_sync_interval) respectivamente.
-
-## AvroConfluent {#data-format-avro-confluent}
-
-AvroConfluent admite la decodificación de mensajes Avro de un solo objeto comúnmente utilizados con [Kafka](https://kafka.apache.org/) y el [Confluent Schema Registry](https://docs.confluent.io/current/schema-registry/index.html).
-
-Cada mensaje de Avro incrusta un id de esquema que se puede resolver en el esquema real con la ayuda del registro de esquemas.
-
-Los esquemas se almacenan en caché una vez resueltos.
-
-La URL del registro de esquemas se configura con [format_avro_schema_registry_url](../operations/settings/settings.md#settings-format_avro_schema_registry_url).
-
-### Coincidencia de tipos de datos {#data_types-matching-1}
-
-Lo mismo que [Avro](#data-format-avro)
-
-### Uso {#usage}
-
-Para verificar rápidamente la resolución del esquema, puede usar [kafkacat](https://github.com/edenhill/kafkacat) con [clickhouse-local](../operations/utilities/clickhouse-local.md#clickhouse-local):
-
-``` bash
-$ kafkacat -b kafka-broker -C -t topic1 -o beginning -f '%s' -c 3 | clickhouse-local --input-format AvroConfluent --format_avro_schema_registry_url 'http://schema-registry' -S "field1 Int64, field2 String" -q 'select * from table'
-1 a
-2 b
-3 c
-```
-
-Para utilizar `AvroConfluent` con [Kafka](../engines/table-engines/integrations/kafka.md):
-
-``` sql
-CREATE TABLE topic1_stream
-(
-    field1 String,
-    field2 String
-)
-ENGINE = Kafka()
-SETTINGS
-kafka_broker_list = 'kafka-broker',
-kafka_topic_list = 'topic1',
-kafka_group_name = 'group1',
-kafka_format = 'AvroConfluent';
-
-SET format_avro_schema_registry_url = 'http://schema-registry';
-
-SELECT * FROM topic1_stream;
-```
-
-!!! note "Advertencia"
-    El ajuste `format_avro_schema_registry_url` debe configurarse en `users.xml` para mantener su valor después de un reinicio.
-
-## Parquet {#data-format-parquet}
-
-[Apache Parquet](http://parquet.apache.org/) es un formato de almacenamiento columnar generalizado en el ecosistema Hadoop.
ClickHouse admite operaciones de lectura y escritura para este formato.
-
-### Coincidencia de tipos de datos {#data_types-matching-2}
-
-La siguiente tabla muestra los tipos de datos admitidos y cómo se corresponden con los [tipos de datos](../sql-reference/data-types/index.md) de ClickHouse en las consultas `INSERT` y `SELECT`.
-
-| Tipo de datos de Parquet (`INSERT`) | Tipo de datos ClickHouse                                   | Tipo de datos de Parquet (`SELECT`) |
-|-------------------------------------|------------------------------------------------------------|-------------------------------------|
-| `UINT8`, `BOOL`                     | [UInt8](../sql-reference/data-types/int-uint.md)           | `UINT8`                             |
-| `INT8`                              | [Int8](../sql-reference/data-types/int-uint.md)            | `INT8`                              |
-| `UINT16`                            | [UInt16](../sql-reference/data-types/int-uint.md)          | `UINT16`                            |
-| `INT16`                             | [Int16](../sql-reference/data-types/int-uint.md)           | `INT16`                             |
-| `UINT32`                            | [UInt32](../sql-reference/data-types/int-uint.md)          | `UINT32`                            |
-| `INT32`                             | [Int32](../sql-reference/data-types/int-uint.md)           | `INT32`                             |
-| `UINT64`                            | [UInt64](../sql-reference/data-types/int-uint.md)          | `UINT64`                            |
-| `INT64`                             | [Int64](../sql-reference/data-types/int-uint.md)           | `INT64`                             |
-| `FLOAT`, `HALF_FLOAT`               | [Float32](../sql-reference/data-types/float.md)            | `FLOAT`                             |
-| `DOUBLE`                            | [Float64](../sql-reference/data-types/float.md)            | `DOUBLE`                            |
-| `DATE32`                            | [Fecha](../sql-reference/data-types/date.md)               | `UINT16`                            |
-| `DATE64`, `TIMESTAMP`               | [FechaHora](../sql-reference/data-types/datetime.md)       | `UINT32`                            |
-| `STRING`, `BINARY`                  | [Cadena](../sql-reference/data-types/string.md)            | `STRING`                            |
-| —                                   | [Cadena fija](../sql-reference/data-types/fixedstring.md)  | `STRING`                            |
-| `DECIMAL`                           | [Decimal](../sql-reference/data-types/decimal.md)          | `DECIMAL`                           |
-
-ClickHouse admite una precisión configurable para el tipo `Decimal`. La consulta `INSERT` trata el tipo `DECIMAL` de Parquet como el tipo `Decimal128` de ClickHouse.
-
-Tipos de datos de Parquet no admitidos: `TIME32`, `FIXED_SIZE_BINARY`, `JSON`, `UUID`, `ENUM`.
-
-Los tipos de datos de las columnas de tabla ClickHouse pueden diferir de los campos correspondientes de los datos de Parquet insertados. Al insertar datos, ClickHouse interpreta los tipos de datos de acuerdo con la tabla anterior y luego [convierte](../sql-reference/functions/type-conversion-functions.md#type_conversion_function-cast) los datos al tipo establecido para la columna de la tabla ClickHouse.
-
-### Insertar y seleccionar datos {#inserting-and-selecting-data}
-
-Puede insertar datos de Parquet desde un archivo en la tabla ClickHouse mediante el siguiente comando:
-
-``` bash
-$ cat {filename} | clickhouse-client --query="INSERT INTO {some_table} FORMAT Parquet"
-```
-
-Puede seleccionar datos de una tabla ClickHouse y guardarlos en algún archivo en el formato Parquet mediante el siguiente comando:
-
-``` bash
-$ clickhouse-client --query="SELECT * FROM {some_table} FORMAT Parquet" > {some_file.pq}
-```
-
-Para intercambiar datos con Hadoop, puede usar el [motor de tabla HDFS](../engines/table-engines/integrations/hdfs.md).
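-
-Un boceto de cómo se vería tal intercambio con una tabla sobre HDFS en formato Parquet (la URI `hdfs://hdfs1:9000/data/file.parquet` es hipotética):
-
-``` bash
-# Boceto: el motor HDFS recibe una URI y el nombre del formato.
-$ clickhouse-client --query="CREATE TABLE hdfs_parquet (name String, value UInt32) ENGINE = HDFS('hdfs://hdfs1:9000/data/file.parquet', 'Parquet')"
-```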
-
-## ORC {#data-format-orc}
-
-[Apache ORC](https://orc.apache.org/) es un formato de almacenamiento columnar generalizado en el ecosistema Hadoop. ClickHouse solo admite la inserción de datos en este formato.
-
-### Coincidencia de tipos de datos {#data_types-matching-3}
-
-La siguiente tabla muestra los tipos de datos admitidos y cómo se corresponden con los [tipos de datos](../sql-reference/data-types/index.md) de ClickHouse en las consultas `INSERT`.
-
-| Tipo de datos ORC (`INSERT`) | Tipo de datos ClickHouse                              |
-|------------------------------|-------------------------------------------------------|
-| `UINT8`, `BOOL`              | [UInt8](../sql-reference/data-types/int-uint.md)      |
-| `INT8`                       | [Int8](../sql-reference/data-types/int-uint.md)       |
-| `UINT16`                     | [UInt16](../sql-reference/data-types/int-uint.md)     |
-| `INT16`                      | [Int16](../sql-reference/data-types/int-uint.md)      |
-| `UINT32`                     | [UInt32](../sql-reference/data-types/int-uint.md)     |
-| `INT32`                      | [Int32](../sql-reference/data-types/int-uint.md)      |
-| `UINT64`                     | [UInt64](../sql-reference/data-types/int-uint.md)     |
-| `INT64`                      | [Int64](../sql-reference/data-types/int-uint.md)      |
-| `FLOAT`, `HALF_FLOAT`        | [Float32](../sql-reference/data-types/float.md)       |
-| `DOUBLE`                     | [Float64](../sql-reference/data-types/float.md)       |
-| `DATE32`                     | [Fecha](../sql-reference/data-types/date.md)          |
-| `DATE64`, `TIMESTAMP`        | [FechaHora](../sql-reference/data-types/datetime.md)  |
-| `STRING`, `BINARY`           | [Cadena](../sql-reference/data-types/string.md)       |
-| `DECIMAL`                    | [Decimal](../sql-reference/data-types/decimal.md)     |
-
-ClickHouse admite una precisión configurable para el tipo `Decimal`. La consulta `INSERT` trata el tipo `DECIMAL` de ORC como el tipo `Decimal128` de ClickHouse.
-
-Tipos de datos ORC no admitidos: `TIME32`, `FIXED_SIZE_BINARY`, `JSON`, `UUID`, `ENUM`.
-
-Los tipos de datos de las columnas de tabla ClickHouse no tienen que coincidir con los campos de datos ORC correspondientes. Al insertar datos, ClickHouse interpreta los tipos de datos de acuerdo con la tabla anterior y luego [convierte](../sql-reference/functions/type-conversion-functions.md#type_conversion_function-cast) los datos al tipo de datos establecido para la columna de la tabla ClickHouse.
-
-### Insertar datos {#inserting-data-2}
-
-Puede insertar datos ORC de un archivo en la tabla ClickHouse mediante el siguiente comando:
-
-``` bash
-$ cat filename.orc | clickhouse-client --query="INSERT INTO some_table FORMAT ORC"
-```
-
-Para intercambiar datos con Hadoop, puede usar el [motor de tabla HDFS](../engines/table-engines/integrations/hdfs.md).
-
-## Esquema de formato {#formatschema}
-
-El nombre del archivo que contiene el esquema de formato se establece en el ajuste `format_schema`.
-Es necesario establecer este ajuste cuando se utiliza alguno de los formatos `Cap'n Proto` o `Protobuf`.
-El esquema de formato es una combinación de un nombre de archivo y el nombre de un tipo de mensaje en este archivo, delimitados por dos puntos,
-p. ej. `schemafile.proto:MessageType`.
-Si el archivo tiene la extensión estándar para el formato (por ejemplo, `.proto` para `Protobuf`),
-se puede omitir y en este caso el esquema de formato se ve así: `schemafile:MessageType`.
-
-Si introduce o emite datos a través del [cliente](../interfaces/cli.md) en el [modo interactivo](../interfaces/cli.md#cli_usage), el nombre de archivo especificado en el esquema de formato
-puede contener una ruta absoluta o una ruta relativa al directorio actual en el cliente.
-Si utiliza el cliente en el [modo por lotes](../interfaces/cli.md#cli_usage), la ruta de acceso al esquema debe ser relativa por razones de seguridad.
-
-Si introduce o emite datos a través de la [interfaz HTTP](../interfaces/http.md), el nombre de archivo especificado en el esquema de formato
-debe estar ubicado en el directorio especificado en [format_schema_path](../operations/server-configuration-parameters/settings.md#server_configuration_parameters-format_schema_path)
-en la configuración del servidor.
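-
-Por ejemplo, un boceto de cómo pasar el esquema de formato al insertar datos `Protobuf` con el cliente (la tabla `test.table` y el archivo `data.bin` son hipotéticos; `schemafile.proto` debe estar accesible para el cliente):
-
-``` bash
-# Boceto: format_schema = 'archivo:TipoDeMensaje'.
-$ cat data.bin | clickhouse-client --query "INSERT INTO test.table SETTINGS format_schema = 'schemafile:MessageType' FORMAT Protobuf"
-```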
-
-## Omisión de errores {#skippingerrors}
-
-Algunos formatos como `CSV`, `TabSeparated`, `TSKV`, `JSONEachRow`, `Template`, `CustomSeparated` y `Protobuf` pueden omitir la fila rota si se produjo un error de análisis y continuar el análisis desde el comienzo de la siguiente fila. Vea los ajustes [input_format_allow_errors_num](../operations/settings/settings.md#settings-input_format_allow_errors_num) e
-[input_format_allow_errors_ratio](../operations/settings/settings.md#settings-input_format_allow_errors_ratio).
-Limitaciones:
-- En caso de error de análisis, `JSONEachRow` omite todos los datos hasta la nueva línea (o EOF), por lo que las filas deben estar delimitadas por `\n` para contar los errores correctamente.
-- `Template` y `CustomSeparated` usan el delimitador después de la última columna y el delimitador entre filas para encontrar el comienzo de la siguiente fila, por lo que omitir errores solo funciona si al menos uno de ellos no está vacío.
-
-[Artículo Original](https://clickhouse.tech/docs/en/interfaces/formats/)
diff --git a/docs/es/interfaces/http.md b/docs/es/interfaces/http.md
deleted file mode 100644
index ab510a268e3..00000000000
--- a/docs/es/interfaces/http.md
+++ /dev/null
@@ -1,617 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_priority: 19
-toc_title: Interfaz HTTP
----
-
-# Interfaz HTTP {#http-interface}
-
-La interfaz HTTP le permite usar ClickHouse en cualquier plataforma desde cualquier lenguaje de programación. Lo usamos para trabajar desde Java y Perl, así como desde scripts de shell. En otros departamentos, la interfaz HTTP se usa desde Perl, Python y Go. La interfaz HTTP es más limitada que la interfaz nativa, pero tiene una mejor compatibilidad.
-
-De forma predeterminada, clickhouse-server escucha HTTP en el puerto 8123 (esto se puede cambiar en la configuración).
-
-Si realiza una solicitud GET / sin parámetros, devuelve el código de respuesta 200 y la cadena definida en el valor predeterminado de [http_server_default_response](../operations/server-configuration-parameters/settings.md#server_configuration_parameters-http_server_default_response): “Ok.” (con un avance de línea al final)
-
-``` bash
-$ curl 'http://localhost:8123/'
-Ok.
-```
-
-Use la solicitud GET /ping en los scripts de comprobación de estado. Este controlador siempre devuelve “Ok.” (con un avance de línea al final). Disponible a partir de la versión 18.12.13.
-
-``` bash
-$ curl 'http://localhost:8123/ping'
-Ok.
-```
-
-Envíe la solicitud como un parámetro ‘query’ de la URL, o como un POST. O envíe el comienzo de la consulta en el parámetro ‘query’ y el resto en el POST (explicaremos más adelante por qué esto es necesario). El tamaño de la URL está limitado a 16 KB, así que tenga esto en cuenta al enviar consultas grandes.
-
-Si tiene éxito, recibirá el código de respuesta 200 y el resultado en el cuerpo de la respuesta.
-Si se produce un error, recibirá el código de respuesta 500 y un texto de descripción del error en el cuerpo de la respuesta.
-
-Al usar el método GET, se establece ‘readonly’. En otras palabras, para consultas que modifican datos, solo puede usar el método POST. Puede enviar la consulta en sí misma en el cuerpo POST o en el parámetro URL.
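-
-Por ejemplo, una consulta que modifica datos enviada mediante GET es rechazada (boceto hipotético; el código y el texto exactos del error pueden variar según la versión):
-
-``` bash
-# Las consultas enviadas por GET se ejecutan en modo de solo lectura.
-$ curl 'http://localhost:8123/?query=DROP%20TABLE%20t'
-Code: 164, e.displayText() = DB::Exception: Cannot execute query in readonly mode
-```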
- -Ejemplos: - -``` bash -$ curl 'http://localhost:8123/?query=SELECT%201' -1 - -$ wget -nv -O- 'http://localhost:8123/?query=SELECT 1' -1 - -$ echo -ne 'GET /?query=SELECT%201 HTTP/1.0\r\n\r\n' | nc localhost 8123 -HTTP/1.0 200 OK -Date: Wed, 27 Nov 2019 10:30:18 GMT -Connection: Close -Content-Type: text/tab-separated-values; charset=UTF-8 -X-ClickHouse-Server-Display-Name: clickhouse.ru-central1.internal -X-ClickHouse-Query-Id: 5abe861c-239c-467f-b955-8a201abb8b7f -X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0"} - -1 -``` - -Como puede ver, curl es algo inconveniente ya que los espacios deben ser URL escapadas. -Aunque wget escapa de todo en sí, no recomendamos usarlo porque no funciona bien sobre HTTP 1.1 cuando se usa keep-alive y Transfer-Encoding: chunked . - -``` bash -$ echo 'SELECT 1' | curl 'http://localhost:8123/' --data-binary @- -1 - -$ echo 'SELECT 1' | curl 'http://localhost:8123/?query=' --data-binary @- -1 - -$ echo '1' | curl 'http://localhost:8123/?query=SELECT' --data-binary @- -1 -``` - -Si se envía parte de la consulta en el parámetro y parte en el POST, se inserta un avance de línea entre estas dos partes de datos. -Ejemplo (esto no funcionará): - -``` bash -$ echo 'ECT 1' | curl 'http://localhost:8123/?query=SEL' --data-binary @- -Code: 59, e.displayText() = DB::Exception: Syntax error: failed at position 0: SEL -ECT 1 -, expected One of: SHOW TABLES, SHOW DATABASES, SELECT, INSERT, CREATE, ATTACH, RENAME, DROP, DETACH, USE, SET, OPTIMIZE., e.what() = DB::Exception -``` - -De forma predeterminada, los datos se devuelven en formato TabSeparated (para obtener más información, “Formats” apartado). -Utilice la cláusula FORMAT de la consulta para solicitar cualquier otro formato. - -``` bash -$ echo 'SELECT 1 FORMAT Pretty' | curl 'http://localhost:8123/?' --data-binary @- -┏━━━┓ -┃ 1 ┃ -┡━━━┩ -│ 1 │ -└───┘ -``` - -El método POST de transmitir datos es necesario para las consultas INSERT. En este caso, puede escribir el comienzo de la consulta en el parámetro URL y usar POST para pasar los datos a insertar. Los datos a insertar podrían ser, por ejemplo, un volcado separado por tabuladores de MySQL. De esta manera, la consulta INSERT reemplaza LOAD DATA LOCAL INFILE de MySQL. - -Ejemplos: Crear una tabla: - -``` bash -$ echo 'CREATE TABLE t (a UInt8) ENGINE = Memory' | curl 'http://localhost:8123/' --data-binary @- -``` - -Uso de la consulta INSERT familiar para la inserción de datos: - -``` bash -$ echo 'INSERT INTO t VALUES (1),(2),(3)' | curl 'http://localhost:8123/' --data-binary @- -``` - -Los datos se pueden enviar por separado de la consulta: - -``` bash -$ echo '(4),(5),(6)' | curl 'http://localhost:8123/?query=INSERT%20INTO%20t%20VALUES' --data-binary @- -``` - -Puede especificar cualquier formato de datos. El ‘Values’ el formato es el mismo que el que se usa al escribir INSERT INTO t VALUES: - -``` bash -$ echo '(7),(8),(9)' | curl 'http://localhost:8123/?query=INSERT%20INTO%20t%20FORMAT%20Values' --data-binary @- -``` - -Para insertar datos de un volcado separado por tabuladores, especifique el formato correspondiente: - -``` bash -$ echo -ne '10\n11\n12\n' | curl 'http://localhost:8123/?query=INSERT%20INTO%20t%20FORMAT%20TabSeparated' --data-binary @- -``` - -Lectura del contenido de la tabla. 
Los datos se emiten en orden aleatorio debido al procesamiento de consultas paralelas:
-
-``` bash
-$ curl 'http://localhost:8123/?query=SELECT%20a%20FROM%20t'
-7
-8
-9
-10
-11
-12
-1
-2
-3
-4
-5
-6
-```
-
-Eliminación de la tabla.
-
-``` bash
-$ echo 'DROP TABLE t' | curl 'http://localhost:8123/' --data-binary @-
-```
-
-Para las solicitudes correctas que no devuelven una tabla de datos, se devuelve un cuerpo de respuesta vacío.
-
-Puede utilizar el formato interno de compresión de ClickHouse al transmitir datos. Los datos comprimidos tienen un formato no estándar, y deberá usar el programa `clickhouse-compressor` para trabajar con ellos (se instala con el paquete `clickhouse-client`). Para aumentar la eficiencia de la inserción de datos, puede desactivar la verificación de sumas de comprobación con el ajuste [http_native_compression_disable_checksumming_on_decompress](../operations/settings/settings.md#settings-http_native_compression_disable_checksumming_on_decompress).
-
-Si ha especificado `compress=1` en la URL, el servidor comprime los datos que le envía.
-Si ha especificado `decompress=1` en la URL, el servidor descomprime los datos que usted pasa con el método `POST`.
-
-También puede optar por utilizar [compresión HTTP](https://en.wikipedia.org/wiki/HTTP_compression). Para enviar una solicitud `POST` comprimida, agregue el encabezado de solicitud `Content-Encoding: compression_method`. Para que ClickHouse comprima la respuesta, debe agregar `Accept-Encoding: compression_method`. ClickHouse admite los [métodos de compresión](https://en.wikipedia.org/wiki/HTTP_compression#Content-Encoding_tokens) `gzip`, `br` y `deflate`. Para habilitar la compresión HTTP, debe usar el ajuste de ClickHouse [enable_http_compression](../operations/settings/settings.md#settings-enable_http_compression). Puede configurar el nivel de compresión de datos con [http_zlib_compression_level](../operations/settings/settings.md#settings-http_zlib_compression_level) para todos los métodos de compresión.
-
-Puede usar esto para reducir el tráfico de red al transmitir una gran cantidad de datos o para crear volcados que se comprimen inmediatamente.
-
-Ejemplos de envío de datos con compresión:
-
-``` bash
-# Recibir del servidor datos comprimidos:
-$ curl -vsS "http://localhost:8123/?enable_http_compression=1" -d 'SELECT number FROM system.numbers LIMIT 10' -H 'Accept-Encoding: gzip'
-
-# Enviar al servidor datos comprimidos:
-$ echo "SELECT 1" | gzip -c | curl -sS --data-binary @- -H 'Content-Encoding: gzip' 'http://localhost:8123/'
-```
-
-!!! note "Nota"
-    Algunos clientes HTTP pueden descomprimir datos del servidor de forma predeterminada (con `gzip` y `deflate`) y puede obtener datos descomprimidos incluso si usa la configuración de compresión correctamente.
-
-Puede usar el parámetro de URL ‘database’ para especificar la base de datos predeterminada.
-
-``` bash
-$ echo 'SELECT number FROM numbers LIMIT 10' | curl 'http://localhost:8123/?database=system' --data-binary @-
-0
-1
-2
-3
-4
-5
-6
-7
-8
-9
-```
-
-De forma predeterminada, la base de datos que está registrada en la configuración del servidor se utiliza como base de datos predeterminada. De forma predeterminada, esta es la base de datos llamada ‘default’. Como alternativa, siempre puede especificar la base de datos utilizando un punto antes del nombre de la tabla.
-
-El nombre de usuario y la contraseña se pueden indicar de una de estas tres maneras:
-
-1.  Uso de la autenticación básica HTTP. Ejemplo:
-
-<!-- -->
-
-``` bash
-$ echo 'SELECT 1' | curl 'http://user:password@localhost:8123/' -d @-
-```
-
-2.  En los parámetros de URL ‘user’ y ‘password’. Ejemplo:
-
-<!-- -->
-
-``` bash
-$ echo 'SELECT 1' | curl 'http://localhost:8123/?user=user&password=password' -d @-
-```
-
-3.  Usando las cabeceras ‘X-ClickHouse-User’ y ‘X-ClickHouse-Key’. Ejemplo:
-
-<!-- -->
-
-``` bash
-$ echo 'SELECT 1' | curl -H 'X-ClickHouse-User: user' -H 'X-ClickHouse-Key: password' 'http://localhost:8123/' -d @-
-```
-
-Si no se especifica el nombre de usuario, se utiliza el nombre `default`. Si no se especifica la contraseña, se utiliza la contraseña vacía.
-También puede utilizar los parámetros de URL para especificar cualquier ajuste para procesar una sola consulta, o perfiles completos de configuración. Ejemplo: http://localhost:8123/?profile=web&max_rows_to_read=1000000000&query=SELECT+1
-
-Para obtener más información, consulte el apartado [Configuración](../operations/settings/index.md).
-
-``` bash
-$ echo 'SELECT number FROM system.numbers LIMIT 10' | curl 'http://localhost:8123/?' --data-binary @-
-0
-1
-2
-3
-4
-5
-6
-7
-8
-9
-```
-
-Para obtener información sobre otros parámetros, consulte la sección “SET”.
-
-Del mismo modo, puede utilizar sesiones de ClickHouse en el protocolo HTTP. Para hacer esto, debe agregar el parámetro GET `session_id` a la solicitud. Puede usar cualquier cadena como ID de sesión. De forma predeterminada, la sesión finaliza después de 60 segundos de inactividad. Para cambiar este tiempo de espera, modifique el ajuste `default_session_timeout` en la configuración del servidor, o agregue el parámetro GET `session_timeout` a la solicitud. Para comprobar el estado de la sesión, use el parámetro `session_check=1`. Solo se puede ejecutar una consulta a la vez en una sola sesión.
-
-Puede recibir información sobre el progreso de una consulta en las cabeceras de respuesta `X-ClickHouse-Progress`. Para hacer esto, habilite [send_progress_in_http_headers](../operations/settings/settings.md#settings-send_progress_in_http_headers). Ejemplo de la secuencia de encabezados:
-
-``` text
-X-ClickHouse-Progress: {"read_rows":"2752512","read_bytes":"240570816","total_rows_to_read":"8880128"}
-X-ClickHouse-Progress: {"read_rows":"5439488","read_bytes":"482285394","total_rows_to_read":"8880128"}
-X-ClickHouse-Progress: {"read_rows":"8783786","read_bytes":"819092887","total_rows_to_read":"8880128"}
-```
-
-Posibles campos de encabezado:
-
-- `read_rows` — Number of rows read.
-- `read_bytes` — Volume of data read in bytes.
-- `total_rows_to_read` — Total number of rows to be read.
-- `written_rows` — Number of rows written.
-- `written_bytes` — Volume of data written in bytes.
-
-Las solicitudes en ejecución no se detienen automáticamente si se pierde la conexión HTTP. El análisis y el formato de datos se realizan en el lado del servidor, y el uso de la red puede ser ineficaz.
-El parámetro opcional ‘query_id’ se puede pasar como el ID de consulta (cualquier cadena). Para obtener más información, consulte la sección “Settings, replace_running_query”.
-
-El parámetro opcional ‘quota_key’ se puede pasar como la clave de cuota (cualquier cadena). Para obtener más información, consulte la sección “Quotas”.
-
-La interfaz HTTP permite pasar datos externos (tablas temporales externas) para consultar. Para obtener más información, consulte la sección “External data for query processing”.
-
-## Almacenamiento en búfer de respuesta {#response-buffering}
-
-Puede habilitar el almacenamiento en búfer de respuestas en el lado del servidor. Los parámetros de URL `buffer_size` y `wait_end_of_query` se proporcionan para este propósito.
-
-`buffer_size` determina el número de bytes del resultado que se almacenan en búfer en la memoria del servidor. Si el cuerpo del resultado es mayor que este umbral, el búfer se escribe en el canal HTTP y los datos restantes se envían directamente al canal HTTP.
-
-Para asegurarse de que toda la respuesta se almacene en búfer, establezca `wait_end_of_query=1`. En este caso, los datos que no se almacenan en la memoria se almacenarán en un archivo temporal del servidor.
-
-Ejemplo:
-
-``` bash
-$ curl -sS 'http://localhost:8123/?max_result_bytes=4000000&buffer_size=3000000&wait_end_of_query=1' -d 'SELECT toUInt8(number) FROM system.numbers LIMIT 9000000 FORMAT RowBinary'
-```
-
-Utilice el almacenamiento en búfer para evitar situaciones en las que se produce un error de procesamiento de la consulta después de enviar al cliente el código de respuesta y los encabezados HTTP. En esta situación, se escribe un mensaje de error al final del cuerpo de la respuesta y, en el lado del cliente, el error solo se puede detectar en la etapa de análisis.
-
-### Consultas con parámetros {#cli-queries-with-parameters}
-
-Puede crear una consulta con parámetros y pasar valores para ellos desde los parámetros de solicitud HTTP correspondientes. Para obtener más información, consulte [Consultas con parámetros para CLI](cli.md#cli-queries-with-parameters).
-
-### Ejemplo {#example}
-
-``` bash
-$ curl -sS "<address>?param_id=2&param_phrase=test" -d "SELECT * FROM table WHERE int_column = {id:UInt8} and string_column = {phrase:String}"
-```
-
-## Interfaz HTTP predefinida {#predefined_http_interface}
-
-ClickHouse admite consultas específicas a través de la interfaz HTTP. Por ejemplo, puede escribir datos en una tabla de la siguiente manera:
-
-``` bash
-$ echo '(4),(5),(6)' | curl 'http://localhost:8123/?query=INSERT%20INTO%20t%20VALUES' --data-binary @-
-```
-
-ClickHouse también es compatible con la interfaz HTTP predefinida, que puede facilitarle la integración con herramientas de terceros como el [exportador de Prometheus](https://github.com/percona-lab/clickhouse_exporter).
-
-Ejemplo:
-
-- En primer lugar, agregue esta sección al archivo de configuración del servidor:
-
-<!-- -->
-
-``` xml
-<http_handlers>
-    <rule>
-        <url>/predefined_query</url>
-        <methods>POST,GET</methods>
-        <handler>
-            <type>predefined_query_handler</type>
-            <query>SELECT * FROM system.metrics LIMIT 5 FORMAT Template SETTINGS format_template_resultset = 'prometheus_template_output_format_resultset', format_template_row = 'prometheus_template_output_format_row', format_template_rows_between_delimiter = '\n'</query>
-        </handler>
-    </rule>
-    <rule>...</rule>
-    <rule>...</rule>
-</http_handlers>
-```
-
-- Ahora puede solicitar la URL directamente para obtener los datos en el formato Prometheus:
-
-<!-- -->
-
-``` bash
-$ curl -v 'http://localhost:8123/predefined_query'
-* Trying ::1...
-* Connected to localhost (::1) port 8123 (#0)
-> GET /predefined_query HTTP/1.1
-> Host: localhost:8123
-> User-Agent: curl/7.47.0
-> Accept: */*
->
-< HTTP/1.1 200 OK
-< Date: Tue, 28 Apr 2020 08:52:56 GMT
-< Connection: Keep-Alive
-< Content-Type: text/plain; charset=UTF-8
-< X-ClickHouse-Server-Display-Name: i-mloy5trc
-< Transfer-Encoding: chunked
-< X-ClickHouse-Query-Id: 96fe0052-01e6-43ce-b12a-6b7370de6e8a
-< X-ClickHouse-Format: Template
-< X-ClickHouse-Timezone: Asia/Shanghai
-< Keep-Alive: timeout=3
-< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0"}
-<
-# HELP "Query" "Number of executing queries"
-# TYPE "Query" counter
-"Query" 1
-
-# HELP "Merge" "Number of executing background merges"
-# TYPE "Merge" counter
-"Merge" 0
-
-# HELP "PartMutation" "Number of mutations (ALTER DELETE/UPDATE)"
-# TYPE "PartMutation" counter
-"PartMutation" 0
-
-# HELP "ReplicatedFetch" "Number of data parts being fetched from replica"
-# TYPE "ReplicatedFetch" counter
-"ReplicatedFetch" 0
-
-# HELP "ReplicatedSend" "Number of data parts being sent to replicas"
-# TYPE "ReplicatedSend" counter
-"ReplicatedSend" 0
-
-* Connection #0 to host localhost left intact
-
-
-* Connection #0 to host localhost left intact
-```
-
-Como puede ver en el ejemplo, si `<http_handlers>` está configurado en el archivo config.xml, `<http_handlers>` puede contener muchas `<rule>`. ClickHouse hará coincidir las solicitudes HTTP recibidas con el tipo predefinido en `<rule>`, y la primera regla que coincida ejecuta el controlador. Luego, ClickHouse ejecutará la consulta predefinida correspondiente si la coincidencia es exitosa.
-
-> Ahora `<rule>` puede configurar `<method>`, `<headers>`, `<url>`, `<handler>`:
-> `<method>` es responsable de hacer coincidir la parte del método de la solicitud HTTP. `<method>` se ajusta plenamente a la definición de [método](https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods) en el protocolo HTTP. Es una configuración opcional. Si no está definido en el archivo de configuración, no coincide con la parte del método de la solicitud HTTP.
->
-> `<url>` es responsable de hacer coincidir la parte URL de la solicitud HTTP. Es compatible con las expresiones regulares de [RE2](https://github.com/google/re2). Es una configuración opcional.
Si no está definido en el archivo de configuración, no coincide con la parte URL de la solicitud HTTP.
->
-> `<headers>` es responsable de hacer coincidir la parte del encabezado de la solicitud HTTP. Es compatible con las expresiones regulares de RE2. Es una configuración opcional. Si no está definido en el archivo de configuración, no coincide con la parte de encabezado de la solicitud HTTP.
->
-> `<handler>` contiene la parte de procesamiento principal. Ahora `<handler>` puede configurar `<type>`, `<status>`, `<content_type>`, `<response_content>`, `<query>`, `<query_param_name>`.
-> \> `<type>` actualmente admite tres tipos: **predefined_query_handler**, **dynamic_query_handler**, **static**.
-> \>
-> \> `<query>` - se utiliza con el tipo predefined_query_handler; ejecuta la consulta cuando se llama al controlador.
-> \>
-> \> `<query_param_name>` - se utiliza con el tipo dynamic_query_handler; extrae y ejecuta el valor correspondiente al valor de `<query_param_name>` en los parámetros de la solicitud HTTP.
-> \>
-> \> `<status>` - se usa con el tipo static; código de estado de la respuesta.
-> \>
-> \> `<content_type>` - se usa con el tipo static; [tipo de contenido](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Type) de la respuesta.
-> \>
-> \> `<response_content>` - se usa con el tipo static; contenido de la respuesta enviado al cliente. Cuando se usa el prefijo ‘file://’ o ‘config://’, busca el contenido en el archivo o en la configuración y lo envía al cliente.
-
-A continuación están los métodos de configuración para los diferentes `<type>`.
-
-## predefined_query_handler {#predefined_query_handler}
-
-`<predefined_query_handler>` admite la configuración de valores de Settings y query_params. Puede configurar `<query>` en el tipo `<predefined_query_handler>`.
-
-El valor de `<query>` es una consulta predefinida de `<predefined_query_handler>`, que ClickHouse ejecuta cuando una solicitud HTTP coincide y se devuelve el resultado de la consulta. Es una configuración imprescindible.
-
-En el ejemplo siguiente se definen los valores de los ajustes `max_threads` y `max_alter_threads`; a continuación se consulta la tabla del sistema para comprobar si estos ajustes se establecieron correctamente.
-
-Ejemplo:
-
-``` xml
-<http_handlers>
-    <rule>
-        <url><![CDATA[/query_param_with_url/\w+/(?P<name_1>[^/]+)(/(?P<name_2>[^/]+))?]]></url>
-        <method>GET</method>
-        <headers>
-            <XXX>TEST_HEADER_VALUE</XXX>
-            <PARAMS_XXX><![CDATA[(?P<name_1>[^/]+)(/(?P<name_2>[^/]+))?]]></PARAMS_XXX>
-        </headers>
-        <handler>
-            <type>predefined_query_handler</type>
-            <query>SELECT value FROM system.settings WHERE name = {name_1:String}</query>
-            <query>SELECT name, value FROM system.settings WHERE name = {name_2:String}</query>
-        </handler>
-    </rule>
-</http_handlers>
-```
-
-``` bash
-$ curl -H 'XXX:TEST_HEADER_VALUE' -H 'PARAMS_XXX:max_threads' 'http://localhost:8123/query_param_with_url/1/max_threads/max_alter_threads?max_threads=1&max_alter_threads=2'
-1
-max_alter_threads   2
-```
-
-!!! note "precaución"
-    En un `<predefined_query_handler>` solo se admite un `<query>` de tipo insert.
-
-## dynamic_query_handler {#dynamic_query_handler}
-
-En `<dynamic_query_handler>`, la consulta se escribe en forma de parámetro de la solicitud HTTP. La diferencia es que en `<predefined_query_handler>` la consulta se escribe en el archivo de configuración. Puede configurar `<query_param_name>` en `<dynamic_query_handler>`.
-
-ClickHouse extrae y ejecuta el valor correspondiente al valor de `<query_param_name>` en la URL de la solicitud HTTP. El valor predeterminado de `<query_param_name>` es `/query`. Es una configuración opcional. Si no hay una definición en el archivo de configuración, el parámetro no se pasa.
-
-Para experimentar con esta funcionalidad, el ejemplo define los valores de max_threads y max_alter_threads y consulta si los ajustes se establecieron correctamente.
-
-Ejemplo:
-
-``` xml
-<http_handlers>
-    <rule>
-        <headers>
-            <XXX>TEST_HEADER_VALUE_DYNAMIC</XXX>
-        </headers>
-        <handler>
-            <type>dynamic_query_handler</type>
-            <query_param_name>query_param</query_param_name>
-        </handler>
-    </rule>
-</http_handlers>
-```
-
-``` bash
-$ curl -H 'XXX:TEST_HEADER_VALUE_DYNAMIC' 'http://localhost:8123/own?max_threads=1&max_alter_threads=2&param_name_1=max_threads&param_name_2=max_alter_threads&query_param=SELECT%20name,value%20FROM%20system.settings%20where%20name%20=%20%7Bname_1:String%7D%20OR%20name%20=%20%7Bname_2:String%7D'
-max_threads 1
-max_alter_threads   2
-```
-
-## static {#static}
-
-`<static>` puede devolver [content_type](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Type), [status](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status) y response_content. response_content puede devolver el contenido especificado.
-
-Ejemplo:
-
-Devuelve un mensaje.
-
-``` xml
-<http_handlers>
-    <rule>
-        <methods>GET</methods>
-        <headers><XXX>xxx</XXX></headers>
-        <url>/hi</url>
-        <handler>
-            <type>static</type>
-            <status>402</status>
-            <content_type>text/html; charset=UTF-8</content_type>
-            <response_content>Say Hi!</response_content>
-        </handler>
-    </rule>
-</http_handlers>
-```
-
-``` bash
-$ curl -vv -H 'XXX:xxx' 'http://localhost:8123/hi'
-* Trying ::1...
-* Connected to localhost (::1) port 8123 (#0)
-> GET /hi HTTP/1.1
-> Host: localhost:8123
-> User-Agent: curl/7.47.0
-> Accept: */*
-> XXX:xxx
->
-< HTTP/1.1 402 Payment Required
-< Date: Wed, 29 Apr 2020 03:51:26 GMT
-< Connection: Keep-Alive
-< Content-Type: text/html; charset=UTF-8
-< Transfer-Encoding: chunked
-< Keep-Alive: timeout=3
-< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0"}
-<
-* Connection #0 to host localhost left intact
-Say Hi!%
-```
-
-Busque el contenido de la configuración enviada al cliente.
-
-``` xml
-<get_config_static_handler><![CDATA[<html ng-app="SMI2"><head><base href="http://ui.tabix.io/"></head><body><div ui-view="" class="content-ui"></div><script src="http://loader.tabix.io/master.js"></script></body></html>]]></get_config_static_handler>
-
-<http_handlers>
-    <rule>
-        <methods>GET</methods>
-        <headers><XXX>xxx</XXX></headers>
-        <url>/get_config_static_handler</url>
-        <handler>
-            <type>static</type>
-            <response_content>config://get_config_static_handler</response_content>
-        </handler>
-    </rule>
-</http_handlers>
-```
-
-``` bash
-$ curl -v -H 'XXX:xxx' 'http://localhost:8123/get_config_static_handler'
-* Trying ::1...
-* Connected to localhost (::1) port 8123 (#0)
-> GET /get_config_static_handler HTTP/1.1
-> Host: localhost:8123
-> User-Agent: curl/7.47.0
-> Accept: */*
-> XXX:xxx
->
-< HTTP/1.1 200 OK
-< Date: Wed, 29 Apr 2020 04:01:24 GMT
-< Connection: Keep-Alive
-< Content-Type: text/plain; charset=UTF-8
-< Transfer-Encoding: chunked
-< Keep-Alive: timeout=3
-< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0"}
-<
-* Connection #0 to host localhost left intact
-<html ng-app="SMI2"><head><base href="http://ui.tabix.io/"></head><body><div ui-view="" class="content-ui"></div><script src="http://loader.tabix.io/master.js"></script></body></html>%
-```
-
-Encuentre el contenido del archivo enviado al cliente.
-
-``` xml
-<http_handlers>
-    <rule>
-        <methods>GET</methods>
-        <headers><XXX>xxx</XXX></headers>
-        <url>/get_absolute_path_static_handler</url>
-        <handler>
-            <type>static</type>
-            <content_type>text/html; charset=UTF-8</content_type>
-            <response_content>file:///absolute_path_file.html</response_content>
-        </handler>
-    </rule>
-    <rule>
-        <methods>GET</methods>
-        <headers><XXX>xxx</XXX></headers>
-        <url>/get_relative_path_static_handler</url>
-        <handler>
-            <type>static</type>
-            <content_type>text/html; charset=UTF-8</content_type>
-            <response_content>file://./relative_path_file.html</response_content>
-        </handler>
-    </rule>
-</http_handlers>
-```
-
-``` bash
-$ user_files_path='/var/lib/clickhouse/user_files'
-$ sudo echo "Relative Path File" > $user_files_path/relative_path_file.html
-$ sudo echo "Absolute Path File" > $user_files_path/absolute_path_file.html
-$ curl -vv -H 'XXX:xxx' 'http://localhost:8123/get_absolute_path_static_handler'
-* Trying ::1...
-* Connected to localhost (::1) port 8123 (#0)
-> GET /get_absolute_path_static_handler HTTP/1.1
-> Host: localhost:8123
-> User-Agent: curl/7.47.0
-> Accept: */*
-> XXX:xxx
->
-< HTTP/1.1 200 OK
-< Date: Wed, 29 Apr 2020 04:18:16 GMT
-< Connection: Keep-Alive
-< Content-Type: text/html; charset=UTF-8
-< Transfer-Encoding: chunked
-< Keep-Alive: timeout=3
-< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0"}
-<
-Absolute Path File
-* Connection #0 to host localhost left intact
-$ curl -vv -H 'XXX:xxx' 'http://localhost:8123/get_relative_path_static_handler'
-* Trying ::1...
-* Connected to localhost (::1) port 8123 (#0)
-> GET /get_relative_path_static_handler HTTP/1.1
-> Host: localhost:8123
-> User-Agent: curl/7.47.0
-> Accept: */*
-> XXX:xxx
->
-< HTTP/1.1 200 OK
-< Date: Wed, 29 Apr 2020 04:18:31 GMT
-< Connection: Keep-Alive
-< Content-Type: text/html; charset=UTF-8
-< Transfer-Encoding: chunked
-< Keep-Alive: timeout=3
-< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0"}
-<
-Relative Path File
-* Connection #0 to host localhost left intact
-```
-
-[Artículo Original](https://clickhouse.tech/docs/en/interfaces/http_interface/)
diff --git a/docs/es/interfaces/index.md b/docs/es/interfaces/index.md
deleted file mode 100644
index 3632c8a9e29..00000000000
--- a/docs/es/interfaces/index.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_folder_title: Interfaz
-toc_priority: 14
-toc_title: "Implantaci\xF3n"
----
-
-# Interfaz {#interfaces}
-
-ClickHouse proporciona dos interfaces de red (ambas se pueden envolver opcionalmente en TLS para mayor seguridad):
-
-- [HTTP](http.md), que está documentada y es fácil de usar directamente.
-- [TCP nativo](tcp.md), que tiene menos sobrecarga.
-
-En la mayoría de los casos, se recomienda utilizar la herramienta o biblioteca apropiada en lugar de interactuar con ellas directamente.
Las siguientes son mantenidas oficialmente por Yandex:
-
-- [Cliente de línea de comandos](cli.md)
-- [Controlador JDBC](jdbc.md)
-- [Controlador ODBC](odbc.md)
-- [Biblioteca cliente de C++](cpp.md)
-
-También hay una amplia gama de bibliotecas de terceros para trabajar con ClickHouse:
-
-- [Bibliotecas de clientes](third-party/client-libraries.md)
-- [Integraciones](third-party/integrations.md)
-- [Interfaces visuales](third-party/gui.md)
-
-[Artículo Original](https://clickhouse.tech/docs/en/interfaces/)
diff --git a/docs/es/interfaces/jdbc.md b/docs/es/interfaces/jdbc.md
deleted file mode 100644
index 7303dec8960..00000000000
--- a/docs/es/interfaces/jdbc.md
+++ /dev/null
@@ -1,15 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_priority: 22
-toc_title: Controlador JDBC
----
-
-# Controlador JDBC {#jdbc-driver}
-
-- **[Controlador oficial](https://github.com/ClickHouse/clickhouse-jdbc)**
-- Controladores de terceros:
-    - [ClickHouse-Native-JDBC](https://github.com/housepower/ClickHouse-Native-JDBC)
-    - [clickhouse4j](https://github.com/blynkkk/clickhouse4j)
-
-[Artículo Original](https://clickhouse.tech/docs/en/interfaces/jdbc/)
diff --git a/docs/es/interfaces/mysql.md b/docs/es/interfaces/mysql.md
deleted file mode 100644
index a5124c61dd5..00000000000
--- a/docs/es/interfaces/mysql.md
+++ /dev/null
@@ -1,49 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_priority: 20
-toc_title: Interfaz MySQL
----
-
-# Interfaz MySQL {#mysql-interface}
-
-ClickHouse es compatible con el protocolo de red de MySQL. Se puede habilitar con el ajuste [mysql_port](../operations/server-configuration-parameters/settings.md#server_configuration_parameters-mysql_port) en el archivo de configuración:
-
-``` xml
-<mysql_port>9004</mysql_port>
-```
-
-Ejemplo de conexión mediante la herramienta de línea de comandos `mysql`:
-
-``` bash
-$ mysql --protocol tcp -u default -P 9004
-```
-
-Salida si la conexión se realizó correctamente:
-
-``` text
-Welcome to the MySQL monitor. Commands end with ; or \g.
-Your MySQL connection id is 4
-Server version: 20.2.1.1-ClickHouse
-
-Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.
-
-Oracle is a registered trademark of Oracle Corporation and/or its
-affiliates. Other names may be trademarks of their respective
-owners.
-
-Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
-
-mysql>
-```
-
-Para la compatibilidad con todos los clientes MySQL, se recomienda especificar la contraseña de usuario con [doble SHA1](../operations/settings/settings-users.md#password_double_sha1_hex) en el archivo de configuración.
-Si la contraseña de usuario se especifica usando [SHA256](../operations/settings/settings-users.md#password_sha256_hex), algunos clientes no podrán autenticarse (mysqljs y versiones antiguas de la herramienta de línea de comandos mysql).
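-
-Un boceto de cómo generar dicho hash de doble SHA1 (se supone que `openssl` está disponible; la contraseña `password` es solo un ejemplo):
-
-``` bash
-# Doble SHA1 de la contraseña, en hexadecimal, para password_double_sha1_hex.
-$ echo -n 'password' | openssl dgst -sha1 -binary | openssl dgst -sha1
-```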
-
-Restricciones:
-
-- las consultas preparadas no son compatibles
-
-- algunos tipos de datos se envían como cadenas
-
-[Artículo Original](https://clickhouse.tech/docs/en/interfaces/mysql/)
diff --git a/docs/es/interfaces/odbc.md b/docs/es/interfaces/odbc.md
deleted file mode 100644
index 6ccb979c7f7..00000000000
--- a/docs/es/interfaces/odbc.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_priority: 23
-toc_title: Controlador ODBC
----
-
-# Controlador ODBC {#odbc-driver}
-
-- [Controlador oficial](https://github.com/ClickHouse/clickhouse-odbc).
-
-[Artículo Original](https://clickhouse.tech/docs/en/interfaces/odbc/)
diff --git a/docs/es/interfaces/tcp.md b/docs/es/interfaces/tcp.md
deleted file mode 100644
index 47df0d12829..00000000000
--- a/docs/es/interfaces/tcp.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_priority: 18
-toc_title: Interfaz nativa (TCP)
----
-
-# Interfaz nativa (TCP) {#native-interface-tcp}
-
-El protocolo nativo se utiliza en el [cliente de línea de comandos](cli.md), para la comunicación entre servidores durante el procesamiento de consultas distribuidas, y también en otros programas de C++. Desafortunadamente, el protocolo nativo de ClickHouse aún no tiene una especificación formal, pero puede deducirse mediante ingeniería inversa a partir del código fuente de ClickHouse (comenzando [por aquí](https://github.com/ClickHouse/ClickHouse/tree/master/src/Client)) y/o mediante la interceptación y el análisis del tráfico TCP.
-
-[Artículo Original](https://clickhouse.tech/docs/en/interfaces/tcp/)
diff --git a/docs/es/interfaces/third-party/client-libraries.md b/docs/es/interfaces/third-party/client-libraries.md
deleted file mode 100644
index b61ab1a5d9c..00000000000
--- a/docs/es/interfaces/third-party/client-libraries.md
+++ /dev/null
@@ -1,60 +0,0 @@
----
-toc_priority: 26
-toc_title: Client Libraries
----
-
-# Client Libraries from Third-party Developers {#client-libraries-from-third-party-developers}
-
-!!! warning "Disclaimer"
-    Yandex does **not** maintain the libraries listed below and hasn't done any extensive testing to ensure their quality.
- -- Python - - [infi.clickhouse_orm](https://github.com/Infinidat/infi.clickhouse_orm) - - [clickhouse-driver](https://github.com/mymarilyn/clickhouse-driver) - - [clickhouse-client](https://github.com/yurial/clickhouse-client) - - [aiochclient](https://github.com/maximdanilchenko/aiochclient) - - [asynch](https://github.com/long2ice/asynch) -- PHP - - [smi2/phpclickhouse](https://packagist.org/packages/smi2/phpClickHouse) - - [8bitov/clickhouse-php-client](https://packagist.org/packages/8bitov/clickhouse-php-client) - - [bozerkins/clickhouse-client](https://packagist.org/packages/bozerkins/clickhouse-client) - - [simpod/clickhouse-client](https://packagist.org/packages/simpod/clickhouse-client) - - [seva-code/php-click-house-client](https://packagist.org/packages/seva-code/php-click-house-client) - - [SeasClick C++ client](https://github.com/SeasX/SeasClick) -- Go - - [clickhouse](https://github.com/kshvakov/clickhouse/) - - [go-clickhouse](https://github.com/roistat/go-clickhouse) - - [mailrugo-clickhouse](https://github.com/mailru/go-clickhouse) - - [golang-clickhouse](https://github.com/leprosus/golang-clickhouse) -- NodeJs - - [clickhouse (NodeJs)](https://github.com/TimonKK/clickhouse) - - [node-clickhouse](https://github.com/apla/node-clickhouse) -- Perl - - [perl-DBD-ClickHouse](https://github.com/elcamlost/perl-DBD-ClickHouse) - - [HTTP-ClickHouse](https://metacpan.org/release/HTTP-ClickHouse) - - [AnyEvent-ClickHouse](https://metacpan.org/release/AnyEvent-ClickHouse) -- Ruby - - [ClickHouse (Ruby)](https://github.com/shlima/click_house) - - [clickhouse-activerecord](https://github.com/PNixx/clickhouse-activerecord) -- R - - [clickhouse-r](https://github.com/hannesmuehleisen/clickhouse-r) - - [RClickHouse](https://github.com/IMSMWU/RClickHouse) -- Java - - [clickhouse-client-java](https://github.com/VirtusAI/clickhouse-client-java) - - [clickhouse-client](https://github.com/Ecwid/clickhouse-client) -- Scala - - [clickhouse-scala-client](https://github.com/crobox/clickhouse-scala-client) -- Kotlin - - [AORM](https://github.com/TanVD/AORM) -- C# - - [Octonica.ClickHouseClient](https://github.com/Octonica/ClickHouseClient) - - [ClickHouse.Ado](https://github.com/killwort/ClickHouse-Net) - - [ClickHouse.Client](https://github.com/DarkWanderer/ClickHouse.Client) - - [ClickHouse.Net](https://github.com/ilyabreev/ClickHouse.Net) -- Elixir - - [clickhousex](https://github.com/appodeal/clickhousex/) - - [pillar](https://github.com/sofakingworld/pillar) -- Nim - - [nim-clickhouse](https://github.com/leonardoce/nim-clickhouse) - -[Original article](https://clickhouse.tech/docs/en/interfaces/third-party/client_libraries/) diff --git a/docs/es/interfaces/third-party/gui.md b/docs/es/interfaces/third-party/gui.md deleted file mode 100644 index 754c0f68c69..00000000000 --- a/docs/es/interfaces/third-party/gui.md +++ /dev/null @@ -1,156 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 28 -toc_title: Interfaces Visuales ---- - -# Interfaces visuales de desarrolladores de terceros {#visual-interfaces-from-third-party-developers} - -## De código abierto {#open-source} - -### Tabix {#tabix} - -Interfaz web para ClickHouse en el [Tabix](https://github.com/tabixio/tabix) proyecto. - -Función: - -- Funciona con ClickHouse directamente desde el navegador, sin la necesidad de instalar software adicional. -- Editor de consultas con resaltado de sintaxis. -- Autocompletado de comandos. 
-- Herramientas para el análisis gráfico de la ejecución de consultas.
-- Opciones de esquema de color.
-
-[Documentación de Tabix](https://tabix.io/doc/).
-
-### HouseOps {#houseops}
-
-[HouseOps](https://github.com/HouseOps/HouseOps) es una interfaz de usuario / IDE para OSX, Linux y Windows.
-
-Función:
-
-- Generador de consultas con resaltado de sintaxis. Ver la respuesta en una tabla o vista JSON.
-- Exportar resultados de consultas como CSV o JSON.
-- Lista de procesos con descripciones. Modo de escritura. Capacidad de detener (`KILL`) un proceso.
-- Gráfico de base de datos. Muestra todas las tablas y sus columnas con información adicional.
-- Una vista rápida del tamaño de la columna.
-- Configuración del servidor.
-
-Las siguientes características están planificadas para el desarrollo:
-
-- Gestión de bases de datos.
-- Gestión de usuarios.
-- Análisis de datos en tiempo real.
-- Supervisión de clúster.
-- Gestión de clústeres.
-- Monitoreo de tablas replicadas y Kafka.
-
-### LightHouse {#lighthouse}
-
-[LightHouse](https://github.com/VKCOM/lighthouse) es una interfaz web ligera para ClickHouse.
-
-Función:
-
-- Lista de tablas con filtrado y metadatos.
-- Vista previa de la tabla con filtrado y clasificación.
-- Ejecución de consultas de sólo lectura.
-
-### Redash {#redash}
-
-[Redash](https://github.com/getredash/redash) es una plataforma para la visualización de datos.
-
-Admite múltiples fuentes de datos, incluido ClickHouse. Redash puede unir los resultados de consultas de diferentes fuentes de datos en un conjunto de datos final.
-
-Función:
-
-- Potente editor de consultas.
-- Explorador de base de datos.
-- Herramientas de visualización, que le permiten representar datos en diferentes formas.
-
-### DBeaver {#dbeaver}
-
-[DBeaver](https://dbeaver.io/) - Cliente de base de datos de escritorio universal con soporte para ClickHouse.
-
-Función:
-
-- Desarrollo de consultas con resaltado de sintaxis y autocompletado.
-- Lista de tablas con filtros y búsqueda de metadatos.
-- Vista previa de datos de tabla.
-- Búsqueda de texto completo.
-
-### clickhouse-cli {#clickhouse-cli}
-
-[clickhouse-cli](https://github.com/hatarist/clickhouse-cli) es un cliente de línea de comandos alternativo para ClickHouse, escrito en Python 3.
-
-Función:
-
-- Autocompletado.
-- Resaltado de sintaxis para las consultas y la salida de datos.
-- Soporte de paginador para la salida de datos.
-- Comandos similares a los de PostgreSQL personalizados.
-
-### clickhouse-flamegraph {#clickhouse-flamegraph}
-
-[clickhouse-flamegraph](https://github.com/Slach/clickhouse-flamegraph) es una herramienta especializada para visualizar el `system.trace_log` como [flamegraph](http://www.brendangregg.com/flamegraphs.html).
-
-### clickhouse-plantuml {#clickhouse-plantuml}
-
-[clickhouse-plantuml](https://pypi.org/project/clickhouse-plantuml/) es un script para generar diagramas [PlantUML](https://plantuml.com/) de los esquemas de las tablas.
-
-## Comercial {#commercial}
-
-### DataGrip {#datagrip}
-
-[DataGrip](https://www.jetbrains.com/datagrip/) es un IDE de base de datos de JetBrains con soporte dedicado para ClickHouse. También está integrado en otras herramientas basadas en IntelliJ: PyCharm, IntelliJ IDEA, GoLand, PhpStorm y otras.
-
-Función:
-
-- Finalización de código muy rápida.
-- Resaltado de sintaxis de ClickHouse.
-- Soporte para características específicas de ClickHouse, por ejemplo, columnas anidadas, motores de tablas.
-- Editor de datos.
-- Refactorizaciones.
-- Búsqueda y navegación.
-
-### Yandex DataLens {#yandex-datalens}
-
-[Yandex DataLens](https://cloud.yandex.ru/services/datalens) es un servicio de visualización y análisis de datos.
-
-Función:
-
-- Amplia gama de visualizaciones disponibles, desde simples gráficos de barras hasta paneles complejos.
-- Los paneles pueden ponerse a disposición del público.
-- Soporte para múltiples fuentes de datos, incluyendo ClickHouse.
-- Almacenamiento de datos materializados basados en ClickHouse.
-
-DataLens está [disponible de forma gratuita](https://cloud.yandex.com/docs/datalens/pricing) para proyectos de baja carga, incluso para uso comercial.
-
-- [Documentación de DataLens](https://cloud.yandex.com/docs/datalens/).
-- [Tutorial](https://cloud.yandex.com/docs/solutions/datalens/data-from-ch-visualization) sobre la visualización de datos de una base de datos ClickHouse.
-
-### Holistics Software {#holistics-software}
-
-[Holistics](https://www.holistics.io/) es una plataforma de datos de pila completa y una herramienta de inteligencia de negocios.
-
-Función:
-
-- Informes automatizados por correo electrónico, Slack y Google Sheets.
-- Editor SQL con visualizaciones, control de versiones, autocompletado, componentes de consulta reutilizables y filtros dinámicos.
-- Análisis integrado de informes y cuadros de mando a través de iframe.
-- Preparación de datos y capacidades ETL.
-- Soporte de modelado de datos SQL para el mapeo relacional de datos.
-
-### Looker {#looker}
-
-[Looker](https://looker.com) es una plataforma de datos y una herramienta de inteligencia de negocios con soporte para más de 50 dialectos de bases de datos, incluido ClickHouse. Looker está disponible como plataforma SaaS y autoalojada. Los usuarios pueden utilizar Looker a través del navegador para explorar datos, crear visualizaciones y paneles, programar informes y compartir sus conocimientos con colegas. Looker proporciona un amplio conjunto de herramientas para incrustar estas características en otras aplicaciones y una API
-para integrar datos con otras aplicaciones.
-
-Función:
-
-- Desarrollo fácil y ágil utilizando LookML, un lenguaje que admite el [modelado de datos](https://looker.com/platform/data-modeling) curado para apoyar a los redactores de informes y a los usuarios finales.
-- Potente integración de flujos de trabajo a través de las [Acciones de datos](https://looker.com/platform/actions) de Looker.
-
-[Cómo configurar ClickHouse en Looker.](https://docs.looker.com/setup-and-management/database-config/clickhouse)
-
-[Artículo Original](https://clickhouse.tech/docs/en/interfaces/third-party/gui/)
diff --git a/docs/es/interfaces/third-party/index.md b/docs/es/interfaces/third-party/index.md
deleted file mode 100644
index adf50b05cdf..00000000000
--- a/docs/es/interfaces/third-party/index.md
+++ /dev/null
@@ -1,8 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_folder_title: tercero
-toc_priority: 24
----
-
-
diff --git a/docs/es/interfaces/third-party/integrations.md b/docs/es/interfaces/third-party/integrations.md
deleted file mode 100644
index 7588bef0230..00000000000
--- a/docs/es/interfaces/third-party/integrations.md
+++ /dev/null
@@ -1,108 +0,0 @@
----
-toc_priority: 27
-toc_title: Integrations
----
-
-# Integration Libraries from Third-party Developers {#integration-libraries-from-third-party-developers}
-
-!!!
warning "Disclaimer" - Yandex does **not** maintain the tools and libraries listed below and haven’t done any extensive testing to ensure their quality. - -## Infrastructure Products {#infrastructure-products} - -- Relational database management systems - - [MySQL](https://www.mysql.com) - - [mysql2ch](https://github.com/long2ice/mysql2ch) - - [ProxySQL](https://github.com/sysown/proxysql/wiki/ClickHouse-Support) - - [clickhouse-mysql-data-reader](https://github.com/Altinity/clickhouse-mysql-data-reader) - - [horgh-replicator](https://github.com/larsnovikov/horgh-replicator) - - [PostgreSQL](https://www.postgresql.org) - - [clickhousedb_fdw](https://github.com/Percona-Lab/clickhousedb_fdw) - - [infi.clickhouse_fdw](https://github.com/Infinidat/infi.clickhouse_fdw) (uses [infi.clickhouse_orm](https://github.com/Infinidat/infi.clickhouse_orm)) - - [pg2ch](https://github.com/mkabilov/pg2ch) - - [clickhouse_fdw](https://github.com/adjust/clickhouse_fdw) - - [MSSQL](https://en.wikipedia.org/wiki/Microsoft_SQL_Server) - - [ClickHouseMigrator](https://github.com/zlzforever/ClickHouseMigrator) -- Message queues - - [Kafka](https://kafka.apache.org) - - [clickhouse_sinker](https://github.com/housepower/clickhouse_sinker) (uses [Go client](https://github.com/ClickHouse/clickhouse-go/)) - - [stream-loader-clickhouse](https://github.com/adform/stream-loader) -- Stream processing - - [Flink](https://flink.apache.org) - - [flink-clickhouse-sink](https://github.com/ivi-ru/flink-clickhouse-sink) -- Object storages - - [S3](https://en.wikipedia.org/wiki/Amazon_S3) - - [clickhouse-backup](https://github.com/AlexAkulov/clickhouse-backup) -- Container orchestration - - [Kubernetes](https://kubernetes.io) - - [clickhouse-operator](https://github.com/Altinity/clickhouse-operator) -- Configuration management - - [puppet](https://puppet.com) - - [innogames/clickhouse](https://forge.puppet.com/innogames/clickhouse) - - [mfedotov/clickhouse](https://forge.puppet.com/mfedotov/clickhouse) -- Monitoring - - [Graphite](https://graphiteapp.org) - - [graphouse](https://github.com/yandex/graphouse) - - [carbon-clickhouse](https://github.com/lomik/carbon-clickhouse) + - - [graphite-clickhouse](https://github.com/lomik/graphite-clickhouse) - - [graphite-ch-optimizer](https://github.com/innogames/graphite-ch-optimizer) - optimizes staled partitions in [\*GraphiteMergeTree](../../engines/table-engines/mergetree-family/graphitemergetree.md#graphitemergetree) if rules from [rollup configuration](../../engines/table-engines/mergetree-family/graphitemergetree.md#rollup-configuration) could be applied - - [Grafana](https://grafana.com/) - - [clickhouse-grafana](https://github.com/Vertamedia/clickhouse-grafana) - - [Prometheus](https://prometheus.io/) - - [clickhouse_exporter](https://github.com/f1yegor/clickhouse_exporter) - - [PromHouse](https://github.com/Percona-Lab/PromHouse) - - [clickhouse_exporter](https://github.com/hot-wifi/clickhouse_exporter) (uses [Go client](https://github.com/kshvakov/clickhouse/)) - - [Nagios](https://www.nagios.org/) - - [check_clickhouse](https://github.com/exogroup/check_clickhouse/) - - [check_clickhouse.py](https://github.com/innogames/igmonplugins/blob/master/src/check_clickhouse.py) - - [Zabbix](https://www.zabbix.com) - - [clickhouse-zabbix-template](https://github.com/Altinity/clickhouse-zabbix-template) - - [Sematext](https://sematext.com/) - - [clickhouse integration](https://github.com/sematext/sematext-agent-integrations/tree/master/clickhouse) -- Logging - - 
[rsyslog](https://www.rsyslog.com/) - - [omclickhouse](https://www.rsyslog.com/doc/master/configuration/modules/omclickhouse.html) - - [fluentd](https://www.fluentd.org) - - [loghouse](https://github.com/flant/loghouse) (for [Kubernetes](https://kubernetes.io)) - - [logagent](https://www.sematext.com/logagent) - - [logagent output-plugin-clickhouse](https://sematext.com/docs/logagent/output-plugin-clickhouse/) -- Geo - - [MaxMind](https://dev.maxmind.com/geoip/) - - [clickhouse-maxmind-geoip](https://github.com/AlexeyKupershtokh/clickhouse-maxmind-geoip) - -## Programming Language Ecosystems {#programming-language-ecosystems} - -- Python - - [SQLAlchemy](https://www.sqlalchemy.org) - - [sqlalchemy-clickhouse](https://github.com/cloudflare/sqlalchemy-clickhouse) (uses [infi.clickhouse_orm](https://github.com/Infinidat/infi.clickhouse_orm)) - - [pandas](https://pandas.pydata.org) - - [pandahouse](https://github.com/kszucs/pandahouse) -- PHP - - [Doctrine](https://www.doctrine-project.org/) - - [dbal-clickhouse](https://packagist.org/packages/friendsofdoctrine/dbal-clickhouse) -- R - - [dplyr](https://db.rstudio.com/dplyr/) - - [RClickHouse](https://github.com/IMSMWU/RClickHouse) (uses [clickhouse-cpp](https://github.com/artpaul/clickhouse-cpp)) -- Java - - [Hadoop](http://hadoop.apache.org) - - [clickhouse-hdfs-loader](https://github.com/jaykelin/clickhouse-hdfs-loader) (uses [JDBC](../../sql-reference/table-functions/jdbc.md)) -- Scala - - [Akka](https://akka.io) - - [clickhouse-scala-client](https://github.com/crobox/clickhouse-scala-client) -- C# - - [ADO.NET](https://docs.microsoft.com/en-us/dotnet/framework/data/adonet/ado-net-overview) - - [ClickHouse.Ado](https://github.com/killwort/ClickHouse-Net) - - [ClickHouse.Client](https://github.com/DarkWanderer/ClickHouse.Client) - - [ClickHouse.Net](https://github.com/ilyabreev/ClickHouse.Net) - - [ClickHouse.Net.Migrations](https://github.com/ilyabreev/ClickHouse.Net.Migrations) -- Elixir - - [Ecto](https://github.com/elixir-ecto/ecto) - - [clickhouse_ecto](https://github.com/appodeal/clickhouse_ecto) -- Ruby - - [Ruby on Rails](https://rubyonrails.org/) - - [activecube](https://github.com/bitquery/activecube) - - [ActiveRecord](https://github.com/PNixx/clickhouse-activerecord) - - [GraphQL](https://github.com/graphql) - - [activecube-graphql](https://github.com/bitquery/activecube-graphql) - -[Original article](https://clickhouse.tech/docs/en/interfaces/third-party/integrations/) diff --git a/docs/es/interfaces/third-party/proxy.md b/docs/es/interfaces/third-party/proxy.md deleted file mode 100644 index e1aabf8fce4..00000000000 --- a/docs/es/interfaces/third-party/proxy.md +++ /dev/null @@ -1,46 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 29 -toc_title: Proxy ---- - -# Servidores proxy de desarrolladores de terceros {#proxy-servers-from-third-party-developers} - -## chproxy {#chproxy} - -[chproxy](https://github.com/Vertamedia/chproxy), es un proxy HTTP y equilibrador de carga para la base de datos ClickHouse. - -Función: - -- Enrutamiento por usuario y almacenamiento en caché de respuestas. -- Flexible límites. -- Renovación automática del certificado SSL. - -Implementado en Go. 
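-
-Desde el punto de vista del cliente, chproxy se consulta como cualquier otro punto de acceso HTTP de ClickHouse (boceto hipotético; el host `chproxy-host` y el puerto 9090 dependen de su despliegue):
-
-``` bash
-# La aplicación envía sus consultas al proxy en lugar del servidor.
-$ curl 'http://chproxy-host:9090/?query=SELECT%201'
-```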
- -## KittenHouse {#kittenhouse} - -[KittenHouse](https://github.com/VKCOM/kittenhouse) is designed to be a local proxy between ClickHouse and an application server in case it is impossible or inconvenient to buffer INSERT data on your application side. - -Features: - -- In-memory and on-disk data buffering. -- Per-table routing. -- Load balancing and health checking. - -Implemented in Go. - -## ClickHouse-Bulk {#clickhouse-bulk} - -[ClickHouse-Bulk](https://github.com/nikepan/clickhouse-bulk) is a simple ClickHouse insert collector. - -Features: - -- Group requests and send them by threshold or interval. -- Multiple remote servers. -- Basic authentication. - -Implemented in Go. - -[Original article](https://clickhouse.tech/docs/en/interfaces/third-party/proxy/) diff --git a/docs/es/introduction/adopters.md b/docs/es/introduction/adopters.md deleted file mode 100644 index 4c0aa78d57b..00000000000 --- a/docs/es/introduction/adopters.md +++ /dev/null @@ -1,86 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 8 -toc_title: Adopters ---- - -# ClickHouse Adopters {#clickhouse-adopters} - -!!! warning "Disclaimer" - The following list of companies using ClickHouse and their success stories is assembled from public sources and thus might differ from the current reality. We would appreciate it if you shared the story of adopting ClickHouse in your company and [add it to the list](https://github.com/ClickHouse/ClickHouse/edit/master/docs/en/introduction/adopters.md), but please make sure you will not have any NDA issues by doing so. Providing updates with publications from other companies is also useful.
- -| Company | Industry | Usecase | Cluster Size | (Un)Compressed Data Size\* | Reference | -|---------|----------|---------|--------------|----------------------------|-----------| -| 2gis | Maps | Monitoring | — | — | [Talk in Russian, July 2019](https://youtu.be/58sPkXfq6nw) | -| Aloha Browser | Mobile App | Browser backend | — | — | [Slides in Russian, May 2019](https://github.com/yandex/clickhouse-presentations/blob/master/meetup22/aloha.pdf) | -| Amadeus | Travel | Analytics | — | — | [Press Release, April 2018](https://www.altinity.com/blog/2018/4/5/amadeus-technologies-launches-investment-and-insights-tool-based-on-machine-learning-and-strategy-algorithms) | -| Appsflyer | Mobile analytics | Main product | — | — | [Talk in Russian, July 2019](https://www.youtube.com/watch?v=M3wbRlcpBbY) | -| ArenaData | Data Platform | Main product | — | — | [Slides in Russian, December 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup38/indexes.pdf) | -| Badoo | Dating | Timeseries | — | — | [Slides in Russian, December 2019](https://presentations.clickhouse.tech/meetup38/forecast.pdf) | -| Benocs | Network Telemetry and Analytics | Main Product | — | — | [Slides in English, October 2017](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup9/lpm.pdf) | -| Bloomberg | Finance, Media | Monitoring | 102 servers | — | [Slides, May 2018](https://www.slideshare.net/Altinity/http-analytics-for-6m-requests-per-second-using-clickhouse-by-alexander-bocharov) | -| Bloxy | Blockchain | Analytics | — | — | [Slides in Russian, August 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup17/4_bloxy.pptx) | -| Dataliance for China Telecom | Telecom | Analytics | — | — | [Slides in Chinese, January 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup12/telecom.pdf) | -| CARTO | Business Intelligence | Geo analytics | — | — | [Geospatial processing with ClickHouse](https://carto.com/blog/geospatial-processing-with-clickhouse/) | -| CERN | Research | Experiment | — | — | [Press release, April 2012](https://www.yandex.com/company/press_center/press_releases/2012/2012-04-10/) | -| Cisco | Networking | Traffic analysis | — | — | [Lightning talk, October 2019](https://youtu.be/-hI1vDR2oPY?t=5057) | -| Citadel Securities | Finance | — | — | — | [Contribution, March 2019](https://github.com/ClickHouse/ClickHouse/pull/4774) | -| Citymobil | Taxi | Analytics | — | — | [Blog Post in Russian, March 2020](https://habr.com/en/company/citymobil/blog/490660/) | -| ContentSquare | Web analytics | Main product | — | — | [Blog post in French, November 2018](http://souslecapot.net/2018/11/21/patrick-chatain-vp-engineering-chez-contentsquare-penser-davantage-amelioration-continue-que-revolution-constante/) | -| Cloudflare | CDN | Traffic analysis | 36 servers | — | [Blog post, May 2017](https://blog.cloudflare.com/how-cloudflare-analyzes-1m-dns-queries-per-second/), [Blog post, March 2018](https://blog.cloudflare.com/http-analytics-for-6m-requests-per-second-using-clickhouse/) | -| Corunet | Analytics | Main product | — | — | [Slides in English, April 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup21/predictive_models.pdf) | -| CraiditX 氪信 | Finance AI | Analysis | — | — | [Slides in English, November 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup33/udf.pptx) | -| Criteo | Retail | Main product | — | — | [Slides in English, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup18/3_storetail.pptx) | -| Deutsche Bank | Finance | BI Analytics | — | — | [Slides in English, October 2019](https://bigdatadays.ru/wp-content/uploads/2019/10/D2-H3-3_Yakunin-Goihburg.pdf) | -| Diva-e | Digital consulting | Main Product | — | — | [Slides in English, September 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup29/ClickHouse-MeetUp-Unusual-Applications-sd-2019-09-17.pdf) | -| Exness | Trading | Metrics, Logging | — | — | [Talk in Russian, May 2019](https://youtu.be/_rpU-TvSfZ8?t=3215) | -| Geniee | Ad network | Main product | — | — | [Blog post in Japanese, July 2017](https://tech.geniee.co.jp/entry/2017/07/20/160100) | -| HUYA | Video Streaming | Analytics | — | — | [Slides in Chinese, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup19/7.%20ClickHouse万亿数据分析实践%20李本旺(sundy-li)%20虎牙.pdf) | -| Idealista | Real Estate | Analytics | — | — | [Blog Post in English, April 2019](https://clickhouse.tech/blog/en/clickhouse-meetup-in-madrid-on-april-2-2019) | -| Infovista | Networks | Analytics | — | — | [Slides in English, October 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup30/infovista.pdf) | -| InnoGames | Games | Metrics, Logging | — | — | [Slides in Russian, September 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup28/graphite_and_clickHouse.pdf) | -| Integros | Platform for video services | Analytics | — | — | [Slides in Russian, May 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup22/strategies.pdf) | -| Kodiak Data | Clouds | Main product | — | — | [Slides in English, April 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup13/kodiak_data.pdf) | -| Kontur | Software Development | Metrics | — | — | [Talk in Russian, November 2018](https://www.youtube.com/watch?v=U4u4Bd0FtrY) | -| LifeStreet | Ad network | Main product | 75 servers (3 replicas) | 5.27 PiB | [Blog post in Russian, February 2017](https://habr.com/en/post/322620/) | -| Mail.ru Cloud Solutions | Cloud services | Main product | — | — | [Article in Russian](https://mcs.mail.ru/help/db-create/clickhouse#) | -| MessageBird | Telecommunications | Statistics | — | — | [Slides in English, November 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup20/messagebird.pdf) | -| MGID | Ad network | Web-analytics | — | — | [Blog post in Russian, April 2020](http://gs-studio.com/news-about-it/32777----clickhouse---c) | -| OneAPM | Monitoring and Data Analysis | Main product | — | — | [Slides in Chinese, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup19/8.%20clickhouse在OneAPM的应用%20杜龙.pdf) | -| Pragma Innovation | Telemetry and Big Data Analysis | Main product | — | — | [Slides in English, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup18/4_pragma_innovation.pdf) | -| QINGCLOUD | Cloud services | Main product | — | — | [Slides in Chinese, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup19/4.%20Cloud%20%2B%20TSDB%20for%20ClickHouse%20张健%20QingCloud.pdf) | -| Qrator | DDoS protection | Main product | — | — | [Blog Post, March 2019](https://blog.qrator.net/en/clickhouse-ddos-mitigation_37/) | -| Percent 百分点 | Analytics | Main Product | — | — | [Slides in Chinese, June 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup24/4.%20ClickHouse万亿数据双中心的设计与实践%20.pdf) | -| Rambler | Internet services | Analytics | — | — | [Talk in Russian, April 2018](https://medium.com/@ramblertop/разработка-api-clickhouse-для-рамблер-топ-100-f4c7e56f3141) | -| Tencent | Messaging | Logging | — | — | [Talk in Chinese, November 2019](https://youtu.be/T-iVQRuw-QY?t=5050) | -| Traffic Stars | AD network | — | — | — | [Slides in Russian, May 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup15/lightning/ninja.pdf) | -| S7 Airlines | Airlines | Metrics, Logging | — | — | [Talk in Russian, March 2019](https://www.youtube.com/watch?v=nwG68klRpPg&t=15s) | -| SEMrush | Marketing | Main product | — | — | [Slides in Russian, August 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup17/5_semrush.pdf) | -| scireum GmbH | e-Commerce | Main product | — | — | [Talk in German, February 2020](https://www.youtube.com/watch?v=7QWAn5RbyR4) | -| Sentry | Software developer | Backend for product | — | — | [Blog Post in English, May 2019](https://blog.sentry.io/2019/05/16/introducing-snuba-sentrys-new-search-infrastructure) | -| SGK | Government Social Security | Analytics | — | — | [Slides in English, November 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup35/ClickHouse%20Meetup-Ramazan%20POLAT.pdf) | -| seo.do | Analytics | Main product | — | — | [Slides in English, November 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup35/CH%20Presentation-%20Metehan%20Çetinkaya.pdf) | -| Sina | News | — | — | — | [Slides in Chinese, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup19/6.%20ClickHouse最佳实践%20高鹏_新浪.pdf) | -| SMI2 | News | Analytics | — | — | [Blog Post in Russian, November 2017](https://habr.com/ru/company/smi2/blog/314558/) | -| Splunk | Business Analytics | Main product | — | — | [Slides in English, January 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup12/splunk.pdf) | -| Spotify | Music | Experimentation | — | — | [Slides, July 2018](https://www.slideshare.net/glebus/using-clickhouse-for-experimentation-104247173) | -| Tencent | Big Data | Data processing | — | — | [Slides in Chinese, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup19/5.%20ClickHouse大数据集群应用_李俊飞腾讯网媒事业部.pdf) | -| Uber | Taxi | Logging | — | — | [Slides, February 2020](https://presentations.clickhouse.tech/meetup40/uber.pdf) | -| VKontakte | Social Network | Statistics, Logging | — | — | [Slides in Russian, August 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup17/3_vk.pdf) | -| Wisebits | IT Solutions | Analytics | — | — | [Slides in Russian, May 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup22/strategies.pdf) | -| Xiaoxin Tech | Education | Common purpose | — | — | [Slides in English, November 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup33/sync-clickhouse-with-mysql-mongodb.pptx) | -| Ximalaya | Audio sharing | OLAP | — | — | [Slides in English, November 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup33/ximalaya.pdf) | -| Yandex Cloud | Public Cloud | Main product | — | — | [Talk in Russian, December 2019](https://www.youtube.com/watch?v=pgnak9e_E0o) | -| Yandex DataLens | Business Intelligence | Main product | — | — | [Slides in Russian, December 2019](https://presentations.clickhouse.tech/meetup38/datalens.pdf) | -| Yandex Market | e-Commerce | Metrics, Logging | — | — | [Talk in Russian, January 2019](https://youtu.be/_l1qP0DyBcA?t=478) | -| Yandex Metrica | Web analytics | Main product | 360 servers in one cluster, 1862 servers in one department | 66.41 PiB / 5.68 PiB | [Slides, February 2020](https://presentations.clickhouse.tech/meetup40/introduction/#13) | -| ЦВТ | Software Development | Metrics, Logging | — | — | [Blog Post, March 2019, in Russian](https://vc.ru/dev/62715-kak-my-stroili-monitoring-na-prometheus-clickhouse-i-elk) | -| МКБ | Bank | Web-system monitoring | — | — | [Slides in Russian, September 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup28/mkb.pdf) | -| Jinshuju 金数据 | BI Analytics | Main product | — | — | [Slides in Chinese, October 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup24/3.%20金数据数据架构调整方案Public.pdf) | -| Instana | APM Platform | Main product | — | — | [Twitter post](https://twitter.com/mieldonkers/status/1248884119158882304) | -| Wargaming | Games | | — | — | [Interview](https://habr.com/en/post/496954/) | -| Crazypanda | Games | | — | — | Live session at ClickHouse meetup | -| FunCorp | Games | | — | — | [Article](https://www.altinity.com/blog/migrating-from-redshift-to-clickhouse) | - -[Original article](https://clickhouse.tech/docs/en/introduction/adopters/) diff --git a/docs/es/introduction/distinctive-features.md b/docs/es/introduction/distinctive-features.md deleted file mode 100644 index 154b12a65e9..00000000000 --- a/docs/es/introduction/distinctive-features.md +++ /dev/null @@ -1,77
+0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 4 -toc_title: "Distinctive Features" ---- - -# Distinctive Features of ClickHouse {#distinctive-features-of-clickhouse} - -## True Column-Oriented DBMS {#true-column-oriented-dbms} - -In a true column-oriented DBMS, no extra data is stored with the values. Among other things, this means that constant-length values must be supported, to avoid storing their length "number" next to the values. As an example, a billion UInt8-type values should consume around 1 GB uncompressed, or this strongly affects the CPU use. It is essential to store data compactly (without any "garbage") even when uncompressed, since the speed of decompression (CPU usage) depends mainly on the volume of uncompressed data. - -This is worth noting because there are systems that can store values of different columns separately, but that cannot effectively process analytical queries due to their optimization for other scenarios. Examples are HBase, BigTable, Cassandra, and HyperTable. In these systems, you would get throughput of around a hundred thousand rows per second, but not hundreds of millions of rows per second. - -It is also worth noting that ClickHouse is a database management system, not a single database. ClickHouse allows creating tables and databases at runtime, loading data, and running queries without reconfiguring and restarting the server. - -## Data Compression {#data-compression} - -Some column-oriented DBMSs (InfiniDB CE and MonetDB) do not use data compression. However, data compression plays a key role in achieving excellent performance. - -## Disk Storage of Data {#disk-storage-of-data} - -Keeping data physically sorted by primary key makes it possible to extract data for specific values or ranges of values with low latency, less than a few dozen milliseconds. Some column-oriented DBMSs (such as SAP HANA and Google PowerDrill) can only work in RAM. This approach encourages the allocation of a larger hardware budget than is necessary for real-time analysis. ClickHouse is designed to work on regular hard drives, which means the cost per GB of data storage is low, but SSDs and additional RAM are also fully used if available. - -## Parallel Processing on Multiple Cores {#parallel-processing-on-multiple-cores} - -Large queries are parallelized naturally, taking all the necessary resources available on the current server. - -## Distributed Processing on Multiple Servers {#distributed-processing-on-multiple-servers} - -Almost none of the columnar DBMSs mentioned above have support for distributed query processing. -In ClickHouse, data can reside on different shards. Each shard can be a group of replicas used for fault tolerance. All shards are used to run a query in parallel, transparently for the user. - -## SQL Support {#sql-support} - -ClickHouse supports a declarative query language based on SQL that is identical to the SQL standard in many cases.
-Supported queries include GROUP BY, ORDER BY, subqueries in FROM, IN, and JOIN clauses, and scalar subqueries. -Dependent subqueries and window functions are not supported. - -## Vector Engine {#vector-engine} - -Data is not only stored by columns but is also processed by vectors (parts of columns), which allows achieving high CPU efficiency. - -## Real-time Data Updates {#real-time-data-updates} - -ClickHouse supports tables with a primary key. To quickly run queries on the range of the primary key, the data is sorted incrementally using the merge tree. Due to this, data can be continually added to the table. No locks are taken when new data is ingested. - -## Index {#index} - -Having the data physically sorted by primary key makes it possible to extract data for specific values or ranges of values with low latency, less than a few dozen milliseconds. - -## Suitable for Online Queries {#suitable-for-online-queries} - -Low latency means that queries can be processed without delay and without trying to prepare an answer in advance, right at the same moment the user interface page is loading. In other words, online. - -## Support for Approximated Calculations {#support-for-approximated-calculations} - -ClickHouse provides various ways to trade accuracy for performance (a short sketch follows after the list of disadvantages below): - -1. Aggregate functions for approximated calculation of the number of distinct values, medians, and quantiles. -2. Running a query based on a part (sample) of data and getting an approximated result. In this case, proportionally less data is retrieved from the disk. -3. Running an aggregation for a limited number of random keys, instead of for all keys. Under certain conditions for key distribution in the data, this provides a reasonably accurate result while using fewer resources. - -## Data Replication and Data Integrity Support {#data-replication-and-data-integrity-support} - -ClickHouse uses asynchronous multi-master replication. After being written to any available replica, all the remaining replicas retrieve their copy in the background. The system maintains identical data on different replicas. Recovery after most failures is performed automatically, or semi-automatically in complex cases. - -For more information, see the section [Data replication](../engines/table-engines/mergetree-family/replication.md). - -## ClickHouse Features that Can Be Considered Disadvantages {#clickhouse-features-that-can-be-considered-disadvantages} - -1. No full-fledged transactions. -2. Lack of ability to modify or delete already inserted data with a high rate and low latency. There are batch deletes and updates available to clean up or modify data, for example, to comply with [GDPR](https://gdpr-info.eu). -3. The sparse index makes ClickHouse not as suitable for point queries retrieving single rows by their keys.
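To make the accuracy/performance trade-offs listed above concrete, here is a minimal, hedged sketch in ClickHouse SQL. The `hits` table and its columns are hypothetical names used only for illustration; `SAMPLE` additionally assumes the table was created with a sampling key.

``` sql
-- Option 1: approximate aggregate functions.
SELECT
    uniq(UserID) AS approx_unique_users,       -- approximate count of distinct values
    quantile(0.5)(Duration) AS approx_median   -- approximate median
FROM hits;

-- Option 2: run the query on a 10% sample of the data.
SELECT count() FROM hits SAMPLE 1 / 10;

-- Option 3: aggregate only a limited number of keys.
SELECT DomainID, count()
FROM hits
GROUP BY DomainID
SETTINGS max_rows_to_group_by = 100000, group_by_overflow_mode = 'any';
```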
- -[Original article](https://clickhouse.tech/docs/en/introduction/distinctive_features/) diff --git a/docs/es/introduction/history.md b/docs/es/introduction/history.md deleted file mode 100644 index 7311fa01959..00000000000 --- a/docs/es/introduction/history.md +++ /dev/null @@ -1,56 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 7 -toc_title: History ---- - -# ClickHouse History {#clickhouse-history} - -ClickHouse was initially developed to power [Yandex.Metrica](https://metrica.yandex.com/), [the second largest web analytics platform in the world](http://w3techs.com/technologies/overview/traffic_analysis/all), and continues to be the core component of this system. With more than 13 trillion records in the database and more than 20 billion events daily, ClickHouse allows generating custom reports on the fly directly from non-aggregated data. This article briefly covers the goals of ClickHouse in the early stages of its development. - -Yandex.Metrica builds customized reports on the fly based on hits and sessions, with arbitrary segments defined by the user. Doing so often requires building complex aggregates, such as the number of unique users. New data for building a report arrives in real time. - -As of April 2014, Yandex.Metrica was tracking about 12 billion events (page views and clicks) daily. All these events must be stored to build custom reports. A single query may require scanning millions of rows within a few hundred milliseconds, or hundreds of millions of rows in just a few seconds. - -## Usage in Yandex.Metrica and Other Yandex Services {#usage-in-yandex-metrica-and-other-yandex-services} - -ClickHouse serves multiple purposes in Yandex.Metrica. -Its main task is to build reports in online mode using non-aggregated data. It uses a cluster of 374 servers, which store over 20.3 trillion rows in the database. The volume of compressed data is about 2 PB, without accounting for duplicates and replicas. The volume of uncompressed data (in TSV format) would be approximately 17 PB. - -ClickHouse also plays a key role in the following processes: - -- Storing data for Session Replay from Yandex.Metrica. -- Processing intermediate data. -- Building global reports with Analytics. -- Running queries for debugging the Yandex.Metrica engine. -- Analyzing logs from the API and the user interface. - -Nowadays, there are a few dozen ClickHouse installations in other Yandex services and departments: search verticals, e-commerce, advertisement, business analytics, mobile development, personal services, and others. - -## Aggregated and Non-aggregated Data {#aggregated-and-non-aggregated-data} - -There is a widespread opinion that to calculate statistics effectively, you must aggregate data, since this reduces the volume of data. - -But data aggregation comes with a lot of limitations: - -- You must have a pre-defined list of required reports. -- The user cannot make custom reports. -- When aggregating over a large number of distinct keys, the volume of data is hardly reduced, so aggregation is useless. -- For a large number of reports, there are too many aggregation variations (combinatorial explosion).
-- When aggregating keys with high cardinality (such as URLs), the volume of data is not reduced by much (less than twofold). -- For this reason, the volume of data with aggregation might grow instead of shrink. -- Users do not view all the reports we generate for them. A large portion of those calculations is useless. -- The logical integrity of data may be violated for various aggregations. - -If we do not aggregate anything and work with non-aggregated data, this might reduce the volume of calculations. - -However, with aggregation, a significant part of the work is taken offline and completed relatively calmly. In contrast, online calculations require calculating as fast as possible, since the user is waiting for the result. - -Yandex.Metrica has a specialized system for aggregating data called Metrage, which was used for the majority of reports. -Starting in 2009, Yandex.Metrica also used a specialized OLAP database for non-aggregated data called OLAPServer, which was previously used for the report builder. -OLAPServer worked well for non-aggregated data, but it had many restrictions that did not allow it to be used for all reports as desired. These included the lack of support for data types (only numbers) and the inability to incrementally update data in real time (it could only be done by rewriting data daily). OLAPServer is not a DBMS, but a specialized database. - -The initial goal of ClickHouse was to remove the limitations of OLAPServer and solve the problem of working with non-aggregated data for all reports, but over the years it has grown into a general-purpose database management system suitable for a wide range of analytical tasks. - -[Original article](https://clickhouse.tech/docs/en/introduction/history/) diff --git a/docs/es/introduction/index.md b/docs/es/introduction/index.md deleted file mode 100644 index 7026dc800e4..00000000000 --- a/docs/es/introduction/index.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_folder_title: Introduction -toc_priority: 1 ---- - - diff --git a/docs/es/introduction/performance.md b/docs/es/introduction/performance.md deleted file mode 100644 index 01640439128..00000000000 --- a/docs/es/introduction/performance.md +++ /dev/null @@ -1,32 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 6 -toc_title: Performance ---- - -# Performance {#performance} - -According to internal testing results at Yandex, ClickHouse shows the best performance (both the highest throughput for long queries and the lowest latency on short queries) for comparable operating scenarios among systems of its class that were available for testing. You can view the test results on a [separate page](https://clickhouse.tech/benchmark/dbms/). - -Numerous independent benchmarks came to similar conclusions. They are not difficult to find using an Internet search, or you can see [our small collection of related links](https://clickhouse.tech/#independent-benchmarks). - -## Throughput for a Single Large Query {#throughput-for-a-single-large-query} - -Throughput can be measured in rows per second or megabytes per second.
If the data is placed in the page cache, a query that is not too complex is processed on modern hardware at a speed of approximately 2-10 GB/s of uncompressed data on a single server (for the simplest cases, the speed may reach 30 GB/s). If data is not placed in the page cache, the speed depends on the disk subsystem and the data compression rate. For example, if the disk subsystem allows reading data at 400 MB/s and the data compression rate is 3, the speed is expected to be around 1.2 GB/s. To get the speed in rows per second, divide the speed in bytes per second by the total size of the columns used in the query. For example, if 10 bytes of columns are extracted, the speed is expected to be around 100-200 million rows per second. - -The processing speed increases almost linearly for distributed processing, but only if the number of rows resulting from aggregation or sorting is not too large. - -## Latency When Processing Short Queries {#latency-when-processing-short-queries} - -If a query uses a primary key and does not select too many columns and rows to process (hundreds of thousands), you can expect less than 50 milliseconds of latency (single-digit milliseconds in the best case) if the data is placed in the page cache. Otherwise, latency is mostly dominated by the number of seeks. If you use rotating disk drives, for a system that is not overloaded, the latency can be estimated with this formula: `seek time (10 ms) * count of columns queried * count of data parts`. - -## Throughput When Processing a Large Quantity of Short Queries {#throughput-when-processing-a-large-quantity-of-short-queries} - -Under the same conditions, ClickHouse can handle several hundred queries per second on a single server (up to several thousand in the best case). Since this scenario is not typical for analytical DBMSs, it is recommended to expect a maximum of 100 queries per second. - -## Performance When Inserting Data {#performance-when-inserting-data} - -We recommend inserting data in packets of at least 1000 rows, or no more than a single request per second. When inserting into a MergeTree table from a tab-separated dump, the insertion speed can be from 50 to 200 MB/s. If the inserted rows are around 1 KB in size, the speed will be from 50,000 to 200,000 rows per second. If the rows are small, the performance can be higher in rows per second (on Banner System data -`>` 500,000 rows per second; on Graphite data -`>` 1,000,000 rows per second). To improve performance, you can make multiple INSERT queries in parallel, which scales linearly.
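As a hedged back-of-the-envelope check of the throughput formula above, the expected scan rate can be computed directly in SQL. The input numbers are the assumptions from the text, not measurements.

``` sql
SELECT
    400 AS disk_read_mb_per_sec,    -- assumed disk subsystem throughput
    3 AS compression_ratio,         -- assumed data compression rate
    10 AS bytes_per_row,            -- total size of the columns used in the query
    disk_read_mb_per_sec * compression_ratio AS uncompressed_mb_per_sec,      -- 1200 MB/s, i.e. ~1.2 GB/s
    uncompressed_mb_per_sec * 1000000 / bytes_per_row AS approx_rows_per_sec; -- ~120 million rows/s
```

The result falls inside the 100-200 million rows per second range quoted above for 10-byte rows.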
- -[Original article](https://clickhouse.tech/docs/en/introduction/performance/) diff --git a/docs/es/operations/access-rights.md b/docs/es/operations/access-rights.md deleted file mode 100644 index 6c777d9f081..00000000000 --- a/docs/es/operations/access-rights.md +++ /dev/null @@ -1,143 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 48 -toc_title: "Access Control and Account Management" ---- - -# Access Control and Account Management {#access-control} - -ClickHouse supports access control management based on the [RBAC](https://en.wikipedia.org/wiki/Role-based_access_control) approach. - -ClickHouse access entities: -- [User account](#user-account-management) -- [Role](#role-management) -- [Row policy](#row-policy-management) -- [Settings profile](#settings-profiles-management) -- [Quota](#quotas-management) - -You can configure access entities using: - -- SQL-driven workflow. - - You need to [enable](#enabling-access-control) this functionality. - -- Server [configuration files](configuration-files.md) `users.xml` and `config.xml`. - -We recommend using the SQL-driven workflow. Both configuration methods work simultaneously, so if you use the server configuration files to manage accounts and access rights, you can smoothly switch to the SQL-driven workflow. - -!!! note "Warning" - You cannot manage the same access entity by both configuration methods simultaneously. - -## Usage {#access-control-usage} - -By default, the ClickHouse server provides the `default` user account, which is not allowed to use SQL-driven access control and account management but has all rights and permissions. The `default` user account is used in any case when the username is not defined, for example, at login from the client or in distributed queries. In distributed query processing, a default user account is used if the configuration of the server or cluster does not specify the [user and password](../engines/table-engines/special/distributed.md) properties. - -If you have just started using ClickHouse, you can use the following scenario: - -1. [Enable](#enabling-access-control) SQL-driven access control and account management for the `default` user. -2. Log in under the `default` user account and create all the required users. Do not forget to create an administrator account (`GRANT ALL ON *.* WITH GRANT OPTION TO admin_user_account`); an illustrative sketch follows at the end of this section. -3. [Restrict permissions](settings/permissions-for-queries.md#permissions_for_queries) for the `default` user and disable SQL-driven access control and account management for it. - -### Properties of the Current Solution {#access-control-properties} - -- You can grant permissions for databases and tables even if they do not exist. -- If a table was deleted, all the privileges that correspond to this table are not revoked. Thus, if a new table is created later with the same name, all the privileges become actual again. To revoke the privileges corresponding to the deleted table, you need to perform, for example, the `REVOKE ALL PRIVILEGES ON db.table FROM ALL` query. -- There is no lifetime setting for privileges.
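A minimal sketch of step 2 of the scenario above, run under the `default` account once SQL-driven access control is enabled. `admin_user_account` and the password are placeholders.

``` sql
-- Create an administrator account and let it manage all grants.
CREATE USER admin_user_account IDENTIFIED WITH sha256_password BY 'ChangeMe!';
GRANT ALL ON *.* TO admin_user_account WITH GRANT OPTION;
```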
- -## User Account {#user-account-management} - -A user account is an access entity that allows authorizing someone in ClickHouse. A user account contains: - -- Identification information. -- [Privileges](../sql-reference/statements/grant.md#grant-privileges) that define the scope of queries the user can perform. -- Hosts from which connecting to the ClickHouse server is allowed. -- Granted and default roles. -- Settings with their constraints that apply by default at the user's login. -- Assigned settings profiles. - -Privileges can be granted to a user account by the [GRANT](../sql-reference/statements/grant.md) query or by assigning [roles](#role-management). To revoke privileges from a user, ClickHouse provides the [REVOKE](../sql-reference/statements/revoke.md) query. To list the privileges of a user, use the [SHOW GRANTS](../sql-reference/statements/show.md#show-grants-statement) statement. - -Management queries: - -- [CREATE USER](../sql-reference/statements/create.md#create-user-statement) -- [ALTER USER](../sql-reference/statements/alter.md#alter-user-statement) -- [DROP USER](../sql-reference/statements/misc.md#drop-user-statement) -- [SHOW CREATE USER](../sql-reference/statements/show.md#show-create-user-statement) - -### Settings Applying {#access-control-settings-applying} - -Settings can be configured in different ways: for a user account, in its granted roles, and in settings profiles. At user login, if a setting is configured in different access entities, the value and constraints of this setting are applied with the following priorities (from higher to lower): - -1. User account settings. -2. Settings of the default roles of the user account. If a setting is configured in some roles, the order of applying the setting is undefined. -3. Settings from settings profiles assigned to the user or to its default roles. If a setting is configured in some profiles, the order of applying the setting is undefined. -4. Settings applied to the entire server by default or from the [default profile](server-configuration-parameters/settings.md#default-profile). - -## Role {#role-management} - -A role is a container for access entities that can be granted to a user account. - -A role contains: - -- [Privileges](../sql-reference/statements/grant.md#grant-privileges) -- Settings and constraints -- A list of granted roles - -Management queries: - -- [CREATE ROLE](../sql-reference/statements/create.md#create-role-statement) -- [ALTER ROLE](../sql-reference/statements/alter.md#alter-role-statement) -- [DROP ROLE](../sql-reference/statements/misc.md#drop-role-statement) -- [SET ROLE](../sql-reference/statements/misc.md#set-role-statement) -- [SET DEFAULT ROLE](../sql-reference/statements/misc.md#set-default-role-statement) -- [SHOW CREATE ROLE](../sql-reference/statements/show.md#show-create-role-statement) - -Privileges can be granted to a role by the [GRANT](../sql-reference/statements/grant.md) query. To revoke privileges from a role, ClickHouse provides the [REVOKE](../sql-reference/statements/revoke.md) query; a combined sketch follows below.
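A hedged sketch tying the user and role management queries above together; all object names are hypothetical.

``` sql
CREATE ROLE analytics_readers;
GRANT SELECT ON analytics.* TO analytics_readers;  -- privileges go to the role

CREATE USER alice IDENTIFIED WITH sha256_password BY 'ChangeMe!'
    HOST IP '10.0.0.0/8';                          -- restrict allowed client hosts
GRANT analytics_readers TO alice;                  -- grant the role, not raw privileges
SET DEFAULT ROLE analytics_readers TO alice;       -- activate it at login

SHOW GRANTS FOR alice;                             -- verify the result
```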
- -## Row Policy {#row-policy-management} - -A row policy is a filter that defines which rows are available to a user or a role. A row policy contains filters for one specific table, as well as a list of roles and/or users that should use this row policy. - -Management queries: - -- [CREATE ROW POLICY](../sql-reference/statements/create.md#create-row-policy-statement) -- [ALTER ROW POLICY](../sql-reference/statements/alter.md#alter-row-policy-statement) -- [DROP ROW POLICY](../sql-reference/statements/misc.md#drop-row-policy-statement) -- [SHOW CREATE ROW POLICY](../sql-reference/statements/show.md#show-create-row-policy-statement) - -## Settings Profile {#settings-profiles-management} - -A settings profile is a collection of [settings](settings/index.md). A settings profile contains settings and constraints, as well as a list of roles and/or users to which this profile is applied. - -Management queries: - -- [CREATE SETTINGS PROFILE](../sql-reference/statements/create.md#create-settings-profile-statement) -- [ALTER SETTINGS PROFILE](../sql-reference/statements/alter.md#alter-settings-profile-statement) -- [DROP SETTINGS PROFILE](../sql-reference/statements/misc.md#drop-settings-profile-statement) -- [SHOW CREATE SETTINGS PROFILE](../sql-reference/statements/show.md#show-create-settings-profile-statement) - -## Quota {#quotas-management} - -A quota limits resource usage. See [Quotas](quotas.md). - -A quota contains a set of limits for some durations, as well as a list of roles and/or users that should use this quota. - -Management queries: - -- [CREATE QUOTA](../sql-reference/statements/create.md#create-quota-statement) -- [ALTER QUOTA](../sql-reference/statements/alter.md#alter-quota-statement) -- [DROP QUOTA](../sql-reference/statements/misc.md#drop-quota-statement) -- [SHOW CREATE QUOTA](../sql-reference/statements/show.md#show-create-quota-statement) - -## Enabling SQL-driven Access Control and Account Management {#enabling-access-control} - -- Set up a directory for configuration storage. - - ClickHouse stores access entity configurations in the folder set in the [access_control_path](server-configuration-parameters/settings.md#access_control_path) server configuration parameter. - -- Enable SQL-driven access control and account management for at least one user account. - - By default, SQL-driven access control and account management are disabled for all users. You need to configure at least one user in the `users.xml` configuration file and set the [access_management](settings/settings-users.md#access_management-user-setting) setting to 1.
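A combined, hedged sketch of the three remaining entity types described above; the table, role, and limit values are illustrative assumptions only.

``` sql
-- Row policy: readers only see rows of one tenant.
CREATE ROW POLICY tenant_filter ON analytics.events
    FOR SELECT USING tenant_id = 42 TO analytics_readers;

-- Settings profile: cap memory usage for the same role.
CREATE SETTINGS PROFILE analytics_profile
    SETTINGS max_memory_usage = 10000000000 TO analytics_readers;

-- Quota: limit query count and read volume per day.
CREATE QUOTA analytics_quota
    FOR INTERVAL 1 day MAX queries = 1000, read_rows = 1000000000
    TO analytics_readers;
```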
- -[Original article](https://clickhouse.tech/docs/en/operations/access_rights/) diff --git a/docs/es/operations/backup.md b/docs/es/operations/backup.md deleted file mode 100644 index be33851574a..00000000000 --- a/docs/es/operations/backup.md +++ /dev/null @@ -1,39 +0,0 @@ ---- -toc_priority: 49 -toc_title: Data backup ---- - -# Data Backup {#data-backup} - -While [replication](../engines/table-engines/mergetree-family/replication.md) provides protection from hardware failures, it does not protect against human errors: accidental deletion of data, deletion of the wrong table or of a table on the wrong cluster, and software bugs that result in incorrect data processing or data corruption. In many cases, mistakes like these will affect all replicas. ClickHouse has built-in safeguards to prevent some types of mistakes: for example, by default [you cannot just drop tables with a MergeTree-like engine containing more than 50 GB of data](server-configuration-parameters/settings.md#max-table-size-to-drop). However, these safeguards do not cover all possible cases and can be circumvented. - -To effectively mitigate possible human errors, you should carefully prepare a strategy for backing up and restoring your data **in advance**. - -Each company has different resources available and different business requirements, so there is no universal solution for ClickHouse backups and restores that fits every situation. What works for one gigabyte of data likely will not work for tens of petabytes. There are a variety of possible approaches with their own pros and cons, which are discussed below. It is a good idea to use several approaches instead of just one to compensate for their various shortcomings. - -!!! note "Note" - Keep in mind that if you backed something up and never tried to restore it, chances are that the restore will not work properly when you actually need it (or at least it will take longer than the business can tolerate). So whatever backup approach you choose, make sure to automate the restore process as well, and practice it on a spare ClickHouse cluster regularly. - -## Duplicating Source Data Somewhere Else {#duplicating-source-data-somewhere-else} - -Often, data that is ingested into ClickHouse is delivered through some sort of persistent queue, such as [Apache Kafka](https://kafka.apache.org). In this case, it is possible to configure an additional set of subscribers that will read the same data stream while it is being written to ClickHouse and store it in cold storage somewhere. Most companies already have some default recommended cold storage, which could be an object store or a distributed filesystem like [HDFS](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html). - -## Filesystem Snapshots {#filesystem-snapshots} - -Some local filesystems provide snapshot functionality (for example, [ZFS](https://en.wikipedia.org/wiki/ZFS)), but they might not be the best choice for serving live queries.
A possible solution is to create additional replicas with this kind of filesystem and exclude them from the [Distributed](../engines/table-engines/special/distributed.md) tables that are used for `SELECT` queries. Snapshots on such replicas will be out of reach of any queries that modify data. As a bonus, these replicas might have special hardware configurations with more disks attached per server, which would be cost-effective. - -## clickhouse-copier {#clickhouse-copier} - -[clickhouse-copier](utilities/clickhouse-copier.md) is a versatile tool that was initially created to re-shard petabyte-sized tables. It can also be used for backup and restore purposes because it reliably copies data between ClickHouse tables and clusters. - -For smaller volumes of data, a simple `INSERT INTO ... SELECT ...` to remote tables might work as well. - -## Manipulations with Parts {#manipulations-with-parts} - -ClickHouse allows using the `ALTER TABLE ... FREEZE PARTITION ...` query to create a local copy of table partitions. This is implemented using hard links to the `/var/lib/clickhouse/shadow/` folder, so it usually does not consume extra disk space for old data. The created copies of files are not handled by the ClickHouse server, so you can just leave them there: you will have a simple backup that does not require any additional external system, but it will still be prone to hardware issues. For this reason, it is better to remotely copy them to another location and then remove the local copies. Distributed filesystems and object stores are still a good option for this, but normal attached file servers with a large enough capacity might work as well (in this case, the transfer will occur via the network filesystem or maybe [rsync](https://en.wikipedia.org/wiki/Rsync)). - -For more information about queries related to partition manipulations, see the [ALTER documentation](../sql-reference/statements/alter.md#alter_manipulations-with-partitions). - -A third-party tool is available to automate this approach: [clickhouse-backup](https://github.com/AlexAkulov/clickhouse-backup). - -[Original article](https://clickhouse.tech/docs/en/operations/backup/) diff --git a/docs/es/operations/configuration-files.md b/docs/es/operations/configuration-files.md deleted file mode 100644 index d9aa8567868..00000000000 --- a/docs/es/operations/configuration-files.md +++ /dev/null @@ -1,57 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 50 -toc_title: "Configuration Files" ---- - -# Configuration Files {#configuration_files} - -ClickHouse supports multi-file configuration management. The main server configuration file is `/etc/clickhouse-server/config.xml`. Other files must be in the `/etc/clickhouse-server/config.d` directory. - -!!! note "Note" - All the configuration files must be in XML format. Also, they must have the same root element, usually `<yandex>`. - -Some values specified in the main configuration file can be overridden in other configuration files.
The `replace` or `remove` attributes can be specified for the elements of these configuration files. - -If neither is specified, it combines the contents of elements recursively, replacing values of duplicate children. - -If `replace` is specified, it replaces the entire element with the specified one. - -If `remove` is specified, it deletes the element. - -The config can also define "substitutions". If an element has the `incl` attribute, the corresponding substitution from the file will be used as the value. By default, the path to the file with substitutions is `/etc/metrika.xml`. This can be changed in the [include_from](server-configuration-parameters/settings.md#server_configuration_parameters-include_from) element in the server config. The substitution values are specified in `/yandex/substitution_name` elements in this file. If a substitution specified in `incl` does not exist, it is recorded in the log. To prevent ClickHouse from logging missing substitutions, specify the `optional="true"` attribute (for example, settings for [macros](server-configuration-parameters/settings.md)). - -Substitutions can also be performed from ZooKeeper. To do this, specify the attribute `from_zk = "/path/to/node"`. The element value is replaced with the contents of the node at `/path/to/node` in ZooKeeper. You can also put an entire XML subtree on the ZooKeeper node, and it will be fully inserted into the source element. - -The `config.xml` file can specify a separate config with user settings, profiles, and quotas. The relative path to this config is set in the `users_config` element. By default, it is `users.xml`. If `users_config` is omitted, the user settings, profiles, and quotas are specified directly in `config.xml`. - -Users configuration can be split into separate files similar to `config.xml` and `config.d/`. -The directory name is defined as the `users_config` setting without the `.xml` postfix, concatenated with `.d`. -The directory `users.d` is used by default, as `users_config` defaults to `users.xml`. -For example, you can have a separate config file for each user like this: - -``` bash -$ cat /etc/clickhouse-server/users.d/alice.xml -``` - -``` xml -<yandex> - <users> - <alice> - <profile>analytics</profile> - <networks> - <ip>::/0</ip> - </networks> - <password>...</password> - <quota>analytics</quota> - </alice> - </users> -</yandex> -``` - -For each config file, the server also generates `file-preprocessed.xml` files when starting. These files contain all the completed substitutions and overrides, and they are intended for informational use. If ZooKeeper substitutions were used in the config files but ZooKeeper is not available at server start, the server loads the configuration from the preprocessed file. - -The server tracks changes in config files, as well as files and ZooKeeper nodes that were used when performing substitutions and overrides, and reloads the settings for users and clusters on the fly. This means that you can modify the cluster, users, and their settings without restarting the server.
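Stepping back to the `ALTER TABLE ... FREEZE PARTITION ...` backup approach described in the backup article above, a minimal hedged sketch follows. `db.table` and the partition ID `202104` are placeholders; the partition expression depends on the table's partitioning key.

``` sql
-- Create a local, hard-linked snapshot of one partition under
-- /var/lib/clickhouse/shadow/; copy it off-host, then delete the local copy.
ALTER TABLE db.table FREEZE PARTITION 202104;
```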
- -[Original article](https://clickhouse.tech/docs/en/operations/configuration_files/) diff --git a/docs/es/operations/index.md b/docs/es/operations/index.md deleted file mode 100644 index 9a928fa0f01..00000000000 --- a/docs/es/operations/index.md +++ /dev/null @@ -1,28 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_folder_title: Operations -toc_priority: 41 -toc_title: Introduction ---- - -# Operations {#operations} - -The ClickHouse operations manual consists of the following main sections: - -- [Requirements](requirements.md) -- [Monitoring](monitoring.md) -- [Troubleshooting](troubleshooting.md) -- [Usage recommendations](tips.md) -- [Update procedure](update.md) -- [Access rights](access-rights.md) -- [Data backup](backup.md) -- [Configuration files](configuration-files.md) -- [Quotas](quotas.md) -- [System tables](system-tables.md) -- [Server configuration parameters](server-configuration-parameters/index.md) -- [How to test your hardware with ClickHouse](performance-test.md) -- [Settings](settings/index.md) -- [Utilities](utilities/index.md) - -{## [Original article](https://clickhouse.tech/docs/en/operations/) ##} diff --git a/docs/es/operations/monitoring.md b/docs/es/operations/monitoring.md deleted file mode 100644 index 19912d23f3b..00000000000 --- a/docs/es/operations/monitoring.md +++ /dev/null @@ -1,46 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 45 -toc_title: Monitoring ---- - -# Monitoring {#monitoring} - -You can monitor: - -- Utilization of hardware resources. -- ClickHouse server metrics. - -## Resource Utilization {#resource-utilization} - -ClickHouse does not monitor the state of hardware resources by itself. - -It is highly recommended to set up monitoring for: - -- Load and temperature on processors. - - You can use [dmesg](https://en.wikipedia.org/wiki/Dmesg), [turbostat](https://www.linux.org/docs/man8/turbostat.html), or other instruments. - -- Utilization of the storage system, RAM, and network. - -## ClickHouse Server Metrics {#clickhouse-server-metrics} - -The ClickHouse server has embedded instruments for self-state monitoring. - -To track server events, use server logs. See the [logger](server-configuration-parameters/settings.md#server_configuration_parameters-logger) section of the configuration file. - -ClickHouse collects: - -- Different metrics of how the server uses computational resources. -- Common statistics on query processing. - -You can find metrics in the [system.metrics](../operations/system-tables.md#system_tables-metrics), [system.events](../operations/system-tables.md#system_tables-events), and [system.asynchronous_metrics](../operations/system-tables.md#system_tables-asynchronous_metrics) tables. - -You can configure ClickHouse to export metrics to [Graphite](https://github.com/graphite-project). See the [Graphite section](server-configuration-parameters/settings.md#server_configuration_parameters-graphite) in the ClickHouse server configuration file. Before configuring the export of metrics, you should set up Graphite by following its official [guide](https://graphite.readthedocs.io/en/latest/install.html).
- -You can configure ClickHouse to export metrics to [Prometheus](https://prometheus.io). See the [Prometheus section](server-configuration-parameters/settings.md#server_configuration_parameters-prometheus) in the ClickHouse server configuration file. Before configuring the export of metrics, you should set up Prometheus by following its official [guide](https://prometheus.io/docs/prometheus/latest/installation/). - -Additionally, you can monitor server availability through the HTTP API. Send an `HTTP GET` request to `/ping`. If the server is available, it responds with `200 OK`. - -To monitor servers in a cluster configuration, you should set the [max_replica_delay_for_distributed_queries](settings/settings.md#settings-max_replica_delay_for_distributed_queries) parameter and use the HTTP resource `/replicas_status`. A request to `/replicas_status` returns `200 OK` if the replica is available and is not delayed behind the other replicas. If a replica is delayed, it returns `503 HTTP_SERVICE_UNAVAILABLE` with information about the gap. diff --git a/docs/es/operations/optimizing-performance/index.md b/docs/es/operations/optimizing-performance/index.md deleted file mode 100644 index d2796c6e0d3..00000000000 --- a/docs/es/operations/optimizing-performance/index.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_folder_title: "Optimizing Performance" -toc_priority: 52 ---- - - diff --git a/docs/es/operations/optimizing-performance/sampling-query-profiler.md b/docs/es/operations/optimizing-performance/sampling-query-profiler.md deleted file mode 100644 index a474dde6af2..00000000000 --- a/docs/es/operations/optimizing-performance/sampling-query-profiler.md +++ /dev/null @@ -1,64 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 54 -toc_title: "Query Profiling" ---- - -# Sampling Query Profiler {#sampling-query-profiler} - -ClickHouse runs a sampling profiler that allows analyzing query execution. Using the profiler, you can find source code routines that are used most frequently during query execution. You can trace CPU time and wall-clock time spent, including idle time. - -To use the profiler: - -- Set up the [trace_log](../server-configuration-parameters/settings.md#server_configuration_parameters-trace_log) section of the server configuration. - - This section configures the [trace_log](../../operations/system-tables.md#system_tables-trace_log) system table containing the results of the profiler functioning. It is configured by default. Remember that data in this table is valid only for a running server. After the server restarts, ClickHouse does not clean up the table, and all the stored virtual memory addresses may become invalid. - -- Set up the [query_profiler_cpu_time_period_ns](../settings/settings.md#query_profiler_cpu_time_period_ns) or [query_profiler_real_time_period_ns](../settings/settings.md#query_profiler_real_time_period_ns) settings. Both settings can be used simultaneously. - - These settings allow you to configure profiler timers.
As these are session settings, you can get different sampling frequencies for the whole server, for individual users or user profiles, for your interactive session, and for each individual query.

The default sampling frequency is one sample per second, and both CPU and real timers are enabled. This frequency allows collecting enough information about the ClickHouse cluster. At the same time, working with this frequency, the profiler does not affect the performance of the ClickHouse server. If you need to profile each individual query, try to use a higher sampling frequency.

To analyze the `trace_log` system table:

- Install the `clickhouse-common-static-dbg` package. See [Install from DEB packages](../../getting-started/install.md#install-from-deb-packages).

- Allow introspection functions with the [allow_introspection_functions](../settings/settings.md#settings-allow_introspection_functions) setting.

    For security reasons, introspection functions are disabled by default.

- Use the `addressToLine`, `addressToSymbol` and `demangle` [introspection functions](../../sql-reference/functions/introspection.md) to get function names and their positions in the ClickHouse code. To get a profile for some query, you need to aggregate data from the `trace_log` table. You can aggregate data by individual functions or by whole stack traces.

If you need to visualize `trace_log` info, try [flamegraph](../../interfaces/third-party/gui/#clickhouse-flamegraph) and [speedscope](https://github.com/laplab/clickhouse-speedscope).

## Example {#example}

In this example we:

- Filter `trace_log` data by a query identifier and the current date.

- Aggregate by stack trace.

- Using introspection functions, get a report of:

    - Names of symbols and corresponding source-code functions.
    - Source-code locations of these functions.

``` sql
SELECT
    count(),
    arrayStringConcat(arrayMap(x -> concat(demangle(addressToSymbol(x)), '\n    ', addressToLine(x)), trace), '\n') AS sym
FROM system.trace_log
WHERE (query_id = 'ebca3574-ad0a-400a-9cbc-dca382f5998c') AND (event_date = today())
GROUP BY trace
ORDER BY count() DESC
LIMIT 10
```

``` text
{% include "examples/sampling_query_profiler_result.txt" %}
```

diff --git a/docs/es/operations/performance-test.md b/docs/es/operations/performance-test.md
deleted file mode 100644
index 97444f339cd..00000000000
--- a/docs/es/operations/performance-test.md
+++ /dev/null
@@ -1,82 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_priority: 54
toc_title: Hardware Testing
---

# How to Test Your Hardware with ClickHouse {#how-to-test-your-hardware-with-clickhouse}

With this instruction you can run a basic ClickHouse performance test on any server without installing ClickHouse packages.

1. Go to the “commits” page: https://github.com/ClickHouse/ClickHouse/commits/master

2. Click on the first green check mark or red cross with the green “ClickHouse Build Check” and click on the “Details” link near “ClickHouse Build Check”. There is no such link in some commits, for example commits with documentation. In this case, choose the nearest commit that has this link.
3. Copy the link to the “clickhouse” binary for amd64 or aarch64.

4. ssh to the server and download it with wget:

        # For amd64:
        wget https://clickhouse-builds.s3.yandex.net/0/00ba767f5d2a929394ea3be193b1f79074a1c4bc/1578163263_binary/clickhouse
        # For aarch64:
        wget https://clickhouse-builds.s3.yandex.net/0/00ba767f5d2a929394ea3be193b1f79074a1c4bc/1578161264_binary/clickhouse
        # Then do:
        chmod a+x clickhouse

5. Download the configs:

        wget https://raw.githubusercontent.com/ClickHouse/ClickHouse/master/programs/server/config.xml
        wget https://raw.githubusercontent.com/ClickHouse/ClickHouse/master/programs/server/users.xml
        mkdir config.d
        wget https://raw.githubusercontent.com/ClickHouse/ClickHouse/master/programs/server/config.d/path.xml -O config.d/path.xml
        wget https://raw.githubusercontent.com/ClickHouse/ClickHouse/master/programs/server/config.d/log_to_console.xml -O config.d/log_to_console.xml

6. Download the benchmark files:

        wget https://raw.githubusercontent.com/ClickHouse/ClickHouse/master/benchmark/clickhouse/benchmark-new.sh
        chmod a+x benchmark-new.sh
        wget https://raw.githubusercontent.com/ClickHouse/ClickHouse/master/benchmark/clickhouse/queries.sql

7. Download the test data according to the [Yandex.Metrica dataset](../getting-started/example-datasets/metrica.md) instruction (the “hits” table containing 100 million rows).

        wget https://datasets.clickhouse.tech/hits/partitions/hits_100m_obfuscated_v1.tar.xz
        tar xvf hits_100m_obfuscated_v1.tar.xz -C .
        mv hits_100m_obfuscated_v1/* .

8. Run the server:

        ./clickhouse server

9. Check the data: ssh to the server in another terminal

        ./clickhouse client --query "SELECT count() FROM hits_100m_obfuscated"
        100000000

10. Edit benchmark-new.sh, change `clickhouse-client` to `./clickhouse client` and add the `--max_memory_usage 100000000000` parameter.

        mcedit benchmark-new.sh

11. Run the benchmark:

        ./benchmark-new.sh hits_100m_obfuscated

12. Send the numbers and the information about your hardware configuration to clickhouse-feedback@yandex-team.com

All the results are published here: https://clickhouse.tech/benchmark/hardware/

diff --git a/docs/es/operations/quotas.md b/docs/es/operations/quotas.md
deleted file mode 100644
index 9d84ce21339..00000000000
--- a/docs/es/operations/quotas.md
+++ /dev/null
@@ -1,112 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_priority: 51
toc_title: Quotas
---

# Quotas {#quotas}

Quotas allow you to limit resource usage over a period of time or to track the use of resources.
Quotas are set up in the user config, which is usually ‘users.xml’.

The system also has a feature for limiting the complexity of a single query. See the section “Restrictions on query complexity”.

In contrast to query-complexity restrictions, quotas:

- Place restrictions on a set of queries that can be run over a period of time, instead of limiting a single query.
- Account for resources spent on all remote servers for distributed query processing.

Let’s look at the section of the ‘users.xml’ file that defines quotas.
``` xml
<quotas>
    <default>
        <interval>
            <duration>3600</duration>

            <queries>0</queries>
            <errors>0</errors>
            <result_rows>0</result_rows>
            <read_rows>0</read_rows>
            <execution_time>0</execution_time>
        </interval>
    </default>
</quotas>
```

By default, the quota tracks resource consumption for each hour, without limiting usage.
The resource consumption calculated for each interval is output to the server log after each request.

``` xml
<statbox>
    <interval>
        <duration>3600</duration>

        <queries>1000</queries>
        <errors>100</errors>
        <result_rows>1000000000</result_rows>
        <read_rows>100000000000</read_rows>
        <execution_time>900</execution_time>
    </interval>

    <interval>
        <duration>86400</duration>

        <queries>10000</queries>
        <errors>1000</errors>
        <result_rows>5000000000</result_rows>
        <read_rows>500000000000</read_rows>
        <execution_time>7200</execution_time>
    </interval>
</statbox>
```

For the ‘statbox’ quota, restrictions are set for every hour and for every 24 hours (86,400 seconds). The time interval is counted starting from an implementation-defined fixed moment in time. In other words, the 24-hour interval doesn’t necessarily begin at midnight.

When the interval ends, all collected values are cleared. For the next hour, the quota calculation starts over.

Here are the amounts that can be restricted:

`queries` – The total number of requests.

`errors` – The number of queries that threw an exception.

`result_rows` – The total number of rows given as a result.

`read_rows` – The total number of source rows read from tables for running the query on all remote servers.

`execution_time` – The total query execution time, in seconds (wall time).

If the limit is exceeded for at least one time interval, an exception is thrown with a text about which restriction was exceeded, for which interval, and when the new interval begins (when queries can be sent again).

Quotas can use the “quota key” feature to report on resources for multiple keys independently. Here is an example of this:

``` xml
<web_global>
    <!-- keyed – the quota_key "key" is passed in the query parameter,
         and the quota is tracked separately for each key value. -->
    <keyed />
</web_global>
```

The quota is assigned to users in the ‘users’ section of the config. See the section “Access rights”.

For distributed query processing, the accumulated amounts are stored on the requestor server. So if the user goes to another server, the quota there will “start over”.

When the server is restarted, quotas are reset.
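Once quotas are defined, their configuration and current consumption can also be inspected from SQL; a minimal sketch (assuming the `SHOW QUOTA` statement and the `system.quotas` table available in recent ClickHouse versions):

``` sql
-- Quotas defined on the server, with their tracked interval lengths:
SELECT name, durations FROM system.quotas;

-- Consumption of the quota that applies to the current user:
SHOW QUOTA;
```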
[Original article](https://clickhouse.tech/docs/en/operations/quotas/)

diff --git a/docs/es/operations/requirements.md b/docs/es/operations/requirements.md
deleted file mode 100644
index d6f0f25cf21..00000000000
--- a/docs/es/operations/requirements.md
+++ /dev/null
@@ -1,61 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_priority: 44
toc_title: Requirements
---

# Requirements {#requirements}

## CPU {#cpu}

For installation from prebuilt deb packages, use a CPU with x86_64 architecture and support for the SSE 4.2 instruction set. To run ClickHouse with processors that do not support SSE 4.2 or that have AArch64 or PowerPC64LE architecture, you should build ClickHouse from sources.

ClickHouse implements parallel data processing and uses all the hardware resources available. When choosing a processor, take into account that ClickHouse works more efficiently in configurations with a large number of cores but a lower clock rate than in configurations with fewer cores and a higher clock rate. For example, 16 cores at 2600 MHz is preferable to 8 cores at 3600 MHz.

It is recommended to use **Turbo Boost** and **hyper-threading** technologies. They significantly improve performance with a typical workload.

## RAM {#ram}

We recommend using a minimum of 4 GB of RAM to perform non-trivial queries. The ClickHouse server can run with a much smaller amount of RAM, but it requires memory for processing queries.

The required volume of RAM depends on:

- The complexity of queries.
- The amount of data that is processed in queries.

To calculate the required volume of RAM, you should estimate the size of temporary data for [GROUP BY](../sql-reference/statements/select/group-by.md#select-group-by-clause), [DISTINCT](../sql-reference/statements/select/distinct.md#select-distinct), [JOIN](../sql-reference/statements/select/join.md#select-join) and other operations you use.

ClickHouse can use external memory for temporary data. See [GROUP BY in external memory](../sql-reference/statements/select/group-by.md#select-group-by-in-external-memory) for details.

## Swap File {#swap-file}

Disable the swap file for production environments.

## Storage Subsystem {#storage-subsystem}

You need to have 2 GB of free disk space to install ClickHouse.

The volume of storage required for your data should be calculated separately. The assessment should include:

- Estimation of the data volume.

    You can take a sample of the data and get the average size of a row from it. Then multiply the value by the number of rows you plan to store.

- The data compression coefficient.

    To estimate the data compression coefficient, load a sample of your data into ClickHouse and compare the actual size of the data with the size of the table stored. For example, clickstream data is usually compressed by 6-10 times.

To calculate the final volume of data to be stored, apply the compression coefficient to the estimated data volume. If you plan to store data in several replicas, multiply the estimated volume by the number of replicas. For example, 10 TB of raw data at a 7x compression coefficient takes roughly 1.4 TB per replica, or about 4.3 TB in total across 3 replicas.

## Network {#network}

If possible, use networks of 10G or a higher class.

Network bandwidth is critical for processing distributed queries with a large amount of intermediate data. Besides that, network speed affects replication processes.

## Software {#software}

ClickHouse is developed primarily for the Linux family of operating systems. The recommended Linux distribution is Ubuntu. The `tzdata` package should be installed in the system.

ClickHouse can also work in other operating system families. See details in the [Getting started](../getting-started/index.md) section of the documentation.

diff --git a/docs/es/operations/server-configuration-parameters/index.md b/docs/es/operations/server-configuration-parameters/index.md
deleted file mode 100644
index e1e2e777b94..00000000000
--- a/docs/es/operations/server-configuration-parameters/index.md
+++ /dev/null
@@ -1,19 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_folder_title: Server Configuration Parameters
toc_priority: 54
toc_title: Introduction
---

# Server Configuration Parameters {#server-settings}

This section contains descriptions of server settings that cannot be changed at the session or query level.

These settings are stored in the `config.xml` file on the ClickHouse server.
Other settings are described in the “[Settings](../settings/index.md#session-settings-intro)” section.

Before studying the settings, read the [Configuration files](../configuration-files.md#configuration_files) section and note the use of substitutions (the `incl` and `optional` attributes).

[Original article](https://clickhouse.tech/docs/en/operations/server_configuration_parameters/)

diff --git a/docs/es/operations/server-configuration-parameters/settings.md b/docs/es/operations/server-configuration-parameters/settings.md
deleted file mode 100644
index 86264ed0440..00000000000
--- a/docs/es/operations/server-configuration-parameters/settings.md
+++ /dev/null
@@ -1,906 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_priority: 57
toc_title: Server Settings
---

# Server Settings {#server-settings}

## builtin_dictionaries_reload_interval {#builtin-dictionaries-reload-interval}

The interval in seconds before reloading built-in dictionaries.

ClickHouse reloads built-in dictionaries every x seconds. This makes it possible to edit dictionaries “on the fly” without restarting the server.

Default value: 3600.

**Example**

``` xml
<builtin_dictionaries_reload_interval>3600</builtin_dictionaries_reload_interval>
```

## compression {#server-settings-compression}

Data compression settings for [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md)-engine tables.

!!! warning "Warning"
    Don’t use it if you have just started using ClickHouse.

Configuration template:

``` xml
<compression>
    <case>
      <min_part_size>...</min_part_size>
      <min_part_size_ratio>...</min_part_size_ratio>
      <method>...</method>
    </case>
    ...
</compression>
```

`<case>` fields:

- `min_part_size` – The minimum size of a data part.
- `min_part_size_ratio` – The ratio of the data part size to the table size.
- `method` – Compression method. Acceptable values: `lz4` or `zstd`.

You can configure multiple `<case>` sections.

Actions when conditions are met:

- If a data part matches a condition set, ClickHouse uses the specified compression method.
- If a data part matches multiple condition sets, ClickHouse uses the first matching condition set.

If no conditions are met for a data part, ClickHouse uses the `lz4` compression.

**Example**

``` xml
<compression incl="clickhouse_compression">
    <case>
        <min_part_size>10000000000</min_part_size>
        <min_part_size_ratio>0.01</min_part_size_ratio>
        <method>zstd</method>
    </case>
</compression>
```

## default_database {#default-database}

The default database.

To get a list of databases, use the [SHOW DATABASES](../../sql-reference/statements/show.md#show-databases) query.

**Example**

``` xml
<default_database>default</default_database>
```

## default_profile {#default-profile}

The default settings profile.

Settings profiles are located in the file specified in the `user_config` parameter.

**Example**

``` xml
<default_profile>default</default_profile>
```

## dictionaries_config {#server_configuration_parameters-dictionaries_config}

The path to the config file for external dictionaries.

Path:

- Specify the absolute path or the path relative to the server config file.
- The path can contain the wildcards \* and ?.

See also “[External dictionaries](../../sql-reference/dictionaries/external-dictionaries/external-dicts.md)”.

**Example**

``` xml
<dictionaries_config>*_dictionary.xml</dictionaries_config>
```

## dictionaries_lazy_load {#server_configuration_parameters-dictionaries_lazy_load}

Lazy loading of dictionaries.

If `true`, then each dictionary is created on first use.
If an error occurs when creating a dictionary, the function that was using the dictionary throws an exception.

If `false`, all dictionaries are created when the server starts, and if there is an error, the server shuts down.

The default is `true`.

**Example**

``` xml
<dictionaries_lazy_load>true</dictionaries_lazy_load>
```

## format_schema_path {#server_configuration_parameters-format_schema_path}

The path to the directory with schemas for the input data, such as schemas for the [CapnProto](../../interfaces/formats.md#capnproto) format.

**Example**

``` xml
<!-- Directory containing schema files for various input formats. -->
<format_schema_path>format_schemas/</format_schema_path>
```

## graphite {#server_configuration_parameters-graphite}

Sending data to [Graphite](https://github.com/graphite-project).

Settings:

- host – The Graphite server.
- port – The port on the Graphite server.
- interval – The interval for sending, in seconds.
- timeout – The timeout for sending data, in seconds.
- root_path – Prefix for keys.
- metrics – Sending data from the [system.metrics](../../operations/system-tables.md#system_tables-metrics) table.
- events – Sending deltas data accumulated for the time period from the [system.events](../../operations/system-tables.md#system_tables-events) table.
- events_cumulative – Sending cumulative data from the [system.events](../../operations/system-tables.md#system_tables-events) table.
- asynchronous_metrics – Sending data from the [system.asynchronous_metrics](../../operations/system-tables.md#system_tables-asynchronous_metrics) table.

You can configure multiple `<graphite>` clauses. For instance, you can use this for sending different data at different intervals.

**Example**

``` xml
<graphite>
    <host>localhost</host>
    <port>42000</port>
    <timeout>0.1</timeout>
    <interval>60</interval>
    <root_path>one_min</root_path>
    <metrics>true</metrics>
    <events>true</events>
    <events_cumulative>false</events_cumulative>
    <asynchronous_metrics>true</asynchronous_metrics>
</graphite>
```

## graphite_rollup {#server_configuration_parameters-graphite-rollup}

Settings for thinning data for Graphite.

For more details, see [GraphiteMergeTree](../../engines/table-engines/mergetree-family/graphitemergetree.md).

**Example**

``` xml
<graphite_rollup_example>
    <default>
        <function>max</function>
        <retention>
            <age>0</age>
            <precision>60</precision>
        </retention>
        <retention>
            <age>3600</age>
            <precision>300</precision>
        </retention>
        <retention>
            <age>86400</age>
            <precision>3600</precision>
        </retention>
    </default>
</graphite_rollup_example>
```

## http_port/https_port {#http-porthttps-port}

The port for connecting to the server over HTTP(s).

If `https_port` is specified, [openSSL](#server_configuration_parameters-openssl) must be configured.

If `http_port` is specified, the OpenSSL configuration is ignored even if it is set.

**Example**

``` xml
<https_port>9999</https_port>
```

## http_server_default_response {#server_configuration_parameters-http_server_default_response}

The page that is shown by default when you access the ClickHouse HTTP(s) server.
The default value is “Ok.” (with a line feed at the end)

**Example**

Opens `https://tabix.io/` when accessing `http://localhost: http_port`.

``` xml
<http_server_default_response>
  <![CDATA[<html ng-app="SMI2"><head><base href="http://ui.tabix.io/"></head><body><div ui-view="" class="content-ui"></div><script src="http://loader.tabix.io/master.js"></script></body></html>]]>
</http_server_default_response>
```
## include_from {#server_configuration_parameters-include_from}

The path to the file with substitutions.

For more information, see the section “[Configuration files](../configuration-files.md#configuration_files)”.

**Example**

``` xml
<include_from>/etc/metrica.xml</include_from>
```

## interserver_http_port {#interserver-http-port}

Port for exchanging data between ClickHouse servers.

**Example**

``` xml
<interserver_http_port>9009</interserver_http_port>
```

## interserver_http_host {#interserver-http-host}

The hostname that other servers can use to access this server.

If omitted, it is defined in the same way as the `hostname -f` command.

Useful for breaking away from a specific network interface.

**Example**

``` xml
<interserver_http_host>example.yandex.ru</interserver_http_host>
```

## interserver_http_credentials {#server-settings-interserver-http-credentials}

The username and password used for [replication](../../engines/table-engines/mergetree-family/replication.md) with the Replicated\* engines. These credentials are used only for communication between replicas and are unrelated to credentials for ClickHouse clients. The server checks these credentials for connecting replicas and uses the same credentials when connecting to other replicas. So these credentials should be set the same for all replicas in a cluster.
By default, authentication is not used.

This section contains the following parameters:

- `user` — username.
- `password` — password.

**Example**

``` xml
<interserver_http_credentials>
    <user>admin</user>
    <password>222</password>
</interserver_http_credentials>
```

## keep_alive_timeout {#keep-alive-timeout}

The number of seconds that ClickHouse waits for incoming requests before closing the connection. The default is 3 seconds.

**Example**

``` xml
<keep_alive_timeout>3</keep_alive_timeout>
```

## listen_host {#server_configuration_parameters-listen_host}

Restriction on hosts that requests can come from. If you want the server to answer all of them, specify `::`.

Examples:

``` xml
<listen_host>::1</listen_host>
<listen_host>127.0.0.1</listen_host>
```

## logger {#server_configuration_parameters-logger}

Logging settings.

Keys:

- level – Logging level. Acceptable values: `trace`, `debug`, `information`, `warning`, `error`.
- log – The log file. Contains all the entries according to `level`.
- errorlog – Error log file.
- size – Size of the file. Applies to `log` and `errorlog`. Once the file reaches `size`, ClickHouse archives and renames it, and creates a new log file in its place.
- count – The number of archived log files that ClickHouse stores.

**Example**

``` xml
<logger>
    <level>trace</level>
    <log>/var/log/clickhouse-server/clickhouse-server.log</log>
    <errorlog>/var/log/clickhouse-server/clickhouse-server.err.log</errorlog>
    <size>1000M</size>
    <count>10</count>
</logger>
```

Writing to the syslog is also supported. Config example:

``` xml
<logger>
    <use_syslog>1</use_syslog>
    <syslog>
        <address>syslog.remote:10514</address>
        <hostname>myhost.local</hostname>
        <facility>LOG_LOCAL6</facility>
        <format>syslog</format>
    </syslog>
</logger>
```
Keys:

- use_syslog — Required setting if you want to write to the syslog.
- address — The host\[:port\] of syslogd. If omitted, the local daemon is used.
- hostname — Optional. The name of the host that logs are sent from.
- facility — [The syslog facility keyword](https://en.wikipedia.org/wiki/Syslog#Facility) in uppercase letters with the “LOG_” prefix: (`LOG_USER`, `LOG_DAEMON`, `LOG_LOCAL3`, and so on).
    Default value: `LOG_USER` if `address` is specified, `LOG_DAEMON` otherwise.
- format – Message format. Possible values: `bsd` and `syslog`.

## macros {#macros}

Parameter substitutions for replicated tables.

Can be omitted if replicated tables are not used.

For more information, see the section “[Creating replicated tables](../../engines/table-engines/mergetree-family/replication.md)”.

**Example**

``` xml
<macros incl="macros" optional="true" />
```

## mark_cache_size {#server-mark-cache-size}

Approximate size (in bytes) of the cache of marks used by table engines of the [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) family.

The cache is shared for the server and memory is allocated as needed. The cache size must be at least 5368709120.

**Example**

``` xml
<mark_cache_size>5368709120</mark_cache_size>
```

## max_concurrent_queries {#max-concurrent-queries}

The maximum number of simultaneously processed requests.

**Example**

``` xml
<max_concurrent_queries>100</max_concurrent_queries>
```

## max_connections {#max-connections}

The maximum number of inbound connections.

**Example**

``` xml
<max_connections>4096</max_connections>
```

## max_open_files {#max-open-files}

The maximum number of open files.

By default: `maximum`.

We recommend using this option in Mac OS X since the `getrlimit()` function returns an incorrect value.

**Example**

``` xml
<max_open_files>262144</max_open_files>
```

## max_table_size_to_drop {#max-table-size-to-drop}

Restriction on deleting tables.

If the size of a [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) table exceeds `max_table_size_to_drop` (in bytes), you can’t delete it using a DROP query.

If you still need to delete the table without restarting the ClickHouse server, create the `<clickhouse-path>/flags/force_drop_table` file and run the DROP query.

Default value: 50 GB.

The value 0 means that you can delete all tables without any restrictions.

**Example**

``` xml
<max_table_size_to_drop>0</max_table_size_to_drop>
```

## merge_tree {#server_configuration_parameters-merge_tree}

Fine tuning for tables in the [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) family.

For more information, see the MergeTreeSettings.h header file.

**Example**

``` xml
<merge_tree>
    <max_suspicious_broken_parts>5</max_suspicious_broken_parts>
</merge_tree>
```

## openSSL {#server_configuration_parameters-openssl}

SSL client/server configuration.

Support for SSL is provided by the `libpoco` library. The interface is described in the file [SSLManager.h](https://github.com/ClickHouse-Extras/poco/blob/master/NetSSL_OpenSSL/include/Poco/Net/SSLManager.h)

Keys for server/client settings:

- privateKeyFile – The path to the file with the secret key of the PEM certificate. The file may contain a key and certificate at the same time.
- certificateFile – The path to the client/server certificate file in PEM format. You can omit it if `privateKeyFile` contains the certificate.
- caConfig – The path to the file or directory that contains trusted root certificates.
- verificationMode – The method for checking the node's certificates. Details are in the description of the [Context](https://github.com/ClickHouse-Extras/poco/blob/master/NetSSL_OpenSSL/include/Poco/Net/Context.h) class. Possible values: `none`, `relaxed`, `strict`, `once`.
- verificationDepth – The maximum length of the verification chain. Verification will fail if the certificate chain length exceeds the set value.
- loadDefaultCAFile – Indicates that built-in CA certificates for OpenSSL will be used. Acceptable values: `true`, `false`.
- cipherList – Supported OpenSSL encryptions. For example: `ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH`.
- cacheSessions – Enables or disables caching sessions. Must be used in combination with `sessionIdContext`. Acceptable values: `true`, `false`.
- sessionIdContext – A unique set of random characters that the server appends to each generated identifier. The length of the string must not exceed `SSL_MAX_SSL_SESSION_ID_LENGTH`. This parameter is always recommended since it helps avoid problems both if the server caches the session and if the client requested caching. Default value: `${application.name}`.
- sessionCacheSize – The maximum number of sessions that the server caches. Default value: 1024\*20. 0 – Unlimited sessions.
- sessionTimeout – Time for caching the session on the server.
- extendedVerification – Automatically extended verification of certificates after the session ends. Acceptable values: `true`, `false`.
- requireTLSv1 – Require a TLSv1 connection. Acceptable values: `true`, `false`.
- requireTLSv1_1 – Require a TLSv1.1 connection. Acceptable values: `true`, `false`.
- requireTLSv1_2 – Require a TLSv1.2 connection. Acceptable values: `true`, `false`.
- fips – Activates OpenSSL FIPS mode. Supported if the library's OpenSSL version supports FIPS.
- privateKeyPassphraseHandler – Class (PrivateKeyPassphraseHandler subclass) that requests the passphrase for accessing the private key. For example: `<privateKeyPassphraseHandler>`, `<name>KeyFileHandler</name>`, `<options><password>test</password></options>`, `</privateKeyPassphraseHandler>`.
- invalidCertificateHandler – Class (a subclass of CertificateHandler) for verifying invalid certificates. For example: `<invalidCertificateHandler> <name>ConsoleCertificateHandler</name> </invalidCertificateHandler>`.
- disableProtocols – Protocols that are not allowed to use.
- preferServerCiphers – Preferred server ciphers on the client.

**Example of settings:**

``` xml
<openSSL>
    <server>
        <certificateFile>/etc/clickhouse-server/server.crt</certificateFile>
        <privateKeyFile>/etc/clickhouse-server/server.key</privateKeyFile>
        <dhParamsFile>/etc/clickhouse-server/dhparam.pem</dhParamsFile>
        <verificationMode>none</verificationMode>
        <loadDefaultCAFile>true</loadDefaultCAFile>
        <cacheSessions>true</cacheSessions>
        <disableProtocols>sslv2,sslv3</disableProtocols>
        <preferServerCiphers>true</preferServerCiphers>
    </server>
    <client>
        <loadDefaultCAFile>true</loadDefaultCAFile>
        <cacheSessions>true</cacheSessions>
        <disableProtocols>sslv2,sslv3</disableProtocols>
        <preferServerCiphers>true</preferServerCiphers>
        <invalidCertificateHandler>
            <name>RejectCertificateHandler</name>
        </invalidCertificateHandler>
    </client>
</openSSL>
```

## part_log {#server_configuration_parameters-part-log}

Logging events that are associated with [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md). For instance, adding or merging data. You can use the log to simulate merge algorithms and compare their characteristics. You can visualize the merge process.

Queries are logged in the [system.part_log](../../operations/system-tables.md#system_tables-part-log) table, not in a separate file. You can configure the name of this table in the `table` parameter (see below).

Use the following parameters to configure logging:

- `database` – Name of the database.
- `table` – Name of the system table.
- `partition_by` – Sets a [custom partitioning key](../../engines/table-engines/mergetree-family/custom-partitioning-key.md).
- `flush_interval_milliseconds` – Interval for flushing data from the buffer in memory to the table.

**Example**

``` xml
<part_log>
    <database>system</database>
    <table>part_log</table>
    <partition_by>toMonday(event_date)</partition_by>
    <flush_interval_milliseconds>7500</flush_interval_milliseconds>
</part_log>
```
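When this log is enabled, recent merge and insert activity can be reviewed directly from SQL; a minimal sketch (column names as found in recent `system.part_log` layouts):

``` sql
SELECT event_type, table, part_name, rows
FROM system.part_log
ORDER BY event_time DESC
LIMIT 5;
```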
## path {#server_configuration_parameters-path}

The path to the directory containing data.

!!! note "Note"
    The trailing slash is mandatory.

**Example**

``` xml
<path>/var/lib/clickhouse/</path>
```

## prometheus {#server_configuration_parameters-prometheus}

Exposing metrics data for scraping from [Prometheus](https://prometheus.io).

Settings:

- `endpoint` – HTTP endpoint for scraping metrics by the prometheus server. Starts with ‘/’.
- `port` – Port for the `endpoint`.
- `metrics` – Flag that sets to expose metrics from the [system.metrics](../system-tables.md#system_tables-metrics) table.
- `events` – Flag that sets to expose metrics from the [system.events](../system-tables.md#system_tables-events) table.
- `asynchronous_metrics` – Flag that sets to expose current metrics values from the [system.asynchronous_metrics](../system-tables.md#system_tables-asynchronous_metrics) table.

**Example**

``` xml
<prometheus>
    <endpoint>/metrics</endpoint>
    <port>8001</port>
    <metrics>true</metrics>
    <events>true</events>
    <asynchronous_metrics>true</asynchronous_metrics>
</prometheus>
```

## query_log {#server_configuration_parameters-query-log}

Setting for logging queries received with the [log_queries=1](../settings/settings.md) setting.

Queries are logged in the [system.query_log](../../operations/system-tables.md#system_tables-query_log) table, not in a separate file. You can change the name of the table in the `table` parameter (see below).

Use the following parameters to configure logging:

- `database` – Name of the database.
- `table` – Name of the system table the queries will be logged in.
- `partition_by` – Sets a [custom partitioning key](../../engines/table-engines/mergetree-family/custom-partitioning-key.md) for a table.
- `flush_interval_milliseconds` – Interval for flushing data from the buffer in memory to the table.

If the table doesn’t exist, ClickHouse will create it. If the structure of the query log changed when the ClickHouse server was updated, the table with the old structure is renamed, and a new table is created automatically.

**Example**

``` xml
<query_log>
    <database>system</database>
    <table>query_log</table>
    <partition_by>toMonday(event_date)</partition_by>
    <flush_interval_milliseconds>7500</flush_interval_milliseconds>
</query_log>
```
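A common use of this table is to look up the slowest recent queries; a minimal sketch (the `type` value and columns match recent `system.query_log` layouts):

``` sql
SELECT query, query_duration_ms, read_rows
FROM system.query_log
WHERE type = 'QueryFinish'
ORDER BY query_duration_ms DESC
LIMIT 5;
```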
## query_thread_log {#server_configuration_parameters-query-thread-log}

Setting for logging threads of queries received with the [log_query_threads=1](../settings/settings.md#settings-log-query-threads) setting.

Queries are logged in the [system.query_thread_log](../../operations/system-tables.md#system_tables-query-thread-log) table, not in a separate file. You can change the name of the table in the `table` parameter (see below).

Use the following parameters to configure logging:

- `database` – Name of the database.
- `table` – Name of the system table the queries will be logged in.
- `partition_by` – Sets a [custom partitioning key](../../engines/table-engines/mergetree-family/custom-partitioning-key.md) for a system table.
- `flush_interval_milliseconds` – Interval for flushing data from the buffer in memory to the table.

If the table doesn’t exist, ClickHouse will create it. If the structure of the query thread log changed when the ClickHouse server was updated, the table with the old structure is renamed, and a new table is created automatically.

**Example**

``` xml
<query_thread_log>
    <database>system</database>
    <table>query_thread_log</table>
    <partition_by>toMonday(event_date)</partition_by>
    <flush_interval_milliseconds>7500</flush_interval_milliseconds>
</query_thread_log>
```
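Per-thread statistics can then be correlated with a query by its identifier; a minimal sketch (the `query_id` value is a placeholder):

``` sql
SELECT thread_name, thread_id, read_rows
FROM system.query_thread_log
WHERE query_id = 'ebca3574-ad0a-400a-9cbc-dca382f5998c'
ORDER BY event_time;
```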
## trace_log {#server_configuration_parameters-trace_log}

Settings for the [trace_log](../../operations/system-tables.md#system_tables-trace_log) system table operation.

Parameters:

- `database` — Database for storing a table.
- `table` — Table name.
- `partition_by` — [Custom partitioning key](../../engines/table-engines/mergetree-family/custom-partitioning-key.md) for a system table.
- `flush_interval_milliseconds` — Interval for flushing data from the buffer in memory to the table.

The default server configuration file `config.xml` contains the following settings section:

``` xml
<trace_log>
    <database>system</database>
    <table>trace_log</table>
    <partition_by>toYYYYMM(event_date)</partition_by>
    <flush_interval_milliseconds>7500</flush_interval_milliseconds>
</trace_log>
```
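To see which queries the sampling profiler collected the most samples for, the table can be aggregated directly; a minimal sketch:

``` sql
SELECT query_id, count() AS samples
FROM system.trace_log
WHERE event_date = today()
GROUP BY query_id
ORDER BY samples DESC
LIMIT 5;
```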
## query_masking_rules {#query-masking-rules}

Regexp-based rules, which will be applied to queries as well as all log messages before storing them in server logs,
the `system.query_log`, `system.text_log`, `system.processes` tables, and in logs sent to the client. That allows preventing
sensitive data leakage from SQL queries (like names, emails, personal
identifiers or credit card numbers) to logs.

**Example**

``` xml
<query_masking_rules>
    <rule>
        <name>hide SSN</name>
        <regexp>(^|\D)\d{3}-\d{2}-\d{4}($|\D)</regexp>
        <replace>000-00-0000</replace>
    </rule>
</query_masking_rules>
```

Config fields:
- `name` - name for the rule (optional)
- `regexp` - RE2 compatible regular expression (mandatory)
- `replace` - substitution string for sensitive data (optional, by default - six asterisks)

The masking rules are applied to the whole query (to prevent leaks of sensitive data from malformed / non-parseable queries).

The `system.events` table has a counter `QueryMaskingRulesMatch`, which holds the overall number of query masking rule matches.

For distributed queries each server has to be configured separately, otherwise subqueries passed to other
nodes will be stored without masking.

## remote_servers {#server-settings-remote-servers}

Configuration of clusters used by the [Distributed](../../engines/table-engines/special/distributed.md) table engine and by the `cluster` table function.

**Example**

``` xml
<remote_servers incl="clickhouse_remote_servers" />
```

For the value of the `incl` attribute, see the section “[Configuration files](../configuration-files.md#configuration_files)”.

**See also**

- [skip_unavailable_shards](../settings/settings.md#settings-skip_unavailable_shards)

## timezone {#server_configuration_parameters-timezone}

The server’s time zone.

Specified as an IANA identifier for the UTC timezone or geographic location (for example, Africa/Abidjan).

The time zone is necessary for conversions between String and DateTime formats when DateTime fields are output to text format (printed on the screen or in a file), and when getting DateTime from a string. Besides, the time zone is used in functions that work with the time and date if they didn’t receive the time zone in the input parameters.

**Example**

``` xml
<timezone>Europe/Moscow</timezone>
```

## tcp_port {#server_configuration_parameters-tcp_port}

Port for communicating with clients over the TCP protocol.

**Example**

``` xml
<tcp_port>9000</tcp_port>
```

## tcp_port_secure {#server_configuration_parameters-tcp_port_secure}

TCP port for secure communication with clients. Use it with the [OpenSSL](#server_configuration_parameters-openssl) settings.

**Possible values**

Positive integer.

**Default value**

``` xml
<tcp_port_secure>9440</tcp_port_secure>
```

## mysql_port {#server_configuration_parameters-mysql_port}

Port for communicating with clients over the MySQL protocol.

**Possible values**

Positive integer.

Example

``` xml
<mysql_port>9004</mysql_port>
```

## tmp_path {#server-settings-tmp_path}

Path to temporary data for processing large queries.

!!! note "Note"
    The trailing slash is mandatory.
**Example**

``` xml
<tmp_path>/var/lib/clickhouse/tmp/</tmp_path>
```

## tmp_policy {#server-settings-tmp-policy}

Policy from [`storage_configuration`](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-multiple-volumes) to store temporary files.
If not set, [`tmp_path`](#server-settings-tmp_path) is used, otherwise it is ignored.

!!! note "Note"
    - `move_factor` is ignored
    - `keep_free_space_bytes` is ignored
    - `max_data_part_size_bytes` is ignored
    - you must have exactly one volume in that policy

## uncompressed_cache_size {#server-settings-uncompressed_cache_size}

Cache size (in bytes) for uncompressed data used by table engines from the [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) family.

There is one shared cache for the server. Memory is allocated on demand. The cache is used if the option [use_uncompressed_cache](../settings/settings.md#setting-use_uncompressed_cache) is enabled.

The uncompressed cache is advantageous for very short queries in individual cases.

**Example**

``` xml
<uncompressed_cache_size>8589934592</uncompressed_cache_size>
```

## user_files_path {#server_configuration_parameters-user_files_path}

The directory with user files. Used in the table function [file()](../../sql-reference/table-functions/file.md).

**Example**

``` xml
<user_files_path>/var/lib/clickhouse/user_files/</user_files_path>
```

## users_config {#users-config}

Path to the file that contains:

- User configurations.
- Access rights.
- Settings profiles.
- Quota settings.

**Example**

``` xml
<users_config>users.xml</users_config>
```

## zookeeper {#server-settings_zookeeper}

Contains settings that allow ClickHouse to interact with a [ZooKeeper](http://zookeeper.apache.org/) cluster.

ClickHouse uses ZooKeeper for storing metadata of replicas when using replicated tables. If replicated tables are not used, this section of parameters can be omitted.

This section contains the following parameters:

- `node` — ZooKeeper endpoint. You can set multiple endpoints.

    For example:

``` xml
<node index="1">
    <host>example_host</host>
    <port>2181</port>
</node>
```

    The `index` attribute specifies the node order when trying to connect to the ZooKeeper cluster.

- `session_timeout` — Maximum timeout for the client session in milliseconds.
- `root` — The [znode](http://zookeeper.apache.org/doc/r3.5.5/zookeeperOver.html#Nodes+and+ephemeral+nodes) that is used as the root for znodes used by the ClickHouse server. Optional.
- `identity` — User and password, that can be required by ZooKeeper to give access to requested znodes. Optional.

**Example configuration**

``` xml
<zookeeper>
    <node>
        <host>example1</host>
        <port>2181</port>
    </node>
    <node>
        <host>example2</host>
        <port>2181</port>
    </node>
    <session_timeout_ms>30000</session_timeout_ms>
    <operation_timeout_ms>10000</operation_timeout_ms>
    <root>/path/to/zookeeper/node</root>
    <identity>user:password</identity>
</zookeeper>
```

**See also**

- [Replication](../../engines/table-engines/mergetree-family/replication.md)
- [ZooKeeper Programmer’s Guide](http://zookeeper.apache.org/doc/current/zookeeperProgrammers.html)

## use_minimalistic_part_header_in_zookeeper {#server-settings-use_minimalistic_part_header_in_zookeeper}

Storage method for data part headers in ZooKeeper.

This setting only applies to the `MergeTree` family. It can be specified:

- Globally in the [merge_tree](#server_configuration_parameters-merge_tree) section of the `config.xml` file.

    ClickHouse uses the setting for all the tables on the server.
You can change the setting at any time. Existing tables change their behavior when the setting changes.

- For each table.

    When creating a table, specify the corresponding [engine setting](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table). The behavior of an existing table with this setting does not change, even if the global setting changes.

**Possible values**

- 0 — Functionality is turned off.
- 1 — Functionality is turned on.

If `use_minimalistic_part_header_in_zookeeper = 1`, then [replicated](../../engines/table-engines/mergetree-family/replication.md) tables store the headers of the data parts compactly in a single `znode`. If the table contains many columns, this storage method significantly reduces the volume of the data stored in ZooKeeper.

!!! attention "Attention"
    After applying `use_minimalistic_part_header_in_zookeeper = 1`, you can’t downgrade the ClickHouse server to a version that doesn’t support this setting. Be careful when upgrading ClickHouse on servers in a cluster. Don’t upgrade all the servers at once. It is safer to test new versions of ClickHouse in a test environment, or on just a few servers of a cluster.

    Data part headers already stored with this setting can't be restored to their previous (non-compact) representation.

**Default value:** 0.

## disable_internal_dns_cache {#server-settings-disable-internal-dns-cache}

Disables the internal DNS cache. Recommended for operating ClickHouse in systems
with frequently changing infrastructure such as Kubernetes.

**Default value:** 0.

## dns_cache_update_period {#server-settings-dns-cache-update-period}

The period of updating IP addresses stored in the ClickHouse internal DNS cache (in seconds).
The update is performed asynchronously, in a separate system thread.

**Default value**: 15.

## access_control_path {#access_control_path}

Path to a folder where a ClickHouse server stores user and role configurations created by SQL commands.

Default value: `/var/lib/clickhouse/access/`.

**See also**

- [Access Control and Account Management](../access-rights.md#access-control)

[Original article](https://clickhouse.tech/docs/en/operations/server_configuration_parameters/settings/)

diff --git a/docs/es/operations/settings/constraints-on-settings.md b/docs/es/operations/settings/constraints-on-settings.md
deleted file mode 100644
index fe385f6ddbb..00000000000
--- a/docs/es/operations/settings/constraints-on-settings.md
+++ /dev/null
@@ -1,75 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_priority: 62
toc_title: Constraints on Settings
---

# Constraints on Settings {#constraints-on-settings}

Constraints on settings can be defined in the `profiles` section of the `user.xml` configuration file and prohibit users from changing some of the settings with the `SET` query.
The constraints are defined as follows:

``` xml
<profiles>
  <user_name>
    <constraints>
      <setting_name_1>
        <min>lower_boundary</min>
      </setting_name_1>
      <setting_name_2>
        <max>upper_boundary</max>
      </setting_name_2>
      <setting_name_3>
        <min>lower_boundary</min>
        <max>upper_boundary</max>
      </setting_name_3>
      <setting_name_4>
        <readonly/>
      </setting_name_4>
    </constraints>
  </user_name>
</profiles>
```

If the user tries to violate the constraints, an exception is thrown and the setting is not changed.
Three types of constraints are supported: `min`, `max`, `readonly`. The `min` and `max` constraints specify upper and lower boundaries for a numeric setting and can be used in combination. The `readonly` constraint specifies that the user cannot change the corresponding setting at all.

**Example:** Let `users.xml` include the lines:

``` xml
<profiles>
  <default>
    <max_memory_usage>10000000000</max_memory_usage>
    <force_index_by_date>0</force_index_by_date>
    ...
    <constraints>
      <max_memory_usage>
        <min>5000000000</min>
        <max>20000000000</max>
      </max_memory_usage>
      <force_index_by_date>
        <readonly/>
      </force_index_by_date>
    </constraints>
  </default>
</profiles>
```

The following queries all throw exceptions:

``` sql
SET max_memory_usage=20000000001;
SET max_memory_usage=4999999999;
SET force_index_by_date=1;
```

``` text
Code: 452, e.displayText() = DB::Exception: Setting max_memory_usage should not be greater than 20000000000.
Code: 452, e.displayText() = DB::Exception: Setting max_memory_usage should not be less than 5000000000.
Code: 452, e.displayText() = DB::Exception: Setting force_index_by_date should not be changed.
```

**Note:** the `default` profile has special handling: all the constraints defined for the `default` profile become the default constraints, so they restrict all the users until they’re overridden explicitly for these users.

[Original article](https://clickhouse.tech/docs/en/operations/settings/constraints_on_settings/)

diff --git a/docs/es/operations/settings/index.md b/docs/es/operations/settings/index.md
deleted file mode 100644
index 37aab0a7e1b..00000000000
--- a/docs/es/operations/settings/index.md
+++ /dev/null
@@ -1,33 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_folder_title: Settings
toc_priority: 55
toc_title: Introduction
---

# Settings {#session-settings-intro}

There are multiple ways to make all the settings described in this section of documentation.

Settings are configured in layers, so each subsequent layer redefines the previous settings.

Ways to configure settings, in order of priority:

- Settings in the `users.xml` server configuration file.

    Set in the element `<profiles>`.

- Session settings.

    Send `SET setting=value` from the ClickHouse console client in interactive mode.
    Similarly, you can use ClickHouse sessions in the HTTP protocol. To do this, you need to specify the `session_id` HTTP parameter.

- Query settings.

    - When starting the ClickHouse console client in non-interactive mode, set the startup parameter `--setting=value`.
    - When using the HTTP API, pass CGI parameters (`URL?setting_1=value&setting_2=value...`).

Settings that can only be made in the server config file are not covered in this section.

[Original article](https://clickhouse.tech/docs/en/operations/settings/)

diff --git a/docs/es/operations/settings/permissions-for-queries.md b/docs/es/operations/settings/permissions-for-queries.md
deleted file mode 100644
index f9f669b876e..00000000000
--- a/docs/es/operations/settings/permissions-for-queries.md
+++ /dev/null
@@ -1,61 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_priority: 58
toc_title: Permissions for Queries
---

# Permissions for Queries {#permissions_for_queries}

Queries in ClickHouse can be divided into several types:

1. Read data queries: `SELECT`, `SHOW`, `DESCRIBE`, `EXISTS`.
2. Write data queries: `INSERT`, `OPTIMIZE`.
3. Change settings queries: `SET`, `USE`.
4. [DDL](https://en.wikipedia.org/wiki/Data_definition_language) queries: `CREATE`, `ALTER`, `RENAME`, `ATTACH`, `DETACH`, `DROP`, `TRUNCATE`.
5. `KILL QUERY`.

The following settings regulate user permissions by the type of query:

- [readonly](#settings_readonly) — Restricts permissions for all types of queries except DDL queries.
- [allow_ddl](#settings_allow_ddl) — Restricts permissions for DDL queries.

`KILL QUERY` can be performed with any settings.

## readonly {#settings_readonly}

Restricts permissions for read data, write data and change settings queries.

See how the queries are divided into types [above](#permissions_for_queries).

Possible values:

- 0 — All queries are allowed.
- 1 — Only read data queries are allowed.
- 2 — Read data and change settings queries are allowed.

After setting `readonly = 1`, the user can’t change the `readonly` and `allow_ddl` settings in the current session.

When using the `GET` method in the [HTTP interface](../../interfaces/http.md), `readonly = 1` is set automatically. To modify data, use the `POST` method.

Setting `readonly = 1` prohibits the user from changing all the settings. There is a way to prohibit the user
from changing only specific settings; for details see [constraints on settings](constraints-on-settings.md).

Default value: 0

## allow_ddl {#settings_allow_ddl}

Allows or denies [DDL](https://en.wikipedia.org/wiki/Data_definition_language) queries.

See how the queries are divided into types [above](#permissions_for_queries).

Possible values:

- 0 — DDL queries are not allowed.
- 1 — DDL queries are allowed.

You cannot execute `SET allow_ddl = 1` if `allow_ddl = 0` for the current session.

Default value: 1
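As a quick illustration of the `readonly` modes described above, a minimal sketch of a session (the table name is a placeholder and the error texts are paraphrased):

``` sql
SET readonly = 1;

SELECT count() FROM system.tables;  -- allowed: a read data query
SET max_threads = 4;                -- fails: settings cannot be changed in readonly mode
INSERT INTO t VALUES (1);           -- fails: data cannot be written in readonly mode
```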
[Original article](https://clickhouse.tech/docs/en/operations/settings/permissions_for_queries/)

diff --git a/docs/es/operations/settings/query-complexity.md b/docs/es/operations/settings/query-complexity.md
deleted file mode 100644
index 82bc235c30d..00000000000
--- a/docs/es/operations/settings/query-complexity.md
+++ /dev/null
@@ -1,300 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_priority: 59
toc_title: Restrictions on Query Complexity
---

# Restrictions on Query Complexity {#restrictions-on-query-complexity}

Restrictions on query complexity are part of the settings.
They are used to provide safer execution from the user interface.
Almost all the restrictions only apply to `SELECT`. For distributed query processing, restrictions are applied on each server separately.

ClickHouse checks the restrictions for data parts, not for each row. It means that you can exceed the value of the restriction with the size of the data part.

Restrictions on the “maximum amount of something” can take the value 0, which means “unrestricted”.
Most restrictions also have an ‘overflow_mode’ setting, meaning what to do when the limit is exceeded.
It can take one of two values: `throw` or `break`. Restrictions on aggregation (group_by_overflow_mode) also have the value `any`.

`throw` – Throw an exception (default).

`break` – Stop executing the query and return the partial result, as if the source data ran out.

`any (only for group_by_overflow_mode)` – Continuing aggregation for the keys that got into the set, but don’t add new keys to the set.

## max_memory_usage {#settings_max_memory_usage}

The maximum amount of RAM to use for running a query on a single server.

In the default configuration file, the maximum is 10 GB.

The setting does not consider the volume of available memory or the total volume of memory on the machine.
The restriction applies to a single query within a single server.
You can use `SHOW PROCESSLIST` to see the current memory consumption for each query.
Besides, the peak memory consumption is tracked for each query and written to the log.

Memory usage is not monitored for the states of certain aggregate functions.

Memory usage is not fully tracked for states of the aggregate functions `min`, `max`, `any`, `anyLast`, `argMin`, `argMax` from `String` and `Array` arguments.

Memory consumption is also restricted by the parameters `max_memory_usage_for_user` and `max_memory_usage_for_all_queries`.

## max_memory_usage_for_user {#max-memory-usage-for-user}

The maximum amount of RAM to use for running a user’s queries on a single server.

Default values are defined in [Settings.h](https://github.com/ClickHouse/ClickHouse/blob/master/src/Core/Settings.h#L288). By default, the amount is not restricted (`max_memory_usage_for_user = 0`).

See also the description of [max_memory_usage](#settings_max_memory_usage).

## max_memory_usage_for_all_queries {#max-memory-usage-for-all-queries}

The maximum amount of RAM to use for running all queries on a single server.

Default values are defined in [Settings.h](https://github.com/ClickHouse/ClickHouse/blob/master/src/Core/Settings.h#L289). By default, the amount is not restricted (`max_memory_usage_for_all_queries = 0`).

See also the description of [max_memory_usage](#settings_max_memory_usage).

## max_rows_to_read {#max-rows-to-read}

The following restrictions can be checked on each block (instead of on each row). That is, the restrictions can be broken a little.

A maximum number of rows that can be read from a table when running a query.

## max_bytes_to_read {#max-bytes-to-read}

A maximum number of bytes (uncompressed data) that can be read from a table when running a query.

## read_overflow_mode {#read-overflow-mode}

What to do when the volume of data read exceeds one of the limits: ‘throw’ or ‘break’. By default, throw.

## max_rows_to_group_by {#settings-max-rows-to-group-by}

A maximum number of unique keys received from aggregation. This setting lets you limit memory consumption when aggregating.

## group_by_overflow_mode {#group-by-overflow-mode}

What to do when the number of unique keys for aggregation exceeds the limit: ‘throw’, ‘break’, or ‘any’. By default, throw.
Using the ‘any’ value lets you run an approximation of GROUP BY. The quality of this approximation depends on the statistical nature of the data.
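A minimal sketch of the approximate GROUP BY described above (the limit and data sizes are illustrative):

``` sql
SET max_rows_to_group_by = 100000, group_by_overflow_mode = 'any';

-- Once 100000 unique keys have been collected, further new keys are ignored
-- instead of the query failing:
SELECT number % 1000000 AS k, count()
FROM numbers(10000000)
GROUP BY k
FORMAT Null;
```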
## max_bytes_before_external_group_by {#settings-max_bytes_before_external_group_by}

Enables or disables execution of `GROUP BY` clauses in external memory. See [GROUP BY in external memory](../../sql-reference/statements/select/group-by.md#select-group-by-in-external-memory).

Possible values:

- Maximum volume of RAM (in bytes) that can be used by the single [GROUP BY](../../sql-reference/statements/select/group-by.md#select-group-by-clause) operation.
- 0 — `GROUP BY` in external memory disabled.

Default value: 0.

## max_rows_to_sort {#max-rows-to-sort}

A maximum number of rows before sorting. This allows you to limit memory consumption when sorting.

## max_bytes_to_sort {#max-bytes-to-sort}

A maximum number of bytes before sorting.

## sort_overflow_mode {#sort-overflow-mode}

What to do if the number of rows received before sorting exceeds one of the limits: ‘throw’ or ‘break’. By default, throw.

## max_result_rows {#setting-max_result_rows}

Limit on the number of rows in the result. Also checked for subqueries, and on remote servers when running parts of a distributed query.

## max_result_bytes {#max-result-bytes}

Limit on the number of bytes in the result. The same as the previous setting.

## result_overflow_mode {#result-overflow-mode}

What to do if the volume of the result exceeds one of the limits: ‘throw’ or ‘break’. By default, throw.

Using ‘break’ is similar to using LIMIT. `Break` interrupts execution only at the block level. This means that the amount of returned rows is greater than [max_result_rows](#setting-max_result_rows), is a multiple of [max_block_size](settings.md#setting-max_block_size), and depends on [max_threads](settings.md#settings-max_threads).

Example:

``` sql
SET max_threads = 3, max_block_size = 3333;
SET max_result_rows = 3334, result_overflow_mode = 'break';

SELECT *
FROM numbers_mt(100000)
FORMAT Null;
```

Result:

``` text
6666 rows in set. ...
```

## max_execution_time {#max-execution-time}

Maximum query execution time in seconds.
At this time, it is not checked for one of the sorting stages, or when merging and finalizing aggregate functions.

## timeout_overflow_mode {#timeout-overflow-mode}

What to do if the query is run longer than ‘max_execution_time’: ‘throw’ or ‘break’. By default, throw.

## min_execution_speed {#min-execution-speed}

Minimal execution speed in rows per second. Checked on every data block when ‘timeout_before_checking_execution_speed’ expires. If the execution speed is lower, an exception is thrown.

## min_execution_speed_bytes {#min-execution-speed-bytes}

A minimum number of execution bytes per second. Checked on every data block when ‘timeout_before_checking_execution_speed’ expires. If the execution speed is lower, an exception is thrown.

## max_execution_speed {#max-execution-speed}

A maximum number of execution rows per second. Checked on every data block when ‘timeout_before_checking_execution_speed’ expires. If the execution speed is high, the execution speed will be reduced.

## max_execution_speed_bytes {#max-execution-speed-bytes}

A maximum number of execution bytes per second. Checked on every data block when ‘timeout_before_checking_execution_speed’ expires.
-
-## timeout_before_checking_execution_speed {#timeout-before-checking-execution-speed}
-
-Checks that execution speed is not too slow (no less than ‘min_execution_speed’), after the specified time in seconds has expired.
-
-## max_columns_to_read {#max-columns-to-read}
-
-A maximum number of columns that can be read from a table in a single query. If a query requires reading a greater number of columns, it throws an exception.
-
-## max_temporary_columns {#max-temporary-columns}
-
-A maximum number of temporary columns that must be kept in RAM at the same time when running a query, including constant columns. If there are more temporary columns than this, it throws an exception.
-
-## max_temporary_non_const_columns {#max-temporary-non-const-columns}
-
-The same thing as ‘max_temporary_columns’, but without counting constant columns.
-Note that constant columns are formed fairly often when running a query, but they require approximately zero computing resources.
-
-## max_subquery_depth {#max-subquery-depth}
-
-Maximum nesting depth of subqueries. If subqueries are deeper, an exception is thrown. By default, 100.
-
-## max_pipeline_depth {#max-pipeline-depth}
-
-Maximum pipeline depth. Corresponds to the number of transformations that each data block goes through during query processing. Counted within the limits of a single server. If the pipeline depth is greater, an exception is thrown. By default, 1000.
-
-## max_ast_depth {#max-ast-depth}
-
-Maximum nesting depth of a query syntactic tree. If exceeded, an exception is thrown.
-At this time, it isn't checked during parsing, but only after parsing the query. That is, a syntactic tree that is too deep can be created during parsing, but the query will fail. By default, 1000.
-
-## max_ast_elements {#max-ast-elements}
-
-A maximum number of elements in a query syntactic tree. If exceeded, an exception is thrown.
-In the same way as the previous setting, it is checked only after parsing the query. By default, 50,000.
-
-## max_rows_in_set {#max-rows-in-set}
-
-A maximum number of rows for a data set in the IN clause created from a subquery.
-
-## max_bytes_in_set {#max-bytes-in-set}
-
-A maximum number of bytes (uncompressed data) used by a set in the IN clause created from a subquery.
-
-## set_overflow_mode {#set-overflow-mode}
-
-What to do when the amount of data exceeds one of the limits: ‘throw’ or ‘break’. By default, throw.
-
-## max_rows_in_distinct {#max-rows-in-distinct}
-
-A maximum number of different rows when using DISTINCT.
-
-## max_bytes_in_distinct {#max-bytes-in-distinct}
-
-A maximum number of bytes used by a hash table when using DISTINCT.
-
-## distinct_overflow_mode {#distinct-overflow-mode}
-
-What to do when the amount of data exceeds one of the limits: ‘throw’ or ‘break’. By default, throw.
-
-## max_rows_to_transfer {#max-rows-to-transfer}
-
-A maximum number of rows that can be passed to a remote server or saved in a temporary table when using GLOBAL IN.
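-
-A hedged sketch of the transfer limits from this and the next two sections (the cluster addresses and table names below are placeholders):
-
-``` sql
-SET max_rows_to_transfer = 1000000;     -- cap rows shipped for the GLOBAL IN set
-SET transfer_overflow_mode = 'break';   -- stop filling the set instead of throwing
-
-SELECT count()
-FROM remote('replica{1,2}', default.hits)
-WHERE UserID GLOBAL IN (SELECT UserID FROM default.visits);
-```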
-
-## max_bytes_to_transfer {#max-bytes-to-transfer}
-
-A maximum number of bytes (uncompressed data) that can be passed to a remote server or saved in a temporary table when using GLOBAL IN.
-
-## transfer_overflow_mode {#transfer-overflow-mode}
-
-What to do when the amount of data exceeds one of the limits: ‘throw’ or ‘break’. By default, throw.
-
-## max_rows_in_join {#settings-max_rows_in_join}
-
-Limits the number of rows in the hash table that is used when joining tables.
-
-This setting applies to [SELECT … JOIN](../../sql-reference/statements/select/join.md#select-join) operations and the [Join](../../engines/table-engines/special/join.md) table engine.
-
-If a query contains multiple joins, ClickHouse checks this setting for every intermediate result.
-
-ClickHouse can proceed with different actions when the limit is reached. Use the [join_overflow_mode](#settings-join_overflow_mode) setting to choose the action.
-
-Possible values:
-
-- Positive integer.
-- 0 — Unlimited number of rows.
-
-Default value: 0.
-
-## max_bytes_in_join {#settings-max_bytes_in_join}
-
-Limits the size in bytes of the hash table used when joining tables.
-
-This setting applies to [SELECT … JOIN](../../sql-reference/statements/select/join.md#select-join) operations and the [Join table engine](../../engines/table-engines/special/join.md).
-
-If the query contains joins, ClickHouse checks this setting for every intermediate result.
-
-ClickHouse can proceed with different actions when the limit is reached. Use [join_overflow_mode](#settings-join_overflow_mode) to choose the action.
-
-Possible values:
-
-- Positive integer.
-- 0 — Memory control is disabled.
-
-Default value: 0.
-
-## join_overflow_mode {#settings-join_overflow_mode}
-
-Defines what action ClickHouse performs when any of the following join limits is reached:
-
-- [max_bytes_in_join](#settings-max_bytes_in_join)
-- [max_rows_in_join](#settings-max_rows_in_join)
-
-Possible values:
-
-- `THROW` — ClickHouse throws an exception and breaks operation.
-- `BREAK` — ClickHouse breaks operation and doesn't throw an exception.
-
-Default value: `THROW`.
-
-**See also**
-
-- [JOIN clause](../../sql-reference/statements/select/join.md#select-join)
-- [Join table engine](../../engines/table-engines/special/join.md)
-
-## max_partitions_per_insert_block {#max-partitions-per-insert-block}
-
-Limits the maximum number of partitions in a single inserted block.
-
-- Positive integer.
-- 0 — Unlimited number of partitions.
-
-Default value: 100.
-
-**Details**
-
-When inserting data, ClickHouse calculates the number of partitions in the inserted block. If the number of partitions is more than `max_partitions_per_insert_block`, ClickHouse throws an exception with the following text:
-
-> “Too many partitions for single INSERT block (more than” + toString(max_parts) + “). The limit is controlled by ‘max_partitions_per_insert_block’ setting. A large number of partitions is a common misconception. It will lead to severe negative performance impact, including slow server startup, slow INSERT queries and slow SELECT queries. Recommended total number of partitions for a table is under 1000..10000. Please note, that partitioning is not intended to speed up SELECT queries (ORDER BY key is sufficient to make range queries fast). Partitions are intended for data manipulation (DROP PARTITION, etc).”
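-
-A small illustration (the table name and layout are hypothetical): a `MergeTree` table partitioned by day rejects a single INSERT block spanning a year, unless the limit is raised:
-
-``` sql
-CREATE TABLE part_demo (d Date, x UInt32)
-ENGINE = MergeTree PARTITION BY d ORDER BY x;
-
-INSERT INTO part_demo                                -- 365 partitions > 100: throws
-SELECT toDate('2021-01-01') + number, number FROM numbers(365);
-
-SET max_partitions_per_insert_block = 365;           -- now the same INSERT succeeds
-```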
-
-[Original article](https://clickhouse.tech/docs/en/operations/settings/query_complexity/)
diff --git a/docs/es/operations/settings/settings-profiles.md b/docs/es/operations/settings/settings-profiles.md
deleted file mode 100644
index 3d96a2c8fba..00000000000
--- a/docs/es/operations/settings/settings-profiles.md
+++ /dev/null
@@ -1,81 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_priority: 61
-toc_title: "Perfiles de configuraci\xF3n"
----
-
-# Settings profiles {#settings-profiles}
-
-A settings profile is a collection of settings grouped under the same name.
-
-!!! note "Information"
-    ClickHouse also supports [SQL-driven workflow](../access-rights.md#access-control) for managing settings profiles. We recommend using it.
-
-A profile can have any name. You can specify the same profile for different users. The most important thing you can write in a settings profile is `readonly=1`, which ensures read-only access.
-
-Settings profiles can inherit from each other. To use inheritance, indicate one or multiple `profile` settings before the other settings that are listed in the profile. In case one setting is defined in different profiles, the latest defined one is used.
-
-To apply all the settings in a profile, set the `profile` setting.
-
-Example:
-
-Install the `web` profile.
-
-``` sql
-SET profile = 'web'
-```
-
-Settings profiles are declared in the user config file. This is usually `users.xml`.
-
-Example:
-
-``` xml
-<!-- Settings profiles -->
-<profiles>
-    <!-- Default settings -->
-    <default>
-        <!-- The maximum number of threads when running a single query. -->
-        <max_threads>8</max_threads>
-    </default>
-
-    <!-- Settings for queries from the user interface -->
-    <web>
-        <max_rows_to_read>1000000000</max_rows_to_read>
-        <max_bytes_to_read>100000000000</max_bytes_to_read>
-
-        <max_rows_to_group_by>1000000</max_rows_to_group_by>
-        <group_by_overflow_mode>any</group_by_overflow_mode>
-
-        <max_rows_to_sort>1000000</max_rows_to_sort>
-        <max_bytes_to_sort>1000000000</max_bytes_to_sort>
-
-        <max_result_rows>100000</max_result_rows>
-        <max_result_bytes>100000000</max_result_bytes>
-        <result_overflow_mode>break</result_overflow_mode>
-
-        <max_execution_time>600</max_execution_time>
-        <min_execution_speed>1000000</min_execution_speed>
-        <timeout_before_checking_execution_speed>15</timeout_before_checking_execution_speed>
-
-        <max_columns_to_read>25</max_columns_to_read>
-        <max_temporary_columns>100</max_temporary_columns>
-        <max_temporary_non_const_columns>50</max_temporary_non_const_columns>
-
-        <max_subquery_depth>2</max_subquery_depth>
-        <max_pipeline_depth>25</max_pipeline_depth>
-        <max_ast_depth>50</max_ast_depth>
-        <max_ast_elements>100</max_ast_elements>
-
-        <readonly>1</readonly>
-    </web>
-</profiles>
-```
-
-The example specifies two profiles: `default` and `web`.
-
-The `default` profile has a special purpose: it must always be present and is applied when starting the server. In other words, the `default` profile contains default settings.
-
-The `web` profile is a regular profile that can be set using the `SET` query or using a URL parameter in an HTTP query.
-
-[Original article](https://clickhouse.tech/docs/en/operations/settings/settings_profiles/)
diff --git a/docs/es/operations/settings/settings-users.md b/docs/es/operations/settings/settings-users.md
deleted file mode 100644
index 1c1ac7914f0..00000000000
--- a/docs/es/operations/settings/settings-users.md
+++ /dev/null
@@ -1,164 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_priority: 63
-toc_title: "Configuraci\xF3n del usuario"
----
-
-# User settings {#user-settings}
-
-The `users` section of the `user.xml` configuration file contains user settings.
-
-!!! note "Information"
-    ClickHouse also supports [SQL-driven workflow](../access-rights.md#access-control) for managing users. We recommend using it.
-
-Structure of the `users` section:
-
-``` xml
-<users>
-    <!-- If user name was not specified, 'default' user is used. -->
-    <user_name>
-        <password></password>
-        <!-- Or -->
-        <password_sha256_hex></password_sha256_hex>
-
-        <access_management>0|1</access_management>
-
-        <networks incl="networks" replace="replace">
-        </networks>
-
-        <profile>profile_name</profile>
-
-        <quota>default</quota>
-
-        <databases>
-            <database_name>
-                <table_name>
-                    <filter>expression</filter>
-                </table_name>
-            </database_name>
-        </databases>
-    </user_name>
-    <!-- Other users settings -->
-</users>
-```
-
-### user_name/password {#user-namepassword}
-
-Password can be specified in plaintext or in SHA256 (hex format).
-
-- To assign a password in plaintext (**not recommended**), place it in a `password` element.
-
-    For example, `<password>qwerty</password>`. The password can be left blank.
-
-- To assign a password using its SHA256 hash, place it in a `password_sha256_hex` element.
-
-    For example, `<password_sha256_hex>65e84be33532fb784c48129675f9eff3a682b27168c0ea744b2cf58ee02337c5</password_sha256_hex>`.
-
-    Example of how to generate a password from the shell:
-
-        PASSWORD=$(base64 < /dev/urandom | head -c8); echo "$PASSWORD"; echo -n "$PASSWORD" | sha256sum | tr -d '-'
-
-    The first line of the result is the password. The second line is the corresponding SHA256 hash.
-
-- For compatibility with MySQL clients, the password can be specified as a double SHA1 hash. Place it in a `password_double_sha1_hex` element.
-
-    For example, `<password_double_sha1_hex>08b4a0f1de6ad37da17359e592c8d74788a83eb0</password_double_sha1_hex>`.
-
-    Example of how to generate a password from the shell:
-
-        PASSWORD=$(base64 < /dev/urandom | head -c8); echo "$PASSWORD"; echo -n "$PASSWORD" | sha1sum | tr -d '-' | xxd -r -p | sha1sum | tr -d '-'
-
-    The first line of the result is the password. The second line is the corresponding double SHA1 hash.
-
-### access_management {#access_management-user-setting}
-
-This setting enables or disables using of SQL-driven [access control and account management](../access-rights.md#access-control) for the user.
-
-Possible values:
-
-- 0 — Disabled.
-- 1 — Enabled.
-
-Default value: 0.
-
-### user_name/networks {#user-namenetworks}
-
-List of networks from which the user can connect to the ClickHouse server.
-
-Each element of the list can have one of the following forms:
-
-- `<ip>` — IP address or network mask.
-
-    Examples: `213.180.204.3`, `10.0.0.1/8`, `10.0.0.1/255.255.255.0`, `2a02:6b8::3`, `2a02:6b8::3/64`, `2a02:6b8::3/ffff:ffff:ffff:ffff::`.
-
-- `<host>` — Hostname.
-
-    Example: `example01.host.ru`.
-
-    To check access, a DNS query is performed, and all returned IP addresses are compared to the peer address.
-
-- `<host_regexp>` — Regular expression for hostnames.
-
-    Example, `^example\d\d-\d\d-\d\.host\.ru$`
-
-    To check access, a [DNS PTR query](https://en.wikipedia.org/wiki/Reverse_DNS_lookup) is performed for the peer address and then the specified regexp is applied. Then, another DNS query is performed for the results of the PTR query and all the received addresses are compared to the peer address. We strongly recommend that the regexp ends with $.
-
-All results of DNS requests are cached until the server restarts.
-
-**Examples**
-
-To open access for the user from any network, specify:
-
-``` xml
-<ip>::/0</ip>
-```
-
-!!! warning "Warning"
-    It is insecure to open access from any network unless you have a firewall properly configured or the server is not directly connected to the Internet.
-
-To open access only from localhost, specify:
-
-``` xml
-<ip>::1</ip>
-<ip>127.0.0.1</ip>
-```
-
-### user_name/profile {#user-nameprofile}
-
-You can assign a settings profile for the user. Settings profiles are configured in a separate section of the `users.xml` file. For more information, see [Settings profiles](settings-profiles.md).
-
-### user_name/quota {#user-namequota}
-
-Quotas allow you to track or limit resource usage over a period of time.
-Quotas are configured in the `quotas` section of the `users.xml` configuration file.
-
-You can assign a quota set for the user. For a detailed description of quota configuration, see [Quotas](../quotas.md#quotas).
-
-### user_name/databases {#user-namedatabases}
-
-In this section, you can limit rows that are returned by ClickHouse for `SELECT` queries made by the current user, thus implementing basic row-level security.
-
-**Example**
-
-The following configuration forces that user `user1` can only see the rows of `table1` as the result of `SELECT` queries, where the value of the `id` field is 1000.
-
-``` xml
-<user1>
-    <databases>
-        <database_name>
-            <table1>
-                <filter>id = 1000</filter>
-            </table1>
-        </database_name>
-    </databases>
-</user1>
-```
-
-The `filter` can be any expression resulting in a [UInt8](../../sql-reference/data-types/int-uint.md)-type value. It usually contains comparisons and logical operators. Rows from `database_name.table1` where filter results to 0 are not returned for this user. The filtering is incompatible with `PREWHERE` operations and disables the `WHERE→PREWHERE` optimization.
-
-[Original article](https://clickhouse.tech/docs/en/operations/settings/settings_users/)
diff --git a/docs/es/operations/settings/settings.md b/docs/es/operations/settings/settings.md
deleted file mode 100644
index 62511dd9fc0..00000000000
--- a/docs/es/operations/settings/settings.md
+++ /dev/null
@@ -1,1254 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
----
-
-# Settings {#settings}
-
-## distributed_product_mode {#distributed-product-mode}
-
-Changes the behaviour of [distributed subqueries](../../sql-reference/operators/in.md).
-
-ClickHouse applies this setting when the query contains the product of distributed tables, i.e. when the query for a distributed table contains a non-GLOBAL subquery for the distributed table.
-
-Restrictions:
-
-- Only applied for IN and JOIN subqueries.
-- Only if the FROM section uses a distributed table containing more than one shard.
-- If the subquery concerns a distributed table containing more than one shard.
-- Not used for a table-valued [remote](../../sql-reference/table-functions/remote.md) function.
-
-Possible values:
-
-- `deny` — Default value. Prohibits using these types of subqueries (returns the “Double-distributed in/JOIN subqueries is denied” exception).
-- `local` — Replaces the database and table in the subquery with local ones for the destination server (shard), leaving the normal `IN`/`JOIN.`
-- `global` — Replaces the `IN`/`JOIN` query with `GLOBAL IN`/`GLOBAL JOIN.`
-- `allow` — Allows the use of these types of subqueries.
-
-## enable_optimize_predicate_expression {#enable-optimize-predicate-expression}
-
-Turns on predicate pushdown in `SELECT` queries.
-
-Predicate pushdown may significantly reduce network traffic for distributed queries.
-
-Possible values:
-
-- 0 — Disabled.
-- 1 — Enabled.
-
-Default value: 1.
-
-Usage
-
-Consider the following queries:
-
-1. `SELECT count() FROM test_table WHERE date = '2018-10-10'`
-2. `SELECT count() FROM (SELECT * FROM test_table) WHERE date = '2018-10-10'`
-
-If `enable_optimize_predicate_expression = 1`, then the execution time of these queries is equal because ClickHouse applies `WHERE` to the subquery when processing it.
-
-If `enable_optimize_predicate_expression = 0`, then the execution time of the second query is much longer because the `WHERE` clause applies to all the data after the subquery finishes.
-
-## fallback_to_stale_replicas_for_distributed_queries {#settings-fallback_to_stale_replicas_for_distributed_queries}
-
-Forces a query to an out-of-date replica if updated data is not available. See [Replication](../../engines/table-engines/mergetree-family/replication.md).
-
-ClickHouse selects the most relevant from the outdated replicas of the table.
-
-Used when performing `SELECT` from a distributed table that points to replicated tables.
-
-By default, 1 (enabled).
-
-## force_index_by_date {#settings-force_index_by_date}
-
-Disables query execution if the index can't be used by date.
-
-Works with tables in the MergeTree family.
-
-If `force_index_by_date=1`, ClickHouse checks whether the query has a date key condition that can be used for restricting data ranges. If there is no suitable condition, it throws an exception. However, it does not check whether the condition reduces the amount of data to read. For example, the condition `Date != ' 2000-01-01 '` is acceptable even when it matches all the data in the table (i.e., running the query requires a full scan). For more information about ranges of data in MergeTree tables, see [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md).
-
-## force_primary_key {#force-primary-key}
-
-Disables query execution if indexing by the primary key is not possible.
-
-Works with tables in the MergeTree family.
-
-If `force_primary_key=1`, ClickHouse checks whether the query has a primary key condition that can be used for restricting data ranges. If there is no suitable condition, it throws an exception. However, it does not check whether the condition reduces the amount of data to read. For more information about data ranges in MergeTree tables, see [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md).
-
-## format_schema {#format-schema}
-
-This parameter is useful when you are using formats that require a schema definition, such as [Cap'n Proto](https://capnproto.org/) or [Protobuf](https://developers.google.com/protocol-buffers/). The value depends on the format.
-
-## fsync_metadata {#fsync-metadata}
-
-Enables or disables [fsync](http://pubs.opengroup.org/onlinepubs/9699919799/functions/fsync.html) when writing `.sql` files. Enabled by default.
-
-It makes sense to disable it if the server has millions of tiny tables that are constantly being created and destroyed.
-
-## enable_http_compression {#settings-enable_http_compression}
-
-Enables or disables data compression in the response to an HTTP request.
-
-For more information, read the [HTTP interface description](../../interfaces/http.md).
-
-Possible values:
-
-- 0 — Disabled.
-- 1 — Enabled.
-
-Default value: 0.
-
-## http_zlib_compression_level {#settings-http_zlib_compression_level}
-
-Sets the level of data compression in the response to an HTTP request if [enable_http_compression = 1](#settings-enable_http_compression).
-
-Possible values: Numbers from 1 to 9.
-
-Default value: 3.
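-
-As a quick sketch, both settings can be changed per session; compression still only applies when the HTTP client sends an `Accept-Encoding` header:
-
-``` sql
-SET enable_http_compression = 1;        -- compress HTTP responses
-SET http_zlib_compression_level = 6;    -- trade CPU for a smaller payload
-```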
-
-## http_native_compression_disable_checksumming_on_decompress {#settings-http_native_compression_disable_checksumming_on_decompress}
-
-Enables or disables checksum verification when decompressing the HTTP POST data from the client. Used only for the ClickHouse native compression format (not used with `gzip` or `deflate`).
-
-For more information, read the [HTTP interface description](../../interfaces/http.md).
-
-Possible values:
-
-- 0 — Disabled.
-- 1 — Enabled.
-
-Default value: 0.
-
-## send_progress_in_http_headers {#settings-send_progress_in_http_headers}
-
-Enables or disables `X-ClickHouse-Progress` HTTP response headers in `clickhouse-server` responses.
-
-For more information, read the [HTTP interface description](../../interfaces/http.md).
-
-Possible values:
-
-- 0 — Disabled.
-- 1 — Enabled.
-
-Default value: 0.
-
-## max_http_get_redirects {#setting-max_http_get_redirects}
-
-Limits the maximum number of HTTP GET redirect hops for [URL](../../engines/table-engines/special/url.md)-engine tables. The setting applies to both types of tables: those created by the [CREATE TABLE](../../sql-reference/statements/create.md#create-table-query) query and by the [url](../../sql-reference/table-functions/url.md) table function.
-
-Possible values:
-
-- Any positive integer number of hops.
-- 0 — No hops allowed.
-
-Default value: 0.
-
-## input_format_allow_errors_num {#settings-input_format_allow_errors_num}
-
-Sets the maximum number of acceptable errors when reading from text formats (CSV, TSV, etc.).
-
-The default value is 0.
-
-Always pair it with `input_format_allow_errors_ratio`.
-
-If an error occurred while reading rows but the error counter is still less than `input_format_allow_errors_num`, ClickHouse ignores the row and moves on to the next one.
-
-If both `input_format_allow_errors_num` and `input_format_allow_errors_ratio` are exceeded, ClickHouse throws an exception.
-
-## input_format_allow_errors_ratio {#settings-input_format_allow_errors_ratio}
-
-Sets the maximum percentage of errors allowed when reading from text formats (CSV, TSV, etc.).
-The percentage of errors is set as a floating-point number between 0 and 1.
-
-The default value is 0.
-
-Always pair it with `input_format_allow_errors_num`.
-
-If an error occurred while reading rows but the error counter is still less than `input_format_allow_errors_ratio`, ClickHouse ignores the row and moves on to the next one.
-
-If both `input_format_allow_errors_num` and `input_format_allow_errors_ratio` are exceeded, ClickHouse throws an exception.
-
-## input_format_values_interpret_expressions {#settings-input_format_values_interpret_expressions}
-
-Enables or disables the full SQL parser if the fast stream parser can't parse the data. This setting is used only for the [Values](../../interfaces/formats.md#data-format-values) format at data insertion. For more information about syntax parsing, see the [Syntax](../../sql-reference/syntax.md) section.
-
-Possible values:
-
-- 0 — Disabled.
-
-    In this case, you must provide formatted data. See the [Formats](../../interfaces/formats.md) section.
-
-- 1 — Enabled.
-
-    In this case, you can use an SQL expression as a value, but data insertion is much slower this way.
-    If you insert only formatted data, ClickHouse behaves as if the setting value is 0.
-
-Default value: 1.
-
-Example of Use
-
-Insert the [DateTime](../../sql-reference/data-types/datetime.md) type value with the different settings.
-
-``` sql
-SET input_format_values_interpret_expressions = 0;
-INSERT INTO datetime_t VALUES (now())
-```
-
-``` text
-Exception on client:
-Code: 27. DB::Exception: Cannot parse input: expected ) before: now()): (at row 1)
-```
-
-``` sql
-SET input_format_values_interpret_expressions = 1;
-INSERT INTO datetime_t VALUES (now())
-```
-
-``` text
-Ok.
-```
-
-The last query is equivalent to the following:
-
-``` sql
-SET input_format_values_interpret_expressions = 0;
-INSERT INTO datetime_t SELECT now()
-```
-
-``` text
-Ok.
-```
-
-## input_format_values_deduce_templates_of_expressions {#settings-input_format_values_deduce_templates_of_expressions}
-
-Enables or disables template deduction for SQL expressions in the [Values](../../interfaces/formats.md#data-format-values) format. It allows parsing and interpreting expressions in `Values` much faster if expressions in consecutive rows have the same structure. ClickHouse tries to deduce the template of an expression, parse the following rows using this template and evaluate the expression on a batch of successfully parsed rows.
-
-Possible values:
-
-- 0 — Disabled.
-- 1 — Enabled.
-
-Default value: 1.
-
-For the following query:
-
-``` sql
-INSERT INTO test VALUES (lower('Hello')), (lower('world')), (lower('INSERT')), (upper('Values')), ...
-```
-
-- If `input_format_values_interpret_expressions=1` and `format_values_deduce_templates_of_expressions=0`, expressions are interpreted separately for each row (this is very slow for a large number of rows).
-- If `input_format_values_interpret_expressions=0` and `format_values_deduce_templates_of_expressions=1`, expressions in the first, second and third rows are parsed using the template `lower(String)` and interpreted together; the expression in the fourth row is parsed with another template (`upper(String)`).
-- If `input_format_values_interpret_expressions=1` and `format_values_deduce_templates_of_expressions=1`, the same as in the previous case, but it also allows falling back to interpreting expressions separately if it's not possible to deduce the template.
-
-## input_format_values_accurate_types_of_literals {#settings-input-format-values-accurate-types-of-literals}
-
-This setting is used only when `input_format_values_deduce_templates_of_expressions = 1`. It can happen that expressions for some column have the same structure, but contain numeric literals of different types, e.g.
-
-``` sql
-(..., abs(0), ...),             -- UInt64 literal
-(..., abs(3.141592654), ...),   -- Float64 literal
-(..., abs(-1), ...),            -- Int64 literal
-```
-
-Possible values:
-
-- 0 — Disabled.
-
-    In this case, ClickHouse may use a more general type for some literals (e.g., `Float64` or `Int64` instead of `UInt64` for `42`), but it may cause overflow and precision issues.
-
-- 1 — Enabled.
-
-    In this case, ClickHouse checks the actual type of literal and uses an expression template of the corresponding type. In some cases, it may significantly slow down expression evaluation in `Values`.
-
-Default value: 1.
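-
-A hedged sketch reusing the page's hypothetical `test` table (here assumed to have a single numeric column): with both settings enabled, each batch keeps the exact literal type instead of widening it:
-
-``` sql
-SET input_format_values_deduce_templates_of_expressions = 1;
-SET input_format_values_accurate_types_of_literals = 1;
-
-INSERT INTO test VALUES (abs(0)), (abs(3.141592654)), (abs(-1));
-```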
-
-## input_format_defaults_for_omitted_fields {#session_settings-input_format_defaults_for_omitted_fields}
-
-When performing `INSERT` queries, replace omitted input column values with default values of the respective columns. This option only applies to the [JSONEachRow](../../interfaces/formats.md#jsoneachrow), [CSV](../../interfaces/formats.md#csv) and [TabSeparated](../../interfaces/formats.md#tabseparated) formats.
-
-!!! note "Note"
-    When this option is enabled, extended table metadata are sent from server to client. It consumes additional computing resources on the server and can reduce performance.
-
-Possible values:
-
-- 0 — Disabled.
-- 1 — Enabled.
-
-Default value: 1.
-
-## input_format_tsv_empty_as_default {#settings-input-format-tsv-empty-as-default}
-
-When enabled, replace empty input fields in TSV with default values. For complex default expressions `input_format_defaults_for_omitted_fields` must be enabled too.
-
-Disabled by default.
-
-## input_format_null_as_default {#settings-input-format-null-as-default}
-
-Enables or disables using default values if input data contain `NULL`, but the data type of the corresponding column is not `Nullable(T)` (for text input formats).
-
-## input_format_skip_unknown_fields {#settings-input-format-skip-unknown-fields}
-
-Enables or disables skipping insertion of extra data.
-
-When writing data, ClickHouse throws an exception if input data contain columns that do not exist in the target table. If skipping is enabled, ClickHouse doesn't insert extra data and doesn't throw an exception.
-
-Supported formats:
-
-- [JSONEachRow](../../interfaces/formats.md#jsoneachrow)
-- [CSVWithNames](../../interfaces/formats.md#csvwithnames)
-- [TabSeparatedWithNames](../../interfaces/formats.md#tabseparatedwithnames)
-- [TSKV](../../interfaces/formats.md#tskv)
-
-Possible values:
-
-- 0 — Disabled.
-- 1 — Enabled.
-
-Default value: 0.
-
-## input_format_import_nested_json {#settings-input_format_import_nested_json}
-
-Enables or disables the insertion of JSON data with nested objects.
-
-Supported formats:
-
-- [JSONEachRow](../../interfaces/formats.md#jsoneachrow)
-
-Possible values:
-
-- 0 — Disabled.
-- 1 — Enabled.
-
-Default value: 0.
-
-See also:
-
-- [Usage of Nested Structures](../../interfaces/formats.md#jsoneachrow-nested) with the `JSONEachRow` format.
-
-## input_format_with_names_use_header {#settings-input-format-with-names-use-header}
-
-Enables or disables checking the column order when inserting data.
-
-To improve insert performance, we recommend disabling this check if you are sure that the column order of the input data is the same as in the target table.
-
-Supported formats:
-
-- [CSVWithNames](../../interfaces/formats.md#csvwithnames)
-- [TabSeparatedWithNames](../../interfaces/formats.md#tabseparatedwithnames)
-
-Possible values:
-
-- 0 — Disabled.
-- 1 — Enabled.
-
-Default value: 1.
-
-## date_time_input_format {#settings-date_time_input_format}
-
-Allows choosing a parser of the text representation of date and time.
-
-The setting does not apply to [date and time functions](../../sql-reference/functions/date-time-functions.md).
-
-Possible values:
-
-- `'best_effort'` — Enables extended parsing.
-
-    ClickHouse can parse the basic `YYYY-MM-DD HH:MM:SS` format and all [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) date and time formats. For example, `'2018-06-08T01:02:03.000Z'`.
-
-- `'basic'` — Use basic parser.
-
-    ClickHouse can parse only the basic `YYYY-MM-DD HH:MM:SS` format. For example, `'2019-08-20 10:18:56'`.
-
-Default value: `'basic'`.
-
-See also:
-
-- [DateTime data type.](../../sql-reference/data-types/datetime.md)
-- [Functions for working with dates and times.](../../sql-reference/functions/date-time-functions.md)
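-
-For example (a sketch; `dt_demo` is a hypothetical table), the extended parser accepts an ISO 8601 value that the basic parser would reject:
-
-``` sql
-CREATE TABLE dt_demo (t DateTime) ENGINE = Memory;
-
-SET date_time_input_format = 'best_effort';
-INSERT INTO dt_demo FORMAT CSV
-"2018-06-08T01:02:03.000Z"
-```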
-
-## join_default_strictness {#settings-join_default_strictness}
-
-Sets the default strictness for [JOIN clauses](../../sql-reference/statements/select/join.md#select-join).
-
-Possible values:
-
-- `ALL` — If the right table has several matching rows, ClickHouse creates a [Cartesian product](https://en.wikipedia.org/wiki/Cartesian_product) from matching rows. This is the normal `JOIN` behaviour from standard SQL.
-- `ANY` — If the right table has several matching rows, only the first one found is joined. If the right table has only one matching row, the results of `ANY` and `ALL` are the same.
-- `ASOF` — For joining sequences with an uncertain match.
-- `Empty string` — If `ALL` or `ANY` is not specified in the query, ClickHouse throws an exception.
-
-Default value: `ALL`.
-
-## join_any_take_last_row {#settings-join_any_take_last_row}
-
-Changes the behaviour of join operations with `ANY` strictness.
-
-!!! warning "Attention"
-    This setting applies only to `JOIN` operations with [Join](../../engines/table-engines/special/join.md) engine tables.
-
-Possible values:
-
-- 0 — If the right table has more than one matching row, only the first one found is joined.
-- 1 — If the right table has more than one matching row, only the last one found is joined.
-
-Default value: 0.
-
-See also:
-
-- [JOIN clause](../../sql-reference/statements/select/join.md#select-join)
-- [Join table engine](../../engines/table-engines/special/join.md)
-- [join_default_strictness](#settings-join_default_strictness)
-
-## join_use_nulls {#join_use_nulls}
-
-Sets the type of [JOIN](../../sql-reference/statements/select/join.md) behaviour. When merging tables, empty cells may appear. ClickHouse fills them differently based on this setting.
-
-Possible values:
-
-- 0 — The empty cells are filled with the default value of the corresponding field type.
-- 1 — `JOIN` behaves the same way as in standard SQL. The type of the corresponding field is converted to [Nullable](../../sql-reference/data-types/nullable.md#data_type-nullable), and empty cells are filled with [NULL](../../sql-reference/syntax.md).
-
-Default value: 0.
-
-## max_block_size {#setting-max_block_size}
-
-In ClickHouse, data is processed by blocks (sets of column parts). The internal processing cycles for a single block are efficient enough, but there are noticeable expenditures on each block. The `max_block_size` setting is a recommendation for what size of block (in a count of rows) to load from tables. The block size shouldn't be too small, so that the expenditures on each block are still noticeable, but not too large so that the query with LIMIT that is completed after the first block is processed quickly. The goal is to avoid consuming too much memory when extracting a large number of columns in multiple threads and to preserve at least some cache locality.
-
-Default value: 65,536.
-
-Blocks of the size of `max_block_size` are not always loaded from the table. If it is obvious that less data needs to be retrieved, a smaller block is processed.
-
-## preferred_block_size_bytes {#preferred-block-size-bytes}
-
-Used for the same purpose as `max_block_size`, but it sets the recommended block size in bytes by adapting it to the number of rows in the block.
-However, the block size cannot be more than `max_block_size` rows.
-By default: 1,000,000. It only works when reading from MergeTree engines.
-
-## merge_tree_min_rows_for_concurrent_read {#setting-merge-tree-min-rows-for-concurrent-read}
-
-If the number of rows to be read from a file of a [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) table exceeds `merge_tree_min_rows_for_concurrent_read`, then ClickHouse tries to perform a concurrent reading from this file on several threads.
-
-Possible values:
-
-- Any positive integer.
-
-Default value: 163840.
-
-## merge_tree_min_bytes_for_concurrent_read {#setting-merge-tree-min-bytes-for-concurrent-read}
-
-If the number of bytes to read from one file of a [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md)-engine table exceeds `merge_tree_min_bytes_for_concurrent_read`, then ClickHouse tries to concurrently read from this file in several threads.
-
-Possible value:
-
-- Any positive integer.
-
-Default value: 251658240.
-
-## merge_tree_min_rows_for_seek {#setting-merge-tree-min-rows-for-seek}
-
-If the distance between two data blocks to be read in one file is less than `merge_tree_min_rows_for_seek` rows, then ClickHouse does not seek through the file but reads the data sequentially.
-
-Possible values:
-
-- Any positive integer.
-
-Default value: 0.
-
-## merge_tree_min_bytes_for_seek {#setting-merge-tree-min-bytes-for-seek}
-
-If the distance between two data blocks to be read in one file is less than `merge_tree_min_bytes_for_seek` bytes, then ClickHouse sequentially reads a range of the file that contains both blocks, thus avoiding the extra seek.
-
-Possible values:
-
-- Any positive integer.
-
-Default value: 0.
-
-## merge_tree_coarse_index_granularity {#setting-merge-tree-coarse-index-granularity}
-
-When searching for data, ClickHouse checks the data marks in the index file. If ClickHouse finds that the required keys are in some range, it divides this range into `merge_tree_coarse_index_granularity` subranges and searches the required keys there recursively.
-
-Possible values:
-
-- Any positive even integer.
-
-Default value: 8.
-
-## merge_tree_max_rows_to_use_cache {#setting-merge-tree-max-rows-to-use-cache}
-
-If ClickHouse should read more than `merge_tree_max_rows_to_use_cache` rows in one query, it doesn't use the cache of uncompressed blocks.
-
-The cache of uncompressed blocks stores data extracted for queries. ClickHouse uses this cache to speed up responses to repeated small queries. This setting protects the cache from trashing by queries that read a large amount of data. The [uncompressed_cache_size](../server-configuration-parameters/settings.md#server-settings-uncompressed_cache_size) server setting defines the size of the cache of uncompressed blocks.
-
-Possible values:
-
-- Any positive integer.
-
-Default value: 128 ✕ 8192.
-
-## merge_tree_max_bytes_to_use_cache {#setting-merge-tree-max-bytes-to-use-cache}
-
-If ClickHouse should read more than `merge_tree_max_bytes_to_use_cache` bytes in one query, it doesn't use the cache of uncompressed blocks.
-
-The cache of uncompressed blocks stores data extracted for queries. ClickHouse uses this cache to speed up responses to repeated small queries. This setting protects the cache from trashing by queries that read a large amount of data. The [uncompressed_cache_size](../server-configuration-parameters/settings.md#server-settings-uncompressed_cache_size) server setting defines the size of the cache of uncompressed blocks.
-
-Possible value:
-
-- Any positive integer.
-
-Default value: 2013265920.
-
-## min_bytes_to_use_direct_io {#settings-min-bytes-to-use-direct-io}
-
-The minimum data volume required for using direct I/O access to the storage disk.
-
-ClickHouse uses this setting when reading data from tables. If the total storage volume of all the data to be read exceeds `min_bytes_to_use_direct_io`, then ClickHouse reads the data from the storage disk with the `O_DIRECT` option.
-
-Possible values:
-
-- 0 — Direct I/O is disabled.
-- Positive integer.
-
-Default value: 0.
-
-## log_queries {#settings-log-queries}
-
-Setting up query logging.
-
-Queries sent to ClickHouse with this setup are logged according to the rules in the [query_log](../server-configuration-parameters/settings.md#server_configuration_parameters-query-log) server configuration parameter.
-
-Example:
-
-``` text
-log_queries=1
-```
-
-## log_queries_min_type {#settings-log-queries-min-type}
-
-`query_log` minimal type to log.
-
-Possible values:
-- `QUERY_START` (`=1`)
-- `QUERY_FINISH` (`=2`)
-- `EXCEPTION_BEFORE_START` (`=3`)
-- `EXCEPTION_WHILE_PROCESSING` (`=4`)
-
-Default value: `QUERY_START`.
-
-Can be used to limit which entries go to `query_log`; say you are interested only in errors, then you can use `EXCEPTION_WHILE_PROCESSING`:
-
-``` text
-log_queries_min_type='EXCEPTION_WHILE_PROCESSING'
-```
-
-## log_query_threads {#settings-log-query-threads}
-
-Setting up query threads logging.
-
-Query threads run by ClickHouse with this setup are logged according to the rules in the [query_thread_log](../server-configuration-parameters/settings.md#server_configuration_parameters-query-thread-log) server configuration parameter.
-
-Example:
-
-``` text
-log_query_threads=1
-```
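-
-A hedged sketch of checking what actually lands in the log (the `system.query_log` table is created by the server once logging is active):
-
-``` sql
-SET log_queries = 1, log_queries_min_type = 'QUERY_FINISH';
-SELECT 1;
-
-SYSTEM FLUSH LOGS;                       -- flush the in-memory log buffer
-SELECT type, query
-FROM system.query_log
-ORDER BY event_time DESC
-LIMIT 5;
-```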
-
-## max_insert_block_size {#settings-max_insert_block_size}
-
-The size of blocks to form for insertion into a table.
-This setting only applies in cases when the server forms the blocks.
-For example, for an INSERT via the HTTP interface, the server parses the data format and forms blocks of the specified size.
-But when using clickhouse-client, the client parses the data itself, and the ‘max_insert_block_size’ setting on the server doesn't affect the size of the inserted blocks.
-The setting also doesn't have a purpose when using INSERT SELECT, since data is inserted using the same blocks that are formed after SELECT.
-
-Default value: 1,048,576.
-
-The default is slightly more than `max_block_size`. The reason for this is that certain table engines (`*MergeTree`) form a data part on the disk for each inserted block, which is a fairly big entity. Similarly, `*MergeTree` tables sort data during insertion, and a large enough block size allows sorting more data in RAM.
-
-## min_insert_block_size_rows {#min-insert-block-size-rows}
-
-Sets the minimum number of rows in the block which can be inserted into a table by an `INSERT` query. Smaller-sized blocks are squashed into bigger ones.
-
-Possible values:
-
-- Positive integer.
-- 0 — Squashing disabled.
-
-Default value: 1048576.
-
-## min_insert_block_size_bytes {#min-insert-block-size-bytes}
-
-Sets the minimum number of bytes in the block which can be inserted into a table by an `INSERT` query. Smaller-sized blocks are squashed into bigger ones.
-
-Possible values:
-
-- Positive integer.
-- 0 — Squashing disabled.
-
-Default value: 268435456.
-
-## max_replica_delay_for_distributed_queries {#settings-max_replica_delay_for_distributed_queries}
-
-Disables lagging replicas for distributed queries. See [Replication](../../engines/table-engines/mergetree-family/replication.md).
-
-Sets the time in seconds. If a replica lags more than the set value, this replica is not used.
-
-Default value: 300.
-
-Used when performing `SELECT` from a distributed table that points to replicated tables.
-
-## max_threads {#settings-max_threads}
-
-The maximum number of query processing threads, excluding threads for retrieving data from remote servers (see the ‘max_distributed_connections’ parameter).
-
-This parameter applies to threads that perform the same stages of the query processing pipeline in parallel.
-For example, when reading from a table, if it is possible to evaluate expressions with functions, filter with WHERE and pre-aggregate for GROUP BY in parallel using at least ‘max_threads’ number of threads, then ‘max_threads’ are used.
-
-Default value: the number of physical CPU cores.
-
-If less than one SELECT query is normally run on a server at a time, set this parameter to a value slightly less than the actual number of processor cores.
-
-For queries that are completed quickly because of a LIMIT, you can set a lower ‘max_threads’. For example, if the necessary number of entries are located in every block and max_threads = 8, then 8 blocks are retrieved, although it would have been enough to read just one.
-
-The smaller the `max_threads` value, the less memory is consumed.
-
-## max_insert_threads {#settings-max-insert-threads}
-
-The maximum number of threads to execute the `INSERT SELECT` query.
-
-Possible values:
-
-- 0 (or 1) — `INSERT SELECT` without parallel execution.
-- Positive integer. Bigger than 1.
-
-Default value: 0.
-
-Parallel `INSERT SELECT` has an effect only if the `SELECT` part is executed in parallel, see the [max_threads](#settings-max_threads) setting.
-Higher values will lead to higher memory usage.
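-
-For instance, a minimal sketch (`src` and `dst` stand for any two tables with the same structure) of a parallelized `INSERT SELECT`:
-
-``` sql
-SET max_threads = 8;           -- parallelize the SELECT part
-SET max_insert_threads = 8;    -- parallelize the INSERT part
-
-INSERT INTO dst SELECT * FROM src;
-```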
-
-## max_compress_block_size {#max-compress-block-size}
-
-The maximum size of blocks of uncompressed data before compressing for writing to a table. By default, 1,048,576 (1 MiB). If the size is reduced, the compression rate is significantly reduced, the compression and decompression speed increases slightly due to cache locality, and memory consumption is reduced. There usually isn't any reason to change this setting.
-
-Don't confuse blocks for compression (a chunk of memory consisting of bytes) with blocks for query processing (a set of rows from a table).
-
-## min_compress_block_size {#min-compress-block-size}
-
-For [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) tables. In order to reduce latency when processing queries, a block is compressed when writing the next mark if its size is at least ‘min_compress_block_size’. By default, 65,536.
-
-The actual size of the block, if the uncompressed data is less than ‘max_compress_block_size’, is no less than this value and no less than the volume of data for one mark.
-
-Let's look at an example. Assume that ‘index_granularity’ was set to 8192 during table creation.
-
-We are writing a UInt32-type column (4 bytes per value). When writing 8192 rows, the total will be 32 KB of data. Since min_compress_block_size = 65,536, a compressed block will be formed for every two marks.
-
-We are writing a URL column with the String type (average size of 60 bytes per value). When writing 8192 rows, the average will be slightly less than 500 KB of data. Since this is more than 65,536, a compressed block will be formed for each mark. In this case, when reading data from the disk in the range of a single mark, extra data won't be decompressed.
-
-There usually isn't any reason to change this setting.
-
-## max_query_size {#settings-max_query_size}
-
-The maximum part of a query that can be taken to RAM for parsing with the SQL parser.
-The INSERT query also contains data for INSERT that is processed by a separate stream parser (that consumes O(1) RAM), which is not included in this restriction.
-
-Default value: 256 KiB.
-
-## interactive_delay {#interactive-delay}
-
-The interval in microseconds for checking whether request execution has been cancelled and sending the progress.
-
-Default value: 100,000 (checks for cancelling and sends the progress ten times per second).
-
-## connect_timeout, receive_timeout, send_timeout {#connect-timeout-receive-timeout-send-timeout}
-
-Timeouts in seconds on the socket used for communicating with the client.
-
-Default value: 10, 300, 300.
-
-## cancel_http_readonly_queries_on_client_close {#cancel-http-readonly-queries-on-client-close}
-
-Cancels HTTP read-only queries (e.g. SELECT) when a client closes the connection without waiting for the response.
-
-Default value: 0
-
-## poll_interval {#poll-interval}
-
-Lock in a wait loop for the specified number of seconds.
-
-Default value: 10.
-
-## max_distributed_connections {#max-distributed-connections}
-
-The maximum number of simultaneous connections with remote servers for distributed processing of a single query to a single Distributed table. We recommend setting a value no less than the number of servers in the cluster.
-
-Default value: 1024.
-
-The following parameters are only used when creating Distributed tables (and when launching a server), so there is no reason to change them at runtime.
-
-## distributed_connections_pool_size {#distributed-connections-pool-size}
-
-The maximum number of simultaneous connections with remote servers for distributed processing of all queries to a single Distributed table. We recommend setting a value no less than the number of servers in the cluster.
-
-Default value: 1024.
-
-## connect_timeout_with_failover_ms {#connect-timeout-with-failover-ms}
-
-The timeout in milliseconds for connecting to a remote server for a Distributed table engine, if the ‘shard’ and ‘replica’ sections are used in the cluster definition.
-If unsuccessful, several attempts are made to connect to various replicas.
-
-Default value: 50.
-
-## connections_with_failover_max_tries {#connections-with-failover-max-tries}
-
-The maximum number of connection attempts with each replica for the Distributed table engine.
-
-Default value: 3.
-
-## extremes {#extremes}
-
-Whether to count extreme values (the minimums and maximums in columns of a query result). Accepts 0 or 1. By default, 0 (disabled).
-For more information, see the section “Extreme values”.
-
-## use_uncompressed_cache {#setting-use_uncompressed_cache}
-
-Whether to use a cache of uncompressed blocks. Accepts 0 or 1. By default, 0 (disabled).
-Using the uncompressed cache (only for tables in the MergeTree family) can significantly reduce latency and increase throughput when working with a large number of short queries. Enable this setting for users who send frequent short requests. Also pay attention to the [uncompressed_cache_size](../server-configuration-parameters/settings.md#server-settings-uncompressed_cache_size) configuration parameter (only set in the config file) – the size of uncompressed cache blocks. By default, it is 8 GiB. The uncompressed cache is filled in as needed and the least-used data is automatically deleted.
-
-For queries that read at least a somewhat large volume of data (one million rows or more), the uncompressed cache is disabled automatically to save space for truly small queries. This means that you can keep the ‘use_uncompressed_cache’ setting always set to 1.
-
-## replace_running_query {#replace-running-query}
-
-When using the HTTP interface, the ‘query_id’ parameter can be passed. This is any string that serves as the query identifier.
-If a query from the same user with the same ‘query_id’ already exists at this time, the behaviour depends on the ‘replace_running_query’ parameter.
-
-`0` (default) – Throw an exception (don't allow the query to run if a query with the same ‘query_id’ is already running).
-
-`1` – Cancel the old query and start running the new one.
-
-Yandex.Metrica uses this parameter set to 1 for implementing suggestions for segmentation conditions. After entering the next character, if the old query hasn't finished yet, it should be cancelled.
-
-## stream_flush_interval_ms {#stream-flush-interval-ms}
-
-Works for tables with streaming in the case of a timeout, or when a thread generates [max_insert_block_size](#settings-max_insert_block_size) rows.
-
-The default value is 7500.
-
-The smaller the value, the more often data is flushed into the table. Setting the value too low leads to poor performance.
-
-## load_balancing {#settings-load_balancing}
-
-Specifies the algorithm of replicas selection that is used for distributed query processing.
-
-ClickHouse supports the following algorithms of choosing replicas:
-
-- [Random](#load_balancing-random) (by default)
-- [Nearest hostname](#load_balancing-nearest_hostname)
-- [In order](#load_balancing-in_order)
-- [First or random](#load_balancing-first_or_random)
-
-### Random (by default) {#load_balancing-random}
-
-``` sql
-load_balancing = random
-```
-
-The number of errors is counted for each replica. The query is sent to the replica with the fewest errors, and if there are several of these, to any one of them.
-Disadvantages: Server proximity is not accounted for; if the replicas have different data, you will also get different data.
-
-### Nearest hostname {#load_balancing-nearest_hostname}
-
-``` sql
-load_balancing = nearest_hostname
-```
-
-The number of errors is counted for each replica. Every 5 minutes, the number of errors is integrally divided by 2. Thus, the number of errors is calculated for a recent time with exponential smoothing. If there is one replica with a minimal number of errors (i.e. errors occurred recently on the other replicas), the query is sent to it. If there are multiple replicas with the same minimal number of errors, the query is sent to the replica with a hostname that is most similar to the server's hostname in the config file (for the number of different characters in identical positions, up to the minimum length of both hostnames).
-
-For instance, example01-01-1 and example01-01-2.yandex.ru are different in one position, while example01-01-1 and example01-02-2 differ in two places.
-This method might seem primitive, but it doesn't require external data about network topology, and it doesn't compare IP addresses, which would be complicated for our IPv6 addresses.
-
-Thus, if there are equivalent replicas, the closest one by name is preferred.
-We can also assume that when sending a query to the same server, in the absence of failures, a distributed query will also go to the same servers. So even if different data is placed on the replicas, the query will return mostly the same results.
-
-### In order {#load_balancing-in_order}
-
-``` sql
-load_balancing = in_order
-```
-
-Replicas with the same number of errors are accessed in the same order as they are specified in the configuration.
-This method is appropriate when you know exactly which replica is preferable.
-
-### First or random {#load_balancing-first_or_random}
-
-``` sql
-load_balancing = first_or_random
-```
-
-This algorithm chooses the first replica in the set or a random replica if the first is unavailable. It's effective in cross-replication topology setups, but useless in other configurations.
-
-The `first_or_random` algorithm solves the problem of the `in_order` algorithm. With `in_order`, if one replica goes down, the next one gets a double load while the remaining replicas handle the usual amount of traffic. When using the `first_or_random` algorithm, the load is evenly distributed among replicas that are still available.
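-
-Any of the listed values can be set per session or per query, for example:
-
-``` sql
-SET load_balancing = 'nearest_hostname';  -- prefer replicas with similar hostnames
-```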
-
-## prefer_localhost_replica {#settings-prefer-localhost-replica}
-
-Enables/disables preferable using of the localhost replica when processing distributed queries.
-
-Possible values:
-
-- 1 — ClickHouse always sends a query to the localhost replica if it exists.
-- 0 — ClickHouse uses the balancing strategy specified by the [load_balancing](#settings-load_balancing) setting.
-
-Default value: 1.
-
-!!! warning "Warning"
-    Disable this setting if you use [max_parallel_replicas](#settings-max_parallel_replicas).
-
-## totals_mode {#totals-mode}
-
-How to calculate TOTALS when HAVING is present, as well as when max_rows_to_group_by and group_by_overflow_mode = ‘any’ are present.
-See the section “WITH TOTALS modifier”.
-
-## totals_auto_threshold {#totals-auto-threshold}
-
-The threshold for `totals_mode = 'auto'`.
-See the section “WITH TOTALS modifier”.
-
-## max_parallel_replicas {#settings-max_parallel_replicas}
-
-The maximum number of replicas for each shard when executing a query.
-For consistency (to get different parts of the same data split), this option only works when the sampling key is set.
-Replica lag is not controlled.
-
-## compile {#compile}
-
-Enable compilation of queries. By default, 0 (disabled).
-
-The compilation is only used for part of the query-processing pipeline: for the first stage of aggregation (GROUP BY).
-If this portion of the pipeline was compiled, the query may run faster due to the deployment of short cycles and inlining aggregate function calls. The maximum performance improvement (up to four times faster in rare cases) is seen for queries with multiple simple aggregate functions. Typically, the performance gain is insignificant. In very rare cases, it may slow down query execution.
-
-## min_count_to_compile {#min-count-to-compile}
-
-How many times to potentially use a compiled chunk of code before running compilation. By default, 3.
-For testing, the value can be set to 0: compilation runs synchronously and the query waits for the end of the compilation process before continuing execution. For all other cases, use values starting with 1. Compilation normally takes about 5-10 seconds.
-If the value is 1 or more, compilation occurs asynchronously in a separate thread. The result will be used as soon as it is ready, including queries that are currently running.
-
-Compiled code is required for each different combination of aggregate functions used in the query and the type of keys in the GROUP BY clause.
-The results of the compilation are saved in the build directory in the form of .so files. There is no restriction on the number of compilation results since they don't use very much space. Old results will be used after server restarts, except in the case of a server upgrade – in this case, the old results are deleted.
-
-## output_format_json_quote_64bit_integers {#session_settings-output_format_json_quote_64bit_integers}
-
-If the value is true, integers appear in quotes when using JSON\* Int64 and UInt64 formats (for compatibility with most JavaScript implementations); otherwise, integers are output without the quotes.
-
-## format_csv_delimiter {#settings-format_csv_delimiter}
-
-The character interpreted as a delimiter in the CSV data. By default, the delimiter is `,`.
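-
-A short sketch (the table is hypothetical) of reading semicolon-separated input:
-
-``` sql
-CREATE TABLE csv_demo (a UInt32, b String) ENGINE = Memory;
-
-SET format_csv_delimiter = ';';
-INSERT INTO csv_demo FORMAT CSV
-1;"one"
-2;"two"
-```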
-## input_format_csv_unquoted_null_literal_as_null {#settings-input_format_csv_unquoted_null_literal_as_null}
-
-For CSV input format, enables or disables parsing of unquoted `NULL` as literal (synonym for `\N`).
-
-## output_format_csv_crlf_end_of_line {#settings-output-format-csv-crlf-end-of-line}
-
-Use DOS/Windows-style line separator (CRLF) in CSV instead of Unix style (LF).
-
-## output_format_tsv_crlf_end_of_line {#settings-output-format-tsv-crlf-end-of-line}
-
-Use DOS/Windows-style line separator (CRLF) in TSV instead of Unix style (LF).
-
-## insert_quorum {#settings-insert_quorum}
-
-Enables quorum writes.
-
-- If `insert_quorum < 2`, the quorum writes are disabled.
-- If `insert_quorum >= 2`, the quorum writes are enabled.
-
-Default value: 0.
-
-Quorum writes
-
-`INSERT` succeeds only when ClickHouse manages to correctly write data to the `insert_quorum` of replicas during the `insert_quorum_timeout`. If for any reason the number of replicas with successful writes does not reach the `insert_quorum`, the write is considered failed and ClickHouse will delete the inserted block from all the replicas where data has already been written.
-
-All the replicas in the quorum are consistent, i.e., they contain data from all previous `INSERT` queries. The `INSERT` sequence is linearized.
-
-When reading the data written from the `insert_quorum`, you can use the [select_sequential_consistency](#settings-select_sequential_consistency) option.
-
-ClickHouse generates an exception
-
-- If the number of available replicas at the time of the query is less than the `insert_quorum`.
-- At an attempt to write data when the previous block has not yet been inserted in the `insert_quorum` of replicas. This situation may occur if the user tries to perform an `INSERT` before the previous one with the `insert_quorum` is completed.
-
-See also:
-
-- [insert_quorum_timeout](#settings-insert_quorum_timeout)
-- [select_sequential_consistency](#settings-select_sequential_consistency)
-
-## insert_quorum_timeout {#settings-insert_quorum_timeout}
-
-Write to quorum timeout in seconds. If the timeout has passed and no write has taken place yet, ClickHouse will generate an exception and the client must repeat the query to write the same block to the same or any other replica.
-
-Default value: 60 seconds.
-
-See also:
-
-- [insert_quorum](#settings-insert_quorum)
-- [select_sequential_consistency](#settings-select_sequential_consistency)
-
-## select_sequential_consistency {#settings-select_sequential_consistency}
-
-Enables or disables sequential consistency for `SELECT` queries:
-
-Possible values:
-
-- 0 — Disabled.
-- 1 — Enabled.
-
-Default value: 0.
-
-Usage
-
-When sequential consistency is enabled, ClickHouse allows the client to execute the `SELECT` query only for those replicas that contain data from all previous `INSERT` queries executed with `insert_quorum`. If the client refers to a partial replica, ClickHouse will generate an exception. The SELECT query will not include data that has not yet been written to the quorum of replicas. A combined sketch follows below.
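-
-As a sketch of how these settings work together (the table `replicated_table` is hypothetical), a quorum write followed by a sequentially consistent read might look like this:
-
-``` sql
--- Require an acknowledgment from at least 2 replicas before INSERT succeeds.
-SET insert_quorum = 2;
-INSERT INTO replicated_table VALUES (1, 'a');
-
--- Only read from replicas that already contain all quorum inserts.
-SET select_sequential_consistency = 1;
-SELECT count() FROM replicated_table;
-```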
-
-See also:
-
-- [insert_quorum](#settings-insert_quorum)
-- [insert_quorum_timeout](#settings-insert_quorum_timeout)
-
-## insert_deduplicate {#settings-insert-deduplicate}
-
-Enables or disables block deduplication of `INSERT` (for Replicated\* tables).
-
-Possible values:
-
-- 0 — Disabled.
-- 1 — Enabled.
-
-Default value: 1.
-
-By default, blocks inserted into replicated tables by the `INSERT` statement are deduplicated (see [Data Replication](../../engines/table-engines/mergetree-family/replication.md)).
-
-## deduplicate_blocks_in_dependent_materialized_views {#settings-deduplicate-blocks-in-dependent-materialized-views}
-
-Enables or disables the deduplication check for materialized views that receive data from Replicated\* tables.
-
-Possible values:
-
-- 0 — Disabled.
-- 1 — Enabled.
-
-Default value: 0.
-
-Usage
-
-By default, deduplication is not performed for materialized views but is done upstream, in the source table.
-If an INSERTed block is skipped due to deduplication in the source table, there will be no insertion into attached materialized views. This behavior exists to enable insertion of highly aggregated data into materialized views, for cases where inserted blocks are the same after materialized view aggregation but derived from different INSERTs into the source table.
-At the same time, this behavior “breaks” `INSERT` idempotency. If an `INSERT` into the main table was successful and `INSERT` into a materialized view failed (e.g. because of communication failure with ZooKeeper) a client will get an error and can retry the operation. However, the materialized view won't receive the second insert because it will be discarded by deduplication in the main (source) table. The setting `deduplicate_blocks_in_dependent_materialized_views` allows changing this behavior. On retry, a materialized view will receive the repeat insert and will perform the deduplication check by itself,
-ignoring the check result for the source table, and will insert rows lost because of the first failure.
-
-## max_network_bytes {#settings-max-network-bytes}
-
-Limits the data volume (in bytes) that is received or transmitted over the network when executing a query. This setting applies to every individual query.
-
-Possible values:
-
-- Positive integer.
-- 0 — Data volume control is disabled.
-
-Default value: 0.
-
-## max_network_bandwidth {#settings-max-network-bandwidth}
-
-Limits the speed of the data exchange over the network in bytes per second. This setting applies to every query.
-
-Possible values:
-
-- Positive integer.
-- 0 — Bandwidth control is disabled.
-
-Default value: 0.
-
-## max_network_bandwidth_for_user {#settings-max-network-bandwidth-for-user}
-
-Limits the speed of the data exchange over the network in bytes per second. This setting applies to all concurrently running queries performed by a single user.
-
-Possible values:
-
-- Positive integer.
-- 0 — Control of the data speed is disabled.
-
-Default value: 0.
-
-## max_network_bandwidth_for_all_users {#settings-max-network-bandwidth-for-all-users}
-
-Limits the speed at which data is exchanged over the network in bytes per second. This setting applies to all concurrently running queries on the server.
-
-Possible values:
-
-- Positive integer.
-- 0 — Control of the data speed is disabled.
-
-Default value: 0.
-
-A combined sketch follows below.
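-
-As an illustration only, the per-query limits above can be set in a session; the numeric values here are arbitrary:
-
-``` sql
--- Cap a single query at ~100 MB of network traffic
--- and throttle each query in this session to ~10 MB/s.
-SET max_network_bytes = 100000000;
-SET max_network_bandwidth = 10000000;
-```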
-## count_distinct_implementation {#settings-count_distinct_implementation}
-
-Specifies which of the `uniq*` functions should be used to perform the [COUNT(DISTINCT …)](../../sql-reference/aggregate-functions/reference.md#agg_function-count) construction.
-
-Possible values:
-
-- [uniq](../../sql-reference/aggregate-functions/reference.md#agg_function-uniq)
-- [uniqCombined](../../sql-reference/aggregate-functions/reference.md#agg_function-uniqcombined)
-- [uniqCombined64](../../sql-reference/aggregate-functions/reference.md#agg_function-uniqcombined64)
-- [uniqHLL12](../../sql-reference/aggregate-functions/reference.md#agg_function-uniqhll12)
-- [uniqExact](../../sql-reference/aggregate-functions/reference.md#agg_function-uniqexact)
-
-Default value: `uniqExact`.
-
-## skip_unavailable_shards {#settings-skip_unavailable_shards}
-
-Enables or disables silently skipping of unavailable shards.
-
-A shard is considered unavailable if all its replicas are unavailable. A replica is unavailable in the following cases:
-
-- ClickHouse can't connect to the replica for any reason.
-
-    When connecting to a replica, ClickHouse performs several attempts. If all these attempts fail, the replica is considered unavailable.
-
-- The replica can't be resolved through DNS.
-
-    If the replica's hostname can't be resolved through DNS, it can indicate the following situations:
-
-    - The replica's host has no DNS record. It can occur in systems with dynamic DNS, for example, [Kubernetes](https://kubernetes.io), where nodes can be unresolvable during downtime, and this is not an error.
-
-    - Configuration error. The ClickHouse config file contains a wrong hostname.
-
-Possible values:
-
-- 1 — skipping enabled.
-
-    If a shard is unavailable, ClickHouse returns a result based on partial data and does not report node availability issues.
-
-- 0 — skipping disabled.
-
-    If a shard is unavailable, ClickHouse throws an exception.
-
-Default value: 0.
-
-## optimize_skip_unused_shards {#settings-optimize_skip_unused_shards}
-
-Enables or disables skipping of unused shards for SELECT queries that have the sharding key condition in PREWHERE/WHERE (assumes that the data is distributed by the sharding key, otherwise does nothing).
-
-Default value: 0
-
-## force_optimize_skip_unused_shards {#settings-force_optimize_skip_unused_shards}
-
-Enables or disables query execution if [`optimize_skip_unused_shards`](#settings-optimize_skip_unused_shards) cannot skip unused shards. If skipping is not possible and the setting is enabled, an exception will be thrown.
-
-Possible values:
-
-- 0 - Disabled (does not throw)
-- 1 - Disable query execution only if the table has a sharding key
-- 2 - Disable query execution regardless of whether a sharding key is defined for the table
-
-Default value: 0
-
-A combined sketch follows below.
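-
-As an illustration only (the Distributed table `dist_hits`, sharded by `user_id`, is hypothetical):
-
-``` sql
--- Prune shards whose sharding-key values cannot match the condition,
--- and fail loudly if pruning turns out to be impossible.
-SET optimize_skip_unused_shards = 1;
-SET force_optimize_skip_unused_shards = 1;
-
-SELECT count() FROM dist_hits WHERE user_id = 42;
-```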
-
-## optimize_throw_if_noop {#setting-optimize_throw_if_noop}
-
-Enables or disables throwing an exception if an [OPTIMIZE](../../sql-reference/statements/misc.md#misc_operations-optimize) query didn't perform a merge.
-
-By default, `OPTIMIZE` returns successfully even if it didn't do anything. This setting lets you differentiate these situations and get the reason in an exception message.
-
-Possible values:
-
-- 1 — Throwing an exception is enabled.
-- 0 — Throwing an exception is disabled.
-
-Default value: 0.
-
-## distributed_replica_error_half_life {#settings-distributed_replica_error_half_life}
-
-- Type: seconds
-- Default value: 60 seconds
-
-Controls how fast errors in distributed tables are zeroed. If a replica is unavailable for some time, accumulates 5 errors, and distributed_replica_error_half_life is set to 1 second, then the replica is considered normal 3 seconds after the last error.
-
-See also:
-
-- [Distributed table engine](../../engines/table-engines/special/distributed.md)
-- [distributed_replica_error_cap](#settings-distributed_replica_error_cap)
-
-## distributed_replica_error_cap {#settings-distributed_replica_error_cap}
-
-- Type: unsigned int
-- Default value: 1000
-
-The error count of each replica is capped at this value, preventing a single replica from accumulating too many errors.
-
-See also:
-
-- [Distributed table engine](../../engines/table-engines/special/distributed.md)
-- [distributed_replica_error_half_life](#settings-distributed_replica_error_half_life)
-
-## distributed_directory_monitor_sleep_time_ms {#distributed_directory_monitor_sleep_time_ms}
-
-Base interval for the [Distributed](../../engines/table-engines/special/distributed.md) table engine to send data. The actual interval grows exponentially in the event of errors.
-
-Possible values:
-
-- A positive integer number of milliseconds.
-
-Default value: 100 milliseconds.
-
-## distributed_directory_monitor_max_sleep_time_ms {#distributed_directory_monitor_max_sleep_time_ms}
-
-Maximum interval for the [Distributed](../../engines/table-engines/special/distributed.md) table engine to send data. Limits exponential growth of the interval set in the [distributed_directory_monitor_sleep_time_ms](#distributed_directory_monitor_sleep_time_ms) setting.
-
-Possible values:
-
-- A positive integer number of milliseconds.
-
-Default value: 30000 milliseconds (30 seconds).
-
-## distributed_directory_monitor_batch_inserts {#distributed_directory_monitor_batch_inserts}
-
-Enables/disables sending of inserted data in batches.
-
-When batch sending is enabled, the [Distributed](../../engines/table-engines/special/distributed.md) table engine tries to send multiple files of inserted data in one operation instead of sending them separately. Batch sending improves cluster performance by better utilizing server and network resources.
-
-Possible values:
-
-- 1 — Enabled.
-- 0 — Disabled.
-
-Default value: 0.
-
-A usage sketch follows below.
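-
-As an illustration only, the three monitor settings can be tuned together for a session that inserts into a Distributed table; the values are arbitrary:
-
-``` sql
--- Retry sends every 200 ms, back off to at most 10 s, and batch files.
-SET distributed_directory_monitor_sleep_time_ms = 200;
-SET distributed_directory_monitor_max_sleep_time_ms = 10000;
-SET distributed_directory_monitor_batch_inserts = 1;
-```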
-
-## os_thread_priority {#setting-os-thread-priority}
-
-Sets the priority ([nice](https://en.wikipedia.org/wiki/Nice_(Unix))) for threads that execute queries. The OS scheduler considers this priority when choosing the next thread to run on each available CPU core.
-
-!!! warning "Warning"
-    To use this setting, you need to set the `CAP_SYS_NICE` capability. The `clickhouse-server` package sets it up during installation. Some virtual environments don't allow you to set the `CAP_SYS_NICE` capability. In this case, `clickhouse-server` shows a message about it at the start.
-
-Possible values:
-
-- You can set values in the range `[-20, 19]`.
-
-Lower values mean higher priority. Threads with low `nice` priority values are executed more frequently than threads with high values. High values are preferable for long-running non-interactive queries because it allows them to quickly give up resources in favor of short interactive queries when they arrive.
-
-Default value: 0.
-
-## query_profiler_real_time_period_ns {#query_profiler_real_time_period_ns}
-
-Sets the period for a real clock timer of the [query profiler](../../operations/optimizing-performance/sampling-query-profiler.md). The real clock timer counts wall-clock time.
-
-Possible values:
-
-- Positive integer number, in nanoseconds.
-
-    Recommended values:
-
-    - 10000000 (100 times a second) nanoseconds and less for single queries.
-    - 1000000000 (once a second) for cluster-wide profiling.
-
-- 0 for turning off the timer.
-
-Type: [UInt64](../../sql-reference/data-types/int-uint.md).
-
-Default value: 1000000000 nanoseconds (once a second).
-
-See also:
-
-- System table [trace_log](../../operations/system-tables.md#system_tables-trace_log)
-
-## query_profiler_cpu_time_period_ns {#query_profiler_cpu_time_period_ns}
-
-Sets the period for a CPU clock timer of the [query profiler](../../operations/optimizing-performance/sampling-query-profiler.md). This timer counts only CPU time.
-
-Possible values:
-
-- A positive integer number of nanoseconds.
-
-    Recommended values:
-
-    - 10000000 (100 times a second) nanoseconds and more for single queries.
-    - 1000000000 (once a second) for cluster-wide profiling.
-
-- 0 for turning off the timer.
-
-Type: [UInt64](../../sql-reference/data-types/int-uint.md).
-
-Default value: 1000000000 nanoseconds.
-
-See also:
-
-- System table [trace_log](../../operations/system-tables.md#system_tables-trace_log)
-
-## allow_introspection_functions {#settings-allow_introspection_functions}
-
-Enables or disables [introspection functions](../../sql-reference/functions/introspection.md) for query profiling.
-
-Possible values:
-
-- 1 — Introspection functions enabled.
-- 0 — Introspection functions disabled.
-
-Default value: 0.
-
-**See Also**
-
-- [Sampling Query Profiler](../optimizing-performance/sampling-query-profiler.md)
-- System table [trace_log](../../operations/system-tables.md#system_tables-trace_log)
-
-## input_format_parallel_parsing {#input-format-parallel-parsing}
-
-- Type: bool
-- Default value: True
-
-Enable order-preserving parallel parsing of data formats. Supported only for the TSV, TSKV, CSV and JSONEachRow formats. A usage sketch follows below.
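-
-As an illustration only (the table `events` is hypothetical), parallel parsing pairs naturally with the chunk-size setting described next:
-
-``` sql
--- Split input into ~10 MiB chunks and parse them in parallel.
-SET input_format_parallel_parsing = 1;
-SET min_chunk_bytes_for_parallel_parsing = 10485760;
-
--- In clickhouse-client, the TSV data follows from stdin.
-INSERT INTO events FORMAT TSV
-```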
-
-## min_chunk_bytes_for_parallel_parsing {#min-chunk-bytes-for-parallel-parsing}
-
-- Type: unsigned int
-- Default value: 1 MiB
-
-The minimum chunk size in bytes, which each thread will parse in parallel.
-
-## output_format_avro_codec {#settings-output_format_avro_codec}
-
-Sets the compression codec used for output Avro file.
-
-Type: string
-
-Possible values:
-
-- `null` — No compression
-- `deflate` — Compress with Deflate (zlib)
-- `snappy` — Compress with [Snappy](https://google.github.io/snappy/)
-
-Default value: `snappy` (if available) or `deflate`.
-
-## output_format_avro_sync_interval {#settings-output_format_avro_sync_interval}
-
-Sets the minimum data size (in bytes) between synchronization markers for output Avro file.
-
-Type: unsigned int
-
-Possible values: 32 (32 bytes) - 1073741824 (1 GiB)
-
-Default value: 32768 (32 KiB)
-
-## format_avro_schema_registry_url {#settings-format_avro_schema_registry_url}
-
-Sets the Confluent Schema Registry URL to use with the [AvroConfluent](../../interfaces/formats.md#data-format-avro-confluent) format.
-
-Type: URL
-
-Default value: Empty
-
-## background_pool_size {#background_pool_size}
-
-Sets the number of threads performing background operations in table engines (for example, merges in [MergeTree engine](../../engines/table-engines/mergetree-family/index.md) tables). This setting is applied at ClickHouse server start and can't be changed in a user session. By adjusting this setting, you manage CPU and disk load. A smaller pool size utilizes less CPU and disk resources, but background processes advance slower, which might eventually impact query performance.
-
-Possible values:
-
-- Any positive integer.
-
-Default value: 16.
-
-[Original article](https://clickhouse.tech/docs/en/operations/settings/settings/) 
diff --git a/docs/es/operations/system-tables.md b/docs/es/operations/system-tables.md
deleted file mode 100644
index 18e7f7227da..00000000000
--- a/docs/es/operations/system-tables.md
+++ /dev/null
@@ -1,1168 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_priority: 52
-toc_title: System tables
----
-
-# System Tables {#system-tables}
-
-System tables are used for implementing part of the system's functionality, and for providing access to information about how the system is working.
-You can't delete a system table (but you can perform DETACH).
-System tables don't have files with data on the disk or files with metadata. The server creates all the system tables when it starts.
-System tables are read-only.
-They are located in the ‘system’ database.
-
-## system.asynchronous_metrics {#system_tables-asynchronous_metrics}
-
-Contains metrics that are calculated periodically in the background. For example, the amount of RAM in use.
-
-Columns:
-
-- `metric` ([String](../sql-reference/data-types/string.md)) — Metric name.
-- `value` ([Float64](../sql-reference/data-types/float.md)) — Metric value.
-
-**Example**
-
-``` sql
-SELECT * FROM system.asynchronous_metrics LIMIT 10
-```
-
-``` text
-┌─metric──────────────────────────────────┬──────value─┐
-│ jemalloc.background_thread.run_interval │          0 │
-│ jemalloc.background_thread.num_runs     │          0 │
-│ jemalloc.background_thread.num_threads  │          0 │
-│ jemalloc.retained                       │  422551552 │
-│ jemalloc.mapped                         │ 1682989056 │
-│ jemalloc.resident                       │ 1656446976 │
-│ jemalloc.metadata_thp                   │          0 │
-│ jemalloc.metadata                       │   10226856 │
-│ UncompressedCacheCells                  │          0 │
-│ MarkCacheFiles                          │          0 │
-└─────────────────────────────────────────┴────────────┘
-```
-
-**See also**
-
-- [Monitoring](monitoring.md) — Base concepts of ClickHouse monitoring.
-- [system.metrics](#system_tables-metrics) — Contains instantly calculated metrics.
-- [system.events](#system_tables-events) — Contains a number of events that have occurred.
-- [system.metric_log](#system_tables-metric_log) — Contains a history of metrics values from tables `system.metrics` and `system.events`.
-
-## system.clusters {#system-clusters}
-
-Contains information about clusters available in the config file and the servers in them.
-
-Columns:
-
-- `cluster` (String) — The cluster name.
-- `shard_num` (UInt32) — The shard number in the cluster, starting from 1.
-- `shard_weight` (UInt32) — The relative weight of the shard when writing data.
-- `replica_num` (UInt32) — The replica number in the shard, starting from 1.
-- `host_name` (String) — The host name, as specified in the config.
-- `host_address` (String) — The host IP address obtained from DNS.
-- `port` (UInt16) — The port to use for connecting to the server.
-- `user` (String) — The name of the user for connecting to the server.
-- `errors_count` (UInt32) — The number of times this host failed to reach the replica.
-- `estimated_recovery_time` (UInt32) — Seconds left until the replica error count is zeroed and it is considered to be back to normal.
-
-Please note that `errors_count` is updated once per query to the cluster, but `estimated_recovery_time` is recalculated on demand. So there could be a case of non-zero `errors_count` and zero `estimated_recovery_time`; the next query will zero `errors_count` and try to use the replica as if it had no errors.
-
-**See also**
-
-- [Distributed table engine](../engines/table-engines/special/distributed.md)
-- [distributed_replica_error_cap setting](settings/settings.md#settings-distributed_replica_error_cap)
-- [distributed_replica_error_half_life setting](settings/settings.md#settings-distributed_replica_error_half_life)
-
-## system.columns {#system-columns}
-
-Contains information about columns in all the tables.
-
-You can use this table to get information similar to the [DESCRIBE TABLE](../sql-reference/statements/misc.md#misc-describe-table) query, but for multiple tables at once.
-
-The `system.columns` table contains the following columns (the column type is shown in brackets):
-
-- `database` (String) — Database name.
-- `table` (String) — Table name.
-- `name` (String) — Column name.
-- `type` (String) — Column type.
-- `default_kind` (String) — Expression type (`DEFAULT`, `MATERIALIZED`, `ALIAS`) for the default value, or an empty string if it is not defined.
-- `default_expression` (String) — Expression for the default value, or an empty string if it is not defined.
-- `data_compressed_bytes` (UInt64) — The size of compressed data, in bytes.
-- `data_uncompressed_bytes` (UInt64) — The size of decompressed data, in bytes.
-- `marks_bytes` (UInt64) — The size of marks, in bytes.
-- `comment` (String) — Comment on the column, or an empty string if it is not defined.
-- `is_in_partition_key` (UInt8) — Flag that indicates whether the column is in the partition expression.
-- `is_in_sorting_key` (UInt8) — Flag that indicates whether the column is in the sorting key expression.
-- `is_in_primary_key` (UInt8) — Flag that indicates whether the column is in the primary key expression.
-- `is_in_sampling_key` (UInt8) — Flag that indicates whether the column is in the sampling key expression.
-
-## system.contributors {#system-contributors}
-
-Contains information about contributors. All contributors in random order. The order is random at query execution time.
-
-Columns:
-
-- `name` (String) — Contributor (author) name from git log.
-
-**Example**
-
-``` sql
-SELECT * FROM system.contributors LIMIT 10
-```
-
-``` text
-┌─name─────────────┐
-│ Olga Khvostikova │
-│ Max Vetrov       │
-│ LiuYangkuan      │
-│ svladykin        │
-│ zamulla          │
-│ Šimon Podlipský  │
-│ BayoNet          │
-│ Ilya Khomutov    │
-│ Amy Krishnevsky  │
-│ Loud_Scream      │
-└──────────────────┘
-```
-
-To find yourself in the table, use a query:
-
-``` sql
-SELECT * FROM system.contributors WHERE name='Olga Khvostikova'
-```
-
-``` text
-┌─name─────────────┐
-│ Olga Khvostikova │
-└──────────────────┘
-```
-
-## system.databases {#system-databases}
-
-This table contains a single String column called ‘name’ – the name of a database.
-Each database that the server knows about has a corresponding entry in the table.
-This system table is used for implementing the `SHOW DATABASES` query.
-
-## system.detached_parts {#system_tables-detached_parts}
-
-Contains information about detached parts of [MergeTree](../engines/table-engines/mergetree-family/mergetree.md) tables. The `reason` column specifies why the part was detached. For user-detached parts, the reason is empty. Such parts can be attached with the [ALTER TABLE ATTACH PARTITION\|PART](../sql-reference/statements/alter.md#alter_attach-partition) command. For the description of other columns, see [system.parts](#system_tables-parts). If a part name is invalid, values of some columns may be `NULL`. Such parts can be deleted with [ALTER TABLE DROP DETACHED PART](../sql-reference/statements/alter.md#alter_drop-detached).
-
-## system.dictionaries {#system_tables-dictionaries}
-
-Contains information about [external dictionaries](../sql-reference/dictionaries/external-dictionaries/external-dicts.md).
-
-Columns:
-
-- `database` ([String](../sql-reference/data-types/string.md)) — Name of the database containing the dictionary created by DDL query. Empty string for other dictionaries.
-- `name` ([String](../sql-reference/data-types/string.md)) — [Dictionary name](../sql-reference/dictionaries/external-dictionaries/external-dicts-dict.md).
-- `status` ([Enum8](../sql-reference/data-types/enum.md)) — Dictionary status. Possible values:
-    - `NOT_LOADED` — Dictionary was not loaded because it was not used.
-    - `LOADED` — Dictionary loaded successfully.
-    - `FAILED` — Unable to load the dictionary as a result of an error.
-    - `LOADING` — Dictionary is loading now.
-    - `LOADED_AND_RELOADING` — Dictionary is loaded successfully, and is being reloaded right now (frequent reasons: [SYSTEM RELOAD DICTIONARY](../sql-reference/statements/system.md#query_language-system-reload-dictionary) query, timeout, dictionary config has changed).
-    - `FAILED_AND_RELOADING` — Could not load the dictionary as a result of an error and is loading now.
-- `origin` ([String](../sql-reference/data-types/string.md)) — Path to the configuration file that describes the dictionary.
-- `type` ([String](../sql-reference/data-types/string.md)) — Type of a dictionary allocation. [Storing Dictionaries in Memory](../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-layout.md).
-- `key` — [Key type](../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-structure.md#ext_dict_structure-key): Numeric Key ([UInt64](../sql-reference/data-types/int-uint.md#uint-ranges)) or Composite key ([String](../sql-reference/data-types/string.md)) — form “(type 1, type 2, …, type n)”.
-- `attribute.names` ([Array](../sql-reference/data-types/array.md)([String](../sql-reference/data-types/string.md))) — Array of [attribute names](../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-structure.md#ext_dict_structure-attributes) provided by the dictionary.
-- `attribute.types` ([Array](../sql-reference/data-types/array.md)([String](../sql-reference/data-types/string.md))) — Corresponding array of [attribute types](../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-structure.md#ext_dict_structure-attributes) that are provided by the dictionary.
-- `bytes_allocated` ([UInt64](../sql-reference/data-types/int-uint.md#uint-ranges)) — Amount of RAM allocated for the dictionary.
-- `query_count` ([UInt64](../sql-reference/data-types/int-uint.md#uint-ranges)) — Number of queries since the dictionary was loaded or since the last successful reboot.
-- `hit_rate` ([Float64](../sql-reference/data-types/float.md)) — For cache dictionaries, the percentage of uses for which the value was in the cache.
-- `element_count` ([UInt64](../sql-reference/data-types/int-uint.md#uint-ranges)) — Number of items stored in the dictionary.
-- `load_factor` ([Float64](../sql-reference/data-types/float.md)) — Percentage filled in the dictionary (for a hashed dictionary, the percentage filled in the hash table).
-- `source` ([String](../sql-reference/data-types/string.md)) — Text describing the [data source](../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md) for the dictionary.
-- `lifetime_min` ([UInt64](../sql-reference/data-types/int-uint.md#uint-ranges)) — Minimum [lifetime](../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-lifetime.md) of the dictionary in memory, after which ClickHouse tries to reload the dictionary (if `invalidate_query` is set, then only if it has changed). Set in seconds.
-- `lifetime_max` ([UInt64](../sql-reference/data-types/int-uint.md#uint-ranges)) — Maximum [lifetime](../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-lifetime.md) of the dictionary in memory, after which ClickHouse tries to reload the dictionary (if `invalidate_query` is set, then only if it has changed). Set in seconds.
-- `loading_start_time` ([DateTime](../sql-reference/data-types/datetime.md)) — Start time for loading the dictionary.
-- `last_successful_update_time` ([DateTime](../sql-reference/data-types/datetime.md)) — End time for loading or updating the dictionary. Helps to monitor some troubles with external sources and investigate causes.
-- `loading_duration` ([Float32](../sql-reference/data-types/float.md)) — Duration of a dictionary loading.
-- `last_exception` ([String](../sql-reference/data-types/string.md)) — Text of the error that occurs when creating or reloading the dictionary if the dictionary couldn't be created.
-
-**Example**
-
-Configure the dictionary.
-
-``` sql
-CREATE DICTIONARY dictdb.dict
-(
-    `key` Int64 DEFAULT -1,
-    `value_default` String DEFAULT 'world',
-    `value_expression` String DEFAULT 'xxx' EXPRESSION 'toString(127 * 172)'
-)
-PRIMARY KEY key
-SOURCE(CLICKHOUSE(HOST 'localhost' PORT 9000 USER 'default' TABLE 'dicttbl' DB 'dictdb'))
-LIFETIME(MIN 0 MAX 1)
-LAYOUT(FLAT())
-```
-
-Make sure that the dictionary is loaded.
-
-``` sql
-SELECT * FROM system.dictionaries
-```
-
-``` text
-┌─database─┬─name─┬─status─┬─origin──────┬─type─┬─key────┬─attribute.names──────────────────────┬─attribute.types─────┬─bytes_allocated─┬─query_count─┬─hit_rate─┬─element_count─┬───────────load_factor─┬─source─────────────────────┬─lifetime_min─┬─lifetime_max─┬──loading_start_time─┬─last_successful_update_time─┬─loading_duration─┬─last_exception─┐
-│ dictdb   │ dict │ LOADED │ dictdb.dict │ Flat │ UInt64 │ ['value_default','value_expression'] │ ['String','String'] │           74032 │           0 │        1 │             1 │ 0.0004887585532746823 │ ClickHouse: dictdb.dicttbl │            0 │            1 │ 2020-03-04 04:17:34 │         2020-03-04 04:30:34 │            0.002 │                │
-└──────────┴──────┴────────┴─────────────┴──────┴────────┴──────────────────────────────────────┴─────────────────────┴─────────────────┴─────────────┴──────────┴───────────────┴───────────────────────┴────────────────────────────┴──────────────┴──────────────┴─────────────────────┴─────────────────────────────┴──────────────────┴────────────────┘
-```
-
-## system.events {#system_tables-events}
-
-Contains information about the number of events that have occurred in the system. For example, in the table you can find how many `SELECT` queries were processed since the ClickHouse server started.
-
-Columns:
-
-- `event` ([String](../sql-reference/data-types/string.md)) — Event name.
-- `value` ([UInt64](../sql-reference/data-types/int-uint.md)) — Number of events occurred.
-- `description` ([String](../sql-reference/data-types/string.md)) — Event description.
-
-**Example**
-
-``` sql
-SELECT * FROM system.events LIMIT 5
-```
-
-``` text
-┌─event─────────────────────────────────┬─value─┬─description──────────────────────────────────────────────────────────────────────────────────────────┐
-│ Query                                 │    12 │ Number of queries to be interpreted and potentially executed. Does not include queries that failed to parse or were rejected due to AST size limits, quota limits or limits on the number of simultaneously running queries. May include internal queries initiated by ClickHouse itself. Does not count subqueries. │
-│ SelectQuery                           │     8 │ Same as Query, but only for SELECT queries.                                                            │
-│ FileOpen                              │    73 │ Number of files opened.                                                                                │
-│ ReadBufferFromFileDescriptorRead      │   155 │ Number of reads (read/pread) from a file descriptor. Does not include sockets.                         │
-│ ReadBufferFromFileDescriptorReadBytes │  9931 │ Number of bytes read from file descriptors. If the file is compressed, this will show the compressed data size. │
-└───────────────────────────────────────┴───────┴────────────────────────────────────────────────────────────────────────────────────────────────────────┘
-```
-
-**See also**
-
-- [system.asynchronous_metrics](#system_tables-asynchronous_metrics) — Contains periodically calculated metrics.
-- [system.metrics](#system_tables-metrics) — Contains instantly calculated metrics.
-- [system.metric_log](#system_tables-metric_log) — Contains a history of metrics values from tables `system.metrics` and `system.events`.
-- [Monitoring](monitoring.md) — Base concepts of ClickHouse monitoring.
-
-## system.functions {#system-functions}
-
-Contains information about normal and aggregate functions.
-
-Columns:
-
-- `name`(`String`) – The name of the function.
-- `is_aggregate`(`UInt8`) — Whether the function is aggregate.
-
-## system.graphite_retentions {#system-graphite-retentions}
-
-Contains information about parameters [graphite_rollup](server-configuration-parameters/settings.md#server_configuration_parameters-graphite) which are used in tables with [\*GraphiteMergeTree](../engines/table-engines/mergetree-family/graphitemergetree.md) engines.
-
-Columns:
-
-- `config_name` (String) - `graphite_rollup` parameter name.
-- `regexp` (String) - A pattern for the metric name.
-- `function` (String) - The name of the aggregating function.
-- `age` (UInt64) - The minimum age of the data in seconds.
-- `precision` (UInt64) - How precisely to define the age of the data in seconds.
-- `priority` (UInt16) - Pattern priority.
-- `is_default` (UInt8) - Whether the pattern is the default.
-- `Tables.database` (Array(String)) - Array of names of database tables that use the `config_name` parameter.
-- `Tables.table` (Array(String)) - Array of table names that use the `config_name` parameter.
-
-## system.merges {#system-merges}
-
-Contains information about merges and part mutations currently in process for tables in the MergeTree family.
-
-Columns:
-
-- `database` (String) — The name of the database the table is in.
-- `table` (String) — Table name.
-- `elapsed` (Float64) — The time elapsed (in seconds) since the merge started.
-- `progress` (Float64) — The percentage of completed work from 0 to 1.
-- `num_parts` (UInt64) — The number of pieces to be merged.
-- `result_part_name` (String) — The name of the part that will be formed as the result of merging.
-- `is_mutation` (UInt8) - 1 if this process is a part mutation.
-- `total_size_bytes_compressed` (UInt64) — The total size of the compressed data in the merged chunks.
-- `total_size_marks` (UInt64) — The total number of marks in the merged parts.
-- `bytes_read_uncompressed` (UInt64) — Number of bytes read, uncompressed.
-- `rows_read` (UInt64) — Number of rows read.
-- `bytes_written_uncompressed` (UInt64) — Number of bytes written, uncompressed.
-- `rows_written` (UInt64) — Number of rows written.
-
-## system.metrics {#system_tables-metrics}
-
-Contains metrics which can be calculated instantly, or have a current value. For example, the number of simultaneously processed queries or the current replica delay. This table is always up to date.
-
-Columns:
-
-- `metric` ([String](../sql-reference/data-types/string.md)) — Metric name.
-- `value` ([Int64](../sql-reference/data-types/int-uint.md)) — Metric value.
-- `description` ([String](../sql-reference/data-types/string.md)) — Metric description.
-
-The list of supported metrics can be found in the [src/Common/CurrentMetrics.cpp](https://github.com/ClickHouse/ClickHouse/blob/master/src/Common/CurrentMetrics.cpp) source file of ClickHouse.
-
-**Example**
-
-``` sql
-SELECT * FROM system.metrics LIMIT 10
-```
-
-``` text
-┌─metric─────────────────────┬─value─┬─description────────────────────────────────────────────────────────────────────────────────────────┐
-│ Query                      │     1 │ Number of executing queries                                                                          │
-│ Merge                      │     0 │ Number of executing background merges                                                                │
-│ PartMutation               │     0 │ Number of mutations (ALTER DELETE/UPDATE)                                                            │
-│ ReplicatedFetch            │     0 │ Number of data parts being fetched from replicas                                                     │
-│ ReplicatedSend             │     0 │ Number of data parts being sent to replicas                                                          │
-│ ReplicatedChecks           │     0 │ Number of data parts checking for consistency                                                        │
-│ BackgroundPoolTask         │     0 │ Number of active tasks in BackgroundProcessingPool (merges, mutations, fetches, or replication queue bookkeeping) │
-│ BackgroundSchedulePoolTask │     0 │ Number of active tasks in BackgroundSchedulePool. This pool is used for periodic ReplicatedMergeTree tasks, like cleaning old data parts, altering data parts, replica re-initialization, etc. │
-│ DiskSpaceReservedForMerge  │     0 │ Disk space reserved for currently running background merges. It is slightly more than the total size of currently merging parts. │
-│ DistributedSend            │     0 │ Number of connections to remote servers sending data that was INSERTed into Distributed tables. Both synchronous and asynchronous mode. │
-└────────────────────────────┴───────┴──────────────────────────────────────────────────────────────────────────────────────────────────────┘
-```
-
-**See also**
-
-- [system.asynchronous_metrics](#system_tables-asynchronous_metrics) — Contains periodically calculated metrics.
-- [system.events](#system_tables-events) — Contains a number of events that occurred.
-- [system.metric_log](#system_tables-metric_log) — Contains a history of metrics values from tables `system.metrics` and `system.events`.
-- [Monitoring](monitoring.md) — Base concepts of ClickHouse monitoring.
-
-## system.metric_log {#system_tables-metric_log}
-
-Contains history of metrics values from tables `system.metrics` and `system.events`, periodically flushed to disk.
-To turn on metrics history collection on `system.metric_log`, create `/etc/clickhouse-server/config.d/metric_log.xml` with the following content:
-
-``` xml
-<yandex>
-    <metric_log>
-        <database>system</database>
-        <table>metric_log</table>
-        <flush_interval_milliseconds>7500</flush_interval_milliseconds>
-        <collect_interval_milliseconds>1000</collect_interval_milliseconds>
-    </metric_log>
-</yandex>
-```
-
-**Example**
-
-``` sql
-SELECT * FROM system.metric_log LIMIT 1 FORMAT Vertical;
-```
-
-``` text
-Row 1:
-──────
-event_date: 2020-02-18
-event_time: 2020-02-18 07:15:33
-milliseconds: 554
-ProfileEvent_Query: 0
-ProfileEvent_SelectQuery: 0
-ProfileEvent_InsertQuery: 0
-ProfileEvent_FileOpen: 0
-ProfileEvent_Seek: 0
-ProfileEvent_ReadBufferFromFileDescriptorRead: 1
-ProfileEvent_ReadBufferFromFileDescriptorReadFailed: 0
-ProfileEvent_ReadBufferFromFileDescriptorReadBytes: 0
-ProfileEvent_WriteBufferFromFileDescriptorWrite: 1
-ProfileEvent_WriteBufferFromFileDescriptorWriteFailed: 0
-ProfileEvent_WriteBufferFromFileDescriptorWriteBytes: 56
-...
-CurrentMetric_Query: 0
-CurrentMetric_Merge: 0
-CurrentMetric_PartMutation: 0
-CurrentMetric_ReplicatedFetch: 0
-CurrentMetric_ReplicatedSend: 0
-CurrentMetric_ReplicatedChecks: 0
-...
-```
-
-**See also**
-
-- [system.asynchronous_metrics](#system_tables-asynchronous_metrics) — Contains periodically calculated metrics.
-- [system.events](#system_tables-events) — Contains a number of events that occurred.
-- [system.metrics](#system_tables-metrics) — Contains instantly calculated metrics.
-- [Monitoring](monitoring.md) — Base concepts of ClickHouse monitoring.
-
-## system.numbers {#system-numbers}
-
-This table contains a single UInt64 column named ‘number’ that contains almost all the natural numbers starting from zero.
-You can use this table for tests, or if you need to do a brute force search.
-Reads from this table are not parallelized.
-
-## system.numbers_mt {#system-numbers-mt}
-
-The same as ‘system.numbers’ but reads are parallelized. The numbers can be returned in any order.
-Used for tests.
-
-## system.one {#system-one}
-
-This table contains a single row with a single ‘dummy’ UInt8 column containing the value 0.
-This table is used if a SELECT query doesn't specify the FROM clause.
-This is similar to the DUAL table found in other DBMSs.
-
-## system.parts {#system_tables-parts}
-
-Contains information about parts of [MergeTree](../engines/table-engines/mergetree-family/mergetree.md) tables.
-
-Each row describes one data part.
-
-Columns:
-
-- `partition` (String) – The partition name. To learn what a partition is, see the description of the [ALTER](../sql-reference/statements/alter.md#query_language_queries_alter) query.
-
-    Formats:
-
-    - `YYYYMM` for automatic partitioning by month.
-    - `any_string` when partitioning manually.
-
-- `name` (`String`) – Name of the data part.
-
-- `active` (`UInt8`) – Flag that indicates whether the data part is active. If a data part is active, it's used in a table. Otherwise, it's deleted. Inactive data parts remain after merging.
-
-- `marks` (`UInt64`) – The number of marks. To get the approximate number of rows in a data part, multiply `marks` by the index granularity (usually 8192) (this hint doesn't work for adaptive granularity).
-
-- `rows` (`UInt64`) – The number of rows.
-
-- `bytes_on_disk` (`UInt64`) – Total size of all the data part files in bytes.
-
-- `data_compressed_bytes` (`UInt64`) – Total size of compressed data in the data part. All the auxiliary files (for example, files with marks) are not included.
-
-- `data_uncompressed_bytes` (`UInt64`) – Total size of uncompressed data in the data part. All the auxiliary files (for example, files with marks) are not included.
-
-- `marks_bytes` (`UInt64`) – The size of the file with marks.
-
-- `modification_time` (`DateTime`) – The time the directory with the data part was modified. This usually corresponds to the time of data part creation.
-
-- `remove_time` (`DateTime`) – The time when the data part became inactive.
-
-- `refcount` (`UInt32`) – The number of places where the data part is used. A value greater than 2 indicates that the data part is used in queries or merges.
-
-- `min_date` (`Date`) – The minimum value of the date key in the data part.
-
-- `max_date` (`Date`) – The maximum value of the date key in the data part.
-
-- `min_time` (`DateTime`) – The minimum value of the date and time key in the data part.
-
-- `max_time`(`DateTime`) – The maximum value of the date and time key in the data part.
-
-- `partition_id` (`String`) – ID of the partition.
-
-- `min_block_number` (`UInt64`) – The minimum number of data parts that make up the current part after merging.
-
-- `max_block_number` (`UInt64`) – The maximum number of data parts that make up the current part after merging.
-
-- `level` (`UInt32`) – Depth of the merge tree. Zero means that the current part was created by insert rather than by merging other parts.
-
-- `data_version` (`UInt64`) – Number that is used to determine which mutations should be applied to the data part (mutations with a version higher than `data_version`).
-
-- `primary_key_bytes_in_memory` (`UInt64`) – The amount of memory (in bytes) used by primary key values.
-
-- `primary_key_bytes_in_memory_allocated` (`UInt64`) – The amount of memory (in bytes) reserved for primary key values.
-
-- `is_frozen` (`UInt8`) – Flag that shows that a partition data backup exists. 1, the backup exists. 0, the backup doesn't exist. For more details, see [FREEZE PARTITION](../sql-reference/statements/alter.md#alter_freeze-partition)
-
-- `database` (`String`) – Name of the database.
-
-- `table` (`String`) – Name of the table.
-
-- `engine` (`String`) – Name of the table engine without parameters.
-
-- `path` (`String`) – Absolute path to the folder with data part files.
-
-- `disk` (`String`) – Name of a disk that stores the data part.
-
-- `hash_of_all_files` (`String`) – [sipHash128](../sql-reference/functions/hash-functions.md#hash_functions-siphash128) of compressed files.
-
-- `hash_of_uncompressed_files` (`String`) – [sipHash128](../sql-reference/functions/hash-functions.md#hash_functions-siphash128) of uncompressed files (files with marks, index file, etc.).
-
-- `uncompressed_hash_of_compressed_files` (`String`) – [sipHash128](../sql-reference/functions/hash-functions.md#hash_functions-siphash128) of data in the compressed files as if they were uncompressed.
-
-- `bytes` (`UInt64`) – Alias for `bytes_on_disk`.
-
-- `marks_size` (`UInt64`) – Alias for `marks_bytes`.
-
-## system.part_log {#system_tables-part-log}
-
-The `system.part_log` table is created only if the [part_log](server-configuration-parameters/settings.md#server_configuration_parameters-part-log) server setting is specified.
-
-This table contains information about events that occurred with [data parts](../engines/table-engines/mergetree-family/custom-partitioning-key.md) in the [MergeTree](../engines/table-engines/mergetree-family/mergetree.md) family tables, such as adding or merging data.
-
-The `system.part_log` table contains the following columns:
-
-- `event_type` (Enum) — Type of the event that occurred with the data part. Can have one of the following values:
-    - `NEW_PART` — Inserting of a new data part.
-    - `MERGE_PARTS` — Merging of data parts.
-    - `DOWNLOAD_PART` — Downloading a data part.
-    - `REMOVE_PART` — Removing or detaching a data part using [DETACH PARTITION](../sql-reference/statements/alter.md#alter_detach-partition).
-    - `MUTATE_PART` — Mutating of a data part.
-    - `MOVE_PART` — Moving the data part from the one disk to another one.
-- `event_date` (Date) — Event date.
-- `event_time` (DateTime) — Event time.
-- `duration_ms` (UInt64) — Duration.
-- `database` (String) — Name of the database the data part is in.
-- `table` (String) — Name of the table the data part is in.
-- `part_name` (String) — Name of the data part.
-- `partition_id` (String) — ID of the partition that the data part was inserted to. The column takes the ‘all’ value if the partitioning is by `tuple()`.
-- `rows` (UInt64) — The number of rows in the data part.
-- `size_in_bytes` (UInt64) — Size of the data part in bytes.
-- `merged_from` (Array(String)) — An array of names of the parts which the current part was made up from (after the merge).
-- `bytes_uncompressed` (UInt64) — Size of uncompressed bytes.
-- `read_rows` (UInt64) — The number of rows that were read during the merge.
-- `read_bytes` (UInt64) — The number of bytes that were read during the merge.
-- `error` (UInt16) — The code number of the occurred error.
-- `exception` (String) — Text message of the occurred error.
-
-The `system.part_log` table is created after the first inserting of data into a `MergeTree` table.
-
-## system.processes {#system_tables-processes}
-
-This system table is used for implementing the `SHOW PROCESSLIST` query.
-
-Columns:
-
-- `user` (String) – The user who made the query. Keep in mind that for distributed processing, queries are sent to remote servers under the `default` user. The field contains the username for a specific query, not for a query that this query initiated.
-- `address` (String) – The IP address the request was made from. The same for distributed processing. To track where a distributed query was originally made from, look at `system.processes` on the query requestor server.
-- `elapsed` (Float64) – The time in seconds since request execution started.
-- `rows_read` (UInt64) – The number of rows read from the table. For distributed processing, on the requestor server, this is the total for all remote servers.
-- `bytes_read` (UInt64) – The number of uncompressed bytes read from the table. For distributed processing, on the requestor server, this is the total for all remote servers.
-- `total_rows_approx` (UInt64) – The approximation of the total number of rows that should be read. For distributed processing, on the requestor server, this is the total for all remote servers. It can be updated during request processing, when new sources to process become known.
-- `memory_usage` (UInt64) – Amount of RAM the request uses. It might not include some types of dedicated memory. See the [max_memory_usage](../operations/settings/query-complexity.md#settings_max_memory_usage) setting.
-- `query` (String) – The query text. For `INSERT`, it doesn't include the data to insert.
-- `query_id` (String) – Query ID, if defined.
-
-## system.text_log {#system_tables-text_log}
-
-Contains logging entries. The logging level which goes to this table can be limited with the `text_log.level` server setting.
-
-Columns:
-
-- `event_date` (`Date`) - Date of the entry.
-- `event_time` (`DateTime`) - Time of the entry.
-- `microseconds` (`UInt32`) - Microseconds of the entry.
-- `thread_name` (String) — Name of the thread from which the logging was done.
-- `thread_id` (UInt64) — OS thread ID.
-- `level` (`Enum8`) - Entry level.
-    - `'Fatal' = 1`
-    - `'Critical' = 2`
-    - `'Error' = 3`
-    - `'Warning' = 4`
-    - `'Notice' = 5`
-    - `'Information' = 6`
-    - `'Debug' = 7`
-    - `'Trace' = 8`
-- `query_id` (`String`) - ID of the query.
-- `logger_name` (`LowCardinality(String)`) - Name of the logger (i.e. `DDLWorker`)
-- `message` (`String`) - The message itself.
-- `revision` (`UInt32`) - ClickHouse revision.
-- `source_file` (`LowCardinality(String)`) - Source file from which the logging was done.
-- `source_line` (`UInt64`) - Source line from which the logging was done.
-
-## system.query_log {#system_tables-query_log}
-
-Contains information about execution of queries. For each query, you can see processing start time, duration of processing, error messages and other information.
-
-!!! note "Note"
-    The table doesn't contain input data for `INSERT` queries.
-
-ClickHouse creates this table only if the [query_log](server-configuration-parameters/settings.md#server_configuration_parameters-query-log) server parameter is specified. This parameter sets the logging rules, such as the logging interval or the name of the table the queries will be logged in.
-
-To enable query logging, set the [log_queries](settings/settings.md#settings-log-queries) parameter to 1. For details, see the [Settings](settings/settings.md) section.
-
-The `system.query_log` table registers two kinds of queries:
-
-1. Initial queries that were run directly by the client.
-2. Child queries that were initiated by other queries (for distributed query execution). For these kinds of queries, information about the parent queries is shown in the `initial_*` columns.
-
-Columns:
-
-- `type` (`Enum8`) — Type of event that occurred when executing the query. Values:
-    - `'QueryStart' = 1` — Successful start of query execution.
-    - `'QueryFinish' = 2` — Successful end of query execution.
-    - `'ExceptionBeforeStart' = 3` — Exception before the start of query execution.
-    - `'ExceptionWhileProcessing' = 4` — Exception during the query execution.
-- `event_date` (Date) — Query starting date.
-- `event_time` (DateTime) — Query starting time.
-- `query_start_time` (DateTime) — Start time of query execution.
-- `query_duration_ms` (UInt64) — Duration of query execution.
-- `read_rows` (UInt64) — Number of read rows.
-- `read_bytes` (UInt64) — Number of read bytes.
-- `written_rows` (UInt64) — For `INSERT` queries, the number of written rows. For other queries, the column value is 0.
-- `written_bytes` (UInt64) — For `INSERT` queries, the number of written bytes. For other queries, the column value is 0.
-- `result_rows` (UInt64) — Number of rows in the result.
-- `result_bytes` (UInt64) — Number of bytes in the result.
-- `memory_usage` (UInt64) — Memory consumption by the query.
-- `query` (String) — Query string.
-- `exception` (String) — Exception message.
-- `stack_trace` (String) — Stack trace (a list of methods called before the error occurred). An empty string, if the query is completed successfully.
-- `is_initial_query` (UInt8) — Query type. Possible values:
-    - 1 — Query was initiated by the client.
-    - 0 — Query was initiated by another query for distributed query execution.
-- `user` (String) — Name of the user who initiated the current query.
-- `query_id` (String) — ID of the query.
-- `address` (IPv6) — IP address that was used to make the query.
-- `port` (UInt16) — The client port that was used to make the query.
-- `initial_user` (String) — Name of the user who ran the initial query (for distributed query execution).
-- `initial_query_id` (String) — ID of the initial query (for distributed query execution).
-- `initial_address` (IPv6) — IP address that the parent query was launched from.
-- `initial_port` (UInt16) — The client port that was used to make the parent query.
-- `interface` (UInt8) — Interface that the query was initiated from. Possible values:
-    - 1 — TCP.
-    - 2 — HTTP.
-- `os_user` (String) — OS's username who runs [clickhouse-client](../interfaces/cli.md).
-- `client_hostname` (String) — Hostname of the client machine where the [clickhouse-client](../interfaces/cli.md) or another TCP client is run.
-- `client_name` (String) — The [clickhouse-client](../interfaces/cli.md) or another TCP client name.
-- `client_revision` (UInt32) — Revision of the [clickhouse-client](../interfaces/cli.md) or another TCP client.
-- `client_version_major` (UInt32) — Major version of the [clickhouse-client](../interfaces/cli.md) or another TCP client.
-- `client_version_minor` (UInt32) — Minor version of the [clickhouse-client](../interfaces/cli.md) or another TCP client.
-- `client_version_patch` (UInt32) — Patch component of the [clickhouse-client](../interfaces/cli.md) or another TCP client version.
-- `http_method` (UInt8) — HTTP method that initiated the query. Possible values:
-    - 0 — The query was launched from the TCP interface.
-    - 1 — `GET` method was used.
-    - 2 — `POST` method was used.
-- `http_user_agent` (String) — The `UserAgent` header passed in the HTTP request.
-- `quota_key` (String) — The “quota key” specified in the [quotas](quotas.md) setting (see `keyed`).
-- `revision` (UInt32) — ClickHouse revision.
-- `thread_numbers` (Array(UInt32)) — Number of threads that are participating in query execution.
-- `ProfileEvents.Names` (Array(String)) — Counters that measure different metrics. The description of them can be found in the table [system.events](#system_tables-events)
-- `ProfileEvents.Values` (Array(UInt64)) — Values of metrics that are listed in the `ProfileEvents.Names` column.
-- `Settings.Names` (Array(String)) — Names of settings that were changed when the client ran the query. To enable logging changes to settings, set the `log_query_settings` parameter to 1.
-- `Settings.Values` (Array(String)) — Values of settings that are listed in the `Settings.Names` column.
-
-Each query creates one or two rows in the `query_log` table, depending on the status of the query:
-
-1. If the query execution is successful, two events with types 1 and 2 are created (see the `type` column).
-2. If an error occurred during query processing, two events with types 1 and 4 are created.
-3. If an error occurred before launching the query, a single event with type 3 is created.
-
-By default, logs are added to the table at intervals of 7.5 seconds. You can set this interval in the [query_log](server-configuration-parameters/settings.md#server_configuration_parameters-query-log) server setting (see the `flush_interval_milliseconds` parameter). To flush the logs forcibly from the memory buffer into the table, use the `SYSTEM FLUSH LOGS` query, as in the sketch below.
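-
-As a brief illustration of inspecting this log (column names as listed above):
-
-``` sql
--- Persist buffered log entries, then look at the slowest recent queries.
-SYSTEM FLUSH LOGS;
-
-SELECT event_time, query_duration_ms, query
-FROM system.query_log
-WHERE type = 'QueryFinish'
-ORDER BY query_duration_ms DESC
-LIMIT 5;
-```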
-
-When the table is deleted manually, it will be automatically created on the fly. Note that all the previous logs will be deleted.
-
-!!! note "Note"
-    The storage period for logs is unlimited. Logs aren't automatically deleted from the table. You need to organize the removal of outdated logs yourself.
-
-You can specify an arbitrary partition key for the `system.query_log` table in the [query_log](server-configuration-parameters/settings.md#server_configuration_parameters-query-log) server setting (see the `partition_by` parameter).
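-
-For example, a quick way to inspect the most recent completed queries (a minimal illustrative query, assuming query logging is enabled as described above; the exact set of columns may differ between ClickHouse versions):
-
-``` sql
-SELECT
-    event_time,
-    query_duration_ms,
-    read_rows,
-    memory_usage,
-    query
-FROM system.query_log
-WHERE type = 'QueryFinish'
-ORDER BY event_time DESC
-LIMIT 10
-```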
-
-## system.query_thread_log {#system_tables-query-thread-log}
-
-The table contains information about each query execution thread.
-
-ClickHouse creates this table only if the [query_thread_log](server-configuration-parameters/settings.md#server_configuration_parameters-query-thread-log) server parameter is specified. This parameter sets the logging rules, such as the logging interval or the name of the table the queries will be logged in.
-
-To enable query logging, set the [log_query_threads](settings/settings.md#settings-log-query-threads) parameter to 1. For details, see the [Settings](settings/settings.md) section.
-
-Columns:
-
-- `event_date` (Date) — The date when the thread has finished execution of the query.
-- `event_time` (DateTime) — The date and time when the thread has finished execution of the query.
-- `query_start_time` (DateTime) — Start time of query execution.
-- `query_duration_ms` (UInt64) — Duration of query execution.
-- `read_rows` (UInt64) — Number of read rows.
-- `read_bytes` (UInt64) — Number of read bytes.
-- `written_rows` (UInt64) — For `INSERT` queries, the number of written rows. For other queries, the column value is 0.
-- `written_bytes` (UInt64) — For `INSERT` queries, the number of written bytes. For other queries, the column value is 0.
-- `memory_usage` (Int64) — The difference between the amount of allocated and freed memory in the context of this thread.
-- `peak_memory_usage` (Int64) — The maximum difference between the amount of allocated and freed memory in the context of this thread.
-- `thread_name` (String) — Name of the thread.
-- `thread_number` (UInt32) — Internal thread ID.
-- `os_thread_id` (Int32) — OS thread ID.
-- `master_thread_id` (UInt64) — OS initial ID of the initial thread.
-- `query` (String) — Query string.
-- `is_initial_query` (UInt8) — Query type. Possible values:
-    - 1 — Query was initiated by the client.
-    - 0 — Query was initiated by another query for distributed query execution.
-- `user` (String) — Name of the user who initiated the current query.
-- `query_id` (String) — ID of the query.
-- `address` (IPv6) — IP address that was used to make the query.
-- `port` (UInt16) — The client port that was used to make the query.
-- `initial_user` (String) — Name of the user who ran the initial query (for distributed query execution).
-- `initial_query_id` (String) — ID of the initial query (for distributed query execution).
-- `initial_address` (IPv6) — IP address that the parent query was launched from.
-- `initial_port` (UInt16) — The client port that was used to make the parent query.
-- `interface` (UInt8) — Interface that the query was initiated from. Possible values:
-    - 1 — TCP.
-    - 2 — HTTP.
-- `os_user` (String) — OS's username who runs [clickhouse-client](../interfaces/cli.md).
-- `client_hostname` (String) — Hostname of the client machine where the [clickhouse-client](../interfaces/cli.md) or another TCP client is run.
-- `client_name` (String) — The [clickhouse-client](../interfaces/cli.md) or another TCP client name.
-- `client_revision` (UInt32) — Revision of the [clickhouse-client](../interfaces/cli.md) or another TCP client.
-- `client_version_major` (UInt32) — Major version of the [clickhouse-client](../interfaces/cli.md) or another TCP client.
-- `client_version_minor` (UInt32) — Minor version of the [clickhouse-client](../interfaces/cli.md) or another TCP client.
-- `client_version_patch` (UInt32) — Patch component of the [clickhouse-client](../interfaces/cli.md) or another TCP client version.
-- `http_method` (UInt8) — HTTP method that initiated the query. Possible values:
-    - 0 — The query was launched from the TCP interface.
-    - 1 — `GET` method was used.
-    - 2 — `POST` method was used.
-- `http_user_agent` (String) — The `UserAgent` header passed in the HTTP request.
-- `quota_key` (String) — The “quota key” specified in the [quotas](quotas.md) setting (see `keyed`).
-- `revision` (UInt32) — ClickHouse revision.
-- `ProfileEvents.Names` (Array(String)) — Counters that measure different metrics for this thread. Their description can be found in the table [system.events](#system_tables-events).
-- `ProfileEvents.Values` (Array(UInt64)) — Values of metrics for this thread that are listed in the `ProfileEvents.Names` column.
-
-By default, logs are added to the table at intervals of 7.5 seconds. You can set this interval in the [query_thread_log](server-configuration-parameters/settings.md#server_configuration_parameters-query-thread-log) server setting (see the `flush_interval_milliseconds` parameter). To flush the logs forcibly from the memory buffer into the table, use the `SYSTEM FLUSH LOGS` query.
-
-When the table is deleted manually, it will be automatically created on the fly. Note that all the previous logs will be deleted.
-
-!!! note "Note"
-    The storage period for logs is unlimited. Logs aren't automatically deleted from the table. You need to organize the removal of outdated logs yourself.
-
-You can specify an arbitrary partition key for the `system.query_thread_log` table in the [query_thread_log](server-configuration-parameters/settings.md#server_configuration_parameters-query-thread-log) server setting (see the `partition_by` parameter).
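-
-For example, to see which threads of a particular query consumed the most memory (an illustrative query; replace the query ID placeholder with one of your own, for instance taken from `system.query_log`):
-
-``` sql
-SELECT thread_name, os_thread_id, peak_memory_usage
-FROM system.query_thread_log
-WHERE query_id = 'acc4d61f-5bd1-4a3e-bc91-2180be37c915'
-ORDER BY peak_memory_usage DESC
-```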
-
-## system.trace_log {#system_tables-trace_log}
-
-Contains stack traces collected by the sampling query profiler.
-
-ClickHouse creates this table when the [trace_log](server-configuration-parameters/settings.md#server_configuration_parameters-trace_log) server configuration section is set. Also the [query_profiler_real_time_period_ns](settings/settings.md#query_profiler_real_time_period_ns) and [query_profiler_cpu_time_period_ns](settings/settings.md#query_profiler_cpu_time_period_ns) settings should be set.
-
-To analyze logs, use the `addressToLine`, `addressToSymbol` and `demangle` introspection functions.
-
-Columns:
-
-- `event_date` ([Date](../sql-reference/data-types/date.md)) — Date of the sampling moment.
-
-- `event_time` ([DateTime](../sql-reference/data-types/datetime.md)) — Timestamp of the sampling moment.
-
-- `timestamp_ns` ([UInt64](../sql-reference/data-types/int-uint.md)) — Timestamp of the sampling moment in nanoseconds.
-
-- `revision` ([UInt32](../sql-reference/data-types/int-uint.md)) — ClickHouse server build revision.
-
-    When connecting to the server by `clickhouse-client`, you see the string similar to `Connected to ClickHouse server version 19.18.1 revision 54429.`. This field contains the `revision`, but not the `version` of a server.
-
-- `timer_type` ([Enum8](../sql-reference/data-types/enum.md)) — Timer type:
-
-    - `Real` represents wall-clock time.
-    - `CPU` represents CPU time.
-
-- `thread_number` ([UInt32](../sql-reference/data-types/int-uint.md)) — Thread identifier.
-
-- `query_id` ([String](../sql-reference/data-types/string.md)) — Query identifier that can be used to get details about a query that was running from the [query_log](#system_tables-query_log) system table.
-
-- `trace` ([Array(UInt64)](../sql-reference/data-types/array.md)) — Stack trace at the moment of sampling. Each element is a virtual memory address inside the ClickHouse server process.
-
-**Example**
-
-``` sql
-SELECT * FROM system.trace_log LIMIT 1 \G
-```
-
-``` text
-Row 1:
-──────
-event_date: 2019-11-15
-event_time: 2019-11-15 15:09:38
-revision: 54428
-timer_type: Real
-thread_number: 48
-query_id: acc4d61f-5bd1-4a3e-bc91-2180be37c915
-trace: [94222141367858,94222152240175,94222152325351,94222152329944,94222152330796,94222151449980,94222144088167,94222151682763,94222144088167,94222151682763,94222144088167,94222144058283,94222144059248,94222091840750,94222091842302,94222091831228,94222189631488,140509950166747,140509942945935]
-```
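-
-For example, to turn the raw addresses in `trace` into readable symbols directly in SQL (an illustrative sketch combining the introspection functions mentioned above; it assumes the `allow_introspection_functions` setting is enabled and debug symbols are installed on the server):
-
-``` sql
-SELECT
-    count() AS samples,
-    arrayStringConcat(arrayMap(x -> demangle(addressToSymbol(x)), trace), '\n') AS stack
-FROM system.trace_log
-GROUP BY trace
-ORDER BY samples DESC
-LIMIT 1
-SETTINGS allow_introspection_functions = 1
-```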
-
-## system.replicas {#system_tables-replicas}
-
-Contains information and status for replicated tables residing on the local server.
-This table can be used for monitoring. The table contains a row for every Replicated\* table.
-
-Example:
-
-``` sql
-SELECT *
-FROM system.replicas
-WHERE table = 'visits'
-FORMAT Vertical
-```
-
-``` text
-Row 1:
-──────
-database: merge
-table: visits
-engine: ReplicatedCollapsingMergeTree
-is_leader: 1
-can_become_leader: 1
-is_readonly: 0
-is_session_expired: 0
-future_parts: 1
-parts_to_check: 0
-zookeeper_path: /clickhouse/tables/01-06/visits
-replica_name: example01-06-1.yandex.ru
-replica_path: /clickhouse/tables/01-06/visits/replicas/example01-06-1.yandex.ru
-columns_version: 9
-queue_size: 1
-inserts_in_queue: 0
-merges_in_queue: 1
-part_mutations_in_queue: 0
-queue_oldest_time: 2020-02-20 08:34:30
-inserts_oldest_time: 1970-01-01 00:00:00
-merges_oldest_time: 2020-02-20 08:34:30
-part_mutations_oldest_time: 1970-01-01 00:00:00
-oldest_part_to_get:
-oldest_part_to_merge_to: 20200220_20284_20840_7
-oldest_part_to_mutate_to:
-log_max_index: 596273
-log_pointer: 596274
-last_queue_update: 2020-02-20 08:34:32
-absolute_delay: 0
-total_replicas: 2
-active_replicas: 2
-```
-
-Columns:
-
-- `database` (`String`) - Database name.
-- `table` (`String`) - Table name.
-- `engine` (`String`) - Table engine name.
-- `is_leader` (`UInt8`) - Whether the replica is the leader.
-    Only one replica at a time can be the leader. The leader is responsible for selecting background merges to perform.
-    Note that writes can be performed to any replica that is available and has a session in ZK, regardless of whether it is a leader.
-- `can_become_leader` (`UInt8`) - Whether the replica can be elected as a leader.
-- `is_readonly` (`UInt8`) - Whether the replica is in read-only mode.
-    This mode is turned on if the config doesn't have sections with ZooKeeper, if an unknown error occurred when reinitializing sessions in ZooKeeper, and during session reinitialization in ZooKeeper.
-- `is_session_expired` (`UInt8`) - The session with ZooKeeper has expired. Basically the same as `is_readonly`.
-- `future_parts` (`UInt32`) - The number of data parts that will appear as the result of INSERTs or merges that haven't been done yet.
-- `parts_to_check` (`UInt32`) - The number of data parts in the queue for verification. A part is put in the verification queue if there is suspicion that it might be damaged.
-- `zookeeper_path` (`String`) - Path to the table data in ZooKeeper.
-- `replica_name` (`String`) - Replica name in ZooKeeper. Different replicas of the same table have different names.
-- `replica_path` (`String`) - Path to the replica data in ZooKeeper. The same as concatenating ‘zookeeper_path/replicas/replica_path’.
-- `columns_version` (`Int32`) - Version number of the table structure. Indicates how many times ALTER was performed. If replicas have different versions, it means some replicas haven't made all of the ALTERs yet.
-- `queue_size` (`UInt32`) - Size of the queue for operations waiting to be performed. Operations include inserting blocks of data, merges, and certain other actions. It usually coincides with `future_parts`.
-- `inserts_in_queue` (`UInt32`) - Number of inserts of blocks of data that need to be made. Insertions are usually replicated fairly quickly. If this number is large, it means something is wrong.
-- `merges_in_queue` (`UInt32`) - The number of merges waiting to be made. Sometimes merges are lengthy, so this value may be greater than zero for a long time.
-- `part_mutations_in_queue` (`UInt32`) - The number of mutations waiting to be made.
-- `queue_oldest_time` (`DateTime`) - If `queue_size` is greater than 0, shows when the oldest operation was added to the queue.
-- `inserts_oldest_time` (`DateTime`) - See `queue_oldest_time`.
-- `merges_oldest_time` (`DateTime`) - See `queue_oldest_time`.
-- `part_mutations_oldest_time` (`DateTime`) - See `queue_oldest_time`.
-
-The next 4 columns have a non-zero value only where there is an active session with ZK.
-
-- `log_max_index` (`UInt64`) - Maximum entry number in the log of general activity.
-- `log_pointer` (`UInt64`) - Maximum entry number in the log of general activity that the replica copied to its execution queue, plus one. If `log_pointer` is much smaller than `log_max_index`, something is wrong.
-- `last_queue_update` (`DateTime`) - When the queue was updated last time.
-- `absolute_delay` (`UInt64`) - How big lag in seconds the current replica has.
-- `total_replicas` (`UInt8`) - The total number of known replicas of this table.
-- `active_replicas` (`UInt8`) - The number of replicas of this table that have a session in ZooKeeper (i.e., the number of functioning replicas).
-
-If you request all the columns, the table may work a bit slowly, since several reads from ZooKeeper are made for each row.
-If you don't request the last 4 columns (log_max_index, log_pointer, total_replicas, active_replicas), the table works quickly.
-
-For example, you can check that everything is working correctly like this:
-
-``` sql
-SELECT
-    database,
-    table,
-    is_leader,
-    is_readonly,
-    is_session_expired,
-    future_parts,
-    parts_to_check,
-    columns_version,
-    queue_size,
-    inserts_in_queue,
-    merges_in_queue,
-    log_max_index,
-    log_pointer,
-    total_replicas,
-    active_replicas
-FROM system.replicas
-WHERE
-       is_readonly
-    OR is_session_expired
-    OR future_parts > 20
-    OR parts_to_check > 10
-    OR queue_size > 20
-    OR inserts_in_queue > 10
-    OR log_max_index - log_pointer > 10
-    OR total_replicas < 2
-    OR active_replicas < total_replicas
-```
-
-If this query doesn't return anything, it means that everything is fine.
-
-## system.settings {#system-tables-system-settings}
-
-Contains information about session settings for the current user.
-
-Columns:
-
-- `name` ([String](../sql-reference/data-types/string.md)) — Setting name.
-- `value` ([String](../sql-reference/data-types/string.md)) — Setting value.
-- `changed` ([UInt8](../sql-reference/data-types/int-uint.md#uint-ranges)) — Shows whether a setting is changed from its default value.
-- `description` ([String](../sql-reference/data-types/string.md)) — Short setting description.
-- `min` ([Nullable](../sql-reference/data-types/nullable.md)([String](../sql-reference/data-types/string.md))) — Minimum value of the setting, if any is set via [constraints](settings/constraints-on-settings.md#constraints-on-settings). If the setting has no minimum value, contains [NULL](../sql-reference/syntax.md#null-literal).
-- `max` ([Nullable](../sql-reference/data-types/nullable.md)([String](../sql-reference/data-types/string.md))) — Maximum value of the setting, if any is set via [constraints](settings/constraints-on-settings.md#constraints-on-settings). If the setting has no maximum value, contains [NULL](../sql-reference/syntax.md#null-literal).
-- `readonly` ([UInt8](../sql-reference/data-types/int-uint.md#uint-ranges)) — Shows whether the current user can change the setting:
-    - `0` — Current user can change the setting.
-    - `1` — Current user can't change the setting.
-
-**Example**
-
-The following example shows how to get information about settings whose name contains `min_i`.
-
-``` sql
-SELECT *
-FROM system.settings
-WHERE name LIKE '%min_i%'
-```
-
-``` text
-┌─name────────────────────────────────────────┬─value─────┬─changed─┬─description───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─min──┬─max──┬─readonly─┐
-│ min_insert_block_size_rows                  │ 1048576   │       0 │ Squash blocks passed to INSERT query to specified size in rows, if blocks are not big enough.                                                                        │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │        0 │
-│ min_insert_block_size_bytes                 │ 268435456 │       0 │ Squash blocks passed to INSERT query to specified size in bytes, if blocks are not big enough.                                                                       │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │        0 │
-│ read_backoff_min_interval_between_events_ms │ 1000      │       0 │ Settings to reduce the number of threads in case of slow reads. Do not pay attention to the event, if the previous one has passed less than a certain amount of time. │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │        0 │
-└─────────────────────────────────────────────┴───────────┴─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────┴──────┴──────────┘
-```
-
-Using `WHERE changed` can be useful, for example, when you want to check:
-
-- Whether settings in configuration files are loaded correctly and are in use.
-- Settings that changed in the current session.
-
-``` sql
-SELECT * FROM system.settings WHERE changed AND name='load_balancing'
-```
-
-**See also**
-
-- [Settings](settings/index.md#session-settings-intro)
-- [Permissions for Queries](settings/permissions-for-queries.md#settings_readonly)
-- [Constraints on Settings](settings/constraints-on-settings.md)
-
-## system.table_engines {#system.table_engines}
-
-``` text
-┌─name───────────────────┬─value───────┐
-│ max_threads            │ 8           │
-│ use_uncompressed_cache │ 0           │
-│ load_balancing         │ random      │
-│ max_memory_usage       │ 10000000000 │
-└────────────────────────┴─────────────┘
-```
-
-## system.merge_tree_settings {#system-merge_tree_settings}
-
-Contains information about settings for `MergeTree` tables.
-
-Columns:
-
-- `name` (String) — Setting name.
-- `value` (String) — Setting value.
-- `description` (String) — Setting description.
-- `type` (String) — Setting type (implementation specific string value).
-- `changed` (UInt8) — Whether the setting was explicitly defined in the config or explicitly changed.
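-
-For example, to review which `MergeTree` settings were changed from their defaults (an illustrative query; on a default configuration it may return no rows):
-
-``` sql
-SELECT name, value, description
-FROM system.merge_tree_settings
-WHERE changed
-```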
-
-## system.table_engines {#system-table-engines}
-
-Contains descriptions of table engines supported by the server and their feature support information.
-
-This table contains the following columns (the column type is shown in brackets):
-
-- `name` (String) — The name of the table engine.
-- `supports_settings` (UInt8) — Flag that indicates if the table engine supports the `SETTINGS` clause.
-- `supports_skipping_indices` (UInt8) — Flag that indicates if the table engine supports [data skipping indexes](../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-data_skipping-indexes).
-- `supports_ttl` (UInt8) — Flag that indicates if the table engine supports [TTL](../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-ttl).
-- `supports_sort_order` (UInt8) — Flag that indicates if the table engine supports the clauses `PARTITION_BY`, `PRIMARY_KEY`, `ORDER_BY` and `SAMPLE_BY`.
-- `supports_replication` (UInt8) — Flag that indicates if the table engine supports [data replication](../engines/table-engines/mergetree-family/replication.md).
-- `supports_deduplication` (UInt8) — Flag that indicates if the table engine supports data deduplication.
-
-Example:
-
-``` sql
-SELECT *
-FROM system.table_engines
-WHERE name in ('Kafka', 'MergeTree', 'ReplicatedCollapsingMergeTree')
-```
-
-``` text
-┌─name──────────────────────────┬─supports_settings─┬─supports_skipping_indices─┬─supports_sort_order─┬─supports_ttl─┬─supports_replication─┬─supports_deduplication─┐
-│ Kafka                         │                 1 │                         0 │                   0 │            0 │                    0 │                      0 │
-│ MergeTree                     │                 1 │                         1 │                   1 │            1 │                    0 │                      0 │
-│ ReplicatedCollapsingMergeTree │                 1 │                         1 │                   1 │            1 │                    1 │                      1 │
-└───────────────────────────────┴───────────────────┴───────────────────────────┴─────────────────────┴──────────────┴──────────────────────┴────────────────────────┘
-```
-
-**See also**
-
-- MergeTree family [query clauses](../engines/table-engines/mergetree-family/mergetree.md#mergetree-query-clauses)
-- Kafka [settings](../engines/table-engines/integrations/kafka.md#table_engine-kafka-creating-a-table)
-- Join [settings](../engines/table-engines/special/join.md#join-limitations-and-settings)
-
-## system.tables {#system-tables}
-
-Contains metadata of each table that the server knows about. Detached tables are not shown in `system.tables`.
-
-This table contains the following columns (the column type is shown in brackets):
-
-- `database` (String) — The name of the database the table is in.
-
-- `name` (String) — Table name.
-
-- `engine` (String) — Table engine name (without parameters).
-
-- `is_temporary` (UInt8) — Flag that indicates whether the table is temporary.
-
-- `data_path` (String) — Path to the table data in the file system.
-
-- `metadata_path` (String) — Path to the table metadata in the file system.
-
-- `metadata_modification_time` (DateTime) — Time of latest modification of the table metadata.
-
-- `dependencies_database` (Array(String)) — Database dependencies.
-
-- `dependencies_table` (Array(String)) — Table dependencies ([MaterializedView](../engines/table-engines/special/materializedview.md) tables based on the current table).
-
-- `create_table_query` (String) — The query that was used to create the table.
-
-- `engine_full` (String) — Parameters of the table engine.
-
-- `partition_key` (String) — The partition key expression specified in the table.
-
-- `sorting_key` (String) — The sorting key expression specified in the table.
-
-- `primary_key` (String) — The primary key expression specified in the table.
-
-- `sampling_key` (String) — The sampling key expression specified in the table.
-
-- `storage_policy` (String) — The storage policy:
-
-    - [MergeTree](../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-multiple-volumes)
-    - [Distributed](../engines/table-engines/special/distributed.md#distributed)
-
-- `total_rows` (Nullable(UInt64)) — Total number of rows, if it is possible to quickly determine the exact number of rows in the table, otherwise `Null` (including the underlying `Buffer` table).
-
-- `total_bytes` (Nullable(UInt64)) — Total number of bytes, if it is possible to quickly determine the exact number of bytes for the table on storage, otherwise `Null` (does **not** include any underlying storage).
-
-    - If the table stores data on disk, returns used space on disk (i.e. compressed).
-    - If the table stores data in memory, returns the approximated number of used bytes in memory.
-
-The `system.tables` table is used in `SHOW TABLES` query implementation.
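-
-For example, to list the largest tables known to the server (an illustrative query; as noted above, `total_rows` and `total_bytes` may be `NULL` for engines that cannot report them quickly):
-
-``` sql
-SELECT database, name, engine, total_rows, total_bytes
-FROM system.tables
-WHERE total_bytes IS NOT NULL
-ORDER BY total_bytes DESC
-LIMIT 10
-```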
-
-## system.zookeeper {#system-zookeeper}
-
-The table does not exist if ZooKeeper is not configured. Allows reading data from the ZooKeeper cluster defined in the config.
-The query must have a ‘path’ equality condition in the WHERE clause. This is the path in ZooKeeper for the children that you want to get data for.
-
-The query `SELECT * FROM system.zookeeper WHERE path = '/clickhouse'` outputs data for all children on the `/clickhouse` node.
-To output data for all root nodes, write path = ‘/’.
-If the path specified in ‘path’ doesn't exist, an exception will be thrown.
-
-Columns:
-
-- `name` (String) — The name of the node.
-- `path` (String) — The path to the node.
-- `value` (String) — Node value.
-- `dataLength` (Int32) — Size of the value.
-- `numChildren` (Int32) — Number of descendants.
-- `czxid` (Int64) — ID of the transaction that created the node.
-- `mzxid` (Int64) — ID of the transaction that last changed the node.
-- `pzxid` (Int64) — ID of the transaction that last deleted or added descendants.
-- `ctime` (DateTime) — Time of node creation.
-- `mtime` (DateTime) — Time of the last modification of the node.
-- `version` (Int32) — Node version: the number of times the node was changed.
-- `cversion` (Int32) — Number of added or removed descendants.
-- `aversion` (Int32) — Number of changes to the ACL.
-- `ephemeralOwner` (Int64) — For ephemeral nodes, the ID of the session that owns this node.
-
-Example:
-
-``` sql
-SELECT *
-FROM system.zookeeper
-WHERE path = '/clickhouse/tables/01-08/visits/replicas'
-FORMAT Vertical
-```
-
-``` text
-Row 1:
-──────
-name: example01-08-1.yandex.ru
-value:
-czxid: 932998691229
-mzxid: 932998691229
-ctime: 2015-03-27 16:49:51
-mtime: 2015-03-27 16:49:51
-version: 0
-cversion: 47
-aversion: 0
-ephemeralOwner: 0
-dataLength: 0
-numChildren: 7
-pzxid: 987021031383
-path: /clickhouse/tables/01-08/visits/replicas
-
-Row 2:
-──────
-name: example01-08-2.yandex.ru
-value:
-czxid: 933002738135
-mzxid: 933002738135
-ctime: 2015-03-27 16:57:01
-mtime: 2015-03-27 16:57:01
-version: 0
-cversion: 37
-aversion: 0
-ephemeralOwner: 0
-dataLength: 0
-numChildren: 7
-pzxid: 987021252247
-path: /clickhouse/tables/01-08/visits/replicas
-```
-
-## system.mutations {#system_tables-mutations}
-
-The table contains information about [mutations](../sql-reference/statements/alter.md#alter-mutations) of MergeTree tables and their progress. Each mutation command is represented by a single row. The table has the following columns:
-
-**database**, **table** — The name of the database and table to which the mutation was applied.
-
-**mutation_id** — The ID of the mutation. For replicated tables these IDs correspond to znode names in the `<table_path_in_zookeeper>/mutations/` directory in ZooKeeper. For non-replicated tables the IDs correspond to file names in the data directory of the table.
-
-**command** — The mutation command string (the part of the query after `ALTER TABLE [db.]table`).
-
-**create_time** — When this mutation command was submitted for execution.
-
-**block_numbers.partition_id**, **block_numbers.number** — A nested column. For mutations of replicated tables, it contains one record for each partition: the partition ID and the block number that was acquired by the mutation (in each partition, only parts that contain blocks with numbers less than the block number acquired by the mutation in that partition will be mutated). In non-replicated tables, block numbers in all partitions form a single sequence. This means that for mutations of non-replicated tables, the column will contain one record with a single block number acquired by the mutation.
-
-**parts_to_do** — The number of data parts that need to be mutated for the mutation to finish.
-
-**is_done** — Is the mutation done? Note that even if `parts_to_do = 0` it is possible that a mutation of a replicated table is not done yet because of a long-running INSERT that will create a new data part that needs to be mutated.
-
-If there were problems with mutating some parts, the following columns contain additional information:
-
-**latest_failed_part** — The name of the most recent part that could not be mutated.
-
-**latest_fail_time** — The time of the most recent part mutation failure.
-
-**latest_fail_reason** — The exception message that caused the most recent part mutation failure.
-
-## system.disks {#system_tables-disks}
-
-Contains information about disks defined in the [server configuration](../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-multiple-volumes_configure).
-
-Columns:
-
-- `name` ([String](../sql-reference/data-types/string.md)) — Name of a disk in the server configuration.
-- `path` ([String](../sql-reference/data-types/string.md)) — Path to the mount point in the file system.
-- `free_space` ([UInt64](../sql-reference/data-types/int-uint.md)) — Free space on disk in bytes.
-- `total_space` ([UInt64](../sql-reference/data-types/int-uint.md)) — Disk volume in bytes.
-- `keep_free_space` ([UInt64](../sql-reference/data-types/int-uint.md)) — Amount of disk space that should stay free on disk in bytes. Defined in the `keep_free_space_bytes` parameter of disk configuration.
-
-## system.storage_policies {#system_tables-storage_policies}
-
-Contains information about storage policies and volumes defined in the [server configuration](../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-multiple-volumes_configure).
-
-Columns:
-
-- `policy_name` ([String](../sql-reference/data-types/string.md)) — Name of the storage policy.
-- `volume_name` ([String](../sql-reference/data-types/string.md)) — Volume name defined in the storage policy.
-- `volume_priority` ([UInt64](../sql-reference/data-types/int-uint.md)) — Volume order number in the configuration.
-- `disks` ([Array(String)](../sql-reference/data-types/array.md)) — Disk names, defined in the storage policy.
-- `max_data_part_size` ([UInt64](../sql-reference/data-types/int-uint.md)) — Maximum size of a data part that can be stored on volume disks (0 — no limit).
-- `move_factor` ([Float64](../sql-reference/data-types/float.md)) — Ratio of free disk space. When the ratio exceeds the value of the configuration parameter, ClickHouse starts to move data to the next volume in order.
-
-If the storage policy contains more than one volume, then information for each volume is stored in an individual row of the table.
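-
-For example, to check the configured policies together with the disks they use (an illustrative query; on a default installation it returns the single `default` policy and disk):
-
-``` sql
-SELECT policy_name, volume_name, disks, max_data_part_size, move_factor
-FROM system.storage_policies
-```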
-
-[Original article](https://clickhouse.tech/docs/en/operations/system_tables/)
diff --git a/docs/es/operations/tips.md b/docs/es/operations/tips.md
deleted file mode 100644
index deb226450aa..00000000000
--- a/docs/es/operations/tips.md
+++ /dev/null
@@ -1,251 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_priority: 58
-toc_title: Usage Recommendations
----
-
-# Usage Recommendations {#usage-recommendations}
-
-## CPU Scaling Governor {#cpu-scaling-governor}
-
-Always use the `performance` scaling governor. The `on-demand` scaling governor works much worse with constantly high demand.
-
-``` bash
-$ echo 'performance' | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
-```
-
-## CPU Limitations {#cpu-limitations}
-
-Processors can overheat. Use `dmesg` to see if the CPU's clock rate was limited due to overheating.
-The restriction can also be set externally at the datacenter level. You can use `turbostat` to monitor it under a load.
-
-## RAM {#ram}
-
-For small amounts of data (up to ~200 GB compressed), it is best to use as much memory as the volume of data.
-For large amounts of data and when processing interactive (online) queries, you should use a reasonable amount of RAM (128 GB or more) so the hot data subset will fit in the cache of pages.
-Even for data volumes of ~50 TB per server, using 128 GB of RAM significantly improves query performance compared to 64 GB.
-
-Do not disable overcommit. The value `cat /proc/sys/vm/overcommit_memory` should be 0 or 1. Run
-
-``` bash
-$ echo 0 | sudo tee /proc/sys/vm/overcommit_memory
-```
-
-## Huge Pages {#huge-pages}
-
-Always disable transparent huge pages. They interfere with memory allocators, which leads to significant performance degradation.
-
-``` bash
-$ echo 'never' | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
-```
-
-Use `perf top` to watch the time spent in the kernel for memory management.
-Permanent huge pages also do not need to be allocated.
-
-## Storage Subsystem {#storage-subsystem}
-
-If your budget allows you to use SSD, use SSD.
-If not, use HDD. SATA HDDs 7200 RPM will do.
-
-Give preference to a lot of servers with local hard drives over a smaller number of servers with attached disk shelves.
-But for storing archives with rare queries, shelves will work.
-
-## RAID {#raid}
-
-When using HDD, you can combine their RAID-10, RAID-5, RAID-6 or RAID-50.
-For Linux, software RAID is better (with `mdadm`). We don't recommend using LVM.
-When creating RAID-10, select the `far` layout.
-If your budget allows, choose RAID-10.
-
-If you have more than 4 disks, use RAID-6 (preferred) or RAID-50, instead of RAID-5.
-When using RAID-5, RAID-6 or RAID-50, always increase stripe_cache_size, since the default value is usually not the best choice.
-
-``` bash
-$ echo 4096 | sudo tee /sys/block/md2/md/stripe_cache_size
-```
-
-Calculate the exact number from the number of devices and the block size, using the formula: `2 * num_devices * chunk_size_in_bytes / 4096`. For example, with 8 devices and a 1024 KiB chunk (`chunk_size_in_bytes = 1048576`), the formula gives `2 * 8 * 1048576 / 4096 = 4096`, which is the value used in the command above.
-
-A block size of 1024 KB is sufficient for all RAID configurations.
-Never set the block size too small or too large.
-
-You can use RAID-0 on SSD.
-Regardless of RAID use, always use replication for data security.
-
-Enable NCQ with a long queue. For HDD, choose the CFQ scheduler, and for SSD, choose noop. Don't reduce the ‘readahead’ setting.
-For HDD, enable the write cache.
-
-## File System {#file-system}
-
-Ext4 is the most reliable option. Set the mount options `noatime, nobarrier`.
-XFS is also suitable, but it hasn't been as thoroughly tested with ClickHouse.
-Most other file systems should also work fine. File systems with delayed allocation work better.
-
-## Linux Kernel {#linux-kernel}
-
-Don't use an outdated Linux kernel.
-
-## Network {#network}
-
-If you are using IPv6, increase the size of the route cache.
-The Linux kernel prior to 3.2 had a multitude of problems with the IPv6 implementation.
-
-Use at least a 10 GB network, if possible. 1 Gb will also work, but it will be much worse for patching replicas with tens of terabytes of data, or for processing distributed queries with a large amount of intermediate data.
-
-## ZooKeeper {#zookeeper}
-
-You are probably already using ZooKeeper for other purposes. You can use the same installation of ZooKeeper, if it isn't already overloaded.
-
-It's best to use a fresh version of ZooKeeper – 3.4.9 or later. The version in stable Linux distributions may be outdated.
-
-You should never use manually written scripts to transfer data between different ZooKeeper clusters, because the result will be incorrect for sequential nodes. Never use the “zkcopy” utility for the same reason: https://github.com/ksprojects/zkcopy/issues/15
-
-If you want to divide an existing ZooKeeper cluster into two, the correct way is to increase the number of its replicas and then reconfigure it as two independent clusters.
-
-Do not run ZooKeeper on the same servers as ClickHouse, because ZooKeeper is very sensitive to latency and ClickHouse may utilize all available system resources.
-
-With the default settings, ZooKeeper is a time bomb:
-
-> The ZooKeeper server won't delete files from old snapshots and logs when using the default configuration (see autopurge), and this is the responsibility of the operator.
-
-This bomb must be defused.
-
-The ZooKeeper (3.5.1) configuration below is used in the Yandex.Metrica production environment as of May 20, 2017:
-
-zoo.cfg:
-
-``` bash
-# http://hadoop.apache.org/zookeeper/docs/current/zookeeperAdmin.html
-
-# The number of milliseconds of each tick
-tickTime=2000
-# The number of ticks that the initial
-# synchronization phase can take
-initLimit=30000
-# The number of ticks that can pass between
-# sending a request and getting an acknowledgement
-syncLimit=10
-
-maxClientCnxns=2000
-
-maxSessionTimeout=60000000
-# the directory where the snapshot is stored.
-dataDir=/opt/zookeeper/{{ '{{' }} cluster['name'] {{ '}}' }}/data
-# Place the dataLogDir to a separate physical disc for better performance
-dataLogDir=/opt/zookeeper/{{ '{{' }} cluster['name'] {{ '}}' }}/logs
-
-autopurge.snapRetainCount=10
-autopurge.purgeInterval=1
-
-
-# To avoid seeks ZooKeeper allocates space in the transaction log file in
-# blocks of preAllocSize kilobytes. The default block size is 64M. One reason
-# for changing the size of the blocks is to reduce the block size if snapshots
-# are taken more often. (Also, see snapCount).
-preAllocSize=131072
-
-# Clients can submit requests faster than ZooKeeper can process them,
-# especially if there are a lot of clients. To prevent ZooKeeper from running
-# out of memory due to queued requests, ZooKeeper will throttle clients so that
-# there is no more than globalOutstandingLimit outstanding requests in the
-# system. The default limit is 1,000. ZooKeeper logs transactions to a
-# transaction log. After snapCount transactions are written to a log file a
-# snapshot is started and a new transaction log file is started. The default
-# snapCount is 10,000.
-snapCount=3000000
-
-# If this option is defined, requests will be logged to a trace file named
-# traceFile.year.month.day.
-#traceFile=
-
-# Leader accepts client connections. Default value is "yes". The leader machine
-# coordinates updates. For higher update throughput at the slight expense of
-# read throughput the leader can be configured to not accept clients and focus
-# on coordination.
-leaderServes=yes
-
-standaloneEnabled=false
-dynamicConfigFile=/etc/zookeeper-{{ '{{' }} cluster['name'] {{ '}}' }}/conf/zoo.cfg.dynamic
-```
-
-Java version:
-
-``` text
-Java(TM) SE Runtime Environment (build 1.8.0_25-b17)
-Java HotSpot(TM) 64-Bit Server VM (build 25.25-b02, mixed mode)
-```
-
-JVM parameters:
-
-``` bash
-NAME=zookeeper-{{ '{{' }} cluster['name'] {{ '}}' }}
-ZOOCFGDIR=/etc/$NAME/conf
-
-# TODO this is really ugly
-# How to find out, which jars are needed?
-# seems, that log4j requires the log4j.properties file to be in the classpath
-CLASSPATH="$ZOOCFGDIR:/usr/build/classes:/usr/build/lib/*.jar:/usr/share/zookeeper/zookeeper-3.5.1-metrika.jar:/usr/share/zookeeper/slf4j-log4j12-1.7.5.jar:/usr/share/zookeeper/slf4j-api-1.7.5.jar:/usr/share/zookeeper/servlet-api-2.5-20081211.jar:/usr/share/zookeeper/netty-3.7.0.Final.jar:/usr/share/zookeeper/log4j-1.2.16.jar:/usr/share/zookeeper/jline-2.11.jar:/usr/share/zookeeper/jetty-util-6.1.26.jar:/usr/share/zookeeper/jetty-6.1.26.jar:/usr/share/zookeeper/javacc.jar:/usr/share/zookeeper/jackson-mapper-asl-1.9.11.jar:/usr/share/zookeeper/jackson-core-asl-1.9.11.jar:/usr/share/zookeeper/commons-cli-1.2.jar:/usr/src/java/lib/*.jar:/usr/etc/zookeeper"
-
-ZOOCFG="$ZOOCFGDIR/zoo.cfg"
-ZOO_LOG_DIR=/var/log/$NAME
-USER=zookeeper
-GROUP=zookeeper
-PIDDIR=/var/run/$NAME
-PIDFILE=$PIDDIR/$NAME.pid
-SCRIPTNAME=/etc/init.d/$NAME
-JAVA=/usr/bin/java
-ZOOMAIN="org.apache.zookeeper.server.quorum.QuorumPeerMain"
-ZOO_LOG4J_PROP="INFO,ROLLINGFILE"
-JMXLOCALONLY=false
-JAVA_OPTS="-Xms{{ '{{' }} cluster.get('xms','128M') {{ '}}' }} \
-    -Xmx{{ '{{' }} cluster.get('xmx','1G') {{ '}}' }} \
-    -Xloggc:/var/log/$NAME/zookeeper-gc.log \
-    -XX:+UseGCLogFileRotation \
-    -XX:NumberOfGCLogFiles=16 \
-    -XX:GCLogFileSize=16M \
-    -verbose:gc \
-    -XX:+PrintGCTimeStamps \
-    -XX:+PrintGCDateStamps \
-    -XX:+PrintGCDetails \
-    -XX:+PrintTenuringDistribution \
-    -XX:+PrintGCApplicationStoppedTime \
-    -XX:+PrintGCApplicationConcurrentTime \
-    -XX:+PrintSafepointStatistics \
-    -XX:+UseParNewGC \
-    -XX:+UseConcMarkSweepGC \
-    -XX:+CMSParallelRemarkEnabled"
-```
-
-Salt init:
-
-``` text
-description "zookeeper-{{ '{{' }} cluster['name'] {{ '}}' }} centralized coordination service"
-
-start on runlevel [2345]
-stop on runlevel [!2345]
-
-respawn
-
-limit nofile 8192 8192
-
-pre-start script
-    [ -r "/etc/zookeeper-{{ '{{' }} cluster['name'] {{ '}}' }}/conf/environment" ] || exit 0
-    . /etc/zookeeper-{{ '{{' }} cluster['name'] {{ '}}' }}/conf/environment
-    [ -d $ZOO_LOG_DIR ] || mkdir -p $ZOO_LOG_DIR
-    chown $USER:$GROUP $ZOO_LOG_DIR
-end script
-
-script
-    . /etc/zookeeper-{{ '{{' }} cluster['name'] {{ '}}' }}/conf/environment
-    [ -r /etc/default/zookeeper ] && . /etc/default/zookeeper
-    if [ -z "$JMXDISABLE" ]; then
-        JAVA_OPTS="$JAVA_OPTS -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=$JMXLOCALONLY"
-    fi
-    exec start-stop-daemon --start -c $USER --exec $JAVA --name zookeeper-{{ '{{' }} cluster['name'] {{ '}}' }} \
-        -- -cp $CLASSPATH $JAVA_OPTS -Dzookeeper.log.dir=${ZOO_LOG_DIR} \
-        -Dzookeeper.root.logger=${ZOO_LOG4J_PROP} $ZOOMAIN $ZOOCFG
-end script
-```
-
-{## [Original article](https://clickhouse.tech/docs/en/operations/tips/) ##}
diff --git a/docs/es/operations/troubleshooting.md b/docs/es/operations/troubleshooting.md
deleted file mode 100644
index 9e8d2caca59..00000000000
--- a/docs/es/operations/troubleshooting.md
+++ /dev/null
@@ -1,146 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_priority: 46
-toc_title: Troubleshooting
----
-
-# Troubleshooting {#troubleshooting}
-
-- [Installation](#troubleshooting-installation-errors)
-- [Connecting to the server](#troubleshooting-accepts-no-connections)
-- [Query processing](#troubleshooting-does-not-process-queries)
-- [Efficiency of query processing](#troubleshooting-too-slow)
-
-## Installation {#troubleshooting-installation-errors}
-
-### You Cannot Get Deb Packages from the ClickHouse Repository with Apt-get {#you-cannot-get-deb-packages-from-clickhouse-repository-with-apt-get}
-
-- Check firewall settings.
-- If you cannot access the repository for any reason, download the packages as described in the [Getting started](../getting-started/index.md) article and install them manually using the `sudo dpkg -i <packages>` command. You will also need the `tzdata` package.
-
-## Connecting to the Server {#troubleshooting-accepts-no-connections}
-
-Possible issues:
-
-- The server is not running.
-- Unexpected or wrong configuration parameters.
-
-### Server Is Not Running {#server-is-not-running}
-
-**Check if server is running**
-
-Command:
-
-``` bash
-$ sudo service clickhouse-server status
-```
-
-If the server is not running, start it with the command:
-
-``` bash
-$ sudo service clickhouse-server start
-```
-
-**Check logs**
-
-The main log of `clickhouse-server` is in `/var/log/clickhouse-server/clickhouse-server.log` by default.
-
-If the server started successfully, you should see the strings:
-
-- `<Information> Application: starting up.` — Server started.
-- `<Information> Application: Ready for connections.` — Server is running and ready for connections.
-
-If `clickhouse-server` failed to start with a configuration error, you should see the `<Error>` string with an error description. For example:
-
-``` text
-2019.01.11 15:23:25.549505 [ 45 ] {} <Error> ExternalDictionaries: Failed reloading 'event2id' external dictionary: Poco::Exception. Code: 1000, e.code() = 111, e.displayText() = Connection refused, e.what() = Connection refused
-```
-
-If you don't see an error at the end of the file, look through the entire file starting from the string:
-
-``` text
-<Information> Application: starting up.
-```
-
-If you try to start a second instance of `clickhouse-server` on the server, you see the following log:
-
-``` text
-2019.01.11 15:25:11.151730 [ 1 ] {} <Information> : Starting ClickHouse 19.1.0 with revision 54413
-2019.01.11 15:25:11.154578 [ 1 ] {} <Information> Application: starting up
-2019.01.11 15:25:11.156361 [ 1 ] {} <Information> StatusFile: Status file ./status already exists - unclean restart. Contents:
-PID: 8510
-Started at: 2019-01-11 15:24:23
-Revision: 54413
-
-2019.01.11 15:25:11.156673 [ 1 ] {} <Error> Application: DB::Exception: Cannot lock file ./status. Another server instance in same directory is already running.
-2019.01.11 15:25:11.156682 [ 1 ] {} <Information> Application: shutting down
-2019.01.11 15:25:11.156686 [ 1 ] {} <Information> Application: Uninitializing subsystem: Logging Subsystem
-2019.01.11 15:25:11.156716 [ 2 ] {} <Information> BaseDaemon: Stop SignalListener thread
-```
-
-**See system.d logs**
-
-If you don't find any useful information in `clickhouse-server` logs, or there aren't any logs, you can view `system.d` logs using the command:
-
-``` bash
-$ sudo journalctl -u clickhouse-server
-```
-
-**Start clickhouse-server in interactive mode**
-
-``` bash
-$ sudo -u clickhouse /usr/bin/clickhouse-server --config-file /etc/clickhouse-server/config.xml
-```
-
-This command starts the server as an interactive application with standard parameters of the autostart script. In this mode `clickhouse-server` prints all the event messages to the console.
-
-### Configuration Parameters {#configuration-parameters}
-
-Check:
-
-- Docker settings.
-
-    If you run ClickHouse in Docker in an IPv6 network, make sure that `network=host` is set.
-
-- Endpoint settings.
-
-    Check the [listen_host](server-configuration-parameters/settings.md#server_configuration_parameters-listen_host) and [tcp_port](server-configuration-parameters/settings.md#server_configuration_parameters-tcp_port) settings.
-
-    ClickHouse server accepts localhost connections only by default.
-
-- HTTP protocol settings.
-
-    Check protocol settings for the HTTP API.
-
-- Secure connection settings.
-
-    Check:
-
-    - The [tcp_port_secure](server-configuration-parameters/settings.md#server_configuration_parameters-tcp_port_secure) setting.
-    - Settings for [SSL certificates](server-configuration-parameters/settings.md#server_configuration_parameters-openssl).
-
-    Use proper parameters while connecting. For example, use the `port_secure` parameter with `clickhouse_client`.
-
-- User settings.
-
-    You might be using the wrong user name or password.
-
-## Query Processing {#troubleshooting-does-not-process-queries}
-
-If ClickHouse is not able to process the query, it sends an error description to the client. In the `clickhouse-client` you get a description of the error in the console. If you are using the HTTP interface, ClickHouse sends the error description in the response body. For example:
-
-``` bash
-$ curl 'http://localhost:8123/' --data-binary "SELECT a"
-Code: 47, e.displayText() = DB::Exception: Unknown identifier: a. Note that there are no tables (FROM clause) in your query, context: required_names: 'a' source_tables: table_aliases: private_aliases: column_aliases: public_columns: 'a' masked_columns: array_join_columns: source_columns: , e.what() = DB::Exception
-```
-
-If you start `clickhouse-client` with the `stack-trace` parameter, ClickHouse returns the server stack trace with the description of an error.
-
-You might see a message about a broken connection. In this case, you can repeat the query. If the connection breaks every time you perform the query, check the server logs for errors.
-
-## Efficiency of Query Processing {#troubleshooting-too-slow}
-
-If you see that ClickHouse is working too slowly, you need to profile the load on the server resources and network for your queries.
-
-You can use the clickhouse-benchmark utility to profile queries. It shows the number of queries processed per second, the number of rows processed per second, and percentiles of query processing times.
diff --git a/docs/es/operations/update.md b/docs/es/operations/update.md
deleted file mode 100644
index 11d15381d72..00000000000
--- a/docs/es/operations/update.md
+++ /dev/null
@@ -1,20 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_priority: 47
-toc_title: ClickHouse Update
----
-
-# ClickHouse Update {#clickhouse-update}
-
-If ClickHouse was installed from deb packages, execute the following commands on the server:
-
-``` bash
-$ sudo apt-get update
-$ sudo apt-get install clickhouse-client clickhouse-server
-$ sudo service clickhouse-server restart
-```
-
-If you installed ClickHouse using something other than the recommended deb packages, use the appropriate update method.
-
-ClickHouse does not support a distributed update. The operation should be performed consecutively on each separate server. Do not update all the servers on a cluster simultaneously, or the cluster will be unavailable for some time.
diff --git a/docs/es/operations/utilities/clickhouse-benchmark.md b/docs/es/operations/utilities/clickhouse-benchmark.md
deleted file mode 100644
index 9bcafa40dfe..00000000000
--- a/docs/es/operations/utilities/clickhouse-benchmark.md
+++ /dev/null
@@ -1,156 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_priority: 61
-toc_title: clickhouse-benchmark
----
-
-# clickhouse-benchmark {#clickhouse-benchmark}
-
-Connects to a ClickHouse server and repeatedly sends the specified queries.
-
-Syntax:
-
-``` bash
-$ echo "single query" | clickhouse-benchmark [keys]
-```
-
-or
-
-``` bash
-$ clickhouse-benchmark [keys] <<< "single query"
-```
-
-If you want to send a set of queries, create a text file and place each query on an individual line in this file. For example:
-
-``` sql
-SELECT * FROM system.numbers LIMIT 10000000
-SELECT 1
-```
-
-Then pass this file to the standard input of `clickhouse-benchmark`.
-
-``` bash
-clickhouse-benchmark [keys] < queries_file
-```
-
-## Keys {#clickhouse-benchmark-keys}
-
-- `-c N`, `--concurrency=N` — Number of queries that `clickhouse-benchmark` sends simultaneously. Default value: 1.
-- `-d N`, `--delay=N` — Interval in seconds between intermediate reports (set 0 to disable reports). Default value: 1.
-- `-h WORD`, `--host=WORD` — Server host. Default value: `localhost`. For the [comparison mode](#clickhouse-benchmark-comparison-mode) you can use multiple `-h` keys.
-- `-p N`, `--port=N` — Server port. Default value: 9000. For the [comparison mode](#clickhouse-benchmark-comparison-mode) you can use multiple `-p` keys.
-- `-i N`, `--iterations=N` — Total number of queries. Default value: 0.
-- `-r`, `--randomize` — Random order of queries execution if there is more than one input query.
-- `-s`, `--secure` — Using TLS connection.
-- `-t N`, `--timelimit=N` — Time limit in seconds. `clickhouse-benchmark` stops sending queries when the specified time limit is reached. Default value: 0 (time limit disabled).
-- `--confidence=N` — Level of confidence for T-test. Possible values: 0 (80%), 1 (90%), 2 (95%), 3 (98%), 4 (99%), 5 (99.5%). Default value: 5. In the [comparison mode](#clickhouse-benchmark-comparison-mode) `clickhouse-benchmark` performs the [Independent two-sample Student's t-test](https://en.wikipedia.org/wiki/Student%27s_t-test#Independent_two-sample_t-test) to determine whether the two distributions aren't different with the selected level of confidence.
-- `--cumulative` — Printing cumulative data instead of data per interval.
-- `--database=DATABASE_NAME` — ClickHouse database name. Default value: `default`.
-- `--json=FILEPATH` — JSON output. When the key is set, `clickhouse-benchmark` outputs a report to the specified JSON file.
-- `--user=USERNAME` — ClickHouse user name. Default value: `default`.
-- `--password=PSWD` — ClickHouse user password. Default value: empty string.
-- `--stacktrace` — Stack traces output. When the key is set, `clickhouse-benchmark` outputs stack traces of exceptions.
-- `--stage=WORD` — Query processing stage at server. ClickHouse stops query processing and returns an answer to `clickhouse-benchmark` at the specified stage. Possible values: `complete`, `fetch_columns`, `with_mergeable_state`. Default value: `complete`.
-- `--help` — Shows the help message.
-
-If you want to apply some [settings](../../operations/settings/index.md) for queries, pass them as a key `--<session setting name>=SETTING_VALUE`. For example, `--max_memory_usage=1048576`.
-
-## Output {#clickhouse-benchmark-output}
-
-By default, `clickhouse-benchmark` reports for each `--delay` interval.
-
-Example of the report:
-
-``` text
-Queries executed: 10.
-
-localhost:9000, queries 10, QPS: 6.772, RPS: 67904487.440, MiB/s: 518.070, result RPS: 67721584.984, result MiB/s: 516.675.
-
-0.000% 0.145 sec.
-10.000% 0.146 sec.
-20.000% 0.146 sec.
-30.000% 0.146 sec.
-40.000% 0.147 sec.
-50.000% 0.148 sec.
-60.000% 0.148 sec.
-70.000% 0.148 sec.
-80.000% 0.149 sec.
-90.000% 0.150 sec.
-95.000% 0.150 sec.
-99.000% 0.150 sec.
-99.900% 0.150 sec.
-99.990% 0.150 sec.
-```
-
-In the report you can find:

-- Number of queries in the `Queries executed:` field.
-
-- Status string containing (in order):
-
-    - Endpoint of the ClickHouse server.
-    - Number of processed queries.
-    - QPS: How many queries the server performed per second during a period specified in the `--delay` argument.
-    - RPS: How many rows the server read per second during a period specified in the `--delay` argument.
-    - MiB/s: How many mebibytes the server read per second during a period specified in the `--delay` argument.
-    - result RPS: How many rows placed by the server to the result of a query per second during a period specified in the `--delay` argument.
-    - result MiB/s. How many mebibytes placed by the server to the result of a query per second during a period specified in the `--delay` argument.
-
-- Percentiles of queries execution time.
-
-## Comparison Mode {#clickhouse-benchmark-comparison-mode}
-
-`clickhouse-benchmark` can compare performances for two running ClickHouse servers.
-
-To use the comparison mode, specify the endpoints of both servers by two pairs of `--host`, `--port` keys. Keys are matched together by position in the argument list: the first `--host` is matched with the first `--port`, and so on. `clickhouse-benchmark` establishes connections to both servers, then sends queries. Each query is addressed to a randomly selected server. The results are shown for each server separately.
-
-## Example {#clickhouse-benchmark-example}
-
-``` bash
-$ echo "SELECT * FROM system.numbers LIMIT 10000000 OFFSET 10000000" | clickhouse-benchmark -i 10
-```
-
-``` text
-Loaded 1 queries.
-
-Queries executed: 6.
-
-localhost:9000, queries 6, QPS: 6.153, RPS: 123398340.957, MiB/s: 941.455, result RPS: 61532982.200, result MiB/s: 469.459.
-
-0.000% 0.159 sec.
-10.000% 0.159 sec.
-20.000% 0.159 sec.
-30.000% 0.160 sec.
-40.000% 0.160 sec.
-50.000% 0.162 sec.
-60.000% 0.164 sec.
-70.000% 0.165 sec.
-80.000% 0.166 sec.
-90.000% 0.166 sec.
-95.000% 0.167 sec.
-99.000% 0.167 sec.
-99.900% 0.167 sec.
-99.990% 0.167 sec.
-
-
-
-Queries executed: 10.
-
-localhost:9000, queries 10, QPS: 6.082, RPS: 121959604.568, MiB/s: 930.478, result RPS: 60815551.642, result MiB/s: 463.986.
-
-0.000% 0.159 sec.
-10.000% 0.159 sec.
-20.000% 0.160 sec.
-30.000% 0.163 sec.
-40.000% 0.164 sec.
-50.000% 0.165 sec.
-60.000% 0.166 sec.
-70.000% 0.166 sec.
-80.000% 0.167 sec.
-90.000% 0.167 sec.
-95.000% 0.170 sec.
-99.000% 0.172 sec.
-99.900% 0.172 sec.
-99.990% 0.172 sec.
-```
diff --git a/docs/es/operations/utilities/clickhouse-copier.md b/docs/es/operations/utilities/clickhouse-copier.md
deleted file mode 100644
index 5717ffaa737..00000000000
--- a/docs/es/operations/utilities/clickhouse-copier.md
+++ /dev/null
@@ -1,176 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_priority: 59
-toc_title: clickhouse-copier
----
-
-# clickhouse-copier {#clickhouse-copier}
-
-Copies data from the tables in one cluster to tables in another (or the same) cluster.
-
-You can run multiple `clickhouse-copier` instances on different servers to perform the same job. ZooKeeper is used for syncing the processes.
-
-After starting, `clickhouse-copier`:
-
-- Connects to ZooKeeper and receives:
-
-    - Copying jobs.
-    - The state of the copying jobs.
-
-- It performs the jobs.
-
-    Each running process chooses the “closest” shard of the source cluster and copies the data into the destination cluster, resharding the data if necessary.
-
-`clickhouse-copier` tracks the changes in ZooKeeper and applies them on the fly.
-
-To reduce network traffic, we recommend running `clickhouse-copier` on the same server where the source data is located.
-
-## Running clickhouse-copier {#running-clickhouse-copier}
-
-The utility should be run manually:
-
-``` bash
-$ clickhouse-copier copier --daemon --config zookeeper.xml --task-path /task/path --base-dir /path/to/dir
-```
-
-Parameters:
-
-- `daemon` — Starts `clickhouse-copier` in daemon mode.
-- `config` — The path to the `zookeeper.xml` con los parámetros para la conexión a ZooKeeper. -- `task-path` — The path to the ZooKeeper node. This node is used for syncing `clickhouse-copier` procesos y tareas de almacenamiento. Las tareas se almacenan en `$task-path/description`. -- `task-file` — Optional path to file with task configuration for initial upload to ZooKeeper. -- `task-upload-force` — Force upload `task-file` incluso si el nodo ya existe. -- `base-dir` — The path to logs and auxiliary files. When it starts, `clickhouse-copier` crear `clickhouse-copier_YYYYMMHHSS_` subdirectorios en `$base-dir`. Si se omite este parámetro, los directorios se crean en el directorio donde `clickhouse-copier` se puso en marcha. - -## Formato de Zookeeper.XML {#format-of-zookeeper-xml} - -``` xml - - - trace - 100M - 3 - - - - - 127.0.0.1 - 2181 - - - -``` - -## Configuración de tareas de copia {#configuration-of-copying-tasks} - -``` xml - - - - - - false - - 127.0.0.1 - 9000 - - - ... - - - - ... - - - - - 2 - - - - 1 - - - - - 0 - - - - - 3 - - 1 - - - - - - - - source_cluster - test - hits - - - destination_cluster - test - hits2 - - - - ENGINE=ReplicatedMergeTree('/clickhouse/tables/{cluster}/{shard}/hits2', '{replica}') - PARTITION BY toMonday(date) - ORDER BY (CounterID, EventDate) - - - - jumpConsistentHash(intHash64(UserID), 2) - - - CounterID != 0 - - - - '2018-02-26' - '2018-03-05' - ... - - - - - - ... - - ... - - -``` - -`clickhouse-copier` seguimiento de los cambios en `/task/path/description` y los aplica sobre la marcha. Por ejemplo, si cambia el valor de `max_workers`, el número de procesos que ejecutan tareas también cambiará. - -[Artículo Original](https://clickhouse.tech/docs/en/operations/utils/clickhouse-copier/) diff --git a/docs/es/operations/utilities/clickhouse-local.md b/docs/es/operations/utilities/clickhouse-local.md deleted file mode 100644 index e122f668f53..00000000000 --- a/docs/es/operations/utilities/clickhouse-local.md +++ /dev/null @@ -1,81 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 60 -toc_title: clickhouse-local ---- - -# clickhouse-local {#clickhouse-local} - -El `clickhouse-local` El programa le permite realizar un procesamiento rápido en archivos locales, sin tener que implementar y configurar el servidor ClickHouse. - -Acepta datos que representan tablas y las consulta usando [Nombre de la red inalámbrica (SSID):](../../sql-reference/index.md). - -`clickhouse-local` utiliza el mismo núcleo que el servidor ClickHouse, por lo que es compatible con la mayoría de las características y el mismo conjunto de formatos y motores de tabla. - -Predeterminada `clickhouse-local` no tiene acceso a los datos en el mismo host, pero admite la carga de la configuración del servidor `--config-file` argumento. - -!!! warning "Advertencia" - No se recomienda cargar la configuración del servidor de producción en `clickhouse-local` Porque los datos pueden dañarse en caso de error humano. - -## Uso {#usage} - -Uso básico: - -``` bash -$ clickhouse-local --structure "table_structure" --input-format "format_of_incoming_data" -q "query" -``` - -Argumento: - -- `-S`, `--structure` — table structure for input data. -- `-if`, `--input-format` — input format, `TSV` predeterminada. -- `-f`, `--file` — path to data, `stdin` predeterminada. -- `-q` `--query` — queries to execute with `;` como delimitador. -- `-N`, `--table` — table name where to put output data, `table` predeterminada. 
-- `-of`, `--format`, `--output-format` — output format, `TSV` predeterminada. -- `--stacktrace` — whether to dump debug output in case of exception. -- `--verbose` — more details on query execution. -- `-s` — disables `stderr` tala. -- `--config-file` — path to configuration file in same format as for ClickHouse server, by default the configuration empty. -- `--help` — arguments references for `clickhouse-local`. - -También hay argumentos para cada variable de configuración de ClickHouse que se usan más comúnmente en lugar de `--config-file`. - -## Ejemplos {#examples} - -``` bash -$ echo -e "1,2\n3,4" | clickhouse-local -S "a Int64, b Int64" -if "CSV" -q "SELECT * FROM table" -Read 2 rows, 32.00 B in 0.000 sec., 5182 rows/sec., 80.97 KiB/sec. -1 2 -3 4 -``` - -El ejemplo anterior es el mismo que: - -``` bash -$ echo -e "1,2\n3,4" | clickhouse-local -q "CREATE TABLE table (a Int64, b Int64) ENGINE = File(CSV, stdin); SELECT a, b FROM table; DROP TABLE table" -Read 2 rows, 32.00 B in 0.000 sec., 4987 rows/sec., 77.93 KiB/sec. -1 2 -3 4 -``` - -Ahora vamos a usuario de memoria de salida para cada usuario de Unix: - -``` bash -$ ps aux | tail -n +2 | awk '{ printf("%s\t%s\n", $1, $4) }' | clickhouse-local -S "user String, mem Float64" -q "SELECT user, round(sum(mem), 2) as memTotal FROM table GROUP BY user ORDER BY memTotal DESC FORMAT Pretty" -``` - -``` text -Read 186 rows, 4.15 KiB in 0.035 sec., 5302 rows/sec., 118.34 KiB/sec. -┏━━━━━━━━━━┳━━━━━━━━━━┓ -┃ user ┃ memTotal ┃ -┡━━━━━━━━━━╇━━━━━━━━━━┩ -│ bayonet │ 113.5 │ -├──────────┼──────────┤ -│ root │ 8.8 │ -├──────────┼──────────┤ -... -``` - -[Artículo Original](https://clickhouse.tech/docs/en/operations/utils/clickhouse-local/) diff --git a/docs/es/operations/utilities/index.md b/docs/es/operations/utilities/index.md deleted file mode 100644 index a69397a326c..00000000000 --- a/docs/es/operations/utilities/index.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_folder_title: Utilidad -toc_priority: 56 -toc_title: "Descripci\xF3n" ---- - -# Utilidad ClickHouse {#clickhouse-utility} - -- [Sistema abierto.](clickhouse-local.md#clickhouse-local) — Allows running SQL queries on data without stopping the ClickHouse server, similar to how `awk` hace esto. -- [Método de codificación de datos:](clickhouse-copier.md) — Copies (and reshards) data from one cluster to another cluster. -- [Sistema abierto.](clickhouse-benchmark.md) — Loads server with the custom queries and settings. 
- -[Artículo Original](https://clickhouse.tech/docs/en/operations/utils/) diff --git a/docs/es/roadmap.md b/docs/es/roadmap.md deleted file mode 100644 index 60db1c608df..00000000000 --- a/docs/es/roadmap.md +++ /dev/null @@ -1,16 +0,0 @@ ---- -machine_translated: true ---- - -# Hoja De Ruta {#roadmap} - -## Q1 2020 {#q1-2020} - -- Control de acceso basado en roles - -## Q2 2020 {#q2-2020} - -- Integración con servicios de autenticación externos -- Grupos de recursos para una distribución más precisa de la capacidad del clúster entre los usuarios - -{## [Artículo Original](https://clickhouse.tech/docs/es/roadmap/) ##} diff --git a/docs/es/sql-reference/aggregate-functions/combinators.md b/docs/es/sql-reference/aggregate-functions/combinators.md deleted file mode 100644 index c9fdcb9478f..00000000000 --- a/docs/es/sql-reference/aggregate-functions/combinators.md +++ /dev/null @@ -1,245 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 37 -toc_title: Combinadores ---- - -# Combinadores de funciones agregadas {#aggregate_functions_combinators} - -El nombre de una función agregada puede tener un sufijo anexado. Esto cambia la forma en que funciona la función de agregado. - -## -Si {#agg-functions-combinator-if} - -The suffix -If can be appended to the name of any aggregate function. In this case, the aggregate function accepts an extra argument – a condition (Uint8 type). The aggregate function processes only the rows that trigger the condition. If the condition was not triggered even once, it returns a default value (usually zeros or empty strings). - -Ejemplos: `sumIf(column, cond)`, `countIf(cond)`, `avgIf(x, cond)`, `quantilesTimingIf(level1, level2)(x, cond)`, `argMinIf(arg, val, cond)` y así sucesivamente. - -Con las funciones de agregado condicional, puede calcular agregados para varias condiciones a la vez, sin utilizar subconsultas y `JOIN`Por ejemplo, en Yandex.Metrica, las funciones de agregado condicional se utilizan para implementar la funcionalidad de comparación de segmentos. - -## -Matriz {#agg-functions-combinator-array} - -El sufijo -Array se puede agregar a cualquier función agregada. En este caso, la función de agregado toma argumentos del ‘Array(T)’ tipo (arrays) en lugar de ‘T’ argumentos de tipo. Si la función de agregado acepta varios argumentos, deben ser matrices de igual longitud. Al procesar matrices, la función de agregado funciona como la función de agregado original en todos los elementos de la matriz. - -Ejemplo 1: `sumArray(arr)` - Totales de todos los elementos de todos ‘arr’ matriz. En este ejemplo, podría haber sido escrito más simplemente: `sum(arraySum(arr))`. - -Ejemplo 2: `uniqArray(arr)` – Counts the number of unique elements in all ‘arr’ matriz. Esto podría hacerse de una manera más fácil: `uniq(arrayJoin(arr))`, pero no siempre es posible agregar ‘arrayJoin’ a una consulta. - --If y -Array se pueden combinar. Obstante, ‘Array’ debe venir primero, entonces ‘If’. Ejemplos: `uniqArrayIf(arr, cond)`, `quantilesTimingArrayIf(level1, level2)(arr, cond)`. Debido a este pedido, el ‘cond’ argumento no será una matriz. - -## -Estado {#agg-functions-combinator-state} - -Si aplica este combinador, la función de agregado no devuelve el valor resultante (como el número de valores únicos para el [uniq](reference.md#agg_function-uniq) función), pero un estado intermedio de la agregación (para `uniq`, esta es la tabla hash para calcular el número de valores únicos). 
Este es un `AggregateFunction(...)` que puede ser utilizado para su posterior procesamiento o almacenado en una tabla para terminar de agregar más tarde. - -Para trabajar con estos estados, use: - -- [AgregaciónMergeTree](../../engines/table-engines/mergetree-family/aggregatingmergetree.md) motor de mesa. -- [finalizeAggregation](../../sql-reference/functions/other-functions.md#function-finalizeaggregation) función. -- [runningAccumulate](../../sql-reference/functions/other-functions.md#function-runningaccumulate) función. -- [-Fusionar](#aggregate_functions_combinators-merge) combinador. -- [-MergeState](#aggregate_functions_combinators-mergestate) combinador. - -## -Fusionar {#aggregate_functions_combinators-merge} - -Si aplica este combinador, la función de agregado toma el estado de agregación intermedio como argumento, combina los estados para finalizar la agregación y devuelve el valor resultante. - -## -MergeState {#aggregate_functions_combinators-mergestate} - -Combina los estados de agregación intermedios de la misma manera que el combinador -Merge. Sin embargo, no devuelve el valor resultante, sino un estado de agregación intermedio, similar al combinador -State. - -## -ForEach {#agg-functions-combinator-foreach} - -Convierte una función de agregado para tablas en una función de agregado para matrices que agrega los elementos de matriz correspondientes y devuelve una matriz de resultados. Por ejemplo, `sumForEach` para las matrices `[1, 2]`, `[3, 4, 5]`y`[6, 7]`devuelve el resultado `[10, 13, 5]` después de agregar los elementos de la matriz correspondientes. - -## -OPor defecto {#agg-functions-combinator-ordefault} - -Cambia el comportamiento de una función agregada. - -Si una función agregada no tiene valores de entrada, con este combinador devuelve el valor predeterminado para su tipo de datos de retorno. Se aplica a las funciones agregadas que pueden tomar datos de entrada vacíos. - -`-OrDefault` se puede utilizar con otros combinadores. - -**Sintaxis** - -``` sql -OrDefault(x) -``` - -**Parámetros** - -- `x` — Aggregate function parameters. - -**Valores devueltos** - -Devuelve el valor predeterminado del tipo devuelto de una función de agregado si no hay nada que agregar. - -El tipo depende de la función de agregado utilizada. - -**Ejemplo** - -Consulta: - -``` sql -SELECT avg(number), avgOrDefault(number) FROM numbers(0) -``` - -Resultado: - -``` text -┌─avg(number)─┬─avgOrDefault(number)─┐ -│ nan │ 0 │ -└─────────────┴──────────────────────┘ -``` - -También `-OrDefault` se puede utilizar con otros combinadores. Es útil cuando la función de agregado no acepta la entrada vacía. - -Consulta: - -``` sql -SELECT avgOrDefaultIf(x, x > 10) -FROM -( - SELECT toDecimal32(1.23, 2) AS x -) -``` - -Resultado: - -``` text -┌─avgOrDefaultIf(x, greater(x, 10))─┐ -│ 0.00 │ -└───────────────────────────────────┘ -``` - -## -OrNull {#agg-functions-combinator-ornull} - -Cambia el comportamiento de una función agregada. - -Este combinador convierte un resultado de una función agregada en [NULL](../data-types/nullable.md) tipo de datos. Si la función de agregado no tiene valores para calcular devuelve [NULL](../syntax.md#null-literal). - -`-OrNull` se puede utilizar con otros combinadores. - -**Sintaxis** - -``` sql -OrNull(x) -``` - -**Parámetros** - -- `x` — Aggregate function parameters. - -**Valores devueltos** - -- El resultado de la función de agregado, convertida a la `Nullable` tipo de datos. -- `NULL`, si no hay nada que agregar. - -Tipo: `Nullable(aggregate function return type)`. 
- -**Ejemplo** - -Añadir `-orNull` hasta el final de la función agregada. - -Consulta: - -``` sql -SELECT sumOrNull(number), toTypeName(sumOrNull(number)) FROM numbers(10) WHERE number > 10 -``` - -Resultado: - -``` text -┌─sumOrNull(number)─┬─toTypeName(sumOrNull(number))─┐ -│ ᴺᵁᴸᴸ │ Nullable(UInt64) │ -└───────────────────┴───────────────────────────────┘ -``` - -También `-OrNull` se puede utilizar con otros combinadores. Es útil cuando la función de agregado no acepta la entrada vacía. - -Consulta: - -``` sql -SELECT avgOrNullIf(x, x > 10) -FROM -( - SELECT toDecimal32(1.23, 2) AS x -) -``` - -Resultado: - -``` text -┌─avgOrNullIf(x, greater(x, 10))─┐ -│ ᴺᵁᴸᴸ │ -└────────────────────────────────┘ -``` - -## -Remuestrear {#agg-functions-combinator-resample} - -Permite dividir los datos en grupos y, a continuación, agregar por separado los datos de esos grupos. Los grupos se crean dividiendo los valores de una columna en intervalos. - -``` sql -Resample(start, end, step)(, resampling_key) -``` - -**Parámetros** - -- `start` — Starting value of the whole required interval for `resampling_key` valor. -- `stop` — Ending value of the whole required interval for `resampling_key` valor. Todo el intervalo no incluye el `stop` valor `[start, stop)`. -- `step` — Step for separating the whole interval into subintervals. The `aggFunction` se ejecuta sobre cada uno de esos subintervalos de forma independiente. -- `resampling_key` — Column whose values are used for separating data into intervals. -- `aggFunction_params` — `aggFunction` parámetros. - -**Valores devueltos** - -- Matriz de `aggFunction` resultados para cada subintervalo. - -**Ejemplo** - -Considere el `people` con los siguientes datos: - -``` text -┌─name───┬─age─┬─wage─┐ -│ John │ 16 │ 10 │ -│ Alice │ 30 │ 15 │ -│ Mary │ 35 │ 8 │ -│ Evelyn │ 48 │ 11.5 │ -│ David │ 62 │ 9.9 │ -│ Brian │ 60 │ 16 │ -└────────┴─────┴──────┘ -``` - -Obtengamos los nombres de las personas cuya edad se encuentra en los intervalos de `[30,60)` y `[60,75)`. Como usamos la representación entera para la edad, obtenemos edades en el `[30, 59]` y `[60,74]` intervalo. - -Para agregar nombres en una matriz, usamos el [Método de codificación de datos:](reference.md#agg_function-grouparray) función de agregado. Se necesita un argumento. En nuestro caso, es el `name` columna. El `groupArrayResample` función debe utilizar el `age` columna para agregar nombres por edad. Para definir los intervalos requeridos, pasamos el `30, 75, 30` discusiones sobre el `groupArrayResample` función. - -``` sql -SELECT groupArrayResample(30, 75, 30)(name, age) FROM people -``` - -``` text -┌─groupArrayResample(30, 75, 30)(name, age)─────┐ -│ [['Alice','Mary','Evelyn'],['David','Brian']] │ -└───────────────────────────────────────────────┘ -``` - -Considera los resultados. - -`Jonh` est? fuera de la muestra porque es demasiado joven. Otras personas se distribuyen de acuerdo con los intervalos de edad especificados. - -Ahora vamos a contar el número total de personas y su salario promedio en los intervalos de edad especificados. 
- -``` sql -SELECT - countResample(30, 75, 30)(name, age) AS amount, - avgResample(30, 75, 30)(wage, age) AS avg_wage -FROM people -``` - -``` text -┌─amount─┬─avg_wage──────────────────┐ -│ [3,2] │ [11.5,12.949999809265137] │ -└────────┴───────────────────────────┘ -``` - -[Artículo Original](https://clickhouse.tech/docs/en/query_language/agg_functions/combinators/) diff --git a/docs/es/sql-reference/aggregate-functions/index.md b/docs/es/sql-reference/aggregate-functions/index.md deleted file mode 100644 index 7c7d58d5f94..00000000000 --- a/docs/es/sql-reference/aggregate-functions/index.md +++ /dev/null @@ -1,62 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_folder_title: Funciones agregadas -toc_priority: 33 -toc_title: "Implantaci\xF3n" ---- - -# Funciones agregadas {#aggregate-functions} - -Las funciones agregadas funcionan en el [normal](http://www.sql-tutorial.com/sql-aggregate-functions-sql-tutorial) forma esperada por los expertos en bases de datos. - -ClickHouse también es compatible: - -- [Funciones agregadas paramétricas](parametric-functions.md#aggregate_functions_parametric) que aceptan otros parámetros además de las columnas. -- [Combinadores](combinators.md#aggregate_functions_combinators), que cambian el comportamiento de las funciones agregadas. - -## Procesamiento NULL {#null-processing} - -Durante la agregación, todos `NULL`s se omiten. - -**Ejemplos:** - -Considere esta tabla: - -``` text -┌─x─┬────y─┐ -│ 1 │ 2 │ -│ 2 │ ᴺᵁᴸᴸ │ -│ 3 │ 2 │ -│ 3 │ 3 │ -│ 3 │ ᴺᵁᴸᴸ │ -└───┴──────┘ -``` - -Digamos que necesita sumar los valores en el `y` columna: - -``` sql -SELECT sum(y) FROM t_null_big -``` - - ┌─sum(y)─┐ - │ 7 │ - └────────┘ - -El `sum` función interpreta `NULL` como `0`. En particular, esto significa que si la función recibe la entrada de una selección donde todos los valores son `NULL`, entonces el resultado será `0`, ni `NULL`. - -Ahora puedes usar el `groupArray` función para crear una matriz a partir de la `y` columna: - -``` sql -SELECT groupArray(y) FROM t_null_big -``` - -``` text -┌─groupArray(y)─┐ -│ [2,2,3] │ -└───────────────┘ -``` - -`groupArray` no incluye `NULL` en la matriz resultante. - -[Artículo Original](https://clickhouse.tech/docs/en/query_language/agg_functions/) diff --git a/docs/es/sql-reference/aggregate-functions/parametric-functions.md b/docs/es/sql-reference/aggregate-functions/parametric-functions.md deleted file mode 100644 index ea32920401b..00000000000 --- a/docs/es/sql-reference/aggregate-functions/parametric-functions.md +++ /dev/null @@ -1,499 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 38 -toc_title: "Param\xE9trico" ---- - -# Funciones agregadas paramétricas {#aggregate_functions_parametric} - -Some aggregate functions can accept not only argument columns (used for compression), but a set of parameters – constants for initialization. The syntax is two pairs of brackets instead of one. The first is for parameters, and the second is for arguments. - -## histograma {#histogram} - -Calcula un histograma adaptativo. No garantiza resultados precisos. - -``` sql -histogram(number_of_bins)(values) -``` - -Las funciones utiliza [Un algoritmo de árbol de decisión paralelo de transmisión](http://jmlr.org/papers/volume11/ben-haim10a/ben-haim10a.pdf). Los bordes de los contenedores de histograma se ajustan a medida que los nuevos datos entran en una función. 
In the common case, bin widths are not equal.

**Parameters**

`number_of_bins` — Upper limit for the number of bins in the histogram. The function automatically calculates the number of bins. It tries to reach the specified number of bins, but if it fails, it uses fewer bins.
`values` — [Expression](../syntax.md#syntax-expressions) resulting in input values.

**Returned values**

- [Array](../../sql-reference/data-types/array.md) of [Tuples](../../sql-reference/data-types/tuple.md) of the following format:

        ```
        [(lower_1, upper_1, height_1), ... (lower_N, upper_N, height_N)]
        ```

        - `lower` — Lower bound of the bin.
        - `upper` — Upper bound of the bin.
        - `height` — Calculated height of the bin.

**Example**

``` sql
SELECT histogram(5)(number + 1)
FROM (
    SELECT *
    FROM system.numbers
    LIMIT 20
)
```

``` text
┌─histogram(5)(plus(number, 1))───────────────────────────────────────────┐
│ [(1,4.5,4),(4.5,8.5,4),(8.5,12.75,4.125),(12.75,17,4.625),(17,20,3.25)] │
└─────────────────────────────────────────────────────────────────────────┘
```

You can visualize a histogram with the [bar](../../sql-reference/functions/other-functions.md#function-bar) function, for example:

``` sql
WITH histogram(5)(rand() % 100) AS hist
SELECT
    arrayJoin(hist).3 AS height,
    bar(height, 0, 6, 5) AS bar
FROM
(
    SELECT *
    FROM system.numbers
    LIMIT 20
)
```

``` text
┌─height─┬─bar───┐
│ 2.125 │ █▋ │
│ 3.25 │ ██▌ │
│ 5.625 │ ████▏ │
│ 5.625 │ ████▏ │
│ 3.375 │ ██▌ │
└────────┴───────┘
```

In this case, you should remember that you do not know the histogram bin borders.

## sequenceMatch(pattern)(timestamp, cond1, cond2, …) {#function-sequencematch}

Checks whether the sequence contains an event chain that matches the pattern.

``` sql
sequenceMatch(pattern)(timestamp, cond1, cond2, ...)
```

!!! warning "Warning"
    Events that occur in the same second may lie in the sequence in an undefined order, which affects the result.

**Parameters**

- `pattern` — Pattern string. See [Pattern syntax](#sequence-function-pattern-syntax).

- `timestamp` — Column considered to contain time data. Typical data types are `Date` and `DateTime`. You can also use any of the supported [UInt](../../sql-reference/data-types/int-uint.md) data types.

- `cond1`, `cond2` — Conditions that describe the chain of events. Data type: `UInt8`. You can pass up to 32 condition arguments. The function takes only the events described in these conditions into account. If the sequence contains data that isn't described in a condition, the function skips it.

**Returned values**

- 1, if the pattern is matched.
- 0, if the pattern isn't matched.

Type: `UInt8`.


**Pattern syntax**

- `(?N)` — Matches the condition argument at position `N`. Conditions are numbered in the `[1, 32]` range. For example, `(?1)` matches the argument passed to the `cond1` parameter.

- `.*` — Matches any number of events. You don't need conditional arguments to match this element of the pattern.

- `(?t operator value)` — Sets the time in seconds that should separate two events. For example, the pattern `(?1)(?t>1800)(?2)` matches events that occur more than 1800 seconds from each other. An arbitrary number of any events can lie between these events. You can use the `>=`, `>`, `<`, `<=` operators (see the sketch below).
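As a small hedged sketch of the time operator (reusing the table `t` from the examples below), the following pattern matches only when the second event occurs more than one second after the first:

``` sql
-- Requires the event matching `number = 2` to occur more than 1 second
-- after the event matching `number = 1`; other events may lie in between.
-- For the sample table `t` below (times 1 and 3), the gap is 2 seconds,
-- so this should return 1.
SELECT sequenceMatch('(?1)(?t>1)(?2)')(time, number = 1, number = 2) FROM t
```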
- -**Ejemplos** - -Considere los datos en el `t` tabla: - -``` text -┌─time─┬─number─┐ -│ 1 │ 1 │ -│ 2 │ 3 │ -│ 3 │ 2 │ -└──────┴────────┘ -``` - -Realizar la consulta: - -``` sql -SELECT sequenceMatch('(?1)(?2)')(time, number = 1, number = 2) FROM t -``` - -``` text -┌─sequenceMatch('(?1)(?2)')(time, equals(number, 1), equals(number, 2))─┐ -│ 1 │ -└───────────────────────────────────────────────────────────────────────┘ -``` - -La función encontró la cadena de eventos donde el número 2 sigue al número 1. Se saltó el número 3 entre ellos, porque el número no se describe como un evento. Si queremos tener en cuenta este número al buscar la cadena de eventos dada en el ejemplo, debemos establecer una condición para ello. - -``` sql -SELECT sequenceMatch('(?1)(?2)')(time, number = 1, number = 2, number = 3) FROM t -``` - -``` text -┌─sequenceMatch('(?1)(?2)')(time, equals(number, 1), equals(number, 2), equals(number, 3))─┐ -│ 0 │ -└──────────────────────────────────────────────────────────────────────────────────────────┘ -``` - -En este caso, la función no pudo encontrar la cadena de eventos que coincida con el patrón, porque el evento para el número 3 ocurrió entre 1 y 2. Si en el mismo caso comprobamos la condición para el número 4, la secuencia coincidiría con el patrón. - -``` sql -SELECT sequenceMatch('(?1)(?2)')(time, number = 1, number = 2, number = 4) FROM t -``` - -``` text -┌─sequenceMatch('(?1)(?2)')(time, equals(number, 1), equals(number, 2), equals(number, 4))─┐ -│ 1 │ -└──────────────────────────────────────────────────────────────────────────────────────────┘ -``` - -**Ver también** - -- [sequenceCount](#function-sequencecount) - -## sequenceCount(pattern)(time, cond1, cond2, …) {#function-sequencecount} - -Cuenta el número de cadenas de eventos que coinciden con el patrón. La función busca cadenas de eventos que no se superponen. Comienza a buscar la siguiente cadena después de que se haga coincidir la cadena actual. - -!!! warning "Advertencia" - Los eventos que ocurren en el mismo segundo pueden estar en la secuencia en un orden indefinido que afecta el resultado. - -``` sql -sequenceCount(pattern)(timestamp, cond1, cond2, ...) -``` - -**Parámetros** - -- `pattern` — Pattern string. See [Sintaxis de patrón](#sequence-function-pattern-syntax). - -- `timestamp` — Column considered to contain time data. Typical data types are `Date` y `DateTime`. También puede utilizar cualquiera de los [UInt](../../sql-reference/data-types/int-uint.md) tipos de datos. - -- `cond1`, `cond2` — Conditions that describe the chain of events. Data type: `UInt8`. Puede pasar hasta 32 argumentos de condición. La función sólo tiene en cuenta los eventos descritos en estas condiciones. Si la secuencia contiene datos que no se describen en una condición, la función los omite. - -**Valores devueltos** - -- Número de cadenas de eventos no superpuestas que coinciden. - -Tipo: `UInt64`. 
- -**Ejemplo** - -Considere los datos en el `t` tabla: - -``` text -┌─time─┬─number─┐ -│ 1 │ 1 │ -│ 2 │ 3 │ -│ 3 │ 2 │ -│ 4 │ 1 │ -│ 5 │ 3 │ -│ 6 │ 2 │ -└──────┴────────┘ -``` - -Cuente cuántas veces ocurre el número 2 después del número 1 con cualquier cantidad de otros números entre ellos: - -``` sql -SELECT sequenceCount('(?1).*(?2)')(time, number = 1, number = 2) FROM t -``` - -``` text -┌─sequenceCount('(?1).*(?2)')(time, equals(number, 1), equals(number, 2))─┐ -│ 2 │ -└─────────────────────────────────────────────────────────────────────────┘ -``` - -**Ver también** - -- [sequenceMatch](#function-sequencematch) - -## ventanaEmbudo {#windowfunnel} - -Busca cadenas de eventos en una ventana de tiempo deslizante y calcula el número máximo de eventos que ocurrieron desde la cadena. - -La función funciona de acuerdo con el algoritmo: - -- La función busca datos que desencadenan la primera condición en la cadena y establece el contador de eventos en 1. Este es el momento en que comienza la ventana deslizante. - -- Si los eventos de la cadena ocurren secuencialmente dentro de la ventana, el contador se incrementa. Si se interrumpe la secuencia de eventos, el contador no se incrementa. - -- Si los datos tienen varias cadenas de eventos en diferentes puntos de finalización, la función solo generará el tamaño de la cadena más larga. - -**Sintaxis** - -``` sql -windowFunnel(window, [mode])(timestamp, cond1, cond2, ..., condN) -``` - -**Parámetros** - -- `window` — Length of the sliding window in seconds. -- `mode` - Es un argumento opcional. - - `'strict'` - Cuando el `'strict'` se establece, windowFunnel() aplica condiciones solo para los valores únicos. -- `timestamp` — Name of the column containing the timestamp. Data types supported: [Fecha](../../sql-reference/data-types/date.md), [FechaHora](../../sql-reference/data-types/datetime.md#data_type-datetime) y otros tipos de enteros sin signo (tenga en cuenta que aunque timestamp admite el `UInt64` tipo, su valor no puede exceder el máximo de Int64, que es 2 ^ 63 - 1). -- `cond` — Conditions or data describing the chain of events. [UInt8](../../sql-reference/data-types/int-uint.md). - -**Valor devuelto** - -El número máximo de condiciones desencadenadas consecutivas de la cadena dentro de la ventana de tiempo deslizante. -Se analizan todas las cadenas en la selección. - -Tipo: `Integer`. - -**Ejemplo** - -Determine si un período de tiempo establecido es suficiente para que el usuario seleccione un teléfono y lo compre dos veces en la tienda en línea. - -Establezca la siguiente cadena de eventos: - -1. El usuario inició sesión en su cuenta en la tienda (`eventID = 1003`). -2. El usuario busca un teléfono (`eventID = 1007, product = 'phone'`). -3. El usuario realizó un pedido (`eventID = 1009`). -4. El usuario volvió a realizar el pedido (`eventID = 1010`). 
Input table:

``` text
┌─event_date─┬─user_id─┬───────────timestamp─┬─eventID─┬─product─┐
│ 2019-01-28 │ 1 │ 2019-01-29 10:00:00 │ 1003 │ phone │
└────────────┴─────────┴─────────────────────┴─────────┴─────────┘
┌─event_date─┬─user_id─┬───────────timestamp─┬─eventID─┬─product─┐
│ 2019-01-31 │ 1 │ 2019-01-31 09:00:00 │ 1007 │ phone │
└────────────┴─────────┴─────────────────────┴─────────┴─────────┘
┌─event_date─┬─user_id─┬───────────timestamp─┬─eventID─┬─product─┐
│ 2019-01-30 │ 1 │ 2019-01-30 08:00:00 │ 1009 │ phone │
└────────────┴─────────┴─────────────────────┴─────────┴─────────┘
┌─event_date─┬─user_id─┬───────────timestamp─┬─eventID─┬─product─┐
│ 2019-02-01 │ 1 │ 2019-02-01 08:00:00 │ 1010 │ phone │
└────────────┴─────────┴─────────────────────┴─────────┴─────────┘
```

Find out how far the user `user_id` could get through the chain in the period from January to February 2019.

Query:

``` sql
SELECT
    level,
    count() AS c
FROM
(
    SELECT
        user_id,
        windowFunnel(6048000000000000)(timestamp, eventID = 1003, eventID = 1009, eventID = 1007, eventID = 1010) AS level
    FROM trend
    WHERE (event_date >= '2019-01-01') AND (event_date <= '2019-02-02')
    GROUP BY user_id
)
GROUP BY level
ORDER BY level ASC
```

Result:

``` text
┌─level─┬─c─┐
│ 4 │ 1 │
└───────┴───┘
```

## retention {#retention}

The function takes as arguments a set of conditions from 1 to 32 arguments of type `UInt8` that indicate whether a certain condition was met for the event.
Any condition can be specified as an argument (as in [WHERE](../../sql-reference/statements/select/where.md#select-where)).

The conditions, except the first, apply in pairs: the result of the second will be true if the first and second are true, of the third if the first and third are true, etc.

**Syntax**

``` sql
retention(cond1, cond2, ..., cond32);
```

**Parameters**

- `cond` — an expression that returns a `UInt8` result (1 or 0).

**Returned value**

The array of 1s or 0s.

- 1 — condition was met for the event.
- 0 — condition wasn't met for the event.

Type: `UInt8`.

**Example**

Let's consider an example of calculating the `retention` function to determine site traffic.

**1.** Create a table to illustrate an example.
- -``` sql -CREATE TABLE retention_test(date Date, uid Int32) ENGINE = Memory; - -INSERT INTO retention_test SELECT '2020-01-01', number FROM numbers(5); -INSERT INTO retention_test SELECT '2020-01-02', number FROM numbers(10); -INSERT INTO retention_test SELECT '2020-01-03', number FROM numbers(15); -``` - -Tabla de entrada: - -Consulta: - -``` sql -SELECT * FROM retention_test -``` - -Resultado: - -``` text -┌───────date─┬─uid─┐ -│ 2020-01-01 │ 0 │ -│ 2020-01-01 │ 1 │ -│ 2020-01-01 │ 2 │ -│ 2020-01-01 │ 3 │ -│ 2020-01-01 │ 4 │ -└────────────┴─────┘ -┌───────date─┬─uid─┐ -│ 2020-01-02 │ 0 │ -│ 2020-01-02 │ 1 │ -│ 2020-01-02 │ 2 │ -│ 2020-01-02 │ 3 │ -│ 2020-01-02 │ 4 │ -│ 2020-01-02 │ 5 │ -│ 2020-01-02 │ 6 │ -│ 2020-01-02 │ 7 │ -│ 2020-01-02 │ 8 │ -│ 2020-01-02 │ 9 │ -└────────────┴─────┘ -┌───────date─┬─uid─┐ -│ 2020-01-03 │ 0 │ -│ 2020-01-03 │ 1 │ -│ 2020-01-03 │ 2 │ -│ 2020-01-03 │ 3 │ -│ 2020-01-03 │ 4 │ -│ 2020-01-03 │ 5 │ -│ 2020-01-03 │ 6 │ -│ 2020-01-03 │ 7 │ -│ 2020-01-03 │ 8 │ -│ 2020-01-03 │ 9 │ -│ 2020-01-03 │ 10 │ -│ 2020-01-03 │ 11 │ -│ 2020-01-03 │ 12 │ -│ 2020-01-03 │ 13 │ -│ 2020-01-03 │ 14 │ -└────────────┴─────┘ -``` - -**2.** Agrupar usuarios por ID único `uid` utilizando el `retention` función. - -Consulta: - -``` sql -SELECT - uid, - retention(date = '2020-01-01', date = '2020-01-02', date = '2020-01-03') AS r -FROM retention_test -WHERE date IN ('2020-01-01', '2020-01-02', '2020-01-03') -GROUP BY uid -ORDER BY uid ASC -``` - -Resultado: - -``` text -┌─uid─┬─r───────┐ -│ 0 │ [1,1,1] │ -│ 1 │ [1,1,1] │ -│ 2 │ [1,1,1] │ -│ 3 │ [1,1,1] │ -│ 4 │ [1,1,1] │ -│ 5 │ [0,0,0] │ -│ 6 │ [0,0,0] │ -│ 7 │ [0,0,0] │ -│ 8 │ [0,0,0] │ -│ 9 │ [0,0,0] │ -│ 10 │ [0,0,0] │ -│ 11 │ [0,0,0] │ -│ 12 │ [0,0,0] │ -│ 13 │ [0,0,0] │ -│ 14 │ [0,0,0] │ -└─────┴─────────┘ -``` - -**3.** Calcule el número total de visitas al sitio por día. - -Consulta: - -``` sql -SELECT - sum(r[1]) AS r1, - sum(r[2]) AS r2, - sum(r[3]) AS r3 -FROM -( - SELECT - uid, - retention(date = '2020-01-01', date = '2020-01-02', date = '2020-01-03') AS r - FROM retention_test - WHERE date IN ('2020-01-01', '2020-01-02', '2020-01-03') - GROUP BY uid -) -``` - -Resultado: - -``` text -┌─r1─┬─r2─┬─r3─┐ -│ 5 │ 5 │ 5 │ -└────┴────┴────┘ -``` - -Donde: - -- `r1`- el número de visitantes únicos que visitaron el sitio durante 2020-01-01 (la `cond1` condición). -- `r2`- el número de visitantes únicos que visitaron el sitio durante un período de tiempo específico entre 2020-01-01 y 2020-01-02 (`cond1` y `cond2` condición). -- `r3`- el número de visitantes únicos que visitaron el sitio durante un período de tiempo específico entre 2020-01-01 y 2020-01-03 (`cond1` y `cond3` condición). - -## UniqUpTo(N)(x) {#uniquptonx} - -Calculates the number of different argument values ​​if it is less than or equal to N. If the number of different argument values is greater than N, it returns N + 1. - -Recomendado para usar con Ns pequeños, hasta 10. El valor máximo de N es 100. - -Para el estado de una función agregada, utiliza la cantidad de memoria igual a 1 + N \* el tamaño de un valor de bytes. -Para las cadenas, almacena un hash no criptográfico de 8 bytes. Es decir, el cálculo se aproxima a las cadenas. - -La función también funciona para varios argumentos. - -Funciona lo más rápido posible, excepto en los casos en que se usa un valor N grande y el número de valores únicos es ligeramente menor que N. - -Ejemplo de uso: - -``` text -Problem: Generate a report that shows only keywords that produced at least 5 unique users. 
-Solution: Write in the GROUP BY query SearchPhrase HAVING uniqUpTo(4)(UserID) >= 5 -``` - -[Artículo Original](https://clickhouse.tech/docs/en/query_language/agg_functions/parametric_functions/) - -## sumMapFiltered(keys_to_keep)(claves, valores) {#summapfilteredkeys-to-keepkeys-values} - -El mismo comportamiento que [sumMap](reference.md#agg_functions-summap) excepto que una matriz de claves se pasa como un parámetro. Esto puede ser especialmente útil cuando se trabaja con una alta cardinalidad de claves. diff --git a/docs/es/sql-reference/aggregate-functions/reference.md b/docs/es/sql-reference/aggregate-functions/reference.md deleted file mode 100644 index 572c4d01051..00000000000 --- a/docs/es/sql-reference/aggregate-functions/reference.md +++ /dev/null @@ -1,1914 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 36 -toc_title: Referencia ---- - -# Referencia de función agregada {#aggregate-functions-reference} - -## contar {#agg_function-count} - -Cuenta el número de filas o valores no NULL. - -ClickHouse admite las siguientes sintaxis para `count`: -- `count(expr)` o `COUNT(DISTINCT expr)`. -- `count()` o `COUNT(*)`. El `count()` la sintaxis es específica de ClickHouse. - -**Parámetros** - -La función puede tomar: - -- Cero parámetros. -- Una [expresion](../syntax.md#syntax-expressions). - -**Valor devuelto** - -- Si se llama a la función sin parámetros, cuenta el número de filas. -- Si el [expresion](../syntax.md#syntax-expressions) se pasa, entonces la función cuenta cuántas veces esta expresión devuelve no nula. Si la expresión devuelve un [NULL](../../sql-reference/data-types/nullable.md)-type valor, entonces el resultado de `count` no se queda `Nullable`. La función devuelve 0 si la expresión devuelta `NULL` para todas las filas. - -En ambos casos el tipo del valor devuelto es [UInt64](../../sql-reference/data-types/int-uint.md). - -**Detalles** - -ClickHouse soporta el `COUNT(DISTINCT ...)` sintaxis. El comportamiento de esta construcción depende del [count_distinct_implementation](../../operations/settings/settings.md#settings-count_distinct_implementation) configuración. Define cuál de las [uniq\*](#agg_function-uniq) se utiliza para realizar la operación. El valor predeterminado es el [uniqExact](#agg_function-uniqexact) función. - -El `SELECT count() FROM table` consulta no está optimizado, porque el número de entradas en la tabla no se almacena por separado. Elige una pequeña columna de la tabla y cuenta el número de valores en ella. - -**Ejemplos** - -Ejemplo 1: - -``` sql -SELECT count() FROM t -``` - -``` text -┌─count()─┐ -│ 5 │ -└─────────┘ -``` - -Ejemplo 2: - -``` sql -SELECT name, value FROM system.settings WHERE name = 'count_distinct_implementation' -``` - -``` text -┌─name──────────────────────────┬─value─────┐ -│ count_distinct_implementation │ uniqExact │ -└───────────────────────────────┴───────────┘ -``` - -``` sql -SELECT count(DISTINCT num) FROM t -``` - -``` text -┌─uniqExact(num)─┐ -│ 3 │ -└────────────────┘ -``` - -Este ejemplo muestra que `count(DISTINCT num)` se realiza por el `uniqExact` función según el `count_distinct_implementation` valor de ajuste. - -## cualquiera (x) {#agg_function-any} - -Selecciona el primer valor encontrado. -La consulta se puede ejecutar en cualquier orden e incluso en un orden diferente cada vez, por lo que el resultado de esta función es indeterminado. -Para obtener un resultado determinado, puede usar el ‘min’ o ‘max’ función en lugar de ‘any’. 
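As a brief illustrative sketch (assuming a hypothetical table `t` with a numeric column `x`, not part of the original examples), replacing `any` with `min` or `max` makes the result reproducible:

``` sql
-- `any(x)` may return a different row's value on each run (indeterminate order);
-- `min(x)` and `max(x)` always return the same value for the same data.
SELECT any(x), min(x), max(x) FROM t
```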
- -En algunos casos, puede confiar en el orden de ejecución. Esto se aplica a los casos en que SELECT proviene de una subconsulta que usa ORDER BY. - -Cuando un `SELECT` consulta tiene el `GROUP BY` cláusula o al menos una función agregada, ClickHouse (en contraste con MySQL) requiere que todas las expresiones `SELECT`, `HAVING`, y `ORDER BY` las cláusulas pueden calcularse a partir de claves o de funciones agregadas. En otras palabras, cada columna seleccionada de la tabla debe usarse en claves o dentro de funciones agregadas. Para obtener un comportamiento como en MySQL, puede colocar las otras columnas en el `any` función de agregado. - -## Cualquier pesado (x) {#anyheavyx} - -Selecciona un valor que ocurre con frecuencia [pesos pesados](http://www.cs.umd.edu/~samir/498/karp.pdf) algoritmo. Si hay un valor que se produce más de la mitad de los casos en cada uno de los subprocesos de ejecución de la consulta, se devuelve este valor. Normalmente, el resultado es no determinista. - -``` sql -anyHeavy(column) -``` - -**Argumento** - -- `column` – The column name. - -**Ejemplo** - -Tome el [A tiempo](../../getting-started/example-datasets/ontime.md) conjunto de datos y seleccione cualquier valor que ocurra con frecuencia `AirlineID` columna. - -``` sql -SELECT anyHeavy(AirlineID) AS res -FROM ontime -``` - -``` text -┌───res─┐ -│ 19690 │ -└───────┘ -``` - -## Cualquier último (x) {#anylastx} - -Selecciona el último valor encontrado. -El resultado es tan indeterminado como para el `any` función. - -## Método de codificación de datos: {#groupbitand} - -Se aplica bit a bit `AND` para la serie de números. - -``` sql -groupBitAnd(expr) -``` - -**Parámetros** - -`expr` – An expression that results in `UInt*` tipo. - -**Valor de retorno** - -Valor de la `UInt*` tipo. - -**Ejemplo** - -Datos de prueba: - -``` text -binary decimal -00101100 = 44 -00011100 = 28 -00001101 = 13 -01010101 = 85 -``` - -Consulta: - -``` sql -SELECT groupBitAnd(num) FROM t -``` - -Donde `num` es la columna con los datos de prueba. - -Resultado: - -``` text -binary decimal -00000100 = 4 -``` - -## GrupoBitO {#groupbitor} - -Se aplica bit a bit `OR` para la serie de números. - -``` sql -groupBitOr(expr) -``` - -**Parámetros** - -`expr` – An expression that results in `UInt*` tipo. - -**Valor de retorno** - -Valor de la `UInt*` tipo. - -**Ejemplo** - -Datos de prueba: - -``` text -binary decimal -00101100 = 44 -00011100 = 28 -00001101 = 13 -01010101 = 85 -``` - -Consulta: - -``` sql -SELECT groupBitOr(num) FROM t -``` - -Donde `num` es la columna con los datos de prueba. - -Resultado: - -``` text -binary decimal -01111101 = 125 -``` - -## GrupoBitXor {#groupbitxor} - -Se aplica bit a bit `XOR` para la serie de números. - -``` sql -groupBitXor(expr) -``` - -**Parámetros** - -`expr` – An expression that results in `UInt*` tipo. - -**Valor de retorno** - -Valor de la `UInt*` tipo. - -**Ejemplo** - -Datos de prueba: - -``` text -binary decimal -00101100 = 44 -00011100 = 28 -00001101 = 13 -01010101 = 85 -``` - -Consulta: - -``` sql -SELECT groupBitXor(num) FROM t -``` - -Donde `num` es la columna con los datos de prueba. - -Resultado: - -``` text -binary decimal -01101000 = 104 -``` - -## Método de codificación de datos: {#groupbitmap} - -Mapa de bits o cálculos agregados de una columna entera sin signo, devuelve cardinalidad de tipo UInt64, si agrega el sufijo -State, luego devuelve [objeto de mapa de bits](../../sql-reference/functions/bitmap-functions.md). 
- -``` sql -groupBitmap(expr) -``` - -**Parámetros** - -`expr` – An expression that results in `UInt*` tipo. - -**Valor de retorno** - -Valor de la `UInt64` tipo. - -**Ejemplo** - -Datos de prueba: - -``` text -UserID -1 -1 -2 -3 -``` - -Consulta: - -``` sql -SELECT groupBitmap(UserID) as num FROM t -``` - -Resultado: - -``` text -num -3 -``` - -## Mínimo (x) {#agg_function-min} - -Calcula el mínimo. - -## máximo (x) {#agg_function-max} - -Calcula el máximo. - -## ¿Cómo puedo hacerlo?) {#agg-function-argmin} - -Calcula el ‘arg’ para un valor mínimo ‘val’ valor. Si hay varios valores diferentes de ‘arg’ para valores mínimos de ‘val’, el primero de estos valores encontrados es la salida. - -**Ejemplo:** - -``` text -┌─user─────┬─salary─┐ -│ director │ 5000 │ -│ manager │ 3000 │ -│ worker │ 1000 │ -└──────────┴────────┘ -``` - -``` sql -SELECT argMin(user, salary) FROM salary -``` - -``` text -┌─argMin(user, salary)─┐ -│ worker │ -└──────────────────────┘ -``` - -## Descripción) {#agg-function-argmax} - -Calcula el ‘arg’ para un valor máximo ‘val’ valor. Si hay varios valores diferentes de ‘arg’ para valores máximos de ‘val’, el primero de estos valores encontrados es la salida. - -## suma (x) {#agg_function-sum} - -Calcula la suma. -Solo funciona para números. - -## ¿Cómo puedo obtener más información?) {#sumwithoverflowx} - -Calcula la suma de los números, utilizando el mismo tipo de datos para el resultado que para los parámetros de entrada. Si la suma supera el valor máximo para este tipo de datos, la función devuelve un error. - -Solo funciona para números. - -## Por ejemplo, el valor es el siguiente:)) {#agg_functions-summap} - -Totals el ‘value’ matriz de acuerdo con las claves especificadas en el ‘key’ matriz. -Pasar una tupla de matrices de claves y valores es sinónimo de pasar dos matrices de claves y valores. -El número de elementos en ‘key’ y ‘value’ debe ser el mismo para cada fila que se sume. -Returns a tuple of two arrays: keys in sorted order, and values ​​summed for the corresponding keys. - -Ejemplo: - -``` sql -CREATE TABLE sum_map( - date Date, - timeslot DateTime, - statusMap Nested( - status UInt16, - requests UInt64 - ), - statusMapTuple Tuple(Array(Int32), Array(Int32)) -) ENGINE = Log; -INSERT INTO sum_map VALUES - ('2000-01-01', '2000-01-01 00:00:00', [1, 2, 3], [10, 10, 10], ([1, 2, 3], [10, 10, 10])), - ('2000-01-01', '2000-01-01 00:00:00', [3, 4, 5], [10, 10, 10], ([3, 4, 5], [10, 10, 10])), - ('2000-01-01', '2000-01-01 00:01:00', [4, 5, 6], [10, 10, 10], ([4, 5, 6], [10, 10, 10])), - ('2000-01-01', '2000-01-01 00:01:00', [6, 7, 8], [10, 10, 10], ([6, 7, 8], [10, 10, 10])); - -SELECT - timeslot, - sumMap(statusMap.status, statusMap.requests), - sumMap(statusMapTuple) -FROM sum_map -GROUP BY timeslot -``` - -``` text -┌────────────timeslot─┬─sumMap(statusMap.status, statusMap.requests)─┬─sumMap(statusMapTuple)─────────┐ -│ 2000-01-01 00:00:00 │ ([1,2,3,4,5],[10,10,20,10,10]) │ ([1,2,3,4,5],[10,10,20,10,10]) │ -│ 2000-01-01 00:01:00 │ ([4,5,6,7,8],[10,10,20,10,10]) │ ([4,5,6,7,8],[10,10,20,10,10]) │ -└─────────────────────┴──────────────────────────────────────────────┴────────────────────────────────┘ -``` - -## SkewPop {#skewpop} - -Calcula el [la asimetría](https://en.wikipedia.org/wiki/Skewness) de una secuencia. - -``` sql -skewPop(expr) -``` - -**Parámetros** - -`expr` — [Expresion](../syntax.md#syntax-expressions) devolviendo un número. - -**Valor devuelto** - -The skewness of the given distribution. 
Type — [Float64](../../sql-reference/data-types/float.md)

**Example**

``` sql
SELECT skewPop(value) FROM series_with_value_column
```

## skewSamp {#skewsamp}

Computes the [sample skewness](https://en.wikipedia.org/wiki/Skewness) of a sequence.

It represents an unbiased estimate of the skewness of a random variable if the passed values form its sample.

``` sql
skewSamp(expr)
```

**Parameters**

`expr` — [Expression](../syntax.md#syntax-expressions) returning a number.

**Returned value**

The skewness of the given distribution. Type — [Float64](../../sql-reference/data-types/float.md). If `n <= 1` (`n` is the size of the sample), then the function returns `nan`.

**Example**

``` sql
SELECT skewSamp(value) FROM series_with_value_column
```

## kurtPop {#kurtpop}

Computes the [kurtosis](https://en.wikipedia.org/wiki/Kurtosis) of a sequence.

``` sql
kurtPop(expr)
```

**Parameters**

`expr` — [Expression](../syntax.md#syntax-expressions) returning a number.

**Returned value**

The kurtosis of the given distribution. Type — [Float64](../../sql-reference/data-types/float.md)

**Example**

``` sql
SELECT kurtPop(value) FROM series_with_value_column
```

## kurtSamp {#kurtsamp}

Computes the [sample kurtosis](https://en.wikipedia.org/wiki/Kurtosis) of a sequence.

It represents an unbiased estimate of the kurtosis of a random variable if the passed values form its sample.

``` sql
kurtSamp(expr)
```

**Parameters**

`expr` — [Expression](../syntax.md#syntax-expressions) returning a number.

**Returned value**

The kurtosis of the given distribution. Type — [Float64](../../sql-reference/data-types/float.md). If `n <= 1` (`n` is the size of the sample), then the function returns `nan`.

**Example**

``` sql
SELECT kurtSamp(value) FROM series_with_value_column
```

## avg(x) {#agg_function-avg}

Calculates the average.
Only works for numbers.
The result is always Float64.

## avgWeighted {#avgweighted}

Calculates the [weighted arithmetic mean](https://en.wikipedia.org/wiki/Weighted_arithmetic_mean).

**Syntax**

``` sql
avgWeighted(x, weight)
```

**Parameters**

- `x` — Values. [Integer](../data-types/int-uint.md) or [floating-point](../data-types/float.md).
- `weight` — Weights of the values. [Integer](../data-types/int-uint.md) or [floating-point](../data-types/float.md).

The types of `x` and `weight` must be the same.

**Returned value**

- Weighted mean.
- `NaN`, if all the weights are equal to 0.

Type: [Float64](../data-types/float.md).

**Example**

Query:

``` sql
SELECT avgWeighted(x, w)
FROM values('x Int8, w Int8', (4, 1), (1, 0), (10, 2))
```

Result:

``` text
┌─avgWeighted(x, weight)─┐
│ 8 │
└────────────────────────┘
```

## uniq {#agg_function-uniq}

Calculates the approximate number of different values of the argument.

``` sql
uniq(x[, ...])
```

**Parameters**

The function takes a variable number of parameters. Parameters can be `Tuple`, `Array`, `Date`, `DateTime`, `String`, or numeric types.

**Returned value**

- A [UInt64](../../sql-reference/data-types/int-uint.md)-type number.

**Implementation details**

Function:

- Calculates a hash for all parameters in the aggregate, then uses it in calculations.

- Uses an adaptive sampling algorithm. For the calculation state, the function uses a sample of element hash values up to 65536.
- - This algorithm is very accurate and very efficient on the CPU. When the query contains several of these functions, using `uniq` is almost as fast as using other aggregate functions. - -- Proporciona el resultado de forma determinista (no depende del orden de procesamiento de la consulta). - -Recomendamos usar esta función en casi todos los escenarios. - -**Ver también** - -- [uniqCombined](#agg_function-uniqcombined) -- [UniqCombined64](#agg_function-uniqcombined64) -- [uniqHLL12](#agg_function-uniqhll12) -- [uniqExact](#agg_function-uniqexact) - -## uniqCombined {#agg_function-uniqcombined} - -Calcula el número aproximado de diferentes valores de argumento. - -``` sql -uniqCombined(HLL_precision)(x[, ...]) -``` - -El `uniqCombined` es una buena opción para calcular el número de valores diferentes. - -**Parámetros** - -La función toma un número variable de parámetros. Los parámetros pueden ser `Tuple`, `Array`, `Date`, `DateTime`, `String`, o tipos numéricos. - -`HLL_precision` es el logaritmo base-2 del número de células en [HyperLogLog](https://en.wikipedia.org/wiki/HyperLogLog). Opcional, puede utilizar la función como `uniqCombined(x[, ...])`. El valor predeterminado para `HLL_precision` es 17, que es efectivamente 96 KiB de espacio (2 ^ 17 celdas, 6 bits cada una). - -**Valor devuelto** - -- Numero [UInt64](../../sql-reference/data-types/int-uint.md)-tipo número. - -**Detalles de implementación** - -Función: - -- Calcula un hash (hash de 64 bits para `String` y 32 bits de lo contrario) para todos los parámetros en el agregado, luego lo usa en los cálculos. - -- Utiliza una combinación de tres algoritmos: matriz, tabla hash e HyperLogLog con una tabla de corrección de errores. - - For a small number of distinct elements, an array is used. When the set size is larger, a hash table is used. For a larger number of elements, HyperLogLog is used, which will occupy a fixed amount of memory. - -- Proporciona el resultado de forma determinista (no depende del orden de procesamiento de la consulta). - -!!! note "Nota" - Dado que usa hash de 32 bits para no-`String` tipo, el resultado tendrá un error muy alto para cardinalidades significativamente mayores que `UINT_MAX` (el error aumentará rápidamente después de unas pocas decenas de miles de millones de valores distintos), por lo tanto, en este caso debe usar [UniqCombined64](#agg_function-uniqcombined64) - -En comparación con el [uniq](#agg_function-uniq) función, el `uniqCombined`: - -- Consume varias veces menos memoria. -- Calcula con una precisión varias veces mayor. -- Por lo general, tiene un rendimiento ligeramente menor. En algunos escenarios, `uniqCombined` puede funcionar mejor que `uniq`, por ejemplo, con consultas distribuidas que transmiten un gran número de estados de agregación a través de la red. - -**Ver también** - -- [uniq](#agg_function-uniq) -- [UniqCombined64](#agg_function-uniqcombined64) -- [uniqHLL12](#agg_function-uniqhll12) -- [uniqExact](#agg_function-uniqexact) - -## UniqCombined64 {#agg_function-uniqcombined64} - -Lo mismo que [uniqCombined](#agg_function-uniqcombined), pero utiliza hash de 64 bits para todos los tipos de datos. - -## uniqHLL12 {#agg_function-uniqhll12} - -Calcula el número aproximado de diferentes valores de argumento [HyperLogLog](https://en.wikipedia.org/wiki/HyperLogLog) algoritmo. - -``` sql -uniqHLL12(x[, ...]) -``` - -**Parámetros** - -La función toma un número variable de parámetros. Los parámetros pueden ser `Tuple`, `Array`, `Date`, `DateTime`, `String`, o tipos numéricos. 
- -**Valor devuelto** - -- A [UInt64](../../sql-reference/data-types/int-uint.md)-tipo número. - -**Detalles de implementación** - -Función: - -- Calcula un hash para todos los parámetros en el agregado, luego lo usa en los cálculos. - -- Utiliza el algoritmo HyperLogLog para aproximar el número de valores de argumento diferentes. - - 212 5-bit cells are used. The size of the state is slightly more than 2.5 KB. The result is not very accurate (up to ~10% error) for small data sets (<10K elements). However, the result is fairly accurate for high-cardinality data sets (10K-100M), with a maximum error of ~1.6%. Starting from 100M, the estimation error increases, and the function will return very inaccurate results for data sets with extremely high cardinality (1B+ elements). - -- Proporciona el resultado determinado (no depende del orden de procesamiento de la consulta). - -No recomendamos usar esta función. En la mayoría de los casos, use el [uniq](#agg_function-uniq) o [uniqCombined](#agg_function-uniqcombined) función. - -**Ver también** - -- [uniq](#agg_function-uniq) -- [uniqCombined](#agg_function-uniqcombined) -- [uniqExact](#agg_function-uniqexact) - -## uniqExact {#agg_function-uniqexact} - -Calcula el número exacto de diferentes valores de argumento. - -``` sql -uniqExact(x[, ...]) -``` - -Utilice el `uniqExact` función si necesita absolutamente un resultado exacto. De lo contrario, use el [uniq](#agg_function-uniq) función. - -El `uniqExact` función utiliza más memoria que `uniq`, porque el tamaño del estado tiene un crecimiento ilimitado a medida que aumenta el número de valores diferentes. - -**Parámetros** - -La función toma un número variable de parámetros. Los parámetros pueden ser `Tuple`, `Array`, `Date`, `DateTime`, `String`, o tipos numéricos. - -**Ver también** - -- [uniq](#agg_function-uniq) -- [uniqCombined](#agg_function-uniqcombined) -- [uniqHLL12](#agg_function-uniqhll12) - -## ¿Cómo puedo hacerlo?) {#agg_function-grouparray} - -Crea una matriz de valores de argumento. -Los valores se pueden agregar a la matriz en cualquier orden (indeterminado). - -La segunda versión (con el `max_size` parámetro) limita el tamaño de la matriz resultante a `max_size` elemento. -Por ejemplo, `groupArray (1) (x)` es equivalente a `[any (x)]`. - -En algunos casos, aún puede confiar en el orden de ejecución. Esto se aplica a los casos en que `SELECT` procede de una subconsulta que utiliza `ORDER BY`. - -## GrupoArrayInsertAt {#grouparrayinsertat} - -Inserta un valor en la matriz en la posición especificada. - -**Sintaxis** - -``` sql -groupArrayInsertAt(default_x, size)(x, pos); -``` - -Si en una consulta se insertan varios valores en la misma posición, la función se comporta de las siguientes maneras: - -- Si se ejecuta una consulta en un solo subproceso, se utiliza el primero de los valores insertados. -- Si una consulta se ejecuta en varios subprocesos, el valor resultante es uno indeterminado de los valores insertados. - -**Parámetros** - -- `x` — Value to be inserted. [Expresion](../syntax.md#syntax-expressions) lo que resulta en uno de los [tipos de datos compatibles](../../sql-reference/data-types/index.md). -- `pos` — Position at which the specified element `x` se va a insertar. La numeración de índices en la matriz comienza desde cero. [UInt32](../../sql-reference/data-types/int-uint.md#uint-ranges). -- `default_x`— Default value for substituting in empty positions. Optional parameter. 
[Expresion](../syntax.md#syntax-expressions) dando como resultado el tipo de datos configurado para `x` parámetro. Si `default_x` no está definido, el [valores predeterminados](../../sql-reference/statements/create.md#create-default-values) se utilizan. -- `size`— Length of the resulting array. Optional parameter. When using this parameter, the default value `default_x` debe ser especificado. [UInt32](../../sql-reference/data-types/int-uint.md#uint-ranges). - -**Valor devuelto** - -- Matriz con valores insertados. - -Tipo: [Matriz](../../sql-reference/data-types/array.md#data-type-array). - -**Ejemplo** - -Consulta: - -``` sql -SELECT groupArrayInsertAt(toString(number), number * 2) FROM numbers(5); -``` - -Resultado: - -``` text -┌─groupArrayInsertAt(toString(number), multiply(number, 2))─┐ -│ ['0','','1','','2','','3','','4'] │ -└───────────────────────────────────────────────────────────┘ -``` - -Consulta: - -``` sql -SELECT groupArrayInsertAt('-')(toString(number), number * 2) FROM numbers(5); -``` - -Resultado: - -``` text -┌─groupArrayInsertAt('-')(toString(number), multiply(number, 2))─┐ -│ ['0','-','1','-','2','-','3','-','4'] │ -└────────────────────────────────────────────────────────────────┘ -``` - -Consulta: - -``` sql -SELECT groupArrayInsertAt('-', 5)(toString(number), number * 2) FROM numbers(5); -``` - -Resultado: - -``` text -┌─groupArrayInsertAt('-', 5)(toString(number), multiply(number, 2))─┐ -│ ['0','-','1','-','2'] │ -└───────────────────────────────────────────────────────────────────┘ -``` - -Inserción multihilo de elementos en una posición. - -Consulta: - -``` sql -SELECT groupArrayInsertAt(number, 0) FROM numbers_mt(10) SETTINGS max_block_size = 1; -``` - -Como resultado de esta consulta, obtiene un entero aleatorio en el `[0,9]` gama. Por ejemplo: - -``` text -┌─groupArrayInsertAt(number, 0)─┐ -│ [7] │ -└───────────────────────────────┘ -``` - -## groupArrayMovingSum {#agg_function-grouparraymovingsum} - -Calcula la suma móvil de los valores de entrada. - -``` sql -groupArrayMovingSum(numbers_for_summing) -groupArrayMovingSum(window_size)(numbers_for_summing) -``` - -La función puede tomar el tamaño de la ventana como un parámetro. Si no se especifica, la función toma el tamaño de ventana igual al número de filas de la columna. - -**Parámetros** - -- `numbers_for_summing` — [Expresion](../syntax.md#syntax-expressions) dando como resultado un valor de tipo de datos numérico. -- `window_size` — Size of the calculation window. - -**Valores devueltos** - -- Matriz del mismo tamaño y tipo que los datos de entrada. 
-
-**Example**
-
-The sample table:
-
-``` sql
-CREATE TABLE t
-(
-    `int` UInt8,
-    `float` Float32,
-    `dec` Decimal32(2)
-)
-ENGINE = TinyLog
-```
-
-``` text
-┌─int─┬─float─┬──dec─┐
-│   1 │   1.1 │ 1.10 │
-│   2 │   2.2 │ 2.20 │
-│   4 │   4.4 │ 4.40 │
-│   7 │  7.77 │ 7.77 │
-└─────┴───────┴──────┘
-```
-
-Query:
-
-``` sql
-SELECT
-    groupArrayMovingSum(int) AS I,
-    groupArrayMovingSum(float) AS F,
-    groupArrayMovingSum(dec) AS D
-FROM t
-```
-
-``` text
-┌─I──────────┬─F───────────────────────────────┬─D──────────────────────┐
-│ [1,3,7,14] │ [1.1,3.3000002,7.7000003,15.47] │ [1.10,3.30,7.70,15.47] │
-└────────────┴─────────────────────────────────┴────────────────────────┘
-```
-
-``` sql
-SELECT
-    groupArrayMovingSum(2)(int) AS I,
-    groupArrayMovingSum(2)(float) AS F,
-    groupArrayMovingSum(2)(dec) AS D
-FROM t
-```
-
-``` text
-┌─I──────────┬─F───────────────────────────────┬─D──────────────────────┐
-│ [1,3,6,11] │ [1.1,3.3000002,6.6000004,12.17] │ [1.10,3.30,6.60,12.17] │
-└────────────┴─────────────────────────────────┴────────────────────────┘
-```
-
-## groupArrayMovingAvg {#agg_function-grouparraymovingavg}
-
-Calculates the moving average of input values.
-
-``` sql
-groupArrayMovingAvg(numbers_for_summing)
-groupArrayMovingAvg(window_size)(numbers_for_summing)
-```
-
-The function can take the window size as a parameter. If left unspecified, the function takes the window size equal to the number of rows in the column.
-
-**Parameters**
-
-- `numbers_for_summing` — [Expression](../syntax.md#syntax-expressions) resulting in a numeric data type value.
-- `window_size` — Size of the calculation window.
-
-**Returned values**
-
-- Array of the same size and type as the input data.
-
-The function uses [rounding towards zero](https://en.wikipedia.org/wiki/Rounding#Rounding_towards_zero). It truncates the decimal places insignificant for the resulting data type.
-
-**Example**
-
-The sample table `t`:
-
-``` sql
-CREATE TABLE t
-(
-    `int` UInt8,
-    `float` Float32,
-    `dec` Decimal32(2)
-)
-ENGINE = TinyLog
-```
-
-``` text
-┌─int─┬─float─┬──dec─┐
-│   1 │   1.1 │ 1.10 │
-│   2 │   2.2 │ 2.20 │
-│   4 │   4.4 │ 4.40 │
-│   7 │  7.77 │ 7.77 │
-└─────┴───────┴──────┘
-```
-
-Query:
-
-``` sql
-SELECT
-    groupArrayMovingAvg(int) AS I,
-    groupArrayMovingAvg(float) AS F,
-    groupArrayMovingAvg(dec) AS D
-FROM t
-```
-
-``` text
-┌─I─────────┬─F───────────────────────────────────┬─D─────────────────────┐
-│ [0,0,1,3] │ [0.275,0.82500005,1.9250001,3.8675] │ [0.27,0.82,1.92,3.86] │
-└───────────┴─────────────────────────────────────┴───────────────────────┘
-```
-
-``` sql
-SELECT
-    groupArrayMovingAvg(2)(int) AS I,
-    groupArrayMovingAvg(2)(float) AS F,
-    groupArrayMovingAvg(2)(dec) AS D
-FROM t
-```
-
-``` text
-┌─I─────────┬─F────────────────────────────────┬─D─────────────────────┐
-│ [0,1,3,5] │ [0.55,1.6500001,3.3000002,6.085] │ [0.55,1.65,3.30,6.08] │
-└───────────┴──────────────────────────────────┴───────────────────────┘
-```
-
-## groupUniqArray(x), groupUniqArray(max_size)(x) {#groupuniqarrayx-groupuniqarraymax-sizex}
-
-Creates an array from different argument values. Memory consumption is the same as for the `uniqExact` function.
-
-The second version (with the `max_size` parameter) limits the size of the resulting array to `max_size` elements.
-For example, `groupUniqArray(1)(x)` is equivalent to `[any(x)]`.
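-
-A small added example: collecting the distinct remainders of `number % 3` returns the three unique values; note that the order of elements in the result is not guaranteed:
-
-``` sql
-SELECT groupUniqArray(number % 3) FROM numbers(10)
-```
-
-``` text
-┌─groupUniqArray(modulo(number, 3))─┐
-│ [0,1,2]                           │
-└───────────────────────────────────┘
-```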
-
-## quantile {#quantile}
-
-Computes an approximate [quantile](https://en.wikipedia.org/wiki/Quantile) of a numeric data sequence.
-
-This function applies [reservoir sampling](https://en.wikipedia.org/wiki/Reservoir_sampling) with a reservoir size up to 8192 and a random number generator for sampling. The result is non-deterministic. To get an exact quantile, use the [quantileExact](#quantileexact) function.
-
-When using multiple `quantile*` functions with different levels in a query, the internal states are not combined (that is, the query works less efficiently than it could). In this case, use the [quantiles](#quantiles) function.
-
-**Syntax**
-
-``` sql
-quantile(level)(expr)
-```
-
-Alias: `median`.
-
-**Parameters**
-
-- `level` — Level of quantile. Optional parameter. Constant floating-point number from 0 to 1. We recommend using a `level` value in the range of `[0.01, 0.99]`. Default value: 0.5. At `level=0.5` the function calculates the [median](https://en.wikipedia.org/wiki/Median).
-- `expr` — Expression over the column values resulting in numeric [data types](../../sql-reference/data-types/index.md#data_types), [Date](../../sql-reference/data-types/date.md) or [DateTime](../../sql-reference/data-types/datetime.md).
-
-**Returned value**
-
-- Approximate quantile of the specified level.
-
-Type:
-
-- [Float64](../../sql-reference/data-types/float.md) for numeric data type input.
-- [Date](../../sql-reference/data-types/date.md) if input values have the `Date` type.
-- [DateTime](../../sql-reference/data-types/datetime.md) if input values have the `DateTime` type.
-
-**Example**
-
-Input table:
-
-``` text
-┌─val─┐
-│   1 │
-│   1 │
-│   2 │
-│   3 │
-└─────┘
-```
-
-Query:
-
-``` sql
-SELECT quantile(val) FROM t
-```
-
-Result:
-
-``` text
-┌─quantile(val)─┐
-│           1.5 │
-└───────────────┘
-```
-
-**See Also**
-
-- [median](#median)
-- [quantiles](#quantiles)
-
-## quantileDeterministic {#quantiledeterministic}
-
-Computes an approximate [quantile](https://en.wikipedia.org/wiki/Quantile) of a numeric data sequence.
-
-This function applies [reservoir sampling](https://en.wikipedia.org/wiki/Reservoir_sampling) with a reservoir size up to 8192 and a deterministic sampling algorithm. The result is deterministic. To get an exact quantile, use the [quantileExact](#quantileexact) function.
-
-When using multiple `quantile*` functions with different levels in a query, the internal states are not combined (that is, the query works less efficiently than it could). In this case, use the [quantiles](#quantiles) function.
-
-**Syntax**
-
-``` sql
-quantileDeterministic(level)(expr, determinator)
-```
-
-Alias: `medianDeterministic`.
-
-**Parameters**
-
-- `level` — Level of quantile. Optional parameter. Constant floating-point number from 0 to 1. We recommend using a `level` value in the range of `[0.01, 0.99]`. Default value: 0.5. At `level=0.5` the function calculates the [median](https://en.wikipedia.org/wiki/Median).
-- `expr` — Expression over the column values resulting in numeric [data types](../../sql-reference/data-types/index.md#data_types), [Date](../../sql-reference/data-types/date.md) or [DateTime](../../sql-reference/data-types/datetime.md).
-- `determinator` — Number whose hash is used instead of a random number generator in the reservoir sampling algorithm to make the result of sampling deterministic. As a determinator you can use any deterministic positive number, for example, a user id or an event id.
-  If the same determinator value occurs too often, the function works incorrectly.
-
-**Returned value**
-
-- Approximate quantile of the specified level.
-
-Type:
-
-- [Float64](../../sql-reference/data-types/float.md) for numeric data type input.
-- [Date](../../sql-reference/data-types/date.md) if input values have the `Date` type.
-- [DateTime](../../sql-reference/data-types/datetime.md) if input values have the `DateTime` type.
-
-**Example**
-
-Input table:
-
-``` text
-┌─val─┐
-│   1 │
-│   1 │
-│   2 │
-│   3 │
-└─────┘
-```
-
-Query:
-
-``` sql
-SELECT quantileDeterministic(val, 1) FROM t
-```
-
-Result:
-
-``` text
-┌─quantileDeterministic(val, 1)─┐
-│                           1.5 │
-└───────────────────────────────┘
-```
-
-**See Also**
-
-- [median](#median)
-- [quantiles](#quantiles)
-
-## quantileExact {#quantileexact}
-
-Exactly computes the [quantile](https://en.wikipedia.org/wiki/Quantile) of a numeric data sequence.
-
-To get the exact value, all the passed values are combined into an array, which is then partially sorted. Therefore, the function consumes `O(n)` memory, where `n` is the number of values that were passed. However, for a small number of values, the function is very effective.
-
-When using multiple `quantile*` functions with different levels in a query, the internal states are not combined (that is, the query works less efficiently than it could). In this case, use the [quantiles](#quantiles) function.
-
-**Syntax**
-
-``` sql
-quantileExact(level)(expr)
-```
-
-Alias: `medianExact`.
-
-**Parameters**
-
-- `level` — Level of quantile. Optional parameter. Constant floating-point number from 0 to 1. We recommend using a `level` value in the range of `[0.01, 0.99]`. Default value: 0.5. At `level=0.5` the function calculates the [median](https://en.wikipedia.org/wiki/Median).
-- `expr` — Expression over the column values resulting in numeric [data types](../../sql-reference/data-types/index.md#data_types), [Date](../../sql-reference/data-types/date.md) or [DateTime](../../sql-reference/data-types/datetime.md).
-
-**Returned value**
-
-- Quantile of the specified level.
-
-Type:
-
-- [Float64](../../sql-reference/data-types/float.md) for numeric data type input.
-- [Date](../../sql-reference/data-types/date.md) if input values have the `Date` type.
-- [DateTime](../../sql-reference/data-types/datetime.md) if input values have the `DateTime` type.
-
-**Example**
-
-Query:
-
-``` sql
-SELECT quantileExact(number) FROM numbers(10)
-```
-
-Result:
-
-``` text
-┌─quantileExact(number)─┐
-│                     5 │
-└───────────────────────┘
-```
-
-**See Also**
-
-- [median](#median)
-- [quantiles](#quantiles)
-
-## quantileExactWeighted {#quantileexactweighted}
-
-Exactly computes the [quantile](https://en.wikipedia.org/wiki/Quantile) of a numeric data sequence, taking into account the weight of each element.
-
-To get the exact value, all the passed values are combined into an array, which is then partially sorted. Each value is counted with its weight, as if it is present `weight` times. A hash table is used in the algorithm. Because of this, if the passed values are frequently repeated, the function consumes less RAM than [quantileExact](#quantileexact). You can use this function instead of `quantileExact` and specify the weight 1.
-
-When using multiple `quantile*` functions with different levels in a query, the internal states are not combined (that is, the query works less efficiently than it could). In this case, use the [quantiles](#quantiles) function.
-
-**Syntax**
-
-``` sql
-quantileExactWeighted(level)(expr, weight)
-```
-
-Alias: `medianExactWeighted`.
-
-**Parameters**
-
-- `level` — Level of quantile. Optional parameter. Constant floating-point number from 0 to 1. We recommend using a `level` value in the range of `[0.01, 0.99]`. Default value: 0.5. At `level=0.5` the function calculates the [median](https://en.wikipedia.org/wiki/Median).
-- `expr` — Expression over the column values resulting in numeric [data types](../../sql-reference/data-types/index.md#data_types), [Date](../../sql-reference/data-types/date.md) or [DateTime](../../sql-reference/data-types/datetime.md).
-- `weight` — Column with weights of sequence members. Weight is a number of value occurrences.
-
-**Returned value**
-
-- Quantile of the specified level.
-
-Type:
-
-- [Float64](../../sql-reference/data-types/float.md) for numeric data type input.
-- [Date](../../sql-reference/data-types/date.md) if input values have the `Date` type.
-- [DateTime](../../sql-reference/data-types/datetime.md) if input values have the `DateTime` type.
-
-**Example**
-
-Input table:
-
-``` text
-┌─n─┬─val─┐
-│ 0 │   3 │
-│ 1 │   2 │
-│ 2 │   1 │
-│ 5 │   4 │
-└───┴─────┘
-```
-
-Query:
-
-``` sql
-SELECT quantileExactWeighted(n, val) FROM t
-```
-
-Result:
-
-``` text
-┌─quantileExactWeighted(n, val)─┐
-│                             1 │
-└───────────────────────────────┘
-```
-
-**See Also**
-
-- [median](#median)
-- [quantiles](#quantiles)
-
-## quantileTiming {#quantiletiming}
-
-With the determined precision, computes the [quantile](https://en.wikipedia.org/wiki/Quantile) of a numeric data sequence.
-
-The result is deterministic (it does not depend on the query processing order). The function is optimized for working with sequences which describe distributions like web page loading times or backend response times.
-
-When using multiple `quantile*` functions with different levels in a query, the internal states are not combined (that is, the query works less efficiently than it could). In this case, use the [quantiles](#quantiles) function.
-
-**Syntax**
-
-``` sql
-quantileTiming(level)(expr)
-```
-
-Alias: `medianTiming`.
-
-**Parameters**
-
-- `level` — Level of quantile. Optional parameter. Constant floating-point number from 0 to 1. We recommend using a `level` value in the range of `[0.01, 0.99]`. Default value: 0.5. At `level=0.5` the function calculates the [median](https://en.wikipedia.org/wiki/Median).
-
-- `expr` — [Expression](../syntax.md#syntax-expressions) over column values returning a [Float\*](../../sql-reference/data-types/float.md)-type number.
-
-    - If negative values are passed to the function, the behavior is undefined.
-    - If the value is greater than 30,000 (a page loading time of more than 30 seconds), it is assumed to be 30,000.
-
-**Accuracy**
-
-The calculation is accurate if:
-
-- The total number of values does not exceed 5670.
-- The total number of values exceeds 5670, but the page loading time is less than 1024 ms.
-
-Otherwise, the result of the calculation is rounded to the nearest multiple of 16 ms.
-
-!!! note "Note"
-    For calculating page loading time quantiles, this function is more effective and accurate than [quantile](#quantile).
-
-**Returned value**
-
-- Quantile of the specified level.
-
-Type: `Float32`.
-
-!!! note "Note"
-    If no values are passed to the function (when using `quantileTimingIf`), [NaN](../../sql-reference/data-types/float.md#data_type-float-nan-inf) is returned. The purpose of this is to differentiate these cases from cases that result in zero. See the [ORDER BY clause](../statements/select/order-by.md#select-order-by) for notes on sorting `NaN` values.
-
-**Example**
-
-Input table:
-
-``` text
-┌─response_time─┐
-│            72 │
-│           112 │
-│           126 │
-│           145 │
-│           104 │
-│           242 │
-│           313 │
-│           168 │
-│           108 │
-└───────────────┘
-```
-
-Query:
-
-``` sql
-SELECT quantileTiming(response_time) FROM t
-```
-
-Result:
-
-``` text
-┌─quantileTiming(response_time)─┐
-│                           126 │
-└───────────────────────────────┘
-```
-
-**See Also**
-
-- [median](#median)
-- [quantiles](#quantiles)
-
-## quantileTimingWeighted {#quantiletimingweighted}
-
-With the determined precision, computes the [quantile](https://en.wikipedia.org/wiki/Quantile) of a numeric data sequence according to the weight of each sequence member.
-
-The result is deterministic (it does not depend on the query processing order). The function is optimized for working with sequences which describe distributions like web page loading times or backend response times.
-
-When using multiple `quantile*` functions with different levels in a query, the internal states are not combined (that is, the query works less efficiently than it could). In this case, use the [quantiles](#quantiles) function.
-
-**Syntax**
-
-``` sql
-quantileTimingWeighted(level)(expr, weight)
-```
-
-Alias: `medianTimingWeighted`.
-
-**Parameters**
-
-- `level` — Level of quantile. Optional parameter. Constant floating-point number from 0 to 1. We recommend using a `level` value in the range of `[0.01, 0.99]`. Default value: 0.5. At `level=0.5` the function calculates the [median](https://en.wikipedia.org/wiki/Median).
-
-- `expr` — [Expression](../syntax.md#syntax-expressions) over column values returning a [Float\*](../../sql-reference/data-types/float.md)-type number.
-
-    - If negative values are passed to the function, the behavior is undefined.
-    - If the value is greater than 30,000 (a page loading time of more than 30 seconds), it is assumed to be 30,000.
-
-- `weight` — Column with weights of sequence elements. Weight is a number of value occurrences.
-
-**Accuracy**
-
-The calculation is accurate if:
-
-- The total number of values does not exceed 5670.
-- The total number of values exceeds 5670, but the page loading time is less than 1024 ms.
-
-Otherwise, the result of the calculation is rounded to the nearest multiple of 16 ms.
-
-!!! note "Note"
-    For calculating page loading time quantiles, this function is more effective and accurate than [quantile](#quantile).
-
-**Returned value**
-
-- Quantile of the specified level.
-
-Type: `Float32`.
-
-!!! note "Note"
-    If no values are passed to the function (when using `quantileTimingIf`), [NaN](../../sql-reference/data-types/float.md#data_type-float-nan-inf) is returned. The purpose of this is to differentiate these cases from cases that result in zero. See the [ORDER BY clause](../statements/select/order-by.md#select-order-by) for notes on sorting `NaN` values.
-
-**Example**
-
-Input table:
-
-``` text
-┌─response_time─┬─weight─┐
-│            68 │      1 │
-│           104 │      2 │
-│           112 │      3 │
-│           126 │      2 │
-│           138 │      1 │
-│           162 │      1 │
-└───────────────┴────────┘
-```
-
-Query:
-
-``` sql
-SELECT quantileTimingWeighted(response_time, weight) FROM t
-```
-
-Result:
-
-``` text
-┌─quantileTimingWeighted(response_time, weight)─┐
-│                                           112 │
-└───────────────────────────────────────────────┘
-```
-
-**See Also**
-
-- [median](#median)
-- [quantiles](#quantiles)
-
-## quantileTDigest {#quantiletdigest}
-
-Computes an approximate [quantile](https://en.wikipedia.org/wiki/Quantile) of a numeric data sequence using the [t-digest](https://github.com/tdunning/t-digest/blob/master/docs/t-digest-paper/histo.pdf) algorithm.
-
-The maximum error is 1%. Memory consumption is `log(n)`, where `n` is a number of values. The result depends on the order of running the query, and is nondeterministic.
-
-The performance of the function is lower than the performance of [quantile](#quantile) or [quantileTiming](#quantiletiming). In terms of the ratio of state size to precision, this function is much better than `quantile`.
-
-When using multiple `quantile*` functions with different levels in a query, the internal states are not combined (that is, the query works less efficiently than it could). In this case, use the [quantiles](#quantiles) function.
-
-**Syntax**
-
-``` sql
-quantileTDigest(level)(expr)
-```
-
-Alias: `medianTDigest`.
-
-**Parameters**
-
-- `level` — Level of quantile. Optional parameter. Constant floating-point number from 0 to 1. We recommend using a `level` value in the range of `[0.01, 0.99]`. Default value: 0.5. At `level=0.5` the function calculates the [median](https://en.wikipedia.org/wiki/Median).
-- `expr` — Expression over the column values resulting in numeric [data types](../../sql-reference/data-types/index.md#data_types), [Date](../../sql-reference/data-types/date.md) or [DateTime](../../sql-reference/data-types/datetime.md).
-
-**Returned value**
-
-- Approximate quantile of the specified level.
-
-Type:
-
-- [Float64](../../sql-reference/data-types/float.md) for numeric data type input.
-- [Date](../../sql-reference/data-types/date.md) if input values have the `Date` type.
-- [DateTime](../../sql-reference/data-types/datetime.md) if input values have the `DateTime` type.
-
-**Example**
-
-Query:
-
-``` sql
-SELECT quantileTDigest(number) FROM numbers(10)
-```
-
-Result:
-
-``` text
-┌─quantileTDigest(number)─┐
-│                     4.5 │
-└─────────────────────────┘
-```
-
-**See Also**
-
-- [median](#median)
-- [quantiles](#quantiles)
-
-## quantileTDigestWeighted {#quantiletdigestweighted}
-
-Computes an approximate [quantile](https://en.wikipedia.org/wiki/Quantile) of a numeric data sequence using the [t-digest](https://github.com/tdunning/t-digest/blob/master/docs/t-digest-paper/histo.pdf) algorithm. The function takes into account the weight of each sequence member. The maximum error is 1%. Memory consumption is `log(n)`, where `n` is a number of values.
-
-The performance of the function is lower than the performance of [quantile](#quantile) or [quantileTiming](#quantiletiming). In terms of the ratio of state size to precision, this function is much better than `quantile`.
-
-The result depends on the order of running the query, and is nondeterministic.
-
-When using multiple `quantile*` functions with different levels in a query, the internal states are not combined (that is, the query works less efficiently than it could). In this case, use the [quantiles](#quantiles) function.
-
-**Syntax**
-
-``` sql
-quantileTDigestWeighted(level)(expr, weight)
-```
-
-Alias: `medianTDigestWeighted`.
-
-**Parameters**
-
-- `level` — Level of quantile. Optional parameter. Constant floating-point number from 0 to 1. We recommend using a `level` value in the range of `[0.01, 0.99]`. Default value: 0.5. At `level=0.5` the function calculates the [median](https://en.wikipedia.org/wiki/Median).
-- `expr` — Expression over the column values resulting in numeric [data types](../../sql-reference/data-types/index.md#data_types), [Date](../../sql-reference/data-types/date.md) or [DateTime](../../sql-reference/data-types/datetime.md).
-- `weight` — Column with weights of sequence elements. Weight is a number of value occurrences.
-
-**Returned value**
-
-- Approximate quantile of the specified level.
-
-Type:
-
-- [Float64](../../sql-reference/data-types/float.md) for numeric data type input.
-- [Date](../../sql-reference/data-types/date.md) if input values have the `Date` type.
-- [DateTime](../../sql-reference/data-types/datetime.md) if input values have the `DateTime` type.
-
-**Example**
-
-Query:
-
-``` sql
-SELECT quantileTDigestWeighted(number, 1) FROM numbers(10)
-```
-
-Result:
-
-``` text
-┌─quantileTDigestWeighted(number, 1)─┐
-│                                4.5 │
-└────────────────────────────────────┘
-```
-
-**See Also**
-
-- [median](#median)
-- [quantiles](#quantiles)
-
-## median {#median}
-
-The `median*` functions are the aliases for the corresponding `quantile*` functions. They calculate the median of a numeric data sample.
-
-Functions:
-
-- `median` — Alias for [quantile](#quantile).
-- `medianDeterministic` — Alias for [quantileDeterministic](#quantiledeterministic).
-- `medianExact` — Alias for [quantileExact](#quantileexact).
-- `medianExactWeighted` — Alias for [quantileExactWeighted](#quantileexactweighted).
-- `medianTiming` — Alias for [quantileTiming](#quantiletiming).
-- `medianTimingWeighted` — Alias for [quantileTimingWeighted](#quantiletimingweighted).
-- `medianTDigest` — Alias for [quantileTDigest](#quantiletdigest).
-- `medianTDigestWeighted` — Alias for [quantileTDigestWeighted](#quantiletdigestweighted).
-
-**Example**
-
-Input table:
-
-``` text
-┌─val─┐
-│   1 │
-│   1 │
-│   2 │
-│   3 │
-└─────┘
-```
-
-Query:
-
-``` sql
-SELECT medianDeterministic(val, 1) FROM t
-```
-
-Result:
-
-``` text
-┌─medianDeterministic(val, 1)─┐
-│                         1.5 │
-└─────────────────────────────┘
-```
-
-## quantiles(level1, level2, …)(x) {#quantiles}
-
-All the quantile functions also have corresponding quantiles functions: `quantiles`, `quantilesDeterministic`, `quantilesTiming`, `quantilesTimingWeighted`, `quantilesExact`, `quantilesExactWeighted`, `quantilesTDigest`. These functions calculate all the quantiles of the listed levels in one pass and return an array of the resulting values.
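-
-As an added illustrative sketch, computing three levels in one pass over `numbers(100)` might look like this; with only 100 values the reservoir is not saturated, so the expected output is approximately `[24.75,49.5,74.25]`:
-
-``` sql
-SELECT quantiles(0.25, 0.5, 0.75)(number) FROM numbers(100)
-```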
-
-## varSamp(x) {#varsampx}
-
-Calculates the amount `Σ((x - x̅)^2) / (n - 1)`, where `n` is the sample size and `x̅` is the average value of `x`.
-
-It represents an unbiased estimate of the variance of a random variable if the passed values form its sample.
-
-Returns `Float64`. When `n <= 1`, returns `+∞`.
-
-!!! note "Note"
-    This function uses a numerically unstable algorithm. If you need [numerical stability](https://en.wikipedia.org/wiki/Numerical_stability) in calculations, use the `varSampStable` function. It works slower but provides a lower computational error.
-
-## varPop(x) {#varpopx}
-
-Calculates the amount `Σ((x - x̅)^2) / n`, where `n` is the sample size and `x̅` is the average value of `x`.
-
-In other words, dispersion for a set of values. Returns `Float64`.
-
-!!! note "Note"
-    This function uses a numerically unstable algorithm. If you need [numerical stability](https://en.wikipedia.org/wiki/Numerical_stability) in calculations, use the `varPopStable` function. It works slower but provides a lower computational error.
-
-## stddevSamp(x) {#stddevsampx}
-
-The result is equal to the square root of `varSamp(x)`.
-
-!!! note "Note"
-    This function uses a numerically unstable algorithm. If you need [numerical stability](https://en.wikipedia.org/wiki/Numerical_stability) in calculations, use the `stddevSampStable` function. It works slower but provides a lower computational error.
-
-## stddevPop(x) {#stddevpopx}
-
-The result is equal to the square root of `varPop(x)`.
-
-!!! note "Note"
-    This function uses a numerically unstable algorithm. If you need [numerical stability](https://en.wikipedia.org/wiki/Numerical_stability) in calculations, use the `stddevPopStable` function. It works slower but provides a lower computational error.
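-
-As an added worked example of the sample statistics above: for the integers 0-9 the mean is 4.5, the sum of squared deviations is 82.5, so the sample variance is 82.5 / 9 ≈ 9.17 and the sample standard deviation is its square root, ≈ 3.03:
-
-``` sql
-SELECT varSamp(number), stddevSamp(number) FROM numbers(10)
-```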
-
-## topK(N)(x) {#topknx}
-
-Returns an array of the approximately most frequent values in the specified column. The resulting array is sorted in descending order of approximate frequency of values (not by the values themselves).
-
-Implements the [Filtered Space-Saving](http://www.l2f.inesc-id.pt/~fmmb/wiki/uploads/Work/misnis.ref0a.pdf) algorithm for analyzing TopK, based on the reduce-and-combine algorithm from [Parallel Space Saving](https://arxiv.org/pdf/1401.0702.pdf).
-
-``` sql
-topK(N)(column)
-```
-
-This function does not provide a guaranteed result. In certain situations, errors might occur and it might return frequent values that are not the most frequent values.
-
-We recommend using the `N < 10` value; performance is reduced with large `N` values. Maximum value of `N = 65536`.
-
-**Parameters**
-
-- `N` is the number of elements to return.
-
-If the parameter is omitted, the default value 10 is used.
-
-**Arguments**
-
-- `x` – The value to calculate frequency for.
-
-**Example**
-
-Take the [OnTime](../../getting-started/example-datasets/ontime.md) data set and select the three most frequently occurring values in the `AirlineID` column.
-
-``` sql
-SELECT topK(3)(AirlineID) AS res
-FROM ontime
-```
-
-``` text
-┌─res─────────────────┐
-│ [19393,19790,19805] │
-└─────────────────────┘
-```
-
-## topKWeighted {#topkweighted}
-
-Similar to `topK` but takes one additional argument of integer type - `weight`. Every value is accounted `weight` times for frequency calculation.
-
-**Syntax**
-
-``` sql
-topKWeighted(N)(x, weight)
-```
-
-**Parameters**
-
-- `N` — The number of elements to return.
-
-**Arguments**
-
-- `x` – The value.
-- `weight` — The weight. [UInt8](../../sql-reference/data-types/int-uint.md).
-
-**Returned value**
-
-Returns an array of the values with maximum approximate sum of weights.
-
-**Example**
-
-Query:
-
-``` sql
-SELECT topKWeighted(10)(number, number) FROM numbers(1000)
-```
-
-Result:
-
-``` text
-┌─topKWeighted(10)(number, number)──────────┐
-│ [999,998,997,996,995,994,993,992,991,990] │
-└───────────────────────────────────────────┘
-```
-
-## covarSamp(x, y) {#covarsampx-y}
-
-Calculates the value of `Σ((x - x̅)(y - y̅)) / (n - 1)`.
-
-Returns Float64. When `n <= 1`, returns +∞.
-
-!!! note "Note"
-    This function uses a numerically unstable algorithm. If you need [numerical stability](https://en.wikipedia.org/wiki/Numerical_stability) in calculations, use the `covarSampStable` function. It works slower but provides a lower computational error.
-
-## covarPop(x, y) {#covarpopx-y}
-
-Calculates the value of `Σ((x - x̅)(y - y̅)) / n`.
-
-!!! note "Note"
-    This function uses a numerically unstable algorithm. If you need [numerical stability](https://en.wikipedia.org/wiki/Numerical_stability) in calculations, use the `covarPopStable` function. It works slower but provides a lower computational error.
-
-## corr(x, y) {#corrx-y}
-
-Calculates the Pearson correlation coefficient: `Σ((x - x̅)(y - y̅)) / sqrt(Σ((x - x̅)^2) * Σ((y - y̅)^2))`.
-
-!!! note "Note"
-    This function uses a numerically unstable algorithm. If you need [numerical stability](https://en.wikipedia.org/wiki/Numerical_stability) in calculations, use the `corrStable` function. It works slower but provides a lower computational error.
-
-## categoricalInformationValue {#categoricalinformationvalue}
-
-Calculates the value of `(P(tag = 1) - P(tag = 0))(log(P(tag = 1)) - log(P(tag = 0)))` for each category.
-
-``` sql
-categoricalInformationValue(category1, category2, ..., tag)
-```
-
-The result indicates how a discrete (categorical) feature `[category1, category2, ...]` contributes to a learning model which predicts the value of `tag`.
-
-## simpleLinearRegression {#simplelinearregression}
-
-Performs simple (unidimensional) linear regression.
-
-``` sql
-simpleLinearRegression(x, y)
-```
-
-Parameters:
-
-- `x` — Column with explanatory variable values.
-- `y` — Column with dependent variable values.
-
-Returned values:
-
-Constants `(a, b)` of the resulting line `y = a*x + b`.
-
-**Examples**
-
-``` sql
-SELECT arrayReduce('simpleLinearRegression', [0, 1, 2, 3], [0, 1, 2, 3])
-```
-
-``` text
-┌─arrayReduce('simpleLinearRegression', [0, 1, 2, 3], [0, 1, 2, 3])─┐
-│ (1,0)                                                             │
-└───────────────────────────────────────────────────────────────────┘
-```
-
-``` sql
-SELECT arrayReduce('simpleLinearRegression', [0, 1, 2, 3], [3, 4, 5, 6])
-```
-
-``` text
-┌─arrayReduce('simpleLinearRegression', [0, 1, 2, 3], [3, 4, 5, 6])─┐
-│ (1,3)                                                             │
-└───────────────────────────────────────────────────────────────────┘
-```
-
-## stochasticLinearRegression {#agg_functions-stochasticlinearregression}
-
-This function implements stochastic linear regression. It supports custom parameters for learning rate, L2 regularization coefficient and mini-batch size, and has a few methods for updating weights ([Adam](https://en.wikipedia.org/wiki/Stochastic_gradient_descent#Adam) (used by default), [simple SGD](https://en.wikipedia.org/wiki/Stochastic_gradient_descent), [Momentum](https://en.wikipedia.org/wiki/Stochastic_gradient_descent#Momentum), [Nesterov](https://mipt.ru/upload/medialibrary/d7e/41-91.pdf)).
-
-### Parameters {#agg_functions-stochasticlinearregression-parameters}
-
-There are 4 customizable parameters. They are passed to the function sequentially, but there is no need to pass all four; default values will be used. However, a good model requires some parameter tuning.
-
-``` text
-stochasticLinearRegression(1.0, 1.0, 10, 'SGD')
-```
-
-1.  `learning rate` is the coefficient on step length, when the gradient descent step is performed. A learning rate that is too big may cause infinite weights of the model. Default is `0.00001`.
-2.  `l2 regularization coefficient`, which may help to prevent overfitting. Default is `0.1`.
-3.  `mini-batch size` sets the number of elements whose gradients will be computed and summed to perform one step of gradient descent. Pure stochastic descent uses one element; however, having small batches (about 10 elements) makes gradient steps more stable. Default is `15`.
-4.  `method for updating weights`, they are: `Adam` (by default), `SGD`, `Momentum`, `Nesterov`. `Momentum` and `Nesterov` require a little more computation and memory; however, they happen to be useful in terms of convergence speed and stability of stochastic gradient methods.
-
-### Usage {#agg_functions-stochasticlinearregression-usage}
-
-`stochasticLinearRegression` is used in two steps: fitting the model and predicting on new data. In order to fit the model and save its state for later usage, we use the `-State` combinator, which basically saves the state (model weights, etc).
-To predict, we use the function [evalMLMethod](../functions/machine-learning-functions.md#machine_learning_methods-evalmlmethod), which takes a state as an argument as well as features to predict on.
-
-
-
-**1.** Fitting
-
-Such a query may be used.
-
-``` sql
-CREATE TABLE IF NOT EXISTS train_data
-(
-    param1 Float64,
-    param2 Float64,
-    target Float64
-) ENGINE = Memory;
-
-CREATE TABLE your_model ENGINE = Memory AS SELECT
-stochasticLinearRegressionState(0.1, 0.0, 5, 'SGD')(target, param1, param2)
-AS state FROM train_data;
-```
-
-Here we also need to insert data into the `train_data` table. The number of parameters is not fixed; it depends only on the number of arguments passed to `linearRegressionState`. They all must be numeric values.
-Note that the column with the target value (which we would like to learn to predict) is inserted as the first argument.
-
-**2.** Predicting
-
-After saving a state into the table, we may use it multiple times for prediction, or even merge it with other states and create new, even better models.
-
-``` sql
-WITH (SELECT state FROM your_model) AS model SELECT
-evalMLMethod(model, param1, param2) FROM test_data
-```
-
-The query will return a column of predicted values. Note that the first argument of `evalMLMethod` is an `AggregateFunctionState` object; next are columns of features.
-
-`test_data` is a table like `train_data` but may not contain the target value.
-
-### Notes {#agg_functions-stochasticlinearregression-notes}
-
-1.  To merge two models, the user may create such a query:
-    `sql SELECT state1 + state2 FROM your_models`
-    where the `your_models` table contains both models. This query will return a new `AggregateFunctionState` object.
-
-2.  The user may fetch weights of the created model for their own purposes without saving the model, if no `-State` combinator is used.
-    `sql SELECT stochasticLinearRegression(0.01)(target, param1, param2) FROM train_data`
-    Such a query will fit the model and return its weights: first are the weights that correspond to the parameters of the model, and the last one is the bias. So, in the example above, the query will return a column with 3 values.
-
-**See Also**
-
-- [stochasticLogisticRegression](#agg_functions-stochasticlogisticregression)
-- [Difference between linear and logistic regressions](https://stackoverflow.com/questions/12146914/what-is-the-difference-between-linear-regression-and-logistic-regression)
-
-## stochasticLogisticRegression {#agg_functions-stochasticlogisticregression}
-
-This function implements stochastic logistic regression. It can be used for binary classification problems, supports the same custom parameters as stochasticLinearRegression and works the same way.
-
-### Parameters {#agg_functions-stochasticlogisticregression-parameters}
-
-Parameters are exactly the same as in stochasticLinearRegression:
-`learning rate`, `l2 regularization coefficient`, `mini-batch size`, `method for updating weights`.
-For more information see [parameters](#agg_functions-stochasticlinearregression-parameters).
-
-``` text
-stochasticLogisticRegression(1.0, 1.0, 10, 'SGD')
-```
-
-1.  Fitting
-
-
-
-    See the `Fitting` section in the [stochasticLinearRegression](#stochasticlinearregression-usage-fitting) description.
-
-    Predicted labels have to be in \[-1, 1\].
-
-1.  Predicting
-
-
-
-    Using the saved state we can predict the probability of an object having the label `1`.
-
-    ``` sql
-    WITH (SELECT state FROM your_model) AS model SELECT
-    evalMLMethod(model, param1, param2) FROM test_data
-    ```
-
-    The query will return a column of probabilities. Note that the first argument of `evalMLMethod` is an `AggregateFunctionState` object; next are columns of features.
-
-    We can also set a bound of probability, which assigns elements to different labels.
-
-    ``` sql
-    SELECT ans < 1.1 AND ans > 0.5 FROM
-    (WITH (SELECT state FROM your_model) AS model SELECT
-    evalMLMethod(model, param1, param2) AS ans FROM test_data)
-    ```
-
-    Then the result will be labels.
-
-    `test_data` is a table like `train_data` but may not contain the target value.
-
-**See Also**
-
-- [stochasticLinearRegression](#agg_functions-stochasticlinearregression)
-- [Difference between linear and logistic regressions.](https://stackoverflow.com/questions/12146914/what-is-the-difference-between-linear-regression-and-logistic-regression)
-
-## groupBitmapAnd {#groupbitmapand}
-
-Calculates the AND of a bitmap column and returns the cardinality as a value of type UInt64; if the suffix -State is added, it returns a [bitmap object](../../sql-reference/functions/bitmap-functions.md).
-
-``` sql
-groupBitmapAnd(expr)
-```
-
-**Parameters**
-
-`expr` – An expression that results in the `AggregateFunction(groupBitmap, UInt*)` type.
-
-**Return value**
-
-Value of the `UInt64` type.
-
-**Example**
-
-``` sql
-DROP TABLE IF EXISTS bitmap_column_expr_test2;
-CREATE TABLE bitmap_column_expr_test2
-(
-    tag_id String,
-    z AggregateFunction(groupBitmap, UInt32)
-)
-ENGINE = MergeTree
-ORDER BY tag_id;
-
-INSERT INTO bitmap_column_expr_test2 VALUES ('tag1', bitmapBuild(cast([1,2,3,4,5,6,7,8,9,10] as Array(UInt32))));
-INSERT INTO bitmap_column_expr_test2 VALUES ('tag2', bitmapBuild(cast([6,7,8,9,10,11,12,13,14,15] as Array(UInt32))));
-INSERT INTO bitmap_column_expr_test2 VALUES ('tag3', bitmapBuild(cast([2,4,6,8,10,12] as Array(UInt32))));
-
-SELECT groupBitmapAnd(z) FROM bitmap_column_expr_test2 WHERE like(tag_id, 'tag%');
-┌─groupBitmapAnd(z)─┐
-│               3   │
-└───────────────────┘
-
-SELECT arraySort(bitmapToArray(groupBitmapAndState(z))) FROM bitmap_column_expr_test2 WHERE like(tag_id, 'tag%');
-┌─arraySort(bitmapToArray(groupBitmapAndState(z)))─┐
-│ [6,8,10]                                         │
-└──────────────────────────────────────────────────┘
-```
-
-## groupBitmapOr {#groupbitmapor}
-
-Calculates the OR of a bitmap column and returns the cardinality as a value of type UInt64; if the suffix -State is added, it returns a [bitmap object](../../sql-reference/functions/bitmap-functions.md). This is equivalent to `groupBitmapMerge`.
-
-``` sql
-groupBitmapOr(expr)
-```
-
-**Parameters**
-
-`expr` – An expression that results in the `AggregateFunction(groupBitmap, UInt*)` type.
-
-**Return value**
-
-Value of the `UInt64` type.
-
-**Example**
-
-``` sql
-DROP TABLE IF EXISTS bitmap_column_expr_test2;
-CREATE TABLE bitmap_column_expr_test2
-(
-    tag_id String,
-    z AggregateFunction(groupBitmap, UInt32)
-)
-ENGINE = MergeTree
-ORDER BY tag_id;
-
-INSERT INTO bitmap_column_expr_test2 VALUES ('tag1', bitmapBuild(cast([1,2,3,4,5,6,7,8,9,10] as Array(UInt32))));
-INSERT INTO bitmap_column_expr_test2 VALUES ('tag2', bitmapBuild(cast([6,7,8,9,10,11,12,13,14,15] as Array(UInt32))));
-INSERT INTO bitmap_column_expr_test2 VALUES ('tag3', bitmapBuild(cast([2,4,6,8,10,12] as Array(UInt32))));
-
-SELECT groupBitmapOr(z) FROM bitmap_column_expr_test2 WHERE like(tag_id, 'tag%');
-┌─groupBitmapOr(z)─┐
-│             15   │
-└──────────────────┘
-
-SELECT arraySort(bitmapToArray(groupBitmapOrState(z))) FROM bitmap_column_expr_test2 WHERE like(tag_id, 'tag%');
-┌─arraySort(bitmapToArray(groupBitmapOrState(z)))─┐
-│ [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]           │
-└─────────────────────────────────────────────────┘
-```
-
-## groupBitmapXor {#groupbitmapxor}
-
-Calculates the XOR of a bitmap column and returns the cardinality as a value of type UInt64; if the suffix -State is added, it returns a [bitmap object](../../sql-reference/functions/bitmap-functions.md).
-
-``` sql
-groupBitmapXor(expr)
-```
-
-**Parameters**
-
-`expr` – An expression that results in the `AggregateFunction(groupBitmap, UInt*)` type.
-
-**Return value**
-
-Value of the `UInt64` type.
-
-**Example**
-
-``` sql
-DROP TABLE IF EXISTS bitmap_column_expr_test2;
-CREATE TABLE bitmap_column_expr_test2
-(
-    tag_id String,
-    z AggregateFunction(groupBitmap, UInt32)
-)
-ENGINE = MergeTree
-ORDER BY tag_id;
-
-INSERT INTO bitmap_column_expr_test2 VALUES ('tag1', bitmapBuild(cast([1,2,3,4,5,6,7,8,9,10] as Array(UInt32))));
-INSERT INTO bitmap_column_expr_test2 VALUES ('tag2', bitmapBuild(cast([6,7,8,9,10,11,12,13,14,15] as Array(UInt32))));
-INSERT INTO bitmap_column_expr_test2 VALUES ('tag3', bitmapBuild(cast([2,4,6,8,10,12] as Array(UInt32))));
-
-SELECT groupBitmapXor(z) FROM bitmap_column_expr_test2 WHERE like(tag_id, 'tag%');
-┌─groupBitmapXor(z)─┐
-│              10   │
-└───────────────────┘
-
-SELECT arraySort(bitmapToArray(groupBitmapXorState(z))) FROM bitmap_column_expr_test2 WHERE like(tag_id, 'tag%');
-┌─arraySort(bitmapToArray(groupBitmapXorState(z)))─┐
-│ [1,3,5,6,8,10,11,13,14,15]                       │
-└──────────────────────────────────────────────────┘
-```
-
-[Original article](https://clickhouse.tech/docs/en/query_language/agg_functions/reference/)
diff --git a/docs/es/sql-reference/ansi.md b/docs/es/sql-reference/ansi.md
deleted file mode 100644
index 29e2c5b12e9..00000000000
--- a/docs/es/sql-reference/ansi.md
+++ /dev/null
@@ -1,180 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: ad252bbb4f7e2899c448eb42ecc39ff195c8faa1
-toc_priority: 40
-toc_title: Compatibilidad con ANSI
----
-
-# ANSI SQL Compatibility of the ClickHouse SQL Dialect {#ansi-sql-compatibility-of-clickhouse-sql-dialect}
-
-!!! note "Note"
-    This article is based on Table 38, “Feature taxonomy and definition for mandatory features”, Annex F of ISO/IEC CD 9075-2:2013.
-
-## Differences in Behaviour {#differences-in-behaviour}

-The following table lists cases when a query feature works in ClickHouse but does not behave as specified in ANSI SQL.
-
-| Feature ID | Feature Name | Difference |
-|------------|----------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------|
-| E011 | Numeric data types | A numeric literal with a period is interpreted as approximate (`Float64`) instead of exact (`Decimal`) |
-| E051-05 | Select items can be renamed | Item renames have a wider visibility scope than just the SELECT result |
-| E141-01 | NOT NULL constraints | `NOT NULL` is implied for table columns by default |
-| E011-04 | Arithmetic operators | ClickHouse overflows instead of using checked arithmetic and changes the result data type based on custom rules |
-
-## Feature Status {#feature-status}
-
-| Feature ID | Feature Name | Status | Comment |
-|------------|--------------|--------|---------|
-| **E011** | **Numeric data types** | **Partial**{.text-warning} | |
-| E011-01 | INTEGER and SMALLINT data types | Yes {.text-success} | |
-| E011-02 | REAL, DOUBLE PRECISION and FLOAT data types | Partial {.text-warning} | `FLOAT(<binary_precision>)`, `REAL` and `DOUBLE PRECISION` are not supported |
-| E011-03 | DECIMAL and NUMERIC data types | Partial {.text-warning} | Only `DECIMAL(p,s)` is supported, not `NUMERIC` |
-| E011-04 | Arithmetic operators | Yes {.text-success} | |
-| E011-05 | Numeric comparison | Yes {.text-success} | |
-| E011-06 | Implicit casting among the numeric data types | No {.text-danger} | ANSI SQL allows arbitrary implicit casts between numeric types, while ClickHouse relies on functions having multiple overloads instead of implicit casts |
-| **E021** | **Character string types** | **Partial**{.text-warning} | |
-| E021-01 | CHARACTER data type | No {.text-danger} | |
-| E021-02 | CHARACTER VARYING data type | No {.text-danger} | `String` behaves similarly, but without the length limit in parentheses |
-| E021-03 | Character literals | Partial {.text-warning} | No automatic concatenation of consecutive literals and no character set support |
-| E021-04 | CHARACTER_LENGTH function | Partial {.text-warning} | No `USING` clause |
-| E021-05 | OCTET_LENGTH function | No {.text-danger} | `LENGTH` behaves similarly |
-| E021-06 | SUBSTRING | Partial {.text-warning} | No support for `SIMILAR` and `ESCAPE` clauses, no `SUBSTRING_REGEX` variant |
-| E021-07 | Character concatenation | Partial {.text-warning} | No `COLLATE` clause |
-| E021-08 | UPPER and LOWER functions | Yes {.text-success} | |
-| E021-09 | TRIM function | Yes {.text-success} | |
-| E021-10 | Implicit casting among the fixed-length and variable-length character string types | No {.text-danger} | ANSI SQL allows arbitrary implicit casts between string types, while ClickHouse relies on functions having multiple overloads instead of implicit casts |
-| E021-11 | POSITION function | Partial {.text-warning} | No support for `IN` and `USING` clauses, no `POSITION_REGEX` variant |
-| E021-12 | Character comparison | Yes {.text-success} | |
-| **E031** | **Identifiers** | **Partial**{.text-warning} | |
-| E031-01 | Delimited identifiers | Partial {.text-warning} | Unicode literal support is limited |
-| E031-02 | Lower case identifiers | Yes {.text-success} | |
-| E031-03 | Trailing underscore | Yes {.text-success} | |
-| **E051** | **Basic query specification** | **Partial**{.text-warning} | |
-| E051-01 | SELECT DISTINCT | Yes {.text-success} | |
-| E051-02 | GROUP BY clause | Yes {.text-success} | |
-| E051-04 | GROUP BY can contain columns not in `<select list>` | Yes {.text-success} | |
-| E051-05 | Select items can be renamed | Yes {.text-success} | |
-| E051-06 | HAVING clause | Yes {.text-success} | |
-| E051-07 | Qualified \* in select list | Yes {.text-success} | |
-| E051-08 | Correlation name in the FROM clause | Yes {.text-success} | |
-| E051-09 | Rename columns in the FROM clause | No {.text-danger} | |
-| **E061** | **Basic predicates and search conditions** | **Partial**{.text-warning} | |
-| E061-01 | Comparison predicate | Yes {.text-success} | |
-| E061-02 | BETWEEN predicate | Partial {.text-warning} | No `SYMMETRIC` and `ASYMMETRIC` clause |
-| E061-03 | IN predicate with list of values | Yes {.text-success} | |
-| E061-04 | LIKE predicate | Yes {.text-success} | |
-| E061-05 | LIKE predicate: ESCAPE clause | No {.text-danger} | |
-| E061-06 | NULL predicate | Yes {.text-success} | |
-| E061-07 | Quantified comparison predicate | No {.text-danger} | |
-| E061-08 | EXISTS predicate | No {.text-danger} | |
-| E061-09 | Subqueries in comparison predicate | Yes {.text-success} | |
-| E061-11 | Subqueries in IN predicate | Yes {.text-success} | |
-| E061-12 | Subqueries in quantified comparison predicate | No {.text-danger} | |
-| E061-13 | Correlated subqueries | No {.text-danger} | |
-| E061-14 | Search condition | Yes {.text-success} | |
-| **E071** | **Basic query expressions** | **Partial**{.text-warning} | |
-| E071-01 | UNION DISTINCT table operator | No {.text-danger} | |
-| E071-02 | UNION ALL table operator | Yes {.text-success} | |
-| E071-03 | EXCEPT DISTINCT table operator | No {.text-danger} | |
-| E071-05 | Columns combined via table operators need not have exactly the same data type | Yes {.text-success} | |
-| E071-06 | Table operators in subqueries | Yes {.text-success} | |
-| **E081** | **Basic privileges** | **Partial**{.text-warning} | Work in progress |
-| **E091** | **Set functions** | **Yes**{.text-success} | |
-| E091-01 | AVG | Yes {.text-success} | |
-| E091-02 | COUNT | Yes {.text-success} | |
-| E091-03 | MAX | Yes {.text-success} | |
-| E091-04 | MIN | Yes {.text-success} | |
-| E091-05 | SUM | Yes {.text-success} | |
-| E091-06 | ALL quantifier | No {.text-danger} | |
-| E091-07 | DISTINCT quantifier | Partial {.text-warning} | Not all aggregate functions are supported |
-| **E101** | **Basic data manipulation** | **Partial**{.text-warning} | |
-| E101-01 | INSERT statement | Yes {.text-success} | Note: the primary key in ClickHouse does not imply the `UNIQUE` constraint |
-| E101-03 | Searched UPDATE statement | No {.text-danger} | There is an `ALTER UPDATE` statement for batch data modification |
-| E101-04 | Searched DELETE statement | No {.text-danger} | There is an `ALTER DELETE` statement for batch data removal |
-| **E111** | **Single row SELECT statement** | **No**{.text-danger} | |
-| **E121** | **Basic cursor support** | **No**{.text-danger} | |
-| E121-01 | DECLARE CURSOR | No {.text-danger} | |
-| E121-02 | ORDER BY columns need not be in select list | No {.text-danger} | |
-| E121-03 | Value expressions in ORDER BY clause | No {.text-danger} | |
-| E121-04 | OPEN statement | No {.text-danger} | |
-| E121-06 | Positioned UPDATE statement | No {.text-danger} | |
-| E121-07 | Positioned DELETE statement | No {.text-danger} | |
-| E121-08 | CLOSE statement | No {.text-danger} | |
-| E121-10 | FETCH statement: implicit NEXT | No {.text-danger} | |
-| E121-17 | WITH HOLD cursors | No {.text-danger} | |
-| **E131** | **Null value support (nulls in lieu of values)** | **Partial**{.text-warning} | Some restrictions apply |
-| **E141** | **Basic integrity constraints** | **Partial**{.text-warning} | |
-| E141-01 | NOT NULL constraints | Yes {.text-success} | Note: `NOT NULL` is implied for table columns by default |
-| E141-02 | UNIQUE constraint of NOT NULL columns | No {.text-danger} | |
-| E141-03 | PRIMARY KEY constraints | No {.text-danger} | |
-| E141-04 | Basic FOREIGN KEY constraint with the NO ACTION default for both referential delete action and referential update action | No {.text-danger} | |
-| E141-06 | CHECK constraint | Yes {.text-success} | |
-| E141-07 | Column defaults | Yes {.text-success} | |
-| E141-08 | NOT NULL inferred on PRIMARY KEY | Yes {.text-success} | |
-| E141-10 | Names in a foreign key can be specified in any order | No {.text-danger} | |
-| **E151** | **Transaction support** | **No**{.text-danger} | |
-| E151-01 | COMMIT statement | No {.text-danger} | |
-| E151-02 | ROLLBACK statement | No {.text-danger} | |
-| **E152** | **Basic SET TRANSACTION statement** | **No**{.text-danger} | |
-| E152-01 | SET TRANSACTION statement: ISOLATION LEVEL SERIALIZABLE clause | No {.text-danger} | |
-| E152-02 | SET TRANSACTION statement: READ ONLY and READ WRITE clauses | No {.text-danger} | |
-| **E153** | **Updatable queries with subqueries** | **No**{.text-danger} | |
-| **E161** | **SQL comments using leading double minus** | **Yes**{.text-success} | |
-| **E171** | **SQLSTATE support** | **No**{.text-danger} | |
-| **E182** | **Host language binding** | **No**{.text-danger} | |
-| **F031** | **Basic schema manipulation** | **Partial**{.text-warning} | |
-| F031-01 | CREATE TABLE statement to create persistent base tables | Partial {.text-warning} | No `SYSTEM VERSIONING`, `ON COMMIT`, `GLOBAL`, `LOCAL`, `PRESERVE`, `DELETE`, `REF IS`, `WITH OPTIONS`, `UNDER`, `LIKE`, `PERIOD FOR` clauses and no support for user-resolved data types |
-| F031-02 | CREATE VIEW statement | Partial {.text-warning} | No `RECURSIVE`, `CHECK`, `UNDER`, `WITH OPTIONS` clauses and no support for user-resolved data types |
-| F031-03 | GRANT statement | Yes {.text-success} | |
-| F031-04 | ALTER TABLE statement: ADD COLUMN clause | Partial {.text-warning} | No support for the `GENERATED` clause and system time period |
-| F031-13 | DROP TABLE statement: RESTRICT clause | No {.text-danger} | |
-| F031-16 | DROP VIEW statement: RESTRICT clause | No {.text-danger} | |
-| F031-19 | REVOKE statement: RESTRICT clause | No {.text-danger} | |
-| **F041** | **Basic joined table** | **Partial**{.text-warning} | |
-| F041-01 | Inner join (but not necessarily the INNER keyword) | Yes {.text-success} | |
-| F041-02 | INNER keyword | Yes {.text-success} | |
-| F041-03 | LEFT OUTER JOIN | Yes {.text-success} | |
-| F041-04 | RIGHT OUTER JOIN | Yes {.text-success} | |
-| F041-05 | Outer joins can be nested | Yes {.text-success} | |
-| F041-07 | The inner table in a left or right outer join can also be used in an inner join | Yes {.text-success} | |
-| F041-08 | All comparison operators are supported (rather than just =) | No {.text-danger} | |
-| **F051** | **Basic date and time** | **Partial**{.text-warning} | |
-| F051-01 | DATE data type (including support of DATE literal) | Partial {.text-warning} | No literal |
-| F051-02 | TIME data type (including support of TIME literal) with fractional seconds precision of at least 0 | No {.text-danger} | |
-| F051-03 | TIMESTAMP data type (including support of TIMESTAMP literal) with fractional seconds precision of at least 0 and 6 | No {.text-danger} | `DateTime64` provides similar functionality |
-| F051-04 | Comparison predicate on DATE, TIME, and TIMESTAMP data types | Partial {.text-warning} | Only one data type available |
-| F051-05 | Explicit CAST between datetime types and character string types | Yes {.text-success} | |
-| F051-06 | CURRENT_DATE | No {.text-danger} | `today()` is similar |
-| F051-07 | LOCALTIME | No {.text-danger} | `now()` is similar |
-| F051-08 | LOCALTIMESTAMP | No {.text-danger} | |
-| **F081** | **UNION and EXCEPT in views** | **Partial**{.text-warning} | |
-| **F131** | **Grouped operations** | **Partial**{.text-warning} | |
-| F131-01 | WHERE, GROUP BY, and HAVING clauses supported in queries with grouped views | Yes {.text-success} | |
-| F131-02 | Multiple tables supported in queries with grouped views | Yes {.text-success} | |
-| F131-03 | Set functions supported in queries with grouped views | Yes {.text-success} | |
-| F131-04 | Subqueries with GROUP BY and HAVING clauses and grouped views | Yes {.text-success} | |
-| F131-05 | Single row SELECT with GROUP BY and HAVING clauses and grouped views | No {.text-danger} | |
-| **F181** | **Multiple module support** | **No**{.text-danger} | |
-| **F201** | **CAST function** | **Yes**{.text-success} | |
-| **F221** | **Explicit defaults** | **No**{.text-danger} | |
-| **F261** | **CASE expression** | **Yes**{.text-success} | |
-| F261-01 | Simple CASE | Yes {.text-success} | |
-| F261-02 | Searched CASE | Yes {.text-success} | |
-| F261-03 | NULLIF | Yes {.text-success} | |
-| F261-04 | COALESCE | Yes {.text-success} | |
définition de schéma** | **Partiel**{.text-warning} | | -| F311-01 | CREATE SCHEMA | Aucun {.text-danger} | | -| F311-02 | Créer une TABLE pour les tables de base persistantes | Oui {.text-success} | | -| F311-03 | CREATE VIEW | Oui {.text-success} | | -| F311-04 | CREATE VIEW: WITH CHECK OPTION | Aucun {.text-danger} | | -| F311-05 | Déclaration de subvention | Oui {.text-success} | | -| **F471** | **Valeurs de sous-requête scalaire** | **Oui**{.text-success} | | -| **F481** | **Prédicat null étendu** | **Oui**{.text-success} | | -| **F812** | **Base de repérage** | **Aucun**{.text-danger} | | -| **T321** | **Routines SQL-invoked de base** | **Aucun**{.text-danger} | | -| T321-01 | Fonctions définies par l'utilisateur sans surcharge | Aucun {.text-danger} | | -| T321-02 | Procédures stockées définies par l'utilisateur sans surcharge | Aucun {.text-danger} | | -| T321-03 | L'invocation de la fonction | Aucun {.text-danger} | | -| T321-04 | L'instruction d'APPEL de | Aucun {.text-danger} | | -| T321-05 | Déclaration de retour | Aucun {.text-danger} | | -| **T631** | **Dans le prédicat avec un élément de liste** | **Oui**{.text-success} | | diff --git a/docs/fr/sql-reference/data-types/aggregatefunction.md b/docs/fr/sql-reference/data-types/aggregatefunction.md deleted file mode 100644 index 18874cd3cb7..00000000000 --- a/docs/fr/sql-reference/data-types/aggregatefunction.md +++ /dev/null @@ -1,70 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 52 -toc_title: AggregateFunction (nom, types_of_arguments...) ---- - -# AggregateFunction(name, types_of_arguments…) {#data-type-aggregatefunction} - -Aggregate functions can have an implementation-defined intermediate state that can be serialized to an AggregateFunction(…) data type and stored in a table, usually, by means of [une vue matérialisée](../../sql-reference/statements/create.md#create-view). La manière courante de produire un État de fonction d'agrégat est d'appeler la fonction d'agrégat avec le `-State` suffixe. Pour obtenir le résultat final de l'agrégation dans l'avenir, vous devez utiliser la même fonction d'agrégation avec la `-Merge`suffixe. - -`AggregateFunction` — parametric data type. - -**Paramètre** - -- Nom de la fonction d'agrégation. - - If the function is parametric, specify its parameters too. - -- Types des arguments de la fonction d'agrégation. - -**Exemple** - -``` sql -CREATE TABLE t -( - column1 AggregateFunction(uniq, UInt64), - column2 AggregateFunction(anyIf, String, UInt8), - column3 AggregateFunction(quantiles(0.5, 0.9), UInt64) -) ENGINE = ... -``` - -[uniq](../../sql-reference/aggregate-functions/reference.md#agg_function-uniq), anyIf ([tout](../../sql-reference/aggregate-functions/reference.md#agg_function-any)+[Si](../../sql-reference/aggregate-functions/combinators.md#agg-functions-combinator-if)) et [les quantiles](../../sql-reference/aggregate-functions/reference.md) les fonctions d'agrégation sont-elles prises en charge dans ClickHouse. - -## Utilisation {#usage} - -### Insertion De Données {#data-insertion} - -Pour insérer des données, utilisez `INSERT SELECT` avec le regroupement d' `-State`- fonction. - -**Exemples de fonction** - -``` sql -uniqState(UserID) -quantilesState(0.5, 0.9)(SendTiming) -``` - -Contrairement aux fonctions correspondantes `uniq` et `quantiles`, `-State`- les fonctions renvoient l'état, au lieu de la valeur finale. En d'autres termes, ils renvoient une valeur de `AggregateFunction` type. 
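To make the `-State`/`-Merge` round trip concrete before the sections below, here is a minimal end-to-end sketch. It assumes a hypothetical source table `visits(RegionID UInt32, UserID UInt64)`; the target table name `agg_states` is likewise invented for illustration.

``` sql
-- Hypothetical target table: stores serialized uniq() states per region
CREATE TABLE agg_states
(
    region UInt32,
    users AggregateFunction(uniq, UInt64)
)
ENGINE = AggregatingMergeTree()
ORDER BY region;

-- The -State combinator writes intermediate states instead of final values
INSERT INTO agg_states
SELECT RegionID AS region, uniqState(UserID) AS users
FROM visits
GROUP BY region;

-- The -Merge combinator combines the stored states into the final result
SELECT region, uniqMerge(users) AS unique_users
FROM agg_states
GROUP BY region;
```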
- -Dans les résultats de `SELECT` requête, les valeurs de `AggregateFunction` type ont une représentation binaire spécifique à l'implémentation pour tous les formats de sortie ClickHouse. Si les données de vidage dans, par exemple, `TabSeparated` format avec `SELECT` requête, puis ce vidage peut être chargé en utilisant `INSERT` requête. - -### Sélection De Données {#data-selection} - -Lors de la sélection des données `AggregatingMergeTree` table, utilisez `GROUP BY` et les mêmes fonctions d'agrégat que lors de l'insertion de données, mais en utilisant `-Merge`suffixe. - -Une fonction d'agrégation avec `-Merge` suffixe prend un ensemble d'états, les combine, et renvoie le résultat complet de l'agrégation de données. - -Par exemple, les deux requêtes suivantes retournent le même résultat: - -``` sql -SELECT uniq(UserID) FROM table - -SELECT uniqMerge(state) FROM (SELECT uniqState(UserID) AS state FROM table GROUP BY RegionID) -``` - -## Exemple D'Utilisation {#usage-example} - -Voir [AggregatingMergeTree](../../engines/table-engines/mergetree-family/aggregatingmergetree.md) Description du moteur. - -[Article Original](https://clickhouse.tech/docs/en/data_types/nested_data_structures/aggregatefunction/) diff --git a/docs/fr/sql-reference/data-types/array.md b/docs/fr/sql-reference/data-types/array.md deleted file mode 100644 index 41772cab177..00000000000 --- a/docs/fr/sql-reference/data-types/array.md +++ /dev/null @@ -1,77 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 51 -toc_title: Array(T) ---- - -# Array(t) {#data-type-array} - -Un tableau de `T`les éléments de type. `T` peut être n'importe quel type de données, y compris un tableau. - -## La création d'un Tableau {#creating-an-array} - -Vous pouvez utiliser une fonction pour créer un tableau: - -``` sql -array(T) -``` - -Vous pouvez également utiliser des crochets. - -``` sql -[] -``` - -Exemple de création d'un tableau: - -``` sql -SELECT array(1, 2) AS x, toTypeName(x) -``` - -``` text -┌─x─────┬─toTypeName(array(1, 2))─┐ -│ [1,2] │ Array(UInt8) │ -└───────┴─────────────────────────┘ -``` - -``` sql -SELECT [1, 2] AS x, toTypeName(x) -``` - -``` text -┌─x─────┬─toTypeName([1, 2])─┐ -│ [1,2] │ Array(UInt8) │ -└───────┴────────────────────┘ -``` - -## Utilisation de Types de données {#working-with-data-types} - -Lors de la création d'un tableau à la volée, ClickHouse définit automatiquement le type d'argument comme le type de données le plus étroit pouvant stocker tous les arguments listés. S'il y a des [Nullable](nullable.md#data_type-nullable) ou littéral [NULL](../../sql-reference/syntax.md#null-literal) les valeurs, le type d'un élément de tableau devient également [Nullable](nullable.md). - -Si ClickHouse n'a pas pu déterminer le type de données, il génère une exception. Par exemple, cela se produit lorsque vous essayez de créer un tableau avec des chaînes et des nombres simultanément (`SELECT array(1, 'a')`). - -Exemples de détection automatique de type de données: - -``` sql -SELECT array(1, 2, NULL) AS x, toTypeName(x) -``` - -``` text -┌─x──────────┬─toTypeName(array(1, 2, NULL))─┐ -│ [1,2,NULL] │ Array(Nullable(UInt8)) │ -└────────────┴───────────────────────────────┘ -``` - -Si vous essayez de créer un tableau de types de données incompatibles, ClickHouse lève une exception: - -``` sql -SELECT array(1, 'a') -``` - -``` text -Received exception from server (version 1.1.54388): -Code: 386. DB::Exception: Received from localhost:9000, 127.0.0.1. 
DB::Exception: There is no supertype for types UInt8, String because some of them are String/FixedString and some of them are not. -``` - -[Article Original](https://clickhouse.tech/docs/en/data_types/array/) diff --git a/docs/fr/sql-reference/data-types/boolean.md b/docs/fr/sql-reference/data-types/boolean.md deleted file mode 100644 index aeb84cf1cc1..00000000000 --- a/docs/fr/sql-reference/data-types/boolean.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 43 -toc_title: "Bool\xE9en" ---- - -# Les Valeurs Booléennes {#boolean-values} - -Il n'y a pas de type distinct pour les valeurs booléennes. Utilisez le type UInt8, limité aux valeurs 0 ou 1. - -[Article Original](https://clickhouse.tech/docs/en/data_types/boolean/) diff --git a/docs/fr/sql-reference/data-types/date.md b/docs/fr/sql-reference/data-types/date.md deleted file mode 100644 index 698639f1d2f..00000000000 --- a/docs/fr/sql-reference/data-types/date.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 47 -toc_title: Date ---- - -# Date {#date} - -Date. Stocké en deux octets comme le nombre de jours depuis 1970-01-01 (non signé). Permet de stocker des valeurs juste après le début de L'époque Unix jusqu'au seuil supérieur défini par une constante au stade de la compilation (actuellement, c'est jusqu'à l'année 2106, mais l'année finale entièrement prise en charge est 2105). - -La valeur de date est stockée sans le fuseau horaire. - -[Article Original](https://clickhouse.tech/docs/en/data_types/date/) diff --git a/docs/fr/sql-reference/data-types/datetime.md b/docs/fr/sql-reference/data-types/datetime.md deleted file mode 100644 index 915270e4d2b..00000000000 --- a/docs/fr/sql-reference/data-types/datetime.md +++ /dev/null @@ -1,129 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 48 -toc_title: DateTime ---- - -# Datetime {#data_type-datetime} - -Permet de stocker un instant dans le temps, qui peut être exprimé comme une date de calendrier et une heure d'une journée. - -Syntaxe: - -``` sql -DateTime([timezone]) -``` - -Plage de valeurs prise en charge: \[1970-01-01 00:00:00, 2105-12-31 23:59:59\]. - -Résolution: 1 seconde. - -## Utilisation Remarques {#usage-remarks} - -Le point dans le temps est enregistré en tant que [Le timestamp Unix](https://en.wikipedia.org/wiki/Unix_time), quel que soit le fuseau horaire ou l'heure d'été. En outre, l' `DateTime` type peut stocker le fuseau horaire qui est le même pour la colonne entière, qui affecte la façon dont les valeurs de la `DateTime` les valeurs de type sont affichées au format texte et comment les valeurs spécifiées en tant que chaînes sont analysées (‘2020-01-01 05:00:01’). Le fuseau horaire n'est pas stocké dans les lignes de la table (ou dans resultset), mais est stocké dans les métadonnées de la colonne. -Une liste des fuseaux horaires pris en charge peut être trouvée dans le [Base de données de fuseau horaire IANA](https://www.iana.org/time-zones). -Le `tzdata` paquet, contenant [Base de données de fuseau horaire IANA](https://www.iana.org/time-zones), doit être installé dans le système. L'utilisation de la `timedatectl list-timezones` commande pour lister les fuseaux horaires connus par un système local. 
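As a quick sanity check of which timezone is in effect, something along these lines can be used (a hedged sketch: `timezone()` returns the server-level default and `toTimeZone` converts a single value; the exact output depends on the server configuration):

``` sql
SELECT
    timezone() AS server_tz,             -- e.g. 'Europe/Moscow', depends on server config
    toTimeZone(now(), 'UTC') AS utc_now  -- per-value conversion to an explicit timezone
```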
- -Vous pouvez définir explicitement un fuseau horaire `DateTime`- tapez des colonnes lors de la création d'une table. Si le fuseau horaire n'est pas défini, ClickHouse utilise la valeur [fuseau](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) paramètre dans les paramètres du serveur ou les paramètres du système d'exploitation au moment du démarrage du serveur ClickHouse. - -Le [clickhouse-client](../../interfaces/cli.md) applique le fuseau horaire du serveur par défaut si un fuseau horaire n'est pas explicitement défini lors de l'initialisation du type de données. Pour utiliser le fuseau horaire du client, exécutez `clickhouse-client` avec l' `--use_client_time_zone` paramètre. - -Clickhouse affiche les valeurs dans `YYYY-MM-DD hh:mm:ss` format de texte par défaut. Vous pouvez modifier la sortie avec le [formatDateTime](../../sql-reference/functions/date-time-functions.md#formatdatetime) fonction. - -Lorsque vous insérez des données dans ClickHouse, vous pouvez utiliser différents formats de chaînes de date et d'heure, en fonction de la valeur du [date_time_input_format](../../operations/settings/settings.md#settings-date_time_input_format) paramètre. - -## Exemple {#examples} - -**1.** Création d'une table avec un `DateTime`- tapez la colonne et insérez des données dedans: - -``` sql -CREATE TABLE dt -( - `timestamp` DateTime('Europe/Moscow'), - `event_id` UInt8 -) -ENGINE = TinyLog; -``` - -``` sql -INSERT INTO dt Values (1546300800, 1), ('2019-01-01 00:00:00', 2); -``` - -``` sql -SELECT * FROM dt; -``` - -``` text -┌───────────timestamp─┬─event_id─┐ -│ 2019-01-01 03:00:00 │ 1 │ -│ 2019-01-01 00:00:00 │ 2 │ -└─────────────────────┴──────────┘ -``` - -- Lors de l'insertion de datetime en tant qu'entier, il est traité comme un horodatage Unix (UTC). `1546300800` représenter `'2019-01-01 00:00:00'` L'UTC. Cependant, comme `timestamp` la colonne a `Europe/Moscow` (UTC+3) fuseau horaire spécifié, lors de la sortie en tant que chaîne, la valeur sera affichée comme `'2019-01-01 03:00:00'` -- Lors de l'insertion d'une valeur de chaîne en tant que datetime, elle est traitée comme étant dans le fuseau horaire de la colonne. `'2019-01-01 00:00:00'` sera considérée comme étant en `Europe/Moscow` fuseau horaire et enregistré sous `1546290000`. - -**2.** Le filtrage sur `DateTime` valeur - -``` sql -SELECT * FROM dt WHERE timestamp = toDateTime('2019-01-01 00:00:00', 'Europe/Moscow') -``` - -``` text -┌───────────timestamp─┬─event_id─┐ -│ 2019-01-01 00:00:00 │ 2 │ -└─────────────────────┴──────────┘ -``` - -`DateTime` les valeurs de colonne peuvent être filtrées à l'aide d'une `WHERE` prédicat. 
Elle sera convertie `DateTime` automatiquement: - -``` sql -SELECT * FROM dt WHERE timestamp = '2019-01-01 00:00:00' -``` - -``` text -┌───────────timestamp─┬─event_id─┐ -│ 2019-01-01 03:00:00 │ 1 │ -└─────────────────────┴──────────┘ -``` - -**3.** Obtenir un fuseau horaire pour un `DateTime`colonne de type: - -``` sql -SELECT toDateTime(now(), 'Europe/Moscow') AS column, toTypeName(column) AS x -``` - -``` text -┌──────────────column─┬─x─────────────────────────┐ -│ 2019-10-16 04:12:04 │ DateTime('Europe/Moscow') │ -└─────────────────────┴───────────────────────────┘ -``` - -**4.** Conversion de fuseau horaire - -``` sql -SELECT -toDateTime(timestamp, 'Europe/London') as lon_time, -toDateTime(timestamp, 'Europe/Moscow') as mos_time -FROM dt -``` - -``` text -┌───────────lon_time──┬────────────mos_time─┐ -│ 2019-01-01 00:00:00 │ 2019-01-01 03:00:00 │ -│ 2018-12-31 21:00:00 │ 2019-01-01 00:00:00 │ -└─────────────────────┴─────────────────────┘ -``` - -## Voir Aussi {#see-also} - -- [Fonctions de conversion de Type](../../sql-reference/functions/type-conversion-functions.md) -- [Fonctions pour travailler avec des dates et des heures](../../sql-reference/functions/date-time-functions.md) -- [Fonctions pour travailler avec des tableaux](../../sql-reference/functions/array-functions.md) -- [Le `date_time_input_format` paramètre](../../operations/settings/settings.md#settings-date_time_input_format) -- [Le `timezone` paramètre de configuration du serveur](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) -- [Opérateurs pour travailler avec des dates et des heures](../../sql-reference/operators/index.md#operators-datetime) -- [Le `Date` type de données](date.md) - -[Article Original](https://clickhouse.tech/docs/en/data_types/datetime/) diff --git a/docs/fr/sql-reference/data-types/datetime64.md b/docs/fr/sql-reference/data-types/datetime64.md deleted file mode 100644 index 027891c595d..00000000000 --- a/docs/fr/sql-reference/data-types/datetime64.md +++ /dev/null @@ -1,104 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 49 -toc_title: DateTime64 ---- - -# Datetime64 {#data_type-datetime64} - -Permet de stocker un instant dans le temps, qui peut être exprimé comme une date de calendrier et une heure d'un jour, avec une précision de sous-seconde définie - -Tick taille (précision): 10-précision deuxième - -Syntaxe: - -``` sql -DateTime64(precision, [timezone]) -``` - -En interne, stocke les données comme un certain nombre de ‘ticks’ depuis le début de l'époque (1970-01-01 00: 00: 00 UTC) comme Int64. La résolution des tiques est déterminée par le paramètre de précision. En outre, l' `DateTime64` type peut stocker le fuseau horaire qui est le même pour la colonne entière, qui affecte la façon dont les valeurs de la `DateTime64` les valeurs de type sont affichées au format texte et comment les valeurs spécifiées en tant que chaînes sont analysées (‘2020-01-01 05:00:01.000’). Le fuseau horaire n'est pas stocké dans les lignes de la table (ou dans resultset), mais est stocké dans les métadonnées de la colonne. Voir les détails dans [DateTime](datetime.md). 
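Before the worked examples below, a small sketch of what the tick size means in practice: with precision 3 the type stores millisecond ticks, with precision 6 microsecond ticks.

``` sql
SELECT
    toDateTime64('2019-01-01 00:00:00.123', 3) AS ms,    -- tick = 10^-3 s (milliseconds)
    toDateTime64('2019-01-01 00:00:00.123456', 6) AS us  -- tick = 10^-6 s (microseconds)
```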
- -## Exemple {#examples} - -**1.** Création d'une table avec `DateTime64`- tapez la colonne et insérez des données dedans: - -``` sql -CREATE TABLE dt -( - `timestamp` DateTime64(3, 'Europe/Moscow'), - `event_id` UInt8 -) -ENGINE = TinyLog -``` - -``` sql -INSERT INTO dt Values (1546300800000, 1), ('2019-01-01 00:00:00', 2) -``` - -``` sql -SELECT * FROM dt -``` - -``` text -┌───────────────timestamp─┬─event_id─┐ -│ 2019-01-01 03:00:00.000 │ 1 │ -│ 2019-01-01 00:00:00.000 │ 2 │ -└─────────────────────────┴──────────┘ -``` - -- Lors de l'insertion de datetime en tant qu'entier, il est traité comme un horodatage Unix (UTC) mis à l'échelle de manière appropriée. `1546300800000` (avec précision 3) représente `'2019-01-01 00:00:00'` L'UTC. Cependant, comme `timestamp` la colonne a `Europe/Moscow` (UTC+3) fuseau horaire spécifié, lors de la sortie sous forme de chaîne, la valeur sera affichée comme `'2019-01-01 03:00:00'` -- Lors de l'insertion d'une valeur de chaîne en tant que datetime, elle est traitée comme étant dans le fuseau horaire de la colonne. `'2019-01-01 00:00:00'` sera considérée comme étant en `Europe/Moscow` fuseau horaire et stocké comme `1546290000000`. - -**2.** Le filtrage sur `DateTime64` valeur - -``` sql -SELECT * FROM dt WHERE timestamp = toDateTime64('2019-01-01 00:00:00', 3, 'Europe/Moscow') -``` - -``` text -┌───────────────timestamp─┬─event_id─┐ -│ 2019-01-01 00:00:00.000 │ 2 │ -└─────────────────────────┴──────────┘ -``` - -Contrairement `DateTime`, `DateTime64` les valeurs ne sont pas converties depuis `String` automatiquement - -**3.** Obtenir un fuseau horaire pour un `DateTime64`-le type de la valeur: - -``` sql -SELECT toDateTime64(now(), 3, 'Europe/Moscow') AS column, toTypeName(column) AS x -``` - -``` text -┌──────────────────column─┬─x──────────────────────────────┐ -│ 2019-10-16 04:12:04.000 │ DateTime64(3, 'Europe/Moscow') │ -└─────────────────────────┴────────────────────────────────┘ -``` - -**4.** Conversion de fuseau horaire - -``` sql -SELECT -toDateTime64(timestamp, 3, 'Europe/London') as lon_time, -toDateTime64(timestamp, 3, 'Europe/Moscow') as mos_time -FROM dt -``` - -``` text -┌───────────────lon_time──┬────────────────mos_time─┐ -│ 2019-01-01 00:00:00.000 │ 2019-01-01 03:00:00.000 │ -│ 2018-12-31 21:00:00.000 │ 2019-01-01 00:00:00.000 │ -└─────────────────────────┴─────────────────────────┘ -``` - -## Voir Aussi {#see-also} - -- [Fonctions de conversion de Type](../../sql-reference/functions/type-conversion-functions.md) -- [Fonctions pour travailler avec des dates et des heures](../../sql-reference/functions/date-time-functions.md) -- [Fonctions pour travailler avec des tableaux](../../sql-reference/functions/array-functions.md) -- [Le `date_time_input_format` paramètre](../../operations/settings/settings.md#settings-date_time_input_format) -- [Le `timezone` paramètre de configuration du serveur](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) -- [Opérateurs pour travailler avec des dates et des heures](../../sql-reference/operators/index.md#operators-datetime) -- [`Date` type de données](date.md) -- [`DateTime` type de données](datetime.md) diff --git a/docs/fr/sql-reference/data-types/decimal.md b/docs/fr/sql-reference/data-types/decimal.md deleted file mode 100644 index 171bc1cf6dd..00000000000 --- a/docs/fr/sql-reference/data-types/decimal.md +++ /dev/null @@ -1,109 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 
42 -toc_title: "D\xE9cimal" ---- - -# Décimal (P, S), Décimal32 (S), Décimal64 (S), Décimal128 (S) {#decimalp-s-decimal32s-decimal64s-decimal128s} - -Nombres à points fixes signés qui conservent la précision pendant les opérations d'addition, de soustraction et de multiplication. Pour la division, les chiffres les moins significatifs sont ignorés (non arrondis). - -## Paramètre {#parameters} - -- P-précision. Plage valide: \[1: 38 \]. Détermine le nombre de chiffres décimaux nombre peut avoir (fraction y compris). -- S - échelle. Plage valide: \[0: P \]. Détermine le nombre de chiffres décimaux fraction peut avoir. - -En fonction de P Paramètre Valeur décimal (P, S) est un synonyme de: -- P à partir de \[ 1: 9\] - Pour Décimal32 (S) -- P à partir de \[10: 18\] - pour Décimal64 (S) -- P à partir de \[19: 38\] - pour Décimal128 (S) - -## Plages De Valeurs Décimales {#decimal-value-ranges} - -- Décimal32 (S) - ( -1 \* 10^(9 - S), 1 \* 10^(9-S) ) -- Décimal64 (S) - ( -1 \* 10^(18 - S), 1 \* 10^(18-S) ) -- Décimal128 (S) - ( -1 \* 10^(38 - S), 1 \* 10^(38-S) ) - -Par exemple, Decimal32(4) peut contenir des nombres de -99999.9999 à 99999.9999 avec 0,0001 étape. - -## Représentation Interne {#internal-representation} - -En interne, les données sont représentées comme des entiers signés normaux avec une largeur de bit respective. Les plages de valeurs réelles qui peuvent être stockées en mémoire sont un peu plus grandes que celles spécifiées ci-dessus, qui sont vérifiées uniquement lors de la conversion à partir d'une chaîne. - -Parce que les processeurs modernes ne prennent pas en charge les entiers 128 bits nativement, les opérations sur Decimal128 sont émulées. Pour cette raison, Decimal128 fonctionne significativement plus lentement que Decimal32 / Decimal64. - -## Opérations et type de résultat {#operations-and-result-type} - -Les opérations binaires sur le résultat décimal dans le type de résultat plus large (avec n'importe quel ordre d'arguments). - -- `Decimal64(S1) Decimal32(S2) -> Decimal64(S)` -- `Decimal128(S1) Decimal32(S2) -> Decimal128(S)` -- `Decimal128(S1) Decimal64(S2) -> Decimal128(S)` - -Règles pour l'échelle: - -- ajouter, soustraire: S = max (S1, S2). -- multuply: S = S1 + S2. -- diviser: S = S1. - -Pour des opérations similaires entre décimal et entier, le résultat est Décimal de la même taille qu'un argument. - -Les opérations entre Decimal et Float32 / Float64 ne sont pas définies. Si vous en avez besoin, vous pouvez explicitement lancer l'un des arguments en utilisant les builtins toDecimal32, toDecimal64, toDecimal128 ou toFloat32, toFloat64. Gardez à l'esprit que le résultat perdra de la précision et que la conversion de type est une opération coûteuse en calcul. - -Certaines fonctions sur le résultat de retour décimal comme Float64 (par exemple, var ou stddev). Les calculs intermédiaires peuvent toujours être effectués en décimal, ce qui peut conduire à des résultats différents entre les entrées Float64 et Decimal avec les mêmes valeurs. - -## Contrôles De Débordement {#overflow-checks} - -Pendant les calculs sur Décimal, des débordements entiers peuvent se produire. Les chiffres excessifs dans une fraction sont éliminés (non arrondis). Les chiffres excessifs dans la partie entière conduiront à une exception. 
- -``` sql -SELECT toDecimal32(2, 4) AS x, x / 3 -``` - -``` text -┌──────x─┬─divide(toDecimal32(2, 4), 3)─┐ -│ 2.0000 │ 0.6666 │ -└────────┴──────────────────────────────┘ -``` - -``` sql -SELECT toDecimal32(4.2, 8) AS x, x * x -``` - -``` text -DB::Exception: Scale is out of bounds. -``` - -``` sql -SELECT toDecimal32(4.2, 8) AS x, 6 * x -``` - -``` text -DB::Exception: Decimal math overflow. -``` - -Les contrôles de débordement entraînent un ralentissement des opérations. S'il est connu que les débordements ne sont pas possibles, il est logique de désactiver les contrôles en utilisant `decimal_check_overflow` paramètre. Lorsque des contrôles sont désactivés et le débordement se produit, le résultat sera faux: - -``` sql -SET decimal_check_overflow = 0; -SELECT toDecimal32(4.2, 8) AS x, 6 * x -``` - -``` text -┌──────────x─┬─multiply(6, toDecimal32(4.2, 8))─┐ -│ 4.20000000 │ -17.74967296 │ -└────────────┴──────────────────────────────────┘ -``` - -Les contrôles de débordement se produisent non seulement sur les opérations arithmétiques mais aussi sur la comparaison de valeurs: - -``` sql -SELECT toDecimal32(1, 8) < 100 -``` - -``` text -DB::Exception: Can't compare. -``` - -[Article Original](https://clickhouse.tech/docs/en/data_types/decimal/) diff --git a/docs/fr/sql-reference/data-types/domains/index.md b/docs/fr/sql-reference/data-types/domains/index.md deleted file mode 100644 index 7e11f9a8a68..00000000000 --- a/docs/fr/sql-reference/data-types/domains/index.md +++ /dev/null @@ -1,33 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_folder_title: Domaine -toc_priority: 56 -toc_title: "Aper\xE7u" ---- - -# Domaine {#domains} - -Les domaines sont des types spéciaux qui ajoutent des fonctionnalités supplémentaires au sommet du type de base existant, mais en laissant le format on-wire et on-disc du type de données sous-jacent intact. À l'heure actuelle, ClickHouse ne prend pas en charge les domaines définis par l'utilisateur. - -Vous pouvez utiliser des domaines partout type de base correspondant peut être utilisé, par exemple: - -- Créer une colonne d'un type de domaine -- Valeurs de lecture / écriture depuis / vers la colonne de domaine -- L'utiliser comme un indice si un type de base peut être utilisée comme un indice -- Fonctions d'appel avec des valeurs de colonne de domaine - -### Fonctionnalités supplémentaires des domaines {#extra-features-of-domains} - -- Nom de type de colonne explicite dans `SHOW CREATE TABLE` ou `DESCRIBE TABLE` -- Entrée du format convivial avec `INSERT INTO domain_table(domain_column) VALUES(...)` -- Sortie au format convivial pour `SELECT domain_column FROM domain_table` -- Chargement de données à partir d'une source externe dans un format convivial: `INSERT INTO domain_table FORMAT CSV ...` - -### Limitation {#limitations} - -- Impossible de convertir la colonne d'index du type de base en type de domaine via `ALTER TABLE`. -- Impossible de convertir implicitement des valeurs de chaîne en valeurs de domaine lors de l'insertion de données d'une autre colonne ou table. -- Le domaine n'ajoute aucune contrainte sur les valeurs stockées. 
- -[Article Original](https://clickhouse.tech/docs/en/data_types/domains/overview) diff --git a/docs/fr/sql-reference/data-types/domains/ipv4.md b/docs/fr/sql-reference/data-types/domains/ipv4.md deleted file mode 100644 index 12895992e77..00000000000 --- a/docs/fr/sql-reference/data-types/domains/ipv4.md +++ /dev/null @@ -1,84 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 59 -toc_title: IPv4 ---- - -## IPv4 {#ipv4} - -`IPv4` est un domaine basé sur `UInt32` tapez et sert de remplacement typé pour stocker des valeurs IPv4. Il fournit un stockage compact avec le format d'entrée-sortie convivial et les informations de type de colonne sur l'inspection. - -### Utilisation De Base {#basic-usage} - -``` sql -CREATE TABLE hits (url String, from IPv4) ENGINE = MergeTree() ORDER BY url; - -DESCRIBE TABLE hits; -``` - -``` text -┌─name─┬─type───┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┐ -│ url │ String │ │ │ │ │ -│ from │ IPv4 │ │ │ │ │ -└──────┴────────┴──────────────┴────────────────────┴─────────┴──────────────────┘ -``` - -Ou vous pouvez utiliser le domaine IPv4 comme clé: - -``` sql -CREATE TABLE hits (url String, from IPv4) ENGINE = MergeTree() ORDER BY from; -``` - -`IPv4` le domaine prend en charge le format d'entrée personnalisé en tant que chaînes IPv4: - -``` sql -INSERT INTO hits (url, from) VALUES ('https://wikipedia.org', '116.253.40.133')('https://clickhouse.tech', '183.247.232.58')('https://clickhouse.tech/docs/en/', '116.106.34.242'); - -SELECT * FROM hits; -``` - -``` text -┌─url────────────────────────────────┬───────────from─┐ -│ https://clickhouse.tech/docs/en/ │ 116.106.34.242 │ -│ https://wikipedia.org │ 116.253.40.133 │ -│ https://clickhouse.tech │ 183.247.232.58 │ -└────────────────────────────────────┴────────────────┘ -``` - -Les valeurs sont stockées sous forme binaire compacte: - -``` sql -SELECT toTypeName(from), hex(from) FROM hits LIMIT 1; -``` - -``` text -┌─toTypeName(from)─┬─hex(from)─┐ -│ IPv4 │ B7F7E83A │ -└──────────────────┴───────────┘ -``` - -Les valeurs de domaine ne sont pas implicitement convertibles en types autres que `UInt32`. -Si vous voulez convertir `IPv4` valeur à une chaîne, vous devez le faire explicitement avec `IPv4NumToString()` fonction: - -``` sql -SELECT toTypeName(s), IPv4NumToString(from) as s FROM hits LIMIT 1; -``` - - ┌─toTypeName(IPv4NumToString(from))─┬─s──────────────┐ - │ String │ 183.247.232.58 │ - └───────────────────────────────────┴────────────────┘ - -Ou coulé à un `UInt32` valeur: - -``` sql -SELECT toTypeName(i), CAST(from as UInt32) as i FROM hits LIMIT 1; -``` - -``` text -┌─toTypeName(CAST(from, 'UInt32'))─┬──────────i─┐ -│ UInt32 │ 3086477370 │ -└──────────────────────────────────┴────────────┘ -``` - -[Article Original](https://clickhouse.tech/docs/en/data_types/domains/ipv4) diff --git a/docs/fr/sql-reference/data-types/domains/ipv6.md b/docs/fr/sql-reference/data-types/domains/ipv6.md deleted file mode 100644 index 77510a950cb..00000000000 --- a/docs/fr/sql-reference/data-types/domains/ipv6.md +++ /dev/null @@ -1,86 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 60 -toc_title: IPv6 ---- - -## IPv6 {#ipv6} - -`IPv6` est un domaine basé sur `FixedString(16)` tapez et sert de remplacement typé pour stocker des valeurs IPv6. Il fournit un stockage compact avec le format d'entrée-sortie convivial et les informations de type de colonne sur l'inspection. 
- -### Utilisation De Base {#basic-usage} - -``` sql -CREATE TABLE hits (url String, from IPv6) ENGINE = MergeTree() ORDER BY url; - -DESCRIBE TABLE hits; -``` - -``` text -┌─name─┬─type───┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┐ -│ url │ String │ │ │ │ │ -│ from │ IPv6 │ │ │ │ │ -└──────┴────────┴──────────────┴────────────────────┴─────────┴──────────────────┘ -``` - -Ou vous pouvez utiliser `IPv6` domaine comme l'un des principaux: - -``` sql -CREATE TABLE hits (url String, from IPv6) ENGINE = MergeTree() ORDER BY from; -``` - -`IPv6` le domaine prend en charge l'entrée personnalisée en tant que chaînes IPv6: - -``` sql -INSERT INTO hits (url, from) VALUES ('https://wikipedia.org', '2a02:aa08:e000:3100::2')('https://clickhouse.tech', '2001:44c8:129:2632:33:0:252:2')('https://clickhouse.tech/docs/en/', '2a02:e980:1e::1'); - -SELECT * FROM hits; -``` - -``` text -┌─url────────────────────────────────┬─from──────────────────────────┐ -│ https://clickhouse.tech │ 2001:44c8:129:2632:33:0:252:2 │ -│ https://clickhouse.tech/docs/en/ │ 2a02:e980:1e::1 │ -│ https://wikipedia.org │ 2a02:aa08:e000:3100::2 │ -└────────────────────────────────────┴───────────────────────────────┘ -``` - -Les valeurs sont stockées sous forme binaire compacte: - -``` sql -SELECT toTypeName(from), hex(from) FROM hits LIMIT 1; -``` - -``` text -┌─toTypeName(from)─┬─hex(from)────────────────────────┐ -│ IPv6 │ 200144C8012926320033000002520002 │ -└──────────────────┴──────────────────────────────────┘ -``` - -Les valeurs de domaine ne sont pas implicitement convertibles en types autres que `FixedString(16)`. -Si vous voulez convertir `IPv6` valeur à une chaîne, vous devez le faire explicitement avec `IPv6NumToString()` fonction: - -``` sql -SELECT toTypeName(s), IPv6NumToString(from) as s FROM hits LIMIT 1; -``` - -``` text -┌─toTypeName(IPv6NumToString(from))─┬─s─────────────────────────────┐ -│ String │ 2001:44c8:129:2632:33:0:252:2 │ -└───────────────────────────────────┴───────────────────────────────┘ -``` - -Ou coulé à un `FixedString(16)` valeur: - -``` sql -SELECT toTypeName(i), CAST(from as FixedString(16)) as i FROM hits LIMIT 1; -``` - -``` text -┌─toTypeName(CAST(from, 'FixedString(16)'))─┬─i───────┐ -│ FixedString(16) │ ��� │ -└───────────────────────────────────────────┴─────────┘ -``` - -[Article Original](https://clickhouse.tech/docs/en/data_types/domains/ipv6) diff --git a/docs/fr/sql-reference/data-types/enum.md b/docs/fr/sql-reference/data-types/enum.md deleted file mode 100644 index b9751c1c804..00000000000 --- a/docs/fr/sql-reference/data-types/enum.md +++ /dev/null @@ -1,132 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 50 -toc_title: Enum ---- - -# Enum {#enum} - -Type énuméré composé de valeurs nommées. - -Les valeurs nommées doivent être déclarées comme `'string' = integer` pair. ClickHouse ne stocke que des nombres, mais prend en charge les opérations avec les valeurs à travers leurs noms. - -Supports ClickHouse: - -- 8-bit `Enum`. Il peut contenir jusqu'à 256 valeurs énumérées dans le `[-128, 127]` gamme. -- 16 bits `Enum`. Il peut contenir jusqu'à 65 536 valeurs énumérées dans le `[-32768, 32767]` gamme. - -Clickhouse choisit automatiquement le type de `Enum` lorsque les données sont insérées. Vous pouvez également utiliser `Enum8` ou `Enum16` types pour être sûr de la taille de stockage. 
- -## Exemples D'Utilisation {#usage-examples} - -Ici, nous créons une table avec une `Enum8('hello' = 1, 'world' = 2)` type de colonne: - -``` sql -CREATE TABLE t_enum -( - x Enum('hello' = 1, 'world' = 2) -) -ENGINE = TinyLog -``` - -Colonne `x` ne peut stocker que les valeurs répertoriées dans la définition de type: `'hello'` ou `'world'`. Si vous essayez d'enregistrer une autre valeur, ClickHouse déclenchera une exception. Taille 8 bits pour cela `Enum` est choisi automatiquement. - -``` sql -INSERT INTO t_enum VALUES ('hello'), ('world'), ('hello') -``` - -``` text -Ok. -``` - -``` sql -INSERT INTO t_enum values('a') -``` - -``` text -Exception on client: -Code: 49. DB::Exception: Unknown element 'a' for type Enum('hello' = 1, 'world' = 2) -``` - -Lorsque vous interrogez des données de la table, ClickHouse affiche les valeurs de chaîne de `Enum`. - -``` sql -SELECT * FROM t_enum -``` - -``` text -┌─x─────┐ -│ hello │ -│ world │ -│ hello │ -└───────┘ -``` - -Si vous avez besoin de voir les équivalents numériques des lignes, vous devez `Enum` valeur en type entier. - -``` sql -SELECT CAST(x, 'Int8') FROM t_enum -``` - -``` text -┌─CAST(x, 'Int8')─┐ -│ 1 │ -│ 2 │ -│ 1 │ -└─────────────────┘ -``` - -Pour créer une valeur d'Enum dans une requête, vous devez également utiliser `CAST`. - -``` sql -SELECT toTypeName(CAST('a', 'Enum(\'a\' = 1, \'b\' = 2)')) -``` - -``` text -┌─toTypeName(CAST('a', 'Enum(\'a\' = 1, \'b\' = 2)'))─┐ -│ Enum8('a' = 1, 'b' = 2) │ -└─────────────────────────────────────────────────────┘ -``` - -## Règles générales et utilisation {#general-rules-and-usage} - -Chacune des valeurs se voit attribuer un nombre dans la plage `-128 ... 127` pour `Enum8` ou dans la gamme `-32768 ... 32767` pour `Enum16`. Toutes les chaînes et les nombres doivent être différents. Une chaîne vide est autorisé. Si ce type est spécifié (dans une définition de table), les nombres peuvent être dans un ordre arbitraire. Toutefois, l'ordre n'a pas d'importance. - -Ni la chaîne ni la valeur numérique dans un `Enum` peut être [NULL](../../sql-reference/syntax.md). - -Un `Enum` peut être contenue dans [Nullable](nullable.md) type. Donc, si vous créez une table en utilisant la requête - -``` sql -CREATE TABLE t_enum_nullable -( - x Nullable( Enum8('hello' = 1, 'world' = 2) ) -) -ENGINE = TinyLog -``` - -il peut stocker non seulement des `'hello'` et `'world'`, mais `NULL`, ainsi. - -``` sql -INSERT INTO t_enum_nullable Values('hello'),('world'),(NULL) -``` - -Dans la mémoire RAM, un `Enum` la colonne est stockée dans la même manière que `Int8` ou `Int16` des valeurs numériques correspondantes. - -Lors de la lecture sous forme de texte, ClickHouse analyse la valeur sous forme de chaîne et recherche la chaîne correspondante à partir de l'ensemble des valeurs Enum. Si elle n'est pas trouvée, une exception est levée. Lors de la lecture au format texte, la chaîne est lue et la valeur numérique correspondante est recherchée. Une exception sera levée si il n'est pas trouvé. -Lors de l'écriture sous forme de texte, il écrit la valeur correspondante de la chaîne. Si les données de colonne contiennent des déchets (nombres qui ne proviennent pas de l'ensemble valide), une exception est levée. Lors de la lecture et de l'écriture sous forme binaire, cela fonctionne de la même manière que pour les types de données Int8 et Int16. -La valeur implicite par défaut est la valeur avec le numéro le plus bas. 
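A minimal sketch of that implicit default (the table name `t_enum_default` is invented for illustration; per the rule above, an omitted `Enum` column should come back as the lowest-numbered value):

``` sql
CREATE TABLE t_enum_default
(
    id UInt8,
    x Enum8('hello' = 1, 'world' = 2)
)
ENGINE = TinyLog;

-- x is omitted, so it receives the implicit default: the value with the lowest number
INSERT INTO t_enum_default (id) VALUES (1);

SELECT id, x, CAST(x, 'Int8') AS num FROM t_enum_default;  -- expected: 1, 'hello', 1
```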
- -Lors `ORDER BY`, `GROUP BY`, `IN`, `DISTINCT` et ainsi de suite, les Énumérations se comportent de la même façon que les nombres correspondants. Par exemple, ORDER BY les trie numériquement. Les opérateurs d'égalité et de comparaison fonctionnent de la même manière sur les énumérations que sur les valeurs numériques sous-jacentes. - -Les valeurs Enum ne peuvent pas être comparées aux nombres. Les Enums peuvent être comparés à une chaîne constante. Si la chaîne comparée à n'est pas une valeur valide pour L'énumération, une exception sera levée. L'opérateur est pris en charge avec l'Enum sur le côté gauche, et un ensemble de chaînes sur le côté droit. Les chaînes sont les valeurs de L'énumération correspondante. - -Most numeric and string operations are not defined for Enum values, e.g. adding a number to an Enum or concatenating a string to an Enum. -Cependant, L'énumération a un naturel `toString` fonction qui renvoie sa valeur de chaîne. - -Les valeurs Enum sont également convertibles en types numériques en utilisant `toT` fonction, où T est un type numérique. Lorsque T correspond au type numérique sous-jacent de l'énumération, cette conversion est à coût nul. -Le type Enum peut être modifié sans coût en utilisant ALTER, si seulement l'ensemble des valeurs est modifié. Il est possible d'ajouter et de supprimer des membres de L'énumération en utilisant ALTER (la suppression n'est sûre que si la valeur supprimée n'a jamais été utilisée dans la table). À titre de sauvegarde, la modification de la valeur numérique d'un membre Enum précédemment défini lancera une exception. - -En utilisant ALTER, il est possible de changer un Enum8 en Enum16 ou vice versa, tout comme changer un Int8 en Int16. - -[Article Original](https://clickhouse.tech/docs/en/data_types/enum/) diff --git a/docs/fr/sql-reference/data-types/fixedstring.md b/docs/fr/sql-reference/data-types/fixedstring.md deleted file mode 100644 index 5ba09187581..00000000000 --- a/docs/fr/sql-reference/data-types/fixedstring.md +++ /dev/null @@ -1,63 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 45 -toc_title: FixedString (N) ---- - -# Fixedstring {#fixedstring} - -Une chaîne de longueur fixe de `N` octets (ni caractères ni points de code). - -Pour déclarer une colonne de `FixedString` tapez, utilisez la syntaxe suivante: - -``` sql - FixedString(N) -``` - -Où `N` est un nombre naturel. - -Le `FixedString` type est efficace lorsque les données ont la longueur de précisément `N` octet. Dans tous les autres cas, il est susceptible de réduire l'efficacité. - -Exemples de valeurs qui peuvent être stockées efficacement dans `FixedString`-tapé colonnes: - -- La représentation binaire des adresses IP (`FixedString(16)` pour IPv6). -- Language codes (ru_RU, en_US … ). -- Currency codes (USD, RUB … ). -- Représentation binaire des hachages (`FixedString(16)` pour MD5, `FixedString(32)` pour SHA256). - -Pour stocker les valeurs UUID, utilisez [UUID](uuid.md) type de données. - -Lors de l'insertion des données, ClickHouse: - -- Complète une chaîne avec des octets null si la chaîne contient moins de `N` octet. -- Jette le `Too large value for FixedString(N)` exception si la chaîne contient plus de `N` octet. - -Lors de la sélection des données, ClickHouse ne supprime pas les octets nuls à la fin de la chaîne. Si vous utilisez le `WHERE` clause, vous devez ajouter des octets null manuellement pour `FixedString` valeur. 
L'exemple suivant illustre l'utilisation de l' `WHERE` la clause de `FixedString`. - -Considérons le tableau suivant avec le seul `FixedString(2)` colonne: - -``` text -┌─name──┐ -│ b │ -└───────┘ -``` - -Requête `SELECT * FROM FixedStringTable WHERE a = 'b'` ne renvoie aucune donnée en conséquence. Nous devrions compléter le modèle de filtre avec des octets nuls. - -``` sql -SELECT * FROM FixedStringTable -WHERE a = 'b\0' -``` - -``` text -┌─a─┐ -│ b │ -└───┘ -``` - -Ce comportement diffère de MySQL pour le `CHAR` type (où les chaînes sont remplies d'espaces et les espaces sont supprimés pour la sortie). - -À noter que la longueur de la `FixedString(N)` la valeur est constante. Le [longueur](../../sql-reference/functions/array-functions.md#array_functions-length) la fonction renvoie `N` même si l' `FixedString(N)` la valeur est remplie uniquement avec des octets [vide](../../sql-reference/functions/string-functions.md#empty) la fonction renvoie `1` dans ce cas. - -[Article Original](https://clickhouse.tech/docs/en/data_types/fixedstring/) diff --git a/docs/fr/sql-reference/data-types/float.md b/docs/fr/sql-reference/data-types/float.md deleted file mode 100644 index b269b930110..00000000000 --- a/docs/fr/sql-reference/data-types/float.md +++ /dev/null @@ -1,87 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 41 -toc_title: Float32, Float64 ---- - -# Float32, Float64 {#float32-float64} - -[Les nombres à virgule flottante](https://en.wikipedia.org/wiki/IEEE_754). - -Les Types sont équivalents aux types de C: - -- `Float32` - `float` -- `Float64` - `double` - -Nous vous recommandons de stocker les données sous forme entière chaque fois que possible. Par exemple, convertissez des nombres de précision fixes en valeurs entières, telles que des montants monétaires ou des temps de chargement de page en millisecondes. - -## Utilisation de nombres à virgule flottante {#using-floating-point-numbers} - -- Calculs avec des nombres à virgule flottante peut produire une erreur d'arrondi. - - - -``` sql -SELECT 1 - 0.9 -``` - -``` text -┌───────minus(1, 0.9)─┐ -│ 0.09999999999999998 │ -└─────────────────────┘ -``` - -- Le résultat du calcul dépend de la méthode de calcul (le type de processeur et de l'architecture du système informatique). -- Les calculs à virgule flottante peuvent entraîner des nombres tels que l'infini (`Inf`) et “not-a-number” (`NaN`). Cela doit être pris en compte lors du traitement des résultats de calculs. -- Lors de l'analyse de nombres à virgule flottante à partir de texte, le résultat peut ne pas être le nombre représentable par machine le plus proche. - -## NaN et Inf {#data_type-float-nan-inf} - -Contrairement à SQL standard, ClickHouse prend en charge les catégories suivantes de nombres à virgule flottante: - -- `Inf` – Infinity. - - - -``` sql -SELECT 0.5 / 0 -``` - -``` text -┌─divide(0.5, 0)─┐ -│ inf │ -└────────────────┘ -``` - -- `-Inf` – Negative infinity. - - - -``` sql -SELECT -0.5 / 0 -``` - -``` text -┌─divide(-0.5, 0)─┐ -│ -inf │ -└─────────────────┘ -``` - -- `NaN` – Not a number. - - - -``` sql -SELECT 0 / 0 -``` - -``` text -┌─divide(0, 0)─┐ -│ nan │ -└──────────────┘ -``` - - See the rules for `NaN` sorting in the section [ORDER BY clause](../sql_reference/statements/select/order-by.md). 
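When query results may contain these special values, they can be detected explicitly. A small illustration using the standard predicates `isNaN`, `isInfinite` and `isFinite`:

``` sql
SELECT
    isNaN(0 / 0) AS is_nan,          -- 1
    isInfinite(1 / 0) AS is_inf,     -- 1
    isFinite(1 - 0.9) AS is_finite   -- 1
```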
- -[Article Original](https://clickhouse.tech/docs/en/data_types/float/) diff --git a/docs/fr/sql-reference/data-types/index.md b/docs/fr/sql-reference/data-types/index.md deleted file mode 100644 index 887e2efd69f..00000000000 --- a/docs/fr/sql-reference/data-types/index.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_folder_title: "Types De Donn\xE9es" -toc_priority: 37 -toc_title: Introduction ---- - -# Types De Données {#data_types} - -ClickHouse peut stocker différents types de données dans des cellules de table. - -Cette section décrit les types de données pris en charge et les considérations spéciales pour les utiliser et/ou les implémenter le cas échéant. - -[Article Original](https://clickhouse.tech/docs/en/data_types/) diff --git a/docs/fr/sql-reference/data-types/int-uint.md b/docs/fr/sql-reference/data-types/int-uint.md deleted file mode 100644 index 9b196c164a4..00000000000 --- a/docs/fr/sql-reference/data-types/int-uint.md +++ /dev/null @@ -1,26 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 40 -toc_title: UInt8, UInt16, UInt32, UInt64, Int8, Int16, Int32, Int64 ---- - -# UInt8, UInt16, UInt32, UInt64, Int8, Int16, Int32, Int64 {#uint8-uint16-uint32-uint64-int8-int16-int32-int64} - -Entiers de longueur fixe, avec ou sans signe. - -## Plages Int {#int-ranges} - -- Int8 - \[-128: 127\] -- Int16 - \[-32768: 32767\] -- Int32 - \[-2147483648: 2147483647\] -- Int64 - \[-9223372036854775808: 9223372036854775807\] - -## Plages Uint {#uint-ranges} - -- UInt8 - \[0: 255\] -- UInt16 - \[0: 65535\] -- UInt32- \[0: 4294967295\] -- UInt64- \[0: 18446744073709551615\] - -[Article Original](https://clickhouse.tech/docs/en/data_types/int_uint/) diff --git a/docs/fr/sql-reference/data-types/nested-data-structures/index.md b/docs/fr/sql-reference/data-types/nested-data-structures/index.md deleted file mode 100644 index 528e0bad0cd..00000000000 --- a/docs/fr/sql-reference/data-types/nested-data-structures/index.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_folder_title: "Structures De Donn\xE9es Imbriqu\xE9es" -toc_hidden: true -toc_priority: 54 -toc_title: "cach\xE9s" ---- - -# Structures De Données Imbriquées {#nested-data-structures} - -[Article Original](https://clickhouse.tech/docs/en/data_types/nested_data_structures/) diff --git a/docs/fr/sql-reference/data-types/nested-data-structures/nested.md b/docs/fr/sql-reference/data-types/nested-data-structures/nested.md deleted file mode 100644 index 2805780de24..00000000000 --- a/docs/fr/sql-reference/data-types/nested-data-structures/nested.md +++ /dev/null @@ -1,106 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 57 -toc_title: "Imbriqu\xE9e(Type1 Nom1, Nom2 Type2, ...)" ---- - -# Nested(name1 Type1, Name2 Type2, …) {#nestedname1-type1-name2-type2} - -A nested data structure is like a table inside a cell. The parameters of a nested data structure – the column names and types – are specified the same way as in a [CREATE TABLE](../../../sql-reference/statements/create.md) requête. Chaque ligne de table peut correspondre à n'importe quel nombre de lignes dans une structure de données imbriquée. 
- -Exemple: - -``` sql -CREATE TABLE test.visits -( - CounterID UInt32, - StartDate Date, - Sign Int8, - IsNew UInt8, - VisitID UInt64, - UserID UInt64, - ... - Goals Nested - ( - ID UInt32, - Serial UInt32, - EventTime DateTime, - Price Int64, - OrderID String, - CurrencyID UInt32 - ), - ... -) ENGINE = CollapsingMergeTree(StartDate, intHash32(UserID), (CounterID, StartDate, intHash32(UserID), VisitID), 8192, Sign) -``` - -Cet exemple déclare le `Goals` structure de données imbriquée, qui contient des données sur les conversions (objectifs atteints). Chaque ligne de la ‘visits’ table peut correspondre à zéro ou n'importe quel nombre de conversions. - -Un seul niveau d'imbrication est pris en charge. Les colonnes de structures imbriquées contenant des tableaux sont équivalentes à des tableaux multidimensionnels, elles ont donc un support limité (il n'y a pas de support pour stocker ces colonnes dans des tables avec le moteur MergeTree). - -Dans la plupart des cas, lorsque vous travaillez avec une structure de données imbriquée, ses colonnes sont spécifiées avec des noms de colonnes séparés par un point. Ces colonnes constituent un tableau de types correspondants. Tous les tableaux de colonnes d'une structure de données imbriquée unique ont la même longueur. - -Exemple: - -``` sql -SELECT - Goals.ID, - Goals.EventTime -FROM test.visits -WHERE CounterID = 101500 AND length(Goals.ID) < 5 -LIMIT 10 -``` - -``` text -┌─Goals.ID───────────────────────┬─Goals.EventTime───────────────────────────────────────────────────────────────────────────┐ -│ [1073752,591325,591325] │ ['2014-03-17 16:38:10','2014-03-17 16:38:48','2014-03-17 16:42:27'] │ -│ [1073752] │ ['2014-03-17 00:28:25'] │ -│ [1073752] │ ['2014-03-17 10:46:20'] │ -│ [1073752,591325,591325,591325] │ ['2014-03-17 13:59:20','2014-03-17 22:17:55','2014-03-17 22:18:07','2014-03-17 22:18:51'] │ -│ [] │ [] │ -│ [1073752,591325,591325] │ ['2014-03-17 11:37:06','2014-03-17 14:07:47','2014-03-17 14:36:21'] │ -│ [] │ [] │ -│ [] │ [] │ -│ [591325,1073752] │ ['2014-03-17 00:46:05','2014-03-17 00:46:05'] │ -│ [1073752,591325,591325,591325] │ ['2014-03-17 13:28:33','2014-03-17 13:30:26','2014-03-17 18:51:21','2014-03-17 18:51:45'] │ -└────────────────────────────────┴───────────────────────────────────────────────────────────────────────────────────────────┘ -``` - -Il est plus facile de penser à une structure de données imbriquée comme un ensemble de plusieurs tableaux de colonnes de la même longueur. - -Le seul endroit où une requête SELECT peut spécifier le nom d'une structure de données imbriquée entière au lieu de colonnes individuelles est la clause de jointure de tableau. Pour plus d'informations, voir “ARRAY JOIN clause”. Exemple: - -``` sql -SELECT - Goal.ID, - Goal.EventTime -FROM test.visits -ARRAY JOIN Goals AS Goal -WHERE CounterID = 101500 AND length(Goals.ID) < 5 -LIMIT 10 -``` - -``` text -┌─Goal.ID─┬──────Goal.EventTime─┐ -│ 1073752 │ 2014-03-17 16:38:10 │ -│ 591325 │ 2014-03-17 16:38:48 │ -│ 591325 │ 2014-03-17 16:42:27 │ -│ 1073752 │ 2014-03-17 00:28:25 │ -│ 1073752 │ 2014-03-17 10:46:20 │ -│ 1073752 │ 2014-03-17 13:59:20 │ -│ 591325 │ 2014-03-17 22:17:55 │ -│ 591325 │ 2014-03-17 22:18:07 │ -│ 591325 │ 2014-03-17 22:18:51 │ -│ 1073752 │ 2014-03-17 11:37:06 │ -└─────────┴─────────────────────┘ -``` - -Vous ne pouvez pas effectuer SELECT pour une structure de données imbriquée entière. Vous ne pouvez lister explicitement que les colonnes individuelles qui en font partie. 
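A minimal sketch of writing to a nested structure (the table `nested_demo` is hypothetical; as the next paragraph explains, each component column is passed as a separate array, and all arrays must have the same length):

``` sql
-- Hypothetical table with a single-level nested structure
CREATE TABLE nested_demo
(
    id UInt32,
    goals Nested(gid UInt32, price Int64)
)
ENGINE = MergeTree
ORDER BY id;

-- Each nested component is inserted as a separate array column of equal length
INSERT INTO nested_demo (id, `goals.gid`, `goals.price`)
VALUES (1, [10, 20], [100, 200]);

-- Only individual component columns can be selected, never the structure as a whole
SELECT goals.gid, goals.price FROM nested_demo;
```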
- -Pour une requête INSERT, vous devez passer tous les tableaux de colonnes composant d'une structure de données imbriquée séparément (comme s'il s'agissait de tableaux de colonnes individuels). Au cours de l'insertion, le système vérifie qu'ils ont la même longueur. - -Pour une requête DESCRIBE, les colonnes d'une structure de données imbriquée sont répertoriées séparément de la même manière. - -La requête ALTER pour les éléments d'une structure de données imbriquée a des limites. - -[Article Original](https://clickhouse.tech/docs/en/data_types/nested_data_structures/nested/) diff --git a/docs/fr/sql-reference/data-types/nullable.md b/docs/fr/sql-reference/data-types/nullable.md deleted file mode 100644 index 6b37b571a96..00000000000 --- a/docs/fr/sql-reference/data-types/nullable.md +++ /dev/null @@ -1,46 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 54 -toc_title: Nullable ---- - -# Nullable(typename) {#data_type-nullable} - -Permet de stocker marqueur spécial ([NULL](../../sql-reference/syntax.md)) qui dénote “missing value” aux valeurs normales autorisées par `TypeName`. Par exemple, un `Nullable(Int8)` type colonne peut stocker `Int8` type de valeurs, et les lignes qui n'ont pas de valeur magasin `NULL`. - -Pour un `TypeName` vous ne pouvez pas utiliser les types de données composites [Tableau](array.md) et [Tuple](tuple.md). Les types de données composites peuvent contenir `Nullable` valeurs de type, telles que `Array(Nullable(Int8))`. - -A `Nullable` le champ type ne peut pas être inclus dans les index de table. - -`NULL` est la valeur par défaut pour tout `Nullable` type, sauf indication contraire dans la configuration du serveur ClickHouse. - -## Caractéristiques De Stockage {#storage-features} - -Stocker `Nullable` valeurs de type dans une colonne de table, ClickHouse utilise un fichier séparé avec `NULL` masques en plus du fichier normal avec des valeurs. Les entrées du fichier masks permettent à ClickHouse de faire la distinction entre `NULL` et une valeur par défaut du type de données correspondant pour chaque ligne de table. En raison d'un fichier supplémentaire, `Nullable` colonne consomme de l'espace de stockage supplémentaire par rapport à une normale similaire. - -!!! info "Note" - Utiliser `Nullable` affecte presque toujours négativement les performances, gardez cela à l'esprit lors de la conception de vos bases de données. - -## Exemple D'Utilisation {#usage-example} - -``` sql -CREATE TABLE t_null(x Int8, y Nullable(Int8)) ENGINE TinyLog -``` - -``` sql -INSERT INTO t_null VALUES (1, NULL), (2, 3) -``` - -``` sql -SELECT x + y FROM t_null -``` - -``` text -┌─plus(x, y)─┐ -│ ᴺᵁᴸᴸ │ -│ 5 │ -└────────────┘ -``` - -[Article Original](https://clickhouse.tech/docs/en/data_types/nullable/) diff --git a/docs/fr/sql-reference/data-types/simpleaggregatefunction.md b/docs/fr/sql-reference/data-types/simpleaggregatefunction.md deleted file mode 100644 index 81fcd67cfae..00000000000 --- a/docs/fr/sql-reference/data-types/simpleaggregatefunction.md +++ /dev/null @@ -1,38 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd ---- - -# SimpleAggregateFunction {#data-type-simpleaggregatefunction} - -`SimpleAggregateFunction(name, types_of_arguments…)` le type de données stocke la valeur actuelle de la fonction d'agrégat et ne stocke pas son état complet comme [`AggregateFunction`](aggregatefunction.md) faire. 
Cette optimisation peut être appliquée aux fonctions pour lesquelles la propriété suivante est conservée: le résultat de l'application d'une fonction `f` pour un ensemble de lignes `S1 UNION ALL S2` peut être obtenu en appliquant `f` pour les parties de la ligne définie séparément, puis à nouveau l'application `f` pour les résultats: `f(S1 UNION ALL S2) = f(f(S1) UNION ALL f(S2))`. Cette propriété garantit que les résultats d'agrégation partielle sont suffisants pour calculer le combiné, de sorte que nous n'avons pas à stocker et traiter de données supplémentaires. - -Les fonctions d'agrégation suivantes sont prises en charge: - -- [`any`](../../sql-reference/aggregate-functions/reference.md#agg_function-any) -- [`anyLast`](../../sql-reference/aggregate-functions/reference.md#anylastx) -- [`min`](../../sql-reference/aggregate-functions/reference.md#agg_function-min) -- [`max`](../../sql-reference/aggregate-functions/reference.md#agg_function-max) -- [`sum`](../../sql-reference/aggregate-functions/reference.md#agg_function-sum) -- [`groupBitAnd`](../../sql-reference/aggregate-functions/reference.md#groupbitand) -- [`groupBitOr`](../../sql-reference/aggregate-functions/reference.md#groupbitor) -- [`groupBitXor`](../../sql-reference/aggregate-functions/reference.md#groupbitxor) - -Les valeurs de la `SimpleAggregateFunction(func, Type)` regarder et stockées de la même manière que `Type`, de sorte que vous n'avez pas besoin d'appliquer des fonctions avec `-Merge`/`-State` suffixe. `SimpleAggregateFunction` a de meilleures performances que `AggregateFunction` avec la même fonction d'agrégation. - -**Paramètre** - -- Nom de la fonction d'agrégation. -- Types des arguments de la fonction d'agrégation. - -**Exemple** - -``` sql -CREATE TABLE t -( - column1 SimpleAggregateFunction(sum, UInt64), - column2 SimpleAggregateFunction(any, String) -) ENGINE = ... -``` - -[Article Original](https://clickhouse.tech/docs/en/data_types/simpleaggregatefunction/) diff --git a/docs/fr/sql-reference/data-types/special-data-types/expression.md b/docs/fr/sql-reference/data-types/special-data-types/expression.md deleted file mode 100644 index c3ba5e42ba1..00000000000 --- a/docs/fr/sql-reference/data-types/special-data-types/expression.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 58 -toc_title: Expression ---- - -# Expression {#expression} - -Les Expressions sont utilisées pour représenter des lambdas dans des fonctions d'ordre Élevé. - -[Article Original](https://clickhouse.tech/docs/en/data_types/special_data_types/expression/) diff --git a/docs/fr/sql-reference/data-types/special-data-types/index.md b/docs/fr/sql-reference/data-types/special-data-types/index.md deleted file mode 100644 index 6d292dc522e..00000000000 --- a/docs/fr/sql-reference/data-types/special-data-types/index.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_folder_title: "Types De Donn\xE9es Sp\xE9ciaux" -toc_hidden: true -toc_priority: 55 -toc_title: "cach\xE9s" ---- - -# Types De Données Spéciaux {#special-data-types} - -Les valeurs de type de données spéciales ne peuvent pas être sérialisées pour l'enregistrement dans une table ou la sortie dans les résultats de la requête, mais peuvent être utilisées comme résultat intermédiaire lors de l'exécution de la requête. 
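For example, the lambda passed to a higher-order function (the `Expression` type described above) and the right-hand side of an `IN` expression (the `Set` type described further down) exist only while the query runs; neither can be declared as a column type. A trivial illustration:

``` sql
SELECT
    arrayMap(x -> x * 2, [1, 2, 3]) AS doubled,  -- the lambda has the special Expression type
    3 IN (1, 2, 3) AS hit                        -- the right-hand side is held as a Set
```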
- -[Article Original](https://clickhouse.tech/docs/en/data_types/special_data_types/) diff --git a/docs/fr/sql-reference/data-types/special-data-types/interval.md b/docs/fr/sql-reference/data-types/special-data-types/interval.md deleted file mode 100644 index 464de8a10ab..00000000000 --- a/docs/fr/sql-reference/data-types/special-data-types/interval.md +++ /dev/null @@ -1,85 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 61 -toc_title: Intervalle ---- - -# Intervalle {#data-type-interval} - -Famille de types de données représentant des intervalles d'heure et de date. Les types de la [INTERVAL](../../../sql-reference/operators/index.md#operator-interval) opérateur. - -!!! warning "Avertissement" - `Interval` les valeurs de type de données ne peuvent pas être stockées dans les tables. - -Structure: - -- Intervalle de temps en tant que valeur entière non signée. -- Type de l'intervalle. - -Types d'intervalles pris en charge: - -- `SECOND` -- `MINUTE` -- `HOUR` -- `DAY` -- `WEEK` -- `MONTH` -- `QUARTER` -- `YEAR` - -Pour chaque type d'intervalle, il existe un type de données distinct. Par exemple, l' `DAY` l'intervalle correspond au `IntervalDay` type de données: - -``` sql -SELECT toTypeName(INTERVAL 4 DAY) -``` - -``` text -┌─toTypeName(toIntervalDay(4))─┐ -│ IntervalDay │ -└──────────────────────────────┘ -``` - -## Utilisation Remarques {#data-type-interval-usage-remarks} - -Vous pouvez utiliser `Interval`-tapez des valeurs dans des opérations arithmétiques avec [Date](../../../sql-reference/data-types/date.md) et [DateTime](../../../sql-reference/data-types/datetime.md)-type de valeurs. Par exemple, vous pouvez ajouter 4 jours à l'heure actuelle: - -``` sql -SELECT now() as current_date_time, current_date_time + INTERVAL 4 DAY -``` - -``` text -┌───current_date_time─┬─plus(now(), toIntervalDay(4))─┐ -│ 2019-10-23 10:58:45 │ 2019-10-27 10:58:45 │ -└─────────────────────┴───────────────────────────────┘ -``` - -Les intervalles avec différents types ne peuvent pas être combinés. Vous ne pouvez pas utiliser des intervalles comme `4 DAY 1 HOUR`. Spécifiez des intervalles en unités inférieures ou égales à la plus petite unité de l'intervalle, par exemple, l'intervalle `1 day and an hour` l'intervalle peut être exprimée comme `25 HOUR` ou `90000 SECOND`. - -Vous ne pouvez pas effectuer d'opérations arithmétiques avec `Interval`- tapez des valeurs, mais vous pouvez ajouter des intervalles de différents types par conséquent aux valeurs dans `Date` ou `DateTime` types de données. Exemple: - -``` sql -SELECT now() AS current_date_time, current_date_time + INTERVAL 4 DAY + INTERVAL 3 HOUR -``` - -``` text -┌───current_date_time─┬─plus(plus(now(), toIntervalDay(4)), toIntervalHour(3))─┐ -│ 2019-10-23 11:16:28 │ 2019-10-27 14:16:28 │ -└─────────────────────┴────────────────────────────────────────────────────────┘ -``` - -La requête suivante provoque une exception: - -``` sql -select now() AS current_date_time, current_date_time + (INTERVAL 4 DAY + INTERVAL 3 HOUR) -``` - -``` text -Received exception from server (version 19.14.1): -Code: 43. DB::Exception: Received from localhost:9000. DB::Exception: Wrong argument types for function plus: if one argument is Interval, then another must be Date or DateTime.. 
-``` - -## Voir Aussi {#see-also} - -- [INTERVAL](../../../sql-reference/operators/index.md#operator-interval) opérateur -- [toInterval](../../../sql-reference/functions/type-conversion-functions.md#function-tointerval) type fonctions de conversion diff --git a/docs/fr/sql-reference/data-types/special-data-types/nothing.md b/docs/fr/sql-reference/data-types/special-data-types/nothing.md deleted file mode 100644 index 2e3d76b7207..00000000000 --- a/docs/fr/sql-reference/data-types/special-data-types/nothing.md +++ /dev/null @@ -1,26 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 60 -toc_title: Rien ---- - -# Rien {#nothing} - -Le seul but de ce type de données est de représenter les cas où une valeur n'est pas prévu. Donc vous ne pouvez pas créer un `Nothing` type de valeur. - -Par exemple, littéral [NULL](../../../sql-reference/syntax.md#null-literal) a type de `Nullable(Nothing)`. Voir plus sur [Nullable](../../../sql-reference/data-types/nullable.md). - -Le `Nothing` type peut également être utilisé pour désigner des tableaux vides: - -``` sql -SELECT toTypeName(array()) -``` - -``` text -┌─toTypeName(array())─┐ -│ Array(Nothing) │ -└─────────────────────┘ -``` - -[Article Original](https://clickhouse.tech/docs/en/data_types/special_data_types/nothing/) diff --git a/docs/fr/sql-reference/data-types/special-data-types/set.md b/docs/fr/sql-reference/data-types/special-data-types/set.md deleted file mode 100644 index 8f50175bb6b..00000000000 --- a/docs/fr/sql-reference/data-types/special-data-types/set.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 59 -toc_title: "D\xE9finir" ---- - -# Définir {#set} - -Utilisé pour la moitié droite d'un [IN](../../operators/in.md#select-in-operators) expression. - -[Article Original](https://clickhouse.tech/docs/en/data_types/special_data_types/set/) diff --git a/docs/fr/sql-reference/data-types/string.md b/docs/fr/sql-reference/data-types/string.md deleted file mode 100644 index b82e1fe6c69..00000000000 --- a/docs/fr/sql-reference/data-types/string.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 44 -toc_title: "Cha\xEEne" ---- - -# Chaîne {#string} - -Les chaînes d'une longueur arbitraire. La longueur n'est pas limitée. La valeur peut contenir un ensemble arbitraire d'octets, y compris des octets nuls. -Le type de chaîne remplace les types VARCHAR, BLOB, CLOB et autres provenant d'autres SGBD. - -## Encodage {#encodings} - -ClickHouse n'a pas le concept d'encodages. Les chaînes peuvent contenir un ensemble arbitraire d'octets, qui sont stockés et sortis tels quels. -Si vous avez besoin de stocker des textes, nous vous recommandons d'utiliser L'encodage UTF-8. À tout le moins, si votre terminal utilise UTF-8 (comme recommandé), vous pouvez lire et écrire vos valeurs sans effectuer de conversions. -De même, certaines fonctions pour travailler avec des chaînes ont des variations distinctes qui fonctionnent sous l'hypothèse que la chaîne contient un ensemble d'octets représentant un texte codé en UTF-8. -Par exemple, l' ‘length’ fonction calcule la longueur de la chaîne en octets, tandis que le ‘lengthUTF8’ la fonction calcule la longueur de la chaîne en points de code Unicode, en supposant que la valeur est encodée en UTF-8. 
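A short illustration of the byte versus code-point distinction described above; a sketch assuming the string literal reaches the server encoded as UTF-8:

``` sql
-- 'é' occupies two bytes in UTF-8, so the byte length of the string
-- differs from its length counted in Unicode code points.
SELECT length('café') AS bytes, lengthUTF8('café') AS code_points

-- ┌─bytes─┬─code_points─┐
-- │     5 │           4 │
-- └───────┴─────────────┘
```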
- -[Article Original](https://clickhouse.tech/docs/en/data_types/string/) diff --git a/docs/fr/sql-reference/data-types/tuple.md b/docs/fr/sql-reference/data-types/tuple.md deleted file mode 100644 index ab9db735181..00000000000 --- a/docs/fr/sql-reference/data-types/tuple.md +++ /dev/null @@ -1,52 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 53 -toc_title: Tuple (T1, T2,...) ---- - -# Tuple(t1, T2, …) {#tuplet1-t2} - -Un n-uplet d'éléments, chacun ayant une personne [type](index.md#data_types). - -Les Tuples sont utilisés pour le regroupement temporaire de colonnes. Les colonnes peuvent être regroupées lorsqu'une expression IN est utilisée dans une requête et pour spécifier certains paramètres formels des fonctions lambda. Pour plus d'informations, voir les sections [Dans les opérateurs](../../sql-reference/operators/in.md) et [Des fonctions d'ordre supérieur](../../sql-reference/functions/higher-order-functions.md). - -Les Tuples peuvent être le résultat d'une requête. Dans ce cas, pour les formats de texte autres que JSON, les valeurs sont séparées par des virgules entre parenthèses. Dans les formats JSON, les tuples sont sortis sous forme de tableaux (entre crochets). - -## La création d'un Tuple {#creating-a-tuple} - -Vous pouvez utiliser une fonction pour créer un tuple: - -``` sql -tuple(T1, T2, ...) -``` - -Exemple de création d'un tuple: - -``` sql -SELECT tuple(1,'a') AS x, toTypeName(x) -``` - -``` text -┌─x───────┬─toTypeName(tuple(1, 'a'))─┐ -│ (1,'a') │ Tuple(UInt8, String) │ -└─────────┴───────────────────────────┘ -``` - -## Utilisation de Types de données {#working-with-data-types} - -Lors de la création d'un tuple à la volée, ClickHouse détecte automatiquement le type de chaque argument comme le minimum des types qui peuvent stocker la valeur de l'argument. Si l'argument est [NULL](../../sql-reference/syntax.md#null-literal) le type de l'élément tuple est [Nullable](nullable.md). - -Exemple de détection automatique de type de données: - -``` sql -SELECT tuple(1, NULL) AS x, toTypeName(x) -``` - -``` text -┌─x────────┬─toTypeName(tuple(1, NULL))──────┐ -│ (1,NULL) │ Tuple(UInt8, Nullable(Nothing)) │ -└──────────┴─────────────────────────────────┘ -``` - -[Article Original](https://clickhouse.tech/docs/en/data_types/tuple/) diff --git a/docs/fr/sql-reference/data-types/uuid.md b/docs/fr/sql-reference/data-types/uuid.md deleted file mode 100644 index 60973a3f855..00000000000 --- a/docs/fr/sql-reference/data-types/uuid.md +++ /dev/null @@ -1,77 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 46 -toc_title: UUID ---- - -# UUID {#uuid-data-type} - -Un identifiant unique universel (UUID) est un numéro de 16 octets utilisé pour identifier les enregistrements. Pour plus d'informations sur L'UUID, voir [Wikipedia](https://en.wikipedia.org/wiki/Universally_unique_identifier). - -L'exemple de valeur de type UUID est représenté ci-dessous: - -``` text -61f0c404-5cb3-11e7-907b-a6006ad3dba0 -``` - -Si vous ne spécifiez pas la valeur de la colonne UUID lors de l'insertion d'un nouvel enregistrement, la valeur UUID est remplie avec zéro: - -``` text -00000000-0000-0000-0000-000000000000 -``` - -## Comment générer {#how-to-generate} - -Pour générer la valeur UUID, ClickHouse fournit [generateUUIDv4](../../sql-reference/functions/uuid-functions.md) fonction. 
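A minimal sketch of the generator in isolation; each call produces a new random (version 4) value:

``` sql
SELECT generateUUIDv4() AS uuid, toTypeName(uuid) AS type

-- uuid differs on every call; type is always 'UUID'
```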
- -## Exemple D'Utilisation {#usage-example} - -**Exemple 1** - -Cet exemple montre la création d'une table avec la colonne de type UUID et l'insertion d'une valeur dans la table. - -``` sql -CREATE TABLE t_uuid (x UUID, y String) ENGINE=TinyLog -``` - -``` sql -INSERT INTO t_uuid SELECT generateUUIDv4(), 'Example 1' -``` - -``` sql -SELECT * FROM t_uuid -``` - -``` text -┌────────────────────────────────────x─┬─y─────────┐ -│ 417ddc5d-e556-4d27-95dd-a34d84e46a50 │ Example 1 │ -└──────────────────────────────────────┴───────────┘ -``` - -**Exemple 2** - -Dans cet exemple, la valeur de la colonne UUID n'est pas spécifiée lors de l'insertion d'un nouvel enregistrement. - -``` sql -INSERT INTO t_uuid (y) VALUES ('Example 2') -``` - -``` sql -SELECT * FROM t_uuid -``` - -``` text -┌────────────────────────────────────x─┬─y─────────┐ -│ 417ddc5d-e556-4d27-95dd-a34d84e46a50 │ Example 1 │ -│ 00000000-0000-0000-0000-000000000000 │ Example 2 │ -└──────────────────────────────────────┴───────────┘ -``` - -## Restriction {#restrictions} - -Le type de données UUID ne prend en charge que les fonctions qui [Chaîne](string.md) type de données prend également en charge (par exemple, [min](../../sql-reference/aggregate-functions/reference.md#agg_function-min), [Max](../../sql-reference/aggregate-functions/reference.md#agg_function-max), et [compter](../../sql-reference/aggregate-functions/reference.md#agg_function-count)). - -Le type de données UUID n'est pas pris en charge par les opérations arithmétiques (par exemple, [ABS](../../sql-reference/functions/arithmetic-functions.md#arithm_func-abs)) ou des fonctions d'agrégation, comme [somme](../../sql-reference/aggregate-functions/reference.md#agg_function-sum) et [avg](../../sql-reference/aggregate-functions/reference.md#agg_function-avg). - -[Article Original](https://clickhouse.tech/docs/en/data_types/uuid/) diff --git a/docs/fr/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-hierarchical.md b/docs/fr/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-hierarchical.md deleted file mode 100644 index cc238f02f3a..00000000000 --- a/docs/fr/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-hierarchical.md +++ /dev/null @@ -1,70 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 45 -toc_title: "Dictionnaires hi\xE9rarchiques" ---- - -# Dictionnaires Hiérarchiques {#hierarchical-dictionaries} - -Clickhouse prend en charge les dictionnaires hiérarchiques avec un [touche numérique](external-dicts-dict-structure.md#ext_dict-numeric-key). - -Voici une structure hiérarchique: - -``` text -0 (Common parent) -│ -├── 1 (Russia) -│ │ -│ └── 2 (Moscow) -│ │ -│ └── 3 (Center) -│ -└── 4 (Great Britain) - │ - └── 5 (London) -``` - -Cette hiérarchie peut être exprimée comme la table de dictionnaire suivante. - -| id_région | région_parent | nom_région | -|------------|----------------|--------------------| -| 1 | 0 | Russie | -| 2 | 1 | Moscou | -| 3 | 2 | Center | -| 4 | 0 | La Grande-Bretagne | -| 5 | 4 | Londres | - -Ce tableau contient une colonne `parent_region` qui contient la clé du parent le plus proche de l'élément. - -Clickhouse soutient le [hiérarchique](external-dicts-dict-structure.md#hierarchical-dict-attr) propriété pour [externe dictionnaire](index.md) attribut. Cette propriété vous permet de configurer le dictionnaire hiérarchique comme décrit ci-dessus. 
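As a sketch of what the hierarchy gives you at query time (the dictionary name `regions_dict` is hypothetical, standing in for a dictionary defined over the table above):

``` sql
-- Walk one level up: the closest parent of region 3 (Center)
-- is region 2 (Moscow), as stored in the parent_region attribute.
SELECT dictGetUInt64('regions_dict', 'parent_region', toUInt64(3)) AS parent

-- ┌─parent─┐
-- │      2 │
-- └────────┘
```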
- -Le [dictGetHierarchy](../../../sql-reference/functions/ext-dict-functions.md#dictgethierarchy) la fonction vous permet d'obtenir la chaîne parent d'un élément. - -Pour notre exemple, la structure du dictionnaire peut être la suivante: - -``` xml - - - - region_id - - - - parent_region - UInt64 - 0 - true - - - - region_name - String - - - - - -``` - -[Article Original](https://clickhouse.tech/docs/en/query_language/dicts/external_dicts_dict_hierarchical/) diff --git a/docs/fr/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-layout.md b/docs/fr/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-layout.md deleted file mode 100644 index 2569329fefd..00000000000 --- a/docs/fr/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-layout.md +++ /dev/null @@ -1,407 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 41 -toc_title: "Stockage des dictionnaires en m\xE9moire" ---- - -# Stockage des dictionnaires en mémoire {#dicts-external-dicts-dict-layout} - -Il existe une variété de façons de stocker les dictionnaires en mémoire. - -Nous vous recommandons [plat](#flat), [haché](#dicts-external_dicts_dict_layout-hashed) et [complex_key_hashed](#complex-key-hashed). qui fournissent la vitesse de traitement optimale. - -La mise en cache n'est pas recommandée en raison de performances potentiellement médiocres et de difficultés à sélectionner les paramètres optimaux. En savoir plus dans la section “[cache](#cache)”. - -Il existe plusieurs façons d'améliorer les performances du dictionnaire: - -- Appelez la fonction pour travailler avec le dictionnaire après `GROUP BY`. -- Marquer les attributs à extraire comme injectifs. Un attribut est appelé injectif si différentes valeurs d'attribut correspondent à différentes clés. Alors, quand `GROUP BY` utilise une fonction qui récupère une valeur d'attribut par la clé, cette fonction est automatiquement retirée de `GROUP BY`. - -ClickHouse génère une exception pour les erreurs avec les dictionnaires. Des exemples d'erreurs: - -- Le dictionnaire accessible n'a pas pu être chargé. -- Erreur de la requête d'une `cached` dictionnaire. - -Vous pouvez afficher la liste des dictionnaires externes et leurs statuts dans le `system.dictionaries` table. - -La configuration ressemble à ceci: - -``` xml - - - ... - - - - - - ... - - -``` - -Correspondant [DDL-requête](../../statements/create.md#create-dictionary-query): - -``` sql -CREATE DICTIONARY (...) -... -LAYOUT(LAYOUT_TYPE(param value)) -- layout settings -... -``` - -## Façons de stocker des dictionnaires en mémoire {#ways-to-store-dictionaries-in-memory} - -- [plat](#flat) -- [haché](#dicts-external_dicts_dict_layout-hashed) -- [sparse_hashed](#dicts-external_dicts_dict_layout-sparse_hashed) -- [cache](#cache) -- [direct](#direct) -- [range_hashed](#range-hashed) -- [complex_key_hashed](#complex-key-hashed) -- [complex_key_cache](#complex-key-cache) -- [complex_key_direct](#complex-key-direct) -- [ip_trie](#ip-trie) - -### plat {#flat} - -Le dictionnaire est complètement stocké en mémoire sous la forme de tableaux plats. Combien de mémoire le dictionnaire utilise-t-il? Le montant est proportionnel à la taille de la plus grande clé (dans l'espace). - -La clé du dictionnaire a le `UInt64` type et la valeur est limitée à 500 000. Si une clé plus grande est découverte lors de la création du dictionnaire, ClickHouse lève une exception et ne crée pas le dictionnaire. 
- -Tous les types de sources sont pris en charge. Lors de la mise à jour, les données (à partir d'un fichier ou d'une table) sont lues dans leur intégralité. - -Cette méthode fournit les meilleures performances parmi toutes les méthodes disponibles de stockage du dictionnaire. - -Exemple de Configuration: - -``` xml - - - -``` - -ou - -``` sql -LAYOUT(FLAT()) -``` - -### haché {#dicts-external_dicts_dict_layout-hashed} - -Le dictionnaire est entièrement stockée en mémoire sous la forme d'une table de hachage. Le dictionnaire peut contenir n'importe quel nombre d'éléments avec tous les identificateurs Dans la pratique, le nombre de clés peut atteindre des dizaines de millions d'articles. - -Tous les types de sources sont pris en charge. Lors de la mise à jour, les données (à partir d'un fichier ou d'une table) sont lues dans leur intégralité. - -Exemple de Configuration: - -``` xml - - - -``` - -ou - -``` sql -LAYOUT(HASHED()) -``` - -### sparse_hashed {#dicts-external_dicts_dict_layout-sparse_hashed} - -Semblable à `hashed`, mais utilise moins de mémoire en faveur de plus D'utilisation du processeur. - -Exemple de Configuration: - -``` xml - - - -``` - -``` sql -LAYOUT(SPARSE_HASHED()) -``` - -### complex_key_hashed {#complex-key-hashed} - -Ce type de stockage est pour une utilisation avec composite [touches](external-dicts-dict-structure.md). Semblable à `hashed`. - -Exemple de Configuration: - -``` xml - - - -``` - -``` sql -LAYOUT(COMPLEX_KEY_HASHED()) -``` - -### range_hashed {#range-hashed} - -Le dictionnaire est stocké en mémoire sous la forme d'une table de hachage avec un tableau ordonné de gammes et leurs valeurs correspondantes. - -Cette méthode de stockage fonctionne de la même manière que hachée et permet d'utiliser des plages de date / heure (Type numérique arbitraire) en plus de la clé. - -Exemple: Le tableau contient des réductions pour chaque annonceur dans le format: - -``` text -+---------|-------------|-------------|------+ -| advertiser id | discount start date | discount end date | amount | -+===============+=====================+===================+========+ -| 123 | 2015-01-01 | 2015-01-15 | 0.15 | -+---------|-------------|-------------|------+ -| 123 | 2015-01-16 | 2015-01-31 | 0.25 | -+---------|-------------|-------------|------+ -| 456 | 2015-01-01 | 2015-01-15 | 0.05 | -+---------|-------------|-------------|------+ -``` - -Pour utiliser un échantillon pour les plages de dates, définissez `range_min` et `range_max` éléments dans le [structure](external-dicts-dict-structure.md). Ces éléments doivent contenir des éléments `name` et`type` (si `type` n'est pas spécifié, le type par défaut sera utilisé-Date). `type` peut être n'importe quel type numérique (Date / DateTime / UInt64 / Int32 / autres). - -Exemple: - -``` xml - - - Id - - - first - Date - - - last - Date - - ... -``` - -ou - -``` sql -CREATE DICTIONARY somedict ( - id UInt64, - first Date, - last Date -) -PRIMARY KEY id -LAYOUT(RANGE_HASHED()) -RANGE(MIN first MAX last) -``` - -Pour travailler avec ces dictionnaires, vous devez passer un argument supplémentaire à l' `dictGetT` fonction, pour laquelle une plage est sélectionnée: - -``` sql -dictGetT('dict_name', 'attr_name', id, date) -``` - -Cette fonction retourne la valeur pour l' `id`s et la plage de dates qui inclut la date passée. - -Détails de l'algorithme: - -- Si l' `id` est introuvable ou une plage n'est pas trouvé pour l' `id` il retourne la valeur par défaut pour le dictionnaire. 
-- S'il y a des plages qui se chevauchent, vous pouvez en utiliser. -- Si le délimiteur est `NULL` ou une date non valide (telle que 1900-01-01 ou 2039-01-01), la plage est laissée ouverte. La gamme peut être ouverte des deux côtés. - -Exemple de Configuration: - -``` xml - - - - ... - - - - - - - - Abcdef - - - StartTimeStamp - UInt64 - - - EndTimeStamp - UInt64 - - - XXXType - String - - - - - - -``` - -ou - -``` sql -CREATE DICTIONARY somedict( - Abcdef UInt64, - StartTimeStamp UInt64, - EndTimeStamp UInt64, - XXXType String DEFAULT '' -) -PRIMARY KEY Abcdef -RANGE(MIN StartTimeStamp MAX EndTimeStamp) -``` - -### cache {#cache} - -Le dictionnaire est stocké dans un cache qui a un nombre fixe de cellules. Ces cellules contiennent des éléments fréquemment utilisés. - -Lors de la recherche d'un dictionnaire, le cache est recherché en premier. Pour chaque bloc de données, toutes les clés qui ne sont pas trouvées dans le cache ou qui sont obsolètes sont demandées à la source en utilisant `SELECT attrs... FROM db.table WHERE id IN (k1, k2, ...)`. Les données reçues sont ensuite écrites dans le cache. - -Pour les dictionnaires de cache, l'expiration [vie](external-dicts-dict-lifetime.md) des données dans le cache peuvent être définies. Si plus de temps que `lifetime` passé depuis le chargement des données dans une cellule, la valeur de la cellule n'est pas utilisée et elle est demandée à nouveau la prochaine fois qu'elle doit être utilisée. -C'est la moins efficace de toutes les façons de stocker les dictionnaires. La vitesse du cache dépend fortement des paramètres corrects et que le scénario d'utilisation. Un dictionnaire de type de cache fonctionne bien uniquement lorsque les taux de réussite sont suffisamment élevés (recommandé 99% et plus). Vous pouvez afficher le taux de réussite moyen dans le `system.dictionaries` table. - -Pour améliorer les performances du cache, utilisez une sous-requête avec `LIMIT`, et appelez la fonction avec le dictionnaire en externe. - -Soutenu [source](external-dicts-dict-sources.md): MySQL, ClickHouse, exécutable, HTTP. - -Exemple de paramètres: - -``` xml - - - - 1000000000 - - -``` - -ou - -``` sql -LAYOUT(CACHE(SIZE_IN_CELLS 1000000000)) -``` - -Définissez une taille de cache suffisamment grande. Vous devez expérimenter pour sélectionner le nombre de cellules: - -1. Définissez une valeur. -2. Exécutez les requêtes jusqu'à ce que le cache soit complètement plein. -3. Évaluer la consommation de mémoire en utilisant le `system.dictionaries` table. -4. Augmentez ou diminuez le nombre de cellules jusqu'à ce que la consommation de mémoire requise soit atteinte. - -!!! warning "Avertissement" - N'utilisez pas ClickHouse comme source, car le traitement des requêtes avec des lectures aléatoires est lent. - -### complex_key_cache {#complex-key-cache} - -Ce type de stockage est pour une utilisation avec composite [touches](external-dicts-dict-structure.md). Semblable à `cache`. - -### direct {#direct} - -Le dictionnaire n'est pas stocké dans la mémoire et va directement à la source, pendant le traitement d'une demande. - -La clé du dictionnaire a le `UInt64` type. - -Tous les types de [source](external-dicts-dict-sources.md), sauf les fichiers locaux, sont pris en charge. 
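A usage sketch with hypothetical dictionary and attribute names; because nothing is cached, each lookup below turns into a request to the configured source:

``` sql
SELECT dictGetString('direct_dict', 'attr_name', toUInt64(42))
```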
- -Exemple de Configuration: - -``` xml - - - -``` - -ou - -``` sql -LAYOUT(DIRECT()) -``` - -### complex_key_direct {#complex-key-direct} - -Ce type de stockage est destiné à être utilisé avec des [clés](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-structure.md) composites. Similaire à `direct` - -### ip_trie {#ip-trie} - -Ce type de stockage permet de mapper des préfixes de réseau (adresses IP) à des métadonnées telles que ASN. - -Exemple: la table contient les préfixes de réseau et leur correspondant en tant que numéro et Code de pays: - -``` text - +-----------|-----|------+ - | prefix | asn | cca2 | - +=================+=======+========+ - | 202.79.32.0/20 | 17501 | NP | - +-----------|-----|------+ - | 2620:0:870::/48 | 3856 | US | - +-----------|-----|------+ - | 2a02:6b8:1::/48 | 13238 | RU | - +-----------|-----|------+ - | 2001:db8::/32 | 65536 | ZZ | - +-----------|-----|------+ -``` - -Lorsque vous utilisez ce type de mise en page, la structure doit avoir une clé composite. - -Exemple: - -``` xml - - - - prefix - String - - - - asn - UInt32 - - - - cca2 - String - ?? - - ... - - - - true - - -``` - -ou - -``` sql -CREATE DICTIONARY somedict ( - prefix String, - asn UInt32, - cca2 String DEFAULT '??' -) -PRIMARY KEY prefix -``` - -La clé ne doit avoir qu'un seul attribut de type chaîne contenant un préfixe IP autorisé. Les autres types ne sont pas encore pris en charge. - -Pour les requêtes, vous devez utiliser les mêmes fonctions (`dictGetT` avec un n-uplet) comme pour les dictionnaires avec des clés composites: - -``` sql -dictGetT('dict_name', 'attr_name', tuple(ip)) -``` - -La fonction prend soit `UInt32` pour IPv4, ou `FixedString(16)` pour IPv6: - -``` sql -dictGetString('prefix', 'asn', tuple(IPv6StringToNum('2001:db8::1'))) -``` - -Les autres types ne sont pas encore pris en charge. La fonction renvoie l'attribut du préfixe correspondant à cette adresse IP. S'il y a chevauchement des préfixes, le plus spécifique est retourné. - -Les données doit complètement s'intégrer dans la RAM. - -[Article Original](https://clickhouse.tech/docs/en/query_language/dicts/external_dicts_dict_layout/) diff --git a/docs/fr/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-lifetime.md b/docs/fr/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-lifetime.md deleted file mode 100644 index 8ce78919ff1..00000000000 --- a/docs/fr/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-lifetime.md +++ /dev/null @@ -1,91 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 42 -toc_title: "Mises \xC0 Jour Du Dictionnaire" ---- - -# Mises À Jour Du Dictionnaire {#dictionary-updates} - -ClickHouse met périodiquement à jour les dictionnaires. L'intervalle de mise à jour pour les dictionnaires entièrement téléchargés et l'intervalle d'invalidation pour les dictionnaires `` tag en quelques secondes. - -Les mises à jour du dictionnaire (autres que le chargement pour la première utilisation) ne bloquent pas les requêtes. Lors des mises à jour, l'ancienne version d'un dictionnaire est utilisée. Si une erreur se produit pendant une mise à jour, l'erreur est écrite dans le journal du serveur et les requêtes continuent d'utiliser l'ancienne version des dictionnaires. - -Exemple de paramètres: - -``` xml - - ... - 300 - ... - -``` - -``` sql -CREATE DICTIONARY (...) -... -LIFETIME(300) -... 
-``` - -Paramètre `0` (`LIFETIME(0)`) empêche la mise à jour des dictionnaires. - -Vous pouvez définir un intervalle de temps pour les mises à niveau, et ClickHouse choisira un temps uniformément aléatoire dans cette plage. Ceci est nécessaire pour répartir la charge sur la source du dictionnaire lors de la mise à niveau sur un grand nombre de serveurs. - -Exemple de paramètres: - -``` xml - - ... - - 300 - 360 - - ... - -``` - -ou - -``` sql -LIFETIME(MIN 300 MAX 360) -``` - -Si `0` et `0`, ClickHouse ne recharge pas le dictionnaire par timeout. -Dans ce cas, ClickHouse peut recharger le dictionnaire plus tôt si le fichier de configuration du dictionnaire a été `SYSTEM RELOAD DICTIONARY` la commande a été exécutée. - -Lors de la mise à niveau des dictionnaires, le serveur ClickHouse applique une logique différente selon le type de [source](external-dicts-dict-sources.md): - -Lors de la mise à niveau des dictionnaires, le serveur ClickHouse applique une logique différente selon le type de [source](external-dicts-dict-sources.md): - -- Pour un fichier texte, il vérifie l'heure de la modification. Si l'heure diffère de l'heure enregistrée précédemment, le dictionnaire est mis à jour. -- Pour les tables MyISAM, l'Heure de modification est vérifiée à l'aide d'un `SHOW TABLE STATUS` requête. -- Les dictionnaires d'autres sources sont mis à jour à chaque fois par défaut. - -Pour les sources MySQL (InnoDB), ODBC et ClickHouse, vous pouvez configurer une requête qui mettra à jour les dictionnaires uniquement s'ils ont vraiment changé, plutôt que chaque fois. Pour ce faire, suivez ces étapes: - -- La table de dictionnaire doit avoir un champ qui change toujours lorsque les données source sont mises à jour. -- Les paramètres de la source doivent spécifier une requête qui récupère le champ de modification. Le serveur ClickHouse interprète le résultat de la requête comme une ligne, et si cette ligne a changé par rapport à son état précédent, le dictionnaire est mis à jour. Spécifier la requête dans le `` champ dans les paramètres pour le [source](external-dicts-dict-sources.md). - -Exemple de paramètres: - -``` xml - - ... - - ... - SELECT update_time FROM dictionary_source where id = 1 - - ... - -``` - -ou - -``` sql -... -SOURCE(ODBC(... invalidate_query 'SELECT update_time FROM dictionary_source where id = 1')) -... -``` - -[Article Original](https://clickhouse.tech/docs/en/query_language/dicts/external_dicts_dict_lifetime/) diff --git a/docs/fr/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md b/docs/fr/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md deleted file mode 100644 index 4c608fa7188..00000000000 --- a/docs/fr/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md +++ /dev/null @@ -1,630 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 43 -toc_title: Sources de dictionnaires externes ---- - -# Sources de dictionnaires externes {#dicts-external-dicts-dict-sources} - -Externe dictionnaire peut être connecté à partir de nombreuses sources différentes. - -Si dictionary est configuré à l'aide de xml-file, la configuration ressemble à ceci: - -``` xml - - - ... - - - - - - ... - - ... - -``` - -En cas de [DDL-requête](../../statements/create.md#create-dictionary-query), configuration égale ressemblera à: - -``` sql -CREATE DICTIONARY dict_name (...) -... -SOURCE(SOURCE_TYPE(param1 val1 ... paramN valN)) -- Source configuration -... 
-``` - -La source est configurée dans le `source` section. - -Pour les types de source [Fichier Local](#dicts-external_dicts_dict_sources-local_file), [Fichier exécutable](#dicts-external_dicts_dict_sources-executable), [HTTP(S)](#dicts-external_dicts_dict_sources-http), [ClickHouse](#dicts-external_dicts_dict_sources-clickhouse) -les paramètres optionnels sont disponibles: - -``` xml - - - /opt/dictionaries/os.tsv - TabSeparated - - - 0 - - -``` - -ou - -``` sql -SOURCE(FILE(path '/opt/dictionaries/os.tsv' format 'TabSeparated')) -SETTINGS(format_csv_allow_single_quotes = 0) -``` - -Les Types de sources (`source_type`): - -- [Fichier Local](#dicts-external_dicts_dict_sources-local_file) -- [Fichier exécutable](#dicts-external_dicts_dict_sources-executable) -- [HTTP(S)](#dicts-external_dicts_dict_sources-http) -- DBMS - - [ODBC](#dicts-external_dicts_dict_sources-odbc) - - [MySQL](#dicts-external_dicts_dict_sources-mysql) - - [ClickHouse](#dicts-external_dicts_dict_sources-clickhouse) - - [MongoDB](#dicts-external_dicts_dict_sources-mongodb) - - [Redis](#dicts-external_dicts_dict_sources-redis) - -## Fichier Local {#dicts-external_dicts_dict_sources-local_file} - -Exemple de paramètres: - -``` xml - - - /opt/dictionaries/os.tsv - TabSeparated - - -``` - -ou - -``` sql -SOURCE(FILE(path '/opt/dictionaries/os.tsv' format 'TabSeparated')) -``` - -Définition des champs: - -- `path` – The absolute path to the file. -- `format` – The file format. All the formats described in “[Format](../../../interfaces/formats.md#formats)” sont pris en charge. - -## Fichier Exécutable {#dicts-external_dicts_dict_sources-executable} - -Travailler avec des fichiers exécutables en dépend [comment le dictionnaire est stocké dans la mémoire](external-dicts-dict-layout.md). Si le dictionnaire est stocké en utilisant `cache` et `complex_key_cache`, Clickhouse demande les clés nécessaires en envoyant une requête au STDIN du fichier exécutable. Sinon, ClickHouse démarre le fichier exécutable et traite sa sortie comme des données de dictionnaire. - -Exemple de paramètres: - -``` xml - - - cat /opt/dictionaries/os.tsv - TabSeparated - - -``` - -ou - -``` sql -SOURCE(EXECUTABLE(command 'cat /opt/dictionaries/os.tsv' format 'TabSeparated')) -``` - -Définition des champs: - -- `command` – The absolute path to the executable file, or the file name (if the program directory is written to `PATH`). -- `format` – The file format. All the formats described in “[Format](../../../interfaces/formats.md#formats)” sont pris en charge. - -## Http(s) {#dicts-external_dicts_dict_sources-http} - -Travailler avec un serveur HTTP (S) dépend de [comment le dictionnaire est stocké dans la mémoire](external-dicts-dict-layout.md). Si le dictionnaire est stocké en utilisant `cache` et `complex_key_cache`, Clickhouse demande les clés nécessaires en envoyant une demande via le `POST` méthode. - -Exemple de paramètres: - -``` xml - - - http://[::1]/os.tsv - TabSeparated - - user - password - - -
-                <name>API-KEY</name>
-                <value>key</value>
-            </header>
-        </headers>
-    </http>
-</source>
- -``` - -ou - -``` sql -SOURCE(HTTP( - url 'http://[::1]/os.tsv' - format 'TabSeparated' - credentials(user 'user' password 'password') - headers(header(name 'API-KEY' value 'key')) -)) -``` - -Pour que ClickHouse accède à une ressource HTTPS, vous devez [configurer openSSL](../../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-openssl) dans la configuration du serveur. - -Définition des champs: - -- `url` – The source URL. -- `format` – The file format. All the formats described in “[Format](../../../interfaces/formats.md#formats)” sont pris en charge. -- `credentials` – Basic HTTP authentication. Optional parameter. - - `user` – Username required for the authentication. - - `password` – Password required for the authentication. -- `headers` – All custom HTTP headers entries used for the HTTP request. Optional parameter. - - `header` – Single HTTP header entry. - - `name` – Identifiant name used for the header send on the request. - - `value` – Value set for a specific identifiant name. - -## ODBC {#dicts-external_dicts_dict_sources-odbc} - -Vous pouvez utiliser cette méthode pour connecter n'importe quelle base de données dotée d'un pilote ODBC. - -Exemple de paramètres: - -``` xml - - - DatabaseName - ShemaName.TableName
-        <connection_string>DSN=some_parameters</connection_string>
-        <invalidate_query>SQL_QUERY</invalidate_query>
- -``` - -ou - -``` sql -SOURCE(ODBC( - db 'DatabaseName' - table 'SchemaName.TableName' - connection_string 'DSN=some_parameters' - invalidate_query 'SQL_QUERY' -)) -``` - -Définition des champs: - -- `db` – Name of the database. Omit it if the database name is set in the `` paramètre. -- `table` – Name of the table and schema if exists. -- `connection_string` – Connection string. -- `invalidate_query` – Query for checking the dictionary status. Optional parameter. Read more in the section [Mise à jour des dictionnaires](external-dicts-dict-lifetime.md). - -ClickHouse reçoit des symboles de citation D'ODBC-driver et cite tous les paramètres des requêtes au pilote, il est donc nécessaire de définir le nom de la table en conséquence sur le cas du nom de la table dans la base de données. - -Si vous avez des problèmes avec des encodages lors de l'utilisation d'Oracle, consultez le [FAQ](../../../faq/general.md#oracle-odbc-encodings) article. - -### Vulnérabilité connue de la fonctionnalité du dictionnaire ODBC {#known-vulnerability-of-the-odbc-dictionary-functionality} - -!!! attention "Attention" - Lors de la connexion à la base de données via le paramètre de connexion du pilote ODBC `Servername` peut être substitué. Dans ce cas, les valeurs de `USERNAME` et `PASSWORD` de `odbc.ini` sont envoyés au serveur distant et peuvent être compromis. - -**Exemple d'utilisation non sécurisée** - -Configurons unixODBC pour PostgreSQL. Le contenu de `/etc/odbc.ini`: - -``` text -[gregtest] -Driver = /usr/lib/psqlodbca.so -Servername = localhost -PORT = 5432 -DATABASE = test_db -#OPTION = 3 -USERNAME = test -PASSWORD = test -``` - -Si vous faites alors une requête telle que - -``` sql -SELECT * FROM odbc('DSN=gregtest;Servername=some-server.com', 'test_db'); -``` - -Le pilote ODBC enverra des valeurs de `USERNAME` et `PASSWORD` de `odbc.ini` de `some-server.com`. - -### Exemple de connexion Postgresql {#example-of-connecting-postgresql} - -Ubuntu OS. - -Installation d'unixODBC et du pilote ODBC pour PostgreSQL: - -``` bash -$ sudo apt-get install -y unixodbc odbcinst odbc-postgresql -``` - -Configuration `/etc/odbc.ini` (ou `~/.odbc.ini`): - -``` text - [DEFAULT] - Driver = myconnection - - [myconnection] - Description = PostgreSQL connection to my_db - Driver = PostgreSQL Unicode - Database = my_db - Servername = 127.0.0.1 - UserName = username - Password = password - Port = 5432 - Protocol = 9.3 - ReadOnly = No - RowVersioning = No - ShowSystemTables = No - ConnSettings = -``` - -La configuration du dictionnaire dans ClickHouse: - -``` xml - - - table_name - - - - - DSN=myconnection - postgresql_table
-            </odbc>
-        </source>
-        <lifetime>
-            <min>300</min>
-            <max>360</max>
-        </lifetime>
-        <layout>
-            <hashed/>
-        </layout>
-        <structure>
-            <id>
-                <name>id</name>
-            </id>
-            <attribute>
-                <name>some_column</name>
-                <type>UInt64</type>
-                <null_value>0</null_value>
-            </attribute>
-        </structure>
-    </dictionary>
-</yandex>
-``` - -ou - -``` sql -CREATE DICTIONARY table_name ( - id UInt64, - some_column UInt64 DEFAULT 0 -) -PRIMARY KEY id -SOURCE(ODBC(connection_string 'DSN=myconnection' table 'postgresql_table')) -LAYOUT(HASHED()) -LIFETIME(MIN 300 MAX 360) -``` - -Vous devrez peut-être modifier `odbc.ini` pour spécifier le chemin d'accès complet à la bibliothèque avec le conducteur `DRIVER=/usr/local/lib/psqlodbcw.so`. - -### Exemple de connexion à MS SQL Server {#example-of-connecting-ms-sql-server} - -Ubuntu OS. - -Installation du pilote: : - -``` bash -$ sudo apt-get install tdsodbc freetds-bin sqsh -``` - -Configuration du pilote: - -``` bash - $ cat /etc/freetds/freetds.conf - ... - - [MSSQL] - host = 192.168.56.101 - port = 1433 - tds version = 7.0 - client charset = UTF-8 - - $ cat /etc/odbcinst.ini - ... - - [FreeTDS] - Description = FreeTDS - Driver = /usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so - Setup = /usr/lib/x86_64-linux-gnu/odbc/libtdsS.so - FileUsage = 1 - UsageCount = 5 - - $ cat ~/.odbc.ini - ... - - [MSSQL] - Description = FreeTDS - Driver = FreeTDS - Servername = MSSQL - Database = test - UID = test - PWD = test - Port = 1433 -``` - -Configuration du dictionnaire dans ClickHouse: - -``` xml - - - test - - - dict
-        <connection_string>DSN=MSSQL;UID=test;PWD=test</connection_string>
-        </odbc>
-    </source>
-    <lifetime>
-        <min>300</min>
-        <max>360</max>
-    </lifetime>
-    <layout>
-        <flat/>
-    </layout>
-    <structure>
-        <id>
-            <name>k</name>
-        </id>
-        <attribute>
-            <name>s</name>
-            <type>String</type>
-            <null_value></null_value>
-        </attribute>
-    </structure>
-</dictionary>
-</yandex>
-``` - -ou - -``` sql -CREATE DICTIONARY test ( - k UInt64, - s String DEFAULT '' -) -PRIMARY KEY k -SOURCE(ODBC(table 'dict' connection_string 'DSN=MSSQL;UID=test;PWD=test')) -LAYOUT(FLAT()) -LIFETIME(MIN 300 MAX 360) -``` - -## DBMS {#dbms} - -### Mysql {#dicts-external_dicts_dict_sources-mysql} - -Exemple de paramètres: - -``` xml - - - 3306 - clickhouse - qwerty - - example01-1 - 1 - - - example01-2 - 1 - - db_name - table_name
-        <where>id=10</where>
-        <invalidate_query>SQL_QUERY</invalidate_query>
- -``` - -ou - -``` sql -SOURCE(MYSQL( - port 3306 - user 'clickhouse' - password 'qwerty' - replica(host 'example01-1' priority 1) - replica(host 'example01-2' priority 1) - db 'db_name' - table 'table_name' - where 'id=10' - invalidate_query 'SQL_QUERY' -)) -``` - -Définition des champs: - -- `port` – The port on the MySQL server. You can specify it for all replicas, or for each one individually (inside ``). - -- `user` – Name of the MySQL user. You can specify it for all replicas, or for each one individually (inside ``). - -- `password` – Password of the MySQL user. You can specify it for all replicas, or for each one individually (inside ``). - -- `replica` – Section of replica configurations. There can be multiple sections. - - - `replica/host` – The MySQL host. - - `replica/priority` – The replica priority. When attempting to connect, ClickHouse traverses the replicas in order of priority. The lower the number, the higher the priority. - -- `db` – Name of the database. - -- `table` – Name of the table. - -- `where` – The selection criteria. The syntax for conditions is the same as for `WHERE` clause dans MySQL, par exemple, `id > 10 AND id < 20`. Paramètre facultatif. - -- `invalidate_query` – Query for checking the dictionary status. Optional parameter. Read more in the section [Mise à jour des dictionnaires](external-dicts-dict-lifetime.md). - -MySQL peut être connecté sur un hôte local via des sockets. Pour ce faire, définissez `host` et `socket`. - -Exemple de paramètres: - -``` xml - - - localhost - /path/to/socket/file.sock - clickhouse - qwerty - db_name - table_name
-        <where>id=10</where>
-        <invalidate_query>SQL_QUERY</invalidate_query>
- -``` - -ou - -``` sql -SOURCE(MYSQL( - host 'localhost' - socket '/path/to/socket/file.sock' - user 'clickhouse' - password 'qwerty' - db 'db_name' - table 'table_name' - where 'id=10' - invalidate_query 'SQL_QUERY' -)) -``` - -### ClickHouse {#dicts-external_dicts_dict_sources-clickhouse} - -Exemple de paramètres: - -``` xml - - - example01-01-1 - 9000 - default - - default - ids
-        <where>id=10</where>
- -``` - -ou - -``` sql -SOURCE(CLICKHOUSE( - host 'example01-01-1' - port 9000 - user 'default' - password '' - db 'default' - table 'ids' - where 'id=10' -)) -``` - -Définition des champs: - -- `host` – The ClickHouse host. If it is a local host, the query is processed without any network activity. To improve fault tolerance, you can create a [Distribué](../../../engines/table-engines/special/distributed.md) table et entrez-le dans les configurations suivantes. -- `port` – The port on the ClickHouse server. -- `user` – Name of the ClickHouse user. -- `password` – Password of the ClickHouse user. -- `db` – Name of the database. -- `table` – Name of the table. -- `where` – The selection criteria. May be omitted. -- `invalidate_query` – Query for checking the dictionary status. Optional parameter. Read more in the section [Mise à jour des dictionnaires](external-dicts-dict-lifetime.md). - -### Mongodb {#dicts-external_dicts_dict_sources-mongodb} - -Exemple de paramètres: - -``` xml - - - localhost - 27017 - - - test - dictionary_source - - -``` - -ou - -``` sql -SOURCE(MONGO( - host 'localhost' - port 27017 - user '' - password '' - db 'test' - collection 'dictionary_source' -)) -``` - -Définition des champs: - -- `host` – The MongoDB host. -- `port` – The port on the MongoDB server. -- `user` – Name of the MongoDB user. -- `password` – Password of the MongoDB user. -- `db` – Name of the database. -- `collection` – Name of the collection. - -### Redis {#dicts-external_dicts_dict_sources-redis} - -Exemple de paramètres: - -``` xml - - - localhost - 6379 - simple - 0 - - -``` - -ou - -``` sql -SOURCE(REDIS( - host 'localhost' - port 6379 - storage_type 'simple' - db_index 0 -)) -``` - -Définition des champs: - -- `host` – The Redis host. -- `port` – The port on the Redis server. -- `storage_type` – The structure of internal Redis storage using for work with keys. `simple` est pour les sources simples et pour les sources à clé unique hachées, `hash_map` est pour les sources hachées avec deux clés. Les sources À Distance et les sources de cache à clé complexe ne sont pas prises en charge. Peut être omis, la valeur par défaut est `simple`. -- `db_index` – The specific numeric index of Redis logical database. May be omitted, default value is 0. - -[Article Original](https://clickhouse.tech/docs/en/query_language/dicts/external_dicts_dict_sources/) diff --git a/docs/fr/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-structure.md b/docs/fr/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-structure.md deleted file mode 100644 index 1b9215baf06..00000000000 --- a/docs/fr/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-structure.md +++ /dev/null @@ -1,175 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 44 -toc_title: "Cl\xE9 et champs du dictionnaire" ---- - -# Clé et champs du dictionnaire {#dictionary-key-and-fields} - -Le `` la clause décrit la clé du dictionnaire et les champs disponibles pour les requêtes. - -Description XML: - -``` xml - - - - Id - - - - - - - ... - - - -``` - -Les attributs sont décrits dans les éléments: - -- `` — [La colonne de la clé](external-dicts-dict-structure.md#ext_dict_structure-key). -- `` — [Colonne de données](external-dicts-dict-structure.md#ext_dict_structure-attributes). Il peut y avoir un certain nombre d'attributs. - -Requête DDL: - -``` sql -CREATE DICTIONARY dict_name ( - Id UInt64, - -- attributes -) -PRIMARY KEY Id -... 
-``` - -Les attributs sont décrits dans le corps de la requête: - -- `PRIMARY KEY` — [La colonne de la clé](external-dicts-dict-structure.md#ext_dict_structure-key) -- `AttrName AttrType` — [Colonne de données](external-dicts-dict-structure.md#ext_dict_structure-attributes). Il peut y avoir un certain nombre d'attributs. - -## Clé {#ext_dict_structure-key} - -ClickHouse prend en charge les types de clés suivants: - -- Touche numérique. `UInt64`. Défini dans le `` tag ou en utilisant `PRIMARY KEY` mot. -- Clé Composite. Ensemble de valeurs de types différents. Défini dans la balise `` ou `PRIMARY KEY` mot. - -Une structure xml peut contenir `` ou ``. DDL-requête doit contenir unique `PRIMARY KEY`. - -!!! warning "Avertissement" - Vous ne devez pas décrire clé comme un attribut. - -### Touche Numérique {#ext_dict-numeric-key} - -Type: `UInt64`. - -Exemple de Configuration: - -``` xml - - Id - -``` - -Champs de Configuration: - -- `name` – The name of the column with keys. - -Pour DDL-requête: - -``` sql -CREATE DICTIONARY ( - Id UInt64, - ... -) -PRIMARY KEY Id -... -``` - -- `PRIMARY KEY` – The name of the column with keys. - -### Clé Composite {#composite-key} - -La clé peut être un `tuple` de tous les types de champs. Le [disposition](external-dicts-dict-layout.md) dans ce cas, doit être `complex_key_hashed` ou `complex_key_cache`. - -!!! tip "Conseil" - Une clé composite peut être constitué d'un seul élément. Cela permet d'utiliser une chaîne comme clé, par exemple. - -La structure de clé est définie dans l'élément ``. Les principaux champs sont spécifiés dans le même format que le dictionnaire [attribut](external-dicts-dict-structure.md). Exemple: - -``` xml - - - - field1 - String - - - field2 - UInt32 - - ... - -... -``` - -ou - -``` sql -CREATE DICTIONARY ( - field1 String, - field2 String - ... -) -PRIMARY KEY field1, field2 -... -``` - -Pour une requête à l' `dictGet*` fonction, un tuple est passé comme clé. Exemple: `dictGetString('dict_name', 'attr_name', tuple('string for field1', num_for_field2))`. - -## Attribut {#ext_dict_structure-attributes} - -Exemple de Configuration: - -``` xml - - ... - - Name - ClickHouseDataType - - rand64() - true - true - true - - -``` - -ou - -``` sql -CREATE DICTIONARY somename ( - Name ClickHouseDataType DEFAULT '' EXPRESSION rand64() HIERARCHICAL INJECTIVE IS_OBJECT_ID -) -``` - -Champs de Configuration: - -| Balise | Description | Requis | -|------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------| -| `name` | Nom de la colonne. | Oui | -| `type` | Type de données ClickHouse.
ClickHouse tente de convertir la valeur du dictionnaire vers le type de données spécifié. Par exemple, pour MySQL, le champ peut être `TEXT`, `VARCHAR` ou `BLOB` dans la table source MySQL, mais il peut être chargé comme `String` dans ClickHouse. [Nullable](../../../sql-reference/data-types/nullable.md) n'est pas pris en charge. | Oui |
-| `null_value` | Valeur par défaut pour un élément inexistant. Dans l'exemple, c'est une chaîne vide. Vous ne pouvez pas utiliser `NULL` dans ce champ. | Oui |
-| `expression` | [Expression](../../syntax.md#syntax-expressions) que ClickHouse exécute sur la valeur. L'expression peut être un nom de colonne dans la base de données SQL distante; vous pouvez donc l'utiliser pour créer un alias de la colonne distante. Valeur par défaut: aucune expression. | Aucun |
-| `hierarchical` | Si `true`, l'attribut contient la valeur de la clé parente pour la clé actuelle. Voir [Dictionnaires Hiérarchiques](external-dicts-dict-hierarchical.md). Valeur par défaut: `false`. | Aucun |
-| `injective` | Indicateur qui indique si l'application `id -> attribute` est [injective](https://en.wikipedia.org/wiki/Injective_function). Si `true`, ClickHouse peut automatiquement placer après la clause `GROUP BY` les requêtes aux dictionnaires avec injection; habituellement, cela réduit considérablement le nombre de ces requêtes. Valeur par défaut: `false`. | Aucun |
-| `is_object_id` | Indicateur qui indique si la requête est exécutée pour un document MongoDB par `ObjectID`.
Valeur par défaut: `false`. | Aucun | - -## Voir Aussi {#see-also} - -- [Fonctions pour travailler avec des dictionnaires externes](../../../sql-reference/functions/ext-dict-functions.md). - -[Article Original](https://clickhouse.tech/docs/en/query_language/dicts/external_dicts_dict_structure/) diff --git a/docs/fr/sql-reference/dictionaries/external-dictionaries/external-dicts-dict.md b/docs/fr/sql-reference/dictionaries/external-dictionaries/external-dicts-dict.md deleted file mode 100644 index 3bb8884df2f..00000000000 --- a/docs/fr/sql-reference/dictionaries/external-dictionaries/external-dicts-dict.md +++ /dev/null @@ -1,53 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 40 -toc_title: Configuration D'un dictionnaire externe ---- - -# Configuration D'un dictionnaire externe {#dicts-external-dicts-dict} - -Si dictionary est configuré à l'aide d'un fichier xml, than dictionary configuration a la structure suivante: - -``` xml - - dict_name - - - - - - - - - - - - - - - - - -``` - -Correspondant [DDL-requête](../../statements/create.md#create-dictionary-query) a la structure suivante: - -``` sql -CREATE DICTIONARY dict_name -( - ... -- attributes -) -PRIMARY KEY ... -- complex or single key configuration -SOURCE(...) -- Source configuration -LAYOUT(...) -- Memory layout configuration -LIFETIME(...) -- Lifetime of dictionary in memory -``` - -- `name` – The identifier that can be used to access the dictionary. Use the characters `[a-zA-Z0-9_\-]`. -- [source](external-dicts-dict-sources.md) — Source of the dictionary. -- [disposition](external-dicts-dict-layout.md) — Dictionary layout in memory. -- [structure](external-dicts-dict-structure.md) — Structure of the dictionary . A key and attributes that can be retrieved by this key. -- [vie](external-dicts-dict-lifetime.md) — Frequency of dictionary updates. - -[Article Original](https://clickhouse.tech/docs/en/query_language/dicts/external_dicts_dict/) diff --git a/docs/fr/sql-reference/dictionaries/external-dictionaries/external-dicts.md b/docs/fr/sql-reference/dictionaries/external-dictionaries/external-dicts.md deleted file mode 100644 index d68b7a7f112..00000000000 --- a/docs/fr/sql-reference/dictionaries/external-dictionaries/external-dicts.md +++ /dev/null @@ -1,62 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 39 -toc_title: "Description G\xE9n\xE9rale" ---- - -# Dictionnaires Externes {#dicts-external-dicts} - -Vous pouvez ajouter vos propres dictionnaires à partir de diverses sources de données. La source de données d'un dictionnaire peut être un texte local ou un fichier exécutable, une ressource HTTP(S) ou un autre SGBD. Pour plus d'informations, voir “[Sources pour les dictionnaires externes](external-dicts-dict-sources.md)”. - -ClickHouse: - -- Stocke entièrement ou partiellement les dictionnaires en RAM. -- Met à jour périodiquement les dictionnaires et charge dynamiquement les valeurs manquantes. En d'autres mots, les dictionnaires peuvent être chargés dynamiquement. -- Permet de créer des dictionnaires externes avec des fichiers xml ou [Les requêtes DDL](../../statements/create.md#create-dictionary-query). - -La configuration des dictionnaires externes peut être située dans un ou plusieurs fichiers xml. 
Le chemin d'accès à la configuration spécifiée dans le [dictionaries_config](../../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-dictionaries_config) paramètre. - -Les dictionnaires peuvent être chargés au démarrage du serveur ou à la première utilisation, en fonction [dictionaries_lazy_load](../../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-dictionaries_lazy_load) paramètre. - -Le [dictionnaire](../../../operations/system-tables.md#system_tables-dictionaries) la table système contient des informations sur les dictionnaires configurés sur le serveur. Pour chaque dictionnaire, vous pouvez y trouver: - -- Statut du dictionnaire. -- Paramètres de Configuration. -- Des métriques telles que la quantité de RAM allouée pour le dictionnaire ou un certain nombre de requêtes depuis que le dictionnaire a été chargé avec succès. - -Le fichier de configuration du dictionnaire a le format suivant: - -``` xml - - An optional element with any content. Ignored by the ClickHouse server. - - - /etc/metrika.xml - - - - - - - - -``` - -Vous pouvez [configurer](external-dicts-dict.md) le nombre de dictionnaires dans le même fichier. - -[Requêtes DDL pour les dictionnaires](../../statements/create.md#create-dictionary-query) ne nécessite aucun enregistrement supplémentaire dans la configuration du serveur. Ils permettent de travailler avec des dictionnaires en tant qu'entités de première classe, comme des tables ou des vues. - -!!! attention "Attention" - Vous pouvez convertir les valeurs pour un petit dictionnaire en le décrivant dans un `SELECT` requête (voir la [transformer](../../../sql-reference/functions/other-functions.md) fonction). Cette fonctionnalité n'est pas liée aux dictionnaires externes. - -## Voir Aussi {#ext-dicts-see-also} - -- [Configuration D'un dictionnaire externe](external-dicts-dict.md) -- [Stockage des dictionnaires en mémoire](external-dicts-dict-layout.md) -- [Mises À Jour Du Dictionnaire](external-dicts-dict-lifetime.md) -- [Sources de dictionnaires externes](external-dicts-dict-sources.md) -- [Clé et champs du dictionnaire](external-dicts-dict-structure.md) -- [Fonctions pour travailler avec des dictionnaires externes](../../../sql-reference/functions/ext-dict-functions.md) - -[Article Original](https://clickhouse.tech/docs/en/query_language/dicts/external_dicts/) diff --git a/docs/fr/sql-reference/dictionaries/external-dictionaries/index.md b/docs/fr/sql-reference/dictionaries/external-dictionaries/index.md deleted file mode 100644 index 109220205dd..00000000000 --- a/docs/fr/sql-reference/dictionaries/external-dictionaries/index.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_folder_title: Dictionnaires Externes -toc_priority: 37 ---- - - diff --git a/docs/fr/sql-reference/dictionaries/index.md b/docs/fr/sql-reference/dictionaries/index.md deleted file mode 100644 index 3ec31085cc5..00000000000 --- a/docs/fr/sql-reference/dictionaries/index.md +++ /dev/null @@ -1,22 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_folder_title: Dictionnaire -toc_priority: 35 -toc_title: Introduction ---- - -# Dictionnaire {#dictionaries} - -Un dictionnaire est une cartographie (`key -> attributes`) qui est pratique pour différents types de listes de référence. 
- -ClickHouse prend en charge des fonctions spéciales pour travailler avec des dictionnaires qui peuvent être utilisés dans les requêtes. Il est plus facile et plus efficace d'utiliser des dictionnaires avec des fonctions que par une `JOIN` avec des tableaux de référence. - -[NULL](../../sql-reference/syntax.md#null-literal) les valeurs ne peuvent pas être stockées dans un dictionnaire. - -Supports ClickHouse: - -- [Construit-dans les dictionnaires](internal-dicts.md#internal_dicts) avec un [ensemble de fonctions](../../sql-reference/functions/ym-dict-functions.md). -- [Plug-in (externe) dictionnaires](external-dictionaries/external-dicts.md#dicts-external-dicts) avec un [ensemble de fonctions](../../sql-reference/functions/ext-dict-functions.md). - -[Article Original](https://clickhouse.tech/docs/en/query_language/dicts/) diff --git a/docs/fr/sql-reference/dictionaries/internal-dicts.md b/docs/fr/sql-reference/dictionaries/internal-dicts.md deleted file mode 100644 index 607936031a1..00000000000 --- a/docs/fr/sql-reference/dictionaries/internal-dicts.md +++ /dev/null @@ -1,55 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 39 -toc_title: Dictionnaires Internes ---- - -# Dictionnaires Internes {#internal_dicts} - -ClickHouse contient une fonction intégrée pour travailler avec une géobase. - -Cela vous permet de: - -- Utilisez L'ID d'une région pour obtenir son nom dans la langue souhaitée. -- Utilisez L'ID d'une région pour obtenir L'ID d'une ville, d'une région, d'un district fédéral, d'un pays ou d'un continent. -- Vérifiez si une région fait partie d'une autre région. -- Obtenez une chaîne de régions parentes. - -Toutes les fonctions prennent en charge “translocality,” la capacité d'utiliser simultanément différentes perspectives sur la propriété de la région. Pour plus d'informations, consultez la section “Functions for working with Yandex.Metrica dictionaries”. - -Les dictionnaires internes sont désactivés dans le package par défaut. -Pour les activer, décommentez les paramètres `path_to_regions_hierarchy_file` et `path_to_regions_names_files` dans le fichier de configuration du serveur. - -La géobase est chargée à partir de fichiers texte. - -Place de la `regions_hierarchy*.txt` les fichiers dans le `path_to_regions_hierarchy_file` répertoire. Ce paramètre de configuration doit contenir le chemin `regions_hierarchy.txt` fichier (la hiérarchie régionale par défaut), et les autres fichiers (`regions_hierarchy_ua.txt`) doit être situé dans le même répertoire. - -Mettre le `regions_names_*.txt` les fichiers dans le `path_to_regions_names_files` répertoire. - -Vous pouvez également créer ces fichiers vous-même. Le format de fichier est le suivant: - -`regions_hierarchy*.txt`: TabSeparated (pas d'en-tête), colonnes: - -- région de l'ID (`UInt32`) -- ID de région parent (`UInt32`) -- type de région (`UInt8`): 1-continent, 3-pays, 4-district fédéral, 5-région, 6-ville; les autres types n'ont pas de valeurs -- population (`UInt32`) — optional column - -`regions_names_*.txt`: TabSeparated (pas d'en-tête), colonnes: - -- région de l'ID (`UInt32`) -- nom de la région (`String`) — Can't contain tabs or line feeds, even escaped ones. - -Un tableau plat est utilisé pour stocker dans la RAM. Pour cette raison, les ID ne devraient pas dépasser un million. - -Les dictionnaires peuvent être mis à jour sans redémarrer le serveur. Cependant, l'ensemble des dictionnaires n'est pas mis à jour. 
Pour les mises à jour, les temps de modification des fichiers sont vérifiés. Si un fichier a été modifié, le dictionnaire est mis à jour.
L'intervalle de vérification des modifications est configuré dans le paramètre `builtin_dictionaries_reload_interval`.
Les mises à jour des dictionnaires (autres que le chargement à la première utilisation) ne bloquent pas les requêtes. Lors des mises à jour, les requêtes utilisent les anciennes versions des dictionnaires. Si une erreur se produit pendant une mise à jour, elle est écrite dans le journal du serveur et les requêtes continuent d'utiliser l'ancienne version des dictionnaires.

Nous vous recommandons de mettre à jour périodiquement les dictionnaires avec la géobase. Lors d'une mise à jour, générez de nouveaux fichiers et écrivez-les dans un emplacement séparé. Lorsque tout est prêt, renommez-les en les fichiers utilisés par le serveur.

Il existe également des fonctions pour travailler avec les identifiants de systèmes d'exploitation et de moteurs de recherche Yandex.Metrica, mais elles ne devraient pas être utilisées.

[Article Original](https://clickhouse.tech/docs/en/query_language/dicts/internal_dicts/)

diff --git a/docs/fr/sql-reference/functions/arithmetic-functions.md b/docs/fr/sql-reference/functions/arithmetic-functions.md
deleted file mode 100644
index c35fb104236..00000000000
--- a/docs/fr/sql-reference/functions/arithmetic-functions.md
+++ /dev/null
@@ -1,87 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_priority: 35
toc_title: "Arithm\xE9tique"
---

# Arithmetic Functions {#arithmetic-functions}

For all arithmetic functions, the result type is calculated as the smallest number type that the result fits in, if there is such a type. The minimum is taken simultaneously based on the number of bits, whether the type is signed, and whether it is floating-point. If there are not enough bits, the highest-bit type is taken.

Example:

``` sql
SELECT toTypeName(0), toTypeName(0 + 0), toTypeName(0 + 0 + 0), toTypeName(0 + 0 + 0 + 0)
```

``` text
┌─toTypeName(0)─┬─toTypeName(plus(0, 0))─┬─toTypeName(plus(plus(0, 0), 0))─┬─toTypeName(plus(plus(plus(0, 0), 0), 0))─┐
│ UInt8         │ UInt16                 │ UInt32                          │ UInt64                                   │
└───────────────┴────────────────────────┴─────────────────────────────────┴──────────────────────────────────────────┘
```

Arithmetic functions work for any pair of types from UInt8, UInt16, UInt32, UInt64, Int8, Int16, Int32, Int64, Float32, or Float64.

Overflow is produced the same way as in C++.

## plus(a, b), a + b operator {#plusa-b-a-b-operator}

Calculates the sum of the numbers.
You can also add integer numbers with a date or date and time. In the case of a date, adding an integer means adding the corresponding number of days. For a date with time, it means adding the corresponding number of seconds.

## minus(a, b), a - b operator {#minusa-b-a-b-operator}

Calculates the difference. The result is always signed.

You can also calculate integer numbers from a date or date with time. The idea is the same – see above for ‘plus’.

## multiply(a, b), a \* b operator {#multiplya-b-a-b-operator}

Calculates the product of the numbers.

## divide(a, b), a / b operator {#dividea-b-a-b-operator}

Calculates the quotient of the numbers. The result type is always a floating-point type.
It is not integer division. For integer division, use the ‘intDiv’ function.
When dividing by zero you get ‘inf’, ‘-inf’, or ‘nan’.
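A quick illustrative check of the four basic operators (expected results shown as comments), including the date arithmetic described above:

``` sql
SELECT plus(7, 5) AS s, minus(7, 5) AS d, multiply(7, 5) AS p, divide(7, 5) AS q;
-- s = 12, d = 2, p = 35, q = 1.4 (divide always returns a floating-point value)

SELECT toDate('2021-04-12') + 3 AS shifted;
-- shifted = 2021-04-15 (adding an integer to a Date adds that many days)
```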
## intDiv(a, b) {#intdiva-b}

Calculates the quotient of the numbers. Divides into integers, rounding down (by the absolute value).
An exception is thrown when dividing by zero or when dividing a minimal negative number by minus one.

## intDivOrZero(a, b) {#intdivorzeroa-b}

Differs from ‘intDiv’ in that it returns zero when dividing by zero or when dividing a minimal negative number by minus one.

## modulo(a, b), a % b operator {#moduloa-b-a-b-operator}

Calculates the remainder after division.
If arguments are floating-point numbers, they are pre-converted to integers by dropping the decimal portion.
The remainder is taken in the same sense as in C++. Truncated division is used for negative numbers.
An exception is thrown when dividing by zero or when dividing a minimal negative number by minus one.

## moduloOrZero(a, b) {#moduloorzeroa-b}

Differs from ‘modulo’ in that it returns zero when the divisor is zero.

## negate(a), -a operator {#negatea-a-operator}

Calculates a number with the reverse sign. The result is always signed.

## abs(a) {#arithm_func-abs}

Calculates the absolute value of the number (a). That is, if a \< 0, it returns -a. For unsigned types it does not do anything. For signed integer types, it returns an unsigned number.

## gcd(a, b) {#gcda-b}

Returns the greatest common divisor of the numbers.
An exception is thrown when dividing by zero or when dividing a minimal negative number by minus one.

## lcm(a, b) {#lcma-b}

Returns the least common multiple of the numbers.
An exception is thrown when dividing by zero or when dividing a minimal negative number by minus one.

[Article Original](https://clickhouse.tech/docs/en/query_language/functions/arithmetic_functions/)

diff --git a/docs/fr/sql-reference/functions/array-functions.md b/docs/fr/sql-reference/functions/array-functions.md
deleted file mode 100644
index 40568841372..00000000000
--- a/docs/fr/sql-reference/functions/array-functions.md
+++ /dev/null
@@ -1,1061 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_priority: 46
toc_title: Travailler avec des tableaux
---

# Fonctions pour travailler avec des tableaux {#functions-for-working-with-arrays}

## empty {#function-empty}

Retourne 1 pour un tableau vide, ou 0 pour un tableau non vide.
Le type du résultat est UInt8.
La fonction fonctionne également pour les chaînes.

## notEmpty {#function-notempty}

Retourne 0 pour un tableau vide, ou 1 pour un tableau non vide.
Le type du résultat est UInt8.
La fonction fonctionne également pour les chaînes.

## length {#array_functions-length}

Retourne le nombre d'éléments dans le tableau.
Le type du résultat est UInt64.
La fonction fonctionne également pour les chaînes.

## emptyArrayUInt8, emptyArrayUInt16, emptyArrayUInt32, emptyArrayUInt64 {#emptyarrayuint8-emptyarrayuint16-emptyarrayuint32-emptyarrayuint64}

## emptyArrayInt8, emptyArrayInt16, emptyArrayInt32, emptyArrayInt64 {#emptyarrayint8-emptyarrayint16-emptyarrayint32-emptyarrayint64}

## emptyArrayFloat32, emptyArrayFloat64 {#emptyarrayfloat32-emptyarrayfloat64}

## emptyArrayDate, emptyArrayDateTime {#emptyarraydate-emptyarraydatetime}

## emptyArrayString {#emptyarraystring}

Accepte zéro argument et renvoie un tableau vide du type approprié.
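As a minimal illustration, each of these constructors returns an empty but correctly typed array:

``` sql
SELECT emptyArrayUInt8() AS a, emptyArrayString() AS b, toTypeName(a), toTypeName(b);
-- []  []  Array(UInt8)  Array(String)
```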
- -## emptyArrayToSingle {#emptyarraytosingle} - -Accepte un tableau vide et renvoie un élément de tableau qui est égal à la valeur par défaut. - -## plage (fin), Plage(début, fin \[, étape\]) {#rangeend-rangestart-end-step} - -Retourne un tableau de nombres du début à la fin-1 par étape. -Si l'argument `start` n'est pas spécifié, la valeur par défaut est 0. -Si l'argument `step` n'est pas spécifié, la valeur par défaut est 1. -Il se comporte presque comme pythonic `range`. Mais la différence est que tous les types d'arguments doivent être `UInt` nombre. -Juste au cas où, une exception est levée si des tableaux d'une longueur totale de plus de 100 000 000 d'éléments sont créés dans un bloc de données. - -## array(x1, …), operator \[x1, …\] {#arrayx1-operator-x1} - -Crée un tableau à partir des arguments de la fonction. -Les arguments doivent être des constantes et avoir des types qui ont le plus petit type commun. Au moins un argument doit être passé, sinon il n'est pas clair quel type de tableau créer. Qui est, vous ne pouvez pas utiliser cette fonction pour créer un tableau vide (pour ce faire, utilisez la ‘emptyArray\*’ la fonction décrite ci-dessus). -Retourne un ‘Array(T)’ type de résultat, où ‘T’ est le plus petit type commun parmi les arguments passés. - -## arrayConcat {#arrayconcat} - -Combine des tableaux passés comme arguments. - -``` sql -arrayConcat(arrays) -``` - -**Paramètre** - -- `arrays` – Arbitrary number of arguments of [Tableau](../../sql-reference/data-types/array.md) type. - **Exemple** - - - -``` sql -SELECT arrayConcat([1, 2], [3, 4], [5, 6]) AS res -``` - -``` text -┌─res───────────┐ -│ [1,2,3,4,5,6] │ -└───────────────┘ -``` - -## arrayElement(arr, n), opérateur arr\[n\] {#arrayelementarr-n-operator-arrn} - -Récupérer l'élément avec l'index `n` à partir du tableau `arr`. `n` doit être n'importe quel type entier. -Les index dans un tableau commencent à partir d'un. -Les index négatifs sont pris en charge. Dans ce cas, il sélectionne l'élément correspondant numérotées à partir de la fin. Exemple, `arr[-1]` est le dernier élément du tableau. - -Si l'index est en dehors des limites d'un tableau, il renvoie une valeur (0 pour les nombres, une chaîne vide pour les cordes, etc.), sauf pour le cas avec un tableau non constant et un index constant 0 (dans ce cas, il y aura une erreur `Array indices are 1-based`). - -## a (arr, elem) {#hasarr-elem} - -Vérifie si le ‘arr’ tableau a la ‘elem’ élément. -Retourne 0 si l'élément n'est pas dans le tableau, ou 1 si elle l'est. - -`NULL` est traitée comme une valeur. - -``` sql -SELECT has([1, 2, NULL], NULL) -``` - -``` text -┌─has([1, 2, NULL], NULL)─┐ -│ 1 │ -└─────────────────────────┘ -``` - -## hasAll {#hasall} - -Vérifie si un tableau est un sous-ensemble de l'autre. - -``` sql -hasAll(set, subset) -``` - -**Paramètre** - -- `set` – Array of any type with a set of elements. -- `subset` – Array of any type with elements that should be tested to be a subset of `set`. - -**Les valeurs de retour** - -- `1`, si `set` contient tous les éléments de `subset`. -- `0`, autrement. - -**Propriétés particulières** - -- Un tableau vide est un sous-ensemble d'un tableau quelconque. -- `Null` traitée comme une valeur. -- Ordre des valeurs dans les deux tableaux n'a pas d'importance. - -**Exemple** - -`SELECT hasAll([], [])` retours 1. - -`SELECT hasAll([1, Null], [Null])` retours 1. - -`SELECT hasAll([1.0, 2, 3, 4], [1, 3])` retours 1. - -`SELECT hasAll(['a', 'b'], ['a'])` retours 1. - -`SELECT hasAll([1], ['a'])` renvoie 0. 
- -`SELECT hasAll([[1, 2], [3, 4]], [[1, 2], [3, 5]])` renvoie 0. - -## hasAny {#hasany} - -Vérifie si deux tableaux ont une intersection par certains éléments. - -``` sql -hasAny(array1, array2) -``` - -**Paramètre** - -- `array1` – Array of any type with a set of elements. -- `array2` – Array of any type with a set of elements. - -**Les valeurs de retour** - -- `1`, si `array1` et `array2` avoir un élément similaire au moins. -- `0`, autrement. - -**Propriétés particulières** - -- `Null` traitée comme une valeur. -- Ordre des valeurs dans les deux tableaux n'a pas d'importance. - -**Exemple** - -`SELECT hasAny([1], [])` retourner `0`. - -`SELECT hasAny([Null], [Null, 1])` retourner `1`. - -`SELECT hasAny([-128, 1., 512], [1])` retourner `1`. - -`SELECT hasAny([[1, 2], [3, 4]], ['a', 'c'])` retourner `0`. - -`SELECT hasAll([[1, 2], [3, 4]], [[1, 2], [1, 2]])` retourner `1`. - -## indexOf (arr, x) {#indexofarr-x} - -Renvoie l'index de la première ‘x’ élément (à partir de 1) s'il est dans le tableau, ou 0 s'il ne l'est pas. - -Exemple: - -``` sql -SELECT indexOf([1, 3, NULL, NULL], NULL) -``` - -``` text -┌─indexOf([1, 3, NULL, NULL], NULL)─┐ -│ 3 │ -└───────────────────────────────────┘ -``` - -Ensemble d'éléments de `NULL` sont traités comme des valeurs normales. - -## countEqual (arr, x) {#countequalarr-x} - -Renvoie le nombre d'éléments dans le tableau égal à X. équivalent à arrayCount (elem - \> elem = x, arr). - -`NULL` les éléments sont traités comme des valeurs distinctes. - -Exemple: - -``` sql -SELECT countEqual([1, 2, NULL, NULL], NULL) -``` - -``` text -┌─countEqual([1, 2, NULL, NULL], NULL)─┐ -│ 2 │ -└──────────────────────────────────────┘ -``` - -## arrayEnumerate (arr) {#array_functions-arrayenumerate} - -Returns the array \[1, 2, 3, …, length (arr) \] - -Cette fonction est normalement utilisée avec ARRAY JOIN. Il permet de compter quelque chose une seule fois pour chaque tableau après l'application de la jointure de tableau. Exemple: - -``` sql -SELECT - count() AS Reaches, - countIf(num = 1) AS Hits -FROM test.hits -ARRAY JOIN - GoalsReached, - arrayEnumerate(GoalsReached) AS num -WHERE CounterID = 160656 -LIMIT 10 -``` - -``` text -┌─Reaches─┬──Hits─┐ -│ 95606 │ 31406 │ -└─────────┴───────┘ -``` - -Dans cet exemple, Reaches est le nombre de conversions (les chaînes reçues après l'application de la jointure de tableau), et Hits est le nombre de pages vues (chaînes avant la jointure de tableau). Dans ce cas particulier, vous pouvez obtenir le même résultat dans une voie plus facile: - -``` sql -SELECT - sum(length(GoalsReached)) AS Reaches, - count() AS Hits -FROM test.hits -WHERE (CounterID = 160656) AND notEmpty(GoalsReached) -``` - -``` text -┌─Reaches─┬──Hits─┐ -│ 95606 │ 31406 │ -└─────────┴───────┘ -``` - -Cette fonction peut également être utilisée dans les fonctions d'ordre supérieur. Par exemple, vous pouvez l'utiliser pour obtenir les indices de tableau pour les éléments qui correspondent à une condition. - -## arrayEnumerateUniq(arr, …) {#arrayenumerateuniqarr} - -Renvoie un tableau de la même taille que le tableau source, indiquant pour chaque élément Quelle est sa position parmi les éléments de même valeur. -Par exemple: arrayEnumerateUniq(\[10, 20, 10, 30\]) = \[1, 1, 2, 1\]. - -Cette fonction est utile lors de L'utilisation de la jointure de tableau et de l'agrégation d'éléments de tableau. 
-Exemple: - -``` sql -SELECT - Goals.ID AS GoalID, - sum(Sign) AS Reaches, - sumIf(Sign, num = 1) AS Visits -FROM test.visits -ARRAY JOIN - Goals, - arrayEnumerateUniq(Goals.ID) AS num -WHERE CounterID = 160656 -GROUP BY GoalID -ORDER BY Reaches DESC -LIMIT 10 -``` - -``` text -┌──GoalID─┬─Reaches─┬─Visits─┐ -│ 53225 │ 3214 │ 1097 │ -│ 2825062 │ 3188 │ 1097 │ -│ 56600 │ 2803 │ 488 │ -│ 1989037 │ 2401 │ 365 │ -│ 2830064 │ 2396 │ 910 │ -│ 1113562 │ 2372 │ 373 │ -│ 3270895 │ 2262 │ 812 │ -│ 1084657 │ 2262 │ 345 │ -│ 56599 │ 2260 │ 799 │ -│ 3271094 │ 2256 │ 812 │ -└─────────┴─────────┴────────┘ -``` - -Dans cet exemple, chaque ID d'objectif a un calcul du nombre de conversions (chaque élément de la structure de données imbriquées objectifs est un objectif atteint, que nous appelons une conversion) et le nombre de sessions. Sans array JOIN, nous aurions compté le nombre de sessions comme sum(signe). Mais dans ce cas particulier, les lignes ont été multipliées par la structure des objectifs imbriqués, donc pour compter chaque session une fois après cela, nous appliquons une condition à la valeur de arrayEnumerateUniq(Goals.ID) fonction. - -La fonction arrayEnumerateUniq peut prendre plusieurs tableaux de la même taille que les arguments. Dans ce cas, l'unicité est considérée pour les tuples d'éléments dans les mêmes positions dans tous les tableaux. - -``` sql -SELECT arrayEnumerateUniq([1, 1, 1, 2, 2, 2], [1, 1, 2, 1, 1, 2]) AS res -``` - -``` text -┌─res───────────┐ -│ [1,2,1,1,2,1] │ -└───────────────┘ -``` - -Ceci est nécessaire lors de L'utilisation de Array JOIN avec une structure de données imbriquée et une agrégation supplémentaire entre plusieurs éléments de cette structure. - -## arrayPopBack {#arraypopback} - -Supprime le dernier élément du tableau. - -``` sql -arrayPopBack(array) -``` - -**Paramètre** - -- `array` – Array. - -**Exemple** - -``` sql -SELECT arrayPopBack([1, 2, 3]) AS res -``` - -``` text -┌─res───┐ -│ [1,2] │ -└───────┘ -``` - -## arrayPopFront {#arraypopfront} - -Supprime le premier élément de la matrice. - -``` sql -arrayPopFront(array) -``` - -**Paramètre** - -- `array` – Array. - -**Exemple** - -``` sql -SELECT arrayPopFront([1, 2, 3]) AS res -``` - -``` text -┌─res───┐ -│ [2,3] │ -└───────┘ -``` - -## arrayPushBack {#arraypushback} - -Ajoute un élément à la fin du tableau. - -``` sql -arrayPushBack(array, single_value) -``` - -**Paramètre** - -- `array` – Array. -- `single_value` – A single value. Only numbers can be added to an array with numbers, and only strings can be added to an array of strings. When adding numbers, ClickHouse automatically sets the `single_value` type pour le type de données du tableau. Pour plus d'informations sur les types de données dans ClickHouse, voir “[Types de données](../../sql-reference/data-types/index.md#data_types)”. Peut être `NULL`. La fonction ajoute un `NULL` tableau, et le type d'éléments de tableau convertit en `Nullable`. - -**Exemple** - -``` sql -SELECT arrayPushBack(['a'], 'b') AS res -``` - -``` text -┌─res───────┐ -│ ['a','b'] │ -└───────────┘ -``` - -## arrayPushFront {#arraypushfront} - -Ajoute un élément au début du tableau. - -``` sql -arrayPushFront(array, single_value) -``` - -**Paramètre** - -- `array` – Array. -- `single_value` – A single value. Only numbers can be added to an array with numbers, and only strings can be added to an array of strings. When adding numbers, ClickHouse automatically sets the `single_value` type pour le type de données du tableau. 
Pour plus d'informations sur les types de données dans ClickHouse, voir “[Types de données](../../sql-reference/data-types/index.md#data_types)”. Peut être `NULL`. La fonction ajoute un `NULL` tableau, et le type d'éléments de tableau convertit en `Nullable`. - -**Exemple** - -``` sql -SELECT arrayPushFront(['b'], 'a') AS res -``` - -``` text -┌─res───────┐ -│ ['a','b'] │ -└───────────┘ -``` - -## arrayResize {#arrayresize} - -Les changements de la longueur du tableau. - -``` sql -arrayResize(array, size[, extender]) -``` - -**Paramètre:** - -- `array` — Array. -- `size` — Required length of the array. - - Si `size` est inférieure à la taille d'origine du tableau, le tableau est tronqué à partir de la droite. -- Si `size` est plus grande que la taille initiale du tableau, le tableau est étendu vers la droite avec `extender` valeurs ou valeurs par défaut pour le type de données des éléments du tableau. -- `extender` — Value for extending an array. Can be `NULL`. - -**Valeur renvoyée:** - -Un tableau de longueur `size`. - -**Exemples d'appels** - -``` sql -SELECT arrayResize([1], 3) -``` - -``` text -┌─arrayResize([1], 3)─┐ -│ [1,0,0] │ -└─────────────────────┘ -``` - -``` sql -SELECT arrayResize([1], 3, NULL) -``` - -``` text -┌─arrayResize([1], 3, NULL)─┐ -│ [1,NULL,NULL] │ -└───────────────────────────┘ -``` - -## arraySlice {#arrayslice} - -Retourne une tranche du tableau. - -``` sql -arraySlice(array, offset[, length]) -``` - -**Paramètre** - -- `array` – Array of data. -- `offset` – Indent from the edge of the array. A positive value indicates an offset on the left, and a negative value is an indent on the right. Numbering of the array items begins with 1. -- `length` - La longueur de la nécessaire tranche. Si vous spécifiez une valeur négative, la fonction renvoie un ouvert tranche `[offset, array_length - length)`. Si vous omettez la valeur, la fonction renvoie la tranche `[offset, the_end_of_array]`. - -**Exemple** - -``` sql -SELECT arraySlice([1, 2, NULL, 4, 5], 2, 3) AS res -``` - -``` text -┌─res────────┐ -│ [2,NULL,4] │ -└────────────┘ -``` - -Éléments de tableau définis sur `NULL` sont traités comme des valeurs normales. - -## arraySort(\[func,\] arr, …) {#array_functions-sort} - -Trie les éléments de la `arr` tableau dans l'ordre croissant. Si l' `func` fonction est spécifiée, l'ordre de tri est déterminé par le résultat de la `func` fonction appliquée aux éléments du tableau. Si `func` accepte plusieurs arguments, le `arraySort` la fonction est passé plusieurs tableaux que les arguments de `func` correspond à. Des exemples détaillés sont présentés à la fin de `arraySort` Description. - -Exemple de tri de valeurs entières: - -``` sql -SELECT arraySort([1, 3, 3, 0]); -``` - -``` text -┌─arraySort([1, 3, 3, 0])─┐ -│ [0,1,3,3] │ -└─────────────────────────┘ -``` - -Exemple de tri des valeurs de chaîne: - -``` sql -SELECT arraySort(['hello', 'world', '!']); -``` - -``` text -┌─arraySort(['hello', 'world', '!'])─┐ -│ ['!','hello','world'] │ -└────────────────────────────────────┘ -``` - -Considérez l'ordre de tri suivant pour le `NULL`, `NaN` et `Inf` valeur: - -``` sql -SELECT arraySort([1, nan, 2, NULL, 3, nan, -4, NULL, inf, -inf]); -``` - -``` text -┌─arraySort([1, nan, 2, NULL, 3, nan, -4, NULL, inf, -inf])─┐ -│ [-inf,-4,1,2,3,inf,nan,nan,NULL,NULL] │ -└───────────────────────────────────────────────────────────┘ -``` - -- `-Inf` les valeurs sont d'abord dans le tableau. -- `NULL` les valeurs sont les derniers dans le tableau. -- `NaN` les valeurs sont juste avant `NULL`. 
-- `Inf` les valeurs sont juste avant `NaN`. - -Notez que `arraySort` est un [fonction d'ordre supérieur](higher-order-functions.md). Vous pouvez passer d'une fonction lambda comme premier argument. Dans ce cas, l'ordre de classement est déterminé par le résultat de la fonction lambda appliquée aux éléments de la matrice. - -Considérons l'exemple suivant: - -``` sql -SELECT arraySort((x) -> -x, [1, 2, 3]) as res; -``` - -``` text -┌─res─────┐ -│ [3,2,1] │ -└─────────┘ -``` - -For each element of the source array, the lambda function returns the sorting key, that is, \[1 –\> -1, 2 –\> -2, 3 –\> -3\]. Since the `arraySort` fonction trie les touches dans l'ordre croissant, le résultat est \[3, 2, 1\]. Ainsi, l' `(x) –> -x` fonction lambda définit le [l'ordre décroissant](#array_functions-reverse-sort) dans un tri. - -La fonction lambda peut accepter plusieurs arguments. Dans ce cas, vous avez besoin de passer l' `arraySort` fonction plusieurs tableaux de longueur identique à laquelle correspondront les arguments de la fonction lambda. Le tableau résultant sera composé d'éléments du premier tableau d'entrée; les éléments du(des) Tableau (s) d'entrée suivant (s) spécifient les clés de tri. Exemple: - -``` sql -SELECT arraySort((x, y) -> y, ['hello', 'world'], [2, 1]) as res; -``` - -``` text -┌─res────────────────┐ -│ ['world', 'hello'] │ -└────────────────────┘ -``` - -Ici, les éléments qui sont passés dans le deuxième tableau (\[2, 1\]) définissent une clé de tri pour l'élément correspondant à partir du tableau source (\[‘hello’, ‘world’\]), qui est, \[‘hello’ –\> 2, ‘world’ –\> 1\]. Since the lambda function doesn't use `x`, les valeurs réelles du tableau source n'affectent pas l'ordre dans le résultat. Si, ‘hello’ sera le deuxième élément du résultat, et ‘world’ sera le premier. - -D'autres exemples sont présentés ci-dessous. - -``` sql -SELECT arraySort((x, y) -> y, [0, 1, 2], ['c', 'b', 'a']) as res; -``` - -``` text -┌─res─────┐ -│ [2,1,0] │ -└─────────┘ -``` - -``` sql -SELECT arraySort((x, y) -> -y, [0, 1, 2], [1, 2, 3]) as res; -``` - -``` text -┌─res─────┐ -│ [2,1,0] │ -└─────────┘ -``` - -!!! note "Note" - Pour améliorer l'efficacité du tri, de la [Transformation schwartzienne](https://en.wikipedia.org/wiki/Schwartzian_transform) est utilisée. - -## arrayReverseSort(\[func,\] arr, …) {#array_functions-reverse-sort} - -Trie les éléments de la `arr` tableau dans l'ordre décroissant. Si l' `func` la fonction est spécifiée, `arr` est trié en fonction du résultat de la `func` fonction appliquée aux éléments du tableau, puis le tableau trié est inversé. Si `func` accepte plusieurs arguments, le `arrayReverseSort` la fonction est passé plusieurs tableaux que les arguments de `func` correspond à. Des exemples détaillés sont présentés à la fin de `arrayReverseSort` Description. 
- -Exemple de tri de valeurs entières: - -``` sql -SELECT arrayReverseSort([1, 3, 3, 0]); -``` - -``` text -┌─arrayReverseSort([1, 3, 3, 0])─┐ -│ [3,3,1,0] │ -└────────────────────────────────┘ -``` - -Exemple de tri des valeurs de chaîne: - -``` sql -SELECT arrayReverseSort(['hello', 'world', '!']); -``` - -``` text -┌─arrayReverseSort(['hello', 'world', '!'])─┐ -│ ['world','hello','!'] │ -└───────────────────────────────────────────┘ -``` - -Considérez l'ordre de tri suivant pour le `NULL`, `NaN` et `Inf` valeur: - -``` sql -SELECT arrayReverseSort([1, nan, 2, NULL, 3, nan, -4, NULL, inf, -inf]) as res; -``` - -``` text -┌─res───────────────────────────────────┐ -│ [inf,3,2,1,-4,-inf,nan,nan,NULL,NULL] │ -└───────────────────────────────────────┘ -``` - -- `Inf` les valeurs sont d'abord dans le tableau. -- `NULL` les valeurs sont les derniers dans le tableau. -- `NaN` les valeurs sont juste avant `NULL`. -- `-Inf` les valeurs sont juste avant `NaN`. - -Notez que l' `arrayReverseSort` est un [fonction d'ordre supérieur](higher-order-functions.md). Vous pouvez passer d'une fonction lambda comme premier argument. Exemple est montré ci-dessous. - -``` sql -SELECT arrayReverseSort((x) -> -x, [1, 2, 3]) as res; -``` - -``` text -┌─res─────┐ -│ [1,2,3] │ -└─────────┘ -``` - -Le tableau est trié de la façon suivante: - -1. Dans un premier temps, le tableau source (\[1, 2, 3\]) est trié en fonction du résultat de la fonction lambda appliquée aux éléments du tableau. Le résultat est un tableau \[3, 2, 1\]. -2. Tableau qui est obtenu à l'étape précédente, est renversé. Donc, le résultat final est \[1, 2, 3\]. - -La fonction lambda peut accepter plusieurs arguments. Dans ce cas, vous avez besoin de passer l' `arrayReverseSort` fonction plusieurs tableaux de longueur identique à laquelle correspondront les arguments de la fonction lambda. Le tableau résultant sera composé d'éléments du premier tableau d'entrée; les éléments du(des) Tableau (s) d'entrée suivant (s) spécifient les clés de tri. Exemple: - -``` sql -SELECT arrayReverseSort((x, y) -> y, ['hello', 'world'], [2, 1]) as res; -``` - -``` text -┌─res───────────────┐ -│ ['hello','world'] │ -└───────────────────┘ -``` - -Dans cet exemple, le tableau est trié de la façon suivante: - -1. Au début, le tableau source (\[‘hello’, ‘world’\]) est triée selon le résultat de la fonction lambda appliquée aux éléments de tableaux. Les éléments qui sont passés dans le deuxième tableau (\[2, 1\]), définissent les clés de tri pour les éléments correspondants du tableau source. Le résultat est un tableau \[‘world’, ‘hello’\]. -2. Tableau trié lors de l'étape précédente, est renversé. Donc, le résultat final est \[‘hello’, ‘world’\]. - -D'autres exemples sont présentés ci-dessous. - -``` sql -SELECT arrayReverseSort((x, y) -> y, [4, 3, 5], ['a', 'b', 'c']) AS res; -``` - -``` text -┌─res─────┐ -│ [5,3,4] │ -└─────────┘ -``` - -``` sql -SELECT arrayReverseSort((x, y) -> -y, [4, 3, 5], [1, 2, 3]) AS res; -``` - -``` text -┌─res─────┐ -│ [4,3,5] │ -└─────────┘ -``` - -## arrayUniq(arr, …) {#arrayuniqarr} - -Si un argument est passé, il compte le nombre de différents éléments dans le tableau. -Si plusieurs arguments sont passés, il compte le nombre de tuples différents d'éléments aux positions correspondantes dans plusieurs tableaux. - -Si vous souhaitez obtenir une liste des éléments dans un tableau, vous pouvez utiliser arrayReduce(‘groupUniqArray’, arr). - -## arrayJoin (arr) {#array-functions-join} - -Une fonction spéciale. 
Voir la section [“ArrayJoin function”](array-join.md#functions_arrayjoin). - -## tableaudifférence {#arraydifference} - -Calcule la différence entre les éléments de tableau adjacents. Renvoie un tableau où le premier élément sera 0, le second est la différence entre `a[1] - a[0]`, etc. The type of elements in the resulting array is determined by the type inference rules for subtraction (e.g. `UInt8` - `UInt8` = `Int16`). - -**Syntaxe** - -``` sql -arrayDifference(array) -``` - -**Paramètre** - -- `array` – [Tableau](https://clickhouse.tech/docs/en/data_types/array/). - -**Valeurs renvoyées** - -Renvoie un tableau de différences entre les éléments adjacents. - -Type: [UInt\*](https://clickhouse.tech/docs/en/data_types/int_uint/#uint-ranges), [Int\*](https://clickhouse.tech/docs/en/data_types/int_uint/#int-ranges), [Flottant\*](https://clickhouse.tech/docs/en/data_types/float/). - -**Exemple** - -Requête: - -``` sql -SELECT arrayDifference([1, 2, 3, 4]) -``` - -Résultat: - -``` text -┌─arrayDifference([1, 2, 3, 4])─┐ -│ [0,1,1,1] │ -└───────────────────────────────┘ -``` - -Exemple de débordement dû au type de résultat Int64: - -Requête: - -``` sql -SELECT arrayDifference([0, 10000000000000000000]) -``` - -Résultat: - -``` text -┌─arrayDifference([0, 10000000000000000000])─┐ -│ [0,-8446744073709551616] │ -└────────────────────────────────────────────┘ -``` - -## arrayDistinct {#arraydistinct} - -Prend un tableau, retourne un tableau contenant les différents éléments seulement. - -**Syntaxe** - -``` sql -arrayDistinct(array) -``` - -**Paramètre** - -- `array` – [Tableau](https://clickhouse.tech/docs/en/data_types/array/). - -**Valeurs renvoyées** - -Retourne un tableau contenant les éléments distincts. - -**Exemple** - -Requête: - -``` sql -SELECT arrayDistinct([1, 2, 2, 3, 1]) -``` - -Résultat: - -``` text -┌─arrayDistinct([1, 2, 2, 3, 1])─┐ -│ [1,2,3] │ -└────────────────────────────────┘ -``` - -## arrayEnumerateDense(arr) {#array_functions-arrayenumeratedense} - -Renvoie un tableau de la même taille que le tableau source, indiquant où chaque élément apparaît en premier dans le tableau source. - -Exemple: - -``` sql -SELECT arrayEnumerateDense([10, 20, 10, 30]) -``` - -``` text -┌─arrayEnumerateDense([10, 20, 10, 30])─┐ -│ [1,2,1,3] │ -└───────────────────────────────────────┘ -``` - -## arrayIntersect (arr) {#array-functions-arrayintersect} - -Prend plusieurs tableaux, retourne un tableau avec des éléments présents dans tous les tableaux source. L'ordre des éléments dans le tableau résultant est le même que dans le premier tableau. - -Exemple: - -``` sql -SELECT - arrayIntersect([1, 2], [1, 3], [2, 3]) AS no_intersect, - arrayIntersect([1, 2], [1, 3], [1, 4]) AS intersect -``` - -``` text -┌─no_intersect─┬─intersect─┐ -│ [] │ [1] │ -└──────────────┴───────────┘ -``` - -## arrayReduce {#arrayreduce} - -Applique une fonction d'agrégation aux éléments du tableau et renvoie son résultat. Le nom de la fonction d'agrégation est passé sous forme de chaîne entre guillemets simples `'max'`, `'sum'`. Lorsque vous utilisez des fonctions d'agrégat paramétriques, le paramètre est indiqué après le nom de la fonction entre parenthèses `'uniqUpTo(6)'`. - -**Syntaxe** - -``` sql -arrayReduce(agg_func, arr1, arr2, ..., arrN) -``` - -**Paramètre** - -- `agg_func` — The name of an aggregate function which should be a constant [chaîne](../../sql-reference/data-types/string.md). 
-- `arr` — Any number of [tableau](../../sql-reference/data-types/array.md) tapez les colonnes comme paramètres de la fonction d'agrégation. - -**Valeur renvoyée** - -**Exemple** - -``` sql -SELECT arrayReduce('max', [1, 2, 3]) -``` - -``` text -┌─arrayReduce('max', [1, 2, 3])─┐ -│ 3 │ -└───────────────────────────────┘ -``` - -Si une fonction d'agrégation prend plusieurs arguments, cette fonction doit être appliqué à plusieurs ensembles de même taille. - -``` sql -SELECT arrayReduce('maxIf', [3, 5], [1, 0]) -``` - -``` text -┌─arrayReduce('maxIf', [3, 5], [1, 0])─┐ -│ 3 │ -└──────────────────────────────────────┘ -``` - -Exemple avec une fonction d'agrégat paramétrique: - -``` sql -SELECT arrayReduce('uniqUpTo(3)', [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]) -``` - -``` text -┌─arrayReduce('uniqUpTo(3)', [1, 2, 3, 4, 5, 6, 7, 8, 9, 10])─┐ -│ 4 │ -└─────────────────────────────────────────────────────────────┘ -``` - -## arrayReduceInRanges {#arrayreduceinranges} - -Applique une fonction d'agrégation d'éléments de tableau dans des plages et retourne un tableau contenant le résultat correspondant à chaque gamme. La fonction retourne le même résultat que plusieurs `arrayReduce(agg_func, arraySlice(arr1, index, length), ...)`. - -**Syntaxe** - -``` sql -arrayReduceInRanges(agg_func, ranges, arr1, arr2, ..., arrN) -``` - -**Paramètre** - -- `agg_func` — The name of an aggregate function which should be a constant [chaîne](../../sql-reference/data-types/string.md). -- `ranges` — The ranges to aggretate which should be an [tableau](../../sql-reference/data-types/array.md) de [tuple](../../sql-reference/data-types/tuple.md) qui contient l'indice et la longueur de chaque plage. -- `arr` — Any number of [tableau](../../sql-reference/data-types/array.md) tapez les colonnes comme paramètres de la fonction d'agrégation. - -**Valeur renvoyée** - -**Exemple** - -``` sql -SELECT arrayReduceInRanges( - 'sum', - [(1, 5), (2, 3), (3, 4), (4, 4)], - [1000000, 200000, 30000, 4000, 500, 60, 7] -) AS res -``` - -``` text -┌─res─────────────────────────┐ -│ [1234500,234000,34560,4567] │ -└─────────────────────────────┘ -``` - -## arrayReverse(arr) {#arrayreverse} - -Retourne un tableau de la même taille que l'original tableau contenant les éléments dans l'ordre inverse. - -Exemple: - -``` sql -SELECT arrayReverse([1, 2, 3]) -``` - -``` text -┌─arrayReverse([1, 2, 3])─┐ -│ [3,2,1] │ -└─────────────────────────┘ -``` - -## inverse (arr) {#array-functions-reverse} - -Synonyme de [“arrayReverse”](#arrayreverse) - -## arrayFlatten {#arrayflatten} - -Convertit un tableau de tableaux dans un tableau associatif. - -Fonction: - -- S'applique à toute profondeur de tableaux imbriqués. -- Ne change pas les tableaux qui sont déjà plats. - -Le tableau aplati contient tous les éléments de tous les tableaux source. - -**Syntaxe** - -``` sql -flatten(array_of_arrays) -``` - -Alias: `flatten`. - -**Paramètre** - -- `array_of_arrays` — [Tableau](../../sql-reference/data-types/array.md) de tableaux. Exemple, `[[1,2,3], [4,5]]`. - -**Exemple** - -``` sql -SELECT flatten([[[1]], [[2], [3]]]) -``` - -``` text -┌─flatten(array(array([1]), array([2], [3])))─┐ -│ [1,2,3] │ -└─────────────────────────────────────────────┘ -``` - -## arrayCompact {#arraycompact} - -Supprime les éléments en double consécutifs d'un tableau. L'ordre des valeurs de résultat est déterminée par l'ordre dans le tableau source. - -**Syntaxe** - -``` sql -arrayCompact(arr) -``` - -**Paramètre** - -`arr` — The [tableau](../../sql-reference/data-types/array.md) inspecter. 
**Valeur renvoyée**

Le tableau sans doublons consécutifs.

Type: `Array`.

**Exemple**

Requête:

``` sql
SELECT arrayCompact([1, 1, nan, nan, 2, 3, 3, 3])
```

Résultat:

``` text
┌─arrayCompact([1, 1, nan, nan, 2, 3, 3, 3])─┐
│ [1,nan,nan,2,3]                            │
└────────────────────────────────────────────┘
```

## arrayZip {#arrayzip}

Combine plusieurs tableaux en un seul tableau. Le tableau résultant contient les éléments correspondants des tableaux source regroupés en tuples, dans l'ordre des arguments listés.

**Syntaxe**

``` sql
arrayZip(arr1, arr2, ..., arrN)
```

**Paramètre**

- `arrN` — [Tableau](../data-types/array.md).

La fonction peut prendre n'importe quel nombre de tableaux de types différents. Tous les tableaux doivent être de taille égale.

**Valeur renvoyée**

- Tableau dont les éléments proviennent des tableaux source, regroupés en [tuples](../data-types/tuple.md). Les types de données dans le tuple sont les mêmes que ceux des tableaux d'entrée, dans le même ordre que les tableaux passés.

Type: [Tableau](../data-types/array.md).

**Exemple**

Requête:

``` sql
SELECT arrayZip(['a', 'b', 'c'], [5, 2, 1])
```

Résultat:

``` text
┌─arrayZip(['a', 'b', 'c'], [5, 2, 1])─┐
│ [('a',5),('b',2),('c',1)]            │
└──────────────────────────────────────┘
```

## arrayAUC {#arrayauc}

Calcule l'AUC (Area Under the Curve, l'aire sous la courbe, un concept d'apprentissage automatique; voir plus de détails: https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_the_curve).

**Syntaxe**

``` sql
arrayAUC(arr_scores, arr_labels)
```

**Paramètre**
- `arr_scores` — scores the prediction model gives.
- `arr_labels` — labels of samples, usually 1 for positive samples and 0 for negative samples.

**Valeur renvoyée**
Renvoie la valeur AUC avec le type Float64.

**Exemple**
Requête:

``` sql
SELECT arrayAUC([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1])
```

Résultat:

``` text
┌─arrayAUC([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1])─┐
│                                          0.75 │
└───────────────────────────────────────────────┘
```

[Article Original](https://clickhouse.tech/docs/en/query_language/functions/array_functions/)

diff --git a/docs/fr/sql-reference/functions/array-join.md b/docs/fr/sql-reference/functions/array-join.md
deleted file mode 100644
index 859e801994d..00000000000
--- a/docs/fr/sql-reference/functions/array-join.md
+++ /dev/null
@@ -1,37 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_priority: 61
toc_title: arrayJoin
---

# fonction arrayJoin {#functions_arrayjoin}

C'est une fonction très inhabituelle.

Les fonctions normales ne modifient pas l'ensemble des lignes, mais modifient simplement les valeurs de chaque ligne (map).
Les fonctions d'agrégation compriment un ensemble de lignes (fold ou reduce).
La fonction ‘arrayJoin’ prend chaque ligne et génère un ensemble de lignes (unfold).

Cette fonction prend un tableau comme argument et propage la ligne source en autant de lignes qu'il y a d'éléments dans le tableau.
Toutes les valeurs des colonnes sont simplement copiées, sauf celles de la colonne où cette fonction est appliquée; ces valeurs sont remplacées par la valeur correspondante du tableau.

Une requête peut utiliser plusieurs fonctions `arrayJoin`. Dans ce cas, la transformation est effectuée plusieurs fois.

Notez la syntaxe ARRAY JOIN dans la requête SELECT, qui offre des possibilités plus larges.
- -Exemple: - -``` sql -SELECT arrayJoin([1, 2, 3] AS src) AS dst, 'Hello', src -``` - -``` text -┌─dst─┬─\'Hello\'─┬─src─────┐ -│ 1 │ Hello │ [1,2,3] │ -│ 2 │ Hello │ [1,2,3] │ -│ 3 │ Hello │ [1,2,3] │ -└─────┴───────────┴─────────┘ -``` - -[Article Original](https://clickhouse.tech/docs/en/query_language/functions/array_join/) diff --git a/docs/fr/sql-reference/functions/bit-functions.md b/docs/fr/sql-reference/functions/bit-functions.md deleted file mode 100644 index 7b8795815f2..00000000000 --- a/docs/fr/sql-reference/functions/bit-functions.md +++ /dev/null @@ -1,255 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 48 -toc_title: Bit ---- - -# Peu De Fonctions {#bit-functions} - -Les fonctions Bit fonctionnent pour n'importe quelle paire de types de UInt8, UInt16, UInt32, UInt64, Int8, Int16, Int32, Int64, Float32 ou Float64. - -Le type de résultat est un entier avec des bits égaux aux bits maximum de ses arguments. Si au moins l'un des arguments est signé, le résultat est un signé nombre. Si un argument est un nombre à virgule flottante, Il est converti en Int64. - -## bitAnd (a, b) {#bitanda-b} - -## bitOr (a, b) {#bitora-b} - -## bitXor (a, b) {#bitxora-b} - -## bitNot (a) {#bitnota} - -## bitShiftLeft (A, b) {#bitshiftlefta-b} - -## bitShiftRight (A, b) {#bitshiftrighta-b} - -## bitRotateLeft (a, b) {#bitrotatelefta-b} - -## bitRotateRight (a, b) {#bitrotaterighta-b} - -## bitTest {#bittest} - -Prend tout entier et le convertit en [forme binaire](https://en.wikipedia.org/wiki/Binary_number) renvoie la valeur d'un bit à la position spécifiée. Le compte à rebours commence à partir de 0 de la droite vers la gauche. - -**Syntaxe** - -``` sql -SELECT bitTest(number, index) -``` - -**Paramètre** - -- `number` – integer number. -- `index` – position of bit. - -**Valeurs renvoyées** - -Renvoie une valeur de bit à la position spécifiée. - -Type: `UInt8`. - -**Exemple** - -Par exemple, le nombre 43 dans le système numérique de base-2 (binaire) est 101011. - -Requête: - -``` sql -SELECT bitTest(43, 1) -``` - -Résultat: - -``` text -┌─bitTest(43, 1)─┐ -│ 1 │ -└────────────────┘ -``` - -Un autre exemple: - -Requête: - -``` sql -SELECT bitTest(43, 2) -``` - -Résultat: - -``` text -┌─bitTest(43, 2)─┐ -│ 0 │ -└────────────────┘ -``` - -## bitTestAll {#bittestall} - -Renvoie le résultat de [logique de conjonction](https://en.wikipedia.org/wiki/Logical_conjunction) (Et opérateur) de tous les bits à des positions données. Le compte à rebours commence à partir de 0 de la droite vers la gauche. - -La conjonction pour les opérations bit à bit: - -0 AND 0 = 0 - -0 AND 1 = 0 - -1 AND 0 = 0 - -1 AND 1 = 1 - -**Syntaxe** - -``` sql -SELECT bitTestAll(number, index1, index2, index3, index4, ...) -``` - -**Paramètre** - -- `number` – integer number. -- `index1`, `index2`, `index3`, `index4` – positions of bit. For example, for set of positions (`index1`, `index2`, `index3`, `index4`) est vrai si et seulement si toutes ses positions sont remplies (`index1` ⋀ `index2`, ⋀ `index3` ⋀ `index4`). - -**Valeurs renvoyées** - -Retourne le résultat de la conjonction logique. - -Type: `UInt8`. - -**Exemple** - -Par exemple, le nombre 43 dans le système numérique de base-2 (binaire) est 101011. 
- -Requête: - -``` sql -SELECT bitTestAll(43, 0, 1, 3, 5) -``` - -Résultat: - -``` text -┌─bitTestAll(43, 0, 1, 3, 5)─┐ -│ 1 │ -└────────────────────────────┘ -``` - -Un autre exemple: - -Requête: - -``` sql -SELECT bitTestAll(43, 0, 1, 3, 5, 2) -``` - -Résultat: - -``` text -┌─bitTestAll(43, 0, 1, 3, 5, 2)─┐ -│ 0 │ -└───────────────────────────────┘ -``` - -## bitTestAny {#bittestany} - -Renvoie le résultat de [disjonction logique](https://en.wikipedia.org/wiki/Logical_disjunction) (Ou opérateur) de tous les bits à des positions données. Le compte à rebours commence à partir de 0 de la droite vers la gauche. - -La disjonction pour les opérations binaires: - -0 OR 0 = 0 - -0 OR 1 = 1 - -1 OR 0 = 1 - -1 OR 1 = 1 - -**Syntaxe** - -``` sql -SELECT bitTestAny(number, index1, index2, index3, index4, ...) -``` - -**Paramètre** - -- `number` – integer number. -- `index1`, `index2`, `index3`, `index4` – positions of bit. - -**Valeurs renvoyées** - -Renvoie le résultat de la disjuction logique. - -Type: `UInt8`. - -**Exemple** - -Par exemple, le nombre 43 dans le système numérique de base-2 (binaire) est 101011. - -Requête: - -``` sql -SELECT bitTestAny(43, 0, 2) -``` - -Résultat: - -``` text -┌─bitTestAny(43, 0, 2)─┐ -│ 1 │ -└──────────────────────┘ -``` - -Un autre exemple: - -Requête: - -``` sql -SELECT bitTestAny(43, 4, 2) -``` - -Résultat: - -``` text -┌─bitTestAny(43, 4, 2)─┐ -│ 0 │ -└──────────────────────┘ -``` - -## bitCount {#bitcount} - -Calcule le nombre de bits mis à un dans la représentation binaire d'un nombre. - -**Syntaxe** - -``` sql -bitCount(x) -``` - -**Paramètre** - -- `x` — [Entier](../../sql-reference/data-types/int-uint.md) ou [virgule flottante](../../sql-reference/data-types/float.md) nombre. La fonction utilise la représentation de la valeur en mémoire. Il permet de financer les nombres à virgule flottante. - -**Valeur renvoyée** - -- Nombre de bits défini sur un dans le numéro d'entrée. - -La fonction ne convertit pas la valeur d'entrée en un type plus grand ([l'extension du signe](https://en.wikipedia.org/wiki/Sign_extension)). Ainsi, par exemple, `bitCount(toUInt8(-1)) = 8`. - -Type: `UInt8`. - -**Exemple** - -Prenez par exemple le numéro 333. Sa représentation binaire: 0000000101001101. - -Requête: - -``` sql -SELECT bitCount(333) -``` - -Résultat: - -``` text -┌─bitCount(333)─┐ -│ 5 │ -└───────────────┘ -``` - -[Article Original](https://clickhouse.tech/docs/en/query_language/functions/bit_functions/) diff --git a/docs/fr/sql-reference/functions/bitmap-functions.md b/docs/fr/sql-reference/functions/bitmap-functions.md deleted file mode 100644 index 15cb68ffc52..00000000000 --- a/docs/fr/sql-reference/functions/bitmap-functions.md +++ /dev/null @@ -1,496 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 49 -toc_title: Bitmap ---- - -# Fonctions De Bitmap {#bitmap-functions} - -Les fonctions Bitmap fonctionnent pour le calcul de la valeur de L'objet de deux bitmaps, il s'agit de renvoyer un nouveau bitmap ou une cardinalité tout en utilisant le calcul de la formule, tel que and, or, xor, and not, etc. - -Il existe 2 types de méthodes de construction pour L'objet Bitmap. L'un doit être construit par la fonction d'agrégation groupBitmap avec-State, l'autre doit être construit par L'objet Array. Il est également de convertir L'objet Bitmap en objet tableau. - -RoaringBitmap est enveloppé dans une structure de données pendant le stockage réel des objets Bitmap. 
Lorsque la cardinalité est inférieure ou égale à 32, elle utilise Set objet. Lorsque la cardinalité est supérieure à 32, elle utilise l'objet RoaringBitmap. C'est pourquoi le stockage de faible cardinalité jeu est plus rapide. - -Pour plus d'informations sur RoaringBitmap, voir: [CRoaring](https://github.com/RoaringBitmap/CRoaring). - -## bitmapBuild {#bitmap_functions-bitmapbuild} - -Construire un bitmap à partir d'un tableau entier non signé. - -``` sql -bitmapBuild(array) -``` - -**Paramètre** - -- `array` – unsigned integer array. - -**Exemple** - -``` sql -SELECT bitmapBuild([1, 2, 3, 4, 5]) AS res, toTypeName(res) -``` - -``` text -┌─res─┬─toTypeName(bitmapBuild([1, 2, 3, 4, 5]))─────┐ -│  │ AggregateFunction(groupBitmap, UInt8) │ -└─────┴──────────────────────────────────────────────┘ -``` - -## bitmapToArray {#bitmaptoarray} - -Convertir bitmap en tableau entier. - -``` sql -bitmapToArray(bitmap) -``` - -**Paramètre** - -- `bitmap` – bitmap object. - -**Exemple** - -``` sql -SELECT bitmapToArray(bitmapBuild([1, 2, 3, 4, 5])) AS res -``` - -``` text -┌─res─────────┐ -│ [1,2,3,4,5] │ -└─────────────┘ -``` - -## bitmapSubsetInRange {#bitmap-functions-bitmapsubsetinrange} - -Retourne le sous-ensemble dans la plage spécifiée (n'inclut pas le range_end). - -``` sql -bitmapSubsetInRange(bitmap, range_start, range_end) -``` - -**Paramètre** - -- `bitmap` – [Objet Bitmap](#bitmap_functions-bitmapbuild). -- `range_start` – range start point. Type: [UInt32](../../sql-reference/data-types/int-uint.md). -- `range_end` – range end point(excluded). Type: [UInt32](../../sql-reference/data-types/int-uint.md). - -**Exemple** - -``` sql -SELECT bitmapToArray(bitmapSubsetInRange(bitmapBuild([0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,100,200,500]), toUInt32(30), toUInt32(200))) AS res -``` - -``` text -┌─res───────────────┐ -│ [30,31,32,33,100] │ -└───────────────────┘ -``` - -## bitmapSubsetLimit {#bitmapsubsetlimit} - -Crée un sous-ensemble de bitmap avec n éléments pris entre `range_start` et `cardinality_limit`. - -**Syntaxe** - -``` sql -bitmapSubsetLimit(bitmap, range_start, cardinality_limit) -``` - -**Paramètre** - -- `bitmap` – [Objet Bitmap](#bitmap_functions-bitmapbuild). -- `range_start` – The subset starting point. Type: [UInt32](../../sql-reference/data-types/int-uint.md). -- `cardinality_limit` – The subset cardinality upper limit. Type: [UInt32](../../sql-reference/data-types/int-uint.md). - -**Valeur renvoyée** - -Ensemble. - -Type: `Bitmap object`. - -**Exemple** - -Requête: - -``` sql -SELECT bitmapToArray(bitmapSubsetLimit(bitmapBuild([0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,100,200,500]), toUInt32(30), toUInt32(200))) AS res -``` - -Résultat: - -``` text -┌─res───────────────────────┐ -│ [30,31,32,33,100,200,500] │ -└───────────────────────────┘ -``` - -## bitmapContains {#bitmap_functions-bitmapcontains} - -Vérifie si le bitmap contient un élément. - -``` sql -bitmapContains(haystack, needle) -``` - -**Paramètre** - -- `haystack` – [Objet Bitmap](#bitmap_functions-bitmapbuild) où la fonction recherche. -- `needle` – Value that the function searches. Type: [UInt32](../../sql-reference/data-types/int-uint.md). - -**Valeurs renvoyées** - -- 0 — If `haystack` ne contient pas de `needle`. -- 1 — If `haystack` contenir `needle`. - -Type: `UInt8`. 
**Exemple**

``` sql
SELECT bitmapContains(bitmapBuild([1,5,7,9]), toUInt32(9)) AS res
```

``` text
┌─res─┐
│   1 │
└─────┘
```

## bitmapHasAny {#bitmaphasany}

Vérifie si deux bitmaps ont au moins un élément en commun.

``` sql
bitmapHasAny(bitmap1, bitmap2)
```

Si vous êtes sûr que `bitmap2` contient strictement un seul élément, envisagez d'utiliser la fonction [bitmapContains](#bitmap_functions-bitmapcontains). Elle fonctionne plus efficacement.

**Paramètre**

- `bitmap*` – bitmap object.

**Les valeurs de retour**

- `1`, si `bitmap1` et `bitmap2` ont au moins un élément en commun.
- `0`, autrement.

**Exemple**

``` sql
SELECT bitmapHasAny(bitmapBuild([1,2,3]),bitmapBuild([3,4,5])) AS res
```

``` text
┌─res─┐
│   1 │
└─────┘
```

## bitmapHasAll {#bitmaphasall}

Analogue à `hasAll(array, array)`: renvoie 1 si le premier bitmap contient tous les éléments du second, 0 sinon.
Si le deuxième argument est un bitmap vide, renvoie 1.

``` sql
bitmapHasAll(bitmap,bitmap)
```

**Paramètre**

- `bitmap` – bitmap object.

**Exemple**

``` sql
SELECT bitmapHasAll(bitmapBuild([1,2,3]),bitmapBuild([3,4,5])) AS res
```

``` text
┌─res─┐
│   0 │
└─────┘
```

## bitmapCardinality {#bitmapcardinality}

Retourne la cardinalité du bitmap, de type UInt64.

``` sql
bitmapCardinality(bitmap)
```

**Paramètre**

- `bitmap` – bitmap object.

**Exemple**

``` sql
SELECT bitmapCardinality(bitmapBuild([1, 2, 3, 4, 5])) AS res
```

``` text
┌─res─┐
│   5 │
└─────┘
```

## bitmapMin {#bitmapmin}

Retourne la plus petite valeur de type UInt64 de l'ensemble, ou UINT32_MAX si l'ensemble est vide.

    bitmapMin(bitmap)

**Paramètre**

- `bitmap` – bitmap object.

**Exemple**

``` sql
SELECT bitmapMin(bitmapBuild([1, 2, 3, 4, 5])) AS res
```

    ┌─res─┐
    │   1 │
    └─────┘

## bitmapMax {#bitmapmax}

Retourne la plus grande valeur de type UInt64 de l'ensemble, ou 0 si l'ensemble est vide.

    bitmapMax(bitmap)

**Paramètre**

- `bitmap` – bitmap object.

**Exemple**

``` sql
SELECT bitmapMax(bitmapBuild([1, 2, 3, 4, 5])) AS res
```

    ┌─res─┐
    │   5 │
    └─────┘

## bitmapTransform {#bitmaptransform}

Transforme un tableau de valeurs d'un bitmap en un autre tableau de valeurs; le résultat est un nouveau bitmap.

    bitmapTransform(bitmap, from_array, to_array)

**Paramètre**

- `bitmap` – bitmap object.
- `from_array` – UInt32 array. For idx in range \[0, from_array.size()), if bitmap contains from_array\[idx\], then replace it with to_array\[idx\]. Note that the result depends on array ordering if there are common elements between from_array and to_array.
- `to_array` – UInt32 array, its size shall be the same as from_array.

**Exemple**

``` sql
SELECT bitmapToArray(bitmapTransform(bitmapBuild([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]), cast([5,999,2] as Array(UInt32)), cast([2,888,20] as Array(UInt32)))) AS res
```

    ┌─res───────────────────┐
    │ [1,3,4,6,7,8,9,10,20] │
    └───────────────────────┘

## bitmapAnd {#bitmapand}

Calcule le ET (AND) de deux bitmaps; le résultat est un nouveau bitmap.

``` sql
bitmapAnd(bitmap,bitmap)
```

**Paramètre**

- `bitmap` – bitmap object.

**Exemple**

``` sql
SELECT bitmapToArray(bitmapAnd(bitmapBuild([1,2,3]),bitmapBuild([3,4,5]))) AS res
```

``` text
┌─res─┐
│ [3] │
└─────┘
```

## bitmapOr {#bitmapor}

Calcule le OU (OR) de deux bitmaps; le résultat est un nouveau bitmap.
``` sql
bitmapOr(bitmap,bitmap)
```

**Paramètre**

- `bitmap` – bitmap object.

**Exemple**

``` sql
SELECT bitmapToArray(bitmapOr(bitmapBuild([1,2,3]),bitmapBuild([3,4,5]))) AS res
```

``` text
┌─res─────────┐
│ [1,2,3,4,5] │
└─────────────┘
```

## bitmapXor {#bitmapxor}

Calcule le OU exclusif (XOR) de deux bitmaps; le résultat est un nouveau bitmap.

``` sql
bitmapXor(bitmap,bitmap)
```

**Paramètre**

- `bitmap` – bitmap object.

**Exemple**

``` sql
SELECT bitmapToArray(bitmapXor(bitmapBuild([1,2,3]),bitmapBuild([3,4,5]))) AS res
```

``` text
┌─res───────┐
│ [1,2,4,5] │
└───────────┘
```

## bitmapAndnot {#bitmapandnot}

Calcule le ET-NON (AND-NOT) de deux bitmaps; le résultat est un nouveau bitmap.

``` sql
bitmapAndnot(bitmap,bitmap)
```

**Paramètre**

- `bitmap` – bitmap object.

**Exemple**

``` sql
SELECT bitmapToArray(bitmapAndnot(bitmapBuild([1,2,3]),bitmapBuild([3,4,5]))) AS res
```

``` text
┌─res───┐
│ [1,2] │
└───────┘
```

## bitmapAndCardinality {#bitmapandcardinality}

Calcule le ET (AND) de deux bitmaps et retourne la cardinalité, de type UInt64.

``` sql
bitmapAndCardinality(bitmap,bitmap)
```

**Paramètre**

- `bitmap` – bitmap object.

**Exemple**

``` sql
SELECT bitmapAndCardinality(bitmapBuild([1,2,3]),bitmapBuild([3,4,5])) AS res;
```

``` text
┌─res─┐
│   1 │
└─────┘
```

## bitmapOrCardinality {#bitmaporcardinality}

Calcule le OU (OR) de deux bitmaps et retourne la cardinalité, de type UInt64.

``` sql
bitmapOrCardinality(bitmap,bitmap)
```

**Paramètre**

- `bitmap` – bitmap object.

**Exemple**

``` sql
SELECT bitmapOrCardinality(bitmapBuild([1,2,3]),bitmapBuild([3,4,5])) AS res;
```

``` text
┌─res─┐
│   5 │
└─────┘
```

## bitmapXorCardinality {#bitmapxorcardinality}

Calcule le OU exclusif (XOR) de deux bitmaps et retourne la cardinalité, de type UInt64.

``` sql
bitmapXorCardinality(bitmap,bitmap)
```

**Paramètre**

- `bitmap` – bitmap object.

**Exemple**

``` sql
SELECT bitmapXorCardinality(bitmapBuild([1,2,3]),bitmapBuild([3,4,5])) AS res;
```

``` text
┌─res─┐
│   4 │
└─────┘
```

## bitmapAndnotCardinality {#bitmapandnotcardinality}

Calcule le ET-NON (AND-NOT) de deux bitmaps et retourne la cardinalité, de type UInt64.

``` sql
bitmapAndnotCardinality(bitmap,bitmap)
```

**Paramètre**

- `bitmap` – bitmap object.

**Exemple**

``` sql
SELECT bitmapAndnotCardinality(bitmapBuild([1,2,3]),bitmapBuild([3,4,5])) AS res;
```

``` text
┌─res─┐
│   2 │
└─────┘
```

[Article Original](https://clickhouse.tech/docs/en/query_language/functions/bitmap_functions/)

diff --git a/docs/fr/sql-reference/functions/comparison-functions.md b/docs/fr/sql-reference/functions/comparison-functions.md
deleted file mode 100644
index a5008c676fa..00000000000
--- a/docs/fr/sql-reference/functions/comparison-functions.md
+++ /dev/null
@@ -1,37 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_priority: 36
toc_title: Comparaison
---

# Fonctions De Comparaison {#comparison-functions}

Les fonctions de comparaison renvoient toujours 0 ou 1 (UInt8).

Les types suivants peuvent être comparés:

- nombres
- chaînes de caractères et chaînes fixes
- dates
- dates avec heure

au sein de chaque groupe, mais pas entre groupes différents.

Par exemple, vous ne pouvez pas comparer une date avec une chaîne. Vous devez utiliser une fonction pour convertir la chaîne en date, ou vice versa.

Les chaînes sont comparées par octets.
Une courte chaîne est plus petite que toutes les chaînes qui commencent par elle et qui contiennent au moins un caractère de plus. - -## égal, A = B et a = = b opérateur {#function-equals} - -## notEquals, a ! opérateur= b et a \<\> b {#function-notequals} - -## moins, opérateur \< {#function-less} - -## de plus, \> opérateur {#function-greater} - -## lessOrEquals, \< = opérateur {#function-lessorequals} - -## greaterOrEquals, \> = opérateur {#function-greaterorequals} - -[Article Original](https://clickhouse.tech/docs/en/query_language/functions/comparison_functions/) diff --git a/docs/fr/sql-reference/functions/conditional-functions.md b/docs/fr/sql-reference/functions/conditional-functions.md deleted file mode 100644 index 3912b49aa6a..00000000000 --- a/docs/fr/sql-reference/functions/conditional-functions.md +++ /dev/null @@ -1,207 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 43 -toc_title: 'Conditionnel ' ---- - -# Fonctions Conditionnelles {#conditional-functions} - -## si {#if} - -Contrôle la ramification conditionnelle. Contrairement à la plupart des systèmes, ClickHouse évalue toujours les deux expressions `then` et `else`. - -**Syntaxe** - -``` sql -SELECT if(cond, then, else) -``` - -Si la condition `cond` renvoie une valeur non nulle, retourne le résultat de l'expression `then` et le résultat de l'expression `else`, si présent, est ignoré. Si l' `cond` est égal à zéro ou `NULL` alors le résultat de la `then` l'expression est ignorée et le résultat de `else` expression, si elle est présente, est renvoyée. - -**Paramètre** - -- `cond` – The condition for evaluation that can be zero or not. The type is UInt8, Nullable(UInt8) or NULL. -- `then` - L'expression à renvoyer si la condition est remplie. -- `else` - L'expression à renvoyer si la condition n'est pas remplie. - -**Valeurs renvoyées** - -La fonction s'exécute `then` et `else` expressions et retourne son résultat, selon que la condition `cond` fini par être zéro ou pas. - -**Exemple** - -Requête: - -``` sql -SELECT if(1, plus(2, 2), plus(2, 6)) -``` - -Résultat: - -``` text -┌─plus(2, 2)─┐ -│ 4 │ -└────────────┘ -``` - -Requête: - -``` sql -SELECT if(0, plus(2, 2), plus(2, 6)) -``` - -Résultat: - -``` text -┌─plus(2, 6)─┐ -│ 8 │ -└────────────┘ -``` - -- `then` et `else` doit avoir le type commun le plus bas. - -**Exemple:** - -Prendre cette `LEFT_RIGHT` table: - -``` sql -SELECT * -FROM LEFT_RIGHT - -┌─left─┬─right─┐ -│ ᴺᵁᴸᴸ │ 4 │ -│ 1 │ 3 │ -│ 2 │ 2 │ -│ 3 │ 1 │ -│ 4 │ ᴺᵁᴸᴸ │ -└──────┴───────┘ -``` - -La requête suivante compare `left` et `right` valeur: - -``` sql -SELECT - left, - right, - if(left < right, 'left is smaller than right', 'right is greater or equal than left') AS is_smaller -FROM LEFT_RIGHT -WHERE isNotNull(left) AND isNotNull(right) - -┌─left─┬─right─┬─is_smaller──────────────────────────┐ -│ 1 │ 3 │ left is smaller than right │ -│ 2 │ 2 │ right is greater or equal than left │ -│ 3 │ 1 │ right is greater or equal than left │ -└──────┴───────┴─────────────────────────────────────┘ -``` - -Note: `NULL` les valeurs ne sont pas utilisés dans cet exemple, vérifier [Valeurs nulles dans les conditions](#null-values-in-conditionals) section. - -## Opérateur Ternaire {#ternary-operator} - -Il fonctionne même comme `if` fonction. - -Syntaxe: `cond ? then : else` - -Retourner `then` si l' `cond` renvoie la valeur vrai (supérieur à zéro), sinon renvoie `else`. 
- -- `cond` doit être de type de `UInt8`, et `then` et `else` doit avoir le type commun le plus bas. - -- `then` et `else` peut être `NULL` - -**Voir aussi** - -- [ifNotFinite](other-functions.md#ifnotfinite). - -## multiIf {#multiif} - -Permet d'écrire le [CASE](../operators/index.md#operator_case) opérateur plus compacte dans la requête. - -Syntaxe: `multiIf(cond_1, then_1, cond_2, then_2, ..., else)` - -**Paramètre:** - -- `cond_N` — The condition for the function to return `then_N`. -- `then_N` — The result of the function when executed. -- `else` — The result of the function if none of the conditions is met. - -La fonction accepte `2N+1` paramètre. - -**Valeurs renvoyées** - -La fonction renvoie l'une des valeurs `then_N` ou `else` selon les conditions `cond_N`. - -**Exemple** - -En utilisant à nouveau `LEFT_RIGHT` table. - -``` sql -SELECT - left, - right, - multiIf(left < right, 'left is smaller', left > right, 'left is greater', left = right, 'Both equal', 'Null value') AS result -FROM LEFT_RIGHT - -┌─left─┬─right─┬─result──────────┐ -│ ᴺᵁᴸᴸ │ 4 │ Null value │ -│ 1 │ 3 │ left is smaller │ -│ 2 │ 2 │ Both equal │ -│ 3 │ 1 │ left is greater │ -│ 4 │ ᴺᵁᴸᴸ │ Null value │ -└──────┴───────┴─────────────────┘ -``` - -## Utilisation Directe Des Résultats Conditionnels {#using-conditional-results-directly} - -Les conditions entraînent toujours `0`, `1` ou `NULL`. Vous pouvez donc utiliser des résultats conditionnels directement comme ceci: - -``` sql -SELECT left < right AS is_small -FROM LEFT_RIGHT - -┌─is_small─┐ -│ ᴺᵁᴸᴸ │ -│ 1 │ -│ 0 │ -│ 0 │ -│ ᴺᵁᴸᴸ │ -└──────────┘ -``` - -## Valeurs nulles dans les conditions {#null-values-in-conditionals} - -Lorsque `NULL` les valeurs sont impliqués dans des conditions, le résultat sera également `NULL`. - -``` sql -SELECT - NULL < 1, - 2 < NULL, - NULL < NULL, - NULL = NULL - -┌─less(NULL, 1)─┬─less(2, NULL)─┬─less(NULL, NULL)─┬─equals(NULL, NULL)─┐ -│ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ -└───────────────┴───────────────┴──────────────────┴────────────────────┘ -``` - -Donc, vous devriez construire vos requêtes avec soin si les types sont `Nullable`. - -L'exemple suivant le démontre en omettant d'ajouter la condition égale à `multiIf`. - -``` sql -SELECT - left, - right, - multiIf(left < right, 'left is smaller', left > right, 'right is smaller', 'Both equal') AS faulty_result -FROM LEFT_RIGHT - -┌─left─┬─right─┬─faulty_result────┐ -│ ᴺᵁᴸᴸ │ 4 │ Both equal │ -│ 1 │ 3 │ left is smaller │ -│ 2 │ 2 │ Both equal │ -│ 3 │ 1 │ right is smaller │ -│ 4 │ ᴺᵁᴸᴸ │ Both equal │ -└──────┴───────┴──────────────────┘ -``` - -[Article Original](https://clickhouse.tech/docs/en/query_language/functions/conditional_functions/) diff --git a/docs/fr/sql-reference/functions/date-time-functions.md b/docs/fr/sql-reference/functions/date-time-functions.md deleted file mode 100644 index d1c16b42d07..00000000000 --- a/docs/fr/sql-reference/functions/date-time-functions.md +++ /dev/null @@ -1,450 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 39 -toc_title: Travailler avec les Dates et les heures ---- - -# Fonctions pour travailler avec des Dates et des heures {#functions-for-working-with-dates-and-times} - -Support des fuseaux horaires - -Toutes les fonctions pour travailler avec la date et l'heure qui ont une logique d'utilisation pour le fuseau horaire peut accepter un second fuseau horaire argument. Exemple: Asie / Ekaterinbourg. 
Dans ce cas, ils utilisent le fuseau horaire spécifié au lieu du fuseau horaire local (par défaut). - -``` sql -SELECT - toDateTime('2016-06-15 23:00:00') AS time, - toDate(time) AS date_local, - toDate(time, 'Asia/Yekaterinburg') AS date_yekat, - toString(time, 'US/Samoa') AS time_samoa -``` - -``` text -┌────────────────time─┬─date_local─┬─date_yekat─┬─time_samoa──────────┐ -│ 2016-06-15 23:00:00 │ 2016-06-15 │ 2016-06-16 │ 2016-06-15 09:00:00 │ -└─────────────────────┴────────────┴────────────┴─────────────────────┘ -``` - -Seuls les fuseaux horaires qui diffèrent de L'UTC par un nombre entier d'heures sont pris en charge. - -## toTimeZone {#totimezone} - -Convertir l'heure ou la date et de l'heure au fuseau horaire spécifié. - -## toYear {#toyear} - -Convertit une date ou une date avec l'heure en un numéro UInt16 contenant le numéro d'année (AD). - -## toQuarter {#toquarter} - -Convertit une date ou une date avec l'heure en un numéro UInt8 contenant le numéro de trimestre. - -## toMonth {#tomonth} - -Convertit une date ou une date avec l'heure en un numéro UInt8 contenant le numéro de mois (1-12). - -## toDayOfYear {#todayofyear} - -Convertit une date ou une date avec l'heure en un numéro UInt16 contenant le numéro du jour de l'année (1-366). - -## toDayOfMonth {#todayofmonth} - -Convertit une date ou une date avec le temps à un UInt8 contenant le numéro du jour du mois (1-31). - -## toDayOfWeek {#todayofweek} - -Convertit une date ou une date avec l'heure en un numéro UInt8 contenant le numéro du jour de la semaine (lundi est 1, et dimanche est 7). - -## toHour {#tohour} - -Convertit une date avec l'heure en un nombre UInt8 contenant le numéro de l'heure dans l'Heure de 24 heures (0-23). -This function assumes that if clocks are moved ahead, it is by one hour and occurs at 2 a.m., and if clocks are moved back, it is by one hour and occurs at 3 a.m. (which is not always true – even in Moscow the clocks were twice changed at a different time). - -## toMinute {#tominute} - -Convertit une date avec l'heure en un numéro UInt8 contenant le numéro de la minute de l'heure (0-59). - -## toseconde {#tosecond} - -Convertit une date avec l'heure en un nombre UInt8 contenant le numéro de la seconde dans la minute (0-59). -Les secondes intercalaires ne sont pas comptabilisés. - -## toUnixTimestamp {#to-unix-timestamp} - -Pour L'argument DateTime: convertit la valeur en sa représentation numérique interne (horodatage Unix). -For String argument: analyse datetime from string en fonction du fuseau horaire (second argument optionnel, le fuseau horaire du serveur est utilisé par défaut) et renvoie l'horodatage unix correspondant. -Pour L'argument Date: le comportement n'est pas spécifié. - -**Syntaxe** - -``` sql -toUnixTimestamp(datetime) -toUnixTimestamp(str, [timezone]) -``` - -**Valeur renvoyée** - -- Renvoie l'horodatage unix. - -Type: `UInt32`. - -**Exemple** - -Requête: - -``` sql -SELECT toUnixTimestamp('2017-11-05 08:07:47', 'Asia/Tokyo') AS unix_timestamp -``` - -Résultat: - -``` text -┌─unix_timestamp─┐ -│ 1509836867 │ -└────────────────┘ -``` - -## toStartOfYear {#tostartofyear} - -Arrondit une date ou une date avec l'heure jusqu'au premier jour de l'année. -Renvoie la date. - -## toStartOfISOYear {#tostartofisoyear} - -Arrondit une date ou une date avec l'heure jusqu'au premier jour de L'année ISO. -Renvoie la date. - -## toStartOfQuarter {#tostartofquarter} - -Arrondit une date ou une date avec l'heure jusqu'au premier jour du trimestre. 
-Le premier jour du trimestre, soit le 1er janvier, 1er avril, 1er juillet ou 1er octobre. -Renvoie la date. - -## toStartOfMonth {#tostartofmonth} - -Arrondit une date ou une date avec l'heure jusqu'au premier jour du mois. -Renvoie la date. - -!!! attention "Attention" - Le comportement de l'analyse des dates incorrectes est spécifique à l'implémentation. ClickHouse peut renvoyer la date zéro, lancer une exception ou faire “natural” débordement. - -## toMonday {#tomonday} - -Arrondit une date ou une date avec l'heure au lundi le plus proche. -Renvoie la date. - -## toStartOfWeek (t \[, mode\]) {#tostartofweektmode} - -Arrondit une date ou une date avec l'heure au dimanche ou au lundi le plus proche par mode. -Renvoie la date. -L'argument mode fonctionne exactement comme l'argument mode de toWeek(). Pour la syntaxe à argument unique, une valeur de mode de 0 est utilisée. - -## toStartOfDay {#tostartofday} - -Arrondit une date avec le temps au début de la journée. - -## toStartOfHour {#tostartofhour} - -Arrondit une date avec le temps au début de l " heure. - -## toStartOfMinute {#tostartofminute} - -Arrondit une date avec le temps au début de la minute. - -## toStartOfFiveMinute {#tostartoffiveminute} - -Arrondit à une date avec l'heure de début de l'intervalle de cinq minutes. - -## toStartOfTenMinutes {#tostartoftenminutes} - -Arrondit une date avec le temps au début de l " intervalle de dix minutes. - -## toStartOfFifteenMinutes {#tostartoffifteenminutes} - -Arrondit la date avec le temps jusqu'au début de l'intervalle de quinze minutes. - -## toStartOfInterval(time_or_data, intervalle x Unité \[, time_zone\]) {#tostartofintervaltime-or-data-interval-x-unit-time-zone} - -Ceci est une généralisation d'autres fonctions nommées `toStartOf*`. Exemple, -`toStartOfInterval(t, INTERVAL 1 year)` renvoie la même chose que `toStartOfYear(t)`, -`toStartOfInterval(t, INTERVAL 1 month)` renvoie la même chose que `toStartOfMonth(t)`, -`toStartOfInterval(t, INTERVAL 1 day)` renvoie la même chose que `toStartOfDay(t)`, -`toStartOfInterval(t, INTERVAL 15 minute)` renvoie la même chose que `toStartOfFifteenMinutes(t)` etc. - -## toTime {#totime} - -Convertit une date avec l'heure en une certaine date fixe, tout en préservant l'heure. - -## toRelativeYearNum {#torelativeyearnum} - -Convertit une date avec l'heure ou la date, le numéro de l'année, à partir d'un certain point fixe dans le passé. - -## toRelativeQuarterNum {#torelativequarternum} - -Convertit une date avec l'heure ou la date au numéro du trimestre, à partir d'un certain point fixe dans le passé. - -## toRelativeMonthNum {#torelativemonthnum} - -Convertit une date avec l'heure ou la date au numéro du mois, à partir d'un certain point fixe dans le passé. - -## toRelativeWeekNum {#torelativeweeknum} - -Convertit une date avec l'heure ou la date, le numéro de la semaine, à partir d'un certain point fixe dans le passé. - -## toRelativeDayNum {#torelativedaynum} - -Convertit une date avec l'heure ou la date au numéro du jour, à partir d'un certain point fixe dans le passé. - -## toRelativeHourNum {#torelativehournum} - -Convertit une date avec l'heure ou la date au nombre de l'heure, à partir d'un certain point fixe dans le passé. - -## toRelativeMinuteNum {#torelativeminutenum} - -Convertit une date avec l'heure ou la date au numéro de la minute, à partir d'un certain point fixe dans le passé. 
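The following query is an added sketch (not part of the original page) showing the typical use of the `toRelative*Num` family: because both values are counted from the same fixed point in the past, subtracting them yields the distance between two dates or date-times:

``` sql
SELECT
    toRelativeDayNum(toDate('2019-01-10')) - toRelativeDayNum(toDate('2019-01-01')) AS days_apart,
    toRelativeHourNum(toDateTime('2019-01-02 12:00:00')) - toRelativeHourNum(toDateTime('2019-01-01 12:00:00')) AS hours_apart
```

``` text
┌─days_apart─┬─hours_apart─┐
│          9 │          24 │
└────────────┴─────────────┘
```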
- -## toRelativeSecondNum {#torelativesecondnum} - -Convertit une date avec l'heure ou la date au numéro de la seconde, à partir d'un certain point fixe dans le passé. - -## toISOYear {#toisoyear} - -Convertit une date ou une date avec l'heure en un numéro UInt16 contenant le numéro D'année ISO. - -## toISOWeek {#toisoweek} - -Convertit une date ou une date avec l'heure en un numéro UInt8 contenant le numéro de semaine ISO. - -## toWeek (date \[, mode\]) {#toweekdatemode} - -Cette fonction renvoie le numéro de semaine pour date ou datetime. La forme à deux arguments de toWeek() vous permet de spécifier si la semaine commence le dimanche ou le lundi et si la valeur de retour doit être comprise entre 0 et 53 ou entre 1 et 53. Si l'argument mode est omis, le mode par défaut est 0. -`toISOWeek()`est une fonction de compatibilité équivalente à `toWeek(date,3)`. -Le tableau suivant décrit le fonctionnement de l'argument mode. - -| Mode | Premier jour de la semaine | Gamme | Week 1 is the first week … | -|------|----------------------------|-------|----------------------------------| -| 0 | Dimanche | 0-53 | avec un dimanche cette année | -| 1 | Lundi | 0-53 | avec 4 jours ou plus cette année | -| 2 | Dimanche | 1-53 | avec un dimanche cette année | -| 3 | Lundi | 1-53 | avec 4 jours ou plus cette année | -| 4 | Dimanche | 0-53 | avec 4 jours ou plus cette année | -| 5 | Lundi | 0-53 | avec un lundi cette année | -| 6 | Dimanche | 1-53 | avec 4 jours ou plus cette année | -| 7 | Lundi | 1-53 | avec un lundi cette année | -| 8 | Dimanche | 1-53 | contient Janvier 1 | -| 9 | Lundi | 1-53 | contient Janvier 1 | - -Pour les valeurs de mode avec une signification de “with 4 or more days this year,” les semaines sont numérotées selon ISO 8601: 1988: - -- Si la semaine contenant Janvier 1 A 4 jours ou plus dans la nouvelle année, il est Semaine 1. - -- Sinon, c'est la dernière semaine de l'année précédente, et la semaine prochaine est la semaine 1. - -Pour les valeurs de mode avec une signification de “contains January 1”, la semaine contient Janvier 1 est Semaine 1. Peu importe combien de jours dans la nouvelle année la semaine contenait, même si elle contenait seulement un jour. - -``` sql -toWeek(date, [, mode][, Timezone]) -``` - -**Paramètre** - -- `date` – Date or DateTime. -- `mode` – Optional parameter, Range of values is \[0,9\], default is 0. -- `Timezone` – Optional parameter, it behaves like any other conversion function. - -**Exemple** - -``` sql -SELECT toDate('2016-12-27') AS date, toWeek(date) AS week0, toWeek(date,1) AS week1, toWeek(date,9) AS week9; -``` - -``` text -┌───────date─┬─week0─┬─week1─┬─week9─┐ -│ 2016-12-27 │ 52 │ 52 │ 1 │ -└────────────┴───────┴───────┴───────┘ -``` - -## toYearWeek (date \[, mode\]) {#toyearweekdatemode} - -Retourne l'année et la semaine pour une date. L'année dans le résultat peut être différente de l'année dans l'argument date pour la première et la dernière semaine de l'année. - -L'argument mode fonctionne exactement comme l'argument mode de toWeek(). Pour la syntaxe à argument unique, une valeur de mode de 0 est utilisée. - -`toISOYear()`est une fonction de compatibilité équivalente à `intDiv(toYearWeek(date,3),100)`. 
- -**Exemple** - -``` sql -SELECT toDate('2016-12-27') AS date, toYearWeek(date) AS yearWeek0, toYearWeek(date,1) AS yearWeek1, toYearWeek(date,9) AS yearWeek9; -``` - -``` text -┌───────date─┬─yearWeek0─┬─yearWeek1─┬─yearWeek9─┐ -│ 2016-12-27 │ 201652 │ 201652 │ 201701 │ -└────────────┴───────────┴───────────┴───────────┘ -``` - -## maintenant {#now} - -Accepte zéro argument et renvoie l'heure actuelle à l'un des moments de l'exécution de la requête. -Cette fonction renvoie une constante, même si la requête a pris beaucoup de temps à compléter. - -## aujourd' {#today} - -Accepte zéro argument et renvoie la date actuelle à l'un des moments de l'exécution de la requête. -Le même que ‘toDate(now())’. - -## hier {#yesterday} - -Accepte zéro argument et renvoie la date d'hier à l'un des moments de l'exécution de la requête. -Le même que ‘today() - 1’. - -## l'horaire de diffusion {#timeslot} - -Arrondit le temps à la demi-heure. -Cette fonction est spécifique à Yandex.Metrica, car une demi-heure est le temps minimum pour diviser une session en deux sessions si une balise de suivi affiche les pages vues consécutives d'un seul utilisateur qui diffèrent dans le temps de strictement plus que ce montant. Cela signifie que les tuples (l'ID de balise, l'ID utilisateur et l'intervalle de temps) peuvent être utilisés pour rechercher les pages vues incluses dans la session correspondante. - -## toYYYYMM {#toyyyymm} - -Convertit une date ou une date avec l'heure en un numéro UInt32 contenant le numéro d'année et de mois (AAAA \* 100 + MM). - -## toYYYYMMDD {#toyyyymmdd} - -Convertit une date ou une date avec l'heure en un numéro UInt32 contenant le numéro d'année et de mois (AAAA \* 10000 + MM \* 100 + JJ). - -## toYYYYMMDDhhmmss {#toyyyymmddhhmmss} - -Convertit une date ou une date avec l'heure en un numéro UInt64 contenant le numéro d'année et de mois (AAAA \* 10000000000 + MM \* 100000000 + DD \* 1000000 + hh \* 10000 + mm \* 100 + ss). - -## addYears, addMonths, addWeeks, addDays, addHours, addMinutes, addSeconds, addQuarters {#addyears-addmonths-addweeks-adddays-addhours-addminutes-addseconds-addquarters} - -Fonction ajoute une date / DateTime intervalle à une Date / DateTime, puis retourner la Date / DateTime. Exemple: - -``` sql -WITH - toDate('2018-01-01') AS date, - toDateTime('2018-01-01 00:00:00') AS date_time -SELECT - addYears(date, 1) AS add_years_with_date, - addYears(date_time, 1) AS add_years_with_date_time -``` - -``` text -┌─add_years_with_date─┬─add_years_with_date_time─┐ -│ 2019-01-01 │ 2019-01-01 00:00:00 │ -└─────────────────────┴──────────────────────────┘ -``` - -## subtractYears, subtractMonths, subtractWeeks, subtractDays, subtractHours, subtractMinutes, subtractSeconds, subtractQuarters {#subtractyears-subtractmonths-subtractweeks-subtractdays-subtracthours-subtractminutes-subtractseconds-subtractquarters} - -Fonction soustrayez un intervalle de Date / DateTime à une Date / DateTime, puis renvoyez la Date / DateTime. Exemple: - -``` sql -WITH - toDate('2019-01-01') AS date, - toDateTime('2019-01-01 00:00:00') AS date_time -SELECT - subtractYears(date, 1) AS subtract_years_with_date, - subtractYears(date_time, 1) AS subtract_years_with_date_time -``` - -``` text -┌─subtract_years_with_date─┬─subtract_years_with_date_time─┐ -│ 2018-01-01 │ 2018-01-01 00:00:00 │ -└──────────────────────────┴───────────────────────────────┘ -``` - -## dateDiff {#datediff} - -Renvoie la différence entre deux valeurs Date ou DateTime. 
- -**Syntaxe** - -``` sql -dateDiff('unit', startdate, enddate, [timezone]) -``` - -**Paramètre** - -- `unit` — Time unit, in which the returned value is expressed. [Chaîne](../syntax.md#syntax-string-literal). - - Supported values: - - | unit | - | ---- | - |second | - |minute | - |hour | - |day | - |week | - |month | - |quarter | - |year | - -- `startdate` — The first time value to compare. [Date](../../sql-reference/data-types/date.md) ou [DateTime](../../sql-reference/data-types/datetime.md). - -- `enddate` — The second time value to compare. [Date](../../sql-reference/data-types/date.md) ou [DateTime](../../sql-reference/data-types/datetime.md). - -- `timezone` — Optional parameter. If specified, it is applied to both `startdate` et `enddate`. Si non spécifié, fuseaux horaires de l' `startdate` et `enddate` sont utilisés. Si elles ne sont pas identiques, le résultat n'est pas spécifié. - -**Valeur renvoyée** - -Différence entre `startdate` et `enddate` exprimé en `unit`. - -Type: `int`. - -**Exemple** - -Requête: - -``` sql -SELECT dateDiff('hour', toDateTime('2018-01-01 22:00:00'), toDateTime('2018-01-02 23:00:00')); -``` - -Résultat: - -``` text -┌─dateDiff('hour', toDateTime('2018-01-01 22:00:00'), toDateTime('2018-01-02 23:00:00'))─┐ -│ 25 │ -└────────────────────────────────────────────────────────────────────────────────────────┘ -``` - -## intervalle de temps (StartTime, Duration, \[, Size\]) {#timeslotsstarttime-duration-size} - -Pour un intervalle de temps commençant à ‘StartTime’ et de poursuivre pour ‘Duration’ secondes, il renvoie un tableau de moments dans le temps, composé de points de cet intervalle arrondis vers le bas à la ‘Size’ en quelques secondes. ‘Size’ est un paramètre optionnel: une constante UInt32, définie sur 1800 par défaut. -Exemple, `timeSlots(toDateTime('2012-01-01 12:20:00'), 600) = [toDateTime('2012-01-01 12:00:00'), toDateTime('2012-01-01 12:30:00')]`. -Ceci est nécessaire pour rechercher les pages vues dans la session correspondante. - -## formatDateTime(Heure, Format \[, fuseau horaire\]) {#formatdatetime} - -Function formats a Time according given Format string. N.B.: Format is a constant expression, e.g. you can not have multiple formats for single result column. 
Supported modifiers for Format:
(the “Example” column shows the formatting result for the time `2018-01-02 22:33:44`)

| Modifier | Description | Example |
|----------|-------------|---------|
| %C | year divided by 100 and truncated to integer (00-99) | 20 |
| %d | day of the month, zero-padded (01-31) | 02 |
| %D | Short MM/DD/YY date, equivalent to %m/%d/%y | 01/02/18 |
| %e | day of the month, space-padded ( 1-31) | 2 |
| %F | short YYYY-MM-DD date, equivalent to %Y-%m-%d | 2018-01-02 |
| %H | hour in 24h format (00-23) | 22 |
| %I | hour in 12h format (01-12) | 10 |
| %j | day of the year (001-366) | 002 |
| %m | month as a decimal number (01-12) | 01 |
| %M | minute (00-59) | 33 |
| %n | new-line character ('\n') | |
| %p | AM or PM designation | PM |
| %R | 24-hour HH:MM time, equivalent to %H:%M | 22:33 |
| %S | second (00-59) | 44 |
| %t | horizontal-tab character ('\t') | |
| %T | ISO 8601 time format (HH:MM:SS), equivalent to %H:%M:%S | 22:33:44 |
| %u | ISO 8601 weekday as number with Monday as 1 (1-7) | 2 |
| %V | ISO 8601 week number (01-53) | 01 |
| %w | weekday as a decimal number with Sunday as 0 (0-6) | 2 |
| %y | Year, last two digits (00-99) | 18 |
| %Y | Year | 2018 |
| %% | a % sign | % |

[Original article](https://clickhouse.tech/docs/en/query_language/functions/date_time_functions/)

diff --git a/docs/fr/sql-reference/functions/encoding-functions.md b/docs/fr/sql-reference/functions/encoding-functions.md
deleted file mode 100644
index 6c99ed4f32e..00000000000
--- a/docs/fr/sql-reference/functions/encoding-functions.md
+++ /dev/null
@@ -1,175 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_priority: 52
toc_title: Encoding
---

# Encoding Functions {#encoding-functions}

## char {#char}

Returns a string with length equal to the number of arguments passed, where each byte has the value of the corresponding argument. Accepts multiple arguments of numeric types. If the value of an argument is out of range of the UInt8 data type, it is converted to UInt8 with possible rounding and overflow.

**Syntax**

``` sql
char(number_1, [number_2, ..., number_n]);
```

**Parameters**

- `number_1, number_2, ..., number_n` — Numerical arguments interpreted as integers. Types: [Int](../../sql-reference/data-types/int-uint.md), [Float](../../sql-reference/data-types/float.md).

**Returned value**

- A string of the given bytes.

Type: `String`.

**Example**

Query:

``` sql
SELECT char(104.1, 101, 108.9, 108.9, 111) AS hello
```

Result:

``` text
┌─hello─┐
│ hello │
└───────┘
```

You can construct a string of arbitrary encoding by passing the corresponding bytes. Here is an example for UTF-8:

Query:

``` sql
SELECT char(0xD0, 0xBF, 0xD1, 0x80, 0xD0, 0xB8, 0xD0, 0xB2, 0xD0, 0xB5, 0xD1, 0x82) AS hello;
```

Result:

``` text
┌─hello──┐
│ привет │
└────────┘
```

Query:

``` sql
SELECT char(0xE4, 0xBD, 0xA0, 0xE5, 0xA5, 0xBD) AS hello;
```

Result:

``` text
┌─hello─┐
│ 你好  │
└───────┘
```

## hex {#hex}

Returns a string containing the argument's hexadecimal representation.
**Syntax**

``` sql
hex(arg)
```

The function uses uppercase letters `A-F` and does not use any prefixes (like `0x`) or suffixes (like `h`).

For integer arguments, it prints hex digits (“nibbles”) from the most significant to the least significant (big endian, or “human readable”, order). It starts with the most significant non-zero byte (leading zero bytes are omitted) but always prints both digits of every byte even if the leading digit is zero.

**Example**

Query:

``` sql
SELECT hex(1);
```

Result:

``` text
01
```

Values of type `Date` and `DateTime` are formatted as the corresponding integers (the number of days since Epoch for Date and the value of the Unix timestamp for DateTime).

For `String` and `FixedString`, all bytes are simply encoded as two hexadecimal numbers. Zero bytes are not omitted.

Values of floating-point and Decimal types are encoded as their in-memory representation. As we support little-endian architecture, they are encoded in little endian. Leading/trailing zero bytes are not omitted.

**Parameters**

- `arg` — A value to convert to hexadecimal. Types: [String](../../sql-reference/data-types/string.md), [UInt](../../sql-reference/data-types/int-uint.md), [Float](../../sql-reference/data-types/float.md), [Decimal](../../sql-reference/data-types/decimal.md), [Date](../../sql-reference/data-types/date.md) or [DateTime](../../sql-reference/data-types/datetime.md).

**Returned value**

- A string with the hexadecimal representation of the argument.

Type: `String`.

**Example**

Query:

``` sql
SELECT hex(toFloat32(number)) as hex_presentation FROM numbers(15, 2);
```

Result:

``` text
┌─hex_presentation─┐
│ 00007041         │
│ 00008041         │
└──────────────────┘
```

Query:

``` sql
SELECT hex(toFloat64(number)) as hex_presentation FROM numbers(15, 2);
```

Result:

``` text
┌─hex_presentation─┐
│ 0000000000002E40 │
│ 0000000000003040 │
└──────────────────┘
```

## unhex(str) {#unhexstr}

Accepts a string containing any number of hexadecimal digits, and returns a string containing the corresponding bytes. Supports both uppercase and lowercase letters A-F. The number of hexadecimal digits does not have to be even. If it is odd, the last digit is interpreted as the least significant half of the 00-0F byte. If the argument string contains anything other than hexadecimal digits, some implementation-defined result is returned (an exception isn't thrown).
If you want to convert the result to a number, you can use the ‘reverse’ and ‘reinterpretAsType’ functions.

## UUIDStringToNum(str) {#uuidstringtonumstr}

Accepts a string containing 36 characters in the format `123e4567-e89b-12d3-a456-426655440000`, and returns it as a set of bytes in a FixedString(16).

## UUIDNumToString(str) {#uuidnumtostringstr}

Accepts a FixedString(16) value. Returns a string containing 36 characters in text format.

## bitmaskToList(num) {#bitmasktolistnum}

Accepts an integer. Returns a string containing the list of powers of two that total the source number when summed. They are comma-separated without spaces in text format, in ascending order.

## bitmaskToArray(num) {#bitmasktoarraynum}

Accepts an integer. Returns an array of UInt64 numbers containing the list of powers of two that total the source number when summed. Numbers in the array are in ascending order.
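A combined example for both bitmask functions (added here as a sketch; not part of the original page). Since 50 = 2 + 16 + 32:

``` sql
SELECT bitmaskToList(50) AS list, bitmaskToArray(50) AS array
```

``` text
┌─list────┬─array─────┐
│ 2,16,32 │ [2,16,32] │
└─────────┴───────────┘
```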
[Original article](https://clickhouse.tech/docs/en/query_language/functions/encoding_functions/)

diff --git a/docs/fr/sql-reference/functions/ext-dict-functions.md b/docs/fr/sql-reference/functions/ext-dict-functions.md
deleted file mode 100644
index 1cec307747d..00000000000
--- a/docs/fr/sql-reference/functions/ext-dict-functions.md
+++ /dev/null
@@ -1,205 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_priority: 58
toc_title: Working with external dictionaries
---

# Functions for Working with External Dictionaries {#ext_dict_functions}

For information on connecting and configuring external dictionaries, see [External dictionaries](../../sql-reference/dictionaries/external-dictionaries/external-dicts.md).

## dictGet {#dictget}

Retrieves a value from an external dictionary.

``` sql
dictGet('dict_name', 'attr_name', id_expr)
dictGetOrDefault('dict_name', 'attr_name', id_expr, default_value_expr)
```

**Parameters**

- `dict_name` — Name of the dictionary. [String literal](../syntax.md#syntax-string-literal).
- `attr_name` — Name of the column of the dictionary. [String literal](../syntax.md#syntax-string-literal).
- `id_expr` — Key value. [Expression](../syntax.md#syntax-expressions) returning a [UInt64](../../sql-reference/data-types/int-uint.md) or [Tuple](../../sql-reference/data-types/tuple.md)-type value depending on the dictionary configuration.
- `default_value_expr` — Value returned if the dictionary doesn't contain a row with the `id_expr` key. [Expression](../syntax.md#syntax-expressions) returning the value in the data type configured for the `attr_name` attribute.

**Returned value**

- If ClickHouse parses the attribute successfully in the [attribute's data type](../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-structure.md#ext_dict_structure-attributes), the functions return the value of the dictionary attribute that corresponds to `id_expr`.

- If there is no key corresponding to `id_expr` in the dictionary, then:

    - `dictGet` returns the content of the `<null_value>` element specified for the attribute in the dictionary configuration.
    - `dictGetOrDefault` returns the value passed as the `default_value_expr` parameter.

ClickHouse throws an exception if it cannot parse the value of the attribute or the value doesn't match the attribute data type.

**Example**

Create a text file `ext-dict-test.csv` containing the following:

``` text
1,1
2,2
```

The first column is `id`, the second column is `c1`.

Configure the external dictionary (the XML tags below are reconstructed from the visible values, since the markup was stripped from this copy of the page):

``` xml
<yandex>
    <dictionary>
        <name>ext-dict-test</name>
        <source>
            <file>
                <path>/path-to/ext-dict-test.csv</path>
                <format>CSV</format>
            </file>
        </source>
        <layout>
            <flat />
        </layout>
        <structure>
            <id>
                <name>id</name>
            </id>
            <attribute>
                <name>c1</name>
                <type>UInt32</type>
                <null_value></null_value>
            </attribute>
        </structure>
        <lifetime>0</lifetime>
    </dictionary>
</yandex>
```

Perform the query:

``` sql
SELECT
    dictGetOrDefault('ext-dict-test', 'c1', number + 1, toUInt32(number * 10)) AS val,
    toTypeName(val) AS type
FROM system.numbers
LIMIT 3
```

``` text
┌─val─┬─type───┐
│   1 │ UInt32 │
│   2 │ UInt32 │
│  20 │ UInt32 │
└─────┴────────┘
```

**See Also**

- [External Dictionaries](../../sql-reference/dictionaries/external-dictionaries/external-dicts.md)

## dictHas {#dicthas}

Checks whether a key is present in a dictionary.

``` sql
dictHas('dict_name', id_expr)
```

**Parameters**

- `dict_name` — Name of the dictionary. [String literal](../syntax.md#syntax-string-literal).
- `id_expr` — Key value. [Expression](../syntax.md#syntax-expressions) returning a [UInt64](../../sql-reference/data-types/int-uint.md)-type value.

**Returned value**

- 0, if there is no key.
- 1, if there is a key.

Type: `UInt8`.
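As an added sketch reusing the `ext-dict-test` dictionary configured above (the output assumes that dictionary is loaded; keys 1 and 2 exist in the CSV, key 10 does not):

``` sql
SELECT
    dictHas('ext-dict-test', toUInt64(1)) AS key_1,
    dictHas('ext-dict-test', toUInt64(10)) AS key_10
```

``` text
┌─key_1─┬─key_10─┐
│     1 │      0 │
└───────┴────────┘
```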
## dictGetHierarchy {#dictgethierarchy}

Creates an array containing all parents of a key in the [hierarchical dictionary](../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-hierarchical.md).

**Syntax**

``` sql
dictGetHierarchy('dict_name', key)
```

**Parameters**

- `dict_name` — Name of the dictionary. [String literal](../syntax.md#syntax-string-literal).
- `key` — Key value. [Expression](../syntax.md#syntax-expressions) returning a [UInt64](../../sql-reference/data-types/int-uint.md)-type value.

**Returned value**

- Parents for the key.

Type: [Array(UInt64)](../../sql-reference/data-types/array.md).

## dictIsIn {#dictisin}

Checks the ancestor of a key through the whole hierarchical chain in the dictionary.

``` sql
dictIsIn('dict_name', child_id_expr, ancestor_id_expr)
```

**Parameters**

- `dict_name` — Name of the dictionary. [String literal](../syntax.md#syntax-string-literal).
- `child_id_expr` — Key to be checked. [Expression](../syntax.md#syntax-expressions) returning a [UInt64](../../sql-reference/data-types/int-uint.md)-type value.
- `ancestor_id_expr` — Alleged ancestor of the `child_id_expr` key. [Expression](../syntax.md#syntax-expressions) returning a [UInt64](../../sql-reference/data-types/int-uint.md)-type value.

**Returned value**

- 0, if `child_id_expr` is not a child of `ancestor_id_expr`.
- 1, if `child_id_expr` is a child of `ancestor_id_expr` or if `child_id_expr` is an `ancestor_id_expr`.

Type: `UInt8`.

## Other Functions {#ext_dict_functions-other}

ClickHouse supports specialized functions that convert dictionary attribute values to a specific data type regardless of the dictionary configuration.

Functions:

- `dictGetInt8`, `dictGetInt16`, `dictGetInt32`, `dictGetInt64`
- `dictGetUInt8`, `dictGetUInt16`, `dictGetUInt32`, `dictGetUInt64`
- `dictGetFloat32`, `dictGetFloat64`
- `dictGetDate`
- `dictGetDateTime`
- `dictGetUUID`
- `dictGetString`

All these functions have an `OrDefault` modification. For example, `dictGetDateOrDefault`.

Syntax:

``` sql
dictGet[Type]('dict_name', 'attr_name', id_expr)
dictGet[Type]OrDefault('dict_name', 'attr_name', id_expr, default_value_expr)
```

**Parameters**

- `dict_name` — Name of the dictionary. [String literal](../syntax.md#syntax-string-literal).
- `attr_name` — Name of the column of the dictionary. [String literal](../syntax.md#syntax-string-literal).
- `id_expr` — Key value. [Expression](../syntax.md#syntax-expressions) returning a [UInt64](../../sql-reference/data-types/int-uint.md)-type value.
- `default_value_expr` — Value which is returned if the dictionary doesn't contain a row with the `id_expr` key. [Expression](../syntax.md#syntax-expressions) returning a value in the data type configured for the `attr_name` attribute.
- -**Valeur renvoyée** - -- Si ClickHouse analyse l'attribut avec succès dans le [l'attribut type de données](../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-structure.md#ext_dict_structure-attributes), les fonctions renvoient la valeur du dictionnaire de l'attribut qui correspond à `id_expr`. - -- Si il n'est pas demandé `id_expr` dans le dictionnaire,: - - - `dictGet[Type]` returns the content of the `` element specified for the attribute in the dictionary configuration. - - `dictGet[Type]OrDefault` returns the value passed as the `default_value_expr` parameter. - -ClickHouse lève une exception si elle ne peut pas analyser la valeur de l'attribut ou si la valeur ne correspond pas au type de données d'attribut. - -[Article Original](https://clickhouse.tech/docs/en/query_language/functions/ext_dict_functions/) diff --git a/docs/fr/sql-reference/functions/functions-for-nulls.md b/docs/fr/sql-reference/functions/functions-for-nulls.md deleted file mode 100644 index ef7be728ce7..00000000000 --- a/docs/fr/sql-reference/functions/functions-for-nulls.md +++ /dev/null @@ -1,312 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 63 -toc_title: Travailler avec des arguments nullables ---- - -# Fonctions pour travailler avec des agrégats nullables {#functions-for-working-with-nullable-aggregates} - -## isNull {#isnull} - -Vérifie si l'argument est [NULL](../../sql-reference/syntax.md#null-literal). - -``` sql -isNull(x) -``` - -**Paramètre** - -- `x` — A value with a non-compound data type. - -**Valeur renvoyée** - -- `1` si `x` être `NULL`. -- `0` si `x` n'est pas `NULL`. - -**Exemple** - -Table d'entrée - -``` text -┌─x─┬────y─┐ -│ 1 │ ᴺᵁᴸᴸ │ -│ 2 │ 3 │ -└───┴──────┘ -``` - -Requête - -``` sql -SELECT x FROM t_null WHERE isNull(y) -``` - -``` text -┌─x─┐ -│ 1 │ -└───┘ -``` - -## isNotNull {#isnotnull} - -Vérifie si l'argument est [NULL](../../sql-reference/syntax.md#null-literal). - -``` sql -isNotNull(x) -``` - -**Paramètre:** - -- `x` — A value with a non-compound data type. - -**Valeur renvoyée** - -- `0` si `x` être `NULL`. -- `1` si `x` n'est pas `NULL`. - -**Exemple** - -Table d'entrée - -``` text -┌─x─┬────y─┐ -│ 1 │ ᴺᵁᴸᴸ │ -│ 2 │ 3 │ -└───┴──────┘ -``` - -Requête - -``` sql -SELECT x FROM t_null WHERE isNotNull(y) -``` - -``` text -┌─x─┐ -│ 2 │ -└───┘ -``` - -## fusionner {#coalesce} - -Vérifie de gauche à droite si `NULL` les arguments ont été passés et renvoie le premier non-`NULL` argument. - -``` sql -coalesce(x,...) -``` - -**Paramètre:** - -- N'importe quel nombre de paramètres d'un type non composé. Tous les paramètres doivent être compatibles par type de données. - -**Valeurs renvoyées** - -- Le premier non-`NULL` argument. -- `NULL` si tous les arguments sont `NULL`. - -**Exemple** - -Considérez une liste de contacts qui peuvent spécifier plusieurs façons de contacter un client. - -``` text -┌─name─────┬─mail─┬─phone─────┬──icq─┐ -│ client 1 │ ᴺᵁᴸᴸ │ 123-45-67 │ 123 │ -│ client 2 │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ -└──────────┴──────┴───────────┴──────┘ -``` - -Le `mail` et `phone` les champs sont de type Chaîne de caractères, mais la `icq` le terrain est `UInt32`, de sorte qu'il doit être converti en `String`. 
- -Obtenir la première méthode de contact pour le client à partir de la liste de contacts: - -``` sql -SELECT coalesce(mail, phone, CAST(icq,'Nullable(String)')) FROM aBook -``` - -``` text -┌─name─────┬─coalesce(mail, phone, CAST(icq, 'Nullable(String)'))─┐ -│ client 1 │ 123-45-67 │ -│ client 2 │ ᴺᵁᴸᴸ │ -└──────────┴──────────────────────────────────────────────────────┘ -``` - -## ifNull {#ifnull} - -Renvoie une valeur alternative si l'argument principal est `NULL`. - -``` sql -ifNull(x,alt) -``` - -**Paramètre:** - -- `x` — The value to check for `NULL`. -- `alt` — The value that the function returns if `x` être `NULL`. - -**Valeurs renvoyées** - -- Valeur `x`, si `x` n'est pas `NULL`. -- Valeur `alt`, si `x` être `NULL`. - -**Exemple** - -``` sql -SELECT ifNull('a', 'b') -``` - -``` text -┌─ifNull('a', 'b')─┐ -│ a │ -└──────────────────┘ -``` - -``` sql -SELECT ifNull(NULL, 'b') -``` - -``` text -┌─ifNull(NULL, 'b')─┐ -│ b │ -└───────────────────┘ -``` - -## nullIf {#nullif} - -Retourner `NULL` si les arguments sont égaux. - -``` sql -nullIf(x, y) -``` - -**Paramètre:** - -`x`, `y` — Values for comparison. They must be compatible types, or ClickHouse will generate an exception. - -**Valeurs renvoyées** - -- `NULL` si les arguments sont égaux. -- Le `x` valeur, si les arguments ne sont pas égaux. - -**Exemple** - -``` sql -SELECT nullIf(1, 1) -``` - -``` text -┌─nullIf(1, 1)─┐ -│ ᴺᵁᴸᴸ │ -└──────────────┘ -``` - -``` sql -SELECT nullIf(1, 2) -``` - -``` text -┌─nullIf(1, 2)─┐ -│ 1 │ -└──────────────┘ -``` - -## assumeNotNull {#assumenotnull} - -Résultats dans une valeur de type [Nullable](../../sql-reference/data-types/nullable.md) pour un non- `Nullable` si la valeur n'est pas `NULL`. - -``` sql -assumeNotNull(x) -``` - -**Paramètre:** - -- `x` — The original value. - -**Valeurs renvoyées** - -- La valeur d'origine du non-`Nullable` type, si elle n'est pas `NULL`. -- La valeur par défaut pour le non-`Nullable` Tapez si la valeur d'origine était `NULL`. - -**Exemple** - -Envisager l' `t_null` table. - -``` sql -SHOW CREATE TABLE t_null -``` - -``` text -┌─statement─────────────────────────────────────────────────────────────────┐ -│ CREATE TABLE default.t_null ( x Int8, y Nullable(Int8)) ENGINE = TinyLog │ -└───────────────────────────────────────────────────────────────────────────┘ -``` - -``` text -┌─x─┬────y─┐ -│ 1 │ ᴺᵁᴸᴸ │ -│ 2 │ 3 │ -└───┴──────┘ -``` - -Appliquer le `assumeNotNull` la fonction de la `y` colonne. - -``` sql -SELECT assumeNotNull(y) FROM t_null -``` - -``` text -┌─assumeNotNull(y)─┐ -│ 0 │ -│ 3 │ -└──────────────────┘ -``` - -``` sql -SELECT toTypeName(assumeNotNull(y)) FROM t_null -``` - -``` text -┌─toTypeName(assumeNotNull(y))─┐ -│ Int8 │ -│ Int8 │ -└──────────────────────────────┘ -``` - -## toNullable {#tonullable} - -Convertit le type d'argument en `Nullable`. - -``` sql -toNullable(x) -``` - -**Paramètre:** - -- `x` — The value of any non-compound type. - -**Valeur renvoyée** - -- La valeur d'entrée avec un `Nullable` type. 
- -**Exemple** - -``` sql -SELECT toTypeName(10) -``` - -``` text -┌─toTypeName(10)─┐ -│ UInt8 │ -└────────────────┘ -``` - -``` sql -SELECT toTypeName(toNullable(10)) -``` - -``` text -┌─toTypeName(toNullable(10))─┐ -│ Nullable(UInt8) │ -└────────────────────────────┘ -``` - -[Article Original](https://clickhouse.tech/docs/en/query_language/functions/functions_for_nulls/) diff --git a/docs/fr/sql-reference/functions/geo.md b/docs/fr/sql-reference/functions/geo.md deleted file mode 100644 index a89f03c7216..00000000000 --- a/docs/fr/sql-reference/functions/geo.md +++ /dev/null @@ -1,510 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 62 -toc_title: "Travailler avec des coordonn\xE9es g\xE9ographiques" ---- - -# Fonctions pour travailler avec des coordonnées géographiques {#functions-for-working-with-geographical-coordinates} - -## greatCircleDistance {#greatcircledistance} - -Calculer la distance entre deux points sur la surface de la Terre en utilisant [la formule du grand cercle](https://en.wikipedia.org/wiki/Great-circle_distance). - -``` sql -greatCircleDistance(lon1Deg, lat1Deg, lon2Deg, lat2Deg) -``` - -**Les paramètres d'entrée** - -- `lon1Deg` — Longitude of the first point in degrees. Range: `[-180°, 180°]`. -- `lat1Deg` — Latitude of the first point in degrees. Range: `[-90°, 90°]`. -- `lon2Deg` — Longitude of the second point in degrees. Range: `[-180°, 180°]`. -- `lat2Deg` — Latitude of the second point in degrees. Range: `[-90°, 90°]`. - -Les valeurs positives correspondent à la latitude nord et à la longitude Est, et les valeurs négatives à la latitude Sud et à la longitude ouest. - -**Valeur renvoyée** - -La distance entre deux points sur la surface de la Terre, en mètres. - -Génère une exception lorsque les valeurs des paramètres d'entrée se situent en dehors de la plage. - -**Exemple** - -``` sql -SELECT greatCircleDistance(55.755831, 37.617673, -55.755831, -37.617673) -``` - -``` text -┌─greatCircleDistance(55.755831, 37.617673, -55.755831, -37.617673)─┐ -│ 14132374.194975413 │ -└───────────────────────────────────────────────────────────────────┘ -``` - -## pointInEllipses {#pointinellipses} - -Vérifie si le point appartient à au moins une des ellipses. -Coordonnées géométriques sont dans le système de coordonnées Cartésiennes. - -``` sql -pointInEllipses(x, y, x₀, y₀, a₀, b₀,...,xₙ, yₙ, aₙ, bₙ) -``` - -**Les paramètres d'entrée** - -- `x, y` — Coordinates of a point on the plane. -- `xᵢ, yᵢ` — Coordinates of the center of the `i`-ème points de suspension. -- `aᵢ, bᵢ` — Axes of the `i`- e ellipse en unités de coordonnées x, Y. - -Les paramètres d'entrée doivent être `2+4⋅n`, où `n` est le nombre de points de suspension. - -**Valeurs renvoyées** - -`1` si le point est à l'intérieur d'au moins l'un des ellipses; `0`si elle ne l'est pas. - -**Exemple** - -``` sql -SELECT pointInEllipses(10., 10., 10., 9.1, 1., 0.9999) -``` - -``` text -┌─pointInEllipses(10., 10., 10., 9.1, 1., 0.9999)─┐ -│ 1 │ -└─────────────────────────────────────────────────┘ -``` - -## pointtinpolygon {#pointinpolygon} - -Vérifie si le point appartient au polygone sur l'avion. - -``` sql -pointInPolygon((x, y), [(a, b), (c, d) ...], ...) -``` - -**Les valeurs d'entrée** - -- `(x, y)` — Coordinates of a point on the plane. Data type — [Tuple](../../sql-reference/data-types/tuple.md) — A tuple of two numbers. -- `[(a, b), (c, d) ...]` — Polygon vertices. Data type — [Tableau](../../sql-reference/data-types/array.md). 
Chaque sommet est représenté par une paire de coordonnées `(a, b)`. Les sommets doivent être spécifiés dans le sens horaire ou antihoraire. Le nombre minimum de sommets est 3. Le polygone doit être constante. -- La fonction prend également en charge les polygones avec des trous (découper des sections). Dans ce cas, ajoutez des polygones qui définissent les sections découpées en utilisant des arguments supplémentaires de la fonction. La fonction ne prend pas en charge les polygones non simplement connectés. - -**Valeurs renvoyées** - -`1` si le point est à l'intérieur du polygone, `0` si elle ne l'est pas. -Si le point est sur la limite du polygone, la fonction peut renvoyer 0 ou 1. - -**Exemple** - -``` sql -SELECT pointInPolygon((3., 3.), [(6, 0), (8, 4), (5, 8), (0, 2)]) AS res -``` - -``` text -┌─res─┐ -│ 1 │ -└─────┘ -``` - -## geohashEncode {#geohashencode} - -Encode la latitude et la longitude en tant que chaîne geohash, voir (http://geohash.org/, https://en.wikipedia.org/wiki/Geohash). - -``` sql -geohashEncode(longitude, latitude, [precision]) -``` - -**Les valeurs d'entrée** - -- longitude longitude partie de la coordonnée que vous souhaitez encoder. Flottant dans la gamme`[-180°, 180°]` -- latitude latitude partie de la coordonnée que vous souhaitez encoder. Flottant dans la gamme `[-90°, 90°]` -- precision-facultatif, longueur de la chaîne codée résultante, par défaut `12`. Entier dans la gamme `[1, 12]`. Toute valeur inférieure à `1` ou supérieure à `12` silencieusement converti à `12`. - -**Valeurs renvoyées** - -- alphanumérique `String` de coordonnées codées (la version modifiée de l'alphabet de codage base32 est utilisée). - -**Exemple** - -``` sql -SELECT geohashEncode(-5.60302734375, 42.593994140625, 0) AS res -``` - -``` text -┌─res──────────┐ -│ ezs42d000000 │ -└──────────────┘ -``` - -## geohashDecode {#geohashdecode} - -Décode toute chaîne codée geohash en longitude et latitude. - -**Les valeurs d'entrée** - -- chaîne codée-chaîne codée geohash. - -**Valeurs renvoyées** - -- (longitude, latitude) - 2-n-uplet de `Float64` les valeurs de longitude et de latitude. - -**Exemple** - -``` sql -SELECT geohashDecode('ezs42') AS res -``` - -``` text -┌─res─────────────────────────────┐ -│ (-5.60302734375,42.60498046875) │ -└─────────────────────────────────┘ -``` - -## geoToH3 {#geotoh3} - -Retourner [H3](https://uber.github.io/h3/#/documentation/overview/introduction) point d'indice `(lon, lat)` avec une résolution spécifiée. - -[H3](https://uber.github.io/h3/#/documentation/overview/introduction) est un système d'indexation géographique où la surface de la Terre divisée en carreaux hexagonaux même. Ce système est hiérarchique, c'est-à-dire que chaque hexagone au niveau supérieur peut être divisé en sept, même mais plus petits, etc. - -Cet indice est principalement utilisé pour les emplacements de bucketing et d'autres manipulations géospatiales. - -**Syntaxe** - -``` sql -geoToH3(lon, lat, resolution) -``` - -**Paramètre** - -- `lon` — Longitude. Type: [Float64](../../sql-reference/data-types/float.md). -- `lat` — Latitude. Type: [Float64](../../sql-reference/data-types/float.md). -- `resolution` — Index resolution. Range: `[0, 15]`. Type: [UInt8](../../sql-reference/data-types/int-uint.md). - -**Valeurs renvoyées** - -- Numéro d'indice hexagonal. -- 0 en cas d'erreur. - -Type: `UInt64`. 
- -**Exemple** - -Requête: - -``` sql -SELECT geoToH3(37.79506683, 55.71290588, 15) as h3Index -``` - -Résultat: - -``` text -┌────────────h3Index─┐ -│ 644325524701193974 │ -└────────────────────┘ -``` - -## geohashesInBox {#geohashesinbox} - -Renvoie un tableau de chaînes codées geohash de précision donnée qui tombent à l'intérieur et croisent les limites d'une boîte donnée, essentiellement une grille 2D aplatie en tableau. - -**Les valeurs d'entrée** - -- longitude_min-longitude min, valeur flottante dans la plage `[-180°, 180°]` -- latitude_min-latitude min, valeur flottante dans la plage `[-90°, 90°]` -- longitude_max-longitude maximale, valeur flottante dans la plage `[-180°, 180°]` -- latitude_max-latitude maximale, valeur flottante dans la plage `[-90°, 90°]` -- précision - geohash précision, `UInt8` dans la gamme `[1, 12]` - -Veuillez noter que tous les paramètres de coordonnées doit être du même type: soit `Float32` ou `Float64`. - -**Valeurs renvoyées** - -- gamme de précision de longues chaînes de geohash-boîtes couvrant la zone, vous ne devriez pas compter sur l'ordre des éléments. -- \[\] - tableau vide si *min* les valeurs de *latitude* et *longitude* ne sont pas moins de correspondant *Max* valeur. - -Veuillez noter que la fonction lancera une exception si le tableau résultant a plus de 10'000'000 éléments. - -**Exemple** - -``` sql -SELECT geohashesInBox(24.48, 40.56, 24.785, 40.81, 4) AS thasos -``` - -``` text -┌─thasos──────────────────────────────────────┐ -│ ['sx1q','sx1r','sx32','sx1w','sx1x','sx38'] │ -└─────────────────────────────────────────────┘ -``` - -## h3GetBaseCell {#h3getbasecell} - -Renvoie le numéro de cellule de base de l'index. - -**Syntaxe** - -``` sql -h3GetBaseCell(index) -``` - -**Paramètre** - -- `index` — Hexagon index number. Type: [UInt64](../../sql-reference/data-types/int-uint.md). - -**Valeurs renvoyées** - -- Numéro de cellule de base hexagonale. Type: [UInt8](../../sql-reference/data-types/int-uint.md). - -**Exemple** - -Requête: - -``` sql -SELECT h3GetBaseCell(612916788725809151) as basecell -``` - -Résultat: - -``` text -┌─basecell─┐ -│ 12 │ -└──────────┘ -``` - -## h3HexAreaM2 {#h3hexaream2} - -Surface hexagonale Moyenne en mètres carrés à la résolution donnée. - -**Syntaxe** - -``` sql -h3HexAreaM2(resolution) -``` - -**Paramètre** - -- `resolution` — Index resolution. Range: `[0, 15]`. Type: [UInt8](../../sql-reference/data-types/int-uint.md). - -**Valeurs renvoyées** - -- Area in m². Type: [Float64](../../sql-reference/data-types/float.md). - -**Exemple** - -Requête: - -``` sql -SELECT h3HexAreaM2(13) as area -``` - -Résultat: - -``` text -┌─area─┐ -│ 43.9 │ -└──────┘ -``` - -## h3IndexesAreNeighbors {#h3indexesareneighbors} - -Renvoie si les H3Indexes fournis sont voisins ou non. - -**Syntaxe** - -``` sql -h3IndexesAreNeighbors(index1, index2) -``` - -**Paramètre** - -- `index1` — Hexagon index number. Type: [UInt64](../../sql-reference/data-types/int-uint.md). -- `index2` — Hexagon index number. Type: [UInt64](../../sql-reference/data-types/int-uint.md). - -**Valeurs renvoyées** - -- Retourner `1` si les index sont voisins, `0` autrement. Type: [UInt8](../../sql-reference/data-types/int-uint.md). - -**Exemple** - -Requête: - -``` sql -SELECT h3IndexesAreNeighbors(617420388351344639, 617420388352655359) AS n -``` - -Résultat: - -``` text -┌─n─┐ -│ 1 │ -└───┘ -``` - -## h3enfants {#h3tochildren} - -Retourne un tableau avec les index enfants de l'index donné. 
- -**Syntaxe** - -``` sql -h3ToChildren(index, resolution) -``` - -**Paramètre** - -- `index` — Hexagon index number. Type: [UInt64](../../sql-reference/data-types/int-uint.md). -- `resolution` — Index resolution. Range: `[0, 15]`. Type: [UInt8](../../sql-reference/data-types/int-uint.md). - -**Valeurs renvoyées** - -- Tableau avec les index H3 enfants. Tableau de type: [UInt64](../../sql-reference/data-types/int-uint.md). - -**Exemple** - -Requête: - -``` sql -SELECT h3ToChildren(599405990164561919, 6) AS children -``` - -Résultat: - -``` text -┌─children───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐ -│ [603909588852408319,603909588986626047,603909589120843775,603909589255061503,603909589389279231,603909589523496959,603909589657714687] │ -└────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘ -``` - -## h3ToParent {#h3toparent} - -Renvoie l'index parent (plus grossier) contenant l'index donné. - -**Syntaxe** - -``` sql -h3ToParent(index, resolution) -``` - -**Paramètre** - -- `index` — Hexagon index number. Type: [UInt64](../../sql-reference/data-types/int-uint.md). -- `resolution` — Index resolution. Range: `[0, 15]`. Type: [UInt8](../../sql-reference/data-types/int-uint.md). - -**Valeurs renvoyées** - -- Parent H3 index. Type: [UInt64](../../sql-reference/data-types/int-uint.md). - -**Exemple** - -Requête: - -``` sql -SELECT h3ToParent(599405990164561919, 3) as parent -``` - -Résultat: - -``` text -┌─────────────parent─┐ -│ 590398848891879423 │ -└────────────────────┘ -``` - -## h3ToString {#h3tostring} - -Convertit la représentation H3Index de l'index en représentation de chaîne. - -``` sql -h3ToString(index) -``` - -**Paramètre** - -- `index` — Hexagon index number. Type: [UInt64](../../sql-reference/data-types/int-uint.md). - -**Valeurs renvoyées** - -- Représentation en chaîne de l'index H3. Type: [Chaîne](../../sql-reference/data-types/string.md). - -**Exemple** - -Requête: - -``` sql -SELECT h3ToString(617420388352917503) as h3_string -``` - -Résultat: - -``` text -┌─h3_string───────┐ -│ 89184926cdbffff │ -└─────────────────┘ -``` - -## stringToH3 {#stringtoh3} - -Convertit la représentation de chaîne en représentation H3Index (UInt64). - -``` sql -stringToH3(index_str) -``` - -**Paramètre** - -- `index_str` — String representation of the H3 index. Type: [Chaîne](../../sql-reference/data-types/string.md). - -**Valeurs renvoyées** - -- Numéro d'indice hexagonal. Renvoie 0 en cas d'erreur. Type: [UInt64](../../sql-reference/data-types/int-uint.md). - -**Exemple** - -Requête: - -``` sql -SELECT stringToH3('89184926cc3ffff') as index -``` - -Résultat: - -``` text -┌──────────────index─┐ -│ 617420388351344639 │ -└────────────────────┘ -``` - -## h3grésolution {#h3getresolution} - -Retourne la résolution de l'index. - -**Syntaxe** - -``` sql -h3GetResolution(index) -``` - -**Paramètre** - -- `index` — Hexagon index number. Type: [UInt64](../../sql-reference/data-types/int-uint.md). - -**Valeurs renvoyées** - -- L'indice de la résolution. Gamme: `[0, 15]`. Type: [UInt8](../../sql-reference/data-types/int-uint.md). 
- -**Exemple** - -Requête: - -``` sql -SELECT h3GetResolution(617420388352917503) as res -``` - -Résultat: - -``` text -┌─res─┐ -│ 9 │ -└─────┘ -``` - -[Article Original](https://clickhouse.tech/docs/en/query_language/functions/geo/) diff --git a/docs/fr/sql-reference/functions/hash-functions.md b/docs/fr/sql-reference/functions/hash-functions.md deleted file mode 100644 index 3b0f92dd4f8..00000000000 --- a/docs/fr/sql-reference/functions/hash-functions.md +++ /dev/null @@ -1,484 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 50 -toc_title: Hachage ---- - -# Les Fonctions De Hachage {#hash-functions} - -Les fonctions de hachage peuvent être utilisées pour le brassage pseudo-aléatoire déterministe des éléments. - -## halfMD5 {#hash-functions-halfmd5} - -[Interpréter](../../sql-reference/functions/type-conversion-functions.md#type_conversion_functions-reinterpretAsString) tous les paramètres d'entrée sous forme de chaînes et calcule le [MD5](https://en.wikipedia.org/wiki/MD5) la valeur de hachage pour chacun d'eux. Puis combine les hachages, prend les 8 premiers octets du hachage de la chaîne résultante, et les interprète comme `UInt64` dans l'ordre des octets big-endian. - -``` sql -halfMD5(par1, ...) -``` - -La fonction est relativement lente (5 millions de chaînes courtes par seconde par cœur de processeur). -Envisager l'utilisation de la [sipHash64](#hash_functions-siphash64) la fonction la place. - -**Paramètre** - -La fonction prend un nombre variable de paramètres d'entrée. Les paramètres peuvent être tout de la [types de données pris en charge](../../sql-reference/data-types/index.md). - -**Valeur Renvoyée** - -A [UInt64](../../sql-reference/data-types/int-uint.md) valeur de hachage du type de données. - -**Exemple** - -``` sql -SELECT halfMD5(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:00')) AS halfMD5hash, toTypeName(halfMD5hash) AS type -``` - -``` text -┌────────halfMD5hash─┬─type───┐ -│ 186182704141653334 │ UInt64 │ -└────────────────────┴────────┘ -``` - -## MD5 {#hash_functions-md5} - -Calcule le MD5 à partir d'une chaîne et renvoie L'ensemble d'octets résultant en tant que FixedString(16). -Si vous n'avez pas besoin de MD5 en particulier, mais que vous avez besoin d'un hachage cryptographique 128 bits décent, utilisez le ‘sipHash128’ la fonction la place. -Si vous voulez obtenir le même résultat que la sortie de l'utilitaire md5sum, utilisez lower (hex(MD5 (s))). - -## sipHash64 {#hash_functions-siphash64} - -Produit un 64 bits [SipHash](https://131002.net/siphash/) la valeur de hachage. - -``` sql -sipHash64(par1,...) -``` - -C'est une fonction de hachage cryptographique. Il fonctionne au moins trois fois plus vite que le [MD5](#hash_functions-md5) fonction. - -Fonction [interpréter](../../sql-reference/functions/type-conversion-functions.md#type_conversion_functions-reinterpretAsString) tous les paramètres d'entrée sous forme de chaînes et calcule la valeur de hachage pour chacun d'eux. Puis combine les hachages par l'algorithme suivant: - -1. Après avoir haché tous les paramètres d'entrée, la fonction obtient le tableau de hachages. -2. La fonction prend le premier et le second éléments et calcule un hachage pour le tableau d'entre eux. -3. Ensuite, la fonction prend la valeur de hachage, calculée à l'étape précédente, et le troisième élément du tableau de hachage initial, et calcule un hachage pour le tableau d'entre eux. -4. 
L'étape précédente est répétée pour tous les éléments restants de la période initiale de hachage tableau. - -**Paramètre** - -La fonction prend un nombre variable de paramètres d'entrée. Les paramètres peuvent être tout de la [types de données pris en charge](../../sql-reference/data-types/index.md). - -**Valeur Renvoyée** - -A [UInt64](../../sql-reference/data-types/int-uint.md) valeur de hachage du type de données. - -**Exemple** - -``` sql -SELECT sipHash64(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:00')) AS SipHash, toTypeName(SipHash) AS type -``` - -``` text -┌──────────────SipHash─┬─type───┐ -│ 13726873534472839665 │ UInt64 │ -└──────────────────────┴────────┘ -``` - -## sipHash128 {#hash_functions-siphash128} - -Calcule SipHash à partir d'une chaîne. -Accepte un argument de type chaîne. Renvoie FixedString (16). -Diffère de sipHash64 en ce que l'état de pliage xor final n'est effectué que jusqu'à 128 bits. - -## cityHash64 {#cityhash64} - -Produit un 64 bits [CityHash](https://github.com/google/cityhash) la valeur de hachage. - -``` sql -cityHash64(par1,...) -``` - -Ceci est une fonction de hachage non cryptographique rapide. Il utilise L'algorithme CityHash pour les paramètres de chaîne et la fonction de hachage rapide non cryptographique spécifique à l'implémentation pour les paramètres avec d'autres types de données. La fonction utilise le combinateur CityHash pour obtenir les résultats finaux. - -**Paramètre** - -La fonction prend un nombre variable de paramètres d'entrée. Les paramètres peuvent être tout de la [types de données pris en charge](../../sql-reference/data-types/index.md). - -**Valeur Renvoyée** - -A [UInt64](../../sql-reference/data-types/int-uint.md) valeur de hachage du type de données. - -**Exemple** - -Appelez exemple: - -``` sql -SELECT cityHash64(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:00')) AS CityHash, toTypeName(CityHash) AS type -``` - -``` text -┌─────────────CityHash─┬─type───┐ -│ 12072650598913549138 │ UInt64 │ -└──────────────────────┴────────┘ -``` - -L'exemple suivant montre comment calculer la somme de l'ensemble de la table avec précision jusqu'à la ligne de commande: - -``` sql -SELECT groupBitXor(cityHash64(*)) FROM table -``` - -## intHash32 {#inthash32} - -Calcule un code de hachage 32 bits à partir de n'importe quel type d'entier. -C'est une fonction de hachage non cryptographique relativement rapide de qualité moyenne pour les nombres. - -## intHash64 {#inthash64} - -Calcule un code de hachage 64 bits à partir de n'importe quel type d'entier. -Il fonctionne plus vite que intHash32. Qualité moyenne. - -## SHA1 {#sha1} - -## SHA224 {#sha224} - -## SHA256 {#sha256} - -Calcule SHA-1, SHA-224 ou SHA-256 à partir d'une chaîne et renvoie l'ensemble d'octets résultant en tant que FixedString(20), FixedString(28) ou FixedString(32). -La fonction fonctionne assez lentement (SHA-1 traite environ 5 millions de chaînes courtes par seconde par cœur de processeur, tandis que SHA-224 et SHA-256 traitent environ 2,2 millions). -Nous vous recommandons d'utiliser cette fonction uniquement dans les cas où vous avez besoin d'une fonction de hachage spécifique et que vous ne pouvez pas la sélectionner. -Même dans ces cas, nous vous recommandons d'appliquer la fonction hors ligne et de pré-calculer les valeurs lors de leur insertion dans la table, au lieu de l'appliquer dans SELECTS. 
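For instance, one way to precompute a hash at insert time is a `DEFAULT` column expression (an added sketch; the table and column names are hypothetical):

``` sql
CREATE TABLE user_emails
(
    email String,
    email_sha256 FixedString(32) DEFAULT SHA256(email)  -- computed once, when the row is inserted
)
ENGINE = MergeTree
ORDER BY email;

INSERT INTO user_emails (email) VALUES ('alice@example.com');

-- later SELECTs read the precomputed digest instead of re-hashing
SELECT hex(email_sha256) FROM user_emails;
```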
## URLHash(url \[, N\]) {#urlhashurl-n}

A fast, decent-quality non-cryptographic hash function for a string obtained from a URL using some type of normalization.
`URLHash(s)` – Calculates a hash from a string without one of the trailing symbols `/`, `?` or `#` at the end, if present.
`URLHash(s, N)` – Calculates a hash from a string up to the N level in the URL hierarchy, without one of the trailing symbols `/`, `?` or `#` at the end, if present.
Levels are the same as in URLHierarchy. This function is specific to Yandex.Metrica.

## farmHash64 {#farmhash64}

Produces a 64-bit [FarmHash](https://github.com/google/farmhash) hash value.

``` sql
farmHash64(par1, ...)
```

The function uses the `Hash64` method from all [available methods](https://github.com/google/farmhash/blob/master/src/farmhash.h).

**Parameters**

The function takes a variable number of input parameters. Parameters can be any of the [supported data types](../../sql-reference/data-types/index.md).

**Returned Value**

A [UInt64](../../sql-reference/data-types/int-uint.md) data type hash value.

**Example**

``` sql
SELECT farmHash64(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:00')) AS FarmHash, toTypeName(FarmHash) AS type
```

``` text
┌─────────────FarmHash─┬─type───┐
│ 17790458267262532859 │ UInt64 │
└──────────────────────┴────────┘
```

## javaHash {#hash_functions-javahash}

Calculates [JavaHash](http://hg.openjdk.java.net/jdk8u/jdk8u/jdk/file/478a4add975b/src/share/classes/java/lang/String.java#l1452) from a string. This hash function is neither fast nor high-quality. The only reason to use it is when this algorithm is already used in another system and you have to calculate exactly the same result.

**Syntax**

``` sql
SELECT javaHash('');
```

**Returned value**

An `Int32` data type hash value.

**Example**

Query:

``` sql
SELECT javaHash('Hello, world!');
```

Result:

``` text
┌─javaHash('Hello, world!')─┐
│               -1880044555 │
└───────────────────────────┘
```

## javaHashUTF16LE {#javahashutf16le}

Calculates [JavaHash](http://hg.openjdk.java.net/jdk8u/jdk8u/jdk/file/478a4add975b/src/share/classes/java/lang/String.java#l1452) from a string, assuming it contains bytes representing a string in UTF-16LE encoding.

**Syntax**

``` sql
javaHashUTF16LE(stringUtf16le)
```

**Parameters**

- `stringUtf16le` — a string in UTF-16LE encoding.

**Returned value**

An `Int32` data type hash value.

**Example**

Correct query with a UTF-16LE encoded string.

Query:

``` sql
SELECT javaHashUTF16LE(convertCharset('test', 'utf-8', 'utf-16le'))
```

Result:

``` text
┌─javaHashUTF16LE(convertCharset('test', 'utf-8', 'utf-16le'))─┐
│                                                      3556498 │
└──────────────────────────────────────────────────────────────┘
```

## hiveHash {#hash-functions-hivehash}

Calculates `HiveHash` from a string.

``` sql
SELECT hiveHash('');
```

This is just [JavaHash](#hash_functions-javahash) with the sign bit zeroed out. This function is used in [Apache Hive](https://en.wikipedia.org/wiki/Apache_Hive) for versions before 3.0. This hash function is neither fast nor high-quality.
The only reason to use it is when this algorithm is already used in another system and you have to calculate exactly the same result.

**Returned value**

An `Int32` data type hash value.

Type: `hiveHash`.

**Example**

Query:

``` sql
SELECT hiveHash('Hello, world!');
```

Result:

``` text
┌─hiveHash('Hello, world!')─┐
│                 267439093 │
└───────────────────────────┘
```

## metroHash64 {#metrohash64}

Produces a 64-bit [MetroHash](http://www.jandrewrogers.com/2015/05/27/metrohash/) hash value.

``` sql
metroHash64(par1, ...)
```

**Parameters**

The function takes a variable number of input parameters. Parameters can be any of the [supported data types](../../sql-reference/data-types/index.md).

**Returned Value**

A [UInt64](../../sql-reference/data-types/int-uint.md) data type hash value.

**Example**

``` sql
SELECT metroHash64(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:00')) AS MetroHash, toTypeName(MetroHash) AS type
```

``` text
┌────────────MetroHash─┬─type───┐
│ 14235658766382344533 │ UInt64 │
└──────────────────────┴────────┘
```

## jumpConsistentHash {#jumpconsistenthash}

Calculates JumpConsistentHash from a UInt64.
Accepts two arguments: a UInt64-type key and the number of buckets. Returns Int32.
For more information, see the link: [JumpConsistentHash](https://arxiv.org/pdf/1406.2294.pdf)

## murmurHash2_32, murmurHash2_64 {#murmurhash2-32-murmurhash2-64}

Produces a [MurmurHash2](https://github.com/aappleby/smhasher) hash value.

``` sql
murmurHash2_32(par1, ...)
murmurHash2_64(par1, ...)
```

**Parameters**

Both functions take a variable number of input parameters. Parameters can be any of the [supported data types](../../sql-reference/data-types/index.md).

**Returned Value**

- The `murmurHash2_32` function returns a hash value having the [UInt32](../../sql-reference/data-types/int-uint.md) data type.
- The `murmurHash2_64` function returns a hash value having the [UInt64](../../sql-reference/data-types/int-uint.md) data type.

**Example**

``` sql
SELECT murmurHash2_64(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:00')) AS MurmurHash2, toTypeName(MurmurHash2) AS type
```

``` text
┌──────────MurmurHash2─┬─type───┐
│ 11832096901709403633 │ UInt64 │
└──────────────────────┴────────┘
```

## gccMurmurHash {#gccmurmurhash}

Calculates a 64-bit [MurmurHash2](https://github.com/aappleby/smhasher) hash value using the same hash seed as [gcc](https://github.com/gcc-mirror/gcc/blob/41d6b10e96a1de98e90a7c0378437c3255814b16/libstdc%2B%2B-v3/include/bits/functional_hash.h#L191). It is portable between Clang and GCC builds.

**Syntax**

``` sql
gccMurmurHash(par1, ...);
```

**Parameters**

- `par1, ...` — A variable number of parameters that can be any of the [supported data types](../../sql-reference/data-types/index.md#data_types).

**Returned value**

- Calculated hash value.

Type: [UInt64](../../sql-reference/data-types/int-uint.md).
**Example**

Query:

``` sql
SELECT
    gccMurmurHash(1, 2, 3) AS res1,
    gccMurmurHash(('a', [1, 2, 3], 4, (4, ['foo', 'bar'], 1, (1, 2)))) AS res2
```

Result:

``` text
┌─────────────────res1─┬────────────────res2─┐
│ 12384823029245979431 │ 1188926775431157506 │
└──────────────────────┴─────────────────────┘
```

## murmurHash3_32, murmurHash3_64 {#murmurhash3-32-murmurhash3-64}

Produces a [MurmurHash3](https://github.com/aappleby/smhasher) hash value.

``` sql
murmurHash3_32(par1, ...)
murmurHash3_64(par1, ...)
```

**Parameters**

Both functions take a variable number of input parameters. Parameters can be any of the [supported data types](../../sql-reference/data-types/index.md).

**Returned Value**

- The `murmurHash3_32` function returns a [UInt32](../../sql-reference/data-types/int-uint.md) data type hash value.
- The `murmurHash3_64` function returns a [UInt64](../../sql-reference/data-types/int-uint.md) data type hash value.

**Example**

``` sql
SELECT murmurHash3_32(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:00')) AS MurmurHash3, toTypeName(MurmurHash3) AS type
```

``` text
┌─MurmurHash3─┬─type───┐
│     2152717 │ UInt32 │
└─────────────┴────────┘
```

## murmurHash3_128 {#murmurhash3-128}

Produces a 128-bit [MurmurHash3](https://github.com/aappleby/smhasher) hash value.

``` sql
murmurHash3_128( expr )
```

**Parameters**

- `expr` — [Expression](../syntax.md#syntax-expressions) returning a [String](../../sql-reference/data-types/string.md)-type value.

**Returned Value**

A [FixedString(16)](../../sql-reference/data-types/fixedstring.md) data type hash value.

**Example**

``` sql
SELECT murmurHash3_128('example_string') AS MurmurHash3, toTypeName(MurmurHash3) AS type
```

``` text
┌─MurmurHash3──────┬─type────────────┐
│ 6�1�4"S5KT�~~q │ FixedString(16) │
└──────────────────┴─────────────────┘
```

## xxHash32, xxHash64 {#hash-functions-xxhash32}

Calculates `xxHash` from a string. It is proposed in two flavors, 32 and 64 bits.

``` sql
SELECT xxHash32('');

OR

SELECT xxHash64('');
```

**Returned value**

A `UInt32` or `UInt64` data type hash value.

Type: `xxHash`.

**Example**

Query:

``` sql
SELECT xxHash32('Hello, world!');
```

Result:

``` text
┌─xxHash32('Hello, world!')─┐
│                 834093149 │
└───────────────────────────┘
```

**See Also**

- [xxHash](http://cyan4973.github.io/xxHash/).

[Original article](https://clickhouse.tech/docs/en/query_language/functions/hash_functions/)

diff --git a/docs/fr/sql-reference/functions/higher-order-functions.md b/docs/fr/sql-reference/functions/higher-order-functions.md
deleted file mode 100644
index ac24b67bb97..00000000000
--- a/docs/fr/sql-reference/functions/higher-order-functions.md
+++ /dev/null
@@ -1,264 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_priority: 57
toc_title: Higher-Order
---

# Higher-Order Functions {#higher-order-functions}

## `->` operator, lambda(params, expr) function {#operator-lambdaparams-expr-function}

Allows describing a lambda function for passing to a higher-order function. The left side of the arrow has a formal parameter, which is any ID, or multiple formal parameters – any IDs in a tuple.
The right side of the arrow has an expression that can use these formal parameters, as well as any table columns.

Examples: `x -> 2 * x`, `str -> str != Referer`.

Higher-order functions can only accept lambda functions as their functional argument.

A lambda function that accepts multiple arguments can be passed to a higher-order function. In this case, the higher-order function is passed several arrays of identical length that these arguments correspond to.

For some functions, such as [arrayCount](#higher_order_functions-array-count) or [arraySum](#higher-order-functions-array-sum), the first argument (the lambda function) can be omitted. In this case, identity mapping is assumed.

A lambda function can't be omitted for the following functions:

- [arrayMap](#higher_order_functions-array-map)
- [arrayFilter](#higher_order_functions-array-filter)
- [arrayFill](#higher_order_functions-array-fill)
- [arrayReverseFill](#higher_order_functions-array-reverse-fill)
- [arraySplit](#higher_order_functions-array-split)
- [arrayReverseSplit](#higher_order_functions-array-reverse-split)
- [arrayFirst](#higher_order_functions-array-first)
- [arrayFirstIndex](#higher_order_functions-array-first-index)

### arrayMap(func, arr1, …) {#higher_order_functions-array-map}

Returns an array obtained from the original application of the `func` function to each element in the `arr` array.

Example:

``` sql
SELECT arrayMap(x -> (x + 2), [1, 2, 3]) as res;
```

``` text
┌─res─────┐
│ [3,4,5] │
└─────────┘
```

The following example shows how to create a tuple of elements from different arrays:

``` sql
SELECT arrayMap((x, y) -> (x, y), [1, 2, 3], [4, 5, 6]) AS res
```

``` text
┌─res─────────────────┐
│ [(1,4),(2,5),(3,6)] │
└─────────────────────┘
```

Note that the first argument (lambda function) can't be omitted in the `arrayMap` function.

### arrayFilter(func, arr1, …) {#higher_order_functions-array-filter}

Returns an array containing only the elements in `arr1` for which `func` returns something other than 0.

Example:

``` sql
SELECT arrayFilter(x -> x LIKE '%World%', ['Hello', 'abc World']) AS res
```

``` text
┌─res───────────┐
│ ['abc World'] │
└───────────────┘
```

``` sql
SELECT
    arrayFilter(
        (i, x) -> x LIKE '%World%',
        arrayEnumerate(arr),
        ['Hello', 'abc World'] AS arr)
    AS res
```

``` text
┌─res─┐
│ [2] │
└─────┘
```

Note that the first argument (lambda function) can't be omitted in the `arrayFilter` function.

### arrayFill(func, arr1, …) {#higher_order_functions-array-fill}

Scans through `arr1` from the first element to the last element and replaces `arr1[i]` with `arr1[i - 1]` if `func` returns 0. The first element of `arr1` will not be replaced.

Example:

``` sql
SELECT arrayFill(x -> not isNull(x), [1, null, 3, 11, 12, null, null, 5, 6, 14, null, null]) AS res
```

``` text
┌─res──────────────────────────────┐
│ [1,1,3,11,12,12,12,5,6,14,14,14] │
└──────────────────────────────────┘
```

Note that the first argument (lambda function) can't be omitted in the `arrayFill` function.

### arrayReverseFill(func, arr1, …) {#higher_order_functions-array-reverse-fill}

Scans through `arr1` from the last element to the first element and replaces `arr1[i]` with `arr1[i + 1]` if `func` returns 0. The last element of `arr1` will not be replaced.
Example:

``` sql
SELECT arrayReverseFill(x -> not isNull(x), [1, null, 3, 11, 12, null, null, 5, 6, 14, null, null]) AS res
```

``` text
┌─res────────────────────────────────┐
│ [1,3,3,11,12,5,5,5,6,14,NULL,NULL] │
└────────────────────────────────────┘
```

Note that the first argument (lambda function) can't be omitted in the `arrayReverseFill` function.

### arraySplit(func, arr1, …) {#higher_order_functions-array-split}

Splits `arr1` into multiple arrays. When `func` returns something other than 0, the array will be split on the left hand side of the element. The array will not be split before the first element.

Example:

``` sql
SELECT arraySplit((x, y) -> y, [1, 2, 3, 4, 5], [1, 0, 0, 1, 0]) AS res
```

``` text
┌─res─────────────┐
│ [[1,2,3],[4,5]] │
└─────────────────┘
```

Note that the first argument (lambda function) can't be omitted in the `arraySplit` function.

### arrayReverseSplit(func, arr1, …) {#higher_order_functions-array-reverse-split}

Splits `arr1` into multiple arrays. When `func` returns something other than 0, the array will be split on the right hand side of the element. The array will not be split after the last element.

Example:

``` sql
SELECT arrayReverseSplit((x, y) -> y, [1, 2, 3, 4, 5], [1, 0, 0, 1, 0]) AS res
```

``` text
┌─res───────────────┐
│ [[1],[2,3,4],[5]] │
└───────────────────┘
```

Note that the first argument (lambda function) can't be omitted in the `arrayReverseSplit` function.

### arrayCount(\[func,\] arr1, …) {#higher_order_functions-array-count}

Returns the number of elements in the `arr` array for which `func` returns something other than 0. If `func` is not specified, it returns the number of non-zero elements in the array.

### arrayExists(\[func,\] arr1, …) {#arrayexistsfunc-arr1}

Returns 1 if there is at least one element in `arr` for which `func` returns something other than 0. Otherwise, it returns 0.

### arrayAll(\[func,\] arr1, …) {#arrayallfunc-arr1}

Returns 1 if `func` returns something other than 0 for all the elements in `arr`. Otherwise, it returns 0.

### arraySum(\[func,\] arr1, …) {#higher-order-functions-array-sum}

Returns the sum of the `func` values. If the function is omitted, it just returns the sum of the array elements.

### arrayFirst(func, arr1, …) {#higher_order_functions-array-first}

Returns the first element in the `arr1` array for which `func` returns something other than 0.

Note that the first argument (lambda function) can't be omitted in the `arrayFirst` function.

### arrayFirstIndex(func, arr1, …) {#higher_order_functions-array-first-index}

Returns the index of the first element in the `arr1` array for which `func` returns something other than 0.

Note that the first argument (lambda function) can't be omitted in the `arrayFirstIndex` function.

### arrayCumSum(\[func,\] arr1, …) {#arraycumsumfunc-arr1}

Returns an array of the partial sums of the elements in the source array (a running sum). If the `func` function is specified, the values of the array elements are converted by this function before summing.

Example:

``` sql
SELECT arrayCumSum([1, 1, 1, 1]) AS res
```

``` text
┌─res──────────┐
│ [1, 2, 3, 4] │
└──────────────┘
```

### arrayCumSumNonNegative(arr) {#arraycumsumnonnegativearr}

Same as `arrayCumSum`, returns an array of the partial sums of the elements in the source array (a running sum).
Unlike `arrayCumSum`, when the returned value contains a value less than zero, the value is replaced with zero and subsequent calculation is performed with zero parameters. Example:

``` sql
SELECT arrayCumSumNonNegative([1, 1, -4, 1]) AS res
```

``` text
┌─res───────┐
│ [1,2,0,1] │
└───────────┘
```

### arraySort(\[func,\] arr1, …) {#arraysortfunc-arr1}

Returns an array as the result of sorting the elements of `arr1` in ascending order. If the `func` function is specified, the sorting order is determined by the result of the `func` function applied to the elements of the array (arrays).

The [Schwartzian transform](https://en.wikipedia.org/wiki/Schwartzian_transform) is used to improve sorting efficiency.

Example:

``` sql
SELECT arraySort((x, y) -> y, ['hello', 'world'], [2, 1]) AS res;
```

``` text
┌─res────────────────┐
│ ['world', 'hello'] │
└────────────────────┘
```

For more information about the `arraySort` method, see the [Functions for Working with Arrays](array-functions.md#array_functions-sort) section.

### arrayReverseSort(\[func,\] arr1, …) {#arrayreversesortfunc-arr1}

Returns an array as the result of sorting the elements of `arr1` in descending order. If the `func` function is specified, the sorting order is determined by the result of the `func` function applied to the elements of the array (arrays).

Example:

``` sql
SELECT arrayReverseSort((x, y) -> y, ['hello', 'world'], [2, 1]) as res;
```

``` text
┌─res───────────────┐
│ ['hello','world'] │
└───────────────────┘
```

For more information about the `arrayReverseSort` method, see the [Functions for Working with Arrays](array-functions.md#array_functions-reverse-sort) section.

[Original article](https://clickhouse.tech/docs/en/query_language/functions/higher_order_functions/)

diff --git a/docs/fr/sql-reference/functions/in-functions.md b/docs/fr/sql-reference/functions/in-functions.md
deleted file mode 100644
index ced5ef73e46..00000000000
--- a/docs/fr/sql-reference/functions/in-functions.md
+++ /dev/null
@@ -1,26 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_priority: 60
toc_title: "IN Operator"
---

# Functions for Implementing the IN Operator {#functions-for-implementing-the-in-operator}

## in, notIn, globalIn, globalNotIn {#in-functions}

See the section [IN operators](../operators/in.md#select-in-operators).

## tuple(x, y, …), operator (x, y, …) {#tuplex-y-operator-x-y}

A function that allows grouping multiple columns.
For columns with the types T1, T2, …, it returns a Tuple(T1, T2, …) type tuple containing these columns. There is no cost to execute the function.
Tuples are normally used as intermediate values for an argument of IN operators, or for creating a list of formal parameters of lambda functions. Tuples can't be written to a table.

## tupleElement(tuple, n), operator x.N {#tupleelementtuple-n-operator-x-n}

A function that allows getting a column from a tuple.
'N' is the column index, starting from 1. 'N' must be a constant. 'N' must be a strict positive integer no greater than the size of the tuple.
There is no cost to execute the function.
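For example, a minimal illustration of both functions together:

``` sql
SELECT tuple(1, 'a') AS x, tupleElement(x, 2) AS elem
```

``` text
┌─x───────┬─elem─┐
│ (1,'a') │ a    │
└─────────┴──────┘
```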
[Original article](https://clickhouse.tech/docs/en/query_language/functions/in_functions/)

diff --git a/docs/fr/sql-reference/functions/index.md b/docs/fr/sql-reference/functions/index.md
deleted file mode 100644
index 6e5333f68f5..00000000000
--- a/docs/fr/sql-reference/functions/index.md
+++ /dev/null
@@ -1,74 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_folder_title: Functions
toc_priority: 32
toc_title: Introduction
---

# Functions {#functions}

There are at least\* two types of functions - regular functions (they are just called "functions") and aggregate functions. These are completely different concepts. Regular functions work as if they are applied to each row separately (for each row, the result of the function doesn't depend on the other rows). Aggregate functions accumulate a set of values from various rows (i.e. they depend on the entire set of rows).

In this section we discuss regular functions. For aggregate functions, see the section "Aggregate functions".

\* - There is a third type of function that the 'arrayJoin' function belongs to; table functions can also be mentioned separately.\*

## Strong Typing {#strong-typing}

In contrast to standard SQL, ClickHouse has strong typing. In other words, it doesn't make implicit conversions between types. Each function works for a specific set of types. This means that sometimes you need to use type conversion functions.

## Common Subexpression Elimination {#common-subexpression-elimination}

All expressions in a query that have the same AST (the same record or the same result of syntactic parsing) are considered to have identical values. Such expressions are concatenated and executed once. Identical subqueries are also eliminated this way.

## Types of Results {#types-of-results}

All functions return a single value as the result (not several values, and not zero values). The type of the result is usually defined only by the types of arguments, not by the values. Exceptions are the tupleElement function (the a.N operator) and the toFixedString function.

## Constants {#constants}

For simplicity, certain functions can only work with constants for some arguments. For example, the right argument of the LIKE operator must be a constant.
Almost all functions return a constant for constant arguments. The exception is functions that generate random numbers.
The 'now' function returns different values for queries that were run at different times, but the result is considered a constant, since constancy only matters within a single query.
A constant expression is also considered a constant (for example, the right half of the LIKE operator can be constructed from multiple constants).

Functions can be implemented in different ways for constant and non-constant arguments (different code is executed). But the results for a constant and for a true column containing only the same value should match each other.
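As a small sketch of this constancy: in the query below, both columns should contain the same timestamp, because `now()` is treated as a constant within a single query even though wall-clock time advances while it runs:

``` sql
SELECT now() AS t1, sleep(1), now() AS t2  -- t1 and t2 are equal within one query
```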
## NULL Processing {#null-processing}

Functions have the following behaviors:

- If at least one of the arguments of the function is `NULL`, the function result is also `NULL`.
- Special behavior that is specified individually in the description of each function. In the ClickHouse source code, these functions have `UseDefaultImplementationForNulls=false`.

## Constancy {#constancy}

Functions can't change the values of their arguments – any changes are returned as the result. Thus, the result of calculating separate functions does not depend on the order in which the functions are written in the query.

## Error Handling {#error-handling}

Some functions might throw an exception if the data is invalid. In this case, the query is canceled and an error message is returned to the client. For distributed processing, when an exception occurs on one of the servers, the other servers also attempt to abort the query.

## Evaluation of Argument Expressions {#evaluation-of-argument-expressions}

In almost all programming languages, one of the arguments might not be evaluated for certain operators. This is usually the operators `&&`, `||`, and `?:`.
But in ClickHouse, arguments of functions (operators) are always evaluated. This is because entire parts of columns are evaluated at once, instead of calculating each row separately.

## Performing Functions for Distributed Query Processing {#performing-functions-for-distributed-query-processing}

For distributed query processing, as many stages of query processing as possible are performed on remote servers, and the rest of the stages (merging intermediate results and everything after that) are performed on the requestor server.

This means that functions can be performed on different servers.
For example, in the query `SELECT f(sum(g(x))) FROM distributed_table GROUP BY h(y)`:

- if `distributed_table` has at least two shards, the functions 'g' and 'h' are performed on remote servers, and the function 'f' is performed on the requestor server.
- if `distributed_table` has only one shard, all the 'f', 'g', and 'h' functions are performed on this shard's server.

The result of a function usually doesn't depend on which server it is performed on. However, sometimes this is important.
For example, functions that work with dictionaries use the dictionary that exists on the server they are running on.
Another example is the `hostName` function, which returns the name of the server it is running on in order to make `GROUP BY` by servers in a `SELECT` query.

If a function in a query is performed on the requestor server, but you need to perform it on remote servers, you can wrap it in an 'any' aggregate function or add it to a key in `GROUP BY`, as sketched below.
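A minimal sketch of the second technique (assuming `distributed_table` is a Distributed table; the name is a placeholder): adding `hostName()` to the `GROUP BY` key forces it to be evaluated on each remote server rather than on the requestor:

``` sql
SELECT hostName() AS server, count() AS c
FROM distributed_table
GROUP BY server
```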
[Original article](https://clickhouse.tech/docs/en/query_language/functions/)

diff --git a/docs/fr/sql-reference/functions/introspection.md b/docs/fr/sql-reference/functions/introspection.md
deleted file mode 100644
index 91299217dc7..00000000000
--- a/docs/fr/sql-reference/functions/introspection.md
+++ /dev/null
@@ -1,310 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_priority: 65
toc_title: Introspection
---

# Introspection Functions {#introspection-functions}

You can use the functions described in this chapter to introspect [ELF](https://en.wikipedia.org/wiki/Executable_and_Linkable_Format) and [DWARF](https://en.wikipedia.org/wiki/DWARF) for query profiling.

!!! warning "Warning"
    These functions are slow and may impose security considerations.

For proper operation of introspection functions:

- Install the `clickhouse-common-static-dbg` package.

- Set the [allow_introspection_functions](../../operations/settings/settings.md#settings-allow_introspection_functions) setting to 1.

    For security reasons introspection functions are disabled by default.

ClickHouse saves profiler reports to the [trace_log](../../operations/system-tables.md#system_tables-trace_log) system table. Make sure the table and profiler are configured properly.

## addressToLine {#addresstoline}

Converts a virtual memory address inside the ClickHouse server process to the filename and the line number in the ClickHouse source code.

If you use official ClickHouse packages, you need to install the `clickhouse-common-static-dbg` package.

**Syntax**

``` sql
addressToLine(address_of_binary_instruction)
```

**Parameters**

- `address_of_binary_instruction` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Address of instruction in a running process.

**Returned value**

- Source code filename and the line number in this file, delimited by a colon.

    For example, `/build/obj-x86_64-linux-gnu/../src/Common/ThreadPool.cpp:199`, where `199` is a line number.

- Name of a binary, if the function couldn't find the debug information.

- Empty string, if the address is not valid.

Type: [String](../../sql-reference/data-types/string.md).

**Example**

Enabling introspection functions:

``` sql
SET allow_introspection_functions=1
```

Selecting the first string from the `trace_log` system table:

``` sql
SELECT * FROM system.trace_log LIMIT 1 \G
```

``` text
Row 1:
──────
event_date:    2019-11-19
event_time:    2019-11-19 18:57:23
revision:      54429
timer_type:    Real
thread_number: 48
query_id:      421b6855-1858-45a5-8f37-f383409d6d72
trace:         [140658411141617,94784174532828,94784076370703,94784076372094,94784076361020,94784175007680,140658411116251,140658403895439]
```

The `trace` field contains the stack trace at the moment of sampling.
Getting the source code filename and the line number for a single address:

``` sql
SELECT addressToLine(94784076370703) \G
```

``` text
Row 1:
──────
addressToLine(94784076370703): /build/obj-x86_64-linux-gnu/../src/Common/ThreadPool.cpp:199
```

Applying the function to the whole stack trace:

``` sql
SELECT
    arrayStringConcat(arrayMap(x -> addressToLine(x), trace), '\n') AS trace_source_code_lines
FROM system.trace_log
LIMIT 1
\G
```

The [arrayMap](higher-order-functions.md#higher_order_functions-array-map) function allows processing each individual element of the `trace` array with the `addressToLine` function. The result of this processing is shown in the `trace_source_code_lines` output column.

``` text
Row 1:
──────
trace_source_code_lines: /lib/x86_64-linux-gnu/libpthread-2.27.so
/usr/lib/debug/usr/bin/clickhouse
/build/obj-x86_64-linux-gnu/../src/Common/ThreadPool.cpp:199
/build/obj-x86_64-linux-gnu/../src/Common/ThreadPool.h:155
/usr/include/c++/9/bits/atomic_base.h:551
/usr/lib/debug/usr/bin/clickhouse
/lib/x86_64-linux-gnu/libpthread-2.27.so
/build/glibc-OTsEL5/glibc-2.27/misc/../sysdeps/unix/sysv/linux/x86_64/clone.S:97
```

## addressToSymbol {#addresstosymbol}

Converts a virtual memory address inside the ClickHouse server process to a symbol from ClickHouse object files.

**Syntax**

``` sql
addressToSymbol(address_of_binary_instruction)
```

**Parameters**

- `address_of_binary_instruction` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Address of instruction in a running process.

**Returned value**

- Symbol from ClickHouse object files.
- Empty string, if the address is not valid.

Type: [String](../../sql-reference/data-types/string.md).

**Example**

Enabling introspection functions:

``` sql
SET allow_introspection_functions=1
```

Selecting the first string from the `trace_log` system table:

``` sql
SELECT * FROM system.trace_log LIMIT 1 \G
```

``` text
Row 1:
──────
event_date:    2019-11-20
event_time:    2019-11-20 16:57:59
revision:      54429
timer_type:    Real
thread_number: 48
query_id:      724028bf-f550-45aa-910d-2af6212b94ac
trace:         [94138803686098,94138815010911,94138815096522,94138815101224,94138815102091,94138814222988,94138806823642,94138814457211,94138806823642,94138814457211,94138806823642,94138806795179,94138806796144,94138753770094,94138753771646,94138753760572,94138852407232,140399185266395,140399178045583]
```

The `trace` field contains the stack trace at the moment of sampling.

Getting a symbol for a single address:

``` sql
SELECT addressToSymbol(94138803686098) \G
```

``` text
Row 1:
──────
addressToSymbol(94138803686098): _ZNK2DB24IAggregateFunctionHelperINS_20AggregateFunctionSumImmNS_24AggregateFunctionSumDataImEEEEE19addBatchSinglePlaceEmPcPPKNS_7IColumnEPNS_5ArenaE
```

Applying the function to the whole stack trace:

``` sql
SELECT
    arrayStringConcat(arrayMap(x -> addressToSymbol(x), trace), '\n') AS trace_symbols
FROM system.trace_log
LIMIT 1
\G
```

The [arrayMap](higher-order-functions.md#higher_order_functions-array-map) function allows processing each individual element of the `trace` array with the `addressToSymbol` function. The result of this processing is shown in the `trace_symbols` output column.
``` text
Row 1:
──────
trace_symbols: _ZNK2DB24IAggregateFunctionHelperINS_20AggregateFunctionSumImmNS_24AggregateFunctionSumDataImEEEEE19addBatchSinglePlaceEmPcPPKNS_7IColumnEPNS_5ArenaE
_ZNK2DB10Aggregator21executeWithoutKeyImplERPcmPNS0_28AggregateFunctionInstructionEPNS_5ArenaE
_ZN2DB10Aggregator14executeOnBlockESt6vectorIN3COWINS_7IColumnEE13immutable_ptrIS3_EESaIS6_EEmRNS_22AggregatedDataVariantsERS1_IPKS3_SaISC_EERS1_ISE_SaISE_EERb
_ZN2DB10Aggregator14executeOnBlockERKNS_5BlockERNS_22AggregatedDataVariantsERSt6vectorIPKNS_7IColumnESaIS9_EERS6_ISB_SaISB_EERb
_ZN2DB10Aggregator7executeERKSt10shared_ptrINS_17IBlockInputStreamEERNS_22AggregatedDataVariantsE
_ZN2DB27AggregatingBlockInputStream8readImplEv
_ZN2DB17IBlockInputStream4readEv
_ZN2DB26ExpressionBlockInputStream8readImplEv
_ZN2DB17IBlockInputStream4readEv
_ZN2DB26ExpressionBlockInputStream8readImplEv
_ZN2DB17IBlockInputStream4readEv
_ZN2DB28AsynchronousBlockInputStream9calculateEv
_ZNSt17_Function_handlerIFvvEZN2DB28AsynchronousBlockInputStream4nextEvEUlvE_E9_M_invokeERKSt9_Any_data
_ZN14ThreadPoolImplI20ThreadFromGlobalPoolE6workerESt14_List_iteratorIS0_E
_ZZN20ThreadFromGlobalPoolC4IZN14ThreadPoolImplIS_E12scheduleImplIvEET_St8functionIFvvEEiSt8optionalImEEUlvE1_JEEEOS4_DpOT0_ENKUlvE_clEv
_ZN14ThreadPoolImplISt6threadE6workerESt14_List_iteratorIS0_E
execute_native_thread_routine
start_thread
clone
```

## demangle {#demangle}

Converts a symbol that you can get using the [addressToSymbol](#addresstosymbol) function to the C++ function name.

**Syntax**

``` sql
demangle(symbol)
```

**Parameters**

- `symbol` ([String](../../sql-reference/data-types/string.md)) — Symbol from an object file.

**Returned value**

- Name of the C++ function.
- Empty string if a symbol is not valid.

Type: [String](../../sql-reference/data-types/string.md).

**Example**

Enabling introspection functions:

``` sql
SET allow_introspection_functions=1
```

Selecting the first string from the `trace_log` system table:

``` sql
SELECT * FROM system.trace_log LIMIT 1 \G
```

``` text
Row 1:
──────
event_date:    2019-11-20
event_time:    2019-11-20 16:57:59
revision:      54429
timer_type:    Real
thread_number: 48
query_id:      724028bf-f550-45aa-910d-2af6212b94ac
trace:         [94138803686098,94138815010911,94138815096522,94138815101224,94138815102091,94138814222988,94138806823642,94138814457211,94138806823642,94138814457211,94138806823642,94138806795179,94138806796144,94138753770094,94138753771646,94138753760572,94138852407232,140399185266395,140399178045583]
```

The `trace` field contains the stack trace at the moment of sampling.

Getting a function name for a single address:

``` sql
SELECT demangle(addressToSymbol(94138803686098)) \G
```

``` text
Row 1:
──────
demangle(addressToSymbol(94138803686098)): DB::IAggregateFunctionHelper > >::addBatchSinglePlace(unsigned long, char*, DB::IColumn const**, DB::Arena*) const
```

Applying the function to the whole stack trace:

``` sql
SELECT
    arrayStringConcat(arrayMap(x -> demangle(addressToSymbol(x)), trace), '\n') AS trace_functions
FROM system.trace_log
LIMIT 1
\G
```

The [arrayMap](higher-order-functions.md#higher_order_functions-array-map) function allows processing each individual element of the `trace` array with the `demangle` function. The result of this processing is shown in the `trace_functions` output column.
``` text
Row 1:
──────
trace_functions: DB::IAggregateFunctionHelper > >::addBatchSinglePlace(unsigned long, char*, DB::IColumn const**, DB::Arena*) const
DB::Aggregator::executeWithoutKeyImpl(char*&, unsigned long, DB::Aggregator::AggregateFunctionInstruction*, DB::Arena*) const
DB::Aggregator::executeOnBlock(std::vector::immutable_ptr, std::allocator::immutable_ptr > >, unsigned long, DB::AggregatedDataVariants&, std::vector >&, std::vector >, std::allocator > > >&, bool&)
DB::Aggregator::executeOnBlock(DB::Block const&, DB::AggregatedDataVariants&, std::vector >&, std::vector >, std::allocator > > >&, bool&)
DB::Aggregator::execute(std::shared_ptr const&, DB::AggregatedDataVariants&)
DB::AggregatingBlockInputStream::readImpl()
DB::IBlockInputStream::read()
DB::ExpressionBlockInputStream::readImpl()
DB::IBlockInputStream::read()
DB::ExpressionBlockInputStream::readImpl()
DB::IBlockInputStream::read()
DB::AsynchronousBlockInputStream::calculate()
std::_Function_handler::_M_invoke(std::_Any_data const&)
ThreadPoolImpl::worker(std::_List_iterator)
ThreadFromGlobalPool::ThreadFromGlobalPool::scheduleImpl(std::function, int, std::optional)::{lambda()#3}>(ThreadPoolImpl::scheduleImpl(std::function, int, std::optional)::{lambda()#3}&&)::{lambda()#1}::operator()() const
ThreadPoolImpl::worker(std::_List_iterator)
execute_native_thread_routine
start_thread
clone
```

diff --git a/docs/fr/sql-reference/functions/ip-address-functions.md b/docs/fr/sql-reference/functions/ip-address-functions.md
deleted file mode 100644
index 8beb40a534b..00000000000
--- a/docs/fr/sql-reference/functions/ip-address-functions.md
+++ /dev/null
@@ -1,248 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_priority: 55
toc_title: Working with IP Addresses
---

# Functions for Working with IP Addresses {#functions-for-working-with-ip-addresses}

## IPv4NumToString(num) {#ipv4numtostringnum}

Takes a UInt32 number. Interprets it as an IPv4 address in big endian. Returns a string containing the corresponding IPv4 address in the format A.B.C.D (dot-separated numbers in decimal form).

## IPv4StringToNum(s) {#ipv4stringtonums}

The reverse function of IPv4NumToString. If the IPv4 address has an invalid format, it returns 0.

## IPv4NumToStringClassC(num) {#ipv4numtostringclasscnum}

Similar to IPv4NumToString, but uses xxx instead of the last octet.

Example:

``` sql
SELECT
    IPv4NumToStringClassC(ClientIP) AS k,
    count() AS c
FROM test.hits
GROUP BY k
ORDER BY c DESC
LIMIT 10
```

``` text
┌─k──────────────┬─────c─┐
│ 83.149.9.xxx   │ 26238 │
│ 217.118.81.xxx │ 26074 │
│ 213.87.129.xxx │ 25481 │
│ 83.149.8.xxx   │ 24984 │
│ 217.118.83.xxx │ 22797 │
│ 78.25.120.xxx  │ 22354 │
│ 213.87.131.xxx │ 21285 │
│ 78.25.121.xxx  │ 20887 │
│ 188.162.65.xxx │ 19694 │
│ 83.149.48.xxx  │ 17406 │
└────────────────┴───────┘
```

Since using 'xxx' is highly unusual, this may be changed in the future. We recommend that you don't rely on the exact format of this fragment.

## IPv6NumToString(x) {#ipv6numtostringx}

Accepts a FixedString(16) value containing the IPv6 address in binary format. Returns a string containing this address in text format.
IPv6-mapped IPv4 addresses are output in the format ::ffff:111.222.33.44.
Examples:

``` sql
SELECT IPv6NumToString(toFixedString(unhex('2A0206B8000000000000000000000011'), 16)) AS addr
```

``` text
┌─addr─────────┐
│ 2a02:6b8::11 │
└──────────────┘
```

``` sql
SELECT
    IPv6NumToString(ClientIP6 AS k),
    count() AS c
FROM hits_all
WHERE EventDate = today() AND substring(ClientIP6, 1, 12) != unhex('00000000000000000000FFFF')
GROUP BY k
ORDER BY c DESC
LIMIT 10
```

``` text
┌─IPv6NumToString(ClientIP6)──────────────┬─────c─┐
│ 2a02:2168:aaa:bbbb::2                   │ 24695 │
│ 2a02:2698:abcd:abcd:abcd:abcd:8888:5555 │ 22408 │
│ 2a02:6b8:0:fff::ff                      │ 16389 │
│ 2a01:4f8:111:6666::2                    │ 16016 │
│ 2a02:2168:888:222::1                    │ 15896 │
│ 2a01:7e00::ffff:ffff:ffff:222           │ 14774 │
│ 2a02:8109:eee:ee:eeee:eeee:eeee:eeee    │ 14443 │
│ 2a02:810b:8888:888:8888:8888:8888:8888  │ 14345 │
│ 2a02:6b8:0:444:4444:4444:4444:4444      │ 14279 │
│ 2a01:7e00::ffff:ffff:ffff:ffff          │ 13880 │
└─────────────────────────────────────────┴───────┘
```

``` sql
SELECT
    IPv6NumToString(ClientIP6 AS k),
    count() AS c
FROM hits_all
WHERE EventDate = today()
GROUP BY k
ORDER BY c DESC
LIMIT 10
```

``` text
┌─IPv6NumToString(ClientIP6)─┬──────c─┐
│ ::ffff:94.26.111.111       │ 747440 │
│ ::ffff:37.143.222.4        │ 529483 │
│ ::ffff:5.166.111.99        │ 317707 │
│ ::ffff:46.38.11.77         │ 263086 │
│ ::ffff:79.105.111.111      │ 186611 │
│ ::ffff:93.92.111.88        │ 176773 │
│ ::ffff:84.53.111.33        │ 158709 │
│ ::ffff:217.118.11.22       │ 154004 │
│ ::ffff:217.118.11.33       │ 148449 │
│ ::ffff:217.118.11.44       │ 148243 │
└────────────────────────────┴────────┘
```

## IPv6StringToNum(s) {#ipv6stringtonums}

The reverse function of IPv6NumToString. If the IPv6 address has an invalid format, it returns a string of null bytes.
HEX can be uppercase or lowercase.

## IPv4ToIPv6(x) {#ipv4toipv6x}

Takes a `UInt32` number. Interprets it as an IPv4 address in [big endian](https://en.wikipedia.org/wiki/Endianness). Returns a `FixedString(16)` value containing the IPv6 address in binary format. Example:

``` sql
SELECT IPv6NumToString(IPv4ToIPv6(IPv4StringToNum('192.168.0.1'))) AS addr
```

``` text
┌─addr───────────────┐
│ ::ffff:192.168.0.1 │
└────────────────────┘
```

## cutIPv6(x, bytesToCutForIPv6, bytesToCutForIPv4) {#cutipv6x-bytestocutforipv6-bytestocutforipv4}

Accepts a FixedString(16) value containing the IPv6 address in binary format. Returns a string containing the address with the specified number of bytes removed, in text format. Example:

``` sql
WITH
    IPv6StringToNum('2001:0DB8:AC10:FE01:FEED:BABE:CAFE:F00D') AS ipv6,
    IPv4ToIPv6(IPv4StringToNum('192.168.0.1')) AS ipv4
SELECT
    cutIPv6(ipv6, 2, 0),
    cutIPv6(ipv4, 0, 2)
```

``` text
┌─cutIPv6(ipv6, 2, 0)─────────────────┬─cutIPv6(ipv4, 0, 2)─┐
│ 2001:db8:ac10:fe01:feed:babe:cafe:0 │ ::ffff:192.168.0.0  │
└─────────────────────────────────────┴─────────────────────┘
```

## IPv4CIDRToRange(ipv4, Cidr) {#ipv4cidrtorangeipv4-cidr}

Accepts an IPv4 and a UInt8 value containing the [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing). Returns a tuple with two IPv4 addresses containing the lower and higher range of the subnet.

``` sql
SELECT IPv4CIDRToRange(toIPv4('192.168.5.2'), 16)
```

``` text
┌─IPv4CIDRToRange(toIPv4('192.168.5.2'), 16)─┐
│ ('192.168.0.0','192.168.255.255')          │
└────────────────────────────────────────────┘
```

## IPv6CIDRToRange(ipv6, Cidr) {#ipv6cidrtorangeipv6-cidr}

Accepts an IPv6 and a UInt8 value containing the CIDR.
Returns a tuple with two IPv6 addresses containing the lower and higher range of the subnet.

``` sql
SELECT IPv6CIDRToRange(toIPv6('2001:0db8:0000:85a3:0000:0000:ac1f:8001'), 32);
```

``` text
┌─IPv6CIDRToRange(toIPv6('2001:0db8:0000:85a3:0000:0000:ac1f:8001'), 32)─┐
│ ('2001:db8::','2001:db8:ffff:ffff:ffff:ffff:ffff:ffff')                │
└────────────────────────────────────────────────────────────────────────┘
```

## toIPv4(string) {#toipv4string}

An alias of `IPv4StringToNum()` that takes a string form of an IPv4 address and returns a value of [IPv4](../../sql-reference/data-types/domains/ipv4.md) type, which is binary equal to the value returned by `IPv4StringToNum()`.

``` sql
WITH
    '171.225.130.45' as IPv4_string
SELECT
    toTypeName(IPv4StringToNum(IPv4_string)),
    toTypeName(toIPv4(IPv4_string))
```

``` text
┌─toTypeName(IPv4StringToNum(IPv4_string))─┬─toTypeName(toIPv4(IPv4_string))─┐
│ UInt32                                   │ IPv4                            │
└──────────────────────────────────────────┴─────────────────────────────────┘
```

``` sql
WITH
    '171.225.130.45' as IPv4_string
SELECT
    hex(IPv4StringToNum(IPv4_string)),
    hex(toIPv4(IPv4_string))
```

``` text
┌─hex(IPv4StringToNum(IPv4_string))─┬─hex(toIPv4(IPv4_string))─┐
│ ABE1822D                          │ ABE1822D                 │
└───────────────────────────────────┴──────────────────────────┘
```

## toIPv6(string) {#toipv6string}

An alias of `IPv6StringToNum()` that takes a string form of an IPv6 address and returns a value of [IPv6](../../sql-reference/data-types/domains/ipv6.md) type, which is binary equal to the value returned by `IPv6StringToNum()`.

``` sql
WITH
    '2001:438:ffff::407d:1bc1' as IPv6_string
SELECT
    toTypeName(IPv6StringToNum(IPv6_string)),
    toTypeName(toIPv6(IPv6_string))
```

``` text
┌─toTypeName(IPv6StringToNum(IPv6_string))─┬─toTypeName(toIPv6(IPv6_string))─┐
│ FixedString(16)                          │ IPv6                            │
└──────────────────────────────────────────┴─────────────────────────────────┘
```

``` sql
WITH
    '2001:438:ffff::407d:1bc1' as IPv6_string
SELECT
    hex(IPv6StringToNum(IPv6_string)),
    hex(toIPv6(IPv6_string))
```

``` text
┌─hex(IPv6StringToNum(IPv6_string))─┬─hex(toIPv6(IPv6_string))─────────┐
│ 20010438FFFF000000000000407D1BC1  │ 20010438FFFF000000000000407D1BC1 │
└───────────────────────────────────┴──────────────────────────────────┘
```

[Original article](https://clickhouse.tech/docs/en/query_language/functions/ip_address_functions/)

diff --git a/docs/fr/sql-reference/functions/json-functions.md b/docs/fr/sql-reference/functions/json-functions.md
deleted file mode 100644
index 5f92c99d0f5..00000000000
--- a/docs/fr/sql-reference/functions/json-functions.md
+++ /dev/null
@@ -1,297 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_priority: 56
toc_title: Working with JSON
---

# Functions for Working with JSON {#functions-for-working-with-json}

In Yandex.Metrica, JSON is passed by users as session parameters. There are some special functions for working with this JSON. (Although in most of the cases, the JSONs are additionally pre-processed, and the resulting values are put in separate columns in their processed format.) All these functions are based on strong assumptions about what the JSON can be, but they try to do as little as possible to get the job done.

The following assumptions are made:

1. The field name (function argument) must be a constant.
2. The field name is somehow canonically encoded in JSON. For example: `visitParamHas('{"abc":"def"}', 'abc') = 1`, but `visitParamHas('{"\\u0061\\u0062\\u0063":"def"}', 'abc') = 0`
3. Fields are searched for on any nesting level, indiscriminately. If there are multiple matching fields, the first occurrence is used.
4. The JSON doesn't have space characters outside of string literals.

## visitParamHas(params, name) {#visitparamhasparams-name}

Checks whether there is a field with the 'name' name.

## visitParamExtractUInt(params, name) {#visitparamextractuintparams-name}

Parses UInt64 from the value of the field named 'name'. If this is a string field, it tries to parse a number from the beginning of the string. If the field doesn't exist, or it exists but doesn't contain a number, it returns 0.

## visitParamExtractInt(params, name) {#visitparamextractintparams-name}

The same as for Int64.

## visitParamExtractFloat(params, name) {#visitparamextractfloatparams-name}

The same as for Float64.

## visitParamExtractBool(params, name) {#visitparamextractboolparams-name}

Parses a true/false value. The result is UInt8.

## visitParamExtractRaw(params, name) {#visitparamextractrawparams-name}

Returns the value of a field, including separators.

Examples:

``` sql
visitParamExtractRaw('{"abc":"\\n\\u0000"}', 'abc') = '"\\n\\u0000"'
visitParamExtractRaw('{"abc":{"def":[1,2,3]}}', 'abc') = '{"def":[1,2,3]}'
```

## visitParamExtractString(params, name) {#visitparamextractstringparams-name}

Parses the string in double quotes. The value is unescaped. If unescaping failed, it returns an empty string.

Examples:

``` sql
visitParamExtractString('{"abc":"\\n\\u0000"}', 'abc') = '\n\0'
visitParamExtractString('{"abc":"\\u263a"}', 'abc') = '☺'
visitParamExtractString('{"abc":"\\u263"}', 'abc') = ''
visitParamExtractString('{"abc":"hello}', 'abc') = ''
```

There is currently no support for code points in the format `\uXXXX\uYYYY` that are not from the basic multilingual plane (they are converted to CESU-8 instead of UTF-8).

The following functions are based on [simdjson](https://github.com/lemire/simdjson), designed for more complex JSON parsing requirements. Assumption 2 mentioned above still applies.

## isValidJSON(json) {#isvalidjsonjson}

Checks that the string passed is valid JSON.

Examples:

``` sql
SELECT isValidJSON('{"a": "hello", "b": [-100, 200.0, 300]}') = 1
SELECT isValidJSON('not a json') = 0
```

## JSONHas(json\[, indices_or_keys\]…) {#jsonhasjson-indices-or-keys}

If the value exists in the JSON document, `1` will be returned.

If the value does not exist, `0` will be returned.

Examples:

``` sql
SELECT JSONHas('{"a": "hello", "b": [-100, 200.0, 300]}', 'b') = 1
SELECT JSONHas('{"a": "hello", "b": [-100, 200.0, 300]}', 'b', 4) = 0
```

`indices_or_keys` is a list of zero or more arguments, each of which can be either a string or an integer.

- String = access object member by key.
- Positive integer = access the n-th member/key from the beginning.
- Negative integer = access the n-th member/key from the end.

The minimum index of an element is 1. Thus the element 0 doesn't exist.

You may use integers to access both JSON arrays and JSON objects.
So, for example:

``` sql
SELECT JSONExtractKey('{"a": "hello", "b": [-100, 200.0, 300]}', 1) = 'a'
SELECT JSONExtractKey('{"a": "hello", "b": [-100, 200.0, 300]}', 2) = 'b'
SELECT JSONExtractKey('{"a": "hello", "b": [-100, 200.0, 300]}', -1) = 'b'
SELECT JSONExtractKey('{"a": "hello", "b": [-100, 200.0, 300]}', -2) = 'a'
SELECT JSONExtractString('{"a": "hello", "b": [-100, 200.0, 300]}', 1) = 'hello'
```

## JSONLength(json\[, indices_or_keys\]…) {#jsonlengthjson-indices-or-keys}

Returns the length of a JSON array or a JSON object.

If the value does not exist or has the wrong type, `0` will be returned.

Examples:

``` sql
SELECT JSONLength('{"a": "hello", "b": [-100, 200.0, 300]}', 'b') = 3
SELECT JSONLength('{"a": "hello", "b": [-100, 200.0, 300]}') = 2
```

## JSONType(json\[, indices_or_keys\]…) {#jsontypejson-indices-or-keys}

Returns the type of a JSON value.

If the value does not exist, `Null` will be returned.

Examples:

``` sql
SELECT JSONType('{"a": "hello", "b": [-100, 200.0, 300]}') = 'Object'
SELECT JSONType('{"a": "hello", "b": [-100, 200.0, 300]}', 'a') = 'String'
SELECT JSONType('{"a": "hello", "b": [-100, 200.0, 300]}', 'b') = 'Array'
```

## JSONExtractUInt(json\[, indices_or_keys\]…) {#jsonextractuintjson-indices-or-keys}

## JSONExtractInt(json\[, indices_or_keys\]…) {#jsonextractintjson-indices-or-keys}

## JSONExtractFloat(json\[, indices_or_keys\]…) {#jsonextractfloatjson-indices-or-keys}

## JSONExtractBool(json\[, indices_or_keys\]…) {#jsonextractbooljson-indices-or-keys}

Parses JSON and extracts a value. These functions are similar to the `visitParam` functions.

If the value does not exist or has the wrong type, `0` will be returned.

Examples:

``` sql
SELECT JSONExtractInt('{"a": "hello", "b": [-100, 200.0, 300]}', 'b', 1) = -100
SELECT JSONExtractFloat('{"a": "hello", "b": [-100, 200.0, 300]}', 'b', 2) = 200.0
SELECT JSONExtractUInt('{"a": "hello", "b": [-100, 200.0, 300]}', 'b', -1) = 300
```

## JSONExtractString(json\[, indices_or_keys\]…) {#jsonextractstringjson-indices-or-keys}

Parses JSON and extracts a string. This function is similar to the `visitParamExtractString` function.

If the value does not exist or has the wrong type, an empty string will be returned.

The value is unescaped. If unescaping failed, it returns an empty string.

Examples:

``` sql
SELECT JSONExtractString('{"a": "hello", "b": [-100, 200.0, 300]}', 'a') = 'hello'
SELECT JSONExtractString('{"abc":"\\n\\u0000"}', 'abc') = '\n\0'
SELECT JSONExtractString('{"abc":"\\u263a"}', 'abc') = '☺'
SELECT JSONExtractString('{"abc":"\\u263"}', 'abc') = ''
SELECT JSONExtractString('{"abc":"hello}', 'abc') = ''
```

## JSONExtract(json\[, indices_or_keys…\], Return_type) {#jsonextractjson-indices-or-keys-return-type}

Parses JSON and extracts a value of the given ClickHouse data type.

This is a generalization of the previous `JSONExtract` functions.
This means
`JSONExtract(..., 'String')` returns exactly the same as `JSONExtractString()`,
`JSONExtract(..., 'Float64')` returns exactly the same as `JSONExtractFloat()`.
Examples:

``` sql
SELECT JSONExtract('{"a": "hello", "b": [-100, 200.0, 300]}', 'Tuple(String, Array(Float64))') = ('hello',[-100,200,300])
SELECT JSONExtract('{"a": "hello", "b": [-100, 200.0, 300]}', 'Tuple(b Array(Float64), a String)') = ([-100,200,300],'hello')
SELECT JSONExtract('{"a": "hello", "b": [-100, 200.0, 300]}', 'b', 'Array(Nullable(Int8))') = [-100, NULL, NULL]
SELECT JSONExtract('{"a": "hello", "b": [-100, 200.0, 300]}', 'b', 4, 'Nullable(Int64)') = NULL
SELECT JSONExtract('{"passed": true}', 'passed', 'UInt8') = 1
SELECT JSONExtract('{"day": "Thursday"}', 'day', 'Enum8(\'Sunday\' = 0, \'Monday\' = 1, \'Tuesday\' = 2, \'Wednesday\' = 3, \'Thursday\' = 4, \'Friday\' = 5, \'Saturday\' = 6)') = 'Thursday'
SELECT JSONExtract('{"day": 5}', 'day', 'Enum8(\'Sunday\' = 0, \'Monday\' = 1, \'Tuesday\' = 2, \'Wednesday\' = 3, \'Thursday\' = 4, \'Friday\' = 5, \'Saturday\' = 6)') = 'Friday'
```

## JSONExtractKeysAndValues(json\[, indices_or_keys…\], Value_type) {#jsonextractkeysandvaluesjson-indices-or-keys-value-type}

Parses key-value pairs from JSON where the values are of the given ClickHouse data type.

Example:

``` sql
SELECT JSONExtractKeysAndValues('{"x": {"a": 5, "b": 7, "c": 11}}', 'x', 'Int8') = [('a',5),('b',7),('c',11)]
```

## JSONExtractRaw(json\[, indices_or_keys\]…) {#jsonextractrawjson-indices-or-keys}

Returns a part of JSON as an unparsed string.

If the part does not exist or has the wrong type, an empty string will be returned.

Example:

``` sql
SELECT JSONExtractRaw('{"a": "hello", "b": [-100, 200.0, 300]}', 'b') = '[-100, 200.0, 300]'
```

## JSONExtractArrayRaw(json\[, indices_or_keys…\]) {#jsonextractarrayrawjson-indices-or-keys}

Returns an array with elements of a JSON array, each represented as an unparsed string.

If the part does not exist or isn't an array, an empty array will be returned.

Example:

``` sql
SELECT JSONExtractArrayRaw('{"a": "hello", "b": [-100, 200.0, "hello"]}', 'b') = ['-100', '200.0', '"hello"']
```

## JSONExtractKeysAndValuesRaw {#json-extract-keys-and-values-raw}

Extracts raw data from a JSON object.

**Syntax**

``` sql
JSONExtractKeysAndValuesRaw(json[, p, a, t, h])
```

**Parameters**

- `json` — [String](../data-types/string.md) with valid JSON.
- `p, a, t, h` — Comma-separated indices or keys that specify the path to the inner field in a nested JSON object. Each argument can be either a [string](../data-types/string.md) to get the field by the key or an [integer](../data-types/int-uint.md) to get the N-th field (indexed from 1, negative integers count from the end). If not set, the whole JSON is parsed as the top-level object. Optional parameter.

**Returned values**

- Array with `('key', 'value')` tuples. Both tuple members are strings.
- Empty array if the requested object does not exist, or the input JSON is invalid.

Type: [Array](../data-types/array.md)([Tuple](../data-types/tuple.md)([String](../data-types/string.md), [String](../data-types/string.md))).
- -**Exemple** - -Requête: - -``` sql -SELECT JSONExtractKeysAndValuesRaw('{"a": [-100, 200.0], "b":{"c": {"d": "hello", "f": "world"}}}') -``` - -Résultat: - -``` text -┌─JSONExtractKeysAndValuesRaw('{"a": [-100, 200.0], "b":{"c": {"d": "hello", "f": "world"}}}')─┐ -│ [('a','[-100,200]'),('b','{"c":{"d":"hello","f":"world"}}')] │ -└──────────────────────────────────────────────────────────────────────────────────────────────┘ -``` - -Requête: - -``` sql -SELECT JSONExtractKeysAndValuesRaw('{"a": [-100, 200.0], "b":{"c": {"d": "hello", "f": "world"}}}', 'b') -``` - -Résultat: - -``` text -┌─JSONExtractKeysAndValuesRaw('{"a": [-100, 200.0], "b":{"c": {"d": "hello", "f": "world"}}}', 'b')─┐ -│ [('c','{"d":"hello","f":"world"}')] │ -└───────────────────────────────────────────────────────────────────────────────────────────────────┘ -``` - -Requête: - -``` sql -SELECT JSONExtractKeysAndValuesRaw('{"a": [-100, 200.0], "b":{"c": {"d": "hello", "f": "world"}}}', -1, 'c') -``` - -Résultat: - -``` text -┌─JSONExtractKeysAndValuesRaw('{"a": [-100, 200.0], "b":{"c": {"d": "hello", "f": "world"}}}', -1, 'c')─┐ -│ [('d','"hello"'),('f','"world"')] │ -└───────────────────────────────────────────────────────────────────────────────────────────────────────┘ -``` - -[Article Original](https://clickhouse.tech/docs/en/query_language/functions/json_functions/) diff --git a/docs/fr/sql-reference/functions/logical-functions.md b/docs/fr/sql-reference/functions/logical-functions.md deleted file mode 100644 index d01d9e02088..00000000000 --- a/docs/fr/sql-reference/functions/logical-functions.md +++ /dev/null @@ -1,22 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 37 -toc_title: Logique ---- - -# Les Fonctions Logiques {#logical-functions} - -Les fonctions logiques acceptent tous les types numériques, mais renvoient un nombre UInt8 égal à 0 ou 1. - -Zéro comme argument est considéré “false,” alors que toute valeur non nulle est considérée comme “true”. - -## et, et opérateur {#and-and-operator} - -## ou, ou opérateur {#or-or-operator} - -## pas, pas opérateur {#not-not-operator} - -## xor {#xor} - -[Article Original](https://clickhouse.tech/docs/en/query_language/functions/logical_functions/) diff --git a/docs/fr/sql-reference/functions/machine-learning-functions.md b/docs/fr/sql-reference/functions/machine-learning-functions.md deleted file mode 100644 index 2212e0caa5a..00000000000 --- a/docs/fr/sql-reference/functions/machine-learning-functions.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 64 -toc_title: Fonctions D'Apprentissage Automatique ---- - -# Fonctions D'Apprentissage Automatique {#machine-learning-functions} - -## evalMLMethod (prédiction) {#machine_learning_methods-evalmlmethod} - -Prédiction utilisant des modèles de régression ajustés utilise `evalMLMethod` fonction. Voir le lien dans la `linearRegression`. - -### Régression Linéaire Stochastique {#stochastic-linear-regression} - -Le [stochasticLinearRegression](../../sql-reference/aggregate-functions/reference.md#agg_functions-stochasticlinearregression) la fonction d'agrégat implémente une méthode de descente de gradient stochastique utilisant un modèle linéaire et une fonction de perte MSE. Utiliser `evalMLMethod` prédire sur de nouvelles données. 
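-
-À titre d'illustration, voici une esquisse minimale du flux « entraînement puis prédiction ». Les noms de table et de colonnes (`train_data`, `test_data`, `target`, `param1`, `param2`) sont des hypothèses introduites pour l'exemple:
-
-``` sql
--- Entraînement : l'état du modèle est produit par le combinateur -State
--- de la fonction d'agrégat, puis stocké dans une table.
-CREATE TABLE your_model ENGINE = Memory AS
-SELECT stochasticLinearRegressionState(0.1, 0.0, 5, 'SGD')(target, param1, param2) AS state
-FROM train_data;
-
--- Prédiction : evalMLMethod applique l'état sauvegardé aux nouvelles données.
-WITH (SELECT state FROM your_model) AS model
-SELECT evalMLMethod(model, param1, param2) AS prediction
-FROM test_data;
-```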
- -### Régression Logistique Stochastique {#stochastic-logistic-regression} - -Le [stochasticLogisticRegression](../../sql-reference/aggregate-functions/reference.md#agg_functions-stochasticlogisticregression) la fonction d'agrégation implémente la méthode de descente de gradient stochastique pour le problème de classification binaire. Utiliser `evalMLMethod` prédire sur de nouvelles données. diff --git a/docs/fr/sql-reference/functions/math-functions.md b/docs/fr/sql-reference/functions/math-functions.md deleted file mode 100644 index f5dff150caa..00000000000 --- a/docs/fr/sql-reference/functions/math-functions.md +++ /dev/null @@ -1,116 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 44 -toc_title: "Math\xE9matique" ---- - -# Fonctions Mathématiques {#mathematical-functions} - -Toutes les fonctions renvoient un nombre Float64. La précision du résultat est proche de la précision maximale possible, mais le résultat peut ne pas coïncider avec le nombre représentable de la machine le plus proche du nombre réel correspondant. - -## e() {#e} - -Renvoie un nombre Float64 proche du nombre E. - -## pi() {#pi} - -Returns a Float64 number that is close to the number π. - -## exp (x) {#expx} - -Accepte un argument numérique et renvoie un Float64 nombre proche de l'exposant de l'argument. - -## log(x), ln (x) {#logx-lnx} - -Accepte un argument numérique et renvoie un nombre Float64 proche du logarithme naturel de l'argument. - -## exp2 (x) {#exp2x} - -Accepte un argument numérique et renvoie un nombre Float64 proche de 2 à la puissance de X. - -## log2 (x) {#log2x} - -Accepte un argument numérique et renvoie un Float64 nombre proximité du logarithme binaire de l'argument. - -## exp10 (x) {#exp10x} - -Accepte un argument numérique et renvoie un nombre Float64 proche de 10 à la puissance de X. - -## log10 (x) {#log10x} - -Accepte un argument numérique et renvoie un nombre Float64 proche du logarithme décimal de l'argument. - -## sqrt (x) {#sqrtx} - -Accepte un argument numérique et renvoie un Float64 nombre proche de la racine carrée de l'argument. - -## cbrt (x) {#cbrtx} - -Accepte un argument numérique et renvoie un Float64 nombre proche de la racine cubique de l'argument. - -## erf (x) {#erfx} - -Si ‘x’ est non négatif, alors `erf(x / σ√2)` est la probabilité qu'une variable aléatoire ayant une distribution normale avec un écart type ‘σ’ prend la valeur qui est séparée de la valeur attendue par plus de ‘x’. - -Exemple (règle de trois sigma): - -``` sql -SELECT erf(3 / sqrt(2)) -``` - -``` text -┌─erf(divide(3, sqrt(2)))─┐ -│ 0.9973002039367398 │ -└─────────────────────────┘ -``` - -## erfc (x) {#erfcx} - -Accepte un argument numérique et renvoie un nombre Float64 proche de 1-erf (x), mais sans perte de précision pour ‘x’ valeur. - -## lgamma (x) {#lgammax} - -Le logarithme de la fonction gamma. - -## tgamma (x) {#tgammax} - -La fonction Gamma. - -## sin (x) {#sinx} - -Sine. - -## cos (x) {#cosx} - -Cosinus. - -## tan (x) {#tanx} - -Tangente. - -## asin (x) {#asinx} - -Le sinus d'arc. - -## acos (x) {#acosx} - -Le cosinus de l'arc. - -## atan (x) {#atanx} - -L'arc tangente. - -## pow(x, y), la puissance(x, y) {#powx-y-powerx-y} - -Prend deux arguments numériques x et Y. renvoie un nombre Float64 proche de x à la puissance de Y. - -## intExp2 {#intexp2} - -Accepte un argument numérique et renvoie un nombre UInt64 proche de 2 à la puissance de X. 
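-
-Petit exemple illustratif de la différence de type de retour entre `exp2` et `intExp2`:
-
-``` sql
-SELECT
-    exp2(10) AS f,     -- 1024, de type Float64
-    intExp2(10) AS i   -- 1024, de type UInt64
-```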
- -## intExp10 {#intexp10} - -Accepte un argument numérique et renvoie un nombre UInt64 proche de 10 à la puissance de X. - -[Article Original](https://clickhouse.tech/docs/en/query_language/functions/math_functions/) diff --git a/docs/fr/sql-reference/functions/other-functions.md b/docs/fr/sql-reference/functions/other-functions.md deleted file mode 100644 index e5c6abedd75..00000000000 --- a/docs/fr/sql-reference/functions/other-functions.md +++ /dev/null @@ -1,1205 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 66 -toc_title: Autre ---- - -# D'Autres Fonctions {#other-functions} - -## hôte() {#hostname} - -Renvoie une chaîne avec le nom de l'hôte sur lequel cette fonction a été exécutée. Pour le traitement distribué, c'est le nom du serveur distant, si la fonction est exécutée sur un serveur distant. - -## getMacro {#getmacro} - -Obtient une valeur nommée à partir [macro](../../operations/server-configuration-parameters/settings.md#macros) la section de la configuration du serveur. - -**Syntaxe** - -``` sql -getMacro(name); -``` - -**Paramètre** - -- `name` — Name to retrieve from the `macros` section. [Chaîne](../../sql-reference/data-types/string.md#string). - -**Valeur renvoyée** - -- Valeur de la macro spécifiée. - -Type: [Chaîne](../../sql-reference/data-types/string.md). - -**Exemple** - -Exemple `macros` section dans le fichier de configuration du serveur: - -``` xml - - Value - -``` - -Requête: - -``` sql -SELECT getMacro('test'); -``` - -Résultat: - -``` text -┌─getMacro('test')─┐ -│ Value │ -└──────────────────┘ -``` - -Une méthode alternative pour obtenir la même valeur: - -``` sql -SELECT * FROM system.macros -WHERE macro = 'test'; -``` - -``` text -┌─macro─┬─substitution─┐ -│ test │ Value │ -└───────┴──────────────┘ -``` - -## FQDN {#fqdn} - -Retourne le nom de domaine pleinement qualifié. - -**Syntaxe** - -``` sql -fqdn(); -``` - -Cette fonction est insensible à la casse. - -**Valeur renvoyée** - -- Chaîne avec le nom de domaine complet. - -Type: `String`. - -**Exemple** - -Requête: - -``` sql -SELECT FQDN(); -``` - -Résultat: - -``` text -┌─FQDN()──────────────────────────┐ -│ clickhouse.ru-central1.internal │ -└─────────────────────────────────┘ -``` - -## basename {#basename} - -Extrait la partie finale d'une chaîne après la dernière barre oblique ou barre oblique inverse. Cette fonction est souvent utilisée pour extraire le nom de fichier d'un chemin. - -``` sql -basename( expr ) -``` - -**Paramètre** - -- `expr` — Expression resulting in a [Chaîne](../../sql-reference/data-types/string.md) type de valeur. Tous les antislashs doivent être échappés dans la valeur résultante. - -**Valeur Renvoyée** - -Une chaîne de caractères qui contient: - -- La partie finale d'une chaîne après la dernière barre oblique ou barre oblique inverse. - - If the input string contains a path ending with slash or backslash, for example, `/` or `c:\`, the function returns an empty string. - -- La chaîne d'origine s'il n'y a pas de barres obliques ou de barres obliques inverses. 
- -**Exemple** - -``` sql -SELECT 'some/long/path/to/file' AS a, basename(a) -``` - -``` text -┌─a──────────────────────┬─basename('some\\long\\path\\to\\file')─┐ -│ some\long\path\to\file │ file │ -└────────────────────────┴────────────────────────────────────────┘ -``` - -``` sql -SELECT 'some\\long\\path\\to\\file' AS a, basename(a) -``` - -``` text -┌─a──────────────────────┬─basename('some\\long\\path\\to\\file')─┐ -│ some\long\path\to\file │ file │ -└────────────────────────┴────────────────────────────────────────┘ -``` - -``` sql -SELECT 'some-file-name' AS a, basename(a) -``` - -``` text -┌─a──────────────┬─basename('some-file-name')─┐ -│ some-file-name │ some-file-name │ -└────────────────┴────────────────────────────┘ -``` - -## visibleWidth (x) {#visiblewidthx} - -Calcule la largeur approximative lors de la sortie des valeurs vers la console au format texte (séparé par des tabulations). -Cette fonction est utilisée par le système pour implémenter de jolis formats. - -`NULL` est représenté comme une chaîne correspondant à `NULL` dans `Pretty` format. - -``` sql -SELECT visibleWidth(NULL) -``` - -``` text -┌─visibleWidth(NULL)─┐ -│ 4 │ -└────────────────────┘ -``` - -## toTypeName (x) {#totypenamex} - -Renvoie une chaîne contenant le nom du type de l'argument passé. - -Si `NULL` est passé à la fonction en entrée, puis il renvoie le `Nullable(Nothing)` type, ce qui correspond à un interne `NULL` représentation à ClickHouse. - -## la taille de bloc() {#function-blocksize} - -Récupère la taille du bloc. -Dans ClickHouse, les requêtes sont toujours exécutées sur des blocs (ensembles de parties de colonne). Cette fonction permet d'obtenir la taille du bloc pour lequel vous l'avez appelé. - -## matérialiser (x) {#materializex} - -Transforme une constante dans une colonne contenant une seule valeur. -Dans ClickHouse, les colonnes complètes et les constantes sont représentées différemment en mémoire. Les fonctions fonctionnent différemment pour les arguments constants et les arguments normaux (un code différent est exécuté), bien que le résultat soit presque toujours le même. Cette fonction sert à déboguer ce comportement. - -## ignore(…) {#ignore} - -Accepte tous les arguments, y compris `NULL`. Renvoie toujours 0. -Cependant, l'argument est toujours évalué. Cela peut être utilisé pour les benchmarks. - -## sommeil(secondes) {#sleepseconds} - -Dormir ‘seconds’ secondes sur chaque bloc de données. Vous pouvez spécifier un nombre entier ou un nombre à virgule flottante. - -## sleepEachRow (secondes) {#sleepeachrowseconds} - -Dormir ‘seconds’ secondes sur chaque ligne. Vous pouvez spécifier un nombre entier ou un nombre à virgule flottante. - -## currentDatabase() {#currentdatabase} - -Retourne le nom de la base de données actuelle. -Vous pouvez utiliser cette fonction dans les paramètres du moteur de table dans une requête CREATE TABLE où vous devez spécifier la base de données. - -## currentUser() {#other-function-currentuser} - -Renvoie la connexion de l'utilisateur actuel. La connexion de l'utilisateur, cette requête initiée, sera renvoyée en cas de requête distibuted. - -``` sql -SELECT currentUser(); -``` - -Alias: `user()`, `USER()`. - -**Valeurs renvoyées** - -- Connexion de l'utilisateur actuel. -- Connexion de l'utilisateur qui a lancé la requête en cas de requête distribuée. - -Type: `String`. 
- -**Exemple** - -Requête: - -``` sql -SELECT currentUser(); -``` - -Résultat: - -``` text -┌─currentUser()─┐ -│ default │ -└───────────────┘ -``` - -## isConstant {#is-constant} - -Vérifie si l'argument est une expression constante. - -A constant expression means an expression whose resulting value is known at the query analysis (i.e. before execution). For example, expressions over [littéral](../syntax.md#literals) sont des expressions constantes. - -La fonction est destinée au développement, au débogage et à la démonstration. - -**Syntaxe** - -``` sql -isConstant(x) -``` - -**Paramètre** - -- `x` — Expression to check. - -**Valeurs renvoyées** - -- `1` — `x` est constante. -- `0` — `x` est non constante. - -Type: [UInt8](../data-types/int-uint.md). - -**Exemple** - -Requête: - -``` sql -SELECT isConstant(x + 1) FROM (SELECT 43 AS x) -``` - -Résultat: - -``` text -┌─isConstant(plus(x, 1))─┐ -│ 1 │ -└────────────────────────┘ -``` - -Requête: - -``` sql -WITH 3.14 AS pi SELECT isConstant(cos(pi)) -``` - -Résultat: - -``` text -┌─isConstant(cos(pi))─┐ -│ 1 │ -└─────────────────────┘ -``` - -Requête: - -``` sql -SELECT isConstant(number) FROM numbers(1) -``` - -Résultat: - -``` text -┌─isConstant(number)─┐ -│ 0 │ -└────────────────────┘ -``` - -## isFinite (x) {#isfinitex} - -Accepte Float32 et Float64 et renvoie UInt8 égal à 1 si l'argument n'est pas infini et pas un NaN, sinon 0. - -## isInfinite (x) {#isinfinitex} - -Accepte Float32 et Float64 et renvoie UInt8 égal à 1 si l'argument est infini, sinon 0. Notez que 0 est retourné pour un NaN. - -## ifNotFinite {#ifnotfinite} - -Vérifie si la valeur à virgule flottante est finie. - -**Syntaxe** - - ifNotFinite(x,y) - -**Paramètre** - -- `x` — Value to be checked for infinity. Type: [Flottant\*](../../sql-reference/data-types/float.md). -- `y` — Fallback value. Type: [Flottant\*](../../sql-reference/data-types/float.md). - -**Valeur renvoyée** - -- `x` si `x` est finie. -- `y` si `x` n'est pas finie. - -**Exemple** - -Requête: - - SELECT 1/0 as infimum, ifNotFinite(infimum,42) - -Résultat: - - ┌─infimum─┬─ifNotFinite(divide(1, 0), 42)─┐ - │ inf │ 42 │ - └─────────┴───────────────────────────────┘ - -Vous pouvez obtenir un résultat similaire en utilisant [opérateur ternaire](conditional-functions.md#ternary-operator): `isFinite(x) ? x : y`. - -## isNaN (x) {#isnanx} - -Accepte Float32 et Float64 et renvoie UInt8 égal à 1 si l'argument est un NaN, sinon 0. - -## hasColumnInTable(\[‘hostname’\[, ‘username’\[, ‘password’\]\],\] ‘database’, ‘table’, ‘column’) {#hascolumnintablehostname-username-password-database-table-column} - -Accepte les chaînes constantes: nom de la base de données, nom de la table et nom de la colonne. Renvoie une expression constante UInt8 égale à 1 s'il y a une colonne, sinon 0. Si le paramètre hostname est défini, le test s'exécutera sur un serveur distant. -La fonction renvoie une exception si la table n'existe pas. -Pour les éléments imbriqués structure des données, la fonction vérifie l'existence d'une colonne. Pour la structure de données imbriquée elle-même, la fonction renvoie 0. - -## bar {#function-bar} - -Permet de construire un diagramme unicode-art. - -`bar(x, min, max, width)` dessine une bande avec une largeur proportionnelle à `(x - min)` et égale à `width` les caractères lors de la `x = max`. - -Paramètre: - -- `x` — Size to display. -- `min, max` — Integer constants. The value must fit in `Int64`. -- `width` — Constant, positive integer, can be fractional. 
- -La bande dessinée avec précision à un huitième d'un symbole. - -Exemple: - -``` sql -SELECT - toHour(EventTime) AS h, - count() AS c, - bar(c, 0, 600000, 20) AS bar -FROM test.hits -GROUP BY h -ORDER BY h ASC -``` - -``` text -┌──h─┬──────c─┬─bar────────────────┐ -│ 0 │ 292907 │ █████████▋ │ -│ 1 │ 180563 │ ██████ │ -│ 2 │ 114861 │ ███▋ │ -│ 3 │ 85069 │ ██▋ │ -│ 4 │ 68543 │ ██▎ │ -│ 5 │ 78116 │ ██▌ │ -│ 6 │ 113474 │ ███▋ │ -│ 7 │ 170678 │ █████▋ │ -│ 8 │ 278380 │ █████████▎ │ -│ 9 │ 391053 │ █████████████ │ -│ 10 │ 457681 │ ███████████████▎ │ -│ 11 │ 493667 │ ████████████████▍ │ -│ 12 │ 509641 │ ████████████████▊ │ -│ 13 │ 522947 │ █████████████████▍ │ -│ 14 │ 539954 │ █████████████████▊ │ -│ 15 │ 528460 │ █████████████████▌ │ -│ 16 │ 539201 │ █████████████████▊ │ -│ 17 │ 523539 │ █████████████████▍ │ -│ 18 │ 506467 │ ████████████████▊ │ -│ 19 │ 520915 │ █████████████████▎ │ -│ 20 │ 521665 │ █████████████████▍ │ -│ 21 │ 542078 │ ██████████████████ │ -│ 22 │ 493642 │ ████████████████▍ │ -│ 23 │ 400397 │ █████████████▎ │ -└────┴────────┴────────────────────┘ -``` - -## transformer {#transform} - -Transforme une valeur en fonction explicitement définis cartographie de certains éléments à l'autre. -Il existe deux variantes de cette fonction: - -### de transformation(x, array_from, array_to, par défaut) {#transformx-array-from-array-to-default} - -`x` – What to transform. - -`array_from` – Constant array of values for converting. - -`array_to` – Constant array of values to convert the values in ‘from’ de. - -`default` – Which value to use if ‘x’ n'est pas égale à une des valeurs de ‘from’. - -`array_from` et `array_to` – Arrays of the same size. - -Type: - -`transform(T, Array(T), Array(U), U) -> U` - -`T` et `U` peuvent être des types numériques, chaîne ou Date ou DateTime. -Lorsque la même lettre est indiquée (T ou U), pour les types numériques, il se peut qu'il ne s'agisse pas de types correspondants, mais de types ayant un type commun. -Par exemple, le premier argument peut avoir le type Int64, tandis que le second a le type Array(UInt16). - -Si l' ‘x’ la valeur est égale à l'un des éléments dans la ‘array_from’ tableau, elle renvoie l'élément existant (qui est numéroté de même) de la ‘array_to’ tableau. Sinon, elle renvoie ‘default’. S'il y a plusieurs éléments correspondants dans ‘array_from’ il renvoie l'un des matches. - -Exemple: - -``` sql -SELECT - transform(SearchEngineID, [2, 3], ['Yandex', 'Google'], 'Other') AS title, - count() AS c -FROM test.hits -WHERE SearchEngineID != 0 -GROUP BY title -ORDER BY c DESC -``` - -``` text -┌─title─────┬──────c─┐ -│ Yandex │ 498635 │ -│ Google │ 229872 │ -│ Other │ 104472 │ -└───────────┴────────┘ -``` - -### de transformation(x, array_from, array_to) {#transformx-array-from-array-to} - -Diffère de la première variation en ce que le ‘default’ l'argument est omis. -Si l' ‘x’ la valeur est égale à l'un des éléments dans la ‘array_from’ tableau, elle renvoie l'élément correspondant (qui est numéroté de même) de la ‘array_to’ tableau. Sinon, elle renvoie ‘x’. 
- -Type: - -`transform(T, Array(T), Array(T)) -> T` - -Exemple: - -``` sql -SELECT - transform(domain(Referer), ['yandex.ru', 'google.ru', 'vk.com'], ['www.yandex', 'example.com']) AS s, - count() AS c -FROM test.hits -GROUP BY domain(Referer) -ORDER BY count() DESC -LIMIT 10 -``` - -``` text -┌─s──────────────┬───────c─┐ -│ │ 2906259 │ -│ www.yandex │ 867767 │ -│ ███████.ru │ 313599 │ -│ mail.yandex.ru │ 107147 │ -│ ██████.ru │ 100355 │ -│ █████████.ru │ 65040 │ -│ news.yandex.ru │ 64515 │ -│ ██████.net │ 59141 │ -│ example.com │ 57316 │ -└────────────────┴─────────┘ -``` - -## formatReadableSize (x) {#formatreadablesizex} - -Accepte la taille (nombre d'octets). Renvoie une taille arrondie avec un suffixe (KiB, MiB, etc.) comme une chaîne de caractères. - -Exemple: - -``` sql -SELECT - arrayJoin([1, 1024, 1024*1024, 192851925]) AS filesize_bytes, - formatReadableSize(filesize_bytes) AS filesize -``` - -``` text -┌─filesize_bytes─┬─filesize───┐ -│ 1 │ 1.00 B │ -│ 1024 │ 1.00 KiB │ -│ 1048576 │ 1.00 MiB │ -│ 192851925 │ 183.92 MiB │ -└────────────────┴────────────┘ -``` - -## moins (a, b) {#leasta-b} - -Renvoie la plus petite valeur de a et b. - -## la plus grande(a, b) {#greatesta-b} - -Renvoie la plus grande valeur de a et B. - -## le temps de disponibilité() {#uptime} - -Renvoie la disponibilité du serveur en quelques secondes. - -## version() {#version} - -Renvoie la version du serveur sous forme de chaîne. - -## fuseau() {#timezone} - -Retourne le fuseau horaire du serveur. - -## blockNumber {#blocknumber} - -Renvoie le numéro de séquence du bloc de données où se trouve la ligne. - -## rowNumberInBlock {#function-rownumberinblock} - -Renvoie le numéro de séquence de la ligne dans le bloc de données. Différents blocs de données sont toujours recalculés. - -## rowNumberInAllBlocks() {#rownumberinallblocks} - -Renvoie le numéro de séquence de la ligne dans le bloc de données. Cette fonction ne prend en compte que les blocs de données affectés. - -## voisin {#neighbor} - -La fonction de fenêtre qui donne accès à une ligne à un décalage spécifié qui vient avant ou après la ligne actuelle d'une colonne donnée. - -**Syntaxe** - -``` sql -neighbor(column, offset[, default_value]) -``` - -Le résultat de la fonction dépend du touché des blocs de données et l'ordre des données dans le bloc. -Si vous créez une sous-requête avec ORDER BY et appelez la fonction depuis l'extérieur de la sous-requête, vous pouvez obtenir le résultat attendu. - -**Paramètre** - -- `column` — A column name or scalar expression. -- `offset` — The number of rows forwards or backwards from the current row of `column`. [Int64](../../sql-reference/data-types/int-uint.md). -- `default_value` — Optional. The value to be returned if offset goes beyond the scope of the block. Type of data blocks affected. - -**Valeurs renvoyées** - -- De la valeur pour `column` dans `offset` distance de la ligne actuelle si `offset` la valeur n'est pas en dehors des limites du bloc. -- La valeur par défaut pour `column` si `offset` la valeur est en dehors des limites du bloc. Si `default_value` est donné, alors il sera utilisé. - -Type: type de blocs de données affectés ou type de valeur par défaut. 
- -**Exemple** - -Requête: - -``` sql -SELECT number, neighbor(number, 2) FROM system.numbers LIMIT 10; -``` - -Résultat: - -``` text -┌─number─┬─neighbor(number, 2)─┐ -│ 0 │ 2 │ -│ 1 │ 3 │ -│ 2 │ 4 │ -│ 3 │ 5 │ -│ 4 │ 6 │ -│ 5 │ 7 │ -│ 6 │ 8 │ -│ 7 │ 9 │ -│ 8 │ 0 │ -│ 9 │ 0 │ -└────────┴─────────────────────┘ -``` - -Requête: - -``` sql -SELECT number, neighbor(number, 2, 999) FROM system.numbers LIMIT 10; -``` - -Résultat: - -``` text -┌─number─┬─neighbor(number, 2, 999)─┐ -│ 0 │ 2 │ -│ 1 │ 3 │ -│ 2 │ 4 │ -│ 3 │ 5 │ -│ 4 │ 6 │ -│ 5 │ 7 │ -│ 6 │ 8 │ -│ 7 │ 9 │ -│ 8 │ 999 │ -│ 9 │ 999 │ -└────────┴──────────────────────────┘ -``` - -Cette fonction peut être utilisée pour calculer une année à valeur métrique: - -Requête: - -``` sql -WITH toDate('2018-01-01') AS start_date -SELECT - toStartOfMonth(start_date + (number * 32)) AS month, - toInt32(month) % 100 AS money, - neighbor(money, -12) AS prev_year, - round(prev_year / money, 2) AS year_over_year -FROM numbers(16) -``` - -Résultat: - -``` text -┌──────month─┬─money─┬─prev_year─┬─year_over_year─┐ -│ 2018-01-01 │ 32 │ 0 │ 0 │ -│ 2018-02-01 │ 63 │ 0 │ 0 │ -│ 2018-03-01 │ 91 │ 0 │ 0 │ -│ 2018-04-01 │ 22 │ 0 │ 0 │ -│ 2018-05-01 │ 52 │ 0 │ 0 │ -│ 2018-06-01 │ 83 │ 0 │ 0 │ -│ 2018-07-01 │ 13 │ 0 │ 0 │ -│ 2018-08-01 │ 44 │ 0 │ 0 │ -│ 2018-09-01 │ 75 │ 0 │ 0 │ -│ 2018-10-01 │ 5 │ 0 │ 0 │ -│ 2018-11-01 │ 36 │ 0 │ 0 │ -│ 2018-12-01 │ 66 │ 0 │ 0 │ -│ 2019-01-01 │ 97 │ 32 │ 0.33 │ -│ 2019-02-01 │ 28 │ 63 │ 2.25 │ -│ 2019-03-01 │ 56 │ 91 │ 1.62 │ -│ 2019-04-01 │ 87 │ 22 │ 0.25 │ -└────────────┴───────┴───────────┴────────────────┘ -``` - -## runningDifference(x) {#other_functions-runningdifference} - -Calculates the difference between successive row values ​​in the data block. -Renvoie 0 pour la première ligne et la différence par rapport à la rangée précédente pour chaque nouvelle ligne. - -Le résultat de la fonction dépend du touché des blocs de données et l'ordre des données dans le bloc. -Si vous créez une sous-requête avec ORDER BY et appelez la fonction depuis l'extérieur de la sous-requête, vous pouvez obtenir le résultat attendu. - -Exemple: - -``` sql -SELECT - EventID, - EventTime, - runningDifference(EventTime) AS delta -FROM -( - SELECT - EventID, - EventTime - FROM events - WHERE EventDate = '2016-11-24' - ORDER BY EventTime ASC - LIMIT 5 -) -``` - -``` text -┌─EventID─┬───────────EventTime─┬─delta─┐ -│ 1106 │ 2016-11-24 00:00:04 │ 0 │ -│ 1107 │ 2016-11-24 00:00:05 │ 1 │ -│ 1108 │ 2016-11-24 00:00:05 │ 0 │ -│ 1109 │ 2016-11-24 00:00:09 │ 4 │ -│ 1110 │ 2016-11-24 00:00:10 │ 1 │ -└─────────┴─────────────────────┴───────┘ -``` - -Veuillez noter que la taille du bloc affecte le résultat. Avec chaque nouveau bloc, le `runningDifference` l'état est réinitialisé. - -``` sql -SELECT - number, - runningDifference(number + 1) AS diff -FROM numbers(100000) -WHERE diff != 1 -``` - -``` text -┌─number─┬─diff─┐ -│ 0 │ 0 │ -└────────┴──────┘ -┌─number─┬─diff─┐ -│ 65536 │ 0 │ -└────────┴──────┘ -``` - -``` sql -set max_block_size=100000 -- default value is 65536! 
- -SELECT - number, - runningDifference(number + 1) AS diff -FROM numbers(100000) -WHERE diff != 1 -``` - -``` text -┌─number─┬─diff─┐ -│ 0 │ 0 │ -└────────┴──────┘ -``` - -## runningDifferenceStartingWithFirstvalue {#runningdifferencestartingwithfirstvalue} - -De même que pour [runningDifference](./other-functions.md#other_functions-runningdifference) la différence est la valeur de la première ligne, est retourné à la valeur de la première ligne, et chaque rangée suivante renvoie la différence de la rangée précédente. - -## MACNumToString (num) {#macnumtostringnum} - -Accepte un numéro UInt64. Interprète comme une adresse MAC dans big endian. Renvoie une chaîne contenant l'adresse MAC correspondante au format AA:BB:CC: DD:EE: FF (Nombres séparés par deux points sous forme hexadécimale). - -## MACStringToNum (s) {#macstringtonums} - -La fonction inverse de MACNumToString. Si l'adresse MAC a un format non valide, elle renvoie 0. - -## MACStringToOUI (s) {#macstringtoouis} - -Accepte une adresse MAC au format AA:BB:CC: DD:EE: FF (Nombres séparés par deux points sous forme hexadécimale). Renvoie les trois premiers octets sous la forme D'un nombre UInt64. Si l'adresse MAC a un format non valide, elle renvoie 0. - -## getSizeOfEnumType {#getsizeofenumtype} - -Retourne le nombre de champs dans [Enum](../../sql-reference/data-types/enum.md). - -``` sql -getSizeOfEnumType(value) -``` - -**Paramètre:** - -- `value` — Value of type `Enum`. - -**Valeurs renvoyées** - -- Le nombre de champs avec `Enum` les valeurs d'entrée. -- Une exception est levée si le type n'est pas `Enum`. - -**Exemple** - -``` sql -SELECT getSizeOfEnumType( CAST('a' AS Enum8('a' = 1, 'b' = 2) ) ) AS x -``` - -``` text -┌─x─┐ -│ 2 │ -└───┘ -``` - -## blockSerializedSize {#blockserializedsize} - -Retourne la taille sur le disque (sans tenir compte de la compression). - -``` sql -blockSerializedSize(value[, value[, ...]]) -``` - -**Paramètre:** - -- `value` — Any value. - -**Valeurs renvoyées** - -- Le nombre d'octets qui seront écrites sur le disque pour le bloc de valeurs (sans compression). - -**Exemple** - -``` sql -SELECT blockSerializedSize(maxState(1)) as x -``` - -``` text -┌─x─┐ -│ 2 │ -└───┘ -``` - -## toColumnTypeName {#tocolumntypename} - -Renvoie le nom de la classe qui représente le type de données de la colonne dans la RAM. - -``` sql -toColumnTypeName(value) -``` - -**Paramètre:** - -- `value` — Any type of value. - -**Valeurs renvoyées** - -- Une chaîne avec le nom de la classe utilisée pour représenter `value` type de données dans la mémoire RAM. - -**Exemple de la différence entre`toTypeName ' and ' toColumnTypeName`** - -``` sql -SELECT toTypeName(CAST('2018-01-01 01:02:03' AS DateTime)) -``` - -``` text -┌─toTypeName(CAST('2018-01-01 01:02:03', 'DateTime'))─┐ -│ DateTime │ -└─────────────────────────────────────────────────────┘ -``` - -``` sql -SELECT toColumnTypeName(CAST('2018-01-01 01:02:03' AS DateTime)) -``` - -``` text -┌─toColumnTypeName(CAST('2018-01-01 01:02:03', 'DateTime'))─┐ -│ Const(UInt32) │ -└───────────────────────────────────────────────────────────┘ -``` - -L'exemple montre que le `DateTime` type de données est stocké dans la mémoire comme `Const(UInt32)`. - -## dumpColumnStructure {#dumpcolumnstructure} - -Affiche une description détaillée des structures de données en RAM - -``` sql -dumpColumnStructure(value) -``` - -**Paramètre:** - -- `value` — Any type of value. 
- -**Valeurs renvoyées** - -- Une chaîne décrivant la structure utilisée pour représenter `value` type de données dans la mémoire RAM. - -**Exemple** - -``` sql -SELECT dumpColumnStructure(CAST('2018-01-01 01:02:03', 'DateTime')) -``` - -``` text -┌─dumpColumnStructure(CAST('2018-01-01 01:02:03', 'DateTime'))─┐ -│ DateTime, Const(size = 1, UInt32(size = 1)) │ -└──────────────────────────────────────────────────────────────┘ -``` - -## defaultValueOfArgumentType {#defaultvalueofargumenttype} - -Affiche la valeur par défaut du type de données. - -Ne pas inclure des valeurs par défaut pour les colonnes personnalisées définies par l'utilisateur. - -``` sql -defaultValueOfArgumentType(expression) -``` - -**Paramètre:** - -- `expression` — Arbitrary type of value or an expression that results in a value of an arbitrary type. - -**Valeurs renvoyées** - -- `0` pour les nombres. -- Chaîne vide pour les chaînes. -- `ᴺᵁᴸᴸ` pour [Nullable](../../sql-reference/data-types/nullable.md). - -**Exemple** - -``` sql -SELECT defaultValueOfArgumentType( CAST(1 AS Int8) ) -``` - -``` text -┌─defaultValueOfArgumentType(CAST(1, 'Int8'))─┐ -│ 0 │ -└─────────────────────────────────────────────┘ -``` - -``` sql -SELECT defaultValueOfArgumentType( CAST(1 AS Nullable(Int8) ) ) -``` - -``` text -┌─defaultValueOfArgumentType(CAST(1, 'Nullable(Int8)'))─┐ -│ ᴺᵁᴸᴸ │ -└───────────────────────────────────────────────────────┘ -``` - -## reproduire {#other-functions-replicate} - -Crée un tableau avec une seule valeur. - -Utilisé pour la mise en œuvre interne de [arrayJoin](array-join.md#functions_arrayjoin). - -``` sql -SELECT replicate(x, arr); -``` - -**Paramètre:** - -- `arr` — Original array. ClickHouse creates a new array of the same length as the original and fills it with the value `x`. -- `x` — The value that the resulting array will be filled with. - -**Valeur renvoyée** - -Un tableau rempli de la valeur `x`. - -Type: `Array`. - -**Exemple** - -Requête: - -``` sql -SELECT replicate(1, ['a', 'b', 'c']) -``` - -Résultat: - -``` text -┌─replicate(1, ['a', 'b', 'c'])─┐ -│ [1,1,1] │ -└───────────────────────────────┘ -``` - -## filesystemAvailable {#filesystemavailable} - -Renvoie la quantité d'espace restant sur le système de fichiers où se trouvent les fichiers des bases de données. Il est toujours plus petit que l'espace libre total ([filesystemFree](#filesystemfree)) parce qu'un peu d'espace est réservé au système D'exploitation. - -**Syntaxe** - -``` sql -filesystemAvailable() -``` - -**Valeur renvoyée** - -- La quantité d'espace restant disponible en octets. - -Type: [UInt64](../../sql-reference/data-types/int-uint.md). - -**Exemple** - -Requête: - -``` sql -SELECT formatReadableSize(filesystemAvailable()) AS "Available space", toTypeName(filesystemAvailable()) AS "Type"; -``` - -Résultat: - -``` text -┌─Available space─┬─Type───┐ -│ 30.75 GiB │ UInt64 │ -└─────────────────┴────────┘ -``` - -## filesystemFree {#filesystemfree} - -Retourne montant total de l'espace libre sur le système de fichiers où les fichiers des bases de données. Voir aussi `filesystemAvailable` - -**Syntaxe** - -``` sql -filesystemFree() -``` - -**Valeur renvoyée** - -- Quantité d'espace libre en octets. - -Type: [UInt64](../../sql-reference/data-types/int-uint.md). 
- -**Exemple** - -Requête: - -``` sql -SELECT formatReadableSize(filesystemFree()) AS "Free space", toTypeName(filesystemFree()) AS "Type"; -``` - -Résultat: - -``` text -┌─Free space─┬─Type───┐ -│ 32.39 GiB │ UInt64 │ -└────────────┴────────┘ -``` - -## filesystemCapacity {#filesystemcapacity} - -Renvoie la capacité du système de fichiers en octets. Pour l'évaluation, la [chemin](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-path) le répertoire de données doit être configuré. - -**Syntaxe** - -``` sql -filesystemCapacity() -``` - -**Valeur renvoyée** - -- Informations de capacité du système de fichiers en octets. - -Type: [UInt64](../../sql-reference/data-types/int-uint.md). - -**Exemple** - -Requête: - -``` sql -SELECT formatReadableSize(filesystemCapacity()) AS "Capacity", toTypeName(filesystemCapacity()) AS "Type" -``` - -Résultat: - -``` text -┌─Capacity──┬─Type───┐ -│ 39.32 GiB │ UInt64 │ -└───────────┴────────┘ -``` - -## finalizeAggregation {#function-finalizeaggregation} - -Prend de l'état de la fonction d'agrégation. Renvoie le résultat de l'agrégation (état finalisé). - -## runningAccumulate {#function-runningaccumulate} - -Prend les membres de la fonction d'agrégation et renvoie une colonne avec des valeurs, sont le résultat de l'accumulation de ces états pour un ensemble de bloc de lignes, de la première à la ligne actuelle. -Par exemple, prend l'état de la fonction d'agrégat (exemple runningAccumulate(uniqState(UserID))), et pour chaque ligne de bloc, retourne le résultat de la fonction d'agrégat lors de la fusion des états de toutes les lignes précédentes et de la ligne actuelle. -Ainsi, le résultat de la fonction dépend de la partition des données aux blocs et de l'ordre des données dans le bloc. - -## joinGet {#joinget} - -La fonction vous permet d'extraire les données de la table de la même manière qu'à partir d'un [dictionnaire](../../sql-reference/dictionaries/index.md). - -Obtient les données de [Rejoindre](../../engines/table-engines/special/join.md#creating-a-table) tables utilisant la clé de jointure spécifiée. - -Ne prend en charge que les tables créées avec `ENGINE = Join(ANY, LEFT, )` déclaration. - -**Syntaxe** - -``` sql -joinGet(join_storage_table_name, `value_column`, join_keys) -``` - -**Paramètre** - -- `join_storage_table_name` — an [identificateur](../syntax.md#syntax-identifiers) indique l'endroit où la recherche est effectuée. L'identificateur est recherché dans la base de données par défaut (voir paramètre `default_database` dans le fichier de config). Pour remplacer la base de données par défaut, utilisez `USE db_name` ou spécifiez la base de données et la table via le séparateur `db_name.db_table` voir l'exemple. -- `value_column` — name of the column of the table that contains required data. -- `join_keys` — list of keys. - -**Valeur renvoyée** - -Retourne la liste des valeurs correspond à la liste des clés. - -Si certain n'existe pas dans la table source alors `0` ou `null` seront renvoyés basé sur [join_use_nulls](../../operations/settings/settings.md#join_use_nulls) paramètre. - -Plus d'infos sur `join_use_nulls` dans [Opération de jointure](../../engines/table-engines/special/join.md). 
- -**Exemple** - -Table d'entrée: - -``` sql -CREATE DATABASE db_test -CREATE TABLE db_test.id_val(`id` UInt32, `val` UInt32) ENGINE = Join(ANY, LEFT, id) SETTINGS join_use_nulls = 1 -INSERT INTO db_test.id_val VALUES (1,11)(2,12)(4,13) -``` - -``` text -┌─id─┬─val─┐ -│ 4 │ 13 │ -│ 2 │ 12 │ -│ 1 │ 11 │ -└────┴─────┘ -``` - -Requête: - -``` sql -SELECT joinGet(db_test.id_val,'val',toUInt32(number)) from numbers(4) SETTINGS join_use_nulls = 1 -``` - -Résultat: - -``` text -┌─joinGet(db_test.id_val, 'val', toUInt32(number))─┐ -│ 0 │ -│ 11 │ -│ 12 │ -│ 0 │ -└──────────────────────────────────────────────────┘ -``` - -## modelEvaluate(model_name, …) {#function-modelevaluate} - -Évaluer le modèle externe. -Accepte un nom de modèle et le modèle de l'argumentation. Renvoie Float64. - -## throwIf (x \[, custom_message\]) {#throwifx-custom-message} - -Lever une exception si l'argument est non nul. -custom_message - est un paramètre optionnel: une chaîne constante, fournit un message d'erreur - -``` sql -SELECT throwIf(number = 3, 'Too many') FROM numbers(10); -``` - -``` text -↙ Progress: 0.00 rows, 0.00 B (0.00 rows/s., 0.00 B/s.) Received exception from server (version 19.14.1): -Code: 395. DB::Exception: Received from localhost:9000. DB::Exception: Too many. -``` - -## identité {#identity} - -Renvoie la même valeur qui a été utilisée comme argument. Utilisé pour le débogage et les tests, permet d'annuler l'utilisation de l'index et d'obtenir les performances de requête d'une analyse complète. Lorsque la requête est analysée pour une utilisation possible de l'index, l'analyseur ne regarde pas à l'intérieur `identity` fonction. - -**Syntaxe** - -``` sql -identity(x) -``` - -**Exemple** - -Requête: - -``` sql -SELECT identity(42) -``` - -Résultat: - -``` text -┌─identity(42)─┐ -│ 42 │ -└──────────────┘ -``` - -## randomPrintableASCII {#randomascii} - -Génère une chaîne avec un ensemble aléatoire de [ASCII](https://en.wikipedia.org/wiki/ASCII#Printable_characters) caractères imprimables. - -**Syntaxe** - -``` sql -randomPrintableASCII(length) -``` - -**Paramètre** - -- `length` — Resulting string length. Positive integer. - - If you pass `length < 0`, behavior of the function is undefined. - -**Valeur renvoyée** - -- Chaîne avec un ensemble aléatoire de [ASCII](https://en.wikipedia.org/wiki/ASCII#Printable_characters) caractères imprimables. 
- -Type: [Chaîne](../../sql-reference/data-types/string.md) - -**Exemple** - -``` sql -SELECT number, randomPrintableASCII(30) as str, length(str) FROM system.numbers LIMIT 3 -``` - -``` text -┌─number─┬─str────────────────────────────┬─length(randomPrintableASCII(30))─┐ -│ 0 │ SuiCOSTvC0csfABSw=UcSzp2.`rv8x │ 30 │ -│ 1 │ 1Ag NlJ &RCN:*>HVPG;PE-nO"SUFD │ 30 │ -│ 2 │ /"+<"wUTh:=LjJ Vm!c&hI*m#XTfzz │ 30 │ -└────────┴────────────────────────────────┴──────────────────────────────────┘ -``` - -[Article Original](https://clickhouse.tech/docs/en/query_language/functions/other_functions/) diff --git a/docs/fr/sql-reference/functions/random-functions.md b/docs/fr/sql-reference/functions/random-functions.md deleted file mode 100644 index 3c4e15507bb..00000000000 --- a/docs/fr/sql-reference/functions/random-functions.md +++ /dev/null @@ -1,65 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 51 -toc_title: "La G\xE9n\xE9ration De Nombres Pseudo-Al\xE9atoires" ---- - -# Fonctions pour générer des nombres Pseudo-aléatoires {#functions-for-generating-pseudo-random-numbers} - -Des générateurs Non cryptographiques de nombres pseudo-aléatoires sont utilisés. - -Toutes les fonctions acceptent zéro argument ou un argument. -Si un argument est passé, il peut être de n'importe quel type, et sa valeur n'est utilisée pour rien. -Le seul but de cet argument est d'empêcher l'élimination des sous-expressions courantes, de sorte que deux instances différentes de la même fonction renvoient des colonnes différentes avec des nombres aléatoires différents. - -## Rand {#rand} - -Renvoie un nombre UInt32 pseudo-aléatoire, réparti uniformément entre tous les nombres de type UInt32. -Utilise un générateur congruentiel linéaire. - -## rand64 {#rand64} - -Renvoie un nombre UInt64 pseudo-aléatoire, réparti uniformément entre tous les nombres de type UInt64. -Utilise un générateur congruentiel linéaire. - -## randConstant {#randconstant} - -Produit une colonne constante avec une valeur aléatoire. - -**Syntaxe** - -``` sql -randConstant([x]) -``` - -**Paramètre** - -- `x` — [Expression](../syntax.md#syntax-expressions) résultant de la [types de données pris en charge](../data-types/index.md#data_types). La valeur résultante est ignorée, mais l'expression elle-même si elle est utilisée pour contourner [élimination des sous-expressions courantes](index.md#common-subexpression-elimination) si la fonction est appelée plusieurs fois dans une seule requête. Paramètre facultatif. - -**Valeur renvoyée** - -- Nombre Pseudo-aléatoire. - -Type: [UInt32](../data-types/int-uint.md). 
- -**Exemple** - -Requête: - -``` sql -SELECT rand(), rand(1), rand(number), randConstant(), randConstant(1), randConstant(number) -FROM numbers(3) -``` - -Résultat: - -``` text -┌─────rand()─┬────rand(1)─┬─rand(number)─┬─randConstant()─┬─randConstant(1)─┬─randConstant(number)─┐ -│ 3047369878 │ 4132449925 │ 4044508545 │ 2740811946 │ 4229401477 │ 1924032898 │ -│ 2938880146 │ 1267722397 │ 4154983056 │ 2740811946 │ 4229401477 │ 1924032898 │ -│ 956619638 │ 4238287282 │ 1104342490 │ 2740811946 │ 4229401477 │ 1924032898 │ -└────────────┴────────────┴──────────────┴────────────────┴─────────────────┴──────────────────────┘ -``` - -[Article Original](https://clickhouse.tech/docs/en/query_language/functions/random_functions/) diff --git a/docs/fr/sql-reference/functions/rounding-functions.md b/docs/fr/sql-reference/functions/rounding-functions.md deleted file mode 100644 index f99e6358026..00000000000 --- a/docs/fr/sql-reference/functions/rounding-functions.md +++ /dev/null @@ -1,190 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 45 -toc_title: Arrondi ---- - -# Fonctions D'Arrondi {#rounding-functions} - -## floor(x\[, N\]) {#floorx-n} - -Renvoie le plus grand nombre rond inférieur ou égal à `x`. Un nombre rond est un multiple de 1 / 10N, ou le nombre le plus proche du type de données approprié si 1 / 10N n'est pas exact. -‘N’ est une constante entière, paramètre facultatif. Par défaut, il est zéro, ce qui signifie arrondir à un entier. -‘N’ peut être négative. - -Exemple: `floor(123.45, 1) = 123.4, floor(123.45, -1) = 120.` - -`x` est n'importe quel type numérique. Le résultat est un nombre du même type. -Pour les arguments entiers, il est logique d'arrondir avec un négatif `N` valeur (pour non négatif `N`, la fonction ne fait rien). -Si l'arrondi provoque un débordement (par exemple, floor(-128, -1)), un résultat spécifique à l'implémentation est renvoyé. - -## ceil(x\[, n\]), plafond (x\[, n\]) {#ceilx-n-ceilingx-n} - -Renvoie le plus petit nombre rond supérieur ou égal à `x`. Dans tous les autres sens, il est le même que le `floor` fonction (voir ci-dessus). - -## trunc(x \[, N\]), truncate(x \[, N\]) {#truncx-n-truncatex-n} - -Renvoie le nombre rond avec la plus grande valeur absolue qui a une valeur absolue inférieure ou égale à `x`‘s. In every other way, it is the same as the ’floor’ fonction (voir ci-dessus). - -## round(x\[, N\]) {#rounding_functions-round} - -Arrondit une valeur à un nombre spécifié de décimales. - -La fonction renvoie le nombre plus proche de l'ordre spécifié. Dans le cas où un nombre donné a une distance égale aux nombres environnants, la fonction utilise l'arrondi de banquier pour les types de nombres flottants et arrondit à partir de zéro pour les autres types de nombres. - -``` sql -round(expression [, decimal_places]) -``` - -**Paramètre:** - -- `expression` — A number to be rounded. Can be any [expression](../syntax.md#syntax-expressions) retour du numérique [type de données](../../sql-reference/data-types/index.md#data_types). -- `decimal-places` — An integer value. - - Si `decimal-places > 0` alors la fonction arrondit la valeur à droite du point décimal. - - Si `decimal-places < 0` alors la fonction arrondit la valeur à gauche de la virgule décimale. - - Si `decimal-places = 0` alors la fonction arrondit la valeur à l'entier. Dans ce cas, l'argument peut être omis. - -**Valeur renvoyée:** - -Le nombre arrondi du même type que le nombre d'entrée. 
-
-### Exemple {#examples}
-
-**Exemple d'utilisation**
-
-``` sql
-SELECT number / 2 AS x, round(x) FROM system.numbers LIMIT 3
-```
-
-``` text
-┌───x─┬─round(divide(number, 2))─┐
-│   0 │                        0 │
-│ 0.5 │                        0 │
-│   1 │                        1 │
-└─────┴──────────────────────────┘
-```
-
-**Exemples d'arrondi**
-
-Arrondi au nombre le plus proche:
-
-``` text
-round(3.2, 0) = 3
-round(4.1267, 2) = 4.13
-round(22,-1) = 20
-round(467,-2) = 500
-round(-467,-2) = -500
-```
-
-Arrondi bancaire:
-
-``` text
-round(3.5) = 4
-round(4.5) = 4
-round(3.55, 1) = 3.6
-round(3.65, 1) = 3.6
-```
-
-**Voir Aussi**
-
-- [roundBankers](#roundbankers)
-
-## roundBankers {#roundbankers}
-
-Arrondit un nombre à une position décimale spécifiée.
-
-- Si le nombre à arrondir est à mi-chemin entre deux nombres, la fonction utilise l'arrondi bancaire.
-
-    Banker's rounding is a method of rounding fractional numbers. When the rounding number is halfway between two numbers, it's rounded to the nearest even digit at the specified decimal position. For example: 3.5 rounds up to 4, 2.5 rounds down to 2.
-
-    It's the default rounding method for floating point numbers defined in [IEEE 754](https://en.wikipedia.org/wiki/IEEE_754#Roundings_to_nearest). The [round](#rounding_functions-round) function performs the same rounding for floating point numbers. The `roundBankers` function also rounds integers the same way, for example, `roundBankers(45, -1) = 40`.
-
-- Dans les autres cas, la fonction arrondit le nombre à l'entier le plus proche.
-
-L'arrondi bancaire permet de réduire l'effet de l'arrondi des nombres sur le résultat de leur addition ou de leur soustraction.
-
-Par exemple, la somme des nombres 1.5, 2.5, 3.5 et 4.5 avec différents arrondis:
-
-- Pas d'arrondi: 1.5 + 2.5 + 3.5 + 4.5 = 12.
-- Arrondi bancaire: 2 + 2 + 4 + 4 = 12.
-- Arrondi à l'entier le plus proche: 2 + 3 + 4 + 5 = 14.
-
-**Syntaxe**
-
-``` sql
-roundBankers(expression [, decimal_places])
-```
-
-**Paramètre**
-
-- `expression` — A number to be rounded. Can be any [expression](../syntax.md#syntax-expressions) renvoyant un [type de données](../../sql-reference/data-types/index.md#data_types) numérique.
-- `decimal-places` — Decimal places. An integer number.
-    - `decimal-places > 0` — The function rounds the number to the given position right of the decimal point. Example: `roundBankers(3.55, 1) = 3.6`.
-    - `decimal-places < 0` — The function rounds the number to the given position left of the decimal point. Example: `roundBankers(24.55, -1) = 20`.
-    - `decimal-places = 0` — The function rounds the number to an integer. In this case the argument can be omitted. Example: `roundBankers(2.5) = 2`.
-
-**Valeur renvoyée**
-
-Valeur arrondie par la méthode d'arrondi bancaire.
-
-### Exemple {#examples-1}
-
-**Exemple d'utilisation**
-
-Requête:
-
-``` sql
-SELECT number / 2 AS x, roundBankers(x, 0) AS b FROM system.numbers LIMIT 10
-```
-
-Résultat:
-
-``` text
-┌───x─┬─b─┐
-│   0 │ 0 │
-│ 0.5 │ 0 │
-│   1 │ 1 │
-│ 1.5 │ 2 │
-│   2 │ 2 │
-│ 2.5 │ 2 │
-│   3 │ 3 │
-│ 3.5 │ 4 │
-│   4 │ 4 │
-│ 4.5 │ 4 │
-└─────┴───┘
-```
-
-**Exemples d'arrondi bancaire**
-
-``` text
-roundBankers(0.4) = 0
-roundBankers(-3.5) = -4
-roundBankers(4.5) = 4
-roundBankers(3.55, 1) = 3.6
-roundBankers(3.65, 1) = 3.6
-roundBankers(10.35, 1) = 10.4
-roundBankers(10.755, 2) = 10.76
-```
-
-**Voir Aussi**
-
-- [round](#rounding_functions-round)
-
-## roundToExp2 (num) {#roundtoexp2num}
-
-Accepte un nombre. Si le nombre est inférieur à un, la fonction renvoie 0.
Sinon, il arrondit le nombre au degré le plus proche (entier non négatif) de deux. - -## roundDuration (num) {#rounddurationnum} - -Accepte un certain nombre. Si le nombre est inférieur à un, elle renvoie 0. Sinon, il arrondit le nombre vers le bas pour les nombres de l'ensemble: 1, 10, 30, 60, 120, 180, 240, 300, 600, 1200, 1800, 3600, 7200, 18000, 36000. Cette fonction est spécifique à Yandex.Metrica et utilisé pour la mise en œuvre du rapport sur la durée de la session. - -## roundAge (num) {#roundagenum} - -Accepte un certain nombre. Si le nombre est inférieur à 18, il renvoie 0. Sinon, il arrondit le nombre à un nombre de l'ensemble: 18, 25, 35, 45, 55. Cette fonction est spécifique à Yandex.Metrica et utilisé pour la mise en œuvre du rapport sur l'âge des utilisateurs. - -## roundDown(num, arr) {#rounddownnum-arr} - -Accepte un nombre et l'arrondit à un élément dans le tableau spécifié. Si la valeur est inférieure à la plus basse, la plus basse lié est retourné. - -[Article Original](https://clickhouse.tech/docs/en/query_language/functions/rounding_functions/) diff --git a/docs/fr/sql-reference/functions/splitting-merging-functions.md b/docs/fr/sql-reference/functions/splitting-merging-functions.md deleted file mode 100644 index a1260e918b0..00000000000 --- a/docs/fr/sql-reference/functions/splitting-merging-functions.md +++ /dev/null @@ -1,116 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 47 -toc_title: "Fractionnement et fusion de cha\xEEnes et de tableaux" ---- - -# Fonctions pour diviser et fusionner des chaînes et des tableaux {#functions-for-splitting-and-merging-strings-and-arrays} - -## splitByChar (séparateur, s) {#splitbycharseparator-s} - -Divise une chaîne en sous-chaînes séparées par un caractère spécifique. Il utilise une chaîne constante `separator` qui composé d'un seul caractère. -Retourne un tableau de certaines chaînes. Les sous-chaînes vides peuvent être sélectionnées si le séparateur se produit au début ou à la fin de la chaîne, ou s'il existe plusieurs séparateurs consécutifs. - -**Syntaxe** - -``` sql -splitByChar(, ) -``` - -**Paramètre** - -- `separator` — The separator which should contain exactly one character. [Chaîne](../../sql-reference/data-types/string.md). -- `s` — The string to split. [Chaîne](../../sql-reference/data-types/string.md). - -**Valeur renvoyée(s)** - -Retourne un tableau de certaines chaînes. Des sous-chaînes vides peuvent être sélectionnées lorsque: - -- Un séparateur se produit au début ou à la fin de la chaîne; -- Il existe plusieurs séparateurs consécutifs; -- La chaîne d'origine `s` est vide. - -Type: [Tableau](../../sql-reference/data-types/array.md) de [Chaîne](../../sql-reference/data-types/string.md). - -**Exemple** - -``` sql -SELECT splitByChar(',', '1,2,3,abcde') -``` - -``` text -┌─splitByChar(',', '1,2,3,abcde')─┐ -│ ['1','2','3','abcde'] │ -└─────────────────────────────────┘ -``` - -## splitByString(séparateur, s) {#splitbystringseparator-s} - -Divise une chaîne en sous-chaînes séparées par une chaîne. Il utilise une chaîne constante `separator` de plusieurs caractères comme séparateur. Si la chaîne `separator` est vide, il va diviser la chaîne `s` dans un tableau de caractères uniques. - -**Syntaxe** - -``` sql -splitByString(, ) -``` - -**Paramètre** - -- `separator` — The separator. [Chaîne](../../sql-reference/data-types/string.md). -- `s` — The string to split. [Chaîne](../../sql-reference/data-types/string.md). 
-
-**Valeur renvoyée(s)**
-
-Retourne un tableau de sous-chaînes sélectionnées. Des sous-chaînes vides peuvent être sélectionnées lorsque:
-
-- Un séparateur non vide se produit au début ou à la fin de la chaîne;
-- Il existe plusieurs séparateurs consécutifs non vides;
-- La chaîne d'origine `s` est vide tandis que le séparateur n'est pas vide.
-
-Type: [Tableau](../../sql-reference/data-types/array.md) de [Chaîne](../../sql-reference/data-types/string.md).
-
-**Exemple**
-
-``` sql
-SELECT splitByString(', ', '1, 2 3, 4,5, abcde')
-```
-
-``` text
-┌─splitByString(', ', '1, 2 3, 4,5, abcde')─┐
-│ ['1','2 3','4,5','abcde']                 │
-└───────────────────────────────────────────┘
-```
-
-``` sql
-SELECT splitByString('', 'abcde')
-```
-
-``` text
-┌─splitByString('', 'abcde')─┐
-│ ['a','b','c','d','e']      │
-└────────────────────────────┘
-```
-
-## arrayStringConcat(arr \[, séparateur\]) {#arraystringconcatarr-separator}
-
-Concatène les chaînes répertoriées dans le tableau avec le séparateur. `separator` est un paramètre facultatif: une chaîne constante, définie par défaut à une chaîne vide.
-Retourne une chaîne de caractères.
-
-## alphaTokens (s) {#alphatokenss}
-
-Sélectionne des sous-chaînes d'octets consécutifs dans les plages a-z et A-Z. Retourne un tableau de sous-chaînes.
-
-**Exemple**
-
-``` sql
-SELECT alphaTokens('abca1abc')
-```
-
-``` text
-┌─alphaTokens('abca1abc')─┐
-│ ['abca','abc']          │
-└─────────────────────────┘
-```
-
-[Article Original](https://clickhouse.tech/docs/en/query_language/functions/splitting_merging_functions/)
diff --git a/docs/fr/sql-reference/functions/string-functions.md b/docs/fr/sql-reference/functions/string-functions.md
deleted file mode 100644
index 1482952426c..00000000000
--- a/docs/fr/sql-reference/functions/string-functions.md
+++ /dev/null
@@ -1,489 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_priority: 40
-toc_title: "Travailler avec des cha\xEEnes"
----
-
-# Fonctions pour travailler avec des chaînes {#functions-for-working-with-strings}
-
-## vide {#empty}
-
-Renvoie 1 pour une chaîne vide ou 0 pour une chaîne non vide.
-Le type de résultat est UInt8.
-Une chaîne est considérée comme non vide si elle contient au moins un octet, même s'il s'agit d'un espace ou d'un octet nul.
-La fonction fonctionne également pour les tableaux.
-
-## notEmpty {#notempty}
-
-Renvoie 0 pour une chaîne vide ou 1 pour une chaîne non vide.
-Le type de résultat est UInt8.
-La fonction fonctionne également pour les tableaux.
-
-## longueur {#length}
-
-Renvoie la longueur d'une chaîne en octets (pas en caractères, ni en points de code).
-Le type de résultat est UInt64.
-La fonction fonctionne également pour les tableaux.
-
-## lengthUTF8 {#lengthutf8}
-
-Renvoie la longueur d'une chaîne en points de code Unicode (pas en caractères), en supposant que la chaîne contient un ensemble d'octets formant un texte codé en UTF-8. Si cette hypothèse n'est pas vérifiée, elle renvoie tout de même un résultat (elle ne lève pas d'exception).
-Le type de résultat est UInt64.
-
-## char_length, CHAR_LENGTH {#char-length}
-
-Renvoie la longueur d'une chaîne en points de code Unicode (pas en caractères), en supposant que la chaîne contient un ensemble d'octets formant un texte codé en UTF-8. Si cette hypothèse n'est pas vérifiée, elle renvoie tout de même un résultat (elle ne lève pas d'exception).
-Le type de résultat est UInt64.
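-
-Exemple illustratif de la différence entre la longueur en octets et la longueur en points de code pour une chaîne UTF-8:
-
-``` sql
-SELECT
-    length('café') AS octets,           -- 5 : 'é' occupe deux octets en UTF-8
-    char_length('café') AS points_code  -- 4 : quatre points de code Unicode
-```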
- -## character_length, CHARACTER_LENGTH {#character-length} - -Renvoie la longueur d'une chaîne en points de code Unicode (pas en caractères), en supposant que la chaîne contient un ensemble d'octets qui composent le texte codé en UTF-8. Si cette hypothèse n'est pas remplie, elle renvoie un résultat (elle ne lance pas d'exception). -Le type de résultat est UInt64. - -## plus bas, lcase {#lower} - -Convertit les symboles latins ASCII dans une chaîne en minuscules. - -## supérieur, ucase {#upper} - -Convertit les symboles latins ASCII dans une chaîne en majuscules. - -## lowerUTF8 {#lowerutf8} - -Convertit une chaîne en minuscules, en supposant que la chaîne de caractères contient un ensemble d'octets qui composent un texte UTF-8. -Il ne détecte pas la langue. Donc, pour le turc, le résultat pourrait ne pas être exactement correct. -Si la longueur de la séquence d'octets UTF-8 est différente pour les majuscules et les minuscules d'un point de code, le résultat peut être incorrect pour ce point de code. -Si la chaîne contient un ensemble d'octets qui N'est pas UTF-8, le comportement n'est pas défini. - -## upperUTF8 {#upperutf8} - -Convertit une chaîne en majuscules, en supposant que la chaîne de caractères contient un ensemble d'octets qui composent un texte UTF-8. -Il ne détecte pas la langue. Donc, pour le turc, le résultat pourrait ne pas être exactement correct. -Si la longueur de la séquence d'octets UTF-8 est différente pour les majuscules et les minuscules d'un point de code, le résultat peut être incorrect pour ce point de code. -Si la chaîne contient un ensemble d'octets qui N'est pas UTF-8, le comportement n'est pas défini. - -## isValidUTF8 {#isvalidutf8} - -Renvoie 1, si l'ensemble d'octets est codé en UTF-8 valide, sinon 0. - -## toValidUTF8 {#tovalidutf8} - -Remplace les caractères UTF-8 non valides par `�` (U+FFFD) caractère. Tous les caractères non valides s'exécutant dans une rangée sont réduits en un seul caractère de remplacement. - -``` sql -toValidUTF8( input_string ) -``` - -Paramètre: - -- input_string — Any set of bytes represented as the [Chaîne](../../sql-reference/data-types/string.md) type de données objet. - -Valeur renvoyée: chaîne UTF-8 valide. - -**Exemple** - -``` sql -SELECT toValidUTF8('\x61\xF0\x80\x80\x80b') -``` - -``` text -┌─toValidUTF8('a����b')─┐ -│ a�b │ -└───────────────────────┘ -``` - -## répéter {#repeat} - -Répète une corde autant de fois que spécifié et concatène les valeurs répliquées comme une seule chaîne. - -**Syntaxe** - -``` sql -repeat(s, n) -``` - -**Paramètre** - -- `s` — The string to repeat. [Chaîne](../../sql-reference/data-types/string.md). -- `n` — The number of times to repeat the string. [UInt](../../sql-reference/data-types/int-uint.md). - -**Valeur renvoyée** - -La chaîne unique, qui contient la chaîne `s` répéter `n` temps. Si `n` \< 1, la fonction renvoie une chaîne vide. - -Type: `String`. - -**Exemple** - -Requête: - -``` sql -SELECT repeat('abc', 10) -``` - -Résultat: - -``` text -┌─repeat('abc', 10)──────────────┐ -│ abcabcabcabcabcabcabcabcabcabc │ -└────────────────────────────────┘ -``` - -## inverser {#reverse} - -Inverse la chaîne (comme une séquence d'octets). - -## reverseUTF8 {#reverseutf8} - -Inverse une séquence de points de code Unicode, en supposant que la chaîne contient un ensemble d'octets représentant un texte UTF-8. Sinon, il fait autre chose (il ne lance pas d'exception). - -## format(pattern, s0, s1, …) {#format} - -Formatage du motif constant avec la chaîne listée dans les arguments. 
## format(pattern, s0, s1, …) {#format}

Formats a constant pattern with the strings listed in the arguments. `pattern` is a simplified Python format pattern. The format string contains “replacement fields” surrounded by curly braces `{}`. Anything that is not contained in braces is considered literal text, which is copied unchanged to the output. If you need to include a brace character in the literal text, it can be escaped by doubling: `{{` and `}}`. Field names can be numbers (starting from zero) or empty (then they are treated as consecutive numbers).

``` sql
SELECT format('{1} {0} {1}', 'World', 'Hello')
```

``` text
┌─format('{1} {0} {1}', 'World', 'Hello')─┐
│ Hello World Hello                       │
└─────────────────────────────────────────┘
```

``` sql
SELECT format('{} {}', 'Hello', 'World')
```

``` text
┌─format('{} {}', 'Hello', 'World')─┐
│ Hello World                       │
└───────────────────────────────────┘
```

## concat {#concat}

Concatenates the strings listed in the arguments, without a separator.

**Syntax**

``` sql
concat(s1, s2, ...)
```

**Parameters**

Values of type String or FixedString.

**Returned values**

Returns the string that results from concatenating the arguments.

If any of the argument values is `NULL`, `concat` returns `NULL`.

**Example**

Query:

``` sql
SELECT concat('Hello, ', 'World!')
```

Result:

``` text
┌─concat('Hello, ', 'World!')─┐
│ Hello, World!               │
└─────────────────────────────┘
```

## concatAssumeInjective {#concatassumeinjective}

Same as [concat](#concat); the difference is that you need to ensure that `concat(s1, s2, ...) → sn` is injective, since this property is used for optimization of GROUP BY.

A function is called “injective” if it always returns a different result for different values of arguments. In other words: different arguments never yield an identical result.

**Syntax**

``` sql
concatAssumeInjective(s1, s2, ...)
```

**Parameters**

Values of type String or FixedString.

**Returned values**

Returns the string that results from concatenating the arguments.

If any of the argument values is `NULL`, `concatAssumeInjective` returns `NULL`.

**Example**

Input table:

``` sql
CREATE TABLE key_val(`key1` String, `key2` String, `value` UInt32) ENGINE = TinyLog;
INSERT INTO key_val VALUES ('Hello, ','World',1), ('Hello, ','World',2), ('Hello, ','World!',3), ('Hello',', World!',2);
SELECT * from key_val;
```

``` text
┌─key1────┬─key2─────┬─value─┐
│ Hello,  │ World    │     1 │
│ Hello,  │ World    │     2 │
│ Hello,  │ World!   │     3 │
│ Hello   │ , World! │     2 │
└─────────┴──────────┴───────┘
```

Query:

``` sql
SELECT concat(key1, key2), sum(value) FROM key_val GROUP BY concatAssumeInjective(key1, key2)
```

Result:

``` text
┌─concat(key1, key2)─┬─sum(value)─┐
│ Hello, World!      │          3 │
│ Hello, World!      │          2 │
│ Hello, World       │          3 │
└────────────────────┴────────────┘
```

## substring(s, offset, length), mid(s, offset, length), substr(s, offset, length) {#substring}

Returns a substring starting with the byte at the ‘offset’ index that is ‘length’ bytes long. Character indexing starts from one (as in standard SQL). The ‘offset’ and ‘length’ arguments must be constants.
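The reference gives no example for `substring`; a short illustrative query (offsets are 1-based and counted in bytes):

``` sql
SELECT substring('ClickHouse', 1, 5) AS s
```

``` text
┌─s─────┐
│ Click │
└───────┘
```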
## substringUTF8(s, offset, length) {#substringutf8}

The same as ‘substring’, but for Unicode code points. Works under the assumption that the string contains a set of bytes representing UTF-8 encoded text. If this assumption is not met, it returns some result (it does not throw an exception).

## appendTrailingCharIfAbsent(s, c) {#appendtrailingcharifabsent}

If the ‘s’ string is non-empty and does not contain the ‘c’ character at the end, it appends the ‘c’ character to the end.

## convertCharset(s, from, to) {#convertcharset}

Returns the string ‘s’ converted from the encoding in ‘from’ to the encoding in ‘to’.

## base64Encode(s) {#base64encode}

Encodes the ‘s’ string into base64.

## base64Decode(s) {#base64decode}

Decodes the base64-encoded string ‘s’ into the original string. In case of failure, an exception is raised.

## tryBase64Decode(s) {#trybase64decode}

Similar to base64Decode, but in case of error an empty string is returned.

## endsWith(s, suffix) {#endswith}

Returns whether the string ends with the specified suffix. Returns 1 if the string ends with the specified suffix, otherwise it returns 0.

## startsWith(str, prefix) {#startswith}

Returns 1 if the string starts with the specified prefix, otherwise it returns 0.

``` sql
SELECT startsWith('Spider-Man', 'Spi');
```

**Returned values**

- 1, if the string starts with the specified prefix.
- 0, if the string does not start with the specified prefix.

**Example**

Query:

``` sql
SELECT startsWith('Hello, world!', 'He');
```

Result:

``` text
┌─startsWith('Hello, world!', 'He')─┐
│                                 1 │
└───────────────────────────────────┘
```

## trim {#trim}

Removes all specified characters from the start or end of a string.
By default, removes all consecutive occurrences of common whitespace (ASCII character 32) from both ends of a string.

**Syntax**

``` sql
trim([[LEADING|TRAILING|BOTH] trim_character FROM] input_string)
```

**Parameters**

- `trim_character` — Specified characters for trim. [String](../../sql-reference/data-types/string.md).
- `input_string` — String for trim. [String](../../sql-reference/data-types/string.md).

**Returned value**

A string without the leading and/or trailing specified characters.

Type: `String`.

**Example**

Query:

``` sql
SELECT trim(BOTH ' ()' FROM '(   Hello, world!   )')
```

Result:

``` text
┌─trim(BOTH ' ()' FROM '(   Hello, world!   )')─┐
│ Hello, world!                                 │
└───────────────────────────────────────────────┘
```

## trimLeft {#trimleft}

Removes all consecutive occurrences of common whitespace (ASCII character 32) from the beginning of a string. It does not remove other kinds of whitespace characters (tab, non-breaking space, etc.).

**Syntax**

``` sql
trimLeft(input_string)
```

Alias: `ltrim(input_string)`.

**Parameters**

- `input_string` — string to trim. [String](../../sql-reference/data-types/string.md).

**Returned value**

A string without leading common whitespace.

Type: `String`.

**Example**

Query:

``` sql
SELECT trimLeft('     Hello, world!     ')
```

Result:

``` text
┌─trimLeft('     Hello, world!     ')─┐
│ Hello, world!                       │
└─────────────────────────────────────┘
```
## trimRight {#trimright}

Removes all consecutive occurrences of common whitespace (ASCII character 32) from the end of a string. It does not remove other kinds of whitespace characters (tab, non-breaking space, etc.).

**Syntax**

``` sql
trimRight(input_string)
```

Alias: `rtrim(input_string)`.

**Parameters**

- `input_string` — string to trim. [String](../../sql-reference/data-types/string.md).

**Returned value**

A string without trailing common whitespace.

Type: `String`.

**Example**

Query:

``` sql
SELECT trimRight('     Hello, world!     ')
```

Result:

``` text
┌─trimRight('     Hello, world!     ')─┐
│      Hello, world!                   │
└──────────────────────────────────────┘
```

## trimBoth {#trimboth}

Removes all consecutive occurrences of common whitespace (ASCII character 32) from both ends of a string. It does not remove other kinds of whitespace characters (tab, non-breaking space, etc.).

**Syntax**

``` sql
trimBoth(input_string)
```

Alias: `trim(input_string)`.

**Parameters**

- `input_string` — string to trim. [String](../../sql-reference/data-types/string.md).

**Returned value**

A string without leading and trailing common whitespace.

Type: `String`.

**Example**

Query:

``` sql
SELECT trimBoth('     Hello, world!     ')
```

Result:

``` text
┌─trimBoth('     Hello, world!     ')─┐
│ Hello, world!                       │
└─────────────────────────────────────┘
```

## CRC32(s) {#crc32}

Returns the CRC32 checksum of a string, using the CRC-32-IEEE 802.3 polynomial and initial value `0xffffffff` (zlib implementation).

The result type is UInt32.

## CRC32IEEE(s) {#crc32ieee}

Returns the CRC32 checksum of a string, using the CRC-32-IEEE 802.3 polynomial.

The result type is UInt32.

## CRC64(s) {#crc64}

Returns the CRC64 checksum of a string, using the CRC-64-ECMA polynomial.

The result type is UInt64.

[Original article](https://clickhouse.tech/docs/en/query_language/functions/string_functions/)
diff --git a/docs/fr/sql-reference/functions/string-replace-functions.md b/docs/fr/sql-reference/functions/string-replace-functions.md
deleted file mode 100644
index 5389a2bc927..00000000000
--- a/docs/fr/sql-reference/functions/string-replace-functions.md
+++ /dev/null
@@ -1,94 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_priority: 42
toc_title: For Replacing in Strings
---

# Functions for searching and replacing in strings {#functions-for-searching-and-replacing-in-strings}

## replaceOne(haystack, pattern, replacement) {#replaceonehaystack-pattern-replacement}

Replaces the first occurrence, if it exists, of the ‘pattern’ substring in ‘haystack’ with the ‘replacement’ substring.
Hereafter, ‘pattern’ and ‘replacement’ must be constants.

## replaceAll(haystack, pattern, replacement), replace(haystack, pattern, replacement) {#replaceallhaystack-pattern-replacement-replacehaystack-pattern-replacement}

Replaces all occurrences of the ‘pattern’ substring in ‘haystack’ with the ‘replacement’ substring.
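The page shows no example for these two basic replacement functions; an illustrative query (not from the original):

``` sql
SELECT
    replaceOne('Hello, Hello!', 'Hello', 'Bye') AS first_only,
    replaceAll('Hello, Hello!', 'Hello', 'Bye') AS all_matches
```

``` text
┌─first_only──┬─all_matches─┐
│ Bye, Hello! │ Bye, Bye!   │
└─────────────┴─────────────┘
```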
## replaceRegexpOne(haystack, pattern, replacement) {#replaceregexponehaystack-pattern-replacement}

Replacement using the ‘pattern’ regular expression (a re2 regular expression).
Replaces only the first occurrence, if it exists.
A pattern can be specified as ‘replacement’. This pattern can include substitutions `\0-\9`.
The substitution `\0` includes the entire regular expression. Substitutions `\1-\9` correspond to the subpattern numbers. To use the `\` character in a pattern, escape it using `\`.
Also keep in mind that a string literal requires additional escaping.

Example 1. Converting a date to American format:

``` sql
SELECT DISTINCT
    EventDate,
    replaceRegexpOne(toString(EventDate), '(\\d{4})-(\\d{2})-(\\d{2})', '\\2/\\3/\\1') AS res
FROM test.hits
LIMIT 7
FORMAT TabSeparated
```

``` text
2014-03-17      03/17/2014
2014-03-18      03/18/2014
2014-03-19      03/19/2014
2014-03-20      03/20/2014
2014-03-21      03/21/2014
2014-03-22      03/22/2014
2014-03-23      03/23/2014
```

Example 2. Copying a string ten times:

``` sql
SELECT replaceRegexpOne('Hello, World!', '.*', '\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0') AS res
```

``` text
┌─res────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ Hello, World!Hello, World!Hello, World!Hello, World!Hello, World!Hello, World!Hello, World!Hello, World!Hello, World!Hello, World!  │
└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
```

## replaceRegexpAll(haystack, pattern, replacement) {#replaceregexpallhaystack-pattern-replacement}

This does the same thing, but replaces all occurrences. Example:

``` sql
SELECT replaceRegexpAll('Hello, World!', '.', '\\0\\0') AS res
```

``` text
┌─res────────────────────────┐
│ HHeelllloo,,  WWoorrlldd!! │
└────────────────────────────┘
```

As an exception, if a regular expression matched an empty substring, the replacement is not made more than once.
Example:

``` sql
SELECT replaceRegexpAll('Hello, World!', '^', 'here: ') AS res
```

``` text
┌─res─────────────────┐
│ here: Hello, World! │
└─────────────────────┘
```

## regexpQuoteMeta(s) {#regexpquotemetas}

The function adds a backslash before certain predefined characters in the string.
Predefined characters: `\0`, `\\`, `|`, `(`, `)`, `^`, `$`, `.`, `[`, `]`, `?`, `*`, `+`, `{`, `:`, `-`.
This implementation differs slightly from re2::RE2::QuoteMeta. It escapes the zero byte as `\0` instead of `\x00`, and it escapes only the required characters.
For more information, see the link: [RE2](https://github.com/google/re2/blob/master/re2/re2.cc#L473)
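A small illustrative call, not in the original text (the backslashes are shown as they appear in the Pretty output format):

``` sql
SELECT regexpQuoteMeta('How much? 1.5$') AS quoted
```

``` text
┌─quoted─────────────┐
│ How much\? 1\.5\$  │
└────────────────────┘
```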
[Original article](https://clickhouse.tech/docs/en/query_language/functions/string_replace_functions/)
diff --git a/docs/fr/sql-reference/functions/string-search-functions.md b/docs/fr/sql-reference/functions/string-search-functions.md
deleted file mode 100644
index 20217edd32c..00000000000
--- a/docs/fr/sql-reference/functions/string-search-functions.md
+++ /dev/null
@@ -1,379 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_priority: 41
toc_title: For Searching Strings
---

# Functions for searching strings {#functions-for-searching-strings}

The search is case-sensitive by default in all these functions. There are separate variants for case-insensitive search.

## position(haystack, needle), locate(haystack, needle) {#position}

Returns the position (in bytes) of the found substring in the string, starting from 1.

Works under the assumption that the string contains a set of bytes representing single-byte encoded text. If this assumption is not met and a character cannot be represented using a single byte, the function does not throw an exception and returns some unexpected result. If a character can be represented using two bytes, it will use two bytes, and so on.

For a case-insensitive search, use the function [positionCaseInsensitive](#positioncaseinsensitive).

**Syntax**

``` sql
position(haystack, needle)
```

Alias: `locate(haystack, needle)`.

**Parameters**

- `haystack` — string in which the substring will be searched. [String](../syntax.md#syntax-string-literal).
- `needle` — substring to be searched. [String](../syntax.md#syntax-string-literal).

**Returned values**

- Starting position in bytes (counting from 1), if the substring was found.
- 0, if the substring was not found.

Type: `Integer`.

**Example**

The phrase “Hello, world!” contains a set of bytes representing single-byte encoded text. The function returns the expected result:

Query:

``` sql
SELECT position('Hello, world!', '!')
```

Result:

``` text
┌─position('Hello, world!', '!')─┐
│                             13 │
└────────────────────────────────┘
```

The same phrase in Russian contains characters that cannot be represented using a single byte. The function returns an unexpected result (use the [positionUTF8](#positionutf8) function for multi-byte encoded text):

Query:

``` sql
SELECT position('Привет, мир!', '!')
```

Result:

``` text
┌─position('Привет, мир!', '!')─┐
│                            21 │
└───────────────────────────────┘
```

## positionCaseInsensitive {#positioncaseinsensitive}

The same as [position](#position): returns the position (in bytes) of the found substring in the string, starting from 1. Use this function for a case-insensitive search.

Works under the assumption that the string contains a set of bytes representing single-byte encoded text. If this assumption is not met and a character cannot be represented using a single byte, the function does not throw an exception and returns some unexpected result. If a character can be represented using two bytes, it will use two bytes, and so on.

**Syntax**

``` sql
positionCaseInsensitive(haystack, needle)
```

**Parameters**

- `haystack` — string in which the substring will be searched. [String](../syntax.md#syntax-string-literal).
- `needle` — substring to be searched. [String](../syntax.md#syntax-string-literal).

**Returned values**

- Starting position in bytes (counting from 1), if the substring was found.
- 0, if the substring was not found.

Type: `Integer`.

**Example**

Query:

``` sql
SELECT positionCaseInsensitive('Hello, world!', 'hello')
```

Result:

``` text
┌─positionCaseInsensitive('Hello, world!', 'hello')─┐
│                                                 1 │
└───────────────────────────────────────────────────┘
```
## positionUTF8 {#positionutf8}

Returns the position (in Unicode points) of the found substring in the string, starting from 1.

Works under the assumption that the string contains a set of bytes representing UTF-8 encoded text. If this assumption is not met, the function does not throw an exception and returns some unexpected result. If a character can be represented using two Unicode points, it will use two, and so on.

For a case-insensitive search, use the function [positionCaseInsensitiveUTF8](#positioncaseinsensitiveutf8).

**Syntax**

``` sql
positionUTF8(haystack, needle)
```

**Parameters**

- `haystack` — string in which the substring will be searched. [String](../syntax.md#syntax-string-literal).
- `needle` — substring to be searched. [String](../syntax.md#syntax-string-literal).

**Returned values**

- Starting position in Unicode points (counting from 1), if the substring was found.
- 0, if the substring was not found.

Type: `Integer`.

**Example**

The phrase “Hello, world!” in Russian contains a set of Unicode points where each character is represented with a single point. The function returns the expected result:

Query:

``` sql
SELECT positionUTF8('Привет, мир!', '!')
```

Result:

``` text
┌─positionUTF8('Привет, мир!', '!')─┐
│                                12 │
└───────────────────────────────────┘
```

For the phrase “Salut, étudiante!”, where the character `é` can be represented using one point (`U+00E9`) or two points (`U+0065` `U+0301`), the function can return an unexpected result:

Query for the letter `é` represented as one Unicode point `U+00E9`:

``` sql
SELECT positionUTF8('Salut, étudiante!', '!')
```

Result:

``` text
┌─positionUTF8('Salut, étudiante!', '!')─┐
│                                     17 │
└────────────────────────────────────────┘
```

Query for the letter `é` represented as two Unicode points `U+0065` `U+0301`:

``` sql
SELECT positionUTF8('Salut, étudiante!', '!')
```

Result:

``` text
┌─positionUTF8('Salut, étudiante!', '!')─┐
│                                     18 │
└────────────────────────────────────────┘
```

## positionCaseInsensitiveUTF8 {#positioncaseinsensitiveutf8}

The same as [positionUTF8](#positionutf8), but case-insensitive. Returns the position (in Unicode points) of the found substring in the string, starting from 1.

Works under the assumption that the string contains a set of bytes representing UTF-8 encoded text. If this assumption is not met, the function does not throw an exception and returns some unexpected result. If a character can be represented using two Unicode points, it will use two, and so on.

**Syntax**

``` sql
positionCaseInsensitiveUTF8(haystack, needle)
```

**Parameters**

- `haystack` — string in which the substring will be searched. [String](../syntax.md#syntax-string-literal).
- `needle` — substring to be searched. [String](../syntax.md#syntax-string-literal).

**Returned value**

- Starting position in Unicode points (counting from 1), if the substring was found.
- 0, if the substring was not found.

Type: `Integer`.

**Example**

Query:

``` sql
SELECT positionCaseInsensitiveUTF8('Привет, мир!', 'Мир')
```

Result:

``` text
┌─positionCaseInsensitiveUTF8('Привет, мир!', 'Мир')─┐
│                                                  9 │
└────────────────────────────────────────────────────┘
```
## multiSearchAllPositions {#multisearchallpositions}

The same as [position](string-search-functions.md#position), but returns an `Array` of positions (in bytes) of the corresponding substrings found in the string. Positions are indexed starting from 1.

The search is performed on sequences of bytes without respect to string encoding and collation.

- For case-insensitive ASCII search, use the function `multiSearchAllPositionsCaseInsensitive`.
- For search in UTF-8, use the function [multiSearchAllPositionsUTF8](#multiSearchAllPositionsUTF8).
- For case-insensitive UTF-8 search, use the function multiSearchAllPositionsCaseInsensitiveUTF8.

**Syntax**

``` sql
multiSearchAllPositions(haystack, [needle1, needle2, ..., needlen])
```

**Parameters**

- `haystack` — string in which the substrings will be searched. [String](../syntax.md#syntax-string-literal).
- `needle` — substring to be searched. [String](../syntax.md#syntax-string-literal).

**Returned values**

- Array of starting positions in bytes (counting from 1) if the corresponding substring was found, and 0 if it was not found.

**Example**

Query:

``` sql
SELECT multiSearchAllPositions('Hello, World!', ['hello', '!', 'world'])
```

Result:

``` text
┌─multiSearchAllPositions('Hello, World!', ['hello', '!', 'world'])─┐
│ [0,13,0]                                                          │
└───────────────────────────────────────────────────────────────────┘
```

## multiSearchAllPositionsUTF8 {#multiSearchAllPositionsUTF8}

See `multiSearchAllPositions`.

## multiSearchFirstPosition(haystack, \[needle1, needle2, …, needlen\]) {#multisearchfirstposition}

The same as `position`, but returns the leftmost offset in the string `haystack` that matches any of the needles.

For case-insensitive and/or UTF-8 search, use the functions `multiSearchFirstPositionCaseInsensitive, multiSearchFirstPositionUTF8, multiSearchFirstPositionCaseInsensitiveUTF8`.

## multiSearchFirstIndex(haystack, \[needle1, needle2, …, needlen\]) {#multisearchfirstindexhaystack-needle1-needle2-needlen}

Returns the index `i` (starting from 1) of the leftmost found needle_i in the string `haystack`, and 0 otherwise.

For case-insensitive and/or UTF-8 search, use the functions `multiSearchFirstIndexCaseInsensitive, multiSearchFirstIndexUTF8, multiSearchFirstIndexCaseInsensitiveUTF8`.

## multiSearchAny(haystack, \[needle1, needle2, …, needlen\]) {#function-multisearchany}

Returns 1 if at least one string needle_i matches the string `haystack`, and 0 otherwise.

For case-insensitive and/or UTF-8 search, use the functions `multiSearchAnyCaseInsensitive, multiSearchAnyUTF8, multiSearchAnyCaseInsensitiveUTF8`.

!!! note "Note"
    In all `multiSearch*` functions the number of needles must be less than 2^8 (256) because of implementation specifics.

## match(haystack, pattern) {#matchhaystack-pattern}

Checks whether the string matches the `pattern` regular expression (a `re2` regular expression). The [syntax](https://github.com/google/re2/wiki/Syntax) of the `re2` regular expressions is more limited than the syntax of Perl regular expressions.

Returns 0 if it does not match, or 1 if it matches.

Note that the backslash symbol (`\`) is used for escaping in the regular expression. The same symbol is used for escaping in string literals. So in order to escape the symbol in a regular expression, you must write two backslashes (`\\`) in a string literal.

The regular expression works with the string as if it were a set of bytes. The regular expression cannot contain null bytes.
For patterns that search for substrings in a string, it is better to use LIKE or ‘position’, since they work much faster.
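An illustrative query for `match`, not in the original page; note the doubled backslashes required in the string literal:

``` sql
SELECT
    match('ClickHouse 21.4', '\\d+\\.\\d+') AS has_version,    -- digit pattern found
    match('ClickHouse', '^click') AS case_sensitive            -- 'C' != 'c', no match
```

``` text
┌─has_version─┬─case_sensitive─┐
│           1 │              0 │
└─────────────┴────────────────┘
```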
## multiMatchAny(haystack, \[pattern1, pattern2, …, patternn\]) {#multimatchanyhaystack-pattern1-pattern2-patternn}

The same as `match`, but returns 0 if none of the regular expressions match and 1 if any of the patterns match. It uses the [hyperscan](https://github.com/intel/hyperscan) library. For patterns that search for substrings in a string, it is better to use `multiSearchAny`, since it works much faster.

!!! note "Note"
    The length of any of the `haystack` strings must be less than 2^32 bytes, otherwise an exception is thrown. This restriction takes place because of the hyperscan API.

## multiMatchAnyIndex(haystack, \[pattern1, pattern2, …, patternn\]) {#multimatchanyindexhaystack-pattern1-pattern2-patternn}

The same as `multiMatchAny`, but returns any index that matches the haystack.

## multiMatchAllIndices(haystack, \[pattern1, pattern2, …, patternn\]) {#multimatchallindiceshaystack-pattern1-pattern2-patternn}

The same as `multiMatchAny`, but returns the array of all indices that match the haystack, in any order.

## multiFuzzyMatchAny(haystack, distance, \[pattern1, pattern2, …, patternn\]) {#multifuzzymatchanyhaystack-distance-pattern1-pattern2-patternn}

The same as `multiMatchAny`, but returns 1 if any pattern matches the haystack within a constant [edit distance](https://en.wikipedia.org/wiki/Edit_distance). This function is in experimental mode and can be extremely slow. For more information, see the [hyperscan documentation](https://intel.github.io/hyperscan/dev-reference/compilation.html#approximate-matching).

## multiFuzzyMatchAnyIndex(haystack, distance, \[pattern1, pattern2, …, patternn\]) {#multifuzzymatchanyindexhaystack-distance-pattern1-pattern2-patternn}

The same as `multiFuzzyMatchAny`, but returns any index that matches the haystack within a constant edit distance.

## multiFuzzyMatchAllIndices(haystack, distance, \[pattern1, pattern2, …, patternn\]) {#multifuzzymatchallindiceshaystack-distance-pattern1-pattern2-patternn}

The same as `multiFuzzyMatchAny`, but returns the array of all indices, in any order, that match the haystack within a constant edit distance.

!!! note "Note"
    `multiFuzzyMatch*` functions do not support UTF-8 regular expressions; such expressions are treated as bytes because of the hyperscan restriction.

!!! note "Note"
    To turn off all functions that use hyperscan, use the setting `SET allow_hyperscan = 0;`.

## extract(haystack, pattern) {#extracthaystack-pattern}

Extracts a fragment of a string using a regular expression. If ‘haystack’ does not match the ‘pattern’ regex, an empty string is returned. If the regex does not contain subpatterns, it takes the fragment that matches the entire regex. Otherwise, it takes the fragment that matches the first subpattern.
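A quick illustrative query for `extract` (not part of the original page):

``` sql
-- the first subpattern, if present, determines what is returned
SELECT
    extract('Version 21.4.1', '\\d+\\.\\d+') AS whole_match,
    extract('Version 21.4.1', 'Version (\\d+)') AS first_subpattern
```

``` text
┌─whole_match─┬─first_subpattern─┐
│ 21.4        │ 21               │
└─────────────┴──────────────────┘
```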
## extractAll(haystack, pattern) {#extractallhaystack-pattern}

Extracts all fragments of a string using a regular expression. If ‘haystack’ does not match the ‘pattern’ regex, an empty string is returned. Returns an array of strings consisting of all matches of the regex. In general, the behavior is the same as for the ‘extract’ function (it takes the first subpattern, or the entire expression if there is no subpattern).

## like(haystack, pattern), haystack LIKE pattern operator {#function-like}

Checks whether a string matches a simple regular expression.
The regular expression can contain the metasymbols `%` and `_`.

`%` indicates any quantity of bytes (including zero characters).

`_` indicates a single byte.

Use the backslash (`\`) to escape metasymbols. See the note on escaping in the description of the ‘match’ function.

For regular expressions like `%needle%`, the code is more optimal and works as fast as the `position` function.
For other regular expressions, the code is the same as for the ‘match’ function.

## notLike(haystack, pattern), haystack NOT LIKE pattern operator {#function-notlike}

The same as ‘like’, but negated.

## ngramDistance(haystack, needle) {#ngramdistancehaystack-needle}

Calculates the 4-gram distance between `haystack` and `needle`: counts the symmetric difference between two multisets of 4-grams and normalizes it by the sum of their cardinalities. Returns a float number from 0 to 1 – the closer to zero, the more similar the strings are to each other. If the constant `needle` or `haystack` is more than 32 KB, the function throws an exception. If any of the non-constant `haystack` or `needle` strings is more than 32 KB, the distance is always 1.

For case-insensitive and/or UTF-8 search, use the functions `ngramDistanceCaseInsensitive, ngramDistanceUTF8, ngramDistanceCaseInsensitiveUTF8`.

## ngramSearch(haystack, needle) {#ngramsearchhaystack-needle}

Same as `ngramDistance`, but calculates the non-symmetric difference between `needle` and `haystack` – the number of n-grams from the needle minus the common number of n-grams, normalized by the number of `needle` n-grams. The closer to one, the more likely `needle` is in the `haystack`. Can be useful for fuzzy string search.

For case-insensitive and/or UTF-8 search, use the functions `ngramSearchCaseInsensitive, ngramSearchUTF8, ngramSearchCaseInsensitiveUTF8`.

!!! note "Note"
    For the UTF-8 case we use 3-gram distance. These are not perfectly fair n-gram distances. We use 2-byte hashes to hash n-grams and then calculate the (non-)symmetric difference between these hash tables – collisions may occur. With the UTF-8 case-insensitive format we do not use a fair `tolower` function – we zero the 5th bit (starting from zero) of each code point byte, and the first bit of the zeroth byte if there is more than one byte – this works for Latin and mostly for all Cyrillic letters.
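An illustrative query, not from the original page; the exact float values are implementation-defined, so only the boundary case is asserted in the comments:

``` sql
-- identical strings give a distance of 0; dissimilar strings approach 1
SELECT
    ngramDistance('ClickHouse', 'ClickHouse') AS identical,   -- 0
    ngramDistance('ClickHouse', 'PostgreSQL') AS different    -- close to 1
```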
[Original article](https://clickhouse.tech/docs/en/query_language/functions/string_search_functions/)
diff --git a/docs/fr/sql-reference/functions/type-conversion-functions.md b/docs/fr/sql-reference/functions/type-conversion-functions.md
deleted file mode 100644
index c17b24c69dc..00000000000
--- a/docs/fr/sql-reference/functions/type-conversion-functions.md
+++ /dev/null
@@ -1,534 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_priority: 38
toc_title: Type Conversion
---

# Type Conversion Functions {#type-conversion-functions}

## Common issues of numeric conversions {#numeric-conversion-issues}

When you convert a value from one data type to another, you should remember that in the common case, it is an unsafe operation that can lead to data loss. Data loss can occur if you try to fit a value from a larger data type into a smaller data type, or if you convert values between different data types.

ClickHouse has the [same behavior as C++ programs](https://en.cppreference.com/w/cpp/language/implicit_conversion).

## toInt(8\|16\|32\|64) {#toint8163264}

Converts an input value to the [Int](../../sql-reference/data-types/int-uint.md) data type. This function family includes:

- `toInt8(expr)` — Results in the `Int8` data type.
- `toInt16(expr)` — Results in the `Int16` data type.
- `toInt32(expr)` — Results in the `Int32` data type.
- `toInt64(expr)` — Results in the `Int64` data type.

**Parameters**

- `expr` — [Expression](../syntax.md#syntax-expressions) returning a number or a string with the decimal representation of a number. Binary, octal, and hexadecimal representations of numbers are not supported. Leading zeros are stripped.

**Returned value**

Integer value in the `Int8`, `Int16`, `Int32`, or `Int64` data type.

The functions use [rounding towards zero](https://en.wikipedia.org/wiki/Rounding#Rounding_towards_zero), meaning they truncate the fractional digits of numbers.

The behavior of the functions for [NaN and Inf](../../sql-reference/data-types/float.md#data_type-float-nan-inf) arguments is undefined. Remember about [numeric conversion issues](#numeric-conversion-issues) when using these functions.

**Example**

``` sql
SELECT toInt64(nan), toInt32(32), toInt16('16'), toInt8(8.8)
```

``` text
┌─────────toInt64(nan)─┬─toInt32(32)─┬─toInt16('16')─┬─toInt8(8.8)─┐
│ -9223372036854775808 │          32 │            16 │           8 │
└──────────────────────┴─────────────┴───────────────┴─────────────┘
```

## toInt(8\|16\|32\|64)OrZero {#toint8163264orzero}

Takes an argument of type String and tries to parse it into Int(8\|16\|32\|64). If it fails, returns 0.

**Example**

``` sql
SELECT toInt64OrZero('123123'), toInt8OrZero('123qwe123')
```

``` text
┌─toInt64OrZero('123123')─┬─toInt8OrZero('123qwe123')─┐
│                  123123 │                         0 │
└─────────────────────────┴───────────────────────────┘
```

## toInt(8\|16\|32\|64)OrNull {#toint8163264ornull}

Takes an argument of type String and tries to parse it into Int(8\|16\|32\|64). If it fails, returns NULL.
**Example**

``` sql
SELECT toInt64OrNull('123123'), toInt8OrNull('123qwe123')
```

``` text
┌─toInt64OrNull('123123')─┬─toInt8OrNull('123qwe123')─┐
│                  123123 │                      ᴺᵁᴸᴸ │
└─────────────────────────┴───────────────────────────┘
```

## toUInt(8\|16\|32\|64) {#touint8163264}

Converts an input value to the [UInt](../../sql-reference/data-types/int-uint.md) data type. This function family includes:

- `toUInt8(expr)` — Results in the `UInt8` data type.
- `toUInt16(expr)` — Results in the `UInt16` data type.
- `toUInt32(expr)` — Results in the `UInt32` data type.
- `toUInt64(expr)` — Results in the `UInt64` data type.

**Parameters**

- `expr` — [Expression](../syntax.md#syntax-expressions) returning a number or a string with the decimal representation of a number. Binary, octal, and hexadecimal representations of numbers are not supported. Leading zeros are stripped.

**Returned value**

Integer value in the `UInt8`, `UInt16`, `UInt32`, or `UInt64` data type.

The functions use [rounding towards zero](https://en.wikipedia.org/wiki/Rounding#Rounding_towards_zero), meaning they truncate the fractional digits of numbers.

The behavior of the functions for negative arguments and for [NaN and Inf](../../sql-reference/data-types/float.md#data_type-float-nan-inf) arguments is undefined. If you pass a string with a negative number, for example `'-32'`, ClickHouse raises an exception. Remember about [numeric conversion issues](#numeric-conversion-issues) when using these functions.

**Example**

``` sql
SELECT toUInt64(nan), toUInt32(-32), toUInt16('16'), toUInt8(8.8)
```

``` text
┌───────toUInt64(nan)─┬─toUInt32(-32)─┬─toUInt16('16')─┬─toUInt8(8.8)─┐
│ 9223372036854775808 │    4294967264 │             16 │            8 │
└─────────────────────┴───────────────┴────────────────┴──────────────┘
```

## toUInt(8\|16\|32\|64)OrZero {#touint8163264orzero}

## toUInt(8\|16\|32\|64)OrNull {#touint8163264ornull}

## toFloat(32\|64) {#tofloat3264}

## toFloat(32\|64)OrZero {#tofloat3264orzero}

## toFloat(32\|64)OrNull {#tofloat3264ornull}

## toDate {#todate}

## toDateOrZero {#todateorzero}

## toDateOrNull {#todateornull}

## toDateTime {#todatetime}

## toDateTimeOrZero {#todatetimeorzero}

## toDateTimeOrNull {#todatetimeornull}

## toDecimal(32\|64\|128) {#todecimal3264128}

Converts `value` to the [Decimal](../../sql-reference/data-types/decimal.md) data type with precision of `S`. The `value` can be a number or a string. The `S` (scale) parameter specifies the number of decimal places.

- `toDecimal32(value, S)`
- `toDecimal64(value, S)`
- `toDecimal128(value, S)`
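A short illustrative query (not from the original page) showing how the scale parameter drops extra decimal places:

``` sql
SELECT toDecimal32('3.141', 2) AS val, toTypeName(val) AS type
```

``` text
┌──val─┬─type──────────┐
│ 3.14 │ Decimal(9, 2) │
└──────┴───────────────┘
```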
## toDecimal(32\|64\|128)OrNull {#todecimal3264128ornull}

Converts an input string to a [Nullable(Decimal(P,S))](../../sql-reference/data-types/decimal.md) data type value. This family of functions includes:

- `toDecimal32OrNull(expr, S)` — Results in `Nullable(Decimal32(S))` data type.
- `toDecimal64OrNull(expr, S)` — Results in `Nullable(Decimal64(S))` data type.
- `toDecimal128OrNull(expr, S)` — Results in `Nullable(Decimal128(S))` data type.

These functions should be used instead of the `toDecimal*()` functions if you prefer to get a `NULL` value instead of an exception in case of an input value parsing error.

**Parameters**

- `expr` — [Expression](../syntax.md#syntax-expressions) returning a value in the [String](../../sql-reference/data-types/string.md) data type. ClickHouse expects the textual representation of the decimal number. For example, `'1.111'`.
- `S` — Scale, the number of decimal places in the resulting value.

**Returned value**

A value in the `Nullable(Decimal(P,S))` data type. The value contains:

- A number with `S` decimal places, if ClickHouse interprets the input string as a number.
- `NULL`, if ClickHouse cannot interpret the input string as a number or if the input number contains more than `S` decimal places.

**Example**

``` sql
SELECT toDecimal32OrNull(toString(-1.111), 5) AS val, toTypeName(val)
```

``` text
┌──────val─┬─toTypeName(toDecimal32OrNull(toString(-1.111), 5))─┐
│ -1.11100 │ Nullable(Decimal(9, 5))                            │
└──────────┴────────────────────────────────────────────────────┘
```

``` sql
SELECT toDecimal32OrNull(toString(-1.111), 2) AS val, toTypeName(val)
```

``` text
┌──val─┬─toTypeName(toDecimal32OrNull(toString(-1.111), 2))─┐
│ ᴺᵁᴸᴸ │ Nullable(Decimal(9, 2))                            │
└──────┴────────────────────────────────────────────────────┘
```

## toDecimal(32\|64\|128)OrZero {#todecimal3264128orzero}

Converts an input value to the [Decimal(P,S)](../../sql-reference/data-types/decimal.md) data type. This family of functions includes:

- `toDecimal32OrZero(expr, S)` — Results in `Decimal32(S)` data type.
- `toDecimal64OrZero(expr, S)` — Results in `Decimal64(S)` data type.
- `toDecimal128OrZero(expr, S)` — Results in `Decimal128(S)` data type.

These functions should be used instead of the `toDecimal*()` functions if you prefer to get a `0` value instead of an exception in case of an input value parsing error.

**Parameters**

- `expr` — [Expression](../syntax.md#syntax-expressions) returning a value in the [String](../../sql-reference/data-types/string.md) data type. ClickHouse expects the textual representation of the decimal number. For example, `'1.111'`.
- `S` — Scale, the number of decimal places in the resulting value.

**Returned value**

A value in the `Decimal(P,S)` data type. The value contains:

- A number with `S` decimal places, if ClickHouse interprets the input string as a number.
- 0 with `S` decimal places, if ClickHouse cannot interpret the input string as a number or if the input number contains more than `S` decimal places.

**Example**

``` sql
SELECT toDecimal32OrZero(toString(-1.111), 5) AS val, toTypeName(val)
```

``` text
┌──────val─┬─toTypeName(toDecimal32OrZero(toString(-1.111), 5))─┐
│ -1.11100 │ Decimal(9, 5)                                      │
└──────────┴────────────────────────────────────────────────────┘
```

``` sql
SELECT toDecimal32OrZero(toString(-1.111), 2) AS val, toTypeName(val)
```

``` text
┌──val─┬─toTypeName(toDecimal32OrZero(toString(-1.111), 2))─┐
│ 0.00 │ Decimal(9, 2)                                      │
└──────┴────────────────────────────────────────────────────┘
```
## toString {#tostring}

Functions for converting between numbers, strings (but not fixed strings), dates, and dates with times.
All these functions accept one argument.

When converting to or from a string, the value is formatted or parsed using the same rules as for the TabSeparated format (and almost all other text formats). If the string cannot be parsed, an exception is thrown and the request is canceled.

When converting dates to numbers or vice versa, the date corresponds to the number of days since the beginning of the Unix epoch.
When converting dates with times to numbers or vice versa, the date with time corresponds to the number of seconds since the beginning of the Unix epoch.

The date and date-with-time formats for the toDate/toDateTime functions are defined as follows:

``` text
YYYY-MM-DD
YYYY-MM-DD hh:mm:ss
```

As an exception, if converting from the UInt32, Int32, UInt64, or Int64 numeric types to Date, and if the number is greater than or equal to 65536, the number is interpreted as a Unix timestamp (and not as the number of days) and is rounded to the date. This allows support for the common occurrence of writing ‘toDate(unix_timestamp)’, which otherwise would be an error and would require writing the more cumbersome ‘toDate(toDateTime(unix_timestamp))’.

Conversion between a date and a date with time is performed the natural way: by adding a null time or dropping the time.

Conversion between numeric types uses the same rules as assignments between different numeric types in C++.

Additionally, the toString function of the DateTime argument can take a second String argument containing the name of the time zone, for example `Asia/Yekaterinburg`. In this case, the time is formatted according to the specified time zone.

``` sql
SELECT
    now() AS now_local,
    toString(now(), 'Asia/Yekaterinburg') AS now_yekat
```

``` text
┌───────────now_local─┬─now_yekat───────────┐
│ 2016-06-15 00:11:21 │ 2016-06-15 02:11:21 │
└─────────────────────┴─────────────────────┘
```

See also the `toUnixTimestamp` function.

## toFixedString(s, N) {#tofixedstrings-n}

Converts a String type argument to a FixedString(N) type (a string of fixed length N). N must be a constant.
If the string has fewer bytes than N, it is padded with null bytes to the right. If the string has more bytes than N, an exception is thrown.

## toStringCutToZero(s) {#tostringcuttozeros}

Accepts a String or FixedString argument. Returns the string with the content truncated at the first zero byte found.

Example:

``` sql
SELECT toFixedString('foo', 8) AS s, toStringCutToZero(s) AS s_cut
```

``` text
┌─s─────────────┬─s_cut─┐
│ foo\0\0\0\0\0 │ foo   │
└───────────────┴───────┘
```

``` sql
SELECT toFixedString('foo\0bar', 8) AS s, toStringCutToZero(s) AS s_cut
```

``` text
┌─s──────────┬─s_cut─┐
│ foo\0bar\0 │ foo   │
└────────────┴───────┘
```

## reinterpretAsUInt(8\|16\|32\|64) {#reinterpretasuint8163264}

## reinterpretAsInt(8\|16\|32\|64) {#reinterpretasint8163264}

## reinterpretAsFloat(32\|64) {#reinterpretasfloat3264}

## reinterpretAsDate {#reinterpretasdate}

## reinterpretAsDateTime {#reinterpretasdatetime}

These functions accept a string and interpret the bytes placed at the beginning of the string as a number in host order (little endian). If the string is not long enough, the functions work as if the string were padded with the necessary number of null bytes. If the string is longer than needed, the extra bytes are ignored. A date is interpreted as the number of days since the beginning of the Unix epoch, and a date with time is interpreted as the number of seconds since the beginning of the Unix epoch.
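A small illustrative query (not in the original page); `'a'` is the byte 0x61, so the bytes of the string are read directly as an integer in little-endian order:

``` sql
SELECT
    reinterpretAsUInt8('a') AS one_byte,      -- 0x61 = 97
    reinterpretAsUInt16('ab') AS two_bytes    -- 0x6261 = 25185 (little endian)
```

``` text
┌─one_byte─┬─two_bytes─┐
│       97 │     25185 │
└──────────┴───────────┘
```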
## reinterpretAsString {#type_conversion_functions-reinterpretAsString}

This function accepts a number, or a date, or a date with time, and returns a string containing bytes representing the corresponding value in host order (little endian). Null bytes are dropped from the end. For example, a UInt32-type value of 255 is a string that is one byte long.

## reinterpretAsFixedString {#reinterpretasfixedstring}

This function accepts a number, or a date, or a date with time, and returns a FixedString containing bytes representing the corresponding value in host order (little endian). Null bytes are dropped from the end. For example, a UInt32-type value of 255 is a FixedString that is one byte long.

## CAST(x, T) {#type_conversion_function-cast}

Converts ‘x’ to the ‘T’ data type. The syntax CAST(x AS t) is also supported.

Example:

``` sql
SELECT
    '2016-06-15 23:00:00' AS timestamp,
    CAST(timestamp AS DateTime) AS datetime,
    CAST(timestamp AS Date) AS date,
    CAST(timestamp, 'String') AS string,
    CAST(timestamp, 'FixedString(22)') AS fixed_string
```

``` text
┌─timestamp───────────┬────────────datetime─┬───────date─┬─string──────────────┬─fixed_string──────────────┐
│ 2016-06-15 23:00:00 │ 2016-06-15 23:00:00 │ 2016-06-15 │ 2016-06-15 23:00:00 │ 2016-06-15 23:00:00\0\0\0 │
└─────────────────────┴─────────────────────┴────────────┴─────────────────────┴───────────────────────────┘
```

Conversion to FixedString(N) only works for arguments of type String or FixedString(N).

Type conversion to [Nullable](../../sql-reference/data-types/nullable.md) and back is supported. Example:

``` sql
SELECT toTypeName(x) FROM t_null
```

``` text
┌─toTypeName(x)─┐
│ Int8          │
│ Int8          │
└───────────────┘
```

``` sql
SELECT toTypeName(CAST(x, 'Nullable(UInt16)')) FROM t_null
```

``` text
┌─toTypeName(CAST(x, 'Nullable(UInt16)'))─┐
│ Nullable(UInt16)                        │
│ Nullable(UInt16)                        │
└─────────────────────────────────────────┘
```

## toInterval(year\|quarter\|month\|week\|day\|hour\|minute\|second) {#function-tointerval}

Converts a Number type argument to an [Interval](../../sql-reference/data-types/special-data-types/interval.md) data type.

**Syntax**

``` sql
toIntervalSecond(number)
toIntervalMinute(number)
toIntervalHour(number)
toIntervalDay(number)
toIntervalWeek(number)
toIntervalMonth(number)
toIntervalQuarter(number)
toIntervalYear(number)
```

**Parameters**

- `number` — Duration of interval. Positive integer number.

**Returned values**

- The value in the `Interval` data type.

**Example**

``` sql
WITH
    toDate('2019-01-01') AS date,
    INTERVAL 1 WEEK AS interval_week,
    toIntervalWeek(1) AS interval_to_week
SELECT
    date + interval_week,
    date + interval_to_week
```

``` text
┌─plus(date, interval_week)─┬─plus(date, interval_to_week)─┐
│                2019-01-08 │                   2019-01-08 │
└───────────────────────────┴──────────────────────────────┘
```
## parseDateTimeBestEffort {#parsedatetimebesteffort}

Converts a date and time in the [String](../../sql-reference/data-types/string.md) representation to the [DateTime](../../sql-reference/data-types/datetime.md#data_type-datetime) data type.

The function parses [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601), [RFC 1123 - 5.2.14 RFC-822 Date and Time Specification](https://tools.ietf.org/html/rfc1123#page-55), ClickHouse’s, and some other date and time formats.

**Syntax**

``` sql
parseDateTimeBestEffort(time_string [, time_zone]);
```

**Parameters**

- `time_string` — String containing a date and time to convert. [String](../../sql-reference/data-types/string.md).
- `time_zone` — Time zone. The function parses `time_string` according to the time zone. [String](../../sql-reference/data-types/string.md).

**Supported non-standard formats**

- A string containing a 9–10 digit [unix timestamp](https://en.wikipedia.org/wiki/Unix_time).
- A string with a date and a time component: `YYYYMMDDhhmmss`, `DD/MM/YYYY hh:mm:ss`, `DD-MM-YY hh:mm`, `YYYY-MM-DD hh:mm:ss`, etc.
- A string with a date, but no time component: `YYYY`, `YYYYMM`, `YYYY*MM`, `DD/MM/YYYY`, `DD-MM-YY`, etc.
- A string with a day and time: `DD`, `DD hh`, `DD hh:mm`. In this case, `YYYY-MM` is substituted as `2000-01`.
- A string that includes the date and time along with time zone offset information: `YYYY-MM-DD hh:mm:ss ±h:mm`, etc. For example, `2020-12-12 17:36:00 -5:00`.

For all formats with a separator, the function parses month names expressed by their full name or by the first three letters of a month name. Examples: `24/DEC/18`, `24-Dec-18`, `01-September-2018`.

**Returned value**

- `time_string` converted to the `DateTime` data type.

**Examples**

Query:

``` sql
SELECT parseDateTimeBestEffort('12/12/2020 12:12:57')
AS parseDateTimeBestEffort;
```

Result:

``` text
┌─parseDateTimeBestEffort─┐
│     2020-12-12 12:12:57 │
└─────────────────────────┘
```

Query:

``` sql
SELECT parseDateTimeBestEffort('Sat, 18 Aug 2018 07:22:16 GMT', 'Europe/Moscow')
AS parseDateTimeBestEffort
```

Result:

``` text
┌─parseDateTimeBestEffort─┐
│     2018-08-18 10:22:16 │
└─────────────────────────┘
```

Query:

``` sql
SELECT parseDateTimeBestEffort('1284101485')
AS parseDateTimeBestEffort
```

Result (the exact value depends on the server time zone; shown here for UTC):

``` text
┌─parseDateTimeBestEffort─┐
│     2010-09-10 06:51:25 │
└─────────────────────────┘
```

Query:

``` sql
SELECT parseDateTimeBestEffort('2018-12-12 10:12:12')
AS parseDateTimeBestEffort
```

Result:

``` text
┌─parseDateTimeBestEffort─┐
│     2018-12-12 10:12:12 │
└─────────────────────────┘
```

Query:

``` sql
SELECT parseDateTimeBestEffort('10 20:19')
```

Result:

``` text
┌─parseDateTimeBestEffort('10 20:19')─┐
│                 2000-01-10 20:19:00 │
└─────────────────────────────────────┘
```

**See Also**

- [ISO 8601 announcement by @xkcd](https://xkcd.com/1179/)
- [RFC 1123](https://tools.ietf.org/html/rfc1123)
- [toDate](#todate)
- [toDateTime](#todatetime)

## parseDateTimeBestEffortOrNull {#parsedatetimebesteffortornull}

Same as [parseDateTimeBestEffort](#parsedatetimebesteffort), except that it returns NULL when it encounters a date format that cannot be processed.

## parseDateTimeBestEffortOrZero {#parsedatetimebesteffortorzero}

Same as [parseDateTimeBestEffort](#parsedatetimebesteffort), except that it returns a zero date or a zero date with time when it encounters a date format that cannot be processed.
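Illustrative queries for the two fallback variants (not from the original page; the zero date is rendered here assuming a UTC server time zone):

``` sql
SELECT
    parseDateTimeBestEffortOrNull('not a date') AS or_null,
    parseDateTimeBestEffortOrZero('not a date') AS or_zero
```

``` text
┌─or_null─┬─────────────or_zero─┐
│    ᴺᵁᴸᴸ │ 1970-01-01 00:00:00 │
└─────────┴─────────────────────┘
```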
[Original article](https://clickhouse.tech/docs/en/query_language/functions/type_conversion_functions/)
diff --git a/docs/fr/sql-reference/functions/url-functions.md b/docs/fr/sql-reference/functions/url-functions.md
deleted file mode 100644
index 2bb2203a10b..00000000000
--- a/docs/fr/sql-reference/functions/url-functions.md
+++ /dev/null
@@ -1,209 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_priority: 54
toc_title: Working with URLs
---

# Functions for working with URLs {#functions-for-working-with-urls}

None of these functions follow the RFC. They are maximally simplified for improved performance.

## Functions that extract parts of a URL {#functions-that-extract-parts-of-a-url}

If the relevant part is not present in a URL, an empty string is returned.

### protocol {#protocol}

Extracts the protocol from a URL.

Examples of typical returned values: http, https, ftp, mailto, tel, magnet…

### domain {#domain}

Extracts the host name from a URL.

``` sql
domain(url)
```

**Parameters**

- `url` — URL. Type: [String](../../sql-reference/data-types/string.md).

The URL can be specified with or without a scheme. Examples:

``` text
svn+ssh://some.svn-hosting.com:80/repo/trunk
some.svn-hosting.com:80/repo/trunk
https://yandex.com/time/
```

For these examples, the `domain` function returns the following results:

``` text
some.svn-hosting.com
some.svn-hosting.com
yandex.com
```

**Returned values**

- Host name, if ClickHouse can parse the input string as a URL.
- Empty string, if ClickHouse cannot parse the input string as a URL.

Type: `String`.

**Example**

``` sql
SELECT domain('svn+ssh://some.svn-hosting.com:80/repo/trunk')
```

``` text
┌─domain('svn+ssh://some.svn-hosting.com:80/repo/trunk')─┐
│ some.svn-hosting.com                                    │
└─────────────────────────────────────────────────────────┘
```

### domainWithoutWWW {#domainwithoutwww}

Returns the domain with no more than one ‘www.’ removed from the beginning of it, if present.
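A short illustrative comparison (not from the original page):

``` sql
SELECT
    domain('https://www.example.com/path') AS with_www,
    domainWithoutWWW('https://www.example.com/path') AS without_www
```

``` text
┌─with_www────────┬─without_www─┐
│ www.example.com │ example.com │
└─────────────────┴─────────────┘
```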
### topLevelDomain {#topleveldomain}

Extracts the top-level domain from a URL.

``` sql
topLevelDomain(url)
```

**Parameters**

- `url` — URL. Type: [String](../../sql-reference/data-types/string.md).

The URL can be specified with or without a scheme. Examples:

``` text
svn+ssh://some.svn-hosting.com:80/repo/trunk
some.svn-hosting.com:80/repo/trunk
https://yandex.com/time/
```

**Returned values**

- Domain name, if ClickHouse can parse the input string as a URL.
- Empty string, if ClickHouse cannot parse the input string as a URL.

Type: `String`.

**Example**

``` sql
SELECT topLevelDomain('svn+ssh://www.some.svn-hosting.com:80/repo/trunk')
```

``` text
┌─topLevelDomain('svn+ssh://www.some.svn-hosting.com:80/repo/trunk')─┐
│ com                                                                │
└────────────────────────────────────────────────────────────────────┘
```

### firstSignificantSubdomain {#firstsignificantsubdomain}

Returns the “first significant subdomain”. This is a non-standard concept specific to Yandex.Metrica. The first significant subdomain is a second-level domain if it is ‘com’, ‘net’, ‘org’, or ‘co’. Otherwise, it is a third-level domain. For example, `firstSignificantSubdomain('https://news.yandex.ru/') = 'yandex'`, `firstSignificantSubdomain('https://news.yandex.com.tr/') = 'yandex'`. The list of “insignificant” second-level domains and other implementation details may change in the future.

### cutToFirstSignificantSubdomain {#cuttofirstsignificantsubdomain}

Returns the part of the domain that includes top-level subdomains up to the “first significant subdomain” (see the explanation above).

For example, `cutToFirstSignificantSubdomain('https://news.yandex.com.tr/') = 'yandex.com.tr'`.

### path {#path}

Returns the path. Example: `/top/news.html`. The path does not include the query string.

### pathFull {#pathfull}

The same as above, but including the query string and fragment. Example: `/top/news.html?page=2#comments`.

### queryString {#querystring}

Returns the query string. Example: `page=1&lr=213`. The query string does not include the initial question mark, nor `#` and everything after `#`.

### fragment {#fragment}

Returns the fragment identifier. The fragment does not include the initial hash symbol.

### queryStringAndFragment {#querystringandfragment}

Returns the query string and fragment identifier. Example: `page=1#29390`.

### extractURLParameter(URL, name) {#extracturlparameterurl-name}

Returns the value of the ‘name’ parameter in the URL, if present. Otherwise, an empty string. If there are many parameters with this name, it returns the first occurrence. This function works under the assumption that the parameter name is encoded in the URL exactly the same way as in the passed argument.

### extractURLParameters(URL) {#extracturlparametersurl}

Returns an array of name=value strings corresponding to the URL parameters. The values are not decoded in any way.

### extractURLParameterNames(URL) {#extracturlparameternamesurl}

Returns an array of name strings corresponding to the names of URL parameters. The values are not decoded in any way.

### URLHierarchy(URL) {#urlhierarchyurl}

Returns an array containing the URL, truncated at the end by the symbols /, ? in the path and query string. Consecutive separator characters are counted as one. The cut is made in the position after all consecutive separator characters.

### URLPathHierarchy(URL) {#urlpathhierarchyurl}

The same as above, but without the protocol and host in the result. The / element (root) is not included. Example: this function is used to implement tree reports of the URL in Yandex.Metrica.

``` text
URLPathHierarchy('https://example.com/browse/CONV-6788') =
[
    '/browse/',
    '/browse/CONV-6788'
]
```

### decodeURLComponent(URL) {#decodeurlcomponenturl}

Returns the decoded URL.
Example:

``` sql
SELECT decodeURLComponent('http://127.0.0.1:8123/?query=SELECT%201%3B') AS DecodedURL;
```

``` text
┌─DecodedURL─────────────────────────────┐
│ http://127.0.0.1:8123/?query=SELECT 1; │
└────────────────────────────────────────┘
```

## Functions that remove part of a URL {#functions-that-remove-part-of-a-url}

If the URL does not have anything similar, the URL remains unchanged.

### cutWWW {#cutwww}

Removes no more than one ‘www.’ from the beginning of the URL’s domain, if present.

### cutQueryString {#cutquerystring}

Removes the query string. The question mark is also removed.

### cutFragment {#cutfragment}

Removes the fragment identifier. The number sign (#) is also removed.
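An illustrative query for the removal functions above (not part of the original page):

``` sql
SELECT
    cutQueryString('http://example.com/news.html?page=2#comments') AS no_query,
    cutFragment('http://example.com/news.html?page=2#comments') AS no_fragment
```

``` text
┌─no_query──────────────────────────────┬─no_fragment─────────────────────────┐
│ http://example.com/news.html#comments │ http://example.com/news.html?page=2 │
└───────────────────────────────────────┴─────────────────────────────────────┘
```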
-
-### cutQueryStringAndFragment {#cutquerystringandfragment}
-
-Removes the query string and fragment identifier. The question mark and hash symbol are also removed.
-
-### cutURLParameter(URL, name) {#cuturlparameterurl-name}
-
-Removes the 'name' URL parameter, if present. This function works under the assumption that the parameter name is encoded in the URL exactly the same way as in the passed argument.
-
-[Original article](https://clickhouse.tech/docs/en/query_language/functions/url_functions/)
diff --git a/docs/fr/sql-reference/functions/uuid-functions.md b/docs/fr/sql-reference/functions/uuid-functions.md
deleted file mode 100644
index 9f9eb67d3e9..00000000000
--- a/docs/fr/sql-reference/functions/uuid-functions.md
+++ /dev/null
@@ -1,122 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_priority: 53
-toc_title: Working with UUID
----
-
-# Functions for working with UUID {#functions-for-working-with-uuid}
-
-The functions for working with UUID are listed below.
-
-## generateUUIDv4 {#uuid-function-generate}
-
-Generates a [UUID](../../sql-reference/data-types/uuid.md) of [version 4](https://tools.ietf.org/html/rfc4122#section-4.4).
-
-``` sql
-generateUUIDv4()
-```
-
-**Returned value**
-
-The UUID type value.
-
-**Usage example**
-
-This example demonstrates creating a table with a UUID type column and inserting a value into the table.
-
-``` sql
-CREATE TABLE t_uuid (x UUID) ENGINE=TinyLog
-
-INSERT INTO t_uuid SELECT generateUUIDv4()
-
-SELECT * FROM t_uuid
-```
-
-``` text
-┌────────────────────────────────────x─┐
-│ f4bf890f-f9dc-4332-ad5c-0c18e73f28e9 │
-└──────────────────────────────────────┘
-```
-
-## toUUID (x) {#touuid-x}
-
-Converts a String type value to a UUID type value.
-
-``` sql
-toUUID(String)
-```
-
-**Returned value**
-
-The UUID type value.
-
-**Usage example**
-
-``` sql
-SELECT toUUID('61f0c404-5cb3-11e7-907b-a6006ad3dba0') AS uuid
-```
-
-``` text
-┌─────────────────────────────────uuid─┐
-│ 61f0c404-5cb3-11e7-907b-a6006ad3dba0 │
-└──────────────────────────────────────┘
-```
-
-## UUIDStringToNum {#uuidstringtonum}
-
-Accepts a string containing 36 characters in the format `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`, and returns it as a set of bytes in a [FixedString(16)](../../sql-reference/data-types/fixedstring.md).
-
-``` sql
-UUIDStringToNum(String)
-```
-
-**Returned value**
-
-FixedString(16)
-
-**Usage examples**
-
-``` sql
-SELECT
-    '612f3c40-5d3b-217e-707b-6a546a3d7b29' AS uuid,
-    UUIDStringToNum(uuid) AS bytes
-```
-
-``` text
-┌─uuid─────────────────────────────────┬─bytes────────────┐
-│ 612f3c40-5d3b-217e-707b-6a546a3d7b29 │ a/<@];!~p{jTj={) │
-└──────────────────────────────────────┴──────────────────┘
-```
-
-## UUIDNumToString {#uuidnumtostring}
-
-Accepts a [FixedString(16)](../../sql-reference/data-types/fixedstring.md) value, and returns a string containing 36 characters in text format.
-
-``` sql
-UUIDNumToString(FixedString(16))
-```
-
-**Returned value**
-
-String.
-
-**Usage example**
-
-``` sql
-SELECT
-    'a/<@];!~p{jTj={)' AS bytes,
-    UUIDNumToString(toFixedString(bytes, 16)) AS uuid
-```
-
-``` text
-┌─bytes────────────┬─uuid─────────────────────────────────┐
-│ a/<@];!~p{jTj={) │ 612f3c40-5d3b-217e-707b-6a546a3d7b29 │
-└──────────────────┴──────────────────────────────────────┘
-```
-
-## See Also {#see-also}
-
-- [dictGetUUID](ext-dict-functions.md#ext_dict_functions-other)
-
-[Original article](https://clickhouse.tech/docs/en/query_language/functions/uuid_function/)
diff --git a/docs/fr/sql-reference/functions/ym-dict-functions.md b/docs/fr/sql-reference/functions/ym-dict-functions.md
deleted file mode 100644
index f1e4461e24a..00000000000
--- a/docs/fr/sql-reference/functions/ym-dict-functions.md
+++ /dev/null
@@ -1,155 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_priority: 59
-toc_title: Working with Yandex.Metrica dictionaries
----
-
-# Functions for working with Yandex.Metrica dictionaries {#functions-for-working-with-yandex-metrica-dictionaries}
-
-In order for the functions below to work, the server config must specify the paths and addresses for getting all the Yandex.Metrica dictionaries. The dictionaries are loaded at the first call of any of these functions. If the reference lists cannot be loaded, an exception is thrown.
-
-For information about creating reference lists, see the section "Dictionaries".
-
-## Multiple geobases {#multiple-geobases}
-
-ClickHouse supports working with multiple alternative geobases (regional hierarchies) simultaneously, in order to support various perspectives on which countries certain regions belong to.
-
-The 'clickhouse-server' config specifies the file with the regional hierarchy: `/opt/geo/regions_hierarchy.txt`
-
-Besides this file, it also searches for files nearby that have the _ symbol and any suffix appended to the name (before the file extension).
-For example, it will also find the file `/opt/geo/regions_hierarchy_ua.txt`, if present.
-
-`ua` is called the dictionary key. For a dictionary without a suffix, the key is an empty string.
-
-All the dictionaries are re-loaded at runtime (once every certain number of seconds, as defined in the builtin_dictionaries_reload_interval config parameter, or once an hour by default). However, the list of available dictionaries is defined once, when the server starts.
-
-All functions for working with regions have an optional argument at the end – the dictionary key. It is referred to as the geobase.
-Example:
-
-``` sql
-regionToCountry(RegionID) – Uses the default dictionary: /opt/geo/regions_hierarchy.txt
-regionToCountry(RegionID, '') – Uses the default dictionary: /opt/geo/regions_hierarchy.txt
-regionToCountry(RegionID, 'ua') – Uses the dictionary for the 'ua' key: /opt/geo/regions_hierarchy_ua.txt
-```
-
-### regionToCity(id[, geobase]) {#regiontocityid-geobase}
-
-Accepts a UInt32 number – the region ID from the Yandex geobase. If this region is a city or part of a city, it returns the region ID for the appropriate city. Otherwise, returns 0.
-
-### regionToArea(id[, geobase]) {#regiontoareaid-geobase}
-
-Converts a region to an area (type 5 in the geobase). In every other way, this function is the same as 'regionToCity'.
-
-``` sql
-SELECT DISTINCT regionToName(regionToArea(toUInt32(number), 'ua'))
-FROM system.numbers
-LIMIT 15
-```
-
-``` text
-┌─regionToName(regionToArea(toUInt32(number), \'ua\'))─┐
-│                                                      │
-│ Moscow and Moscow region                             │
-│ St. Petersburg and Leningrad region                  │
-│ Belgorod region                                      │
-│ Ivanovsk region                                      │
-│ Kaluga region                                        │
-│ Kostroma region                                      │
-│ Kursk region                                         │
-│ Lipetsk region                                       │
-│ Orlov region                                         │
-│ Ryazan region                                        │
-│ Smolensk region                                      │
-│ Tambov region                                        │
-│ Tver region                                          │
-│ Tula region                                          │
-└──────────────────────────────────────────────────────┘
-```
-
-### regionToDistrict(id[, geobase]) {#regiontodistrictid-geobase}
-
-Converts a region to a federal district (type 4 in the geobase). In every other way, this function is the same as 'regionToCity'.
-
-``` sql
-SELECT DISTINCT regionToName(regionToDistrict(toUInt32(number), 'ua'))
-FROM system.numbers
-LIMIT 15
-```
-
-``` text
-┌─regionToName(regionToDistrict(toUInt32(number), \'ua\'))─┐
-│                                                          │
-│ Central federal district                                 │
-│ Northwest federal district                               │
-│ South federal district                                   │
-│ North Caucases federal district                          │
-│ Privolga federal district                                │
-│ Ural federal district                                    │
-│ Siberian federal district                                │
-│ Far East federal district                                │
-│ Scotland                                                 │
-│ Faroe Islands                                            │
-│ Flemish region                                           │
-│ Brussels capital region                                  │
-│ Wallonia                                                 │
-│ Federation of Bosnia and Herzegovina                     │
-└──────────────────────────────────────────────────────────┘
-```
-
-### regionToCountry(id[, geobase]) {#regiontocountryid-geobase}
-
-Converts a region to a country. In every other way, this function is the same as 'regionToCity'.
-Example: `regionToCountry(toUInt32(213)) = 225` converts Moscow (213) to Russia (225).
-
-### regionToContinent(id[, geobase]) {#regiontocontinentid-geobase}
-
-Converts a region to a continent. In every other way, this function is the same as 'regionToCity'.
-Example: `regionToContinent(toUInt32(213)) = 10001` converts Moscow (213) to Eurasia (10001).
-
-### regionToTopContinent {#regiontotopcontinent-regiontotopcontinent}
-
-Finds the highest continent in the hierarchy for the region.
-
-**Syntax**
-
-``` sql
-regionToTopContinent(id[, geobase]);
-```
-
-**Parameters**
-
-- `id` — Region ID from the Yandex geobase. [UInt32](../../sql-reference/data-types/int-uint.md).
-- `geobase` — Dictionary key. See [Multiple geobases](#multiple-geobases). [String](../../sql-reference/data-types/string.md). Optional.
-
-**Returned value**
-
-- Identifier of the top-level continent (the latter when you climb the hierarchy of regions).
-- 0, if there is none.
-
-Type: `UInt32`.
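-
-For instance, continuing the Moscow example above (a minimal sketch; the returned identifier depends on the loaded geobase):
-
-``` sql
-SELECT regionToTopContinent(toUInt32(213)) AS top_continent;
-```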
-
-### regionToPopulation(id[, geobase]) {#regiontopopulationid-geobase}
-
-Gets the population for a region.
-The population can be recorded in files with the geobase. See the section "External dictionaries".
-If the population is not recorded for the region, it returns 0.
-In the Yandex geobase, the population might be recorded for child regions, but not for parent regions.
-
-### regionIn(lhs, rhs[, geobase]) {#regioninlhs-rhs-geobase}
-
-Checks whether a 'lhs' region belongs to a 'rhs' region. Returns a UInt8 number equal to 1 if it belongs, or 0 if it does not belong.
-The relationship is reflexive – any region also belongs to itself.
-
-### regionHierarchy(id[, geobase]) {#regionhierarchyid-geobase}
-
-Accepts a UInt32 number – the region ID from the Yandex geobase. Returns an array of region IDs consisting of the passed region and all parents along the chain.
-Example: `regionHierarchy(toUInt32(213)) = [213,1,3,225,10001,10000]`.
-
-### regionToName(id[, lang]) {#regiontonameid-lang}
-
-Accepts a UInt32 number – the region ID from the Yandex geobase. A string with the name of the language can be passed as a second argument. Supported languages are: ru, en, ua, uk, by, kz, tr. If the second argument is omitted, the language 'ru' is used. If the language is not supported, an exception is thrown. Returns a string – the name of the region in the corresponding language. If the region with the specified ID doesn't exist, an empty string is returned.
-
-`ua` and `uk` both mean Ukrainian.
-
-[Original article](https://clickhouse.tech/docs/en/query_language/functions/ym_dict_functions/)
diff --git a/docs/fr/sql-reference/index.md b/docs/fr/sql-reference/index.md
deleted file mode 100644
index 04e44892c05..00000000000
--- a/docs/fr/sql-reference/index.md
+++ /dev/null
@@ -1,20 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_folder_title: SQL Reference
-toc_hidden: true
-toc_priority: 28
-toc_title: hidden
----
-
-# SQL Reference {#sql-reference}
-
-ClickHouse supports the following types of queries:
-
-- [SELECT](statements/select/index.md)
-- [INSERT INTO](statements/insert-into.md)
-- [CREATE](statements/create.md)
-- [ALTER](statements/alter.md#query_language_queries_alter)
-- [Other types of queries](statements/misc.md)
-
-[Original article](https://clickhouse.tech/docs/en/sql-reference/)
diff --git a/docs/fr/sql-reference/operators/in.md b/docs/fr/sql-reference/operators/in.md
deleted file mode 100644
index d87fe41a04f..00000000000
--- a/docs/fr/sql-reference/operators/in.md
+++ /dev/null
@@ -1,204 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
----
-
-### IN Operators {#select-in-operators}
-
-The `IN`, `NOT IN`, `GLOBAL IN`, and `GLOBAL NOT IN` operators are covered separately, since their functionality is quite rich.
-
-The left side of the operator is either a single column or a tuple.
-
-Examples:
-
-``` sql
-SELECT UserID IN (123, 456) FROM ...
-SELECT (CounterID, UserID) IN ((34, 123), (101500, 456)) FROM ...
-```
-
-If the left side is a single column that is in the index, and the right side is a set of constants, the system uses the index for processing the query.
-
-Don't list too many values explicitly (i.e. millions). If a data set is large, put it in a temporary table (for example, see the section "External data for query processing"), then use a subquery.
-
-The right side of the operator can be a set of constant expressions, a set of tuples with constant expressions (shown in the examples above), or the name of a database table or a SELECT subquery in brackets.
-
-If the right side of the operator is the name of a table (for example, `UserID IN users`), this is equivalent to the subquery `UserID IN (SELECT * FROM users)`. Use this when working with external data that is sent along with the query. For example, the query can be sent together with a set of user IDs loaded into the 'users' temporary table, which should be filtered.
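-
-A minimal sketch of this pattern (the `hits` table and the column names here are hypothetical): materialize the filtering set as a temporary table and reference it by name on the right side of `IN`:
-
-``` sql
-CREATE TEMPORARY TABLE users (UserID UInt64);
-
-INSERT INTO users VALUES (123), (456);
-
-SELECT count() FROM hits WHERE UserID IN users;
-```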
-
-If the right side of the operator is a table name that has the Set engine (a prepared data set that is always in RAM), the data set will not be created over again for each query.
-
-The subquery may specify more than one column for filtering tuples.
-Example:
-
-``` sql
-SELECT (CounterID, UserID) IN (SELECT CounterID, UserID FROM ...) FROM ...
-```
-
-The columns to the left and right of the IN operator should have the same type.
-
-The IN operator and subquery may occur in any part of the query, including in aggregate functions and lambda functions.
-Example:
-
-``` sql
-SELECT
-    EventDate,
-    avg(UserID IN
-    (
-        SELECT UserID
-        FROM test.hits
-        WHERE EventDate = toDate('2014-03-17')
-    )) AS ratio
-FROM test.hits
-GROUP BY EventDate
-ORDER BY EventDate ASC
-```
-
-``` text
-┌──EventDate─┬────ratio─┐
-│ 2014-03-17 │        1 │
-│ 2014-03-18 │ 0.807696 │
-│ 2014-03-19 │ 0.755406 │
-│ 2014-03-20 │ 0.723218 │
-│ 2014-03-21 │ 0.697021 │
-│ 2014-03-22 │ 0.647851 │
-│ 2014-03-23 │ 0.648416 │
-└────────────┴──────────┘
-```
-
-For each day after March 17th, count the percentage of pageviews made by users who visited the site on March 17th.
-A subquery in the IN clause is always run just one time on a single server. There are no dependent subqueries.
-
-## NULL Processing {#null-processing-1}
-
-During request processing, the IN operator assumes that the result of an operation with [NULL](../syntax.md#null-literal) is always equal to `0`, regardless of whether `NULL` is on the right or left side of the operator. `NULL` values are not included in any data set, do not correspond to each other, and cannot be compared.
-
-Here is an example with the `t_null` table:
-
-``` text
-┌─x─┬────y─┐
-│ 1 │ ᴺᵁᴸᴸ │
-│ 2 │    3 │
-└───┴──────┘
-```
-
-Running the query `SELECT x FROM t_null WHERE y IN (NULL,3)` gives you the following result:
-
-``` text
-┌─x─┐
-│ 2 │
-└───┘
-```
-
-You can see that the row in which `y = NULL` is thrown out of the query results. This is because ClickHouse can't decide whether `NULL` is included in the `(NULL,3)` set, returns `0` as the result of the operation, and `SELECT` excludes this row from the final output.
-
-``` sql
-SELECT y IN (NULL, 3)
-FROM t_null
-```
-
-``` text
-┌─in(y, tuple(NULL, 3))─┐
-│                     0 │
-│                     1 │
-└───────────────────────┘
-```
-
-## Distributed Subqueries {#select-distributed-subqueries}
-
-There are two options for IN-s with subqueries (similar to JOINs): normal `IN` / `JOIN` and `GLOBAL IN` / `GLOBAL JOIN`. They differ in how they are run for distributed query processing.
-
-!!! attention "Attention"
-    Remember that the algorithms described below may work differently depending on the [setting](../../operations/settings/settings.md) `distributed_product_mode` setting.
-
-When using the regular IN, the query is sent to remote servers, and each of them runs the subqueries in the `IN` or `JOIN` clause.
-
-When using `GLOBAL IN` / `GLOBAL JOINs`, first all the subqueries are run for `GLOBAL IN` / `GLOBAL JOINs`, and the results are collected in temporary tables. Then the temporary tables are sent to each remote server, where the queries are run using this temporary data.
-
-For a non-distributed query, use the regular `IN` / `JOIN`.
-
-Be careful when using subqueries in the `IN` / `JOIN` clauses for distributed query processing.
-
-Let's look at some examples. Assume that each server in the cluster has a normal **local_table**. Each server also has a **distributed_table** table with the **Distributed** type, which looks at all the servers in the cluster.
-
-For a query to the **distributed_table**, the query will be sent to all the remote servers and run on them using the **local_table**.
-
-For example, the query
-
-``` sql
-SELECT uniq(UserID) FROM distributed_table
-```
-
-will be sent to all remote servers as
-
-``` sql
-SELECT uniq(UserID) FROM local_table
-```
-
-and run on each of them in parallel, until it reaches the stage where intermediate results can be combined. Then the intermediate results will be returned to the requestor server and merged on it, and the final result will be sent to the client.
-
-Now let's examine a query with IN:
-
-``` sql
-SELECT uniq(UserID) FROM distributed_table WHERE CounterID = 101500 AND UserID IN (SELECT UserID FROM local_table WHERE CounterID = 34)
-```
-
-- Calculation of the intersection of audiences of two sites.
-
-This query will be sent to all remote servers as
-
-``` sql
-SELECT uniq(UserID) FROM local_table WHERE CounterID = 101500 AND UserID IN (SELECT UserID FROM local_table WHERE CounterID = 34)
-```
-
-In other words, the data set in the IN clause will be collected on each server independently, only across the data that is stored locally on each of the servers.
-
-This will work correctly and optimally if you are prepared for this case and have spread data across the cluster servers such that the data for a single UserID resides entirely on a single server. In this case, all the necessary data will be available locally on each server. Otherwise, the result will be inaccurate. We refer to this variation of the query as "local IN".
-
-To correct how the query works when data is spread randomly across the cluster servers, you could specify **distributed_table** inside a subquery. The query would look like this:
-
-``` sql
-SELECT uniq(UserID) FROM distributed_table WHERE CounterID = 101500 AND UserID IN (SELECT UserID FROM distributed_table WHERE CounterID = 34)
-```
-
-This query will be sent to all remote servers as
-
-``` sql
-SELECT uniq(UserID) FROM local_table WHERE CounterID = 101500 AND UserID IN (SELECT UserID FROM distributed_table WHERE CounterID = 34)
-```
-
-The subquery will begin running on each remote server. Since the subquery uses a distributed table, the subquery that is on each remote server will be resent to every remote server as
-
-``` sql
-SELECT UserID FROM local_table WHERE CounterID = 34
-```
-
-For example, if you have a cluster of 100 servers, executing the entire query will require 10,000 elementary requests, which is generally considered unacceptable.
-
-In such cases, you should always use GLOBAL IN instead of IN.
-Let's look at how it works for the query
-
-``` sql
-SELECT uniq(UserID) FROM distributed_table WHERE CounterID = 101500 AND UserID GLOBAL IN (SELECT UserID FROM distributed_table WHERE CounterID = 34)
-```
-
-The requestor server will run the subquery
-
-``` sql
-SELECT UserID FROM distributed_table WHERE CounterID = 34
-```
-
-and the result will be put in a temporary table in RAM. Then the request will be sent to each remote server as
-
-``` sql
-SELECT uniq(UserID) FROM local_table WHERE CounterID = 101500 AND UserID GLOBAL IN _data1
-```
-
-and the temporary table `_data1` will be sent to every remote server together with the query (the name of the temporary table is implementation-defined).
-
-This is more optimal than using the normal IN. However, keep the following points in mind:
-
-1.  When creating a temporary table, the data is not made unique. To reduce the volume of data transmitted over the network, specify DISTINCT in the subquery. (You don't need to do this for a normal IN.)
-2.  The temporary table will be sent to all the remote servers. Transmission does not account for network topology. For example, if 10 remote servers reside in a datacenter that is very remote in relation to the requestor server, the data will be sent 10 times over the channel to the remote datacenter. Try to avoid large data sets when using GLOBAL IN.
-3.  When transmitting data to remote servers, restrictions on network bandwidth are not configurable. You might overload the network.
-4.  Try to distribute data across servers so that you don't need to use GLOBAL IN on a regular basis.
-5.  If you need to use GLOBAL IN often, plan the location of the ClickHouse cluster so that a single group of replicas resides in no more than one datacenter with a fast network between them, so that a query can be processed entirely within a single datacenter.
-
-It also makes sense to specify a local table in the `GLOBAL IN` clause, in case this local table is only available on the requestor server and you want to use data from it on remote servers.
diff --git a/docs/fr/sql-reference/operators/index.md b/docs/fr/sql-reference/operators/index.md
deleted file mode 100644
index 1635c7eece3..00000000000
--- a/docs/fr/sql-reference/operators/index.md
+++ /dev/null
@@ -1,277 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_priority: 37
-toc_title: Operators
----
-
-# Operators {#operators}
-
-ClickHouse transforms operators to their corresponding functions at the query parsing stage according to their priority, precedence, and associativity.
-
-## Access Operators {#access-operators}
-
-`a[N]` – Access to an element of an array. The `arrayElement(a, N)` function.
-
-`a.N` – Access to a tuple element. The `tupleElement(a, N)` function.
-
-## Numeric Negation Operator {#numeric-negation-operator}
-
-`-a` – The `negate(a)` function.
-
-## Multiplication and Division Operators {#multiplication-and-division-operators}
-
-`a * b` – The `multiply(a, b)` function.
-
-`a / b` – The `divide(a, b)` function.
-
-`a % b` – The `modulo(a, b)` function.
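-
-Because operators are rewritten to these functions during parsing, the operator form and the explicit function call are interchangeable. A minimal illustration:
-
-``` sql
-SELECT
-    10 % 3 AS with_operator,
-    modulo(10, 3) AS with_function;
-```
-
-Both columns return `1`.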
-
-## Addition and Subtraction Operators {#addition-and-subtraction-operators}
-
-`a + b` – The `plus(a, b)` function.
-
-`a - b` – The `minus(a, b)` function.
-
-## Comparison Operators {#comparison-operators}
-
-`a = b` – The `equals(a, b)` function.
-
-`a == b` – The `equals(a, b)` function.
-
-`a != b` – The `notEquals(a, b)` function.
-
-`a <> b` – The `notEquals(a, b)` function.
-
-`a <= b` – The `lessOrEquals(a, b)` function.
-
-`a >= b` – The `greaterOrEquals(a, b)` function.
-
-`a < b` – The `less(a, b)` function.
-
-`a > b` – The `greater(a, b)` function.
-
-`a LIKE s` – The `like(a, b)` function.
-
-`a NOT LIKE s` – The `notLike(a, b)` function.
-
-`a BETWEEN b AND c` – The same as `a >= b AND a <= c`.
-
-`a NOT BETWEEN b AND c` – The same as `a < b OR a > c`.
-
-## Operators for Working with Data Sets {#operators-for-working-with-data-sets}
-
-*See [IN operators](in.md).*
-
-`a IN ...` – The `in(a, b)` function.
-
-`a NOT IN ...` – The `notIn(a, b)` function.
-
-`a GLOBAL IN ...` – The `globalIn(a, b)` function.
-
-`a GLOBAL NOT IN ...` – The `globalNotIn(a, b)` function.
-
-## Operators for Working with Dates and Times {#operators-datetime}
-
-### EXTRACT {#operator-extract}
-
-``` sql
-EXTRACT(part FROM date);
-```
-
-Extracts parts from a given date. For example, you can retrieve a month from a given date, or a second from a time.
-
-The `part` parameter specifies which part of the date to retrieve. The following values are available:
-
-- `DAY` — The day of the month. Possible values: 1–31.
-- `MONTH` — The number of a month. Possible values: 1–12.
-- `YEAR` — The year.
-- `SECOND` — The second. Possible values: 0–59.
-- `MINUTE` — The minute. Possible values: 0–59.
-- `HOUR` — The hour. Possible values: 0–23.
-
-The `part` parameter is case-insensitive.
-
-The `date` parameter specifies the date or the time to process. Either [Date](../../sql-reference/data-types/date.md) or [DateTime](../../sql-reference/data-types/datetime.md) type is supported.
-
-Examples:
-
-``` sql
-SELECT EXTRACT(DAY FROM toDate('2017-06-15'));
-SELECT EXTRACT(MONTH FROM toDate('2017-06-15'));
-SELECT EXTRACT(YEAR FROM toDate('2017-06-15'));
-```
-
-In the following example we create a table and insert into it a value with the `DateTime` type.
-
-``` sql
-CREATE TABLE test.Orders
-(
-    OrderId UInt64,
-    OrderName String,
-    OrderDate DateTime
-)
-ENGINE = Log;
-```
-
-``` sql
-INSERT INTO test.Orders VALUES (1, 'Jarlsberg Cheese', toDateTime('2008-10-11 13:23:44'));
-```
-
-``` sql
-SELECT
-    toYear(OrderDate) AS OrderYear,
-    toMonth(OrderDate) AS OrderMonth,
-    toDayOfMonth(OrderDate) AS OrderDay,
-    toHour(OrderDate) AS OrderHour,
-    toMinute(OrderDate) AS OrderMinute,
-    toSecond(OrderDate) AS OrderSecond
-FROM test.Orders;
-```
-
-``` text
-┌─OrderYear─┬─OrderMonth─┬─OrderDay─┬─OrderHour─┬─OrderMinute─┬─OrderSecond─┐
-│      2008 │         10 │       11 │        13 │          23 │          44 │
-└───────────┴────────────┴──────────┴───────────┴─────────────┴─────────────┘
-```
-
-You can see more examples in [tests](https://github.com/ClickHouse/ClickHouse/blob/master/tests/queries/0_stateless/00619_extract.sql).
-
-### INTERVAL {#operator-interval}
-
-Creates an [Interval](../../sql-reference/data-types/special-data-types/interval.md)-type value that should be used in arithmetic operations with [Date](../../sql-reference/data-types/date.md) and [DateTime](../../sql-reference/data-types/datetime.md)-type values.
-
-Types of intervals:
-- `SECOND`
-- `MINUTE`
-- `HOUR`
-- `DAY`
-- `WEEK`
-- `MONTH`
-- `QUARTER`
-- `YEAR`
-
-!!! warning "Warning"
-    Intervals with different types can't be combined. You can't use expressions like `INTERVAL 4 DAY 1 HOUR`. Specify intervals in units that are smaller or equal to the smallest unit of the interval, for example, `INTERVAL 25 HOUR`. You can use consecutive operations, like in the example below.
-
-Example:
-
-``` sql
-SELECT now() AS current_date_time, current_date_time + INTERVAL 4 DAY + INTERVAL 3 HOUR
-```
-
-``` text
-┌───current_date_time─┬─plus(plus(now(), toIntervalDay(4)), toIntervalHour(3))─┐
-│ 2019-10-23 11:16:28 │                                     2019-10-27 14:16:28 │
-└─────────────────────┴────────────────────────────────────────────────────────┘
-```
-
-**See Also**
-
-- [Interval](../../sql-reference/data-types/special-data-types/interval.md) data type
-- [toInterval](../../sql-reference/functions/type-conversion-functions.md#function-tointerval) type conversion functions
-
-## Logical Negation Operator {#logical-negation-operator}
-
-`NOT a` – The `not(a)` function.
-
-## Logical AND Operator {#logical-and-operator}
-
-`a AND b` – The `and(a, b)` function.
-
-## Logical OR Operator {#logical-or-operator}
-
-`a OR b` – The `or(a, b)` function.
-
-## Conditional Operator {#conditional-operator}
-
-`a ? b : c` – The `if(a, b, c)` function.
-
-Note:
-
-The conditional operator calculates the values of b and c, then checks whether condition a is met, and then returns the corresponding value. If `b` or `c` is an [arrayJoin()](../../sql-reference/functions/array-join.md#functions_arrayjoin) function, each row will be replicated regardless of the "a" condition.
-
-## Conditional Expression {#operator_case}
-
-``` sql
-CASE [x]
-    WHEN a THEN b
-    [WHEN ... THEN ...]
-    [ELSE c]
-END
-```
-
-If `x` is specified, then the `transform(x, [a, ...], [b, ...], c)` function is used. Otherwise – `multiIf(a, b, ..., c)`.
-
-If there is no `ELSE c` clause in the expression, the default value is `NULL`.
-
-The `transform` function does not work with `NULL`.
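-
-A minimal sketch of the two rewrites (the values are arbitrary): a `CASE` with an operand becomes `transform`, while a `CASE` without one becomes `multiIf`:
-
-``` sql
-SELECT
-    CASE number % 3 WHEN 0 THEN 'zero' WHEN 1 THEN 'one' ELSE 'other' END AS with_operand,
-    CASE WHEN number < 2 THEN 'small' ELSE 'large' END AS without_operand
-FROM system.numbers
-LIMIT 5;
-```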
-
-## Concatenation Operator {#concatenation-operator}
-
-`s1 || s2` – The `concat(s1, s2)` function.
-
-## Lambda Creation Operator {#lambda-creation-operator}
-
-`x -> expr` – The `lambda(x, expr)` function.
-
-The following operators do not have a priority since they are brackets:
-
-## Array Creation Operator {#array-creation-operator}
-
-`[x1, ...]` – The `array(x1, ...)` function.
-
-## Tuple Creation Operator {#tuple-creation-operator}
-
-`(x1, x2, ...)` – The `tuple(x1, x2, ...)` function.
-
-## Associativity {#associativity}
-
-All binary operators have left associativity. For example, `1 + 2 + 3` is transformed to `plus(plus(1, 2), 3)`.
-Sometimes this doesn't work the way you expect. For example, `SELECT 4 > 2 > 3` will result in 0.
-
-For efficiency, the `and` and `or` functions accept any number of arguments. Chains of `AND` and `OR` operators are transformed into a single call of these functions.
-
-## Checking for `NULL` {#checking-for-null}
-
-ClickHouse supports the `IS NULL` and `IS NOT NULL` operators.
-
-### IS NULL {#operator-is-null}
-
-- For [Nullable](../../sql-reference/data-types/nullable.md) type values, the `IS NULL` operator returns:
-    - `1`, if the value is `NULL`.
-    - `0` otherwise.
-- For other values, the `IS NULL` operator always returns `0`.
-
-``` sql
-SELECT x+100 FROM t_null WHERE y IS NULL
-```
-
-``` text
-┌─plus(x, 100)─┐
-│          101 │
-└──────────────┘
-```
-
-### IS NOT NULL {#is-not-null}
-
-- For [Nullable](../../sql-reference/data-types/nullable.md) type values, the `IS NOT NULL` operator returns:
-    - `0`, if the value is `NULL`.
-    - `1` otherwise.
-- For other values, the `IS NOT NULL` operator always returns `1`.
-
-``` sql
-SELECT * FROM t_null WHERE y IS NOT NULL
-```
-
-``` text
-┌─x─┬─y─┐
-│ 2 │ 3 │
-└───┴───┘
-```
-
-[Original article](https://clickhouse.tech/docs/en/query_language/operators/)
diff --git a/docs/fr/sql-reference/statements/alter.md b/docs/fr/sql-reference/statements/alter.md
deleted file mode 100644
index 64fe21046a3..00000000000
--- a/docs/fr/sql-reference/statements/alter.md
+++ /dev/null
@@ -1,602 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_priority: 36
-toc_title: ALTER
----
-
-## ALTER {#query_language_queries_alter}
-
-The `ALTER` query is only supported for `*MergeTree` tables, as well as `Merge` and `Distributed`. The query has several variations.
-
-### Column Manipulations {#column-manipulations}
-
-Changing the table structure.
-
-``` sql
-ALTER TABLE [db].name [ON CLUSTER cluster] ADD|DROP|CLEAR|COMMENT|MODIFY COLUMN ...
-```
-
-In the query, specify a list of one or more comma-separated actions.
-Each action is an operation on a column.
-
-The following actions are supported:
-
-- [ADD COLUMN](#alter_add-column) — Adds a new column to the table.
-- [DROP COLUMN](#alter_drop-column) — Deletes the column.
-- [CLEAR COLUMN](#alter_clear-column) — Resets column values.
-- [COMMENT COLUMN](#alter_comment-column) — Adds a text comment to the column.
-- [MODIFY COLUMN](#alter_modify-column) — Changes column's type, default expression and TTL.
-
-These actions are described in detail below.
-
-#### ADD COLUMN {#alter_add-column}
-
-``` sql
-ADD COLUMN [IF NOT EXISTS] name [type] [default_expr] [codec] [AFTER name_after]
-```
-
-Adds a new column to the table with the specified `name`, `type`, [`codec`](create.md#codecs) and `default_expr` (see the section [Default expressions](create.md#create-default-values)).
-
-If the `IF NOT EXISTS` clause is included, the query won't return an error if the column already exists. If you specify `AFTER name_after` (the name of another column), the column is added after the specified one in the list of table columns. Otherwise, the column is added to the end of the table. Note that there is no way to add a column to the beginning of a table. For a chain of actions, `name_after` can be the name of a column that is added in one of the previous actions.
-
-Adding a column just changes the table structure, without performing any actions with data. The data doesn't appear on the disk after `ALTER`. If the data is missing for a column when reading from the table, it is filled in with default values (by performing the default expression if there is one, or using zeros or empty strings). The column appears on the disk after merging data parts (see [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md)).
-
-This approach allows us to complete the `ALTER` query instantly, without increasing the volume of old data.
-
-Example:
-
-``` sql
-ALTER TABLE visits ADD COLUMN browser String AFTER user_id
-```
-
-#### DROP COLUMN {#alter_drop-column}
-
-``` sql
-DROP COLUMN [IF EXISTS] name
-```
-
-Deletes the column with the name `name`. If the `IF EXISTS` clause is specified, the query won't return an error if the column doesn't exist.
-
-Deletes data from the file system. Since this deletes entire files, the query is completed almost instantly.
-
-Example:
-
-``` sql
-ALTER TABLE visits DROP COLUMN browser
-```
-
-#### CLEAR COLUMN {#alter_clear-column}
-
-``` sql
-CLEAR COLUMN [IF EXISTS] name IN PARTITION partition_name
-```
-
-Resets all data in a column for a specified partition. Read more about setting the partition name in the section [How to specify the partition expression](#alter-how-to-specify-part-expr).
-
-If the `IF EXISTS` clause is specified, the query won't return an error if the column doesn't exist.
-
-Example:
-
-``` sql
-ALTER TABLE visits CLEAR COLUMN browser IN PARTITION tuple()
-```
-
-#### COMMENT COLUMN {#alter_comment-column}
-
-``` sql
-COMMENT COLUMN [IF EXISTS] name 'comment'
-```
-
-Adds a comment to the column. If the `IF EXISTS` clause is specified, the query won't return an error if the column doesn't exist.
-
-Each column can have one comment. If a comment already exists for the column, a new comment overwrites the previous one.
-
-Comments are stored in the `comment_expression` column returned by the [DESCRIBE TABLE](misc.md#misc-describe-table) query.
-
-Example:
-
-``` sql
-ALTER TABLE visits COMMENT COLUMN browser 'The table shows the browser used for accessing the site.'
-```
-
-#### MODIFY COLUMN {#alter_modify-column}
-
-``` sql
-MODIFY COLUMN [IF EXISTS] name [type] [default_expr] [TTL]
-```
-
-This query changes the `name` column properties:
-
-- Type
-
-- Default expression
-
-- TTL
-
-    For examples of columns TTL modifying, see [Column TTL](../engines/table_engines/mergetree_family/mergetree.md#mergetree-column-ttl).
-
-If the `IF EXISTS` clause is specified, the query won't return an error if the column doesn't exist.
-
-When changing the type, values are converted as if the [toType](../../sql-reference/functions/type-conversion-functions.md) functions were applied to them. If only the default expression is changed, the query doesn't do anything complex, and is completed almost instantly.
-
-Example:
-
-``` sql
-ALTER TABLE visits MODIFY COLUMN browser Array(String)
-```
-
-Changing the column type is the only complex action – it changes the contents of files with data. For large tables, this may take a long time.
-
-There are several processing stages:
-
-- Preparing temporary (new) files with modified data.
-- Renaming old files.
-- Renaming the temporary (new) files to the old names.
-- Deleting the old files.
-
-Only the first stage takes time. If there is a failure at this stage, the data is not changed.
-If there is a failure during one of the successive stages, data can be restored manually. The exception is if the old files were deleted from the file system but the data for the new files did not get written to the disk and was lost.
-
-The `ALTER` query for changing columns is replicated. The instructions are saved in ZooKeeper, then each replica applies them. All `ALTER` queries are run in the same order. The query waits for the appropriate actions to be completed on the other replicas. However, a query for changing columns in a replicated table can be interrupted, and all actions will be performed asynchronously.
-
-#### ALTER Query Limitations {#alter-query-limitations}
-
-The `ALTER` query lets you create and delete separate elements (columns) in nested data structures, but not whole nested data structures. To add a nested data structure, you can add columns with a name like `name.nested_name` and the type `Array(T)`. A nested data structure is equivalent to multiple array columns with a name that has the same prefix before the dot.
-
-There is no support for deleting columns in the primary key or the sampling key (columns that are used in the `ENGINE` expression). Changing the type for columns that are included in the primary key is only possible if this change does not cause the data to be modified (for example, you are allowed to add values to an Enum or to change a type from `DateTime` to `UInt32`).
-
-If the `ALTER` query is not sufficient to make the table changes you need, you can create a new table, copy the data to it using the [INSERT SELECT](insert-into.md#insert_query_insert-select) query, then switch the tables using the [RENAME](misc.md#misc_operations-rename) query and delete the old table. You can use [clickhouse-copier](../../operations/utilities/clickhouse-copier.md) as an alternative to the `INSERT SELECT` query.
-
-The `ALTER` query blocks all reads and writes for the table. In other words, if a long `SELECT` is running at the time of the `ALTER` query, the `ALTER` query will wait for it to complete. At the same time, all new queries to the same table will wait while this `ALTER` is running.
-
-For tables that don't store data themselves (such as `Merge` and `Distributed`), `ALTER` just changes the table structure, and does not change the structure of subordinate tables. For example, when running ALTER for a `Distributed` table, you will also need to run `ALTER` for the tables on all remote servers.
-
-### Manipulations with Key Expressions {#manipulations-with-key-expressions}
-
-The following command is supported:
-
-``` sql
-MODIFY ORDER BY new_expression
-```
-
-It only works for tables in the [`MergeTree`](../../engines/table-engines/mergetree-family/mergetree.md) family (including
-[replicated](../../engines/table-engines/mergetree-family/replication.md) tables). The command changes the
-[sorting key](../../engines/table-engines/mergetree-family/mergetree.md) of the table
-to `new_expression` (an expression or a tuple of expressions). The primary key remains the same.
-
-The command is lightweight in a sense that it only changes metadata.
-To keep the property that data part rows
-are ordered by the sorting key expression, you cannot add expressions containing existing columns
-to the sorting key (only columns added by the `ADD COLUMN` command in the same `ALTER` query).
-
-### Manipulations with Data Skipping Indices {#manipulations-with-data-skipping-indices}
-
-It only works for tables in the [`*MergeTree`](../../engines/table-engines/mergetree-family/mergetree.md) family (including
-[replicated](../../engines/table-engines/mergetree-family/replication.md) tables). The following operations
-are available:
-
-- `ALTER TABLE [db].name ADD INDEX name expression TYPE type GRANULARITY value AFTER name [AFTER name2]` - Adds the index description to the table's metadata.
-
-- `ALTER TABLE [db].name DROP INDEX name` - Removes the index description from the table's metadata and deletes the index files from the disk.
-
-These commands are lightweight in a sense that they only change metadata or remove files.
-Also, they are replicated (synchronizing indices metadata through ZooKeeper).
-
-### Manipulations with Constraints {#manipulations-with-constraints}
-
-See more on [constraints](create.md#constraints).
-
-Constraints can be added or deleted using the following syntax:
-
-``` sql
-ALTER TABLE [db].name ADD CONSTRAINT constraint_name CHECK expression;
-ALTER TABLE [db].name DROP CONSTRAINT constraint_name;
-```
-
-Queries will add or remove metadata about constraints from the table, so they are processed immediately.
-
-The constraint check *will not be executed* on existing data if it was added.
-
-All changes on replicated tables are broadcast to ZooKeeper and will be applied on other replicas as well.
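-
-A minimal illustrative sketch, reusing the hypothetical `visits` table from the column examples above:
-
-``` sql
-ALTER TABLE visits ADD CONSTRAINT browser_not_empty CHECK browser != '';
-ALTER TABLE visits DROP CONSTRAINT browser_not_empty;
-```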
-
-### Manipulations with Partitions and Parts {#alter_manipulations-with-partitions}
-
-The following operations with [partitions](../../engines/table-engines/mergetree-family/custom-partitioning-key.md) are available:
-
-- [DETACH PARTITION](#alter_detach-partition) – Moves a partition to the `detached` directory and forgets it.
-- [DROP PARTITION](#alter_drop-partition) – Deletes a partition.
-- [ATTACH PART\|PARTITION](#alter_attach-partition) – Adds a part or partition from the `detached` directory to the table.
-- [ATTACH PARTITION FROM](#alter_attach-partition-from) – Copies the data partition from one table to another and adds.
-- [REPLACE PARTITION](#alter_replace-partition) – Copies the data partition from one table to another and replaces.
-- [MOVE PARTITION TO TABLE](#alter_move_to_table-partition) – Moves the data partition from one table to another.
-- [CLEAR COLUMN IN PARTITION](#alter_clear-column-partition) – Resets the value of a specified column in a partition.
-- [CLEAR INDEX IN PARTITION](#alter_clear-index-partition) – Resets the specified secondary index in a partition.
-- [FREEZE PARTITION](#alter_freeze-partition) – Creates a backup of a partition.
-- [FETCH PARTITION](#alter_fetch-partition) – Downloads a partition from another server.
-- [MOVE PARTITION\|PART](#alter_move-partition) – Moves a partition/data part to another disk or volume.
-
-#### DETACH PARTITION {#alter_detach-partition}
-
-``` sql
-ALTER TABLE table_name DETACH PARTITION partition_expr
-```
-
-Moves all data for the specified partition to the `detached` directory. The server forgets about the detached data partition as if it does not exist. The server won't know about this data until you make the [ATTACH](#alter_attach-partition) query.
-
-Example:
-
-``` sql
-ALTER TABLE visits DETACH PARTITION 201901
-```
-
-Read about setting the partition expression in the section [How to specify the partition expression](#alter-how-to-specify-part-expr).
-
-After the query is executed, you can do whatever you want with the data in the `detached` directory — delete it from the file system, or just leave it.
-
-This query is replicated – it moves the data to the `detached` directory on all replicas. Note that you can execute this query only on a leader replica. To find out if a replica is a leader, perform the `SELECT` query to the [system.replicas](../../operations/system-tables.md#system_tables-replicas) table. Alternatively, it is easier to make a `DETACH` query on all replicas - all the replicas throw an exception, except the leader replica.
-
-#### DROP PARTITION {#alter_drop-partition}
-
-``` sql
-ALTER TABLE table_name DROP PARTITION partition_expr
-```
-
-Deletes the specified partition from the table. This query tags the partition as inactive and deletes the data completely, approximately in 10 minutes.
-
-Read about setting the partition expression in the section [How to specify the partition expression](#alter-how-to-specify-part-expr).
-
-The query is replicated – it deletes data on all replicas.
-
-#### DROP DETACHED PARTITION\|PART {#alter_drop-detached}
-
-``` sql
-ALTER TABLE table_name DROP DETACHED PARTITION|PART partition_expr
-```
-
-Removes the specified part or all parts of the specified partition from `detached`.
-Read more about setting the partition expression in the section [How to specify the partition expression](#alter-how-to-specify-part-expr).
-
-#### ATTACH PARTITION\|PART {#alter_attach-partition}
-
-``` sql
-ALTER TABLE table_name ATTACH PARTITION|PART partition_expr
-```
-
-Adds data to the table from the `detached` directory. It is possible to add data for an entire partition or for a separate part. Examples:
-
-``` sql
-ALTER TABLE visits ATTACH PARTITION 201901;
-ALTER TABLE visits ATTACH PART 201901_2_2_0;
-```
-
-Read more about setting the partition expression in the section [How to specify the partition expression](#alter-how-to-specify-part-expr).
-
-This query is replicated. The replica-initiator checks whether there is data in the `detached` directory. If data exists, the query checks its integrity. If everything is correct, the query adds the data to the table. All the other replicas download the data from the replica-initiator.
-
-So you can put data to the `detached` directory on one replica, and use the `ALTER ... ATTACH` query to add it to the table on all replicas.
-
-#### ATTACH PARTITION FROM {#alter_attach-partition-from}
-
-``` sql
-ALTER TABLE table2 ATTACH PARTITION partition_expr FROM table1
-```
-
-This query copies the data partition from `table1` to `table2` and adds it to the existing data in `table2`.
-Note that data won't be deleted from `table1`.
-
-For the query to run successfully, the following conditions must be met:
-
-- Both tables must have the same structure.
-- Both tables must have the same partition key.
-
-#### REPLACE PARTITION {#alter_replace-partition}
-
-``` sql
-ALTER TABLE table2 REPLACE PARTITION partition_expr FROM table1
-```
-
-This query copies the data partition from `table1` to `table2` and replaces the existing partition in `table2`. Note that data won't be deleted from `table1`.
-
-For the query to run successfully, the following conditions must be met:
-
-- Both tables must have the same structure.
-- Both tables must have the same partition key.
-
-#### MOVE PARTITION TO TABLE {#alter_move_to_table-partition}
-
-``` sql
-ALTER TABLE table_source MOVE PARTITION partition_expr TO TABLE table_dest
-```
-
-This query moves the data partition from `table_source` to `table_dest`, deleting the data from `table_source`.
-
-For the query to run successfully, the following conditions must be met:
-
-- Both tables must have the same structure.
-- Both tables must have the same partition key.
-- Both tables must be the same engine family (replicated or non-replicated).
-- Both tables must have the same storage policy.
-
-#### CLEAR COLUMN IN PARTITION {#alter_clear-column-partition}
-
-``` sql
-ALTER TABLE table_name CLEAR COLUMN column_name IN PARTITION partition_expr
-```
-
-Resets all values in the specified column in a partition. If the `DEFAULT` clause was determined when creating a table, this query sets the column value to the specified default value.
-
-Example:
-
-``` sql
-ALTER TABLE visits CLEAR COLUMN hour in PARTITION 201902
-```
-
-#### FREEZE PARTITION {#alter_freeze-partition}
-
-``` sql
-ALTER TABLE table_name FREEZE [PARTITION partition_expr]
-```
-
-This query creates a local backup of a specified partition. If the `PARTITION` clause is omitted, the query creates the backup of all partitions at once.
-
-!!! note "Note"
-    The entire backup process is performed without stopping the server.
-
-Note that for old-styled tables you can specify the prefix of the partition name (for example, '2019') - then the query creates the backup for all the corresponding partitions. Read about setting the partition expression in the section [How to specify the partition expression](#alter-how-to-specify-part-expr).
-
-At the time of execution, for a data snapshot, the query creates hardlinks to the table data. Hardlinks are placed in the directory `/var/lib/clickhouse/shadow/N/...`, where:
-
-- `/var/lib/clickhouse/` is the working ClickHouse directory specified in the config.
-- `N` is the incremental number of the backup.
-
-!!! note "Note"
-    If you use [a set of disks for data storage in a table](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-multiple-volumes), the `shadow/N` directory appears on every disk, storing data parts that match the `PARTITION` expression.
-
-The same structure of directories is created inside the backup as inside `/var/lib/clickhouse/`.
-The query performs 'chmod' for all files, forbidding writing into them.
-
-After creating the backup, you can copy the data from `/var/lib/clickhouse/shadow/` to the remote server and then delete it from the local server. Note that the `ALTER t FREEZE PARTITION` query is not replicated. It creates a local backup only on the local server.
-
-The query creates the backup almost instantly (but first it waits for the current queries to the corresponding table to finish running).
-
-`ALTER TABLE t FREEZE PARTITION` copies only the data, not table metadata. To make a backup of table metadata, copy the file `/var/lib/clickhouse/metadata/database/table.sql`
-
-To restore data from a backup, do the following:

-1.  Create the table if it does not exist. To view the query, use the .sql file (replace `ATTACH` in it with `CREATE`).
-2.  Copy the data from the `data/database/table/` directory inside the backup to the `/var/lib/clickhouse/data/database/table/detached/` directory.
-3.  Run `ALTER TABLE t ATTACH PARTITION` queries to add the data to a table.
-
-Restoring from a backup does not require stopping the server.
-
-For more information about backups and restoring data, see the [Data Backup](../../operations/backup.md) section.
-
-#### CLEAR INDEX IN PARTITION {#alter_clear-index-partition}
-
-``` sql
-ALTER TABLE table_name CLEAR INDEX index_name IN PARTITION partition_expr
-```
-
-The query works similar to `CLEAR COLUMN`, but it resets an index instead of a column's data.
-
-#### FETCH PARTITION {#alter_fetch-partition}
-
-``` sql
-ALTER TABLE table_name FETCH PARTITION partition_expr FROM 'path-in-zookeeper'
-```
-
-Downloads a partition from another server. This query only works for replicated tables.
-
-The query does the following:
-
-1.  Downloads the partition from the specified shard. In 'path-in-zookeeper' you must specify a path to the shard in ZooKeeper.
-2.  Then the query puts the downloaded data to the `detached` directory of the `table_name` table. Use the [ATTACH PARTITION\|PART](#alter_attach-partition) query to add the data to the table.
-
-Example:
-
-``` sql
-ALTER TABLE users FETCH PARTITION 201902 FROM '/clickhouse/tables/01-01/visits';
-ALTER TABLE users ATTACH PARTITION 201902;
-```
-
-Note that:
-
-- The `ALTER ... FETCH PARTITION` query isn't replicated. It places the partition to the `detached` directory only on the local server.
-- The `ALTER TABLE ... ATTACH` query is replicated. It adds the data to all replicas. The data is added to one of the replicas from the `detached` directory, and to the others - from neighboring replicas.
-
-Before downloading, the system checks if the partition exists and the table structure matches. The most appropriate replica is selected automatically from the healthy replicas.
-
-Although the query is called `ALTER TABLE`, it does not change the table structure and does not immediately change the data available in the table.
-
-#### MOVE PARTITION\|PART {#alter_move-partition}
-
-Moves partitions or data parts to another volume or disk for `MergeTree`-engine tables.
-See [Using Multiple Block Devices for Data Storage](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-multiple-volumes).
-
-``` sql
-ALTER TABLE table_name MOVE PARTITION|PART partition_expr TO DISK|VOLUME 'disk_name'
-```
-
-The `ALTER TABLE t MOVE` query:
-
-- Is not replicated, because different replicas can have different storage policies.
-- Returns an error if the specified disk or volume is not configured. The query also returns an error if conditions of data moving, that are specified in the storage policy, can't be applied.
-- Can return an error in the case when the data to be moved is already being moved by a background process, a concurrent `ALTER TABLE t MOVE` query, or as a result of background data merging. A user shouldn't perform any additional actions in this case.
-
-Example:
-
-``` sql
-ALTER TABLE hits MOVE PART '20190301_14343_16206_438' TO VOLUME 'slow'
-ALTER TABLE hits MOVE PARTITION '2019-09-01' TO DISK 'fast_ssd'
-```
-
-#### How to Set the Partition Expression {#alter-how-to-specify-part-expr}
-
-You can specify the partition expression in `ALTER ... PARTITION` queries in different ways:
-
-- As a value from the `partition` column of the `system.parts` table. For example, `ALTER TABLE visits DETACH PARTITION 201901`.
-- As the expression from the table column. Constants and constant expressions are supported. For example, `ALTER TABLE visits DETACH PARTITION toYYYYMM(toDate('2019-01-25'))`.
-- Using the partition ID. Partition ID is a string identifier of the partition (human-readable, if possible) that is used as the names of partitions in the file system and in ZooKeeper. The partition ID must be specified in the `PARTITION ID` clause, in single quotes. For example, `ALTER TABLE visits DETACH PARTITION ID '201901'`.
-- In the [ALTER ATTACH PART](#alter_attach-partition) and [DROP DETACHED PART](#alter_drop-detached) queries, to specify the name of a part, use a string literal with a value from the `name` column of the [system.detached_parts](../../operations/system-tables.md#system_tables-detached_parts) table. For example, `ALTER TABLE visits ATTACH PART '201901_1_1_0'`.
-
-Usage of quotes when specifying the partition depends on the type of the partition expression. For example, for the `String` type, you have to specify its name in quotes (`'`). For the `Date` and `Int*` types, no quotes are needed.
-
-For old-style tables, you can specify the partition either as a number `201901` or a string `'201901'`. The syntax for the new-style tables is stricter with types (similar to the parser for the VALUES input format).
-
-All the rules above are also true for the [OPTIMIZE](misc.md#misc_operations-optimize) query. If you need to specify the only partition when optimizing a non-partitioned table, set the expression `PARTITION tuple()`. For example:
-
-``` sql
-OPTIMIZE TABLE table_not_partitioned PARTITION tuple() FINAL;
-```
Examples of `ALTER ... PARTITION` queries are demonstrated in the tests [`00502_custom_partitioning_local`](https://github.com/ClickHouse/ClickHouse/blob/master/tests/queries/0_stateless/00502_custom_partitioning_local.sql) and [`00502_custom_partitioning_replicated_zookeeper`](https://github.com/ClickHouse/ClickHouse/blob/master/tests/queries/0_stateless/00502_custom_partitioning_replicated_zookeeper.sql).

### Manipulations with Table TTL {#manipulations-with-table-ttl}

You can change the [table TTL](../../engines/table-engines/mergetree-family/mergetree.md#mergetree-table-ttl) with a request of the following form:

``` sql
ALTER TABLE table-name MODIFY TTL ttl-expression
```

### Synchronicity of ALTER Queries {#synchronicity-of-alter-queries}

For non-replicated tables, all `ALTER` queries are performed synchronously. For replicated tables, the query just adds instructions for the appropriate actions to `ZooKeeper`, and the actions themselves are performed as soon as possible. However, the query can wait for these actions to be completed on all the replicas.

For `ALTER ... ATTACH|DETACH|DROP` queries, you can use the `replication_alter_partitions_sync` setting to set up waiting.
Possible values: `0` – do not wait; `1` – only wait for own execution (default); `2` – wait for all.

### Mutations {#alter-mutations}

Mutations are an ALTER query variant that allows changing or deleting rows in a table. In contrast to standard `UPDATE` and `DELETE` queries that are intended for point data changes, mutations are intended for heavy operations that change a lot of rows in a table. Supported for the `MergeTree` family of table engines, including the engines with replication support.

Existing tables are ready for mutations as-is (no conversion necessary), but after the first mutation is applied to a table, its metadata format becomes incompatible with previous server versions, and falling back to a previous version becomes impossible.

Currently available commands:

``` sql
ALTER TABLE [db.]table DELETE WHERE filter_expr
```

The `filter_expr` must be of type `UInt8`. The query deletes rows in the table for which this expression takes a non-zero value.

``` sql
ALTER TABLE [db.]table UPDATE column1 = expr1 [, ...] WHERE filter_expr
```

The `filter_expr` must be of type `UInt8`. This query updates values of the specified columns to the values of the corresponding expressions in rows for which `filter_expr` takes a non-zero value. Values are cast to the column type using the `CAST` operator. Updating columns that are used in the calculation of the primary or the partition key is not supported.

``` sql
ALTER TABLE [db.]table MATERIALIZE INDEX name IN PARTITION partition_name
```

The query rebuilds the secondary index `name` in the partition `partition_name`.

One query can contain several commands separated by commas (see the sketch below).
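As an illustration, here is a hedged sketch of a typical mutation workflow; the `visits` table and its columns are hypothetical and only stand in for your own schema:

``` sql
-- Hypothetical table and columns, for illustration only.
ALTER TABLE visits DELETE WHERE is_deleted = 1;

-- Values are cast to the column type; several assignments share one command.
ALTER TABLE visits UPDATE visit_count = visit_count + 1, last_seen = now() WHERE user_id = 42;

-- Mutations run asynchronously; track their progress via system.mutations.
SELECT mutation_id, command, parts_to_do, is_done
FROM system.mutations
WHERE table = 'visits';
```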
For `*MergeTree` tables, mutations execute by rewriting whole data parts. There is no atomicity: parts are substituted for mutated parts as soon as they are ready, and a `SELECT` query that started executing during a mutation will see data from parts that have already been mutated along with data from parts that have not been mutated yet.

Mutations are totally ordered by their creation order and are applied to each part in that order. Mutations are also partially ordered with inserts: data that was inserted into the table before the mutation was submitted will be mutated, and data that was inserted after that will not be mutated. Note that mutations do not block INSERTs in any way.

A mutation query returns immediately after the mutation entry is added (to ZooKeeper in the case of replicated tables, and to the filesystem for non-replicated tables). The mutation itself executes asynchronously using the system profile settings. To track the progress of mutations, you can use the [`system.mutations`](../../operations/system-tables.md#system_tables-mutations) table. A mutation that was successfully submitted will continue to execute even if ClickHouse servers are restarted. There is no way to roll back the mutation once it is submitted, but if the mutation is stuck for some reason it can be cancelled with the [`KILL MUTATION`](misc.md#kill-mutation) query.

Entries for finished mutations are not deleted right away (the number of preserved entries is determined by the `finished_mutations_to_keep` storage engine parameter). Older mutation entries are deleted.

## ALTER USER {#alter-user-statement}

Changes ClickHouse user accounts.

### Syntax {#alter-user-syntax}

``` sql
ALTER USER [IF EXISTS] name [ON CLUSTER cluster_name]
    [RENAME TO new_name]
    [IDENTIFIED [WITH {PLAINTEXT_PASSWORD|SHA256_PASSWORD|DOUBLE_SHA1_PASSWORD}] BY {'password'|'hash'}]
    [[ADD|DROP] HOST {LOCAL | NAME 'name' | REGEXP 'name_regexp' | IP 'address' | LIKE 'pattern'} [,...] | ANY | NONE]
    [DEFAULT ROLE role [,...] | ALL | ALL EXCEPT role [,...] ]
    [SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [READONLY|WRITABLE] | PROFILE 'profile_name'] [,...]
```

### Description {#alter-user-dscr}

To use `ALTER USER` you must have the [ALTER USER](grant.md#grant-access-management) privilege.

### Examples {#alter-user-examples}

Set assigned roles as default:

``` sql
ALTER USER user DEFAULT ROLE role1, role2
```

If roles are not previously assigned to a user, ClickHouse throws an exception.

Set all the assigned roles as default:

``` sql
ALTER USER user DEFAULT ROLE ALL
```

If a role is assigned to a user in the future, it will become default automatically.

Set all the assigned roles as default, excepting `role1` and `role2`:

``` sql
ALTER USER user DEFAULT ROLE ALL EXCEPT role1, role2
```

## ALTER ROLE {#alter-role-statement}

Changes roles.

### Syntax {#alter-role-syntax}

``` sql
ALTER ROLE [IF EXISTS] name [ON CLUSTER cluster_name]
    [RENAME TO new_name]
    [SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [READONLY|WRITABLE] | PROFILE 'profile_name'] [,...]
```
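For example, a hedged sketch (the role names and the setting constraint are hypothetical):

``` sql
-- Hypothetical role names, for illustration only.
ALTER ROLE accountant RENAME TO bookkeeper;
ALTER ROLE bookkeeper SETTINGS max_memory_usage = 10000000000 READONLY;
```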
## ALTER ROW POLICY {#alter-row-policy-statement}

Changes the row policy.

### Syntax {#alter-row-policy-syntax}

``` sql
ALTER [ROW] POLICY [IF EXISTS] name [ON CLUSTER cluster_name] ON [database.]table
    [RENAME TO new_name]
    [AS {PERMISSIVE | RESTRICTIVE}]
    [FOR SELECT]
    [USING {condition | NONE}][,...]
    [TO {role [,...] | ALL | ALL EXCEPT role [,...]}]
```

## ALTER QUOTA {#alter-quota-statement}

Changes quotas.

### Syntax {#alter-quota-syntax}

``` sql
ALTER QUOTA [IF EXISTS] name [ON CLUSTER cluster_name]
    [RENAME TO new_name]
    [KEYED BY {'none' | 'user name' | 'ip address' | 'client key' | 'client key or user name' | 'client key or ip address'}]
    [FOR [RANDOMIZED] INTERVAL number {SECOND | MINUTE | HOUR | DAY | WEEK | MONTH | QUARTER | YEAR}
        {MAX { {QUERIES | ERRORS | RESULT ROWS | RESULT BYTES | READ ROWS | READ BYTES | EXECUTION TIME} = number } [,...] |
        NO LIMITS | TRACKING ONLY} [,...]]
    [TO {role [,...] | ALL | ALL EXCEPT role [,...]}]
```

## ALTER SETTINGS PROFILE {#alter-settings-profile-statement}

Changes settings profiles.

### Syntax {#alter-settings-profile-syntax}

``` sql
ALTER SETTINGS PROFILE [IF EXISTS] name [ON CLUSTER cluster_name]
    [RENAME TO new_name]
    [SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [READONLY|WRITABLE] | INHERIT 'profile_name'] [,...]
```

[Original article](https://clickhouse.tech/docs/en/query_language/alter/)
diff --git a/docs/fr/sql-reference/statements/create.md b/docs/fr/sql-reference/statements/create.md
deleted file mode 100644
index e7c8040ee6e..00000000000
--- a/docs/fr/sql-reference/statements/create.md
+++ /dev/null
@@ -1,502 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_priority: 35
toc_title: CREATE
---

# CREATE Queries {#create-queries}

## CREATE DATABASE {#query-language-create-database}

Creates a database.

``` sql
CREATE DATABASE [IF NOT EXISTS] db_name [ON CLUSTER cluster] [ENGINE = engine(...)]
```

### Clauses {#clauses}

- `IF NOT EXISTS`
    If the `db_name` database already exists, then ClickHouse does not create a new database and:

    - Does not throw an exception if the clause is specified.
    - Throws an exception if the clause is not specified.

- `ON CLUSTER`
    ClickHouse creates the `db_name` database on all the servers of a specified cluster.

- `ENGINE`

    - [MySQL](../../engines/database-engines/mysql.md)
        Allows you to retrieve data from the remote MySQL server.
        By default, ClickHouse uses its own [database engine](../../engines/database-engines/index.md).

## CREATE TABLE {#create-table-query}

The `CREATE TABLE` query can have several forms.

``` sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
    name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1] [compression_codec] [TTL expr1],
    name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2] [compression_codec] [TTL expr2],
    ...
) ENGINE = engine
```

Creates a table named ‘name’ in the ‘db’ database, or the current database if ‘db’ is not set, with the structure specified in brackets and the ‘engine’ engine.
The structure of the table is a list of column descriptions. If indexes are supported by the engine, they are indicated as parameters for the table engine.

A column description is `name type` in the simplest case. For example: `RegionID UInt32`. A complete minimal sketch of this form is shown below.
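For orientation, a hedged minimal example of this first form; the table name, columns, and sorting key are hypothetical:

``` sql
CREATE TABLE IF NOT EXISTS visits
(
    EventDate Date,
    RegionID UInt32,
    Duration UInt32
) ENGINE = MergeTree()
ORDER BY (EventDate, RegionID)
```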
Expressions can also be defined for default values (see below).

``` sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name AS [db2.]name2 [ENGINE = engine]
```

Creates a table with the same structure as another table. You can specify a different engine for the table. If the engine is not specified, the same engine will be used as for the `db2.name2` table.

``` sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name AS table_function()
```

Creates a table with the structure and data returned by a [table function](../table-functions/index.md#table-functions).

``` sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name ENGINE = engine AS SELECT ...
```

Creates a table with a structure like the result of the `SELECT` query, with the ‘engine’ engine, and fills it with data from SELECT.

In all cases, if `IF NOT EXISTS` is specified, the query does not return an error if the table already exists. In this case, the query does nothing.

There can be other clauses after the `ENGINE` clause in the query. See the detailed documentation on how to create tables in the descriptions of [table engines](../../engines/table-engines/index.md#table_engines).

### Default Values {#create-default-values}

The column description can specify an expression for a default value, in one of the following ways: `DEFAULT expr`, `MATERIALIZED expr`, `ALIAS expr`.
Example: `URLDomain String DEFAULT domain(URL)`.

If an expression for the default value is not defined, the default values will be set to zeros for numbers, empty strings for strings, empty arrays for arrays, and `1970-01-01` for dates or zero unix timestamp for dates with time. NULLs are not supported.

If the default expression is defined, the column type is optional. If there is not an explicitly defined type, the default expression type is used. Example: `EventDate DEFAULT toDate(EventTime)` – the ‘Date’ type will be used for the ‘EventDate’ column.

If the data type and default expression are defined explicitly, this expression will be cast to the specified type using type casting functions. Example: `Hits UInt32 DEFAULT 0` means the same thing as `Hits UInt32 DEFAULT toUInt32(0)`.

Default expressions may be defined as an arbitrary expression from table constants and columns. When creating and changing the table structure, it checks that expressions don't contain loops. For INSERT, it checks that expressions are resolvable – that all columns they can be calculated from have been passed.

`DEFAULT expr`

Normal default value. If the INSERT query does not specify the corresponding column, it will be filled in by computing the corresponding expression.

`MATERIALIZED expr`

Materialized expression. Such a column can't be specified for INSERT, because it is always calculated.
For an INSERT without a list of columns, these columns are not considered.
In addition, this column is not substituted when using an asterisk in a SELECT query. This is to preserve the invariant that the dump obtained using `SELECT *` can be inserted back into the table using INSERT without specifying the list of columns.

`ALIAS expr`

Synonym. Such a column is not stored in the table at all.
Its values can't be inserted into a table, and it is not substituted when using an asterisk in a SELECT query.
It can be used in SELECTs if the alias is expanded during query parsing. The three variants are illustrated in the sketch below.
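A hedged sketch combining the three variants; the table name and expressions are hypothetical:

``` sql
CREATE TABLE events
(
    EventTime DateTime,
    EventDate Date DEFAULT toDate(EventTime),  -- filled in when omitted from an INSERT
    URL String,
    Domain String MATERIALIZED domain(URL),    -- always computed; skipped by SELECT *
    Site String ALIAS domain(URL)              -- not stored; expanded at query parsing
) ENGINE = MergeTree()
ORDER BY EventDate
```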
When using the ALTER query to add new columns, old data for these columns is not written. Instead, when reading old data that does not have values for the new columns, expressions are computed on the fly by default. However, if running the expressions requires different columns that are not indicated in the query, these columns will additionally be read, but only for the blocks of data that need it.

If you add a new column to a table but later change its default expression, the values used for old data will change (for data where values were not stored on disk). Note that when running background merges, data for columns that are missing in one of the merging parts is written to the merged part.

It is not possible to set default values for elements in nested data structures.

### Constraints {#constraints}

Along with column descriptions, constraints can be defined:

``` sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
    name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1] [compression_codec] [TTL expr1],
    ...
    CONSTRAINT constraint_name_1 CHECK boolean_expr_1,
    ...
) ENGINE = engine
```

`boolean_expr_1` could be any boolean expression. If constraints are defined for the table, each of them is checked for every row of an `INSERT` query. If any constraint is not satisfied, the server will raise an exception with the constraint name and checking expression.

Adding a large amount of constraints can negatively affect the performance of big `INSERT` queries.

### TTL Expression {#ttl-expression}

Defines the storage time for values. Can be specified only for MergeTree-family tables. For a detailed description, see [TTL for columns and tables](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-ttl).

### Column Compression Codecs {#codecs}

By default, ClickHouse applies the `lz4` compression method. For the `MergeTree` engine family, you can change the default compression method in the [compression](../../operations/server-configuration-parameters/settings.md#server-settings-compression) section of a server configuration. You can also define the compression method for each individual column in the `CREATE TABLE` query.

``` sql
CREATE TABLE codec_example
(
    dt Date CODEC(ZSTD),
    ts DateTime CODEC(LZ4HC),
    float_value Float32 CODEC(NONE),
    double_value Float64 CODEC(LZ4HC(9)),
    value Float32 CODEC(Delta, ZSTD)
)
ENGINE =
...
```

If a codec is specified, the default codec does not apply. Codecs can be combined in a pipeline, for example, `CODEC(Delta, ZSTD)`. To select the best codec combination for your project, pass benchmarks similar to those described in the Altinity [New Encodings to Improve ClickHouse Efficiency](https://www.altinity.com/blog/2019/7/new-encodings-to-improve-clickhouse) article.
warning "Avertissement" - Vous ne pouvez pas décompresser les fichiers de base de données ClickHouse avec des utilitaires externes tels que `lz4`. Au lieu de cela, utilisez le spécial [clickhouse-compresseur](https://github.com/ClickHouse/ClickHouse/tree/master/programs/compressor) utilitaire. - -La Compression est prise en charge pour les moteurs de tableau suivants: - -- [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) famille. Prend en charge les codecs de compression de colonne et la sélection de la méthode de compression par défaut par [compression](../../operations/server-configuration-parameters/settings.md#server-settings-compression) paramètre. -- [Journal](../../engines/table-engines/log-family/index.md) famille. Utilise le `lz4` méthode de compression par défaut et prend en charge les codecs de compression de colonne. -- [Définir](../../engines/table-engines/special/set.md). Uniquement pris en charge la compression par défaut. -- [Rejoindre](../../engines/table-engines/special/join.md). Uniquement pris en charge la compression par défaut. - -ClickHouse prend en charge les codecs à usage commun et les codecs spécialisés. - -#### Codecs Spécialisés {#create-query-specialized-codecs} - -Ces codecs sont conçus pour rendre la compression plus efficace en utilisant des fonctionnalités spécifiques des données. Certains de ces codecs ne compressent pas les données eux-mêmes. Au lieu de cela, ils préparent les données pour un codec à usage commun, qui les compresse mieux que sans cette préparation. - -Spécialisé codecs: - -- `Delta(delta_bytes)` — Compression approach in which raw values are replaced by the difference of two neighboring values, except for the first value that stays unchanged. Up to `delta_bytes` sont utilisés pour stocker des valeurs delta, donc `delta_bytes` est la taille maximale des valeurs brutes. Possible `delta_bytes` valeurs: 1, 2, 4, 8. La valeur par défaut pour `delta_bytes` être `sizeof(type)` si égale à 1, 2, 4 ou 8. Dans tous les autres cas, c'est 1. -- `DoubleDelta` — Calculates delta of deltas and writes it in compact binary form. Optimal compression rates are achieved for monotonic sequences with a constant stride, such as time series data. Can be used with any fixed-width type. Implements the algorithm used in Gorilla TSDB, extending it to support 64-bit types. Uses 1 extra bit for 32-byte deltas: 5-bit prefixes instead of 4-bit prefixes. For additional information, see Compressing Time Stamps in [Gorilla: Une Base De Données De Séries Chronologiques Rapide, Évolutive Et En Mémoire](http://www.vldb.org/pvldb/vol8/p1816-teller.pdf). -- `Gorilla` — Calculates XOR between current and previous value and writes it in compact binary form. Efficient when storing a series of floating point values that change slowly, because the best compression rate is achieved when neighboring values are binary equal. Implements the algorithm used in Gorilla TSDB, extending it to support 64-bit types. For additional information, see Compressing Values in [Gorilla: Une Base De Données De Séries Chronologiques Rapide, Évolutive Et En Mémoire](http://www.vldb.org/pvldb/vol8/p1816-teller.pdf). -- `T64` — Compression approach that crops unused high bits of values in integer data types (including `Enum`, `Date` et `DateTime`). À chaque étape de son algorithme, le codec prend un bloc de 64 valeurs, les place dans une matrice de 64x64 bits, le transpose, recadre les bits de valeurs inutilisés et renvoie le reste sous forme de séquence. 
`DoubleDelta` and `Gorilla` codecs are used in Gorilla TSDB as the components of its compression algorithm. The Gorilla approach is effective in scenarios when there is a sequence of slowly changing values with their timestamps. Timestamps are effectively compressed by the `DoubleDelta` codec, and values are effectively compressed by the `Gorilla` codec. For example, to get an effectively stored table, you can create it in the following configuration:

``` sql
CREATE TABLE codec_example
(
    timestamp DateTime CODEC(DoubleDelta),
    slow_values Float32 CODEC(Gorilla)
)
ENGINE = MergeTree()
```

#### General-Purpose Codecs {#create-query-general-purpose-codecs}

Codecs:

- `NONE` — No compression.
- `LZ4` — Lossless [data compression algorithm](https://github.com/lz4/lz4) used by default. Applies LZ4 fast compression.
- `LZ4HC[(level)]` — LZ4 HC (high compression) algorithm with configurable level. Default level: 9. Setting `level <= 0` applies the default level. Possible levels: \[1, 12\]. Recommended level range: \[4, 9\].
- `ZSTD[(level)]` — [ZSTD compression algorithm](https://en.wikipedia.org/wiki/Zstandard) with configurable `level`. Possible levels: \[1, 22\]. Default value: 1.

High compression levels are useful for asymmetric scenarios, like compress once, decompress repeatedly. Higher levels mean better compression and higher CPU usage.

## Temporary Tables {#temporary-tables}

ClickHouse supports temporary tables which have the following characteristics:

- Temporary tables disappear when the session ends, including if the connection is lost.
- A temporary table uses the Memory engine only.
- The database can't be specified for a temporary table. It is created outside of databases.
- It's impossible to create a temporary table with a distributed DDL query on all cluster servers (by using `ON CLUSTER`): this table exists only in the current session.
- If a temporary table has the same name as another one and a query specifies the table name without specifying the database, the temporary table will be used.
- For distributed query processing, temporary tables used in a query are passed to remote servers.

To create a temporary table, use the following syntax:

``` sql
CREATE TEMPORARY TABLE [IF NOT EXISTS] table_name
(
    name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
    name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2],
    ...
)
```

In most cases, temporary tables are not created manually, but when using external data for a query, or for distributed `(GLOBAL) IN`. For more information, see the appropriate sections.

It's possible to use tables with [ENGINE = Memory](../../engines/table-engines/special/memory.md) instead of temporary tables. A brief usage sketch follows.
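A hedged usage sketch; the table name and data are hypothetical:

``` sql
CREATE TEMPORARY TABLE top_users (user_id UInt64, visits UInt32);

INSERT INTO top_users VALUES (1, 100), (2, 42);

-- Visible only in the current session; stored in the Memory engine.
SELECT user_id FROM top_users WHERE visits > 50;
```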
## Distributed DDL Queries (ON CLUSTER Clause) {#distributed-ddl-queries-on-cluster-clause}

The `CREATE`, `DROP`, `ALTER`, and `RENAME` queries support distributed execution on a cluster.
For example, the following query creates the `all_hits` `Distributed` table on each host in `cluster`:

``` sql
CREATE TABLE IF NOT EXISTS all_hits ON CLUSTER cluster (p Date, i Int32) ENGINE = Distributed(cluster, default, hits)
```

In order to run these queries correctly, each host must have the same cluster definition (to simplify syncing configs, you can use substitutions from ZooKeeper). They must also connect to the ZooKeeper servers.
The local version of the query will eventually be executed on each host of the cluster, even if some hosts are currently not available. The order for executing queries within a single host is guaranteed.

## CREATE VIEW {#create-view}

``` sql
CREATE [MATERIALIZED] VIEW [IF NOT EXISTS] [db.]table_name [TO[db.]name] [ENGINE = engine] [POPULATE] AS SELECT ...
```

Creates a view. There are two types of views: normal and materialized.

Normal views don't store any data, but just perform a read from another table. In other words, a normal view is nothing more than a saved query. When reading from a view, this saved query is used as a subquery in the FROM clause.

As an example, assume you've created a view:

``` sql
CREATE VIEW view AS SELECT ...
```

and written a query:

``` sql
SELECT a, b, c FROM view
```

This query is fully equivalent to using the subquery:

``` sql
SELECT a, b, c FROM (SELECT ...)
```

Materialized views store data transformed by the corresponding SELECT query.

When creating a materialized view without `TO [db].[table]`, you must specify ENGINE – the table engine for storing data.

When creating a materialized view with `TO [db].[table]`, you must not use `POPULATE`.

A materialized view is arranged as follows: when inserting data into the table specified in SELECT, part of the inserted data is converted by this SELECT query, and the result is inserted into the view.

If you specify POPULATE, the existing table data is inserted into the view when creating it, as if making a `CREATE TABLE ... AS SELECT ...` query. Otherwise, the query contains only the data inserted into the table after creating the view. We don't recommend using POPULATE, since data inserted into the table during the view creation will not be inserted into it.

A `SELECT` query can contain `DISTINCT`, `GROUP BY`, `ORDER BY`, `LIMIT`… Note that the corresponding conversions are performed independently on each block of inserted data. For example, if `GROUP BY` is set, data is aggregated during insertion, but only within a single packet of inserted data. The data won't be further aggregated. The exception is when using an engine that independently performs data aggregation, such as `SummingMergeTree`.

The execution of `ALTER` queries on materialized views has not been fully developed, so they might be inconvenient. If the materialized view uses the construction `TO [db.]name`, you can `DETACH` the view, run `ALTER` for the target table, and then `ATTACH` the previously detached (`DETACH`) view.

Views look the same as normal tables. For example, they are listed in the result of the `SHOW TABLES` query.

There isn't a separate query for deleting views. To delete a view, use `DROP TABLE`. A minimal end-to-end sketch follows.
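A hedged end-to-end sketch; the tables and the aggregation are hypothetical, with `SummingMergeTree` used as mentioned above:

``` sql
CREATE TABLE hits (EventDate Date, UserID UInt64) ENGINE = MergeTree() ORDER BY EventDate;

-- The view is filled on each INSERT into hits; blocks are aggregated independently,
-- and SummingMergeTree folds the partial sums together during background merges.
CREATE MATERIALIZED VIEW hits_daily
ENGINE = SummingMergeTree() ORDER BY EventDate
AS SELECT EventDate, count() AS cnt
FROM hits
GROUP BY EventDate;
```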
## CREATE DICTIONARY {#create-dictionary-query}

``` sql
CREATE DICTIONARY [IF NOT EXISTS] [db.]dictionary_name [ON CLUSTER cluster]
(
    key1 type1 [DEFAULT|EXPRESSION expr1] [HIERARCHICAL|INJECTIVE|IS_OBJECT_ID],
    key2 type2 [DEFAULT|EXPRESSION expr2] [HIERARCHICAL|INJECTIVE|IS_OBJECT_ID],
    attr1 type2 [DEFAULT|EXPRESSION expr3],
    attr2 type2 [DEFAULT|EXPRESSION expr4]
)
PRIMARY KEY key1, key2
SOURCE(SOURCE_NAME([param1 value1 ... paramN valueN]))
LAYOUT(LAYOUT_NAME([param_name param_value]))
LIFETIME({MIN min_val MAX max_val | max_val})
```

Creates an [external dictionary](../../sql-reference/dictionaries/external-dictionaries/external-dicts.md) with the given [structure](../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-structure.md), [source](../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md), [layout](../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-layout.md) and [lifetime](../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-lifetime.md).

The external dictionary structure consists of attributes. Dictionary attributes are specified similarly to table columns. The only required attribute property is its type; all other properties may have default values.

Depending on the dictionary [layout](../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-layout.md), one or more attributes can be specified as dictionary keys.

For more information, see the [External Dictionaries](../dictionaries/external-dictionaries/external-dicts.md) section. A minimal example follows.
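A minimal hedged example; the dictionary, the source table, and the connection parameters are hypothetical:

``` sql
CREATE DICTIONARY countries
(
    id UInt64,
    name String DEFAULT ''
)
PRIMARY KEY id
SOURCE(CLICKHOUSE(HOST 'localhost' PORT 9000 USER 'default' DB 'default' TABLE 'countries_source'))
LAYOUT(FLAT())
LIFETIME(MIN 300 MAX 600)
```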
## CREATE USER {#create-user-statement}

Creates a [user account](../../operations/access-rights.md#user-account-management).

### Syntax {#create-user-syntax}

``` sql
CREATE USER [IF NOT EXISTS | OR REPLACE] name [ON CLUSTER cluster_name]
    [IDENTIFIED [WITH {NO_PASSWORD|PLAINTEXT_PASSWORD|SHA256_PASSWORD|SHA256_HASH|DOUBLE_SHA1_PASSWORD|DOUBLE_SHA1_HASH}] BY {'password'|'hash'}]
    [HOST {LOCAL | NAME 'name' | REGEXP 'name_regexp' | IP 'address' | LIKE 'pattern'} [,...] | ANY | NONE]
    [DEFAULT ROLE role [,...]]
    [SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [READONLY|WRITABLE] | PROFILE 'profile_name'] [,...]
```

#### Identification {#identification}

There are multiple ways of user identification:

- `IDENTIFIED WITH no_password`
- `IDENTIFIED WITH plaintext_password BY 'qwerty'`
- `IDENTIFIED WITH sha256_password BY 'qwerty'` or `IDENTIFIED BY 'password'`
- `IDENTIFIED WITH sha256_hash BY 'hash'`
- `IDENTIFIED WITH double_sha1_password BY 'qwerty'`
- `IDENTIFIED WITH double_sha1_hash BY 'hash'`

#### User Host {#user-host}

The user host is a host from which a connection to the ClickHouse server can be established. The host can be specified in the `HOST` query section in the following ways:

- `HOST IP 'ip_address_or_subnetwork'` — User can connect to the ClickHouse server only from the specified IP address or a [subnetwork](https://en.wikipedia.org/wiki/Subnetwork). Examples: `HOST IP '192.168.0.0/16'`, `HOST IP '2001:DB8::/32'`. For use in production, only specify `HOST IP` elements (IP addresses and their masks), since using `host` and `host_regexp` might cause extra latency.
- `HOST ANY` — User can connect from any location. This is the default option.
- `HOST LOCAL` — User can connect only locally.
- `HOST NAME 'fqdn'` — User host can be specified as FQDN. For example, `HOST NAME 'mysite.com'`.
- `HOST NAME REGEXP 'regexp'` — You can use [pcre](http://www.pcre.org/) regular expressions when specifying user hosts. For example, `HOST NAME REGEXP '.*\.mysite\.com'`.
- `HOST LIKE 'template'` — Allows you to use the [LIKE](../functions/string-search-functions.md#function-like) operator to filter the user hosts. For example, `HOST LIKE '%'` is equivalent to `HOST ANY`, `HOST LIKE '%.mysite.com'` filters all the hosts in the `mysite.com` domain.

Another way of specifying host is to use `@` syntax with the user name. Examples:

- `CREATE USER mira@'127.0.0.1'` — Equivalent to the `HOST IP` syntax.
- `CREATE USER mira@'localhost'` — Equivalent to the `HOST LOCAL` syntax.
- `CREATE USER mira@'192.168.%.%'` — Equivalent to the `HOST LIKE` syntax.

!!! info "Warning"
    ClickHouse treats `user_name@'address'` as a username as a whole. Thus, technically you can create multiple users with the same `user_name` and different constructions after `@`. We don't recommend doing so.

### Examples {#create-user-examples}

Create the user account `mira` protected by the password `qwerty`:

``` sql
CREATE USER mira HOST IP '127.0.0.1' IDENTIFIED WITH sha256_password BY 'qwerty'
```

`mira` should start the client application on the host where the ClickHouse server runs.

Create the user account `john`, assign roles to it and make these roles default:

``` sql
CREATE USER john DEFAULT ROLE role1, role2
```

Create the user account `john` and make all his future roles default:

``` sql
CREATE USER john DEFAULT ROLE ALL
```

When some role is assigned to `john` in the future, it will become default automatically.

Create the user account `john` and make all his future roles default, excepting `role1` and `role2`:

``` sql
CREATE USER john DEFAULT ROLE ALL EXCEPT role1, role2
```

## CREATE ROLE {#create-role-statement}

Creates a [role](../../operations/access-rights.md#role-management).

### Syntax {#create-role-syntax}

``` sql
CREATE ROLE [IF NOT EXISTS | OR REPLACE] name
    [SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [READONLY|WRITABLE] | PROFILE 'profile_name'] [,...]
```

### Description {#create-role-description}

A role is a set of [privileges](grant.md#grant-privileges). A user assigned a role gets all the privileges of this role.

A user can be assigned multiple roles. Users can apply their assigned roles in arbitrary combinations via the [SET ROLE](misc.md#set-role-statement) statement. The final scope of privileges is a combined set of all the privileges of all the applied roles. If a user has privileges granted directly to its user account, they are also combined with the privileges granted by roles.

A user can have default roles which apply at user login. To set default roles, use the [SET DEFAULT ROLE](misc.md#set-default-role-statement) statement or the [ALTER USER](alter.md#alter-user-statement) statement.

To revoke a role, use the [REVOKE](revoke.md) statement.
To delete a role, use the [DROP ROLE](misc.md#drop-role-statement) statement. The deleted role is automatically revoked from all the users and roles to which it was assigned.

### Examples {#create-role-examples}

``` sql
CREATE ROLE accountant;
GRANT SELECT ON db.* TO accountant;
```

This sequence of queries creates the role `accountant` that has the privilege of reading data from the `accounting` database.

Assigning the role to the user `mira`:

``` sql
GRANT accountant TO mira;
```

After the role is assigned, the user can use it and perform the allowed queries. For example:

``` sql
SET ROLE accountant;
SELECT * FROM db.*;
```

## CREATE ROW POLICY {#create-row-policy-statement}

Creates a [filter for rows](../../operations/access-rights.md#row-policy-management) which a user can read from a table.

### Syntax {#create-row-policy-syntax}

``` sql
CREATE [ROW] POLICY [IF NOT EXISTS | OR REPLACE] policy_name [ON CLUSTER cluster_name] ON [db.]table
    [AS {PERMISSIVE | RESTRICTIVE}]
    [FOR SELECT]
    [USING condition]
    [TO {role [,...] | ALL | ALL EXCEPT role [,...]}]
```

#### AS Clause {#create-row-policy-as}

Using this section you can create permissive or restrictive policies.

A permissive policy grants access to rows. Permissive policies which apply to the same table are combined together using the boolean `OR` operator. Policies are permissive by default.

A restrictive policy restricts access to rows. Restrictive policies which apply to the same table are combined together using the boolean `AND` operator.

Restrictive policies apply to rows that passed the permissive filters. If you set restrictive policies but no permissive policies, the user can't get any row from the table.

#### TO Clause {#create-row-policy-to}

In the `TO` section you can provide a mixed list of roles and users, for example, `CREATE ROW POLICY ... TO accountant, john@localhost`.

The keyword `ALL` means all the ClickHouse users, including the current user. The keyword `ALL EXCEPT` allows excluding some users from the all-users list, for example, `CREATE ROW POLICY ... TO ALL EXCEPT accountant, john@localhost`.

### Examples {#examples}

- `CREATE ROW POLICY filter ON mydb.mytable FOR SELECT USING a<1000 TO accountant, john@localhost`
- `CREATE ROW POLICY filter ON mydb.mytable FOR SELECT USING a<1000 TO ALL EXCEPT mira`

## CREATE QUOTA {#create-quota-statement}

Creates a [quota](../../operations/access-rights.md#quotas-management) that can be assigned to a user or a role.

### Syntax {#create-quota-syntax}

``` sql
CREATE QUOTA [IF NOT EXISTS | OR REPLACE] name [ON CLUSTER cluster_name]
    [KEYED BY {'none' | 'user name' | 'ip address' | 'client key' | 'client key or user name' | 'client key or ip address'}]
    [FOR [RANDOMIZED] INTERVAL number {SECOND | MINUTE | HOUR | DAY | WEEK | MONTH | QUARTER | YEAR}
        {MAX { {QUERIES | ERRORS | RESULT ROWS | RESULT BYTES | READ ROWS | READ BYTES | EXECUTION TIME} = number } [,...] |
        NO LIMITS | TRACKING ONLY} [,...]]
    [TO {role [,...] | ALL | ALL EXCEPT role [,...]}]
```
### Example {#create-quota-example}

For the current user, limit the maximum number of queries to 123 within a 15-month constraint period:

``` sql
CREATE QUOTA qA FOR INTERVAL 15 MONTH MAX QUERIES 123 TO CURRENT_USER
```

## CREATE SETTINGS PROFILE {#create-settings-profile-statement}

Creates a [settings profile](../../operations/access-rights.md#settings-profiles-management) that can be assigned to a user or a role.

### Syntax {#create-settings-profile-syntax}

``` sql
CREATE SETTINGS PROFILE [IF NOT EXISTS | OR REPLACE] name [ON CLUSTER cluster_name]
    [SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [READONLY|WRITABLE] | INHERIT 'profile_name'] [,...]
```

### Example {#create-settings-profile-example}

Create the `max_memory_usage_profile` settings profile with a value and constraints for the `max_memory_usage` setting. Assign it to `robin`:

``` sql
CREATE SETTINGS PROFILE max_memory_usage_profile SETTINGS max_memory_usage = 100000001 MIN 90000000 MAX 110000000 TO robin
```

[Original article](https://clickhouse.tech/docs/en/query_language/create/)
diff --git a/docs/fr/sql-reference/statements/grant.md b/docs/fr/sql-reference/statements/grant.md
deleted file mode 100644
index 143c9a36e33..00000000000
--- a/docs/fr/sql-reference/statements/grant.md
+++ /dev/null
@@ -1,476 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_priority: 39
toc_title: GRANT
---

# GRANT {#grant}

- Grants [privileges](#grant-privileges) to ClickHouse user accounts or roles.
- Assigns roles to user accounts or to other roles.

To revoke privileges, use the [REVOKE](revoke.md) statement. You can also list the granted privileges with the [SHOW GRANTS](show.md#show-grants-statement) statement.

## Granting Privilege Syntax {#grant-privigele-syntax}

``` sql
GRANT [ON CLUSTER cluster_name] privilege[(column_name [,...])] [,...] ON {db.table|db.*|*.*|table|*} TO {user | role | CURRENT_USER} [,...] [WITH GRANT OPTION]
```

- `privilege` — Type of privilege.
- `role` — ClickHouse user role.
- `user` — ClickHouse user account.

The `WITH GRANT OPTION` clause grants `user` or `role` the permission to perform the `GRANT` query. Users can grant privileges of the same scope they have or less.

## Assigning Role Syntax {#assign-role-syntax}

``` sql
GRANT [ON CLUSTER cluster_name] role [,...] TO {user | another_role | CURRENT_USER} [,...] [WITH ADMIN OPTION]
```

- `role` — ClickHouse user role.
- `user` — ClickHouse user account.

The `WITH ADMIN OPTION` clause sets the [ADMIN OPTION](#admin-option-privilege) privilege for `user` or `role`.

## Usage {#grant-usage}

To use `GRANT`, your account must have the `GRANT OPTION` privilege. You can grant privileges only within the scope of your account privileges.

For example, the administrator has granted privileges to the `john` account by the query:

``` sql
GRANT SELECT(x,y) ON db.table TO john WITH GRANT OPTION
```

It means that `john` has the permission to perform:

- `SELECT x,y FROM db.table`.
- `SELECT x FROM db.table`.
- `SELECT y FROM db.table`.

`john` can't perform `SELECT z FROM db.table`. The `SELECT * FROM db.table` is also not available.
When processing this query, ClickHouse doesn't return any data, even `x` and `y`. The only exception is if a table contains only the `x` and `y` columns, in which case ClickHouse returns all the data.

Also `john` has the `GRANT OPTION` privilege, so it can grant other users privileges of the same or smaller scope.

When specifying privileges, you can use an asterisk (`*`) instead of a table or a database name. For example, the `GRANT SELECT ON db.* TO john` query allows `john` to perform the `SELECT` query over all the tables in the `db` database. Also, you can omit the database name. In this case privileges are granted for the current database, for example: `GRANT SELECT ON * TO john` grants the privilege on all the tables in the current database, `GRANT SELECT ON mytable TO john` grants the privilege on the `mytable` table in the current database.

Access to the `system` database is always allowed (since this database is used for processing queries).

You can grant multiple privileges to multiple accounts in one query. The query `GRANT SELECT, INSERT ON *.* TO john, robin` allows the accounts `john` and `robin` to perform the `INSERT` and `SELECT` queries over all the tables in all the databases on the server.

## Privileges {#grant-privileges}

A privilege is a permission to perform a specific kind of queries.

Privileges have a hierarchical structure. A set of allowed queries depends on the privilege scope.

Hierarchy of privileges:

- [SELECT](#grant-select)
- [INSERT](#grant-insert)
- [ALTER](#grant-alter)
    - `ALTER TABLE`
        - `ALTER UPDATE`
        - `ALTER DELETE`
        - `ALTER COLUMN`
            - `ALTER ADD COLUMN`
            - `ALTER DROP COLUMN`
            - `ALTER MODIFY COLUMN`
            - `ALTER COMMENT COLUMN`
            - `ALTER CLEAR COLUMN`
            - `ALTER RENAME COLUMN`
        - `ALTER INDEX`
            - `ALTER ORDER BY`
            - `ALTER ADD INDEX`
            - `ALTER DROP INDEX`
            - `ALTER MATERIALIZE INDEX`
            - `ALTER CLEAR INDEX`
        - `ALTER CONSTRAINT`
            - `ALTER ADD CONSTRAINT`
            - `ALTER DROP CONSTRAINT`
        - `ALTER TTL`
            - `ALTER MATERIALIZE TTL`
        - `ALTER SETTINGS`
        - `ALTER MOVE PARTITION`
        - `ALTER FETCH PARTITION`
        - `ALTER FREEZE PARTITION`
    - `ALTER VIEW`
        - `ALTER VIEW REFRESH`
        - `ALTER VIEW MODIFY QUERY`
- [CREATE](#grant-create)
    - `CREATE DATABASE`
    - `CREATE TABLE`
    - `CREATE VIEW`
    - `CREATE DICTIONARY`
    - `CREATE TEMPORARY TABLE`
- [DROP](#grant-drop)
    - `DROP DATABASE`
    - `DROP TABLE`
    - `DROP VIEW`
    - `DROP DICTIONARY`
- [TRUNCATE](#grant-truncate)
- [OPTIMIZE](#grant-optimize)
- [SHOW](#grant-show)
    - `SHOW DATABASES`
    - `SHOW TABLES`
    - `SHOW COLUMNS`
    - `SHOW DICTIONARIES`
- [KILL QUERY](#grant-kill-query)
- [ACCESS MANAGEMENT](#grant-access-management)
    - `CREATE USER`
    - `ALTER USER`
    - `DROP USER`
    - `CREATE ROLE`
    - `ALTER ROLE`
    - `DROP ROLE`
    - `CREATE ROW POLICY`
    - `ALTER ROW POLICY`
    - `DROP ROW POLICY`
    - `CREATE QUOTA`
    - `ALTER QUOTA`
    - `DROP QUOTA`
    - `CREATE SETTINGS PROFILE`
    - `ALTER SETTINGS PROFILE`
    - `DROP SETTINGS PROFILE`
    - `SHOW ACCESS`
        - `SHOW_USERS`
        - `SHOW_ROLES`
        - `SHOW_ROW_POLICIES`
        - `SHOW_QUOTAS`
        - `SHOW_SETTINGS_PROFILES`
    - `ROLE ADMIN`
- [SYSTEM](#grant-system)
    - `SYSTEM SHUTDOWN`
    - `SYSTEM DROP CACHE`
        - `SYSTEM DROP DNS CACHE`
        - `SYSTEM DROP MARK CACHE`
        - `SYSTEM DROP UNCOMPRESSED CACHE`
    - `SYSTEM RELOAD`
        - `SYSTEM RELOAD CONFIG`
        - `SYSTEM RELOAD DICTIONARY`
        - `SYSTEM RELOAD EMBEDDED DICTIONARIES`
    - `SYSTEM MERGES`
    - `SYSTEM TTL MERGES`
    - `SYSTEM FETCHES`
    - `SYSTEM MOVES`
    - `SYSTEM SENDS`
        - `SYSTEM DISTRIBUTED SENDS`
        - `SYSTEM REPLICATED SENDS`
    - `SYSTEM REPLICATION QUEUES`
    - `SYSTEM SYNC REPLICA`
    - `SYSTEM RESTART REPLICA`
    - `SYSTEM FLUSH`
        - `SYSTEM FLUSH DISTRIBUTED`
        - `SYSTEM FLUSH LOGS`
- [INTROSPECTION](#grant-introspection)
    - `addressToLine`
    - `addressToSymbol`
    - `demangle`
- [SOURCES](#grant-sources)
    - `FILE`
    - `URL`
    - `REMOTE`
    - `MYSQL`
    - `ODBC`
    - `JDBC`
    - `HDFS`
    - `S3`
- [dictGet](#grant-dictget)

Examples of how this hierarchy is treated:

- The `ALTER` privilege includes all other `ALTER*` privileges.
- `ALTER CONSTRAINT` includes the `ALTER ADD CONSTRAINT` and `ALTER DROP CONSTRAINT` privileges.

Privileges are applied at different levels. Knowing the level suggests the syntax available for the privilege.

Levels (from lower to higher):

- `COLUMN` — Privilege can be granted for column, table, database, or globally.
- `TABLE` — Privilege can be granted for table, database, or globally.
- `VIEW` — Privilege can be granted for view, database, or globally.
- `DICTIONARY` — Privilege can be granted for dictionary, database, or globally.
- `DATABASE` — Privilege can be granted for database or globally.
- `GLOBAL` — Privilege can be granted only globally.
- `GROUP` — Groups privileges of different levels. When a `GROUP`-level privilege is granted, only those privileges from the group are granted which correspond to the used syntax.

Examples of allowed syntax:

- `GRANT SELECT(x) ON db.table TO user`
- `GRANT SELECT ON db.* TO user`

Examples of disallowed syntax:

- `GRANT CREATE USER(x) ON db.table TO user`
- `GRANT CREATE USER ON db.* TO user`

The special privilege [ALL](#grant-all) grants all the privileges to a user account or a role.

By default, a user account or a role has no privileges.

If a user or a role has no privileges, it is displayed as the [NONE](#grant-none) privilege.

Some queries by their implementation require a set of privileges. For example, to perform the [RENAME](misc.md#misc_operations-rename) query you need the following privileges: `SELECT`, `CREATE TABLE`, `INSERT` and `DROP TABLE`.

### SELECT {#grant-select}

Allows performing [SELECT](select/index.md) queries.

Privilege level: `COLUMN`.

**Description**

A user granted this privilege can perform `SELECT` queries over a specified list of columns in the specified table and database. If the user includes other columns, the query returns no data.

Consider the following privilege:

``` sql
GRANT SELECT(x,y) ON db.table TO john
```

This privilege allows `john` to perform any `SELECT` query that involves data from the `x` and/or `y` columns in `db.table`, for example, `SELECT x FROM db.table`. `john` can't perform `SELECT z FROM db.table`. The `SELECT * FROM db.table` is also not available. When processing this query, ClickHouse doesn't return any data, even `x` and `y`. The only exception is if a table contains only the `x` and `y` columns, in which case ClickHouse returns all the data.
### INSERT {#grant-insert}

Allows performing [INSERT](insert-into.md) queries.

Privilege level: `COLUMN`.

**Description**

A user granted this privilege can perform `INSERT` queries over a specified list of columns in the specified table and database. If the user includes other columns, the query doesn't insert any data.

**Example**

``` sql
GRANT INSERT(x,y) ON db.table TO john
```

The granted privilege allows `john` to insert data into the `x` and/or `y` columns in `db.table`.

### ALTER {#grant-alter}

Allows performing [ALTER](alter.md) queries according to the following hierarchy of privileges:

- `ALTER`. Level: `COLUMN`.
    - `ALTER TABLE`. Level: `GROUP`
        - `ALTER UPDATE`. Level: `COLUMN`. Aliases: `UPDATE`
        - `ALTER DELETE`. Level: `COLUMN`. Aliases: `DELETE`
        - `ALTER COLUMN`. Level: `GROUP`
            - `ALTER ADD COLUMN`. Level: `COLUMN`. Aliases: `ADD COLUMN`
            - `ALTER DROP COLUMN`. Level: `COLUMN`. Aliases: `DROP COLUMN`
            - `ALTER MODIFY COLUMN`. Level: `COLUMN`. Aliases: `MODIFY COLUMN`
            - `ALTER COMMENT COLUMN`. Level: `COLUMN`. Aliases: `COMMENT COLUMN`
            - `ALTER CLEAR COLUMN`. Level: `COLUMN`. Aliases: `CLEAR COLUMN`
            - `ALTER RENAME COLUMN`. Level: `COLUMN`. Aliases: `RENAME COLUMN`
        - `ALTER INDEX`. Level: `GROUP`. Aliases: `INDEX`
            - `ALTER ORDER BY`. Level: `TABLE`. Aliases: `ALTER MODIFY ORDER BY`, `MODIFY ORDER BY`
            - `ALTER ADD INDEX`. Level: `TABLE`. Aliases: `ADD INDEX`
            - `ALTER DROP INDEX`. Level: `TABLE`. Aliases: `DROP INDEX`
            - `ALTER MATERIALIZE INDEX`. Level: `TABLE`. Aliases: `MATERIALIZE INDEX`
            - `ALTER CLEAR INDEX`. Level: `TABLE`. Aliases: `CLEAR INDEX`
        - `ALTER CONSTRAINT`. Level: `GROUP`. Aliases: `CONSTRAINT`
            - `ALTER ADD CONSTRAINT`. Level: `TABLE`. Aliases: `ADD CONSTRAINT`
            - `ALTER DROP CONSTRAINT`. Level: `TABLE`. Aliases: `DROP CONSTRAINT`
        - `ALTER TTL`. Level: `TABLE`. Aliases: `ALTER MODIFY TTL`, `MODIFY TTL`
            - `ALTER MATERIALIZE TTL`. Level: `TABLE`. Aliases: `MATERIALIZE TTL`
        - `ALTER SETTINGS`. Level: `TABLE`. Aliases: `ALTER SETTING`, `ALTER MODIFY SETTING`, `MODIFY SETTING`
        - `ALTER MOVE PARTITION`. Level: `TABLE`. Aliases: `ALTER MOVE PART`, `MOVE PARTITION`, `MOVE PART`
        - `ALTER FETCH PARTITION`. Level: `TABLE`. Aliases: `FETCH PARTITION`
        - `ALTER FREEZE PARTITION`. Level: `TABLE`. Aliases: `FREEZE PARTITION`
    - `ALTER VIEW`. Level: `GROUP`
        - `ALTER VIEW REFRESH`. Level: `VIEW`. Aliases: `ALTER LIVE VIEW REFRESH`, `REFRESH VIEW`
        - `ALTER VIEW MODIFY QUERY`. Level: `VIEW`. Aliases: `ALTER TABLE MODIFY QUERY`

Examples of how this hierarchy is treated:

- The `ALTER` privilege includes all other `ALTER*` privileges.
- `ALTER CONSTRAINT` includes the `ALTER ADD CONSTRAINT` and `ALTER DROP CONSTRAINT` privileges.

**Notes**

- The `MODIFY SETTING` privilege allows modifying table engine settings. It doesn't affect settings or server configuration parameters.
- The `ATTACH` operation needs the [CREATE](#grant-create) privilege.
- The `DETACH` operation needs the [DROP](#grant-drop) privilege.
- To stop a mutation by the [KILL MUTATION](misc.md#kill-mutation) query, you need to have the privilege to start this mutation. For example, if you want to stop an `ALTER UPDATE` query, you need the `ALTER UPDATE`, `ALTER TABLE`, or `ALTER` privilege (see the sketch below).
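For instance, a hedged sketch granting only part of this hierarchy; the account and table names are hypothetical:

``` sql
-- john may run ALTER UPDATE and ALTER DELETE on db.table, but no other ALTER commands:
GRANT ALTER UPDATE, ALTER DELETE ON db.table TO john
```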
### CREATE {#grant-create}

Allows performing [CREATE](create.md) and [ATTACH](misc.md#attach) DDL queries according to the following hierarchy of privileges:

- `CREATE`. Level: `GROUP`
    - `CREATE DATABASE`. Level: `DATABASE`
    - `CREATE TABLE`. Level: `TABLE`
    - `CREATE VIEW`. Level: `VIEW`
    - `CREATE DICTIONARY`. Level: `DICTIONARY`
    - `CREATE TEMPORARY TABLE`. Level: `GLOBAL`

**Notes**

- To delete the created table, a user needs the [DROP](#grant-drop) privilege.

### DROP {#grant-drop}

Allows performing [DROP](misc.md#drop) and [DETACH](misc.md#detach) queries according to the following hierarchy of privileges:

- `DROP`. Level: `GROUP`
    - `DROP DATABASE`. Level: `DATABASE`
    - `DROP TABLE`. Level: `TABLE`
    - `DROP VIEW`. Level: `VIEW`
    - `DROP DICTIONARY`. Level: `DICTIONARY`

### TRUNCATE {#grant-truncate}

Allows performing [TRUNCATE](misc.md#truncate-statement) queries.

Privilege level: `TABLE`.

### OPTIMIZE {#grant-optimize}

Allows performing [OPTIMIZE TABLE](misc.md#misc_operations-optimize) queries.

Privilege level: `TABLE`.

### SHOW {#grant-show}

Allows performing `SHOW`, `DESCRIBE`, `USE`, and `EXISTS` queries according to the following hierarchy of privileges:

- `SHOW`. Level: `GROUP`
    - `SHOW DATABASES`. Level: `DATABASE`. Allows executing `SHOW DATABASES`, `SHOW CREATE DATABASE`, `USE <database>` queries.
    - `SHOW TABLES`. Level: `TABLE`. Allows executing `SHOW TABLES`, `EXISTS <table>`, `CHECK <table>` queries.
    - `SHOW COLUMNS`. Level: `COLUMN`. Allows executing `SHOW CREATE TABLE`, `DESCRIBE` queries.
    - `SHOW DICTIONARIES`. Level: `DICTIONARY`. Allows executing `SHOW DICTIONARIES`, `SHOW CREATE DICTIONARY`, `EXISTS <dictionary>` queries.

**Notes**

A user has the `SHOW` privilege if it has any other privilege concerning the specified table, dictionary, or database.

### KILL QUERY {#grant-kill-query}

Allows performing [KILL](misc.md#kill-query-statement) queries.

Privilege level: `GLOBAL`.

**Notes**

The `KILL QUERY` privilege allows one user to kill queries of other users.

### ACCESS MANAGEMENT {#grant-access-management}

Allows a user to perform queries that manage users, roles, and row policies.

- `ACCESS MANAGEMENT`. Level: `GROUP`
    - `CREATE USER`. Level: `GLOBAL`
    - `ALTER USER`. Level: `GLOBAL`
    - `DROP USER`. Level: `GLOBAL`
    - `CREATE ROLE`. Level: `GLOBAL`
    - `ALTER ROLE`. Level: `GLOBAL`
    - `DROP ROLE`. Level: `GLOBAL`
    - `ROLE ADMIN`. Level: `GLOBAL`
    - `CREATE ROW POLICY`. Level: `GLOBAL`. Aliases: `CREATE POLICY`
    - `ALTER ROW POLICY`. Level: `GLOBAL`. Aliases: `ALTER POLICY`
    - `DROP ROW POLICY`. Level: `GLOBAL`. Aliases: `DROP POLICY`
    - `CREATE QUOTA`. Level: `GLOBAL`
    - `ALTER QUOTA`. Level: `GLOBAL`
    - `DROP QUOTA`. Level: `GLOBAL`
    - `CREATE SETTINGS PROFILE`. Level: `GLOBAL`. Aliases: `CREATE PROFILE`
    - `ALTER SETTINGS PROFILE`. Level: `GLOBAL`. Aliases: `ALTER PROFILE`
    - `DROP SETTINGS PROFILE`. Level: `GLOBAL`. Aliases: `DROP PROFILE`
    - `SHOW ACCESS`. Level: `GROUP`
        - `SHOW_USERS`. Level: `GLOBAL`. Aliases: `SHOW CREATE USER`
        - `SHOW_ROLES`. Level: `GLOBAL`. Aliases: `SHOW CREATE ROLE`
        - `SHOW_ROW_POLICIES`. Level: `GLOBAL`. Aliases: `SHOW POLICIES`, `SHOW CREATE ROW POLICY`, `SHOW CREATE POLICY`
        - `SHOW_QUOTAS`. Level: `GLOBAL`. Aliases: `SHOW CREATE QUOTA`
        - `SHOW_SETTINGS_PROFILES`. Level: `GLOBAL`. Aliases: `SHOW PROFILES`, `SHOW CREATE SETTINGS PROFILE`, `SHOW CREATE PROFILE`

The `ROLE ADMIN` privilege allows a user to assign and revoke any roles, including those which are not assigned to the user with the admin option.

### SYSTEM {#grant-system}

Allows a user to perform [SYSTEM](system.md) queries according to the following hierarchy of privileges.

- `SYSTEM`. Level: `GROUP`
    - `SYSTEM SHUTDOWN`. Level: `GLOBAL`. Aliases: `SYSTEM KILL`, `SHUTDOWN`
    - `SYSTEM DROP CACHE`. Aliases: `DROP CACHE`
        - `SYSTEM DROP DNS CACHE`. Level: `GLOBAL`. Aliases: `SYSTEM DROP DNS`, `DROP DNS CACHE`, `DROP DNS`
        - `SYSTEM DROP MARK CACHE`. Level: `GLOBAL`. Aliases: `SYSTEM DROP MARK`, `DROP MARK CACHE`, `DROP MARKS`
        - `SYSTEM DROP UNCOMPRESSED CACHE`. Level: `GLOBAL`. Aliases: `SYSTEM DROP UNCOMPRESSED`, `DROP UNCOMPRESSED CACHE`, `DROP UNCOMPRESSED`
    - `SYSTEM RELOAD`. Level: `GROUP`
        - `SYSTEM RELOAD CONFIG`. Level: `GLOBAL`. Aliases: `RELOAD CONFIG`
        - `SYSTEM RELOAD DICTIONARY`. Level: `GLOBAL`. Aliases: `SYSTEM RELOAD DICTIONARIES`, `RELOAD DICTIONARY`, `RELOAD DICTIONARIES`
        - `SYSTEM RELOAD EMBEDDED DICTIONARIES`. Level: `GLOBAL`. Aliases: `RELOAD EMBEDDED DICTIONARIES`
    - `SYSTEM MERGES`. Level: `TABLE`. Aliases: `SYSTEM STOP MERGES`, `SYSTEM START MERGES`, `STOP MERGES`, `START MERGES`
    - `SYSTEM TTL MERGES`. Level: `TABLE`. Aliases: `SYSTEM STOP TTL MERGES`, `SYSTEM START TTL MERGES`, `STOP TTL MERGES`, `START TTL MERGES`
Alias: `SYSTEM STOP TTL MERGES`, `SYSTEM START TTL MERGES`, `STOP TTL MERGES`, `START TTL MERGES` - - `SYSTEM FETCHES`. Niveau: `TABLE`. Alias: `SYSTEM STOP FETCHES`, `SYSTEM START FETCHES`, `STOP FETCHES`, `START FETCHES` - - `SYSTEM MOVES`. Niveau: `TABLE`. Alias: `SYSTEM STOP MOVES`, `SYSTEM START MOVES`, `STOP MOVES`, `START MOVES` - - `SYSTEM SENDS`. Niveau: `GROUP`. Alias: `SYSTEM STOP SENDS`, `SYSTEM START SENDS`, `STOP SENDS`, `START SENDS` - - `SYSTEM DISTRIBUTED SENDS`. Niveau: `TABLE`. Alias: `SYSTEM STOP DISTRIBUTED SENDS`, `SYSTEM START DISTRIBUTED SENDS`, `STOP DISTRIBUTED SENDS`, `START DISTRIBUTED SENDS` - - `SYSTEM REPLICATED SENDS`. Niveau: `TABLE`. Alias: `SYSTEM STOP REPLICATED SENDS`, `SYSTEM START REPLICATED SENDS`, `STOP REPLICATED SENDS`, `START REPLICATED SENDS` - - `SYSTEM REPLICATION QUEUES`. Niveau: `TABLE`. Alias: `SYSTEM STOP REPLICATION QUEUES`, `SYSTEM START REPLICATION QUEUES`, `STOP REPLICATION QUEUES`, `START REPLICATION QUEUES` - - `SYSTEM SYNC REPLICA`. Niveau: `TABLE`. Alias: `SYNC REPLICA` - - `SYSTEM RESTART REPLICA`. Niveau: `TABLE`. Alias: `RESTART REPLICA` - - `SYSTEM FLUSH`. Niveau: `GROUP` - - `SYSTEM FLUSH DISTRIBUTED`. Niveau: `TABLE`. Alias: `FLUSH DISTRIBUTED` - - `SYSTEM FLUSH LOGS`. Niveau: `GLOBAL`. Alias: `FLUSH LOGS` - -Le privilège `SYSTEM RELOAD EMBEDDED DICTIONARIES` est implicitement accordé par le privilège `SYSTEM RELOAD DICTIONARY ON *.*`. - -### INTROSPECTION {#grant-introspection} - -Permet d'utiliser les fonctions d'[introspection](../../operations/optimizing-performance/sampling-query-profiler.md). - -- `INTROSPECTION`. Niveau: `GROUP`. Alias: `INTROSPECTION FUNCTIONS` - - `addressToLine`. Niveau: `GLOBAL` - - `addressToSymbol`. Niveau: `GLOBAL` - - `demangle`. Niveau: `GLOBAL` - -### SOURCES {#grant-sources} - -Permet d'utiliser des sources de données externes. S'applique aux [moteurs de table](../../engines/table-engines/index.md) et aux [fonctions de table](../table-functions/index.md#table-functions). - -- `SOURCES`. Niveau: `GROUP` - - `FILE`. Niveau: `GLOBAL` - - `URL`. Niveau: `GLOBAL` - - `REMOTE`. Niveau: `GLOBAL` - - `MYSQL`. Niveau: `GLOBAL` - - `ODBC`. Niveau: `GLOBAL` - - `JDBC`. Niveau: `GLOBAL` - - `HDFS`. Niveau: `GLOBAL` - - `S3`. Niveau: `GLOBAL` - -Le privilège `SOURCES` permet l'utilisation de toutes les sources. Vous pouvez également accorder un privilège pour chaque source individuellement. Pour utiliser les sources, vous avez besoin de privilèges supplémentaires. - -Exemple: - -- Pour créer une table avec le [Moteur de table MySQL](../../engines/table-engines/integrations/mysql.md), vous avez besoin des privilèges `CREATE TABLE (ON db.table_name)` et `MYSQL`. -- Pour utiliser la [fonction de table mysql](../table-functions/mysql.md), vous avez besoin des privilèges `CREATE TEMPORARY TABLE` et `MYSQL`. - -### dictGet {#grant-dictget} - -- `dictGet`. Alias: `dictHas`, `dictGetHierarchy`, `dictIsIn` - -Permet à un utilisateur d'exécuter les fonctions [dictGet](../functions/ext-dict-functions.md#dictget), [dictHas](../functions/ext-dict-functions.md#dicthas), [dictGetHierarchy](../functions/ext-dict-functions.md#dictgethierarchy), [dictIsIn](../functions/ext-dict-functions.md#dictisin). - -Niveau de privilège: `DICTIONARY`. - -**Exemple** - -- `GRANT dictGet ON mydb.mydictionary TO john` -- `GRANT dictGet ON mydictionary TO john` - -### ALL {#grant-all} - -Accorde tous les privilèges sur l'entité réglementée à un compte utilisateur ou à un rôle. - -### NONE {#grant-none} - -N'accorde pas de privilèges. 
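- -À titre d'illustration, voici une esquisse minimale, non normative, combinant plusieurs des privilèges décrits ci-dessus (l'utilisateur `john` et la base `mydb` sont des noms hypothétiques): - -``` sql --- Autorise john à créer des tables dans mydb et à utiliser la source MySQL. -GRANT CREATE TABLE ON mydb.* TO john; -GRANT MYSQL ON *.* TO john; --- Affiche les privilèges effectivement accordés. -SHOW GRANTS FOR john; -```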
- -### ADMIN OPTION {#admin-option-privilege} - -Le `ADMIN OPTION` le privilège permet à un utilisateur d'accorder son rôle à un autre utilisateur. - -[Article Original](https://clickhouse.tech/docs/en/query_language/grant/) diff --git a/docs/fr/sql-reference/statements/index.md b/docs/fr/sql-reference/statements/index.md deleted file mode 100644 index f08d64cee39..00000000000 --- a/docs/fr/sql-reference/statements/index.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_folder_title: "D\xE9claration" -toc_priority: 31 ---- - - diff --git a/docs/fr/sql-reference/statements/insert-into.md b/docs/fr/sql-reference/statements/insert-into.md deleted file mode 100644 index 987594bae65..00000000000 --- a/docs/fr/sql-reference/statements/insert-into.md +++ /dev/null @@ -1,80 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 34 -toc_title: INSERT INTO ---- - -## INSERT {#insert} - -L'ajout de données. - -Format de requête de base: - -``` sql -INSERT INTO [db.]table [(c1, c2, c3)] VALUES (v11, v12, v13), (v21, v22, v23), ... -``` - -La requête peut spécifier une liste de colonnes à insérer `[(c1, c2, c3)]`. Dans ce cas, le reste des colonnes sont remplis avec: - -- Les valeurs calculées à partir `DEFAULT` expressions spécifiées dans la définition de la table. -- Zéros et chaînes vides, si `DEFAULT` les expressions ne sont pas définies. - -Si [strict_insert_defaults=1](../../operations/settings/settings.md), les colonnes qui n'ont pas `DEFAULT` défini doit être répertorié dans la requête. - -Les données peuvent être transmises à L'INSERT dans n'importe quel [format](../../interfaces/formats.md#formats) soutenu par ClickHouse. Le format doit être spécifié explicitement dans la requête: - -``` sql -INSERT INTO [db.]table [(c1, c2, c3)] FORMAT format_name data_set -``` - -For example, the following query format is identical to the basic version of INSERT … VALUES: - -``` sql -INSERT INTO [db.]table [(c1, c2, c3)] FORMAT Values (v11, v12, v13), (v21, v22, v23), ... -``` - -ClickHouse supprime tous les espaces et un saut de ligne (s'il y en a un) avant les données. Lors de la formation d'une requête, nous recommandons de placer les données sur une nouvelle ligne après les opérateurs de requête (ceci est important si les données commencent par des espaces). - -Exemple: - -``` sql -INSERT INTO t FORMAT TabSeparated -11 Hello, world! -22 Qwerty -``` - -Vous pouvez insérer des données séparément de la requête à l'aide du client de ligne de commande ou de L'interface HTTP. Pour plus d'informations, consultez la section “[Interface](../../interfaces/index.md#interfaces)”. - -### Contraintes {#constraints} - -Si la table a [contraintes](create.md#constraints), their expressions will be checked for each row of inserted data. If any of those constraints is not satisfied — server will raise an exception containing constraint name and expression, the query will be stopped. - -### Insertion des résultats de `SELECT` {#insert_query_insert-select} - -``` sql -INSERT INTO [db.]table [(c1, c2, c3)] SELECT ... -``` - -Les colonnes sont mappées en fonction de leur position dans la clause SELECT. Cependant, leurs noms dans L'expression SELECT et la table pour INSERT peuvent différer. Si nécessaire, la coulée de type est effectuée. 
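- -Par exemple, une esquisse non normative avec des tables hypothétiques `t_src` et `t_dst` de structures compatibles: - -``` sql --- Les colonnes sont mises en correspondance selon leur position dans la clause SELECT. -INSERT INTO t_dst (id, value) -SELECT id, value -FROM t_src -WHERE id > 100; -```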
- -Aucun des formats de données, à l'exception du format Values, ne permet de définir des valeurs d'expressions telles que `now()`, `1 + 2` et ainsi de suite. Le format Values permet une utilisation limitée des expressions, mais ce n'est pas recommandé, car dans ce cas, un code inefficace est utilisé pour leur exécution. - -Les autres requêtes de modification des parties de données ne sont pas prises en charge: `UPDATE`, `DELETE`, `REPLACE`, `MERGE`, `UPSERT`, `INSERT UPDATE`. -Cependant, vous pouvez supprimer les anciennes données en utilisant `ALTER TABLE ... DROP PARTITION`. - -La clause `FORMAT` doit être spécifiée à la fin de la requête si la clause `SELECT` contient la fonction de table [input()](../table-functions/input.md). - -### Considérations De Performance {#performance-considerations} - -`INSERT` trie les données d'entrée par la clé primaire et les divise en partitions par une clé de partition. Si vous insérez des données dans plusieurs partitions à la fois, cela peut réduire considérablement les performances de la requête `INSERT`. Pour éviter cela: - -- Ajoutez des données en lots assez importants, tels que 100 000 lignes à la fois. -- Groupez les données par une clé de partition avant de les télécharger sur ClickHouse. - -Les performances ne diminueront pas si: - -- Les données sont ajoutées en temps réel. -- Vous téléchargez des données qui sont généralement triées par heure. - -[Article Original](https://clickhouse.tech/docs/en/query_language/insert_into/) diff --git a/docs/fr/sql-reference/statements/misc.md b/docs/fr/sql-reference/statements/misc.md deleted file mode 100644 index 4631f856266..00000000000 --- a/docs/fr/sql-reference/statements/misc.md +++ /dev/null @@ -1,358 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 41 -toc_title: Autre ---- - -# Diverses Requêtes {#miscellaneous-queries} - -## ATTACH {#attach} - -Cette requête est exactement la même que `CREATE`, mais - -- Au lieu du mot `CREATE`, il utilise le mot `ATTACH`. -- La requête ne crée pas de données sur le disque, mais suppose que les données sont déjà aux endroits appropriés, et ajoute simplement des informations sur la table au serveur. - Après avoir exécuté une requête ATTACH, le serveur connaîtra l'existence de la table. - -Si la table a été précédemment détachée (`DETACH`), ce qui signifie que sa structure est connue, vous pouvez utiliser un raccourci sans définir la structure. - -``` sql -ATTACH TABLE [IF NOT EXISTS] [db.]name [ON CLUSTER cluster] -``` - -Cette requête est utilisée lors du démarrage du serveur. Le serveur stocke les métadonnées de la table sous forme de fichiers avec des requêtes `ATTACH`, qu'il exécute simplement au lancement (à l'exception des tables système, qui sont explicitement créées sur le serveur). - -## CHECK TABLE {#check-table} - -Vérifie si les données de la table sont corrompues. - -``` sql -CHECK TABLE [db.]name -``` - -La requête `CHECK TABLE` compare les tailles de fichier réelles avec les valeurs attendues qui sont stockées sur le serveur. Si les tailles de fichier ne correspondent pas aux valeurs stockées, cela signifie que les données sont endommagées. Cela peut être causé, par exemple, par un plantage du système lors de l'exécution de la requête. - -La réponse de la requête contient la colonne `result` avec une seule ligne. La ligne a une valeur de type -[Booléen](../../sql-reference/data-types/boolean.md): - -- 0 - les données de la table sont corrompues. 
-- 1 - les données maintiennent l'intégrité. - -La requête `CHECK TABLE` prend en charge les moteurs de table suivants: - -- [Journal](../../engines/table-engines/log-family/log.md) -- [TinyLog](../../engines/table-engines/log-family/tinylog.md) -- [StripeLog](../../engines/table-engines/log-family/stripelog.md) -- [Famille MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) - -Exécutée sur des tables avec un autre moteur de table, elle provoque une exception. - -Les moteurs de la famille `*Log` ne fournissent pas de récupération automatique des données en cas d'échec. Utilisez la requête `CHECK TABLE` pour suivre la perte de données en temps opportun. - -Pour les moteurs de la famille `MergeTree`, la requête `CHECK TABLE` affiche un état de vérification pour chaque partie de données individuelle d'une table sur le serveur local. - -**Si les données sont corrompues** - -Si la table est corrompue, vous pouvez copier les données non corrompues dans une autre table. Pour ce faire: - -1. Créez une nouvelle table avec la même structure que la table endommagée. Pour ce faire, exécutez la requête `CREATE TABLE <new_table_name> AS <damaged_table_name>`. -2. Définissez le paramètre [max_threads](../../operations/settings/settings.md#settings-max_threads) à la valeur 1 pour traiter la requête suivante dans un seul thread. Pour ce faire, exécutez la requête `SET max_threads = 1`. -3. Exécutez la requête `INSERT INTO <new_table_name> SELECT * FROM <damaged_table_name>`. Cette requête copie les données non corrompues de la table endommagée vers une autre table. Seules les données avant la partie corrompue seront copiées. -4. Redémarrez le `clickhouse-client` pour réinitialiser la valeur de `max_threads`. - -## DESCRIBE TABLE {#misc-describe-table} - -``` sql -DESC|DESCRIBE TABLE [db.]table [INTO OUTFILE filename] [FORMAT format] -``` - -Renvoie les colonnes de type `String` suivantes: - -- `name` — Column name. -- `type`— Column type. -- `default_type` — Clause that is used in [expression par défaut](create.md#create-default-values) (`DEFAULT`, `MATERIALIZED` ou `ALIAS`). La colonne contient une chaîne vide, si l'expression par défaut n'est pas spécifiée. -- `default_expression` — Value specified in the `DEFAULT` clause. -- `comment_expression` — Comment text. - -Les structures de données imbriquées sont sorties au format “expanded”. Chaque colonne est affichée séparément, avec le nom après un point. - -## DETACH {#detach} - -Supprime les informations sur la table ‘name’ du serveur. Le serveur cesse de connaître l'existence de la table. - -``` sql -DETACH TABLE [IF EXISTS] [db.]name [ON CLUSTER cluster] -``` - -Cela ne supprime pas les données ou les métadonnées de la table. Lors du prochain lancement du serveur, le serveur lira les métadonnées et découvrira à nouveau la table. -De même, une table “detached” peut être ré-attachée en utilisant la requête `ATTACH` (à l'exception des tables système, dont les métadonnées ne sont pas sauvegardées). - -Il n'y a pas de requête `DETACH DATABASE`. - -## DROP {#drop} - -Cette requête a deux types: `DROP DATABASE` et `DROP TABLE`. - -``` sql -DROP DATABASE [IF EXISTS] db [ON CLUSTER cluster] -``` - -Supprime toutes les tables à l'intérieur de la base de données ‘db’, puis supprime la base de données ‘db’ elle-même. -Si `IF EXISTS` est spécifié, il ne renvoie pas d'erreur si la base de données n'existe pas. - -``` sql -DROP [TEMPORARY] TABLE [IF EXISTS] [db.]name [ON CLUSTER cluster] -``` - -Supprime la table. -Si `IF EXISTS` est spécifié, il ne renvoie pas d'erreur si la table n'existe pas ou si la base de données n'existe pas. 
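- -Par exemple, une esquisse (la base `mydb` et la table `old_data` sont des noms hypothétiques): - -``` sql --- Ne renvoie pas d'erreur si la table ou la base n'existe pas. -DROP TABLE IF EXISTS mydb.old_data; -DROP DATABASE IF EXISTS mydb; -```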
- - DROP DICTIONARY [IF EXISTS] [db.]name - -Supprime le dictionnaire. -Si `IF EXISTS` est spécifié, il ne renvoie pas d'erreur si la table n'existe pas ou si la base de données n'existe pas. - -## DROP USER {#drop-user-statement} - -Supprime un utilisateur. - -### Syntaxe {#drop-user-syntax} - -``` sql -DROP USER [IF EXISTS] name [,...] [ON CLUSTER cluster_name] -``` - -## DROP ROLE {#drop-role-statement} - -Supprime un rôle. - -Le rôle supprimé est révoqué de toutes les entités où il a été accordé. - -### Syntaxe {#drop-role-syntax} - -``` sql -DROP ROLE [IF EXISTS] name [,...] [ON CLUSTER cluster_name] -``` - -## DROP ROW POLICY {#drop-row-policy-statement} - -Supprime une stratégie de ligne. - -La stratégie de ligne supprimée est révoquée de toutes les entités sur lesquelles elle a été affectée. - -### Syntaxe {#drop-row-policy-syntax} - -``` sql -DROP [ROW] POLICY [IF EXISTS] name [,...] ON [database.]table [,...] [ON CLUSTER cluster_name] -``` - -## DROP QUOTA {#drop-quota-statement} - -Supprime un quota. - -Le quota supprimé est révoqué de toutes les entités où il a été affecté. - -### Syntaxe {#drop-quota-syntax} - -``` sql -DROP QUOTA [IF EXISTS] name [,...] [ON CLUSTER cluster_name] -``` - -## DROP SETTINGS PROFILE {#drop-settings-profile-statement} - -Supprime un profil de paramètres. - -Le profil de paramètres supprimé est révoqué de toutes les entités où il a été affecté. - -### Syntaxe {#drop-settings-profile-syntax} - -``` sql -DROP [SETTINGS] PROFILE [IF EXISTS] name [,...] [ON CLUSTER cluster_name] -``` - -## EXISTS {#exists-statement} - -``` sql -EXISTS [TEMPORARY] [TABLE|DICTIONARY] [db.]name [INTO OUTFILE filename] [FORMAT format] -``` - -Renvoie une seule colonne de type `UInt8`, qui contient la valeur unique `0` si la table ou la base de données n'existe pas, ou `1` si la table existe dans la base de données spécifiée. - -## KILL QUERY {#kill-query-statement} - -``` sql -KILL QUERY [ON CLUSTER cluster] - WHERE <where expression to SELECT FROM system.processes query> - [SYNC|ASYNC|TEST] - [FORMAT format] -``` - -Tente de mettre fin de force aux requêtes en cours d'exécution. -Les requêtes à terminer sont sélectionnées dans la table `system.processes` en utilisant les critères définis dans la clause `WHERE` de la requête `KILL`. - -Exemple: - -``` sql --- Forcibly terminates all queries with the specified query_id: -KILL QUERY WHERE query_id='2-857d-4a57-9ee0-327da5d60a90' - --- Synchronously terminates all queries run by 'username': -KILL QUERY WHERE user='username' SYNC -``` - -Les utilisateurs en lecture seule peuvent uniquement arrêter leurs propres requêtes. - -Par défaut, la version asynchrone des requêtes est utilisée (`ASYNC`), qui n'attend pas la confirmation que les requêtes se sont arrêtées. - -La version synchrone (`SYNC`) attend que toutes les requêtes s'arrêtent et affiche des informations sur chaque processus à mesure qu'il s'arrête. -La réponse contient la colonne `kill_status`, qui peut prendre les valeurs suivantes: - -1. ‘finished’ – The query was terminated successfully. -2. ‘waiting’ – Waiting for the query to end after sending it a signal to terminate. -3. The other values explain why the query can't be stopped. - -Une requête de test (`TEST`) vérifie uniquement les droits de l'utilisateur et affiche une liste de requêtes à arrêter. - -## KILL MUTATION {#kill-mutation} - -``` sql -KILL MUTATION [ON CLUSTER cluster] - WHERE <where expression to SELECT FROM system.mutations query> - [TEST] - [FORMAT format] -``` - -Essaie d'annuler et de supprimer une [mutation](alter.md#alter-mutations) actuellement en cours d'exécution. 
Les mutations à annuler sont sélectionnées dans la table [`system.mutations`](../../operations/system-tables.md#system_tables-mutations) à l'aide du filtre spécifié dans la clause `WHERE` de la requête `KILL`. - -Une requête de test (`TEST`) vérifie uniquement les droits de l'utilisateur et affiche une liste de requêtes à arrêter. - -Exemple: - -``` sql --- Cancel and remove all mutations of the single table: -KILL MUTATION WHERE database = 'default' AND table = 'table' - --- Cancel the specific mutation: -KILL MUTATION WHERE database = 'default' AND table = 'table' AND mutation_id = 'mutation_3.txt' -``` - -The query is useful when a mutation is stuck and cannot finish (e.g. if some function in the mutation query throws an exception when applied to the data contained in the table). - -Les modifications déjà apportées par la mutation ne sont pas annulées. - -## OPTIMIZE {#misc_operations-optimize} - -``` sql -OPTIMIZE TABLE [db.]name [ON CLUSTER cluster] [PARTITION partition | PARTITION ID 'partition_id'] [FINAL] [DEDUPLICATE] -``` - -Cette requête tente d'initialiser une fusion non programmée de parties de données pour les tables de la famille de moteurs [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md). - -La requête `OPTIMIZE` est également prise en charge pour les moteurs [MaterializedView](../../engines/table-engines/special/materializedview.md) et [Tampon](../../engines/table-engines/special/buffer.md). Les autres moteurs de table ne sont pas pris en charge. - -Lorsque `OPTIMIZE` est utilisé avec la famille de moteurs de table [ReplicatedMergeTree](../../engines/table-engines/mergetree-family/replication.md), ClickHouse crée une tâche pour la fusion et attend l'exécution sur tous les nœuds (si le paramètre `replication_alter_partitions_sync` est activé). - -- Si `OPTIMIZE` n'effectue pas de fusion pour une raison quelconque, il ne notifie pas le client. Pour activer les notifications, utilisez le paramètre [optimize_throw_if_noop](../../operations/settings/settings.md#setting-optimize_throw_if_noop). -- Si vous spécifiez un `PARTITION`, seule la partition spécifiée est optimisée. [Comment définir l'expression de la partition](alter.md#alter-how-to-specify-part-expr). -- Si vous spécifiez `FINAL`, l'optimisation est effectuée, même lorsque toutes les données sont déjà dans une partie. -- Si vous spécifiez `DEDUPLICATE`, alors des lignes complètement identiques seront dédupliquées (toutes les colonnes sont comparées), cela n'a de sens que pour le moteur MergeTree. - -!!! warning "Avertissement" - `OPTIMIZE` ne peut pas réparer l'erreur “Too many parts”. - -## RENAME {#misc_operations-rename} - -Renomme une ou plusieurs tables. - -``` sql -RENAME TABLE [db11.]name11 TO [db12.]name12, [db21.]name21 TO [db22.]name22, ... [ON CLUSTER cluster] -``` - -Toutes les tables sont renommées sous verrouillage global. Renommer des tables est une opération légère. Si vous avez indiqué une autre base de données après TO, la table sera déplacée vers cette base de données. Cependant, les répertoires contenant des bases de données doivent résider dans le même système de fichiers (sinon, une erreur est renvoyée). - -## SET {#query-set} - -``` sql -SET param = value -``` - -Assigne `value` au [paramètre](../../operations/settings/index.md) `param` pour la session en cours. Vous ne pouvez pas modifier [les paramètres du serveur](../../operations/server-configuration-parameters/index.md) de cette façon. 
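- -Par exemple, une esquisse (la valeur `8` est choisie arbitrairement): - -``` sql -SET max_threads = 8; -```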
- -Vous pouvez également définir toutes les valeurs de certains paramètres de profil dans une seule requête. - -``` sql -SET profile = 'profile-name-from-the-settings-file' -``` - -Pour plus d'informations, voir [Paramètre](../../operations/settings/settings.md). - -## SET ROLE {#set-role-statement} - -Active les rôles pour l'utilisateur actuel. - -### Syntaxe {#set-role-syntax} - -``` sql -SET ROLE {DEFAULT | NONE | role [,...] | ALL | ALL EXCEPT role [,...]} -``` - -## SET DEFAULT ROLE {#set-default-role-statement} - -Définit les rôles par défaut à un utilisateur. - -Les rôles par défaut sont automatiquement activés lors de la connexion de l'utilisateur. Vous pouvez définir par défaut uniquement les rôles précédemment accordés. Si le rôle n'est pas accordé à un utilisateur, ClickHouse lève une exception. - -### Syntaxe {#set-default-role-syntax} - -``` sql -SET DEFAULT ROLE {NONE | role [,...] | ALL | ALL EXCEPT role [,...]} TO {user|CURRENT_USER} [,...] -``` - -### Exemple {#set-default-role-examples} - -Définir plusieurs rôles par défaut à un utilisateur: - -``` sql -SET DEFAULT ROLE role1, role2, ... TO user -``` - -Définissez tous les rôles accordés par défaut sur un utilisateur: - -``` sql -SET DEFAULT ROLE ALL TO user -``` - -Purger les rôles par défaut d'un utilisateur: - -``` sql -SET DEFAULT ROLE NONE TO user -``` - -Définissez tous les rôles accordés par défaut à l'exception de certains d'entre eux: - -``` sql -SET DEFAULT ROLE ALL EXCEPT role1, role2 TO user -``` - -## TRUNCATE {#truncate-statement} - -``` sql -TRUNCATE TABLE [IF EXISTS] [db.]name [ON CLUSTER cluster] -``` - -Supprime toutes les données d'une table. Lorsque la clause `IF EXISTS` est omis, la requête renvoie une erreur si la table n'existe pas. - -Le `TRUNCATE` la requête n'est pas prise en charge pour [Vue](../../engines/table-engines/special/view.md), [Fichier](../../engines/table-engines/special/file.md), [URL](../../engines/table-engines/special/url.md) et [NULL](../../engines/table-engines/special/null.md) table des moteurs. - -## USE {#use} - -``` sql -USE db -``` - -Vous permet de définir la base de données actuelle pour la session. -La base de données actuelle est utilisée pour rechercher des tables si la base de données n'est pas explicitement définie dans la requête avec un point avant le nom de la table. -Cette requête ne peut pas être faite lors de l'utilisation du protocole HTTP, car il n'y a pas de concept de session. - -[Article Original](https://clickhouse.tech/docs/en/query_language/misc/) diff --git a/docs/fr/sql-reference/statements/revoke.md b/docs/fr/sql-reference/statements/revoke.md deleted file mode 100644 index 6137cc30f8c..00000000000 --- a/docs/fr/sql-reference/statements/revoke.md +++ /dev/null @@ -1,50 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 40 -toc_title: REVOKE ---- - -# REVOKE {#revoke} - -Révoque les privilèges des utilisateurs ou rôles. - -## Syntaxe {#revoke-syntax} - -**Révocation des privilèges des utilisateurs** - -``` sql -REVOKE [ON CLUSTER cluster_name] privilege[(column_name [,...])] [,...] ON {db.table|db.*|*.*|table|*} FROM {user | CURRENT_USER} [,...] | ALL | ALL EXCEPT {user | CURRENT_USER} [,...] -``` - -**Révocation des rôles des utilisateurs** - -``` sql -REVOKE [ON CLUSTER cluster_name] [ADMIN OPTION FOR] role [,...] FROM {user | role | CURRENT_USER} [,...] | ALL | ALL EXCEPT {user_name | role_name | CURRENT_USER} [,...] 
-``` - -## Description {#revoke-description} - -Pour révoquer certains privilèges, vous pouvez utiliser un privilège de portée plus large que celui que vous envisagez de révoquer. Par exemple, si un utilisateur a le privilège `SELECT (x,y)`, un administrateur peut effectuer la requête `REVOKE SELECT(x,y) ...`, `REVOKE SELECT * ...` ou même `REVOKE ALL PRIVILEGES ...` pour révoquer ce privilège. - -### Révocations Partielles {#partial-revokes-dscr} - -Vous pouvez révoquer une partie d'un privilège. Par exemple, si un utilisateur a le privilège `SELECT *.*`, vous pouvez révoquer un privilège pour lire les données d'une table ou d'une base de données. - -## Exemple {#revoke-example} - -Accorde au compte utilisateur `john` le privilège de sélectionner dans toutes les bases de données, à l'exception de la base `accounts`: - -``` sql -GRANT SELECT ON *.* TO john; -REVOKE SELECT ON accounts.* FROM john; -``` - -Accorde au compte utilisateur `mira` le privilège de sélectionner toutes les colonnes de la table `accounts.staff`, à l'exception de la colonne `wage`. - -``` sql -GRANT SELECT ON accounts.staff TO mira; -REVOKE SELECT(wage) ON accounts.staff FROM mira; -``` - -{## [Article Original](https://clickhouse.tech/docs/en/operations/settings/settings/) ##} diff --git a/docs/fr/sql-reference/statements/select/array-join.md b/docs/fr/sql-reference/statements/select/array-join.md deleted file mode 100644 index 07b27d5d16c..00000000000 --- a/docs/fr/sql-reference/statements/select/array-join.md +++ /dev/null @@ -1,282 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd ---- - -# Clause de jointure de tableau {#select-array-join-clause} - -C'est une opération courante pour les tables qui contiennent une colonne de tableau de produire une nouvelle table qui a une colonne avec chaque élément de tableau individuel de cette colonne initiale, tandis que les valeurs des autres colonnes sont dupliquées. C'est précisément ce que fait la clause `ARRAY JOIN`. - -Son nom vient du fait qu'elle peut être vue comme l'exécution d'un `JOIN` avec un tableau ou une structure de données imbriquée. L'intention est similaire à celle de la fonction [arrayJoin](../../functions/array-join.md#functions_arrayjoin), mais la fonctionnalité de la clause est plus large. - -Syntaxe: - -``` sql -SELECT <expr_list> -FROM <left_subquery> -[LEFT] ARRAY JOIN <array> -[WHERE|PREWHERE <expr>] -... -``` - -Vous ne pouvez spécifier qu'une seule clause `ARRAY JOIN` dans une requête `SELECT`. - -Les types d'`ARRAY JOIN` pris en charge sont énumérés ci-dessous: - -- `ARRAY JOIN` - Dans le cas de base, les tableaux vides ne sont pas inclus dans le résultat de `JOIN`. -- `LEFT ARRAY JOIN` - Le résultat de `JOIN` contient des lignes avec des tableaux vides. La valeur d'un tableau vide est définie sur la valeur par défaut pour le type d'élément de tableau (généralement 0, chaîne vide ou NULL). - -## Exemples de jointure de tableau de base {#basic-array-join-examples} - -Les exemples ci-dessous illustrent l'utilisation des clauses `ARRAY JOIN` et `LEFT ARRAY JOIN`. 
Créons une table avec une colonne de type [Tableau](../../../sql-reference/data-types/array.md) et insérons-y des valeurs: - -``` sql -CREATE TABLE arrays_test -( - s String, - arr Array(UInt8) -) ENGINE = Memory; - -INSERT INTO arrays_test -VALUES ('Hello', [1,2]), ('World', [3,4,5]), ('Goodbye', []); -``` - -``` text -┌─s───────────┬─arr─────┐ -│ Hello │ [1,2] │ -│ World │ [3,4,5] │ -│ Goodbye │ [] │ -└─────────────┴─────────┘ -``` - -L'exemple ci-dessous utilise la clause `ARRAY JOIN`: - -``` sql -SELECT s, arr -FROM arrays_test -ARRAY JOIN arr; -``` - -``` text -┌─s─────┬─arr─┐ -│ Hello │ 1 │ -│ Hello │ 2 │ -│ World │ 3 │ -│ World │ 4 │ -│ World │ 5 │ -└───────┴─────┘ -``` - -L'exemple suivant utilise la clause `LEFT ARRAY JOIN`: - -``` sql -SELECT s, arr -FROM arrays_test -LEFT ARRAY JOIN arr; -``` - -``` text -┌─s───────────┬─arr─┐ -│ Hello │ 1 │ -│ Hello │ 2 │ -│ World │ 3 │ -│ World │ 4 │ -│ World │ 5 │ -│ Goodbye │ 0 │ -└─────────────┴─────┘ -``` - -## À L'Aide D'Alias {#using-aliases} - -Un alias peut être spécifié pour un tableau dans la clause `ARRAY JOIN`. Dans ce cas, un élément du tableau est accessible par cet alias, mais le tableau lui-même reste accessible par son nom d'origine. Exemple: - -``` sql -SELECT s, arr, a -FROM arrays_test -ARRAY JOIN arr AS a; -``` - -``` text -┌─s─────┬─arr─────┬─a─┐ -│ Hello │ [1,2] │ 1 │ -│ Hello │ [1,2] │ 2 │ -│ World │ [3,4,5] │ 3 │ -│ World │ [3,4,5] │ 4 │ -│ World │ [3,4,5] │ 5 │ -└───────┴─────────┴───┘ -``` - -En utilisant des alias, vous pouvez effectuer `ARRAY JOIN` avec un tableau externe. Exemple: - -``` sql -SELECT s, arr_external -FROM arrays_test -ARRAY JOIN [1, 2, 3] AS arr_external; -``` - -``` text -┌─s───────────┬─arr_external─┐ -│ Hello │ 1 │ -│ Hello │ 2 │ -│ Hello │ 3 │ -│ World │ 1 │ -│ World │ 2 │ -│ World │ 3 │ -│ Goodbye │ 1 │ -│ Goodbye │ 2 │ -│ Goodbye │ 3 │ -└─────────────┴──────────────┘ -``` - -Plusieurs tableaux, séparés par des virgules, peuvent figurer dans la clause `ARRAY JOIN`. Dans ce cas, le `JOIN` est effectué avec eux simultanément (la somme directe, pas le produit cartésien). Notez que tous les tableaux doivent avoir la même taille. 
Exemple: - -``` sql -SELECT s, arr, a, num, mapped -FROM arrays_test -ARRAY JOIN arr AS a, arrayEnumerate(arr) AS num, arrayMap(x -> x + 1, arr) AS mapped; -``` - -``` text -┌─s─────┬─arr─────┬─a─┬─num─┬─mapped─┐ -│ Hello │ [1,2] │ 1 │ 1 │ 2 │ -│ Hello │ [1,2] │ 2 │ 2 │ 3 │ -│ World │ [3,4,5] │ 3 │ 1 │ 4 │ -│ World │ [3,4,5] │ 4 │ 2 │ 5 │ -│ World │ [3,4,5] │ 5 │ 3 │ 6 │ -└───────┴─────────┴───┴─────┴────────┘ -``` - -L'exemple ci-dessous utilise la [arrayEnumerate](../../../sql-reference/functions/array-functions.md#array_functions-arrayenumerate) fonction: - -``` sql -SELECT s, arr, a, num, arrayEnumerate(arr) -FROM arrays_test -ARRAY JOIN arr AS a, arrayEnumerate(arr) AS num; -``` - -``` text -┌─s─────┬─arr─────┬─a─┬─num─┬─arrayEnumerate(arr)─┐ -│ Hello │ [1,2] │ 1 │ 1 │ [1,2] │ -│ Hello │ [1,2] │ 2 │ 2 │ [1,2] │ -│ World │ [3,4,5] │ 3 │ 1 │ [1,2,3] │ -│ World │ [3,4,5] │ 4 │ 2 │ [1,2,3] │ -│ World │ [3,4,5] │ 5 │ 3 │ [1,2,3] │ -└───────┴─────────┴───┴─────┴─────────────────────┘ -``` - -## Jointure de tableau avec la Structure de données imbriquée {#array-join-with-nested-data-structure} - -`ARRAY JOIN` fonctionne également avec [structures de données imbriquées](../../../sql-reference/data-types/nested-data-structures/nested.md): - -``` sql -CREATE TABLE nested_test -( - s String, - nest Nested( - x UInt8, - y UInt32) -) ENGINE = Memory; - -INSERT INTO nested_test -VALUES ('Hello', [1,2], [10,20]), ('World', [3,4,5], [30,40,50]), ('Goodbye', [], []); -``` - -``` text -┌─s───────┬─nest.x──┬─nest.y─────┐ -│ Hello │ [1,2] │ [10,20] │ -│ World │ [3,4,5] │ [30,40,50] │ -│ Goodbye │ [] │ [] │ -└─────────┴─────────┴────────────┘ -``` - -``` sql -SELECT s, `nest.x`, `nest.y` -FROM nested_test -ARRAY JOIN nest; -``` - -``` text -┌─s─────┬─nest.x─┬─nest.y─┐ -│ Hello │ 1 │ 10 │ -│ Hello │ 2 │ 20 │ -│ World │ 3 │ 30 │ -│ World │ 4 │ 40 │ -│ World │ 5 │ 50 │ -└───────┴────────┴────────┘ -``` - -Lorsque vous spécifiez des noms de structures de données imbriquées dans `ARRAY JOIN` le sens est le même que `ARRAY JOIN` avec tous les éléments du tableau qui la compose. Des exemples sont énumérés ci-dessous: - -``` sql -SELECT s, `nest.x`, `nest.y` -FROM nested_test -ARRAY JOIN `nest.x`, `nest.y`; -``` - -``` text -┌─s─────┬─nest.x─┬─nest.y─┐ -│ Hello │ 1 │ 10 │ -│ Hello │ 2 │ 20 │ -│ World │ 3 │ 30 │ -│ World │ 4 │ 40 │ -│ World │ 5 │ 50 │ -└───────┴────────┴────────┘ -``` - -Cette variation a également du sens: - -``` sql -SELECT s, `nest.x`, `nest.y` -FROM nested_test -ARRAY JOIN `nest.x`; -``` - -``` text -┌─s─────┬─nest.x─┬─nest.y─────┐ -│ Hello │ 1 │ [10,20] │ -│ Hello │ 2 │ [10,20] │ -│ World │ 3 │ [30,40,50] │ -│ World │ 4 │ [30,40,50] │ -│ World │ 5 │ [30,40,50] │ -└───────┴────────┴────────────┘ -``` - -Un alias peut être utilisé pour une structure de données imbriquée, afin de sélectionner `JOIN` le résultat ou le tableau source. 
Exemple: - -``` sql -SELECT s, `n.x`, `n.y`, `nest.x`, `nest.y` -FROM nested_test -ARRAY JOIN nest AS n; -``` - -``` text -┌─s─────┬─n.x─┬─n.y─┬─nest.x──┬─nest.y─────┐ -│ Hello │ 1 │ 10 │ [1,2] │ [10,20] │ -│ Hello │ 2 │ 20 │ [1,2] │ [10,20] │ -│ World │ 3 │ 30 │ [3,4,5] │ [30,40,50] │ -│ World │ 4 │ 40 │ [3,4,5] │ [30,40,50] │ -│ World │ 5 │ 50 │ [3,4,5] │ [30,40,50] │ -└───────┴─────┴─────┴─────────┴────────────┘ -``` - -Exemple d'utilisation de la fonction [arrayEnumerate](../../../sql-reference/functions/array-functions.md#array_functions-arrayenumerate): - -``` sql -SELECT s, `n.x`, `n.y`, `nest.x`, `nest.y`, num -FROM nested_test -ARRAY JOIN nest AS n, arrayEnumerate(`nest.x`) AS num; -``` - -``` text -┌─s─────┬─n.x─┬─n.y─┬─nest.x──┬─nest.y─────┬─num─┐ -│ Hello │ 1 │ 10 │ [1,2] │ [10,20] │ 1 │ -│ Hello │ 2 │ 20 │ [1,2] │ [10,20] │ 2 │ -│ World │ 3 │ 30 │ [3,4,5] │ [30,40,50] │ 1 │ -│ World │ 4 │ 40 │ [3,4,5] │ [30,40,50] │ 2 │ -│ World │ 5 │ 50 │ [3,4,5] │ [30,40,50] │ 3 │ -└───────┴─────┴─────┴─────────┴────────────┴─────┘ -``` - -## Détails De Mise En Œuvre {#implementation-details} - -L'ordre d'exécution de la requête est optimisé lors de l'exécution d'`ARRAY JOIN`. Bien qu'`ARRAY JOIN` doive toujours être spécifié avant la clause [WHERE](where.md)/[PREWHERE](prewhere.md) dans une requête, techniquement ils peuvent être exécutés dans n'importe quel ordre, sauf si le résultat d'`ARRAY JOIN` est utilisé pour le filtrage. L'ordre de traitement est contrôlé par l'optimiseur de requête. diff --git a/docs/fr/sql-reference/statements/select/distinct.md b/docs/fr/sql-reference/statements/select/distinct.md deleted file mode 100644 index 94552018c98..00000000000 --- a/docs/fr/sql-reference/statements/select/distinct.md +++ /dev/null @@ -1,63 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd ---- - -# La Clause DISTINCT {#select-distinct} - -Si `SELECT DISTINCT` est spécifié, seules les lignes uniques resteront dans le résultat de la requête. Ainsi, une seule ligne subsistera parmi chaque ensemble de lignes entièrement identiques dans le résultat. - -## Le Traitement Null {#null-processing} - -`DISTINCT` fonctionne avec [NULL](../../syntax.md#null-literal) comme si `NULL` était une valeur spécifique, et `NULL==NULL`. En d'autres termes, dans les résultats de `DISTINCT`, les différentes combinaisons avec `NULL` n'apparaissent qu'une seule fois. Cela diffère du traitement de `NULL` dans la plupart des autres contextes. - -## Alternative {#alternatives} - -Il est possible d'obtenir le même résultat en appliquant [GROUP BY](group-by.md) sur le même ensemble de valeurs que celui spécifié dans la clause `SELECT`, sans utiliser de fonctions d'agrégation. Mais il y a quelques différences par rapport à l'approche `GROUP BY`: - -- `DISTINCT` peut être utilisé conjointement avec `GROUP BY`. -- Lorsque [ORDER BY](order-by.md) est omis et [LIMIT](limit.md) est définie, la requête s'arrête immédiatement après que le nombre requis de lignes différentes a été lu. -- Les blocs de données sont produits au fur et à mesure qu'ils sont traités, sans attendre que la requête entière se termine. - -## Limitation {#limitations} - -`DISTINCT` n'est pas pris en charge si `SELECT` a au moins une colonne de tableau. - -## Exemple {#examples} - -ClickHouse prend en charge l'utilisation des clauses `DISTINCT` et `ORDER BY` pour différentes colonnes dans une requête. La clause `DISTINCT` est exécutée avant la clause `ORDER BY`. 
- -Exemple de table: - -``` text -┌─a─┬─b─┐ -│ 2 │ 1 │ -│ 1 │ 2 │ -│ 3 │ 3 │ -│ 2 │ 4 │ -└───┴───┘ -``` - -Lors de la sélection de données avec la requête `SELECT DISTINCT a FROM t1 ORDER BY b ASC`, nous obtenons le résultat suivant: - -``` text -┌─a─┐ -│ 2 │ -│ 1 │ -│ 3 │ -└───┘ -``` - -Si nous changeons la direction de tri avec `SELECT DISTINCT a FROM t1 ORDER BY b DESC`, nous obtenons le résultat suivant: - -``` text -┌─a─┐ -│ 3 │ -│ 1 │ -│ 2 │ -└───┘ -``` - -La ligne `2, 4` a été éliminée avant le tri. - -Prenez en compte cette spécificité d'implémentation lors de la programmation des requêtes. diff --git a/docs/fr/sql-reference/statements/select/format.md b/docs/fr/sql-reference/statements/select/format.md deleted file mode 100644 index a88bb7831ba..00000000000 --- a/docs/fr/sql-reference/statements/select/format.md +++ /dev/null @@ -1,18 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd ---- - -# La Clause FORMAT {#format-clause} - -ClickHouse prend en charge une large gamme de [formats de sérialisation](../../../interfaces/formats.md) qui peuvent être utilisés, entre autres, sur les résultats de la requête. Il existe plusieurs façons de choisir un format pour la sortie de `SELECT`; l'une d'elles est de spécifier `FORMAT format` à la fin de la requête pour obtenir les données résultantes dans le format voulu. - -Un format spécifique peut être utilisé pour des raisons de commodité, d'intégration avec d'autres systèmes ou d'amélioration des performances. - -## Format Par Défaut {#default-format} - -Si la clause `FORMAT` est omise, le format par défaut est utilisé, ce qui dépend à la fois des paramètres et de l'interface utilisée pour accéder au serveur ClickHouse. Pour l'[Interface HTTP](../../../interfaces/http.md) et le [client de ligne de commande](../../../interfaces/cli.md) en mode batch, le format par défaut est `TabSeparated`. Pour le client de ligne de commande en mode interactif, le format par défaut est `PrettyCompact` (il produit des tables compactes lisibles par l'homme). - -## Détails De Mise En Œuvre {#implementation-details} - -Lors de l'utilisation du client de ligne de commande, les données sont toujours transmises sur le réseau dans un format interne efficace (`Native`). Le client interprète indépendamment la clause `FORMAT` de la requête et formate les données lui-même (soulageant ainsi le réseau et le serveur de la charge supplémentaire). diff --git a/docs/fr/sql-reference/statements/select/from.md b/docs/fr/sql-reference/statements/select/from.md deleted file mode 100644 index 964ffdd13fb..00000000000 --- a/docs/fr/sql-reference/statements/select/from.md +++ /dev/null @@ -1,44 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd ---- - -# La Clause FROM {#select-from} - -La clause `FROM` spécifie la source à partir de laquelle lire les données: - -- [Table](../../../engines/table-engines/index.md) -- [Sous-requête](index.md) {##TODO: meilleur lien ##} -- [Fonction de Table](../../table-functions/index.md#table-functions) - -Les clauses [JOIN](join.md) et [ARRAY JOIN](array-join.md) peuvent également être utilisées pour étendre la fonctionnalité de la clause `FROM`. - -Une sous-requête est une autre requête `SELECT` qui peut être spécifiée entre parenthèses à l'intérieur de la clause `FROM`. - -La clause `FROM` peut contenir plusieurs sources de données, séparées par des virgules, ce qui équivaut à effectuer un [CROSS JOIN](join.md) sur elles. 
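- -À titre d'esquisse, les trois formes de source (la table `t` est un nom hypothétique; `numbers` est une fonction de table intégrée): - -``` sql --- table: -SELECT * FROM t; --- sous-requête: -SELECT * FROM (SELECT 1 AS x); --- fonction de table: -SELECT * FROM numbers(3); -```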
- -## Modificateur FINAL {#select-from-final} - -Lorsque `FINAL` est spécifié, ClickHouse fusionne complètement les données avant de renvoyer le résultat et effectue ainsi toutes les transformations de données qui se produisent lors des fusions pour le moteur de table donné. - -Il est applicable lors de la sélection de données à partir de tables qui utilisent [MergeTree](../../../engines/table-engines/mergetree-family/mergetree.md)-la famille de moteurs (à l'exception de `GraphiteMergeTree`). Également pris en charge pour: - -- [Répliqué](../../../engines/table-engines/mergetree-family/replication.md) les versions de `MergeTree` moteur. -- [Vue](../../../engines/table-engines/special/view.md), [Tampon](../../../engines/table-engines/special/buffer.md), [Distribué](../../../engines/table-engines/special/distributed.md), et [MaterializedView](../../../engines/table-engines/special/materializedview.md) moteurs qui fonctionnent sur d'autres moteurs, à condition qu'ils aient été créés sur `MergeTree`-tables de moteur. - -### Inconvénient {#drawbacks} - -Requêtes qui utilisent `FINAL` sont exécutés pas aussi vite que les requêtes similaires qui ne le font pas, car: - -- La requête est exécutée dans un seul thread et les données sont fusionnées lors de l'exécution de la requête. -- Les requêtes avec `FINAL` lire les colonnes de clé primaire en plus des colonnes spécifiées dans la requête. - -**Dans la plupart des cas, évitez d'utiliser `FINAL`.** L'approche commune consiste à utiliser différentes requêtes qui supposent les processus d'arrière-plan du `MergeTree` le moteur n'est pas encore arrivé et y faire face en appliquant l'agrégation (par exemple, pour éliminer les doublons). {##TODO: exemples ##} - -## Détails De Mise En Œuvre {#implementation-details} - -Si l' `FROM` la clause est omise, les données seront lues à partir `system.one` table. -Le `system.one` table contient exactement une ligne (cette table remplit le même but que la table double trouvée dans d'autres SGBD). - -Pour exécuter une requête, toutes les colonnes mentionnées dans la requête sont extraites de la table appropriée. Toutes les colonnes non nécessaires pour la requête externe sont rejetées des sous-requêtes. -Si une requête ne répertorie aucune colonne (par exemple, `SELECT count() FROM t`), une colonne est extraite de la table de toute façon (la plus petite est préférée), afin de calculer le nombre de lignes. diff --git a/docs/fr/sql-reference/statements/select/group-by.md b/docs/fr/sql-reference/statements/select/group-by.md deleted file mode 100644 index 9d1b5c276d5..00000000000 --- a/docs/fr/sql-reference/statements/select/group-by.md +++ /dev/null @@ -1,132 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd ---- - -# Clause GROUP BY {#select-group-by-clause} - -`GROUP BY` la clause change le `SELECT` requête dans un mode d'agrégation, qui fonctionne comme suit: - -- `GROUP BY` clause contient une liste des expressions (ou une seule expression, qui est considéré comme la liste de longueur). Cette liste agit comme un “grouping key”, tandis que chaque expression individuelle sera appelée “key expressions”. -- Toutes les expressions dans le [SELECT](index.md), [HAVING](having.md), et [ORDER BY](order-by.md) clause **devoir** être calculé sur la base d'expressions clés **ou** sur [les fonctions d'agrégation](../../../sql-reference/aggregate-functions/index.md) sur les expressions non-clés (y compris les colonnes simples). 
En d'autres termes, chaque colonne sélectionnée dans la table doit être utilisée soit dans une expression de clé, soit dans une fonction d'agrégat, mais pas les deux. -- Résultat de l'agrégation de `SELECT` la requête contiendra autant de lignes qu'il y avait des valeurs uniques de “grouping key” dans la table source. Habituellement, cela réduit considérablement le nombre de lignes, souvent par ordre de grandeur, mais pas nécessairement: le nombre de lignes reste le même si tous “grouping key” les valeurs sont distinctes. - -!!! note "Note" - Il existe un moyen supplémentaire d'exécuter l'agrégation sur une table. Si une requête ne contient que des colonnes de table à l'intérieur des fonctions `GROUP BY clause` peut être omis, et l'agrégation par un ensemble vide de touches est supposé. Ces interrogations renvoient toujours exactement une ligne. - -## Le Traitement NULL {#null-processing} - -Pour le regroupement, ClickHouse interprète [NULL](../../syntax.md#null-literal) comme une valeur, et `NULL==NULL`. Elle diffère de `NULL` traitement dans la plupart des autres contextes. - -Voici un exemple pour montrer ce que cela signifie. - -Supposons que vous avez cette table: - -``` text -┌─x─┬────y─┐ -│ 1 │ 2 │ -│ 2 │ ᴺᵁᴸᴸ │ -│ 3 │ 2 │ -│ 3 │ 3 │ -│ 3 │ ᴺᵁᴸᴸ │ -└───┴──────┘ -``` - -Requête `SELECT sum(x), y FROM t_null_big GROUP BY y` résultats dans: - -``` text -┌─sum(x)─┬────y─┐ -│ 4 │ 2 │ -│ 3 │ 3 │ -│ 5 │ ᴺᵁᴸᴸ │ -└────────┴──────┘ -``` - -Vous pouvez voir que `GROUP BY` pour `y = NULL` résumer `x` comme si `NULL` a cette valeur. - -Si vous passez plusieurs clés `GROUP BY` le résultat vous donnera toutes les combinaisons de la sélection, comme si `NULL` ont une valeur spécifique. - -## Avec modificateur de totaux {#with-totals-modifier} - -Si l' `WITH TOTALS` modificateur est spécifié, une autre ligne sera calculée. Cette ligne aura des colonnes clés contenant des valeurs par défaut (zéros ou lignes vides), et des colonnes de fonctions d'agrégat avec les valeurs calculées sur toutes les lignes (le “total” valeur). - -Cette ligne supplémentaire est uniquement produite en `JSON*`, `TabSeparated*`, et `Pretty*` formats, séparément des autres lignes: - -- Dans `JSON*` formats, cette ligne est sortie en tant que distinct ‘totals’ champ. -- Dans `TabSeparated*` formats, la ligne vient après le résultat principal, précédé par une ligne vide (après les autres données). -- Dans `Pretty*` formats, la ligne est sortie comme une table séparée après le résultat principal. -- Dans les autres formats, il n'est pas disponible. - -`WITH TOTALS` peut être exécuté de différentes manières lorsqu'il est présent. Le comportement dépend de l' ‘totals_mode’ paramètre. - -### Configuration Du Traitement Des Totaux {#configuring-totals-processing} - -Par défaut, `totals_mode = 'before_having'`. Dans ce cas, ‘totals’ est calculé sur toutes les lignes, y compris celles qui ne passent pas par `max_rows_to_group_by`. - -Les autres alternatives incluent uniquement les lignes qui passent à travers avoir dans ‘totals’, et se comporter différemment avec le réglage `max_rows_to_group_by` et `group_by_overflow_mode = 'any'`. - -`after_having_exclusive` – Don't include rows that didn't pass through `max_rows_to_group_by`. En d'autres termes, ‘totals’ aura moins ou le même nombre de lignes que si `max_rows_to_group_by` ont été omis. - -`after_having_inclusive` – Include all the rows that didn't pass through ‘max_rows_to_group_by’ dans ‘totals’. 
En d'autres termes, ‘totals’ aura plus ou le même nombre de lignes que si `max_rows_to_group_by` ont été omis. - -`after_having_auto` – Count the number of rows that passed through HAVING. If it is more than a certain amount (by default, 50%), include all the rows that didn't pass through ‘max_rows_to_group_by’ dans ‘totals’. Sinon, ne pas les inclure. - -`totals_auto_threshold` – By default, 0.5. The coefficient for `after_having_auto`. - -Si `max_rows_to_group_by` et `group_by_overflow_mode = 'any'` ne sont pas utilisés, toutes les variations de `after_having` sont les mêmes, et vous pouvez utiliser l'un d'eux (par exemple, `after_having_auto`). - -Vous pouvez utiliser avec les totaux dans les sous-requêtes, y compris les sous-requêtes dans la clause JOIN (dans ce cas, les valeurs totales respectives sont combinées). - -## Exemple {#examples} - -Exemple: - -``` sql -SELECT - count(), - median(FetchTiming > 60 ? 60 : FetchTiming), - count() - sum(Refresh) -FROM hits -``` - -Cependant, contrairement au SQL standard, si la table n'a pas de lignes (soit il n'y en a pas du tout, soit il n'y en a pas après avoir utilisé WHERE to filter), un résultat vide est renvoyé, et non le résultat d'une des lignes contenant les valeurs initiales des fonctions d'agrégat. - -Contrairement à MySQL (et conforme à SQL standard), vous ne pouvez pas obtenir une valeur d'une colonne qui n'est pas dans une fonction clé ou agrégée (sauf les expressions constantes). Pour contourner ce problème, vous pouvez utiliser le ‘any’ fonction d'agrégation (récupère la première valeur rencontrée) ou ‘min/max’. - -Exemple: - -``` sql -SELECT - domainWithoutWWW(URL) AS domain, - count(), - any(Title) AS title -- getting the first occurred page header for each domain. -FROM hits -GROUP BY domain -``` - -Pour chaque valeur de clé différente rencontrée, GROUP BY calcule un ensemble de valeurs de fonction d'agrégation. - -GROUP BY n'est pas pris en charge pour les colonnes de tableau. - -Une constante ne peut pas être spécifiée comme arguments pour les fonctions d'agrégation. Exemple: somme(1). Au lieu de cela, vous pouvez vous débarrasser de la constante. Exemple: `count()`. - -## Détails De Mise En Œuvre {#implementation-details} - -L'agrégation est l'une des caractéristiques les plus importantes d'un SGBD orienté colonne, et donc son implémentation est l'une des parties les plus optimisées de ClickHouse. Par défaut, l'agrégation se fait en mémoire à l'aide d'une table de hachage. Il a plus de 40 spécialisations qui sont choisies automatiquement en fonction de “grouping key” types de données. - -### Groupe par dans la mémoire externe {#select-group-by-in-external-memory} - -Vous pouvez activer le dumping des données temporaires sur le disque pour limiter l'utilisation de la mémoire pendant `GROUP BY`. -Le [max_bytes_before_external_group_by](../../../operations/settings/settings.md#settings-max_bytes_before_external_group_by) réglage détermine le seuil de consommation de RAM pour le dumping `GROUP BY` données temporaires dans le système de fichiers. Si elle est définie sur 0 (valeur par défaut), elle est désactivée. - -Lors de l'utilisation de `max_bytes_before_external_group_by`, nous vous recommandons de définir `max_memory_usage` environ deux fois plus élevé. Ceci est nécessaire car il y a deux étapes à l'agrégation: la lecture des données et la formation des données intermédiaires (1) et la fusion des données intermédiaires (2). Le Dumping des données dans le système de fichiers ne peut se produire qu'au cours de l'étape 1. 
Si les données temporaires n'ont pas été vidées, l'étape 2 peut nécessiter jusqu'à la même quantité de mémoire qu'à l'étape 1. - -Par exemple, si [max_memory_usage](../../../operations/settings/settings.md#settings_max_memory_usage) a été défini sur 10000000000 et que vous souhaitez utiliser l'agrégation externe, il est logique de définir `max_bytes_before_external_group_by` à 10000000000, et `max_memory_usage` à 20000000000. Lorsque l'agrégation externe est déclenchée (s'il y a eu au moins un vidage de données temporaires), la consommation maximale de RAM n'est que légèrement supérieure à `max_bytes_before_external_group_by`. - -Avec le traitement des requêtes distribuées, l'agrégation externe est effectuée sur des serveurs distants. Pour que le serveur demandeur n'utilise qu'une petite quantité de RAM, définissez `distributed_aggregation_memory_efficient` 1. - -Lors de la fusion de données vidées sur le disque, ainsi que lors de la fusion des résultats de serveurs distants lorsque `distributed_aggregation_memory_efficient` paramètre est activé, consomme jusqu'à `1/256 * the_number_of_threads` à partir de la quantité totale de mémoire RAM. - -Lorsque l'agrégation externe est activée, s'il y a moins de `max_bytes_before_external_group_by` of data (i.e. data was not flushed), the query runs just as fast as without external aggregation. If any temporary data was flushed, the run time will be several times longer (approximately three times). - -Si vous avez un [ORDER BY](order-by.md) avec un [LIMIT](limit.md) après `GROUP BY` puis la quantité de RAM dépend de la quantité de données dans `LIMIT`, pas dans l'ensemble de la table. Mais si l' `ORDER BY` n'a pas `LIMIT`, n'oubliez pas d'activer externe de tri (`max_bytes_before_external_sort`). diff --git a/docs/fr/sql-reference/statements/select/having.md b/docs/fr/sql-reference/statements/select/having.md deleted file mode 100644 index 9425830c3d4..00000000000 --- a/docs/fr/sql-reference/statements/select/having.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd ---- - -# Clause HAVING {#having-clause} - -Permet de filtrer les résultats d'agrégation produits par [GROUP BY](group-by.md). Il est similaire à la [WHERE](where.md) la clause, mais la différence est que `WHERE` est effectuée avant l'agrégation, tandis que `HAVING` est effectué d'après elle. - -Il est possible de référencer les résultats d'agrégation à partir de `SELECT` la clause dans `HAVING` clause par leur alias. Alternativement, `HAVING` clause peut filtrer sur les résultats d'agrégats supplémentaires qui ne sont pas retournés dans les résultats de la requête. - -## Limitation {#limitations} - -`HAVING` ne peut pas être utilisé si le regroupement n'est pas effectuée. Utiliser `WHERE` plutôt. diff --git a/docs/fr/sql-reference/statements/select/index.md b/docs/fr/sql-reference/statements/select/index.md deleted file mode 100644 index 1d53ae80eb4..00000000000 --- a/docs/fr/sql-reference/statements/select/index.md +++ /dev/null @@ -1,158 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 33 -toc_title: SELECT ---- - -# Sélectionnez la syntaxe des requêtes {#select-queries-syntax} - -`SELECT` effectue la récupération des données. - -``` sql -[WITH expr_list|(subquery)] -SELECT [DISTINCT] expr_list -[FROM [db.]table | (subquery) | table_function] [FINAL] -[SAMPLE sample_coeff] -[ARRAY JOIN ...] 
-[GLOBAL] [ANY|ALL|ASOF] [INNER|LEFT|RIGHT|FULL|CROSS] [OUTER|SEMI|ANTI] JOIN (subquery)|table (ON )|(USING ) -[PREWHERE expr] -[WHERE expr] -[GROUP BY expr_list] [WITH TOTALS] -[HAVING expr] -[ORDER BY expr_list] [WITH FILL] [FROM expr] [TO expr] [STEP expr] -[LIMIT [offset_value, ]n BY columns] -[LIMIT [n, ]m] [WITH TIES] -[UNION ALL ...] -[INTO OUTFILE filename] -[FORMAT format] -``` - -Toutes les clauses sont facultatives, à l'exception de la liste d'expressions requise immédiatement après `SELECT` qui est abordée plus en détail [dessous](#select-clause). - -Spécificités de chaque clause facultative, sont couverts dans des sections distinctes, qui sont énumérés dans le même ordre qu'elles sont exécutées: - -- [AVEC la clause](with.md) -- [La clause DISTINCT](distinct.md) -- [De la clause](from.md) -- [Exemple de clause](sample.md) -- [Clause de JOINTURE](join.md) -- [Clause PREWHERE](prewhere.md) -- [Clause where](where.md) -- [Groupe par clause](group-by.md) -- [Limite par clause](limit-by.md) -- [Clause HAVING](having.md) -- [Clause SELECT](#select-clause) -- [Clause LIMIT](limit.md) -- [Clause UNION ALL](union.md) - -## Clause SELECT {#select-clause} - -[Expression](../../syntax.md#syntax-expressions) spécifié dans le `SELECT` clause sont calculés après toutes les opérations dans les clauses décrites ci-dessus sont terminés. Ces expressions fonctionnent comme si elles s'appliquaient à des lignes séparées dans le résultat. Si les expressions dans le `SELECT` la clause contient des fonctions d'agrégation, puis clickhouse traite les fonctions d'agrégation et les expressions utilisées [GROUP BY](group-by.md) agrégation. - -Si vous souhaitez inclure toutes les colonnes dans le résultat, utilisez l'astérisque (`*`) symbole. Exemple, `SELECT * FROM ...`. - -Pour correspondre à certaines colonnes dans le résultat avec un [re2](https://en.wikipedia.org/wiki/RE2_(software)) expression régulière, vous pouvez utiliser le `COLUMNS` expression. - -``` sql -COLUMNS('regexp') -``` - -Par exemple, considérez le tableau: - -``` sql -CREATE TABLE default.col_names (aa Int8, ab Int8, bc Int8) ENGINE = TinyLog -``` - -La requête suivante sélectionne les données de toutes les colonnes contenant les `a` symbole dans leur nom. - -``` sql -SELECT COLUMNS('a') FROM col_names -``` - -``` text -┌─aa─┬─ab─┐ -│ 1 │ 1 │ -└────┴────┘ -``` - -Les colonnes sélectionnées sont retournés pas dans l'ordre alphabétique. - -Vous pouvez utiliser plusieurs `COLUMNS` expressions dans une requête et leur appliquer des fonctions. - -Exemple: - -``` sql -SELECT COLUMNS('a'), COLUMNS('c'), toTypeName(COLUMNS('c')) FROM col_names -``` - -``` text -┌─aa─┬─ab─┬─bc─┬─toTypeName(bc)─┐ -│ 1 │ 1 │ 1 │ Int8 │ -└────┴────┴────┴────────────────┘ -``` - -Chaque colonne renvoyée par le `COLUMNS` expression est passée à la fonction en tant qu'argument séparé. Vous pouvez également passer d'autres arguments à la fonction si elle les supporte. Soyez prudent lorsque vous utilisez des fonctions. Si une fonction ne prend pas en charge le nombre d'arguments que vous lui avez transmis, ClickHouse lève une exception. - -Exemple: - -``` sql -SELECT COLUMNS('a') + COLUMNS('c') FROM col_names -``` - -``` text -Received exception from server (version 19.14.1): -Code: 42. DB::Exception: Received from localhost:9000. DB::Exception: Number of arguments for function plus doesn't match: passed 3, should be 2. -``` - -Dans cet exemple, `COLUMNS('a')` retourne deux colonnes: `aa` et `ab`. `COLUMNS('c')` renvoie la `bc` colonne. 
-
-Columns that matched the `COLUMNS` expression can have different data types. If `COLUMNS` does not match any columns and is the only expression in `SELECT`, ClickHouse throws an exception.
-
-### Asterisk {#asterisk}
-
-You can put an asterisk in any part of a query instead of an expression. When the query is analyzed, the asterisk is expanded to a list of all table columns (excluding the `MATERIALIZED` and `ALIAS` columns). There are only a few cases when using an asterisk is justified:
-
-- When creating a table dump.
-- For tables containing just a few columns, such as system tables.
-- For getting information about what columns are in a table. In this case, set `LIMIT 1`. But it is better to use the `DESC TABLE` query.
-- When there is strong filtration on a small number of columns using `PREWHERE`.
-- In subqueries (since columns that aren't needed for the external query are excluded from subqueries).
-
-In all other cases, we do not recommend using the asterisk, since it only gives you the drawbacks of a columnar DBMS instead of the advantages. In other words, using the asterisk is not recommended.
-
-### Extreme Values {#extreme-values}
-
-In addition to results, you can also get minimum and maximum values for the result columns. To do this, set the **extremes** setting to 1. Minimums and maximums are calculated for numeric types, dates, and dates with times. For other columns, the default values are output.
-
-An extra two rows are calculated – the minimums and maximums, respectively. These extra two rows are output in the `JSON*`, `TabSeparated*`, and `Pretty*` [formats](../../../interfaces/formats.md), separate from the other rows. They are not output for other formats.
-
-In `JSON*` formats, the extreme values are output in a separate ‘extremes’ field. In `TabSeparated*` formats, the row comes after the main result, and after ‘totals’ if present. It is preceded by an empty row (after the other data). In `Pretty*` formats, the row is output as a separate table after the main result, and after `totals` if present.
-
-Extreme values are calculated for rows before `LIMIT`, but after `LIMIT BY`. However, when using `LIMIT offset, size`, the rows before `offset` are included in `extremes`. In stream requests, the result may also include a small number of rows that passed through `LIMIT`.
-
-### Notes {#notes}
-
-You can use synonyms (`AS` aliases) in any part of a query.
-
-The `GROUP BY` and `ORDER BY` clauses do not support positional arguments. This contradicts MySQL, but conforms to standard SQL. For example, `GROUP BY 1, 2` will be interpreted as grouping by constants (i.e. aggregation of all rows into one).
-
-## Implementation Details {#implementation-details}
-
-If the query omits the `DISTINCT`, `GROUP BY` and `ORDER BY` clauses and the `IN` and `JOIN` subqueries, the query will be completely stream processed, using O(1) amount of RAM.
Otherwise, the query might consume a lot of RAM if the appropriate restrictions are not specified:
-
-- `max_memory_usage`
-- `max_rows_to_group_by`
-- `max_rows_to_sort`
-- `max_rows_in_distinct`
-- `max_bytes_in_distinct`
-- `max_rows_in_set`
-- `max_bytes_in_set`
-- `max_rows_in_join`
-- `max_bytes_in_join`
-- `max_bytes_before_external_sort`
-- `max_bytes_before_external_group_by`
-
-For more information, see the section “Settings”. It is possible to use external sorting (saving temporary tables to a disk) and external aggregation.
-
-{## [Original article](https://clickhouse.tech/docs/en/sql-reference/statements/select/) ##}
diff --git a/docs/fr/sql-reference/statements/select/into-outfile.md b/docs/fr/sql-reference/statements/select/into-outfile.md
deleted file mode 100644
index 0150de7cb97..00000000000
--- a/docs/fr/sql-reference/statements/select/into-outfile.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
----
-
-# INTO OUTFILE Clause {#into-outfile-clause}
-
-Add the `INTO OUTFILE filename` clause (where filename is a string literal) to a `SELECT query` to redirect its output to the specified file on the client side.
-
-## Implementation Details {#implementation-details}
-
-- This functionality is available in the [command-line client](../../../interfaces/cli.md) and [clickhouse-local](../../../operations/utilities/clickhouse-local.md). Thus, a query sent via the [HTTP interface](../../../interfaces/http.md) will fail.
-- The query will fail if a file with the same filename already exists.
-- The default [output format](../../../interfaces/formats.md) is `TabSeparated` (like in the command-line client batch mode).
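A minimal sketch of the clause as run from the command-line client; the file name is an illustrative choice:

``` sql
-- Writes ten TabSeparated rows to select.tsv next to the client,
-- and fails if the file already exists.
SELECT number, number * 2 AS doubled
FROM system.numbers
LIMIT 10
INTO OUTFILE 'select.tsv'
```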
diff --git a/docs/fr/sql-reference/statements/select/join.md b/docs/fr/sql-reference/statements/select/join.md
deleted file mode 100644
index 4233a120674..00000000000
--- a/docs/fr/sql-reference/statements/select/join.md
+++ /dev/null
@@ -1,187 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
----
-
-# JOIN Clause {#select-join}
-
-Join produces a new table by combining columns from one or multiple tables by using values common to each. It is a common operation in databases with SQL support, which corresponds to the [relational algebra](https://en.wikipedia.org/wiki/Relational_algebra#Joins_and_join-like_operators) join. The special case of one table join is often referred to as “self-join”.
-
-Syntax:
-
-``` sql
-SELECT <expr_list>
-FROM <left_table>
-[GLOBAL] [INNER|LEFT|RIGHT|FULL|CROSS] [OUTER|SEMI|ANTI|ANY|ASOF] JOIN <right_table>
-(ON <expr_list>)|(USING <column_list>) ...
-```
-
-Expressions from the `ON` clause and columns from the `USING` clause are called “join keys”. Unless otherwise stated, join produces a [Cartesian product](https://en.wikipedia.org/wiki/Cartesian_product) from rows with matching “join keys”, which might produce results with much more rows than the source tables.
-
-## Supported Types of JOIN {#select-join-types}
-
-All standard [SQL JOIN](https://en.wikipedia.org/wiki/Join_(SQL)) types are supported:
-
-- `INNER JOIN`, only matching rows are returned.
-- `LEFT OUTER JOIN`, non-matching rows from the left table are returned in addition to matching rows.
-- `RIGHT OUTER JOIN`, non-matching rows from the right table are returned in addition to matching rows.
-- `FULL OUTER JOIN`, non-matching rows from both tables are returned in addition to matching rows.
-- `CROSS JOIN`, produces the Cartesian product of whole tables, “join keys” are **not** specified.
-
-`JOIN` without a specified type implies `INNER`. The `OUTER` keyword can be safely omitted. An alternative syntax for `CROSS JOIN` is specifying multiple tables in the [FROM clause](from.md) separated by commas.
-
-Additional join types available in ClickHouse:
-
-- `LEFT SEMI JOIN` and `RIGHT SEMI JOIN`, a whitelist on “join keys”, without producing a Cartesian product.
-- `LEFT ANTI JOIN` and `RIGHT ANTI JOIN`, a blacklist on “join keys”, without producing a Cartesian product.
-- `LEFT ANY JOIN`, `RIGHT ANY JOIN` and `INNER ANY JOIN`, partially (for opposite side of `LEFT` and `RIGHT`) or completely (for `INNER` and `FULL`) disables the Cartesian product for standard `JOIN` types.
-- `ASOF JOIN` and `LEFT ASOF JOIN`, joining sequences with a non-exact match. `ASOF JOIN` usage is described below.
-
-## Settings {#join-settings}
-
-!!! note "Note"
-    The default strictness value can be overridden using the [join_default_strictness](../../../operations/settings/settings.md#settings-join_default_strictness) setting.
-
-### ASOF JOIN Usage {#asof-join-usage}
-
-`ASOF JOIN` is useful when you need to join records that have no exact match.
-
-Tables for `ASOF JOIN` must have an ordered sequence column. This column cannot be alone in a table, and should be one of the data types: `UInt32`, `UInt64`, `Float32`, `Float64`, `Date`, and `DateTime`.
-
-Syntax `ASOF JOIN ... ON`:
-
-``` sql
-SELECT expressions_list
-FROM table_1
-ASOF LEFT JOIN table_2
-ON equi_cond AND closest_match_cond
-```
-
-You can use any number of equality conditions and exactly one closest match condition. For example, `SELECT count() FROM table_1 ASOF LEFT JOIN table_2 ON table_1.a == table_2.b AND table_2.t <= table_1.t`.
-
-Supported conditions for the closest match: `>`, `>=`, `<`, `<=`.
-
-Syntax `ASOF JOIN ... USING`:
-
-``` sql
-SELECT expressions_list
-FROM table_1
-ASOF JOIN table_2
-USING (equi_column1, ... equi_columnN, asof_column)
-```
-
-`ASOF JOIN` uses `equi_columnX` for joining on equality and `asof_column` for joining on the closest match with the `table_1.asof_column >= table_2.asof_column` condition. The `asof_column` column is always the last one in the `USING` clause.
-
-For example, consider the following tables:
-
-         table_1                           table_2
-      event   | ev_time | user_id       event   | ev_time | user_id
-    ----------|---------|----------   ----------|---------|----------
-                  ...                               ...
-    event_1_1 |  12:00  |  42         event_2_1 |  11:59  |  42
-                  ...                 event_2_2 |  12:30  |  42
-    event_1_2 |  13:00  |  42         event_2_3 |  13:00  |  42
-                  ...                               ...
-
-`ASOF JOIN` can take the timestamp of a user event from `table_1` and find an event in `table_2` where the timestamp is closest to the timestamp of the event from `table_1` corresponding to the closest match condition. Equal timestamp values are the closest if available. Here, the `user_id` column can be used for joining on equality and the `ev_time` column can be used for joining on the closest match. In our example, `event_1_1` can be joined with `event_2_1` and `event_1_2` can be joined with `event_2_3`, but `event_2_2` can't be joined.
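A sketch of that pairing as an explicit `ON` query; the column types are assumed and the aliases are only for readability:

``` sql
-- For each event in table_1, picks the table_2 event for the same user
-- whose ev_time is the latest one not exceeding the event's ev_time.
SELECT t1.event, t1.ev_time, t2.event AS matched_event
FROM table_1 AS t1
ASOF LEFT JOIN table_2 AS t2
ON t1.user_id = t2.user_id AND t2.ev_time <= t1.ev_time
```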
-
-!!! note "Note"
-    `ASOF` join is **not** supported in the [Join](../../../engines/table-engines/special/join.md) table engine.
-
-## Distributed JOIN {#global-join}
-
-There are two ways to execute a join involving distributed tables:
-
-- When using a normal `JOIN`, the query is sent to remote servers. Subqueries are run on each of them in order to make the right table, and the join is performed with this table. In other words, the right table is formed on each server separately.
-- When using `GLOBAL ... JOIN`, first the requester server runs a subquery to calculate the right table. This temporary table is passed to each remote server, and queries are run on them using the temporary data that was transmitted.
-
-Be careful when using `GLOBAL`. For more information, see the [Distributed subqueries](../../operators/in.md#select-distributed-subqueries) section.
-
-## Usage Recommendations {#usage-recommendations}
-
-### Processing of Empty or NULL Cells {#processing-of-empty-or-null-cells}
-
-While joining tables, empty cells may appear. The [join_use_nulls](../../../operations/settings/settings.md#join_use_nulls) setting defines how ClickHouse fills these cells.
-
-If the `JOIN` keys are [Nullable](../../data-types/nullable.md) fields, the rows where at least one of the keys has the value [NULL](../../../sql-reference/syntax.md#null-literal) are not joined.
-
-### Syntax {#syntax}
-
-The columns specified in `USING` must have the same names in both subqueries, and the other columns must be named differently. You can use aliases to change the names of columns in subqueries.
-
-The `USING` clause specifies one or more columns to join, which establishes the equality of these columns. The list of columns is set without brackets. More complex join conditions are not supported.
-
-### Syntax Limitations {#syntax-limitations}
-
-For multiple `JOIN` clauses in a single `SELECT` query:
-
-- Taking all the columns via `*` is available only if tables are joined, not subqueries.
-- The `PREWHERE` clause is not available.
-
-For the `ON`, `WHERE`, and `GROUP BY` clauses:
-
-- Arbitrary expressions cannot be used in `ON`, `WHERE`, and `GROUP BY` clauses, but you can define an expression in a `SELECT` clause and then use it in these clauses via an alias.
-
-### Performance {#performance}
-
-When running a `JOIN`, there is no optimization of the order of execution in relation to other stages of the query. The join (a search in the right table) is run before filtering in `WHERE` and before aggregation.
-
-Each time a query is run with the same `JOIN`, the subquery is run again because the result is not cached. To avoid this, use the special [Join](../../../engines/table-engines/special/join.md) table engine, which is a prepared array for joining that is always in RAM.
-
-In some cases, it is more efficient to use [IN](../../operators/in.md) instead of `JOIN`.
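A hedged sketch of the `IN` alternative for the case where the right side is only needed for filtering; the `interesting_ids` table is a hypothetical stand-in:

``` sql
-- A semi-join via IN only materializes the key set,
-- instead of building a full joined result.
SELECT *
FROM test.hits
WHERE CounterID IN (SELECT CounterID FROM interesting_ids)
```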
-
-If you need a `JOIN` for joining with dimension tables (these are relatively small tables that contain dimension properties, such as names for advertising campaigns), a `JOIN` might not be very convenient due to the fact that the right table is re-accessed for every query. For such cases, there is an “external dictionaries” feature that you should use instead of `JOIN`. For more information, see the [External dictionaries](../../dictionaries/external-dictionaries/external-dicts.md) section.
-
-### Memory Limitations {#memory-limitations}
-
-By default, ClickHouse uses the [hash join](https://en.wikipedia.org/wiki/Hash_join) algorithm. ClickHouse takes the `<right_table>` and creates a hash table for it in RAM. After some threshold of memory consumption, ClickHouse falls back to the merge join algorithm.
-
-If you need to restrict the join operation memory consumption, use the following settings:
-
-- [max_rows_in_join](../../../operations/settings/query-complexity.md#settings-max_rows_in_join) — Limits number of rows in the hash table.
-- [max_bytes_in_join](../../../operations/settings/query-complexity.md#settings-max_bytes_in_join) — Limits size of the hash table.
-
-When any of these limits is reached, ClickHouse acts as the [join_overflow_mode](../../../operations/settings/query-complexity.md#settings-join_overflow_mode) setting instructs.
-
-## Examples {#examples}
-
-Example:
-
-``` sql
-SELECT
-    CounterID,
-    hits,
-    visits
-FROM
-(
-    SELECT
-        CounterID,
-        count() AS hits
-    FROM test.hits
-    GROUP BY CounterID
-) ANY LEFT JOIN
-(
-    SELECT
-        CounterID,
-        sum(Sign) AS visits
-    FROM test.visits
-    GROUP BY CounterID
-) USING CounterID
-ORDER BY hits DESC
-LIMIT 10
-```
-
-``` text
-┌─CounterID─┬───hits─┬─visits─┐
-│   1143050 │ 523264 │  13665 │
-│    731962 │ 475698 │ 102716 │
-│    722545 │ 337212 │ 108187 │
-│    722889 │ 252197 │  10547 │
-│   2237260 │ 196036 │   9522 │
-│  23057320 │ 147211 │   7689 │
-│    722818 │  90109 │  17847 │
-│     48221 │  85379 │   4652 │
-│  19762435 │  77807 │   7026 │
-│    722884 │  77492 │  11056 │
-└───────────┴────────┴────────┘
-```
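A sketch of applying the memory limits above to a single query through a `SETTINGS` clause rather than globally; the thresholds are arbitrary placeholders:

``` sql
-- Caps the hash table at ~1M right-side rows; with 'break' the join
-- input is truncated instead of the query failing ('throw').
SELECT h.CounterID, v.visits
FROM test.hits AS h
ANY LEFT JOIN
(
    SELECT CounterID, sum(Sign) AS visits
    FROM test.visits
    GROUP BY CounterID
) AS v USING CounterID
LIMIT 10
SETTINGS max_rows_in_join = 1000000, join_overflow_mode = 'break'
```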
diff --git a/docs/fr/sql-reference/statements/select/limit-by.md b/docs/fr/sql-reference/statements/select/limit-by.md
deleted file mode 100644
index 4d1bd766ef1..00000000000
--- a/docs/fr/sql-reference/statements/select/limit-by.md
+++ /dev/null
@@ -1,71 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
----
-
-# LIMIT BY Clause {#limit-by-clause}
-
-A query with the `LIMIT n BY expressions` clause selects the first `n` rows for each distinct value of `expressions`. The key for `LIMIT BY` can contain any number of [expressions](../../syntax.md#syntax-expressions).
-
-ClickHouse supports the following syntax variants:
-
-- `LIMIT [offset_value, ]n BY expressions`
-- `LIMIT n OFFSET offset_value BY expressions`
-
-During query processing, ClickHouse selects data ordered by the sorting key. The sorting key is set explicitly using an [ORDER BY](order-by.md) clause or implicitly as a property of the table engine. Then ClickHouse applies `LIMIT n BY expressions` and returns the first `n` rows for each distinct combination of `expressions`. If `OFFSET` is specified, then for each data block that belongs to a distinct combination of `expressions`, ClickHouse skips `offset_value` rows from the beginning of the block and returns a maximum of `n` rows as a result. If `offset_value` is bigger than the number of rows in the data block, ClickHouse returns zero rows from the block.
-
-!!! note "Note"
-    `LIMIT BY` is not related to [LIMIT](limit.md). They can both be used in the same query.
-
-## Examples {#examples}
-
-Sample table:
-
-``` sql
-CREATE TABLE limit_by(id Int, val Int) ENGINE = Memory;
-INSERT INTO limit_by VALUES (1, 10), (1, 11), (1, 12), (2, 20), (2, 21);
-```
-
-Queries:
-
-``` sql
-SELECT * FROM limit_by ORDER BY id, val LIMIT 2 BY id
-```
-
-``` text
-┌─id─┬─val─┐
-│  1 │  10 │
-│  1 │  11 │
-│  2 │  20 │
-│  2 │  21 │
-└────┴─────┘
-```
-
-``` sql
-SELECT * FROM limit_by ORDER BY id, val LIMIT 1, 2 BY id
-```
-
-``` text
-┌─id─┬─val─┐
-│  1 │  11 │
-│  1 │  12 │
-│  2 │  21 │
-└────┴─────┘
-```
-
-The `SELECT * FROM limit_by ORDER BY id, val LIMIT 2 OFFSET 1 BY id` query returns the same result.
-
-The following query returns the top 5 referrers for each `domain, device_type` pair with a maximum of 100 rows in total (`LIMIT n BY + LIMIT`).
-
-``` sql
-SELECT
-    domainWithoutWWW(URL) AS domain,
-    domainWithoutWWW(REFERRER_URL) AS referrer,
-    device_type,
-    count() cnt
-FROM hits
-GROUP BY domain, referrer, device_type
-ORDER BY cnt DESC
-LIMIT 5 BY domain, device_type
-LIMIT 100
-```
diff --git a/docs/fr/sql-reference/statements/select/limit.md b/docs/fr/sql-reference/statements/select/limit.md
deleted file mode 100644
index 69334c32cc9..00000000000
--- a/docs/fr/sql-reference/statements/select/limit.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
----
-
-# LIMIT Clause {#limit-clause}
-
-`LIMIT m` allows selecting the first `m` rows from the result.
-
-`LIMIT n, m` allows selecting the `m` rows from the result after skipping the first `n` rows. The `LIMIT m OFFSET n` syntax is equivalent.
-
-`n` and `m` must be non-negative integers.
-
-If there is no [ORDER BY](order-by.md) clause that explicitly sorts results, the choice of rows for the result may be arbitrary and non-deterministic.
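A quick sketch of the two equivalent offset spellings against the built-in `system.numbers` table, which generates 0, 1, 2, … in order:

``` sql
-- Both skip the first 2 rows and return the next 5 (numbers 2..6).
SELECT number FROM system.numbers LIMIT 2, 5;
SELECT number FROM system.numbers LIMIT 5 OFFSET 2;
```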
diff --git a/docs/fr/sql-reference/statements/select/order-by.md b/docs/fr/sql-reference/statements/select/order-by.md
deleted file mode 100644
index 2a4ef58d7ad..00000000000
--- a/docs/fr/sql-reference/statements/select/order-by.md
+++ /dev/null
@@ -1,72 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
----
-
-# ORDER BY Clause {#select-order-by}
-
-The `ORDER BY` clause contains a list of expressions, each of which can be attributed with a `DESC` (descending) or `ASC` (ascending) modifier that determines the sorting direction. If the direction is not specified, `ASC` is assumed, so it is usually omitted. The sorting direction applies to a single expression, not to the entire list. Example: `ORDER BY Visits DESC, SearchPhrase`
-
-Rows that have identical values for the list of sorting expressions are output in an arbitrary order, which can also be non-deterministic (different each time).
-If the ORDER BY clause is omitted, the order of the rows is also undefined, and may be non-deterministic as well.
-
-## Sorting of Special Values {#sorting-of-special-values}
-
-There are two approaches to `NaN` and `NULL` sorting order:
-
-- By default or with the `NULLS LAST` modifier: first the values, then `NaN`, then `NULL`.
-- With the `NULLS FIRST` modifier: first `NULL`, then `NaN`, then other values.
-
-### Example {#example}
-
-For the table
-
-``` text
-┌─x─┬────y─┐
-│ 1 │ ᴺᵁᴸᴸ │
-│ 2 │    2 │
-│ 1 │  nan │
-│ 2 │    2 │
-│ 3 │    4 │
-│ 5 │    6 │
-│ 6 │  nan │
-│ 7 │ ᴺᵁᴸᴸ │
-│ 6 │    7 │
-│ 8 │    9 │
-└───┴──────┘
-```
-
-Run the query `SELECT * FROM t_null_nan ORDER BY y NULLS FIRST` to get:
-
-``` text
-┌─x─┬────y─┐
-│ 1 │ ᴺᵁᴸᴸ │
-│ 7 │ ᴺᵁᴸᴸ │
-│ 1 │  nan │
-│ 6 │  nan │
-│ 2 │    2 │
-│ 2 │    2 │
-│ 3 │    4 │
-│ 5 │    6 │
-│ 6 │    7 │
-│ 8 │    9 │
-└───┴──────┘
-```
-
-When floating point numbers are sorted, NaNs are separate from the other values. Regardless of the sorting order, NaNs come at the end. In other words, for ascending sorting they are placed as if they are bigger than all the other numbers, while for descending sorting they are placed as if they are smaller than the rest.
-
-## Collation Support {#collation-support}
-
-For sorting by String values, you can specify collation (comparison). Example: `ORDER BY SearchPhrase COLLATE 'tr'` — for sorting by keyword in ascending order, using the Turkish alphabet, case-insensitive, assuming that strings are UTF-8 encoded. COLLATE can be specified or not for each expression in ORDER BY independently. If ASC or DESC is specified, COLLATE is specified after it. When using COLLATE, sorting is always case-insensitive.
-
-We only recommend using COLLATE for final sorting of a small number of rows, since sorting with COLLATE is less efficient than normal sorting by bytes.
-
-## Implementation Details {#implementation-details}
-
-Less RAM is used if a small enough [LIMIT](limit.md) is specified in addition to `ORDER BY`. Otherwise, the amount of memory spent is proportional to the volume of data for sorting. For distributed query processing, if [GROUP BY](group-by.md) is omitted, sorting is partially done on remote servers, and the results are merged on the requester server. This means that for distributed sorting, the volume of data to sort can be greater than the amount of memory on a single server.
-
-If there is not enough RAM, it is possible to perform sorting in external memory (creating temporary files on a disk). Use the setting `max_bytes_before_external_sort` for this purpose. If it is set to 0 (the default), external sorting is disabled. If it is enabled, then when the volume of data to sort reaches the specified number of bytes, the collected data is sorted and dumped into a temporary file. After all data is read, all the sorted files are merged and the results are output. Files are written to the `/var/lib/clickhouse/tmp/` directory set in the configuration (by default, but you can use the `tmp_path` parameter to change this setting).
-
-Running a query may use more memory than `max_bytes_before_external_sort`. For this reason, this setting must have a value significantly smaller than `max_memory_usage`. As an example, if your server has 128 GB of RAM and you need to run a single query, set `max_memory_usage` to 100 GB, and `max_bytes_before_external_sort` to 80 GB.
-
-External sorting works much less effectively than sorting in RAM.
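The 128 GB scenario above as a session-level sketch; the byte values are spelled out and, as with any memory tuning, would need adjusting to the actual host:

``` sql
-- ~100 GB per-query budget; spill sorts to disk past ~80 GB.
SET max_memory_usage = 100000000000;
SET max_bytes_before_external_sort = 80000000000;

SELECT * FROM big_table ORDER BY key;  -- big_table and key are placeholders
```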
diff --git a/docs/fr/sql-reference/statements/select/prewhere.md b/docs/fr/sql-reference/statements/select/prewhere.md
deleted file mode 100644
index 2c825d050f4..00000000000
--- a/docs/fr/sql-reference/statements/select/prewhere.md
+++ /dev/null
@@ -1,22 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
----
-
-# PREWHERE Clause {#prewhere-clause}
-
-Prewhere is an optimization to apply filtering more efficiently. It is enabled by default even if the `PREWHERE` clause is not specified explicitly. It works by automatically moving part of the [WHERE](where.md) condition to the prewhere stage. The role of the `PREWHERE` clause is only to control this optimization if you think that you know how to do it better than it happens by default.
-
-With the prewhere optimization, at first only the columns necessary for executing the prewhere expression are read. Then the other columns are read that are needed for running the rest of the query, but only those blocks where the prewhere expression is “true” at least for some rows. If there are a lot of blocks where the prewhere expression is “false” for all rows, and prewhere needs less columns than other parts of the query, this often allows reading a lot less data from disk for query execution.
-
-## Controlling Prewhere Manually {#controlling-prewhere-manually}
-
-The clause has the same meaning as the `WHERE` clause. The difference is in which data is read from the table. When manually controlling `PREWHERE` for filtration conditions that are used by a minority of the columns in the query, but that provide strong data filtration, this reduces the volume of data to read.
-
-A query may simultaneously specify `PREWHERE` and `WHERE`. In this case, `PREWHERE` precedes `WHERE`.
-
-If the `optimize_move_to_prewhere` setting is set to 0, heuristics to automatically move parts of expressions from `WHERE` to `PREWHERE` are disabled.
-
-## Limitations {#limitations}
-
-`PREWHERE` is only supported by tables from the `*MergeTree` family.
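A sketch of the manual form on a hypothetical `*MergeTree` table: the cheap, highly selective condition goes to `PREWHERE`, so the wide `URL` column is read only for blocks that survive it:

``` sql
SELECT URL, Title
FROM hits                  -- assumed *MergeTree table
PREWHERE CounterID = 34    -- only CounterID is read at this stage
WHERE notEmpty(URL)        -- applied to the remaining blocks
```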
diff --git a/docs/fr/sql-reference/statements/select/sample.md b/docs/fr/sql-reference/statements/select/sample.md
deleted file mode 100644
index b2ddc060a19..00000000000
--- a/docs/fr/sql-reference/statements/select/sample.md
+++ /dev/null
@@ -1,113 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
----
-
-# SAMPLE Clause {#select-sample-clause}
-
-The `SAMPLE` clause allows for approximated `SELECT` query processing.
-
-When data sampling is enabled, the query is not performed on all the data, but only on a certain fraction of the data (a sample). For example, if you need to calculate statistics for all the visits, it is enough to execute the query on 1/10 of all the visits and then multiply the result by 10.
-
-Approximated query processing can be useful in the following cases:
-
-- When you have strict timing requirements (like \<100ms) but you can't justify the cost of additional hardware resources to meet them.
-- When your raw data is not accurate, so approximation does not noticeably degrade the quality.
-- Business requirements target approximate results (for cost-effectiveness, or to market exact results to premium users).
-
-!!! note "Note"
-    You can only use sampling with the tables in the [MergeTree](../../../engines/table-engines/mergetree-family/mergetree.md) family, and only if the sampling expression was specified during table creation (see [MergeTree engine](../../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table)).
-
-The features of data sampling are listed below:
-
-- Data sampling is a deterministic mechanism. The result of the same `SELECT .. SAMPLE` query is always the same.
-- Sampling works consistently for different tables. For tables with a single sampling key, a sample with the same coefficient always selects the same subset of possible data. For example, a sample of user IDs takes rows with the same subset of all the possible user IDs from different tables. This means that you can use the sample in subqueries in the [IN](../../operators/in.md) clause. Also, you can join samples using the [JOIN](join.md) clause.
-- Sampling allows reading less data from a disk. Note that you must specify the sampling key correctly. For more information, see [Creating a MergeTree Table](../../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table).
-
-For the `SAMPLE` clause the following syntax is supported:
-
-| SAMPLE Clause Syntax | Description |
-|----------------------|-------------|
-| `SAMPLE k` | Here `k` is the number from 0 to 1. The query is executed on `k` fraction of data. For example, `SAMPLE 0.1` runs the query on 10% of data. [Read more](#select-sample-k) |
-| `SAMPLE n` | Here `n` is a sufficiently large integer. The query is executed on a sample of at least `n` rows (but not significantly more than this). For example, `SAMPLE 10000000` runs the query on a minimum of 10,000,000 rows. [Read more](#select-sample-n) |
-| `SAMPLE k OFFSET m` | Here `k` and `m` are the numbers from 0 to 1. The query is executed on a sample of `k` fraction of the data. The data used for the sample is offset by `m` fraction. [Read more](#select-sample-offset) |
-
-## SAMPLE K {#select-sample-k}
-
-Here `k` is the number from 0 to 1 (both fractional and decimal notations are supported). For example, `SAMPLE 1/2` or `SAMPLE 0.5`.
-
-In a `SAMPLE k` clause, the sample is taken from the `k` fraction of data. The example is shown below:
-
-``` sql
-SELECT
-    Title,
-    count() * 10 AS PageViews
-FROM hits_distributed
-SAMPLE 0.1
-WHERE
-    CounterID = 34
-GROUP BY Title
-ORDER BY PageViews DESC LIMIT 1000
-```
-
-In this example, the query is executed on a sample of 0.1 (10%) of the data. Values of aggregate functions are not corrected automatically, so to get an approximate result, the value of `count()` is manually multiplied by 10.
-
-## SAMPLE N {#select-sample-n}
-
-Here `n` is a sufficiently large integer. For example, `SAMPLE 10000000`.
-
-In this case, the query is executed on a sample of at least `n` rows (but not significantly more than this). For example, `SAMPLE 10000000` runs the query on a minimum of 10,000,000 rows.
-
-Since the minimum unit for data reading is one granule (its size is set by the `index_granularity` setting), it makes sense to set a sample that is much bigger than the size of the granule.
-
-When using the `SAMPLE n` clause, you do not know which relative percent of data was processed. So you do not know the coefficient the aggregate functions should be multiplied by. Use the `_sample_factor` virtual column to get the approximate result.
-
-The `_sample_factor` column contains relative coefficients that are calculated dynamically. This column is created automatically when you [create](../../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table) a table with the specified sampling key. The usage examples of the `_sample_factor` column are shown below.
-
-Let's consider the table `visits`, which contains statistics about site visits. The first example shows how to calculate the number of page views:
-
-``` sql
-SELECT sum(PageViews * _sample_factor)
-FROM visits
-SAMPLE 10000000
-```
-
-The next example shows how to calculate the total number of visits:
-
-``` sql
-SELECT sum(_sample_factor)
-FROM visits
-SAMPLE 10000000
-```
-
-The example below shows how to calculate the average session duration. Note that you do not need to use the relative coefficient to calculate average values.
-
-``` sql
-SELECT avg(Duration)
-FROM visits
-SAMPLE 10000000
-```
-
-## SAMPLE K OFFSET M {#select-sample-offset}
-
-Here `k` and `m` are numbers from 0 to 1. Examples are shown below.
-
-**Example 1**
-
-``` sql
-SAMPLE 1/10
-```
-
-In this example, the sample is 1/10th of all data:
-
-`[++------------]`
-
-**Example 2**
-
-``` sql
-SAMPLE 1/10 OFFSET 1/2
-```
-
-Here, a sample of 10% is taken from the second half of the data.
-
-`[------++------]`
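A sketch of a common use of `OFFSET`: splitting a table into disjoint halves so that two workers each process a non-overlapping 50% sample and together cover the data exactly once; the table name is illustrative:

``` sql
-- Worker 1: first half of the sampling key space.
SELECT count() FROM visits SAMPLE 1/2;
-- Worker 2: second half.
SELECT count() FROM visits SAMPLE 1/2 OFFSET 1/2;
```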
diff --git a/docs/fr/sql-reference/statements/select/union.md b/docs/fr/sql-reference/statements/select/union.md
deleted file mode 100644
index 9ae65ebcf72..00000000000
--- a/docs/fr/sql-reference/statements/select/union.md
+++ /dev/null
@@ -1,35 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
----
-
-# UNION ALL Clause {#union-clause}
-
-You can use `UNION ALL` to combine `SELECT` queries by extending their results. Example:
-
-``` sql
-SELECT CounterID, 1 AS table, toInt64(count()) AS c
-    FROM test.hits
-    GROUP BY CounterID
-
-UNION ALL
-
-SELECT CounterID, 2 AS table, sum(Sign) AS c
-    FROM test.visits
-    GROUP BY CounterID
-    HAVING c > 0
-```
-
-Result columns are matched by their index (order inside `SELECT`). If column names do not match, names for the final result are taken from the first query.
-
-Type casting is performed for unions. For example, if two queries being combined have the same field with non-`Nullable` and `Nullable` types from a compatible type, the resulting `UNION ALL` has a `Nullable` type field.
-
-Queries that are parts of `UNION ALL` can't be enclosed in round brackets. [ORDER BY](order-by.md) and [LIMIT](limit.md) are applied to separate queries, not to the final result. If you need to apply a conversion to the final result, you can put all the queries with `UNION ALL` in a subquery in the [FROM](from.md) clause.
-
-## Limitations {#limitations}
-
-Only `UNION ALL` is supported. The regular `UNION` (`UNION DISTINCT`) is not supported. If you need `UNION DISTINCT`, you can write `SELECT DISTINCT` from a subquery containing `UNION ALL`.
-
-## Implementation Details {#implementation-details}
-
-Queries that are parts of `UNION ALL` can be run simultaneously, and their results can be mixed together.
diff --git a/docs/fr/sql-reference/statements/select/where.md b/docs/fr/sql-reference/statements/select/where.md
deleted file mode 100644
index a4d7bc5e87a..00000000000
--- a/docs/fr/sql-reference/statements/select/where.md
+++ /dev/null
@@ -1,15 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
----
-
-# WHERE Clause {#select-where}
-
-The `WHERE` clause allows filtering the data that is coming from the [FROM](from.md) clause of `SELECT`.
-
-If there is a `WHERE` clause, it must contain an expression with the `UInt8` type. This is usually an expression with comparison and logical operators. Rows where this expression evaluates to 0 are excluded from further transformations or the result.
-
-The `WHERE` expression is evaluated for the ability to use indexes and partition pruning, if the underlying table engine supports that.
-
-!!! note "Note"
-    There is a filtering optimization called [prewhere](prewhere.md).
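A small sketch of the `UInt8` requirement above: the filtering expression is just a value per row, and rows where it evaluates to 0 are excluded:

``` sql
-- The comparison yields a UInt8 (0 or 1); even numbers are dropped.
SELECT number
FROM system.numbers
WHERE number % 2 = 1
LIMIT 3
```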
diff --git a/docs/fr/sql-reference/statements/select/with.md b/docs/fr/sql-reference/statements/select/with.md
deleted file mode 100644
index a42aedf460b..00000000000
--- a/docs/fr/sql-reference/statements/select/with.md
+++ /dev/null
@@ -1,80 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
----
-
-# WITH Clause {#with-clause}
-
-This section provides support for Common Table Expressions ([CTE](https://en.wikipedia.org/wiki/Hierarchical_and_recursive_queries_in_SQL)), so the results of a `WITH` clause can be used inside the `SELECT` clause.
-
-## Limitations {#limitations}
-
-1. Recursive queries are not supported.
-2. When a subquery is used inside the WITH section, its result should be a scalar with exactly one row.
-3. Expression results are not available in subqueries.
-
-## Examples {#examples}
-
-**Example 1:** Using a constant expression as a “variable”
-
-``` sql
-WITH '2019-08-01 15:23:00' as ts_upper_bound
-SELECT *
-FROM hits
-WHERE
-    EventDate = toDate(ts_upper_bound) AND
-    EventTime <= ts_upper_bound
-```
-
-**Example 2:** Evicting the sum(bytes) expression result from the SELECT clause column list
-
-``` sql
-WITH sum(bytes) as s
-SELECT
-    formatReadableSize(s),
-    table
-FROM system.parts
-GROUP BY table
-ORDER BY s
-```
-
-**Example 3:** Using results of a scalar subquery
-
-``` sql
-/* this example would return TOP 10 of most huge tables */
-WITH
-    (
-        SELECT sum(bytes)
-        FROM system.parts
-        WHERE active
-    ) AS total_disk_usage
-SELECT
-    (sum(bytes) / total_disk_usage) * 100 AS table_disk_usage,
-    table
-FROM system.parts
-GROUP BY table
-ORDER BY table_disk_usage DESC
-LIMIT 10
-```
-
-**Example 4:** Re-using an expression in a subquery
-
-As a workaround for the current limitation of expression usage in subqueries, you may duplicate it.
-
-``` sql
-WITH ['hello'] AS hello
-SELECT
-    hello,
-    *
-FROM
-(
-    WITH ['hello'] AS hello
-    SELECT hello
-)
-```
-
-``` text
-┌─hello─────┬─hello─────┐
-│ ['hello'] │ ['hello'] │
-└───────────┴───────────┘
-```
diff --git a/docs/fr/sql-reference/statements/show.md b/docs/fr/sql-reference/statements/show.md
deleted file mode 100644
index 129c6e30d1c..00000000000
--- a/docs/fr/sql-reference/statements/show.md
+++ /dev/null
@@ -1,169 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_priority: 38
-toc_title: SHOW
----
-
-# SHOW Queries {#show-queries}
-
-## SHOW CREATE TABLE {#show-create-table}
-
-``` sql
-SHOW CREATE [TEMPORARY] [TABLE|DICTIONARY] [db.]table [INTO OUTFILE filename] [FORMAT format]
-```
-
-Returns a single `String`-type ‘statement’ column, which contains a single value – the `CREATE` query used for creating the specified object.
-
-## SHOW DATABASES {#show-databases}
-
-``` sql
-SHOW DATABASES [INTO OUTFILE filename] [FORMAT format]
-```
-
-Prints a list of all databases.
-This query is identical to `SELECT name FROM system.databases [INTO OUTFILE filename] [FORMAT format]`.
-
-## SHOW PROCESSLIST {#show-processlist}
-
-``` sql
-SHOW PROCESSLIST [INTO OUTFILE filename] [FORMAT format]
-```
-
-Outputs the content of the [system.processes](../../operations/system-tables.md#system_tables-processes) table, which contains a list of queries that are being processed at the moment, except for the `SHOW PROCESSLIST` query itself.
-
-The `SELECT * FROM system.processes` query returns data about all the current queries.
-
-Tip (execute in the console):
-
-``` bash
-$ watch -n1 "clickhouse-client --query='SHOW PROCESSLIST'"
-```
-
-## SHOW TABLES {#show-tables}
-
-Displays a list of tables.
-
-``` sql
-SHOW [TEMPORARY] TABLES [{FROM | IN} <db>] [LIKE '<pattern>' | WHERE expr] [LIMIT <N>] [INTO OUTFILE <filename>] [FORMAT <format>]
-```
-
-If the `FROM` clause is not specified, the query returns the list of tables from the current database.
-
-You can get the same results as the `SHOW TABLES` query in the following way:
-
-``` sql
-SELECT name FROM system.tables WHERE database = <db> [AND name LIKE <pattern>] [LIMIT <N>] [INTO OUTFILE <filename>] [FORMAT <format>]
-```
-
-**Example**
-
-The following query selects the first two rows from the list of tables in the `system` database, whose names contain `co`.
-
-``` sql
-SHOW TABLES FROM system LIKE '%co%' LIMIT 2
-```
-
-``` text
-┌─name───────────────────────────┐
-│ aggregate_function_combinators │
-│ collations                     │
-└────────────────────────────────┘
-```
-
-## SHOW DICTIONARIES {#show-dictionaries}
-
-Displays a list of [external dictionaries](../../sql-reference/dictionaries/external-dictionaries/external-dicts.md).
-
-``` sql
-SHOW DICTIONARIES [FROM <db>] [LIKE '<pattern>'] [LIMIT <N>] [INTO OUTFILE <filename>] [FORMAT <format>]
-```
-
-If the `FROM` clause is not specified, the query returns the list of dictionaries from the current database.
-
-You can get the same results as the `SHOW DICTIONARIES` query in the following way:
-
-``` sql
-SELECT name FROM system.dictionaries WHERE database = <db> [AND name LIKE <pattern>] [LIMIT <N>] [INTO OUTFILE <filename>] [FORMAT <format>]
-```
-
-**Example**
-
-The following query selects the first two rows from the list of dictionaries in the `db` database, whose names contain `reg`.
-
-``` sql
-SHOW DICTIONARIES FROM db LIKE '%reg%' LIMIT 2
-```
-
-``` text
-┌─name─────────┐
-│ regions      │
-│ region_names │
-└──────────────┘
-```
-
-## SHOW GRANTS {#show-grants-statement}
-
-Shows privileges for a user.
-
-### Syntax {#show-grants-syntax}
-
-``` sql
-SHOW GRANTS [FOR user]
-```
-
-If user is not specified, the query returns privileges for the current user.
-
-## SHOW CREATE USER {#show-create-user-statement}
-
-Shows parameters that were used at [user creation](create.md#create-user-statement).
-
-`SHOW CREATE USER` does not output user passwords.
-
-### Syntax {#show-create-user-syntax}
-
-``` sql
-SHOW CREATE USER [name | CURRENT_USER]
-```
-
-## SHOW CREATE ROLE {#show-create-role-statement}
-
-Shows parameters that were used at [role creation](create.md#create-role-statement).
-
-### Syntax {#show-create-role-syntax}
-
-``` sql
-SHOW CREATE ROLE name
-```
-
-## SHOW CREATE ROW POLICY {#show-create-row-policy-statement}
-
-Shows parameters that were used at [row policy creation](create.md#create-row-policy-statement).
-
-### Syntax {#show-create-row-policy-syntax}
-
-``` sql
-SHOW CREATE [ROW] POLICY name ON [database.]table
-```
-
-## SHOW CREATE QUOTA {#show-create-quota-statement}
-
-Shows parameters that were used at [quota creation](create.md#create-quota-statement).
-
-### Syntax {#show-create-quota-syntax}
-
-``` sql
-SHOW CREATE QUOTA [name | CURRENT]
-```
-
-## SHOW CREATE SETTINGS PROFILE {#show-create-settings-profile-statement}
-
-Shows parameters that were used at [settings profile creation](create.md#create-settings-profile-statement).
-
-### Syntax {#show-create-settings-profile-syntax}
-
-``` sql
-SHOW CREATE [SETTINGS] PROFILE name
-```
-
-[Original article](https://clickhouse.tech/docs/en/query_language/show/)
diff --git a/docs/fr/sql-reference/statements/system.md b/docs/fr/sql-reference/statements/system.md
deleted file mode 100644
index e8c9ed85cbc..00000000000
--- a/docs/fr/sql-reference/statements/system.md
+++ /dev/null
@@ -1,113 +0,0 @@
----
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_priority: 37
-toc_title: SYSTEM
----
-
-# SYSTEM Queries {#query-language-system}
-
-- [RELOAD DICTIONARIES](#query_language-system-reload-dictionaries)
-- [RELOAD DICTIONARY](#query_language-system-reload-dictionary)
-- [DROP DNS CACHE](#query_language-system-drop-dns-cache)
-- [DROP MARK CACHE](#query_language-system-drop-mark-cache)
-- [FLUSH LOGS](#query_language-system-flush_logs)
-- [RELOAD CONFIG](#query_language-system-reload-config)
-- [SHUTDOWN](#query_language-system-shutdown)
-- [KILL](#query_language-system-kill)
-- [STOP DISTRIBUTED SENDS](#query_language-system-stop-distributed-sends)
-- [FLUSH DISTRIBUTED](#query_language-system-flush-distributed)
-- [START DISTRIBUTED SENDS](#query_language-system-start-distributed-sends)
-- [STOP MERGES](#query_language-system-stop-merges)
-- [START MERGES](#query_language-system-start-merges)
-
-## RELOAD DICTIONARIES {#query_language-system-reload-dictionaries}
-
-Reloads all dictionaries that have been successfully loaded before.
-By default, dictionaries are loaded lazily (see [dictionaries_lazy_load](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-dictionaries_lazy_load)), so instead of being loaded automatically at startup, they are initialized on first access through the dictGet function or a SELECT from tables with ENGINE = Dictionary. The `SYSTEM RELOAD DICTIONARIES` query reloads such (LOADED) dictionaries.
-Always returns `Ok.` regardless of the result of the dictionary update.
-
-## RELOAD DICTIONARY Dictionary_name {#query_language-system-reload-dictionary}
-
-Completely reloads a dictionary `dictionary_name`, regardless of the state of the dictionary (LOADED / NOT_LOADED / FAILED).
-Always returns `Ok.` regardless of the result of updating the dictionary.
-The status of the dictionary can be checked by querying the `system.dictionaries` table.
-
-``` sql
-SELECT name, status FROM system.dictionaries;
-```
-
-## DROP DNS CACHE {#query_language-system-drop-dns-cache}
-
-Resets ClickHouse's internal DNS cache. Sometimes (for old ClickHouse versions) it is necessary to use this command when changing the infrastructure (changing the IP address of another ClickHouse server or the server used by dictionaries).
-
-For more convenient (automatic) cache management, see the disable_internal_dns_cache and dns_cache_update_period parameters.
-
-## DROP MARK CACHE {#query_language-system-drop-mark-cache}
-
-Resets the mark cache. Used in development of ClickHouse and performance tests.
-
-## FLUSH LOGS {#query_language-system-flush_logs}
-
-Flushes buffers of log messages to system tables (e.g. system.query_log). Allows you to not wait 7.5 seconds when debugging.
-
-## RELOAD CONFIG {#query_language-system-reload-config}
-
-Reloads the ClickHouse configuration. Used when the configuration is stored in ZooKeeper.
-
-## SHUTDOWN {#query_language-system-shutdown}
-
-Normally shuts down ClickHouse (like `service clickhouse-server stop` / `kill {$pid_clickhouse-server}`)
-
-## KILL {#query_language-system-kill}
-
-Aborts the ClickHouse process (like `kill -9 {$pid_clickhouse-server}`)
-
-## Managing Distributed Tables {#query-language-system-distributed}
-
-ClickHouse can manage [distributed](../../engines/table-engines/special/distributed.md) tables. When a user inserts data into these tables, ClickHouse first creates a queue of the data that should be sent to cluster nodes, then asynchronously sends it. You can manage queue processing with the [STOP DISTRIBUTED SENDS](#query_language-system-stop-distributed-sends), [FLUSH DISTRIBUTED](#query_language-system-flush-distributed), and [START DISTRIBUTED SENDS](#query_language-system-start-distributed-sends) queries. You can also synchronously insert distributed data with the `insert_distributed_sync` setting.
-
-### STOP DISTRIBUTED SENDS {#query_language-system-stop-distributed-sends}
-
-Disables background data distribution when inserting data into distributed tables.
-
-``` sql
-SYSTEM STOP DISTRIBUTED SENDS [db.]<distributed_table_name>
-```
-
-### FLUSH DISTRIBUTED {#query_language-system-flush-distributed}
-
-Forces ClickHouse to send data to cluster nodes synchronously. If any nodes are unavailable, ClickHouse throws an exception and stops query execution. You can retry the query until it succeeds, which will happen when all nodes are back online.
-
-``` sql
-SYSTEM FLUSH DISTRIBUTED [db.]<distributed_table_name>
-```
-
-### START DISTRIBUTED SENDS {#query_language-system-start-distributed-sends}
-
-Enables background data distribution when inserting data into distributed tables.
-
-``` sql
-SYSTEM START DISTRIBUTED SENDS [db.]<distributed_table_name>
-```
-
-### STOP MERGES {#query_language-system-stop-merges}
-
-Provides the possibility to stop background merges for tables in the MergeTree family:
-
-``` sql
-SYSTEM STOP MERGES [[db.]merge_tree_family_table_name]
-```
note "Note" - `DETACH / ATTACH` table va commencer les fusions d'arrière-plan pour la table même dans le cas où les fusions ont été arrêtées pour toutes les tables MergeTree auparavant. - -### START MERGES {#query_language-system-start-merges} - -Offre la possibilité de démarrer des fusions en arrière-plan pour les tables de la famille MergeTree: - -``` sql -SYSTEM START MERGES [[db.]merge_tree_family_table_name] -``` - -[Article Original](https://clickhouse.tech/docs/en/query_language/system/) diff --git a/docs/fr/sql-reference/syntax.md b/docs/fr/sql-reference/syntax.md deleted file mode 100644 index b8b24c9bbb5..00000000000 --- a/docs/fr/sql-reference/syntax.md +++ /dev/null @@ -1,187 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 31 -toc_title: Syntaxe ---- - -# Syntaxe {#syntax} - -Il existe deux types d'analyseurs dans le système: L'analyseur SQL complet (un analyseur de descente récursif) et l'analyseur de format de données (un analyseur de flux rapide). -Dans tous les cas à l'exception de la `INSERT` requête, seul L'analyseur SQL complet est utilisé. -Le `INSERT` requête utilise les deux analyseurs: - -``` sql -INSERT INTO t VALUES (1, 'Hello, world'), (2, 'abc'), (3, 'def') -``` - -Le `INSERT INTO t VALUES` fragment est analysé par l'analyseur complet, et les données `(1, 'Hello, world'), (2, 'abc'), (3, 'def')` est analysé par l'analyseur de flux rapide. Vous pouvez également activer l'analyseur complet pour les données à l'aide de la [input_format_values_interpret_expressions](../operations/settings/settings.md#settings-input_format_values_interpret_expressions) paramètre. Lorsque `input_format_values_interpret_expressions = 1`, ClickHouse essaie d'abord d'analyser les valeurs avec l'analyseur de flux rapide. S'il échoue, ClickHouse essaie d'utiliser l'analyseur complet pour les données, en le traitant comme un SQL [expression](#syntax-expressions). - -Les données peuvent avoir n'importe quel format. Lorsqu'une requête est reçue, le serveur calcule pas plus que [max_query_size](../operations/settings/settings.md#settings-max_query_size) octets de la requête en RAM (par défaut, 1 Mo), et le reste est analysé en flux. -Il permet d'éviter les problèmes avec de grandes `INSERT` requête. - -Lors de l'utilisation de la `Values` format dans un `INSERT` de la requête, il peut sembler que les données sont analysées de même que les expressions dans un `SELECT` requête, mais ce n'est pas vrai. Le `Values` le format est beaucoup plus limitée. - -Le reste de cet article couvre l'analyseur complet. Pour plus d'informations sur les analyseurs de format, consultez [Format](../interfaces/formats.md) section. - -## Espace {#spaces} - -Il peut y avoir n'importe quel nombre de symboles d'espace entre les constructions syntaxiques (y compris le début et la fin d'une requête). Les symboles d'espace incluent l'espace, l'onglet, le saut de ligne, Le CR et le flux de formulaire. - -## Commentaire {#comments} - -ClickHouse prend en charge les commentaires de style SQL et de style C. -Les commentaires de style SQL commencent par `--` et continuer jusqu'à la fin de la ligne, un espace après `--` peut être omis. -C-style sont de `/*` de `*/`et peut être multiligne, les espaces ne sont pas requis non plus. - -## Mot {#syntax-keywords} - -Les mots clés sont insensibles à la casse lorsqu'ils correspondent à: - -- La norme SQL. Exemple, `SELECT`, `select` et `SeLeCt` sont toutes valides. 
-- Implementation in some popular DBMS (MySQL or Postgres). For example, `DateTime` is the same as `datetime`.
-
-Whether a data type name is case-sensitive can be checked in the `system.data_type_families` table.
-
-In contrast to standard SQL, all other keywords (including function names) are **case-sensitive**.
-
-Keywords are not reserved; they are treated as such only in the corresponding context. If you use [identifiers](#syntax-identifiers) with the same name as the keywords, enclose them in double quotes or backticks. For example, the query `SELECT "FROM" FROM table_name` is valid if the table `table_name` has a column with the name `"FROM"`.
-
-## Identifiers {#syntax-identifiers}
-
-Identifiers are:
-
-- Cluster, database, table, partition, and column names.
-- Functions.
-- Data types.
-- [Expression aliases](#syntax-expression_aliases).
-
-Identifiers can be quoted or non-quoted. The latter is preferred.
-
-Non-quoted identifiers must match the regex `^[a-zA-Z_][0-9a-zA-Z_]*$` and can not be equal to [keywords](#syntax-keywords). Examples: `x, _1, X_y__Z123_.`
-
-If you want to use identifiers the same as keywords or you want to use other symbols in identifiers, quote them using double quotes or backticks, for example, `"id"`, `` `id` ``.
-
-## Literals {#literals}
-
-There are numeric, string, compound, and `NULL` literals.
-
-### Numeric {#numeric}
-
-A numeric literal tries to be parsed:
-
-- First, as a 64-bit signed number, using the [strtoll](https://en.cppreference.com/w/cpp/string/byte/strtol) function.
-- If unsuccessful, as a 64-bit unsigned number, using the [strtoull](https://en.cppreference.com/w/cpp/string/byte/strtoul) function.
-- If unsuccessful, as a floating-point number, using the [strtod](https://en.cppreference.com/w/cpp/string/byte/strtof) function.
-- Otherwise, it returns an error.
-
-The literal value has the smallest type that the value fits in.
-For example, 1 is parsed as `UInt8`, but 256 is parsed as `UInt16`. For more information, see [Data types](../sql-reference/data-types/index.md).
-
-Examples: `1`, `18446744073709551615`, `0xDEADBEEF`, `01`, `0.1`, `1e100`, `-1e-100`, `inf`, `nan`.
-
-### String {#syntax-string-literal}
-
-Only string literals in single quotes are supported. The enclosed characters can be backslash-escaped. The following escape sequences have a corresponding special value: `\b`, `\f`, `\r`, `\n`, `\t`, `\0`, `\a`, `\v`, `\xHH`. In all other cases, escape sequences in the format `\c`, where `c` is any character, are converted to `c`. It means that you can use the sequences `\'` and `\\`. The value will have the [String](../sql-reference/data-types/string.md) type.
-
-In string literals, you need to escape at least `'` and `\`. Single quotes can be escaped with the single quote; the literals `'It\'s'` and `'It''s'` are equal.
-
-### Compound {#compound}
-
-Arrays are constructed with square brackets `[1, 2, 3]`. Tuples are constructed with round brackets `(1, 'Hello, world!', 2)`.
-Technically these are not literals, but expressions with the array creation operator and the tuple creation operator, respectively.
-An array must consist of at least one item, and a tuple must have at least two items.
-There is a separate case when tuples appear in the `IN` clause of a `SELECT` query. Query results can include tuples, but tuples can't be saved to a database (except for tables with the [Memory](../engines/table-engines/special/memory.md) engine).
-
-### NULL {#null-literal}
-
-Indicates that the value is missing.
-
-In order to store `NULL` in a table field, it must be of the [Nullable](../sql-reference/data-types/nullable.md) type.
-
-Depending on the data format (input or output), `NULL` may have a different representation. For more information, see the documentation for [data formats](../interfaces/formats.md#formats).
-
-There are many nuances to processing `NULL`. For example, if at least one of the arguments of a comparison operation is `NULL`, the result of this operation is also `NULL`. The same is true for multiplication, addition, and other operations. For more information, read the documentation for each operation.
-
-In queries, you can check `NULL` using the [IS NULL](operators/index.md#operator-is-null) and [IS NOT NULL](operators/index.md) operators and the related functions `isNull` and `isNotNull`.
-
-## Functions {#functions}
-
-Function calls are written like an identifier with a list of arguments (possibly empty) in round brackets. In contrast to standard SQL, the brackets are required, even for an empty argument list. Example: `now()`.
-There are regular and aggregate functions (see the section “Aggregate functions”). Some aggregate functions can contain two lists of arguments in brackets. Example: `quantile (0.9) (x)`. These aggregate functions are called “parametric” functions, and the arguments in the first list are called “parameters”. The syntax of aggregate functions without parameters is the same as for regular functions.
-
-## Operators {#operators}
-
-Operators are converted to their corresponding functions during query parsing, taking their priority and associativity into account.
-For example, the expression `1 + 2 * 3 + 4` is transformed to `plus(plus(1, multiply(2, 3)), 4)`.
-
-## Data Types and Database Table Engines {#data_types-and-database-table-engines}
-
-Data types and table engines in `CREATE` queries are written the same way as identifiers or functions. In other words, they may or may not contain an argument list in brackets. For more information, see the sections “Data types,” “Table engines,” and “CREATE”.
-
-## Expression Aliases {#syntax-expression_aliases}
-
-An alias is a user-defined name for an expression in a query.
-
-``` sql
-expr AS alias
-```
-
-- `AS` — The keyword for defining aliases. You can define the alias for a table name or a column name in a `SELECT` clause without using the `AS` keyword.
-
-    For example, `SELECT table_name_alias.column_name FROM table_name table_name_alias`.
-
-    In the [CAST](sql_reference/functions/type_conversion_functions.md#type_conversion_function-cast) function, the `AS` keyword has another meaning. See the description of the function.
-
-- `expr` — Any expression supported by ClickHouse.
-
-    For example, `SELECT column_name * 2 AS double FROM some_table`.
-
-- `alias` — Name for `expr`. Aliases should comply with the [identifiers](#syntax-identifiers) syntax.
Les alias doivent être conformes à la [identificateur](#syntax-identifiers) syntaxe. - - For example, `SELECT "table t".column_name FROM table_name AS "table t"`. - -### Notes sur l'Utilisation de la {#notes-on-usage} - -Les alias sont globaux pour une requête ou d'une sous-requête, vous pouvez définir un alias dans n'importe quelle partie d'une requête de toute expression. Exemple, `SELECT (1 AS n) + 2, n`. - -Les alias ne sont pas visibles dans les sous-requêtes et entre les sous-requêtes. Par exemple, lors de l'exécution de la requête `SELECT (SELECT sum(b.a) + num FROM b) - a.a AS num FROM a` Clickhouse génère l'exception `Unknown identifier: num`. - -Si un alias est défini pour les colonnes de `SELECT` la clause d'une sous-requête, ces colonnes sont visibles dans la requête externe. Exemple, `SELECT n + m FROM (SELECT 1 AS n, 2 AS m)`. - -Soyez prudent avec les Alias qui sont les mêmes que les noms de colonnes ou de tables. Considérons l'exemple suivant: - -``` sql -CREATE TABLE t -( - a Int, - b Int -) -ENGINE = TinyLog() -``` - -``` sql -SELECT - argMax(a, b), - sum(b) AS b -FROM t -``` - -``` text -Received exception from server (version 18.14.17): -Code: 184. DB::Exception: Received from localhost:9000, 127.0.0.1. DB::Exception: Aggregate function sum(b) is found inside another aggregate function in query. -``` - -Dans cet exemple, nous avons déclaré table `t` avec la colonne `b`. Ensuite, lors de la sélection des données, nous avons défini le `sum(b) AS b` alias. Comme les alias sont globaux, ClickHouse a substitué le littéral `b` dans l'expression `argMax(a, b)` avec l'expression `sum(b)`. Cette substitution a provoqué l'exception. - -## Astérisque {#asterisk} - -Dans un `SELECT` requête, un astérisque peut remplacer l'expression. Pour plus d'informations, consultez la section “SELECT”. - -## Expression {#syntax-expressions} - -Une expression est une fonction, un identifiant, un littéral, une application d'un opérateur, une expression entre parenthèses, une sous-requête ou un astérisque. Il peut également contenir un alias. -Une liste des expressions est une ou plusieurs expressions séparées par des virgules. -Les fonctions et les opérateurs, à leur tour, peuvent avoir des expressions comme arguments. - -[Article Original](https://clickhouse.tech/docs/en/sql_reference/syntax/) diff --git a/docs/fr/sql-reference/table-functions/file.md b/docs/fr/sql-reference/table-functions/file.md deleted file mode 100644 index a58821d021d..00000000000 --- a/docs/fr/sql-reference/table-functions/file.md +++ /dev/null @@ -1,121 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 37 -toc_title: fichier ---- - -# fichier {#file} - -Crée un tableau à partir d'un fichier. Cette fonction de table est similaire à [URL](url.md) et [hdfs](hdfs.md) ceux. - -``` sql -file(path, format, structure) -``` - -**Les paramètres d'entrée** - -- `path` — The relative path to the file from [user_files_path](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-user_files_path). Chemin d'accès à la prise en charge des fichiers suivant les globs en mode Lecture seule: `*`, `?`, `{abc,def}` et `{N..M}` où `N`, `M` — numbers, \``'abc', 'def'` — strings. -- `format` — The [format](../../interfaces/formats.md#formats) de le fichier. -- `structure` — Structure of the table. Format `'column1_name column1_type, column2_name column2_type, ...'`. 
- -**Valeur renvoyée** - -Une table avec la structure spécifiée pour lire ou écrire des données dans le fichier spécifié. - -**Exemple** - -Paramètre `user_files_path` et le contenu du fichier `test.csv`: - -``` bash -$ grep user_files_path /etc/clickhouse-server/config.xml - /var/lib/clickhouse/user_files/ - -$ cat /var/lib/clickhouse/user_files/test.csv - 1,2,3 - 3,2,1 - 78,43,45 -``` - -Table de`test.csv` et la sélection des deux premières lignes de ce: - -``` sql -SELECT * -FROM file('test.csv', 'CSV', 'column1 UInt32, column2 UInt32, column3 UInt32') -LIMIT 2 -``` - -``` text -┌─column1─┬─column2─┬─column3─┐ -│ 1 │ 2 │ 3 │ -│ 3 │ 2 │ 1 │ -└─────────┴─────────┴─────────┘ -``` - -``` sql --- getting the first 10 lines of a table that contains 3 columns of UInt32 type from a CSV file -SELECT * FROM file('test.csv', 'CSV', 'column1 UInt32, column2 UInt32, column3 UInt32') LIMIT 10 -``` - -**Globs dans le chemin** - -Plusieurs composants de chemin peuvent avoir des globs. Pour être traité, le fichier doit exister et correspondre à l'ensemble du modèle de chemin (pas seulement le suffixe ou le préfixe). - -- `*` — Substitutes any number of any characters except `/` y compris la chaîne vide. -- `?` — Substitutes any single character. -- `{some_string,another_string,yet_another_one}` — Substitutes any of strings `'some_string', 'another_string', 'yet_another_one'`. -- `{N..M}` — Substitutes any number in range from N to M including both borders. - -Les Constructions avec `{}` sont similaires à l' [fonction de table à distance](../../sql-reference/table-functions/remote.md)). - -**Exemple** - -1. Supposons que nous ayons plusieurs fichiers avec les chemins relatifs suivants: - -- ‘some_dir/some_file_1’ -- ‘some_dir/some_file_2’ -- ‘some_dir/some_file_3’ -- ‘another_dir/some_file_1’ -- ‘another_dir/some_file_2’ -- ‘another_dir/some_file_3’ - -1. Interroger la quantité de lignes dans ces fichiers: - - - -``` sql -SELECT count(*) -FROM file('{some,another}_dir/some_file_{1..3}', 'TSV', 'name String, value UInt32') -``` - -1. Requête de la quantité de lignes dans tous les fichiers de ces deux répertoires: - - - -``` sql -SELECT count(*) -FROM file('{some,another}_dir/*', 'TSV', 'name String, value UInt32') -``` - -!!! warning "Avertissement" - Si votre liste de fichiers contient des plages de nombres avec des zéros en tête, utilisez la construction avec des accolades pour chaque chiffre séparément ou utilisez `?`. - -**Exemple** - -Interroger les données des fichiers nommés `file000`, `file001`, … , `file999`: - -``` sql -SELECT count(*) -FROM file('big_dir/file{0..9}{0..9}{0..9}', 'CSV', 'name String, value UInt32') -``` - -## Les Colonnes Virtuelles {#virtual-columns} - -- `_path` — Path to the file. -- `_file` — Name of the file. - -**Voir Aussi** - -- [Les colonnes virtuelles](https://clickhouse.tech/docs/en/operations/table_engines/#table_engines-virtual_columns) - -[Article Original](https://clickhouse.tech/docs/en/query_language/table_functions/file/) diff --git a/docs/fr/sql-reference/table-functions/generate.md b/docs/fr/sql-reference/table-functions/generate.md deleted file mode 100644 index 1f7eeddd0e1..00000000000 --- a/docs/fr/sql-reference/table-functions/generate.md +++ /dev/null @@ -1,44 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 47 -toc_title: generateRandom ---- - -# generateRandom {#generaterandom} - -Génère des données aléatoires avec un schéma donné. 
-Permet de remplir des tables de test avec des données. -Prend en charge tous les types de données qui peuvent être stockés dans la table sauf `LowCardinality` et `AggregateFunction`. - -``` sql -generateRandom('name TypeName[, name TypeName]...', [, 'random_seed'[, 'max_string_length'[, 'max_array_length']]]); -``` - -**Paramètre** - -- `name` — Name of corresponding column. -- `TypeName` — Type of corresponding column. -- `max_array_length` — Maximum array length for all generated arrays. Defaults to `10`. -- `max_string_length` — Maximum string length for all generated strings. Defaults to `10`. -- `random_seed` — Specify random seed manually to produce stable results. If NULL — seed is randomly generated. - -**Valeur Renvoyée** - -Un objet de table avec le schéma demandé. - -## Exemple D'Utilisation {#usage-example} - -``` sql -SELECT * FROM generateRandom('a Array(Int8), d Decimal32(4), c Tuple(DateTime64(3), UUID)', 1, 10, 2) LIMIT 3; -``` - -``` text -┌─a────────┬────────────d─┬─c──────────────────────────────────────────────────────────────────┐ -│ [77] │ -124167.6723 │ ('2061-04-17 21:59:44.573','3f72f405-ec3e-13c8-44ca-66ef335f7835') │ -│ [32,110] │ -141397.7312 │ ('1979-02-09 03:43:48.526','982486d1-5a5d-a308-e525-7bd8b80ffa73') │ -│ [68] │ -67417.0770 │ ('2080-03-12 14:17:31.269','110425e5-413f-10a6-05ba-fa6b3e929f15') │ -└──────────┴──────────────┴────────────────────────────────────────────────────────────────────┘ -``` - -[Article Original](https://clickhouse.tech/docs/en/query_language/table_functions/generate/) diff --git a/docs/fr/sql-reference/table-functions/hdfs.md b/docs/fr/sql-reference/table-functions/hdfs.md deleted file mode 100644 index 51b742d8018..00000000000 --- a/docs/fr/sql-reference/table-functions/hdfs.md +++ /dev/null @@ -1,104 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 45 -toc_title: hdfs ---- - -# hdfs {#hdfs} - -Crée une table à partir de fichiers dans HDFS. Cette fonction de table est similaire à [URL](url.md) et [fichier](file.md) ceux. - -``` sql -hdfs(URI, format, structure) -``` - -**Les paramètres d'entrée** - -- `URI` — The relative URI to the file in HDFS. Path to file support following globs in readonly mode: `*`, `?`, `{abc,def}` et `{N..M}` où `N`, `M` — numbers, \``'abc', 'def'` — strings. -- `format` — The [format](../../interfaces/formats.md#formats) de le fichier. -- `structure` — Structure of the table. Format `'column1_name column1_type, column2_name column2_type, ...'`. - -**Valeur renvoyée** - -Une table avec la structure spécifiée pour lire ou écrire des données dans le fichier spécifié. - -**Exemple** - -Table de `hdfs://hdfs1:9000/test` et la sélection des deux premières lignes de ce: - -``` sql -SELECT * -FROM hdfs('hdfs://hdfs1:9000/test', 'TSV', 'column1 UInt32, column2 UInt32, column3 UInt32') -LIMIT 2 -``` - -``` text -┌─column1─┬─column2─┬─column3─┐ -│ 1 │ 2 │ 3 │ -│ 3 │ 2 │ 1 │ -└─────────┴─────────┴─────────┘ -``` - -**Globs dans le chemin** - -Plusieurs composants de chemin peuvent avoir des globs. Pour être traité, le fichier doit exister et correspondre à l'ensemble du modèle de chemin (pas seulement le suffixe ou le préfixe). - -- `*` — Substitutes any number of any characters except `/` y compris la chaîne vide. -- `?` — Substitutes any single character. -- `{some_string,another_string,yet_another_one}` — Substitutes any of strings `'some_string', 'another_string', 'yet_another_one'`. 
-- `{N..M}` — Substitutes any number in range from N to M including both borders. - -Les Constructions avec `{}` sont similaires à l' [fonction de table à distance](../../sql-reference/table-functions/remote.md)). - -**Exemple** - -1. Supposons que nous ayons plusieurs fichiers avec les URI suivants sur HDFS: - -- ‘hdfs://hdfs1:9000/some_dir/some_file_1’ -- ‘hdfs://hdfs1:9000/some_dir/some_file_2’ -- ‘hdfs://hdfs1:9000/some_dir/some_file_3’ -- ‘hdfs://hdfs1:9000/another_dir/some_file_1’ -- ‘hdfs://hdfs1:9000/another_dir/some_file_2’ -- ‘hdfs://hdfs1:9000/another_dir/some_file_3’ - -1. Interroger la quantité de lignes dans ces fichiers: - - - -``` sql -SELECT count(*) -FROM hdfs('hdfs://hdfs1:9000/{some,another}_dir/some_file_{1..3}', 'TSV', 'name String, value UInt32') -``` - -1. Requête de la quantité de lignes dans tous les fichiers de ces deux répertoires: - - - -``` sql -SELECT count(*) -FROM hdfs('hdfs://hdfs1:9000/{some,another}_dir/*', 'TSV', 'name String, value UInt32') -``` - -!!! warning "Avertissement" - Si votre liste de fichiers contient des plages de nombres avec des zéros en tête, utilisez la construction avec des accolades pour chaque chiffre séparément ou utilisez `?`. - -**Exemple** - -Interroger les données des fichiers nommés `file000`, `file001`, … , `file999`: - -``` sql -SELECT count(*) -FROM hdfs('hdfs://hdfs1:9000/big_dir/file{0..9}{0..9}{0..9}', 'CSV', 'name String, value UInt32') -``` - -## Les Colonnes Virtuelles {#virtual-columns} - -- `_path` — Path to the file. -- `_file` — Name of the file. - -**Voir Aussi** - -- [Les colonnes virtuelles](https://clickhouse.tech/docs/en/operations/table_engines/#table_engines-virtual_columns) - -[Article Original](https://clickhouse.tech/docs/en/query_language/table_functions/hdfs/) diff --git a/docs/fr/sql-reference/table-functions/index.md b/docs/fr/sql-reference/table-functions/index.md deleted file mode 100644 index 89a8200e385..00000000000 --- a/docs/fr/sql-reference/table-functions/index.md +++ /dev/null @@ -1,38 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_folder_title: Les Fonctions De Table -toc_priority: 34 -toc_title: Introduction ---- - -# Les Fonctions De Table {#table-functions} - -Les fonctions de Table sont des méthodes pour construire des tables. - -Vous pouvez utiliser les fonctions de table dans: - -- [FROM](../statements/select/from.md) la clause de la `SELECT` requête. - - The method for creating a temporary table that is available only in the current query. The table is deleted when the query finishes. - -- [Créer une TABLE en tant que \< table_function ()\>](../statements/create.md#create-table-query) requête. - - It's one of the methods of creating a table. - -!!! warning "Avertissement" - Vous ne pouvez pas utiliser les fonctions de table si [allow_ddl](../../operations/settings/permissions-for-queries.md#settings_allow_ddl) paramètre est désactivé. - -| Fonction | Description | -|-----------------------|-------------------------------------------------------------------------------------------------------------------------------------| -| [fichier](file.md) | Crée un [Fichier](../../engines/table-engines/special/file.md)-moteur de table. | -| [fusionner](merge.md) | Crée un [Fusionner](../../engines/table-engines/special/merge.md)-moteur de table. | -| [nombre](numbers.md) | Crée une table avec une seule colonne remplie de nombres entiers. 
| -| [distant](remote.md) | Vous permet d'accéder à des serveurs distants sans [Distribué](../../engines/table-engines/special/distributed.md)-moteur de table. | -| [URL](url.md) | Crée un [URL](../../engines/table-engines/special/url.md)-moteur de table. | -| [mysql](mysql.md) | Crée un [MySQL](../../engines/table-engines/integrations/mysql.md)-moteur de table. | -| [jdbc](jdbc.md) | Crée un [JDBC](../../engines/table-engines/integrations/jdbc.md)-moteur de table. | -| [ODBC](odbc.md) | Crée un [ODBC](../../engines/table-engines/integrations/odbc.md)-moteur de table. | -| [hdfs](hdfs.md) | Crée un [HDFS](../../engines/table-engines/integrations/hdfs.md)-moteur de table. | - -[Article Original](https://clickhouse.tech/docs/en/query_language/table_functions/) diff --git a/docs/fr/sql-reference/table-functions/input.md b/docs/fr/sql-reference/table-functions/input.md deleted file mode 100644 index 21e0eacb5c1..00000000000 --- a/docs/fr/sql-reference/table-functions/input.md +++ /dev/null @@ -1,47 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 46 -toc_title: "entr\xE9e" ---- - -# entrée {#input} - -`input(structure)` - fonction de table qui permet effectivement convertir et insérer des données envoyées à la -serveur avec une structure donnée à la table avec une autre structure. - -`structure` - structure de données envoyées au serveur dans le format suivant `'column1_name column1_type, column2_name column2_type, ...'`. -Exemple, `'id UInt32, name String'`. - -Cette fonction peut être utilisée uniquement dans `INSERT SELECT` requête et une seule fois mais se comporte autrement comme une fonction de table ordinaire -(par exemple, il peut être utilisé dans la sous-requête, etc.). - -Les données peuvent être envoyées de quelque manière que ce soit comme pour ordinaire `INSERT` requête et passé dans tout disponible [format](../../interfaces/formats.md#formats) -qui doit être spécifié à la fin de la requête (contrairement à l'ordinaire `INSERT SELECT`). - -La caractéristique principale de cette fonction est que lorsque le serveur reçoit des données du client il les convertit simultanément -selon la liste des expressions dans le `SELECT` clause et insère dans la table cible. Table temporaire -avec toutes les données transférées n'est pas créé. - -**Exemple** - -- Laissez le `test` le tableau a la structure suivante `(a String, b String)` - et les données `data.csv` a une structure différente `(col1 String, col2 Date, col3 Int32)`. 
Requête pour insérer - les données de l' `data.csv` dans le `test` table avec conversion simultanée ressemble à ceci: - - - -``` bash -$ cat data.csv | clickhouse-client --query="INSERT INTO test SELECT lower(col1), col3 * col3 FROM input('col1 String, col2 Date, col3 Int32') FORMAT CSV"; -``` - -- Si `data.csv` contient les données de la même structure `test_structure` comme la table `test` puis ces deux requêtes sont égales: - - - -``` bash -$ cat data.csv | clickhouse-client --query="INSERT INTO test FORMAT CSV" -$ cat data.csv | clickhouse-client --query="INSERT INTO test SELECT * FROM input('test_structure') FORMAT CSV" -``` - -[Article Original](https://clickhouse.tech/docs/en/query_language/table_functions/input/) diff --git a/docs/fr/sql-reference/table-functions/jdbc.md b/docs/fr/sql-reference/table-functions/jdbc.md deleted file mode 100644 index 76dea0e0930..00000000000 --- a/docs/fr/sql-reference/table-functions/jdbc.md +++ /dev/null @@ -1,29 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 43 -toc_title: jdbc ---- - -# jdbc {#table-function-jdbc} - -`jdbc(jdbc_connection_uri, schema, table)` - retourne la table qui est connectée via le pilote JDBC. - -Ce tableau fonction nécessite séparé `clickhouse-jdbc-bridge` programme en cours d'exécution. -Il prend en charge les types Nullable (basé sur DDL de la table distante qui est interrogée). - -**Exemple** - -``` sql -SELECT * FROM jdbc('jdbc:mysql://localhost:3306/?user=root&password=root', 'schema', 'table') -``` - -``` sql -SELECT * FROM jdbc('mysql://localhost:3306/?user=root&password=root', 'schema', 'table') -``` - -``` sql -SELECT * FROM jdbc('datasource://mysql-local', 'schema', 'table') -``` - -[Article Original](https://clickhouse.tech/docs/en/query_language/table_functions/jdbc/) diff --git a/docs/fr/sql-reference/table-functions/merge.md b/docs/fr/sql-reference/table-functions/merge.md deleted file mode 100644 index 1ec264b06bd..00000000000 --- a/docs/fr/sql-reference/table-functions/merge.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 38 -toc_title: fusionner ---- - -# fusionner {#merge} - -`merge(db_name, 'tables_regexp')` – Creates a temporary Merge table. For more information, see the section “Table engines, Merge”. - -La structure de la table est tirée de la première table rencontrée qui correspond à l'expression régulière. - -[Article Original](https://clickhouse.tech/docs/en/query_language/table_functions/merge/) diff --git a/docs/fr/sql-reference/table-functions/mysql.md b/docs/fr/sql-reference/table-functions/mysql.md deleted file mode 100644 index 295456914f0..00000000000 --- a/docs/fr/sql-reference/table-functions/mysql.md +++ /dev/null @@ -1,86 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 42 -toc_title: mysql ---- - -# mysql {#mysql} - -Permettre `SELECT` requêtes à effectuer sur des données stockées sur un serveur MySQL distant. - -``` sql -mysql('host:port', 'database', 'table', 'user', 'password'[, replace_query, 'on_duplicate_clause']); -``` - -**Paramètre** - -- `host:port` — MySQL server address. - -- `database` — Remote database name. - -- `table` — Remote table name. - -- `user` — MySQL user. - -- `password` — User password. - -- `replace_query` — Flag that converts `INSERT INTO` les requêtes de `REPLACE INTO`. Si `replace_query=1` la requête est remplacé. 
- -- `on_duplicate_clause` — The `ON DUPLICATE KEY on_duplicate_clause` expression qui est ajoutée à la `INSERT` requête. - - Example: `INSERT INTO t (c1,c2) VALUES ('a', 2) ON DUPLICATE KEY UPDATE c2 = c2 + 1`, where `on_duplicate_clause` is `UPDATE c2 = c2 + 1`. See the MySQL documentation to find which `on_duplicate_clause` you can use with the `ON DUPLICATE KEY` clause. - - To specify `on_duplicate_clause` you need to pass `0` to the `replace_query` parameter. If you simultaneously pass `replace_query = 1` and `on_duplicate_clause`, ClickHouse generates an exception. - -Simple `WHERE` des clauses telles que `=, !=, >, >=, <, <=` sont actuellement exécutés sur le serveur MySQL. - -Le reste des conditions et le `LIMIT` les contraintes d'échantillonnage sont exécutées dans ClickHouse uniquement après la fin de la requête à MySQL. - -**Valeur Renvoyée** - -Un objet table avec les mêmes colonnes que la table MySQL d'origine. - -## Exemple D'Utilisation {#usage-example} - -Table dans MySQL: - -``` text -mysql> CREATE TABLE `test`.`test` ( - -> `int_id` INT NOT NULL AUTO_INCREMENT, - -> `int_nullable` INT NULL DEFAULT NULL, - -> `float` FLOAT NOT NULL, - -> `float_nullable` FLOAT NULL DEFAULT NULL, - -> PRIMARY KEY (`int_id`)); -Query OK, 0 rows affected (0,09 sec) - -mysql> insert into test (`int_id`, `float`) VALUES (1,2); -Query OK, 1 row affected (0,00 sec) - -mysql> select * from test; -+------+----------+-----+----------+ -| int_id | int_nullable | float | float_nullable | -+------+----------+-----+----------+ -| 1 | NULL | 2 | NULL | -+------+----------+-----+----------+ -1 row in set (0,00 sec) -``` - -Sélection des données de ClickHouse: - -``` sql -SELECT * FROM mysql('localhost:3306', 'test', 'test', 'bayonet', '123') -``` - -``` text -┌─int_id─┬─int_nullable─┬─float─┬─float_nullable─┐ -│ 1 │ ᴺᵁᴸᴸ │ 2 │ ᴺᵁᴸᴸ │ -└────────┴──────────────┴───────┴────────────────┘ -``` - -## Voir Aussi {#see-also} - -- [Le ‘MySQL’ tableau moteur](../../engines/table-engines/integrations/mysql.md) -- [Utilisation de MySQL comme source de dictionnaire externe](../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md#dicts-external_dicts_dict_sources-mysql) - -[Article Original](https://clickhouse.tech/docs/en/query_language/table_functions/mysql/) diff --git a/docs/fr/sql-reference/table-functions/numbers.md b/docs/fr/sql-reference/table-functions/numbers.md deleted file mode 100644 index 50a5ad61002..00000000000 --- a/docs/fr/sql-reference/table-functions/numbers.md +++ /dev/null @@ -1,30 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 39 -toc_title: nombre ---- - -# nombre {#numbers} - -`numbers(N)` – Returns a table with the single ‘number’ colonne (UInt64) qui contient des entiers de 0 à n-1. -`numbers(N, M)` - Retourne un tableau avec le seul ‘number’ colonne (UInt64) qui contient des entiers de N À (N + M-1). - -Similaire à la `system.numbers` table, il peut être utilisé pour tester et générer des valeurs successives, `numbers(N, M)` plus efficace que `system.numbers`. 
- -Les requêtes suivantes sont équivalentes: - -``` sql -SELECT * FROM numbers(10); -SELECT * FROM numbers(0, 10); -SELECT * FROM system.numbers LIMIT 10; -``` - -Exemple: - -``` sql --- Generate a sequence of dates from 2010-01-01 to 2010-12-31 -select toDate('2010-01-01') + number as d FROM numbers(365); -``` - -[Article Original](https://clickhouse.tech/docs/en/query_language/table_functions/numbers/) diff --git a/docs/fr/sql-reference/table-functions/odbc.md b/docs/fr/sql-reference/table-functions/odbc.md deleted file mode 100644 index aae636a5eb2..00000000000 --- a/docs/fr/sql-reference/table-functions/odbc.md +++ /dev/null @@ -1,108 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 44 -toc_title: ODBC ---- - -# ODBC {#table-functions-odbc} - -Renvoie la table connectée via [ODBC](https://en.wikipedia.org/wiki/Open_Database_Connectivity). - -``` sql -odbc(connection_settings, external_database, external_table) -``` - -Paramètre: - -- `connection_settings` — Name of the section with connection settings in the `odbc.ini` fichier. -- `external_database` — Name of a database in an external DBMS. -- `external_table` — Name of a table in the `external_database`. - -Pour implémenter en toute sécurité les connexions ODBC, ClickHouse utilise un programme distinct `clickhouse-odbc-bridge`. Si le pilote ODBC est chargé directement depuis `clickhouse-server`, les problèmes de pilote peuvent planter le serveur ClickHouse. Clickhouse démarre automatiquement `clickhouse-odbc-bridge` lorsque cela est nécessaire. Le programme ODBC bridge est installé à partir du même package que `clickhouse-server`. - -Les champs avec l' `NULL` les valeurs de la table externe sont converties en valeurs par défaut pour le type de données de base. Par exemple, si un champ de table MySQL distant a `INT NULL` type il est converti en 0 (la valeur par défaut pour ClickHouse `Int32` type de données). - -## Exemple D'Utilisation {#usage-example} - -**Obtenir des données de L'installation MySQL locale via ODBC** - -Cet exemple est vérifié pour Ubuntu Linux 18.04 et MySQL server 5.7. - -Assurez-vous que unixODBC et MySQL Connector sont installés. - -Par défaut (si installé à partir de paquets), ClickHouse démarre en tant qu'utilisateur `clickhouse`. Ainsi, vous devez créer et configurer cet utilisateur dans le serveur MySQL. - -``` bash -$ sudo mysql -``` - -``` sql -mysql> CREATE USER 'clickhouse'@'localhost' IDENTIFIED BY 'clickhouse'; -mysql> GRANT ALL PRIVILEGES ON *.* TO 'clickhouse'@'clickhouse' WITH GRANT OPTION; -``` - -Puis configurez la connexion dans `/etc/odbc.ini`. - -``` bash -$ cat /etc/odbc.ini -[mysqlconn] -DRIVER = /usr/local/lib/libmyodbc5w.so -SERVER = 127.0.0.1 -PORT = 3306 -DATABASE = test -USERNAME = clickhouse -PASSWORD = clickhouse -``` - -Vous pouvez vérifier la connexion en utilisant le `isql` utilitaire de l'installation unixODBC. - -``` bash -$ isql -v mysqlconn -+-------------------------+ -| Connected! | -| | -... 
-``` - -Table dans MySQL: - -``` text -mysql> CREATE TABLE `test`.`test` ( - -> `int_id` INT NOT NULL AUTO_INCREMENT, - -> `int_nullable` INT NULL DEFAULT NULL, - -> `float` FLOAT NOT NULL, - -> `float_nullable` FLOAT NULL DEFAULT NULL, - -> PRIMARY KEY (`int_id`)); -Query OK, 0 rows affected (0,09 sec) - -mysql> insert into test (`int_id`, `float`) VALUES (1,2); -Query OK, 1 row affected (0,00 sec) - -mysql> select * from test; -+------+----------+-----+----------+ -| int_id | int_nullable | float | float_nullable | -+------+----------+-----+----------+ -| 1 | NULL | 2 | NULL | -+------+----------+-----+----------+ -1 row in set (0,00 sec) -``` - -Récupération des données de la table MySQL dans ClickHouse: - -``` sql -SELECT * FROM odbc('DSN=mysqlconn', 'test', 'test') -``` - -``` text -┌─int_id─┬─int_nullable─┬─float─┬─float_nullable─┐ -│ 1 │ 0 │ 2 │ 0 │ -└────────┴──────────────┴───────┴────────────────┘ -``` - -## Voir Aussi {#see-also} - -- [Dictionnaires externes ODBC](../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md#dicts-external_dicts_dict_sources-odbc) -- [Moteur de table ODBC](../../engines/table-engines/integrations/odbc.md). - -[Article Original](https://clickhouse.tech/docs/en/query_language/table_functions/jdbc/) diff --git a/docs/fr/sql-reference/table-functions/remote.md b/docs/fr/sql-reference/table-functions/remote.md deleted file mode 100644 index 380a9986116..00000000000 --- a/docs/fr/sql-reference/table-functions/remote.md +++ /dev/null @@ -1,85 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 40 -toc_title: distant ---- - -# à distance, remoteSecure {#remote-remotesecure} - -Vous permet d'accéder à des serveurs distants sans `Distributed` table. - -Signature: - -``` sql -remote('addresses_expr', db, table[, 'user'[, 'password']]) -remote('addresses_expr', db.table[, 'user'[, 'password']]) -remoteSecure('addresses_expr', db, table[, 'user'[, 'password']]) -remoteSecure('addresses_expr', db.table[, 'user'[, 'password']]) -``` - -`addresses_expr` – An expression that generates addresses of remote servers. This may be just one server address. The server address is `host:port` ou juste `host`. L'hôte peut être spécifié comme nom de serveur ou l'adresse IPv4 ou IPv6. Une adresse IPv6 est indiquée entre crochets. Le port est le port TCP sur le serveur distant. Si le port est omis, il utilise `tcp_port` à partir du fichier de configuration du serveur (par défaut, 9000). - -!!! important "Important" - Le port est requis pour une adresse IPv6. - -Exemple: - -``` text -example01-01-1 -example01-01-1:9000 -localhost -127.0.0.1 -[::]:9000 -[2a02:6b8:0:1111::11]:9000 -``` - -Plusieurs adresses séparées par des virgules. Dans ce cas, ClickHouse utilisera le traitement distribué, donc il enverra la requête à toutes les adresses spécifiées (comme les fragments avec des données différentes). - -Exemple: - -``` text -example01-01-1,example01-02-1 -``` - -Une partie de l'expression peut être spécifiée entre crochets. L'exemple précédent peut être écrite comme suit: - -``` text -example01-0{1,2}-1 -``` - -Les accolades peuvent contenir une plage de Nombres séparés par deux points (entiers non négatifs). Dans ce cas, la gamme est étendue à un ensemble de valeurs qui génèrent fragment d'adresses. Si le premier nombre commence par zéro, les valeurs sont formées avec le même alignement zéro. 
L'exemple précédent peut être écrite comme suit: - -``` text -example01-{01..02}-1 -``` - -Si vous avez plusieurs paires d'accolades, il génère le produit direct des ensembles correspondants. - -Les adresses et les parties d'adresses entre crochets peuvent être séparées par le symbole de tuyau (\|). Dans ce cas, les ensembles correspondants de adresses sont interprétés comme des répliques, et la requête sera envoyée à la première sain réplique. Cependant, les répliques sont itérées dans l'ordre actuellement défini dans [équilibrage](../../operations/settings/settings.md) paramètre. - -Exemple: - -``` text -example01-{01..02}-{1|2} -``` - -Cet exemple spécifie deux fragments qui ont chacun deux répliques. - -Le nombre d'adresses générées est limitée par une constante. En ce moment, c'est 1000 adresses. - -À l'aide de la `remote` la fonction de table est moins optimale que la création d'un `Distributed` table, car dans ce cas, la connexion au serveur est rétablie pour chaque requête. En outre, si des noms d'hôte, les noms sont résolus, et les erreurs ne sont pas comptés lors de travail avec diverses répliques. Lors du traitement d'un grand nombre de requêtes, créez toujours `Distributed` table à l'avance, et ne pas utiliser la `remote` table de fonction. - -Le `remote` table de fonction peut être utile dans les cas suivants: - -- Accès à un serveur spécifique pour la comparaison de données, le débogage et les tests. -- Requêtes entre différents clusters ClickHouse à des fins de recherche. -- Demandes distribuées peu fréquentes qui sont faites manuellement. -- Distribué demandes où l'ensemble des serveurs est redéfinie à chaque fois. - -Si l'utilisateur n'est pas spécifié, `default` est utilisée. -Si le mot de passe n'est spécifié, un mot de passe vide est utilisé. - -`remoteSecure` - la même chose que `remote` but with secured connection. Default port — [tcp_port_secure](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-tcp_port_secure) de config ou 9440. - -[Article Original](https://clickhouse.tech/docs/en/query_language/table_functions/remote/) diff --git a/docs/fr/sql-reference/table-functions/url.md b/docs/fr/sql-reference/table-functions/url.md deleted file mode 100644 index 1df5cf55526..00000000000 --- a/docs/fr/sql-reference/table-functions/url.md +++ /dev/null @@ -1,26 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 41 -toc_title: URL ---- - -# URL {#url} - -`url(URL, format, structure)` - retourne une table créée à partir du `URL` avec le -`format` et `structure`. - -URL-adresse du serveur HTTP ou HTTPS, qui peut accepter `GET` et/ou `POST` demande. - -format - [format](../../interfaces/formats.md#formats) des données. - -structure - structure de table dans `'UserID UInt64, Name String'` format. Détermine les noms et les types de colonnes. - -**Exemple** - -``` sql --- getting the first 3 lines of a table that contains columns of String and UInt32 type from HTTP-server which answers in CSV format. 
-SELECT * FROM url('http://127.0.0.1:12345/', CSV, 'column1 String, column2 UInt32') LIMIT 3 -``` - -[Article Original](https://clickhouse.tech/docs/en/query_language/table_functions/url/) diff --git a/docs/fr/whats-new/changelog/2017.md b/docs/fr/whats-new/changelog/2017.md deleted file mode 120000 index d581cbbb422..00000000000 --- a/docs/fr/whats-new/changelog/2017.md +++ /dev/null @@ -1 +0,0 @@ -../../../en/whats-new/changelog/2017.md \ No newline at end of file diff --git a/docs/fr/whats-new/changelog/2018.md b/docs/fr/whats-new/changelog/2018.md deleted file mode 120000 index 22874fcae85..00000000000 --- a/docs/fr/whats-new/changelog/2018.md +++ /dev/null @@ -1 +0,0 @@ -../../../en/whats-new/changelog/2018.md \ No newline at end of file diff --git a/docs/fr/whats-new/changelog/2019.md b/docs/fr/whats-new/changelog/2019.md deleted file mode 120000 index 0f3f095f8a1..00000000000 --- a/docs/fr/whats-new/changelog/2019.md +++ /dev/null @@ -1 +0,0 @@ -../../../en/whats-new/changelog/2019.md \ No newline at end of file diff --git a/docs/fr/whats-new/changelog/index.md b/docs/fr/whats-new/changelog/index.md deleted file mode 120000 index 5461b93ec8c..00000000000 --- a/docs/fr/whats-new/changelog/index.md +++ /dev/null @@ -1 +0,0 @@ -../../../en/whats-new/changelog/index.md \ No newline at end of file diff --git a/docs/fr/whats-new/index.md b/docs/fr/whats-new/index.md deleted file mode 100644 index 51a77da8ef4..00000000000 --- a/docs/fr/whats-new/index.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_folder_title: Ce qui est Nouveau -toc_priority: 72 ---- - - diff --git a/docs/fr/whats-new/roadmap.md b/docs/fr/whats-new/roadmap.md deleted file mode 100644 index 87d64208f67..00000000000 --- a/docs/fr/whats-new/roadmap.md +++ /dev/null @@ -1,19 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 74 -toc_title: Feuille de route ---- - -# Feuille de route {#roadmap} - -## Q1 2020 {#q1-2020} - -- Contrôle d'accès par rôle - -## Q2 2020 {#q2-2020} - -- Intégration avec les services d'authentification externes -- Pools de ressources pour une répartition plus précise de la capacité du cluster entre les utilisateurs - -{## [Article Original](https://clickhouse.tech/docs/en/roadmap/) ##} diff --git a/docs/fr/whats-new/security-changelog.md b/docs/fr/whats-new/security-changelog.md deleted file mode 100644 index 6046ef96bb2..00000000000 --- a/docs/fr/whats-new/security-changelog.md +++ /dev/null @@ -1,76 +0,0 @@ ---- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_priority: 76 -toc_title: "S\xE9curit\xE9 Changelog" ---- - -## Correction dans la version 19.14.3.3 de ClickHouse, 2019-09-10 {#fixed-in-clickhouse-release-19-14-3-3-2019-09-10} - -### CVE-2019-15024 {#cve-2019-15024} - -Аn attacker that has write access to ZooKeeper and who ican run a custom server available from the network where ClickHouse runs, can create a custom-built malicious server that will act as a ClickHouse replica and register it in ZooKeeper. When another replica will fetch data part from the malicious replica, it can force clickhouse-server to write to arbitrary path on filesystem. - -Crédits: Eldar Zaitov de L'équipe de sécurité de L'Information Yandex - -### CVE-2019-16535 {#cve-2019-16535} - -Аn OOB read, OOB write and integer underflow in decompression algorithms can be used to achieve RCE or DoS via native protocol. 
- -Crédits: Eldar Zaitov de L'équipe de sécurité de L'Information Yandex - -### CVE-2019-16536 {#cve-2019-16536} - -Le débordement de pile menant à DoS peut être déclenché par un client authentifié malveillant. - -Crédits: Eldar Zaitov de L'équipe de sécurité de L'Information Yandex - -## Correction de la version 19.13.6.1 de ClickHouse, 2019-09-20 {#fixed-in-clickhouse-release-19-13-6-1-2019-09-20} - -### CVE-2019-18657 {#cve-2019-18657} - -Fonction de Table `url` la vulnérabilité avait-elle permis à l'attaquant d'injecter des en-têtes HTTP arbitraires dans la requête. - -Crédit: [Nikita Tikhomirov](https://github.com/NSTikhomirov) - -## Correction dans la version ClickHouse 18.12.13, 2018-09-10 {#fixed-in-clickhouse-release-18-12-13-2018-09-10} - -### CVE-2018-14672 {#cve-2018-14672} - -Les fonctions de chargement des modèles CatBoost permettaient de parcourir les chemins et de lire des fichiers arbitraires via des messages d'erreur. - -Crédits: Andrey Krasichkov de L'équipe de sécurité de L'Information Yandex - -## Correction dans la version 18.10.3 de ClickHouse, 2018-08-13 {#fixed-in-clickhouse-release-18-10-3-2018-08-13} - -### CVE-2018-14671 {#cve-2018-14671} - -unixODBC a permis de charger des objets partagés arbitraires à partir du système de fichiers, ce qui a conduit à une vulnérabilité D'exécution de Code À Distance. - -Crédits: Andrey Krasichkov et Evgeny Sidorov de Yandex Information Security Team - -## Correction dans la version 1.1.54388 de ClickHouse, 2018-06-28 {#fixed-in-clickhouse-release-1-1-54388-2018-06-28} - -### CVE-2018-14668 {#cve-2018-14668} - -“remote” la fonction de table a permis des symboles arbitraires dans “user”, “password” et “default_database” champs qui ont conduit à des attaques de falsification de requêtes inter-protocoles. - -Crédits: Andrey Krasichkov de L'équipe de sécurité de L'Information Yandex - -## Correction dans la version 1.1.54390 de ClickHouse, 2018-07-06 {#fixed-in-clickhouse-release-1-1-54390-2018-07-06} - -### CVE-2018-14669 {#cve-2018-14669} - -Clickhouse client MySQL avait “LOAD DATA LOCAL INFILE” fonctionnalité activée permettant à une base de données MySQL malveillante de lire des fichiers arbitraires à partir du serveur clickhouse connecté. - -Crédits: Andrey Krasichkov et Evgeny Sidorov de Yandex Information Security Team - -## Correction dans la version 1.1.54131 de ClickHouse, 2017-01-10 {#fixed-in-clickhouse-release-1-1-54131-2017-01-10} - -### CVE-2018-14670 {#cve-2018-14670} - -Configuration incorrecte dans le paquet deb pourrait conduire à l'utilisation non autorisée de la base de données. 
-
-Crédits: National Cyber Security Centre (NCSC)
-
-{## [Article Original](https://clickhouse.tech/docs/en/security_changelog/) ##}
diff --git a/docs/ja/development/build.md b/docs/ja/development/build.md
index e44ba45485e..191fa665ccd 100644
--- a/docs/ja/development/build.md
+++ b/docs/ja/development/build.md
@@ -19,28 +19,17 @@ $ sudo apt-get install git cmake python ninja-build
 
 古いシステムではcmakeの代わりにcmake3。
 
-## GCC9のインストール {#install-gcc-10}
+## Clang 11 のインストール
 
-これを行うにはいくつかの方法があります。
+On Ubuntu/Debian you can use the automatic installation script (check [official webpage](https://apt.llvm.org/))
 
-### PPAパッケージからインストール {#install-from-a-ppa-package}
-
-``` bash
-$ sudo apt-get install software-properties-common
-$ sudo apt-add-repository ppa:ubuntu-toolchain-r/test
-$ sudo apt-get update
-$ sudo apt-get install gcc-10 g++-10
+```bash
+sudo bash -c "$(wget -O - https://apt.llvm.org/llvm.sh)"
 ```
 
-### ソースからインスト {#install-from-sources}
-
-見て [utils/ci/build-gcc-from-sources.sh](https://github.com/ClickHouse/ClickHouse/blob/master/utils/ci/build-gcc-from-sources.sh)
-
-## ビルドにGCC9を使用する {#use-gcc-10-for-builds}
-
 ``` bash
-$ export CC=gcc-10
-$ export CXX=g++-10
+$ export CC=clang
+$ export CXX=clang++
 ```
 
 ## ClickHouse ソースのチェックアウト {#checkout-clickhouse-sources}
 
@@ -76,7 +65,7 @@ $ cd ..
 
 - Git(ソースをチェックアウトするためにのみ使用され、ビルドには必要ありません)
 - CMake3.10以降
 - Ninja(推奨)または Make
-- C++コンパイラ:gcc9またはclang8以降
+- C++コンパイラ:clang11以降
 - リンカ:lldまたはgold(古典的なGNU ldは動作しません)
 - Python(LLVMビルド内でのみ使用され、オプションです)
 
diff --git a/docs/ja/development/developer-instruction.md b/docs/ja/development/developer-instruction.md
index ccc3a177d1f..d7e5217b3b6 100644
--- a/docs/ja/development/developer-instruction.md
+++ b/docs/ja/development/developer-instruction.md
@@ -133,19 +133,19 @@ ArchまたはGentooを使用する場合は、おそらくCMakeのインスト
 
 ClickHouseはビルドに複数の外部ライブラリを使用します。 それらのすべては、サブモジュールにあるソースからClickHouseと一緒に構築されているので、別々にインストールする必要はありません。 リストは次の場所で確認できます `contrib`.
 
-# C++コンパイラ {#c-compiler}
+## C++ Compiler {#c-compiler}
 
-ClickHouseのビルドには、バージョン9以降のGCCとClangバージョン8以降のコンパイラがサポートされます。
+Clang compilers starting from version 11 are supported for building ClickHouse.
 
-公式のYandexビルドは、わずかに優れたパフォーマンスのマシンコードを生成するため、GCCを使用しています(私たちのベンチマークに応じて最大数パーセントの そしてClangは開発のために通常より便利です。 が、当社の継続的インテグレーション(CI)プラットフォームを運チェックのための十数の組み合わせとなります。
+Clang should be used instead of gcc. Our continuous integration (CI) platform, though, runs checks for about a dozen build combinations.
 
-UBUNTUにGCCをインストールするには: `sudo apt install gcc g++`
+On Ubuntu/Debian you can use the automatic installation script (check [official webpage](https://apt.llvm.org/))
 
-Gccのバージョンを確認する: `gcc --version`. の場合は下記9その指示に従う。https://clickhouse.tech/docs/ja/development/build/#install-gcc-10.
+```bash
+sudo bash -c "$(wget -O - https://apt.llvm.org/llvm.sh)"
+```
 
-Mac OS XのビルドはClangでのみサポートされています。 ちょうど実行 `brew install llvm`
-
-Clangを使用する場合は、次のものもインストールできます `libc++` と `lld` あなたがそれが何であるか知っていれば。 を使用して `ccache` また、推奨されます。
+Mac OS X build is also supported. Just run `brew install llvm`
 
 # ビルドプロセス {#the-building-process}
 
 ClickHouseを構築する準備ができたので、別のディレクトリを作成します。
 
 `build` ディレクトリ内で cmake を実行してビルドを構成します。 最初の実行の前に、コンパイラ(この例では clang)を指定する環境変数を定義する必要があります。
 
-Linux:
-
-    export CC=gcc-10 CXX=g++-10
-    cmake ..
-
-Mac OS X:
-
     export CC=clang CXX=clang++
     cmake ..
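To recap the flow above end to end, a minimal out-of-source build might look like the sketch below. It assumes clang and ninja are already installed as described, and that `clickhouse` works as the ninja target for the main binary:

```bash
# Sketch of a full build, assuming clang and ninja are on the PATH
git clone --recursive https://github.com/ClickHouse/ClickHouse.git
cd ClickHouse
mkdir build && cd build
export CC=clang CXX=clang++
cmake ..
ninja clickhouse   # or plain `ninja` to build every target
```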
diff --git a/docs/ja/getting-started/example-datasets/ontime.md b/docs/ja/getting-started/example-datasets/ontime.md index bd049e8caad..d12d8a36069 100644 --- a/docs/ja/getting-started/example-datasets/ontime.md +++ b/docs/ja/getting-started/example-datasets/ontime.md @@ -29,126 +29,127 @@ done テーブルの作成: ``` sql -CREATE TABLE `ontime` ( - `Year` UInt16, - `Quarter` UInt8, - `Month` UInt8, - `DayofMonth` UInt8, - `DayOfWeek` UInt8, - `FlightDate` Date, - `UniqueCarrier` FixedString(7), - `AirlineID` Int32, - `Carrier` FixedString(2), - `TailNum` String, - `FlightNum` String, - `OriginAirportID` Int32, - `OriginAirportSeqID` Int32, - `OriginCityMarketID` Int32, - `Origin` FixedString(5), - `OriginCityName` String, - `OriginState` FixedString(2), - `OriginStateFips` String, - `OriginStateName` String, - `OriginWac` Int32, - `DestAirportID` Int32, - `DestAirportSeqID` Int32, - `DestCityMarketID` Int32, - `Dest` FixedString(5), - `DestCityName` String, - `DestState` FixedString(2), - `DestStateFips` String, - `DestStateName` String, - `DestWac` Int32, - `CRSDepTime` Int32, - `DepTime` Int32, - `DepDelay` Int32, - `DepDelayMinutes` Int32, - `DepDel15` Int32, - `DepartureDelayGroups` String, - `DepTimeBlk` String, - `TaxiOut` Int32, - `WheelsOff` Int32, - `WheelsOn` Int32, - `TaxiIn` Int32, - `CRSArrTime` Int32, - `ArrTime` Int32, - `ArrDelay` Int32, - `ArrDelayMinutes` Int32, - `ArrDel15` Int32, - `ArrivalDelayGroups` Int32, - `ArrTimeBlk` String, - `Cancelled` UInt8, - `CancellationCode` FixedString(1), - `Diverted` UInt8, - `CRSElapsedTime` Int32, - `ActualElapsedTime` Int32, - `AirTime` Int32, - `Flights` Int32, - `Distance` Int32, - `DistanceGroup` UInt8, - `CarrierDelay` Int32, - `WeatherDelay` Int32, - `NASDelay` Int32, - `SecurityDelay` Int32, - `LateAircraftDelay` Int32, - `FirstDepTime` String, - `TotalAddGTime` String, - `LongestAddGTime` String, - `DivAirportLandings` String, - `DivReachedDest` String, - `DivActualElapsedTime` String, - `DivArrDelay` String, - `DivDistance` String, - `Div1Airport` String, - `Div1AirportID` Int32, - `Div1AirportSeqID` Int32, - `Div1WheelsOn` String, - `Div1TotalGTime` String, - `Div1LongestGTime` String, - `Div1WheelsOff` String, - `Div1TailNum` String, - `Div2Airport` String, - `Div2AirportID` Int32, - `Div2AirportSeqID` Int32, - `Div2WheelsOn` String, - `Div2TotalGTime` String, - `Div2LongestGTime` String, - `Div2WheelsOff` String, - `Div2TailNum` String, - `Div3Airport` String, - `Div3AirportID` Int32, - `Div3AirportSeqID` Int32, - `Div3WheelsOn` String, - `Div3TotalGTime` String, - `Div3LongestGTime` String, - `Div3WheelsOff` String, - `Div3TailNum` String, - `Div4Airport` String, - `Div4AirportID` Int32, - `Div4AirportSeqID` Int32, - `Div4WheelsOn` String, - `Div4TotalGTime` String, - `Div4LongestGTime` String, - `Div4WheelsOff` String, - `Div4TailNum` String, - `Div5Airport` String, - `Div5AirportID` Int32, - `Div5AirportSeqID` Int32, - `Div5WheelsOn` String, - `Div5TotalGTime` String, - `Div5LongestGTime` String, - `Div5WheelsOff` String, - `Div5TailNum` String +CREATE TABLE `ontime` +( + `Year` UInt16, + `Quarter` UInt8, + `Month` UInt8, + `DayofMonth` UInt8, + `DayOfWeek` UInt8, + `FlightDate` Date, + `Reporting_Airline` String, + `DOT_ID_Reporting_Airline` Int32, + `IATA_CODE_Reporting_Airline` String, + `Tail_Number` Int32, + `Flight_Number_Reporting_Airline` String, + `OriginAirportID` Int32, + `OriginAirportSeqID` Int32, + `OriginCityMarketID` Int32, + `Origin` FixedString(5), + `OriginCityName` String, + `OriginState` FixedString(2), + 
`OriginStateFips` String, + `OriginStateName` String, + `OriginWac` Int32, + `DestAirportID` Int32, + `DestAirportSeqID` Int32, + `DestCityMarketID` Int32, + `Dest` FixedString(5), + `DestCityName` String, + `DestState` FixedString(2), + `DestStateFips` String, + `DestStateName` String, + `DestWac` Int32, + `CRSDepTime` Int32, + `DepTime` Int32, + `DepDelay` Int32, + `DepDelayMinutes` Int32, + `DepDel15` Int32, + `DepartureDelayGroups` String, + `DepTimeBlk` String, + `TaxiOut` Int32, + `WheelsOff` Int32, + `WheelsOn` Int32, + `TaxiIn` Int32, + `CRSArrTime` Int32, + `ArrTime` Int32, + `ArrDelay` Int32, + `ArrDelayMinutes` Int32, + `ArrDel15` Int32, + `ArrivalDelayGroups` Int32, + `ArrTimeBlk` String, + `Cancelled` UInt8, + `CancellationCode` FixedString(1), + `Diverted` UInt8, + `CRSElapsedTime` Int32, + `ActualElapsedTime` Int32, + `AirTime` Nullable(Int32), + `Flights` Int32, + `Distance` Int32, + `DistanceGroup` UInt8, + `CarrierDelay` Int32, + `WeatherDelay` Int32, + `NASDelay` Int32, + `SecurityDelay` Int32, + `LateAircraftDelay` Int32, + `FirstDepTime` String, + `TotalAddGTime` String, + `LongestAddGTime` String, + `DivAirportLandings` String, + `DivReachedDest` String, + `DivActualElapsedTime` String, + `DivArrDelay` String, + `DivDistance` String, + `Div1Airport` String, + `Div1AirportID` Int32, + `Div1AirportSeqID` Int32, + `Div1WheelsOn` String, + `Div1TotalGTime` String, + `Div1LongestGTime` String, + `Div1WheelsOff` String, + `Div1TailNum` String, + `Div2Airport` String, + `Div2AirportID` Int32, + `Div2AirportSeqID` Int32, + `Div2WheelsOn` String, + `Div2TotalGTime` String, + `Div2LongestGTime` String, + `Div2WheelsOff` String, + `Div2TailNum` String, + `Div3Airport` String, + `Div3AirportID` Int32, + `Div3AirportSeqID` Int32, + `Div3WheelsOn` String, + `Div3TotalGTime` String, + `Div3LongestGTime` String, + `Div3WheelsOff` String, + `Div3TailNum` String, + `Div4Airport` String, + `Div4AirportID` Int32, + `Div4AirportSeqID` Int32, + `Div4WheelsOn` String, + `Div4TotalGTime` String, + `Div4LongestGTime` String, + `Div4WheelsOff` String, + `Div4TailNum` String, + `Div5Airport` String, + `Div5AirportID` Int32, + `Div5AirportSeqID` Int32, + `Div5WheelsOn` String, + `Div5TotalGTime` String, + `Div5LongestGTime` String, + `Div5WheelsOff` String, + `Div5TailNum` String ) ENGINE = MergeTree -PARTITION BY Year -ORDER BY (Carrier, FlightDate) -SETTINGS index_granularity = 8192; + PARTITION BY Year + ORDER BY (IATA_CODE_Reporting_Airline, FlightDate) + SETTINGS index_granularity = 8192; ``` データのロード: ``` bash -$ for i in *.zip; do echo $i; unzip -cq $i '*.csv' | sed 's/\.00//g' | clickhouse-client --host=example-perftest01j --query="INSERT INTO ontime FORMAT CSVWithNames"; done +ls -1 *.zip | xargs -I{} -P $(nproc) bash -c "echo {}; unzip -cq {} '*.csv' | sed 's/\.00//g' | clickhouse-client --input_format_with_names_use_header=0 --query='INSERT INTO ontime FORMAT CSVWithNames'" ``` ## パーティション済みデータのダウンロード {#download-of-prepared-partitions} @@ -212,10 +213,10 @@ LIMIT 10; Q4. 
2007年のキャリア別の遅延の数 ``` sql -SELECT Carrier, count(*) +SELECT IATA_CODE_Reporting_Airline AS Carrier, count(*) FROM ontime WHERE DepDelay>10 AND Year=2007 -GROUP BY Carrier +GROUP BY IATA_CODE_Reporting_Airline ORDER BY count(*) DESC; ``` @@ -226,32 +227,32 @@ SELECT Carrier, c, c2, c*100/c2 as c3 FROM ( SELECT - Carrier, + IATA_CODE_Reporting_Airline AS Carrier, count(*) AS c FROM ontime WHERE DepDelay>10 AND Year=2007 GROUP BY Carrier -) +) q JOIN ( SELECT - Carrier, + IATA_CODE_Reporting_Airline AS Carrier, count(*) AS c2 FROM ontime WHERE Year=2007 GROUP BY Carrier -) USING Carrier +) qq USING Carrier ORDER BY c3 DESC; ``` 同じクエリのより良いバージョン: ``` sql -SELECT Carrier, avg(DepDelay>10)*100 AS c3 +SELECT IATA_CODE_Reporting_Airline AS Carrier, avg(DepDelay>10)*100 AS c3 FROM ontime WHERE Year=2007 -GROUP BY Carrier +GROUP BY IATA_CODE_Reporting_Airline ORDER BY c3 DESC ``` @@ -262,29 +263,29 @@ SELECT Carrier, c, c2, c*100/c2 as c3 FROM ( SELECT - Carrier, + IATA_CODE_Reporting_Airline AS Carrier, count(*) AS c FROM ontime WHERE DepDelay>10 AND Year>=2000 AND Year<=2008 GROUP BY Carrier -) +) q JOIN ( SELECT - Carrier, + IATA_CODE_Reporting_Airline AS Carrier, count(*) AS c2 FROM ontime WHERE Year>=2000 AND Year<=2008 GROUP BY Carrier -) USING Carrier +) qq USING Carrier ORDER BY c3 DESC; ``` 同じクエリのより良いバージョン: ``` sql -SELECT Carrier, avg(DepDelay>10)*100 AS c3 +SELECT IATA_CODE_Reporting_Airline AS Carrier, avg(DepDelay>10)*100 AS c3 FROM ontime WHERE Year>=2000 AND Year<=2008 GROUP BY Carrier @@ -303,7 +304,7 @@ FROM from ontime WHERE DepDelay>10 GROUP BY Year -) +) q JOIN ( select @@ -311,7 +312,7 @@ JOIN count(*) as c2 from ontime GROUP BY Year -) USING (Year) +) qq USING (Year) ORDER BY Year; ``` @@ -346,7 +347,7 @@ Q10. ``` sql SELECT - min(Year), max(Year), Carrier, count(*) AS cnt, + min(Year), max(Year), IATA_CODE_Reporting_Airline AS Carrier, count(*) AS cnt, sum(ArrDelayMinutes>30) AS flights_delayed, round(sum(ArrDelayMinutes>30)/count(*),2) AS rate FROM ontime diff --git a/docs/ja/sql-reference/aggregate-functions/reference.md b/docs/ja/sql-reference/aggregate-functions/reference.md index 465f36179da..c66e9b54746 100644 --- a/docs/ja/sql-reference/aggregate-functions/reference.md +++ b/docs/ja/sql-reference/aggregate-functions/reference.md @@ -624,7 +624,7 @@ uniqHLL12(x[, ...]) - HyperLogLogアルゴリズムを使用して、異なる引数値の数を近似します。 - 212 5-bit cells are used. The size of the state is slightly more than 2.5 KB. The result is not very accurate (up to ~10% error) for small data sets (<10K elements). However, the result is fairly accurate for high-cardinality data sets (10K-100M), with a maximum error of ~1.6%. Starting from 100M, the estimation error increases, and the function will return very inaccurate results for data sets with extremely high cardinality (1B+ elements). + 2^12 5-bit cells are used. The size of the state is slightly more than 2.5 KB. The result is not very accurate (up to ~10% error) for small data sets (<10K elements). However, the result is fairly accurate for high-cardinality data sets (10K-100M), with a maximum error of ~1.6%. Starting from 100M, the estimation error increases, and the function will return very inaccurate results for data sets with extremely high cardinality (1B+ elements). 
- 決定的な結果を提供します(クエリ処理順序に依存しません)。
 
diff --git a/docs/ja/sql-reference/functions/bitmap-functions.md b/docs/ja/sql-reference/functions/bitmap-functions.md
index cc57e762610..de3ce938444 100644
--- a/docs/ja/sql-reference/functions/bitmap-functions.md
+++ b/docs/ja/sql-reference/functions/bitmap-functions.md
@@ -35,7 +35,7 @@ SELECT bitmapBuild([1, 2, 3, 4, 5]) AS res, toTypeName(res)
 
 ``` text
 ┌─res─┬─toTypeName(bitmapBuild([1, 2, 3, 4, 5]))─────┐
-│  │ AggregateFunction(groupBitmap, UInt8) │
+│ │ AggregateFunction(groupBitmap, UInt8) │
 └─────┴──────────────────────────────────────────────┘
 ```
 
diff --git a/docs/ja/sql-reference/functions/hash-functions.md b/docs/ja/sql-reference/functions/hash-functions.md
index d48e6846bb4..a98ae60690d 100644
--- a/docs/ja/sql-reference/functions/hash-functions.md
+++ b/docs/ja/sql-reference/functions/hash-functions.md
@@ -434,13 +434,13 @@ A [FixedString(16)](../../sql-reference/data-types/fixedstring.md) データ型
 
 **例**
 
 ``` sql
-SELECT murmurHash3_128('example_string') AS MurmurHash3, toTypeName(MurmurHash3) AS type
+SELECT hex(murmurHash3_128('example_string')) AS MurmurHash3, toTypeName(MurmurHash3) AS type;
 ```
 
 ``` text
-┌─MurmurHash3──────┬─type────────────┐
-│ 6�1�4"S5KT�~~q │ FixedString(16) │
-└──────────────────┴─────────────────┘
+┌─MurmurHash3──────────────────────┬─type───┐
+│ 368A1A311CB7342253354B548E7E7E71 │ String │
+└──────────────────────────────────┴────────┘
 ```
 
 ## xxHash32,xxHash64 {#hash-functions-xxhash32}
 
diff --git a/docs/ja/sql-reference/statements/select/index.md b/docs/ja/sql-reference/statements/select/index.md
deleted file mode 120000
index 9c649322c82..00000000000
--- a/docs/ja/sql-reference/statements/select/index.md
+++ /dev/null
@@ -1 +0,0 @@
-../../../../en/sql-reference/statements/select/index.md
\ No newline at end of file
diff --git a/docs/ja/sql-reference/statements/select/index.md b/docs/ja/sql-reference/statements/select/index.md
new file mode 100644
index 00000000000..b1a97ba1b28
--- /dev/null
+++ b/docs/ja/sql-reference/statements/select/index.md
@@ -0,0 +1,283 @@
+---
+title: SELECT Query
+toc_folder_title: SELECT
+toc_priority: 32
+toc_title: Overview
+---
+
+# SELECT Query {#select-queries-syntax}
+
+`SELECT` queries perform data retrieval. By default, the requested data is returned to the client, while in conjunction with [INSERT INTO](../../../sql-reference/statements/insert-into.md) it can be forwarded to a different table.
+
+## Syntax {#syntax}
+
+``` sql
+[WITH expr_list|(subquery)]
+SELECT [DISTINCT] expr_list
+[FROM [db.]table | (subquery) | table_function] [FINAL]
+[SAMPLE sample_coeff]
+[ARRAY JOIN ...]
+[GLOBAL] [ANY|ALL|ASOF] [INNER|LEFT|RIGHT|FULL|CROSS] [OUTER|SEMI|ANTI] JOIN (subquery)|table (ON <expr_list>)|(USING <column_list>)
+[PREWHERE expr]
+[WHERE expr]
+[GROUP BY expr_list] [WITH ROLLUP|WITH CUBE] [WITH TOTALS]
+[HAVING expr]
+[ORDER BY expr_list] [WITH FILL] [FROM expr] [TO expr] [STEP expr]
+[LIMIT [offset_value, ]n BY columns]
+[LIMIT [n, ]m] [WITH TIES]
+[SETTINGS ...]
+[UNION ...]
+[INTO OUTFILE filename]
+[FORMAT format]
+```
+
+All clauses are optional, except for the required list of expressions immediately after `SELECT` which is covered in more detail [below](#select-clause).
+
+Specifics of each optional clause are covered in separate sections, which are listed in the same order as they are executed:
+
+- [WITH clause](../../../sql-reference/statements/select/with.md)
+- [FROM clause](../../../sql-reference/statements/select/from.md)
+- [SAMPLE clause](../../../sql-reference/statements/select/sample.md)
+- [JOIN clause](../../../sql-reference/statements/select/join.md)
+- [PREWHERE clause](../../../sql-reference/statements/select/prewhere.md)
+- [WHERE clause](../../../sql-reference/statements/select/where.md)
+- [GROUP BY clause](../../../sql-reference/statements/select/group-by.md)
+- [LIMIT BY clause](../../../sql-reference/statements/select/limit-by.md)
+- [HAVING clause](../../../sql-reference/statements/select/having.md)
+- [SELECT clause](#select-clause)
+- [DISTINCT clause](../../../sql-reference/statements/select/distinct.md)
+- [LIMIT clause](../../../sql-reference/statements/select/limit.md)
+- [OFFSET clause](../../../sql-reference/statements/select/offset.md)
+- [UNION clause](../../../sql-reference/statements/select/union.md)
+- [INTO OUTFILE clause](../../../sql-reference/statements/select/into-outfile.md)
+- [FORMAT clause](../../../sql-reference/statements/select/format.md)
+
+## SELECT Clause {#select-clause}
+
+[Expressions](../../../sql-reference/syntax.md#syntax-expressions) specified in the `SELECT` clause are calculated after all the operations in the clauses described above are finished. These expressions work as if they apply to separate rows in the result. If expressions in the `SELECT` clause contain aggregate functions, then ClickHouse processes aggregate functions and expressions used as their arguments during the [GROUP BY](../../../sql-reference/statements/select/group-by.md) aggregation.
+
+If you want to include all columns in the result, use the asterisk (`*`) symbol. For example, `SELECT * FROM ...`.
+
+
+### COLUMNS expression {#columns-expression}
+
+To match some columns in the result with a [re2](https://en.wikipedia.org/wiki/RE2_(software)) regular expression, you can use the `COLUMNS` expression.
+
+``` sql
+COLUMNS('regexp')
+```
+
+For example, consider the table:
+
+``` sql
+CREATE TABLE default.col_names (aa Int8, ab Int8, bc Int8) ENGINE = TinyLog
+```
+
+The following query selects data from all the columns containing the `a` symbol in their name.
+
+``` sql
+SELECT COLUMNS('a') FROM col_names
+```
+
+``` text
+┌─aa─┬─ab─┐
+│ 1 │ 1 │
+└────┴────┘
+```
+
+The selected columns are not returned in alphabetical order.
+
+You can use multiple `COLUMNS` expressions in a query and apply functions to them.
+
+For example:
+
+``` sql
+SELECT COLUMNS('a'), COLUMNS('c'), toTypeName(COLUMNS('c')) FROM col_names
+```
+
+``` text
+┌─aa─┬─ab─┬─bc─┬─toTypeName(bc)─┐
+│ 1 │ 1 │ 1 │ Int8 │
+└────┴────┴────┴────────────────┘
+```
+
+Each column returned by the `COLUMNS` expression is passed to the function as a separate argument. You can also pass other arguments to the function if it supports them. Be careful when using functions. If a function doesn’t support the number of arguments you have passed to it, ClickHouse throws an exception.
+
+For example:
+
+``` sql
+SELECT COLUMNS('a') + COLUMNS('c') FROM col_names
+```
+
+``` text
+Received exception from server (version 19.14.1):
+Code: 42. DB::Exception: Received from localhost:9000. DB::Exception: Number of arguments for function plus doesn't match: passed 3, should be 2.
+```
+
+In this example, `COLUMNS('a')` returns two columns: `aa` and `ab`.
`COLUMNS('c')` returns the `bc` column. The `+` operator can’t be applied to 3 arguments, so ClickHouse throws an exception with the relevant message.
+
+Columns that matched the `COLUMNS` expression can have different data types. If `COLUMNS` doesn’t match any columns and is the only expression in `SELECT`, ClickHouse throws an exception.
+
+### Asterisk {#asterisk}
+
+You can put an asterisk in any part of a query instead of an expression. When the query is analyzed, the asterisk is expanded to a list of all table columns (excluding the `MATERIALIZED` and `ALIAS` columns). There are only a few cases when using an asterisk is justified:
+
+- When creating a table dump.
+- For tables containing just a few columns, such as system tables.
+- For getting information about what columns are in a table. In this case, set `LIMIT 1`. But it is better to use the `DESC TABLE` query.
+- When there is strong filtration on a small number of columns using `PREWHERE`.
+- In subqueries (since columns that aren’t needed for the external query are excluded from subqueries).
+
+In all other cases, we don’t recommend using the asterisk, since it only gives you the drawbacks of a columnar DBMS instead of the advantages.
+
+### Extreme Values {#extreme-values}
+
+In addition to results, you can also get minimum and maximum values for the result columns. To do this, set the **extremes** setting to 1. Minimums and maximums are calculated for numeric types, dates, and dates with times. For other columns, the default values are output.
+
+Two extra rows are calculated – the minimums and maximums, respectively. These extra two rows are output in `JSON*`, `TabSeparated*`, and `Pretty*` [formats](../../../interfaces/formats.md), separate from the other rows. They are not output for other formats.
+
+In `JSON*` formats, the extreme values are output in a separate ‘extremes’ field. In `TabSeparated*` formats, the row comes after the main result, and after ‘totals’ if present. It is preceded by an empty row (after the other data). In `Pretty*` formats, the row is output as a separate table after the main result, and after `totals` if present.
+
+Extreme values are calculated for rows before `LIMIT`, but after `LIMIT BY`. However, when using `LIMIT offset, size`, the rows before `offset` are included in `extremes`. In streaming requests, the result may also include a small number of rows that passed through `LIMIT`.
+
+### Notes {#notes}
+
+You can use synonyms (`AS` aliases) in any part of a query.
+
+The `GROUP BY` and `ORDER BY` clauses do not support positional arguments. This contradicts MySQL, but conforms to standard SQL. For example, `GROUP BY 1, 2` will be interpreted as grouping by constants (i.e. aggregation of all rows into one).
+
+## Implementation Details {#implementation-details}
+
+If the query omits the `DISTINCT`, `GROUP BY` and `ORDER BY` clauses and the `IN` and `JOIN` subqueries, the query will be completely stream processed, using O(1) amount of RAM. Otherwise, the query might consume a lot of RAM if the appropriate restrictions are not specified:
+
+- `max_memory_usage`
+- `max_rows_to_group_by`
+- `max_rows_to_sort`
+- `max_rows_in_distinct`
+- `max_bytes_in_distinct`
+- `max_rows_in_set`
+- `max_bytes_in_set`
+- `max_rows_in_join`
+- `max_bytes_in_join`
+- `max_bytes_before_external_sort`
+- `max_bytes_before_external_group_by`
+
+For more information, see the section “Settings”.
It is possible to use external sorting (saving temporary tables to a disk) and external aggregation.
+
+## SELECT modifiers {#select-modifiers}
+
+You can use the following modifiers in `SELECT` queries.
+
+### APPLY {#apply-modifier}
+
+Allows you to invoke a function for each row returned by an outer table expression of a query.
+
+**Syntax:**
+
+``` sql
+SELECT <expr> APPLY( <func> ) FROM [db.]table_name
+```
+
+**Example:**
+
+``` sql
+CREATE TABLE columns_transformers (i Int64, j Int16, k Int64) ENGINE = MergeTree ORDER BY (i);
+INSERT INTO columns_transformers VALUES (100, 10, 324), (120, 8, 23);
+SELECT * APPLY(sum) FROM columns_transformers;
+```
+
+```
+┌─sum(i)─┬─sum(j)─┬─sum(k)─┐
+│    220 │     18 │    347 │
+└────────┴────────┴────────┘
+```
+
+### EXCEPT {#except-modifier}
+
+Specifies the names of one or more columns to exclude from the result. All matching column names are omitted from the output.
+
+**Syntax:**
+
+``` sql
+SELECT <expr> EXCEPT ( col_name1 [, col_name2, col_name3, ...] ) FROM [db.]table_name
+```
+
+**Example:**
+
+``` sql
+SELECT * EXCEPT (i) from columns_transformers;
+```
+
+```
+┌──j─┬───k─┐
+│ 10 │ 324 │
+│  8 │  23 │
+└────┴─────┘
+```
+
+### REPLACE {#replace-modifier}
+
+Specifies one or more [expression aliases](../../../sql-reference/syntax.md#syntax-expression_aliases). Each alias must match a column name from the `SELECT *` statement. In the output column list, the column that matches the alias is replaced by the expression in that `REPLACE`.
+
+This modifier does not change the names or order of columns. However, it can change the value and the value type.
+
+**Syntax:**
+
+``` sql
+SELECT <expr> REPLACE( <expr> AS col_name) from [db.]table_name
+```
+
+**Example:**
+
+``` sql
+SELECT * REPLACE(i + 1 AS i) from columns_transformers;
+```
+
+```
+┌───i─┬──j─┬───k─┐
+│ 101 │ 10 │ 324 │
+│ 121 │  8 │  23 │
+└─────┴────┴─────┘
+```
+
+### Modifier Combinations {#modifier-combinations}
+
+You can use each modifier separately or combine them.
+
+**Examples:**
+
+Using the same modifier multiple times.
+
+``` sql
+SELECT COLUMNS('[jk]') APPLY(toString) APPLY(length) APPLY(max) from columns_transformers;
+```
+
+```
+┌─max(length(toString(j)))─┬─max(length(toString(k)))─┐
+│                        2 │                        3 │
+└──────────────────────────┴──────────────────────────┘
+```
+
+Using multiple modifiers in a single query.
+
+``` sql
+SELECT * REPLACE(i + 1 AS i) EXCEPT (j) APPLY(sum) from columns_transformers;
+```
+
+```
+┌─sum(plus(i, 1))─┬─sum(k)─┐
+│             222 │    347 │
+└─────────────────┴────────┘
+```
+
+## SETTINGS in SELECT Query {#settings-in-select}
+
+You can specify the necessary settings right in the `SELECT` query. The setting value is applied only to this query and is reset to the default or previous value after the query is executed.
+
+For other ways to apply settings, see [here](../../../operations/settings/index.md).
+
+**Example**
+
+``` sql
+SELECT * FROM some_table SETTINGS optimize_read_in_order=1, cast_keep_nullable=1;
+```
diff --git a/docs/ja/sql-reference/statements/select/offset.md b/docs/ja/sql-reference/statements/select/offset.md
new file mode 100644
index 00000000000..3efd916bcb8
--- /dev/null
+++ b/docs/ja/sql-reference/statements/select/offset.md
@@ -0,0 +1,86 @@
+---
+toc_title: OFFSET
+---
+
+# OFFSET FETCH Clause {#offset-fetch}
+
+`OFFSET` and `FETCH` allow you to retrieve data by portions. They specify a row block which you want to get by a single query.
+
+``` sql
+OFFSET offset_row_count {ROW | ROWS} [FETCH {FIRST | NEXT} fetch_row_count {ROW | ROWS} {ONLY | WITH TIES}]
+```
+
+The `offset_row_count` or `fetch_row_count` value can be a number or a literal constant. You can omit `fetch_row_count`; by default, it equals 1.
+
+`OFFSET` specifies the number of rows to skip before starting to return rows from the query result set.
+
+`FETCH` specifies the maximum number of rows that can be in the result of a query.
+
+The `ONLY` option is used to return rows that immediately follow the rows omitted by the `OFFSET`. In this case, `FETCH` is an alternative to the [LIMIT](../../../sql-reference/statements/select/limit.md) clause. For example, the following query
+
+``` sql
+SELECT * FROM test_fetch ORDER BY a OFFSET 1 ROW FETCH FIRST 3 ROWS ONLY;
+```
+
+is identical to the query
+
+``` sql
+SELECT * FROM test_fetch ORDER BY a LIMIT 3 OFFSET 1;
+```
+
+The `WITH TIES` option is used to return any additional rows that tie for the last place in the result set according to the `ORDER BY` clause. For example, if `fetch_row_count` is set to 5 but two additional rows match the values of the `ORDER BY` columns in the fifth row, the result set will contain seven rows.
+
+!!! note "Note"
+    According to the standard, the `OFFSET` clause must come before the `FETCH` clause if both are present.
+
+!!! note "Note"
+    The real offset can also depend on the [offset](../../../operations/settings/settings.md#offset) setting.
+
+## Examples {#examples}
+
+Input table:
+
+``` text
+┌─a─┬─b─┐
+│ 1 │ 1 │
+│ 2 │ 1 │
+│ 3 │ 4 │
+│ 1 │ 3 │
+│ 5 │ 4 │
+│ 0 │ 6 │
+│ 5 │ 7 │
+└───┴───┘
+```
+
+Usage of the `ONLY` option:
+
+``` sql
+SELECT * FROM test_fetch ORDER BY a OFFSET 3 ROW FETCH FIRST 3 ROWS ONLY;
+```
+
+Result:
+
+``` text
+┌─a─┬─b─┐
+│ 2 │ 1 │
+│ 3 │ 4 │
+│ 5 │ 4 │
+└───┴───┘
+```
+
+Usage of the `WITH TIES` option:
+
+``` sql
+SELECT * FROM test_fetch ORDER BY a OFFSET 3 ROW FETCH FIRST 3 ROWS WITH TIES;
+```
+
+Result:
+
+``` text
+┌─a─┬─b─┐
+│ 2 │ 1 │
+│ 3 │ 4 │
+│ 5 │ 4 │
+│ 5 │ 7 │
+└───┴───┘
+```
diff --git a/docs/ru/commercial/cloud.md b/docs/ru/commercial/cloud.md
index 8023f738c70..e00fc3be673 100644
--- a/docs/ru/commercial/cloud.md
+++ b/docs/ru/commercial/cloud.md
@@ -29,4 +29,30 @@ toc_title: "Поставщики облачных услуг ClickHouse"
 - cross-az масштабирование для повышения производительности и обеспечения высокой доступности
 - встроенный мониторинг и редактор SQL-запросов
 
-{## [Оригинальная статья](https://clickhouse.tech/docs/ru/commercial/cloud/) ##}
+## Alibaba Cloud {#alibaba-cloud}
+
+Управляемый облачный сервис Alibaba для ClickHouse: [китайская площадка](https://www.aliyun.com/product/clickhouse), будет доступен на международной площадке в мае 2021 года. Сервис предоставляет следующие возможности:
+
+- надежный сервер для облачного хранилища на основе распределенной системы [Alibaba Cloud Apsara](https://www.alibabacloud.com/product/apsara-stack);
+- расширяемая по запросу емкость, без переноса данных вручную;
+- поддержка одноузловой и многоузловой архитектуры, архитектуры с одной или несколькими репликами, а также многоуровневого хранения cold и hot data;
+- поддержка прав доступа, one-key восстановления, многоуровневая защита сети, шифрование облачного диска;
+- полная интеграция с облачными системами логирования, базами данных и инструментами обработки данных;
+- встроенная платформа для мониторинга и управления базами данных;
+- техническая поддержка от экспертов по работе с базами данных.
+ +## SberCloud {#sbercloud} + +[Облачная платформа SberCloud.Advanced](https://sbercloud.ru/ru/advanced): + +- предоставляет более 50 высокотехнологичных сервисов; +- позволяет быстро создавать и эффективно управлять ИТ-инфраструктурой, приложениями и интернет-сервисами; +- радикально минимизирует ресурсы, требуемые для работы корпоративных ИТ-систем; +- в разы сокращает время вывода новых продуктов на рынок. + +SberCloud.Advanced предоставляет [MapReduce Service (MRS)](https://docs.sbercloud.ru/mrs/ug/topics/ug__clickhouse.html) — надежную, безопасную и простую в использовании платформу корпоративного уровня для хранения, обработки и анализа больших данных. MRS позволяет быстро создавать и управлять кластерами ClickHouse. + +- Инстанс ClickHouse состоит из трех узлов ZooKeeper и нескольких узлов ClickHouse. Выделенный режим реплики используется для обеспечения высокой надежности двойных копий данных. +- MRS предлагает возможности гибкого масштабирования при быстром росте сервисов в сценариях, когда емкости кластерного хранилища или вычислительных ресурсов процессора недостаточно. MRS в один клик предоставляет инструмент для балансировки данных при расширении узлов ClickHouse в кластере. Вы можете определить режим и время балансировки данных на основе характеристик сервиса, чтобы обеспечить доступность сервиса. +- MRS использует архитектуру развертывания высокой доступности на основе Elastic Load Balance (ELB) — сервиса для автоматического распределения трафика на несколько внутренних узлов. Благодаря ELB, данные записываются в локальные таблицы и считываются из распределенных таблиц на разных узлах. Такая архитектура повышает отказоустойчивость кластера и гарантирует высокую доступность приложений. + diff --git a/docs/ru/development/architecture.md b/docs/ru/development/architecture.md index 9f43fabba4f..d2cfc44b711 100644 --- a/docs/ru/development/architecture.md +++ b/docs/ru/development/architecture.md @@ -27,7 +27,7 @@ ClickHouse - полноценная колоночная СУБД. Данные `IColumn` предоставляет методы для общих реляционных преобразований данных, но они не отвечают всем потребностям. Например, `ColumnUInt64` не имеет метода для вычисления суммы двух столбцов, а `ColumnString` не имеет метода для запуска поиска по подстроке. Эти бесчисленные процедуры реализованы вне `IColumn`. -Различные функции на колонках могут быть реализованы обобщенным, неэффективным путем, используя `IColumn` методы для извлечения значений `Field`, или специальным путем, используя знания о внутреннем распределение данных в памяти в конкретной реализации `IColumn`. Для этого функции приводятся к конкретному типу `IColumn` и работают напрямую с его внутренним представлением. Например, в `ColumnUInt64` есть метод getData, который возвращает ссылку на внутренний массив, чтение и заполнение которого, выполняется отдельной процедурой напрямую. Фактически, мы имеем "дырявую абстракции", обеспечивающие эффективные специализации различных процедур. +Различные функции на колонках могут быть реализованы обобщенным, неэффективным путем, используя `IColumn` методы для извлечения значений `Field`, или специальным путем, используя знания о внутреннем распределение данных в памяти в конкретной реализации `IColumn`. Для этого функции приводятся к конкретному типу `IColumn` и работают напрямую с его внутренним представлением. Например, в `ColumnUInt64` есть метод `getData`, который возвращает ссылку на внутренний массив, чтение и заполнение которого, выполняется отдельной процедурой напрямую. 
Фактически, мы имеем "дырявые абстракции", обеспечивающие эффективные специализации различных процедур. ## Типы данных (Data Types) {#data_types} @@ -42,7 +42,7 @@ ClickHouse - полноценная колоночная СУБД. Данные ## Блоки (Block) {#block} -`Block` это контейнер, который представляет фрагмент (chunk) таблицы в памяти. Это набор троек - `(IColumn, IDataType, имя колонки)`. В процессе выполнения запроса, данные обрабатываются `Block`ами. Если у нас есть `Block`, значит у нас есть данные (в объекте `IColumn`), информация о типе (в `IDataType`), которая говорит нам, как работать с колонкой, и имя колонки (оригинальное имя колонки таблицы или служебное имя, присвоенное для получения промежуточных результатов вычислений). +`Block` это контейнер, который представляет фрагмент (chunk) таблицы в памяти. Это набор троек - `(IColumn, IDataType, имя колонки)`. В процессе выполнения запроса, данные обрабатываются `Block`-ами. Если у нас есть `Block`, значит у нас есть данные (в объекте `IColumn`), информация о типе (в `IDataType`), которая говорит нам, как работать с колонкой, и имя колонки (оригинальное имя колонки таблицы или служебное имя, присвоенное для получения промежуточных результатов вычислений). При вычислении некоторой функции на колонках в блоке мы добавляем еще одну колонку с результатами в блок, не трогая колонки аргументов функции, потому что операции иммутабельные. Позже ненужные колонки могут быть удалены из блока, но не модифицированы. Это удобно для устранения общих подвыражений. @@ -58,7 +58,7 @@ ClickHouse - полноценная колоночная СУБД. Данные 2. Реализацию форматов данных. Например, при выводе данных в терминал в формате `Pretty`, вы создаете выходной поток блоков, который форматирует поступающие в него блоки. 3. Трансформацию данных. Допустим, у вас есть `IBlockInputStream` и вы хотите создать отфильтрованный поток. Вы создаете `FilterBlockInputStream` и инициализируете его вашим потоком. Затем вы тянете (pull) блоки из `FilterBlockInputStream`, а он тянет блоки исходного потока, фильтрует их и возвращает отфильтрованные блоки вам. Таким образом построены конвейеры выполнения запросов. -Имеются и более сложные трансформации. Например, когда вы тянете блоки из `AggregatingBlockInputStream`, он считывает все данные из своего источника, агрегирует их, и возвращает поток агрегированных данных вам. Другой пример: конструктор `UnionBlockInputStream` принимает множество источников входных данных и число потоков. Такой `Stream` работает в несколько потоков и читает данные источников параллельно. +Имеются и более сложные трансформации. Например, когда вы тянете блоки из `AggregatingBlockInputStream`, он считывает все данные из своего источника, агрегирует их, и возвращает поток агрегированных данных вам. Другой пример: конструктор `UnionBlockInputStream` принимает множество источников входных данных и число потоков. Такой `Stream` работает в несколько потоков и читает данные источников параллельно. > Потоки блоков используют «втягивающий» (pull) подход к управлению потоком выполнения: когда вы вытягиваете блок из первого потока, он, следовательно, вытягивает необходимые блоки из вложенных потоков, так и работает весь конвейер выполнения. Ни «pull» ни «push» не имеют явного преимущества, потому что поток управления неявный, и это ограничивает в реализации различных функций, таких как одновременное выполнение нескольких запросов (слияние нескольких конвейеров вместе). 
Это ограничение можно преодолеть с помощью сопрограмм (coroutines) или просто запуском дополнительных потоков, которые ждут друг друга. У нас может быть больше возможностей, если мы сделаем поток управления явным: если мы локализуем логику для передачи данных из одной расчетной единицы в другую вне этих расчетных единиц. Читайте эту [статью](http://journal.stuffwithstuff.com/2013/01/13/iteration-inside-and-out/) для углубленного изучения. @@ -110,9 +110,9 @@ ClickHouse - полноценная колоночная СУБД. Данные > Генераторы парсеров не используются по историческим причинам. ## Интерпретаторы {#interpreters} - + Интерпретаторы отвечают за создание конвейера выполнения запроса из `AST`. Есть простые интерпретаторы, такие как `InterpreterExistsQuery` и `InterpreterDropQuery` или более сложный `InterpreterSelectQuery`. Конвейер выполнения запроса представляет собой комбинацию входных и выходных потоков блоков. Например, результатом интерпретации `SELECT` запроса является `IBlockInputStream` для чтения результирующего набора данных; результат интерпретации `INSERT` запроса - это `IBlockOutputStream`, для записи данных, предназначенных для вставки; результат интерпретации `INSERT SELECT` запроса - это `IBlockInputStream`, который возвращает пустой результирующий набор при первом чтении, но копирует данные из `SELECT` к `INSERT`. - + `InterpreterSelectQuery` использует `ExpressionAnalyzer` и `ExpressionActions` механизмы для анализа запросов и преобразований. Именно здесь выполняется большинство оптимизаций запросов на основе правил. `ExpressionAnalyzer` написан довольно грязно и должен быть переписан: различные преобразования запросов и оптимизации должны быть извлечены в отдельные классы, чтобы позволить модульные преобразования или запросы. ## Функции {#functions} @@ -162,9 +162,9 @@ ClickHouse имеет сильную типизацию, поэтому нет Сервера в кластере в основном независимы. Вы можете создать `Распределенную` (`Distributed`) таблицу на одном или всех серверах в кластере. Такая таблица сама по себе не хранит данные - она только предоставляет возможность "просмотра" всех локальных таблиц на нескольких узлах кластера. При выполнении `SELECT` распределенная таблица переписывает запрос, выбирает удаленные узлы в соответствии с настройками балансировки нагрузки и отправляет им запрос. Распределенная таблица просит удаленные сервера обработать запрос до той стадии, когда промежуточные результаты с разных серверов могут быть объединены. Затем он получает промежуточные результаты и объединяет их. Распределенная таблица пытается возложить как можно больше работы на удаленные серверы и сократить объем промежуточных данных, передаваемых по сети. -Ситуация усложняется, при использовании подзапросы в случае IN или JOIN, когда каждый из них использует таблицу `Distributed`. Есть разные стратегии для выполнения таких запросов. +Ситуация усложняется, при использовании подзапросов в случае `IN` или `JOIN`, когда каждый из них использует таблицу `Distributed`. Есть разные стратегии для выполнения таких запросов. -Глобального плана выполнения распределенных запросов не существует. Каждый узел имеет собственный локальный план для своей части работы. У нас есть простое однонаправленное выполнение распределенных запросов: мы отправляем запросы на удаленные узлы и затем объединяем результаты. 
Но это невозможно для сложных запросов GROUP BY высокой кардинальности или запросов с большим числом временных данных в JOIN: в таких случаях нам необходимо перераспределить («reshuffle») данные между серверами, что требует дополнительной координации. ClickHouse не поддерживает выполнение запросов такого рода, и нам нужно работать над этим. +Глобального плана выполнения распределенных запросов не существует. Каждый узел имеет собственный локальный план для своей части работы. У нас есть простое однонаправленное выполнение распределенных запросов: мы отправляем запросы на удаленные узлы и затем объединяем результаты. Но это невозможно для сложных запросов `GROUP BY` высокой кардинальности или запросов с большим числом временных данных в `JOIN`: в таких случаях нам необходимо перераспределить («reshuffle») данные между серверами, что требует дополнительной координации. ClickHouse не поддерживает выполнение запросов такого рода, и нам нужно работать над этим. ## Merge Tree {#merge-tree} @@ -190,7 +190,7 @@ ClickHouse имеет сильную типизацию, поэтому нет Репликация использует асинхронную multi-master схему. Вы можете вставить данные в любую реплику, которая имеет открытую сессию в `ZooKeeper`, и данные реплицируются на все другие реплики асинхронно. Поскольку ClickHouse не поддерживает UPDATE, репликация исключает конфликты (conflict-free replication). Поскольку подтверждение вставок кворумом не реализовано, только что вставленные данные могут быть потеряны в случае сбоя одного узла. -Метаданные для репликации хранятся в `ZooKeeper`. Существует журнал репликации, в котором перечислены действия, которые необходимо выполнить. Среди этих действий: получить часть (get the part); объединить части (merge parts); удалить партицию (drop a partition) и так далее. Каждая реплика копирует журнал репликации в свою очередь, а затем выполняет действия из очереди. Например, при вставке в журнале создается действие «получить часть» (get the part), и каждая реплика загружает эту часть. Слияния координируются между репликами, чтобы получить идентичные до байта результаты. Все части объединяются одинаково на всех репликах. Одна из реплик-лидеров инициирует новое слияние кусков первой и записывает действия «слияния частей» в журнал. Несколько реплик (или все) могут быть лидерами одновременно. Реплике можно запретить быть лидером с помощью `merge_tree` настройки `replicated_can_become_leader`. +Метаданные для репликации хранятся в `ZooKeeper`. Существует журнал репликации, в котором перечислены действия, которые необходимо выполнить. Среди этих действий: получить часть (get the part); объединить части (merge parts); удалить партицию (drop a partition) и так далее. Каждая реплика копирует журнал репликации в свою очередь, а затем выполняет действия из очереди. Например, при вставке в журнале создается действие «получить часть» (get the part), и каждая реплика загружает эту часть. Слияния координируются между репликами, чтобы получить идентичные до байта результаты. Все части объединяются одинаково на всех репликах. Одна из реплик-лидеров инициирует новое слияние кусков первой и записывает действия «слияния частей» в журнал. Несколько реплик (или все) могут быть лидерами одновременно. Реплике можно запретить быть лидером с помощью `merge_tree` настройки `replicated_can_become_leader`. Репликация является физической: между узлами передаются только сжатые части, а не запросы. Слияния обрабатываются на каждой реплике независимо, в большинстве случаев, чтобы снизить затраты на сеть, во избежание усиления роли сети. 
Крупные объединенные части отправляются по сети только в случае значительной задержки репликации. diff --git a/docs/ru/development/developer-instruction.md b/docs/ru/development/developer-instruction.md index 9ddb17b7212..463d38a44fb 100644 --- a/docs/ru/development/developer-instruction.md +++ b/docs/ru/development/developer-instruction.md @@ -7,15 +7,15 @@ toc_title: "Инструкция для разработчиков" Сборка ClickHouse поддерживается на Linux, FreeBSD, Mac OS X. -# Если вы используете Windows {#esli-vy-ispolzuete-windows} +## Если вы используете Windows {#esli-vy-ispolzuete-windows} Если вы используете Windows, вам потребуется создать виртуальную машину с Ubuntu. Для работы с виртуальной машиной, установите VirtualBox. Скачать Ubuntu можно на сайте: https://www.ubuntu.com/#download Создайте виртуальную машину из полученного образа. Выделите для неё не менее 4 GB оперативной памяти. Для запуска терминала в Ubuntu, найдите в меню программу со словом terminal (gnome-terminal, konsole или что-то в этом роде) или нажмите Ctrl+Alt+T. -# Если вы используете 32-битную систему {#esli-vy-ispolzuete-32-bitnuiu-sistemu} +## Если вы используете 32-битную систему {#esli-vy-ispolzuete-32-bitnuiu-sistemu} ClickHouse не работает и не собирается на 32-битных системах. Получите доступ к 64-битной системе и продолжайте. -# Создание репозитория на GitHub {#sozdanie-repozitoriia-na-github} +## Создание репозитория на GitHub {#sozdanie-repozitoriia-na-github} Для работы с репозиторием ClickHouse, вам потребуется аккаунт на GitHub. Наверное, он у вас уже есть. @@ -34,7 +34,7 @@ ClickHouse не работает и не собирается на 32-битны Подробное руководство по использованию Git: https://git-scm.com/book/ru/v2 -# Клонирование репозитория на рабочую машину {#klonirovanie-repozitoriia-na-rabochuiu-mashinu} +## Клонирование репозитория на рабочую машину {#klonirovanie-repozitoriia-na-rabochuiu-mashinu} Затем вам потребуется загрузить исходники для работы на свой компьютер. Это называется «клонирование репозитория», потому что создаёт на вашем компьютере локальную копию репозитория, с которой вы будете работать. @@ -78,7 +78,7 @@ ClickHouse не работает и не собирается на 32-битны После этого, вы сможете добавлять в свой репозиторий обновления из репозитория Яндекса с помощью команды `git pull upstream master`. -## Работа с сабмодулями Git {#rabota-s-sabmoduliami-git} +### Работа с сабмодулями Git {#rabota-s-sabmoduliami-git} Работа с сабмодулями git может быть достаточно болезненной. Следующие команды позволят содержать их в порядке: @@ -110,7 +110,7 @@ The next commands would help you to reset all submodules to the initial state (! git submodule foreach git submodule foreach git reset --hard git submodule foreach git submodule foreach git clean -xfd -# Система сборки {#sistema-sborki} +## Система сборки {#sistema-sborki} ClickHouse использует систему сборки CMake и Ninja. @@ -130,25 +130,25 @@ Ninja - система запуска сборочных задач. Проверьте версию CMake: `cmake --version`. Если версия меньше 3.3, то установите новую версию с сайта https://cmake.org/download/ -# Необязательные внешние библиотеки {#neobiazatelnye-vneshnie-biblioteki} +## Необязательные внешние библиотеки {#neobiazatelnye-vneshnie-biblioteki} ClickHouse использует для сборки некоторое количество внешних библиотек. Но ни одну из них не требуется отдельно устанавливать, так как они собираются вместе с ClickHouse, из исходников, которые расположены в submodules. Посмотреть набор этих библиотек можно в директории contrib. 
-# Компилятор C++ {#kompiliator-c} +## Компилятор C++ {#kompiliator-c} -В качестве компилятора C++ поддерживается GCC начиная с версии 9 или Clang начиная с версии 8. +В качестве компилятора C++ поддерживается Clang начиная с версии 11. -Официальные сборки от Яндекса, на данный момент, используют GCC, так как он генерирует слегка более производительный машинный код (разница в среднем до нескольких процентов по нашим бенчмаркам). Clang обычно более удобен для разработки. Впрочем, наша среда continuous integration проверяет около десятка вариантов сборки. +Впрочем, наша среда continuous integration проверяет около десятка вариантов сборки, включая gcc, но сборка с помощью gcc непригодна для использования в продакшене. -Для установки GCC под Ubuntu, выполните: `sudo apt install gcc g++`. +On Ubuntu/Debian you can use the automatic installation script (check [official webpage](https://apt.llvm.org/)) -Проверьте версию gcc: `gcc --version`. Если версия меньше 10, то следуйте инструкции: https://clickhouse.tech/docs/ru/development/build/#install-gcc-10. +```bash +sudo bash -c "$(wget -O - https://apt.llvm.org/llvm.sh)" +``` Сборка под Mac OS X поддерживается только для компилятора Clang. Чтобы установить его выполните `brew install llvm` -Если вы решили использовать Clang, вы также можете установить `libc++` и `lld`, если вы знаете, что это такое. При желании, установите `ccache`. - -# Процесс сборки {#protsess-sborki} +## Процесс сборки {#protsess-sborki} Теперь вы готовы к сборке ClickHouse. Для размещения собранных файлов, рекомендуется создать отдельную директорию build внутри директории ClickHouse: @@ -158,14 +158,7 @@ ClickHouse использует для сборки некоторое коли Вы можете иметь несколько разных директорий (build_release, build_debug) для разных вариантов сборки. Находясь в директории build, выполните конфигурацию сборки с помощью CMake. -Перед первым запуском необходимо выставить переменные окружения, отвечающие за выбор компилятора (в данном примере это - gcc версии 9). - -Linux: - - export CC=gcc-10 CXX=g++-10 - cmake .. - -Mac OS X: +Перед первым запуском необходимо выставить переменные окружения, отвечающие за выбор компилятора. export CC=clang CXX=clang++ cmake .. @@ -206,7 +199,7 @@ Mac OS X: ls -l programs/clickhouse -# Запуск собранной версии ClickHouse {#zapusk-sobrannoi-versii-clickhouse} +## Запуск собранной версии ClickHouse {#zapusk-sobrannoi-versii-clickhouse} Для запуска сервера из под текущего пользователя, с выводом логов в терминал и с использованием примеров конфигурационных файлов, расположенных в исходниках, перейдите в директорию `ClickHouse/programs/server/` (эта директория находится не в директории build) и выполните: @@ -233,7 +226,7 @@ Mac OS X: sudo service clickhouse-server stop sudo -u clickhouse ClickHouse/build/programs/clickhouse server --config-file /etc/clickhouse-server/config.xml -# Среда разработки {#sreda-razrabotki} +## Среда разработки {#sreda-razrabotki} Если вы не знаете, какую среду разработки использовать, то рекомендуется использовать CLion. CLion является платным ПО, но его можно использовать бесплатно в течение пробного периода. Также он бесплатен для учащихся. CLion можно использовать как под Linux, так и под Mac OS X. @@ -243,7 +236,7 @@ Mac OS X: На всякий случай заметим, что CLion самостоятельно создаёт свою build директорию, самостоятельно выбирает тип сборки debug по-умолчанию, для конфигурации использует встроенную в CLion версию CMake вместо установленного вами, а для запуска задач использует make вместо ninja. 
Это нормально, просто имейте это ввиду, чтобы не возникало путаницы.
 
-# Написание кода {#napisanie-koda}
+## Написание кода {#napisanie-koda}
 
 Описание архитектуры ClickHouse: https://clickhouse.tech/docs/ru/development/architecture/
 
@@ -253,7 +246,7 @@ Mac OS X:
 
 Список задач: https://github.com/ClickHouse/ClickHouse/issues?q=is%3Aopen+is%3Aissue+label%3A%22easy+task%22
 
-# Тестовые данные {#testovye-dannye}
+## Тестовые данные {#testovye-dannye}
 
 Разработка ClickHouse часто требует загрузки реалистичных наборов данных. Особенно это важно для тестирования производительности. Специально для вас мы подготовили набор данных, представляющий собой анонимизированные данные Яндекс.Метрики. Загрузка этих данных потребует ещё 3 GB места на диске. Для выполнения большинства задач разработки, загружать эти данные не обязательно.
 
@@ -274,7 +267,7 @@ Mac OS X:
     clickhouse-client --max_insert_block_size 100000 --query "INSERT INTO test.hits FORMAT TSV" < hits_v1.tsv
     clickhouse-client --max_insert_block_size 100000 --query "INSERT INTO test.visits FORMAT TSV" < visits_v1.tsv
 
-# Создание Pull Request {#sozdanie-pull-request}
+## Создание Pull Request {#sozdanie-pull-request}
 
 Откройте свой форк репозитория в интерфейсе GitHub. Если вы вели разработку в бранче, выберите этот бранч. На странице будет доступна кнопка «Pull request». По сути, это означает «создать заявку на принятие моих изменений в основной репозиторий».
diff --git a/docs/ru/development/style.md b/docs/ru/development/style.md
index 72607ca6bad..de29e629ceb 100644
--- a/docs/ru/development/style.md
+++ b/docs/ru/development/style.md
@@ -747,7 +747,7 @@ The dictionary is configured incorrectly.
 Есть два основных варианта проверки на такие ошибки:
 
 * Исключение с кодом `LOGICAL_ERROR`. Его можно использовать для важных проверок, которые делаются в том числе в релизной сборке.
-* `assert`. Такие условия не проверяются в релизной сборке, можно использовать для тяжёлых и опциональных проверок. 
+* `assert`. Такие условия не проверяются в релизной сборке, можно использовать для тяжёлых и опциональных проверок.
 
 Пример сообщения, у которого должен быть код `LOGICAL_ERROR`: `Block header is inconsistent with Chunk in ICompicatedProcessor::munge(). It is a bug!`
 
@@ -780,7 +780,7 @@ The dictionary is configured incorrectly.
 
 **2.** Язык - C++20 (см. список доступных [C++20 фич](https://en.cppreference.com/w/cpp/compiler_support#C.2B.2B20_features)).
 
-**3.** Компилятор - `gcc`. На данный момент (август 2020), код собирается версией 9.3. (Также код может быть собран `clang` версий 10 и 9)
+**3.** Компилятор - `clang`. На данный момент (апрель 2021), код собирается версией 11. (Также код может быть собран `gcc` версии 10, но такая сборка не тестируется и непригодна для продакшена).
 
 Используется стандартная библиотека (реализация `libc++`).
 
@@ -911,4 +911,3 @@ function(
     size_t limit)
 ```
 
-[Оригинальная статья](https://clickhouse.tech/docs/ru/development/style/)
diff --git a/docs/ru/engines/database-engines/atomic.md b/docs/ru/engines/database-engines/atomic.md
new file mode 100644
index 00000000000..8c75be3d93b
--- /dev/null
+++ b/docs/ru/engines/database-engines/atomic.md
@@ -0,0 +1,54 @@
+---
+toc_priority: 32
+toc_title: Atomic
+---
+
+# Atomic {#atomic}
+
+Поддерживает неблокирующие запросы [DROP TABLE](#drop-detach-table) и [RENAME TABLE](#rename-table) и атомарные запросы [EXCHANGE TABLES t1 AND t2](#exchange-tables). Движок `Atomic` используется по умолчанию.
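+
+Например, проверить, какой движок используется существующей базой данных, можно запросом `SHOW CREATE DATABASE` (набросок; имя базы `test` здесь условное):
+
+``` sql
+SHOW CREATE DATABASE test;
+-- В результате будет показано выражение CREATE DATABASE;
+-- если движок не был задан явно, в нём будет указано ENGINE = Atomic.
+```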
+
+## Создание БД {#creating-a-database}
+
+``` sql
+CREATE DATABASE test [ENGINE = Atomic];
+```
+
+## Особенности и рекомендации {#specifics-and-recommendations}
+
+### UUID {#table-uuid}
+
+Каждая таблица в базе данных `Atomic` имеет уникальный [UUID](../../sql-reference/data-types/uuid.md) и хранит данные в папке `/clickhouse_path/store/xxx/xxxyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy/`, где `xxxyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy` - это UUID таблицы.
+Обычно UUID генерируется автоматически, но пользователь также может явно указать UUID в момент создания таблицы (однако это не рекомендуется). Для отображения UUID в запросе `SHOW CREATE` вы можете использовать настройку [show_table_uuid_in_table_create_query_if_not_nil](../../operations/settings/settings.md#show_table_uuid_in_table_create_query_if_not_nil). Результат выполнения в таком случае будет иметь вид:
+
+```sql
+CREATE TABLE name UUID '28f1c61c-2970-457a-bffe-454156ddcfef' (n UInt64) ENGINE = ...;
+```
+
+### RENAME TABLE {#rename-table}
+
+Запросы `RENAME` выполняются без изменения UUID и перемещения табличных данных. Эти запросы не ожидают завершения использующих таблицу запросов и будут выполнены мгновенно.
+
+### DROP/DETACH TABLE {#drop-detach-table}
+
+При выполнении запроса `DROP TABLE` никакие данные не удаляются. Таблица помечается как удаленная, метаданные перемещаются в папку `/clickhouse_path/metadata_dropped/`, и база данных уведомляет фоновый поток. Задержка перед окончательным удалением данных задается настройкой [database_atomic_delay_before_drop_table_sec](../../operations/server-configuration-parameters/settings.md#database_atomic_delay_before_drop_table_sec).
+Вы можете задать синхронный режим, определяя модификатор `SYNC`. Используйте для этого настройку [database_atomic_wait_for_drop_and_detach_synchronously](../../operations/settings/settings.md#database_atomic_wait_for_drop_and_detach_synchronously). В этом случае запрос `DROP` ждет завершения `SELECT`, `INSERT` и других запросов, которые используют таблицу. Таблица будет фактически удалена, когда она не будет использоваться.
+
+### EXCHANGE TABLES {#exchange-tables}
+
+Запрос `EXCHANGE` меняет местами две таблицы атомарно. Вместо неатомарной операции:
+
+```sql
+RENAME TABLE new_table TO tmp, old_table TO new_table, tmp TO old_table;
+```
+
+вы можете использовать один атомарный запрос:
+
+``` sql
+EXCHANGE TABLES new_table AND old_table;
+```
+
+### ReplicatedMergeTree in Atomic Database {#replicatedmergetree-in-atomic-database}
+
+Для таблиц [ReplicatedMergeTree](../table-engines/mergetree-family/replication.md#table_engines-replication) рекомендуется не указывать параметры движка - путь в ZooKeeper и имя реплики. В этом случае будут использоваться параметры конфигурации: [default_replica_path](../../operations/server-configuration-parameters/settings.md#default_replica_path) и [default_replica_name](../../operations/server-configuration-parameters/settings.md#default_replica_name). Если вы хотите определить параметры движка явно, рекомендуется использовать макрос `{uuid}`. Это удобно, так как автоматически генерируются уникальные пути для каждой таблицы в ZooKeeper (набросок приведён в конце статьи).
+
+## Смотрите также
+
+- Системная таблица [system.databases](../../operations/system-tables/databases.md).
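+
+Для иллюстрации к разделу о ReplicatedMergeTree выше — набросок явного указания параметров движка с макросом `{uuid}` (путь в ZooKeeper и имя таблицы здесь условные; предполагается, что макросы `{shard}` и `{replica}` заданы в конфигурации сервера):
+
+``` sql
+CREATE TABLE test.replicated_table (n UInt64)
+ENGINE = ReplicatedMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}')
+ORDER BY n;
+-- {uuid} будет автоматически заменён на UUID создаваемой таблицы.
+```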
diff --git a/docs/ru/engines/database-engines/index.md b/docs/ru/engines/database-engines/index.md index e06c032a636..d4fad8f43a9 100644 --- a/docs/ru/engines/database-engines/index.md +++ b/docs/ru/engines/database-engines/index.md @@ -4,11 +4,11 @@ toc_priority: 27 toc_title: "Введение" --- -# Движки баз данных {#dvizhki-baz-dannykh} +# Движки баз данных {#database-engines} Движки баз данных обеспечивают работу с таблицами. -По умолчанию ClickHouse использует собственный движок баз данных, который поддерживает конфигурируемые [движки таблиц](../../engines/database-engines/index.md) и [диалект SQL](../../engines/database-engines/index.md). +По умолчанию ClickHouse использует движок [Atomic](../../engines/database-engines/atomic.md). Он поддерживает конфигурируемые [движки таблиц](../../engines/table-engines/index.md) и [диалект SQL](../../sql-reference/syntax.md). Также можно использовать следующие движки баз данных: @@ -18,4 +18,5 @@ toc_title: "Введение" - [Lazy](../../engines/database-engines/lazy.md) -[Оригинальная статья](https://clickhouse.tech/docs/ru/database_engines/) +- [PostgreSQL](../../engines/database-engines/postgresql.md) + diff --git a/docs/ru/engines/database-engines/lazy.md b/docs/ru/engines/database-engines/lazy.md index c01aae0284e..140a67be761 100644 --- a/docs/ru/engines/database-engines/lazy.md +++ b/docs/ru/engines/database-engines/lazy.md @@ -15,4 +15,3 @@ toc_title: Lazy CREATE DATABASE testlazy ENGINE = Lazy(expiration_time_in_seconds); ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/database_engines/lazy/) diff --git a/docs/ru/engines/database-engines/materialize-mysql.md b/docs/ru/engines/database-engines/materialize-mysql.md index 3022542e294..2067dfecca0 100644 --- a/docs/ru/engines/database-engines/materialize-mysql.md +++ b/docs/ru/engines/database-engines/materialize-mysql.md @@ -157,4 +157,3 @@ SELECT * FROM mysql.test; └───┴─────┴──────┘ ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/engines/database-engines/materialize-mysql/) diff --git a/docs/ru/engines/database-engines/postgresql.md b/docs/ru/engines/database-engines/postgresql.md new file mode 100644 index 00000000000..c11dab6f1aa --- /dev/null +++ b/docs/ru/engines/database-engines/postgresql.md @@ -0,0 +1,138 @@ +--- +toc_priority: 35 +toc_title: PostgreSQL +--- + +# PostgreSQL {#postgresql} + +Позволяет подключаться к БД на удаленном сервере [PostgreSQL](https://www.postgresql.org). Поддерживает операции чтения и записи (запросы `SELECT` и `INSERT`) для обмена данными между ClickHouse и PostgreSQL. + +Позволяет в реальном времени получать от удаленного сервера PostgreSQL информацию о таблицах БД и их структуре с помощью запросов `SHOW TABLES` и `DESCRIBE TABLE`. + +Поддерживает операции изменения структуры таблиц (`ALTER TABLE ... ADD|DROP COLUMN`). Если параметр `use_table_cache` (см. ниже раздел Параметры движка) установлен в значение `1`, структура таблицы кешируется, и изменения в структуре не отслеживаются, но будут обновлены, если выполнить команды `DETACH` и `ATTACH`. + +## Создание БД {#creating-a-database} + +``` sql +CREATE DATABASE test_database +ENGINE = PostgreSQL('host:port', 'database', 'user', 'password'[, `use_table_cache`]); +``` + +**Параметры движка** + +- `host:port` — адрес сервера PostgreSQL. +- `database` — имя удаленной БД. +- `user` — пользователь PostgreSQL. +- `password` — пароль пользователя. +- `use_table_cache` — определяет кеширование структуры таблиц БД. Необязательный параметр. Значение по умолчанию: `0`. 
+
+## Поддерживаемые типы данных {#data_types-support}
+
+| PostgreSQL       | ClickHouse                                                   |
+|------------------|--------------------------------------------------------------|
+| DATE             | [Date](../../sql-reference/data-types/date.md)               |
+| TIMESTAMP        | [DateTime](../../sql-reference/data-types/datetime.md)       |
+| REAL             | [Float32](../../sql-reference/data-types/float.md)           |
+| DOUBLE           | [Float64](../../sql-reference/data-types/float.md)           |
+| DECIMAL, NUMERIC | [Decimal](../../sql-reference/data-types/decimal.md)         |
+| SMALLINT         | [Int16](../../sql-reference/data-types/int-uint.md)          |
+| INTEGER          | [Int32](../../sql-reference/data-types/int-uint.md)          |
+| BIGINT           | [Int64](../../sql-reference/data-types/int-uint.md)          |
+| SERIAL           | [UInt32](../../sql-reference/data-types/int-uint.md)         |
+| BIGSERIAL        | [UInt64](../../sql-reference/data-types/int-uint.md)         |
+| TEXT, CHAR       | [String](../../sql-reference/data-types/string.md)           |
+| INTEGER          | Nullable([Int32](../../sql-reference/data-types/int-uint.md))|
+| ARRAY            | [Array](../../sql-reference/data-types/array.md)             |
+
+## Примеры использования {#examples-of-use}
+
+Обмен данными между БД ClickHouse и сервером PostgreSQL:
+
+``` sql
+CREATE DATABASE test_database
+ENGINE = PostgreSQL('postgres1:5432', 'test_database', 'postgres', 'mysecretpassword', 1);
+```
+
+``` sql
+SHOW DATABASES;
+```
+
+``` text
+┌─name──────────┐
+│ default       │
+│ test_database │
+│ system        │
+└───────────────┘
+```
+
+``` sql
+SHOW TABLES FROM test_database;
+```
+
+``` text
+┌─name───────┐
+│ test_table │
+└────────────┘
+```
+
+Чтение данных из таблицы PostgreSQL:
+
+``` sql
+SELECT * FROM test_database.test_table;
+```
+
+``` text
+┌─id─┬─value─┐
+│  1 │     2 │
+└────┴───────┘
+```
+
+Запись данных в таблицу PostgreSQL:
+
+``` sql
+INSERT INTO test_database.test_table VALUES (3,4);
+SELECT * FROM test_database.test_table;
+```
+
+``` text
+┌─int_id─┬─value─┐
+│      1 │     2 │
+│      3 │     4 │
+└────────┴───────┘
+```
+
+Пусть структура таблицы была изменена в PostgreSQL:
+
+``` sql
+postgre> ALTER TABLE test_table ADD COLUMN data Text
+```
+
+Поскольку при создании БД параметр `use_table_cache` был установлен в значение `1`, структура таблицы в ClickHouse была кеширована и поэтому не изменилась:
+
+``` sql
+DESCRIBE TABLE test_database.test_table;
+```
+``` text
+┌─name───┬─type──────────────┐
+│ id     │ Nullable(Integer) │
+│ value  │ Nullable(Integer) │
+└────────┴───────────────────┘
+```
+
+После того как таблицу «отцепили» и затем снова «прицепили», структура обновилась:
+
+``` sql
+DETACH TABLE test_database.test_table;
+ATTACH TABLE test_database.test_table;
+DESCRIBE TABLE test_database.test_table;
+```
+``` text
+┌─name───┬─type──────────────┐
+│ id     │ Nullable(Integer) │
+│ value  │ Nullable(Integer) │
+│ data   │ Nullable(String)  │
+└────────┴───────────────────┘
+```
+
+[Оригинальная статья](https://clickhouse.tech/docs/ru/database-engines/postgresql/)
diff --git a/docs/ru/engines/table-engines/index.md b/docs/ru/engines/table-engines/index.md
index 05236eb5b33..b17b2124250 100644
--- a/docs/ru/engines/table-engines/index.md
+++ b/docs/ru/engines/table-engines/index.md
@@ -16,7 +16,7 @@ toc_title: "Введение"
 - Возможно ли многопоточное выполнение запроса.
 - Параметры репликации данных.
-## Семейства движков {#semeistva-dvizhkov}
+## Семейства движков {#engine-families}
 
 ### MergeTree {#mergetree}
 
@@ -42,7 +42,7 @@ toc_title: "Введение"
 - [StripeLog](log-family/stripelog.md#stripelog)
 - [Log](log-family/log.md#log)
 
-### Движки для интеграции {#dvizhki-dlia-integratsii}
+### Движки для интеграции {#integration-engines}
 
 Движки для связи с другими системами хранения и обработки данных.
 
@@ -52,9 +52,22 @@ toc_title: "Введение"
 - [MySQL](integrations/mysql.md#mysql)
 - [ODBC](integrations/odbc.md#table-engine-odbc)
 - [JDBC](integrations/jdbc.md#table-engine-jdbc)
+- [S3](integrations/s3.md#table-engine-s3)
 
 ### Специальные движки {#spetsialnye-dvizhki}
 
+- [ODBC](../../engines/table-engines/integrations/odbc.md)
+- [JDBC](../../engines/table-engines/integrations/jdbc.md)
+- [MySQL](../../engines/table-engines/integrations/mysql.md)
+- [MongoDB](../../engines/table-engines/integrations/mongodb.md)
+- [HDFS](../../engines/table-engines/integrations/hdfs.md)
+- [Kafka](../../engines/table-engines/integrations/kafka.md)
+- [EmbeddedRocksDB](../../engines/table-engines/integrations/embedded-rocksdb.md)
+- [RabbitMQ](../../engines/table-engines/integrations/rabbitmq.md)
+- [PostgreSQL](../../engines/table-engines/integrations/postgresql.md)
+
+### Специальные движки {#special-engines}
+
 Движки семейства:
 
 - [Distributed](special/distributed.md#distributed)
@@ -79,5 +92,3 @@ toc_title: "Введение"
 Чтобы получить данные из виртуального столбца, необходимо указать его название в запросе `SELECT`. `SELECT *` не отображает данные из виртуальных столбцов.
 
 При создании таблицы со столбцом, имя которого совпадает с именем одного из виртуальных столбцов таблицы, виртуальный столбец становится недоступным. Не делайте так. Чтобы помочь избежать конфликтов, имена виртуальных столбцов обычно предваряются подчеркиванием.
-
-[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/table_engines/)
diff --git a/docs/ru/engines/table-engines/integrations/embedded-rocksdb.md b/docs/ru/engines/table-engines/integrations/embedded-rocksdb.md
index 9b68bcfc770..5a7909f63b2 100644
--- a/docs/ru/engines/table-engines/integrations/embedded-rocksdb.md
+++ b/docs/ru/engines/table-engines/integrations/embedded-rocksdb.md
@@ -1,5 +1,5 @@
 ---
-toc_priority: 6
+toc_priority: 9
 toc_title: EmbeddedRocksDB
 ---
 
@@ -41,4 +41,3 @@
 ENGINE = EmbeddedRocksDB
 PRIMARY KEY key;
 ```
-[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/table_engines/embedded-rocksdb/)
\ No newline at end of file
diff --git a/docs/ru/engines/table-engines/integrations/hdfs.md b/docs/ru/engines/table-engines/integrations/hdfs.md
index bd8e760fce4..b56bbfc0788 100644
--- a/docs/ru/engines/table-engines/integrations/hdfs.md
+++ b/docs/ru/engines/table-engines/integrations/hdfs.md
@@ -1,5 +1,5 @@
 ---
-toc_priority: 4
+toc_priority: 6
 toc_title: HDFS
 ---
 
@@ -102,16 +102,103 @@ CREATE TABLE table_with_asterisk (name String, value UInt32) ENGINE = HDFS('hdfs
 
 Создадим таблицу с именами `file000`, `file001`, … , `file999`:
 
 ``` sql
-CREARE TABLE big_table (name String, value UInt32) ENGINE = HDFS('hdfs://hdfs1:9000/big_dir/file{0..9}{0..9}{0..9}', 'CSV')
+CREATE TABLE big_table (name String, value UInt32) ENGINE = HDFS('hdfs://hdfs1:9000/big_dir/file{0..9}{0..9}{0..9}', 'CSV')
 ```
 
+## Конфигурация {#configuration}
+
+Подобно GraphiteMergeTree, движок HDFS поддерживает расширенную конфигурацию с использованием файла конфигурации ClickHouse.
Есть два раздела конфигурации, которые вы можете использовать: глобальный (`hdfs`) и на уровне пользователя (`hdfs_*`). Глобальные настройки применяются первыми, затем — конфигурация уровня пользователя (если она указана).
+
+``` xml
+<hdfs>
+    <hadoop_kerberos_keytab>/tmp/keytab/clickhouse.keytab</hadoop_kerberos_keytab>
+    <hadoop_kerberos_principal>clickuser@TEST.CLICKHOUSE.TECH</hadoop_kerberos_principal>
+    <hadoop_security_authentication>kerberos</hadoop_security_authentication>
+</hdfs>
+
+<hdfs_root>
+    <hadoop_kerberos_principal>root@TEST.CLICKHOUSE.TECH</hadoop_kerberos_principal>
+</hdfs_root>
+```
+
+### Список возможных опций конфигурации со значениями по умолчанию
+#### Поддерживаемые из libhdfs3
+
+| **параметр** | **по умолчанию** |
+| ------------------------------------------------------ | ----------------------- |
+| rpc\_client\_connect\_tcpnodelay | true |
+| dfs\_client\_read\_shortcircuit | true |
+| output\_replace-datanode-on-failure | true |
+| input\_notretry-another-node | false |
+| input\_localread\_mappedfile | true |
+| dfs\_client\_use\_legacy\_blockreader\_local | false |
+| rpc\_client\_ping\_interval | 10 * 1000 |
+| rpc\_client\_connect\_timeout | 600 * 1000 |
+| rpc\_client\_read\_timeout | 3600 * 1000 |
+| rpc\_client\_write\_timeout | 3600 * 1000 |
+| rpc\_client\_socekt\_linger\_timeout | -1 |
+| rpc\_client\_connect\_retry | 10 |
+| rpc\_client\_timeout | 3600 * 1000 |
+| dfs\_default\_replica | 3 |
+| input\_connect\_timeout | 600 * 1000 |
+| input\_read\_timeout | 3600 * 1000 |
+| input\_write\_timeout | 3600 * 1000 |
+| input\_localread\_default\_buffersize | 1 * 1024 * 1024 |
+| dfs\_prefetchsize | 10 |
+| input\_read\_getblockinfo\_retry | 3 |
+| input\_localread\_blockinfo\_cachesize | 1000 |
+| input\_read\_max\_retry | 60 |
+| output\_default\_chunksize | 512 |
+| output\_default\_packetsize | 64 * 1024 |
+| output\_default\_write\_retry | 10 |
+| output\_connect\_timeout | 600 * 1000 |
+| output\_read\_timeout | 3600 * 1000 |
+| output\_write\_timeout | 3600 * 1000 |
+| output\_close\_timeout | 3600 * 1000 |
+| output\_packetpool\_size | 1024 |
+| output\_heeartbeat\_interval | 10 * 1000 |
+| dfs\_client\_failover\_max\_attempts | 15 |
+| dfs\_client\_read\_shortcircuit\_streams\_cache\_size | 256 |
+| dfs\_client\_socketcache\_expiryMsec | 3000 |
+| dfs\_client\_socketcache\_capacity | 16 |
+| dfs\_default\_blocksize | 64 * 1024 * 1024 |
+| dfs\_default\_uri | "hdfs://localhost:9000" |
+| hadoop\_security\_authentication | "simple" |
+| hadoop\_security\_kerberos\_ticket\_cache\_path | "" |
+| dfs\_client\_log\_severity | "INFO" |
+| dfs\_domain\_socket\_path | "" |
+
+[Руководство по конфигурации HDFS](https://hawq.apache.org/docs/userguide/2.3.0.0-incubating/reference/HDFSConfigurationParameterReference.html) поможет объяснить назначение некоторых параметров.
+
+#### Расширенные параметры для ClickHouse {#clickhouse-extras}
+
+| **параметр** | **по умолчанию** |
+| -------------------------------- | ---------------- |
+| hadoop\_kerberos\_keytab | "" |
+| hadoop\_kerberos\_principal | "" |
+| hadoop\_kerberos\_kinit\_command | kinit |
+
+#### Ограничения {#limitations}
+
+* hadoop\_security\_kerberos\_ticket\_cache\_path может быть определён только на глобальном уровне
+
+## Поддержка Kerberos {#kerberos-support}
+
+Если параметр hadoop\_security\_authentication имеет значение 'kerberos', ClickHouse аутентифицируется с помощью Kerberos.
+[Расширенные параметры](#clickhouse-extras) и hadoop\_security\_kerberos\_ticket\_cache\_path помогают сделать это.
+Обратите внимание, что из-за ограничений libhdfs3 поддерживается только устаревший метод аутентификации:
+коммуникация с узлами данных не защищена SASL (HADOOP\_SECURE\_DN\_USER — надёжный показатель такого
+подхода к безопасности). Для примера настроек используйте tests/integration/test\_storage\_kerberized\_hdfs/hdfs_configs/bootstrap.sh.
+ +Если hadoop\_kerberos\_keytab, hadoop\_kerberos\_principal или hadoop\_kerberos\_kinit\_command указаны в настройках, kinit будет вызван. hadoop\_kerberos\_keytab и hadoop\_kerberos\_principal обязательны в этом случае. Необходимо также будет установить kinit и файлы конфигурации krb5. ## Виртуальные столбцы {#virtualnye-stolbtsy} - `_path` — Путь к файлу. - `_file` — Имя файла. -**Смотрите также** +**См. также** -- [Виртуальные столбцы](index.md#table_engines-virtual_columns) +- [Виртуальные колонки](../../../engines/table-engines/index.md#table_engines-virtual_columns) -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/table_engines/hdfs/) diff --git a/docs/ru/engines/table-engines/integrations/index.md b/docs/ru/engines/table-engines/integrations/index.md index e01d9d0cee2..cb217270129 100644 --- a/docs/ru/engines/table-engines/integrations/index.md +++ b/docs/ru/engines/table-engines/integrations/index.md @@ -14,8 +14,9 @@ toc_priority: 30 - [MySQL](../../../engines/table-engines/integrations/mysql.md) - [MongoDB](../../../engines/table-engines/integrations/mongodb.md) - [HDFS](../../../engines/table-engines/integrations/hdfs.md) +- [S3](../../../engines/table-engines/integrations/s3.md) - [Kafka](../../../engines/table-engines/integrations/kafka.md) - [EmbeddedRocksDB](../../../engines/table-engines/integrations/embedded-rocksdb.md) - [RabbitMQ](../../../engines/table-engines/integrations/rabbitmq.md) +- [PostgreSQL](../../../engines/table-engines/integrations/postgresql.md) -[Оригинальная статья](https://clickhouse.tech/docs/ru/engines/table-engines/integrations/) diff --git a/docs/ru/engines/table-engines/integrations/jdbc.md b/docs/ru/engines/table-engines/integrations/jdbc.md index d7d438e0633..fd7411a258e 100644 --- a/docs/ru/engines/table-engines/integrations/jdbc.md +++ b/docs/ru/engines/table-engines/integrations/jdbc.md @@ -1,5 +1,5 @@ --- -toc_priority: 2 +toc_priority: 3 toc_title: JDBC --- @@ -89,4 +89,3 @@ FROM jdbc_table - [Табличная функция JDBC](../../../engines/table-engines/integrations/jdbc.md). 
-[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/table_engines/jdbc/)
diff --git a/docs/ru/engines/table-engines/integrations/kafka.md b/docs/ru/engines/table-engines/integrations/kafka.md
index 5a6971b1ae6..19e2850dd51 100644
--- a/docs/ru/engines/table-engines/integrations/kafka.md
+++ b/docs/ru/engines/table-engines/integrations/kafka.md
@@ -1,5 +1,5 @@
 ---
-toc_priority: 5
+toc_priority: 8
 toc_title: Kafka
 ---
 
@@ -193,4 +193,3 @@ ClickHouse может поддерживать учетные данные Kerbe
 
 - [Виртуальные столбцы](index.md#table_engines-virtual_columns)
 - [background_schedule_pool_size](../../../operations/settings/settings.md#background_schedule_pool_size)
-
-[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/table_engines/kafka/)
diff --git a/docs/ru/engines/table-engines/integrations/mongodb.md b/docs/ru/engines/table-engines/integrations/mongodb.md
index 0765b3909de..97f903bdf89 100644
--- a/docs/ru/engines/table-engines/integrations/mongodb.md
+++ b/docs/ru/engines/table-engines/integrations/mongodb.md
@@ -1,5 +1,5 @@
 ---
-toc_priority: 7
+toc_priority: 5
 toc_title: MongoDB
 ---
 
@@ -54,4 +54,4 @@ SELECT COUNT() FROM mongo_table;
 └─────────┘
 ```
 
-[Original article](https://clickhouse.tech/docs/ru/operations/table_engines/integrations/mongodb/)
+[Original article](https://clickhouse.tech/docs/ru/engines/table-engines/integrations/mongodb/)
diff --git a/docs/ru/engines/table-engines/integrations/mysql.md b/docs/ru/engines/table-engines/integrations/mysql.md
index 3370e9b06d0..5011c8a93c6 100644
--- a/docs/ru/engines/table-engines/integrations/mysql.md
+++ b/docs/ru/engines/table-engines/integrations/mysql.md
@@ -1,5 +1,5 @@
 ---
-toc_priority: 3
+toc_priority: 4
 toc_title: MySQL
 ---
 
@@ -18,12 +18,13 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
 ) ENGINE = MySQL('host:port', 'database', 'table', 'user', 'password'[, replace_query, 'on_duplicate_clause']);
 ```
 
-Смотрите подробное описание запроса [CREATE TABLE](../../../engines/table-engines/integrations/mysql.md#create-table-query).
+Смотрите подробное описание запроса [CREATE TABLE](../../../sql-reference/statements/create/table.md#create-table-query).
 
 Структура таблицы может отличаться от исходной структуры таблицы MySQL:
 
 - Имена столбцов должны быть такими же, как в исходной таблице MySQL, но вы можете использовать только некоторые из этих столбцов и в любом порядке.
-- Типы столбцов могут отличаться от типов в исходной таблице MySQL. ClickHouse пытается [приводить](../../../engines/table-engines/integrations/mysql.md#type_conversion_function-cast) значения к типам данных ClickHouse.
+- Типы столбцов могут отличаться от типов в исходной таблице MySQL. ClickHouse пытается [приводить](../../../sql-reference/functions/type-conversion-functions.md#type_conversion_function-cast) значения к типам данных ClickHouse.
+- Настройка `external_table_functions_use_nulls` определяет, как обрабатывать Nullable-столбцы. По умолчанию `1`; если `0`, табличная функция не будет создавать Nullable-столбцы и вместо NULL будет выставлять значения по умолчанию для скалярного типа. Это также применимо для значений NULL внутри массивов.
**Параметры движка**

@@ -100,4 +101,3 @@ SELECT * FROM mysql_table

- [Табличная функция ‘mysql’](../../../engines/table-engines/integrations/mysql.md)
- [Использование MySQL в качестве источника для внешнего словаря](../../../engines/table-engines/integrations/mysql.md#dicts-external_dicts_dict_sources-mysql)

-[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/table_engines/mysql/)
diff --git a/docs/ru/engines/table-engines/integrations/odbc.md b/docs/ru/engines/table-engines/integrations/odbc.md
index 97317d647c8..669977ff531 100644
--- a/docs/ru/engines/table-engines/integrations/odbc.md
+++ b/docs/ru/engines/table-engines/integrations/odbc.md
@@ -1,5 +1,5 @@
---
-toc_priority: 1
+toc_priority: 2
toc_title: ODBC
---

@@ -29,6 +29,7 @@ ENGINE = ODBC(connection_settings, external_database, external_table)

- Имена столбцов должны быть такими же, как в исходной таблице, но вы можете использовать только некоторые из этих столбцов и в любом порядке.
- Типы столбцов могут отличаться от типов аналогичных столбцов в исходной таблице. ClickHouse пытается [приводить](../../../engines/table-engines/integrations/odbc.md#type_conversion_function-cast) значения к типам данных ClickHouse.
+- Настройка `external_table_functions_use_nulls` определяет, как обрабатывать Nullable столбцы. По умолчанию — 1; если 0, табличная функция не создаёт Nullable столбцы и вместо NULL подставляет значения по умолчанию для скалярного типа. Это также применимо к значениям NULL внутри массивов.

**Параметры движка**

@@ -127,4 +128,3 @@ SELECT * FROM odbc_t

- [Внешние словари ODBC](../../../engines/table-engines/integrations/odbc.md#dicts-external_dicts_dict_sources-odbc)
- [Табличная функция odbc](../../../engines/table-engines/integrations/odbc.md)

-[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/table_engines/odbc/)
diff --git a/docs/ru/engines/table-engines/integrations/postgresql.md b/docs/ru/engines/table-engines/integrations/postgresql.md
new file mode 100644
index 00000000000..cb8e38ae5c9
--- /dev/null
+++ b/docs/ru/engines/table-engines/integrations/postgresql.md
@@ -0,0 +1,145 @@
+---
+toc_priority: 11
+toc_title: PostgreSQL
+---
+
+# PostgreSQL {#postgresql}
+
+Движок PostgreSQL позволяет выполнять запросы `SELECT` и `INSERT` для таблиц на удаленном сервере PostgreSQL.
+
+## Создание таблицы {#creating-a-table}
+
+``` sql
+CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
+(
+    name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1] [TTL expr1],
+    name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2] [TTL expr2],
+    ...
+) ENGINE = PostgreSQL('host:port', 'database', 'table', 'user', 'password'[, `schema`]);
+```
+
+Смотрите подробное описание запроса [CREATE TABLE](../../../sql-reference/statements/create/table.md#create-table-query).
+
+Структура таблицы может отличаться от исходной структуры таблицы PostgreSQL:
+
+- Имена столбцов должны быть такими же, как в исходной таблице PostgreSQL, но вы можете использовать только некоторые из этих столбцов и в любом порядке.
+- Типы столбцов могут отличаться от типов в исходной таблице PostgreSQL. ClickHouse пытается [приводить](../../../sql-reference/functions/type-conversion-functions.md#type_conversion_function-cast) значения к типам данных ClickHouse.
+- Настройка `external_table_functions_use_nulls` определяет, как обрабатывать Nullable столбцы. По умолчанию — 1; если 0, табличная функция не создаёт Nullable столбцы и вместо NULL подставляет значения по умолчанию для скалярного типа.
Это также применимо к значениям NULL внутри массивов.
+
+**Параметры движка**
+
+- `host:port` — адрес сервера PostgreSQL.
+- `database` — имя базы данных на сервере PostgreSQL.
+- `table` — имя таблицы.
+- `user` — имя пользователя PostgreSQL.
+- `password` — пароль пользователя PostgreSQL.
+- `schema` — имя схемы, если не используется схема по умолчанию. Необязательный аргумент.
+
+## Особенности реализации {#implementation-details}
+
+Запросы `SELECT` на стороне PostgreSQL выполняются как `COPY (SELECT ...) TO STDOUT` внутри транзакции PostgreSQL только для чтения, с коммитом после каждого запроса `SELECT`.
+
+Простые условия для `WHERE`, такие как `=`, `!=`, `>`, `>=`, `<`, `<=` и `IN`, исполняются на стороне сервера PostgreSQL.
+
+Все операции объединения, агрегации, сортировки, условия `IN [ array ]` и ограничения `LIMIT` выполняются на стороне ClickHouse только после того, как запрос к PostgreSQL закончился.
+
+Запросы `INSERT` на стороне PostgreSQL выполняются как `COPY "table_name" (field1, field2, ... fieldN) FROM STDIN` внутри транзакции PostgreSQL с автоматическим коммитом после каждого запроса `INSERT`.
+
+Массивы PostgreSQL конвертируются в массивы ClickHouse.
+
+!!! info "Внимание"
+    Будьте внимательны: в PostgreSQL массивы, созданные как `type_name[]`, являются многомерными и могут содержать разное количество измерений в разных строках одной таблицы. Внутри ClickHouse допустимы только многомерные массивы с одинаковым количеством измерений во всех строках таблицы.
+
+При использовании словаря PostgreSQL поддерживается приоритет реплик. Чем больше значение `priority`, тем ниже приоритет реплики. Наивысший приоритет у реплики со значением `0`.
+
+В примере ниже реплика `example01-1` имеет более высокий приоритет:
+
+```xml
+<postgresql>
+    <port>5432</port>
+    <user>clickhouse</user>
+    <password>qwerty</password>
+    <replica>
+        <host>example01-1</host>
+        <priority>1</priority>
+    </replica>
+    <replica>
+        <host>example01-2</host>
+        <priority>2</priority>
+    </replica>
+    <db>db_name</db>
+    <table>table_name</table>
+    <where>id=10</where>
+    <invalidate_query>SQL_QUERY</invalidate_query>
+</postgresql>
+```
+
+## Пример использования {#usage-example}
+
+Таблица в PostgreSQL:
+
+``` text
+postgres=# CREATE TABLE "public"."test" (
+"int_id" SERIAL,
+"int_nullable" INT NULL DEFAULT NULL,
+"float" FLOAT NOT NULL,
+"str" VARCHAR(100) NOT NULL DEFAULT '',
+"float_nullable" FLOAT NULL DEFAULT NULL,
+PRIMARY KEY (int_id));
+
+CREATE TABLE
+
+postgres=# INSERT INTO test (int_id, str, "float") VALUES (1,'test',2);
+INSERT 0 1
+
+postgres=# SELECT * FROM test;
+ int_id | int_nullable | float | str  | float_nullable
+--------+--------------+-------+------+----------------
+      1 |              |     2 | test |
+(1 row)
+```
+
+Таблица в ClickHouse, получающая данные из созданной выше таблицы PostgreSQL:
+
+``` sql
+CREATE TABLE default.postgresql_table
+(
+    `float_nullable` Nullable(Float32),
+    `str` String,
+    `int_id` Int32
+)
+ENGINE = PostgreSQL('localhost:5432', 'public', 'test', 'postgres_user', 'postgres_password');
+```
+
+``` sql
+SELECT * FROM postgresql_table WHERE str IN ('test');
+```
+
+``` text
+┌─float_nullable─┬─str──┬─int_id─┐
+│           ᴺᵁᴸᴸ │ test │      1 │
+└────────────────┴──────┴────────┘
+```
+
+Использование схемы, отличной от схемы по умолчанию:
+
+```text
+postgres=# CREATE SCHEMA "nice.schema";
+
+postgres=# CREATE TABLE "nice.schema"."nice.table" (a integer);
+
+postgres=# INSERT INTO "nice.schema"."nice.table" SELECT i FROM generate_series(0, 99) as t(i)
+```
+
+```sql
+CREATE TABLE pg_table_schema_with_dots (a UInt32)
+    ENGINE = PostgreSQL('localhost:5432', 'clickhouse', 'nice.table', 'postgres_user', 'password', 'nice.schema');
+```
+
+**См. также**
+
+- [Табличная функция `postgresql`](../../../sql-reference/table-functions/postgresql.md)
+- [Использование PostgreSQL в качестве источника для внешнего словаря](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md#dicts-external_dicts_dict_sources-postgresql)
+
+[Оригинальная статья](https://clickhouse.tech/docs/ru/engines/table-engines/integrations/postgresql/)
diff --git a/docs/ru/engines/table-engines/integrations/rabbitmq.md b/docs/ru/engines/table-engines/integrations/rabbitmq.md
index f55163c1988..ef8a58c4c82 100644
--- a/docs/ru/engines/table-engines/integrations/rabbitmq.md
+++ b/docs/ru/engines/table-engines/integrations/rabbitmq.md
@@ -155,3 +155,4 @@ Example:
- `_redelivered` - флаг `redelivered`. (Не равно нулю, если есть возможность, что сообщение было получено более, чем одним каналом.)
- `_message_id` - значение поля `messageID` полученного сообщения. Данное поле непусто, если указано в параметрах при отправке сообщения.
- `_timestamp` - значение поля `timestamp` полученного сообщения. Данное поле непусто, если указано в параметрах при отправке сообщения.
+
diff --git a/docs/ru/engines/table-engines/integrations/s3.md b/docs/ru/engines/table-engines/integrations/s3.md
new file mode 100644
index 00000000000..bee1fc1318a
--- /dev/null
+++ b/docs/ru/engines/table-engines/integrations/s3.md
@@ -0,0 +1,146 @@
+---
+toc_priority: 4
+toc_title: S3
+---
+
+# Движок таблиц S3 {#table-engine-s3}
+
+Этот движок обеспечивает интеграцию с экосистемой [Amazon S3](https://aws.amazon.com/s3/). Он похож на движок [HDFS](../../../engines/table-engines/integrations/hdfs.md#table_engines-hdfs), но предоставляет специфические для S3 возможности.
+
+## Создание таблицы {#creating-a-table}
+
+``` sql
+CREATE TABLE s3_engine_table (name String, value UInt32)
+ENGINE = S3(path, [aws_access_key_id, aws_secret_access_key,] format, [compression])
+```
+
+**Параметры движка**
+
+- `path` — URL-адрес бакета с указанием пути к файлу.
Поддерживает следующие подстановочные знаки в режиме "только чтение": `*`, `?`, `{abc,def}` и `{N..M}`, где `N`, `M` — числа, `'abc'`, `'def'` — строки. Подробнее смотрите [ниже](#wildcards-in-path).
+- `format` — [формат](../../../interfaces/formats.md#formats) файла.
+- `aws_access_key_id`, `aws_secret_access_key` — данные пользователя учетной записи [AWS](https://aws.amazon.com/ru/). Вы можете использовать их для аутентификации ваших запросов. Необязательный параметр. Если параметры учетной записи не указаны, то используются данные из конфигурационного файла. Смотрите подробнее [Использование сервиса S3 для хранения данных](../mergetree-family/mergetree.md#table_engine-mergetree-s3).
+- `compression` — тип сжатия. Возможные значения: `none`, `gzip/gz`, `brotli/br`, `xz/LZMA`, `zstd/zst`. Необязательный параметр. Если не указан, тип сжатия определяется автоматически по расширению файла.
+
+**Пример**
+
+``` sql
+CREATE TABLE s3_engine_table (name String, value UInt32)
+ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/test-data.csv.gz', 'CSV', 'gzip');
+INSERT INTO s3_engine_table VALUES ('one', 1), ('two', 2), ('three', 3);
+SELECT * FROM s3_engine_table LIMIT 2;
+```
+
+``` text
+┌─name─┬─value─┐
+│ one  │     1 │
+│ two  │     2 │
+└──────┴───────┘
+```
+
+## Виртуальные столбцы {#virtual-columns}
+
+- `_path` — путь к файлу.
+- `_file` — имя файла.
+
+Подробнее про виртуальные столбцы можно прочитать [здесь](../../../engines/table-engines/index.md#table_engines-virtual_columns).
+
+## Детали реализации {#implementation-details}
+
+- Чтение и запись могут быть параллельными.
+- Не поддерживаются:
+    - запросы `ALTER` и `SELECT...SAMPLE`,
+    - индексы,
+    - репликация.
+
+## Символы подстановки {#wildcards-in-path}
+
+Аргумент `path` может указывать на несколько файлов, используя подстановочные знаки. Для обработки файл должен существовать и соответствовать всему шаблону пути. Список файлов определяется во время выполнения запроса `SELECT` (не в момент выполнения запроса `CREATE`).
+
+- `*` — заменяет любое количество любых символов, кроме `/`, включая пустую строку.
+- `?` — заменяет любой одиночный символ.
+- `{some_string, another_string, yet_another_one}` — заменяет любую из строк `'some_string', 'another_string', 'yet_another_one'`.
+- `{N..M}` — заменяет любое число от N до M, включая обе границы. N и M могут иметь ведущие нули, например `000..078`.
+
+Конструкции с `{}` аналогичны функции [remote](../../../sql-reference/table-functions/remote.md).
+
+## Настройки движка S3 {#s3-settings}
+
+Перед выполнением запроса или в конфигурационном файле могут быть установлены следующие настройки:
+
+- `s3_max_single_part_upload_size` — максимальный размер объекта для загрузки с использованием однокомпонентной загрузки в S3. Значение по умолчанию — `64 Mб`.
+- `s3_min_upload_part_size` — минимальный размер части объекта при многокомпонентной загрузке в [S3 Multipart upload](https://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html). Значение по умолчанию — `512 Mб`.
+- `s3_max_redirects` — максимальное количество разрешенных переадресаций S3. Значение по умолчанию — `10`.
+
+Соображения безопасности: если злонамеренный пользователь может указать произвольные URL-адреса S3, параметр `s3_max_redirects` должен быть установлен в ноль во избежание атак [SSRF](https://en.wikipedia.org/wiki/Server-side_request_forgery). В качестве альтернативы в конфигурации сервера можно указать `remote_host_filter`.
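+
+Примерный набросок установки этих настроек перед запросом (значения условные):
+
+``` sql
+-- запрет переадресаций защищает от SSRF при работе с недоверенными URL-адресами
+SET s3_max_redirects = 0;
+-- порог однокомпонентной загрузки задается в байтах
+SET s3_max_single_part_upload_size = 33554432;
+```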
+
+## Настройки точки приема запроса {#endpoint-settings}
+
+Для точки приема запроса (которая соответствует точному префиксу URL-адреса) в конфигурационном файле могут быть заданы следующие настройки.
+
+Обязательная настройка:
+- `endpoint` — указывает префикс точки приема запроса.
+
+Необязательные настройки:
+- `access_key_id` и `secret_access_key` — указывают учетные данные для использования с данной точкой приема запроса.
+- `use_environment_credentials` — если `true`, S3-клиент будет пытаться получить учетные данные из переменных среды и метаданных Amazon EC2 для данной точки приема запроса. Значение по умолчанию — `false`.
+- `header` — добавляет указанный HTTP-заголовок к запросу на заданную точку приема запроса. Может быть указан несколько раз.
+- `server_side_encryption_customer_key_base64` — устанавливает необходимые заголовки для доступа к объектам S3 с шифрованием SSE-C.
+
+**Пример**
+
+``` xml
+<s3>
+    <endpoint-name>
+        <endpoint>https://storage.yandexcloud.net/my-test-bucket-768/</endpoint>
+        <!-- <access_key_id>ACCESS_KEY_ID</access_key_id> -->
+        <!-- <secret_access_key>SECRET_ACCESS_KEY</secret_access_key> -->
+        <!-- <use_environment_credentials>false</use_environment_credentials> -->
+        <!-- <header>Authorization: Bearer SOME-TOKEN</header> -->
+        <!-- <server_side_encryption_customer_key_base64>BASE64-ENCODED-KEY</server_side_encryption_customer_key_base64> -->
+    </endpoint-name>
+</s3>
+```
+
+## Примеры использования {#usage-examples}
+
+Предположим, у нас есть несколько файлов в формате CSV со следующими URL-адресами в S3:
+
+- 'https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_1.csv'
+- 'https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_2.csv'
+- 'https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_3.csv'
+- 'https://storage.yandexcloud.net/my-test-bucket-768/another_prefix/some_file_1.csv'
+- 'https://storage.yandexcloud.net/my-test-bucket-768/another_prefix/some_file_2.csv'
+- 'https://storage.yandexcloud.net/my-test-bucket-768/another_prefix/some_file_3.csv'
+
+1. Существует несколько способов создать таблицу, включающую в себя все шесть файлов:
+
+``` sql
+CREATE TABLE table_with_range (name String, value UInt32)
+ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/{some,another}_prefix/some_file_{1..3}', 'CSV');
+```
+
+2. Другой способ:
+
+``` sql
+CREATE TABLE table_with_question_mark (name String, value UInt32)
+ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/{some,another}_prefix/some_file_?', 'CSV');
+```
+
+3. Таблица содержит все файлы в обоих каталогах (все файлы должны соответствовать формату и схеме, описанным в запросе):
+
+``` sql
+CREATE TABLE table_with_asterisk (name String, value UInt32)
+ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/{some,another}_prefix/*', 'CSV');
+```
+
+Если список файлов содержит диапазоны чисел с ведущими нулями, используйте конструкцию с фигурными скобками для каждой цифры отдельно или используйте `?`.
+
+4. Создание таблицы из файлов с именами `file-000.csv`, `file-001.csv`, … , `file-999.csv`:
+
+``` sql
+CREATE TABLE big_table (name String, value UInt32)
+ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/big_prefix/file-{000..999}.csv', 'CSV');
+```
+
+**См. также**
+
+- [Табличная функция S3](../../../sql-reference/table-functions/s3.md)
diff --git a/docs/ru/engines/table-engines/log-family/index.md b/docs/ru/engines/table-engines/log-family/index.md
index b2a56f650f4..7737eac2f43 100644
--- a/docs/ru/engines/table-engines/log-family/index.md
+++ b/docs/ru/engines/table-engines/log-family/index.md
@@ -42,4 +42,3 @@ toc_priority: 29

Движки `Log` и `StripeLog` поддерживают параллельное чтение. При чтении данных ClickHouse использует множество потоков. Каждый поток обрабатывает отдельный блок данных. Движок `Log` сохраняет каждый столбец таблицы в отдельном файле.
Движок `StripeLog` хранит все данные в одном файле. Таким образом, движок `StripeLog` использует меньше дескрипторов в операционной системе, а движок `Log` обеспечивает более эффективное считывание данных. -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/table_engines/log_family/) diff --git a/docs/ru/engines/table-engines/log-family/log.md b/docs/ru/engines/table-engines/log-family/log.md index fad331454c7..6c5bf2221f8 100644 --- a/docs/ru/engines/table-engines/log-family/log.md +++ b/docs/ru/engines/table-engines/log-family/log.md @@ -11,4 +11,3 @@ toc_title: Log При конкурентном доступе к данным, чтения могут выполняться одновременно, а записи блокируют чтения и друг друга. Движок Log не поддерживает индексы. Также, если при записи в таблицу произошёл сбой, то таблица станет битой, и чтения из неё будут возвращать ошибку. Движок Log подходит для временных данных, write-once таблиц, а также для тестовых и демонстрационных целей. -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/table_engines/log/) diff --git a/docs/ru/engines/table-engines/log-family/stripelog.md b/docs/ru/engines/table-engines/log-family/stripelog.md index e505aae4c52..2f4b228f894 100644 --- a/docs/ru/engines/table-engines/log-family/stripelog.md +++ b/docs/ru/engines/table-engines/log-family/stripelog.md @@ -90,4 +90,3 @@ SELECT * FROM stripe_log_table ORDER BY timestamp └─────────────────────┴──────────────┴────────────────────────────┘ ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/table_engines/stripelog/) diff --git a/docs/ru/engines/table-engines/log-family/tinylog.md b/docs/ru/engines/table-engines/log-family/tinylog.md index d5c24d41ca4..721355d8702 100644 --- a/docs/ru/engines/table-engines/log-family/tinylog.md +++ b/docs/ru/engines/table-engines/log-family/tinylog.md @@ -11,4 +11,3 @@ toc_title: TinyLog Запросы выполняются в один поток. То есть, этот движок предназначен для сравнительно маленьких таблиц (до 1 000 000 строк). Этот движок таблиц имеет смысл использовать в том случае, когда у вас есть много маленьких таблиц, так как он проще, чем движок [Log](log.md) (требуется открывать меньше файлов). 
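
Небольшой набросок для иллюстрации (имя таблицы условное):

``` sql
CREATE TABLE tiny_log_example (id UInt64, message String) ENGINE = TinyLog;
INSERT INTO tiny_log_example VALUES (1, 'первая строка'), (2, 'вторая строка');
SELECT * FROM tiny_log_example;
```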
-[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/table_engines/tinylog/)
diff --git a/docs/ru/engines/table-engines/mergetree-family/aggregatingmergetree.md b/docs/ru/engines/table-engines/mergetree-family/aggregatingmergetree.md
index 99b4ec06765..6e01cc2bcac 100644
--- a/docs/ru/engines/table-engines/mergetree-family/aggregatingmergetree.md
+++ b/docs/ru/engines/table-engines/mergetree-family/aggregatingmergetree.md
@@ -97,4 +97,3 @@ GROUP BY StartDate
ORDER BY StartDate;
```

-[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/table_engines/aggregatingmergetree/)
diff --git a/docs/ru/engines/table-engines/mergetree-family/collapsingmergetree.md b/docs/ru/engines/table-engines/mergetree-family/collapsingmergetree.md
index 8ea3a5a7c92..424fcbb5873 100644
--- a/docs/ru/engines/table-engines/mergetree-family/collapsingmergetree.md
+++ b/docs/ru/engines/table-engines/mergetree-family/collapsingmergetree.md
@@ -304,4 +304,3 @@ select * FROM UAct
└─────────────────────┴───────────┴──────────┴──────┘
```

-[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/table_engines/collapsingmergetree/)
diff --git a/docs/ru/engines/table-engines/mergetree-family/custom-partitioning-key.md b/docs/ru/engines/table-engines/mergetree-family/custom-partitioning-key.md
index 00d850b01c3..9a09618e508 100644
--- a/docs/ru/engines/table-engines/mergetree-family/custom-partitioning-key.md
+++ b/docs/ru/engines/table-engines/mergetree-family/custom-partitioning-key.md
@@ -129,4 +129,3 @@ drwxr-xr-x 2 clickhouse clickhouse 4096 Feb  1 16:48 detached

ClickHouse позволяет производить различные манипуляции с кусками: удалять, копировать из одной таблицы в другую или создавать их резервные копии. Подробнее см. в разделе [Манипуляции с партициями и кусками](../../../engines/table-engines/mergetree-family/custom-partitioning-key.md#alter_manipulations-with-partitions).

-[Оригинальная статья:](https://clickhouse.tech/docs/ru/operations/table_engines/custom_partitioning_key/)
diff --git a/docs/ru/engines/table-engines/mergetree-family/graphitemergetree.md b/docs/ru/engines/table-engines/mergetree-family/graphitemergetree.md
index e47c9127711..f3e915a413b 100644
--- a/docs/ru/engines/table-engines/mergetree-family/graphitemergetree.md
+++ b/docs/ru/engines/table-engines/mergetree-family/graphitemergetree.md
@@ -171,4 +171,3 @@ default
```

-[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/table_engines/graphitemergetree/)
diff --git a/docs/ru/engines/table-engines/mergetree-family/mergetree.md b/docs/ru/engines/table-engines/mergetree-family/mergetree.md
index 6fc566b7c31..b8bd259167a 100644
--- a/docs/ru/engines/table-engines/mergetree-family/mergetree.md
+++ b/docs/ru/engines/table-engines/mergetree-family/mergetree.md
@@ -56,13 +56,13 @@ ORDER BY expr

ClickHouse использует ключ сортировки в качестве первичного ключа, если первичный ключ не задан в секции `PRIMARY KEY`.

-    Чтобы отключить сортировку, используйте синтаксис `ORDER BY tuple()`. Смотрите [выбор первичного ключа](#vybor-pervichnogo-kliucha).
+    Чтобы отключить сортировку, используйте синтаксис `ORDER BY tuple()`. Смотрите [выбор первичного ключа](#selecting-the-primary-key).

- `PARTITION BY` — [ключ партиционирования](custom-partitioning-key.md). Необязательный параметр.

    Для партиционирования по месяцам используйте выражение `toYYYYMM(date_column)`, где `date_column` — столбец с датой типа [Date](../../../engines/table-engines/mergetree-family/mergetree.md). В этом случае имена партиций имеют формат `"YYYYMM"`.
В этом случае имена партиций имеют формат `"YYYYMM"`. -- `PRIMARY KEY` — первичный ключ, если он [отличается от ключа сортировки](#pervichnyi-kliuch-otlichnyi-ot-kliucha-sortirovki). Необязательный параметр. +- `PRIMARY KEY` — первичный ключ, если он [отличается от ключа сортировки](#choosing-a-primary-key-that-differs-from-the-sorting-key). Необязательный параметр. По умолчанию первичный ключ совпадает с ключом сортировки (который задаётся секцией `ORDER BY`.) Поэтому в большинстве случаев секцию `PRIMARY KEY` отдельно указывать не нужно. @@ -188,7 +188,7 @@ ClickHouse не требует уникального первичного кл При сортировке с использованием выражения `ORDER BY` для значений `NULL` всегда работает принцип [NULLS_LAST](../../../sql-reference/statements/select/order-by.md#sorting-of-special-values). -### Выбор первичного ключа {#vybor-pervichnogo-kliucha} +### Выбор первичного ключа {#selecting-the-primary-key} Количество столбцов в первичном ключе не ограничено явным образом. В зависимости от структуры данных в первичный ключ можно включать больше или меньше столбцов. Это может: @@ -217,7 +217,7 @@ ClickHouse не требует уникального первичного кл -### Первичный ключ, отличный от ключа сортировки {#pervichnyi-kliuch-otlichnyi-ot-kliucha-sortirovki} +### Первичный ключ, отличный от ключа сортировки {#choosing-a-primary-key-that-differs-from-the-sorting-key} Существует возможность задать первичный ключ (выражение, значения которого будут записаны в индексный файл для каждой засечки), отличный от ключа сортировки (выражение, по которому будут упорядочены строки в кусках @@ -236,7 +236,7 @@ ClickHouse не требует уникального первичного кл [ALTER ключа сортировки](../../../engines/table-engines/mergetree-family/mergetree.md) — лёгкая операция, так как при одновременном добавлении нового столбца в таблицу и ключ сортировки не нужно изменять данные кусков (они остаются упорядоченными и по новому выражению ключа). -### Использование индексов и партиций в запросах {#ispolzovanie-indeksov-i-partitsii-v-zaprosakh} +### Использование индексов и партиций в запросах {#use-of-indexes-and-partitions-in-queries} Для запросов `SELECT` ClickHouse анализирует возможность использования индекса. Индекс может использоваться, если в секции `WHERE/PREWHERE`, в качестве одного из элементов конъюнкции, или целиком, есть выражение, представляющее операции сравнения на равенства, неравенства, а также `IN` или `LIKE` с фиксированным префиксом, над столбцами или выражениями, входящими в первичный ключ или ключ партиционирования, либо над некоторыми частично монотонными функциями от этих столбцов, а также логические связки над такими выражениями. @@ -270,7 +270,7 @@ SELECT count() FROM table WHERE CounterID = 34 OR URL LIKE '%upyachka%' Ключ партиционирования по месяцам обеспечивает чтение только тех блоков данных, которые содержат даты из нужного диапазона. При этом блок данных может содержать данные за многие даты (до целого месяца). В пределах одного блока данные упорядочены по первичному ключу, который может не содержать дату в качестве первого столбца. В связи с этим, при использовании запроса с указанием условия только на дату, но не на префикс первичного ключа, будет читаться данных больше, чем за одну дату. -### Использование индекса для частично-монотонных первичных ключей {#ispolzovanie-indeksa-dlia-chastichno-monotonnykh-pervichnykh-kliuchei} +### Использование индекса для частично-монотонных первичных ключей {#use-of-index-for-partially-monotonic-primary-keys} Рассмотрим, например, дни месяца. 
Они образуют последовательность [монотонную](https://ru.wikipedia.org/wiki/Монотонная_последовательность) в течение одного месяца, но не монотонную на более длительных периодах. Это частично-монотонная последовательность. Если пользователь создаёт таблицу с частично-монотонным первичным ключом, ClickHouse как обычно создаёт разреженный индекс. Когда пользователь выбирает данные из такого рода таблиц, ClickHouse анализирует условия запроса. Если пользователь хочет получить данные между двумя метками индекса, и обе эти метки находятся внутри одного месяца, ClickHouse может использовать индекс в данном конкретном случае, поскольку он может рассчитать расстояние между параметрами запроса и индексными метками. @@ -312,7 +312,7 @@ SELECT count() FROM table WHERE s < 'z' SELECT count() FROM table WHERE u64 * i32 == 10 AND u64 * length(s) >= 1234 ``` -#### Доступные индексы {#dostupnye-indeksy} +#### Доступные индексы {#available-types-of-indices} - `minmax` — Хранит минимум и максимум выражения (если выражение - `tuple`, то для каждого элемента `tuple`), используя их для пропуска блоков аналогично первичному ключу. @@ -375,7 +375,7 @@ INDEX b (u64 * length(str), i32 + f64 * 100, date, str) TYPE set(100) GRANULARIT - `s != 1` - `NOT startsWith(s, 'test')` -## Конкурентный доступ к данным {#konkurentnyi-dostup-k-dannym} +## Конкурентный доступ к данным {#concurrent-data-access} Для конкурентного доступа к таблице используется мультиверсионность. То есть, при одновременном чтении и обновлении таблицы, данные будут читаться из набора кусочков, актуального на момент запроса. Длинных блокировок нет. Вставки никак не мешают чтениям. @@ -517,7 +517,7 @@ CREATE TABLE table_for_aggregation y Int ) ENGINE = MergeTree -ORDER BY k1, k2 +ORDER BY (k1, k2) TTL d + INTERVAL 1 MONTH GROUP BY k1, k2 SET x = max(x), y = min(y); ``` @@ -531,13 +531,13 @@ TTL d + INTERVAL 1 MONTH GROUP BY k1, k2 SET x = max(x), y = min(y); ## Хранение данных таблицы на нескольких блочных устройствах {#table_engine-mergetree-multiple-volumes} -### Введение {#vvedenie} +### Введение {#introduction} Движки таблиц семейства `MergeTree` могут хранить данные на нескольких блочных устройствах. Это может оказаться полезным, например, при неявном разделении данных одной таблицы на «горячие» и «холодные». Наиболее свежая часть занимает малый объём и запрашивается регулярно, а большой хвост исторических данных запрашивается редко. При наличии в системе нескольких дисков, «горячая» часть данных может быть размещена на быстрых дисках (например, на NVMe SSD или в памяти), а холодная на более медленных (например, HDD). Минимальной перемещаемой единицей для `MergeTree` является кусок данных (data part). Данные одного куска могут находится только на одном диске. Куски могут перемещаться между дисками в фоне, согласно пользовательским настройкам, а также с помощью запросов [ALTER](../../../engines/table-engines/mergetree-family/mergetree.md#alter_move-partition). -### Термины {#terminy} +### Термины {#terms} - Диск — примонтированное в файловой системе блочное устройство. - Диск по умолчанию — диск, на котором находится путь, указанный в конфигурационной настройке сервера [path](../../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-path). 
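
Упомянутое выше перемещение куска или партиции на другой диск или том вручную выглядит примерно так (имя таблицы, партиция и имена диска и тома условные):

``` sql
ALTER TABLE example_table MOVE PARTITION '2021-04-01' TO DISK 'fast_ssd';
ALTER TABLE example_table MOVE PARTITION '2021-03-01' TO VOLUME 'external';
```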
@@ -689,7 +689,7 @@ SETTINGS storage_policy = 'moving_from_ssd_to_hdd'

Количество потоков для фоновых перемещений кусков между дисками можно изменить с помощью настройки [background_move_pool_size](../../../operations/settings/settings.md#background_move_pool_size)

-### Особенности работы {#osobennosti-raboty}
+### Особенности работы {#details}

В таблицах `MergeTree` данные попадают на диск несколькими способами:

@@ -712,4 +712,97 @@ SETTINGS storage_policy = 'moving_from_ssd_to_hdd'

После выполнения фоновых слияний или мутаций старые куски не удаляются сразу, а через некоторое время (табличная настройка `old_parts_lifetime`). Также они не перемещаются на другие тома или диски, поэтому до момента удаления они продолжают учитываться при подсчёте занятого дискового пространства.

-[Оригинальная статья](https://clickhouse.tech/docs/ru/engines/table-engines/mergetree-family/mergetree/)
+## Использование сервиса S3 для хранения данных {#table_engine-mergetree-s3}
+
+Таблицы семейства `MergeTree` могут хранить данные в сервисе [S3](https://aws.amazon.com/s3/) при использовании диска типа `s3`.
+
+Конфигурация:
+
+``` xml
+<storage_configuration>
+    ...
+    <disks>
+        <s3>
+            <type>s3</type>
+            <endpoint>https://storage.yandexcloud.net/my-bucket/root-path/</endpoint>
+            <access_key_id>your_access_key_id</access_key_id>
+            <secret_access_key>your_secret_access_key</secret_access_key>
+            <proxy>
+                <uri>http://proxy1</uri>
+                <uri>http://proxy2</uri>
+            </proxy>
+            <connect_timeout_ms>10000</connect_timeout_ms>
+            <request_timeout_ms>5000</request_timeout_ms>
+            <retry_attempts>10</retry_attempts>
+            <min_bytes_for_seek>1000</min_bytes_for_seek>
+            <metadata_path>/var/lib/clickhouse/disks/s3/</metadata_path>
+            <cache_enabled>true</cache_enabled>
+            <cache_path>/var/lib/clickhouse/disks/s3/cache/</cache_path>
+            <skip_access_check>false</skip_access_check>
+        </s3>
+    </disks>
+    ...
+</storage_configuration>
+```
+
+Обязательные параметры:
+
+- `endpoint` — URL точки приема запроса на стороне S3 в [форматах](https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html) `path` или `virtual hosted`. URL точки должен содержать бакет и путь к корневой директории на сервере, где хранятся данные.
+- `access_key_id` — id ключа доступа к S3.
+- `secret_access_key` — секретный ключ доступа к S3.
+
+Необязательные параметры:
+
+- `use_environment_credentials` — признак, нужно ли считывать учетные данные AWS из сетевого окружения, а также из переменных окружения `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY` и `AWS_SESSION_TOKEN`, если они есть. Значение по умолчанию: `false`.
+- `use_insecure_imds_request` — признак, нужно ли использовать менее безопасное соединение при выполнении запроса к IMDS при получении учётных данных из метаданных Amazon EC2. Значение по умолчанию: `false`.
+- `proxy` — конфигурация прокси-сервера для конечной точки S3. Каждый элемент `uri` внутри блока `proxy` должен содержать URL прокси-сервера.
+- `connect_timeout_ms` — таймаут подключения к сокету в миллисекундах. Значение по умолчанию: 10 секунд.
+- `request_timeout_ms` — таймаут выполнения запроса в миллисекундах. Значение по умолчанию: 5 секунд.
+- `retry_attempts` — число попыток выполнения запроса в случае возникновения ошибки. Значение по умолчанию: `10`.
+- `min_bytes_for_seek` — минимальное количество байтов, которые используются для операций поиска вместо последовательного чтения. Значение по умолчанию: 1 МБайт.
+- `metadata_path` — путь к локальному файловому хранилищу для хранения файлов с метаданными для S3. Значение по умолчанию: `/var/lib/clickhouse/disks/<disk_name>/`.
+- `cache_enabled` — признак, разрешено ли хранение кэша засечек и файлов индекса в локальной файловой системе. Значение по умолчанию: `true`.
+- `cache_path` — путь в локальной файловой системе, где будут храниться кэш засечек и файлы индекса. Значение по умолчанию: `/var/lib/clickhouse/disks/<disk_name>/cache/`.
+- `skip_access_check` — признак, выполнять ли проверку доступов при запуске диска. Если установлено значение `true`, то проверка не выполняется.
Значение по умолчанию: `false`.
+
+Диск S3 может быть сконфигурирован как `main` или `cold`:
+
+``` xml
+<storage_configuration>
+    ...
+    <disks>
+        <s3>
+            <type>s3</type>
+            <endpoint>https://storage.yandexcloud.net/my-bucket/root-path/</endpoint>
+            <access_key_id>your_access_key_id</access_key_id>
+            <secret_access_key>your_secret_access_key</secret_access_key>
+        </s3>
+    </disks>
+    <policies>
+        <s3_main>
+            <volumes>
+                <main>
+                    <disk>s3</disk>
+                </main>
+            </volumes>
+        </s3_main>
+        <s3_cold>
+            <volumes>
+                <main>
+                    <disk>default</disk>
+                </main>
+                <external>
+                    <disk>s3</disk>
+                </external>
+            </volumes>
+            <move_factor>0.2</move_factor>
+        </s3_cold>
+    </policies>
+    ...
+</storage_configuration>
+``` + +Если диск сконфигурирован как `cold`, данные будут переноситься в S3 при срабатывании правил TTL или когда свободное место на локальном диске станет меньше порогового значения, которое определяется как `move_factor * disk_size`. + + diff --git a/docs/ru/engines/table-engines/mergetree-family/replacingmergetree.md b/docs/ru/engines/table-engines/mergetree-family/replacingmergetree.md index a4e47b161ad..ec0b339e8c9 100644 --- a/docs/ru/engines/table-engines/mergetree-family/replacingmergetree.md +++ b/docs/ru/engines/table-engines/mergetree-family/replacingmergetree.md @@ -66,4 +66,3 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster] -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/table_engines/replacingmergetree/) diff --git a/docs/ru/engines/table-engines/mergetree-family/replication.md b/docs/ru/engines/table-engines/mergetree-family/replication.md index 1735a02cf4c..848adbee4da 100644 --- a/docs/ru/engines/table-engines/mergetree-family/replication.md +++ b/docs/ru/engines/table-engines/mergetree-family/replication.md @@ -251,4 +251,3 @@ $ sudo -u clickhouse touch /var/lib/clickhouse/flags/force_restore_data - [background_schedule_pool_size](../../../operations/settings/settings.md#background_schedule_pool_size) - [execute_merges_on_single_replica_time_threshold](../../../operations/settings/settings.md#execute-merges-on-single-replica-time-threshold) -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/table_engines/replication/) diff --git a/docs/ru/engines/table-engines/mergetree-family/summingmergetree.md b/docs/ru/engines/table-engines/mergetree-family/summingmergetree.md index 7b9c11adc2e..adb40037319 100644 --- a/docs/ru/engines/table-engines/mergetree-family/summingmergetree.md +++ b/docs/ru/engines/table-engines/mergetree-family/summingmergetree.md @@ -136,4 +136,3 @@ ClickHouse может слить куски данных таким образо Для вложенной структуры данных не нужно указывать её столбцы в кортеже столбцов для суммирования. -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/table_engines/summingmergetree/) diff --git a/docs/ru/engines/table-engines/mergetree-family/versionedcollapsingmergetree.md b/docs/ru/engines/table-engines/mergetree-family/versionedcollapsingmergetree.md index 2adb8cc0d77..61688b1f00f 100644 --- a/docs/ru/engines/table-engines/mergetree-family/versionedcollapsingmergetree.md +++ b/docs/ru/engines/table-engines/mergetree-family/versionedcollapsingmergetree.md @@ -233,4 +233,3 @@ SELECT * FROM UAct FINAL Это очень неэффективный способ выбора данных. Не используйте его для больших таблиц. -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/table_engines/versionedcollapsingmergetree/) diff --git a/docs/ru/engines/table-engines/special/buffer.md b/docs/ru/engines/table-engines/special/buffer.md index 75ce12f50fa..ba865b72b78 100644 --- a/docs/ru/engines/table-engines/special/buffer.md +++ b/docs/ru/engines/table-engines/special/buffer.md @@ -66,4 +66,3 @@ CREATE TABLE merge.hits_buffer AS merge.hits ENGINE = Buffer(merge, hits, 16, 10 Заметим, что даже для таблиц типа Buffer не имеет смысла вставлять данные по одной строке, так как таким образом будет достигнута скорость всего лишь в несколько тысяч строк в секунду, тогда как при вставке более крупными блоками, достижимо более миллиона строк в секунду (смотрите раздел [«Производительность»](../../../introduction/performance/). 
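
Набросок, иллюстрирующий вставку крупным блоком (имена таблиц условные):

``` sql
CREATE TABLE events (ts DateTime, id UInt64) ENGINE = MergeTree ORDER BY id;
CREATE TABLE events_buffer AS events ENGINE = Buffer(currentDatabase(), 'events', 16, 10, 100, 10000, 1000000, 10000000, 100000000);
-- один INSERT на 100000 строк работает на порядки быстрее, чем 100000 одиночных INSERT
INSERT INTO events_buffer SELECT now(), number FROM numbers(100000);
```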
-[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/table_engines/buffer/) diff --git a/docs/ru/engines/table-engines/special/dictionary.md b/docs/ru/engines/table-engines/special/dictionary.md index 048da157b2d..243fd5395c0 100644 --- a/docs/ru/engines/table-engines/special/dictionary.md +++ b/docs/ru/engines/table-engines/special/dictionary.md @@ -90,4 +90,3 @@ select * from products limit 1; └───────────────┴─────────────────┘ ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/table_engines/dictionary/) diff --git a/docs/ru/engines/table-engines/special/distributed.md b/docs/ru/engines/table-engines/special/distributed.md index 7ab0b916337..86eef35ebbc 100644 --- a/docs/ru/engines/table-engines/special/distributed.md +++ b/docs/ru/engines/table-engines/special/distributed.md @@ -136,4 +136,3 @@ logs - имя кластера в конфигурационном файле с При выставлении опции max_parallel_replicas выполнение запроса распараллеливается по всем репликам внутри одного шарда. Подробнее смотрите раздел [max_parallel_replicas](../../../operations/settings/settings.md#settings-max_parallel_replicas). -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/table_engines/distributed/) diff --git a/docs/ru/engines/table-engines/special/external-data.md b/docs/ru/engines/table-engines/special/external-data.md index da9e132dd4f..29075837aba 100644 --- a/docs/ru/engines/table-engines/special/external-data.md +++ b/docs/ru/engines/table-engines/special/external-data.md @@ -65,4 +65,3 @@ $ curl -F 'passwd=@passwd.tsv;' 'http://localhost:8123/?query=SELECT+shell,+coun При распределённой обработке запроса, временные таблицы передаются на все удалённые серверы. -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/table_engines/external_data/) diff --git a/docs/ru/engines/table-engines/special/file.md b/docs/ru/engines/table-engines/special/file.md index 9be09fd33e6..6f1c723d2a7 100644 --- a/docs/ru/engines/table-engines/special/file.md +++ b/docs/ru/engines/table-engines/special/file.md @@ -81,4 +81,3 @@ $ echo -e "1,2\n3,4" | clickhouse-local -q "CREATE TABLE table (a Int64, b Int64 - индексы; - репликация. -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/table_engines/file/) diff --git a/docs/ru/engines/table-engines/special/index.md b/docs/ru/engines/table-engines/special/index.md index 0300d3ad641..231bf2979ed 100644 --- a/docs/ru/engines/table-engines/special/index.md +++ b/docs/ru/engines/table-engines/special/index.md @@ -13,4 +13,3 @@ toc_priority: 31 Остальные движки таблиц уникальны по своему назначению и еще не сгруппированы в семейства, поэтому они помещены в эту специальную категорию. -[Оригинальная статья](https://clickhouse.tech/docs/ru/engines/table-engines/special/) diff --git a/docs/ru/engines/table-engines/special/join.md b/docs/ru/engines/table-engines/special/join.md index 8cb7acd91e1..ef27ac3f10f 100644 --- a/docs/ru/engines/table-engines/special/join.md +++ b/docs/ru/engines/table-engines/special/join.md @@ -107,4 +107,3 @@ SELECT joinGet('id_val_join', 'val', toUInt32(1)) При аварийном перезапуске сервера блок данных на диске может быть потерян или повреждён. В последнем случае, может потребоваться вручную удалить файл с повреждёнными данными. 
-[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/table_engines/join/) diff --git a/docs/ru/engines/table-engines/special/materializedview.md b/docs/ru/engines/table-engines/special/materializedview.md index 1281d1db9ab..6b82f95df92 100644 --- a/docs/ru/engines/table-engines/special/materializedview.md +++ b/docs/ru/engines/table-engines/special/materializedview.md @@ -7,4 +7,3 @@ toc_title: MaterializedView Используется для реализации материализованных представлений (подробнее см. запрос [CREATE TABLE](../../../sql-reference/statements/create/table.md#create-table-query)). Для хранения данных, использует другой движок, который был указан при создании представления. При чтении из таблицы, просто использует этот движок. -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/table_engines/materializedview/) diff --git a/docs/ru/engines/table-engines/special/memory.md b/docs/ru/engines/table-engines/special/memory.md index 9ca189ef3b2..5a242238a02 100644 --- a/docs/ru/engines/table-engines/special/memory.md +++ b/docs/ru/engines/table-engines/special/memory.md @@ -14,4 +14,3 @@ toc_title: Memory Движок Memory используется системой для временных таблиц - внешних данных запроса (смотрите раздел «Внешние данные для обработки запроса»), для реализации `GLOBAL IN` (смотрите раздел «Операторы IN»). -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/table_engines/memory/) diff --git a/docs/ru/engines/table-engines/special/merge.md b/docs/ru/engines/table-engines/special/merge.md index 656aa7cfd6b..714b087c201 100644 --- a/docs/ru/engines/table-engines/special/merge.md +++ b/docs/ru/engines/table-engines/special/merge.md @@ -65,4 +65,3 @@ FROM WatchLog - [Виртуальные столбцы](index.md#table_engines-virtual_columns) -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/table_engines/merge/) diff --git a/docs/ru/engines/table-engines/special/null.md b/docs/ru/engines/table-engines/special/null.md index 2c3af1ce11e..05f5c88bacb 100644 --- a/docs/ru/engines/table-engines/special/null.md +++ b/docs/ru/engines/table-engines/special/null.md @@ -7,4 +7,3 @@ toc_title: 'Null' Тем не менее, есть возможность создать материализованное представление над таблицей типа Null. Тогда данные, записываемые в таблицу, будут попадать в представление. -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/table_engines/null/) diff --git a/docs/ru/engines/table-engines/special/set.md b/docs/ru/engines/table-engines/special/set.md index 14b7f123a34..ced9abf55dc 100644 --- a/docs/ru/engines/table-engines/special/set.md +++ b/docs/ru/engines/table-engines/special/set.md @@ -20,4 +20,3 @@ toc_title: Set - [persistent](../../../operations/settings/settings.md#persistent) -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/table_engines/set/) diff --git a/docs/ru/engines/table-engines/special/url.md b/docs/ru/engines/table-engines/special/url.md index cdb5afddf75..b8fcd27204f 100644 --- a/docs/ru/engines/table-engines/special/url.md +++ b/docs/ru/engines/table-engines/special/url.md @@ -77,4 +77,3 @@ SELECT * FROM url_engine_table - индексы; - репликация. 
-[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/table_engines/url/) diff --git a/docs/ru/engines/table-engines/special/view.md b/docs/ru/engines/table-engines/special/view.md index 18813a55da2..45aeb55cd85 100644 --- a/docs/ru/engines/table-engines/special/view.md +++ b/docs/ru/engines/table-engines/special/view.md @@ -7,4 +7,3 @@ toc_title: View Используется для реализации представлений (подробнее см. запрос `CREATE VIEW`). Не хранит данные, а хранит только указанный запрос `SELECT`. При чтении из таблицы, выполняет его (с удалением из запроса всех ненужных столбцов). -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/table_engines/view/) diff --git a/docs/ru/getting-started/example-datasets/amplab-benchmark.md b/docs/ru/getting-started/example-datasets/amplab-benchmark.md index bc59672ab26..8a75852aad9 100644 --- a/docs/ru/getting-started/example-datasets/amplab-benchmark.md +++ b/docs/ru/getting-started/example-datasets/amplab-benchmark.md @@ -125,4 +125,3 @@ ORDER BY totalRevenue DESC LIMIT 1 ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/getting_started/example_datasets/amplab_benchmark/) diff --git a/docs/ru/getting-started/example-datasets/brown-benchmark.md b/docs/ru/getting-started/example-datasets/brown-benchmark.md index 23702e07fcd..f1aad06b743 100644 --- a/docs/ru/getting-started/example-datasets/brown-benchmark.md +++ b/docs/ru/getting-started/example-datasets/brown-benchmark.md @@ -413,4 +413,3 @@ ORDER BY yr, Данные также доступны для работы с интерактивными запросами через [Playground](https://gh-api.clickhouse.tech/play?user=play), [пример](https://gh-api.clickhouse.tech/play?user=play#U0VMRUNUIG1hY2hpbmVfbmFtZSwKICAgICAgIE1JTihjcHUpIEFTIGNwdV9taW4sCiAgICAgICBNQVgoY3B1KSBBUyBjcHVfbWF4LAogICAgICAgQVZHKGNwdSkgQVMgY3B1X2F2ZywKICAgICAgIE1JTihuZXRfaW4pIEFTIG5ldF9pbl9taW4sCiAgICAgICBNQVgobmV0X2luKSBBUyBuZXRfaW5fbWF4LAogICAgICAgQVZHKG5ldF9pbikgQVMgbmV0X2luX2F2ZywKICAgICAgIE1JTihuZXRfb3V0KSBBUyBuZXRfb3V0X21pbiwKICAgICAgIE1BWChuZXRfb3V0KSBBUyBuZXRfb3V0X21heCwKICAgICAgIEFWRyhuZXRfb3V0KSBBUyBuZXRfb3V0X2F2ZwpGUk9NICgKICBTRUxFQ1QgbWFjaGluZV9uYW1lLAogICAgICAgICBDT0FMRVNDRShjcHVfdXNlciwgMC4wKSBBUyBjcHUsCiAgICAgICAgIENPQUxFU0NFKGJ5dGVzX2luLCAwLjApIEFTIG5ldF9pbiwKICAgICAgICAgQ09BTEVTQ0UoYnl0ZXNfb3V0LCAwLjApIEFTIG5ldF9vdXQKICBGUk9NIG1nYmVuY2gubG9nczEKICBXSEVSRSBtYWNoaW5lX25hbWUgSU4gKCdhbmFuc2knLCdhcmFnb2cnLCd1cmQnKQogICAgQU5EIGxvZ190aW1lID49IFRJTUVTVEFNUCAnMjAxNy0wMS0xMSAwMDowMDowMCcKKSBBUyByCkdST1VQIEJZIG1hY2hpbmVfbmFtZQ==). -[Оригинальная статья](https://clickhouse.tech/docs/ru/getting_started/example_datasets/brown-benchmark/) diff --git a/docs/ru/getting-started/example-datasets/cell-towers.md b/docs/ru/getting-started/example-datasets/cell-towers.md new file mode 100644 index 00000000000..a5524248019 --- /dev/null +++ b/docs/ru/getting-started/example-datasets/cell-towers.md @@ -0,0 +1,128 @@ +--- +toc_priority: 21 +toc_title: Вышки сотовой связи +--- + +# Вышки сотовой связи {#cell-towers} + +Источник этого набора данных (dataset) - самая большая в мире открытая база данных о сотовых вышках - [OpenCellid](https://www.opencellid.org/). К 2021-му году здесь накопилось более, чем 40 миллионов записей о сотовых вышках (GSM, LTE, UMTS, и т.д.) по всему миру с их географическими координатами и метаданными (код страны, сети, и т.д.). + +OpenCelliD Project имеет лицензию Creative Commons Attribution-ShareAlike 4.0 International License, и мы распространяем снэпшот набора данных по условиям этой же лицензии. 
После авторизации можно загрузить последнюю версию набора данных. + +## Как получить набор данных {#get-the-dataset} + +1. Загрузите снэпшот набора данных за февраль 2021 [отсюда](https://datasets.clickhouse.tech/cell_towers.csv.xz) (729 MB). + +2. Если нужно, проверьте полноту и целостность при помощи команды: + +``` +md5sum cell_towers.csv.xz +8cf986f4a0d9f12c6f384a0e9192c908 cell_towers.csv.xz +``` + +3. Распакуйте набор данных при помощи команды: + +``` +xz -d cell_towers.csv.xz +``` + +4. Создайте таблицу: + +``` +CREATE TABLE cell_towers +( + radio Enum8('' = 0, 'CDMA' = 1, 'GSM' = 2, 'LTE' = 3, 'NR' = 4, 'UMTS' = 5), + mcc UInt16, + net UInt16, + area UInt16, + cell UInt64, + unit Int16, + lon Float64, + lat Float64, + range UInt32, + samples UInt32, + changeable UInt8, + created DateTime, + updated DateTime, + averageSignal UInt8 +) +ENGINE = MergeTree ORDER BY (radio, mcc, net, created); +``` + +5. Вставьте данные: +``` +clickhouse-client --query "INSERT INTO cell_towers FORMAT CSVWithNames" < cell_towers.csv +``` + +## Примеры {#examples} + +1. Количество вышек по типам: + +``` +SELECT radio, count() AS c FROM cell_towers GROUP BY radio ORDER BY c DESC + +┌─radio─┬────────c─┐ +│ UMTS │ 20686487 │ +│ LTE │ 12101148 │ +│ GSM │ 9931312 │ +│ CDMA │ 556344 │ +│ NR │ 867 │ +└───────┴──────────┘ + +5 rows in set. Elapsed: 0.011 sec. Processed 43.28 million rows, 43.28 MB (3.83 billion rows/s., 3.83 GB/s.) +``` + +2. Количество вышек по [мобильному коду страны (MCC)](https://ru.wikipedia.org/wiki/Mobile_Country_Code): + +``` +SELECT mcc, count() FROM cell_towers GROUP BY mcc ORDER BY count() DESC LIMIT 10 + +┌─mcc─┬─count()─┐ +│ 310 │ 5024650 │ +│ 262 │ 2622423 │ +│ 250 │ 1953176 │ +│ 208 │ 1891187 │ +│ 724 │ 1836150 │ +│ 404 │ 1729151 │ +│ 234 │ 1618924 │ +│ 510 │ 1353998 │ +│ 440 │ 1343355 │ +│ 311 │ 1332798 │ +└─────┴─────────┘ + +10 rows in set. Elapsed: 0.019 sec. Processed 43.28 million rows, 86.55 MB (2.33 billion rows/s., 4.65 GB/s.) +``` + +Можно увидеть, что по количеству вышек лидируют следующие страны: США, Германия, Россия. + +Вы также можете создать [внешний словарь](../../sql-reference/dictionaries/external-dictionaries/external-dicts.md) в ClickHouse для того, чтобы расшифровать эти значения. + +## Пример использования {#use-case} + +Рассмотрим применение функции `pointInPolygon`. + +1. Создаем таблицу, в которой будем хранить многоугольники: + +``` +CREATE TEMPORARY TABLE moscow (polygon Array(Tuple(Float64, Float64))); +``` + +2. 
Очертания Москвы выглядят приблизительно так ("Новая Москва" в них не включена): + +``` +INSERT INTO moscow VALUES ([(37.84172564285271, 55.78000432402266), (37.8381207618713, 55.775874525970494), (37.83979446823122, 55.775626746008065), (37.84243326983639, 55.77446586811748), (37.84262672750849, 55.771974101091104), (37.84153238623039, 55.77114545193181), (37.841124690460184, 55.76722010265554), (37.84239076983644, 55.76654891107098), (37.842283558197025, 55.76258709833121), (37.8421759312134, 55.758073999993734), (37.84198330422974, 55.75381499999371), (37.8416827275085, 55.749277102484484), (37.84157576190186, 55.74794544108413), (37.83897929098507, 55.74525257875241), (37.83739676451868, 55.74404373042019), (37.838732481460525, 55.74298009816793), (37.841183997352545, 55.743060321833575), (37.84097476190185, 55.73938799999373), (37.84048155819702, 55.73570799999372), (37.840095812164286, 55.73228210777237), (37.83983814285274, 55.73080491981639), (37.83846476321406, 55.729799917464675), (37.83835745269769, 55.72919751082619), (37.838636380279524, 55.72859509486539), (37.8395161005249, 55.727705075632784), (37.83897964285276, 55.722727886185154), (37.83862557539366, 55.72034817326636), (37.83559735744853, 55.71944437307499), (37.835370708803126, 55.71831419154461), (37.83738169402022, 55.71765218986692), (37.83823396494291, 55.71691750159089), (37.838056931213345, 55.71547311301385), (37.836812846557606, 55.71221445615604), (37.83522525396725, 55.709331054395555), (37.83269301586908, 55.70953687463627), (37.829667367706236, 55.70903403789297), (37.83311126588435, 55.70552351822608), (37.83058993121339, 55.70041317726053), (37.82983872750851, 55.69883771404813), (37.82934501586913, 55.69718947487017), (37.828926414016685, 55.69504441658371), (37.82876530422971, 55.69287499999378), (37.82894754100031, 55.690759754047335), (37.827697554878185, 55.68951421135665), (37.82447346292115, 55.68965045405069), (37.83136543914793, 55.68322046195302), (37.833554015869154, 55.67814012759211), (37.83544184655761, 55.67295011628339), (37.837480388885474, 55.6672498719639), (37.838960677246064, 55.66316274139358), (37.83926093121332, 55.66046999999383), (37.839025050262435, 55.65869897264431), (37.83670784390257, 55.65794084879904), (37.835656529083245, 55.65694309303843), (37.83704060449217, 55.65689306460552), (37.83696819873806, 55.65550363526252), (37.83760389616388, 55.65487847246661), (37.83687972750851, 55.65356745541324), (37.83515216004943, 55.65155951234079), (37.83312418518067, 55.64979413590619), (37.82801726983639, 55.64640836412121), (37.820614174591, 55.64164525405531), (37.818908190475426, 55.6421883258084), (37.81717543386075, 55.64112490388471), (37.81690987037274, 55.63916106913107), (37.815099354492155, 55.637925371757085), (37.808769150787356, 55.633798276884455), (37.80100123544311, 55.62873670012244), (37.79598013491824, 55.62554336109055), (37.78634567724606, 55.62033499605651), (37.78334147619623, 55.618768681480326), (37.77746201055901, 55.619855533402706), (37.77527329626457, 55.61909966711279), (37.77801986242668, 55.618770300976294), (37.778212973541216, 55.617257701952106), (37.77784818518065, 55.61574504433011), (37.77016867724609, 55.61148576294007), (37.760191219573976, 55.60599579539028), (37.75338926983641, 55.60227892751446), (37.746329965606634, 55.59920577639331), (37.73939925396728, 55.59631430313617), (37.73273665739439, 55.5935318803559), (37.7299954450912, 55.59350760316188), (37.7268679946899, 55.59469840523759), (37.72626726983634, 55.59229549697373), 
(37.7262673598022, 55.59081598950582), (37.71897193121335, 55.5877595845419), (37.70871550793456, 55.58393177431724), (37.700497489410374, 55.580917323756644), (37.69204305026244, 55.57778089778455), (37.68544477378839, 55.57815154690915), (37.68391050793454, 55.57472945079756), (37.678803592590306, 55.57328235936491), (37.6743402539673, 55.57255251445782), (37.66813862698363, 55.57216388774464), (37.617927457672096, 55.57505691895805), (37.60443099999999, 55.5757737568051), (37.599683515869145, 55.57749105910326), (37.59754177842709, 55.57796291823627), (37.59625834786988, 55.57906686095235), (37.59501783265684, 55.57746616444403), (37.593090671936025, 55.57671634534502), (37.587018007904, 55.577944600233785), (37.578692203704804, 55.57982895000019), (37.57327546607398, 55.58116294118248), (37.57385012109279, 55.581550362779), (37.57399562266922, 55.5820107079112), (37.5735356072979, 55.58226289171689), (37.57290393054962, 55.582393529795155), (37.57037722355653, 55.581919415056234), (37.5592298306885, 55.584471614867844), (37.54189249206543, 55.58867650795186), (37.5297256269836, 55.59158133551745), (37.517837865081766, 55.59443656218868), (37.51200186508174, 55.59635625174229), (37.506808949737554, 55.59907823904434), (37.49820432275389, 55.6062944994944), (37.494406071441674, 55.60967103463367), (37.494760001358024, 55.61066689753365), (37.49397137107085, 55.61220931698269), (37.49016528606031, 55.613417718449064), (37.48773249206542, 55.61530616333343), (37.47921386508177, 55.622640129112334), (37.470652153442394, 55.62993723476164), (37.46273446298218, 55.6368075123157), (37.46350692265317, 55.64068225239439), (37.46050283203121, 55.640794546982576), (37.457627470916734, 55.64118904154646), (37.450718034393326, 55.64690488145138), (37.44239252645875, 55.65397824729769), (37.434587576721185, 55.66053543155961), (37.43582144975277, 55.661693766520735), (37.43576786245721, 55.662755031737014), (37.430982915344174, 55.664610641628116), (37.428547447097685, 55.66778515273695), (37.42945134592044, 55.668633314343566), (37.42859571562949, 55.66948145750025), (37.4262836402282, 55.670813882451405), (37.418709037048295, 55.6811141674414), (37.41922139651101, 55.68235377885389), (37.419218771842885, 55.68359335082235), (37.417196501327446, 55.684375235224735), (37.41607020370478, 55.68540557585352), (37.415640857147146, 55.68686637150793), (37.414632153442334, 55.68903015131686), (37.413344899475064, 55.690896881757396), (37.41171432275391, 55.69264232162232), (37.40948282275393, 55.69455101638112), (37.40703674603271, 55.69638690385348), (37.39607169577025, 55.70451821283731), (37.38952706878662, 55.70942491932811), (37.387778313491815, 55.71149057784176), (37.39049275399779, 55.71419814298992), (37.385557272491454, 55.7155489617061), (37.38388335714726, 55.71849856042102), (37.378368238098155, 55.7292763261685), (37.37763597123337, 55.730845879211614), (37.37890062088197, 55.73167906388319), (37.37750451918789, 55.734703664681774), (37.375610832015965, 55.734851959522246), (37.3723813571472, 55.74105626086403), (37.37014935714723, 55.746115620904355), (37.36944173016362, 55.750883999993725), (37.36975304365541, 55.76335905525834), (37.37244070571134, 55.76432079697595), (37.3724259757175, 55.76636979670426), (37.369922155757884, 55.76735417953104), (37.369892695770275, 55.76823419316575), (37.370214730163575, 55.782312184391266), (37.370493611114505, 55.78436801120489), (37.37120164550783, 55.78596427165359), (37.37284851456452, 55.7874378183096), (37.37608325135799, 55.7886695054807), 
(37.3764587460632, 55.78947647305964), (37.37530000265506, 55.79146512926804), (37.38235915344241, 55.79899647809345), (37.384344043655396, 55.80113596939471), (37.38594269577028, 55.80322699999366), (37.38711208598329, 55.804919036911976), (37.3880239841309, 55.806610999993666), (37.38928977249147, 55.81001864976979), (37.39038389947512, 55.81348641242801), (37.39235781481933, 55.81983538336746), (37.393709457672124, 55.82417822811877), (37.394685720901464, 55.82792275755836), (37.39557615344238, 55.830447148154136), (37.39844478226658, 55.83167107969975), (37.40019761214057, 55.83151823557964), (37.400398790382326, 55.83264967594742), (37.39659544313046, 55.83322180909622), (37.39667059524539, 55.83402792148566), (37.39682089947515, 55.83638877400216), (37.39643489154053, 55.83861656112751), (37.3955338994751, 55.84072348043264), (37.392680272491454, 55.84502158126453), (37.39241188227847, 55.84659117913199), (37.392529730163616, 55.84816071336481), (37.39486835714723, 55.85288092980303), (37.39873052645878, 55.859893456073635), (37.40272161111449, 55.86441833633205), (37.40697072750854, 55.867579567544375), (37.410007082016016, 55.868369880337), (37.4120992989502, 55.86920843741314), (37.412668021163924, 55.87055369615854), (37.41482461111453, 55.87170587948249), (37.41862266137694, 55.873183961039565), (37.42413732540892, 55.874879126654704), (37.4312182698669, 55.875614937236705), (37.43111093783558, 55.8762723478417), (37.43332105622856, 55.87706546369396), (37.43385747619623, 55.87790681284802), (37.441303050262405, 55.88027084462084), (37.44747234260555, 55.87942070143253), (37.44716141796871, 55.88072960917233), (37.44769797085568, 55.88121221323979), (37.45204320500181, 55.882080694420715), (37.45673176190186, 55.882346110794586), (37.463383999999984, 55.88252729504517), (37.46682797486874, 55.88294937719063), (37.470014457672086, 55.88361266759345), (37.47751410450743, 55.88546991372396), (37.47860317658232, 55.88534929207307), (37.48165826025772, 55.882563306475106), (37.48316434442331, 55.8815803226785), (37.483831555817645, 55.882427612793315), (37.483182967125686, 55.88372791409729), (37.483092277908824, 55.88495581062434), (37.4855716508179, 55.8875561994203), (37.486440636245746, 55.887827444039566), (37.49014203439328, 55.88897899871799), (37.493210285705544, 55.890208937135604), (37.497512451065035, 55.891342397444696), (37.49780744510645, 55.89174030252967), (37.49940333499519, 55.89239745507079), (37.50018383334346, 55.89339220941865), (37.52421672750851, 55.903869074155224), (37.52977457672118, 55.90564076517974), (37.53503220370484, 55.90661661218259), (37.54042858064267, 55.90714113744566), (37.54320461007303, 55.905645048442985), (37.545686966066306, 55.906608607018505), (37.54743976120755, 55.90788552162358), (37.55796999999999, 55.90901557907218), (37.572711542327866, 55.91059395704873), (37.57942799999998, 55.91073854155573), (37.58502865872187, 55.91009969268444), (37.58739968913264, 55.90794809960554), (37.59131567193598, 55.908713267595054), (37.612687423278814, 55.902866854295375), (37.62348079629517, 55.90041967242986), (37.635797880950896, 55.898141151686396), (37.649487626983664, 55.89639275532968), (37.65619302513125, 55.89572360207488), (37.66294133862307, 55.895295577183965), (37.66874564418033, 55.89505457604897), (37.67375601586915, 55.89254677027454), (37.67744661901856, 55.8947775867987), (37.688347, 55.89450045676125), (37.69480554232789, 55.89422926332761), (37.70107096560668, 55.89322256101114), (37.705962965606716, 55.891763491662616), 
(37.711885134918205, 55.889110234998974), (37.71682005026245, 55.886577568759876), (37.7199315476074, 55.88458159806678), (37.72234560316464, 55.882281005794134), (37.72364385977171, 55.8809452036196), (37.725371142837474, 55.8809722706006), (37.727870902099546, 55.88037213862385), (37.73394330422971, 55.877941504088696), (37.745339592590376, 55.87208120378722), (37.75525267724611, 55.86703807949492), (37.76919976190188, 55.859821640197474), (37.827835219574, 55.82962968399116), (37.83341438888553, 55.82575289922351), (37.83652584655761, 55.82188784027888), (37.83809213491821, 55.81612575504693), (37.83605359521481, 55.81460347077685), (37.83632178569025, 55.81276696067908), (37.838623105812026, 55.811486181656385), (37.83912198147584, 55.807329380532785), (37.839079078033414, 55.80510270463816), (37.83965844708251, 55.79940712529036), (37.840581150787344, 55.79131399999368), (37.84172564285271, 55.78000432402266)]); +``` + +3. Проверяем, сколько сотовых вышек находится в Москве: + +``` +SELECT count() FROM cell_towers WHERE pointInPolygon((lon, lat), (SELECT * FROM moscow)) + +┌─count()─┐ +│ 310463 │ +└─────────┘ + +1 rows in set. Elapsed: 0.067 sec. Processed 43.28 million rows, 692.42 MB (645.83 million rows/s., 10.33 GB/s.) +``` + +Вы можете протестировать другие запросы с помощью интерактивного ресурса [Playground](https://gh-api.clickhouse.tech/play?user=play). Например, [вот так](https://gh-api.clickhouse.tech/play?user=play#U0VMRUNUIG1jYywgY291bnQoKSBGUk9NIGNlbGxfdG93ZXJzIEdST1VQIEJZIG1jYyBPUkRFUiBCWSBjb3VudCgpIERFU0M=). Однако, обратите внимание, что здесь нельзя создавать временные таблицы. diff --git a/docs/ru/getting-started/example-datasets/criteo.md b/docs/ru/getting-started/example-datasets/criteo.md index ecdc5f5fa41..bfa428a0e1c 100644 --- a/docs/ru/getting-started/example-datasets/criteo.md +++ b/docs/ru/getting-started/example-datasets/criteo.md @@ -76,4 +76,3 @@ INSERT INTO criteo SELECT date, clicked, int1, int2, int3, int4, int5, int6, int DROP TABLE criteo_log; ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/getting_started/example_datasets/criteo/) diff --git a/docs/ru/getting-started/example-datasets/index.md b/docs/ru/getting-started/example-datasets/index.md index fd89bb122e3..756b3a75dee 100644 --- a/docs/ru/getting-started/example-datasets/index.md +++ b/docs/ru/getting-started/example-datasets/index.md @@ -16,5 +16,5 @@ toc_title: "Введение" - [AMPLab Big Data Benchmark](amplab-benchmark.md) - [Данные о такси в Нью-Йорке](nyc-taxi.md) - [OnTime](ontime.md) +- [Вышки сотовой связи](../../getting-started/example-datasets/cell-towers.md) -[Оригинальная статья](https://clickhouse.tech/docs/en/getting_started/example_datasets) diff --git a/docs/ru/getting-started/example-datasets/nyc-taxi.md b/docs/ru/getting-started/example-datasets/nyc-taxi.md index 891a92e2fa7..38a60ed1b2d 100644 --- a/docs/ru/getting-started/example-datasets/nyc-taxi.md +++ b/docs/ru/getting-started/example-datasets/nyc-taxi.md @@ -390,4 +390,3 @@ Q4: 0.072 sec. 
| 3 | 0.212 | 0.438 | 0.733 | 1.241 | | 140 | 0.028 | 0.043 | 0.051 | 0.072 | -[Оригинальная статья](https://clickhouse.tech/docs/ru/getting_started/example_datasets/nyc_taxi/) diff --git a/docs/ru/getting-started/example-datasets/ontime.md b/docs/ru/getting-started/example-datasets/ontime.md index 41a1c0d3142..d46b7e75e7f 100644 --- a/docs/ru/getting-started/example-datasets/ontime.md +++ b/docs/ru/getting-started/example-datasets/ontime.md @@ -27,126 +27,127 @@ done Создание таблицы: ``` sql -CREATE TABLE `ontime` ( - `Year` UInt16, - `Quarter` UInt8, - `Month` UInt8, - `DayofMonth` UInt8, - `DayOfWeek` UInt8, - `FlightDate` Date, - `UniqueCarrier` FixedString(7), - `AirlineID` Int32, - `Carrier` FixedString(2), - `TailNum` String, - `FlightNum` String, - `OriginAirportID` Int32, - `OriginAirportSeqID` Int32, - `OriginCityMarketID` Int32, - `Origin` FixedString(5), - `OriginCityName` String, - `OriginState` FixedString(2), - `OriginStateFips` String, - `OriginStateName` String, - `OriginWac` Int32, - `DestAirportID` Int32, - `DestAirportSeqID` Int32, - `DestCityMarketID` Int32, - `Dest` FixedString(5), - `DestCityName` String, - `DestState` FixedString(2), - `DestStateFips` String, - `DestStateName` String, - `DestWac` Int32, - `CRSDepTime` Int32, - `DepTime` Int32, - `DepDelay` Int32, - `DepDelayMinutes` Int32, - `DepDel15` Int32, - `DepartureDelayGroups` String, - `DepTimeBlk` String, - `TaxiOut` Int32, - `WheelsOff` Int32, - `WheelsOn` Int32, - `TaxiIn` Int32, - `CRSArrTime` Int32, - `ArrTime` Int32, - `ArrDelay` Int32, - `ArrDelayMinutes` Int32, - `ArrDel15` Int32, - `ArrivalDelayGroups` Int32, - `ArrTimeBlk` String, - `Cancelled` UInt8, - `CancellationCode` FixedString(1), - `Diverted` UInt8, - `CRSElapsedTime` Int32, - `ActualElapsedTime` Int32, - `AirTime` Int32, - `Flights` Int32, - `Distance` Int32, - `DistanceGroup` UInt8, - `CarrierDelay` Int32, - `WeatherDelay` Int32, - `NASDelay` Int32, - `SecurityDelay` Int32, - `LateAircraftDelay` Int32, - `FirstDepTime` String, - `TotalAddGTime` String, - `LongestAddGTime` String, - `DivAirportLandings` String, - `DivReachedDest` String, - `DivActualElapsedTime` String, - `DivArrDelay` String, - `DivDistance` String, - `Div1Airport` String, - `Div1AirportID` Int32, - `Div1AirportSeqID` Int32, - `Div1WheelsOn` String, - `Div1TotalGTime` String, - `Div1LongestGTime` String, - `Div1WheelsOff` String, - `Div1TailNum` String, - `Div2Airport` String, - `Div2AirportID` Int32, - `Div2AirportSeqID` Int32, - `Div2WheelsOn` String, - `Div2TotalGTime` String, - `Div2LongestGTime` String, - `Div2WheelsOff` String, - `Div2TailNum` String, - `Div3Airport` String, - `Div3AirportID` Int32, - `Div3AirportSeqID` Int32, - `Div3WheelsOn` String, - `Div3TotalGTime` String, - `Div3LongestGTime` String, - `Div3WheelsOff` String, - `Div3TailNum` String, - `Div4Airport` String, - `Div4AirportID` Int32, - `Div4AirportSeqID` Int32, - `Div4WheelsOn` String, - `Div4TotalGTime` String, - `Div4LongestGTime` String, - `Div4WheelsOff` String, - `Div4TailNum` String, - `Div5Airport` String, - `Div5AirportID` Int32, - `Div5AirportSeqID` Int32, - `Div5WheelsOn` String, - `Div5TotalGTime` String, - `Div5LongestGTime` String, - `Div5WheelsOff` String, - `Div5TailNum` String +CREATE TABLE `ontime` +( + `Year` UInt16, + `Quarter` UInt8, + `Month` UInt8, + `DayofMonth` UInt8, + `DayOfWeek` UInt8, + `FlightDate` Date, + `Reporting_Airline` String, + `DOT_ID_Reporting_Airline` Int32, + `IATA_CODE_Reporting_Airline` String, + `Tail_Number` Int32, + `Flight_Number_Reporting_Airline` 
String, + `OriginAirportID` Int32, + `OriginAirportSeqID` Int32, + `OriginCityMarketID` Int32, + `Origin` FixedString(5), + `OriginCityName` String, + `OriginState` FixedString(2), + `OriginStateFips` String, + `OriginStateName` String, + `OriginWac` Int32, + `DestAirportID` Int32, + `DestAirportSeqID` Int32, + `DestCityMarketID` Int32, + `Dest` FixedString(5), + `DestCityName` String, + `DestState` FixedString(2), + `DestStateFips` String, + `DestStateName` String, + `DestWac` Int32, + `CRSDepTime` Int32, + `DepTime` Int32, + `DepDelay` Int32, + `DepDelayMinutes` Int32, + `DepDel15` Int32, + `DepartureDelayGroups` String, + `DepTimeBlk` String, + `TaxiOut` Int32, + `WheelsOff` Int32, + `WheelsOn` Int32, + `TaxiIn` Int32, + `CRSArrTime` Int32, + `ArrTime` Int32, + `ArrDelay` Int32, + `ArrDelayMinutes` Int32, + `ArrDel15` Int32, + `ArrivalDelayGroups` Int32, + `ArrTimeBlk` String, + `Cancelled` UInt8, + `CancellationCode` FixedString(1), + `Diverted` UInt8, + `CRSElapsedTime` Int32, + `ActualElapsedTime` Int32, + `AirTime` Nullable(Int32), + `Flights` Int32, + `Distance` Int32, + `DistanceGroup` UInt8, + `CarrierDelay` Int32, + `WeatherDelay` Int32, + `NASDelay` Int32, + `SecurityDelay` Int32, + `LateAircraftDelay` Int32, + `FirstDepTime` String, + `TotalAddGTime` String, + `LongestAddGTime` String, + `DivAirportLandings` String, + `DivReachedDest` String, + `DivActualElapsedTime` String, + `DivArrDelay` String, + `DivDistance` String, + `Div1Airport` String, + `Div1AirportID` Int32, + `Div1AirportSeqID` Int32, + `Div1WheelsOn` String, + `Div1TotalGTime` String, + `Div1LongestGTime` String, + `Div1WheelsOff` String, + `Div1TailNum` String, + `Div2Airport` String, + `Div2AirportID` Int32, + `Div2AirportSeqID` Int32, + `Div2WheelsOn` String, + `Div2TotalGTime` String, + `Div2LongestGTime` String, + `Div2WheelsOff` String, + `Div2TailNum` String, + `Div3Airport` String, + `Div3AirportID` Int32, + `Div3AirportSeqID` Int32, + `Div3WheelsOn` String, + `Div3TotalGTime` String, + `Div3LongestGTime` String, + `Div3WheelsOff` String, + `Div3TailNum` String, + `Div4Airport` String, + `Div4AirportID` Int32, + `Div4AirportSeqID` Int32, + `Div4WheelsOn` String, + `Div4TotalGTime` String, + `Div4LongestGTime` String, + `Div4WheelsOff` String, + `Div4TailNum` String, + `Div5Airport` String, + `Div5AirportID` Int32, + `Div5AirportSeqID` Int32, + `Div5WheelsOn` String, + `Div5TotalGTime` String, + `Div5LongestGTime` String, + `Div5WheelsOff` String, + `Div5TailNum` String ) ENGINE = MergeTree -PARTITION BY Year -ORDER BY (Carrier, FlightDate) -SETTINGS index_granularity = 8192; + PARTITION BY Year + ORDER BY (IATA_CODE_Reporting_Airline, FlightDate) + SETTINGS index_granularity = 8192; ``` Загрузка данных: ``` bash -$ for i in *.zip; do echo $i; unzip -cq $i '*.csv' | sed 's/\.00//g' | clickhouse-client --host=example-perftest01j --query="INSERT INTO ontime FORMAT CSVWithNames"; done +ls -1 *.zip | xargs -I{} -P $(nproc) bash -c "echo {}; unzip -cq {} '*.csv' | sed 's/\.00//g' | clickhouse-client --input_format_with_names_use_header=0 --query='INSERT INTO ontime FORMAT CSVWithNames'" ``` ## Скачивание готовых партиций {#skachivanie-gotovykh-partitsii} @@ -211,7 +212,7 @@ LIMIT 10; Q4. 
Количество задержек по перевозчикам за 2007 год ``` sql -SELECT Carrier, count(*) +SELECT IATA_CODE_Reporting_Airline AS Carrier, count(*) FROM ontime WHERE DepDelay>10 AND Year=2007 GROUP BY Carrier @@ -225,29 +226,29 @@ SELECT Carrier, c, c2, c*100/c2 as c3 FROM ( SELECT - Carrier, + IATA_CODE_Reporting_Airline AS Carrier, count(*) AS c FROM ontime WHERE DepDelay>10 AND Year=2007 GROUP BY Carrier -) +) q JOIN ( SELECT - Carrier, + IATA_CODE_Reporting_Airline AS Carrier, count(*) AS c2 FROM ontime WHERE Year=2007 GROUP BY Carrier -) USING Carrier +) qq USING Carrier ORDER BY c3 DESC; ``` Более оптимальная версия того же запроса: ``` sql -SELECT Carrier, avg(DepDelay>10)*100 AS c3 +SELECT IATA_CODE_Reporting_Airline AS Carrier, avg(DepDelay>10)*100 AS c3 FROM ontime WHERE Year=2007 GROUP BY Carrier @@ -261,29 +262,29 @@ SELECT Carrier, c, c2, c*100/c2 as c3 FROM ( SELECT - Carrier, + IATA_CODE_Reporting_Airline AS Carrier, count(*) AS c FROM ontime WHERE DepDelay>10 AND Year>=2000 AND Year<=2008 GROUP BY Carrier -) +) q JOIN ( SELECT - Carrier, + IATA_CODE_Reporting_Airline AS Carrier, count(*) AS c2 FROM ontime WHERE Year>=2000 AND Year<=2008 GROUP BY Carrier -) USING Carrier +) qq USING Carrier ORDER BY c3 DESC; ``` Более оптимальная версия того же запроса: ``` sql -SELECT Carrier, avg(DepDelay>10)*100 AS c3 +SELECT IATA_CODE_Reporting_Airline AS Carrier, avg(DepDelay>10)*100 AS c3 FROM ontime WHERE Year>=2000 AND Year<=2008 GROUP BY Carrier @@ -302,7 +303,7 @@ FROM from ontime WHERE DepDelay>10 GROUP BY Year -) +) q JOIN ( select @@ -310,7 +311,7 @@ JOIN count(*) as c2 from ontime GROUP BY Year -) USING (Year) +) qq USING (Year) ORDER BY Year; ``` @@ -346,7 +347,7 @@ Q10. ``` sql SELECT - min(Year), max(Year), Carrier, count(*) AS cnt, + min(Year), max(Year), IATA_CODE_Reporting_Airline AS Carrier, count(*) AS cnt, sum(ArrDelayMinutes>30) AS flights_delayed, round(sum(ArrDelayMinutes>30)/count(*),2) AS rate FROM ontime @@ -407,4 +408,3 @@ LIMIT 10; - https://www.percona.com/blog/2016/01/07/apache-spark-with-air-ontime-performance-data/ - http://nickmakos.blogspot.ru/2012/08/analyzing-air-traffic-performance-with.html -[Оригинальная статья](https://clickhouse.tech/docs/ru/getting_started/example_datasets/ontime/) diff --git a/docs/ru/getting-started/example-datasets/recipes.md b/docs/ru/getting-started/example-datasets/recipes.md new file mode 100644 index 00000000000..75e385150e8 --- /dev/null +++ b/docs/ru/getting-started/example-datasets/recipes.md @@ -0,0 +1,342 @@ +--- +toc_priority: 16 +toc_title: Набор данных кулинарных рецептов +--- + +# Набор данных кулинарных рецептов + +Набор данных кулинарных рецептов от RecipeNLG доступен для загрузки [здесь](https://recipenlg.cs.put.poznan.pl/dataset). Он содержит 2.2 миллиона рецептов, а его размер чуть меньше 1 ГБ. + +## Загрузите и распакуйте набор данных + +1. Перейдите на страницу загрузки [https://recipenlg.cs.put.poznan.pl/dataset](https://recipenlg.cs.put.poznan.pl/dataset). +1. Примите Правила и условия и скачайте zip-архив с набором данных. +1. Распакуйте zip-архив и вы получите файл `full_dataset.csv`. 
+
+## Создайте таблицу
+
+Запустите клиент ClickHouse и выполните следующий запрос для создания таблицы `recipes`:
+
+``` sql
+CREATE TABLE recipes
+(
+    title String,
+    ingredients Array(String),
+    directions Array(String),
+    link String,
+    source LowCardinality(String),
+    NER Array(String)
+) ENGINE = MergeTree ORDER BY title;
+```
+
+## Добавьте данные в таблицу
+
+Чтобы добавить данные из файла `full_dataset.csv` в таблицу `recipes`, выполните команду:
+
+``` bash
+clickhouse-client --query "
+    INSERT INTO recipes
+    SELECT
+        title,
+        JSONExtract(ingredients, 'Array(String)'),
+        JSONExtract(directions, 'Array(String)'),
+        link,
+        source,
+        JSONExtract(NER, 'Array(String)')
+    FROM input('num UInt32, title String, ingredients String, directions String, link String, source LowCardinality(String), NER String')
+    FORMAT CSVWithNames
+" --input_format_with_names_use_header 0 --format_csv_allow_single_quote 0 --input_format_allow_errors_num 10 < full_dataset.csv
+```
+
+Это один из примеров анализа пользовательских CSV-файлов с применением специальных настроек.
+
+Пояснение:
+- набор данных представлен в формате CSV и требует некоторой предварительной обработки при вставке. Для предварительной обработки используется табличная функция [input](../../sql-reference/table-functions/input.md);
+- структура CSV-файла задается в аргументе табличной функции `input`;
+- поле `num` (номер строки) не нужно — оно считывается из файла, но игнорируется;
+- при загрузке используется `FORMAT CSVWithNames`, но заголовок в CSV будет проигнорирован (параметром командной строки `--input_format_with_names_use_header 0`), поскольку заголовок не содержит имени первого поля;
+- в файле CSV для обрамления строк используются только двойные кавычки. Но некоторые строки не заключены в двойные кавычки, и чтобы одинарная кавычка не рассматривалась как заключающая, используется параметр `--format_csv_allow_single_quote 0`;
+- некоторые строки из CSV не могут быть считаны корректно, поскольку они начинаются с символов `\M/`, тогда как в CSV начинаться с обратной косой черты могут только символы `\N`, которые распознаются как `NULL` в SQL. Поэтому используется параметр `--input_format_allow_errors_num 10`, разрешающий пропустить до десяти некорректных записей;
+- массивы `ingredients`, `directions` и `NER` представлены в необычном виде: они сериализуются в строку формата JSON, а затем помещаются в CSV — тогда они могут считываться и обрабатываться как обычные строки (`String`). Чтобы преобразовать строку в массив, используется функция [JSONExtract](../../sql-reference/functions/json-functions.md).
+
+## Проверьте добавленные данные
+
+Чтобы проверить добавленные данные, подсчитайте количество строк в таблице.
+
+Запрос:
+
+``` sql
+SELECT count() FROM recipes;
+```
+
+Результат:
+
+``` text
+┌─count()─┐
+│ 2231141 │
+└─────────┘
+```
+
+## Примеры запросов
+
+### Самые упоминаемые ингредиенты в рецептах
+
+В этом примере вы узнаете, как развернуть массив в набор строк с помощью функции [arrayJoin](../../sql-reference/functions/array-join.md).
+ +Запрос: + +``` sql +SELECT + arrayJoin(NER) AS k, + count() AS c +FROM recipes +GROUP BY k +ORDER BY c DESC +LIMIT 50 +``` + +Результат: + +``` text +┌─k────────────────────┬──────c─┐ +│ salt │ 890741 │ +│ sugar │ 620027 │ +│ butter │ 493823 │ +│ flour │ 466110 │ +│ eggs │ 401276 │ +│ onion │ 372469 │ +│ garlic │ 358364 │ +│ milk │ 346769 │ +│ water │ 326092 │ +│ vanilla │ 270381 │ +│ olive oil │ 197877 │ +│ pepper │ 179305 │ +│ brown sugar │ 174447 │ +│ tomatoes │ 163933 │ +│ egg │ 160507 │ +│ baking powder │ 148277 │ +│ lemon juice │ 146414 │ +│ Salt │ 122557 │ +│ cinnamon │ 117927 │ +│ sour cream │ 116682 │ +│ cream cheese │ 114423 │ +│ margarine │ 112742 │ +│ celery │ 112676 │ +│ baking soda │ 110690 │ +│ parsley │ 102151 │ +│ chicken │ 101505 │ +│ onions │ 98903 │ +│ vegetable oil │ 91395 │ +│ oil │ 85600 │ +│ mayonnaise │ 84822 │ +│ pecans │ 79741 │ +│ nuts │ 78471 │ +│ potatoes │ 75820 │ +│ carrots │ 75458 │ +│ pineapple │ 74345 │ +│ soy sauce │ 70355 │ +│ black pepper │ 69064 │ +│ thyme │ 68429 │ +│ mustard │ 65948 │ +│ chicken broth │ 65112 │ +│ bacon │ 64956 │ +│ honey │ 64626 │ +│ oregano │ 64077 │ +│ ground beef │ 64068 │ +│ unsalted butter │ 63848 │ +│ mushrooms │ 61465 │ +│ Worcestershire sauce │ 59328 │ +│ cornstarch │ 58476 │ +│ green pepper │ 58388 │ +│ Cheddar cheese │ 58354 │ +└──────────────────────┴────────┘ + +50 rows in set. Elapsed: 0.112 sec. Processed 2.23 million rows, 361.57 MB (19.99 million rows/s., 3.24 GB/s.) +``` + +### Самые сложные рецепты с клубникой + +Запрос: + +``` sql +SELECT + title, + length(NER), + length(directions) +FROM recipes +WHERE has(NER, 'strawberry') +ORDER BY length(directions) DESC +LIMIT 10; +``` + +Результат: + +``` text +┌─title────────────────────────────────────────────────────────────┬─length(NER)─┬─length(directions)─┐ +│ Chocolate-Strawberry-Orange Wedding Cake │ 24 │ 126 │ +│ Strawberry Cream Cheese Crumble Tart │ 19 │ 47 │ +│ Charlotte-Style Ice Cream │ 11 │ 45 │ +│ Sinfully Good a Million Layers Chocolate Layer Cake, With Strawb │ 31 │ 45 │ +│ Sweetened Berries With Elderflower Sherbet │ 24 │ 44 │ +│ Chocolate-Strawberry Mousse Cake │ 15 │ 42 │ +│ Rhubarb Charlotte with Strawberries and Rum │ 20 │ 42 │ +│ Chef Joey's Strawberry Vanilla Tart │ 7 │ 37 │ +│ Old-Fashioned Ice Cream Sundae Cake │ 17 │ 37 │ +│ Watermelon Cake │ 16 │ 36 │ +└──────────────────────────────────────────────────────────────────┴─────────────┴────────────────────┘ + +10 rows in set. Elapsed: 0.215 sec. Processed 2.23 million rows, 1.48 GB (10.35 million rows/s., 6.86 GB/s.) +``` + +В этом примере используется функция [has](../../sql-reference/functions/array-functions.md#hasarr-elem) для проверки вхождения элемента в массив, а также сортировка по количеству шагов (`length(directions)`). + +Существует свадебный торт, который требует целых 126 шагов для производства! Рассмотрим эти шаги: + +Запрос: + +``` sql +SELECT arrayJoin(directions) +FROM recipes +WHERE title = 'Chocolate-Strawberry-Orange Wedding Cake'; +``` + +Результат: + +``` text +┌─arrayJoin(directions)───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐ +│ Position 1 rack in center and 1 rack in bottom third of oven and preheat to 350F. │ +│ Butter one 5-inch-diameter cake pan with 2-inch-high sides, one 8-inch-diameter cake pan with 2-inch-high sides and one 12-inch-diameter cake pan with 2-inch-high sides. │ +│ Dust pans with flour; line bottoms with parchment. 
│ +│ Combine 1/3 cup orange juice and 2 ounces unsweetened chocolate in heavy small saucepan. │ +│ Stir mixture over medium-low heat until chocolate melts. │ +│ Remove from heat. │ +│ Gradually mix in 1 2/3 cups orange juice. │ +│ Sift 3 cups flour, 2/3 cup cocoa, 2 teaspoons baking soda, 1 teaspoon salt and 1/2 teaspoon baking powder into medium bowl. │ +│ using electric mixer, beat 1 cup (2 sticks) butter and 3 cups sugar in large bowl until blended (mixture will look grainy). │ +│ Add 4 eggs, 1 at a time, beating to blend after each. │ +│ Beat in 1 tablespoon orange peel and 1 tablespoon vanilla extract. │ +│ Add dry ingredients alternately with orange juice mixture in 3 additions each, beating well after each addition. │ +│ Mix in 1 cup chocolate chips. │ +│ Transfer 1 cup plus 2 tablespoons batter to prepared 5-inch pan, 3 cups batter to prepared 8-inch pan and remaining batter (about 6 cups) to 12-inch pan. │ +│ Place 5-inch and 8-inch pans on center rack of oven. │ +│ Place 12-inch pan on lower rack of oven. │ +│ Bake cakes until tester inserted into center comes out clean, about 35 minutes. │ +│ Transfer cakes in pans to racks and cool completely. │ +│ Mark 4-inch diameter circle on one 6-inch-diameter cardboard cake round. │ +│ Cut out marked circle. │ +│ Mark 7-inch-diameter circle on one 8-inch-diameter cardboard cake round. │ +│ Cut out marked circle. │ +│ Mark 11-inch-diameter circle on one 12-inch-diameter cardboard cake round. │ +│ Cut out marked circle. │ +│ Cut around sides of 5-inch-cake to loosen. │ +│ Place 4-inch cardboard over pan. │ +│ Hold cardboard and pan together; turn cake out onto cardboard. │ +│ Peel off parchment.Wrap cakes on its cardboard in foil. │ +│ Repeat turning out, peeling off parchment and wrapping cakes in foil, using 7-inch cardboard for 8-inch cake and 11-inch cardboard for 12-inch cake. │ +│ Using remaining ingredients, make 1 more batch of cake batter and bake 3 more cake layers as described above. │ +│ Cool cakes in pans. │ +│ Cover cakes in pans tightly with foil. │ +│ (Can be prepared ahead. │ +│ Let stand at room temperature up to 1 day or double-wrap all cake layers and freeze up to 1 week. │ +│ Bring cake layers to room temperature before using.) │ +│ Place first 12-inch cake on its cardboard on work surface. │ +│ Spread 2 3/4 cups ganache over top of cake and all the way to edge. │ +│ Spread 2/3 cup jam over ganache, leaving 1/2-inch chocolate border at edge. │ +│ Drop 1 3/4 cups white chocolate frosting by spoonfuls over jam. │ +│ Gently spread frosting over jam, leaving 1/2-inch chocolate border at edge. │ +│ Rub some cocoa powder over second 12-inch cardboard. │ +│ Cut around sides of second 12-inch cake to loosen. │ +│ Place cardboard, cocoa side down, over pan. │ +│ Turn cake out onto cardboard. │ +│ Peel off parchment. │ +│ Carefully slide cake off cardboard and onto filling on first 12-inch cake. │ +│ Refrigerate. │ +│ Place first 8-inch cake on its cardboard on work surface. │ +│ Spread 1 cup ganache over top all the way to edge. │ +│ Spread 1/4 cup jam over, leaving 1/2-inch chocolate border at edge. │ +│ Drop 1 cup white chocolate frosting by spoonfuls over jam. │ +│ Gently spread frosting over jam, leaving 1/2-inch chocolate border at edge. │ +│ Rub some cocoa over second 8-inch cardboard. │ +│ Cut around sides of second 8-inch cake to loosen. │ +│ Place cardboard, cocoa side down, over pan. │ +│ Turn cake out onto cardboard. │ +│ Peel off parchment. │ +│ Slide cake off cardboard and onto filling on first 8-inch cake. 
│ +│ Refrigerate. │ +│ Place first 5-inch cake on its cardboard on work surface. │ +│ Spread 1/2 cup ganache over top of cake and all the way to edge. │ +│ Spread 2 tablespoons jam over, leaving 1/2-inch chocolate border at edge. │ +│ Drop 1/3 cup white chocolate frosting by spoonfuls over jam. │ +│ Gently spread frosting over jam, leaving 1/2-inch chocolate border at edge. │ +│ Rub cocoa over second 6-inch cardboard. │ +│ Cut around sides of second 5-inch cake to loosen. │ +│ Place cardboard, cocoa side down, over pan. │ +│ Turn cake out onto cardboard. │ +│ Peel off parchment. │ +│ Slide cake off cardboard and onto filling on first 5-inch cake. │ +│ Chill all cakes 1 hour to set filling. │ +│ Place 12-inch tiered cake on its cardboard on revolving cake stand. │ +│ Spread 2 2/3 cups frosting over top and sides of cake as a first coat. │ +│ Refrigerate cake. │ +│ Place 8-inch tiered cake on its cardboard on cake stand. │ +│ Spread 1 1/4 cups frosting over top and sides of cake as a first coat. │ +│ Refrigerate cake. │ +│ Place 5-inch tiered cake on its cardboard on cake stand. │ +│ Spread 3/4 cup frosting over top and sides of cake as a first coat. │ +│ Refrigerate all cakes until first coats of frosting set, about 1 hour. │ +│ (Cakes can be made to this point up to 1 day ahead; cover and keep refrigerate.) │ +│ Prepare second batch of frosting, using remaining frosting ingredients and following directions for first batch. │ +│ Spoon 2 cups frosting into pastry bag fitted with small star tip. │ +│ Place 12-inch cake on its cardboard on large flat platter. │ +│ Place platter on cake stand. │ +│ Using icing spatula, spread 2 1/2 cups frosting over top and sides of cake; smooth top. │ +│ Using filled pastry bag, pipe decorative border around top edge of cake. │ +│ Refrigerate cake on platter. │ +│ Place 8-inch cake on its cardboard on cake stand. │ +│ Using icing spatula, spread 1 1/2 cups frosting over top and sides of cake; smooth top. │ +│ Using pastry bag, pipe decorative border around top edge of cake. │ +│ Refrigerate cake on its cardboard. │ +│ Place 5-inch cake on its cardboard on cake stand. │ +│ Using icing spatula, spread 3/4 cup frosting over top and sides of cake; smooth top. │ +│ Using pastry bag, pipe decorative border around top edge of cake, spooning more frosting into bag if necessary. │ +│ Refrigerate cake on its cardboard. │ +│ Keep all cakes refrigerated until frosting sets, about 2 hours. │ +│ (Can be prepared 2 days ahead. │ +│ Cover loosely; keep refrigerated.) │ +│ Place 12-inch cake on platter on work surface. │ +│ Press 1 wooden dowel straight down into and completely through center of cake. │ +│ Mark dowel 1/4 inch above top of frosting. │ +│ Remove dowel and cut with serrated knife at marked point. │ +│ Cut 4 more dowels to same length. │ +│ Press 1 cut dowel back into center of cake. │ +│ Press remaining 4 cut dowels into cake, positioning 3 1/2 inches inward from cake edges and spacing evenly. │ +│ Place 8-inch cake on its cardboard on work surface. │ +│ Press 1 dowel straight down into and completely through center of cake. │ +│ Mark dowel 1/4 inch above top of frosting. │ +│ Remove dowel and cut with serrated knife at marked point. │ +│ Cut 3 more dowels to same length. │ +│ Press 1 cut dowel back into center of cake. │ +│ Press remaining 3 cut dowels into cake, positioning 2 1/2 inches inward from edges and spacing evenly. │ +│ Using large metal spatula as aid, place 8-inch cake on its cardboard atop dowels in 12-inch cake, centering carefully. 
│ +│ Gently place 5-inch cake on its cardboard atop dowels in 8-inch cake, centering carefully. │ +│ Using citrus stripper, cut long strips of orange peel from oranges. │ +│ Cut strips into long segments. │ +│ To make orange peel coils, wrap peel segment around handle of wooden spoon; gently slide peel off handle so that peel keeps coiled shape. │ +│ Garnish cake with orange peel coils, ivy or mint sprigs, and some berries. │ +│ (Assembled cake can be made up to 8 hours ahead. │ +│ Let stand at cool room temperature.) │ +│ Remove top and middle cake tiers. │ +│ Remove dowels from cakes. │ +│ Cut top and middle cakes into slices. │ +│ To cut 12-inch cake: Starting 3 inches inward from edge and inserting knife straight down, cut through from top to bottom to make 6-inch-diameter circle in center of cake. │ +│ Cut outer portion of cake into slices; cut inner portion into slices and serve with strawberries. │ +└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘ + +126 rows in set. Elapsed: 0.011 sec. Processed 8.19 thousand rows, 5.34 MB (737.75 thousand rows/s., 480.59 MB/s.) +``` + +### Online Playground + +Этот набор данных доступен в [Online Playground](https://gh-api.clickhouse.tech/play?user=play#U0VMRUNUCiAgICBhcnJheUpvaW4oTkVSKSBBUyBrLAogICAgY291bnQoKSBBUyBjCkZST00gcmVjaXBlcwpHUk9VUCBCWSBrCk9SREVSIEJZIGMgREVTQwpMSU1JVCA1MA==). + +[Оригинальная статья](https://clickhouse.tech/docs/ru/getting-started/example-datasets/recipes/) diff --git a/docs/ru/getting-started/example-datasets/wikistat.md b/docs/ru/getting-started/example-datasets/wikistat.md index c5a877ff8fd..f224c24e6ac 100644 --- a/docs/ru/getting-started/example-datasets/wikistat.md +++ b/docs/ru/getting-started/example-datasets/wikistat.md @@ -30,4 +30,3 @@ $ cat links.txt | while read link; do wget http://dumps.wikimedia.org/other/page $ ls -1 /opt/wikistat/ | grep gz | while read i; do echo $i; gzip -cd /opt/wikistat/$i | ./wikistat-loader --time="$(echo -n $i | sed -r 's/pagecounts-([0-9]{4})([0-9]{2})([0-9]{2})-([0-9]{2})([0-9]{2})([0-9]{2})\.gz/\1-\2-\3 \4-00-00/')" | clickhouse-client --query="INSERT INTO wikistat FORMAT TabSeparated"; done ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/getting_started/example_datasets/wikistat/) diff --git a/docs/ru/getting-started/index.md b/docs/ru/getting-started/index.md index 78b56092740..599cb8b9434 100644 --- a/docs/ru/getting-started/index.md +++ b/docs/ru/getting-started/index.md @@ -14,4 +14,3 @@ toc_title: hidden - [Пройти подробное руководство для начинающих](tutorial.md) - [Поэкспериментировать с тестовыми наборами данных](example-datasets/ontime.md) -[Оригинальная статья](https://clickhouse.tech/docs/ru/getting_started/) diff --git a/docs/ru/getting-started/install.md b/docs/ru/getting-started/install.md index aa5e8d77512..4ae27a910ea 100644 --- a/docs/ru/getting-started/install.md +++ b/docs/ru/getting-started/install.md @@ -173,4 +173,3 @@ SELECT 1 Для дальнейших экспериментов можно попробовать загрузить один из тестовых наборов данных или пройти [пошаговое руководство для начинающих](https://clickhouse.tech/tutorial.html). 
-[Оригинальная статья](https://clickhouse.tech/docs/ru/getting_started/install/) diff --git a/docs/ru/getting-started/playground.md b/docs/ru/getting-started/playground.md index 86a5cd5272c..b51a9b2b436 100644 --- a/docs/ru/getting-started/playground.md +++ b/docs/ru/getting-started/playground.md @@ -36,10 +36,10 @@ ClickHouse Playground дает возможность поработать с [ - запрещены INSERT запросы Также установлены следующие опции: -- [max_result_bytes=10485760](../operations/settings/query_complexity/#max-result-bytes) -- [max_result_rows=2000](../operations/settings/query_complexity/#setting-max_result_rows) -- [result_overflow_mode=break](../operations/settings/query_complexity/#result-overflow-mode) -- [max_execution_time=60000](../operations/settings/query_complexity/#max-execution-time) +- [max_result_bytes=10485760](../operations/settings/query-complexity.md#max-result-bytes) +- [max_result_rows=2000](../operations/settings/query-complexity.md#setting-max_result_rows) +- [result_overflow_mode=break](../operations/settings/query-complexity.md#result-overflow-mode) +- [max_execution_time=60000](../operations/settings/query-complexity.md#max-execution-time) ## Примеры {#examples} diff --git a/docs/ru/getting-started/tutorial.md b/docs/ru/getting-started/tutorial.md index f5455ba2b9a..68b3e4dbae7 100644 --- a/docs/ru/getting-started/tutorial.md +++ b/docs/ru/getting-started/tutorial.md @@ -644,7 +644,7 @@ If there are no replicas at the moment on replicated table creation, a new first ``` sql CREATE TABLE tutorial.hits_replica (...) -ENGINE = ReplcatedMergeTree( +ENGINE = ReplicatedMergeTree( '/clickhouse_perftest/tables/{shard}/hits', '{replica}' ) diff --git a/docs/ru/guides/apply-catboost-model.md b/docs/ru/guides/apply-catboost-model.md index 11964c57fc7..db2be63692f 100644 --- a/docs/ru/guides/apply-catboost-model.md +++ b/docs/ru/guides/apply-catboost-model.md @@ -158,7 +158,9 @@ FROM amazon_train /home/catboost/data/libcatboostmodel.so /home/catboost/models/*_model.xml ``` - +!!! note "Примечание" + Вы можете позднее изменить путь к конфигурации модели CatBoost без перезагрузки сервера. + ## 4. Запустите вывод модели из SQL {#run-model-inference} Для тестирования модели запустите клиент ClickHouse `$ clickhouse client`. diff --git a/docs/ru/index.md b/docs/ru/index.md index 26d7dc3bf21..e16f2afed82 100644 --- a/docs/ru/index.md +++ b/docs/ru/index.md @@ -97,4 +97,3 @@ ClickHouse - столбцовая система управления базам Стоит заметить, что для эффективности по CPU требуется, чтобы язык запросов был декларативным (SQL, MDX) или хотя бы векторным (J, K). То есть, чтобы запрос содержал циклы только в неявном виде, открывая возможности для оптимизации. -[Оригинальная статья](https://clickhouse.tech/docs/ru/) diff --git a/docs/ru/interfaces/cli.md b/docs/ru/interfaces/cli.md index 3f6b288fc2b..277b73a6d36 100644 --- a/docs/ru/interfaces/cli.md +++ b/docs/ru/interfaces/cli.md @@ -121,6 +121,7 @@ $ clickhouse-client --param_tbl="numbers" --param_db="system" --param_col="numbe - `--user, -u` — имя пользователя, по умолчанию — ‘default’. - `--password` — пароль, по умолчанию — пустая строка. - `--query, -q` — запрос для выполнения, при использовании в неинтерактивном режиме. +- `--queries-file, -qf` - путь к файлу с запросами для выполнения. Необходимо указать только одну из опций: `query` или `queries-file`. - `--database, -d` — выбрать текущую БД. Без указания значение берется из настроек сервера (по умолчанию — БД ‘default’). 
- `--multiline, -m` — если указано — разрешить многострочные запросы, не отправлять запрос по нажатию Enter. - `--multiquery, -n` — если указано — разрешить выполнять несколько запросов, разделённых точкой с запятой. @@ -130,6 +131,7 @@ $ clickhouse-client --param_tbl="numbers" --param_db="system" --param_col="numbe - `--stacktrace` — если указано, в случае исключения, выводить также его стек-трейс. - `--config-file` — имя конфигурационного файла. - `--secure` — если указано, будет использован безопасный канал. +- `--history_file` - путь к файлу с историей команд. - `--param_` — значение параметра для [запроса с параметрами](#cli-queries-with-parameters). Начиная с версии 20.5, в `clickhouse-client` есть автоматическая подсветка синтаксиса (включена всегда). @@ -153,4 +155,3 @@ $ clickhouse-client --param_tbl="numbers" --param_db="system" --param_col="numbe ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/interfaces/cli/) diff --git a/docs/ru/interfaces/cpp.md b/docs/ru/interfaces/cpp.md index 018f4e22e34..f0691453fe6 100644 --- a/docs/ru/interfaces/cpp.md +++ b/docs/ru/interfaces/cpp.md @@ -7,4 +7,3 @@ toc_title: "C++ клиентская библиотека" См. README в репозитории [clickhouse-cpp](https://github.com/ClickHouse/clickhouse-cpp). -[Оригинальная статья](https://clickhouse.tech/docs/ru/interfaces/cpp/) diff --git a/docs/ru/interfaces/formats.md b/docs/ru/interfaces/formats.md index edea533b642..f67997b58d6 100644 --- a/docs/ru/interfaces/formats.md +++ b/docs/ru/interfaces/formats.md @@ -49,7 +49,7 @@ ClickHouse может принимать (`INSERT`) и отдавать (`SELECT | [Parquet](#data-format-parquet) | ✔ | ✔ | | [Arrow](#data-format-arrow) | ✔ | ✔ | | [ArrowStream](#data-format-arrow-stream) | ✔ | ✔ | -| [ORC](#data-format-orc) | ✔ | ✗ | +| [ORC](#data-format-orc) | ✔ | ✔ | | [RowBinary](#rowbinary) | ✔ | ✔ | | [RowBinaryWithNamesAndTypes](#rowbinarywithnamesandtypes) | ✔ | ✔ | | [Native](#native) | ✔ | ✔ | @@ -1173,7 +1173,7 @@ ClickHouse поддерживает настраиваемую точность Неподдержанные типы данных Parquet: `DATE32`, `TIME32`, `FIXED_SIZE_BINARY`, `JSON`, `UUID`, `ENUM`. -Типы данных столбцов в ClickHouse могут отличаться от типов данных соответствующих полей файла в формате Parquet. При вставке данных, ClickHouse интерпретирует типы данных в соответствии с таблицей выше, а затем [приводит](../query_language/functions/type_conversion_functions/#type_conversion_function-cast) данные к тому типу, который установлен для столбца таблицы. +Типы данных столбцов в ClickHouse могут отличаться от типов данных соответствующих полей файла в формате Parquet. При вставке данных, ClickHouse интерпретирует типы данных в соответствии с таблицей выше, а затем [приводит](../sql-reference/functions/type-conversion-functions/#type_conversion_function-cast) данные к тому типу, который установлен для столбца таблицы. ### Вставка и выборка данных {#vstavka-i-vyborka-dannykh} @@ -1203,45 +1203,53 @@ $ clickhouse-client --query="SELECT * FROM {some_table} FORMAT Parquet" > {some_ ## ORC {#data-format-orc} -[Apache ORC](https://orc.apache.org/) - это column-oriented формат данных, распространённый в экосистеме Hadoop. Вы можете только вставлять данные этого формата в ClickHouse. +[Apache ORC](https://orc.apache.org/) — это столбцовый формат данных, распространенный в экосистеме [Hadoop](https://hadoop.apache.org/). 
### Соответствие типов данных {#sootvetstvie-tipov-dannykh-1} -Таблица показывает поддержанные типы данных и их соответствие [типам данных](../sql-reference/data-types/index.md) ClickHouse для запросов `INSERT`. +Таблица ниже содержит поддерживаемые типы данных и их соответствие [типам данных](../sql-reference/data-types/index.md) ClickHouse для запросов `INSERT` и `SELECT`. -| Тип данных ORC (`INSERT`) | Тип данных ClickHouse | -|---------------------------|-----------------------------------------------------| -| `UINT8`, `BOOL` | [UInt8](../sql-reference/data-types/int-uint.md) | -| `INT8` | [Int8](../sql-reference/data-types/int-uint.md) | -| `UINT16` | [UInt16](../sql-reference/data-types/int-uint.md) | -| `INT16` | [Int16](../sql-reference/data-types/int-uint.md) | -| `UINT32` | [UInt32](../sql-reference/data-types/int-uint.md) | -| `INT32` | [Int32](../sql-reference/data-types/int-uint.md) | -| `UINT64` | [UInt64](../sql-reference/data-types/int-uint.md) | -| `INT64` | [Int64](../sql-reference/data-types/int-uint.md) | -| `FLOAT`, `HALF_FLOAT` | [Float32](../sql-reference/data-types/float.md) | -| `DOUBLE` | [Float64](../sql-reference/data-types/float.md) | -| `DATE32` | [Date](../sql-reference/data-types/date.md) | -| `DATE64`, `TIMESTAMP` | [DateTime](../sql-reference/data-types/datetime.md) | -| `STRING`, `BINARY` | [String](../sql-reference/data-types/string.md) | -| `DECIMAL` | [Decimal](../sql-reference/data-types/decimal.md) | +| Тип данных ORC (`INSERT`) | Тип данных ClickHouse | Тип данных ORC (`SELECT`) | +|---------------------------|-----------------------------------------------------|---------------------------| +| `UINT8`, `BOOL` | [UInt8](../sql-reference/data-types/int-uint.md) | `UINT8` | +| `INT8` | [Int8](../sql-reference/data-types/int-uint.md) | `INT8` | +| `UINT16` | [UInt16](../sql-reference/data-types/int-uint.md) | `UINT16` | +| `INT16` | [Int16](../sql-reference/data-types/int-uint.md) | `INT16` | +| `UINT32` | [UInt32](../sql-reference/data-types/int-uint.md) | `UINT32` | +| `INT32` | [Int32](../sql-reference/data-types/int-uint.md) | `INT32` | +| `UINT64` | [UInt64](../sql-reference/data-types/int-uint.md) | `UINT64` | +| `INT64` | [Int64](../sql-reference/data-types/int-uint.md) | `INT64` | +| `FLOAT`, `HALF_FLOAT` | [Float32](../sql-reference/data-types/float.md) | `FLOAT` | +| `DOUBLE` | [Float64](../sql-reference/data-types/float.md) | `DOUBLE` | +| `DATE32` | [Date](../sql-reference/data-types/date.md) | `DATE32` | +| `DATE64`, `TIMESTAMP` | [DateTime](../sql-reference/data-types/datetime.md) | `TIMESTAMP` | +| `STRING`, `BINARY` | [String](../sql-reference/data-types/string.md) | `BINARY` | +| `DECIMAL` | [Decimal](../sql-reference/data-types/decimal.md) | `DECIMAL` | +| `-` | [Array](../sql-reference/data-types/array.md) | `LIST` | -ClickHouse поддерживает настраиваемую точность для формата `Decimal`. При обработке запроса `INSERT`, ClickHouse обрабатывает тип данных Parquet `DECIMAL` как `Decimal128`. +ClickHouse поддерживает настраиваемую точность для формата `Decimal`. При обработке запроса `INSERT`, ClickHouse обрабатывает тип данных ORC `DECIMAL` как `Decimal128`. -Неподдержанные типы данных ORC: `DATE32`, `TIME32`, `FIXED_SIZE_BINARY`, `JSON`, `UUID`, `ENUM`. +Неподдерживаемые типы данных ORC: `TIME32`, `FIXED_SIZE_BINARY`, `JSON`, `UUID`, `ENUM`. -Типы данных столбцов в таблицах ClickHouse могут отличаться от типов данных для соответствующих полей ORC. 
При вставке данных, ClickHouse интерпретирует типы данных ORC согласно таблице соответствия, а затем [приводит](../query_language/functions/type_conversion_functions/#type_conversion_function-cast) данные к типу, установленному для столбца таблицы ClickHouse. +Типы данных столбцов в таблицах ClickHouse могут отличаться от типов данных для соответствующих полей ORC. При вставке данных ClickHouse интерпретирует типы данных ORC согласно таблице соответствия, а затем [приводит](../sql-reference/functions/type-conversion-functions/#type_conversion_function-cast) данные к типу, установленному для столбца таблицы ClickHouse. ### Вставка данных {#vstavka-dannykh-1} -Данные ORC можно вставить в таблицу ClickHouse командой: +Чтобы вставить в ClickHouse данные из файла в формате ORC, используйте команду следующего вида: ``` bash $ cat filename.orc | clickhouse-client --query="INSERT INTO some_table FORMAT ORC" ``` -Для обмена данных с Hadoop можно использовать [движок таблиц HDFS](../engines/table-engines/integrations/hdfs.md). +### Вывод данных {#vyvod-dannykh-1} +Чтобы получить данные из таблицы ClickHouse и сохранить их в файл формата ORC, используйте команду следующего вида: + +``` bash +$ clickhouse-client --query="SELECT * FROM {some_table} FORMAT ORC" > {filename.orc} +``` + +Для обмена данных с экосистемой Hadoop вы можете использовать [движок таблиц HDFS](../engines/table-engines/integrations/hdfs.md). ## LineAsString {#lineasstring} @@ -1268,7 +1276,7 @@ SELECT * FROM line_as_string; ## Regexp {#data-format-regexp} -Каждая строка импортируемых данных разбирается в соответствии с регулярным выражением. +Каждая строка импортируемых данных разбирается в соответствии с регулярным выражением. При работе с форматом `Regexp` можно использовать следующие параметры: @@ -1279,15 +1287,15 @@ SELECT * FROM line_as_string; - Escaped (как в [TSV](#tabseparated)) - Quoted (как в [Values](#data-format-values)) - Raw (данные импортируются как есть, без сериализации) -- `format_regexp_skip_unmatched` — [UInt8](../sql-reference/data-types/int-uint.md). Признак, будет ли генерироваться исключение в случае, если импортируемые данные не соответствуют регулярному выражению `format_regexp`. Может принимать значение `0` или `1`. +- `format_regexp_skip_unmatched` — [UInt8](../sql-reference/data-types/int-uint.md). Признак, будет ли генерироваться исключение в случае, если импортируемые данные не соответствуют регулярному выражению `format_regexp`. Может принимать значение `0` или `1`. -**Использование** +**Использование** -Регулярное выражение (шаблон) из параметра `format_regexp` применяется к каждой строке импортируемых данных. Количество частей в шаблоне (подшаблонов) должно соответствовать количеству колонок в импортируемых данных. +Регулярное выражение (шаблон) из параметра `format_regexp` применяется к каждой строке импортируемых данных. Количество частей в шаблоне (подшаблонов) должно соответствовать количеству колонок в импортируемых данных. -Строки импортируемых данных должны разделяться символом новой строки `'\n'` или символами `"\r\n"` (перенос строки в формате DOS). +Строки импортируемых данных должны разделяться символом новой строки `'\n'` или символами `"\r\n"` (перенос строки в формате DOS). -Данные, выделенные по подшаблонам, интерпретируются в соответствии с типом, указанным в параметре `format_regexp_escaping_rule`. +Данные, выделенные по подшаблонам, интерпретируются в соответствии с типом, указанным в параметре `format_regexp_escaping_rule`. 
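Для наглядности — минимальный условный набросок использования формата `Regexp` (имя таблицы `imp_regex_table`, её структура и сами данные здесь вымышленные; запросы выполняются в `clickhouse-client` по отдельности):

``` sql
-- Гипотетическая таблица для примера.
CREATE TABLE imp_regex_table (id UInt32, string String, date Date) ENGINE = Memory;

-- Каждая строка после FORMAT Regexp разбирается регулярным выражением format_regexp;
-- группы (.+?) сопоставляются столбцам таблицы слева направо.
INSERT INTO imp_regex_table SETTINGS
    format_regexp = 'id: (.+?) string: (.+?) date: (.+?)',
    format_regexp_escaping_rule = 'Escaped',
    format_regexp_skip_unmatched = 0
FORMAT Regexp
id: 1 string: str1 date: 2020-01-01
id: 2 string: str2 date: 2020-01-02
```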
Если строка импортируемых данных не соответствует регулярному выражению и параметр `format_regexp_skip_unmatched` равен 1, строка просто игнорируется. Если же параметр `format_regexp_skip_unmatched` равен 0, генерируется исключение. @@ -1390,4 +1398,3 @@ $ clickhouse-client --query "SELECT * FROM {some_table} FORMAT RawBLOB" | md5sum f9725a22f9191e064120d718e26862a9 - ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/interfaces/formats/) diff --git a/docs/ru/interfaces/http.md b/docs/ru/interfaces/http.md index 5cb50d8f168..9e553c12dc0 100644 --- a/docs/ru/interfaces/http.md +++ b/docs/ru/interfaces/http.md @@ -635,4 +635,3 @@ $ curl -vv -H 'XXX:xxx' 'http://localhost:8123/get_relative_path_static_handler' * Connection #0 to host localhost left intact ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/interfaces/http_interface/) diff --git a/docs/ru/interfaces/index.md b/docs/ru/interfaces/index.md index fc8743b3c1e..12e8853823e 100644 --- a/docs/ru/interfaces/index.md +++ b/docs/ru/interfaces/index.md @@ -24,4 +24,3 @@ ClickHouse предоставляет два сетевых интерфейса - [Библиотеки для интеграции](third-party/integrations.md); - [Визуальные интерфейсы](third-party/gui.md). -[Оригинальная статья](https://clickhouse.tech/docs/ru/interfaces/) diff --git a/docs/ru/interfaces/jdbc.md b/docs/ru/interfaces/jdbc.md index ac86375c74f..30270322f7a 100644 --- a/docs/ru/interfaces/jdbc.md +++ b/docs/ru/interfaces/jdbc.md @@ -10,4 +10,3 @@ toc_title: "JDBC-драйвер" - [ClickHouse-Native-JDBC](https://github.com/housepower/ClickHouse-Native-JDBC) - [clickhouse4j](https://github.com/blynkkk/clickhouse4j) -[Оригинальная статья](https://clickhouse.tech/docs/ru/interfaces/jdbc/) diff --git a/docs/ru/interfaces/odbc.md b/docs/ru/interfaces/odbc.md index 7843d3cb943..22153865298 100644 --- a/docs/ru/interfaces/odbc.md +++ b/docs/ru/interfaces/odbc.md @@ -8,4 +8,3 @@ toc_title: "ODBC-драйвер" - [Официальный драйвер](https://github.com/ClickHouse/clickhouse-odbc). -[Оригинальная статья](https://clickhouse.tech/docs/ru/interfaces/odbc/) diff --git a/docs/ru/interfaces/tcp.md b/docs/ru/interfaces/tcp.md index ea8c170009d..5261e1eafef 100644 --- a/docs/ru/interfaces/tcp.md +++ b/docs/ru/interfaces/tcp.md @@ -7,4 +7,3 @@ toc_title: "Родной интерфейс (TCP)" Нативный протокол используется в [клиенте командной строки](cli.md), для взаимодействия между серверами во время обработки распределенных запросов, а также в других программах на C++. К сожалению, у родного протокола ClickHouse пока нет формальной спецификации, но в нем можно разобраться с использованием исходного кода ClickHouse (начиная с [примерно этого места](https://github.com/ClickHouse/ClickHouse/tree/master/src/Client)) и/или путем перехвата и анализа TCP трафика. 
-[Оригинальная статья](https://clickhouse.tech/docs/ru/interfaces/tcp/) diff --git a/docs/ru/interfaces/third-party/client-libraries.md b/docs/ru/interfaces/third-party/client-libraries.md index 65e93731300..411475f0aaa 100644 --- a/docs/ru/interfaces/third-party/client-libraries.md +++ b/docs/ru/interfaces/third-party/client-libraries.md @@ -58,4 +58,3 @@ toc_title: "Клиентские библиотеки от сторонних р - Nim - [nim-clickhouse](https://github.com/leonardoce/nim-clickhouse) -[Оригинальная статья](https://clickhouse.tech/docs/ru/interfaces/third-party/client_libraries/) diff --git a/docs/ru/interfaces/third-party/gui.md b/docs/ru/interfaces/third-party/gui.md index c02c32e08f4..dc96c32e996 100644 --- a/docs/ru/interfaces/third-party/gui.md +++ b/docs/ru/interfaces/third-party/gui.md @@ -103,7 +103,11 @@ toc_title: "Визуальные интерфейсы от сторонних р [xeus-clickhouse](https://github.com/wangfenjin/xeus-clickhouse) — это ядро Jupyter для ClickHouse, которое поддерживает запрос ClickHouse-данных с использованием SQL в Jupyter. -## Коммерческие {#kommercheskie} +### MindsDB Studio {#mindsdb} + +[MindsDB](https://mindsdb.com/) — это продукт с открытым исходным кодом, реализующий слой искусственного интеллекта (Artificial Intelligence, AI) для различных СУБД, в том числе для ClickHouse. MindsDB облегчает процессы создания, обучения и развертывания современных моделей машинного обучения. Графический пользовательский интерфейс MindsDB Studio позволяет обучать новые модели на основе данных в БД, интерпретировать сделанные моделями прогнозы, выявлять потенциальные ошибки в данных, визуализировать и оценивать достоверность моделей с помощью функции Explainable AI, так чтобы вы могли быстрее адаптировать и настраивать ваши модели машинного обучения. + +## Коммерческие {#commercial} ### DataGrip {#datagrip} @@ -146,7 +150,6 @@ toc_title: "Визуальные интерфейсы от сторонних р - Подготовка данных и возможности ETL. - Моделирование данных с помощью SQL для их реляционного отображения. -[Оригинальная статья](https://clickhouse.tech/docs/ru/interfaces/third-party/gui/) ### Looker {#looker} @@ -163,4 +166,25 @@ toc_title: "Визуальные интерфейсы от сторонних р [Как сконфигурировать ClickHouse в Looker.](https://docs.looker.com/setup-and-management/database-config/clickhouse) -[Original article](https://clickhouse.tech/docs/ru/interfaces/third-party/gui/) +### SeekTable {#seektable} + +[SeekTable](https://www.seektable.com) — это аналитический инструмент для самостоятельного анализа и обработки данных бизнес-аналитики. Он доступен как в виде облачного сервиса, так и в виде локальной версии. Отчеты из SeekTable могут быть встроены в любое веб-приложение. + +Основные возможности: + +- Удобный конструктор отчетов. +- Гибкая настройка отчетов SQL и создание запросов для специфичных отчетов. +- Интегрируется с ClickHouse, используя собственную точку приема запроса TCP/IP или интерфейс HTTP(S) (два разных драйвера). +- Поддерживает всю мощь диалекта ClickHouse SQL для построения запросов по различным измерениям и показателям. +- [WEB-API](https://www.seektable.com/help/web-api-integration) для автоматизированной генерации отчетов. +- Процесс разработки отчетов поддерживает [резервное копирование/восстановление данных](https://www.seektable.com/help/self-hosted-backup-restore); конфигурация моделей данных (кубов) / отчетов представляет собой удобочитаемый XML-файл, который может храниться в системе контроля версий. 
+ +SeekTable [бесплатен](https://www.seektable.com/help/cloud-pricing) для личного/индивидуального использования. + +[Как сконфигурировать подключение ClickHouse в SeekTable.](https://www.seektable.com/help/clickhouse-pivot-table) + +### Chadmin {#chadmin} + +[Chadmin](https://github.com/bun4uk/chadmin) — простой графический интерфейс для визуализации запущенных запросов на вашем кластере ClickHouse. Он отображает информацию о запросах и дает возможность их завершать. + +[Original article](https://clickhouse.tech/docs/en/interfaces/third-party/gui/) diff --git a/docs/ru/interfaces/third-party/index.md b/docs/ru/interfaces/third-party/index.md index 8b59bb5fd28..bbf5a237000 100644 --- a/docs/ru/interfaces/third-party/index.md +++ b/docs/ru/interfaces/third-party/index.md @@ -15,4 +15,3 @@ toc_priority: 24 !!! note "Примечание" С ClickHouse работают также универсальные инструменты, поддерживающие общий API, такие как [ODBC](../../interfaces/odbc.md) или [JDBC](../../interfaces/jdbc.md). -[Оригинальная статья](https://clickhouse.tech/docs/ru/interfaces/third-party/) diff --git a/docs/ru/interfaces/third-party/integrations.md b/docs/ru/interfaces/third-party/integrations.md index 84d5b93f92f..198e9d6be76 100644 --- a/docs/ru/interfaces/third-party/integrations.md +++ b/docs/ru/interfaces/third-party/integrations.md @@ -69,6 +69,9 @@ toc_title: "Библиотеки для интеграции от сторонн - Гео - [MaxMind](https://dev.maxmind.com/geoip/) - [clickhouse-maxmind-geoip](https://github.com/AlexeyKupershtokh/clickhouse-maxmind-geoip) +- AutoML + - [MindsDB](https://mindsdb.com/) + - [MindsDB](https://github.com/mindsdb/mindsdb) - Слой предиктивной аналитики и искусственного интеллекта для СУБД ClickHouse. ## Экосистемы вокруг языков программирования {#ekosistemy-vokrug-iazykov-programmirovaniia} @@ -105,4 +108,3 @@ toc_title: "Библиотеки для интеграции от сторонн - [GraphQL](https://github.com/graphql) - [activecube-graphql](https://github.com/bitquery/activecube-graphql) -[Оригинальная статья](https://clickhouse.tech/docs/ru/interfaces/third-party/integrations/) diff --git a/docs/ru/interfaces/third-party/proxy.md b/docs/ru/interfaces/third-party/proxy.md index 48853cb352e..6d85c960c0e 100644 --- a/docs/ru/interfaces/third-party/proxy.md +++ b/docs/ru/interfaces/third-party/proxy.md @@ -41,4 +41,3 @@ toc_title: "Прокси-серверы от сторонних разработ Реализован на Go. -[Оригинальная статья](https://clickhouse.tech/docs/ru/interfaces/third-party/proxy/) diff --git a/docs/ru/introduction/distinctive-features.md b/docs/ru/introduction/distinctive-features.md index 852f5cecd5b..dedb1412dbf 100644 --- a/docs/ru/introduction/distinctive-features.md +++ b/docs/ru/introduction/distinctive-features.md @@ -73,4 +73,3 @@ ClickHouse предоставляет различные способы разм 3. Разреженный индекс делает ClickHouse плохо пригодным для точечных чтений одиночных строк по своим ключам. -[Оригинальная статья](https://clickhouse.tech/docs/ru/introduction/distinctive_features/) diff --git a/docs/ru/introduction/history.md b/docs/ru/introduction/history.md index ad17b2be27d..dc4aa935c27 100644 --- a/docs/ru/introduction/history.md +++ b/docs/ru/introduction/history.md @@ -52,4 +52,3 @@ OLAPServer хорошо подходил для неагрегированных Чтобы снять ограничения OLAPServer-а и решить задачу работы с неагрегированными данными для всех отчётов, разработана СУБД ClickHouse. 
-[Оригинальная статья](https://clickhouse.tech/docs/ru/introduction/ya_metrika_task/) diff --git a/docs/ru/introduction/index.md b/docs/ru/introduction/index.md index c37cde09060..99f8aad0531 100644 --- a/docs/ru/introduction/index.md +++ b/docs/ru/introduction/index.md @@ -2,5 +2,3 @@ toc_folder_title: "Введение" toc_priority: 1 --- - - diff --git a/docs/ru/introduction/info.md b/docs/ru/introduction/info.md index a9398b8c9cd..a5e7efffc7e 100644 --- a/docs/ru/introduction/info.md +++ b/docs/ru/introduction/info.md @@ -9,4 +9,3 @@ toc_priority: 100 - Адрес электронной почты: - Телефон: +7-495-780-6510 -[Оригинальная статья](https://clickhouse.tech/docs/ru/introduction/info/) diff --git a/docs/ru/introduction/performance.md b/docs/ru/introduction/performance.md index dd92d3df9f5..eec1dcf4d0a 100644 --- a/docs/ru/introduction/performance.md +++ b/docs/ru/introduction/performance.md @@ -27,4 +27,3 @@ toc_title: "Производительность" Данные рекомендуется вставлять пачками не менее 1000 строк или не более одного запроса в секунду. При вставке в таблицу типа MergeTree из tab-separated дампа, скорость вставки будет в районе 50-200 МБ/сек. Если вставляются строчки размером около 1 КБ, то скорость будет в районе 50 000 - 200 000 строчек в секунду. Если строчки маленькие - производительность в строчках в секунду будет выше (на данных БК - `>` 500 000 строк в секунду, на данных Graphite - `>` 1 000 000 строк в секунду). Для увеличения производительности, можно производить несколько запросов INSERT параллельно - при этом производительность растёт линейно. -[Оригинальная статья](https://clickhouse.tech/docs/ru/introduction/performance/) diff --git a/docs/ru/operations/access-rights.md b/docs/ru/operations/access-rights.md index 9aa4e5f2561..a0ad7664131 100644 --- a/docs/ru/operations/access-rights.md +++ b/docs/ru/operations/access-rights.md @@ -146,4 +146,3 @@ ClickHouse поддерживает управление доступом на По умолчанию управление доступом на основе SQL выключено для всех пользователей. Вам необходимо настроить хотя бы одного пользователя в файле конфигурации `users.xml` и присвоить значение 1 параметру [access_management](settings/settings-users.md#access_management-user-setting). -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/access_rights/) diff --git a/docs/ru/operations/backup.md b/docs/ru/operations/backup.md index 703217e8547..ed0adeb5e6f 100644 --- a/docs/ru/operations/backup.md +++ b/docs/ru/operations/backup.md @@ -36,4 +36,3 @@ ClickHouse позволяет использовать запрос `ALTER TABLE Для автоматизации этого подхода доступен инструмент от сторонних разработчиков: [clickhouse-backup](https://github.com/AlexAkulov/clickhouse-backup). -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/backup/) diff --git a/docs/ru/operations/caches.md b/docs/ru/operations/caches.md index 7744c596cd9..a0b71d1782a 100644 --- a/docs/ru/operations/caches.md +++ b/docs/ru/operations/caches.md @@ -26,4 +26,3 @@ toc_title: Кеши Чтобы очистить кеш, используйте выражение [SYSTEM DROP ... CACHE](../sql-reference/statements/system.md). 
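Например, примерный набросок таких запросов (точный перечень поддерживаемых кешей зависит от версии сервера):

``` sql
-- Сбрасывает кеш засечек (mark cache).
SYSTEM DROP MARK CACHE;
-- Сбрасывает кеш несжатых блоков.
SYSTEM DROP UNCOMPRESSED CACHE;
```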
-[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/caches/)
diff --git a/docs/ru/operations/configuration-files.md b/docs/ru/operations/configuration-files.md
index 84b26d0ba2a..11a01d1e6d2 100644
--- a/docs/ru/operations/configuration-files.md
+++ b/docs/ru/operations/configuration-files.md
@@ -52,4 +52,3 @@ $ cat /etc/clickhouse-server/users.d/alice.xml
 Сервер следит за изменениями конфигурационных файлов, а также файлов и ZooKeeper-узлов, которые были использованы при выполнении подстановок и переопределений, и перезагружает настройки пользователей и кластеров на лету. То есть, можно изменять кластера, пользователей и их настройки без перезапуска сервера.
 
-[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/configuration_files/)
diff --git a/docs/ru/operations/external-authenticators/index.md b/docs/ru/operations/external-authenticators/index.md
new file mode 100644
index 00000000000..c2ed9750562
--- /dev/null
+++ b/docs/ru/operations/external-authenticators/index.md
@@ -0,0 +1,16 @@
+---
+toc_folder_title: "\u0412\u043d\u0435\u0448\u043d\u0438\u0435\u0020\u0430\u0443\u0442\u0435\u043d\u0442\u0438\u0444\u0438\u043a\u0430\u0442\u043e\u0440\u044b\u0020\u043f\u043e\u043b\u044c\u0437\u043e\u0432\u0430\u0442\u0435\u043b\u0435\u0439\u0020\u0438\u0020\u043a\u0430\u0442\u0430\u043b\u043e\u0433\u0438"
+toc_priority: 48
+toc_title: "\u0412\u0432\u0435\u0434\u0435\u043d\u0438\u0435"
+---
+
+# Внешние аутентификаторы пользователей и каталоги {#external-authenticators}
+
+ClickHouse поддерживает аутентификацию и управление пользователями при помощи внешних сервисов.
+
+Поддерживаются следующие внешние аутентификаторы и каталоги:
+
+- [LDAP](./ldap.md#external-authenticators-ldap) [аутентификатор](./ldap.md#ldap-external-authenticator) и [каталог](./ldap.md#ldap-external-user-directory)
+- Kerberos [аутентификатор](./kerberos.md#external-authenticators-kerberos)
+
+[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/external-authenticators/index/)
diff --git a/docs/ru/operations/external-authenticators/kerberos.md b/docs/ru/operations/external-authenticators/kerberos.md
new file mode 100644
index 00000000000..b90714d14fd
--- /dev/null
+++ b/docs/ru/operations/external-authenticators/kerberos.md
@@ -0,0 +1,118 @@
+# Kerberos {#external-authenticators-kerberos}
+
+ClickHouse предоставляет возможность аутентификации существующих (и правильно сконфигурированных) пользователей с использованием Kerberos.
+
+В настоящее время возможно использование Kerberos только как внешнего аутентификатора, то есть для аутентификации уже существующих пользователей с помощью Kerberos. Пользователи, настроенные для Kerberos-аутентификации, могут работать с ClickHouse только через HTTP-интерфейс, причём сами клиенты должны иметь возможность аутентификации с использованием механизма GSS-SPNEGO.
+
+!!! info "Примечание"
+    Для Kerberos-аутентификации необходимо предварительно корректно настроить Kerberos на стороне клиента, на сервере и в конфигурационных файлах самого ClickHouse. Ниже описана лишь конфигурация ClickHouse.
+
+## Настройка Kerberos в ClickHouse {#enabling-kerberos-in-clickhouse}
+
+Для того чтобы задействовать Kerberos-аутентификацию в ClickHouse, в первую очередь необходимо добавить единственную секцию `kerberos` в `config.xml`.
+
+В секции могут быть указаны дополнительные параметры:
+
+- `principal` — задаёт имя принципала (canonical service principal name, SPN), используемое при авторизации ClickHouse на Kerberos-сервере.
+ - Это опциональный параметр, при его отсутствии будет использовано стандартное имя. + +- `realm` — обеспечивает фильтрацию по реалм (realm). Пользователям, чей реалм не совпадает с указанным, будет отказано в аутентификации. + - Это опциональный параметр, при его отсутствии фильтр по реалм применяться не будет. + +Примеры, как должен выглядеть файл `config.xml`: + +```xml +<yandex> + <!-- ... --> + <kerberos /> +</yandex> +``` + +Или, с указанием принципала: + +```xml +<yandex> + <!-- ... --> + <kerberos> + <principal>HTTP/clickhouse.example.com@EXAMPLE.COM</principal> + </kerberos> +</yandex> +``` + +Или, с фильтрацией по реалм: + +```xml +<yandex> + <!-- ... --> + <kerberos> + <realm>EXAMPLE.COM</realm> + </kerberos> +</yandex> +``` + +!!! warning "Важно" + В конфигурационном файле не могут быть указаны одновременно оба параметра (`principal` и `realm`). В противном случае аутентификация с помощью Kerberos будет недоступна для всех пользователей. + +!!! warning "Важно" + В конфигурационном файле может быть не более одной секции `kerberos`. В противном случае аутентификация с помощью Kerberos будет отключена для всех пользователей. + + +## Аутентификация пользователей с помощью Kerberos {#kerberos-as-an-external-authenticator-for-existing-users} + +Уже существующие пользователи могут воспользоваться аутентификацией с помощью Kerberos. Однако Kerberos-аутентификация возможна только при использовании HTTP-интерфейса. + +Имя принципала (principal name) обычно имеет вид: + +- *primary/instance@REALM* + +Для успешной аутентификации необходимо, чтобы *primary* совпало с именем пользователя ClickHouse, настроенного для использования Kerberos. + +### Настройка Kerberos в `users.xml` {#enabling-kerberos-in-users-xml} + +Для того, чтобы пользователь имел возможность производить аутентификацию с помощью Kerberos, достаточно включить секцию `kerberos` в описание пользователя в `users.xml` (например, вместо секции `password` или аналогичной ей). + +В секции могут быть указаны дополнительные параметры: + +- `realm` — обеспечивает фильтрацию по реалм (realm): аутентификация будет возможна только при совпадении реалма клиента с указанным. + - Этот параметр является опциональным, при его отсутствии фильтрация применяться не будет. + +Пример, как выглядит конфигурация Kerberos в `users.xml`: + +```xml +<yandex> + <!-- ... --> + <users> + <!-- ... --> + <my_user> + <!-- ... --> + <kerberos> + <realm>EXAMPLE.COM</realm> + </kerberos> + </my_user> + </users> +</yandex> +``` + + +!!! warning "Важно" + Если пользователь настроен для Kerberos-аутентификации, другие виды аутентификации будут для него недоступны. Если наряду с `kerberos` в определении пользователя будет указан какой-либо другой способ аутентификации, ClickHouse завершит работу. + +!!! info "" + Ещё раз отметим, что кроме `users.xml`, необходимо также включить Kerberos в `config.xml`. + +### Настройка Kerberos через SQL {#enabling-kerberos-using-sql} + +Пользователей, использующих Kerberos-аутентификацию, можно создать не только с помощью изменения конфигурационных файлов. +Если SQL-ориентированное управление доступом включено в ClickHouse, можно также создать пользователя, работающего через Kerberos, с помощью SQL. + +```sql +CREATE USER my_user IDENTIFIED WITH kerberos REALM 'EXAMPLE.COM' +``` + +Или, без фильтрации по реалм: + +```sql +CREATE USER my_user IDENTIFIED WITH kerberos +``` diff --git a/docs/ru/operations/external-authenticators/ldap.md b/docs/ru/operations/external-authenticators/ldap.md new file mode 100644 index 00000000000..312020000ea --- /dev/null +++ b/docs/ru/operations/external-authenticators/ldap.md @@ -0,0 +1,148 @@ +# LDAP {#external-authenticators-ldap} + +Для аутентификации пользователей ClickHouse можно использовать сервер LDAP.
Существуют два подхода: + +- Использовать LDAP как внешний аутентификатор для существующих пользователей, которые определены в `users.xml`, или в локальных параметрах управления доступом. +- Использовать LDAP как внешний пользовательский каталог и разрешить аутентификацию локально неопределенных пользователей, если они есть на LDAP сервере. + +Для обоих подходов необходимо определить внутреннее имя LDAP сервера в конфигурации ClickHouse, чтобы другие параметры конфигурации могли ссылаться на это имя. + +## Определение LDAP сервера {#ldap-server-definition} + +Чтобы определить LDAP сервер, необходимо добавить секцию `ldap_servers` в `config.xml`. + +**Пример** + +```xml +<yandex> + <!-- ... --> + <ldap_servers> + <my_ldap_server> + <host>localhost</host> + <port>636</port> + <bind_dn>uid={user_name},ou=users,dc=example,dc=com</bind_dn> + <verification_cooldown>300</verification_cooldown> + <enable_tls>yes</enable_tls> + <tls_minimum_protocol_version>tls1.2</tls_minimum_protocol_version> + <tls_require_cert>demand</tls_require_cert> + <tls_cert_file>/path/to/tls_cert_file</tls_cert_file> + <tls_key_file>/path/to/tls_key_file</tls_key_file> + <tls_ca_cert_file>/path/to/tls_ca_cert_file</tls_ca_cert_file> + <tls_ca_cert_dir>/path/to/tls_ca_cert_dir</tls_ca_cert_dir> + <tls_cipher_suite>ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:AES256-GCM-SHA384</tls_cipher_suite> + </my_ldap_server> + </ldap_servers> +</yandex> +``` + +Обратите внимание, что можно определить несколько LDAP серверов внутри секции `ldap_servers`, используя различные имена. + +**Параметры** + +- `host` — имя хоста сервера LDAP или его IP. Этот параметр обязательный и не может быть пустым. +- `port` — порт сервера LDAP. Если настройка `enable_tls` равна `true`, то по умолчанию используется порт `636`, иначе — порт `389`. +- `bind_dn` — шаблон для создания DN для привязки. + - При формировании DN все подстроки `{user_name}` в шаблоне будут заменяться на фактическое имя пользователя при каждой попытке аутентификации. +- `verification_cooldown` — промежуток времени (в секундах) после успешной попытки привязки, в течение которого пользователь будет считаться аутентифицированным и сможет выполнять запросы без повторного обращения к серверам LDAP. + - Чтобы отключить кеширование и заставить обращаться к серверу LDAP для каждого запроса аутентификации, укажите `0` (значение по умолчанию). +- `enable_tls` — флаг, включающий использование защищенного соединения с сервером LDAP. + - Укажите `no` для использования текстового протокола `ldap://` (не рекомендовано). + - Укажите `yes` для обращения к LDAP по протоколу SSL/TLS `ldaps://` (рекомендовано, используется по умолчанию). + - Укажите `starttls` для использования устаревшего протокола StartTLS (текстовый `ldap://` протокол, модернизированный до TLS). +- `tls_minimum_protocol_version` — минимальная версия протокола SSL/TLS. + - Возможные значения: `ssl2`, `ssl3`, `tls1.0`, `tls1.1`, `tls1.2` (по умолчанию). +- `tls_require_cert` — поведение при проверке сертификата SSL/TLS. + - Возможные значения: `never`, `allow`, `try`, `demand` (по умолчанию). +- `tls_cert_file` — путь к файлу сертификата. +- `tls_key_file` — путь к файлу ключа сертификата. +- `tls_ca_cert_file` — путь к файлу сертификата удостоверяющего центра (ЦС, certification authority). +- `tls_ca_cert_dir` — путь к каталогу, содержащему сертификаты ЦС. +- `tls_cipher_suite` — разрешенный набор шифров (в нотации OpenSSL). + +## Внешний аутентификатор LDAP {#ldap-external-authenticator} + +Удаленный сервер LDAP можно использовать для верификации паролей локально определенных пользователей (пользователей, которые определены в `users.xml` или в локальных параметрах управления доступом). Для этого укажите имя определенного ранее сервера LDAP вместо `password` или другой аналогичной секции в настройках пользователя.
+ +При каждой попытке авторизации ClickHouse пытается "привязаться" к DN, указанному в [определении LDAP сервера](#ldap-server-definition), используя параметр `bind_dn` и предоставленные реквизиты для входа. Если попытка оказалась успешной, пользователь считается аутентифицированным. Обычно это называют методом "простой привязки". + +**Пример** + +```xml +<yandex> + <!-- ... --> + <users> + <!-- ... --> + <my_user> + <!-- ... --> + <ldap> + <server>my_ldap_server</server> + </ldap> + </my_user> + </users> +</yandex> +``` + +Обратите внимание, что пользователь `my_user` ссылается на `my_ldap_server`. Этот LDAP сервер должен быть настроен в основном файле `config.xml`, как это было описано ранее. + +При включенном SQL-ориентированном [управлении доступом](../access-rights.md#access-control) пользователи, аутентифицированные LDAP серверами, могут также быть созданы запросом [CREATE USER](../../sql-reference/statements/create/user.md#create-user-statement). + +Запрос: + +```sql +CREATE USER my_user IDENTIFIED WITH ldap SERVER 'my_ldap_server'; +``` + +## Внешний пользовательский каталог LDAP {#ldap-external-user-directory} + +В дополнение к локально определенным пользователям, удаленный LDAP сервер может служить источником определения пользователей. Для этого укажите имя определенного ранее сервера LDAP (см. [Определение LDAP сервера](#ldap-server-definition)) в секции `ldap` внутри секции `users_directories` файла `config.xml`. + +При каждой попытке аутентификации ClickHouse пытается локально найти определение пользователя и аутентифицировать его как обычно. Если пользователь не находится локально, ClickHouse предполагает, что он определяется во внешнем LDAP каталоге, и пытается "привязаться" к DN, указанному на LDAP сервере, используя предоставленные реквизиты для входа. Если попытка оказалась успешной, пользователь считается существующим и аутентифицированным. Пользователю присваиваются роли из списка, указанного в секции `roles`. Кроме того, если настроена секция `role_mapping`, то выполняется LDAP поиск, а его результаты преобразуются в имена ролей и присваиваются пользователям. Все это работает при условии, что SQL-ориентированное [управление доступом](../access-rights.md#access-control) включено, а роли созданы запросом [CREATE ROLE](../../sql-reference/statements/create/role.md#create-role-statement). + +**Пример** + +В `config.xml`. + +```xml +<yandex> + <!-- ... --> + <user_directories> + <!-- ... --> + <ldap> + <server>my_ldap_server</server> + <roles> + <my_local_role1 /> + <my_local_role2 /> + </roles> + <role_mapping> + <base_dn>ou=groups,dc=example,dc=com</base_dn> + <scope>subtree</scope> + <search_filter>(&amp;(objectClass=groupOfNames)(member={bind_dn}))</search_filter> + <attribute>cn</attribute> + <prefix>clickhouse_</prefix> + </role_mapping> + </ldap> + </user_directories> +</yandex> +``` + +Обратите внимание, что `my_ldap_server`, указанный в секции `ldap` внутри секции `user_directories`, должен быть настроен в файле `config.xml`, как это было описано ранее (см. [Определение LDAP сервера](#ldap-server-definition)). + +**Параметры** + +- `server` — имя одного из серверов LDAP, определенных в секции `ldap_servers` в файле конфигурации (см. выше). Этот параметр обязательный и не может быть пустым. +- `roles` — секция со списком локально определенных ролей, которые будут присвоены каждому пользователю, полученному от сервера LDAP. + - Если роли не указаны ни здесь, ни в секции `role_mapping` (см. ниже), пользователь после аутентификации не сможет выполнять никаких действий. +- `role_mapping` — секция с параметрами LDAP поиска и правилами отображения. + - При аутентификации пользователя, пока еще связанного с LDAP, производится LDAP поиск с помощью `search_filter` и имени этого пользователя. Для каждой записи, найденной в ходе поиска, выделяется значение указанного атрибута.
У каждого атрибута, имеющего указанный префикс, этот префикс удаляется, а остальная часть значения становится именем локальной роли, определенной в ClickHouse, причем предполагается, что эта роль была ранее создана запросом [CREATE ROLE](../../sql-reference/statements/create/role.md#create-role-statement) до этого. + - Внутри одной секции `ldap` может быть несколько секций `role_mapping`. Все они будут применены. + - `base_dn` — шаблон, который используется для создания базового DN для LDAP поиска. + - При формировании DN все подстроки `{user_name}` и `{bind_dn}` в шаблоне будут заменяться на фактическое имя пользователя и DN привязки соответственно при каждом LDAP поиске. + - `scope` — Область LDAP поиска. + - Возможные значения: `base`, `one_level`, `children`, `subtree` (по умолчанию). + - `search_filter` — шаблон, который используется для создания фильтра для каждого LDAP поиска. + - при формировании фильтра все подстроки `{user_name}`, `{bind_dn}` и `{base_dn}` в шаблоне будут заменяться на фактическое имя пользователя, DN привязки и базовый DN соответственно при каждом LDAP поиске. + - Обратите внимание, что специальные символы должны быть правильно экранированы в XML. + - `attribute` — имя атрибута, значение которого будет возвращаться LDAP поиском. + - `prefix` — префикс, который, как предполагается, будет находиться перед началом каждой строки в исходном списке строк, возвращаемых LDAP поиском. Префикс будет удален из исходных строк, а сами они будут рассматриваться как имена локальных ролей. По умолчанию: пустая строка. + +[Оригинальная статья](https://clickhouse.tech/docs/en/operations/external-authenticators/ldap) diff --git a/docs/ru/operations/index.md b/docs/ru/operations/index.md index 99dcf652891..88212e6804f 100644 --- a/docs/ru/operations/index.md +++ b/docs/ru/operations/index.md @@ -23,4 +23,3 @@ toc_title: "Эксплуатация" - [Настройки](settings/index.md#settings) - [Утилиты](utilities/index.md) -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/) diff --git a/docs/ru/operations/monitoring.md b/docs/ru/operations/monitoring.md index 7656b04d011..da51d27ded2 100644 --- a/docs/ru/operations/monitoring.md +++ b/docs/ru/operations/monitoring.md @@ -43,4 +43,3 @@ ClickHouse собирает: Для мониторинга серверов в кластерной конфигурации необходимо установить параметр [max_replica_delay_for_distributed_queries](settings/settings.md#settings-max_replica_delay_for_distributed_queries) и использовать HTTP ресурс `/replicas_status`. Если реплика доступна и не отстаёт от других реплик, то запрос к `/replicas_status` возвращает `200 OK`. Если реплика отстаёт, то запрос возвращает `503 HTTP_SERVICE_UNAVAILABLE`, включая информацию о размере отставания. -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/monitoring) diff --git a/docs/ru/operations/opentelemetry.md b/docs/ru/operations/opentelemetry.md index a60f1b3e085..073e7c67e9c 100644 --- a/docs/ru/operations/opentelemetry.md +++ b/docs/ru/operations/opentelemetry.md @@ -34,4 +34,3 @@ ClickHouse создает `trace spans` для каждого запроса и Теги или атрибуты сохраняются в виде двух параллельных массивов, содержащих ключи и значения. Для работы с ними используйте [ARRAY JOIN](../sql-reference/statements/select/array-join.md). 
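Например, развернуть пары «ключ-значение» атрибутов в отдельные строки можно примерно так (набросок; предполагается, что спаны записываются в таблицу `system.opentelemetry_span_log` со столбцами-массивами `attribute.names` и `attribute.values`):

``` sql
SELECT operation_name, name, value
FROM system.opentelemetry_span_log
ARRAY JOIN attribute.names AS name, attribute.values AS value
LIMIT 10;
```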
-[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/opentelemetry/) diff --git a/docs/ru/operations/quotas.md b/docs/ru/operations/quotas.md index 31f3a66a1c3..78966492f25 100644 --- a/docs/ru/operations/quotas.md +++ b/docs/ru/operations/quotas.md @@ -29,6 +29,8 @@ toc_title: "Квоты" <queries>0</queries> + <query_selects>0</query_selects> + <query_inserts>0</query_inserts> <errors>0</errors> <result_rows>0</result_rows> <read_rows>0</read_rows> @@ -48,6 +50,8 @@ toc_title: "Квоты" <duration>3600</duration> <queries>1000</queries> + <query_selects>100</query_selects> + <query_inserts>100</query_inserts> <errors>100</errors> <result_rows>1000000000</result_rows> <read_rows>100000000000</read_rows> @@ -58,6 +62,8 @@ toc_title: "Квоты" <duration>86400</duration> <queries>10000</queries> + <query_selects>10000</query_selects> + <query_inserts>10000</query_inserts> <errors>1000</errors> <result_rows>5000000000</result_rows> <read_rows>500000000000</read_rows> @@ -74,6 +80,10 @@ toc_title: "Квоты" `queries` - общее количество запросов; +`query_selects` – общее количество запросов `SELECT`; + +`query_inserts` – общее количество запросов `INSERT`; + `errors` - количество запросов, при выполнении которых было выкинуто исключение; `result_rows` - суммарное количество строк, отданных в виде результата; @@ -107,4 +117,3 @@ toc_title: "Квоты" При перезапуске сервера квоты сбрасываются. -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/quotas/) diff --git a/docs/ru/operations/server-configuration-parameters/index.md b/docs/ru/operations/server-configuration-parameters/index.md index f511955ebc4..503c5d32163 100644 --- a/docs/ru/operations/server-configuration-parameters/index.md +++ b/docs/ru/operations/server-configuration-parameters/index.md @@ -14,4 +14,3 @@ toc_title: "Введение" Перед изучением настроек ознакомьтесь с разделом [Конфигурационные файлы](../configuration-files.md#configuration_files), обратите внимание на использование подстановок (атрибуты `incl` и `optional`). -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/server_configuration_parameters/) diff --git a/docs/ru/operations/server-configuration-parameters/settings.md b/docs/ru/operations/server-configuration-parameters/settings.md index f46d899a3b7..abaf2a8f2da 100644 --- a/docs/ru/operations/server-configuration-parameters/settings.md +++ b/docs/ru/operations/server-configuration-parameters/settings.md @@ -101,6 +101,12 @@ ClickHouse проверяет условия для `min_part_size` и `min_part ``` +## database_atomic_delay_before_drop_table_sec {#database_atomic_delay_before_drop_table_sec} + +Устанавливает задержку перед удалением табличных данных, в секундах. Если запрос содержит модификатор `SYNC`, эта настройка игнорируется. + +Значение по умолчанию: `480` (8 минут). + ## default\_database {#default-database} База данных по умолчанию. @@ -285,7 +291,7 @@ ClickHouse проверяет условия для `min_part_size` и `min_part ## interserver_http_host {#interserver-http-host} -Имя хоста, которое могут использовать другие серверы для обращения к этому. +Имя хоста, которое могут использовать другие серверы для обращения к этому хосту. Если не указано, то определяется аналогично команде `hostname -f`. @@ -297,11 +303,36 @@ ClickHouse проверяет условия для `min_part_size` и `min_part <interserver_http_host>example.yandex.ru</interserver_http_host> ``` +## interserver_https_port {#interserver-https-port} + +Порт для обмена данными между репликами ClickHouse по протоколу `HTTPS`. + +**Пример** + +``` xml +<interserver_https_port>9010</interserver_https_port> +``` + +## interserver_https_host {#interserver-https-host} + +Имя хоста, которое могут использовать другие реплики для обращения к нему по протоколу `HTTPS`. + +**Пример** + +``` xml +<interserver_https_host>example.yandex.ru</interserver_https_host> +``` + + + ## interserver_http_credentials {#server-settings-interserver-http-credentials} Имя пользователя и пароль, использующиеся для аутентификации при [репликации](../../operations/server-configuration-parameters/settings.md) движками Replicated\*.
Это имя пользователя и пароль используются только для взаимодействия между репликами кластера и никак не связаны с аутентификацией клиентов ClickHouse. Сервер проверяет совпадение имени и пароля для соединяющихся с ним реплик, а также использует это же имя и пароль для соединения с другими репликами. Соответственно, эти имя и пароль должны быть прописаны одинаковыми для всех реплик кластера. По умолчанию аутентификация не используется. +!!! note "Примечание" + Эти учетные данные являются общими для обмена данными по протоколам `HTTP` и `HTTPS`. + Раздел содержит следующие параметры: - `user` — имя пользователя. @@ -384,7 +415,7 @@ Значения по умолчанию: при указанном `address` - `LOG_USER`, иначе - `LOG_DAEMON` - format - формат сообщений. Возможные значения - `bsd` и `syslog` -## send_crash_reports {#server_configuration_parameters-logger} +## send_crash_reports {#server_configuration_parameters-send_crash_reports} Настройки для отправки сообщений о сбоях в команду разработчиков ядра ClickHouse через [Sentry](https://sentry.io). Включение этих настроек, особенно в pre-production среде, может дать очень ценную информацию и поможет развитию ClickHouse. @@ -481,7 +512,15 @@ ClickHouse проверяет условия для `min_part_size` и `min_part ## max_concurrent_queries {#max-concurrent-queries} -Максимальное количество одновременно обрабатываемых запросов. +Определяет максимальное количество одновременно обрабатываемых запросов, связанных с таблицей семейства `MergeTree`. Запросы также могут быть ограничены настройками: [max_concurrent_queries_for_all_users](#max-concurrent-queries-for-all-users), [min_marks_to_honor_max_concurrent_queries](#min-marks-to-honor-max-concurrent-queries). + +!!! info "Примечание" + Параметры этих настроек могут быть изменены во время выполнения запросов и вступят в силу немедленно. Запросы, которые уже запущены, выполнятся без изменений. + +Возможные значения: + +- Положительное целое число. +- 0 — выключена. **Пример** @@ -509,6 +548,21 @@ ClickHouse проверяет условия для `min_part_size` и `min_part - [max_concurrent_queries](#max-concurrent-queries) +## min_marks_to_honor_max_concurrent_queries {#min-marks-to-honor-max-concurrent-queries} + +Определяет минимальное количество засечек, которое должен считывать запрос, чтобы к нему применялась настройка [max_concurrent_queries](#max-concurrent-queries). + +Возможные значения: + +- Положительное целое число. +- 0 — выключена. + +**Пример** + +``` xml +<min_marks_to_honor_max_concurrent_queries>10</min_marks_to_honor_max_concurrent_queries> +``` + ## max_connections {#max-connections} Максимальное количество входящих соединений. @@ -1159,5 +1213,3 @@ ClickHouse использует ZooKeeper для хранения метадан ``` - -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/server_configuration_parameters/settings/) diff --git a/docs/ru/operations/settings/constraints-on-settings.md b/docs/ru/operations/settings/constraints-on-settings.md index a4c1876574d..754d6cbba8a 100644 --- a/docs/ru/operations/settings/constraints-on-settings.md +++ b/docs/ru/operations/settings/constraints-on-settings.md @@ -71,4 +71,3 @@ Code: 452, e.displayText() = DB::Exception: Setting force_index_by_date should n **Примечание:** профиль с именем `default` обрабатывается специальным образом: все ограничения на изменение настроек из этого профиля становятся ограничениями по умолчанию и влияют на всех пользователей, кроме тех, где эти ограничения явно переопределены.
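Для наглядности — примерный вид профиля с секцией ограничений (набросок; значения и набор настроек выбраны для иллюстрации):

``` xml
<profiles>
    <default>
        <max_memory_usage>10000000000</max_memory_usage>
        <constraints>
            <max_memory_usage>
                <min>5000000000</min>
                <max>20000000000</max>
            </max_memory_usage>
            <force_index_by_date>
                <readonly/>
            </force_index_by_date>
        </constraints>
    </default>
</profiles>
```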
-[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/settings/constraints_on_settings/) diff --git a/docs/ru/operations/settings/index.md b/docs/ru/operations/settings/index.md index 2ef1d4730a3..050df975b47 100644 --- a/docs/ru/operations/settings/index.md +++ b/docs/ru/operations/settings/index.md @@ -54,4 +54,3 @@ SELECT getSetting('custom_a'); - [Конфигурационные параметры сервера](../../operations/server-configuration-parameters/settings.md) -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/settings/) diff --git a/docs/ru/operations/settings/merge-tree-settings.md b/docs/ru/operations/settings/merge-tree-settings.md index bfc0b0a2644..f9093d379e3 100644 --- a/docs/ru/operations/settings/merge-tree-settings.md +++ b/docs/ru/operations/settings/merge-tree-settings.md @@ -55,6 +55,26 @@ Eсли число кусков в партиции превышает знач ClickHouse искусственно выполняет `INSERT` дольше (добавляет ‘sleep’), чтобы фоновый механизм слияния успевал слиять куски быстрее, чем они добавляются. +## inactive_parts_to_throw_insert {#inactive-parts-to-throw-insert} + +Если число неактивных кусков в партиции превышает значение `inactive_parts_to_throw_insert`, `INSERT` прерывается с исключением «Too many inactive parts (N). Parts cleaning are processing significantly slower than inserts». + +Возможные значения: + +- Положительное целое число. + +Значение по умолчанию: 0 (не ограничено). + +## inactive_parts_to_delay_insert {#inactive-parts-to-delay-insert} + +Если число неактивных кусков в партиции больше или равно значению `inactive_parts_to_delay_insert`, `INSERT` искусственно замедляется. Это полезно, когда сервер не может быстро очистить неактивные куски. + +Возможные значения: + +- Положительное целое число. + +Значение по умолчанию: 0 (не ограничено). + ## max_delay_to_insert {#max-delay-to-insert} Величина в секундах, которая используется для расчета задержки `INSERT`, если число кусков в партиции превышает значение [parts_to_delay_insert](#parts-to-delay-insert). diff --git a/docs/ru/operations/settings/permissions-for-queries.md b/docs/ru/operations/settings/permissions-for-queries.md index 571f56fc3bd..8cd5a2570ca 100644 --- a/docs/ru/operations/settings/permissions-for-queries.md +++ b/docs/ru/operations/settings/permissions-for-queries.md @@ -59,4 +59,3 @@ toc_title: "Разрешения для запросов" 1 -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/settings/permissions_for_queries/) diff --git a/docs/ru/operations/settings/query-complexity.md b/docs/ru/operations/settings/query-complexity.md index c6e580a2209..c2e00302d18 100644 --- a/docs/ru/operations/settings/query-complexity.md +++ b/docs/ru/operations/settings/query-complexity.md @@ -314,4 +314,3 @@ FORMAT Null; > «Too many partitions for single INSERT block (more than» + toString(max_parts) + «). The limit is controlled by ‘max_partitions_per_insert_block’ setting. Large number of partitions is a common misconception. It will lead to severe negative performance impact, including slow server startup, slow INSERT queries and slow SELECT queries. Recommended total number of partitions for a table is under 1000..10000. Please note, that partitioning is not intended to speed up SELECT queries (ORDER BY key is sufficient to make range queries fast). 
Partitions are intended for data manipulation (DROP PARTITION, etc).» -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/settings/query_complexity/) diff --git a/docs/ru/operations/settings/settings-profiles.md b/docs/ru/operations/settings/settings-profiles.md index e8082919d89..d3b3d29db94 100644 --- a/docs/ru/operations/settings/settings-profiles.md +++ b/docs/ru/operations/settings/settings-profiles.md @@ -77,4 +77,3 @@ SET profile = 'web' Профиль `web` — обычный профиль, который может быть установлен с помощью запроса `SET` или параметра URL при запросе по HTTP. -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/settings/settings_profiles/) diff --git a/docs/ru/operations/settings/settings-users.md b/docs/ru/operations/settings/settings-users.md index 21cd78569df..6a10e518817 100644 --- a/docs/ru/operations/settings/settings-users.md +++ b/docs/ru/operations/settings/settings-users.md @@ -162,4 +162,3 @@ toc_title: "Настройки пользователей" Элемент `filter` может содержать любое выражение, возвращающее значение типа [UInt8](../../sql-reference/data-types/int-uint.md). Обычно он содержит сравнения и логические операторы. Строки `database_name.table1`, для которых фильтр возвращает 0, не выдаются пользователю. Фильтрация несовместима с операциями `PREWHERE` и отключает оптимизацию `WHERE→PREWHERE`. -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/settings/settings_users/) diff --git a/docs/ru/operations/settings/settings.md index 663821158bd..467d27dad32 100644 --- a/docs/ru/operations/settings/settings.md +++ b/docs/ru/operations/settings/settings.md @@ -119,6 +119,16 @@ ClickHouse применяет настройку в тех случаях, ко Значение по умолчанию: 0. +## http_max_uri_size {#http-max-uri-size} + +Устанавливает максимальную длину URI в HTTP-запросе. + +Возможные значения: + +- Положительное целое. + +Значение по умолчанию: 1048576. + ## send_progress_in_http_headers {#settings-send_progress_in_http_headers} Включает или отключает HTTP-заголовки `X-ClickHouse-Progress` в ответах `clickhouse-server`. @@ -134,7 +144,7 @@ ClickHouse применяет настройку в тех случаях, ко ## max_http_get_redirects {#setting-max_http_get_redirects} -Ограничивает максимальное количество переходов по редиректам в таблицах с движком [URL](../../engines/table-engines/special/url.md) при выполнении HTTP запросов методом GET. Настройка применяется для обоих типов таблиц: созданных запросом [CREATE TABLE](../../sql_reference/create/#create-table-query) и с помощью табличной функции [url](../../sql-reference/table-functions/url.md). +Ограничивает максимальное количество переходов по редиректам в таблицах с движком [URL](../../engines/table-engines/special/url.md) при выполнении HTTP запросов методом GET. Настройка применяется для обоих типов таблиц: созданных запросом [CREATE TABLE](../../sql-reference/statements/create/table.md#create-table-query) и с помощью табличной функции [url](../../sql-reference/table-functions/url.md).
Возможные значения: @@ -306,7 +316,7 @@ INSERT INTO test VALUES (lower('Hello')), (lower('world')), (lower('INSERT')), ( CREATE TABLE table_with_enum_column_for_tsv_insert (Id Int32,Value Enum('first' = 1, 'second' = 2)) ENGINE=Memory(); ``` -При включенной настройке `input_format_tsv_enum_as_number`: +При включенной настройке `input_format_tsv_enum_as_number`: ```sql SET input_format_tsv_enum_as_number = 1; @@ -556,7 +566,7 @@ ClickHouse может парсить только базовый формат `Y Возможные значения: -- 0 — Устаревшее поведение отключено. +- 0 — Устаревшее поведение отключено. - 1 — Устаревшее поведение включено. Значение по умолчанию: 0. @@ -759,6 +769,38 @@ log_queries_min_type='EXCEPTION_WHILE_PROCESSING' log_query_threads=1 ``` +## log_comment {#settings-log-comment} + +Задаёт значение поля `log_comment` таблицы [system.query_log](../system-tables/query_log.md) и текст комментария в логе сервера. + +Может быть использована для улучшения читабельности логов сервера. Кроме того, помогает быстро выделить связанные с тестом запросы из `system.query_log` после запуска [clickhouse-test](../../development/tests.md). + +Возможные значения: + +- Любая строка не длиннее [max_query_size](#settings-max_query_size). При превышении длины сервер сгенерирует исключение. + +Значение по умолчанию: пустая строка. + +**Пример** + +Запрос: + +``` sql +SET log_comment = 'log_comment test', log_queries = 1; +SELECT 1; +SYSTEM FLUSH LOGS; +SELECT type, query FROM system.query_log WHERE log_comment = 'log_comment test' AND event_date >= yesterday() ORDER BY event_time DESC LIMIT 2; +``` + +Результат: + +``` text +┌─type────────┬─query─────┐ +│ QueryStart │ SELECT 1; │ +│ QueryFinish │ SELECT 1; │ +└─────────────┴───────────┘ +``` + ## max_insert_block_size {#settings-max_insert_block_size} Формировать блоки указанного размера, при вставке в таблицу. @@ -812,8 +854,6 @@ log_query_threads=1 Значение по умолчанию: количество процессорных ядер без учёта Hyper-Threading. -Если на сервере обычно исполняется менее одного запроса SELECT одновременно, то выставите этот параметр в значение чуть меньше количества реальных процессорных ядер. - Для запросов, которые быстро завершаются из-за LIMIT-а, имеет смысл выставить max_threads поменьше. Например, если нужное количество записей находится в каждом блоке, то при max_threads = 8 будет считано 8 блоков, хотя достаточно было прочитать один. Чем меньше `max_threads`, тем меньше будет использоваться оперативки. @@ -1087,8 +1127,23 @@ load_balancing = round_robin ## max_parallel_replicas {#settings-max_parallel_replicas} Максимальное количество используемых реплик каждого шарда при выполнении запроса. -Для консистентности (чтобы получить разные части одного и того же разбиения), эта опция работает только при заданном ключе сэмплирования. -Отставание реплик не контролируется. + +Возможные значения: + +- Целое положительное число. + +**Дополнительная информация** + +Эта настройка полезна для реплицируемых таблиц с ключом сэмплирования. Запрос может обрабатываться быстрее, если он выполняется на нескольких серверах параллельно. Однако производительность обработки запроса, наоборот, может упасть в следующих ситуациях: + +- Позиция ключа сэмплирования в ключе партиционирования не позволяет выполнять эффективное сканирование. +- Добавление ключа сэмплирования в таблицу делает фильтрацию по другим столбцам менее эффективной. +- Ключ сэмплирования является выражением, которое сложно вычисляется. 
+- У распределения сетевых задержек в кластере длинный «хвост», из-за чего при параллельных запросах к нескольким серверам увеличивается среднее время задержки. + +!!! warning "Предупреждение" + Параллельное выполнение запроса может привести к неверному результату, если в запросе есть объединение или подзапросы и при этом таблицы не удовлетворяют определенным требованиям. Подробности смотрите в разделе [Распределенные подзапросы и max_parallel_replicas](../../sql-reference/operators/in.md#max_parallel_replica-subqueries). + ## compile {#compile} @@ -1236,7 +1291,7 @@ SELECT area/period FROM account_orders FORMAT JSON; CREATE TABLE table_with_enum_column_for_csv_insert (Id Int32,Value Enum('first' = 1, 'second' = 2)) ENGINE=Memory(); ``` -При включенной настройке `input_format_csv_enum_as_number`: +При включенной настройке `input_format_csv_enum_as_number`: ```sql SET input_format_csv_enum_as_number = 1; @@ -1731,7 +1786,7 @@ ClickHouse генерирует исключение Включает или отключает режим синхронного добавления данных в распределенные таблицы (таблицы с движком [Distributed](../../engines/table-engines/special/distributed.md#distributed)). -По умолчанию ClickHouse вставляет данные в распределённую таблицу в асинхронном режиме. Если `insert_distributed_sync=1`, то данные вставляются сихронно, а запрос `INSERT` считается выполненным успешно, когда данные записаны на все шарды (по крайней мере на одну реплику для каждого шарда, если `internal_replication = true`). +По умолчанию ClickHouse вставляет данные в распределённую таблицу в асинхронном режиме. Если `insert_distributed_sync=1`, то данные вставляются синхронно, а запрос `INSERT` считается выполненным успешно, когда данные записаны на все шарды (по крайней мере на одну реплику для каждого шарда, если `internal_replication = true`). Возможные значения: @@ -1744,6 +1799,67 @@ ClickHouse генерирует исключение - [Движок Distributed](../../engines/table-engines/special/distributed.md#distributed) - [Управление распределёнными таблицами](../../sql-reference/statements/system.md#query-language-system-distributed) + +## insert_distributed_one_random_shard {#insert_distributed_one_random_shard} + +Включает или отключает режим вставки данных в таблицу [Distributed](../../engines/table-engines/special/distributed.md#distributed) в случайный шард при отсутствии ключа шардирования. + +По умолчанию при вставке данных в `Distributed` таблицу с несколькими шардами и при отсутствии ключа шардирования сервер ClickHouse будет отклонять любой запрос на вставку данных. Когда `insert_distributed_one_random_shard = 1`, вставки принимаются, а данные записываются в случайный шард. + +Возможные значения: + +- 0 — если у таблицы несколько шардов, но ключ шардирования отсутствует, вставка данных отклоняется. +- 1 — если ключ шардирования отсутствует, то вставка данных осуществляется в случайный шард среди всех доступных шардов. + +Значение по умолчанию: `0`. + +## insert_shard_id {#insert_shard_id} + +Если не `0`, указывает, в какой шард [Distributed](../../engines/table-engines/special/distributed.md#distributed) таблицы данные будут вставлены синхронно. + +Если значение настройки `insert_shard_id` указано неверно, сервер выдаст ошибку. + +Количество шардов кластера `requested_cluster` можно узнать из конфигурации сервера либо с помощью запроса: + +``` sql +SELECT uniq(shard_num) FROM system.clusters WHERE cluster = 'requested_cluster'; +``` + +Возможные значения: + +- 0 — выключено.
+- Любое число от `1` до `shards_num` соответствующей [Distributed](../../engines/table-engines/special/distributed.md#distributed) таблицы. + +Значение по умолчанию: `0`. + +**Пример** + +Запрос: + +```sql +CREATE TABLE x AS system.numbers ENGINE = MergeTree ORDER BY number; +CREATE TABLE x_dist AS x ENGINE = Distributed('test_cluster_two_shards_localhost', currentDatabase(), x); +INSERT INTO x_dist SELECT * FROM numbers(5) SETTINGS insert_shard_id = 1; +SELECT * FROM x_dist ORDER BY number ASC; +``` + +Результат: + +``` text +┌─number─┐ +│ 0 │ +│ 0 │ +│ 1 │ +│ 1 │ +│ 2 │ +│ 2 │ +│ 3 │ +│ 3 │ +│ 4 │ +│ 4 │ +└────────┘ +``` + ## validate_polygons {#validate_polygons} Включает или отключает генерирование исключения в функции [pointInPolygon](../../sql-reference/functions/geo/index.md#pointinpolygon), если многоугольник самопересекающийся или самокасающийся. @@ -2067,11 +2183,11 @@ SELECT * FROM a; ## ttl_only_drop_parts {#ttl_only_drop_parts} -Для таблиц [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) включает или отключает возможность полного удаления кусков данных, в которых все записи устарели. +Для таблиц [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) включает или отключает возможность полного удаления кусков данных, в которых все записи устарели. -Когда настройка `ttl_only_drop_parts` отключена (т.е. по умолчанию), сервер лишь удаляет устаревшие записи в соответствии с их временем жизни (TTL). +Когда настройка `ttl_only_drop_parts` отключена (т.е. по умолчанию), сервер лишь удаляет устаревшие записи в соответствии с их временем жизни (TTL). -Когда настройка `ttl_only_drop_parts` включена, сервер целиком удаляет куски данных, в которых все записи устарели. +Когда настройка `ttl_only_drop_parts` включена, сервер целиком удаляет куски данных, в которых все записи устарели. Удаление целых кусков данных вместо удаления отдельных записей позволяет устанавливать меньший таймаут `merge_with_ttl_timeout` и уменьшает нагрузку на сервер, что способствует росту производительности. @@ -2082,18 +2198,18 @@ SELECT * FROM a; Значение по умолчанию: `0`. -**См. также** +**См. также** - [Секции и настройки запроса CREATE TABLE](../../engines/table-engines/mergetree-family/mergetree.md#mergetree-query-clauses) (настройка `merge_with_ttl_timeout`) - [Table TTL](../../engines/table-engines/mergetree-family/mergetree.md#mergetree-table-ttl) ## output_format_pretty_max_value_width {#output_format_pretty_max_value_width} -Ограничивает длину значения, выводимого в формате [Pretty](../../interfaces/formats.md#pretty). Если значение длиннее указанного количества символов, оно обрезается. +Ограничивает длину значения, выводимого в формате [Pretty](../../interfaces/formats.md#pretty). Если значение длиннее указанного количества символов, оно обрезается. Возможные значения: -- Положительное целое число. +- Положительное целое число. - 0 — значение обрезается полностью. Значение по умолчанию: `10000` символов. @@ -2242,17 +2358,17 @@ SELECT * FROM system.events WHERE event='QueryMemoryLimitExceeded'; Включает или отключает сохранение типа `Nullable` для аргумента функции [CAST](../../sql-reference/functions/type-conversion-functions.md#type_conversion_function-cast). -Если настройка включена, то когда в функцию `CAST` передается аргумент с типом `Nullable`, функция возвращает результат, также преобразованный к типу `Nullable`. -Если настройка отключена, то функция `CAST` всегда возвращает результат строго указанного типа. 
+Если настройка включена, то когда в функцию `CAST` передается аргумент с типом `Nullable`, функция возвращает результат, также преобразованный к типу `Nullable`. +Если настройка отключена, то функция `CAST` всегда возвращает результат строго указанного типа. Возможные значения: - 0 — функция `CAST` преобразует аргумент строго к указанному типу. -- 1 — если аргумент имеет тип `Nullable`, то функция `CAST` преобразует его к типу `Nullable` для указанного типа. +- 1 — если аргумент имеет тип `Nullable`, то функция `CAST` преобразует его к типу `Nullable` для указанного типа. Значение по умолчанию: `0`. -**Примеры** +**Примеры** Запрос возвращает аргумент, преобразованный строго к указанному типу: @@ -2284,9 +2400,9 @@ SELECT CAST(toNullable(toInt32(0)) AS Int32) as x, toTypeName(x); └───┴───────────────────────────────────────────────────┘ ``` -**См. также** +**См. также** -- Функция [CAST](../../sql-reference/functions/type-conversion-functions.md#type_conversion_function-cast) +- Функция [CAST](../../sql-reference/functions/type-conversion-functions.md#type_conversion_function-cast) ## persistent {#persistent} @@ -2364,7 +2480,7 @@ SELECT number FROM numbers(3) FORMAT JSONEachRow; [ {"number":"0"}, {"number":"1"}, -{"number":"2"} +{"number":"2"} ] ``` @@ -2552,15 +2668,194 @@ SELECT * FROM test2; Обратите внимание на то, что эта настройка влияет на поведение [материализованных представлений](../../sql-reference/statements/create/view.md#materialized) и БД [MaterializeMySQL](../../engines/database-engines/materialize-mysql.md). +## engine_file_empty_if_not_exists {#engine-file-empty_if-not-exists} + +Включает или отключает возможность выполнять запрос `SELECT` к таблице на движке [File](../../engines/table-engines/special/file.md), не содержащей файл. + +Возможные значения: +- 0 — запрос `SELECT` генерирует исключение. +- 1 — запрос `SELECT` возвращает пустой результат. + +Значение по умолчанию: `0`. + +## engine_file_truncate_on_insert {#engine-file-truncate-on-insert} + +Включает или выключает удаление данных из таблицы до вставки в таблицу на движке [File](../../engines/table-engines/special/file.md). + +Возможные значения: +- 0 — запрос `INSERT` добавляет данные в конец файла после существующих. +- 1 — `INSERT` удаляет имеющиеся в файле данные и замещает их новыми. + +Значение по умолчанию: `0`. + ## allow_experimental_geo_types {#allow-experimental-geo-types} Разрешает использование экспериментальных типов данных для работы с [географическими структурами](../../sql-reference/data-types/geo.md). Возможные значения: - -- 0 — Использование типов данных для работы с географическими структурами не поддерживается. -- 1 — Использование типов данных для работы с географическими структурами поддерживается. +- 0 — использование типов данных для работы с географическими структурами не поддерживается. +- 1 — использование типов данных для работы с географическими структурами поддерживается. Значение по умолчанию: `0`. +## database_atomic_wait_for_drop_and_detach_synchronously {#database_atomic_wait_for_drop_and_detach_synchronously} + +Добавляет модификатор `SYNC` ко всем запросам `DROP` и `DETACH`. + +Возможные значения: + +- 0 — Запросы будут выполняться с задержкой. +- 1 — Запросы будут выполняться без задержки. + +Значение по умолчанию: `0`. + +## show_table_uuid_in_table_create_query_if_not_nil {#show_table_uuid_in_table_create_query_if_not_nil} + +Устанавливает отображение запроса `SHOW TABLE`. + +Возможные значения: + +- 0 — Запрос будет отображаться без UUID таблицы. 
+- 1 — Запрос будет отображаться с UUID таблицы. + +Значение по умолчанию: `0`. + +## allow_experimental_live_view {#allow-experimental-live-view} + +Включает экспериментальную возможность использования [LIVE-представлений](../../sql-reference/statements/create/view.md#live-view). + +Возможные значения: +- 0 — живые представления не поддерживаются. +- 1 — живые представления поддерживаются. + +Значение по умолчанию: `0`. + +## live_view_heartbeat_interval {#live-view-heartbeat-interval} + +Задает интервал в секундах для периодической проверки существования [LIVE VIEW](../../sql-reference/statements/create/view.md#live-view). + +Значение по умолчанию: `15`. + +## max_live_view_insert_blocks_before_refresh {#max-live-view-insert-blocks-before-refresh} + +Задает наибольшее число вставок, после которых запрос на формирование [LIVE VIEW](../../sql-reference/statements/create/view.md#live-view) исполняется снова. + +Значение по умолчанию: `64`. + +## temporary_live_view_timeout {#temporary-live-view-timeout} + +Задает время в секундах, после которого [LIVE VIEW](../../sql-reference/statements/create/view.md#live-view) удаляется. + +Значение по умолчанию: `5`. + +## periodic_live_view_refresh {#periodic-live-view-refresh} + +Задает время в секундах, по истечении которого [LIVE VIEW](../../sql-reference/statements/create/view.md#live-view) с установленным автообновлением обновляется. + +Значение по умолчанию: `60`. + +## check_query_single_value_result {#check_query_single_value_result} + +Определяет уровень детализации результата для запросов [CHECK TABLE](../../sql-reference/statements/check-table.md#checking-mergetree-tables) для таблиц семейства `MergeTree`. + +Возможные значения: + +- 0 — запрос возвращает статус каждого куска данных таблицы. +- 1 — запрос возвращает статус таблицы в целом. + +Значение по умолчанию: `0`. + +## prefer_column_name_to_alias {#prefer-column-name-to-alias} + +Включает или отключает замену названий столбцов на синонимы в выражениях и секциях запросов, см. [Примечания по использованию синонимов](../../sql-reference/syntax.md#syntax-expression_aliases). Включите эту настройку, чтобы синтаксис синонимов в ClickHouse был более совместим с большинством других СУБД. + +Возможные значения: + +- 0 — синоним подставляется вместо имени столбца. +- 1 — синоним не подставляется вместо имени столбца. + +Значение по умолчанию: `0`. + +**Пример** + +Какие изменения привносит включение и выключение настройки: + +Запрос: + +```sql +SET prefer_column_name_to_alias = 0; +SELECT avg(number) AS number, max(number) FROM numbers(10); +``` + +Результат: + +```text +Received exception from server (version 21.5.1): +Code: 184. DB::Exception: Received from localhost:9000. DB::Exception: Aggregate function avg(number) is found inside another aggregate function in query: While processing avg(number) AS number. +``` + +Запрос: + +```sql +SET prefer_column_name_to_alias = 1; +SELECT avg(number) AS number, max(number) FROM numbers(10); +``` + +Результат: + +```text +┌─number─┬─max(number)─┐ +│ 4.5 │ 9 │ +└────────┴─────────────┘ +``` + +## limit {#limit} + +Устанавливает максимальное количество строк, возвращаемых запросом. Ограничивает сверху значение, установленное в запросе в секции [LIMIT](../../sql-reference/statements/select/limit.md#limit-clause). + +Возможные значения: + +- 0 — число строк не ограничено. +- Положительное целое число. + +Значение по умолчанию: `0`. 
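Примерная иллюстрация действия настройки (набросок):

``` sql
SET limit = 5;
-- Вернётся не более 5 строк, даже если секция LIMIT в запросе не указана
SELECT * FROM numbers(100);
```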
+ +## offset {#offset} + +Устанавливает количество строк, которые необходимо пропустить перед началом возврата строк из запроса. Суммируется со значением, установленным в запросе в секции [OFFSET](../../sql-reference/statements/select/offset.md#offset-fetch). + +Возможные значения: + +- 0 — строки не пропускаются. +- Положительное целое число. + +Значение по умолчанию: `0`. + +**Пример** + +Исходная таблица: + +``` sql +CREATE TABLE test (i UInt64) ENGINE = MergeTree() ORDER BY i; +INSERT INTO test SELECT number FROM numbers(500); +``` + +Запрос: + +``` sql +SET limit = 5; +SET offset = 7; +SELECT * FROM test LIMIT 10 OFFSET 100; +``` + +Результат: + +``` text +┌───i─┐ +│ 107 │ +│ 108 │ +│ 109 │ +└─────┘ +``` + [Оригинальная статья](https://clickhouse.tech/docs/ru/operations/settings/settings/) diff --git a/docs/ru/operations/system-tables/asynchronous_metric_log.md b/docs/ru/operations/system-tables/asynchronous_metric_log.md index 2fe617e48af..979b63f0cc8 100644 --- a/docs/ru/operations/system-tables/asynchronous_metric_log.md +++ b/docs/ru/operations/system-tables/asynchronous_metric_log.md @@ -34,4 +34,3 @@ SELECT * FROM system.asynchronous_metric_log LIMIT 10 - [system.asynchronous_metrics](#system_tables-asynchronous_metrics) — Содержит метрики, которые периодически вычисляются в фоновом режиме. - [system.metric_log](#system_tables-metric_log) — таблица фиксирующая историю значений метрик из `system.metrics` и `system.events`. -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/asynchronous_metric_log) diff --git a/docs/ru/operations/system-tables/asynchronous_metrics.md b/docs/ru/operations/system-tables/asynchronous_metrics.md index 5ff010bc79f..9d12a119c43 100644 --- a/docs/ru/operations/system-tables/asynchronous_metrics.md +++ b/docs/ru/operations/system-tables/asynchronous_metrics.md @@ -35,5 +35,4 @@ SELECT * FROM system.asynchronous_metrics LIMIT 10 - [system.events](#system_tables-events) — таблица с количеством произошедших событий. - [system.metric_log](#system_tables-metric_log) — таблица фиксирующая историю значений метрик из `system.metrics` и `system.events`. - [Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/asynchronous_metrics) \ No newline at end of file diff --git a/docs/ru/operations/system-tables/clusters.md b/docs/ru/operations/system-tables/clusters.md index 9cf84ea5f02..6bfeb8aa818 100644 --- a/docs/ru/operations/system-tables/clusters.md +++ b/docs/ru/operations/system-tables/clusters.md @@ -4,13 +4,68 @@ Столбцы: -- `cluster` (String) — имя кластера. -- `shard_num` (UInt32) — номер шарда в кластере, начиная с 1. -- `shard_weight` (UInt32) — относительный вес шарда при записи данных. -- `replica_num` (UInt32) — номер реплики в шарде, начиная с 1. -- `host_name` (String) — хост, указанный в конфигурации. -- `host_address` (String) — TIP-адрес хоста, полученный из DNS. -- `port` (UInt16) — порт, на который обращаться для соединения с сервером. -- `user` (String) — имя пользователя, которого использовать для соединения с сервером. +- `cluster` ([String](../../sql-reference/data-types/string.md)) — имя кластера. +- `shard_num` ([UInt32](../../sql-reference/data-types/int-uint.md)) — номер шарда в кластере, начиная с 1. +- `shard_weight` ([UInt32](../../sql-reference/data-types/int-uint.md)) — относительный вес шарда при записи данных. +- `replica_num` ([UInt32](../../sql-reference/data-types/int-uint.md)) — номер реплики в шарде, начиная с 1. 
+- `host_name` ([String](../../sql-reference/data-types/string.md)) — хост, указанный в конфигурации. +- `host_address` ([String](../../sql-reference/data-types/string.md)) — IP-адрес хоста, полученный из DNS. +- `port` ([UInt16](../../sql-reference/data-types/int-uint.md)) — порт для соединения с сервером. +- `is_local` ([UInt8](../../sql-reference/data-types/int-uint.md)) — флаг, показывающий, является ли хост локальным. +- `user` ([String](../../sql-reference/data-types/string.md)) — имя пользователя для соединения с сервером. +- `default_database` ([String](../../sql-reference/data-types/string.md)) — имя базы данных по умолчанию. +- `errors_count` ([UInt32](../../sql-reference/data-types/int-uint.md)) — количество неудачных попыток хоста получить доступ к реплике. +- `slowdowns_count` ([UInt32](../../sql-reference/data-types/int-uint.md)) — количество замен реплики из-за долгого отсутствия ответа от нее при установке соединения (при хеджированных запросах). +- `estimated_recovery_time` ([UInt32](../../sql-reference/data-types/int-uint.md)) — количество секунд до момента, когда количество ошибок будет обнулено и реплика станет доступной. + +**Пример** + +Запрос: + +```sql +SELECT * FROM system.clusters LIMIT 2 FORMAT Vertical; +``` + +Результат: + +```text +Row 1: +────── +cluster: test_cluster_two_shards +shard_num: 1 +shard_weight: 1 +replica_num: 1 +host_name: 127.0.0.1 +host_address: 127.0.0.1 +port: 9000 +is_local: 1 +user: default +default_database: +errors_count: 0 +slowdowns_count: 0 +estimated_recovery_time: 0 + +Row 2: +────── +cluster: test_cluster_two_shards +shard_num: 2 +shard_weight: 1 +replica_num: 1 +host_name: 127.0.0.2 +host_address: 127.0.0.2 +port: 9000 +is_local: 0 +user: default +default_database: +errors_count: 0 +slowdowns_count: 0 +estimated_recovery_time: 0 +``` + +**Смотрите также** + +- [Table engine Distributed](../../engines/table-engines/special/distributed.md) +- [Настройка distributed_replica_error_cap](../../operations/settings/settings.md#settings-distributed_replica_error_cap) +- [Настройка distributed_replica_error_half_life](../../operations/settings/settings.md#settings-distributed_replica_error_half_life) [Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/clusters) diff --git a/docs/ru/operations/system-tables/columns.md b/docs/ru/operations/system-tables/columns.md index 8cb9408e7d8..b8a0aef2299 100644 --- a/docs/ru/operations/system-tables/columns.md +++ b/docs/ru/operations/system-tables/columns.md @@ -4,7 +4,9 @@ С помощью этой таблицы можно получить информацию аналогично запросу [DESCRIBE TABLE](../../sql-reference/statements/misc.md#misc-describe-table), но для многих таблиц сразу. -Таблица `system.columns` содержит столбцы (тип столбца указан в скобках): +Колонки [временных таблиц](../../sql-reference/statements/create/table.md#temporary-tables) содержатся в `system.columns` только в тех сессиях, в которых эти таблицы были созданы. Поле `database` у таких колонок пустое. + +Столбцы: - `database` ([String](../../sql-reference/data-types/string.md)) — имя базы данных. - `table` ([String](../../sql-reference/data-types/string.md)) — имя таблицы. @@ -23,4 +25,46 @@ - `is_in_sampling_key` ([UInt8](../../sql-reference/data-types/int-uint.md)) — флаг, показывающий включение столбца в ключ выборки. - `compression_codec` ([String](../../sql-reference/data-types/string.md)) — имя кодека сжатия.
-[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/columns) +**Пример** + +```sql +SELECT * FROM system.columns LIMIT 2 FORMAT Vertical; +``` + +```text +Row 1: +────── +database: system +table: aggregate_function_combinators +name: name +type: String +default_kind: +default_expression: +data_compressed_bytes: 0 +data_uncompressed_bytes: 0 +marks_bytes: 0 +comment: +is_in_partition_key: 0 +is_in_sorting_key: 0 +is_in_primary_key: 0 +is_in_sampling_key: 0 +compression_codec: + +Row 2: +────── +database: system +table: aggregate_function_combinators +name: is_internal +type: UInt8 +default_kind: +default_expression: +data_compressed_bytes: 0 +data_uncompressed_bytes: 0 +marks_bytes: 0 +comment: +is_in_partition_key: 0 +is_in_sorting_key: 0 +is_in_primary_key: 0 +is_in_sampling_key: 0 +compression_codec: +``` diff --git a/docs/ru/operations/system-tables/contributors.md b/docs/ru/operations/system-tables/contributors.md index 64c9a863bc3..6e11219e044 100644 --- a/docs/ru/operations/system-tables/contributors.md +++ b/docs/ru/operations/system-tables/contributors.md @@ -39,4 +39,3 @@ SELECT * FROM system.contributors WHERE name='Olga Khvostikova' └──────────────────┘ ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/contributors) diff --git a/docs/ru/operations/system-tables/current-roles.md b/docs/ru/operations/system-tables/current-roles.md index a948b7b1e97..42ed4260fde 100644 --- a/docs/ru/operations/system-tables/current-roles.md +++ b/docs/ru/operations/system-tables/current-roles.md @@ -8,4 +8,3 @@ - `with_admin_option` ([UInt8](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Флаг, который показывает, обладает ли `current_role` роль привилегией `ADMIN OPTION`. - `is_default` ([UInt8](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Флаг, который показывает, является ли `current_role` ролью по умолчанию. - [Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/current-roles) diff --git a/docs/ru/operations/system-tables/data_type_families.md b/docs/ru/operations/system-tables/data_type_families.md index d8d0b5e1074..ba4e5e64ec3 100644 --- a/docs/ru/operations/system-tables/data_type_families.md +++ b/docs/ru/operations/system-tables/data_type_families.md @@ -1,6 +1,6 @@ # system.data_type_families {#system_tables-data_type_families} -Содержит информацию о поддерживаемых [типах данных](../../sql-reference/data-types/). +Содержит информацию о поддерживаемых [типах данных](../../sql-reference/data-types/index.md). Столбцы: @@ -33,4 +33,3 @@ SELECT * FROM system.data_type_families WHERE alias_to = 'String' - [Синтаксис](../../sql-reference/syntax.md) — поддерживаемый SQL синтаксис. -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/data_type_families) diff --git a/docs/ru/operations/system-tables/databases.md b/docs/ru/operations/system-tables/databases.md index 00a4b543717..026f49c0d5d 100644 --- a/docs/ru/operations/system-tables/databases.md +++ b/docs/ru/operations/system-tables/databases.md @@ -4,4 +4,3 @@ Для каждой базы данных, о которой знает сервер, будет присутствовать соответствующая запись в таблице. Эта системная таблица используется для реализации запроса `SHOW DATABASES`. 
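Например, аналог запроса `SHOW DATABASES` через системную таблицу может выглядеть так (набросок):

``` sql
SELECT name, engine FROM system.databases;
```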
-[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/databases) \ No newline at end of file diff --git a/docs/ru/operations/system-tables/detached_parts.md b/docs/ru/operations/system-tables/detached_parts.md index c59daa3985c..7abed6500aa 100644 --- a/docs/ru/operations/system-tables/detached_parts.md +++ b/docs/ru/operations/system-tables/detached_parts.md @@ -1,7 +1,6 @@ # system.detached_parts {#system_tables-detached_parts} Содержит информацию об отсоединённых кусках таблиц семейства [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md). Столбец `reason` содержит причину, по которой кусок был отсоединён. Для кусков, отсоединённых пользователем, `reason` содержит пустую строку. -Такие куски могут быть присоединены с помощью [ALTER TABLE ATTACH PARTITION\|PART](../../sql_reference/alter/#alter_attach-partition). Остальные столбцы описаны в [system.parts](#system_tables-parts). -Если имя куска некорректно, значения некоторых столбцов могут быть `NULL`. Такие куски могут быть удалены с помощью [ALTER TABLE DROP DETACHED PART](../../sql_reference/alter/#alter_drop-detached). +Такие куски могут быть присоединены с помощью [ALTER TABLE ATTACH PARTITION|PART](../../sql-reference/statements/alter/index.md#alter_attach-partition). Остальные столбцы описаны в [system.parts](#system_tables-parts). +Если имя куска некорректно, значения некоторых столбцов могут быть `NULL`. Такие куски могут быть удалены с помощью [ALTER TABLE DROP DETACHED PART](../../sql-reference/statements/alter/index.md#alter_drop-detached). -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/detached_parts) diff --git a/docs/ru/operations/system-tables/dictionaries.md b/docs/ru/operations/system-tables/dictionaries.md index cd1a4acab72..6a49904aae9 100644 --- a/docs/ru/operations/system-tables/dictionaries.md +++ b/docs/ru/operations/system-tables/dictionaries.md @@ -59,4 +59,3 @@ SELECT * FROM system.dictionaries └──────────┴──────┴────────┴─────────────┴──────┴────────┴──────────────────────────────────────┴─────────────────────┴─────────────────┴─────────────┴──────────┴───────────────┴───────────────────────┴────────────────────────────┴──────────────┴──────────────┴─────────────────────┴──────────────────────────────┘───────────────────────┴────────────────┘ ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/dictionaries) \ No newline at end of file diff --git a/docs/ru/operations/system-tables/disks.md b/docs/ru/operations/system-tables/disks.md index 2832e7a1a32..186dfbd7819 100644 --- a/docs/ru/operations/system-tables/disks.md +++ b/docs/ru/operations/system-tables/disks.md @@ -10,4 +10,3 @@ Содержит информацию о дисках, заданных в [ко - `total_space` ([UInt64](../../sql-reference/data-types/int-uint.md)) — объём диска в байтах. - `keep_free_space` ([UInt64](../../sql-reference/data-types/int-uint.md)) — место, которое должно остаться свободным на диске в байтах. Задаётся значением параметра `keep_free_space_bytes` конфигурации дисков. -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/disks) diff --git a/docs/ru/operations/system-tables/distributed_ddl_queue.md b/docs/ru/operations/system-tables/distributed_ddl_queue.md index 71be69e98d7..99d92574a0b 100644 --- a/docs/ru/operations/system-tables/distributed_ddl_queue.md +++ b/docs/ru/operations/system-tables/distributed_ddl_queue.md @@ -61,5 +61,4 @@ exception_code: ZOK 2 rows in set. Elapsed: 0.025 sec.
``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/distributed_ddl_queuedistributed_ddl_queue.md) \ No newline at end of file diff --git a/docs/ru/operations/system-tables/distribution_queue.md b/docs/ru/operations/system-tables/distribution_queue.md index 18346b34e04..5b811ab2be8 100644 --- a/docs/ru/operations/system-tables/distribution_queue.md +++ b/docs/ru/operations/system-tables/distribution_queue.md @@ -43,4 +43,3 @@ last_exception: - [Движок таблиц Distributed](../../engines/table-engines/special/distributed.md) -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/distribution_queue) diff --git a/docs/ru/operations/system-tables/enabled-roles.md b/docs/ru/operations/system-tables/enabled-roles.md index cd3b0846718..a3f5ba179b3 100644 --- a/docs/ru/operations/system-tables/enabled-roles.md +++ b/docs/ru/operations/system-tables/enabled-roles.md @@ -9,4 +9,3 @@ - `is_current` ([UInt8](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Флаг, который показывает, является ли `enabled_role` текущей ролью текущего пользователя. - `is_default` ([UInt8](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Флаг, который показывает, является ли `enabled_role` ролью по умолчанию. -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/enabled-roles) \ No newline at end of file diff --git a/docs/ru/operations/system-tables/events.md b/docs/ru/operations/system-tables/events.md index 0a48617bb5c..c05be74eea6 100644 --- a/docs/ru/operations/system-tables/events.md +++ b/docs/ru/operations/system-tables/events.md @@ -31,4 +31,3 @@ SELECT * FROM system.events LIMIT 5 - [system.metric_log](#system_tables-metric_log) — таблица фиксирующая историю значений метрик из `system.metrics` и `system.events`. - [Мониторинг](../../operations/monitoring.md) — основы мониторинга в ClickHouse. -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/events) diff --git a/docs/ru/operations/system-tables/functions.md b/docs/ru/operations/system-tables/functions.md index c51adb2c109..de752e2018c 100644 --- a/docs/ru/operations/system-tables/functions.md +++ b/docs/ru/operations/system-tables/functions.md @@ -7,4 +7,3 @@ - `name` (`String`) – Имя функции. - `is_aggregate` (`UInt8`) – Признак, является ли функция агрегатной. -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/functions) diff --git a/docs/ru/operations/system-tables/grants.md b/docs/ru/operations/system-tables/grants.md index 58d8a9e1e06..76a014f62dd 100644 --- a/docs/ru/operations/system-tables/grants.md +++ b/docs/ru/operations/system-tables/grants.md @@ -21,4 +21,3 @@ - `grant_option` ([UInt8](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Разрешение предоставлено с опцией `WITH GRANT OPTION`, подробнее см. [GRANT](../../sql-reference/statements/grant.md#grant-privigele-syntax). -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/grants) diff --git a/docs/ru/operations/system-tables/graphite_retentions.md b/docs/ru/operations/system-tables/graphite_retentions.md index 66fca7ba299..1098a29aac6 100644 --- a/docs/ru/operations/system-tables/graphite_retentions.md +++ b/docs/ru/operations/system-tables/graphite_retentions.md @@ -14,4 +14,3 @@ - `Tables.database` (Array(String)) - Массив имён баз данных таблиц, использующих параметр `config_name`. - `Tables.table` (Array(String)) - Массив имён таблиц, использующих параметр `config_name`. 
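To make the `system.graphite_retentions` columns above concrete, here is a minimal sketch that maps each rollup configuration to the tables using it; it assumes at least one `GraphiteMergeTree` table is configured, otherwise the result is empty:

```sql
-- Which tables use each graphite rollup configuration.
SELECT config_name, `Tables.database`, `Tables.table`
FROM system.graphite_retentions;
```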
-[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/graphite_retentions) diff --git a/docs/ru/operations/system-tables/index.md b/docs/ru/operations/system-tables/index.md index e4b6f5beb9d..fce93f33a27 100644 --- a/docs/ru/operations/system-tables/index.md +++ b/docs/ru/operations/system-tables/index.md @@ -27,7 +27,7 @@ toc_title: "Системные таблицы" - `database` — база данных, к которой принадлежит системная таблица. Эта опция на текущий момент устарела. Все системные таблицы находятся в базе данных `system`. - `table` — таблица для добавления данных. - `partition_by` — [ключ партиционирования](../../engines/table-engines/mergetree-family/custom-partitioning-key.md). -- `ttl` — [время жизни](../../sql-reference/statements/alter/ttl.md) таблицы. +- `ttl` — [время жизни](../../sql-reference/statements/alter/ttl.md) записей в таблице. - `flush_interval_milliseconds` — интервал сброса данных на диск, в миллисекундах. - `engine` — полное имя движка (начиная с `ENGINE =` ) с параметрами. Эта опция противоречит `partition_by` и `ttl`. Если указать оба параметра вместе, сервер вернет ошибку и завершит работу. @@ -70,4 +70,3 @@ toc_title: "Системные таблицы" - `OSReadBytes` - `OSWriteBytes` -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system-tables/) diff --git a/docs/ru/operations/system-tables/licenses.md b/docs/ru/operations/system-tables/licenses.md index a6a49d5e0be..598da1e72ee 100644 --- a/docs/ru/operations/system-tables/licenses.md +++ b/docs/ru/operations/system-tables/licenses.md @@ -36,4 +36,3 @@ SELECT library_name, license_type, license_path FROM system.licenses LIMIT 15 ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/licenses) diff --git a/docs/ru/operations/system-tables/merges.md b/docs/ru/operations/system-tables/merges.md index 021a95981e6..f48f0d1ac27 100644 --- a/docs/ru/operations/system-tables/merges.md +++ b/docs/ru/operations/system-tables/merges.md @@ -18,4 +18,3 @@ - `bytes_written_uncompressed UInt64` — Количество записанных байт, несжатых. - `rows_written UInt64` — Количество записанных строк. -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/merges) diff --git a/docs/ru/operations/system-tables/metric_log.md b/docs/ru/operations/system-tables/metric_log.md index 2458c93da59..5160b32927b 100644 --- a/docs/ru/operations/system-tables/metric_log.md +++ b/docs/ru/operations/system-tables/metric_log.md @@ -48,4 +48,3 @@ CurrentMetric_ReplicatedChecks: 0 - [system.metrics](#system_tables-metrics) — таблица с мгновенно вычисляемыми метриками. - [Мониторинг](../../operations/monitoring.md) — основы мониторинга в ClickHouse. -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/metric_log) diff --git a/docs/ru/operations/system-tables/metrics.md b/docs/ru/operations/system-tables/metrics.md index db4016687d6..13d5fbc750a 100644 --- a/docs/ru/operations/system-tables/metrics.md +++ b/docs/ru/operations/system-tables/metrics.md @@ -38,4 +38,3 @@ SELECT * FROM system.metrics LIMIT 10 - [system.metric_log](#system_tables-metric_log) — таблица фиксирующая историю значений метрик из `system.metrics` и `system.events`. - [Мониторинг](../../operations/monitoring.md) — основы мониторинга в ClickHouse. 
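As a hedged illustration of the `system.metric_log` page above: each flush interval produces one row whose `CurrentMetric_*` columns mirror the rows of `system.metrics`. The metric name below (`CurrentMetric_Query`) is chosen for illustration only:

```sql
-- Recent history of a single metric, one row per flush interval.
SELECT event_time, CurrentMetric_Query
FROM system.metric_log
ORDER BY event_time DESC
LIMIT 10;
```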
-[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/metrics) diff --git a/docs/ru/operations/system-tables/mutations.md b/docs/ru/operations/system-tables/mutations.md index 044677030ba..4370ab593e7 100644 --- a/docs/ru/operations/system-tables/mutations.md +++ b/docs/ru/operations/system-tables/mutations.md @@ -45,4 +45,3 @@ - [Движок MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) - [Репликация данных](../../engines/table-engines/mergetree-family/replication.md) (семейство ReplicatedMergeTree) -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/mutations) diff --git a/docs/ru/operations/system-tables/numbers.md b/docs/ru/operations/system-tables/numbers.md index 02192184aa1..0be4a4ce05d 100644 --- a/docs/ru/operations/system-tables/numbers.md +++ b/docs/ru/operations/system-tables/numbers.md @@ -4,4 +4,3 @@ Эту таблицу можно использовать для тестов, а также если вам нужно сделать перебор. Чтения из этой таблицы не распараллеливаются. -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/numbers) diff --git a/docs/ru/operations/system-tables/numbers_mt.md b/docs/ru/operations/system-tables/numbers_mt.md index 12409d831a1..d66c4515ddb 100644 --- a/docs/ru/operations/system-tables/numbers_mt.md +++ b/docs/ru/operations/system-tables/numbers_mt.md @@ -3,4 +3,3 @@ То же самое, что и [system.numbers](../../operations/system-tables/numbers.md), но чтение распараллеливается. Числа могут возвращаться в произвольном порядке. Используется для тестов. -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/numbers_mt) diff --git a/docs/ru/operations/system-tables/one.md b/docs/ru/operations/system-tables/one.md index 4231277ffe4..5cb297f06d4 100644 --- a/docs/ru/operations/system-tables/one.md +++ b/docs/ru/operations/system-tables/one.md @@ -4,4 +4,3 @@ Эта таблица используется, если в `SELECT` запросе не указана секция `FROM`. То есть, это - аналог таблицы `DUAL`, которую можно найти в других СУБД. 
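The `DUAL` analogy for `system.one` can be shown directly; the two queries below are equivalent, because a `SELECT` without a `FROM` clause implicitly reads from `system.one`:

```sql
-- Implicit form: system.one is used under the hood.
SELECT 1 + 1;

-- Explicit form: a single row with the single UInt8 column `dummy` = 0.
SELECT 1 + 1 FROM system.one;
```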
-[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/one) diff --git a/docs/ru/operations/system-tables/opentelemetry_span_log.md b/docs/ru/operations/system-tables/opentelemetry_span_log.md index 96555064b0e..c421a602300 100644 --- a/docs/ru/operations/system-tables/opentelemetry_span_log.md +++ b/docs/ru/operations/system-tables/opentelemetry_span_log.md @@ -46,4 +46,3 @@ attribute.names: [] attribute.values: [] ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/opentelemetry_span_log) diff --git a/docs/ru/operations/system-tables/part_log.md b/docs/ru/operations/system-tables/part_log.md index 4157cd41bff..a8d892f3b67 100644 --- a/docs/ru/operations/system-tables/part_log.md +++ b/docs/ru/operations/system-tables/part_log.md @@ -66,4 +66,3 @@ error: 0 exception: ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/part_log) diff --git a/docs/ru/operations/system-tables/parts.md b/docs/ru/operations/system-tables/parts.md index 950e652332d..1c7f0ad2e9a 100644 --- a/docs/ru/operations/system-tables/parts.md +++ b/docs/ru/operations/system-tables/parts.md @@ -155,4 +155,3 @@ move_ttl_info.max: [] - [Движок MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) - [TTL для столбцов и таблиц](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-ttl) -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/parts) diff --git a/docs/ru/operations/system-tables/parts_columns.md b/docs/ru/operations/system-tables/parts_columns.md index db4d453e8f1..5640929d810 100644 --- a/docs/ru/operations/system-tables/parts_columns.md +++ b/docs/ru/operations/system-tables/parts_columns.md @@ -145,4 +145,3 @@ column_marks_bytes: 48 - [Движок MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) -[Оригинальная статья](https://clickhouse.tech/docs/en/operations/system_tables/parts_columns) diff --git a/docs/ru/operations/system-tables/processes.md b/docs/ru/operations/system-tables/processes.md index c9216e162b3..682b174c483 100644 --- a/docs/ru/operations/system-tables/processes.md +++ b/docs/ru/operations/system-tables/processes.md @@ -14,4 +14,3 @@ - `query` (String) – текст запроса. Для запросов `INSERT` не содержит вставляемые данные. - `query_id` (String) – идентификатор запроса, если был задан. -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/processes) diff --git a/docs/ru/operations/system-tables/query_log.md b/docs/ru/operations/system-tables/query_log.md index 39f685288d8..d3872e1ef18 100644 --- a/docs/ru/operations/system-tables/query_log.md +++ b/docs/ru/operations/system-tables/query_log.md @@ -44,9 +44,15 @@ ClickHouse не удаляет данные из таблица автомати - `result_rows` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — количество строк в результате запроса `SELECT` или количество строк в запросе `INSERT`. - `result_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — объём RAM в байтах, использованный для хранения результата запроса. - `memory_usage` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — потребление RAM запросом. +- `current_database` ([String](../../sql-reference/data-types/string.md)) — имя текущей базы данных. - `query` ([String](../../sql-reference/data-types/string.md)) — текст запроса.
+- `normalized_query_hash` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — хэш-сумма текста запроса без значений литералов; совпадает для запросов, которые различаются только значениями литералов. +- `query_kind` ([LowCardinality(String)](../../sql-reference/data-types/lowcardinality.md)) — тип запроса. +- `databases` ([Array](../../sql-reference/data-types/array.md)([LowCardinality(String)](../../sql-reference/data-types/lowcardinality.md))) — имена баз данных, присутствующих в запросе. +- `tables` ([Array](../../sql-reference/data-types/array.md)([LowCardinality(String)](../../sql-reference/data-types/lowcardinality.md))) — имена таблиц, присутствующих в запросе. +- `columns` ([Array](../../sql-reference/data-types/array.md)([LowCardinality(String)](../../sql-reference/data-types/lowcardinality.md))) — имена столбцов, присутствующих в запросе. +- `exception_code` ([Int32](../../sql-reference/data-types/int-uint.md)) — код исключения. - `exception` ([String](../../sql-reference/data-types/string.md)) — сообщение исключения, если запрос завершился по исключению. -- `exception_code` ([Int32](../../sql-reference/data-types/int-uint.md)) — код исключения. - `stack_trace` ([String](../../sql-reference/data-types/string.md)) — [stack trace](https://en.wikipedia.org/wiki/Stack_trace). Пустая строка, если запрос успешно завершен. - `is_initial_query` ([UInt8](../../sql-reference/data-types/int-uint.md)) — вид запроса. Возможные значения: - 1 — запрос был инициирован клиентом. @@ -74,68 +80,97 @@ ClickHouse не удаляет данные из таблица автомати - 1 — `GET`. - 2 — `POST`. - `http_user_agent` ([String](../../sql-reference/data-types/string.md)) — HTTP заголовок `UserAgent`. -- `quota_key` ([String](../../sql-reference/data-types/string.md)) — «ключ квоты» из настроек [квот](quotas.md) (см. `keyed`). +- `http_referer` ([String](../../sql-reference/data-types/string.md)) — HTTP заголовок `Referer` (содержит полный или частичный адрес страницы, с которой был выполнен запрос). +- `forwarded_for` ([String](../../sql-reference/data-types/string.md)) — HTTP заголовок `X-Forwarded-For`. +- `quota_key` ([String](../../sql-reference/data-types/string.md)) — `ключ квоты` из настроек [квот](quotas.md) (см. `keyed`). - `revision` ([UInt32](../../sql-reference/data-types/int-uint.md)) — ревизия ClickHouse. -- `thread_numbers` ([Array(UInt32)](../../sql-reference/data-types/array.md)) — количество потоков, участвующих в обработке запросов. +- `log_comment` ([String](../../sql-reference/data-types/string.md)) — комментарий к записи в логе. Представляет собой произвольную строку, длина которой должна быть не больше, чем [max_query_size](../../operations/settings/settings.md#settings-max_query_size). Если нет комментария, то пустая строка. +- `thread_ids` ([Array(UInt64)](../../sql-reference/data-types/array.md)) — идентификаторы потоков, участвующих в обработке запросов. - `ProfileEvents.Names` ([Array(String)](../../sql-reference/data-types/array.md)) — счетчики для изменения различных метрик. Описание метрик можно получить из таблицы [system.events](#system_tables-events). - `ProfileEvents.Values` ([Array(UInt64)](../../sql-reference/data-types/array.md)) — метрики, перечисленные в столбце `ProfileEvents.Names`. - `Settings.Names` ([Array(String)](../../sql-reference/data-types/array.md)) — имена настроек, которые меняются, когда клиент выполняет запрос. Чтобы разрешить логирование изменений настроек, установите параметр `log_query_settings` равным 1.
- `Settings.Values` ([Array(String)](../../sql-reference/data-types/array.md)) — значения настроек, которые перечислены в столбце `Settings.Names`. +- `used_aggregate_functions` ([Array(String)](../../sql-reference/data-types/array.md)) — канонические имена `агрегатных функций`, использованных при выполнении запроса. +- `used_aggregate_function_combinators` ([Array(String)](../../sql-reference/data-types/array.md)) — канонические имена `комбинаторов агрегатных функций`, использованных при выполнении запроса. +- `used_database_engines` ([Array(String)](../../sql-reference/data-types/array.md)) — канонические имена `движков баз данных`, использованных при выполнении запроса. +- `used_data_type_families` ([Array(String)](../../sql-reference/data-types/array.md)) — канонические имена `семейств типов данных`, использованных при выполнении запроса. +- `used_dictionaries` ([Array(String)](../../sql-reference/data-types/array.md)) — канонические имена `источников словарей`, использованных при выполнении запроса. +- `used_formats` ([Array(String)](../../sql-reference/data-types/array.md)) — канонические имена `форматов`, использованных при выполнении запроса. +- `used_functions` ([Array(String)](../../sql-reference/data-types/array.md)) — канонические имена `функций`, использованных при выполнении запроса. +- `used_storages` ([Array(String)](../../sql-reference/data-types/array.md)) — канонические имена `движков таблиц`, использованных при выполнении запроса. +- `used_table_functions` ([Array(String)](../../sql-reference/data-types/array.md)) — канонические имена `табличных функций`, использованных при выполнении запроса. **Пример** ``` sql -SELECT * FROM system.query_log LIMIT 1 \G +SELECT * FROM system.query_log WHERE type = 'QueryFinish' AND (query LIKE '%toDate(\'2000-12-05\')%') ORDER BY query_start_time DESC LIMIT 1 FORMAT Vertical; ``` ``` text Row 1: ────── -type: QueryStart -event_date: 2020-09-11 -event_time: 2020-09-11 10:08:17 -event_time_microseconds: 2020-09-11 10:08:17.063321 -query_start_time: 2020-09-11 10:08:17 -query_start_time_microseconds: 2020-09-11 10:08:17.063321 -query_duration_ms: 0 -read_rows: 0 -read_bytes: 0 -written_rows: 0 -written_bytes: 0 -result_rows: 0 -result_bytes: 0 -memory_usage: 0 -current_database: default -query: INSERT INTO test1 VALUES -exception_code: 0 -exception: -stack_trace: -is_initial_query: 1 -user: default -query_id: 50a320fd-85a8-49b8-8761-98a86bcbacef -address: ::ffff:127.0.0.1 -port: 33452 -initial_user: default -initial_query_id: 50a320fd-85a8-49b8-8761-98a86bcbacef -initial_address: ::ffff:127.0.0.1 -initial_port: 33452 -interface: 1 -os_user: bharatnc -client_hostname: tower -client_name: ClickHouse -client_revision: 54437 -client_version_major: 20 -client_version_minor: 7 -client_version_patch: 2 -http_method: 0 -http_user_agent: -quota_key: -revision: 54440 -thread_ids: [] -ProfileEvents.Names: [] -ProfileEvents.Values: [] -Settings.Names: ['use_uncompressed_cache','load_balancing','log_queries','max_memory_usage','allow_introspection_functions'] -Settings.Values: ['0','random','1','10000000000','1'] +type: QueryFinish +event_date: 2021-03-18 +event_time: 2021-03-18 20:54:18 +event_time_microseconds: 2021-03-18 20:54:18.676686 +query_start_time: 2021-03-18 20:54:18 +query_start_time_microseconds: 2021-03-18 20:54:18.673934 +query_duration_ms: 2 +read_rows: 100 +read_bytes: 800 +written_rows: 0 +written_bytes: 0 +result_rows: 2 +result_bytes: 4858 +memory_usage: 0 +current_database: default +query: SELECT uniqArray([1, 1, 2]), 
SUBSTRING('Hello, world', 7, 5), flatten([[[BIT_AND(123)]], [[mod(3, 2)], [CAST('1' AS INTEGER)]]]), week(toDate('2000-12-05')), CAST(arrayJoin([NULL, NULL]) AS Nullable(TEXT)), avgOrDefaultIf(number, number % 2), sumOrNull(number), toTypeName(sumOrNull(number)), countIf(toDate('2000-12-05') + number as d, toDayOfYear(d) % 2) FROM numbers(100) +normalized_query_hash: 17858008518552525706 +query_kind: Select +databases: ['_table_function'] +tables: ['_table_function.numbers'] +columns: ['_table_function.numbers.number'] +exception_code: 0 +exception: +stack_trace: +is_initial_query: 1 +user: default +query_id: 58f3d392-0fa0-4663-ae1d-29917a1a9c9c +address: ::ffff:127.0.0.1 +port: 37486 +initial_user: default +initial_query_id: 58f3d392-0fa0-4663-ae1d-29917a1a9c9c +initial_address: ::ffff:127.0.0.1 +initial_port: 37486 +interface: 1 +os_user: sevirov +client_hostname: clickhouse.ru-central1.internal +client_name: ClickHouse +client_revision: 54447 +client_version_major: 21 +client_version_minor: 4 +client_version_patch: 1 +http_method: 0 +http_user_agent: +http_referer: +forwarded_for: +quota_key: +revision: 54449 +log_comment: +thread_ids: [587,11939] +ProfileEvents.Names: ['Query','SelectQuery','ReadCompressedBytes','CompressedReadBufferBlocks','CompressedReadBufferBytes','IOBufferAllocs','IOBufferAllocBytes','ArenaAllocChunks','ArenaAllocBytes','FunctionExecute','TableFunctionExecute','NetworkSendElapsedMicroseconds','SelectedRows','SelectedBytes','ContextLock','RWLockAcquiredReadLocks','RealTimeMicroseconds','UserTimeMicroseconds','SystemTimeMicroseconds','SoftPageFaults','OSCPUVirtualTimeMicroseconds','OSWriteBytes'] +ProfileEvents.Values: [1,1,36,1,10,2,1048680,1,4096,36,1,110,100,800,77,1,3137,1476,1101,8,2577,8192] +Settings.Names: ['load_balancing','max_memory_usage'] +Settings.Values: ['random','10000000000'] +used_aggregate_functions: ['groupBitAnd','avg','sum','count','uniq'] +used_aggregate_function_combinators: ['OrDefault','If','OrNull','Array'] +used_database_engines: [] +used_data_type_families: ['String','Array','Int32','Nullable'] +used_dictionaries: [] +used_formats: [] +used_functions: ['toWeek','CAST','arrayFlatten','toTypeName','toDayOfYear','addDays','array','toDate','modulo','substring','plus'] +used_storages: [] +used_table_functions: ['numbers'] ``` **Смотрите также** @@ -143,4 +178,3 @@ Settings.Values: ['0','random','1','10000000000','1'] - [system.query_thread_log](../../operations/system-tables/query_thread_log.md#system_tables-query_thread_log) — в этой таблице содержится информация о цепочке каждого выполненного запроса. [Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/query_log) - diff --git a/docs/ru/operations/system-tables/query_thread_log.md b/docs/ru/operations/system-tables/query_thread_log.md index 052baf98035..0292a321524 100644 --- a/docs/ru/operations/system-tables/query_thread_log.md +++ b/docs/ru/operations/system-tables/query_thread_log.md @@ -114,4 +114,3 @@ ProfileEvents.Values: [1,1,11,11,591,148,3,71,29,6533808,1,11,72,18,47, - [system.query_log](../../operations/system-tables/query_log.md#system_tables-query_log) — описание системной таблицы `query_log`, которая содержит общую информацию о выполненных запросах. 
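A sketch of how `system.query_log` and `system.query_thread_log` relate, joining on `query_id` as both pages describe; it assumes `log_queries` and `log_query_threads` are enabled, otherwise the tables may be empty:

```sql
-- Number of thread rows logged per finished query.
SELECT
    query_id,
    any(ql.query) AS query_text,
    count() AS thread_rows
FROM system.query_log AS ql
INNER JOIN system.query_thread_log AS qtl USING (query_id)
WHERE ql.type = 'QueryFinish'
GROUP BY query_id
LIMIT 5;
```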
-[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/query_thread_log) diff --git a/docs/ru/operations/system-tables/quota_limits.md b/docs/ru/operations/system-tables/quota_limits.md index a9ab87055d4..4103391cfd6 100644 --- a/docs/ru/operations/system-tables/quota_limits.md +++ b/docs/ru/operations/system-tables/quota_limits.md @@ -4,17 +4,17 @@ Столбцы: -- `quota_name` ([String](../../sql-reference/data-types/string.md)) — Имя квоты. -- `duration` ([UInt32](../../sql-reference/data-types/int-uint.md)) — Длина временного интервала для расчета потребления ресурсов, в секундах. -- `is_randomized_interval` ([UInt8](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Логическое значение. Оно показывает, является ли интервал рандомизированным. Интервал всегда начинается в одно и то же время, если он не рандомизирован. Например, интервал в 1 минуту всегда начинается с целого числа минут (то есть он может начинаться в 11:20:00, но никогда не начинается в 11:20:01), интервал в один день всегда начинается в полночь UTC. Если интервал рандомизирован, то самый первый интервал начинается в произвольное время, а последующие интервалы начинаются один за другим. Значения: - - `0` — Интервал рандомизирован. - - `1` — Интервал не рандомизирован. -- `max_queries` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Максимальное число запросов. -- `max_errors` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Максимальное количество ошибок. -- `max_result_rows` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Максимальное количество строк результата. -- `max_result_bytes` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Максимальный объем оперативной памяти в байтах, используемый для хранения результата запроса. -- `max_read_rows` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Максимальное количество строк, считываемых из всех таблиц и табличных функций, участвующих в запросе. -- `max_read_bytes` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Максимальное количество байтов, считываемых из всех таблиц и табличных функций, участвующих в запросе. -- `max_execution_time` ([Nullable](../../sql-reference/data-types/nullable.md)([Float64](../../sql-reference/data-types/float.md))) — Максимальное время выполнения запроса, в секундах. - -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/quota_limits) +- `quota_name` ([String](../../sql-reference/data-types/string.md)) — имя квоты. +- `duration` ([UInt32](../../sql-reference/data-types/int-uint.md)) — длина временного интервала для расчета потребления ресурсов, в секундах. +- `is_randomized_interval` ([UInt8](../../sql-reference/data-types/int-uint.md#uint-ranges)) — логическое значение. Оно показывает, является ли интервал рандомизированным. Интервал всегда начинается в одно и то же время, если он не рандомизирован. Например, интервал в 1 минуту всегда начинается с целого числа минут (то есть он может начинаться в 11:20:00, но никогда не начинается в 11:20:01), интервал в один день всегда начинается в полночь UTC. 
Если интервал рандомизирован, то самый первый интервал начинается в произвольное время, а последующие интервалы начинаются один за другим. Значения: + - `0` — интервал не рандомизирован. + - `1` — интервал рандомизирован. +- `max_queries` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — максимальное число запросов. +- `max_query_selects` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — максимальное число запросов `SELECT`. +- `max_query_inserts` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — максимальное число запросов `INSERT`. +- `max_errors` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — максимальное количество ошибок. +- `max_result_rows` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — максимальное количество строк результата. +- `max_result_bytes` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — максимальный объем оперативной памяти в байтах, используемый для хранения результата запроса. +- `max_read_rows` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — максимальное количество строк, считываемых из всех таблиц и табличных функций, участвующих в запросе. +- `max_read_bytes` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — максимальное количество байтов, считываемых из всех таблиц и табличных функций, участвующих в запросе. +- `max_execution_time` ([Nullable](../../sql-reference/data-types/nullable.md)([Float64](../../sql-reference/data-types/float.md))) — максимальное время выполнения запроса, в секундах. \ No newline at end of file diff --git a/docs/ru/operations/system-tables/quota_usage.md b/docs/ru/operations/system-tables/quota_usage.md index cea3c4b2daa..19e9397ebaa 100644 --- a/docs/ru/operations/system-tables/quota_usage.md +++ b/docs/ru/operations/system-tables/quota_usage.md @@ -4,28 +4,28 @@ Столбцы: -- `quota_name` ([String](../../sql-reference/data-types/string.md)) — Имя квоты. -- `quota_key`([String](../../sql-reference/data-types/string.md)) — Значение ключа. Например, если keys = `ip_address`, `quota_key` может иметь значение '192.168.1.1'. -- `start_time`([Nullable](../../sql-reference/data-types/nullable.md)([DateTime](../../sql-reference/data-types/datetime.md))) — Время начала расчета потребления ресурсов. -- `end_time`([Nullable](../../sql-reference/data-types/nullable.md)([DateTime](../../sql-reference/data-types/datetime.md))) — Время окончания расчета потребления ресурс -- `duration` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Длина временного интервала для расчета потребления ресурсов, в секундах. -- `queries` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Общее количество запросов на этом интервале. -- `max_queries` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Максимальное количество запросов. -- `errors` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Число запросов, вызвавших ошибки.
-- `max_errors` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Максимальное число ошибок. -- `result_rows` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Общее количество строк результата. -- `max_result_rows` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Максимальное количество строк результата. -- `result_bytes` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Объем оперативной памяти в байтах, используемый для хранения результата запроса. -- `max_result_bytes` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Максимальный объем оперативной памяти, используемый для хранения результата запроса, в байтах. -- `read_rows` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Общее число исходных строк, считываемых из таблиц для выполнения запроса на всех удаленных серверах. -- `max_read_rows` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Максимальное количество строк, считываемых из всех таблиц и табличных функций, участвующих в запросах. -- `read_bytes` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Общее количество байт, считанных из всех таблиц и табличных функций, участвующих в запросах. -- `max_read_bytes` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Максимальное количество байт, считываемых из всех таблиц и табличных функций. -- `execution_time` ([Nullable](../../sql-reference/data-types/nullable.md)([Float64](../../sql-reference/data-types/float.md))) — Общее время выполнения запроса, в секундах. -- `max_execution_time` ([Nullable](../../sql-reference/data-types/nullable.md)([Float64](../../sql-reference/data-types/float.md))) — Максимальное время выполнения запроса. +- `quota_name` ([String](../../sql-reference/data-types/string.md)) — имя квоты. +- `quota_key`([String](../../sql-reference/data-types/string.md)) — значение ключа. Например, если keys = `ip_address`, `quota_key` может иметь значение '192.168.1.1'. +- `start_time`([Nullable](../../sql-reference/data-types/nullable.md)([DateTime](../../sql-reference/data-types/datetime.md))) — время начала расчета потребления ресурсов. +- `end_time`([Nullable](../../sql-reference/data-types/nullable.md)([DateTime](../../sql-reference/data-types/datetime.md))) — время окончания расчета потребления ресурсов. +- `duration` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — длина временного интервала для расчета потребления ресурсов, в секундах. +- `queries` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — общее количество запросов на этом интервале. +- `query_selects` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — общее количество запросов `SELECT` на этом интервале. +- `query_inserts` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — общее количество запросов `INSERT` на этом интервале.
+- `max_queries` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — максимальное количество запросов. +- `errors` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — число запросов, вызвавших ошибки. +- `max_errors` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — максимальное число ошибок. +- `result_rows` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — общее количество строк результата. +- `max_result_rows` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — максимальное количество строк результата. +- `result_bytes` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — объем оперативной памяти в байтах, используемый для хранения результата запроса. +- `max_result_bytes` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — максимальный объем оперативной памяти, используемый для хранения результата запроса, в байтах. +- `read_rows` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — общее число исходных строк, считываемых из таблиц для выполнения запроса на всех удаленных серверах. +- `max_read_rows` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — максимальное количество строк, считываемых из всех таблиц и табличных функций, участвующих в запросах. +- `read_bytes` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — общее количество байт, считанных из всех таблиц и табличных функций, участвующих в запросах. +- `max_read_bytes` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — максимальное количество байт, считываемых из всех таблиц и табличных функций. +- `execution_time` ([Nullable](../../sql-reference/data-types/nullable.md)([Float64](../../sql-reference/data-types/float.md))) — общее время выполнения запроса, в секундах. +- `max_execution_time` ([Nullable](../../sql-reference/data-types/nullable.md)([Float64](../../sql-reference/data-types/float.md))) — максимальное время выполнения запроса. 
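Tying the `system.quota_usage` columns above to the `SHOW QUOTA` statement referenced just below: `SHOW QUOTA` reports consumption for the current user, and a direct `SELECT` on the table is assumed to return the same data:

```sql
-- Quota consumption of the current user for the current interval(s).
SHOW QUOTA;

-- Direct read of the same system table.
SELECT quota_name, start_time, end_time, queries, max_queries, errors
FROM system.quota_usage;
```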
## Смотрите также {#see-also} -- [SHOW QUOTA](../../sql-reference/statements/show.md#show-quota-statement) - -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/quota_usage) +- [SHOW QUOTA](../../sql-reference/statements/show.md#show-quota-statement) \ No newline at end of file diff --git a/docs/ru/operations/system-tables/quotas.md b/docs/ru/operations/system-tables/quotas.md index 15bb41a85bf..fe6b78cc44b 100644 --- a/docs/ru/operations/system-tables/quotas.md +++ b/docs/ru/operations/system-tables/quotas.md @@ -25,5 +25,4 @@ - [SHOW QUOTAS](../../sql-reference/statements/show.md#show-quotas-statement) -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/quotas) diff --git a/docs/ru/operations/system-tables/quotas_usage.md b/docs/ru/operations/system-tables/quotas_usage.md index 9d6d339c434..fe066e38add 100644 --- a/docs/ru/operations/system-tables/quotas_usage.md +++ b/docs/ru/operations/system-tables/quotas_usage.md @@ -4,29 +4,31 @@ Столбцы: -- `quota_name` ([String](../../sql-reference/data-types/string.md)) — Имя квоты. -- `quota_key` ([String](../../sql-reference/data-types/string.md)) — Ключ квоты. -- `is_current` ([UInt8](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Квота используется для текущего пользователя. -- `start_time` ([Nullable](../../sql-reference/data-types/nullable.md)([DateTime](../../sql-reference/data-types/datetime.md)))) — Время начала расчета потребления ресурсов. -- `end_time` ([Nullable](../../sql-reference/data-types/nullable.md)([DateTime](../../sql-reference/data-types/datetime.md)))) — Время окончания расчета потребления ресурсов. -- `duration` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt32](../../sql-reference/data-types/int-uint.md))) — Длина временного интервала для расчета потребления ресурсов, в секундах. -- `queries` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Общее количество запросов на этом интервале. -- `max_queries` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Максимальное число запросов. -- `errors` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Число запросов, вызвавших ошибки. -- `max_errors` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Максимальное число ошибок. -- `result_rows` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — The total number of rows given as a result. -- `max_result_rows` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Maximum of source rows read from tables. -- `result_bytes` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Объем оперативной памяти в байтах, используемый для хранения результата запроса. -- `max_result_bytes` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Максимальный объем оперативной памяти, используемый для хранения результата запроса, в байтах. -- `read_rows` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Общее число исходных строк, считываемых из таблиц для выполнения запроса на всех удаленных серверах. 
-- `max_read_rows` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Максимальное количество строк, считываемых из всех таблиц и табличных функций, участвующих в запросах. -- `read_bytes` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Общее количество байт, считанных из всех таблиц и табличных функций, участвующих в запросах. -- `max_read_bytes` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Максимальное количество байт, считываемых из всех таблиц и табличных функций. -- `execution_time` ([Nullable](../../sql-reference/data-types/nullable.md)([Float64](../../sql-reference/data-types/float.md))) — Общее время выполнения запроса, в секундах. -- `max_execution_time` ([Nullable](../../sql-reference/data-types/nullable.md)([Float64](../../sql-reference/data-types/float.md))) — Максимальное время выполнения запроса. +- `quota_name` ([String](../../sql-reference/data-types/string.md)) — имя квоты. +- `quota_key` ([String](../../sql-reference/data-types/string.md)) — ключ квоты. +- `is_current` ([UInt8](../../sql-reference/data-types/int-uint.md#uint-ranges)) — квота используется для текущего пользователя. +- `start_time` ([Nullable](../../sql-reference/data-types/nullable.md)([DateTime](../../sql-reference/data-types/datetime.md))) — время начала расчета потребления ресурсов. +- `end_time` ([Nullable](../../sql-reference/data-types/nullable.md)([DateTime](../../sql-reference/data-types/datetime.md))) — время окончания расчета потребления ресурсов. +- `duration` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt32](../../sql-reference/data-types/int-uint.md))) — длина временного интервала для расчета потребления ресурсов, в секундах. +- `queries` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — общее количество запросов на этом интервале. +- `max_queries` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — максимальное число запросов. +- `query_selects` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — общее количество запросов `SELECT` на этом интервале. +- `max_query_selects` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — максимальное количество запросов `SELECT` на этом интервале. +- `query_inserts` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — общее количество запросов `INSERT` на этом интервале. +- `max_query_inserts` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — максимальное количество запросов `INSERT` на этом интервале. +- `errors` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — число запросов, вызвавших ошибки. +- `max_errors` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — максимальное число ошибок. +- `result_rows` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — общее количество строк результата.
+- `max_result_rows` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — максимальное количество строк результата. +- `result_bytes` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — объем оперативной памяти в байтах, используемый для хранения результата запроса. +- `max_result_bytes` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — максимальный объем оперативной памяти, используемый для хранения результата запроса, в байтах. +- `read_rows` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — общее число исходных строк, считываемых из таблиц для выполнения запроса на всех удаленных серверах. +- `max_read_rows` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — максимальное количество строк, считываемых из всех таблиц и табличных функций, участвующих в запросах. +- `read_bytes` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — общее количество байт, считанных из всех таблиц и табличных функций, участвующих в запросах. +- `max_read_bytes` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — максимальное количество байт, считываемых из всех таблиц и табличных функций. +- `execution_time` ([Nullable](../../sql-reference/data-types/nullable.md)([Float64](../../sql-reference/data-types/float.md))) — общее время выполнения запроса, в секундах. +- `max_execution_time` ([Nullable](../../sql-reference/data-types/nullable.md)([Float64](../../sql-reference/data-types/float.md))) — максимальное время выполнения запроса. ## Смотрите также {#see-also} -- [SHOW QUOTA](../../sql-reference/statements/show.md#show-quota-statement) - -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/quotas_usage) +- [SHOW QUOTA](../../sql-reference/statements/show.md#show-quota-statement) \ No newline at end of file diff --git a/docs/ru/operations/system-tables/replicas.md b/docs/ru/operations/system-tables/replicas.md index 8d4eb60c56a..7879ee707a4 100644 --- a/docs/ru/operations/system-tables/replicas.md +++ b/docs/ru/operations/system-tables/replicas.md @@ -120,5 +120,4 @@ WHERE Если этот запрос ничего не возвращает - значит всё хорошо. -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/replicas) diff --git a/docs/ru/operations/system-tables/replicated_fetches.md b/docs/ru/operations/system-tables/replicated_fetches.md index 94584f390ee..31d5a5cfe08 100644 --- a/docs/ru/operations/system-tables/replicated_fetches.md +++ b/docs/ru/operations/system-tables/replicated_fetches.md @@ -67,4 +67,3 @@ thread_id: 54 - [Управление таблицами ReplicatedMergeTree](../../sql-reference/statements/system/#query-language-system-replicated) -[Оригинальная статья](https://clickhouse.tech/docs/en/operations/system_tables/replicated_fetches) diff --git a/docs/ru/operations/system-tables/replication_queue.md b/docs/ru/operations/system-tables/replication_queue.md index 47f64aea55d..2f9d80be16f 100644 --- a/docs/ru/operations/system-tables/replication_queue.md +++ b/docs/ru/operations/system-tables/replication_queue.md @@ -14,7 +14,17 @@ - `node_name` ([String](../../sql-reference/data-types/string.md)) — имя узла в ZooKeeper.
-- `type` ([String](../../sql-reference/data-types/string.md)) — тип задачи в очереди: `GET_PARTS`, `MERGE_PARTS`, `DETACH_PARTS`, `DROP_PARTS` или `MUTATE_PARTS`. +- `type` ([String](../../sql-reference/data-types/string.md)) — тип задачи в очереди: + + - `GET_PART` — скачать кусок с другой реплики. + - `ATTACH_PART` — присоединить кусок. Задача может быть выполнена и с куском из нашей собственной реплики (если он находится в папке `detached`). Эта задача практически идентична задаче `GET_PART`, лишь немного оптимизирована. + - `MERGE_PARTS` — выполнить слияние кусков. + - `DROP_RANGE` — удалить куски в партициях из указанного диапазона. + - `CLEAR_COLUMN` — удалить указанный столбец из указанной партиции. Примечание: не используется с 20.4. + - `CLEAR_INDEX` — удалить указанный индекс из указанной партиции. Примечание: не используется с 20.4. + - `REPLACE_RANGE` — удалить указанный диапазон кусков и заменить их на новые. + - `MUTATE_PART` — применить одну или несколько мутаций к куску. + - `ALTER_METADATA` — применить изменения структуры таблицы в результате запросов с выражением `ALTER`. - `create_time` ([Datetime](../../sql-reference/data-types/datetime.md)) — дата и время отправки задачи на выполнение. @@ -70,12 +80,10 @@ num_tries: 36 last_exception: Code: 226, e.displayText() = DB::Exception: Marks file '/opt/clickhouse/data/merge/visits_v2/tmp_fetch_20201130_121373_121384_2/CounterID.mrk' doesn't exist (version 20.8.7.15 (official build)) last_attempt_time: 2020-12-08 17:35:54 num_postponed: 0 -postpone_reason: +postpone_reason: last_postpone_time: 1970-01-01 03:00:00 ``` **Смотрите также** -- [Управление таблицами ReplicatedMergeTree](../../sql-reference/statements/system.md/#query-language-system-replicated) - -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/replication_queue) +- [Управление таблицами ReplicatedMergeTree](../../sql-reference/statements/system.md#query-language-system-replicated) diff --git a/docs/ru/operations/system-tables/role-grants.md b/docs/ru/operations/system-tables/role-grants.md index f014af1fe3d..2c80a597857 100644 --- a/docs/ru/operations/system-tables/role-grants.md +++ b/docs/ru/operations/system-tables/role-grants.md @@ -14,4 +14,3 @@ - 1 — Роль обладает привилегией `ADMIN OPTION`. - 0 — Роль не обладает привилегией `ADMIN OPTION`.
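Referring back to the `system.replication_queue` task types documented above (`GET_PART`, `MERGE_PARTS`, `MUTATE_PART` and so on), a hedged sketch for checking what a replica's queue is busy with; it assumes at least one `Replicated*MergeTree` table exists:

```sql
-- Pending replication tasks grouped by table and task type.
SELECT database, table, type, count() AS tasks
FROM system.replication_queue
GROUP BY database, table, type
ORDER BY tasks DESC;
```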
-[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/role-grants) \ No newline at end of file diff --git a/docs/ru/operations/system-tables/roles.md b/docs/ru/operations/system-tables/roles.md index 1b548e85be2..c2b94214012 100644 --- a/docs/ru/operations/system-tables/roles.md +++ b/docs/ru/operations/system-tables/roles.md @@ -14,4 +14,3 @@ - [SHOW ROLES](../../sql-reference/statements/show.md#show-roles-statement) -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/roles) diff --git a/docs/ru/operations/system-tables/row_policies.md b/docs/ru/operations/system-tables/row_policies.md index 7d0a490f01c..f1e84a201cb 100644 --- a/docs/ru/operations/system-tables/row_policies.md +++ b/docs/ru/operations/system-tables/row_policies.md @@ -31,4 +31,3 @@ - [SHOW POLICIES](../../sql-reference/statements/show.md#show-policies-statement) -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/row_policies) diff --git a/docs/ru/operations/system-tables/settings.md b/docs/ru/operations/system-tables/settings.md index 50ccac684c4..c9d63d336b6 100644 --- a/docs/ru/operations/system-tables/settings.md +++ b/docs/ru/operations/system-tables/settings.md @@ -50,4 +50,3 @@ SELECT * FROM system.settings WHERE changed AND name='load_balancing' - [Ограничения для значений настроек](../settings/constraints-on-settings.md) - Выражение [SHOW SETTINGS](../../sql-reference/statements/show.md#show-settings) -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/settings) diff --git a/docs/ru/operations/system-tables/settings_profile_elements.md b/docs/ru/operations/system-tables/settings_profile_elements.md index cd801468e21..8a1461c6bb0 100644 --- a/docs/ru/operations/system-tables/settings_profile_elements.md +++ b/docs/ru/operations/system-tables/settings_profile_elements.md @@ -27,4 +27,3 @@ - `inherit_profile` ([Nullable](../../sql-reference/data-types/nullable.md)([String](../../sql-reference/data-types/string.md))) — Родительский профиль для данного профиля настроек. `NULL` если не задано. Профиль настроек может наследовать все значения и ограничения настроек (`min`, `max`, `readonly`) от своего родительского профиля. -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/settings_profile_elements) diff --git a/docs/ru/operations/system-tables/settings_profiles.md b/docs/ru/operations/system-tables/settings_profiles.md index e1401553a4a..f8101fb0cb7 100644 --- a/docs/ru/operations/system-tables/settings_profiles.md +++ b/docs/ru/operations/system-tables/settings_profiles.md @@ -21,4 +21,3 @@ - [SHOW PROFILES](../../sql-reference/statements/show.md#show-profiles-statement) -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/settings_profiles) diff --git a/docs/ru/operations/system-tables/stack_trace.md b/docs/ru/operations/system-tables/stack_trace.md index 0689e15c35c..58d0a1c4b6a 100644 --- a/docs/ru/operations/system-tables/stack_trace.md +++ b/docs/ru/operations/system-tables/stack_trace.md @@ -85,4 +85,3 @@ res: /lib/x86_64-linux-gnu/libc-2.27.so - [arrayMap](../../sql-reference/functions/array-functions.md#array-map) — Описание и пример использования функции `arrayMap`. - [arrayFilter](../../sql-reference/functions/array-functions.md#array-filter) — Описание и пример использования функции `arrayFilter`. 
-[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/stack_trace) diff --git a/docs/ru/operations/system-tables/storage_policies.md b/docs/ru/operations/system-tables/storage_policies.md index e62266af131..b2005d5f31e 100644 --- a/docs/ru/operations/system-tables/storage_policies.md +++ b/docs/ru/operations/system-tables/storage_policies.md @@ -14,4 +14,3 @@ Если политика хранения содержит несколько томов, то каждому тому соответствует отдельная запись в таблице. -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/storage_policies) diff --git a/docs/ru/operations/system-tables/table_engines.md b/docs/ru/operations/system-tables/table_engines.md index eb198475e43..b6f6d3decc2 100644 --- a/docs/ru/operations/system-tables/table_engines.md +++ b/docs/ru/operations/system-tables/table_engines.md @@ -6,8 +6,8 @@ - `name` (String) — имя движка. - `supports_settings` (UInt8) — флаг, показывающий поддержку секции `SETTINGS`. -- `supports_skipping_indices` (UInt8) — флаг, показывающий поддержку [индексов пропуска данных](table_engines/mergetree/#table_engine-mergetree-data_skipping-indexes). -- `supports_ttl` (UInt8) — флаг, показывающий поддержку [TTL](table_engines/mergetree/#table_engine-mergetree-ttl). +- `supports_skipping_indices` (UInt8) — флаг, показывающий поддержку [индексов пропуска данных](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-data_skipping-indexes). +- `supports_ttl` (UInt8) — флаг, показывающий поддержку [TTL](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-ttl). - `supports_sort_order` (UInt8) — флаг, показывающий поддержку секций `PARTITION_BY`, `PRIMARY_KEY`, `ORDER_BY` и `SAMPLE_BY`. - `supports_replication` (UInt8) — флаг, показывающий поддержку [репликации](../../engines/table-engines/mergetree-family/replication.md). - `supports_deduplication` (UInt8) — флаг, показывающий наличие в движке дедупликации данных. @@ -34,4 +34,3 @@ WHERE name in ('Kafka', 'MergeTree', 'ReplicatedCollapsingMergeTree') - [Настройки](../../engines/table-engines/integrations/kafka.md#table_engine-kafka-creating-a-table) Kafka - [Настройки](../../engines/table-engines/special/join.md#join-limitations-and-settings) Join -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/table_engines) diff --git a/docs/ru/operations/system-tables/tables.md b/docs/ru/operations/system-tables/tables.md index 52de10871b2..11bb6a9eda2 100644 --- a/docs/ru/operations/system-tables/tables.md +++ b/docs/ru/operations/system-tables/tables.md @@ -1,40 +1,94 @@ # system.tables {#system-tables} -Содержит метаданные каждой таблицы, о которой знает сервер. Отсоединённые таблицы не отображаются в `system.tables`. +Содержит метаданные каждой таблицы, о которой знает сервер. -Эта таблица содержит следующие столбцы (тип столбца показан в скобках): +Отсоединённые таблицы ([DETACH](../../sql-reference/statements/detach.md)) не отображаются в `system.tables`. -- `database String` — имя базы данных, в которой находится таблица. -- `name` (String) — имя таблицы. -- `engine` (String) — движок таблицы (без параметров). -- `is_temporary` (UInt8) — флаг, указывающий на то, временная это таблица или нет. -- `data_path` (String) — путь к данным таблицы в файловой системе. -- `metadata_path` (String) — путь к табличным метаданным в файловой системе. -- `metadata_modification_time` (DateTime) — время последней модификации табличных метаданных. 
-- `dependencies_database` (Array(String)) — зависимости базы данных. -- `dependencies_table` (Array(String)) — табличные зависимости (таблицы [MaterializedView](../../engines/table-engines/special/materializedview.md), созданные на базе текущей таблицы). -- `create_table_query` (String) — запрос, которым создавалась таблица. -- `engine_full` (String) — параметры табличного движка. -- `partition_key` (String) — ключ партиционирования таблицы. -- `sorting_key` (String) — ключ сортировки таблицы. -- `primary_key` (String) - первичный ключ таблицы. -- `sampling_key` (String) — ключ сэмплирования таблицы. -- `storage_policy` (String) - политика хранения данных: +Информация о [временных таблицах](../../sql-reference/statements/create/table.md#temporary-tables) содержится в `system.tables` только в тех сессиях, в которых эти таблицы были созданы. Поле `database` у таких таблиц пустое, а флаг `is_temporary` включен. + +Столбцы: + +- `database` ([String](../../sql-reference/data-types/string.md)) — имя базы данных, в которой находится таблица. +- `name` ([String](../../sql-reference/data-types/string.md)) — имя таблицы. +- `engine` ([String](../../sql-reference/data-types/string.md)) — движок таблицы (без параметров). +- `is_temporary` ([UInt8](../../sql-reference/data-types/int-uint.md)) — флаг, указывающий на то, временная это таблица или нет. +- `data_path` ([String](../../sql-reference/data-types/string.md)) — путь к данным таблицы в файловой системе. +- `metadata_path` ([String](../../sql-reference/data-types/string.md)) — путь к табличным метаданным в файловой системе. +- `metadata_modification_time` ([DateTime](../../sql-reference/data-types/datetime.md)) — время последней модификации табличных метаданных. +- `dependencies_database` ([Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md))) — зависимости базы данных. +- `dependencies_table` ([Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md))) — табличные зависимости (таблицы [MaterializedView](../../engines/table-engines/special/materializedview.md), созданные на базе текущей таблицы). +- `create_table_query` ([String](../../sql-reference/data-types/string.md)) — запрос, при помощи которого создавалась таблица. +- `engine_full` ([String](../../sql-reference/data-types/string.md)) — параметры табличного движка. +- `partition_key` ([String](../../sql-reference/data-types/string.md)) — ключ партиционирования таблицы. +- `sorting_key` ([String](../../sql-reference/data-types/string.md)) — ключ сортировки таблицы. +- `primary_key` ([String](../../sql-reference/data-types/string.md)) - первичный ключ таблицы. +- `sampling_key` ([String](../../sql-reference/data-types/string.md)) — ключ сэмплирования таблицы. +- `storage_policy` ([String](../../sql-reference/data-types/string.md)) - политика хранения данных: - [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-multiple-volumes) - [Distributed](../../engines/table-engines/special/distributed.md#distributed) -- `total_rows` (Nullable(UInt64)) - общее количество строк, если есть возможность быстро определить точное количество строк в таблице, в противном случае `Null` (включая базовую таблицу `Buffer`). 
+- `total_rows` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) - общее количество строк, если есть возможность быстро определить точное количество строк в таблице, в противном случае `NULL` (включая базовую таблицу `Buffer`). -- `total_bytes` (Nullable(UInt64)) - общее количество байт, если можно быстро определить точное количество байт для таблицы на накопителе, в противном случае `Null` (**не включает** в себя никакого базового хранилища). +- `total_bytes` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) - общее количество байт, если можно быстро определить точное количество байт для таблицы на накопителе, в противном случае `NULL` (не включает в себя никакого базового хранилища). - Если таблица хранит данные на диске, возвращает используемое пространство на диске (т. е. сжатое). - Если таблица хранит данные в памяти, возвращает приблизительное количество используемых байт в памяти. -- `lifetime_rows` (Nullable(UInt64)) - общее количество строк, добавленных оператором `INSERT` с момента запуска сервера (только для таблиц `Buffer`). +- `lifetime_rows` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) - общее количество строк, добавленных оператором `INSERT` с момента запуска сервера (только для таблиц `Buffer`). -- `lifetime_bytes` (Nullable(UInt64)) - общее количество байт, добавленных оператором `INSERT` с момента запуска сервера (только для таблиц `Buffer`). +- `lifetime_bytes` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) - общее количество байт, добавленных оператором `INSERT` с момента запуска сервера (только для таблиц `Buffer`). Таблица `system.tables` используется при выполнении запроса `SHOW TABLES`. 
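Например, временные таблицы текущей сессии можно перечислить таким запросом — минимальный набросок, использующий только описанные выше столбцы:

``` sql
-- временные таблицы видны только в той сессии, где они созданы;
-- поле database у них пустое, а is_temporary = 1
SELECT name, engine
FROM system.tables
WHERE is_temporary;
```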
-[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/tables) +**Пример** + +```sql +SELECT * FROM system.tables LIMIT 2 FORMAT Vertical; +``` + +```text +Row 1: +────── +database: system +name: aggregate_function_combinators +uuid: 00000000-0000-0000-0000-000000000000 +engine: SystemAggregateFunctionCombinators +is_temporary: 0 +data_paths: [] +metadata_path: /var/lib/clickhouse/metadata/system/aggregate_function_combinators.sql +metadata_modification_time: 1970-01-01 03:00:00 +dependencies_database: [] +dependencies_table: [] +create_table_query: +engine_full: +partition_key: +sorting_key: +primary_key: +sampling_key: +storage_policy: +total_rows: ᴺᵁᴸᴸ +total_bytes: ᴺᵁᴸᴸ + +Row 2: +────── +database: system +name: asynchronous_metrics +uuid: 00000000-0000-0000-0000-000000000000 +engine: SystemAsynchronousMetrics +is_temporary: 0 +data_paths: [] +metadata_path: /var/lib/clickhouse/metadata/system/asynchronous_metrics.sql +metadata_modification_time: 1970-01-01 03:00:00 +dependencies_database: [] +dependencies_table: [] +create_table_query: +engine_full: +partition_key: +sorting_key: +primary_key: +sampling_key: +storage_policy: +total_rows: ᴺᵁᴸᴸ +total_bytes: ᴺᵁᴸᴸ +``` diff --git a/docs/ru/operations/system-tables/text_log.md b/docs/ru/operations/system-tables/text_log.md index 141c3680c07..97c6ef9e2cd 100644 --- a/docs/ru/operations/system-tables/text_log.md +++ b/docs/ru/operations/system-tables/text_log.md @@ -50,4 +50,3 @@ source_file: /ClickHouse/src/Interpreters/DNSCacheUpdater.cpp; void source_line: 45 ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/text_log) diff --git a/docs/ru/operations/system-tables/trace_log.md b/docs/ru/operations/system-tables/trace_log.md index 88f4b29651b..6d8130c1d00 100644 --- a/docs/ru/operations/system-tables/trace_log.md +++ b/docs/ru/operations/system-tables/trace_log.md @@ -18,10 +18,12 @@ ClickHouse создает эту таблицу когда утсановлен Во время соединения с сервером через `clickhouse-client`, вы видите строку похожую на `Connected to ClickHouse server version 19.18.1 revision 54429.`. Это поле содержит номер после `revision`, но не содержит строку после `version`. -- `timer_type`([Enum8](../../sql-reference/data-types/enum.md)) — тип таймера: +- `trace_type`([Enum8](../../sql-reference/data-types/enum.md)) — тип трассировки: - - `Real` означает wall-clock время. - - `CPU` означает относительное CPU время. + - `Real` — сбор трассировок стека адресов вызова по времени wall-clock. + - `CPU` — сбор трассировок стека адресов вызова по времени CPU. + - `Memory` — сбор выделенной памяти, когда ее размер превышает относительный инкремент. + - `MemorySample` — сбор случайно выделенной памяти. - `thread_number`([UInt32](../../sql-reference/data-types/int-uint.md)) — идентификатор треда. 
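Посмотреть, какие типы трассировок реально собираются, можно, сгруппировав записи по новому столбцу `trace_type` — набросок в предположении, что сбор `trace_log` включён в конфигурации сервера:

``` sql
-- число сэмплов по каждому типу трассировки за сегодня
SELECT trace_type, count() AS samples
FROM system.trace_log
WHERE event_date = today()
GROUP BY trace_type
ORDER BY samples DESC;
```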
@@ -50,4 +52,3 @@ trace: [371912858,371912789,371798468,371799717,371801313,3717 size: 5244400 ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system-tables/trace_log) diff --git a/docs/ru/operations/system-tables/users.md b/docs/ru/operations/system-tables/users.md index c12b91f445f..2a523ae4a9a 100644 --- a/docs/ru/operations/system-tables/users.md +++ b/docs/ru/operations/system-tables/users.md @@ -31,4 +31,3 @@ - [SHOW USERS](../../sql-reference/statements/show.md#show-users-statement) -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/users) diff --git a/docs/ru/operations/system-tables/zookeeper.md b/docs/ru/operations/system-tables/zookeeper.md index 9a2b781d8f3..a6ce62a9d4e 100644 --- a/docs/ru/operations/system-tables/zookeeper.md +++ b/docs/ru/operations/system-tables/zookeeper.md @@ -69,4 +69,3 @@ pzxid: 987021252247 path: /clickhouse/tables/01-08/visits/replicas ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/zookeeper) diff --git a/docs/ru/operations/tips.md b/docs/ru/operations/tips.md index 0a2ca5ecac1..4535767e8e0 100644 --- a/docs/ru/operations/tips.md +++ b/docs/ru/operations/tips.md @@ -246,4 +246,3 @@ script end script ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/tips/) diff --git a/docs/ru/operations/update.md b/docs/ru/operations/update.md index 5c187ed1604..a3e87b52ede 100644 --- a/docs/ru/operations/update.md +++ b/docs/ru/operations/update.md @@ -3,7 +3,7 @@ toc_priority: 47 toc_title: "Обновление ClickHouse" --- -# Обновление ClickHouse {#obnovlenie-clickhouse} +# Обновление ClickHouse {#clickhouse-upgrade} Если ClickHouse установлен с помощью deb-пакетов, выполните следующие команды на сервере: @@ -15,4 +15,17 @@ $ sudo service clickhouse-server restart Если ClickHouse установлен не из рекомендуемых deb-пакетов, используйте соответствующий метод обновления. -ClickHouse не поддерживает распределенное обновление. Операция должна выполняться последовательно на каждом отдельном сервере. Не обновляйте все серверы в кластере одновременно, иначе кластер становится недоступным в течение некоторого времени. +!!! note "Примечание" + Вы можете обновить сразу несколько серверов, кроме случая, когда все реплики одного шарда отключены. + +Обновление ClickHouse до определенной версии: + +**Пример** + +`xx.yy.a.b` — это номер текущей стабильной версии. Последнюю стабильную версию можно узнать [здесь](https://github.com/ClickHouse/ClickHouse/releases). + +```bash +$ sudo apt-get update +$ sudo apt-get install clickhouse-server=xx.yy.a.b clickhouse-client=xx.yy.a.b clickhouse-common-static=xx.yy.a.b +$ sudo service clickhouse-server restart +``` diff --git a/docs/ru/operations/utilities/clickhouse-benchmark.md b/docs/ru/operations/utilities/clickhouse-benchmark.md index 2a883cf3bb5..b4769b17818 100644 --- a/docs/ru/operations/utilities/clickhouse-benchmark.md +++ b/docs/ru/operations/utilities/clickhouse-benchmark.md @@ -160,4 +160,3 @@ localhost:9000, queries 10, QPS: 6.082, RPS: 121959604.568, MiB/s: 930.478, resu 99.990% 0.172 sec.
``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/utilities/clickhouse-benchmark.md) diff --git a/docs/ru/operations/utilities/clickhouse-copier.md b/docs/ru/operations/utilities/clickhouse-copier.md index 243ad7f379b..aa4fd68f8e8 100644 --- a/docs/ru/operations/utilities/clickhouse-copier.md +++ b/docs/ru/operations/utilities/clickhouse-copier.md @@ -181,4 +181,3 @@ $ clickhouse-copier --daemon --config zookeeper.xml --task-path /task/path --bas `clickhouse-copier` отслеживает изменения `/task/path/description` и применяет их «на лету». Если вы поменяете, например, значение `max_workers`, то количество процессов, выполняющих задания, также изменится. -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/utils/clickhouse-copier/) diff --git a/docs/ru/operations/utilities/clickhouse-local.md b/docs/ru/operations/utilities/clickhouse-local.md index 15d069c9acf..682dc0b5ace 100644 --- a/docs/ru/operations/utilities/clickhouse-local.md +++ b/docs/ru/operations/utilities/clickhouse-local.md @@ -14,9 +14,9 @@ toc_title: clickhouse-local !!! warning "Warning" Мы не рекомендуем подключать серверную конфигурацию к `clickhouse-local`, поскольку данные можно легко повредить неосторожными действиями. -Для временных данных по умолчанию создается специальный каталог. Если вы хотите обойти это действие, каталог данных можно указать с помощью опции `-- --path`. +Для временных данных по умолчанию создается специальный каталог. -## Вызов программы {#vyzov-programmy} +## Вызов программы {#usage} Основной формат вызова: @@ -31,15 +31,23 @@ $ clickhouse-local --structure "table_structure" --input-format "format_of_incom - `-if`, `--input-format` — формат входящих данных. По умолчанию — `TSV`. - `-f`, `--file` — путь к файлу с данными. По умолчанию — `stdin`. - `-q`, `--query` — запросы на выполнение. Разделитель запросов — `;`. +- `-qf`, `--queries-file` — путь к файлу с запросами для выполнения. Необходимо задать либо параметр `query`, либо `queries-file`. - `-N`, `--table` — имя таблицы, в которую будут помещены входящие данные. По умолчанию - `table`. - `-of`, `--format`, `--output-format` — формат выходных данных. По умолчанию — `TSV`. +- `-d`, `--database` — база данных по умолчанию. Если не указано, используется значение `_local`. - `--stacktrace` — вывод отладочной информации при исключениях. +- `--echo` — перед выполнением запрос выводится в консоль. - `--verbose` — подробный вывод при выполнении запроса. -- `-s` — отключает вывод системных логов в `stderr`. -- `--config-file` — путь к файлу конфигурации. По умолчанию `clickhouse-local` запускается с пустой конфигурацией. Конфигурационный файл имеет тот же формат, что и для сервера ClickHouse и в нём можно использовать все конфигурационные параметры сервера. Обычно подключение конфигурации не требуется, если требуется установить отдельный параметр, то это можно сделать ключом с именем параметра. +- `--logger.console` — логирование действий в консоль. +- `--logger.log` — логирование действий в файл с указанным именем. +- `--logger.level` — уровень логирования. +- `--ignore-error` — не прекращать обработку, если запрос выдал ошибку. +- `-c`, `--config-file` — путь к файлу конфигурации. По умолчанию `clickhouse-local` запускается с пустой конфигурацией. Конфигурационный файл имеет тот же формат, что и для сервера ClickHouse, и в нём можно использовать все конфигурационные параметры сервера.
Обычно подключение конфигурации не требуется; если требуется установить отдельный параметр, то это можно сделать ключом с именем параметра. +- `--no-system-tables` — запуск без использования системных таблиц. - `--help` — вывод справочной информации о `clickhouse-local`. +- `-V`, `--version` — вывод текущей версии и выход. -## Примеры вызова {#primery-vyzova} +## Примеры вызова {#examples} ``` bash $ echo -e "1,2\n3,4" | clickhouse-local --structure "a Int64, b Int64" \ @@ -102,4 +110,3 @@ Read 186 rows, 4.15 KiB in 0.035 sec., 5302 rows/sec., 118.34 KiB/sec. ... ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/utils/clickhouse-local/) diff --git a/docs/ru/operations/utilities/index.md b/docs/ru/operations/utilities/index.md index 8b533c29ff5..fa257fb4b1a 100644 --- a/docs/ru/operations/utilities/index.md +++ b/docs/ru/operations/utilities/index.md @@ -9,4 +9,3 @@ toc_title: "Обзор" - [clickhouse-local](clickhouse-local.md) - [clickhouse-copier](clickhouse-copier.md) - копирует (и перешардирует) данные с одного кластера на другой. -[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/utils/) diff --git a/docs/ru/sql-reference/aggregate-functions/combinators.md b/docs/ru/sql-reference/aggregate-functions/combinators.md index 3b35716ec27..74f9d1c1c05 100644 --- a/docs/ru/sql-reference/aggregate-functions/combinators.md +++ b/docs/ru/sql-reference/aggregate-functions/combinators.md @@ -27,6 +27,40 @@ toc_title: "Комбинаторы агрегатных функций" Комбинаторы -If и -Array можно сочетать. При этом должен сначала идти Array, а потом If. Примеры: `uniqArrayIf(arr, cond)`, `quantilesTimingArrayIf(level1, level2)(arr, cond)`. Из-за такого порядка получается, что аргумент cond не должен быть массивом. +## -SimpleState {#agg-functions-combinator-simplestate} + +При использовании этого комбинатора агрегатная функция возвращает то же значение, но типа [SimpleAggregateFunction(...)](../../sql-reference/data-types/simpleaggregatefunction.md). Текущее значение функции может храниться в таблице для последующей работы с таблицами семейства [AggregatingMergeTree](../../engines/table-engines/mergetree-family/aggregatingmergetree.md). + +**Синтаксис** + +``` sql +SimpleState(x) +``` + +**Аргументы** + +- `x` — аргументы агрегатной функции. + +**Возвращаемое значение** + +Значение агрегатной функции типа `SimpleAggregateFunction(...)`. + +**Пример** + +Запрос: + +``` sql +WITH anySimpleState(number) AS c SELECT toTypeName(c), c FROM numbers(1); +``` + +Результат: + +``` text +┌─toTypeName(c)────────────────────────┬─c─┐ │ SimpleAggregateFunction(any, UInt64) │ 0 │ └──────────────────────────────────────┴───┘ ``` + ## -State {#state} В случае применения этого комбинатора, агрегатная функция возвращает не готовое значение (например, в случае функции [uniq](reference/uniq.md#agg_function-uniq) — количество уникальных значений), а промежуточное состояние агрегации (например, в случае функции `uniq` — хэш-таблицу для расчёта количества уникальных значений), которое имеет тип `AggregateFunction(...)` и может использоваться для дальнейшей обработки или может быть сохранено в таблицу для последующей доагрегации. @@ -70,9 +104,9 @@ toc_title: "Комбинаторы агрегатных функций" OrDefault(x) ``` -**Параметры** +**Аргументы** -- `x` — Параметры агрегатной функции. +- `x` — аргументы агрегатной функции. **Возвращаемые значения**
**Возвращаемые значения** -- Результат агрегатной функции, преобразованный в тип данных `Nullable`. -- `NULL`, если у агрегатной функции нет входных данных. +- Результат агрегатной функции, преобразованный в тип данных `Nullable`. +- `NULL`, если у агрегатной функции нет входных данных. Тип: `Nullable(aggregate function return type)`. @@ -188,7 +222,7 @@ FROM Resample(start, end, step)(, resampling_key) ``` -**Параметры** +**Аргументы** - `start` — начальное значение для интервала значений `resampling_key`. - `stop` — конечное значение для интервала значений `resampling_key`. Интервал не включает значение `stop` (`[start, stop)`). @@ -247,5 +281,3 @@ FROM people │ [3,2] │ [11.5,12.949999809265137] │ └────────┴───────────────────────────┘ ``` - -[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/agg_functions/combinators/) diff --git a/docs/ru/sql-reference/aggregate-functions/index.md b/docs/ru/sql-reference/aggregate-functions/index.md index 3c931222f58..7afb6a374a7 100644 --- a/docs/ru/sql-reference/aggregate-functions/index.md +++ b/docs/ru/sql-reference/aggregate-functions/index.md @@ -57,4 +57,3 @@ SELECT groupArray(y) FROM t_null_big `groupArray` не включает `NULL` в результирующий массив. -[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/aggregate-functions/) diff --git a/docs/ru/sql-reference/aggregate-functions/parametric-functions.md b/docs/ru/sql-reference/aggregate-functions/parametric-functions.md index 61518cb6f02..e5162b63b88 100644 --- a/docs/ru/sql-reference/aggregate-functions/parametric-functions.md +++ b/docs/ru/sql-reference/aggregate-functions/parametric-functions.md @@ -11,14 +11,19 @@ toc_title: "Параметрические агрегатные функции" Рассчитывает адаптивную гистограмму. Не гарантирует точного результата. - histogram(number_of_bins)(values) +``` sql +histogram(number_of_bins)(values) +``` Функция использует [A Streaming Parallel Decision Tree Algorithm](http://jmlr.org/papers/volume11/ben-haim10a/ben-haim10a.pdf). Границы столбцов устанавливаются по мере поступления новых данных в функцию. В общем случае столбцы имеют разную ширину. +**Аргументы** + +`values` — [выражение](../syntax.md#syntax-expressions), предоставляющее входные значения. + +**Параметры** +`number_of_bins` — максимальное количество корзин в гистограмме. Функция автоматически вычисляет количество корзин. Она пытается получить указанное количество корзин, но если не получилось, то в результате корзин будет меньше. -`values` — [выражение](../syntax.md#syntax-expressions), предоставляющее входные значения. **Возвращаемые значения** @@ -87,14 +92,16 @@ sequenceMatch(pattern)(timestamp, cond1, cond2, ...) !!! warning "Предупреждение" События, произошедшие в одну и ту же секунду, располагаются в последовательности в неопределенном порядке, что может повлиять на результат работы функции. -**Параметры** - -- `pattern` — строка с шаблоном. Смотрите [Синтаксис шаблонов](#sequence-function-pattern-syntax). +**Аргументы** - `timestamp` — столбец, содержащий метки времени. Типичный тип данных столбца — `Date` или `DateTime`. Также можно использовать любой из поддержанных типов данных [UInt](../../sql-reference/aggregate-functions/parametric-functions.md). - `cond1`, `cond2` — условия, описывающие цепочку событий. Тип данных — `UInt8`. Можно использовать до 32 условий. Функция учитывает только те события, которые указаны в условиях. Функция пропускает данные из последовательности, если они не описаны ни в одном из условий. +**Параметры** + +- `pattern` — строка с шаблоном.
Смотрите [Синтаксис шаблонов](#sequence-function-pattern-syntax). + **Возвращаемые значения** - 1, если цепочка событий, соответствующая шаблону, найдена. @@ -174,14 +181,16 @@ SELECT sequenceMatch('(?1)(?2)')(time, number = 1, number = 2, number = 4) FROM sequenceCount(pattern)(timestamp, cond1, cond2, ...) ``` -**Параметры** - -- `pattern` — строка с шаблоном. Смотрите [Синтаксис шаблонов](#sequence-function-pattern-syntax). +**Аргументы** - `timestamp` — столбец, содержащий метки времени. Типичный тип данных столбца — `Date` или `DateTime`. Также можно использовать любой из поддержанных типов данных [UInt](../../sql-reference/aggregate-functions/parametric-functions.md). - `cond1`, `cond2` — условия, описывающие цепочку событий. Тип данных — `UInt8`. Можно использовать до 32 условий. Функция учитывает только те события, которые указаны в условиях. Функция пропускает данные из последовательности, если они не описаны ни в одном из условий. +**Параметры** + +- `pattern` — строка с шаблоном. Смотрите [Синтаксис шаблонов](#sequence-function-pattern-syntax). + **Возвращаемое значение** - Число непересекающихся цепочек событий, соответствующих шаблону. @@ -234,15 +243,21 @@ SELECT sequenceCount('(?1).*(?2)')(time, number = 1, number = 2) FROM t **Синтаксис** ``` sql -windowFunnel(window, [mode])(timestamp, cond1, cond2, ..., condN) +windowFunnel(window, [mode, [mode, ... ]])(timestamp, cond1, cond2, ..., condN) ``` +**Аргументы** + +- `timestamp` — имя столбца, содержащего временные отметки. [Date](../../sql-reference/aggregate-functions/parametric-functions.md), [DateTime](../../sql-reference/aggregate-functions/parametric-functions.md#data_type-datetime) и другие параметры с типом `Integer`. В случае хранения меток времени в столбцах с типом `UInt64`, максимально допустимое значение соответствует ограничению для типа `Int64`, т.е. равно `2^63-1`. +- `cond` — условия или данные, описывающие цепочку событий. [UInt8](../../sql-reference/aggregate-functions/parametric-functions.md). + +**Параметры** - `window` — ширина скользящего окна по времени. Единица измерения зависит от `timestamp` и может варьироваться. Должно соблюдаться условие `timestamp события cond2 <= timestamp события cond1 + window`. -- `mode` - необязательный параметр. Если установлено значение `'strict'`, то функция `windowFunnel()` применяет условия только для уникальных значений. -- `timestamp` — имя столбца, содержащего временные отметки. [Date](../../sql-reference/aggregate-functions/parametric-functions.md), [DateTime](../../sql-reference/aggregate-functions/parametric-functions.md#data_type-datetime) и другие параметры с типом `Integer`. В случае хранения меток времени в столбцах с типом `UInt64`, максимально допустимое значение соответствует ограничению для типа `Int64`, т.е. равно `2^63-1`. -- `cond` — условия или данные, описывающие цепочку событий. [UInt8](../../sql-reference/aggregate-functions/parametric-functions.md). +- `mode` — необязательный параметр. Может быть установлено несколько значений одновременно (см. набросок после списка). + - `'strict'` — не учитывать подряд идущие повторяющиеся события. + - `'strict_order'` — запрещает посторонние события в искомой последовательности. Например, при поиске цепочки `A->B->C` в `A->B->D->C` поиск будет остановлен на `D` и функция вернет 2. + - `'strict_increase'` — условия применяются только для событий со строго возрастающими временными метками.
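Например, режим `'strict_order'` можно проиллюстрировать таким наброском (имена таблицы `funnel_events` и её столбцов здесь условные):

``` sql
-- сколько пользователей дошло до каждого шага цепочки A -> B -> C
-- в окне 3600; постороннее событие обрывает подсчет цепочки
SELECT level, count() AS users
FROM
(
    SELECT
        user_id,
        windowFunnel(3600, 'strict_order')(event_time, event = 'A', event = 'B', event = 'C') AS level
    FROM funnel_events
    GROUP BY user_id
)
GROUP BY level
ORDER BY level ASC;
```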
**Возвращаемое значение** @@ -306,7 +321,7 @@ ORDER BY level ASC Функция принимает набор (от 1 до 32) логических условий, как в [WHERE](../../sql-reference/statements/select/where.md#select-where), и применяет их к заданному набору данных. -Условия, кроме первого, применяются попарно: результат второго будет истинным, если истинно первое и второе, третьего - если истинно первое и третье и т. д. +Условия, кроме первого, применяются попарно: результат второго будет истинным, если истинно первое и второе, третьего - если истинно первое и третье и т.д. **Синтаксис** @@ -314,7 +329,7 @@ ORDER BY level ASC retention(cond1, cond2, ..., cond32) ``` -**Параметры** +**Аргументы** - `cond` — вычисляемое условие или выражение, которое возвращает `UInt8` результат (1/0). @@ -481,4 +496,3 @@ FROM Решение: пишем в запросе GROUP BY SearchPhrase HAVING uniqUpTo(4)(UserID) >= 5 ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/agg_functions/parametric_functions/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/any.md b/docs/ru/sql-reference/aggregate-functions/reference/any.md index 38c412813ab..6142b9a2092 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/any.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/any.md @@ -12,4 +12,3 @@ toc_priority: 6 При наличии в запросе `SELECT` секции `GROUP BY` или хотя бы одной агрегатной функции, ClickHouse (в отличие от, например, MySQL) требует, чтобы все выражения в секциях `SELECT`, `HAVING`, `ORDER BY` вычислялись из ключей или из агрегатных функций. То есть, каждый выбираемый из таблицы столбец, должен использоваться либо в ключах, либо внутри агрегатных функций. Чтобы получить поведение, как в MySQL, вы можете поместить остальные столбцы в агрегатную функцию `any`. -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/any/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/anyheavy.md b/docs/ru/sql-reference/aggregate-functions/reference/anyheavy.md index 19fda7f64b7..bb7a01a47f3 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/anyheavy.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/anyheavy.md @@ -29,4 +29,3 @@ FROM ontime └───────┘ ``` -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/anyheavy/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/anylast.md b/docs/ru/sql-reference/aggregate-functions/reference/anylast.md index da68c926d43..7be380461f7 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/anylast.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/anylast.md @@ -7,4 +7,3 @@ toc_priority: 104 Выбирает последнее попавшееся значение. Результат так же недетерминирован, как и для функции [any](../../../sql-reference/aggregate-functions/reference/any.md). -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/anylast/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/argmax.md b/docs/ru/sql-reference/aggregate-functions/reference/argmax.md index f44e65831a9..edad26ee232 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/argmax.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/argmax.md @@ -20,20 +20,20 @@ argMax(arg, val) argMax(tuple(arg, val)) ``` -**Параметры** +**Аргументы** - `arg` — аргумент. - `val` — значение. **Возвращаемое значение** -- Значение `arg`, соответствующее максимальному значению `val`. 
+- значение `arg`, соответствующее максимальному значению `val`. Тип: соответствует типу `arg`. Если передан кортеж: -- Кортеж `(arg, val)` c максимальным значением `val` и соответствующим ему `arg`. +- кортеж `(arg, val)` c максимальным значением `val` и соответствующим ему `arg`. Тип: [Tuple](../../../sql-reference/data-types/tuple.md). @@ -52,15 +52,14 @@ argMax(tuple(arg, val)) Запрос: ``` sql -SELECT argMax(user, salary), argMax(tuple(user, salary)) FROM salary; +SELECT argMax(user, salary), argMax(tuple(user, salary), salary), argMax(tuple(user, salary)) FROM salary; ``` Результат: ``` text -┌─argMax(user, salary)─┬─argMax(tuple(user, salary))─┐ -│ director │ ('director',5000) │ -└──────────────────────┴─────────────────────────────┘ +┌─argMax(user, salary)─┬─argMax(tuple(user, salary), salary)─┬─argMax(tuple(user, salary))─┐ +│ director │ ('director',5000) │ ('director',5000) │ +└──────────────────────┴─────────────────────────────────────┴─────────────────────────────┘ ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/aggregate-functions/reference/argmax/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/argmin.md b/docs/ru/sql-reference/aggregate-functions/reference/argmin.md index 8c25b79f92a..dc54c424fb3 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/argmin.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/argmin.md @@ -20,7 +20,7 @@ argMin(arg, val) argMin(tuple(arg, val)) ``` -**Параметры** +**Аргументы** - `arg` — аргумент. - `val` — значение. @@ -63,4 +63,3 @@ SELECT argMin(user, salary), argMin(tuple(user, salary)) FROM salary; └──────────────────────┴─────────────────────────────┘ ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/aggregate-functions/reference/argmin/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/avg.md b/docs/ru/sql-reference/aggregate-functions/reference/avg.md index b0bee64ec66..c5e1dec14e0 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/avg.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/avg.md @@ -4,8 +4,60 @@ toc_priority: 5 # avg {#agg_function-avg} -Вычисляет среднее. -Работает только для чисел. -Результат всегда Float64. +Вычисляет среднее арифметическое. + +**Синтаксис** + +``` sql +avg(x) +``` + +**Аргументы** + +- `x` — входное значение типа [Integer](../../../sql-reference/data-types/int-uint.md), [Float](../../../sql-reference/data-types/float.md) или [Decimal](../../../sql-reference/data-types/decimal.md). + +**Возвращаемое значение** + +- среднее арифметическое, всегда типа [Float64](../../../sql-reference/data-types/float.md). +- `NaN`, если входное значение `x` — пустое. 
+ +**Пример** + +Запрос: + +``` sql +SELECT avg(x) FROM values('x Int8', 0, 1, 2, 3, 4, 5); +``` + +Результат: + +``` text +┌─avg(x)─┐ +│ 2.5 │ +└────────┘ +``` + +**Пример** + +Создайте временную таблицу: + +Запрос: + +``` sql +CREATE TABLE test (t UInt8) ENGINE = Memory; +``` + +Выполните запрос: + +``` sql +SELECT avg(t) FROM test; +``` + +Результат: + +``` text +┌─avg(t)─┐ +│ nan │ +└────────┘ +``` -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/avg/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/avgweighted.md b/docs/ru/sql-reference/aggregate-functions/reference/avgweighted.md index 72e6ca5c88c..291abbfa3fb 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/avgweighted.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/avgweighted.md @@ -12,10 +12,10 @@ toc_priority: 107 avgWeighted(x, weight) ``` -**Параметры** +**Аргументы** -- `x` — Значения. [Целые числа](../../../sql-reference/data-types/int-uint.md) или [числа с плавающей запятой](../../../sql-reference/data-types/float.md). -- `weight` — Веса отдельных значений. [Целые числа](../../../sql-reference/data-types/int-uint.md) или [числа с плавающей запятой](../../../sql-reference/data-types/float.md). +- `x` — значения. [Целые числа](../../../sql-reference/data-types/int-uint.md) или [числа с плавающей запятой](../../../sql-reference/data-types/float.md). +- `weight` — веса отдельных значений. [Целые числа](../../../sql-reference/data-types/int-uint.md) или [числа с плавающей запятой](../../../sql-reference/data-types/float.md). Типы параметров должны совпадать. @@ -43,4 +43,3 @@ FROM values('x Int8, w Int8', (4, 1), (1, 0), (10, 2)) └────────────────────────┘ ``` -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/avgweighted/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/corr.md b/docs/ru/sql-reference/aggregate-functions/reference/corr.md index 6d631241f6a..7522dcebd0b 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/corr.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/corr.md @@ -11,4 +11,3 @@ toc_priority: 107 !!! note "Примечание" Функция использует вычислительно неустойчивый алгоритм. Если для ваших расчётов необходима [вычислительная устойчивость](https://ru.wikipedia.org/wiki/Вычислительная_устойчивость), используйте функцию `corrStable`. Она работает медленнее, но обеспечивает меньшую вычислительную ошибку. -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/corr/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/count.md b/docs/ru/sql-reference/aggregate-functions/reference/count.md index d99c3b2aeb2..06cf66bd8bd 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/count.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/count.md @@ -4,14 +4,14 @@ toc_priority: 1 # count {#agg_function-count} -Вычисляет количество строк или не NULL значений . +Вычисляет количество строк или не NULL значений. ClickHouse поддерживает следующие виды синтаксиса для `count`: - `count(expr)` или `COUNT(DISTINCT expr)`. - `count()` или `COUNT(*)`. Синтаксис `count()` специфичен для ClickHouse. -**Параметры** +**Аргументы** Функция может принимать: @@ -21,7 +21,7 @@ ClickHouse поддерживает следующие виды синтакси **Возвращаемое значение** - Если функция вызывается без параметров, она вычисляет количество строк.
-- Если передаётся [выражение](../../syntax.md#syntax-expressions) , то функция вычисляет количество раз, когда выражение возвращает не NULL. Если выражение возвращает значение типа [Nullable](../../../sql-reference/data-types/nullable.md), то результат `count` не становится `Nullable`. Функция возвращает 0, если выражение возвращает `NULL` для всех строк. +- Если передаётся [выражение](../../syntax.md#syntax-expressions), то функция подсчитывает количество раз, когда выражение не равно NULL. Если выражение имеет тип [Nullable](../../../sql-reference/data-types/nullable.md), то результат `count` не становится `Nullable`. Функция возвращает 0, если выражение равно `NULL` для всех строк. В обоих случаях тип возвращаемого значения [UInt64](../../../sql-reference/data-types/int-uint.md). @@ -69,4 +69,3 @@ SELECT count(DISTINCT num) FROM t Этот пример показывает, что `count(DISTINCT num)` выполняется с помощью функции `uniqExact` в соответствии со значением настройки `count_distinct_implementation`. -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/count/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/covarpop.md b/docs/ru/sql-reference/aggregate-functions/reference/covarpop.md index e30b19924f9..1438fefbd8e 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/covarpop.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/covarpop.md @@ -11,4 +11,3 @@ toc_priority: 36 !!! note "Примечание" Функция использует вычислительно неустойчивый алгоритм. Если для ваших расчётов необходима [вычислительная устойчивость](https://ru.wikipedia.org/wiki/Вычислительная_устойчивость), используйте функцию `covarPopStable`. Она работает медленнее, но обеспечивает меньшую вычислительную ошибку. -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/covarpop/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/covarsamp.md b/docs/ru/sql-reference/aggregate-functions/reference/covarsamp.md index 7fa9a1d3f2c..b4cea16f4c0 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/covarsamp.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/covarsamp.md @@ -13,4 +13,3 @@ toc_priority: 37 !!! note "Примечание" Функция использует вычислительно неустойчивый алгоритм. Если для ваших расчётов необходима [вычислительная устойчивость](https://ru.wikipedia.org/wiki/Вычислительная_устойчивость), используйте функцию `covarSampStable`. Она работает медленнее, но обеспечивает меньшую вычислительную ошибку. -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/covarsamp/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/deltasum.md b/docs/ru/sql-reference/aggregate-functions/reference/deltasum.md new file mode 100644 index 00000000000..b025a248f3c --- /dev/null +++ b/docs/ru/sql-reference/aggregate-functions/reference/deltasum.md @@ -0,0 +1,69 @@ +--- +toc_priority: 141 +--- + +# deltaSum {#agg_functions-deltasum} + +Суммирует арифметическую разницу между последовательными строками. Если разница отрицательна — она будет проигнорирована. + +**Синтаксис** + +``` sql +deltaSum(value) +``` + +**Аргументы** + +- `value` — входные значения, должны быть типа [Integer](../../data-types/int-uint.md) или [Float](../../data-types/float.md). + +**Возвращаемое значение** + +- накопленная арифметическая разница, типа `Integer` или `Float`. 
+ +**Примеры** + +Запрос: + +``` sql +SELECT deltaSum(arrayJoin([1, 2, 3])); +``` + +Результат: + +``` text +┌─deltaSum(arrayJoin([1, 2, 3]))─┐ +│ 2 │ +└────────────────────────────────┘ +``` + +Запрос: + +``` sql +SELECT deltaSum(arrayJoin([1, 2, 3, 0, 3, 4, 2, 3])); +``` + +Результат: + +``` text +┌─deltaSum(arrayJoin([1, 2, 3, 0, 3, 4, 2, 3]))─┐ +│ 7 │ +└───────────────────────────────────────────────┘ +``` + +Запрос: + +``` sql +SELECT deltaSum(arrayJoin([2.25, 3, 4.5])); +``` + +Результат: + +``` text +┌─deltaSum(arrayJoin([2.25, 3, 4.5]))─┐ +│ 2.25 │ +└─────────────────────────────────────┘ +``` + +## Смотрите также {#see-also} + +- [runningDifference](../../functions/other-functions.md#runningdifferencex) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/grouparray.md b/docs/ru/sql-reference/aggregate-functions/reference/grouparray.md index 7640795fc51..370190dbb3c 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/grouparray.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/grouparray.md @@ -14,4 +14,3 @@ toc_priority: 110 В некоторых случаях, вы всё же можете рассчитывать на порядок выполнения запроса. Это — случаи, когда `SELECT` идёт из подзапроса, в котором используется `ORDER BY`. -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/grouparray/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/grouparrayinsertat.md b/docs/ru/sql-reference/aggregate-functions/reference/grouparrayinsertat.md index 5c73bccc2bb..f91d4f19675 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/grouparrayinsertat.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/grouparrayinsertat.md @@ -9,24 +9,24 @@ toc_priority: 112 **Синтаксис** ```sql -groupArrayInsertAt(default_x, size)(x, pos); +groupArrayInsertAt(default_x, size)(x, pos) ``` Если запрос вставляет несколько значений в одну и ту же позицию, то функция ведет себя следующим образом: -- Если запрос выполняется в одном потоке, то используется первое из вставляемых значений. -- Если запрос выполняется в нескольких потоках, то в результирующем массиве может оказаться любое из вставляемых значений. +- Если запрос выполняется в одном потоке, то используется первое из вставляемых значений. +- Если запрос выполняется в нескольких потоках, то в результирующем массиве может оказаться любое из вставляемых значений. -**Параметры** +**Аргументы** -- `x` — Значение, которое будет вставлено. [Выражение](../../syntax.md#syntax-expressions), возвращающее значение одного из [поддерживаемых типов данных](../../../sql-reference/data-types/index.md#data_types). -- `pos` — Позиция, в которую вставляется заданный элемент `x`. Нумерация индексов в массиве начинается с нуля. [UInt32](../../../sql-reference/data-types/int-uint.md#uint8-uint16-uint32-uint64-int8-int16-int32-int64). -- `default_x` — Значение по умолчанию для подстановки на пустые позиции. Опциональный параметр. [Выражение](../../syntax.md#syntax-expressions), возвращающее значение с типом параметра `x`. Если `default_x` не определен, используются [значения по умолчанию](../../../sql-reference/statements/create/table.md#create-default-values). -- `size`— Длина результирующего массива. Опциональный параметр. При использовании этого параметра должно быть указано значение по умолчанию `default_x`. [UInt32](../../../sql-reference/data-types/int-uint.md#uint-ranges). +- `x` — значение, которое будет вставлено.
[Выражение](../../syntax.md#syntax-expressions), возвращающее значение одного из [поддерживаемых типов данных](../../../sql-reference/data-types/index.md#data_types). +- `pos` — позиция, в которую вставляется заданный элемент `x`. Нумерация индексов в массиве начинается с нуля. [UInt32](../../../sql-reference/data-types/int-uint.md#uint8-uint16-uint32-uint64-int8-int16-int32-int64). +- `default_x` — значение по умолчанию для подстановки на пустые позиции. Опциональный параметр. [Выражение](../../syntax.md#syntax-expressions), возвращающее значение с типом параметра `x`. Если `default_x` не определен, используются [значения по умолчанию](../../../sql-reference/statements/create/table.md#create-default-values). +- `size` — длина результирующего массива. Опциональный параметр. При использовании этого параметра должно быть указано значение по умолчанию `default_x`. [UInt32](../../../sql-reference/data-types/int-uint.md#uint-ranges). **Возвращаемое значение** -- Массив со вставленными значениями. +- Массив со вставленными значениями. Тип: [Array](../../../sql-reference/data-types/array.md#data-type-array). @@ -90,4 +90,3 @@ SELECT groupArrayInsertAt(number, 0) FROM numbers_mt(10) SETTINGS max_block_size └───────────────────────────────┘ ``` -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/grouparrayinsertat/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/grouparraymovingavg.md b/docs/ru/sql-reference/aggregate-functions/reference/grouparraymovingavg.md index 6307189c440..5930e8b8484 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/grouparraymovingavg.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/grouparraymovingavg.md @@ -6,12 +6,14 @@ toc_priority: 114 Вычисляет скользящее среднее для входных значений. - groupArrayMovingAvg(numbers_for_summing) - groupArrayMovingAvg(window_size)(numbers_for_summing) +``` sql +groupArrayMovingAvg(numbers_for_summing) +groupArrayMovingAvg(window_size)(numbers_for_summing) +``` Функция может принимать размер окна в качестве параметра. Если окно не указано, то функция использует размер окна, равный количеству строк в столбце. -**Параметры** +**Аргументы** - `numbers_for_summing` — [выражение](../../syntax.md#syntax-expressions), возвращающее значение числового типа. - `window_size` — размер окна. @@ -75,4 +77,3 @@ FROM t └───────────┴──────────────────────────────────┴───────────────────────┘ ``` -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/grouparraymovingavg/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/grouparraymovingsum.md b/docs/ru/sql-reference/aggregate-functions/reference/grouparraymovingsum.md index c95f1b0b0eb..feaef8e79d8 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/grouparraymovingsum.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/grouparraymovingsum.md @@ -13,7 +13,7 @@ groupArrayMovingSum(window_size)(numbers_for_summing) Функция может принимать размер окна в качестве параметра. Если окно не указано, то функция использует размер окна, равный количеству строк в столбце. -**Параметры** +**Аргументы** - `numbers_for_summing` — [выражение](../../syntax.md#syntax-expressions), возвращающее значение числового типа. - `window_size` — размер окна. 
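Минимальный самодостаточный набросок на основе табличной функции `numbers` (окно из двух элементов):

``` sql
-- скользящая сумма с окном 2 по значениям 0..4;
-- ожидаемый результат: [0,1,3,5,7]
SELECT groupArrayMovingSum(2)(number) FROM numbers(5);
```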
@@ -75,4 +75,3 @@ FROM t └────────────┴─────────────────────────────────┴────────────────────────┘ ``` -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/grouparraymovingsum/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/grouparraysample.md b/docs/ru/sql-reference/aggregate-functions/reference/grouparraysample.md index 4c2dafe1a3c..1d58b3397ab 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/grouparraysample.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/grouparraysample.md @@ -12,7 +12,7 @@ toc_priority: 114 groupArraySample(max_size[, seed])(x) ``` -**Параметры** +**Аргументы** - `max_size` — максимальное количество элементов в возвращаемом массиве. [UInt64](../../data-types/int-uint.md). - `seed` — состояние генератора случайных чисел. Необязательный параметр. [UInt64](../../data-types/int-uint.md). Значение по умолчанию: `123456`. diff --git a/docs/ru/sql-reference/aggregate-functions/reference/groupbitand.md b/docs/ru/sql-reference/aggregate-functions/reference/groupbitand.md index 03aff64fecf..b4b862d5716 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/groupbitand.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/groupbitand.md @@ -10,7 +10,7 @@ toc_priority: 125 groupBitAnd(expr) ``` -**Параметры** +**Аргументы** `expr` – выражение, результат которого имеет тип данных `UInt*`. @@ -45,4 +45,3 @@ binary decimal 00000100 = 4 ``` -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/groupbitand/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/groupbitmap.md b/docs/ru/sql-reference/aggregate-functions/reference/groupbitmap.md index a4be18b75ec..4012d3e052e 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/groupbitmap.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/groupbitmap.md @@ -10,7 +10,7 @@ Bitmap или агрегатные вычисления для столбца с groupBitmap(expr) ``` -**Параметры** +**Аргументы** `expr` – выражение, результат которого имеет тип данных `UInt*`. @@ -43,4 +43,3 @@ num 3 ``` -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/groupbitmap/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/groupbitor.md b/docs/ru/sql-reference/aggregate-functions/reference/groupbitor.md index e1afced014f..6967b26e722 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/groupbitor.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/groupbitor.md @@ -10,7 +10,7 @@ toc_priority: 126 groupBitOr(expr) ``` -**Параметры** +**Аргументы** `expr` – выражение, результат которого имеет тип данных `UInt*`. @@ -45,4 +45,3 @@ binary decimal 01111101 = 125 ``` -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/groupbitor/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/groupbitxor.md b/docs/ru/sql-reference/aggregate-functions/reference/groupbitxor.md index a80f86b2a5f..ca565d5a027 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/groupbitxor.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/groupbitxor.md @@ -10,7 +10,7 @@ toc_priority: 127 groupBitXor(expr) ``` -**Параметры** +**Аргументы** `expr` – выражение, результат которого имеет тип данных `UInt*`. 
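Результат из примера статьи можно воспроизвести и без создания таблицы — набросок в предположении, что тестовые данные те же: значения 44, 28, 13 и 85:

``` sql
-- 44 ^ 28 ^ 13 ^ 85 = 104 (01101000 в двоичном виде)
SELECT groupBitXor(num) AS xor_result
FROM (SELECT arrayJoin([44, 28, 13, 85]) AS num);
```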
@@ -45,4 +45,3 @@ binary decimal 01101000 = 104 ``` -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/groupbitxor/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/groupuniqarray.md b/docs/ru/sql-reference/aggregate-functions/reference/groupuniqarray.md index cecc63aef22..7d64b13a203 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/groupuniqarray.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/groupuniqarray.md @@ -10,4 +10,3 @@ toc_priority: 111 Функция `groupUniqArray(max_size)(x)` ограничивает размер результирующего массива до `max_size` элементов. Например, `groupUniqArray(1)(x)` равнозначно `[any(x)]`. -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/groupuniqarray/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/index.md b/docs/ru/sql-reference/aggregate-functions/reference/index.md index e496893a771..1af07623ade 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/index.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/index.md @@ -65,4 +65,3 @@ toc_hidden: true - [stochasticLinearRegression](../../../sql-reference/aggregate-functions/reference/stochasticlinearregression.md) - [stochasticLogisticRegression](../../../sql-reference/aggregate-functions/reference/stochasticlogisticregression.md) -[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/aggregate-functions/reference) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/initializeAggregation.md b/docs/ru/sql-reference/aggregate-functions/reference/initializeAggregation.md index a2e3764193e..3565115d8de 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/initializeAggregation.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/initializeAggregation.md @@ -10,10 +10,10 @@ toc_priority: 150 **Синтаксис** ``` sql -initializeAggregation (aggregate_function, column_1, column_2); +initializeAggregation (aggregate_function, column_1, column_2) ``` -**Параметры** +**Аргументы** - `aggregate_function` — название функции агрегации, состояние которой нужно создать. [String](../../../sql-reference/data-types/string.md#string). - `column_n` — столбец, который передается в функцию агрегации как аргумент. [String](../../../sql-reference/data-types/string.md#string). diff --git a/docs/ru/sql-reference/aggregate-functions/reference/kurtpop.md b/docs/ru/sql-reference/aggregate-functions/reference/kurtpop.md index a00dae51ed6..1a1198b2beb 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/kurtpop.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/kurtpop.md @@ -10,9 +10,9 @@ toc_priority: 153 kurtPop(expr) ``` -**Параметры** +**Аргументы** -`expr` — [Выражение](../../syntax.md#syntax-expressions), возвращающее число. +`expr` — [выражение](../../syntax.md#syntax-expressions), возвращающее число. 
**Возвращаемое значение** @@ -21,7 +21,6 @@ kurtPop(expr) **Пример** ``` sql -SELECT kurtPop(value) FROM series_with_value_column +SELECT kurtPop(value) FROM series_with_value_column; ``` -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/kurtpop/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/kurtsamp.md b/docs/ru/sql-reference/aggregate-functions/reference/kurtsamp.md index 379d74ec0c3..50b48d11b18 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/kurtsamp.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/kurtsamp.md @@ -12,9 +12,9 @@ toc_priority: 154 kurtSamp(expr) ``` -**Параметры** +**Аргументы** -`expr` — [Выражение](../../syntax.md#syntax-expressions), возвращающее число. +`expr` — [выражение](../../syntax.md#syntax-expressions), возвращающее число. **Возвращаемое значение** @@ -23,7 +23,6 @@ kurtSamp(expr) **Пример** ``` sql -SELECT kurtSamp(value) FROM series_with_value_column +SELECT kurtSamp(value) FROM series_with_value_column; ``` -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/kurtsamp/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/mannwhitneyutest.md b/docs/ru/sql-reference/aggregate-functions/reference/mannwhitneyutest.md index a4647ecfb34..9d02bee8622 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/mannwhitneyutest.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/mannwhitneyutest.md @@ -17,16 +17,18 @@ mannWhitneyUTest[(alternative[, continuity_correction])](sample_data, sample_ind Проверяется нулевая гипотеза, что генеральные совокупности стохастически равны. Наряду с двусторонней гипотезой могут быть проверены и односторонние. Для применения U-критерия Манна — Уитни закон распределения генеральных совокупностей не обязан быть нормальным. +**Аргументы** + +- `sample_data` — данные выборок. [Integer](../../../sql-reference/data-types/int-uint.md), [Float](../../../sql-reference/data-types/float.md) или [Decimal](../../../sql-reference/data-types/decimal.md). +- `sample_index` — индексы выборок. [Integer](../../../sql-reference/data-types/int-uint.md). + **Параметры** - `alternative` — альтернативная гипотеза. (Необязательный параметр, по умолчанию: `'two-sided'`.) [String](../../../sql-reference/data-types/string.md). - `'two-sided'`; - `'greater'`; - `'less'`. -- `continuity_correction` - если не 0, то при вычислении p-значения применяется коррекция непрерывности. (Необязательный параметр, по умолчанию: 1.) [UInt64](../../../sql-reference/data-types/int-uint.md). -- `sample_data` — данные выборок. [Integer](../../../sql-reference/data-types/int-uint.md), [Float](../../../sql-reference/data-types/float.md) or [Decimal](../../../sql-reference/data-types/decimal.md). -- `sample_index` — индексы выборок. [Integer](../../../sql-reference/data-types/int-uint.md). - +- `continuity_correction` — если не 0, то при вычислении p-значения применяется коррекция непрерывности. (Необязательный параметр, по умолчанию: 1.) [UInt64](../../../sql-reference/data-types/int-uint.md). 
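Помимо альтернативной гипотезы можно передать и второй параметр. Набросок вызова с отключенной коррекцией непрерывности, на таблице `mww_ttest` из примера ниже:

``` sql
-- двусторонняя гипотеза без коррекции непрерывности
SELECT mannWhitneyUTest('two-sided', 0)(sample_data, sample_index) FROM mww_ttest;
```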
**Возвращаемые значения** @@ -69,4 +71,3 @@ SELECT mannWhitneyUTest('greater')(sample_data, sample_index) FROM mww_ttest; - [U-критерий Манна — Уитни](https://ru.wikipedia.org/wiki/U-%D0%BA%D1%80%D0%B8%D1%82%D0%B5%D1%80%D0%B8%D0%B9_%D0%9C%D0%B0%D0%BD%D0%BD%D0%B0_%E2%80%94_%D0%A3%D0%B8%D1%82%D0%BD%D0%B8) -[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/aggregate-functions/reference/mannwhitneyutest/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/max.md b/docs/ru/sql-reference/aggregate-functions/reference/max.md deleted file mode 100644 index 4ee577471ea..00000000000 --- a/docs/ru/sql-reference/aggregate-functions/reference/max.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -toc_priority: 3 ---- - -# max {#agg_function-max} - -Вычисляет максимум. - -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/max/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/max.md b/docs/ru/sql-reference/aggregate-functions/reference/max.md new file mode 120000 index 00000000000..ae47679c80e --- /dev/null +++ b/docs/ru/sql-reference/aggregate-functions/reference/max.md @@ -0,0 +1 @@ +../../../../en/sql-reference/aggregate-functions/reference/max.md \ No newline at end of file diff --git a/docs/ru/sql-reference/aggregate-functions/reference/median.md b/docs/ru/sql-reference/aggregate-functions/reference/median.md index 803b2309665..a208c21dd21 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/median.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/median.md @@ -40,4 +40,3 @@ SELECT medianDeterministic(val, 1) FROM t └─────────────────────────────┘ ``` -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/median/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/min.md b/docs/ru/sql-reference/aggregate-functions/reference/min.md deleted file mode 100644 index 7b56de3aed4..00000000000 --- a/docs/ru/sql-reference/aggregate-functions/reference/min.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -toc_priority: 2 ---- - -## min {#agg_function-min} - -Вычисляет минимум. - -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/min/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/min.md b/docs/ru/sql-reference/aggregate-functions/reference/min.md new file mode 120000 index 00000000000..61417b347a8 --- /dev/null +++ b/docs/ru/sql-reference/aggregate-functions/reference/min.md @@ -0,0 +1 @@ +../../../../en/sql-reference/aggregate-functions/reference/min.md \ No newline at end of file diff --git a/docs/ru/sql-reference/aggregate-functions/reference/quantile.md b/docs/ru/sql-reference/aggregate-functions/reference/quantile.md index 10fec16ab94..10862e38e00 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/quantile.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/quantile.md @@ -18,10 +18,10 @@ quantile(level)(expr) Алиас: `median`. -**Параметры** +**Аргументы** -- `level` — Уровень квантили. Опционально. Константное значение с плавающей запятой от 0 до 1. Мы рекомендуем использовать значение `level` из диапазона `[0.01, 0.99]`. Значение по умолчанию: 0.5. При `level=0.5` функция вычисляет [медиану](https://ru.wikipedia.org/wiki/Медиана_(статистика)). 
-- `expr` — Выражение над значениями столбца, которое возвращает данные [числовых типов](../../../sql-reference/data-types/index.md#data_types) или типов [Date](../../../sql-reference/data-types/date.md), [DateTime](../../../sql-reference/data-types/datetime.md). +- `level` — уровень квантили. Опционально. Константное значение с плавающей запятой от 0 до 1. Мы рекомендуем использовать значение `level` из диапазона `[0.01, 0.99]`. Значение по умолчанию: 0.5. При `level=0.5` функция вычисляет [медиану](https://ru.wikipedia.org/wiki/Медиана_(статистика)). +- `expr` — выражение, зависящее от значений столбцов, возвращающее данные [числовых типов](../../../sql-reference/data-types/index.md#data_types) или типов [Date](../../../sql-reference/data-types/date.md), [DateTime](../../../sql-reference/data-types/datetime.md). **Возвращаемое значение** @@ -65,4 +65,3 @@ SELECT quantile(val) FROM t - [median](../../../sql-reference/aggregate-functions/reference/median.md#median) - [quantiles](../../../sql-reference/aggregate-functions/reference/quantiles.md#quantiles) -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/quantile/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/quantiledeterministic.md b/docs/ru/sql-reference/aggregate-functions/reference/quantiledeterministic.md index fdbcda821f6..ec308ea239b 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/quantiledeterministic.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/quantiledeterministic.md @@ -18,11 +18,11 @@ quantileDeterministic(level)(expr, determinator) Алиас: `medianDeterministic`. -**Параметры** +**Аргументы** -- `level` — Уровень квантили. Опционально. Константное значение с плавающей запятой от 0 до 1. Мы рекомендуем использовать значение `level` из диапазона `[0.01, 0.99]`. Значение по умолчанию: 0.5. При `level=0.5` функция вычисляет [медиану](https://ru.wikipedia.org/wiki/Медиана_(статистика)). -- `expr` — Выражение над значениями столбца, которое возвращает данные [числовых типов](../../../sql-reference/data-types/index.md#data_types) или типов [Date](../../../sql-reference/data-types/date.md), [DateTime](../../../sql-reference/data-types/datetime.md). -- `determinator` — Число, хэш которого используется при сэмплировании в алгоритме reservoir sampling, чтобы сделать результат детерминированным. В качестве детерминатора можно использовать любое определённое положительное число, например, идентификатор пользователя или события. Если одно и то же значение детерминатора попадается в выборке слишком часто, то функция выдаёт некорректный результат. +- `level` — уровень квантили. Опционально. Константное значение с плавающей запятой от 0 до 1. Мы рекомендуем использовать значение `level` из диапазона `[0.01, 0.99]`. Значение по умолчанию: 0.5. При `level=0.5` функция вычисляет [медиану](https://ru.wikipedia.org/wiki/Медиана_(статистика)). +- `expr` — выражение, зависящее от значений столбцов, возвращающее данные [числовых типов](../../../sql-reference/data-types/index.md#data_types) или типов [Date](../../../sql-reference/data-types/date.md), [DateTime](../../../sql-reference/data-types/datetime.md). +- `determinator` — число, хэш которого используется при сэмплировании в алгоритме «Reservoir sampling», чтобы сделать результат детерминированным. В качестве значения можно использовать любое определённое положительное число, например, идентификатор пользователя или события. 
Если одно и то же значение попадается в выборке слишком часто, то функция выдаёт некорректный результат. **Возвращаемое значение** @@ -65,4 +65,3 @@ SELECT quantileDeterministic(val, 1) FROM t - [median](../../../sql-reference/aggregate-functions/reference/median.md#median) - [quantiles](../../../sql-reference/aggregate-functions/reference/quantiles.md#quantiles) -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/qurntiledeterministic/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/quantileexact.md b/docs/ru/sql-reference/aggregate-functions/reference/quantileexact.md index 4ee815a94fb..82ebae1c14e 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/quantileexact.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/quantileexact.md @@ -18,10 +18,11 @@ quantileExact(level)(expr) Алиас: `medianExact`. -**Параметры** +**Аргументы** + +- `level` — уровень квантили. Опционально. Константное значение с плавающей запятой от 0 до 1. Мы рекомендуем использовать значение `level` из диапазона `[0.01, 0.99]`. Значение по умолчанию: 0.5. При `level=0.5` функция вычисляет [медиану](https://ru.wikipedia.org/wiki/Медиана_(статистика)). +- `expr` — выражение, зависящее от значений столбцов, возвращающее данные [числовых типов](../../../sql-reference/data-types/index.md#data_types) или типов [Date](../../../sql-reference/data-types/date.md), [DateTime](../../../sql-reference/data-types/datetime.md). -- `level` — Уровень квантили. Опционально. Константное значение с плавающей запятой от 0 до 1. Мы рекомендуем использовать значение `level` из диапазона `[0.01, 0.99]`. Значение по умолчанию: 0.5. При `level=0.5` функция вычисляет [медиану](https://ru.wikipedia.org/wiki/Медиана_(статистика)). -- `expr` — Выражение над значениями столбца, которое возвращает данные [числовых типов](../../../sql-reference/data-types/index.md#data_types) или типов [Date](../../../sql-reference/data-types/date.md), [DateTime](../../../sql-reference/data-types/datetime.md). **Возвращаемое значение** @@ -77,10 +78,11 @@ quantileExact(level)(expr) Алиас: `medianExactLow`. -**Параметры** +**Аргументы** + +- `level` — уровень квантили. Опциональный параметр. Константное значение с плавающей запятой от 0 до 1. Мы рекомендуем использовать значение `level` из диапазона `[0.01, 0.99]`. Значение по умолчанию: 0.5. При `level=0.5` функция вычисляет [медиану](https://en.wikipedia.org/wiki/Median). +- `expr` — выражение, зависящее от значений столбцов, возвращающее данные [числовых типов](../../../sql-reference/data-types/index.md#data_types), [Date](../../../sql-reference/data-types/date.md) или [DateTime](../../../sql-reference/data-types/datetime.md). -- `level` — Уровень квантили. Опциональный параметр. Константное занчение с плавающей запятой от 0 до 1. Мы рекомендуем использовать значение `level` из диапазона `[0.01, 0.99]`. Значение по умолчанию: 0.5. При `level=0.5` функция вычисляет [медиану](https://en.wikipedia.org/wiki/Median). -- `expr` — Выражение над значениями столбца, которое возвращает данные [числовых типов](../../../sql-reference/data-types/index.md#data_types), [Date](../../../sql-reference/data-types/date.md) или [DateTime](../../../sql-reference/data-types/datetime.md). **Возвращаемое значение** @@ -127,10 +129,11 @@ quantileExactHigh(level)(expr) Алиас: `medianExactHigh`. -**Параметры** +**Аргументы** + +- `level` — уровень квантили. Опциональный параметр. Константное значение с плавающей запятой от 0 до 1.
Мы рекомендуем использовать значение `level` из диапазона `[0.01, 0.99]`. Значение по умолчанию: 0.5. При `level=0.5` функция вычисляет [медиану](https://en.wikipedia.org/wiki/Median). +- `expr` — выражение, зависящее от значений столбцов, возвращающее данные [числовых типов](../../../sql-reference/data-types/index.md#data_types), [Date](../../../sql-reference/data-types/date.md) или [DateTime](../../../sql-reference/data-types/datetime.md). -- `level` — Уровень квантили. Опциональный параметр. Константное занчение с плавающей запятой от 0 до 1. Мы рекомендуем использовать значение `level` из диапазона `[0.01, 0.99]`. Значение по умолчанию: 0.5. При `level=0.5` функция вычисляет [медиану](https://en.wikipedia.org/wiki/Median). -- `expr` — Выражение над значениями столбца, которое возвращает данные [числовых типов](../../../sql-reference/data-types/index.md#data_types), [Date](../../../sql-reference/data-types/date.md) или [DateTime](../../../sql-reference/data-types/datetime.md). **Возвращаемое значение** @@ -163,4 +166,3 @@ SELECT quantileExactHigh(number) FROM numbers(10) - [median](../../../sql-reference/aggregate-functions/reference/median.md#median) - [quantiles](../../../sql-reference/aggregate-functions/reference/quantiles.md#quantiles) -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/quantileexact/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/quantileexactweighted.md b/docs/ru/sql-reference/aggregate-functions/reference/quantileexactweighted.md index f6982d4566f..3746c328470 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/quantileexactweighted.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/quantileexactweighted.md @@ -18,11 +18,11 @@ quantileExactWeighted(level)(expr, weight) Алиас: `medianExactWeighted`. -**Параметры** +**Аргументы** -- `level` — Уровень квантили. Опционально. Константное значение с плавающей запятой от 0 до 1. Мы рекомендуем использовать значение `level` из диапазона `[0.01, 0.99]`. Значение по умолчанию: 0.5. При `level=0.5` функция вычисляет [медиану](https://ru.wikipedia.org/wiki/Медиана_(статистика)). -- `expr` — Выражение над значениями столбца, которое возвращает данные [числовых типов](../../../sql-reference/data-types/index.md#data_types) или типов [Date](../../../sql-reference/data-types/date.md), [DateTime](../../../sql-reference/data-types/datetime.md). -- `weight` — Столбец с весам элементов последовательности. Вес — это количество повторений элемента в последовательности. +- `level` — уровень квантили. Опционально. Константное значение с плавающей запятой от 0 до 1. Мы рекомендуем использовать значение `level` из диапазона `[0.01, 0.99]`. Значение по умолчанию: 0.5. При `level=0.5` функция вычисляет [медиану](https://ru.wikipedia.org/wiki/Медиана_(статистика)). +- `expr` — выражение, зависящее от значений столбцов, возвращающее данные [числовых типов](../../../sql-reference/data-types/index.md#data_types) или типов [Date](../../../sql-reference/data-types/date.md), [DateTime](../../../sql-reference/data-types/datetime.md). +- `weight` — столбец с весами элементов последовательности. Вес — это количество повторений элемента в последовательности (см. набросок ниже).
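Например, следующий набросок показывает трактовку весов: набор `(1, вес 3), (2, вес 1)` обрабатывается как последовательность `1, 1, 1, 2`. Данные здесь условные и заданы прямо в запросе через табличную функцию `values`:

``` sql
-- Вес — это число повторений значения:
-- пара (1, 3) эквивалентна трём отдельным строкам со значением 1.
SELECT quantileExactWeighted(0.5)(val, w) AS median
FROM values('val UInt32, w UInt32', (1, 3), (2, 1));
-- медиана последовательности 1, 1, 1, 2 равна 1
```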
**Возвращаемое значение** @@ -66,4 +66,3 @@ SELECT quantileExactWeighted(n, val) FROM t - [median](../../../sql-reference/aggregate-functions/reference/median.md#median) - [quantiles](../../../sql-reference/aggregate-functions/reference/quantiles.md#quantiles) -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/quantileexactweited/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/quantiles.md b/docs/ru/sql-reference/aggregate-functions/reference/quantiles.md index 82e806b67fa..671cbc1fc4d 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/quantiles.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/quantiles.md @@ -8,4 +8,3 @@ Syntax: `quantiles(level1, level2, …)(x)` All the quantile functions also have corresponding quantiles functions: `quantiles`, `quantilesDeterministic`, `quantilesTiming`, `quantilesTimingWeighted`, `quantilesExact`, `quantilesExactWeighted`, `quantilesTDigest`. These functions calculate all the quantiles of the listed levels in one pass, and return an array of the resulting values. -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/quantiles/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/quantiletdigest.md b/docs/ru/sql-reference/aggregate-functions/reference/quantiletdigest.md index f372e308e73..130ff7566ba 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/quantiletdigest.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/quantiletdigest.md @@ -20,10 +20,10 @@ quantileTDigest(level)(expr) Алиас: `medianTDigest`. -**Параметры** +**Аргументы** -- `level` — Уровень квантили. Опционально. Константное значение с плавающей запятой от 0 до 1. Мы рекомендуем использовать значение `level` из диапазона `[0.01, 0.99]`. Значение по умолчанию: 0.5. При `level=0.5` функция вычисляет [медиану](https://ru.wikipedia.org/wiki/Медиана_(статистика)). -- `expr` — Выражение над значениями столбца, которое возвращает данные [числовых типов](../../../sql-reference/data-types/index.md#data_types) или типов [Date](../../../sql-reference/data-types/date.md), [DateTime](../../../sql-reference/data-types/datetime.md). +- `level` — уровень квантили. Опционально. Константное значение с плавающей запятой от 0 до 1. Мы рекомендуем использовать значение `level` из диапазона `[0.01, 0.99]`. Значение по умолчанию: 0.5. При `level=0.5` функция вычисляет [медиану](https://ru.wikipedia.org/wiki/Медиана_(статистика)). +- `expr` — выражение, зависящее от значений столбцов, возвращающее данные [числовых типов](../../../sql-reference/data-types/index.md#data_types) или типов [Date](../../../sql-reference/data-types/date.md), [DateTime](../../../sql-reference/data-types/datetime.md). 
**Возвращаемое значение** @@ -56,4 +56,3 @@ SELECT quantileTDigest(number) FROM numbers(10) - [median](../../../sql-reference/aggregate-functions/reference/median.md#median) - [quantiles](../../../sql-reference/aggregate-functions/reference/quantiles.md#quantiles) -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/qurntiledigest/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/quantiletdigestweighted.md b/docs/ru/sql-reference/aggregate-functions/reference/quantiletdigestweighted.md index b6dd846967b..f7239be0ba5 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/quantiletdigestweighted.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/quantiletdigestweighted.md @@ -20,11 +20,11 @@ quantileTDigestWeighted(level)(expr, weight) Алиас: `medianTDigest`. -**Параметры** +**Аргументы** -- `level` — Уровень квантили. Опционально. Константное значение с плавающей запятой от 0 до 1. Мы рекомендуем использовать значение `level` из диапазона `[0.01, 0.99]`. Значение по умолчанию: 0.5. При `level=0.5` функция вычисляет [медиану](https://ru.wikipedia.org/wiki/Медиана_(статистика)). -- `expr` — Выражение над значениями столбца, которое возвращает данные [числовых типов](../../../sql-reference/data-types/index.md#data_types) или типов [Date](../../../sql-reference/data-types/date.md), [DateTime](../../../sql-reference/data-types/datetime.md). -- `weight` — Столбец с весам элементов последовательности. Вес — это количество повторений элемента в последовательности. +- `level` — уровень квантили. Опционально. Константное значение с плавающей запятой от 0 до 1. Мы рекомендуем использовать значение `level` из диапазона `[0.01, 0.99]`. Значение по умолчанию: 0.5. При `level=0.5` функция вычисляет [медиану](https://ru.wikipedia.org/wiki/Медиана_(статистика)). +- `expr` — выражение, зависящее от значений столбцов, возвращающее данные [числовых типов](../../../sql-reference/data-types/index.md#data_types) или типов [Date](../../../sql-reference/data-types/date.md), [DateTime](../../../sql-reference/data-types/datetime.md). +- `weight` — столбец с весами элементов последовательности. Вес — это количество повторений элемента в последовательности. **Возвращаемое значение** @@ -57,4 +57,3 @@ SELECT quantileTDigestWeighted(number, 1) FROM numbers(10) - [median](../../../sql-reference/aggregate-functions/reference/median.md#median) - [quantiles](../../../sql-reference/aggregate-functions/reference/quantiles.md#quantiles) -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/quantiledigestweighted/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/quantiletiming.md b/docs/ru/sql-reference/aggregate-functions/reference/quantiletiming.md index 32e5e6ce31b..03d448a5d63 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/quantiletiming.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/quantiletiming.md @@ -18,11 +18,11 @@ quantileTiming(level)(expr) Алиас: `medianTiming`. -**Параметры** +**Аргументы** -- `level` — Уровень квантили. Опционально. Константное значение с плавающей запятой от 0 до 1. Мы рекомендуем использовать значение `level` из диапазона `[0.01, 0.99]`. Значение по умолчанию: 0.5. При `level=0.5` функция вычисляет [медиану](https://ru.wikipedia.org/wiki/Медиана_(статистика)). +- `level` — уровень квантили. Опционально. Константное значение с плавающей запятой от 0 до 1.
Мы рекомендуем использовать значение `level` из диапазона `[0.01, 0.99]`. Значение по умолчанию: 0.5. При `level=0.5` функция вычисляет [медиану](https://ru.wikipedia.org/wiki/Медиана_(статистика)). -- `expr` — [Выражение](../../syntax.md#syntax-expressions) над значения столбца, которые возвращают данные типа [Float\*](../../../sql-reference/data-types/float.md). +- `expr` — [выражение](../../syntax.md#syntax-expressions), зависящее от значений столбцов, возвращающее данные типа [Float\*](../../../sql-reference/data-types/float.md). - Если в функцию передать отрицательные значения, то её поведение не определено. - Если значение больше, чем 30 000 (например, время загрузки страницы превышает 30 секунд), то оно приравнивается к 30 000. @@ -85,4 +85,3 @@ SELECT quantileTiming(response_time) FROM t - [median](../../../sql-reference/aggregate-functions/reference/median.md#median) - [quantiles](../../../sql-reference/aggregate-functions/reference/quantiles.md#quantiles) -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/quantiletiming/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/quantiletimingweighted.md b/docs/ru/sql-reference/aggregate-functions/reference/quantiletimingweighted.md index 4a7fcc666d5..a50e09668ab 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/quantiletimingweighted.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/quantiletimingweighted.md @@ -18,16 +18,16 @@ quantileTimingWeighted(level)(expr, weight) Алиас: `medianTimingWeighted`. -**Параметры** +**Аргументы** -- `level` — Уровень квантили. Опционально. Константное значение с плавающей запятой от 0 до 1. Мы рекомендуем использовать значение `level` из диапазона `[0.01, 0.99]`. Значение по умолчанию: 0.5. При `level=0.5` функция вычисляет [медиану](https://ru.wikipedia.org/wiki/Медиана_(статистика)). +- `level` — уровень квантили. Опционально. Константное значение с плавающей запятой от 0 до 1. Мы рекомендуем использовать значение `level` из диапазона `[0.01, 0.99]`. Значение по умолчанию: 0.5. При `level=0.5` функция вычисляет [медиану](https://ru.wikipedia.org/wiki/Медиана_(статистика)). -- `expr` — [Выражение](../../syntax.md#syntax-expressions) над значения столбца, которые возвращают данные типа [Float\*](../../../sql-reference/data-types/float.md). +- `expr` — [выражение](../../syntax.md#syntax-expressions), зависящее от значений столбцов, возвращающее данные типа [Float\*](../../../sql-reference/data-types/float.md). - Если в функцию передать отрицательные значения, то её поведение не определено. - Если значение больше, чем 30 000 (например, время загрузки страницы превышает 30 секунд), то оно приравнивается к 30 000. -- `weight` — Столбец с весам элементов последовательности. Вес — это количество повторений элемента в последовательности. +- `weight` — столбец с весами элементов последовательности. Вес — это количество повторений элемента в последовательности (см. набросок ниже).
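Набросок ниже иллюстрирует оба правила из описания `expr` и роль весов: значение 50 000 будет учтено как 30 000, а вес задаёт число повторений. Данные условные и заданы через табличную функцию `values`:

``` sql
-- Значения больше 30 000 приравниваются к 30 000,
-- вес — количество повторений значения в последовательности.
SELECT quantileTimingWeighted(0.5)(t, w)
FROM values('t Float32, w UInt8', (100, 1), (200, 1), (50000, 2));
```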
**Точность** @@ -84,4 +84,3 @@ SELECT quantileTimingWeighted(response_time, weight) FROM t - [median](../../../sql-reference/aggregate-functions/reference/median.md#median) - [quantiles](../../../sql-reference/aggregate-functions/reference/quantiles.md#quantiles) -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/quantiletiming weighted/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/rankCorr.md b/docs/ru/sql-reference/aggregate-functions/reference/rankCorr.md index 48a19e87c52..c98e7b88bcf 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/rankCorr.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/rankCorr.md @@ -8,10 +8,10 @@ rankCorr(x, y) ``` -**Параметры** +**Аргументы** -- `x` — Произвольное значение. [Float32](../../../sql-reference/data-types/float.md#float32-float64) или [Float64](../../../sql-reference/data-types/float.md#float32-float64). -- `y` — Произвольное значение. [Float32](../../../sql-reference/data-types/float.md#float32-float64) или [Float64](../../../sql-reference/data-types/float.md#float32-float64). +- `x` — произвольное значение. [Float32](../../../sql-reference/data-types/float.md#float32-float64) или [Float64](../../../sql-reference/data-types/float.md#float32-float64). +- `y` — произвольное значение. [Float32](../../../sql-reference/data-types/float.md#float32-float64) или [Float64](../../../sql-reference/data-types/float.md#float32-float64). **Возвращаемое значение** diff --git a/docs/ru/sql-reference/aggregate-functions/reference/simplelinearregression.md b/docs/ru/sql-reference/aggregate-functions/reference/simplelinearregression.md index 370b1bde8d2..f634e553738 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/simplelinearregression.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/simplelinearregression.md @@ -41,4 +41,3 @@ SELECT arrayReduce('simpleLinearRegression', [0, 1, 2, 3], [3, 4, 5, 6]) └───────────────────────────────────────────────────────────────────┘ ``` -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/simplelinearregression/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/skewpop.md b/docs/ru/sql-reference/aggregate-functions/reference/skewpop.md index a6dee5dc5ef..ed4a95696f2 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/skewpop.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/skewpop.md @@ -10,9 +10,9 @@ toc_priority: 150 skewPop(expr) ``` -**Параметры** +**Аргументы** -`expr` — [Выражение](../../syntax.md#syntax-expressions), возвращающее число. +`expr` — [выражение](../../syntax.md#syntax-expressions), возвращающее число. **Возвращаемое значение** @@ -21,7 +21,6 @@ skewPop(expr) **Пример** ``` sql -SELECT skewPop(value) FROM series_with_value_column +SELECT skewPop(value) FROM series_with_value_column; ``` -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/skewpop/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/skewsamp.md b/docs/ru/sql-reference/aggregate-functions/reference/skewsamp.md index 171eb5e304a..213d26e4647 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/skewsamp.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/skewsamp.md @@ -12,9 +12,9 @@ toc_priority: 151 skewSamp(expr) ``` -**Параметры** +**Аргументы** -`expr` — [Выражение](../../syntax.md#syntax-expressions), возвращающее число. 
+`expr` — [выражение](../../syntax.md#syntax-expressions), возвращающее число. **Возвращаемое значение** @@ -23,7 +23,6 @@ skewSamp(expr) **Пример** ``` sql -SELECT skewSamp(value) FROM series_with_value_column +SELECT skewSamp(value) FROM series_with_value_column; ``` -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/skewsamp/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/stddevpop.md b/docs/ru/sql-reference/aggregate-functions/reference/stddevpop.md index ada8b8884cd..66d63147586 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/stddevpop.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/stddevpop.md @@ -9,4 +9,3 @@ toc_priority: 30 !!! note "Примечание" Функция использует вычислительно неустойчивый алгоритм. Если для ваших расчётов необходима [вычислительная устойчивость](https://ru.wikipedia.org/wiki/Вычислительная_устойчивость), используйте функцию `stddevPopStable`. Она работает медленнее, но обеспечивает меньшую вычислительную ошибку. -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/stddevpop/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/stddevsamp.md b/docs/ru/sql-reference/aggregate-functions/reference/stddevsamp.md index 952b6bcde68..5fbf438e894 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/stddevsamp.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/stddevsamp.md @@ -9,4 +9,3 @@ toc_priority: 31 !!! note "Примечание" Функция использует вычислительно неустойчивый алгоритм. Если для ваших расчётов необходима [вычислительная устойчивость](https://ru.wikipedia.org/wiki/Вычислительная_устойчивость), используйте функцию `stddevSampStable`. Она работает медленнее, но обеспечивает меньшую вычислительную ошибку. 
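В качестве иллюстрации — набросок запроса, сравнивающего обычную и устойчивую версии на данных с большим общим сдвигом (именно на таких данных неустойчивый алгоритм может накапливать ошибку; конкретные числа условные):

``` sql
-- На «хорошо обусловленных» данных оба результата практически совпадают;
-- расхождение проявляется лишь при накоплении ошибок округления.
SELECT
    stddevSamp(x) AS fast,
    stddevSampStable(x) AS stable
FROM
(
    SELECT number + 1e9 AS x
    FROM numbers(1000)
);
```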
-[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/stddevsamp/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/stochasticlinearregression.md b/docs/ru/sql-reference/aggregate-functions/reference/stochasticlinearregression.md index 0b268e9ea1b..6da0f6caacd 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/stochasticlinearregression.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/stochasticlinearregression.md @@ -86,4 +86,3 @@ evalMLMethod(model, param1, param2) FROM test_data - [stochasticLogisticRegression](../../../sql-reference/aggregate-functions/reference/stochasticlinearregression.md#agg_functions-stochasticlogisticregression) - [Отличие линейной от логистической регрессии.](https://stackoverflow.com/questions/12146914/what-is-the-difference-between-linear-regression-and-logistic-regression) -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/stochasticlinearregression/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/stochasticlogisticregression.md b/docs/ru/sql-reference/aggregate-functions/reference/stochasticlogisticregression.md index 01d3a0797bd..67454aa2c1b 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/stochasticlogisticregression.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/stochasticlogisticregression.md @@ -54,4 +54,3 @@ stochasticLogisticRegression(1.0, 1.0, 10, 'SGD') - [stochasticLinearRegression](../../../sql-reference/aggregate-functions/reference/stochasticlinearregression.md#agg_functions-stochasticlinearregression) - [Отличие линейной от логистической регрессии](https://moredez.ru/q/51225972/) -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/stochasticlogisticregression/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/studentttest.md b/docs/ru/sql-reference/aggregate-functions/reference/studentttest.md index 77378de95d1..16daddfbecf 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/studentttest.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/studentttest.md @@ -16,7 +16,7 @@ studentTTest(sample_data, sample_index) Значения выборок берутся из столбца `sample_data`. Если `sample_index` равно 0, то значение из этой строки принадлежит первой выборке. Во всех остальных случаях значение принадлежит второй выборке. Проверяется нулевая гипотеза, что средние значения генеральных совокупностей совпадают. Для применения t-критерия Стьюдента распределение в генеральных совокупностях должно быть нормальным и дисперсии должны совпадать. -**Параметры** +**Аргументы** - `sample_data` — данные выборок. [Integer](../../../sql-reference/data-types/int-uint.md), [Float](../../../sql-reference/data-types/float.md) or [Decimal](../../../sql-reference/data-types/decimal.md). - `sample_index` — индексы выборок. [Integer](../../../sql-reference/data-types/int-uint.md). 
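Для наглядности — набросок, показывающий, как две выборки кодируются одним столбцом индексов (значения гипотетические, заданы через табличную функцию `values`):

``` sql
-- Индекс 0 относит строку к первой выборке, любое другое значение — ко второй.
-- Функция возвращает кортеж (t-статистика, p-значение).
SELECT studentTTest(value, sample_idx)
FROM values('value Float64, sample_idx UInt8',
            (20.3, 0), (21.1, 0), (21.7, 0),
            (21.9, 1), (22.1, 1), (23.3, 1));
```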
@@ -63,4 +63,3 @@ SELECT studentTTest(sample_data, sample_index) FROM student_ttest; - [t-критерий Стьюдента](https://ru.wikipedia.org/wiki/T-%D0%BA%D1%80%D0%B8%D1%82%D0%B5%D1%80%D0%B8%D0%B9_%D0%A1%D1%82%D1%8C%D1%8E%D0%B4%D0%B5%D0%BD%D1%82%D0%B0) - [welchTTest](welchttest.md#welchttest) -[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/aggregate-functions/reference/studentttest/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/sum.md b/docs/ru/sql-reference/aggregate-functions/reference/sum.md index 5fa769f3479..487313c006b 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/sum.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/sum.md @@ -7,4 +7,3 @@ toc_priority: 4 Вычисляет сумму. Работает только для чисел. -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/sum/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/summap.md b/docs/ru/sql-reference/aggregate-functions/reference/summap.md index 460fc078893..3cfe4c26fcc 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/summap.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/summap.md @@ -42,4 +42,3 @@ GROUP BY timeslot └─────────────────────┴──────────────────────────────────────────────┴────────────────────────────────┘ ``` -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/summap/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/sumwithoverflow.md b/docs/ru/sql-reference/aggregate-functions/reference/sumwithoverflow.md index 845adc510f2..1e1962babbe 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/sumwithoverflow.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/sumwithoverflow.md @@ -8,4 +8,3 @@ toc_priority: 140 Работает только для чисел. -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/sumwithoverflow/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/topk.md b/docs/ru/sql-reference/aggregate-functions/reference/topk.md index 6aefd38bf34..4d6a8b46c2c 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/topk.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/topk.md @@ -18,8 +18,8 @@ topK(N)(column) **Аргументы** -- ‘N’ - Количество значений. -- ‘x’ – Столбец. +- `N` — количество значений. +- `x` — столбец. **Пример** @@ -36,4 +36,3 @@ FROM ontime └─────────────────────┘ ``` -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/topk/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/topkweighted.md b/docs/ru/sql-reference/aggregate-functions/reference/topkweighted.md index 20bd3ee85ff..840f9c553f5 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/topkweighted.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/topkweighted.md @@ -12,13 +12,11 @@ toc_priority: 109 topKWeighted(N)(x, weight) ``` -**Параметры** +**Аргументы** -- `N` — Количество элементов для выдачи. +- `N` — количество элементов для выдачи. -**Аргументы** -- `x` – значение. +- `x` — значение. - `weight` — вес. [UInt8](../../../sql-reference/data-types/int-uint.md).
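Например, вот набросок, показывающий влияние веса: менее частое значение с большим суммарным весом вытесняет более частое (данные условные, заданы через табличную функцию `values`):

``` sql
-- 'a' встречается три раза с весом 1 (суммарный вес 3),
-- 'b' — один раз с весом 10, поэтому в топ-1 скорее всего попадёт 'b'.
SELECT topKWeighted(1)(s, w)
FROM values('s String, w UInt8', ('a', 1), ('a', 1), ('a', 1), ('b', 10));
```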
**Возвращаемое значение** @@ -41,4 +41,3 @@ SELECT topKWeighted(10)(number, number) FROM numbers(1000) └───────────────────────────────────────────┘ ``` -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/topkweighted/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/uniq.md b/docs/ru/sql-reference/aggregate-functions/reference/uniq.md index f5f3f198139..01bb8bea45a 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/uniq.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/uniq.md @@ -10,7 +10,7 @@ toc_priority: 190 uniq(x[, ...]) ``` -**Параметры** +**Аргументы** Функция принимает переменное число входных параметров. Параметры могут быть числовых типов, а также `Tuple`, `Array`, `Date`, `DateTime`, `String`. @@ -39,4 +39,3 @@ uniq(x[, ...]) - [uniqHLL12](../../../sql-reference/aggregate-functions/reference/uniqhll12.md#agg_function-uniqhll12) - [uniqExact](../../../sql-reference/aggregate-functions/reference/uniqexact.md#agg_function-uniqexact) -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/uniq/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/uniqcombined.md b/docs/ru/sql-reference/aggregate-functions/reference/uniqcombined.md index 751dc1a8c98..3009beb994b 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/uniqcombined.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/uniqcombined.md @@ -12,7 +12,7 @@ uniqCombined(HLL_precision)(x[, ...]) Функция `uniqCombined` — это хороший выбор для вычисления количества различных значений. -**Параметры** +**Аргументы** Функция принимает переменное число входных параметров. Параметры могут быть числовых типов, а также `Tuple`, `Array`, `Date`, `DateTime`, `String`. @@ -50,4 +50,3 @@ uniqCombined(HLL_precision)(x[, ...]) - [uniqHLL12](../../../sql-reference/aggregate-functions/reference/uniqhll12.md#agg_function-uniqhll12) - [uniqExact](../../../sql-reference/aggregate-functions/reference/uniqexact.md#agg_function-uniqexact) -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/uniqcombined/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/uniqcombined64.md b/docs/ru/sql-reference/aggregate-functions/reference/uniqcombined64.md index 5db27fb301d..6fde16b4b0c 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/uniqcombined64.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/uniqcombined64.md @@ -6,4 +6,3 @@ toc_priority: 193 Использует 64-битный хэш для всех типов, в отличие от [uniqCombined](../../../sql-reference/aggregate-functions/reference/uniqcombined.md#agg_function-uniqcombined). -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/uniqcombined64/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/uniqexact.md b/docs/ru/sql-reference/aggregate-functions/reference/uniqexact.md index 3dd22b2b4bc..613558ba887 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/uniqexact.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/uniqexact.md @@ -14,7 +14,7 @@ uniqExact(x[, ...]) Функция `uniqExact` расходует больше оперативной памяти, чем функция `uniq`, так как размер состояния неограниченно растёт по мере роста количества различных значений. -**Параметры** +**Аргументы** Функция принимает переменное число входных параметров. 
Параметры могут быть числовых типов, а также `Tuple`, `Array`, `Date`, `DateTime`, `String`. @@ -24,4 +24,3 @@ uniqExact(x[, ...]) - [uniqCombined](../../../sql-reference/aggregate-functions/reference/uniq.md#agg_function-uniqcombined) - [uniqHLL12](../../../sql-reference/aggregate-functions/reference/uniq.md#agg_function-uniqhll12) -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/uniqexact/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/uniqhll12.md b/docs/ru/sql-reference/aggregate-functions/reference/uniqhll12.md index 09e52ac6833..7a421d419ae 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/uniqhll12.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/uniqhll12.md @@ -10,7 +10,7 @@ toc_priority: 194 uniqHLL12(x[, ...]) ``` -**Параметры** +**Аргументы** Функция принимает переменное число входных параметров. Параметры могут быть числовых типов, а также `Tuple`, `Array`, `Date`, `DateTime`, `String`. @@ -26,7 +26,7 @@ uniqHLL12(x[, ...]) - Использует алгоритм HyperLogLog для аппроксимации числа различных значений аргументов. - Используется 212 5-битовых ячеек. Размер состояния чуть больше 2.5 КБ. Результат не точный (ошибка до ~10%) для небольших множеств (<10K элементов). Однако для множеств большой кардинальности (10K - 100M) результат довольно точен (ошибка до ~1.6%). Начиная с 100M ошибка оценки будет только расти и для множеств огромной кардинальности (1B+ элементов) функция возвращает результат с очень большой неточностью. + Используется 2^12 5-битовых ячеек. Размер состояния чуть больше 2.5 КБ. Результат не точный (ошибка до ~10%) для небольших множеств (<10K элементов). Однако для множеств большой кардинальности (10K - 100M) результат довольно точен (ошибка до ~1.6%). Начиная с 100M ошибка оценки будет только расти и для множеств огромной кардинальности (1B+ элементов) функция возвращает результат с очень большой неточностью. - Результат детерминирован (не зависит от порядка выполнения запроса). @@ -38,4 +38,3 @@ uniqHLL12(x[, ...]) - [uniqExact](../../../sql-reference/aggregate-functions/reference/uniqexact.md#agg_function-uniqexact) -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/uniqhll12/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/varpop.md b/docs/ru/sql-reference/aggregate-functions/reference/varpop.md index 9615e03673b..0a78b3cbb76 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/varpop.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/varpop.md @@ -11,4 +11,3 @@ toc_priority: 32 !!! note "Примечание" Функция использует вычислительно неустойчивый алгоритм. Если для ваших расчётов необходима [вычислительная устойчивость](https://ru.wikipedia.org/wiki/Вычислительная_устойчивость), используйте функцию `varPopStable`. Она работает медленнее, но обеспечивает меньшую вычислительную ошибку. -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/varpop/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/varsamp.md b/docs/ru/sql-reference/aggregate-functions/reference/varsamp.md index 31aaac68e7b..e18b858b7e2 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/varsamp.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/varsamp.md @@ -13,4 +13,3 @@ toc_priority: 33 !!! note "Примечание" Функция использует вычислительно неустойчивый алгоритм. 
Если для ваших расчётов необходима [вычислительная устойчивость](https://ru.wikipedia.org/wiki/Вычислительная_устойчивость), используйте функцию `varSampStable`. Она работает медленнее, но обеспечивает меньшую вычислительную ошибку. -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/vasamp/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/welchttest.md b/docs/ru/sql-reference/aggregate-functions/reference/welchttest.md index 16c122d1b49..594a609d89e 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/welchttest.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/welchttest.md @@ -16,7 +16,7 @@ welchTTest(sample_data, sample_index) Значения выборок берутся из столбца `sample_data`. Если `sample_index` равно 0, то значение из этой строки принадлежит первой выборке. Во всех остальных случаях значение принадлежит второй выборке. Проверяется нулевая гипотеза, что средние значения генеральных совокупностей совпадают. Для применения t-критерия Уэлча распределение в генеральных совокупностях должно быть нормальным. Дисперсии могут не совпадать. -**Параметры** +**Аргументы** - `sample_data` — данные выборок. [Integer](../../../sql-reference/data-types/int-uint.md), [Float](../../../sql-reference/data-types/float.md) or [Decimal](../../../sql-reference/data-types/decimal.md). - `sample_index` — индексы выборок. [Integer](../../../sql-reference/data-types/int-uint.md). @@ -63,4 +63,3 @@ SELECT welchTTest(sample_data, sample_index) FROM welch_ttest; - [t-критерий Уэлча](https://ru.wikipedia.org/wiki/T-%D0%BA%D1%80%D0%B8%D1%82%D0%B5%D1%80%D0%B8%D0%B9_%D0%A3%D1%8D%D0%BB%D1%87%D0%B0) - [studentTTest](studentttest.md#studentttest) -[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/aggregate-functions/reference/welchTTest/) diff --git a/docs/ru/sql-reference/data-types/aggregatefunction.md b/docs/ru/sql-reference/data-types/aggregatefunction.md index 018d38d825e..6ca6879cf6c 100644 --- a/docs/ru/sql-reference/data-types/aggregatefunction.md +++ b/docs/ru/sql-reference/data-types/aggregatefunction.md @@ -65,4 +65,3 @@ SELECT uniqMerge(state) FROM (SELECT uniqState(UserID) AS state FROM table GROUP Смотрите в описании движка [AggregatingMergeTree](../../sql-reference/data-types/aggregatefunction.md). -[Оригинальная статья](https://clickhouse.tech/docs/ru/data_types/nested_data_structures/aggregatefunction/) diff --git a/docs/ru/sql-reference/data-types/array.md b/docs/ru/sql-reference/data-types/array.md index 86a23ed041b..30952d6e126 100644 --- a/docs/ru/sql-reference/data-types/array.md +++ b/docs/ru/sql-reference/data-types/array.md @@ -76,4 +76,3 @@ Received exception from server (version 1.1.54388): Code: 386. DB::Exception: Received from localhost:9000, 127.0.0.1. DB::Exception: There is no supertype for types UInt8, String because some of them are String/FixedString and some of them are not. ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/data_types/array/) diff --git a/docs/ru/sql-reference/data-types/boolean.md b/docs/ru/sql-reference/data-types/boolean.md index b0fad6d7446..dff35777ff9 100644 --- a/docs/ru/sql-reference/data-types/boolean.md +++ b/docs/ru/sql-reference/data-types/boolean.md @@ -7,4 +7,3 @@ toc_title: "Булевы значения" Отдельного типа для булевых значений нет. Для них используется тип UInt8, в котором используются только значения 0 и 1.
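Проверить это можно простым запросом: результат сравнения уже имеет тип UInt8.

``` sql
SELECT 2 > 1 AS is_true, toTypeName(is_true);
-- is_true = 1, тип — UInt8
```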
-[Оригинальная статья](https://clickhouse.tech/docs/ru/data_types/boolean/) diff --git a/docs/ru/sql-reference/data-types/date.md b/docs/ru/sql-reference/data-types/date.md index 490bc5c28b4..50508de96a3 100644 --- a/docs/ru/sql-reference/data-types/date.md +++ b/docs/ru/sql-reference/data-types/date.md @@ -44,4 +44,3 @@ SELECT * FROM dt; - [Тип данных `DateTime`](../../sql-reference/data-types/datetime.md) -[Оригинальная статья](https://clickhouse.tech/docs/ru/data_types/date/) diff --git a/docs/ru/sql-reference/data-types/datetime.md b/docs/ru/sql-reference/data-types/datetime.md index ffdf83e5bd0..c9804f57c33 100644 --- a/docs/ru/sql-reference/data-types/datetime.md +++ b/docs/ru/sql-reference/data-types/datetime.md @@ -20,8 +20,7 @@ DateTime([timezone]) ## Использование {#ispolzovanie} Момент времени сохраняется как [Unix timestamp](https://ru.wikipedia.org/wiki/Unix-%D0%B2%D1%80%D0%B5%D0%BC%D1%8F), независимо от часового пояса и переходов на летнее/зимнее время. Дополнительно, тип `DateTime` позволяет хранить часовой пояс, единый для всей колонки, который влияет на то, как будут отображаться значения типа `DateTime` в текстовом виде и как будут парситься значения заданные в виде строк (‘2020-01-01 05:00:01’). Часовой пояс не хранится в строках таблицы (выборки), а хранится в метаданных колонки. -Список поддерживаемых временных зон можно найти в [IANA Time Zone Database](https://www.iana.org/time-zones). -Пакет `tzdata`, содержащий [базу данных часовых поясов IANA](https://www.iana.org/time-zones), должен быть установлен в системе. Используйте команду `timedatectl list-timezones` для получения списка часовых поясов, известных локальной системе. +Список поддерживаемых часовых поясов можно найти в [IANA Time Zone Database](https://www.iana.org/time-zones) или получить из базы данных, выполнив запрос `SELECT * FROM system.time_zones`. Также [список](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) есть в Википедии. Часовой пояс для столбца типа `DateTime` можно в явном виде установить при создании таблицы. Если часовой пояс не установлен, то ClickHouse использует значение параметра [timezone](../../sql-reference/data-types/datetime.md#server_configuration_parameters-timezone), установленное в конфигурации сервера или в настройках операционной системы на момент запуска сервера. @@ -126,4 +125,3 @@ FROM dt - [Тип данных `Date`](date.md) - [Тип данных `DateTime64`](datetime64.md) -[Оригинальная статья](https://clickhouse.tech/docs/ru/data_types/datetime/) diff --git a/docs/ru/sql-reference/data-types/datetime64.md b/docs/ru/sql-reference/data-types/datetime64.md index 6576bf9dc0d..3a08da75bb7 100644 --- a/docs/ru/sql-reference/data-types/datetime64.md +++ b/docs/ru/sql-reference/data-types/datetime64.md @@ -7,9 +7,9 @@ toc_title: DateTime64 Позволяет хранить момент времени, который может быть представлен как календарная дата и время, с заданной суб-секундной точностью. -Размер тика/точность: 10<sup>-precision</sup> секунд, где precision - целочисленный параметр типа. +Размер тика (точность, precision): 10<sup>-precision</sup> секунд, где precision - целочисленный параметр. -Синтаксис: +**Синтаксис:** ``` sql DateTime64(precision, [timezone]) ``` Данные хранятся в виде количества ‘тиков’, прошедших с момента начала эпохи (1970-01-01 00:00:00 UTC), в Int64. Размер тика определяется параметром precision.
Дополнительно, тип `DateTime64` позволяет хранить часовой пояс, единый для всей колонки, который влияет на то, как будут отображаться значения типа `DateTime64` в текстовом виде и как будут парситься значения заданные в виде строк (‘2020-01-01 05:00:01.000’). Часовой пояс не хранится в строках таблицы (выборки), а хранится в метаданных колонки. Подробнее см. [DateTime](datetime.md). -## Пример {#primer} +Поддерживаются значения от 1 января 1925 г. и до 31 декабря 2283 г. -**1.** Создание таблицы с столбцом типа `DateTime64` и вставка данных в неё: +## Примеры {#examples} + +1. Создание таблицы со столбцом типа `DateTime64` и вставка данных в неё: ``` sql CREATE TABLE dt @@ -27,15 +29,15 @@ CREATE TABLE dt `timestamp` DateTime64(3, 'Europe/Moscow'), `event_id` UInt8 ) -ENGINE = TinyLog +ENGINE = TinyLog; ``` ``` sql -INSERT INTO dt Values (1546300800000, 1), ('2019-01-01 00:00:00', 2) +INSERT INTO dt Values (1546300800000, 1), ('2019-01-01 00:00:00', 2); ``` ``` sql -SELECT * FROM dt +SELECT * FROM dt; ``` ``` text @@ -46,12 +48,12 @@ SELECT * FROM dt ``` - При вставке даты-времени как числа (аналогично ‘Unix timestamp’), время трактуется как UTC. Unix timestamp `1546300800` в часовом поясе `Europe/London (UTC+0)` представляет время `'2019-01-01 00:00:00'`. Однако, столбец `timestamp` имеет тип `DateTime('Europe/Moscow (UTC+3)')`, так что при выводе в виде строки время отобразится как `2019-01-01 03:00:00`. -- При вставке даты-времени в виде строки, время трактуется соответственно часовому поясу установленному для колонки. `'2019-01-01 00:00:00'` трактуется как время по Москве (и в базу сохраняется `'2018-12-31 21:00:00'` в виде Unix Timestamp) +- При вставке даты-времени в виде строки, время трактуется соответственно часовому поясу установленному для колонки. `'2019-01-01 00:00:00'` трактуется как время по Москве (и в базу сохраняется `'2018-12-31 21:00:00'` в виде Unix Timestamp). -**2.** Фильтрация по значениям даты-времени +2. Фильтрация по значениям даты и времени ``` sql -SELECT * FROM dt WHERE timestamp = toDateTime64('2019-01-01 00:00:00', 3, 'Europe/Moscow') +SELECT * FROM dt WHERE timestamp = toDateTime64('2019-01-01 00:00:00', 3, 'Europe/Moscow'); ``` ``` text @@ -60,12 +62,12 @@ SELECT * FROM dt WHERE timestamp = toDateTime64('2019-01-01 00:00:00', 3, 'Europ └─────────────────────────┴──────────┘ ``` -В отличие от типа `DateTime`, `DateTime64` не конвертируется из строк автоматически +В отличие от типа `DateTime`, `DateTime64` не конвертируется из строк автоматически. -**3.** Получение часового пояса для значения типа `DateTime64`: +3. Получение часового пояса для значения типа `DateTime64`: ``` sql -SELECT toDateTime64(now(), 3, 'Europe/Moscow') AS column, toTypeName(column) AS x +SELECT toDateTime64(now(), 3, 'Europe/Moscow') AS column, toTypeName(column) AS x; ``` ``` text @@ -74,13 +76,13 @@ SELECT toDateTime64(now(), 3, 'Europe/Moscow') AS column, toTypeName(column) AS └─────────────────────────┴────────────────────────────────┘ ``` -**4.** Конвертация часовых поясов +4. 
Конвертация часовых поясов ``` sql SELECT toDateTime64(timestamp, 3, 'Europe/London') as lon_time, toDateTime64(timestamp, 3, 'Europe/Moscow') as mos_time -FROM dt +FROM dt; ``` ``` text @@ -90,7 +92,7 @@ FROM dt └─────────────────────────┴─────────────────────────┘ ``` -## See Also {#see-also} +**See Also** - [Функции преобразования типов](../../sql-reference/functions/type-conversion-functions.md) - [Функции для работы с датой и временем](../../sql-reference/functions/date-time-functions.md) diff --git a/docs/ru/sql-reference/data-types/decimal.md b/docs/ru/sql-reference/data-types/decimal.md index bdcd3c767b9..8524e8ea132 100644 --- a/docs/ru/sql-reference/data-types/decimal.md +++ b/docs/ru/sql-reference/data-types/decimal.md @@ -112,4 +112,3 @@ DB::Exception: Can't compare. - [countDigits](../../sql-reference/functions/other-functions.md#count-digits) -[Оригинальная статья](https://clickhouse.tech/docs/ru/data_types/decimal/) diff --git a/docs/ru/sql-reference/data-types/domains/index.md b/docs/ru/sql-reference/data-types/domains/index.md index 6a968a76ff6..35f8149112f 100644 --- a/docs/ru/sql-reference/data-types/domains/index.md +++ b/docs/ru/sql-reference/data-types/domains/index.md @@ -30,4 +30,3 @@ toc_priority: 56 - Невозможно неявно преобразовывать строковые значение в значения с доменным типом данных при вставке данных из другого столбца или таблицы. - Домен не добавляет ограничения на хранимые значения. -[Оригинальная статья](https://clickhouse.tech/docs/ru/data_types/domains/overview) diff --git a/docs/ru/sql-reference/data-types/domains/ipv4.md b/docs/ru/sql-reference/data-types/domains/ipv4.md index 57d6f12ab17..af5f8261fae 100644 --- a/docs/ru/sql-reference/data-types/domains/ipv4.md +++ b/docs/ru/sql-reference/data-types/domains/ipv4.md @@ -81,4 +81,3 @@ SELECT toTypeName(i), CAST(from AS UInt32) AS i FROM hits LIMIT 1; └──────────────────────────────────┴────────────┘ ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/data_types/domains/ipv4) diff --git a/docs/ru/sql-reference/data-types/domains/ipv6.md b/docs/ru/sql-reference/data-types/domains/ipv6.md index 04c5fd0d491..5b3c17feceb 100644 --- a/docs/ru/sql-reference/data-types/domains/ipv6.md +++ b/docs/ru/sql-reference/data-types/domains/ipv6.md @@ -81,4 +81,3 @@ SELECT toTypeName(i), CAST(from AS FixedString(16)) AS i FROM hits LIMIT 1; └───────────────────────────────────────────┴─────────┘ ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/data_types/domains/ipv6) diff --git a/docs/ru/sql-reference/data-types/enum.md b/docs/ru/sql-reference/data-types/enum.md index b86d15c19a8..95c053bed2c 100644 --- a/docs/ru/sql-reference/data-types/enum.md +++ b/docs/ru/sql-reference/data-types/enum.md @@ -126,4 +126,3 @@ INSERT INTO t_enum_nullable Values('hello'),('world'),(NULL) При ALTER, есть возможность поменять Enum8 на Enum16 и обратно - так же, как можно поменять Int8 на Int16. -[Оригинальная статья](https://clickhouse.tech/docs/ru/data_types/enum/) diff --git a/docs/ru/sql-reference/data-types/fixedstring.md b/docs/ru/sql-reference/data-types/fixedstring.md index 21115418e30..ef73dadaddf 100644 --- a/docs/ru/sql-reference/data-types/fixedstring.md +++ b/docs/ru/sql-reference/data-types/fixedstring.md @@ -58,4 +58,3 @@ WHERE a = 'b\0' Обратите внимание, что длина значения `FixedString(N)` постоянна. 
Функция [length](../../sql-reference/data-types/fixedstring.md#array_functions-length) возвращает `N` даже если значение `FixedString(N)` заполнено только нулевыми байтами, однако функция [empty](../../sql-reference/data-types/fixedstring.md#empty) в этом же случае возвращает `1`. -[Оригинальная статья](https://clickhouse.tech/docs/ru/data_types/fixedstring/) diff --git a/docs/ru/sql-reference/data-types/float.md b/docs/ru/sql-reference/data-types/float.md index 0e861f170b7..89ac00ab62f 100644 --- a/docs/ru/sql-reference/data-types/float.md +++ b/docs/ru/sql-reference/data-types/float.md @@ -89,4 +89,3 @@ SELECT 0 / 0 Смотрите правила сортировки `NaN` в разделе [Секция ORDER BY ](../../sql-reference/statements/select/order-by.md). -[Оригинальная статья](https://clickhouse.tech/docs/ru/data_types/float/) diff --git a/docs/ru/sql-reference/data-types/geo.md b/docs/ru/sql-reference/data-types/geo.md index 23293b30927..23b47f38d05 100644 --- a/docs/ru/sql-reference/data-types/geo.md +++ b/docs/ru/sql-reference/data-types/geo.md @@ -103,4 +103,3 @@ Result: └─────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────┘ ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/data-types/geo/) diff --git a/docs/ru/sql-reference/data-types/index.md b/docs/ru/sql-reference/data-types/index.md index 53c983a147a..2b29ee1bc19 100644 --- a/docs/ru/sql-reference/data-types/index.md +++ b/docs/ru/sql-reference/data-types/index.md @@ -11,4 +11,3 @@ ClickHouse может сохранять в ячейках таблиц данн Зависимость имен типов данных от регистра можно проверить в системной таблице [system.data_type_families](../../operations/system-tables/data_type_families.md#system_tables-data_type_families). Раздел содержит описания поддерживаемых типов данных и специфику их использования и/или реализации, если таковые имеются. -[Оригинальная статья](https://clickhouse.tech/docs/ru/data_types/) diff --git a/docs/ru/sql-reference/data-types/int-uint.md b/docs/ru/sql-reference/data-types/int-uint.md index d3c342e467a..c026f5fc4a5 100644 --- a/docs/ru/sql-reference/data-types/int-uint.md +++ b/docs/ru/sql-reference/data-types/int-uint.md @@ -35,4 +35,3 @@ toc_title: UInt8, UInt16, UInt32, UInt64, Int8, Int16, Int32, Int64 `UInt128` пока не реализован. -[Оригинальная статья](https://clickhouse.tech/docs/ru/data_types/int_uint/) diff --git a/docs/ru/sql-reference/data-types/lowcardinality.md b/docs/ru/sql-reference/data-types/lowcardinality.md index 52713e2d747..fe9118b1e14 100644 --- a/docs/ru/sql-reference/data-types/lowcardinality.md +++ b/docs/ru/sql-reference/data-types/lowcardinality.md @@ -58,4 +58,3 @@ ORDER BY id - [Reducing Clickhouse Storage Cost with the Low Cardinality Type – Lessons from an Instana Engineer](https://www.instana.com/blog/reducing-clickhouse-storage-cost-with-the-low-cardinality-type-lessons-from-an-instana-engineer/). - [String Optimization (video presentation in Russian)](https://youtu.be/rqf-ILRgBdY?list=PL0Z2YDlm0b3iwXCpEFiOOYmwXzVmjJfEt). [Slides in English](https://github.com/yandex/clickhouse-presentations/raw/master/meetup19/string_optimization.pdf). 
-[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/data-types/lowcardinality/) diff --git a/docs/ru/sql-reference/data-types/multiword-types.md b/docs/ru/sql-reference/data-types/multiword-types.md index 559755ef989..0a8afff448d 100644 --- a/docs/ru/sql-reference/data-types/multiword-types.md +++ b/docs/ru/sql-reference/data-types/multiword-types.md @@ -26,4 +26,3 @@ toc_title: Составные типы | BINARY LARGE OBJECT | [String](../../sql-reference/data-types/string.md) | | BINARY VARYING | [String](../../sql-reference/data-types/string.md) | -[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/data-types/multiword-types/) diff --git a/docs/ru/sql-reference/data-types/nested-data-structures/index.md b/docs/ru/sql-reference/data-types/nested-data-structures/index.md index db214b90c03..78262347bac 100644 --- a/docs/ru/sql-reference/data-types/nested-data-structures/index.md +++ b/docs/ru/sql-reference/data-types/nested-data-structures/index.md @@ -7,4 +7,3 @@ toc_title: hidden # Вложенные структуры данных {#vlozhennye-struktury-dannykh} -[Оригинальная статья](https://clickhouse.tech/docs/ru/data_types/nested_data_structures/) diff --git a/docs/ru/sql-reference/data-types/nested-data-structures/nested.md b/docs/ru/sql-reference/data-types/nested-data-structures/nested.md index 0e43383b283..199d141a191 100644 --- a/docs/ru/sql-reference/data-types/nested-data-structures/nested.md +++ b/docs/ru/sql-reference/data-types/nested-data-structures/nested.md @@ -96,4 +96,3 @@ LIMIT 10 Работоспособность запроса ALTER для элементов вложенных структур данных, является сильно ограниченной. -[Оригинальная статья](https://clickhouse.tech/docs/ru/data_types/nested_data_structures/nested/) diff --git a/docs/ru/sql-reference/data-types/nullable.md b/docs/ru/sql-reference/data-types/nullable.md index 71e1f7a37a0..3f33c4b2540 100644 --- a/docs/ru/sql-reference/data-types/nullable.md +++ b/docs/ru/sql-reference/data-types/nullable.md @@ -48,4 +48,3 @@ SELECT x + y from t_null └────────────┘ ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/data_types/nullable/) diff --git a/docs/ru/sql-reference/data-types/simpleaggregatefunction.md b/docs/ru/sql-reference/data-types/simpleaggregatefunction.md index 668b579ff78..7b81c577762 100644 --- a/docs/ru/sql-reference/data-types/simpleaggregatefunction.md +++ b/docs/ru/sql-reference/data-types/simpleaggregatefunction.md @@ -3,6 +3,8 @@ Хранит только текущее значение агрегатной функции и не сохраняет ее полное состояние, как это делает [`AggregateFunction`](../../sql-reference/data-types/aggregatefunction.md). Такая оптимизация может быть применена к функциям, которые обладают следующим свойством: результат выполнения функции `f` к набору строк `S1 UNION ALL S2` может быть получен путем выполнения `f` к отдельным частям набора строк, а затем повторного выполнения `f` к результатам: `f(S1 UNION ALL S2) = f(f(S1) UNION ALL f(S2))`. Это свойство гарантирует, что результатов частичной агрегации достаточно для вычисления комбинированной, поэтому хранить и обрабатывать какие-либо дополнительные данные не требуется. +Чтобы получить промежуточное значение, обычно используются агрегатные функции с суффиксом [-SimpleState](../../sql-reference/aggregate-functions/combinators.md#agg-functions-combinator-simplestate). 
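Небольшой набросок того, как `-SimpleState` выглядит на практике (результат хранится и отображается как обычное число):

``` sql
-- Комбинатор -SimpleState оборачивает результат в SimpleAggregateFunction,
-- при этом значение выглядит как обычное значение исходного типа.
SELECT sumSimpleState(number) AS s, toTypeName(s)
FROM numbers(5);
-- s = 10, toTypeName(s) = 'SimpleAggregateFunction(sum, UInt64)'
```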
+ Поддерживаются следующие агрегатные функции: - [`any`](../../sql-reference/aggregate-functions/reference/any.md#agg_function-any) @@ -15,14 +17,16 @@ - [`groupBitOr`](../../sql-reference/aggregate-functions/reference/groupbitor.md#groupbitor) - [`groupBitXor`](../../sql-reference/aggregate-functions/reference/groupbitxor.md#groupbitxor) - [`groupArrayArray`](../../sql-reference/aggregate-functions/reference/grouparray.md#agg_function-grouparray) -- [`groupUniqArrayArray`](../../sql-reference/aggregate-functions/reference/groupuniqarray.md#groupuniqarray) +- [`groupUniqArrayArray`](../../sql-reference/aggregate-functions/reference/groupuniqarray.md) - [`sumMap`](../../sql-reference/aggregate-functions/reference/summap.md#agg_functions-summap) - [`minMap`](../../sql-reference/aggregate-functions/reference/minmap.md#agg_functions-minmap) - [`maxMap`](../../sql-reference/aggregate-functions/reference/maxmap.md#agg_functions-maxmap) +- [`argMin`](../../sql-reference/aggregate-functions/reference/argmin.md) +- [`argMax`](../../sql-reference/aggregate-functions/reference/argmax.md) !!! note "Примечание" - Значения `SimpleAggregateFunction(func, Type)` отображаются и хранятся так же, как и `Type`, поэтому комбинаторы [-Merge](../../sql-reference/aggregate-functions/combinators.md#aggregate_functions_combinators-merge) и [-State]((../../sql-reference/aggregate-functions/combinators.md#agg-functions-combinator-state) не требуются. - + Значения `SimpleAggregateFunction(func, Type)` отображаются и хранятся так же, как и `Type`, поэтому комбинаторы [-Merge](../../sql-reference/aggregate-functions/combinators.md#aggregate_functions_combinators-merge) и [-State](../../sql-reference/aggregate-functions/combinators.md#agg-functions-combinator-state) не требуются. + `SimpleAggregateFunction` имеет лучшую производительность, чем `AggregateFunction` с той же агрегатной функцией. **Параметры** @@ -36,4 +40,3 @@ CREATE TABLE simple (id UInt64, val SimpleAggregateFunction(sum, Double)) ENGINE=AggregatingMergeTree ORDER BY id; ``` -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/data-types/simpleaggregatefunction/) diff --git a/docs/ru/sql-reference/data-types/special-data-types/expression.md b/docs/ru/sql-reference/data-types/special-data-types/expression.md index 718fcc886a6..f11f66a40c7 100644 --- a/docs/ru/sql-reference/data-types/special-data-types/expression.md +++ b/docs/ru/sql-reference/data-types/special-data-types/expression.md @@ -7,4 +7,3 @@ toc_title: Expression Используется для представления лямбда-выражений в функциях высшего порядка. -[Оригинальная статья](https://clickhouse.tech/docs/ru/data_types/special_data_types/expression/) diff --git a/docs/ru/sql-reference/data-types/special-data-types/index.md b/docs/ru/sql-reference/data-types/special-data-types/index.md index e6d9fa8b011..823a84e2e43 100644 --- a/docs/ru/sql-reference/data-types/special-data-types/index.md +++ b/docs/ru/sql-reference/data-types/special-data-types/index.md @@ -9,4 +9,3 @@ toc_title: hidden Значения служебных типов данных не могут сохраняться в таблицу и выводиться в качестве результата, а возникают как промежуточный результат выполнения запроса. 
-[Оригинальная статья](https://clickhouse.tech/docs/ru/data_types/special_data_types/) diff --git a/docs/ru/sql-reference/data-types/special-data-types/nothing.md b/docs/ru/sql-reference/data-types/special-data-types/nothing.md index c6a9cb868d8..30d425461e1 100644 --- a/docs/ru/sql-reference/data-types/special-data-types/nothing.md +++ b/docs/ru/sql-reference/data-types/special-data-types/nothing.md @@ -19,4 +19,3 @@ SELECT toTypeName(Array()) └─────────────────────┘ ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/data_types/special_data_types/nothing/) diff --git a/docs/ru/sql-reference/data-types/special-data-types/set.md b/docs/ru/sql-reference/data-types/special-data-types/set.md index 4c2f4ed2c66..5867df3c947 100644 --- a/docs/ru/sql-reference/data-types/special-data-types/set.md +++ b/docs/ru/sql-reference/data-types/special-data-types/set.md @@ -7,4 +7,3 @@ toc_title: Set Используется для представления правой части выражения IN. -[Оригинальная статья](https://clickhouse.tech/docs/ru/data_types/special_data_types/set/) diff --git a/docs/ru/sql-reference/data-types/string.md b/docs/ru/sql-reference/data-types/string.md index 6a07f7e51de..9470f523629 100644 --- a/docs/ru/sql-reference/data-types/string.md +++ b/docs/ru/sql-reference/data-types/string.md @@ -17,4 +17,3 @@ toc_title: String Также, некоторые функции по работе со строками, имеют отдельные варианты, которые работают при допущении, что строка содержит набор байт, представляющий текст в кодировке UTF-8. Например, функция length вычисляет длину строки в байтах, а функция lengthUTF8 - длину строки в кодовых точках Unicode, при допущении, что значение в кодировке UTF-8. -[Оригинальная статья](https://clickhouse.tech/docs/ru/data_types/string/) diff --git a/docs/ru/sql-reference/data-types/tuple.md b/docs/ru/sql-reference/data-types/tuple.md index e2a1450b47f..702b5962f7b 100644 --- a/docs/ru/sql-reference/data-types/tuple.md +++ b/docs/ru/sql-reference/data-types/tuple.md @@ -47,4 +47,3 @@ SELECT tuple(1,NULL) AS x, toTypeName(x) └──────────┴─────────────────────────────────┘ ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/data_types/tuple/) diff --git a/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-hierarchical.md b/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-hierarchical.md index 9c0b731bc7d..da8492e7cc0 100644 --- a/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-hierarchical.md +++ b/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-hierarchical.md @@ -65,4 +65,3 @@ ClickHouse поддерживает свойство [hierarchical](external-dic ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/dicts/external_dicts_dict_hierarchical/) diff --git a/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-layout.md b/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-layout.md index 0fd4a85c46f..40ea4ba7d26 100644 --- a/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-layout.md +++ b/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-layout.md @@ -9,7 +9,7 @@ toc_title: "Хранение словарей в памяти" Рекомендуем [flat](#flat), [hashed](#dicts-external_dicts_dict_layout-hashed) и [complex_key_hashed](#complex-key-hashed). Скорость обработки словарей при этом максимальна. 
-Размещение с кэшированием не рекомендуется использовать из-за потенциально низкой производительности и сложностей в подборе оптимальных параметров. Читайте об этом подробнее в разделе «[cache](#cache)».
+Размещение с кэшированием не рекомендуется использовать из-за потенциально низкой производительности и сложностей в подборе оптимальных параметров. Читайте об этом подробнее в разделе [cache](#cache).

Повысить производительность словарей можно следующими способами:

@@ -48,7 +48,7 @@ LAYOUT(LAYOUT_TYPE(param value)) -- layout settings
...
```

-## Способы размещения словарей в памяти {#sposoby-razmeshcheniia-slovarei-v-pamiati}
+## Способы размещения словарей в памяти {#ways-to-store-dictionaries-in-memory}

- [flat](#flat)
- [hashed](#dicts-external_dicts_dict_layout-hashed)
@@ -65,11 +65,11 @@ LAYOUT(LAYOUT_TYPE(param value)) -- layout settings

### flat {#flat}

-Словарь полностью хранится в оперативной памяти в виде плоских массивов. Объём памяти, занимаемой словарём пропорционален размеру самого большого по размеру ключа.
+Словарь полностью хранится в оперативной памяти в виде плоских массивов. Объём памяти, занимаемой словарём, пропорционален размеру самого большого ключа (по объему).

-Ключ словаря имеет тип `UInt64` и его величина ограничена 500 000. Если при создании словаря обнаружен ключ больше, то ClickHouse бросает исключение и не создает словарь.
+Ключ словаря имеет тип [UInt64](../../../sql-reference/data-types/int-uint.md), и его величина ограничена параметром `max_array_size` (значение по умолчанию — 500 000). Если при создании словаря обнаружен ключ больше, то ClickHouse бросает исключение и не создает словарь. Начальный размер плоских массивов словарей контролируется параметром `initial_array_size` (по умолчанию — 1024).

-Поддерживаются все виды источников. При обновлении, данные (из файла, из таблицы) читаются целиком.
+Поддерживаются все виды источников. При обновлении данные (из файла или из таблицы) считываются целиком.

Это метод обеспечивает максимальную производительность среди всех доступных способов размещения словаря.

@@ -77,21 +77,24 @@

``` xml
<layout>
-  <flat />
+  <flat>
+    <initial_array_size>50000</initial_array_size>
+    <max_array_size>5000000</max_array_size>
+  </flat>
</layout>
```

или

``` sql
-LAYOUT(FLAT())
+LAYOUT(FLAT(INITIAL_ARRAY_SIZE 50000 MAX_ARRAY_SIZE 5000000))
```

### hashed {#dicts-external_dicts_dict_layout-hashed}

-Словарь полностью хранится в оперативной памяти в виде хэш-таблиц. Словарь может содержать произвольное количество элементов с произвольными идентификаторами. На практике, количество ключей может достигать десятков миллионов элементов.
+Словарь полностью хранится в оперативной памяти в виде хэш-таблиц. Словарь может содержать произвольное количество элементов с произвольными идентификаторами. На практике количество ключей может достигать десятков миллионов элементов.

-Поддерживаются все виды источников. При обновлении, данные (из файла, из таблицы) читаются целиком.
+Поддерживаются все виды источников. При обновлении данные (из файла, из таблицы) читаются целиком.
Пример конфигурации:

@@ -318,8 +321,6 @@ LAYOUT(CACHE(SIZE_IN_CELLS 1000000000))
        <write_buffer_size>1048576</write_buffer_size>
        <path>/var/lib/clickhouse/clickhouse_dictionaries/test_dict</path>
-
-        <max_stored_keys>1048576</max_stored_keys>
    </ssd_cache>
</layout>
```

@@ -327,8 +328,8 @@ LAYOUT(CACHE(SIZE_IN_CELLS 1000000000))
или

``` sql
-LAYOUT(CACHE(BLOCK_SIZE 4096 FILE_SIZE 16777216 READ_BUFFER_SIZE 1048576
-    PATH /var/lib/clickhouse/clickhouse_dictionaries/test_dict MAX_STORED_KEYS 1048576))
+LAYOUT(SSD_CACHE(BLOCK_SIZE 4096 FILE_SIZE 16777216 READ_BUFFER_SIZE 1048576
+    PATH /var/lib/clickhouse/clickhouse_dictionaries/test_dict))
```

### complex_key_ssd_cache {#complex-key-ssd-cache}

@@ -443,4 +444,3 @@ dictGetString('prefix', 'asn', tuple(IPv6StringToNum('2001:db8::1')))
Данные должны полностью помещаться в оперативной памяти.

-[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/dicts/external_dicts_dict_layout/)
diff --git a/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-lifetime.md b/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-lifetime.md
index cdf1b05fb37..9589353649d 100644
--- a/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-lifetime.md
+++ b/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-lifetime.md
@@ -28,7 +28,7 @@ LIFETIME(300)
...
```

-Настройка `<lifetime>0</lifetime>` запрещает обновление словарей.
+Настройка `<lifetime>0</lifetime>` (`LIFETIME(0)`) запрещает обновление словарей.

Можно задать интервал, внутри которого ClickHouse равномерно-случайно выберет время для обновления. Это необходимо для распределения нагрузки на источник словаря при обновлении на большом количестве серверов.

@@ -51,16 +51,19 @@ LIFETIME(300)
LIFETIME(MIN 300 MAX 360)
```

+Если `<min>0</min>` и `<max>0</max>`, ClickHouse не перегружает словарь по истечении времени.
+В этом случае ClickHouse может перезагрузить данные словаря, если изменился XML-файл с конфигурацией словаря или если была выполнена команда `SYSTEM RELOAD DICTIONARY`.
+
При обновлении словарей сервер ClickHouse применяет различную логику в зависимости от типа [источника](external-dicts-dict-sources.md):

-> - У текстового файла проверяется время модификации. Если время изменилось по отношению к запомненному ранее, то словарь обновляется.
-> - Для MySQL источника, время модификации проверяется запросом `SHOW TABLE STATUS` (для MySQL 8 необходимо отключить кеширование мета-информации в MySQL `set global information_schema_stats_expiry=0`.
-> - Словари из других источников по умолчанию обновляются каждый раз.
+- У текстового файла проверяется время модификации. Если время изменилось по отношению к запомненному ранее, то словарь обновляется.
+- Для источника MySQL время модификации проверяется запросом `SHOW TABLE STATUS` (для MySQL 8 необходимо отключить кеширование мета-информации в MySQL: `set global information_schema_stats_expiry=0`).
+- Словари из других источников по умолчанию обновляются каждый раз.

-Для других источников (ODBC, ClickHouse и т.д.) можно настроить запрос, который позволит обновлять словари только в случае их фактического изменения, а не каждый раз. Чтобы это сделать необходимо выполнить следующие условия/действия:
+Для других источников (ODBC, PostgreSQL, ClickHouse и т.д.) можно настроить запрос, который позволит обновлять словари только в случае их фактического изменения, а не каждый раз. Чтобы это сделать, необходимо выполнить следующие условия/действия:

-> - В таблице словаря должно быть поле, которое гарантированно изменяется при обновлении данных в источнике.
-> - В настройках источника указывается запрос, который получает изменяющееся поле.
Результат запроса сервер ClickHouse интерпретирует как строку и если эта строка изменилась по отношению к предыдущему состоянию, то словарь обновляется. Запрос следует указывать в поле `<invalidate_query>` настроек [источника](external-dicts-dict-sources.md).
+- В таблице словаря должно быть поле, которое гарантированно изменяется при обновлении данных в источнике.
+- В настройках источника указывается запрос, который получает изменяющееся поле. Результат запроса сервер ClickHouse интерпретирует как строку, и если эта строка изменилась по отношению к предыдущему состоянию, то словарь обновляется. Запрос следует указывать в поле `<invalidate_query>` настроек [источника](external-dicts-dict-sources.md).

Пример настройки:

@@ -83,4 +86,3 @@ SOURCE(ODBC(... invalidate_query 'SELECT update_time FROM dictionary_source wher
...
```

-[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/dicts/external_dicts_dict_lifetime/)
diff --git a/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md b/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md
index 13b6a93b6ae..a7999470330 100644
--- a/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md
+++ b/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md
@@ -65,9 +65,10 @@ SETTINGS(format_csv_allow_single_quotes = 0)
- СУБД:
    - [ODBC](#dicts-external_dicts_dict_sources-odbc)
    - [MySQL](#dicts-external_dicts_dict_sources-mysql)
+    - [PostgreSQL](#dicts-external_dicts_dict_sources-postgresql)
    - [ClickHouse](#dicts-external_dicts_dict_sources-clickhouse)
    - [MongoDB](#dicts-external_dicts_dict_sources-mongodb)
    - [Redis](#dicts-external_dicts_dict_sources-redis)

## Локальный файл {#dicts-external_dicts_dict_sources-local_file}

@@ -313,6 +315,7 @@ PRIMARY KEY id
SOURCE(ODBC(connection_string 'DSN=myconnection' table 'postgresql_table'))
LAYOUT(HASHED())
LIFETIME(MIN 300 MAX 360)
+```

Может понадобиться в `odbc.ini` указать полный путь до библиотеки с драйвером `DRIVER=/usr/local/lib/psqlodbcw.so`.

@@ -320,15 +323,15 @@ LIFETIME(MIN 300 MAX 360)

ОС Ubuntu.

-Установка драйвера: :
+Установка драйвера:

```bash
$ sudo apt-get install tdsodbc freetds-bin sqsh
```

-Настройка драйвера: :
+Настройка драйвера:

-``` bash
+```bash
    $ cat /etc/freetds/freetds.conf
    ...

    tds version = 7.0
    client charset = UTF-8
@@ -338,8 +341,11 @@ $ sudo apt-get install tdsodbc freetds-bin sqsh
    tds version = 7.0
    client charset = UTF-8

+    # тестирование TDS соединения
+    $ sqsh -S MSSQL -D database -U user -P password
+
+
    $ cat /etc/odbcinst.ini
-    ...
    [FreeTDS]
    Description = FreeTDS
@@ -348,8 +354,8 @@ $ sudo apt-get install tdsodbc freetds-bin sqsh
    FileUsage = 1
    UsageCount = 5

-    $ cat ~/.odbc.ini
-    ...
+    $ cat /etc/odbc.ini
+    # $ cat ~/.odbc.ini # если вы вошли из-под пользователя, из-под которого запущен ClickHouse
    [MSSQL]
    Description = FreeTDS
@@ -359,8 +365,15 @@ $ sudo apt-get install tdsodbc freetds-bin sqsh
    UID = test
    PWD = test
    Port = 1433
+
+
+    # (не обязательно) тест ODBC соединения (используйте `isql`, поставляемый вместе с пакетом [unixodbc](https://packages.debian.org/sid/unixodbc))
+    $ isql -v MSSQL "user" "password"
```

+Примечание:
+- Чтобы определить самую раннюю версию TDS, которая поддерживается определенной версией SQL Server, обратитесь к документации продукта или см. [MS-TDS Product Behavior](https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-tds/135d0ebe-5c4c-4a94-99bf-1811eccb9f4a).
+
Настройка словаря в ClickHouse:

``` xml
@@ -624,4 +637,92 @@ SOURCE(REDIS(
- `storage_type` – способ хранения ключей. Необходимо использовать `simple` для источников с одним столбцом ключей, `hash_map` – для источников с двумя столбцами ключей. Источники с более, чем двумя столбцами ключей, не поддерживаются. Может отсутствовать, значение по умолчанию `simple`.
- `db_index` – номер базы данных. Может отсутствовать, значение по умолчанию 0.

-[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/dicts/external_dicts_dict_sources/)
+### Cassandra {#dicts-external_dicts_dict_sources-cassandra}
+
+Пример настройки:
+
+``` xml
+<source>
+    <cassandra>
+        <host>localhost</host>
+        <port>9042</port>
+        <user>username</user>
+        <password>qwerty123</password>
+        <keyspace>database_name</keyspace>
+        <column_family>table_name</column_family>
+        <allow_filtering>1</allow_filtering>
+        <partition_key_prefix>1</partition_key_prefix>
+        <consistency>One</consistency>
+        <where>"SomeColumn" = 42</where>
+        <max_threads>8</max_threads>
+    </cassandra>
+</source>
+```
+
+Поля настройки:
+- `host` – имя хоста с установленной Cassandra или список хостов через запятую.
+- `port` – порт на серверах Cassandra. Если не указан, используется значение по умолчанию: 9042.
+- `user` – имя пользователя для соединения с Cassandra.
+- `password` – пароль для соединения с Cassandra.
+- `keyspace` – имя keyspace (базы данных).
+- `column_family` – имя семейства столбцов (таблицы).
+- `allow_filtering` – флаг, разрешающий или запрещающий потенциально дорогостоящие условия по столбцам ключа кластеризации. Значение по умолчанию: 1.
+- `partition_key_prefix` – количество столбцов ключа партиционирования в первичном ключе таблицы Cassandra.
+Необходимо для составления ключей словаря. Порядок ключевых столбцов в определении словаря должен быть таким же, как в Cassandra.
+Значение по умолчанию: 1 (первый ключевой столбец — ключ партиционирования, остальные ключевые столбцы — ключи кластеризации).
+- `consistency` – уровень консистентности. Возможные значения: `One`, `Two`, `Three`,
+  `All`, `EachQuorum`, `Quorum`, `LocalQuorum`, `LocalOne`, `Serial`, `LocalSerial`. Значение по умолчанию: `One`.
+- `where` – опциональный критерий выборки.
+- `max_threads` – максимальное количество потоков для загрузки данных из нескольких партиций в словарь.
+
+### PostgreSQL {#dicts-external_dicts_dict_sources-postgresql}
+
+Пример настройки:
+
+``` xml
+<source>
+  <postgresql>
+      <port>5432</port>
+      <user>clickhouse</user>
+      <password>qwerty</password>
+      <db>db_name</db>
+      <table>table_name</table>
+      <where>id=10</where>
+      <invalidate_query>SQL_QUERY</invalidate_query>
+  </postgresql>
+</source>
+```
+
+или
+
+``` sql
+SOURCE(POSTGRESQL(
+    port 5432
+    host 'postgresql-hostname'
+    user 'postgres_user'
+    password 'postgres_password'
+    db 'db_name'
+    table 'table_name'
+    replica(host 'example01-1' port 5432 priority 1)
+    replica(host 'example01-2' port 5432 priority 2)
+    where 'id=10'
+    invalidate_query 'SQL_QUERY'
+))
+```
+
+Поля настройки:
+
+- `host` – хост для соединения с PostgreSQL. Вы можете указать его для всех реплик или задать индивидуально для каждой реплики (внутри `<replica>`).
+- `port` – порт для соединения с PostgreSQL. Вы можете указать его для всех реплик или задать индивидуально для каждой реплики (внутри `<replica>`).
+- `user` – имя пользователя для соединения с PostgreSQL. Вы можете указать его для всех реплик или задать индивидуально для каждой реплики (внутри `<replica>`).
+- `password` – пароль для пользователя PostgreSQL.
+- `replica` – секция конфигурации реплик; таких секций может быть несколько.
+    - `replica/host` – хост PostgreSQL.
+    - `replica/port` – порт PostgreSQL.
+    - `replica/priority` – приоритет реплики. При попытке соединения ClickHouse перебирает реплики в порядке приоритета. Меньшее значение означает более высокий приоритет.
+- `db` – имя базы данных.
+- `table` – имя таблицы.
+- `where` – условие выборки. Синтаксис условий такой же, как для выражения `WHERE` в PostgreSQL, например, `id > 10 AND id < 20`. Необязательный параметр.
+- `invalidate_query` – запрос для проверки условия загрузки словаря. Необязательный параметр. Подробнее читайте в разделе [Обновление словарей](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-lifetime.md).
+
+
diff --git a/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-structure.md b/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-structure.md
index 6efbe706110..609ee225ce2 100644
--- a/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-structure.md
+++ b/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-structure.md
@@ -3,7 +3,7 @@ toc_priority: 44
toc_title: "Ключ и поля словаря"
---

-# Ключ и поля словаря {#kliuch-i-polia-slovaria}
+# Ключ и поля словаря {#dictionary-key-and-fields}

Секция `<structure>` описывает ключ словаря и поля, доступные для запросов.

@@ -88,7 +88,7 @@ PRIMARY KEY Id

- `PRIMARY KEY` – имя столбца с ключами.

-### Составной ключ {#sostavnoi-kliuch}
+### Составной ключ {#composite-key}

Ключом может быть кортеж (`tuple`) из полей произвольных типов. В этом случае [layout](external-dicts-dict-layout.md) должен быть `complex_key_hashed` или `complex_key_cache`.

@@ -159,14 +159,12 @@ CREATE DICTIONARY somename (
| Тег | Описание | Обязательный |
|------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------|
| `name` | Имя столбца. | Да |
-| `type` | Тип данных ClickHouse.<br/>ClickHouse пытается привести значение из словаря к заданному типу данных. Например, в случае MySQL, в таблице-источнике поле может быть `TEXT`, `VARCHAR`, `BLOB`, но загружено может быть как `String`. [Nullable](../../../sql-reference/data-types/nullable.md) не поддерживается. | Да |
-| `null_value` | Значение по умолчанию для несуществующего элемента.<br/>В примере это пустая строка. Нельзя указать значение `NULL`. | Да |
+| `type` | Тип данных ClickHouse.<br/>ClickHouse пытается привести значение из словаря к заданному типу данных. Например, в случае MySQL, в таблице-источнике поле может быть `TEXT`, `VARCHAR`, `BLOB`, но загружено может быть как `String`.<br/>[Nullable](../../../sql-reference/data-types/nullable.md) в настоящее время поддерживается для словарей [Flat](external-dicts-dict-layout.md#flat), [Hashed](external-dicts-dict-layout.md#dicts-external_dicts_dict_layout-hashed), [ComplexKeyHashed](external-dicts-dict-layout.md#complex-key-hashed), [Direct](external-dicts-dict-layout.md#direct), [ComplexKeyDirect](external-dicts-dict-layout.md#complex-key-direct), [RangeHashed](external-dicts-dict-layout.md#range-hashed), [Polygon](external-dicts-dict-polygon.md). Для словарей [Cache](external-dicts-dict-layout.md#cache), [ComplexKeyCache](external-dicts-dict-layout.md#complex-key-cache), [SSDCache](external-dicts-dict-layout.md#ssd-cache), [SSDComplexKeyCache](external-dicts-dict-layout.md#complex-key-ssd-cache) и [IPTrie](external-dicts-dict-layout.md#ip-trie) `Nullable`-типы не поддерживаются. | Да |
+| `null_value` | Значение по умолчанию для несуществующего элемента.<br/>В примере это пустая строка. Значение [NULL](../../syntax.md#null-literal) можно указывать только для типов `Nullable` (см. предыдущую строку с описанием типов). | Да |
| `expression` | [Выражение](../../syntax.md#syntax-expressions), которое ClickHouse выполняет со значением.<br/>Выражением может быть имя столбца в удаленной SQL базе. Таким образом, вы можете использовать его для создания псевдонима удаленного столбца.<br/><br/>Значение по умолчанию: нет выражения. | Нет |
-| `hierarchical` | Если `true`, то атрибут содержит ключ предка для текущего элемента. Смотрите [Иерархические словари](external-dicts-dict-hierarchical.md).<br/><br/>Default value: `false`. | No |
+| `hierarchical` | Если `true`, то атрибут содержит ключ предка для текущего элемента. Смотрите [Иерархические словари](external-dicts-dict-hierarchical.md).<br/><br/>Значение по умолчанию: `false`. | Нет |
| `is_object_id` | Признак того, что запрос выполняется к документу MongoDB по `ObjectID`.<br/><br/>Значение по умолчанию: `false`. | Нет |

-## Смотрите также {#smotrite-takzhe}
+**Смотрите также**

- [Функции для работы с внешними словарями](../../../sql-reference/functions/ext-dict-functions.md).
-
-[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/dicts/external_dicts_dict_structure/)
diff --git a/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict.md b/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict.md
index 7e35f59609d..4dc74200093 100644
--- a/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict.md
+++ b/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict.md
@@ -48,4 +48,3 @@ LIFETIME(...) -- Lifetime of dictionary in memory
- [structure](external-dicts-dict-structure.md) — Структура словаря. Ключ и атрибуты, которые можно получить по ключу.
- [lifetime](external-dicts-dict-lifetime.md) — Периодичность обновления словарей.

-[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/dicts/external_dicts_dict/)
diff --git a/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts.md b/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts.md
index 6467b5f82e4..04ef24b68c5 100644
--- a/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts.md
+++ b/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts.md
@@ -61,4 +61,3 @@ ClickHouse:
- [Ключ и поля словаря](external-dicts-dict-structure.md)
- [Функции для работы с внешними словарями](../../../sql-reference/functions/ext-dict-functions.md)

-[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/dicts/external_dicts/)
diff --git a/docs/ru/sql-reference/dictionaries/index.md b/docs/ru/sql-reference/dictionaries/index.md
index 238aa244967..59c7518d0c5 100644
--- a/docs/ru/sql-reference/dictionaries/index.md
+++ b/docs/ru/sql-reference/dictionaries/index.md
@@ -10,11 +10,8 @@ toc_title: "Введение"

ClickHouse поддерживает специальные функции для работы со словарями, которые можно использовать в запросах. Проще и эффективнее использовать словари с помощью функций, чем `JOIN` с таблицами-справочниками.

-В словаре нельзя хранить значения [NULL](../../sql-reference/syntax.md#null-literal).
-
ClickHouse поддерживает:

- [Встроенные словари](internal-dicts.md#internal_dicts) со специфическим [набором функций](../../sql-reference/dictionaries/external-dictionaries/index.md).
- [Подключаемые (внешние) словари](external-dictionaries/external-dicts.md#dicts-external-dicts) с [набором функций](../../sql-reference/dictionaries/external-dictionaries/index.md).

-[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/dicts/)
diff --git a/docs/ru/sql-reference/dictionaries/internal-dicts.md b/docs/ru/sql-reference/dictionaries/internal-dicts.md
index af7f13f7133..34e407ceacd 100644
--- a/docs/ru/sql-reference/dictionaries/internal-dicts.md
+++ b/docs/ru/sql-reference/dictionaries/internal-dicts.md
@@ -50,4 +50,3 @@ ClickHouse содержит встроенную возможность рабо
Также имеются функции для работы с идентификаторами операционных систем и поисковых систем Яндекс.Метрики, пользоваться которыми не нужно.
-[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/dicts/internal_dicts/) diff --git a/docs/ru/sql-reference/distributed-ddl.md b/docs/ru/sql-reference/distributed-ddl.md index 17c38cfe820..e03ecb893bc 100644 --- a/docs/ru/sql-reference/distributed-ddl.md +++ b/docs/ru/sql-reference/distributed-ddl.md @@ -15,5 +15,4 @@ CREATE TABLE IF NOT EXISTS all_hits ON CLUSTER cluster (p Date, i Int32) ENGINE Для корректного выполнения таких запросов необходимо на каждом хосте иметь одинаковое определение кластера (для упрощения синхронизации конфигов можете использовать подстановки из ZooKeeper). Также необходимо подключение к ZooKeeper серверам. Локальная версия запроса в конечном итоге будет выполнена на каждом хосте кластера, даже если некоторые хосты в данный момент не доступны. Гарантируется упорядоченность выполнения запросов в рамках одного хоста. -[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/distributed-ddl) \ No newline at end of file diff --git a/docs/ru/sql-reference/functions/arithmetic-functions.md b/docs/ru/sql-reference/functions/arithmetic-functions.md index 779e0a9fe4a..f587b7b5b5d 100644 --- a/docs/ru/sql-reference/functions/arithmetic-functions.md +++ b/docs/ru/sql-reference/functions/arithmetic-functions.md @@ -83,4 +83,3 @@ SELECT toTypeName(0), toTypeName(0 + 0), toTypeName(0 + 0 + 0), toTypeName(0 + 0 Вычисляет наименьшее общее кратное чисел. При делении на ноль или при делении минимального отрицательного числа на минус единицу, кидается исключение. -[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/functions/arithmetic_functions/) diff --git a/docs/ru/sql-reference/functions/array-functions.md b/docs/ru/sql-reference/functions/array-functions.md index dca645888a9..560795506a0 100644 --- a/docs/ru/sql-reference/functions/array-functions.md +++ b/docs/ru/sql-reference/functions/array-functions.md @@ -58,7 +58,7 @@ toc_title: "Массивы" arrayConcat(arrays) ``` -**Параметры** +**Аргументы** - `arrays` – произвольное количество элементов типа [Array](../../sql-reference/functions/array-functions.md) **Пример** @@ -108,7 +108,7 @@ SELECT has([1, 2, NULL], NULL) hasAll(set, subset) ``` -**Параметры** +**Аргументы** - `set` – массив любого типа с набором элементов. - `subset` – массив любого типа со значениями, которые проверяются на вхождение в `set`. @@ -146,7 +146,7 @@ hasAll(set, subset) hasAny(array1, array2) ``` -**Параметры** +**Аргументы** - `array1` – массив любого типа с набором элементов. - `array2` – массив любого типа с набором элементов. @@ -320,21 +320,21 @@ SELECT arrayEnumerateUniq([1, 1, 1, 2, 2, 2], [1, 1, 2, 1, 1, 2]) AS res arrayPopBack(array) ``` -**Параметры** +**Аргументы** -- `array` - Массив. +- `array` – массив. **Пример** ``` sql -SELECT arrayPopBack([1, 2, 3]) AS res +SELECT arrayPopBack([1, 2, 3]) AS res; ``` -text - - ┌─res───┐ - │ [1,2] │ - └───────┘ +``` text +┌─res───┐ +│ [1,2] │ +└───────┘ +``` ## arrayPopFront {#arraypopfront} @@ -344,14 +344,14 @@ text arrayPopFront(array) ``` -**Параметры** +**Аргументы** -- `array` - Массив. +- `array` – массив. **Пример** ``` sql -SELECT arrayPopFront([1, 2, 3]) AS res +SELECT arrayPopFront([1, 2, 3]) AS res; ``` ``` text @@ -368,15 +368,15 @@ SELECT arrayPopFront([1, 2, 3]) AS res arrayPushBack(array, single_value) ``` -**Параметры** +**Аргументы** -- `array` - Массив. -- `single_value` - Одиночное значение. В массив с числам можно добавить только числа, в массив со строками только строки. 
При добавлении чисел ClickHouse автоматически приводит тип `single_value` к типу данных массива. Подробнее о типах данных в ClickHouse читайте в разделе «[Типы данных](../../sql-reference/functions/array-functions.md#data_types)». Может быть равно `NULL`. Функция добавит элемент `NULL` в массив, а тип элементов массива преобразует в `Nullable`.
+- `array` – массив.
+- `single_value` – значение добавляемого элемента. В массив с числами можно добавить только числа, в массив со строками только строки. При добавлении чисел ClickHouse автоматически приводит тип `single_value` к типу данных массива. Подробнее о типах данных в ClickHouse читайте в разделе «[Типы данных](../../sql-reference/functions/array-functions.md#data_types)». Может быть равно `NULL`, в этом случае функция добавит элемент `NULL` в массив, а тип элементов массива преобразует в `Nullable`.

**Пример**

``` sql
-SELECT arrayPushBack(['a'], 'b') AS res
+SELECT arrayPushBack(['a'], 'b') AS res;
```

``` text
┌─res───────┐
│ ['a','b'] │
└───────────┘
```

## arrayPushFront {#arraypushfront}

Добавляет один элемент в начало массива.

``` sql
arrayPushFront(array, single_value)
```

-**Параметры**
+**Аргументы**

-- `array` - Массив.
-- `single_value` - Одиночное значение. В массив с числам можно добавить только числа, в массив со строками только строки. При добавлении чисел ClickHouse автоматически приводит тип `single_value` к типу данных массива. Подробнее о типах данных в ClickHouse читайте в разделе «[Типы данных](../../sql-reference/functions/array-functions.md#data_types)». Может быть равно `NULL`. Функция добавит элемент `NULL` в массив, а тип элементов массива преобразует в `Nullable`.
+- `array` – массив.
+- `single_value` – значение добавляемого элемента. В массив с числами можно добавить только числа, в массив со строками только строки. При добавлении чисел ClickHouse автоматически приводит тип `single_value` к типу данных массива. Подробнее о типах данных в ClickHouse читайте в разделе «[Типы данных](../../sql-reference/functions/array-functions.md#data_types)». Может быть равно `NULL`, в этом случае функция добавит элемент `NULL` в массив, а тип элементов массива преобразует в `Nullable`.

**Пример**

``` sql
-SELECT arrayPushFront(['b'], 'a') AS res
+SELECT arrayPushFront(['b'], 'a') AS res;
```

``` text
┌─res───────┐
│ ['a','b'] │
└───────────┘
```

## arrayResize {#arrayresize}

Изменяет длину массива.

``` sql
arrayResize(array, size[, extender])
```

-**Параметры**
+**Аргументы**

- `array` — массив.
- `size` — необходимая длина массива.
@@ -433,7 +433,7 @@ arrayResize(array, size[, extender])

**Примеры вызовов**

``` sql
-SELECT arrayResize([1], 3)
+SELECT arrayResize([1], 3);
```

``` text
┌─arrayResize([1], 3)─┐
│ [1,0,0]             │
└─────────────────────┘
```

``` sql
-SELECT arrayResize([1], 3, NULL)
+SELECT arrayResize([1], 3, NULL);
```

``` text
┌─arrayResize([1], 3, NULL)─┐
│ [1,NULL,NULL]             │
└───────────────────────────┘
```

## arraySlice {#arrayslice}

Возвращает срез массива.

``` sql
arraySlice(array, offset[, length])
```

-**Параметры**
+**Аргументы**

-- `array` - Массив данных.
-- `offset` - Отступ от края массива. Положительное значение - отступ слева, отрицательное значение - отступ справа. Отсчет элементов массива начинается с 1.
-- `length` - Длина необходимого среза. Если указать отрицательное значение, то функция вернёт открытый срез `[offset, array_length - length)`. Если не указать значение, то функция вернёт срез `[offset, the_end_of_array]`.
+- `array` – массив данных.
+- `offset` – отступ от края массива. Положительное значение - отступ слева, отрицательное значение - отступ справа. Отсчет элементов массива начинается с 1.
+- `length` – длина необходимого среза.
Если указать отрицательное значение, то функция вернёт открытый срез `[offset, array_length - length)`. Если не указать значение, то функция вернёт срез `[offset, the_end_of_array]`. **Пример** ``` sql -SELECT arraySlice([1, 2, NULL, 4, 5], 2, 3) AS res +SELECT arraySlice([1, 2, NULL, 4, 5], 2, 3) AS res; ``` ``` text @@ -702,9 +702,9 @@ SELECT arrayReverseSort((x, y) -> -y, [4, 3, 5], [1, 2, 3]) AS res; arrayDifference(array) ``` -**Параметры** +**Аргументы** -- `array` – [Массив](https://clickhouse.tech/docs/ru/data_types/array/). +- `array` – [массив](https://clickhouse.tech/docs/ru/data_types/array/). **Возвращаемое значение** @@ -715,10 +715,10 @@ arrayDifference(array) Запрос: ``` sql -SELECT arrayDifference([1, 2, 3, 4]) +SELECT arrayDifference([1, 2, 3, 4]); ``` -Ответ: +Результат: ``` text ┌─arrayDifference([1, 2, 3, 4])─┐ @@ -731,10 +731,10 @@ SELECT arrayDifference([1, 2, 3, 4]) Запрос: ``` sql -SELECT arrayDifference([0, 10000000000000000000]) +SELECT arrayDifference([0, 10000000000000000000]); ``` -Ответ: +Результат: ``` text ┌─arrayDifference([0, 10000000000000000000])─┐ @@ -752,9 +752,9 @@ SELECT arrayDifference([0, 10000000000000000000]) arrayDistinct(array) ``` -**Параметры** +**Аргументы** -- `array` – [Массив](https://clickhouse.tech/docs/ru/data_types/array/). +- `array` – [массив](https://clickhouse.tech/docs/ru/data_types/array/). **Возвращаемое значение** @@ -765,7 +765,7 @@ arrayDistinct(array) Запрос: ``` sql -SELECT arrayDistinct([1, 2, 2, 3, 1]) +SELECT arrayDistinct([1, 2, 2, 3, 1]); ``` Ответ: @@ -820,7 +820,7 @@ SELECT arrayReduce(agg_func, arr1, arr2, ..., arrN) ``` -**Параметры** +**Аргументы** - `agg_func` — Имя агрегатной функции, которая должна быть константой [string](../../sql-reference/data-types/string.md). - `arr` — Любое количество столбцов типа [array](../../sql-reference/data-types/array.md) в качестве параметров агрегатной функции. @@ -832,10 +832,10 @@ arrayReduce(agg_func, arr1, arr2, ..., arrN) Запрос: ```sql -SELECT arrayReduce('max', [1, 2, 3]) +SELECT arrayReduce('max', [1, 2, 3]); ``` -Ответ: +Результат: ```text ┌─arrayReduce('max', [1, 2, 3])─┐ @@ -850,10 +850,10 @@ SELECT arrayReduce('max', [1, 2, 3]) Запрос: ```sql -SELECT arrayReduce('maxIf', [3, 5], [1, 0]) +SELECT arrayReduce('maxIf', [3, 5], [1, 0]); ``` -Ответ: +Результат: ```text ┌─arrayReduce('maxIf', [3, 5], [1, 0])─┐ @@ -866,10 +866,10 @@ SELECT arrayReduce('maxIf', [3, 5], [1, 0]) Запрос: ```sql -SELECT arrayReduce('uniqUpTo(3)', [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]) +SELECT arrayReduce('uniqUpTo(3)', [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]); ``` -Ответ: +Результат: ```text ┌─arrayReduce('uniqUpTo(3)', [1, 2, 3, 4, 5, 6, 7, 8, 9, 10])─┐ @@ -887,15 +887,15 @@ SELECT arrayReduce('uniqUpTo(3)', [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]) arrayReduceInRanges(agg_func, ranges, arr1, arr2, ..., arrN) ``` -**Параметры** +**Аргументы** -- `agg_func` — Имя агрегатной функции, которая должна быть [строковой](../../sql-reference/data-types/string.md) константой. -- `ranges` — Диапазоны для агрегирования, которые должны быть [массивом](../../sql-reference/data-types/array.md) of [кортежей](../../sql-reference/data-types/tuple.md) который содержит индекс и длину каждого диапазона. -- `arr` — Любое количество столбцов типа [Array](../../sql-reference/data-types/array.md) в качестве параметров агрегатной функции. +- `agg_func` — имя агрегатной функции, которая должна быть [строковой](../../sql-reference/data-types/string.md) константой. 
+- `ranges` — диапазоны для агрегирования, которые должны быть [массивом](../../sql-reference/data-types/array.md) из [кортежей](../../sql-reference/data-types/tuple.md), содержащих индекс и длину каждого диапазона.
+- `arr` — любое количество столбцов типа [Array](../../sql-reference/data-types/array.md) в качестве параметров агрегатной функции.

**Возвращаемое значение**

-- Массив, содержащий результаты агрегатной функции для указанных диапазонов.
+- Массив, содержащий результаты агрегатной функции для указанных диапазонов.

Тип: [Array](../../sql-reference/data-types/array.md).

@@ -911,7 +911,7 @@ SELECT arrayReduceInRanges(
) AS res
```

-Ответ:
+Результат:

```text
┌─res─────────────────────────┐
@@ -958,14 +958,14 @@ flatten(array_of_arrays)

Синоним: `flatten`.

-**Параметры**
+**Аргументы**

-- `array_of_arrays` — [Массив](../../sql-reference/functions/array-functions.md) массивов. Например, `[[1,2,3], [4,5]]`.
+- `array_of_arrays` — [массив](../../sql-reference/functions/array-functions.md) массивов. Например, `[[1,2,3], [4,5]]`.

**Примеры**

``` sql
-SELECT flatten([[[1]], [[2], [3]]])
+SELECT flatten([[[1]], [[2], [3]]]);
```

``` text
@@ -984,9 +984,9 @@ SELECT flatten([[[1]], [[2], [3]]])
arrayCompact(arr)
```

-**Параметры**
+**Аргументы**

-`arr` — [Массив](../../sql-reference/functions/array-functions.md) для обхода.
+`arr` — [массив](../../sql-reference/functions/array-functions.md) для обхода.

**Возвращаемое значение**

@@ -999,10 +999,10 @@ arrayCompact(arr)

Запрос:

``` sql
-SELECT arrayCompact([1, 1, nan, nan, 2, 3, 3, 3])
+SELECT arrayCompact([1, 1, nan, nan, 2, 3, 3, 3]);
```

-Ответ:
+Результат:

``` text
┌─arrayCompact([1, 1, nan, nan, 2, 3, 3, 3])─┐
@@ -1020,9 +1020,9 @@ SELECT arrayCompact([1, 1, nan, nan, 2, 3, 3, 3])
arrayZip(arr1, arr2, ..., arrN)
```

-**Параметры**
+**Аргументы**

-- `arrN` — [Массив](../data-types/array.md).
+- `arrN` — [массив](../data-types/array.md).

Функция принимает любое количество массивов, которые могут быть различных типов. Все массивы должны иметь одинаковую длину.

@@ -1037,10 +1037,10 @@ arrayZip(arr1, arr2, ..., arrN)

Запрос:

``` sql
-SELECT arrayZip(['a', 'b', 'c'], [5, 2, 1])
+SELECT arrayZip(['a', 'b', 'c'], [5, 2, 1]);
```

-Ответ:
+Результат:

``` text
┌─arrayZip(['a', 'b', 'c'], [5, 2, 1])─┐
@@ -1067,7 +1067,7 @@ SELECT arrayMap(x -> (x + 2), [1, 2, 3]) as res;
Следующий пример показывает, как создать кортежи из элементов разных массивов:

``` sql
-SELECT arrayMap((x, y) -> (x, y), [1, 2, 3], [4, 5, 6]) AS res
+SELECT arrayMap((x, y) -> (x, y), [1, 2, 3], [4, 5, 6]) AS res;
```

``` text
@@ -1111,6 +1111,78 @@ SELECT
Функция `arrayFilter` является [функцией высшего порядка](../../sql-reference/functions/index.md#higher-order-functions) — в качестве первого аргумента ей нужно передать лямбда-функцию, и этот аргумент не может быть опущен.

+## arrayFill(func, arr1, …) {#array-fill}
+
+Перебирает `arr1` от первого элемента к последнему и заменяет `arr1[i]` на `arr1[i - 1]`, если `func` вернула 0. Первый элемент `arr1` остаётся неизменным.
+
+Примеры:
+
+``` sql
+SELECT arrayFill(x -> not isNull(x), [1, null, 3, 11, 12, null, null, 5, 6, 14, null, null]) AS res
+```
+
+``` text
+┌─res──────────────────────────────┐
+│ [1,1,3,11,12,12,12,5,6,14,14,14] │
+└──────────────────────────────────┘
+```
+
+Функция `arrayFill` является [функцией высшего порядка](../../sql-reference/functions/index.md#higher-order-functions) — в качестве первого аргумента ей нужно передать лямбда-функцию, и этот аргумент не может быть опущен.
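+
+Набросок с несколькими массивами (условие зависит от элементов второго массива; вывод примерный):
+
+``` sql
+SELECT arrayFill((x, flag) -> flag != 0, [10, 20, 30], [1, 0, 1]) AS res;
+```
+
+``` text
+┌─res────────┐
+│ [10,10,30] │
+└────────────┘
+```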
+
+## arrayReverseFill(func, arr1, …) {#array-reverse-fill}
+
+Перебирает `arr1` от последнего элемента к первому и заменяет `arr1[i]` на `arr1[i + 1]`, если `func` вернула 0. Последний элемент `arr1` остаётся неизменным.
+
+Примеры:
+
+``` sql
+SELECT arrayReverseFill(x -> not isNull(x), [1, null, 3, 11, 12, null, null, 5, 6, 14, null, null]) AS res
+```
+
+``` text
+┌─res────────────────────────────────┐
+│ [1,3,3,11,12,5,5,5,6,14,NULL,NULL] │
+└────────────────────────────────────┘
+```
+
+Функция `arrayReverseFill` является [функцией высшего порядка](../../sql-reference/functions/index.md#higher-order-functions) — в качестве первого аргумента ей нужно передать лямбда-функцию, и этот аргумент не может быть опущен.
+
+## arraySplit(func, arr1, …) {#array-split}
+
+Разделяет массив `arr1` на несколько. Если `func` возвращает не 0, то массив разделяется по левую сторону от элемента. Массив не разбивается перед первым элементом.
+
+Примеры:
+
+``` sql
+SELECT arraySplit((x, y) -> y, [1, 2, 3, 4, 5], [1, 0, 0, 1, 0]) AS res
+```
+
+``` text
+┌─res─────────────┐
+│ [[1,2,3],[4,5]] │
+└─────────────────┘
+```
+
+Функция `arraySplit` является [функцией высшего порядка](../../sql-reference/functions/index.md#higher-order-functions) — в качестве первого аргумента ей нужно передать лямбда-функцию, и этот аргумент не может быть опущен.
+
+## arrayReverseSplit(func, arr1, …) {#array-reverse-split}
+
+Разделяет массив `arr1` на несколько. Если `func` возвращает не 0, то массив разделяется по правую сторону от элемента. Массив не разбивается после последнего элемента.
+
+Примеры:
+
+``` sql
+SELECT arrayReverseSplit((x, y) -> y, [1, 2, 3, 4, 5], [1, 0, 0, 1, 0]) AS res
+```
+
+``` text
+┌─res───────────────┐
+│ [[1],[2,3,4],[5]] │
+└───────────────────┘
+```
+
+Функция `arrayReverseSplit` является [функцией высшего порядка](../../sql-reference/functions/index.md#higher-order-functions) — в качестве первого аргумента ей нужно передать лямбда-функцию, и этот аргумент не может быть опущен.
+
## arrayExists(\[func,\] arr1, …) {#arrayexistsfunc-arr1}

Возвращает 1, если существует хотя бы один элемент массива `arr`, для которого функция func возвращает не 0. Иначе возвращает 0.
@@ -1137,7 +1209,7 @@

## arrayMin {#array-min}

-Возвращает значение минимального элемента в исходном массиве.
+Возвращает значение минимального элемента в исходном массиве.

Если передана функция `func`, возвращается минимум из элементов массива, преобразованных этой функцией.

@@ -1149,7 +1221,7 @@ SELECT
arrayMin([func,] arr)
```

-**Параметры**
+**Аргументы**

- `func` — функция. [Expression](../../sql-reference/data-types/special-data-types/expression.md).
- `arr` — массив. [Array](../../sql-reference/data-types/array.md).
@@ -1192,7 +1264,7 @@ SELECT arrayMin(x -> (-x), [1, 2, 4]) AS res;

## arrayMax {#array-max}

-Возвращает значение максимального элемента в исходном массиве.
+Возвращает значение максимального элемента в исходном массиве.

Если передана функция `func`, возвращается максимум из элементов массива, преобразованных этой функцией.

@@ -1204,7 +1276,7 @@ SELECT arrayMin(x -> (-x), [1, 2, 4]) AS res;
arrayMax([func,] arr)
```

-**Параметры**
+**Аргументы**

- `func` — функция. [Expression](../../sql-reference/data-types/special-data-types/expression.md).
- `arr` — массив. [Array](../../sql-reference/data-types/array.md).
@@ -1247,7 +1319,7 @@ SELECT arrayMax(x -> (-x), [1, 2, 4]) AS res;

## arraySum {#array-sum}

-Возвращает сумму элементов в исходном массиве.
+Возвращает сумму элементов в исходном массиве.
Если передана функция `func`, возвращается сумма элементов массива, преобразованных этой функцией. @@ -1259,10 +1331,10 @@ SELECT arrayMax(x -> (-x), [1, 2, 4]) AS res; arraySum([func,] arr) ``` -**Параметры** +**Аргументы** - `func` — функция. [Expression](../../sql-reference/data-types/special-data-types/expression.md). -- `arr` — массив. [Array](../../sql-reference/data-types/array.md). +- `arr` — массив. [Array](../../sql-reference/data-types/array.md). **Возвращаемое значение** @@ -1302,7 +1374,7 @@ SELECT arraySum(x -> x*x, [2, 3]) AS res; ## arrayAvg {#array-avg} -Возвращает среднее значение элементов в исходном массиве. +Возвращает среднее значение элементов в исходном массиве. Если передана функция `func`, возвращается среднее значение элементов массива, преобразованных этой функцией. @@ -1314,10 +1386,10 @@ SELECT arraySum(x -> x*x, [2, 3]) AS res; arrayAvg([func,] arr) ``` -**Параметры** +**Аргументы** - `func` — функция. [Expression](../../sql-reference/data-types/special-data-types/expression.md). -- `arr` — массив. [Array](../../sql-reference/data-types/array.md). +- `arr` — массив. [Array](../../sql-reference/data-types/array.md). **Возвращаемое значение** @@ -1355,7 +1427,7 @@ SELECT arrayAvg(x -> (x * x), [2, 4]) AS res; └─────┘ ``` -**Синтаксис** +**Синтаксис** ``` sql arraySum(arr) @@ -1367,9 +1439,9 @@ arraySum(arr) Тип: [Int](../../sql-reference/data-types/int-uint.md) или [Float](../../sql-reference/data-types/float.md). -**Параметры** +**Аргументы** -- `arr` — [Массив](../../sql-reference/data-types/array.md). +- `arr` — [массив](../../sql-reference/data-types/array.md). **Примеры** @@ -1429,7 +1501,8 @@ SELECT arrayCumSum([1, 1, 1, 1]) AS res arrayAUC(arr_scores, arr_labels) ``` -**Параметры** +**Аргументы** + - `arr_scores` — оценка, которую дает модель предсказания. - `arr_labels` — ярлыки выборок, обычно 1 для содержательных выборок и 0 для бессодержательных выборок. @@ -1444,10 +1517,10 @@ arrayAUC(arr_scores, arr_labels) Запрос: ``` sql -select arrayAUC([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1]) +SELECT arrayAUC([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1]); ``` -Ответ: +Результат: ``` text ┌─arrayAUC([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1])─┐ @@ -1455,4 +1528,3 @@ select arrayAUC([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1]) └────────────────────────────────────────---──┘ ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/functions/array_functions/) diff --git a/docs/ru/sql-reference/functions/array-join.md b/docs/ru/sql-reference/functions/array-join.md index ed67d30062b..3e3cf5c4011 100644 --- a/docs/ru/sql-reference/functions/array-join.md +++ b/docs/ru/sql-reference/functions/array-join.md @@ -32,4 +32,3 @@ SELECT arrayJoin([1, 2, 3] AS src) AS dst, 'Hello', src └─────┴───────────┴─────────┘ ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/functions/array_join/) diff --git a/docs/ru/sql-reference/functions/bit-functions.md b/docs/ru/sql-reference/functions/bit-functions.md index 79ea05f4bd7..a5124e67235 100644 --- a/docs/ru/sql-reference/functions/bit-functions.md +++ b/docs/ru/sql-reference/functions/bit-functions.md @@ -31,10 +31,10 @@ toc_title: "Битовые функции" SELECT bitTest(number, index) ``` -**Параметры** +**Аргументы** - `number` – целое число. -- `index` – position of bit. +- `index` – позиция бита. 
**Возвращаемое значение** @@ -49,10 +49,10 @@ SELECT bitTest(number, index) Запрос: ``` sql -SELECT bitTest(43, 1) +SELECT bitTest(43, 1); ``` -Ответ: +Результат: ``` text ┌─bitTest(43, 1)─┐ @@ -65,10 +65,10 @@ SELECT bitTest(43, 1) Запрос: ``` sql -SELECT bitTest(43, 2) +SELECT bitTest(43, 2); ``` -Ответ: +Результат: ``` text ┌─bitTest(43, 2)─┐ @@ -93,7 +93,7 @@ SELECT bitTest(43, 2) SELECT bitTestAll(number, index1, index2, index3, index4, ...) ``` -**Параметры** +**Аргументы** - `number` – целое число. - `index1`, `index2`, `index3`, `index4` – позиция бита. Например, конъюнкция для набора позиций `index1`, `index2`, `index3`, `index4` является истинной, если все его позиции истинны `index1` ⋀ `index2` ⋀ `index3` ⋀ `index4`. @@ -111,10 +111,10 @@ SELECT bitTestAll(number, index1, index2, index3, index4, ...) Запрос: ``` sql -SELECT bitTestAll(43, 0, 1, 3, 5) +SELECT bitTestAll(43, 0, 1, 3, 5); ``` -Ответ: +Результат: ``` text ┌─bitTestAll(43, 0, 1, 3, 5)─┐ @@ -127,10 +127,10 @@ SELECT bitTestAll(43, 0, 1, 3, 5) Запрос: ``` sql -SELECT bitTestAll(43, 0, 1, 3, 5, 2) +SELECT bitTestAll(43, 0, 1, 3, 5, 2); ``` -Ответ: +Результат: ``` text ┌─bitTestAll(43, 0, 1, 3, 5, 2)─┐ @@ -155,7 +155,7 @@ SELECT bitTestAll(43, 0, 1, 3, 5, 2) SELECT bitTestAny(number, index1, index2, index3, index4, ...) ``` -**Параметры** +**Аргументы** - `number` – целое число. - `index1`, `index2`, `index3`, `index4` – позиции бита. @@ -173,10 +173,10 @@ SELECT bitTestAny(number, index1, index2, index3, index4, ...) Запрос: ``` sql -SELECT bitTestAny(43, 0, 2) +SELECT bitTestAny(43, 0, 2); ``` -Ответ: +Результат: ``` text ┌─bitTestAny(43, 0, 2)─┐ @@ -189,10 +189,10 @@ SELECT bitTestAny(43, 0, 2) Запрос: ``` sql -SELECT bitTestAny(43, 4, 2) +SELECT bitTestAny(43, 4, 2); ``` -Ответ: +Результат: ``` text ┌─bitTestAny(43, 4, 2)─┐ @@ -210,9 +210,9 @@ SELECT bitTestAny(43, 4, 2) bitCount(x) ``` -**Параметры** +**Аргументы** -- `x` — [Целое число](../../sql-reference/functions/bit-functions.md) или [число с плавающей запятой](../../sql-reference/functions/bit-functions.md). Функция использует представление числа в памяти, что позволяет поддержать числа с плавающей запятой. +- `x` — [целое число](../../sql-reference/functions/bit-functions.md) или [число с плавающей запятой](../../sql-reference/functions/bit-functions.md). Функция использует представление числа в памяти, что позволяет поддержать числа с плавающей запятой. **Возвращаемое значение** @@ -229,7 +229,7 @@ bitCount(x) Запрос: ``` sql -SELECT bitCount(333) +SELECT bitCount(333); ``` Результат: @@ -240,4 +240,53 @@ SELECT bitCount(333) └───────────────┘ ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/functions/bit_functions/) +## bitHammingDistance {#bithammingdistance} + +Возвращает [расстояние Хэмминга](https://ru.wikipedia.org/wiki/%D0%A0%D0%B0%D1%81%D1%81%D1%82%D0%BE%D1%8F%D0%BD%D0%B8%D0%B5_%D0%A5%D1%8D%D0%BC%D0%BC%D0%B8%D0%BD%D0%B3%D0%B0) между битовыми представлениями двух целых чисел. Может быть использовано с функциями [SimHash](../../sql-reference/functions/hash-functions.md#ngramsimhash) для проверки двух строк на схожесть. Чем меньше расстояние, тем больше вероятность, что строки совпадают. + +**Синтаксис** + +``` sql +bitHammingDistance(int1, int2) +``` + +**Аргументы** + +- `int1` — первое целое число. [Int64](../../sql-reference/data-types/int-uint.md). +- `int2` — второе целое число. [Int64](../../sql-reference/data-types/int-uint.md). + +**Возвращаемое значение** + +- Расстояние Хэмминга. 
+ +Тип: [UInt8](../../sql-reference/data-types/int-uint.md). + +**Примеры** + +Запрос: + +``` sql +SELECT bitHammingDistance(111, 121); +``` + +Результат: + +``` text +┌─bitHammingDistance(111, 121)─┐ +│ 3 │ +└──────────────────────────────┘ +``` + +Используя [SimHash](../../sql-reference/functions/hash-functions.md#ngramsimhash): + +``` sql +SELECT bitHammingDistance(ngramSimHash('cat ate rat'), ngramSimHash('rat ate cat')); +``` + +Результат: + +``` text +┌─bitHammingDistance(ngramSimHash('cat ate rat'), ngramSimHash('rat ate cat'))─┐ +│ 5 │ +└──────────────────────────────────────────────────────────────────────────────┘ +``` diff --git a/docs/ru/sql-reference/functions/bitmap-functions.md b/docs/ru/sql-reference/functions/bitmap-functions.md index cd0ddee01a6..3da729664d0 100644 --- a/docs/ru/sql-reference/functions/bitmap-functions.md +++ b/docs/ru/sql-reference/functions/bitmap-functions.md @@ -13,19 +13,19 @@ toc_title: "Функции для битмапов" bitmapBuild(array) ``` -**Параметры** +**Аргументы** - `array` – массив типа `UInt*`. **Пример** ``` sql -SELECT bitmapBuild([1, 2, 3, 4, 5]) AS res, toTypeName(res) +SELECT bitmapBuild([1, 2, 3, 4, 5]) AS res, toTypeName(res); ``` ``` text ┌─res─┬─toTypeName(bitmapBuild([1, 2, 3, 4, 5]))─────┐ -│  │ AggregateFunction(groupBitmap, UInt8) │ +│ │ AggregateFunction(groupBitmap, UInt8) │ └─────┴──────────────────────────────────────────────┘ ``` @@ -37,14 +37,14 @@ SELECT bitmapBuild([1, 2, 3, 4, 5]) AS res, toTypeName(res) bitmapToArray(bitmap) ``` -**Параметры** +**Аргументы** - `bitmap` – битовый массив. **Пример** ``` sql -SELECT bitmapToArray(bitmapBuild([1, 2, 3, 4, 5])) AS res +SELECT bitmapToArray(bitmapBuild([1, 2, 3, 4, 5])) AS res; ``` ``` text @@ -63,11 +63,11 @@ SELECT bitmapToArray(bitmapBuild([1, 2, 3, 4, 5])) AS res bitmapSubsetLimit(bitmap, range_start, cardinality_limit) ``` -**Параметры** +**Аргументы** -- `bitmap` – Битмап. [Bitmap object](#bitmap_functions-bitmapbuild). +- `bitmap` – битмап. [Bitmap object](#bitmap_functions-bitmapbuild). -- `range_start` – Начальная точка подмножества. [UInt32](../../sql-reference/functions/bitmap-functions.md#bitmap-functions). +- `range_start` – начальная точка подмножества. [UInt32](../../sql-reference/functions/bitmap-functions.md#bitmap-functions). - `cardinality_limit` – Верхний предел подмножества. [UInt32](../../sql-reference/functions/bitmap-functions.md#bitmap-functions). **Возвращаемое значение** @@ -81,10 +81,10 @@ bitmapSubsetLimit(bitmap, range_start, cardinality_limit) Запрос: ``` sql -SELECT bitmapToArray(bitmapSubsetLimit(bitmapBuild([0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,100,200,500]), toUInt32(30), toUInt32(200))) AS res +SELECT bitmapToArray(bitmapSubsetLimit(bitmapBuild([0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,100,200,500]), toUInt32(30), toUInt32(200))) AS res; ``` -Ответ: +Результат: ``` text ┌─res───────────────────────┐ @@ -100,12 +100,11 @@ SELECT bitmapToArray(bitmapSubsetLimit(bitmapBuild([0,1,2,3,4,5,6,7,8,9,10,11,12 bitmapContains(haystack, needle) ``` -**Параметры** +**Аргументы** - `haystack` – [объект Bitmap](#bitmap_functions-bitmapbuild), в котором функция ищет значение. - `needle` – значение, которое функция ищет. Тип — [UInt32](../../sql-reference/data-types/int-uint.md). - **Возвращаемые значения** - 0 — если в `haystack` нет `needle`. 
@@ -116,7 +115,7 @@ bitmapContains(haystack, needle) **Пример** ``` sql -SELECT bitmapContains(bitmapBuild([1,5,7,9]), toUInt32(9)) AS res +SELECT bitmapContains(bitmapBuild([1,5,7,9]), toUInt32(9)) AS res; ``` ``` text @@ -135,7 +134,7 @@ bitmapHasAny(bitmap1, bitmap2) Если вы уверены, что `bitmap2` содержит строго один элемент, используйте функцию [bitmapContains](#bitmap_functions-bitmapcontains). Она работает эффективнее. -**Параметры** +**Аргументы** - `bitmap*` – массив любого типа с набором элементов. @@ -147,7 +146,7 @@ bitmapHasAny(bitmap1, bitmap2) **Пример** ``` sql -SELECT bitmapHasAny(bitmapBuild([1,2,3]),bitmapBuild([3,4,5])) AS res +SELECT bitmapHasAny(bitmapBuild([1,2,3]),bitmapBuild([3,4,5])) AS res; ``` ``` text @@ -165,14 +164,14 @@ SELECT bitmapHasAny(bitmapBuild([1,2,3]),bitmapBuild([3,4,5])) AS res bitmapHasAll(bitmap,bitmap) ``` -**Параметры** +**Аргументы** - `bitmap` – битовый массив. **Пример** ``` sql -SELECT bitmapHasAll(bitmapBuild([1,2,3]),bitmapBuild([3,4,5])) AS res +SELECT bitmapHasAll(bitmapBuild([1,2,3]),bitmapBuild([3,4,5])) AS res; ``` ``` text @@ -189,14 +188,14 @@ SELECT bitmapHasAll(bitmapBuild([1,2,3]),bitmapBuild([3,4,5])) AS res bitmapAnd(bitmap,bitmap) ``` -**Параметры** +**Аргументы** - `bitmap` – битовый массив. **Пример** ``` sql -SELECT bitmapToArray(bitmapAnd(bitmapBuild([1,2,3]),bitmapBuild([3,4,5]))) AS res +SELECT bitmapToArray(bitmapAnd(bitmapBuild([1,2,3]),bitmapBuild([3,4,5]))) AS res; ``` ``` text @@ -213,14 +212,14 @@ SELECT bitmapToArray(bitmapAnd(bitmapBuild([1,2,3]),bitmapBuild([3,4,5]))) AS re bitmapOr(bitmap,bitmap) ``` -**Параметры** +**Аргументы** - `bitmap` – битовый массив. **Пример** ``` sql -SELECT bitmapToArray(bitmapOr(bitmapBuild([1,2,3]),bitmapBuild([3,4,5]))) AS res +SELECT bitmapToArray(bitmapOr(bitmapBuild([1,2,3]),bitmapBuild([3,4,5]))) AS res; ``` ``` text @@ -237,14 +236,14 @@ SELECT bitmapToArray(bitmapOr(bitmapBuild([1,2,3]),bitmapBuild([3,4,5]))) AS res bitmapXor(bitmap,bitmap) ``` -**Параметры** +**Аргументы** - `bitmap` – битовый массив. **Пример** ``` sql -SELECT bitmapToArray(bitmapXor(bitmapBuild([1,2,3]),bitmapBuild([3,4,5]))) AS res +SELECT bitmapToArray(bitmapXor(bitmapBuild([1,2,3]),bitmapBuild([3,4,5]))) AS res; ``` ``` text @@ -261,14 +260,14 @@ SELECT bitmapToArray(bitmapXor(bitmapBuild([1,2,3]),bitmapBuild([3,4,5]))) AS re bitmapAndnot(bitmap,bitmap) ``` -**Параметры** +**Аргументы** - `bitmap` – битовый массив. **Пример** ``` sql -SELECT bitmapToArray(bitmapAndnot(bitmapBuild([1,2,3]),bitmapBuild([3,4,5]))) AS res +SELECT bitmapToArray(bitmapAndnot(bitmapBuild([1,2,3]),bitmapBuild([3,4,5]))) AS res; ``` ``` text @@ -285,14 +284,14 @@ SELECT bitmapToArray(bitmapAndnot(bitmapBuild([1,2,3]),bitmapBuild([3,4,5]))) AS bitmapCardinality(bitmap) ``` -**Параметры** +**Аргументы** - `bitmap` – битовый массив. **Пример** ``` sql -SELECT bitmapCardinality(bitmapBuild([1, 2, 3, 4, 5])) AS res +SELECT bitmapCardinality(bitmapBuild([1, 2, 3, 4, 5])) AS res; ``` ``` text @@ -309,7 +308,7 @@ SELECT bitmapCardinality(bitmapBuild([1, 2, 3, 4, 5])) AS res bitmapAndCardinality(bitmap,bitmap) ``` -**Параметры** +**Аргументы** - `bitmap` – битовый массив. @@ -333,7 +332,7 @@ SELECT bitmapAndCardinality(bitmapBuild([1,2,3]),bitmapBuild([3,4,5])) AS res; bitmapOrCardinality(bitmap,bitmap) ``` -**Параметры** +**Аргументы** - `bitmap` – битовый массив. 
@@ -357,7 +356,7 @@ SELECT bitmapOrCardinality(bitmapBuild([1,2,3]),bitmapBuild([3,4,5])) AS res; bitmapXorCardinality(bitmap,bitmap) ``` -**Параметры** +**Аргументы** - `bitmap` – битовый массив. @@ -381,7 +380,7 @@ SELECT bitmapXorCardinality(bitmapBuild([1,2,3]),bitmapBuild([3,4,5])) AS res; bitmapAndnotCardinality(bitmap,bitmap) ``` -**Параметры** +**Аргументы** - `bitmap` – битовый массив. @@ -397,4 +396,3 @@ SELECT bitmapAndnotCardinality(bitmapBuild([1,2,3]),bitmapBuild([3,4,5])) AS res └─────┘ ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/functions/bitmap_functions/) diff --git a/docs/ru/sql-reference/functions/comparison-functions.md b/docs/ru/sql-reference/functions/comparison-functions.md index 179df5c2ed5..b7301bde275 100644 --- a/docs/ru/sql-reference/functions/comparison-functions.md +++ b/docs/ru/sql-reference/functions/comparison-functions.md @@ -34,4 +34,3 @@ toc_title: "Функции сравнения" ## greaterOrEquals, оператор `>=` {#function-greaterorequals} -[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/functions/comparison_functions/) diff --git a/docs/ru/sql-reference/functions/conditional-functions.md b/docs/ru/sql-reference/functions/conditional-functions.md index 888e9427a79..b191937df51 100644 --- a/docs/ru/sql-reference/functions/conditional-functions.md +++ b/docs/ru/sql-reference/functions/conditional-functions.md @@ -17,11 +17,11 @@ SELECT if(cond, then, else) Если условие `cond` не равно нулю, то возвращается результат выражения `then`. Если условие `cond` равно нулю или является NULL, то результат выражения `then` пропускается и возвращается результат выражения `else`. -**Параметры** +**Аргументы** -- `cond` – Условие, которое может быть равно 0 или нет. Может быть [UInt8](../../sql-reference/functions/conditional-functions.md) или `NULL`. -- `then` - Возвращается результат выражения, если условие `cond` истинно. -- `else` - Возвращается результат выражения, если условие `cond` ложно. +- `cond` – проверяемое условие. Может быть [UInt8](../../sql-reference/functions/conditional-functions.md) или `NULL`. +- `then` – возвращается результат выражения, если условие `cond` истинно. +- `else` – возвращается результат выражения, если условие `cond` ложно. **Возвращаемые значения** @@ -32,10 +32,10 @@ SELECT if(cond, then, else) Запрос: ``` sql -SELECT if(1, plus(2, 2), plus(2, 6)) +SELECT if(1, plus(2, 2), plus(2, 6)); ``` -Ответ: +Результат: ``` text ┌─plus(2, 2)─┐ @@ -46,10 +46,10 @@ SELECT if(1, plus(2, 2), plus(2, 6)) Запрос: ``` sql -SELECT if(0, plus(2, 2), plus(2, 6)) +SELECT if(0, plus(2, 2), plus(2, 6)); ``` -Ответ: +Результат: ``` text ┌─plus(2, 6)─┐ @@ -79,11 +79,11 @@ SELECT if(0, plus(2, 2), plus(2, 6)) multiIf(cond_1, then_1, cond_2, then_2...else) -**Параметры** +**Аргументы** -- `cond_N` — Условие, при выполнении которого функция вернёт `then_N`. -- `then_N` — Результат функции при выполнении. -- `else` — Результат функции, если ни одно из условий не выполнено. +- `cond_N` — условие, при выполнении которого функция вернёт `then_N`. +- `then_N` — результат функции при выполнении. +- `else` — результат функции, если ни одно из условий не выполнено. Функция принимает `2N+1` параметров. 
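+
+Например, набросок, иллюстрирующий арность `2N+1`: две пары условие/результат плюс ветка `else` дают пять аргументов:
+
+``` sql
+SELECT multiIf(0, 'first', 1, 'second', 'else') AS res;
+```
+
+``` text
+┌─res────┐
+│ second │
+└────────┘
+```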
@@ -111,4 +111,3 @@ SELECT if(0, plus(2, 2), plus(2, 6)) └────────────────────────────────────────────┘ ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/functions/conditional_functions/) diff --git a/docs/ru/sql-reference/functions/date-time-functions.md b/docs/ru/sql-reference/functions/date-time-functions.md index 9f3df92922f..b442a782100 100644 --- a/docs/ru/sql-reference/functions/date-time-functions.md +++ b/docs/ru/sql-reference/functions/date-time-functions.md @@ -23,13 +23,53 @@ SELECT └─────────────────────┴────────────┴────────────┴─────────────────────┘ ``` +## timeZone {#timezone} + +Возвращает часовой пояс сервера. + +**Синтаксис** + +``` sql +timeZone() +``` + +Псевдоним: `timezone`. + +**Возвращаемое значение** + +- Часовой пояс. + +Тип: [String](../../sql-reference/data-types/string.md). + ## toTimeZone {#totimezone} -Переводит дату или дату-с-временем в указанный часовой пояс. Часовой пояс (таймзона) это атрибут типов Date/DateTime, внутреннее значение (количество секунд) поля таблицы или колонки результата не изменяется, изменяется тип поля и автоматически его текстовое отображение. +Переводит дату или дату с временем в указанный часовой пояс. Часовой пояс — это атрибут типов `Date` и `DateTime`. Внутреннее значение (количество секунд) поля таблицы или результирующего столбца не изменяется, изменяется тип поля и, соответственно, его текстовое отображение. + +**Синтаксис** + +``` sql +toTimeZone(value, timezone) +``` + +Псевдоним: `toTimezone`. + +**Аргументы** + +- `value` — время или дата с временем. [DateTime64](../../sql-reference/data-types/datetime64.md). +- `timezone` — часовой пояс для возвращаемого значения. [String](../../sql-reference/data-types/string.md). + +**Возвращаемое значение** + +- Дата с временем. + +Тип: [DateTime](../../sql-reference/data-types/datetime.md). + +**Пример** + +Запрос: ```sql -SELECT - toDateTime('2019-01-01 00:00:00', 'UTC') AS time_utc, +SELECT toDateTime('2019-01-01 00:00:00', 'UTC') AS time_utc, toTypeName(time_utc) AS type_utc, toInt32(time_utc) AS int32utc, toTimeZone(time_utc, 'Asia/Yekaterinburg') AS time_yekat, @@ -40,6 +80,7 @@ SELECT toInt32(time_samoa) AS int32samoa FORMAT Vertical; ``` +Результат: ```text Row 1: @@ -57,6 +98,82 @@ int32samoa: 1546300800 `toTimeZone(time_utc, 'Asia/Yekaterinburg')` изменяет тип `DateTime('UTC')` в `DateTime('Asia/Yekaterinburg')`. Значение (unix-время) 1546300800 остается неизменным, но текстовое отображение (результат функции toString()) меняется `time_utc: 2019-01-01 00:00:00` в `time_yekat: 2019-01-01 05:00:00`. +## timeZoneOf {#timezoneof} + +Возвращает название часового пояса для значений типа [DateTime](../../sql-reference/data-types/datetime.md) и [DateTime64](../../sql-reference/data-types/datetime64.md). + +**Синтаксис** + +``` sql +timeZoneOf(value) +``` + +Псевдоним: `timezoneOf`. + +**Аргументы** + +- `value` — дата с временем. [DateTime](../../sql-reference/data-types/datetime.md) или [DateTime64](../../sql-reference/data-types/datetime64.md). + +**Возвращаемое значение** + +- Название часового пояса. + +Тип: [String](../../sql-reference/data-types/string.md). + +**Пример** + +Запрос: +``` sql +SELECT timezoneOf(now()); +``` + +Результат: +``` text +┌─timezoneOf(now())─┐ +│ Etc/UTC │ +└───────────────────┘ +``` + +## timeZoneOffset {#timezoneoffset} + +Возвращает смещение часового пояса в секундах от [UTC](https://ru.wikipedia.org/wiki/Всемирное_координированное_время).
Функция учитывает [летнее время](https://ru.wikipedia.org/wiki/Летнее_время) и исторические изменения часовых поясов, которые действовали на указанную дату. +Для вычисления смещения используется информация из [базы данных IANA](https://www.iana.org/time-zones). + +**Синтаксис** + +``` sql +timeZoneOffset(value) +``` + +Псевдоним: `timezoneOffset`. + +**Аргументы** + +- `value` — дата с временем. [DateTime](../../sql-reference/data-types/datetime.md) или [DateTime64](../../sql-reference/data-types/datetime64.md). + +**Возвращаемое значение** + +- Смещение в секундах от UTC. + +Тип: [Int32](../../sql-reference/data-types/int-uint.md). + +**Пример** + +Запрос: + +``` sql +SELECT toDateTime('2021-04-21 10:20:30', 'Europe/Moscow') AS Time, toTypeName(Time) AS Type, + timeZoneOffset(Time) AS Offset_in_seconds, (Offset_in_seconds / 3600) AS Offset_in_hours; +``` + +Результат: + +``` text +┌────────────────Time─┬─Type──────────────────────┬─Offset_in_seconds─┬─Offset_in_hours─┐ +│ 2021-04-21 10:20:30 │ DateTime('Europe/Moscow') │ 10800 │ 3 │ +└─────────────────────┴───────────────────────────┴───────────────────┴─────────────────┘ +``` + ## toYear {#toyear} Переводит дату или дату-с-временем в число типа UInt16, содержащее номер года (AD). @@ -136,7 +253,7 @@ toUnixTimestamp(str, [timezone]) Запрос: ``` sql -SELECT toUnixTimestamp('2017-11-05 08:07:47', 'Asia/Tokyo') AS unix_timestamp +SELECT toUnixTimestamp('2017-11-05 08:07:47', 'Asia/Tokyo') AS unix_timestamp; ``` Результат: @@ -162,6 +279,7 @@ SELECT toUnixTimestamp('2017-11-05 08:07:47', 'Asia/Tokyo') AS unix_timestamp ```sql SELECT toStartOfISOYear(toDate('2017-01-01')) AS ISOYear20170101; ``` + ```text ┌─ISOYear20170101─┐ │ 2016-01-04 │ @@ -215,14 +333,14 @@ SELECT toStartOfISOYear(toDate('2017-01-01')) AS ISOYear20170101; toStartOfSecond(value[, timezone]) ``` -**Параметры** +**Аргументы** -- `value` — Дата и время. [DateTime64](../data-types/datetime64.md). -- `timezone` — [Часовой пояс](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) для возвращаемого значения (необязательно). Если параметр не задан, используется часовой пояс параметра `value`. [String](../data-types/string.md). +- `value` — дата и время. [DateTime64](../data-types/datetime64.md). +- `timezone` — [часовой пояс](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) для возвращаемого значения (необязательно). Если параметр не задан, используется часовой пояс параметра `value`. [String](../data-types/string.md). **Возвращаемое значение** -- Входное значение с отсеченными долями секунды. +- Входное значение с отсеченными долями секунды. Тип: [DateTime64](../data-types/datetime64.md). @@ -256,9 +374,9 @@ WITH toDateTime64('2020-01-01 10:20:30.999', 3) AS dt64 SELECT toStartOfSecond(d └────────────────────────────────────────┘ ``` -**См. также** +**Смотрите также** -- Часовая зона сервера, конфигурационный параметр [timezone](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone). +- Часовая зона сервера, конфигурационный параметр [timezone](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone). ## toStartOfFiveMinute {#tostartoffiveminute} @@ -511,7 +629,7 @@ SELECT now(), date_trunc('hour', now(), 'Europe/Moscow'); date_add(unit, value, date) ``` -Синонимы: `dateAdd`, `DATE_ADD`. +Синонимы: `dateAdd`, `DATE_ADD`.
**Аргументы** @@ -857,6 +975,7 @@ formatDateTime(Time, Format\[, Timezone\]) Возвращает значение времени и даты в определенном вами формате. **Поля подстановки** + Используйте поля подстановки для того, чтобы определить шаблон для выводимой строки. В колонке «Пример» результат работы функции для времени `2018-01-02 22:33:44`. | Поле | Описание | Пример | @@ -864,7 +983,7 @@ formatDateTime(Time, Format\[, Timezone\]) | %C | номер года, поделённый на 100 (00-99) | 20 | | %d | день месяца, с ведущим нулём (01-31) | 02 | | %D | короткая запись %m/%d/%y | 01/02/18 | -| %e | день месяца, с ведущим пробелом ( 1-31) | 2 | +| %e | день месяца, с ведущим пробелом ( 1-31) |   2 | | %F | короткая запись %Y-%m-%d | 2018-01-02 | | %G | четырехзначный формат вывода ISO-года, который основывается на особом подсчете номера недели согласно [стандарту ISO 8601](https://ru.wikipedia.org/wiki/ISO_8601), обычно используется вместе с %V | 2018 | | %g | двузначный формат вывода года по стандарту ISO 8601 | 18 | @@ -875,6 +994,7 @@ formatDateTime(Time, Format\[, Timezone\]) | %M | минуты, с ведущим нулём (00-59) | 33 | | %n | символ переноса строки (‘’) | | | %p | обозначения AM или PM | PM | +| %Q | квартал (1-4) | 1 | | %R | короткая запись %H:%M | 22:33 | | %S | секунды, с ведущими нулями (00-59) | 44 | | %t | символ табуляции (’) | | @@ -940,5 +1060,3 @@ SELECT FROM_UNIXTIME(1234334543, '%Y-%m-%d %R:%S') AS DateTime; │ 2009-02-11 14:42:23 │ └─────────────────────┘ ``` - -[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/functions/date_time_functions/) diff --git a/docs/ru/sql-reference/functions/encoding-functions.md b/docs/ru/sql-reference/functions/encoding-functions.md index 951c6c60e38..f4fa21ba46a 100644 --- a/docs/ru/sql-reference/functions/encoding-functions.md +++ b/docs/ru/sql-reference/functions/encoding-functions.md @@ -15,13 +15,13 @@ toc_title: "Функции кодирования" char(number_1, [number_2, ..., number_n]); ``` -**Параметры** +**Аргументы** -- `number_1, number_2, ..., number_n` — Числовые аргументы, которые интерпретируются как целые числа. Типы: [Int](../../sql-reference/functions/encoding-functions.md), [Float](../../sql-reference/functions/encoding-functions.md). +- `number_1, number_2, ..., number_n` — числовые аргументы, которые интерпретируются как целые числа. Типы: [Int](../../sql-reference/functions/encoding-functions.md), [Float](../../sql-reference/functions/encoding-functions.md). **Возвращаемое значение** -- строка из соответствующих байт. +- Строка из соответствующих байт. Тип: `String`. @@ -30,10 +30,10 @@ char(number_1, [number_2, ..., number_n]); Запрос: ``` sql -SELECT char(104.1, 101, 108.9, 108.9, 111) AS hello +SELECT char(104.1, 101, 108.9, 108.9, 111) AS hello; ``` -Ответ: +Результат: ``` text ┌─hello─┐ @@ -49,7 +49,7 @@ SELECT char(104.1, 101, 108.9, 108.9, 111) AS hello SELECT char(0xD0, 0xBF, 0xD1, 0x80, 0xD0, 0xB8, 0xD0, 0xB2, 0xD0, 0xB5, 0xD1, 0x82) AS hello; ``` -Ответ: +Результат: ``` text ┌─hello──┐ @@ -63,7 +63,7 @@ SELECT char(0xD0, 0xBF, 0xD1, 0x80, 0xD0, 0xB8, 0xD0, 0xB2, 0xD0, 0xB5, 0xD1, 0x SELECT char(0xE4, 0xBD, 0xA0, 0xE5, 0xA5, 0xBD) AS hello; ``` -Ответ: +Результат: ``` text ┌─hello─┐ @@ -172,4 +172,3 @@ If you want to convert the result to a number, you can use the ‘reverse’ and Принимает целое число. Возвращает массив чисел типа UInt64, содержащий степени двойки, в сумме дающих исходное число; числа в массиве идут по возрастанию. 
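The hunk above ends at a description of a powers-of-two decomposition whose heading lies outside the diff context. Assuming it documents `bitmaskToArray` from this same encoding-functions page (an assumption, since the heading is not shown), a short sketch of the described behaviour (not part of the diff):

``` sql
-- Review sketch, assuming the function described is bitmaskToArray:
-- 50 = 2 + 16 + 32, so the powers of two come back in ascending order.
SELECT bitmaskToArray(50) AS powers; -- [2,16,32]
```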
-[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/functions/encoding_functions/) diff --git a/docs/ru/sql-reference/functions/encryption-functions.md b/docs/ru/sql-reference/functions/encryption-functions.md index a7adb40e631..844f9cc3197 100644 --- a/docs/ru/sql-reference/functions/encryption-functions.md +++ b/docs/ru/sql-reference/functions/encryption-functions.md @@ -31,7 +31,7 @@ toc_title: "Функции для шифрования" encrypt('mode', 'plaintext', 'key' [, iv, aad]) ``` -**Параметры** +**Аргументы** - `mode` — режим шифрования. [String](../../sql-reference/data-types/string.md#string). - `plaintext` — текст, который будет зашифрован. [String](../../sql-reference/data-types/string.md#string). @@ -127,7 +127,7 @@ SELECT comment, hex(secret) FROM encryption_test WHERE comment LIKE '%gcm%'; aes_encrypt_mysql('mode', 'plaintext', 'key' [, iv]) ``` -**Параметры** +**Аргументы** - `mode` — режим шифрования. [String](../../sql-reference/data-types/string.md#string). - `plaintext` — текст, который будет зашифрован. [String](../../sql-reference/data-types/string.md#string). @@ -236,13 +236,13 @@ mysql> SELECT aes_encrypt('Secret', '123456789101213141516171819202122', 'iviviv decrypt('mode', 'ciphertext', 'key' [, iv, aad]) ``` -**Параметры** +**Аргументы** - `mode` — режим шифрования. [String](../../sql-reference/data-types/string.md#string). - `ciphertext` — зашифрованный текст, который будет расшифрован. [String](../../sql-reference/data-types/string.md#string). - `key` — ключ шифрования. [String](../../sql-reference/data-types/string.md#string). - `iv` — инициализирующий вектор. Обязателен для `-gcm` режимов, для остальных режимов опциональный. [String](../../sql-reference/data-types/string.md#string). -- `aad` — дополнительные аутентифицированные данные. Текст не будет расшифрован, если это значение неверно. Работает только с `-gcm` режимами. Для остальных вызовет исключение. [String](../../sql-reference/data-types/string.md#string). +- `aad` — дополнительные аутентифицированные данные. Текст не будет расшифрован, если это значение неверно. Работает только с `-gcm` режимами. Для остальных вызовет исключение. [String](../../sql-reference/data-types/string.md#string). **Возвращаемое значение** @@ -316,7 +316,7 @@ SELECT comment, decrypt('aes-256-cfb128', secret, '12345678910121314151617181920 aes_decrypt_mysql('mode', 'ciphertext', 'key' [, iv]) ``` -**Параметры** +**Аргументы** - `mode` — режим шифрования. [String](../../sql-reference/data-types/string.md#string). - `ciphertext` — зашифрованный текст, который будет расшифрован. [String](../../sql-reference/data-types/string.md#string). diff --git a/docs/ru/sql-reference/functions/ext-dict-functions.md b/docs/ru/sql-reference/functions/ext-dict-functions.md index 8d018e8e9ac..919f8ebe276 100644 --- a/docs/ru/sql-reference/functions/ext-dict-functions.md +++ b/docs/ru/sql-reference/functions/ext-dict-functions.md @@ -16,7 +16,7 @@ dictGet('dict_name', 'attr_name', id_expr) dictGetOrDefault('dict_name', 'attr_name', id_expr, default_value_expr) ``` -**Параметры** +**Аргументы** - `dict_name` — имя словаря. [Строковый литерал](../syntax.md#syntax-string-literal). - `attr_name` — имя столбца словаря. [Строковый литерал](../syntax.md#syntax-string-literal). @@ -105,7 +105,7 @@ LIMIT 3 dictHas('dict_name', id) ``` -**Параметры** +**Аргументы** - `dict_name` — имя словаря. [Строковый литерал](../syntax.md#syntax-string-literal). - `id_expr` — значение ключа словаря. 
[Выражение](../syntax.md#syntax-expressions), возвращающее значение типа [UInt64](../../sql-reference/functions/ext-dict-functions.md) или [Tuple](../../sql-reference/functions/ext-dict-functions.md) в зависимости от конфигурации словаря. @@ -127,7 +127,7 @@ dictHas('dict_name', id) dictGetHierarchy('dict_name', key) ``` -**Параметры** +**Аргументы** - `dict_name` — имя словаря. [Строковый литерал](../syntax.md#syntax-string-literal). - `key` — значение ключа. [Выражение](../syntax.md#syntax-expressions), возвращающее значение типа [UInt64](../../sql-reference/functions/ext-dict-functions.md). @@ -144,7 +144,7 @@ Type: [Array(UInt64)](../../sql-reference/functions/ext-dict-functions.md). `dictIsIn ('dict_name', child_id_expr, ancestor_id_expr)` -**Параметры** +**Аргументы** - `dict_name` — имя словаря. [Строковый литерал](../syntax.md#syntax-string-literal). - `child_id_expr` — ключ для проверки. [Выражение](../syntax.md#syntax-expressions), возвращающее значение типа [UInt64](../../sql-reference/functions/ext-dict-functions.md). @@ -180,12 +180,12 @@ dictGet[Type]('dict_name', 'attr_name', id_expr) dictGet[Type]OrDefault('dict_name', 'attr_name', id_expr, default_value_expr) ``` -**Параметры** +**Аргументы** - `dict_name` — имя словаря. [Строковый литерал](../syntax.md#syntax-string-literal). - `attr_name` — имя столбца словаря. [Строковый литерал](../syntax.md#syntax-string-literal). - `id_expr` — значение ключа словаря. [Выражение](../syntax.md#syntax-expressions), возвращающее значение типа [UInt64](../../sql-reference/functions/ext-dict-functions.md) или [Tuple](../../sql-reference/functions/ext-dict-functions.md) в зависимости от конфигурации словаря. -- `default_value_expr` — значение, возвращаемое в том случае, когда словарь не содержит строки с заданным ключом `id_expr`. [Выражение](../syntax.md#syntax-expressions) возвращающее значение с типом данных, сконфигурированным для атрибута `attr_name`. +- `default_value_expr` — значение, возвращаемое в том случае, когда словарь не содержит строки с заданным ключом `id_expr`. [Выражение](../syntax.md#syntax-expressions), возвращающее значение с типом данных, сконфигурированным для атрибута `attr_name`. **Возвращаемое значение** @@ -198,4 +198,3 @@ dictGet[Type]OrDefault('dict_name', 'attr_name', id_expr, default_value_expr) Если значение атрибута не удалось обработать или оно не соответствует типу данных атрибута, то ClickHouse генерирует исключение. -[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/functions/ext_dict_functions/) diff --git a/docs/ru/sql-reference/functions/files.md b/docs/ru/sql-reference/functions/files.md new file mode 100644 index 00000000000..9cb659375b9 --- /dev/null +++ b/docs/ru/sql-reference/functions/files.md @@ -0,0 +1,33 @@ +--- +toc_priority: 43 +toc_title: "Функции для работы с файлами" +--- + +# Функции для работы с файлами {#funktsii-dlia-raboty-s-failami} + +## file {#file} + +Читает файл как строку. Содержимое файла не разбирается (не парсится) и записывается в указанную колонку в виде единой строки. + +**Синтаксис** + +``` sql +file(path) +``` + +**Аргументы** + +- `path` — относительный путь до файла от [user_files_path](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-user_files_path). Путь к файлу может включать следующие символы подстановки и шаблоны: `*`, `?`, `{abc,def}` и `{N..M}`, где `N`, `M` — числа, `'abc', 'def'` — строки. 
+ +**Примеры** + +Вставка данных из файлов a.txt и b.txt в таблицу в виде строк: + +``` sql +INSERT INTO table SELECT file('a.txt'), file('b.txt'); +``` + +**Смотрите также** + +- [user_files_path](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-user_files_path) +- [file](../table-functions/file.md) diff --git a/docs/ru/sql-reference/functions/functions-for-nulls.md b/docs/ru/sql-reference/functions/functions-for-nulls.md index f0277a59699..7285f803264 100644 --- a/docs/ru/sql-reference/functions/functions-for-nulls.md +++ b/docs/ru/sql-reference/functions/functions-for-nulls.md @@ -15,7 +15,7 @@ isNull(x) Синоним: `ISNULL`. -**Параметры** +**Аргументы** - `x` — значение с не составным типом данных. @@ -38,7 +38,7 @@ isNull(x) Запрос ``` sql -SELECT x FROM t_null WHERE isNull(y) +SELECT x FROM t_null WHERE isNull(y); ``` ``` text @@ -55,7 +55,7 @@ SELECT x FROM t_null WHERE isNull(y) isNotNull(x) ``` -**Параметры** +**Аргументы** - `x` — значение с не составным типом данных. @@ -78,7 +78,7 @@ isNotNull(x) Запрос ``` sql -SELECT x FROM t_null WHERE isNotNull(y) +SELECT x FROM t_null WHERE isNotNull(y); ``` ``` text @@ -95,7 +95,7 @@ SELECT x FROM t_null WHERE isNotNull(y) coalesce(x,...) ``` -**Параметры** +**Аргументы** - Произвольное количество параметров не составного типа. Все параметры должны быть совместимы по типу данных. @@ -120,7 +120,7 @@ coalesce(x,...) Получим из адресной книги первый доступный способ связаться с клиентом: ``` sql -SELECT coalesce(mail, phone, CAST(icq,'Nullable(String)')) FROM aBook +SELECT coalesce(mail, phone, CAST(icq,'Nullable(String)')) FROM aBook; ``` ``` text @@ -138,7 +138,7 @@ SELECT coalesce(mail, phone, CAST(icq,'Nullable(String)')) FROM aBook ifNull(x,alt) ``` -**Параметры** +**Аргументы** - `x` — значение для проверки на `NULL`, - `alt` — значение, которое функция вернёт, если `x` — `NULL`. @@ -151,7 +151,7 @@ ifNull(x,alt) **Пример** ``` sql -SELECT ifNull('a', 'b') +SELECT ifNull('a', 'b'); ``` ``` text @@ -161,7 +161,7 @@ SELECT ifNull('a', 'b') ``` ``` sql -SELECT ifNull(NULL, 'b') +SELECT ifNull(NULL, 'b'); ``` ``` text @@ -178,7 +178,7 @@ SELECT ifNull(NULL, 'b') nullIf(x, y) ``` -**Параметры** +**Аргументы** `x`, `y` — значения для сравнивания. Они должны быть совместимых типов, иначе ClickHouse сгенерирует исключение. @@ -190,7 +190,7 @@ nullIf(x, y) **Пример** ``` sql -SELECT nullIf(1, 1) +SELECT nullIf(1, 1); ``` ``` text @@ -200,7 +200,7 @@ SELECT nullIf(1, 1) ``` ``` sql -SELECT nullIf(1, 2) +SELECT nullIf(1, 2); ``` ``` text @@ -217,21 +217,21 @@ SELECT nullIf(1, 2) assumeNotNull(x) ``` -**Параметры** +**Аргументы** - `x` — исходное значение. **Возвращаемые значения** - Исходное значение с не `Nullable` типом, если оно — не `NULL`. -- Значение по умолчанию для не `Nullable` типа, если исходное значение — `NULL`. +- Неспецифицированный результат, зависящий от реализации, если исходное значение — `NULL`. **Пример** Рассмотрим таблицу `t_null`. ``` sql -SHOW CREATE TABLE t_null +SHOW CREATE TABLE t_null; ``` ``` text @@ -250,7 +250,7 @@ SHOW CREATE TABLE t_null Применим функцию `assumeNotNull` к столбцу `y`. 
``` sql -SELECT assumeNotNull(y) FROM t_null +SELECT assumeNotNull(y) FROM t_null; ``` ``` text @@ -261,7 +261,7 @@ SELECT assumeNotNull(y) FROM t_null ``` ``` sql -SELECT toTypeName(assumeNotNull(y)) FROM t_null +SELECT toTypeName(assumeNotNull(y)) FROM t_null; ``` ``` text @@ -279,7 +279,7 @@ SELECT toTypeName(assumeNotNull(y)) FROM t_null toNullable(x) ``` -**Параметры** +**Аргументы** - `x` — значение произвольного не составного типа. @@ -290,7 +290,7 @@ toNullable(x) **Пример** ``` sql -SELECT toTypeName(10) +SELECT toTypeName(10); ``` ``` text @@ -300,7 +300,7 @@ SELECT toTypeName(10) ``` ``` sql -SELECT toTypeName(toNullable(10)) +SELECT toTypeName(toNullable(10)); ``` ``` text @@ -309,4 +309,3 @@ SELECT toTypeName(toNullable(10)) └────────────────────────────┘ ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/functions/functions_for_nulls/) diff --git a/docs/ru/sql-reference/functions/geo/coordinates.md b/docs/ru/sql-reference/functions/geo/coordinates.md index 09e2d7d01bf..2605dc7a82f 100644 --- a/docs/ru/sql-reference/functions/geo/coordinates.md +++ b/docs/ru/sql-reference/functions/geo/coordinates.md @@ -133,4 +133,3 @@ SELECT pointInPolygon((3., 3.), [(6, 0), (8, 4), (5, 8), (0, 2)]) AS res └─────┘ ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/functions/geo/coordinates) diff --git a/docs/ru/sql-reference/functions/geo/geohash.md b/docs/ru/sql-reference/functions/geo/geohash.md index 2dd3f83ddf1..0992b620e60 100644 --- a/docs/ru/sql-reference/functions/geo/geohash.md +++ b/docs/ru/sql-reference/functions/geo/geohash.md @@ -29,7 +29,7 @@ geohashEncode(longitude, latitude, [precision]) **Пример** ``` sql -SELECT geohashEncode(-5.60302734375, 42.593994140625, 0) AS res +SELECT geohashEncode(-5.60302734375, 42.593994140625, 0) AS res; ``` ``` text @@ -57,7 +57,7 @@ geohashDecode(geohash_string) **Пример** ``` sql -SELECT geohashDecode('ezs42') AS res +SELECT geohashDecode('ezs42') AS res; ``` ``` text @@ -76,13 +76,13 @@ SELECT geohashDecode('ezs42') AS res geohashesInBox(longitude_min, latitude_min, longitude_max, latitude_max, precision) ``` -**Параметры** +**Аргументы** - `longitude_min` — минимальная долгота. Диапазон возможных значений: `[-180°, 180°]`. Тип данных: [Float](../../../sql-reference/data-types/float.md)). -- `latitude_min` - минимальная широта. Диапазон возможных значений: `[-90°, 90°]`. Тип данных: [Float](../../../sql-reference/data-types/float.md). -- `longitude_max` - максимальная долгота. Диапазон возможных значений: `[-180°, 180°]`. Тип данных: [Float](../../../sql-reference/data-types/float.md). -- `latitude_max` - максимальная широта. Диапазон возможных значений: `[-90°, 90°]`. Тип данных: [Float](../../../sql-reference/data-types/float.md). -- `precision` - точность geohash. Диапазон возможных значений: `[1, 12]`. Тип данных: [UInt8](../../../sql-reference/data-types/int-uint.md). +- `latitude_min` — минимальная широта. Диапазон возможных значений: `[-90°, 90°]`. Тип данных: [Float](../../../sql-reference/data-types/float.md). +- `longitude_max` — максимальная долгота. Диапазон возможных значений: `[-180°, 180°]`. Тип данных: [Float](../../../sql-reference/data-types/float.md). +- `latitude_max` — максимальная широта. Диапазон возможных значений: `[-90°, 90°]`. Тип данных: [Float](../../../sql-reference/data-types/float.md). +- `precision` — точность geohash. Диапазон возможных значений: `[1, 12]`. Тип данных: [UInt8](../../../sql-reference/data-types/int-uint.md). !!! 
info "Замечание" Все передаваемые координаты должны быть одного и того же типа: либо `Float32`, либо `Float64`. @@ -102,8 +102,9 @@ geohashesInBox(longitude_min, latitude_min, longitude_max, latitude_max, precisi Запрос: ``` sql -SELECT geohashesInBox(24.48, 40.56, 24.785, 40.81, 4) AS thasos +SELECT geohashesInBox(24.48, 40.56, 24.785, 40.81, 4) AS thasos; ``` + Результат: ``` text @@ -112,4 +113,3 @@ SELECT geohashesInBox(24.48, 40.56, 24.785, 40.81, 4) AS thasos └─────────────────────────────────────────────┘ ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/functions/geo/geohash) diff --git a/docs/ru/sql-reference/functions/geo/h3.md b/docs/ru/sql-reference/functions/geo/h3.md index 7046833f7ec..27a512a9931 100644 --- a/docs/ru/sql-reference/functions/geo/h3.md +++ b/docs/ru/sql-reference/functions/geo/h3.md @@ -38,8 +38,9 @@ h3IsValid(h3index) Запрос: ``` sql -SELECT h3IsValid(630814730351855103) as h3IsValid +SELECT h3IsValid(630814730351855103) as h3IsValid; ``` + Результат: ``` text @@ -74,8 +75,9 @@ h3GetResolution(h3index) Запрос: ``` sql -SELECT h3GetResolution(639821929606596015) as resolution +SELECT h3GetResolution(639821929606596015) as resolution; ``` + Результат: ``` text @@ -107,8 +109,9 @@ h3EdgeAngle(resolution) Запрос: ``` sql -SELECT h3EdgeAngle(10) as edgeAngle +SELECT h3EdgeAngle(10) as edgeAngle; ``` + Результат: ``` text @@ -140,8 +143,9 @@ h3EdgeLengthM(resolution) Запрос: ``` sql -SELECT h3EdgeLengthM(15) as edgeLengthM +SELECT h3EdgeLengthM(15) as edgeLengthM; ``` + Результат: ``` text @@ -160,7 +164,7 @@ SELECT h3EdgeLengthM(15) as edgeLengthM geoToH3(lon, lat, resolution) ``` -**Параметры** +**Аргументы** - `lon` — географическая долгота. Тип данных — [Float64](../../../sql-reference/data-types/float.md). - `lat` — географическая широта. Тип данных — [Float64](../../../sql-reference/data-types/float.md). @@ -178,10 +182,10 @@ geoToH3(lon, lat, resolution) Запрос: ``` sql -SELECT geoToH3(37.79506683, 55.71290588, 15) as h3Index +SELECT geoToH3(37.79506683, 55.71290588, 15) as h3Index; ``` -Ответ: +Результат: ``` text ┌────────────h3Index─┐ @@ -199,7 +203,7 @@ SELECT geoToH3(37.79506683, 55.71290588, 15) as h3Index h3kRing(h3index, k) ``` -**Параметры** +**Аргументы** - `h3index` — идентификатор шестигранника. Тип данных: [UInt64](../../../sql-reference/data-types/int-uint.md). - `k` — радиус. Тип данных: [целое число](../../../sql-reference/data-types/int-uint.md) @@ -215,8 +219,9 @@ h3kRing(h3index, k) Запрос: ``` sql -SELECT arrayJoin(h3kRing(644325529233966508, 1)) AS h3index +SELECT arrayJoin(h3kRing(644325529233966508, 1)) AS h3index; ``` + Результат: ``` text @@ -311,7 +316,7 @@ SELECT h3HexAreaM2(13) as area; h3IndexesAreNeighbors(index1, index2) ``` -**Параметры** +**Аргументы** - `index1` — индекс шестиугольной ячейки. Тип: [UInt64](../../../sql-reference/data-types/int-uint.md). - `index2` — индекс шестиугольной ячейки. Тип: [UInt64](../../../sql-reference/data-types/int-uint.md). @@ -349,7 +354,7 @@ SELECT h3IndexesAreNeighbors(617420388351344639, 617420388352655359) AS n; h3ToChildren(index, resolution) ``` -**Параметры** +**Аргументы** - `index` — индекс шестиугольной ячейки. Тип: [UInt64](../../../sql-reference/data-types/int-uint.md). - `resolution` — разрешение. Диапазон: `[0, 15]`. Тип: [UInt8](../../../sql-reference/data-types/int-uint.md). @@ -386,7 +391,7 @@ SELECT h3ToChildren(599405990164561919, 6) AS children; h3ToParent(index, resolution) ``` -**Параметры** +**Аргументы** - `index` — индекс шестиугольной ячейки. 
Тип: [UInt64](../../../sql-reference/data-types/int-uint.md). - `resolution` — разрешение. Диапазон: `[0, 15]`. Тип: [UInt8](../../../sql-reference/data-types/int-uint.md). @@ -520,4 +525,3 @@ SELECT h3GetResolution(617420388352917503) as res; └─────┘ ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/functions/geo/h3) diff --git a/docs/ru/sql-reference/functions/geo/index.md b/docs/ru/sql-reference/functions/geo/index.md index 6b9a14e4d02..4d3bdfcd468 100644 --- a/docs/ru/sql-reference/functions/geo/index.md +++ b/docs/ru/sql-reference/functions/geo/index.md @@ -5,4 +5,3 @@ toc_title: hidden --- -[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/functions/geo/) diff --git a/docs/ru/sql-reference/functions/hash-functions.md b/docs/ru/sql-reference/functions/hash-functions.md index 1742abe5b56..07c741e0588 100644 --- a/docs/ru/sql-reference/functions/hash-functions.md +++ b/docs/ru/sql-reference/functions/hash-functions.md @@ -7,6 +7,8 @@ toc_title: "Функции хэширования" Функции хэширования могут использоваться для детерминированного псевдослучайного разбрасывания элементов. +Simhash – это хеш-функция, которая для близких значений возвращает близкий хеш. + ## halfMD5 {#hash-functions-halfmd5} [Интерпретирует](../../sql-reference/functions/hash-functions.md#type_conversion_functions-reinterpretAsString) все входные параметры как строки и вычисляет хэш [MD5](https://ru.wikipedia.org/wiki/MD5) для каждой из них. Затем объединяет хэши, берет первые 8 байт хэша результирующей строки и интерпретирует их как значение типа `UInt64` с big-endian порядком байтов. @@ -18,9 +20,9 @@ halfMD5(par1, ...) Функция относительно медленная (5 миллионов коротких строк в секунду на ядро процессора). По возможности, используйте функцию [sipHash64](#hash_functions-siphash64) вместо неё. -**Параметры** +**Аргументы** -Функция принимает переменное число входных параметров. Параметры могут быть любого [поддерживаемого типа данных](../../sql-reference/functions/hash-functions.md). +Функция принимает переменное число входных параметров. Аргументы могут быть любого [поддерживаемого типа данных](../../sql-reference/functions/hash-functions.md). **Возвращаемое значение** @@ -29,7 +31,7 @@ halfMD5(par1, ...) **Пример** ``` sql -SELECT halfMD5(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:00')) AS halfMD5hash, toTypeName(halfMD5hash) AS type +SELECT halfMD5(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:00')) AS halfMD5hash, toTypeName(halfMD5hash) AS type; ``` ``` text @@ -61,9 +63,9 @@ sipHash64(par1,...) 3. Затем функция принимает хэш-значение, вычисленное на предыдущем шаге, и третий элемент исходного хэш-массива, и вычисляет хэш для массива из них. 4. Предыдущий шаг повторяется для всех остальных элементов исходного хэш-массива. -**Параметры** +**Аргументы** -Функция принимает переменное число входных параметров. Параметры могут быть любого [поддерживаемого типа данных](../../sql-reference/functions/hash-functions.md). +Функция принимает переменное число входных параметров. Аргументы могут быть любого [поддерживаемого типа данных](../../sql-reference/functions/hash-functions.md). **Возвращаемое значение** @@ -72,7 +74,7 @@ sipHash64(par1,...) 
**Пример** ``` sql -SELECT sipHash64(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:00')) AS SipHash, toTypeName(SipHash) AS type +SELECT sipHash64(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:00')) AS SipHash, toTypeName(SipHash) AS type; ``` ``` text @@ -97,9 +99,9 @@ cityHash64(par1,...) Это не криптографическая хэш-функция. Она использует CityHash алгоритм для строковых параметров и зависящую от реализации быструю некриптографическую хэш-функцию для параметров с другими типами данных. Функция использует комбинатор CityHash для получения конечных результатов. -**Параметры** +**Аргументы** -Функция принимает переменное число входных параметров. Параметры могут быть любого [поддерживаемого типа данных](../../sql-reference/functions/hash-functions.md). +Функция принимает переменное число входных параметров. Аргументы могут быть любого [поддерживаемого типа данных](../../sql-reference/functions/hash-functions.md). **Возвращаемое значение** @@ -110,7 +112,7 @@ cityHash64(par1,...) Пример вызова: ``` sql -SELECT cityHash64(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:00')) AS CityHash, toTypeName(CityHash) AS type +SELECT cityHash64(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:00')) AS CityHash, toTypeName(CityHash) AS type; ``` ``` text @@ -166,9 +168,9 @@ farmHash64(par1, ...) Эти функции используют методы `Fingerprint64` и `Hash64` из всех [доступных методов](https://github.com/google/farmhash/blob/master/src/farmhash.h). -**Параметры** +**Аргументы** -Функция принимает переменное число входных параметров. Параметры могут быть любого [поддерживаемого типа данных](../../sql-reference/functions/hash-functions.md). +Функция принимает переменное число входных параметров. Аргументы могут быть любого [поддерживаемого типа данных](../../sql-reference/functions/hash-functions.md). **Возвращаемое значение** @@ -177,7 +179,7 @@ farmHash64(par1, ...) **Пример** ``` sql -SELECT farmHash64(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:00')) AS FarmHash, toTypeName(FarmHash) AS type +SELECT farmHash64(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:00')) AS FarmHash, toTypeName(FarmHash) AS type; ``` ``` text @@ -191,7 +193,7 @@ SELECT farmHash64(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:0 Вычисляет [JavaHash](http://hg.openjdk.java.net/jdk8u/jdk8u/jdk/file/478a4add975b/src/share/classes/java/lang/String.java#l1452) от строки. `JavaHash` не отличается ни скоростью, ни качеством, поэтому эту функцию следует считать устаревшей. Используйте эту функцию, если вам необходимо получить значение хэша по такому же алгоритму. ``` sql -SELECT javaHash(''); +SELECT javaHash('') ``` **Возвращаемое значение** @@ -208,7 +210,7 @@ SELECT javaHash(''); SELECT javaHash('Hello, world!'); ``` -Ответ: +Результат: ``` text ┌─javaHash('Hello, world!')─┐ @@ -226,7 +228,7 @@ SELECT javaHash('Hello, world!'); javaHashUTF16LE(stringUtf16le) ``` -**Параметры** +**Аргументы** - `stringUtf16le` — строка в `UTF-16LE`. @@ -243,10 +245,10 @@ javaHashUTF16LE(stringUtf16le) Запрос: ``` sql -SELECT javaHashUTF16LE(convertCharset('test', 'utf-8', 'utf-16le')) +SELECT javaHashUTF16LE(convertCharset('test', 'utf-8', 'utf-16le')); ``` -Ответ: +Результат: ``` text ┌─javaHashUTF16LE(convertCharset('test', 'utf-8', 'utf-16le'))─┐ @@ -259,7 +261,7 @@ SELECT javaHashUTF16LE(convertCharset('test', 'utf-8', 'utf-16le')) Вычисляет `HiveHash` от строки. 
``` sql -SELECT hiveHash(''); +SELECT hiveHash('') ``` `HiveHash` — это результат [JavaHash](#hash_functions-javahash) с обнулённым битом знака числа. Функция используется в [Apache Hive](https://en.wikipedia.org/wiki/Apache_Hive) вплоть до версии 3.0. @@ -278,7 +280,7 @@ SELECT hiveHash(''); SELECT hiveHash('Hello, world!'); ``` -Ответ: +Результат: ``` text ┌─hiveHash('Hello, world!')─┐ @@ -294,9 +296,9 @@ SELECT hiveHash('Hello, world!'); metroHash64(par1, ...) ``` -**Параметры** +**Аргументы** -Функция принимает переменное число входных параметров. Параметры могут быть любого [поддерживаемого типа данных](../../sql-reference/functions/hash-functions.md). +Функция принимает переменное число входных параметров. Аргументы могут быть любого [поддерживаемого типа данных](../../sql-reference/functions/hash-functions.md). **Возвращаемое значение** @@ -305,7 +307,7 @@ metroHash64(par1, ...) **Пример** ``` sql -SELECT metroHash64(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:00')) AS MetroHash, toTypeName(MetroHash) AS type +SELECT metroHash64(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:00')) AS MetroHash, toTypeName(MetroHash) AS type; ``` ``` text @@ -329,9 +331,9 @@ murmurHash2_32(par1, ...) murmurHash2_64(par1, ...) ``` -**Параметры** +**Аргументы** -Обе функции принимают переменное число входных параметров. Параметры могут быть любого [поддерживаемого типа данных](../../sql-reference/functions/hash-functions.md). +Обе функции принимают переменное число входных параметров. Аргументы могут быть любого [поддерживаемого типа данных](../../sql-reference/functions/hash-functions.md). **Возвращаемое значение** @@ -341,7 +343,7 @@ murmurHash2_64(par1, ...) **Пример** ``` sql -SELECT murmurHash2_64(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:00')) AS MurmurHash2, toTypeName(MurmurHash2) AS type +SELECT murmurHash2_64(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:00')) AS MurmurHash2, toTypeName(MurmurHash2) AS type; ``` ``` text @@ -360,9 +362,9 @@ SELECT murmurHash2_64(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23: gccMurmurHash(par1, ...); ``` -**Параметры** +**Аргументы** -- `par1, ...` — Переменное число параметров. Каждый параметр может быть любого из [поддерживаемых типов данных](../../sql-reference/data-types/index.md). +- `par1, ...` — переменное число параметров. Каждый параметр может быть любого из [поддерживаемых типов данных](../../sql-reference/data-types/index.md). **Возвращаемое значение** @@ -397,9 +399,9 @@ murmurHash3_32(par1, ...) murmurHash3_64(par1, ...) ``` -**Параметры** +**Аргументы** -Обе функции принимают переменное число входных параметров. Параметры могут быть любого [поддерживаемого типа данных](../../sql-reference/functions/hash-functions.md). +Обе функции принимают переменное число входных параметров. Аргументы могут быть любого [поддерживаемого типа данных](../../sql-reference/functions/hash-functions.md). **Возвращаемое значение** @@ -409,7 +411,7 @@ murmurHash3_64(par1, ...) 
**Пример** ``` sql -SELECT murmurHash3_32(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:00')) AS MurmurHash3, toTypeName(MurmurHash3) AS type +SELECT murmurHash3_32(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:00')) AS MurmurHash3, toTypeName(MurmurHash3) AS type; ``` ``` text @@ -426,9 +428,9 @@ SELECT murmurHash3_32(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23: murmurHash3_128( expr ) ``` -**Параметры** +**Аргументы** -- `expr` — [выражение](../syntax.md#syntax-expressions) возвращающее значение типа[String](../../sql-reference/functions/hash-functions.md). +- `expr` — [выражение](../syntax.md#syntax-expressions), возвращающее значение типа [String](../../sql-reference/functions/hash-functions.md). **Возвращаемое значение** @@ -437,13 +439,13 @@ murmurHash3_128( expr ) **Пример** ``` sql -SELECT murmurHash3_128('example_string') AS MurmurHash3, toTypeName(MurmurHash3) AS type +SELECT hex(murmurHash3_128('example_string')) AS MurmurHash3, toTypeName(MurmurHash3) AS type; ``` ``` text -┌─MurmurHash3──────┬─type────────────┐ -│ 6�1�4"S5KT�~~q │ FixedString(16) │ -└──────────────────┴─────────────────┘ +┌─MurmurHash3──────────────────────┬─type───┐ +│ 368A1A311CB7342253354B548E7E7E71 │ String │ +└──────────────────────────────────┴────────┘ ``` ## xxHash32, xxHash64 {#hash-functions-xxhash32-xxhash64} @@ -451,11 +453,11 @@ SELECT murmurHash3_128('example_string') AS MurmurHash3, toTypeName(MurmurHash3) Вычисляет `xxHash` от строки. Предлагается в двух вариантах: 32 и 64 бита. ``` sql -SELECT xxHash32(''); +SELECT xxHash32('') OR -SELECT xxHash64(''); +SELECT xxHash64('') ``` **Возвращаемое значение** @@ -472,7 +474,7 @@ SELECT xxHash64(''); SELECT xxHash32('Hello, world!'); ``` -Ответ: +Результат: ``` text ┌─xxHash32('Hello, world!')─┐ @@ -484,4 +486,937 @@ SELECT xxHash32('Hello, world!'); - [xxHash](http://cyan4973.github.io/xxHash/). -[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/functions/hash_functions/) +## ngramSimHash {#ngramsimhash} + +Выделяет из ASCII строки отрезки (n-граммы) размером `ngramsize` символов и возвращает n-граммовый `simhash`. Функция регистрозависимая. + +Может быть использована для проверки двух строк на схожесть вместе с функцией [bitHammingDistance](../../sql-reference/functions/bit-functions.md#bithammingdistance). Чем меньше [расстояние Хэмминга](https://ru.wikipedia.org/wiki/%D0%A0%D0%B0%D1%81%D1%81%D1%82%D0%BE%D1%8F%D0%BD%D0%B8%D0%B5_%D0%A5%D1%8D%D0%BC%D0%BC%D0%B8%D0%BD%D0%B3%D0%B0) между результатом вычисления `simhash` двух строк, тем больше вероятность, что строки совпадают. + +**Синтаксис** + +``` sql +ngramSimHash(string[, ngramsize]) +``` + +**Аргументы** + +- `string` — строка. [String](../../sql-reference/data-types/string.md). +- `ngramsize` — размер n-грамм. Необязательный. Возможные значения: любое число от `1` до `25`. Значение по умолчанию: `3`. [UInt8](../../sql-reference/data-types/int-uint.md). + +**Возвращаемое значение** + +- Значение хеш-функции от строки. + +Тип: [UInt64](../../sql-reference/data-types/int-uint.md). + +**Пример** + +Запрос: + +``` sql +SELECT ngramSimHash('ClickHouse') AS Hash; +``` + +Результат: + +``` text +┌───────Hash─┐ +│ 1627567969 │ +└────────────┘ +``` + +## ngramSimHashCaseInsensitive {#ngramsimhashcaseinsensitive} + +Выделяет из ASCII строки отрезки (n-граммы) размером `ngramsize` символов и возвращает n-граммовый `simhash`. Функция регистро**не**зависимая. 
+ +Может быть использована для проверки двух строк на схожесть вместе с функцией [bitHammingDistance](../../sql-reference/functions/bit-functions.md#bithammingdistance). Чем меньше [расстояние Хэмминга](https://ru.wikipedia.org/wiki/%D0%A0%D0%B0%D1%81%D1%81%D1%82%D0%BE%D1%8F%D0%BD%D0%B8%D0%B5_%D0%A5%D1%8D%D0%BC%D0%BC%D0%B8%D0%BD%D0%B3%D0%B0) между результатом вычисления `simhash` двух строк, тем больше вероятность, что строки совпадают. + +**Синтаксис** + +``` sql +ngramSimHashCaseInsensitive(string[, ngramsize]) +``` + +**Аргументы** + +- `string` — строка. [String](../../sql-reference/data-types/string.md). +- `ngramsize` — размер n-грамм. Необязательный. Возможные значения: любое число от `1` до `25`. Значение по умолчанию: `3`. [UInt8](../../sql-reference/data-types/int-uint.md). + +**Возвращаемое значение** + +- Значение хеш-функции от строки. + +Тип: [UInt64](../../sql-reference/data-types/int-uint.md). + +**Пример** + +Запрос: + +``` sql +SELECT ngramSimHashCaseInsensitive('ClickHouse') AS Hash; +``` + +Результат: + +``` text +┌──────Hash─┐ +│ 562180645 │ +└───────────┘ +``` + +## ngramSimHashUTF8 {#ngramsimhashutf8} + +Выделяет из UTF-8 строки отрезки (n-граммы) размером `ngramsize` символов и возвращает n-граммовый `simhash`. Функция регистрозависимая. + +Может быть использована для проверки двух строк на схожесть вместе с функцией [bitHammingDistance](../../sql-reference/functions/bit-functions.md#bithammingdistance). Чем меньше [расстояние Хэмминга](https://ru.wikipedia.org/wiki/%D0%A0%D0%B0%D1%81%D1%81%D1%82%D0%BE%D1%8F%D0%BD%D0%B8%D0%B5_%D0%A5%D1%8D%D0%BC%D0%BC%D0%B8%D0%BD%D0%B3%D0%B0) между результатом вычисления `simhash` двух строк, тем больше вероятность, что строки совпадают. + +**Синтаксис** + +``` sql +ngramSimHashUTF8(string[, ngramsize]) +``` + +**Аргументы** + +- `string` — строка. [String](../../sql-reference/data-types/string.md). +- `ngramsize` — размер n-грамм. Необязательный. Возможные значения: любое число от `1` до `25`. Значение по умолчанию: `3`. [UInt8](../../sql-reference/data-types/int-uint.md). + +**Возвращаемое значение** + +- Значение хеш-функции от строки. + +Тип: [UInt64](../../sql-reference/data-types/int-uint.md). + +**Пример** + +Запрос: + +``` sql +SELECT ngramSimHashUTF8('ClickHouse') AS Hash; +``` + +Результат: + +``` text +┌───────Hash─┐ +│ 1628157797 │ +└────────────┘ +``` + +## ngramSimHashCaseInsensitiveUTF8 {#ngramsimhashcaseinsensitiveutf8} + +Выделяет из UTF-8 строки отрезки (n-граммы) размером `ngramsize` символов и возвращает n-граммовый `simhash`. Функция регистро**не**зависимая. + +Может быть использована для проверки двух строк на схожесть вместе с функцией [bitHammingDistance](../../sql-reference/functions/bit-functions.md#bithammingdistance). Чем меньше [расстояние Хэмминга](https://ru.wikipedia.org/wiki/%D0%A0%D0%B0%D1%81%D1%81%D1%82%D0%BE%D1%8F%D0%BD%D0%B8%D0%B5_%D0%A5%D1%8D%D0%BC%D0%BC%D0%B8%D0%BD%D0%B3%D0%B0) между результатом вычисления `simhash` двух строк, тем больше вероятность, что строки совпадают. + +**Синтаксис** + +``` sql +ngramSimHashCaseInsensitiveUTF8(string[, ngramsize]) +``` + +**Аргументы** + +- `string` — строка. [String](../../sql-reference/data-types/string.md). +- `ngramsize` — размер n-грамм. Необязательный. Возможные значения: любое число от `1` до `25`. Значение по умолчанию: `3`. [UInt8](../../sql-reference/data-types/int-uint.md). + +**Возвращаемое значение** + +- Значение хеш-функции от строки. + +Тип: [UInt64](../../sql-reference/data-types/int-uint.md). 
+ +**Пример** + +Запрос: + +``` sql +SELECT ngramSimHashCaseInsensitiveUTF8('ClickHouse') AS Hash; +``` + +Результат: + +``` text +┌───────Hash─┐ +│ 1636742693 │ +└────────────┘ +``` + +## wordShingleSimHash {#wordshinglesimhash} + +Выделяет из ASCII строки отрезки (шинглы) из `shinglesize` слов и возвращает шингловый `simhash`. Функция регистрозависимая. + +Может быть использована для проверки двух строк на схожесть вместе с функцией [bitHammingDistance](../../sql-reference/functions/bit-functions.md#bithammingdistance). Чем меньше [расстояние Хэмминга](https://ru.wikipedia.org/wiki/%D0%A0%D0%B0%D1%81%D1%81%D1%82%D0%BE%D1%8F%D0%BD%D0%B8%D0%B5_%D0%A5%D1%8D%D0%BC%D0%BC%D0%B8%D0%BD%D0%B3%D0%B0) между результатом вычисления `simhash` двух строк, тем больше вероятность, что строки совпадают. + +**Синтаксис** + +``` sql +wordShingleSimHash(string[, shinglesize]) +``` + +**Аргументы** + +- `string` — строка. [String](../../sql-reference/data-types/string.md). +- `shinglesize` — размер словесных шинглов. Необязательный. Возможные значения: любое число от `1` до `25`. Значение по умолчанию: `3`. [UInt8](../../sql-reference/data-types/int-uint.md). + +**Возвращаемое значение** + +- Значение хеш-функции от строки. + +Тип: [UInt64](../../sql-reference/data-types/int-uint.md). + +**Пример** + +Запрос: + +``` sql +SELECT wordShingleSimHash('ClickHouse® is a column-oriented database management system (DBMS) for online analytical processing of queries (OLAP).') AS Hash; +``` + +Результат: + +``` text +┌───────Hash─┐ +│ 2328277067 │ +└────────────┘ +``` + +## wordShingleSimHashCaseInsensitive {#wordshinglesimhashcaseinsensitive} + +Выделяет из ASCII строки отрезки (шинглы) из `shinglesize` слов и возвращает шингловый `simhash`. Функция регистро**не**зависимая. + +Может быть использована для проверки двух строк на схожесть вместе с функцией [bitHammingDistance](../../sql-reference/functions/bit-functions.md#bithammingdistance). Чем меньше [расстояние Хэмминга](https://ru.wikipedia.org/wiki/%D0%A0%D0%B0%D1%81%D1%81%D1%82%D0%BE%D1%8F%D0%BD%D0%B8%D0%B5_%D0%A5%D1%8D%D0%BC%D0%BC%D0%B8%D0%BD%D0%B3%D0%B0) между результатом вычисления `simhash` двух строк, тем больше вероятность, что строки совпадают. + +**Синтаксис** + +``` sql +wordShingleSimHashCaseInsensitive(string[, shinglesize]) +``` + +**Аргументы** + +- `string` — строка. [String](../../sql-reference/data-types/string.md). +- `shinglesize` — размер словесных шинглов. Необязательный. Возможные значения: любое число от `1` до `25`. Значение по умолчанию: `3`. [UInt8](../../sql-reference/data-types/int-uint.md). + +**Возвращаемое значение** + +- Значение хеш-функции от строки. + +Тип: [UInt64](../../sql-reference/data-types/int-uint.md). + +**Пример** + +Запрос: + +``` sql +SELECT wordShingleSimHashCaseInsensitive('ClickHouse® is a column-oriented database management system (DBMS) for online analytical processing of queries (OLAP).') AS Hash; +``` + +Результат: + +``` text +┌───────Hash─┐ +│ 2194812424 │ +└────────────┘ +``` + +## wordShingleSimHashUTF8 {#wordshinglesimhashutf8} + +Выделяет из UTF-8 строки отрезки (шинглы) из `shinglesize` слов и возвращает шингловый `simhash`. Функция регистрозависимая. + +Может быть использована для проверки двух строк на схожесть вместе с функцией [bitHammingDistance](../../sql-reference/functions/bit-functions.md#bithammingdistance). 
Чем меньше [расстояние Хэмминга](https://ru.wikipedia.org/wiki/%D0%A0%D0%B0%D1%81%D1%81%D1%82%D0%BE%D1%8F%D0%BD%D0%B8%D0%B5_%D0%A5%D1%8D%D0%BC%D0%BC%D0%B8%D0%BD%D0%B3%D0%B0) между результатом вычисления `simhash` двух строк, тем больше вероятность, что строки совпадают. + +**Синтаксис** + +``` sql +wordShingleSimHashUTF8(string[, shinglesize]) +``` + +**Аргументы** + +- `string` — строка. [String](../../sql-reference/data-types/string.md). +- `shinglesize` — размер словесных шинглов. Необязательный. Возможные значения: любое число от `1` до `25`. Значение по умолчанию: `3`. [UInt8](../../sql-reference/data-types/int-uint.md). + +**Возвращаемое значение** + +- Значение хеш-функции от строки. + +Тип: [UInt64](../../sql-reference/data-types/int-uint.md). + +**Пример** + +Запрос: + +``` sql +SELECT wordShingleSimHashUTF8('ClickHouse® is a column-oriented database management system (DBMS) for online analytical processing of queries (OLAP).') AS Hash; +``` + +Результат: + +``` text +┌───────Hash─┐ +│ 2328277067 │ +└────────────┘ +``` + +## wordShingleSimHashCaseInsensitiveUTF8 {#wordshinglesimhashcaseinsensitiveutf8} + +Выделяет из UTF-8 строки отрезки (шинглы) из `shinglesize` слов и возвращает шингловый `simhash`. Функция регистро**не**зависимая. + +Может быть использована для проверки двух строк на схожесть вместе с функцией [bitHammingDistance](../../sql-reference/functions/bit-functions.md#bithammingdistance). Чем меньше [расстояние Хэмминга](https://ru.wikipedia.org/wiki/%D0%A0%D0%B0%D1%81%D1%81%D1%82%D0%BE%D1%8F%D0%BD%D0%B8%D0%B5_%D0%A5%D1%8D%D0%BC%D0%BC%D0%B8%D0%BD%D0%B3%D0%B0) между результатом вычисления `simhash` двух строк, тем больше вероятность, что строки совпадают. + +**Синтаксис** + +``` sql +wordShingleSimHashCaseInsensitiveUTF8(string[, shinglesize]) +``` + +**Аргументы** + +- `string` — строка. [String](../../sql-reference/data-types/string.md). +- `shinglesize` — размер словесных шинглов. Необязательный. Возможные значения: любое число от `1` до `25`. Значение по умолчанию: `3`. [UInt8](../../sql-reference/data-types/int-uint.md). + +**Возвращаемое значение** + +- Значение хеш-функции от строки. + +Тип: [UInt64](../../sql-reference/data-types/int-uint.md). + +**Пример** + +Запрос: + +``` sql +SELECT wordShingleSimHashCaseInsensitiveUTF8('ClickHouse® is a column-oriented database management system (DBMS) for online analytical processing of queries (OLAP).') AS Hash; +``` + +Результат: + +``` text +┌───────Hash─┐ +│ 2194812424 │ +└────────────┘ +``` + +## ngramMinHash {#ngramminhash} + +Выделяет из ASCII строки отрезки (n-граммы) размером `ngramsize` символов и вычисляет хеш для каждой n-граммы. Использует `hashnum` минимальных хешей, чтобы вычислить минимальный хеш, и `hashnum` максимальных хешей, чтобы вычислить максимальный хеш. Возвращает кортеж из этих хешей. Функция регистрозависимая. + +Может быть использована для проверки двух строк на схожесть вместе с функцией [tupleHammingDistance](../../sql-reference/functions/tuple-functions.md#tuplehammingdistance). Если для двух строк минимальные или максимальные хеши одинаковы, мы считаем, что эти строки совпадают. + +**Синтаксис** + +``` sql +ngramMinHash(string[, ngramsize, hashnum]) +``` + +**Аргументы** + +- `string` — строка. [String](../../sql-reference/data-types/string.md). +- `ngramsize` — размер n-грамм. Необязательный. Возможные значения: любое число от `1` до `25`. Значение по умолчанию: `3`. [UInt8](../../sql-reference/data-types/int-uint.md). 
+- `hashnum` — количество минимальных и максимальных хешей, которое используется при вычислении результата. Необязательный. Возможные значения: любое число от `1` до `25`. Значение по умолчанию: `6`. [UInt8](../../sql-reference/data-types/int-uint.md). + +**Возвращаемое значение** + +- Кортеж с двумя хешами — минимальным и максимальным. + +Тип: [Tuple](../../sql-reference/data-types/tuple.md)([UInt64](../../sql-reference/data-types/int-uint.md), [UInt64](../../sql-reference/data-types/int-uint.md)). + +**Пример** + +Запрос: + +``` sql +SELECT ngramMinHash('ClickHouse') AS Tuple; +``` + +Результат: + +``` text +┌─Tuple──────────────────────────────────────┐ +│ (18333312859352735453,9054248444481805918) │ +└────────────────────────────────────────────┘ +``` + +## ngramMinHashCaseInsensitive {#ngramminhashcaseinsensitive} + +Выделяет из ASCII строки отрезки (n-граммы) размером `ngramsize` символов и вычисляет хеш для каждой n-граммы. Использует `hashnum` минимальных хешей, чтобы вычислить минимальный хеш, и `hashnum` максимальных хешей, чтобы вычислить максимальный хеш. Возвращает кортеж из этих хешей. Функция регистро**не**зависимая. + +Может быть использована для проверки двух строк на схожесть вместе с функцией [tupleHammingDistance](../../sql-reference/functions/tuple-functions.md#tuplehammingdistance). Если для двух строк минимальные или максимальные хеши одинаковы, мы считаем, что эти строки совпадают. + +**Синтаксис** + +``` sql +ngramMinHashCaseInsensitive(string[, ngramsize, hashnum]) +``` + +**Аргументы** + +- `string` — строка. [String](../../sql-reference/data-types/string.md). +- `ngramsize` — размер n-грамм. Необязательный. Возможные значения: любое число от `1` до `25`. Значение по умолчанию: `3`. [UInt8](../../sql-reference/data-types/int-uint.md). +- `hashnum` — количество минимальных и максимальных хешей, которое используется при вычислении результата. Необязательный. Возможные значения: любое число от `1` до `25`. Значение по умолчанию: `6`. [UInt8](../../sql-reference/data-types/int-uint.md). + +**Возвращаемое значение** + +- Кортеж с двумя хешами — минимальным и максимальным. + +Тип: [Tuple](../../sql-reference/data-types/tuple.md)([UInt64](../../sql-reference/data-types/int-uint.md), [UInt64](../../sql-reference/data-types/int-uint.md)). + +**Пример** + +Запрос: + +``` sql +SELECT ngramMinHashCaseInsensitive('ClickHouse') AS Tuple; +``` + +Результат: + +``` text +┌─Tuple──────────────────────────────────────┐ +│ (2106263556442004574,13203602793651726206) │ +└────────────────────────────────────────────┘ +``` + +## ngramMinHashUTF8 {#ngramminhashutf8} + +Выделяет из UTF-8 строки отрезки (n-граммы) размером `ngramsize` символов и вычисляет хеш для каждой n-граммы. Использует `hashnum` минимальных хешей, чтобы вычислить минимальный хеш, и `hashnum` максимальных хешей, чтобы вычислить максимальный хеш. Возвращает кортеж из этих хешей. Функция регистрозависимая. + +Может быть использована для проверки двух строк на схожесть вместе с функцией [tupleHammingDistance](../../sql-reference/functions/tuple-functions.md#tuplehammingdistance). Если для двух строк минимальные или максимальные хеши одинаковы, мы считаем, что эти строки совпадают. + +**Синтаксис** +``` sql +ngramMinHashUTF8(string[, ngramsize, hashnum]) +``` + +**Аргументы** + +- `string` — строка. [String](../../sql-reference/data-types/string.md). +- `ngramsize` — размер n-грамм. Необязательный. Возможные значения: любое число от `1` до `25`. Значение по умолчанию: `3`. 
[UInt8](../../sql-reference/data-types/int-uint.md). +- `hashnum` — количество минимальных и максимальных хешей, которое используется при вычислении результата. Необязательный. Возможные значения: любое число от `1` до `25`. Значение по умолчанию: `6`. [UInt8](../../sql-reference/data-types/int-uint.md). + +**Возвращаемое значение** + +- Кортеж с двумя хешами — минимальным и максимальным. + +Тип: [Tuple](../../sql-reference/data-types/tuple.md)([UInt64](../../sql-reference/data-types/int-uint.md), [UInt64](../../sql-reference/data-types/int-uint.md)). + +**Пример** + +Запрос: + +``` sql +SELECT ngramMinHashUTF8('ClickHouse') AS Tuple; +``` + +Результат: + +``` text +┌─Tuple──────────────────────────────────────┐ +│ (18333312859352735453,6742163577938632877) │ +└────────────────────────────────────────────┘ +``` + +## ngramMinHashCaseInsensitiveUTF8 {#ngramminhashcaseinsensitiveutf8} + +Выделяет из UTF-8 строки отрезки (n-граммы) размером `ngramsize` символов и вычисляет хеш для каждой n-граммы. Использует `hashnum` минимальных хешей, чтобы вычислить минимальный хеш, и `hashnum` максимальных хешей, чтобы вычислить максимальный хеш. Возвращает кортеж из этих хешей. Функция регистро**не**зависимая. + +Может быть использована для проверки двух строк на схожесть вместе с функцией [tupleHammingDistance](../../sql-reference/functions/tuple-functions.md#tuplehammingdistance). Если для двух строк минимальные или максимальные хеши одинаковы, мы считаем, что эти строки совпадают. + +**Синтаксис** + +``` sql +ngramMinHashCaseInsensitiveUTF8(string [, ngramsize, hashnum]) +``` + +**Аргументы** + +- `string` — строка. [String](../../sql-reference/data-types/string.md). +- `ngramsize` — размер n-грамм. Необязательный. Возможные значения: любое число от `1` до `25`. Значение по умолчанию: `3`. [UInt8](../../sql-reference/data-types/int-uint.md). +- `hashnum` — количество минимальных и максимальных хешей, которое используется при вычислении результата. Необязательный. Возможные значения: любое число от `1` до `25`. Значение по умолчанию: `6`. [UInt8](../../sql-reference/data-types/int-uint.md). + +**Возвращаемое значение** + +- Кортеж с двумя хешами — минимальным и максимальным. + +Тип: [Tuple](../../sql-reference/data-types/tuple.md)([UInt64](../../sql-reference/data-types/int-uint.md), [UInt64](../../sql-reference/data-types/int-uint.md)). + +**Пример** + +Запрос: + +``` sql +SELECT ngramMinHashCaseInsensitiveUTF8('ClickHouse') AS Tuple; +``` + +Результат: + +``` text +┌─Tuple───────────────────────────────────────┐ +│ (12493625717655877135,13203602793651726206) │ +└─────────────────────────────────────────────┘ +``` + +## ngramMinHashArg {#ngramminhasharg} + +Выделяет из ASCII строки отрезки (n-граммы) размером `ngramsize` символов и возвращает n-граммы с минимальным и максимальным хешами, вычисленными функцией [ngramMinHash](#ngramminhash) с теми же входными данными. Функция регистрозависимая. + +**Синтаксис** + +``` sql +ngramMinHashArg(string[, ngramsize, hashnum]) +``` + +**Аргументы** + +- `string` — строка. [String](../../sql-reference/data-types/string.md). +- `ngramsize` — размер n-грамм. Необязательный. Возможные значения: любое число от `1` до `25`. Значение по умолчанию: `3`. [UInt8](../../sql-reference/data-types/int-uint.md). +- `hashnum` — количество минимальных и максимальных хешей, которое используется при вычислении результата. Необязательный. Возможные значения: любое число от `1` до `25`. Значение по умолчанию: `6`. [UInt8](../../sql-reference/data-types/int-uint.md). 
+ +**Возвращаемое значение** + +- Кортеж из двух кортежей, каждый из которых состоит из `hashnum` n-грамм. + +Тип: [Tuple](../../sql-reference/data-types/tuple.md)([Tuple](../../sql-reference/data-types/tuple.md)([String](../../sql-reference/data-types/string.md)), [Tuple](../../sql-reference/data-types/tuple.md)([String](../../sql-reference/data-types/string.md))). + +**Пример** + +Запрос: + +``` sql +SELECT ngramMinHashArg('ClickHouse') AS Tuple; +``` + +Результат: + +``` text +┌─Tuple─────────────────────────────────────────────────────────────────────────┐ +│ (('ous','ick','lic','Hou','kHo','use'),('Hou','lic','ick','ous','ckH','Cli')) │ +└───────────────────────────────────────────────────────────────────────────────┘ +``` + +## ngramMinHashArgCaseInsensitive {#ngramminhashargcaseinsensitive} + +Выделяет из ASCII строки отрезки (n-граммы) размером `ngramsize` символов и возвращает n-граммы с минимальным и максимальным хешами, вычисленными функцией [ngramMinHashCaseInsensitive](#ngramminhashcaseinsensitive) с теми же входными данными. Функция регистро**не**зависимая. + +**Синтаксис** + +``` sql +ngramMinHashArgCaseInsensitive(string[, ngramsize, hashnum]) +``` + +**Аргументы** + +- `string` — строка. [String](../../sql-reference/data-types/string.md). +- `ngramsize` — размер n-грамм. Необязательный. Возможные значения: любое число от `1` до `25`. Значение по умолчанию: `3`. [UInt8](../../sql-reference/data-types/int-uint.md). +- `hashnum` — количество минимальных и максимальных хешей, которое используется при вычислении результата. Необязательный. Возможные значения: любое число от `1` до `25`. Значение по умолчанию: `6`. [UInt8](../../sql-reference/data-types/int-uint.md). + +**Возвращаемое значение** + +- Кортеж из двух кортежей, каждый из которых состоит из `hashnum` n-грамм. + +Тип: [Tuple](../../sql-reference/data-types/tuple.md)([Tuple](../../sql-reference/data-types/tuple.md)([String](../../sql-reference/data-types/string.md)), [Tuple](../../sql-reference/data-types/tuple.md)([String](../../sql-reference/data-types/string.md))). + +**Пример** + +Запрос: + +``` sql +SELECT ngramMinHashArgCaseInsensitive('ClickHouse') AS Tuple; +``` + +Результат: + +``` text +┌─Tuple─────────────────────────────────────────────────────────────────────────┐ +│ (('ous','ick','lic','kHo','use','Cli'),('kHo','lic','ick','ous','ckH','Hou')) │ +└───────────────────────────────────────────────────────────────────────────────┘ +``` + +## ngramMinHashArgUTF8 {#ngramminhashargutf8} + +Выделяет из UTF-8 строки отрезки (n-граммы) размером `ngramsize` символов и возвращает n-граммы с минимальным и максимальным хешами, вычисленными функцией [ngramMinHashUTF8](#ngramminhashutf8) с теми же входными данными. Функция регистрозависимая. + +**Синтаксис** + +``` sql +ngramMinHashArgUTF8(string[, ngramsize, hashnum]) +``` + +**Аргументы** + +- `string` — строка. [String](../../sql-reference/data-types/string.md). +- `ngramsize` — размер n-грамм. Необязательный. Возможные значения: любое число от `1` до `25`. Значение по умолчанию: `3`. [UInt8](../../sql-reference/data-types/int-uint.md). +- `hashnum` — количество минимальных и максимальных хешей, которое используется при вычислении результата. Необязательный. Возможные значения: любое число от `1` до `25`. Значение по умолчанию: `6`. [UInt8](../../sql-reference/data-types/int-uint.md). + +**Возвращаемое значение** + +- Кортеж из двух кортежей, каждый из которых состоит из `hashnum` n-грамм. 
+ +Тип: [Tuple](../../sql-reference/data-types/tuple.md)([Tuple](../../sql-reference/data-types/tuple.md)([String](../../sql-reference/data-types/string.md)), [Tuple](../../sql-reference/data-types/tuple.md)([String](../../sql-reference/data-types/string.md))). + +**Пример** + +Запрос: + +``` sql +SELECT ngramMinHashArgUTF8('ClickHouse') AS Tuple; +``` + +Результат: + +``` text +┌─Tuple─────────────────────────────────────────────────────────────────────────┐ +│ (('ous','ick','lic','Hou','kHo','use'),('kHo','Hou','lic','ick','ous','ckH')) │ +└───────────────────────────────────────────────────────────────────────────────┘ +``` + +## ngramMinHashArgCaseInsensitiveUTF8 {#ngramminhashargcaseinsensitiveutf8} + +Выделяет из UTF-8 строки отрезки (n-граммы) размером `ngramsize` символов и возвращает n-граммы с минимальным и максимальным хешами, вычисленными функцией [ngramMinHashCaseInsensitiveUTF8](#ngramminhashcaseinsensitiveutf8) с теми же входными данными. Функция регистро**не**зависимая. + +**Синтаксис** + +``` sql +ngramMinHashArgCaseInsensitiveUTF8(string[, ngramsize, hashnum]) +``` + +**Аргументы** + +- `string` — строка. [String](../../sql-reference/data-types/string.md). +- `ngramsize` — размер n-грамм. Необязательный. Возможные значения: любое число от `1` до `25`. Значение по умолчанию: `3`. [UInt8](../../sql-reference/data-types/int-uint.md). +- `hashnum` — количество минимальных и максимальных хешей, которое используется при вычислении результата. Необязательный. Возможные значения: любое число от `1` до `25`. Значение по умолчанию: `6`. [UInt8](../../sql-reference/data-types/int-uint.md). + +**Возвращаемое значение** + +- Кортеж из двух кортежей, каждый из которых состоит из `hashnum` n-грамм. + +Тип: [Tuple](../../sql-reference/data-types/tuple.md)([Tuple](../../sql-reference/data-types/tuple.md)([String](../../sql-reference/data-types/string.md)), [Tuple](../../sql-reference/data-types/tuple.md)([String](../../sql-reference/data-types/string.md))). + +**Пример** + +Запрос: + +``` sql +SELECT ngramMinHashArgCaseInsensitiveUTF8('ClickHouse') AS Tuple; +``` + +Результат: + +``` text +┌─Tuple─────────────────────────────────────────────────────────────────────────┐ +│ (('ckH','ous','ick','lic','kHo','use'),('kHo','lic','ick','ous','ckH','Hou')) │ +└───────────────────────────────────────────────────────────────────────────────┘ +``` + +## wordShingleMinHash {#wordshingleminhash} + +Выделяет из ASCII строки отрезки (шинглы) из `shinglesize` слов и вычисляет хеш для каждого шингла. Использует `hashnum` минимальных хешей, чтобы вычислить минимальный хеш, и `hashnum` максимальных хешей, чтобы вычислить максимальный хеш. Возвращает кортеж из этих хешей. Функция регистрозависимая. + +Может быть использована для проверки двух строк на схожесть вместе с функцией [tupleHammingDistance](../../sql-reference/functions/tuple-functions.md#tuplehammingdistance). Если для двух строк минимальные или максимальные хеши одинаковы, мы считаем, что эти строки совпадают. + +**Синтаксис** + +``` sql +wordShingleMinHash(string[, shinglesize, hashnum]) +``` + +**Аргументы** + +- `string` — строка. [String](../../sql-reference/data-types/string.md). +- `shinglesize` — размер словесных шинглов. Необязательный. Возможные значения: любое число от `1` до `25`. Значение по умолчанию: `3`. [UInt8](../../sql-reference/data-types/int-uint.md). +- `hashnum` — количество минимальных и максимальных хешей, которое используется при вычислении результата. Необязательный. Возможные значения: любое число от `1` до `25`. 
Значение по умолчанию: `6`. [UInt8](../../sql-reference/data-types/int-uint.md). + +**Возвращаемое значение** + +- Кортеж с двумя хешами — минимальным и максимальным. + +Тип: [Tuple](../../sql-reference/data-types/tuple.md)([UInt64](../../sql-reference/data-types/int-uint.md), [UInt64](../../sql-reference/data-types/int-uint.md)). + +**Пример** + +Запрос: + +``` sql +SELECT wordShingleMinHash('ClickHouse® is a column-oriented database management system (DBMS) for online analytical processing of queries (OLAP).') AS Tuple; +``` + +Результат: + +``` text +┌─Tuple──────────────────────────────────────┐ +│ (16452112859864147620,5844417301642981317) │ +└────────────────────────────────────────────┘ +``` + +## wordShingleMinHashCaseInsensitive {#wordshingleminhashcaseinsensitive} + +Выделяет из ASCII строки отрезки (шинглы) из `shinglesize` слов и вычисляет хеш для каждого шингла. Использует `hashnum` минимальных хешей, чтобы вычислить минимальный хеш, и `hashnum` максимальных хешей, чтобы вычислить максимальный хеш. Возвращает кортеж из этих хешей. Функция регистро**не**зависимая. + +Может быть использована для проверки двух строк на схожесть вместе с функцией [tupleHammingDistance](../../sql-reference/functions/tuple-functions.md#tuplehammingdistance). Если для двух строк минимальные или максимальные хеши одинаковы, мы считаем, что эти строки совпадают. + +**Синтаксис** + +``` sql +wordShingleMinHashCaseInsensitive(string[, shinglesize, hashnum]) +``` + +**Аргументы** + +- `string` — строка. [String](../../sql-reference/data-types/string.md). +- `shinglesize` — размер словесных шинглов. Необязательный. Возможные значения: любое число от `1` до `25`. Значение по умолчанию: `3`. [UInt8](../../sql-reference/data-types/int-uint.md). +- `hashnum` — количество минимальных и максимальных хешей, которое используется при вычислении результата. Необязательный. Возможные значения: любое число от `1` до `25`. Значение по умолчанию: `6`. [UInt8](../../sql-reference/data-types/int-uint.md). + +**Возвращаемое значение** + +- Кортеж с двумя хешами — минимальным и максимальным. + +Тип: [Tuple](../../sql-reference/data-types/tuple.md)([UInt64](../../sql-reference/data-types/int-uint.md), [UInt64](../../sql-reference/data-types/int-uint.md)). + +**Пример** + +Запрос: + +``` sql +SELECT wordShingleMinHashCaseInsensitive('ClickHouse® is a column-oriented database management system (DBMS) for online analytical processing of queries (OLAP).') AS Tuple; +``` + +Результат: + +``` text +┌─Tuple─────────────────────────────────────┐ +│ (3065874883688416519,1634050779997673240) │ +└───────────────────────────────────────────┘ +``` + +## wordShingleMinHashUTF8 {#wordshingleminhashutf8} + +Выделяет из UTF-8 строки отрезки (шинглы) из `shinglesize` слов и вычисляет хеш для каждого шингла. Использует `hashnum` минимальных хешей, чтобы вычислить минимальный хеш, и `hashnum` максимальных хешей, чтобы вычислить максимальный хеш. Возвращает кортеж из этих хешей. Функция регистрозависимая. + +Может быть использована для проверки двух строк на схожесть вместе с функцией [tupleHammingDistance](../../sql-reference/functions/tuple-functions.md#tuplehammingdistance). Если для двух строк минимальные или максимальные хеши одинаковы, мы считаем, что эти строки совпадают. + +**Синтаксис** + +``` sql +wordShingleMinHashUTF8(string[, shinglesize, hashnum]) +``` + +**Аргументы** + +- `string` — строка. [String](../../sql-reference/data-types/string.md). +- `shinglesize` — размер словесных шинглов. Необязательный. 
Возможные значения: любое число от `1` до `25`. Значение по умолчанию: `3`. [UInt8](../../sql-reference/data-types/int-uint.md).
+- `hashnum` — количество минимальных и максимальных хешей, которое используется при вычислении результата. Необязательный. Возможные значения: любое число от `1` до `25`. Значение по умолчанию: `6`. [UInt8](../../sql-reference/data-types/int-uint.md).
+
+**Возвращаемое значение**
+
+- Кортеж с двумя хешами — минимальным и максимальным.
+
+Тип: [Tuple](../../sql-reference/data-types/tuple.md)([UInt64](../../sql-reference/data-types/int-uint.md), [UInt64](../../sql-reference/data-types/int-uint.md)).
+
+**Пример**
+
+Запрос:
+
+``` sql
+SELECT wordShingleMinHashUTF8('ClickHouse® is a column-oriented database management system (DBMS) for online analytical processing of queries (OLAP).') AS Tuple;
+```
+
+Результат:
+
+``` text
+┌─Tuple──────────────────────────────────────┐
+│ (16452112859864147620,5844417301642981317) │
+└────────────────────────────────────────────┘
+```
+
+## wordShingleMinHashCaseInsensitiveUTF8 {#wordshingleminhashcaseinsensitiveutf8}
+
+Выделяет из UTF-8 строки отрезки (шинглы) из `shinglesize` слов и вычисляет хеш для каждого шингла. Использует `hashnum` минимальных хешей, чтобы вычислить минимальный хеш, и `hashnum` максимальных хешей, чтобы вычислить максимальный хеш. Возвращает кортеж из этих хешей. Функция регистро**не**зависимая.
+
+Может быть использована для проверки двух строк на схожесть вместе с функцией [tupleHammingDistance](../../sql-reference/functions/tuple-functions.md#tuplehammingdistance). Если для двух строк минимальные или максимальные хеши одинаковы, мы считаем, что эти строки совпадают.
+
+**Синтаксис**
+
+``` sql
+wordShingleMinHashCaseInsensitiveUTF8(string[, shinglesize, hashnum])
+```
+
+**Аргументы**
+
+- `string` — строка. [String](../../sql-reference/data-types/string.md).
+- `shinglesize` — размер словесных шинглов. Необязательный. Возможные значения: любое число от `1` до `25`. Значение по умолчанию: `3`. [UInt8](../../sql-reference/data-types/int-uint.md).
+- `hashnum` — количество минимальных и максимальных хешей, которое используется при вычислении результата. Необязательный. Возможные значения: любое число от `1` до `25`. Значение по умолчанию: `6`. [UInt8](../../sql-reference/data-types/int-uint.md).
+
+**Возвращаемое значение**
+
+- Кортеж с двумя хешами — минимальным и максимальным.
+
+Тип: [Tuple](../../sql-reference/data-types/tuple.md)([UInt64](../../sql-reference/data-types/int-uint.md), [UInt64](../../sql-reference/data-types/int-uint.md)).
+
+**Пример**
+
+Запрос:
+
+``` sql
+SELECT wordShingleMinHashCaseInsensitiveUTF8('ClickHouse® is a column-oriented database management system (DBMS) for online analytical processing of queries (OLAP).') AS Tuple;
+```
+
+Результат:
+
+``` text
+┌─Tuple─────────────────────────────────────┐
+│ (3065874883688416519,1634050779997673240) │
+└───────────────────────────────────────────┘
+```
+
+## wordShingleMinHashArg {#wordshingleminhasharg}
+
+Выделяет из ASCII строки отрезки (шинглы) из `shinglesize` слов и возвращает шинглы с минимальным и максимальным хешами, вычисленными функцией [wordShingleMinHash](#wordshingleminhash) с теми же входными данными. Функция регистрозависимая.
+
+**Синтаксис**
+
+``` sql
+wordShingleMinHashArg(string[, shinglesize, hashnum])
+```
+
+**Аргументы**
+
+- `string` — строка. [String](../../sql-reference/data-types/string.md).
+- `shinglesize` — размер словесных шинглов. Необязательный. Возможные значения: любое число от `1` до `25`.
Значение по умолчанию: `3`. [UInt8](../../sql-reference/data-types/int-uint.md). +- `hashnum` — количество минимальных и максимальных хешей, которое используется при вычислении результата. Необязательный. Возможные значения: любое число от `1` до `25`. Значение по умолчанию: `6`. [UInt8](../../sql-reference/data-types/int-uint.md). + +**Возвращаемое значение** + +- Кортеж из двух кортежей, каждый из которых состоит из `hashnum` шинглов. + +Тип: [Tuple](../../sql-reference/data-types/tuple.md)([Tuple](../../sql-reference/data-types/tuple.md)([String](../../sql-reference/data-types/string.md)), [Tuple](../../sql-reference/data-types/tuple.md)([String](../../sql-reference/data-types/string.md))). + +**Пример** + +Запрос: + +``` sql +SELECT wordShingleMinHashArg('ClickHouse® is a column-oriented database management system (DBMS) for online analytical processing of queries (OLAP).', 1, 3) AS Tuple; +``` + +Результат: + +``` text +┌─Tuple─────────────────────────────────────────────────────────────────┐ +│ (('OLAP','database','analytical'),('online','oriented','processing')) │ +└───────────────────────────────────────────────────────────────────────┘ +``` + +## wordShingleMinHashArgCaseInsensitive {#wordshingleminhashargcaseinsensitive} + +Выделяет из ASCII строки отрезки (шинглы) из `shinglesize` слов и возвращает шинглы с минимальным и максимальным хешами, вычисленными функцией [wordShingleMinHashCaseInsensitive](#wordshingleminhashcaseinsensitive) с теми же входными данными. Функция регистро**не**зависимая. + +**Синтаксис** + +``` sql +wordShingleMinHashArgCaseInsensitive(string[, shinglesize, hashnum]) +``` + +**Аргументы** + +- `string` — строка. [String](../../sql-reference/data-types/string.md). +- `shinglesize` — размер словесных шинглов. Необязательный. Возможные значения: любое число от `1` до `25`. Значение по умолчанию: `3`. [UInt8](../../sql-reference/data-types/int-uint.md). +- `hashnum` — количество минимальных и максимальных хешей, которое используется при вычислении результата. Необязательный. Возможные значения: любое число от `1` до `25`. Значение по умолчанию: `6`. [UInt8](../../sql-reference/data-types/int-uint.md). + +**Возвращаемое значение** + +- Кортеж из двух кортежей, каждый из которых состоит из `hashnum` шинглов. + +Тип: [Tuple](../../sql-reference/data-types/tuple.md)([Tuple](../../sql-reference/data-types/tuple.md)([String](../../sql-reference/data-types/string.md)), [Tuple](../../sql-reference/data-types/tuple.md)([String](../../sql-reference/data-types/string.md))). + +**Пример** + +Запрос: + +``` sql +SELECT wordShingleMinHashArgCaseInsensitive('ClickHouse® is a column-oriented database management system (DBMS) for online analytical processing of queries (OLAP).', 1, 3) AS Tuple; +``` + +Результат: + +``` text +┌─Tuple──────────────────────────────────────────────────────────────────┐ +│ (('queries','database','analytical'),('oriented','processing','DBMS')) │ +└────────────────────────────────────────────────────────────────────────┘ +``` + +## wordShingleMinHashArgUTF8 {#wordshingleminhashargutf8} + +Выделяет из UTF-8 строки отрезки (шинглы) из `shinglesize` слов и возвращает шинглы с минимальным и максимальным хешами, вычисленными функцией [wordShingleMinHashUTF8](#wordshingleminhashutf8) с теми же входными данными. Функция регистрозависимая. + +**Синтаксис** + +``` sql +wordShingleMinHashArgUTF8(string[, shinglesize, hashnum]) +``` + +**Аргументы** + +- `string` — строка. [String](../../sql-reference/data-types/string.md). 
+- `shinglesize` — размер словесных шинглов. Необязательный. Возможные значения: любое число от `1` до `25`. Значение по умолчанию: `3`. [UInt8](../../sql-reference/data-types/int-uint.md). +- `hashnum` — количество минимальных и максимальных хешей, которое используется при вычислении результата. Необязательный. Возможные значения: любое число от `1` до `25`. Значение по умолчанию: `6`. [UInt8](../../sql-reference/data-types/int-uint.md). + +**Возвращаемое значение** + +- Кортеж из двух кортежей, каждый из которых состоит из `hashnum` шинглов. + +Тип: [Tuple](../../sql-reference/data-types/tuple.md)([Tuple](../../sql-reference/data-types/tuple.md)([String](../../sql-reference/data-types/string.md)), [Tuple](../../sql-reference/data-types/tuple.md)([String](../../sql-reference/data-types/string.md))). + +**Пример** + +Запрос: + +``` sql +SELECT wordShingleMinHashArgUTF8('ClickHouse® is a column-oriented database management system (DBMS) for online analytical processing of queries (OLAP).', 1, 3) AS Tuple; +``` + +Результат: + +``` text +┌─Tuple─────────────────────────────────────────────────────────────────┐ +│ (('OLAP','database','analytical'),('online','oriented','processing')) │ +└───────────────────────────────────────────────────────────────────────┘ +``` + +## wordShingleMinHashArgCaseInsensitiveUTF8 {#wordshingleminhashargcaseinsensitiveutf8} + +Выделяет из UTF-8 строки отрезки (шинглы) из `shinglesize` слов и возвращает шинглы с минимальным и максимальным хешами, вычисленными функцией [wordShingleMinHashCaseInsensitiveUTF8](#wordshingleminhashcaseinsensitiveutf8) с теми же входными данными. Функция регистро**не**зависимая. + +**Синтаксис** + +``` sql +wordShingleMinHashArgCaseInsensitiveUTF8(string[, shinglesize, hashnum]) +``` + +**Аргументы** + +- `string` — строка. [String](../../sql-reference/data-types/string.md). +- `shinglesize` — размер словесных шинглов. Необязательный. Возможные значения: любое число от `1` до `25`. Значение по умолчанию: `3`. [UInt8](../../sql-reference/data-types/int-uint.md). +- `hashnum` — количество минимальных и максимальных хешей, которое используется при вычислении результата. Необязательный. Возможные значения: любое число от `1` до `25`. Значение по умолчанию: `6`. [UInt8](../../sql-reference/data-types/int-uint.md). + +**Возвращаемое значение** + +- Кортеж из двух кортежей, каждый из которых состоит из `hashnum` шинглов. + +Тип: [Tuple](../../sql-reference/data-types/tuple.md)([Tuple](../../sql-reference/data-types/tuple.md)([String](../../sql-reference/data-types/string.md)), [Tuple](../../sql-reference/data-types/tuple.md)([String](../../sql-reference/data-types/string.md))). + +**Пример** + +Запрос: + +``` sql +SELECT wordShingleMinHashArgCaseInsensitiveUTF8('ClickHouse® is a column-oriented database management system (DBMS) for online analytical processing of queries (OLAP).', 1, 3) AS Tuple; +``` + +Результат: + +``` text +┌─Tuple──────────────────────────────────────────────────────────────────┐ +│ (('queries','database','analytical'),('oriented','processing','DBMS')) │ +└────────────────────────────────────────────────────────────────────────┘ +``` diff --git a/docs/ru/sql-reference/functions/in-functions.md b/docs/ru/sql-reference/functions/in-functions.md index 7326d087610..2bdb71d5f93 100644 --- a/docs/ru/sql-reference/functions/in-functions.md +++ b/docs/ru/sql-reference/functions/in-functions.md @@ -9,4 +9,3 @@ toc_title: "Функции для реализации оператора IN" Смотрите раздел [Операторы IN](../operators/in.md#select-in-operators). 
-[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/functions/in_functions/) diff --git a/docs/ru/sql-reference/functions/index.md b/docs/ru/sql-reference/functions/index.md index ae3879b6c96..1eefd4d9f73 100644 --- a/docs/ru/sql-reference/functions/index.md +++ b/docs/ru/sql-reference/functions/index.md @@ -82,4 +82,3 @@ str -> str != Referer Если функция в запросе выполняется на сервере-инициаторе запроса, а вам нужно, чтобы она выполнялась на удалённых серверах, вы можете обернуть её в агрегатную функцию any или добавить в ключ в `GROUP BY`. -[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/functions/) diff --git a/docs/ru/sql-reference/functions/introspection.md b/docs/ru/sql-reference/functions/introspection.md index 4cd7e5d273b..cb2bcdb787f 100644 --- a/docs/ru/sql-reference/functions/introspection.md +++ b/docs/ru/sql-reference/functions/introspection.md @@ -32,7 +32,7 @@ ClickHouse сохраняет отчеты профилировщика в [жу addressToLine(address_of_binary_instruction) ``` -**Параметры** +**Аргументы** - `address_of_binary_instruction` ([Тип UInt64](../../sql-reference/functions/introspection.md))- Адрес инструкции в запущенном процессе. @@ -53,13 +53,13 @@ addressToLine(address_of_binary_instruction) Включение функций самоанализа: ``` sql -SET allow_introspection_functions=1 +SET allow_introspection_functions=1; ``` Выбор первой строки из списка `trace_log` системная таблица: ``` sql -SELECT * FROM system.trace_log LIMIT 1 \G +SELECT * FROM system.trace_log LIMIT 1 \G; ``` ``` text @@ -79,7 +79,7 @@ trace: [140658411141617,94784174532828,94784076370703,94784076 Получение имени файла исходного кода и номера строки для одного адреса: ``` sql -SELECT addressToLine(94784076370703) \G +SELECT addressToLine(94784076370703) \G; ``` ``` text @@ -123,9 +123,9 @@ trace_source_code_lines: /lib/x86_64-linux-gnu/libpthread-2.27.so addressToSymbol(address_of_binary_instruction) ``` -**Параметры** +**Аргументы** -- `address_of_binary_instruction` ([Тип uint64](../../sql-reference/functions/introspection.md)) — Адрес инструкции в запущенном процессе. +- `address_of_binary_instruction` ([Тип uint64](../../sql-reference/functions/introspection.md)) — адрес инструкции в запущенном процессе. **Возвращаемое значение** @@ -139,13 +139,13 @@ addressToSymbol(address_of_binary_instruction) Включение функций самоанализа: ``` sql -SET allow_introspection_functions=1 +SET allow_introspection_functions=1; ``` Выбор первой строки из списка `trace_log` системная таблица: ``` sql -SELECT * FROM system.trace_log LIMIT 1 \G +SELECT * FROM system.trace_log LIMIT 1 \G; ``` ``` text @@ -165,7 +165,7 @@ trace: [94138803686098,94138815010911,94138815096522,94138815101224,9413 Получение символа для одного адреса: ``` sql -SELECT addressToSymbol(94138803686098) \G +SELECT addressToSymbol(94138803686098) \G; ``` ``` text @@ -220,9 +220,9 @@ clone demangle(symbol) ``` -**Параметры** +**Аргументы** -- `symbol` ([Строка](../../sql-reference/functions/introspection.md)) - Символ из объектного файла. +- `symbol` ([Строка](../../sql-reference/functions/introspection.md)) - символ из объектного файла. 
**Возвращаемое значение** @@ -236,13 +236,13 @@ demangle(symbol) Включение функций самоанализа: ``` sql -SET allow_introspection_functions=1 +SET allow_introspection_functions=1; ``` Выбор первой строки из списка `trace_log` системная таблица: ``` sql -SELECT * FROM system.trace_log LIMIT 1 \G +SELECT * FROM system.trace_log LIMIT 1 \G; ``` ``` text @@ -262,7 +262,7 @@ trace: [94138803686098,94138815010911,94138815096522,94138815101224,9413 Получение имени функции для одного адреса: ``` sql -SELECT demangle(addressToSymbol(94138803686098)) \G +SELECT demangle(addressToSymbol(94138803686098)) \G; ``` ``` text @@ -336,6 +336,7 @@ SELECT tid(); │ 3878 │ └───────┘ ``` + ## logTrace {#logtrace} Выводит сообщение в лог сервера для каждого [Block](https://clickhouse.tech/docs/ru/development/architecture/#block). @@ -346,7 +347,7 @@ SELECT tid(); logTrace('message') ``` -**Параметры** +**Аргументы** - `message` — сообщение, которое отправляется в серверный лог. [String](../../sql-reference/data-types/string.md#string). @@ -354,7 +355,7 @@ logTrace('message') - Всегда возвращает 0. -**Example** +**Пример** Запрос: @@ -370,4 +371,4 @@ SELECT logTrace('logTrace message'); └──────────────────────────────┘ ``` -[Original article](https://clickhouse.tech/docs/en/query_language/functions/introspection/) \ No newline at end of file +[Original article](https://clickhouse.tech/docs/en/query_language/functions/introspection/) diff --git a/docs/ru/sql-reference/functions/ip-address-functions.md b/docs/ru/sql-reference/functions/ip-address-functions.md index a2a08b1938e..d7f6d2f7618 100644 --- a/docs/ru/sql-reference/functions/ip-address-functions.md +++ b/docs/ru/sql-reference/functions/ip-address-functions.md @@ -174,7 +174,7 @@ SELECT addr, cutIPv6(IPv6StringToNum(addr), 0, 0) FROM (SELECT ['notaddress', '1 Принимает число типа `UInt32`. Интерпретирует его, как IPv4-адрес в [big endian](https://en.wikipedia.org/wiki/Endianness). Возвращает значение `FixedString(16)`, содержащее адрес IPv6 в двоичном формате. Примеры: ``` sql -SELECT IPv6NumToString(IPv4ToIPv6(IPv4StringToNum('192.168.0.1'))) AS addr +SELECT IPv6NumToString(IPv4ToIPv6(IPv4StringToNum('192.168.0.1'))) AS addr; ``` ``` text @@ -207,7 +207,7 @@ SELECT Принимает на вход IPv4 и значение `UInt8`, содержащее [CIDR](https://ru.wikipedia.org/wiki/Бесклассовая_адресация). Возвращает кортеж с двумя IPv4, содержащими нижний и более высокий диапазон подсети. ``` sql -SELECT IPv4CIDRToRange(toIPv4('192.168.5.2'), 16) +SELECT IPv4CIDRToRange(toIPv4('192.168.5.2'), 16); ``` ``` text @@ -221,7 +221,7 @@ SELECT IPv4CIDRToRange(toIPv4('192.168.5.2'), 16) Принимает на вход IPv6 и значение `UInt8`, содержащее CIDR. Возвращает кортеж с двумя IPv6, содержащими нижний и более высокий диапазон подсети. ``` sql -SELECT IPv6CIDRToRange(toIPv6('2001:0db8:0000:85a3:0000:0000:ac1f:8001'), 32) +SELECT IPv6CIDRToRange(toIPv6('2001:0db8:0000:85a3:0000:0000:ac1f:8001'), 32); ``` ``` text @@ -328,7 +328,7 @@ SELECT toIPv6('127.0.0.1'); isIPv4String(string) ``` -**Параметры** +**Аргументы** - `string` — IP адрес. [String](../../sql-reference/data-types/string.md). 
@@ -343,7 +343,7 @@ isIPv4String(string)
 Запрос:
 
 ```sql
-SELECT addr, isIPv4String(addr) FROM ( SELECT ['0.0.0.0', '127.0.0.1', '::ffff:127.0.0.1'] AS addr ) ARRAY JOIN addr
+SELECT addr, isIPv4String(addr) FROM ( SELECT ['0.0.0.0', '127.0.0.1', '::ffff:127.0.0.1'] AS addr ) ARRAY JOIN addr;
 ```
 
 Результат:
@@ -366,7 +366,7 @@ SELECT addr, isIPv4String(addr) FROM ( SELECT ['0.0.0.0', '127.0.0.1', '::ffff:1
 isIPv6String(string)
 ```
 
-**Параметры**
+**Аргументы**
 
 - `string` — IP адрес. [String](../../sql-reference/data-types/string.md).
 
@@ -381,7 +381,7 @@ isIPv6String(string)
 Запрос:
 
 ``` sql
-SELECT addr, isIPv6String(addr) FROM ( SELECT ['::', '1111::ffff', '::ffff:127.0.0.1', '127.0.0.1'] AS addr ) ARRAY JOIN addr
+SELECT addr, isIPv6String(addr) FROM ( SELECT ['::', '1111::ffff', '::ffff:127.0.0.1', '127.0.0.1'] AS addr ) ARRAY JOIN addr;
 ```
 
 Результат:
@@ -395,4 +395,54 @@ SELECT addr, isIPv6String(addr) FROM ( SELECT ['::', '1111::ffff', '::ffff:127.0
 └──────────────────┴────────────────────┘
 ```
 
-[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/functions/ip_address_functions/)
+## isIPAddressInRange {#isipaddressinrange}
+
+Проверяет, попадает ли IP адрес в интервал, заданный в [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) нотации.
+
+**Синтаксис**
+
+``` sql
+isIPAddressInRange(address, prefix)
+```
+
+Функция принимает IPv4 или IPv6 адрес в виде строки. Возвращает `0`, если версия адреса и интервала не совпадают.
+
+**Аргументы**
+
+- `address` — IPv4 или IPv6 адрес. [String](../../sql-reference/data-types/string.md).
+- `prefix` — IPv4 или IPv6 подсеть, заданная в CIDR нотации. [String](../../sql-reference/data-types/string.md).
+
+**Возвращаемое значение**
+
+- `1` или `0`.
+
+Тип: [UInt8](../../sql-reference/data-types/int-uint.md).
+
+**Примеры**
+
+Запрос:
+
+``` sql
+SELECT isIPAddressInRange('127.0.0.1', '127.0.0.0/8');
+```
+
+Результат:
+
+``` text
+┌─isIPAddressInRange('127.0.0.1', '127.0.0.0/8')─┐
+│                                              1 │
+└────────────────────────────────────────────────┘
+```
+
+Запрос:
+
+``` sql
+SELECT isIPAddressInRange('127.0.0.1', 'ffff::/16');
+```
+
+Результат:
+
+``` text
+┌─isIPAddressInRange('127.0.0.1', 'ffff::/16')─┐
+│                                            0 │
+└──────────────────────────────────────────────┘
+```
diff --git a/docs/ru/sql-reference/functions/json-functions.md b/docs/ru/sql-reference/functions/json-functions.md
index 69b8f8f98f5..4de487c03ad 100644
--- a/docs/ru/sql-reference/functions/json-functions.md
+++ b/docs/ru/sql-reference/functions/json-functions.md
@@ -16,51 +16,65 @@ toc_title: JSON
 
 ## visitParamHas(params, name) {#visitparamhasparams-name}
 
-Проверить наличие поля с именем name.
+Проверяет наличие поля с именем `name`.
+
+Алиас: `simpleJSONHas`.
 
 ## visitParamExtractUInt(params, name) {#visitparamextractuintparams-name}
 
-Распарсить UInt64 из значения поля с именем name. Если поле строковое - попытаться распарсить число из начала строки. Если такого поля нет, или если оно есть, но содержит не число, то вернуть 0.
+Пытается выделить число типа UInt64 из значения поля с именем `name`. Если поле строковое, пытается выделить число из начала строки. Если такого поля нет, или если оно есть, но содержит не число, то возвращает 0.
+
+Алиас: `simpleJSONExtractUInt`.
 
 ## visitParamExtractInt(params, name) {#visitparamextractintparams-name}
 
 Аналогично для Int64.
 
+Алиас: `simpleJSONExtractInt`.
+
 ## visitParamExtractFloat(params, name) {#visitparamextractfloatparams-name}
 
 Аналогично для Float64.
 
+Алиас: `simpleJSONExtractFloat`.
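+
+Например, поведение перечисленных функций можно проиллюстрировать небольшим наброском (JSON и значения в комментариях — условный пример, выведенный из описания функций выше, а не из эталонного вывода):
+
+``` sql
+SELECT
+    visitParamHas('{"num":123,"str":"5 apples"}', 'num'),         -- 1: поле `num` существует
+    visitParamExtractUInt('{"num":123,"str":"5 apples"}', 'num'), -- 123
+    visitParamExtractUInt('{"num":123,"str":"5 apples"}', 'str'), -- 5: число выделяется из начала строки
+    visitParamExtractFloat('{"num":123,"str":"5 apples"}', 'pi'); -- 0: поля `pi` нет
+```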
+
 ## visitParamExtractBool(params, name) {#visitparamextractboolparams-name}
 
-Распарсить значение true/false. Результат - UInt8.
+Пытается выделить значение true/false. Результат — UInt8.
+
+Алиас: `simpleJSONExtractBool`.
 
 ## visitParamExtractRaw(params, name) {#visitparamextractrawparams-name}
 
-Вернуть значение поля, включая разделители.
+Возвращает значение поля, включая разделители.
+
+Алиас: `simpleJSONExtractRaw`.
 
 Примеры:
 
 ``` sql
-visitParamExtractRaw('{"abc":"\\n\\u0000"}', 'abc') = '"\\n\\u0000"'
-visitParamExtractRaw('{"abc":{"def":[1,2,3]}}', 'abc') = '{"def":[1,2,3]}'
+visitParamExtractRaw('{"abc":"\\n\\u0000"}', 'abc') = '"\\n\\u0000"';
+visitParamExtractRaw('{"abc":{"def":[1,2,3]}}', 'abc') = '{"def":[1,2,3]}';
 ```
 
 ## visitParamExtractString(params, name) {#visitparamextractstringparams-name}
 
-Распарсить строку в двойных кавычках. У значения убирается экранирование. Если убрать экранированные символы не удалось, то возвращается пустая строка.
+Разбирает строку в двойных кавычках. У значения убирается экранирование. Если убрать экранированные символы не удалось, то возвращается пустая строка.
+
+Алиас: `simpleJSONExtractString`.
 
 Примеры:
 
 ``` sql
-visitParamExtractString('{"abc":"\\n\\u0000"}', 'abc') = '\n\0'
-visitParamExtractString('{"abc":"\\u263a"}', 'abc') = '☺'
-visitParamExtractString('{"abc":"\\u263"}', 'abc') = ''
-visitParamExtractString('{"abc":"hello}', 'abc') = ''
+visitParamExtractString('{"abc":"\\n\\u0000"}', 'abc') = '\n\0';
+visitParamExtractString('{"abc":"\\u263a"}', 'abc') = '☺';
+visitParamExtractString('{"abc":"\\u263"}', 'abc') = '';
+visitParamExtractString('{"abc":"hello}', 'abc') = '';
 ```
 
-На данный момент, не поддерживаются записанные в формате `\uXXXX\uYYYY` кодовые точки не из basic multilingual plane (они переводятся не в UTF-8, а в CESU-8).
+На данный момент не поддерживаются записанные в формате `\uXXXX\uYYYY` кодовые точки не из basic multilingual plane (они переводятся не в UTF-8, а в CESU-8).
 
-Следующие функции используют [simdjson](https://github.com/lemire/simdjson) который разработан под более сложные требования для разбора JSON. Упомянутое выше предположение 2 по-прежнему применимо.
+Следующие функции используют [simdjson](https://github.com/lemire/simdjson), который разработан под более сложные требования для разбора JSON. Упомянутое выше допущение 2 по-прежнему применимо.
 
 ## isValidJSON(json) {#isvalidjsonjson}
 
@@ -211,7 +225,7 @@ SELECT JSONExtractKeysAndValues('{"x": {"a": 5, "b": 7, "c": 11}}', 'x', 'Int8')
 Пример:
 
 ``` sql
-SELECT JSONExtractRaw('{"a": "hello", "b": [-100, 200.0, 300]}', 'b') = '[-100, 200.0, 300]'
+SELECT JSONExtractRaw('{"a": "hello", "b": [-100, 200.0, 300]}', 'b') = '[-100, 200.0, 300]';
 ```
 
 ## JSONExtractArrayRaw(json\[, indices_or_keys\]…) {#jsonextractarrayrawjson-indices-or-keys}
 
@@ -223,7 +237,7 @@ SELECT JSONExtractRaw('{"a": "hello", "b": [-100, 200.0, 300]}', 'b') = '[-100,
 Пример:
 
 ``` sql
-SELECT JSONExtractArrayRaw('{"a": "hello", "b": [-100, 200.0, "hello"]}', 'b') = ['-100', '200.0', '"hello"']'
+SELECT JSONExtractArrayRaw('{"a": "hello", "b": [-100, 200.0, "hello"]}', 'b') = ['-100', '200.0', '"hello"'];
 ```
 
 ## JSONExtractKeysAndValuesRaw {#json-extract-keys-and-values-raw}
 
@@ -236,29 +250,28 @@ JSONExtractKeysAndValuesRaw(json[, p, a, t, h])
 ```
 
-**Параметры**
+**Аргументы**
 
-- `json` — [Строка](../data-types/string.md), содержащая валидный JSON.
-- `p, a, t, h` — Индексы или ключи, разделенные запятыми, которые указывают путь к внутреннему полю во вложенном объекте JSON. Каждый аргумент может быть либо [строкой](../data-types/string.md) для получения поля по ключу, либо [целым числом](../data-types/int-uint.md) для получения N-го поля (индексирование начинается с 1, отрицательные числа используются для отсчета с конца). Если параметр не задан, весь JSON парсится как объект верхнего уровня. Необязательный параметр.
+- `json` — [строка](../data-types/string.md), содержащая валидный JSON.
+- `p, a, t, h` — индексы или ключи, разделенные запятыми, которые указывают путь к внутреннему полю во вложенном объекте JSON. Каждый аргумент может быть либо [строкой](../data-types/string.md) для получения поля по ключу, либо [целым числом](../data-types/int-uint.md) для получения N-го поля (индексирование начинается с 1, отрицательные числа используются для отсчета с конца). Если параметр не задан, весь JSON парсится как объект верхнего уровня. Необязательный параметр.
 
 **Возвращаемые значения**
 
-- Массив с кортежами `('key', 'value')`. Члены кортежа — строки.
+- Массив с кортежами `('key', 'value')`. Члены кортежа — строки.
 
-- Пустой массив, если заданный объект не существует или входные данные не валидный JSON.
+- Пустой массив, если заданный объект не существует или входные данные не валидный JSON.
 
-Тип: Type: [Array](../data-types/array.md)([Tuple](../data-types/tuple.md)([String](../data-types/string.md), [String](../data-types/string.md)).
-.
+Тип: [Array](../data-types/array.md)([Tuple](../data-types/tuple.md)([String](../data-types/string.md), [String](../data-types/string.md))).
 
 **Примеры**
 
 Запрос:
 
 ``` sql
-SELECT JSONExtractKeysAndValuesRaw('{"a": [-100, 200.0], "b":{"c": {"d": "hello", "f": "world"}}}')
+SELECT JSONExtractKeysAndValuesRaw('{"a": [-100, 200.0], "b":{"c": {"d": "hello", "f": "world"}}}');
 ```
 
-Ответ:
+Результат:
 
 ``` text
 ┌─JSONExtractKeysAndValuesRaw('{"a": [-100, 200.0], "b":{"c": {"d": "hello", "f": "world"}}}')─┐
@@ -269,10 +282,10 @@ SELECT JSONExtractKeysAndValuesRaw('{"a": [-100, 200.0], "b":{"c": {"d": "hello"
 Запрос:
 
 ``` sql
-SELECT JSONExtractKeysAndValuesRaw('{"a": [-100, 200.0], "b":{"c": {"d": "hello", "f": "world"}}}', 'b')
+SELECT JSONExtractKeysAndValuesRaw('{"a": [-100, 200.0], "b":{"c": {"d": "hello", "f": "world"}}}', 'b');
 ```
 
-Ответ:
+Результат:
 
 ``` text
 ┌─JSONExtractKeysAndValuesRaw('{"a": [-100, 200.0], "b":{"c": {"d": "hello", "f": "world"}}}', 'b')─┐
@@ -283,15 +296,13 @@ SELECT JSONExtractKeysAndValuesRaw('{"a": [-100, 200.0], "b":{"c": {"d": "hello"
 Запрос:
 
 ``` sql
-SELECT JSONExtractKeysAndValuesRaw('{"a": [-100, 200.0], "b":{"c": {"d": "hello", "f": "world"}}}', -1, 'c')
+SELECT JSONExtractKeysAndValuesRaw('{"a": [-100, 200.0], "b":{"c": {"d": "hello", "f": "world"}}}', -1, 'c');
 ```
 
-Ответ:
+Результат:
 
 ``` text
 ┌─JSONExtractKeysAndValuesRaw('{"a": [-100, 200.0], "b":{"c": {"d": "hello", "f": "world"}}}', -1, 'c')─┐
 │ [('d','"hello"'),('f','"world"')]                                                                     │
 └───────────────────────────────────────────────────────────────────────────────────────────────────────┘
 ```
-
-[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/functions/json_functions/)
diff --git a/docs/ru/sql-reference/functions/logical-functions.md b/docs/ru/sql-reference/functions/logical-functions.md
index 2d71c60a509..8566657d2eb 100644
--- a/docs/ru/sql-reference/functions/logical-functions.md
+++ b/docs/ru/sql-reference/functions/logical-functions.md
@@ -17,4 +17,3 @@ toc_title: "Логические функции"
 
 ## xor {#xor}
-[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/functions/logical_functions/) diff --git a/docs/ru/sql-reference/functions/machine-learning-functions.md b/docs/ru/sql-reference/functions/machine-learning-functions.md index decbff56646..a1716eed6c2 100644 --- a/docs/ru/sql-reference/functions/machine-learning-functions.md +++ b/docs/ru/sql-reference/functions/machine-learning-functions.md @@ -27,7 +27,7 @@ toc_title: "Функции машинного обучения" bayesAB(distribution_name, higher_is_better, variant_names, x, y) ``` -**Параметры** +**Аргументы** - `distribution_name` — вероятностное распределение. [String](../../sql-reference/data-types/string.md). Возможные значения: @@ -36,14 +36,14 @@ bayesAB(distribution_name, higher_is_better, variant_names, x, y) - `higher_is_better` — способ определения предпочтений. [Boolean](../../sql-reference/data-types/boolean.md). Возможные значения: - - `0` - чем меньше значение, тем лучше - - `1` - чем больше значение, тем лучше + - `0` — чем меньше значение, тем лучше + - `1` — чем больше значение, тем лучше -- `variant_names` - массив, содержащий названия вариантов. [Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md)). +- `variant_names` — массив, содержащий названия вариантов. [Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md)). -- `x` - массив, содержащий число проведенных тестов (испытаний) для каждого варианта. [Array](../../sql-reference/data-types/array.md)([Float64](../../sql-reference/data-types/float.md)). +- `x` — массив, содержащий число проведенных тестов (испытаний) для каждого варианта. [Array](../../sql-reference/data-types/array.md)([Float64](../../sql-reference/data-types/float.md)). -- `y` - массив, содержащий число успешных тестов (испытаний) для каждого варианта. [Array](../../sql-reference/data-types/array.md)([Float64](../../sql-reference/data-types/float.md)). +- `y` — массив, содержащий число успешных тестов (испытаний) для каждого варианта. [Array](../../sql-reference/data-types/array.md)([Float64](../../sql-reference/data-types/float.md)). !!! note "Замечание" Все три массива должны иметь одинаковый размер. Все значения `x` и `y` должны быть неотрицательными числами (константами). Значение `y` не может превышать соответствующее значение `x`. @@ -51,8 +51,8 @@ bayesAB(distribution_name, higher_is_better, variant_names, x, y) **Возвращаемые значения** Для каждого варианта рассчитываются: -- `beats_control` - вероятность, что данный вариант превосходит контрольный в долгосрочной перспективе -- `to_be_best` - вероятность, что данный вариант является лучшим в долгосрочной перспективе +- `beats_control` — вероятность, что данный вариант превосходит контрольный в долгосрочной перспективе +- `to_be_best` — вероятность, что данный вариант является лучшим в долгосрочной перспективе Тип: JSON. diff --git a/docs/ru/sql-reference/functions/math-functions.md b/docs/ru/sql-reference/functions/math-functions.md index d06fe267f5e..da075e922cd 100644 --- a/docs/ru/sql-reference/functions/math-functions.md +++ b/docs/ru/sql-reference/functions/math-functions.md @@ -54,7 +54,7 @@ toc_title: "Математические функции" Пример (правило трёх сигм): ``` sql -SELECT erf(3 / sqrt(2)) +SELECT erf(3 / sqrt(2)); ``` ``` text @@ -113,7 +113,7 @@ SELECT erf(3 / sqrt(2)) cosh(x) ``` -**Параметры** +**Аргументы** - `x` — угол в радианах. Значения из интервала: `-∞ < x < +∞`. [Float64](../../sql-reference/data-types/float.md#float32-float64). 
@@ -149,7 +149,7 @@ SELECT cosh(0);
 acosh(x)
 ```
 
-**Параметры**
+**Аргументы**
 
 - `x` — гиперболический косинус угла. Значения из интервала: `1 <= x < +∞`. [Float64](../../sql-reference/data-types/float.md#float32-float64).
 
@@ -189,7 +189,7 @@ SELECT acosh(1);
 sinh(x)
 ```
 
-**Параметры**
+**Аргументы**
 
 - `x` — угол в радианах. Значения из интервала: `-∞ < x < +∞`. [Float64](../../sql-reference/data-types/float.md#float32-float64).
 
@@ -225,7 +225,7 @@ SELECT sinh(0);
 asinh(x)
 ```
 
-**Параметры**
+**Аргументы**
 
 - `x` — гиперболический синус угла. Значения из интервала: `-∞ < x < +∞`. [Float64](../../sql-reference/data-types/float.md#float32-float64).
 
@@ -265,7 +265,7 @@ SELECT asinh(0);
 atanh(x)
 ```
 
-**Параметры**
+**Аргументы**
 
 - `x` — гиперболический тангенс угла. Значения из интервала: `–1 < x < 1`. [Float64](../../sql-reference/data-types/float.md#float32-float64).
 
@@ -301,7 +301,7 @@ SELECT atanh(0);
 atan2(y, x)
 ```
 
-**Параметры**
+**Аргументы**
 
 - `y` — координата y точки, в которую проведена линия. [Float64](../../sql-reference/data-types/float.md#float32-float64).
 - `x` — координата х точки, в которую проведена линия. [Float64](../../sql-reference/data-types/float.md#float32-float64).
 
@@ -338,7 +338,7 @@ SELECT atan2(1, 1);
 hypot(x, y)
 ```
 
-**Параметры**
+**Аргументы**
 
 - `x` — первый катет прямоугольного треугольника. [Float64](../../sql-reference/data-types/float.md#float32-float64).
 - `y` — второй катет прямоугольного треугольника. [Float64](../../sql-reference/data-types/float.md#float32-float64).
 
@@ -375,7 +375,7 @@ SELECT hypot(1, 1);
 log1p(x)
 ```
 
-**Параметры**
+**Аргументы**
 
 - `x` — значения из интервала: `-1 < x < +∞`. [Float64](../../sql-reference/data-types/float.md#float32-float64).
 
@@ -405,4 +405,66 @@ SELECT log1p(0);
 
 - [log(x)](../../sql-reference/functions/math-functions.md#logx)
 
-[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/functions/math_functions/)
+## sign(x) {#signx}
+
+Возвращает знак действительного числа.
+
+**Синтаксис**
+
+``` sql
+sign(x)
+```
+
+**Аргумент**
+
+- `x` — значения от `-∞` до `+∞`. Любой числовой тип, поддерживаемый ClickHouse.
+
+**Возвращаемое значение**
+
+- -1, если `x < 0`
+- 0, если `x = 0`
+- 1, если `x > 0`
+
+**Примеры**
+
+Результат sign() для нуля:
+
+``` sql
+SELECT sign(0);
+```
+
+Результат:
+
+``` text
+┌─sign(0)─┐
+│       0 │
+└─────────┘
+```
+
+Результат sign() для положительного аргумента:
+
+``` sql
+SELECT sign(1);
+```
+
+Результат:
+
+``` text
+┌─sign(1)─┐
+│       1 │
+└─────────┘
+```
+
+Результат sign() для отрицательного аргумента:
+
+``` sql
+SELECT sign(-1);
+```
+
+Результат:
+
+``` text
+┌─sign(-1)─┐
+│       -1 │
+└──────────┘
+```
diff --git a/docs/ru/sql-reference/functions/other-functions.md b/docs/ru/sql-reference/functions/other-functions.md
index 595d2458ca9..84bbc6af968 100644
--- a/docs/ru/sql-reference/functions/other-functions.md
+++ b/docs/ru/sql-reference/functions/other-functions.md
@@ -16,16 +16,16 @@ toc_title: "Прочие функции"
 
 **Синтаксис**
 
 ```sql
-getMacro(name);
+getMacro(name)
 ```
 
-**Параметры**
+**Аргументы**
 
-- `name` — Имя, которое необходимо получить из секции `macros`. [String](../../sql-reference/data-types/string.md#string).
+- `name` — имя, которое необходимо получить из секции `macros`. [String](../../sql-reference/data-types/string.md#string).
 
 **Возвращаемое значение**
 
-- Значение по указанному имени.
+- Значение по указанному имени.
 
 Тип: [String](../../sql-reference/data-types/string.md).
@@ -66,7 +66,6 @@ WHERE macro = 'test' └───────┴──────────────┘ ``` - ## FQDN {#fqdn} Возвращает полное имя домена. @@ -74,7 +73,7 @@ WHERE macro = 'test' **Синтаксис** ``` sql -fqdn(); +fqdn() ``` Эта функция регистронезависимая. @@ -93,7 +92,7 @@ fqdn(); SELECT FQDN(); ``` -Ответ: +Результат: ``` text ┌─FQDN()──────────────────────────┐ @@ -109,9 +108,9 @@ SELECT FQDN(); basename( expr ) ``` -**Параметры** +**Аргументы** -- `expr` — Выражение, возвращающее значение типа [String](../../sql-reference/functions/other-functions.md). В результирующем значении все бэкслэши должны быть экранированы. +- `expr` — выражение, возвращающее значение типа [String](../../sql-reference/functions/other-functions.md). В результирующем значении все бэкслэши должны быть экранированы. **Возвращаемое значение** @@ -126,7 +125,7 @@ basename( expr ) **Пример** ``` sql -SELECT 'some/long/path/to/file' AS a, basename(a) +SELECT 'some/long/path/to/file' AS a, basename(a); ``` ``` text @@ -136,7 +135,7 @@ SELECT 'some/long/path/to/file' AS a, basename(a) ``` ``` sql -SELECT 'some\\long\\path\\to\\file' AS a, basename(a) +SELECT 'some\\long\\path\\to\\file' AS a, basename(a); ``` ``` text @@ -146,7 +145,7 @@ SELECT 'some\\long\\path\\to\\file' AS a, basename(a) ``` ``` sql -SELECT 'some-file-name' AS a, basename(a) +SELECT 'some-file-name' AS a, basename(a); ``` ``` text @@ -193,7 +192,7 @@ SELECT visibleWidth(NULL) byteSize(argument [, ...]) ``` -**Параметры** +**Аргументы** - `argument` — значение. @@ -246,7 +245,7 @@ INSERT INTO test VALUES(1, 8, 16, 32, 64, -8, -16, -32, -64, 32.32, 64.64); SELECT key, byteSize(u8) AS `byteSize(UInt8)`, byteSize(u16) AS `byteSize(UInt16)`, byteSize(u32) AS `byteSize(UInt32)`, byteSize(u64) AS `byteSize(UInt64)`, byteSize(i8) AS `byteSize(Int8)`, byteSize(i16) AS `byteSize(Int16)`, byteSize(i32) AS `byteSize(Int32)`, byteSize(i64) AS `byteSize(Int64)`, byteSize(f32) AS `byteSize(Float32)`, byteSize(f64) AS `byteSize(Float64)` FROM test ORDER BY key ASC FORMAT Vertical; ``` -Result: +Результат: ``` text Row 1: @@ -324,7 +323,7 @@ SELECT currentUser(); SELECT currentUser(); ``` -Ответ: +Результат: ``` text ┌─currentUser()─┐ @@ -346,14 +345,14 @@ SELECT currentUser(); isConstant(x) ``` -**Параметры** +**Аргументы** -- `x` — Выражение для проверки. +- `x` — выражение для проверки. **Возвращаемые значения** -- `1` — Выражение `x` является константным. -- `0` — Выражение `x` не является константным. +- `1` — выражение `x` является константным. +- `0` — выражение `x` не является константным. Тип: [UInt8](../data-types/int-uint.md). @@ -362,7 +361,7 @@ isConstant(x) Запрос: ```sql -SELECT isConstant(x + 1) FROM (SELECT 43 AS x) +SELECT isConstant(x + 1) FROM (SELECT 43 AS x); ``` Результат: @@ -376,7 +375,7 @@ SELECT isConstant(x + 1) FROM (SELECT 43 AS x) Запрос: ```sql -WITH 3.14 AS pi SELECT isConstant(cos(pi)) +WITH 3.14 AS pi SELECT isConstant(cos(pi)); ``` Результат: @@ -413,10 +412,10 @@ SELECT isConstant(number) FROM numbers(1) ifNotFinite(x,y) -**Параметры** +**Аргументы** -- `x` — Значение, которое нужно проверить на бесконечность. Тип: [Float\*](../../sql-reference/functions/other-functions.md). -- `y` — Запасное значение. Тип: [Float\*](../../sql-reference/functions/other-functions.md). +- `x` — значение, которое нужно проверить на бесконечность. Тип: [Float\*](../../sql-reference/functions/other-functions.md). +- `y` — запасное значение. Тип: [Float\*](../../sql-reference/functions/other-functions.md). 
**Возвращаемые значения** @@ -458,7 +457,7 @@ SELECT isConstant(number) FROM numbers(1) `bar(x, min, max, width)` рисует полосу ширины пропорциональной `(x - min)` и равной `width` символов при `x = max`. -Параметры: +Аргументы: - `x` — Величина для отображения. - `min, max` — Целочисленные константы, значение должно помещаться в `Int64`. @@ -673,13 +672,13 @@ neighbor(column, offset[, default_value]) Функция может получить доступ к значению в столбце соседней строки только внутри обрабатываемого в данный момент блока данных. Порядок строк, используемый при вычислении функции `neighbor`, может отличаться от порядка строк, возвращаемых пользователю. -Чтобы этого не случилось, вы можете сделать подзапрос с [ORDER BY](../../sql-reference/statements/select/order-by.md) и вызвать функцию изне подзапроса. +Чтобы этого не случилось, вы можете сделать подзапрос с [ORDER BY](../../sql-reference/statements/select/order-by.md) и вызвать функцию извне подзапроса. -**Параметры** +**Аргументы** -- `column` — Имя столбца или скалярное выражение. -- `offset` - Смещение от текущей строки `column`. [Int64](../../sql-reference/functions/other-functions.md). -- `default_value` - Опциональный параметр. Значение, которое будет возвращено, если смещение выходит за пределы блока данных. +- `column` — имя столбца или скалярное выражение. +- `offset` — смещение от текущей строки `column`. [Int64](../../sql-reference/functions/other-functions.md). +- `default_value` — опциональный параметр. Значение, которое будет возвращено, если смещение выходит за пределы блока данных. **Возвращаемое значение** @@ -696,7 +695,7 @@ neighbor(column, offset[, default_value]) SELECT number, neighbor(number, 2) FROM system.numbers LIMIT 10; ``` -Ответ: +Результат: ``` text ┌─number─┬─neighbor(number, 2)─┐ @@ -719,7 +718,7 @@ SELECT number, neighbor(number, 2) FROM system.numbers LIMIT 10; SELECT number, neighbor(number, 2, 999) FROM system.numbers LIMIT 10; ``` -Ответ: +Результат: ``` text ┌─number─┬─neighbor(number, 2, 999)─┐ @@ -750,7 +749,7 @@ SELECT FROM numbers(16) ``` -Ответ: +Результат: ``` text ┌──────month─┬─money─┬─prev_year─┬─year_over_year─┐ @@ -773,7 +772,7 @@ FROM numbers(16) └────────────┴───────┴───────────┴────────────────┘ ``` -## runningDifference(x) {#runningdifferencex} +## runningDifference(x) {#other_functions-runningdifference} Считает разницу между последовательными значениями строк в блоке данных. Возвращает 0 для первой строки и разницу с предыдущей строкой для каждой последующей строки. @@ -850,7 +849,64 @@ WHERE diff != 1 ## runningDifferenceStartingWithFirstValue {#runningdifferencestartingwithfirstvalue} -То же, что и \[runningDifference\] (./other_functions.md # other_functions-runningdifference), но в первой строке возвращается значение первой строки, а не ноль. +То же, что и [runningDifference](./other-functions.md#other_functions-runningdifference), но в первой строке возвращается значение первой строки, а не ноль. + +## runningConcurrency {#runningconcurrency} + +Подсчитывает количество одновременно идущих событий. +У каждого события есть время начала и время окончания. Считается, что время начала включено в событие, а время окончания исключено из него. Столбцы со временем начала и окончания событий должны иметь одинаковый тип данных. +Функция подсчитывает количество событий, происходящих одновременно на момент начала каждого из событий в выборке. + +!!! warning "Предупреждение" + События должны быть отсортированы по возрастанию времени начала. 
Если это требование нарушено, то функция вызывает исключение.
+    Каждый блок данных обрабатывается независимо. Если события из разных блоков данных накладываются по времени, они не могут быть корректно обработаны.
+
+**Синтаксис**
+
+``` sql
+runningConcurrency(start, end)
+```
+
+**Аргументы**
+
+- `start` — столбец с временем начала событий. [Date](../../sql-reference/data-types/date.md), [DateTime](../../sql-reference/data-types/datetime.md) или [DateTime64](../../sql-reference/data-types/datetime64.md).
+- `end` — столбец с временем окончания событий. [Date](../../sql-reference/data-types/date.md), [DateTime](../../sql-reference/data-types/datetime.md) или [DateTime64](../../sql-reference/data-types/datetime64.md).
+
+**Возвращаемое значение**
+
+- Количество одновременно идущих событий на момент начала каждого события.
+
+Тип: [UInt32](../../sql-reference/data-types/int-uint.md).
+
+**Пример**
+
+Рассмотрим таблицу:
+
+``` text
+┌──────start─┬────────end─┐
+│ 2021-03-03 │ 2021-03-11 │
+│ 2021-03-06 │ 2021-03-12 │
+│ 2021-03-07 │ 2021-03-08 │
+│ 2021-03-11 │ 2021-03-12 │
+└────────────┴────────────┘
+```
+
+Запрос:
+
+``` sql
+SELECT start, runningConcurrency(start, end) FROM example_table;
+```
+
+Результат:
+
+``` text
+┌──────start─┬─runningConcurrency(start, end)─┐
+│ 2021-03-03 │                              1 │
+│ 2021-03-06 │                              2 │
+│ 2021-03-07 │                              3 │
+│ 2021-03-11 │                              2 │
+└────────────┴────────────────────────────────┘
+```
 
 ## MACNumToString(num) {#macnumtostringnum}
 
@@ -872,9 +928,9 @@ WHERE diff != 1
 getSizeOfEnumType(value)
 ```
 
-**Параметры**
+**Аргументы**
 
-- `value` — Значение типа `Enum`.
+- `value` — значение типа `Enum`.
 
 **Возвращаемые значения**
 
@@ -901,9 +957,9 @@ SELECT getSizeOfEnumType( CAST('a' AS Enum8('a' = 1, 'b' = 2) ) ) AS x
 blockSerializedSize(value[, value[, ...]])
 ```
 
-**Параметры**
+**Аргументы**
 
-- `value` — Значение произвольного типа.
+- `value` — значение произвольного типа.
 
 **Возвращаемые значения**
 
@@ -933,9 +989,9 @@ SELECT blockSerializedSize(maxState(1)) as x
 toColumnTypeName(value)
 ```
 
-**Параметры**
+**Аргументы**
 
-- `value` — Значение произвольного типа.
+- `value` — значение произвольного типа.
 
 **Возвращаемые значения**
 
@@ -973,9 +1029,9 @@ SELECT toColumnTypeName(CAST('2018-01-01 01:02:03' AS DateTime))
 dumpColumnStructure(value)
 ```
 
-**Параметры**
+**Аргументы**
 
-- `value` — Значение произвольного типа.
+- `value` — значение произвольного типа.
 
 **Возвращаемые значения**
 
@@ -1003,9 +1059,9 @@ SELECT dumpColumnStructure(CAST('2018-01-01 01:02:03', 'DateTime'))
 defaultValueOfArgumentType(expression)
 ```
 
-**Параметры**
+**Аргументы**
 
-- `expression` — Значение произвольного типа или выражение, результатом которого является значение произвольного типа.
+- `expression` — значение произвольного типа или выражение, результатом которого является значение произвольного типа.
 
 **Возвращаемые значения**
 
@@ -1045,7 +1101,7 @@ SELECT defaultValueOfArgumentType( CAST(1 AS Nullable(Int8) ) )
 defaultValueOfTypeName(type)
 ```
 
-**Параметры:**
+**Аргументы**
 
 - `type` — тип данных.
 
@@ -1077,6 +1133,111 @@ SELECT defaultValueOfTypeName('Nullable(Int8)')
 └──────────────────────────────────────────┘
 ```
 
+## indexHint {#indexhint}
+
+Возвращает все данные из диапазона, в который попадают данные, соответствующие указанному выражению.
+Переданное выражение не будет вычислено. Выбор диапазона производится по индексу.
+Индекс в ClickHouse разреженный, при чтении диапазона в ответ попадают «лишние» соседние данные.
+
+**Синтаксис**
+
+```sql
+SELECT * FROM table WHERE indexHint(<expression>)
+```
+
+**Возвращаемое значение**
+
+Возвращает диапазон индекса, в котором выполняется заданное условие.
+
+Тип: [UInt8](../../sql-reference/data-types/int-uint.md#uint-ranges).
+
+**Пример**
+
+Рассмотрим пример с использованием тестовых данных таблицы [ontime](../../getting-started/example-datasets/ontime.md).
+
+Исходная таблица:
+
+```sql
+SELECT count() FROM ontime
+```
+
+```text
+┌─count()─┐
+│ 4276457 │
+└─────────┘
+```
+
+В таблице есть индексы по полям `(FlightDate, (Year, FlightDate))`.
+
+Выполним выборку по дате, где индекс не используется.
+
+Запрос:
+
+```sql
+SELECT FlightDate AS k, count() FROM ontime GROUP BY k ORDER BY k
+```
+
+ClickHouse обработал всю таблицу (`Processed 4.28 million rows`).
+
+Результат:
+
+```text
+┌──────────k─┬─count()─┐
+│ 2017-01-01 │   13970 │
+│ 2017-01-02 │   15882 │
+........................
+│ 2017-09-28 │   16411 │
+│ 2017-09-29 │   16384 │
+│ 2017-09-30 │   12520 │
+└────────────┴─────────┘
+```
+
+Для подключения индекса выбираем конкретную дату.
+
+Запрос:
+
+```sql
+SELECT FlightDate AS k, count() FROM ontime WHERE k = '2017-09-15' GROUP BY k ORDER BY k
+```
+
+При использовании индекса ClickHouse обработал значительно меньшее количество строк (`Processed 32.74 thousand rows`).
+
+Результат:
+
+```text
+┌──────────k─┬─count()─┐
+│ 2017-09-15 │   16428 │
+└────────────┴─────────┘
+```
+
+Передадим в функцию `indexHint` выражение `k = '2017-09-15'`.
+
+Запрос:
+
+```sql
+SELECT
+    FlightDate AS k,
+    count()
+FROM ontime
+WHERE indexHint(k = '2017-09-15')
+GROUP BY k
+ORDER BY k ASC
+```
+
+ClickHouse применил индекс по аналогии с примером выше (`Processed 32.74 thousand rows`).
+Выражение `k = '2017-09-15'` не используется при формировании результата.
+Функция `indexHint` позволяет увидеть соседние данные.
+
+Результат:
+
+```text
+┌──────────k─┬─count()─┐
+│ 2017-09-14 │    7071 │
+│ 2017-09-15 │   16428 │
+│ 2017-09-16 │    1077 │
+│ 2017-09-30 │    8167 │
+└────────────┴─────────┘
+```
+
 ## replicate {#other-functions-replicate}
 
 Создает массив, заполненный одним значением.
@@ -1087,10 +1248,10 @@ SELECT defaultValueOfTypeName('Nullable(Int8)')
 SELECT replicate(x, arr);
 ```
 
-**Параметры**
+**Аргументы**
 
-- `arr` — Исходный массив. ClickHouse создаёт новый массив такой же длины как исходный и заполняет его значением `x`.
-- `x` — Значение, которым будет заполнен результирующий массив.
+- `arr` — исходный массив. ClickHouse создаёт новый массив такой же длины как исходный и заполняет его значением `x`.
+- `x` — значение, которым будет заполнен результирующий массив.
 
 **Возвращаемое значение**
 
@@ -1170,7 +1331,7 @@ filesystemFree()
 SELECT formatReadableSize(filesystemFree()) AS "Free space", toTypeName(filesystemFree()) AS "Type";
 ```
 
-Ответ:
+Результат:
 
 ``` text
 ┌─Free space─┬─Type───┐
@@ -1202,7 +1363,7 @@ filesystemCapacity()
 SELECT formatReadableSize(filesystemCapacity()) AS "Capacity", toTypeName(filesystemCapacity()) AS "Type"
 ```
 
-Ответ:
+Результат:
 
 ``` text
 ┌─Capacity──┬─Type───┐
@@ -1220,7 +1381,7 @@ SELECT formatReadableSize(filesystemCapacity()) AS "Capacity", toTypeName(filesy
 finalizeAggregation(state)
 ```
 
-**Параметры**
+**Аргументы**
 
 - `state` — состояние агрегатной функции. [AggregateFunction](../../sql-reference/data-types/aggregatefunction.md#data-type-aggregatefunction).
@@ -1321,17 +1482,17 @@ FROM numbers(10); **Синтаксис** ```sql -runningAccumulate(agg_state[, grouping]); +runningAccumulate(agg_state[, grouping]) ``` -**Параметры** +**Аргументы** -- `agg_state` — Состояние агрегатной функции. [AggregateFunction](../../sql-reference/data-types/aggregatefunction.md#data-type-aggregatefunction). -- `grouping` — Ключ группировки. Опциональный параметр. Состояние функции обнуляется, если значение `grouping` меняется. Параметр может быть любого [поддерживаемого типа данных](../../sql-reference/data-types/index.md), для которого определен оператор равенства. +- `agg_state` — состояние агрегатной функции. [AggregateFunction](../../sql-reference/data-types/aggregatefunction.md#data-type-aggregatefunction). +- `grouping` — ключ группировки. Опциональный параметр. Состояние функции обнуляется, если значение `grouping` меняется. Параметр может быть любого [поддерживаемого типа данных](../../sql-reference/data-types/index.md), для которого определен оператор равенства. **Возвращаемое значение** -- Каждая результирующая строка содержит результат агрегатной функции, накопленный для всех входных строк от 0 до текущей позиции. `runningAccumulate` обнуляет состояния для каждого нового блока данных или при изменении значения `grouping`. +- Каждая результирующая строка содержит результат агрегатной функции, накопленный для всех входных строк от 0 до текущей позиции. `runningAccumulate` обнуляет состояния для каждого нового блока данных или при изменении значения `grouping`. Тип зависит от используемой агрегатной функции. @@ -1430,7 +1591,7 @@ FROM joinGet(join_storage_table_name, `value_column`, join_keys) ``` -**Параметры** +**Аргументы** - `join_storage_table_name` — [идентификатор](../syntax.md#syntax-identifiers), который указывает, откуда производится выборка данных. Поиск по идентификатору осуществляется в базе данных по умолчанию (см. конфигурацию `default_database`). Чтобы переопределить базу данных по умолчанию, используйте команду `USE db_name`, или укажите базу данных и таблицу через разделитель `db_name.db_table`, см. пример. - `value_column` — столбец, из которого нужно произвести выборку данных. @@ -1535,9 +1696,9 @@ SELECT identity(42) randomPrintableASCII(length) ``` -**Параметры** +**Аргументы** -- `length` — Длина результирующей строки. Положительное целое число. +- `length` — длина результирующей строки. Положительное целое число. Если передать `length < 0`, то поведение функции не определено. @@ -1571,7 +1732,7 @@ SELECT number, randomPrintableASCII(30) as str, length(str) FROM system.numbers randomString(length) ``` -**Параметры** +**Аргументы** - `length` — длина строки. Положительное целое число. @@ -1619,11 +1780,11 @@ len: 30 randomFixedString(length); ``` -**Параметры** +**Аргументы** -- `length` — Длина строки в байтах. [UInt64](../../sql-reference/data-types/int-uint.md). +- `length` — длина строки в байтах. [UInt64](../../sql-reference/data-types/int-uint.md). -**Returned value(s)** +**Возвращаемое значение** - Строка, заполненная случайными байтами. @@ -1653,12 +1814,12 @@ SELECT randomFixedString(13) as rnd, toTypeName(rnd) **Синтаксис** ``` sql -randomStringUTF8(length); +randomStringUTF8(length) ``` -**Параметры** +**Аргументы** -- `length` — Длина итоговой строки в кодовых точках. [UInt64](../../sql-reference/data-types/int-uint.md). +- `length` — длина итоговой строки в кодовых точках. [UInt64](../../sql-reference/data-types/int-uint.md). 
**Возвращаемое значение** @@ -1690,7 +1851,7 @@ SELECT randomStringUTF8(13) **Синтаксис** ```sql -getSetting('custom_setting'); +getSetting('custom_setting') ``` **Параметр** @@ -1728,7 +1889,7 @@ SELECT getSetting('custom_a'); isDecimalOverflow(d, [p]) ``` -**Параметры** +**Аргументы** - `d` — число. [Decimal](../../sql-reference/data-types/decimal.md). - `p` — точность. Необязательный параметр. Если опущен, используется исходная точность первого аргумента. Использование этого параметра может быть полезно для извлечения данных в другую СУБД или файл. [UInt8](../../sql-reference/data-types/int-uint.md#uint-ranges). @@ -1765,7 +1926,7 @@ SELECT isDecimalOverflow(toDecimal32(1000000000, 0), 9), countDigits(x) ``` -**Параметры** +**Аргументы** - `x` — [целое](../../sql-reference/data-types/int-uint.md#uint8-uint16-uint32-uint64-int8-int16-int32-int64) или [дробное](../../sql-reference/data-types/decimal.md) число. @@ -1824,7 +1985,7 @@ UNSUPPORTED_METHOD tcpPort() ``` -**Параметры** +**Аргументы** - Нет. @@ -1854,4 +2015,3 @@ SELECT tcpPort(); - [tcp_port](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-tcp_port) -[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/functions/other_functions/) diff --git a/docs/ru/sql-reference/functions/random-functions.md b/docs/ru/sql-reference/functions/random-functions.md index a09f5159309..bbf0affb081 100644 --- a/docs/ru/sql-reference/functions/random-functions.md +++ b/docs/ru/sql-reference/functions/random-functions.md @@ -31,9 +31,9 @@ toc_title: "Функции генерации псевдослучайных ч randConstant([x]) ``` -**Параметры** +**Аргументы** -- `x` — [Выражение](../syntax.md#syntax-expressions), возвращающее значение одного из [поддерживаемых типов данных](../data-types/index.md#data_types). Значение используется, чтобы избежать [склейки одинаковых выражений](index.md#common-subexpression-elimination), если функция вызывается несколько раз в одном запросе. Необязательный параметр. +- `x` — [выражение](../syntax.md#syntax-expressions), возвращающее значение одного из [поддерживаемых типов данных](../data-types/index.md#data_types). Значение используется, чтобы избежать [склейки одинаковых выражений](index.md#common-subexpression-elimination), если функция вызывается несколько раз в одном запросе. Необязательный параметр. **Возвращаемое значение** @@ -79,7 +79,7 @@ fuzzBits([s], [prob]) ``` Инвертирует каждый бит `s` с вероятностью `prob`. -**Параметры** +**Аргументы** - `s` — `String` or `FixedString` - `prob` — constant `Float32/64` @@ -107,4 +107,3 @@ FROM numbers(3) └───────────────────────────────────────┘ ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/functions/random_functions/) diff --git a/docs/ru/sql-reference/functions/rounding-functions.md b/docs/ru/sql-reference/functions/rounding-functions.md index 704e7f5dd52..276f85bf6b7 100644 --- a/docs/ru/sql-reference/functions/rounding-functions.md +++ b/docs/ru/sql-reference/functions/rounding-functions.md @@ -33,10 +33,10 @@ N может быть отрицательным. round(expression [, decimal_places]) ``` -**Параметры:** +**Аргументы** -- `expression` — Число для округления. Может быть любым [выражением](../syntax.md#syntax-expressions), возвращающим числовой [тип данных](../../sql-reference/functions/rounding-functions.md#data_types). -- `decimal-places` — Целое значение. +- `expression` — число для округления. 
Может быть любым [выражением](../syntax.md#syntax-expressions), возвращающим числовой [тип данных](../../sql-reference/functions/rounding-functions.md#data_types).
+- `decimal-places` — целое значение.
    - Если `decimal-places > 0`, то функция округляет значение справа от запятой.
    - Если `decimal-places < 0`, то функция округляет значение слева от запятой.
    - Если `decimal-places = 0`, то функция округляет значение до целого. В этом случае аргумент можно опустить.

@@ -112,13 +112,13 @@ round(3.65, 1) = 3.6

roundBankers(expression [, decimal_places])
```

-**Параметры**
+**Аргументы**

-- `expression` — Число для округления. Может быть любым [выражением](../syntax.md#syntax-expressions), возвращающим числовой [тип данных](../../sql-reference/functions/rounding-functions.md#data_types).
-- `decimal-places` — Десятичный разряд. Целое число.
-    - `decimal-places > 0` — Функция округляет значение выражения до ближайшего чётного числа на соответствующей позиции справа от запятой. Например, `roundBankers(3.55, 1) = 3.6`.
-    - `decimal-places < 0` — Функция округляет значение выражения до ближайшего чётного числа на соответствующей позиции слева от запятой. Например, `roundBankers(24.55, -1) = 20`.
-    - `decimal-places = 0` — Функция округляет значение до целого. В этом случае аргумент можно не передавать. Например, `roundBankers(2.5) = 2`.
+- `expression` — число для округления. Может быть любым [выражением](../syntax.md#syntax-expressions), возвращающим числовой [тип данных](../../sql-reference/functions/rounding-functions.md#data_types).
+- `decimal-places` — десятичный разряд. Целое число.
+    - `decimal-places > 0` — функция округляет значение выражения до ближайшего чётного числа на соответствующей позиции справа от запятой. Например, `roundBankers(3.55, 1) = 3.6`.
+    - `decimal-places < 0` — функция округляет значение выражения до ближайшего чётного числа на соответствующей позиции слева от запятой. Например, `roundBankers(24.55, -1) = 20`.
+    - `decimal-places = 0` — функция округляет значение до целого. В этом случае аргумент можно не передавать. Например, `roundBankers(2.5) = 2`.

**Возвращаемое значение**

@@ -177,4 +177,3 @@ roundBankers(10.755, 2) = 10,76

Принимает число. Если число меньше 18 - возвращает 0. Иначе округляет число вниз до чисел из набора: 18, 25, 35, 45, 55. Эта функция специфична для Яндекс.Метрики и предназначена для реализации отчёта по возрасту посетителей.

-[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/functions/rounding_functions/)
diff --git a/docs/ru/sql-reference/functions/splitting-merging-functions.md b/docs/ru/sql-reference/functions/splitting-merging-functions.md
index cacce5f4ba2..b8d04982b91 100644
--- a/docs/ru/sql-reference/functions/splitting-merging-functions.md
+++ b/docs/ru/sql-reference/functions/splitting-merging-functions.md
@@ -17,10 +17,10 @@ separator должен быть константной строкой из ро

splitByChar(<separator>, <s>)
```

-**Параметры**
+**Аргументы**

-- `separator` — Разделитель, состоящий из одного символа. [String](../../sql-reference/data-types/string.md).
-- `s` — Разбиваемая строка. [String](../../sql-reference/data-types/string.md).
+- `separator` — разделитель, состоящий из одного символа. [String](../../sql-reference/data-types/string.md).
+- `s` — разбиваемая строка. [String](../../sql-reference/data-types/string.md).

**Возвращаемые значения**

@@ -54,10 +54,10 @@ SELECT splitByChar(',', '1,2,3,abcde')

splitByString(separator, s)
```

-**Параметры**
+**Аргументы**

-- `separator` — Разделитель.
[String](../../sql-reference/data-types/string.md). -- `s` — Разбиваемая строка. [String](../../sql-reference/data-types/string.md). +- `separator` — разделитель. [String](../../sql-reference/data-types/string.md). +- `s` — разбиваемая строка. [String](../../sql-reference/data-types/string.md). **Возвращаемые значения** @@ -67,7 +67,7 @@ splitByString(separator, s) - Задано несколько последовательных разделителей; - Исходная строка `s` пуста. -Type: [Array](../../sql-reference/data-types/array.md) of [String](../../sql-reference/data-types/string.md). +Тип: [Array](../../sql-reference/data-types/array.md) of [String](../../sql-reference/data-types/string.md). **Примеры** @@ -115,4 +115,3 @@ SELECT alphaTokens('abca1abc') └─────────────────────────┘ ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/functions/splitting_merging_functions/) diff --git a/docs/ru/sql-reference/functions/string-functions.md b/docs/ru/sql-reference/functions/string-functions.md index 65a1cd63563..04af599c09a 100644 --- a/docs/ru/sql-reference/functions/string-functions.md +++ b/docs/ru/sql-reference/functions/string-functions.md @@ -70,19 +70,19 @@ toc_title: "Функции для работы со строками" Заменяет некорректные символы UTF-8 на символ `�` (U+FFFD). Все идущие подряд некорректные символы схлопываются в один заменяющий символ. ``` sql -toValidUTF8( input_string ) +toValidUTF8(input_string) ``` -Параметры: +**Аргументы** -- input_string — произвольный набор байтов, представленный как объект типа [String](../../sql-reference/functions/string-functions.md). +- `input_string` — произвольный набор байтов, представленный как объект типа [String](../../sql-reference/functions/string-functions.md). Возвращаемое значение: Корректная строка UTF-8. **Пример** ``` sql -SELECT toValidUTF8('\x61\xF0\x80\x80\x80b') +SELECT toValidUTF8('\x61\xF0\x80\x80\x80b'); ``` ``` text @@ -103,10 +103,10 @@ SELECT toValidUTF8('\x61\xF0\x80\x80\x80b') repeat(s, n) ``` -**Параметры** +**Аргументы** -- `s` — Строка для повторения. [String](../../sql-reference/functions/string-functions.md). -- `n` — Количество повторов. [UInt](../../sql-reference/functions/string-functions.md). +- `s` — строка для повторения. [String](../../sql-reference/functions/string-functions.md). +- `n` — количество повторов. [UInt](../../sql-reference/functions/string-functions.md). **Возвращаемое значение** @@ -119,10 +119,10 @@ repeat(s, n) Запрос: ``` sql -SELECT repeat('abc', 10) +SELECT repeat('abc', 10); ``` -Ответ: +Результат: ``` text ┌─repeat('abc', 10)──────────────┐ @@ -172,7 +172,7 @@ SELECT format('{} {}', 'Hello', 'World') concat(s1, s2, ...) ``` -**Параметры** +**Аргументы** Значения типа String или FixedString. @@ -187,10 +187,10 @@ concat(s1, s2, ...) Запрос: ``` sql -SELECT concat('Hello, ', 'World!') +SELECT concat('Hello, ', 'World!'); ``` -Ответ: +Результат: ``` text ┌─concat('Hello, ', 'World!')─┐ @@ -210,7 +210,7 @@ SELECT concat('Hello, ', 'World!') concatAssumeInjective(s1, s2, ...) ``` -**Параметры** +**Аргументы** Значения типа String или FixedString. 
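+
+Поясняющий набросок (результат в комментарии — ожидаемый): сама по себе `concat` не инъективна — разные пары аргументов могут дать одну и ту же строку, и при группировке такие ключи склеятся:
+
+``` sql
+-- обе конкатенации дают строку 'abc'
+SELECT concat('ab', 'c') = concat('a', 'bc') AS collision;
+
+-- collision = 1
+```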
@@ -242,10 +242,10 @@ SELECT * from key_val Запрос: ``` sql -SELECT concat(key1, key2), sum(value) FROM key_val GROUP BY (key1, key2) +SELECT concat(key1, key2), sum(value) FROM key_val GROUP BY (key1, key2); ``` -Ответ: +Результат: ``` text ┌─concat(key1, key2)─┬─sum(value)─┐ @@ -312,7 +312,7 @@ SELECT startsWith('Spider-Man', 'Spi'); SELECT startsWith('Hello, world!', 'He'); ``` -Ответ: +Результат: ``` text ┌─startsWith('Hello, world!', 'He')─┐ @@ -331,7 +331,7 @@ SELECT startsWith('Hello, world!', 'He'); trim([[LEADING|TRAILING|BOTH] trim_character FROM] input_string) ``` -**Параметры** +**Аргументы** - `trim_character` — один или несколько символов, подлежащие удалению. [String](../../sql-reference/functions/string-functions.md). - `input_string` — строка для обрезки. [String](../../sql-reference/functions/string-functions.md). @@ -347,10 +347,10 @@ trim([[LEADING|TRAILING|BOTH] trim_character FROM] input_string) Запрос: ``` sql -SELECT trim(BOTH ' ()' FROM '( Hello, world! )') +SELECT trim(BOTH ' ()' FROM '( Hello, world! )'); ``` -Ответ: +Результат: ``` text ┌─trim(BOTH ' ()' FROM '( Hello, world! )')─┐ @@ -370,7 +370,7 @@ trimLeft(input_string) Алиас: `ltrim(input_string)`. -**Параметры** +**Аргументы** - `input_string` — строка для обрезки. [String](../../sql-reference/functions/string-functions.md). @@ -385,10 +385,10 @@ trimLeft(input_string) Запрос: ``` sql -SELECT trimLeft(' Hello, world! ') +SELECT trimLeft(' Hello, world! '); ``` -Ответ: +Результат: ``` text ┌─trimLeft(' Hello, world! ')─┐ @@ -408,7 +408,7 @@ trimRight(input_string) Алиас: `rtrim(input_string)`. -**Параметры** +**Аргументы** - `input_string` — строка для обрезки. [String](../../sql-reference/functions/string-functions.md). @@ -423,10 +423,10 @@ trimRight(input_string) Запрос: ``` sql -SELECT trimRight(' Hello, world! ') +SELECT trimRight(' Hello, world! '); ``` -Ответ: +Результат: ``` text ┌─trimRight(' Hello, world! ')─┐ @@ -446,7 +446,7 @@ trimBoth(input_string) Алиас: `trim(input_string)`. -**Параметры** +**Аргументы** - `input_string` — строка для обрезки. [String](../../sql-reference/functions/string-functions.md). @@ -461,10 +461,10 @@ trimBoth(input_string) Запрос: ``` sql -SELECT trimBoth(' Hello, world! ') +SELECT trimBoth(' Hello, world! '); ``` -Ответ: +Результат: ``` text ┌─trimBoth(' Hello, world! ')─┐ @@ -494,14 +494,15 @@ SELECT trimBoth(' Hello, world! ') Заменяет литералы, последовательности литералов и сложные псевдонимы заполнителями. -**Синтаксис** +**Синтаксис** + ``` sql normalizeQuery(x) ``` -**Параметры** +**Аргументы** -- `x` — Последовательность символов. [String](../../sql-reference/data-types/string.md). +- `x` — последовательность символов. [String](../../sql-reference/data-types/string.md). **Возвращаемое значение** @@ -535,9 +536,9 @@ SELECT normalizeQuery('[1, 2, 3, x]') AS query; normalizedQueryHash(x) ``` -**Параметры** +**Аргументы** -- `x` — Последовательность символов. [String](../../sql-reference/data-types/string.md). +- `x` — последовательность символов. [String](../../sql-reference/data-types/string.md). **Возвращаемое значение** @@ -573,7 +574,7 @@ SELECT normalizedQueryHash('SELECT 1 AS `xyz`') != normalizedQueryHash('SELECT 1 encodeXMLComponent(x) ``` -**Параметры** +**Аргументы** - `x` — последовательность символов. [String](../../sql-reference/data-types/string.md). @@ -603,7 +604,6 @@ Hello, "world"! 'foo' ``` - ## decodeXMLComponent {#decode-xml-component} Заменяет символами предопределенные мнемоники XML: `"` `&` `'` `>` `<` @@ -615,7 +615,7 @@ Hello, "world"! 
decodeXMLComponent(x)
```

-**Параметры**
+**Аргументы**

- `x` — последовательность символов. [String](../../sql-reference/data-types/string.md).

@@ -645,4 +645,66 @@ SELECT decodeXMLComponent('&lt; &#x3A3; &gt;');

- [Мнемоники в HTML](https://ru.wikipedia.org/wiki/%D0%9C%D0%BD%D0%B5%D0%BC%D0%BE%D0%BD%D0%B8%D0%BA%D0%B8_%D0%B2_HTML)

-[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/functions/string_functions/)
+
+
+## extractTextFromHTML {#extracttextfromhtml}
+
+Функция для извлечения текста из HTML или XHTML.
+Она не соответствует всем HTML, XML или XHTML стандартам на 100%, но ее реализация достаточно точная и быстрая. Правила обработки следующие:
+
+1. Комментарии удаляются. Пример: `<!-- test -->`. Комментарий должен оканчиваться символами `-->`. Вложенные комментарии недопустимы.
+Примечание: конструкции наподобие `<!-->` и `<!--->` не являются допустимыми комментариями в HTML, но они будут удалены согласно другим правилам.
+2. Содержимое CDATA вставляется дословно. Примечание: формат CDATA специфичен для XML/XHTML. Но он обрабатывается всегда по принципу "наилучшего возможного результата".
+3. Элементы `script` и `style` удаляются вместе со всем содержимым. Примечание: предполагается, что закрывающий тег не может появиться внутри содержимого. Например, в JS строковый литерал должен быть экранирован как `"<\/script>"`.
+Примечание: комментарии и CDATA возможны внутри `script` или `style` - тогда закрывающие теги не ищутся внутри CDATA. Пример: `<script><![CDATA[</script>]]></script>`. Но они ищутся внутри комментариев. Иногда возникают сложные случаи: `<script>var x = "<!--"; </script> var y = "-->"; alert(x + y);</script>`
+Примечание: `script` и `style` могут быть названиями пространств имен XML - тогда они не обрабатываются как обычные элементы `script` или `style`. Пример: `<script:a>Hello</script:a>`.
+Примечание: пробелы возможны после имени закрывающего тега: `</script >`, но не перед ним: `< / script>`.
+4. Другие теги или элементы, подобные тегам, удаляются, а их внутреннее содержимое остается. Пример: `<a>.</a>`
+Примечание: ожидается, что такой HTML является недопустимым: `<a test=">"></a>`
+Примечание: функция также удаляет подобные тегам элементы: `<>`, `<!>`, и т. д.
+Примечание: если встречается тег без завершающего символа `>`, то удаляется этот тег и весь следующий за ним текст: `<hello   world`.
+5. HTML и XML мнемоники не декодируются. Они должны быть обработаны отдельной функцией.
+6. Пробелы в тексте удаляются или добавляются по специальным правилам.
+    - Пробелы в начале и в конце извлеченного текста удаляются.
+    - Несколько пробелов подряд заменяются одним пробелом.
+    - Если текст идет после других элементов, которые были удалены, пробел вставляется на их место. Пример: `Hello<!-- -->world`, `Hello<b>world</b>` — в HTML нет пробелов, но функция вставляет их. Также следует учитывать такие варианты написания: `Hello<p>world</p>`, `Hello<br>world`. Подобные результаты выполнения функции могут использоваться для анализа данных, например, для преобразования HTML-текста в набор используемых слов.
+7. Также обратите внимание, что правильная обработка пробелов требует поддержки `<pre></pre>` и свойств CSS `display` и `white-space`.
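+
+Проиллюстрируем правила 4 и 6 небольшим наброском (результат в комментарии — ожидаемый):
+
+``` sql
+-- тег <b> удаляется, а на его место вставляется пробел
+SELECT extractTextFromHTML('Hello<b>world</b>') AS text;
+
+-- text = 'Hello world'
+```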
+
+**Синтаксис**
+
+``` sql
+extractTextFromHTML(x)
+```
+
+**Аргументы**
+
+-   `x` — текст для обработки. [String](../../sql-reference/data-types/string.md). 
+
+**Возвращаемое значение**
+
+-   Извлеченный текст.
+
+Тип: [String](../../sql-reference/data-types/string.md).
+
+**Пример**
+
+Первый пример содержит несколько тегов и комментарий. На этом примере также видно, как обрабатываются пробелы.
+Второй пример показывает обработку `CDATA` и тега `script`.
+В третьем примере текст выделяется из полного HTML-ответа, полученного с помощью функции [url](../../sql-reference/table-functions/url.md).
+
+Запрос:
+
+``` sql
+SELECT extractTextFromHTML(' <p> A text with<a>tags</a>. <!-- comments --> </p> ');
+SELECT extractTextFromHTML('<![CDATA[The content within <b>CDATA</b>]]> <script>alert("Script");</script>');
+SELECT extractTextFromHTML(html) FROM url('http://www.donothingfor2minutes.com/', RawBLOB, 'html String');
+```
+
+Результат:
+
+``` text
+A text with tags .
+The content within CDATA
+Do Nothing for 2 Minutes 2:00&nbsp;
+```
diff --git a/docs/ru/sql-reference/functions/string-replace-functions.md b/docs/ru/sql-reference/functions/string-replace-functions.md
index f00a06d1560..9426e8685b0 100644
--- a/docs/ru/sql-reference/functions/string-replace-functions.md
+++ b/docs/ru/sql-reference/functions/string-replace-functions.md
@@ -83,4 +83,3 @@ SELECT replaceRegexpAll('Hello, World!', '^', 'here: ') AS res
└─────────────────────┘
```

-[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/functions/string_replace_functions/)
diff --git a/docs/ru/sql-reference/functions/string-search-functions.md b/docs/ru/sql-reference/functions/string-search-functions.md
index 95ac922a4a8..2417a1c6ffd 100644
--- a/docs/ru/sql-reference/functions/string-search-functions.md
+++ b/docs/ru/sql-reference/functions/string-search-functions.md
@@ -7,7 +7,7 @@ toc_title: "Функции поиска в строках"

Во всех функциях, поиск регистрозависимый по умолчанию. Существуют варианты функций для регистронезависимого поиска.

-## position(haystack, needle) {#position}
+## position(haystack, needle), locate(haystack, needle) {#position}

Поиск подстроки `needle` в строке `haystack`.

@@ -21,13 +21,20 @@ toc_title: "Функции поиска в строках"

position(haystack, needle[, start_pos])
```

+``` sql
+position(needle IN haystack)
+```
+
Алиас: `locate(haystack, needle[, start_pos])`.

-**Параметры**
+!!! note "Примечание"
+    Синтаксис `position(needle IN haystack)` обеспечивает совместимость с SQL, функция работает так же, как `position(haystack, needle)`.
+
+**Аргументы**

- `haystack` — строка, по которой выполняется поиск. [Строка](../syntax.md#syntax-string-literal).
- `needle` — подстрока, которую необходимо найти. [Строка](../syntax.md#syntax-string-literal).
-- `start_pos` – Опциональный параметр, позиция символа в строке, с которого начинается поиск. [UInt](../../sql-reference/data-types/int-uint.md)
+- `start_pos` — опциональный параметр, позиция символа в строке, с которого начинается поиск. [UInt](../../sql-reference/data-types/int-uint.md).

**Возвращаемые значения**

@@ -43,10 +50,10 @@ position(haystack, needle[, start_pos])

Запрос:

``` sql
-SELECT position('Hello, world!', '!')
+SELECT position('Hello, world!', '!');
```

-Ответ:
+Результат:

``` text
┌─position('Hello, world!', '!')─┐
│                             13 │
└────────────────────────────────┘
```

@@ -59,10 +66,10 @@ SELECT position('Hello, world!', '!')

Запрос:

``` sql
-SELECT position('Привет, мир!', '!')
+SELECT position('Привет, мир!', '!');
```

-Ответ:
+Результат:

``` text
┌─position('Привет, мир!', '!')─┐
│                            21 │
└───────────────────────────────┘
```

@@ -70,6 +77,36 @@ SELECT position('Привет, мир!', '!')

+**Примеры работы функции с синтаксисом POSITION(needle IN haystack)**
+
+Запрос:
+
+```sql
+SELECT 1 = position('абв' IN 'абв');
+```
+
+Результат:
+
+```text
+┌─equals(1, position('абв', 'абв'))─┐
+│                                 1 │
+└───────────────────────────────────┘
+```
+
+Запрос:
+
+```sql
+SELECT 0 = position('абв' IN '');
+```
+
+Результат:
+
+```text
+┌─equals(0, position('', 'абв'))─┐
+│                              1 │
+└────────────────────────────────┘
+```
+
## positionCaseInsensitive {#positioncaseinsensitive}

Такая же, как и [position](#position), но работает без учета регистра. Возвращает позицию в байтах найденной подстроки в строке, начиная с 1.
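+
+Небольшой набросок для сравнения с регистрозависимой `position` (результаты в комментарии — ожидаемые):
+
+``` sql
+SELECT
+    position('Hello', 'hello') AS sensitive,
+    positionCaseInsensitive('Hello', 'hello') AS insensitive;
+
+-- sensitive = 0, insensitive = 1
+```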
@@ -82,11 +119,11 @@ SELECT position('Привет, мир!', '!') positionCaseInsensitive(haystack, needle[, start_pos]) ``` -**Параметры** +**Аргументы** - `haystack` — строка, по которой выполняется поиск. [Строка](../syntax.md#syntax-string-literal). - `needle` — подстрока, которую необходимо найти. [Строка](../syntax.md#syntax-string-literal). -- `start_pos` – Опциональный параметр, позиция символа в строке, с которого начинается поиск. [UInt](../../sql-reference/data-types/int-uint.md) +- `start_pos` — опциональный параметр, позиция символа в строке, с которого начинается поиск. [UInt](../../sql-reference/data-types/int-uint.md). **Возвращаемые значения** @@ -100,10 +137,10 @@ positionCaseInsensitive(haystack, needle[, start_pos]) Запрос: ``` sql -SELECT positionCaseInsensitive('Hello, world!', 'hello') +SELECT positionCaseInsensitive('Hello, world!', 'hello'); ``` -Ответ: +Результат: ``` text ┌─positionCaseInsensitive('Hello, world!', 'hello')─┐ @@ -125,11 +162,11 @@ SELECT positionCaseInsensitive('Hello, world!', 'hello') positionUTF8(haystack, needle[, start_pos]) ``` -**Параметры** +**Аргументы** - `haystack` — строка, по которой выполняется поиск. [Строка](../syntax.md#syntax-string-literal). - `needle` — подстрока, которую необходимо найти. [Строка](../syntax.md#syntax-string-literal). -- `start_pos` – Опциональный параметр, позиция символа в строке, с которого начинается поиск. [UInt](../../sql-reference/data-types/int-uint.md) +- `start_pos` — опциональный параметр, позиция символа в строке, с которого начинается поиск. [UInt](../../sql-reference/data-types/int-uint.md). **Возвращаемые значения** @@ -145,10 +182,10 @@ positionUTF8(haystack, needle[, start_pos]) Запрос: ``` sql -SELECT positionUTF8('Привет, мир!', '!') +SELECT positionUTF8('Привет, мир!', '!'); ``` -Ответ: +Результат: ``` text ┌─positionUTF8('Привет, мир!', '!')─┐ @@ -161,7 +198,7 @@ SELECT positionUTF8('Привет, мир!', '!') Запрос для символа `é`, который представлен одной кодовой точкой `U+00E9`: ``` sql -SELECT positionUTF8('Salut, étudiante!', '!') +SELECT positionUTF8('Salut, étudiante!', '!'); ``` Result: @@ -175,10 +212,10 @@ Result: Запрос для символа `é`, который представлен двумя кодовыми точками `U+0065U+0301`: ``` sql -SELECT positionUTF8('Salut, étudiante!', '!') +SELECT positionUTF8('Salut, étudiante!', '!'); ``` -Ответ: +Результат: ``` text ┌─positionUTF8('Salut, étudiante!', '!')─┐ @@ -198,11 +235,11 @@ SELECT positionUTF8('Salut, étudiante!', '!') positionCaseInsensitiveUTF8(haystack, needle[, start_pos]) ``` -**Параметры** +**Аргументы** - `haystack` — строка, по которой выполняется поиск. [Строка](../syntax.md#syntax-string-literal). - `needle` — подстрока, которую необходимо найти. [Строка](../syntax.md#syntax-string-literal). -- `start_pos` – Опциональный параметр, позиция символа в строке, с которого начинается поиск. [UInt](../../sql-reference/data-types/int-uint.md) +- `start_pos` — опциональный параметр, позиция символа в строке, с которого начинается поиск. [UInt](../../sql-reference/data-types/int-uint.md). 
**Возвращаемые значения** @@ -216,10 +253,10 @@ positionCaseInsensitiveUTF8(haystack, needle[, start_pos]) Запрос: ``` sql -SELECT positionCaseInsensitiveUTF8('Привет, мир!', 'Мир') +SELECT positionCaseInsensitiveUTF8('Привет, мир!', 'Мир'); ``` -Ответ: +Результат: ``` text ┌─positionCaseInsensitiveUTF8('Привет, мир!', 'Мир')─┐ @@ -257,7 +294,7 @@ multiSearchAllPositions(haystack, [needle1, needle2, ..., needlen]) Query: ``` sql -SELECT multiSearchAllPositions('Hello, World!', ['hello', '!', 'world']) +SELECT multiSearchAllPositions('Hello, World!', ['hello', '!', 'world']); ``` Result: @@ -357,7 +394,7 @@ Result: extractAllGroupsHorizontal(haystack, pattern) ``` -**Параметры** +**Аргументы** - `haystack` — строка для разбора. Тип: [String](../../sql-reference/data-types/string.md). - `pattern` — регулярное выражение, построенное по синтаксическим правилам [re2](https://github.com/google/re2/wiki/Syntax). Выражение должно содержать группы, заключенные в круглые скобки. Если выражение не содержит групп, генерируется исключение. Тип: [String](../../sql-reference/data-types/string.md). @@ -373,7 +410,7 @@ extractAllGroupsHorizontal(haystack, pattern) Запрос: ``` sql -SELECT extractAllGroupsHorizontal('abc=111, def=222, ghi=333', '("[^"]+"|\\w+)=("[^"]+"|\\w+)') +SELECT extractAllGroupsHorizontal('abc=111, def=222, ghi=333', '("[^"]+"|\\w+)=("[^"]+"|\\w+)'); ``` Результат: @@ -384,8 +421,9 @@ SELECT extractAllGroupsHorizontal('abc=111, def=222, ghi=333', '("[^"]+"|\\w+)=( └──────────────────────────────────────────────────────────────────────────────────────────┘ ``` -**См. также** -- функция [extractAllGroupsVertical](#extractallgroups-vertical) +**Смотрите также** + +- Функция [extractAllGroupsVertical](#extractallgroups-vertical) ## extractAllGroupsVertical {#extractallgroups-vertical} @@ -397,7 +435,7 @@ SELECT extractAllGroupsHorizontal('abc=111, def=222, ghi=333', '("[^"]+"|\\w+)=( extractAllGroupsVertical(haystack, pattern) ``` -**Параметры** +**Аргументы** - `haystack` — строка для разбора. Тип: [String](../../sql-reference/data-types/string.md). - `pattern` — регулярное выражение, построенное по синтаксическим правилам [re2](https://github.com/google/re2/wiki/Syntax). Выражение должно содержать группы, заключенные в круглые скобки. Если выражение не содержит групп, генерируется исключение. Тип: [String](../../sql-reference/data-types/string.md). @@ -413,7 +451,7 @@ extractAllGroupsVertical(haystack, pattern) Запрос: ``` sql -SELECT extractAllGroupsVertical('abc=111, def=222, ghi=333', '("[^"]+"|\\w+)=("[^"]+"|\\w+)') +SELECT extractAllGroupsVertical('abc=111, def=222, ghi=333', '("[^"]+"|\\w+)=("[^"]+"|\\w+)'); ``` Результат: @@ -424,8 +462,9 @@ SELECT extractAllGroupsVertical('abc=111, def=222, ghi=333', '("[^"]+"|\\w+)=("[ └────────────────────────────────────────────────────────────────────────────────────────┘ ``` -**См. также** -- функция [extractAllGroupsHorizontal](#extractallgroups-horizontal) +**Смотрите также** + +- Функция [extractAllGroupsHorizontal](#extractallgroups-horizontal) ## like(haystack, pattern), оператор haystack LIKE pattern {#function-like} @@ -455,10 +494,10 @@ SELECT extractAllGroupsVertical('abc=111, def=222, ghi=333', '("[^"]+"|\\w+)=("[ ilike(haystack, pattern) ``` -**Параметры** +**Аргументы** -- `haystack` — Входная строка. [String](../../sql-reference/syntax.md#syntax-string-literal). -- `pattern` — Если `pattern` не содержит процента или нижнего подчеркивания, тогда `pattern` представляет саму строку. 
Нижнее подчеркивание (`_`) в `pattern` обозначает любой отдельный символ. Знак процента (`%`) соответствует последовательности из любого количества символов: от нуля и более. +- `haystack` — входная строка. [String](../../sql-reference/syntax.md#syntax-string-literal). +- `pattern` — если `pattern` не содержит процента или нижнего подчеркивания, тогда `pattern` представляет саму строку. Нижнее подчеркивание (`_`) в `pattern` обозначает любой отдельный символ. Знак процента (`%`) соответствует последовательности из любого количества символов: от нуля и более. Некоторые примеры `pattern`: @@ -490,7 +529,7 @@ ilike(haystack, pattern) Запрос: ``` sql -SELECT * FROM Months WHERE ilike(name, '%j%') +SELECT * FROM Months WHERE ilike(name, '%j%'); ``` Результат: @@ -530,7 +569,7 @@ SELECT * FROM Months WHERE ilike(name, '%j%') countMatches(haystack, pattern) ``` -**Параметры** +**Аргументы** - `haystack` — строка, по которой выполняется поиск. [String](../../sql-reference/syntax.md#syntax-string-literal). - `pattern` — регулярное выражение, построенное по синтаксическим правилам [re2](https://github.com/google/re2/wiki/Syntax). [String](../../sql-reference/data-types/string.md). @@ -583,11 +622,11 @@ SELECT countMatches('aaaa', 'aa'); countSubstrings(haystack, needle[, start_pos]) ``` -**Параметры** +**Аргументы** - `haystack` — строка, в которой ведется поиск. [String](../../sql-reference/syntax.md#syntax-string-literal). - `needle` — искомая подстрока. [String](../../sql-reference/syntax.md#syntax-string-literal). -- `start_pos` – позиция первого символа в строке, с которого начнется поиск. Необязательный параметр. [UInt](../../sql-reference/data-types/int-uint.md). +- `start_pos` — позиция первого символа в строке, с которого начнется поиск. Необязательный параметр. [UInt](../../sql-reference/data-types/int-uint.md). **Возвращаемые значения** @@ -649,11 +688,11 @@ SELECT countSubstrings('abc___abc', 'abc', 4); countSubstringsCaseInsensitive(haystack, needle[, start_pos]) ``` -**Параметры** +**Аргументы** - `haystack` — строка, в которой ведется поиск. [String](../../sql-reference/syntax.md#syntax-string-literal). - `needle` — искомая подстрока. [String](../../sql-reference/syntax.md#syntax-string-literal). -- `start_pos` – позиция первого символа в строке, с которого начнется поиск. Необязательный параметр. [UInt](../../sql-reference/data-types/int-uint.md). +- `start_pos` — позиция первого символа в строке, с которого начнется поиск. Необязательный параметр. [UInt](../../sql-reference/data-types/int-uint.md). **Возвращаемые значения** @@ -715,11 +754,11 @@ SELECT countSubstringsCaseInsensitive('abC___abC', 'aBc', 2); SELECT countSubstringsCaseInsensitiveUTF8(haystack, needle[, start_pos]) ``` -**Параметры** +**Аргументы** - `haystack` — строка, в которой ведется поиск. [String](../../sql-reference/syntax.md#syntax-string-literal). - `needle` — искомая подстрока. [String](../../sql-reference/syntax.md#syntax-string-literal). -- `start_pos` – позиция первого символа в строке, с которого начнется поиск. Необязательный параметр. [UInt](../../sql-reference/data-types/int-uint.md). +- `start_pos` — позиция первого символа в строке, с которого начнется поиск. Необязательный параметр. [UInt](../../sql-reference/data-types/int-uint.md). 
**Возвращаемые значения** @@ -756,5 +795,3 @@ SELECT countSubstringsCaseInsensitiveUTF8('аБв__АбВ__абв', 'Абв'); │ 3 │ └────────────────────────────────────────────────────────────┘ ``` - -[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/functions/string_search_functions/) diff --git a/docs/ru/sql-reference/functions/tuple-functions.md b/docs/ru/sql-reference/functions/tuple-functions.md index f88886ec6f1..381743a450b 100644 --- a/docs/ru/sql-reference/functions/tuple-functions.md +++ b/docs/ru/sql-reference/functions/tuple-functions.md @@ -45,9 +45,9 @@ untuple(x) Чтобы пропустить некоторые столбцы в результате запроса, вы можете использовать выражение `EXCEPT`. -**Параметры** +**Аргументы** -- `x` - функция `tuple`, столбец или кортеж элементов. [Tuple](../../sql-reference/data-types/tuple.md). +- `x` — функция `tuple`, столбец или кортеж элементов. [Tuple](../../sql-reference/data-types/tuple.md). **Возвращаемое значение** @@ -111,4 +111,55 @@ SELECT untuple((* EXCEPT (v2, v3),)) FROM kv; - [Tuple](../../sql-reference/data-types/tuple.md) -[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/functions/tuple-functions/) +## tupleHammingDistance {#tuplehammingdistance} + +Возвращает [расстояние Хэмминга](https://ru.wikipedia.org/wiki/%D0%A0%D0%B0%D1%81%D1%81%D1%82%D0%BE%D1%8F%D0%BD%D0%B8%D0%B5_%D0%A5%D1%8D%D0%BC%D0%BC%D0%B8%D0%BD%D0%B3%D0%B0) между двумя кортежами одинакового размера. + +**Синтаксис** + +``` sql +tupleHammingDistance(tuple1, tuple2) +``` + +**Аргументы** + +- `tuple1` — первый кортеж. [Tuple](../../sql-reference/data-types/tuple.md). +- `tuple2` — второй кортеж. [Tuple](../../sql-reference/data-types/tuple.md). + +Кортежи должны иметь одинаковый размер и тип элементов. + +**Возвращаемое значение** + +- Расстояние Хэмминга. + +Тип: [UInt8](../../sql-reference/data-types/int-uint.md). + +**Примеры** + +Запрос: + +``` sql +SELECT tupleHammingDistance((1, 2, 3), (3, 2, 1)) AS HammingDistance; +``` + +Результат: + +``` text +┌─HammingDistance─┐ +│ 2 │ +└─────────────────┘ +``` + +Может быть использовано с функциями [MinHash](../../sql-reference/functions/hash-functions.md#ngramminhash) для проверки строк на совпадение: + +``` sql +SELECT tupleHammingDistance(wordShingleMinHash(string), wordShingleMinHashCaseInsensitive(string)) as HammingDistance FROM (SELECT 'Clickhouse is a column-oriented database management system for online analytical processing of queries.' AS string); +``` + +Результат: + +``` text +┌─HammingDistance─┐ +│ 2 │ +└─────────────────┘ +``` diff --git a/docs/ru/sql-reference/functions/tuple-map-functions.md b/docs/ru/sql-reference/functions/tuple-map-functions.md index 696fdb9e5ae..c385dbd8f87 100644 --- a/docs/ru/sql-reference/functions/tuple-map-functions.md +++ b/docs/ru/sql-reference/functions/tuple-map-functions.md @@ -15,7 +15,7 @@ toc_title: Работа с контейнерами map map(key1, value1[, key2, value2, ...]) ``` -**Параметры** +**Аргументы** - `key` — ключ. [String](../../sql-reference/data-types/string.md) или [Integer](../../sql-reference/data-types/int-uint.md). - `value` — значение. [String](../../sql-reference/data-types/string.md), [Integer](../../sql-reference/data-types/int-uint.md) или [Array](../../sql-reference/data-types/array.md). @@ -62,9 +62,10 @@ SELECT a['key2'] FROM table_map; └─────────────────────────┘ ``` -**См. также** +**Смотрите также** - тип данных [Map(key, value)](../../sql-reference/data-types/map.md) + ## mapAdd {#function-mapadd} Собирает все ключи и суммирует соответствующие значения. 
@@ -75,7 +76,7 @@ SELECT a['key2'] FROM table_map; mapAdd(Tuple(Array, Array), Tuple(Array, Array) [, ...]) ``` -**Параметры** +**Аргументы** Аргументами являются [кортежи](../../sql-reference/data-types/tuple.md#tuplet1-t2) из двух [массивов](../../sql-reference/data-types/array.md#data-type-array), где элементы в первом массиве представляют ключи, а второй массив содержит значения для каждого ключа. Все массивы ключей должны иметь один и тот же тип, а все массивы значений должны содержать элементы, которые можно приводить к одному типу ([Int64](../../sql-reference/data-types/int-uint.md#int-ranges), [UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges) или [Float64](../../sql-reference/data-types/float.md#float32-float64)). @@ -111,7 +112,7 @@ SELECT mapAdd(([toUInt8(1), 2], [1, 1]), ([toUInt8(1), 2], [1, 1])) as res, toTy mapSubtract(Tuple(Array, Array), Tuple(Array, Array) [, ...]) ``` -**Параметры** +**Аргументы** Аргументами являются [кортежи](../../sql-reference/data-types/tuple.md#tuplet1-t2) из двух [массивов](../../sql-reference/data-types/array.md#data-type-array), где элементы в первом массиве представляют ключи, а второй массив содержит значения для каждого ключа. Все массивы ключей должны иметь один и тот же тип, а все массивы значений должны содержать элементы, которые можно приводить к одному типу ([Int64](../../sql-reference/data-types/int-uint.md#int-ranges), [UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges) или [Float64](../../sql-reference/data-types/float.md#float32-float64)). @@ -151,10 +152,10 @@ mapPopulateSeries(keys, values[, max]) Количество элементов в `keys` и `values` должно быть одинаковым для каждой строки. -**Параметры** +**Аргументы** -- `keys` — Массив ключей [Array](../../sql-reference/data-types/array.md#data-type-array)([Int](../../sql-reference/data-types/int-uint.md#int-ranges)). -- `values` — Массив значений. [Array](../../sql-reference/data-types/array.md#data-type-array)([Int](../../sql-reference/data-types/int-uint.md#int-ranges)). +- `keys` — массив ключей [Array](../../sql-reference/data-types/array.md#data-type-array)([Int](../../sql-reference/data-types/int-uint.md#int-ranges)). +- `values` — массив значений. [Array](../../sql-reference/data-types/array.md#data-type-array)([Int](../../sql-reference/data-types/int-uint.md#int-ranges)). **Возвращаемое значение** @@ -186,7 +187,7 @@ select mapPopulateSeries([1,2,4], [11,22,44], 5) as res, toTypeName(res) as type mapContains(map, key) ``` -**Параметры** +**Аргументы** - `map` — контейнер Map. [Map](../../sql-reference/data-types/map.md). - `key` — ключ. Тип соответстует типу ключей параметра `map`. @@ -229,7 +230,7 @@ SELECT mapContains(a, 'name') FROM test; mapKeys(map) ``` -**Параметры** +**Аргументы** - `map` — контейнер Map. [Map](../../sql-reference/data-types/map.md). @@ -270,7 +271,7 @@ SELECT mapKeys(a) FROM test; mapKeys(map) ``` -**Параметры** +**Аргументы** - `map` — контейнер Map. [Map](../../sql-reference/data-types/map.md). 
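+
+Проверочный набросок: `mapKeys` и `mapValues` можно применять и к значению, созданному функцией `map` (результаты в комментариях — ожидаемые):
+
+``` sql
+SELECT
+    mapKeys(map('a', 1, 'b', 2)) AS keys,   -- ['a','b']
+    mapValues(map('a', 1, 'b', 2)) AS vals; -- [1,2]
+```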
@@ -301,4 +302,3 @@ SELECT mapValues(a) FROM test; └──────────────────┘ ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/functions/tuple-map-functions/) diff --git a/docs/ru/sql-reference/functions/type-conversion-functions.md b/docs/ru/sql-reference/functions/type-conversion-functions.md index f312f9f5847..fc1dd15f8e3 100644 --- a/docs/ru/sql-reference/functions/type-conversion-functions.md +++ b/docs/ru/sql-reference/functions/type-conversion-functions.md @@ -22,7 +22,7 @@ toc_title: "Функции преобразования типов" - `toInt128(expr)` — возвращает значение типа `Int128`. - `toInt256(expr)` — возвращает значение типа `Int256`. -**Параметры** +**Аргументы** - `expr` — [выражение](../syntax.md#syntax-expressions) возвращающее число или строку с десятичным представление числа. Бинарное, восьмеричное и шестнадцатеричное представление числа не поддержаны. Ведущие нули обрезаются. @@ -100,7 +100,7 @@ SELECT toInt64OrNull('123123'), toInt8OrNull('123qwe123'); - `toUInt64(expr)` — возвращает значение типа `UInt64`. - `toUInt256(expr)` — возвращает значение типа `UInt256`. -**Параметры** +**Аргументы** - `expr` — [выражение](../syntax.md#syntax-expressions) возвращающее число или строку с десятичным представление числа. Бинарное, восьмеричное и шестнадцатеричное представление числа не поддержаны. Ведущие нули обрезаются. @@ -172,7 +172,7 @@ Cиноним: `DATE`. Эти функции следует использовать вместо функций `toDecimal*()`, если при ошибке обработки входного значения вы хотите получать `NULL` вместо исключения. -**Параметры** +**Аргументы** - `expr` — [выражение](../syntax.md#syntax-expressions), возвращающее значение типа [String](../../sql-reference/functions/type-conversion-functions.md). ClickHouse ожидает текстовое представление десятичного числа. Например, `'1.111'`. - `S` — количество десятичных знаков в результирующем значении. @@ -225,7 +225,7 @@ SELECT toDecimal32OrNull(toString(-1.111), 2) AS val, toTypeName(val); Эти функции следует использовать вместо функций `toDecimal*()`, если при ошибке обработки входного значения вы хотите получать `0` вместо исключения. -**Параметры** +**Аргументы** - `expr` — [выражение](../syntax.md#syntax-expressions), возвращающее значение типа [String](../../sql-reference/functions/type-conversion-functions.md). ClickHouse ожидает текстовое представление десятичного числа. Например, `'1.111'`. - `S` — количество десятичных знаков в результирующем значении. @@ -377,7 +377,7 @@ SELECT toFixedString('foo\0bar', 8) AS s, toStringCutToZero(s) AS s_cut; reinterpretAsUUID(fixed_string) ``` -**Параметры** +**Аргументы** - `fixed_string` — cтрока с big-endian порядком байтов. [FixedString](../../sql-reference/data-types/fixedstring.md#fixedstring). @@ -423,15 +423,51 @@ SELECT uuid = uuid2; └─────────────────────┘ ``` +## reinterpret(x, T) {#type_conversion_function-reinterpret} + +Использует туже самую исходную последовательность байт в памяти для значения `x` и переинтерпретирует ее как конечный тип данных + +Запрос: +```sql +SELECT reinterpret(toInt8(-1), 'UInt8') as int_to_uint, + reinterpret(toInt8(1), 'Float32') as int_to_float, + reinterpret('1', 'UInt32') as string_to_int; +``` + +Результат: + +``` +┌─int_to_uint─┬─int_to_float─┬─string_to_int─┐ +│ 255 │ 1e-45 │ 49 │ +└─────────────┴──────────────┴───────────────┘ +``` + ## CAST(x, T) {#type_conversion_function-cast} -Преобразует входное значение `x` в указанный тип данных `T`. +Преобразует входное значение `x` в указанный тип данных `T`. 
В отличии от функции `reinterpret` использует внешнее представление значения `x`. Поддерживается также синтаксис `CAST(x AS t)`. Обратите внимание, что если значение `x` не может быть преобразовано к типу `T`, возникает переполнение. Например, `CAST(-1, 'UInt8')` возвращает 255. -**Пример** +**Примеры** + +Запрос: + +```sql +SELECT + cast(toInt8(-1), 'UInt8') AS cast_int_to_uint, + cast(toInt8(1), 'Float32') AS cast_int_to_float, + cast('1', 'UInt32') AS cast_string_to_int +``` + +Результат: + +``` +┌─cast_int_to_uint─┬─cast_int_to_float─┬─cast_string_to_int─┐ +│ 255 │ 1 │ 1 │ +└──────────────────┴───────────────────┴────────────────────┘ +``` Запрос: @@ -488,7 +524,7 @@ SELECT toTypeName(CAST(x, 'Nullable(UInt16)')) FROM t_null; └─────────────────────────────────────────┘ ``` -**См. также** +**Смотрите также** - Настройка [cast_keep_nullable](../../operations/settings/settings.md#cast_keep_nullable) @@ -511,7 +547,8 @@ SELECT cast(-1, 'UInt8') as uint8; ``` text ┌─uint8─┐ │ 255 │ -└───── +└───────┘ +``` Запрос: @@ -537,7 +574,7 @@ Code: 70. DB::Exception: Received from localhost:9000. DB::Exception: Value in c accurateCastOrNull(x, T) ``` -**Параметры** +**Аргументы** - `x` — входное значение. - `T` — имя возвращаемого типа данных. @@ -596,7 +633,7 @@ toIntervalQuarter(number) toIntervalYear(number) ``` -**Параметры** +**Аргументы** - `number` — длительность интервала. Положительное целое число. @@ -627,6 +664,7 @@ SELECT ``` ## parseDateTimeBestEffort {#parsedatetimebesteffort} +## parseDateTime32BestEffort {#parsedatetime32besteffort} Преобразует дату и время в [строковом](../../sql-reference/functions/type-conversion-functions.md) представлении к типу данных [DateTime](../../sql-reference/functions/type-conversion-functions.md#data_type-datetime). @@ -638,7 +676,7 @@ SELECT parseDateTimeBestEffort(time_string[, time_zone]) ``` -**Параметры** +**Аргументы** - `time_string` — строка, содержащая дату и время для преобразования. [String](../../sql-reference/functions/type-conversion-functions.md). - `time_zone` — часовой пояс. Функция анализирует `time_string` в соответствии с заданным часовым поясом. [String](../../sql-reference/functions/type-conversion-functions.md). @@ -733,7 +771,7 @@ SELECT parseDateTimeBestEffort('10 20:19'); └─────────────────────────────────────┘ ``` -**См. также** +**Смотрите также** - [Информация о формате ISO 8601 от @xkcd](https://xkcd.com/1179/) - [RFC 1123](https://tools.ietf.org/html/rfc1123) @@ -750,7 +788,7 @@ SELECT parseDateTimeBestEffort('10 20:19'); parseDateTimeBestEffortUS(time_string [, time_zone]) ``` -**Параметры** +**Аргументы** - `time_string` — строка, содержащая дату и время для преобразования. [String](../../sql-reference/data-types/string.md). - `time_zone` — часовой пояс. Функция анализирует `time_string` в соответствии с часовым поясом. [String](../../sql-reference/data-types/string.md). @@ -814,6 +852,16 @@ AS parseDateTimeBestEffortUS; └─────────────────────────——┘ ``` +## parseDateTimeBestEffortOrNull {#parsedatetimebesteffortornull} +## parseDateTime32BestEffortOrNull {#parsedatetime32besteffortornull} + +Работает также как [parseDateTimeBestEffort](#parsedatetimebesteffort), но возвращает `NULL` когда получает формат даты который не может быть обработан. 
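+
+Например (проверочный набросок; результаты в комментарии — ожидаемые):
+
+``` sql
+SELECT
+    parseDateTimeBestEffortOrNull('2021-04-01 11:22:33') AS ok,
+    parseDateTimeBestEffortOrNull('некорректная строка') AS bad;
+
+-- ok = 2021-04-01 11:22:33, bad = NULL
+```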
+ +## parseDateTimeBestEffortOrZero {#parsedatetimebesteffortorzero} +## parseDateTime32BestEffortOrZero {#parsedatetime32besteffortorzero} + +Работает также как [parseDateTimeBestEffort](#parsedatetimebesteffort), но возвращает нулевую дату или нулевую дату и время когда получает формат даты который не может быть обработан. + ## parseDateTimeBestEffortUSOrNull {#parsedatetimebesteffortusornull} Работает аналогично функции [parseDateTimeBestEffortUS](#parsedatetimebesteffortUS), но в отличие от нее возвращает `NULL`, если входная строка не может быть преобразована в тип данных [DateTime](../../sql-reference/data-types/datetime.md). @@ -824,7 +872,7 @@ AS parseDateTimeBestEffortUS; parseDateTimeBestEffortUSOrNull(time_string[, time_zone]) ``` -**Параметры** +**Аргументы** - `time_string` — строка, содержащая дату или дату со временем для преобразования. Дата должна быть в американском формате (`MM/DD/YYYY` и т.д.). [String](../../sql-reference/data-types/string.md). - `time_zone` — [часовой пояс](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone). Функция анализирует `time_string` в соответствии с заданным часовым поясом. Опциональный параметр. [String](../../sql-reference/data-types/string.md). @@ -910,7 +958,7 @@ SELECT parseDateTimeBestEffortUSOrNull('10.2021') AS parseDateTimeBestEffortUSOr parseDateTimeBestEffortUSOrZero(time_string[, time_zone]) ``` -**Параметры** +**Аргументы** - `time_string` — строка, содержащая дату или дату со временем для преобразования. Дата должна быть в американском формате (`MM/DD/YYYY` и т.д.). [String](../../sql-reference/data-types/string.md). - `time_zone` — [часовой пояс](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone). Функция анализирует `time_string` в соответствии с заданным часовым поясом. Опциональный параметр. [String](../../sql-reference/data-types/string.md). @@ -986,9 +1034,100 @@ SELECT parseDateTimeBestEffortUSOrZero('02.2021') AS parseDateTimeBestEffortUSOr └─────────────────────────────────┘ ``` -## toUnixTimestamp64Milli -## toUnixTimestamp64Micro -## toUnixTimestamp64Nano +## parseDateTime64BestEffort {#parsedatetime64besteffort} + +Работает также как функция [parseDateTimeBestEffort](#parsedatetimebesteffort) но также понимамет милисекунды и микросекунды и возвращает `DateTime64(3)` или `DateTime64(6)` типы данных в зависимости от заданной точности. + +**Syntax** + +``` sql +parseDateTime64BestEffort(time_string [, precision [, time_zone]]) +``` + +**Parameters** + +- `time_string` — String containing a date or date with time to convert. [String](../../sql-reference/data-types/string.md). +- `precision` — `3` for milliseconds, `6` for microseconds. Default `3`. Optional [UInt8](../../sql-reference/data-types/int-uint.md). +- `time_zone` — [Timezone](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone). The function parses `time_string` according to the timezone. Optional. [String](../../sql-reference/data-types/string.md). 
+
+**Примеры**
+
+Запрос:
+
+```sql
+SELECT parseDateTime64BestEffort('2021-01-01') AS a, toTypeName(a) AS t
+UNION ALL
+SELECT parseDateTime64BestEffort('2021-01-01 01:01:00.12346') AS a, toTypeName(a) AS t
+UNION ALL
+SELECT parseDateTime64BestEffort('2021-01-01 01:01:00.12346',6) AS a, toTypeName(a) AS t
+UNION ALL
+SELECT parseDateTime64BestEffort('2021-01-01 01:01:00.12346',3,'Europe/Moscow') AS a, toTypeName(a) AS t
+FORMAT PrettyCompactMonoBlock
+```
+
+Результат:
+
+```
+┌──────────────────────────a─┬─t──────────────────────────────┐
+│ 2021-01-01 01:01:00.123000 │ DateTime64(3)                  │
+│ 2021-01-01 00:00:00.000000 │ DateTime64(3)                  │
+│ 2021-01-01 01:01:00.123460 │ DateTime64(6)                  │
+│ 2020-12-31 22:01:00.123000 │ DateTime64(3, 'Europe/Moscow') │
+└────────────────────────────┴────────────────────────────────┘
+```
+
+## parseDateTime64BestEffortOrNull {#parsedatetime64besteffortornull}
+
+Работает так же, как функция [parseDateTime64BestEffort](#parsedatetime64besteffort), но возвращает `NULL`, когда встречает формат даты, который не может обработать.
+
+## parseDateTime64BestEffortOrZero {#parsedatetime64besteffortorzero}
+
+Работает так же, как функция [parseDateTime64BestEffort](#parsedatetime64besteffort), но возвращает "нулевую" дату и время, когда встречает формат даты, который не может обработать.
+
+
+## toLowCardinality {#tolowcardinality}
+
+Преобразует входные данные в версию [LowCardinality](../data-types/lowcardinality.md) того же типа данных.
+
+Чтобы преобразовать данные из типа `LowCardinality`, используйте функцию [CAST](#type_conversion_function-cast). Например, `CAST(x as String)`.
+
+**Синтаксис**
+
+```sql
+toLowCardinality(expr)
+```
+
+**Аргументы**
+
+- `expr` — [выражение](../syntax.md#syntax-expressions), которое в результате преобразуется в один из [поддерживаемых типов данных](../data-types/index.md#data_types).
+
+**Возвращаемое значение**
+
+- Результат преобразования `expr`.
+
+Тип: `LowCardinality(expr_result_type)`.
+
+**Пример**
+
+Запрос:
+
+```sql
+SELECT toLowCardinality('1');
+```
+
+Результат:
+
+```text
+┌─toLowCardinality('1')─┐
+│ 1                     │
+└───────────────────────┘
+```
+
+## toUnixTimestamp64Milli {#tounixtimestamp64milli}
+
+## toUnixTimestamp64Micro {#tounixtimestamp64micro}
+
+## toUnixTimestamp64Nano {#tounixtimestamp64nano}

Преобразует значение `DateTime64` в значение `Int64` с фиксированной точностью менее одной секунды. Входное значение округляется соответствующим образом вверх или вниз в зависимости от его точности. Обратите внимание, что возвращаемое значение - это временная метка в UTC, а не в часовом поясе `DateTime64`.

@@ -999,7 +1138,7 @@ SELECT parseDateTimeBestEffortUSOrZero('02.2021') AS parseDateTimeBestEffortUSOr

toUnixTimestamp64Milli(value)
```

-**Параметры**
+**Аргументы**

- `value` — значение `DateTime64` с любой точностью.
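+
+Небольшой набросок (часовой пояс задан явно, так как возвращаемое значение — метка времени в UTC; результат в комментарии — ожидаемый):
+
+``` sql
+WITH toDateTime64('2021-04-01 12:00:00.123', 3, 'UTC') AS dt64
+SELECT toUnixTimestamp64Milli(dt64) AS ms;
+
+-- ms = 1617278400123
+```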
@@ -1051,7 +1192,7 @@ SELECT toUnixTimestamp64Nano(dt64); fromUnixTimestamp64Milli(value [, ti]) ``` -**Параметры** +**Аргументы** - `value` — значение типы `Int64` с любой точностью. - `timezone` — (не обязательный параметр) часовой пояс в формате `String` для возвращаемого результата. @@ -1077,45 +1218,6 @@ SELECT fromUnixTimestamp64Milli(i64, 'UTC'); └──────────────────────────────────────┘ ``` -## toLowCardinality {#tolowcardinality} - -Преобразует входные данные в версию [LowCardianlity](../data-types/lowcardinality.md) того же типа данных. - -Чтобы преобразовать данные из типа `LowCardinality`, используйте функцию [CAST](#type_conversion_function-cast). Например, `CAST(x as String)`. - -**Синтаксис** - -```sql -toLowCardinality(expr) -``` - -**Параметры** - -- `expr` — [Выражение](../syntax.md#syntax-expressions), которое в результате преобразуется в один из [поддерживаемых типов данных](../data-types/index.md#data_types). - - -**Возвращаемое значение** - -- Результат преобразования `expr`. - -Тип: `LowCardinality(expr_result_type)` - -**Пример** - -Запрос: - -```sql -SELECT toLowCardinality('1'); -``` - -Результат: - -```text -┌─toLowCardinality('1')─┐ -│ 1 │ -└───────────────────────┘ -``` - ## formatRow {#formatrow} Преобразует произвольные выражения в строку заданного формата. @@ -1126,10 +1228,10 @@ SELECT toLowCardinality('1'); formatRow(format, x, y, ...) ``` -**Параметры** +**Аргументы** -- `format` — Текстовый формат. Например, [CSV](../../interfaces/formats.md#csv), [TSV](../../interfaces/formats.md#tabseparated). -- `x`,`y`, ... — Выражения. +- `format` — текстовый формат. Например, [CSV](../../interfaces/formats.md#csv), [TSV](../../interfaces/formats.md#tabseparated). +- `x`,`y`, ... — выражения. **Возвращаемое значение** @@ -1167,10 +1269,10 @@ FROM numbers(3); formatRowNoNewline(format, x, y, ...) ``` -**Параметры** +**Аргументы** -- `format` — Текстовый формат. Например, [CSV](../../interfaces/formats.md#csv), [TSV](../../interfaces/formats.md#tabseparated). -- `x`,`y`, ... — Выражения. +- `format` — текстовый формат. Например, [CSV](../../interfaces/formats.md#csv), [TSV](../../interfaces/formats.md#tabseparated). +- `x`,`y`, ... — выражения. **Возвращаемое значение** @@ -1195,4 +1297,3 @@ FROM numbers(3); └───────────────────────────────────────────┘ ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/functions/type_conversion_functions/) diff --git a/docs/ru/sql-reference/functions/url-functions.md b/docs/ru/sql-reference/functions/url-functions.md index 83f7fd32f6c..bdf9beeabf5 100644 --- a/docs/ru/sql-reference/functions/url-functions.md +++ b/docs/ru/sql-reference/functions/url-functions.md @@ -23,7 +23,7 @@ toc_title: "Функции для работы с URL" domain(url) ``` -**Параметры** +**Аргументы** - `url` — URL. Тип — [String](../../sql-reference/functions/url-functions.md). @@ -53,7 +53,7 @@ yandex.com **Пример** ``` sql -SELECT domain('svn+ssh://some.svn-hosting.com:80/repo/trunk') +SELECT domain('svn+ssh://some.svn-hosting.com:80/repo/trunk'); ``` ``` text @@ -74,7 +74,7 @@ SELECT domain('svn+ssh://some.svn-hosting.com:80/repo/trunk') topLevelDomain(url) ``` -**Параметры** +**Аргументы** - `url` — URL. Тип — [String](../../sql-reference/functions/url-functions.md). 
@@ -96,7 +96,7 @@ https://yandex.com/time/ **Пример** ``` sql -SELECT topLevelDomain('svn+ssh://www.some.svn-hosting.com:80/repo/trunk') +SELECT topLevelDomain('svn+ssh://www.some.svn-hosting.com:80/repo/trunk'); ``` ``` text @@ -138,7 +138,7 @@ SELECT topLevelDomain('svn+ssh://www.some.svn-hosting.com:80/repo/trunk') cutToFirstSignificantSubdomain(URL, TLD) ``` -**Parameters** +**Аргументы** - `URL` — URL. [String](../../sql-reference/data-types/string.md). - `TLD` — имя пользовательского списка доменов верхнего уровня. [String](../../sql-reference/data-types/string.md). @@ -192,7 +192,7 @@ SELECT cutToFirstSignificantSubdomainCustom('bar.foo.there-is-no-such-domain', ' cutToFirstSignificantSubdomainCustomWithWWW(URL, TLD) ``` -**Параметры** +**Аргументы** - `URL` — URL. [String](../../sql-reference/data-types/string.md). - `TLD` — имя пользовательского списка доменов верхнего уровня. [String](../../sql-reference/data-types/string.md). @@ -246,7 +246,7 @@ SELECT cutToFirstSignificantSubdomainCustomWithWWW('www.foo', 'public_suffix_lis firstSignificantSubdomainCustom(URL, TLD) ``` -**Параметры** +**Аргументы** - `URL` — URL. [String](../../sql-reference/data-types/string.md). - `TLD` — имя пользовательского списка доменов верхнего уровня. [String](../../sql-reference/data-types/string.md). @@ -355,7 +355,7 @@ SELECT decodeURLComponent('http://127.0.0.1:8123/?query=SELECT%201%3B') AS Decod netloc(URL) ``` -**Параметры** +**Аргументы** - `url` — URL. Тип — [String](../../sql-reference/data-types/string.md). @@ -405,4 +405,3 @@ SELECT netloc('http://paul@www.example.com:80/'); Удаляет параметр URL с именем name, если такой есть. Функция работает при допущении, что имя параметра закодировано в URL в точности таким же образом, что и в переданном аргументе. -[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/functions/url_functions/) diff --git a/docs/ru/sql-reference/functions/ym-dict-functions.md b/docs/ru/sql-reference/functions/ym-dict-functions.md index f6d02e553a0..d4bbe2eb709 100644 --- a/docs/ru/sql-reference/functions/ym-dict-functions.md +++ b/docs/ru/sql-reference/functions/ym-dict-functions.md @@ -113,13 +113,13 @@ LIMIT 15 **Синтаксис** ``` sql -regionToTopContinent(id[, geobase]); +regionToTopContinent(id[, geobase]) ``` -**Параметры** +**Аргументы** -- `id` — Идентификатор региона из геобазы Яндекса. [UInt32](../../sql-reference/functions/ym-dict-functions.md). -- `geobase` — Ключ словаря. Смотрите [Множественные геобазы](#multiple-geobases). [String](../../sql-reference/functions/ym-dict-functions.md). Опциональный параметр. +- `id` — идентификатор региона из геобазы Яндекса. [UInt32](../../sql-reference/functions/ym-dict-functions.md). +- `geobase` — ключ словаря. Смотрите [Множественные геобазы](#multiple-geobases). [String](../../sql-reference/functions/ym-dict-functions.md). Опциональный параметр. **Возвращаемое значение** @@ -151,4 +151,3 @@ regionToTopContinent(id[, geobase]); `ua` и `uk` обозначают одно и то же - украинский язык. 
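
Для наглядности приведем условный набросок запроса к описанной выше функции `regionToTopContinent` (предполагается, что геобаза Яндекса подключена; идентификатор региона 213 взят только для примера, фактический результат зависит от содержимого геобазы):

```sql
SELECT regionToTopContinent(toUInt32(213)) AS continent_id;
-- вернет идентификатор континента верхнего уровня для региона 213;
-- без подключенной геобазы запрос завершится ошибкой
```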
-[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/functions/ym_dict_functions/)
diff --git a/docs/ru/sql-reference/index.md b/docs/ru/sql-reference/index.md
index 7aea530c7ee..62d6a9cecde 100644
--- a/docs/ru/sql-reference/index.md
+++ b/docs/ru/sql-reference/index.md
@@ -13,4 +13,3 @@ toc_title: hidden
- [ALTER](statements/alter/index.md#query_language_queries_alter)
- [Прочие виды запросов](statements/misc.md)

-[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/)
diff --git a/docs/ru/sql-reference/operators/in.md b/docs/ru/sql-reference/operators/in.md
index e0412747898..b092dd365bf 100644
--- a/docs/ru/sql-reference/operators/in.md
+++ b/docs/ru/sql-reference/operators/in.md
@@ -215,3 +215,25 @@ SELECT uniq(UserID) FROM local_table WHERE CounterID = 101500 AND UserID GLOBAL
5. Если в GLOBAL IN есть частая необходимость, то спланируйте размещение кластера ClickHouse таким образом, чтобы в каждом дата-центре была хотя бы одна реплика каждого шарда, и среди них была быстрая сеть - чтобы запрос целиком можно было бы выполнить, передавая данные в пределах одного дата-центра.

В секции `GLOBAL IN` также имеет смысл указывать локальную таблицу - в случае, если эта локальная таблица есть только на сервере-инициаторе запроса, и вы хотите воспользоваться данными из неё на удалённых серверах.
+
+### Распределенные подзапросы и max_parallel_replicas {#max_parallel_replica-subqueries}
+
+Если настройка `max_parallel_replicas` больше 1, распределенные запросы преобразуются. Например, следующий запрос:
+
+```sql
+SELECT CounterID, count() FROM distributed_table_1 WHERE UserID IN (SELECT UserID FROM local_table_2 WHERE CounterID < 100)
+SETTINGS max_parallel_replicas=3
+```
+
+преобразуется на каждом сервере в
+
+```sql
+SELECT CounterID, count() FROM local_table_1 WHERE UserID IN (SELECT UserID FROM local_table_2 WHERE CounterID < 100)
+SETTINGS parallel_replicas_count=3, parallel_replicas_offset=M
+```
+
+где M принимает значение от 1 до 3 в зависимости от того, на какой реплике выполняется локальный запрос. Эти параметры влияют на каждую таблицу семейства MergeTree в запросе и имеют тот же эффект, что и применение `SAMPLE 1/3 OFFSET (M-1)/3` для каждой таблицы.
+
+Поэтому применение настройки `max_parallel_replicas` даст корректные результаты, только если обе таблицы имеют одинаковую схему репликации и семплированы по UserID или выражению от UserID. В частности, если local_table_2 не имеет семплирующего ключа, будут получены неверные результаты. То же правило применяется и для JOIN.
+
+Один из способов избежать этого, если local_table_2 не удовлетворяет требованиям, состоит в использовании `GLOBAL IN` или `GLOBAL JOIN`.
diff --git a/docs/ru/sql-reference/operators/index.md b/docs/ru/sql-reference/operators/index.md
index 691c398ce4c..b7cacaf7a03 100644
--- a/docs/ru/sql-reference/operators/index.md
+++ b/docs/ru/sql-reference/operators/index.md
@@ -297,4 +297,3 @@ SELECT * FROM t_null WHERE y IS NOT NULL
└───┴───┘
```

-[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/operators/)
diff --git a/docs/ru/sql-reference/statements/alter/column.md b/docs/ru/sql-reference/statements/alter/column.md
index 35a1952d842..158ab2e7385 100644
--- a/docs/ru/sql-reference/statements/alter/column.md
+++ b/docs/ru/sql-reference/statements/alter/column.md
@@ -13,6 +13,7 @@ toc_title: "Манипуляции со столбцами"
- [COMMENT COLUMN](#alter_comment-column) — добавляет комментарий к столбцу;
- [MODIFY COLUMN](#alter_modify-column) — изменяет тип столбца, выражение для значения по умолчанию и TTL.
- [MODIFY COLUMN REMOVE](#modify-remove) — удаляет какое-либо из свойств столбца.
+- [RENAME COLUMN](#alter_rename-column) — переименовывает существующий столбец.

Подробное описание для каждого действия приведено ниже.

@@ -62,6 +63,9 @@ DROP COLUMN [IF EXISTS] name

Запрос удаляет данные из файловой системы. Так как это представляет собой удаление целых файлов, запрос выполняется почти мгновенно.

+!!! warning "Предупреждение"
+    Вы не можете удалить столбец, используемый в [материализованном представлении](../../../sql-reference/statements/create/view.md#materialized). В противном случае запрос завершится ошибкой.
+
Пример:

``` sql
@@ -116,7 +120,7 @@ MODIFY COLUMN [IF EXISTS] name [type] [default_expr] [TTL] [AFTER name_after | F

- TTL

-    Примеры изменения TTL столбца смотрите в разделе [TTL столбца](ttl.md#mergetree-column-ttl).
+    Примеры изменения TTL столбца смотрите в разделе [TTL столбца](../../../engines/table-engines/mergetree-family/mergetree.md#mergetree-column-ttl).

Если указано `IF EXISTS`, запрос не возвращает ошибку, если столбца не существует.

@@ -154,10 +158,26 @@ ALTER TABLE table_name MODIFY column_name REMOVE property;
ALTER TABLE table_with_ttl MODIFY COLUMN column_ttl REMOVE TTL;
```

-## Смотрите также
+**Смотрите также**

- [REMOVE TTL](ttl.md).

+## RENAME COLUMN {#alter_rename-column}
+
+Переименовывает существующий столбец.
+
+Синтаксис:
+
+```sql
+ALTER TABLE table_name RENAME COLUMN column_name TO new_column_name
+```
+
+**Пример**
+
+```sql
+ALTER TABLE table_with_ttl RENAME COLUMN column_ttl TO column_ttl_new;
+```
+
## Ограничения запроса ALTER {#ogranicheniia-zaprosa-alter}

Запрос `ALTER` позволяет создавать и удалять отдельные элементы (столбцы) вложенных структур данных, но не вложенные структуры данных целиком. Для добавления вложенной структуры данных, вы можете добавить столбцы с именем вида `name.nested_name` и типом `Array(T)` - вложенная структура данных полностью эквивалентна нескольким столбцам-массивам с именем, имеющим одинаковый префикс до точки.

@@ -170,4 +190,3 @@ ALTER TABLE table_with_ttl MODIFY COLUMN column_ttl REMOVE TTL;

Для таблиц, которые не хранят данные самостоятельно (типа [Merge](../../../sql-reference/statements/alter/index.md) и [Distributed](../../../sql-reference/statements/alter/index.md)), `ALTER` всего лишь меняет структуру таблицы, но не меняет структуру подчинённых таблиц. Для примера, при ALTER-е таблицы типа `Distributed`, вам также потребуется выполнить запрос `ALTER` для таблиц на всех удалённых серверах.

-[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/alter/column/)
diff --git a/docs/ru/sql-reference/statements/alter/constraint.md b/docs/ru/sql-reference/statements/alter/constraint.md
index 13396f33621..452bf649415 100644
--- a/docs/ru/sql-reference/statements/alter/constraint.md
+++ b/docs/ru/sql-reference/statements/alter/constraint.md
@@ -20,4 +20,3 @@ ALTER TABLE [db].name DROP CONSTRAINT constraint_name;

Запрос на изменение ограничений для Replicated таблиц реплицируется, сохраняя новые метаданные в ZooKeeper и применяя изменения на всех репликах.
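
Для иллюстрации приведем условный набросок работы с ограничениями (имена таблицы и ограничения выбраны произвольно); при нарушении условия запрос `INSERT` завершится ошибкой:

```sql
ALTER TABLE users ADD CONSTRAINT id_is_positive CHECK id > 0;
-- теперь вставка строки с id <= 0 будет отклонена с ошибкой
ALTER TABLE users DROP CONSTRAINT id_is_positive;
-- ограничение снято, проверка при вставке больше не выполняется
```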
-[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/alter/constraint/) \ No newline at end of file diff --git a/docs/ru/sql-reference/statements/alter/delete.md b/docs/ru/sql-reference/statements/alter/delete.md index ee5f03d9d95..70a411dab83 100644 --- a/docs/ru/sql-reference/statements/alter/delete.md +++ b/docs/ru/sql-reference/statements/alter/delete.md @@ -26,4 +26,3 @@ ALTER TABLE [db.]table [ON CLUSTER cluster] DELETE WHERE filter_expr - [Синхронность запросов ALTER](../../../sql-reference/statements/alter/index.md#synchronicity-of-alter-queries) - [mutations_sync](../../../operations/settings/settings.md#mutations_sync) setting -[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/alter/delete/) \ No newline at end of file diff --git a/docs/ru/sql-reference/statements/alter/index.md b/docs/ru/sql-reference/statements/alter/index.md index 830c4a5745b..648fb7e7c5c 100644 --- a/docs/ru/sql-reference/statements/alter/index.md +++ b/docs/ru/sql-reference/statements/alter/index.md @@ -69,4 +69,3 @@ ALTER TABLE [db.]table MATERIALIZE INDEX name IN PARTITION partition_name Для запросов `ALTER TABLE ... UPDATE|DELETE` синхронность выполнения определяется настройкой [mutations_sync](../../../operations/settings/settings.md#mutations_sync). -[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/alter/index/) \ No newline at end of file diff --git a/docs/ru/sql-reference/statements/alter/index/index.md b/docs/ru/sql-reference/statements/alter/index/index.md index a42bccd7b47..632f11ed906 100644 --- a/docs/ru/sql-reference/statements/alter/index/index.md +++ b/docs/ru/sql-reference/statements/alter/index/index.md @@ -9,8 +9,9 @@ toc_title: "Манипуляции с индексами" Добавить или удалить индекс можно с помощью операций ``` sql -ALTER TABLE [db].name ADD INDEX name expression TYPE type GRANULARITY value [AFTER name] -ALTER TABLE [db].name DROP INDEX name +ALTER TABLE [db.]name ADD INDEX name expression TYPE type GRANULARITY value [AFTER name] +ALTER TABLE [db.]name DROP INDEX name +ALTER TABLE [db.]table MATERIALIZE INDEX name IN PARTITION partition_name ``` Поддерживается только таблицами семейства `*MergeTree`. @@ -18,7 +19,7 @@ ALTER TABLE [db].name DROP INDEX name Команда `ADD INDEX` добавляет описание индексов в метаданные, а `DROP INDEX` удаляет индекс из метаданных и стирает файлы индекса с диска, поэтому они легковесные и работают мгновенно. Если индекс появился в метаданных, то он начнет считаться в последующих слияниях и записях в таблицу, а не сразу после выполнения операции `ALTER`. +`MATERIALIZE INDEX` - перестраивает индекс в указанной партиции. Реализовано как мутация. Запрос на изменение индексов реплицируется, сохраняя новые метаданные в ZooKeeper и применяя изменения на всех репликах. -[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/alter/index/index/) \ No newline at end of file diff --git a/docs/ru/sql-reference/statements/alter/order-by.md b/docs/ru/sql-reference/statements/alter/order-by.md index 32c0e382445..f0a9bfe3730 100644 --- a/docs/ru/sql-reference/statements/alter/order-by.md +++ b/docs/ru/sql-reference/statements/alter/order-by.md @@ -19,4 +19,3 @@ MODIFY ORDER BY new_expression сортировки, разрешено добавлять в ключ только новые столбцы (т.е. столбцы, добавляемые командой `ADD COLUMN` в том же запросе `ALTER`), у которых нет выражения по умолчанию. 
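
Ниже условный пример типичного сценария: новый столбец без выражения по умолчанию добавляется и включается в ключ сортировки одним запросом `ALTER` (имена таблицы и столбцов предположительные, прежний ключ сортировки считается равным `event_date`):

```sql
ALTER TABLE events ADD COLUMN user_id UInt64, MODIFY ORDER BY (event_date, user_id);
-- новый столбец user_id можно добавить в конец ключа сортировки,
-- так как он создается в том же запросе ALTER и не имеет выражения по умолчанию
```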
-[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/alter/order-by/)
\ No newline at end of file
diff --git a/docs/ru/sql-reference/statements/alter/partition.md b/docs/ru/sql-reference/statements/alter/partition.md
index 8776c70c89e..02a87406e86 100644
--- a/docs/ru/sql-reference/statements/alter/partition.md
+++ b/docs/ru/sql-reference/statements/alter/partition.md
@@ -38,7 +38,7 @@ ALTER TABLE mt DETACH PART 'all_2_2_0';

После того как запрос будет выполнен, вы сможете производить любые операции с данными в директории `detached`. Например, можно удалить их из файловой системы.

-Запрос реплицируется — данные будут перенесены в директорию `detached` и забыты на всех репликах. Обратите внимание, запрос может быть отправлен только на реплику-лидер. Чтобы узнать, является ли реплика лидером, выполните запрос `SELECT` к системной таблице [system.replicas](../../../operations/system-tables/replicas.md#system_tables-replicas). Либо можно выполнить запрос `DETACH` на всех репликах — тогда на всех репликах, кроме реплики-лидера, запрос вернет ошибку.
+Запрос реплицируется — данные будут перенесены в директорию `detached` и забыты на всех репликах. Обратите внимание, запрос может быть отправлен только на реплику-лидер. Чтобы узнать, является ли реплика лидером, выполните запрос `SELECT` к системной таблице [system.replicas](../../../operations/system-tables/replicas.md#system_tables-replicas). Либо можно выполнить запрос `DETACH` на всех репликах — тогда на всех репликах, кроме реплик-лидеров (поскольку допускается несколько лидеров), запрос вернет ошибку.

## DROP PARTITION\|PART {#alter_drop-partition}

@@ -83,9 +83,13 @@ ALTER TABLE visits ATTACH PART 201901_2_2_0;

Как корректно задать имя партиции или куска, см. в разделе [Как задавать имя партиции в запросах ALTER](#alter-how-to-specify-part-expr).

-Этот запрос реплицируется. Реплика-иницатор проверяет, есть ли данные в директории `detached`. Если данные есть, то запрос проверяет их целостность. В случае успеха данные добавляются в таблицу. Все остальные реплики загружают данные с реплики-инициатора запроса.
+Этот запрос реплицируется. Реплика-инициатор проверяет, есть ли данные в директории `detached`.
+Если данные есть, то запрос проверяет их целостность. В случае успеха данные добавляются в таблицу.

-Это означает, что вы можете разместить данные в директории `detached` на одной реплике и с помощью запроса `ALTER ... ATTACH` добавить их в таблицу на всех репликах.
+Если реплика, не являющаяся инициатором запроса, получив команду присоединения, находит кусок с правильными контрольными суммами в своей собственной папке `detached`, она присоединяет данные, не скачивая их с других реплик.
+Если нет куска с правильными контрольными суммами, данные загружаются из любой реплики, имеющей этот кусок.
+
+Вы можете поместить данные в директорию `detached` на одной реплике и с помощью запроса `ALTER ... ATTACH` добавить их в таблицу на всех репликах.

## ATTACH PARTITION FROM {#alter_attach-partition-from}

``` sql
ALTER TABLE table2 ATTACH PARTITION partition_expr FROM table1
```

-Копирует партицию из таблицы `table1` в таблицу `table2` и добавляет к существующим данным `table2`. Данные из `table1` не удаляются.
+Копирует партицию из таблицы `table1` в таблицу `table2`.
+Обратите внимание, что данные не удаляются ни из `table1`, ни из `table2`.
Следует иметь в виду: @@ -305,5 +310,3 @@ OPTIMIZE TABLE table_not_partitioned PARTITION tuple() FINAL; `IN PARTITION` указывает на партицию, для которой применяются выражения [UPDATE](../../../sql-reference/statements/alter/update.md#alter-table-update-statements) или [DELETE](../../../sql-reference/statements/alter/delete.md#alter-mutations) в результате запроса `ALTER TABLE`. Новые куски создаются только в указанной партиции. Таким образом, `IN PARTITION` помогает снизить нагрузку, когда таблица разбита на множество партиций, а вам нужно обновить данные лишь точечно. Примеры запросов `ALTER ... PARTITION` можно посмотреть в тестах: [`00502_custom_partitioning_local`](https://github.com/ClickHouse/ClickHouse/blob/master/tests/queries/0_stateless/00502_custom_partitioning_local.sql) и [`00502_custom_partitioning_replicated_zookeeper`](https://github.com/ClickHouse/ClickHouse/blob/master/tests/queries/0_stateless/00502_custom_partitioning_replicated_zookeeper.sql). - -[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/alter/partition/) diff --git a/docs/ru/sql-reference/statements/alter/quota.md b/docs/ru/sql-reference/statements/alter/quota.md index 0bdac1381da..2c73b8dace3 100644 --- a/docs/ru/sql-reference/statements/alter/quota.md +++ b/docs/ru/sql-reference/statements/alter/quota.md @@ -14,14 +14,14 @@ ALTER QUOTA [IF EXISTS] name [ON CLUSTER cluster_name] [RENAME TO new_name] [KEYED BY {user_name | ip_address | client_key | client_key,user_name | client_key,ip_address} | NOT KEYED] [FOR [RANDOMIZED] INTERVAL number {second | minute | hour | day | week | month | quarter | year} - {MAX { {queries | errors | result_rows | result_bytes | read_rows | read_bytes | execution_time} = number } [,...] | + {MAX { {queries | query_selects | query_inserts | errors | result_rows | result_bytes | read_rows | read_bytes | execution_time} = number } [,...] | NO LIMITS | TRACKING ONLY} [,...]] [TO {role [,...] | ALL | ALL EXCEPT role [,...]}] ``` Ключи `user_name`, `ip_address`, `client_key`, `client_key, user_name` и `client_key, ip_address` соответствуют полям таблицы [system.quotas](../../../operations/system-tables/quotas.md). -Параметры `queries`, `errors`, `result_rows`, `result_bytes`, `read_rows`, `read_bytes`, `execution_time` соответствуют полям таблицы [system.quotas_usage](../../../operations/system-tables/quotas_usage.md). +Параметры `queries`, `query_selects`, `query_inserts`, `errors`, `result_rows`, `result_bytes`, `read_rows`, `read_bytes`, `execution_time` соответствуют полям таблицы [system.quotas_usage](../../../operations/system-tables/quotas_usage.md). В секции `ON CLUSTER` можно указать кластеры, на которых создается квота, см. [Распределенные DDL запросы](../../../sql-reference/distributed-ddl.md). 
@@ -37,6 +37,4 @@ ALTER QUOTA IF EXISTS qA FOR INTERVAL 15 month MAX queries = 123 TO CURRENT_USER ``` sql ALTER QUOTA IF EXISTS qB FOR INTERVAL 30 minute MAX execution_time = 0.5, FOR INTERVAL 5 quarter MAX queries = 321, errors = 10 TO default; -``` - -[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/alter/quota/) +``` \ No newline at end of file diff --git a/docs/ru/sql-reference/statements/alter/role.md b/docs/ru/sql-reference/statements/alter/role.md index 69f7c5828c5..e9ce62c58d5 100644 --- a/docs/ru/sql-reference/statements/alter/role.md +++ b/docs/ru/sql-reference/statements/alter/role.md @@ -15,4 +15,3 @@ ALTER ROLE [IF EXISTS] name1 [ON CLUSTER cluster_name1] [RENAME TO new_name1] [SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [READONLY|WRITABLE] | PROFILE 'profile_name'] [,...] ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/alter/role/) diff --git a/docs/ru/sql-reference/statements/alter/row-policy.md b/docs/ru/sql-reference/statements/alter/row-policy.md index e2d23cda3ff..cff4d4e497a 100644 --- a/docs/ru/sql-reference/statements/alter/row-policy.md +++ b/docs/ru/sql-reference/statements/alter/row-policy.md @@ -18,4 +18,3 @@ ALTER [ROW] POLICY [IF EXISTS] name1 [ON CLUSTER cluster_name1] ON [database1.]t [TO {role [,...] | ALL | ALL EXCEPT role [,...]}] ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/alter/row-policy/) \ No newline at end of file diff --git a/docs/ru/sql-reference/statements/alter/settings-profile.md b/docs/ru/sql-reference/statements/alter/settings-profile.md index 54502901837..9b8646919ca 100644 --- a/docs/ru/sql-reference/statements/alter/settings-profile.md +++ b/docs/ru/sql-reference/statements/alter/settings-profile.md @@ -15,4 +15,3 @@ ALTER SETTINGS PROFILE [IF EXISTS] TO name1 [ON CLUSTER cluster_name1] [RENAME T [SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [READONLY|WRITABLE] | INHERIT 'profile_name'] [,...] ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/alter/settings-profile) \ No newline at end of file diff --git a/docs/ru/sql-reference/statements/alter/ttl.md b/docs/ru/sql-reference/statements/alter/ttl.md index 5721ec9cf27..2a2d10b69de 100644 --- a/docs/ru/sql-reference/statements/alter/ttl.md +++ b/docs/ru/sql-reference/statements/alter/ttl.md @@ -18,7 +18,7 @@ ALTER TABLE table-name MODIFY TTL ttl-expression Удалить табличный TTL можно запросом следующего вида: ```sql -ALTER TABLE table_name REMOVE TTL +ALTER TABLE table_name REMOVE TTL ``` **Пример** @@ -64,7 +64,7 @@ ALTER TABLE table_with_ttl REMOVE TTL; Заново вставляем удаленную строку и снова принудительно запускаем очистку по `TTL` с помощью `OPTIMIZE`: -```sql +```sql INSERT INTO table_with_ttl VALUES (now() - INTERVAL 4 MONTH, 2, 'username2'); OPTIMIZE TABLE table_with_ttl FINAL; SELECT * FROM table_with_ttl; @@ -81,6 +81,5 @@ SELECT * FROM table_with_ttl; ### Смотрите также -- Подробнее о [свойстве TTL](../../../engines/table-engines/mergetree-family/mergetree#table_engine-mergetree-ttl). - -[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/alter/ttl/) +- Подробнее о [свойстве TTL](../../../engines/table-engines/mergetree-family/mergetree.md#mergetree-column-ttl). +- Изменить столбец [с TTL](../../../sql-reference/statements/alter/column.md#alter_modify-column). 
\ No newline at end of file diff --git a/docs/ru/sql-reference/statements/alter/update.md b/docs/ru/sql-reference/statements/alter/update.md index e3d6725419a..206412d4be9 100644 --- a/docs/ru/sql-reference/statements/alter/update.md +++ b/docs/ru/sql-reference/statements/alter/update.md @@ -26,4 +26,3 @@ ALTER TABLE [db.]table UPDATE column1 = expr1 [, ...] WHERE filter_expr - [Синхронность запросов ALTER](../../../sql-reference/statements/alter/index.md#synchronicity-of-alter-queries) - [mutations_sync](../../../operations/settings/settings.md#mutations_sync) setting -[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/alter/update/) \ No newline at end of file diff --git a/docs/ru/sql-reference/statements/alter/user.md b/docs/ru/sql-reference/statements/alter/user.md index 41574f74200..53d090f8eab 100644 --- a/docs/ru/sql-reference/statements/alter/user.md +++ b/docs/ru/sql-reference/statements/alter/user.md @@ -12,10 +12,10 @@ toc_title: USER ``` sql ALTER USER [IF EXISTS] name1 [ON CLUSTER cluster_name1] [RENAME TO new_name1] [, name2 [ON CLUSTER cluster_name2] [RENAME TO new_name2] ...] - [IDENTIFIED [WITH {PLAINTEXT_PASSWORD|SHA256_PASSWORD|DOUBLE_SHA1_PASSWORD}] BY {'password'|'hash'}] - [[ADD|DROP] HOST {LOCAL | NAME 'name' | REGEXP 'name_regexp' | IP 'address' | LIKE 'pattern'} [,...] | ANY | NONE] + [NOT IDENTIFIED | IDENTIFIED {[WITH {no_password | plaintext_password | sha256_password | sha256_hash | double_sha1_password | double_sha1_hash}] BY {'password' | 'hash'}} | {WITH ldap SERVER 'server_name'} | {WITH kerberos [REALM 'realm']}] + [[ADD | DROP] HOST {LOCAL | NAME 'name' | REGEXP 'name_regexp' | IP 'address' | LIKE 'pattern'} [,...] | ANY | NONE] [DEFAULT ROLE role [,...] | ALL | ALL EXCEPT role [,...] ] - [SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [READONLY|WRITABLE] | PROFILE 'profile_name'] [,...] + [SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [READONLY | WRITABLE] | PROFILE 'profile_name'] [,...] ``` Для выполнения `ALTER USER` необходима привилегия [ALTER USER](../grant.md#grant-access-management). @@ -44,4 +44,3 @@ ALTER USER user DEFAULT ROLE ALL ALTER USER user DEFAULT ROLE ALL EXCEPT role1, role2 ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/alter/user/) \ No newline at end of file diff --git a/docs/ru/sql-reference/statements/attach.md b/docs/ru/sql-reference/statements/attach.md index 259ab893e63..b135507b818 100644 --- a/docs/ru/sql-reference/statements/attach.md +++ b/docs/ru/sql-reference/statements/attach.md @@ -5,19 +5,55 @@ toc_title: ATTACH # ATTACH Statement {#attach} -Запрос полностью аналогичен запросу `CREATE`, но: +Выполняет подключение таблицы, например, при перемещении базы данных на другой сервер. -- вместо слова `CREATE` используется слово `ATTACH`; -- запрос не создаёт данные на диске, а предполагает, что данные уже лежат в соответствующих местах, и всего лишь добавляет информацию о таблице на сервер. После выполнения запроса `ATTACH` сервер будет знать о существовании таблицы. +Запрос не создаёт данные на диске, а предполагает, что данные уже лежат в соответствующих местах, и всего лишь добавляет информацию о таблице на сервер. После выполнения запроса `ATTACH` сервер будет знать о существовании таблицы. -Если таблица перед этим была отсоединена (`DETACH`), т.е. её структура известна, можно использовать сокращенную форму записи без определения структуры. 
+Если таблица перед этим была отключена при помощи запроса [DETACH](../../sql-reference/statements/detach.md), т.е. её структура известна, можно использовать сокращенную форму записи без определения структуры.
+
+## Варианты синтаксиса {#syntax-forms}
+
+### Присоединение существующей таблицы {#attach-existing-table}

``` sql
ATTACH TABLE [IF NOT EXISTS] [db.]name [ON CLUSTER cluster]
```

-Этот запрос используется при старте сервера. Сервер хранит метаданные таблиц в виде файлов с запросами `ATTACH`, которые он просто исполняет при запуске (за исключением системных таблиц, которые явно создаются на сервере).
+Этот запрос используется при старте сервера. Сервер хранит метаданные таблиц в виде файлов с запросами `ATTACH`, которые он просто исполняет при запуске (за исключением некоторых системных таблиц, которые явно создаются на сервере).
+Если таблица была отключена перманентно, она не будет подключена обратно во время старта сервера, так что нужно явно использовать запрос `ATTACH`, чтобы подключить ее.

-[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/statements/attach/)
+### Создание новой таблицы и присоединение данных {#create-new-table-and-attach-data}
+
+**С указанием пути к табличным данным**
+
+```sql
+ATTACH TABLE name FROM 'path/to/data/' (col1 Type1, ...)
+```
+
+Создает новую таблицу с указанной структурой и присоединяет табличные данные из соответствующего каталога в `user_files`.
+
+**Пример**
+
+Запрос:
+
+```sql
+DROP TABLE IF EXISTS test;
+INSERT INTO TABLE FUNCTION file('01188_attach/test/data.TSV', 'TSV', 's String, n UInt8') VALUES ('test', 42);
+ATTACH TABLE test FROM '01188_attach/test' (s String, n UInt8) ENGINE = File(TSV);
+SELECT * FROM test;
+```
+
+Результат:
+
+```text
+┌─s────┬──n─┐
+│ test │ 42 │
+└──────┴────┘
+```
+
+**С указанием UUID таблицы** (только для баз данных `Atomic`)
+
+```sql
+ATTACH TABLE name UUID '<uuid>' (col1 Type1, ...)
+```
+
+Создает новую таблицу с указанной структурой и присоединяет данные из таблицы с указанным UUID.
diff --git a/docs/ru/sql-reference/statements/check-table.md b/docs/ru/sql-reference/statements/check-table.md
index 3dc135d87c6..9592c1a5bc2 100644
--- a/docs/ru/sql-reference/statements/check-table.md
+++ b/docs/ru/sql-reference/statements/check-table.md
@@ -29,9 +29,36 @@ CHECK TABLE [db.]name

В движках `*Log` не предусмотрено автоматическое восстановление данных после сбоя. Используйте запрос `CHECK TABLE`, чтобы своевременно выявлять повреждение данных.

-Для движков из семейства `MergeTree` запрос `CHECK TABLE` показывает статус проверки для каждого отдельного куска данных таблицы на локальном сервере.
+## Проверка таблиц семейства MergeTree {#checking-mergetree-tables}

-**Что делать, если данные повреждены**
+Для таблиц семейства `MergeTree`, если [check_query_single_value_result](../../operations/settings/settings.md#check_query_single_value_result) = 0, запрос `CHECK TABLE` возвращает статус каждого куска данных таблицы на локальном сервере.
+
+```sql
+SET check_query_single_value_result = 0;
+CHECK TABLE test_table;
+```
+
+```text
+┌─part_path─┬─is_passed─┬─message─┐
+│ all_1_4_1 │         1 │         │
+│ all_1_4_2 │         1 │         │
+└───────────┴───────────┴─────────┘
+```
+
+Если `check_query_single_value_result` = 1, запрос `CHECK TABLE` возвращает статус таблицы в целом.
+ +```sql +SET check_query_single_value_result = 1; +CHECK TABLE test_table; +``` + +```text +┌─result─┐ +│ 1 │ +└────────┘ +``` + +## Что делать, если данные повреждены {#if-data-is-corrupted} В этом случае можно скопировать оставшиеся неповрежденные данные в другую таблицу. Для этого: @@ -41,4 +68,3 @@ CHECK TABLE [db.]name 4. Перезапустите `clickhouse-client`, чтобы вернуть предыдущее значение параметра `max_threads`. -[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/statements/check-table/) \ No newline at end of file diff --git a/docs/ru/sql-reference/statements/create/database.md b/docs/ru/sql-reference/statements/create/database.md index 0e880517134..7d19f3e8f17 100644 --- a/docs/ru/sql-reference/statements/create/database.md +++ b/docs/ru/sql-reference/statements/create/database.md @@ -31,5 +31,4 @@ CREATE DATABASE [IF NOT EXISTS] db_name [ON CLUSTER cluster] [ENGINE = engine(.. По умолчанию ClickHouse использует собственный движок баз данных. -[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/statements/create/database) diff --git a/docs/ru/sql-reference/statements/create/dictionary.md b/docs/ru/sql-reference/statements/create/dictionary.md index dba2aa61ca1..a41b2cb9ad5 100644 --- a/docs/ru/sql-reference/statements/create/dictionary.md +++ b/docs/ru/sql-reference/statements/create/dictionary.md @@ -27,5 +27,4 @@ LIFETIME({MIN min_val MAX max_val | max_val}) Смотрите [Внешние словари](../../../sql-reference/dictionaries/external-dictionaries/external-dicts.md). -[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/statements/create/dictionary) \ No newline at end of file diff --git a/docs/ru/sql-reference/statements/create/index.md b/docs/ru/sql-reference/statements/create/index.md index 70961e4f404..dfa5c28fff7 100644 --- a/docs/ru/sql-reference/statements/create/index.md +++ b/docs/ru/sql-reference/statements/create/index.md @@ -18,4 +18,3 @@ toc_title: "Обзор" - [QUOTA](../../../sql-reference/statements/create/quota.md) - [SETTINGS PROFILE](../../../sql-reference/statements/create/settings-profile.md) -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/statements/create/) diff --git a/docs/ru/sql-reference/statements/create/quota.md b/docs/ru/sql-reference/statements/create/quota.md index f5ac0df010e..38957ed8c6d 100644 --- a/docs/ru/sql-reference/statements/create/quota.md +++ b/docs/ru/sql-reference/statements/create/quota.md @@ -13,13 +13,13 @@ toc_title: "Квота" CREATE QUOTA [IF NOT EXISTS | OR REPLACE] name [ON CLUSTER cluster_name] [KEYED BY {user_name | ip_address | client_key | client_key, user_name | client_key, ip_address} | NOT KEYED] [FOR [RANDOMIZED] INTERVAL number {second | minute | hour | day | week | month | quarter | year} - {MAX { {queries | errors | result_rows | result_bytes | read_rows | read_bytes | execution_time} = number } [,...] | + {MAX { {queries | query_selects | query_inserts | errors | result_rows | result_bytes | read_rows | read_bytes | execution_time} = number } [,...] | NO LIMITS | TRACKING ONLY} [,...]] [TO {role [,...] | ALL | ALL EXCEPT role [,...]}] ``` Ключи `user_name`, `ip_address`, `client_key`, `client_key, user_name` и `client_key, ip_address` соответствуют полям таблицы [system.quotas](../../../operations/system-tables/quotas.md). -Параметры `queries`, `errors`, `result_rows`, `result_bytes`, `read_rows`, `read_bytes`, `execution_time` соответствуют полям таблицы [system.quotas_usage](../../../operations/system-tables/quotas_usage.md). 
+Параметры `queries`, `query_selects`, `query_inserts`, `errors`, `result_rows`, `result_bytes`, `read_rows`, `read_bytes`, `execution_time` соответствуют полям таблицы [system.quotas_usage](../../../operations/system-tables/quotas_usage.md). В секции `ON CLUSTER` можно указать кластеры, на которых создается квота, см. [Распределенные DDL запросы](../../../sql-reference/distributed-ddl.md). @@ -35,7 +35,4 @@ CREATE QUOTA qA FOR INTERVAL 15 month MAX queries = 123 TO CURRENT_USER; ``` sql CREATE QUOTA qB FOR INTERVAL 30 minute MAX execution_time = 0.5, FOR INTERVAL 5 quarter MAX queries = 321, errors = 10 TO default; -``` - -[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/statements/create/quota) - \ No newline at end of file +``` \ No newline at end of file diff --git a/docs/ru/sql-reference/statements/create/role.md b/docs/ru/sql-reference/statements/create/role.md index 8592f263156..16450b41126 100644 --- a/docs/ru/sql-reference/statements/create/role.md +++ b/docs/ru/sql-reference/statements/create/role.md @@ -46,5 +46,4 @@ SET ROLE accountant; SELECT * FROM db.*; ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/statements/create/role) \ No newline at end of file diff --git a/docs/ru/sql-reference/statements/create/row-policy.md b/docs/ru/sql-reference/statements/create/row-policy.md index 75f6fdfd2e1..6fe1dc45815 100644 --- a/docs/ru/sql-reference/statements/create/row-policy.md +++ b/docs/ru/sql-reference/statements/create/row-policy.md @@ -5,7 +5,7 @@ toc_title: "Политика доступа" # CREATE ROW POLICY {#create-row-policy-statement} -Создает [фильтры для строк](../../../operations/access-rights.md#row-policy-management), которые пользователь может прочесть из таблицы. +Создает [политики доступа к строкам](../../../operations/access-rights.md#row-policy-management), т.е. фильтры, которые определяют, какие строки пользователь может читать из таблицы. Синтаксис: @@ -13,34 +13,74 @@ toc_title: "Политика доступа" CREATE [ROW] POLICY [IF NOT EXISTS | OR REPLACE] policy_name1 [ON CLUSTER cluster_name1] ON [db1.]table1 [, policy_name2 [ON CLUSTER cluster_name2] ON [db2.]table2 ...] [AS {PERMISSIVE | RESTRICTIVE}] - [FOR SELECT] - [USING condition] + [FOR SELECT] USING condition [TO {role [,...] | ALL | ALL EXCEPT role [,...]}] ``` -Секция `ON CLUSTER` позволяет создавать фильтры для строк на кластере, см. [Распределенные DDL запросы](../../../sql-reference/distributed-ddl.md). +## Секция USING {#create-row-policy-using} -## Секция AS {#create-row-policy-as} - -С помощью данной секции можно создать политику разрешения или ограничения. - -Политика разрешения предоставляет доступ к строкам. Разрешительные политики, которые применяются к одной таблице, объединяются с помощью логического оператора `OR`. Политики являются разрешительными по умолчанию. - -Политика ограничения запрещает доступ к строкам. Ограничительные политики, которые применяются к одной таблице, объединяются логическим оператором `AND`. - -Ограничительные политики применяются к строкам, прошедшим фильтр разрешительной политики. Если вы не зададите разрешительные политики, пользователь не сможет обращаться ни к каким строкам из таблицы. +Секция `USING` указывает условие для фильтрации строк. Пользователь может видеть строку, если это условие, вычисленное для строки, дает ненулевой результат. ## Секция TO {#create-row-policy-to} -В секции `TO` вы можете перечислить как роли, так и пользователей. Например, `CREATE ROW POLICY ... TO accountant, john@localhost`. 
+В секции `TO` перечисляются пользователи и роли, для которых должна действовать политика. Например, `CREATE ROW POLICY ... TO accountant, john@localhost`.

Ключевым словом `ALL` обозначаются все пользователи, включая текущего. Ключевые слова `ALL EXCEPT` позволяют исключить пользователей из списка всех пользователей. Например, `CREATE ROW POLICY ... TO ALL EXCEPT accountant, john@localhost`.

+!!! note "Note"
+    Если для таблицы не задано ни одной политики доступа к строкам, то любой пользователь может выполнить команду SELECT и получить все строки таблицы. Если определить хотя бы одну политику для таблицы, то доступ к строкам будет управляться этими политиками, причем для всех пользователей (даже для тех, для кого политики не определялись). Например, следующая политика
+
+    `CREATE ROW POLICY pol1 ON mydb.table1 USING b=1 TO mira, peter`
+
+    запретит пользователям `mira` и `peter` видеть строки с `b != 1`, а всем остальным пользователям (например, пользователю `paul`) вообще запретит видеть какие-либо строки из таблицы `mydb.table1`.
+
+    Если это нежелательно, такое поведение можно исправить, определив дополнительную политику:
+
+    `CREATE ROW POLICY pol2 ON mydb.table1 USING 1 TO ALL EXCEPT mira, peter`
+
+## Секция AS {#create-row-policy-as}
+
+Для одной и той же таблицы и одного и того же пользователя может быть одновременно активно несколько политик. Поэтому нужен способ их комбинировать.
+
+По умолчанию политики комбинируются с использованием логического оператора `OR`. Например, политики:
+
+``` sql
+CREATE ROW POLICY pol1 ON mydb.table1 USING b=1 TO mira, peter
+CREATE ROW POLICY pol2 ON mydb.table1 USING c=2 TO peter, antonio
+```
+
+разрешат пользователю с именем `peter` видеть строки, для которых будет верно `b=1` или `c=2`.
+
+Секция `AS` указывает, как политики должны комбинироваться с другими политиками. Политики могут быть или разрешительными (`PERMISSIVE`), или ограничительными (`RESTRICTIVE`). По умолчанию политики создаются разрешительными (`PERMISSIVE`); такие политики комбинируются с использованием логического оператора `OR`.
+
+Ограничительные (`RESTRICTIVE`) политики комбинируются с использованием логического оператора `AND`.
+
+Общая формула выглядит так:
+
+```
+строка_видима = (одна или больше permissive-политик дала ненулевой результат проверки условия) И
+                (все restrictive-политики дали ненулевой результат проверки условия)
+```
+
+Например, политики
+
+``` sql
+CREATE ROW POLICY pol1 ON mydb.table1 USING b=1 TO mira, peter
+CREATE ROW POLICY pol2 ON mydb.table1 USING c=2 AS RESTRICTIVE TO peter, antonio
+```
+
+разрешат пользователю с именем `peter` видеть только те строки, для которых будет одновременно `b=1` и `c=2`.
+
+## Секция ON CLUSTER {#create-row-policy-on-cluster}
+
+Секция `ON CLUSTER` позволяет создавать политики на кластере, см. [Распределенные DDL запросы](../../../sql-reference/distributed-ddl.md).
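
Условный пример создания политики сразу на всех узлах кластера (имя кластера `test_cluster` предположительное и должно присутствовать в конфигурации серверов):

```sql
CREATE ROW POLICY pol_distributed ON CLUSTER test_cluster ON mydb.table1 USING b=1 TO mira, peter;
-- имя кластера test_cluster условное; политика будет создана на каждом узле кластера
```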
+ ## Примеры -`CREATE ROW POLICY filter ON mydb.mytable FOR SELECT USING a<1000 TO accountant, john@localhost` +`CREATE ROW POLICY filter1 ON mydb.mytable USING a<1000 TO accountant, john@localhost` -`CREATE ROW POLICY filter ON mydb.mytable FOR SELECT USING a<1000 TO ALL EXCEPT mira` +`CREATE ROW POLICY filter2 ON mydb.mytable USING a<1000 AND b=5 TO ALL EXCEPT mira` + +`CREATE ROW POLICY filter3 ON mydb.mytable USING 1 TO admin` -[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/statements/create/row-policy) \ No newline at end of file diff --git a/docs/ru/sql-reference/statements/create/settings-profile.md b/docs/ru/sql-reference/statements/create/settings-profile.md index 5838ddc9153..522caf04c80 100644 --- a/docs/ru/sql-reference/statements/create/settings-profile.md +++ b/docs/ru/sql-reference/statements/create/settings-profile.md @@ -25,5 +25,4 @@ CREATE SETTINGS PROFILE [IF NOT EXISTS | OR REPLACE] TO name1 [ON CLUSTER cluste CREATE SETTINGS PROFILE max_memory_usage_profile SETTINGS max_memory_usage = 100000001 MIN 90000000 MAX 110000000 TO robin ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/statements/create/settings-profile) \ No newline at end of file diff --git a/docs/ru/sql-reference/statements/create/table.md b/docs/ru/sql-reference/statements/create/table.md index 8e2c471e548..1ccd0a600f3 100644 --- a/docs/ru/sql-reference/statements/create/table.md +++ b/docs/ru/sql-reference/statements/create/table.md @@ -5,7 +5,11 @@ toc_title: "Таблица" # CREATE TABLE {#create-table-query} -Запрос `CREATE TABLE` может иметь несколько форм. +Запрос `CREATE TABLE` может иметь несколько форм, которые используются в зависимости от контекста и решаемых задач. + +По умолчанию таблицы создаются на текущем сервере. Распределенные DDL запросы создаются с помощью секции `ON CLUSTER`, которая [описана отдельно](../../../sql-reference/distributed-ddl.md). +## Варианты синтаксиса {#syntax-forms} +### С описанием структуры {#with-explicit-schema} ``` sql CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster] @@ -23,28 +27,51 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster] Также могут быть указаны выражения для значений по умолчанию - смотрите ниже. При необходимости можно указать [первичный ключ](#primary-key) с одним или несколькими ключевыми выражениями. + +### Со структурой, аналогичной другой таблице {#with-a-schema-similar-to-other-table} + ``` sql CREATE TABLE [IF NOT EXISTS] [db.]table_name AS [db2.]name2 [ENGINE = engine] ``` Создаёт таблицу с такой же структурой, как другая таблица. Можно указать другой движок для таблицы. Если движок не указан, то будет выбран такой же движок, как у таблицы `db2.name2`. +### Из табличной функции {#from-a-table-function} + ``` sql CREATE TABLE [IF NOT EXISTS] [db.]table_name AS table_function() ``` +Создаёт таблицу с такой же структурой и данными, как результат соответствующей табличной функции. Созданная таблица будет работать так же, как и указанная табличная функция. -Создаёт таблицу с такой же структурой и данными, как результат соответствующей табличной функцией. +### Из запроса SELECT {#from-select-query} ``` sql -CREATE TABLE [IF NOT EXISTS] [db.]table_name ENGINE = engine AS SELECT ... +CREATE TABLE [IF NOT EXISTS] [db.]table_name[(name1 [type1], name2 [type2], ...)] ENGINE = engine AS SELECT ... ``` -Создаёт таблицу со структурой, как результат запроса `SELECT`, с движком engine, и заполняет её данными из SELECT-а. 
+Создаёт таблицу со структурой, как результат запроса `SELECT`, с движком `engine`, и заполняет её данными из `SELECT`. Также вы можете явно задать описание столбцов. -Во всех случаях, если указано `IF NOT EXISTS`, то запрос не будет возвращать ошибку, если таблица уже существует. В этом случае, запрос будет ничего не делать. +Если таблица уже существует и указано `IF NOT EXISTS`, то запрос ничего не делает. После секции `ENGINE` в запросе могут использоваться и другие секции в зависимости от движка. Подробную документацию по созданию таблиц смотрите в описаниях [движков таблиц](../../../engines/table-engines/index.md#table_engines). +**Пример** + +Запрос: + +``` sql +CREATE TABLE t1 (x String) ENGINE = Memory AS SELECT 1; +SELECT x, toTypeName(x) FROM t1; +``` + +Результат: + +```text +┌─x─┬─toTypeName(x)─┐ +│ 1 │ String │ +└───┴───────────────┘ +``` + ## Модификатор NULL или NOT NULL {#null-modifiers} Модификатор `NULL` или `NOT NULL`, указанный после типа данных в определении столбца, позволяет или не позволяет типу данных быть [Nullable](../../../sql-reference/data-types/nullable.md#data_type-nullable). @@ -53,7 +80,7 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name ENGINE = engine AS SELECT ... Смотрите также настройку [data_type_default_nullable](../../../operations/settings/settings.md#data_type_default_nullable). -### Значения по умолчанию {#create-default-values} +## Значения по умолчанию {#create-default-values} В описании столбца, может быть указано выражение для значения по умолчанию, одного из следующих видов: `DEFAULT expr`, `MATERIALIZED expr`, `ALIAS expr`. @@ -67,16 +94,22 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name ENGINE = engine AS SELECT ... В качестве выражения для умолчания, может быть указано произвольное выражение от констант и столбцов таблицы. При создании и изменении структуры таблицы, проверяется, что выражения не содержат циклов. При INSERT-е проверяется разрешимость выражений - что все столбцы, из которых их можно вычислить, переданы. +### DEFAULT {#default} + `DEFAULT expr` Обычное значение по умолчанию. Если в запросе INSERT не указан соответствующий столбец, то он будет заполнен путём вычисления соответствующего выражения. +### MATERIALIZED {#materialized} + `MATERIALIZED expr` Материализованное выражение. Такой столбец не может быть указан при INSERT, то есть, он всегда вычисляется. При INSERT без указания списка столбцов, такие столбцы не рассматриваются. Также этот столбец не подставляется при использовании звёздочки в запросе SELECT. Это необходимо, чтобы сохранить инвариант, что дамп, полученный путём `SELECT *`, можно вставить обратно в таблицу INSERT-ом без указания списка столбцов. +### ALIAS {#alias} + `ALIAS expr` Синоним. Такой столбец вообще не хранится в таблице. @@ -118,7 +151,7 @@ PRIMARY KEY(expr1[, expr2,...]); !!! warning "Предупреждение" Вы не можете сочетать оба способа в одном запросе. -### Ограничения (constraints) {#constraints} +## Ограничения {#constraints} Наряду с объявлением столбцов можно объявить ограничения на значения в столбцах таблицы: @@ -136,11 +169,11 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster] Добавление большого числа ограничений может негативно повлиять на производительность `INSERT` запросов. -### Выражение для TTL {#vyrazhenie-dlia-ttl} +## Выражение для TTL {#vyrazhenie-dlia-ttl} Определяет время хранения значений. Может быть указано только для таблиц семейства MergeTree. 
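
Например, условный набросок таблицы, строки которой удаляются через месяц после даты в столбце `d` (имена таблицы и столбцов предположительные):

```sql
CREATE TABLE ttl_example
(
    d Date,
    value String
)
ENGINE = MergeTree
ORDER BY d
TTL d + INTERVAL 1 MONTH;
-- устаревшие строки будут удаляться в фоновом режиме при слияниях кусков
```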
Подробнее смотрите в [TTL для столбцов и таблиц](../../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-ttl).

-### Кодеки сжатия столбцов {#codecs}
+## Кодеки сжатия столбцов {#codecs}

По умолчанию, ClickHouse применяет к столбцу метод сжатия, определённый в [конфигурации сервера](../../../operations/server-configuration-parameters/settings.md). Кроме этого, можно задать метод сжатия для каждого отдельного столбца в запросе `CREATE TABLE`.

@@ -182,7 +215,18 @@ ALTER TABLE codec_example MODIFY COLUMN float_value CODEC(Default);

ClickHouse поддерживает кодеки общего назначения и специализированные кодеки.

-#### Специализированные кодеки {#create-query-specialized-codecs}
+### Кодеки общего назначения {#create-query-common-purpose-codecs}
+
+Кодеки:
+
+- `NONE` — без сжатия.
+- `LZ4` — [алгоритм сжатия без потерь](https://github.com/lz4/lz4) используемый по умолчанию. Применяет быстрое сжатие LZ4.
+- `LZ4HC[(level)]` — алгоритм LZ4 HC (high compression) с настраиваемым уровнем сжатия. Уровень по умолчанию — 9. Настройка `level <= 0` устанавливает уровень сжатия по умолчанию. Возможные уровни сжатия: \[1, 12\]. Рекомендуемый диапазон уровней: \[4, 9\].
+- `ZSTD[(level)]` — [алгоритм сжатия ZSTD](https://en.wikipedia.org/wiki/Zstandard) с настраиваемым уровнем сжатия `level`. Возможные уровни сжатия: \[1, 22\]. Уровень сжатия по умолчанию: 1.
+
+Высокие уровни сжатия полезны для асимметричных сценариев, подобных «один раз сжал, много раз распаковал». Они подразумевают лучшее сжатие, но большее использование CPU.
+
+### Специализированные кодеки {#create-query-specialized-codecs}

Эти кодеки разработаны для того, чтобы, используя особенности данных сделать сжатие более эффективным. Некоторые из этих кодеков не сжимают данные самостоятельно. Они готовят данные для кодеков общего назначения, которые сжимают подготовленные данные эффективнее, чем неподготовленные.

@@ -203,19 +247,7 @@ CREATE TABLE codec_example
) ENGINE = MergeTree()
```

-
-#### Кодеки общего назначения {#create-query-common-purpose-codecs}
-
-Кодеки:
-
-- `NONE` — без сжатия.
-- `LZ4` — [алгоритм сжатия без потерь](https://github.com/lz4/lz4) используемый по умолчанию. Применяет быстрое сжатие LZ4.
-- `LZ4HC[(level)]` — алгоритм LZ4 HC (high compression) с настраиваемым уровнем сжатия. Уровень по умолчанию — 9. Настройка `level <= 0` устанавливает уровень сжания по умолчанию. Возможные уровни сжатия: \[1, 12\]. Рекомендуемый диапазон уровней: \[4, 9\].
-- `ZSTD[(level)]` — [алгоритм сжатия ZSTD](https://en.wikipedia.org/wiki/Zstandard) с настраиваемым уровнем сжатия `level`. Возможные уровни сжатия: \[1, 22\]. Уровень сжатия по умолчанию: 1.
-
-Высокие уровни сжатия полезны для ассимметричных сценариев, подобных «один раз сжал, много раз распаковал». Высокие уровни сжатия подразумеваю лучшее сжатие, но большее использование CPU.
-
-## Временные таблицы {#vremennye-tablitsy}
+## Временные таблицы {#temporary-tables}

ClickHouse поддерживает временные таблицы со следующими характеристиками:

@@ -241,7 +273,77 @@ CREATE TEMPORARY TABLE [IF NOT EXISTS] table_name

Вместо временных можно использовать обычные таблицы с [ENGINE = Memory](../../../engines/table-engines/special/memory.md).

+## REPLACE TABLE {#replace-table-query}
+
+Запрос `REPLACE` позволяет частично изменить таблицу (структуру или данные).
+
+!!! note "Замечание"
+    Такие запросы поддерживаются только движком БД [Atomic](../../../engines/database-engines/atomic.md).
+
+Чтобы удалить часть данных из таблицы, вы можете создать новую таблицу, добавить в нее данные из старой таблицы, которые вы хотите оставить (отобрав их с помощью запроса `SELECT`), затем удалить старую таблицу и переименовать новую таблицу, присвоив ей имя старой:

+```sql
+CREATE TABLE myNewTable AS myOldTable;
+INSERT INTO myNewTable SELECT * FROM myOldTable WHERE CounterID <12345;
+DROP TABLE myOldTable;
+RENAME TABLE myNewTable TO myOldTable;
+```
+
+Вместо перечисленных выше операций можно использовать один запрос:
+
+```sql
+REPLACE TABLE myOldTable SELECT * FROM myOldTable WHERE CounterID <12345;
+```
+
+### Синтаксис
+
+```sql
+{CREATE [OR REPLACE]|REPLACE} TABLE [db.]table_name
+```
+
+Для данного запроса можно использовать любые варианты синтаксиса запроса `CREATE`. Запрос `REPLACE` для несуществующей таблицы вызовет ошибку.
+
+### Примеры
+
+Рассмотрим таблицу:
+
+```sql
+CREATE DATABASE base ENGINE = Atomic;
+CREATE OR REPLACE TABLE base.t1 (n UInt64, s String) ENGINE = MergeTree ORDER BY n;
+INSERT INTO base.t1 VALUES (1, 'test');
+SELECT * FROM base.t1;
+```
+
+```text
+┌─n─┬─s────┐
+│ 1 │ test │
+└───┴──────┘
+```
+
+Используем запрос `REPLACE` для удаления всех данных:
+
+```sql
+CREATE OR REPLACE TABLE base.t1 (n UInt64, s Nullable(String)) ENGINE = MergeTree ORDER BY n;
+INSERT INTO base.t1 VALUES (2, null);
+SELECT * FROM base.t1;
+```
+
+```text
+┌─n─┬─s──┐
+│ 2 │ \N │
+└───┴────┘
+```
+
+Используем запрос `REPLACE` для изменения структуры таблицы:
+
+```sql
+REPLACE TABLE base.t1 (n UInt64) ENGINE = MergeTree ORDER BY n;
+INSERT INTO base.t1 VALUES (3);
+SELECT * FROM base.t1;
+```
+
+```text
+┌─n─┐
+│ 3 │
+└───┘
+```
diff --git a/docs/ru/sql-reference/statements/create/user.md b/docs/ru/sql-reference/statements/create/user.md
index ac9547691e6..a487d1ac593 100644
--- a/docs/ru/sql-reference/statements/create/user.md
+++ b/docs/ru/sql-reference/statements/create/user.md
@@ -9,15 +9,17 @@ toc_title: "Пользователь"

Синтаксис:

-```sql
+``` sql
CREATE USER [IF NOT EXISTS | OR REPLACE] name1 [ON CLUSTER cluster_name1]
        [, name2 [ON CLUSTER cluster_name2] ...]
-    [IDENTIFIED [WITH {NO_PASSWORD|PLAINTEXT_PASSWORD|SHA256_PASSWORD|SHA256_HASH|DOUBLE_SHA1_PASSWORD|DOUBLE_SHA1_HASH}] BY {'password'|'hash'}]
+    [NOT IDENTIFIED | IDENTIFIED {[WITH {no_password | plaintext_password | sha256_password | sha256_hash | double_sha1_password | double_sha1_hash}] BY {'password' | 'hash'}} | {WITH ldap SERVER 'server_name'} | {WITH kerberos [REALM 'realm']}]
    [HOST {LOCAL | NAME 'name' | REGEXP 'name_regexp' | IP 'address' | LIKE 'pattern'} [,...] | ANY | NONE]
    [DEFAULT ROLE role [,...]]
-    [SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [READONLY|WRITABLE] | PROFILE 'profile_name'] [,...]
+    [SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [READONLY | WRITABLE] | PROFILE 'profile_name'] [,...]
```

+`ON CLUSTER` позволяет создавать пользователей в кластере, см. [Распределенные DDL](../../../sql-reference/distributed-ddl.md).
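
Условный пример (имя кластера `test_cluster` предположительное): пользователь создается сразу на всех узлах кластера.

```sql
CREATE USER mira ON CLUSTER test_cluster IDENTIFIED WITH sha256_password BY 'qwerty';
-- имя кластера test_cluster условное; пароль приведен только для примера
```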
+
## Идентификация

Существует несколько способов идентификации пользователя:

@@ -28,6 +30,8 @@ CREATE USER [IF NOT EXISTS | OR REPLACE] name1 [ON CLUSTER cluster_name1]
- `IDENTIFIED WITH sha256_hash BY 'hash'`
- `IDENTIFIED WITH double_sha1_password BY 'qwerty'`
- `IDENTIFIED WITH double_sha1_hash BY 'hash'`
+- `IDENTIFIED WITH ldap SERVER 'server_name'`
+- `IDENTIFIED WITH kerberos` или `IDENTIFIED WITH kerberos REALM 'realm'`

## Пользовательский хост

@@ -81,5 +85,4 @@ CREATE USER user DEFAULT ROLE ALL
CREATE USER john DEFAULT ROLE ALL EXCEPT role1, role2
```

-[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/statements/create/user)
\ No newline at end of file
diff --git a/docs/ru/sql-reference/statements/create/view.md b/docs/ru/sql-reference/statements/create/view.md
index da021059a8e..4e34b5e3b6e 100644
--- a/docs/ru/sql-reference/statements/create/view.md
+++ b/docs/ru/sql-reference/statements/create/view.md
@@ -13,7 +13,7 @@ toc_title: "Представление"
CREATE [OR REPLACE] VIEW [IF NOT EXISTS] [db.]table_name [ON CLUSTER] AS SELECT ...
```

-Обычные представления не хранят никаких данных, они выполняют чтение данных из другой таблицы при каждом доступе. Другими словами, обычное представление - это не что иное, как сохраненный запрос. При чтении данных из представления этот сохраненный запрос используется как подзапрос в секции [FROM](../../../sql-reference/statements/select/from.md).
+Обычные представления не хранят никаких данных, они выполняют чтение данных из другой таблицы при каждом доступе. Другими словами, обычное представление — это не что иное, как сохраненный запрос. При чтении данных из представления этот сохраненный запрос используется как подзапрос в секции [FROM](../../../sql-reference/statements/select/from.md).

Для примера, пусть вы создали представление:

@@ -43,12 +43,12 @@ CREATE MATERIALIZED VIEW [IF NOT EXISTS] [db.]table_name [ON CLUSTER] [TO[db.]na

При создании материализованного представления без использования `TO [db].[table]`, нужно обязательно указать `ENGINE` - движок таблицы для хранения данных.

-При создании материализованного представления с испольованием `TO [db].[table]`, нельзя указывать `POPULATE`
+При создании материализованного представления с использованием `TO [db].[table]` нельзя указывать `POPULATE`.

Материализованное представление устроено следующим образом: при вставке данных в таблицу, указанную в SELECT-е, кусок вставляемых данных преобразуется этим запросом SELECT, и полученный результат вставляется в представление.

!!! important "Важно"
-    Материализованные представлени в ClickHouse больше похожи на `after insert` триггеры. Если в запросе материализованного представления есть агрегирование, оно применяется только к вставляемому блоку записей. Любые изменения существующих данных исходной таблицы (например обновление, удаление, удаление раздела и т.д.) не изменяют материализованное представление.
+    Материализованные представления в ClickHouse больше похожи на `after insert` триггеры. Если в запросе материализованного представления есть агрегирование, оно применяется только к вставляемому блоку записей. Любые изменения существующих данных исходной таблицы (например обновление, удаление, удаление раздела и т.д.) не изменяют материализованное представление.

Если указано `POPULATE`, то при создании представления, в него будут вставлены имеющиеся данные таблицы, как если бы был сделан запрос `CREATE TABLE ... AS SELECT ...` . Иначе, представление будет содержать только данные, вставляемые в таблицу после создания представления.
Не рекомендуется использовать POPULATE, так как вставляемые в таблицу данные во время создания представления, не попадут в него. @@ -56,10 +56,177 @@ CREATE MATERIALIZED VIEW [IF NOT EXISTS] [db.]table_name [ON CLUSTER] [TO[db.]na Недоработано выполнение запросов `ALTER` над материализованными представлениями, поэтому они могут быть неудобными для использования. Если материализованное представление использует конструкцию `TO [db.]name`, то можно выполнить `DETACH` представления, `ALTER` для целевой таблицы и последующий `ATTACH` ранее отсоединенного (`DETACH`) представления. -Обратите внимание, что работа материлизованного представления находится под влиянием настройки [optimize_on_insert](../../../operations/settings/settings.md#optimize-on-insert). Перед вставкой данных в таблицу происходит их слияние. +Обратите внимание, что работа материализованного представления находится под влиянием настройки [optimize_on_insert](../../../operations/settings/settings.md#optimize-on-insert). Перед вставкой данных в таблицу происходит их слияние. Представления выглядят так же, как обычные таблицы. Например, они перечисляются в результате запроса `SHOW TABLES`. -Отсутствует отдельный запрос для удаления представлений. Чтобы удалить представление, следует использовать `DROP TABLE`. +Чтобы удалить представление, следует использовать [DROP VIEW](../../../sql-reference/statements/drop.md#drop-view). Впрочем, `DROP TABLE` тоже работает для представлений. + +## LIVE-представления {#live-view} + +!!! important "Важно" + Представления `LIVE VIEW` являются экспериментальной возможностью. Их использование может повлечь потерю совместимости в будущих версиях. + Чтобы использовать `LIVE VIEW` и запросы `WATCH`, включите настройку [allow_experimental_live_view](../../../operations/settings/settings.md#allow-experimental-live-view). + +```sql +CREATE LIVE VIEW [IF NOT EXISTS] [db.]table_name [WITH [TIMEOUT [value_in_sec] [AND]] [REFRESH [value_in_sec]]] AS SELECT ... +``` +`LIVE VIEW` хранит результат запроса [SELECT](../../../sql-reference/statements/select/index.md), указанного при создании, и обновляется сразу же при изменении этого результата. Конечный результат запроса и промежуточные данные, из которых формируется результат, хранятся в оперативной памяти, и это обеспечивает высокую скорость обработки для повторяющихся запросов. LIVE-представления могут отправлять push-уведомления при изменении результата исходного запроса `SELECT`. Для этого используйте запрос [WATCH](../../../sql-reference/statements/watch.md). + +Изменение `LIVE VIEW` запускается при вставке данных в таблицу, указанную в исходном запросе `SELECT`. + +LIVE-представления работают по тому же принципу, что и распределенные таблицы. Но вместо объединения отдельных частей данных с разных серверов, LIVE-представления объединяют уже имеющийся результат с новыми данными. Если в исходном запросе LIVE-представления есть вложенный подзапрос, его результаты не кешируются, в кеше хранится только результат основного запроса. + +!!! info "Ограничения" + - [Табличные функции](../../../sql-reference/table-functions/index.md) в основном запросе не поддерживаются. + - Таблицы, не поддерживающие изменение с помощью запроса `INSERT`, такие как [словари](../../../sql-reference/dictionaries/index.md) и [системные таблицы](../../../operations/system-tables/index.md), а также [нормальные представления](#normal) или [материализованные представления](#materialized), не запускают обновление LIVE-представления. 
+ - В LIVE-представлениях могут использоваться только такие запросы, которые объединяют результаты по старым и новым данным. LIVE-представления не работают с запросами, требующими полного пересчета данных или агрегирования с сохранением состояния. + - `LIVE VIEW` не работает для реплицируемых и распределенных таблиц, добавление данных в которые происходит на разных узлах. + - `LIVE VIEW` не обновляется, если в исходном запросе используются несколько таблиц. + + В случаях, когда `LIVE VIEW` не обновляется автоматически, чтобы обновлять его принудительно с заданной периодичностью, используйте [WITH REFRESH](#live-view-with-refresh). + +### Отслеживание изменений {#live-view-monitoring} + +Для отслеживания изменений LIVE-представления используйте запрос [WATCH](../../../sql-reference/statements/watch.md). + + +**Пример:** + +```sql +CREATE TABLE mt (x Int8) Engine = MergeTree ORDER BY x; +CREATE LIVE VIEW lv AS SELECT sum(x) FROM mt; +``` +Отслеживаем изменения LIVE-представления при вставке данных в исходную таблицу. + +```sql +WATCH lv; +``` + +```bash +┌─sum(x)─┬─_version─┐ +│ 1 │ 1 │ +└────────┴──────────┘ +┌─sum(x)─┬─_version─┐ +│ 2 │ 2 │ +└────────┴──────────┘ +┌─sum(x)─┬─_version─┐ +│ 6 │ 3 │ +└────────┴──────────┘ +... +``` + +```sql +INSERT INTO mt VALUES (1); +INSERT INTO mt VALUES (2); +INSERT INTO mt VALUES (3); +``` + +Для получения списка изменений используйте ключевое слово [EVENTS](../../../sql-reference/statements/watch.md#events-clause). + + +```sql +WATCH lv EVENTS; +``` + +```bash +┌─version─┐ +│ 1 │ +└─────────┘ +┌─version─┐ +│ 2 │ +└─────────┘ +┌─version─┐ +│ 3 │ +└─────────┘ +... +``` + +Для работы с LIVE-представлениями, как и с любыми другими, можно использовать запросы [SELECT](../../../sql-reference/statements/select/index.md). Если результат запроса кеширован, он будет возвращен немедленно, без обращения к исходным таблицам представления. + +```sql +SELECT * FROM [db.]live_view WHERE ... +``` + +### Принудительное обновление {#live-view-alter-refresh} + +Чтобы принудительно обновить LIVE-представление, используйте запрос `ALTER LIVE VIEW [db.]table_name REFRESH`. + +### Секция WITH TIMEOUT {#live-view-with-timeout} + +LIVE-представление, созданное с параметром `WITH TIMEOUT`, будет автоматически удалено через определенное количество секунд с момента предыдущего запроса [WATCH](../../../sql-reference/statements/watch.md), примененного к данному LIVE-представлению. + +```sql +CREATE LIVE VIEW [db.]table_name WITH TIMEOUT [value_in_sec] AS SELECT ... +``` + +Если временной промежуток не указан, используется значение настройки [temporary_live_view_timeout](../../../operations/settings/settings.md#temporary-live-view-timeout). + +**Пример:** + +```sql +CREATE TABLE mt (x Int8) Engine = MergeTree ORDER BY x; +CREATE LIVE VIEW lv WITH TIMEOUT 15 AS SELECT sum(x) FROM mt; +``` + +### Секция WITH REFRESH {#live-view-with-refresh} + +LIVE-представление, созданное с параметром `WITH REFRESH`, будет автоматически обновляться через указанные промежутки времени, начиная с момента последнего обновления. + +```sql +CREATE LIVE VIEW [db.]table_name WITH REFRESH [value_in_sec] AS SELECT ... +``` + +Если значение временного промежутка не задано, используется значение [periodic_live_view_refresh](../../../operations/settings/settings.md#periodic-live-view-refresh). 
+
+**Пример:**
+
+```sql
+CREATE LIVE VIEW lv WITH REFRESH 5 AS SELECT now();
+WATCH lv;
+```
+
+```bash
+┌───────────────now()─┬─_version─┐
+│ 2021-02-21 08:47:05 │        1 │
+└─────────────────────┴──────────┘
+┌───────────────now()─┬─_version─┐
+│ 2021-02-21 08:47:10 │        2 │
+└─────────────────────┴──────────┘
+┌───────────────now()─┬─_version─┐
+│ 2021-02-21 08:47:15 │        3 │
+└─────────────────────┴──────────┘
+```
+
+Параметры `WITH TIMEOUT` и `WITH REFRESH` можно сочетать с помощью `AND`.
+
+```sql
+CREATE LIVE VIEW [db.]table_name WITH TIMEOUT [value_in_sec] AND REFRESH [value_in_sec] AS SELECT ...
+```
+
+**Пример:**
+
+```sql
+CREATE LIVE VIEW lv WITH TIMEOUT 15 AND REFRESH 5 AS SELECT now();
+```
+
+По истечении 15 секунд представление будет автоматически удалено, если нет активного запроса `WATCH`.
+
+```sql
+WATCH lv;
+```
+
+```
+Code: 60. DB::Exception: Received from localhost:9000. DB::Exception: Table default.lv doesn't exist..
+```
+
+### Использование {#live-view-usage}
+
+Наиболее частые случаи использования `LIVE VIEW`:
+
+- Получение push-уведомлений об изменениях данных без дополнительных периодических запросов.
+- Кеширование результатов часто используемых запросов для получения их без задержки.
+- Отслеживание изменений таблицы для запуска других запросов `SELECT`.
+- Отслеживание показателей из системных таблиц с помощью периодических обновлений.

[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/statements/create/view)
diff --git a/docs/ru/sql-reference/statements/describe-table.md b/docs/ru/sql-reference/statements/describe-table.md
index 64ed61de232..c66dbb66521 100644
--- a/docs/ru/sql-reference/statements/describe-table.md
+++ b/docs/ru/sql-reference/statements/describe-table.md
@@ -21,4 +21,3 @@ DESC|DESCRIBE TABLE [db.]table [INTO OUTFILE filename] [FORMAT format]
Вложенные структуры данных выводятся в «развёрнутом» виде. То есть, каждый столбец - по отдельности, с именем через точку.

-[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/statements/describe-table/)
diff --git a/docs/ru/sql-reference/statements/detach.md b/docs/ru/sql-reference/statements/detach.md
index 00d0a4b20c6..af915d38772 100644
--- a/docs/ru/sql-reference/statements/detach.md
+++ b/docs/ru/sql-reference/statements/detach.md
@@ -5,15 +5,65 @@ toc_title: DETACH

# DETACH {#detach-statement}

-Удаляет из сервера информацию о таблице name. Сервер перестаёт знать о существовании таблицы.
+Заставляет сервер "забыть" о существовании таблицы или материализованного представления.
+
+Синтаксис:

``` sql
-DETACH TABLE [IF EXISTS] [db.]name [ON CLUSTER cluster]
+DETACH TABLE|VIEW [IF EXISTS] [db.]name [ON CLUSTER cluster] [PERMANENTLY]
```

-Но ни данные, ни метаданные таблицы не удаляются. При следующем запуске сервера, сервер прочитает метаданные и снова узнает о таблице.
-Также, «отцепленную» таблицу можно прицепить заново запросом `ATTACH` (за исключением системных таблиц, для которых метаданные не хранятся).
+При этом ни данные, ни метаданные таблицы или материализованного представления не удаляются. При следующем запуске сервера, если не было использовано `PERMANENTLY`, сервер прочитает метаданные и снова узнает о таблице/представлении. Если таблица или представление были отключены перманентно, сервер не подключит их обратно автоматически.

-Запроса `DETACH DATABASE` нет.
+Независимо от того, каким способом таблица была отключена, ее можно подключить обратно с помощью запроса [ATTACH](../../sql-reference/statements/attach.md).
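+
+Например, набросок перманентного отключения и последующего подключения (имя таблицы гипотетическое):
+
+``` sql
+DETACH TABLE test PERMANENTLY;
+-- после перезапуска сервер не подключит таблицу автоматически,
+-- вернуть ее можно явным запросом ATTACH
+ATTACH TABLE test;
+```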
Системные log таблицы также могут быть подключены обратно (к примеру `query_log`, `text_log` и др.) Другие системные таблицы не могут быть подключены обратно, но на следующем запуске сервер снова "вспомнит" об этих таблицах. + +`ATTACH MATERIALIZED VIEW` не может быть использован с кратким синтаксисом (без `SELECT`), но можно подключить представление с помощью запроса `ATTACH TABLE`. + +Обратите внимание, что нельзя перманентно отключить таблицу, которая уже временно отключена. Для этого ее сначала надо подключить обратно, а затем снова отключить перманентно. + +Также нельзя использовать [DROP](../../sql-reference/statements/drop.md#drop-table) с отключенной таблицей или создавать таблицу с помощью [CREATE TABLE](../../sql-reference/statements/create/table.md) с таким же именем, как у отключенной таблицы. Еще нельзя заменить отключенную таблицу другой с помощью запроса [RENAME TABLE](../../sql-reference/statements/rename.md). + +**Пример** + +Создание таблицы: + +Запрос: + +``` sql +CREATE TABLE test ENGINE = Log AS SELECT * FROM numbers(10); +SELECT * FROM test; +``` + +Результат: + +``` text +┌─number─┐ +│ 0 │ +│ 1 │ +│ 2 │ +│ 3 │ +│ 4 │ +│ 5 │ +│ 6 │ +│ 7 │ +│ 8 │ +│ 9 │ +└────────┘ +``` + +Отключение таблицы: + +Запрос: + +``` sql +DETACH TABLE test; +SELECT * FROM test; +``` + +Результат: + +``` text +Received exception from server (version 21.4.1): +Code: 60. DB::Exception: Received from localhost:9000. DB::Exception: Table default.test doesn't exist. +``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/statements/detach/) diff --git a/docs/ru/sql-reference/statements/drop.md b/docs/ru/sql-reference/statements/drop.md index 514a92db91f..118f8eb923a 100644 --- a/docs/ru/sql-reference/statements/drop.md +++ b/docs/ru/sql-reference/statements/drop.md @@ -97,4 +97,3 @@ DROP [SETTINGS] PROFILE [IF EXISTS] name [,...] [ON CLUSTER cluster_name] DROP VIEW [IF EXISTS] [db.]name [ON CLUSTER cluster] ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/statements/drop/) \ No newline at end of file diff --git a/docs/ru/sql-reference/statements/exists.md b/docs/ru/sql-reference/statements/exists.md index 0b2fd69273c..d4f1f707e79 100644 --- a/docs/ru/sql-reference/statements/exists.md +++ b/docs/ru/sql-reference/statements/exists.md @@ -12,4 +12,3 @@ EXISTS [TEMPORARY] TABLE [db.]name [INTO OUTFILE filename] [FORMAT format] Возвращает один столбец типа `UInt8`, содержащий одно значение - `0`, если таблицы или БД не существует и `1`, если таблица в указанной БД существует. -[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/statements/exists/) \ No newline at end of file diff --git a/docs/ru/sql-reference/statements/explain.md b/docs/ru/sql-reference/statements/explain.md new file mode 100644 index 00000000000..ef2a670a87e --- /dev/null +++ b/docs/ru/sql-reference/statements/explain.md @@ -0,0 +1,152 @@ +--- +toc_priority: 39 +toc_title: EXPLAIN +--- + +# EXPLAIN {#explain} + +Выводит план выполнения запроса. + +Синтаксис: + +```sql +EXPLAIN [AST | SYNTAX | PLAN | PIPELINE] [setting = value, ...] SELECT ... [FORMAT ...] 
+``` + +Пример: + +```sql +EXPLAIN SELECT sum(number) FROM numbers(10) UNION ALL SELECT sum(number) FROM numbers(10) ORDER BY sum(number) ASC FORMAT TSV; +``` + +```sql +Union + Expression (Projection) + Expression (Before ORDER BY and SELECT) + Aggregating + Expression (Before GROUP BY) + SettingQuotaAndLimits (Set limits and quota after reading from storage) + ReadFromStorage (SystemNumbers) + Expression (Projection) + MergingSorted (Merge sorted streams for ORDER BY) + MergeSorting (Merge sorted blocks for ORDER BY) + PartialSorting (Sort each block for ORDER BY) + Expression (Before ORDER BY and SELECT) + Aggregating + Expression (Before GROUP BY) + SettingQuotaAndLimits (Set limits and quota after reading from storage) + ReadFromStorage (SystemNumbers) +``` + +## Типы EXPLAIN {#explain-types} + +- `AST` — абстрактное синтаксическое дерево. +- `SYNTAX` — текст запроса после оптимизации на уровне AST. +- `PLAN` — план выполнения запроса. +- `PIPELINE` — конвейер выполнения запроса. + +### EXPLAIN AST {#explain-ast} + +Дамп AST запроса. + +Пример: + +```sql +EXPLAIN AST SELECT 1; +``` + +```sql +SelectWithUnionQuery (children 1) + ExpressionList (children 1) + SelectQuery (children 1) + ExpressionList (children 1) + Literal UInt64_1 +``` + +### EXPLAIN SYNTAX {#explain-syntax} + +Возвращает текст запроса после применения синтаксических оптимизаций. + +Пример: + +```sql +EXPLAIN SYNTAX SELECT * FROM system.numbers AS a, system.numbers AS b, system.numbers AS c; +``` + +```sql +SELECT + `--a.number` AS `a.number`, + `--b.number` AS `b.number`, + number AS `c.number` +FROM +( + SELECT + number AS `--a.number`, + b.number AS `--b.number` + FROM system.numbers AS a + CROSS JOIN system.numbers AS b +) AS `--.s` +CROSS JOIN system.numbers AS c +``` + +### EXPLAIN PLAN {#explain-plan} + +Дамп шагов выполнения запроса. + +Настройки: + +- `header` — выводит выходной заголовок для шага. По умолчанию: 0. +- `description` — выводит описание шага. По умолчанию: 1. +- `actions` — выводит подробную информацию о действиях, выполняемых на данном шаге. По умолчанию: 0. + +Пример: + +```sql +EXPLAIN SELECT sum(number) FROM numbers(10) GROUP BY number % 4; +``` + +```sql +Union + Expression (Projection) + Expression (Before ORDER BY and SELECT) + Aggregating + Expression (Before GROUP BY) + SettingQuotaAndLimits (Set limits and quota after reading from storage) + ReadFromStorage (SystemNumbers) +``` + +!!! note "Примечание" + Оценка стоимости выполнения шага и запроса не поддерживается. + +### EXPLAIN PIPELINE {#explain-pipeline} + +Настройки: + +- `header` — выводит заголовок для каждого выходного порта. По умолчанию: 0. +- `graph` — выводит граф, описанный на языке [DOT](https://ru.wikipedia.org/wiki/DOT_(язык)). По умолчанию: 0. +- `compact` — выводит граф в компактном режиме, если включена настройка `graph`. По умолчанию: 1. +- `indexes` — показывает используемые индексы, количество отфильтрованных кусков и гранул для каждого примененного индекса. По умолчанию: 0. Поддерживается для таблиц семейства [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md). 
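+
+Например, набросок запроса с явно заданными настройками (значения выбраны произвольно):
+
+```sql
+EXPLAIN PIPELINE graph = 1, compact = 0 SELECT sum(number) FROM numbers(10) GROUP BY number % 2;
+```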
+ +Пример: + +```sql +EXPLAIN PIPELINE SELECT sum(number) FROM numbers_mt(100000) GROUP BY number % 4; +``` + +```sql +(Union) +(Expression) +ExpressionTransform + (Expression) + ExpressionTransform + (Aggregating) + Resize 2 → 1 + AggregatingTransform × 2 + (Expression) + ExpressionTransform × 2 + (SettingQuotaAndLimits) + (ReadFromStorage) + NumbersMt × 2 0 → 1 +``` + +[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/statements/explain/) diff --git a/docs/ru/sql-reference/statements/grant.md b/docs/ru/sql-reference/statements/grant.md index d38e2ea38a0..093e6eb3b93 100644 --- a/docs/ru/sql-reference/statements/grant.md +++ b/docs/ru/sql-reference/statements/grant.md @@ -93,7 +93,7 @@ GRANT SELECT(x,y) ON db.table TO john WITH GRANT OPTION - `ALTER ADD CONSTRAINT` - `ALTER DROP CONSTRAINT` - `ALTER TTL` - - `ALTER MATERIALIZE TTL` + - `ALTER MATERIALIZE TTL` - `ALTER SETTINGS` - `ALTER MOVE PARTITION` - `ALTER FETCH PARTITION` @@ -104,9 +104,9 @@ GRANT SELECT(x,y) ON db.table TO john WITH GRANT OPTION - [CREATE](#grant-create) - `CREATE DATABASE` - `CREATE TABLE` + - `CREATE TEMPORARY TABLE` - `CREATE VIEW` - `CREATE DICTIONARY` - - `CREATE TEMPORARY TABLE` - [DROP](#grant-drop) - `DROP DATABASE` - `DROP TABLE` @@ -152,7 +152,7 @@ GRANT SELECT(x,y) ON db.table TO john WITH GRANT OPTION - `SYSTEM RELOAD` - `SYSTEM RELOAD CONFIG` - `SYSTEM RELOAD DICTIONARY` - - `SYSTEM RELOAD EMBEDDED DICTIONARIES` + - `SYSTEM RELOAD EMBEDDED DICTIONARIES` - `SYSTEM MERGES` - `SYSTEM TTL MERGES` - `SYSTEM FETCHES` @@ -279,7 +279,7 @@ GRANT INSERT(x,y) ON db.table TO john - `ALTER ADD CONSTRAINT`. Уровень: `TABLE`. Алиасы: `ADD CONSTRAINT` - `ALTER DROP CONSTRAINT`. Уровень: `TABLE`. Алиасы: `DROP CONSTRAINT` - `ALTER TTL`. Уровень: `TABLE`. Алиасы: `ALTER MODIFY TTL`, `MODIFY TTL` - - `ALTER MATERIALIZE TTL`. Уровень: `TABLE`. Алиасы: `MATERIALIZE TTL` + - `ALTER MATERIALIZE TTL`. Уровень: `TABLE`. Алиасы: `MATERIALIZE TTL` - `ALTER SETTINGS`. Уровень: `TABLE`. Алиасы: `ALTER SETTING`, `ALTER MODIFY SETTING`, `MODIFY SETTING` - `ALTER MOVE PARTITION`. Уровень: `TABLE`. Алиасы: `ALTER MOVE PART`, `MOVE PARTITION`, `MOVE PART` - `ALTER FETCH PARTITION`. Уровень: `TABLE`. Алиасы: `FETCH PARTITION` @@ -307,9 +307,9 @@ GRANT INSERT(x,y) ON db.table TO john - `CREATE`. Уровень: `GROUP` - `CREATE DATABASE`. Уровень: `DATABASE` - `CREATE TABLE`. Уровень: `TABLE` + - `CREATE TEMPORARY TABLE`. Уровень: `GLOBAL` - `CREATE VIEW`. Уровень: `VIEW` - `CREATE DICTIONARY`. Уровень: `DICTIONARY` - - `CREATE TEMPORARY TABLE`. Уровень: `GLOBAL` **Дополнительно** @@ -407,7 +407,7 @@ GRANT INSERT(x,y) ON db.table TO john - `SYSTEM RELOAD`. Уровень: `GROUP` - `SYSTEM RELOAD CONFIG`. Уровень: `GLOBAL`. Алиасы: `RELOAD CONFIG` - `SYSTEM RELOAD DICTIONARY`. Уровень: `GLOBAL`. Алиасы: `SYSTEM RELOAD DICTIONARIES`, `RELOAD DICTIONARY`, `RELOAD DICTIONARIES` - - `SYSTEM RELOAD EMBEDDED DICTIONARIES`. Уровень: `GLOBAL`. Алиасы: `RELOAD EMBEDDED DICTIONARIES` + - `SYSTEM RELOAD EMBEDDED DICTIONARIES`. Уровень: `GLOBAL`. Алиасы: `RELOAD EMBEDDED DICTIONARIES` - `SYSTEM MERGES`. Уровень: `TABLE`. Алиасы: `SYSTEM STOP MERGES`, `SYSTEM START MERGES`, `STOP MERGES`, `START MERGES` - `SYSTEM TTL MERGES`. Уровень: `TABLE`. Алиасы: `SYSTEM STOP TTL MERGES`, `SYSTEM START TTL MERGES`, `STOP TTL MERGES`, `START TTL MERGES` - `SYSTEM FETCHES`. Уровень: `TABLE`. 
Алиасы: `SYSTEM STOP FETCHES`, `SYSTEM START FETCHES`, `STOP FETCHES`, `START FETCHES` @@ -483,4 +483,3 @@ GRANT INSERT(x,y) ON db.table TO john Привилегия `ADMIN OPTION` разрешает пользователю назначать свои роли другому пользователю. -[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/statements/grant/) diff --git a/docs/ru/sql-reference/statements/insert-into.md b/docs/ru/sql-reference/statements/insert-into.md index 0ad85ed0166..bbd330962cf 100644 --- a/docs/ru/sql-reference/statements/insert-into.md +++ b/docs/ru/sql-reference/statements/insert-into.md @@ -119,4 +119,3 @@ INSERT INTO [db.]table [(c1, c2, c3)] SELECT ... - Данные поступают в режиме реального времени. - Вы загружаете данные, которые как правило отсортированы по времени. -[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/insert_into/) diff --git a/docs/ru/sql-reference/statements/kill.md b/docs/ru/sql-reference/statements/kill.md index e2556a7f782..6981d630dd8 100644 --- a/docs/ru/sql-reference/statements/kill.md +++ b/docs/ru/sql-reference/statements/kill.md @@ -70,4 +70,3 @@ KILL MUTATION WHERE database = 'default' AND table = 'table' AND mutation_id = ' Данные, уже изменённые мутацией, остаются в таблице (отката на старую версию данных не происходит). -[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/statements/kill/) \ No newline at end of file diff --git a/docs/ru/sql-reference/statements/misc.md b/docs/ru/sql-reference/statements/misc.md index e9ceece8b2c..cedf52b7a34 100644 --- a/docs/ru/sql-reference/statements/misc.md +++ b/docs/ru/sql-reference/statements/misc.md @@ -19,4 +19,3 @@ toc_priority: 41 - [TRUNCATE](../../sql-reference/statements/truncate.md) - [USE](../../sql-reference/statements/use.md) -[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/statements/misc/) \ No newline at end of file diff --git a/docs/ru/sql-reference/statements/optimize.md b/docs/ru/sql-reference/statements/optimize.md index 8b1d72fed80..e1a9d613537 100644 --- a/docs/ru/sql-reference/statements/optimize.md +++ b/docs/ru/sql-reference/statements/optimize.md @@ -5,20 +5,83 @@ toc_title: OPTIMIZE # OPTIMIZE {#misc_operations-optimize} -``` sql -OPTIMIZE TABLE [db.]name [ON CLUSTER cluster] [PARTITION partition | PARTITION ID 'partition_id'] [FINAL] [DEDUPLICATE] -``` - -Запрос пытается запустить внеплановый мёрж кусков данных для таблиц семейства [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md). Другие движки таблиц не поддерживаются. - -Если `OPTIMIZE` применяется к таблицам семейства [ReplicatedMergeTree](../../engines/table-engines/mergetree-family/replication.md), ClickHouse создаёт задачу на мёрж и ожидает её исполнения на всех узлах (если активирована настройка `replication_alter_partitions_sync`). - -- Если `OPTIMIZE` не выполняет мёрж по любой причине, ClickHouse не оповещает об этом клиента. Чтобы включить оповещения, используйте настройку [optimize_throw_if_noop](../../operations/settings/settings.md#setting-optimize_throw_if_noop). -- Если указать `PARTITION`, то оптимизация выполняется только для указанной партиции. [Как задавать имя партиции в запросах](alter/index.md#alter-how-to-specify-part-expr). -- Если указать `FINAL`, то оптимизация выполняется даже в том случае, если все данные уже лежат в одном куске. Кроме того, слияние является принудительным, даже если выполняются параллельные слияния. 
-- Если указать `DEDUPLICATE`, то произойдет схлопывание полностью одинаковых строк (сравниваются значения во всех колонках), имеет смысл только для движка MergeTree. +Запрос пытается запустить внеплановое слияние кусков данных для таблиц. !!! warning "Внимание" - Запрос `OPTIMIZE` не может устранить причину появления ошибки «Too many parts». - -[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/statements/optimize/) + `OPTIMIZE` не устраняет причину появления ошибки `Too many parts`. + +**Синтаксис** + +``` sql +OPTIMIZE TABLE [db.]name [ON CLUSTER cluster] [PARTITION partition | PARTITION ID 'partition_id'] [FINAL] [DEDUPLICATE [BY expression]] +``` + +Может применяться к таблицам семейства [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md), [MaterializedView](../../engines/table-engines/special/materializedview.md) и [Buffer](../../engines/table-engines/special/buffer.md). Другие движки таблиц не поддерживаются. + +Если запрос `OPTIMIZE` применяется к таблицам семейства [ReplicatedMergeTree](../../engines/table-engines/mergetree-family/replication.md), ClickHouse создаёт задачу на слияние и ожидает её исполнения на всех узлах (если активирована настройка `replication_alter_partitions_sync`). + +- По умолчанию, если запросу `OPTIMIZE` не удалось выполнить слияние, то +ClickHouse не оповещает клиента. Чтобы включить оповещения, используйте настройку [optimize_throw_if_noop](../../operations/settings/settings.md#setting-optimize_throw_if_noop). +- Если указать `PARTITION`, то оптимизация выполняется только для указанной партиции. [Как задавать имя партиции в запросах](alter/index.md#alter-how-to-specify-part-expr). +- Если указать `FINAL`, то оптимизация выполняется даже в том случае, если все данные уже лежат в одном куске данных. Кроме того, слияние является принудительным, даже если выполняются параллельные слияния. +- Если указать `DEDUPLICATE`, то произойдет схлопывание полностью одинаковых строк (сравниваются значения во всех столбцах), имеет смысл только для движка MergeTree. + +## Выражение BY {#by-expression} + +Чтобы выполнить дедупликацию по произвольному набору столбцов, вы можете явно указать список столбцов или использовать любую комбинацию подстановки [`*`](../../sql-reference/statements/select/index.md#asterisk), выражений [`COLUMNS`](../../sql-reference/statements/select/index.md#columns-expression) и [`EXCEPT`](../../sql-reference/statements/select/index.md#except-modifier). + + Список столбцов для дедупликации должен включать все столбцы, указанные в условиях сортировки (первичный ключ и ключ сортировки), а также в условиях партиционирования (ключ партиционирования). + + !!! note "Примечание" + Обратите внимание, что символ подстановки `*` обрабатывается так же, как и в запросах `SELECT`: столбцы `MATERIALIZED` и `ALIAS` не включаются в результат. + Если указать пустой список или выражение, которое возвращает пустой список, или дедуплицировать столбец по псевдониму (`ALIAS`), то сервер вернет ошибку. + + +**Примеры** + +Рассмотрим таблицу: + +``` sql +CREATE TABLE example ( + primary_key Int32, + secondary_key Int32, + value UInt32, + partition_key UInt32, + materialized_value UInt32 MATERIALIZED 12345, + aliased_value UInt32 ALIAS 2, + PRIMARY KEY primary_key +) ENGINE=MergeTree +PARTITION BY partition_key; +``` + +Прежний способ дедупликации, когда учитываются все столбцы. Строка удаляется только в том случае, если все значения во всех столбцах равны соответствующим значениям в предыдущей строке. 
+
+``` sql
+OPTIMIZE TABLE example FINAL DEDUPLICATE;
+```
+
+Дедупликация по всем столбцам, кроме `ALIAS` и `MATERIALIZED`: `primary_key`, `secondary_key`, `value`, `partition_key` и `materialized_value`.
+
+
+``` sql
+OPTIMIZE TABLE example FINAL DEDUPLICATE BY *;
+```
+
+Дедупликация по всем столбцам, кроме `ALIAS`, `MATERIALIZED` и `materialized_value`: столбцы `primary_key`, `secondary_key`, `value` и `partition_key`.
+
+
+``` sql
+OPTIMIZE TABLE example FINAL DEDUPLICATE BY * EXCEPT materialized_value;
+```
+
+Дедупликация по столбцам `primary_key`, `secondary_key` и `partition_key`.
+
+``` sql
+OPTIMIZE TABLE example FINAL DEDUPLICATE BY primary_key, secondary_key, partition_key;
+```
+
+Дедупликация по всем столбцам, соответствующим регулярному выражению: `primary_key`, `secondary_key` и `partition_key`.
+
+``` sql
+OPTIMIZE TABLE example FINAL DEDUPLICATE BY COLUMNS('.*_key');
+```
diff --git a/docs/ru/sql-reference/statements/rename.md b/docs/ru/sql-reference/statements/rename.md
index 94bf3c682a1..192426dbafa 100644
--- a/docs/ru/sql-reference/statements/rename.md
+++ b/docs/ru/sql-reference/statements/rename.md
@@ -3,8 +3,16 @@ toc_priority: 48
toc_title: RENAME
---

-# RENAME {#misc_operations-rename}
+# Запрос RENAME {#misc_operations-rename}
+## RENAME DATABASE {#misc_operations-rename_database}
+Переименовывает базу данных.
+
+``` sql
+RENAME DATABASE atomic_database1 TO atomic_database2 [ON CLUSTER cluster]
+```
+
+## RENAME TABLE {#misc_operations-rename_table}
Переименовывает одну или несколько таблиц.

``` sql
@@ -12,6 +20,3 @@ RENAME TABLE [db11.]name11 TO [db12.]name12, [db21.]name21 TO [db22.]name22, ...
```

Переименовывание таблицы является лёгкой операцией. Если вы указали после `TO` другую базу данных, то таблица будет перенесена в эту базу данных. При этом, директории с базами данных должны быть расположены в одной файловой системе (иначе возвращается ошибка). В случае переименования нескольких таблиц в одном запросе — это неатомарная операция, может выполниться частично, запросы в других сессиях могут получить ошибку `Table ... doesn't exist...`.
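+
+Например, набросок переноса таблицы в другую базу данных (имена гипотетические):
+
+``` sql
+RENAME TABLE db_old.events TO db_new.events;
+```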
-
-
-[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/statements/rename/)
diff --git a/docs/ru/sql-reference/statements/revoke.md b/docs/ru/sql-reference/statements/revoke.md
index 339746b8591..a3a282d6e5c 100644
--- a/docs/ru/sql-reference/statements/revoke.md
+++ b/docs/ru/sql-reference/statements/revoke.md
@@ -45,4 +45,3 @@ GRANT SELECT ON accounts.staff TO mira;
REVOKE SELECT(wage) ON accounts.staff FROM mira;
```

-[Оригинальная статья](https://clickhouse.tech/docs/en/operations/settings/settings/)
diff --git a/docs/ru/sql-reference/statements/select/all.md b/docs/ru/sql-reference/statements/select/all.md
index 4049d77a173..d36a23ca54e 100644
--- a/docs/ru/sql-reference/statements/select/all.md
+++ b/docs/ru/sql-reference/statements/select/all.md
@@ -19,4 +19,3 @@ SELECT sum(ALL number) FROM numbers(10);
SELECT sum(number) FROM numbers(10);
```

-[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/statements/select/all)
diff --git a/docs/ru/sql-reference/statements/select/index.md b/docs/ru/sql-reference/statements/select/index.md
index a548a988a89..a3b4e889397 100644
--- a/docs/ru/sql-reference/statements/select/index.md
+++ b/docs/ru/sql-reference/statements/select/index.md
@@ -45,6 +45,7 @@ SELECT [DISTINCT] expr_list
- [Секция SELECT](#select-clause)
- [Секция DISTINCT](distinct.md)
- [Секция LIMIT](limit.md)
+- [Секция OFFSET](offset.md)
- [Секция UNION ALL](union.md)
- [Секция INTO OUTFILE](into-outfile.md)
- [Секция FORMAT](format.md)
@@ -280,4 +281,3 @@ SELECT * REPLACE(i + 1 AS i) EXCEPT (j) APPLY(sum) from columns_transformers;
SELECT * FROM some_table SETTINGS optimize_read_in_order=1, cast_keep_nullable=1;
```

-[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/statements/select/)
diff --git a/docs/ru/sql-reference/statements/select/limit.md b/docs/ru/sql-reference/statements/select/limit.md
index 03b720226f0..e4012e89556 100644
--- a/docs/ru/sql-reference/statements/select/limit.md
+++ b/docs/ru/sql-reference/statements/select/limit.md
@@ -12,13 +12,16 @@ toc_title: LIMIT

При отсутствии секции [ORDER BY](order-by.md), однозначно сортирующей результат, результат может быть произвольным и может являться недетерминированным.

+!!! note "Примечание"
+    Количество возвращаемых строк может зависеть также от настройки [limit](../../../operations/settings/settings.md#limit).
+
## Модификатор LIMIT ... WITH TIES {#limit-with-ties}

Когда вы установите модификатор WITH TIES для `LIMIT n[,m]` и указываете `ORDER BY expr_list`, вы получите первые `n` или `n,m` строк и дополнительно все строки с теми же самым значениями полей указанных в `ORDER BY` равными строке на позиции `n` для `LIMIT n` или `m` для `LIMIT n,m`.

Этот модификатор также может быть скомбинирован с [ORDER BY ... WITH FILL модификатором](../../../sql-reference/statements/select/order-by.md#orderby-with-fill)

-Для примера следующий запрос
+Для примера следующий запрос:
```sql
SELECT * FROM ( SELECT number%50 AS n FROM numbers(100)
diff --git a/docs/ru/sql-reference/statements/select/offset.md b/docs/ru/sql-reference/statements/select/offset.md
new file mode 100644
index 00000000000..31ff1d6ea8b
--- /dev/null
+++ b/docs/ru/sql-reference/statements/select/offset.md
@@ -0,0 +1,86 @@
+---
+toc_title: OFFSET
+---
+
+# Секция OFFSET FETCH {#offset-fetch}
+
+`OFFSET` и `FETCH` позволяют извлекать данные по частям. Они указывают строки, которые вы хотите получить в результате запроса.
+
+``` sql
+OFFSET offset_row_count {ROW | ROWS} [FETCH {FIRST | NEXT} fetch_row_count {ROW | ROWS} {ONLY | WITH TIES}]
+```
+
+`offset_row_count` или `fetch_row_count` может быть числом или литеральной константой. Если вы не задаете `fetch_row_count` явно, используется значение по умолчанию, равное 1.
+
+`OFFSET` указывает количество строк, которые необходимо пропустить перед началом возврата строк из запроса.
+
+`FETCH` указывает максимальное количество строк, которые могут быть получены в результате запроса.
+
+Опция `ONLY` используется для возврата строк, которые следуют сразу же за строками, пропущенными секцией `OFFSET`. В этом случае `FETCH` — это альтернатива [LIMIT](../../../sql-reference/statements/select/limit.md). Например, следующий запрос
+
+``` sql
+SELECT * FROM test_fetch ORDER BY a OFFSET 1 ROW FETCH FIRST 3 ROWS ONLY;
+```
+
+идентичен запросу
+
+``` sql
+SELECT * FROM test_fetch ORDER BY a LIMIT 3 OFFSET 1;
+```
+
+Опция `WITH TIES` используется для возврата дополнительных строк, которые привязываются к последней в результате запроса. Например, если `fetch_row_count` имеет значение 5 и существуют еще 2 строки с такими же значениями столбцов, указанных в `ORDER BY`, что и у пятой строки результата, то финальный набор будет содержать 7 строк.
+
+!!! note "Примечание"
+    Секция `OFFSET` должна находиться перед секцией `FETCH`, если обе присутствуют.
+
+!!! note "Примечание"
+    Общее количество пропущенных строк может зависеть также от настройки [offset](../../../operations/settings/settings.md#offset).
+
+## Примеры {#examples}
+
+Входная таблица:
+
+``` text
+┌─a─┬─b─┐
+│ 1 │ 1 │
+│ 2 │ 1 │
+│ 3 │ 4 │
+│ 1 │ 3 │
+│ 5 │ 4 │
+│ 0 │ 6 │
+│ 5 │ 7 │
+└───┴───┘
+```
+
+Использование опции `ONLY`:
+
+``` sql
+SELECT * FROM test_fetch ORDER BY a OFFSET 3 ROW FETCH FIRST 3 ROWS ONLY;
+```
+
+Результат:
+
+``` text
+┌─a─┬─b─┐
+│ 2 │ 1 │
+│ 3 │ 4 │
+│ 5 │ 4 │
+└───┴───┘
+```
+
+Использование опции `WITH TIES`:
+
+``` sql
+SELECT * FROM test_fetch ORDER BY a OFFSET 3 ROW FETCH FIRST 3 ROWS WITH TIES;
+```
+
+Результат:
+
+``` text
+┌─a─┬─b─┐
+│ 2 │ 1 │
+│ 3 │ 4 │
+│ 5 │ 4 │
+│ 5 │ 7 │
+└───┴───┘
+```
diff --git a/docs/ru/sql-reference/statements/select/order-by.md b/docs/ru/sql-reference/statements/select/order-by.md
index f8b838cbd15..cb49d167b13 100644
--- a/docs/ru/sql-reference/statements/select/order-by.md
+++ b/docs/ru/sql-reference/statements/select/order-by.md
@@ -392,85 +392,3 @@ ORDER BY
│ 1970-03-12 │ 1970-01-08 │ original │
└────────────┴────────────┴──────────┘
```
-
-## Секция OFFSET FETCH {#offset-fetch}
-
-`OFFSET` и `FETCH` позволяют извлекать данные по частям. Они указывают строки, которые вы хотите получить в результате запроса.
-
-``` sql
-OFFSET offset_row_count {ROW | ROWS}] [FETCH {FIRST | NEXT} fetch_row_count {ROW | ROWS} {ONLY | WITH TIES}]
-```
-
-`offset_row_count` или `fetch_row_count` может быть числом или литеральной константой. Если вы не используете `fetch_row_count`, то его значение равно 1.
-
-`OFFSET` указывает количество строк, которые необходимо пропустить перед началом возврата строк из запроса.
-
-`FETCH` указывает максимальное количество строк, которые могут быть получены в результате запроса.
-
-Опция `ONLY` используется для возврата строк, которые следуют сразу же за строками, пропущенными секцией `OFFSET`. В этом случае `FETCH` — это альтернатива [LIMIT](../../../sql-reference/statements/select/limit.md).
Например, следующий запрос - -``` sql -SELECT * FROM test_fetch ORDER BY a OFFSET 1 ROW FETCH FIRST 3 ROWS ONLY; -``` - -идентичен запросу - -``` sql -SELECT * FROM test_fetch ORDER BY a LIMIT 3 OFFSET 1; -``` - -Опция `WITH TIES` используется для возврата дополнительных строк, которые привязываются к последней в результате запроса. Например, если `fetch_row_count` имеет значение 5 и существуют еще 2 строки с такими же значениями столбцов, указанных в `ORDER BY`, что и у пятой строки результата, то финальный набор будет содержать 7 строк. - -!!! note "Примечание" - Секция `OFFSET` должна находиться перед секцией `FETCH`, если обе присутствуют. - -### Примеры {#examples} - -Входная таблица: - -``` text -┌─a─┬─b─┐ -│ 1 │ 1 │ -│ 2 │ 1 │ -│ 3 │ 4 │ -│ 1 │ 3 │ -│ 5 │ 4 │ -│ 0 │ 6 │ -│ 5 │ 7 │ -└───┴───┘ -``` - -Использование опции `ONLY`: - -``` sql -SELECT * FROM test_fetch ORDER BY a OFFSET 3 ROW FETCH FIRST 3 ROWS ONLY; -``` - -Результат: - -``` text -┌─a─┬─b─┐ -│ 2 │ 1 │ -│ 3 │ 4 │ -│ 5 │ 4 │ -└───┴───┘ -``` - -Использование опции `WITH TIES`: - -``` sql -SELECT * FROM test_fetch ORDER BY a OFFSET 3 ROW FETCH FIRST 3 ROWS WITH TIES; -``` - -Результат: - -``` text -┌─a─┬─b─┐ -│ 2 │ 1 │ -│ 3 │ 4 │ -│ 5 │ 4 │ -│ 5 │ 7 │ -└───┴───┘ -``` - -[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/statements/select/order-by/) diff --git a/docs/ru/sql-reference/statements/select/union.md b/docs/ru/sql-reference/statements/select/union.md index 8f1dc11c802..de8a9b0e4ea 100644 --- a/docs/ru/sql-reference/statements/select/union.md +++ b/docs/ru/sql-reference/statements/select/union.md @@ -78,4 +78,3 @@ SELECT 1 UNION SELECT 2 UNION SELECT 3 UNION SELECT 2; Запросы, которые являются частью `UNION/UNION ALL/UNION DISTINCT`, выполняются параллельно, и их результаты могут быть смешаны вместе. -[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/statements/select/union/) diff --git a/docs/ru/sql-reference/statements/select/with.md b/docs/ru/sql-reference/statements/select/with.md index 328b28c27ef..7e09d94770a 100644 --- a/docs/ru/sql-reference/statements/select/with.md +++ b/docs/ru/sql-reference/statements/select/with.md @@ -67,4 +67,3 @@ WITH test1 AS (SELECT i + 1, j + 1 FROM test1) SELECT * FROM test1; ``` -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/statements/select/with/) diff --git a/docs/ru/sql-reference/statements/set-role.md b/docs/ru/sql-reference/statements/set-role.md index ccbef41aa9b..b21a9ec8319 100644 --- a/docs/ru/sql-reference/statements/set-role.md +++ b/docs/ru/sql-reference/statements/set-role.md @@ -54,4 +54,3 @@ SET DEFAULT ROLE ALL EXCEPT role1, role2 TO user ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/statements/set-role/) \ No newline at end of file diff --git a/docs/ru/sql-reference/statements/set.md b/docs/ru/sql-reference/statements/set.md index b60dfcf8324..fa96c3c2a1b 100644 --- a/docs/ru/sql-reference/statements/set.md +++ b/docs/ru/sql-reference/statements/set.md @@ -19,4 +19,3 @@ SET profile = 'profile-name-from-the-settings-file' Подробности смотрите в разделе [Настройки](../../operations/settings/settings.md). 
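+
+Например, набросок установки отдельной настройки и профиля в рамках текущей сессии (имя профиля гипотетическое):
+
+``` sql
+SET max_threads = 8;
+SET profile = 'web';
+```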
-[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/statements/set/) \ No newline at end of file diff --git a/docs/ru/sql-reference/statements/show.md b/docs/ru/sql-reference/statements/show.md index b214f0072e3..6d39bab4990 100644 --- a/docs/ru/sql-reference/statements/show.md +++ b/docs/ru/sql-reference/statements/show.md @@ -427,4 +427,3 @@ SHOW CHANGED SETTINGS ILIKE '%MEMORY%' - Таблица [system.settings](../../operations/system-tables/settings.md) -[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/statements/show/) diff --git a/docs/ru/sql-reference/statements/system.md b/docs/ru/sql-reference/statements/system.md index a6a6c5047af..f0f9b77b5ba 100644 --- a/docs/ru/sql-reference/statements/system.md +++ b/docs/ru/sql-reference/statements/system.md @@ -204,6 +204,7 @@ SYSTEM STOP MOVES [[db.]merge_tree_family_table_name] ClickHouse может управлять фоновыми процессами связанными c репликацией в таблицах семейства [ReplicatedMergeTree](../../engines/table-engines/mergetree-family/replacingmergetree.md). ### STOP FETCHES {#query_language-system-stop-fetches} + Позволяет остановить фоновые процессы синхронизации новыми вставленными кусками данных с другими репликами в кластере для таблиц семейства `ReplicatedMergeTree`: Всегда возвращает `Ok.` вне зависимости от типа таблицы и даже если таблица или база данных не существет. @@ -212,6 +213,7 @@ SYSTEM STOP FETCHES [[db.]replicated_merge_tree_family_table_name] ``` ### START FETCHES {#query_language-system-start-fetches} + Позволяет запустить фоновые процессы синхронизации новыми вставленными кусками данных с другими репликами в кластере для таблиц семейства `ReplicatedMergeTree`: Всегда возвращает `Ok.` вне зависимости от типа таблицы и даже если таблица или база данных не существет. @@ -220,6 +222,7 @@ SYSTEM START FETCHES [[db.]replicated_merge_tree_family_table_name] ``` ### STOP REPLICATED SENDS {#query_language-system-start-replicated-sends} + Позволяет остановить фоновые процессы отсылки новых вставленных кусков данных другим репликам в кластере для таблиц семейства `ReplicatedMergeTree`: ``` sql @@ -227,6 +230,7 @@ SYSTEM STOP REPLICATED SENDS [[db.]replicated_merge_tree_family_table_name] ``` ### START REPLICATED SENDS {#query_language-system-start-replicated-sends} + Позволяет запустить фоновые процессы отсылки новых вставленных кусков данных другим репликам в кластере для таблиц семейства `ReplicatedMergeTree`: ``` sql @@ -234,6 +238,7 @@ SYSTEM START REPLICATED SENDS [[db.]replicated_merge_tree_family_table_name] ``` ### STOP REPLICATION QUEUES {#query_language-system-stop-replication-queues} + Останавливает фоновые процессы разбора заданий из очереди репликации которая хранится в Zookeeper для таблиц семейства `ReplicatedMergeTree`. Возможные типы заданий - merges, fetches, mutation, DDL запросы с ON CLUSTER: ``` sql @@ -241,6 +246,7 @@ SYSTEM STOP REPLICATION QUEUES [[db.]replicated_merge_tree_family_table_name] ``` ### START REPLICATION QUEUES {#query_language-system-start-replication-queues} + Запускает фоновые процессы разбора заданий из очереди репликации которая хранится в Zookeeper для таблиц семейства `ReplicatedMergeTree`. 
Возможные типы заданий - merges, fetches, mutation, DDL запросы с ON CLUSTER: ``` sql @@ -248,21 +254,24 @@ SYSTEM START REPLICATION QUEUES [[db.]replicated_merge_tree_family_table_name] ``` ### SYNC REPLICA {#query_language-system-sync-replica} + Ждет когда таблица семейства `ReplicatedMergeTree` будет синхронизирована с другими репликами в кластере, будет работать до достижения `receive_timeout`, если синхронизация для таблицы отключена в настоящий момент времени: ``` sql SYSTEM SYNC REPLICA [db.]replicated_merge_tree_family_table_name ``` +После выполнения этого запроса таблица `[db.]replicated_merge_tree_family_table_name` синхронизирует команды из общего реплицированного лога в свою собственную очередь репликации. Затем запрос ждет, пока реплика не обработает все синхронизированные команды. + ### RESTART REPLICA {#query_language-system-restart-replica} -Реинициализация состояния Zookeeper сессий для таблицы семейства `ReplicatedMergeTree`, сравнивает текущее состояние с тем что хранится в Zookeeper как источник правды и добавляет задачи Zookeeper очередь если необходимо -Инициализация очереди репликации на основе данных ZooKeeper, происходит так же как при attach table. На короткое время таблица станет недоступной для любых операций. + +Реинициализация состояния Zookeeper-сессий для таблицы семейства `ReplicatedMergeTree`. Сравнивает текущее состояние с тем, что хранится в Zookeeper, как источник правды, и добавляет задачи в очередь репликации в Zookeeper, если необходимо. +Инициализация очереди репликации на основе данных ZooKeeper происходит так же, как при attach table. На короткое время таблица станет недоступной для любых операций. ``` sql SYSTEM RESTART REPLICA [db.]replicated_merge_tree_family_table_name ``` ### RESTART REPLICAS {#query_language-system-restart-replicas} -Реинициализация состояния Zookeeper сессий для всех `ReplicatedMergeTree` таблиц, сравнивает текущее состояние с тем что хранится в Zookeeper как источник правды и добавляет задачи Zookeeper очередь если необходимо -[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/system/) +Реинициализация состояния ZooKeeper-сессий для всех `ReplicatedMergeTree` таблиц. Сравнивает текущее состояние реплики с тем, что хранится в ZooKeeper, как c источником правды, и добавляет задачи в очередь репликации в ZooKeeper, если необходимо. diff --git a/docs/ru/sql-reference/statements/truncate.md b/docs/ru/sql-reference/statements/truncate.md index 4909d349658..b23d96d5b08 100644 --- a/docs/ru/sql-reference/statements/truncate.md +++ b/docs/ru/sql-reference/statements/truncate.md @@ -14,4 +14,3 @@ TRUNCATE TABLE [IF EXISTS] [db.]name [ON CLUSTER cluster] Запрос `TRUNCATE` не поддерживается для следующих движков: [View](../../engines/table-engines/special/view.md), [File](../../engines/table-engines/special/file.md), [URL](../../engines/table-engines/special/url.md) и [Null](../../engines/table-engines/special/null.md). -[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/statements/truncate/) \ No newline at end of file diff --git a/docs/ru/sql-reference/statements/use.md b/docs/ru/sql-reference/statements/use.md index c84329ea5ff..0d40870c23a 100644 --- a/docs/ru/sql-reference/statements/use.md +++ b/docs/ru/sql-reference/statements/use.md @@ -13,4 +13,3 @@ USE db Текущая база данных используется для поиска таблиц, если база данных не указана в запросе явно через точку перед именем таблицы. При использовании HTTP протокола запрос не может быть выполнен, так как понятия сессии не существует. 
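+
+Например, набросок (имена гипотетические):
+
+``` sql
+USE db;
+SELECT * FROM events; -- эквивалентно SELECT * FROM db.events
+```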
-[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/statements/use/) \ No newline at end of file diff --git a/docs/ru/sql-reference/statements/watch.md b/docs/ru/sql-reference/statements/watch.md new file mode 100644 index 00000000000..ef5b2f80584 --- /dev/null +++ b/docs/ru/sql-reference/statements/watch.md @@ -0,0 +1,106 @@ +--- +toc_priority: 53 +toc_title: WATCH +--- + +# Запрос WATCH {#watch} + +!!! important "Важно" + Это экспериментальная функция. Она может повлечь потерю совместимости в будущих версиях. + Чтобы использовать `LIVE VIEW` и запросы `WATCH`, включите настройку `set allow_experimental_live_view = 1`. + +**Синтаксис** + +``` sql +WATCH [db.]live_view [EVENTS] [LIMIT n] [FORMAT format] +``` + +Запрос `WATCH` постоянно возвращает содержимое [LIVE-представления](./create/view.md#live-view). Если параметр `LIMIT` не был задан, запрос `WATCH` будет непрерывно обновлять содержимое [LIVE-представления](./create/view.md#live-view). + +```sql +WATCH [db.]live_view; +``` +## Виртуальные столбцы {#watch-virtual-columns} + +Виртуальный столбец `_version` в результате запроса обозначает версию данного результата. + +**Пример:** + +```sql +CREATE LIVE VIEW lv WITH REFRESH 5 AS SELECT now(); +WATCH lv; +``` + +```bash +┌───────────────now()─┬─_version─┐ +│ 2021-02-21 09:17:21 │ 1 │ +└─────────────────────┴──────────┘ +┌───────────────now()─┬─_version─┐ +│ 2021-02-21 09:17:26 │ 2 │ +└─────────────────────┴──────────┘ +┌───────────────now()─┬─_version─┐ +│ 2021-02-21 09:17:31 │ 3 │ +└─────────────────────┴──────────┘ +... +``` + +По умолчанию запрашиваемые данные возвращаются клиенту, однако в сочетании с запросом [INSERT INTO](../../sql-reference/statements/insert-into.md) они могут быть перенаправлены для вставки в другую таблицу. + +**Пример:** + +```sql +INSERT INTO [db.]table WATCH [db.]live_view ... +``` + +## Секция EVENTS {#events-clause} + +С помощью параметра `EVENTS` можно получить компактную форму результата запроса `WATCH`. Вместо полного результата вы получаете номер последней версии результата. + +```sql +WATCH [db.]live_view EVENTS; +``` + +**Пример:** + +```sql +CREATE LIVE VIEW lv WITH REFRESH 5 AS SELECT now(); +WATCH lv EVENTS; +``` + +```bash +┌─version─┐ +│ 1 │ +└─────────┘ +┌─version─┐ +│ 2 │ +└─────────┘ +... +``` + +## Секция LIMIT {#limit-clause} + +Параметр `LIMIT n` задает количество обновлений запроса `WATCH`, после которого отслеживание прекращается. По умолчанию это число не задано, поэтому запрос будет выполняться постоянно. Значение `LIMIT 0` означает, что запрос `WATCH` вернет единственный актуальный результат запроса и прекратит отслеживание. + +```sql +WATCH [db.]live_view LIMIT 1; +``` + +**Пример:** + +```sql +CREATE LIVE VIEW lv WITH REFRESH 5 AS SELECT now(); +WATCH lv EVENTS LIMIT 1; +``` + +```bash +┌─version─┐ +│ 1 │ +└─────────┘ +``` + +## Секция FORMAT {#format-clause} + +Параметр `FORMAT` работает аналогично одноименному параметру запроса [SELECT](../../sql-reference/statements/select/format.md#format-clause). + +!!! info "Примечание" + При отслеживании [LIVE VIEW](./create/view.md#live-view) через интерфейс HTTP следует использовать формат [JSONEachRowWithProgress](../../interfaces/formats.md#jsoneachrowwithprogress). Постоянные сообщения об изменениях будут добавлены в поток вывода для поддержания активности долговременного HTTP-соединения до тех пор, пока результат запроса изменяется. 
Промежуток времени между сообщениями об изменениях управляется настройкой [live_view_heartbeat_interval](./create/view.md#live-view-settings).
diff --git a/docs/ru/sql-reference/syntax.md b/docs/ru/sql-reference/syntax.md
index d8eaa4f1731..dbbf5f92612 100644
--- a/docs/ru/sql-reference/syntax.md
+++ b/docs/ru/sql-reference/syntax.md
@@ -128,7 +128,7 @@ expr AS alias

Например, `SELECT table_name_alias.column_name FROM table_name table_name_alias`.

-    В функции [CAST](sql_reference/syntax.md#type_conversion_function-cast), ключевое слово `AS` имеет другое значение. Смотрите описание функции.
+    В функции [CAST](../sql_reference/syntax.md#type_conversion_function-cast), ключевое слово `AS` имеет другое значение. Смотрите описание функции.

- `expr` — любое выражение, которое поддерживает ClickHouse.

@@ -138,7 +138,7 @@ expr AS alias

Например, `SELECT "table t".column_name FROM table_name AS "table t"`.

-### Примечания по использованию {#primechaniia-po-ispolzovaniiu}
+### Примечания по использованию {#notes-on-usage}

Синонимы являются глобальными для запроса или подзапроса, и вы можете определить синоним в любой части запроса для любого выражения. Например, `SELECT (1 AS n) + 2, n`.

@@ -169,9 +169,9 @@
Received exception from server (version 18.14.17):
Code: 184. DB::Exception: Received from localhost:9000, 127.0.0.1. DB::Exception: Aggregate function sum(b) is found inside another aggregate function in query.
```

-В этом примере мы объявили таблицу `t` со столбцом `b`. Затем, при выборе данных, мы определили синоним `sum(b) AS b`. Поскольку синонимы глобальные, то ClickHouse заменил литерал `b` в выражении `argMax(a, b)` выражением `sum(b)`. Эта замена вызвала исключение.
+В этом примере мы объявили таблицу `t` со столбцом `b`. Затем, при выборе данных, мы определили синоним `sum(b) AS b`. Поскольку синонимы глобальные, то ClickHouse заменил литерал `b` в выражении `argMax(a, b)` выражением `sum(b)`. Эта замена вызвала исключение. Можно изменить это поведение, включив настройку [prefer_column_name_to_alias](../operations/settings/settings.md#prefer_column_name_to_alias), для этого нужно установить ее в значение `1`.

-## Звёздочка {#zviozdochka}
+## Звёздочка {#asterisk}

В запросе `SELECT`, вместо выражения может стоять звёздочка. Подробнее смотрите раздел «SELECT».

@@ -180,5 +180,3 @@ Выражение представляет собой функцию, идентификатор, литерал, применение оператора, выражение в скобках, подзапрос, звёздочку. А также может содержать синоним.

Список выражений - одно выражение или несколько выражений через запятую. Функции и операторы, в свою очередь, в качестве аргументов, могут иметь произвольные выражения.
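+
+Например, набросок того, как настройка [prefer_column_name_to_alias](../operations/settings/settings.md#prefer_column_name_to_alias) меняет разбор синонимов в примере с `argMax` выше (таблица `t` та же):
+
+```sql
+SET prefer_column_name_to_alias = 1;
+-- теперь `b` внутри argMax трактуется как столбец, а не как синоним sum(b)
+SELECT argMax(a, b), sum(b) AS b FROM t;
+```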
- -[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/syntax/) diff --git a/docs/ru/sql-reference/table-functions/file.md b/docs/ru/sql-reference/table-functions/file.md index f9bdf902ad8..1d8604528be 100644 --- a/docs/ru/sql-reference/table-functions/file.md +++ b/docs/ru/sql-reference/table-functions/file.md @@ -126,4 +126,3 @@ SELECT count(*) FROM file('big_dir/file{0..9}{0..9}{0..9}', 'CSV', 'name String, - [Виртуальные столбцы](index.md#table_engines-virtual_columns) -[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/table-functions/file/) diff --git a/docs/ru/sql-reference/table-functions/generate.md b/docs/ru/sql-reference/table-functions/generate.md index 47b7e43bc86..6dd88b622fc 100644 --- a/docs/ru/sql-reference/table-functions/generate.md +++ b/docs/ru/sql-reference/table-functions/generate.md @@ -10,10 +10,11 @@ toc_title: generateRandom Поддерживает все типы данных, которые могут храниться в таблице, за исключением `LowCardinality` и `AggregateFunction`. ``` sql -generateRandom('name TypeName[, name TypeName]...', [, 'random_seed'[, 'max_string_length'[, 'max_array_length']]]); +generateRandom('name TypeName[, name TypeName]...', [, 'random_seed'[, 'max_string_length'[, 'max_array_length']]]) ``` -**Входные параметры** +**Аргументы** + - `name` — название соответствующего столбца. - `TypeName` — тип соответствующего столбца. - `max_array_length` — максимальная длина массива для всех сгенерированных массивов. По умолчанию `10`. @@ -38,4 +39,3 @@ SELECT * FROM generateRandom('a Array(Int8), d Decimal32(4), c Tuple(DateTime64( └──────────┴──────────────┴────────────────────────────────────────────────────────────────────┘ ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/table_functions/generate/) diff --git a/docs/ru/sql-reference/table-functions/hdfs.md b/docs/ru/sql-reference/table-functions/hdfs.md index 6edd70b7b1b..56aaeae487c 100644 --- a/docs/ru/sql-reference/table-functions/hdfs.md +++ b/docs/ru/sql-reference/table-functions/hdfs.md @@ -61,4 +61,3 @@ LIMIT 2 - [Виртуальные столбцы](index.md#table_engines-virtual_columns) -[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/table_functions/hdfs/) diff --git a/docs/ru/sql-reference/table-functions/index.md b/docs/ru/sql-reference/table-functions/index.md index da04b2c7a10..fcd428df5d1 100644 --- a/docs/ru/sql-reference/table-functions/index.md +++ b/docs/ru/sql-reference/table-functions/index.md @@ -32,5 +32,6 @@ toc_title: "Введение" | [jdbc](jdbc.md) | Создаёт таблицу с дижком [JDBC](../../engines/table-engines/integrations/jdbc.md). | | [odbc](odbc.md) | Создаёт таблицу с движком [ODBC](../../engines/table-engines/integrations/odbc.md). | | [hdfs](hdfs.md) | Создаёт таблицу с движком [HDFS](../../engines/table-engines/integrations/hdfs.md). | +| [s3](s3.md) | Создаёт таблицу с движком [S3](../../engines/table-engines/integrations/s3.md). 
| -[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/table_functions/) +[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/table-functions/) diff --git a/docs/ru/sql-reference/table-functions/input.md b/docs/ru/sql-reference/table-functions/input.md index 96cf7515d52..0f5f621a247 100644 --- a/docs/ru/sql-reference/table-functions/input.md +++ b/docs/ru/sql-reference/table-functions/input.md @@ -43,4 +43,3 @@ $ cat data.csv | clickhouse-client --query="INSERT INTO test FORMAT CSV" $ cat data.csv | clickhouse-client --query="INSERT INTO test SELECT * FROM input('test_structure') FORMAT CSV" ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/table_functions/input/) diff --git a/docs/ru/sql-reference/table-functions/jdbc.md b/docs/ru/sql-reference/table-functions/jdbc.md index d388262606f..4fc237f940d 100644 --- a/docs/ru/sql-reference/table-functions/jdbc.md +++ b/docs/ru/sql-reference/table-functions/jdbc.md @@ -24,4 +24,3 @@ SELECT * FROM jdbc('mysql://localhost:3306/?user=root&password=root', 'schema', SELECT * FROM jdbc('datasource://mysql-local', 'schema', 'table') ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/table_functions/jdbc/) diff --git a/docs/ru/sql-reference/table-functions/merge.md b/docs/ru/sql-reference/table-functions/merge.md index 0822fdfe535..5b33f458468 100644 --- a/docs/ru/sql-reference/table-functions/merge.md +++ b/docs/ru/sql-reference/table-functions/merge.md @@ -9,4 +9,3 @@ toc_title: merge Структура таблицы берётся из первой попавшейся таблицы, подходящей под регулярное выражение. -[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/table_functions/merge/) diff --git a/docs/ru/sql-reference/table-functions/mysql.md b/docs/ru/sql-reference/table-functions/mysql.md index 18b34d0bf6c..665f1058ba2 100644 --- a/docs/ru/sql-reference/table-functions/mysql.md +++ b/docs/ru/sql-reference/table-functions/mysql.md @@ -5,15 +5,15 @@ toc_title: mysql # mysql {#mysql} -Позволяет выполнять запросы `SELECT` над данными, хранящимися на удалённом MySQL сервере. +Позволяет выполнять запросы `SELECT` и `INSERT` над данными, хранящимися на удалённом MySQL сервере. **Синтаксис** ``` sql -mysql('host:port', 'database', 'table', 'user', 'password'[, replace_query, 'on_duplicate_clause']); +mysql('host:port', 'database', 'table', 'user', 'password'[, replace_query, 'on_duplicate_clause']) ``` -**Параметры** +**Аргументы** - `host:port` — адрес сервера MySQL. @@ -29,9 +29,10 @@ mysql('host:port', 'database', 'table', 'user', 'password'[, replace_query, 'on_ - `0` - выполняется запрос `INSERT INTO`. - `1` - выполняется запрос `REPLACE INTO`. -- `on_duplicate_clause` — выражение `ON DUPLICATE KEY on_duplicate_clause`, добавляемое в запрос `INSERT`. Может быть передано только с помощью `replace_query = 0` (если вы одновременно передадите `replace_query = 1` и `on_duplicate_clause`, будет сгенерировано исключение). +- `on_duplicate_clause` — выражение `ON DUPLICATE KEY on_duplicate_clause`, добавляемое в запрос `INSERT`. Может быть передано только с помощью `replace_query = 0` (если вы одновременно передадите `replace_query = 1` и `on_duplicate_clause`, будет сгенерировано исключение). - Пример: `INSERT INTO t (c1,c2) VALUES ('a', 2) ON DUPLICATE KEY UPDATE c2 = c2 + 1`, где `on_duplicate_clause` это `UPDATE c2 = c2 + 1;` + Пример: `INSERT INTO t (c1,c2) VALUES ('a', 2) ON DUPLICATE KEY UPDATE c2 = c2 + 1`, где `on_duplicate_clause` это `UPDATE c2 = c2 + 1`. 
+ Выражения, которые могут использоваться в качестве `on_duplicate_clause` в секции `ON DUPLICATE KEY`, можно посмотреть в документации по [MySQL](http://www.mysql.ru/docs/). Простые условия `WHERE` такие как `=, !=, >, >=, <, =` выполняются на стороне сервера MySQL. @@ -42,7 +43,7 @@ mysql('host:port', 'database', 'table', 'user', 'password'[, replace_query, 'on_ Объект таблицы с теми же столбцами, что и в исходной таблице MySQL. !!! note "Примечание" - Чтобы отличить табличную функцию `mysql (...)` в запросе `INSERT` от имени таблицы со списком имен столбцов, используйте ключевые слова `FUNCTION` или `TABLE FUNCTION`. См. примеры ниже. + Чтобы отличить табличную функцию `mysql (...)` в запросе `INSERT` от имени таблицы со списком столбцов, используйте ключевые слова `FUNCTION` или `TABLE FUNCTION`. См. примеры ниже. **Примеры** @@ -96,4 +97,3 @@ SELECT * FROM mysql('localhost:3306', 'test', 'test', 'bayonet', '123'); - [Движок таблиц ‘MySQL’](../../sql-reference/table-functions/mysql.md) - [Использование MySQL как источника данных для внешнего словаря](../../sql-reference/table-functions/mysql.md#dicts-external_dicts_dict_sources-mysql) -[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/table_functions/mysql/) diff --git a/docs/ru/sql-reference/table-functions/numbers.md b/docs/ru/sql-reference/table-functions/numbers.md index 005f400e082..71f63078415 100644 --- a/docs/ru/sql-reference/table-functions/numbers.md +++ b/docs/ru/sql-reference/table-functions/numbers.md @@ -25,4 +25,3 @@ SELECT * FROM system.numbers LIMIT 10; select toDate('2010-01-01') + number as d FROM numbers(365); ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/table_functions/numbers/) diff --git a/docs/ru/sql-reference/table-functions/odbc.md b/docs/ru/sql-reference/table-functions/odbc.md index 19203123840..557e7d2a15b 100644 --- a/docs/ru/sql-reference/table-functions/odbc.md +++ b/docs/ru/sql-reference/table-functions/odbc.md @@ -103,4 +103,3 @@ SELECT * FROM odbc('DSN=mysqlconn', 'test', 'test') - [Внешние словари ODBC](../../sql-reference/table-functions/odbc.md#dicts-external_dicts_dict_sources-odbc) - [Движок таблиц ODBC](../../sql-reference/table-functions/odbc.md). -[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/table_functions/jdbc/) diff --git a/docs/ru/sql-reference/table-functions/postgresql.md b/docs/ru/sql-reference/table-functions/postgresql.md new file mode 100644 index 00000000000..2d8afe28f1e --- /dev/null +++ b/docs/ru/sql-reference/table-functions/postgresql.md @@ -0,0 +1,120 @@ +--- +toc_priority: 42 +toc_title: postgresql +--- + +# postgresql {#postgresql} + +Позволяет выполнять запросы `SELECT` и `INSERT` над таблицами удаленной БД PostgreSQL. + +**Синтаксис** + +``` sql +postgresql('host:port', 'database', 'table', 'user', 'password'[, `schema`]) +``` + +**Аргументы** + +- `host:port` — адрес сервера PostgreSQL. +- `database` — имя базы данных на удалённом сервере. +- `table` — имя таблицы на удалённом сервере. +- `user` — пользователь PostgreSQL. +- `password` — пароль пользователя. +- `schema` — имя схемы, если не используется схема по умолчанию. Необязательный аргумент. + +**Возвращаемое значение** + +Таблица с теми же столбцами, что и в исходной таблице PostgreSQL. + +!!! info "Примечание" + В запросах `INSERT` для того чтобы отличить табличную функцию `postgresql(...)` от таблицы со списком имен столбцов вы должны указывать ключевые слова `FUNCTION` или `TABLE FUNCTION`. См. примеры ниже. 
+
+## Особенности реализации {#implementation-details}
+
+Запросы `SELECT` на стороне PostgreSQL выполняются как `COPY (SELECT ...) TO STDOUT` внутри транзакции PostgreSQL только на чтение с коммитом после каждого запроса `SELECT`.
+
+Простые условия для `WHERE`, такие как `=`, `!=`, `>`, `>=`, `<`, `<=` и `IN`, исполняются на стороне сервера PostgreSQL.
+
+Все операции объединения, агрегации, сортировки, условия `IN [ array ]` и ограничения `LIMIT` выполняются на стороне ClickHouse только после того, как запрос к PostgreSQL закончился.
+
+Запросы `INSERT` на стороне PostgreSQL выполняются как `COPY "table_name" (field1, field2, ... fieldN) FROM STDIN` внутри транзакции PostgreSQL с автоматическим коммитом после каждого запроса `INSERT`.
+
+Массивы PostgreSQL конвертируются в массивы ClickHouse.
+
+!!! info "Примечание"
+    Будьте внимательны: в PostgreSQL массивы, созданные как `type_name[]`, являются многомерными и могут содержать разное количество измерений в разных строках одной таблицы. В ClickHouse допустимы только многомерные массивы с одинаковым количеством измерений во всех строках таблицы.
+
+При использовании словаря PostgreSQL поддерживается приоритет реплик. Чем больше номер реплики, тем ниже ее приоритет. Наивысший приоритет у реплики с номером `0`.
+
+**Примеры**
+
+Таблица в PostgreSQL:
+
+``` text
+postgres=# CREATE TABLE "public"."test" (
+"int_id" SERIAL,
+"int_nullable" INT NULL DEFAULT NULL,
+"float" FLOAT NOT NULL,
+"str" VARCHAR(100) NOT NULL DEFAULT '',
+"float_nullable" FLOAT NULL DEFAULT NULL,
+PRIMARY KEY (int_id));
+
+CREATE TABLE
+
+postgres=# INSERT INTO test (int_id, str, "float") VALUES (1,'test',2);
+INSERT 0 1
+
+postgres=# SELECT * FROM test;
+ int_id | int_nullable | float | str  | float_nullable
+--------+--------------+-------+------+----------------
+      1 |              |     2 | test |
+(1 row)
+```
+
+Получение данных в ClickHouse:
+
+```sql
+SELECT * FROM postgresql('localhost:5432', 'test', 'test', 'postgresql_user', 'password') WHERE str IN ('test');
+```
+
+``` text
+┌─int_id─┬─int_nullable─┬─float─┬─str──┬─float_nullable─┐
+│      1 │         ᴺᵁᴸᴸ │     2 │ test │           ᴺᵁᴸᴸ │
+└────────┴──────────────┴───────┴──────┴────────────────┘
+```
+
+Вставка данных:
+
+```sql
+INSERT INTO TABLE FUNCTION postgresql('localhost:5432', 'test', 'test', 'postgresql_user', 'password') (int_id, float) VALUES (2, 3);
+SELECT * FROM postgresql('localhost:5432', 'test', 'test', 'postgresql_user', 'password');
+```
+
+``` text
+┌─int_id─┬─int_nullable─┬─float─┬─str──┬─float_nullable─┐
+│      1 │         ᴺᵁᴸᴸ │     2 │ test │           ᴺᵁᴸᴸ │
+│      2 │         ᴺᵁᴸᴸ │     3 │      │           ᴺᵁᴸᴸ │
+└────────┴──────────────┴───────┴──────┴────────────────┘
+```
+
+Использование нестандартной схемы:
+
+```text
+postgres=# CREATE SCHEMA "nice.schema";
+
+postgres=# CREATE TABLE "nice.schema"."nice.table" (a integer);
+
+postgres=# INSERT INTO "nice.schema"."nice.table" SELECT i FROM generate_series(0, 99) as t(i)
+```
+
+```sql
+CREATE TABLE pg_table_schema_with_dots (a UInt32)
+         ENGINE PostgreSQL('localhost:5432', 'clickhouse', 'nice.table', 'postgresql_user', 'password', 'nice.schema');
+```
+
+**См. также**
+
+- [Движок таблиц PostgreSQL](../../engines/table-engines/integrations/postgresql.md)
+- [Использование PostgreSQL как источника данных для внешнего словаря](../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md#dicts-external_dicts_dict_sources-postgresql)
+
+[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/table-functions/postgresql/)
diff --git a/docs/ru/sql-reference/table-functions/remote.md b/docs/ru/sql-reference/table-functions/remote.md
index 83b3687f61d..00179abb207 100644
--- a/docs/ru/sql-reference/table-functions/remote.md
+++ b/docs/ru/sql-reference/table-functions/remote.md
@@ -106,4 +106,3 @@ INSERT INTO FUNCTION remote('127.0.0.1', currentDatabase(), 'remote_table') VALU
 SELECT * FROM remote_table;
 ```
 
-[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/table-functions/remote/)
diff --git a/docs/ru/sql-reference/table-functions/s3.md b/docs/ru/sql-reference/table-functions/s3.md
new file mode 100644
index 00000000000..e062e59c67c
--- /dev/null
+++ b/docs/ru/sql-reference/table-functions/s3.md
@@ -0,0 +1,141 @@
+---
+toc_priority: 45
+toc_title: s3
+---
+
+# Табличная функция S3 {#s3-table-function}
+
+Предоставляет табличный интерфейс для выбора и вставки файлов в [Amazon S3](https://aws.amazon.com/s3/). Эта табличная функция похожа на [hdfs](../../sql-reference/table-functions/hdfs.md), но обеспечивает специфические для S3 возможности.
+
+**Синтаксис**
+
+``` sql
+s3(path, [aws_access_key_id, aws_secret_access_key,] format, structure, [compression])
+```
+
+**Аргументы**
+
+- `path` — URL-адрес бакета с указанием пути к файлу. Поддерживает следующие подстановочные знаки в режиме «только чтение»: `*`, `?`, `{abc,def}` и `{N..M}`, где `N`, `M` — числа, `'abc'`, `'def'` — строки. Подробнее смотрите [здесь](../../engines/table-engines/integrations/s3.md#wildcards-in-path).
+- `format` — [формат](../../interfaces/formats.md#formats) файла.
+- `structure` — структура таблицы в формате `'column1_name column1_type, column2_name column2_type, ...'`.
+- `compression` — метод сжатия. Возможные значения: `none`, `gzip/gz`, `brotli/br`, `xz/LZMA`, `zstd/zst`. Необязательный параметр: по умолчанию сжатие определяется автоматически по расширению файла.
+
+**Возвращаемые значения**
+
+Таблица с указанной структурой для чтения или записи данных в указанный файл.
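+
+Аргументы `aws_access_key_id` и `aws_secret_access_key` в примерах ниже не используются. Условный набросок запроса с явной передачей учетных данных (значения ключей здесь вымышленные):
+
+``` sql
+-- Чтение файла из бакета с явно указанными ключами доступа.
+SELECT count(*)
+FROM s3('https://storage.yandexcloud.net/my-test-bucket-768/data.csv', 'ACCESS_KEY_ID', 'SECRET_ACCESS_KEY', 'CSV', 'column1 UInt32, column2 UInt32, column3 UInt32');
+```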
+
+**Примеры**
+
+Выборка первых двух строк таблицы из файла S3 `https://storage.yandexcloud.net/my-test-bucket-768/data.csv`:
+
+Запрос:
+
+``` sql
+SELECT *
+FROM s3('https://storage.yandexcloud.net/my-test-bucket-768/data.csv', 'CSV', 'column1 UInt32, column2 UInt32, column3 UInt32')
+LIMIT 2;
+```
+
+Результат:
+
+``` text
+┌─column1─┬─column2─┬─column3─┐
+│       1 │       2 │       3 │
+│       3 │       2 │       1 │
+└─────────┴─────────┴─────────┘
+```
+
+То же самое, но файл со сжатием `gzip`:
+
+Запрос:
+
+``` sql
+SELECT *
+FROM s3('https://storage.yandexcloud.net/my-test-bucket-768/data.csv.gz', 'CSV', 'column1 UInt32, column2 UInt32, column3 UInt32', 'gzip')
+LIMIT 2;
+```
+
+Результат:
+
+``` text
+┌─column1─┬─column2─┬─column3─┐
+│       1 │       2 │       3 │
+│       3 │       2 │       1 │
+└─────────┴─────────┴─────────┘
+```
+
+## Примеры использования {#usage-examples}
+
+Предположим, у нас есть несколько файлов со следующими URI на S3:
+
+- 'https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_1.csv'
+- 'https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_2.csv'
+- 'https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_3.csv'
+- 'https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_4.csv'
+- 'https://storage.yandexcloud.net/my-test-bucket-768/another_prefix/some_file_1.csv'
+- 'https://storage.yandexcloud.net/my-test-bucket-768/another_prefix/some_file_2.csv'
+- 'https://storage.yandexcloud.net/my-test-bucket-768/another_prefix/some_file_3.csv'
+- 'https://storage.yandexcloud.net/my-test-bucket-768/another_prefix/some_file_4.csv'
+
+Подсчитаем количество строк в файлах, заканчивающихся цифрами от 1 до 3:
+
+``` sql
+SELECT count(*)
+FROM s3('https://storage.yandexcloud.net/my-test-bucket-768/{some,another}_prefix/some_file_{1..3}.csv', 'CSV', 'name String, value UInt32');
+```
+
+``` text
+┌─count()─┐
+│      18 │
+└─────────┘
+```
+
+Подсчитаем общее количество строк во всех файлах этих двух каталогов:
+
+``` sql
+SELECT count(*)
+FROM s3('https://storage.yandexcloud.net/my-test-bucket-768/{some,another}_prefix/*', 'CSV', 'name String, value UInt32');
+```
+
+``` text
+┌─count()─┐
+│      24 │
+└─────────┘
+```
+
+!!! warning "Внимание"
+    Если список файлов содержит диапазоны чисел с ведущими нулями, используйте конструкцию с фигурными скобками для каждой цифры отдельно или используйте `?`.
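+
+Например, для имен файлов с ведущими нулями шаблон «по одной цифре в отдельных скобках» мог бы выглядеть так (условный набросок; имя бакета и структура взяты из примеров выше):
+
+``` sql
+-- Каждая позиция номера задана отдельным диапазоном {0..9}, поэтому ведущие нули сохраняются.
+SELECT count(*)
+FROM s3('https://storage.yandexcloud.net/my-test-bucket-768/big_prefix/file-{0..9}{0..9}{0..9}.csv', 'CSV', 'name String, value UInt32');
+```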
+
+Подсчитаем общее количество строк в файлах с именами `file-000.csv`, `file-001.csv`, … , `file-999.csv`:
+
+``` sql
+SELECT count(*)
+FROM s3('https://storage.yandexcloud.net/my-test-bucket-768/big_prefix/file-{000..999}.csv', 'CSV', 'name String, value UInt32');
+```
+
+``` text
+┌─count()─┐
+│      12 │
+└─────────┘
+```
+
+Запишем данные в файл `test-data.csv.gz`:
+
+``` sql
+INSERT INTO FUNCTION s3('https://storage.yandexcloud.net/my-test-bucket-768/test-data.csv.gz', 'CSV', 'name String, value UInt32', 'gzip')
+VALUES ('test-data', 1), ('test-data-2', 2);
+```
+
+Запишем данные из существующей таблицы в файл `test-data.csv.gz`:
+
+``` sql
+INSERT INTO FUNCTION s3('https://storage.yandexcloud.net/my-test-bucket-768/test-data.csv.gz', 'CSV', 'name String, value UInt32', 'gzip')
+SELECT name, value FROM existing_table;
+```
+
+**Смотрите также**
+
+- [Движок таблиц S3](../../engines/table-engines/integrations/s3.md)
+
+[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/table-functions/s3/)
+
diff --git a/docs/ru/sql-reference/table-functions/url.md b/docs/ru/sql-reference/table-functions/url.md
index 043a9231e75..a41a1f53cde 100644
--- a/docs/ru/sql-reference/table-functions/url.md
+++ b/docs/ru/sql-reference/table-functions/url.md
@@ -5,7 +5,7 @@ toc_title: url
 
 # url {#url}
 
-Функция `url` берет данные по указанному адресу `URL` и создает из них таблицу указанной структуры со столбцами указанного формата. 
+Функция `url` берет данные по указанному адресу `URL` и создает из них таблицу указанной структуры со столбцами указанного формата.
 
 Функция `url` может быть использована в запросах `SELECT` и `INSERT` с таблицами на движке [URL](../../engines/table-engines/special/url.md).
 
@@ -27,7 +27,7 @@ url(URL, format, structure)
 
 **Примеры**
 
-Получение с HTTP-сервера первых 3 строк таблицы с данными в формате [CSV](../../interfaces/formats.md/#csv), содержащей столбцы типа [String](../../sql-reference/data-types/string.md) и [UInt32](../../sql-reference/data-types/int-uint.md).
+Получение с HTTP-сервера первых 3 строк таблицы с данными в формате [CSV](../../interfaces/formats.md#csv), содержащей столбцы типа [String](../../sql-reference/data-types/string.md) и [UInt32](../../sql-reference/data-types/int-uint.md).
 
 ``` sql
 SELECT * FROM url('http://127.0.0.1:12345/', CSV, 'column1 String, column2 UInt32') LIMIT 3;
@@ -41,4 +41,3 @@ INSERT INTO FUNCTION url('http://127.0.0.1:8123/?query=INSERT+INTO+test_table+FO
 SELECT * FROM test_table;
 ```
 
-[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/table-functions/url/)
diff --git a/docs/ru/sql-reference/table-functions/view.md b/docs/ru/sql-reference/table-functions/view.md
index 8a97253d048..bfd0a891179 100644
--- a/docs/ru/sql-reference/table-functions/view.md
+++ b/docs/ru/sql-reference/table-functions/view.md
@@ -8,7 +8,7 @@
 view(subquery)
 ```
 
-**Входные параметры**
+**Аргументы**
 
 - `subquery` — запрос `SELECT`.
@@ -32,7 +32,7 @@ view(subquery)
 Запрос:
 
 ``` sql
-SELECT * FROM view(SELECT name FROM months)
+SELECT * FROM view(SELECT name FROM months);
 ```
 
 Результат:
 
@@ -49,14 +49,15 @@ SELECT * FROM view(SELECT name FROM months)
 Вы можете использовать функцию `view` как параметр табличных функций [remote](https://clickhouse.tech/docs/ru/sql-reference/table-functions/remote/#remote-remotesecure) и [cluster](https://clickhouse.tech/docs/ru/sql-reference/table-functions/cluster/#cluster-clusterallreplicas):
 
 ``` sql
-SELECT * FROM remote(`127.0.0.1`, view(SELECT a, b, c FROM table_name))
+SELECT * FROM remote(`127.0.0.1`, view(SELECT a, b, c FROM table_name));
 ```
 
 ``` sql
-SELECT * FROM cluster(`cluster_name`, view(SELECT a, b, c FROM table_name))
+SELECT * FROM cluster(`cluster_name`, view(SELECT a, b, c FROM table_name));
 ```
 
 **Смотрите также**
 
 - [view](https://clickhouse.tech/docs/ru/engines/table-engines/special/view/#table_engines-view)
 
-[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/table_functions/view/)
\ No newline at end of file
+
+[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/table-functions/view/)
diff --git a/docs/ru/whats-new/extended-roadmap.md b/docs/ru/whats-new/extended-roadmap.md
index 16c7709ec28..7b317d424f1 100644
--- a/docs/ru/whats-new/extended-roadmap.md
+++ b/docs/ru/whats-new/extended-roadmap.md
@@ -7,4 +7,3 @@ toc_title: Roadmap
 Планы развития на 2021 год опубликованы для обсуждения [здесь](https://github.com/ClickHouse/ClickHouse/issues/17623).
 
-[Оригинальная статья](https://clickhouse.tech/docs/ru/roadmap/)
diff --git a/docs/ru/whats-new/security-changelog.md b/docs/ru/whats-new/security-changelog.md
index 1f46535833d..e3d26e772c4 100644
--- a/docs/ru/whats-new/security-changelog.md
+++ b/docs/ru/whats-new/security-changelog.md
@@ -73,4 +73,3 @@ unixODBC позволял указать путь для подключения
 Обнаружено благодаря: the UK’s National Cyber Security Centre (NCSC)
 
-{## [Оригинальная статья](https://clickhouse.tech/docs/ru/security_changelog/) ##}
diff --git a/docs/tools/.gitignore b/docs/tools/.gitignore
new file mode 100644
index 00000000000..443cee8638c
--- /dev/null
+++ b/docs/tools/.gitignore
@@ -0,0 +1,3 @@
+build
+__pycache__
+*.pyc
diff --git a/docs/tools/README.md b/docs/tools/README.md
index 3c8862f1079..0a6c41d8089 100644
--- a/docs/tools/README.md
+++ b/docs/tools/README.md
@@ -51,5 +51,5 @@ The easiest way to see the result is to use `--livereload=8888` argument of buil
 
 At the moment there’s no easy way to do just that, but you can consider:
 
-- To hit the “Watch” button on top of GitHub web interface to know as early as possible, even during pull request. Alternative to this is `#github-activity` channel of [public ClickHouse Slack](https://join.slack.com/t/clickhousedb/shared_invite/enQtOTUzMjM4ODQwNTc5LWJmMjE3Yjc2YmI1ZDBlZmI4ZTc3OWY3ZTIwYTljYzY4MzBlODM3YzBjZTc1YmYyODRlZTJkYTgzYzBiNTA2Yjk).
+- Hit the “Watch” button on top of the GitHub web interface to be notified as early as possible, even during the pull request stage. An alternative is the `#github-activity` channel of the [public ClickHouse Slack](https://join.slack.com/t/clickhousedb/shared_invite/zt-nwwakmk4-xOJ6cdy0sJC3It8j348~IA).
 - Some search engines allow you to subscribe to changes of a specific website via email, and you can opt in to that for https://clickhouse.tech.
diff --git a/docs/tools/build.py b/docs/tools/build.py index dfb9661c326..5a1f10268ab 100755 --- a/docs/tools/build.py +++ b/docs/tools/build.py @@ -65,8 +65,6 @@ def build_for_lang(lang, args): languages = { 'en': 'English', 'zh': '中文', - 'es': 'Español', - 'fr': 'Français', 'ru': 'Русский', 'ja': '日本語' } @@ -74,8 +72,6 @@ def build_for_lang(lang, args): site_names = { 'en': 'ClickHouse %s Documentation', 'zh': 'ClickHouse文档 %s', - 'es': 'Documentación de ClickHouse %s', - 'fr': 'Documentation ClickHouse %s', 'ru': 'Документация ClickHouse %s', 'ja': 'ClickHouseドキュメント %s' } @@ -183,7 +179,7 @@ if __name__ == '__main__': website_dir = os.path.join(src_dir, 'website') arg_parser = argparse.ArgumentParser() - arg_parser.add_argument('--lang', default='en,es,fr,ru,zh,ja') + arg_parser.add_argument('--lang', default='en,ru,zh,ja') arg_parser.add_argument('--blog-lang', default='en,ru') arg_parser.add_argument('--docs-dir', default='.') arg_parser.add_argument('--theme-dir', default=website_dir) diff --git a/docs/tools/make_links.sh b/docs/tools/make_links.sh index 743d4eebf16..801086178bf 100755 --- a/docs/tools/make_links.sh +++ b/docs/tools/make_links.sh @@ -8,7 +8,7 @@ BASE_DIR=$(dirname $(readlink -f $0)) function do_make_links() { set -x - langs=(en es zh fr ru ja tr fa) + langs=(en zh ru ja) src_file="$1" for lang in "${langs[@]}" do diff --git a/docs/tools/output.md b/docs/tools/output.md deleted file mode 100644 index 91ec6e75999..00000000000 --- a/docs/tools/output.md +++ /dev/null @@ -1,204 +0,0 @@ -# What is ClickHouse? {#what-is-clickhouse} - -ClickHouse is a column-oriented database management system (DBMS) for -online analytical processing of queries (OLAP). - -In a “normal” row-oriented DBMS, data is stored in this order: - - Row WatchID JavaEnable Title GoodEvent EventTime - ----- ------------- ------------ -------------------- ----------- --------------------- - #0 89354350662 1 Investor Relations 1 2016-05-18 05:19:20 - #1 90329509958 0 Contact us 1 2016-05-18 08:10:20 - #2 89953706054 1 Mission 1 2016-05-18 07:38:00 - #N ... ... ... ... ... - -In other words, all the values related to a row are physically stored -next to each other. - -Examples of a row-oriented DBMS are MySQL, Postgres, and MS SQL Server. -{: .grey } - -In a column-oriented DBMS, data is stored like this: - - Row: #0 #1 #2 #N - ------------- --------------------- --------------------- --------------------- ----- - WatchID: 89354350662 90329509958 89953706054 ... - JavaEnable: 1 0 1 ... - Title: Investor Relations Contact us Mission ... - GoodEvent: 1 1 1 ... - EventTime: 2016-05-18 05:19:20 2016-05-18 08:10:20 2016-05-18 07:38:00 ... - -These examples only show the order that data is arranged in. The values -from different columns are stored separately, and data from the same -column is stored together. - -Examples of a column-oriented DBMS: Vertica, Paraccel (Actian Matrix and -Amazon Redshift), Sybase IQ, Exasol, Infobright, InfiniDB, MonetDB -(VectorWise and Actian Vector), LucidDB, SAP HANA, Google Dremel, Google -PowerDrill, Druid, and kdb+. {: .grey } - -Different orders for storing data are better suited to different -scenarios. 
The data access scenario refers to what queries are made, how -often, and in what proportion; how much data is read for each type of -query – rows, columns, and bytes; the relationship between reading and -updating data; the working size of the data and how locally it is used; -whether transactions are used, and how isolated they are; requirements -for data replication and logical integrity; requirements for latency and -throughput for each type of query, and so on. - -The higher the load on the system, the more important it is to customize -the system set up to match the requirements of the usage scenario, and -the more fine grained this customization becomes. There is no system -that is equally well-suited to significantly different scenarios. If a -system is adaptable to a wide set of scenarios, under a high load, the -system will handle all the scenarios equally poorly, or will work well -for just one or few of possible scenarios. - -## Key Properties of the OLAP scenario {#key-properties-of-the-olap-scenario} - -- The vast majority of requests are for read access. -- Data is updated in fairly large batches (\> 1000 rows), not by - single rows; or it is not updated at all. -- Data is added to the DB but is not modified. -- For reads, quite a large number of rows are extracted from the DB, - but only a small subset of columns. -- Tables are “wide,” meaning they contain a large number of columns. -- Queries are relatively rare (usually hundreds of queries per server - or less per second). -- For simple queries, latencies around 50 ms are allowed. -- Column values are fairly small: numbers and short strings (for - example, 60 bytes per URL). -- Requires high throughput when processing a single query (up to - billions of rows per second per server). -- Transactions are not necessary. -- Low requirements for data consistency. -- There is one large table per query. All tables are small, except for - one. -- A query result is significantly smaller than the source data. In - other words, data is filtered or aggregated, so the result fits in a - single server’s RAM. - -It is easy to see that the OLAP scenario is very different from other -popular scenarios (such as OLTP or Key-Value access). So it doesn’t make -sense to try to use OLTP or a Key-Value DB for processing analytical -queries if you want to get decent performance. For example, if you try -to use MongoDB or Redis for analytics, you will get very poor -performance compared to OLAP databases. - -## Why Column-Oriented Databases Work Better in the OLAP Scenario {#why-column-oriented-databases-work-better-in-the-olap-scenario} - -Column-oriented databases are better suited to OLAP scenarios: they are -at least 100 times faster in processing most queries. The reasons are -explained in detail below, but the fact is easier to demonstrate -visually: - -**Row-oriented DBMS** - -![Row-oriented](images/row_oriented.gif#) - -**Column-oriented DBMS** - -![Column-oriented](images/column_oriented.gif#) - -See the difference? - -### Input/output {#inputoutput} - -1. For an analytical query, only a small number of table columns need - to be read. In a column-oriented database, you can read just the - data you need. For example, if you need 5 columns out of 100, you - can expect a 20-fold reduction in I/O. -2. Since data is read in packets, it is easier to compress. Data in - columns is also easier to compress. This further reduces the I/O - volume. -3. Due to the reduced I/O, more data fits in the system cache. 
- -For example, the query “count the number of records for each advertising -platform” requires reading one “advertising platform ID” column, which -takes up 1 byte uncompressed. If most of the traffic was not from -advertising platforms, you can expect at least 10-fold compression of -this column. When using a quick compression algorithm, data -decompression is possible at a speed of at least several gigabytes of -uncompressed data per second. In other words, this query can be -processed at a speed of approximately several billion rows per second on -a single server. This speed is actually achieved in practice. - -
- -Example - - $ clickhouse-client - ClickHouse client version 0.0.52053. - Connecting to localhost:9000. - Connected to ClickHouse server version 0.0.52053. - - :) SELECT CounterID, count() FROM hits GROUP BY CounterID ORDER BY count() DESC LIMIT 20 - - SELECT - CounterID, - count() - FROM hits - GROUP BY CounterID - ORDER BY count() DESC - LIMIT 20 - - ┌─CounterID─┬──count()─┐ - │ 114208 │ 56057344 │ - │ 115080 │ 51619590 │ - │ 3228 │ 44658301 │ - │ 38230 │ 42045932 │ - │ 145263 │ 42042158 │ - │ 91244 │ 38297270 │ - │ 154139 │ 26647572 │ - │ 150748 │ 24112755 │ - │ 242232 │ 21302571 │ - │ 338158 │ 13507087 │ - │ 62180 │ 12229491 │ - │ 82264 │ 12187441 │ - │ 232261 │ 12148031 │ - │ 146272 │ 11438516 │ - │ 168777 │ 11403636 │ - │ 4120072 │ 11227824 │ - │ 10938808 │ 10519739 │ - │ 74088 │ 9047015 │ - │ 115079 │ 8837972 │ - │ 337234 │ 8205961 │ - └───────────┴──────────┘ - - 20 rows in set. Elapsed: 0.153 sec. Processed 1.00 billion rows, 4.00 GB (6.53 billion rows/s., 26.10 GB/s.) - - :) - -
- -### CPU {#cpu} - -Since executing a query requires processing a large number of rows, it -helps to dispatch all operations for entire vectors instead of for -separate rows, or to implement the query engine so that there is almost -no dispatching cost. If you don’t do this, with any half-decent disk -subsystem, the query interpreter inevitably stalls the CPU. It makes -sense to both store data in columns and process it, when possible, by -columns. - -There are two ways to do this: - -1. A vector engine. All operations are written for vectors, instead of - for separate values. This means you don’t need to call operations - very often, and dispatching costs are negligible. Operation code - contains an optimized internal cycle. - -2. Code generation. The code generated for the query has all the - indirect calls in it. - -This is not done in “normal” databases, because it doesn’t make sense -when running simple queries. However, there are exceptions. For example, -MemSQL uses code generation to reduce latency when processing SQL -queries. (For comparison, analytical DBMSs require optimization of -throughput, not latency.) - -Note that for CPU efficiency, the query language must be declarative -(SQL or MDX), or at least a vector (J, K). The query should only contain -implicit loops, allowing for optimization. - -[Original article](https://clickhouse.tech/docs/en/) diff --git a/docs/tools/requirements.txt b/docs/tools/requirements.txt index 4106100bfa3..85f9dc2a9dd 100644 --- a/docs/tools/requirements.txt +++ b/docs/tools/requirements.txt @@ -10,7 +10,7 @@ cssmin==0.2.0 future==0.18.2 htmlmin==0.1.12 idna==2.10 -Jinja2==2.11.2 +Jinja2>=2.11.3 jinja2-highlight==0.6.1 jsmin==2.2.2 livereload==2.6.2 @@ -23,10 +23,9 @@ nltk==3.5 nose==1.3.7 protobuf==3.14.0 numpy==1.19.2 -Pygments==2.5.2 pymdown-extensions==8.0 python-slugify==4.0.1 -PyYAML==5.3.1 +PyYAML==5.4.1 repackage==0.7.3 requests==2.24.0 singledispatch==3.4.0.3 @@ -36,3 +35,4 @@ termcolor==1.1.0 tornado==6.1 Unidecode==1.1.1 urllib3==1.25.10 +Pygments>=2.7.4 diff --git a/docs/tools/single_page.py b/docs/tools/single_page.py index 05d50e768e2..a1e650d3ad3 100644 --- a/docs/tools/single_page.py +++ b/docs/tools/single_page.py @@ -24,55 +24,78 @@ def recursive_values(item): yield item +anchor_not_allowed_chars = re.compile(r'[^\w\-]') +def generate_anchor_from_path(path): + return re.sub(anchor_not_allowed_chars, '-', path) + +absolute_link = re.compile(r'^https?://') + + +def replace_link(match, path): + title = match.group(1) + link = match.group(2) + + # Not a relative link + if re.search(absolute_link, link): + return match.group(0) + + if link.endswith('/'): + link = link[0:-1] + '.md' + + return '{}(#{})'.format(title, generate_anchor_from_path(os.path.normpath(os.path.join(os.path.dirname(path), link)))) + + +# Concatenates Markdown files to a single file. 
def concatenate(lang, docs_path, single_page_file, nav): lang_path = os.path.join(docs_path, lang) - az_re = re.compile(r'[a-z]') proj_config = f'{docs_path}/toc_{lang}.yml' if os.path.exists(proj_config): with open(proj_config) as cfg_file: nav = yaml.full_load(cfg_file.read())['nav'] + files_to_concatenate = list(recursive_values(nav)) files_count = len(files_to_concatenate) logging.info(f'{files_count} files will be concatenated into single md-file for {lang}.') logging.debug('Concatenating: ' + ', '.join(files_to_concatenate)) assert files_count > 0, f'Empty single-page for {lang}' + link_regexp = re.compile(r'(\[[^\]]+\])\(([^)#]+)(?:#[^\)]+)?\)') + for path in files_to_concatenate: - if path.endswith('introduction/info.md'): - continue try: with open(os.path.join(lang_path, path)) as f: - anchors = set() - tmp_path = path.replace('/index.md', '/').replace('.md', '/') - prefixes = ['', '../', '../../', '../../../'] - parts = tmp_path.split('/') - anchors.add(parts[-2] + '/') - anchors.add('/'.join(parts[1:])) - - for part in parts[0:-2] if len(parts) > 2 else parts: - for prefix in prefixes: - anchor = prefix + tmp_path - if anchor: - anchors.add(anchor) - anchors.add('../' + anchor) - anchors.add('../../' + anchor) - tmp_path = tmp_path.replace(part, '..') - - for anchor in anchors: - if re.search(az_re, anchor): - single_page_file.write('' % anchor) - - single_page_file.write('\n') + # Insert a horizontal ruler. Then insert an anchor that we will link to. Its name will be a path to the .md file. + single_page_file.write('\n______\n\n' % generate_anchor_from_path(path)) in_metadata = False - for l in f: - if l.startswith('---'): + for line in f: + # Skip YAML metadata. + if line == '---\n': in_metadata = not in_metadata - if l.startswith('#'): - l = '#' + l + continue + if not in_metadata: - single_page_file.write(l) + # Increase the level of headers. + if line.startswith('#'): + line = '#' + line + + # Replace links within the docs. 
+ + if re.search(link_regexp, line): + line = re.sub( + link_regexp, + lambda match: replace_link(match, path), + line) + + # If failed to replace the relative link, print to log + if '../' in line: + logging.info('Failed to resolve relative link:') + logging.info(path) + logging.info(line) + + single_page_file.write(line) + except IOError as e: logging.warning(str(e)) @@ -86,7 +109,8 @@ def build_single_page_version(lang, args, nav, cfg): extra['single_page'] = True extra['is_amp'] = False - with util.autoremoved_file(os.path.join(args.docs_dir, lang, 'single.md')) as single_md: + single_md_path = os.path.join(args.docs_dir, lang, 'single.md') + with open(single_md_path, 'w') as single_md: concatenate(lang, args.docs_dir, single_md, nav) with util.temp_dir() as site_temp: @@ -123,11 +147,14 @@ def build_single_page_version(lang, args, nav, cfg): single_page_index_html = os.path.join(single_page_output_path, 'index.html') single_page_content_js = os.path.join(single_page_output_path, 'content.js') + with open(single_page_index_html, 'r') as f: sp_prefix, sp_js, sp_suffix = f.read().split('') + with open(single_page_index_html, 'w') as f: f.write(sp_prefix) f.write(sp_suffix) + with open(single_page_content_js, 'w') as f: if args.minify: import jsmin @@ -151,6 +178,7 @@ def build_single_page_version(lang, args, nav, cfg): js_in = ' '.join(website.get_js_in(args)) subprocess.check_call(f'cat {css_in} > {test_dir}/css/base.css', shell=True) subprocess.check_call(f'cat {js_in} > {test_dir}/js/base.js', shell=True) + if args.save_raw_single_page: shutil.copytree(test_dir, args.save_raw_single_page) @@ -194,3 +222,7 @@ def build_single_page_version(lang, args, nav, cfg): subprocess.check_call(' '.join(create_pdf_command), shell=True) logging.info(f'Finished building single page version for {lang}') + + if os.path.exists(single_md_path): + os.unlink(single_md_path) + \ No newline at end of file diff --git a/docs/tools/test.py b/docs/tools/test.py index 7d11157c986..00d1d47137f 100755 --- a/docs/tools/test.py +++ b/docs/tools/test.py @@ -68,17 +68,17 @@ def test_single_page(input_path, lang): f, features='html.parser' ) + anchor_points = set() + duplicate_anchor_points = 0 links_to_nowhere = 0 + for tag in soup.find_all(): for anchor_point in [tag.attrs.get('name'), tag.attrs.get('id')]: if anchor_point: - if anchor_point in anchor_points: - duplicate_anchor_points += 1 - logging.info('Duplicate anchor point: %s' % anchor_point) - else: - anchor_points.add(anchor_point) + anchor_points.add(anchor_point) + for tag in soup.find_all(): href = tag.attrs.get('href') if href and href.startswith('#') and href != '#': @@ -87,11 +87,8 @@ def test_single_page(input_path, lang): logging.info("Tag %s", tag) logging.info('Link to nowhere: %s' % href) - if duplicate_anchor_points: - logging.warning('Found %d duplicate anchor points' % duplicate_anchor_points) - if links_to_nowhere: - if lang == 'en' or lang == 'ru': # TODO: check all languages again + if lang == 'en' or lang == 'ru': logging.error(f'Found {links_to_nowhere} links to nowhere in {lang}') sys.exit(1) else: diff --git a/docs/tools/translate/add_meta_flag.py b/docs/tools/translate/add_meta_flag.py deleted file mode 100755 index d87aa044faf..00000000000 --- a/docs/tools/translate/add_meta_flag.py +++ /dev/null @@ -1,12 +0,0 @@ -#!/usr/bin/env python3 - -import sys - -import util - -if __name__ == '__main__': - flag_name = sys.argv[1] - path = sys.argv[2] - meta, content = util.read_md_file(path) - meta[flag_name] = True - util.write_md_file(path, 
meta, content) diff --git a/docs/tools/translate/babel-mapping.ini b/docs/tools/translate/babel-mapping.ini deleted file mode 100644 index 6a9a3e5c073..00000000000 --- a/docs/tools/translate/babel-mapping.ini +++ /dev/null @@ -1,3 +0,0 @@ -[python: **.py] -[jinja2: **/templates/**.html] -extensions=jinja2.ext.i18n,jinja2.ext.autoescape,jinja2.ext.with_ diff --git a/docs/tools/translate/filter.py b/docs/tools/translate/filter.py deleted file mode 100755 index 61e1104d345..00000000000 --- a/docs/tools/translate/filter.py +++ /dev/null @@ -1,199 +0,0 @@ -#!/usr/bin/env python3 - -import os -import sys -import json.decoder - -import pandocfilters -import slugify - -import translate -import util - - -is_debug = os.environ.get('DEBUG') is not None - -filename = os.getenv('INPUT') - - -def debug(*args): - if is_debug: - print(*args, file=sys.stderr) - - -def process_buffer(buffer, new_value, item=None, is_header=False): - if buffer: - text = ''.join(buffer) - - try: - translated_text = translate.translate(text) - except TypeError: - translated_text = text - except json.decoder.JSONDecodeError as e: - print('Failed to translate', str(e), file=sys.stderr) - sys.exit(1) - - debug(f'Translate: "{text}" -> "{translated_text}"') - - if text and text[0].isupper() and not translated_text[0].isupper(): - translated_text = translated_text[0].upper() + translated_text[1:] - - if text.startswith(' ') and not translated_text.startswith(' '): - translated_text = ' ' + translated_text - - if text.endswith(' ') and not translated_text.endswith(' '): - translated_text = translated_text + ' ' - - if is_header and translated_text.endswith('.'): - translated_text = translated_text.rstrip('.') - - title_case = is_header and translate.default_target_language == 'en' and text[0].isupper() - title_case_whitelist = { - 'a', 'an', 'the', 'and', 'or', 'that', - 'of', 'on', 'for', 'from', 'with', 'to', 'in' - } - is_first_iteration = True - for token in translated_text.split(' '): - if title_case and token.isascii() and not token.isupper(): - if len(token) > 1 and token.lower() not in title_case_whitelist: - token = token[0].upper() + token[1:] - elif not is_first_iteration: - token = token.lower() - is_first_iteration = False - - new_value.append(pandocfilters.Str(token)) - new_value.append(pandocfilters.Space()) - - if item is None and len(new_value): - new_value.pop(len(new_value) - 1) - else: - new_value[-1] = item - elif item: - new_value.append(item) - - -def process_sentence(value, is_header=False): - new_value = [] - buffer = [] - for item in value: - if isinstance(item, list): - new_value.append([process_sentence(subitem, is_header) for subitem in item]) - continue - elif isinstance(item, dict): - t = item.get('t') - c = item.get('c') - if t == 'Str': - buffer.append(c) - elif t == 'Space': - buffer.append(' ') - elif t == 'DoubleQuote': - buffer.append('"') - else: - process_buffer(buffer, new_value, item, is_header) - buffer = [] - else: - new_value.append(item) - process_buffer(buffer, new_value, is_header=is_header) - return new_value - - -def translate_filter(key, value, _format, _): - if key not in ['Space', 'Str']: - debug(key, value) - try: - cls = getattr(pandocfilters, key) - except AttributeError: - return - - if key == 'Para' and value: - marker = value[0].get('c') - if isinstance(marker, str) and marker.startswith('!!!') and len(value) > 2: - # Admonition case - if marker != '!!!': - # Lost space after !!! 
case - value.insert(1, pandocfilters.Str(marker[3:])) - value.insert(1, pandocfilters.Space()) - value[0]['c'] = '!!!' - admonition_value = [] - remaining_para_value = [] - in_admonition = True - break_value = [pandocfilters.LineBreak(), pandocfilters.Str(' ' * 4)] - for item in value: - if in_admonition: - if item.get('t') == 'SoftBreak': - in_admonition = False - else: - admonition_value.append(item) - else: - if item.get('t') == 'SoftBreak': - remaining_para_value += break_value - else: - remaining_para_value.append(item) - - if admonition_value[-1].get('t') == 'Quoted': - text = process_sentence(admonition_value[-1]['c'][-1]) - text[0]['c'] = '"' + text[0]['c'] - text[-1]['c'] = text[-1]['c'] + '"' - admonition_value.pop(-1) - admonition_value += text - else: - text = admonition_value[-1].get('c') - if text: - text = translate.translate(text[0].upper() + text[1:]) - admonition_value.append(pandocfilters.Space()) - admonition_value.append(pandocfilters.Str(f'"{text}"')) - - return cls(admonition_value + break_value + process_sentence(remaining_para_value)) - else: - return cls(process_sentence(value)) - elif key == 'Plain' or key == 'Strong' or key == 'Emph': - return cls(process_sentence(value)) - elif key == 'Link': - try: - # Plain links case - if value[2][0] == value[1][0].get('c'): - return pandocfilters.Str(value[2][0]) - except IndexError: - pass - - value[1] = process_sentence(value[1]) - href = value[2][0] - if not (href.startswith('http') or href.startswith('#')): - anchor = None - attempts = 10 - if '#' in href: - href, anchor = href.split('#', 1) - if href.endswith('.md') and not href.startswith('/'): - parts = [part for part in os.environ['INPUT'].split('/') if len(part) == 2] - lang = parts[-1] - script_path = os.path.dirname(__file__) - base_path = os.path.abspath(f'{script_path}/../../{lang}') - href = os.path.join( - os.path.relpath(base_path, os.path.dirname(os.environ['INPUT'])), - os.path.relpath(href, base_path) - ) - if anchor: - href = f'{href}#{anchor}' - value[2][0] = href - return cls(*value) - elif key == 'Header': - if value[1][0].islower() and '_' not in value[1][0]: # Preserve some manually specified anchors - value[1][0] = slugify.slugify(value[1][0], separator='-', word_boundary=True, save_order=True) - - # TODO: title case header in en - value[2] = process_sentence(value[2], is_header=True) - return cls(*value) - elif key == 'SoftBreak': - return pandocfilters.LineBreak() - - return - - -if __name__ == "__main__": - os.environ['INPUT'] = os.path.abspath(os.environ['INPUT']) - pwd = os.path.dirname(filename or '.') - if pwd: - with util.cd(pwd): - pandocfilters.toJSONFilter(translate_filter) - else: - pandocfilters.toJSONFilter(translate_filter) diff --git a/docs/tools/translate/normalize-markdown.sh b/docs/tools/translate/normalize-markdown.sh deleted file mode 100755 index 7850fa34b1d..00000000000 --- a/docs/tools/translate/normalize-markdown.sh +++ /dev/null @@ -1,13 +0,0 @@ -#!/usr/bin/env bash -# Usage: normalize-en-markdown.sh -set -e -BASE_DIR=$(dirname $(readlink -f $0)) -TEMP_FILE=$(mktemp) -trap 'rm -f -- "${TEMP_FILE}"' INT TERM HUP EXIT -INPUT="$1" -if [[ ! 
-L "${INPUT}" ]] -then - export INPUT - cat "${INPUT}" > "${TEMP_FILE}" - "${BASE_DIR}/translate.sh" "en" "${TEMP_FILE}" "${INPUT}" -fi diff --git a/docs/tools/translate/remove_machine_translated_meta.py b/docs/tools/translate/remove_machine_translated_meta.py deleted file mode 100755 index 26cfde97f1e..00000000000 --- a/docs/tools/translate/remove_machine_translated_meta.py +++ /dev/null @@ -1,21 +0,0 @@ -#!/usr/bin/env python3 -import os -import sys -sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..')) -import convert_toc -import util - - -if __name__ == '__main__': - path = sys.argv[1][2:] - convert_toc.init_redirects() - try: - path = convert_toc.redirects[path] - except KeyError: - pass - meta, content = util.read_md_file(path) - if 'machine_translated' in meta: - del meta['machine_translated'] - if 'machine_translated_rev' in meta: - del meta['machine_translated_rev'] - util.write_md_file(path, meta, content) diff --git a/docs/tools/translate/replace-with-translation.sh b/docs/tools/translate/replace-with-translation.sh deleted file mode 100755 index 922ac65a921..00000000000 --- a/docs/tools/translate/replace-with-translation.sh +++ /dev/null @@ -1,17 +0,0 @@ -#!/usr/bin/env bash -# Usage: replace-with-translation.sh -set -e -BASE_DIR=$(dirname $(readlink -f $0)) -TEMP_FILE=$(mktemp) -trap 'rm -f -- "${TEMP_FILE}"' INT TERM HUP EXIT -TARGET_LANGUAGE="$1" -export INPUT="$2" -cat "${INPUT}" > "${TEMP_FILE}" -if [[ ! -z $SLEEP ]] -then - sleep $[ ( $RANDOM % 20 ) + 1 ]s -fi -rm -f "${INPUT}" -mkdir -p $(dirname "${INPUT}") || true -YANDEX=1 "${BASE_DIR}/translate.sh" "${TARGET_LANGUAGE}" "${TEMP_FILE}" "${INPUT}" -git add "${INPUT}" diff --git a/docs/tools/translate/requirements.txt b/docs/tools/translate/requirements.txt deleted file mode 100644 index 1bbd119b823..00000000000 --- a/docs/tools/translate/requirements.txt +++ /dev/null @@ -1,12 +0,0 @@ -Babel==2.8.0 -certifi==2020.6.20 -chardet==3.0.4 -googletrans==3.0.0 -idna==2.10 -Jinja2==2.11.2 -pandocfilters==1.4.2 -python-slugify==4.0.1 -PyYAML==5.3.1 -requests==2.24.0 -text-unidecode==1.3 -urllib3==1.25.10 diff --git a/docs/tools/translate/split_meta.py b/docs/tools/translate/split_meta.py deleted file mode 100755 index b38b93e10b4..00000000000 --- a/docs/tools/translate/split_meta.py +++ /dev/null @@ -1,35 +0,0 @@ -#!/usr/bin/env python3 -import os -import subprocess -import sys - -import translate -import util - - -if __name__ == '__main__': - path = sys.argv[1] - content_path = f'{path}.content' - meta_path = f'{path}.meta' - meta, content = util.read_md_file(path) - - target_language = os.getenv('TARGET_LANGUAGE') - if target_language is not None and target_language != 'en': - rev = subprocess.check_output( - 'git rev-parse HEAD', shell=True - ).decode('utf-8').strip() - meta['machine_translated'] = True - meta['machine_translated_rev'] = rev - title = meta.get('toc_title') - if title: - meta['toc_title'] = translate.translate(title, target_language) - folder_title = meta.get('toc_folder_title') - if folder_title: - meta['toc_folder_title'] = translate.translate(folder_title, target_language) - if 'en_copy' in meta: - del meta['en_copy'] - - with open(content_path, 'w') as f: - print(content, file=f) - - util.write_md_file(meta_path, meta, '') diff --git a/docs/tools/translate/translate.py b/docs/tools/translate/translate.py deleted file mode 100755 index 605ff78f424..00000000000 --- a/docs/tools/translate/translate.py +++ /dev/null @@ -1,80 +0,0 @@ -#!/usr/bin/env python3 - -import os -import random -import re 
-import sys
-import time
-import urllib.parse
-
-import googletrans
-import requests
-import yaml
-
-
-translator = googletrans.Translator()
-default_target_language = os.environ.get('TARGET_LANGUAGE', 'ru')
-curly_braces_re = re.compile('({[^}]+})')
-
-is_yandex = os.environ.get('YANDEX') is not None
-
-
-def translate_impl(text, target_language=None):
-    target_language = target_language or default_target_language
-    if target_language == 'en':
-        return text
-    elif is_yandex:
-        text = text.replace('‘', '\'')
-        text = text.replace('’', '\'')
-        has_alpha = any([char.isalpha() for char in text])
-        if text.isascii() and has_alpha and not text.isupper():
-            text = urllib.parse.quote(text)
-            url = f'http://translate.yandex.net/api/v1/tr.json/translate?srv=docs&lang=en-{target_language}&text={text}'
-            result = requests.get(url).json()
-            if result.get('code') == 200:
-                return result['text'][0]
-            else:
-                result = str(result)
-                print(f'Failed to translate "{text}": {result}', file=sys.stderr)
-                sys.exit(1)
-        else:
-            return text
-    else:
-        time.sleep(random.random())
-        return translator.translate(text, target_language).text
-
-
-def translate(text, target_language=None):
-    return "".join(
-        [
-            part
-            if part.startswith("{") and part.endswith("}")
-            else translate_impl(part, target_language=target_language)
-            for part in re.split(curly_braces_re, text)
-        ]
-    )
-
-
-def translate_po():
-    import babel.messages.pofile
-    base_dir = os.path.join(os.path.dirname(__file__), '..', '..', '..', 'website', 'locale')
-    for lang in ['en', 'zh', 'es', 'fr', 'ru', 'ja']:
-        po_path = os.path.join(base_dir, lang, 'LC_MESSAGES', 'messages.po')
-        with open(po_path, 'r') as f:
-            po_file = babel.messages.pofile.read_po(f, locale=lang, domain='messages')
-        for item in po_file:
-            if not item.string:
-                global is_yandex
-                is_yandex = True
-                item.string = translate(item.id, lang)
-        with open(po_path, 'wb') as f:
-            babel.messages.pofile.write_po(f, po_file)
-
-
-if __name__ == '__main__':
-    target_language = sys.argv[1]
-    if target_language == 'po':
-        translate_po()
-    else:
-        result = translate_toc(yaml.full_load(sys.stdin.read())['nav'], sys.argv[1])
-        print(yaml.dump({'nav': result}))
diff --git a/docs/tools/translate/translate.sh b/docs/tools/translate/translate.sh
deleted file mode 100755
index 1acf645eb81..00000000000
--- a/docs/tools/translate/translate.sh
+++ /dev/null
@@ -1,29 +0,0 @@
-#!/usr/bin/env bash
-# Usage: translate.sh <target_language> <input_path> <output_path>
-set -e
-BASE_DIR=$(dirname $(readlink -f $0))
-OUTPUT=${3:-/dev/stdout}
-export TARGET_LANGUAGE="$1"
-export DEBUG
-TEMP_FILE=$(mktemp)
-export INPUT_PATH="$2"
-INPUT_META="${INPUT_PATH}.meta"
-INPUT_CONTENT="${INPUT_PATH}.content"
-
-trap 'rm -f -- "${TEMP_FILE}" "${INPUT_META}" "${INPUT_CONTENT}"' INT TERM HUP EXIT
-source "${BASE_DIR}/venv/bin/activate"
-
-${BASE_DIR}/split_meta.py "${INPUT_PATH}"
-
-pandoc "${INPUT_CONTENT}" --filter "${BASE_DIR}/filter.py" -o "${TEMP_FILE}" \
-    -f "markdown-space_in_atx_header" -t "markdown_strict+pipe_tables+markdown_attribute+all_symbols_escapable+backtick_code_blocks+autolink_bare_uris-link_attributes+markdown_attribute+mmd_link_attributes-raw_attribute+header_attributes-grid_tables+definition_lists" \
-    --atx-headers --wrap=none --columns=99999 --tab-stop=4
-perl -pi -e 's/{\\#\\#/{##/g' "${TEMP_FILE}"
-perl -pi -e 's/\\#\\#}/##}/g' "${TEMP_FILE}"
-perl -pi -e 's/ *$//gg' "${TEMP_FILE}"
-if [[ "${TARGET_LANGUAGE}" == "ru" ]]
-then
-    perl -pi -e 's/“/«/gg' "${TEMP_FILE}"
-    perl -pi -e 's/”/»/gg' "${TEMP_FILE}"
-fi
-cat "${INPUT_META}" "${TEMP_FILE}" > "${OUTPUT}"
diff --git a/docs/tools/translate/typograph_ru.py b/docs/tools/translate/typograph_ru.py
deleted file mode 100644
index 2d970cf2a2e..00000000000
--- a/docs/tools/translate/typograph_ru.py
+++ /dev/null
@@ -1,45 +0,0 @@
-import requests
-
-class TypographError(Exception):
-    pass
-
-
-def typograph(text):
-    text = text.replace('&', '&amp;')
-    text = text.replace('<', '&lt;')
-    text = text.replace('>', '&gt;')
-    template = f'''
-    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
-        <soap:Body>
-            <ProcessText xmlns="http://typograf.artlebedev.ru/webservices/">
-                <text>{text}</text>
-                <entityType>3</entityType>
-                <useBr>0</useBr>
-                <useP>0</useP>
-                <maxNobr>0</maxNobr>
-            </ProcessText>
-        </soap:Body>
-    </soap:Envelope>
-    '''
-    result = requests.post(
-        url='http://typograf.artlebedev.ru/webservices/typograf.asmx',
-        data=template.encode('utf-8'),
-        headers={
-            'Content-Type': 'text/xml',
-            'SOAPAction': 'http://typograf.artlebedev.ru/webservices/ProcessText'
-        }
-    )
-    if result.ok and 'ProcessTextResult' in result.text:
-        result_text = result.text.split('<ProcessTextResult>')[1].split('</ProcessTextResult>')[0].rstrip()
-        result_text = result_text.replace('&amp;', '&')
-        result_text = result_text.replace('&lt;', '<')
-        result_text = result_text.replace('&gt;', '>')
-        return result_text
-    else:
-        raise TypographError(result.text)
-
-
-if __name__ == '__main__':
-    import sys
-    print((typograph(sys.stdin.read())))
diff --git a/docs/tools/translate/update-all-machine-translated.sh b/docs/tools/translate/update-all-machine-translated.sh
deleted file mode 100755
index fae2aae787f..00000000000
--- a/docs/tools/translate/update-all-machine-translated.sh
+++ /dev/null
@@ -1,26 +0,0 @@
-#!/usr/bin/env bash
-BASE_DIR=$(dirname $(readlink -f $0))
-
-function translate() {
-    set -x
-    LANGUAGE=$1
-    DOCS_ROOT="${BASE_DIR}/../../"
-    REV="$(git rev-parse HEAD)"
-    for FILENAME in $(find "${DOCS_ROOT}${LANGUAGE}" -name "*.md" -type f)
-    do
-        HAS_MT_TAG=$(grep -c "machine_translated: true" "${FILENAME}")
-        IS_UP_TO_DATE=$(grep -c "machine_translated_rev: \"${REV}\"" "${FILENAME}")
-        if [ "${HAS_MT_TAG}" -eq "1" ] && [ "${IS_UP_TO_DATE}" -eq "0" ]
-        then
-            set -e
-            EN_FILENAME=${FILENAME/\/${LANGUAGE}\///en/}
-            rm "${FILENAME}" || true
-            cp "${EN_FILENAME}" "${FILENAME}"
-            DEBUG=1 SLEEP=1 ${BASE_DIR}/replace-with-translation.sh ${LANGUAGE} "${FILENAME}"
-            set +e
-        fi
-    done
-}
-export BASE_DIR
-export -f translate
-parallel translate ::: es fr zh ja fa tr
diff --git a/docs/tools/translate/update-po.sh b/docs/tools/translate/update-po.sh
deleted file mode 100755
index f2f4039bcb8..00000000000
--- a/docs/tools/translate/update-po.sh
+++ /dev/null
@@ -1,22 +0,0 @@
-#!/usr/bin/env bash
-# Usage: update-po.sh
-set -ex
-BASE_DIR=$(dirname $(readlink -f $0))
-WEBSITE_DIR="${BASE_DIR}/../../../website"
-LOCALE_DIR="${WEBSITE_DIR}/locale"
-MESSAGES_POT="${LOCALE_DIR}/messages.pot"
-BABEL_INI="${BASE_DIR}/babel-mapping.ini"
-LANGS="en zh es fr ru ja tr fa"
-source "${BASE_DIR}/venv/bin/activate"
-cd "${WEBSITE_DIR}"
-pybabel extract "." 
-o "${MESSAGES_POT}" -F "${BABEL_INI}" -for L in ${LANGS} -do - pybabel update -d locale -l "${L}" -i "${MESSAGES_POT}" || \ - pybabel init -d locale -l "${L}" -i "${MESSAGES_POT}" -done -python3 "${BASE_DIR}/translate.py" po -for L in ${LANGS} -do - pybabel compile -d locale -l "${L}" -done diff --git a/docs/tools/translate/util.py b/docs/tools/translate/util.py deleted file mode 120000 index 7f16d68497e..00000000000 --- a/docs/tools/translate/util.py +++ /dev/null @@ -1 +0,0 @@ -../util.py \ No newline at end of file diff --git a/docs/tools/util.py b/docs/tools/util.py index b840dc1168a..25961561f99 100644 --- a/docs/tools/util.py +++ b/docs/tools/util.py @@ -22,15 +22,6 @@ def temp_dir(): shutil.rmtree(path) -@contextlib.contextmanager -def autoremoved_file(path): - try: - with open(path, 'w') as handle: - yield handle - finally: - os.unlink(path) - - @contextlib.contextmanager def cd(new_cwd): old_cwd = os.getcwd() diff --git a/docs/zh/development/build.md b/docs/zh/development/build.md index 1aa5c1c97b7..01e0740bfa4 100644 --- a/docs/zh/development/build.md +++ b/docs/zh/development/build.md @@ -35,28 +35,12 @@ sudo apt-get install git cmake ninja-build 或cmake3而不是旧系统上的cmake。 或者在早期版本的系统中用 cmake3 替代 cmake -## 安装 GCC 10 {#an-zhuang-gcc-10} +## 安装 Clang -有几种方法可以做到这一点。 +On Ubuntu/Debian you can use the automatic installation script (check [official webpage](https://apt.llvm.org/)) -### 安装 PPA 包 {#an-zhuang-ppa-bao} - -``` bash -sudo apt-get install software-properties-common -sudo apt-add-repository ppa:ubuntu-toolchain-r/test -sudo apt-get update -sudo apt-get install gcc-10 g++-10 -``` - -### 源码安装 gcc {#yuan-ma-an-zhuang-gcc} - -请查看 [utils/ci/build-gcc-from-sources.sh](https://github.com/ClickHouse/ClickHouse/blob/master/utils/ci/build-gcc-from-sources.sh) - -## 使用 GCC 10 来编译 {#shi-yong-gcc-10-lai-bian-yi} - -``` bash -export CC=gcc-10 -export CXX=g++-10 +```bash +sudo bash -c "$(wget -O - https://apt.llvm.org/llvm.sh)" ``` ## 拉取 ClickHouse 源码 {#la-qu-clickhouse-yuan-ma-1} diff --git a/docs/zh/development/developer-instruction.md b/docs/zh/development/developer-instruction.md index 53aab5dc086..04950c11521 100644 --- a/docs/zh/development/developer-instruction.md +++ b/docs/zh/development/developer-instruction.md @@ -123,17 +123,13 @@ ClickHouse使用多个外部库进行构建。大多数外部库不需要单独 # C++ 编译器 {#c-bian-yi-qi} -GCC编译器从版本9开始,以及Clang版本\>=8都可支持构建ClickHouse。 +We support clang starting from version 11. -Yandex官方当前使用GCC构建ClickHouse,因为它生成的机器代码性能较好(根据测评,最多可以相差几个百分点)。Clang通常可以更加便捷的开发。我们的持续集成(CI)平台会运行大约十二种构建组合的检查。 +On Ubuntu/Debian you can use the automatic installation script (check [official webpage](https://apt.llvm.org/)) -在Ubuntu上安装GCC,请执行:`sudo apt install gcc g++` - -请使用`gcc --version`查看gcc的版本。如果gcc版本低于9,请参考此处的指示:https://clickhouse.tech/docs/zh/development/build/#an-zhuang-gcc-10 。 - -在Mac OS X上安装GCC,请执行:`brew install gcc` - -如果您决定使用Clang,还可以同时安装 `libc++`以及`lld`,前提是您也熟悉它们。此外,也推荐使用`ccache`。 +```bash +sudo bash -c "$(wget -O - https://apt.llvm.org/llvm.sh)" +``` # 构建的过程 {#gou-jian-de-guo-cheng} @@ -146,7 +142,7 @@ Yandex官方当前使用GCC构建ClickHouse,因为它生成的机器代码性 在`build`目录下,通过运行CMake配置构建。 在第一次运行之前,请定义用于指定编译器的环境变量(本示例中为gcc 9 编译器)。 - export CC=gcc-10 CXX=g++-10 + export CC=clang CXX=clang++ cmake .. `CC`变量指代C的编译器(C Compiler的缩写),而`CXX`变量指代要使用哪个C++编译器进行编译。 diff --git a/docs/zh/development/style.md b/docs/zh/development/style.md index c8e883920dd..bb9bfde7b9b 100644 --- a/docs/zh/development/style.md +++ b/docs/zh/development/style.md @@ -696,7 +696,7 @@ auto s = std::string{"Hello"}; **2.** 语言: C++20. 
-**3.** 编译器: `gcc`。 此时(2020年08月),代码使用9.3版编译。(它也可以使用`clang 8` 编译)
+**3.** 编译器: `clang`。 此时(2021年03月),代码使用11版编译。(它也可以使用 `gcc` 编译,但不适合用于生产环境。)
 
 使用标准库 (`libc++`)。
 
diff --git a/docs/zh/faq/terms_translation_zh.md b/docs/zh/faq/terms_translation_zh.md
new file mode 100644
index 00000000000..d252b4e293e
--- /dev/null
+++ b/docs/zh/faq/terms_translation_zh.md
@@ -0,0 +1,38 @@
+# 术语翻译约定
+本文档用来维护从英文翻译成中文的术语集。
+
+## 保持英文,不译
+Parquet
+
+## 英文 <-> 中文
+Integer 整数
+floating-point 浮点数
+Fitting 拟合
+Decimal 定点数
+Tuple 元组
+function 函数
+array 数组/阵列
+hash 哈希/散列
+Parameters 参数
+Arguments 参数
+
+## 翻译说明
+1. 对于array的翻译,保持初始翻译 数组/阵列 不变。
+
+2. 对于倒装句,翻译时非直译,会调整语序。
+比如, groupArrayInsertAt 翻译中
+
+``` text
+- `x` — [Expression] resulting in one of the [supported data types].
+```
+
+``` text
+`x` — 生成所[支持的数据类型](数据)的[表达式]。
+```
+
+3. See also 参见
+
diff --git a/docs/zh/getting-started/example-datasets/ontime.md b/docs/zh/getting-started/example-datasets/ontime.md
index 3921f71fc7e..6d888b2196c 100644
--- a/docs/zh/getting-started/example-datasets/ontime.md
+++ b/docs/zh/getting-started/example-datasets/ontime.md
@@ -29,126 +29,127 @@ done
 创建表结构:
 
 ``` sql
-CREATE TABLE `ontime` (
-  `Year` UInt16,
-  `Quarter` UInt8,
-  `Month` UInt8,
-  `DayofMonth` UInt8,
-  `DayOfWeek` UInt8,
-  `FlightDate` Date,
-  `UniqueCarrier` FixedString(7),
-  `AirlineID` Int32,
-  `Carrier` FixedString(2),
-  `TailNum` String,
-  `FlightNum` String,
-  `OriginAirportID` Int32,
-  `OriginAirportSeqID` Int32,
-  `OriginCityMarketID` Int32,
-  `Origin` FixedString(5),
-  `OriginCityName` String,
-  `OriginState` FixedString(2),
-  `OriginStateFips` String,
-  `OriginStateName` String,
-  `OriginWac` Int32,
-  `DestAirportID` Int32,
-  `DestAirportSeqID` Int32,
-  `DestCityMarketID` Int32,
-  `Dest` FixedString(5),
-  `DestCityName` String,
-  `DestState` FixedString(2),
-  `DestStateFips` String,
-  `DestStateName` String,
-  `DestWac` Int32,
-  `CRSDepTime` Int32,
-  `DepTime` Int32,
-  `DepDelay` Int32,
-  `DepDelayMinutes` Int32,
-  `DepDel15` Int32,
-  `DepartureDelayGroups` String,
-  `DepTimeBlk` String,
-  `TaxiOut` Int32,
-  `WheelsOff` Int32,
-  `WheelsOn` Int32,
-  `TaxiIn` Int32,
-  `CRSArrTime` Int32,
-  `ArrTime` Int32,
-  `ArrDelay` Int32,
-  `ArrDelayMinutes` Int32,
-  `ArrDel15` Int32,
-  `ArrivalDelayGroups` Int32,
-  `ArrTimeBlk` String,
-  `Cancelled` UInt8,
-  `CancellationCode` FixedString(1),
-  `Diverted` UInt8,
-  `CRSElapsedTime` Int32,
-  `ActualElapsedTime` Int32,
-  `AirTime` Int32,
-  `Flights` Int32,
-  `Distance` Int32,
-  `DistanceGroup` UInt8,
-  `CarrierDelay` Int32,
-  `WeatherDelay` Int32,
-  `NASDelay` Int32,
-  `SecurityDelay` Int32,
-  `LateAircraftDelay` Int32,
-  `FirstDepTime` String,
-  `TotalAddGTime` String,
-  `LongestAddGTime` String,
-  `DivAirportLandings` String,
-  `DivReachedDest` String,
-  `DivActualElapsedTime` String,
-  `DivArrDelay` String,
-  `DivDistance` String,
-  `Div1Airport` String,
-  `Div1AirportID` Int32,
-  `Div1AirportSeqID` Int32,
-  `Div1WheelsOn` String,
-  `Div1TotalGTime` String,
-  `Div1LongestGTime` String,
-  `Div1WheelsOff` String,
-  `Div1TailNum` String,
-  `Div2Airport` String,
-  `Div2AirportID` Int32,
-  `Div2AirportSeqID` Int32,
-  `Div2WheelsOn` String,
-  `Div2TotalGTime` String,
-  `Div2LongestGTime` String,
-  `Div2WheelsOff` String,
-  `Div2TailNum` String,
-  `Div3Airport` String,
-  `Div3AirportID` Int32,
-  `Div3AirportSeqID` Int32,
-  `Div3WheelsOn` String,
-  `Div3TotalGTime` String,
-  `Div3LongestGTime` String,
-  `Div3WheelsOff` String,
-  `Div3TailNum` String,
-  `Div4Airport` String,
-  `Div4AirportID` Int32,
-  
`Div4AirportSeqID` Int32, - `Div4WheelsOn` String, - `Div4TotalGTime` String, - `Div4LongestGTime` String, - `Div4WheelsOff` String, - `Div4TailNum` String, - `Div5Airport` String, - `Div5AirportID` Int32, - `Div5AirportSeqID` Int32, - `Div5WheelsOn` String, - `Div5TotalGTime` String, - `Div5LongestGTime` String, - `Div5WheelsOff` String, - `Div5TailNum` String +CREATE TABLE `ontime` +( + `Year` UInt16, + `Quarter` UInt8, + `Month` UInt8, + `DayofMonth` UInt8, + `DayOfWeek` UInt8, + `FlightDate` Date, + `Reporting_Airline` String, + `DOT_ID_Reporting_Airline` Int32, + `IATA_CODE_Reporting_Airline` String, + `Tail_Number` Int32, + `Flight_Number_Reporting_Airline` String, + `OriginAirportID` Int32, + `OriginAirportSeqID` Int32, + `OriginCityMarketID` Int32, + `Origin` FixedString(5), + `OriginCityName` String, + `OriginState` FixedString(2), + `OriginStateFips` String, + `OriginStateName` String, + `OriginWac` Int32, + `DestAirportID` Int32, + `DestAirportSeqID` Int32, + `DestCityMarketID` Int32, + `Dest` FixedString(5), + `DestCityName` String, + `DestState` FixedString(2), + `DestStateFips` String, + `DestStateName` String, + `DestWac` Int32, + `CRSDepTime` Int32, + `DepTime` Int32, + `DepDelay` Int32, + `DepDelayMinutes` Int32, + `DepDel15` Int32, + `DepartureDelayGroups` String, + `DepTimeBlk` String, + `TaxiOut` Int32, + `WheelsOff` Int32, + `WheelsOn` Int32, + `TaxiIn` Int32, + `CRSArrTime` Int32, + `ArrTime` Int32, + `ArrDelay` Int32, + `ArrDelayMinutes` Int32, + `ArrDel15` Int32, + `ArrivalDelayGroups` Int32, + `ArrTimeBlk` String, + `Cancelled` UInt8, + `CancellationCode` FixedString(1), + `Diverted` UInt8, + `CRSElapsedTime` Int32, + `ActualElapsedTime` Int32, + `AirTime` Nullable(Int32), + `Flights` Int32, + `Distance` Int32, + `DistanceGroup` UInt8, + `CarrierDelay` Int32, + `WeatherDelay` Int32, + `NASDelay` Int32, + `SecurityDelay` Int32, + `LateAircraftDelay` Int32, + `FirstDepTime` String, + `TotalAddGTime` String, + `LongestAddGTime` String, + `DivAirportLandings` String, + `DivReachedDest` String, + `DivActualElapsedTime` String, + `DivArrDelay` String, + `DivDistance` String, + `Div1Airport` String, + `Div1AirportID` Int32, + `Div1AirportSeqID` Int32, + `Div1WheelsOn` String, + `Div1TotalGTime` String, + `Div1LongestGTime` String, + `Div1WheelsOff` String, + `Div1TailNum` String, + `Div2Airport` String, + `Div2AirportID` Int32, + `Div2AirportSeqID` Int32, + `Div2WheelsOn` String, + `Div2TotalGTime` String, + `Div2LongestGTime` String, + `Div2WheelsOff` String, + `Div2TailNum` String, + `Div3Airport` String, + `Div3AirportID` Int32, + `Div3AirportSeqID` Int32, + `Div3WheelsOn` String, + `Div3TotalGTime` String, + `Div3LongestGTime` String, + `Div3WheelsOff` String, + `Div3TailNum` String, + `Div4Airport` String, + `Div4AirportID` Int32, + `Div4AirportSeqID` Int32, + `Div4WheelsOn` String, + `Div4TotalGTime` String, + `Div4LongestGTime` String, + `Div4WheelsOff` String, + `Div4TailNum` String, + `Div5Airport` String, + `Div5AirportID` Int32, + `Div5AirportSeqID` Int32, + `Div5WheelsOn` String, + `Div5TotalGTime` String, + `Div5LongestGTime` String, + `Div5WheelsOff` String, + `Div5TailNum` String ) ENGINE = MergeTree -PARTITION BY Year -ORDER BY (Carrier, FlightDate) -SETTINGS index_granularity = 8192; + PARTITION BY Year + ORDER BY (IATA_CODE_Reporting_Airline, FlightDate) + SETTINGS index_granularity = 8192; ``` 加载数据: ``` bash -$ for i in *.zip; do echo $i; unzip -cq $i '*.csv' | sed 's/\.00//g' | clickhouse-client --host=example-perftest01j --query="INSERT INTO ontime 
FORMAT CSVWithNames"; done +ls -1 *.zip | xargs -I{} -P $(nproc) bash -c "echo {}; unzip -cq {} '*.csv' | sed 's/\.00//g' | clickhouse-client --input_format_with_names_use_header=0 --query='INSERT INTO ontime FORMAT CSVWithNames'" ``` ## 下载预处理好的分区数据 {#xia-zai-yu-chu-li-hao-de-fen-qu-shu-ju} @@ -212,7 +213,7 @@ LIMIT 10; Q4. 查询2007年各航空公司延误超过10分钟以上的次数 ``` sql -SELECT Carrier, count(*) +SELECT IATA_CODE_Reporting_Airline AS Carrier, count(*) FROM ontime WHERE DepDelay>10 AND Year=2007 GROUP BY Carrier @@ -226,29 +227,29 @@ SELECT Carrier, c, c2, c*100/c2 as c3 FROM ( SELECT - Carrier, + IATA_CODE_Reporting_Airline AS Carrier, count(*) AS c FROM ontime WHERE DepDelay>10 AND Year=2007 GROUP BY Carrier -) +) q JOIN ( SELECT - Carrier, + IATA_CODE_Reporting_Airline AS Carrier, count(*) AS c2 FROM ontime WHERE Year=2007 GROUP BY Carrier -) USING Carrier +) qq USING Carrier ORDER BY c3 DESC; ``` 更好的查询版本: ``` sql -SELECT Carrier, avg(DepDelay>10)*100 AS c3 +SELECT IATA_CODE_Reporting_Airline AS Carrier, avg(DepDelay>10)*100 AS c3 FROM ontime WHERE Year=2007 GROUP BY Carrier @@ -262,29 +263,29 @@ SELECT Carrier, c, c2, c*100/c2 as c3 FROM ( SELECT - Carrier, + IATA_CODE_Reporting_Airline AS Carrier, count(*) AS c FROM ontime WHERE DepDelay>10 AND Year>=2000 AND Year<=2008 GROUP BY Carrier -) +) q JOIN ( SELECT - Carrier, + IATA_CODE_Reporting_Airline AS Carrier, count(*) AS c2 FROM ontime WHERE Year>=2000 AND Year<=2008 GROUP BY Carrier -) USING Carrier +) qq USING Carrier ORDER BY c3 DESC; ``` 更好的查询版本: ``` sql -SELECT Carrier, avg(DepDelay>10)*100 AS c3 +SELECT IATA_CODE_Reporting_Airline AS Carrier, avg(DepDelay>10)*100 AS c3 FROM ontime WHERE Year>=2000 AND Year<=2008 GROUP BY Carrier @@ -303,7 +304,7 @@ FROM from ontime WHERE DepDelay>10 GROUP BY Year -) +) q JOIN ( select @@ -311,7 +312,7 @@ JOIN count(*) as c2 from ontime GROUP BY Year -) USING (Year) +) qq USING (Year) ORDER BY Year; ``` @@ -346,7 +347,7 @@ Q10. ``` sql SELECT - min(Year), max(Year), Carrier, count(*) AS cnt, + min(Year), max(Year), IATA_CODE_Reporting_Airline AS Carrier, count(*) AS cnt, sum(ArrDelayMinutes>30) AS flights_delayed, round(sum(ArrDelayMinutes>30)/count(*),2) AS rate FROM ontime diff --git a/docs/zh/getting-started/index.md b/docs/zh/getting-started/index.md index fdffca954f7..c5ec7ded932 100644 --- a/docs/zh/getting-started/index.md +++ b/docs/zh/getting-started/index.md @@ -1,7 +1,5 @@ --- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_folder_title: "\u5BFC\u8A00" +toc_folder_title: 快速上手 toc_priority: 2 --- @@ -9,7 +7,7 @@ toc_priority: 2 如果您是ClickHouse的新手,并希望亲身体验它的性能。 -首先需要进行 [环境安装与部署](install.md). +首先需要完成 [安装与部署](install.md). 之后,您可以通过教程与示例数据完成自己的入门第一步: diff --git a/docs/zh/guides/apply-catboost-model.md b/docs/zh/guides/apply-catboost-model.md index 5e374751052..9002e5cf005 100644 --- a/docs/zh/guides/apply-catboost-model.md +++ b/docs/zh/guides/apply-catboost-model.md @@ -238,6 +238,6 @@ FROM ``` !!! 
note "注" - 查看函数说明 [avg()](../sql-reference/aggregate-functions/reference.md#agg_function-avg) 和 [log()](../sql-reference/functions/math-functions.md) 。 + 查看函数说明 [avg()](../sql-reference/aggregate-functions/reference/avg.md#agg_function-avg) 和 [log()](../sql-reference/functions/math-functions.md) 。 [原始文章](https://clickhouse.tech/docs/en/guides/apply_catboost_model/) diff --git a/docs/zh/interfaces/index.md b/docs/zh/interfaces/index.md index b678adc765a..4bc14539896 100644 --- a/docs/zh/interfaces/index.md +++ b/docs/zh/interfaces/index.md @@ -1,5 +1,5 @@ --- -toc_folder_title: Interfaces +toc_folder_title: 接口 toc_priority: 14 toc_title: 客户端 --- diff --git a/docs/zh/introduction/distinctive-features.md b/docs/zh/introduction/distinctive-features.md index e9a506f2481..f74c98a0c1d 100644 --- a/docs/zh/introduction/distinctive-features.md +++ b/docs/zh/introduction/distinctive-features.md @@ -17,7 +17,7 @@ toc_title: ClickHouse的特性 在一些列式数据库管理系统中(例如:InfiniDB CE 和 MonetDB) 并没有使用数据压缩。但是, 若想达到比较优异的性能,数据压缩确实起到了至关重要的作用。 -除了在磁盘空间和CPU消耗之间进行不同权衡的高效通用压缩编解码器之外,ClickHouse还提供针对特定类型数据的[专用编解码器](../sql-reference/statements/create/table.md#create-query-specialized-codecs),这使得ClickHouse能够与更小的数据库(如时间序列数据库)竞争并超越它们。 +除了在磁盘空间和CPU消耗之间进行不同权衡的高效通用压缩编解码器之外,ClickHouse还提供针对特定类型数据的[专用编解码器](../sql-reference/statements/create.md#create-query-specialized-codecs),这使得ClickHouse能够与更小的数据库(如时间序列数据库)竞争并超越它们。 ## 数据的磁盘存储 {#shu-ju-de-ci-pan-cun-chu} diff --git a/docs/zh/introduction/index.md b/docs/zh/introduction/index.md index 3b9deddd5cc..64466809d18 100644 --- a/docs/zh/introduction/index.md +++ b/docs/zh/introduction/index.md @@ -1,7 +1,5 @@ --- -machine_translated: true -machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd -toc_folder_title: "\u5BFC\u8A00" +toc_folder_title: 简介 toc_priority: 1 --- diff --git a/docs/zh/operations/settings/settings.md b/docs/zh/operations/settings/settings.md index 64625c19c6a..720b822ce29 100644 --- a/docs/zh/operations/settings/settings.md +++ b/docs/zh/operations/settings/settings.md @@ -988,15 +988,15 @@ ClickHouse生成异常 ## count_distinct_implementation {#settings-count_distinct_implementation} -指定其中的 `uniq*` 函数应用于执行 [COUNT(DISTINCT …)](../../sql-reference/aggregate-functions/reference.md#agg_function-count) 建筑。 +指定其中的 `uniq*` 函数应用于执行 [COUNT(DISTINCT …)](../../sql-reference/aggregate-functions/reference/count.md#agg_function-count) 建筑。 可能的值: -- [uniq](../../sql-reference/aggregate-functions/reference.md#agg_function-uniq) -- [uniqCombined](../../sql-reference/aggregate-functions/reference.md#agg_function-uniqcombined) -- [uniqCombined64](../../sql-reference/aggregate-functions/reference.md#agg_function-uniqcombined64) -- [uniqHLL12](../../sql-reference/aggregate-functions/reference.md#agg_function-uniqhll12) -- [uniqExact](../../sql-reference/aggregate-functions/reference.md#agg_function-uniqexact) +- [uniq](../../sql-reference/aggregate-functions/reference/uniq.md#agg_function-uniq) +- [uniqCombined](../../sql-reference/aggregate-functions/reference/uniqcombined.md#agg_function-uniqcombined) +- [uniqCombined64](../../sql-reference/aggregate-functions/reference/uniqcombined64.md#agg_function-uniqcombined64) +- [uniqHLL12](../../sql-reference/aggregate-functions/reference/uniqhll12.md#agg_function-uniqhll12) +- [uniqExact](../../sql-reference/aggregate-functions/reference/uniqexact.md#agg_function-uniqexact) 默认值: `uniqExact`. 
diff --git a/docs/zh/operations/system-tables/functions.md b/docs/zh/operations/system-tables/functions.md index ff716b0bc6c..8229a94cd5c 100644 --- a/docs/zh/operations/system-tables/functions.md +++ b/docs/zh/operations/system-tables/functions.md @@ -1,13 +1,30 @@ ---- -machine_translated: true -machine_translated_rev: 5decc73b5dc60054f19087d3690c4eb99446a6c3 ---- +# system.functions {#system-functions} -# 系统。功能 {#system-functions} - -包含有关正常函数和聚合函数的信息。 +包含有关常规函数和聚合函数的信息。 列: - `name`(`String`) – The name of the function. - `is_aggregate`(`UInt8`) — Whether the function is aggregate. + +**举例** +``` + SELECT * FROM system.functions LIMIT 10; +``` + +``` +┌─name─────────────────────┬─is_aggregate─┬─case_insensitive─┬─alias_to─┐ +│ sumburConsistentHash │ 0 │ 0 │ │ +│ yandexConsistentHash │ 0 │ 0 │ │ +│ demangle │ 0 │ 0 │ │ +│ addressToLine │ 0 │ 0 │ │ +│ JSONExtractRaw │ 0 │ 0 │ │ +│ JSONExtractKeysAndValues │ 0 │ 0 │ │ +│ JSONExtract │ 0 │ 0 │ │ +│ JSONExtractString │ 0 │ 0 │ │ +│ JSONExtractFloat │ 0 │ 0 │ │ +│ JSONExtractInt │ 0 │ 0 │ │ +└──────────────────────────┴──────────────┴──────────────────┴──────────┘ + +10 rows in set. Elapsed: 0.002 sec. +``` diff --git a/docs/zh/operations/system-tables/query_log.md b/docs/zh/operations/system-tables/query_log.md index 6d8d7a39699..aa954fc4845 100644 --- a/docs/zh/operations/system-tables/query_log.md +++ b/docs/zh/operations/system-tables/query_log.md @@ -5,86 +5,87 @@ machine_translated_rev: 5decc73b5dc60054f19087d3690c4eb99446a6c3 # system.query_log {#system_tables-query_log} -包含有关已执行查询的信息,例如,开始时间、处理持续时间、错误消息。 +包含已执行查询的相关信息,例如:开始时间、处理持续时间、错误消息。 !!! note "注" 此表不包含以下内容的摄取数据 `INSERT` 查询。 -您可以更改查询日志记录的设置 [query_log](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-query-log) 服务器配置部分。 +您可以更改query_log的设置,在服务器配置的 [query_log](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-query-log) 部分。 -您可以通过设置禁用查询日志记录 [log_queries=0](../../operations/settings/settings.md#settings-log-queries). 我们不建议关闭日志记录,因为此表中的信息对于解决问题很重要。 +您可以通过设置 [log_queries=0](../../operations/settings/settings.md#settings-log-queries)来禁用query_log. 我们不建议关闭此日志,因为此表中的信息对于解决问题很重要。 -数据的冲洗周期设置在 `flush_interval_milliseconds` 的参数 [query_log](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-query-log) 服务器设置部分。 要强制冲洗,请使用 [SYSTEM FLUSH LOGS](../../sql-reference/statements/system.md#query_language-system-flush_logs) 查询。 +数据刷新的周期可通过 `flush_interval_milliseconds` 参数来设置 [query_log](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-query-log) 。 要强制刷新,请使用 [SYSTEM FLUSH LOGS](../../sql-reference/statements/system.md#query_language-system-flush_logs)。 -ClickHouse不会自动从表中删除数据。 看 [导言](../../operations/system-tables/index.md#system-tables-introduction) 欲了解更多详情。 +ClickHouse不会自动从表中删除数据。更多详情请看 [introduction](../../operations/system-tables/index.md#system-tables-introduction) 。 -该 `system.query_log` 表注册两种查询: +`system.query_log` 表注册两种查询: 1. 客户端直接运行的初始查询。 2. 由其他查询启动的子查询(用于分布式查询执行)。 对于这些类型的查询,有关父查询的信息显示在 `initial_*` 列。 -每个查询创建一个或两个行中 `query_log` 表,这取决于状态(见 `type` 列)的查询: +每个查询在`query_log` 表中创建一或两行记录,这取决于查询的状态(见 `type` 列): -1. 如果查询执行成功,则两行具有 `QueryStart` 和 `QueryFinish` 创建类型。 -2. 如果在查询处理过程中发生错误,两个事件与 `QueryStart` 和 `ExceptionWhileProcessing` 创建类型。 -3. 如果在启动查询之前发生错误,则单个事件具有 `ExceptionBeforeStart` 创建类型。 +1. 如果查询执行成功,会创建type分别为`QueryStart` 和 `QueryFinish` 的两行记录。 +2. 如果在查询处理过程中发生错误,会创建type分别为`QueryStart` 和 `ExceptionWhileProcessing` 的两行记录。 +3. 
如果在启动查询之前发生错误,则创建一行type为`ExceptionBeforeStart` 的记录。 列: -- `type` ([枚举8](../../sql-reference/data-types/enum.md)) — Type of an event that occurred when executing the query. Values: - - `'QueryStart' = 1` — Successful start of query execution. - - `'QueryFinish' = 2` — Successful end of query execution. - - `'ExceptionBeforeStart' = 3` — Exception before the start of query execution. - - `'ExceptionWhileProcessing' = 4` — Exception during the query execution. -- `event_date` ([日期](../../sql-reference/data-types/date.md)) — Query starting date. -- `event_time` ([日期时间](../../sql-reference/data-types/datetime.md)) — Query starting time. -- `query_start_time` ([日期时间](../../sql-reference/data-types/datetime.md)) — Start time of query execution. -- `query_duration_ms` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Duration of query execution in milliseconds. -- `read_rows` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Total number or rows read from all tables and table functions participated in query. It includes usual subqueries, subqueries for `IN` 和 `JOIN`. 对于分布式查询 `read_rows` 包括在所有副本上读取的行总数。 每个副本发送它的 `read_rows` 值,并且查询的服务器-发起方汇总所有接收到的和本地的值。 缓存卷不会影响此值。 -- `read_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Total number or bytes read from all tables and table functions participated in query. It includes usual subqueries, subqueries for `IN` 和 `JOIN`. 对于分布式查询 `read_bytes` 包括在所有副本上读取的行总数。 每个副本发送它的 `read_bytes` 值,并且查询的服务器-发起方汇总所有接收到的和本地的值。 缓存卷不会影响此值。 -- `written_rows` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — For `INSERT` 查询,写入的行数。 对于其他查询,列值为0。 -- `written_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — For `INSERT` 查询时,写入的字节数。 对于其他查询,列值为0。 -- `result_rows` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Number of rows in a result of the `SELECT` 查询,或者在一些行 `INSERT` 查询。 -- `result_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — RAM volume in bytes used to store a query result. -- `memory_usage` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Memory consumption by the query. -- `query` ([字符串](../../sql-reference/data-types/string.md)) — Query string. -- `exception` ([字符串](../../sql-reference/data-types/string.md)) — Exception message. -- `exception_code` ([Int32](../../sql-reference/data-types/int-uint.md)) — Code of an exception. -- `stack_trace` ([字符串](../../sql-reference/data-types/string.md)) — [堆栈跟踪](https://en.wikipedia.org/wiki/Stack_trace). 如果查询成功完成,则为空字符串。 -- `is_initial_query` ([UInt8](../../sql-reference/data-types/int-uint.md)) — Query type. Possible values: - - 1 — Query was initiated by the client. - - 0 — Query was initiated by another query as part of distributed query execution. -- `user` ([字符串](../../sql-reference/data-types/string.md)) — Name of the user who initiated the current query. -- `query_id` ([字符串](../../sql-reference/data-types/string.md)) — ID of the query. -- `address` ([IPv6](../../sql-reference/data-types/domains/ipv6.md)) — IP address that was used to make the query. -- `port` ([UInt16](../../sql-reference/data-types/int-uint.md)) — The client port that was used to make the query. -- `initial_user` ([字符串](../../sql-reference/data-types/string.md)) — Name of the user who ran the initial query (for distributed query execution). -- `initial_query_id` ([字符串](../../sql-reference/data-types/string.md)) — ID of the initial query (for distributed query execution). 
-- `initial_address` ([IPv6](../../sql-reference/data-types/domains/ipv6.md)) — IP address that the parent query was launched from.
-- `initial_port` ([UInt16](../../sql-reference/data-types/int-uint.md)) — The client port that was used to make the parent query.
-- `interface` ([UInt8](../../sql-reference/data-types/int-uint.md)) — Interface that the query was initiated from. Possible values:
+- `type` ([Enum8](../../sql-reference/data-types/enum.md)) — 执行查询时的事件类型. 值:
+    - `'QueryStart' = 1` — 查询成功启动.
+    - `'QueryFinish' = 2` — 查询成功完成.
+    - `'ExceptionBeforeStart' = 3` — 查询执行前有异常.
+    - `'ExceptionWhileProcessing' = 4` — 查询执行期间有异常.
+- `event_date` ([Date](../../sql-reference/data-types/date.md)) — 查询开始日期.
+- `event_time` ([DateTime](../../sql-reference/data-types/datetime.md)) — 查询开始时间.
+- `event_time_microseconds` ([DateTime64](../../sql-reference/data-types/datetime64.md)) — 查询开始时间(微秒精度).
+- `query_start_time` ([DateTime](../../sql-reference/data-types/datetime.md)) — 查询执行的开始时间.
+- `query_start_time_microseconds` (DateTime64) — 查询执行的开始时间(微秒精度).
+- `query_duration_ms` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — 查询消耗的时间(毫秒).
+- `read_rows` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — 从参与了查询的所有表和表函数读取的总行数. 包括:普通的子查询, `IN` 和 `JOIN`的子查询. 对于分布式查询 `read_rows` 包括在所有副本上读取的行总数。 每个副本发送它的 `read_rows` 值,并且查询的服务器-发起方汇总所有接收到的和本地的值。 缓存卷不会影响此值。
+- `read_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — 从参与了查询的所有表和表函数读取的总字节数. 包括:普通的子查询, `IN` 和 `JOIN`的子查询. 对于分布式查询 `read_bytes` 包括在所有副本上读取的字节总数。 每个副本发送它的 `read_bytes` 值,并且查询的服务器-发起方汇总所有接收到的和本地的值。 缓存卷不会影响此值。
+- `written_rows` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — 对于 `INSERT` 查询,为写入的行数。 对于其他查询,值为0。
+- `written_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — 对于 `INSERT` 查询,为写入的字节数。 对于其他查询,值为0。
+- `result_rows` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — `SELECT` 查询结果的行数,或`INSERT` 的行数。
+- `result_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — 存储查询结果的RAM量.
+- `memory_usage` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — 查询使用的内存.
+- `query` ([String](../../sql-reference/data-types/string.md)) — 查询语句.
+- `exception` ([String](../../sql-reference/data-types/string.md)) — 异常信息.
+- `exception_code` ([Int32](../../sql-reference/data-types/int-uint.md)) — 异常码.
+- `stack_trace` ([String](../../sql-reference/data-types/string.md)) — [Stack Trace](https://en.wikipedia.org/wiki/Stack_trace). 如果查询成功完成,则为空字符串。
+- `is_initial_query` ([UInt8](../../sql-reference/data-types/int-uint.md)) — 查询类型. 可能的值:
+    - 1 — 客户端发起的查询.
+    - 0 — 由另一个查询发起的,作为分布式查询的一部分.
+- `user` ([String](../../sql-reference/data-types/string.md)) — 发起查询的用户.
+- `query_id` ([String](../../sql-reference/data-types/string.md)) — 查询ID.
+- `address` ([IPv6](../../sql-reference/data-types/domains/ipv6.md)) — 发起查询的客户端IP地址.
+- `port` ([UInt16](../../sql-reference/data-types/int-uint.md)) — 发起查询的客户端端口.
+- `initial_user` ([String](../../sql-reference/data-types/string.md)) — 初始查询的用户名(用于分布式查询执行).
+- `initial_query_id` ([String](../../sql-reference/data-types/string.md)) — 运行初始查询的ID(用于分布式查询执行).
+- `initial_address` ([IPv6](../../sql-reference/data-types/domains/ipv6.md)) — 运行父查询的IP地址.
+- `initial_port` ([UInt16](../../sql-reference/data-types/int-uint.md)) — 发起父查询的客户端端口.
+- `interface` ([UInt8](../../sql-reference/data-types/int-uint.md)) — 发起查询的接口. 可能的值:
- 1 — TCP.
- 2 — HTTP.
-- `os_user` ([字符串](../../sql-reference/data-types/string.md)) — Operating system username who runs [ツ环板clientョツ嘉ッツ偲](../../interfaces/cli.md).
-- `client_hostname` ([字符串](../../sql-reference/data-types/string.md)) — Hostname of the client machine where the [ツ环板clientョツ嘉ッツ偲](../../interfaces/cli.md) 或者运行另一个TCP客户端。
-- `client_name` ([字符串](../../sql-reference/data-types/string.md)) — The [ツ环板clientョツ嘉ッツ偲](../../interfaces/cli.md) 或另一个TCP客户端名称。
-- `client_revision` ([UInt32](../../sql-reference/data-types/int-uint.md)) — Revision of the [ツ环板clientョツ嘉ッツ偲](../../interfaces/cli.md) 或另一个TCP客户端。
-- `client_version_major` ([UInt32](../../sql-reference/data-types/int-uint.md)) — Major version of the [ツ环板clientョツ嘉ッツ偲](../../interfaces/cli.md) 或另一个TCP客户端。
-- `client_version_minor` ([UInt32](../../sql-reference/data-types/int-uint.md)) — Minor version of the [ツ环板clientョツ嘉ッツ偲](../../interfaces/cli.md) 或另一个TCP客户端。
-- `client_version_patch` ([UInt32](../../sql-reference/data-types/int-uint.md)) — Patch component of the [ツ环板clientョツ嘉ッツ偲](../../interfaces/cli.md) 或另一个TCP客户端版本。
-- `http_method` (UInt8) — HTTP method that initiated the query. Possible values:
-    - 0 — The query was launched from the TCP interface.
-    - 1 — `GET` 方法被使用。
-    - 2 — `POST` 方法被使用。
-- `http_user_agent` ([字符串](../../sql-reference/data-types/string.md)) — The `UserAgent` http请求中传递的标头。
-- `quota_key` ([字符串](../../sql-reference/data-types/string.md)) — The “quota key” 在指定 [配额](../../operations/quotas.md) 设置(见 `keyed`).
+- `os_user` ([String](../../sql-reference/data-types/string.md)) — 运行 [clickhouse-client](../../interfaces/cli.md)的操作系统用户名.
+- `client_hostname` ([String](../../sql-reference/data-types/string.md)) — 运行[clickhouse-client](../../interfaces/cli.md) 或其他TCP客户端的机器的主机名。
+- `client_name` ([String](../../sql-reference/data-types/string.md)) — [clickhouse-client](../../interfaces/cli.md) 或其他TCP客户端的名称。
+- `client_revision` ([UInt32](../../sql-reference/data-types/int-uint.md)) — [clickhouse-client](../../interfaces/cli.md) 或其他TCP客户端的Revision。
+- `client_version_major` ([UInt32](../../sql-reference/data-types/int-uint.md)) — [clickhouse-client](../../interfaces/cli.md) 或其他TCP客户端的Major version。
+- `client_version_minor` ([UInt32](../../sql-reference/data-types/int-uint.md)) — [clickhouse-client](../../interfaces/cli.md) 或其他TCP客户端的Minor version。
+- `client_version_patch` ([UInt32](../../sql-reference/data-types/int-uint.md)) — [clickhouse-client](../../interfaces/cli.md) 或其他TCP客户端的Patch component。
+- `http_method` (UInt8) — 发起查询的HTTP方法. 可能值:
+    - 0 — TCP接口的查询.
+    - 1 — `GET`
+    - 2 — `POST`
+- `http_user_agent` ([String](../../sql-reference/data-types/string.md)) — HTTP请求中传递的 `UserAgent` 头。
+- `quota_key` ([String](../../sql-reference/data-types/string.md)) — 在[quotas](../../operations/quotas.md) 配置里设置的“quota key” (见 `keyed`).
- `revision` ([UInt32](../../sql-reference/data-types/int-uint.md)) — ClickHouse revision.
-- `thread_numbers` ([数组(UInt32)](../../sql-reference/data-types/array.md)) — Number of threads that are participating in query execution.
-- `ProfileEvents.Names` ([数组(字符串)](../../sql-reference/data-types/array.md)) — Counters that measure different metrics.
The description of them could be found in the table [系统。活动](../../operations/system-tables/events.md#system_tables-events)
-- `ProfileEvents.Values` ([数组(UInt64)](../../sql-reference/data-types/array.md)) — Values of metrics that are listed in the `ProfileEvents.Names` 列。
-- `Settings.Names` ([数组(字符串)](../../sql-reference/data-types/array.md)) — Names of settings that were changed when the client ran the query. To enable logging changes to settings, set the `log_query_settings` 参数为1。
-- `Settings.Values` ([数组(字符串)](../../sql-reference/data-types/array.md)) — Values of settings that are listed in the `Settings.Names` 列。
-
+- `thread_numbers` ([Array(UInt32)](../../sql-reference/data-types/array.md)) — 参与查询的线程数.
+- `ProfileEvents.Names` ([Array(String)](../../sql-reference/data-types/array.md)) — 衡量不同指标的计数器。 可以在[system.events](../../operations/system-tables/events.md#system_tables-events)中找到它们的描述。
+- `ProfileEvents.Values` ([Array(UInt64)](../../sql-reference/data-types/array.md)) — `ProfileEvents.Names` 列中列出的指标的值。
+- `Settings.Names` ([Array(String)](../../sql-reference/data-types/array.md)) — 客户端运行查询时更改的设置的名称。 要启用对设置更改的日志记录,请将log_query_settings参数设置为1。
+- `Settings.Values` ([Array(String)](../../sql-reference/data-types/array.md)) — `Settings.Names` 列中列出的设置的值。

**示例**

``` sql
@@ -140,4 +141,4 @@ Settings.Values: ['0','random','1','10000000000']

**另请参阅**

-- [system.query_thread_log](../../operations/system-tables/query_thread_log.md#system_tables-query_thread_log) — This table contains information about each query execution thread.
+- [system.query_thread_log](../../operations/system-tables/query_thread_log.md#system_tables-query_thread_log) — 这个表包含了每个查询执行线程的信息。
diff --git a/docs/zh/operations/tips.md b/docs/zh/operations/tips.md
index 511e8a22644..6b46dbb5285 100644
--- a/docs/zh/operations/tips.md
+++ b/docs/zh/operations/tips.md
@@ -1,24 +1,8 @@
# 使用建议 {#usage-recommendations}

-## CPU {#cpu}
+## CPU频率调节器 {#cpu-scaling-governor}

-必须支持SSE4.2指令集。 现代处理器(自2008年以来)支持它。
-
-选择处理器时,与较少的内核和较高的时钟速率相比,更喜欢大量内核和稍慢的时钟速率。
-例如,具有2600MHz的16核心比具有3600MHz的8核心更好。
-
-## 超线程 {#hyper-threading}
-
-不要禁用超线程。 它有助于某些查询,但不适用于其他查询。
-
-## 超频 {#turbo-boost}
-
-强烈推荐超频(turbo-boost)。 它显着提高了典型负载的性能。
-您可以使用 `turbostat` 要查看负载下的CPU的实际时钟速率。
-
-## CPU缩放调控器 {#cpu-scaling-governor}
-
-始终使用 `performance` 缩放调控器。 该 `on-demand` 随着需求的不断增加,缩放调节器的工作要糟糕得多。
+始终使用 `performance` 频率调节器。 `on-demand` 频率调节器在持续高需求的情况下,效果更差。

``` bash
echo 'performance' | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
@@ -26,68 +10,70 @@ echo 'performance' | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_gover

## CPU限制 {#cpu-limitations}

-处理器可能会过热。 使用 `dmesg` 看看CPU的时钟速率是否由于过热而受到限制。
-此限制也可以在数据中心级别的外部设置。 您可以使用 `turbostat` 在负载下监视它。
+处理器可能会过热。 使用 `dmesg` 查看CPU的时钟速率是否由于过热而受到限制。
+该限制也可以在数据中心级别外部设置。 您可以使用 `turbostat` 在负载下对其进行监控。

## RAM {#ram}

-对于少量数据(高达-200GB压缩),最好使用与数据量一样多的内存。
-对于大量数据和处理交互式(在线)查询时,应使用合理数量的RAM(128GB或更多),以便热数据子集适合页面缓存。
-即使对于每台服务器约50TB的数据量,使用128GB的RAM与64GB相比显着提高了查询性能。
+对于少量数据(压缩后约200GB),最好使用与数据量一样多的内存。
+对于大量数据,以及在处理交互式(在线)查询时,应使用合理数量的RAM(128GB或更多),以便热数据子集适合页面缓存。
+即使对于每台服务器约50TB的数据量,与64GB相比,使用128GB的RAM也可以显着提高查询性能。

-## 交换文件 {#swap-file}
+不要禁用 overcommit。`cat /proc/sys/vm/overcommit_memory` 的值应该为0或1。运行

-始终禁用交换文件。 不这样做的唯一原因是,如果您使用的ClickHouse在您的个人笔记本电脑。
+``` bash
+$ echo 0 | sudo tee /proc/sys/vm/overcommit_memory
+```

## 大页(Huge Pages) {#huge-pages}

-始终禁用透明大页(transparent huge pages)。 它会干扰内存分alloc,从而导致显着的性能下降。
+始终禁用透明大页(transparent huge pages)。 它会干扰内存分配器,从而导致显着的性能下降。

``` bash
echo 'never' | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
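# 补充示例(假设内核暴露了 transparent_hugepage 的 sysfs 接口):
# 用下面的命令确认透明大页已被禁用,输出中应为 [never]
cat /sys/kernel/mm/transparent_hugepage/enabled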
```

-使用 `perf top` 观察内核中用于内存管理的时间。
+使用 `perf top` 来查看内核在内存管理上花费的时间。
永久大页(permanent huge pages)也不需要被分配。

-## 存储系统 {#storage-subsystem}
+## 存储子系统 {#storage-subsystem}

如果您的预算允许您使用SSD,请使用SSD。
如果没有,请使用硬盘。 SATA硬盘7200转就行了。

-优先选择带有本地硬盘驱动器的大量服务器,而不是带有附加磁盘架的小量服务器。
-但是对于存储具有罕见查询的档案,货架将起作用。
+优先选择许多带有本地硬盘驱动器的服务器,而不是少量带有附加磁盘架的服务器。
+但是对于存储极少查询的档案,磁盘架也可以使用。

## RAID {#raid}

当使用硬盘,你可以结合他们的RAID-10,RAID-5,RAID-6或RAID-50。
-对于Linux,软件RAID更好(与 `mdadm`). 我们不建议使用LVM。
+对于Linux,软件RAID更好(使用 `mdadm`). 我们不建议使用LVM。
当创建RAID-10,选择 `far` 布局。
如果您的预算允许,请选择RAID-10。

-如果您有超过4个磁盘,请使用RAID-6(首选)或RAID-50,而不是RAID-5。
+如果您有4个以上的磁盘,请使用RAID-6(首选)或RAID-50,而不是RAID-5。
当使用RAID-5、RAID-6或RAID-50时,始终增加stripe_cache_size,因为默认值通常不是最佳选择。

``` bash
echo 4096 | sudo tee /sys/block/md2/md/stripe_cache_size
```

-使用以下公式,从设备数量和块大小计算确切数量: `2 * num_devices * chunk_size_in_bytes / 4096`.
+使用以下公式从设备数量和块大小中计算出确切的数量: `2 * num_devices * chunk_size_in_bytes / 4096`。

-1025KB的块大小足以满足所有RAID配置。
+1024KB的块大小足以满足所有RAID配置。
切勿将块大小设置得太小或太大。

您可以在SSD上使用RAID-0。
-无论使用何种RAID,始终使用复制来保证数据安全。
+无论使用哪种RAID,始终使用复制来保证数据安全。

-使用长队列启用NCQ。 对于HDD,选择CFQ调度程序,对于SSD,选择noop。 不要减少 ‘readahead’ 设置。
+启用有长队列的NCQ。 对于HDD,选择CFQ调度程序,对于SSD,选择noop。 不要减少 ‘readahead’ 设置。
对于HDD,启用写入缓存。

## 文件系统 {#file-system}

Ext4是最可靠的选择。 设置挂载选项 `noatime, nobarrier`.
-XFS也是合适的,但它还没有经过ClickHouse的彻底测试。
-大多数其他文件系统也应该正常工作。 具有延迟分配的文件系统工作得更好。
+XFS也是合适的,但它还没有经过ClickHouse的全面测试。
+大多数其他文件系统也应该可以正常工作。 具有延迟分配的文件系统工作得更好。

## Linux内核 {#linux-kernel}

@@ -95,26 +81,43 @@ XFS也是合适的,但它还没有经过ClickHouse的彻底测试。

## 网络 {#network}

-如果您使用的是IPv6,请增加路由缓存的大小。
-3.2之前的Linux内核在IPv6实现方面遇到了许多问题。
+如果使用的是IPv6,请增加路由缓存的大小。
+3.2之前的Linux内核在IPv6实现方面存在许多问题。

-如果可能的话,至少使用一个10GB的网络。 1Gb也可以工作,但对于使用数十tb的数据修补副本或处理具有大量中间数据的分布式查询,情况会更糟。
+如果可能的话,至少使用10GB的网络。1GB也可以工作,但对于使用数十TB的数据修补副本或处理具有大量中间数据的分布式查询,情况会更糟。
+
+## 虚拟机监视器(Hypervisor)配置
+
+如果您使用的是OpenStack,请在nova.conf中设置
+```
+cpu_mode=host-passthrough
+```
+。
+
+如果您使用的是libvirt,请在XML配置中设置
+```
+<cpu mode='host-passthrough'/>
+```
+。
+
+这对于ClickHouse能够通过 `cpuid` 指令获取正确的信息非常重要。
+否则,当在旧的CPU型号上运行虚拟机监视器时,可能会导致 `Illegal instruction` 崩溃。

## Zookeeper {#zookeeper}

-您可能已经将ZooKeeper用于其他目的。 您可以使用相同的zookeeper安装,如果它还没有超载。
+您可能已经将ZooKeeper用于其他目的。 如果它还没有超载,您可以使用相同的zookeeper。

-最好使用新版本的 Zookeeper – 3.4.9 或之后的版本. 稳定 Liunx 发行版中的 Zookeeper 版本可能是落后的。
+最好使用新版本的Zookeeper – 3.4.9 或更高的版本.
稳定的Linux发行版中的Zookeeper版本可能已过时。

-你永远不该使用自己手写的脚本在不同的 Zookeeper 集群之间转移数据, 这可能会导致序列节点的数据不正确。出于同样的原因,永远不要使用 zkcopy 工具: https://github.com/ksprojects/zkcopy/issues/15
+你永远不要使用手动编写的脚本在不同的Zookeeper集群之间传输数据, 这可能会导致序列节点的数据不正确。出于相同的原因,永远不要使用 zkcopy 工具: https://github.com/ksprojects/zkcopy/issues/15

-如果要将现有ZooKeeper集群分为两个,正确的方法是增加其副本的数量,然后将其重新配置为两个独立的集群。
+如果要将现有的ZooKeeper集群分为两个,正确的方法是增加其副本的数量,然后将其重新配置为两个独立的集群。

-不要在与ClickHouse相同的服务器上运行ZooKeeper。 因为ZooKeeper对延迟非常敏感,而ClickHouse可能会占用所有可用的系统资源。
+不要在ClickHouse所在的服务器上运行ZooKeeper。 因为ZooKeeper对延迟非常敏感,而ClickHouse可能会占用所有可用的系统资源。

默认设置下,ZooKeeper 就像是一个定时炸弹:

-当使用默认配置时,ZooKeeper服务不会从旧快照和日志中删除文件(请参阅autopurge),这是操作员的责任。
+当使用默认配置时,ZooKeeper服务器不会从旧的快照和日志中删除文件(请参阅autopurge),这是操作员的责任。

必须拆除炸弹。

@@ -222,7 +225,7 @@ JAVA_OPTS="-Xms{{ '{{' }} cluster.get('xms','128M') {{ '}}' }} \
  -XX:+CMSParallelRemarkEnabled"
```

-Salt init:
+初始化:

description "zookeeper-{{ '{{' }} cluster['name'] {{ '}}' }} centralized coordination service"
diff --git a/docs/zh/sql-reference/aggregate-functions/combinators.md b/docs/zh/sql-reference/aggregate-functions/combinators.md
index c458097a5fb..6d1cd9c775c 100644
--- a/docs/zh/sql-reference/aggregate-functions/combinators.md
+++ b/docs/zh/sql-reference/aggregate-functions/combinators.md
@@ -27,7 +27,7 @@ toc_title: 聚合函数组合器

## -State {#agg-functions-combinator-state}

-如果应用此combinator,则聚合函数不会返回结果值(例如唯一值的数量 [uniq](reference.md#agg_function-uniq) 函数),但是返回聚合的中间状态(对于 `uniq`,返回的是计算唯一值的数量的哈希表)。 这是一个 `AggregateFunction(...)` 可用于进一步处理或存储在表中以完成稍后的聚合。
+如果应用此combinator,则聚合函数不会返回结果值(例如唯一值的数量 [uniq](./reference/uniq.md#agg_function-uniq) 函数),但是返回聚合的中间状态(对于 `uniq`,返回的是计算唯一值的数量的哈希表)。 这是一个 `AggregateFunction(...)` 可用于进一步处理或存储在表中以完成稍后的聚合。

要使用这些状态,请使用:

@@ -209,7 +209,7 @@ FROM

让我们得到的人的名字,他们的年龄在于的时间间隔 `[30,60)` 和 `[60,75)`。 由于我们使用整数表示的年龄,我们得到的年龄 `[30, 59]` 和 `[60,74]` 间隔。

-要在数组中聚合名称,我们使用 [groupArray](reference.md#agg_function-grouparray) 聚合函数。 这需要一个参数。 在我们的例子中,它是 `name` 列。 `groupArrayResample` 函数应该使用 `age` 按年龄聚合名称, 要定义所需的时间间隔,我们传入 `30, 75, 30` 参数给 `groupArrayResample` 函数。
+要在数组中聚合名称,我们使用 [groupArray](./reference/grouparray.md#agg_function-grouparray) 聚合函数。 这需要一个参数。 在我们的例子中,它是 `name` 列。 `groupArrayResample` 函数应该使用 `age` 按年龄聚合名称, 要定义所需的时间间隔,我们传入 `30, 75, 30` 参数给 `groupArrayResample` 函数。

``` sql
SELECT groupArrayResample(30, 75, 30)(name, age) FROM people
diff --git a/docs/zh/sql-reference/aggregate-functions/parametric-functions.md b/docs/zh/sql-reference/aggregate-functions/parametric-functions.md
index d151bbc3957..be9166e5737 100644
--- a/docs/zh/sql-reference/aggregate-functions/parametric-functions.md
+++ b/docs/zh/sql-reference/aggregate-functions/parametric-functions.md
@@ -493,6 +493,6 @@ FROM

## sumMapFiltered(keys_to_keep)(keys, values) {#summapfilteredkeys-to-keepkeys-values}

-和 [sumMap](reference.md#agg_functions-summap) 基本一致, 除了一个键数组作为参数传递。这在使用高基数key时尤其有用。
+和 [sumMap](./reference/summap.md#agg_functions-summap) 基本一致, 除了一个键数组作为参数传递。这在使用高基数key时尤其有用。

[原始文章](https://clickhouse.tech/docs/en/query_language/agg_functions/parametric_functions/)
diff --git a/docs/zh/sql-reference/aggregate-functions/reference.md b/docs/zh/sql-reference/aggregate-functions/reference.md
deleted file mode 100644
index 3a224886a00..00000000000
--- a/docs/zh/sql-reference/aggregate-functions/reference.md
+++ /dev/null
@@ -1,1912 +0,0 @@
----
-toc_priority: 36
-toc_title: 参考手册
----
-
-# 参考手册 {#aggregate-functions-reference}
-
-## count {#agg_function-count}
-
-计数行数或非空值。
-
-ClickHouse支持以下语法 `count`:
-- `count(expr)` 或 `COUNT(DISTINCT expr)`.
-- `count()` 或 `COUNT(*)`.
该 `count()` 语法是ClickHouse特定的。 - -**参数** - -该功能可以采取: - -- 零参数。 -- 一 [表达式](../syntax.md#syntax-expressions). - -**返回值** - -- 如果没有参数调用函数,它会计算行数。 -- 如果 [表达式](../syntax.md#syntax-expressions) 被传递,则该函数计数此表达式返回的次数非null。 如果表达式返回 [可为空](../../sql-reference/data-types/nullable.md)-键入值,然后结果 `count` 保持不 `Nullable`. 如果返回表达式,则该函数返回0 `NULL` 对于所有的行。 - -在这两种情况下,返回值的类型为 [UInt64](../../sql-reference/data-types/int-uint.md). - -**详细信息** - -ClickHouse支持 `COUNT(DISTINCT ...)` 语法 这种结构的行为取决于 [count_distinct_implementation](../../operations/settings/settings.md#settings-count_distinct_implementation) 设置。 它定义了其中的 [uniq\*](#agg_function-uniq) 函数用于执行操作。 默认值为 [uniqExact](#agg_function-uniqexact) 功能。 - -该 `SELECT count() FROM table` 查询未被优化,因为表中的条目数没有单独存储。 它从表中选择一个小列并计算其中的值数。 - -**例** - -示例1: - -``` sql -SELECT count() FROM t -``` - -``` text -┌─count()─┐ -│ 5 │ -└─────────┘ -``` - -示例2: - -``` sql -SELECT name, value FROM system.settings WHERE name = 'count_distinct_implementation' -``` - -``` text -┌─name──────────────────────────┬─value─────┐ -│ count_distinct_implementation │ uniqExact │ -└───────────────────────────────┴───────────┘ -``` - -``` sql -SELECT count(DISTINCT num) FROM t -``` - -``` text -┌─uniqExact(num)─┐ -│ 3 │ -└────────────────┘ -``` - -这个例子表明 `count(DISTINCT num)` 由执行 `uniqExact` 根据功能 `count_distinct_implementation` 设定值。 - -## any(x) {#agg_function-any} - -选择第一个遇到的值。 -查询可以以任何顺序执行,甚至每次都以不同的顺序执行,因此此函数的结果是不确定的。 -要获得确定的结果,您可以使用 ‘min’ 或 ‘max’ 功能,而不是 ‘any’. - -在某些情况下,可以依靠执行的顺序。 这适用于SELECT来自使用ORDER BY的子查询的情况。 - -当一个 `SELECT` 查询具有 `GROUP BY` 子句或至少一个聚合函数,ClickHouse(相对于MySQL)要求在所有表达式 `SELECT`, `HAVING`,和 `ORDER BY` 子句可以从键或聚合函数计算。 换句话说,从表中选择的每个列必须在键或聚合函数内使用。 要获得像MySQL这样的行为,您可以将其他列放在 `any` 聚合函数。 - -## anyHeavy(x) {#anyheavyx} - -使用选择一个频繁出现的值 [重打者](http://www.cs.umd.edu/~samir/498/karp.pdf) 算法。 如果某个值在查询的每个执行线程中出现的情况超过一半,则返回此值。 通常情况下,结果是不确定的。 - -``` sql -anyHeavy(column) -``` - -**参数** - -- `column` – The column name. 
- -**示例** - -就拿 [时间](../../getting-started/example-datasets/ontime.md) 数据集,并选择在任何频繁出现的值 `AirlineID` 列。 - -``` sql -SELECT anyHeavy(AirlineID) AS res -FROM ontime -``` - -``` text -┌───res─┐ -│ 19690 │ -└───────┘ -``` - -## anyLast(x) {#anylastx} - -选择遇到的最后一个值。 -其结果是一样不确定的 `any` 功能。 - -## groupBitAnd {#groupbitand} - -按位应用 `AND` 对于一系列的数字。 - -``` sql -groupBitAnd(expr) -``` - -**参数** - -`expr` – An expression that results in `UInt*` 类型。 - -**返回值** - -的价值 `UInt*` 类型。 - -**示例** - -测试数据: - -``` text -binary decimal -00101100 = 44 -00011100 = 28 -00001101 = 13 -01010101 = 85 -``` - -查询: - -``` sql -SELECT groupBitAnd(num) FROM t -``` - -哪里 `num` 是包含测试数据的列。 - -结果: - -``` text -binary decimal -00000100 = 4 -``` - -## groupBitOr {#groupbitor} - -按位应用 `OR` 对于一系列的数字。 - -``` sql -groupBitOr(expr) -``` - -**参数** - -`expr` – An expression that results in `UInt*` 类型。 - -**返回值** - -的价值 `UInt*` 类型。 - -**示例** - -测试数据: - -``` text -binary decimal -00101100 = 44 -00011100 = 28 -00001101 = 13 -01010101 = 85 -``` - -查询: - -``` sql -SELECT groupBitOr(num) FROM t -``` - -哪里 `num` 是包含测试数据的列。 - -结果: - -``` text -binary decimal -01111101 = 125 -``` - -## groupBitXor {#groupbitxor} - -按位应用 `XOR` 对于一系列的数字。 - -``` sql -groupBitXor(expr) -``` - -**参数** - -`expr` – An expression that results in `UInt*` 类型。 - -**返回值** - -的价值 `UInt*` 类型。 - -**示例** - -测试数据: - -``` text -binary decimal -00101100 = 44 -00011100 = 28 -00001101 = 13 -01010101 = 85 -``` - -查询: - -``` sql -SELECT groupBitXor(num) FROM t -``` - -哪里 `num` 是包含测试数据的列。 - -结果: - -``` text -binary decimal -01101000 = 104 -``` - -## groupBitmap {#groupbitmap} - -从无符号整数列的位图或聚合计算,返回UInt64类型的基数,如果添加后缀状态,则返回 [位图对象](../../sql-reference/functions/bitmap-functions.md). - -``` sql -groupBitmap(expr) -``` - -**参数** - -`expr` – An expression that results in `UInt*` 类型。 - -**返回值** - -的价值 `UInt64` 类型。 - -**示例** - -测试数据: - -``` text -UserID -1 -1 -2 -3 -``` - -查询: - -``` sql -SELECT groupBitmap(UserID) as num FROM t -``` - -结果: - -``` text -num -3 -``` - -## min(x) {#agg_function-min} - -计算最小值。 - -## max(x) {#agg_function-max} - -计算最大值。 - -## argMin(arg,val) {#agg-function-argmin} - -计算 ‘arg’ 最小值的值 ‘val’ 价值。 如果有几个不同的值 ‘arg’ 对于最小值 ‘val’,遇到的第一个值是输出。 - -**示例:** - -``` text -┌─user─────┬─salary─┐ -│ director │ 5000 │ -│ manager │ 3000 │ -│ worker │ 1000 │ -└──────────┴────────┘ -``` - -``` sql -SELECT argMin(user, salary) FROM salary -``` - -``` text -┌─argMin(user, salary)─┐ -│ worker │ -└──────────────────────┘ -``` - -## argMax(arg,val) {#agg-function-argmax} - -计算 ‘arg’ 最大值 ‘val’ 价值。 如果有几个不同的值 ‘arg’ 对于最大值 ‘val’,遇到的第一个值是输出。 - -## sum(x) {#agg_function-sum} - -计算总和。 -只适用于数字。 - -## sumWithOverflow(x) {#sumwithoverflowx} - -使用与输入参数相同的数据类型计算数字的总和。 如果总和超过此数据类型的最大值,则函数返回错误。 - -只适用于数字。 - -## sumMap(key,value),sumMap(Tuple(key,value)) {#agg_functions-summap} - -总计 ‘value’ 数组根据在指定的键 ‘key’ 阵列。 -传递键和值数组的元组与传递两个键和值数组是同义的。 -元素的数量 ‘key’ 和 ‘value’ 总计的每一行必须相同。 -返回两个数组的一个二元组: key是排好序的,value是对应key的求和。 - -示例: - -``` sql -CREATE TABLE sum_map( - date Date, - timeslot DateTime, - statusMap Nested( - status UInt16, - requests UInt64 - ), - statusMapTuple Tuple(Array(Int32), Array(Int32)) -) ENGINE = Log; -INSERT INTO sum_map VALUES - ('2000-01-01', '2000-01-01 00:00:00', [1, 2, 3], [10, 10, 10], ([1, 2, 3], [10, 10, 10])), - ('2000-01-01', '2000-01-01 00:00:00', [3, 4, 5], [10, 10, 10], ([3, 4, 5], [10, 10, 10])), - ('2000-01-01', '2000-01-01 00:01:00', [4, 5, 6], [10, 10, 10], ([4, 5, 6], [10, 10, 10])), - ('2000-01-01', '2000-01-01 00:01:00', [6, 7, 8], [10, 10, 10], ([6, 7, 8], [10, 10, 10])); - -SELECT 
- timeslot, - sumMap(statusMap.status, statusMap.requests), - sumMap(statusMapTuple) -FROM sum_map -GROUP BY timeslot -``` - -``` text -┌────────────timeslot─┬─sumMap(statusMap.status, statusMap.requests)─┬─sumMap(statusMapTuple)─────────┐ -│ 2000-01-01 00:00:00 │ ([1,2,3,4,5],[10,10,20,10,10]) │ ([1,2,3,4,5],[10,10,20,10,10]) │ -│ 2000-01-01 00:01:00 │ ([4,5,6,7,8],[10,10,20,10,10]) │ ([4,5,6,7,8],[10,10,20,10,10]) │ -└─────────────────────┴──────────────────────────────────────────────┴────────────────────────────────┘ -``` - -## skewPop {#skewpop} - -计算的序列[偏度](https://en.wikipedia.org/wiki/Skewness)。 - -``` sql -skewPop(expr) -``` - -**参数** - -`expr` — [表达式](../syntax.md#syntax-expressions) 返回一个数字。 - -**返回值** - -给定序列的偏度。类型 — [Float64](../../sql-reference/data-types/float.md) - -**示例** - -``` sql -SELECT skewPop(value) FROM series_with_value_column -``` - -## skewSamp {#skewsamp} - -计算 [样品偏度](https://en.wikipedia.org/wiki/Skewness) 的序列。 - -它表示随机变量的偏度的无偏估计,如果传递的值形成其样本。 - -``` sql -skewSamp(expr) -``` - -**参数** - -`expr` — [表达式](../syntax.md#syntax-expressions) 返回一个数字。 - -**返回值** - -给定序列的偏度。 类型 — [Float64](../../sql-reference/data-types/float.md). 如果 `n <= 1` (`n` 是样本的大小),则该函数返回 `nan`. - -**示例** - -``` sql -SELECT skewSamp(value) FROM series_with_value_column -``` - -## kurtPop {#kurtpop} - -计算 [峰度](https://en.wikipedia.org/wiki/Kurtosis) 的序列。 - -``` sql -kurtPop(expr) -``` - -**参数** - -`expr` — [表达式](../syntax.md#syntax-expressions) 返回一个数字。 - -**返回值** - -给定序列的峰度。 类型 — [Float64](../../sql-reference/data-types/float.md) - -**示例** - -``` sql -SELECT kurtPop(value) FROM series_with_value_column -``` - -## kurtSamp {#kurtsamp} - -计算 [峰度样本](https://en.wikipedia.org/wiki/Kurtosis) 的序列。 - -它表示随机变量峰度的无偏估计,如果传递的值形成其样本。 - -``` sql -kurtSamp(expr) -``` - -**参数** - -`expr` — [表达式](../syntax.md#syntax-expressions) 返回一个数字。 - -**返回值** - -给定序列的峰度。类型 — [Float64](../../sql-reference/data-types/float.md). 如果 `n <= 1` (`n` 是样本的大小),则该函数返回 `nan`. - -**示例** - -``` sql -SELECT kurtSamp(value) FROM series_with_value_column -``` - -## avg(x) {#agg_function-avg} - -计算平均值。 -只适用于数字。 -结果总是Float64。 - -## avgWeighted {#avgweighted} - -计算 [加权算术平均值](https://en.wikipedia.org/wiki/Weighted_arithmetic_mean). - -**语法** - -``` sql -avgWeighted(x, weight) -``` - -**参数** - -- `x` — 值。 [整数](../data-types/int-uint.md) 或 [浮点](../data-types/float.md). -- `weight` — 值的加权。 [整数](../data-types/int-uint.md) 或 [浮点](../data-types/float.md). - -`x` 和 `weight` 的类型一定是一样的 - -**返回值** - -- 加权平均值。 -- `NaN`. 如果所有的权重都等于0。 - -类型: [Float64](../data-types/float.md). 
- -**示例** - -查询: - -``` sql -SELECT avgWeighted(x, w) -FROM values('x Int8, w Int8', (4, 1), (1, 0), (10, 2)) -``` - -结果: - -``` text -┌─avgWeighted(x, weight)─┐ -│ 8 │ -└────────────────────────┘ -``` - -## uniq {#agg_function-uniq} - -计算参数的不同值的近似数量。 - -``` sql -uniq(x[, ...]) -``` - -**参数** - -该函数采用可变数量的参数。 参数可以是 `Tuple`, `Array`, `Date`, `DateTime`, `String`,或数字类型。 - -**返回值** - -- A [UInt64](../../sql-reference/data-types/int-uint.md)-键入号码。 - -**实现细节** - -功能: - -- 计算聚合中所有参数的哈希值,然后在计算中使用它。 - -- 使用自适应采样算法。 对于计算状态,该函数使用最多65536个元素哈希值的样本。 - - 这个算法是非常精确的,并且对于CPU来说非常高效。如果查询包含一些这样的函数,那和其他聚合函数相比 `uniq` 将是几乎一样快。 - -- 确定性地提供结果(它不依赖于查询处理顺序)。 - -我们建议在几乎所有情况下使用此功能。 - -**另请参阅** - -- [uniqCombined](#agg_function-uniqcombined) -- [uniqCombined64](#agg_function-uniqcombined64) -- [uniqHLL12](#agg_function-uniqhll12) -- [uniqExact](#agg_function-uniqexact) - -## uniqCombined {#agg_function-uniqcombined} - -计算不同参数值的近似数量。 - -``` sql -uniqCombined(HLL_precision)(x[, ...]) -``` - -该 `uniqCombined` 函数是计算不同数值数量的不错选择。 - -**参数** - -该函数采用可变数量的参数。 参数可以是 `Tuple`, `Array`, `Date`, `DateTime`, `String`,或数字类型。 - -`HLL_precision` 是以2为底的单元格数的对数 [HyperLogLog](https://en.wikipedia.org/wiki/HyperLogLog). 可选,您可以将该函数用作 `uniqCombined(x[, ...])`. 默认值 `HLL_precision` 是17,这是有效的96KiB的空间(2^17个单元,每个6比特)。 - -**返回值** - -- 一个[UInt64](../../sql-reference/data-types/int-uint.md)类型的数字。 - -**实现细节** - -功能: - -- 计算散列(64位散列 `String` 否则32位)对于聚合中的所有参数,然后在计算中使用它。 - -- 使用三种算法的组合:数组、哈希表和包含错误修正表的HyperLogLog。 - - 少量的不同的值,使用数组。 值再多一些,使用哈希表。对于大量的数据来说,使用HyperLogLog,HyperLogLog占用一个固定的内存空间。 - -- 确定性地提供结果(它不依赖于查询处理顺序)。 - -!!! note "注" - 因为它使用32位散列非-`String` 类型,结果将有非常高的误差基数显着大于 `UINT_MAX` (错误将在几百亿不同值之后迅速提高),因此在这种情况下,您应该使用 [uniqCombined64](#agg_function-uniqcombined64) - -相比于 [uniq](#agg_function-uniq) 功能,该 `uniqCombined`: - -- 消耗少几倍的内存。 -- 计算精度高出几倍。 -- 通常具有略低的性能。 在某些情况下, `uniqCombined` 可以表现得比 `uniq` 好,例如,使用通过网络传输大量聚合状态的分布式查询。 - -**另请参阅** - -- [uniq](#agg_function-uniq) -- [uniqCombined64](#agg_function-uniqcombined64) -- [uniqHLL12](#agg_function-uniqhll12) -- [uniqExact](#agg_function-uniqexact) - -## uniqCombined64 {#agg_function-uniqcombined64} - -和 [uniqCombined](#agg_function-uniqcombined),但对所有数据类型使用64位哈希。 - -## uniqHLL12 {#agg_function-uniqhll12} - -计算不同参数值的近似数量,使用 [HyperLogLog](https://en.wikipedia.org/wiki/HyperLogLog) 算法。 - -``` sql -uniqHLL12(x[, ...]) -``` - -**参数** - -该函数采用可变数量的参数。 参数可以是 `Tuple`, `Array`, `Date`, `DateTime`, `String`,或数字类型。 - -**返回值** - -- A [UInt64](../../sql-reference/data-types/int-uint.md)-键入号码。 - -**实现细节** - -功能: - -- 计算聚合中所有参数的哈希值,然后在计算中使用它。 - -- 使用HyperLogLog算法来近似不同参数值的数量。 - - 212 5-bit cells are used. The size of the state is slightly more than 2.5 KB. The result is not very accurate (up to ~10% error) for small data sets (<10K elements). However, the result is fairly accurate for high-cardinality data sets (10K-100M), with a maximum error of ~1.6%. Starting from 100M, the estimation error increases, and the function will return very inaccurate results for data sets with extremely high cardinality (1B+ elements). 
- -- 提供确定结果(它不依赖于查询处理顺序)。 - -我们不建议使用此功能。 在大多数情况下,使用 [uniq](#agg_function-uniq) 或 [uniqCombined](#agg_function-uniqcombined) 功能。 - -**另请参阅** - -- [uniq](#agg_function-uniq) -- [uniqCombined](#agg_function-uniqcombined) -- [uniqExact](#agg_function-uniqexact) - -## uniqExact {#agg_function-uniqexact} - -计算不同参数值的准确数目。 - -``` sql -uniqExact(x[, ...]) -``` - -如果你绝对需要一个确切的结果,使用 `uniqExact` 功能。 否则使用 [uniq](#agg_function-uniq) 功能。 - -`uniqExact` 比 `uniq` 使用更多的内存,因为状态的大小随着不同值的数量的增加而无界增长。 - -**参数** - -该函数采用可变数量的参数。 参数可以是 `Tuple`, `Array`, `Date`, `DateTime`, `String`,或数字类型。 - -**另请参阅** - -- [uniq](#agg_function-uniq) -- [uniqCombined](#agg_function-uniqcombined) -- [uniqHLL12](#agg_function-uniqhll12) - -## groupArray(x), groupArray(max_size)(x) {#agg_function-grouparray} - -创建参数值的数组。 -值可以按任何(不确定)顺序添加到数组中。 - -第二个版本(与 `max_size` 参数)将结果数组的大小限制为 `max_size` 元素。 -例如, `groupArray (1) (x)` 相当于 `[any (x)]`. - -在某些情况下,您仍然可以依靠执行的顺序。 这适用于以下情况 `SELECT` 来自使用 `ORDER BY`. - -## groupArrayInsertAt {#grouparrayinsertat} - -在指定位置向数组中插入一个值。 - -**语法** - -``` sql -groupArrayInsertAt(default_x, size)(x, pos); -``` - -如果在一个查询中将多个值插入到同一位置,则该函数的行为方式如下: - -- 如果在单个线程中执行查询,则使用第一个插入的值。 -- 如果在多个线程中执行查询,则结果值是未确定的插入值之一。 - -**参数** - -- `x` — 被插入的值。[表达式](../syntax.md#syntax-expressions) 导致的一个 [支持的数据类型](../../sql-reference/data-types/index.md). -- `pos` — `x` 将被插入的位置。 数组中的索引编号从零开始。 [UInt32](../../sql-reference/data-types/int-uint.md#uint-ranges). -- `default_x`— 如果代入值为空,则使用默认值。可选参数。[表达式](../syntax.md#syntax-expressions) 为 `x` 数据类型的数据。 如果 `default_x` 未定义,则 [默认值](../../sql-reference/statements/create.md#create-default-values) 被使用。 -- `size`— 结果数组的长度。可选参数。如果使用该参数,`default_x` 必须指定。 [UInt32](../../sql-reference/data-types/int-uint.md#uint-ranges). - -**返回值** - -- 具有插入值的数组。 - -类型: [阵列](../../sql-reference/data-types/array.md#data-type-array). 
- -**示例** - -查询: - -``` sql -SELECT groupArrayInsertAt(toString(number), number * 2) FROM numbers(5); -``` - -结果: - -``` text -┌─groupArrayInsertAt(toString(number), multiply(number, 2))─┐ -│ ['0','','1','','2','','3','','4'] │ -└───────────────────────────────────────────────────────────┘ -``` - -查询: - -``` sql -SELECT groupArrayInsertAt('-')(toString(number), number * 2) FROM numbers(5); -``` - -结果: - -``` text -┌─groupArrayInsertAt('-')(toString(number), multiply(number, 2))─┐ -│ ['0','-','1','-','2','-','3','-','4'] │ -└────────────────────────────────────────────────────────────────┘ -``` - -查询: - -``` sql -SELECT groupArrayInsertAt('-', 5)(toString(number), number * 2) FROM numbers(5); -``` - -结果: - -``` text -┌─groupArrayInsertAt('-', 5)(toString(number), multiply(number, 2))─┐ -│ ['0','-','1','-','2'] │ -└───────────────────────────────────────────────────────────────────┘ -``` - -在一个位置多线程插入数据。 - -查询: - -``` sql -SELECT groupArrayInsertAt(number, 0) FROM numbers_mt(10) SETTINGS max_block_size = 1; -``` - -作为这个查询的结果,你会得到随机整数 `[0,9]` 范围。 例如: - -``` text -┌─groupArrayInsertAt(number, 0)─┐ -│ [7] │ -└───────────────────────────────┘ -``` - -## groupArrayMovingSum {#agg_function-grouparraymovingsum} - -计算输入值的移动和。 - -``` sql -groupArrayMovingSum(numbers_for_summing) -groupArrayMovingSum(window_size)(numbers_for_summing) -``` - -该函数可以将窗口大小作为参数。 如果未指定,则该函数的窗口大小等于列中的行数。 - -**参数** - -- `numbers_for_summing` — [表达式](../syntax.md#syntax-expressions) 为数值数据类型值。 -- `window_size` — 窗口大小。 - -**返回值** - -- 与输入数据大小和类型相同的数组。 - -**示例** - -样品表: - -``` sql -CREATE TABLE t -( - `int` UInt8, - `float` Float32, - `dec` Decimal32(2) -) -ENGINE = TinyLog -``` - -``` text -┌─int─┬─float─┬──dec─┐ -│ 1 │ 1.1 │ 1.10 │ -│ 2 │ 2.2 │ 2.20 │ -│ 4 │ 4.4 │ 4.40 │ -│ 7 │ 7.77 │ 7.77 │ -└─────┴───────┴──────┘ -``` - -查询: - -``` sql -SELECT - groupArrayMovingSum(int) AS I, - groupArrayMovingSum(float) AS F, - groupArrayMovingSum(dec) AS D -FROM t -``` - -``` text -┌─I──────────┬─F───────────────────────────────┬─D──────────────────────┐ -│ [1,3,7,14] │ [1.1,3.3000002,7.7000003,15.47] │ [1.10,3.30,7.70,15.47] │ -└────────────┴─────────────────────────────────┴────────────────────────┘ -``` - -``` sql -SELECT - groupArrayMovingSum(2)(int) AS I, - groupArrayMovingSum(2)(float) AS F, - groupArrayMovingSum(2)(dec) AS D -FROM t -``` - -``` text -┌─I──────────┬─F───────────────────────────────┬─D──────────────────────┐ -│ [1,3,6,11] │ [1.1,3.3000002,6.6000004,12.17] │ [1.10,3.30,6.60,12.17] │ -└────────────┴─────────────────────────────────┴────────────────────────┘ -``` - -## groupArrayMovingAvg {#agg_function-grouparraymovingavg} - -计算输入值的移动平均值。 - -``` sql -groupArrayMovingAvg(numbers_for_summing) -groupArrayMovingAvg(window_size)(numbers_for_summing) -``` - -该函数可以将窗口大小作为参数。 如果未指定,则该函数的窗口大小等于列中的行数。 - -**参数** - -- `numbers_for_summing` — [表达式](../syntax.md#syntax-expressions) 生成数值数据类型值。 -- `window_size` — 窗口大小。 - -**返回值** - -- 与输入数据大小和类型相同的数组。 - -该函数使用 [四舍五入到零](https://en.wikipedia.org/wiki/Rounding#Rounding_towards_zero). 
它截断无意义的小数位来保证结果的数据类型。 - -**示例** - -样品表 `b`: - -``` sql -CREATE TABLE t -( - `int` UInt8, - `float` Float32, - `dec` Decimal32(2) -) -ENGINE = TinyLog -``` - -``` text -┌─int─┬─float─┬──dec─┐ -│ 1 │ 1.1 │ 1.10 │ -│ 2 │ 2.2 │ 2.20 │ -│ 4 │ 4.4 │ 4.40 │ -│ 7 │ 7.77 │ 7.77 │ -└─────┴───────┴──────┘ -``` - -查询: - -``` sql -SELECT - groupArrayMovingAvg(int) AS I, - groupArrayMovingAvg(float) AS F, - groupArrayMovingAvg(dec) AS D -FROM t -``` - -``` text -┌─I─────────┬─F───────────────────────────────────┬─D─────────────────────┐ -│ [0,0,1,3] │ [0.275,0.82500005,1.9250001,3.8675] │ [0.27,0.82,1.92,3.86] │ -└───────────┴─────────────────────────────────────┴───────────────────────┘ -``` - -``` sql -SELECT - groupArrayMovingAvg(2)(int) AS I, - groupArrayMovingAvg(2)(float) AS F, - groupArrayMovingAvg(2)(dec) AS D -FROM t -``` - -``` text -┌─I─────────┬─F────────────────────────────────┬─D─────────────────────┐ -│ [0,1,3,5] │ [0.55,1.6500001,3.3000002,6.085] │ [0.55,1.65,3.30,6.08] │ -└───────────┴──────────────────────────────────┴───────────────────────┘ -``` - -## groupUniqArray(x), groupUniqArray(max_size)(x) {#groupuniqarrayx-groupuniqarraymax-sizex} - -从不同的参数值创建一个数组。 内存消耗是一样的 `uniqExact` 功能。 - -第二个版本(`max_size` 参数)将结果数组的大小限制为 `max_size` 元素。 -例如, `groupUniqArray(1)(x)` 相当于 `[any(x)]`. - -## quantile {#quantile} - -计算数字序列的近似[分位数](https://en.wikipedia.org/wiki/Quantile)。 - -此功能适用 [水塘抽样(](https://en.wikipedia.org/wiki/Reservoir_sampling),使用储存器最大到8192和随机数发生器进行采样。 结果是非确定性的。 要获得精确的分位数,请使用 [quantileExact](#quantileexact) 功能。 - -当在一个查询中使用多个不同层次的 `quantile*` 时,内部状态不会被组合(即查询的工作效率低于组合情况)。在这种情况下,使用[分位数](#quantiles)功能。 - -**语法** - -``` sql -quantile(level)(expr) -``` - -别名: `median`. - -**参数** - -- `level` — 分位数层次。可选参数。 从0到1的一个float类型的常量。 我们推荐 `level` 值的范围为 `[0.01, 0.99]`. 默认值:0.5。 在 `level=0.5` 该函数计算 [中位数](https://en.wikipedia.org/wiki/Median). -- `expr` — 求职表达式,类型为:数值[数据类型](../../sql-reference/data-types/index.md#data_types),[日期](../../sql-reference/data-types/date.md)数据类型或[时间](../../sql-reference/data-types/datetime.md)数据类型。 - -**返回值** - -- 指定层次的近似分位数。 - -类型: - -- [Float64](../../sql-reference/data-types/float.md) 对于数字数据类型输入。 -- [日期](../../sql-reference/data-types/date.md) 如果输入值具有 `Date` 类型。 -- [日期时间](../../sql-reference/data-types/datetime.md) 如果输入值具有 `DateTime` 类型。 - -**示例** - -输入表: - -``` text -┌─val─┐ -│ 1 │ -│ 1 │ -│ 2 │ -│ 3 │ -└─────┘ -``` - -查询: - -``` sql -SELECT quantile(val) FROM t -``` - -结果: - -``` text -┌─quantile(val)─┐ -│ 1.5 │ -└───────────────┘ -``` - -**另请参阅** - -- [中位数](#median) -- [分位数](#quantiles) - -## quantileDeterministic {#quantiledeterministic} - -计算数字序列的近似[分位数](https://en.wikipedia.org/wiki/Quantile)。 - -此功能适用 [水塘抽样(](https://en.wikipedia.org/wiki/Reservoir_sampling),使用储存器最大到8192和随机数发生器进行采样。 结果是非确定性的。 要获得精确的分位数,请使用 [quantileExact](#quantileexact) 功能。 - -当在一个查询中使用多个不同层次的 `quantile*` 时,内部状态不会被组合(即查询的工作效率低于组合情况)。在这种情况下,使用[分位数](#quantiles)功能。 - -**语法** - -``` sql -quantileDeterministic(level)(expr, determinator) -``` - -别名: `medianDeterministic`. - -**参数** - -- `level` — 分位数层次。可选参数。 从0到1的一个float类型的常量。 我们推荐 `level` 值的范围为 `[0.01, 0.99]`. 默认值:0.5。 在 `level=0.5` 该函数计算 [中位数](https://en.wikipedia.org/wiki/Median). 
-- `expr` — 求职表达式,类型为:数值[数据类型](../../sql-reference/data-types/index.md#data_types),[日期](../../sql-reference/data-types/date.md)数据类型或[时间](../../sql-reference/data-types/datetime.md)数据类型。 -- `determinator` — 一个数字,其hash被用来代替在水塘抽样中随机生成的数字,这样可以保证取样的确定性。你可以使用用户ID或者事件ID等任何正数,但是如果相同的 `determinator` 出现多次,那结果很可能不正确。 - -**返回值** - -- 指定层次的近似分位数。 - -类型: - -- [Float64](../../sql-reference/data-types/float.md) 对于数字数据类型输入。 -- [日期](../../sql-reference/data-types/date.md) 如果输入值具有 `Date` 类型。 -- [日期时间](../../sql-reference/data-types/datetime.md) 如果输入值具有 `DateTime` 类型。 - -**示例** - -输入表: - -``` text -┌─val─┐ -│ 1 │ -│ 1 │ -│ 2 │ -│ 3 │ -└─────┘ -``` - -查询: - -``` sql -SELECT quantileDeterministic(val, 1) FROM t -``` - -结果: - -``` text -┌─quantileDeterministic(val, 1)─┐ -│ 1.5 │ -└───────────────────────────────┘ -``` - -**另请参阅** - -- [中位数](#median) -- [分位数](#quantiles) - -## quantileExact {#quantileexact} - -准确计算数字序列的[分位数](https://en.wikipedia.org/wiki/Quantile)。 - -为了准确计算,所有输入的数据被合并为一个数组,并且部分的排序。因此该函数需要 `O(n)` 的内存,n为输入数据的个数。但是对于少量数据来说,该函数还是非常有效的。 - -当在一个查询中使用多个不同层次的 `quantile*` 时,内部状态不会被组合(即查询的工作效率低于组合情况)。在这种情况下,使用[分位数](#quantiles)功能。 - -**语法** - -``` sql -quantileExact(level)(expr) -``` - -别名: `medianExact`. - -**参数** - -- `level` — 分位数层次。可选参数。 从0到1的一个float类型的常量。 我们推荐 `level` 值的范围为 `[0.01, 0.99]`. 默认值:0.5。 在 `level=0.5` 该函数计算 [中位数](https://en.wikipedia.org/wiki/Median). -- `expr` — 求职表达式,类型为:数值[数据类型](../../sql-reference/data-types/index.md#data_types),[日期](../../sql-reference/data-types/date.md)数据类型或[时间](../../sql-reference/data-types/datetime.md)数据类型。 - -**返回值** - -- 指定层次的分位数。 - -类型: - -- [Float64](../../sql-reference/data-types/float.md) 对于数字数据类型输入。 -- [日期](../../sql-reference/data-types/date.md) 如果输入值具有 `Date` 类型。 -- [日期时间](../../sql-reference/data-types/datetime.md) 如果输入值具有 `DateTime` 类型。 - -**示例** - -查询: - -``` sql -SELECT quantileExact(number) FROM numbers(10) -``` - -结果: - -``` text -┌─quantileExact(number)─┐ -│ 5 │ -└───────────────────────┘ -``` - -**另请参阅** - -- [中位数](#median) -- [分位数](#quantiles) - -## quantileExactWeighted {#quantileexactweighted} - -考虑到每个元素的权重,然后准确计算数值序列的[分位数](https://en.wikipedia.org/wiki/Quantile)。 - -为了准确计算,所有输入的数据被合并为一个数组,并且部分的排序。每个输入值需要根据 `weight` 计算求和。该算法使用哈希表。正因为如此,在数据重复较多的时候使用的内存是少于[quantileExact](#quantileexact)的。 您可以使用此函数代替 `quantileExact` 并指定重量1。 - -当在一个查询中使用多个不同层次的 `quantile*` 时,内部状态不会被组合(即查询的工作效率低于组合情况)。在这种情况下,使用[分位数](#quantiles)功能。 - -**语法** - -``` sql -quantileExactWeighted(level)(expr, weight) -``` - -别名: `medianExactWeighted`. - -**参数** - -- `level` — 分位数层次。可选参数。 从0到1的一个float类型的常量。 我们推荐 `level` 值的范围为 `[0.01, 0.99]`. 默认值:0.5。 在 `level=0.5` 该函数计算 [中位数](https://en.wikipedia.org/wiki/Median). 
-- `expr` — 求职表达式,类型为:数值[数据类型](../../sql-reference/data-types/index.md#data_types),[日期](../../sql-reference/data-types/date.md)数据类型或[时间](../../sql-reference/data-types/datetime.md)数据类型。 -- `weight` — 权重序列。 权重是一个数据出现的数值。 - -**返回值** - -- 指定层次的分位数。 - -类型: - -- [Float64](../../sql-reference/data-types/float.md) 对于数字数据类型输入。 -- [日期](../../sql-reference/data-types/date.md) 如果输入值具有 `Date` 类型。 -- [日期时间](../../sql-reference/data-types/datetime.md) 如果输入值具有 `DateTime` 类型。 - -**示例** - -输入表: - -``` text -┌─n─┬─val─┐ -│ 0 │ 3 │ -│ 1 │ 2 │ -│ 2 │ 1 │ -│ 5 │ 4 │ -└───┴─────┘ -``` - -查询: - -``` sql -SELECT quantileExactWeighted(n, val) FROM t -``` - -结果: - -``` text -┌─quantileExactWeighted(n, val)─┐ -│ 1 │ -└───────────────────────────────┘ -``` - -**另请参阅** - -- [中位数](#median) -- [分位数](#quantiles) - -## quantileTiming {#quantiletiming} - -使用确定的精度计算数字数据序列的[分位数](https://en.wikipedia.org/wiki/Quantile)。 - -结果是确定性的(它不依赖于查询处理顺序)。 该函数针对描述加载网页时间或后端响应时间等分布的序列进行了优化。 - -当在一个查询中使用多个不同层次的 `quantile*` 时,内部状态不会被组合(即查询的工作效率低于组合情况)。在这种情况下,使用[分位数](#quantiles)功能。 - -**语法** - -``` sql -quantileTiming(level)(expr) -``` - -别名: `medianTiming`. - -**参数** - -- `level` — 分位数层次。可选参数。 从0到1的一个float类型的常量。 我们推荐 `level` 值的范围为 `[0.01, 0.99]`. 默认值:0.5。 在 `level=0.5` 该函数计算 [中位数](https://en.wikipedia.org/wiki/Median). - -- `expr` — [表达式](../syntax.md#syntax-expressions),返回 [浮动\*](../../sql-reference/data-types/float.md)类型数据。 - - - 如果输入负值,那结果是不可预期的。 - - 如果输入值大于30000(页面加载时间大于30s),那我们假设为30000。 - -**精度** - -计算是准确的,如果: - -- 值的总数不超过5670。 -- 总数值超过5670,但页面加载时间小于1024ms。 - -否则,计算结果将四舍五入到16毫秒的最接近倍数。 - -!!! note "注" - 对于计算页面加载时间分位数,此函数比 [分位数](#quantile)更有效和准确。 - -**返回值** - -- 指定层次的分位数。 - -类型: `Float32`. - -!!! note "注" - 如果没有值传递给函数(当使用 `quantileTimingIf`), [NaN](../../sql-reference/data-types/float.md#data_type-float-nan-inf) 被返回。 这样做的目的是将这些案例与导致零的案例区分开来。 看 [ORDER BY clause](../statements/select/order-by.md#select-order-by) 对于 `NaN` 值排序注意事项。 - -**示例** - -输入表: - -``` text -┌─response_time─┐ -│ 72 │ -│ 112 │ -│ 126 │ -│ 145 │ -│ 104 │ -│ 242 │ -│ 313 │ -│ 168 │ -│ 108 │ -└───────────────┘ -``` - -查询: - -``` sql -SELECT quantileTiming(response_time) FROM t -``` - -结果: - -``` text -┌─quantileTiming(response_time)─┐ -│ 126 │ -└───────────────────────────────┘ -``` - -**另请参阅** - -- [中位数](#median) -- [分位数](#quantiles) - -## quantileTimingWeighted {#quantiletimingweighted} - -根据每个序列成员的权重,使用确定的精度计算数字序列的[分位数](https://en.wikipedia.org/wiki/Quantile)。 - -结果是确定性的(它不依赖于查询处理顺序)。 该函数针对描述加载网页时间或后端响应时间等分布的序列进行了优化。 - -当在一个查询中使用多个不同层次的 `quantile*` 时,内部状态不会被组合(即查询的工作效率低于组合情况)。在这种情况下,使用[分位数](#quantiles)功能。 - -**语法** - -``` sql -quantileTimingWeighted(level)(expr, weight) -``` - -别名: `medianTimingWeighted`. - -**参数** - -- `level` — 分位数层次。可选参数。 从0到1的一个float类型的常量。 我们推荐 `level` 值的范围为 `[0.01, 0.99]`. 默认值:0.5。 在 `level=0.5` 该函数计算 [中位数](https://en.wikipedia.org/wiki/Median). - -- `expr` — [表达式](../syntax.md#syntax-expressions),返回 [浮动\*](../../sql-reference/data-types/float.md)类型数据。 - - - 如果输入负值,那结果是不可预期的。 - - 如果输入值大于30000(页面加载时间大于30s),那我们假设为30000。 - -- `weight` — 权重序列。 权重是一个数据出现的数值。 - -**精度** - -计算是准确的,如果: - -- 值的总数不超过5670。 -- 总数值超过5670,但页面加载时间小于1024ms。 - -否则,计算结果将四舍五入到16毫秒的最接近倍数。 - -!!! note "注" - 对于计算页面加载时间分位数,此函数比 [分位数](#quantile)更高效和准确。 - -**返回值** - -- 指定层次的分位数。 - -类型: `Float32`. - -!!! 
note "注" - 如果没有值传递给函数(当使用 `quantileTimingIf`), [NaN](../../sql-reference/data-types/float.md#data_type-float-nan-inf) 被返回。 这样做的目的是将这些案例与导致零的案例区分开来。看 [ORDER BY clause](../statements/select/order-by.md#select-order-by) 对于 `NaN` 值排序注意事项。 - -**示例** - -输入表: - -``` text -┌─response_time─┬─weight─┐ -│ 68 │ 1 │ -│ 104 │ 2 │ -│ 112 │ 3 │ -│ 126 │ 2 │ -│ 138 │ 1 │ -│ 162 │ 1 │ -└───────────────┴────────┘ -``` - -查询: - -``` sql -SELECT quantileTimingWeighted(response_time, weight) FROM t -``` - -结果: - -``` text -┌─quantileTimingWeighted(response_time, weight)─┐ -│ 112 │ -└───────────────────────────────────────────────┘ -``` - -**另请参阅** - -- [中位数](#median) -- [分位数](#quantiles) - -## quantileTDigest {#quantiletdigest} - -使用[t-digest](https://github.com/tdunning/t-digest/blob/master/docs/t-digest-paper/histo.pdf) 算法计算近似[分位数](https://en.wikipedia.org/wiki/Quantile)。 - -最大误差为1%。 内存消耗 `log(n)`,这里 `n` 是值的个数。 结果取决于运行查询的顺序,并且是不确定的。 - -该功能的性能低于性能 [分位数](#quantile) 或 [时间分位](#quantiletiming). 在状态大小与精度的比率方面,这个函数比 `quantile`更优秀。 - -当在一个查询中使用多个不同层次的 `quantile*` 时,内部状态不会被组合(即查询的工作效率低于组合情况)。在这种情况下,使用[分位数](#quantiles)功能。 - -**语法** - -``` sql -quantileTDigest(level)(expr) -``` - -别名: `medianTDigest`. - -**参数** - -- `level` — 分位数层次。可选参数。 从0到1的一个float类型的常量。 我们推荐 `level` 值的范围为 `[0.01, 0.99]`. 默认值:0.5。 在 `level=0.5` 该函数计算 [中位数](https://en.wikipedia.org/wiki/Median). -- `expr` — 求职表达式,类型为:数值[数据类型](../../sql-reference/data-types/index.md#data_types),[日期](../../sql-reference/data-types/date.md)数据类型或[时间](../../sql-reference/data-types/datetime.md)数据类型。 - -**回值** - -- 指定层次的分位数。 - -类型: - -- [Float64](../../sql-reference/data-types/float.md) 对于数字数据类型输入。 -- [日期](../../sql-reference/data-types/date.md) 如果输入值具有 `Date` 类型。 -- [日期时间](../../sql-reference/data-types/datetime.md) 如果输入值具有 `DateTime` 类型。 - -**示例** - -查询: - -``` sql -SELECT quantileTDigest(number) FROM numbers(10) -``` - -结果: - -``` text -┌─quantileTDigest(number)─┐ -│ 4.5 │ -└─────────────────────────┘ -``` - -**另请参阅** - -- [中位数](#median) -- [分位数](#quantiles) - -## quantileTDigestWeighted {#quantiletdigestweighted} - -使用[t-digest](https://github.com/tdunning/t-digest/blob/master/docs/t-digest-paper/histo.pdf) 算法计算近似[分位数](https://en.wikipedia.org/wiki/Quantile)。 该函数考虑了每个序列成员的权重。最大误差为1%。 内存消耗 `log(n)`,这里 `n` 是值的个数。 - -该功能的性能低于性能 [分位数](#quantile) 或 [时间分位](#quantiletiming). 在状态大小与精度的比率方面,这个函数比 `quantile`更优秀。 - -结果取决于运行查询的顺序,并且是不确定的。 - -当在一个查询中使用多个不同层次的 `quantile*` 时,内部状态不会被组合(即查询的工作效率低于组合情况)。在这种情况下,使用[分位数](#quantiles)功能 - -**语法** - -``` sql -quantileTDigest(level)(expr) -``` - -别名: `medianTDigest`. - -**参数** - -- `level` — 分位数层次。可选参数。 从0到1的一个float类型的常量。 我们推荐 `level` 值的范围为 `[0.01, 0.99]`. 默认值:0.5。 在 `level=0.5` 该函数计算 [中位数](https://en.wikipedia.org/wiki/Median). 
-- `expr` — 求职表达式,类型为:数值[数据类型](../../sql-reference/data-types/index.md#data_types),[日期](../../sql-reference/data-types/date.md)数据类型或[时间](../../sql-reference/data-types/datetime.md)数据类型。 -- `weight` — 权重序列。 权重是一个数据出现的数值。 - -**返回值** - -- 指定层次的分位数。 - -类型: - -- [Float64](../../sql-reference/data-types/float.md) 对于数字数据类型输入。 -- [日期](../../sql-reference/data-types/date.md) 如果输入值具有 `Date` 类型。 -- [日期时间](../../sql-reference/data-types/datetime.md) 如果输入值具有 `DateTime` 类型。 - -**示例** - -查询: - -``` sql -SELECT quantileTDigestWeighted(number, 1) FROM numbers(10) -``` - -结果: - -``` text -┌─quantileTDigestWeighted(number, 1)─┐ -│ 4.5 │ -└────────────────────────────────────┘ -``` - -**另请参阅** - -- [中位数](#median) -- [分位数](#quantiles) - -## median {#median} - -`median*` 函数是 `quantile*` 函数的别名。 它们计算数字数据样本的中位数。 - -函数: - -- `median` — [quantile](#quantile)别名。 -- `medianDeterministic` — [quantileDeterministic](#quantiledeterministic)别名。 -- `medianExact` — [quantileExact](#quantileexact)别名。 -- `medianExactWeighted` — [quantileExactWeighted](#quantileexactweighted)别名。 -- `medianTiming` — [quantileTiming](#quantiletiming)别名。 -- `medianTimingWeighted` — [quantileTimingWeighted](#quantiletimingweighted)别名。 -- `medianTDigest` — [quantileTDigest](#quantiletdigest)别名。 -- `medianTDigestWeighted` — [quantileTDigestWeighted](#quantiletdigestweighted)别名。 - -**示例** - -输入表: - -``` text -┌─val─┐ -│ 1 │ -│ 1 │ -│ 2 │ -│ 3 │ -└─────┘ -``` - -查询: - -``` sql -SELECT medianDeterministic(val, 1) FROM t -``` - -结果: - -``` text -┌─medianDeterministic(val, 1)─┐ -│ 1.5 │ -└─────────────────────────────┘ -``` - -## quantiles(level1, level2, …)(x) {#quantiles} - -所有分位数函数也有相应的函数: `quantiles`, `quantilesDeterministic`, `quantilesTiming`, `quantilesTimingWeighted`, `quantilesExact`, `quantilesExactWeighted`, `quantilesTDigest`。这些函数一次计算所列层次的所有分位数,并返回结果值的数组。 - -## varSamp(x) {#varsampx} - -计算 `Σ((x - x̅)^2) / (n - 1)`,这里 `n` 是样本大小, `x̅`是`x`的平均值。 - -它表示随机变量的方差的无偏估计,如果传递的值形成其样本。 - -返回 `Float64`. 当 `n <= 1`,返回 `+∞`. - -!!! note "注" - 该函数使用数值不稳定的算法。 如果你需要 [数值稳定性](https://en.wikipedia.org/wiki/Numerical_stability) 在计算中,使用 `varSampStable` 功能。 它的工作速度较慢,但提供较低的计算错误。 - -## varPop(x) {#varpopx} - -计算 `Σ((x - x̅)^2) / n`,这里 `n` 是样本大小, `x̅`是`x`的平均值。 - -换句话说,计算一组数据的离差。 返回 `Float64`。 - -!!! note "注" - 该函数使用数值不稳定的算法。 如果你需要 [数值稳定性](https://en.wikipedia.org/wiki/Numerical_stability) 在计算中,使用 `varPopStable` 功能。 它的工作速度较慢,但提供较低的计算错误。 - -## stddevSamp(x) {#stddevsampx} - -结果等于平方根 `varSamp(x)`。 - -!!! note "注" - 该函数使用数值不稳定的算法。 如果你需要 [数值稳定性](https://en.wikipedia.org/wiki/Numerical_stability) 在计算中,使用 `stddevSampStable` 功能。 它的工作速度较慢,但提供较低的计算错误。 - -## stddevPop(x) {#stddevpopx} - -结果等于平方根 `varPop(x)`。 - -!!! note "注" - 该函数使用数值不稳定的算法。 如果你需要 [数值稳定性](https://en.wikipedia.org/wiki/Numerical_stability) 在计算中,使用 `stddevPopStable` 功能。 它的工作速度较慢,但提供较低的计算错误。 - -## topK(N)(x) {#topknx} - -返回指定列中近似最常见值的数组。 生成的数组按值的近似频率降序排序(而不是值本身)。 - -实现了[过滤节省空间](http://www.l2f.inesc-id.pt/~fmmb/wiki/uploads/Work/misnis.ref0a.pdf)算法, 使用基于reduce-and-combine的算法,借鉴[并行节省空间](https://arxiv.org/pdf/1401.0702.pdf). 
- -``` sql -topK(N)(column) -``` - -此函数不提供保证的结果。 在某些情况下,可能会发生错误,并且可能会返回不是最高频的值。 - -我们建议使用 `N < 10` 值,`N` 值越大,性能越低。最大值 `N = 65536`。 - -**参数** - -- ‘N’ 是要返回的元素数。 - -如果省略该参数,则使用默认值10。 - -**参数** - -- ' x ' – 计算的频率值。 - -**示例** - -就拿 [OnTime](../../getting-started/example-datasets/ontime.md) 数据集来说,选择`AirlineID` 列中出现最频繁的三个。 - -``` sql -SELECT topK(3)(AirlineID) AS res -FROM ontime -``` - -``` text -┌─res─────────────────┐ -│ [19393,19790,19805] │ -└─────────────────────┘ -``` - -## topKWeighted {#topkweighted} - -类似于 `topK` 但需要一个整数类型的附加参数 - `weight`. 每个输入都被记入 `weight` 次频率计算。 - -**语法** - -``` sql -topKWeighted(N)(x, weight) -``` - -**参数** - -- `N` — 返回值个数。 - -**参数** - -- `x` – 输入值。 -- `weight` — 权重。 [UInt8](../../sql-reference/data-types/int-uint.md)类型。 - -**返回值** - -返回具有最大近似权重总和的值数组。 - -**示例** - -查询: - -``` sql -SELECT topKWeighted(10)(number, number) FROM numbers(1000) -``` - -结果: - -``` text -┌─topKWeighted(10)(number, number)──────────┐ -│ [999,998,997,996,995,994,993,992,991,990] │ -└───────────────────────────────────────────┘ -``` - -## covarSamp(x,y) {#covarsampx-y} - -计算 `Σ((x - x̅)(y - y̅)) / (n - 1)`。 - -返回Float64。 当 `n <= 1`, returns +∞。 - -!!! note "注" - 该函数使用数值不稳定的算法。 如果你需要 [数值稳定性](https://en.wikipedia.org/wiki/Numerical_stability) 在计算中,使用 `covarSampStable` 功能。 它的工作速度较慢,但提供较低的计算错误。 - -## covarPop(x,y) {#covarpopx-y} - -计算 `Σ((x - x̅)(y - y̅)) / n`。 - -!!! note "注" - 该函数使用数值不稳定的算法。 如果你需要 [数值稳定性](https://en.wikipedia.org/wiki/Numerical_stability) 在计算中,使用 `covarPopStable` 功能。 它的工作速度较慢,但提供了较低的计算错误。 - -## corr(x,y) {#corrx-y} - -计算Pearson相关系数: `Σ((x - x̅)(y - y̅)) / sqrt(Σ((x - x̅)^2) * Σ((y - y̅)^2))`。 - -!!! note "注" - 该函数使用数值不稳定的算法。 如果你需要 [数值稳定性](https://en.wikipedia.org/wiki/Numerical_stability) 在计算中,使用 `corrStable` 功能。 它的工作速度较慢,但提供较低的计算错误。 - -## categoricalInformationValue {#categoricalinformationvalue} - -对于每个类别计算 `(P(tag = 1) - P(tag = 0))(log(P(tag = 1)) - log(P(tag = 0)))` 。 - -``` sql -categoricalInformationValue(category1, category2, ..., tag) -``` - -结果指示离散(分类)要素如何使用 `[category1, category2, ...]` 有助于使用学习模型预测`tag`的值。 - -## simpleLinearRegression {#simplelinearregression} - -执行简单(一维)线性回归。 - -``` sql -simpleLinearRegression(x, y) -``` - -参数: - -- `x` — x轴。 -- `y` — y轴。 - -返回值: - -符合`y = a*x + b`的常量 `(a, b)` 。 - -**例** - -``` sql -SELECT arrayReduce('simpleLinearRegression', [0, 1, 2, 3], [0, 1, 2, 3]) -``` - -``` text -┌─arrayReduce('simpleLinearRegression', [0, 1, 2, 3], [0, 1, 2, 3])─┐ -│ (1,0) │ -└───────────────────────────────────────────────────────────────────┘ -``` - -``` sql -SELECT arrayReduce('simpleLinearRegression', [0, 1, 2, 3], [3, 4, 5, 6]) -``` - -``` text -┌─arrayReduce('simpleLinearRegression', [0, 1, 2, 3], [3, 4, 5, 6])─┐ -│ (1,3) │ -└───────────────────────────────────────────────────────────────────┘ -``` - -## stochasticLinearRegression {#agg_functions-stochasticlinearregression} - -该函数实现随机线性回归。 它支持自定义参数的学习率、L2正则化系数、微批,并且具有少量更新权重的方法([Adam](https://en.wikipedia.org/wiki/Stochastic_gradient_descent#Adam) (默认), [simple SGD](https://en.wikipedia.org/wiki/Stochastic_gradient_descent), [Momentum](https://en.wikipedia.org/wiki/Stochastic_gradient_descent#Momentum), [Nesterov](https://mipt.ru/upload/medialibrary/d7e/41-91.pdf))。 - -### 参数 {#agg_functions-stochasticlinearregression-parameters} - -有4个可自定义的参数。 它们按顺序传递给函数,但是没有必要传递所有四个默认值将被使用,但是好的模型需要一些参数调整。 - -``` text -stochasticLinearRegression(1.0, 1.0, 10, 'SGD') -``` - -1. `learning rate` 当执行梯度下降步骤时,步长上的系数。 过大的学习率可能会导致模型的权重无限大。 默认值为 `0.00001`. -2. `l2 regularization coefficient` 这可能有助于防止过度拟合。 默认值为 `0.1`. -3. 
`mini-batch size` 设置元素的数量,这些元素将被计算和求和以执行梯度下降的一个步骤。 纯随机下降使用一个元素,但是具有小批量(约10个元素)使梯度步骤更稳定。 默认值为 `15`. -4. `method for updating weights` 他们是: `Adam` (默认情况下), `SGD`, `Momentum`, `Nesterov`. `Momentum` 和 `Nesterov` 需要更多的计算和内存,但是它们恰好在收敛速度和随机梯度方法的稳定性方面是有用的。 - -### 用法 {#agg_functions-stochasticlinearregression-usage} - -`stochasticLinearRegression` 用于两个步骤:拟合模型和预测新数据。 为了拟合模型并保存其状态以供以后使用,我们使用 `-State` combinator,它基本上保存了状态(模型权重等)。 -为了预测我们使用函数 [evalMLMethod](../functions/machine-learning-functions.md#machine_learning_methods-evalmlmethod),这需要一个状态作为参数以及特征来预测。 - - - -**1.** 安装 - -可以使用这种查询。 - -``` sql -CREATE TABLE IF NOT EXISTS train_data -( - param1 Float64, - param2 Float64, - target Float64 -) ENGINE = Memory; - -CREATE TABLE your_model ENGINE = Memory AS SELECT -stochasticLinearRegressionState(0.1, 0.0, 5, 'SGD')(target, param1, param2) -AS state FROM train_data; -``` - -在这里,我们还需要将数据插入到 `train_data` 桌子 参数的数量不是固定的,它只取决于参数的数量,传递到 `linearRegressionState`. 它们都必须是数值。 -请注意,带有目标值的列(我们想要学习预测)被插入作为第一个参数。 - -**2.** 预测 - -在将状态保存到表中之后,我们可以多次使用它进行预测,甚至与其他状态合并并创建新的更好的模型。 - -``` sql -WITH (SELECT state FROM your_model) AS model SELECT -evalMLMethod(model, param1, param2) FROM test_data -``` - -查询将返回一列预测值。 请注意,第一个参数 `evalMLMethod` 是 `AggregateFunctionState` 对象,接下来是要素列。 - -`test_data` 是一个像表 `train_data` 但可能不包含目标值。 - -### 注 {#agg_functions-stochasticlinearregression-notes} - -1. 要合并两个模型,用户可以创建这样的查询: - `sql SELECT state1 + state2 FROM your_models` - 哪里 `your_models` 表包含这两个模型。 此查询将返回new `AggregateFunctionState` 对象。 - -2. 如果没有,用户可以获取创建的模型的权重用于自己的目的,而不保存模型 `-State` 使用combinator。 - `sql SELECT stochasticLinearRegression(0.01)(target, param1, param2) FROM train_data` - 这种查询将拟合模型并返回其权重-首先是权重,它对应于模型的参数,最后一个是偏差。 所以在上面的例子中,查询将返回一个具有3个值的列。 - -**另请参阅** - -- [stochasticLogisticRegression](#agg_functions-stochasticlogisticregression) -- [线性回归和逻辑回归之间的区别](https://stackoverflow.com/questions/12146914/what-is-the-difference-between-linear-regression-and-logistic-regression) - -## stochasticLogisticRegression {#agg_functions-stochasticlogisticregression} - -该函数实现随机逻辑回归。 它可以用于二进制分类问题,支持与stochasticLinearRegression相同的自定义参数,并以相同的方式工作。 - -### 参数 {#agg_functions-stochasticlogisticregression-parameters} - -参数与stochasticLinearRegression中的参数完全相同: -`learning rate`, `l2 regularization coefficient`, `mini-batch size`, `method for updating weights`. -欲了解更多信息,请参阅 [参数](#agg_functions-stochasticlinearregression-parameters). - -``` text -stochasticLogisticRegression(1.0, 1.0, 10, 'SGD') -``` - -**1.** 安装 - - - - 参考stochasticLinearRegression相关文档 - - 预测标签的取值范围为[-1, 1] - -**2.** 预测 - - - - 使用已经保存的state我们可以预测标签为 `1` 的对象的概率。 - - ``` sql - WITH (SELECT state FROM your_model) AS model SELECT - evalMLMethod(model, param1, param2) FROM test_data - ``` - - 查询结果返回一个列的概率。注意 `evalMLMethod` 的第一个参数是 `AggregateFunctionState` 对象,接下来的参数是列的特性。 - - 我们也可以设置概率的范围, 这样需要给元素指定不同的标签。 - - ``` sql - SELECT ans < 1.1 AND ans > 0.5 FROM - (WITH (SELECT state FROM your_model) AS model SELECT - evalMLMethod(model, param1, param2) AS ans FROM test_data) - ``` - - 结果是标签。 - - `test_data` 是一个像 `train_data` 一样的表,但是不包含目标值。 - -**另请参阅** - -- [随机指标线上回归](#agg_functions-stochasticlinearregression) -- [线性回归和逻辑回归之间的差异](https://stackoverflow.com/questions/12146914/what-is-the-difference-between-linear-regression-and-logistic-regression) - -## groupBitmapAnd {#groupbitmapand} - -计算位图列的AND,返回UInt64类型的基数,如果添加后缀状态,则返回 [位图对象](../../sql-reference/functions/bitmap-functions.md). 
- -``` sql -groupBitmapAnd(expr) -``` - -**参数** - -`expr` – An expression that results in `AggregateFunction(groupBitmap, UInt*)` 类型。 - -**返回值** - -的价值 `UInt64` 类型。 - -**示例** - -``` sql -DROP TABLE IF EXISTS bitmap_column_expr_test2; -CREATE TABLE bitmap_column_expr_test2 -( - tag_id String, - z AggregateFunction(groupBitmap, UInt32) -) -ENGINE = MergeTree -ORDER BY tag_id; - -INSERT INTO bitmap_column_expr_test2 VALUES ('tag1', bitmapBuild(cast([1,2,3,4,5,6,7,8,9,10] as Array(UInt32)))); -INSERT INTO bitmap_column_expr_test2 VALUES ('tag2', bitmapBuild(cast([6,7,8,9,10,11,12,13,14,15] as Array(UInt32)))); -INSERT INTO bitmap_column_expr_test2 VALUES ('tag3', bitmapBuild(cast([2,4,6,8,10,12] as Array(UInt32)))); - -SELECT groupBitmapAnd(z) FROM bitmap_column_expr_test2 WHERE like(tag_id, 'tag%'); -┌─groupBitmapAnd(z)─┐ -│ 3 │ -└───────────────────┘ - -SELECT arraySort(bitmapToArray(groupBitmapAndState(z))) FROM bitmap_column_expr_test2 WHERE like(tag_id, 'tag%'); -┌─arraySort(bitmapToArray(groupBitmapAndState(z)))─┐ -│ [6,8,10] │ -└──────────────────────────────────────────────────┘ -``` - -## groupBitmapOr {#groupbitmapor} - -计算位图列的OR,返回UInt64类型的基数,如果添加后缀状态,则返回 [位图对象](../../sql-reference/functions/bitmap-functions.md). 这相当于 `groupBitmapMerge`. - -``` sql -groupBitmapOr(expr) -``` - -**参数** - -`expr` – An expression that results in `AggregateFunction(groupBitmap, UInt*)` 类型。 - -**返回值** - -的价值 `UInt64` 类型。 - -**示例** - -``` sql -DROP TABLE IF EXISTS bitmap_column_expr_test2; -CREATE TABLE bitmap_column_expr_test2 -( - tag_id String, - z AggregateFunction(groupBitmap, UInt32) -) -ENGINE = MergeTree -ORDER BY tag_id; - -INSERT INTO bitmap_column_expr_test2 VALUES ('tag1', bitmapBuild(cast([1,2,3,4,5,6,7,8,9,10] as Array(UInt32)))); -INSERT INTO bitmap_column_expr_test2 VALUES ('tag2', bitmapBuild(cast([6,7,8,9,10,11,12,13,14,15] as Array(UInt32)))); -INSERT INTO bitmap_column_expr_test2 VALUES ('tag3', bitmapBuild(cast([2,4,6,8,10,12] as Array(UInt32)))); - -SELECT groupBitmapOr(z) FROM bitmap_column_expr_test2 WHERE like(tag_id, 'tag%'); -┌─groupBitmapOr(z)─┐ -│ 15 │ -└──────────────────┘ - -SELECT arraySort(bitmapToArray(groupBitmapOrState(z))) FROM bitmap_column_expr_test2 WHERE like(tag_id, 'tag%'); -┌─arraySort(bitmapToArray(groupBitmapOrState(z)))─┐ -│ [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15] │ -└─────────────────────────────────────────────────┘ -``` - -## groupBitmapXor {#groupbitmapxor} - -计算位图列的XOR,返回UInt64类型的基数,如果添加后缀状态,则返回 [位图对象](../../sql-reference/functions/bitmap-functions.md). 
-
-``` sql
-groupBitmapOr(expr)
-```
-
-**参数**
-
-`expr` – An expression that results in `AggregateFunction(groupBitmap, UInt*)` 类型。
-
-**返回值**
-
-的价值 `UInt64` 类型。
-
-**示例**
-
-``` sql
-DROP TABLE IF EXISTS bitmap_column_expr_test2;
-CREATE TABLE bitmap_column_expr_test2
-(
-    tag_id String,
-    z AggregateFunction(groupBitmap, UInt32)
-)
-ENGINE = MergeTree
-ORDER BY tag_id;
-
-INSERT INTO bitmap_column_expr_test2 VALUES ('tag1', bitmapBuild(cast([1,2,3,4,5,6,7,8,9,10] as Array(UInt32))));
-INSERT INTO bitmap_column_expr_test2 VALUES ('tag2', bitmapBuild(cast([6,7,8,9,10,11,12,13,14,15] as Array(UInt32))));
-INSERT INTO bitmap_column_expr_test2 VALUES ('tag3', bitmapBuild(cast([2,4,6,8,10,12] as Array(UInt32))));
-
-SELECT groupBitmapXor(z) FROM bitmap_column_expr_test2 WHERE like(tag_id, 'tag%');
-┌─groupBitmapXor(z)─┐
-│                10 │
-└───────────────────┘
-
-SELECT arraySort(bitmapToArray(groupBitmapXorState(z))) FROM bitmap_column_expr_test2 WHERE like(tag_id, 'tag%');
-┌─arraySort(bitmapToArray(groupBitmapXorState(z)))─┐
-│ [1,3,5,6,8,10,11,13,14,15]                       │
-└──────────────────────────────────────────────────┘
-```
-
-[原始文章](https://clickhouse.tech/docs/en/query_language/agg_functions/reference/)
diff --git a/docs/zh/sql-reference/aggregate-functions/reference/any.md b/docs/zh/sql-reference/aggregate-functions/reference/any.md
new file mode 100644
index 00000000000..205ff1c1944
--- /dev/null
+++ b/docs/zh/sql-reference/aggregate-functions/reference/any.md
@@ -0,0 +1,13 @@
+---
+toc_priority: 6
+---
+
+# any {#agg_function-any}
+
+选择第一个遇到的值。
+查询可以以任何顺序执行,甚至每次都以不同的顺序执行,因此此函数的结果是不确定的。
+要获得确定的结果,您可以使用 `min` 或 `max` 函数,而不是 `any`。
+
+在某些情况下,可以依赖执行顺序。这适用于 `SELECT` 来自使用了 `ORDER BY` 的子查询的情况。
+
+当一个 `SELECT` 查询具有 `GROUP BY` 子句或至少一个聚合函数时,ClickHouse(与 MySQL 不同)要求 `SELECT`、`HAVING` 和 `ORDER BY` 子句中的所有表达式都必须由键或聚合函数计算得出。换句话说,从表中选择的每一列都必须用在键或聚合函数中。要获得像 MySQL 那样的行为,您可以把其他列放进 `any` 聚合函数中。
diff --git a/docs/zh/sql-reference/aggregate-functions/reference/anyheavy.md b/docs/zh/sql-reference/aggregate-functions/reference/anyheavy.md
new file mode 100644
index 00000000000..b67be9e48cf
--- /dev/null
+++ b/docs/zh/sql-reference/aggregate-functions/reference/anyheavy.md
@@ -0,0 +1,34 @@
+---
+toc_priority: 103
+---
+
+# anyHeavy {#anyheavyx}
+
+使用 [heavy hitters](http://www.cs.umd.edu/~samir/498/karp.pdf) 算法选择一个频繁出现的值。如果某个值在查询的每个执行线程中出现的次数超过一半,则返回此值。通常情况下,结果是不确定的。
+
+``` sql
+anyHeavy(column)
+```
+
+**参数**
+
+- `column` – 列名。
+
+**示例**
+
+使用 [OnTime](../../../getting-started/example-datasets/ontime.md) 数据集,选择 `AirlineID` 列中任意一个频繁出现的值。
+
+查询:
+
+``` sql
+SELECT anyHeavy(AirlineID) AS res
+FROM ontime;
+```
+
+结果:
+
+``` text
+┌───res─┐
+│ 19690 │
+└───────┘
+```
diff --git a/docs/zh/sql-reference/aggregate-functions/reference/anylast.md b/docs/zh/sql-reference/aggregate-functions/reference/anylast.md
new file mode 100644
index 00000000000..e6792e0e449
--- /dev/null
+++ b/docs/zh/sql-reference/aggregate-functions/reference/anylast.md
@@ -0,0 +1,9 @@
+---
+toc_priority: 104
+---
+
+## anyLast {#anylastx}
+
+选择遇到的最后一个值。
+其结果和 [any](../../../sql-reference/aggregate-functions/reference/any.md) 函数一样是不确定的。
+ 
\ No newline at end of file
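+
+**示例**(补充的示意用法;`anyLast` 的结果依赖处理顺序,下面的结果假设按单线程、单数据块顺序处理,仅供说明):
+
+``` sql
+SELECT anyLast(x) FROM values('x Int8', 1, 2, 3);
+```
+
+``` text
+┌─anyLast(x)─┐
+│          3 │
+└────────────┘
+```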
diff --git a/docs/zh/sql-reference/aggregate-functions/reference/argmax.md b/docs/zh/sql-reference/aggregate-functions/reference/argmax.md
new file mode 100644
index 00000000000..9d90590b2f1
--- /dev/null
+++ b/docs/zh/sql-reference/aggregate-functions/reference/argmax.md
@@ -0,0 +1,64 @@
+---
+toc_priority: 106
+---
+
+# argMax {#agg-function-argmax}
+
+计算 `val` 最大值对应的 `arg` 值。如果 `val` 最大值存在几个不同的 `arg` 值,输出遇到的第一个值。
+
+这个函数的Tuple版本将返回 `val` 最大值对应的元组。本函数适合和 `SimpleAggregateFunction` 搭配使用。
+
+**语法**
+
+``` sql
+argMax(arg, val)
+```
+
+或
+
+``` sql
+argMax(tuple(arg, val))
+```
+
+**参数**
+
+- `arg` — 参数(取值列)。
+- `val` — 值(比较列)。
+
+**返回值**
+
+- `val` 最大值对应的 `arg` 值。
+
+类型: 匹配 `arg` 类型。
+
+对于输入中的元组:
+
+- 元组 `(arg, val)`,其中 `val` 是最大值,`arg` 是对应的值。
+
+类型: [元组](../../../sql-reference/data-types/tuple.md)。
+
+**示例**
+
+输入表:
+
+``` text
+┌─user─────┬─salary─┐
+│ director │   5000 │
+│ manager  │   3000 │
+│ worker   │   1000 │
+└──────────┴────────┘
+```
+
+查询:
+
+``` sql
+SELECT argMax(user, salary), argMax(tuple(user, salary), salary), argMax(tuple(user, salary)) FROM salary;
+```
+
+结果:
+
+``` text
+┌─argMax(user, salary)─┬─argMax(tuple(user, salary), salary)─┬─argMax(tuple(user, salary))─┐
+│ director             │ ('director',5000)                   │ ('director',5000)           │
+└──────────────────────┴─────────────────────────────────────┴─────────────────────────────┘
+```
diff --git a/docs/zh/sql-reference/aggregate-functions/reference/argmin.md b/docs/zh/sql-reference/aggregate-functions/reference/argmin.md
new file mode 100644
index 00000000000..0dd4625ac0d
--- /dev/null
+++ b/docs/zh/sql-reference/aggregate-functions/reference/argmin.md
@@ -0,0 +1,37 @@
+---
+toc_priority: 105
+---
+
+# argMin {#agg-function-argmin}
+
+语法: `argMin(arg, val)` 或 `argMin(tuple(arg, val))`
+
+计算 `val` 最小值对应的 `arg` 值。如果 `val` 最小值存在几个不同的 `arg` 值,输出遇到的第一个(`arg`)值。
+
+这个函数的Tuple版本将返回 `val` 最小值对应的tuple。本函数适合和 `SimpleAggregateFunction` 搭配使用。
+
+**示例:**
+
+输入表:
+
+``` text
+┌─user─────┬─salary─┐
+│ director │   5000 │
+│ manager  │   3000 │
+│ worker   │   1000 │
+└──────────┴────────┘
+```
+
+查询:
+
+``` sql
+SELECT argMin(user, salary), argMin(tuple(user, salary)) FROM salary;
+```
+
+结果:
+
+``` text
+┌─argMin(user, salary)─┬─argMin(tuple(user, salary))─┐
+│ worker               │ ('worker',1000)             │
+└──────────────────────┴─────────────────────────────┘
+```
diff --git a/docs/zh/sql-reference/aggregate-functions/reference/avg.md b/docs/zh/sql-reference/aggregate-functions/reference/avg.md
new file mode 100644
index 00000000000..739654adc1c
--- /dev/null
+++ b/docs/zh/sql-reference/aggregate-functions/reference/avg.md
@@ -0,0 +1,64 @@
+---
+toc_priority: 5
+---
+
+# avg {#agg_function-avg}
+
+计算算术平均值。
+
+**语法**
+
+``` sql
+avg(x)
+```
+
+**参数**
+
+- `x` — 输入值, 必须是 [Integer](../../../sql-reference/data-types/int-uint.md), [Float](../../../sql-reference/data-types/float.md), 或 [Decimal](../../../sql-reference/data-types/decimal.md)。
+
+**返回值**
+
+- 算术平均值,总是 [Float64](../../../sql-reference/data-types/float.md) 类型。
+- 输入参数 `x` 为空时返回 `NaN`。
+
+**示例**
+
+查询:
+
+``` sql
+SELECT avg(x) FROM values('x Int8', 0, 1, 2, 3, 4, 5);
+```
+
+结果:
+
+``` text
+┌─avg(x)─┐
+│    2.5 │
+└────────┘
+```
+
+**示例**
+
+创建一个临时表:
+
+查询:
+
+``` sql
+CREATE table test (t UInt8) ENGINE = Memory;
+```
+
+获取算术平均值:
+
+查询:
+
+``` sql
+SELECT avg(t) FROM test;
+```
+
+结果:
+
+``` text
+┌─avg(t)─┐
+│    nan │
+└────────┘
+```
diff --git a/docs/zh/sql-reference/aggregate-functions/reference/avgweighted.md b/docs/zh/sql-reference/aggregate-functions/reference/avgweighted.md
new file mode 100644
index 00000000000..9b732f57b4a
--- /dev/null
+++ b/docs/zh/sql-reference/aggregate-functions/reference/avgweighted.md
@@ -0,0 +1,84 @@
+---
+toc_priority: 107
+---
+
+# avgWeighted {#avgweighted}
+
+
+计算 [加权算术平均值](https://en.wikipedia.org/wiki/Weighted_arithmetic_mean)。
+
+**语法**
+
+``` sql
+avgWeighted(x, weight)
+```
+
+**参数**
+
+- `x` — 值。
+- `weight` — 值的加权。
+
+`x` 和 `weight` 的类型必须是
+[整数](../../../sql-reference/data-types/int-uint.md), 或
+[浮点数](../../../sql-reference/data-types/float.md), 或
+[定点数](../../../sql-reference/data-types/decimal.md),
+但两者的类型可以不同。
+
+**返回值**
+
+- `NaN`,如果所有的权重都等于 0,或所提供的权重参数为空。
+- 其他情况下返回加权平均值。
+
+类型: 总是 [Float64](../../../sql-reference/data-types/float.md)。
+
+**示例**
+
+查询:
+
+``` sql
+SELECT avgWeighted(x, w)
+FROM values('x Int8, w Int8', (4, 1), (1, 0), (10, 2))
+```
+
+结果:
+
+``` text
+┌─avgWeighted(x, w)─┐
+│                 8 │
+└───────────────────┘
+```
+
+
+**示例**
+
+查询:
+
+``` sql
+SELECT avgWeighted(x, w)
+FROM values('x Int8, w Int8', (0, 0), (1, 0), (10, 0))
+```
+
+结果:
+
+``` text
+┌─avgWeighted(x, w)─┐
+│               nan │
+└───────────────────┘
+```
+
+**示例**
+
+查询:
+
+``` sql
+CREATE table test (t UInt8) ENGINE = Memory;
+SELECT avgWeighted(t, t) FROM test
+```
+
+结果:
+
+``` text
+┌─avgWeighted(t, t)─┐
+│               nan │
+└───────────────────┘
+```
diff --git a/docs/zh/sql-reference/aggregate-functions/reference/categoricalinformationvalue.md b/docs/zh/sql-reference/aggregate-functions/reference/categoricalinformationvalue.md
new file mode 100644
index 00000000000..1970e76c2fd
--- /dev/null
+++ b/docs/zh/sql-reference/aggregate-functions/reference/categoricalinformationvalue.md
@@ -0,0 +1,13 @@
+---
+toc_priority: 250
+---
+
+# categoricalInformationValue {#categoricalinformationvalue}
+
+对于每个类别计算 `(P(tag = 1) - P(tag = 0))(log(P(tag = 1)) - log(P(tag = 0)))` 。
+
+``` sql
+categoricalInformationValue(category1, category2, ..., tag)
+```
+
+结果表明离散(分类)特征 `[category1, category2, ...]` 如何对预测 `tag` 值的学习模型作出贡献。
diff --git a/docs/zh/sql-reference/aggregate-functions/reference/corr.md b/docs/zh/sql-reference/aggregate-functions/reference/corr.md
new file mode 100644
index 00000000000..5ab49f75023
--- /dev/null
+++ b/docs/zh/sql-reference/aggregate-functions/reference/corr.md
@@ -0,0 +1,15 @@
+---
+toc_priority: 107
+---
+
+# corr {#corrx-y}
+
+**语法**
+``` sql
+corr(x, y)
+```
+
+计算Pearson相关系数: `Σ((x - x̅)(y - y̅)) / sqrt(Σ((x - x̅)^2) * Σ((y - y̅)^2))`。
+
+!!! note "注"
+该函数使用数值不稳定的算法。如果你需要计算中的[数值稳定性](https://en.wikipedia.org/wiki/Numerical_stability),请使用 `corrStable` 函数。它的工作速度较慢,但计算误差较低。
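+
+**示例**(补充的示意用法,用 `values` 表函数构造完全线性相关的数据;此时 Pearson 相关系数为 1):
+
+``` sql
+SELECT corr(x, y) FROM values('x Int32, y Int32', (1, 2), (2, 4), (3, 6));
+```
+
+``` text
+┌─corr(x, y)─┐
+│          1 │
+└────────────┘
+```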
diff --git a/docs/zh/sql-reference/aggregate-functions/reference/count.md b/docs/zh/sql-reference/aggregate-functions/reference/count.md
new file mode 100644
index 00000000000..fc528980bfa
--- /dev/null
+++ b/docs/zh/sql-reference/aggregate-functions/reference/count.md
@@ -0,0 +1,70 @@
+---
+toc_priority: 1
+---
+
+# count {#agg_function-count}
+
+
+计数行数或非空值。
+
+ClickHouse支持以下 `count` 语法:
+- `count(expr)` 或 `COUNT(DISTINCT expr)`。
+- `count()` 或 `COUNT(*)`. 该 `count()` 语法是ClickHouse特定的。
+
+**参数**
+
+该函数可以接受:
+
+- 零参数。
+- 一个 [表达式](../../../sql-reference/syntax.md#syntax-expressions)。
+
+**返回值**
+
+- 如果不带参数调用,函数计算行数。
+- 如果传递了[表达式](../../../sql-reference/syntax.md#syntax-expressions),则该函数计算此表达式返回非 NULL 值的次数。如果表达式返回[可为空](../../../sql-reference/data-types/nullable.md)类型的值,`count` 的结果仍然不是 `Nullable`。如果表达式对于所有的行都返回 `NULL`,则该函数返回 0。
+
+在这两种情况下,返回值的类型为 [UInt64](../../../sql-reference/data-types/int-uint.md)。
+
+**详细信息**
+
+ClickHouse支持 `COUNT(DISTINCT ...)` 语法,这种结构的行为取决于 [count_distinct_implementation](../../../operations/settings/settings.md#settings-count_distinct_implementation) 设置。它定义了用于执行该操作的 [uniq\*](../../../sql-reference/aggregate-functions/reference/uniq.md#agg_function-uniq) 函数。默认值是 [uniqExact](../../../sql-reference/aggregate-functions/reference/uniqexact.md#agg_function-uniqexact) 函数。
+
+`SELECT count() FROM table` 这个查询不会被优化,因为表中的条目数没有单独存储。它会从表中选择一个较小的列并计算其中值的个数。
+
+**示例**
+
+示例1:
+
+``` sql
+SELECT count() FROM t
+```
+
+``` text
+┌─count()─┐
+│       5 │
+└─────────┘
+```
+
+示例2:
+
+``` sql
+SELECT name, value FROM system.settings WHERE name = 'count_distinct_implementation'
+```
+
+``` text
+┌─name──────────────────────────┬─value─────┐
+│ count_distinct_implementation │ uniqExact │
+└───────────────────────────────┴───────────┘
+```
+
+``` sql
+SELECT count(DISTINCT num) FROM t
+```
+
+``` text
+┌─uniqExact(num)─┐
+│              3 │
+└────────────────┘
+```
+
+这个例子表明 `count(DISTINCT num)` 是通过 `count_distinct_implementation` 的设定值 `uniqExact` 函数来执行的。
diff --git a/docs/zh/sql-reference/aggregate-functions/reference/covarpop.md b/docs/zh/sql-reference/aggregate-functions/reference/covarpop.md
new file mode 100644
index 00000000000..c6f43c6b9e9
--- /dev/null
+++ b/docs/zh/sql-reference/aggregate-functions/reference/covarpop.md
@@ -0,0 +1,15 @@
+---
+toc_priority: 36
+---
+
+# covarPop {#covarpop}
+
+**语法**
+``` sql
+covarPop(x, y)
+```
+
+计算 `Σ((x - x̅)(y - y̅)) / n` 的值。
+
+!!! note "注"
+该函数使用数值不稳定的算法。如果你需要计算中的[数值稳定性](https://en.wikipedia.org/wiki/Numerical_stability),请使用 `covarPopStable` 函数。它的工作速度较慢,但计算误差较低。
diff --git a/docs/zh/sql-reference/aggregate-functions/reference/covarsamp.md b/docs/zh/sql-reference/aggregate-functions/reference/covarsamp.md
new file mode 100644
index 00000000000..5ef5104504b
--- /dev/null
+++ b/docs/zh/sql-reference/aggregate-functions/reference/covarsamp.md
@@ -0,0 +1,17 @@
+---
+toc_priority: 37
+---
+
+# covarSamp {#covarsamp}
+
+**语法**
+``` sql
+covarSamp(x, y)
+```
+
+计算 `Σ((x - x̅)(y - y̅)) / (n - 1)` 的值。
+
+返回 Float64。当 `n <= 1` 时,返回 +∞。
+
+!!! 
note "注"
+该函数使用数值不稳定的算法。如果你需要计算中的[数值稳定性](https://en.wikipedia.org/wiki/Numerical_stability),请使用 `covarSampStable` 函数。它的工作速度较慢,但计算误差较低。
diff --git a/docs/zh/sql-reference/aggregate-functions/reference/deltasum.md b/docs/zh/sql-reference/aggregate-functions/reference/deltasum.md
new file mode 100644
index 00000000000..e439263bf78
--- /dev/null
+++ b/docs/zh/sql-reference/aggregate-functions/reference/deltasum.md
@@ -0,0 +1,69 @@
+---
+toc_priority: 141
+---
+
+# deltaSum {#agg_functions-deltasum}
+
+计算连续行之间的差值和。如果差值为负,则忽略。
+
+**语法**
+
+``` sql
+deltaSum(value)
+```
+
+**参数**
+
+- `value` — 必须是 [整型](../../data-types/int-uint.md) 或者 [浮点型](../../data-types/float.md)。
+
+**返回值**
+
+- `Integer` 或 `Float` 型的算术差值和。
+
+**示例**
+
+查询:
+
+``` sql
+SELECT deltaSum(arrayJoin([1, 2, 3]));
+```
+
+结果:
+
+``` text
+┌─deltaSum(arrayJoin([1, 2, 3]))─┐
+│                              2 │
+└────────────────────────────────┘
+```
+
+查询:
+
+``` sql
+SELECT deltaSum(arrayJoin([1, 2, 3, 0, 3, 4, 2, 3]));
+```
+
+结果:
+
+``` text
+┌─deltaSum(arrayJoin([1, 2, 3, 0, 3, 4, 2, 3]))─┐
+│                                             7 │
+└───────────────────────────────────────────────┘
+```
+
+查询:
+
+``` sql
+SELECT deltaSum(arrayJoin([2.25, 3, 4.5]));
+```
+
+结果:
+
+``` text
+┌─deltaSum(arrayJoin([2.25, 3, 4.5]))─┐
+│                                2.25 │
+└─────────────────────────────────────┘
+```
+
+**参见**
+
+- [runningDifference](../../functions/other-functions.md#other_functions-runningdifference)
diff --git a/docs/zh/sql-reference/aggregate-functions/reference/grouparray.md b/docs/zh/sql-reference/aggregate-functions/reference/grouparray.md
new file mode 100644
index 00000000000..0a8f1cd326d
--- /dev/null
+++ b/docs/zh/sql-reference/aggregate-functions/reference/grouparray.md
@@ -0,0 +1,20 @@
+---
+toc_priority: 110
+---
+
+# groupArray {#agg_function-grouparray}
+
+**语法**
+``` sql
+groupArray(x)
+或
+groupArray(max_size)(x)
+```
+
+创建参数值的数组。
+值可以按任何(不确定)顺序添加到数组中。
+
+第二个版本(带有 `max_size` 参数)将结果数组的大小限制为 `max_size` 个元素。
+例如,`groupArray(1)(x)` 相当于 `[any(x)]`。
+
+在某些情况下,您仍然可以依赖执行顺序。这适用于 `SELECT` 来自使用了 `ORDER BY` 的子查询的情况。
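+
+**示例**(补充的示意用法;数组中元素的顺序不作保证):
+
+``` sql
+SELECT groupArray(x), groupArray(2)(x) FROM values('x Int8', 1, 2, 3);
+```
+
+在简单的单线程场景下,通常得到 `[1,2,3]` 和 `[1,2]`。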
diff --git a/docs/zh/sql-reference/aggregate-functions/reference/grouparrayinsertat.md b/docs/zh/sql-reference/aggregate-functions/reference/grouparrayinsertat.md
new file mode 100644
index 00000000000..3a50b24fd7f
--- /dev/null
+++ b/docs/zh/sql-reference/aggregate-functions/reference/grouparrayinsertat.md
@@ -0,0 +1,91 @@
+---
+toc_priority: 112
+---
+
+# groupArrayInsertAt {#grouparrayinsertat}
+
+在指定位置向数组中插入一个值。
+
+**语法**
+
+``` sql
+groupArrayInsertAt(default_x, size)(x, pos);
+```
+
+如果在一个查询中将多个值插入到同一位置,则该函数的行为方式如下:
+
+- 如果在单个线程中执行查询,则使用第一个插入的值。
+- 如果在多个线程中执行查询,则结果值是未确定的插入值之一。
+
+**参数**
+
+- `x` — 要插入的值。结果为[支持的数据类型](../../../sql-reference/data-types/index.md)的[表达式](../../../sql-reference/syntax.md#syntax-expressions)。
+- `pos` — 指定元素 `x` 将被插入的位置。数组中的索引编号从零开始。[UInt32](../../../sql-reference/data-types/int-uint.md#uint-ranges)。
+- `default_x` — 在空位置替换的默认值。可选参数。结果为 `x` 数据类型的[表达式](../../../sql-reference/syntax.md#syntax-expressions)。如果 `default_x` 未定义,则使用[默认值](../../../sql-reference/statements/create.md#create-default-values)。
+- `size` — 结果数组的长度。可选参数。如果使用该参数,必须指定默认值 `default_x`。[UInt32](../../../sql-reference/data-types/int-uint.md#uint-ranges)。
+
+**返回值**
+
+- 具有插入值的数组。
+
+类型: [数组](../../../sql-reference/data-types/array.md#data-type-array)。
+
+**示例**
+
+查询:
+
+``` sql
+SELECT groupArrayInsertAt(toString(number), number * 2) FROM numbers(5);
+```
+
+结果:
+
+``` text
+┌─groupArrayInsertAt(toString(number), multiply(number, 2))─┐
+│ ['0','','1','','2','','3','','4']                         │
+└───────────────────────────────────────────────────────────┘
+```
+
+查询:
+
+``` sql
+SELECT groupArrayInsertAt('-')(toString(number), number * 2) FROM numbers(5);
+```
+
+结果:
+
+``` text
+┌─groupArrayInsertAt('-')(toString(number), multiply(number, 2))─┐
+│ ['0','-','1','-','2','-','3','-','4']                          │
+└────────────────────────────────────────────────────────────────┘
+```
+
+查询:
+
+``` sql
+SELECT groupArrayInsertAt('-', 5)(toString(number), number * 2) FROM numbers(5);
+```
+
+结果:
+
+``` text
+┌─groupArrayInsertAt('-', 5)(toString(number), multiply(number, 2))─┐
+│ ['0','-','1','-','2']                                             │
+└───────────────────────────────────────────────────────────────────┘
+```
+
+多线程向同一位置插入数据。
+
+查询:
+
+``` sql
+SELECT groupArrayInsertAt(number, 0) FROM numbers_mt(10) SETTINGS max_block_size = 1;
+```
+
+作为这个查询的结果,你会得到 `[0,9]` 范围的随机整数。例如:
+
+``` text
+┌─groupArrayInsertAt(number, 0)─┐
+│ [7]                           │
+└───────────────────────────────┘
+```
diff --git a/docs/zh/sql-reference/aggregate-functions/reference/grouparraymovingavg.md b/docs/zh/sql-reference/aggregate-functions/reference/grouparraymovingavg.md
new file mode 100644
index 00000000000..8cdfc302b39
--- /dev/null
+++ b/docs/zh/sql-reference/aggregate-functions/reference/grouparraymovingavg.md
@@ -0,0 +1,85 @@
+---
+toc_priority: 114
+---
+
+# groupArrayMovingAvg {#agg_function-grouparraymovingavg}
+
+计算输入值的移动平均值。
+
+**语法**
+
+``` sql
+groupArrayMovingAvg(numbers_for_summing)
+groupArrayMovingAvg(window_size)(numbers_for_summing)
+```
+
+该函数可以将窗口大小作为参数。如果未指定,则该函数的窗口大小等于列中的行数。
+
+**参数**
+
+- `numbers_for_summing` — [表达式](../../../sql-reference/syntax.md#syntax-expressions) 生成数值数据类型值。
+- `window_size` — 窗口大小。
+
+**返回值**
+
+- 与输入数据大小相同的数组。
+
+对于输入数据类型是[Integer](../../../sql-reference/data-types/int-uint.md)
+或[floating-point](../../../sql-reference/data-types/float.md)的情况,
+对应的返回值类型是 `Float64`。
+对于输入数据类型是[Decimal](../../../sql-reference/data-types/decimal.md) 的情况,返回值类型是 `Decimal128`。
+
+该函数对于 `Decimal128` 使用 [四舍五入到零](https://en.wikipedia.org/wiki/Rounding#Rounding_towards_zero). 
它截断无意义的小数位来保证结果的数据类型。 + +**示例** + +样表 `t`: + +``` sql +CREATE TABLE t +( + `int` UInt8, + `float` Float32, + `dec` Decimal32(2) +) +ENGINE = TinyLog +``` + +``` text +┌─int─┬─float─┬──dec─┐ +│ 1 │ 1.1 │ 1.10 │ +│ 2 │ 2.2 │ 2.20 │ +│ 4 │ 4.4 │ 4.40 │ +│ 7 │ 7.77 │ 7.77 │ +└─────┴───────┴──────┘ +``` + +查询: + +``` sql +SELECT + groupArrayMovingAvg(int) AS I, + groupArrayMovingAvg(float) AS F, + groupArrayMovingAvg(dec) AS D +FROM t +``` + +``` text +┌─I────────────────────┬─F─────────────────────────────────────────────────────────────────────────────┬─D─────────────────────┐ +│ [0.25,0.75,1.75,3.5] │ [0.2750000059604645,0.8250000178813934,1.9250000417232513,3.8499999940395355] │ [0.27,0.82,1.92,3.86] │ +└──────────────────────┴───────────────────────────────────────────────────────────────────────────────┴───────────────────────┘ +``` + +``` sql +SELECT + groupArrayMovingAvg(2)(int) AS I, + groupArrayMovingAvg(2)(float) AS F, + groupArrayMovingAvg(2)(dec) AS D +FROM t +``` + +``` text +┌─I───────────────┬─F───────────────────────────────────────────────────────────────────────────┬─D─────────────────────┐ +│ [0.5,1.5,3,5.5] │ [0.550000011920929,1.6500000357627869,3.3000000715255737,6.049999952316284] │ [0.55,1.65,3.30,6.08] │ +└─────────────────┴─────────────────────────────────────────────────────────────────────────────┴───────────────────────┘ +``` diff --git a/docs/zh/sql-reference/aggregate-functions/reference/grouparraymovingsum.md b/docs/zh/sql-reference/aggregate-functions/reference/grouparraymovingsum.md new file mode 100644 index 00000000000..d58d848e7ac --- /dev/null +++ b/docs/zh/sql-reference/aggregate-functions/reference/grouparraymovingsum.md @@ -0,0 +1,81 @@ +--- +toc_priority: 113 +--- + +# groupArrayMovingSum {#agg_function-grouparraymovingsum} + + +计算输入值的移动和。 + +**语法** + +``` sql +groupArrayMovingSum(numbers_for_summing) +groupArrayMovingSum(window_size)(numbers_for_summing) +``` + +该函数可以将窗口大小作为参数。 如果未指定,则该函数的窗口大小等于列中的行数。 + +**参数** + +- `numbers_for_summing` — [表达式](../../../sql-reference/syntax.md#syntax-expressions) 生成数值数据类型值。 +- `window_size` — 窗口大小。 + +**返回值** + +- 与输入数据大小相同的数组。 +对于输入数据类型是[Decimal](../../../sql-reference/data-types/decimal.md) 数组元素类型是 `Decimal128` 。 +对于其他的数值类型, 获取其对应的 `NearestFieldType` 。 + +**示例** + +样表: + +``` sql +CREATE TABLE t +( + `int` UInt8, + `float` Float32, + `dec` Decimal32(2) +) +ENGINE = TinyLog +``` + +``` text +┌─int─┬─float─┬──dec─┐ +│ 1 │ 1.1 │ 1.10 │ +│ 2 │ 2.2 │ 2.20 │ +│ 4 │ 4.4 │ 4.40 │ +│ 7 │ 7.77 │ 7.77 │ +└─────┴───────┴──────┘ +``` + +查询: + +``` sql +SELECT + groupArrayMovingSum(int) AS I, + groupArrayMovingSum(float) AS F, + groupArrayMovingSum(dec) AS D +FROM t +``` + +``` text +┌─I──────────┬─F───────────────────────────────┬─D──────────────────────┐ +│ [1,3,7,14] │ [1.1,3.3000002,7.7000003,15.47] │ [1.10,3.30,7.70,15.47] │ +└────────────┴─────────────────────────────────┴────────────────────────┘ +``` + +``` sql +SELECT + groupArrayMovingSum(2)(int) AS I, + groupArrayMovingSum(2)(float) AS F, + groupArrayMovingSum(2)(dec) AS D +FROM t +``` + +``` text +┌─I──────────┬─F───────────────────────────────┬─D──────────────────────┐ +│ [1,3,6,11] │ [1.1,3.3000002,6.6000004,12.17] │ [1.10,3.30,6.60,12.17] │ +└────────────┴─────────────────────────────────┴────────────────────────┘ +``` diff --git a/docs/zh/sql-reference/aggregate-functions/reference/grouparraysample.md b/docs/zh/sql-reference/aggregate-functions/reference/grouparraysample.md new file mode 100644 index 00000000000..529b63a2316 --- /dev/null +++ 
b/docs/zh/sql-reference/aggregate-functions/reference/grouparraysample.md @@ -0,0 +1,82 @@ +--- +toc_priority: 114 +--- + +# groupArraySample {#grouparraysample} + +构建一个参数值的采样数组。 +结果数组的大小限制为 `max_size` 个元素。参数值被随机选择并添加到数组中。 + +**语法** + +``` sql +groupArraySample(max_size[, seed])(x) +``` + +**参数** + +- `max_size` — 结果数组的最大长度。[UInt64](../../data-types/int-uint.md)。 +- `seed` — 随机数发生器的种子。可选。[UInt64](../../data-types/int-uint.md)。默认值: `123456`。 +- `x` — 参数 (列名 或者 表达式)。 + +**返回值** + +- 随机选取参数 `x` (的值)组成的数组。 + +类型: [Array](../../../sql-reference/data-types/array.md). + +**示例** + +样表 `colors`: + +``` text +┌─id─┬─color──┐ +│ 1 │ red │ +│ 2 │ blue │ +│ 3 │ green │ +│ 4 │ white │ +│ 5 │ orange │ +└────┴────────┘ +``` + +使用列名做参数查询: + +``` sql +SELECT groupArraySample(3)(color) as newcolors FROM colors; +``` + +结果: + +```text +┌─newcolors──────────────────┐ +│ ['white','blue','green'] │ +└────────────────────────────┘ +``` + +使用列名和不同的(随机数)种子查询: + +``` sql +SELECT groupArraySample(3, 987654321)(color) as newcolors FROM colors; +``` + +结果: + +```text +┌─newcolors──────────────────┐ +│ ['red','orange','green'] │ +└────────────────────────────┘ +``` + +使用表达式做参数查询: + +``` sql +SELECT groupArraySample(3)(concat('light-', color)) as newcolors FROM colors; +``` + +结果: + +```text +┌─newcolors───────────────────────────────────┐ +│ ['light-blue','light-orange','light-green'] │ +└─────────────────────────────────────────────┘ +``` diff --git a/docs/zh/sql-reference/aggregate-functions/reference/groupbitand.md b/docs/zh/sql-reference/aggregate-functions/reference/groupbitand.md new file mode 100644 index 00000000000..1a8520b0f08 --- /dev/null +++ b/docs/zh/sql-reference/aggregate-functions/reference/groupbitand.md @@ -0,0 +1,48 @@ +--- +toc_priority: 125 +--- + +# groupBitAnd {#groupbitand} + +对于数字序列按位应用 `AND` 。 + +**语法** + +``` sql +groupBitAnd(expr) +``` + +**参数** + +`expr` – 结果为 `UInt*` 类型的表达式。 + +**返回值** + +`UInt*` 类型的值。 + +**示例** + +测试数据: + +``` text +binary decimal +00101100 = 44 +00011100 = 28 +00001101 = 13 +01010101 = 85 +``` + +查询: + +``` sql +SELECT groupBitAnd(num) FROM t +``` + +`num` 是包含测试数据的列。 + +结果: + +``` text +binary decimal +00000100 = 4 +``` diff --git a/docs/zh/sql-reference/aggregate-functions/reference/groupbitmap.md b/docs/zh/sql-reference/aggregate-functions/reference/groupbitmap.md new file mode 100644 index 00000000000..5e14c3a21ea --- /dev/null +++ b/docs/zh/sql-reference/aggregate-functions/reference/groupbitmap.md @@ -0,0 +1,46 @@ +--- +toc_priority: 128 +--- + +# groupBitmap {#groupbitmap} + +从无符号整数列进行位图或聚合计算,返回 `UInt64` 类型的基数,如果添加后缀 `State` ,则返回[位图对象](../../../sql-reference/functions/bitmap-functions.md)。 + +**语法** + +``` sql +groupBitmap(expr) +``` + +**参数** + +`expr` – 结果为 `UInt*` 类型的表达式。 + +**返回值** + +`UInt64` 类型的值。 + +**示例** + +测试数据: + +``` text +UserID +1 +1 +2 +3 +``` + +查询: + +``` sql +SELECT groupBitmap(UserID) as num FROM t +``` + +结果: + +``` text +num +3 +``` diff --git a/docs/zh/sql-reference/aggregate-functions/reference/groupbitmapand.md b/docs/zh/sql-reference/aggregate-functions/reference/groupbitmapand.md new file mode 100644 index 00000000000..bd5aa17c7ff --- /dev/null +++ b/docs/zh/sql-reference/aggregate-functions/reference/groupbitmapand.md @@ -0,0 +1,48 @@ +--- +toc_priority: 129 +--- + +# groupBitmapAnd {#groupbitmapand} + +计算位图列的 `AND` ,返回 `UInt64` 类型的基数,如果添加后缀 `State` ,则返回 [位图对象](../../../sql-reference/functions/bitmap-functions.md)。 + +**语法** + +``` sql +groupBitmapAnd(expr) +``` + +**参数** + +`expr` – 结果为 `AggregateFunction(groupBitmap, UInt*)` 类型的表达式。 + 
+**返回值** + +`UInt64` 类型的值。 + +**示例** + +``` sql +DROP TABLE IF EXISTS bitmap_column_expr_test2; +CREATE TABLE bitmap_column_expr_test2 +( + tag_id String, + z AggregateFunction(groupBitmap, UInt32) +) +ENGINE = MergeTree +ORDER BY tag_id; + +INSERT INTO bitmap_column_expr_test2 VALUES ('tag1', bitmapBuild(cast([1,2,3,4,5,6,7,8,9,10] as Array(UInt32)))); +INSERT INTO bitmap_column_expr_test2 VALUES ('tag2', bitmapBuild(cast([6,7,8,9,10,11,12,13,14,15] as Array(UInt32)))); +INSERT INTO bitmap_column_expr_test2 VALUES ('tag3', bitmapBuild(cast([2,4,6,8,10,12] as Array(UInt32)))); + +SELECT groupBitmapAnd(z) FROM bitmap_column_expr_test2 WHERE like(tag_id, 'tag%'); +┌─groupBitmapAnd(z)─┐ +│ 3 │ +└───────────────────┘ + +SELECT arraySort(bitmapToArray(groupBitmapAndState(z))) FROM bitmap_column_expr_test2 WHERE like(tag_id, 'tag%'); +┌─arraySort(bitmapToArray(groupBitmapAndState(z)))─┐ +│ [6,8,10] │ +└──────────────────────────────────────────────────┘ +``` diff --git a/docs/zh/sql-reference/aggregate-functions/reference/groupbitmapor.md b/docs/zh/sql-reference/aggregate-functions/reference/groupbitmapor.md new file mode 100644 index 00000000000..52048083d17 --- /dev/null +++ b/docs/zh/sql-reference/aggregate-functions/reference/groupbitmapor.md @@ -0,0 +1,48 @@ +--- +toc_priority: 130 +--- + +# groupBitmapOr {#groupbitmapor} + +计算位图列的 `OR` ,返回 `UInt64` 类型的基数,如果添加后缀 `State` ,则返回 [位图对象](../../../sql-reference/functions/bitmap-functions.md)。 + +**语法** + +``` sql +groupBitmapOr(expr) +``` + +**参数** + +`expr` – 结果为 `AggregateFunction(groupBitmap, UInt*)` 类型的表达式。 + +**返回值** + +`UInt64` 类型的值。 + +**示例** + +``` sql +DROP TABLE IF EXISTS bitmap_column_expr_test2; +CREATE TABLE bitmap_column_expr_test2 +( + tag_id String, + z AggregateFunction(groupBitmap, UInt32) +) +ENGINE = MergeTree +ORDER BY tag_id; + +INSERT INTO bitmap_column_expr_test2 VALUES ('tag1', bitmapBuild(cast([1,2,3,4,5,6,7,8,9,10] as Array(UInt32)))); +INSERT INTO bitmap_column_expr_test2 VALUES ('tag2', bitmapBuild(cast([6,7,8,9,10,11,12,13,14,15] as Array(UInt32)))); +INSERT INTO bitmap_column_expr_test2 VALUES ('tag3', bitmapBuild(cast([2,4,6,8,10,12] as Array(UInt32)))); + +SELECT groupBitmapOr(z) FROM bitmap_column_expr_test2 WHERE like(tag_id, 'tag%'); +┌─groupBitmapOr(z)─┐ +│ 15 │ +└──────────────────┘ + +SELECT arraySort(bitmapToArray(groupBitmapOrState(z))) FROM bitmap_column_expr_test2 WHERE like(tag_id, 'tag%'); +┌─arraySort(bitmapToArray(groupBitmapOrState(z)))─┐ +│ [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15] │ +└─────────────────────────────────────────────────┘ +``` diff --git a/docs/zh/sql-reference/aggregate-functions/reference/groupbitmapxor.md b/docs/zh/sql-reference/aggregate-functions/reference/groupbitmapxor.md new file mode 100644 index 00000000000..d862e974418 --- /dev/null +++ b/docs/zh/sql-reference/aggregate-functions/reference/groupbitmapxor.md @@ -0,0 +1,48 @@ +--- +toc_priority: 131 +--- + +# groupBitmapXor {#groupbitmapxor} + +计算位图列的 `XOR` ,返回 `UInt64` 类型的基数,如果添加后缀 `State` ,则返回 [位图对象](../../../sql-reference/functions/bitmap-functions.md)。 + +**语法** + +``` sql +groupBitmapXor(expr) +``` + +**参数** + +`expr` – 结果为 `AggregateFunction(groupBitmap, UInt*)` 类型的表达式。 + +**返回值** + +`UInt64` 类型的值。 + +**示例** + +``` sql +DROP TABLE IF EXISTS bitmap_column_expr_test2; +CREATE TABLE bitmap_column_expr_test2 +( + tag_id String, + z AggregateFunction(groupBitmap, UInt32) +) +ENGINE = MergeTree +ORDER BY tag_id; + +INSERT INTO bitmap_column_expr_test2 VALUES ('tag1', bitmapBuild(cast([1,2,3,4,5,6,7,8,9,10] as Array(UInt32)))); 
+INSERT INTO bitmap_column_expr_test2 VALUES ('tag2', bitmapBuild(cast([6,7,8,9,10,11,12,13,14,15] as Array(UInt32)))); +INSERT INTO bitmap_column_expr_test2 VALUES ('tag3', bitmapBuild(cast([2,4,6,8,10,12] as Array(UInt32)))); + +SELECT groupBitmapXor(z) FROM bitmap_column_expr_test2 WHERE like(tag_id, 'tag%'); +┌─groupBitmapXor(z)─┐ +│ 10 │ +└───────────────────┘ + +SELECT arraySort(bitmapToArray(groupBitmapXorState(z))) FROM bitmap_column_expr_test2 WHERE like(tag_id, 'tag%'); +┌─arraySort(bitmapToArray(groupBitmapXorState(z)))─┐ +│ [1,3,5,6,8,10,11,13,14,15] │ +└──────────────────────────────────────────────────┘ +``` diff --git a/docs/zh/sql-reference/aggregate-functions/reference/groupbitor.md b/docs/zh/sql-reference/aggregate-functions/reference/groupbitor.md new file mode 100644 index 00000000000..175cc8d7286 --- /dev/null +++ b/docs/zh/sql-reference/aggregate-functions/reference/groupbitor.md @@ -0,0 +1,48 @@ +--- +toc_priority: 126 +--- + +# groupBitOr {#groupbitor} + +对于数字序列按位应用 `OR` 。 + +**语法** + +``` sql +groupBitOr(expr) +``` + +**参数** + +`expr` – 结果为 `UInt*` 类型的表达式。 + +**返回值** + +`UInt*` 类型的值。 + +**示例** + +测试数据:: + +``` text +binary decimal +00101100 = 44 +00011100 = 28 +00001101 = 13 +01010101 = 85 +``` + +查询: + +``` sql +SELECT groupBitOr(num) FROM t +``` + +`num` 是包含测试数据的列。 + +结果: + +``` text +binary decimal +01111101 = 125 +``` diff --git a/docs/zh/sql-reference/aggregate-functions/reference/groupbitxor.md b/docs/zh/sql-reference/aggregate-functions/reference/groupbitxor.md new file mode 100644 index 00000000000..26409f00032 --- /dev/null +++ b/docs/zh/sql-reference/aggregate-functions/reference/groupbitxor.md @@ -0,0 +1,48 @@ +--- +toc_priority: 127 +--- + +# groupBitXor {#groupbitxor} + +对于数字序列按位应用 `XOR` 。 + +**语法** + +``` sql +groupBitXor(expr) +``` + +**参数** + +`expr` – 结果为 `UInt*` 类型的表达式。 + +**返回值** + +`UInt*` 类型的值。 + +**示例** + +测试数据: + +``` text +binary decimal +00101100 = 44 +00011100 = 28 +00001101 = 13 +01010101 = 85 +``` + +查询: + +``` sql +SELECT groupBitXor(num) FROM t +``` + +`num` 是包含测试数据的列。 + +结果: + +``` text +binary decimal +01101000 = 104 +``` diff --git a/docs/zh/sql-reference/aggregate-functions/reference/groupuniqarray.md b/docs/zh/sql-reference/aggregate-functions/reference/groupuniqarray.md new file mode 100644 index 00000000000..f371361bbf6 --- /dev/null +++ b/docs/zh/sql-reference/aggregate-functions/reference/groupuniqarray.md @@ -0,0 +1,18 @@ +--- +toc_priority: 111 +--- + +# groupUniqArray {#groupuniqarray} + +**语法** + +``` sql +groupUniqArray(x) +或 +groupUniqArray(max_size)(x) +``` + +从不同的参数值创建一个数组。 内存消耗和 [uniqExact](../../../sql-reference/aggregate-functions/reference/uniqexact.md) 函数是一样的。 + +第二个版本(带有 `max_size` 参数)将结果数组的大小限制为 `max_size` 个元素。 +例如, `groupUniqArray(1)(x)` 相当于 `[any(x)]`. 
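+
+**示例**(补充的示意用法;数组中元素的顺序不作保证):
+
+``` sql
+SELECT groupUniqArray(x), groupUniqArray(2)(x) FROM values('x Int8', 1, 1, 2, 2, 3);
+```
+
+第一列包含全部互不相同的值(例如 `[1,2,3]` 的某种排列),第二列最多包含 2 个互不相同的值。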
diff --git a/docs/zh/sql-reference/aggregate-functions/reference/index.md b/docs/zh/sql-reference/aggregate-functions/reference/index.md new file mode 100644 index 00000000000..5070c79775e --- /dev/null +++ b/docs/zh/sql-reference/aggregate-functions/reference/index.md @@ -0,0 +1,72 @@ +--- +toc_folder_title: Reference +toc_priority: 36 +toc_hidden: true +--- + +# 聚合函数列表 {#aggregate-functions-reference} + +标准聚合函数: + +- [count](../../../sql-reference/aggregate-functions/reference/count.md) +- [min](../../../sql-reference/aggregate-functions/reference/min.md) +- [max](../../../sql-reference/aggregate-functions/reference/max.md) +- [sum](../../../sql-reference/aggregate-functions/reference/sum.md) +- [avg](../../../sql-reference/aggregate-functions/reference/avg.md) +- [any](../../../sql-reference/aggregate-functions/reference/any.md) +- [stddevPop](../../../sql-reference/aggregate-functions/reference/stddevpop.md) +- [stddevSamp](../../../sql-reference/aggregate-functions/reference/stddevsamp.md) +- [varPop](../../../sql-reference/aggregate-functions/reference/varpop.md) +- [varSamp](../../../sql-reference/aggregate-functions/reference/varsamp.md) +- [covarPop](../../../sql-reference/aggregate-functions/reference/covarpop.md) +- [covarSamp](../../../sql-reference/aggregate-functions/reference/covarsamp.md) + +ClickHouse 特有的聚合函数: + +- [anyHeavy](../../../sql-reference/aggregate-functions/reference/anyheavy.md) +- [anyLast](../../../sql-reference/aggregate-functions/reference/anylast.md) +- [argMin](../../../sql-reference/aggregate-functions/reference/argmin.md) +- [argMax](../../../sql-reference/aggregate-functions/reference/argmax.md) +- [avgWeighted](../../../sql-reference/aggregate-functions/reference/avgweighted.md) +- [topK](../../../sql-reference/aggregate-functions/reference/topk.md) +- [topKWeighted](../../../sql-reference/aggregate-functions/reference/topkweighted.md) +- [groupArray](../../../sql-reference/aggregate-functions/reference/grouparray.md) +- [groupUniqArray](../../../sql-reference/aggregate-functions/reference/groupuniqarray.md) +- [groupArrayInsertAt](../../../sql-reference/aggregate-functions/reference/grouparrayinsertat.md) +- [groupArrayMovingAvg](../../../sql-reference/aggregate-functions/reference/grouparraymovingavg.md) +- [groupArrayMovingSum](../../../sql-reference/aggregate-functions/reference/grouparraymovingsum.md) +- [groupBitAnd](../../../sql-reference/aggregate-functions/reference/groupbitand.md) +- [groupBitOr](../../../sql-reference/aggregate-functions/reference/groupbitor.md) +- [groupBitXor](../../../sql-reference/aggregate-functions/reference/groupbitxor.md) +- [groupBitmap](../../../sql-reference/aggregate-functions/reference/groupbitmap.md) +- [groupBitmapAnd](../../../sql-reference/aggregate-functions/reference/groupbitmapand.md) +- [groupBitmapOr](../../../sql-reference/aggregate-functions/reference/groupbitmapor.md) +- [groupBitmapXor](../../../sql-reference/aggregate-functions/reference/groupbitmapxor.md) +- [sumWithOverflow](../../../sql-reference/aggregate-functions/reference/sumwithoverflow.md) +- [sumMap](../../../sql-reference/aggregate-functions/reference/summap.md) +- [minMap](../../../sql-reference/aggregate-functions/reference/minmap.md) +- [maxMap](../../../sql-reference/aggregate-functions/reference/maxmap.md) +- [skewSamp](../../../sql-reference/aggregate-functions/reference/skewsamp.md) +- [skewPop](../../../sql-reference/aggregate-functions/reference/skewpop.md) +- 
[kurtSamp](../../../sql-reference/aggregate-functions/reference/kurtsamp.md)
+- [kurtPop](../../../sql-reference/aggregate-functions/reference/kurtpop.md)
+- [uniq](../../../sql-reference/aggregate-functions/reference/uniq.md)
+- [uniqExact](../../../sql-reference/aggregate-functions/reference/uniqexact.md)
+- [uniqCombined](../../../sql-reference/aggregate-functions/reference/uniqcombined.md)
+- [uniqCombined64](../../../sql-reference/aggregate-functions/reference/uniqcombined64.md)
+- [uniqHLL12](../../../sql-reference/aggregate-functions/reference/uniqhll12.md)
+- [quantile](../../../sql-reference/aggregate-functions/reference/quantile.md)
+- [quantiles](../../../sql-reference/aggregate-functions/reference/quantiles.md)
+- [quantileExact](../../../sql-reference/aggregate-functions/reference/quantileexact.md)
+- [quantileExactLow](../../../sql-reference/aggregate-functions/reference/quantileexact.md#quantileexactlow)
+- [quantileExactHigh](../../../sql-reference/aggregate-functions/reference/quantileexact.md#quantileexacthigh)
+- [quantileExactWeighted](../../../sql-reference/aggregate-functions/reference/quantileexactweighted.md)
+- [quantileTiming](../../../sql-reference/aggregate-functions/reference/quantiletiming.md)
+- [quantileTimingWeighted](../../../sql-reference/aggregate-functions/reference/quantiletimingweighted.md)
+- [quantileDeterministic](../../../sql-reference/aggregate-functions/reference/quantiledeterministic.md)
+- [quantileTDigest](../../../sql-reference/aggregate-functions/reference/quantiletdigest.md)
+- [quantileTDigestWeighted](../../../sql-reference/aggregate-functions/reference/quantiletdigestweighted.md)
+- [simpleLinearRegression](../../../sql-reference/aggregate-functions/reference/simplelinearregression.md)
+- [stochasticLinearRegression](../../../sql-reference/aggregate-functions/reference/stochasticlinearregression.md)
+- [stochasticLogisticRegression](../../../sql-reference/aggregate-functions/reference/stochasticlogisticregression.md)
+- [categoricalInformationValue](../../../sql-reference/aggregate-functions/reference/categoricalinformationvalue.md)
diff --git a/docs/zh/sql-reference/aggregate-functions/reference/initializeAggregation.md b/docs/zh/sql-reference/aggregate-functions/reference/initializeAggregation.md
new file mode 100644
index 00000000000..feecd7afb1f
--- /dev/null
+++ b/docs/zh/sql-reference/aggregate-functions/reference/initializeAggregation.md
@@ -0,0 +1,37 @@
+---
+toc_priority: 150
+---
+
+## initializeAggregation {#initializeaggregation}
+
+初始化输入行的聚合。用于后缀是 `State` 的函数。
+用它来测试或处理 `AggregateFunction` 和 `AggregatingMergeTree` 类型的列。
+
+**语法**
+
+``` sql
+initializeAggregation (aggregate_function, column_1, column_2)
+```
+
+**参数**
+
+- `aggregate_function` — 要为其初始化状态的聚合函数名。[String](../../../sql-reference/data-types/string.md#string)。
+- `column_n` — 作为该函数参数传入的列。[String](../../../sql-reference/data-types/string.md#string)。
+
+**返回值**
+
+返回输入行的聚合结果。返回类型将与 `initializeAggregation` 用作第一个参数的函数的返回类型相同。
+例如,对于后缀为 `State` 的函数,返回类型将是 `AggregateFunction`。
+
+**示例**
+
+查询:
+
+``` sql
+SELECT uniqMerge(state) FROM (SELECT initializeAggregation('uniqState', number % 3) AS state FROM system.numbers LIMIT 10000);
+```
+
+结果:
+
+``` text
+┌─uniqMerge(state)─┐
+│                3 │
+└──────────────────┘
+```
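+
+另一个示意用法(假设配合 `finalizeAggregation` 函数把状态再转换回普通值;每一行各自构成一个只含该行的 `sum` 状态,因此第二列与 `number` 本身相同):
+
+``` sql
+SELECT number, finalizeAggregation(initializeAggregation('sumState', number)) AS s
+FROM numbers(3);
+```
+
+``` text
+┌─number─┬─s─┐
+│      0 │ 0 │
+│      1 │ 1 │
+│      2 │ 2 │
+└────────┴───┘
+```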
diff --git a/docs/zh/sql-reference/aggregate-functions/reference/kurtpop.md b/docs/zh/sql-reference/aggregate-functions/reference/kurtpop.md
new file mode 100644
index 00000000000..d5b76e0c1e9
--- /dev/null
+++ b/docs/zh/sql-reference/aggregate-functions/reference/kurtpop.md
@@ -0,0 +1,26 @@
+---
+toc_priority: 153
+---
+
+# kurtPop {#kurtpop}
+
+计算给定序列的[峰度](https://en.wikipedia.org/wiki/Kurtosis)。
+
+**语法**
+
+``` sql
+kurtPop(expr)
+```
+
+**参数**
+
+`expr` — 结果为数字的 [表达式](../../../sql-reference/syntax.md#syntax-expressions)。
+
+**返回值**
+
+给定分布的峰度。类型 — [Float64](../../../sql-reference/data-types/float.md)。
+
+**示例**
+
+``` sql
+SELECT kurtPop(value) FROM series_with_value_column;
+```
diff --git a/docs/zh/sql-reference/aggregate-functions/reference/kurtsamp.md b/docs/zh/sql-reference/aggregate-functions/reference/kurtsamp.md
new file mode 100644
index 00000000000..a38e14d0792
--- /dev/null
+++ b/docs/zh/sql-reference/aggregate-functions/reference/kurtsamp.md
@@ -0,0 +1,28 @@
+---
+toc_priority: 154
+---
+
+# kurtSamp {#kurtsamp}
+
+计算给定序列的[样本峰度](https://en.wikipedia.org/wiki/Kurtosis)。
+如果传入的值构成随机变量的样本,则它是该随机变量峰度的无偏估计。
+
+**语法**
+
+``` sql
+kurtSamp(expr)
+```
+
+**参数**
+
+`expr` — 结果为数字的 [表达式](../../../sql-reference/syntax.md#syntax-expressions)。
+
+**返回值**
+
+给定序列的峰度。类型 — [Float64](../../../sql-reference/data-types/float.md)。如果 `n <= 1` (`n` 是样本的大小),则该函数返回 `nan`。
+
+**示例**
+
+``` sql
+SELECT kurtSamp(value) FROM series_with_value_column;
+```
diff --git a/docs/zh/sql-reference/aggregate-functions/reference/mannwhitneyutest.md b/docs/zh/sql-reference/aggregate-functions/reference/mannwhitneyutest.md
new file mode 100644
index 00000000000..016a650b61b
--- /dev/null
+++ b/docs/zh/sql-reference/aggregate-functions/reference/mannwhitneyutest.md
@@ -0,0 +1,72 @@
+---
+toc_priority: 310
+toc_title: mannWhitneyUTest
+---
+
+# mannWhitneyUTest {#mannwhitneyutest}
+
+对两个总体的样本应用 Mann-Whitney 秩检验。
+
+**语法**
+
+``` sql
+mannWhitneyUTest[(alternative[, continuity_correction])](sample_data, sample_index)
+```
+
+两个样本的值都在 `sample_data` 列中。如果 `sample_index` 等于 0,则该行的值属于第一个总体的样本,反之属于第二个总体的样本。
+零假设是两个总体随机相等。也可以检验单边假设。该检验不假设数据具有正态分布。
+
+**参数**
+
+- `sample_data` — 样本数据。[Integer](../../../sql-reference/data-types/int-uint.md), [Float](../../../sql-reference/data-types/float.md) 或 [Decimal](../../../sql-reference/data-types/decimal.md)。
+- `sample_index` — 样本索引。[Integer](../../../sql-reference/data-types/int-uint.md). 
+ +**参数** + +- `alternative` — 供选假设。(可选,默认值是: `'two-sided'` 。) [String](../../../sql-reference/data-types/string.md)。 + - `'two-sided'`; + - `'greater'`; + - `'less'`。 +- `continuity_correction` — 如果不为0,那么将对p值进行正态近似的连续性修正。(可选,默认:1。) [UInt64](../../../sql-reference/data-types/int-uint.md)。 + +**返回值** + +[元组](../../../sql-reference/data-types/tuple.md),有两个元素: + +- 计算出U统计量。[Float64](../../../sql-reference/data-types/float.md)。 +- 计算出的p值。[Float64](../../../sql-reference/data-types/float.md)。 + + +**示例** + +输入表: + +``` text +┌─sample_data─┬─sample_index─┐ +│ 10 │ 0 │ +│ 11 │ 0 │ +│ 12 │ 0 │ +│ 1 │ 1 │ +│ 2 │ 1 │ +│ 3 │ 1 │ +└─────────────┴──────────────┘ +``` + +查询: + +``` sql +SELECT mannWhitneyUTest('greater')(sample_data, sample_index) FROM mww_ttest; +``` + +结果: + +``` text +┌─mannWhitneyUTest('greater')(sample_data, sample_index)─┐ +│ (9,0.04042779918503192) │ +└────────────────────────────────────────────────────────┘ +``` + +**参见** + +- [Mann–Whitney U test](https://en.wikipedia.org/wiki/Mann%E2%80%93Whitney_U_test) +- [Stochastic ordering](https://en.wikipedia.org/wiki/Stochastic_ordering) diff --git a/docs/zh/sql-reference/aggregate-functions/reference/max.md b/docs/zh/sql-reference/aggregate-functions/reference/max.md new file mode 100644 index 00000000000..8372d5c6f85 --- /dev/null +++ b/docs/zh/sql-reference/aggregate-functions/reference/max.md @@ -0,0 +1,7 @@ +--- +toc_priority: 3 +--- + +# max {#agg_function-max} + +计算最大值。 diff --git a/docs/zh/sql-reference/aggregate-functions/reference/maxmap.md b/docs/zh/sql-reference/aggregate-functions/reference/maxmap.md new file mode 100644 index 00000000000..4d91d1e75fd --- /dev/null +++ b/docs/zh/sql-reference/aggregate-functions/reference/maxmap.md @@ -0,0 +1,33 @@ +--- +toc_priority: 143 +--- + +# maxMap {#agg_functions-maxmap} + +**语法** + +```sql +maxMap(key, value) + 或 +maxMap(Tuple(key, value)) +``` + + +根据 `key` 数组中指定的键对 `value` 数组计算最大值。 + +传递 `key` 和 `value` 数组的元组与传递 `key` 和 `value` 的两个数组是同义的。 +要总计的每一行的 `key` 和 `value` (数组)元素的数量必须相同。 +返回两个数组组成的元组: 排好序的`key` 和对应 `key` 的 `value` 计算值(最大值)。 + +示例: + +``` sql +SELECT maxMap(a, b) +FROM values('a Array(Int32), b Array(Int64)', ([1, 2], [2, 2]), ([2, 3], [1, 1])) +``` + +``` text +┌─maxMap(a, b)──────┐ +│ ([1,2,3],[2,2,1]) │ +└───────────────────┘ +``` diff --git a/docs/zh/sql-reference/aggregate-functions/reference/median.md b/docs/zh/sql-reference/aggregate-functions/reference/median.md new file mode 100644 index 00000000000..83879f6cb34 --- /dev/null +++ b/docs/zh/sql-reference/aggregate-functions/reference/median.md @@ -0,0 +1,41 @@ +# median {#median} + +`median*` 函数是 `quantile*` 函数的别名。它们计算数字数据样本的中位数。 + +函数: + +- `median` — [quantile](#quantile)别名。 +- `medianDeterministic` — [quantileDeterministic](#quantiledeterministic)别名。 +- `medianExact` — [quantileExact](#quantileexact)别名。 +- `medianExactWeighted` — [quantileExactWeighted](#quantileexactweighted)别名。 +- `medianTiming` — [quantileTiming](#quantiletiming)别名。 +- `medianTimingWeighted` — [quantileTimingWeighted](#quantiletimingweighted)别名。 +- `medianTDigest` — [quantileTDigest](#quantiletdigest)别名。 +- `medianTDigestWeighted` — [quantileTDigestWeighted](#quantiletdigestweighted)别名。 + +**示例** + +输入表: + +``` text +┌─val─┐ +│ 1 │ +│ 1 │ +│ 2 │ +│ 3 │ +└─────┘ +``` + +查询: + +``` sql +SELECT medianDeterministic(val, 1) FROM t +``` + +结果: + +``` text +┌─medianDeterministic(val, 1)─┐ +│ 1.5 │ +└─────────────────────────────┘ +``` diff --git a/docs/zh/sql-reference/aggregate-functions/reference/min.md 
b/docs/zh/sql-reference/aggregate-functions/reference/min.md
new file mode 100644
index 00000000000..95a4099a1b7
--- /dev/null
+++ b/docs/zh/sql-reference/aggregate-functions/reference/min.md
@@ -0,0 +1,7 @@
+---
+toc_priority: 2
+---
+
+## min {#agg_function-min}
+
+计算最小值。
diff --git a/docs/zh/sql-reference/aggregate-functions/reference/minmap.md b/docs/zh/sql-reference/aggregate-functions/reference/minmap.md
new file mode 100644
index 00000000000..8e0022ac174
--- /dev/null
+++ b/docs/zh/sql-reference/aggregate-functions/reference/minmap.md
@@ -0,0 +1,32 @@
+---
+toc_priority: 142
+---
+
+# minMap {#agg_functions-minmap}
+
+**语法**
+
+```sql
+minMap(key, value)
+或
+minMap(Tuple(key, value))
+```
+
+根据 `key` 数组中指定的键对 `value` 数组计算最小值。
+
+传递 `key` 和 `value` 数组的元组与传递 `key` 和 `value` 的两个数组是同义的。
+要总计的每一行的 `key` 和 `value` (数组)元素的数量必须相同。
+返回两个数组组成的元组: 排好序的 `key` 和对应 `key` 的 `value` 计算值(最小值)。
+
+**示例**
+
+``` sql
+SELECT minMap(a, b)
+FROM values('a Array(Int32), b Array(Int64)', ([1, 2], [2, 2]), ([2, 3], [1, 1]))
+```
+
+``` text
+┌─minMap(a, b)──────┐
+│ ([1,2,3],[2,1,1]) │
+└───────────────────┘
+```
diff --git a/docs/zh/sql-reference/aggregate-functions/reference/quantile.md b/docs/zh/sql-reference/aggregate-functions/reference/quantile.md
new file mode 100644
index 00000000000..4519688dc7e
--- /dev/null
+++ b/docs/zh/sql-reference/aggregate-functions/reference/quantile.md
@@ -0,0 +1,65 @@
+---
+toc_priority: 200
+---
+
+# quantile {#quantile}
+
+计算数字序列的近似[分位数](https://en.wikipedia.org/wiki/Quantile)。
+此函数应用[水塘抽样](https://en.wikipedia.org/wiki/Reservoir_sampling),水塘大小最大为 8192,并使用随机数发生器采样。
+结果是不确定的。要获得精确的分位数,使用 [quantileExact](../../../sql-reference/aggregate-functions/reference/quantileexact.md#quantileexact) 函数。
+当在一个查询中使用多个不同层次的 `quantile*` 时,内部状态不会被组合(即查询的工作效率低于组合情况)。在这种情况下,使用 [quantiles](../../../sql-reference/aggregate-functions/reference/quantiles.md#quantiles) 函数。
+
+**语法**
+
+``` sql
+quantile(level)(expr)
+```
+
+别名: `median`。
+
+**参数**
+
+- `level` — 分位数层次。可选参数。从0到1的一个float类型的常量。我们推荐 `level` 值的范围为 `[0.01, 0.99]`。默认值:0.5。当 `level=0.5` 时,该函数计算 [中位数](https://en.wikipedia.org/wiki/Median)。
+- `expr` — 求值表达式,类型为数值类型[data types](../../../sql-reference/data-types/index.md#data_types), [Date](../../../sql-reference/data-types/date.md) 或 [DateTime](../../../sql-reference/data-types/datetime.md)。
+
+**返回值**
+
+- 指定层次的分位数。
+
+类型:
+
+- [Float64](../../../sql-reference/data-types/float.md) 用于数字数据类型输入。
+- [Date](../../../sql-reference/data-types/date.md) 如果输入值是 `Date` 类型。
+- [DateTime](../../../sql-reference/data-types/datetime.md) 如果输入值是 `DateTime` 类型。
+
+**示例**
+
+输入表:
+
+``` text
+┌─val─┐
+│   1 │
+│   1 │
+│   2 │
+│   3 │
+└─────┘
+```
+
+查询:
+
+``` sql
+SELECT quantile(val) FROM t
+```
+
+结果:
+
+``` text
+┌─quantile(val)─┐
+│           1.5 │
+└───────────────┘
+```
+
+**参见**
+
+- [中位数](../../../sql-reference/aggregate-functions/reference/median.md#median)
+- [分位数](../../../sql-reference/aggregate-functions/reference/quantiles.md#quantiles)
diff --git a/docs/zh/sql-reference/aggregate-functions/reference/quantiledeterministic.md b/docs/zh/sql-reference/aggregate-functions/reference/quantiledeterministic.md
new file mode 100644
index 00000000000..c6c6b0a63de
--- /dev/null
+++ b/docs/zh/sql-reference/aggregate-functions/reference/quantiledeterministic.md
@@ -0,0 +1,66 @@
+---
+toc_priority: 206
+---
+
+# quantileDeterministic {#quantiledeterministic}
+
+计算数字序列的近似[分位数](https://en.wikipedia.org/wiki/Quantile)。
+
+此函数应用
[水塘抽样](https://en.wikipedia.org/wiki/Reservoir_sampling),使用储存器最大到8192和随机数发生器进行采样。 结果是非确定性的。 要获得精确的分位数,请使用 [quantileExact](../../../sql-reference/aggregate-functions/reference/quantileexact.md#quantileexact) 功能。 + +当在一个查询中使用多个不同层次的 `quantile*` 时,内部状态不会被组合(即查询的工作效率低于组合情况)。在这种情况下,使用[quantiles](../../../sql-reference/aggregate-functions/reference/quantiles.md#quantiles)功能。 + +**语法** + +``` sql +quantileDeterministic(level)(expr, determinator) +``` + +别名: `medianDeterministic`。 + +**参数** + +- `level` — 分位数层次。可选参数。从0到1的一个float类型的常量。 我们推荐 `level` 值的范围为 `[0.01, 0.99]`。默认值:0.5。 当 `level=0.5`时,该函数计算 [中位数](https://en.wikipedia.org/wiki/Median)。 +- `expr` — 求值表达式,类型为数值类型[data types](../../../sql-reference/data-types/index.md#data_types), [Date](../../../sql-reference/data-types/date.md) 或 [DateTime](../../../sql-reference/data-types/datetime.md)。 +- `determinator` — 一个数字,其hash被用来代替在水塘抽样中随机生成的数字,这样可以保证取样的确定性。你可以使用用户ID或者事件ID等任何正数,但是如果相同的 `determinator` 出现多次,那结果很可能不正确。 +**返回值** + +- 指定层次的近似分位数。 + +类型: + +- [Float64](../../../sql-reference/data-types/float.md) 用于数字数据类型输入。 +- [Date](../../../sql-reference/data-types/date.md) 如果输入值是 `Date` 类型。 +- [DateTime](../../../sql-reference/data-types/datetime.md) 如果输入值是 `DateTime` 类型。 + +**示例** + +输入表: + +``` text +┌─val─┐ +│ 1 │ +│ 1 │ +│ 2 │ +│ 3 │ +└─────┘ +``` + +查询: + +``` sql +SELECT quantileDeterministic(val, 1) FROM t +``` + +结果: + +``` text +┌─quantileDeterministic(val, 1)─┐ +│ 1.5 │ +└───────────────────────────────┘ +``` + +**参见** + +- [中位数](../../../sql-reference/aggregate-functions/reference/median.md#median) +- [分位数](../../../sql-reference/aggregate-functions/reference/quantiles.md#quantiles) diff --git a/docs/zh/sql-reference/aggregate-functions/reference/quantileexact.md b/docs/zh/sql-reference/aggregate-functions/reference/quantileexact.md new file mode 100644 index 00000000000..a8d39c35700 --- /dev/null +++ b/docs/zh/sql-reference/aggregate-functions/reference/quantileexact.md @@ -0,0 +1,170 @@ +--- +toc_priority: 202 +--- + +# quantileExact {#quantileexact} + + +准确计算数字序列的[分位数](https://en.wikipedia.org/wiki/Quantile)。 + +为了准确计算,所有输入的数据被合并为一个数组,并且部分的排序。因此该函数需要 `O(n)` 的内存,n为输入数据的个数。但是对于少量数据来说,该函数还是非常有效的。 + +当在一个查询中使用多个不同层次的 `quantile*` 时,内部状态不会被组合(即查询的工作效率低于组合情况)。在这种情况下,使用 [quantiles](../../../sql-reference/aggregate-functions/reference/quantiles.md#quantiles) 函数。 + +**语法** + +``` sql +quantileExact(level)(expr) +``` + +别名: `medianExact`。 + +**参数** + +- `level` — 分位数层次。可选参数。从0到1的一个float类型的常量。我们推荐 `level` 值的范围为 `[0.01, 0.99]`。默认值:0.5。当 `level=0.5` 时,该函数计算[中位数](https://en.wikipedia.org/wiki/Median)。 +- `expr` — 求值表达式,类型为数值类型[data types](../../../sql-reference/data-types/index.md#data_types), [Date](../../../sql-reference/data-types/date.md) 或 [DateTime](../../../sql-reference/data-types/datetime.md)。 + +**返回值** + +- 指定层次的分位数。 + + +类型: + +- [Float64](../../../sql-reference/data-types/float.md) 对于数字数据类型输入。 +- [日期](../../../sql-reference/data-types/date.md) 如果输入值具有 `Date` 类型。 +- [日期时间](../../../sql-reference/data-types/datetime.md) 如果输入值具有 `DateTime` 类型。 + +**示例** + +查询: + +``` sql +SELECT quantileExact(number) FROM numbers(10) +``` + +结果: + +``` text +┌─quantileExact(number)─┐ +│ 5 │ +└───────────────────────┘ +``` + +# quantileExactLow {#quantileexactlow} + +和 `quantileExact` 相似, 准确计算数字序列的[分位数](https://en.wikipedia.org/wiki/Quantile)。 + +为了准确计算,所有输入的数据被合并为一个数组,并且全排序。这排序[算法](https://en.cppreference.com/w/cpp/algorithm/sort)的复杂度是 `O(N·log(N))`, 其中 `N = std::distance(first, last)` 比较。 + +返回值取决于分位数级别和所选取的元素数量,即如果级别是 0.5, 函数返回偶数元素的低位中位数,奇数元素的中位数。中位数计算类似于 
python 中使用的[median_low](https://docs.python.org/3/library/statistics.html#statistics.median_low)的实现。 + +对于所有其他级别, 返回 `level * size_of_array` 值所对应的索引的元素值。 + +例如: + +``` sql +SELECT quantileExactLow(0.1)(number) FROM numbers(10) + +┌─quantileExactLow(0.1)(number)─┐ +│ 1 │ +└───────────────────────────────┘ +``` + +当在一个查询中使用多个不同层次的 `quantile*` 时,内部状态不会被组合(即查询的工作效率低于组合情况)。在这种情况下,使用 [quantiles](../../../sql-reference/aggregate-functions/reference/quantiles.md#quantiles) 函数。 + +**语法** + +``` sql +quantileExactLow(level)(expr) +``` + +别名: `medianExactLow`。 + +**参数** + +- `level` — 分位数层次。可选参数。从0到1的一个float类型的常量。我们推荐 `level` 值的范围为 `[0.01, 0.99]`。默认值:0.5。当 `level=0.5` 时,该函数计算 [中位数](https://en.wikipedia.org/wiki/Median)。 +- `expr` — — 求值表达式,类型为数值类型[data types](../../../sql-reference/data-types/index.md#data_types), [Date](../../../sql-reference/data-types/date.md) 或 [DateTime](../../../sql-reference/data-types/datetime.md)。 + +**返回值** + +- 指定层次的分位数。 + +类型: + +- [Float64](../../../sql-reference/data-types/float.md) 用于数字数据类型输入。 +- [Date](../../../sql-reference/data-types/date.md) 如果输入值是 `Date` 类型。 +- [DateTime](../../../sql-reference/data-types/datetime.md) 如果输入值是 `DateTime` 类型。 + +**示例** + +查询: + +``` sql +SELECT quantileExactLow(number) FROM numbers(10) +``` + +结果: + +``` text +┌─quantileExactLow(number)─┐ +│ 4 │ +└──────────────────────────┘ +``` + +# quantileExactHigh {#quantileexacthigh} + +和 `quantileExact` 相似, 准确计算数字序列的[分位数](https://en.wikipedia.org/wiki/Quantile)。 + +为了准确计算,所有输入的数据被合并为一个数组,并且全排序。这排序[算法](https://en.cppreference.com/w/cpp/algorithm/sort)的复杂度是 `O(N·log(N))`, 其中 `N = std::distance(first, last)` 比较。 + +返回值取决于分位数级别和所选取的元素数量,即如果级别是 0.5, 函数返回偶数元素的低位中位数,奇数元素的中位数。中位数计算类似于 python 中使用的[median_high](https://docs.python.org/3/library/statistics.html#statistics.median_high)的实现。 + +对于所有其他级别, 返回 `level * size_of_array` 值所对应的索引的元素值。 + +这个实现与当前的 `quantileExact` 实现完全相似。 + +当在一个查询中使用多个不同层次的 `quantile*` 时,内部状态不会被组合(即查询的工作效率低于组合情况)。在这种情况下,使用 [quantiles](../../../sql-reference/aggregate-functions/reference/quantiles.md#quantiles) 函数。 + +**语法** + +``` sql +quantileExactHigh(level)(expr) +``` + +别名: `medianExactHigh`。 + +**参数** + +- `level` — 分位数层次。可选参数。从0到1的一个float类型的常量。我们推荐 `level` 值的范围为 `[0.01, 0.99]`。默认值:0.5。当 `level=0.5` 时,该函数计算 [中位数](https://en.wikipedia.org/wiki/Median)。 +- `expr` — — 求值表达式,类型为数值类型[data types](../../../sql-reference/data-types/index.md#data_types), [Date](../../../sql-reference/data-types/date.md) 或 [DateTime](../../../sql-reference/data-types/datetime.md)。 + +**返回值** + +- 指定层次的分位数。 + +类型: + +- [Float64](../../../sql-reference/data-types/float.md) 用于数字数据类型输入。 +- [Date](../../../sql-reference/data-types/date.md) 如果输入值是 `Date` 类型。 +- [DateTime](../../../sql-reference/data-types/datetime.md) 如果输入值是 `DateTime` 类型。 + +**示例** + +查询: + +``` sql +SELECT quantileExactHigh(number) FROM numbers(10) +``` + +结果: + +``` text +┌─quantileExactHigh(number)─┐ +│ 5 │ +└───────────────────────────┘ +``` +**参见** + +- [中位数](../../../sql-reference/aggregate-functions/reference/median.md#median) +- [分位数](../../../sql-reference/aggregate-functions/reference/quantiles.md#quantiles) diff --git a/docs/zh/sql-reference/aggregate-functions/reference/quantileexactweighted.md b/docs/zh/sql-reference/aggregate-functions/reference/quantileexactweighted.md new file mode 100644 index 00000000000..5211ca210f2 --- /dev/null +++ b/docs/zh/sql-reference/aggregate-functions/reference/quantileexactweighted.md @@ -0,0 +1,66 @@ +--- +toc_priority: 203 +--- + +# quantileExactWeighted {#quantileexactweighted} + 
+考虑到每个元素的权重,然后准确计算数值序列的[分位数](https://en.wikipedia.org/wiki/Quantile)。 + +为了准确计算,所有输入的数据被合并为一个数组,并且部分的排序。每个输入值需要根据 `weight` 计算求和。该算法使用哈希表。正因为如此,在数据重复较多的时候使用的内存是少于[quantileExact](../../../sql-reference/aggregate-functions/reference/quantileexact.md#quantileexact)的。 您可以使用此函数代替 `quantileExact` 并指定`weight`为 1 。 + +当在一个查询中使用多个不同层次的 `quantile*` 时,内部状态不会被组合(即查询的工作效率低于组合情况)。在这种情况下,使用 [quantiles](../../../sql-reference/aggregate-functions/reference/quantiles.md#quantiles) 函数。 + +**语法** + +``` sql +quantileExactWeighted(level)(expr, weight) +``` + +别名: `medianExactWeighted`。 + +**参数** +- `level` — 分位数层次。可选参数。从0到1的一个float类型的常量。我们推荐 `level` 值的范围为 `[0.01, 0.99]`. 默认值:0.5。当 `level=0.5` 时,该函数计算 [中位数](https://en.wikipedia.org/wiki/Median)。 +- `expr` — 求值表达式,类型为数值类型[data types](../../../sql-reference/data-types/index.md#data_types), [Date](../../../sql-reference/data-types/date.md) 或 [DateTime](../../../sql-reference/data-types/datetime.md)。 +- `weight` — 权重序列。 权重是一个数据出现的数值。 + +**返回值** + +- 指定层次的分位数。 + +类型: + +- [Float64](../../../sql-reference/data-types/float.md) 对于数字数据类型输入。 +- [日期](../../../sql-reference/data-types/date.md) 如果输入值具有 `Date` 类型。 +- [日期时间](../../../sql-reference/data-types/datetime.md) 如果输入值具有 `DateTime` 类型。 + +**示例** + +输入表: + +``` text +┌─n─┬─val─┐ +│ 0 │ 3 │ +│ 1 │ 2 │ +│ 2 │ 1 │ +│ 5 │ 4 │ +└───┴─────┘ +``` + +查询: + +``` sql +SELECT quantileExactWeighted(n, val) FROM t +``` + +结果: + +``` text +┌─quantileExactWeighted(n, val)─┐ +│ 1 │ +└───────────────────────────────┘ +``` + +**参见** + +- [中位数](../../../sql-reference/aggregate-functions/reference/median.md#median) +- [分位数](../../../sql-reference/aggregate-functions/reference/quantiles.md#quantiles) diff --git a/docs/zh/sql-reference/aggregate-functions/reference/quantiles.md b/docs/zh/sql-reference/aggregate-functions/reference/quantiles.md new file mode 100644 index 00000000000..044c4d6d24e --- /dev/null +++ b/docs/zh/sql-reference/aggregate-functions/reference/quantiles.md @@ -0,0 +1,12 @@ +--- +toc_priority: 201 +--- + +# quantiles {#quantiles} + +**语法** +``` sql +quantiles(level1, level2, …)(x) +``` + +所有分位数函数(quantile)也有相应的分位数(quantiles)函数: `quantiles`, `quantilesDeterministic`, `quantilesTiming`, `quantilesTimingWeighted`, `quantilesExact`, `quantilesExactWeighted`, `quantilesTDigest`。 这些函数一次计算所列的级别的所有分位数, 并返回结果值的数组。 diff --git a/docs/zh/sql-reference/aggregate-functions/reference/quantiletdigest.md b/docs/zh/sql-reference/aggregate-functions/reference/quantiletdigest.md new file mode 100644 index 00000000000..fb186da299e --- /dev/null +++ b/docs/zh/sql-reference/aggregate-functions/reference/quantiletdigest.md @@ -0,0 +1,57 @@ +--- +toc_priority: 207 +--- + +# quantileTDigest {#quantiletdigest} + +使用[t-digest](https://github.com/tdunning/t-digest/blob/master/docs/t-digest-paper/histo.pdf) 算法计算数字序列近似[分位数](https://en.wikipedia.org/wiki/Quantile)。 + +最大误差为1%。 内存消耗为 `log(n)`,这里 `n` 是值的个数。 结果取决于运行查询的顺序,并且是不确定的。 + +该函数的性能低于 [quantile](../../../sql-reference/aggregate-functions/reference/quantile.md#quantile) 或 [quantileTiming](../../../sql-reference/aggregate-functions/reference/quantiletiming.md#quantiletiming) 的性能。 从状态大小和精度的比值来看,这个函数比 `quantile` 更优秀。 + +当在一个查询中使用多个不同层次的 `quantile*` 时,内部状态不会被组合(即查询的工作效率低于组合情况)。在这种情况下,使用 [quantiles](../../../sql-reference/aggregate-functions/reference/quantiles.md#quantiles) 函数。 + +**语法** + +``` sql +quantileTDigest(level)(expr) +``` + +别名: `medianTDigest`。 + +**参数** + +- `level` — 分位数层次。可选参数。从0到1的一个float类型的常量。我们推荐 `level` 值的范围为 `[0.01, 0.99]` 。默认值:0.5。当 `level=0.5` 时,该函数计算 
[中位数](https://en.wikipedia.org/wiki/Median)。 +- `expr` — 求值表达式,类型为数值类型[data types](../../../sql-reference/data-types/index.md#data_types), [Date](../../../sql-reference/data-types/date.md) 或 [DateTime](../../../sql-reference/data-types/datetime.md)。 + +**返回值** + +- 指定层次的分位数。 + +类型: + +- [Float64](../../../sql-reference/data-types/float.md) 用于数字数据类型输入。 +- [Date](../../../sql-reference/data-types/date.md) 如果输入值是 `Date` 类型。 +- [DateTime](../../../sql-reference/data-types/datetime.md) 如果输入值是 `DateTime` 类型。 + +**示例** + +查询: + +``` sql +SELECT quantileTDigest(number) FROM numbers(10) +``` + +结果: + +``` text +┌─quantileTDigest(number)─┐ +│ 4.5 │ +└─────────────────────────┘ +``` + +**参见** + +- [中位数](../../../sql-reference/aggregate-functions/reference/median.md#median) +- [分位数](../../../sql-reference/aggregate-functions/reference/quantiles.md#quantiles) diff --git a/docs/zh/sql-reference/aggregate-functions/reference/quantiletdigestweighted.md b/docs/zh/sql-reference/aggregate-functions/reference/quantiletdigestweighted.md new file mode 100644 index 00000000000..cf78c4c03bc --- /dev/null +++ b/docs/zh/sql-reference/aggregate-functions/reference/quantiletdigestweighted.md @@ -0,0 +1,58 @@ +--- +toc_priority: 208 +--- + +# quantileTDigestWeighted {#quantiletdigestweighted} + +使用[t-digest](https://github.com/tdunning/t-digest/blob/master/docs/t-digest-paper/histo.pdf) 算法计算数字序列近似[分位数](https://en.wikipedia.org/wiki/Quantile)。该函数考虑了每个序列成员的权重。最大误差为1%。 内存消耗为 `log(n)`,这里 `n` 是值的个数。 + +该函数的性能低于 [quantile](../../../sql-reference/aggregate-functions/reference/quantile.md#quantile) 或 [quantileTiming](../../../sql-reference/aggregate-functions/reference/quantiletiming.md#quantiletiming) 的性能。 从状态大小和精度的比值来看,这个函数比 `quantile` 更优秀。 + +结果取决于运行查询的顺序,并且是不确定的。 + +当在一个查询中使用多个不同层次的 `quantile*` 时,内部状态不会被组合(即查询的工作效率低于组合情况)。在这种情况下,使用 [quantiles](../../../sql-reference/aggregate-functions/reference/quantiles.md#quantiles) 函数。 + +**语法** + +``` sql +quantileTDigestWeighted(level)(expr, weight) +``` + +别名: `medianTDigestWeighted`。 + +**参数** + +- `level` — 分位数层次。可选参数。从0到1的一个float类型的常量。我们推荐 `level` 值的范围为 `[0.01, 0.99]` 。默认值:0.5。 当 `level=0.5` 时,该函数计算 [中位数](https://en.wikipedia.org/wiki/Median)。 +- `expr` — 求值表达式,类型为数值类型[data types](../../../sql-reference/data-types/index.md#data_types), [Date](../../../sql-reference/data-types/date.md) 或 [DateTime](../../../sql-reference/data-types/datetime.md)。 +- `weight` — 权重序列。 权重是一个数据出现的数值。 + +**返回值** + +- 指定层次的分位数。 + +类型: + +- [Float64](../../../sql-reference/data-types/float.md) 用于数字数据类型输入。 +- [Date](../../../sql-reference/data-types/date.md) 如果输入值是 `Date` 类型。 +- [DateTime](../../../sql-reference/data-types/datetime.md) 如果输入值是 `DateTime` 类型。 + +**示例** + +查询: + +``` sql +SELECT quantileTDigestWeighted(number, 1) FROM numbers(10) +``` + +结果: + +``` text +┌─quantileTDigestWeighted(number, 1)─┐ +│ 4.5 │ +└────────────────────────────────────┘ +``` + +**参见** + +- [中位数](../../../sql-reference/aggregate-functions/reference/median.md#median) +- [分位数](../../../sql-reference/aggregate-functions/reference/quantiles.md#quantiles) diff --git a/docs/zh/sql-reference/aggregate-functions/reference/quantiletiming.md b/docs/zh/sql-reference/aggregate-functions/reference/quantiletiming.md new file mode 100644 index 00000000000..a193b60338a --- /dev/null +++ b/docs/zh/sql-reference/aggregate-functions/reference/quantiletiming.md @@ -0,0 +1,86 @@ +--- +toc_priority: 204 +--- + +# quantileTiming {#quantiletiming} + +使用确定的精度计算数字数据序列的[分位数](https://en.wikipedia.org/wiki/Quantile)。 + 
+结果是确定性的(它不依赖于查询处理顺序)。该函数针对描述加载网页时间或后端响应时间等分布的序列进行了优化。 + +当在一个查询中使用多个不同层次的 `quantile*` 时,内部状态不会被组合(即查询的工作效率低于组合情况)。在这种情况下,使用[quantiles](../../../sql-reference/aggregate-functions/reference/quantiles.md#quantiles)函数。 + +**语法** + +``` sql +quantileTiming(level)(expr) +``` + +别名: `medianTiming`。 + +**参数** + +- `level` — 分位数层次。可选参数。从0到1的一个float类型的常量。我们推荐 `level` 值的范围为 `[0.01, 0.99]` 。默认值:0.5。当 `level=0.5` 时,该函数计算 [中位数](https://en.wikipedia.org/wiki/Median)。 +- `expr` — 求值[表达式](../../../sql-reference/syntax.md#syntax-expressions) 返回 [Float\*](../../../sql-reference/data-types/float.md) 类型数值。 + + - 如果输入负值,那结果是不可预期的。 + - 如果输入值大于30000(页面加载时间大于30s),那我们假设为30000。 + +**精度** + +计算是准确的,如果: + + +- 值的总数不超过5670。 +- 总数值超过5670,但页面加载时间小于1024ms。 + +否则,计算结果将四舍五入到16毫秒的最接近倍数。 + +!!! note "注" + 对于计算页面加载时间分位数, 此函数比[quantile](../../../sql-reference/aggregate-functions/reference/quantile.md#quantile)更有效和准确。 + +**返回值** + +- 指定层次的分位数。 + +类型: `Float32`。 + +!!! note "注" +如果没有值传递给函数(当使用 `quantileTimingIf`), [NaN](../../../sql-reference/data-types/float.md#data_type-float-nan-inf)被返回。 这样做的目的是将这些案例与导致零的案例区分开来。 参见 [ORDER BY clause](../../../sql-reference/statements/select/order-by.md#select-order-by) 对于 `NaN` 值排序注意事项。 + +**示例** + +输入表: + +``` text +┌─response_time─┐ +│ 72 │ +│ 112 │ +│ 126 │ +│ 145 │ +│ 104 │ +│ 242 │ +│ 313 │ +│ 168 │ +│ 108 │ +└───────────────┘ +``` + +查询: + +``` sql +SELECT quantileTiming(response_time) FROM t +``` + +结果: + +``` text +┌─quantileTiming(response_time)─┐ +│ 126 │ +└───────────────────────────────┘ +``` + +**参见** + +- [中位数](../../../sql-reference/aggregate-functions/reference/median.md#median) +- [分位数](../../../sql-reference/aggregate-functions/reference/quantiles.md#quantiles) diff --git a/docs/zh/sql-reference/aggregate-functions/reference/quantiletimingweighted.md b/docs/zh/sql-reference/aggregate-functions/reference/quantiletimingweighted.md new file mode 100644 index 00000000000..7b130dbddbd --- /dev/null +++ b/docs/zh/sql-reference/aggregate-functions/reference/quantiletimingweighted.md @@ -0,0 +1,118 @@ +--- +toc_priority: 205 +--- + +# quantileTimingWeighted {#quantiletimingweighted} + +根据每个序列成员的权重,使用确定的精度计算数字序列的[分位数](https://en.wikipedia.org/wiki/Quantile)。 + +结果是确定性的(它不依赖于查询处理顺序)。该函数针对描述加载网页时间或后端响应时间等分布的序列进行了优化。 + +当在一个查询中使用多个不同层次的 `quantile*` 时,内部状态不会被组合(即查询的工作效率低于组合情况)。在这种情况下,使用[quantiles](../../../sql-reference/aggregate-functions/reference/quantiles.md#quantiles)功能。 + +**语法** + +``` sql +quantileTimingWeighted(level)(expr, weight) +``` + +别名: `medianTimingWeighted`。 + +**参数** + +- `level` — 分位数层次。可选参数。从0到1的一个float类型的常量。我们推荐 `level` 值的范围为 `[0.01, 0.99]` 。默认值:0.5。当 `level=0.5` 时,该函数计算 [中位数](https://en.wikipedia.org/wiki/Median)。 +- `expr` — 求值[表达式](../../../sql-reference/syntax.md#syntax-expressions) 返回 [Float\*](../../../sql-reference/data-types/float.md) 类型数值。 + + - 如果输入负值,那结果是不可预期的。 + - 如果输入值大于30000(页面加载时间大于30s),那我们假设为30000。 + +- `weight` — 权重序列。 权重是一个数据出现的数值。 + +**精度** + +计算是准确的,如果: + + +- 值的总数不超过5670。 +- 总数值超过5670,但页面加载时间小于1024ms。 + +否则,计算结果将四舍五入到16毫秒的最接近倍数。 + +!!! note "注" + 对于计算页面加载时间分位数, 此函数比[quantile](../../../sql-reference/aggregate-functions/reference/quantile.md#quantile)更有效和准确。 + +**返回值** + +- 指定层次的分位数。 + +类型: `Float32`。 + +!!! 
note "注" +如果没有值传递给函数(当使用 `quantileTimingIf`), [NaN](../../../sql-reference/data-types/float.md#data_type-float-nan-inf)被返回。 这样做的目的是将这些案例与导致零的案例区分开来。 参见 [ORDER BY clause](../../../sql-reference/statements/select/order-by.md#select-order-by) 对于 `NaN` 值排序注意事项。 + +**示例** + +输入表: + +``` text +┌─response_time─┬─weight─┐ +│ 68 │ 1 │ +│ 104 │ 2 │ +│ 112 │ 3 │ +│ 126 │ 2 │ +│ 138 │ 1 │ +│ 162 │ 1 │ +└───────────────┴────────┘ +``` + +查询: + +``` sql +SELECT quantileTimingWeighted(response_time, weight) FROM t +``` + +结果: + +``` text +┌─quantileTimingWeighted(response_time, weight)─┐ +│ 112 │ +└───────────────────────────────────────────────┘ +``` + +# quantilesTimingWeighted {#quantilestimingweighted} + +类似于 `quantileTimingWeighted` , 但接受多个分位数层次参数,并返回一个由这些分位数值组成的数组。 + +**示例** + +输入表: + +``` text +┌─response_time─┬─weight─┐ +│ 68 │ 1 │ +│ 104 │ 2 │ +│ 112 │ 3 │ +│ 126 │ 2 │ +│ 138 │ 1 │ +│ 162 │ 1 │ +└───────────────┴────────┘ +``` + +查询: + +``` sql +SELECT quantilesTimingWeighted(0,5, 0.99)(response_time, weight) FROM t +``` + +结果: + +``` text +┌─quantilesTimingWeighted(0.5, 0.99)(response_time, weight)─┐ +│ [112,162] │ +└───────────────────────────────────────────────────────────┘ +``` + +**参见** + +- [中位数](../../../sql-reference/aggregate-functions/reference/median.md#median) +- [分位数](../../../sql-reference/aggregate-functions/reference/quantiles.md#quantiles) diff --git a/docs/zh/sql-reference/aggregate-functions/reference/rankCorr.md b/docs/zh/sql-reference/aggregate-functions/reference/rankCorr.md new file mode 100644 index 00000000000..c29a43f6ca9 --- /dev/null +++ b/docs/zh/sql-reference/aggregate-functions/reference/rankCorr.md @@ -0,0 +1,53 @@ +## rankCorr {#agg_function-rankcorr} + +计算等级相关系数。 + +**语法** + +``` sql +rankCorr(x, y) +``` + +**参数** + +- `x` — 任意值。[Float32](../../../sql-reference/data-types/float.md#float32-float64) 或 [Float64](../../../sql-reference/data-types/float.md#float32-float64)。 +- `y` — 任意值。[Float32](../../../sql-reference/data-types/float.md#float32-float64) 或 [Float64](../../../sql-reference/data-types/float.md#float32-float64)。 + +**返回值** + +- Returns a rank correlation coefficient of the ranks of x and y. The value of the correlation coefficient ranges from -1 to +1. If less than two arguments are passed, the function will return an exception. The value close to +1 denotes a high linear relationship, and with an increase of one random variable, the second random variable also increases. The value close to -1 denotes a high linear relationship, and with an increase of one random variable, the second random variable decreases. The value close or equal to 0 denotes no relationship between the two random variables. 
+
+类型: [Float64](../../../sql-reference/data-types/float.md#float32-float64)。
+
+**示例**
+
+查询:
+
+``` sql
+SELECT rankCorr(number, number) FROM numbers(100);
+```
+
+结果:
+
+``` text
+┌─rankCorr(number, number)─┐
+│                        1 │
+└──────────────────────────┘
+```
+
+查询:
+
+``` sql
+SELECT roundBankers(rankCorr(exp(number), sin(number)), 3) FROM numbers(100);
+```
+
+结果:
+
+``` text
+┌─roundBankers(rankCorr(exp(number), sin(number)), 3)─┐
+│                                              -0.037 │
+└─────────────────────────────────────────────────────┘
+```
+**参见**
+
+- [斯皮尔曼等级相关系数](https://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient)
\ No newline at end of file
diff --git a/docs/zh/sql-reference/aggregate-functions/reference/simplelinearregression.md b/docs/zh/sql-reference/aggregate-functions/reference/simplelinearregression.md
new file mode 100644
index 00000000000..56cb1539fc9
--- /dev/null
+++ b/docs/zh/sql-reference/aggregate-functions/reference/simplelinearregression.md
@@ -0,0 +1,44 @@
+---
+toc_priority: 220
+---
+
+# simpleLinearRegression {#simplelinearregression}
+
+执行简单(一维)线性回归。
+
+**语法**
+
+``` sql
+simpleLinearRegression(x, y)
+```
+
+**参数**
+
+- `x` — x轴。
+- `y` — y轴。
+
+**返回值**
+
+符合 `y = a*x + b` 的常量 `(a, b)`。
+
+**示例**
+
+``` sql
+SELECT arrayReduce('simpleLinearRegression', [0, 1, 2, 3], [0, 1, 2, 3])
+```
+
+``` text
+┌─arrayReduce('simpleLinearRegression', [0, 1, 2, 3], [0, 1, 2, 3])─┐
+│ (1,0)                                                             │
+└───────────────────────────────────────────────────────────────────┘
+```
+
+``` sql
+SELECT arrayReduce('simpleLinearRegression', [0, 1, 2, 3], [3, 4, 5, 6])
+```
+
+``` text
+┌─arrayReduce('simpleLinearRegression', [0, 1, 2, 3], [3, 4, 5, 6])─┐
+│ (1,3)                                                             │
+└───────────────────────────────────────────────────────────────────┘
+```
diff --git a/docs/zh/sql-reference/aggregate-functions/reference/skewpop.md b/docs/zh/sql-reference/aggregate-functions/reference/skewpop.md
new file mode 100644
index 00000000000..0771c18c2f3
--- /dev/null
+++ b/docs/zh/sql-reference/aggregate-functions/reference/skewpop.md
@@ -0,0 +1,27 @@
+---
+toc_priority: 150
+---
+
+# skewPop {#skewpop}
+
+计算给定序列的[偏度](https://en.wikipedia.org/wiki/Skewness)。
+
+**语法**
+
+``` sql
+skewPop(expr)
+```
+
+**参数**
+
+`expr` — 返回一个数字的[表达式](../../../sql-reference/syntax.md#syntax-expressions)。
+
+**返回值**
+
+给定分布的偏度。类型 — [Float64](../../../sql-reference/data-types/float.md)
+
+**示例**
+
+``` sql
+SELECT skewPop(value) FROM series_with_value_column;
+```
diff --git a/docs/zh/sql-reference/aggregate-functions/reference/skewsamp.md b/docs/zh/sql-reference/aggregate-functions/reference/skewsamp.md
new file mode 100644
index 00000000000..902d06da8e7
--- /dev/null
+++ b/docs/zh/sql-reference/aggregate-functions/reference/skewsamp.md
@@ -0,0 +1,29 @@
+---
+toc_priority: 151
+---
+
+# skewSamp {#skewsamp}
+
+计算给定序列的[样本偏度](https://en.wikipedia.org/wiki/Skewness)。
+
+如果传递的值形成其样本,它代表了一个随机变量的偏度的无偏估计。
+
+**语法**
+
+``` sql
+skewSamp(expr)
+```
+
+**参数**
+
+`expr` — 返回一个数字的[表达式](../../../sql-reference/syntax.md#syntax-expressions)。
+
+**返回值**
+
+给定分布的偏度。类型 — [Float64](../../../sql-reference/data-types/float.md)。如果 `n <= 1`(`n` 是样本的大小),函数返回 `nan`。
+
+**示例**
+
+``` sql
+SELECT skewSamp(value) FROM series_with_value_column;
+```
diff --git a/docs/zh/sql-reference/aggregate-functions/reference/stddevpop.md b/docs/zh/sql-reference/aggregate-functions/reference/stddevpop.md
new file mode 100644
index 00000000000..378ef4ae7e4
--- /dev/null
+++ b/docs/zh/sql-reference/aggregate-functions/reference/stddevpop.md
@@ -0,0 +1,10 @@
+---
+toc_priority: 30
+---
+
+# stddevPop {#stddevpop}
+
+结果等于 [varPop](../../../sql-reference/aggregate-functions/reference/varpop.md) 的平方根。
+
+!!! note "注"
+    该函数使用数值不稳定的算法。如果计算需要[数值稳定性](https://en.wikipedia.org/wiki/Numerical_stability),请使用 `stddevPopStable` 函数。它的运行速度较慢,但计算误差较低。
diff --git a/docs/zh/sql-reference/aggregate-functions/reference/stddevsamp.md b/docs/zh/sql-reference/aggregate-functions/reference/stddevsamp.md
new file mode 100644
index 00000000000..68a348146a9
--- /dev/null
+++ b/docs/zh/sql-reference/aggregate-functions/reference/stddevsamp.md
@@ -0,0 +1,10 @@
+---
+toc_priority: 31
+---
+
+# stddevSamp {#stddevsamp}
+
+结果等于 [varSamp](../../../sql-reference/aggregate-functions/reference/varsamp.md) 的平方根。
+
+!!! note "注"
+    该函数使用数值不稳定的算法。如果计算需要[数值稳定性](https://en.wikipedia.org/wiki/Numerical_stability),请使用 `stddevSampStable` 函数。它的运行速度较慢,但计算误差较低。
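+
+**示例**
+
+下面是一个简要的对比示例(示意性的,使用 `numbers` 表函数生成 0 到 9 的序列;`stddevPop` 的分母为 `n`,`stddevSamp` 的分母为 `n - 1`):
+
+``` sql
+SELECT
+    stddevPop(number),   -- 总体标准差,约为 2.872
+    stddevSamp(number)   -- 样本标准差,约为 3.028
+FROM numbers(10);
+```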
diff --git a/docs/zh/sql-reference/aggregate-functions/reference/stochasticlinearregression.md b/docs/zh/sql-reference/aggregate-functions/reference/stochasticlinearregression.md
new file mode 100644
index 00000000000..43ebd6be575
--- /dev/null
+++ b/docs/zh/sql-reference/aggregate-functions/reference/stochasticlinearregression.md
@@ -0,0 +1,77 @@
+---
+toc_priority: 221
+---
+
+# stochasticLinearRegression {#agg_functions-stochasticlinearregression}
+
+该函数实现随机线性回归。它支持自定义的学习率、L2正则化系数和微批大小等参数,并且支持几种更新权重的方法([Adam](https://en.wikipedia.org/wiki/Stochastic_gradient_descent#Adam)(默认)、[simple SGD](https://en.wikipedia.org/wiki/Stochastic_gradient_descent)、[Momentum](https://en.wikipedia.org/wiki/Stochastic_gradient_descent#Momentum)、[Nesterov](https://mipt.ru/upload/medialibrary/d7e/41-91.pdf))。
+
+### 参数 {#agg_functions-stochasticlinearregression-parameters}
+
+有4个可自定义的参数。它们按顺序传递给函数,但不需要传递所有四个参数——未传递的参数将使用默认值,然而好的模型需要一些参数调整。
+
+**语法**
+
+``` sql
+stochasticLinearRegression(1.0, 1.0, 10, 'SGD')
+```
+
+1. `learning rate` 执行梯度下降步骤时的步长系数。过大的学习率可能会导致模型的权重无限大。默认值为 `0.00001`。
+2. `l2 regularization coefficient` L2正则化系数,这可能有助于防止过拟合。默认值为 `0.1`。
+3. `mini-batch size` 设置计算和求和梯度以执行一步梯度下降所用的元素数量。纯随机下降只使用一个元素,但是使用小批量(约10个元素)可以使梯度步骤更稳定。默认值为 `15`。
+4. `method for updating weights` 可选的方法有:`Adam`(默认)、`SGD`、`Momentum`、`Nesterov`。`Momentum` 和 `Nesterov` 需要更多的计算和内存,但是它们在收敛速度和随机梯度方法的稳定性方面是有用的。
+
+### 使用 {#agg_functions-stochasticlinearregression-usage}
+
+`stochasticLinearRegression` 的使用分为两个步骤:拟合模型和预测新数据。为了拟合模型并保存其状态以供以后使用,我们使用 `-State` 组合器,它会保存状态(模型权重等)。
+为了预测,我们使用函数 [evalMLMethod](../../../sql-reference/functions/machine-learning-functions.md#machine_learning_methods-evalmlmethod),它接受一个状态以及用于预测的特征作为参数。
+
+**1.** 拟合
+
+可以使用这种查询。
+
+``` sql
+CREATE TABLE IF NOT EXISTS train_data
+(
+    param1 Float64,
+    param2 Float64,
+    target Float64
+) ENGINE = Memory;
+
+CREATE TABLE your_model ENGINE = Memory AS SELECT
+stochasticLinearRegressionState(0.1, 0.0, 5, 'SGD')(target, param1, param2)
+AS state FROM train_data;
+```
+
+在这里,我们还需要将数据插入到 `train_data` 表。参数的数量不是固定的,它只取决于传入 `linearRegressionState` 的参数数量。它们都必须是数值。
+注意,目标值(我们想学习预测的)列作为第一个参数插入。
+
+**2.** 预测
+
+在将状态保存到表中之后,我们可以多次使用它进行预测,甚至可以与其他状态合并,创建新的、更好的模型。
+
+``` sql
+WITH (SELECT state FROM your_model) AS model SELECT
+evalMLMethod(model, param1, param2) FROM test_data
+```
+
+查询将返回一列预测值。注意,`evalMLMethod` 的第一个参数是 `AggregateFunctionState` 对象,接下来是特征列。
+
+`test_data` 是一个类似 `train_data` 的表,但可能不包含目标值。
+
+### 注 {#agg_functions-stochasticlinearregression-notes}
+
+1. 要合并两个模型,用户可以创建这样的查询:
+   `sql  SELECT state1 + state2 FROM your_models`
+   其中 `your_models` 表包含这两个模型。此查询将返回新的 `AggregateFunctionState` 对象。
+
+2. 如果没有使用 `-State` 组合器,用户可以在不保存模型的情况下获取所创建模型的权重,用于自己的目的。
+   `sql  SELECT stochasticLinearRegression(0.01)(target, param1, param2) FROM train_data`
+   这样的查询将拟合模型,并返回其权重——首先是与模型各参数对应的权重,最后一个是偏差。所以在上面的例子中,查询将返回一个具有3个值的列。
+
+**参见**
+
+- [随机指标逻辑回归](../../../sql-reference/aggregate-functions/reference/stochasticlogisticregression.md#agg_functions-stochasticlogisticregression)
+- [线性回归和逻辑回归之间的差异](https://stackoverflow.com/questions/12146914/what-is-the-difference-between-linear-regression-and-logistic-regression)
diff --git a/docs/zh/sql-reference/aggregate-functions/reference/stochasticlogisticregression.md b/docs/zh/sql-reference/aggregate-functions/reference/stochasticlogisticregression.md
new file mode 100644
index 00000000000..5ed2fb74b89
--- /dev/null
+++ b/docs/zh/sql-reference/aggregate-functions/reference/stochasticlogisticregression.md
@@ -0,0 +1,56 @@
+---
+toc_priority: 222
+---
+
+# stochasticLogisticRegression {#agg_functions-stochasticlogisticregression}
+
+该函数实现随机逻辑回归。它可以用于二元分类问题,支持与 stochasticLinearRegression 相同的自定义参数,并以相同的方式工作。
+
+### 参数 {#agg_functions-stochasticlogisticregression-parameters}
+
+参数与 stochasticLinearRegression 中的参数完全相同:
+`learning rate`, `l2 regularization coefficient`, `mini-batch size`, `method for updating weights`。
+欲了解更多信息,参见[参数](#agg_functions-stochasticlinearregression-parameters)。
+
+**语法**
+
+``` sql
+stochasticLogisticRegression(1.0, 1.0, 10, 'SGD')
+```
+
+**1.** 拟合
+
+    参考 [stochasticLinearRegression](#agg_functions-stochasticlinearregression-usage) 的`拟合`章节文档。
+
+    预测标签的取值范围为 \[-1, 1\]。
+
+**2.** 预测
+
+    使用已经保存的 state 我们可以预测标签为 `1` 的对象的概率。
+
+    ``` sql
+    WITH (SELECT state FROM your_model) AS model SELECT
+    evalMLMethod(model, param1, param2) FROM test_data
+    ```
+
+    查询将返回一列概率。注意 `evalMLMethod` 的第一个参数是 `AggregateFunctionState` 对象,接下来的参数是特征列。
+
+    我们也可以设置概率的界限,这样可以将元素归入不同的标签。
+
+    ``` sql
+    SELECT ans < 1.1 AND ans > 0.5 FROM
+    (WITH (SELECT state FROM your_model) AS model SELECT
+    evalMLMethod(model, param1, param2) AS ans FROM test_data)
+    ```
+
+    结果是标签。
+
+    `test_data` 是一个像 `train_data` 一样的表,但是可以不包含目标值。
+
+**参见**
+
+- [随机指标线性回归](../../../sql-reference/aggregate-functions/reference/stochasticlinearregression.md#agg_functions-stochasticlinearregression)
+- [线性回归和逻辑回归之间的差异](https://stackoverflow.com/questions/12146914/what-is-the-difference-between-linear-regression-and-logistic-regression)
diff --git a/docs/zh/sql-reference/aggregate-functions/reference/studentttest.md b/docs/zh/sql-reference/aggregate-functions/reference/studentttest.md
new file mode 100644
index 00000000000..6d84e728330
--- /dev/null
+++ b/docs/zh/sql-reference/aggregate-functions/reference/studentttest.md
@@ -0,0 +1,64 @@
+---
+toc_priority: 300
+toc_title: studentTTest
+---
+
+# studentTTest {#studentttest}
+
+对两个总体的样本应用t检验。
+
+**语法**
+
+``` sql
+studentTTest(sample_data, sample_index)
+```
+
+两个样本的值都在 `sample_data` 列中。如果 `sample_index` 等于 0,则该行的值属于第一个总体的样本,反之属于第二个总体的样本。
+零假设是两个总体的均值相等。假设两个总体服从方差相等的正态分布。
+
+**参数**
+
+- `sample_data` — 样本数据。[Integer](../../../sql-reference/data-types/int-uint.md), [Float](../../../sql-reference/data-types/float.md) 或 [Decimal](../../../sql-reference/data-types/decimal.md)。
+- `sample_index` — 样本索引。[Integer](../../../sql-reference/data-types/int-uint.md)。
+
+**返回值**
+
+[元组](../../../sql-reference/data-types/tuple.md),有两个元素:
+
+- 计算出的t统计量。[Float64](../../../sql-reference/data-types/float.md)。
+- 计算出的p值。[Float64](../../../sql-reference/data-types/float.md)。
+
+
+**示例**
+
+输入表:
+
+``` text
+┌─sample_data─┬─sample_index─┐
+│        20.3 │            0 │
+│        21.1 │            0 │
+│        21.9 │            1 │
+│        21.7 │            0 │
+│        19.9 │            1 │
+│        21.8 │            1 │
+└─────────────┴──────────────┘
+```
+
+查询:
+
+``` sql
+SELECT studentTTest(sample_data, sample_index) FROM student_ttest;
+```
+
+结果:
+
+``` text
+┌─studentTTest(sample_data, sample_index)───┐
+│ (-0.21739130434783777,0.8385421208415731) │
+└───────────────────────────────────────────┘
+```
+
+**参见**
+
+- [Student's t-test](https://en.wikipedia.org/wiki/Student%27s_t-test)
+- [welchTTest function](../../../sql-reference/aggregate-functions/reference/welchttest.md#welchttest)
diff --git a/docs/zh/sql-reference/aggregate-functions/reference/sum.md b/docs/zh/sql-reference/aggregate-functions/reference/sum.md
new file mode 100644
index 00000000000..049c491d2a5
--- /dev/null
+++ b/docs/zh/sql-reference/aggregate-functions/reference/sum.md
@@ -0,0 +1,8 @@
+---
+toc_priority: 4
+---
+
+# sum {#agg_function-sum}
+
+计算总和。
+只适用于数字。
diff --git a/docs/zh/sql-reference/aggregate-functions/reference/summap.md b/docs/zh/sql-reference/aggregate-functions/reference/summap.md
new file mode 100644
index 00000000000..4a92a1ea1b0
--- /dev/null
+++ b/docs/zh/sql-reference/aggregate-functions/reference/summap.md
@@ -0,0 +1,52 @@
+---
+toc_priority: 141
+---
+
+# sumMap {#agg_functions-summap}
+
+**语法**
+
+``` sql
+sumMap(key, value)
+或
+sumMap(Tuple(key, value))
+```
+
+根据 `key` 数组中指定的键对 `value` 数组进行求和。
+
+传递 `key` 和 `value` 数组的元组与传递 `key` 和 `value` 的两个数组是同义的。
+要总计的每一行的 `key` 和 `value`(数组)元素的数量必须相同。
+返回两个数组组成的一个元组:排好序的 `key` 和对应 `key` 的 `value` 之和。
+
+示例:
+
+``` sql
+CREATE TABLE sum_map(
+    date Date,
+    timeslot DateTime,
+    statusMap Nested(
+        status UInt16,
+        requests UInt64
+    ),
+    statusMapTuple Tuple(Array(Int32), Array(Int32))
+) ENGINE = Log;
+INSERT INTO sum_map VALUES
+    ('2000-01-01', '2000-01-01 00:00:00', [1, 2, 3], [10, 10, 10], ([1, 2, 3], [10, 10, 10])),
+    ('2000-01-01', '2000-01-01 00:00:00', [3, 4, 5], [10, 10, 10], ([3, 4, 5], [10, 10, 10])),
+    ('2000-01-01', '2000-01-01 00:01:00', [4, 5, 6], [10, 10, 10], ([4, 5, 6], [10, 10, 10])),
+    ('2000-01-01', '2000-01-01 00:01:00', [6, 7, 8], [10, 10, 10], ([6, 7, 8], [10, 10, 10]));
+
+SELECT
+    timeslot,
+    sumMap(statusMap.status, statusMap.requests),
+    sumMap(statusMapTuple)
+FROM sum_map
+GROUP BY timeslot
+```
+
+``` text
+┌────────────timeslot─┬─sumMap(statusMap.status, statusMap.requests)─┬─sumMap(statusMapTuple)─────────┐
+│ 2000-01-01 00:00:00 │ ([1,2,3,4,5],[10,10,20,10,10])               │ ([1,2,3,4,5],[10,10,20,10,10]) │
+│ 2000-01-01 00:01:00 │ ([4,5,6,7,8],[10,10,20,10,10])               │ ([4,5,6,7,8],[10,10,20,10,10]) │
+└─────────────────────┴──────────────────────────────────────────────┴────────────────────────────────┘
+```
diff --git a/docs/zh/sql-reference/aggregate-functions/reference/sumwithoverflow.md b/docs/zh/sql-reference/aggregate-functions/reference/sumwithoverflow.md
new file mode 100644
index 00000000000..0fd5af519da
--- /dev/null
+++ b/docs/zh/sql-reference/aggregate-functions/reference/sumwithoverflow.md
@@ -0,0 +1,9 @@
+---
+toc_priority: 140
+---
+
+# sumWithOverflow {#sumwithoverflowx}
+
+使用与输入参数相同的数据类型计算结果的数字总和。如果总和超过此数据类型的最大值,则按溢出方式计算。
+
+只适用于数字。
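+
+**示例**
+
+下面是一个简要的对比示例(示意性的):`sum` 会把结果提升为更大的类型以避免溢出,而 `sumWithOverflow` 保持与输入相同的类型:
+
+``` sql
+SELECT
+    toTypeName(sum(toUInt8(number))),              -- UInt64
+    toTypeName(sumWithOverflow(toUInt8(number)))   -- UInt8,溢出时按模回绕
+FROM numbers(10);
+```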
diff --git a/docs/zh/sql-reference/aggregate-functions/reference/topk.md b/docs/zh/sql-reference/aggregate-functions/reference/topk.md
new file mode 100644
index 00000000000..69e006d1a6c
--- /dev/null
+++ b/docs/zh/sql-reference/aggregate-functions/reference/topk.md
@@ -0,0 +1,43 @@
+---
+toc_priority: 108
+---
+
+# topK {#topk}
+
+返回指定列中近似最常见值的数组。生成的数组按值的近似频率降序排序(而不是按值本身排序)。
+
+实现了[过滤节省空间](http://www.l2f.inesc-id.pt/~fmmb/wiki/uploads/Work/misnis.ref0a.pdf)算法,使用基于 reduce-and-combine 的算法,借鉴[并行节省空间](https://arxiv.org/pdf/1401.0702.pdf)。
+
+**语法**
+
+``` sql
+topK(N)(x)
+```
+此函数不提供保证的结果。在某些情况下,可能会产生误差,返回的值有可能不是最高频的值。
+
+我们建议使用 `N < 10` 的值,`N` 值越大,性能越低。最大值 `N = 65536`。
+
+**参数**
+
+- `N` — 要返回的元素数。可选。如果省略该参数,则使用默认值10。
+- `x` — (要计算频次的)值。
+
+**示例**
+
+以 [OnTime](../../../getting-started/example-datasets/ontime.md) 数据集为例,选择 `AirlineID` 列中出现最频繁的三个值。
+
+``` sql
+SELECT topK(3)(AirlineID) AS res
+FROM ontime
+```
+
+``` text
+┌─res─────────────────┐
+│ [19393,19790,19805] │
+└─────────────────────┘
+```
diff --git a/docs/zh/sql-reference/aggregate-functions/reference/topkweighted.md b/docs/zh/sql-reference/aggregate-functions/reference/topkweighted.md
new file mode 100644
index 00000000000..66b436f42bb
--- /dev/null
+++ b/docs/zh/sql-reference/aggregate-functions/reference/topkweighted.md
@@ -0,0 +1,42 @@
+---
+toc_priority: 109
+---
+
+# topKWeighted {#topkweighted}
+
+类似于 `topK`,但需要一个整数类型的附加参数 `weight`。每个输入值都按 `weight` 次计入频率计算。
+
+**语法**
+
+``` sql
+topKWeighted(N)(x, weight)
+```
+
+**参数**
+
+- `N` — 要返回的元素数。
+- `x` — (要计算频次的)值。
+- `weight` — 权重。[UInt8](../../../sql-reference/data-types/int-uint.md)类型。
+
+**返回值**
+
+返回具有最大近似权重总和的值数组。
+
+**示例**
+
+查询:
+
+``` sql
+SELECT topKWeighted(10)(number, number) FROM numbers(1000)
+```
+
+结果:
+
+``` text
+┌─topKWeighted(10)(number, number)──────────┐
+│ [999,998,997,996,995,994,993,992,991,990] │
+└───────────────────────────────────────────┘
+```
diff --git a/docs/zh/sql-reference/aggregate-functions/reference/uniq.md b/docs/zh/sql-reference/aggregate-functions/reference/uniq.md
new file mode 100644
index 00000000000..2cf020d052b
--- /dev/null
+++ b/docs/zh/sql-reference/aggregate-functions/reference/uniq.md
@@ -0,0 +1,42 @@
+---
+toc_priority: 190
+---
+
+# uniq {#agg_function-uniq}
+
+计算参数的不同值的近似数量。
+
+**语法**
+
+``` sql
+uniq(x[, ...])
+```
+
+**参数**
+
+该函数采用可变数量的参数。参数可以是 `Tuple`、`Array`、`Date`、`DateTime`、`String` 或数字类型。
+
+**返回值**
+
+- [UInt64](../../../sql-reference/data-types/int-uint.md) 类型数值。
+
+**实现细节**
+
+该函数:
+
+- 计算聚合中所有参数的哈希值,然后在计算中使用它。
+
+- 使用自适应采样算法。对于计算状态,该函数使用最多65536个元素哈希值的样本。
+
+    这个算法非常精确,并且对CPU非常高效。如果查询中包含多个这样的函数,`uniq` 的速度几乎与其他聚合函数一样快。
+
+- 确定性地提供结果(它不依赖于查询处理顺序)。
+
+我们建议在几乎所有情况下使用此函数。
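+
+**示例**
+
+一个简要示例(示意性的,使用 `numbers` 表函数构造含重复值的数据):
+
+``` sql
+SELECT uniq(number % 100) FROM numbers(100000);
+-- 期望结果为 100;uniq 是近似计数,在这样的小基数下通常是精确的
+```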
+
+**参见**
+
+- [uniqCombined](../../../sql-reference/aggregate-functions/reference/uniqcombined.md#agg_function-uniqcombined)
+- [uniqCombined64](../../../sql-reference/aggregate-functions/reference/uniqcombined64.md#agg_function-uniqcombined64)
+- [uniqHLL12](../../../sql-reference/aggregate-functions/reference/uniqhll12.md#agg_function-uniqhll12)
+- [uniqExact](../../../sql-reference/aggregate-functions/reference/uniqexact.md#agg_function-uniqexact)
diff --git a/docs/zh/sql-reference/aggregate-functions/reference/uniqcombined.md b/docs/zh/sql-reference/aggregate-functions/reference/uniqcombined.md
new file mode 100644
index 00000000000..26a681ed5af
--- /dev/null
+++ b/docs/zh/sql-reference/aggregate-functions/reference/uniqcombined.md
@@ -0,0 +1,52 @@
+---
+toc_priority: 192
+---
+
+# uniqCombined {#agg_function-uniqcombined}
+
+计算不同参数值的近似数量。
+
+**语法**
+``` sql
+uniqCombined(HLL_precision)(x[, ...])
+```
+`uniqCombined` 函数是计算不同值数量的不错选择。
+
+**参数**
+
+该函数采用可变数量的参数。参数可以是 `Tuple`、`Array`、`Date`、`DateTime`、`String` 或数字类型。
+
+`HLL_precision` 是 [HyperLogLog](https://en.wikipedia.org/wiki/HyperLogLog) 中单元格数量以2为底的对数。该参数可选,你可以将该函数用作 `uniqCombined(x[, ...])`。`HLL_precision` 的默认值是17,即 96 KiB 的空间(2^17个单元,每个6比特)。
+
+**返回值**
+
+- 一个[UInt64](../../../sql-reference/data-types/int-uint.md)类型的数字。
+
+**实现细节**
+
+该函数:
+
+- 为聚合中的所有参数计算哈希(`String` 类型用64位哈希,其他类型用32位),然后在计算中使用它。
+
+- 使用三种算法的组合:数组、哈希表和包含错误修正表的 HyperLogLog。
+
+    对于少量的不同值,使用数组;值再多一些时,使用哈希表;对于大量数据,则使用占用固定内存空间的 HyperLogLog。
+
+- 确定性地提供结果(它不依赖于查询处理顺序)。
+
+!!! note "注"
+    由于它对非 `String` 类型使用32位哈希,对于基数显著大于 `UINT_MAX` 的情况,结果将有非常高的误差(误差将在几百亿不同值之后迅速提高),因此这种情况下你应该使用 [uniqCombined64](../../../sql-reference/aggregate-functions/reference/uniqcombined64.md#agg_function-uniqcombined64)。
+
+相比于 [uniq](../../../sql-reference/aggregate-functions/reference/uniq.md#agg_function-uniq) 函数,`uniqCombined`:
+
+- 消耗内存要少几倍。
+- 计算精度高出几倍。
+- 通常具有略低的性能。在某些情况下,`uniqCombined` 可以表现得比 `uniq` 更好,例如,使用通过网络传输大量聚合状态的分布式查询。
+
+**参见**
+
+- [uniq](../../../sql-reference/aggregate-functions/reference/uniq.md#agg_function-uniq)
+- [uniqCombined64](../../../sql-reference/aggregate-functions/reference/uniqcombined64.md#agg_function-uniqcombined64)
+- [uniqHLL12](../../../sql-reference/aggregate-functions/reference/uniqhll12.md#agg_function-uniqhll12)
+- [uniqExact](../../../sql-reference/aggregate-functions/reference/uniqexact.md#agg_function-uniqexact)
diff --git a/docs/zh/sql-reference/aggregate-functions/reference/uniqcombined64.md b/docs/zh/sql-reference/aggregate-functions/reference/uniqcombined64.md
new file mode 100644
index 00000000000..3c07791450d
--- /dev/null
+++ b/docs/zh/sql-reference/aggregate-functions/reference/uniqcombined64.md
@@ -0,0 +1,7 @@
+---
+toc_priority: 193
+---
+
+# uniqCombined64 {#agg_function-uniqcombined64}
+
+和 [uniqCombined](../../../sql-reference/aggregate-functions/reference/uniqcombined.md#agg_function-uniqcombined)一样,但对于所有数据类型使用64位哈希。
diff --git a/docs/zh/sql-reference/aggregate-functions/reference/uniqexact.md b/docs/zh/sql-reference/aggregate-functions/reference/uniqexact.md
new file mode 100644
index 00000000000..bdd60ca1d30
--- /dev/null
+++ b/docs/zh/sql-reference/aggregate-functions/reference/uniqexact.md
@@ -0,0 +1,26 @@
+---
+toc_priority: 191
+---
+
+# uniqExact {#agg_function-uniqexact}
+
+计算不同参数值的准确数目。
+
+**语法**
+
+``` sql
+uniqExact(x[, ...])
+```
+如果你绝对需要一个确切的结果,使用 `uniqExact` 函数。否则使用 [uniq](../../../sql-reference/aggregate-functions/reference/uniq.md#agg_function-uniq) 函数。
+
+`uniqExact` 函数比 `uniq` 使用更多的内存,因为状态的大小随着不同值的数量的增加而无界增长。
+
+**参数**
+
+该函数采用可变数量的参数。参数可以是 `Tuple`、`Array`、`Date`、`DateTime`、`String` 或数字类型。
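+
+**示例**
+
+一个简要的对比示例(示意性的),在同一数据上同时计算精确值与近似值:
+
+``` sql
+SELECT uniqExact(number % 1000), uniq(number % 1000) FROM numbers(1000000);
+-- uniqExact 总是返回精确的 1000;uniq 返回近似值
+```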
+
+**参见**
+
+- [uniq](../../../sql-reference/aggregate-functions/reference/uniq.md#agg_function-uniq)
+- [uniqCombined](../../../sql-reference/aggregate-functions/reference/uniq.md#agg_function-uniqcombined)
+- [uniqHLL12](../../../sql-reference/aggregate-functions/reference/uniq.md#agg_function-uniqhll12)
diff --git a/docs/zh/sql-reference/aggregate-functions/reference/uniqhll12.md b/docs/zh/sql-reference/aggregate-functions/reference/uniqhll12.md
new file mode 100644
index 00000000000..7521065b954
--- /dev/null
+++ b/docs/zh/sql-reference/aggregate-functions/reference/uniqhll12.md
@@ -0,0 +1,43 @@
+---
+toc_priority: 194
+---
+
+# uniqHLL12 {#agg_function-uniqhll12}
+
+使用 [HyperLogLog](https://en.wikipedia.org/wiki/HyperLogLog) 算法计算不同参数值的近似数量。
+
+**语法**
+
+``` sql
+uniqHLL12(x[, ...])
+```
+
+**参数**
+
+该函数采用可变数量的参数。参数可以是 `Tuple`、`Array`、`Date`、`DateTime`、`String` 或数字类型。
+
+**返回值**
+
+- 一个[UInt64](../../../sql-reference/data-types/int-uint.md)类型的数字。
+
+**实现细节**
+
+该函数:
+
+- 计算聚合中所有参数的哈希值,然后在计算中使用它。
+
+- 使用 HyperLogLog 算法来近似不同参数值的数量。
+
+    使用2^12个5比特单元。状态的大小略大于2.5KB。对于小数据集(<10K元素),结果不是很准确(误差高达10%)。但是,对于高基数数据集(10K-100M),结果相当准确,最大误差约为1.6%。从100M开始,估计误差会增大,对于基数极高的数据集(10亿以上元素),该函数将返回非常不准确的结果。
+
+- 提供确定性的结果(它不依赖于查询处理顺序)。
+
+我们不建议使用此函数。在大多数情况下,使用 [uniq](../../../sql-reference/aggregate-functions/reference/uniq.md#agg_function-uniq) 或 [uniqCombined](../../../sql-reference/aggregate-functions/reference/uniqcombined.md#agg_function-uniqcombined) 函数。
+
+**参见**
+
+- [uniq](../../../sql-reference/aggregate-functions/reference/uniq.md#agg_function-uniq)
+- [uniqCombined](../../../sql-reference/aggregate-functions/reference/uniqcombined.md#agg_function-uniqcombined)
+- [uniqExact](../../../sql-reference/aggregate-functions/reference/uniqexact.md#agg_function-uniqexact)
diff --git a/docs/zh/sql-reference/aggregate-functions/reference/varpop.md b/docs/zh/sql-reference/aggregate-functions/reference/varpop.md
new file mode 100644
index 00000000000..4dca8efde38
--- /dev/null
+++ b/docs/zh/sql-reference/aggregate-functions/reference/varpop.md
@@ -0,0 +1,12 @@
+---
+toc_priority: 32
+---
+
+# varPop(x) {#varpopx}
+
+计算 `Σ((x - x̅)^2) / n`,这里 `n` 是样本大小,`x̅` 是 `x` 的平均值。
+
+换句话说,计算一组数据的离差。返回 `Float64`。
+
+!!! note "注"
+    该函数使用数值不稳定的算法。如果计算需要[数值稳定性](https://en.wikipedia.org/wiki/Numerical_stability),请使用 `varPopStable` 函数。它的运行速度较慢,但计算误差较低。
diff --git a/docs/zh/sql-reference/aggregate-functions/reference/varsamp.md b/docs/zh/sql-reference/aggregate-functions/reference/varsamp.md
new file mode 100644
index 00000000000..c83ee7e24d2
--- /dev/null
+++ b/docs/zh/sql-reference/aggregate-functions/reference/varsamp.md
@@ -0,0 +1,15 @@
+---
+toc_priority: 33
+---
+
+# varSamp {#varsamp}
+
+计算 `Σ((x - x̅)^2) / (n - 1)`,这里 `n` 是样本大小,`x̅` 是 `x` 的平均值。
+
+如果传递的值形成其样本,它代表了随机变量方差的无偏估计。
+
+返回 `Float64`。当 `n <= 1` 时,返回 `+∞`。
+
+!!! note "注"
+    该函数使用数值不稳定的算法。如果计算需要[数值稳定性](https://en.wikipedia.org/wiki/Numerical_stability),请使用 `varSampStable` 函数。它的运行速度较慢,但计算误差较低。
+
diff --git a/docs/zh/sql-reference/aggregate-functions/reference/welchttest.md b/docs/zh/sql-reference/aggregate-functions/reference/welchttest.md
new file mode 100644
index 00000000000..44b8e81d4d9
--- /dev/null
+++ b/docs/zh/sql-reference/aggregate-functions/reference/welchttest.md
@@ -0,0 +1,62 @@
+---
+toc_priority: 301
+toc_title: welchTTest
+---
+
+# welchTTest {#welchttest}
+
+对两个总体的样本应用 Welch t检验。
+
+**语法**
+
+``` sql
+welchTTest(sample_data, sample_index)
+```
+两个样本的值都在 `sample_data` 列中。如果 `sample_index` 等于 0,则该行的值属于第一个总体的样本,反之属于第二个总体的样本。
+零假设是两个总体的均值相等。假设两个总体服从正态分布,且方差可以不相等。
+
+**参数**
+
+- `sample_data` — 样本数据。[Integer](../../../sql-reference/data-types/int-uint.md), [Float](../../../sql-reference/data-types/float.md) 或 [Decimal](../../../sql-reference/data-types/decimal.md)。
+- `sample_index` — 样本索引。[Integer](../../../sql-reference/data-types/int-uint.md)。
+ +**返回值** + +[元组](../../../sql-reference/data-types/tuple.md),有两个元素: + +- 计算出的t统计量。 [Float64](../../../sql-reference/data-types/float.md)。 +- 计算出的p值。[Float64](../../../sql-reference/data-types/float.md)。 + +**示例** + +输入表: + +``` text +┌─sample_data─┬─sample_index─┐ +│ 20.3 │ 0 │ +│ 22.1 │ 0 │ +│ 21.9 │ 0 │ +│ 18.9 │ 1 │ +│ 20.3 │ 1 │ +│ 19 │ 1 │ +└─────────────┴──────────────┘ +``` + +查询: + +``` sql +SELECT welchTTest(sample_data, sample_index) FROM welch_ttest; +``` + +结果: + +``` text +┌─welchTTest(sample_data, sample_index)─────┐ +│ (2.7988719532211235,0.051807360348581945) │ +└───────────────────────────────────────────┘ +``` + +**参见** + +- [Welch's t-test](https://en.wikipedia.org/wiki/Welch%27s_t-test) +- [studentTTest function](../../../sql-reference/aggregate-functions/reference/studentttest.md#studentttest) diff --git a/docs/zh/sql-reference/data-types/index.md b/docs/zh/sql-reference/data-types/index.md index 70aa976cb11..c7f5c63e357 100644 --- a/docs/zh/sql-reference/data-types/index.md +++ b/docs/zh/sql-reference/data-types/index.md @@ -1,5 +1,12 @@ +--- +toc_folder_title: 数据类型 +toc_priority: 37 +toc_title: 简介 +--- + # 数据类型 {#data_types} ClickHouse 可以在数据表中存储多种数据类型。 本节描述 ClickHouse 支持的数据类型,以及使用或者实现它们时(如果有的话)的注意事项。 +你可以在系统表 [system.data_type_families](../../operations/system-tables/data_type_families.md#system_tables-data_type_families) 中检查数据类型名称是否区分大小写。 diff --git a/docs/zh/sql-reference/data-types/simpleaggregatefunction.md b/docs/zh/sql-reference/data-types/simpleaggregatefunction.md index e827adb817e..38d7699c176 100644 --- a/docs/zh/sql-reference/data-types/simpleaggregatefunction.md +++ b/docs/zh/sql-reference/data-types/simpleaggregatefunction.md @@ -1,26 +1,31 @@ ---- -machine_translated: true -machine_translated_rev: 71d72c1f237f4a553fe91ba6c6c633e81a49e35b ---- - # SimpleAggregateFunction {#data-type-simpleaggregatefunction} -`SimpleAggregateFunction(name, types_of_arguments…)` 数据类型存储聚合函数的当前值,而不将其完整状态存储为 [`AggregateFunction`](../../sql-reference/data-types/aggregatefunction.md) 有 此优化可应用于具有以下属性的函数:应用函数的结果 `f` 到行集 `S1 UNION ALL S2` 可以通过应用来获得 `f` 行的部分单独设置,然后再次应用 `f` 到结果: `f(S1 UNION ALL S2) = f(f(S1) UNION ALL f(S2))`. 
此属性保证部分聚合结果足以计算组合结果,因此我们不必存储和处理任何额外的数据。
+`SimpleAggregateFunction(name, types_of_arguments…)` 数据类型存储聚合函数的当前值, 并不像 [`AggregateFunction`](../../sql-reference/data-types/aggregatefunction.md) 那样存储其全部状态。这种优化可以应用于具有以下属性的函数: 将函数 `f` 应用于行集合 `S1 UNION ALL S2` 的结果,可以通过将 `f` 分别应用于行集合的部分, 然后再将 `f` 应用于结果来获得: `f(S1 UNION ALL S2) = f(f(S1) UNION ALL f(S2))`。 这个属性保证了部分聚合结果足以计算出合并的结果,所以我们不必存储和处理任何额外的数据。
 
 支持以下聚合函数:
 
-- [`any`](../../sql-reference/aggregate-functions/reference.md#agg_function-any)
-- [`anyLast`](../../sql-reference/aggregate-functions/reference.md#anylastx)
-- [`min`](../../sql-reference/aggregate-functions/reference.md#agg_function-min)
-- [`max`](../../sql-reference/aggregate-functions/reference.md#agg_function-max)
-- [`sum`](../../sql-reference/aggregate-functions/reference.md#agg_function-sum)
-- [`groupBitAnd`](../../sql-reference/aggregate-functions/reference.md#groupbitand)
-- [`groupBitOr`](../../sql-reference/aggregate-functions/reference.md#groupbitor)
-- [`groupBitXor`](../../sql-reference/aggregate-functions/reference.md#groupbitxor)
-- [`groupArrayArray`](../../sql-reference/aggregate-functions/reference.md#agg_function-grouparray)
-- [`groupUniqArrayArray`](../../sql-reference/aggregate-functions/reference.md#groupuniqarrayx-groupuniqarraymax-sizex)
+- [`any`](../../sql-reference/aggregate-functions/reference/any.md#agg_function-any)
+- [`anyLast`](../../sql-reference/aggregate-functions/reference/anylast.md#anylastx)
+- [`min`](../../sql-reference/aggregate-functions/reference/min.md#agg_function-min)
+- [`max`](../../sql-reference/aggregate-functions/reference/max.md#agg_function-max)
+- [`sum`](../../sql-reference/aggregate-functions/reference/sum.md#agg_function-sum)
+- [`sumWithOverflow`](../../sql-reference/aggregate-functions/reference/sumwithoverflow.md#sumwithoverflowx)
+- [`groupBitAnd`](../../sql-reference/aggregate-functions/reference/groupbitand.md#groupbitand)
+- [`groupBitOr`](../../sql-reference/aggregate-functions/reference/groupbitor.md#groupbitor)
+- [`groupBitXor`](../../sql-reference/aggregate-functions/reference/groupbitxor.md#groupbitxor)
+- [`groupArrayArray`](../../sql-reference/aggregate-functions/reference/grouparray.md#agg_function-grouparray)
+- [`groupUniqArrayArray`](../../sql-reference/aggregate-functions/reference/groupuniqarray.md)
+- [`sumMap`](../../sql-reference/aggregate-functions/reference/summap.md#agg_functions-summap)
+- [`minMap`](../../sql-reference/aggregate-functions/reference/minmap.md#agg_functions-minmap)
+- [`maxMap`](../../sql-reference/aggregate-functions/reference/maxmap.md#agg_functions-maxmap)
+- [`argMin`](../../sql-reference/aggregate-functions/reference/argmin.md)
+- [`argMax`](../../sql-reference/aggregate-functions/reference/argmax.md)
 
-的值 `SimpleAggregateFunction(func, Type)` 看起来和存储方式相同 `Type`,所以你不需要应用函数 `-Merge`/`-State` 后缀。 `SimpleAggregateFunction` 具有比更好的性能 `AggregateFunction` 具有相同的聚合功能。
+
+!!! note "注"
+    `SimpleAggregateFunction(func, Type)` 的值外观和存储方式与 `Type` 相同,所以你不需要应用带有 `-Merge`/`-State` 后缀的函数。
+
+    `SimpleAggregateFunction` 的性能优于具有相同聚合函数的 `AggregateFunction`。
 
 **参数**
 
@@ -30,11 +35,7 @@
 **示例**
 
 ``` sql
-CREATE TABLE t
-(
-    column1 SimpleAggregateFunction(sum, UInt64),
-    column2 SimpleAggregateFunction(any, String)
-) ENGINE = ...
+CREATE TABLE simple (id UInt64, val SimpleAggregateFunction(sum, Double)) ENGINE=AggregatingMergeTree ORDER BY id;
 ```
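+
+一个简要的使用示意(假设基于上面的建表语句;相同 `id` 的行在后台合并时会自动求和,查询时仍建议显式聚合):
+
+``` sql
+INSERT INTO simple VALUES (1, 1), (1, 2), (2, 4);
+
+-- 对 SimpleAggregateFunction(sum, Double) 列可直接应用 sum
+SELECT id, sum(val) FROM simple GROUP BY id ORDER BY id;
+```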
 
 [原始文章](https://clickhouse.tech/docs/en/data_types/simpleaggregatefunction/)
diff --git a/docs/zh/sql-reference/data-types/uuid.md b/docs/zh/sql-reference/data-types/uuid.md
index 2ff1e391e81..b454484003c 100644
--- a/docs/zh/sql-reference/data-types/uuid.md
+++ b/docs/zh/sql-reference/data-types/uuid.md
@@ -1,21 +1,19 @@
 ---
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
 toc_priority: 46
 toc_title: UUID
 ---
 
 # UUID {#uuid-data-type}
 
-通用唯一标识符(UUID)是用于标识记录的16字节数。 有关UUID的详细信息,请参阅 [维基百科](https://en.wikipedia.org/wiki/Universally_unique_identifier).
+通用唯一标识符(UUID)是一个16字节的数字,用于标识记录。有关UUID的详细信息, 参见[维基百科](https://en.wikipedia.org/wiki/Universally_unique_identifier)。
 
-UUID类型值的示例如下所示:
+UUID类型值的示例如下:
 
 ``` text
 61f0c404-5cb3-11e7-907b-a6006ad3dba0
 ```
 
-如果在插入新记录时未指定UUID列值,则UUID值将用零填充:
+如果在插入新记录时未指定UUID列的值,则UUID值将用零填充:
 
 ``` text
 00000000-0000-0000-0000-000000000000
@@ -23,13 +21,13 @@ UUID类型值的示例如下所示:
 
 ## 如何生成 {#how-to-generate}
 
-要生成UUID值,ClickHouse提供了 [generateuidv4](../../sql-reference/functions/uuid-functions.md) 功能。
+要生成UUID值,ClickHouse提供了 [generateUUIDv4](../../sql-reference/functions/uuid-functions.md) 函数。
 
 ## 用法示例 {#usage-example}
 
 **示例1**
 
-此示例演示如何创建具有UUID类型列的表并将值插入到表中。
+这个例子演示了创建一个具有UUID类型列的表,并在表中插入一个值。
 
 ``` sql
 CREATE TABLE t_uuid (x UUID, y String) ENGINE=TinyLog
@@ -51,7 +49,7 @@ SELECT * FROM t_uuid
 
 **示例2**
 
-在此示例中,插入新记录时未指定UUID列值。
+在这个示例中,插入新记录时未指定UUID列的值。
 
 ``` sql
 INSERT INTO t_uuid (y) VALUES ('Example 2')
@@ -70,8 +68,7 @@ SELECT * FROM t_uuid
 
 ## 限制 {#restrictions}
 
-UUID数据类型仅支持以下功能 [字符串](string.md) 数据类型也支持(例如, [min](../../sql-reference/aggregate-functions/reference.md#agg_function-min), [max](../../sql-reference/aggregate-functions/reference.md#agg_function-max),和 [计数](../../sql-reference/aggregate-functions/reference.md#agg_function-count)).
+UUID数据类型只支持 [字符串](../../sql-reference/data-types/string.md) 数据类型也支持的函数(比如, [min](../../sql-reference/aggregate-functions/reference/min.md#agg_function-min), [max](../../sql-reference/aggregate-functions/reference/max.md#agg_function-max), 和 [count](../../sql-reference/aggregate-functions/reference/count.md#agg_function-count))。
 
-算术运算不支持UUID数据类型(例如, [abs](../../sql-reference/functions/arithmetic-functions.md#arithm_func-abs))或聚合函数,例如 [sum](../../sql-reference/aggregate-functions/reference.md#agg_function-sum) 和 [avg](../../sql-reference/aggregate-functions/reference.md#agg_function-avg).
+算术运算不支持UUID数据类型(例如, [abs](../../sql-reference/functions/arithmetic-functions.md#arithm_func-abs))或聚合函数,例如 [sum](../../sql-reference/aggregate-functions/reference/sum.md#agg_function-sum) 和 [avg](../../sql-reference/aggregate-functions/reference/avg.md#agg_function-avg).
-[原始文章](https://clickhouse.tech/docs/en/data_types/uuid/)
diff --git a/docs/zh/sql-reference/functions/array-functions.md b/docs/zh/sql-reference/functions/array-functions.md
index ac5dae3a97e..4f6dbc0d87d 100644
--- a/docs/zh/sql-reference/functions/array-functions.md
+++ b/docs/zh/sql-reference/functions/array-functions.md
@@ -606,7 +606,7 @@ SELECT arrayReverseSort((x, y) -> -y, [4, 3, 5], [1, 2, 3]) AS res;
 
 如果要获取数组中唯一项的列表,可以使用arrayReduce('groupUniqArray',arr)。
 
-## arryjoin(arr) {#array-functions-join}
+## arrayJoin(arr) {#array-functions-join}
 
 一个特殊的功能。请参见[«ArrayJoin函数»](array-join.md#functions_arrayjoin)部分。
 
diff --git a/docs/zh/sql-reference/functions/other-functions.md b/docs/zh/sql-reference/functions/other-functions.md
index b17a5e89332..c58c4bd1510 100644
--- a/docs/zh/sql-reference/functions/other-functions.md
+++ b/docs/zh/sql-reference/functions/other-functions.md
@@ -477,6 +477,103 @@ FROM
 
 1 rows in set. Elapsed: 0.002 sec.
 
+
+## indexHint {#indexhint}
+输出索引选择范围内的所有数据,同时不使用参数中的表达式进行过滤。
+
+传递给函数的表达式参数将不会被计算,但ClickHouse会使用参数中的表达式进行索引过滤。
+
+**返回值**
+
+- 1。
+
+**示例**
+
+这是一个包含[ontime](../../getting-started/example-datasets/ontime.md)测试数据集的测试表。
+
+```
+SELECT count() FROM ontime
+
+┌─count()─┐
+│ 4276457 │
+└─────────┘
+```
+
+该表使用`(FlightDate, (Year, FlightDate))`作为索引。
+
+对该表进行如下的查询:
+
+```
+:) SELECT FlightDate AS k, count() FROM ontime GROUP BY k ORDER BY k
+
+SELECT
+    FlightDate AS k,
+    count()
+FROM ontime
+GROUP BY k
+ORDER BY k ASC
+
+┌──────────k─┬─count()─┐
+│ 2017-01-01 │   13970 │
+│ 2017-01-02 │   15882 │
+........................
+│ 2017-09-28 │   16411 │
+│ 2017-09-29 │   16384 │
+│ 2017-09-30 │   12520 │
+└────────────┴─────────┘
+
+273 rows in set. Elapsed: 0.072 sec. Processed 4.28 million rows, 8.55 MB (59.00 million rows/s., 118.01 MB/s.)
+```
+
+在这个查询中,由于没有使用索引,所以ClickHouse将处理整个表的所有数据(`Processed 4.28 million rows`)。使用下面的查询尝试使用索引进行查询:
+
+```
+:) SELECT FlightDate AS k, count() FROM ontime WHERE k = '2017-09-15' GROUP BY k ORDER BY k
+
+SELECT
+    FlightDate AS k,
+    count()
+FROM ontime
+WHERE k = '2017-09-15'
+GROUP BY k
+ORDER BY k ASC
+
+┌──────────k─┬─count()─┐
+│ 2017-09-15 │   16428 │
+└────────────┴─────────┘
+
+1 rows in set. Elapsed: 0.014 sec. Processed 32.74 thousand rows, 65.49 KB (2.31 million rows/s., 4.63 MB/s.)
+```
+
+在最后一行的显示中,通过索引ClickHouse处理的行数明显减少(`Processed 32.74 thousand rows`)。
+
+现在将表达式`k = '2017-09-15'`传递给`indexHint`函数:
+
+```
+:) SELECT FlightDate AS k, count() FROM ontime WHERE indexHint(k = '2017-09-15') GROUP BY k ORDER BY k
+
+SELECT
+    FlightDate AS k,
+    count()
+FROM ontime
+WHERE indexHint(k = '2017-09-15')
+GROUP BY k
+ORDER BY k ASC
+
+┌──────────k─┬─count()─┐
+│ 2017-09-14 │    7071 │
+│ 2017-09-15 │   16428 │
+│ 2017-09-16 │    1077 │
+│ 2017-09-30 │    8167 │
+└────────────┴─────────┘
+
+4 rows in set. Elapsed: 0.004 sec. Processed 32.74 thousand rows, 65.49 KB (8.97 million rows/s., 17.94 MB/s.)
+```
+
+对于这个查询,从输出可以看到,ClickHouse以与上一次相同的方式应用了索引(`Processed 32.74 thousand rows`)。但是,最终返回的结果集并没有按 `k = '2017-09-15'` 表达式进行过滤。
+
+由于ClickHouse中使用稀疏索引,因此在读取范围时(本示例中为相邻日期),"额外"的数据将包含在索引结果中。使用`indexHint`函数可以查看到它们。
+
 ## 复制 {#replicate}
 
 使用单个值填充一个数组。
diff --git a/docs/zh/sql-reference/statements/index.md b/docs/zh/sql-reference/statements/index.md
index 1c5f4e9a7ef..ab080584c66 100644
--- a/docs/zh/sql-reference/statements/index.md
+++ b/docs/zh/sql-reference/statements/index.md
@@ -1,7 +1,7 @@
 ---
 machine_translated: true
 machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_folder_title: "\u53D1\u8A00"
+toc_folder_title: "\u8BED\u53E5"
 toc_priority: 31
 ---
 
diff --git a/docs/zh/sql-reference/statements/select/join.md b/docs/zh/sql-reference/statements/select/join.md
index 2976484e09a..407c8ca6101 100644
--- a/docs/zh/sql-reference/statements/select/join.md
+++ b/docs/zh/sql-reference/statements/select/join.md
@@ -43,15 +43,15 @@ ClickHouse中提供的其他联接类型:
 
 Also the behavior of ClickHouse server for `ANY JOIN` operations depends on the [any_join_distinct_right_table_keys](../../../operations/settings/settings.md#any_join_distinct_right_table_keys) setting.
 
-### ASOF加入使用 {#asof-join-usage}
+### ASOF JOIN使用 {#asof-join-usage}
 
 `ASOF JOIN` 当您需要连接没有完全匹配的记录时非常有用。
 
-算法需要表中的特殊列。 本专栏:
+该算法需要表中的特殊列。 该列需要满足:
 
 - 必须包含有序序列。
-- 可以是以下类型之一: [Int*,UInt*](../../../sql-reference/data-types/int-uint.md), [浮动\*](../../../sql-reference/data-types/float.md), [日期](../../../sql-reference/data-types/date.md), [日期时间](../../../sql-reference/data-types/datetime.md), [十进制\*](../../../sql-reference/data-types/decimal.md).
-- 不能是唯一的列 `JOIN`
+- 可以是以下类型之一: [Int*,UInt*](../../../sql-reference/data-types/int-uint.md), [Float\*](../../../sql-reference/data-types/float.md), [Date](../../../sql-reference/data-types/date.md), [DateTime](../../../sql-reference/data-types/datetime.md), [Decimal\*](../../../sql-reference/data-types/decimal.md).
+- 不能是`JOIN`子句中唯一的列
 
 语法 `ASOF JOIN ... ON`:
 
@@ -62,9 +62,9 @@
 ASOF LEFT JOIN table_2
 ON equi_cond AND closest_match_cond
 ```
 
-您可以使用任意数量的相等条件和恰好一个最接近的匹配条件。 例如, `SELECT count() FROM table_1 ASOF LEFT JOIN table_2 ON table_1.a == table_2.b AND table_2.t <= table_1.t`.
+您可以使用任意数量的相等条件和一个且只有一个最接近的匹配条件。 例如, `SELECT count() FROM table_1 ASOF LEFT JOIN table_2 ON table_1.a == table_2.b AND table_2.t <= table_1.t`.
 
-支持最接近匹配的条件: `>`, `>=`, `<`, `<=`.
+支持最接近匹配的运算符: `>`, `>=`, `<`, `<=`.
 
 语法 `ASOF JOIN ... USING`:
 
@@ -75,9 +75,9 @@
 ASOF JOIN table_2
 USING (equi_column1, ... equi_columnN, asof_column)
 ```
 
-`ASOF JOIN` 用途 `equi_columnX` 对于加入平等和 `asof_column` 用于加入与最接近的比赛 `table_1.asof_column >= table_2.asof_column` 条件。 该 `asof_column` 列总是在最后一个 `USING` 条款
+`ASOF JOIN` 使用 `equi_columnX` 进行等值匹配,并根据最接近匹配条件 `table_1.asof_column >= table_2.asof_column` 使用 `asof_column` 进行JOIN。 `asof_column` 列总是在最后一个 `USING` 条件中。
 
-例如,请考虑下表:
+例如,参考下表:
 
      table_1                           table_2
   event   | ev_time | user_id       event   | ev_time | user_id
  event_1_1 |  12:00  |  42         event_2_1 |  11:59  |   42
  ...                               ...
  event_1_2 |  13:00  |  42         event_2_3 |  13:00  |   42
  ...                               ...
 
-`ASOF JOIN` 可以从用户事件的时间戳 `table_1` 并找到一个事件 `table_2` 其中时间戳最接近事件的时间戳 `table_1` 对应于最接近的匹配条件。 如果可用,则相等的时间戳值是最接近的值。 在这里,该 `user_id` 列可用于连接相等和 `ev_time` 列可用于在最接近的匹配加入。 在我们的例子中, `event_1_1` 可以加入 `event_2_1` 和 `event_1_2` 可以加入 `event_2_3`,但是 `event_2_2` 不能加入。
+`ASOF JOIN`会从 `table_2` 中的用户事件时间戳找出和 `table_1` 中用户事件时间戳中最近的一个时间戳,来满足最接近匹配的条件。如果有的话,则相等的时间戳值是最接近的值。在此例中,`user_id` 列可用于条件匹配,`ev_time` 列可用于最接近匹配。在此例中,`event_1_1` 可以 JOIN `event_2_1`,`event_1_2` 可以JOIN `event_2_3`,但是 `event_2_2` 不能被JOIN。
 
 !!!
diff --git a/docs/zh/sql-reference/table-functions/file.md b/docs/zh/sql-reference/table-functions/file.md
index 4d694cb6729..84fddada867 100644
--- a/docs/zh/sql-reference/table-functions/file.md
+++ b/docs/zh/sql-reference/table-functions/file.md
@@ -1,23 +1,25 @@
 ---
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
 toc_priority: 37
-toc_title: "\u6587\u4EF6"
+toc_title: file
 ---

-# 文件 {#file}
+# file {#file}

-从文件创建表。 此表函数类似于 [url](url.md) 和 [hdfs](hdfs.md) 一些的。
+从文件创建表。 此表函数类似于 [url](../../sql-reference/table-functions/url.md) 和 [hdfs](../../sql-reference/table-functions/hdfs.md)。
+
+`file` 函数可用于对 [File](../../engines/table-engines/special/file.md) 表中的数据进行 `SELECT` 和 `INSERT` 查询。
+
+**语法**

 ``` sql
 file(path, format, structure)
 ```

-**输入参数**
+**参数**

-- `path` — The relative path to the file from [user_files_path](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-user_files_path). 只读模式下的globs后的文件支持路径: `*`, `?`, `{abc,def}` 和 `{N..M}` 哪里 `N`, `M` — numbers, \``'abc', 'def'` — strings.
-- `format` — The [格式](../../interfaces/formats.md#formats) 的文件。
-- `structure` — Structure of the table. Format `'column1_name column1_type, column2_name column2_type, ...'`.
+- `path` — [user_files_path](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-user_files_path)中文件的相对路径。在只读模式下,文件路径支持以下通配符: `*`, `?`, `{abc,def}` 和 `{N..M}`,其中 `N`, `M` 是数字,`'abc', 'def'` 是字符串。
+- `format` — 文件的[格式](../../interfaces/formats.md#formats)。
+- `structure` — 表的结构。格式 `'column1_name column1_type, column2_name column2_type, ...'`。

 **返回值**

@@ -25,7 +27,7 @@ file(path, format, structure)

 **示例**

-设置 `user_files_path` 和文件的内容 `test.csv`:
+设置 `user_files_path` 和文件 `test.csv` 的内容:

 ``` bash
 $ grep user_files_path /etc/clickhouse-server/config.xml
@@ -37,12 +39,10 @@ $ cat /var/lib/clickhouse/user_files/test.csv
 78,43,45
 ```

-表从`test.csv` 并从中选择前两行:
+从 `test.csv` 中的表获取数据,并选择前两行:

 ``` sql
-SELECT *
-FROM file('test.csv', 'CSV', 'column1 UInt32, column2 UInt32, column3 UInt32')
-LIMIT 2
+SELECT * FROM file('test.csv', 'CSV', 'column1 UInt32, column2 UInt32, column3 UInt32') LIMIT 2;
 ```

 ``` text
@@ -52,25 +52,40 @@ LIMIT 2
 └─────────┴─────────┴─────────┘
 ```

+从CSV文件获取包含3列 [UInt32](../../sql-reference/data-types/int-uint.md) 类型的表的前10行:
+
 ``` sql
--- getting the first 10 lines of a table that contains 3 columns of UInt32 type from a CSV file
-SELECT * FROM file('test.csv', 'CSV', 'column1 UInt32, column2 UInt32, column3 UInt32') LIMIT 10
+SELECT * FROM file('test.csv', 'CSV', 'column1 UInt32, column2 UInt32, column3 UInt32') LIMIT 10;
 ```

-**路径中的水珠**
+将文件中的数据插入表中:

-多个路径组件可以具有globs。 对于正在处理的文件应该存在并匹配到整个路径模式(不仅后缀或前缀)。
+``` sql
+INSERT INTO FUNCTION file('test.csv', 'CSV', 'column1 UInt32, column2 UInt32, column3 UInt32') VALUES (1, 2, 3), (3, 2, 1);
+SELECT * FROM file('test.csv', 'CSV', 'column1 UInt32, column2 UInt32, column3 UInt32');
+```

-- `*` — Substitutes any number of any characters except `/` 包括空字符串。
-- `?` — Substitutes any single character.
-- `{some_string,another_string,yet_another_one}` — Substitutes any of strings `'some_string', 'another_string', 'yet_another_one'`.
-- `{N..M}` — Substitutes any number in range from N to M including both borders.
+``` text
+┌─column1─┬─column2─┬─column3─┐
+│       1 │       2 │       3 │
+│       3 │       2 │       1 │
+└─────────┴─────────┴─────────┘
+```

-建筑与 `{}` 类似于 [远程表功能](../../sql-reference/table-functions/remote.md)).
+**路径中的通配符**
+
+多个路径组件可以具有通配符。要处理的文件必须存在,并且与整个路径模式匹配(而不仅是后缀或前缀)。
+
+- `*` — 替换除 `/` 以外的任意数量的任意字符,包括空字符串。
+- `?` — 替换任何单个字符。
+- `{some_string,another_string,yet_another_one}` — 替换任何字符串 `'some_string', 'another_string', 'yet_another_one'`。
+- `{N..M}` — 替换范围从N到M的任何数字(包括两个边界)。
+
+使用 `{}` 的构造类似于 [remote](../../sql-reference/table-functions/remote.md) 表函数。

 **示例**

-1. 假设我们有几个具有以下相对路径的文件:
+假设我们有几个文件,这些文件具有以下相对路径:

 - ‘some_dir/some_file_1’
 - ‘some_dir/some_file_2’
 - ‘some_dir/some_file_3’
 - ‘another_dir/some_file_1’
 - ‘another_dir/some_file_2’
 - ‘another_dir/some_file_3’

-1. 查询这些文件中的行数:
-
-
+查询这些文件中的行数:

 ``` sql
 SELECT count(*)
 FROM file('{some,another}_dir/some_file_{1..3}', 'TSV', 'name String, value UInt32')
 ```

-1. 查询这两个目录的所有文件中的行数:
-
-
+查询这两个目录的所有文件中的行数:

 ``` sql
 SELECT count(*)
@@ -98,11 +109,11 @@ FROM file('{some,another}_dir/*', 'TSV', 'name String, value UInt32')
 ```

 !!! warning "警告"
-    如果您的文件列表包含带前导零的数字范围,请单独使用带大括号的构造或使用 `?`.
+    如果您的文件列表包含带前导零的数字范围,请对每个数字分别使用带有大括号的结构或使用 `?`。

 **示例**

-从名为 `file000`, `file001`, … , `file999`:
+从名为 `file000`, `file001`, … , `file999`的文件中查询数据:

 ``` sql
 SELECT count(*)
@@ -111,8 +122,8 @@ FROM file('big_dir/file{0..9}{0..9}{0..9}', 'CSV', 'name String, value UInt32')
 ```

 ## 虚拟列 {#virtual-columns}

-- `_path` — Path to the file.
-- `_file` — Name of the file.
+- `_path` — 文件路径。
+- `_file` — 文件名称。

 **另请参阅**
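The glob patterns and the `_path`/`_file` virtual columns that file.md now documents can be combined; a hedged sketch, assuming the same `some_dir`/`another_dir` sample files listed above:

``` sql
-- Count rows per source file matched by the glob, via the `_file` virtual column.
SELECT _file, count() AS rows
FROM file('{some,another}_dir/*', 'TSV', 'name String, value UInt32')
GROUP BY _file
ORDER BY _file;
```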
diff --git a/docs/zh/sql-reference/table-functions/generate.md b/docs/zh/sql-reference/table-functions/generate.md
index 1b535161acb..b9b02793cf3 100644
--- a/docs/zh/sql-reference/table-functions/generate.md
+++ b/docs/zh/sql-reference/table-functions/generate.md
@@ -1,15 +1,13 @@
 ---
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
 toc_priority: 47
 toc_title: generateRandom
 ---

 # generateRandom {#generaterandom}

-使用给定的模式生成随机数据。
-允许用数据填充测试表。
-支持可以存储在表中的所有数据类型,除了 `LowCardinality` 和 `AggregateFunction`.
+生成具有给定模式的随机数据。
+允许用数据来填充测试表。
+支持所有可以存储在表中的数据类型, `LowCardinality` 和 `AggregateFunction` 除外。

 ``` sql
 generateRandom('name TypeName[, name TypeName]...', [, 'random_seed'[, 'max_string_length'[, 'max_array_length']]]);
@@ -17,15 +15,15 @@ generateRandom('name TypeName[, name TypeName]...', [, 'random_seed'[, 'max_stri

 **参数**

-- `name` — Name of corresponding column.
-- `TypeName` — Type of corresponding column.
-- `max_array_length` — Maximum array length for all generated arrays. Defaults to `10`.
-- `max_string_length` — Maximum string length for all generated strings. Defaults to `10`.
-- `random_seed` — Specify random seed manually to produce stable results. If NULL — seed is randomly generated.
+- `name` — 对应列的名称。
+- `TypeName` — 对应列的类型。
+- `max_array_length` — 生成数组的最大长度。 默认为10。
+- `max_string_length` — 生成字符串的最大长度。 默认为10。
+- `random_seed` — 手动指定随机种子以产生稳定的结果。 如果为NULL,则随机生成种子。

 **返回值**

-具有请求架构的表对象。
+具有请求模式的表对象。

 ## 用法示例 {#usage-example}
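A brief sketch of how the generateRandom parameters documented above fit together; the schema, seed, and table name are arbitrary choices for illustration:

``` sql
-- Schema string, then random_seed = 1, max_string_length = 10, max_array_length = 2.
SELECT * FROM generateRandom('id UInt32, name String, scores Array(Float32)', 1, 10, 2) LIMIT 3;

-- Filling a test table with random rows, as the description above suggests.
CREATE TABLE test_data (id UInt32, name String) ENGINE = MergeTree ORDER BY id;
INSERT INTO test_data SELECT * FROM generateRandom('id UInt32, name String', 1) LIMIT 1000;
```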
diff --git a/docs/zh/sql-reference/table-functions/hdfs.md b/docs/zh/sql-reference/table-functions/hdfs.md
index 112c88450e2..715d9671dc8 100644
--- a/docs/zh/sql-reference/table-functions/hdfs.md
+++ b/docs/zh/sql-reference/table-functions/hdfs.md
@@ -1,13 +1,11 @@
 ---
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
 toc_priority: 45
 toc_title: hdfs
 ---

 # hdfs {#hdfs}

-从HDFS中的文件创建表。 此表函数类似于 [url](url.md) 和 [文件](file.md) 一些的。
+根据HDFS中的文件创建表。 该表函数类似于 [url](url.md) 和 [文件](file.md)。

 ``` sql
 hdfs(URI, format, structure)
@@ -15,9 +13,9 @@ hdfs(URI, format, structure)

 **输入参数**

-- `URI` — The relative URI to the file in HDFS. Path to file support following globs in readonly mode: `*`, `?`, `{abc,def}` 和 `{N..M}` 哪里 `N`, `M` — numbers, \``'abc', 'def'` — strings.
-- `format` — The [格式](../../interfaces/formats.md#formats) 的文件。
-- `structure` — Structure of the table. Format `'column1_name column1_type, column2_name column2_type, ...'`.
+- `URI` — HDFS中文件的相对URI。 在只读模式下,文件路径支持以下通配符: `*`, `?`, `{abc,def}` 和 `{N..M}`,其中 `N`, `M` 是数字,`'abc', 'def'` 是字符串。
+- `format` — 文件的[格式](../../interfaces/formats.md#formats)。
+- `structure` — 表的结构。格式 `'column1_name column1_type, column2_name column2_type, ...'`。

 **返回值**

@@ -25,7 +23,7 @@ hdfs(URI, format, structure)

 **示例**

-表从 `hdfs://hdfs1:9000/test` 并从中选择前两行:
+从 `hdfs://hdfs1:9000/test` 中的表获取数据,并选择前两行:

 ``` sql
 SELECT *
@@ -40,20 +38,20 @@ LIMIT 2
 └─────────┴─────────┴─────────┘
 ```

-**路径中的水珠**
+**路径中的通配符**

-多个路径组件可以具有globs。 对于正在处理的文件应该存在并匹配到整个路径模式(不仅后缀或前缀)。
+多个路径组件可以具有通配符。要处理的文件必须存在,并且与整个路径模式匹配(而不仅是后缀或前缀)。

-- `*` — Substitutes any number of any characters except `/` 包括空字符串。
-- `?` — Substitutes any single character.
-- `{some_string,another_string,yet_another_one}` — Substitutes any of strings `'some_string', 'another_string', 'yet_another_one'`.
-- `{N..M}` — Substitutes any number in range from N to M including both borders.
+- `*` — 替换除 `/` 以外的任意数量的任意字符,包括空字符串。
+- `?` — 替换任何单个字符。
+- `{some_string,another_string,yet_another_one}` — 替换任何字符串 `'some_string', 'another_string', 'yet_another_one'`。
+- `{N..M}` — 替换范围从N到M的任何数字(包括两个边界)。

-建筑与 `{}` 类似于 [远程表功能](../../sql-reference/table-functions/remote.md)).
+使用 `{}` 的构造类似于 [remote](../../sql-reference/table-functions/remote.md) 表函数。

 **示例**

-1. 假设我们在HDFS上有几个具有以下Uri的文件:
+1. 假设我们在HDFS上有几个带有以下URI的文件:

 - ‘hdfs://hdfs1:9000/some_dir/some_file_1’
 - ‘hdfs://hdfs1:9000/some_dir/some_file_2’
 - ‘hdfs://hdfs1:9000/some_dir/some_file_3’
 - ‘hdfs://hdfs1:9000/another_dir/some_file_1’
 - ‘hdfs://hdfs1:9000/another_dir/some_file_2’
 - ‘hdfs://hdfs1:9000/another_dir/some_file_3’

-1. 查询这些文件中的行数:
+2. 查询这些文件中的行数:

 ``` sql
 SELECT count(*)
@@ -71,7 +69,7 @@ FROM hdfs('hdfs://hdfs1:9000/{some,another}_dir/some_file_{1..3}', 'TSV', 'name String, value UInt32')
 ```

-1. 查询这两个目录的所有文件中的行数:
+3. 查询这两个目录的所有文件中的行数:

 ``` sql
 SELECT count(*)
@@ -81,11 +79,11 @@ FROM hdfs('hdfs://hdfs1:9000/{some,another}_dir/*', 'TSV', 'name String, value U
 ```

 !!! warning "警告"
-    如果您的文件列表包含带前导零的数字范围,请单独使用带大括号的构造或使用 `?`.
+    如果您的文件列表包含带前导零的数字范围,请对每个数字分别使用带有大括号的结构或使用 `?`。

 **示例**

-从名为 `file000`, `file001`, … , `file999`:
+从名为 `file000`, `file001`, … , `file999`的文件中查询数据:

 ``` sql
 SELECT count(*)
@@ -94,8 +92,8 @@ FROM hdfs('hdfs://hdfs1:9000/big_dir/file{0..9}{0..9}{0..9}', 'CSV', 'name Strin
 ```

 ## 虚拟列 {#virtual-columns}

-- `_path` — Path to the file.
-- `_file` — Name of the file.
+- `_path` — 文件路径。
+- `_file` — 文件名称。

 **另请参阅**

diff --git a/docs/zh/sql-reference/table-functions/index.md b/docs/zh/sql-reference/table-functions/index.md
index d9eadb9c592..20a335de0fc 100644
--- a/docs/zh/sql-reference/table-functions/index.md
+++ b/docs/zh/sql-reference/table-functions/index.md
@@ -1,38 +1,36 @@
 ---
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_folder_title: "\u8868\u51FD\u6570"
+toc_folder_title: 表函数
 toc_priority: 34
 toc_title: "\u5BFC\u8A00"
 ---

 # 表函数 {#table-functions}

-表函数是构造表的方法。
+表函数是用来构造表的方法。

-您可以使用表函数:
+您可以在以下位置使用表函数:

-- [FROM](../statements/select/from.md) 《公约》条款 `SELECT` 查询。
+- `SELECT` 查询的[FROM](../../sql-reference/statements/select/from.md)子句。

-   The method for creating a temporary table that is available only in the current query. The table is deleted when the query finishes.
+   创建临时表的方法,该临时表仅在当前查询中可用。当查询完成后,该临时表将被删除。

-- [创建表为\<table_function()\>](../statements/create.md#create-table-query) 查询。
+- [CREATE TABLE AS \<table_function()\>](../statements/create.md#create-table-query) 查询。

-   It's one of the methods of creating a table.
+   这是创建表的方法之一。

 !!! warning "警告"
-    你不能使用表函数,如果 [allow_ddl](../../operations/settings/permissions-for-queries.md#settings_allow_ddl) 设置被禁用。
+    如果 [allow_ddl](../../operations/settings/permissions-for-queries.md#settings_allow_ddl) 设置被禁用,则不能使用表函数。

-| 功能               | 产品描述                                                                                                 |
-|--------------------|--------------------------------------------------------------------------------------------------------|
-| [文件](file.md)    | 创建一个 [文件](../../engines/table-engines/special/file.md)-发动机表。                                 |
-| [合并](merge.md)   | 创建一个 [合并](../../engines/table-engines/special/merge.md)-发动机表。                                |
-| [数字](numbers.md) | 创建一个包含整数填充的单列的表。                                                                         |
-| [远程](remote.md)  | 允许您访问远程服务器,而无需创建 [分布](../../engines/table-engines/special/distributed.md)-发动机表。  |
-| [url](url.md)      | 创建一个 [Url](../../engines/table-engines/special/url.md)-发动机表。                                   |
-| [mysql](mysql.md)  | 创建一个 [MySQL](../../engines/table-engines/integrations/mysql.md)-发动机表。                          |
-| [jdbc](jdbc.md)    | 创建一个 [JDBC](../../engines/table-engines/integrations/jdbc.md)-发动机表。                            |
-| [odbc](odbc.md)    | 创建一个 [ODBC](../../engines/table-engines/integrations/odbc.md)-发动机表。                            |
-| [hdfs](hdfs.md)    | 创建一个 [HDFS](../../engines/table-engines/integrations/hdfs.md)-发动机表。                            |
+| 函数                                                            | 描述                                                                                                                                     |
+|-----------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------|
+| [file](../../sql-reference/table-functions/file.md)             | 创建一个file引擎表。                                                                                                                       |
+| [merge](../../sql-reference/table-functions/merge.md)           | 创建一个merge引擎表。                                                                                                                      |
+| [numbers](../../sql-reference/table-functions/numbers.md)       | 创建一个单列的表,其中包含整数。                                                                                                           |
+| [remote](../../sql-reference/table-functions/remote.md)         | 允许您访问远程服务器,而无需创建分布式表。                                                                                                 |
+| [url](../../sql-reference/table-functions/url.md)               | 创建一个URL引擎表。                                                                                                                        |
+| [mysql](../../sql-reference/table-functions/mysql.md)           | 创建一个MySQL引擎表。                                                                                                                      |
+| [jdbc](../../sql-reference/table-functions/jdbc.md)             | 创建一个JDBC引擎表。                                                                                                                       |
+| [odbc](../../sql-reference/table-functions/odbc.md)             | 创建一个ODBC引擎表。                                                                                                                       |
+| [hdfs](../../sql-reference/table-functions/hdfs.md)             | 创建一个HDFS引擎表。                                                                                                                       |

 [原始文章](https://clickhouse.tech/docs/en/query_language/table_functions/)

diff --git a/docs/zh/sql-reference/table-functions/input.md b/docs/zh/sql-reference/table-functions/input.md
index 42b354dc935..a0215b26c8a 100644
--- a/docs/zh/sql-reference/table-functions/input.md
+++ b/docs/zh/sql-reference/table-functions/input.md
@@ -1,33 +1,29 @@
 ---
-machine_translated: true
-machine_translated_rev:
72537a2d527c63c07aa5d2361a8829f3895cf2bd
 toc_priority: 46
-toc_title: "\u8F93\u5165"
+toc_title: input
 ---

-# 输入 {#input}
+# input {#input}

-`input(structure)` -表功能,允许有效地转换和插入数据发送到
-服务器与给定结构的表与另一种结构。
+`input(structure)` -表函数,可以有效地将发送给服务器的数据转换为具有给定结构的数据并将其插入到具有其他结构的表中。

-`structure` -以下格式发送到服务器的数据结构 `'column1_name column1_type, column2_name column2_type, ...'`.
-例如, `'id UInt32, name String'`.
+`structure` -发送到服务器的数据结构的格式 `'column1_name column1_type, column2_name column2_type, ...'`。
+例如, `'id UInt32, name String'`。

-此功能只能用于 `INSERT SELECT` 查询,只有一次,但其他行为像普通表函数
+该函数只能在 `INSERT SELECT` 查询中使用,并且只能使用一次,但在其他方面,行为类似于普通的表函数
 (例如,它可以用于子查询等)。

-数据可以以任何方式像普通发送 `INSERT` 查询并传递任何可用 [格式](../../interfaces/formats.md#formats)
-必须在查询结束时指定(不像普通 `INSERT SELECT`).
+数据可以像普通 `INSERT` 查询一样发送,并以必须在查询末尾指定的任何可用[格式](../../interfaces/formats.md#formats)
+传递(与普通 `INSERT SELECT`不同)。

-这个功能的主要特点是,当服务器从客户端接收数据时,它同时将其转换
-根据表达式中的列表 `SELECT` 子句并插入到目标表中。 临时表
-不创建所有传输的数据。
+该函数的主要特点是,当服务器从客户端接收数据时,它会同时根据 `SELECT` 子句中的表达式列表将其转换,并插入到目标表中。
+不会创建包含所有已传输数据的临时表。

 **例**

-   让 `test` 表具有以下结构 `(a String, b String)`
-   和数据 `data.csv` 具有不同的结构 `(col1 String, col2 Date, col3 Int32)`. 查询插入
-   从数据 `data.csv` 进 `test` 同时转换的表如下所示:
+   并且 `data.csv` 中的数据具有不同的结构 `(col1 String, col2 Date, col3 Int32)`。
+   将数据从 `data.csv` 插入到 `test` 表中,同时进行转换的查询如下所示:

@@ -35,7 +31,7 @@ toc_title: "\u8F93\u5165"
 $ cat data.csv | clickhouse-client --query="INSERT INTO test SELECT lower(col1), col3 * col3 FROM input('col1 String, col2 Date, col3 Int32') FORMAT CSV";
 ```

-- 如果 `data.csv` 包含相同结构的数据 `test_structure` 作为表 `test` 那么这两个查询是相等的:
+- 如果 `data.csv` 包含与表 `test` 相同结构 `test_structure` 的数据,那么这两个查询是相等的:

diff --git a/docs/zh/sql-reference/table-functions/jdbc.md b/docs/zh/sql-reference/table-functions/jdbc.md
index c1833462171..af8c82f0097 100644
--- a/docs/zh/sql-reference/table-functions/jdbc.md
+++ b/docs/zh/sql-reference/table-functions/jdbc.md
@@ -1,6 +1,4 @@
 ---
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
 toc_priority: 43
 toc_title: jdbc
 ---
@@ -9,10 +7,10 @@ toc_title: jdbc

 `jdbc(jdbc_connection_uri, schema, table)` -返回通过JDBC驱动程序连接的表。

-此表函数需要单独的 `clickhouse-jdbc-bridge` 程序正在运行。
+此表函数需要单独的 `clickhouse-jdbc-bridge` 程序才能运行。
 它支持可空类型(基于查询的远程表的DDL)。

-**例**
+**示例**

 ``` sql
 SELECT * FROM jdbc('jdbc:mysql://localhost:3306/?user=root&password=root', 'schema', 'table')

diff --git a/docs/zh/sql-reference/table-functions/merge.md b/docs/zh/sql-reference/table-functions/merge.md
index 0e94dcc4d42..410468b3d8a 100644
--- a/docs/zh/sql-reference/table-functions/merge.md
+++ b/docs/zh/sql-reference/table-functions/merge.md
@@ -1,14 +1,12 @@
 ---
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
 toc_priority: 38
-toc_title: "\u5408\u5E76"
+toc_title: merge
 ---

-# 合并 {#merge}
+# merge {#merge}

-`merge(db_name, 'tables_regexp')` – Creates a temporary Merge table. For more information, see the section “Table engines, Merge”.
+`merge(db_name, 'tables_regexp')` – 创建一个临时Merge表。 有关更多信息,请参见 “Table engines, Merge”。

-表结构取自与正则表达式匹配的第一个表。
+表结构取自遇到的第一个与正则表达式匹配的表。

 [原始文章](https://clickhouse.tech/docs/en/query_language/table_functions/merge/)
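merge.md describes the function without showing a query, so a minimal hedged sketch may help; the `log_` prefix is illustrative, and passing `currentDatabase()` as the database argument is assumed to be accepted here:

``` sql
-- Read every table of the current database whose name starts with 'log_' as one table.
SELECT * FROM merge(currentDatabase(), '^log_') LIMIT 10;
```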
diff --git a/docs/zh/sql-reference/table-functions/numbers.md b/docs/zh/sql-reference/table-functions/numbers.md
index e5f13d60791..59a57b157e0 100644
--- a/docs/zh/sql-reference/table-functions/numbers.md
+++ b/docs/zh/sql-reference/table-functions/numbers.md
@@ -1,18 +1,16 @@
 ---
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
 toc_priority: 39
-toc_title: "\u6570\u5B57"
+toc_title: numbers
 ---

-# 数字 {#numbers}
+# numbers {#numbers}

-`numbers(N)` – Returns a table with the single ‘number’ 包含从0到N-1的整数的列(UInt64)。
-`numbers(N, M)` -返回一个表与单 ‘number’ 包含从N到(N+M-1)的整数的列(UInt64)。
+`numbers(N)` – 返回一个包含单个 ‘number’ 列(UInt64)的表,其中包含从0到N-1的整数。
+`numbers(N, M)` - 返回一个包含单个 ‘number’ 列(UInt64)的表,其中包含从N到(N+M-1)的整数。

-类似于 `system.numbers` 表,它可以用于测试和生成连续的值, `numbers(N, M)` 比 `system.numbers`.
+类似于 `system.numbers` 表,它可以用于测试和生成连续的值, `numbers(N, M)` 比 `system.numbers` 更有效。

-以下查询是等效的:
+以下查询是等价的:

 ``` sql
 SELECT * FROM numbers(10);
 SELECT * FROM numbers(0, 10);
 SELECT * FROM system.numbers LIMIT 10;
 ```

-例:
+示例:

 ``` sql
--- Generate a sequence of dates from 2010-01-01 to 2010-12-31
+-- 生成2010-01-01至2010-12-31的日期序列
 select toDate('2010-01-01') + number as d FROM numbers(365);
 ```

diff --git a/docs/zh/sql-reference/table-functions/odbc.md b/docs/zh/sql-reference/table-functions/odbc.md
index 95fb2277474..dd2826e892f 100644
--- a/docs/zh/sql-reference/table-functions/odbc.md
+++ b/docs/zh/sql-reference/table-functions/odbc.md
@@ -1,13 +1,11 @@
 ---
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
 toc_priority: 44
 toc_title: odbc
 ---

 # odbc {#table-functions-odbc}

-返回通过连接的表 [ODBC](https://en.wikipedia.org/wiki/Open_Database_Connectivity).
+返回通过 [ODBC](https://en.wikipedia.org/wiki/Open_Database_Connectivity) 连接的表。

 ``` sql
 odbc(connection_settings, external_database, external_table)
@@ -15,23 +13,23 @@ odbc(connection_settings, external_database, external_table)

 参数:

-- `connection_settings` — Name of the section with connection settings in the `odbc.ini` 文件
-- `external_database` — Name of a database in an external DBMS.
-- `external_table` — Name of a table in the `external_database`.
+- `connection_settings` — 在 `odbc.ini` 文件中连接设置的部分的名称。
+- `external_database` — 外部DBMS的数据库名。
+- `external_table` — `external_database` 数据库中的表名。

-为了安全地实现ODBC连接,ClickHouse使用单独的程序 `clickhouse-odbc-bridge`. 如果直接从ODBC驱动程序加载 `clickhouse-server`,驱动程序问题可能会导致ClickHouse服务器崩溃。 ClickHouse自动启动 `clickhouse-odbc-bridge` 当它是必需的。 ODBC桥程序是从相同的软件包作为安装 `clickhouse-server`.
+为了安全地实现ODBC连接,ClickHouse使用单独的程序 `clickhouse-odbc-bridge`。 如果ODBC驱动程序直接从 `clickhouse-server` 加载,则驱动程序问题可能会导致ClickHouse服务器崩溃。 当需要时,ClickHouse自动启动 `clickhouse-odbc-bridge`。 ODBC桥程序是从与 `clickhouse-server` 相同的软件包安装的。

-与字段 `NULL` 外部表中的值将转换为基数据类型的默认值。 例如,如果远程MySQL表字段具有 `INT NULL` 键入它将转换为0(ClickHouse的默认值 `Int32` 数据类型)。
+外部表中字段包含的 `NULL` 值将转换为基础数据类型的默认值。 例如,如果远程MySQL表字段包含 `INT NULL` 类型,则将被转换为0(ClickHouse `Int32` 数据类型的默认值)。

 ## 用法示例 {#usage-example}

-**通过ODBC从本地MySQL安装获取数据**
+**通过ODBC从本地安装的MySQL获取数据**

-此示例检查Ubuntu Linux18.04和MySQL服务器5.7。
+此示例在Ubuntu Linux 18.04和MySQL服务器5.7上测试通过。

-确保安装了unixODBC和MySQL连接器。
+确保已经安装了unixODBC和MySQL连接器。

-默认情况下(如果从软件包安装),ClickHouse以用户身份启动 `clickhouse`.
因此,您需要在MySQL服务器中创建和配置此用户。 +默认情况下(如果从软件包安装),ClickHouse以用户 `clickhouse` 启动。 因此,您需要在MySQL服务器中创建和配置此用户。 ``` bash $ sudo mysql @@ -42,7 +40,7 @@ mysql> CREATE USER 'clickhouse'@'localhost' IDENTIFIED BY 'clickhouse'; mysql> GRANT ALL PRIVILEGES ON *.* TO 'clickhouse'@'clickhouse' WITH GRANT OPTION; ``` -然后配置连接 `/etc/odbc.ini`. +然后在 `/etc/odbc.ini` 中配置连接。 ``` bash $ cat /etc/odbc.ini @@ -55,7 +53,7 @@ USERNAME = clickhouse PASSWORD = clickhouse ``` -您可以使用 `isql` unixodbc安装中的实用程序。 +您可以使用unixODBC安装的 `isql` 实用程序检查连接。 ``` bash $ isql -v mysqlconn diff --git a/docs/zh/sql-reference/table-functions/remote.md b/docs/zh/sql-reference/table-functions/remote.md index b7bd494609b..cacc68c0b71 100644 --- a/docs/zh/sql-reference/table-functions/remote.md +++ b/docs/zh/sql-reference/table-functions/remote.md @@ -1,22 +1,52 @@ -# 远程,远程安全 {#remote-remotesecure} +# remote, remoteSecure {#remote-remotesecure} -允许您访问远程服务器,而无需创建 `Distributed` 表 +允许您访问远程服务器,而无需创建 `Distributed` 表。`remoteSecure` - 与 `remote` 相同,但是会使用加密链接。 -签名: +这两个函数都可以在 `SELECT` 和 `INSERT` 查询中使用。 + +语法: ``` sql -remote('addresses_expr', db, table[, 'user'[, 'password']]) -remote('addresses_expr', db.table[, 'user'[, 'password']]) -remoteSecure('addresses_expr', db, table[, 'user'[, 'password']]) -remoteSecure('addresses_expr', db.table[, 'user'[, 'password']]) +remote('addresses_expr', db, table[, 'user'[, 'password'], sharding_key]) +remote('addresses_expr', db.table[, 'user'[, 'password'], sharding_key]) +remoteSecure('addresses_expr', db, table[, 'user'[, 'password'], sharding_key]) +remoteSecure('addresses_expr', db.table[, 'user'[, 'password'], sharding_key]) ``` -`addresses_expr` – 代表远程服务器地址的一个表达式。可以只是单个服务器地址。 服务器地址可以是 `host:port` 或 `host`。`host` 可以指定为服务器域名,或是IPV4或IPV6地址。IPv6地址在方括号中指定。`port` 是远程服务器上的TCP端口。 如果省略端口,则使用服务器配置文件中的 `tcp_port` (默认情况为,9000)。 +**参数** + +- `addresses_expr` – 代表远程服务器地址的一个表达式。可以只是单个服务器地址。 服务器地址可以是 `host:port` 或 `host`。 + + `host` 可以指定为服务器名称,或是IPV4或IPV6地址。IPv6地址在方括号中指定。 + + `port` 是远程服务器上的TCP端口。 如果省略端口,则 `remote` 使用服务器配置文件中的 [tcp_port](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-tcp_port) (默认情况为,9000),`remoteSecure` 使用 [tcp_port_secure](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-tcp_port_secure) (默认情况为,9440)。 -!!! 
important "重要事项"
    IPv6地址需要指定端口。
+
+    类型: [String](../../sql-reference/data-types/string.md)。
+
+ - `db` — 数据库名。类型: [String](../../sql-reference/data-types/string.md)。
+ - `table` — 表名。类型: [String](../../sql-reference/data-types/string.md)。
+ - `user` — 用户名。如果未指定用户,则使用 `default` 。类型: [String](../../sql-reference/data-types/string.md)。
+ - `password` — 用户密码。如果未指定密码,则使用空密码。类型: [String](../../sql-reference/data-types/string.md)。
+ - `sharding_key` — 分片键以支持在节点之间分布数据。 例如: `insert into remote('127.0.0.1:9000,127.0.0.2', db, table, 'default', rand())`。 类型: [UInt32](../../sql-reference/data-types/int-uint.md)。
+
+ **返回值**
+
+ 来自远程服务器的数据集。
+
+ **用法**
+
+ 使用 `remote` 表函数不如创建一个 `Distributed` 表更优,因为在这种情况下,将为每个请求重新建立服务器连接。此外,如果设置了主机名,则会解析这些名称,并且在使用各种副本时不会计入错误。 在处理大量查询时,始终优先创建 `Distributed` 表,不要使用 `remote` 表函数。

+ 在以下情况下,`remote` 表函数很有用:
+
+ - 访问特定服务器进行数据比较、调试和测试。
+ - 出于研究目的,在多个ClickHouse集群之间进行查询。
+ - 手动发出的不频繁分布式请求。
+ - 每次重新定义服务器集的分布式请求。
+
+ **地址**

 ``` text
 example01-01-1
@@ -29,8 +59,6 @@ localhost

 多个地址可以用逗号分隔。在这种情况下,ClickHouse将使用分布式处理,因此它将查询发送到所有指定的地址(如具有不同数据的分片)。

-示例:
-
 ``` text
 example01-01-1,example01-02-1
 ```

 如果您有多对大括号,它会生成相应集合的直接乘积。

-大括号中的地址和部分地址可以用管道符号(\|)分隔。 在这种情况下,相应的地址集被解释为副本,并且查询将被发送到第一个正常副本。 但是,副本将按照当前[load_balancing](../../operations/settings/settings.md)设置的顺序进行迭代。
-
-示例:
+大括号中的地址和部分地址可以用管道符号(\|)分隔。 在这种情况下,相应的地址集被解释为副本,并且查询将被发送到第一个正常副本。 但是,副本将按照当前[load_balancing](../../operations/settings/settings.md)设置的顺序进行迭代。此示例指定两个分片,每个分片都有两个副本:

 ``` text
 example01-{01..02}-{1|2}
 ```

-此示例指定两个分片,每个分片都有两个副本。
-
 生成的地址数由常量限制。目前这是1000个地址。

-使用 `remote` 表函数没有创建一个 `Distributed` 表更优,因为在这种情况下,将为每个请求重新建立服务器连接。此外,如果设置了主机名,则会解析这些名称,并且在使用各种副本时不会计算错误。 在处理大量查询时,始终优先创建 `Distributed` 表,不要使用 `remote` 表功能。
+**示例**

-该 `remote` 表函数可以在以下情况下是有用的:
+从远程服务器选择数据:

-- 访问特定服务器进行数据比较、调试和测试。
-- 在多个ClickHouse集群之间的用户研究目的的查询。
-- 手动发出的不频繁分布式请求。
-- 每次重新定义服务器集的分布式请求。
+``` sql
+SELECT * FROM remote('127.0.0.1', db.remote_engine_table) LIMIT 3;
+```

-如果未指定用户, 将会使用`default`。
-如果未指定密码,则使用空密码。
+将远程服务器中的数据插入表中:

-`remoteSecure` - 与 `remote` 相同,但是会使用加密链接。默认端口为配置文件中的[tcp_port_secure](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-tcp_port_secure),或9440。
+``` sql
+CREATE TABLE remote_table (name String, value UInt32) ENGINE=Memory;
+INSERT INTO FUNCTION remote('127.0.0.1', currentDatabase(), 'remote_table') VALUES ('test', 42);
+SELECT * FROM remote_table;
+```

 [原始文章](https://clickhouse.tech/docs/en/query_language/table_functions/remote/)
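The shard/replica address patterns documented in remote.md above can be combined in a single call; a hedged sketch using the doc's own placeholder hostnames:

``` sql
-- Two shards ({01..02}), two replicas each ({1|2}); the first healthy replica
-- of each shard answers. system.one has exactly one row, so count() returns
-- the number of shards reached.
SELECT count() FROM remote('example01-{01..02}-{1|2}', system.one);
```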
diff --git a/docs/zh/sql-reference/table-functions/url.md b/docs/zh/sql-reference/table-functions/url.md
index c2efe09913a..d726cddd748 100644
--- a/docs/zh/sql-reference/table-functions/url.md
+++ b/docs/zh/sql-reference/table-functions/url.md
@@ -1,26 +1,43 @@
 ---
-machine_translated: true
-machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
 toc_priority: 41
 toc_title: url
 ---

 # url {#url}

-`url(URL, format, structure)` -返回从创建的表 `URL` 与给定
-`format` 和 `structure`.
+`url` 函数从 `URL` 创建一个具有给定 `format` 和 `structure` 的表。

-URL-HTTP或HTTPS服务器地址,它可以接受 `GET` 和/或 `POST` 请求。
+`url` 函数可用于对 [URL](../../engines/table-engines/special/url.md) 表中的数据进行 `SELECT` 和 `INSERT` 查询。

-格式 - [格式](../../interfaces/formats.md#formats) 的数据。
+**语法**

-结构-表结构 `'UserID UInt64, Name String'` 格式。 确定列名称和类型。
+``` sql
+url(URL, format, structure)
+```
+
+**参数**
+
+- `URL` — HTTP或HTTPS服务器地址,它可以接受 `GET` 或 `POST` 请求 (对应于 `SELECT` 或 `INSERT` 查询)。类型: [String](../../sql-reference/data-types/string.md)。
+- `format` — 数据[格式](../../interfaces/formats.md#formats)。类型: [String](../../sql-reference/data-types/string.md)。
+- `structure` — 以 `'UserID UInt64, Name String'` 格式的表结构。确定列名和类型。 类型: [String](../../sql-reference/data-types/string.md)。
+
+**返回值**
+
+具有指定格式和结构、并包含来自指定 `URL` 数据的表。

 **示例**

+从以 [CSV](../../interfaces/formats.md#csv) 格式应答的HTTP服务器获取包含 `String` 和 [UInt32](../../sql-reference/data-types/int-uint.md) 类型列的表的前3行。
+
 ``` sql
--- getting the first 3 lines of a table that contains columns of String and UInt32 type from HTTP-server which answers in CSV format.
-SELECT * FROM url('http://127.0.0.1:12345/', CSV, 'column1 String, column2 UInt32') LIMIT 3
+SELECT * FROM url('http://127.0.0.1:12345/', CSV, 'column1 String, column2 UInt32') LIMIT 3;
 ```

+将 `URL` 的数据插入到表中:
+
+``` sql
+CREATE TABLE test_table (column1 String, column2 UInt32) ENGINE=Memory;
+INSERT INTO FUNCTION url('http://127.0.0.1:8123/?query=INSERT+INTO+test_table+FORMAT+CSV', 'CSV', 'column1 String, column2 UInt32') VALUES ('http interface', 42);
+SELECT * FROM test_table;
+```

 [原始文章](https://clickhouse.tech/docs/en/query_language/table_functions/url/)

diff --git a/programs/CMakeLists.txt b/programs/CMakeLists.txt
index 9adca58b55a..09199e83026 100644
--- a/programs/CMakeLists.txt
+++ b/programs/CMakeLists.txt
@@ -33,7 +33,14 @@ option (ENABLE_CLICKHOUSE_OBFUSCATOR "Table data obfuscator (convert real data t
 ${ENABLE_CLICKHOUSE_ALL})

 # https://clickhouse.tech/docs/en/operations/utilities/odbc-bridge/
-option (ENABLE_CLICKHOUSE_ODBC_BRIDGE "HTTP-server working like a proxy to ODBC driver"
+if (ENABLE_ODBC)
+    option (ENABLE_CLICKHOUSE_ODBC_BRIDGE "HTTP-server working like a proxy to ODBC driver"
+        ${ENABLE_CLICKHOUSE_ALL})
+else ()
+    option (ENABLE_CLICKHOUSE_ODBC_BRIDGE "HTTP-server working like a proxy to ODBC driver" OFF)
+endif ()
+
+option (ENABLE_CLICKHOUSE_LIBRARY_BRIDGE "HTTP-server working like a proxy to Library dictionary source"
 ${ENABLE_CLICKHOUSE_ALL})

 # https://presentations.clickhouse.tech/matemarketing_2020/
@@ -109,6 +116,12 @@ else()
 message(STATUS "ODBC bridge mode: OFF")
 endif()

+if (ENABLE_CLICKHOUSE_LIBRARY_BRIDGE)
+    message(STATUS "Library bridge mode: ON")
+else()
+    message(STATUS "Library bridge mode: OFF")
+endif()
+
 if (ENABLE_CLICKHOUSE_INSTALL)
 message(STATUS "ClickHouse install: ON")
 else()
@@ -188,11 +201,16 @@ add_subdirectory (format)
 add_subdirectory (obfuscator)
 add_subdirectory (install)
 add_subdirectory (git-import)
+add_subdirectory (bash-completion)

 if (ENABLE_CLICKHOUSE_ODBC_BRIDGE)
 add_subdirectory (odbc-bridge)
 endif ()

+if (ENABLE_CLICKHOUSE_LIBRARY_BRIDGE)
+    add_subdirectory (library-bridge)
+endif ()
+
 if (CLICKHOUSE_ONE_SHARED)
 add_library(clickhouse-lib SHARED ${CLICKHOUSE_SERVER_SOURCES} ${CLICKHOUSE_CLIENT_SOURCES} ${CLICKHOUSE_LOCAL_SOURCES} ${CLICKHOUSE_BENCHMARK_SOURCES} ${CLICKHOUSE_COPIER_SOURCES} ${CLICKHOUSE_EXTRACT_FROM_CONFIG_SOURCES} ${CLICKHOUSE_COMPRESSOR_SOURCES} ${CLICKHOUSE_FORMAT_SOURCES}
${CLICKHOUSE_OBFUSCATOR_SOURCES} ${CLICKHOUSE_GIT_IMPORT_SOURCES} ${CLICKHOUSE_ODBC_BRIDGE_SOURCES}) target_link_libraries(clickhouse-lib ${CLICKHOUSE_SERVER_LINK} ${CLICKHOUSE_CLIENT_LINK} ${CLICKHOUSE_LOCAL_LINK} ${CLICKHOUSE_BENCHMARK_LINK} ${CLICKHOUSE_COPIER_LINK} ${CLICKHOUSE_EXTRACT_FROM_CONFIG_LINK} ${CLICKHOUSE_COMPRESSOR_LINK} ${CLICKHOUSE_FORMAT_LINK} ${CLICKHOUSE_OBFUSCATOR_LINK} ${CLICKHOUSE_GIT_IMPORT_LINK} ${CLICKHOUSE_ODBC_BRIDGE_LINK}) @@ -208,6 +226,10 @@ if (CLICKHOUSE_SPLIT_BINARY) list (APPEND CLICKHOUSE_ALL_TARGETS clickhouse-odbc-bridge) endif () + if (ENABLE_CLICKHOUSE_LIBRARY_BRIDGE) + list (APPEND CLICKHOUSE_ALL_TARGETS clickhouse-library-bridge) + endif () + set_target_properties(${CLICKHOUSE_ALL_TARGETS} PROPERTIES RUNTIME_OUTPUT_DIRECTORY ..) add_custom_target (clickhouse-bundle ALL DEPENDS ${CLICKHOUSE_ALL_TARGETS}) @@ -262,52 +284,52 @@ else () set (CLICKHOUSE_BUNDLE) if (ENABLE_CLICKHOUSE_SERVER) add_custom_target (clickhouse-server ALL COMMAND ${CMAKE_COMMAND} -E create_symlink clickhouse clickhouse-server DEPENDS clickhouse) - install (FILES ${CMAKE_CURRENT_BINARY_DIR}/clickhouse-server DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT clickhouse) + install (FILES "${CMAKE_CURRENT_BINARY_DIR}/clickhouse-server" DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT clickhouse) list(APPEND CLICKHOUSE_BUNDLE clickhouse-server) endif () if (ENABLE_CLICKHOUSE_CLIENT) add_custom_target (clickhouse-client ALL COMMAND ${CMAKE_COMMAND} -E create_symlink clickhouse clickhouse-client DEPENDS clickhouse) - install (FILES ${CMAKE_CURRENT_BINARY_DIR}/clickhouse-client DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT clickhouse) + install (FILES "${CMAKE_CURRENT_BINARY_DIR}/clickhouse-client" DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT clickhouse) list(APPEND CLICKHOUSE_BUNDLE clickhouse-client) endif () if (ENABLE_CLICKHOUSE_LOCAL) add_custom_target (clickhouse-local ALL COMMAND ${CMAKE_COMMAND} -E create_symlink clickhouse clickhouse-local DEPENDS clickhouse) - install (FILES ${CMAKE_CURRENT_BINARY_DIR}/clickhouse-local DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT clickhouse) + install (FILES "${CMAKE_CURRENT_BINARY_DIR}/clickhouse-local" DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT clickhouse) list(APPEND CLICKHOUSE_BUNDLE clickhouse-local) endif () if (ENABLE_CLICKHOUSE_BENCHMARK) add_custom_target (clickhouse-benchmark ALL COMMAND ${CMAKE_COMMAND} -E create_symlink clickhouse clickhouse-benchmark DEPENDS clickhouse) - install (FILES ${CMAKE_CURRENT_BINARY_DIR}/clickhouse-benchmark DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT clickhouse) + install (FILES "${CMAKE_CURRENT_BINARY_DIR}/clickhouse-benchmark" DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT clickhouse) list(APPEND CLICKHOUSE_BUNDLE clickhouse-benchmark) endif () if (ENABLE_CLICKHOUSE_COPIER) add_custom_target (clickhouse-copier ALL COMMAND ${CMAKE_COMMAND} -E create_symlink clickhouse clickhouse-copier DEPENDS clickhouse) - install (FILES ${CMAKE_CURRENT_BINARY_DIR}/clickhouse-copier DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT clickhouse) + install (FILES "${CMAKE_CURRENT_BINARY_DIR}/clickhouse-copier" DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT clickhouse) list(APPEND CLICKHOUSE_BUNDLE clickhouse-copier) endif () if (ENABLE_CLICKHOUSE_EXTRACT_FROM_CONFIG) add_custom_target (clickhouse-extract-from-config ALL COMMAND ${CMAKE_COMMAND} -E create_symlink clickhouse clickhouse-extract-from-config DEPENDS clickhouse) - install (FILES ${CMAKE_CURRENT_BINARY_DIR}/clickhouse-extract-from-config DESTINATION 
${CMAKE_INSTALL_BINDIR} COMPONENT clickhouse) + install (FILES "${CMAKE_CURRENT_BINARY_DIR}/clickhouse-extract-from-config" DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT clickhouse) list(APPEND CLICKHOUSE_BUNDLE clickhouse-extract-from-config) endif () if (ENABLE_CLICKHOUSE_COMPRESSOR) add_custom_target (clickhouse-compressor ALL COMMAND ${CMAKE_COMMAND} -E create_symlink clickhouse clickhouse-compressor DEPENDS clickhouse) - install (FILES ${CMAKE_CURRENT_BINARY_DIR}/clickhouse-compressor DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT clickhouse) + install (FILES "${CMAKE_CURRENT_BINARY_DIR}/clickhouse-compressor" DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT clickhouse) list(APPEND CLICKHOUSE_BUNDLE clickhouse-compressor) endif () if (ENABLE_CLICKHOUSE_FORMAT) add_custom_target (clickhouse-format ALL COMMAND ${CMAKE_COMMAND} -E create_symlink clickhouse clickhouse-format DEPENDS clickhouse) - install (FILES ${CMAKE_CURRENT_BINARY_DIR}/clickhouse-format DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT clickhouse) + install (FILES "${CMAKE_CURRENT_BINARY_DIR}/clickhouse-format" DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT clickhouse) list(APPEND CLICKHOUSE_BUNDLE clickhouse-format) endif () if (ENABLE_CLICKHOUSE_OBFUSCATOR) add_custom_target (clickhouse-obfuscator ALL COMMAND ${CMAKE_COMMAND} -E create_symlink clickhouse clickhouse-obfuscator DEPENDS clickhouse) - install (FILES ${CMAKE_CURRENT_BINARY_DIR}/clickhouse-obfuscator DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT clickhouse) + install (FILES "${CMAKE_CURRENT_BINARY_DIR}/clickhouse-obfuscator" DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT clickhouse) list(APPEND CLICKHOUSE_BUNDLE clickhouse-obfuscator) endif () if (ENABLE_CLICKHOUSE_GIT_IMPORT) add_custom_target (clickhouse-git-import ALL COMMAND ${CMAKE_COMMAND} -E create_symlink clickhouse clickhouse-git-import DEPENDS clickhouse) - install (FILES ${CMAKE_CURRENT_BINARY_DIR}/clickhouse-git-import DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT clickhouse) + install (FILES "${CMAKE_CURRENT_BINARY_DIR}/clickhouse-git-import" DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT clickhouse) list(APPEND CLICKHOUSE_BUNDLE clickhouse-git-import) endif () @@ -325,7 +347,7 @@ else () endif () if (ENABLE_TESTS AND USE_GTEST) - set (CLICKHOUSE_UNIT_TESTS_TARGETS unit_tests_libcommon unit_tests_dbms) + set (CLICKHOUSE_UNIT_TESTS_TARGETS unit_tests_dbms) add_custom_target (clickhouse-tests ALL DEPENDS ${CLICKHOUSE_UNIT_TESTS_TARGETS}) add_dependencies(clickhouse-bundle clickhouse-tests) endif() diff --git a/programs/bash-completion/CMakeLists.txt b/programs/bash-completion/CMakeLists.txt new file mode 100644 index 00000000000..d3a47f5a35e --- /dev/null +++ b/programs/bash-completion/CMakeLists.txt @@ -0,0 +1 @@ +add_subdirectory(completions) diff --git a/programs/bash-completion/completions/CMakeLists.txt b/programs/bash-completion/completions/CMakeLists.txt new file mode 100644 index 00000000000..d364e07ef6e --- /dev/null +++ b/programs/bash-completion/completions/CMakeLists.txt @@ -0,0 +1,28 @@ +macro(configure_bash_completion) + set(out "/usr/share/bash-completion/completions") + find_program(pkg-config PKG_CONFIG_BIN) + if (PKG_CONFIG_BIN) + execute_process( + COMMAND ${PKG_CONFIG_BIN} --variable=completionsdir bash-completion + OUTPUT_VARIABLE ${out} + OUTPUT_STRIP_TRAILING_WHITESPACE + ) + endif() + string(REPLACE /usr "${CMAKE_INSTALL_PREFIX}" out "${out}") + message(STATUS "bash_completion will be written to ${out}") +endmacro() + +configure_bash_completion() +foreach (name + # set of functions 
+ clickhouse-bootstrap + + # binaries that accept settings as command line argument + clickhouse-client + clickhouse-local + clickhouse-benchmark + + clickhouse +) + install(FILES ${name} DESTINATION ${out}) +endforeach() diff --git a/programs/bash-completion/completions/clickhouse b/programs/bash-completion/completions/clickhouse new file mode 100644 index 00000000000..fc55398dcf1 --- /dev/null +++ b/programs/bash-completion/completions/clickhouse @@ -0,0 +1,33 @@ +[[ -v $_CLICKHOUSE_COMPLETION_LOADED ]] || source "$(dirname "${BASH_SOURCE[0]}")/clickhouse-bootstrap" + +function _clickhouse_get_utils() +{ + local cmd=$1 && shift + "$cmd" --help |& awk '/^clickhouse.*args/ { print $2 }' +} + +function _complete_for_clickhouse_entrypoint_bin() +{ + local cur prev cword words + eval local cmd="$( _clickhouse_quote "$1" )" + _clickhouse_bin_exist "$cmd" || return 0 + + COMPREPLY=() + _get_comp_words_by_ref cur prev cword words + + local util="$cur" + # complete utils, until it will be finished + if [[ $cword -lt 2 ]]; then + COMPREPLY=( $(compgen -W "$(_clickhouse_get_utils "$cmd")" -- "$cur") ) + return + fi + util="${words[1]}" + + if _complete_for_clickhouse_generic_bin_impl "$prev"; then + COMPREPLY=( $(compgen -W "$(_clickhouse_get_options "$cmd" "$util")" -- "$cur") ) + fi + + return 0 +} + +_complete_clickhouse_generic clickhouse _complete_for_clickhouse_entrypoint_bin diff --git a/programs/bash-completion/completions/clickhouse-benchmark b/programs/bash-completion/completions/clickhouse-benchmark new file mode 100644 index 00000000000..13064b7417d --- /dev/null +++ b/programs/bash-completion/completions/clickhouse-benchmark @@ -0,0 +1,2 @@ +[[ -v $_CLICKHOUSE_COMPLETION_LOADED ]] || source "$(dirname "${BASH_SOURCE[0]}")/clickhouse-bootstrap" +_complete_clickhouse_generic clickhouse-benchmark diff --git a/programs/bash-completion/completions/clickhouse-bootstrap b/programs/bash-completion/completions/clickhouse-bootstrap new file mode 100644 index 00000000000..15b2140161d --- /dev/null +++ b/programs/bash-completion/completions/clickhouse-bootstrap @@ -0,0 +1,105 @@ +# +# bash autocomplete, that can work with: +# a) --help of program +# +# Also you may like: +# $ bind "set completion-ignore-case on" +# $ bind "set show-all-if-ambiguous on" +# +# It uses bash-completion dynamic loader. + +# Known to work with bash 3.* with programmable completion and extended +# pattern matching enabled (use 'shopt -s extglob progcomp' to enable +# these if they are not already enabled). +shopt -s extglob + +export _CLICKHOUSE_COMPLETION_LOADED=1 + +CLICKHOUSE_QueryProcessingStage=( + complete + fetch_columns + with_mergeable_state + with_mergeable_state_after_aggregation +) + +function _clickhouse_bin_exist() +{ [ -x "$1" ] || command -v "$1" >& /dev/null; } + +function _clickhouse_quote() +{ + local quoted=${1//\'/\'\\\'\'}; + printf "'%s'" "$quoted" +} + +# Extract every option (everything that starts with "-") from the --help dialog. +function _clickhouse_get_options() +{ + "$@" --help 2>&1 | awk -F '[ ,=<>]' '{ for (i=1; i <= NF; ++i) { if (substr($i, 0, 1) == "-" && length($i) > 1) print $i; } }' | sort -u +} + +function _complete_for_clickhouse_generic_bin_impl() +{ + local prev=$1 && shift + + case "$prev" in + -C|--config-file|--config) + return 1 + ;; + --stage) + COMPREPLY=( $(compgen -W "${CLICKHOUSE_QueryProcessingStage[*]}" -- "$cur") ) + return 1 + ;; + --host) + COMPREPLY=( $(compgen -A hostname -- "$cur") ) + return 1 + ;; + # Argh... This looks like a bash bug... 
+ # Redirections are passed to the completion function + # although it is managed by the shell directly... + '<'|'>'|'>>'|[12]'>'|[12]'>>') + return 1 + ;; + esac + + return 0 +} + +function _complete_for_clickhouse_generic_bin() +{ + local cur prev + eval local cmd="$( _clickhouse_quote "$1" )" + _clickhouse_bin_exist "$cmd" || return 0 + + COMPREPLY=() + _get_comp_words_by_ref cur prev + + if _complete_for_clickhouse_generic_bin_impl "$prev"; then + COMPREPLY=( $(compgen -W "$(_clickhouse_get_options "$cmd")" -- "$cur") ) + fi + + return 0 +} + +function _complete_clickhouse_generic() +{ + local bin=$1 && shift + local f=${1:-_complete_for_clickhouse_generic_bin} + local o=( + -o default + -o bashdefault + -o nospace + -F "$f" + "$bin" + ) + complete "${o[@]}" +} + +function _complete_clickhouse_bootstrap_main() +{ + local runtime=/usr/share/bash-completion/bash_completion + if ! type _get_comp_words_by_ref >& /dev/null && [[ -f $runtime ]]; then + source $runtime + fi + type _get_comp_words_by_ref >& /dev/null || return 0 +} +_complete_clickhouse_bootstrap_main "$@" diff --git a/programs/bash-completion/completions/clickhouse-client b/programs/bash-completion/completions/clickhouse-client new file mode 100644 index 00000000000..6b7899b7263 --- /dev/null +++ b/programs/bash-completion/completions/clickhouse-client @@ -0,0 +1,2 @@ +[[ -v $_CLICKHOUSE_COMPLETION_LOADED ]] || source "$(dirname "${BASH_SOURCE[0]}")/clickhouse-bootstrap" +_complete_clickhouse_generic clickhouse-client diff --git a/programs/bash-completion/completions/clickhouse-local b/programs/bash-completion/completions/clickhouse-local new file mode 100644 index 00000000000..7b12b48c7cd --- /dev/null +++ b/programs/bash-completion/completions/clickhouse-local @@ -0,0 +1,2 @@ +[[ -v $_CLICKHOUSE_COMPLETION_LOADED ]] || source "$(dirname "${BASH_SOURCE[0]}")/clickhouse-bootstrap" +_complete_clickhouse_generic clickhouse-local diff --git a/programs/benchmark/Benchmark.cpp b/programs/benchmark/Benchmark.cpp index a0e2ea155ba..1d2b579db3a 100644 --- a/programs/benchmark/Benchmark.cpp +++ b/programs/benchmark/Benchmark.cpp @@ -95,8 +95,8 @@ public: comparison_info_total.emplace_back(std::make_shared()); } - global_context.makeGlobalContext(); - global_context.setSettings(settings); + global_context->makeGlobalContext(); + global_context->setSettings(settings); std::cerr << std::fixed << std::setprecision(3); @@ -159,7 +159,7 @@ private: bool print_stacktrace; const Settings & settings; SharedContextHolder shared_context; - Context global_context; + ContextPtr global_context; QueryProcessingStage::Enum query_processing_stage; /// Don't execute new queries after timelimit or SIGINT or exception diff --git a/programs/client/CMakeLists.txt b/programs/client/CMakeLists.txt index 72b5caf9784..084e1b45911 100644 --- a/programs/client/CMakeLists.txt +++ b/programs/client/CMakeLists.txt @@ -21,4 +21,4 @@ list(APPEND CLICKHOUSE_CLIENT_LINK PRIVATE readpassphrase) clickhouse_program_add(client) -install (FILES clickhouse-client.xml DESTINATION ${CLICKHOUSE_ETC_DIR}/clickhouse-client COMPONENT clickhouse-client RENAME config.xml) +install (FILES clickhouse-client.xml DESTINATION "${CLICKHOUSE_ETC_DIR}/clickhouse-client" COMPONENT clickhouse-client RENAME config.xml) diff --git a/programs/client/Client.cpp b/programs/client/Client.cpp index da7c729a737..3917d775cbc 100644 --- a/programs/client/Client.cpp +++ b/programs/client/Client.cpp @@ -21,7 +21,7 @@ #include #include #include -#include +#include #include #include #include @@ -65,6 +65,7 
@@ #include #include #include +#include #include #include #include @@ -116,6 +117,34 @@ namespace ErrorCodes } +static bool queryHasWithClause(const IAST * ast) +{ + if (const auto * select = dynamic_cast(ast); + select && select->with()) + { + return true; + } + + // This full recursive walk is somewhat excessive, because most of the + // children are not queries, but on the other hand it will let us to avoid + // breakage when the AST structure changes and some new variant of query + // nesting is added. This function is used in fuzzer, so it's better to be + // defensive and avoid weird unexpected errors. + // clang-tidy is confused by this function: it thinks that if `select` is + // nullptr, `ast` is also nullptr, and complains about nullptr dereference. + // NOLINTNEXTLINE + for (const auto & child : ast->children) + { + if (queryHasWithClause(child.get())) + { + return true; + } + } + + return false; +} + + class Client : public Poco::Util::Application { public: @@ -162,7 +191,7 @@ private: bool has_vertical_output_suffix = false; /// Is \G present at the end of the query string? SharedContextHolder shared_context = Context::createShared(); - Context context = Context::createGlobal(shared_context.get()); + ContextPtr context = Context::createGlobal(shared_context.get()); /// Buffer that reads from stdin in batch mode. ReadBufferFromFileDescriptor std_in {STDIN_FILENO}; @@ -245,20 +274,20 @@ private: configReadClient(config(), home_path); - context.setApplicationType(Context::ApplicationType::CLIENT); - context.setQueryParameters(query_parameters); + context->setApplicationType(Context::ApplicationType::CLIENT); + context->setQueryParameters(query_parameters); /// settings and limits could be specified in config file, but passed settings has higher priority - for (const auto & setting : context.getSettingsRef().allUnchanged()) + for (const auto & setting : context->getSettingsRef().allUnchanged()) { const auto & name = setting.getName(); if (config().has(name)) - context.setSetting(name, config().getString(name)); + context->setSetting(name, config().getString(name)); } /// Set path for format schema files if (config().has("format_schema_path")) - context.setFormatSchemaPath(Poco::Path(config().getString("format_schema_path")).toString()); + context->setFormatSchemaPath(Poco::Path(config().getString("format_schema_path")).toString()); /// Initialize query_id_formats if any if (config().has("query_id_formats")) @@ -390,7 +419,7 @@ private: for (auto d : chineseNewYearIndicators) { /// Let's celebrate until Lantern Festival - if (d <= days && d + 25u >= days) + if (d <= days && d + 25 >= days) return true; else if (d > days) return false; @@ -498,7 +527,10 @@ private: std::cerr << std::fixed << std::setprecision(3); if (is_interactive) + { + clearTerminal(); showClientVersion(); + } is_default_format = !config().has("vertical") && !config().has("format"); if (config().has("vertical")) @@ -506,15 +538,15 @@ private: else format = config().getString("format", is_interactive ? 
"PrettyCompact" : "TabSeparated"); - format_max_block_size = config().getInt("format_max_block_size", context.getSettingsRef().max_block_size); + format_max_block_size = config().getInt("format_max_block_size", context->getSettingsRef().max_block_size); insert_format = "Values"; /// Setting value from cmd arg overrides one from config - if (context.getSettingsRef().max_insert_block_size.changed) - insert_format_max_block_size = context.getSettingsRef().max_insert_block_size; + if (context->getSettingsRef().max_insert_block_size.changed) + insert_format_max_block_size = context->getSettingsRef().max_insert_block_size; else - insert_format_max_block_size = config().getInt("insert_format_max_block_size", context.getSettingsRef().max_insert_block_size); + insert_format_max_block_size = config().getInt("insert_format_max_block_size", context->getSettingsRef().max_insert_block_size); if (!is_interactive) { @@ -523,7 +555,7 @@ private: ignore_error = config().getBool("ignore-error", false); } - ClientInfo & client_info = context.getClientInfo(); + ClientInfo & client_info = context->getClientInfo(); client_info.setInitialQuery(); client_info.quota_key = config().getString("quota_key", ""); @@ -531,7 +563,7 @@ private: /// Initialize DateLUT here to avoid counting time spent here as query execution time. const auto local_tz = DateLUT::instance().getTimeZone(); - if (!context.getSettingsRef().use_client_time_zone) + if (!context->getSettingsRef().use_client_time_zone) { const auto & time_zone = connection->getServerTimezone(connection_parameters.timeouts); if (!time_zone.empty()) @@ -706,7 +738,7 @@ private: { auto query_id = config().getString("query_id", ""); if (!query_id.empty()) - context.setCurrentQueryId(query_id); + context->setCurrentQueryId(query_id); nonInteractive(); @@ -1006,7 +1038,7 @@ private: { Tokens tokens(this_query_begin, all_queries_end); IParser::Pos token_iterator(tokens, - context.getSettingsRef().max_parser_depth); + context->getSettingsRef().max_parser_depth); if (!token_iterator.isValid()) { break; @@ -1055,7 +1087,7 @@ private: if (ignore_error) { Tokens tokens(this_query_begin, all_queries_end); - IParser::Pos token_iterator(tokens, context.getSettingsRef().max_parser_depth); + IParser::Pos token_iterator(tokens, context->getSettingsRef().max_parser_depth); while (token_iterator->type != TokenType::Semicolon && token_iterator.isValid()) ++token_iterator; this_query_begin = token_iterator->end; @@ -1101,7 +1133,7 @@ private: // beneficial so that we see proper trailing comments in "echo" and // server log. adjustQueryEnd(this_query_end, all_queries_end, - context.getSettingsRef().max_parser_depth); + context->getSettingsRef().max_parser_depth); // full_query is the query + inline INSERT data + trailing comments // (the latter is our best guess for now). @@ -1141,7 +1173,7 @@ private: { this_query_end = insert_ast->end; adjustQueryEnd(this_query_end, all_queries_end, - context.getSettingsRef().max_parser_depth); + context->getSettingsRef().max_parser_depth); } // Now we know for sure where the query ends. @@ -1255,6 +1287,29 @@ private: return true; } + // Prints changed settings to stderr. Useful for debugging fuzzing failures. 
+ void printChangedSettings() const + { + const auto & changes = context->getSettingsRef().changes(); + if (!changes.empty()) + { + fmt::print(stderr, "Changed settings: "); + for (size_t i = 0; i < changes.size(); ++i) + { + if (i) + { + fmt::print(stderr, ", "); + } + fmt::print(stderr, "{} = '{}'", changes[i].name, + toString(changes[i].value)); + } + fmt::print(stderr, "\n"); + } + else + { + fmt::print(stderr, "No changed settings.\n"); + } + } /// Returns false when server is not available. bool processWithFuzzing(const String & text) @@ -1317,9 +1372,14 @@ private: auto base_after_fuzz = fuzz_base->formatForErrorMessage(); - // Debug AST cloning errors. + // Check that the source AST didn't change after fuzzing. This + // helps debug AST cloning errors, where the cloned AST doesn't + // clone all its children, and erroneously points to some source + // child elements. if (base_before_fuzz != base_after_fuzz) { + printChangedSettings(); + fmt::print(stderr, "Base before fuzz: {}\n" "Base after fuzz: {}\n", @@ -1334,7 +1394,7 @@ private: fmt::print(stderr, "IAST::clone() is broken for some AST node. This is a bug. The original AST ('dump before fuzz') and its cloned copy ('dump of cloned AST') refer to the same nodes, which must never happen. This means that their parent node doesn't implement clone() correctly."); - assert(false); + exit(1); } auto fuzzed_text = ast_to_process->formatForErrorMessage(); @@ -1378,29 +1438,76 @@ private: // Print the changed settings because they might be needed to // reproduce the error. - const auto & changes = context.getSettingsRef().changes(); - if (!changes.empty()) - { - fmt::print(stderr, "Changed settings: "); - for (size_t i = 0; i < changes.size(); ++i) - { - if (i) - { - fmt::print(stderr, ", "); - } - fmt::print(stderr, "{} = '{}'", changes[i].name, - toString(changes[i].value)); - } - fmt::print(stderr, "\n"); - } - else - { - fmt::print(stderr, "No changed settings.\n"); - } + printChangedSettings(); return false; } + // Check that after the query is formatted, we can parse it back, + // format again and get the same result. Unfortunately, we can't + // compare the ASTs, which would be more sensitive to errors. This + // double formatting check doesn't catch all errors, e.g. we can + // format query incorrectly, but to a valid SQL that we can then + // parse and format into the same SQL. + // There are some complicated cases where we can generate the SQL + // which we can't parse: + // * first argument of lambda() replaced by fuzzer with + // something else, leading to constructs such as + // arrayMap((min(x) + 3) -> x + 1, ....) + // * internals of Enum replaced, leading to: + // Enum(equals(someFunction(y), 3)). + // And there are even the cases when we can parse the query, but + // it's logically incorrect and its formatting is a mess, such as + // when `lambda()` function gets substituted into a wrong place. + // To avoid dealing with these cases, run the check only for the + // queries we were able to successfully execute. + // The final caveat is that sometimes WITH queries are not executed, + // if they are not referenced by the main SELECT, so they can still + // have the aforementioned problems. Disable this check for such + // queries, for lack of a better solution. 
+ if (!have_error && queryHasWithClause(parsed_query.get())) + { + ASTPtr parsed_formatted_query; + try + { + const auto * tmp_pos = query_to_send.c_str(); + parsed_formatted_query = parseQuery(tmp_pos, + tmp_pos + query_to_send.size(), + false /* allow_multi_statements */); + } + catch (Exception & e) + { + if (e.code() != ErrorCodes::SYNTAX_ERROR) + { + throw; + } + } + + if (parsed_formatted_query) + { + const auto formatted_twice + = parsed_formatted_query->formatForErrorMessage(); + + if (formatted_twice != query_to_send) + { + fmt::print(stderr, "The query formatting is broken.\n"); + + printChangedSettings(); + + fmt::print(stderr, "Got the following (different) text after formatting the fuzzed query and parsing it back:\n'{}'\n, expected:\n'{}'\n", + formatted_twice, query_to_send); + fmt::print(stderr, "In more detail:\n"); + fmt::print(stderr, "AST-1:\n'{}'\n", parsed_query->dumpTree()); + fmt::print(stderr, "Text-1 (AST-1 formatted):\n'{}'\n", query_to_send); + fmt::print(stderr, "AST-2 (Text-1 parsed):\n'{}'\n", parsed_formatted_query->dumpTree()); + fmt::print(stderr, "Text-2 (AST-2 formatted):\n'{}'\n", formatted_twice); + fmt::print(stderr, "Text-1 must be equal to Text-2, but it is not.\n"); + + exit(1); + } + } + } + // The server is still alive so we're going to continue fuzzing. // Determine what we're going to use as the starting AST. if (have_error) @@ -1483,11 +1590,11 @@ private: if (is_interactive) { // Generate a new query_id - context.setCurrentQueryId(""); + context->setCurrentQueryId(""); for (const auto & query_id_format : query_id_formats) { writeString(query_id_format.first, std_out); - writeString(fmt::format(query_id_format.second, fmt::arg("query_id", context.getCurrentQueryId())), std_out); + writeString(fmt::format(query_id_format.second, fmt::arg("query_id", context->getCurrentQueryId())), std_out); writeChar('\n', std_out); std_out.next(); } @@ -1503,12 +1610,12 @@ private: { /// Temporarily apply query settings to context. 
std::optional old_settings; - SCOPE_EXIT({ if (old_settings) context.setSettings(*old_settings); }); + SCOPE_EXIT_SAFE({ if (old_settings) context->setSettings(*old_settings); }); auto apply_query_settings = [&](const IAST & settings_ast) { if (!old_settings) - old_settings.emplace(context.getSettingsRef()); - context.applySettingsChanges(settings_ast.as()->changes); + old_settings.emplace(context->getSettingsRef()); + context->applySettingsChanges(settings_ast.as()->changes); }; const auto * insert = parsed_query->as(); if (insert && insert->settings_ast) @@ -1546,7 +1653,7 @@ private: if (change.name == "profile") current_profile = change.value.safeGet(); else - context.applySettingChange(change); + context->applySettingChange(change); } } @@ -1618,10 +1725,10 @@ private: connection->sendQuery( connection_parameters.timeouts, query_to_send, - context.getCurrentQueryId(), + context->getCurrentQueryId(), query_processing_stage, - &context.getSettingsRef(), - &context.getClientInfo(), + &context->getSettingsRef(), + &context->getClientInfo(), true); sendExternalTables(); @@ -1659,10 +1766,10 @@ private: connection->sendQuery( connection_parameters.timeouts, query_to_send, - context.getCurrentQueryId(), + context->getCurrentQueryId(), query_processing_stage, - &context.getSettingsRef(), - &context.getClientInfo(), + &context->getSettingsRef(), + &context->getClientInfo(), true); sendExternalTables(); @@ -1685,7 +1792,7 @@ private: ParserQuery parser(end); ASTPtr res; - const auto & settings = context.getSettingsRef(); + const auto & settings = context->getSettingsRef(); size_t max_length = 0; if (!allow_multi_statements) max_length = settings.max_query_size; @@ -1773,7 +1880,7 @@ private: current_format = insert->format; } - BlockInputStreamPtr block_input = context.getInputFormat( + BlockInputStreamPtr block_input = context->getInputFormat( current_format, buf, sample, insert_format_max_block_size); if (columns_description.hasDefaults()) @@ -2097,9 +2204,9 @@ private: /// It is not clear how to write progress with parallel formatting. It may increase code complexity significantly. if (!need_render_progress) - block_out_stream = context.getOutputStreamParallelIfPossible(current_format, *out_buf, block); + block_out_stream = context->getOutputStreamParallelIfPossible(current_format, *out_buf, block); else - block_out_stream = context.getOutputStream(current_format, *out_buf, block); + block_out_stream = context->getOutputStream(current_format, *out_buf, block); block_out_stream->writePrefix(); } @@ -2145,30 +2252,27 @@ private: return; processed_rows += block.rows(); + + /// Even if all blocks are empty, we still need to initialize the output stream to write empty resultset. initBlockOutputStream(block); /// The header block containing zero rows was used to initialize /// block_out_stream, do not output it. /// Also do not output too much data if we're fuzzing. - if (block.rows() != 0 - && (query_fuzzer_runs == 0 || processed_rows < 100)) - { - block_out_stream->write(block); - written_first_block = true; - } + if (block.rows() == 0 || (query_fuzzer_runs != 0 && processed_rows >= 100)) + return; - bool clear_progress = false; if (need_render_progress) - clear_progress = std_out.offset() > 0; - - if (clear_progress) clearProgress(); + block_out_stream->write(block); + written_first_block = true; + /// Received data block is immediately displayed to the user. block_out_stream->flush(); /// Restore progress bar after data block. 
- if (clear_progress) + if (need_render_progress) writeProgress(); } @@ -2363,6 +2467,17 @@ private: std::cout << DBMS_NAME << " client version " << VERSION_STRING << VERSION_OFFICIAL << "." << std::endl; } + static void clearTerminal() + { + /// Clear from cursor until end of screen. + /// It is needed if garbage is left in the terminal. + /// Show cursor. It can be left hidden by previously invoked programs. + /// A test for this feature: perl -e 'print "x"x100000'; echo -ne '\033[0;0H\033[?25l'; clickhouse-client + std::cout << + "\033[0J" + "\033[?25h"; + } + public: void init(int argc, char ** argv) { @@ -2592,12 +2707,12 @@ public: } } - context.makeGlobalContext(); - context.setSettings(cmd_settings); + context->makeGlobalContext(); + context->setSettings(cmd_settings); /// Copy settings-related program options to config. /// TODO: Is this code necessary? - for (const auto & setting : context.getSettingsRef().all()) + for (const auto & setting : context->getSettingsRef().all()) { const auto & name = setting.getName(); if (options.count(name)) @@ -2689,7 +2804,7 @@ public: { std::string traceparent = options["opentelemetry-traceparent"].as<std::string>(); std::string error; - if (!context.getClientInfo().client_trace_context.parseTraceparentHeader( + if (!context->getClientInfo().client_trace_context.parseTraceparentHeader( traceparent, error)) { throw Exception(ErrorCodes::BAD_ARGUMENTS, @@ -2700,7 +2815,7 @@ public: if (options.count("opentelemetry-tracestate")) { - context.getClientInfo().client_trace_context.tracestate = + context->getClientInfo().client_trace_context.tracestate = options["opentelemetry-tracestate"].as<std::string>(); } diff --git a/programs/client/ConnectionParameters.cpp b/programs/client/ConnectionParameters.cpp index 19734dd5ffa..6faf43759df 100644 --- a/programs/client/ConnectionParameters.cpp +++ b/programs/client/ConnectionParameters.cpp @@ -7,6 +7,8 @@ #include #include #include +#include +#include #include #include @@ -60,7 +62,9 @@ ConnectionParameters::ConnectionParameters(const Poco::Util::AbstractConfigurati #endif } - compression = config.getBool("compression", true) ? Protocol::Compression::Enable : Protocol::Compression::Disable; + /// By default compression is disabled if the address looks like localhost. + compression = config.getBool("compression", !isLocalAddress(DNSResolver::instance().resolveHost(host))) + ? 
Protocol::Compression::Enable : Protocol::Compression::Disable; timeouts = ConnectionTimeouts( Poco::Timespan(config.getInt("connect_timeout", DBMS_DEFAULT_CONNECT_TIMEOUT_SEC), 0), diff --git a/programs/client/QueryFuzzer.cpp b/programs/client/QueryFuzzer.cpp index 8d8d8daaf39..721e5acb991 100644 --- a/programs/client/QueryFuzzer.cpp +++ b/programs/client/QueryFuzzer.cpp @@ -27,6 +27,7 @@ #include #include + namespace DB { @@ -37,34 +38,33 @@ namespace ErrorCodes Field QueryFuzzer::getRandomField(int type) { + static constexpr Int64 bad_int64_values[] + = {-2, -1, 0, 1, 2, 3, 7, 10, 100, 255, 256, 257, 1023, 1024, + 1025, 65535, 65536, 65537, 1024 * 1024 - 1, 1024 * 1024, + 1024 * 1024 + 1, INT_MIN - 1ll, INT_MIN, INT_MIN + 1, + INT_MAX - 1, INT_MAX, INT_MAX + 1ll, INT64_MIN, INT64_MIN + 1, + INT64_MAX - 1, INT64_MAX}; switch (type) { case 0: { - static constexpr Int64 values[] - = {-2, -1, 0, 1, 2, 3, 7, 10, 100, 255, 256, 257, 1023, 1024, - 1025, 65535, 65536, 65537, 1024 * 1024 - 1, 1024 * 1024, - 1024 * 1024 + 1, INT64_MIN, INT64_MAX}; - return values[fuzz_rand() % (sizeof(values) / sizeof(*values))]; + return bad_int64_values[fuzz_rand() % (sizeof(bad_int64_values) + / sizeof(*bad_int64_values))]; } case 1: { static constexpr float values[] - = {NAN, INFINITY, -INFINITY, 0., 0.0001, 0.5, 0.9999, - 1., 1.0001, 2., 10.0001, 100.0001, 1000.0001}; - return values[fuzz_rand() % (sizeof(values) / sizeof(*values))]; + = {NAN, INFINITY, -INFINITY, 0., -0., 0.0001, 0.5, 0.9999, + 1., 1.0001, 2., 10.0001, 100.0001, 1000.0001, 1e10, 1e20, + FLT_MIN, FLT_MIN + FLT_EPSILON, FLT_MAX, FLT_MAX + FLT_EPSILON}; return values[fuzz_rand() % (sizeof(values) / sizeof(*values))]; } case 2: { - static constexpr Int64 values[] - = {-2, -1, 0, 1, 2, 3, 7, 10, 100, 255, 256, 257, 1023, 1024, - 1025, 65535, 65536, 65537, 1024 * 1024 - 1, 1024 * 1024, - 1024 * 1024 + 1, INT64_MIN, INT64_MAX}; static constexpr UInt64 scales[] = {0, 1, 2, 10}; return DecimalField( - values[fuzz_rand() % (sizeof(values) / sizeof(*values))], - scales[fuzz_rand() % (sizeof(scales) / sizeof(*scales))] - ); + bad_int64_values[fuzz_rand() % (sizeof(bad_int64_values) + / sizeof(*bad_int64_values))], + scales[fuzz_rand() % (sizeof(scales) / sizeof(*scales))]); } default: assert(false); @@ -570,6 +570,15 @@ void QueryFuzzer::addColumnLike(const ASTPtr ast) } const auto name = ast->formatForErrorMessage(); + if (name == "Null") + { + // The `Null` identifier from FORMAT Null clause. We don't quote it + // properly when formatting the AST, and while the resulting query + // technically works, it has non-standard case for Null (the standard + // is NULL), so it breaks the query formatting idempotence check. + // Just plug this particular case for now. + return; + } if (name.size() < 200) { column_like_map.insert({name, ast}); diff --git a/programs/client/QueryFuzzer.h b/programs/client/QueryFuzzer.h index 38714205967..9ef66db1873 100644 --- a/programs/client/QueryFuzzer.h +++ b/programs/client/QueryFuzzer.h @@ -50,7 +50,7 @@ struct QueryFuzzer // Some debug fields for detecting problematic ASTs with loops. // These are reset for each fuzzMain call. 
std::unordered_set debug_visited_nodes; - ASTPtr * debug_top_ast; + ASTPtr * debug_top_ast = nullptr; // This is the only function you have to call -- it will modify the passed diff --git a/programs/client/Suggest.cpp b/programs/client/Suggest.cpp index dfa7048349e..8d4c0fdbd5a 100644 --- a/programs/client/Suggest.cpp +++ b/programs/client/Suggest.cpp @@ -108,14 +108,6 @@ void Suggest::loadImpl(Connection & connection, const ConnectionTimeouts & timeo " UNION ALL " "SELECT cluster FROM system.clusters" " UNION ALL " - "SELECT name FROM system.errors" - " UNION ALL " - "SELECT event FROM system.events" - " UNION ALL " - "SELECT metric FROM system.asynchronous_metrics" - " UNION ALL " - "SELECT metric FROM system.metrics" - " UNION ALL " "SELECT macro FROM system.macros" " UNION ALL " "SELECT policy_name FROM system.storage_policies" @@ -139,17 +131,12 @@ void Suggest::loadImpl(Connection & connection, const ConnectionTimeouts & timeo query << ") WHERE notEmpty(res)"; - Settings settings; - /// To show all rows from: - /// - system.errors - /// - system.events - settings.system_events_show_zero_values = true; - fetch(connection, timeouts, query.str(), settings); + fetch(connection, timeouts, query.str()); } -void Suggest::fetch(Connection & connection, const ConnectionTimeouts & timeouts, const std::string & query, Settings & settings) +void Suggest::fetch(Connection & connection, const ConnectionTimeouts & timeouts, const std::string & query) { - connection.sendQuery(timeouts, query, "" /* query_id */, QueryProcessingStage::Complete, &settings); + connection.sendQuery(timeouts, query, "" /* query_id */, QueryProcessingStage::Complete); while (true) { diff --git a/programs/client/Suggest.h b/programs/client/Suggest.h index 0049bc08ebf..03332088cbe 100644 --- a/programs/client/Suggest.h +++ b/programs/client/Suggest.h @@ -33,7 +33,7 @@ public: private: void loadImpl(Connection & connection, const ConnectionTimeouts & timeouts, size_t suggestion_limit); - void fetch(Connection & connection, const ConnectionTimeouts & timeouts, const std::string & query, Settings & settings); + void fetch(Connection & connection, const ConnectionTimeouts & timeouts, const std::string & query); void fillWordsFromBlock(const Block & block); /// Words are fetched asynchronously. 
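An aside on the query-formatting check added to the client above: it enforces a round-trip invariant, namely that formatting the fuzzed AST, parsing that text back, and formatting again must reproduce exactly the same text (Text-1 must equal Text-2), otherwise both ASTs are dumped and the client exits. Below is a minimal, self-contained sketch of the same invariant on a toy grammar (whitespace-separated integers). The parse and format functions here are hypothetical stand-ins, not the ClickHouse parser API.

#include <iostream>
#include <optional>
#include <sstream>
#include <string>
#include <vector>

// A toy AST: just a list of integers.
using Ast = std::vector<long>;

// Stand-in parser: whitespace-separated integers; nullopt on a syntax error.
static std::optional<Ast> parse(const std::string & text)
{
    Ast ast;
    std::istringstream in(text);
    long value;
    while (in >> value)
        ast.push_back(value);
    return in.eof() ? std::optional<Ast>(ast) : std::nullopt;
}

// Stand-in formatter: canonical single-space separation.
static std::string format(const Ast & ast)
{
    std::string out;
    for (size_t i = 0; i < ast.size(); ++i)
    {
        if (i != 0)
            out += ' ';
        out += std::to_string(ast[i]);
    }
    return out;
}

int main()
{
    const std::string original = "1   2\t3"; // arbitrary (possibly fuzzed) input
    const auto ast = parse(original);
    if (!ast)
        return 0; // unparseable input is out of scope for the check

    const std::string text1 = format(*ast);  // Text-1 (AST-1 formatted)
    const auto ast2 = parse(text1);          // AST-2 (Text-1 parsed)
    if (!ast2)
        return 0; // the real check also tolerates a syntax error at this step

    const std::string text2 = format(*ast2); // Text-2 (AST-2 formatted)
    if (text1 != text2)
    {
        std::cerr << "The query formatting is broken: '" << text1
                  << "' vs '" << text2 << "'\n";
        return 1;
    }
    return 0;
}

Canonicalizing the output in format() is what makes the second round trip stable: once the text is in canonical form, parse and format act as inverse operations.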
diff --git a/programs/config_tools.h.in b/programs/config_tools.h.in index 7cb5a6d883a..abe9ef8c562 100644 --- a/programs/config_tools.h.in +++ b/programs/config_tools.h.in @@ -15,3 +15,4 @@ #cmakedefine01 ENABLE_CLICKHOUSE_GIT_IMPORT #cmakedefine01 ENABLE_CLICKHOUSE_INSTALL #cmakedefine01 ENABLE_CLICKHOUSE_ODBC_BRIDGE +#cmakedefine01 ENABLE_CLICKHOUSE_LIBRARY_BRIDGE diff --git a/programs/copier/CMakeLists.txt b/programs/copier/CMakeLists.txt index f69b30f3f43..dfb067b00f9 100644 --- a/programs/copier/CMakeLists.txt +++ b/programs/copier/CMakeLists.txt @@ -1,7 +1,7 @@ set(CLICKHOUSE_COPIER_SOURCES - ${CMAKE_CURRENT_SOURCE_DIR}/ClusterCopierApp.cpp - ${CMAKE_CURRENT_SOURCE_DIR}/ClusterCopier.cpp - ${CMAKE_CURRENT_SOURCE_DIR}/Internals.cpp) + "${CMAKE_CURRENT_SOURCE_DIR}/ClusterCopierApp.cpp" + "${CMAKE_CURRENT_SOURCE_DIR}/ClusterCopier.cpp" + "${CMAKE_CURRENT_SOURCE_DIR}/Internals.cpp") set (CLICKHOUSE_COPIER_LINK PRIVATE diff --git a/programs/copier/ClusterCopier.cpp b/programs/copier/ClusterCopier.cpp index 7eea23160b2..aa9b359993e 100644 --- a/programs/copier/ClusterCopier.cpp +++ b/programs/copier/ClusterCopier.cpp @@ -22,7 +22,7 @@ namespace ErrorCodes void ClusterCopier::init() { - auto zookeeper = context.getZooKeeper(); + auto zookeeper = getContext()->getZooKeeper(); task_description_watch_callback = [this] (const Coordination::WatchResponse & response) { @@ -39,14 +39,14 @@ void ClusterCopier::init() task_cluster_initial_config = task_cluster_current_config; task_cluster->loadTasks(*task_cluster_initial_config); - context.setClustersConfig(task_cluster_initial_config, task_cluster->clusters_prefix); + getContext()->setClustersConfig(task_cluster_initial_config, task_cluster->clusters_prefix); /// Set up shards and their priority task_cluster->random_engine.seed(task_cluster->random_device()); for (auto & task_table : task_cluster->table_tasks) { - task_table.cluster_pull = context.getCluster(task_table.cluster_pull_name); - task_table.cluster_push = context.getCluster(task_table.cluster_push_name); + task_table.cluster_pull = getContext()->getCluster(task_table.cluster_pull_name); + task_table.cluster_push = getContext()->getCluster(task_table.cluster_push_name); task_table.initShards(task_cluster->random_engine); } @@ -106,7 +106,7 @@ void ClusterCopier::discoverShardPartitions(const ConnectionTimeouts & timeouts, try { - type->deserializeAsTextQuoted(*column_dummy, rb, FormatSettings()); + type->getDefaultSerialization()->deserializeTextQuoted(*column_dummy, rb, FormatSettings()); } catch (Exception & e) { @@ -206,7 +206,7 @@ void ClusterCopier::uploadTaskDescription(const std::string & task_path, const s if (task_config_str.empty()) return; - auto zookeeper = context.getZooKeeper(); + auto zookeeper = getContext()->getZooKeeper(); zookeeper->createAncestors(local_task_description_path); auto code = zookeeper->tryCreate(local_task_description_path, task_config_str, zkutil::CreateMode::Persistent); @@ -219,7 +219,7 @@ void ClusterCopier::uploadTaskDescription(const std::string & task_path, const s void ClusterCopier::reloadTaskDescription() { - auto zookeeper = context.getZooKeeper(); + auto zookeeper = getContext()->getZooKeeper(); task_description_watch_zookeeper = zookeeper; String task_config_str; @@ -235,7 +235,7 @@ void ClusterCopier::reloadTaskDescription() /// Setup settings task_cluster->reloadSettings(*config); - context.setSettings(task_cluster->settings_common); + getContext()->setSettings(task_cluster->settings_common); task_cluster_current_config = config; 
task_description_current_stat = stat; @@ -440,7 +440,7 @@ bool ClusterCopier::checkPartitionPieceIsDone(const TaskTable & task_table, cons { LOG_DEBUG(log, "Check that all shards processed partition {} piece {} successfully", partition_name, piece_number); - auto zookeeper = context.getZooKeeper(); + auto zookeeper = getContext()->getZooKeeper(); /// Collect all shards that contain partition piece number piece_number. Strings piece_status_paths; @@ -532,7 +532,7 @@ TaskStatus ClusterCopier::tryMoveAllPiecesToDestinationTable(const TaskTable & t LOG_DEBUG(log, "Try to move {} to destination table", partition_name); - auto zookeeper = context.getZooKeeper(); + auto zookeeper = getContext()->getZooKeeper(); const auto current_partition_attach_is_active = task_table.getPartitionAttachIsActivePath(partition_name); const auto current_partition_attach_is_done = task_table.getPartitionAttachIsDonePath(partition_name); @@ -599,11 +599,13 @@ TaskStatus ClusterCopier::tryMoveAllPiecesToDestinationTable(const TaskTable & t toString(current_piece_number)); Settings settings_push = task_cluster->settings_push; - - /// It is important, ALTER ATTACH PARTITION must be done synchronously - /// And we will execute this ALTER query on each replica of a shard. - /// It is correct, because this query is idempotent. - settings_push.replication_alter_partitions_sync = 2; + ClusterExecutionMode execution_mode = ClusterExecutionMode::ON_EACH_NODE; + UInt64 max_successful_executions_per_shard = 0; + if (settings_push.replication_alter_partitions_sync == 1) + { + execution_mode = ClusterExecutionMode::ON_EACH_SHARD; + max_successful_executions_per_shard = 1; + } query_alter_ast_string += " ALTER TABLE " + getQuotedTable(original_table) + ((partition_name == "'all'") ? " ATTACH PARTITION ID " : " ATTACH PARTITION ") + partition_name + @@ -613,14 +615,33 @@ TaskStatus ClusterCopier::tryMoveAllPiecesToDestinationTable(const TaskTable & t try { - size_t num_nodes = executeQueryOnCluster( - task_table.cluster_push, - query_alter_ast_string, - settings_push, - PoolMode::GET_MANY, - ClusterExecutionMode::ON_EACH_NODE); + /// Try attach partition on each shard + UInt64 num_nodes = executeQueryOnCluster( + task_table.cluster_push, + query_alter_ast_string, + task_cluster->settings_push, + PoolMode::GET_MANY, + execution_mode, + max_successful_executions_per_shard); - LOG_INFO(log, "Number of nodes that executed ALTER query successfully : {}", toString(num_nodes)); + if (settings_push.replication_alter_partitions_sync == 1) + { + LOG_INFO( + log, + "Destination tables {} have been executed alter query successfully on {} shards of {}", + getQuotedTable(task_table.table_push), + num_nodes, + task_table.cluster_push->getShardCount()); + + if (num_nodes != task_table.cluster_push->getShardCount()) + { + return TaskStatus::Error; + } + } + else + { + LOG_INFO(log, "Number of nodes that executed ALTER query successfully : {}", toString(num_nodes)); + } } catch (...) 
{ @@ -856,6 +877,16 @@ bool ClusterCopier::tryDropPartitionPiece( bool ClusterCopier::tryProcessTable(const ConnectionTimeouts & timeouts, TaskTable & task_table) { + /// Create destination table + TaskStatus task_status = TaskStatus::Error; + + task_status = tryCreateDestinationTable(timeouts, task_table); + /// Exit if the destination table could not be created + if (task_status != TaskStatus::Finished) + { + LOG_WARNING(log, "Create destination table failed"); + return false; + } /// A heuristic: if previous shard is already done, then check next one without sleeps due to max_workers constraint bool previous_shard_is_instantly_finished = false; @@ -932,7 +963,7 @@ bool ClusterCopier::tryProcessTable(const ConnectionTimeouts & timeouts, TaskTab /// Do not sleep if there is a sequence of already processed shards to increase startup bool is_unprioritized_task = !previous_shard_is_instantly_finished && shard->priority.is_remote; - TaskStatus task_status = TaskStatus::Error; + task_status = TaskStatus::Error; bool was_error = false; has_shard_to_process = true; for (UInt64 try_num = 0; try_num < max_shard_partition_tries; ++try_num) @@ -1050,6 +1081,44 @@ bool ClusterCopier::tryProcessTable(const ConnectionTimeouts & timeouts, TaskTab return table_is_done; } +TaskStatus ClusterCopier::tryCreateDestinationTable(const ConnectionTimeouts & timeouts, TaskTable & task_table) +{ + /// Try to create the original table (if it does not exist) on each shard + + //TaskTable & task_table = task_shard.task_table; + const TaskShardPtr task_shard = task_table.all_shards.at(0); + /// We need to update table definitions for each part, as they could be changed after ALTER + task_shard->current_pull_table_create_query = getCreateTableForPullShard(timeouts, *task_shard); + try + { + auto create_query_push_ast + = rewriteCreateQueryStorage(task_shard->current_pull_table_create_query, task_table.table_push, task_table.engine_push_ast); + auto & create = create_query_push_ast->as<ASTCreateQuery &>(); + create.if_not_exists = true; + InterpreterCreateQuery::prepareOnClusterQuery(create, getContext(), task_table.cluster_push_name); + String query = queryToString(create_query_push_ast); + + LOG_DEBUG(log, "Create destination tables. Query: {}", query); + UInt64 shards = executeQueryOnCluster(task_table.cluster_push, query, task_cluster->settings_push, PoolMode::GET_MANY); + LOG_INFO( + log, + "Destination tables {} have been created on {} shards of {}", + getQuotedTable(task_table.table_push), + shards, + task_table.cluster_push->getShardCount()); + if (shards != task_table.cluster_push->getShardCount()) + { + return TaskStatus::Error; + } + } + catch (...) + { + tryLogCurrentException(log, "Error while creating original table. Maybe we are not first."); + } + + return TaskStatus::Finished; +} + /// Job for copying partition from particular shard. 
TaskStatus ClusterCopier::tryProcessPartitionTask(const ConnectionTimeouts & timeouts, ShardPartition & task_partition, bool is_unprioritized_task) { @@ -1142,7 +1211,7 @@ TaskStatus ClusterCopier::processPartitionPieceTaskImpl( auto split_table_for_current_piece = task_shard.list_of_split_tables_on_shard[current_piece_number]; - auto zookeeper = context.getZooKeeper(); + auto zookeeper = getContext()->getZooKeeper(); const String piece_is_dirty_flag_path = partition_piece.getPartitionPieceIsDirtyPath(); const String piece_is_dirty_cleaned_path = partition_piece.getPartitionPieceIsCleanedPath(); @@ -1193,7 +1262,7 @@ TaskStatus ClusterCopier::processPartitionPieceTaskImpl( ParserQuery p_query(query.data() + query.size()); - const auto & settings = context.getSettingsRef(); + const auto & settings = getContext()->getSettingsRef(); return parseQuery(p_query, query, settings.max_query_size, settings.max_parser_depth); }; @@ -1297,10 +1366,10 @@ TaskStatus ClusterCopier::processPartitionPieceTaskImpl( ASTPtr query_select_ast = get_select_query(split_table_for_current_piece, "count()", /*enable_splitting*/ true); UInt64 count; { - Context local_context = context; + auto local_context = Context::createCopy(context); // Use pull (i.e. readonly) settings, but fetch data from destination servers - local_context.setSettings(task_cluster->settings_pull); - local_context.setSetting("skip_unavailable_shards", true); + local_context->setSettings(task_cluster->settings_pull); + local_context->setSetting("skip_unavailable_shards", true); Block block = getBlockWithAllStreamData(InterpreterFactory::get(query_select_ast, local_context)->execute().getInputStream()); count = (block) ? block.safeGetByPosition(0).column->getUInt(0) : 0; @@ -1366,8 +1435,17 @@ TaskStatus ClusterCopier::processPartitionPieceTaskImpl( LOG_DEBUG(log, "Create destination tables. 
Query: {}", query); UInt64 shards = executeQueryOnCluster(task_table.cluster_push, query, task_cluster->settings_push, PoolMode::GET_MANY); - LOG_DEBUG(log, "Destination tables {} have been created on {} shards of {}", - getQuotedTable(task_table.table_push), shards, task_table.cluster_push->getShardCount()); + LOG_INFO( + log, + "Destination tables {} have been created on {} shards of {}", + getQuotedTable(task_table.table_push), + shards, + task_table.cluster_push->getShardCount()); + + if (shards != task_table.cluster_push->getShardCount()) + { + return TaskStatus::Error; + } } /// Do the copying @@ -1390,7 +1468,7 @@ TaskStatus ClusterCopier::processPartitionPieceTaskImpl( query += "INSERT INTO " + getQuotedTable(split_table_for_current_piece) + " VALUES "; ParserQuery p_query(query.data() + query.size()); - const auto & settings = context.getSettingsRef(); + const auto & settings = getContext()->getSettingsRef(); query_insert_ast = parseQuery(p_query, query, settings.max_query_size, settings.max_parser_depth); LOG_DEBUG(log, "Executing INSERT query: {}", query); @@ -1398,18 +1476,18 @@ TaskStatus ClusterCopier::processPartitionPieceTaskImpl( try { - std::unique_ptr context_select = std::make_unique(context); + auto context_select = Context::createCopy(context); context_select->setSettings(task_cluster->settings_pull); - std::unique_ptr context_insert = std::make_unique(context); + auto context_insert = Context::createCopy(context); context_insert->setSettings(task_cluster->settings_push); /// Custom INSERT SELECT implementation BlockInputStreamPtr input; BlockOutputStreamPtr output; { - BlockIO io_select = InterpreterFactory::get(query_select_ast, *context_select)->execute(); - BlockIO io_insert = InterpreterFactory::get(query_insert_ast, *context_insert)->execute(); + BlockIO io_select = InterpreterFactory::get(query_select_ast, context_select)->execute(); + BlockIO io_insert = InterpreterFactory::get(query_insert_ast, context_insert)->execute(); input = io_select.getInputStream(); output = io_insert.out; @@ -1477,26 +1555,6 @@ TaskStatus ClusterCopier::processPartitionPieceTaskImpl( LOG_INFO(log, "Partition {} piece {} copied. But not moved to original destination table.", task_partition.name, toString(current_piece_number)); - - /// Try create original table (if not exists) on each shard - try - { - auto create_query_push_ast = rewriteCreateQueryStorage(task_shard.current_pull_table_create_query, - task_table.table_push, task_table.engine_push_ast); - auto & create = create_query_push_ast->as(); - create.if_not_exists = true; - InterpreterCreateQuery::prepareOnClusterQuery(create, context, task_table.cluster_push_name); - String query = queryToString(create_query_push_ast); - - LOG_DEBUG(log, "Create destination tables. Query: {}", query); - UInt64 shards = executeQueryOnCluster(task_table.cluster_push, query, task_cluster->settings_push, PoolMode::GET_MANY); - LOG_DEBUG(log, "Destination tables {} have been created on {} shards of {}", getQuotedTable(task_table.table_push), shards, task_table.cluster_push->getShardCount()); - } - catch (...) - { - tryLogCurrentException(log, "Error while creating original table. 
Maybe we are not first."); - } - /// Finalize the processing, change state of current partition task (and also check is_dirty flag) { String state_finished = TaskStateWithOwner::getData(TaskState::Finished, host_id); @@ -1523,7 +1581,7 @@ void ClusterCopier::dropAndCreateLocalTable(const ASTPtr & create_ast) const auto & create = create_ast->as(); dropLocalTableIfExists({create.database, create.table}); - InterpreterCreateQuery interpreter(create_ast, context); + InterpreterCreateQuery interpreter(create_ast, getContext()); interpreter.execute(); } @@ -1534,37 +1592,40 @@ void ClusterCopier::dropLocalTableIfExists(const DatabaseAndTableName & table_na drop_ast->database = table_name.first; drop_ast->table = table_name.second; - InterpreterDropQuery interpreter(drop_ast, context); + InterpreterDropQuery interpreter(drop_ast, getContext()); interpreter.execute(); } +void ClusterCopier::dropHelpingTablesByPieceNumber(const TaskTable & task_table, size_t current_piece_number) +{ + LOG_DEBUG(log, "Removing helping tables piece {}", current_piece_number); + + DatabaseAndTableName original_table = task_table.table_push; + DatabaseAndTableName helping_table + = DatabaseAndTableName(original_table.first, original_table.second + "_piece_" + toString(current_piece_number)); + + String query = "DROP TABLE IF EXISTS " + getQuotedTable(helping_table); + + const ClusterPtr & cluster_push = task_table.cluster_push; + Settings settings_push = task_cluster->settings_push; + + LOG_DEBUG(log, "Execute distributed DROP TABLE: {}", query); + + /// We have to drop partition_piece on each replica + UInt64 num_nodes = executeQueryOnCluster(cluster_push, query, settings_push, PoolMode::GET_MANY, ClusterExecutionMode::ON_EACH_NODE); + + LOG_INFO(log, "DROP TABLE query was successfully executed on {} nodes.", toString(num_nodes)); +} void ClusterCopier::dropHelpingTables(const TaskTable & task_table) { LOG_DEBUG(log, "Removing helping tables"); for (size_t current_piece_number = 0; current_piece_number < task_table.number_of_splits; ++current_piece_number) { - DatabaseAndTableName original_table = task_table.table_push; - DatabaseAndTableName helping_table = DatabaseAndTableName(original_table.first, original_table.second + "_piece_" + toString(current_piece_number)); - - String query = "DROP TABLE IF EXISTS " + getQuotedTable(helping_table); - - const ClusterPtr & cluster_push = task_table.cluster_push; - Settings settings_push = task_cluster->settings_push; - - LOG_DEBUG(log, "Execute distributed DROP TABLE: {}", query); - /// We have to drop partition_piece on each replica - UInt64 num_nodes = executeQueryOnCluster( - cluster_push, query, - settings_push, - PoolMode::GET_MANY, - ClusterExecutionMode::ON_EACH_NODE); - - LOG_DEBUG(log, "DROP TABLE query was successfully executed on {} nodes.", toString(num_nodes)); + dropHelpingTablesByPieceNumber(task_table, current_piece_number); } } - void ClusterCopier::dropParticularPartitionPieceFromAllHelpingTables(const TaskTable & task_table, const String & partition_name) { LOG_DEBUG(log, "Try drop partition partition from all helping tables."); @@ -1586,15 +1647,15 @@ void ClusterCopier::dropParticularPartitionPieceFromAllHelpingTables(const TaskT PoolMode::GET_MANY, ClusterExecutionMode::ON_EACH_NODE); - LOG_DEBUG(log, "DROP PARTITION query was successfully executed on {} nodes.", toString(num_nodes)); + LOG_INFO(log, "DROP PARTITION query was successfully executed on {} nodes.", toString(num_nodes)); } LOG_DEBUG(log, "All helping tables dropped partition {}", 
partition_name); } String ClusterCopier::getRemoteCreateTable(const DatabaseAndTableName & table, Connection & connection, const Settings & settings) { - Context remote_context(context); - remote_context.setSettings(settings); + auto remote_context = Context::createCopy(context); + remote_context->setSettings(settings); String query = "SHOW CREATE TABLE " + getQuotedTable(table); Block block = getBlockWithAllStreamData(std::make_shared( @@ -1613,7 +1674,7 @@ ASTPtr ClusterCopier::getCreateTableForPullShard(const ConnectionTimeouts & time task_cluster->settings_pull); ParserCreateQuery parser_create_query; - const auto & settings = context.getSettingsRef(); + const auto & settings = getContext()->getSettingsRef(); return parseQuery(parser_create_query, create_query_pull_str, settings.max_query_size, settings.max_parser_depth); } @@ -1642,7 +1703,7 @@ void ClusterCopier::createShardInternalTables(const ConnectionTimeouts & timeout /// Create special cluster with single shard String shard_read_cluster_name = read_shard_prefix + task_table.cluster_pull_name; ClusterPtr cluster_pull_current_shard = task_table.cluster_pull->getClusterWithSingleShard(task_shard.indexInCluster()); - context.setCluster(shard_read_cluster_name, cluster_pull_current_shard); + getContext()->setCluster(shard_read_cluster_name, cluster_pull_current_shard); auto storage_shard_ast = createASTStorageDistributed(shard_read_cluster_name, task_table.table_pull.first, task_table.table_pull.second); @@ -1702,13 +1763,13 @@ std::set ClusterCopier::getShardPartitions(const ConnectionTimeouts & ti } ParserQuery parser_query(query.data() + query.size()); - const auto & settings = context.getSettingsRef(); + const auto & settings = getContext()->getSettingsRef(); ASTPtr query_ast = parseQuery(parser_query, query, settings.max_query_size, settings.max_parser_depth); LOG_DEBUG(log, "Computing destination partition set, executing query: {}", query); - Context local_context = context; - local_context.setSettings(task_cluster->settings_pull); + auto local_context = Context::createCopy(context); + local_context->setSettings(task_cluster->settings_pull); Block block = getBlockWithAllStreamData(InterpreterFactory::get(query_ast, local_context)->execute().getInputStream()); if (block) @@ -1719,7 +1780,7 @@ std::set ClusterCopier::getShardPartitions(const ConnectionTimeouts & ti for (size_t i = 0; i < column.column->size(); ++i) { WriteBufferFromOwnString wb; - column.type->serializeAsTextQuoted(*column.column, i, wb, FormatSettings()); + column.type->getDefaultSerialization()->serializeTextQuoted(*column.column, i, wb, FormatSettings()); res.emplace(wb.str()); } } @@ -1748,11 +1809,11 @@ bool ClusterCopier::checkShardHasPartition(const ConnectionTimeouts & timeouts, LOG_DEBUG(log, "Checking shard {} for partition {} existence, executing query: {}", task_shard.getDescription(), partition_quoted_name, query); ParserQuery parser_query(query.data() + query.size()); -const auto & settings = context.getSettingsRef(); + const auto & settings = getContext()->getSettingsRef(); ASTPtr query_ast = parseQuery(parser_query, query, settings.max_query_size, settings.max_parser_depth); - Context local_context = context; - local_context.setSettings(task_cluster->settings_pull); + auto local_context = Context::createCopy(context); + local_context->setSettings(task_cluster->settings_pull); return InterpreterFactory::get(query_ast, local_context)->execute().getInputStream()->read().rows() != 0; } @@ -1787,11 +1848,11 @@ bool 
ClusterCopier::checkPresentPartitionPiecesOnCurrentShard(const ConnectionTi LOG_DEBUG(log, "Checking shard {} for partition {} piece {} existence, executing query: {}", task_shard.getDescription(), partition_quoted_name, std::to_string(current_piece_number), query); ParserQuery parser_query(query.data() + query.size()); - const auto & settings = context.getSettingsRef(); + const auto & settings = getContext()->getSettingsRef(); ASTPtr query_ast = parseQuery(parser_query, query, settings.max_query_size, settings.max_parser_depth); - Context local_context = context; - local_context.setSettings(task_cluster->settings_pull); + auto local_context = Context::createCopy(context); + local_context->setSettings(task_cluster->settings_pull); auto result = InterpreterFactory::get(query_ast, local_context)->execute().getInputStream()->read().rows(); if (result != 0) LOG_DEBUG(log, "Partition {} piece number {} is PRESENT on shard {}", partition_quoted_name, std::to_string(current_piece_number), task_shard.getDescription()); @@ -1847,7 +1908,7 @@ UInt64 ClusterCopier::executeQueryOnCluster( /// In that case we don't have local replicas, but do it just in case for (UInt64 i = 0; i < num_local_replicas; ++i) { - auto interpreter = InterpreterFactory::get(query_ast, context); + auto interpreter = InterpreterFactory::get(query_ast, getContext()); interpreter->execute(); if (increment_and_check_exit()) @@ -1862,8 +1923,8 @@ UInt64 ClusterCopier::executeQueryOnCluster( auto timeouts = ConnectionTimeouts::getTCPTimeoutsWithFailover(shard_settings).getSaturated(shard_settings.max_execution_time); auto connections = shard.pool->getMany(timeouts, &shard_settings, pool_mode); - Context shard_context(context); - shard_context.setSettings(shard_settings); + auto shard_context = Context::createCopy(context); + shard_context->setSettings(shard_settings); for (auto & connection : connections) { diff --git a/programs/copier/ClusterCopier.h b/programs/copier/ClusterCopier.h index 9aff5493cf8..e875ca7df2e 100644 --- a/programs/copier/ClusterCopier.h +++ b/programs/copier/ClusterCopier.h @@ -12,18 +12,17 @@ namespace DB { -class ClusterCopier +class ClusterCopier : WithContext { public: ClusterCopier(const String & task_path_, const String & host_id_, const String & proxy_database_name_, - Context & context_) - : + ContextPtr context_) + : WithContext(context_), task_zookeeper_path(task_path_), host_id(host_id_), working_database_name(proxy_database_name_), - context(context_), log(&Poco::Logger::get("ClusterCopier")) {} void init(); @@ -36,7 +35,7 @@ public: /// Compute set of partitions, assume set of partitions aren't changed during the processing void discoverTablePartitions(const ConnectionTimeouts & timeouts, TaskTable & task_table, UInt64 num_threads = 0); - void uploadTaskDescription(const std::string & task_path, const std::string & task_file, const bool force); + void uploadTaskDescription(const std::string & task_path, const std::string & task_file, bool force); void reloadTaskDescription(); @@ -120,15 +119,16 @@ protected: /// Removes MATERIALIZED and ALIAS columns from create table query static ASTPtr removeAliasColumnsFromCreateQuery(const ASTPtr & query_ast); - bool tryDropPartitionPiece(ShardPartition & task_partition, const size_t current_piece_number, + bool tryDropPartitionPiece(ShardPartition & task_partition, size_t current_piece_number, const zkutil::ZooKeeperPtr & zookeeper, const CleanStateClock & clean_state_clock); - static constexpr UInt64 max_table_tries = 1000; - static constexpr UInt64 
max_shard_partition_tries = 600; - static constexpr UInt64 max_shard_partition_piece_tries_for_alter = 100; + static constexpr UInt64 max_table_tries = 3; + static constexpr UInt64 max_shard_partition_tries = 3; + static constexpr UInt64 max_shard_partition_piece_tries_for_alter = 3; bool tryProcessTable(const ConnectionTimeouts & timeouts, TaskTable & task_table); + TaskStatus tryCreateDestinationTable(const ConnectionTimeouts & timeouts, TaskTable & task_table); /// Job for copying partition from particular shard. TaskStatus tryProcessPartitionTask(const ConnectionTimeouts & timeouts, ShardPartition & task_partition, @@ -140,7 +140,7 @@ protected: TaskStatus processPartitionPieceTaskImpl(const ConnectionTimeouts & timeouts, ShardPartition & task_partition, - const size_t current_piece_number, + size_t current_piece_number, bool is_unprioritized_task); void dropAndCreateLocalTable(const ASTPtr & create_ast); @@ -149,6 +149,8 @@ protected: void dropHelpingTables(const TaskTable & task_table); + void dropHelpingTablesByPieceNumber(const TaskTable & task_table, size_t current_piece_number); + /// Used to reduce disk space usage. /// After all pieces were successfully moved to original destination /// table we can get rid of partition pieces (partitions in helping tables). @@ -216,7 +218,6 @@ private: bool experimental_use_sample_offset{false}; - Context & context; Poco::Logger * log; std::chrono::milliseconds default_sleep_time{1000}; diff --git a/programs/copier/ClusterCopierApp.cpp b/programs/copier/ClusterCopierApp.cpp index e3169a49ecf..d3fff616b65 100644 --- a/programs/copier/ClusterCopierApp.cpp +++ b/programs/copier/ClusterCopierApp.cpp @@ -3,6 +3,7 @@ #include #include #include +#include #include @@ -110,9 +111,9 @@ void ClusterCopierApp::mainImpl() LOG_INFO(log, "Starting clickhouse-copier (id {}, host_id {}, path {}, revision {})", process_id, host_id, process_path, ClickHouseRevision::getVersionRevision()); SharedContextHolder shared_context = Context::createShared(); - auto context = std::make_unique<Context>(Context::createGlobal(shared_context.get())); + auto context = Context::createGlobal(shared_context.get()); context->makeGlobalContext(); - SCOPE_EXIT(context->shutdown()); + SCOPE_EXIT_SAFE(context->shutdown()); context->setConfig(loaded_config.configuration); context->setApplicationType(Context::ApplicationType::LOCAL); @@ -127,13 +128,13 @@ void ClusterCopierApp::mainImpl() registerFormats(); static const std::string default_database = "_local"; - DatabaseCatalog::instance().attachDatabase(default_database, std::make_shared<DatabaseMemory>(default_database, *context)); + DatabaseCatalog::instance().attachDatabase(default_database, std::make_shared<DatabaseMemory>(default_database, context)); context->setCurrentDatabase(default_database); /// Initialize query scope just in case. 
- CurrentThread::QueryScope query_scope(*context); + CurrentThread::QueryScope query_scope(context); - auto copier = std::make_unique<ClusterCopier>(task_path, host_id, default_database, *context); + auto copier = std::make_unique<ClusterCopier>(task_path, host_id, default_database, context); copier->setSafeMode(is_safe_mode); copier->setCopyFaultProbability(copy_fault_probability); copier->setMoveFaultProbability(move_fault_probability); diff --git a/programs/copier/Internals.cpp b/programs/copier/Internals.cpp index ea2be469945..bec612a8226 100644 --- a/programs/copier/Internals.cpp +++ b/programs/copier/Internals.cpp @@ -222,8 +222,8 @@ Names extractPrimaryKeyColumnNames(const ASTPtr & storage_ast) { String pk_column = primary_key_expr_list->children[i]->getColumnName(); if (pk_column != sorting_key_column) - throw Exception("Primary key must be a prefix of the sorting key, but in position " - + toString(i) + " its column is " + pk_column + ", not " + sorting_key_column, + throw Exception("Primary key must be a prefix of the sorting key, but the column at position " + + toString(i) + " is " + sorting_key_column + ", not " + pk_column, ErrorCodes::BAD_ARGUMENTS); if (!primary_key_columns_set.emplace(pk_column).second) diff --git a/programs/copier/TaskCluster.h b/programs/copier/TaskCluster.h index 5b28f461dd8..1a50597d07f 100644 --- a/programs/copier/TaskCluster.h +++ b/programs/copier/TaskCluster.h @@ -98,6 +98,7 @@ inline void DB::TaskCluster::reloadSettings(const Poco::Util::AbstractConfigurat set_default_value(settings_pull.max_block_size, 8192UL); set_default_value(settings_pull.preferred_block_size_bytes, 0); set_default_value(settings_push.insert_distributed_timeout, 0); + set_default_value(settings_push.replication_alter_partitions_sync, 2); } } diff --git a/programs/format/Format.cpp b/programs/format/Format.cpp index ab974169853..5bf19191353 100644 --- a/programs/format/Format.cpp +++ b/programs/format/Format.cpp @@ -1,6 +1,6 @@ +#include #include #include -#include #include #include @@ -50,6 +50,7 @@ int mainEntryClickHouseFormat(int argc, char ** argv) ("quiet,q", "just check syntax, no output on success") ("multiquery,n", "allow multiple queries in the same file") ("obfuscate", "obfuscate instead of formatting") + ("backslash", "add a backslash at the end of each line of the formatted query") ("seed", po::value<std::string>(), "seed (arbitrary string) that determines the result of obfuscation") ; @@ -70,6 +71,7 @@ int mainEntryClickHouseFormat(int argc, char ** argv) bool quiet = options.count("quiet"); bool multiple = options.count("multiquery"); bool obfuscate = options.count("obfuscate"); + bool backslash = options.count("backslash"); if (quiet && (hilite || oneline || obfuscate)) { @@ -100,8 +102,8 @@ int mainEntryClickHouseFormat(int argc, char ** argv) } SharedContextHolder shared_context = Context::createShared(); - Context context = Context::createGlobal(shared_context.get()); - context.makeGlobalContext(); + auto context = Context::createGlobal(shared_context.get()); + context->makeGlobalContext(); registerFunctions(); registerAggregateFunctions(); @@ -148,12 +150,39 @@ int mainEntryClickHouseFormat(int argc, char ** argv) } if (!quiet) { - WriteBufferFromOStream res_buf(std::cout, 4096); - formatAST(*res, res_buf, hilite, oneline); - res_buf.next(); - if (multiple) - std::cout << "\n;\n"; - std::cout << std::endl; + if (!backslash) + { + WriteBufferFromOStream res_buf(std::cout, 4096); + formatAST(*res, res_buf, hilite, oneline); + res_buf.next(); + if (multiple) + std::cout << "\n;\n"; + std::cout << 
std::endl; + } + /// Add an additional '\' at the end of each line of the formatted query. + else + { + WriteBufferFromOwnString str_buf; + formatAST(*res, str_buf, hilite, oneline); + + auto res_string = str_buf.str(); + WriteBufferFromOStream res_cout(std::cout, 4096); + + const char * s_pos = res_string.data(); + const char * s_end = s_pos + res_string.size(); + + while (s_pos != s_end) + { + if (*s_pos == '\n') + res_cout.write(" \\", 2); + res_cout.write(*s_pos++); + } + + res_cout.next(); + if (multiple) + std::cout << " \\\n;\n"; + std::cout << std::endl; + } } do diff --git a/programs/git-import/git-import.cpp b/programs/git-import/git-import.cpp index ae8b55e2aff..b07435dcf78 100644 --- a/programs/git-import/git-import.cpp +++ b/programs/git-import/git-import.cpp @@ -1064,7 +1064,7 @@ void processCommit( time_t commit_time; readText(commit_time, in); - commit.time = commit_time; + commit.time = LocalDateTime(commit_time); assertChar('\0', in); readNullTerminated(commit.author, in); std::string parent_hash; diff --git a/programs/install/Install.cpp b/programs/install/Install.cpp index ef72624e7ab..96d336673d0 100644 --- a/programs/install/Install.cpp +++ b/programs/install/Install.cpp @@ -71,6 +71,9 @@ namespace ErrorCodes } +/// ANSI escape sequences for intense color in the terminal. +#define HILITE "\033[1m" +#define END_HILITE "\033[0m" using namespace DB; namespace po = boost::program_options; @@ -559,20 +562,32 @@ int mainEntryClickHouseInstall(int argc, char ** argv) bool stdin_is_a_tty = isatty(STDIN_FILENO); bool stdout_is_a_tty = isatty(STDOUT_FILENO); - bool is_interactive = stdin_is_a_tty && stdout_is_a_tty; + + /// dpkg or apt installers can ask for non-interactive work explicitly. + + const char * debian_frontend_var = getenv("DEBIAN_FRONTEND"); + bool noninteractive = debian_frontend_var && debian_frontend_var == std::string_view("noninteractive"); + + bool is_interactive = !noninteractive && stdin_is_a_tty && stdout_is_a_tty; + + /// We can ask for a password even if stdin is closed/redirected but /dev/tty is available. + bool can_ask_password = !noninteractive && stdout_is_a_tty; if (has_password_for_default_user) { - fmt::print("Password for default user is already specified. To remind or reset, see {} and {}.\n", + fmt::print(HILITE "Password for default user is already specified. To remind or reset, see {} and {}." END_HILITE "\n", users_config_file.string(), users_d.string()); } - else if (!is_interactive) + else if (!can_ask_password) { - fmt::print("Password for default user is empty string. See {} and {} to change it.\n", + fmt::print(HILITE "Password for default user is empty string. See {} and {} to change it." END_HILITE "\n", users_config_file.string(), users_d.string()); } else { + /// NOTE: When installing a debian package with dpkg -i, stdin is not a terminal, but we are still able to enter the password. + /// A more sophisticated method using /dev/tty is used inside the `readpassphrase` function. + char buf[1000] = {}; std::string password; if (auto * result = readpassphrase("Enter password for default user: ", buf, sizeof(buf), 0)) @@ -600,7 +615,7 @@ int mainEntryClickHouseInstall(int argc, char ** argv) "
\n"; out.sync(); out.finalize(); - fmt::print("Password for default user is saved in file {}.\n", password_file); + fmt::print(HILITE "Password for default user is saved in file {}." END_HILITE "\n", password_file); #else out << "\n" " \n" @@ -611,12 +626,12 @@ int mainEntryClickHouseInstall(int argc, char ** argv) "\n"; out.sync(); out.finalize(); - fmt::print("Password for default user is saved in plaintext in file {}.\n", password_file); + fmt::print(HILITE "Password for default user is saved in plaintext in file {}." END_HILITE "\n", password_file); #endif has_password_for_default_user = true; } else - fmt::print("Password for default user is empty string. See {} and {} to change it.\n", + fmt::print(HILITE "Password for default user is empty string. See {} and {} to change it." END_HILITE "\n", users_config_file.string(), users_d.string()); } @@ -641,7 +656,6 @@ int mainEntryClickHouseInstall(int argc, char ** argv) " This is optional. Taskstats accounting will be disabled." " To enable taskstats accounting you may add the required capability later manually.\"", "/tmp/test_setcap.sh", fs::canonical(main_bin_path).string()); - fmt::print(" {}\n", command); executeScript(command); #endif @@ -830,8 +844,8 @@ namespace fmt::print("The pidof command returned unusual output.\n"); } - WriteBufferFromFileDescriptor stderr(STDERR_FILENO); - copyData(sh->err, stderr); + WriteBufferFromFileDescriptor std_err(STDERR_FILENO); + copyData(sh->err, std_err); sh->tryWait(); } @@ -842,6 +856,13 @@ namespace { fmt::print("The process with pid = {} is running.\n", pid); } + else if (errno == ESRCH) + { + fmt::print("The process with pid = {} does not exist.\n", pid); + return 0; + } + else + throwFromErrno(fmt::format("Cannot obtain the status of pid {} with `kill`", pid), ErrorCodes::CANNOT_KILL); } if (!pid) diff --git a/programs/library-bridge/CMakeLists.txt b/programs/library-bridge/CMakeLists.txt new file mode 100644 index 00000000000..0913c6e4a9a --- /dev/null +++ b/programs/library-bridge/CMakeLists.txt @@ -0,0 +1,25 @@ +set (CLICKHOUSE_LIBRARY_BRIDGE_SOURCES + library-bridge.cpp + LibraryInterface.cpp + LibraryBridge.cpp + Handlers.cpp + HandlerFactory.cpp + SharedLibraryHandler.cpp + SharedLibraryHandlerFactory.cpp +) + +if (OS_LINUX) + set (CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -Wl,--no-export-dynamic") +endif () + +add_executable(clickhouse-library-bridge ${CLICKHOUSE_LIBRARY_BRIDGE_SOURCES}) + +target_link_libraries(clickhouse-library-bridge PRIVATE + daemon + dbms + bridge +) + +set_target_properties(clickhouse-library-bridge PROPERTIES RUNTIME_OUTPUT_DIRECTORY ..) 
+ +install(TARGETS clickhouse-library-bridge RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT clickhouse) diff --git a/programs/library-bridge/HandlerFactory.cpp b/programs/library-bridge/HandlerFactory.cpp new file mode 100644 index 00000000000..9f53a24156f --- /dev/null +++ b/programs/library-bridge/HandlerFactory.cpp @@ -0,0 +1,23 @@ +#include "HandlerFactory.h" + +#include +#include +#include "Handlers.h" + + +namespace DB +{ + std::unique_ptr<HTTPRequestHandler> LibraryBridgeHandlerFactory::createRequestHandler(const HTTPServerRequest & request) + { + Poco::URI uri{request.getURI()}; + LOG_DEBUG(log, "Request URI: {}", uri.toString()); + + if (uri == "/ping" && request.getMethod() == Poco::Net::HTTPRequest::HTTP_GET) + return std::make_unique<PingHandler>(keep_alive_timeout); + + if (request.getMethod() == Poco::Net::HTTPRequest::HTTP_POST) + return std::make_unique<LibraryRequestHandler>(keep_alive_timeout, getContext()); + + return nullptr; + } +} diff --git a/programs/library-bridge/HandlerFactory.h b/programs/library-bridge/HandlerFactory.h new file mode 100644 index 00000000000..93f0721bf01 --- /dev/null +++ b/programs/library-bridge/HandlerFactory.h @@ -0,0 +1,37 @@ +#pragma once + +#include +#include +#include + + +namespace DB +{ + +class SharedLibraryHandler; +using SharedLibraryHandlerPtr = std::shared_ptr<SharedLibraryHandler>; + +/// Factory for '/ping', '/' handlers. +class LibraryBridgeHandlerFactory : public HTTPRequestHandlerFactory, WithContext +{ +public: + LibraryBridgeHandlerFactory( + const std::string & name_, + size_t keep_alive_timeout_, + ContextPtr context_) + : WithContext(context_) + , log(&Poco::Logger::get(name_)) + , name(name_) + , keep_alive_timeout(keep_alive_timeout_) + { + } + + std::unique_ptr<HTTPRequestHandler> createRequestHandler(const HTTPServerRequest & request) override; + +private: + Poco::Logger * log; + std::string name; + size_t keep_alive_timeout; +}; + +} diff --git a/programs/library-bridge/Handlers.cpp b/programs/library-bridge/Handlers.cpp new file mode 100644 index 00000000000..6a1bfbbccb7 --- /dev/null +++ b/programs/library-bridge/Handlers.cpp @@ -0,0 +1,288 @@ +#include "Handlers.h" +#include "SharedLibraryHandlerFactory.h" + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + + +namespace DB +{ +namespace +{ + std::shared_ptr<Block> parseColumns(std::string && column_string) + { + auto sample_block = std::make_shared<Block>(); + auto names_and_types = NamesAndTypesList::parse(column_string); + + for (const NameAndTypePair & column_data : names_and_types) + sample_block->insert({column_data.type, column_data.name}); + + return sample_block; + } + + std::vector<uint64_t> parseIdsFromBinary(const std::string & ids_string) + { + ReadBufferFromString buf(ids_string); + std::vector<uint64_t> ids; + readVectorBinary(ids, buf); + return ids; + } + + std::vector<std::string> parseNamesFromBinary(const std::string & names_string) + { + ReadBufferFromString buf(names_string); + std::vector<std::string> names; + readVectorBinary(names, buf); + return names; + } +} + + +void LibraryRequestHandler::handleRequest(HTTPServerRequest & request, HTTPServerResponse & response) +{ + LOG_TRACE(log, "Request URI: {}", request.getURI()); + HTMLForm params(request); + + if (!params.has("method")) + { + processError(response, "No 'method' in request URL"); + return; + } + + if (!params.has("dictionary_id")) + { + processError(response, "No 'dictionary_id' in request URL"); + return; + } + + std::string method = params.get("method"); + std::string dictionary_id = params.get("dictionary_id"); + LOG_TRACE(log, "Library method: '{}', 
dictionary id: {}", method, dictionary_id); + + WriteBufferFromHTTPServerResponse out(response, request.getMethod() == Poco::Net::HTTPRequest::HTTP_HEAD, keep_alive_timeout); + + try + { + if (method == "libNew") + { + auto & read_buf = request.getStream(); + params.read(read_buf); + + if (!params.has("library_path")) + { + processError(response, "No 'library_path' in request URL"); + return; + } + + if (!params.has("library_settings")) + { + processError(response, "No 'library_settings' in request URL"); + return; + } + + std::string library_path = params.get("library_path"); + const auto & settings_string = params.get("library_settings"); + std::vector library_settings = parseNamesFromBinary(settings_string); + + /// Needed for library dictionary + if (!params.has("attributes_names")) + { + processError(response, "No 'attributes_names' in request URL"); + return; + } + + const auto & attributes_string = params.get("attributes_names"); + std::vector attributes_names = parseNamesFromBinary(attributes_string); + + /// Needed to parse block from binary string format + if (!params.has("sample_block")) + { + processError(response, "No 'sample_block' in request URL"); + return; + } + std::string sample_block_string = params.get("sample_block"); + + std::shared_ptr sample_block; + try + { + sample_block = parseColumns(std::move(sample_block_string)); + } + catch (const Exception & ex) + { + processError(response, "Invalid 'sample_block' parameter in request body '" + ex.message() + "'"); + LOG_WARNING(log, ex.getStackTraceString()); + return; + } + + if (!params.has("null_values")) + { + processError(response, "No 'null_values' in request URL"); + return; + } + + ReadBufferFromString read_block_buf(params.get("null_values")); + auto format = FormatFactory::instance().getInput(FORMAT, read_block_buf, *sample_block, getContext(), DEFAULT_BLOCK_SIZE); + auto reader = std::make_shared(format); + auto sample_block_with_nulls = reader->read(); + + LOG_DEBUG(log, "Dictionary sample block with null values: {}", sample_block_with_nulls.dumpStructure()); + + SharedLibraryHandlerFactory::instance().create(dictionary_id, library_path, library_settings, sample_block_with_nulls, attributes_names); + writeStringBinary("1", out); + } + else if (method == "libClone") + { + if (!params.has("from_dictionary_id")) + { + processError(response, "No 'from_dictionary_id' in request URL"); + return; + } + + std::string from_dictionary_id = params.get("from_dictionary_id"); + LOG_TRACE(log, "Calling libClone from {} to {}", from_dictionary_id, dictionary_id); + SharedLibraryHandlerFactory::instance().clone(from_dictionary_id, dictionary_id); + writeStringBinary("1", out); + } + else if (method == "libDelete") + { + SharedLibraryHandlerFactory::instance().remove(dictionary_id); + writeStringBinary("1", out); + } + else if (method == "isModified") + { + auto library_handler = SharedLibraryHandlerFactory::instance().get(dictionary_id); + bool res = library_handler->isModified(); + writeStringBinary(std::to_string(res), out); + } + else if (method == "supportsSelectiveLoad") + { + auto library_handler = SharedLibraryHandlerFactory::instance().get(dictionary_id); + bool res = library_handler->supportsSelectiveLoad(); + writeStringBinary(std::to_string(res), out); + } + else if (method == "loadAll") + { + auto library_handler = SharedLibraryHandlerFactory::instance().get(dictionary_id); + const auto & sample_block = library_handler->getSampleBlock(); + auto input = library_handler->loadAll(); + + BlockOutputStreamPtr output = 
FormatFactory::instance().getOutputStream(FORMAT, out, sample_block, getContext()); + copyData(*input, *output); + } + else if (method == "loadIds") + { + params.read(request.getStream()); + + if (!params.has("ids")) + { + processError(response, "No 'ids' in request URL"); + return; + } + + std::vector ids = parseIdsFromBinary(params.get("ids")); + auto library_handler = SharedLibraryHandlerFactory::instance().get(dictionary_id); + const auto & sample_block = library_handler->getSampleBlock(); + auto input = library_handler->loadIds(ids); + BlockOutputStreamPtr output = FormatFactory::instance().getOutputStream(FORMAT, out, sample_block, getContext()); + copyData(*input, *output); + } + else if (method == "loadKeys") + { + if (!params.has("requested_block_sample")) + { + processError(response, "No 'requested_block_sample' in request URL"); + return; + } + + std::string requested_block_string = params.get("requested_block_sample"); + + std::shared_ptr requested_sample_block; + try + { + requested_sample_block = parseColumns(std::move(requested_block_string)); + } + catch (const Exception & ex) + { + processError(response, "Invalid 'requested_block' parameter in request body '" + ex.message() + "'"); + LOG_WARNING(log, ex.getStackTraceString()); + return; + } + + auto & read_buf = request.getStream(); + auto format = FormatFactory::instance().getInput(FORMAT, read_buf, *requested_sample_block, getContext(), DEFAULT_BLOCK_SIZE); + auto reader = std::make_shared(format); + auto block = reader->read(); + + auto library_handler = SharedLibraryHandlerFactory::instance().get(dictionary_id); + const auto & sample_block = library_handler->getSampleBlock(); + auto input = library_handler->loadKeys(block.getColumns()); + BlockOutputStreamPtr output = FormatFactory::instance().getOutputStream(FORMAT, out, sample_block, getContext()); + copyData(*input, *output); + } + } + catch (...) + { + auto message = getCurrentExceptionMessage(true); + response.setStatusAndReason(Poco::Net::HTTPResponse::HTTP_INTERNAL_SERVER_ERROR, message); // can't call process_error, because of too soon response sending + + try + { + writeStringBinary(message, out); + out.finalize(); + } + catch (...) + { + tryLogCurrentException(log); + } + + tryLogCurrentException(log); + } + + try + { + out.finalize(); + } + catch (...) + { + tryLogCurrentException(log); + } +} + + +void LibraryRequestHandler::processError(HTTPServerResponse & response, const std::string & message) +{ + response.setStatusAndReason(HTTPResponse::HTTP_INTERNAL_SERVER_ERROR); + + if (!response.sent()) + *response.send() << message << std::endl; + + LOG_WARNING(log, message); +} + + +void PingHandler::handleRequest(HTTPServerRequest & /* request */, HTTPServerResponse & response) +{ + try + { + setResponseDefaultHeaders(response, keep_alive_timeout); + const char * data = "Ok.\n"; + response.sendBuffer(data, strlen(data)); + } + catch (...) + { + tryLogCurrentException("PingHandler"); + } +} + + +} diff --git a/programs/library-bridge/Handlers.h b/programs/library-bridge/Handlers.h new file mode 100644 index 00000000000..dac61d3a735 --- /dev/null +++ b/programs/library-bridge/Handlers.h @@ -0,0 +1,59 @@ +#pragma once + +#include +#include +#include +#include "SharedLibraryHandler.h" + + +namespace DB +{ + + +/// Handler for requests to Library Dictionary Source, returns response in RowBinary format. +/// When a library dictionary source is created, it sends libNew request to library bridge (which is started on first +/// request to it, if it was not yet started). 
On this request a new SharedLibraryHandler is added to the +/// SharedLibraryHandlerFactory, keyed by the dictionary UUID. With the libNew request come: library_path, library_settings, +/// the names of the dictionary attributes, a sample block to parse the block of null values, and the block of null values. Everything is +/// passed in binary format and is urlencoded. When a dictionary is cloned, a new handler is created. +/// Each handler is unique to a dictionary. +class LibraryRequestHandler : public HTTPRequestHandler, WithContext +{ +public: + + LibraryRequestHandler( + size_t keep_alive_timeout_, + ContextPtr context_) + : WithContext(context_) + , log(&Poco::Logger::get("LibraryRequestHandler")) + , keep_alive_timeout(keep_alive_timeout_) + { + } + + void handleRequest(HTTPServerRequest & request, HTTPServerResponse & response) override; + +private: + static constexpr inline auto FORMAT = "RowBinary"; + + void processError(HTTPServerResponse & response, const std::string & message); + + Poco::Logger * log; + size_t keep_alive_timeout; +}; + + +class PingHandler : public HTTPRequestHandler +{ +public: + explicit PingHandler(size_t keep_alive_timeout_) + : keep_alive_timeout(keep_alive_timeout_) + { + } + + void handleRequest(HTTPServerRequest & request, HTTPServerResponse & response) override; + +private: + const size_t keep_alive_timeout; +}; + +} diff --git a/programs/library-bridge/LibraryBridge.cpp b/programs/library-bridge/LibraryBridge.cpp new file mode 100644 index 00000000000..2e5d6041151 --- /dev/null +++ b/programs/library-bridge/LibraryBridge.cpp @@ -0,0 +1,17 @@ +#include "LibraryBridge.h" + +#pragma GCC diagnostic ignored "-Wmissing-declarations" +int mainEntryClickHouseLibraryBridge(int argc, char ** argv) +{ + DB::LibraryBridge app; + try + { + return app.run(argc, argv); + } + catch (...) + { + std::cerr << DB::getCurrentExceptionMessage(true) << "\n"; + auto code = DB::getCurrentExceptionCode(); + return code ? 
code : 1; + } +} diff --git a/programs/library-bridge/LibraryBridge.h b/programs/library-bridge/LibraryBridge.h new file mode 100644 index 00000000000..9f2dafb89ab --- /dev/null +++ b/programs/library-bridge/LibraryBridge.h @@ -0,0 +1,26 @@ +#pragma once + +#include +#include +#include "HandlerFactory.h" + + +namespace DB +{ + +class LibraryBridge : public IBridge +{ + +protected: + std::string bridgeName() const override + { + return "LibraryBridge"; + } + + HandlerFactoryPtr getHandlerFactoryPtr(ContextPtr context) const override + { + return std::make_shared("LibraryRequestHandlerFactory-factory", keep_alive_timeout, context); + } +}; + +} diff --git a/src/Dictionaries/LibraryDictionarySourceExternal.cpp b/programs/library-bridge/LibraryInterface.cpp similarity index 55% rename from src/Dictionaries/LibraryDictionarySourceExternal.cpp rename to programs/library-bridge/LibraryInterface.cpp index 2e944056283..3975368c17f 100644 --- a/src/Dictionaries/LibraryDictionarySourceExternal.cpp +++ b/programs/library-bridge/LibraryInterface.cpp @@ -1,4 +1,5 @@ -#include "LibraryDictionarySourceExternal.h" +#include "LibraryInterface.h" + #include namespace @@ -6,10 +7,25 @@ namespace const char DICT_LOGGER_NAME[] = "LibraryDictionarySourceExternal"; } -void ClickHouseLibrary::log(ClickHouseLibrary::LogLevel level, ClickHouseLibrary::CString msg) +namespace ClickHouseLibrary { - using ClickHouseLibrary::LogLevel; +std::string_view LIBRARY_CREATE_NEW_FUNC_NAME = "ClickHouseDictionary_v3_libNew"; +std::string_view LIBRARY_CLONE_FUNC_NAME = "ClickHouseDictionary_v3_libClone"; +std::string_view LIBRARY_DELETE_FUNC_NAME = "ClickHouseDictionary_v3_libDelete"; + +std::string_view LIBRARY_DATA_NEW_FUNC_NAME = "ClickHouseDictionary_v3_dataNew"; +std::string_view LIBRARY_DATA_DELETE_FUNC_NAME = "ClickHouseDictionary_v3_dataDelete"; + +std::string_view LIBRARY_LOAD_ALL_FUNC_NAME = "ClickHouseDictionary_v3_loadAll"; +std::string_view LIBRARY_LOAD_IDS_FUNC_NAME = "ClickHouseDictionary_v3_loadIds"; +std::string_view LIBRARY_LOAD_KEYS_FUNC_NAME = "ClickHouseDictionary_v3_loadKeys"; + +std::string_view LIBRARY_IS_MODIFIED_FUNC_NAME = "ClickHouseDictionary_v3_isModified"; +std::string_view LIBRARY_SUPPORTS_SELECTIVE_LOAD_FUNC_NAME = "ClickHouseDictionary_v3_supportsSelectiveLoad"; + +void log(LogLevel level, CString msg) +{ auto & logger = Poco::Logger::get(DICT_LOGGER_NAME); switch (level) { @@ -47,3 +63,5 @@ void ClickHouseLibrary::log(ClickHouseLibrary::LogLevel level, ClickHouseLibrary break; } } + +} diff --git a/programs/library-bridge/LibraryInterface.h b/programs/library-bridge/LibraryInterface.h new file mode 100644 index 00000000000..d23de59bbb1 --- /dev/null +++ b/programs/library-bridge/LibraryInterface.h @@ -0,0 +1,110 @@ +#pragma once + +#include +#include + +#define CLICKHOUSE_DICTIONARY_LIBRARY_API 1 + +namespace ClickHouseLibrary +{ +using CString = const char *; +using ColumnName = CString; +using ColumnNames = ColumnName[]; + +struct CStrings +{ + CString * data = nullptr; + uint64_t size = 0; +}; + +struct VectorUInt64 +{ + const uint64_t * data = nullptr; + uint64_t size = 0; +}; + +struct ColumnsUInt64 +{ + VectorUInt64 * data = nullptr; + uint64_t size = 0; +}; + +struct Field +{ + const void * data = nullptr; + uint64_t size = 0; +}; + +struct Row +{ + const Field * data = nullptr; + uint64_t size = 0; +}; + +struct Table +{ + const Row * data = nullptr; + uint64_t size = 0; + uint64_t error_code = 0; // 0 = ok; !0 = error, with message in error_string + const char * error_string = nullptr; 
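+    /// A non-zero error_code (with an optional message in error_string) makes the bridge raise
+    /// EXTERNAL_LIBRARY_ERROR instead of decoding rows (see dataToBlock in SharedLibraryHandler.cpp).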
+}; + +enum LogLevel +{ + FATAL = 1, + CRITICAL, + ERROR, + WARNING, + NOTICE, + INFORMATION, + DEBUG, + TRACE, +}; + +void log(LogLevel level, CString msg); + +extern std::string_view LIBRARY_CREATE_NEW_FUNC_NAME; +extern std::string_view LIBRARY_CLONE_FUNC_NAME; +extern std::string_view LIBRARY_DELETE_FUNC_NAME; + +extern std::string_view LIBRARY_DATA_NEW_FUNC_NAME; +extern std::string_view LIBRARY_DATA_DELETE_FUNC_NAME; + +extern std::string_view LIBRARY_LOAD_ALL_FUNC_NAME; +extern std::string_view LIBRARY_LOAD_IDS_FUNC_NAME; +extern std::string_view LIBRARY_LOAD_KEYS_FUNC_NAME; + +extern std::string_view LIBRARY_IS_MODIFIED_FUNC_NAME; +extern std::string_view LIBRARY_SUPPORTS_SELECTIVE_LOAD_FUNC_NAME; + +using LibraryContext = void *; + +using LibraryLoggerFunc = void (*)(LogLevel, CString /* message */); + +using LibrarySettings = CStrings *; + +using LibraryNewFunc = LibraryContext (*)(LibrarySettings, LibraryLoggerFunc); +using LibraryCloneFunc = LibraryContext (*)(LibraryContext); +using LibraryDeleteFunc = void (*)(LibraryContext); + +using LibraryData = void *; +using LibraryDataNewFunc = LibraryData (*)(LibraryContext); +using LibraryDataDeleteFunc = void (*)(LibraryContext, LibraryData); + +/// Can be safely casted into const Table * with static_cast +using RawClickHouseLibraryTable = void *; +using RequestedColumnsNames = CStrings *; + +using LibraryLoadAllFunc = RawClickHouseLibraryTable (*)(LibraryData, LibrarySettings, RequestedColumnsNames); + +using RequestedIds = const VectorUInt64 *; +using LibraryLoadIdsFunc = RawClickHouseLibraryTable (*)(LibraryData, LibrarySettings, RequestedColumnsNames, RequestedIds); + +using RequestedKeys = Table *; +/// There are no requested column names for load keys func +using LibraryLoadKeysFunc = RawClickHouseLibraryTable (*)(LibraryData, LibrarySettings, RequestedKeys); + +using LibraryIsModifiedFunc = bool (*)(LibraryContext, LibrarySettings); +using LibrarySupportsSelectiveLoadFunc = bool (*)(LibraryContext, LibrarySettings); + +} diff --git a/programs/library-bridge/LibraryUtils.h b/programs/library-bridge/LibraryUtils.h new file mode 100644 index 00000000000..8ced8df1c48 --- /dev/null +++ b/programs/library-bridge/LibraryUtils.h @@ -0,0 +1,44 @@ +#pragma once + +#include +#include +#include +#include + +#include "LibraryInterface.h" + + +namespace DB +{ + +class CStringsHolder +{ + +public: + using Container = std::vector; + + explicit CStringsHolder(const Container & strings_pass) + { + strings_holder = strings_pass; + strings.size = strings_holder.size(); + + ptr_holder = std::make_unique(strings.size); + strings.data = ptr_holder.get(); + + size_t i = 0; + for (auto & str : strings_holder) + { + strings.data[i] = str.c_str(); + ++i; + } + } + + ClickHouseLibrary::CStrings strings; // will pass pointer to lib + +private: + std::unique_ptr ptr_holder = nullptr; + Container strings_holder; +}; + + +} diff --git a/programs/library-bridge/SharedLibraryHandler.cpp b/programs/library-bridge/SharedLibraryHandler.cpp new file mode 100644 index 00000000000..ab8cf2417c2 --- /dev/null +++ b/programs/library-bridge/SharedLibraryHandler.cpp @@ -0,0 +1,219 @@ +#include "SharedLibraryHandler.h" + +#include +#include +#include + + +namespace DB +{ + +namespace ErrorCodes +{ + extern const int EXTERNAL_LIBRARY_ERROR; + extern const int SIZES_OF_COLUMNS_DOESNT_MATCH; +} + + +SharedLibraryHandler::SharedLibraryHandler( + const std::string & library_path_, + const std::vector & library_settings, + const Block & sample_block_, + const std::vector & 
attributes_names_)
+    : library_path(library_path_)
+    , sample_block(sample_block_)
+    , attributes_names(attributes_names_)
+{
+    library = std::make_shared<SharedLibrary>(library_path, RTLD_LAZY);
+    settings_holder = std::make_shared<CStringsHolder>(CStringsHolder(library_settings));
+
+    auto lib_new = library->tryGet<ClickHouseLibrary::LibraryNewFunc>(ClickHouseLibrary::LIBRARY_CREATE_NEW_FUNC_NAME);
+
+    if (lib_new)
+        lib_data = lib_new(&settings_holder->strings, ClickHouseLibrary::log);
+    else
+        throw Exception("Method libNew was not found in library", ErrorCodes::EXTERNAL_LIBRARY_ERROR);
+}
+
+
+SharedLibraryHandler::SharedLibraryHandler(const SharedLibraryHandler & other)
+    : library_path{other.library_path}
+    , sample_block{other.sample_block}
+    , attributes_names{other.attributes_names}
+    , library{other.library}
+    , settings_holder{other.settings_holder}
+{
+
+    auto lib_clone = library->tryGet<ClickHouseLibrary::LibraryCloneFunc>(ClickHouseLibrary::LIBRARY_CLONE_FUNC_NAME);
+
+    if (lib_clone)
+    {
+        lib_data = lib_clone(other.lib_data);
+    }
+    else
+    {
+        auto lib_new = library->tryGet<ClickHouseLibrary::LibraryNewFunc>(ClickHouseLibrary::LIBRARY_CREATE_NEW_FUNC_NAME);
+
+        if (lib_new)
+            lib_data = lib_new(&settings_holder->strings, ClickHouseLibrary::log);
+        else
+            throw Exception("Neither method libClone nor libNew was found in library", ErrorCodes::EXTERNAL_LIBRARY_ERROR);
+    }
+}
+
+
+SharedLibraryHandler::~SharedLibraryHandler()
+{
+    auto lib_delete = library->tryGet<ClickHouseLibrary::LibraryDeleteFunc>(ClickHouseLibrary::LIBRARY_DELETE_FUNC_NAME);
+
+    if (lib_delete)
+        lib_delete(lib_data);
+}
+
+
+bool SharedLibraryHandler::isModified()
+{
+    auto func_is_modified = library->tryGet<ClickHouseLibrary::LibraryIsModifiedFunc>(ClickHouseLibrary::LIBRARY_IS_MODIFIED_FUNC_NAME);
+
+    if (func_is_modified)
+        return func_is_modified(lib_data, &settings_holder->strings);
+
+    return true;
+}
+
+
+bool SharedLibraryHandler::supportsSelectiveLoad()
+{
+    auto func_supports_selective_load = library->tryGet<ClickHouseLibrary::LibrarySupportsSelectiveLoadFunc>(ClickHouseLibrary::LIBRARY_SUPPORTS_SELECTIVE_LOAD_FUNC_NAME);
+
+    if (func_supports_selective_load)
+        return func_supports_selective_load(lib_data, &settings_holder->strings);
+
+    return true;
+}
+
+
+BlockInputStreamPtr SharedLibraryHandler::loadAll()
+{
+    auto columns_holder = std::make_unique<ClickHouseLibrary::CString[]>(attributes_names.size());
+    ClickHouseLibrary::CStrings columns{static_cast<decltype(ClickHouseLibrary::CStrings::data)>(columns_holder.get()), attributes_names.size()};
+    for (size_t i = 0; i < attributes_names.size(); ++i)
+        columns.data[i] = attributes_names[i].c_str();
+
+    auto load_all_func = library->get<ClickHouseLibrary::LibraryLoadAllFunc>(ClickHouseLibrary::LIBRARY_LOAD_ALL_FUNC_NAME);
+    auto data_new_func = library->get<ClickHouseLibrary::LibraryDataNewFunc>(ClickHouseLibrary::LIBRARY_DATA_NEW_FUNC_NAME);
+    auto data_delete_func = library->get<ClickHouseLibrary::LibraryDataDeleteFunc>(ClickHouseLibrary::LIBRARY_DATA_DELETE_FUNC_NAME);
+
+    ClickHouseLibrary::LibraryData data_ptr = data_new_func(lib_data);
+    SCOPE_EXIT(data_delete_func(lib_data, data_ptr));
+
+    ClickHouseLibrary::RawClickHouseLibraryTable data = load_all_func(data_ptr, &settings_holder->strings, &columns);
+    auto block = dataToBlock(data);
+
+    return std::make_shared<OneBlockInputStream>(block);
+}
+
+
+BlockInputStreamPtr SharedLibraryHandler::loadIds(const std::vector<uint64_t> & ids)
+{
+    const ClickHouseLibrary::VectorUInt64 ids_data{ext::bit_cast(ids.data()), ids.size()};
+
+    auto columns_holder = std::make_unique<ClickHouseLibrary::CString[]>(attributes_names.size());
+    ClickHouseLibrary::CStrings columns_pass{static_cast<decltype(ClickHouseLibrary::CStrings::data)>(columns_holder.get()), attributes_names.size()};
+
+    auto load_ids_func = library->get<ClickHouseLibrary::LibraryLoadIdsFunc>(ClickHouseLibrary::LIBRARY_LOAD_IDS_FUNC_NAME);
+    auto data_new_func = library->get<ClickHouseLibrary::LibraryDataNewFunc>(ClickHouseLibrary::LIBRARY_DATA_NEW_FUNC_NAME);
+    auto data_delete_func = library->get<ClickHouseLibrary::LibraryDataDeleteFunc>(ClickHouseLibrary::LIBRARY_DATA_DELETE_FUNC_NAME);
+
+    ClickHouseLibrary::LibraryData data_ptr = data_new_func(lib_data);
+    SCOPE_EXIT(data_delete_func(lib_data, data_ptr));
+
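+    /// The library owns the returned table; data_delete_func in the SCOPE_EXIT above frees it
+    /// once dataToBlock has copied the values out.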
+    ClickHouseLibrary::RawClickHouseLibraryTable data = load_ids_func(data_ptr, &settings_holder->strings, &columns_pass, &ids_data);
+    auto block = dataToBlock(data);
+
+    return std::make_shared<OneBlockInputStream>(block);
+}
+
+
+BlockInputStreamPtr SharedLibraryHandler::loadKeys(const Columns & key_columns)
+{
+    auto holder = std::make_unique<ClickHouseLibrary::Row[]>(key_columns.size());
+    std::vector<std::unique_ptr<ClickHouseLibrary::Field[]>> column_data_holders;
+
+    for (size_t i = 0; i < key_columns.size(); ++i)
+    {
+        auto cell_holder = std::make_unique<ClickHouseLibrary::Field[]>(key_columns[i]->size());
+
+        for (size_t j = 0; j < key_columns[i]->size(); ++j)
+        {
+            auto data_ref = key_columns[i]->getDataAt(j);
+
+            cell_holder[j] = ClickHouseLibrary::Field{
+                .data = static_cast<const void *>(data_ref.data),
+                .size = data_ref.size};
+        }
+
+        holder[i] = ClickHouseLibrary::Row{
+            .data = static_cast<const ClickHouseLibrary::Field *>(cell_holder.get()),
+            .size = key_columns[i]->size()};
+
+        column_data_holders.push_back(std::move(cell_holder));
+    }
+
+    ClickHouseLibrary::Table request_cols{
+        .data = static_cast<const ClickHouseLibrary::Row *>(holder.get()),
+        .size = key_columns.size()};
+
+    auto load_keys_func = library->get<ClickHouseLibrary::LibraryLoadKeysFunc>(ClickHouseLibrary::LIBRARY_LOAD_KEYS_FUNC_NAME);
+    auto data_new_func = library->get<ClickHouseLibrary::LibraryDataNewFunc>(ClickHouseLibrary::LIBRARY_DATA_NEW_FUNC_NAME);
+    auto data_delete_func = library->get<ClickHouseLibrary::LibraryDataDeleteFunc>(ClickHouseLibrary::LIBRARY_DATA_DELETE_FUNC_NAME);
+
+    ClickHouseLibrary::LibraryData data_ptr = data_new_func(lib_data);
+    SCOPE_EXIT(data_delete_func(lib_data, data_ptr));
+
+    ClickHouseLibrary::RawClickHouseLibraryTable data = load_keys_func(data_ptr, &settings_holder->strings, &request_cols);
+    auto block = dataToBlock(data);
+
+    return std::make_shared<OneBlockInputStream>(block);
+}
+
+
+Block SharedLibraryHandler::dataToBlock(const ClickHouseLibrary::RawClickHouseLibraryTable data)
+{
+    if (!data)
+        throw Exception("LibraryDictionarySource: No data returned", ErrorCodes::EXTERNAL_LIBRARY_ERROR);
+
+    const auto * table_received = static_cast<const ClickHouseLibrary::Table *>(data);
+    if (table_received->error_code)
+        throw Exception(
+            "LibraryDictionarySource: Returned error: " + std::to_string(table_received->error_code) + " " + (table_received->error_string ? table_received->error_string : ""),
+            ErrorCodes::EXTERNAL_LIBRARY_ERROR);
+
+    MutableColumns columns = sample_block.cloneEmptyColumns();
+
+    /// The received table is row-major: the outer index walks rows, the inner index walks the
+    /// fields of one row, i.e. one value per result column.
+    for (size_t row_n = 0; row_n < table_received->size; ++row_n)
+    {
+        if (columns.size() != table_received->data[row_n].size)
+            throw Exception(
+                "LibraryDictionarySource: Returned unexpected number of columns: " + std::to_string(table_received->data[row_n].size) + ", must be " + std::to_string(columns.size()),
+                ErrorCodes::SIZES_OF_COLUMNS_DOESNT_MATCH);
+
+        for (size_t col_n = 0; col_n < table_received->data[row_n].size; ++col_n)
+        {
+            const auto & field = table_received->data[row_n].data[col_n];
+            if (!field.data)
+            {
+                /// sample_block contains null_value (from config) inside the corresponding column
+                const auto & col = sample_block.getByPosition(col_n);
+                columns[col_n]->insertFrom(*(col.column), 0);
+            }
+            else
+            {
+                const auto & size = field.size;
+                columns[col_n]->insertData(static_cast<const char *>(field.data), size);
+            }
+        }
+    }
+
+    return sample_block.cloneWithColumns(std::move(columns));
+}
+
+}
diff --git a/programs/library-bridge/SharedLibraryHandler.h b/programs/library-bridge/SharedLibraryHandler.h
new file mode 100644
index 00000000000..5c0334ac89f
--- /dev/null
+++ b/programs/library-bridge/SharedLibraryHandler.h
@@ -0,0 +1,54 @@
+#pragma once
+
+#include
+#include
+#include
+#include "LibraryUtils.h"
+
+
+namespace DB
+{
+
+/// A class that manages all operations with a library dictionary.
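+/// It loads the shared library, resolves the ClickHouseDictionary_v3_* entry points and forwards
+/// loadAll/loadIds/loadKeys calls to them.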
+/// Every library dictionary source has its own object of this class, accessed by UUID.
+class SharedLibraryHandler
+{
+
+public:
+    SharedLibraryHandler(
+        const std::string & library_path_,
+        const std::vector<std::string> & library_settings,
+        const Block & sample_block_,
+        const std::vector<std::string> & attributes_names_);
+
+    SharedLibraryHandler(const SharedLibraryHandler & other);
+
+    ~SharedLibraryHandler();
+
+    BlockInputStreamPtr loadAll();
+
+    BlockInputStreamPtr loadIds(const std::vector<uint64_t> & ids);
+
+    BlockInputStreamPtr loadKeys(const Columns & key_columns);
+
+    bool isModified();
+
+    bool supportsSelectiveLoad();
+
+    const Block & getSampleBlock() { return sample_block; }
+
+private:
+    Block dataToBlock(const ClickHouseLibrary::RawClickHouseLibraryTable data);
+
+    std::string library_path;
+    const Block sample_block;
+    std::vector<std::string> attributes_names;
+
+    SharedLibraryPtr library;
+    std::shared_ptr<CStringsHolder> settings_holder;
+    void * lib_data;
+};
+
+using SharedLibraryHandlerPtr = std::shared_ptr<SharedLibraryHandler>;
+
+}
diff --git a/programs/library-bridge/SharedLibraryHandlerFactory.cpp b/programs/library-bridge/SharedLibraryHandlerFactory.cpp
new file mode 100644
index 00000000000..05494c313c4
--- /dev/null
+++ b/programs/library-bridge/SharedLibraryHandlerFactory.cpp
@@ -0,0 +1,67 @@
+#include "SharedLibraryHandlerFactory.h"
+
+
+namespace DB
+{
+
+namespace ErrorCodes
+{
+    extern const int LOGICAL_ERROR;
+}
+
+SharedLibraryHandlerPtr SharedLibraryHandlerFactory::get(const std::string & dictionary_id)
+{
+    std::lock_guard lock(mutex);
+    auto library_handler = library_handlers.find(dictionary_id);
+
+    if (library_handler != library_handlers.end())
+        return library_handler->second;
+
+    return nullptr;
+}
+
+
+void SharedLibraryHandlerFactory::create(
+    const std::string & dictionary_id,
+    const std::string & library_path,
+    const std::vector<std::string> & library_settings,
+    const Block & sample_block,
+    const std::vector<std::string> & attributes_names)
+{
+    std::lock_guard lock(mutex);
+    library_handlers[dictionary_id] = std::make_shared<SharedLibraryHandler>(library_path, library_settings, sample_block, attributes_names);
+}
+
+
+void SharedLibraryHandlerFactory::clone(const std::string & from_dictionary_id, const std::string & to_dictionary_id)
+{
+    std::lock_guard lock(mutex);
+    auto from_library_handler = library_handlers.find(from_dictionary_id);
+
+    /// This is not supposed to happen, as libClone is called from the copy constructor of the
+    /// LibraryDictionarySource object, and the shared library handler of from_dictionary is removed
+    /// only in its destructor. And if there was no shared library handler for from_dictionary, it
+    /// would have received an exception in its constructor, so no libClone would be made from it.
+    if (from_library_handler == library_handlers.end())
+        throw Exception(ErrorCodes::LOGICAL_ERROR, "No shared library handler found");
+
+    /// The libClone method will be called in the copy constructor.
+    library_handlers[to_dictionary_id] = std::make_shared<SharedLibraryHandler>(*from_library_handler->second);
+}
+
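Taken together, create/get/clone/remove mirror the libNew/libClone/libDelete requests the HTTP handler receives. A hypothetical driver showing the intended call order; the UUIDs, path, settings and attribute names below are placeholders, not values from this patch:

```cpp
#include <string>
#include <vector>

/// Sketch only: `sample_block` is assumed to be supplied by the caller.
void dictionaryLifecycleExample(const DB::Block & sample_block)
{
    auto & factory = DB::SharedLibraryHandlerFactory::instance();

    /// libNew request: register a handler under the dictionary UUID.
    factory.create(
        "dictionary-uuid-1",
        "/var/lib/clickhouse/user_files/libdict.so",
        /* library_settings = */ {"max_rows", "1000"},
        sample_block,
        /* attributes_names = */ {"region_id", "region_name"});

    /// loadAll / loadIds / loadKeys requests: look the handler up again by UUID.
    if (auto handler = factory.get("dictionary-uuid-1"))
        auto stream = handler->loadAll();

    /// libClone request: the SharedLibraryHandler copy constructor invokes libClone.
    factory.clone("dictionary-uuid-1", "dictionary-uuid-2");

    /// libDelete request: dropping a handler runs ~SharedLibraryHandler, which calls libDelete.
    factory.remove("dictionary-uuid-1");
    factory.remove("dictionary-uuid-2");
}
```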
+
+void SharedLibraryHandlerFactory::remove(const std::string & dictionary_id)
+{
+    std::lock_guard lock(mutex);
+    /// libDelete is called in the destructor.
+    library_handlers.erase(dictionary_id);
+}
+
+
+SharedLibraryHandlerFactory & SharedLibraryHandlerFactory::instance()
+{
+    static SharedLibraryHandlerFactory ret;
+    return ret;
+}
+
+}
diff --git a/programs/library-bridge/SharedLibraryHandlerFactory.h b/programs/library-bridge/SharedLibraryHandlerFactory.h
new file mode 100644
index 00000000000..473d90618a2
--- /dev/null
+++ b/programs/library-bridge/SharedLibraryHandlerFactory.h
@@ -0,0 +1,37 @@
+#pragma once
+
+#include "SharedLibraryHandler.h"
+#include
+#include
+
+
+namespace DB
+{
+
+/// Each library dictionary source has a unique UUID. When the clone() method is called, a new UUID is generated.
+/// There is a one-to-one mapping from dictionary UUID to SharedLibraryHandler.
+class SharedLibraryHandlerFactory final : private boost::noncopyable
+{
+public:
+    static SharedLibraryHandlerFactory & instance();
+
+    SharedLibraryHandlerPtr get(const std::string & dictionary_id);
+
+    void create(
+        const std::string & dictionary_id,
+        const std::string & library_path,
+        const std::vector<std::string> & library_settings,
+        const Block & sample_block,
+        const std::vector<std::string> & attributes_names);
+
+    void clone(const std::string & from_dictionary_id, const std::string & to_dictionary_id);
+
+    void remove(const std::string & dictionary_id);
+
+private:
+    /// map: dict_id -> SharedLibraryHandler
+    std::unordered_map<std::string, SharedLibraryHandlerPtr> library_handlers;
+    std::mutex mutex;
+};
+
+}
diff --git a/programs/library-bridge/library-bridge.cpp b/programs/library-bridge/library-bridge.cpp
new file mode 100644
index 00000000000..5fff2ffe525
--- /dev/null
+++ b/programs/library-bridge/library-bridge.cpp
@@ -0,0 +1,3 @@
+int mainEntryClickHouseLibraryBridge(int argc, char ** argv);
+int main(int argc_, char ** argv_) { return mainEntryClickHouseLibraryBridge(argc_, argv_); }
+
diff --git a/programs/local/LocalServer.cpp b/programs/local/LocalServer.cpp
index 7c6b60fbf8e..f680c2c2da6 100644
--- a/programs/local/LocalServer.cpp
+++ b/programs/local/LocalServer.cpp
@@ -99,9 +99,9 @@ void LocalServer::initialize(Poco::Util::Application & self)
     }
 }
 
-void LocalServer::applyCmdSettings(Context & context)
+void LocalServer::applyCmdSettings(ContextPtr context)
 {
-    context.applySettingsChanges(cmd_settings.changes());
+    context->applySettingsChanges(cmd_settings.changes());
 }
 
 /// If path is specified and not empty, will try to setup server environment and load existing metadata
@@ -176,7 +176,7 @@ void LocalServer::tryInitPath()
 }
 
 
-static void attachSystemTables(const Context & context)
+static void attachSystemTables(ContextPtr context)
 {
     DatabasePtr system_database = DatabaseCatalog::instance().tryGetDatabase(DatabaseCatalog::SYSTEM_DATABASE);
     if (!system_database)
@@ -211,7 +211,7 @@ try
     }
 
     shared_context = Context::createShared();
-    global_context = std::make_unique<Context>(Context::createGlobal(shared_context.get()));
+    global_context = Context::createGlobal(shared_context.get());
     global_context->makeGlobalContext();
     global_context->setApplicationType(Context::ApplicationType::LOCAL);
     tryInitPath();
@@ -260,6 +260,11 @@ try
     if (mark_cache_size)
         global_context->setMarkCache(mark_cache_size);
 
+    /// A cache for mmapped files.
+    size_t mmap_cache_size = config().getUInt64("mmap_cache_size", 1000);   /// The choice of default is arbitrary.
+    if (mmap_cache_size)
+        global_context->setMMappedFileCache(mmap_cache_size);
+
     /// Load global settings from default_profile and system_profile.
global_context->setDefaultProfiles(config()); @@ -269,9 +274,9 @@ try * if such tables will not be dropped, clickhouse-server will not be able to load them due to security reasons. */ std::string default_database = config().getString("default_database", "_local"); - DatabaseCatalog::instance().attachDatabase(default_database, std::make_shared(default_database, *global_context)); + DatabaseCatalog::instance().attachDatabase(default_database, std::make_shared(default_database, global_context)); global_context->setCurrentDatabase(default_database); - applyCmdOptions(*global_context); + applyCmdOptions(global_context); if (config().has("path")) { @@ -283,15 +288,15 @@ try LOG_DEBUG(log, "Loading metadata from {}", path); Poco::File(path + "data/").createDirectories(); Poco::File(path + "metadata/").createDirectories(); - loadMetadataSystem(*global_context); - attachSystemTables(*global_context); - loadMetadata(*global_context); + loadMetadataSystem(global_context); + attachSystemTables(global_context); + loadMetadata(global_context); DatabaseCatalog::instance().loadDatabases(); LOG_DEBUG(log, "Loaded metadata."); } else if (!config().has("no-system-tables")) { - attachSystemTables(*global_context); + attachSystemTables(global_context); } processQueries(); @@ -370,13 +375,13 @@ void LocalServer::processQueries() /// we can't mutate global global_context (can lead to races, as it was already passed to some background threads) /// so we can't reuse it safely as a query context and need a copy here - auto context = Context(*global_context); + auto context = Context::createCopy(global_context); - context.makeSessionContext(); - context.makeQueryContext(); + context->makeSessionContext(); + context->makeQueryContext(); - context.setUser("default", "", Poco::Net::SocketAddress{}); - context.setCurrentQueryId(""); + context->setUser("default", "", Poco::Net::SocketAddress{}); + context->setCurrentQueryId(""); applyCmdSettings(context); /// Use the same query_id (and thread group) for all queries @@ -613,9 +618,9 @@ void LocalServer::init(int argc, char ** argv) argsToConfig(arguments, config(), 100); } -void LocalServer::applyCmdOptions(Context & context) +void LocalServer::applyCmdOptions(ContextPtr context) { - context.setDefaultFormat(config().getString("output-format", config().getString("format", "TSV"))); + context->setDefaultFormat(config().getString("output-format", config().getString("format", "TSV"))); applyCmdSettings(context); } diff --git a/programs/local/LocalServer.h b/programs/local/LocalServer.h index 02778bd86cb..3555e8a38ad 100644 --- a/programs/local/LocalServer.h +++ b/programs/local/LocalServer.h @@ -36,15 +36,15 @@ private: std::string getInitialCreateTableQuery(); void tryInitPath(); - void applyCmdOptions(Context & context); - void applyCmdSettings(Context & context); + void applyCmdOptions(ContextPtr context); + void applyCmdSettings(ContextPtr context); void processQueries(); void setupUsers(); void cleanup(); protected: SharedContextHolder shared_context; - std::unique_ptr global_context; + ContextPtr global_context; /// Settings specified via command line args Settings cmd_settings; diff --git a/programs/obfuscator/Obfuscator.cpp b/programs/obfuscator/Obfuscator.cpp index 950db4e4f05..c92eb5c6647 100644 --- a/programs/obfuscator/Obfuscator.cpp +++ b/programs/obfuscator/Obfuscator.cpp @@ -100,16 +100,16 @@ class IModel { public: /// Call train iteratively for each block to train a model. 
- virtual void train(const IColumn & column); + virtual void train(const IColumn & column) = 0; /// Call finalize one time after training before generating. - virtual void finalize(); + virtual void finalize() = 0; /// Call generate: pass source data column to obtain a column with anonymized data as a result. - virtual ColumnPtr generate(const IColumn & column); + virtual ColumnPtr generate(const IColumn & column) = 0; /// Deterministically change seed to some other value. This can be used to generate more values than were in source. - virtual void updateSeed(); + virtual void updateSeed() = 0; virtual ~IModel() = default; }; @@ -1129,8 +1129,8 @@ try } SharedContextHolder shared_context = Context::createShared(); - Context context = Context::createGlobal(shared_context.get()); - context.makeGlobalContext(); + ContextPtr context = Context::createGlobal(shared_context.get()); + context->makeGlobalContext(); ReadBufferFromFileDescriptor file_in(STDIN_FILENO); WriteBufferFromFileDescriptor file_out(STDOUT_FILENO); @@ -1152,7 +1152,7 @@ try if (!silent) std::cerr << "Training models\n"; - BlockInputStreamPtr input = context.getInputFormat(input_format, file_in, header, max_block_size); + BlockInputStreamPtr input = context->getInputFormat(input_format, file_in, header, max_block_size); input->readPrefix(); while (Block block = input->read()) @@ -1179,8 +1179,8 @@ try file_in.seek(0, SEEK_SET); - BlockInputStreamPtr input = context.getInputFormat(input_format, file_in, header, max_block_size); - BlockOutputStreamPtr output = context.getOutputStream(output_format, file_out, header); + BlockInputStreamPtr input = context->getInputFormat(input_format, file_in, header, max_block_size); + BlockOutputStreamPtr output = context->getOutputStreamParallelIfPossible(output_format, file_out, header); if (processed_rows + source_rows > limit) input = std::make_shared(input, limit - processed_rows, 0); diff --git a/programs/odbc-bridge/CMakeLists.txt b/programs/odbc-bridge/CMakeLists.txt index 11864354619..7b232f2b5dc 100644 --- a/programs/odbc-bridge/CMakeLists.txt +++ b/programs/odbc-bridge/CMakeLists.txt @@ -24,12 +24,14 @@ add_executable(clickhouse-odbc-bridge ${CLICKHOUSE_ODBC_BRIDGE_SOURCES}) target_link_libraries(clickhouse-odbc-bridge PRIVATE daemon dbms + bridge clickhouse_parsers - Poco::Data - Poco::Data::ODBC + nanodbc + unixodbc ) set_target_properties(clickhouse-odbc-bridge PROPERTIES RUNTIME_OUTPUT_DIRECTORY ..) 
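The new link targets above are the visible part of moving clickhouse-odbc-bridge from Poco::Data::ODBC onto nanodbc plus plain unixodbc. As a reference point for the handler rewrites that follow, here is a standalone sketch of the nanodbc calls they build on (connection, execute, result iteration); the DSN and query are placeholders, not values from this patch:

```cpp
#include <iostream>
#include <nanodbc/nanodbc.h>

int main()
{
    /// The DSN is a placeholder; the bridge receives a validated connection string instead.
    nanodbc::connection connection("DSN=example_dsn");

    /// execute() + next()/get<T>() is the pattern ODBCBlockInputStream now relies on.
    nanodbc::result result = execute(connection, NANODBC_TEXT("SELECT 42"));
    while (result.next())
        std::cout << result.get<int>(0) << '\n';
}
```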
+target_compile_options (clickhouse-odbc-bridge PRIVATE -Wno-reserved-id-macro -Wno-keyword-macro) if (USE_GDB_ADD_INDEX) add_custom_command(TARGET clickhouse-odbc-bridge POST_BUILD COMMAND ${GDB_ADD_INDEX_EXE} ../clickhouse-odbc-bridge COMMENT "Adding .gdb-index to clickhouse-odbc-bridge" VERBATIM) diff --git a/programs/odbc-bridge/ColumnInfoHandler.cpp b/programs/odbc-bridge/ColumnInfoHandler.cpp index 14fa734f246..e33858583c2 100644 --- a/programs/odbc-bridge/ColumnInfoHandler.cpp +++ b/programs/odbc-bridge/ColumnInfoHandler.cpp @@ -2,29 +2,36 @@ #if USE_ODBC -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include "getIdentifierQuote.h" -# include "validateODBCConnectionString.h" +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "getIdentifierQuote.h" +#include "validateODBCConnectionString.h" +#include "ODBCConnectionFactory.h" + +#include +#include -# define POCO_SQL_ODBC_CLASS Poco::Data::ODBC namespace DB { + +namespace ErrorCodes +{ + extern const int LOGICAL_ERROR; + extern const int BAD_ARGUMENTS; +} + namespace { DataTypePtr getDataType(SQLSMALLINT type) @@ -59,6 +66,7 @@ namespace } } + void ODBCColumnsInfoHandler::handleRequest(HTTPServerRequest & request, HTTPServerResponse & response) { HTMLForm params(request, request.getStream()); @@ -77,88 +85,79 @@ void ODBCColumnsInfoHandler::handleRequest(HTTPServerRequest & request, HTTPServ process_error("No 'table' param in request URL"); return; } + if (!params.has("connection_string")) { process_error("No 'connection_string' in request URL"); return; } + std::string schema_name; std::string table_name = params.get("table"); std::string connection_string = params.get("connection_string"); if (params.has("schema")) - { schema_name = params.get("schema"); - LOG_TRACE(log, "Will fetch info for table '{}'", schema_name + "." + table_name); - } - else - LOG_TRACE(log, "Will fetch info for table '{}'", table_name); + LOG_TRACE(log, "Got connection str '{}'", connection_string); try { const bool external_table_functions_use_nulls = Poco::NumberParser::parseBool(params.get("external_table_functions_use_nulls", "false")); - POCO_SQL_ODBC_CLASS::SessionImpl session(validateODBCConnectionString(connection_string), DBMS_DEFAULT_CONNECT_TIMEOUT_SEC); - SQLHDBC hdbc = session.dbc().handle(); + auto connection = ODBCConnectionFactory::instance().get( + validateODBCConnectionString(connection_string), + getContext()->getSettingsRef().odbc_bridge_connection_pool_size); - SQLHSTMT hstmt = nullptr; + nanodbc::catalog catalog(*connection); + std::string catalog_name; - if (POCO_SQL_ODBC_CLASS::Utility::isError(SQLAllocStmt(hdbc, &hstmt))) - throw POCO_SQL_ODBC_CLASS::ODBCException("Could not allocate connection handle."); - - SCOPE_EXIT(SQLFreeStmt(hstmt, SQL_DROP)); - - const auto & context_settings = context.getSettingsRef(); - - /// TODO Why not do SQLColumns instead? - std::string name = schema_name.empty() ? backQuoteIfNeed(table_name) : backQuoteIfNeed(schema_name) + "." 
+ backQuoteIfNeed(table_name);
-        WriteBufferFromOwnString buf;
-        std::string input = "SELECT * FROM " + name + " WHERE 1 = 0";
-        ParserQueryWithOutput parser(input.data() + input.size());
-        ASTPtr select = parseQuery(parser, input.data(), input.data() + input.size(), "", context_settings.max_query_size, context_settings.max_parser_depth);
-
-        IAST::FormatSettings settings(buf, true);
-        settings.always_quote_identifiers = true;
-        settings.identifier_quoting_style = getQuotingStyle(hdbc);
-        select->format(settings);
-        std::string query = buf.str();
-
-        LOG_TRACE(log, "Inferring structure with query '{}'", query);
-
-        if (POCO_SQL_ODBC_CLASS::Utility::isError(POCO_SQL_ODBC_CLASS::SQLPrepare(hstmt, reinterpret_cast<SQLCHAR *>(query.data()), query.size())))
-            throw POCO_SQL_ODBC_CLASS::DescriptorException(session.dbc());
-
-        if (POCO_SQL_ODBC_CLASS::Utility::isError(SQLExecute(hstmt)))
-            throw POCO_SQL_ODBC_CLASS::StatementException(hstmt);
-
-        SQLSMALLINT cols = 0;
-        if (POCO_SQL_ODBC_CLASS::Utility::isError(SQLNumResultCols(hstmt, &cols)))
-            throw POCO_SQL_ODBC_CLASS::StatementException(hstmt);
-
-        /// TODO cols not checked
-
-        NamesAndTypesList columns;
-        for (SQLSMALLINT ncol = 1; ncol <= cols; ++ncol)
+        /// In XDBC tables it is allowed to pass either database_name or schema_name in the table definition, but not both of them.
+        /// Both are passed as the 'schema' parameter in the request URL, so it is not clear which of the two was passed.
+        /// If it is a schema_name, we know that the database is configured in odbc.ini. But if we have a database_name as 'schema',
+        /// that is not guaranteed. For nanodbc the database_name must be either in odbc.ini or passed as catalog_name.
+        auto get_columns = [&]()
         {
-            SQLSMALLINT type = 0;
-            /// TODO Why 301?
-            SQLCHAR column_name[301];
-
-            SQLSMALLINT is_nullable;
-            const auto result = POCO_SQL_ODBC_CLASS::SQLDescribeCol(hstmt, ncol, column_name, sizeof(column_name), nullptr, &type, nullptr, nullptr, &is_nullable);
-            if (POCO_SQL_ODBC_CLASS::Utility::isError(result))
-                throw POCO_SQL_ODBC_CLASS::StatementException(hstmt);
-
-            auto column_type = getDataType(type);
-            if (external_table_functions_use_nulls && is_nullable == SQL_NULLABLE)
+            nanodbc::catalog::tables tables = catalog.find_tables(table_name, /* type = */ "", /* schema = */ "", /* catalog = */ schema_name);
+            if (tables.next())
             {
-                column_type = std::make_shared<DataTypeNullable>(column_type);
+                catalog_name = tables.table_catalog();
+                LOG_TRACE(log, "Will fetch info for table '{}.{}'", catalog_name, table_name);
+                return catalog.find_columns(/* column = */ "", table_name, /* schema = */ "", catalog_name);
             }
-            columns.emplace_back(reinterpret_cast<char *>(column_name), std::move(column_type));
+            tables = catalog.find_tables(table_name, /* type = */ "", /* schema = */ schema_name);
+            if (tables.next())
+            {
+                catalog_name = tables.table_catalog();
+                LOG_TRACE(log, "Will fetch info for table '{}.{}.{}'", catalog_name, schema_name, table_name);
+                return catalog.find_columns(/* column = */ "", table_name, schema_name, catalog_name);
+            }
+
+            throw Exception(ErrorCodes::BAD_ARGUMENTS, "Table {} not found", schema_name.empty() ? table_name : schema_name + '.'
+ table_name); + }; + + nanodbc::catalog::columns columns_definition = get_columns(); + + NamesAndTypesList columns; + while (columns_definition.next()) + { + SQLSMALLINT type = columns_definition.sql_data_type(); + std::string column_name = columns_definition.column_name(); + + bool is_nullable = columns_definition.nullable() == SQL_NULLABLE; + + auto column_type = getDataType(type); + + if (external_table_functions_use_nulls && is_nullable == SQL_NULLABLE) + column_type = std::make_shared(column_type); + + columns.emplace_back(column_name, std::move(column_type)); } + if (columns.empty()) + throw Exception("Columns definition was not returned", ErrorCodes::LOGICAL_ERROR); + WriteBufferFromHTTPServerResponse out(response, request.getMethod() == Poco::Net::HTTPRequest::HTTP_HEAD, keep_alive_timeout); try { diff --git a/programs/odbc-bridge/ColumnInfoHandler.h b/programs/odbc-bridge/ColumnInfoHandler.h index 9b5b470b31d..bc976f54aee 100644 --- a/programs/odbc-bridge/ColumnInfoHandler.h +++ b/programs/odbc-bridge/ColumnInfoHandler.h @@ -2,24 +2,23 @@ #if USE_ODBC -# include -# include -# include +#include +#include +#include +#include +#include -# include -/** The structure of the table is taken from the query "SELECT * FROM table WHERE 1=0". - * TODO: It would be much better to utilize ODBC methods dedicated for columns description. - * If there is no such table, an exception is thrown. - */ namespace DB { -class ODBCColumnsInfoHandler : public HTTPRequestHandler +class ODBCColumnsInfoHandler : public HTTPRequestHandler, WithContext { public: - ODBCColumnsInfoHandler(size_t keep_alive_timeout_, Context & context_) - : log(&Poco::Logger::get("ODBCColumnsInfoHandler")), keep_alive_timeout(keep_alive_timeout_), context(context_) + ODBCColumnsInfoHandler(size_t keep_alive_timeout_, ContextPtr context_) + : WithContext(context_) + , log(&Poco::Logger::get("ODBCColumnsInfoHandler")) + , keep_alive_timeout(keep_alive_timeout_) { } @@ -28,7 +27,6 @@ public: private: Poco::Logger * log; size_t keep_alive_timeout; - Context & context; }; } diff --git a/programs/odbc-bridge/HandlerFactory.cpp b/programs/odbc-bridge/HandlerFactory.cpp index 9ac48af4ace..49984453d33 100644 --- a/programs/odbc-bridge/HandlerFactory.cpp +++ b/programs/odbc-bridge/HandlerFactory.cpp @@ -8,7 +8,7 @@ namespace DB { -std::unique_ptr HandlerFactory::createRequestHandler(const HTTPServerRequest & request) +std::unique_ptr ODBCBridgeHandlerFactory::createRequestHandler(const HTTPServerRequest & request) { Poco::URI uri{request.getURI()}; LOG_TRACE(log, "Request URI: {}", uri.toString()); @@ -21,26 +21,26 @@ std::unique_ptr HandlerFactory::createRequestHandler(const H if (uri.getPath() == "/columns_info") #if USE_ODBC - return std::make_unique(keep_alive_timeout, context); + return std::make_unique(keep_alive_timeout, getContext()); #else return nullptr; #endif else if (uri.getPath() == "/identifier_quote") #if USE_ODBC - return std::make_unique(keep_alive_timeout, context); + return std::make_unique(keep_alive_timeout, getContext()); #else return nullptr; #endif else if (uri.getPath() == "/schema_allowed") #if USE_ODBC - return std::make_unique(keep_alive_timeout, context); + return std::make_unique(keep_alive_timeout, getContext()); #else return nullptr; #endif else if (uri.getPath() == "/write") - return std::make_unique(pool_map, keep_alive_timeout, context, "write"); + return std::make_unique(keep_alive_timeout, getContext(), "write"); else - return std::make_unique(pool_map, keep_alive_timeout, context, "read"); + return 
std::make_unique(keep_alive_timeout, getContext(), "read"); } return nullptr; } diff --git a/programs/odbc-bridge/HandlerFactory.h b/programs/odbc-bridge/HandlerFactory.h index 5dce6f02ecd..ffbbe3670af 100644 --- a/programs/odbc-bridge/HandlerFactory.h +++ b/programs/odbc-bridge/HandlerFactory.h @@ -1,32 +1,28 @@ #pragma once -#include +#include #include #include "ColumnInfoHandler.h" #include "IdentifierQuoteHandler.h" #include "MainHandler.h" #include "SchemaAllowedHandler.h" - #include -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wunused-parameter" -#include -#pragma GCC diagnostic pop - namespace DB { /** Factory for '/ping', '/', '/columns_info', '/identifier_quote', '/schema_allowed' handlers. * Also stores Session pools for ODBC connections */ -class HandlerFactory : public HTTPRequestHandlerFactory +class ODBCBridgeHandlerFactory : public HTTPRequestHandlerFactory, WithContext { public: - HandlerFactory(const std::string & name_, size_t keep_alive_timeout_, Context & context_) - : log(&Poco::Logger::get(name_)), name(name_), keep_alive_timeout(keep_alive_timeout_), context(context_) + ODBCBridgeHandlerFactory(const std::string & name_, size_t keep_alive_timeout_, ContextPtr context_) + : WithContext(context_) + , log(&Poco::Logger::get(name_)) + , name(name_) + , keep_alive_timeout(keep_alive_timeout_) { - pool_map = std::make_shared(); } std::unique_ptr createRequestHandler(const HTTPServerRequest & request) override; @@ -35,7 +31,6 @@ private: Poco::Logger * log; std::string name; size_t keep_alive_timeout; - Context & context; - std::shared_ptr pool_map; }; + } diff --git a/programs/odbc-bridge/IdentifierQuoteHandler.cpp b/programs/odbc-bridge/IdentifierQuoteHandler.cpp index 5060d37c479..a5a97cb8086 100644 --- a/programs/odbc-bridge/IdentifierQuoteHandler.cpp +++ b/programs/odbc-bridge/IdentifierQuoteHandler.cpp @@ -2,23 +2,20 @@ #if USE_ODBC -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include "getIdentifierQuote.h" -# include "validateODBCConnectionString.h" +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "getIdentifierQuote.h" +#include "validateODBCConnectionString.h" +#include "ODBCConnectionFactory.h" -# define POCO_SQL_ODBC_CLASS Poco::Data::ODBC namespace DB { @@ -44,10 +41,12 @@ void IdentifierQuoteHandler::handleRequest(HTTPServerRequest & request, HTTPServ try { std::string connection_string = params.get("connection_string"); - POCO_SQL_ODBC_CLASS::SessionImpl session(validateODBCConnectionString(connection_string), DBMS_DEFAULT_CONNECT_TIMEOUT_SEC); - SQLHDBC hdbc = session.dbc().handle(); - auto identifier = getIdentifierQuote(hdbc); + auto connection = ODBCConnectionFactory::instance().get( + validateODBCConnectionString(connection_string), + getContext()->getSettingsRef().odbc_bridge_connection_pool_size); + + auto identifier = getIdentifierQuote(*connection); WriteBufferFromHTTPServerResponse out(response, request.getMethod() == Poco::Net::HTTPRequest::HTTP_HEAD, keep_alive_timeout); try diff --git a/programs/odbc-bridge/IdentifierQuoteHandler.h b/programs/odbc-bridge/IdentifierQuoteHandler.h index dad88c72ad8..ef3806fd802 100644 --- a/programs/odbc-bridge/IdentifierQuoteHandler.h +++ b/programs/odbc-bridge/IdentifierQuoteHandler.h @@ -11,11 +11,13 @@ namespace DB { -class IdentifierQuoteHandler : public HTTPRequestHandler +class IdentifierQuoteHandler : public HTTPRequestHandler, 
WithContext { public: - IdentifierQuoteHandler(size_t keep_alive_timeout_, Context &) - : log(&Poco::Logger::get("IdentifierQuoteHandler")), keep_alive_timeout(keep_alive_timeout_) + IdentifierQuoteHandler(size_t keep_alive_timeout_, ContextPtr context_) + : WithContext(context_) + , log(&Poco::Logger::get("IdentifierQuoteHandler")) + , keep_alive_timeout(keep_alive_timeout_) { } diff --git a/programs/odbc-bridge/MainHandler.cpp b/programs/odbc-bridge/MainHandler.cpp index 4fcc9deea6a..e24b51f6037 100644 --- a/programs/odbc-bridge/MainHandler.cpp +++ b/programs/odbc-bridge/MainHandler.cpp @@ -18,18 +18,17 @@ #include #include #include +#include "ODBCConnectionFactory.h" #include #include +#include -#if USE_ODBC -#include -#define POCO_SQL_ODBC_CLASS Poco::Data::ODBC -#endif namespace DB { + namespace { std::unique_ptr parseColumns(std::string && column_string) @@ -42,37 +41,6 @@ namespace } } -using PocoSessionPoolConstructor = std::function()>; -/** Is used to adjust max size of default Poco thread pool. See issue #750 - * Acquire the lock, resize pool and construct new Session. - */ -static std::shared_ptr createAndCheckResizePocoSessionPool(PocoSessionPoolConstructor pool_constr) -{ - static std::mutex mutex; - - Poco::ThreadPool & pool = Poco::ThreadPool::defaultPool(); - - /// NOTE: The lock don't guarantee that external users of the pool don't change its capacity - std::unique_lock lock(mutex); - - if (pool.available() == 0) - pool.addCapacity(2 * std::max(pool.capacity(), 1)); - - return pool_constr(); -} - -ODBCHandler::PoolPtr ODBCHandler::getPool(const std::string & connection_str) -{ - std::lock_guard lock(mutex); - if (!pool_map->count(connection_str)) - { - pool_map->emplace(connection_str, createAndCheckResizePocoSessionPool([connection_str] - { - return std::make_shared("ODBC", validateODBCConnectionString(connection_str)); - })); - } - return pool_map->at(connection_str); -} void ODBCHandler::processError(HTTPServerResponse & response, const std::string & message) { @@ -82,12 +50,14 @@ void ODBCHandler::processError(HTTPServerResponse & response, const std::string LOG_WARNING(log, message); } + void ODBCHandler::handleRequest(HTTPServerRequest & request, HTTPServerResponse & response) { HTMLForm params(request); + LOG_TRACE(log, "Request URI: {}", request.getURI()); + if (mode == "read") params.read(request.getStream()); - LOG_TRACE(log, "Request URI: {}", request.getURI()); if (mode == "read" && !params.has("query")) { @@ -95,11 +65,6 @@ void ODBCHandler::handleRequest(HTTPServerRequest & request, HTTPServerResponse return; } - if (!params.has("columns")) - { - processError(response, "No 'columns' in request URL"); - return; - } if (!params.has("connection_string")) { @@ -107,6 +72,16 @@ void ODBCHandler::handleRequest(HTTPServerRequest & request, HTTPServerResponse return; } + if (!params.has("sample_block")) + { + processError(response, "No 'sample_block' in request URL"); + return; + } + + std::string format = params.get("format", "RowBinary"); + std::string connection_string = params.get("connection_string"); + LOG_TRACE(log, "Connection string: '{}'", connection_string); + UInt64 max_block_size = DEFAULT_BLOCK_SIZE; if (params.has("max_block_size")) { @@ -119,28 +94,27 @@ void ODBCHandler::handleRequest(HTTPServerRequest & request, HTTPServerResponse max_block_size = parse(max_block_size_str); } - std::string columns = params.get("columns"); + std::string sample_block_string = params.get("sample_block"); std::unique_ptr sample_block; try { - sample_block = 
parseColumns(std::move(columns)); + sample_block = parseColumns(std::move(sample_block_string)); } catch (const Exception & ex) { - processError(response, "Invalid 'columns' parameter in request body '" + ex.message() + "'"); - LOG_WARNING(log, ex.getStackTraceString()); + processError(response, "Invalid 'sample_block' parameter in request body '" + ex.message() + "'"); + LOG_ERROR(log, ex.getStackTraceString()); return; } - std::string format = params.get("format", "RowBinary"); - - std::string connection_string = params.get("connection_string"); - LOG_TRACE(log, "Connection string: '{}'", connection_string); - WriteBufferFromHTTPServerResponse out(response, request.getMethod() == Poco::Net::HTTPRequest::HTTP_HEAD, keep_alive_timeout); try { + auto connection = ODBCConnectionFactory::instance().get( + validateODBCConnectionString(connection_string), + getContext()->getSettingsRef().odbc_bridge_connection_pool_size); + if (mode == "write") { if (!params.has("db_name")) @@ -159,15 +133,12 @@ void ODBCHandler::handleRequest(HTTPServerRequest & request, HTTPServerResponse auto quoting_style = IdentifierQuotingStyle::None; #if USE_ODBC - POCO_SQL_ODBC_CLASS::SessionImpl session(validateODBCConnectionString(connection_string), DBMS_DEFAULT_CONNECT_TIMEOUT_SEC); - quoting_style = getQuotingStyle(session.dbc().handle()); + quoting_style = getQuotingStyle(*connection); #endif - - auto pool = getPool(connection_string); auto & read_buf = request.getStream(); - auto input_format = FormatFactory::instance().getInput(format, read_buf, *sample_block, context, max_block_size); + auto input_format = FormatFactory::instance().getInput(format, read_buf, *sample_block, getContext(), max_block_size); auto input_stream = std::make_shared(input_format); - ODBCBlockOutputStream output_stream(pool->get(), db_name, table_name, *sample_block, quoting_style); + ODBCBlockOutputStream output_stream(*connection, db_name, table_name, *sample_block, getContext(), quoting_style); copyData(*input_stream, output_stream); writeStringBinary("Ok.", out); } @@ -176,9 +147,8 @@ void ODBCHandler::handleRequest(HTTPServerRequest & request, HTTPServerResponse std::string query = params.get("query"); LOG_TRACE(log, "Query: {}", query); - BlockOutputStreamPtr writer = FormatFactory::instance().getOutputStream(format, out, *sample_block, context); - auto pool = getPool(connection_string); - ODBCBlockInputStream inp(pool->get(), query, *sample_block, max_block_size); + BlockOutputStreamPtr writer = FormatFactory::instance().getOutputStreamParallelIfPossible(format, out, *sample_block, getContext()); + ODBCBlockInputStream inp(*connection, query, *sample_block, max_block_size); copyData(inp, *writer); } } diff --git a/programs/odbc-bridge/MainHandler.h b/programs/odbc-bridge/MainHandler.h index e237ede5814..bc0fca8b9a5 100644 --- a/programs/odbc-bridge/MainHandler.h +++ b/programs/odbc-bridge/MainHandler.h @@ -1,14 +1,13 @@ #pragma once -#include +#include #include - #include -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wunused-parameter" -#include -#pragma GCC diagnostic pop + +#include +#include + namespace DB { @@ -17,20 +16,16 @@ namespace DB * and also query in request body * response in RowBinary format */ -class ODBCHandler : public HTTPRequestHandler +class ODBCHandler : public HTTPRequestHandler, WithContext { public: - using PoolPtr = std::shared_ptr; - using PoolMap = std::unordered_map; - - ODBCHandler(std::shared_ptr pool_map_, + ODBCHandler( size_t keep_alive_timeout_, - Context & context_, + ContextPtr 
context_, const String & mode_) - : log(&Poco::Logger::get("ODBCHandler")) - , pool_map(pool_map_) + : WithContext(context_) + , log(&Poco::Logger::get("ODBCHandler")) , keep_alive_timeout(keep_alive_timeout_) - , context(context_) , mode(mode_) { } @@ -40,14 +35,11 @@ public: private: Poco::Logger * log; - std::shared_ptr pool_map; size_t keep_alive_timeout; - Context & context; String mode; static inline std::mutex mutex; - PoolPtr getPool(const std::string & connection_str); void processError(HTTPServerResponse & response, const std::string & message); }; diff --git a/programs/odbc-bridge/ODBCBlockInputStream.cpp b/programs/odbc-bridge/ODBCBlockInputStream.cpp index 3e2a2d0c7d4..3a73cb9f601 100644 --- a/programs/odbc-bridge/ODBCBlockInputStream.cpp +++ b/programs/odbc-bridge/ODBCBlockInputStream.cpp @@ -1,5 +1,7 @@ #include "ODBCBlockInputStream.h" #include +#include +#include #include #include #include @@ -14,137 +16,143 @@ namespace DB { namespace ErrorCodes { - extern const int NUMBER_OF_COLUMNS_DOESNT_MATCH; extern const int UNKNOWN_TYPE; } ODBCBlockInputStream::ODBCBlockInputStream( - Poco::Data::Session && session_, const std::string & query_str, const Block & sample_block, const UInt64 max_block_size_) - : session{session_} - , statement{(this->session << query_str, Poco::Data::Keywords::now)} - , result{statement} - , iterator{result.begin()} + nanodbc::connection & connection_, const std::string & query_str, const Block & sample_block, const UInt64 max_block_size_) + : log(&Poco::Logger::get("ODBCBlockInputStream")) , max_block_size{max_block_size_} - , log(&Poco::Logger::get("ODBCBlockInputStream")) + , connection(connection_) + , query(query_str) { - if (sample_block.columns() != result.columnCount()) - throw Exception{"RecordSet contains " + toString(result.columnCount()) + " columns while " + toString(sample_block.columns()) - + " expected", - ErrorCodes::NUMBER_OF_COLUMNS_DOESNT_MATCH}; - description.init(sample_block); -} - - -namespace -{ - using ValueType = ExternalResultDescription::ValueType; - - void insertValue(IColumn & column, const ValueType type, const Poco::Dynamic::Var & value) - { - switch (type) - { - case ValueType::vtUInt8: - assert_cast(column).insertValue(value.convert()); - break; - case ValueType::vtUInt16: - assert_cast(column).insertValue(value.convert()); - break; - case ValueType::vtUInt32: - assert_cast(column).insertValue(value.convert()); - break; - case ValueType::vtUInt64: - assert_cast(column).insertValue(value.convert()); - break; - case ValueType::vtInt8: - assert_cast(column).insertValue(value.convert()); - break; - case ValueType::vtInt16: - assert_cast(column).insertValue(value.convert()); - break; - case ValueType::vtInt32: - assert_cast(column).insertValue(value.convert()); - break; - case ValueType::vtInt64: - assert_cast(column).insertValue(value.convert()); - break; - case ValueType::vtFloat32: - assert_cast(column).insertValue(value.convert()); - break; - case ValueType::vtFloat64: - assert_cast(column).insertValue(value.convert()); - break; - case ValueType::vtString: - assert_cast(column).insert(value.convert()); - break; - case ValueType::vtDate: - { - Poco::DateTime date = value.convert(); - assert_cast(column).insertValue(UInt16{LocalDate(date.year(), date.month(), date.day()).getDayNum()}); - break; - } - case ValueType::vtDateTime: - { - Poco::DateTime datetime = value.convert(); - assert_cast(column).insertValue(time_t{LocalDateTime( - datetime.year(), datetime.month(), datetime.day(), datetime.hour(), datetime.minute(), 
datetime.second())}); - break; - } - case ValueType::vtUUID: - assert_cast(column).insert(parse(value.convert())); - break; - default: - throw Exception("Unsupported value type", ErrorCodes::UNKNOWN_TYPE); - } - } - - void insertDefaultValue(IColumn & column, const IColumn & sample_column) { column.insertFrom(sample_column, 0); } + result = execute(connection, NANODBC_TEXT(query)); } Block ODBCBlockInputStream::readImpl() { - if (iterator == result.end()) - return {}; - - MutableColumns columns(description.sample_block.columns()); - for (const auto i : ext::range(0, columns.size())) - columns[i] = description.sample_block.getByPosition(i).column->cloneEmpty(); + if (finished) + return Block(); + MutableColumns columns(description.sample_block.cloneEmptyColumns()); size_t num_rows = 0; - while (iterator != result.end()) + + while (true) { - Poco::Data::Row & row = *iterator; - - for (const auto idx : ext::range(0, row.fieldCount())) + if (!result.next()) { - /// TODO This is extremely slow. - const Poco::Dynamic::Var & value = row[idx]; + finished = true; + break; + } - if (!value.isEmpty()) + for (int idx = 0; idx < result.columns(); ++idx) + { + const auto & sample = description.sample_block.getByPosition(idx); + + if (!result.is_null(idx)) { - if (description.types[idx].second) + bool is_nullable = description.types[idx].second; + + if (is_nullable) { ColumnNullable & column_nullable = assert_cast(*columns[idx]); - insertValue(column_nullable.getNestedColumn(), description.types[idx].first, value); + const auto & data_type = assert_cast(*sample.type); + insertValue(column_nullable.getNestedColumn(), data_type.getNestedType(), description.types[idx].first, result, idx); column_nullable.getNullMapData().emplace_back(0); } else - insertValue(*columns[idx], description.types[idx].first, value); + { + insertValue(*columns[idx], sample.type, description.types[idx].first, result, idx); + } } else - insertDefaultValue(*columns[idx], *description.sample_block.getByPosition(idx).column); + insertDefaultValue(*columns[idx], *sample.column); } - ++iterator; - - ++num_rows; - if (num_rows == max_block_size) + if (++num_rows == max_block_size) break; } return description.sample_block.cloneWithColumns(std::move(columns)); } + +void ODBCBlockInputStream::insertValue( + IColumn & column, const DataTypePtr data_type, const ValueType type, nanodbc::result & row, size_t idx) +{ + switch (type) + { + case ValueType::vtUInt8: + assert_cast(column).insertValue(row.get(idx)); + break; + case ValueType::vtUInt16: + assert_cast(column).insertValue(row.get(idx)); + break; + case ValueType::vtUInt32: + assert_cast(column).insertValue(row.get(idx)); + break; + case ValueType::vtUInt64: + assert_cast(column).insertValue(row.get(idx)); + break; + case ValueType::vtInt8: + assert_cast(column).insertValue(row.get(idx)); + break; + case ValueType::vtInt16: + assert_cast(column).insertValue(row.get(idx)); + break; + case ValueType::vtInt32: + assert_cast(column).insertValue(row.get(idx)); + break; + case ValueType::vtInt64: + assert_cast(column).insertValue(row.get(idx)); + break; + case ValueType::vtFloat32: + assert_cast(column).insertValue(row.get(idx)); + break; + case ValueType::vtFloat64: + assert_cast(column).insertValue(row.get(idx)); + break; + case ValueType::vtFixedString:[[fallthrough]]; + case ValueType::vtString: + assert_cast(column).insert(row.get(idx)); + break; + case ValueType::vtUUID: + { + auto value = row.get(idx); + assert_cast(column).insert(parse(value.data(), value.size())); + break; + } + case 
ValueType::vtDate: + assert_cast(column).insertValue(UInt16{LocalDate{row.get(idx)}.getDayNum()}); + break; + case ValueType::vtDateTime: + { + auto value = row.get(idx); + ReadBufferFromString in(value); + time_t time = 0; + readDateTimeText(time, in); + if (time < 0) + time = 0; + assert_cast(column).insertValue(time); + break; + } + case ValueType::vtDateTime64:[[fallthrough]]; + case ValueType::vtDecimal32: [[fallthrough]]; + case ValueType::vtDecimal64: [[fallthrough]]; + case ValueType::vtDecimal128: [[fallthrough]]; + case ValueType::vtDecimal256: + { + auto value = row.get(idx); + ReadBufferFromString istr(value); + data_type->getDefaultSerialization()->deserializeWholeText(column, istr, FormatSettings{}); + break; + } + default: + throw Exception("Unsupported value type", ErrorCodes::UNKNOWN_TYPE); + } +} + } diff --git a/programs/odbc-bridge/ODBCBlockInputStream.h b/programs/odbc-bridge/ODBCBlockInputStream.h index 13491e05822..bbd90ce4d6c 100644 --- a/programs/odbc-bridge/ODBCBlockInputStream.h +++ b/programs/odbc-bridge/ODBCBlockInputStream.h @@ -3,10 +3,8 @@ #include #include #include -#include -#include -#include #include +#include namespace DB @@ -15,25 +13,33 @@ namespace DB class ODBCBlockInputStream final : public IBlockInputStream { public: - ODBCBlockInputStream( - Poco::Data::Session && session_, const std::string & query_str, const Block & sample_block, const UInt64 max_block_size_); + ODBCBlockInputStream(nanodbc::connection & connection_, const std::string & query_str, const Block & sample_block, const UInt64 max_block_size_); String getName() const override { return "ODBC"; } Block getHeader() const override { return description.sample_block.cloneEmpty(); } private: + using QueryResult = std::shared_ptr; + using ValueType = ExternalResultDescription::ValueType; + Block readImpl() override; - Poco::Data::Session session; - Poco::Data::Statement statement; - Poco::Data::RecordSet result; - Poco::Data::RecordSet::Iterator iterator; + static void insertValue(IColumn & column, const DataTypePtr data_type, const ValueType type, nanodbc::result & row, size_t idx); + static void insertDefaultValue(IColumn & column, const IColumn & sample_column) + { + column.insertFrom(sample_column, 0); + } + + Poco::Logger * log; const UInt64 max_block_size; ExternalResultDescription description; - Poco::Logger * log; + nanodbc::connection & connection; + nanodbc::result result; + String query; + bool finished = false; }; } diff --git a/programs/odbc-bridge/ODBCBlockOutputStream.cpp b/programs/odbc-bridge/ODBCBlockOutputStream.cpp index 4d8b9fa6bdf..e4614204178 100644 --- a/programs/odbc-bridge/ODBCBlockOutputStream.cpp +++ b/programs/odbc-bridge/ODBCBlockOutputStream.cpp @@ -8,16 +8,14 @@ #include #include #include "getIdentifierQuote.h" +#include +#include +#include namespace DB { -namespace ErrorCodes -{ - extern const int UNKNOWN_TYPE; -} - namespace { using ValueType = ExternalResultDescription::ValueType; @@ -40,69 +38,21 @@ namespace return buf.str(); } - std::string getQuestionMarks(size_t n) - { - std::string result = "("; - for (size_t i = 0; i < n; ++i) - { - if (i > 0) - result += ","; - result += "?"; - } - return result + ")"; - } - - Poco::Dynamic::Var getVarFromField(const Field & field, const ValueType type) - { - switch (type) - { - case ValueType::vtUInt8: - return Poco::Dynamic::Var(static_cast(field.get())).convert(); - case ValueType::vtUInt16: - return Poco::Dynamic::Var(static_cast(field.get())).convert(); - case ValueType::vtUInt32: - return 
Poco::Dynamic::Var(static_cast(field.get())).convert(); - case ValueType::vtUInt64: - return Poco::Dynamic::Var(field.get()).convert(); - case ValueType::vtInt8: - return Poco::Dynamic::Var(static_cast(field.get())).convert(); - case ValueType::vtInt16: - return Poco::Dynamic::Var(static_cast(field.get())).convert(); - case ValueType::vtInt32: - return Poco::Dynamic::Var(static_cast(field.get())).convert(); - case ValueType::vtInt64: - return Poco::Dynamic::Var(field.get()).convert(); - case ValueType::vtFloat32: - return Poco::Dynamic::Var(field.get()).convert(); - case ValueType::vtFloat64: - return Poco::Dynamic::Var(field.get()).convert(); - case ValueType::vtString: - return Poco::Dynamic::Var(field.get()).convert(); - case ValueType::vtDate: - return Poco::Dynamic::Var(LocalDate(DayNum(field.get())).toString()).convert(); - case ValueType::vtDateTime: - return Poco::Dynamic::Var(std::to_string(LocalDateTime(time_t(field.get())))).convert(); - case ValueType::vtUUID: - return Poco::Dynamic::Var(UUID(field.get()).toUnderType().toHexString()).convert(); - default: - throw Exception("Unsupported value type", ErrorCodes::UNKNOWN_TYPE); - - } - __builtin_unreachable(); - } } -ODBCBlockOutputStream::ODBCBlockOutputStream(Poco::Data::Session && session_, +ODBCBlockOutputStream::ODBCBlockOutputStream(nanodbc::connection & connection_, const std::string & remote_database_name_, const std::string & remote_table_name_, const Block & sample_block_, + ContextPtr local_context_, IdentifierQuotingStyle quoting_) - : session(session_) + : log(&Poco::Logger::get("ODBCBlockOutputStream")) + , connection(connection_) , db_name(remote_database_name_) , table_name(remote_table_name_) , sample_block(sample_block_) + , local_context(local_context_) , quoting(quoting_) - , log(&Poco::Logger::get("ODBCBlockOutputStream")) { description.init(sample_block); } @@ -114,28 +64,12 @@ Block ODBCBlockOutputStream::getHeader() const void ODBCBlockOutputStream::write(const Block & block) { - ColumnsWithTypeAndName columns; - for (size_t i = 0; i < block.columns(); ++i) - columns.push_back({block.getColumns()[i], sample_block.getDataTypes()[i], sample_block.getNames()[i]}); + WriteBufferFromOwnString values_buf; + auto writer = FormatFactory::instance().getOutputStream("Values", values_buf, sample_block, local_context); + writer->write(block); - std::vector row_to_insert(block.columns()); - Poco::Data::Statement statement(session << getInsertQuery(db_name, table_name, columns, quoting) + getQuestionMarks(block.columns())); - for (size_t i = 0; i < block.columns(); ++i) - statement.addBind(Poco::Data::Keywords::use(row_to_insert[i])); - - for (size_t i = 0; i < block.rows(); ++i) - { - for (size_t col_idx = 0; col_idx < block.columns(); ++col_idx) - { - Field val; - columns[col_idx].column->get(i, val); - if (val.isNull()) - row_to_insert[col_idx] = Poco::Dynamic::Var(); - else - row_to_insert[col_idx] = getVarFromField(val, description.types[col_idx].first); - } - statement.execute(); - } + std::string query = getInsertQuery(db_name, table_name, block.getColumnsWithTypeAndName(), quoting) + values_buf.str(); + execute(connection, query); } } diff --git a/programs/odbc-bridge/ODBCBlockOutputStream.h b/programs/odbc-bridge/ODBCBlockOutputStream.h index 39e1d6f77ac..0b13f7039b5 100644 --- a/programs/odbc-bridge/ODBCBlockOutputStream.h +++ b/programs/odbc-bridge/ODBCBlockOutputStream.h @@ -2,30 +2,41 @@ #include #include -#include #include #include +#include +#include + namespace DB { + class ODBCBlockOutputStream : public 
IBlockOutputStream { + public: - ODBCBlockOutputStream(Poco::Data::Session && session_, const std::string & remote_database_name_, - const std::string & remote_table_name_, const Block & sample_block_, IdentifierQuotingStyle quoting); + ODBCBlockOutputStream( + nanodbc::connection & connection_, + const std::string & remote_database_name_, + const std::string & remote_table_name_, + const Block & sample_block_, + ContextPtr local_context_, + IdentifierQuotingStyle quoting); Block getHeader() const override; void write(const Block & block) override; private: - Poco::Data::Session session; + Poco::Logger * log; + + nanodbc::connection & connection; std::string db_name; std::string table_name; Block sample_block; + ContextPtr local_context; IdentifierQuotingStyle quoting; ExternalResultDescription description; - Poco::Logger * log; }; } diff --git a/programs/odbc-bridge/ODBCBridge.cpp b/programs/odbc-bridge/ODBCBridge.cpp index 8869a2639c1..0deefe46014 100644 --- a/programs/odbc-bridge/ODBCBridge.cpp +++ b/programs/odbc-bridge/ODBCBridge.cpp @@ -1,244 +1,4 @@ #include "ODBCBridge.h" -#include "HandlerFactory.h" - -#include -#include -#include -#include - -#if USE_ODBC -// It doesn't make much sense to build this bridge without ODBC, but we still do this. -# include -#endif - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - - -namespace DB -{ -namespace ErrorCodes -{ - extern const int ARGUMENT_OUT_OF_BOUND; -} - -namespace -{ - Poco::Net::SocketAddress makeSocketAddress(const std::string & host, UInt16 port, Poco::Logger * log) - { - Poco::Net::SocketAddress socket_address; - try - { - socket_address = Poco::Net::SocketAddress(host, port); - } - catch (const Poco::Net::DNSException & e) - { - const auto code = e.code(); - if (code == EAI_FAMILY -#if defined(EAI_ADDRFAMILY) - || code == EAI_ADDRFAMILY -#endif - ) - { - LOG_ERROR(log, "Cannot resolve listen_host ({}), error {}: {}. If it is an IPv6 address and your host has disabled IPv6, then consider to specify IPv4 address to listen in element of configuration file. 
Example: 0.0.0.0", host, e.code(), e.message()); - } - - throw; - } - return socket_address; - } - - Poco::Net::SocketAddress socketBindListen(Poco::Net::ServerSocket & socket, const std::string & host, UInt16 port, Poco::Logger * log) - { - auto address = makeSocketAddress(host, port, log); -#if POCO_VERSION < 0x01080000 - socket.bind(address, /* reuseAddress = */ true); -#else - socket.bind(address, /* reuseAddress = */ true, /* reusePort = */ false); -#endif - - socket.listen(/* backlog = */ 64); - - return address; - } -} - -void ODBCBridge::handleHelp(const std::string &, const std::string &) -{ - Poco::Util::HelpFormatter help_formatter(options()); - help_formatter.setCommand(commandName()); - help_formatter.setHeader("HTTP-proxy for odbc requests"); - help_formatter.setUsage("--http-port "); - help_formatter.format(std::cerr); - - stopOptionsProcessing(); -} - - -void ODBCBridge::defineOptions(Poco::Util::OptionSet & options) -{ - options.addOption(Poco::Util::Option("http-port", "", "port to listen").argument("http-port", true).binding("http-port")); - options.addOption( - Poco::Util::Option("listen-host", "", "hostname or address to listen, default 127.0.0.1").argument("listen-host").binding("listen-host")); - options.addOption( - Poco::Util::Option("http-timeout", "", "http timeout for socket, default 1800").argument("http-timeout").binding("http-timeout")); - - options.addOption(Poco::Util::Option("max-server-connections", "", "max connections to server, default 1024") - .argument("max-server-connections") - .binding("max-server-connections")); - options.addOption(Poco::Util::Option("keep-alive-timeout", "", "keepalive timeout, default 10") - .argument("keep-alive-timeout") - .binding("keep-alive-timeout")); - - options.addOption(Poco::Util::Option("log-level", "", "sets log level, default info").argument("log-level").binding("logger.level")); - - options.addOption( - Poco::Util::Option("log-path", "", "log path for all logs, default console").argument("log-path").binding("logger.log")); - - options.addOption(Poco::Util::Option("err-log-path", "", "err log path for all logs, default no") - .argument("err-log-path") - .binding("logger.errorlog")); - - options.addOption(Poco::Util::Option("stdout-path", "", "stdout log path, default console") - .argument("stdout-path") - .binding("logger.stdout")); - - options.addOption(Poco::Util::Option("stderr-path", "", "stderr log path, default console") - .argument("stderr-path") - .binding("logger.stderr")); - - using Me = std::decay_t; - options.addOption(Poco::Util::Option("help", "", "produce this help message") - .binding("help") - .callback(Poco::Util::OptionCallback(this, &Me::handleHelp))); - - ServerApplication::defineOptions(options); // NOLINT Don't need complex BaseDaemon's .xml config -} - -void ODBCBridge::initialize(Application & self) -{ - BaseDaemon::closeFDs(); - is_help = config().has("help"); - - if (is_help) - return; - - config().setString("logger", "ODBCBridge"); - - /// Redirect stdout, stderr to specified files. - /// Some libraries and sanitizers write to stderr in case of errors. - const auto stdout_path = config().getString("logger.stdout", ""); - if (!stdout_path.empty()) - { - if (!freopen(stdout_path.c_str(), "a+", stdout)) - throw Poco::OpenFileException("Cannot attach stdout to " + stdout_path); - - /// Disable buffering for stdout. 
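The initialize() code being removed above redirects stdout/stderr with freopen() and disables buffering (this logic now lives in the shared bridge base). A standalone sketch of that pattern, with an illustrative log path:

#include <cstdio>
#include <stdexcept>

static void redirectTo(const char * path, FILE * stream)
{
    if (!freopen(path, "a+", stream))          // append mode, as the bridge does
        throw std::runtime_error("Cannot attach stream to file");
    setbuf(stream, nullptr);                   // unbuffered, so crash/sanitizer output is not lost
}

int main()
{
    redirectTo("/tmp/bridge-stdout.log", stdout);  // hypothetical path
    printf("goes to the log file\n");
}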
- setbuf(stdout, nullptr); - } - const auto stderr_path = config().getString("logger.stderr", ""); - if (!stderr_path.empty()) - { - if (!freopen(stderr_path.c_str(), "a+", stderr)) - throw Poco::OpenFileException("Cannot attach stderr to " + stderr_path); - - /// Disable buffering for stderr. - setbuf(stderr, nullptr); - } - - buildLoggers(config(), logger(), self.commandName()); - - BaseDaemon::logRevision(); - - log = &logger(); - hostname = config().getString("listen-host", "127.0.0.1"); - port = config().getUInt("http-port"); - if (port > 0xFFFF) - throw Exception("Out of range 'http-port': " + std::to_string(port), ErrorCodes::ARGUMENT_OUT_OF_BOUND); - - http_timeout = config().getUInt("http-timeout", DEFAULT_HTTP_READ_BUFFER_TIMEOUT); - max_server_connections = config().getUInt("max-server-connections", 1024); - keep_alive_timeout = config().getUInt("keep-alive-timeout", 10); - - initializeTerminationAndSignalProcessing(); - -#if USE_ODBC - // It doesn't make much sense to build this bridge without ODBC, but we - // still do this. - Poco::Data::ODBC::Connector::registerConnector(); -#endif - - ServerApplication::initialize(self); // NOLINT -} - -void ODBCBridge::uninitialize() -{ - BaseDaemon::uninitialize(); -} - -int ODBCBridge::main(const std::vector & /*args*/) -{ - if (is_help) - return Application::EXIT_OK; - - registerFormats(); - - LOG_INFO(log, "Starting up"); - Poco::Net::ServerSocket socket; - auto address = socketBindListen(socket, hostname, port, log); - socket.setReceiveTimeout(http_timeout); - socket.setSendTimeout(http_timeout); - Poco::ThreadPool server_pool(3, max_server_connections); - Poco::Net::HTTPServerParams::Ptr http_params = new Poco::Net::HTTPServerParams; - http_params->setTimeout(http_timeout); - http_params->setKeepAliveTimeout(keep_alive_timeout); - - auto shared_context = Context::createShared(); - Context context(Context::createGlobal(shared_context.get())); - context.makeGlobalContext(); - - if (config().has("query_masking_rules")) - { - SensitiveDataMasker::setInstance(std::make_unique(config(), "query_masking_rules")); - } - - auto server = HTTPServer( - context, - std::make_shared("ODBCRequestHandlerFactory-factory", keep_alive_timeout, context), - server_pool, - socket, - http_params); - server.start(); - - LOG_INFO(log, "Listening http://{}", address.toString()); - - SCOPE_EXIT({ - LOG_DEBUG(log, "Received termination signal."); - LOG_DEBUG(log, "Waiting for current connections to close."); - server.stop(); - for (size_t count : ext::range(1, 6)) - { - if (server.currentConnections() == 0) - break; - LOG_DEBUG(log, "Waiting for {} connections, try {}", server.currentConnections(), count); - std::this_thread::sleep_for(std::chrono::milliseconds(1000)); - } - }); - - waitForTerminationRequest(); - return Application::EXIT_OK; -} -} #pragma GCC diagnostic ignored "-Wmissing-declarations" int mainEntryClickHouseODBCBridge(int argc, char ** argv) diff --git a/programs/odbc-bridge/ODBCBridge.h b/programs/odbc-bridge/ODBCBridge.h index 9a0d37fa0f9..b17051dce91 100644 --- a/programs/odbc-bridge/ODBCBridge.h +++ b/programs/odbc-bridge/ODBCBridge.h @@ -2,38 +2,25 @@ #include #include -#include +#include +#include "HandlerFactory.h" + namespace DB { -/** Class represents clickhouse-odbc-bridge server, which listen - * incoming HTTP POST and GET requests on specified port and host. 
- * Has two handlers '/' for all incoming POST requests to ODBC driver - * and /ping for GET request about service status - */ -class ODBCBridge : public BaseDaemon + +class ODBCBridge : public IBridge { -public: - void defineOptions(Poco::Util::OptionSet & options) override; protected: - void initialize(Application & self) override; + std::string bridgeName() const override + { + return "ODBCBridge"; + } - void uninitialize() override; - - int main(const std::vector<std::string> & args) override; - -private: - void handleHelp(const std::string &, const std::string &); - - bool is_help; - std::string hostname; - size_t port; - size_t http_timeout; - std::string log_level; - size_t max_server_connections; - size_t keep_alive_timeout; - - Poco::Logger * log; + HandlerFactoryPtr getHandlerFactoryPtr(ContextPtr context) const override + { + return std::make_shared<HandlerFactory>("ODBCRequestHandlerFactory-factory", keep_alive_timeout, context); + } }; } diff --git a/programs/odbc-bridge/ODBCConnectionFactory.h b/programs/odbc-bridge/ODBCConnectionFactory.h new file mode 100644 index 00000000000..56961ddb2fb --- /dev/null +++ b/programs/odbc-bridge/ODBCConnectionFactory.h @@ -0,0 +1,82 @@ +#pragma once + +#include +#include +#include +#include +#include + + +namespace nanodbc +{ + +static constexpr inline auto ODBC_CONNECT_TIMEOUT = 100; + +using ConnectionPtr = std::shared_ptr<nanodbc::connection>; +using Pool = BorrowedObjectPool<ConnectionPtr>; +using PoolPtr = std::shared_ptr<Pool>; + +class ConnectionHolder +{ + +public: + ConnectionHolder(const std::string & connection_string_, PoolPtr pool_) : connection_string(connection_string_), pool(pool_) {} + + ~ConnectionHolder() + { + if (connection) + pool->returnObject(std::move(connection)); + } + + nanodbc::connection & operator*() + { + if (!connection) + { + pool->borrowObject(connection, [&]() + { + return std::make_shared<nanodbc::connection>(connection_string, ODBC_CONNECT_TIMEOUT); + }); + } + + return *connection; + } + +private: + std::string connection_string; + PoolPtr pool; + ConnectionPtr connection; +}; + +} + + +namespace DB +{ + +class ODBCConnectionFactory final : private boost::noncopyable +{ +public: + static ODBCConnectionFactory & instance() + { + static ODBCConnectionFactory ret; + return ret; + } + + nanodbc::ConnectionHolder get(const std::string & connection_string, size_t pool_size) + { + std::lock_guard lock(mutex); + + if (!factory.count(connection_string)) + factory.emplace(std::make_pair(connection_string, std::make_shared<nanodbc::Pool>(pool_size))); + + return nanodbc::ConnectionHolder(connection_string, factory[connection_string]); + } + +private: + /// [connection_settings_string] -> [connection_pool] + using PoolFactory = std::unordered_map<std::string, nanodbc::PoolPtr>; + PoolFactory factory; + std::mutex mutex; +}; + +} diff --git a/programs/odbc-bridge/SchemaAllowedHandler.cpp b/programs/odbc-bridge/SchemaAllowedHandler.cpp index d4a70db61f4..4cceaee962c 100644 --- a/programs/odbc-bridge/SchemaAllowedHandler.cpp +++ b/programs/odbc-bridge/SchemaAllowedHandler.cpp @@ -2,33 +2,26 @@ #if USE_ODBC -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include "validateODBCConnectionString.h" +#include +#include +#include +#include +#include +#include +#include "validateODBCConnectionString.h" +#include "ODBCConnectionFactory.h" +#include +#include -# define POCO_SQL_ODBC_CLASS Poco::Data::ODBC namespace DB { namespace { - bool isSchemaAllowed(SQLHDBC hdbc) + bool isSchemaAllowed(nanodbc::connection & connection) { - SQLUINTEGER value; - SQLSMALLINT value_length = sizeof(value); - SQLRETURN r = 
POCO_SQL_ODBC_CLASS::SQLGetInfo(hdbc, SQL_SCHEMA_USAGE, &value, sizeof(value), &value_length); - - if (POCO_SQL_ODBC_CLASS::Utility::isError(r)) - throw POCO_SQL_ODBC_CLASS::ConnectionException(hdbc); - - return value != 0; + uint32_t result = connection.get_info<uint32_t>(SQL_SCHEMA_USAGE); + return result != 0; } } @@ -55,10 +48,12 @@ void SchemaAllowedHandler::handleRequest(HTTPServerRequest & request, HTTPServer try { std::string connection_string = params.get("connection_string"); - POCO_SQL_ODBC_CLASS::SessionImpl session(validateODBCConnectionString(connection_string), DBMS_DEFAULT_CONNECT_TIMEOUT_SEC); - SQLHDBC hdbc = session.dbc().handle(); - bool result = isSchemaAllowed(hdbc); + auto connection = ODBCConnectionFactory::instance().get( + validateODBCConnectionString(connection_string), + getContext()->getSettingsRef().odbc_bridge_connection_pool_size); + + bool result = isSchemaAllowed(*connection); WriteBufferFromHTTPServerResponse out(response, request.getMethod() == Poco::Net::HTTPRequest::HTTP_HEAD, keep_alive_timeout); try diff --git a/programs/odbc-bridge/SchemaAllowedHandler.h b/programs/odbc-bridge/SchemaAllowedHandler.h index 91eddf67803..d7b922ed05b 100644 --- a/programs/odbc-bridge/SchemaAllowedHandler.h +++ b/programs/odbc-bridge/SchemaAllowedHandler.h @@ -1,22 +1,25 @@ #pragma once +#include #include - #include #if USE_ODBC + namespace DB { class Context; /// This handler establishes a connection to the database and retrieves whether the schema is allowed. -class SchemaAllowedHandler : public HTTPRequestHandler +class SchemaAllowedHandler : public HTTPRequestHandler, WithContext { public: - SchemaAllowedHandler(size_t keep_alive_timeout_, Context &) - : log(&Poco::Logger::get("SchemaAllowedHandler")), keep_alive_timeout(keep_alive_timeout_) + SchemaAllowedHandler(size_t keep_alive_timeout_, ContextPtr context_) + : WithContext(context_) + , log(&Poco::Logger::get("SchemaAllowedHandler")) + , keep_alive_timeout(keep_alive_timeout_) { } diff --git a/programs/odbc-bridge/getIdentifierQuote.cpp b/programs/odbc-bridge/getIdentifierQuote.cpp index 15b3749d37d..9ccad6e6e1d 100644 --- a/programs/odbc-bridge/getIdentifierQuote.cpp +++ b/programs/odbc-bridge/getIdentifierQuote.cpp @@ -2,11 +2,10 @@ #if USE_ODBC -# include -# include -# include - -# define POCO_SQL_ODBC_CLASS Poco::Data::ODBC +#include +#include +#include +#include namespace DB @@ -17,33 +16,27 @@ namespace ErrorCodes extern const int ILLEGAL_TYPE_OF_ARGUMENT; } -std::string getIdentifierQuote(SQLHDBC hdbc) + +std::string getIdentifierQuote(nanodbc::connection & connection) { - std::string identifier; - - SQLSMALLINT t; - SQLRETURN r = POCO_SQL_ODBC_CLASS::SQLGetInfo(hdbc, SQL_IDENTIFIER_QUOTE_CHAR, nullptr, 0, &t); - - if (POCO_SQL_ODBC_CLASS::Utility::isError(r)) - throw POCO_SQL_ODBC_CLASS::ConnectionException(hdbc); - - if (t > 0) + std::string quote; + try { - // I have no idea, why to add '2' here, got from: contrib/poco/Data/ODBC/src/ODBCStatementImpl.cpp:60 (SQL_DRIVER_NAME) - identifier.resize(static_cast<std::size_t>(t) + 2); - - if (POCO_SQL_ODBC_CLASS::Utility::isError(POCO_SQL_ODBC_CLASS::SQLGetInfo( - hdbc, SQL_IDENTIFIER_QUOTE_CHAR, &identifier[0], SQLSMALLINT((identifier.length() - 1) * sizeof(identifier[0])), &t))) - throw POCO_SQL_ODBC_CLASS::ConnectionException(hdbc); - - identifier.resize(static_cast<std::size_t>(t)); + quote = connection.get_info<std::string>(SQL_IDENTIFIER_QUOTE_CHAR); } - return identifier; + catch (...) + { + LOG_WARNING(&Poco::Logger::get("ODBCGetIdentifierQuote"), "Cannot fetch identifier quote. Default double quote is used. 
Reason: {}", getCurrentExceptionMessage(false)); + return "\""; + } + + return quote; } -IdentifierQuotingStyle getQuotingStyle(SQLHDBC hdbc) + +IdentifierQuotingStyle getQuotingStyle(nanodbc::connection & connection) { - auto identifier_quote = getIdentifierQuote(hdbc); + auto identifier_quote = getIdentifierQuote(connection); if (identifier_quote.length() == 0) return IdentifierQuotingStyle::None; else if (identifier_quote[0] == '`') diff --git a/programs/odbc-bridge/getIdentifierQuote.h b/programs/odbc-bridge/getIdentifierQuote.h index 0fb4c3bddb1..7f7156eff82 100644 --- a/programs/odbc-bridge/getIdentifierQuote.h +++ b/programs/odbc-bridge/getIdentifierQuote.h @@ -2,20 +2,19 @@ #if USE_ODBC -# include -# include -# include - -# include - +#include +#include +#include #include +#include + namespace DB { -std::string getIdentifierQuote(SQLHDBC hdbc); +std::string getIdentifierQuote(nanodbc::connection & connection); -IdentifierQuotingStyle getQuotingStyle(SQLHDBC hdbc); +IdentifierQuotingStyle getQuotingStyle(nanodbc::connection & connection); } diff --git a/programs/server/.gitignore b/programs/server/.gitignore index b774776e4be..ddc480e4b29 100644 --- a/programs/server/.gitignore +++ b/programs/server/.gitignore @@ -1,8 +1,11 @@ -/access -/dictionaries_lib -/flags -/format_schemas +/metadata /metadata_dropped +/data +/store +/access +/flags +/dictionaries_lib +/format_schemas /preprocessed_configs /shadow /tmp diff --git a/programs/server/CMakeLists.txt b/programs/server/CMakeLists.txt index 198d9081168..0dcfbce1c30 100644 --- a/programs/server/CMakeLists.txt +++ b/programs/server/CMakeLists.txt @@ -19,6 +19,7 @@ set (CLICKHOUSE_SERVER_LINK clickhouse_storages_system clickhouse_table_functions string_utils + jemalloc ${LINK_RESOURCE_LIB} @@ -28,7 +29,7 @@ set (CLICKHOUSE_SERVER_LINK clickhouse_program_add(server) -install(FILES config.xml users.xml DESTINATION ${CLICKHOUSE_ETC_DIR}/clickhouse-server COMPONENT clickhouse) +install(FILES config.xml users.xml DESTINATION "${CLICKHOUSE_ETC_DIR}/clickhouse-server" COMPONENT clickhouse) # TODO We actually need this on Mac, FreeBSD. 
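Stepping back to the getQuotingStyle() hunk above: it maps the driver-reported quote character onto ClickHouse's IdentifierQuotingStyle. A minimal sketch of that mapping; the DoubleQuotes fallback is an assumption, since the hunk is cut off after the backtick case:

#include <string>

enum class IdentifierQuotingStyle { None, Backticks, DoubleQuotes };

IdentifierQuotingStyle styleFromQuote(const std::string & quote)
{
    if (quote.empty())
        return IdentifierQuotingStyle::None;        // driver reports no identifier quoting
    if (quote[0] == '`')
        return IdentifierQuotingStyle::Backticks;   // MySQL-style drivers
    return IdentifierQuotingStyle::DoubleQuotes;    // SQL-standard default, matching the "\"" fallback above
}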
if (OS_LINUX) @@ -42,11 +43,16 @@ if (OS_LINUX) set(RESOURCE_OBJS ${RESOURCE_OBJS} ${RESOURCE_OBJ}) # https://stackoverflow.com/questions/14776463/compile-and-add-an-object-file-from-a-binary-with-cmake - add_custom_command(OUTPUT ${RESOURCE_OBJ} - COMMAND cd ${CMAKE_CURRENT_SOURCE_DIR} && ${OBJCOPY_PATH} -I binary ${OBJCOPY_ARCH_OPTIONS} ${RESOURCE_FILE} ${CMAKE_CURRENT_BINARY_DIR}/${RESOURCE_OBJ} - COMMAND ${OBJCOPY_PATH} --rename-section .data=.rodata,alloc,load,readonly,data,contents - ${CMAKE_CURRENT_BINARY_DIR}/${RESOURCE_OBJ} ${CMAKE_CURRENT_BINARY_DIR}/${RESOURCE_OBJ}) - + # PPC64LE fails to do this with objcopy, use ld or lld instead + if (ARCH_PPC64LE) + add_custom_command(OUTPUT ${RESOURCE_OBJ} + COMMAND cd ${CMAKE_CURRENT_SOURCE_DIR} && ${CMAKE_LINKER} -m elf64lppc -r -b binary -o "${CMAKE_CURRENT_BINARY_DIR}/${RESOURCE_OBJ}" ${RESOURCE_FILE}) + else() + add_custom_command(OUTPUT ${RESOURCE_OBJ} + COMMAND cd ${CMAKE_CURRENT_SOURCE_DIR} && ${OBJCOPY_PATH} -I binary ${OBJCOPY_ARCH_OPTIONS} ${RESOURCE_FILE} "${CMAKE_CURRENT_BINARY_DIR}/${RESOURCE_OBJ}" + COMMAND ${OBJCOPY_PATH} --rename-section .data=.rodata,alloc,load,readonly,data,contents + "${CMAKE_CURRENT_BINARY_DIR}/${RESOURCE_OBJ}" "${CMAKE_CURRENT_BINARY_DIR}/${RESOURCE_OBJ}") + endif() set_source_files_properties(${RESOURCE_OBJ} PROPERTIES EXTERNAL_OBJECT true GENERATED true) endforeach(RESOURCE_FILE) diff --git a/programs/server/Server.cpp b/programs/server/Server.cpp index 9889b08828b..729dc150e07 100644 --- a/programs/server/Server.cpp +++ b/programs/server/Server.cpp @@ -14,6 +14,7 @@ #include #include #include +#include #include #include #include @@ -47,6 +48,8 @@ #include #include #include +#include +#include #include #include #include @@ -84,6 +87,8 @@ # include # include # include +# include +# include #endif #if USE_SSL @@ -96,7 +101,11 @@ #endif #if USE_NURAFT -# include +# include +#endif + +#if USE_JEMALLOC +# include #endif namespace CurrentMetrics @@ -107,11 +116,35 @@ namespace CurrentMetrics extern const Metric MaxDDLEntryID; } +#if USE_JEMALLOC +static bool jemallocOptionEnabled(const char *name) +{ + bool value; + size_t size = sizeof(value); + + if (mallctl(name, reinterpret_cast(&value), &size, /* newp= */ nullptr, /* newlen= */ 0)) + throw Poco::SystemException("mallctl() failed"); + + return value; +} +#else +static bool jemallocOptionEnabled(const char *) { return 0; } +#endif + int mainEntryClickHouseServer(int argc, char ** argv) { DB::Server app; + if (jemallocOptionEnabled("opt.background_thread")) + { + LOG_ERROR(&app.logger(), + "jemalloc.background_thread was requested, " + "however ClickHouse uses percpu_arena and background_thread most likely will not give any benefits, " + "and also background_thread is not compatible with ClickHouse watchdog " + "(that can be disabled with CLICKHOUSE_WATCHDOG_ENABLE=0)"); + } + /// Do not fork separate process from watchdog if we attached to terminal. /// Otherwise it breaks gdb usage. /// Can be overridden by environment variable (cannot use server config at this moment). 
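The jemallocOptionEnabled() helper above reads a boolean option through jemalloc's mallctl() control interface. A self-contained sketch of that call (requires linking against jemalloc):

#include <jemalloc/jemalloc.h>
#include <cstdio>

int main()
{
    bool value = false;
    size_t size = sizeof(value);
    /* "opt.background_thread" is the option the server checks at startup. */
    if (mallctl("opt.background_thread", &value, &size, nullptr, 0) != 0)
    {
        fprintf(stderr, "mallctl() failed\n");
        return 1;
    }
    printf("background_thread: %s\n", value ? "enabled" : "disabled");
    return 0;
}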
@@ -171,18 +204,24 @@ int waitServersToFinish(std::vector<DB::ProtocolServerAdapter> & servers, size_t const int sleep_one_ms = 100; int sleep_current_ms = 0; int current_connections = 0; - while (sleep_current_ms < sleep_max_ms) + for (;;) { current_connections = 0; + for (auto & server : servers) { server.stop(); current_connections += server.currentConnections(); } + if (!current_connections) break; + sleep_current_ms += sleep_one_ms; - std::this_thread::sleep_for(std::chrono::milliseconds(sleep_one_ms)); + if (sleep_current_ms < sleep_max_ms) + std::this_thread::sleep_for(std::chrono::milliseconds(sleep_one_ms)); + else + break; } return current_connections; } @@ -390,6 +429,19 @@ void checkForUsersNotInMainConfig( } +[[noreturn]] void forceShutdown() +{ +#if defined(THREAD_SANITIZER) && defined(OS_LINUX) + /// Thread sanitizer tries to do something on exit that we don't need if we want to exit immediately, + /// while connection handling threads are still running. + (void)syscall(SYS_exit_group, 0); + __builtin_unreachable(); +#else + _exit(0); +#endif +} + + int Server::main(const std::vector<std::string> & /*args*/) { Poco::Logger * log = &logger(); @@ -424,8 +476,7 @@ int Server::main(const std::vector<std::string> & /*args*/) * settings, available functions, data types, aggregate functions, databases, ... */ auto shared_context = Context::createShared(); - auto global_context = std::make_unique<Context>(Context::createGlobal(shared_context.get())); - global_context_ptr = global_context.get(); + global_context = Context::createGlobal(shared_context.get()); global_context->makeGlobalContext(); global_context->setApplicationType(Context::ApplicationType::SERVER); @@ -687,16 +738,8 @@ int Server::main(const std::vector<std::string> & /*args*/) } } - if (config().has("interserver_http_credentials")) - { - String user = config().getString("interserver_http_credentials.user", ""); - String password = config().getString("interserver_http_credentials.password", ""); - - if (user.empty()) - throw Exception("Configuration parameter interserver_http_credentials user can't be empty", ErrorCodes::NO_ELEMENTS_IN_CONFIG); - - global_context->setInterserverCredentials(user, password); - } + LOG_DEBUG(log, "Initializing interserver credentials."); + global_context->updateInterserverCredentials(config()); if (config().has("macros")) global_context->setMacros(std::make_unique<Macros>(config(), "macros", log)); @@ -757,6 +800,7 @@ int Server::main(const std::vector<std::string> & /*args*/) global_context->setClustersConfig(config); global_context->setMacros(std::make_unique<Macros>(*config, "macros", log)); global_context->setExternalAuthenticatorsConfig(*config); + global_context->setExternalModelsConfig(config); /// Setup protection to avoid accidental DROP for big tables (that are greater than 50 GB by default) if (config->has("max_table_size_to_drop")) @@ -776,6 +820,7 @@ int Server::main(const std::vector<std::string> & /*args*/) } global_context->updateStorageConfiguration(*config); + global_context->updateInterserverCredentials(*config); }, /* already_loaded = */ false); /// Reload it right now (initial loading) @@ -828,10 +873,14 @@ int Server::main(const std::vector<std::string> & /*args*/) } global_context->setMarkCache(mark_cache_size); + /// A cache for mmapped files. + size_t mmap_cache_size = config().getUInt64("mmap_cache_size", 1000); /// The choice of default is arbitrary. 
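The waitServersToFinish() rewrite at the top of this hunk turns the wait into a bounded drain loop: stop accepting, count live connections, and sleep only while the deadline has not passed. A self-contained rendering, with Server standing in for the real ProtocolServerAdapter:

#include <chrono>
#include <thread>
#include <vector>

struct Server { void stop() {} int currentConnections() const { return 0; } };

int waitServersToFinish(std::vector<Server> & servers, size_t seconds_to_wait)
{
    const int sleep_max_ms = 1000 * static_cast<int>(seconds_to_wait);
    const int sleep_one_ms = 100;
    int sleep_current_ms = 0;
    int current_connections = 0;
    for (;;)
    {
        current_connections = 0;
        for (auto & server : servers)
        {
            server.stop();                               // stop accepting new connections
            current_connections += server.currentConnections();
        }
        if (!current_connections)
            break;                                       // fully drained
        sleep_current_ms += sleep_one_ms;
        if (sleep_current_ms < sleep_max_ms)
            std::this_thread::sleep_for(std::chrono::milliseconds(sleep_one_ms));
        else
            break;                                       // deadline reached, give up
    }
    return current_connections;                          // non-zero triggers the forced-shutdown path
}

int main()
{
    std::vector<Server> servers(2);
    return waitServersToFinish(servers, 5);
}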
+ if (mmap_cache_size) + global_context->setMMappedFileCache(mmap_cache_size); + #if USE_EMBEDDED_COMPILER size_t compiled_expression_cache_size = config().getUInt64("compiled_expression_cache_size", 500); - if (compiled_expression_cache_size) - global_context->setCompiledExpressionCache(compiled_expression_cache_size); + CompiledExpressionCacheFactory::instance().init(compiled_expression_cache_size); #endif /// Set path for format schema files @@ -862,15 +911,15 @@ int Server::main(const std::vector<std::string> & /*args*/) listen_try = true; } - if (config().has("test_keeper_server")) + if (config().has("keeper_server")) { #if USE_NURAFT /// Initialize keeper RAFT. Do nothing if no keeper_server in config. - global_context->initializeNuKeeperStorageDispatcher(); + global_context->initializeKeeperStorageDispatcher(); for (const auto & listen_host : listen_hosts) { - /// TCP NuKeeper - const char * port_name = "test_keeper_server.tcp_port"; + /// TCP Keeper + const char * port_name = "keeper_server.tcp_port"; createServer(listen_host, port_name, listen_try, [&](UInt16 port) { Poco::Net::ServerSocket socket; @@ -880,9 +929,29 @@ int Server::main(const std::vector<std::string> & /*args*/) servers_to_start_before_tables->emplace_back( port_name, std::make_unique<Poco::Net::TCPServer>( - new NuKeeperTCPHandlerFactory(*this), server_pool, socket, new Poco::Net::TCPServerParams)); + new KeeperTCPHandlerFactory(*this, false), server_pool, socket, new Poco::Net::TCPServerParams)); - LOG_INFO(log, "Listening for connections to NuKeeper (tcp): {}", address.toString()); + LOG_INFO(log, "Listening for connections to Keeper (tcp): {}", address.toString()); + }); + + const char * secure_port_name = "keeper_server.tcp_port_secure"; + createServer(listen_host, secure_port_name, listen_try, [&](UInt16 port) + { +#if USE_SSL + Poco::Net::SecureServerSocket socket; + auto address = socketBindListen(socket, listen_host, port, /* secure = */ true); + socket.setReceiveTimeout(settings.receive_timeout); + socket.setSendTimeout(settings.send_timeout); + servers_to_start_before_tables->emplace_back( + secure_port_name, + std::make_unique<Poco::Net::TCPServer>( + new KeeperTCPHandlerFactory(*this, true), server_pool, socket, new Poco::Net::TCPServerParams)); + LOG_INFO(log, "Listening for connections to Keeper with secure protocol (tcp_secure): {}", address.toString()); +#else + UNUSED(port); + throw Exception{"SSL support for TCP protocol is disabled because Poco library was built without NetSSL support.", + ErrorCodes::SUPPORT_IS_DISABLED}; +#endif }); } #else @@ -929,13 +998,15 @@ int Server::main(const std::vector<std::string> & /*args*/) else LOG_INFO(log, "Closed connections to servers for tables."); - global_context->shutdownNuKeeperStorageDispatcher(); + global_context->shutdownKeeperStorageDispatcher(); } + /// Wait for the server pool to avoid use-after-free of destroyed context in the handlers + server_pool.joinAll(); + /** Explicitly destroy Context. It is more convenient than in destructor of Server, because logger is still available. * At this moment, no one could own shared part of Context. */ - global_context_ptr = nullptr; global_context.reset(); shared_context.reset(); LOG_DEBUG(log, "Destroyed global context."); @@ -949,14 +1020,14 @@ int Server::main(const std::vector<std::string> & /*args*/) try { - loadMetadataSystem(*global_context); + loadMetadataSystem(global_context); /// After attaching system databases we can initialize system log. 
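The keeper_server.tcp_port_secure handler above illustrates a compile-time gate: when NetSSL support is compiled out, the port handler still registers but fails with an explicit error instead of silently listening in plain text. A hedged sketch of that pattern; startTlsListener() is a hypothetical stand-in for the SecureServerSocket setup:

#include <stdexcept>

void startTlsListener(unsigned short) {}   // hypothetical; real code binds Poco::Net::SecureServerSocket

void createSecureServer(unsigned short port)
{
#if defined(USE_SSL) && USE_SSL
    startTlsListener(port);
#else
    (void)port;
    throw std::runtime_error("SSL support for TCP protocol is disabled "
                             "because Poco library was built without NetSSL support.");
#endif
}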
global_context->initializeSystemLogs(); auto & database_catalog = DatabaseCatalog::instance(); /// After the system database is created, attach virtual system tables (in addition to query_log and part_log) attachSystemTablesServer(*database_catalog.getSystemDatabase(), has_zookeeper); /// Then, load remaining databases - loadMetadata(*global_context, default_database); + loadMetadata(global_context, default_database); database_catalog.loadDatabases(); /// After loading validate that default database exists database_catalog.assertDatabaseExists(default_database); @@ -981,7 +1052,7 @@ int Server::main(const std::vector<std::string> & /*args*/) /// /// Look at compiler-rt/lib/sanitizer_common/sanitizer_stacktrace.h /// -#if USE_UNWIND && !WITH_COVERAGE && !defined(SANITIZER) +#if USE_UNWIND && !WITH_COVERAGE && !defined(SANITIZER) && defined(__x86_64__) /// Profilers cannot work reliably with any other libunwind or without PHDR cache. if (hasPHDRCache()) { @@ -1018,6 +1089,10 @@ int Server::main(const std::vector<std::string> & /*args*/) " when two different stack unwinding methods will interfere with each other."); #endif +#if !defined(__x86_64__) + LOG_INFO(log, "Query Profiler is only tested on x86_64. It is also known to not work under qemu-user."); +#endif + if (!hasPHDRCache()) LOG_INFO(log, "Query Profiler and TraceCollector are disabled because they require PHDR cache to be created" " (otherwise the function 'dl_iterate_phdr' is not lock free and not async-signal safe)."); @@ -1032,7 +1107,7 @@ int Server::main(const std::vector<std::string> & /*args*/) else { /// Initialize a watcher periodically updating DNS cache - dns_cache_updater = std::make_unique<DNSCacheUpdater>(*global_context, config().getInt("dns_cache_update_period", 15)); + dns_cache_updater = std::make_unique<DNSCacheUpdater>(global_context, config().getInt("dns_cache_update_period", 15)); } #if defined(OS_LINUX) @@ -1064,7 +1139,7 @@ int Server::main(const std::vector<std::string> & /*args*/) { /// This object will periodically calculate some metrics. AsynchronousMetrics async_metrics( - *global_context, config().getUInt("asynchronous_metrics_update_period_s", 60), servers_to_start_before_tables, servers); + global_context, config().getUInt("asynchronous_metrics_update_period_s", 60), servers_to_start_before_tables, servers); attachSystemTablesAsync(*DatabaseCatalog::instance().getSystemDatabase(), async_metrics); for (const auto & listen_host : listen_hosts) @@ -1280,9 +1355,6 @@ int Server::main(const std::vector<std::string> & /*args*/) async_metrics.start(); global_context->enableNamedSessions(); - for (auto & server : *servers) - server.start(); - { String level_str = config().getString("text_log.level", ""); int level = level_str.empty() ? INT_MAX : Poco::Logger::parseLevel(level_str); @@ -1304,7 +1376,7 @@ int Server::main(const std::vector<std::string> & /*args*/) } /// try to load dictionaries immediately, throw on error and die - ext::scope_guard dictionaries_xmls, models_xmls; + ext::scope_guard dictionaries_xmls; try { if (!config().getBool("dictionaries_lazy_load", true)) { } dictionaries_xmls = global_context->getExternalDictionariesLoader().addConfigRepository( std::make_unique<ExternalLoaderXMLConfigRepository>(config(), "dictionaries_config")); - models_xmls = global_context->getExternalModelsLoader().addConfigRepository( - std::make_unique<ExternalLoaderXMLConfigRepository>(config(), "models_config")); } catch (...) 
{ @@ -1330,10 +1400,12 @@ int Server::main(const std::vector<std::string> & /*args*/) int pool_size = config().getInt("distributed_ddl.pool_size", 1); if (pool_size < 1) throw Exception("distributed_ddl.pool_size should be greater than 0", ErrorCodes::ARGUMENT_OUT_OF_BOUND); - global_context->setDDLWorker(std::make_unique<DDLWorker>(pool_size, ddl_zookeeper_path, *global_context, &config(), + global_context->setDDLWorker(std::make_unique<DDLWorker>(pool_size, ddl_zookeeper_path, global_context, &config(), "distributed_ddl", "DDLWorker", &CurrentMetrics::MaxDDLEntryID)); } + for (auto & server : *servers) + server.start(); LOG_INFO(log, "Ready for connections."); SCOPE_EXIT({ @@ -1379,7 +1451,7 @@ int Server::main(const std::vector<std::string> & /*args*/) /// Dump coverage here, because std::atexit callback would not be called. dumpCoverageReportIfPossible(); LOG_INFO(log, "Will shutdown forcefully."); - _exit(Application::EXIT_OK); + forceShutdown(); } }); diff --git a/programs/server/Server.h b/programs/server/Server.h index fbfc26f6ee5..c698108767c 100644 --- a/programs/server/Server.h +++ b/programs/server/Server.h @@ -40,9 +40,9 @@ public: return BaseDaemon::logger(); } - Context & context() const override + ContextPtr context() const override { - return *global_context_ptr; + return global_context; } bool isCancelled() const override @@ -64,8 +64,7 @@ protected: std::string getDefaultCorePath() const override; private: - Context * global_context_ptr = nullptr; - + ContextPtr global_context; Poco::Net::SocketAddress socketBindListen(Poco::Net::ServerSocket & socket, const std::string & host, UInt16 port, [[maybe_unused]] bool secure = false) const; using CreateServerFunc = std::function<void(UInt16 port)>; diff --git a/programs/server/config.d/keeper_port.xml b/programs/server/config.d/keeper_port.xml new file mode 120000 index 00000000000..6ebfce266fc --- /dev/null +++ b/programs/server/config.d/keeper_port.xml @@ -0,0 +1 @@ +../../../tests/config/config.d/keeper_port.xml \ No newline at end of file diff --git a/programs/server/config.d/test_keeper_port.xml b/programs/server/config.d/test_keeper_port.xml deleted file mode 120000 index f3f721caae0..00000000000 --- a/programs/server/config.d/test_keeper_port.xml +++ /dev/null @@ -1 +0,0 @@ -../../../tests/config/config.d/test_keeper_port.xml \ No newline at end of file diff --git a/programs/server/config.xml b/programs/server/config.xml index 715a366af00..195b6263595 100644 --- a/programs/server/config.xml +++ b/programs/server/config.xml @@ -7,7 +7,20 @@ --> - + trace /var/log/clickhouse-server/clickhouse-server.log /var/log/clickhouse-server/clickhouse-server.err.log @@ -76,7 +89,7 @@ - + 9005 + 1000 + + /var/lib/clickhouse/ @@ -599,7 +631,7 @@ - + - + remove <--- - return loadable_definition_name.substr(database_name.length() + 1); - } -} - - -ExternalLoaderDatabaseConfigRepository::ExternalLoaderDatabaseConfigRepository(IDatabase & database_, const Context & context_) - : global_context(context_.getGlobalContext()) , database_name(database_.getDatabaseName()) , database(database_) -{ -} - -LoadablesConfigurationPtr ExternalLoaderDatabaseConfigRepository::load(const std::string & loadable_definition_name) -{ - auto dict_name = trimDatabaseName(loadable_definition_name, database_name, database, global_context); - return database.getDictionaryConfiguration(dict_name); -} - -bool ExternalLoaderDatabaseConfigRepository::exists(const std::string & loadable_definition_name) -{ - auto dict_name = trimDatabaseName(loadable_definition_name, database_name, database, global_context); - return 
database.isDictionaryExist(dict_name); -} - -Poco::Timestamp ExternalLoaderDatabaseConfigRepository::getUpdateTime(const std::string & loadable_definition_name) -{ - auto dict_name = trimDatabaseName(loadable_definition_name, database_name, database, global_context); - return database.getObjectMetadataModificationTime(dict_name); -} - -std::set ExternalLoaderDatabaseConfigRepository::getAllLoadablesDefinitionNames() -{ - std::set result; - auto itr = database.getDictionariesIterator(); - bool is_atomic_database = database.getUUID() != UUIDHelpers::Nil; - while (itr && itr->isValid()) - { - if (is_atomic_database) - { - assert(itr->uuid() != UUIDHelpers::Nil); - result.insert(toString(itr->uuid())); - } - else - result.insert(database_name + "." + itr->name()); - itr->next(); - } - return result; -} - -} diff --git a/src/Interpreters/ExternalLoaderDatabaseConfigRepository.h b/src/Interpreters/ExternalLoaderDatabaseConfigRepository.h index 59dad1274e0..b8dd6e278ad 100644 --- a/src/Interpreters/ExternalLoaderDatabaseConfigRepository.h +++ b/src/Interpreters/ExternalLoaderDatabaseConfigRepository.h @@ -9,12 +9,12 @@ namespace DB /// Repository from database, which stores dictionary definitions on disk. /// Tracks update time and existence of .sql files through IDatabase. -class ExternalLoaderDatabaseConfigRepository : public IExternalLoaderConfigRepository +class ExternalLoaderDatabaseConfigRepository : public IExternalLoaderConfigRepository, WithContext { public: - ExternalLoaderDatabaseConfigRepository(IDatabase & database_, const Context & global_context_); + ExternalLoaderDatabaseConfigRepository(IDatabase & database_, ContextPtr global_context_); - const std::string & getName() const override { return database_name; } + std::string getName() const override { return database_name; } std::set getAllLoadablesDefinitionNames() override; @@ -25,7 +25,6 @@ public: LoadablesConfigurationPtr load(const std::string & loadable_definition_name) override; private: - const Context & global_context; const String database_name; IDatabase & database; }; diff --git a/src/Interpreters/ExternalLoaderDictionaryStorageConfigRepository.cpp b/src/Interpreters/ExternalLoaderDictionaryStorageConfigRepository.cpp new file mode 100644 index 00000000000..86f5a9ded0a --- /dev/null +++ b/src/Interpreters/ExternalLoaderDictionaryStorageConfigRepository.cpp @@ -0,0 +1,39 @@ +#include "ExternalLoaderDictionaryStorageConfigRepository.h" + +#include +#include + +namespace DB +{ + +ExternalLoaderDictionaryStorageConfigRepository::ExternalLoaderDictionaryStorageConfigRepository(const StorageDictionary & dictionary_storage_) + : dictionary_storage(dictionary_storage_) +{ +} + +std::string ExternalLoaderDictionaryStorageConfigRepository::getName() const +{ + return dictionary_storage.getStorageID().getInternalDictionaryName(); +} + +std::set ExternalLoaderDictionaryStorageConfigRepository::getAllLoadablesDefinitionNames() +{ + return { getName() }; +} + +bool ExternalLoaderDictionaryStorageConfigRepository::exists(const std::string & loadable_definition_name) +{ + return getName() == loadable_definition_name; +} + +Poco::Timestamp ExternalLoaderDictionaryStorageConfigRepository::getUpdateTime(const std::string &) +{ + return dictionary_storage.getUpdateTime(); +} + +LoadablesConfigurationPtr ExternalLoaderDictionaryStorageConfigRepository::load(const std::string &) +{ + return dictionary_storage.getConfiguration(); +} + +} diff --git a/src/Interpreters/ExternalLoaderDictionaryStorageConfigRepository.h 
b/src/Interpreters/ExternalLoaderDictionaryStorageConfigRepository.h new file mode 100644 index 00000000000..06d2b0faf75 --- /dev/null +++ b/src/Interpreters/ExternalLoaderDictionaryStorageConfigRepository.h @@ -0,0 +1,30 @@ +#pragma once + +#include +#include + +namespace DB +{ + +class StorageDictionary; + +class ExternalLoaderDictionaryStorageConfigRepository : public IExternalLoaderConfigRepository +{ +public: + explicit ExternalLoaderDictionaryStorageConfigRepository(const StorageDictionary & dictionary_storage_); + + std::string getName() const override; + + std::set getAllLoadablesDefinitionNames() override; + + bool exists(const std::string & loadable_definition_name) override; + + Poco::Timestamp getUpdateTime(const std::string & loadable_definition_name) override; + + LoadablesConfigurationPtr load(const std::string & loadable_definition_name) override; + +private: + const StorageDictionary & dictionary_storage; +}; + +} diff --git a/src/Interpreters/ExternalLoaderTempConfigRepository.h b/src/Interpreters/ExternalLoaderTempConfigRepository.h index 46e2eb846e9..d042f310ed1 100644 --- a/src/Interpreters/ExternalLoaderTempConfigRepository.h +++ b/src/Interpreters/ExternalLoaderTempConfigRepository.h @@ -13,7 +13,7 @@ class ExternalLoaderTempConfigRepository : public IExternalLoaderConfigRepositor public: ExternalLoaderTempConfigRepository(const String & repository_name_, const String & path_, const LoadablesConfigurationPtr & config_); - const String & getName() const override { return name; } + String getName() const override { return name; } bool isTemporary() const override { return true; } std::set getAllLoadablesDefinitionNames() override; diff --git a/src/Interpreters/ExternalLoaderXMLConfigRepository.h b/src/Interpreters/ExternalLoaderXMLConfigRepository.h index dd689856300..76f51f04397 100644 --- a/src/Interpreters/ExternalLoaderXMLConfigRepository.h +++ b/src/Interpreters/ExternalLoaderXMLConfigRepository.h @@ -15,7 +15,7 @@ class ExternalLoaderXMLConfigRepository : public IExternalLoaderConfigRepository public: ExternalLoaderXMLConfigRepository(const Poco::Util::AbstractConfiguration & main_config_, const std::string & config_key_); - const String & getName() const override { return name; } + std::string getName() const override { return name; } /// Return set of .xml files from path in main_config (config_key) std::set getAllLoadablesDefinitionNames() override; diff --git a/src/Interpreters/ExternalModelsLoader.cpp b/src/Interpreters/ExternalModelsLoader.cpp index 4e9ddb78241..317cf0bf1c9 100644 --- a/src/Interpreters/ExternalModelsLoader.cpp +++ b/src/Interpreters/ExternalModelsLoader.cpp @@ -10,9 +10,8 @@ namespace ErrorCodes } -ExternalModelsLoader::ExternalModelsLoader(Context & context_) - : ExternalLoader("external model", &Poco::Logger::get("ExternalModelsLoader")) - , context(context_) +ExternalModelsLoader::ExternalModelsLoader(ContextPtr context_) + : ExternalLoader("external model", &Poco::Logger::get("ExternalModelsLoader")), WithContext(context_) { setConfigSettings({"model", "name", {}, {}}); enablePeriodicUpdates(true); @@ -30,7 +29,7 @@ std::shared_ptr ExternalModelsLoader::create( { return std::make_unique( name, config.getString(config_prefix + ".path"), - context.getConfigRef().getString("catboost_dynamic_library_path"), + getContext()->getConfigRef().getString("catboost_dynamic_library_path"), lifetime ); } diff --git a/src/Interpreters/ExternalModelsLoader.h b/src/Interpreters/ExternalModelsLoader.h index 3f09512338d..f0a7592f4d3 100644 --- 
a/src/Interpreters/ExternalModelsLoader.h +++ b/src/Interpreters/ExternalModelsLoader.h @@ -1,28 +1,33 @@ #pragma once #include +#include #include #include + #include namespace DB { -class Context; - /// Manages user-defined models. -class ExternalModelsLoader : public ExternalLoader +class ExternalModelsLoader : public ExternalLoader, WithContext { public: using ModelPtr = std::shared_ptr; /// Models will be loaded immediately and then will be updated in separate thread, each 'reload_period' seconds. - ExternalModelsLoader(Context & context_); + explicit ExternalModelsLoader(ContextPtr context_); - ModelPtr getModel(const std::string & name) const + ModelPtr getModel(const std::string & model_name) const { - return std::static_pointer_cast(load(name)); + return std::static_pointer_cast(load(model_name)); + } + + void reloadModel(const std::string & model_name) const + { + loadOrReload(model_name); } protected: @@ -30,9 +35,6 @@ protected: const std::string & config_prefix, const std::string & repository_name) const override; friend class StorageSystemModels; -private: - - Context & context; }; } diff --git a/src/Interpreters/ExtractExpressionInfoVisitor.cpp b/src/Interpreters/ExtractExpressionInfoVisitor.cpp index 64c23cd4fd1..2d46fe08e95 100644 --- a/src/Interpreters/ExtractExpressionInfoVisitor.cpp +++ b/src/Interpreters/ExtractExpressionInfoVisitor.cpp @@ -40,7 +40,7 @@ void ExpressionInfoMatcher::visit(const ASTFunction & ast_function, const ASTPtr } else { - const auto & function = FunctionFactory::instance().tryGet(ast_function.name, data.context); + const auto & function = FunctionFactory::instance().tryGet(ast_function.name, data.getContext()); /// Skip lambda, tuple and other special functions if (function) @@ -82,11 +82,12 @@ bool ExpressionInfoMatcher::needChildVisit(const ASTPtr & node, const ASTPtr &) return !node->as(); } -bool hasNonRewritableFunction(const ASTPtr & node, const Context & context) +bool hasNonRewritableFunction(const ASTPtr & node, ContextPtr context) { for (const auto & select_expression : node->children) { - ExpressionInfoVisitor::Data expression_info{.context = context, .tables = {}}; + TablesWithColumns tables; + ExpressionInfoVisitor::Data expression_info{WithContext{context}, tables}; ExpressionInfoVisitor(expression_info).visit(select_expression); if (expression_info.is_stateful_function diff --git a/src/Interpreters/ExtractExpressionInfoVisitor.h b/src/Interpreters/ExtractExpressionInfoVisitor.h index d05415490e6..c84e243ce2e 100644 --- a/src/Interpreters/ExtractExpressionInfoVisitor.h +++ b/src/Interpreters/ExtractExpressionInfoVisitor.h @@ -1,21 +1,20 @@ #pragma once -#include +#include +#include +#include #include #include -#include -#include +#include namespace DB { -class Context; struct ExpressionInfoMatcher { - struct Data + struct Data : public WithContext { - const Context & context; const TablesWithColumns & tables; bool is_array_join = false; @@ -37,6 +36,6 @@ struct ExpressionInfoMatcher using ExpressionInfoVisitor = ConstInDepthNodeVisitor; -bool hasNonRewritableFunction(const ASTPtr & node, const Context & context); +bool hasNonRewritableFunction(const ASTPtr & node, ContextPtr context); } diff --git a/src/Interpreters/GlobalSubqueriesVisitor.h b/src/Interpreters/GlobalSubqueriesVisitor.h index 80d133ebea6..1aac27396ed 100644 --- a/src/Interpreters/GlobalSubqueriesVisitor.h +++ b/src/Interpreters/GlobalSubqueriesVisitor.h @@ -1,37 +1,36 @@ #pragma once -#include -#include -#include -#include -#include -#include -#include -#include 
-#include #include #include #include #include -#include -#include #include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include #include +#include namespace DB { + namespace ErrorCodes { extern const int WRONG_GLOBAL_SUBQUERY; } - class GlobalSubqueriesMatcher { public: - struct Data + struct Data : WithContext { - const Context & context; size_t subquery_depth; bool is_remote; size_t external_table_id; @@ -39,16 +38,22 @@ public: SubqueriesForSets & subqueries_for_sets; bool & has_global_subqueries; - Data(const Context & context_, size_t subquery_depth_, bool is_remote_, - TemporaryTablesMapping & tables, SubqueriesForSets & subqueries_for_sets_, bool & has_global_subqueries_) - : context(context_), - subquery_depth(subquery_depth_), - is_remote(is_remote_), - external_table_id(1), - external_tables(tables), - subqueries_for_sets(subqueries_for_sets_), - has_global_subqueries(has_global_subqueries_) - {} + Data( + ContextPtr context_, + size_t subquery_depth_, + bool is_remote_, + TemporaryTablesMapping & tables, + SubqueriesForSets & subqueries_for_sets_, + bool & has_global_subqueries_) + : WithContext(context_) + , subquery_depth(subquery_depth_) + , is_remote(is_remote_) + , external_table_id(1) + , external_tables(tables) + , subqueries_for_sets(subqueries_for_sets_) + , has_global_subqueries(has_global_subqueries_) + { + } void addExternalStorage(ASTPtr & ast, bool set_alias = false) { @@ -80,7 +85,7 @@ public: /// If this is already an external table, you do not need to add anything. Just remember its presence. auto temporary_table_name = getIdentifierName(subquery_or_table_name); bool exists_in_local_map = external_tables.end() != external_tables.find(temporary_table_name); - bool exists_in_context = context.tryResolveStorageID(StorageID("", temporary_table_name), Context::ResolveExternal); + bool exists_in_context = getContext()->tryResolveStorageID(StorageID("", temporary_table_name), Context::ResolveExternal); if (exists_in_local_map || exists_in_context) return; } @@ -97,14 +102,17 @@ public: } } - auto interpreter = interpretSubquery(subquery_or_table_name, context, subquery_depth, {}); + auto interpreter = interpretSubquery(subquery_or_table_name, getContext(), subquery_depth, {}); Block sample = interpreter->getSampleBlock(); NamesAndTypesList columns = sample.getNamesAndTypesList(); auto external_storage_holder = std::make_shared( - context, ColumnsDescription{columns}, ConstraintsDescription{}, nullptr, - /*create_for_global_subquery*/ true); + getContext(), + ColumnsDescription{columns}, + ConstraintsDescription{}, + nullptr, + /*create_for_global_subquery*/ true); StoragePtr external_storage = external_storage_holder->getTable(); /** We replace the subquery with the name of the temporary table. @@ -136,10 +144,10 @@ public: external_tables[external_table_name] = external_storage_holder; - if (context.getSettingsRef().use_index_for_in_with_subqueries) + if (getContext()->getSettingsRef().use_index_for_in_with_subqueries) { auto external_table = external_storage_holder->getTable(); - auto table_out = external_table->write({}, external_table->getInMemoryMetadataPtr(), context); + auto table_out = external_table->write({}, external_table->getInMemoryMetadataPtr(), getContext()); auto io = interpreter->execute(); PullingPipelineExecutor executor(io.pipeline); @@ -176,9 +184,7 @@ public: static bool needChildVisit(ASTPtr &, const ASTPtr & child) { /// We do not go into subqueries. 
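A refactor that recurs throughout these Interpreters hunks: a stored `const Context &` member is replaced by inheriting WithContext, which keeps a ContextPtr (a shared_ptr), so the context's lifetime is owned rather than borrowed. A minimal sketch of the idea, with a simplified stand-in for the real WithContext helper:

#include <memory>

struct Context { int max_threads = 8; };
using ContextPtr = std::shared_ptr<const Context>;

struct WithContext                       // simplified stand-in for the real helper
{
    explicit WithContext(ContextPtr context_) : context(std::move(context_)) {}
    ContextPtr getContext() const { return context; }
private:
    ContextPtr context;
};

struct Data : WithContext                // mirrors GlobalSubqueriesMatcher::Data above
{
    explicit Data(ContextPtr context_) : WithContext(context_) {}
    int maxThreads() const { return getContext()->max_threads; }   // no dangling reference
};

int main()
{
    Data data(std::make_shared<const Context>());
    return data.maxThreads() == 8 ? 0 : 1;
}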
- if (child->as()) - return false; - return true; + return !child->as(); } private: diff --git a/src/Interpreters/HashJoin.cpp b/src/Interpreters/HashJoin.cpp index fcd89aed84d..d6163ff773a 100644 --- a/src/Interpreters/HashJoin.cpp +++ b/src/Interpreters/HashJoin.cpp @@ -1504,6 +1504,7 @@ BlockInputStreamPtr HashJoin::createStreamWithNonJoinedRows(const Block & result void HashJoin::reuseJoinedData(const HashJoin & join) { data = join.data; + from_storage_join = true; joinDispatch(kind, strictness, data->maps, [this](auto kind_, auto strictness_, auto & map) { used_flags.reinit(map.getBufferSizeInCells(data->type) + 1); diff --git a/src/Interpreters/HashJoin.h b/src/Interpreters/HashJoin.h index b726de44f3a..c0a699d0a12 100644 --- a/src/Interpreters/HashJoin.h +++ b/src/Interpreters/HashJoin.h @@ -134,6 +134,8 @@ class HashJoin : public IJoin public: HashJoin(std::shared_ptr table_join_, const Block & right_sample_block, bool any_take_last_row_ = false); + const TableJoin & getTableJoin() const override { return *table_join; } + /** Add block of data from right hand of JOIN to the map. * Returns false, if some limit was exceeded and you should not insert more data. */ @@ -157,6 +159,8 @@ public: void joinTotals(Block & block) const override; + bool isFilled() const override { return from_storage_join || data->type == Type::DICT; } + /** For RIGHT and FULL JOINs. * A stream that will contain default values from left table, joined with rows from right table, that was not joined before. * Use only after all calls to joinBlock was done. @@ -342,6 +346,9 @@ private: ASTTableJoin::Kind kind; ASTTableJoin::Strictness strictness; + /// This join was created from StorageJoin and it is already filled. + bool from_storage_join = false; + /// Names of key columns in right-side table (in the order they appear in ON/USING clause). @note It could contain duplicates. const Names & key_names_right; diff --git a/src/Interpreters/IExternalLoaderConfigRepository.h b/src/Interpreters/IExternalLoaderConfigRepository.h index 866aa0b877f..0d0c8acc01a 100644 --- a/src/Interpreters/IExternalLoaderConfigRepository.h +++ b/src/Interpreters/IExternalLoaderConfigRepository.h @@ -23,7 +23,7 @@ class IExternalLoaderConfigRepository { public: /// Returns the name of the repository. - virtual const std::string & getName() const = 0; + virtual std::string getName() const = 0; /// Whether this repository is temporary: /// it's created and destroyed while executing the same query. @@ -42,7 +42,7 @@ public: /// Load configuration from some concrete source to AbstractConfiguration virtual LoadablesConfigurationPtr load(const std::string & path) = 0; - virtual ~IExternalLoaderConfigRepository() {} + virtual ~IExternalLoaderConfigRepository() = default; }; } diff --git a/src/Interpreters/IInterpreter.cpp b/src/Interpreters/IInterpreter.cpp index b219a9cd923..1b0e9738429 100644 --- a/src/Interpreters/IInterpreter.cpp +++ b/src/Interpreters/IInterpreter.cpp @@ -4,7 +4,7 @@ namespace DB { void IInterpreter::extendQueryLogElem( - QueryLogElement & elem, const ASTPtr & ast, const Context & context, const String & query_database, const String & query_table) const + QueryLogElement & elem, const ASTPtr & ast, ContextPtr context, const String & query_database, const String & query_table) const { if (!query_database.empty() && query_table.empty()) { @@ -12,7 +12,8 @@ void IInterpreter::extendQueryLogElem( } else if (!query_table.empty()) { - auto quoted_database = query_database.empty() ? 
backQuoteIfNeed(context.getCurrentDatabase()) : backQuoteIfNeed(query_database); + auto quoted_database = query_database.empty() ? backQuoteIfNeed(context->getCurrentDatabase()) + : backQuoteIfNeed(query_database); elem.query_databases.insert(quoted_database); elem.query_tables.insert(quoted_database + "." + backQuoteIfNeed(query_table)); } diff --git a/src/Interpreters/IInterpreter.h b/src/Interpreters/IInterpreter.h index aaecf735e60..1b4eada3c9f 100644 --- a/src/Interpreters/IInterpreter.h +++ b/src/Interpreters/IInterpreter.h @@ -1,13 +1,13 @@ #pragma once #include +#include #include namespace DB { struct QueryLogElement; -class Context; /** Interpreters interface for different queries. */ @@ -27,11 +27,11 @@ public: void extendQueryLogElem( QueryLogElement & elem, const ASTPtr & ast, - const Context & context, + ContextPtr context, const String & query_database, const String & query_table) const; - virtual void extendQueryLogElemImpl(QueryLogElement &, const ASTPtr &, const Context &) const {} + virtual void extendQueryLogElemImpl(QueryLogElement &, const ASTPtr &, ContextPtr) const {} virtual ~IInterpreter() = default; }; diff --git a/src/Interpreters/IInterpreterUnionOrSelectQuery.cpp b/src/Interpreters/IInterpreterUnionOrSelectQuery.cpp index 833625a6f3e..7233ab332dd 100644 --- a/src/Interpreters/IInterpreterUnionOrSelectQuery.cpp +++ b/src/Interpreters/IInterpreterUnionOrSelectQuery.cpp @@ -4,7 +4,7 @@ namespace DB { -void IInterpreterUnionOrSelectQuery::extendQueryLogElemImpl(QueryLogElement & elem, const ASTPtr &, const Context &) const +void IInterpreterUnionOrSelectQuery::extendQueryLogElemImpl(QueryLogElement & elem, const ASTPtr &, ContextPtr) const { elem.query_kind = "Select"; } diff --git a/src/Interpreters/IInterpreterUnionOrSelectQuery.h b/src/Interpreters/IInterpreterUnionOrSelectQuery.h index 128072fb103..72723d68161 100644 --- a/src/Interpreters/IInterpreterUnionOrSelectQuery.h +++ b/src/Interpreters/IInterpreterUnionOrSelectQuery.h @@ -10,9 +10,9 @@ namespace DB class IInterpreterUnionOrSelectQuery : public IInterpreter { public: - IInterpreterUnionOrSelectQuery(const ASTPtr & query_ptr_, const Context & context_, const SelectQueryOptions & options_) + IInterpreterUnionOrSelectQuery(const ASTPtr & query_ptr_, ContextPtr context_, const SelectQueryOptions & options_) : query_ptr(query_ptr_) - , context(std::make_shared(context_)) + , context(Context::createCopy(context_)) , options(options_) , max_streams(context->getSettingsRef().max_threads) { @@ -28,11 +28,11 @@ public: size_t getMaxStreams() const { return max_streams; } - void extendQueryLogElemImpl(QueryLogElement & elem, const ASTPtr &, const Context &) const override; + void extendQueryLogElemImpl(QueryLogElement & elem, const ASTPtr &, ContextPtr) const override; protected: ASTPtr query_ptr; - std::shared_ptr context; + ContextPtr context; Block result_header; SelectQueryOptions options; size_t max_streams = 1; diff --git a/src/Interpreters/IJoin.h b/src/Interpreters/IJoin.h index ade6eaa0cc9..0f486fbe523 100644 --- a/src/Interpreters/IJoin.h +++ b/src/Interpreters/IJoin.h @@ -14,11 +14,15 @@ class Block; struct ExtraBlock; using ExtraBlockPtr = std::shared_ptr; +class TableJoin; + class IJoin { public: virtual ~IJoin() = default; + virtual const TableJoin & getTableJoin() const = 0; + /// Add block of data from right hand of JOIN. /// @returns false, if some limit was exceeded and you should not insert more data. 
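The isFilled() additions visible around here give joins a way to declare themselves pre-filled (StorageJoin or a dictionary), so the planner can skip the right-hand filling phase. A simplified sketch of the contract, with signatures reduced for brevity:

struct Block {};

struct IJoin
{
    virtual ~IJoin() = default;
    virtual bool addJoinedBlock(const Block & block) = 0;  // reduced signature
    virtual bool isFilled() const { return false; }        // default: must be filled
};

struct StorageBackedJoin : IJoin
{
    bool addJoinedBlock(const Block &) override { return true; }
    bool isFilled() const override { return true; }        // data already lives in the storage
};

template <typename NextBlock>
void fillJoin(IJoin & join, NextBlock && next_block)
{
    if (join.isFilled())
        return;                                            // different plan: no fill step at all
    while (const Block * block = next_block())
        if (!join.addJoinedBlock(*block))
            break;                                         // a size limit was exceeded
}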
     virtual bool addJoinedBlock(const Block & block, bool check_limits = true) = 0;
@@ -37,6 +41,10 @@ public:
     virtual size_t getTotalByteCount() const = 0;
     virtual bool alwaysReturnsEmptySet() const { return false; }

+    /// StorageJoin/Dictionary is already filled. No need to call addJoinedBlock.
+    /// Different query plan is used for such joins.
+    virtual bool isFilled() const { return false; }
+
     virtual BlockInputStreamPtr createStreamWithNonJoinedRows(const Block &, UInt64) const { return {}; }
 };
diff --git a/src/Interpreters/IdentifierSemantic.cpp b/src/Interpreters/IdentifierSemantic.cpp
index a1fc533eb7f..8b8eab219ba 100644
--- a/src/Interpreters/IdentifierSemantic.cpp
+++ b/src/Interpreters/IdentifierSemantic.cpp
@@ -3,6 +3,8 @@
 #include
 #include
+#include
+
 namespace DB
 {
@@ -209,7 +211,7 @@ IdentifierSemantic::ColumnMatch IdentifierSemantic::canReferColumnToTable(const
     return canReferColumnToTable(identifier, table_with_columns.table);
 }

-/// Strip qualificators from left side of column name.
+/// Strip qualifications from left side of column name.
 /// Example: 'database.table.name' -> 'name'.
 void IdentifierSemantic::setColumnShortName(ASTIdentifier & identifier, const DatabaseAndTableWithAlias & db_and_table)
 {
@@ -249,4 +251,86 @@ void IdentifierSemantic::setColumnLongName(ASTIdentifier & identifier, const Dat
     }
 }

+std::optional<size_t> IdentifierSemantic::getIdentMembership(const ASTIdentifier & ident, const std::vector<TableWithColumnNamesAndTypes> & tables)
+{
+    std::optional<size_t> table_pos = IdentifierSemantic::getMembership(ident);
+    if (table_pos)
+        return table_pos;
+    return IdentifierSemantic::chooseTableColumnMatch(ident, tables, true);
+}
+
+std::optional<size_t>
+IdentifierSemantic::getIdentsMembership(ASTPtr ast, const std::vector<TableWithColumnNamesAndTypes> & tables, const Aliases & aliases)
+{
+    auto idents = IdentifiersCollector::collect(ast);
+
+    std::optional<size_t> result;
+    for (const auto * ident : idents)
+    {
+        /// short name clashes with alias, ambiguous
+        if (ident->isShort() && aliases.count(ident->shortName()))
+            return {};
+        const auto pos = getIdentMembership(*ident, tables);
+        if (!pos)
+            return {};
+        /// identifiers from different tables
+        if (result && *pos != *result)
+            return {};
+        result = pos;
+    }
+    return result;
+}
+
+IdentifiersCollector::ASTIdentifiers IdentifiersCollector::collect(const ASTPtr & node)
+{
+    IdentifiersCollector::Data ident_data;
+    ConstInDepthNodeVisitor<IdentifiersCollector, true> ident_visitor(ident_data);
+    ident_visitor.visit(node);
+    return ident_data.idents;
+}
+
+bool IdentifiersCollector::needChildVisit(const ASTPtr &, const ASTPtr &)
+{
+    return true;
+}
+
+void IdentifiersCollector::visit(const ASTPtr & node, IdentifiersCollector::Data & data)
+{
+    if (const auto * ident = node->as<ASTIdentifier>())
+        data.idents.push_back(ident);
+}
+
+
+IdentifierMembershipCollector::IdentifierMembershipCollector(const ASTSelectQuery & select, ContextPtr context)
+{
+    if (ASTPtr with = select.with())
+        QueryAliasesNoSubqueriesVisitor(aliases).visit(with);
+    QueryAliasesNoSubqueriesVisitor(aliases).visit(select.select());
+
+    tables = getDatabaseAndTablesWithColumns(getTableExpressions(select), context);
+}
+
+std::optional<size_t> IdentifierMembershipCollector::getIdentsMembership(ASTPtr ast) const
+{
+    return IdentifierSemantic::getIdentsMembership(ast, tables, aliases);
+}
+
+static void collectConjunctions(const ASTPtr & node, std::vector<ASTPtr> & members)
+{
+    if (const auto * func = node->as<ASTFunction>(); func && func->name == "and")
+    {
+        for (const auto & child : func->arguments->children)
+            collectConjunctions(child, members);
+        return;
+    }
+    members.push_back(node);
+}
+
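The static helper above (and the public wrapper that follows) is a plain pre-order walk: recurse through `and` nodes, collect everything else as a leaf. A self-contained sketch of the same idea over a toy expression type (illustrative only; Expr, ExprPtr and collectConjuncts are made-up names, not ClickHouse API):

    #include <memory>
    #include <string>
    #include <vector>

    struct Expr
    {
        std::string name;                             // "and" marks a conjunction node
        std::vector<std::shared_ptr<Expr>> children;  // its operands
    };
    using ExprPtr = std::shared_ptr<Expr>;

    // Flatten `a AND b AND (c AND d)` into [a, b, c, d]; non-"and" nodes are leaves.
    static void collectConjuncts(const ExprPtr & node, std::vector<ExprPtr> & out)
    {
        if (node->name == "and")
        {
            for (const auto & child : node->children)
                collectConjuncts(child, out);         // recurse into nested ANDs
            return;
        }
        out.push_back(node);                          // leaf: a single conjunct
    }

Flattening the AND chain this way is what lets later rewrite passes reason about each conjunct independently.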
+std::vector<ASTPtr> collectConjunctions(const ASTPtr & node)
+{
+    std::vector<ASTPtr> members;
+    collectConjunctions(node, members);
+    return members;
+}
+
 }
diff --git a/src/Interpreters/IdentifierSemantic.h b/src/Interpreters/IdentifierSemantic.h
index 80b55ba0537..20ef3248f7c 100644
--- a/src/Interpreters/IdentifierSemantic.h
+++ b/src/Interpreters/IdentifierSemantic.h
@@ -2,8 +2,15 @@
 #include
-#include
+#include
 #include
+#include
+#include
+#include
+#include
+
+#include
+#include

 namespace DB
 {
@@ -59,9 +66,48 @@ struct IdentifierSemantic
     static std::optional<size_t> chooseTableColumnMatch(const ASTIdentifier &, const TablesWithColumns & tables, bool allow_ambiguous = false);

+    static std::optional<size_t> getIdentMembership(const ASTIdentifier & ident, const std::vector<TableWithColumnNamesAndTypes> & tables);
+
+    /// Collect common table membership for identifiers in expression
+    /// If membership cannot be established or there are several identifiers from different tables, return empty optional
+    static std::optional<size_t>
+    getIdentsMembership(ASTPtr ast, const std::vector<TableWithColumnNamesAndTypes> & tables, const Aliases & aliases);
+
 private:
     static bool doesIdentifierBelongTo(const ASTIdentifier & identifier, const String & database, const String & table);
     static bool doesIdentifierBelongTo(const ASTIdentifier & identifier, const String & table);
 };
+
+/// Collect all identifiers from AST recursively
+class IdentifiersCollector
+{
+public:
+    using ASTIdentPtr = const ASTIdentifier *;
+    using ASTIdentifiers = std::vector<ASTIdentPtr>;
+    struct Data
+    {
+        ASTIdentifiers idents;
+    };
+
+    static void visit(const ASTPtr & node, Data & data);
+    static bool needChildVisit(const ASTPtr &, const ASTPtr &);
+    static ASTIdentifiers collect(const ASTPtr & node);
+};
+
+/// Collect identifier table membership considering aliases
+class IdentifierMembershipCollector
+{
+public:
+    IdentifierMembershipCollector(const ASTSelectQuery & select, ContextPtr context);
+    std::optional<size_t> getIdentsMembership(ASTPtr ast) const;
+
+private:
+    std::vector<TableWithColumnNamesAndTypes> tables;
+    Aliases aliases;
+};
+
+/// Split expression `expr_1 AND expr_2 AND ...
AND expr_n` into vector `[expr_1, expr_2, ..., expr_n]` +std::vector collectConjunctions(const ASTPtr & node); + } diff --git a/src/Interpreters/InJoinSubqueriesPreprocessor.cpp b/src/Interpreters/InJoinSubqueriesPreprocessor.cpp index 2e922393131..8dc5d95bd89 100644 --- a/src/Interpreters/InJoinSubqueriesPreprocessor.cpp +++ b/src/Interpreters/InJoinSubqueriesPreprocessor.cpp @@ -25,9 +25,9 @@ namespace ErrorCodes namespace { -StoragePtr tryGetTable(const ASTPtr & database_and_table, const Context & context) +StoragePtr tryGetTable(const ASTPtr & database_and_table, ContextPtr context) { - auto table_id = context.tryResolveStorageID(database_and_table); + auto table_id = context->tryResolveStorageID(database_and_table); if (!table_id) return {}; return DatabaseCatalog::instance().tryGetTable(table_id, context); @@ -35,12 +35,21 @@ StoragePtr tryGetTable(const ASTPtr & database_and_table, const Context & contex using CheckShardsAndTables = InJoinSubqueriesPreprocessor::CheckShardsAndTables; -struct NonGlobalTableData +struct NonGlobalTableData : public WithContext { using TypeToVisit = ASTTableExpression; + NonGlobalTableData( + ContextPtr context_, + const CheckShardsAndTables & checker_, + std::vector & renamed_tables_, + ASTFunction * function_, + ASTTableJoin * table_join_) + : WithContext(context_), checker(checker_), renamed_tables(renamed_tables_), function(function_), table_join(table_join_) + { + } + const CheckShardsAndTables & checker; - const Context & context; std::vector & renamed_tables; ASTFunction * function = nullptr; ASTTableJoin * table_join = nullptr; @@ -55,9 +64,9 @@ struct NonGlobalTableData private: void renameIfNeeded(ASTPtr & database_and_table) { - const DistributedProductMode distributed_product_mode = context.getSettingsRef().distributed_product_mode; + const DistributedProductMode distributed_product_mode = getContext()->getSettingsRef().distributed_product_mode; - StoragePtr storage = tryGetTable(database_and_table, context); + StoragePtr storage = tryGetTable(database_and_table, getContext()); if (!storage || !checker.hasAtLeastTwoShards(*storage)) return; @@ -119,11 +128,17 @@ using NonGlobalTableVisitor = InDepthNodeVisitor; class NonGlobalSubqueryMatcher { public: - struct Data + struct Data : public WithContext { + using RenamedTables = std::vector>>; + + Data(ContextPtr context_, const CheckShardsAndTables & checker_, RenamedTables & renamed_tables_) + : WithContext(context_), checker(checker_), renamed_tables(renamed_tables_) + { + } + const CheckShardsAndTables & checker; - const Context & context; - std::vector>> & renamed_tables; + RenamedTables & renamed_tables; }; static void visit(ASTPtr & node, Data & data) @@ -161,7 +176,7 @@ private: } auto & subquery = node.arguments->children.at(1); std::vector renamed; - NonGlobalTableVisitor::Data table_data{data.checker, data.context, renamed, &node, nullptr}; + NonGlobalTableVisitor::Data table_data(data.getContext(), data.checker, renamed, &node, nullptr); NonGlobalTableVisitor(table_data).visit(subquery); if (!renamed.empty()) data.renamed_tables.emplace_back(subquery, std::move(renamed)); @@ -179,7 +194,7 @@ private: if (auto & subquery = node.table_expression->as()->subquery) { std::vector renamed; - NonGlobalTableVisitor::Data table_data{data.checker, data.context, renamed, nullptr, table_join}; + NonGlobalTableVisitor::Data table_data(data.getContext(), data.checker, renamed, nullptr, table_join); NonGlobalTableVisitor(table_data).visit(subquery); if (!renamed.empty()) 
data.renamed_tables.emplace_back(subquery, std::move(renamed)); @@ -202,7 +217,7 @@ void InJoinSubqueriesPreprocessor::visit(ASTPtr & ast) const if (!query || !query->tables()) return; - if (context.getSettingsRef().distributed_product_mode == DistributedProductMode::ALLOW) + if (getContext()->getSettingsRef().distributed_product_mode == DistributedProductMode::ALLOW) return; const auto & tables_in_select_query = query->tables()->as(); @@ -221,12 +236,12 @@ void InJoinSubqueriesPreprocessor::visit(ASTPtr & ast) const /// If not really distributed table, skip it. { - StoragePtr storage = tryGetTable(table_expression->database_and_table_name, context); + StoragePtr storage = tryGetTable(table_expression->database_and_table_name, getContext()); if (!storage || !checker->hasAtLeastTwoShards(*storage)) return; } - NonGlobalSubqueryVisitor::Data visitor_data{*checker, context, renamed_tables}; + NonGlobalSubqueryVisitor::Data visitor_data{getContext(), *checker, renamed_tables}; NonGlobalSubqueryVisitor(visitor_data).visit(ast); } diff --git a/src/Interpreters/InJoinSubqueriesPreprocessor.h b/src/Interpreters/InJoinSubqueriesPreprocessor.h index 4d46fabfd99..92a408e7ae3 100644 --- a/src/Interpreters/InJoinSubqueriesPreprocessor.h +++ b/src/Interpreters/InJoinSubqueriesPreprocessor.h @@ -1,19 +1,18 @@ #pragma once -#include +#include #include #include +#include -#include #include +#include namespace DB { class ASTSelectQuery; -class Context; - /** Scheme of operation: * @@ -32,7 +31,7 @@ class Context; * Do not recursively preprocess subqueries, as it will be done by calling code. */ -class InJoinSubqueriesPreprocessor +class InJoinSubqueriesPreprocessor : WithContext { public: using SubqueryTables = std::vector>>; /// {subquery, renamed_tables} @@ -47,17 +46,17 @@ public: virtual ~CheckShardsAndTables() {} }; - InJoinSubqueriesPreprocessor(const Context & context_, SubqueryTables & renamed_tables_, - CheckShardsAndTables::Ptr _checker = std::make_unique()) - : context(context_) - , renamed_tables(renamed_tables_) - , checker(std::move(_checker)) - {} + InJoinSubqueriesPreprocessor( + ContextPtr context_, + SubqueryTables & renamed_tables_, + CheckShardsAndTables::Ptr _checker = std::make_unique()) + : WithContext(context_), renamed_tables(renamed_tables_), checker(std::move(_checker)) + { + } void visit(ASTPtr & ast) const; private: - const Context & context; SubqueryTables & renamed_tables; CheckShardsAndTables::Ptr checker; }; diff --git a/src/Interpreters/InterpreterAlterQuery.cpp b/src/Interpreters/InterpreterAlterQuery.cpp index 37eaecf9a90..70453405b58 100644 --- a/src/Interpreters/InterpreterAlterQuery.cpp +++ b/src/Interpreters/InterpreterAlterQuery.cpp @@ -1,24 +1,27 @@ #include -#include -#include + +#include +#include +#include +#include #include #include +#include #include +#include #include #include -#include #include -#include -#include +#include #include #include -#include +#include +#include #include + #include + #include -#include -#include -#include namespace DB @@ -32,8 +35,7 @@ namespace ErrorCodes } -InterpreterAlterQuery::InterpreterAlterQuery(const ASTPtr & query_ptr_, const Context & context_) - : query_ptr(query_ptr_), context(context_) +InterpreterAlterQuery::InterpreterAlterQuery(const ASTPtr & query_ptr_, ContextPtr context_) : WithContext(context_), query_ptr(query_ptr_) { } @@ -44,21 +46,23 @@ BlockIO InterpreterAlterQuery::execute() if (!alter.cluster.empty()) - return executeDDLQueryOnCluster(query_ptr, context, getRequiredAccess()); + return 
executeDDLQueryOnCluster(query_ptr, getContext(), getRequiredAccess()); - context.checkAccess(getRequiredAccess()); - auto table_id = context.resolveStorageID(alter, Context::ResolveOrdinary); + getContext()->checkAccess(getRequiredAccess()); + auto table_id = getContext()->resolveStorageID(alter, Context::ResolveOrdinary); + query_ptr->as().database = table_id.database_name; DatabasePtr database = DatabaseCatalog::instance().getDatabase(table_id.database_name); - if (typeid_cast(database.get()) && context.getClientInfo().query_kind != ClientInfo::QueryKind::SECONDARY_QUERY) + if (typeid_cast(database.get()) + && getContext()->getClientInfo().query_kind != ClientInfo::QueryKind::SECONDARY_QUERY) { auto guard = DatabaseCatalog::instance().getDDLGuard(table_id.database_name, table_id.table_name); guard->releaseTableLock(); - return typeid_cast(database.get())->tryEnqueueReplicatedDDL(query_ptr, context); + return typeid_cast(database.get())->tryEnqueueReplicatedDDL(query_ptr, getContext()); } - StoragePtr table = DatabaseCatalog::instance().getTable(table_id, context); - auto alter_lock = table->lockForAlter(context.getCurrentQueryId(), context.getSettingsRef().lock_acquire_timeout); + StoragePtr table = DatabaseCatalog::instance().getTable(table_id, getContext()); + auto alter_lock = table->lockForAlter(getContext()->getCurrentQueryId(), getContext()->getSettingsRef().lock_acquire_timeout); auto metadata_snapshot = table->getInMemoryMetadataPtr(); /// Add default database to table identifiers that we can encounter in e.g. default expressions, @@ -104,15 +108,15 @@ BlockIO InterpreterAlterQuery::execute() if (!mutation_commands.empty()) { - table->checkMutationIsPossible(mutation_commands, context.getSettingsRef()); - MutationsInterpreter(table, metadata_snapshot, mutation_commands, context, false).validate(); - table->mutate(mutation_commands, context); + table->checkMutationIsPossible(mutation_commands, getContext()->getSettingsRef()); + MutationsInterpreter(table, metadata_snapshot, mutation_commands, getContext(), false).validate(); + table->mutate(mutation_commands, getContext()); } if (!partition_commands.empty()) { - table->checkAlterPartitionIsPossible(partition_commands, metadata_snapshot, context.getSettingsRef()); - auto partition_commands_pipe = table->alterPartition(metadata_snapshot, partition_commands, context); + table->checkAlterPartitionIsPossible(partition_commands, metadata_snapshot, getContext()->getSettingsRef()); + auto partition_commands_pipe = table->alterPartition(metadata_snapshot, partition_commands, getContext()); if (!partition_commands_pipe.empty()) res.pipeline.init(std::move(partition_commands_pipe)); } @@ -135,10 +139,10 @@ BlockIO InterpreterAlterQuery::execute() if (!alter_commands.empty()) { StorageInMemoryMetadata metadata = table->getInMemoryMetadata(); - alter_commands.validate(metadata, context); + alter_commands.validate(metadata, getContext()); alter_commands.prepare(metadata); - table->checkAlterIsPossible(alter_commands, context); - table->alter(alter_commands, context, alter_lock); + table->checkAlterIsPossible(alter_commands, getContext()); + table->alter(alter_commands, getContext(), alter_lock); } return res; @@ -176,11 +180,6 @@ AccessRightsElements InterpreterAlterQuery::getRequiredAccessForCommand(const AS required_access.emplace_back(AccessType::ALTER_UPDATE, database, table, column_names_from_update_assignments()); break; } - case ASTAlterCommand::DELETE: - { - required_access.emplace_back(AccessType::ALTER_DELETE, database, table); - 
break; - } case ASTAlterCommand::ADD_COLUMN: { required_access.emplace_back(AccessType::ALTER_ADD_COLUMN, database, table, column_name_from_col_decl()); @@ -243,10 +242,6 @@ AccessRightsElements InterpreterAlterQuery::getRequiredAccessForCommand(const AS break; } case ASTAlterCommand::MODIFY_TTL: - { - required_access.emplace_back(AccessType::ALTER_TTL, database, table); - break; - } case ASTAlterCommand::REMOVE_TTL: { required_access.emplace_back(AccessType::ALTER_TTL, database, table); @@ -267,7 +262,8 @@ AccessRightsElements InterpreterAlterQuery::getRequiredAccessForCommand(const AS required_access.emplace_back(AccessType::INSERT, database, table); break; } - case ASTAlterCommand::DROP_PARTITION: [[fallthrough]]; + case ASTAlterCommand::DELETE: + case ASTAlterCommand::DROP_PARTITION: case ASTAlterCommand::DROP_DETACHED_PARTITION: { required_access.emplace_back(AccessType::ALTER_DELETE, database, table); @@ -299,7 +295,9 @@ AccessRightsElements InterpreterAlterQuery::getRequiredAccessForCommand(const AS break; } case ASTAlterCommand::FREEZE_PARTITION: [[fallthrough]]; - case ASTAlterCommand::FREEZE_ALL: + case ASTAlterCommand::FREEZE_ALL: [[fallthrough]]; + case ASTAlterCommand::UNFREEZE_PARTITION: [[fallthrough]]; + case ASTAlterCommand::UNFREEZE_ALL: { required_access.emplace_back(AccessType::ALTER_FREEZE_PARTITION, database, table); break; @@ -325,7 +323,7 @@ AccessRightsElements InterpreterAlterQuery::getRequiredAccessForCommand(const AS return required_access; } -void InterpreterAlterQuery::extendQueryLogElemImpl(QueryLogElement & elem, const ASTPtr & ast, const Context &) const +void InterpreterAlterQuery::extendQueryLogElemImpl(QueryLogElement & elem, const ASTPtr & ast, ContextPtr) const { const auto & alter = ast->as(); @@ -355,14 +353,14 @@ void InterpreterAlterQuery::extendQueryLogElemImpl(QueryLogElement & elem, const if (!command->from_table.empty()) { - String database = command->from_database.empty() ? context.getCurrentDatabase() : command->from_database; + String database = command->from_database.empty() ? getContext()->getCurrentDatabase() : command->from_database; elem.query_databases.insert(database); elem.query_tables.insert(database + "." + command->from_table); } if (!command->to_table.empty()) { - String database = command->to_database.empty() ? context.getCurrentDatabase() : command->to_database; + String database = command->to_database.empty() ? getContext()->getCurrentDatabase() : command->to_database; elem.query_databases.insert(database); elem.query_tables.insert(database + "." + command->to_table); } diff --git a/src/Interpreters/InterpreterAlterQuery.h b/src/Interpreters/InterpreterAlterQuery.h index 37084844f6c..ae9750b0b62 100644 --- a/src/Interpreters/InterpreterAlterQuery.h +++ b/src/Interpreters/InterpreterAlterQuery.h @@ -6,7 +6,7 @@ namespace DB { -class Context; + class AccessRightsElements; class ASTAlterCommand; @@ -14,23 +14,21 @@ class ASTAlterCommand; /** Allows you add or remove a column in the table. * It also allows you to manipulate the partitions of the MergeTree family tables. 
*/ -class InterpreterAlterQuery : public IInterpreter +class InterpreterAlterQuery : public IInterpreter, WithContext { public: - InterpreterAlterQuery(const ASTPtr & query_ptr_, const Context & context_); + InterpreterAlterQuery(const ASTPtr & query_ptr_, ContextPtr context_); BlockIO execute() override; static AccessRightsElements getRequiredAccessForCommand(const ASTAlterCommand & command, const String & database, const String & table); - void extendQueryLogElemImpl(QueryLogElement & elem, const ASTPtr & ast, const Context & context) const override; + void extendQueryLogElemImpl(QueryLogElement & elem, const ASTPtr & ast, ContextPtr context) const override; private: AccessRightsElements getRequiredAccess() const; ASTPtr query_ptr; - - const Context & context; }; } diff --git a/src/Interpreters/InterpreterCheckQuery.cpp b/src/Interpreters/InterpreterCheckQuery.cpp index b3cd807abe5..e8a4f884dd0 100644 --- a/src/Interpreters/InterpreterCheckQuery.cpp +++ b/src/Interpreters/InterpreterCheckQuery.cpp @@ -29,8 +29,7 @@ NamesAndTypes getBlockStructure() } -InterpreterCheckQuery::InterpreterCheckQuery(const ASTPtr & query_ptr_, const Context & context_) - : query_ptr(query_ptr_), context(context_) +InterpreterCheckQuery::InterpreterCheckQuery(const ASTPtr & query_ptr_, ContextPtr context_) : WithContext(context_), query_ptr(query_ptr_) { } @@ -38,14 +37,14 @@ InterpreterCheckQuery::InterpreterCheckQuery(const ASTPtr & query_ptr_, const Co BlockIO InterpreterCheckQuery::execute() { const auto & check = query_ptr->as(); - auto table_id = context.resolveStorageID(check, Context::ResolveOrdinary); + auto table_id = getContext()->resolveStorageID(check, Context::ResolveOrdinary); - context.checkAccess(AccessType::SHOW_TABLES, table_id); - StoragePtr table = DatabaseCatalog::instance().getTable(table_id, context); - auto check_results = table->checkData(query_ptr, context); + getContext()->checkAccess(AccessType::SHOW_TABLES, table_id); + StoragePtr table = DatabaseCatalog::instance().getTable(table_id, getContext()); + auto check_results = table->checkData(query_ptr, getContext()); Block block; - if (context.getSettingsRef().check_query_single_value_result) + if (getContext()->getSettingsRef().check_query_single_value_result) { bool result = std::all_of(check_results.begin(), check_results.end(), [] (const CheckResult & res) { return res.success; }); auto column = ColumnUInt8::create(); diff --git a/src/Interpreters/InterpreterCheckQuery.h b/src/Interpreters/InterpreterCheckQuery.h index c667ca74c22..557faddfe9c 100644 --- a/src/Interpreters/InterpreterCheckQuery.h +++ b/src/Interpreters/InterpreterCheckQuery.h @@ -7,20 +7,17 @@ namespace DB { -class Context; class Cluster; -class InterpreterCheckQuery : public IInterpreter +class InterpreterCheckQuery : public IInterpreter, WithContext { public: - InterpreterCheckQuery(const ASTPtr & query_ptr_, const Context & context_); + InterpreterCheckQuery(const ASTPtr & query_ptr_, ContextPtr context_); BlockIO execute() override; private: ASTPtr query_ptr; - - const Context & context; }; } diff --git a/src/Interpreters/InterpreterCreateQuery.cpp b/src/Interpreters/InterpreterCreateQuery.cpp index d1af86e7b11..d45d02243fb 100644 --- a/src/Interpreters/InterpreterCreateQuery.cpp +++ b/src/Interpreters/InterpreterCreateQuery.cpp @@ -59,6 +59,7 @@ #include #include #include +#include #include #include @@ -70,6 +71,7 @@ namespace DB namespace ErrorCodes { extern const int TABLE_ALREADY_EXISTS; + extern const int DICTIONARY_ALREADY_EXISTS; extern const int 
EMPTY_LIST_OF_COLUMNS_PASSED; extern const int INCORRECT_QUERY; extern const int UNKNOWN_DATABASE_ENGINE; @@ -78,7 +80,6 @@ namespace ErrorCodes extern const int BAD_ARGUMENTS; extern const int BAD_DATABASE_FOR_TEMPORARY_TABLE; extern const int SUSPICIOUS_TYPE_FOR_LOW_CARDINALITY; - extern const int DICTIONARY_ALREADY_EXISTS; extern const int ILLEGAL_SYNTAX_FOR_DATA_TYPE; extern const int ILLEGAL_COLUMN; extern const int LOGICAL_ERROR; @@ -90,8 +91,8 @@ namespace ErrorCodes namespace fs = std::filesystem; -InterpreterCreateQuery::InterpreterCreateQuery(const ASTPtr & query_ptr_, Context & context_) - : query_ptr(query_ptr_), context(context_) +InterpreterCreateQuery::InterpreterCreateQuery(const ASTPtr & query_ptr_, ContextPtr context_) + : WithContext(context_), query_ptr(query_ptr_) { } @@ -113,7 +114,7 @@ BlockIO InterpreterCreateQuery::createDatabase(ASTCreateQuery & create) /// Will write file with database metadata, if needed. String database_name_escaped = escapeForFileName(database_name); - fs::path metadata_path = fs::canonical(context.getPath()); + fs::path metadata_path = fs::canonical(getContext()->getPath()); fs::path metadata_file_tmp_path = metadata_path / "metadata" / (database_name_escaped + ".sql.tmp"); fs::path metadata_file_path = metadata_path / "metadata" / (database_name_escaped + ".sql"); @@ -122,7 +123,7 @@ BlockIO InterpreterCreateQuery::createDatabase(ASTCreateQuery & create) if (!fs::exists(metadata_file_path)) throw Exception("Database engine must be specified for ATTACH DATABASE query", ErrorCodes::UNKNOWN_DATABASE_ENGINE); /// Short syntax: try read database definition from file - auto ast = DatabaseOnDisk::parseQueryFromMetadata(nullptr, context, metadata_file_path); + auto ast = DatabaseOnDisk::parseQueryFromMetadata(nullptr, getContext(), metadata_file_path); create = ast->as(); if (!create.table.empty() || !create.storage) throw Exception(ErrorCodes::INCORRECT_QUERY, "Metadata file {} contains incorrect CREATE DATABASE query", metadata_file_path.string()); @@ -136,7 +137,7 @@ BlockIO InterpreterCreateQuery::createDatabase(ASTCreateQuery & create) /// When attaching old-style database during server startup, we must always use Ordinary engine if (create.attach) throw Exception("Database engine must be specified for ATTACH DATABASE query", ErrorCodes::UNKNOWN_DATABASE_ENGINE); - bool old_style_database = context.getSettingsRef().default_database_engine.value == DefaultDatabaseEngine::Ordinary; + bool old_style_database = getContext()->getSettingsRef().default_database_engine.value == DefaultDatabaseEngine::Ordinary; auto engine = std::make_shared(); auto storage = std::make_shared(); engine->name = old_style_database ? 
"Ordinary" : "Atomic"; @@ -174,7 +175,7 @@ BlockIO InterpreterCreateQuery::createDatabase(ASTCreateQuery & create) if (create_from_user) { - const auto & default_engine = context.getSettingsRef().default_database_engine.value; + const auto & default_engine = getContext()->getSettingsRef().default_database_engine.value; if (create.uuid == UUIDHelpers::Nil && default_engine == DefaultDatabaseEngine::Atomic) create.uuid = UUIDHelpers::generateV4(); /// Will enable Atomic engine for nested database } @@ -194,7 +195,7 @@ BlockIO InterpreterCreateQuery::createDatabase(ASTCreateQuery & create) } else { - bool is_on_cluster = context.getClientInfo().query_kind == ClientInfo::QueryKind::SECONDARY_QUERY; + bool is_on_cluster = getContext()->getClientInfo().query_kind == ClientInfo::QueryKind::SECONDARY_QUERY; if (create.uuid != UUIDHelpers::Nil && !is_on_cluster) throw Exception("Ordinary database engine does not support UUID", ErrorCodes::INCORRECT_QUERY); @@ -203,19 +204,20 @@ BlockIO InterpreterCreateQuery::createDatabase(ASTCreateQuery & create) metadata_path = metadata_path / "metadata" / database_name_escaped; } - if (create.storage->engine->name == "MaterializeMySQL" && !context.getSettingsRef().allow_experimental_database_materialize_mysql && !internal) + if (create.storage->engine->name == "MaterializeMySQL" && !getContext()->getSettingsRef().allow_experimental_database_materialize_mysql + && !internal) { throw Exception("MaterializeMySQL is an experimental database engine. " "Enable allow_experimental_database_materialize_mysql to use it.", ErrorCodes::UNKNOWN_DATABASE_ENGINE); } - if (create.storage->engine->name == "Replicated" && !context.getSettingsRef().allow_experimental_database_replicated && !internal) + if (create.storage->engine->name == "Replicated" && !getContext()->getSettingsRef().allow_experimental_database_replicated && !internal) { throw Exception("Replicated is an experimental database engine. " "Enable allow_experimental_database_replicated to use it.", ErrorCodes::UNKNOWN_DATABASE_ENGINE); } - DatabasePtr database = DatabaseFactory::get(create, metadata_path / "", context); + DatabasePtr database = DatabaseFactory::get(create, metadata_path / "", getContext()); if (create.uuid != UUIDHelpers::Nil) create.database = TABLE_WITH_UUID_NAME_PLACEHOLDER; @@ -237,7 +239,7 @@ BlockIO InterpreterCreateQuery::createDatabase(ASTCreateQuery & create) writeString(statement, out); out.next(); - if (context.getSettingsRef().fsync_metadata) + if (getContext()->getSettingsRef().fsync_metadata) out.sync(); out.close(); } @@ -260,7 +262,8 @@ BlockIO InterpreterCreateQuery::createDatabase(ASTCreateQuery & create) renamed = true; } - database->loadStoredObjects(context, has_force_restore_data_flag, create.attach && force_attach); + /// We use global context here, because storages lifetime is bigger than query context lifetime + database->loadStoredObjects(getContext()->getGlobalContext(), has_force_restore_data_flag, create.attach && force_attach); } catch (...) { @@ -360,7 +363,7 @@ ASTPtr InterpreterCreateQuery::formatConstraints(const ConstraintsDescription & } ColumnsDescription InterpreterCreateQuery::getColumnsDescription( - const ASTExpressionList & columns_ast, const Context & context, bool sanity_check_compression_codecs) + const ASTExpressionList & columns_ast, ContextPtr context_, bool attach) { /// First, deduce implicit types. 
@@ -369,6 +372,7 @@ ColumnsDescription InterpreterCreateQuery::getColumnsDescription( ASTPtr default_expr_list = std::make_shared(); NamesAndTypesList column_names_and_types; + bool make_columns_nullable = !attach && context_->getSettingsRef().data_type_default_nullable; for (const auto & ast : columns_ast.children) { @@ -387,7 +391,7 @@ ColumnsDescription InterpreterCreateQuery::getColumnsDescription( if (*col_decl.null_modifier) column_type = makeNullable(column_type); } - else if (context.getSettingsRef().data_type_default_nullable) + else if (make_columns_nullable) { column_type = makeNullable(column_type); } @@ -430,8 +434,9 @@ ColumnsDescription InterpreterCreateQuery::getColumnsDescription( Block defaults_sample_block; /// set missing types and wrap default_expression's in a conversion-function if necessary if (!default_expr_list->children.empty()) - defaults_sample_block = validateColumnsDefaultsAndGetSampleBlock(default_expr_list, column_names_and_types, context); + defaults_sample_block = validateColumnsDefaultsAndGetSampleBlock(default_expr_list, column_names_and_types, context_); + bool sanity_check_compression_codecs = !attach && !context_->getSettingsRef().allow_suspicious_codecs; ColumnsDescription res; auto name_type_it = column_names_and_types.begin(); for (auto ast_it = columns_ast.children.begin(); ast_it != columns_ast.children.end(); ++ast_it, ++name_type_it) @@ -475,7 +480,7 @@ ColumnsDescription InterpreterCreateQuery::getColumnsDescription( res.add(std::move(column)); } - if (context.getSettingsRef().flatten_nested) + if (context_->getSettingsRef().flatten_nested) res.flattenNested(); if (res.getAllPhysical().empty()) @@ -507,24 +512,23 @@ InterpreterCreateQuery::TableProperties InterpreterCreateQuery::setProperties(AS if (create.columns_list->columns) { - bool sanity_check_compression_codecs = !create.attach && !context.getSettingsRef().allow_suspicious_codecs; - properties.columns = getColumnsDescription(*create.columns_list->columns, context, sanity_check_compression_codecs); + properties.columns = getColumnsDescription(*create.columns_list->columns, getContext(), create.attach); } if (create.columns_list->indices) for (const auto & index : create.columns_list->indices->children) properties.indices.push_back( - IndexDescription::getIndexFromAST(index->clone(), properties.columns, context)); + IndexDescription::getIndexFromAST(index->clone(), properties.columns, getContext())); properties.constraints = getConstraintsDescription(create.columns_list->constraints); } else if (!create.as_table.empty()) { - String as_database_name = context.resolveDatabase(create.as_database); - StoragePtr as_storage = DatabaseCatalog::instance().getTable({as_database_name, create.as_table}, context); + String as_database_name = getContext()->resolveDatabase(create.as_database); + StoragePtr as_storage = DatabaseCatalog::instance().getTable({as_database_name, create.as_table}, getContext()); /// as_storage->getColumns() and setEngine(...) must be called under structure lock of other_table for CREATE ... AS other_table. 
- as_storage_lock = as_storage->lockForShare(context.getCurrentQueryId(), context.getSettingsRef().lock_acquire_timeout); + as_storage_lock = as_storage->lockForShare(getContext()->getCurrentQueryId(), getContext()->getSettingsRef().lock_acquire_timeout); auto as_storage_metadata = as_storage->getInMemoryMetadataPtr(); properties.columns = as_storage_metadata->getColumns(); @@ -537,16 +541,20 @@ InterpreterCreateQuery::TableProperties InterpreterCreateQuery::setProperties(AS } else if (create.select) { - Block as_select_sample = InterpreterSelectWithUnionQuery::getSampleBlock(create.select->clone(), context); + Block as_select_sample = InterpreterSelectWithUnionQuery::getSampleBlock(create.select->clone(), getContext()); properties.columns = ColumnsDescription(as_select_sample.getNamesAndTypesList()); } else if (create.as_table_function) { /// Table function without columns list. - auto table_function = TableFunctionFactory::instance().get(create.as_table_function, context); - properties.columns = table_function->getActualTableStructure(context); + auto table_function = TableFunctionFactory::instance().get(create.as_table_function, getContext()); + properties.columns = table_function->getActualTableStructure(getContext()); assert(!properties.columns.empty()); } + else if (create.is_dictionary) + { + return {}; + } else throw Exception("Incorrect CREATE query: required list of column descriptions or AS section or SELECT.", ErrorCodes::INCORRECT_QUERY); @@ -585,7 +593,7 @@ void InterpreterCreateQuery::validateTableStructure(const ASTCreateQuery & creat throw Exception("Column " + backQuoteIfNeed(column.name) + " already exists", ErrorCodes::DUPLICATE_COLUMN); } - const auto & settings = context.getSettingsRef(); + const auto & settings = getContext()->getSettingsRef(); /// Check low cardinality types in creating table if it was not allowed in setting if (!create.attach && !settings.allow_suspicious_low_cardinality_types && !create.is_materialized_view) @@ -680,10 +688,10 @@ void InterpreterCreateQuery::setEngine(ASTCreateQuery & create) const { /// NOTE Getting the structure from the table specified in the AS is done not atomically with the creation of the table. - String as_database_name = context.resolveDatabase(create.as_database); + String as_database_name = getContext()->resolveDatabase(create.as_database); String as_table_name = create.as_table; - ASTPtr as_create_ptr = DatabaseCatalog::instance().getDatabase(as_database_name)->getCreateTableQuery(as_table_name, context); + ASTPtr as_create_ptr = DatabaseCatalog::instance().getDatabase(as_database_name)->getCreateTableQuery(as_table_name, getContext()); const auto & as_create = as_create_ptr->as(); const String qualified_name = backQuoteIfNeed(as_database_name) + "." + backQuoteIfNeed(as_table_name); @@ -712,12 +720,26 @@ void InterpreterCreateQuery::setEngine(ASTCreateQuery & create) const } } +static void generateUUIDForTable(ASTCreateQuery & create) +{ + if (create.uuid == UUIDHelpers::Nil) + create.uuid = UUIDHelpers::generateV4(); + + /// If destination table (to_table_id) is not specified for materialized view, + /// then MV will create inner table. We should generate UUID of inner table here, + /// so it will be the same on all hosts if query in ON CLUSTER or database engine is Replicated. 
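    /// Illustration only (hypothetical UUIDs, not part of the patch): after this runs on the
    /// initiator, a query like
    ///     CREATE MATERIALIZED VIEW mv UUID 'aaaa-...' TO INNER UUID 'bbbb-...' ...
    /// is what gets shipped to the other hosts, so every replica creates the view and its
    /// inner table with the same pair of UUIDs.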
+ bool need_uuid_for_inner_table = !create.attach && create.is_materialized_view && !create.to_table_id; + if (need_uuid_for_inner_table && create.to_inner_uuid == UUIDHelpers::Nil) + create.to_inner_uuid = UUIDHelpers::generateV4(); +} + void InterpreterCreateQuery::assertOrSetUUID(ASTCreateQuery & create, const DatabasePtr & database) const { const auto * kind = create.is_dictionary ? "Dictionary" : "Table"; const auto * kind_upper = create.is_dictionary ? "DICTIONARY" : "TABLE"; - if (database->getEngineName() == "Replicated" && context.getClientInfo().query_kind == ClientInfo::QueryKind::SECONDARY_QUERY && !internal) + if (database->getEngineName() == "Replicated" && getContext()->getClientInfo().query_kind == ClientInfo::QueryKind::SECONDARY_QUERY + && !internal) { if (create.uuid == UUIDHelpers::Nil) throw Exception("Table UUID is not specified in DDL log", ErrorCodes::LOGICAL_ERROR); @@ -743,18 +765,19 @@ void InterpreterCreateQuery::assertOrSetUUID(ASTCreateQuery & create, const Data kind_upper, create.table); } - if (create.uuid == UUIDHelpers::Nil) - create.uuid = UUIDHelpers::generateV4(); + generateUUIDForTable(create); } else { - bool is_on_cluster = context.getClientInfo().query_kind == ClientInfo::QueryKind::SECONDARY_QUERY; - if (create.uuid != UUIDHelpers::Nil && !is_on_cluster) + bool is_on_cluster = getContext()->getClientInfo().query_kind == ClientInfo::QueryKind::SECONDARY_QUERY; + bool has_uuid = create.uuid != UUIDHelpers::Nil || create.to_inner_uuid != UUIDHelpers::Nil; + if (has_uuid && !is_on_cluster) throw Exception(ErrorCodes::INCORRECT_QUERY, "{} UUID specified, but engine of database {} is not Atomic", kind, create.database); /// Ignore UUID if it's ON CLUSTER query create.uuid = UUIDHelpers::Nil; + create.to_inner_uuid = UUIDHelpers::Nil; } if (create.replace_table) @@ -767,19 +790,19 @@ void InterpreterCreateQuery::assertOrSetUUID(ASTCreateQuery & create, const Data UUID uuid_of_table_to_replace; if (create.create_or_replace) { - uuid_of_table_to_replace = context.tryResolveStorageID(StorageID(create.database, create.table)).uuid; + uuid_of_table_to_replace = getContext()->tryResolveStorageID(StorageID(create.database, create.table)).uuid; if (uuid_of_table_to_replace == UUIDHelpers::Nil) { /// Convert to usual CREATE create.replace_table = false; - assert(!database->isTableExist(create.table, context)); + assert(!database->isTableExist(create.table, getContext())); } else create.table = "_tmp_replace_" + toString(uuid_of_table_to_replace); } else { - uuid_of_table_to_replace = context.resolveStorageID(StorageID(create.database, create.table)).uuid; + uuid_of_table_to_replace = getContext()->resolveStorageID(StorageID(create.database, create.table)).uuid; if (uuid_of_table_to_replace == UUIDHelpers::Nil) throw Exception(ErrorCodes::UNKNOWN_TABLE, "Table {}.{} doesn't exist", backQuoteIfNeed(create.database), backQuoteIfNeed(create.table)); @@ -796,22 +819,42 @@ BlockIO InterpreterCreateQuery::createTable(ASTCreateQuery & create) throw Exception("Temporary tables cannot be inside a database. You should not specify a database for a temporary table.", ErrorCodes::BAD_DATABASE_FOR_TEMPORARY_TABLE); - String current_database = context.getCurrentDatabase(); + String current_database = getContext()->getCurrentDatabase(); auto database_name = create.database.empty() ? 
current_database : create.database; // If this is a stub ATTACH query, read the query definition from the database if (create.attach && !create.storage && !create.columns_list) { auto database = DatabaseCatalog::instance().getDatabase(database_name); + if (database->getEngineName() == "Replicated") + { + auto guard = DatabaseCatalog::instance().getDDLGuard(database_name, create.table); + if (typeid_cast(database.get()) && getContext()->getClientInfo().query_kind != ClientInfo::QueryKind::SECONDARY_QUERY) + { + create.database = database_name; + guard->releaseTableLock(); + return typeid_cast(database.get())->tryEnqueueReplicatedDDL(query_ptr, getContext()); + } + } + bool if_not_exists = create.if_not_exists; // Table SQL definition is available even if the table is detached (even permanently) - auto query = database->getCreateTableQuery(create.table, context); - create = query->as(); // Copy the saved create query, but use ATTACH instead of CREATE - if (create.is_dictionary) + auto query = database->getCreateTableQuery(create.table, getContext()); + auto create_query = query->as(); + + if (!create.is_dictionary && create_query.is_dictionary) throw Exception(ErrorCodes::INCORRECT_QUERY, - "Cannot ATTACH TABLE {}.{}, it is a Dictionary", - backQuoteIfNeed(database_name), backQuoteIfNeed(create.table)); + "Cannot ATTACH TABLE {}.{}, it is a Dictionary", + backQuoteIfNeed(database_name), backQuoteIfNeed(create.table)); + + if (create.is_dictionary && !create_query.is_dictionary) + throw Exception(ErrorCodes::INCORRECT_QUERY, + "Cannot ATTACH DICTIONARY {}.{}, it is a Table", + backQuoteIfNeed(database_name), backQuoteIfNeed(create.table)); + + create = create_query; // Copy the saved create query, but use ATTACH instead of CREATE + create.attach = true; create.attach_short_syntax = true; create.if_not_exists = if_not_exists; @@ -821,10 +864,10 @@ BlockIO InterpreterCreateQuery::createTable(ASTCreateQuery & create) if (create.attach_from_path) { - fs::path user_files = fs::path(context.getUserFilesPath()).lexically_normal(); - fs::path root_path = fs::path(context.getPath()).lexically_normal(); + fs::path user_files = fs::path(getContext()->getUserFilesPath()).lexically_normal(); + fs::path root_path = fs::path(getContext()->getPath()).lexically_normal(); - if (context.getClientInfo().query_kind == ClientInfo::QueryKind::INITIAL_QUERY) + if (getContext()->getClientInfo().query_kind == ClientInfo::QueryKind::INITIAL_QUERY) { fs::path data_path = fs::path(*create.attach_from_path).lexically_normal(); if (data_path.is_relative()) @@ -844,7 +887,7 @@ BlockIO InterpreterCreateQuery::createTable(ASTCreateQuery & create) "Data directory {} must be inside {} to attach it", String(data_path), String(user_files)); } } - else if (create.attach && !create.attach_short_syntax && context.getClientInfo().query_kind != ClientInfo::QueryKind::SECONDARY_QUERY) + else if (create.attach && !create.attach_short_syntax && getContext()->getClientInfo().query_kind != ClientInfo::QueryKind::SECONDARY_QUERY) { auto * log = &Poco::Logger::get("InterpreterCreateQuery"); LOG_WARNING(log, "ATTACH TABLE query with full table definition is not recommended: " @@ -861,6 +904,8 @@ BlockIO InterpreterCreateQuery::createTable(ASTCreateQuery & create) if (create.select && create.isView()) { + // Expand CTE before filling default database + ApplyWithSubqueryVisitor().visit(*create.select); AddDefaultDatabaseVisitor visitor(current_database); visitor.visit(*create.select); } @@ -876,12 +921,11 @@ BlockIO 
InterpreterCreateQuery::createTable(ASTCreateQuery & create) if (need_add_to_database && database->getEngineName() == "Replicated") { auto guard = DatabaseCatalog::instance().getDDLGuard(create.database, create.table); - database = DatabaseCatalog::instance().getDatabase(create.database); - if (typeid_cast(database.get()) && context.getClientInfo().query_kind != ClientInfo::QueryKind::SECONDARY_QUERY) + if (typeid_cast(database.get()) && getContext()->getClientInfo().query_kind != ClientInfo::QueryKind::SECONDARY_QUERY) { assertOrSetUUID(create, database); guard->releaseTableLock(); - return typeid_cast(database.get())->tryEnqueueReplicatedDDL(query_ptr, context); + return typeid_cast(database.get())->tryEnqueueReplicatedDDL(query_ptr, getContext()); } } @@ -916,8 +960,11 @@ bool InterpreterCreateQuery::doCreateTable(ASTCreateQuery & create, database = DatabaseCatalog::instance().getDatabase(create.database); assertOrSetUUID(create, database); + String storage_name = create.is_dictionary ? "Dictionary" : "Table"; + auto storage_already_exists_error_code = create.is_dictionary ? ErrorCodes::DICTIONARY_ALREADY_EXISTS : ErrorCodes::TABLE_ALREADY_EXISTS; + /// Table can be created before or it can be created concurrently in another thread, while we were waiting in DDLGuard. - if (database->isTableExist(create.table, context)) + if (database->isTableExist(create.table, getContext())) { /// TODO Check structure of table if (create.if_not_exists) @@ -930,26 +977,27 @@ bool InterpreterCreateQuery::doCreateTable(ASTCreateQuery & create, drop_ast->table = create.table; drop_ast->no_ddl_lock = true; - Context drop_context = context; + auto drop_context = Context::createCopy(context); InterpreterDropQuery interpreter(drop_ast, drop_context); interpreter.execute(); } else - throw Exception(ErrorCodes::TABLE_ALREADY_EXISTS, "Table {}.{} already exists.", backQuoteIfNeed(create.database), backQuoteIfNeed(create.table)); + throw Exception(storage_already_exists_error_code, + "{} {}.{} already exists.", storage_name, backQuoteIfNeed(create.database), backQuoteIfNeed(create.table)); } data_path = database->getTableDataPath(create); - if (!create.attach && !data_path.empty() && fs::exists(fs::path{context.getPath()} / data_path)) - throw Exception(ErrorCodes::TABLE_ALREADY_EXISTS, "Directory for table data {} already exists", String(data_path)); + if (!create.attach && !data_path.empty() && fs::exists(fs::path{getContext()->getPath()} / data_path)) + throw Exception(storage_already_exists_error_code, "Directory for {} data {} already exists", Poco::toLower(storage_name), String(data_path)); } else { - if (create.if_not_exists && context.tryResolveStorageID({"", create.table}, Context::ResolveExternal)) + if (create.if_not_exists && getContext()->tryResolveStorageID({"", create.table}, Context::ResolveExternal)) return false; String temporary_table_name = create.table; - auto temporary_table = TemporaryTableHolder(context, properties.columns, properties.constraints, query_ptr); - context.getSessionContext().addExternalTable(temporary_table_name, std::move(temporary_table)); + auto temporary_table = TemporaryTableHolder(getContext(), properties.columns, properties.constraints, query_ptr); + getContext()->getSessionContext()->addExternalTable(temporary_table_name, std::move(temporary_table)); return true; } @@ -965,20 +1013,34 @@ bool InterpreterCreateQuery::doCreateTable(ASTCreateQuery & create, create.attach_from_path = std::nullopt; } + if (create.attach) + { + /// If table was detached it's not possible to 
attach it back while some threads are using + /// old instance of the storage. For example, AsynchronousMetrics may cause ATTACH to fail, + /// so we allow waiting here. If database_atomic_wait_for_drop_and_detach_synchronously is disabled + /// and old storage instance still exists it will throw exception. + bool throw_if_table_in_use = getContext()->getSettingsRef().database_atomic_wait_for_drop_and_detach_synchronously; + if (throw_if_table_in_use) + database->checkDetachedTableNotInUse(create.uuid); + else + database->waitDetachedTableNotInUse(create.uuid); + } + StoragePtr res; /// NOTE: CREATE query may be rewritten by Storage creator or table function if (create.as_table_function) { const auto & factory = TableFunctionFactory::instance(); - res = factory.get(create.as_table_function, context)->execute(create.as_table_function, context, create.table, properties.columns); + auto table_func = factory.get(create.as_table_function, getContext()); + res = table_func->execute(create.as_table_function, getContext(), create.table, properties.columns); res->renameInMemory({create.database, create.table, create.uuid}); } else { res = StorageFactory::instance().get(create, data_path, - context, - context.getGlobalContext(), + getContext(), + getContext()->getGlobalContext(), properties.columns, properties.constraints, false); @@ -989,7 +1051,7 @@ bool InterpreterCreateQuery::doCreateTable(ASTCreateQuery & create, "ATTACH ... FROM ... query is not supported for {} table engine, " "because such tables do not store any data on disk. Use CREATE instead.", res->getName()); - database->createTable(context, create.table, res, query_ptr); + database->createTable(getContext(), create.table, res, query_ptr); /// Move table data to the proper place. Wo do not move data earlier to avoid situations /// when data directory moved, but table has not been created due to some error. @@ -1041,10 +1103,10 @@ BlockIO InterpreterCreateQuery::doCreateOrReplaceTable(ASTCreateQuery & create, }; ast_rename->elements.push_back(std::move(elem)); ast_rename->exchange = true; - InterpreterRenameQuery(ast_rename, context).execute(); + InterpreterRenameQuery(ast_rename, getContext()).execute(); replaced = true; - InterpreterDropQuery(ast_drop, context).execute(); + InterpreterDropQuery(ast_drop, getContext()).execute(); create.table = table_to_replace_name; return fillTableIfNeeded(create); @@ -1052,7 +1114,7 @@ BlockIO InterpreterCreateQuery::doCreateOrReplaceTable(ASTCreateQuery & create, catch (...) { if (created && create.replace_table && !replaced) - InterpreterDropQuery(ast_drop, context).execute(); + InterpreterDropQuery(ast_drop, getContext()).execute(); throw; } } @@ -1067,79 +1129,29 @@ BlockIO InterpreterCreateQuery::fillTableIfNeeded(const ASTCreateQuery & create) insert->table_id = {create.database, create.table, create.uuid}; insert->select = create.select->clone(); - if (create.temporary && !context.getSessionContext().hasQueryContext()) - context.getSessionContext().makeQueryContext(); + if (create.temporary && !getContext()->getSessionContext()->hasQueryContext()) + getContext()->getSessionContext()->makeQueryContext(); return InterpreterInsertQuery(insert, - create.temporary ? context.getSessionContext() : context, - context.getSettingsRef().insert_allow_materialized_columns).execute(); + create.temporary ? 
getContext()->getSessionContext() : getContext(), + getContext()->getSettingsRef().insert_allow_materialized_columns).execute(); } return {}; } -BlockIO InterpreterCreateQuery::createDictionary(ASTCreateQuery & create) -{ - String dictionary_name = create.table; - - create.database = context.resolveDatabase(create.database); - const String & database_name = create.database; - - auto guard = DatabaseCatalog::instance().getDDLGuard(database_name, dictionary_name); - DatabasePtr database = DatabaseCatalog::instance().getDatabase(database_name); - - if (typeid_cast(database.get()) && context.getClientInfo().query_kind != ClientInfo::QueryKind::SECONDARY_QUERY) - { - if (!create.attach) - assertOrSetUUID(create, database); - guard->releaseTableLock(); - return typeid_cast(database.get())->tryEnqueueReplicatedDDL(query_ptr, context); - } - - if (database->isDictionaryExist(dictionary_name)) - { - /// TODO Check structure of dictionary - if (create.if_not_exists) - return {}; - else - throw Exception( - "Dictionary " + database_name + "." + dictionary_name + " already exists.", ErrorCodes::DICTIONARY_ALREADY_EXISTS); - } - - if (create.attach) - { - auto query = DatabaseCatalog::instance().getDatabase(database_name)->getCreateDictionaryQuery(dictionary_name); - create = query->as(); - create.attach = true; - } - - assertOrSetUUID(create, database); - - if (create.attach) - { - auto config = getDictionaryConfigurationFromAST(create, context); - auto modification_time = database->getObjectMetadataModificationTime(dictionary_name); - database->attachDictionary(dictionary_name, DictionaryAttachInfo{query_ptr, config, modification_time}); - } - else - database->createDictionary(context, dictionary_name, query_ptr); - - return {}; -} - -void InterpreterCreateQuery::prepareOnClusterQuery(ASTCreateQuery & create, const Context & context, const String & cluster_name) +void InterpreterCreateQuery::prepareOnClusterQuery(ASTCreateQuery & create, ContextPtr local_context, const String & cluster_name) { if (create.attach) return; /// For CREATE query generate UUID on initiator, so it will be the same on all hosts. /// It will be ignored if database does not support UUIDs. - if (create.uuid == UUIDHelpers::Nil) - create.uuid = UUIDHelpers::generateV4(); + generateUUIDForTable(create); /// For cross-replication cluster we cannot use UUID in replica path. 
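    /// Illustration only (hypothetical engine arguments, not part of the patch): on a
    /// cross-replication cluster a definition such as
    ///     ENGINE = ReplicatedMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}')
    /// cannot work, since each shard would need its own table UUID in the path; the code
    /// below therefore only accepts zookeeper paths whose {uuid} macro does not get expanded.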
- String cluster_name_expanded = context.getMacros()->expand(cluster_name); - ClusterPtr cluster = context.getCluster(cluster_name_expanded); + String cluster_name_expanded = local_context->getMacros()->expand(cluster_name); + ClusterPtr cluster = local_context->getCluster(cluster_name_expanded); if (cluster->maybeCrossReplication()) { @@ -1163,7 +1175,7 @@ void InterpreterCreateQuery::prepareOnClusterQuery(ASTCreateQuery & create, cons Macros::MacroExpansionInfo info; info.table_id.uuid = create.uuid; info.ignore_unknown = true; - context.getMacros()->expand(zk_path, info); + local_context->getMacros()->expand(zk_path, info); if (!info.expanded_uuid) return; } @@ -1181,21 +1193,19 @@ BlockIO InterpreterCreateQuery::execute() auto & create = query_ptr->as(); if (!create.cluster.empty()) { - prepareOnClusterQuery(create, context, create.cluster); - return executeDDLQueryOnCluster(query_ptr, context, getRequiredAccess()); + prepareOnClusterQuery(create, getContext(), create.cluster); + return executeDDLQueryOnCluster(query_ptr, getContext(), getRequiredAccess()); } - context.checkAccess(getRequiredAccess()); + getContext()->checkAccess(getRequiredAccess()); ASTQueryWithOutput::resetOutputASTIfExist(create); /// CREATE|ATTACH DATABASE if (!create.database.empty() && create.table.empty()) return createDatabase(create); - else if (!create.is_dictionary) - return createTable(create); else - return createDictionary(create); + return createTable(create); } @@ -1249,12 +1259,12 @@ AccessRightsElements InterpreterCreateQuery::getRequiredAccess() const return required_access; } -void InterpreterCreateQuery::extendQueryLogElemImpl(QueryLogElement & elem, const ASTPtr &, const Context &) const +void InterpreterCreateQuery::extendQueryLogElemImpl(QueryLogElement & elem, const ASTPtr &, ContextPtr) const { elem.query_kind = "Create"; if (!as_table_saved.empty()) { - String database = backQuoteIfNeed(as_database_saved.empty() ? context.getCurrentDatabase() : as_database_saved); + String database = backQuoteIfNeed(as_database_saved.empty() ? getContext()->getCurrentDatabase() : as_database_saved); elem.query_databases.insert(database); elem.query_tables.insert(database + "." + backQuoteIfNeed(as_table_saved)); } diff --git a/src/Interpreters/InterpreterCreateQuery.h b/src/Interpreters/InterpreterCreateQuery.h index d88357fe412..f665ec85d46 100644 --- a/src/Interpreters/InterpreterCreateQuery.h +++ b/src/Interpreters/InterpreterCreateQuery.h @@ -1,18 +1,17 @@ #pragma once +#include #include #include +#include #include #include -#include #include -#include namespace DB { -class Context; class ASTCreateQuery; class ASTExpressionList; class ASTConstraintDeclaration; @@ -23,10 +22,10 @@ using DatabasePtr = std::shared_ptr; /** Allows to create new table or database, * or create an object for existing table or database. */ -class InterpreterCreateQuery : public IInterpreter +class InterpreterCreateQuery : public IInterpreter, WithContext { public: - InterpreterCreateQuery(const ASTPtr & query_ptr_, Context & context_); + InterpreterCreateQuery(const ASTPtr & query_ptr_, ContextPtr context_); BlockIO execute() override; @@ -54,12 +53,12 @@ public: /// Obtain information about columns, their types, default values and column comments, /// for case when columns in CREATE query is specified explicitly. 
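    /// Illustration only (condensing the .cpp change above, not part of the patch): the single
    /// `attach` flag now drives both checks the caller used to toggle separately:
    ///     bool make_columns_nullable = !attach && settings.data_type_default_nullable;
    ///     bool sanity_check_compression_codecs = !attach && !settings.allow_suspicious_codecs;
    /// so on ATTACH nothing is rewrapped in Nullable and suspicious codecs are not rejected.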
- static ColumnsDescription getColumnsDescription(const ASTExpressionList & columns, const Context & context, bool sanity_check_compression_codecs); + static ColumnsDescription getColumnsDescription(const ASTExpressionList & columns, ContextPtr context, bool attach); static ConstraintsDescription getConstraintsDescription(const ASTExpressionList * constraints); - static void prepareOnClusterQuery(ASTCreateQuery & create, const Context & context, const String & cluster_name); + static void prepareOnClusterQuery(ASTCreateQuery & create, ContextPtr context, const String & cluster_name); - void extendQueryLogElemImpl(QueryLogElement & elem, const ASTPtr & ast, const Context &) const override; + void extendQueryLogElemImpl(QueryLogElement & elem, const ASTPtr & ast, ContextPtr) const override; private: struct TableProperties @@ -71,7 +70,6 @@ private: BlockIO createDatabase(ASTCreateQuery & create); BlockIO createTable(ASTCreateQuery & create); - BlockIO createDictionary(ASTCreateQuery & create); /// Calculate list of columns, constraints, indices, etc... of table. Rewrite query in canonical way. TableProperties setProperties(ASTCreateQuery & create) const; @@ -88,7 +86,6 @@ private: void assertOrSetUUID(ASTCreateQuery & create, const DatabasePtr & database) const; ASTPtr query_ptr; - Context & context; /// Skip safety threshold when loading tables. bool has_force_restore_data_flag = false; diff --git a/src/Interpreters/InterpreterCreateQuotaQuery.cpp b/src/Interpreters/InterpreterCreateQuotaQuery.cpp index ff30a2fff47..223215624b3 100644 --- a/src/Interpreters/InterpreterCreateQuotaQuery.cpp +++ b/src/Interpreters/InterpreterCreateQuotaQuery.cpp @@ -73,18 +73,18 @@ namespace BlockIO InterpreterCreateQuotaQuery::execute() { auto & query = query_ptr->as(); - auto & access_control = context.getAccessControlManager(); - context.checkAccess(query.alter ? AccessType::ALTER_QUOTA : AccessType::CREATE_QUOTA); + auto & access_control = getContext()->getAccessControlManager(); + getContext()->checkAccess(query.alter ? 
AccessType::ALTER_QUOTA : AccessType::CREATE_QUOTA); if (!query.cluster.empty()) { - query.replaceCurrentUserTagWithName(context.getUserName()); - return executeDDLQueryOnCluster(query_ptr, context); + query.replaceCurrentUserTag(getContext()->getUserName()); + return executeDDLQueryOnCluster(query_ptr, getContext()); } std::optional roles_from_query; if (query.roles) - roles_from_query = RolesOrUsersSet{*query.roles, access_control, context.getUserID()}; + roles_from_query = RolesOrUsersSet{*query.roles, access_control, getContext()->getUserID()}; if (query.alter) { diff --git a/src/Interpreters/InterpreterCreateQuotaQuery.h b/src/Interpreters/InterpreterCreateQuotaQuery.h index ca0e7b44713..d8edd24b2d9 100644 --- a/src/Interpreters/InterpreterCreateQuotaQuery.h +++ b/src/Interpreters/InterpreterCreateQuotaQuery.h @@ -2,19 +2,18 @@ #include #include -#include namespace DB { + class ASTCreateQuotaQuery; struct Quota; - -class InterpreterCreateQuotaQuery : public IInterpreter +class InterpreterCreateQuotaQuery : public IInterpreter, WithContext { public: - InterpreterCreateQuotaQuery(const ASTPtr & query_ptr_, Context & context_) : query_ptr(query_ptr_), context(context_) {} + InterpreterCreateQuotaQuery(const ASTPtr & query_ptr_, ContextPtr context_) : WithContext(context_), query_ptr(query_ptr_) {} BlockIO execute() override; @@ -25,6 +24,6 @@ public: private: ASTPtr query_ptr; - Context & context; }; + } diff --git a/src/Interpreters/InterpreterCreateRoleQuery.cpp b/src/Interpreters/InterpreterCreateRoleQuery.cpp index 72ad3234b95..b9debc259be 100644 --- a/src/Interpreters/InterpreterCreateRoleQuery.cpp +++ b/src/Interpreters/InterpreterCreateRoleQuery.cpp @@ -34,14 +34,14 @@ namespace BlockIO InterpreterCreateRoleQuery::execute() { const auto & query = query_ptr->as(); - auto & access_control = context.getAccessControlManager(); + auto & access_control = getContext()->getAccessControlManager(); if (query.alter) - context.checkAccess(AccessType::ALTER_ROLE); + getContext()->checkAccess(AccessType::ALTER_ROLE); else - context.checkAccess(AccessType::CREATE_ROLE); + getContext()->checkAccess(AccessType::CREATE_ROLE); if (!query.cluster.empty()) - return executeDDLQueryOnCluster(query_ptr, context); + return executeDDLQueryOnCluster(query_ptr, getContext()); std::optional settings_from_query; if (query.settings) diff --git a/src/Interpreters/InterpreterCreateRoleQuery.h b/src/Interpreters/InterpreterCreateRoleQuery.h index 768fd6d1d6b..18b3f946837 100644 --- a/src/Interpreters/InterpreterCreateRoleQuery.h +++ b/src/Interpreters/InterpreterCreateRoleQuery.h @@ -6,14 +6,14 @@ namespace DB { + class ASTCreateRoleQuery; struct Role; - -class InterpreterCreateRoleQuery : public IInterpreter +class InterpreterCreateRoleQuery : public IInterpreter, WithContext { public: - InterpreterCreateRoleQuery(const ASTPtr & query_ptr_, Context & context_) : query_ptr(query_ptr_), context(context_) {} + InterpreterCreateRoleQuery(const ASTPtr & query_ptr_, ContextPtr context_) : WithContext(context_), query_ptr(query_ptr_) {} BlockIO execute() override; @@ -21,6 +21,6 @@ public: private: ASTPtr query_ptr; - Context & context; }; + } diff --git a/src/Interpreters/InterpreterCreateRowPolicyQuery.cpp b/src/Interpreters/InterpreterCreateRowPolicyQuery.cpp index 8f1c5b061e0..5e4b9b30e66 100644 --- a/src/Interpreters/InterpreterCreateRowPolicyQuery.cpp +++ b/src/Interpreters/InterpreterCreateRowPolicyQuery.cpp @@ -44,21 +44,21 @@ namespace BlockIO InterpreterCreateRowPolicyQuery::execute() { auto & query = 
query_ptr->as(); - auto & access_control = context.getAccessControlManager(); - context.checkAccess(query.alter ? AccessType::ALTER_ROW_POLICY : AccessType::CREATE_ROW_POLICY); + auto & access_control = getContext()->getAccessControlManager(); + getContext()->checkAccess(query.alter ? AccessType::ALTER_ROW_POLICY : AccessType::CREATE_ROW_POLICY); if (!query.cluster.empty()) { - query.replaceCurrentUserTagWithName(context.getUserName()); - return executeDDLQueryOnCluster(query_ptr, context); + query.replaceCurrentUserTag(getContext()->getUserName()); + return executeDDLQueryOnCluster(query_ptr, getContext()); } assert(query.names->cluster.empty()); std::optional roles_from_query; if (query.roles) - roles_from_query = RolesOrUsersSet{*query.roles, access_control, context.getUserID()}; + roles_from_query = RolesOrUsersSet{*query.roles, access_control, getContext()->getUserID()}; - query.replaceEmptyDatabaseWithCurrent(context.getCurrentDatabase()); + query.replaceEmptyDatabase(getContext()->getCurrentDatabase()); if (query.alter) { diff --git a/src/Interpreters/InterpreterCreateRowPolicyQuery.h b/src/Interpreters/InterpreterCreateRowPolicyQuery.h index feb583310c0..10167bac669 100644 --- a/src/Interpreters/InterpreterCreateRowPolicyQuery.h +++ b/src/Interpreters/InterpreterCreateRowPolicyQuery.h @@ -2,19 +2,18 @@ #include #include -#include namespace DB { + class ASTCreateRowPolicyQuery; struct RowPolicy; - -class InterpreterCreateRowPolicyQuery : public IInterpreter +class InterpreterCreateRowPolicyQuery : public IInterpreter, WithContext { public: - InterpreterCreateRowPolicyQuery(const ASTPtr & query_ptr_, Context & context_) : query_ptr(query_ptr_), context(context_) {} + InterpreterCreateRowPolicyQuery(const ASTPtr & query_ptr_, ContextPtr context_) : WithContext(context_), query_ptr(query_ptr_) {} BlockIO execute() override; @@ -22,6 +21,6 @@ public: private: ASTPtr query_ptr; - Context & context; }; + } diff --git a/src/Interpreters/InterpreterCreateSettingsProfileQuery.cpp b/src/Interpreters/InterpreterCreateSettingsProfileQuery.cpp index b65225db16c..fb5fb258b10 100644 --- a/src/Interpreters/InterpreterCreateSettingsProfileQuery.cpp +++ b/src/Interpreters/InterpreterCreateSettingsProfileQuery.cpp @@ -42,16 +42,16 @@ namespace BlockIO InterpreterCreateSettingsProfileQuery::execute() { auto & query = query_ptr->as(); - auto & access_control = context.getAccessControlManager(); + auto & access_control = getContext()->getAccessControlManager(); if (query.alter) - context.checkAccess(AccessType::ALTER_SETTINGS_PROFILE); + getContext()->checkAccess(AccessType::ALTER_SETTINGS_PROFILE); else - context.checkAccess(AccessType::CREATE_SETTINGS_PROFILE); + getContext()->checkAccess(AccessType::CREATE_SETTINGS_PROFILE); if (!query.cluster.empty()) { - query.replaceCurrentUserTagWithName(context.getUserName()); - return executeDDLQueryOnCluster(query_ptr, context); + query.replaceCurrentUserTag(getContext()->getUserName()); + return executeDDLQueryOnCluster(query_ptr, getContext()); } std::optional settings_from_query; @@ -60,7 +60,7 @@ BlockIO InterpreterCreateSettingsProfileQuery::execute() std::optional roles_from_query; if (query.to_roles) - roles_from_query = RolesOrUsersSet{*query.to_roles, access_control, context.getUserID()}; + roles_from_query = RolesOrUsersSet{*query.to_roles, access_control, getContext()->getUserID()}; if (query.alter) { diff --git a/src/Interpreters/InterpreterCreateSettingsProfileQuery.h b/src/Interpreters/InterpreterCreateSettingsProfileQuery.h index 
fd420779cf4..9ef1f0354a9 100644 --- a/src/Interpreters/InterpreterCreateSettingsProfileQuery.h +++ b/src/Interpreters/InterpreterCreateSettingsProfileQuery.h @@ -6,14 +6,14 @@ namespace DB { + class ASTCreateSettingsProfileQuery; struct SettingsProfile; - -class InterpreterCreateSettingsProfileQuery : public IInterpreter +class InterpreterCreateSettingsProfileQuery : public IInterpreter, WithContext { public: - InterpreterCreateSettingsProfileQuery(const ASTPtr & query_ptr_, Context & context_) : query_ptr(query_ptr_), context(context_) {} + InterpreterCreateSettingsProfileQuery(const ASTPtr & query_ptr_, ContextPtr context_) : WithContext(context_), query_ptr(query_ptr_) {} BlockIO execute() override; @@ -21,6 +21,6 @@ public: private: ASTPtr query_ptr; - Context & context; }; + } diff --git a/src/Interpreters/InterpreterCreateUserQuery.cpp b/src/Interpreters/InterpreterCreateUserQuery.cpp index c9b087de5b4..7f4969ff9ef 100644 --- a/src/Interpreters/InterpreterCreateUserQuery.cpp +++ b/src/Interpreters/InterpreterCreateUserQuery.cpp @@ -20,7 +20,8 @@ namespace const ASTCreateUserQuery & query, const std::shared_ptr & override_name, const std::optional & override_default_roles, - const std::optional & override_settings) + const std::optional & override_settings, + const std::optional & override_grantees) { if (override_name) user.setName(override_name->toString()); @@ -62,6 +63,11 @@ namespace user.settings = *override_settings; else if (query.settings) user.settings = *query.settings; + + if (override_grantees) + user.grantees = *override_grantees; + else if (query.grantees) + user.grantees = *query.grantees; } } @@ -69,8 +75,8 @@ namespace BlockIO InterpreterCreateUserQuery::execute() { const auto & query = query_ptr->as(); - auto & access_control = context.getAccessControlManager(); - auto access = context.getAccess(); + auto & access_control = getContext()->getAccessControlManager(); + auto access = getContext()->getAccess(); access->checkAccess(query.alter ? 
AccessType::ALTER_USER : AccessType::CREATE_USER); std::optional default_roles_from_query; @@ -85,7 +91,7 @@ BlockIO InterpreterCreateUserQuery::execute() } if (!query.cluster.empty()) - return executeDDLQueryOnCluster(query_ptr, context); + return executeDDLQueryOnCluster(query_ptr, getContext()); std::optional settings_from_query; if (query.settings) @@ -93,12 +99,17 @@ BlockIO InterpreterCreateUserQuery::execute() if (query.alter) { + std::optional grantees_from_query; + if (query.grantees) + grantees_from_query = RolesOrUsersSet{*query.grantees, access_control}; + auto update_func = [&](const AccessEntityPtr & entity) -> AccessEntityPtr { auto updated_user = typeid_cast>(entity->clone()); - updateUserFromQueryImpl(*updated_user, query, {}, default_roles_from_query, settings_from_query); + updateUserFromQueryImpl(*updated_user, query, {}, default_roles_from_query, settings_from_query, grantees_from_query); return updated_user; }; + Strings names = query.names->toStrings(); if (query.if_exists) { @@ -114,16 +125,28 @@ BlockIO InterpreterCreateUserQuery::execute() for (const auto & name : *query.names) { auto new_user = std::make_shared(); - updateUserFromQueryImpl(*new_user, query, name, default_roles_from_query, settings_from_query); + updateUserFromQueryImpl(*new_user, query, name, default_roles_from_query, settings_from_query, RolesOrUsersSet::AllTag{}); new_users.emplace_back(std::move(new_user)); } + std::vector ids; if (query.if_not_exists) - access_control.tryInsert(new_users); + ids = access_control.tryInsert(new_users); else if (query.or_replace) - access_control.insertOrReplace(new_users); + ids = access_control.insertOrReplace(new_users); else - access_control.insert(new_users); + ids = access_control.insert(new_users); + + if (query.grantees) + { + RolesOrUsersSet grantees_from_query = RolesOrUsersSet{*query.grantees, access_control}; + access_control.update(ids, [&](const AccessEntityPtr & entity) -> AccessEntityPtr + { + auto updated_user = typeid_cast>(entity->clone()); + updated_user->grantees = grantees_from_query; + return updated_user; + }); + } } return {}; @@ -132,7 +155,7 @@ BlockIO InterpreterCreateUserQuery::execute() void InterpreterCreateUserQuery::updateUserFromQuery(User & user, const ASTCreateUserQuery & query) { - updateUserFromQueryImpl(user, query, {}, {}, {}); + updateUserFromQueryImpl(user, query, {}, {}, {}, {}); } } diff --git a/src/Interpreters/InterpreterCreateUserQuery.h b/src/Interpreters/InterpreterCreateUserQuery.h index e5b614b5cbb..e9f4e82e767 100644 --- a/src/Interpreters/InterpreterCreateUserQuery.h +++ b/src/Interpreters/InterpreterCreateUserQuery.h @@ -6,14 +6,14 @@ namespace DB { + class ASTCreateUserQuery; struct User; - -class InterpreterCreateUserQuery : public IInterpreter +class InterpreterCreateUserQuery : public IInterpreter, WithContext { public: - InterpreterCreateUserQuery(const ASTPtr & query_ptr_, Context & context_) : query_ptr(query_ptr_), context(context_) {} + InterpreterCreateUserQuery(const ASTPtr & query_ptr_, ContextPtr context_) : WithContext(context_), query_ptr(query_ptr_) {} BlockIO execute() override; @@ -21,6 +21,6 @@ public: private: ASTPtr query_ptr; - Context & context; }; + } diff --git a/src/Interpreters/InterpreterDescribeQuery.cpp b/src/Interpreters/InterpreterDescribeQuery.cpp index 27d8ac48bc8..705e52da72c 100644 --- a/src/Interpreters/InterpreterDescribeQuery.cpp +++ b/src/Interpreters/InterpreterDescribeQuery.cpp @@ -69,20 +69,20 @@ BlockInputStreamPtr InterpreterDescribeQuery::executeImpl() if 
(table_expression.subquery) { auto names_and_types = InterpreterSelectWithUnionQuery::getSampleBlock( - table_expression.subquery->children.at(0), context).getNamesAndTypesList(); + table_expression.subquery->children.at(0), getContext()).getNamesAndTypesList(); columns = ColumnsDescription(std::move(names_and_types)); } else if (table_expression.table_function) { - TableFunctionPtr table_function_ptr = TableFunctionFactory::instance().get(table_expression.table_function, context); - columns = table_function_ptr->getActualTableStructure(context); + TableFunctionPtr table_function_ptr = TableFunctionFactory::instance().get(table_expression.table_function, getContext()); + columns = table_function_ptr->getActualTableStructure(getContext()); } else { - auto table_id = context.resolveStorageID(table_expression.database_and_table_name); - context.checkAccess(AccessType::SHOW_COLUMNS, table_id); - auto table = DatabaseCatalog::instance().getTable(table_id, context); - auto table_lock = table->lockForShare(context.getInitialQueryId(), context.getSettingsRef().lock_acquire_timeout); + auto table_id = getContext()->resolveStorageID(table_expression.database_and_table_name); + getContext()->checkAccess(AccessType::SHOW_COLUMNS, table_id); + auto table = DatabaseCatalog::instance().getTable(table_id, getContext()); + auto table_lock = table->lockForShare(getContext()->getInitialQueryId(), getContext()->getSettingsRef().lock_acquire_timeout); auto metadata_snapshot = table->getInMemoryMetadataPtr(); columns = metadata_snapshot->getColumns(); } diff --git a/src/Interpreters/InterpreterDescribeQuery.h b/src/Interpreters/InterpreterDescribeQuery.h index 4fafe61f229..627d1ca0353 100644 --- a/src/Interpreters/InterpreterDescribeQuery.h +++ b/src/Interpreters/InterpreterDescribeQuery.h @@ -7,16 +7,12 @@ namespace DB { -class Context; - - /** Return names, types and other information about columns in specified table. 
 */
-class InterpreterDescribeQuery : public IInterpreter
+class InterpreterDescribeQuery : public IInterpreter, WithContext
 {
 public:
-    InterpreterDescribeQuery(const ASTPtr & query_ptr_, const Context & context_)
-        : query_ptr(query_ptr_), context(context_) {}
+    InterpreterDescribeQuery(const ASTPtr & query_ptr_, ContextPtr context_) : WithContext(context_), query_ptr(query_ptr_) {}

     BlockIO execute() override;

@@ -24,7 +20,6 @@ public:

 private:
     ASTPtr query_ptr;
-    const Context & context;

     BlockInputStreamPtr executeImpl();
 };
diff --git a/src/Interpreters/InterpreterDropAccessEntityQuery.cpp b/src/Interpreters/InterpreterDropAccessEntityQuery.cpp
index e86f8361100..a9b8db6d74e 100644
--- a/src/Interpreters/InterpreterDropAccessEntityQuery.cpp
+++ b/src/Interpreters/InterpreterDropAccessEntityQuery.cpp
@@ -25,13 +25,13 @@ using EntityType = IAccessEntity::Type;
 BlockIO InterpreterDropAccessEntityQuery::execute()
 {
     auto & query = query_ptr->as<ASTDropAccessEntityQuery &>();
-    auto & access_control = context.getAccessControlManager();
-    context.checkAccess(getRequiredAccess());
+    auto & access_control = getContext()->getAccessControlManager();
+    getContext()->checkAccess(getRequiredAccess());

     if (!query.cluster.empty())
-        return executeDDLQueryOnCluster(query_ptr, context);
+        return executeDDLQueryOnCluster(query_ptr, getContext());

-    query.replaceEmptyDatabaseWithCurrent(context.getCurrentDatabase());
+    query.replaceEmptyDatabase(getContext()->getCurrentDatabase());

     auto do_drop = [&](const Strings & names)
     {
diff --git a/src/Interpreters/InterpreterDropAccessEntityQuery.h b/src/Interpreters/InterpreterDropAccessEntityQuery.h
index 0db68a0ad78..7f0f6348610 100644
--- a/src/Interpreters/InterpreterDropAccessEntityQuery.h
+++ b/src/Interpreters/InterpreterDropAccessEntityQuery.h
@@ -6,12 +6,13 @@
 namespace DB
 {
+
 class AccessRightsElements;

-class InterpreterDropAccessEntityQuery : public IInterpreter
+class InterpreterDropAccessEntityQuery : public IInterpreter, WithContext
 {
 public:
-    InterpreterDropAccessEntityQuery(const ASTPtr & query_ptr_, Context & context_) : query_ptr(query_ptr_), context(context_) {}
+    InterpreterDropAccessEntityQuery(const ASTPtr & query_ptr_, ContextPtr context_) : WithContext(context_), query_ptr(query_ptr_) {}

     BlockIO execute() override;

@@ -19,6 +20,6 @@ private:
     AccessRightsElements getRequiredAccess() const;

     ASTPtr query_ptr;
-    Context & context;
 };
+
 }
diff --git a/src/Interpreters/InterpreterDropQuery.cpp b/src/Interpreters/InterpreterDropQuery.cpp
index 33e93a79c41..0b4efe4f978 100644
--- a/src/Interpreters/InterpreterDropQuery.cpp
+++ b/src/Interpreters/InterpreterDropQuery.cpp
@@ -31,7 +31,6 @@ namespace ErrorCodes
     extern const int LOGICAL_ERROR;
     extern const int SYNTAX_ERROR;
     extern const int UNKNOWN_TABLE;
-    extern const int UNKNOWN_DICTIONARY;
     extern const int NOT_IMPLEMENTED;
     extern const int INCORRECT_QUERY;
 }
@@ -43,27 +42,22 @@ static DatabasePtr tryGetDatabase(const String & database_name, bool if_exists)
 }

-InterpreterDropQuery::InterpreterDropQuery(const ASTPtr & query_ptr_, Context & context_) : query_ptr(query_ptr_), context(context_) {}
+InterpreterDropQuery::InterpreterDropQuery(const ASTPtr & query_ptr_, ContextPtr context_) : WithContext(context_), query_ptr(query_ptr_)
+{
+}

 BlockIO InterpreterDropQuery::execute()
 {
     auto & drop = query_ptr->as<ASTDropQuery &>();

     if (!drop.cluster.empty())
-        return executeDDLQueryOnCluster(query_ptr, context, getRequiredAccessForDDLOnCluster());
+        return executeDDLQueryOnCluster(query_ptr, getContext(), getRequiredAccessForDDLOnCluster());

-    if
(context.getSettingsRef().database_atomic_wait_for_drop_and_detach_synchronously) + if (getContext()->getSettingsRef().database_atomic_wait_for_drop_and_detach_synchronously) drop.no_delay = true; if (!drop.table.empty()) - { - if (!drop.is_dictionary) - return executeToTable(drop); - else if (drop.permanently && drop.kind == ASTDropQuery::Kind::Detach) - throw Exception("DETACH PERMANENTLY is not implemented for dictionaries", ErrorCodes::NOT_IMPLEMENTED); - else - return executeToDictionary(drop.database, drop.table, drop.kind, drop.if_exists, drop.temporary, drop.no_ddl_lock); - } + return executeToTable(drop); else if (!drop.database.empty()) return executeToDatabase(drop); else @@ -82,7 +76,7 @@ void InterpreterDropQuery::waitForTableToBeActuallyDroppedOrDetached(const ASTDr db->waitDetachedTableNotInUse(uuid_to_wait); } -BlockIO InterpreterDropQuery::executeToTable(const ASTDropQuery & query) +BlockIO InterpreterDropQuery::executeToTable(ASTDropQuery & query) { DatabasePtr database; UUID table_to_wait_on = UUIDHelpers::Nil; @@ -92,16 +86,16 @@ BlockIO InterpreterDropQuery::executeToTable(const ASTDropQuery & query) return res; } -BlockIO InterpreterDropQuery::executeToTableImpl(const ASTDropQuery & query, DatabasePtr & db, UUID & uuid_to_wait) +BlockIO InterpreterDropQuery::executeToTableImpl(ASTDropQuery & query, DatabasePtr & db, UUID & uuid_to_wait) { /// NOTE: it does not contain UUID, we will resolve it with locked DDLGuard auto table_id = StorageID(query); if (query.temporary || table_id.database_name.empty()) { - if (context.tryResolveStorageID(table_id, Context::ResolveExternal)) + if (getContext()->tryResolveStorageID(table_id, Context::ResolveExternal)) return executeToTemporaryTable(table_id.getTableName(), query.kind); else - table_id.database_name = context.getCurrentDatabase(); + query.database = table_id.database_name = getContext()->getCurrentDatabase(); } if (query.temporary) @@ -115,13 +109,22 @@ BlockIO InterpreterDropQuery::executeToTableImpl(const ASTDropQuery & query, Dat auto ddl_guard = (!query.no_ddl_lock ? DatabaseCatalog::instance().getDDLGuard(table_id.database_name, table_id.table_name) : nullptr); /// If table was already dropped by anyone, an exception will be thrown - auto [database, table] = query.if_exists ? DatabaseCatalog::instance().tryGetDatabaseAndTable(table_id, context) - : DatabaseCatalog::instance().getDatabaseAndTable(table_id, context); + auto [database, table] = query.if_exists ? DatabaseCatalog::instance().tryGetDatabaseAndTable(table_id, getContext()) + : DatabaseCatalog::instance().getDatabaseAndTable(table_id, getContext()); if (database && table) { - if (query.as().is_view && !table->isView()) - throw Exception("Table " + table_id.getNameForLogs() + " is not a View", ErrorCodes::LOGICAL_ERROR); + auto & ast_drop_query = query.as(); + + if (ast_drop_query.is_view && !table->isView()) + throw Exception(ErrorCodes::INCORRECT_QUERY, + "Table {} is not a View", + table_id.getNameForLogs()); + + if (ast_drop_query.is_dictionary && !table->isDictionary()) + throw Exception(ErrorCodes::INCORRECT_QUERY, + "Table {} is not a Dictionary", + table_id.getNameForLogs()); /// Now get UUID, so we can wait for table data to be finally dropped table_id.uuid = database->tryGetTableUUID(table_id.table_name); @@ -129,40 +132,55 @@ BlockIO InterpreterDropQuery::executeToTableImpl(const ASTDropQuery & query, Dat /// Prevents recursive drop from drop database query. The original query must specify a table. 
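A note on the refactoring that dominates these hunks: every interpreter drops its `Context & context` member in favour of inheriting from `WithContext`, which owns a `ContextPtr` and exposes it through `getContext()`. A minimal sketch of the pattern; the `shared_ptr` alias and the mixin body are assumptions about the real definitions, kept only as detailed as the diff itself shows:

```cpp
#include <memory>

class Context;                                      // opaque query/session state
using ContextPtr = std::shared_ptr<const Context>;  // assumed alias, mirrors the diff's usage

/// Mixin replacing the old `Context & context;` member in every interpreter.
class WithContext
{
public:
    explicit WithContext(ContextPtr context_) : context(std::move(context_)) {}
    ContextPtr getContext() const { return context; }

private:
    ContextPtr context;  // shared ownership keeps the context alive for the interpreter's lifetime
};

/// Shape of the rewritten interpreters: construct with ContextPtr, read via getContext().
class InterpreterExample : public WithContext
{
public:
    explicit InterpreterExample(ContextPtr context_) : WithContext(std::move(context_)) {}
    // execute() would call getContext()->checkAccess(...), getContext()->getSettingsRef(), ...
};
```

Shared ownership is the point of the change: the interpreter can no longer outlive or dangle against the context it was built with, which a raw reference member could not guarantee.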
bool is_drop_or_detach_database = query_ptr->as()->table.empty(); bool is_replicated_ddl_query = typeid_cast(database.get()) && - context.getClientInfo().query_kind != ClientInfo::QueryKind::SECONDARY_QUERY && + getContext()->getClientInfo().query_kind != ClientInfo::QueryKind::SECONDARY_QUERY && !is_drop_or_detach_database; + + AccessFlags drop_storage; + + if (table->isView()) + drop_storage = AccessType::DROP_VIEW; + else if (table->isDictionary()) + drop_storage = AccessType::DROP_DICTIONARY; + else + drop_storage = AccessType::DROP_TABLE; + if (is_replicated_ddl_query) { - if (query.kind == ASTDropQuery::Kind::Detach && !query.permanently) - throw Exception(ErrorCodes::INCORRECT_QUERY, "DETACH TABLE is not allowed for Replicated databases. " - "Use DETACH TABLE PERMANENTLY or SYSTEM RESTART REPLICA"); - if (query.kind == ASTDropQuery::Kind::Detach) - context.checkAccess(table->isView() ? AccessType::DROP_VIEW : AccessType::DROP_TABLE, table_id); + getContext()->checkAccess(drop_storage, table_id); else if (query.kind == ASTDropQuery::Kind::Truncate) - context.checkAccess(AccessType::TRUNCATE, table_id); + getContext()->checkAccess(AccessType::TRUNCATE, table_id); else if (query.kind == ASTDropQuery::Kind::Drop) - context.checkAccess(table->isView() ? AccessType::DROP_VIEW : AccessType::DROP_TABLE, table_id); + getContext()->checkAccess(drop_storage, table_id); ddl_guard->releaseTableLock(); table.reset(); - return typeid_cast(database.get())->tryEnqueueReplicatedDDL(query.clone(), context); + return typeid_cast(database.get())->tryEnqueueReplicatedDDL(query.clone(), getContext()); } if (query.kind == ASTDropQuery::Kind::Detach) { - context.checkAccess(table->isView() ? AccessType::DROP_VIEW : AccessType::DROP_TABLE, table_id); - table->checkTableCanBeDetached(); + getContext()->checkAccess(drop_storage, table_id); + + if (table->isDictionary()) + { + /// If DROP DICTIONARY query is not used, check if Dictionary can be dropped with DROP TABLE query + if (!query.is_dictionary) + table->checkTableCanBeDetached(); + } + else + table->checkTableCanBeDetached(); + table->shutdown(); TableExclusiveLockHolder table_lock; if (database->getUUID() == UUIDHelpers::Nil) - table_lock = table->lockExclusively(context.getCurrentQueryId(), context.getSettingsRef().lock_acquire_timeout); + table_lock = table->lockExclusively(getContext()->getCurrentQueryId(), getContext()->getSettingsRef().lock_acquire_timeout); if (query.permanently) { /// Drop table from memory, don't touch data, metadata file renamed and will be skipped during server restart - database->detachTablePermanently(context, table_id.table_name); + database->detachTablePermanently(getContext(), table_id.table_name); } else { @@ -172,26 +190,38 @@ BlockIO InterpreterDropQuery::executeToTableImpl(const ASTDropQuery & query, Dat } else if (query.kind == ASTDropQuery::Kind::Truncate) { - context.checkAccess(AccessType::TRUNCATE, table_id); + if (table->isDictionary()) + throw Exception("Cannot TRUNCATE dictionary", ErrorCodes::SYNTAX_ERROR); + + getContext()->checkAccess(AccessType::TRUNCATE, table_id); + table->checkTableCanBeDropped(); - auto table_lock = table->lockExclusively(context.getCurrentQueryId(), context.getSettingsRef().lock_acquire_timeout); + auto table_lock = table->lockExclusively(getContext()->getCurrentQueryId(), getContext()->getSettingsRef().lock_acquire_timeout); auto metadata_snapshot = table->getInMemoryMetadataPtr(); /// Drop table data, don't touch metadata - table->truncate(query_ptr, metadata_snapshot, context, 
table_lock); + table->truncate(query_ptr, metadata_snapshot, getContext(), table_lock); } else if (query.kind == ASTDropQuery::Kind::Drop) { - context.checkAccess(table->isView() ? AccessType::DROP_VIEW : AccessType::DROP_TABLE, table_id); - table->checkTableCanBeDropped(); + getContext()->checkAccess(drop_storage, table_id); + + if (table->isDictionary()) + { + /// If DROP DICTIONARY query is not used, check if Dictionary can be dropped with DROP TABLE query + if (!query.is_dictionary) + table->checkTableCanBeDropped(); + } + else + table->checkTableCanBeDropped(); table->shutdown(); TableExclusiveLockHolder table_lock; if (database->getUUID() == UUIDHelpers::Nil) - table_lock = table->lockExclusively(context.getCurrentQueryId(), context.getSettingsRef().lock_acquire_timeout); + table_lock = table->lockExclusively(getContext()->getCurrentQueryId(), getContext()->getSettingsRef().lock_acquire_timeout); - database->dropTable(context, table_id.table_name, query.no_delay); + database->dropTable(getContext(), table_id.table_name, query.no_delay); } db = database; @@ -201,90 +231,30 @@ BlockIO InterpreterDropQuery::executeToTableImpl(const ASTDropQuery & query, Dat return {}; } - -BlockIO InterpreterDropQuery::executeToDictionary( - const String & database_name_, - const String & dictionary_name, - ASTDropQuery::Kind kind, - bool if_exists, - bool is_temporary, - bool no_ddl_lock) -{ - if (is_temporary) - throw Exception("Temporary dictionaries are not possible.", ErrorCodes::SYNTAX_ERROR); - - String database_name = context.resolveDatabase(database_name_); - - auto ddl_guard = (!no_ddl_lock ? DatabaseCatalog::instance().getDDLGuard(database_name, dictionary_name) : nullptr); - - DatabasePtr database = tryGetDatabase(database_name, if_exists); - - bool is_drop_or_detach_database = query_ptr->as()->table.empty(); - bool is_replicated_ddl_query = typeid_cast(database.get()) && - context.getClientInfo().query_kind != ClientInfo::QueryKind::SECONDARY_QUERY && - !is_drop_or_detach_database; - if (is_replicated_ddl_query) - { - if (kind == ASTDropQuery::Kind::Detach) - throw Exception(ErrorCodes::INCORRECT_QUERY, "DETACH DICTIONARY is not allowed for Replicated databases."); - - context.checkAccess(AccessType::DROP_DICTIONARY, database_name, dictionary_name); - - ddl_guard->releaseTableLock(); - return typeid_cast(database.get())->tryEnqueueReplicatedDDL(query_ptr, context); - } - - if (!database || !database->isDictionaryExist(dictionary_name)) - { - if (!if_exists) - throw Exception( - "Dictionary " + backQuoteIfNeed(database_name) + "." 
+ backQuoteIfNeed(dictionary_name) + " doesn't exist.", - ErrorCodes::UNKNOWN_DICTIONARY); - else - return {}; - } - - if (kind == ASTDropQuery::Kind::Detach) - { - /// Drop dictionary from memory, don't touch data and metadata - context.checkAccess(AccessType::DROP_DICTIONARY, database_name, dictionary_name); - database->detachDictionary(dictionary_name); - } - else if (kind == ASTDropQuery::Kind::Truncate) - { - throw Exception("Cannot TRUNCATE dictionary", ErrorCodes::SYNTAX_ERROR); - } - else if (kind == ASTDropQuery::Kind::Drop) - { - context.checkAccess(AccessType::DROP_DICTIONARY, database_name, dictionary_name); - database->removeDictionary(context, dictionary_name); - } - return {}; -} - BlockIO InterpreterDropQuery::executeToTemporaryTable(const String & table_name, ASTDropQuery::Kind kind) { if (kind == ASTDropQuery::Kind::Detach) throw Exception("Unable to detach temporary table.", ErrorCodes::SYNTAX_ERROR); else { - auto & context_handle = context.hasSessionContext() ? context.getSessionContext() : context; - auto resolved_id = context_handle.tryResolveStorageID(StorageID("", table_name), Context::ResolveExternal); + auto context_handle = getContext()->hasSessionContext() ? getContext()->getSessionContext() : getContext(); + auto resolved_id = context_handle->tryResolveStorageID(StorageID("", table_name), Context::ResolveExternal); if (resolved_id) { - StoragePtr table = DatabaseCatalog::instance().getTable(resolved_id, context); + StoragePtr table = DatabaseCatalog::instance().getTable(resolved_id, getContext()); if (kind == ASTDropQuery::Kind::Truncate) { - auto table_lock = table->lockExclusively(context.getCurrentQueryId(), context.getSettingsRef().lock_acquire_timeout); + auto table_lock + = table->lockExclusively(getContext()->getCurrentQueryId(), getContext()->getSettingsRef().lock_acquire_timeout); /// Drop table data, don't touch metadata auto metadata_snapshot = table->getInMemoryMetadataPtr(); - table->truncate(query_ptr, metadata_snapshot, context, table_lock); + table->truncate(query_ptr, metadata_snapshot, getContext(), table_lock); } else if (kind == ASTDropQuery::Kind::Drop) { - context_handle.removeExternalTable(table_name); + context_handle->removeExternalTable(table_name); table->shutdown(); - auto table_lock = table->lockExclusively(context.getCurrentQueryId(), context.getSettingsRef().lock_acquire_timeout); + auto table_lock = table->lockExclusively(getContext()->getCurrentQueryId(), getContext()->getSettingsRef().lock_acquire_timeout); /// Delete table data table->drop(); table->is_dropped = true; @@ -338,7 +308,7 @@ BlockIO InterpreterDropQuery::executeToDatabaseImpl(const ASTDropQuery & query, else if (query.kind == ASTDropQuery::Kind::Detach || query.kind == ASTDropQuery::Kind::Drop) { bool drop = query.kind == ASTDropQuery::Kind::Drop; - context.checkAccess(AccessType::DROP_DATABASE, database_name); + getContext()->checkAccess(AccessType::DROP_DATABASE, database_name); if (query.kind == ASTDropQuery::Kind::Detach && query.permanently) throw Exception("DETACH PERMANENTLY is not implemented for databases", ErrorCodes::NOT_IMPLEMENTED); @@ -352,26 +322,18 @@ BlockIO InterpreterDropQuery::executeToDatabaseImpl(const ASTDropQuery & query, if (database->shouldBeEmptyOnDetach()) { - /// DETACH or DROP all tables and dictionaries inside database. - /// First we should DETACH or DROP dictionaries because StorageDictionary - /// must be detached only by detaching corresponding dictionary. 
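The dedicated `executeToDictionary` path is deleted in this hunk; dictionaries now flow through the ordinary table path, and only the required access right still depends on the storage kind. A self-contained sketch of that dispatch, with `IStorage` and `AccessType` reduced to toy stand-ins:

```cpp
#include <iostream>

// Toy stand-ins for the real ClickHouse types; only the dispatch matters here.
enum class AccessType { DROP_TABLE, DROP_VIEW, DROP_DICTIONARY };

struct IStorage
{
    virtual bool isView() const { return false; }
    virtual bool isDictionary() const { return false; }
    virtual ~IStorage() = default;
};

struct StorageDictionary : IStorage
{
    bool isDictionary() const override { return true; }
};

// The new InterpreterDropQuery picks one access right per storage kind
// up front instead of branching into a dictionary-only code path.
AccessType dropAccessFor(const IStorage & table)
{
    if (table.isView())
        return AccessType::DROP_VIEW;
    if (table.isDictionary())
        return AccessType::DROP_DICTIONARY;
    return AccessType::DROP_TABLE;
}

int main()
{
    StorageDictionary dict;
    std::cout << (dropAccessFor(dict) == AccessType::DROP_DICTIONARY) << '\n';  // prints 1
}
```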
- for (auto iterator = database->getDictionariesIterator(); iterator->isValid(); iterator->next()) - { - String current_dictionary = iterator->name(); - executeToDictionary(database_name, current_dictionary, query.kind, false, false, false); - } - ASTDropQuery query_for_table; query_for_table.kind = query.kind; query_for_table.if_exists = true; query_for_table.database = database_name; query_for_table.no_delay = query.no_delay; - for (auto iterator = database->getTablesIterator(context); iterator->isValid(); iterator->next()) + for (auto iterator = database->getTablesIterator(getContext()); iterator->isValid(); iterator->next()) { DatabasePtr db; UUID table_to_wait = UUIDHelpers::Nil; query_for_table.table = iterator->name(); + query_for_table.is_dictionary = iterator->table()->isDictionary(); executeToTableImpl(query_for_table, db, table_to_wait); uuids_to_wait.push_back(table_to_wait); } @@ -425,7 +387,7 @@ AccessRightsElements InterpreterDropQuery::getRequiredAccessForDDLOnCluster() co return required_access; } -void InterpreterDropQuery::extendQueryLogElemImpl(QueryLogElement & elem, const ASTPtr &, const Context &) const +void InterpreterDropQuery::extendQueryLogElemImpl(QueryLogElement & elem, const ASTPtr &, ContextPtr) const { elem.query_kind = "Drop"; } diff --git a/src/Interpreters/InterpreterDropQuery.h b/src/Interpreters/InterpreterDropQuery.h index d51ce3293ec..8e8d577deec 100644 --- a/src/Interpreters/InterpreterDropQuery.h +++ b/src/Interpreters/InterpreterDropQuery.h @@ -16,26 +16,25 @@ class AccessRightsElements; * or remove information about table (just forget) from server (DETACH), * or just clear all data in table (TRUNCATE). */ -class InterpreterDropQuery : public IInterpreter +class InterpreterDropQuery : public IInterpreter, WithContext { public: - InterpreterDropQuery(const ASTPtr & query_ptr_, Context & context_); + InterpreterDropQuery(const ASTPtr & query_ptr_, ContextPtr context_); /// Drop table or database. 
BlockIO execute() override; - void extendQueryLogElemImpl(QueryLogElement & elem, const ASTPtr &, const Context &) const override; + void extendQueryLogElemImpl(QueryLogElement & elem, const ASTPtr &, ContextPtr) const override; private: AccessRightsElements getRequiredAccessForDDLOnCluster() const; ASTPtr query_ptr; - Context & context; BlockIO executeToDatabase(const ASTDropQuery & query); BlockIO executeToDatabaseImpl(const ASTDropQuery & query, DatabasePtr & database, std::vector & uuids_to_wait); - BlockIO executeToTable(const ASTDropQuery & query); - BlockIO executeToTableImpl(const ASTDropQuery & query, DatabasePtr & db, UUID & uuid_to_wait); + BlockIO executeToTable(ASTDropQuery & query); + BlockIO executeToTableImpl(ASTDropQuery & query, DatabasePtr & db, UUID & uuid_to_wait); static void waitForTableToBeActuallyDroppedOrDetached(const ASTDropQuery & query, const DatabasePtr & db, const UUID & uuid_to_wait); diff --git a/src/Interpreters/InterpreterExistsQuery.cpp b/src/Interpreters/InterpreterExistsQuery.cpp index 335ee452e39..cbdb8561920 100644 --- a/src/Interpreters/InterpreterExistsQuery.cpp +++ b/src/Interpreters/InterpreterExistsQuery.cpp @@ -44,34 +44,34 @@ BlockInputStreamPtr InterpreterExistsQuery::executeImpl() { if (exists_query->temporary) { - result = context.tryResolveStorageID({"", exists_query->table}, Context::ResolveExternal); + result = getContext()->tryResolveStorageID({"", exists_query->table}, Context::ResolveExternal); } else { - String database = context.resolveDatabase(exists_query->database); - context.checkAccess(AccessType::SHOW_TABLES, database, exists_query->table); - result = DatabaseCatalog::instance().isTableExist({database, exists_query->table}, context); + String database = getContext()->resolveDatabase(exists_query->database); + getContext()->checkAccess(AccessType::SHOW_TABLES, database, exists_query->table); + result = DatabaseCatalog::instance().isTableExist({database, exists_query->table}, getContext()); } } else if ((exists_query = query_ptr->as())) { - String database = context.resolveDatabase(exists_query->database); - context.checkAccess(AccessType::SHOW_TABLES, database, exists_query->table); - auto table = DatabaseCatalog::instance().tryGetTable({database, exists_query->table}, context); + String database = getContext()->resolveDatabase(exists_query->database); + getContext()->checkAccess(AccessType::SHOW_TABLES, database, exists_query->table); + auto table = DatabaseCatalog::instance().tryGetTable({database, exists_query->table}, getContext()); result = table && table->isView(); } else if ((exists_query = query_ptr->as())) { - String database = context.resolveDatabase(exists_query->database); - context.checkAccess(AccessType::SHOW_DATABASES, database); + String database = getContext()->resolveDatabase(exists_query->database); + getContext()->checkAccess(AccessType::SHOW_DATABASES, database); result = DatabaseCatalog::instance().isDatabaseExist(database); } else if ((exists_query = query_ptr->as())) { if (exists_query->temporary) throw Exception("Temporary dictionaries are not possible.", ErrorCodes::SYNTAX_ERROR); - String database = context.resolveDatabase(exists_query->database); - context.checkAccess(AccessType::SHOW_DICTIONARIES, database, exists_query->table); + String database = getContext()->resolveDatabase(exists_query->database); + getContext()->checkAccess(AccessType::SHOW_DICTIONARIES, database, exists_query->table); result = DatabaseCatalog::instance().isDictionaryExist({database, exists_query->table}); } diff --git 
a/src/Interpreters/InterpreterExistsQuery.h b/src/Interpreters/InterpreterExistsQuery.h index 1860e1d0aa9..efc664f07c3 100644 --- a/src/Interpreters/InterpreterExistsQuery.h +++ b/src/Interpreters/InterpreterExistsQuery.h @@ -7,16 +7,12 @@ namespace DB { -class Context; - - /** Check that table exists. Return single row with single column "result" of type UInt8 and value 0 or 1. */ -class InterpreterExistsQuery : public IInterpreter +class InterpreterExistsQuery : public IInterpreter, WithContext { public: - InterpreterExistsQuery(const ASTPtr & query_ptr_, const Context & context_) - : query_ptr(query_ptr_), context(context_) {} + InterpreterExistsQuery(const ASTPtr & query_ptr_, ContextPtr context_) : WithContext(context_), query_ptr(query_ptr_) { } BlockIO execute() override; @@ -24,7 +20,6 @@ public: private: ASTPtr query_ptr; - const Context & context; BlockInputStreamPtr executeImpl(); }; diff --git a/src/Interpreters/InterpreterExplainQuery.cpp b/src/Interpreters/InterpreterExplainQuery.cpp index 5135e40e4dd..b4a91170bc4 100644 --- a/src/Interpreters/InterpreterExplainQuery.cpp +++ b/src/Interpreters/InterpreterExplainQuery.cpp @@ -7,6 +7,7 @@ #include #include #include +#include #include #include #include @@ -14,8 +15,12 @@ #include #include +#include +#include #include +#include + namespace DB { @@ -31,9 +36,9 @@ namespace { struct ExplainAnalyzedSyntaxMatcher { - struct Data + struct Data : public WithContext { - const Context & context; + explicit Data(ContextPtr context_) : WithContext(context_) {} }; static bool needChildVisit(ASTPtr & node, ASTPtr &) @@ -50,7 +55,7 @@ namespace static void visit(ASTSelectQuery & select, ASTPtr & node, Data & data) { InterpreterSelectQuery interpreter( - node, data.context, SelectQueryOptions(QueryProcessingStage::FetchColumns).analyze().modify()); + node, data.getContext(), SelectQueryOptions(QueryProcessingStage::FetchColumns).analyze().modify()); const SelectQueryInfo & query_info = interpreter.getQueryInfo(); if (query_info.view_query) @@ -119,6 +124,7 @@ struct QueryPlanSettings /// Apply query plan optimizations. 
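The `json` flag added to `QueryPlanSettings` just below drives the new single-line output mode: the plan is nested under `{"Plan": ...}` inside a one-element array so the result has the outer shape of PostgreSQL's `EXPLAIN (FORMAT JSON)`. A toy sketch of that outer wrapping; the stub plan string is invented for illustration, the real JSON comes from the plan via JSONBuilder:

```cpp
#include <iostream>
#include <string>

// Stub: the real plan JSON is produced by QueryPlan::explainPlan(options).
std::string explainPlanAsJson()
{
    return R"({"Node Type": "Expression"})";  // invented sample node
}

// The diff nests the plan under {"Plan": ...} inside a one-element array,
// which is the outer shape PostgreSQL uses for EXPLAIN (FORMAT JSON).
std::string wrapLikePostgres(const std::string & plan_json)
{
    return "[{\"Plan\": " + plan_json + "}]";
}

int main()
{
    std::cout << wrapLikePostgres(explainPlanAsJson()) << '\n';
}
```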
bool optimize = true; + bool json = false; constexpr static char name[] = "PLAN"; @@ -127,7 +133,9 @@ struct QueryPlanSettings {"header", query_plan_options.header}, {"description", query_plan_options.description}, {"actions", query_plan_options.actions}, + {"indexes", query_plan_options.indexes}, {"optimize", optimize}, + {"json", json} }; }; @@ -221,6 +229,7 @@ BlockInputStreamPtr InterpreterExplainQuery::executeImpl() MutableColumns res_columns = sample_block.cloneEmptyColumns(); WriteBufferFromOwnString buf; + bool single_line = false; if (ast.getKind() == ASTExplainQuery::ParsedAST) { @@ -234,7 +243,7 @@ BlockInputStreamPtr InterpreterExplainQuery::executeImpl() if (ast.getSettings()) throw Exception("Settings are not supported for EXPLAIN SYNTAX query.", ErrorCodes::UNKNOWN_SETTING); - ExplainAnalyzedSyntaxVisitor::Data data{.context = context}; + ExplainAnalyzedSyntaxVisitor::Data data(getContext()); ExplainAnalyzedSyntaxVisitor(data).visit(query); ast.getExplainedQuery()->format(IAST::FormatSettings(buf, false)); @@ -247,13 +256,32 @@ BlockInputStreamPtr InterpreterExplainQuery::executeImpl() auto settings = checkAndGetSettings(ast.getSettings()); QueryPlan plan; - InterpreterSelectWithUnionQuery interpreter(ast.getExplainedQuery(), context, SelectQueryOptions()); + InterpreterSelectWithUnionQuery interpreter(ast.getExplainedQuery(), getContext(), SelectQueryOptions()); interpreter.buildQueryPlan(plan); if (settings.optimize) - plan.optimize(QueryPlanOptimizationSettings(context.getSettingsRef())); + plan.optimize(QueryPlanOptimizationSettings::fromContext(getContext())); - plan.explainPlan(buf, settings.query_plan_options); + if (settings.json) + { + /// Add extra layers to make plan look more like from postgres. + auto plan_map = std::make_unique(); + plan_map->add("Plan", plan.explainPlan(settings.query_plan_options)); + auto plan_array = std::make_unique(); + plan_array->add(std::move(plan_map)); + + auto format_settings = getFormatSettings(getContext()); + format_settings.json.quote_64bit_integers = false; + + JSONBuilder::FormatSettings json_format_settings{.settings = format_settings}; + JSONBuilder::FormatContext format_context{.out = buf}; + + plan_array->format(json_format_settings, format_context); + + single_line = true; + } + else + plan.explainPlan(buf, settings.query_plan_options); } else if (ast.getKind() == ASTExplainQuery::QueryPipeline) { @@ -263,9 +291,11 @@ BlockInputStreamPtr InterpreterExplainQuery::executeImpl() auto settings = checkAndGetSettings(ast.getSettings()); QueryPlan plan; - InterpreterSelectWithUnionQuery interpreter(ast.getExplainedQuery(), context, SelectQueryOptions()); + InterpreterSelectWithUnionQuery interpreter(ast.getExplainedQuery(), getContext(), SelectQueryOptions()); interpreter.buildQueryPlan(plan); - auto pipeline = plan.buildQueryPipeline(QueryPlanOptimizationSettings(context.getSettingsRef())); + auto pipeline = plan.buildQueryPipeline( + QueryPlanOptimizationSettings::fromContext(getContext()), + BuildQueryPipelineSettings::fromContext(getContext())); if (settings.graph) { @@ -284,7 +314,10 @@ BlockInputStreamPtr InterpreterExplainQuery::executeImpl() } } - fillColumn(*res_columns[0], buf.str()); + if (single_line) + res_columns[0]->insertData(buf.str().data(), buf.str().size()); + else + fillColumn(*res_columns[0], buf.str()); return std::make_shared(sample_block.cloneWithColumns(std::move(res_columns))); } diff --git a/src/Interpreters/InterpreterExplainQuery.h b/src/Interpreters/InterpreterExplainQuery.h index 
fbc8a998f2c..f16b1a8f69d 100644 --- a/src/Interpreters/InterpreterExplainQuery.h +++ b/src/Interpreters/InterpreterExplainQuery.h @@ -7,15 +7,11 @@ namespace DB { -class Context; - /// Returns single row with explain results -class InterpreterExplainQuery : public IInterpreter +class InterpreterExplainQuery : public IInterpreter, WithContext { public: - InterpreterExplainQuery(const ASTPtr & query_, const Context & context_) - : query(query_), context(context_) - {} + InterpreterExplainQuery(const ASTPtr & query_, ContextPtr context_) : WithContext(context_), query(query_) { } BlockIO execute() override; @@ -23,7 +19,6 @@ public: private: ASTPtr query; - const Context & context; BlockInputStreamPtr executeImpl(); }; diff --git a/src/Interpreters/InterpreterExternalDDLQuery.cpp b/src/Interpreters/InterpreterExternalDDLQuery.cpp index 4a93c0fa753..8f9f0cf9ddb 100644 --- a/src/Interpreters/InterpreterExternalDDLQuery.cpp +++ b/src/Interpreters/InterpreterExternalDDLQuery.cpp @@ -26,8 +26,8 @@ namespace ErrorCodes extern const int BAD_ARGUMENTS; } -InterpreterExternalDDLQuery::InterpreterExternalDDLQuery(const ASTPtr & query_, Context & context_) - : query(query_), context(context_) +InterpreterExternalDDLQuery::InterpreterExternalDDLQuery(const ASTPtr & query_, ContextPtr context_) + : WithContext(context_), query(query_) { } @@ -35,7 +35,7 @@ BlockIO InterpreterExternalDDLQuery::execute() { const ASTExternalDDLQuery & external_ddl_query = query->as(); - if (context.getClientInfo().query_kind != ClientInfo::QueryKind::SECONDARY_QUERY) + if (getContext()->getClientInfo().query_kind != ClientInfo::QueryKind::SECONDARY_QUERY) throw Exception("Cannot parse and execute EXTERNAL DDL FROM.", ErrorCodes::SYNTAX_ERROR); if (external_ddl_query.from->name == "MySQL") @@ -48,19 +48,19 @@ BlockIO InterpreterExternalDDLQuery::execute() if (external_ddl_query.external_ddl->as()) return MySQLInterpreter::InterpreterMySQLDropQuery( - external_ddl_query.external_ddl, context, getIdentifierName(arguments[0]), + external_ddl_query.external_ddl, getContext(), getIdentifierName(arguments[0]), getIdentifierName(arguments[1])).execute(); else if (external_ddl_query.external_ddl->as()) return MySQLInterpreter::InterpreterMySQLRenameQuery( - external_ddl_query.external_ddl, context, getIdentifierName(arguments[0]), + external_ddl_query.external_ddl, getContext(), getIdentifierName(arguments[0]), getIdentifierName(arguments[1])).execute(); else if (external_ddl_query.external_ddl->as()) return MySQLInterpreter::InterpreterMySQLAlterQuery( - external_ddl_query.external_ddl, context, getIdentifierName(arguments[0]), + external_ddl_query.external_ddl, getContext(), getIdentifierName(arguments[0]), getIdentifierName(arguments[1])).execute(); else if (external_ddl_query.external_ddl->as()) return MySQLInterpreter::InterpreterMySQLCreateQuery( - external_ddl_query.external_ddl, context, getIdentifierName(arguments[0]), + external_ddl_query.external_ddl, getContext(), getIdentifierName(arguments[0]), getIdentifierName(arguments[1])).execute(); #endif } diff --git a/src/Interpreters/InterpreterExternalDDLQuery.h b/src/Interpreters/InterpreterExternalDDLQuery.h index f6b29c20c70..15a842a2611 100644 --- a/src/Interpreters/InterpreterExternalDDLQuery.h +++ b/src/Interpreters/InterpreterExternalDDLQuery.h @@ -6,15 +6,15 @@ namespace DB { -class InterpreterExternalDDLQuery : public IInterpreter +class InterpreterExternalDDLQuery : public IInterpreter, WithContext { public: - InterpreterExternalDDLQuery(const ASTPtr & query_, 
Context & context_); + InterpreterExternalDDLQuery(const ASTPtr & query_, ContextPtr context_); BlockIO execute() override; + private: const ASTPtr query; - Context & context; }; diff --git a/src/Interpreters/InterpreterFactory.cpp b/src/Interpreters/InterpreterFactory.cpp index 15e4c52f040..4af8b6ffa7d 100644 --- a/src/Interpreters/InterpreterFactory.cpp +++ b/src/Interpreters/InterpreterFactory.cpp @@ -92,7 +92,7 @@ namespace ErrorCodes } -std::unique_ptr InterpreterFactory::get(ASTPtr & query, Context & context, const SelectQueryOptions & options) +std::unique_ptr InterpreterFactory::get(ASTPtr & query, ContextPtr context, const SelectQueryOptions & options) { OpenTelemetrySpanHolder span("InterpreterFactory::get()"); @@ -112,7 +112,7 @@ std::unique_ptr InterpreterFactory::get(ASTPtr & query, Context & else if (query->as()) { ProfileEvents::increment(ProfileEvents::InsertQuery); - bool allow_materialized = static_cast(context.getSettingsRef().insert_allow_materialized_columns); + bool allow_materialized = static_cast(context->getSettingsRef().insert_allow_materialized_columns); return std::make_unique(query, context, allow_materialized); } else if (query->as()) diff --git a/src/Interpreters/InterpreterFactory.h b/src/Interpreters/InterpreterFactory.h index 22a6ab7e8f3..c122fe11b7d 100644 --- a/src/Interpreters/InterpreterFactory.h +++ b/src/Interpreters/InterpreterFactory.h @@ -16,7 +16,7 @@ class InterpreterFactory public: static std::unique_ptr get( ASTPtr & query, - Context & context, + ContextPtr context, const SelectQueryOptions & options = {}); }; diff --git a/src/Interpreters/InterpreterGrantQuery.cpp b/src/Interpreters/InterpreterGrantQuery.cpp index dafe4d2e18c..7487ca79bde 100644 --- a/src/Interpreters/InterpreterGrantQuery.cpp +++ b/src/Interpreters/InterpreterGrantQuery.cpp @@ -1,4 +1,5 @@ #include +#include #include #include #include @@ -11,13 +12,16 @@ #include #include - namespace DB { +namespace ErrorCodes +{ + extern const int ACCESS_DENIED; + extern const int LOGICAL_ERROR; +} + namespace { - using Kind = ASTGrantQuery::Kind; - template void updateFromQueryTemplate( T & grantee, @@ -26,38 +30,28 @@ namespace { if (!query.access_rights_elements.empty()) { - if (query.kind == Kind::GRANT) - { - if (query.grant_option) - grantee.access.grantWithGrantOption(query.access_rights_elements); - else - grantee.access.grant(query.access_rights_elements); - } + if (query.is_revoke) + grantee.access.revoke(query.access_rights_elements); else - { - if (query.grant_option) - grantee.access.revokeGrantOption(query.access_rights_elements); - else - grantee.access.revoke(query.access_rights_elements); - } + grantee.access.grant(query.access_rights_elements); } if (!roles_to_grant_or_revoke.empty()) { - if (query.kind == Kind::GRANT) - { - if (query.admin_option) - grantee.granted_roles.grantWithAdminOption(roles_to_grant_or_revoke); - else - grantee.granted_roles.grant(roles_to_grant_or_revoke); - } - else + if (query.is_revoke) { if (query.admin_option) grantee.granted_roles.revokeAdminOption(roles_to_grant_or_revoke); else grantee.granted_roles.revoke(roles_to_grant_or_revoke); } + else + { + if (query.admin_option) + grantee.granted_roles.grantWithAdminOption(roles_to_grant_or_revoke); + else + grantee.granted_roles.grant(roles_to_grant_or_revoke); + } } } @@ -71,122 +65,210 @@ namespace else if (auto * role = typeid_cast(&grantee)) updateFromQueryTemplate(*role, query, roles_to_grant_or_revoke); } + + void checkGranteeIsAllowed(const ContextAccess & access, const UUID & grantee_id, 
const IAccessEntity & grantee) + { + auto current_user = access.getUser(); + if (current_user && !current_user->grantees.match(grantee_id)) + throw Exception(grantee.outputTypeAndName() + " is not allowed as grantee", ErrorCodes::ACCESS_DENIED); + } + + void checkGranteesAreAllowed(const AccessControlManager & access_control, const ContextAccess & access, const std::vector & grantee_ids) + { + auto current_user = access.getUser(); + if (!current_user || (current_user->grantees == RolesOrUsersSet::AllTag{})) + return; + + for (const auto & id : grantee_ids) + { + auto entity = access_control.tryRead(id); + if (auto role = typeid_cast(entity)) + checkGranteeIsAllowed(access, id, *role); + else if (auto user = typeid_cast(entity)) + checkGranteeIsAllowed(access, id, *user); + } + } + + void checkGrantOption( + const AccessControlManager & access_control, + const ContextAccess & access, + const ASTGrantQuery & query, + const std::vector & grantees_from_query) + { + const auto & elements = query.access_rights_elements; + if (elements.empty()) + return; + + /// To execute the command GRANT the current user needs to have the access granted + /// with GRANT OPTION. + if (!query.is_revoke) + { + access.checkGrantOption(elements); + checkGranteesAreAllowed(access_control, access, grantees_from_query); + return; + } + + if (access.hasGrantOption(elements)) + { + checkGranteesAreAllowed(access_control, access, grantees_from_query); + return; + } + + /// Special case for the command REVOKE: it's possible that the current user doesn't have + /// the access granted with GRANT OPTION but it's still ok because the roles or users + /// from whom the access rights will be revoked don't have the specified access granted either. + /// + /// For example, to execute + /// GRANT ALL ON mydb.* TO role1 + /// REVOKE ALL ON *.* FROM role1 + /// the current user needs to have grants only on the 'mydb' database. 
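The special case spelled out in this comment is set algebra: take the union of the access the grantees actually hold, intersect it with what the REVOKE names, and demand GRANT OPTION only on that intersection. A toy model over string sets, standing in for `AccessRights::makeUnion` and `makeIntersection`:

```cpp
#include <algorithm>
#include <iterator>
#include <set>
#include <string>
#include <vector>

using AccessSet = std::set<std::string>;  // toy stand-in for AccessRights

// Access the current user must hold with GRANT OPTION for the REVOKE to be
// legal: only the part of the request the grantees actually possess.
AccessSet requiredForRevoke(const AccessSet & requested,
                            const std::vector<AccessSet> & grantees_access)
{
    AccessSet all_granted;  // union over all grantees
    for (const auto & granted : grantees_access)
        all_granted.insert(granted.begin(), granted.end());

    AccessSet required;     // intersection of requested and all_granted
    std::set_intersection(requested.begin(), requested.end(),
                          all_granted.begin(), all_granted.end(),
                          std::inserter(required, required.begin()));
    return required;
}
```

Revoking `ALL ON *.*` from a role that only ever had grants on `mydb.*` therefore requires GRANT OPTION on `mydb.*` alone, exactly as the comment above describes.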
+ AccessRights all_granted_access; + for (const auto & id : grantees_from_query) + { + auto entity = access_control.tryRead(id); + if (auto role = typeid_cast(entity)) + { + checkGranteeIsAllowed(access, id, *role); + all_granted_access.makeUnion(role->access); + } + else if (auto user = typeid_cast(entity)) + { + checkGranteeIsAllowed(access, id, *user); + all_granted_access.makeUnion(user->access); + } + } + + AccessRights required_access; + if (elements[0].is_partial_revoke) + { + AccessRightsElements non_revoke_elements = elements; + std::for_each(non_revoke_elements.begin(), non_revoke_elements.end(), [&](AccessRightsElement & element) { element.is_partial_revoke = false; }); + required_access.grant(non_revoke_elements); + } + else + { + required_access.grant(elements); + } + required_access.makeIntersection(all_granted_access); + + for (auto & required_access_element : required_access.getElements()) + { + if (!required_access_element.is_partial_revoke && (required_access_element.grant_option || !elements[0].grant_option)) + access.checkGrantOption(required_access_element); + } + } + + + std::vector getRoleIDsAndCheckAdminOption( + const AccessControlManager & access_control, + const ContextAccess & access, + const ASTGrantQuery & query, + const RolesOrUsersSet & roles_from_query, + const std::vector & grantees_from_query) + { + std::vector matching_ids; + + if (!query.is_revoke) + { + matching_ids = roles_from_query.getMatchingIDs(access_control); + access.checkAdminOption(matching_ids); + checkGranteesAreAllowed(access_control, access, grantees_from_query); + return matching_ids; + } + + if (!roles_from_query.all) + { + matching_ids = roles_from_query.getMatchingIDs(); + if (access.hasAdminOption(matching_ids)) + { + checkGranteesAreAllowed(access_control, access, grantees_from_query); + return matching_ids; + } + } + + /// Special case for the command REVOKE: it's possible that the current user doesn't have the admin option + /// for some of the specified roles but it's still ok because the roles or users from whom the roles will be + /// revoked from don't have the specified roles granted either. + /// + /// For example, to execute + /// GRANT role2 TO role1 + /// REVOKE ALL FROM role1 + /// the current user needs to have only 'role2' to be granted with admin option (not all the roles). + GrantedRoles all_granted_roles; + for (const auto & id : grantees_from_query) + { + auto entity = access_control.tryRead(id); + if (auto role = typeid_cast(entity)) + { + checkGranteeIsAllowed(access, id, *role); + all_granted_roles.makeUnion(role->granted_roles); + } + else if (auto user = typeid_cast(entity)) + { + checkGranteeIsAllowed(access, id, *user); + all_granted_roles.makeUnion(user->granted_roles); + } + } + + const auto & all_granted_roles_set = query.admin_option ? 
all_granted_roles.getGrantedWithAdminOption() : all_granted_roles.getGranted(); + if (roles_from_query.all) + boost::range::set_difference(all_granted_roles_set, roles_from_query.except_ids, std::back_inserter(matching_ids)); + else + boost::range::remove_erase_if(matching_ids, [&](const UUID & id) { return !all_granted_roles_set.count(id); }); + access.checkAdminOption(matching_ids); + return matching_ids; + } } BlockIO InterpreterGrantQuery::execute() { auto & query = query_ptr->as(); - query.replaceCurrentUserTagWithName(context.getUserName()); - if (!query.cluster.empty()) - return executeDDLQueryOnCluster(query_ptr, context, query.access_rights_elements, true); + query.replaceCurrentUserTag(getContext()->getUserName()); + query.access_rights_elements.eraseNonGrantable(); - auto access = context.getAccess(); - auto & access_control = context.getAccessControlManager(); - query.replaceEmptyDatabaseWithCurrent(context.getCurrentDatabase()); + if (!query.access_rights_elements.sameOptions()) + throw Exception("Elements of an ASTGrantQuery are expected to have the same options", ErrorCodes::LOGICAL_ERROR); + if (!query.access_rights_elements.empty() && query.access_rights_elements[0].is_partial_revoke && !query.is_revoke) + throw Exception("A partial revoke should be revoked, not granted", ErrorCodes::LOGICAL_ERROR); - RolesOrUsersSet roles_set; + auto & access_control = getContext()->getAccessControlManager(); + std::optional roles_set; if (query.roles) roles_set = RolesOrUsersSet{*query.roles, access_control}; - std::vector to_roles = RolesOrUsersSet{*query.to_roles, access_control, context.getUserID()}.getMatchingIDs(access_control); + std::vector grantees = RolesOrUsersSet{*query.grantees, access_control, getContext()->getUserID()}.getMatchingIDs(access_control); + + /// Check if the current user has corresponding roles granted with admin option. + std::vector roles; + if (roles_set) + roles = getRoleIDsAndCheckAdminOption(access_control, *getContext()->getAccess(), query, *roles_set, grantees); + + if (!query.cluster.empty()) + { + /// To execute the command GRANT the current user needs to have the access granted with GRANT OPTION. + auto required_access = query.access_rights_elements; + std::for_each(required_access.begin(), required_access.end(), [&](AccessRightsElement & element) { element.grant_option = true; }); + checkGranteesAreAllowed(access_control, *getContext()->getAccess(), grantees); + return executeDDLQueryOnCluster(query_ptr, getContext(), std::move(required_access)); + } + + query.replaceEmptyDatabase(getContext()->getCurrentDatabase()); /// Check if the current user has corresponding access rights with grant option. if (!query.access_rights_elements.empty()) - { - query.access_rights_elements.removeNonGrantableFlags(); + checkGrantOption(access_control, *getContext()->getAccess(), query, grantees); - /// Special case for REVOKE: it's possible that the current user doesn't have the grant option for all - /// the specified access rights and that's ok because the roles or users which the access rights - /// will be revoked from don't have the specified access rights either. - /// - /// For example, to execute - /// GRANT ALL ON mydb.* TO role1 - /// REVOKE ALL ON *.* FROM role1 - /// the current user needs to have access rights only for the 'mydb' database. 
- if ((query.kind == Kind::REVOKE) && !access->hasGrantOption(query.access_rights_elements)) - { - AccessRights max_access; - for (const auto & id : to_roles) - { - auto entity = access_control.tryRead(id); - if (auto role = typeid_cast(entity)) - max_access.makeUnion(role->access); - else if (auto user = typeid_cast(entity)) - max_access.makeUnion(user->access); - } - AccessRights access_to_revoke; - if (query.grant_option) - access_to_revoke.grantWithGrantOption(query.access_rights_elements); - else - access_to_revoke.grant(query.access_rights_elements); - access_to_revoke.makeIntersection(max_access); - AccessRightsElements filtered_access_to_revoke; - for (auto & element : access_to_revoke.getElements()) - { - if ((element.kind == Kind::GRANT) && (element.grant_option || !query.grant_option)) - filtered_access_to_revoke.emplace_back(std::move(element)); - } - query.access_rights_elements = std::move(filtered_access_to_revoke); - } - - access->checkGrantOption(query.access_rights_elements); - } - - /// Check if the current user has corresponding roles granted with admin option. - std::vector roles_to_grant_or_revoke; - if (!roles_set.empty()) - { - bool all = roles_set.all; - if (!all) - roles_to_grant_or_revoke = roles_set.getMatchingIDs(); - - /// Special case for REVOKE: it's possible that the current user doesn't have the admin option for all - /// the specified roles and that's ok because the roles or users which the roles will be revoked from - /// don't have the specified roles granted either. - /// - /// For example, to execute - /// GRANT role2 TO role1 - /// REVOKE ALL FROM role1 - /// the current user needs to have only 'role2' to be granted with admin option (not all the roles). - if ((query.kind == Kind::REVOKE) && (roles_set.all || !access->hasAdminOption(roles_to_grant_or_revoke))) - { - auto & roles_to_revoke = roles_to_grant_or_revoke; - boost::container::flat_set max_roles; - for (const auto & id : to_roles) - { - auto entity = access_control.tryRead(id); - auto add_to_max_roles = [&](const GrantedRoles & granted_roles) - { - if (query.admin_option) - max_roles.insert(granted_roles.roles_with_admin_option.begin(), granted_roles.roles_with_admin_option.end()); - else - max_roles.insert(granted_roles.roles.begin(), granted_roles.roles.end()); - }; - if (auto role = typeid_cast(entity)) - add_to_max_roles(role->granted_roles); - else if (auto user = typeid_cast(entity)) - add_to_max_roles(user->granted_roles); - } - if (roles_set.all) - boost::range::set_difference(max_roles, roles_set.except_ids, std::back_inserter(roles_to_revoke)); - else - boost::range::remove_erase_if(roles_to_revoke, [&](const UUID & id) { return !max_roles.count(id); }); - } - - access->checkAdminOption(roles_to_grant_or_revoke); - } - - /// Update roles and users listed in `to_roles`. + /// Update roles and users listed in `grantees`. 
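This hunk and the CREATE USER one use the same clone-modify-return idiom: the access-control manager hands the callback an immutable entity, and the callback returns a mutated copy. A compressed, self-contained illustration with simplified stand-in types (the real `AccessControlManager::update` works on IDs, not a flat vector):

```cpp
#include <functional>
#include <memory>
#include <vector>

struct IAccessEntity
{
    virtual std::shared_ptr<IAccessEntity> clone() const = 0;
    virtual ~IAccessEntity() = default;
};
using AccessEntityPtr = std::shared_ptr<const IAccessEntity>;

// Simplified stand-in for AccessControlManager::update: each stored entity is
// swapped for whatever the callback returns, so callers never mutate in place.
void updateAll(std::vector<AccessEntityPtr> & storage,
               const std::function<AccessEntityPtr(const AccessEntityPtr &)> & update_func)
{
    for (auto & entity : storage)
        entity = update_func(entity);
}

// Usage mirrors the diff: clone, mutate the copy, return it.
// updateAll(grantees, [&](const AccessEntityPtr & entity) -> AccessEntityPtr
// {
//     auto updated = entity->clone();  // e.g. apply the parsed GRANT/REVOKE here
//     return updated;
// });
```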
auto update_func = [&](const AccessEntityPtr & entity) -> AccessEntityPtr { auto clone = entity->clone(); - updateFromQueryImpl(*clone, query, roles_to_grant_or_revoke); + updateFromQueryImpl(*clone, query, roles); return clone; }; - access_control.update(to_roles, update_func); + access_control.update(grantees, update_func); return {}; } @@ -209,4 +291,13 @@ void InterpreterGrantQuery::updateRoleFromQuery(Role & role, const ASTGrantQuery updateFromQueryImpl(role, query, roles_to_grant_or_revoke); } +void InterpreterGrantQuery::extendQueryLogElemImpl(QueryLogElement & elem, const ASTPtr & /*ast*/, ContextPtr) const +{ + auto & query = query_ptr->as(); + if (query.is_revoke) + elem.query_kind = "Revoke"; + else + elem.query_kind = "Grant"; +} + } diff --git a/src/Interpreters/InterpreterGrantQuery.h b/src/Interpreters/InterpreterGrantQuery.h index 32810faecd2..f5939ff3cb7 100644 --- a/src/Interpreters/InterpreterGrantQuery.h +++ b/src/Interpreters/InterpreterGrantQuery.h @@ -1,29 +1,30 @@ #pragma once +#include #include #include -#include namespace DB { + class ASTGrantQuery; struct User; struct Role; - -class InterpreterGrantQuery : public IInterpreter +class InterpreterGrantQuery : public IInterpreter, WithContext { public: - InterpreterGrantQuery(const ASTPtr & query_ptr_, Context & context_) : query_ptr(query_ptr_), context(context_) {} + InterpreterGrantQuery(const ASTPtr & query_ptr_, ContextPtr context_) : WithContext(context_), query_ptr(query_ptr_) {} BlockIO execute() override; static void updateUserFromQuery(User & user, const ASTGrantQuery & query); static void updateRoleFromQuery(Role & role, const ASTGrantQuery & query); + void extendQueryLogElemImpl(QueryLogElement &, const ASTPtr &, ContextPtr) const override; private: ASTPtr query_ptr; - Context & context; }; + } diff --git a/src/Interpreters/InterpreterInsertQuery.cpp b/src/Interpreters/InterpreterInsertQuery.cpp index b926ca9efad..3d119757fc7 100644 --- a/src/Interpreters/InterpreterInsertQuery.cpp +++ b/src/Interpreters/InterpreterInsertQuery.cpp @@ -9,21 +9,17 @@ #include #include #include -#include #include #include #include #include #include -#include #include #include #include #include #include #include -#include -#include #include #include #include @@ -35,11 +31,8 @@ #include #include #include - -namespace -{ -const UInt64 PARALLEL_DISTRIBUTED_INSERT_SELECT_ALL = 2; -} +#include +#include namespace DB @@ -50,13 +43,12 @@ namespace ErrorCodes extern const int NO_SUCH_COLUMN_IN_TABLE; extern const int ILLEGAL_COLUMN; extern const int DUPLICATE_COLUMN; - extern const int LOGICAL_ERROR; } InterpreterInsertQuery::InterpreterInsertQuery( - const ASTPtr & query_ptr_, const Context & context_, bool allow_materialized_, bool no_squash_, bool no_destination_) - : query_ptr(query_ptr_) - , context(context_) + const ASTPtr & query_ptr_, ContextPtr context_, bool allow_materialized_, bool no_squash_, bool no_destination_) + : WithContext(context_) + , query_ptr(query_ptr_) , allow_materialized(allow_materialized_) , no_squash(no_squash_) , no_destination(no_destination_) @@ -70,12 +62,12 @@ StoragePtr InterpreterInsertQuery::getTable(ASTInsertQuery & query) if (query.table_function) { const auto & factory = TableFunctionFactory::instance(); - TableFunctionPtr table_function_ptr = factory.get(query.table_function, context); - return table_function_ptr->execute(query.table_function, context, table_function_ptr->getName()); + TableFunctionPtr table_function_ptr = factory.get(query.table_function, getContext()); + return 
table_function_ptr->execute(query.table_function, getContext(), table_function_ptr->getName()); } - query.table_id = context.resolveStorageID(query.table_id); - return DatabaseCatalog::instance().getTable(query.table_id, context); + query.table_id = getContext()->resolveStorageID(query.table_id); + return DatabaseCatalog::instance().getTable(query.table_id, getContext()); } Block InterpreterInsertQuery::getSampleBlock( @@ -95,7 +87,7 @@ Block InterpreterInsertQuery::getSampleBlock( Block table_sample = metadata_snapshot->getSampleBlock(); - const auto columns_ast = processColumnTransformers(context.getCurrentDatabase(), table, metadata_snapshot, query.columns); + const auto columns_ast = processColumnTransformers(getContext()->getCurrentDatabase(), table, metadata_snapshot, query.columns); /// Form the block based on the column names from the query Block res; @@ -160,97 +152,28 @@ static bool isTrivialSelect(const ASTPtr & select) BlockIO InterpreterInsertQuery::execute() { - const Settings & settings = context.getSettingsRef(); + const Settings & settings = getContext()->getSettingsRef(); auto & query = query_ptr->as(); BlockIO res; StoragePtr table = getTable(query); - auto table_lock = table->lockForShare(context.getInitialQueryId(), settings.lock_acquire_timeout); + auto table_lock = table->lockForShare(getContext()->getInitialQueryId(), settings.lock_acquire_timeout); auto metadata_snapshot = table->getInMemoryMetadataPtr(); auto query_sample_block = getSampleBlock(query, table, metadata_snapshot); if (!query.table_function) - context.checkAccess(AccessType::INSERT, query.table_id, query_sample_block.getNames()); + getContext()->checkAccess(AccessType::INSERT, query.table_id, query_sample_block.getNames()); bool is_distributed_insert_select = false; if (query.select && table->isRemote() && settings.parallel_distributed_insert_select) { // Distributed INSERT SELECT - std::shared_ptr storage_src; - auto & select = query.select->as(); - auto new_query = std::dynamic_pointer_cast(query.clone()); - if (select.list_of_selects->children.size() == 1) - { - if (auto * select_query = select.list_of_selects->children.at(0)->as()) - { - JoinedTables joined_tables(Context(context), *select_query); - - if (joined_tables.tablesCount() == 1) - { - storage_src = std::dynamic_pointer_cast(joined_tables.getLeftTableStorage()); - if (storage_src) - { - const auto select_with_union_query = std::make_shared(); - select_with_union_query->list_of_selects = std::make_shared(); - - auto new_select_query = std::dynamic_pointer_cast(select_query->clone()); - select_with_union_query->list_of_selects->children.push_back(new_select_query); - - new_select_query->replaceDatabaseAndTable(storage_src->getRemoteDatabaseName(), storage_src->getRemoteTableName()); - - new_query->select = select_with_union_query; - } - } - } - } - - auto storage_dst = std::dynamic_pointer_cast(table); - - if (storage_src && storage_dst && storage_src->getClusterName() == storage_dst->getClusterName()) + if (auto maybe_pipeline = table->distributedWrite(query, getContext())) { + res.pipeline = std::move(*maybe_pipeline); is_distributed_insert_select = true; - - if (settings.parallel_distributed_insert_select == PARALLEL_DISTRIBUTED_INSERT_SELECT_ALL) - { - new_query->table_id = StorageID(storage_dst->getRemoteDatabaseName(), storage_dst->getRemoteTableName()); - } - - const auto & cluster = storage_src->getCluster(); - const auto & shards_info = cluster->getShardsInfo(); - - std::vector> pipelines; - - String new_query_str = 
queryToString(new_query); - for (size_t shard_index : ext::range(0, shards_info.size())) - { - const auto & shard_info = shards_info[shard_index]; - if (shard_info.isLocal()) - { - InterpreterInsertQuery interpreter(new_query, context); - pipelines.emplace_back(std::make_unique(interpreter.execute().pipeline)); - } - else - { - auto timeouts = ConnectionTimeouts::getTCPTimeoutsWithFailover(settings); - auto connections = shard_info.pool->getMany(timeouts, &settings, PoolMode::GET_ONE); - if (connections.empty() || connections.front().isNull()) - throw Exception( - "Expected exactly one connection for shard " + toString(shard_info.shard_num), ErrorCodes::LOGICAL_ERROR); - - /// INSERT SELECT query returns empty block - auto in_stream = std::make_shared(std::move(connections), new_query_str, Block{}, context); - pipelines.emplace_back(std::make_unique()); - pipelines.back()->init(Pipe(std::make_shared(std::move(in_stream)))); - pipelines.back()->setSinks([](const Block & header, QueryPipeline::StreamType) -> ProcessorPtr - { - return std::make_shared(header); - }); - } - } - - res.pipeline = QueryPipeline::unitePipelines(std::move(pipelines), {}); } } @@ -285,15 +208,15 @@ BlockIO InterpreterInsertQuery::execute() * to avoid unnecessary squashing. */ - Settings new_settings = context.getSettings(); + Settings new_settings = getContext()->getSettings(); new_settings.max_threads = std::max(1, settings.max_insert_threads); if (settings.min_insert_block_size_rows && table->prefersLargeBlocks()) new_settings.max_block_size = settings.min_insert_block_size_rows; - Context new_context = context; - new_context.setSettings(new_settings); + auto new_context = Context::createCopy(context); + new_context->setSettings(new_settings); InterpreterSelectWithUnionQuery interpreter_select{ query.select, new_context, SelectQueryOptions(QueryProcessingStage::Complete, 1)}; @@ -303,7 +226,7 @@ BlockIO InterpreterInsertQuery::execute() { /// Passing 1 as subquery_depth will disable limiting size of intermediate result. InterpreterSelectWithUnionQuery interpreter_select{ - query.select, context, SelectQueryOptions(QueryProcessingStage::Complete, 1)}; + query.select, getContext(), SelectQueryOptions(QueryProcessingStage::Complete, 1)}; res = interpreter_select.execute(); } @@ -311,10 +234,29 @@ BlockIO InterpreterInsertQuery::execute() out_streams_size = std::min(size_t(settings.max_insert_threads), res.pipeline.getNumStreams()); res.pipeline.resize(out_streams_size); + + /// Allow to insert Nullable into non-Nullable columns, NULL values will be added as defaults values. + if (getContext()->getSettingsRef().insert_null_as_default) + { + const auto & input_columns = res.pipeline.getHeader().getColumnsWithTypeAndName(); + const auto & query_columns = query_sample_block.getColumnsWithTypeAndName(); + const auto & output_columns = metadata_snapshot->getColumns(); + + if (input_columns.size() == query_columns.size()) + { + for (size_t col_idx = 0; col_idx < query_columns.size(); ++col_idx) + { + /// Change query sample block columns to Nullable to allow inserting nullable columns, where NULL values will be substituted with + /// default column values (in AddingDefaultBlockOutputStream), so all values will be cast correctly. 
+ if (input_columns[col_idx].type->isNullable() && !query_columns[col_idx].type->isNullable() && output_columns.hasDefault(query_columns[col_idx].name)) + query_sample_block.setColumn(col_idx, ColumnWithTypeAndName(makeNullable(query_columns[col_idx].column), makeNullable(query_columns[col_idx].type), query_columns[col_idx].name)); + } + } + } } else if (query.watch) { - InterpreterWatchQuery interpreter_watch{ query.watch, context }; + InterpreterWatchQuery interpreter_watch{ query.watch, getContext() }; res = interpreter_watch.execute(); res.pipeline.init(Pipe(std::make_shared(std::move(res.in)))); } @@ -327,21 +269,23 @@ BlockIO InterpreterInsertQuery::execute() /// NOTE: we explicitly ignore bound materialized views when inserting into Kafka Storage. /// Otherwise we'll get duplicates when MV reads same rows again from Kafka. if (table->noPushingToViews() && !no_destination) - out = table->write(query_ptr, metadata_snapshot, context); + out = table->write(query_ptr, metadata_snapshot, getContext()); else - out = std::make_shared(table, metadata_snapshot, context, query_ptr, no_destination); + out = std::make_shared(table, metadata_snapshot, getContext(), query_ptr, no_destination); /// Note that we wrap transforms one on top of another, so we write them in reverse of data processing order. /// Checking constraints. It must be done after calculation of all defaults, so we can check them on calculated columns. if (const auto & constraints = metadata_snapshot->getConstraints(); !constraints.empty()) out = std::make_shared( - query.table_id, out, out->getHeader(), metadata_snapshot->getConstraints(), context); + query.table_id, out, out->getHeader(), metadata_snapshot->getConstraints(), getContext()); + + bool null_as_default = query.select && getContext()->getSettingsRef().insert_null_as_default; /// Actually we don't know structure of input blocks from query/table, /// because some clients break insertion protocol (columns != header) out = std::make_shared( - out, query_sample_block, metadata_snapshot->getColumns(), context); + out, query_sample_block, metadata_snapshot->getColumns(), getContext(), null_as_default); /// It's important to squash blocks as early as possible (before other transforms), /// because other transforms may work inefficient if block size is small. 
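
The insert_null_as_default branch above only switches the sample-block types to Nullable; the actual substitution of NULLs by column defaults happens later in the pipeline. A small sketch of the resulting behaviour for a single column, with std::optional standing in for a Nullable column (all names here are hypothetical, not the ClickHouse implementation):

    #include <iostream>
    #include <optional>
    #include <vector>

    // Simplified model of what insert_null_as_default achieves for one
    // column: the SELECT side produces Nullable values, the destination
    // column is not Nullable but has a DEFAULT expression, so every NULL
    // is replaced by the default before the final cast.
    std::vector<int> fillNullsWithDefault(
        const std::vector<std::optional<int>> & input, int default_value)
    {
        std::vector<int> result;
        result.reserve(input.size());
        for (const auto & value : input)
            result.push_back(value.value_or(default_value));  // NULL -> column default
        return result;
    }

    int main()
    {
        // INSERT ... SELECT produced [1, NULL, 3]; the destination default is 42.
        std::vector<std::optional<int>> nullable_input = {1, std::nullopt, 3};
        for (int v : fillNullsWithDefault(nullable_input, /*default_value=*/42))
            std::cout << v << ' ';  // prints: 1 42 3
        std::cout << '\n';
    }
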
@@ -360,7 +304,7 @@ BlockIO InterpreterInsertQuery::execute() } auto out_wrapper = std::make_shared(out); - out_wrapper->setProcessListElement(context.getProcessListElement()); + out_wrapper->setProcessListElement(getContext()->getProcessListElement()); out = std::move(out_wrapper); out_streams.emplace_back(std::move(out)); } @@ -378,7 +322,7 @@ BlockIO InterpreterInsertQuery::execute() res.pipeline.getHeader().getColumnsWithTypeAndName(), header.getColumnsWithTypeAndName(), ActionsDAG::MatchColumnsMode::Position); - auto actions = std::make_shared(actions_dag); + auto actions = std::make_shared(actions_dag, ExpressionActionsSettings::fromContext(getContext())); res.pipeline.addSimpleTransform([&](const Block & in_header) -> ProcessorPtr { @@ -406,7 +350,7 @@ BlockIO InterpreterInsertQuery::execute() else if (query.data && !query.has_tail) /// can execute without additional data { // res.out = std::move(out_streams.at(0)); - res.in = std::make_shared(query_ptr, nullptr, query_sample_block, context, nullptr); + res.in = std::make_shared(query_ptr, nullptr, query_sample_block, getContext(), nullptr); res.in = std::make_shared(res.in, out_streams.at(0)); } else @@ -429,10 +373,10 @@ StorageID InterpreterInsertQuery::getDatabaseTable() const } -void InterpreterInsertQuery::extendQueryLogElemImpl(QueryLogElement & elem, const ASTPtr &, const Context & context_) const +void InterpreterInsertQuery::extendQueryLogElemImpl(QueryLogElement & elem, const ASTPtr &, ContextPtr context_) const { elem.query_kind = "Insert"; - const auto & insert_table = context_.getInsertionTable(); + const auto & insert_table = context_->getInsertionTable(); if (!insert_table.empty()) { elem.query_databases.insert(insert_table.getDatabaseName()); diff --git a/src/Interpreters/InterpreterInsertQuery.h b/src/Interpreters/InterpreterInsertQuery.h index 85178bc73ad..71b8c827702 100644 --- a/src/Interpreters/InterpreterInsertQuery.h +++ b/src/Interpreters/InterpreterInsertQuery.h @@ -9,17 +9,14 @@ namespace DB { -class Context; - - /** Interprets the INSERT query. 
*/ -class InterpreterInsertQuery : public IInterpreter +class InterpreterInsertQuery : public IInterpreter, WithContext { public: InterpreterInsertQuery( const ASTPtr & query_ptr_, - const Context & context_, + ContextPtr context_, bool allow_materialized_ = false, bool no_squash_ = false, bool no_destination_ = false); @@ -33,14 +30,13 @@ public: StorageID getDatabaseTable() const; - void extendQueryLogElemImpl(QueryLogElement & elem, const ASTPtr & ast, const Context & context_) const override; + void extendQueryLogElemImpl(QueryLogElement & elem, const ASTPtr & ast, ContextPtr context_) const override; private: StoragePtr getTable(ASTInsertQuery & query); Block getSampleBlock(const ASTInsertQuery & query, const StoragePtr & table, const StorageMetadataPtr & metadata_snapshot) const; ASTPtr query_ptr; - const Context & context; const bool allow_materialized; const bool no_squash; const bool no_destination; diff --git a/src/Interpreters/InterpreterKillQueryQuery.cpp b/src/Interpreters/InterpreterKillQueryQuery.cpp index c50659c6c45..88ddfefef63 100644 --- a/src/Interpreters/InterpreterKillQueryQuery.cpp +++ b/src/Interpreters/InterpreterKillQueryQuery.cpp @@ -74,7 +74,7 @@ static void insertResultRow(size_t n, CancellationCode code, const Block & sourc columns[col_num]->insertFrom(*source.getByName(header.getByPosition(col_num).name).column, n); } -static QueryDescriptors extractQueriesExceptMeAndCheckAccess(const Block & processes_block, Context & context) +static QueryDescriptors extractQueriesExceptMeAndCheckAccess(const Block & processes_block, ContextPtr context) { QueryDescriptors res; size_t num_processes = processes_block.rows(); @@ -82,7 +82,7 @@ static QueryDescriptors extractQueriesExceptMeAndCheckAccess(const Block & proce const ColumnString & query_id_col = typeid_cast(*processes_block.getByName("query_id").column); const ColumnString & user_col = typeid_cast(*processes_block.getByName("user").column); - const ClientInfo & my_client = context.getProcessListElement()->getClientInfo(); + const ClientInfo & my_client = context->getProcessListElement()->getClientInfo(); bool access_denied = false; std::optional is_kill_query_granted_value; @@ -90,7 +90,7 @@ static QueryDescriptors extractQueriesExceptMeAndCheckAccess(const Block & proce { if (!is_kill_query_granted_value) { - is_kill_query_granted_value = context.getAccess()->isGranted(AccessType::KILL_QUERY); + is_kill_query_granted_value = context->getAccess()->isGranted(AccessType::KILL_QUERY); if (!*is_kill_query_granted_value) access_denied = true; } @@ -195,7 +195,7 @@ BlockIO InterpreterKillQueryQuery::execute() const auto & query = query_ptr->as(); if (!query.cluster.empty()) - return executeDDLQueryOnCluster(query_ptr, context, getRequiredAccessForDDLOnCluster()); + return executeDDLQueryOnCluster(query_ptr, getContext(), getRequiredAccessForDDLOnCluster()); BlockIO res_io; switch (query.type) @@ -206,8 +206,8 @@ BlockIO InterpreterKillQueryQuery::execute() if (!processes_block) return res_io; - ProcessList & process_list = context.getProcessList(); - QueryDescriptors queries_to_stop = extractQueriesExceptMeAndCheckAccess(processes_block, context); + ProcessList & process_list = getContext()->getProcessList(); + QueryDescriptors queries_to_stop = extractQueriesExceptMeAndCheckAccess(processes_block, getContext()); auto header = processes_block.cloneEmpty(); header.insert(0, {ColumnString::create(), std::make_shared(), "kill_status"}); @@ -248,7 +248,7 @@ BlockIO InterpreterKillQueryQuery::execute() MutableColumns 
res_columns = header.cloneEmptyColumns(); auto table_id = StorageID::createEmpty(); AccessRightsElements required_access_rights; - auto access = context.getAccess(); + auto access = getContext()->getAccess(); bool access_denied = false; for (size_t i = 0; i < mutations_block.rows(); ++i) @@ -259,14 +259,16 @@ BlockIO InterpreterKillQueryQuery::execute() CancellationCode code = CancellationCode::Unknown; if (!query.test) { - auto storage = DatabaseCatalog::instance().tryGetTable(table_id, context); + auto storage = DatabaseCatalog::instance().tryGetTable(table_id, getContext()); if (!storage) code = CancellationCode::NotFound; else { ParserAlterCommand parser; - auto command_ast = parseQuery(parser, command_col.getDataAt(i).toString(), 0, context.getSettingsRef().max_parser_depth); - required_access_rights = InterpreterAlterQuery::getRequiredAccessForCommand(command_ast->as(), table_id.database_name, table_id.table_name); + auto command_ast + = parseQuery(parser, command_col.getDataAt(i).toString(), 0, getContext()->getSettingsRef().max_parser_depth); + required_access_rights = InterpreterAlterQuery::getRequiredAccessForCommand( + command_ast->as(), table_id.database_name, table_id.table_name); if (!access->isGranted(required_access_rights)) { access_denied = true; @@ -300,7 +302,7 @@ Block InterpreterKillQueryQuery::getSelectResult(const String & columns, const S if (where_expression) select_query += " WHERE " + queryToString(where_expression); - auto stream = executeQuery(select_query, context.getGlobalContext(), true).getInputStream(); + auto stream = executeQuery(select_query, getContext()->getGlobalContext(), true).getInputStream(); Block res = stream->read(); if (res && stream->read()) diff --git a/src/Interpreters/InterpreterKillQueryQuery.h b/src/Interpreters/InterpreterKillQueryQuery.h index 788703f8e6d..5ffd9a525a2 100644 --- a/src/Interpreters/InterpreterKillQueryQuery.h +++ b/src/Interpreters/InterpreterKillQueryQuery.h @@ -8,15 +8,12 @@ namespace DB { -class Context; class AccessRightsElements; - -class InterpreterKillQueryQuery final : public IInterpreter +class InterpreterKillQueryQuery final : public IInterpreter, WithContext { public: - InterpreterKillQueryQuery(const ASTPtr & query_ptr_, Context & context_) - : query_ptr(query_ptr_), context(context_) {} + InterpreterKillQueryQuery(const ASTPtr & query_ptr_, ContextPtr context_) : WithContext(context_), query_ptr(query_ptr_) { } BlockIO execute() override; @@ -25,8 +22,6 @@ private: Block getSelectResult(const String & columns, const String & table); ASTPtr query_ptr; - Context & context; }; - } diff --git a/src/Interpreters/InterpreterOptimizeQuery.cpp b/src/Interpreters/InterpreterOptimizeQuery.cpp index d8e9013e397..64de5ee0479 100644 --- a/src/Interpreters/InterpreterOptimizeQuery.cpp +++ b/src/Interpreters/InterpreterOptimizeQuery.cpp @@ -25,12 +25,12 @@ BlockIO InterpreterOptimizeQuery::execute() const auto & ast = query_ptr->as(); if (!ast.cluster.empty()) - return executeDDLQueryOnCluster(query_ptr, context, getRequiredAccess()); + return executeDDLQueryOnCluster(query_ptr, getContext(), getRequiredAccess()); - context.checkAccess(getRequiredAccess()); + getContext()->checkAccess(getRequiredAccess()); - auto table_id = context.resolveStorageID(ast, Context::ResolveOrdinary); - StoragePtr table = DatabaseCatalog::instance().getTable(table_id, context); + auto table_id = getContext()->resolveStorageID(ast, Context::ResolveOrdinary); + StoragePtr table = DatabaseCatalog::instance().getTable(table_id, getContext()); 
auto metadata_snapshot = table->getInMemoryMetadataPtr(); // Empty list of names means we deduplicate by all columns, but user can explicitly state which columns to use. @@ -40,7 +40,8 @@ BlockIO InterpreterOptimizeQuery::execute() // User requested custom set of columns for deduplication, possibly with Column Transformer expression. { // Expand asterisk, column transformers, etc into list of column names. - const auto cols = processColumnTransformers(context.getCurrentDatabase(), table, metadata_snapshot, ast.deduplicate_by_columns); + const auto cols + = processColumnTransformers(getContext()->getCurrentDatabase(), table, metadata_snapshot, ast.deduplicate_by_columns); for (const auto & col : cols->children) column_names.emplace_back(col->getColumnName()); } @@ -68,7 +69,7 @@ BlockIO InterpreterOptimizeQuery::execute() } } - table->optimize(query_ptr, metadata_snapshot, ast.partition, ast.final, ast.deduplicate, column_names, context); + table->optimize(query_ptr, metadata_snapshot, ast.partition, ast.final, ast.deduplicate, column_names, getContext()); return {}; } diff --git a/src/Interpreters/InterpreterOptimizeQuery.h b/src/Interpreters/InterpreterOptimizeQuery.h index faf336a0e98..8491fe8df49 100644 --- a/src/Interpreters/InterpreterOptimizeQuery.h +++ b/src/Interpreters/InterpreterOptimizeQuery.h @@ -6,19 +6,15 @@ namespace DB { -class Context; -class AccessRightsElements; +class AccessRightsElements; /** Just call method "optimize" for table. */ -class InterpreterOptimizeQuery : public IInterpreter +class InterpreterOptimizeQuery : public IInterpreter, WithContext { public: - InterpreterOptimizeQuery(const ASTPtr & query_ptr_, Context & context_) - : query_ptr(query_ptr_), context(context_) - { - } + InterpreterOptimizeQuery(const ASTPtr & query_ptr_, ContextPtr context_) : WithContext(context_), query_ptr(query_ptr_) {} BlockIO execute() override; @@ -26,7 +22,6 @@ private: AccessRightsElements getRequiredAccess() const; ASTPtr query_ptr; - Context & context; }; } diff --git a/src/Interpreters/InterpreterRenameQuery.cpp b/src/Interpreters/InterpreterRenameQuery.cpp index 923a342d9ea..515559ad903 100644 --- a/src/Interpreters/InterpreterRenameQuery.cpp +++ b/src/Interpreters/InterpreterRenameQuery.cpp @@ -18,8 +18,8 @@ namespace ErrorCodes extern const int NOT_IMPLEMENTED; } -InterpreterRenameQuery::InterpreterRenameQuery(const ASTPtr & query_ptr_, Context & context_) - : query_ptr(query_ptr_), context(context_) +InterpreterRenameQuery::InterpreterRenameQuery(const ASTPtr & query_ptr_, ContextPtr context_) + : WithContext(context_), query_ptr(query_ptr_) { } @@ -29,12 +29,12 @@ BlockIO InterpreterRenameQuery::execute() const auto & rename = query_ptr->as(); if (!rename.cluster.empty()) - return executeDDLQueryOnCluster(query_ptr, context, getRequiredAccess()); + return executeDDLQueryOnCluster(query_ptr, getContext(), getRequiredAccess()); - context.checkAccess(getRequiredAccess()); + getContext()->checkAccess(getRequiredAccess()); - String path = context.getPath(); - String current_database = context.getCurrentDatabase(); + String path = getContext()->getPath(); + String current_database = getContext()->getCurrentDatabase(); /** In case of error while renaming, it is possible that only part of tables was renamed * or we will be in inconsistent state. (It is worth to be fixed.) 
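
The hunks above repeat a change made throughout this patch: interpreters stop holding a `Context &` member and instead inherit from WithContext, which keeps a ContextPtr and exposes getContext(); copies are made explicitly via Context::createCopy. A compressed sketch of the pattern with simplified stand-in types (not the real ClickHouse classes):

    #include <memory>
    #include <string>

    // Stand-in for the real Context; only the parts needed for the sketch.
    struct Context
    {
        std::string current_database = "default";

        // Copies are made explicitly instead of copy-constructing Context.
        static std::shared_ptr<Context> createCopy(const std::shared_ptr<const Context> & other)
        {
            return std::make_shared<Context>(*other);
        }
    };
    using ContextPtr = std::shared_ptr<const Context>;

    // Base class holding the shared context, as the patch introduces.
    class WithContext
    {
    public:
        explicit WithContext(ContextPtr context_) : context(std::move(context_)) {}
        ContextPtr getContext() const { return context; }

    private:
        ContextPtr context;
    };

    // An interpreter no longer carries its own `Context & context` member.
    class SomeInterpreter : public WithContext
    {
    public:
        explicit SomeInterpreter(ContextPtr context_) : WithContext(std::move(context_)) {}

        std::string currentDatabase() const { return getContext()->current_database; }
    };

    int main()
    {
        auto ctx = std::make_shared<Context>();
        SomeInterpreter interpreter(ctx);
        return interpreter.currentDatabase() == "default" ? 0 : 1;
    }
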
@@ -77,10 +77,11 @@ BlockIO InterpreterRenameQuery::executeToTables(const ASTRenameQuery & rename, c for (const auto & elem : descriptions) { if (!rename.exchange) - database_catalog.assertTableDoesntExist(StorageID(elem.to_database_name, elem.to_table_name), context); + database_catalog.assertTableDoesntExist(StorageID(elem.to_database_name, elem.to_table_name), getContext()); DatabasePtr database = database_catalog.getDatabase(elem.from_database_name); - if (typeid_cast(database.get()) && context.getClientInfo().query_kind != ClientInfo::QueryKind::SECONDARY_QUERY) + if (typeid_cast(database.get()) + && getContext()->getClientInfo().query_kind != ClientInfo::QueryKind::SECONDARY_QUERY) { if (1 < descriptions.size()) throw Exception(ErrorCodes::NOT_IMPLEMENTED, "Database {} is Replicated, " @@ -90,12 +91,12 @@ BlockIO InterpreterRenameQuery::executeToTables(const ASTRenameQuery & rename, c UniqueTableName to(elem.to_database_name, elem.to_table_name); ddl_guards[from]->releaseTableLock(); ddl_guards[to]->releaseTableLock(); - return typeid_cast(database.get())->tryEnqueueReplicatedDDL(query_ptr, context); + return typeid_cast(database.get())->tryEnqueueReplicatedDDL(query_ptr, getContext()); } else { database->renameTable( - context, + getContext(), elem.from_table_name, *database_catalog.getDatabase(elem.to_database_name), elem.to_table_name, @@ -140,19 +141,19 @@ AccessRightsElements InterpreterRenameQuery::getRequiredAccess() const return required_access; } -void InterpreterRenameQuery::extendQueryLogElemImpl(QueryLogElement & elem, const ASTPtr & ast, const Context &) const +void InterpreterRenameQuery::extendQueryLogElemImpl(QueryLogElement & elem, const ASTPtr & ast, ContextPtr) const { elem.query_kind = "Rename"; const auto & rename = ast->as(); for (const auto & element : rename.elements) { { - String database = backQuoteIfNeed(element.from.database.empty() ? context.getCurrentDatabase() : element.from.database); + String database = backQuoteIfNeed(element.from.database.empty() ? getContext()->getCurrentDatabase() : element.from.database); elem.query_databases.insert(database); elem.query_tables.insert(database + "." + backQuoteIfNeed(element.from.table)); } { - String database = backQuoteIfNeed(element.to.database.empty() ? context.getCurrentDatabase() : element.to.database); + String database = backQuoteIfNeed(element.to.database.empty() ? getContext()->getCurrentDatabase() : element.to.database); elem.query_databases.insert(database); elem.query_tables.insert(database + "." + backQuoteIfNeed(element.to.table)); } diff --git a/src/Interpreters/InterpreterRenameQuery.h b/src/Interpreters/InterpreterRenameQuery.h index 0da25f63e8d..49fdd50f52d 100644 --- a/src/Interpreters/InterpreterRenameQuery.h +++ b/src/Interpreters/InterpreterRenameQuery.h @@ -7,7 +7,6 @@ namespace DB { -class Context; class AccessRightsElements; class DDLGuard; @@ -49,12 +48,12 @@ using TableGuards = std::map>; /** Rename one table * or rename many tables at once. 
*/ -class InterpreterRenameQuery : public IInterpreter +class InterpreterRenameQuery : public IInterpreter, WithContext { public: - InterpreterRenameQuery(const ASTPtr & query_ptr_, Context & context_); + InterpreterRenameQuery(const ASTPtr & query_ptr_, ContextPtr context_); BlockIO execute() override; - void extendQueryLogElemImpl(QueryLogElement & elem, const ASTPtr & ast, const Context &) const override; + void extendQueryLogElemImpl(QueryLogElement & elem, const ASTPtr & ast, ContextPtr) const override; private: BlockIO executeToTables(const ASTRenameQuery & rename, const RenameDescriptions & descriptions, TableGuards & ddl_guards); @@ -63,7 +62,6 @@ private: AccessRightsElements getRequiredAccess() const; ASTPtr query_ptr; - Context & context; }; } diff --git a/src/Interpreters/InterpreterSelectQuery.cpp b/src/Interpreters/InterpreterSelectQuery.cpp index d0c8966cf07..16c9731a427 100644 --- a/src/Interpreters/InterpreterSelectQuery.cpp +++ b/src/Interpreters/InterpreterSelectQuery.cpp @@ -37,7 +37,6 @@ #include #include -#include #include #include #include @@ -48,6 +47,7 @@ #include #include #include +#include #include #include #include @@ -62,6 +62,7 @@ #include #include #include +#include #include #include #include @@ -81,7 +82,7 @@ #include #include #include -#include +#include #include @@ -137,24 +138,23 @@ String InterpreterSelectQuery::generateFilterActions(ActionsDAGPtr & actions, co table_expr->children.push_back(table_expr->database_and_table_name); /// Using separate expression analyzer to prevent any possible alias injection - auto syntax_result = TreeRewriter(*context).analyzeSelect(query_ast, TreeRewriterResult({}, storage, metadata_snapshot)); - SelectQueryExpressionAnalyzer analyzer(query_ast, syntax_result, *context, metadata_snapshot); + auto syntax_result = TreeRewriter(context).analyzeSelect(query_ast, TreeRewriterResult({}, storage, metadata_snapshot)); + SelectQueryExpressionAnalyzer analyzer(query_ast, syntax_result, context, metadata_snapshot); actions = analyzer.simpleSelectActions(); auto column_name = expr_list->children.at(0)->getColumnName(); - actions->removeUnusedActions({column_name}); + actions->removeUnusedActions(NameSet{column_name}); actions->projectInput(false); - ActionsDAG::Index index; for (const auto * node : actions->getInputs()) - actions->addNodeToIndex(node); + actions->getIndex().push_back(node); return column_name; } InterpreterSelectQuery::InterpreterSelectQuery( const ASTPtr & query_ptr_, - const Context & context_, + ContextPtr context_, const SelectQueryOptions & options_, const Names & required_result_column_names_) : InterpreterSelectQuery(query_ptr_, context_, nullptr, std::nullopt, nullptr, options_, required_result_column_names_) @@ -163,7 +163,7 @@ InterpreterSelectQuery::InterpreterSelectQuery( InterpreterSelectQuery::InterpreterSelectQuery( const ASTPtr & query_ptr_, - const Context & context_, + ContextPtr context_, const BlockInputStreamPtr & input_, const SelectQueryOptions & options_) : InterpreterSelectQuery(query_ptr_, context_, input_, std::nullopt, nullptr, options_.copy().noSubquery()) @@ -171,7 +171,7 @@ InterpreterSelectQuery::InterpreterSelectQuery( InterpreterSelectQuery::InterpreterSelectQuery( const ASTPtr & query_ptr_, - const Context & context_, + ContextPtr context_, Pipe input_pipe_, const SelectQueryOptions & options_) : InterpreterSelectQuery(query_ptr_, context_, nullptr, std::move(input_pipe_), nullptr, options_.copy().noSubquery()) @@ -179,7 +179,7 @@ InterpreterSelectQuery::InterpreterSelectQuery( 
InterpreterSelectQuery::InterpreterSelectQuery( const ASTPtr & query_ptr_, - const Context & context_, + ContextPtr context_, const StoragePtr & storage_, const StorageMetadataPtr & metadata_snapshot_, const SelectQueryOptions & options_) @@ -192,15 +192,15 @@ InterpreterSelectQuery::~InterpreterSelectQuery() = default; /** There are no limits on the maximum size of the result for the subquery. * Since the result of the query is not the result of the entire query. */ -static Context getSubqueryContext(const Context & context) +static ContextPtr getSubqueryContext(ContextPtr context) { - Context subquery_context = context; - Settings subquery_settings = context.getSettings(); + auto subquery_context = Context::createCopy(context); + Settings subquery_settings = context->getSettings(); subquery_settings.max_result_rows = 0; subquery_settings.max_result_bytes = 0; /// The calculation of extremes does not make sense and is not necessary (if you do it, then the extremes of the subquery can be taken for whole query). subquery_settings.extremes = false; - subquery_context.setSettings(subquery_settings); + subquery_context->setSettings(subquery_settings); return subquery_context; } @@ -223,7 +223,7 @@ static void rewriteMultipleJoins(ASTPtr & query, const TablesWithColumns & table /// Checks that the current user has the SELECT privilege. static void checkAccessRightsForSelect( - const Context & context, + ContextPtr context, const StorageID & table_id, const StorageMetadataPtr & table_metadata, const Strings & required_columns, @@ -236,19 +236,19 @@ static void checkAccessRightsForSelect( /// In this case just checking access for `required_columns` doesn't work correctly /// because `required_columns` will contain the name of a column of minimum size (see TreeRewriterResult::collectUsedColumns()) /// which is probably not the same column as the column the current user has access to. - auto access = context.getAccess(); + auto access = context->getAccess(); for (const auto & column : table_metadata->getColumns()) { if (access->isGranted(AccessType::SELECT, table_id.database_name, table_id.table_name, column.name)) return; } - throw Exception(context.getUserName() + ": Not enough privileges. " + throw Exception(context->getUserName() + ": Not enough privileges. " "To execute this query it's necessary to have grant SELECT for at least one column on " + table_id.getFullTableName(), ErrorCodes::ACCESS_DENIED); } /// General check. - context.checkAccess(AccessType::SELECT, table_id, required_columns); + context->checkAccess(AccessType::SELECT, table_id, required_columns); } /// Returns true if we should ignore quotas and limits for a specified table in the system database. 
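
checkAccessRightsForSelect above handles the `SELECT count()` corner case: such a query names no particular column, so it is allowed as soon as the user can read any column of the table. A self-contained sketch of that rule, with string sets as hypothetical stand-ins for the access-control objects:

    #include <set>
    #include <stdexcept>
    #include <string>
    #include <vector>

    // Sketch of the special case: `SELECT count()` reads no specific column,
    // so access is granted if the user may read *any* column of the table.
    void checkSelectCount(const std::vector<std::string> & table_columns,
                          const std::set<std::string> & granted_columns)
    {
        for (const auto & column : table_columns)
            if (granted_columns.count(column))
                return;  // at least one readable column is enough for count()
        throw std::runtime_error(
            "Not enough privileges: grant SELECT on at least one column is required");
    }

    int main()
    {
        // The user may read only `b`; SELECT count() must still work.
        checkSelectCount(/*table_columns=*/{"a", "b", "c"}, /*granted_columns=*/{"b"});
        return 0;
    }
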
@@ -265,7 +265,7 @@ static bool shouldIgnoreQuotaAndLimits(const StorageID & table_id) InterpreterSelectQuery::InterpreterSelectQuery( const ASTPtr & query_ptr_, - const Context & context_, + ContextPtr context_, const BlockInputStreamPtr & input_, std::optional input_pipe_, const StoragePtr & storage_, @@ -309,7 +309,7 @@ InterpreterSelectQuery::InterpreterSelectQuery( ApplyWithSubqueryVisitor().visit(query_ptr); } - JoinedTables joined_tables(getSubqueryContext(*context), getSelectQuery()); + JoinedTables joined_tables(getSubqueryContext(context), getSelectQuery()); bool got_storage_from_query = false; if (!has_input && !storage) @@ -376,7 +376,7 @@ InterpreterSelectQuery::InterpreterSelectQuery( if (view) view->replaceWithSubquery(getSelectQuery(), view_table, metadata_snapshot); - syntax_analyzer_result = TreeRewriter(*context).analyzeSelect( + syntax_analyzer_result = TreeRewriter(context).analyzeSelect( query_ptr, TreeRewriterResult(source_header.getNamesAndTypesList(), storage, metadata_snapshot), options, joined_tables.tablesWithColumns(), required_result_column_names, table_join); @@ -384,7 +384,7 @@ InterpreterSelectQuery::InterpreterSelectQuery( /// Save scalar sub queries's results in the query context if (!options.only_analyze && context->hasQueryContext()) for (const auto & it : syntax_analyzer_result->getScalars()) - context->getQueryContext().addScalar(it.first, it.second); + context->getQueryContext()->addScalar(it.first, it.second); if (view) { @@ -393,7 +393,7 @@ InterpreterSelectQuery::InterpreterSelectQuery( view = nullptr; } - if (try_move_to_prewhere && storage && query.where() && !query.prewhere() && !query.final()) + if (try_move_to_prewhere && storage && query.where() && !query.prewhere()) { /// PREWHERE optimization: transfer some condition from WHERE to PREWHERE if enabled and viable if (const auto & column_sizes = storage->getColumnSizes(); !column_sizes.empty()) @@ -407,12 +407,18 @@ InterpreterSelectQuery::InterpreterSelectQuery( current_info.query = query_ptr; current_info.syntax_analyzer_result = syntax_analyzer_result; - MergeTreeWhereOptimizer{current_info, *context, std::move(column_compressed_sizes), metadata_snapshot, syntax_analyzer_result->requiredSourceColumns(), log}; + MergeTreeWhereOptimizer{ + current_info, + context, + std::move(column_compressed_sizes), + metadata_snapshot, + syntax_analyzer_result->requiredSourceColumns(), + log}; } } query_analyzer = std::make_unique( - query_ptr, syntax_analyzer_result, *context, metadata_snapshot, + query_ptr, syntax_analyzer_result, context, metadata_snapshot, NameSet(required_result_column_names.begin(), required_result_column_names.end()), !options.only_analyze, options, std::move(subquery_for_sets)); @@ -520,7 +526,7 @@ InterpreterSelectQuery::InterpreterSelectQuery( { /// The current user should have the SELECT privilege. /// If this table_id is for a table function we don't check access rights here because in this case they have been already checked in ITableFunction::execute(). - checkAccessRightsForSelect(*context, table_id, metadata_snapshot, required_columns, *syntax_analyzer_result); + checkAccessRightsForSelect(context, table_id, metadata_snapshot, required_columns, *syntax_analyzer_result); /// Remove limits for some tables in the `system` database. 
if (shouldIgnoreQuotaAndLimits(table_id) && (joined_tables.tablesCount() <= 1)) @@ -561,7 +567,9 @@ BlockIO InterpreterSelectQuery::execute() buildQueryPlan(query_plan); - res.pipeline = std::move(*query_plan.buildQueryPipeline(QueryPlanOptimizationSettings(context->getSettingsRef()))); + res.pipeline = std::move(*query_plan.buildQueryPipeline( + QueryPlanOptimizationSettings::fromContext(context), + BuildQueryPipelineSettings::fromContext(context))); return res; } @@ -572,7 +580,7 @@ Block InterpreterSelectQuery::getSampleBlockImpl() query_info.query = query_ptr; if (storage && !options.only_analyze) - from_stage = storage->getQueryProcessingStage(*context, options.to_stage, query_info); + from_stage = storage->getQueryProcessingStage(context, options.to_stage, query_info); /// Do I need to perform the first part of the pipeline? /// Running on remote servers during distributed processing or if query is not distributed. @@ -606,7 +614,9 @@ Block InterpreterSelectQuery::getSampleBlockImpl() if (analysis_result.prewhere_info) { - ExpressionActions(analysis_result.prewhere_info->prewhere_actions).execute(header); + ExpressionActions( + analysis_result.prewhere_info->prewhere_actions, + ExpressionActionsSettings::fromContext(context)).execute(header); if (analysis_result.prewhere_info->remove_prewhere_column) header.erase(analysis_result.prewhere_info->prewhere_column_name); } @@ -618,6 +628,28 @@ Block InterpreterSelectQuery::getSampleBlockImpl() if (!analysis_result.need_aggregate) { // What's the difference with selected_columns? + // Here we calculate the header we want from remote server after it + // executes query up to WithMergeableState. When there is an ORDER BY, + // it is executed on remote server firstly, then we execute merge + // sort on initiator. To execute ORDER BY, we need to calculate the + // ORDER BY keys. These keys might be not present among the final + // SELECT columns given by the `selected_column`. This is why we have + // to use proper keys given by the result columns of the + // `before_order_by` expression actions. + // Another complication is window functions -- if we have them, they + // are calculated on initiator, before ORDER BY columns. In this case, + // the shard has to return columns required for window function + // calculation and further steps, given by the `before_window` + // expression actions. + // As of 21.6 this is broken: the actions in `before_window` might + // not contain everything required for the ORDER BY step, but this + // is a responsibility of ExpressionAnalyzer and is not a problem + // with this code. See + // https://github.com/ClickHouse/ClickHouse/issues/19857 for details. + if (analysis_result.before_window) + { + return analysis_result.before_window->getResultColumns(); + } return analysis_result.before_order_by->getResultColumns(); } @@ -645,14 +677,20 @@ Block InterpreterSelectQuery::getSampleBlockImpl() if (options.to_stage == QueryProcessingStage::Enum::WithMergeableStateAfterAggregation) { - // What's the difference with selected_columns? + // It's different from selected_columns, see the comment above for + // WithMergeableState stage. 
+ if (analysis_result.before_window) + { + return analysis_result.before_window->getResultColumns(); + } + return analysis_result.before_order_by->getResultColumns(); } return analysis_result.final_projection->getResultColumns(); } -static Field getWithFillFieldValue(const ASTPtr & node, const Context & context) +static Field getWithFillFieldValue(const ASTPtr & node, ContextPtr context) { const auto & [field, type] = evaluateConstantExpression(node, context); @@ -662,7 +700,7 @@ static Field getWithFillFieldValue(const ASTPtr & node, const Context & context) return field; } -static FillColumnDescription getWithFillDescription(const ASTOrderByElement & order_by_elem, const Context & context) +static FillColumnDescription getWithFillDescription(const ASTOrderByElement & order_by_elem, ContextPtr context) { FillColumnDescription descr; if (order_by_elem.fill_from) @@ -707,7 +745,7 @@ static FillColumnDescription getWithFillDescription(const ASTOrderByElement & or return descr; } -static SortDescription getSortDescription(const ASTSelectQuery & query, const Context & context) +static SortDescription getSortDescription(const ASTSelectQuery & query, ContextPtr context) { SortDescription order_descr; order_descr.reserve(query.orderBy()->children.size()); @@ -747,7 +785,7 @@ static SortDescription getSortDescriptionFromGroupBy(const ASTSelectQuery & quer return order_descr; } -static UInt64 getLimitUIntValue(const ASTPtr & node, const Context & context, const std::string & expr) +static UInt64 getLimitUIntValue(const ASTPtr & node, ContextPtr context, const std::string & expr) { const auto & [field, type] = evaluateConstantExpression(node, context); @@ -762,7 +800,7 @@ static UInt64 getLimitUIntValue(const ASTPtr & node, const Context & context, co } -static std::pair getLimitLengthAndOffset(const ASTSelectQuery & query, const Context & context) +static std::pair getLimitLengthAndOffset(const ASTSelectQuery & query, ContextPtr context) { UInt64 length = 0; UInt64 offset = 0; @@ -779,7 +817,7 @@ static std::pair getLimitLengthAndOffset(const ASTSelectQuery & } -static UInt64 getLimitForSorting(const ASTSelectQuery & query, const Context & context) +static UInt64 getLimitForSorting(const ASTSelectQuery & query, ContextPtr context) { /// Partial sort can be done if there is LIMIT but no DISTINCT or LIMIT BY, neither ARRAY JOIN. if (!query.distinct && !query.limitBy() && !query.limit_with_ties && !query.arrayJoinExpressionList() && query.limitLength()) @@ -986,7 +1024,10 @@ void InterpreterSelectQuery::executeImpl(QueryPlan & query_plan, const BlockInpu * but there is an ORDER or LIMIT, * then we will perform the preliminary sorting and LIMIT on the remote server. 
*/ - if (!expressions.second_stage && !expressions.need_aggregate && !expressions.hasHaving()) + if (!expressions.second_stage + && !expressions.need_aggregate + && !expressions.hasHaving() + && !expressions.has_window) { if (expressions.has_order_by) executeOrder(query_plan, query_info.input_order_info); @@ -1077,25 +1118,36 @@ void InterpreterSelectQuery::executeImpl(QueryPlan & query_plan, const BlockInpu if (expressions.hasJoin()) { - JoinPtr join = expressions.join; - - QueryPlanStepPtr join_step = std::make_unique( - query_plan.getCurrentDataStream(), - expressions.join); - - join_step->setStepDescription("JOIN"); - query_plan.addStep(std::move(join_step)); - - if (expressions.join_has_delayed_stream) + if (expressions.join->isFilled()) { - const Block & join_result_sample = query_plan.getCurrentDataStream().header; - auto stream = std::make_shared(*join, join_result_sample, settings.max_block_size); - auto source = std::make_shared(std::move(stream)); - auto add_non_joined_rows_step = std::make_unique( - query_plan.getCurrentDataStream(), std::move(source)); + QueryPlanStepPtr filled_join_step = std::make_unique( + query_plan.getCurrentDataStream(), + expressions.join, + settings.max_block_size); - add_non_joined_rows_step->setStepDescription("Add non-joined rows after JOIN"); - query_plan.addStep(std::move(add_non_joined_rows_step)); + filled_join_step->setStepDescription("JOIN"); + query_plan.addStep(std::move(filled_join_step)); + } + else + { + auto joined_plan = query_analyzer->getJoinedPlan(); + + if (!joined_plan) + throw Exception(ErrorCodes::LOGICAL_ERROR, "There is no joined plan for query"); + + QueryPlanStepPtr join_step = std::make_unique( + query_plan.getCurrentDataStream(), + joined_plan->getCurrentDataStream(), + expressions.join, + settings.max_block_size); + + join_step->setStepDescription("JOIN"); + std::vector plans; + plans.emplace_back(std::make_unique(std::move(query_plan))); + plans.emplace_back(std::move(joined_plan)); + + query_plan = QueryPlan(); + query_plan.unitePlans(std::move(join_step), {std::move(plans)}); } } @@ -1108,12 +1160,41 @@ void InterpreterSelectQuery::executeImpl(QueryPlan & query_plan, const BlockInpu /// We need to reset input order info, so that executeOrder can't use it query_info.input_order_info.reset(); } + + // Now we must execute: + // 1) expressions before window functions, + // 2) window functions, + // 3) expressions after window functions, + // 4) preliminary distinct. + // This code decides which part we execute on shard (first_stage) + // and which part on initiator (second_stage). See also the counterpart + // code for "second_stage" that has to execute the rest. + if (expressions.need_aggregate) + { + // We have aggregation, so we can't execute any later-stage + // expressions on shards, neither "before window functions" nor + // "before ORDER BY". + } else { - executeExpression(query_plan, expressions.before_window, "Before window functions"); - executeWindow(query_plan); - executeExpression(query_plan, expressions.before_order_by, "Before ORDER BY"); - executeDistinct(query_plan, true, expressions.selected_columns, true); + // We don't have aggregation. + // Window functions must be executed on initiator (second_stage). + // ORDER BY and DISTINCT might depend on them, so if we have + // window functions, we can't execute ORDER BY and DISTINCT + // now, on shard (first_stage). 
+ if (query_analyzer->hasWindow()) + { + executeExpression(query_plan, expressions.before_window, "Before window functions"); + } + else + { + // We don't have window functions, so we can execute the + // expressions before ORDER BY and the preliminary DISTINCT + // now, on shards (first_stage). + assert(!expressions.before_window); + executeExpression(query_plan, expressions.before_order_by, "Before ORDER BY"); + executeDistinct(query_plan, true, expressions.selected_columns, true); + } } preliminary_sort(); @@ -1157,16 +1238,39 @@ void InterpreterSelectQuery::executeImpl(QueryPlan & query_plan, const BlockInpu } else if (expressions.hasHaving()) executeHaving(query_plan, expressions.before_having); + } + else if (query.group_by_with_totals || query.group_by_with_rollup || query.group_by_with_cube) + throw Exception("WITH TOTALS, ROLLUP or CUBE are not supported without aggregation", ErrorCodes::NOT_IMPLEMENTED); + // Now we must execute: + // 1) expressions before window functions, + // 2) window functions, + // 3) expressions after window functions, + // 4) preliminary distinct. + // Some of these were already executed at the shards (first_stage), + // see the counterpart code and comments there. + if (expressions.need_aggregate) + { executeExpression(query_plan, expressions.before_window, "Before window functions"); executeWindow(query_plan); executeExpression(query_plan, expressions.before_order_by, "Before ORDER BY"); executeDistinct(query_plan, true, expressions.selected_columns, true); - } - else if (query.group_by_with_totals || query.group_by_with_rollup || query.group_by_with_cube) - throw Exception("WITH TOTALS, ROLLUP or CUBE are not supported without aggregation", ErrorCodes::NOT_IMPLEMENTED); + else + { + if (query_analyzer->hasWindow()) + { + executeWindow(query_plan); + executeExpression(query_plan, expressions.before_order_by, "Before ORDER BY"); + executeDistinct(query_plan, true, expressions.selected_columns, true); + } + else + { + // Neither aggregation nor windows, all expressions before + // ORDER BY executed on shards. 
+ } + } if (expressions.has_order_by) { @@ -1193,7 +1297,7 @@ void InterpreterSelectQuery::executeImpl(QueryPlan & query_plan, const BlockInpu bool has_withfill = false; if (query.orderBy()) { - SortDescription order_descr = getSortDescription(query, *context); + SortDescription order_descr = getSortDescription(query, context); for (auto & desc : order_descr) if (desc.with_fill) { @@ -1385,7 +1489,7 @@ void InterpreterSelectQuery::executeFetchColumns(QueryProcessingStage::Enum proc temp_query_info.syntax_analyzer_result = syntax_analyzer_result; temp_query_info.sets = query_analyzer->getPreparedSets(); - num_rows = storage->totalRowsByPartitionPredicate(temp_query_info, *context); + num_rows = storage->totalRowsByPartitionPredicate(temp_query_info, context); } if (num_rows) @@ -1397,7 +1501,7 @@ void InterpreterSelectQuery::executeFetchColumns(QueryProcessingStage::Enum proc AggregateDataPtr place = state.data(); agg_count.create(place); - SCOPE_EXIT(agg_count.destroy(place)); + SCOPE_EXIT_MEMORY_SAFE(agg_count.destroy(place)); agg_count.set(place, *num_rows); @@ -1484,9 +1588,9 @@ void InterpreterSelectQuery::executeFetchColumns(QueryProcessingStage::Enum proc auto column_decl = storage_columns.get(column); column_expr = column_default->expression->clone(); // recursive visit for alias to alias - replaceAliasColumnsInQuery(column_expr, metadata_snapshot->getColumns(), syntax_analyzer_result->getArrayJoinSourceNameSet(), *context); + replaceAliasColumnsInQuery(column_expr, metadata_snapshot->getColumns(), syntax_analyzer_result->getArrayJoinSourceNameSet(), context); - column_expr = addTypeConversionToAST(std::move(column_expr), column_decl.type->getName(), metadata_snapshot->getColumns().getAll(), *context); + column_expr = addTypeConversionToAST(std::move(column_expr), column_decl.type->getName(), metadata_snapshot->getColumns().getAll(), context); column_expr = setAlias(column_expr, column); } else @@ -1531,8 +1635,8 @@ void InterpreterSelectQuery::executeFetchColumns(QueryProcessingStage::Enum proc = ext::map(required_columns_after_prewhere, [](const auto & it) { return it.name; }); } - auto syntax_result = TreeRewriter(*context).analyze(required_columns_all_expr, required_columns_after_prewhere, storage, metadata_snapshot); - alias_actions = ExpressionAnalyzer(required_columns_all_expr, syntax_result, *context).getActionsDAG(true); + auto syntax_result = TreeRewriter(context).analyze(required_columns_all_expr, required_columns_after_prewhere, storage, metadata_snapshot); + alias_actions = ExpressionAnalyzer(required_columns_all_expr, syntax_result, context).getActionsDAG(true); /// The set of required columns could be added as a result of adding an action to calculate ALIAS. required_columns = alias_actions->getRequiredColumns().getNames(); @@ -1556,9 +1660,9 @@ void InterpreterSelectQuery::executeFetchColumns(QueryProcessingStage::Enum proc prewhere_info->prewhere_actions->tryRestoreColumn(name); auto analyzed_result - = TreeRewriter(*context).analyze(required_columns_from_prewhere_expr, metadata_snapshot->getColumns().getAllPhysical()); + = TreeRewriter(context).analyze(required_columns_from_prewhere_expr, metadata_snapshot->getColumns().getAllPhysical()); prewhere_info->alias_actions - = ExpressionAnalyzer(required_columns_from_prewhere_expr, analyzed_result, *context).getActionsDAG(true, false); + = ExpressionAnalyzer(required_columns_from_prewhere_expr, analyzed_result, context).getActionsDAG(true, false); /// Add (physical?) columns required by alias actions. 
auto required_columns_from_alias = prewhere_info->alias_actions->getRequiredColumns(); @@ -1604,7 +1708,7 @@ void InterpreterSelectQuery::executeFetchColumns(QueryProcessingStage::Enum proc UInt64 max_block_size = settings.max_block_size; - auto [limit_length, limit_offset] = getLimitLengthAndOffset(query, *context); + auto [limit_length, limit_offset] = getLimitLengthAndOffset(query, context); /** Optimization - if not specified DISTINCT, WHERE, GROUP, HAVING, ORDER, LIMIT BY, WITH TIES but LIMIT is specified, and limit + offset < max_block_size, * then as the block size we will use limit + offset (not to read more from the table than requested), @@ -1620,6 +1724,7 @@ void InterpreterSelectQuery::executeFetchColumns(QueryProcessingStage::Enum proc && !query.limitBy() && query.limitLength() && !query_analyzer->hasAggregation() + && !query_analyzer->hasWindow() && limit_length <= std::numeric_limits::max() - limit_offset && limit_length + limit_offset < max_block_size) { @@ -1646,7 +1751,7 @@ void InterpreterSelectQuery::executeFetchColumns(QueryProcessingStage::Enum proc throw Exception("Subquery expected", ErrorCodes::LOGICAL_ERROR); interpreter_subquery = std::make_unique( - subquery, getSubqueryContext(*context), + subquery, getSubqueryContext(context), options.copy().subquery().noModify(), required_columns); if (query_analyzer->hasAggregation()) @@ -1660,7 +1765,7 @@ void InterpreterSelectQuery::executeFetchColumns(QueryProcessingStage::Enum proc { /// Table. if (max_streams == 0) - throw Exception("Logical error: zero number of streams requested", ErrorCodes::LOGICAL_ERROR); + max_streams = 1; /// If necessary, we request more sources than the number of threads - to distribute the work evenly over the threads. if (max_streams > 1 && !is_remote) @@ -1668,19 +1773,19 @@ void InterpreterSelectQuery::executeFetchColumns(QueryProcessingStage::Enum proc query_info.syntax_analyzer_result = syntax_analyzer_result; query_info.sets = query_analyzer->getPreparedSets(); + auto actions_settings = ExpressionActionsSettings::fromContext(context); if (prewhere_info) { query_info.prewhere_info = std::make_shared(); - - query_info.prewhere_info->prewhere_actions = std::make_shared(prewhere_info->prewhere_actions); + query_info.prewhere_info->prewhere_actions = std::make_shared(prewhere_info->prewhere_actions, actions_settings); if (prewhere_info->row_level_filter_actions) - query_info.prewhere_info->row_level_filter = std::make_shared(prewhere_info->row_level_filter_actions); + query_info.prewhere_info->row_level_filter = std::make_shared(prewhere_info->row_level_filter_actions, actions_settings); if (prewhere_info->alias_actions) - query_info.prewhere_info->alias_actions = std::make_shared(prewhere_info->alias_actions); + query_info.prewhere_info->alias_actions = std::make_shared(prewhere_info->alias_actions, actions_settings); if (prewhere_info->remove_columns_actions) - query_info.prewhere_info->remove_columns_actions = std::make_shared(prewhere_info->remove_columns_actions); + query_info.prewhere_info->remove_columns_actions = std::make_shared(prewhere_info->remove_columns_actions, actions_settings); query_info.prewhere_info->prewhere_column_name = prewhere_info->prewhere_column_name; query_info.prewhere_info->remove_prewhere_column = prewhere_info->remove_prewhere_column; @@ -1695,7 +1800,7 @@ void InterpreterSelectQuery::executeFetchColumns(QueryProcessingStage::Enum proc if (analysis_result.optimize_read_in_order) query_info.order_optimizer = std::make_shared( 
analysis_result.order_by_elements_actions, - getSortDescription(query, *context), + getSortDescription(query, context), query_info.syntax_analyzer_result); else query_info.order_optimizer = std::make_shared( @@ -1703,7 +1808,7 @@ void InterpreterSelectQuery::executeFetchColumns(QueryProcessingStage::Enum proc getSortDescriptionFromGroupBy(query), query_info.syntax_analyzer_result); - query_info.input_order_info = query_info.order_optimizer->getInputOrder(metadata_snapshot, *context); + query_info.input_order_info = query_info.order_optimizer->getInputOrder(metadata_snapshot, context); } StreamLocalLimits limits; @@ -1723,12 +1828,12 @@ void InterpreterSelectQuery::executeFetchColumns(QueryProcessingStage::Enum proc quota = context->getQuota(); storage->read(query_plan, required_columns, metadata_snapshot, - query_info, *context, processing_stage, max_block_size, max_streams); + query_info, context, processing_stage, max_block_size, max_streams); if (context->hasQueryContext() && !options.is_internal) { auto local_storage_id = storage->getStorageID(); - context->getQueryContext().addQueryAccessInfo( + context->getQueryContext()->addQueryAccessInfo( backQuoteIfNeed(local_storage_id.getDatabaseName()), local_storage_id.getFullTableName(), required_columns); } @@ -2038,7 +2143,13 @@ void InterpreterSelectQuery::executeWindow(QueryPlan & query_plan) for (size_t i = 0; i < windows_sorted.size(); ++i) { const auto & w = *windows_sorted[i]; - if (i == 0 || !sortIsPrefix(w, *windows_sorted[i - 1])) + + // We don't need to sort again if the input from previous window already + // has suitable sorting. Also don't create sort steps when there are no + // columns to sort by, because the sort nodes are confused by this. It + // happens in case of `over ()`. + if (!w.full_sort_description.empty() + && (i == 0 || !sortIsPrefix(w, *windows_sorted[i - 1]))) { auto partial_sorting = std::make_unique( query_plan.getCurrentDataStream(), @@ -2104,8 +2215,8 @@ void InterpreterSelectQuery::executeOrderOptimized(QueryPlan & query_plan, Input void InterpreterSelectQuery::executeOrder(QueryPlan & query_plan, InputOrderInfoPtr input_sorting_info) { auto & query = getSelectQuery(); - SortDescription output_order_descr = getSortDescription(query, *context); - UInt64 limit = getLimitForSorting(query, *context); + SortDescription output_order_descr = getSortDescription(query, context); + UInt64 limit = getLimitForSorting(query, context); if (input_sorting_info) { @@ -2153,8 +2264,8 @@ void InterpreterSelectQuery::executeOrder(QueryPlan & query_plan, InputOrderInfo void InterpreterSelectQuery::executeMergeSorted(QueryPlan & query_plan, const std::string & description) { auto & query = getSelectQuery(); - SortDescription order_descr = getSortDescription(query, *context); - UInt64 limit = getLimitForSorting(query, *context); + SortDescription order_descr = getSortDescription(query, context); + UInt64 limit = getLimitForSorting(query, context); executeMergeSorted(query_plan, order_descr, limit, description); } @@ -2188,7 +2299,7 @@ void InterpreterSelectQuery::executeDistinct(QueryPlan & query_plan, bool before { const Settings & settings = context->getSettingsRef(); - auto [limit_length, limit_offset] = getLimitLengthAndOffset(query, *context); + auto [limit_length, limit_offset] = getLimitLengthAndOffset(query, context); UInt64 limit_for_distinct = 0; /// If after this stage of DISTINCT ORDER BY is not executed, @@ -2217,7 +2328,7 @@ void InterpreterSelectQuery::executePreLimit(QueryPlan & query_plan, bool do_not /// If 
there is LIMIT if (query.limitLength()) { - auto [limit_length, limit_offset] = getLimitLengthAndOffset(query, *context); + auto [limit_length, limit_offset] = getLimitLengthAndOffset(query, context); if (do_not_skip_offset) { @@ -2245,8 +2356,8 @@ void InterpreterSelectQuery::executeLimitBy(QueryPlan & query_plan) for (const auto & elem : query.limitBy()->children) columns.emplace_back(elem->getColumnName()); - UInt64 length = getLimitUIntValue(query.limitByLength(), *context, "LIMIT"); - UInt64 offset = (query.limitByOffset() ? getLimitUIntValue(query.limitByOffset(), *context, "OFFSET") : 0); + UInt64 length = getLimitUIntValue(query.limitByLength(), context, "LIMIT"); + UInt64 offset = (query.limitByOffset() ? getLimitUIntValue(query.limitByOffset(), context, "OFFSET") : 0); auto limit_by = std::make_unique(query_plan.getCurrentDataStream(), length, offset, columns); query_plan.addStep(std::move(limit_by)); @@ -2257,7 +2368,7 @@ void InterpreterSelectQuery::executeWithFill(QueryPlan & query_plan) auto & query = getSelectQuery(); if (query.orderBy()) { - SortDescription order_descr = getSortDescription(query, *context); + SortDescription order_descr = getSortDescription(query, context); SortDescription fill_descr; for (auto & desc : order_descr) { @@ -2299,14 +2410,14 @@ void InterpreterSelectQuery::executeLimit(QueryPlan & query_plan) UInt64 limit_length; UInt64 limit_offset; - std::tie(limit_length, limit_offset) = getLimitLengthAndOffset(query, *context); + std::tie(limit_length, limit_offset) = getLimitLengthAndOffset(query, context); SortDescription order_descr; if (query.limit_with_ties) { if (!query.orderBy()) throw Exception("LIMIT WITH TIES without ORDER BY", ErrorCodes::LOGICAL_ERROR); - order_descr = getSortDescription(query, *context); + order_descr = getSortDescription(query, context); } auto limit = std::make_unique( @@ -2329,7 +2440,7 @@ void InterpreterSelectQuery::executeOffset(QueryPlan & query_plan) { UInt64 limit_length; UInt64 limit_offset; - std::tie(limit_length, limit_offset) = getLimitLengthAndOffset(query, *context); + std::tie(limit_length, limit_offset) = getLimitLengthAndOffset(query, context); auto offsets_step = std::make_unique(query_plan.getCurrentDataStream(), limit_offset); query_plan.addStep(std::move(offsets_step)); @@ -2353,7 +2464,7 @@ void InterpreterSelectQuery::executeSubqueriesInSetsAndJoins(QueryPlan & query_p const Settings & settings = context->getSettingsRef(); SizeLimits limits(settings.max_rows_to_transfer, settings.max_bytes_to_transfer, settings.transfer_overflow_mode); - addCreatingSetsStep(query_plan, std::move(subqueries_for_sets), limits, *context); + addCreatingSetsStep(query_plan, std::move(subqueries_for_sets), limits, context); } @@ -2367,7 +2478,7 @@ void InterpreterSelectQuery::initSettings() { auto & query = getSelectQuery(); if (query.settings()) - InterpreterSetQuery(query.settings(), *context).executeForCurrentContext(); + InterpreterSetQuery(query.settings(), context).executeForCurrentContext(); } } diff --git a/src/Interpreters/InterpreterSelectQuery.h b/src/Interpreters/InterpreterSelectQuery.h index 13f4755d431..66b3fc65eff 100644 --- a/src/Interpreters/InterpreterSelectQuery.h +++ b/src/Interpreters/InterpreterSelectQuery.h @@ -46,28 +46,28 @@ public: InterpreterSelectQuery( const ASTPtr & query_ptr_, - const Context & context_, + ContextPtr context_, const SelectQueryOptions &, const Names & required_result_column_names_ = Names{}); /// Read data not from the table specified in the query, but from the prepared source 
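
In `executePreLimit`, `do_not_skip_offset` means the first `offset` rows must survive this early limit and be discarded only by the final limit step. A hedged sketch of the widening this implies, assuming the same overflow guard as the block-size heuristic:

```cpp
#include <cstdint>
#include <limits>

using UInt64 = uint64_t;

// Sketch: a LIMIT applied before streams are merged must not yet drop the
// first `offset` rows, so the cap is widened to length + offset and the
// offset itself is deferred. Returns false when widening would overflow.
bool widenPreLimit(UInt64 & limit_length, UInt64 & limit_offset)
{
    if (limit_length > std::numeric_limits<UInt64>::max() - limit_offset)
        return false; // skip the pre-limit rather than overflow
    limit_length += limit_offset;
    limit_offset = 0;
    return true;
}
```
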
`input`. InterpreterSelectQuery( const ASTPtr & query_ptr_, - const Context & context_, + ContextPtr context_, const BlockInputStreamPtr & input_, const SelectQueryOptions & = {}); /// Read data not from the table specified in the query, but from the prepared pipe `input`. InterpreterSelectQuery( const ASTPtr & query_ptr_, - const Context & context_, + ContextPtr context_, Pipe input_pipe_, const SelectQueryOptions & = {}); /// Read data not from the table specified in the query, but from the specified `storage_`. InterpreterSelectQuery( const ASTPtr & query_ptr_, - const Context & context_, + ContextPtr context_, const StoragePtr & storage_, const StorageMetadataPtr & metadata_snapshot_ = nullptr, const SelectQueryOptions & = {}); @@ -94,7 +94,7 @@ public: private: InterpreterSelectQuery( const ASTPtr & query_ptr_, - const Context & context_, + ContextPtr context_, const BlockInputStreamPtr & input_, std::optional input_pipe, const StoragePtr & storage_, diff --git a/src/Interpreters/InterpreterSelectWithUnionQuery.cpp b/src/Interpreters/InterpreterSelectWithUnionQuery.cpp index 5f6f01b4401..3cf4a905d38 100644 --- a/src/Interpreters/InterpreterSelectWithUnionQuery.cpp +++ b/src/Interpreters/InterpreterSelectWithUnionQuery.cpp @@ -6,11 +6,13 @@ #include #include #include +#include #include #include #include #include #include +#include #include #include @@ -27,7 +29,7 @@ namespace ErrorCodes } InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery( - const ASTPtr & query_ptr_, const Context & context_, const SelectQueryOptions & options_, const Names & required_result_column_names) + const ASTPtr & query_ptr_, ContextPtr context_, const SelectQueryOptions & options_, const Names & required_result_column_names) : IInterpreterUnionOrSelectQuery(query_ptr_, context_, options_) { ASTSelectWithUnionQuery * ast = query_ptr->as(); @@ -195,26 +197,26 @@ Block InterpreterSelectWithUnionQuery::getCommonHeaderForUnion(const Blocks & he Block InterpreterSelectWithUnionQuery::getCurrentChildResultHeader(const ASTPtr & ast_ptr_, const Names & required_result_column_names) { if (ast_ptr_->as()) - return InterpreterSelectWithUnionQuery(ast_ptr_, *context, options.copy().analyze().noModify(), required_result_column_names) + return InterpreterSelectWithUnionQuery(ast_ptr_, context, options.copy().analyze().noModify(), required_result_column_names) .getSampleBlock(); else - return InterpreterSelectQuery(ast_ptr_, *context, options.copy().analyze().noModify()).getSampleBlock(); + return InterpreterSelectQuery(ast_ptr_, context, options.copy().analyze().noModify()).getSampleBlock(); } std::unique_ptr InterpreterSelectWithUnionQuery::buildCurrentChildInterpreter(const ASTPtr & ast_ptr_, const Names & current_required_result_column_names) { if (ast_ptr_->as()) - return std::make_unique(ast_ptr_, *context, options, current_required_result_column_names); + return std::make_unique(ast_ptr_, context, options, current_required_result_column_names); else - return std::make_unique(ast_ptr_, *context, options, current_required_result_column_names); + return std::make_unique(ast_ptr_, context, options, current_required_result_column_names); } InterpreterSelectWithUnionQuery::~InterpreterSelectWithUnionQuery() = default; -Block InterpreterSelectWithUnionQuery::getSampleBlock(const ASTPtr & query_ptr_, const Context & context_, bool is_subquery) +Block InterpreterSelectWithUnionQuery::getSampleBlock(const ASTPtr & query_ptr_, ContextPtr context_, bool is_subquery) { - auto & cache = context_.getSampleBlockCache(); 
+ auto & cache = context_->getSampleBlockCache(); /// Using query string because query_ptr changes for every internal SELECT auto key = queryToString(query_ptr_); if (cache.find(key) != cache.end()) @@ -250,11 +252,23 @@ void InterpreterSelectWithUnionQuery::buildQueryPlan(QueryPlan & query_plan) { plans[i] = std::make_unique(); nested_interpreters[i]->buildQueryPlan(*plans[i]); + + if (!blocksHaveEqualStructure(plans[i]->getCurrentDataStream().header, result_header)) + { + auto actions_dag = ActionsDAG::makeConvertingActions( + plans[i]->getCurrentDataStream().header.getColumnsWithTypeAndName(), + result_header.getColumnsWithTypeAndName(), + ActionsDAG::MatchColumnsMode::Position); + auto converting_step = std::make_unique(plans[i]->getCurrentDataStream(), std::move(actions_dag)); + converting_step->setStepDescription("Conversion before UNION"); + plans[i]->addStep(std::move(converting_step)); + } + data_streams[i] = plans[i]->getCurrentDataStream(); } auto max_threads = context->getSettingsRef().max_threads; - auto union_step = std::make_unique(std::move(data_streams), result_header, max_threads); + auto union_step = std::make_unique(std::move(data_streams), max_threads); query_plan.unitePlans(std::move(union_step), std::move(plans)); @@ -296,7 +310,9 @@ BlockIO InterpreterSelectWithUnionQuery::execute() QueryPlan query_plan; buildQueryPlan(query_plan); - auto pipeline = query_plan.buildQueryPipeline(QueryPlanOptimizationSettings(context->getSettingsRef())); + auto pipeline = query_plan.buildQueryPipeline( + QueryPlanOptimizationSettings::fromContext(context), + BuildQueryPipelineSettings::fromContext(context)); res.pipeline = std::move(*pipeline); res.pipeline.addInterpreterContext(context); diff --git a/src/Interpreters/InterpreterSelectWithUnionQuery.h b/src/Interpreters/InterpreterSelectWithUnionQuery.h index f4062b2005e..bd18b8e7907 100644 --- a/src/Interpreters/InterpreterSelectWithUnionQuery.h +++ b/src/Interpreters/InterpreterSelectWithUnionQuery.h @@ -6,7 +6,6 @@ namespace DB { -class Context; class InterpreterSelectQuery; class QueryPlan; @@ -19,7 +18,7 @@ public: InterpreterSelectWithUnionQuery( const ASTPtr & query_ptr_, - const Context & context_, + ContextPtr context_, const SelectQueryOptions &, const Names & required_result_column_names = {}); @@ -35,7 +34,7 @@ public: static Block getSampleBlock( const ASTPtr & query_ptr_, - const Context & context_, + ContextPtr context_, bool is_subquery = false); virtual void ignoreWithTotals() override; diff --git a/src/Interpreters/InterpreterSetQuery.cpp b/src/Interpreters/InterpreterSetQuery.cpp index f92e9638822..1c6a4236bf6 100644 --- a/src/Interpreters/InterpreterSetQuery.cpp +++ b/src/Interpreters/InterpreterSetQuery.cpp @@ -9,8 +9,8 @@ namespace DB BlockIO InterpreterSetQuery::execute() { const auto & ast = query_ptr->as(); - context.checkSettingsConstraints(ast.changes); - context.getSessionContext().applySettingsChanges(ast.changes); + getContext()->checkSettingsConstraints(ast.changes); + getContext()->getSessionContext()->applySettingsChanges(ast.changes); return {}; } @@ -18,8 +18,8 @@ BlockIO InterpreterSetQuery::execute() void InterpreterSetQuery::executeForCurrentContext() { const auto & ast = query_ptr->as(); - context.checkSettingsConstraints(ast.changes); - context.applySettingsChanges(ast.changes); + getContext()->checkSettingsConstraints(ast.changes); + getContext()->applySettingsChanges(ast.changes); } } diff --git a/src/Interpreters/InterpreterSetQuery.h b/src/Interpreters/InterpreterSetQuery.h index 
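
`buildQueryPlan` now inserts a conversion whenever a child plan's header differs from the common UNION header; `MatchColumnsMode::Position` pairs columns by position, so children may disagree on names. A toy illustration with a hypothetical `ColumnDesc` type (the real code builds an `ActionsDAG` instead):

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

struct ColumnDesc { std::string name; std::string type; };

// For each output column, decide whether the child's column at the same
// position needs a cast (position-wise matching ignores name mismatches).
std::vector<bool> castsNeeded(const std::vector<ColumnDesc> & child,
                              const std::vector<ColumnDesc> & result)
{
    assert(child.size() == result.size()); // UNION children have equal arity
    std::vector<bool> need(result.size());
    for (std::size_t i = 0; i < result.size(); ++i)
        need[i] = (child[i].type != result[i].type);
    return need;
}
```
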
6e8e8b8b733..31519be6f29 100644 --- a/src/Interpreters/InterpreterSetQuery.h +++ b/src/Interpreters/InterpreterSetQuery.h @@ -7,17 +7,14 @@ namespace DB { -class Context; class ASTSetQuery; - /** Change one or several settings for the session or just for the current context. */ -class InterpreterSetQuery : public IInterpreter +class InterpreterSetQuery : public IInterpreter, WithContext { public: - InterpreterSetQuery(const ASTPtr & query_ptr_, Context & context_) - : query_ptr(query_ptr_), context(context_) {} + InterpreterSetQuery(const ASTPtr & query_ptr_, ContextPtr context_) : WithContext(context_), query_ptr(query_ptr_) {} /** Usual SET query. Set setting for the session. */ @@ -30,8 +27,6 @@ public: private: ASTPtr query_ptr; - Context & context; }; - } diff --git a/src/Interpreters/InterpreterSetRoleQuery.cpp b/src/Interpreters/InterpreterSetRoleQuery.cpp index f955c881b2e..057ccd447ef 100644 --- a/src/Interpreters/InterpreterSetRoleQuery.cpp +++ b/src/Interpreters/InterpreterSetRoleQuery.cpp @@ -28,44 +28,42 @@ BlockIO InterpreterSetRoleQuery::execute() void InterpreterSetRoleQuery::setRole(const ASTSetRoleQuery & query) { - auto & access_control = context.getAccessControlManager(); - auto & session_context = context.getSessionContext(); - auto user = session_context.getUser(); + auto & access_control = getContext()->getAccessControlManager(); + auto session_context = getContext()->getSessionContext(); + auto user = session_context->getUser(); if (query.kind == ASTSetRoleQuery::Kind::SET_ROLE_DEFAULT) { - session_context.setCurrentRolesDefault(); + session_context->setCurrentRolesDefault(); } else { RolesOrUsersSet roles_from_query{*query.roles, access_control}; - boost::container::flat_set new_current_roles; + std::vector new_current_roles; if (roles_from_query.all) { - for (const auto & id : user->granted_roles.roles) - if (roles_from_query.match(id)) - new_current_roles.emplace(id); + new_current_roles = user->granted_roles.findGranted(roles_from_query); } else { for (const auto & id : roles_from_query.getMatchingIDs()) { - if (!user->granted_roles.roles.count(id)) + if (!user->granted_roles.isGranted(id)) throw Exception("Role should be granted to set current", ErrorCodes::SET_NON_GRANTED_ROLE); - new_current_roles.emplace(id); + new_current_roles.emplace_back(id); } } - session_context.setCurrentRoles(new_current_roles); + session_context->setCurrentRoles(new_current_roles); } } void InterpreterSetRoleQuery::setDefaultRole(const ASTSetRoleQuery & query) { - context.checkAccess(AccessType::ALTER_USER); + getContext()->checkAccess(AccessType::ALTER_USER); - auto & access_control = context.getAccessControlManager(); - std::vector to_users = RolesOrUsersSet{*query.to_users, access_control, context.getUserID()}.getMatchingIDs(access_control); + auto & access_control = getContext()->getAccessControlManager(); + std::vector to_users = RolesOrUsersSet{*query.to_users, access_control, getContext()->getUserID()}.getMatchingIDs(access_control); RolesOrUsersSet roles_from_query{*query.roles, access_control}; auto update_func = [&](const AccessEntityPtr & entity) -> AccessEntityPtr @@ -85,7 +83,7 @@ void InterpreterSetRoleQuery::updateUserSetDefaultRoles(User & user, const Roles { for (const auto & id : roles_from_query.getMatchingIDs()) { - if (!user.granted_roles.roles.count(id)) + if (!user.granted_roles.isGranted(id)) throw Exception("Role should be granted to set default", ErrorCodes::SET_NON_GRANTED_ROLE); } } diff --git a/src/Interpreters/InterpreterSetRoleQuery.h 
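
The recurring change across these interpreter headers is the switch from a stored `Context &` member to the `WithContext` mixin plus `ContextPtr`. A rough sketch of the pattern (the exact `ContextPtr` alias here is an assumption):

```cpp
#include <memory>
#include <utility>

class Context;
using ContextPtr = std::shared_ptr<const Context>; // assumed alias

// Instead of every interpreter holding a raw reference (and hoping it
// outlives the interpreter), the context is shared and fetched on demand.
class WithContext
{
public:
    WithContext() = default;
    explicit WithContext(ContextPtr context_) : context(std::move(context_)) {}

    ContextPtr getContext() const { return context; }

private:
    ContextPtr context;
};
```
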
b/src/Interpreters/InterpreterSetRoleQuery.h index 0919b0f23f9..70ba3c381ab 100644 --- a/src/Interpreters/InterpreterSetRoleQuery.h +++ b/src/Interpreters/InterpreterSetRoleQuery.h @@ -7,16 +7,14 @@ namespace DB { -class Context; class ASTSetRoleQuery; struct RolesOrUsersSet; struct User; - -class InterpreterSetRoleQuery : public IInterpreter +class InterpreterSetRoleQuery : public IInterpreter, WithContext { public: - InterpreterSetRoleQuery(const ASTPtr & query_ptr_, Context & context_) : query_ptr(query_ptr_), context(context_) {} + InterpreterSetRoleQuery(const ASTPtr & query_ptr_, ContextPtr context_) : WithContext(context_), query_ptr(query_ptr_) {} BlockIO execute() override; @@ -27,6 +25,6 @@ private: void setDefaultRole(const ASTSetRoleQuery & query); ASTPtr query_ptr; - Context & context; }; + } diff --git a/src/Interpreters/InterpreterShowAccessEntitiesQuery.cpp b/src/Interpreters/InterpreterShowAccessEntitiesQuery.cpp index 009b9c580d3..afd685ea983 100644 --- a/src/Interpreters/InterpreterShowAccessEntitiesQuery.cpp +++ b/src/Interpreters/InterpreterShowAccessEntitiesQuery.cpp @@ -17,22 +17,22 @@ namespace ErrorCodes using EntityType = IAccessEntity::Type; -InterpreterShowAccessEntitiesQuery::InterpreterShowAccessEntitiesQuery(const ASTPtr & query_ptr_, Context & context_) - : query_ptr(query_ptr_), context(context_) +InterpreterShowAccessEntitiesQuery::InterpreterShowAccessEntitiesQuery(const ASTPtr & query_ptr_, ContextPtr context_) + : WithContext(context_), query_ptr(query_ptr_) { } BlockIO InterpreterShowAccessEntitiesQuery::execute() { - return executeQuery(getRewrittenQuery(), context, true); + return executeQuery(getRewrittenQuery(), getContext(), true); } String InterpreterShowAccessEntitiesQuery::getRewrittenQuery() const { auto & query = query_ptr->as(); - query.replaceEmptyDatabaseWithCurrent(context.getCurrentDatabase()); + query.replaceEmptyDatabase(getContext()->getCurrentDatabase()); String origin; String expr = "*"; String filter, order; diff --git a/src/Interpreters/InterpreterShowAccessEntitiesQuery.h b/src/Interpreters/InterpreterShowAccessEntitiesQuery.h index 8fcd70919ba..7224f0d593b 100644 --- a/src/Interpreters/InterpreterShowAccessEntitiesQuery.h +++ b/src/Interpreters/InterpreterShowAccessEntitiesQuery.h @@ -6,12 +6,11 @@ namespace DB { -class Context; -class InterpreterShowAccessEntitiesQuery : public IInterpreter +class InterpreterShowAccessEntitiesQuery : public IInterpreter, WithContext { public: - InterpreterShowAccessEntitiesQuery(const ASTPtr & query_ptr_, Context & context_); + InterpreterShowAccessEntitiesQuery(const ASTPtr & query_ptr_, ContextPtr context_); BlockIO execute() override; @@ -22,7 +21,6 @@ private: String getRewrittenQuery() const; ASTPtr query_ptr; - Context & context; }; } diff --git a/src/Interpreters/InterpreterShowAccessQuery.cpp b/src/Interpreters/InterpreterShowAccessQuery.cpp index a1f8b5ed656..d2e21b71ae5 100644 --- a/src/Interpreters/InterpreterShowAccessQuery.cpp +++ b/src/Interpreters/InterpreterShowAccessQuery.cpp @@ -49,8 +49,8 @@ BlockInputStreamPtr InterpreterShowAccessQuery::executeImpl() const std::vector InterpreterShowAccessQuery::getEntities() const { - const auto & access_control = context.getAccessControlManager(); - context.checkAccess(AccessType::SHOW_ACCESS); + const auto & access_control = getContext()->getAccessControlManager(); + getContext()->checkAccess(AccessType::SHOW_ACCESS); std::vector entities; for (auto type : ext::range(EntityType::MAX)) @@ -71,7 +71,7 @@ std::vector 
InterpreterShowAccessQuery::getEntities() const ASTs InterpreterShowAccessQuery::getCreateAndGrantQueries() const { auto entities = getEntities(); - const auto & access_control = context.getAccessControlManager(); + const auto & access_control = getContext()->getAccessControlManager(); ASTs create_queries, grant_queries; for (const auto & entity : entities) diff --git a/src/Interpreters/InterpreterShowAccessQuery.h b/src/Interpreters/InterpreterShowAccessQuery.h index eb548c56241..d08d8962abc 100644 --- a/src/Interpreters/InterpreterShowAccessQuery.h +++ b/src/Interpreters/InterpreterShowAccessQuery.h @@ -6,17 +6,16 @@ namespace DB { -class Context; + struct IAccessEntity; using AccessEntityPtr = std::shared_ptr; /** Return all queries for creating access entities and grants. */ -class InterpreterShowAccessQuery : public IInterpreter +class InterpreterShowAccessQuery : public IInterpreter, WithContext { public: - InterpreterShowAccessQuery(const ASTPtr & query_ptr_, Context & context_) - : query_ptr(query_ptr_), context(context_) {} + InterpreterShowAccessQuery(const ASTPtr & query_ptr_, ContextPtr context_) : WithContext(context_), query_ptr(query_ptr_) {} BlockIO execute() override; @@ -29,8 +28,6 @@ private: std::vector getEntities() const; ASTPtr query_ptr; - Context & context; }; - } diff --git a/src/Interpreters/InterpreterShowCreateAccessEntityQuery.cpp b/src/Interpreters/InterpreterShowCreateAccessEntityQuery.cpp index 3135b0cfdf2..33e29d2b7bd 100644 --- a/src/Interpreters/InterpreterShowCreateAccessEntityQuery.cpp +++ b/src/Interpreters/InterpreterShowCreateAccessEntityQuery.cpp @@ -73,6 +73,15 @@ namespace query->settings = user.settings.toASTWithNames(*manager); } + if (user.grantees != RolesOrUsersSet::AllTag{}) + { + if (attach_mode) + query->grantees = user.grantees.toAST(); + else + query->grantees = user.grantees.toASTWithNames(*manager); + query->grantees->use_keyword_any = true; + } + return query; } @@ -216,8 +225,8 @@ namespace } -InterpreterShowCreateAccessEntityQuery::InterpreterShowCreateAccessEntityQuery(const ASTPtr & query_ptr_, const Context & context_) - : query_ptr(query_ptr_), context(context_) +InterpreterShowCreateAccessEntityQuery::InterpreterShowCreateAccessEntityQuery(const ASTPtr & query_ptr_, ContextPtr context_) + : WithContext(context_), query_ptr(query_ptr_) { } @@ -261,9 +270,9 @@ BlockInputStreamPtr InterpreterShowCreateAccessEntityQuery::executeImpl() std::vector InterpreterShowCreateAccessEntityQuery::getEntities() const { auto & show_query = query_ptr->as(); - const auto & access_control = context.getAccessControlManager(); - context.checkAccess(getRequiredAccess()); - show_query.replaceEmptyDatabaseWithCurrent(context.getCurrentDatabase()); + const auto & access_control = getContext()->getAccessControlManager(); + getContext()->checkAccess(getRequiredAccess()); + show_query.replaceEmptyDatabase(getContext()->getCurrentDatabase()); std::vector entities; if (show_query.all) @@ -277,12 +286,12 @@ std::vector InterpreterShowCreateAccessEntityQuery::getEntities } else if (show_query.current_user) { - if (auto user = context.getUser()) + if (auto user = getContext()->getUser()) entities.push_back(user); } else if (show_query.current_quota) { - auto usage = context.getQuotaUsage(); + auto usage = getContext()->getQuotaUsage(); if (usage) entities.push_back(access_control.read(usage->quota_id)); } @@ -332,7 +341,7 @@ ASTs InterpreterShowCreateAccessEntityQuery::getCreateQueries() const auto entities = getEntities(); ASTs list; - const auto & 
access_control = context.getAccessControlManager(); + const auto & access_control = getContext()->getAccessControlManager(); for (const auto & entity : entities) list.push_back(getCreateQuery(*entity, access_control)); diff --git a/src/Interpreters/InterpreterShowCreateAccessEntityQuery.h b/src/Interpreters/InterpreterShowCreateAccessEntityQuery.h index 5bacbd42988..6d026d2b81b 100644 --- a/src/Interpreters/InterpreterShowCreateAccessEntityQuery.h +++ b/src/Interpreters/InterpreterShowCreateAccessEntityQuery.h @@ -16,10 +16,10 @@ using AccessEntityPtr = std::shared_ptr; /** Returns a single item containing a statement which could be used to create a specified role. */ -class InterpreterShowCreateAccessEntityQuery : public IInterpreter +class InterpreterShowCreateAccessEntityQuery : public IInterpreter, WithContext { public: - InterpreterShowCreateAccessEntityQuery(const ASTPtr & query_ptr_, const Context & context_); + InterpreterShowCreateAccessEntityQuery(const ASTPtr & query_ptr_, ContextPtr context_); BlockIO execute() override; @@ -36,7 +36,6 @@ private: AccessRightsElements getRequiredAccess() const; ASTPtr query_ptr; - const Context & context; }; diff --git a/src/Interpreters/InterpreterShowCreateQuery.cpp b/src/Interpreters/InterpreterShowCreateQuery.cpp index 10c8339c135..967d3e7f570 100644 --- a/src/Interpreters/InterpreterShowCreateQuery.cpp +++ b/src/Interpreters/InterpreterShowCreateQuery.cpp @@ -45,43 +45,52 @@ BlockInputStreamPtr InterpreterShowCreateQuery::executeImpl() ASTPtr create_query; ASTQueryWithTableAndOutput * show_query; if ((show_query = query_ptr->as()) || - (show_query = query_ptr->as())) + (show_query = query_ptr->as()) || + (show_query = query_ptr->as())) { auto resolve_table_type = show_query->temporary ? Context::ResolveExternal : Context::ResolveOrdinary; - auto table_id = context.resolveStorageID(*show_query, resolve_table_type); - context.checkAccess(AccessType::SHOW_COLUMNS, table_id); - create_query = DatabaseCatalog::instance().getDatabase(table_id.database_name)->getCreateTableQuery(table_id.table_name, context); + auto table_id = getContext()->resolveStorageID(*show_query, resolve_table_type); + + bool is_dictionary = static_cast(query_ptr->as()); + + if (is_dictionary) + getContext()->checkAccess(AccessType::SHOW_DICTIONARIES, table_id); + else + getContext()->checkAccess(AccessType::SHOW_COLUMNS, table_id); + + create_query = DatabaseCatalog::instance().getDatabase(table_id.database_name)->getCreateTableQuery(table_id.table_name, getContext()); + + auto & ast_create_query = create_query->as(); if (query_ptr->as()) { - auto & ast_create_query = create_query->as(); if (!ast_create_query.isView()) - throw Exception(backQuote(ast_create_query.database) + "." 
+ backQuote(ast_create_query.table) + " is not a VIEW", ErrorCodes::BAD_ARGUMENTS); + throw Exception(ErrorCodes::BAD_ARGUMENTS, "{}.{} is not a VIEW", + backQuote(ast_create_query.database), backQuote(ast_create_query.table)); + } + else if (is_dictionary) + { + if (!ast_create_query.is_dictionary) + throw Exception(ErrorCodes::BAD_ARGUMENTS, "{}.{} is not a DICTIONARY", + backQuote(ast_create_query.database), backQuote(ast_create_query.table)); } } else if ((show_query = query_ptr->as())) { if (show_query->temporary) throw Exception("Temporary databases are not possible.", ErrorCodes::SYNTAX_ERROR); - show_query->database = context.resolveDatabase(show_query->database); - context.checkAccess(AccessType::SHOW_DATABASES, show_query->database); + show_query->database = getContext()->resolveDatabase(show_query->database); + getContext()->checkAccess(AccessType::SHOW_DATABASES, show_query->database); create_query = DatabaseCatalog::instance().getDatabase(show_query->database)->getCreateDatabaseQuery(); } - else if ((show_query = query_ptr->as())) - { - if (show_query->temporary) - throw Exception("Temporary dictionaries are not possible.", ErrorCodes::SYNTAX_ERROR); - show_query->database = context.resolveDatabase(show_query->database); - context.checkAccess(AccessType::SHOW_DICTIONARIES, show_query->database, show_query->table); - create_query = DatabaseCatalog::instance().getDatabase(show_query->database)->getCreateDictionaryQuery(show_query->table); - } if (!create_query) throw Exception("Unable to show the create query of " + show_query->table + ". Maybe it was created by the system.", ErrorCodes::THERE_IS_NO_QUERY); - if (!context.getSettingsRef().show_table_uuid_in_table_create_query_if_not_nil) + if (!getContext()->getSettingsRef().show_table_uuid_in_table_create_query_if_not_nil) { auto & create = create_query->as(); create.uuid = UUIDHelpers::Nil; + create.to_inner_uuid = UUIDHelpers::Nil; } WriteBufferFromOwnString buf; diff --git a/src/Interpreters/InterpreterShowCreateQuery.h b/src/Interpreters/InterpreterShowCreateQuery.h index 5ac98509b23..53f587d3e7d 100644 --- a/src/Interpreters/InterpreterShowCreateQuery.h +++ b/src/Interpreters/InterpreterShowCreateQuery.h @@ -7,16 +7,12 @@ namespace DB { -class Context; - - /** Return single row with single column "statement" of type String with text of query to CREATE specified table. 
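
`SHOW CREATE TABLE`, `VIEW` and `DICTIONARY` now share one resolution path, with only the validation differing per statement kind. A toy model of that dispatch (hypothetical types; the real code throws with `ErrorCodes::BAD_ARGUMENTS`):

```cpp
#include <stdexcept>
#include <string>

enum class ShowKind { Table, View, Dictionary };

struct CreateInfo { bool is_view = false; bool is_dictionary = false; std::string name; };

// Plain tables need no extra check; views and dictionaries must match the
// kind the statement asked for.
void validate(ShowKind kind, const CreateInfo & create)
{
    if (kind == ShowKind::View && !create.is_view)
        throw std::runtime_error(create.name + " is not a VIEW");
    if (kind == ShowKind::Dictionary && !create.is_dictionary)
        throw std::runtime_error(create.name + " is not a DICTIONARY");
}
```
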
*/ -class InterpreterShowCreateQuery : public IInterpreter +class InterpreterShowCreateQuery : public IInterpreter, WithContext { public: - InterpreterShowCreateQuery(const ASTPtr & query_ptr_, const Context & context_) - : query_ptr(query_ptr_), context(context_) {} + InterpreterShowCreateQuery(const ASTPtr & query_ptr_, ContextPtr context_) : WithContext(context_), query_ptr(query_ptr_) {} BlockIO execute() override; @@ -24,7 +20,6 @@ public: private: ASTPtr query_ptr; - const Context & context; BlockInputStreamPtr executeImpl(); }; diff --git a/src/Interpreters/InterpreterShowGrantsQuery.cpp b/src/Interpreters/InterpreterShowGrantsQuery.cpp index a2ddc5eec27..f3e554122e1 100644 --- a/src/Interpreters/InterpreterShowGrantsQuery.cpp +++ b/src/Interpreters/InterpreterShowGrantsQuery.cpp @@ -32,56 +32,50 @@ namespace { ASTs res; - std::shared_ptr to_roles = std::make_shared(); - to_roles->names.push_back(grantee.getName()); + std::shared_ptr grantees = std::make_shared(); + grantees->names.push_back(grantee.getName()); std::shared_ptr current_query = nullptr; - auto elements = grantee.access.getElements(); - for (const auto & element : elements) + for (const auto & element : grantee.access.getElements()) { + if (element.empty()) + continue; + if (current_query) { const auto & prev_element = current_query->access_rights_elements.back(); - bool continue_using_current_query = (element.database == prev_element.database) - && (element.any_database == prev_element.any_database) && (element.table == prev_element.table) - && (element.any_table == prev_element.any_table) && (element.grant_option == current_query->grant_option) - && (element.kind == current_query->kind); - if (!continue_using_current_query) + bool continue_with_current_query = element.sameDatabaseAndTable(prev_element) && element.sameOptions(prev_element); + if (!continue_with_current_query) current_query = nullptr; } if (!current_query) { current_query = std::make_shared(); - current_query->kind = element.kind; - current_query->attach = attach_mode; - current_query->grant_option = element.grant_option; - current_query->to_roles = to_roles; + current_query->grantees = grantees; + current_query->attach_mode = attach_mode; + if (element.is_partial_revoke) + current_query->is_revoke = true; res.push_back(current_query); } current_query->access_rights_elements.emplace_back(std::move(element)); } - auto grants_roles = grantee.granted_roles.getGrants(); - - for (bool admin_option : {false, true}) + for (const auto & element : grantee.granted_roles.getElements()) { - const auto & roles = admin_option ? 
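
The rewritten grant serialization walks `grantee.access.getElements()` and keeps appending to the current `GRANT` statement while consecutive elements agree on target and options. A simplified sketch of that grouping predicate (a stand-in for the `sameDatabaseAndTable`/`sameOptions` pair):

```cpp
#include <string>

// Simplified grant element; the real AccessRightsElement carries more fields.
struct GrantElement
{
    std::string database;
    std::string table;
    bool grant_option = false;
    bool is_partial_revoke = false;
};

// Consecutive elements stay in one GRANT statement while they address the
// same database/table with the same grant/revoke options.
bool sameStatement(const GrantElement & a, const GrantElement & b)
{
    return a.database == b.database && a.table == b.table
        && a.grant_option == b.grant_option
        && a.is_partial_revoke == b.is_partial_revoke;
}
```
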
grants_roles.grants_with_admin_option : grants_roles.grants; - if (roles.empty()) + if (element.empty()) continue; auto grant_query = std::make_shared(); - using Kind = ASTGrantQuery::Kind; - grant_query->kind = Kind::GRANT; - grant_query->attach = attach_mode; - grant_query->admin_option = admin_option; - grant_query->to_roles = to_roles; + grant_query->grantees = grantees; + grant_query->admin_option = element.admin_option; + grant_query->attach_mode = attach_mode; if (attach_mode) - grant_query->roles = RolesOrUsersSet{roles}.toAST(); + grant_query->roles = RolesOrUsersSet{element.ids}.toAST(); else - grant_query->roles = RolesOrUsersSet{roles}.toASTWithNames(*manager); + grant_query->roles = RolesOrUsersSet{element.ids}.toASTWithNames(*manager); res.push_back(std::move(grant_query)); } @@ -142,8 +136,8 @@ BlockInputStreamPtr InterpreterShowGrantsQuery::executeImpl() std::vector InterpreterShowGrantsQuery::getEntities() const { const auto & show_query = query_ptr->as(); - const auto & access_control = context.getAccessControlManager(); - auto ids = RolesOrUsersSet{*show_query.for_roles, access_control, context.getUserID()}.getMatchingIDs(access_control); + const auto & access_control = getContext()->getAccessControlManager(); + auto ids = RolesOrUsersSet{*show_query.for_roles, access_control, getContext()->getUserID()}.getMatchingIDs(access_control); std::vector entities; for (const auto & id : ids) @@ -161,7 +155,7 @@ std::vector InterpreterShowGrantsQuery::getEntities() const ASTs InterpreterShowGrantsQuery::getGrantQueries() const { auto entities = getEntities(); - const auto & access_control = context.getAccessControlManager(); + const auto & access_control = getContext()->getAccessControlManager(); ASTs grant_queries; for (const auto & entity : entities) diff --git a/src/Interpreters/InterpreterShowGrantsQuery.h b/src/Interpreters/InterpreterShowGrantsQuery.h index f5dbd110fd0..c23aa1e3b94 100644 --- a/src/Interpreters/InterpreterShowGrantsQuery.h +++ b/src/Interpreters/InterpreterShowGrantsQuery.h @@ -1,22 +1,22 @@ #pragma once +#include #include #include -#include namespace DB { + class AccessControlManager; class ASTShowGrantsQuery; struct IAccessEntity; using AccessEntityPtr = std::shared_ptr; - -class InterpreterShowGrantsQuery : public IInterpreter +class InterpreterShowGrantsQuery : public IInterpreter, WithContext { public: - InterpreterShowGrantsQuery(const ASTPtr & query_ptr_, Context & context_) : query_ptr(query_ptr_), context(context_) {} + InterpreterShowGrantsQuery(const ASTPtr & query_ptr_, ContextPtr context_) : WithContext(context_), query_ptr(query_ptr_) {} BlockIO execute() override; @@ -32,6 +32,6 @@ private: std::vector getEntities() const; ASTPtr query_ptr; - Context & context; }; + } diff --git a/src/Interpreters/InterpreterShowPrivilegesQuery.cpp b/src/Interpreters/InterpreterShowPrivilegesQuery.cpp index e88b97d8671..c566d31e2fc 100644 --- a/src/Interpreters/InterpreterShowPrivilegesQuery.cpp +++ b/src/Interpreters/InterpreterShowPrivilegesQuery.cpp @@ -4,7 +4,7 @@ namespace DB { -InterpreterShowPrivilegesQuery::InterpreterShowPrivilegesQuery(const ASTPtr & query_ptr_, Context & context_) +InterpreterShowPrivilegesQuery::InterpreterShowPrivilegesQuery(const ASTPtr & query_ptr_, ContextPtr context_) : query_ptr(query_ptr_), context(context_) { } diff --git a/src/Interpreters/InterpreterShowPrivilegesQuery.h b/src/Interpreters/InterpreterShowPrivilegesQuery.h index 0c1a30107c0..75989263405 100644 --- a/src/Interpreters/InterpreterShowPrivilegesQuery.h +++ 
b/src/Interpreters/InterpreterShowPrivilegesQuery.h @@ -11,7 +11,7 @@ class Context; class InterpreterShowPrivilegesQuery : public IInterpreter { public: - InterpreterShowPrivilegesQuery(const ASTPtr & query_ptr_, Context & context_); + InterpreterShowPrivilegesQuery(const ASTPtr & query_ptr_, ContextPtr context_); BlockIO execute() override; @@ -20,7 +20,7 @@ public: private: ASTPtr query_ptr; - Context & context; + ContextPtr context; }; } diff --git a/src/Interpreters/InterpreterShowProcesslistQuery.cpp b/src/Interpreters/InterpreterShowProcesslistQuery.cpp index 697b286fe75..780ba688a89 100644 --- a/src/Interpreters/InterpreterShowProcesslistQuery.cpp +++ b/src/Interpreters/InterpreterShowProcesslistQuery.cpp @@ -12,7 +12,7 @@ namespace DB BlockIO InterpreterShowProcesslistQuery::execute() { - return executeQuery("SELECT * FROM system.processes", context, true); + return executeQuery("SELECT * FROM system.processes", getContext(), true); } } diff --git a/src/Interpreters/InterpreterShowProcesslistQuery.h b/src/Interpreters/InterpreterShowProcesslistQuery.h index fa0bbf075bd..5eedb67595e 100644 --- a/src/Interpreters/InterpreterShowProcesslistQuery.h +++ b/src/Interpreters/InterpreterShowProcesslistQuery.h @@ -7,16 +7,13 @@ namespace DB { -class Context; - - /** Return list of currently executing queries. */ -class InterpreterShowProcesslistQuery : public IInterpreter +class InterpreterShowProcesslistQuery : public IInterpreter, WithContext { public: - InterpreterShowProcesslistQuery(const ASTPtr & query_ptr_, Context & context_) - : query_ptr(query_ptr_), context(context_) {} + InterpreterShowProcesslistQuery(const ASTPtr & query_ptr_, ContextPtr context_) + : WithContext(context_), query_ptr(query_ptr_) {} BlockIO execute() override; @@ -27,8 +24,6 @@ public: private: ASTPtr query_ptr; - Context & context; }; - } diff --git a/src/Interpreters/InterpreterShowTablesQuery.cpp b/src/Interpreters/InterpreterShowTablesQuery.cpp index 49c55fa3a93..901999f004f 100644 --- a/src/Interpreters/InterpreterShowTablesQuery.cpp +++ b/src/Interpreters/InterpreterShowTablesQuery.cpp @@ -18,8 +18,8 @@ namespace ErrorCodes } -InterpreterShowTablesQuery::InterpreterShowTablesQuery(const ASTPtr & query_ptr_, Context & context_) - : query_ptr(query_ptr_), context(context_) +InterpreterShowTablesQuery::InterpreterShowTablesQuery(const ASTPtr & query_ptr_, ContextPtr context_) + : WithContext(context_), query_ptr(query_ptr_) { } @@ -102,7 +102,7 @@ String InterpreterShowTablesQuery::getRewrittenQuery() if (query.temporary && !query.from.empty()) throw Exception("The `FROM` and `TEMPORARY` cannot be used together in `SHOW TABLES`", ErrorCodes::SYNTAX_ERROR); - String database = context.resolveDatabase(query.from); + String database = getContext()->resolveDatabase(query.from); DatabaseCatalog::instance().assertDatabaseExists(database); WriteBufferFromOwnString rewritten_query; @@ -142,7 +142,7 @@ String InterpreterShowTablesQuery::getRewrittenQuery() BlockIO InterpreterShowTablesQuery::execute() { - return executeQuery(getRewrittenQuery(), context, true); + return executeQuery(getRewrittenQuery(), getContext(), true); } diff --git a/src/Interpreters/InterpreterShowTablesQuery.h b/src/Interpreters/InterpreterShowTablesQuery.h index 4f720e68622..b61be568e35 100644 --- a/src/Interpreters/InterpreterShowTablesQuery.h +++ b/src/Interpreters/InterpreterShowTablesQuery.h @@ -13,10 +13,10 @@ class Context; /** Return a list of tables or databases meets specified conditions. 
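
`InterpreterShowTablesQuery` interprets the statement by rewriting it into a `SELECT` over `system.tables` (or `system.databases`) and running that through `executeQuery`. A bare-bones sketch of such a rewrite, illustrative only; the real builder also handles quoting, `LIKE`, `NOT LIKE` and `LIMIT`:

```cpp
#include <string>

// NB: illustration only — production code must quote/escape the database
// name instead of splicing it into the query text.
std::string rewriteShowTables(const std::string & database)
{
    return "SELECT name FROM system.tables WHERE database = '" + database + "'";
}
```
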
* Interprets a query through replacing it to SELECT query from system.tables or system.databases. */ -class InterpreterShowTablesQuery : public IInterpreter +class InterpreterShowTablesQuery : public IInterpreter, WithContext { public: - InterpreterShowTablesQuery(const ASTPtr & query_ptr_, Context & context_); + InterpreterShowTablesQuery(const ASTPtr & query_ptr_, ContextPtr context_); BlockIO execute() override; @@ -27,7 +27,6 @@ public: private: ASTPtr query_ptr; - Context & context; String getRewrittenQuery(); }; diff --git a/src/Interpreters/InterpreterSystemQuery.cpp b/src/Interpreters/InterpreterSystemQuery.cpp index ece3209621b..8a4b3b07692 100644 --- a/src/Interpreters/InterpreterSystemQuery.cpp +++ b/src/Interpreters/InterpreterSystemQuery.cpp @@ -10,6 +10,7 @@ #include #include #include +#include #include #include #include @@ -24,9 +25,11 @@ #include #include #include +#include #include #include #include +#include #include #include #include @@ -133,7 +136,7 @@ AccessType getRequiredAccessType(StorageActionBlockType action_type) /// Implements SYSTEM [START|STOP] void InterpreterSystemQuery::startStopAction(StorageActionBlockType action_type, bool start) { - auto manager = context.getActionLocksManager(); + auto manager = getContext()->getActionLocksManager(); manager->cleanExpired(); if (volume_ptr && action_type == ActionLocks::PartsMerge) @@ -142,7 +145,7 @@ void InterpreterSystemQuery::startStopAction(StorageActionBlockType action_type, } else if (table_id) { - auto table = DatabaseCatalog::instance().tryGetTable(table_id, context); + auto table = DatabaseCatalog::instance().tryGetTable(table_id, getContext()); if (table) { if (start) @@ -156,10 +159,10 @@ void InterpreterSystemQuery::startStopAction(StorageActionBlockType action_type, } else { - auto access = context.getAccess(); + auto access = getContext()->getAccess(); for (auto & elem : DatabaseCatalog::instance().getDatabases()) { - for (auto iterator = elem.second->getTablesIterator(context); iterator->isValid(); iterator->next()) + for (auto iterator = elem.second->getTablesIterator(getContext()); iterator->isValid(); iterator->next()) { StoragePtr table = iterator->table(); if (!table) @@ -189,8 +192,8 @@ void InterpreterSystemQuery::startStopAction(StorageActionBlockType action_type, } -InterpreterSystemQuery::InterpreterSystemQuery(const ASTPtr & query_ptr_, Context & context_) - : query_ptr(query_ptr_->clone()), context(context_), log(&Poco::Logger::get("InterpreterSystemQuery")) +InterpreterSystemQuery::InterpreterSystemQuery(const ASTPtr & query_ptr_, ContextPtr context_) + : WithContext(context_), query_ptr(query_ptr_->clone()), log(&Poco::Logger::get("InterpreterSystemQuery")) { } @@ -200,37 +203,37 @@ BlockIO InterpreterSystemQuery::execute() auto & query = query_ptr->as(); if (!query.cluster.empty()) - return executeDDLQueryOnCluster(query_ptr, context, getRequiredAccessForDDLOnCluster()); + return executeDDLQueryOnCluster(query_ptr, getContext(), getRequiredAccessForDDLOnCluster()); using Type = ASTSystemQuery::Type; /// Use global context with fresh system profile settings - Context system_context = context.getGlobalContext(); - system_context.setSetting("profile", context.getSystemProfileName()); + auto system_context = Context::createCopy(getContext()->getGlobalContext()); + system_context->setSetting("profile", getContext()->getSystemProfileName()); /// Make canonical query for simpler processing if (!query.table.empty()) - table_id = context.resolveStorageID(StorageID(query.database, query.table), 
Context::ResolveOrdinary); + table_id = getContext()->resolveStorageID(StorageID(query.database, query.table), Context::ResolveOrdinary); if (!query.target_dictionary.empty() && !query.database.empty()) query.target_dictionary = query.database + "." + query.target_dictionary; volume_ptr = {}; if (!query.storage_policy.empty() && !query.volume.empty()) - volume_ptr = context.getStoragePolicy(query.storage_policy)->getVolumeByName(query.volume); + volume_ptr = getContext()->getStoragePolicy(query.storage_policy)->getVolumeByName(query.volume); switch (query.type) { case Type::SHUTDOWN: { - context.checkAccess(AccessType::SYSTEM_SHUTDOWN); + getContext()->checkAccess(AccessType::SYSTEM_SHUTDOWN); if (kill(0, SIGTERM)) throwFromErrno("System call kill(0, SIGTERM) failed", ErrorCodes::CANNOT_KILL); break; } case Type::KILL: { - context.checkAccess(AccessType::SYSTEM_SHUTDOWN); + getContext()->checkAccess(AccessType::SYSTEM_SHUTDOWN); /// Exit with the same code as it is usually set by shell when process is terminated by SIGKILL. /// It's better than doing 'raise' or 'kill', because they have no effect for 'init' process (with pid = 0, usually in Docker). LOG_INFO(log, "Exit immediately as the SYSTEM KILL command has been issued."); @@ -253,56 +256,80 @@ BlockIO InterpreterSystemQuery::execute() } case Type::DROP_DNS_CACHE: { - context.checkAccess(AccessType::SYSTEM_DROP_DNS_CACHE); + getContext()->checkAccess(AccessType::SYSTEM_DROP_DNS_CACHE); DNSResolver::instance().dropCache(); /// Reinitialize clusters to update their resolved_addresses - system_context.reloadClusterConfig(); + system_context->reloadClusterConfig(); break; } case Type::DROP_MARK_CACHE: - context.checkAccess(AccessType::SYSTEM_DROP_MARK_CACHE); - system_context.dropMarkCache(); + getContext()->checkAccess(AccessType::SYSTEM_DROP_MARK_CACHE); + system_context->dropMarkCache(); break; case Type::DROP_UNCOMPRESSED_CACHE: - context.checkAccess(AccessType::SYSTEM_DROP_UNCOMPRESSED_CACHE); - system_context.dropUncompressedCache(); + getContext()->checkAccess(AccessType::SYSTEM_DROP_UNCOMPRESSED_CACHE); + system_context->dropUncompressedCache(); + break; + case Type::DROP_MMAP_CACHE: + getContext()->checkAccess(AccessType::SYSTEM_DROP_MMAP_CACHE); + system_context->dropMMappedFileCache(); break; #if USE_EMBEDDED_COMPILER case Type::DROP_COMPILED_EXPRESSION_CACHE: - context.checkAccess(AccessType::SYSTEM_DROP_COMPILED_EXPRESSION_CACHE); - system_context.dropCompiledExpressionCache(); + getContext()->checkAccess(AccessType::SYSTEM_DROP_COMPILED_EXPRESSION_CACHE); + if (auto * cache = CompiledExpressionCacheFactory::instance().tryGetCache()) + cache->reset(); break; #endif case Type::RELOAD_DICTIONARY: { - context.checkAccess(AccessType::SYSTEM_RELOAD_DICTIONARY); - system_context.getExternalDictionariesLoader().loadOrReload( - DatabaseCatalog::instance().resolveDictionaryName(query.target_dictionary)); + getContext()->checkAccess(AccessType::SYSTEM_RELOAD_DICTIONARY); + + auto & external_dictionaries_loader = system_context->getExternalDictionariesLoader(); + external_dictionaries_loader.reloadDictionary(query.target_dictionary, getContext()); + + ExternalDictionariesLoader::resetAll(); break; } case Type::RELOAD_DICTIONARIES: { - context.checkAccess(AccessType::SYSTEM_RELOAD_DICTIONARY); + getContext()->checkAccess(AccessType::SYSTEM_RELOAD_DICTIONARY); executeCommandsAndThrowIfError( - [&] () { system_context.getExternalDictionariesLoader().reloadAllTriedToLoad(); }, - [&] () { system_context.getEmbeddedDictionaries().reload(); } + 
[&] () { system_context->getExternalDictionariesLoader().reloadAllTriedToLoad(); }, + [&] () { system_context->getEmbeddedDictionaries().reload(); } ); ExternalDictionariesLoader::resetAll(); break; } + case Type::RELOAD_MODEL: + { + getContext()->checkAccess(AccessType::SYSTEM_RELOAD_MODEL); + + auto & external_models_loader = system_context->getExternalModelsLoader(); + external_models_loader.reloadModel(query.target_model); + break; + } + case Type::RELOAD_MODELS: + { + getContext()->checkAccess(AccessType::SYSTEM_RELOAD_MODEL); + + auto & external_models_loader = system_context->getExternalModelsLoader(); + external_models_loader.reloadAllTriedToLoad(); + break; + } case Type::RELOAD_EMBEDDED_DICTIONARIES: - context.checkAccess(AccessType::SYSTEM_RELOAD_EMBEDDED_DICTIONARIES); - system_context.getEmbeddedDictionaries().reload(); + getContext()->checkAccess(AccessType::SYSTEM_RELOAD_EMBEDDED_DICTIONARIES); + system_context->getEmbeddedDictionaries().reload(); break; case Type::RELOAD_CONFIG: - context.checkAccess(AccessType::SYSTEM_RELOAD_CONFIG); - system_context.reloadConfig(); + getContext()->checkAccess(AccessType::SYSTEM_RELOAD_CONFIG); + system_context->reloadConfig(); break; case Type::RELOAD_SYMBOLS: { #if defined(__ELF__) && !defined(__FreeBSD__) - context.checkAccess(AccessType::SYSTEM_RELOAD_SYMBOLS); + getContext()->checkAccess(AccessType::SYSTEM_RELOAD_SYMBOLS); (void)SymbolIndex::instance(true); break; #else @@ -368,18 +395,21 @@ BlockIO InterpreterSystemQuery::execute() throw Exception("There is no " + query.database + "." + query.table + " replicated table", ErrorCodes::BAD_ARGUMENTS); break; + case Type::RESTART_DISK: + restartDisk(query.disk); + break; case Type::FLUSH_LOGS: { - context.checkAccess(AccessType::SYSTEM_FLUSH_LOGS); + getContext()->checkAccess(AccessType::SYSTEM_FLUSH_LOGS); executeCommandsAndThrowIfError( - [&] () { if (auto query_log = context.getQueryLog()) query_log->flush(true); }, - [&] () { if (auto part_log = context.getPartLog("")) part_log->flush(true); }, - [&] () { if (auto query_thread_log = context.getQueryThreadLog()) query_thread_log->flush(true); }, - [&] () { if (auto trace_log = context.getTraceLog()) trace_log->flush(true); }, - [&] () { if (auto text_log = context.getTextLog()) text_log->flush(true); }, - [&] () { if (auto metric_log = context.getMetricLog()) metric_log->flush(true); }, - [&] () { if (auto asynchronous_metric_log = context.getAsynchronousMetricLog()) asynchronous_metric_log->flush(true); }, - [&] () { if (auto opentelemetry_span_log = context.getOpenTelemetrySpanLog()) opentelemetry_span_log->flush(true); } + [&] () { if (auto query_log = getContext()->getQueryLog()) query_log->flush(true); }, + [&] () { if (auto part_log = getContext()->getPartLog("")) part_log->flush(true); }, + [&] () { if (auto query_thread_log = getContext()->getQueryThreadLog()) query_thread_log->flush(true); }, + [&] () { if (auto trace_log = getContext()->getTraceLog()) trace_log->flush(true); }, + [&] () { if (auto text_log = getContext()->getTextLog()) text_log->flush(true); }, + [&] () { if (auto metric_log = getContext()->getMetricLog()) metric_log->flush(true); }, + [&] () { if (auto asynchronous_metric_log = getContext()->getAsynchronousMetricLog()) asynchronous_metric_log->flush(true); }, + [&] () { if (auto opentelemetry_span_log = getContext()->getOpenTelemetrySpanLog()) opentelemetry_span_log->flush(true); } ); break; } @@ -394,12 +424,12 @@ BlockIO InterpreterSystemQuery::execute() } -StoragePtr 
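
`FLUSH LOGS` runs one flush lambda per system log through `executeCommandsAndThrowIfError`. A hypothetical helper in the same spirit (the real function's exact error handling is not shown in this hunk): every command runs, and a captured failure surfaces at the end:

```cpp
#include <exception>
#include <functional>
#include <vector>

// Run all commands even if earlier ones throw, then rethrow the first error.
// This keeps one broken log from blocking the flush of the others.
void runAllThenThrow(const std::vector<std::function<void()>> & commands)
{
    std::exception_ptr first_error;
    for (const auto & command : commands)
    {
        try { command(); }
        catch (...) { if (!first_error) first_error = std::current_exception(); }
    }
    if (first_error)
        std::rethrow_exception(first_error);
}
```
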
InterpreterSystemQuery::tryRestartReplica(const StorageID & replica, Context & system_context, bool need_ddl_guard) +StoragePtr InterpreterSystemQuery::tryRestartReplica(const StorageID & replica, ContextPtr system_context, bool need_ddl_guard) { - context.checkAccess(AccessType::SYSTEM_RESTART_REPLICA, replica); + getContext()->checkAccess(AccessType::SYSTEM_RESTART_REPLICA, replica); auto table_ddl_guard = need_ddl_guard ? DatabaseCatalog::instance().getDDLGuard(replica.getDatabaseName(), replica.getTableName()) : nullptr; - auto [database, table] = DatabaseCatalog::instance().tryGetDatabaseAndTable(replica, context); + auto [database, table] = DatabaseCatalog::instance().tryGetDatabaseAndTable(replica, getContext()); ASTPtr create_ast; /// Detach actions @@ -409,8 +439,8 @@ StoragePtr InterpreterSystemQuery::tryRestartReplica(const StorageID & replica, table->shutdown(); { /// If table was already dropped by anyone, an exception will be thrown - auto table_lock = table->lockExclusively(context.getCurrentQueryId(), context.getSettingsRef().lock_acquire_timeout); - create_ast = database->getCreateTableQuery(replica.table_name, context); + auto table_lock = table->lockExclusively(getContext()->getCurrentQueryId(), getContext()->getSettingsRef().lock_acquire_timeout); + create_ast = database->getCreateTableQuery(replica.table_name, getContext()); database->detachTable(replica.table_name); } @@ -421,14 +451,14 @@ StoragePtr InterpreterSystemQuery::tryRestartReplica(const StorageID & replica, auto & create = create_ast->as(); create.attach = true; - auto columns = InterpreterCreateQuery::getColumnsDescription(*create.columns_list->columns, system_context, false); + auto columns = InterpreterCreateQuery::getColumnsDescription(*create.columns_list->columns, system_context, true); auto constraints = InterpreterCreateQuery::getConstraintsDescription(create.columns_list->constraints); auto data_path = database->getTableDataPath(create); table = StorageFactory::instance().get(create, data_path, system_context, - system_context.getGlobalContext(), + system_context->getGlobalContext(), columns, constraints, false); @@ -439,7 +469,7 @@ StoragePtr InterpreterSystemQuery::tryRestartReplica(const StorageID & replica, return table; } -void InterpreterSystemQuery::restartReplicas(Context & system_context) +void InterpreterSystemQuery::restartReplicas(ContextPtr system_context) { std::vector replica_names; auto & catalog = DatabaseCatalog::instance(); @@ -447,7 +477,7 @@ void InterpreterSystemQuery::restartReplicas(Context & system_context) for (auto & elem : catalog.getDatabases()) { DatabasePtr & database = elem.second; - for (auto iterator = database->getTablesIterator(context); iterator->isValid(); iterator->next()) + for (auto iterator = database->getTablesIterator(getContext()); iterator->isValid(); iterator->next()) { if (auto table = iterator->table()) { @@ -482,43 +512,43 @@ void InterpreterSystemQuery::dropReplica(ASTSystemQuery & query) if (!table_id.empty()) { - context.checkAccess(AccessType::SYSTEM_DROP_REPLICA, table_id); - StoragePtr table = DatabaseCatalog::instance().getTable(table_id, context); + getContext()->checkAccess(AccessType::SYSTEM_DROP_REPLICA, table_id); + StoragePtr table = DatabaseCatalog::instance().getTable(table_id, getContext()); if (!dropReplicaImpl(query, table)) throw Exception("Table " + table_id.getNameForLogs() + " is not replicated", ErrorCodes::BAD_ARGUMENTS); } else if (!query.database.empty()) { - context.checkAccess(AccessType::SYSTEM_DROP_REPLICA, 
query.database); + getContext()->checkAccess(AccessType::SYSTEM_DROP_REPLICA, query.database); DatabasePtr database = DatabaseCatalog::instance().getDatabase(query.database); - for (auto iterator = database->getTablesIterator(context); iterator->isValid(); iterator->next()) + for (auto iterator = database->getTablesIterator(getContext()); iterator->isValid(); iterator->next()) dropReplicaImpl(query, iterator->table()); LOG_TRACE(log, "Dropped replica {} from database {}", query.replica, backQuoteIfNeed(database->getDatabaseName())); } else if (query.is_drop_whole_replica) { - context.checkAccess(AccessType::SYSTEM_DROP_REPLICA); + getContext()->checkAccess(AccessType::SYSTEM_DROP_REPLICA); auto databases = DatabaseCatalog::instance().getDatabases(); for (auto & elem : databases) { DatabasePtr & database = elem.second; - for (auto iterator = database->getTablesIterator(context); iterator->isValid(); iterator->next()) + for (auto iterator = database->getTablesIterator(getContext()); iterator->isValid(); iterator->next()) dropReplicaImpl(query, iterator->table()); LOG_TRACE(log, "Dropped replica {} from database {}", query.replica, backQuoteIfNeed(database->getDatabaseName())); } } else if (!query.replica_zk_path.empty()) { - context.checkAccess(AccessType::SYSTEM_DROP_REPLICA); + getContext()->checkAccess(AccessType::SYSTEM_DROP_REPLICA); auto remote_replica_path = query.replica_zk_path + "/replicas/" + query.replica; /// This check is actually redundant, but it may prevent from some user mistakes for (auto & elem : DatabaseCatalog::instance().getDatabases()) { DatabasePtr & database = elem.second; - for (auto iterator = database->getTablesIterator(context); iterator->isValid(); iterator->next()) + for (auto iterator = database->getTablesIterator(getContext()); iterator->isValid(); iterator->next()) { if (auto * storage_replicated = dynamic_cast(iterator->table().get())) { @@ -533,7 +563,7 @@ void InterpreterSystemQuery::dropReplica(ASTSystemQuery & query) } } - auto zookeeper = context.getZooKeeper(); + auto zookeeper = getContext()->getZooKeeper(); bool looks_like_table_path = zookeeper->exists(query.replica_zk_path + "/replicas") || zookeeper->exists(query.replica_zk_path + "/dropped"); @@ -559,7 +589,7 @@ bool InterpreterSystemQuery::dropReplicaImpl(ASTSystemQuery & query, const Stora return false; StorageReplicatedMergeTree::Status status; - auto zookeeper = context.getZooKeeper(); + auto zookeeper = getContext()->getZooKeeper(); storage_replicated->getStatus(status); /// Do not allow to drop local replicas and active remote replicas @@ -582,13 +612,13 @@ bool InterpreterSystemQuery::dropReplicaImpl(ASTSystemQuery & query, const Stora void InterpreterSystemQuery::syncReplica(ASTSystemQuery &) { - context.checkAccess(AccessType::SYSTEM_SYNC_REPLICA, table_id); - StoragePtr table = DatabaseCatalog::instance().getTable(table_id, context); + getContext()->checkAccess(AccessType::SYSTEM_SYNC_REPLICA, table_id); + StoragePtr table = DatabaseCatalog::instance().getTable(table_id, getContext()); if (auto * storage_replicated = dynamic_cast(table.get())) { LOG_TRACE(log, "Synchronizing entries in replica's queue with table's log and waiting for it to become empty"); - if (!storage_replicated->waitForShrinkingQueueSize(0, context.getSettingsRef().receive_timeout.totalMilliseconds())) + if (!storage_replicated->waitForShrinkingQueueSize(0, getContext()->getSettingsRef().receive_timeout.totalMilliseconds())) { LOG_ERROR(log, "SYNC REPLICA {}: Timed out!", table_id.getNameForLogs()); throw 
Exception( @@ -603,14 +633,26 @@ void InterpreterSystemQuery::syncReplica(ASTSystemQuery &) void InterpreterSystemQuery::flushDistributed(ASTSystemQuery &) { - context.checkAccess(AccessType::SYSTEM_FLUSH_DISTRIBUTED, table_id); + getContext()->checkAccess(AccessType::SYSTEM_FLUSH_DISTRIBUTED, table_id); - if (auto * storage_distributed = dynamic_cast(DatabaseCatalog::instance().getTable(table_id, context).get())) - storage_distributed->flushClusterNodesAllData(context); + if (auto * storage_distributed = dynamic_cast(DatabaseCatalog::instance().getTable(table_id, getContext()).get())) + storage_distributed->flushClusterNodesAllData(getContext()); else throw Exception("Table " + table_id.getNameForLogs() + " is not distributed", ErrorCodes::BAD_ARGUMENTS); } +void InterpreterSystemQuery::restartDisk(String & name) +{ + getContext()->checkAccess(AccessType::SYSTEM_RESTART_DISK); + + auto disk = getContext()->getDisk(name); + + if (DiskRestartProxy * restart_proxy = dynamic_cast(disk.get())) + restart_proxy->restart(); + else + throw Exception("Disk " + name + " doesn't have possibility to restart", ErrorCodes::BAD_ARGUMENTS); +} + AccessRightsElements InterpreterSystemQuery::getRequiredAccessForDDLOnCluster() const { @@ -628,6 +670,7 @@ AccessRightsElements InterpreterSystemQuery::getRequiredAccessForDDLOnCluster() } case Type::DROP_DNS_CACHE: [[fallthrough]]; case Type::DROP_MARK_CACHE: [[fallthrough]]; + case Type::DROP_MMAP_CACHE: [[fallthrough]]; #if USE_EMBEDDED_COMPILER case Type::DROP_COMPILED_EXPRESSION_CACHE: [[fallthrough]]; #endif @@ -643,6 +686,12 @@ AccessRightsElements InterpreterSystemQuery::getRequiredAccessForDDLOnCluster() required_access.emplace_back(AccessType::SYSTEM_RELOAD_DICTIONARY); break; } + case Type::RELOAD_MODEL: [[fallthrough]]; + case Type::RELOAD_MODELS: + { + required_access.emplace_back(AccessType::SYSTEM_RELOAD_MODEL); + break; + } case Type::RELOAD_CONFIG: { required_access.emplace_back(AccessType::SYSTEM_RELOAD_CONFIG); @@ -746,6 +795,11 @@ AccessRightsElements InterpreterSystemQuery::getRequiredAccessForDDLOnCluster() required_access.emplace_back(AccessType::SYSTEM_FLUSH_LOGS); break; } + case Type::RESTART_DISK: + { + required_access.emplace_back(AccessType::SYSTEM_RESTART_DISK); + break; + } case Type::STOP_LISTEN_QUERIES: break; case Type::START_LISTEN_QUERIES: break; case Type::UNKNOWN: break; @@ -754,4 +808,9 @@ AccessRightsElements InterpreterSystemQuery::getRequiredAccessForDDLOnCluster() return required_access; } +void InterpreterSystemQuery::extendQueryLogElemImpl(QueryLogElement & elem, const ASTPtr & /*ast*/, ContextPtr) const +{ + elem.query_kind = "System"; +} + } diff --git a/src/Interpreters/InterpreterSystemQuery.h b/src/Interpreters/InterpreterSystemQuery.h index 6fa0a432191..341611e0af1 100644 --- a/src/Interpreters/InterpreterSystemQuery.h +++ b/src/Interpreters/InterpreterSystemQuery.h @@ -30,32 +30,34 @@ class ASTSystemQuery; * - start/stop actions for all existing tables. * Note that the actions for tables that will be created after this query will not be affected. 
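
`SYSTEM RESTART DISK` probes the disk with a `dynamic_cast` to `DiskRestartProxy` and rejects disks that are not wrapped in the restart-aware proxy. A toy model of that capability probe (simplified types):

```cpp
#include <memory>
#include <stdexcept>
#include <string>

struct IDisk { virtual ~IDisk() = default; };

// Stand-in for DiskRestartProxy: the only disk kind that can be restarted.
struct RestartProxy : IDisk
{
    void restart() { /* reopen connections, reset cached state, etc. */ }
};

void restartDisk(const std::shared_ptr<IDisk> & disk, const std::string & name)
{
    if (auto * proxy = dynamic_cast<RestartProxy *>(disk.get()))
        proxy->restart();
    else
        throw std::invalid_argument("Disk " + name + " does not support restart");
}
```
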
*/ -class InterpreterSystemQuery : public IInterpreter +class InterpreterSystemQuery : public IInterpreter, WithContext { public: - InterpreterSystemQuery(const ASTPtr & query_ptr_, Context & context_); + InterpreterSystemQuery(const ASTPtr & query_ptr_, ContextPtr context_); BlockIO execute() override; private: ASTPtr query_ptr; - Context & context; Poco::Logger * log = nullptr; StorageID table_id = StorageID::createEmpty(); /// Will be set up if query contains table name VolumePtr volume_ptr; /// Tries to get a replicated table and restart it /// Returns pointer to a newly created table if the restart was successful - StoragePtr tryRestartReplica(const StorageID & replica, Context & context, bool need_ddl_guard = true); + StoragePtr tryRestartReplica(const StorageID & replica, ContextPtr context, bool need_ddl_guard = true); - void restartReplicas(Context & system_context); + void restartReplicas(ContextPtr system_context); void syncReplica(ASTSystemQuery & query); void dropReplica(ASTSystemQuery & query); bool dropReplicaImpl(ASTSystemQuery & query, const StoragePtr & table); void flushDistributed(ASTSystemQuery & query); + void restartDisk(String & name); AccessRightsElements getRequiredAccessForDDLOnCluster() const; void startStopAction(StorageActionBlockType action_type, bool start); + + void extendQueryLogElemImpl(QueryLogElement &, const ASTPtr &, ContextPtr) const override; }; diff --git a/src/Interpreters/InterpreterUseQuery.cpp b/src/Interpreters/InterpreterUseQuery.cpp index 58f5b6c9a32..626d2f499c7 100644 --- a/src/Interpreters/InterpreterUseQuery.cpp +++ b/src/Interpreters/InterpreterUseQuery.cpp @@ -11,8 +11,8 @@ namespace DB BlockIO InterpreterUseQuery::execute() { const String & new_database = query_ptr->as().database; - context.checkAccess(AccessType::SHOW_DATABASES, new_database); - context.getSessionContext().setCurrentDatabase(new_database); + getContext()->checkAccess(AccessType::SHOW_DATABASES, new_database); + getContext()->getSessionContext()->setCurrentDatabase(new_database); return {}; } diff --git a/src/Interpreters/InterpreterUseQuery.h b/src/Interpreters/InterpreterUseQuery.h index ae409117afd..d1ce57dc64a 100644 --- a/src/Interpreters/InterpreterUseQuery.h +++ b/src/Interpreters/InterpreterUseQuery.h @@ -7,23 +7,17 @@ namespace DB { -class Context; - - /** Change default database for session. 
*/ -class InterpreterUseQuery : public IInterpreter +class InterpreterUseQuery : public IInterpreter, WithContext { public: - InterpreterUseQuery(const ASTPtr & query_ptr_, Context & context_) - : query_ptr(query_ptr_), context(context_) {} + InterpreterUseQuery(const ASTPtr & query_ptr_, ContextPtr context_) : WithContext(context_), query_ptr(query_ptr_) {} BlockIO execute() override; private: ASTPtr query_ptr; - Context & context; }; - } diff --git a/src/Interpreters/InterpreterWatchQuery.cpp b/src/Interpreters/InterpreterWatchQuery.cpp index 757a2d1738b..84959f2f624 100644 --- a/src/Interpreters/InterpreterWatchQuery.cpp +++ b/src/Interpreters/InterpreterWatchQuery.cpp @@ -32,15 +32,15 @@ namespace ErrorCodes BlockIO InterpreterWatchQuery::execute() { - if (!context.getSettingsRef().allow_experimental_live_view) + if (!getContext()->getSettingsRef().allow_experimental_live_view) throw Exception("Experimental LIVE VIEW feature is not enabled (the setting 'allow_experimental_live_view')", ErrorCodes::SUPPORT_IS_DISABLED); BlockIO res; const ASTWatchQuery & query = typeid_cast(*query_ptr); - auto table_id = context.resolveStorageID(query, Context::ResolveOrdinary); + auto table_id = getContext()->resolveStorageID(query, Context::ResolveOrdinary); /// Get storage - storage = DatabaseCatalog::instance().tryGetTable(table_id, context); + storage = DatabaseCatalog::instance().tryGetTable(table_id, getContext()); if (!storage) throw Exception("Table " + table_id.getNameForLogs() + " doesn't exist.", @@ -48,10 +48,10 @@ BlockIO InterpreterWatchQuery::execute() /// List of columns to read to execute the query. Names required_columns = storage->getInMemoryMetadataPtr()->getColumns().getNamesOfPhysical(); - context.checkAccess(AccessType::SELECT, table_id, required_columns); + getContext()->checkAccess(AccessType::SELECT, table_id, required_columns); /// Get context settings for this query - const Settings & settings = context.getSettingsRef(); + const Settings & settings = getContext()->getSettingsRef(); /// Limitation on the number of columns to read. if (settings.max_columns_to_read && required_columns.size() > settings.max_columns_to_read) @@ -71,7 +71,7 @@ BlockIO InterpreterWatchQuery::execute() QueryProcessingStage::Enum from_stage = QueryProcessingStage::FetchColumns; /// Watch storage - streams = storage->watch(required_columns, query_info, context, from_stage, max_block_size, max_streams); + streams = storage->watch(required_columns, query_info, getContext(), from_stage, max_block_size, max_streams); /// Constraints on the result, the quota on the result, and also callback for progress. if (IBlockInputStream * stream = dynamic_cast(streams[0].get())) @@ -83,7 +83,7 @@ BlockIO InterpreterWatchQuery::execute() limits.size_limits.overflow_mode = settings.result_overflow_mode; stream->setLimits(limits); - stream->setQuota(context.getQuota()); + stream->setQuota(getContext()->getQuota()); } res.in = streams[0]; diff --git a/src/Interpreters/InterpreterWatchQuery.h b/src/Interpreters/InterpreterWatchQuery.h index c4aa5b9ca99..45b61a18b66 100644 --- a/src/Interpreters/InterpreterWatchQuery.h +++ b/src/Interpreters/InterpreterWatchQuery.h @@ -14,30 +14,27 @@ limitations under the License. 
*/ #include #include #include -#include #include -#include #include +#include namespace DB { -class Context; class IAST; using ASTPtr = std::shared_ptr<IAST>; using StoragePtr = std::shared_ptr<IStorage>; -class InterpreterWatchQuery : public IInterpreter +class InterpreterWatchQuery : public IInterpreter, WithContext { public: - InterpreterWatchQuery(const ASTPtr & query_ptr_, const Context & context_) - : query_ptr(query_ptr_), context(context_) {} + InterpreterWatchQuery(const ASTPtr & query_ptr_, ContextPtr context_) : WithContext(context_), query_ptr(query_ptr_) {} BlockIO execute() override; private: ASTPtr query_ptr; - const Context & context; /// Table from where to read data, if not subquery. StoragePtr storage; @@ -45,5 +42,4 @@ private: BlockInputStreams streams; }; - } diff --git a/src/Interpreters/InterserverCredentials.cpp b/src/Interpreters/InterserverCredentials.cpp new file mode 100644 index 00000000000..e60d397eb02 --- /dev/null +++ b/src/Interpreters/InterserverCredentials.cpp @@ -0,0 +1,87 @@ +#include +#include +#include + +namespace DB +{ + +namespace ErrorCodes +{ + extern const int NO_ELEMENTS_IN_CONFIG; +} + +std::unique_ptr<InterserverCredentials> +InterserverCredentials::make(const Poco::Util::AbstractConfiguration & config, const std::string & root_tag) +{ + if (config.has("user") && !config.has("password")) + throw Exception("Configuration parameter interserver_http_credentials.password can't be empty", ErrorCodes::NO_ELEMENTS_IN_CONFIG); + + if (!config.has("user") && config.has("password")) + throw Exception("Configuration parameter interserver_http_credentials.user can't be empty if a password is specified", ErrorCodes::NO_ELEMENTS_IN_CONFIG); + + /// They both can be empty + auto user = config.getString(root_tag + ".user", ""); + auto password = config.getString(root_tag + ".password", ""); + + auto store = parseCredentialsFromConfig(user, password, config, root_tag); + + return std::make_unique<InterserverCredentials>(user, password, store); +} + +InterserverCredentials::CurrentCredentials InterserverCredentials::parseCredentialsFromConfig( + const std::string & current_user_, + const std::string & current_password_, + const Poco::Util::AbstractConfiguration & config, + const std::string & root_tag) +{ + auto * log = &Poco::Logger::get("InterserverCredentials"); + CurrentCredentials store; + store.emplace_back(current_user_, current_password_); + if (config.getBool(root_tag + ".allow_empty", false)) + { + LOG_DEBUG(log, "Allowing empty credentials"); + /// Allow empty credentials to support migrating from no auth + store.emplace_back("", ""); + } + + Poco::Util::AbstractConfiguration::Keys old_users; + config.keys(root_tag, old_users); + + for (const auto & user_key : old_users) + { + if (startsWith(user_key, "old")) + { + std::string full_prefix = root_tag + "."
+ user_key; + std::string old_user_name = config.getString(full_prefix + ".user"); + LOG_DEBUG(log, "Adding credentials for old user {}", old_user_name); + + std::string old_user_password = config.getString(full_prefix + ".password"); + + store.emplace_back(old_user_name, old_user_password); + } + } + + return store; +} + +InterserverCredentials::CheckResult InterserverCredentials::isValidUser(const UserWithPassword & credentials) const +{ + auto itr = std::find(all_users_store.begin(), all_users_store.end(), credentials); + + if (itr == all_users_store.end()) + { + if (credentials.first.empty()) + return {"Server requires HTTP Basic authentication, but client doesn't provide it", false}; + + return {"Incorrect user or password in HTTP basic authentication: " + credentials.first, false}; + } + + return {"", true}; +} + +InterserverCredentials::CheckResult InterserverCredentials::isValidUser(const std::string & user, const std::string & password) const +{ + return isValidUser(std::make_pair(user, password)); +} + +} diff --git a/src/Interpreters/InterserverCredentials.h b/src/Interpreters/InterserverCredentials.h new file mode 100644 index 00000000000..ffe568868dd --- /dev/null +++ b/src/Interpreters/InterserverCredentials.h @@ -0,0 +1,70 @@ +#pragma once + +#include +#include +#include +#include + +namespace DB +{ + +/// InterserverCredentials implements authentication using a CurrentCredentials, which +/// is configured, e.g. +/// <interserver_http_credentials> +/// <user>admin</user> +/// <password>222</password> +/// <old> +/// <user>admin</user> +/// <password>qqq</password> +/// </old> +/// <old> +/// <user>johny</user> +/// <password>333</password> +/// </old> +/// </interserver_http_credentials> +class InterserverCredentials +{ +public: + using UserWithPassword = std::pair<std::string, std::string>; + using CheckResult = std::pair<std::string, bool>; + using CurrentCredentials = std::vector<UserWithPassword>; + + InterserverCredentials(const InterserverCredentials &) = delete; + + static std::unique_ptr<InterserverCredentials> make(const Poco::Util::AbstractConfiguration & config, const std::string & root_tag); + + InterserverCredentials(const std::string & current_user_, const std::string & current_password_, const CurrentCredentials & all_users_store_) + : current_user(current_user_) + , current_password(current_password_) + , all_users_store(all_users_store_) + {} + + CheckResult isValidUser(const UserWithPassword & credentials) const; + CheckResult isValidUser(const std::string & user, const std::string & password) const; + + std::string getUser() const { return current_user; } + + std::string getPassword() const { return current_password; } + + +private: + std::string current_user; + std::string current_password; + + /// In the common case this store contains a single record + CurrentCredentials all_users_store; + + static CurrentCredentials parseCredentialsFromConfig( + const std::string & current_user_, + const std::string & current_password_, + const Poco::Util::AbstractConfiguration & config, + const std::string & root_tag); +}; + +using InterserverCredentialsPtr = std::shared_ptr<InterserverCredentials>; + +} diff --git a/src/Interpreters/InterserverIOHandler.h b/src/Interpreters/InterserverIOHandler.h index db95a00d0f7..b0c95ed3835 100644 --- a/src/Interpreters/InterserverIOHandler.h +++ b/src/Interpreters/InterserverIOHandler.h @@ -9,13 +9,17 @@ #include #include -#include - #include #include #include #include +namespace zkutil +{ + class ZooKeeper; + using ZooKeeperPtr = std::shared_ptr<ZooKeeper>; +} + namespace DB { diff --git a/src/Interpreters/JoinSwitcher.h b/src/Interpreters/JoinSwitcher.h index 1fd719cd5dc..75ff7bb9b2c 100644 --- a/src/Interpreters/JoinSwitcher.h +++ b/src/Interpreters/JoinSwitcher.h @@ -19,6 +19,8 @@ class JoinSwitcher : public IJoin
public: JoinSwitcher(std::shared_ptr table_join_, const Block & right_sample_block_); + const TableJoin & getTableJoin() const override { return *table_join; } + /// Add block of data from right hand of JOIN into current join object. /// If join-in-memory memory limit exceeded switches to join-on-disk and continue with it. /// @returns false, if join-on-disk disk limit exceeded diff --git a/src/Interpreters/JoinToSubqueryTransformVisitor.cpp b/src/Interpreters/JoinToSubqueryTransformVisitor.cpp index 772077ba92a..3cb8004a29f 100644 --- a/src/Interpreters/JoinToSubqueryTransformVisitor.cpp +++ b/src/Interpreters/JoinToSubqueryTransformVisitor.cpp @@ -11,6 +11,7 @@ #include #include #include +#include #include #include #include @@ -73,16 +74,29 @@ public: } } - void addTableColumns(const String & table_name) + using ShouldAddColumnPredicate = std::function; + + /// Add columns from table with table_name into select expression list + /// Use should_add_column_predicate for check if column name should be added + /// By default should_add_column_predicate returns true for any column name + void addTableColumns( + const String & table_name, + ShouldAddColumnPredicate should_add_column_predicate = [](const String &) { return true; }) { auto it = table_columns.find(table_name); if (it == table_columns.end()) throw Exception("Unknown qualified identifier: " + table_name, ErrorCodes::UNKNOWN_IDENTIFIER); for (const auto & column : it->second) - new_select_expression_list->children.push_back( - std::make_shared(std::vector{it->first, column.name})); + { + if (should_add_column_predicate(column.name)) + { + auto identifier = std::make_shared(std::vector{it->first, column.name}); + new_select_expression_list->children.emplace_back(std::move(identifier)); + } + } } + }; static bool needChildVisit(const ASTPtr &, const ASTPtr &) { return false; } @@ -119,6 +133,13 @@ private: data.addTableColumns(identifier.name()); } + else if (auto * columns_matcher = child->as()) + { + has_asterisks = true; + + for (auto & table_name : data.tables_order) + data.addTableColumns(table_name, [&](const String & column_name) { return columns_matcher->isColumnMatching(column_name); }); + } else data.new_select_expression_list->children.push_back(child); } @@ -225,7 +246,7 @@ struct CollectColumnIdentifiersMatcher : identifiers(identifiers_) {} - void addIdentirier(const ASTIdentifier & ident) + void addIdentifier(const ASTIdentifier & ident) { for (const auto & aliases : ignored) if (aliases.count(ident.name())) @@ -267,7 +288,7 @@ struct CollectColumnIdentifiersMatcher static void visit(const ASTIdentifier & ident, const ASTPtr &, Data & data) { - data.addIdentirier(ident); + data.addIdentifier(ident); } static void visit(const ASTFunction & func, const ASTPtr &, Data & data) diff --git a/src/Interpreters/JoinedTables.cpp b/src/Interpreters/JoinedTables.cpp index 17d7949e478..d947a3e2a48 100644 --- a/src/Interpreters/JoinedTables.cpp +++ b/src/Interpreters/JoinedTables.cpp @@ -1,25 +1,24 @@ #include -#include -#include -#include + #include #include - -#include -#include -#include -#include -#include - +#include +#include +#include #include +#include +#include #include #include #include #include -#include -#include #include #include +#include +#include +#include +#include +#include namespace DB { @@ -129,7 +128,7 @@ using RenameQualifiedIdentifiersVisitor = InDepthNodeVisitorgetQueryContext()->executeTableFunction(left_table_expression); StorageID table_id = StorageID::createEmpty(); if (left_db_and_table) { - table_id = 
context.resolveStorageID(StorageID(left_db_and_table->database, left_db_and_table->table, left_db_and_table->uuid)); + table_id = context->resolveStorageID(StorageID(left_db_and_table->database, left_db_and_table->table, left_db_and_table->uuid)); } else /// If the table is not specified - use the table `system.one`. { table_id = StorageID("system", "one"); } - if (auto view_source = context.getViewSource()) + if (auto view_source = context->getViewSource()) { const auto & storage_values = static_cast(*view_source); auto tmp_table_id = storage_values.getStorageID(); if (tmp_table_id.database_name == table_id.database_name && tmp_table_id.table_name == table_id.table_name) { /// Read from view source. - return context.getViewSource(); + return context->getViewSource(); } } @@ -192,7 +191,7 @@ bool JoinedTables::resolveTables() if (tables_with_columns.size() != table_expressions.size()) throw Exception("Unexpected tables count", ErrorCodes::LOGICAL_ERROR); - const auto & settings = context.getSettingsRef(); + const auto & settings = context->getSettingsRef(); if (settings.joined_subquery_requires_alias && tables_with_columns.size() > 1) { for (size_t i = 0; i < tables_with_columns.size(); ++i) @@ -234,7 +233,7 @@ void JoinedTables::rewriteDistributedInAndJoins(ASTPtr & query) String database; if (!renamed_tables.empty()) - database = context.getCurrentDatabase(); + database = context->getCurrentDatabase(); for (auto & [subquery, ast_tables] : renamed_tables) { @@ -254,8 +253,8 @@ std::shared_ptr JoinedTables::makeTableJoin(const ASTSelectQuery & se if (tables_with_columns.size() < 2) return {}; - auto settings = context.getSettingsRef(); - auto table_join = std::make_shared(settings, context.getTemporaryVolume()); + auto settings = context->getSettingsRef(); + auto table_join = std::make_shared(settings, context->getTemporaryVolume()); const ASTTablesInSelectQueryElement * ast_join = select_query.join(); const auto & table_to_join = ast_join->table_expression->as(); @@ -263,7 +262,7 @@ std::shared_ptr JoinedTables::makeTableJoin(const ASTSelectQuery & se /// TODO This syntax does not support specifying a database name. 
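The mechanical change running through this whole diff, and through JoinedTables in particular, is that interpreters stop holding a raw `Context &` and instead keep a shared `ContextPtr`, usually via the `WithContext` mixin, so call sites become `context->...` / `getContext()->...`. A minimal sketch of the pattern, for illustration only (the real mixin and Context have considerably more machinery):

    #include <memory>
    #include <utility>

    struct Context;
    using ContextPtr = std::shared_ptr<const Context>;

    /// Mixin that owns a shared context, so an interpreter no longer holds
    /// a raw reference that can dangle when the query context goes away.
    struct WithContext
    {
        explicit WithContext(ContextPtr context_) : context(std::move(context_)) {}
        ContextPtr getContext() const { return context; }
    private:
        ContextPtr context;
    };

    class SomeInterpreter : public WithContext
    {
    public:
        explicit SomeInterpreter(ContextPtr context_) : WithContext(std::move(context_)) {}
        /// Members now call getContext()->getSettingsRef() instead of context.getSettingsRef().
    };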
if (table_to_join.database_and_table_name) { - auto joined_table_id = context.resolveStorageID(table_to_join.database_and_table_name); + auto joined_table_id = context->resolveStorageID(table_to_join.database_and_table_name); StoragePtr table = DatabaseCatalog::instance().tryGetTable(joined_table_id, context); if (table) { diff --git a/src/Interpreters/JoinedTables.h b/src/Interpreters/JoinedTables.h index 812808fed61..52eb71e419d 100644 --- a/src/Interpreters/JoinedTables.h +++ b/src/Interpreters/JoinedTables.h @@ -22,11 +22,11 @@ using StorageMetadataPtr = std::shared_ptr; class JoinedTables { public: - JoinedTables(Context && context, const ASTSelectQuery & select_query); + JoinedTables(ContextPtr context, const ASTSelectQuery & select_query); void reset(const ASTSelectQuery & select_query) { - *this = JoinedTables(std::move(context), select_query); + *this = JoinedTables(Context::createCopy(context), select_query); } StoragePtr getLeftTableStorage(); @@ -48,7 +48,7 @@ public: std::unique_ptr makeLeftTableSubquery(const SelectQueryOptions & select_options); private: - Context context; + ContextPtr context; std::vector table_expressions; TablesWithColumns tables_with_columns; diff --git a/src/Interpreters/MergeJoin.cpp b/src/Interpreters/MergeJoin.cpp index ddeaf053225..a9f50cdda0e 100644 --- a/src/Interpreters/MergeJoin.cpp +++ b/src/Interpreters/MergeJoin.cpp @@ -76,6 +76,7 @@ int nullableCompareAt(const IColumn & left_column, const IColumn & right_column, return left_column.compareAt(lhs_pos, rhs_pos, right_column, null_direction_hint); } +/// Get first and last row from sorted block Block extractMinMax(const Block & block, const Block & keys) { if (block.rows() == 0) @@ -86,7 +87,7 @@ Block extractMinMax(const Block & block, const Block & keys) for (size_t i = 0; i < columns.size(); ++i) { - const auto & src_column = block.getByName(keys.getByPosition(i).name); + const auto & src_column = block.getByName(min_max.getByPosition(i).name); columns[i]->insertFrom(*src_column.column, 0); columns[i]->insertFrom(*src_column.column, block.rows() - 1); @@ -465,6 +466,7 @@ MergeJoin::MergeJoin(std::shared_ptr table_join_, const Block & right table_join->splitAdditionalColumns(right_sample_block, right_table_keys, right_columns_to_add); JoinCommon::removeLowCardinalityInplace(right_table_keys); + JoinCommon::removeLowCardinalityInplace(right_sample_block, table_join->keyNamesRight()); const NameSet required_right_keys = table_join->requiredRightKeys(); for (const auto & column : right_table_keys) @@ -485,6 +487,7 @@ MergeJoin::MergeJoin(std::shared_ptr table_join_, const Block & right left_blocks_buffer = std::make_shared(left_sort_description, max_bytes); } +/// Has to be called even if totals are empty void MergeJoin::setTotals(const Block & totals_block) { totals = totals_block; diff --git a/src/Interpreters/MergeJoin.h b/src/Interpreters/MergeJoin.h index a13d0304907..f286e74b385 100644 --- a/src/Interpreters/MergeJoin.h +++ b/src/Interpreters/MergeJoin.h @@ -23,6 +23,7 @@ class MergeJoin : public IJoin public: MergeJoin(std::shared_ptr table_join_, const Block & right_sample_block); + const TableJoin & getTableJoin() const override { return *table_join; } bool addJoinedBlock(const Block & block, bool check_limits) override; void joinBlock(Block &, ExtraBlockPtr & not_processed) override; void joinTotals(Block &) const override; @@ -76,12 +77,15 @@ private: Block right_table_keys; Block right_columns_to_add; SortedBlocksWriter::Blocks right_blocks; + + /// Each block stores first and last 
row from corresponding sorted block on disk Blocks min_max_right_blocks; std::shared_ptr left_blocks_buffer; std::shared_ptr used_rows_bitmap; mutable std::unique_ptr cached_right_blocks; std::vector> loaded_right_blocks; std::unique_ptr disk_writer; + /// Set of files with sorted blocks SortedBlocksWriter::SortedFiles flushed_right_blocks; Block totals; std::atomic is_in_memory{true}; diff --git a/src/Interpreters/MetricLog.cpp b/src/Interpreters/MetricLog.cpp index ce5d5793b87..fd1c120f18c 100644 --- a/src/Interpreters/MetricLog.cpp +++ b/src/Interpreters/MetricLog.cpp @@ -41,7 +41,7 @@ void MetricLogElement::appendToBlock(MutableColumns & columns) const { size_t column_idx = 0; - columns[column_idx++]->insert(DateLUT::instance().toDayNum(event_time)); + columns[column_idx++]->insert(DateLUT::instance().toDayNum(event_time).toUnderType()); columns[column_idx++]->insert(event_time); columns[column_idx++]->insert(event_time_microseconds); columns[column_idx++]->insert(milliseconds); diff --git a/src/Interpreters/MonotonicityCheckVisitor.h b/src/Interpreters/MonotonicityCheckVisitor.h index 87571a44eb0..350318047c7 100644 --- a/src/Interpreters/MonotonicityCheckVisitor.h +++ b/src/Interpreters/MonotonicityCheckVisitor.h @@ -26,7 +26,7 @@ public: struct Data { const TablesWithColumns & tables; - const Context & context; + ContextPtr context; const std::unordered_set & group_by_function_hashes; Monotonicity monotonicity{true, true, true}; ASTIdentifier * identifier = nullptr; diff --git a/src/Interpreters/MutationsInterpreter.cpp b/src/Interpreters/MutationsInterpreter.cpp index 6e732ea6783..1315f9efa05 100644 --- a/src/Interpreters/MutationsInterpreter.cpp +++ b/src/Interpreters/MutationsInterpreter.cpp @@ -26,6 +26,7 @@ #include #include #include +#include namespace DB @@ -50,7 +51,7 @@ class FirstNonDeterministicFunctionMatcher public: struct Data { - const Context & context; + ContextPtr context; std::optional nondeterministic_function_name; }; @@ -80,7 +81,7 @@ public: using FirstNonDeterministicFunctionFinder = InDepthNodeVisitor; -std::optional findFirstNonDeterministicFunctionName(const MutationCommand & command, const Context & context) +std::optional findFirstNonDeterministicFunctionName(const MutationCommand & command, ContextPtr context) { FirstNonDeterministicFunctionMatcher::Data finder_data{context, std::nullopt}; @@ -113,7 +114,7 @@ std::optional findFirstNonDeterministicFunctionName(const MutationComman return {}; } -ASTPtr prepareQueryAffectedAST(const std::vector & commands, const StoragePtr & storage, const Context & context) +ASTPtr prepareQueryAffectedAST(const std::vector & commands, const StoragePtr & storage, ContextPtr context) { /// Execute `SELECT count() FROM storage WHERE predicate1 OR predicate2 OR ...` query. /// The result can differ from the number of affected rows (e.g. 
if there is an UPDATE command that @@ -178,7 +179,7 @@ bool isStorageTouchedByMutations( const StoragePtr & storage, const StorageMetadataPtr & metadata_snapshot, const std::vector<MutationCommand> & commands, - Context context_copy) + ContextPtr context_copy) { if (commands.empty()) return false; @@ -206,8 +207,8 @@ bool isStorageTouchedByMutations( if (all_commands_can_be_skipped) return false; - context_copy.setSetting("max_streams_to_max_threads_ratio", 1); - context_copy.setSetting("max_threads", 1); + context_copy->setSetting("max_streams_to_max_threads_ratio", 1); + context_copy->setSetting("max_threads", 1); ASTPtr select_query = prepareQueryAffectedAST(commands, storage, context_copy); @@ -232,7 +233,7 @@ bool isStorageTouchedByMutations( ASTPtr getPartitionAndPredicateExpressionForMutationCommand( const MutationCommand & command, const StoragePtr & storage, - const Context & context + ContextPtr context ) { ASTPtr partition_predicate_as_ast_func; @@ -266,7 +267,7 @@ MutationsInterpreter::MutationsInterpreter( StoragePtr storage_, const StorageMetadataPtr & metadata_snapshot_, MutationCommands commands_, - const Context & context_, + ContextPtr context_, bool can_execute_) : storage(std::move(storage_)) , metadata_snapshot(metadata_snapshot_) @@ -349,6 +350,35 @@ static void validateUpdateColumns( } } +/// Returns ASTs of the updated nested subcolumns, if all of the subcolumns were updated. +/// They are used to validate the sizes of the nested arrays. +/// If some subcolumns were updated and some were not, +/// it makes sense to validate only the updated columns against their old versions, +/// because their sizes could not have changed: the sizes of all nested subcolumns must stay consistent. +static std::optional<std::vector<ASTPtr>> getExpressionsOfUpdatedNestedSubcolumns( + const String & column_name, + const NamesAndTypesList & all_columns, + const std::unordered_map<String, ASTPtr> & column_to_update_expression) +{ + std::vector<ASTPtr> res; + auto source_name = Nested::splitName(column_name).first; + + /// Check this nested subcolumn + for (const auto & column : all_columns) + { + auto split = Nested::splitName(column.name); + if (isArray(column.type) && split.first == source_name && !split.second.empty()) + { + auto it = column_to_update_expression.find(column.name); + if (it == column_to_update_expression.end()) + return {}; + + res.push_back(it->second); + } + } + + return res; +} ASTPtr MutationsInterpreter::prepare(bool dry_run) { @@ -398,7 +428,7 @@ ASTPtr MutationsInterpreter::prepare(bool dry_run) auto dependencies = getAllColumnDependencies(metadata_snapshot, updated_columns); /// First, break a sequence of commands into stages. - for (const auto & command : commands) + for (auto & command : commands) { if (command.type == MutationCommand::DELETE) { @@ -438,12 +468,43 @@ /// /// Outer CAST is added just in case we don't trust the return type of 'if'.
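Concretely, each UPDATE of a column is compiled into an expression over the old value of that column. A hedged, stand-alone sketch of the shape being built in the hunk below (names follow the surrounding code; the nested-array validation branch that this diff adds is omitted):

    /// Builds roughly: CAST(if(predicate, CAST(update_expr, 'T'), column), 'T')
    ASTPtr makeUpdatedColumnAst(
        const ASTPtr & predicate, const ASTPtr & update_expr,
        const String & column, const String & type_name)
    {
        auto type_literal = std::make_shared<ASTLiteral>(type_name);
        return makeASTFunction("CAST",
            makeASTFunction("if",
                predicate->clone(),
                makeASTFunction("CAST", update_expr->clone(), type_literal),
                std::make_shared<ASTIdentifier>(column)),
            type_literal->clone());
    }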
- auto type_literal = std::make_shared(columns_desc.getPhysical(column).type->getName()); + const auto & type = columns_desc.getPhysical(column).type; + auto type_literal = std::make_shared(type->getName()); const auto & update_expr = kv.second; + + ASTPtr condition = getPartitionAndPredicateExpressionForMutationCommand(command); + + /// And new check validateNestedArraySizes for Nested subcolumns + if (isArray(type) && !Nested::splitName(column).second.empty()) + { + std::shared_ptr function = nullptr; + + auto nested_update_exprs = getExpressionsOfUpdatedNestedSubcolumns(column, all_columns, command.column_to_update_expression); + if (!nested_update_exprs) + { + function = makeASTFunction("validateNestedArraySizes", + condition, + update_expr->clone(), + std::make_shared(column)); + condition = makeASTFunction("and", condition, function); + } + else if (nested_update_exprs->size() > 1) + { + function = std::make_shared(); + function->name = "validateNestedArraySizes"; + function->arguments = std::make_shared(); + function->children.push_back(function->arguments); + function->arguments->children.push_back(condition); + for (const auto & it : *nested_update_exprs) + function->arguments->children.push_back(it->clone()); + condition = makeASTFunction("and", condition, function); + } + } + auto updated_column = makeASTFunction("CAST", makeASTFunction("if", - getPartitionAndPredicateExpressionForMutationCommand(command), + condition, makeASTFunction("CAST", update_expr->clone(), type_literal), @@ -649,9 +710,9 @@ ASTPtr MutationsInterpreter::prepareInterpreterSelectQuery(std::vector & all_asts->children.push_back(std::make_shared(column)); auto syntax_result = TreeRewriter(context).analyze(all_asts, all_columns, storage, metadata_snapshot); - if (context.hasQueryContext()) + if (context->hasQueryContext()) for (const auto & it : syntax_result->getScalars()) - context.getQueryContext().addScalar(it.first, it.second); + context->getQueryContext()->addScalar(it.first, it.second); stage.analyzer = std::make_unique(all_asts, syntax_result, context); @@ -673,16 +734,24 @@ ASTPtr MutationsInterpreter::prepareInterpreterSelectQuery(std::vector & for (const auto & kv : stage.column_to_updated) stage.analyzer->appendExpression(actions_chain, kv.second, dry_run); + auto & actions = actions_chain.getLastStep().actions(); + for (const auto & kv : stage.column_to_updated) { - actions_chain.getLastStep().actions()->addAlias( - kv.second->getColumnName(), kv.first, /* can_replace = */ true); + auto column_name = kv.second->getColumnName(); + const auto & dag_node = actions->findInIndex(column_name); + const auto & alias = actions->addAlias(dag_node, kv.first); + actions->addOrReplaceInIndex(alias); } } /// Remove all intermediate columns. 
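The three-step alias dance above replaces the old single call `addAlias(name, alias, /* can_replace = */ true)`. A small sketch of the new pattern, assuming the ActionsDAG interface used in this hunk:

    /// Rename the computed update expression to the target column name and
    /// make the alias visible through the DAG's output index.
    void aliasUpdatedColumn(const ActionsDAGPtr & actions,
                            const String & expr_name, const String & column_name)
    {
        const auto & node = actions->findInIndex(expr_name);   /// throws if the expression is absent
        const auto & alias = actions->addAlias(node, column_name);
        actions->addOrReplaceInIndex(alias);
    }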
actions_chain.addStep(); - actions_chain.getLastStep().required_output.assign(stage.output_columns.begin(), stage.output_columns.end()); + actions_chain.getLastStep().required_output.clear(); + ActionsDAG::NodeRawConstPtrs new_index; + for (const auto & name : stage.output_columns) + actions_chain.getLastStep().addRequiredOutput(name); + actions_chain.getLastActions(); actions_chain.finalize(); @@ -748,14 +817,17 @@ QueryPipelinePtr MutationsInterpreter::addStreamsForLaterStages(const std::vecto SubqueriesForSets & subqueries_for_sets = stage.analyzer->getSubqueriesForSets(); if (!subqueries_for_sets.empty()) { - const Settings & settings = context.getSettingsRef(); + const Settings & settings = context->getSettingsRef(); SizeLimits network_transfer_limits( settings.max_rows_to_transfer, settings.max_bytes_to_transfer, settings.transfer_overflow_mode); addCreatingSetsStep(plan, std::move(subqueries_for_sets), network_transfer_limits, context); } } - auto pipeline = plan.buildQueryPipeline(QueryPlanOptimizationSettings(context.getSettingsRef())); + auto pipeline = plan.buildQueryPipeline( + QueryPlanOptimizationSettings::fromContext(context), + BuildQueryPipelineSettings::fromContext(context)); + pipeline->addSimpleTransform([&](const Block & header) { return std::make_shared(header); @@ -769,7 +841,7 @@ void MutationsInterpreter::validate() if (!select_interpreter) select_interpreter = std::make_unique(mutation_ast, context, storage, metadata_snapshot, select_limits); - const Settings & settings = context.getSettingsRef(); + const Settings & settings = context->getSettingsRef(); /// For Replicated* storages mutations cannot employ non-deterministic functions /// because that produces inconsistencies between replicas diff --git a/src/Interpreters/MutationsInterpreter.h b/src/Interpreters/MutationsInterpreter.h index dcebba5743e..34a9b61771d 100644 --- a/src/Interpreters/MutationsInterpreter.h +++ b/src/Interpreters/MutationsInterpreter.h @@ -23,13 +23,13 @@ bool isStorageTouchedByMutations( const StoragePtr & storage, const StorageMetadataPtr & metadata_snapshot, const std::vector & commands, - Context context_copy + ContextPtr context_copy ); ASTPtr getPartitionAndPredicateExpressionForMutationCommand( const MutationCommand & command, const StoragePtr & storage, - const Context & context + ContextPtr context ); /// Create an input stream that will read data from storage and apply mutation commands (UPDATEs, DELETEs, MATERIALIZEs) @@ -43,7 +43,7 @@ public: StoragePtr storage_, const StorageMetadataPtr & metadata_snapshot_, MutationCommands commands_, - const Context & context_, + ContextPtr context_, bool can_execute_); void validate(); @@ -74,7 +74,7 @@ private: StoragePtr storage; StorageMetadataPtr metadata_snapshot; MutationCommands commands; - Context context; + ContextPtr context; bool can_execute; SelectQueryOptions select_limits; @@ -101,7 +101,7 @@ private: struct Stage { - Stage(const Context & context_) : expressions_chain(context_) {} + explicit Stage(ContextPtr context_) : expressions_chain(context_) {} ASTs filters; std::unordered_map column_to_updated; diff --git a/src/Interpreters/MySQL/InterpretersMySQLDDLQuery.cpp b/src/Interpreters/MySQL/InterpretersMySQLDDLQuery.cpp index 7f4da0638d4..2420255c5c1 100644 --- a/src/Interpreters/MySQL/InterpretersMySQLDDLQuery.cpp +++ b/src/Interpreters/MySQL/InterpretersMySQLDDLQuery.cpp @@ -7,6 +7,7 @@ #include #include #include +#include #include #include #include @@ -40,7 +41,7 @@ namespace MySQLInterpreter { static inline String 
resolveDatabase( - const String & database_in_query, const String & replica_mysql_database, const String & replica_clickhouse_database, const Context & context) + const String & database_in_query, const String & replica_mysql_database, const String & replica_clickhouse_database, ContextPtr context) { if (!database_in_query.empty()) { @@ -62,7 +63,7 @@ static inline String resolveDatabase( /// context.getCurrentDatabase() always returns the `default` database /// When USE replica_mysql_database; CREATE TABLE table_name; /// context.getCurrentDatabase() always returns replica_clickhouse_database - const String & current_database = context.getCurrentDatabase(); + const String & current_database = context->getCurrentDatabase(); return current_database != replica_clickhouse_database ? "" : replica_clickhouse_database; } @@ -116,7 +117,7 @@ static inline NamesAndTypesList getColumnsList(ASTExpressionList * columns_defin return columns_name_and_type; } -static NamesAndTypesList getNames(const ASTFunction & expr, const Context & context, const NamesAndTypesList & columns) +static NamesAndTypesList getNames(const ASTFunction & expr, ContextPtr context, const NamesAndTypesList & columns) { if (expr.arguments->children.empty()) return NamesAndTypesList{}; @@ -157,7 +158,7 @@ static NamesAndTypesList modifyPrimaryKeysToNonNullable(const NamesAndTypesList } static inline std::tuple getKeys( - ASTExpressionList * columns_define, ASTExpressionList * indices_define, const Context & context, NamesAndTypesList & columns) + ASTExpressionList * columns_define, ASTExpressionList * indices_define, ContextPtr context, NamesAndTypesList & columns) { NameSet increment_columns; auto keys = makeASTFunction("tuple"); @@ -369,7 +370,7 @@ static ASTPtr getOrderByPolicy( return order_by_expression; } -void InterpreterCreateImpl::validate(const InterpreterCreateImpl::TQuery & create_query, const Context &) +void InterpreterCreateImpl::validate(const InterpreterCreateImpl::TQuery & create_query, ContextPtr) { /// This is dangerous, because the LIKE table may not exist in ClickHouse if (create_query.like_table) @@ -382,7 +383,7 @@ void InterpreterCreateImpl::validate(const InterpreterCreateImpl::TQuery & creat } ASTs InterpreterCreateImpl::getRewrittenQueries( - const TQuery & create_query, const Context & context, const String & mapped_to_database, const String & mysql_database) + const TQuery & create_query, ContextPtr context, const String & mapped_to_database, const String & mysql_database) { auto rewritten_query = std::make_shared<ASTCreateQuery>(); if (resolveDatabase(create_query.database, mysql_database, mapped_to_database, context) != mapped_to_database) @@ -411,13 +412,26 @@ ASTs InterpreterCreateImpl::getRewrittenQueries( return column_declaration; }; - /// Add _sign and _version column. + /// Add _sign and _version columns. String sign_column_name = getUniqueColumnName(columns_name_and_type, "_sign"); String version_column_name = getUniqueColumnName(columns_name_and_type, "_version"); columns->set(columns->columns, InterpreterCreateQuery::formatColumns(columns_name_and_type)); columns->columns->children.emplace_back(create_materialized_column_declaration(sign_column_name, "Int8", UInt64(1))); columns->columns->children.emplace_back(create_materialized_column_declaration(version_column_name, "UInt64", UInt64(1))); + /// Add minmax skipping index for _version column.
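The index declaration built in the next hunk renders, together with the two materialized columns above, into a fixed DDL suffix. The unit tests later in this diff capture it verbatim as a constant:

    /// From gtest_create_rewritten.cpp below: every rewritten MaterializeMySQL
    /// table ends with hidden _sign/_version columns plus a minmax index.
    static const char MATERIALIZEMYSQL_TABLE_COLUMNS[] = ", `_sign` Int8() MATERIALIZED 1"
        ", `_version` UInt64() MATERIALIZED 1"
        ", INDEX _version _version TYPE minmax GRANULARITY 1";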
+ auto version_index = std::make_shared(); + version_index->name = version_column_name; + auto index_expr = std::make_shared(version_column_name); + auto index_type = makeASTFunction("minmax"); + index_type->no_empty_args = true; + version_index->set(version_index->expr, index_expr); + version_index->set(version_index->type, index_type); + version_index->granularity = 1; + ASTPtr indices = std::make_shared(); + indices->children.push_back(version_index); + columns->set(columns->indices, indices); + auto storage = std::make_shared(); /// The `partition by` expression must use primary keys, otherwise the primary keys will not be merge. @@ -439,12 +453,12 @@ ASTs InterpreterCreateImpl::getRewrittenQueries( return ASTs{rewritten_query}; } -void InterpreterDropImpl::validate(const InterpreterDropImpl::TQuery & /*query*/, const Context & /*context*/) +void InterpreterDropImpl::validate(const InterpreterDropImpl::TQuery & /*query*/, ContextPtr /*context*/) { } ASTs InterpreterDropImpl::getRewrittenQueries( - const InterpreterDropImpl::TQuery & drop_query, const Context & context, const String & mapped_to_database, const String & mysql_database) + const InterpreterDropImpl::TQuery & drop_query, ContextPtr context, const String & mapped_to_database, const String & mysql_database) { const auto & database_name = resolveDatabase(drop_query.database, mysql_database, mapped_to_database, context); @@ -457,14 +471,14 @@ ASTs InterpreterDropImpl::getRewrittenQueries( return ASTs{rewritten_query}; } -void InterpreterRenameImpl::validate(const InterpreterRenameImpl::TQuery & rename_query, const Context & /*context*/) +void InterpreterRenameImpl::validate(const InterpreterRenameImpl::TQuery & rename_query, ContextPtr /*context*/) { if (rename_query.exchange) throw Exception("Cannot execute exchange for external ddl query.", ErrorCodes::NOT_IMPLEMENTED); } ASTs InterpreterRenameImpl::getRewrittenQueries( - const InterpreterRenameImpl::TQuery & rename_query, const Context & context, const String & mapped_to_database, const String & mysql_database) + const InterpreterRenameImpl::TQuery & rename_query, ContextPtr context, const String & mapped_to_database, const String & mysql_database) { ASTRenameQuery::Elements elements; for (const auto & rename_element : rename_query.elements) @@ -493,12 +507,12 @@ ASTs InterpreterRenameImpl::getRewrittenQueries( return ASTs{rewritten_query}; } -void InterpreterAlterImpl::validate(const InterpreterAlterImpl::TQuery & /*query*/, const Context & /*context*/) +void InterpreterAlterImpl::validate(const InterpreterAlterImpl::TQuery & /*query*/, ContextPtr /*context*/) { } ASTs InterpreterAlterImpl::getRewrittenQueries( - const InterpreterAlterImpl::TQuery & alter_query, const Context & context, const String & mapped_to_database, const String & mysql_database) + const InterpreterAlterImpl::TQuery & alter_query, ContextPtr context, const String & mapped_to_database, const String & mysql_database) { if (resolveDatabase(alter_query.database, mysql_database, mapped_to_database, context) != mapped_to_database) return {}; diff --git a/src/Interpreters/MySQL/InterpretersMySQLDDLQuery.h b/src/Interpreters/MySQL/InterpretersMySQLDDLQuery.h index 497a661cc7f..3202612ac94 100644 --- a/src/Interpreters/MySQL/InterpretersMySQLDDLQuery.h +++ b/src/Interpreters/MySQL/InterpretersMySQLDDLQuery.h @@ -1,62 +1,66 @@ #pragma once -#include -#include #include #include #include #include +#include #include #include +#include namespace DB { namespace MySQLInterpreter { + struct InterpreterDropImpl + { + 
using TQuery = ASTDropQuery; -struct InterpreterDropImpl -{ - using TQuery = ASTDropQuery; + static void validate(const TQuery & query, ContextPtr context); - static void validate(const TQuery & query, const Context & context); + static ASTs getRewrittenQueries( + const TQuery & drop_query, ContextPtr context, const String & mapped_to_database, const String & mysql_database); + }; - static ASTs getRewrittenQueries(const TQuery & drop_query, const Context & context, const String & mapped_to_database, const String & mysql_database); -}; + struct InterpreterAlterImpl + { + using TQuery = MySQLParser::ASTAlterQuery; -struct InterpreterAlterImpl -{ - using TQuery = MySQLParser::ASTAlterQuery; + static void validate(const TQuery & query, ContextPtr context); - static void validate(const TQuery & query, const Context & context); + static ASTs getRewrittenQueries( + const TQuery & alter_query, ContextPtr context, const String & mapped_to_database, const String & mysql_database); + }; - static ASTs getRewrittenQueries(const TQuery & alter_query, const Context & context, const String & mapped_to_database, const String & mysql_database); -}; + struct InterpreterRenameImpl + { + using TQuery = ASTRenameQuery; -struct InterpreterRenameImpl -{ - using TQuery = ASTRenameQuery; + static void validate(const TQuery & query, ContextPtr context); - static void validate(const TQuery & query, const Context & context); + static ASTs getRewrittenQueries( + const TQuery & rename_query, ContextPtr context, const String & mapped_to_database, const String & mysql_database); + }; - static ASTs getRewrittenQueries(const TQuery & rename_query, const Context & context, const String & mapped_to_database, const String & mysql_database); -}; + struct InterpreterCreateImpl + { + using TQuery = MySQLParser::ASTCreateQuery; -struct InterpreterCreateImpl -{ - using TQuery = MySQLParser::ASTCreateQuery; + static void validate(const TQuery & query, ContextPtr context); - static void validate(const TQuery & query, const Context & context); - - static ASTs getRewrittenQueries(const TQuery & create_query, const Context & context, const String & mapped_to_database, const String & mysql_database); -}; + static ASTs getRewrittenQueries( + const TQuery & create_query, ContextPtr context, const String & mapped_to_database, const String & mysql_database); + }; template -class InterpreterMySQLDDLQuery : public IInterpreter +class InterpreterMySQLDDLQuery : public IInterpreter, WithContext { public: - InterpreterMySQLDDLQuery(const ASTPtr & query_ptr_, Context & context_, const String & mapped_to_database_, const String & mysql_database_) - : query_ptr(query_ptr_), context(context_), mapped_to_database(mapped_to_database_), mysql_database(mysql_database_) + InterpreterMySQLDDLQuery( + const ASTPtr & query_ptr_, ContextPtr context_, const String & mapped_to_database_, const String & mysql_database_) + : WithContext(context_), query_ptr(query_ptr_), mapped_to_database(mapped_to_database_), mysql_database(mysql_database_) { } @@ -64,18 +68,17 @@ public: { const typename InterpreterImpl::TQuery & query = query_ptr->as(); - InterpreterImpl::validate(query, context); - ASTs rewritten_queries = InterpreterImpl::getRewrittenQueries(query, context, mapped_to_database, mysql_database); + InterpreterImpl::validate(query, getContext()); + ASTs rewritten_queries = InterpreterImpl::getRewrittenQueries(query, getContext(), mapped_to_database, mysql_database); for (const auto & rewritten_query : rewritten_queries) - executeQuery("/* Rewritten MySQL DDL 
Query */ " + queryToString(rewritten_query), context, true); + executeQuery("/* Rewritten MySQL DDL Query */ " + queryToString(rewritten_query), getContext(), true); return BlockIO{}; } private: ASTPtr query_ptr; - Context & context; const String mapped_to_database; const String mysql_database; }; diff --git a/src/Interpreters/MySQL/tests/gtest_create_rewritten.cpp b/src/Interpreters/MySQL/tests/gtest_create_rewritten.cpp index 0d8e57aafc5..77a14e780c5 100644 --- a/src/Interpreters/MySQL/tests/gtest_create_rewritten.cpp +++ b/src/Interpreters/MySQL/tests/gtest_create_rewritten.cpp @@ -18,7 +18,7 @@ using namespace DB; -static inline ASTPtr tryRewrittenCreateQuery(const String & query, const Context & context) +static inline ASTPtr tryRewrittenCreateQuery(const String & query, ContextPtr context) { ParserExternalDDLQuery external_ddl_parser; ASTPtr ast = parseQuery(external_ddl_parser, "EXTERNAL DDL FROM MySQL(test_database, test_database) " + query, 0, 0); @@ -28,6 +28,10 @@ static inline ASTPtr tryRewrittenCreateQuery(const String & query, const Context context, "test_database", "test_database")[0]; } +static const char MATERIALIZEMYSQL_TABLE_COLUMNS[] = ", `_sign` Int8() MATERIALIZED 1" + ", `_version` UInt64() MATERIALIZED 1" + ", INDEX _version _version TYPE minmax GRANULARITY 1"; + TEST(MySQLCreateRewritten, ColumnsDataType) { tryRegisterFunctions(); @@ -45,46 +49,46 @@ TEST(MySQLCreateRewritten, ColumnsDataType) { EXPECT_EQ(queryToString(tryRewrittenCreateQuery( "CREATE TABLE `test_database`.`test_table_1`(`key` INT NOT NULL PRIMARY KEY, test " + test_type + ")", context_holder.context)), - "CREATE TABLE test_database.test_table_1 (`key` Int32, `test` Nullable(" + mapped_type + ")" - ", `_sign` Int8() MATERIALIZED 1, `_version` UInt64() MATERIALIZED 1) ENGINE = " + "CREATE TABLE test_database.test_table_1 (`key` Int32, `test` Nullable(" + mapped_type + ")" + + MATERIALIZEMYSQL_TABLE_COLUMNS + ") ENGINE = " "ReplacingMergeTree(_version) PARTITION BY intDiv(key, 4294967) ORDER BY tuple(key)"); EXPECT_EQ(queryToString(tryRewrittenCreateQuery( "CREATE TABLE `test_database`.`test_table_1`(`key` INT NOT NULL PRIMARY KEY, test " + test_type + " NOT NULL)", context_holder.context)), "CREATE TABLE test_database.test_table_1 (`key` Int32, `test` " + mapped_type + - ", `_sign` Int8() MATERIALIZED 1, `_version` UInt64() MATERIALIZED 1) ENGINE = " + MATERIALIZEMYSQL_TABLE_COLUMNS + ") ENGINE = " "ReplacingMergeTree(_version) PARTITION BY intDiv(key, 4294967) ORDER BY tuple(key)"); EXPECT_EQ(queryToString(tryRewrittenCreateQuery( "CREATE TABLE `test_database`.`test_table_1`(`key` INT NOT NULL PRIMARY KEY, test " + test_type + " COMMENT 'test_comment' NOT NULL)", context_holder.context)), "CREATE TABLE test_database.test_table_1 (`key` Int32, `test` " + mapped_type + - ", `_sign` Int8() MATERIALIZED 1, `_version` UInt64() MATERIALIZED 1) ENGINE = " + MATERIALIZEMYSQL_TABLE_COLUMNS + ") ENGINE = " "ReplacingMergeTree(_version) PARTITION BY intDiv(key, 4294967) ORDER BY tuple(key)"); if (Poco::toUpper(test_type).find("INT") != std::string::npos) { EXPECT_EQ(queryToString(tryRewrittenCreateQuery( "CREATE TABLE `test_database`.`test_table_1`(`key` INT NOT NULL PRIMARY KEY, test " + test_type + " UNSIGNED)", context_holder.context)), - "CREATE TABLE test_database.test_table_1 (`key` Int32, `test` Nullable(U" + mapped_type + ")" - ", `_sign` Int8() MATERIALIZED 1, `_version` UInt64() MATERIALIZED 1) ENGINE = " + "CREATE TABLE test_database.test_table_1 (`key` Int32, `test` Nullable(U" + mapped_type + ")" 
+ + MATERIALIZEMYSQL_TABLE_COLUMNS + ") ENGINE = " "ReplacingMergeTree(_version) PARTITION BY intDiv(key, 4294967) ORDER BY tuple(key)"); EXPECT_EQ(queryToString(tryRewrittenCreateQuery( "CREATE TABLE `test_database`.`test_table_1`(`key` INT NOT NULL PRIMARY KEY, test " + test_type + " COMMENT 'test_comment' UNSIGNED)", context_holder.context)), - "CREATE TABLE test_database.test_table_1 (`key` Int32, `test` Nullable(U" + mapped_type + ")" - ", `_sign` Int8() MATERIALIZED 1, `_version` UInt64() MATERIALIZED 1) ENGINE = " + "CREATE TABLE test_database.test_table_1 (`key` Int32, `test` Nullable(U" + mapped_type + ")" + + MATERIALIZEMYSQL_TABLE_COLUMNS + ") ENGINE = " "ReplacingMergeTree(_version) PARTITION BY intDiv(key, 4294967) ORDER BY tuple(key)"); EXPECT_EQ(queryToString(tryRewrittenCreateQuery( "CREATE TABLE `test_database`.`test_table_1`(`key` INT NOT NULL PRIMARY KEY, test " + test_type + " NOT NULL UNSIGNED)", context_holder.context)), "CREATE TABLE test_database.test_table_1 (`key` Int32, `test` U" + mapped_type + - ", `_sign` Int8() MATERIALIZED 1, `_version` UInt64() MATERIALIZED 1) ENGINE = " + MATERIALIZEMYSQL_TABLE_COLUMNS + ") ENGINE = " "ReplacingMergeTree(_version) PARTITION BY intDiv(key, 4294967) ORDER BY tuple(key)"); EXPECT_EQ(queryToString(tryRewrittenCreateQuery( "CREATE TABLE `test_database`.`test_table_1`(`key` INT NOT NULL PRIMARY KEY, test " + test_type + " COMMENT 'test_comment' UNSIGNED NOT NULL)", context_holder.context)), "CREATE TABLE test_database.test_table_1 (`key` Int32, `test` U" + mapped_type + - ", `_sign` Int8() MATERIALIZED 1, `_version` UInt64() MATERIALIZED 1) ENGINE = " + MATERIALIZEMYSQL_TABLE_COLUMNS + ") ENGINE = " "ReplacingMergeTree(_version) PARTITION BY intDiv(key, 4294967) ORDER BY tuple(key)"); } } @@ -109,13 +113,15 @@ TEST(MySQLCreateRewritten, PartitionPolicy) { EXPECT_EQ(queryToString(tryRewrittenCreateQuery( "CREATE TABLE `test_database`.`test_table_1` (`key` " + test_type + " PRIMARY KEY)", context_holder.context)), - "CREATE TABLE test_database.test_table_1 (`key` " + mapped_type + ", `_sign` Int8() MATERIALIZED 1, " - "`_version` UInt64() MATERIALIZED 1) ENGINE = ReplacingMergeTree(_version)" + partition_policy + " ORDER BY tuple(key)"); + "CREATE TABLE test_database.test_table_1 (`key` " + mapped_type + + MATERIALIZEMYSQL_TABLE_COLUMNS + + ") ENGINE = ReplacingMergeTree(_version)" + partition_policy + " ORDER BY tuple(key)"); EXPECT_EQ(queryToString(tryRewrittenCreateQuery( "CREATE TABLE `test_database`.`test_table_1` (`key` " + test_type + " NOT NULL PRIMARY KEY)", context_holder.context)), - "CREATE TABLE test_database.test_table_1 (`key` " + mapped_type + ", `_sign` Int8() MATERIALIZED 1, " - "`_version` UInt64() MATERIALIZED 1) ENGINE = ReplacingMergeTree(_version)" + partition_policy + " ORDER BY tuple(key)"); + "CREATE TABLE test_database.test_table_1 (`key` " + mapped_type + + MATERIALIZEMYSQL_TABLE_COLUMNS + + ") ENGINE = ReplacingMergeTree(_version)" + partition_policy + " ORDER BY tuple(key)"); } } @@ -138,23 +144,27 @@ TEST(MySQLCreateRewritten, OrderbyPolicy) { EXPECT_EQ(queryToString(tryRewrittenCreateQuery( "CREATE TABLE `test_database`.`test_table_1` (`key` " + test_type + " PRIMARY KEY, `key2` " + test_type + " UNIQUE KEY)", context_holder.context)), - "CREATE TABLE test_database.test_table_1 (`key` " + mapped_type + ", `key2` Nullable(" + mapped_type + "), `_sign` Int8() MATERIALIZED 1, " - "`_version` UInt64() MATERIALIZED 1) ENGINE = ReplacingMergeTree(_version)" + partition_policy + " ORDER BY (key, 
assumeNotNull(key2))"); + "CREATE TABLE test_database.test_table_1 (`key` " + mapped_type + ", `key2` Nullable(" + mapped_type + ")" + + MATERIALIZEMYSQL_TABLE_COLUMNS + + ") ENGINE = ReplacingMergeTree(_version)" + partition_policy + " ORDER BY (key, assumeNotNull(key2))"); EXPECT_EQ(queryToString(tryRewrittenCreateQuery( "CREATE TABLE `test_database`.`test_table_1` (`key` " + test_type + " NOT NULL PRIMARY KEY, `key2` " + test_type + " NOT NULL UNIQUE KEY)", context_holder.context)), - "CREATE TABLE test_database.test_table_1 (`key` " + mapped_type + ", `key2` " + mapped_type + ", `_sign` Int8() MATERIALIZED 1, " - "`_version` UInt64() MATERIALIZED 1) ENGINE = ReplacingMergeTree(_version)" + partition_policy + " ORDER BY (key, key2)"); + "CREATE TABLE test_database.test_table_1 (`key` " + mapped_type + ", `key2` " + mapped_type + + MATERIALIZEMYSQL_TABLE_COLUMNS + + ") ENGINE = ReplacingMergeTree(_version)" + partition_policy + " ORDER BY (key, key2)"); EXPECT_EQ(queryToString(tryRewrittenCreateQuery( "CREATE TABLE `test_database`.`test_table_1` (`key` " + test_type + " KEY UNIQUE KEY)", context_holder.context)), - "CREATE TABLE test_database.test_table_1 (`key` " + mapped_type + ", `_sign` Int8() MATERIALIZED 1, " - "`_version` UInt64() MATERIALIZED 1) ENGINE = ReplacingMergeTree(_version)" + partition_policy + " ORDER BY tuple(key)"); + "CREATE TABLE test_database.test_table_1 (`key` " + mapped_type + + MATERIALIZEMYSQL_TABLE_COLUMNS + + ") ENGINE = ReplacingMergeTree(_version)" + partition_policy + " ORDER BY tuple(key)"); EXPECT_EQ(queryToString(tryRewrittenCreateQuery( "CREATE TABLE `test_database`.`test_table_1` (`key` " + test_type + ", `key2` " + test_type + " UNIQUE KEY, PRIMARY KEY(`key`, `key2`))", context_holder.context)), - "CREATE TABLE test_database.test_table_1 (`key` " + mapped_type + ", `key2` " + mapped_type + ", `_sign` Int8() MATERIALIZED 1, " - "`_version` UInt64() MATERIALIZED 1) ENGINE = ReplacingMergeTree(_version)" + partition_policy + " ORDER BY (key, key2)"); + "CREATE TABLE test_database.test_table_1 (`key` " + mapped_type + ", `key2` " + mapped_type + + MATERIALIZEMYSQL_TABLE_COLUMNS + + ") ENGINE = ReplacingMergeTree(_version)" + partition_policy + " ORDER BY (key, key2)"); } } @@ -165,23 +175,27 @@ TEST(MySQLCreateRewritten, RewrittenQueryWithPrimaryKey) EXPECT_EQ(queryToString(tryRewrittenCreateQuery( "CREATE TABLE `test_database`.`test_table_1` (`key` int NOT NULL PRIMARY KEY) ENGINE=InnoDB DEFAULT CHARSET=utf8", context_holder.context)), - "CREATE TABLE test_database.test_table_1 (`key` Int32, `_sign` Int8() MATERIALIZED 1, `_version` UInt64() MATERIALIZED 1) ENGINE = ReplacingMergeTree(_version) " - "PARTITION BY intDiv(key, 4294967) ORDER BY tuple(key)"); + "CREATE TABLE test_database.test_table_1 (`key` Int32" + + std::string(MATERIALIZEMYSQL_TABLE_COLUMNS) + + ") ENGINE = ReplacingMergeTree(_version) PARTITION BY intDiv(key, 4294967) ORDER BY tuple(key)"); EXPECT_EQ(queryToString(tryRewrittenCreateQuery( "CREATE TABLE `test_database`.`test_table_1` (`key` int NOT NULL, PRIMARY KEY (`key`)) ENGINE=InnoDB DEFAULT CHARSET=utf8", context_holder.context)), - "CREATE TABLE test_database.test_table_1 (`key` Int32, `_sign` Int8() MATERIALIZED 1, `_version` UInt64() MATERIALIZED 1) ENGINE = ReplacingMergeTree(_version) " - "PARTITION BY intDiv(key, 4294967) ORDER BY tuple(key)"); + "CREATE TABLE test_database.test_table_1 (`key` Int32" + + std::string(MATERIALIZEMYSQL_TABLE_COLUMNS) + + ") ENGINE = ReplacingMergeTree(_version) PARTITION BY intDiv(key, 
4294967) ORDER BY tuple(key)"); EXPECT_EQ(queryToString(tryRewrittenCreateQuery( "CREATE TABLE `test_database`.`test_table_1` (`key_1` int NOT NULL, key_2 INT NOT NULL, PRIMARY KEY (`key_1`, `key_2`)) ENGINE=InnoDB DEFAULT CHARSET=utf8", context_holder.context)), - "CREATE TABLE test_database.test_table_1 (`key_1` Int32, `key_2` Int32, `_sign` Int8() MATERIALIZED 1, `_version` UInt64() MATERIALIZED 1) ENGINE = " - "ReplacingMergeTree(_version) PARTITION BY intDiv(key_1, 4294967) ORDER BY (key_1, key_2)"); + "CREATE TABLE test_database.test_table_1 (`key_1` Int32, `key_2` Int32" + + std::string(MATERIALIZEMYSQL_TABLE_COLUMNS) + + ") ENGINE = ReplacingMergeTree(_version) PARTITION BY intDiv(key_1, 4294967) ORDER BY (key_1, key_2)"); EXPECT_EQ(queryToString(tryRewrittenCreateQuery( "CREATE TABLE `test_database`.`test_table_1` (`key_1` BIGINT NOT NULL, key_2 INT NOT NULL, PRIMARY KEY (`key_1`, `key_2`)) ENGINE=InnoDB DEFAULT CHARSET=utf8", context_holder.context)), - "CREATE TABLE test_database.test_table_1 (`key_1` Int64, `key_2` Int32, `_sign` Int8() MATERIALIZED 1, `_version` UInt64() MATERIALIZED 1) ENGINE = " - "ReplacingMergeTree(_version) PARTITION BY intDiv(key_2, 4294967) ORDER BY (key_1, key_2)"); + "CREATE TABLE test_database.test_table_1 (`key_1` Int64, `key_2` Int32" + + std::string(MATERIALIZEMYSQL_TABLE_COLUMNS) + + ") ENGINE = ReplacingMergeTree(_version) PARTITION BY intDiv(key_2, 4294967) ORDER BY (key_1, key_2)"); } TEST(MySQLCreateRewritten, RewrittenQueryWithPrefixKey) @@ -191,7 +205,8 @@ TEST(MySQLCreateRewritten, RewrittenQueryWithPrefixKey) EXPECT_EQ(queryToString(tryRewrittenCreateQuery( "CREATE TABLE `test_database`.`test_table_1` (`key` int NOT NULL PRIMARY KEY, `prefix_key` varchar(200) NOT NULL, KEY prefix_key_index(prefix_key(2))) ENGINE=InnoDB DEFAULT CHARSET=utf8", context_holder.context)), - "CREATE TABLE test_database.test_table_1 (`key` Int32, `prefix_key` String, `_sign` Int8() MATERIALIZED 1, `_version` UInt64() MATERIALIZED 1) ENGINE = " + "CREATE TABLE test_database.test_table_1 (`key` Int32, `prefix_key` String" + + std::string(MATERIALIZEMYSQL_TABLE_COLUMNS) + ") ENGINE = " "ReplacingMergeTree(_version) PARTITION BY intDiv(key, 4294967) ORDER BY (key, prefix_key)"); } @@ -204,6 +219,7 @@ TEST(MySQLCreateRewritten, UniqueKeysConvert) "CREATE TABLE `test_database`.`test_table_1` (code varchar(255) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,name varchar(255) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL," " id bigint NOT NULL AUTO_INCREMENT, tenant_id bigint NOT NULL, PRIMARY KEY (id), UNIQUE KEY code_id (code, tenant_id), UNIQUE KEY name_id (name, tenant_id))" " ENGINE=InnoDB AUTO_INCREMENT=100 DEFAULT CHARSET=utf8 COLLATE=utf8_bin;", context_holder.context)), - "CREATE TABLE test_database.test_table_1 (`code` String, `name` String, `id` Int64, `tenant_id` Int64, `_sign` Int8() MATERIALIZED 1, `_version` UInt64() MATERIALIZED 1)" - " ENGINE = ReplacingMergeTree(_version) PARTITION BY intDiv(id, 18446744073709551) ORDER BY (code, name, tenant_id, id)"); + "CREATE TABLE test_database.test_table_1 (`code` String, `name` String, `id` Int64, `tenant_id` Int64" + + std::string(MATERIALIZEMYSQL_TABLE_COLUMNS) + + ") ENGINE = ReplacingMergeTree(_version) PARTITION BY intDiv(id, 18446744073709551) ORDER BY (code, name, tenant_id, id)"); } diff --git a/src/Interpreters/OpenTelemetrySpanLog.cpp b/src/Interpreters/OpenTelemetrySpanLog.cpp index e1df145cf51..c72b0f3d326 100644 --- a/src/Interpreters/OpenTelemetrySpanLog.cpp +++ 
b/src/Interpreters/OpenTelemetrySpanLog.cpp @@ -49,7 +49,7 @@ void OpenTelemetrySpanLogElement::appendToBlock(MutableColumns & columns) const columns[i++]->insert(operation_name); columns[i++]->insert(start_time_us); columns[i++]->insert(finish_time_us); - columns[i++]->insert(DateLUT::instance().toDayNum(finish_time_us / 1000000)); + columns[i++]->insert(DateLUT::instance().toDayNum(finish_time_us / 1000000).toUnderType()); columns[i++]->insert(attribute_names); // The user might add some ints values, and we will have Int Field, and the // insert will fail because the column requires Strings. Convert the fields @@ -116,7 +116,7 @@ OpenTelemetrySpanHolder::~OpenTelemetrySpanHolder() return; } - auto * context = thread_group->query_context; + auto context = thread_group->query_context.lock(); if (!context) { // Both global and query contexts can be null when executing a diff --git a/src/Interpreters/OptimizeShardingKeyRewriteInVisitor.cpp b/src/Interpreters/OptimizeShardingKeyRewriteInVisitor.cpp new file mode 100644 index 00000000000..399def00006 --- /dev/null +++ b/src/Interpreters/OptimizeShardingKeyRewriteInVisitor.cpp @@ -0,0 +1,124 @@ +#include +#include +#include +#include +#include +#include +#include +#include + +namespace +{ + +using namespace DB; + +Field executeFunctionOnField( + const Field & field, const std::string & name, + const ExpressionActionsPtr & sharding_expr, + const std::string & sharding_key_column_name) +{ + DataTypePtr type = applyVisitor(FieldToDataType{}, field); + + ColumnWithTypeAndName column; + column.column = type->createColumnConst(1, field); + column.name = name; + column.type = type; + + Block block{column}; + size_t num_rows = 1; + sharding_expr->execute(block, num_rows); + + ColumnWithTypeAndName & ret = block.getByName(sharding_key_column_name); + return (*ret.column)[0]; +} + +/// @param sharding_column_value - one of values from IN +/// @param sharding_column_name - name of that column +/// @param sharding_expr - expression of sharding_key for the Distributed() table +/// @param sharding_key_column_name - name of the column for sharding_expr +/// @param shard_info - info for the current shard (to compare shard_num with calculated) +/// @param slots - weight -> shard mapping +/// @return true if shard may contain such value (or it is unknown), otherwise false. +bool shardContains( + const Field & sharding_column_value, + const std::string & sharding_column_name, + const ExpressionActionsPtr & sharding_expr, + const std::string & sharding_key_column_name, + const Cluster::ShardInfo & shard_info, + const Cluster::SlotToShard & slots) +{ + /// NULL is not allowed in sharding key, + /// so it should be safe to assume that shard cannot contain it. + if (sharding_column_value.isNull()) + return false; + + Field sharding_value = executeFunctionOnField(sharding_column_value, sharding_column_name, sharding_expr, sharding_key_column_name); + /// The value from IN can be non-numeric, + /// but in this case it should be convertible to numeric type, let's try. + sharding_value = convertFieldToType(sharding_value, DataTypeUInt64()); + /// In case of conversion is not possible (NULL), shard cannot contain the value anyway. 
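The modulo mapping used just below mirrors how a Distributed table picks a shard for a row. A toy illustration with hypothetical weights (not taken from this diff): two shards with weights 1 and 3 yield the slots vector {0, 1, 1, 1}, and shard numbers are 1-based, matching ShardInfo::shard_num:

    #include <cstdint>
    #include <vector>

    /// slots[i] is a 0-based shard index; one entry per unit of shard weight.
    uint64_t shardNumFor(uint64_t sharding_value, const std::vector<uint64_t> & slots)
    {
        return slots[sharding_value % slots.size()] + 1;
    }

    /// shardNumFor(0, {0, 1, 1, 1}) == 1; shardNumFor(2, {0, 1, 1, 1}) == 2.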
+ if (sharding_value.isNull()) + return false; + + UInt64 value = sharding_value.get(); + const auto shard_num = slots[value % slots.size()] + 1; + return shard_info.shard_num == shard_num; +} + +} + +namespace DB +{ + +bool OptimizeShardingKeyRewriteInMatcher::needChildVisit(ASTPtr & /*node*/, const ASTPtr & /*child*/) +{ + return true; +} + +void OptimizeShardingKeyRewriteInMatcher::visit(ASTPtr & node, Data & data) +{ + if (auto * function = node->as()) + visit(*function, data); +} + +void OptimizeShardingKeyRewriteInMatcher::visit(ASTFunction & function, Data & data) +{ + if (function.name != "in") + return; + + auto * left = function.arguments->children.front().get(); + auto * right = function.arguments->children.back().get(); + auto * identifier = left->as(); + if (!identifier) + return; + + const auto & sharding_expr = data.sharding_key_expr; + const auto & sharding_key_column_name = data.sharding_key_column_name; + + if (!sharding_expr->getRequiredColumnsWithTypes().contains(identifier->name())) + return; + + /// NOTE: that we should not take care about empty tuple, + /// since after optimize_skip_unused_shards, + /// at least one element should match each shard. + if (auto * tuple_func = right->as(); tuple_func && tuple_func->name == "tuple") + { + auto * tuple_elements = tuple_func->children.front()->as(); + std::erase_if(tuple_elements->children, [&](auto & child) + { + auto * literal = child->template as(); + return literal && !shardContains(literal->value, identifier->name(), sharding_expr, sharding_key_column_name, data.shard_info, data.slots); + }); + } + else if (auto * tuple_literal = right->as(); + tuple_literal && tuple_literal->value.getType() == Field::Types::Tuple) + { + auto & tuple = tuple_literal->value.get(); + std::erase_if(tuple, [&](auto & child) + { + return !shardContains(child, identifier->name(), sharding_expr, sharding_key_column_name, data.shard_info, data.slots); + }); + } +} + +} diff --git a/src/Interpreters/OptimizeShardingKeyRewriteInVisitor.h b/src/Interpreters/OptimizeShardingKeyRewriteInVisitor.h new file mode 100644 index 00000000000..3087fb844ed --- /dev/null +++ b/src/Interpreters/OptimizeShardingKeyRewriteInVisitor.h @@ -0,0 +1,41 @@ +#pragma once + +#include +#include + +namespace DB +{ + +class ExpressionActions; +using ExpressionActionsPtr = std::shared_ptr; + +class ASTFunction; + +/// Rewrite `sharding_key IN (...)` for specific shard, +/// so that it will contain only values that belong to this specific shard. 
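To make the intent of this new visitor concrete, here is a hedged SQL sketch of the effect; the cluster name, table names, and the two-shard `key % 2` layout are illustrative assumptions, not part of the patch:

```sql
-- Assumed setup: a Distributed table over two equally weighted shards,
-- sharded by the `key` column (all names here are hypothetical).
CREATE TABLE dist AS local
ENGINE = Distributed(two_shards, default, local, key);

-- With optimize_skip_unused_shards enabled, the rewrite can prune the IN list
-- per shard before the query is sent out: shard 1 would receive
--   SELECT * FROM local WHERE key IN (0, 2)
-- and shard 2
--   SELECT * FROM local WHERE key IN (1, 3).
SELECT * FROM dist WHERE key IN (0, 1, 2, 3);
```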
+/// +/// See also: +/// - evaluateExpressionOverConstantCondition() +/// - StorageDistributed::createSelector() +/// - createBlockSelector() +struct OptimizeShardingKeyRewriteInMatcher +{ + /// Cluster::SlotToShard + using SlotToShard = std::vector; + + struct Data + { + const ExpressionActionsPtr & sharding_key_expr; + const std::string & sharding_key_column_name; + const Cluster::ShardInfo & shard_info; + const Cluster::SlotToShard & slots; + }; + + static bool needChildVisit(ASTPtr & /*node*/, const ASTPtr & /*child*/); + static void visit(ASTPtr & node, Data & data); + static void visit(ASTFunction & function, Data & data); +}; + +using OptimizeShardingKeyRewriteInVisitor = InDepthNodeVisitor; + +} diff --git a/src/Interpreters/PartLog.cpp b/src/Interpreters/PartLog.cpp index 860666a0035..e4459399336 100644 --- a/src/Interpreters/PartLog.cpp +++ b/src/Interpreters/PartLog.cpp @@ -71,7 +71,7 @@ void PartLogElement::appendToBlock(MutableColumns & columns) const columns[i++]->insert(query_id); columns[i++]->insert(event_type); - columns[i++]->insert(DateLUT::instance().toDayNum(event_time)); + columns[i++]->insert(DateLUT::instance().toDayNum(event_time).toUnderType()); columns[i++]->insert(event_time); columns[i++]->insert(event_time_microseconds); columns[i++]->insert(duration_ms); @@ -103,7 +103,7 @@ void PartLogElement::appendToBlock(MutableColumns & columns) const bool PartLog::addNewPart( - Context & current_context, const MutableDataPartPtr & part, UInt64 elapsed_ns, const ExecutionStatus & execution_status) + ContextPtr current_context, const MutableDataPartPtr & part, UInt64 elapsed_ns, const ExecutionStatus & execution_status) { return addNewParts(current_context, {part}, elapsed_ns, execution_status); } @@ -120,7 +120,7 @@ inline UInt64 time_in_seconds(std::chrono::time_point } bool PartLog::addNewParts( - Context & current_context, const PartLog::MutableDataPartsVector & parts, UInt64 elapsed_ns, const ExecutionStatus & execution_status) + ContextPtr current_context, const PartLog::MutableDataPartsVector & parts, UInt64 elapsed_ns, const ExecutionStatus & execution_status) { if (parts.empty()) return true; @@ -130,7 +130,7 @@ bool PartLog::addNewParts( try { auto table_id = parts.front()->storage.getStorageID(); - part_log = current_context.getPartLog(table_id.database_name); // assume parts belong to the same table + part_log = current_context->getPartLog(table_id.database_name); // assume parts belong to the same table if (!part_log) return false; diff --git a/src/Interpreters/PartLog.h b/src/Interpreters/PartLog.h index c946d6ce85f..edb6ab4a45f 100644 --- a/src/Interpreters/PartLog.h +++ b/src/Interpreters/PartLog.h @@ -69,9 +69,9 @@ class PartLog : public SystemLog public: /// Add a record about creation of new part. 
- static bool addNewPart(Context & context, const MutableDataPartPtr & part, UInt64 elapsed_ns, + static bool addNewPart(ContextPtr context, const MutableDataPartPtr & part, UInt64 elapsed_ns, const ExecutionStatus & execution_status = {}); - static bool addNewParts(Context & context, const MutableDataPartsVector & parts, UInt64 elapsed_ns, + static bool addNewParts(ContextPtr context, const MutableDataPartsVector & parts, UInt64 elapsed_ns, const ExecutionStatus & execution_status = {}); }; diff --git a/src/Interpreters/PredicateExpressionsOptimizer.cpp b/src/Interpreters/PredicateExpressionsOptimizer.cpp index 00b47be408a..f2e55441fb6 100644 --- a/src/Interpreters/PredicateExpressionsOptimizer.cpp +++ b/src/Interpreters/PredicateExpressionsOptimizer.cpp @@ -19,11 +19,11 @@ namespace ErrorCodes } PredicateExpressionsOptimizer::PredicateExpressionsOptimizer( - const Context & context_, const TablesWithColumns & tables_with_columns_, const Settings & settings) - : enable_optimize_predicate_expression(settings.enable_optimize_predicate_expression) + ContextPtr context_, const TablesWithColumns & tables_with_columns_, const Settings & settings) + : WithContext(context_) + , enable_optimize_predicate_expression(settings.enable_optimize_predicate_expression) , enable_optimize_predicate_expression_to_final_subquery(settings.enable_optimize_predicate_expression_to_final_subquery) , allow_push_predicate_when_subquery_contains_with(settings.allow_push_predicate_when_subquery_contains_with) - , context(context_) , tables_with_columns(tables_with_columns_) { } @@ -87,7 +87,7 @@ std::vector PredicateExpressionsOptimizer::extractTablesPredicates(const A for (const auto & predicate_expression : splitConjunctionPredicate({where, prewhere})) { - ExpressionInfoVisitor::Data expression_info{.context = context, .tables = tables_with_columns}; + ExpressionInfoVisitor::Data expression_info{WithContext{getContext()}, tables_with_columns}; ExpressionInfoVisitor(expression_info).visit(predicate_expression); if (expression_info.is_stateful_function @@ -146,7 +146,7 @@ bool PredicateExpressionsOptimizer::tryRewritePredicatesToTables(ASTs & tables_e break; /// Skip left and right table optimization is_rewrite_tables |= tryRewritePredicatesToTable(tables_element[table_pos], tables_predicates[table_pos], - tables_with_columns[table_pos].columns.getNames()); + tables_with_columns[table_pos]); if (table_element->table_join && isRight(table_element->table_join->as()->kind)) break; /// Skip left table optimization @@ -156,13 +156,13 @@ bool PredicateExpressionsOptimizer::tryRewritePredicatesToTables(ASTs & tables_e return is_rewrite_tables; } -bool PredicateExpressionsOptimizer::tryRewritePredicatesToTable(ASTPtr & table_element, const ASTs & table_predicates, Names && table_columns) const +bool PredicateExpressionsOptimizer::tryRewritePredicatesToTable(ASTPtr & table_element, const ASTs & table_predicates, const TableWithColumnNamesAndTypes & table_columns) const { if (!table_predicates.empty()) { auto optimize_final = enable_optimize_predicate_expression_to_final_subquery; auto optimize_with = allow_push_predicate_when_subquery_contains_with; - PredicateRewriteVisitor::Data data(context, table_predicates, std::move(table_columns), optimize_final, optimize_with); + PredicateRewriteVisitor::Data data(getContext(), table_predicates, table_columns, optimize_final, optimize_with); PredicateRewriteVisitor(data).visit(table_element); return data.is_rewrite; @@ -187,7 +187,8 @@ bool 
PredicateExpressionsOptimizer::tryMovePredicatesFromHavingToWhere(ASTSelect for (const auto & moving_predicate: splitConjunctionPredicate({select_query.having()})) { - ExpressionInfoVisitor::Data expression_info{.context = context, .tables = {}}; + TablesWithColumns tables; + ExpressionInfoVisitor::Data expression_info{WithContext{getContext()}, tables}; ExpressionInfoVisitor(expression_info).visit(moving_predicate); /// TODO: If there is no group by, where, and prewhere expression, we can push down the stateful function diff --git a/src/Interpreters/PredicateExpressionsOptimizer.h b/src/Interpreters/PredicateExpressionsOptimizer.h index 8cceda93164..a31b9907da6 100644 --- a/src/Interpreters/PredicateExpressionsOptimizer.h +++ b/src/Interpreters/PredicateExpressionsOptimizer.h @@ -1,12 +1,12 @@ #pragma once -#include +#include #include +#include namespace DB { -class Context; struct Settings; /** Predicate optimization based on rewriting ast rules @@ -15,10 +15,10 @@ struct Settings; * - Move predicates from having to where * - Push the predicate down from the current query to the having of the subquery */ -class PredicateExpressionsOptimizer +class PredicateExpressionsOptimizer : WithContext { public: - PredicateExpressionsOptimizer(const Context & context_, const TablesWithColumns & tables_with_columns_, const Settings & settings_); + PredicateExpressionsOptimizer(ContextPtr context_, const TablesWithColumns & tables_with_columns_, const Settings & settings_); bool optimize(ASTSelectQuery & select_query); @@ -26,14 +26,14 @@ private: const bool enable_optimize_predicate_expression; const bool enable_optimize_predicate_expression_to_final_subquery; const bool allow_push_predicate_when_subquery_contains_with; - const Context & context; const TablesWithColumns & tables_with_columns; std::vector extractTablesPredicates(const ASTPtr & where, const ASTPtr & prewhere); bool tryRewritePredicatesToTables(ASTs & tables_element, const std::vector & tables_predicates); - bool tryRewritePredicatesToTable(ASTPtr & table_element, const ASTs & table_predicates, Names && table_columns) const; + bool tryRewritePredicatesToTable( + ASTPtr & table_element, const ASTs & table_predicates, const TableWithColumnNamesAndTypes & table_columns) const; bool tryMovePredicatesFromHavingToWhere(ASTSelectQuery & select_query); }; diff --git a/src/Interpreters/PredicateRewriteVisitor.cpp b/src/Interpreters/PredicateRewriteVisitor.cpp index 9e6d5543f2f..092d37d78dd 100644 --- a/src/Interpreters/PredicateRewriteVisitor.cpp +++ b/src/Interpreters/PredicateRewriteVisitor.cpp @@ -17,8 +17,16 @@ namespace DB { PredicateRewriteVisitorData::PredicateRewriteVisitorData( - const Context & context_, const ASTs & predicates_, Names && column_names_, bool optimize_final_, bool optimize_with_) - : context(context_), predicates(predicates_), column_names(column_names_), optimize_final(optimize_final_), optimize_with(optimize_with_) + ContextPtr context_, + const ASTs & predicates_, + const TableWithColumnNamesAndTypes & table_columns_, + bool optimize_final_, + bool optimize_with_) + : WithContext(context_) + , predicates(predicates_) + , table_columns(table_columns_) + , optimize_final(optimize_final_) + , optimize_with(optimize_with_) { } @@ -42,7 +50,8 @@ void PredicateRewriteVisitorData::visit(ASTSelectWithUnionQuery & union_select_q void PredicateRewriteVisitorData::visitFirstInternalSelect(ASTSelectQuery & select_query, ASTPtr &) { - is_rewrite |= rewriteSubquery(select_query, column_names, column_names); + /// In this case 
inner_columns are the same as outer_columns from table_columns
+    is_rewrite |= rewriteSubquery(select_query, table_columns.columns.getNames());
 }
 
 void PredicateRewriteVisitorData::visitOtherInternalSelect(ASTSelectQuery & select_query, ASTPtr &)
@@ -63,9 +72,9 @@ void PredicateRewriteVisitorData::visitOtherInternalSelect(ASTSelectQuery & sele
     }
 
     const Names & internal_columns = InterpreterSelectQuery(
-        temp_internal_select, context, SelectQueryOptions().analyze()).getSampleBlock().getNames();
+        temp_internal_select, getContext(), SelectQueryOptions().analyze()).getSampleBlock().getNames();
 
-    if (rewriteSubquery(*temp_select_query, column_names, internal_columns))
+    if (rewriteSubquery(*temp_select_query, internal_columns))
     {
         is_rewrite |= true;
         select_query.setExpression(ASTSelectQuery::Expression::SELECT, std::move(temp_select_query->refSelect()));
@@ -89,15 +98,16 @@ static void cleanAliasAndCollectIdentifiers(ASTPtr & predicate, std::vector identifiers;
@@ -106,13 +116,16 @@ bool PredicateRewriteVisitorData::rewriteSubquery(ASTSelectQuery & subquery, con
     for (const auto & identifier : identifiers)
     {
-        const auto & column_name = identifier->shortName();
-        const auto & outer_column_iterator = std::find(outer_columns.begin(), outer_columns.end(), column_name);
+        IdentifierSemantic::setColumnShortName(*identifier, table_columns.table);
+        const auto & column_name = identifier->name();
 
         /// For lambda functions, we can't always find them in the list of columns
         /// For example: SELECT * FROM system.one WHERE arrayMap(x -> x, [dummy]) = [0]
+        const auto & outer_column_iterator = std::find(outer_columns.begin(), outer_columns.end(), column_name);
         if (outer_column_iterator != outer_columns.end())
+        {
             identifier->setShortName(inner_columns[outer_column_iterator - outer_columns.begin()]);
+        }
     }
 
     /// We only need to push all the predicates to subquery having
diff --git a/src/Interpreters/PredicateRewriteVisitor.h b/src/Interpreters/PredicateRewriteVisitor.h
index 02c8b9ca422..fc076464925 100644
--- a/src/Interpreters/PredicateRewriteVisitor.h
+++ b/src/Interpreters/PredicateRewriteVisitor.h
@@ -1,14 +1,16 @@
 #pragma once
 
-#include
+#include
+#include
+#include
 #include
 #include
-#include
 
 namespace DB
 {
 
-class PredicateRewriteVisitorData
+class PredicateRewriteVisitorData : WithContext
 {
 public:
     bool is_rewrite = false;
@@ -18,18 +20,19 @@ public:
 
     static bool needChild(const ASTPtr & node, const ASTPtr &)
     {
-        if (node && node->as())
-            return false;
-
-        return true;
+        return !(node && node->as());
     }
 
-    PredicateRewriteVisitorData(const Context & context_, const ASTs & predicates_, Names && column_names_, bool optimize_final_, bool optimize_with_);
+    PredicateRewriteVisitorData(
+        ContextPtr context_,
+        const ASTs & predicates_,
+        const TableWithColumnNamesAndTypes & table_columns_,
+        bool optimize_final_,
+        bool optimize_with_);
 
 private:
-    const Context & context;
     const ASTs & predicates;
-    const Names column_names;
+    const TableWithColumnNamesAndTypes & table_columns;
 
     bool optimize_final;
     bool optimize_with;
@@ -37,9 +40,10 @@ private:
     void visitOtherInternalSelect(ASTSelectQuery & select_query, ASTPtr &);
 
-    bool rewriteSubquery(ASTSelectQuery & subquery, const Names & outer_columns, const Names & inner_columns);
+    bool rewriteSubquery(ASTSelectQuery & subquery, const Names & inner_columns);
 };
 
 using PredicateRewriteMatcher = OneTypeMatcher<PredicateRewriteVisitorData, PredicateRewriteVisitorData::needChild>;
 using PredicateRewriteVisitor = InDepthNodeVisitor<PredicateRewriteMatcher, true>;
+
 }
diff --git a/src/Interpreters/ProcessList.cpp b/src/Interpreters/ProcessList.cpp
index 
4e336a98787..951ff6420c4 100644 --- a/src/Interpreters/ProcessList.cpp +++ b/src/Interpreters/ProcessList.cpp @@ -60,12 +60,12 @@ static bool isUnlimitedQuery(const IAST * ast) } -ProcessList::EntryPtr ProcessList::insert(const String & query_, const IAST * ast, Context & query_context) +ProcessList::EntryPtr ProcessList::insert(const String & query_, const IAST * ast, ContextPtr query_context) { EntryPtr res; - const ClientInfo & client_info = query_context.getClientInfo(); - const Settings & settings = query_context.getSettingsRef(); + const ClientInfo & client_info = query_context->getClientInfo(); + const Settings & settings = query_context->getSettingsRef(); if (client_info.current_query_id.empty()) throw Exception("Query id cannot be empty", ErrorCodes::LOGICAL_ERROR); @@ -174,12 +174,10 @@ ProcessList::EntryPtr ProcessList::insert(const String & query_, const IAST * as } auto process_it = processes.emplace(processes.end(), - query_, client_info, priorities.insert(settings.priority)); + query_context, query_, client_info, priorities.insert(settings.priority)); res = std::make_shared(*this, process_it); - process_it->query_context = &query_context; - ProcessListForUser & user_process_list = user_to_queries[client_info.current_user]; user_process_list.queries.emplace(client_info.current_query_id, &res->get()); @@ -201,7 +199,7 @@ ProcessList::EntryPtr ProcessList::insert(const String & query_, const IAST * as /// Set query-level memory trackers thread_group->memory_tracker.setOrRaiseHardLimit(settings.max_memory_usage); - if (query_context.hasTraceCollector()) + if (query_context->hasTraceCollector()) { /// Set up memory profiling thread_group->memory_tracker.setOrRaiseProfilerLimit(settings.memory_profiler_step); @@ -290,14 +288,12 @@ ProcessListEntry::~ProcessListEntry() QueryStatus::QueryStatus( - const String & query_, - const ClientInfo & client_info_, - QueryPriorities::Handle && priority_handle_) - : - query(query_), - client_info(client_info_), - priority_handle(std::move(priority_handle_)), - num_queries_increment{CurrentMetrics::Query} + ContextPtr context_, const String & query_, const ClientInfo & client_info_, QueryPriorities::Handle && priority_handle_) + : WithContext(context_) + , query(query_) + , client_info(client_info_) + , priority_handle(std::move(priority_handle_)) + , num_queries_increment{CurrentMetrics::Query} { } @@ -454,8 +450,11 @@ QueryStatusInfo QueryStatus::getInfo(bool get_thread_list, bool get_profile_even res.profile_counters = std::make_shared(thread_group->performance_counters.getPartiallyAtomicSnapshot()); } - if (get_settings && query_context) - res.query_settings = std::make_shared(query_context->getSettings()); + if (get_settings && getContext()) + { + res.query_settings = std::make_shared(getContext()->getSettings()); + res.current_database = getContext()->getCurrentDatabase(); + } return res; } diff --git a/src/Interpreters/ProcessList.h b/src/Interpreters/ProcessList.h index f6151bb3429..3eeea9c8e5b 100644 --- a/src/Interpreters/ProcessList.h +++ b/src/Interpreters/ProcessList.h @@ -1,6 +1,5 @@ #pragma once - #include #include #include @@ -33,7 +32,6 @@ namespace CurrentMetrics namespace DB { -class Context; struct Settings; class IAST; @@ -68,10 +66,11 @@ struct QueryStatusInfo std::vector thread_ids; std::shared_ptr profile_counters; std::shared_ptr query_settings; + std::string current_database; }; /// Query and information about its execution. 
-class QueryStatus +class QueryStatus : public WithContext { protected: friend class ProcessList; @@ -82,9 +81,6 @@ protected: String query; ClientInfo client_info; - /// Is set once when init - Context * query_context = nullptr; - /// Info about all threads involved in query execution ThreadGroupStatusPtr thread_group; @@ -127,6 +123,7 @@ protected: public: QueryStatus( + ContextPtr context_, const String & query_, const ClientInfo & client_info_, QueryPriorities::Handle && priority_handle_); @@ -171,9 +168,6 @@ public: QueryStatusInfo getInfo(bool get_thread_list = false, bool get_profile_events = false, bool get_settings = false) const; - Context * tryGetQueryContext() { return query_context; } - const Context * tryGetQueryContext() const { return query_context; } - /// Copies pointers to in/out streams void setQueryStreams(const BlockIO & io); @@ -304,7 +298,7 @@ public: * If timeout is passed - throw an exception. * Don't count KILL QUERY queries. */ - EntryPtr insert(const String & query_, const IAST * ast, Context & query_context); + EntryPtr insert(const String & query_, const IAST * ast, ContextPtr query_context); /// Number of currently executing queries. size_t size() const { return processes.size(); } diff --git a/src/Interpreters/QueryAliasesVisitor.cpp b/src/Interpreters/QueryAliasesVisitor.cpp index d395bfc20e9..bd0b2e88d2f 100644 --- a/src/Interpreters/QueryAliasesVisitor.cpp +++ b/src/Interpreters/QueryAliasesVisitor.cpp @@ -15,15 +15,22 @@ namespace ErrorCodes extern const int MULTIPLE_EXPRESSIONS_FOR_ALIAS; } -static String wrongAliasMessage(const ASTPtr & ast, const ASTPtr & prev_ast, const String & alias) +namespace { - WriteBufferFromOwnString message; - message << "Different expressions with the same alias " << backQuoteIfNeed(alias) << ":\n"; - formatAST(*ast, message, false, true); - message << "\nand\n"; - formatAST(*prev_ast, message, false, true); - message << '\n'; - return message.str(); + + constexpr auto dummy_subquery_name_prefix = "_subquery"; + + String wrongAliasMessage(const ASTPtr & ast, const ASTPtr & prev_ast, const String & alias) + { + WriteBufferFromOwnString message; + message << "Different expressions with the same alias " << backQuoteIfNeed(alias) << ":\n"; + formatAST(*ast, message, false, true); + message << "\nand\n"; + formatAST(*prev_ast, message, false, true); + message << '\n'; + return message.str(); + } + } @@ -99,7 +106,7 @@ void QueryAliasesMatcher::visit(const ASTSubquery & const_subquery, const AST String alias; do { - alias = "_subquery" + std::to_string(++subquery_index); + alias = dummy_subquery_name_prefix + std::to_string(++subquery_index); } while (aliases.count(alias)); @@ -124,6 +131,30 @@ void QueryAliasesMatcher::visitOther(const ASTPtr & ast, Data & data) aliases[alias] = ast; } + + /** QueryAliasesVisitor is executed before ExecuteScalarSubqueriesVisitor. + For example we have subquery in our query (SELECT sum(number) FROM numbers(10)). + + After running QueryAliasesVisitor it will be (SELECT sum(number) FROM numbers(10)) as _subquery_1 + and prefer_alias_to_column_name for this subquery will be true. + + After running ExecuteScalarSubqueriesVisitor it will be converted to (45 as _subquery_1) + and prefer_alias_to_column_name for ast literal will be true. + + But if we send such query on remote host with Distributed engine for example we cannot send prefer_alias_to_column_name + information for our ast node with query string. 
And this alias will be dropped because prefer_alias_to_column_name for ASTWithAlias
+    by default is false.
+
+    It is important that the subquery can be converted to a literal during ExecuteScalarSubqueriesVisitor.
+    The code below checks whether we previously gave the subquery an alias starting with _subquery; if so,
+    it sets prefer_alias_to_column_name = true for the node that was optimized during ExecuteScalarSubqueriesVisitor.
+    */
+
+    if (auto * ast_with_alias = dynamic_cast<ASTWithAlias *>(ast.get()))
+    {
+        if (startsWith(alias, dummy_subquery_name_prefix))
+            ast_with_alias->prefer_alias_to_column_name = true;
+    }
 }
 
 /// Explicit template instantiations
diff --git a/src/Interpreters/QueryLog.cpp b/src/Interpreters/QueryLog.cpp
index 82b957f895b..b6902468242 100644
--- a/src/Interpreters/QueryLog.cpp
+++ b/src/Interpreters/QueryLog.cpp
@@ -119,7 +119,7 @@ void QueryLogElement::appendToBlock(MutableColumns & columns) const
     size_t i = 0;
 
     columns[i++]->insert(type);
-    columns[i++]->insert(DateLUT::instance().toDayNum(event_time));
+    columns[i++]->insert(DateLUT::instance().toDayNum(event_time).toUnderType());
     columns[i++]->insert(event_time);
     columns[i++]->insert(event_time_microseconds);
     columns[i++]->insert(query_start_time);
diff --git a/src/Interpreters/QueryNormalizer.cpp b/src/Interpreters/QueryNormalizer.cpp
index 33d2e8d1ba1..0a2e6505558 100644
--- a/src/Interpreters/QueryNormalizer.cpp
+++ b/src/Interpreters/QueryNormalizer.cpp
@@ -72,6 +72,12 @@ void QueryNormalizer::visit(ASTIdentifier & node, ASTPtr & ast, Data & data)
     if (!IdentifierSemantic::getColumnName(node))
         return;
 
+    if (data.settings.prefer_column_name_to_alias)
+    {
+        if (data.source_columns_set.find(node.name()) != data.source_columns_set.end())
+            return;
+    }
+
     /// If it is an alias, but not a parent alias (for constructs like "SELECT column + 1 AS column").
     auto it_alias = data.aliases.find(node.name());
     if (it_alias != data.aliases.end() && current_alias != node.name())
@@ -131,8 +137,20 @@ static bool needVisitChild(const ASTPtr & child)
 void QueryNormalizer::visit(ASTSelectQuery & select, const ASTPtr &, Data & data)
 {
     for (auto & child : select.children)
-        if (needVisitChild(child))
+    {
+        if (child == select.groupBy() || child == select.orderBy() || child == select.having())
+        {
+            bool old_setting = data.settings.prefer_column_name_to_alias;
+            data.settings.prefer_column_name_to_alias = false;
             visit(child, data);
+            data.settings.prefer_column_name_to_alias = old_setting;
+        }
+        else
+        {
+            if (needVisitChild(child))
+                visit(child, data);
+        }
+    }
 
     /// If the WHERE clause or HAVING consists of a single alias, the reference must be replaced not only in children,
     /// but also in where_expression and having_expression.
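Since the diff above only shows the mechanics, a short SQL sketch of what `prefer_column_name_to_alias` changes may help; the exact queries are illustrative:

```sql
-- By default an alias wins: `number` in WHERE is replaced by `number + 1`.
SELECT number + 1 AS number FROM numbers(5) WHERE number > 2;

-- With the new setting, an identifier that names a real source column keeps
-- referring to that column, so WHERE filters on the original `number`.
SET prefer_column_name_to_alias = 1;
SELECT number + 1 AS number FROM numbers(5) WHERE number > 2;

-- Per the visit(ASTSelectQuery &) change above, the preference is deliberately
-- switched off inside GROUP BY / ORDER BY / HAVING, where aliases keep their
-- usual priority.
```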
diff --git a/src/Interpreters/QueryNormalizer.h b/src/Interpreters/QueryNormalizer.h index e481f76ca8e..3dcccea1cfb 100644 --- a/src/Interpreters/QueryNormalizer.h +++ b/src/Interpreters/QueryNormalizer.h @@ -4,6 +4,7 @@ #include #include +#include namespace DB { @@ -21,12 +22,15 @@ class QueryNormalizer { const UInt64 max_ast_depth; const UInt64 max_expanded_ast_elements; + bool prefer_column_name_to_alias; template ExtractedSettings(const T & settings) - : max_ast_depth(settings.max_ast_depth), - max_expanded_ast_elements(settings.max_expanded_ast_elements) - {} + : max_ast_depth(settings.max_ast_depth) + , max_expanded_ast_elements(settings.max_expanded_ast_elements) + , prefer_column_name_to_alias(settings.prefer_column_name_to_alias) + { + } }; public: @@ -36,7 +40,8 @@ public: using MapOfASTs = std::map; const Aliases & aliases; - const ExtractedSettings settings; + const NameSet & source_columns_set; + ExtractedSettings settings; /// tmp data size_t level; @@ -44,8 +49,9 @@ public: SetOfASTs current_asts; /// vertices in the current call stack of this method std::string current_alias; /// the alias referencing to the ancestor of ast (the deepest ancestor with aliases) - Data(const Aliases & aliases_, ExtractedSettings && settings_) + Data(const Aliases & aliases_, const NameSet & source_columns_set_, ExtractedSettings && settings_) : aliases(aliases_) + , source_columns_set(source_columns_set_) , settings(settings_) , level(0) {} diff --git a/src/Interpreters/QueryThreadLog.cpp b/src/Interpreters/QueryThreadLog.cpp index f1cce1a3da9..31f1fddc87f 100644 --- a/src/Interpreters/QueryThreadLog.cpp +++ b/src/Interpreters/QueryThreadLog.cpp @@ -76,7 +76,7 @@ void QueryThreadLogElement::appendToBlock(MutableColumns & columns) const { size_t i = 0; - columns[i++]->insert(DateLUT::instance().toDayNum(event_time)); + columns[i++]->insert(DateLUT::instance().toDayNum(event_time).toUnderType()); columns[i++]->insert(event_time); columns[i++]->insert(event_time_microseconds); columns[i++]->insert(query_start_time); diff --git a/src/Interpreters/RedundantFunctionsInOrderByVisitor.h b/src/Interpreters/RedundantFunctionsInOrderByVisitor.h index d737e877f01..f807849fb86 100644 --- a/src/Interpreters/RedundantFunctionsInOrderByVisitor.h +++ b/src/Interpreters/RedundantFunctionsInOrderByVisitor.h @@ -16,7 +16,7 @@ public: struct Data { std::unordered_set & keys; - const Context & context; + ContextPtr context; bool redundant = true; bool done = false; diff --git a/src/Interpreters/RemoveInjectiveFunctionsVisitor.cpp b/src/Interpreters/RemoveInjectiveFunctionsVisitor.cpp index ae575b8aae7..f46e80a6370 100644 --- a/src/Interpreters/RemoveInjectiveFunctionsVisitor.cpp +++ b/src/Interpreters/RemoveInjectiveFunctionsVisitor.cpp @@ -16,7 +16,7 @@ static bool isUniq(const ASTFunction & func) } /// Remove injective functions of one argument: replace with a child -static bool removeInjectiveFunction(ASTPtr & ast, const Context & context, const FunctionFactory & function_factory) +static bool removeInjectiveFunction(ASTPtr & ast, ContextPtr context, const FunctionFactory & function_factory) { const ASTFunction * func = ast->as(); if (!func) @@ -46,7 +46,7 @@ void RemoveInjectiveFunctionsMatcher::visit(ASTFunction & func, ASTPtr &, const for (auto & arg : func.arguments->children) { - while (removeInjectiveFunction(arg, data.context, function_factory)) + while (removeInjectiveFunction(arg, data.getContext(), function_factory)) ; } } diff --git a/src/Interpreters/RemoveInjectiveFunctionsVisitor.h 
b/src/Interpreters/RemoveInjectiveFunctionsVisitor.h
index 1adde0d35b0..a3bbd562407 100644
--- a/src/Interpreters/RemoveInjectiveFunctionsVisitor.h
+++ b/src/Interpreters/RemoveInjectiveFunctionsVisitor.h
@@ -1,7 +1,8 @@
 #pragma once
 
-#include
+#include
 #include
+#include
 
 namespace DB
 {
@@ -12,9 +13,9 @@ class ASTFunction;
 class RemoveInjectiveFunctionsMatcher
 {
 public:
-    struct Data
+    struct Data : public WithContext
     {
-        const Context & context;
+        explicit Data(ContextPtr context_) : WithContext(context_) {}
     };
 
     static void visit(ASTPtr & ast, const Data & data);
diff --git a/src/Interpreters/ReplaceQueryParameterVisitor.cpp b/src/Interpreters/ReplaceQueryParameterVisitor.cpp
index 9b4223b8947..8d737f27e64 100644
--- a/src/Interpreters/ReplaceQueryParameterVisitor.cpp
+++ b/src/Interpreters/ReplaceQueryParameterVisitor.cpp
@@ -61,7 +61,7 @@ void ReplaceQueryParameterVisitor::visitQueryParameter(ASTPtr & ast)
     IColumn & temp_column = *temp_column_ptr;
     ReadBufferFromString read_buffer{value};
     FormatSettings format_settings;
-    data_type->deserializeAsTextEscaped(temp_column, read_buffer, format_settings);
+    data_type->getDefaultSerialization()->deserializeTextEscaped(temp_column, read_buffer, format_settings);
 
     if (!read_buffer.eof())
         throw Exception(ErrorCodes::BAD_QUERY_PARAMETER,
diff --git a/src/Interpreters/RequiredSourceColumnsVisitor.cpp b/src/Interpreters/RequiredSourceColumnsVisitor.cpp
index 54883043d30..2f2a68656bc 100644
--- a/src/Interpreters/RequiredSourceColumnsVisitor.cpp
+++ b/src/Interpreters/RequiredSourceColumnsVisitor.cpp
@@ -51,8 +51,10 @@ bool RequiredSourceColumnsMatcher::needChildVisit(const ASTPtr & node, const AST
 
     if (const auto * f = node->as<ASTFunction>())
     {
+        /// "indexHint" is a special function for index analysis.
+        /// Everything that is inside it is not calculated. See KeyCondition.
         /// "lambda" visits its children itself.
-        if (f->name == "lambda")
+        if (f->name == "indexHint" || f->name == "lambda")
             return false;
     }
 
diff --git a/src/Interpreters/StorageID.h b/src/Interpreters/StorageID.h
index ec5ccba37c2..2b2a8daa009 100644
--- a/src/Interpreters/StorageID.h
+++ b/src/Interpreters/StorageID.h
@@ -89,7 +89,7 @@ struct StorageID
         const String & config_prefix);
 
     /// If dictionary has UUID, then use it as dictionary name in ExternalLoader to allow dictionary renaming.
-    /// DatabaseCatalog::resolveDictionaryName(...) should be used to access such dictionaries by name.
+    /// ExternalDictionariesLoader::resolveDictionaryName(...) should be used to access such dictionaries by name.
String getInternalDictionaryName() const; private: diff --git a/src/Interpreters/SubqueryForSet.cpp b/src/Interpreters/SubqueryForSet.cpp index c81b7a710ae..08fc07c71e1 100644 --- a/src/Interpreters/SubqueryForSet.cpp +++ b/src/Interpreters/SubqueryForSet.cpp @@ -1,9 +1,6 @@ #include -#include -#include -#include -#include - +#include +#include namespace DB { @@ -13,65 +10,4 @@ SubqueryForSet::~SubqueryForSet() = default; SubqueryForSet::SubqueryForSet(SubqueryForSet &&) = default; SubqueryForSet & SubqueryForSet::operator= (SubqueryForSet &&) = default; -void SubqueryForSet::makeSource(std::shared_ptr & interpreter, - NamesWithAliases && joined_block_aliases_) -{ - joined_block_aliases = std::move(joined_block_aliases_); - source = std::make_unique(); - interpreter->buildQueryPlan(*source); - - sample_block = interpreter->getSampleBlock(); - renameColumns(sample_block); -} - -void SubqueryForSet::renameColumns(Block & block) -{ - for (const auto & name_with_alias : joined_block_aliases) - { - if (block.has(name_with_alias.first)) - { - auto pos = block.getPositionByName(name_with_alias.first); - auto column = block.getByPosition(pos); - block.erase(pos); - column.name = name_with_alias.second; - block.insert(std::move(column)); - } - } -} - -void SubqueryForSet::addJoinActions(ExpressionActionsPtr actions) -{ - actions->execute(sample_block); - if (joined_block_actions == nullptr) - { - joined_block_actions = actions; - } - else - { - auto new_dag = ActionsDAG::merge( - std::move(*joined_block_actions->getActionsDAG().clone()), - std::move(*actions->getActionsDAG().clone())); - joined_block_actions = std::make_shared(new_dag); - } -} - -bool SubqueryForSet::insertJoinedBlock(Block & block) -{ - renameColumns(block); - - if (joined_block_actions) - joined_block_actions->execute(block); - - return join->addJoinedBlock(block); -} - -void SubqueryForSet::setTotals(Block totals) -{ - if (join) - { - renameColumns(totals); - join->setTotals(totals); - } -} - } diff --git a/src/Interpreters/SubqueryForSet.h b/src/Interpreters/SubqueryForSet.h index a42bf296d6c..974f5bd3e58 100644 --- a/src/Interpreters/SubqueryForSet.h +++ b/src/Interpreters/SubqueryForSet.h @@ -2,19 +2,16 @@ #include #include -#include -#include -#include namespace DB { -class InterpreterSelectWithUnionQuery; -class ExpressionActions; -using ExpressionActionsPtr = std::shared_ptr; class QueryPlan; +class Set; +using SetPtr = std::shared_ptr; + /// Information on what to do when executing a subquery in the [GLOBAL] IN/JOIN section. struct SubqueryForSet { @@ -28,28 +25,10 @@ struct SubqueryForSet /// If set, build it from result. SetPtr set; - JoinPtr join; - /// Apply this actions to joined block. - ExpressionActionsPtr joined_block_actions; - Block sample_block; /// source->getHeader() + column renames /// If set, put the result into the table. /// This is a temporary table for transferring to remote servers for distributed query processing. StoragePtr table; - - void makeSource(std::shared_ptr & interpreter, - NamesWithAliases && joined_block_aliases_); - - void addJoinActions(ExpressionActionsPtr actions); - - bool insertJoinedBlock(Block & block); - void setTotals(Block totals); - -private: - NamesWithAliases joined_block_aliases; /// Rename column from joined block from this list. - - /// Rename source right table column names into qualified column names if they conflicts with left table ones. - void renameColumns(Block & block); }; /// ID of subquery -> what to do with it. 
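For context on what the slimmed-down SubqueryForSet still carries: the `table` member is the temporary table used for GLOBAL IN/JOIN. A hedged example (the table names are hypothetical):

```sql
-- GLOBAL IN executes the subquery once on the initiator, stores the result in
-- a temporary table (SubqueryForSet::table), and ships that table to remote
-- servers instead of re-running the subquery on every shard.
SELECT count()
FROM dist
WHERE user_id GLOBAL IN (SELECT user_id FROM local_users);
```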
diff --git a/src/Interpreters/SystemLog.cpp b/src/Interpreters/SystemLog.cpp index 1667d845d77..31ceca8ec05 100644 --- a/src/Interpreters/SystemLog.cpp +++ b/src/Interpreters/SystemLog.cpp @@ -30,7 +30,7 @@ constexpr size_t DEFAULT_METRIC_LOG_COLLECT_INTERVAL_MILLISECONDS = 1000; /// Creates a system log with MergeTree engine using parameters from config template std::shared_ptr createSystemLog( - Context & context, + ContextPtr context, const String & default_database_name, const String & default_table_name, const Poco::Util::AbstractConfiguration & config, @@ -88,7 +88,7 @@ std::shared_ptr createSystemLog( } -SystemLogs::SystemLogs(Context & global_context, const Poco::Util::AbstractConfiguration & config) +SystemLogs::SystemLogs(ContextPtr global_context, const Poco::Util::AbstractConfiguration & config) { query_log = createSystemLog(global_context, "system", "query_log", config, "query_log"); query_thread_log = createSystemLog(global_context, "system", "query_thread_log", config, "query_thread_log"); diff --git a/src/Interpreters/SystemLog.h b/src/Interpreters/SystemLog.h index aa3dc113e44..aa01ca3517b 100644 --- a/src/Interpreters/SystemLog.h +++ b/src/Interpreters/SystemLog.h @@ -93,7 +93,7 @@ public: /// because SystemLog destruction makes insert query while flushing data into underlying tables struct SystemLogs { - SystemLogs(Context & global_context, const Poco::Util::AbstractConfiguration & config); + SystemLogs(ContextPtr global_context, const Poco::Util::AbstractConfiguration & config); ~SystemLogs(); void shutdown(); @@ -115,7 +115,7 @@ struct SystemLogs template -class SystemLog : public ISystemLog, private boost::noncopyable +class SystemLog : public ISystemLog, private boost::noncopyable, WithContext { public: using Self = SystemLog; @@ -129,7 +129,7 @@ public: * and new table get created - as if previous table was not exist. */ SystemLog( - Context & context_, + ContextPtr context_, const String & database_name_, const String & table_name_, const String & storage_def_, @@ -152,6 +152,8 @@ public: void shutdown() override { stopFlushThread(); + if (table) + table->shutdown(); } String getName() override @@ -166,7 +168,6 @@ protected: private: /* Saving thread data */ - Context & context; const StorageID table_id; const String storage_def; StoragePtr table; @@ -184,12 +185,13 @@ private: // synchronous log flushing for SYSTEM FLUSH LOGS. uint64_t queue_front_index = 0; bool is_shutdown = false; + // A flag that says we must create the tables even if the queue is empty. bool is_force_prepare_tables = false; std::condition_variable flush_event; // Requested to flush logs up to this index, exclusive - uint64_t requested_flush_before = 0; + uint64_t requested_flush_up_to = 0; // Flushed log up to this index, exclusive - uint64_t flushed_before = 0; + uint64_t flushed_up_to = 0; // Logged overflow message at this queue front index uint64_t logged_queue_full_at_index = -1; @@ -207,12 +209,13 @@ private: template -SystemLog::SystemLog(Context & context_, +SystemLog::SystemLog( + ContextPtr context_, const String & database_name_, const String & table_name_, const String & storage_def_, size_t flush_interval_milliseconds_) - : context(context_) + : WithContext(context_) , table_id(database_name_, table_name_) , storage_def(storage_def_) , flush_interval_milliseconds(flush_interval_milliseconds_) @@ -267,8 +270,8 @@ void SystemLog::add(const LogElement & element) // It is enough to only wake the flushing thread once, after the message // count increases past half available size. 
const uint64_t queue_end = queue_front_index + queue.size(); - if (requested_flush_before < queue_end) - requested_flush_before = queue_end; + if (requested_flush_up_to < queue_end) + requested_flush_up_to = queue_end; flush_event.notify_all(); } @@ -304,24 +307,36 @@ void SystemLog::add(const LogElement & element) template void SystemLog::flush(bool force) { - std::unique_lock lock(mutex); + uint64_t this_thread_requested_offset; - if (is_shutdown) - return; - - const uint64_t queue_end = queue_front_index + queue.size(); - - is_force_prepare_tables = force; - if (requested_flush_before < queue_end || force) { - requested_flush_before = queue_end; + std::unique_lock lock(mutex); + + if (is_shutdown) + return; + + this_thread_requested_offset = queue_front_index + queue.size(); + + // Publish our flush request, taking care not to overwrite the requests + // made by other threads. + is_force_prepare_tables |= force; + requested_flush_up_to = std::max(requested_flush_up_to, + this_thread_requested_offset); + flush_event.notify_all(); } - // Use an arbitrary timeout to avoid endless waiting. - const int timeout_seconds = 60; + LOG_DEBUG(log, "Requested flush up to offset {}", + this_thread_requested_offset); + + // Use an arbitrary timeout to avoid endless waiting. 60s proved to be + // too fast for our parallel functional tests, probably because they + // heavily load the disk. + const int timeout_seconds = 180; + std::unique_lock lock(mutex); bool result = flush_event.wait_for(lock, std::chrono::seconds(timeout_seconds), - [&] { return flushed_before >= queue_end && !is_force_prepare_tables; }); + [&] { return flushed_up_to >= this_thread_requested_offset + && !is_force_prepare_tables; }); if (!result) { @@ -371,6 +386,8 @@ void SystemLog::savingThreadFunction() // The end index (exclusive, like std end()) of the messages we are // going to flush. uint64_t to_flush_end = 0; + // Should we prepare table even if there are no new messages. + bool should_prepare_tables_anyway = false; { std::unique_lock lock(mutex); @@ -378,7 +395,7 @@ void SystemLog::savingThreadFunction() std::chrono::milliseconds(flush_interval_milliseconds), [&] () { - return requested_flush_before > flushed_before || is_shutdown || is_force_prepare_tables; + return requested_flush_up_to > flushed_up_to || is_shutdown || is_force_prepare_tables; } ); @@ -389,18 +406,14 @@ void SystemLog::savingThreadFunction() to_flush.resize(0); queue.swap(to_flush); + should_prepare_tables_anyway = is_force_prepare_tables; + exit_this_thread = is_shutdown; } if (to_flush.empty()) { - bool force; - { - std::lock_guard lock(mutex); - force = is_force_prepare_tables; - } - - if (force) + if (should_prepare_tables_anyway) { prepareTable(); LOG_TRACE(log, "Table created (force)"); @@ -429,7 +442,8 @@ void SystemLog::flushImpl(const std::vector & to_flush, { try { - LOG_TRACE(log, "Flushing system log, {} entries to flush", to_flush.size()); + LOG_TRACE(log, "Flushing system log, {} entries to flush up to offset {}", + to_flush.size(), to_flush_end); /// We check for existence of the table and create it as needed at every /// flush. 
This is done to allow user to drop the table at any moment @@ -451,8 +465,8 @@ void SystemLog::flushImpl(const std::vector & to_flush, ASTPtr query_ptr(insert.release()); // we need query context to do inserts to target table with MV containing subqueries or joins - Context insert_context(context); - insert_context.makeQueryContext(); + auto insert_context = Context::createCopy(context); + insert_context->makeQueryContext(); InterpreterInsertQuery interpreter(query_ptr, insert_context); BlockIO io = interpreter.execute(); @@ -468,12 +482,12 @@ void SystemLog::flushImpl(const std::vector & to_flush, { std::lock_guard lock(mutex); - flushed_before = to_flush_end; + flushed_up_to = to_flush_end; is_force_prepare_tables = false; flush_event.notify_all(); } - LOG_TRACE(log, "Flushed system log"); + LOG_TRACE(log, "Flushed system log up to offset {}", to_flush_end); } @@ -482,7 +496,7 @@ void SystemLog::prepareTable() { String description = table_id.getNameForLogs(); - table = DatabaseCatalog::instance().tryGetTable(table_id, context); + table = DatabaseCatalog::instance().tryGetTable(table_id, getContext()); if (table) { @@ -494,7 +508,8 @@ void SystemLog::prepareTable() { /// Rename the existing table. int suffix = 0; - while (DatabaseCatalog::instance().isTableExist({table_id.database_name, table_id.table_name + "_" + toString(suffix)}, context)) + while (DatabaseCatalog::instance().isTableExist( + {table_id.database_name, table_id.table_name + "_" + toString(suffix)}, getContext())) ++suffix; auto rename = std::make_shared(); @@ -513,10 +528,14 @@ void SystemLog::prepareTable() rename->elements.emplace_back(elem); - LOG_DEBUG(log, "Existing table {} for system log has obsolete or different structure. Renaming it to {}", description, backQuoteIfNeed(to.table)); + LOG_DEBUG( + log, + "Existing table {} for system log has obsolete or different structure. Renaming it to {}", + description, + backQuoteIfNeed(to.table)); - Context query_context = context; - query_context.makeQueryContext(); + auto query_context = Context::createCopy(context); + query_context->makeQueryContext(); InterpreterRenameQuery(rename, query_context).execute(); /// The required table will be created. 
@@ -534,13 +553,14 @@ void SystemLog::prepareTable() auto create = getCreateTableQuery(); - Context query_context = context; - query_context.makeQueryContext(); + auto query_context = Context::createCopy(context); + query_context->makeQueryContext(); + InterpreterCreateQuery interpreter(create, query_context); interpreter.setInternal(true); interpreter.execute(); - table = DatabaseCatalog::instance().getTable(table_id, context); + table = DatabaseCatalog::instance().getTable(table_id, getContext()); } is_prepared = true; diff --git a/src/Interpreters/TableJoin.cpp b/src/Interpreters/TableJoin.cpp index c50c61c418c..f547e011a73 100644 --- a/src/Interpreters/TableJoin.cpp +++ b/src/Interpreters/TableJoin.cpp @@ -214,12 +214,12 @@ Block TableJoin::getRequiredRightKeys(const Block & right_table_keys, std::vecto bool TableJoin::leftBecomeNullable(const DataTypePtr & column_type) const { - return forceNullableLeft() && column_type->canBeInsideNullable(); + return forceNullableLeft() && JoinCommon::canBecomeNullable(column_type); } bool TableJoin::rightBecomeNullable(const DataTypePtr & column_type) const { - return forceNullableRight() && column_type->canBeInsideNullable(); + return forceNullableRight() && JoinCommon::canBecomeNullable(column_type); } void TableJoin::addJoinedColumn(const NameAndTypePair & joined_column) @@ -233,7 +233,7 @@ void TableJoin::addJoinedColumn(const NameAndTypePair & joined_column) } if (rightBecomeNullable(type)) - type = makeNullable(type); + type = JoinCommon::convertTypeToNullable(type); columns_added_by_join.emplace_back(joined_column.name, type); } @@ -265,7 +265,7 @@ void TableJoin::addJoinedColumnsAndCorrectTypes(ColumnsWithTypeAndName & columns /// No need to nullify constants bool is_column_const = col.column && isColumnConst(*col.column); if (!is_column_const) - col.type = makeNullable(col.type); + col.type = JoinCommon::convertTypeToNullable(col.type); } } diff --git a/src/Interpreters/TableJoin.h b/src/Interpreters/TableJoin.h index 71a27849297..b75ef848f13 100644 --- a/src/Interpreters/TableJoin.h +++ b/src/Interpreters/TableJoin.h @@ -128,7 +128,11 @@ public: bool allowDictJoin(const String & dict_key, const Block & sample_block, Names &, NamesAndTypesList &) const; bool preferMergeJoin() const { return join_algorithm == JoinAlgorithm::PREFER_PARTIAL_MERGE; } bool forceMergeJoin() const { return join_algorithm == JoinAlgorithm::PARTIAL_MERGE; } - bool forceHashJoin() const { return join_algorithm == JoinAlgorithm::HASH; } + bool forceHashJoin() const + { + /// HashJoin always used for DictJoin + return dictionary_reader || join_algorithm == JoinAlgorithm::HASH; + } bool forceNullableRight() const { return join_use_nulls && isLeftOrFull(table_join.kind); } bool forceNullableLeft() const { return join_use_nulls && isRightOrFull(table_join.kind); } diff --git a/src/Interpreters/TextLog.cpp b/src/Interpreters/TextLog.cpp index f60b6acae6f..f5a0ce51d49 100644 --- a/src/Interpreters/TextLog.cpp +++ b/src/Interpreters/TextLog.cpp @@ -55,7 +55,7 @@ void TextLogElement::appendToBlock(MutableColumns & columns) const { size_t i = 0; - columns[i++]->insert(DateLUT::instance().toDayNum(event_time)); + columns[i++]->insert(DateLUT::instance().toDayNum(event_time).toUnderType()); columns[i++]->insert(event_time); columns[i++]->insert(event_time_microseconds); columns[i++]->insert(microseconds); @@ -74,7 +74,7 @@ void TextLogElement::appendToBlock(MutableColumns & columns) const columns[i++]->insert(source_line); } -TextLog::TextLog(Context & context_, const String & 
database_name_, +TextLog::TextLog(ContextPtr context_, const String & database_name_, const String & table_name_, const String & storage_def_, size_t flush_interval_milliseconds_) : SystemLog(context_, database_name_, table_name_, diff --git a/src/Interpreters/TextLog.h b/src/Interpreters/TextLog.h index da678868be3..7ff55128a90 100644 --- a/src/Interpreters/TextLog.h +++ b/src/Interpreters/TextLog.h @@ -33,7 +33,7 @@ class TextLog : public SystemLog { public: TextLog( - Context & context_, + ContextPtr context_, const String & database_name_, const String & table_name_, const String & storage_def_, diff --git a/src/Interpreters/ThreadStatusExt.cpp b/src/Interpreters/ThreadStatusExt.cpp index 8a979721290..c04534e11a1 100644 --- a/src/Interpreters/ThreadStatusExt.cpp +++ b/src/Interpreters/ThreadStatusExt.cpp @@ -33,9 +33,11 @@ namespace ErrorCodes void ThreadStatus::applyQuerySettings() { - const Settings & settings = query_context->getSettingsRef(); + auto query_context_ptr = query_context.lock(); + assert(query_context_ptr); + const Settings & settings = query_context_ptr->getSettingsRef(); - query_id = query_context->getCurrentQueryId(); + query_id = query_context_ptr->getCurrentQueryId(); initQueryProfiler(); untracked_memory_limit = settings.max_untracked_memory; @@ -58,26 +60,26 @@ void ThreadStatus::applyQuerySettings() } -void ThreadStatus::attachQueryContext(Context & query_context_) +void ThreadStatus::attachQueryContext(ContextPtr query_context_) { - query_context = &query_context_; + query_context = query_context_; - if (!global_context) - global_context = &query_context->getGlobalContext(); + if (global_context.expired()) + global_context = query_context_->getGlobalContext(); if (thread_group) { std::lock_guard lock(thread_group->mutex); thread_group->query_context = query_context; - if (!thread_group->global_context) + if (thread_group->global_context.expired()) thread_group->global_context = global_context; } // Generate new span for thread manually here, because we can't depend // on OpenTelemetrySpanHolder due to link order issues. // FIXME why and how is this different from setupState()? - thread_trace_context = query_context->query_trace_context; + thread_trace_context = query_context_->query_trace_context; if (thread_trace_context.trace_id) { thread_trace_context.span_id = thread_local_rng(); @@ -113,17 +115,17 @@ void ThreadStatus::setupState(const ThreadGroupStatusPtr & thread_group_) fatal_error_callback = thread_group->fatal_error_callback; query_context = thread_group->query_context; - if (!global_context) + if (global_context.expired()) global_context = thread_group->global_context; } - if (query_context) + if (auto query_context_ptr = query_context.lock()) { applyQuerySettings(); // Generate new span for thread manually here, because we can't depend // on OpenTelemetrySpanHolder due to link order issues. 
- thread_trace_context = query_context->query_trace_context; + thread_trace_context = query_context_ptr->query_trace_context; if (thread_trace_context.trace_id) { thread_trace_context.span_id = thread_local_rng(); @@ -201,9 +203,9 @@ void ThreadStatus::initPerformanceCounters() // query_start_time_nanoseconds cannot be used here since RUsageCounters expect CLOCK_MONOTONIC *last_rusage = RUsageCounters::current(); - if (query_context) + if (auto query_context_ptr = query_context.lock()) { - const Settings & settings = query_context->getSettingsRef(); + const Settings & settings = query_context_ptr->getSettingsRef(); if (settings.metrics_perf_events_enabled) { try @@ -246,8 +248,8 @@ void ThreadStatus::finalizePerformanceCounters() // 'select 1 settings metrics_perf_events_enabled = 1', I still get // query_context->getSettingsRef().metrics_perf_events_enabled == 0 *shrug*. bool close_perf_descriptors = true; - if (query_context) - close_perf_descriptors = !query_context->getSettingsRef().metrics_perf_events_enabled; + if (auto query_context_ptr = query_context.lock()) + close_perf_descriptors = !query_context_ptr->getSettingsRef().metrics_perf_events_enabled; try { @@ -262,17 +264,19 @@ void ThreadStatus::finalizePerformanceCounters() try { - if (global_context && query_context) + auto global_context_ptr = global_context.lock(); + auto query_context_ptr = query_context.lock(); + if (global_context_ptr && query_context_ptr) { - const auto & settings = query_context->getSettingsRef(); + const auto & settings = query_context_ptr->getSettingsRef(); if (settings.log_queries && settings.log_query_threads) { const auto now = std::chrono::system_clock::now(); Int64 query_duration_ms = (time_in_microseconds(now) - query_start_time_microseconds) / 1000; if (query_duration_ms >= settings.log_queries_min_query_duration_ms.totalMilliseconds()) { - if (auto thread_log = global_context->getQueryThreadLog()) - logToQueryThreadLog(*thread_log, query_context->getCurrentDatabase(), now); + if (auto thread_log = global_context_ptr->getQueryThreadLog()) + logToQueryThreadLog(*thread_log, query_context_ptr->getCurrentDatabase(), now); } } } @@ -286,10 +290,13 @@ void ThreadStatus::finalizePerformanceCounters() void ThreadStatus::initQueryProfiler() { /// query profilers are useless without trace collector - if (!global_context || !global_context->hasTraceCollector()) + auto global_context_ptr = global_context.lock(); + if (!global_context_ptr || !global_context_ptr->hasTraceCollector()) return; - const auto & settings = query_context->getSettingsRef(); + auto query_context_ptr = query_context.lock(); + assert(query_context_ptr); + const auto & settings = query_context_ptr->getSettingsRef(); try { @@ -316,6 +323,8 @@ void ThreadStatus::finalizeQueryProfiler() void ThreadStatus::detachQuery(bool exit_if_already_detached, bool thread_exits) { + MemoryTracker::LockExceptionInThread lock(VariableContext::Global); + if (exit_if_already_detached && thread_state == ThreadState::DetachedFromQuery) { thread_state = thread_exits ? 
ThreadState::Died : ThreadState::DetachedFromQuery; @@ -325,9 +334,10 @@ void ThreadStatus::detachQuery(bool exit_if_already_detached, bool thread_exits) assertState({ThreadState::AttachedToQuery}, __PRETTY_FUNCTION__); std::shared_ptr opentelemetry_span_log; - if (thread_trace_context.trace_id && query_context) + auto query_context_ptr = query_context.lock(); + if (thread_trace_context.trace_id && query_context_ptr) { - opentelemetry_span_log = query_context->getOpenTelemetrySpanLog(); + opentelemetry_span_log = query_context_ptr->getOpenTelemetrySpanLog(); } if (opentelemetry_span_log) @@ -347,7 +357,8 @@ void ThreadStatus::detachQuery(bool exit_if_already_detached, bool thread_exits) // is going to fail, because we're going to reset it to zero later in // this function. span.span_id = thread_trace_context.span_id; - span.parent_span_id = query_context->query_trace_context.span_id; + assert(query_context_ptr); + span.parent_span_id = query_context_ptr->query_trace_context.span_id; span.operation_name = getThreadName(); span.start_time_us = query_start_time_microseconds; span.finish_time_us = @@ -370,7 +381,7 @@ void ThreadStatus::detachQuery(bool exit_if_already_detached, bool thread_exits) memory_tracker.setParent(thread_group->memory_tracker.getParent()); query_id.clear(); - query_context = nullptr; + query_context.reset(); thread_trace_context.trace_id = 0; thread_trace_context.span_id = 0; thread_group.reset(); @@ -429,11 +440,12 @@ void ThreadStatus::logToQueryThreadLog(QueryThreadLog & thread_log, const String } } - if (query_context) + auto query_context_ptr = query_context.lock(); + if (query_context_ptr) { - elem.client_info = query_context->getClientInfo(); + elem.client_info = query_context_ptr->getClientInfo(); - if (query_context->getSettingsRef().log_profile_events != 0) + if (query_context_ptr->getSettingsRef().log_profile_events != 0) { /// NOTE: Here we are in the same thread, so we can make memcpy() elem.profile_counters = std::make_shared(performance_counters.getPartiallyAtomicSnapshot()); @@ -467,7 +479,7 @@ void CurrentThread::attachToIfDetached(const ThreadGroupStatusPtr & thread_group current_thread->deleter = CurrentThread::defaultThreadDeleter; } -void CurrentThread::attachQueryContext(Context & query_context) +void CurrentThread::attachQueryContext(ContextPtr query_context) { if (unlikely(!current_thread)) return; @@ -496,12 +508,12 @@ void CurrentThread::detachQueryIfNotDetached() } -CurrentThread::QueryScope::QueryScope(Context & query_context) +CurrentThread::QueryScope::QueryScope(ContextPtr query_context) { CurrentThread::initializeQuery(); CurrentThread::attachQueryContext(query_context); - if (!query_context.hasQueryContext()) - query_context.makeQueryContext(); + if (!query_context->hasQueryContext()) + query_context->makeQueryContext(); } void CurrentThread::QueryScope::logPeakMemoryUsage() diff --git a/src/Interpreters/TraceLog.cpp b/src/Interpreters/TraceLog.cpp index 40bcc0db445..fe7512f2f00 100644 --- a/src/Interpreters/TraceLog.cpp +++ b/src/Interpreters/TraceLog.cpp @@ -42,7 +42,7 @@ void TraceLogElement::appendToBlock(MutableColumns & columns) const { size_t i = 0; - columns[i++]->insert(DateLUT::instance().toDayNum(event_time)); + columns[i++]->insert(DateLUT::instance().toDayNum(event_time).toUnderType()); columns[i++]->insert(event_time); columns[i++]->insert(event_time_microseconds); columns[i++]->insert(timestamp_ns); diff --git a/src/Interpreters/TreeOptimizer.cpp b/src/Interpreters/TreeOptimizer.cpp index 5c6f76c8c29..5b06c00435a 100644 
--- a/src/Interpreters/TreeOptimizer.cpp +++ b/src/Interpreters/TreeOptimizer.cpp @@ -81,7 +81,7 @@ void appendUnusedGroupByColumn(ASTSelectQuery * select_query, const NameSet & so } /// Eliminates injective function calls and constant expressions from group by statement. -void optimizeGroupBy(ASTSelectQuery * select_query, const NameSet & source_columns, const Context & context) +void optimizeGroupBy(ASTSelectQuery * select_query, const NameSet & source_columns, ContextPtr context) { const FunctionFactory & function_factory = FunctionFactory::instance(); @@ -135,8 +135,7 @@ void optimizeGroupBy(ASTSelectQuery * select_query, const NameSet & source_colum const auto & dict_name = dict_name_ast->value.safeGet(); const auto & attr_name = attr_name_ast->value.safeGet(); - String resolved_name = DatabaseCatalog::instance().resolveDictionaryName(dict_name); - const auto & dict_ptr = context.getExternalDictionariesLoader().getDictionary(resolved_name); + const auto & dict_ptr = context->getExternalDictionariesLoader().getDictionary(dict_name, context); if (!dict_ptr->isInjective(attr_name)) { ++i; @@ -271,7 +270,7 @@ void optimizeDuplicatesInOrderBy(const ASTSelectQuery * select_query) } /// Optimize duplicate ORDER BY -void optimizeDuplicateOrderBy(ASTPtr & query, const Context & context) +void optimizeDuplicateOrderBy(ASTPtr & query, ContextPtr context) { DuplicateOrderByVisitor::Data order_by_data{context}; DuplicateOrderByVisitor(order_by_data).visit(query); @@ -397,7 +396,7 @@ void optimizeDuplicateDistinct(ASTSelectQuery & select) /// Replace monotonous functions in ORDER BY if they don't participate in GROUP BY expression, /// has a single argument and not an aggregate functions. -void optimizeMonotonousFunctionsInOrderBy(ASTSelectQuery * select_query, const Context & context, +void optimizeMonotonousFunctionsInOrderBy(ASTSelectQuery * select_query, ContextPtr context, const TablesWithColumns & tables_with_columns, const Names & sorting_key_columns) { @@ -449,7 +448,7 @@ void optimizeMonotonousFunctionsInOrderBy(ASTSelectQuery * select_query, const C /// Optimize ORDER BY x, y, f(x), g(x, y), f(h(x)), t(f(x), g(x)) into ORDER BY x, y /// in case if f(), g(), h(), t() are deterministic (in scope of query). /// Don't optimize ORDER BY f(x), g(x), x even if f(x) is bijection for x or g(x). 
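`optimizeGroupBy` above eliminates injective function calls from GROUP BY keys (grouping by `f(x)` for injective `f` groups identically to grouping by `x`); the hunk additionally routes the `dictGet` injectivity check through `getDictionary(dict_name, context)`. A toy, string-level restatement of the unwrap rule (hypothetical helper; the real code rewrites the AST, not strings):

```cpp
#include <cassert>
#include <set>
#include <string>
#include <vector>

// Toy version of the GROUP BY rewrite: an injective function of a key groups
// identically to the key itself, so the wrapper can be dropped.
// Handles a single f(x) wrapper; the real code walks the AST.
std::vector<std::string> optimizeGroupBy(std::vector<std::string> keys,
                                         const std::set<std::string> & injective)
{
    for (auto & key : keys)
    {
        auto open = key.find('(');
        if (open == std::string::npos)
            continue;
        if (injective.count(key.substr(0, open)))
            key = key.substr(open + 1, key.size() - open - 2);  // unwrap f(x) -> x
    }
    return keys;
}

int main()
{
    auto keys = optimizeGroupBy({"toString(x)", "y"}, {"toString", "negate"});
    assert(keys == (std::vector<std::string>{"x", "y"}));
}
```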
-void optimizeRedundantFunctionsInOrderBy(const ASTSelectQuery * select_query, const Context & context) +void optimizeRedundantFunctionsInOrderBy(const ASTSelectQuery * select_query, ContextPtr context) { const auto & order_by = select_query->orderBy(); if (!order_by) @@ -562,9 +561,9 @@ void optimizeCountConstantAndSumOne(ASTPtr & query) } -void optimizeInjectiveFunctionsInsideUniq(ASTPtr & query, const Context & context) +void optimizeInjectiveFunctionsInsideUniq(ASTPtr & query, ContextPtr context) { - RemoveInjectiveFunctionsVisitor::Data data = {context}; + RemoveInjectiveFunctionsVisitor::Data data(context); RemoveInjectiveFunctionsVisitor(data).visit(query); } @@ -593,10 +592,10 @@ void TreeOptimizer::optimizeIf(ASTPtr & query, Aliases & aliases, bool if_chain_ void TreeOptimizer::apply(ASTPtr & query, Aliases & aliases, const NameSet & source_columns_set, const std::vector & tables_with_columns, - const Context & context, const StorageMetadataPtr & metadata_snapshot, + ContextPtr context, const StorageMetadataPtr & metadata_snapshot, bool & rewrite_subqueries) { - const auto & settings = context.getSettingsRef(); + const auto & settings = context->getSettingsRef(); auto * select_query = query->as(); if (!select_query) diff --git a/src/Interpreters/TreeOptimizer.h b/src/Interpreters/TreeOptimizer.h index a10dfc57451..b268b230f4e 100644 --- a/src/Interpreters/TreeOptimizer.h +++ b/src/Interpreters/TreeOptimizer.h @@ -1,13 +1,13 @@ #pragma once -#include #include +#include #include +#include namespace DB { -class Context; struct StorageInMemoryMetadata; using StorageMetadataPtr = std::shared_ptr; @@ -16,10 +16,14 @@ using StorageMetadataPtr = std::shared_ptr; class TreeOptimizer { public: - static void apply(ASTPtr & query, Aliases & aliases, const NameSet & source_columns_set, - const std::vector & tables_with_columns, - const Context & context, const StorageMetadataPtr & metadata_snapshot, - bool & rewrite_subqueries); + static void apply( + ASTPtr & query, + Aliases & aliases, + const NameSet & source_columns_set, + const std::vector & tables_with_columns, + ContextPtr context, + const StorageMetadataPtr & metadata_snapshot, + bool & rewrite_subqueries); static void optimizeIf(ASTPtr & query, Aliases & aliases, bool if_chain_to_multiif); }; diff --git a/src/Interpreters/TreeRewriter.cpp b/src/Interpreters/TreeRewriter.cpp index 9318f87175a..324a773fbc2 100644 --- a/src/Interpreters/TreeRewriter.cpp +++ b/src/Interpreters/TreeRewriter.cpp @@ -26,6 +26,7 @@ #include #include #include +#include #include #include @@ -181,8 +182,72 @@ struct CustomizeAggregateFunctionsMoveSuffixData } }; +struct FuseSumCountAggregates +{ + std::vector sums {}; + std::vector counts {}; + std::vector avgs {}; + + void addFuncNode(ASTFunction * func) + { + if (func->name == "sum") + sums.push_back(func); + else if (func->name == "count") + counts.push_back(func); + else + { + assert(func->name == "avg"); + avgs.push_back(func); + } + } + + bool canBeFused() const + { + // Need at least two different kinds of functions to fuse. 
+ if (sums.empty() && counts.empty()) + return false; + if (sums.empty() && avgs.empty()) + return false; + if (counts.empty() && avgs.empty()) + return false; + return true; + } +}; + +struct FuseSumCountAggregatesVisitorData +{ + using TypeToVisit = ASTFunction; + + std::unordered_map fuse_map; + + void visit(ASTFunction & func, ASTPtr &) + { + if (func.name == "sum" || func.name == "avg" || func.name == "count") + { + if (func.arguments->children.empty()) + return; + + // Probably we can extend it to match count() for non-nullable argument + // to sum/avg with any other argument. Now we require strict match. + const auto argument = func.arguments->children.at(0)->getColumnName(); + auto it = fuse_map.find(argument); + if (it != fuse_map.end()) + { + it->second.addFuncNode(&func); + } + else + { + FuseSumCountAggregates funcs{}; + funcs.addFuncNode(&func); + fuse_map[argument] = funcs; + } + } + } +}; + +using CustomizeAggregateFunctionsOrNullVisitor = InDepthNodeVisitor, true>; using CustomizeAggregateFunctionsMoveOrNullVisitor = InDepthNodeVisitor, true>; +using FuseSumCountAggregatesVisitor = InDepthNodeVisitor, true>; /// Translate qualified names such as db.table.column, table.column, table_alias.column to names' normal form. /// Expand asterisks and qualified asterisks with column names. @@ -200,6 +265,49 @@ void translateQualifiedNames(ASTPtr & query, const ASTSelectQuery & select_query throw Exception("Empty list of columns in SELECT query", ErrorCodes::EMPTY_LIST_OF_COLUMNS_QUERIED); } +// Replaces one avg/sum/count function with an appropriate expression built from +// sumCount(). +void replaceWithSumCount(String column_name, ASTFunction & func) +{ + auto func_base = makeASTFunction("sumCount", std::make_shared(column_name)); + auto exp_list = std::make_shared(); + if (func.name == "sum" || func.name == "count") + { + /// Rewrite "sum" to sumCount().1, rewrite "count" to sumCount().2 + UInt8 idx = (func.name == "sum" ? 1 : 2); + func.name = "tupleElement"; + exp_list->children.push_back(func_base); + exp_list->children.push_back(std::make_shared(idx)); + } + else + { + /// Rewrite "avg" to sumCount().1 / sumCount().2 + auto new_arg1 = makeASTFunction("tupleElement", func_base, std::make_shared(UInt8(1))); + auto new_arg2 = makeASTFunction("tupleElement", func_base, std::make_shared(UInt8(2))); + func.name = "divide"; + exp_list->children.push_back(new_arg1); + exp_list->children.push_back(new_arg2); + } + func.arguments = exp_list; + func.children.push_back(func.arguments); +} + +void fuseSumCountAggregates(std::unordered_map & fuse_map) +{ + for (auto & it : fuse_map) + { + if (it.second.canBeFused()) + { + for (auto & func: it.second.sums) + replaceWithSumCount(it.first, *func); + for (auto & func: it.second.avgs) + replaceWithSumCount(it.first, *func); + for (auto & func: it.second.counts) + replaceWithSumCount(it.first, *func); + } + } +} + bool hasArrayJoin(const ASTPtr & ast) { if (const ASTFunction * function = ast->as()) { @@ -293,13 +401,11 @@ void removeUnneededColumnsFromSelectClause(const ASTSelectQuery * select_query, else { ASTFunction * func = elem->as(); + + /// Never remove untuple. Its result column may be in required columns. + /// It is not easy to analyze untuple here, because types were not calculated yet.
if (func && func->name == "untuple") - for (const auto & col : required_result_columns) - if (col.rfind("_ut_", 0) == 0) - { - new_elements.push_back(elem); - break; - } + new_elements.push_back(elem); } } @@ -307,10 +413,10 @@ void removeUnneededColumnsFromSelectClause(const ASTSelectQuery * select_query, } /// Replacing scalar subqueries with constant values. -void executeScalarSubqueries(ASTPtr & query, const Context & context, size_t subquery_depth, Scalars & scalars, bool only_analyze) +void executeScalarSubqueries(ASTPtr & query, ContextPtr context, size_t subquery_depth, Scalars & scalars, bool only_analyze) { LogAST log; - ExecuteScalarSubqueriesVisitor::Data visitor_data{context, subquery_depth, scalars, only_analyze}; + ExecuteScalarSubqueriesVisitor::Data visitor_data{WithContext{context}, subquery_depth, scalars, only_analyze}; ExecuteScalarSubqueriesVisitor(visitor_data, log.stream()).visit(query); } @@ -405,13 +511,13 @@ void setJoinStrictness(ASTSelectQuery & select_query, JoinStrictness join_defaul /// Find the columns that are obtained by JOIN. void collectJoinedColumns(TableJoin & analyzed_join, const ASTSelectQuery & select_query, - const TablesWithColumns & tables, const Aliases & aliases, ASTPtr & new_where_conditions) + const TablesWithColumns & tables, const Aliases & aliases) { const ASTTablesInSelectQueryElement * node = select_query.join(); - if (!node) + if (!node || tables.size() < 2) return; - auto & table_join = node->table_join->as(); + const auto & table_join = node->table_join->as(); if (table_join.using_expression_list) { @@ -430,33 +536,16 @@ void collectJoinedColumns(TableJoin & analyzed_join, const ASTSelectQuery & sele { bool is_asof = (table_join.strictness == ASTTableJoin::Strictness::Asof); - CollectJoinOnKeysVisitor::Data data{analyzed_join, tables[0], tables[1], aliases, is_asof, table_join.kind}; + CollectJoinOnKeysVisitor::Data data{analyzed_join, tables[0], tables[1], aliases, is_asof}; CollectJoinOnKeysVisitor(data).visit(table_join.on_expression); if (!data.has_some) throw Exception("Cannot get JOIN keys from JOIN ON section: " + queryToString(table_join.on_expression), ErrorCodes::INVALID_JOIN_ON_EXPRESSION); if (is_asof) - { data.asofToJoinKeys(); - } - else if (data.new_on_expression) - { - table_join.on_expression = data.new_on_expression; - new_where_conditions = data.new_where_conditions; - } } } -/// Move joined key related to only one table to WHERE clause -void moveJoinedKeyToWhere(ASTSelectQuery * select_query, ASTPtr & new_where_conditions) -{ - if (select_query->where()) - select_query->setExpression(ASTSelectQuery::Expression::WHERE, - makeASTFunction("and", new_where_conditions, select_query->where())); - else - select_query->setExpression(ASTSelectQuery::Expression::WHERE, new_where_conditions->clone()); -} - std::vector getAggregates(ASTPtr & query, const ASTSelectQuery & select_query) { @@ -662,7 +751,10 @@ void TreeRewriterResult::collectUsedColumns(const ASTPtr & query, bool is_select const auto & partition_desc = metadata_snapshot->getPartitionKey(); if (partition_desc.expression) { - const auto & partition_source_columns = partition_desc.expression->getRequiredColumns(); + auto partition_source_columns = partition_desc.expression->getRequiredColumns(); + partition_source_columns.push_back("_part"); + partition_source_columns.push_back("_partition_id"); + partition_source_columns.push_back("_part_uuid"); optimize_trivial_count = true; for (const auto & required_column : required) { @@ -786,7 +878,7 @@ TreeRewriterResultPtr 
TreeRewriter::analyzeSelect( size_t subquery_depth = select_options.subquery_depth; bool remove_duplicates = select_options.remove_duplicates; - const auto & settings = context.getSettingsRef(); + const auto & settings = getContext()->getSettingsRef(); const NameSet & source_columns_set = result.source_columns_set; @@ -813,7 +905,14 @@ TreeRewriterResultPtr TreeRewriter::analyzeSelect( /// Optimizes logical expressions. LogicalExpressionsOptimizer(select_query, settings.optimize_min_equality_disjunction_chain_length.value).perform(); - normalize(query, result.aliases, settings); + NameSet all_source_columns_set = source_columns_set; + if (table_join) + { + for (const auto & [name, _] : table_join->columns_from_joined_table) + all_source_columns_set.insert(name); + } + + normalize(query, result.aliases, all_source_columns_set, settings); /// Remove unneeded columns according to 'required_result_columns'. /// Leave all selected columns in case of DISTINCT; columns that contain arrayJoin function inside. @@ -822,25 +921,22 @@ TreeRewriterResultPtr TreeRewriter::analyzeSelect( removeUnneededColumnsFromSelectClause(select_query, required_result_columns, remove_duplicates); /// Executing scalar subqueries - replacing them with constant values. - executeScalarSubqueries(query, context, subquery_depth, result.scalars, select_options.only_analyze); + executeScalarSubqueries(query, getContext(), subquery_depth, result.scalars, select_options.only_analyze); - TreeOptimizer::apply(query, result.aliases, source_columns_set, tables_with_columns, context, result.metadata_snapshot, result.rewrite_subqueries); + TreeOptimizer::apply( + query, result.aliases, source_columns_set, tables_with_columns, getContext(), result.metadata_snapshot, result.rewrite_subqueries); /// array_join_alias_to_name, array_join_result_to_source. getArrayJoinedColumns(query, result, select_query, result.source_columns, source_columns_set); setJoinStrictness(*select_query, settings.join_default_strictness, settings.any_join_distinct_right_table_keys, result.analyzed_join->table_join); - - ASTPtr new_where_condition = nullptr; - collectJoinedColumns(*result.analyzed_join, *select_query, tables_with_columns, result.aliases, new_where_condition); - if (new_where_condition) - moveJoinedKeyToWhere(select_query, new_where_condition); + collectJoinedColumns(*result.analyzed_join, *select_query, tables_with_columns, result.aliases); /// rewrite filters for select query, must go after getArrayJoinedColumns if (settings.optimize_respect_aliases && result.metadata_snapshot) { - replaceAliasColumnsInQuery(query, result.metadata_snapshot->getColumns(), result.getArrayJoinSourceNameSet(), context); + replaceAliasColumnsInQuery(query, result.metadata_snapshot->getColumns(), result.getArrayJoinSourceNameSet(), getContext()); } result.aggregates = getAggregates(query, *select_query); @@ -867,14 +963,14 @@ TreeRewriterResultPtr TreeRewriter::analyze( if (query->as()) throw Exception("Not select analyze for select asts.", ErrorCodes::LOGICAL_ERROR); - const auto & settings = context.getSettingsRef(); + const auto & settings = getContext()->getSettingsRef(); TreeRewriterResult result(source_columns, storage, metadata_snapshot, false); - normalize(query, result.aliases, settings); + normalize(query, result.aliases, result.source_columns_set, settings); /// Executing scalar subqueries. Column defaults could be a scalar subquery. 
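The `collectUsedColumns` hunk above widens the set of columns that partition metadata can satisfy with the virtual part columns `_part`, `_partition_id`, and `_part_uuid`, so a `SELECT count()` whose filter touches only those columns can still be answered without reading data. A toy model of that check (hypothetical names; the truncated hunk suggests the loop clears the flag on the first non-covered column):

```cpp
#include <cassert>
#include <set>
#include <string>

// Toy of the trivial-count check: count() can be served from part metadata
// only if every column required by the filter is a partition-key column or
// one of the virtual part columns.
bool canUseTrivialCount(const std::set<std::string> & required,
                        std::set<std::string> partition_source_columns)
{
    partition_source_columns.insert("_part");
    partition_source_columns.insert("_partition_id");
    partition_source_columns.insert("_part_uuid");
    for (const auto & column : required)
        if (!partition_source_columns.count(column))
            return false;
    return true;
}

int main()
{
    assert(canUseTrivialCount({"_partition_id"}, {"date"}));  // covered by metadata
    assert(!canUseTrivialCount({"user_id"}, {"date"}));       // needs real data
}
```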
- executeScalarSubqueries(query, context, 0, result.scalars, false); + executeScalarSubqueries(query, getContext(), 0, result.scalars, false); TreeOptimizer::optimizeIf(query, result.aliases, settings.optimize_if_chain_to_multiif); @@ -896,7 +992,7 @@ TreeRewriterResultPtr TreeRewriter::analyze( return std::make_shared(result); } -void TreeRewriter::normalize(ASTPtr & query, Aliases & aliases, const Settings & settings) +void TreeRewriter::normalize(ASTPtr & query, Aliases & aliases, const NameSet & source_columns_set, const Settings & settings) { CustomizeCountDistinctVisitor::Data data_count_distinct{settings.count_distinct_implementation}; CustomizeCountDistinctVisitor(data_count_distinct).visit(query); @@ -922,7 +1018,18 @@ void TreeRewriter::normalize(ASTPtr & query, Aliases & aliases, const Settings & CustomizeGlobalNotInVisitor(data_global_not_null_in).visit(query); } - // Rewrite all aggregate functions to add -OrNull suffix to them + // Try to fuse sum/avg/count with identical arguments to one sumCount call, + // if we have at least two different functions. E.g. we will replace sum(x) + // and count(x) with sumCount(x).1 and sumCount(x).2, and sumCount() will + // be calculated only once because of CSE. + if (settings.optimize_fuse_sum_count_avg) + { + FuseSumCountAggregatesVisitor::Data data; + FuseSumCountAggregatesVisitor(data).visit(query); + fuseSumCountAggregates(data.fuse_map); + } + + /// Rewrite all aggregate functions to add -OrNull suffix to them if (settings.aggregate_functions_null_for_empty) { CustomizeAggregateFunctionsOrNullVisitor::Data data_or_null{"OrNull"}; @@ -945,7 +1052,7 @@ void TreeRewriter::normalize(ASTPtr & query, Aliases & aliases, const Settings & FunctionNameNormalizer().visit(query.get()); /// Common subexpression elimination. Rewrite rules. - QueryNormalizer::Data normalizer_data(aliases, settings); + QueryNormalizer::Data normalizer_data(aliases, source_columns_set, settings); QueryNormalizer(normalizer_data).visit(query); } diff --git a/src/Interpreters/TreeRewriter.h b/src/Interpreters/TreeRewriter.h index 1cb5ff26525..26cfaad1fbb 100644 --- a/src/Interpreters/TreeRewriter.h +++ b/src/Interpreters/TreeRewriter.h @@ -3,8 +3,9 @@ #include #include #include -#include +#include #include +#include #include namespace DB @@ -13,7 +14,6 @@ namespace DB class ASTFunction; struct ASTTablesInSelectQueryElement; class TableJoin; -class Context; struct Settings; struct SelectQueryOptions; using Scalars = std::map; @@ -92,12 +92,10 @@ using TreeRewriterResultPtr = std::shared_ptr; /// * scalar subqueries are executed replaced with constants /// * unneeded columns are removed from SELECT clause /// * duplicated columns are removed from ORDER BY, LIMIT BY, USING(...). 
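The `optimize_fuse_sum_count_avg` branch above rewrites `sum(x)`, `count(x)`, and `avg(x)` over the same argument into projections of a single `sumCount(x)`, which common-subexpression elimination then computes once. A toy, string-level model of the rewrite and of the `canBeFused` rule (at least two distinct function kinds); the real code, shown earlier, edits the AST:

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// "sum(x)" -> "sumCount(x).1", "count(x)" -> "sumCount(x).2",
// "avg(x)"  -> "sumCount(x).1 / sumCount(x).2".
std::string fuse(const std::string & func, const std::string & arg)
{
    const std::string base = "sumCount(" + arg + ")";
    if (func == "sum")   return base + ".1";
    if (func == "count") return base + ".2";
    assert(func == "avg");
    return base + ".1 / " + base + ".2";
}

// At least two *different* kinds of functions over the same argument are
// needed, otherwise the rewrite saves nothing.
bool canBeFused(size_t sums, size_t counts, size_t avgs)
{
    return (sums && counts) || (sums && avgs) || (counts && avgs);
}

int main()
{
    assert(canBeFused(1, 1, 0));
    assert(!canBeFused(2, 0, 0));  // two sum(x) alone: nothing to fuse
    assert(fuse("avg", "x") == "sumCount(x).1 / sumCount(x).2");
}
```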
-class TreeRewriter +class TreeRewriter : WithContext { public: - TreeRewriter(const Context & context_) - : context(context_) - {} + explicit TreeRewriter(ContextPtr context_) : WithContext(context_) {} /// Analyze and rewrite not select query TreeRewriterResultPtr analyze( @@ -117,9 +115,7 @@ public: std::shared_ptr table_join = {}) const; private: - const Context & context; - - static void normalize(ASTPtr & query, Aliases & aliases, const Settings & settings); + static void normalize(ASTPtr & query, Aliases & aliases, const NameSet & source_columns_set, const Settings & settings); }; } diff --git a/src/Interpreters/WindowDescription.cpp b/src/Interpreters/WindowDescription.cpp index e922f49c896..05d75d4647e 100644 --- a/src/Interpreters/WindowDescription.cpp +++ b/src/Interpreters/WindowDescription.cpp @@ -1,5 +1,6 @@ #include +#include #include #include @@ -60,7 +61,7 @@ void WindowFrame::toString(WriteBuffer & buf) const } else { - buf << abs(begin_offset); + buf << applyVisitor(FieldVisitorToString(), begin_offset); buf << " " << (begin_preceding ? "PRECEDING" : "FOLLOWING"); } @@ -77,7 +78,7 @@ void WindowFrame::toString(WriteBuffer & buf) const } else { - buf << abs(end_offset); + buf << applyVisitor(FieldVisitorToString(), end_offset); buf << " " << (end_preceding ? "PRECEDING" : "FOLLOWING"); } @@ -85,6 +86,38 @@ void WindowFrame::toString(WriteBuffer & buf) const void WindowFrame::checkValid() const { + // Check the validity of offsets. + if (type == WindowFrame::FrameType::Rows + || type == WindowFrame::FrameType::Groups) + { + if (begin_type == BoundaryType::Offset + && !((begin_offset.getType() == Field::Types::UInt64 + || begin_offset.getType() == Field::Types::Int64) + && begin_offset.get() >= 0 + && begin_offset.get() < INT_MAX)) + { + throw Exception(ErrorCodes::BAD_ARGUMENTS, + "Frame start offset for '{}' frame must be a nonnegative 32-bit integer, '{}' of type '{}' given.", + toString(type), + applyVisitor(FieldVisitorToString(), begin_offset), + Field::Types::toString(begin_offset.getType())); + } + + if (end_type == BoundaryType::Offset + && !((end_offset.getType() == Field::Types::UInt64 + || end_offset.getType() == Field::Types::Int64) + && end_offset.get() >= 0 + && end_offset.get() < INT_MAX)) + { + throw Exception(ErrorCodes::BAD_ARGUMENTS, + "Frame end offset for '{}' frame must be a nonnegative 32-bit integer, '{}' of type '{}' given.", + toString(type), + applyVisitor(FieldVisitorToString(), end_offset), + Field::Types::toString(end_offset.getType())); + } + } + + // Check relative positioning of offsets. // UNBOUNDED PRECEDING end and UNBOUNDED FOLLOWING start should have been // forbidden at the parsing level. assert(!(begin_type == BoundaryType::Unbounded && !begin_preceding)); @@ -121,23 +154,33 @@ void WindowFrame::checkValid() const if (end_type == BoundaryType::Offset && begin_type == BoundaryType::Offset) { - // Frame starting with following rows can't have preceding rows. - if (!(end_preceding && !begin_preceding)) + // Frame start offset must be less than or equal to the frame end offset. + bool begin_less_equal_end; + if (begin_preceding && end_preceding) { - // Frame start offset must be less or equal that the frame end offset. - const bool begin_before_end - = begin_offset * (begin_preceding ? -1 : 1) - <= end_offset * (end_preceding ? -1 : 1); - - if (!begin_before_end) - { - throw Exception(ErrorCodes::BAD_ARGUMENTS, - "Frame start offset {} {} does not precede the frame end offset {} {}", - begin_offset, begin_preceding ?
"PRECEDING" : "FOLLOWING", - end_offset, end_preceding ? "PRECEDING" : "FOLLOWING"); - } - return; + begin_less_equal_end = begin_offset >= end_offset; } + else if (begin_preceding && !end_preceding) + { + begin_less_equal_end = true; + } + else if (!begin_preceding && end_preceding) + { + begin_less_equal_end = false; + } + else /* if (!begin_preceding && !end_preceding) */ + { + begin_less_equal_end = begin_offset <= end_offset; + } + + if (!begin_less_equal_end) + { + throw Exception(ErrorCodes::BAD_ARGUMENTS, + "Frame start offset {} {} does not precede the frame end offset {} {}", + begin_offset, begin_preceding ? "PRECEDING" : "FOLLOWING", + end_offset, end_preceding ? "PRECEDING" : "FOLLOWING"); + } + return; } throw Exception(ErrorCodes::BAD_ARGUMENTS, diff --git a/src/Interpreters/WindowDescription.h b/src/Interpreters/WindowDescription.h index faad4649f91..70a4e0e44e0 100644 --- a/src/Interpreters/WindowDescription.h +++ b/src/Interpreters/WindowDescription.h @@ -44,14 +44,13 @@ struct WindowFrame // Offset might be both preceding and following, controlled by begin_preceding, // but the offset value must be positive. BoundaryType begin_type = BoundaryType::Unbounded; - // This should have been a Field but I'm getting some crazy linker errors. - int64_t begin_offset = 0; + Field begin_offset = 0; bool begin_preceding = true; // Here as well, Unbounded can only be UNBOUNDED FOLLOWING, and end_preceding // must be false. BoundaryType end_type = BoundaryType::Current; - int64_t end_offset = 0; + Field end_offset = 0; bool end_preceding = false; diff --git a/src/Interpreters/addMissingDefaults.cpp b/src/Interpreters/addMissingDefaults.cpp index 9e8ce1f75b4..ffabb43ed42 100644 --- a/src/Interpreters/addMissingDefaults.cpp +++ b/src/Interpreters/addMissingDefaults.cpp @@ -19,11 +19,15 @@ ActionsDAGPtr addMissingDefaults( const Block & header, const NamesAndTypesList & required_columns, const ColumnsDescription & columns, - const Context & context) + ContextPtr context, + bool null_as_default) { + auto actions = std::make_shared(header.getColumnsWithTypeAndName()); + auto & index = actions->getIndex(); + /// For missing columns of nested structure, you need to create not a column of empty arrays, but a column of arrays of correct lengths. /// First, remember the offset columns for all arrays in the block. 
- std::map nested_groups; for (size_t i = 0, size = header.columns(); i < size; ++i) { @@ -35,14 +39,12 @@ ActionsDAGPtr addMissingDefaults( auto & group = nested_groups[offsets_name]; if (group.empty()) - group.push_back({}); + group.push_back(nullptr); - group.push_back(elem.name); + group.push_back(actions->getInputs()[i]); } } - auto actions = std::make_shared(header.getColumnsWithTypeAndName()); - FunctionOverloadResolverPtr func_builder_replicate = FunctionFactory::instance().get("replicate", context); /// We take given columns from input block and missed columns without default value @@ -61,11 +63,11 @@ ActionsDAGPtr addMissingDefaults( DataTypePtr nested_type = typeid_cast(*column.type).getNestedType(); ColumnPtr nested_column = nested_type->createColumnConstWithDefaultValue(0); - const auto & constant = actions->addColumn({std::move(nested_column), nested_type, column.name}, true); + const auto & constant = actions->addColumn({std::move(nested_column), nested_type, column.name}); auto & group = nested_groups[offsets_name]; - group[0] = constant.result_name; - actions->addFunction(func_builder_replicate, group, constant.result_name, context, true); + group[0] = &constant; + index.push_back(&actions->addFunction(func_builder_replicate, group, constant.result_name)); continue; } @@ -74,11 +76,12 @@ ActionsDAGPtr addMissingDefaults( * it can be full (or the interpreter may decide that it is constant everywhere). */ auto new_column = column.type->createColumnConstWithDefaultValue(0); - actions->addColumn({std::move(new_column), column.type, column.name}, true, true); + const auto * col = &actions->addColumn({std::move(new_column), column.type, column.name}); + index.push_back(&actions->materializeNode(*col)); } /// Computes explicitly specified values by default and materialized columns. - if (auto dag = evaluateMissingDefaults(actions->getResultColumns(), required_columns, columns, context)) + if (auto dag = evaluateMissingDefaults(actions->getResultColumns(), required_columns, columns, context, true, null_as_default)) actions = ActionsDAG::merge(std::move(*actions), std::move(*dag)); else /// Removes unused columns and reorders result. diff --git a/src/Interpreters/addMissingDefaults.h b/src/Interpreters/addMissingDefaults.h index e746c7cc9e6..0a3d4de478c 100644 --- a/src/Interpreters/addMissingDefaults.h +++ b/src/Interpreters/addMissingDefaults.h @@ -1,15 +1,16 @@ #pragma once -#include -#include +#include + #include +#include +#include namespace DB { class Block; -class Context; class NamesAndTypesList; class ColumnsDescription; @@ -20,12 +21,10 @@ using ActionsDAGPtr = std::shared_ptr; * 1. Columns, that are missed inside request, but present in table without defaults (missed columns) * 2. Columns, that are missed inside request, but present in table with defaults (columns with default values) * 3. Columns that materialized from other columns (materialized columns) + * Also can substitute NULL with DEFAULT value in case of INSERT SELECT query (null_as_default) if the corresponding setting is 1. * All three types of columns are materialized (not constants).
*/ ActionsDAGPtr addMissingDefaults( - const Block & header, - const NamesAndTypesList & required_columns, - const ColumnsDescription & columns, - const Context & context); - + const Block & header, const NamesAndTypesList & required_columns, + const ColumnsDescription & columns, ContextPtr context, bool null_as_default = false); } diff --git a/src/Interpreters/addTypeConversionToAST.cpp b/src/Interpreters/addTypeConversionToAST.cpp index 18591fd732c..73c95bd9a8c 100644 --- a/src/Interpreters/addTypeConversionToAST.cpp +++ b/src/Interpreters/addTypeConversionToAST.cpp @@ -32,7 +32,7 @@ ASTPtr addTypeConversionToAST(ASTPtr && ast, const String & type_name) return func; } -ASTPtr addTypeConversionToAST(ASTPtr && ast, const String & type_name, const NamesAndTypesList & all_columns, const Context & context) +ASTPtr addTypeConversionToAST(ASTPtr && ast, const String & type_name, const NamesAndTypesList & all_columns, ContextPtr context) { auto syntax_analyzer_result = TreeRewriter(context).analyze(ast, all_columns); const auto actions = ExpressionAnalyzer(ast, syntax_analyzer_result, context).getActions(true); diff --git a/src/Interpreters/addTypeConversionToAST.h b/src/Interpreters/addTypeConversionToAST.h index 16fa98f6e0c..eb391b2c749 100644 --- a/src/Interpreters/addTypeConversionToAST.h +++ b/src/Interpreters/addTypeConversionToAST.h @@ -1,17 +1,19 @@ #pragma once -#include +#include #include +#include namespace DB { -class Context; + class NamesAndTypesList; + /// It will produce an expression with CAST to get an AST with the required type. ASTPtr addTypeConversionToAST(ASTPtr && ast, const String & type_name); // If same type, then ignore the wrapper of CAST function -ASTPtr addTypeConversionToAST(ASTPtr && ast, const String & type_name, const NamesAndTypesList & all_columns, const Context & context); +ASTPtr addTypeConversionToAST(ASTPtr && ast, const String & type_name, const NamesAndTypesList & all_columns, ContextPtr context); } diff --git a/src/Interpreters/convertFieldToType.cpp b/src/Interpreters/convertFieldToType.cpp index 1d93ef56dea..529caec9c80 100644 --- a/src/Interpreters/convertFieldToType.cpp +++ b/src/Interpreters/convertFieldToType.cpp @@ -125,23 +125,23 @@ static Field convertDecimalType(const Field & from, const To & type) Field convertFieldToTypeImpl(const Field & src, const IDataType & type, const IDataType * from_type_hint) { + // This was added to mitigate converting DateTime64-Field (a typedef to a Decimal64) to DataTypeDate64-compatible type. + if (from_type_hint && from_type_hint->equals(type)) + { + return src; + } + WhichDataType which_type(type); WhichDataType which_from_type; if (from_type_hint) { which_from_type = WhichDataType(*from_type_hint); - - // This was added to mitigate converting DateTime64-Field (a typedef to a Decimal64) to DataTypeDate64-compatible type. - if (from_type_hint && from_type_hint->equals(type)) - { - return src; - } } /// Conversion between Date and DateTime and vice versa. 
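`addTypeConversionToAST` above wraps an expression in a `CAST` to the requested type, and per the header comment the analyzed overload skips the wrapper when the type already matches. A toy, string-level model of that contract (hypothetical function, not the AST builder):

```cpp
#include <cassert>
#include <string>

// Toy of addTypeConversionToAST: wrap an expression in CAST(expr, 'Type'),
// skipping the wrapper when the expression already has the requested type.
std::string addTypeConversion(const std::string & expr, const std::string & current_type,
                              const std::string & target_type)
{
    if (current_type == target_type)
        return expr;  // same type: the CAST wrapper is ignored
    return "CAST(" + expr + ", '" + target_type + "')";
}

int main()
{
    assert(addTypeConversion("x", "UInt8", "UInt64") == "CAST(x, 'UInt64')");
    assert(addTypeConversion("x", "UInt64", "UInt64") == "x");
}
```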
if (which_type.isDate() && which_from_type.isDateTime()) { - return static_cast(*from_type_hint).getTimeZone().toDayNum(src.get()); + return static_cast(static_cast(*from_type_hint).getTimeZone().toDayNum(src.get()).toUnderType()); } else if (which_type.isDateTime() && which_from_type.isDate()) { @@ -344,7 +344,7 @@ Field convertFieldToTypeImpl(const Field & src, const IDataType & type, const ID ReadBufferFromString in_buffer(src.get()); try { - type_to_parse->deserializeAsWholeText(*col, in_buffer, FormatSettings{}); + type_to_parse->getDefaultSerialization()->deserializeWholeText(*col, in_buffer, FormatSettings{}); } catch (Exception & e) { @@ -377,6 +377,11 @@ Field convertFieldToType(const Field & from_value, const IDataType & to_type, co else if (const auto * nullable_type = typeid_cast(&to_type)) { const IDataType & nested_type = *nullable_type->getNestedType(); + + /// NULL remains NULL after any conversion. + if (WhichDataType(nested_type).isNothing()) + return {}; + if (from_type_hint && from_type_hint->equals(nested_type)) return from_value; return convertFieldToTypeImpl(from_value, nested_type, from_type_hint); @@ -392,8 +397,11 @@ Field convertFieldToTypeOrThrow(const Field & from_value, const IDataType & to_t throw Exception(ErrorCodes::TYPE_MISMATCH, "Cannot convert NULL to {}", to_type.getName()); Field converted = convertFieldToType(from_value, to_type, from_type_hint); if (!is_null && converted.isNull()) - throw Exception(ErrorCodes::ARGUMENT_OUT_OF_BOUND, "Cannot convert value{}: it cannot be represented as {}", - from_type_hint ? " from " + from_type_hint->getName() : "", to_type.getName()); + throw Exception(ErrorCodes::ARGUMENT_OUT_OF_BOUND, + "Cannot convert value '{}'{}: it cannot be represented as {}", + toString(from_value), + from_type_hint ? " from " + from_type_hint->getName() : "", + to_type.getName()); return converted; } diff --git a/src/Interpreters/evaluateConstantExpression.cpp b/src/Interpreters/evaluateConstantExpression.cpp index 42e96bae07b..e6b5893edd7 100644 --- a/src/Interpreters/evaluateConstantExpression.cpp +++ b/src/Interpreters/evaluateConstantExpression.cpp @@ -19,7 +19,6 @@ #include #include - namespace DB { @@ -30,14 +29,14 @@ namespace ErrorCodes } -std::pair> evaluateConstantExpression(const ASTPtr & node, const Context & context) +std::pair> evaluateConstantExpression(const ASTPtr & node, ContextPtr context) { NamesAndTypesList source_columns = {{ "_dummy", std::make_shared() }}; auto ast = node->clone(); - ReplaceQueryParameterVisitor param_visitor(context.getQueryParameters()); + ReplaceQueryParameterVisitor param_visitor(context->getQueryParameters()); param_visitor.visit(ast); - if (context.getSettingsRef().normalize_function_names) + if (context->getSettingsRef().normalize_function_names) FunctionNameNormalizer().visit(ast.get()); String name = ast->getColumnName(); @@ -66,7 +65,7 @@ std::pair> evaluateConstantExpression(co } -ASTPtr evaluateConstantExpressionAsLiteral(const ASTPtr & node, const Context & context) +ASTPtr evaluateConstantExpressionAsLiteral(const ASTPtr & node, ContextPtr context) { /// If it's already a literal. 
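The `convertFieldToTypeOrThrow` hunk above improves the error message to quote the offending value. A toy model of the function's contract, assuming a Nullable target type so NULL passes through while an unrepresentable non-NULL value throws:

```cpp
#include <cassert>
#include <optional>
#include <stdexcept>
#include <string>

// Toy of the contract: NULL remains NULL after any conversion, while a
// non-NULL value that cannot be represented in the target type throws,
// quoting the value as the improved message above now does.
std::optional<int> convertOrThrow(const std::optional<std::string> & from)
{
    if (!from)
        return std::nullopt;  // NULL remains NULL
    try
    {
        return std::stoi(*from);
    }
    catch (...)
    {
        throw std::runtime_error("Cannot convert value '" + *from + "': it cannot be represented as Int32");
    }
}

int main()
{
    assert(!convertOrThrow(std::nullopt).has_value());
    assert(convertOrThrow(std::string("42")) == 42);
}
```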
if (node->as()) @@ -74,7 +73,7 @@ ASTPtr evaluateConstantExpressionAsLiteral(const ASTPtr & node, const Context & return std::make_shared(evaluateConstantExpression(node, context).first); } -ASTPtr evaluateConstantExpressionOrIdentifierAsLiteral(const ASTPtr & node, const Context & context) +ASTPtr evaluateConstantExpressionOrIdentifierAsLiteral(const ASTPtr & node, ContextPtr context) { if (const auto * id = node->as()) return std::make_shared(id->name()); @@ -82,18 +81,18 @@ ASTPtr evaluateConstantExpressionOrIdentifierAsLiteral(const ASTPtr & node, cons return evaluateConstantExpressionAsLiteral(node, context); } -ASTPtr evaluateConstantExpressionForDatabaseName(const ASTPtr & node, const Context & context) +ASTPtr evaluateConstantExpressionForDatabaseName(const ASTPtr & node, ContextPtr context) { ASTPtr res = evaluateConstantExpressionOrIdentifierAsLiteral(node, context); auto & literal = res->as(); if (literal.value.safeGet().empty()) { - String current_database = context.getCurrentDatabase(); + String current_database = context->getCurrentDatabase(); if (current_database.empty()) { /// Table was created on older version of ClickHouse and CREATE contains not folded expression. /// Current database is not set yet during server startup, so we cannot evaluate it correctly. - literal.value = context.getConfigRef().getString("default_database", "default"); + literal.value = context->getConfigRef().getString("default_database", "default"); } else literal.value = current_database; @@ -166,9 +165,9 @@ namespace return result; } - Disjunction analyzeFunction(const ASTFunction * fn, const ExpressionActionsPtr & expr) + Disjunction analyzeFunction(const ASTFunction * fn, const ExpressionActionsPtr & expr, size_t & limit) { - if (!fn) + if (!fn || !limit) { return {}; } @@ -182,6 +181,7 @@ namespace const auto * identifier = left->as() ? left->as() : right->as(); const auto * literal = left->as() ? 
left->as() : right->as(); + --limit; return analyzeEquals(identifier, literal, expr); } else if (fn->name == "in") @@ -192,6 +192,19 @@ namespace Disjunction result; + auto add_dnf = [&](const auto &dnf) + { + if (dnf.size() > limit) + { + result.clear(); + return false; + } + + result.insert(result.end(), dnf.begin(), dnf.end()); + limit -= dnf.size(); + return true; + }; + if (const auto * tuple_func = right->as(); tuple_func && tuple_func->name == "tuple") { const auto * tuple_elements = tuple_func->children.front()->as(); @@ -205,7 +218,10 @@ namespace return {}; } - result.insert(result.end(), dnf.begin(), dnf.end()); + if (!add_dnf(dnf)) + { + return {}; + } } } else if (const auto * tuple_literal = right->as(); @@ -221,7 +237,10 @@ namespace return {}; } - result.insert(result.end(), dnf.begin(), dnf.end()); + if (!add_dnf(dnf)) + { + return {}; + } } } else @@ -244,13 +263,14 @@ namespace for (const auto & arg : args->children) { - const auto dnf = analyzeFunction(arg->as(), expr); + const auto dnf = analyzeFunction(arg->as(), expr, limit); if (dnf.empty()) { return {}; } + /// limit accounted in analyzeFunction() result.insert(result.end(), dnf.begin(), dnf.end()); } @@ -269,13 +289,14 @@ namespace for (const auto & arg : args->children) { - const auto dnf = analyzeFunction(arg->as(), expr); + const auto dnf = analyzeFunction(arg->as(), expr, limit); if (dnf.empty()) { continue; } + /// limit accounted in analyzeFunction() result = andDNF(result, dnf); } @@ -286,17 +307,15 @@ namespace } } -std::optional evaluateExpressionOverConstantCondition(const ASTPtr & node, const ExpressionActionsPtr & target_expr) +std::optional evaluateExpressionOverConstantCondition(const ASTPtr & node, const ExpressionActionsPtr & target_expr, size_t & limit) { Blocks result; - // TODO: `node` may be always-false literal. - if (const auto * fn = node->as()) { - const auto dnf = analyzeFunction(fn, target_expr); + const auto dnf = analyzeFunction(fn, target_expr, limit); - if (dnf.empty()) + if (dnf.empty() || !limit) { return {}; } @@ -350,6 +369,14 @@ std::optional evaluateExpressionOverConstantCondition(const ASTPtr & nod } } } + else if (const auto * literal = node->as()) + { + // Check if it's always true or false. + if (literal->value.getType() == Field::Types::UInt64 && literal->value.get() == 0) + return {result}; + else + return {}; + } return {result}; } diff --git a/src/Interpreters/evaluateConstantExpression.h b/src/Interpreters/evaluateConstantExpression.h index 8e3fa08a626..b95982f5b99 100644 --- a/src/Interpreters/evaluateConstantExpression.h +++ b/src/Interpreters/evaluateConstantExpression.h @@ -2,6 +2,7 @@ #include #include +#include #include #include @@ -12,7 +13,6 @@ namespace DB { -class Context; class ExpressionActions; class IDataType; @@ -23,33 +23,34 @@ using ExpressionActionsPtr = std::shared_ptr; * Throws exception if it's not a constant expression. * Quite suboptimal. */ -std::pair> evaluateConstantExpression(const ASTPtr & node, const Context & context); +std::pair> evaluateConstantExpression(const ASTPtr & node, ContextPtr context); /** Evaluate constant expression and returns ASTLiteral with its value. */ -ASTPtr evaluateConstantExpressionAsLiteral(const ASTPtr & node, const Context & context); +ASTPtr evaluateConstantExpressionAsLiteral(const ASTPtr & node, ContextPtr context); /** Evaluate constant expression and returns ASTLiteral with its value. * Also, if AST is identifier, then return string literal with its name. 
* Useful in places where some name may be specified as identifier, or as result of a constant expression. */ -ASTPtr evaluateConstantExpressionOrIdentifierAsLiteral(const ASTPtr & node, const Context & context); +ASTPtr evaluateConstantExpressionOrIdentifierAsLiteral(const ASTPtr & node, ContextPtr context); /** The same as evaluateConstantExpressionOrIdentifierAsLiteral(...), * but if result is an empty string, replace it with current database name * or default database name. */ -ASTPtr evaluateConstantExpressionForDatabaseName(const ASTPtr & node, const Context & context); +ASTPtr evaluateConstantExpressionForDatabaseName(const ASTPtr & node, ContextPtr context); /** Try to fold condition to countable set of constant values. * @param node a condition that we try to fold. * @param target_expr expression evaluated over a set of constants. + * @param limit limit for number of values * @return optional blocks each with a single row and a single column for target expression, * or empty blocks if condition is always false, * or nothing if condition can't be folded to a set of constants. */ -std::optional evaluateExpressionOverConstantCondition(const ASTPtr & node, const ExpressionActionsPtr & target_expr); +std::optional evaluateExpressionOverConstantCondition(const ASTPtr & node, const ExpressionActionsPtr & target_expr, size_t & limit); } diff --git a/src/Interpreters/examples/CMakeLists.txt b/src/Interpreters/examples/CMakeLists.txt new file mode 100644 index 00000000000..6916c255b36 --- /dev/null +++ b/src/Interpreters/examples/CMakeLists.txt @@ -0,0 +1,37 @@ +add_executable (hash_map hash_map.cpp) +target_include_directories (hash_map SYSTEM BEFORE PRIVATE ${SPARSEHASH_INCLUDE_DIR}) +target_link_libraries (hash_map PRIVATE dbms) + +add_executable (hash_map_lookup hash_map_lookup.cpp) +target_include_directories (hash_map_lookup SYSTEM BEFORE PRIVATE ${SPARSEHASH_INCLUDE_DIR}) +target_link_libraries (hash_map_lookup PRIVATE dbms) + +add_executable (hash_map3 hash_map3.cpp) +target_link_libraries (hash_map3 PRIVATE dbms ${FARMHASH_LIBRARIES} metrohash) + +add_executable (hash_map_string hash_map_string.cpp) +target_include_directories (hash_map_string SYSTEM BEFORE PRIVATE ${SPARSEHASH_INCLUDE_DIR}) +target_link_libraries (hash_map_string PRIVATE dbms) + +add_executable (hash_map_string_2 hash_map_string_2.cpp) +target_link_libraries (hash_map_string_2 PRIVATE dbms) + +add_executable (hash_map_string_3 hash_map_string_3.cpp) +target_link_libraries (hash_map_string_3 PRIVATE dbms ${FARMHASH_LIBRARIES} metrohash) + +add_executable (hash_map_string_small hash_map_string_small.cpp) +target_include_directories (hash_map_string_small SYSTEM BEFORE PRIVATE ${SPARSEHASH_INCLUDE_DIR}) +target_link_libraries (hash_map_string_small PRIVATE dbms) + +add_executable (string_hash_map string_hash_map.cpp) +target_link_libraries (string_hash_map PRIVATE dbms) + +add_executable (string_hash_map_aggregation string_hash_map.cpp) +target_link_libraries (string_hash_map_aggregation PRIVATE dbms) + +add_executable (string_hash_set string_hash_set.cpp) +target_link_libraries (string_hash_set PRIVATE dbms) + +add_executable (two_level_hash_map two_level_hash_map.cpp) +target_include_directories (two_level_hash_map SYSTEM BEFORE PRIVATE ${SPARSEHASH_INCLUDE_DIR}) +target_link_libraries (two_level_hash_map PRIVATE dbms) diff --git a/src/Interpreters/tests/hash_map.cpp b/src/Interpreters/examples/hash_map.cpp similarity index 100% rename from src/Interpreters/tests/hash_map.cpp rename to 
src/Interpreters/examples/hash_map.cpp diff --git a/src/Interpreters/tests/hash_map3.cpp b/src/Interpreters/examples/hash_map3.cpp similarity index 100% rename from src/Interpreters/tests/hash_map3.cpp rename to src/Interpreters/examples/hash_map3.cpp diff --git a/src/Interpreters/tests/hash_map_lookup.cpp b/src/Interpreters/examples/hash_map_lookup.cpp similarity index 100% rename from src/Interpreters/tests/hash_map_lookup.cpp rename to src/Interpreters/examples/hash_map_lookup.cpp diff --git a/src/Interpreters/tests/hash_map_string.cpp b/src/Interpreters/examples/hash_map_string.cpp similarity index 100% rename from src/Interpreters/tests/hash_map_string.cpp rename to src/Interpreters/examples/hash_map_string.cpp diff --git a/src/Interpreters/tests/hash_map_string_2.cpp b/src/Interpreters/examples/hash_map_string_2.cpp similarity index 100% rename from src/Interpreters/tests/hash_map_string_2.cpp rename to src/Interpreters/examples/hash_map_string_2.cpp diff --git a/src/Interpreters/tests/hash_map_string_3.cpp b/src/Interpreters/examples/hash_map_string_3.cpp similarity index 100% rename from src/Interpreters/tests/hash_map_string_3.cpp rename to src/Interpreters/examples/hash_map_string_3.cpp diff --git a/src/Interpreters/tests/hash_map_string_small.cpp b/src/Interpreters/examples/hash_map_string_small.cpp similarity index 100% rename from src/Interpreters/tests/hash_map_string_small.cpp rename to src/Interpreters/examples/hash_map_string_small.cpp diff --git a/src/Interpreters/tests/string_hash_map.cpp b/src/Interpreters/examples/string_hash_map.cpp similarity index 100% rename from src/Interpreters/tests/string_hash_map.cpp rename to src/Interpreters/examples/string_hash_map.cpp diff --git a/src/Interpreters/tests/string_hash_set.cpp b/src/Interpreters/examples/string_hash_set.cpp similarity index 100% rename from src/Interpreters/tests/string_hash_set.cpp rename to src/Interpreters/examples/string_hash_set.cpp diff --git a/src/Interpreters/tests/two_level_hash_map.cpp b/src/Interpreters/examples/two_level_hash_map.cpp similarity index 100% rename from src/Interpreters/tests/two_level_hash_map.cpp rename to src/Interpreters/examples/two_level_hash_map.cpp diff --git a/src/Interpreters/executeDDLQueryOnCluster.cpp b/src/Interpreters/executeDDLQueryOnCluster.cpp index 1937fbaf905..99ece6bb14c 100644 --- a/src/Interpreters/executeDDLQueryOnCluster.cpp +++ b/src/Interpreters/executeDDLQueryOnCluster.cpp @@ -13,6 +13,9 @@ #include #include #include +#include +#include +#include #include namespace fs = std::filesystem; @@ -45,17 +48,17 @@ bool isSupportedAlterType(int type) } -BlockIO executeDDLQueryOnCluster(const ASTPtr & query_ptr_, const Context & context) +BlockIO executeDDLQueryOnCluster(const ASTPtr & query_ptr_, ContextPtr context) { return executeDDLQueryOnCluster(query_ptr_, context, {}); } -BlockIO executeDDLQueryOnCluster(const ASTPtr & query_ptr, const Context & context, const AccessRightsElements & query_requires_access, bool query_requires_grant_option) +BlockIO executeDDLQueryOnCluster(const ASTPtr & query_ptr, ContextPtr context, const AccessRightsElements & query_requires_access) { - return executeDDLQueryOnCluster(query_ptr, context, AccessRightsElements{query_requires_access}, query_requires_grant_option); + return executeDDLQueryOnCluster(query_ptr, context, AccessRightsElements{query_requires_access}); } -BlockIO executeDDLQueryOnCluster(const ASTPtr & query_ptr_, const Context & context, AccessRightsElements && query_requires_access, bool query_requires_grant_option) 
+BlockIO executeDDLQueryOnCluster(const ASTPtr & query_ptr_, ContextPtr context, AccessRightsElements && query_requires_access) { /// Remove FORMAT and INTO OUTFILE if exists ASTPtr query_ptr = query_ptr_->clone(); @@ -68,7 +71,7 @@ BlockIO executeDDLQueryOnCluster(const ASTPtr & query_ptr_, const Context & cont throw Exception("Distributed execution is not supported for such DDL queries", ErrorCodes::NOT_IMPLEMENTED); } - if (!context.getSettingsRef().allow_distributed_ddl) + if (!context->getSettingsRef().allow_distributed_ddl) throw Exception("Distributed DDL queries are prohibited for the user", ErrorCodes::QUERY_IS_PROHIBITED); if (const auto * query_alter = query_ptr->as()) @@ -80,9 +83,9 @@ BlockIO executeDDLQueryOnCluster(const ASTPtr & query_ptr_, const Context & cont } } - query->cluster = context.getMacros()->expand(query->cluster); - ClusterPtr cluster = context.getCluster(query->cluster); - DDLWorker & ddl_worker = context.getDDLWorker(); + query->cluster = context->getMacros()->expand(query->cluster); + ClusterPtr cluster = context->getCluster(query->cluster); + DDLWorker & ddl_worker = context->getDDLWorker(); /// Enumerate hosts which will be used to send query. Cluster::AddressesWithFailover shards = cluster->getShardsAddresses(); @@ -106,7 +109,7 @@ BlockIO executeDDLQueryOnCluster(const ASTPtr & query_ptr_, const Context & cont != query_requires_access.end()); bool use_local_default_database = false; - const String & current_database = context.getCurrentDatabase(); + const String & current_database = context->getCurrentDatabase(); if (need_replace_current_database) { @@ -154,47 +157,75 @@ BlockIO executeDDLQueryOnCluster(const ASTPtr & query_ptr_, const Context & cont visitor.visitDDL(query_ptr); /// Check access rights, assume that all servers have the same users config - if (query_requires_grant_option) - context.getAccess()->checkGrantOption(query_requires_access); - else - context.checkAccess(query_requires_access); + context->checkAccess(query_requires_access); DDLLogEntry entry; entry.hosts = std::move(hosts); entry.query = queryToString(query_ptr); entry.initiator = ddl_worker.getCommonHostID(); + entry.setSettingsIfRequired(context); String node_path = ddl_worker.enqueueQuery(entry); + return getDistributedDDLStatus(node_path, entry, context); +} + +BlockIO getDistributedDDLStatus(const String & node_path, const DDLLogEntry & entry, ContextPtr context, const std::optional & hosts_to_wait) +{ BlockIO io; - if (context.getSettingsRef().distributed_ddl_task_timeout == 0) + if (context->getSettingsRef().distributed_ddl_task_timeout == 0) return io; - auto stream = std::make_shared(node_path, entry, context); - io.in = std::move(stream); + auto stream = std::make_shared(node_path, entry, context, hosts_to_wait); + if (context->getSettingsRef().distributed_ddl_output_mode == DistributedDDLOutputMode::NONE) + { + /// Wait for query to finish, but ignore output + NullBlockOutputStream output{Block{}}; + copyData(*stream, output); + } + else + { + io.in = std::move(stream); + } return io; } - -DDLQueryStatusInputStream::DDLQueryStatusInputStream(const String & zk_node_path, const DDLLogEntry & entry, const Context & context_, +DDLQueryStatusInputStream::DDLQueryStatusInputStream(const String & zk_node_path, const DDLLogEntry & entry, ContextPtr context_, const std::optional & hosts_to_wait) : node_path(zk_node_path) , context(context_) , watch(CLOCK_MONOTONIC_COARSE) , log(&Poco::Logger::get("DDLQueryStatusInputStream")) { + if 
(context->getSettingsRef().distributed_ddl_output_mode == DistributedDDLOutputMode::THROW || + context->getSettingsRef().distributed_ddl_output_mode == DistributedDDLOutputMode::NONE) + throw_on_timeout = true; + else if (context->getSettingsRef().distributed_ddl_output_mode == DistributedDDLOutputMode::NULL_STATUS_ON_TIMEOUT || + context->getSettingsRef().distributed_ddl_output_mode == DistributedDDLOutputMode::NEVER_THROW) + throw_on_timeout = false; + else + throw Exception(ErrorCodes::LOGICAL_ERROR, "Unknown output mode"); + + auto maybe_make_nullable = [&](const DataTypePtr & type) -> DataTypePtr + { + if (throw_on_timeout) + return type; + return std::make_shared(type); + }; + sample = Block{ - {std::make_shared(), "host"}, - {std::make_shared(), "port"}, - {std::make_shared(), "status"}, - {std::make_shared(), "error"}, - {std::make_shared(), "num_hosts_remaining"}, - {std::make_shared(), "num_hosts_active"}, + {std::make_shared(), "host"}, + {std::make_shared(), "port"}, + {maybe_make_nullable(std::make_shared()), "status"}, + {maybe_make_nullable(std::make_shared()), "error"}, + {std::make_shared(), "num_hosts_remaining"}, + {std::make_shared(), "num_hosts_active"}, }; if (hosts_to_wait) { waiting_hosts = NameSet(hosts_to_wait->begin(), hosts_to_wait->end()); by_hostname = false; + sample.erase("port"); } else { @@ -204,28 +235,46 @@ DDLQueryStatusInputStream::DDLQueryStatusInputStream(const String & zk_node_path addTotalRowsApprox(waiting_hosts.size()); - timeout_seconds = context.getSettingsRef().distributed_ddl_task_timeout; + timeout_seconds = context->getSettingsRef().distributed_ddl_task_timeout; +} + +std::pair DDLQueryStatusInputStream::parseHostAndPort(const String & host_id) const +{ + String host = host_id; + UInt16 port = 0; + if (by_hostname) + { + auto host_and_port = Cluster::Address::fromString(host_id); + host = host_and_port.first; + port = host_and_port.second; + } + return {host, port}; } Block DDLQueryStatusInputStream::readImpl() { Block res; - if (num_hosts_finished >= waiting_hosts.size()) + bool all_hosts_finished = num_hosts_finished >= waiting_hosts.size(); + /// Seems like num_hosts_finished cannot be strictly greater than waiting_hosts.size() + assert(num_hosts_finished <= waiting_hosts.size()); + if (all_hosts_finished || timeout_exceeded) { - if (first_exception) + bool throw_if_error_on_host = context->getSettingsRef().distributed_ddl_output_mode != DistributedDDLOutputMode::NEVER_THROW; + if (first_exception && throw_if_error_on_host) throw Exception(*first_exception); return res; } - auto zookeeper = context.getZooKeeper(); + auto zookeeper = context->getZooKeeper(); size_t try_number = 0; while (res.rows() == 0) { if (isCancelled()) { - if (first_exception) + bool throw_if_error_on_host = context->getSettingsRef().distributed_ddl_output_mode != DistributedDDLOutputMode::NEVER_THROW; + if (first_exception && throw_if_error_on_host) throw Exception(*first_exception); return res; @@ -236,11 +285,36 @@ Block DDLQueryStatusInputStream::readImpl() size_t num_unfinished_hosts = waiting_hosts.size() - num_hosts_finished; size_t num_active_hosts = current_active_hosts.size(); + constexpr const char * msg_format = "Watching task {} is executing longer than distributed_ddl_task_timeout (={}) seconds. 
" + "There are {} unfinished hosts ({} of them are currently active), " + "they are going to execute the query in background"; + if (throw_on_timeout) + throw Exception(ErrorCodes::TIMEOUT_EXCEEDED, msg_format, + node_path, timeout_seconds, num_unfinished_hosts, num_active_hosts); - throw Exception(ErrorCodes::TIMEOUT_EXCEEDED, - "Watching task {} is executing longer than distributed_ddl_task_timeout (={}) seconds. " - "There are {} unfinished hosts ({} of them are currently active), they are going to execute the query in background", - node_path, timeout_seconds, num_unfinished_hosts, num_active_hosts); + timeout_exceeded = true; + LOG_INFO(log, msg_format, node_path, timeout_seconds, num_unfinished_hosts, num_active_hosts); + + NameSet unfinished_hosts = waiting_hosts; + for (const auto & host_id : finished_hosts) + unfinished_hosts.erase(host_id); + + /// Query is not finished on the rest hosts, so fill the corresponding rows with NULLs. + MutableColumns columns = sample.cloneEmptyColumns(); + for (const String & host_id : unfinished_hosts) + { + auto [host, port] = parseHostAndPort(host_id); + size_t num = 0; + columns[num++]->insert(host); + if (by_hostname) + columns[num++]->insert(port); + columns[num++]->insert(Field{}); + columns[num++]->insert(Field{}); + columns[num++]->insert(num_unfinished_hosts); + columns[num++]->insert(num_active_hosts); + } + res = sample.cloneWithColumns(std::move(columns)); + return res; } if (num_hosts_finished != 0 || try_number != 0) @@ -272,26 +346,21 @@ Block DDLQueryStatusInputStream::readImpl() status.tryDeserializeText(status_data); } - String host = host_id; - UInt16 port = 0; - if (by_hostname) - { - auto host_and_port = Cluster::Address::fromString(host_id); - host = host_and_port.first; - port = host_and_port.second; - } + auto [host, port] = parseHostAndPort(host_id); if (status.code != 0 && first_exception == nullptr) first_exception = std::make_unique(status.code, "There was an error on [{}:{}]: {}", host, port, status.message); ++num_hosts_finished; - columns[0]->insert(host); - columns[1]->insert(port); - columns[2]->insert(status.code); - columns[3]->insert(status.message); - columns[4]->insert(waiting_hosts.size() - num_hosts_finished); - columns[5]->insert(current_active_hosts.size()); + size_t num = 0; + columns[num++]->insert(host); + if (by_hostname) + columns[num++]->insert(port); + columns[num++]->insert(status.code); + columns[num++]->insert(status.message); + columns[num++]->insert(waiting_hosts.size() - num_hosts_finished); + columns[num++]->insert(current_active_hosts.size()); } res = sample.cloneWithColumns(std::move(columns)); } diff --git a/src/Interpreters/executeDDLQueryOnCluster.h b/src/Interpreters/executeDDLQueryOnCluster.h index 2b272d3b0da..bbd39a6e8ec 100644 --- a/src/Interpreters/executeDDLQueryOnCluster.h +++ b/src/Interpreters/executeDDLQueryOnCluster.h @@ -1,5 +1,7 @@ #pragma once + #include +#include #include namespace zkutil @@ -10,7 +12,6 @@ namespace zkutil namespace DB { -class Context; class AccessRightsElements; struct DDLLogEntry; @@ -20,15 +21,16 @@ bool isSupportedAlterType(int type); /// Pushes distributed DDL query to the queue. /// Returns DDLQueryStatusInputStream, which reads results of query execution on each host in the cluster. 
-BlockIO executeDDLQueryOnCluster(const ASTPtr & query_ptr, const Context & context); -BlockIO executeDDLQueryOnCluster(const ASTPtr & query_ptr, const Context & context, const AccessRightsElements & query_requires_access, bool query_requires_grant_option = false); -BlockIO executeDDLQueryOnCluster(const ASTPtr & query_ptr, const Context & context, AccessRightsElements && query_requires_access, bool query_requires_grant_option = false); +BlockIO executeDDLQueryOnCluster(const ASTPtr & query_ptr, ContextPtr context); +BlockIO executeDDLQueryOnCluster(const ASTPtr & query_ptr, ContextPtr context, const AccessRightsElements & query_requires_access); +BlockIO executeDDLQueryOnCluster(const ASTPtr & query_ptr, ContextPtr context, AccessRightsElements && query_requires_access); +BlockIO getDistributedDDLStatus(const String & node_path, const DDLLogEntry & entry, ContextPtr context, const std::optional & hosts_to_wait = {}); class DDLQueryStatusInputStream final : public IBlockInputStream { public: - DDLQueryStatusInputStream(const String & zk_node_path, const DDLLogEntry & entry, const Context & context_, const std::optional & hosts_to_wait = {}); + DDLQueryStatusInputStream(const String & zk_node_path, const DDLLogEntry & entry, ContextPtr context_, const std::optional & hosts_to_wait = {}); String getName() const override { return "DDLQueryStatusInputStream"; } @@ -44,8 +46,10 @@ private: Strings getNewAndUpdate(const Strings & current_list_of_finished_hosts); + std::pair parseHostAndPort(const String & host_id) const; + String node_path; - const Context & context; + ContextPtr context; Stopwatch watch; Poco::Logger * log; @@ -62,6 +66,8 @@ private: Int64 timeout_seconds = 120; bool by_hostname = true; + bool throw_on_timeout = true; + bool timeout_exceeded = false; }; } diff --git a/src/Interpreters/executeQuery.cpp b/src/Interpreters/executeQuery.cpp index 1a0aa031d6f..5df245f9f26 100644 --- a/src/Interpreters/executeQuery.cpp +++ b/src/Interpreters/executeQuery.cpp @@ -121,7 +121,7 @@ static String joinLines(const String & query) } -static String prepareQueryForLogging(const String & query, Context & context) +static String prepareQueryForLogging(const String & query, ContextPtr context) { String res = query; @@ -136,14 +136,14 @@ static String prepareQueryForLogging(const String & query, Context & context) } } - res = res.substr(0, context.getSettingsRef().log_queries_cut_to_length); + res = res.substr(0, context->getSettingsRef().log_queries_cut_to_length); return res; } /// Log query into text log (not into system table). 
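`prepareQueryForLogging` and `logQuery` above cut both the query text and the `log_comment` to configured lengths before logging. A toy of the truncation step (password wiping and the settings plumbing are omitted):

```cpp
#include <cassert>
#include <string>

// Toy of the logging hunks: logged text is truncated to a configured length,
// standing in for log_queries_cut_to_length and max_query_size.
std::string truncateForLog(std::string text, size_t max_length)
{
    if (text.size() > max_length)
        text.resize(max_length);
    return text;
}

int main()
{
    assert(truncateForLog("SELECT * FROM t", 6) == "SELECT");
    assert(truncateForLog("short", 100) == "short");
}
```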
-static void logQuery(const String & query, const Context & context, bool internal) +static void logQuery(const String & query, ContextPtr context, bool internal) { if (internal) { @@ -151,14 +151,14 @@ static void logQuery(const String & query, const Context & context, bool interna } else { - const auto & client_info = context.getClientInfo(); + const auto & client_info = context->getClientInfo(); const auto & current_query_id = client_info.current_query_id; const auto & initial_query_id = client_info.initial_query_id; const auto & current_user = client_info.current_user; - String comment = context.getSettingsRef().log_comment; - size_t max_query_size = context.getSettingsRef().max_query_size; + String comment = context->getSettingsRef().log_comment; + size_t max_query_size = context->getSettingsRef().max_query_size; if (comment.size() > max_query_size) comment.resize(max_query_size); @@ -170,7 +170,7 @@ static void logQuery(const String & query, const Context & context, bool interna client_info.current_address.toString(), (current_user != "default" ? ", user: " + current_user : ""), (!initial_query_id.empty() && current_query_id != initial_query_id ? ", initial_query_id: " + initial_query_id : std::string()), - (context.getSettingsRef().use_antlr_parser ? "experimental" : "production"), + (context->getSettingsRef().use_antlr_parser ? "experimental" : "production"), comment, joinLines(query)); @@ -204,19 +204,30 @@ static void setExceptionStackTrace(QueryLogElement & elem) /// Log exception (with query info) into text log (not into system table). -static void logException(Context & context, QueryLogElement & elem) +static void logException(ContextPtr context, QueryLogElement & elem) { String comment; if (!elem.log_comment.empty()) comment = fmt::format(" (comment: {})", elem.log_comment); if (elem.stack_trace.empty()) - LOG_ERROR(&Poco::Logger::get("executeQuery"), "{} (from {}){} (in query: {})", - elem.exception, context.getClientInfo().current_address.toString(), comment, joinLines(elem.query)); + LOG_ERROR( + &Poco::Logger::get("executeQuery"), + "{} (from {}){} (in query: {})", + elem.exception, + context->getClientInfo().current_address.toString(), + comment, + joinLines(elem.query)); else - LOG_ERROR(&Poco::Logger::get("executeQuery"), "{} (from {}){} (in query: {})" + LOG_ERROR( + &Poco::Logger::get("executeQuery"), + "{} (from {}){} (in query: {})" ", Stack trace (when copying this message, always include the lines below):\n\n{}", - elem.exception, context.getClientInfo().current_address.toString(), comment, joinLines(elem.query), elem.stack_trace); + elem.exception, + context->getClientInfo().current_address.toString(), + comment, + joinLines(elem.query), + elem.stack_trace); } inline UInt64 time_in_microseconds(std::chrono::time_point<std::chrono::system_clock> timepoint) @@ -230,13 +241,13 @@ inline UInt64 time_in_seconds(std::chrono::time_point<std::chrono::system_clock> return std::chrono::duration_cast<std::chrono::seconds>(timepoint.time_since_epoch()).count(); } -static void onExceptionBeforeStart(const String & query_for_logging, Context & context, UInt64 current_time_us, ASTPtr ast) +static void onExceptionBeforeStart(const String & query_for_logging, ContextPtr context, UInt64 current_time_us, ASTPtr ast) { /// Exception before the query execution.
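The time_in_microseconds/time_in_seconds helpers touched above are plain std::chrono conversions; a self-contained equivalent for reference:

    #include <chrono>
    #include <cstdint>

    // Both helpers convert the same captured time point, so the second-resolution
    // and microsecond-resolution start times in QueryLogElement stay consistent.
    inline uint64_t time_in_microseconds(std::chrono::time_point<std::chrono::system_clock> timepoint)
    {
        return std::chrono::duration_cast<std::chrono::microseconds>(timepoint.time_since_epoch()).count();
    }

    inline uint64_t time_in_seconds(std::chrono::time_point<std::chrono::system_clock> timepoint)
    {
        return std::chrono::duration_cast<std::chrono::seconds>(timepoint.time_since_epoch()).count();
    }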
- if (auto quota = context.getQuota()) + if (auto quota = context->getQuota()) quota->used(Quota::ERRORS, 1, /* check_exceeded = */ false); - const Settings & settings = context.getSettingsRef(); + const Settings & settings = context->getSettingsRef(); /// Log the start of query execution into the table if necessary. QueryLogElement elem; @@ -251,7 +262,7 @@ static void onExceptionBeforeStart(const String & query_for_logging, Context & c elem.query_start_time = current_time_us / 1000000; elem.query_start_time_microseconds = current_time_us; - elem.current_database = context.getCurrentDatabase(); + elem.current_database = context->getCurrentDatabase(); elem.query = query_for_logging; elem.normalized_query_hash = normalizedQueryHash(query_for_logging); @@ -260,7 +271,7 @@ static void onExceptionBeforeStart(const String & query_for_logging, Context & c elem.exception_code = getCurrentExceptionCode(); elem.exception = getCurrentExceptionMessage(false); - elem.client_info = context.getClientInfo(); + elem.client_info = context->getClientInfo(); elem.log_comment = settings.log_comment; if (elem.log_comment.size() > settings.max_query_size) @@ -274,17 +285,17 @@ static void onExceptionBeforeStart(const String & query_for_logging, Context & c CurrentThread::finalizePerformanceCounters(); if (settings.log_queries && elem.type >= settings.log_queries_min_type && !settings.log_queries_min_query_duration_ms.totalMilliseconds()) - if (auto query_log = context.getQueryLog()) + if (auto query_log = context->getQueryLog()) query_log->add(elem); - if (auto opentelemetry_span_log = context.getOpenTelemetrySpanLog(); - context.query_trace_context.trace_id + if (auto opentelemetry_span_log = context->getOpenTelemetrySpanLog(); + context->query_trace_context.trace_id && opentelemetry_span_log) { OpenTelemetrySpanLogElement span; - span.trace_id = context.query_trace_context.trace_id; - span.span_id = context.query_trace_context.span_id; - span.parent_span_id = context.getClientInfo().client_trace_context.span_id; + span.trace_id = context->query_trace_context.trace_id; + span.span_id = context->query_trace_context.span_id; + span.parent_span_id = context->getClientInfo().client_trace_context.span_id; span.operation_name = "query"; span.start_time_us = current_time_us; span.finish_time_us = current_time_us; @@ -299,11 +310,11 @@ static void onExceptionBeforeStart(const String & query_for_logging, Context & c span.attribute_names.push_back("clickhouse.query_id"); span.attribute_values.push_back(elem.client_info.current_query_id); - if (!context.query_trace_context.tracestate.empty()) + if (!context->query_trace_context.tracestate.empty()) { span.attribute_names.push_back("clickhouse.tracestate"); span.attribute_values.push_back( - context.query_trace_context.tracestate); + context->query_trace_context.tracestate); } opentelemetry_span_log->add(span); @@ -324,19 +335,19 @@ static void onExceptionBeforeStart(const String & query_for_logging, Context & c } } -static void setQuerySpecificSettings(ASTPtr & ast, Context & context) +static void setQuerySpecificSettings(ASTPtr & ast, ContextPtr context) { if (auto * ast_insert_into = dynamic_cast<ASTInsertQuery *>(ast.get())) { if (ast_insert_into->watch) - context.setSetting("output_format_enable_streaming", 1); + context->setSetting("output_format_enable_streaming", 1); } } static std::tuple<ASTPtr, BlockIO> executeQueryImpl( const char * begin, const char * end, - Context & context, + ContextPtr context, bool internal, QueryProcessingStage::Enum stage, bool has_query_tail, @@ -349,7 +360,7 @@ static
std::tuple<ASTPtr, BlockIO> executeQueryImpl( assert(internal || CurrentThread::get().getQueryContext()->getCurrentQueryId() == CurrentThread::getQueryId()); #endif - const Settings & settings = context.getSettingsRef(); + const Settings & settings = context->getSettingsRef(); ASTPtr ast; const char * query_end; @@ -365,7 +376,7 @@ static std::tuple<ASTPtr, BlockIO> executeQueryImpl( #if !defined(ARCADIA_BUILD) if (settings.use_antlr_parser) { - ast = parseQuery(begin, end, max_query_size, settings.max_parser_depth, context.getCurrentDatabase()); + ast = parseQuery(begin, end, max_query_size, settings.max_parser_depth, context->getCurrentDatabase()); } else { @@ -456,9 +467,9 @@ static std::tuple<ASTPtr, BlockIO> executeQueryImpl( try { /// Replace ASTQueryParameter with ASTLiteral for prepared statements. - if (context.hasQueryParameters()) + if (context->hasQueryParameters()) { - ReplaceQueryParameterVisitor visitor(context.getQueryParameters()); + ReplaceQueryParameterVisitor visitor(context->getQueryParameters()); visitor.visit(ast); query = serializeAST(*ast); } @@ -476,7 +487,7 @@ static std::tuple<ASTPtr, BlockIO> executeQueryImpl( } /// Normalize SelectWithUnionQuery - NormalizeSelectWithUnionQueryVisitor::Data data{context.getSettingsRef().union_default_mode}; + NormalizeSelectWithUnionQueryVisitor::Data data{context->getSettingsRef().union_default_mode}; NormalizeSelectWithUnionQueryVisitor{data}.visit(ast); /// Check the limits. @@ -487,12 +498,12 @@ static std::tuple<ASTPtr, BlockIO> executeQueryImpl( if (!internal && !ast->as<ASTShowProcesslistQuery>()) { /// processlist also has query masked now, to avoid secrets leaks though SHOW PROCESSLIST by other users. - process_list_entry = context.getProcessList().insert(query_for_logging, ast.get(), context); - context.setProcessListElement(&process_list_entry->get()); + process_list_entry = context->getProcessList().insert(query_for_logging, ast.get(), context); + context->setProcessListElement(&process_list_entry->get()); } /// Load external tables if they were provided - context.initializeExternalTablesIfSet(); + context->initializeExternalTablesIfSet(); auto * insert_query = ast->as<ASTInsertQuery>(); if (insert_query && insert_query->select) @@ -504,7 +515,7 @@ static std::tuple<ASTPtr, BlockIO> executeQueryImpl( insert_query->tryFindInputFunction(input_function); if (input_function) { - StoragePtr storage = context.executeTableFunction(input_function); + StoragePtr storage = context->executeTableFunction(input_function); auto & input_storage = dynamic_cast<StorageInput &>(*storage); auto input_metadata_snapshot = input_storage.getInMemoryMetadataPtr(); BlockInputStreamPtr input_stream = std::make_shared<InputStreamFromASTInsertQuery>( @@ -515,14 +526,14 @@ static std::tuple<ASTPtr, BlockIO> executeQueryImpl( } else /// reset Input callbacks if query is not INSERT SELECT - context.resetInputCallbacks(); + context->resetInputCallbacks(); auto interpreter = InterpreterFactory::get(ast, context, SelectQueryOptions(stage).setInternal(internal)); std::shared_ptr<const EnabledQuota> quota; if (!interpreter->ignoreQuota()) { - quota = context.getQuota(); + quota = context->getQuota(); if (quota) { if (ast->as<ASTSelectQuery>() || ast->as<ASTSelectWithUnionQuery>()) @@ -558,7 +569,7 @@ static std::tuple<ASTPtr, BlockIO> executeQueryImpl( /// Save insertion table (not table function). TODO: support remote() table function. auto table_id = insert_interpreter->getDatabaseTable(); if (!table_id.empty()) - context.setInsertionTable(std::move(table_id)); + context->setInsertionTable(std::move(table_id)); } if (process_list_entry) @@ -578,8 +589,8 @@ static std::tuple<ASTPtr, BlockIO> executeQueryImpl( { /// Limits on the result, the quota on the result, and also callback for progress. /// Limits apply only to the final result.
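The bulk of the churn above is one mechanical pattern: `Context &` parameters become a `ContextPtr` passed by value, and member access changes from `.` to `->`. A minimal sketch of the idea (the alias matches how this patch uses it; the surrounding types are simplified):

    #include <memory>

    class Context;                               // real definition lives in Interpreters/Context.h
    using ContextPtr = std::shared_ptr<Context>; // shared ownership, as used in this patch

    // Before: void run(const Context & context);  -- the caller must keep context alive.
    // After:  the callee (and any callback it creates) co-owns the context,
    //         so it cannot dangle while the query is still being processed.
    void run(ContextPtr context);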
- pipeline.setProgressCallback(context.getProgressCallback()); - pipeline.setProcessListElement(context.getProcessListElement()); + pipeline.setProgressCallback(context->getProgressCallback()); + pipeline.setProcessListElement(context->getProcessListElement()); if (stage == QueryProcessingStage::Complete && !pipeline.isCompleted()) { pipeline.resize(1); @@ -597,8 +608,8 @@ static std::tuple<ASTPtr, BlockIO> executeQueryImpl( /// Limits apply only to the final result. if (res.in) { - res.in->setProgressCallback(context.getProgressCallback()); - res.in->setProcessListElement(context.getProcessListElement()); + res.in->setProgressCallback(context->getProgressCallback()); + res.in->setProcessListElement(context->getProcessListElement()); if (stage == QueryProcessingStage::Complete) { if (!interpreter->ignoreQuota()) @@ -612,7 +623,7 @@ static std::tuple<ASTPtr, BlockIO> executeQueryImpl( { if (auto * stream = dynamic_cast<CountingBlockOutputStream *>(res.out.get())) { - stream->setProcessListElement(context.getProcessListElement()); + stream->setProcessListElement(context->getProcessListElement()); } } } @@ -628,11 +639,11 @@ static std::tuple<ASTPtr, BlockIO> executeQueryImpl( elem.query_start_time = time_in_seconds(current_time); elem.query_start_time_microseconds = time_in_microseconds(current_time); - elem.current_database = context.getCurrentDatabase(); + elem.current_database = context->getCurrentDatabase(); elem.query = query_for_logging; elem.normalized_query_hash = normalizedQueryHash(query_for_logging); - elem.client_info = context.getClientInfo(); + elem.client_info = context->getClientInfo(); bool log_queries = settings.log_queries && !internal; @@ -641,7 +652,7 @@ static std::tuple<ASTPtr, BlockIO> executeQueryImpl( { if (use_processors) { - const auto & info = context.getQueryAccessInfo(); + const auto & info = context->getQueryAccessInfo(); elem.query_databases = info.databases; elem.query_tables = info.tables; elem.query_columns = info.columns; @@ -650,7 +661,7 @@ static std::tuple<ASTPtr, BlockIO> executeQueryImpl( interpreter->extendQueryLogElem(elem, ast, context, query_database, query_table); if (settings.log_query_settings) - elem.query_settings = std::make_shared<Settings>(context.getSettingsRef()); + elem.query_settings = std::make_shared<Settings>(context->getSettingsRef()); elem.log_comment = settings.log_comment; if (elem.log_comment.size() > settings.max_query_size) @@ -658,7 +669,7 @@ static std::tuple<ASTPtr, BlockIO> executeQueryImpl( if (elem.type >= settings.log_queries_min_type && !settings.log_queries_min_query_duration_ms.totalMilliseconds()) { - if (auto query_log = context.getQueryLog()) + if (auto query_log = context->getQueryLog()) query_log->add(elem); } } @@ -692,7 +703,7 @@ static std::tuple<ASTPtr, BlockIO> executeQueryImpl( }; /// Also make possible for caller to log successful query finish and exception during execution.
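Writes to query_log are gated by both an entry-type threshold and a minimum-duration setting; extracting the two conditions used in this file makes the asymmetry visible (names are the actual settings from the hunks above):

    // At query start: log only when no minimum duration is configured,
    // since the duration is not known yet.
    bool log_start = settings.log_queries
        && elem.type >= settings.log_queries_min_type
        && !settings.log_queries_min_query_duration_ms.totalMilliseconds();

    // At query finish: log when the query ran at least that long.
    bool log_finish = settings.log_queries
        && elem.type >= settings.log_queries_min_type
        && Int64(elem.query_duration_ms) >= settings.log_queries_min_query_duration_ms.totalMilliseconds();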
- auto finish_callback = [elem, &context, ast, + auto finish_callback = [elem, context, ast, log_queries, log_queries_min_type = settings.log_queries_min_type, log_queries_min_query_duration_ms = settings.log_queries_min_query_duration_ms.totalMilliseconds(), @@ -700,7 +711,7 @@ static std::tuple executeQueryImpl( ] (IBlockInputStream * stream_in, IBlockOutputStream * stream_out, QueryPipeline * query_pipeline) mutable { - QueryStatus * process_list_elem = context.getProcessListElement(); + QueryStatus * process_list_elem = context->getProcessListElement(); if (!process_list_elem) return; @@ -708,7 +719,7 @@ static std::tuple executeQueryImpl( /// Update performance counters before logging to query_log CurrentThread::finalizePerformanceCounters(); - QueryStatusInfo info = process_list_elem->getInfo(true, context.getSettingsRef().log_profile_events); + QueryStatusInfo info = process_list_elem->getInfo(true, context->getSettingsRef().log_profile_events); double elapsed_seconds = info.elapsed_seconds; @@ -721,7 +732,7 @@ static std::tuple executeQueryImpl( elem.event_time_microseconds = time_in_microseconds(finish_time); status_info_to_query_log(elem, info, ast); - auto progress_callback = context.getProgressCallback(); + auto progress_callback = context->getProgressCallback(); if (progress_callback) progress_callback(Progress(WriteProgress(info.written_rows, info.written_bytes))); @@ -763,7 +774,7 @@ static std::tuple executeQueryImpl( elem.thread_ids = std::move(info.thread_ids); elem.profile_counters = std::move(info.profile_counters); - const auto & factories_info = context.getQueryFactoriesInfo(); + const auto & factories_info = context->getQueryFactoriesInfo(); elem.used_aggregate_functions = factories_info.aggregate_functions; elem.used_aggregate_function_combinators = factories_info.aggregate_function_combinators; elem.used_database_engines = factories_info.database_engines; @@ -776,18 +787,18 @@ static std::tuple executeQueryImpl( if (log_queries && elem.type >= log_queries_min_type && Int64(elem.query_duration_ms) >= log_queries_min_query_duration_ms) { - if (auto query_log = context.getQueryLog()) + if (auto query_log = context->getQueryLog()) query_log->add(elem); } - if (auto opentelemetry_span_log = context.getOpenTelemetrySpanLog(); - context.query_trace_context.trace_id + if (auto opentelemetry_span_log = context->getOpenTelemetrySpanLog(); + context->query_trace_context.trace_id && opentelemetry_span_log) { OpenTelemetrySpanLogElement span; - span.trace_id = context.query_trace_context.trace_id; - span.span_id = context.query_trace_context.span_id; - span.parent_span_id = context.getClientInfo().client_trace_context.span_id; + span.trace_id = context->query_trace_context.trace_id; + span.span_id = context->query_trace_context.span_id; + span.parent_span_id = context->getClientInfo().client_trace_context.span_id; span.operation_name = "query"; span.start_time_us = elem.query_start_time_microseconds; span.finish_time_us = time_in_microseconds(finish_time); @@ -801,18 +812,18 @@ static std::tuple executeQueryImpl( span.attribute_names.push_back("clickhouse.query_id"); span.attribute_values.push_back(elem.client_info.current_query_id); - if (!context.query_trace_context.tracestate.empty()) + if (!context->query_trace_context.tracestate.empty()) { span.attribute_names.push_back("clickhouse.tracestate"); span.attribute_values.push_back( - context.query_trace_context.tracestate); + context->query_trace_context.tracestate); } opentelemetry_span_log->add(span); } }; - auto 
exception_callback = [elem, &context, ast, + auto exception_callback = [elem, context, ast, log_queries, log_queries_min_type = settings.log_queries_min_type, log_queries_min_query_duration_ms = settings.log_queries_min_query_duration_ms.totalMilliseconds(), @@ -833,8 +844,8 @@ static std::tuple<ASTPtr, BlockIO> executeQueryImpl( elem.exception_code = getCurrentExceptionCode(); elem.exception = getCurrentExceptionMessage(false); - QueryStatus * process_list_elem = context.getProcessListElement(); - const Settings & current_settings = context.getSettingsRef(); + QueryStatus * process_list_elem = context->getProcessListElement(); + const Settings & current_settings = context->getSettingsRef(); /// Update performance counters before logging to query_log CurrentThread::finalizePerformanceCounters(); @@ -852,7 +863,7 @@ static std::tuple<ASTPtr, BlockIO> executeQueryImpl( /// In case of exception we log internal queries also if (log_queries && elem.type >= log_queries_min_type && Int64(elem.query_duration_ms) >= log_queries_min_query_duration_ms) { - if (auto query_log = context.getQueryLog()) + if (auto query_log = context->getQueryLog()) query_log->add(elem); } @@ -898,7 +909,7 @@ static std::tuple<ASTPtr, BlockIO> executeQueryImpl( BlockIO executeQuery( const String & query, - Context & context, + ContextPtr context, bool internal, QueryProcessingStage::Enum stage, bool may_have_embedded_data) @@ -912,7 +923,7 @@ BlockIO executeQuery( { String format_name = ast_query_with_output->format ? getIdentifierName(ast_query_with_output->format) - : context.getDefaultFormat(); + : context->getDefaultFormat(); if (format_name == "Null") streams.null_format = true; @@ -923,7 +934,7 @@ BlockIO executeQuery( BlockIO executeQuery( const String & query, - Context & context, + ContextPtr context, bool internal, QueryProcessingStage::Enum stage, bool may_have_embedded_data, @@ -942,7 +953,7 @@ void executeQuery( ReadBuffer & istr, WriteBuffer & ostr, bool allow_into_outfile, - Context & context, + ContextPtr context, std::function<void(const String &, const String &, const String &, const String &)> set_result_details) { PODArray<char> parse_buf; @@ -953,7 +964,7 @@ void executeQuery( if (!istr.hasPendingData()) istr.next(); - size_t max_query_size = context.getSettingsRef().max_query_size; + size_t max_query_size = context->getSettingsRef().max_query_size; bool may_have_tail; if (istr.buffer().end() - istr.position() > static_cast<ssize_t>(max_query_size)) @@ -1012,12 +1023,12 @@ void executeQuery( String format_name = ast_query_with_output && (ast_query_with_output->format != nullptr) ? getIdentifierName(ast_query_with_output->format) - : context.getDefaultFormat(); + : context->getDefaultFormat(); - auto out = context.getOutputStream(format_name, *out_buf, streams.in->getHeader()); + auto out = context->getOutputStreamParallelIfPossible(format_name, *out_buf, streams.in->getHeader()); /// Save previous progress callback if any. TODO Do it more conveniently. - auto previous_progress_callback = context.getProgressCallback(); + auto previous_progress_callback = context->getProgressCallback(); /// NOTE Progress callback takes shared ownership of 'out'.
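Note the capture lists changing from `[elem, &context, ast, ...]` to `[elem, context, ast, ...]`: with Context held by shared pointer, the finish and exception callbacks capture it by value and co-own it instead of holding a reference that could dangle once the original scope ends. A compressed, self-contained illustration of why this matters:

    #include <functional>
    #include <memory>

    struct Ctx { int answer = 42; };
    using CtxPtr = std::shared_ptr<Ctx>;

    std::function<int()> make_callback(CtxPtr ctx)
    {
        // The by-value shared_ptr capture keeps *ctx alive until the callback
        // itself is destroyed; capturing `Ctx &` here would dangle after the
        // caller releases its context.
        return [ctx] { return ctx->answer; };
    }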
streams.in->setProgressCallback([out, previous_progress_callback] (const Progress & progress) @@ -1028,7 +1039,8 @@ void executeQuery( }); if (set_result_details) - set_result_details(context.getClientInfo().current_query_id, out->getContentType(), format_name, DateLUT::instance().getTimeZone()); + set_result_details( + context->getClientInfo().current_query_id, out->getContentType(), format_name, DateLUT::instance().getTimeZone()); copyData(*streams.in, *out, [](){ return false; }, [&out](const Block &) { out->flush(); }); } @@ -1050,7 +1062,7 @@ void executeQuery( String format_name = ast_query_with_output && (ast_query_with_output->format != nullptr) ? getIdentifierName(ast_query_with_output->format) - : context.getDefaultFormat(); + : context->getDefaultFormat(); if (!pipeline.isCompleted()) { @@ -1059,11 +1071,11 @@ void executeQuery( return std::make_shared<MaterializingTransform>(header); }); - auto out = context.getOutputFormat(format_name, *out_buf, pipeline.getHeader()); + auto out = context->getOutputFormatParallelIfPossible(format_name, *out_buf, pipeline.getHeader()); out->setAutoFlush(); /// Save previous progress callback if any. TODO Do it more conveniently. - auto previous_progress_callback = context.getProgressCallback(); + auto previous_progress_callback = context->getProgressCallback(); /// NOTE Progress callback takes shared ownership of 'out'. pipeline.setProgressCallback([out, previous_progress_callback] (const Progress & progress) @@ -1074,13 +1086,14 @@ void executeQuery( }); if (set_result_details) - set_result_details(context.getClientInfo().current_query_id, out->getContentType(), format_name, DateLUT::instance().getTimeZone()); + set_result_details( + context->getClientInfo().current_query_id, out->getContentType(), format_name, DateLUT::instance().getTimeZone()); pipeline.setOutputFormat(std::move(out)); } else { - pipeline.setProgressCallback(context.getProgressCallback()); + pipeline.setProgressCallback(context->getProgressCallback()); } { diff --git a/src/Interpreters/executeQuery.h b/src/Interpreters/executeQuery.h index 2850bb3baf4..bdb1f877ce3 100644 --- a/src/Interpreters/executeQuery.h +++ b/src/Interpreters/executeQuery.h @@ -2,7 +2,6 @@ #include #include - #include namespace DB @@ -10,7 +9,6 @@ namespace DB class ReadBuffer; class WriteBuffer; -class Context; /// Parse and execute a query. @@ -18,7 +16,7 @@ void executeQuery( ReadBuffer & istr, /// Where to read query from (and data for INSERT, if present). WriteBuffer & ostr, /// Where to write query output to. bool allow_into_outfile, /// If true and the query contains INTO OUTFILE section, redirect output to that file. - Context & context, /// DB, tables, data types, storage engines, functions, aggregate functions... + ContextPtr context, /// DB, tables, data types, storage engines, functions, aggregate functions... std::function<void(const String &, const String &, const String &, const String &)> set_result_details /// If a non-empty callback is passed, it will be called with the query id, the content-type, the format, and the timezone. ); @@ -39,7 +37,7 @@ void executeQuery( /// must be done separately. BlockIO executeQuery( const String & query, /// Query text without INSERT data. The latter must be written to BlockIO::out. - Context & context, /// DB, tables, data types, storage engines, functions, aggregate functions... + ContextPtr context, /// DB, tables, data types, storage engines, functions, aggregate functions... bool internal = false, /// If true, this query is caused by another query and thus needn't be registered in the ProcessList.
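getOutputStream/getOutputFormat are replaced with their ...ParallelIfPossible counterparts, which may build a multi-threaded formatter while behaving identically at the call site. A purely illustrative shape of such a dispatch; every name below is hypothetical and not the real Context implementation:

    // Hypothetical sketch: degrade gracefully to the serial formatter when
    // parallel formatting is disabled or the format does not support it.
    OutputFormatPtr getOutputFormatParallelIfPossible(const std::string & name, WriteBuffer & buf, const Block & header)
    {
        if (parallel_formatting_enabled && formatSupportsParallelism(name)) // hypothetical checks
            return makeParallelFormatter(name, buf, header);                // hypothetical factory
        return makeSerialFormatter(name, buf, header);                      // hypothetical factory
    }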
QueryProcessingStage::Enum stage = QueryProcessingStage::Complete, /// To which stage the query must be executed. bool may_have_embedded_data = false /// If insert query may have embedded data @@ -48,7 +46,7 @@ BlockIO executeQuery( /// Old interface with allow_processors flag. For compatibility. BlockIO executeQuery( const String & query, - Context & context, + ContextPtr context, bool internal, QueryProcessingStage::Enum stage, bool may_have_embedded_data, diff --git a/src/Interpreters/getHeaderForProcessingStage.cpp b/src/Interpreters/getHeaderForProcessingStage.cpp index b56b90cdf3f..9c7c86a0b88 100644 --- a/src/Interpreters/getHeaderForProcessingStage.cpp +++ b/src/Interpreters/getHeaderForProcessingStage.cpp @@ -12,21 +12,27 @@ namespace ErrorCodes extern const int LOGICAL_ERROR; } -/// Rewrite original query removing joined tables from it -bool removeJoin(ASTSelectQuery & select) +bool hasJoin(const ASTSelectQuery & select) { const auto & tables = select.tables(); if (!tables || tables->children.size() < 2) return false; const auto & joined_table = tables->children[1]->as<ASTTablesInSelectQueryElement &>(); - if (!joined_table.table_join) - return false; + return joined_table.table_join != nullptr; +} - /// The most simple temporary solution: leave only the first table in query. - /// TODO: we also need to remove joined columns and related functions (taking in account aliases if any). - tables->children.resize(1); - return true; +/// Rewrite original query removing joined tables from it +bool removeJoin(ASTSelectQuery & select) +{ + if (hasJoin(select)) + { + /// The most simple temporary solution: leave only the first table in query. + /// TODO: we also need to remove joined columns and related functions (taking in account aliases if any). + select.tables()->children.resize(1); + return true; + } + return false; } Block getHeaderForProcessingStage( @@ -34,7 +40,7 @@ Block getHeaderForProcessingStage( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, const SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage) { switch (processed_stage) diff --git a/src/Interpreters/getHeaderForProcessingStage.h b/src/Interpreters/getHeaderForProcessingStage.h index ec238edf774..75a89bc5d39 100644 --- a/src/Interpreters/getHeaderForProcessingStage.h +++ b/src/Interpreters/getHeaderForProcessingStage.h @@ -1,7 +1,9 @@ #pragma once + #include #include #include +#include namespace DB @@ -11,9 +13,9 @@ class IStorage; struct StorageInMemoryMetadata; using StorageMetadataPtr = std::shared_ptr<const StorageInMemoryMetadata>; struct SelectQueryInfo; -class Context; class ASTSelectQuery; +bool hasJoin(const ASTSelectQuery & select); bool removeJoin(ASTSelectQuery & select); Block getHeaderForProcessingStage( @@ -21,7 +23,7 @@ Block getHeaderForProcessingStage( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, const SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage); } diff --git a/src/Interpreters/getTableExpressions.cpp b/src/Interpreters/getTableExpressions.cpp index a4e971c302c..22eb307071c 100644 --- a/src/Interpreters/getTableExpressions.cpp +++ b/src/Interpreters/getTableExpressions.cpp @@ -75,7 +75,7 @@ ASTPtr extractTableExpression(const ASTSelectQuery & select, size_t table_number static NamesAndTypesList getColumnsFromTableExpression( const ASTTableExpression & table_expression, - const Context & context, + ContextPtr context, NamesAndTypesList & materialized,
NamesAndTypesList & aliases, NamesAndTypesList & virtuals) @@ -89,7 +89,7 @@ static NamesAndTypesList getColumnsFromTableExpression( else if (table_expression.table_function) { const auto table_function = table_expression.table_function; - auto * query_context = const_cast<Context *>(&context.getQueryContext()); + auto query_context = context->getQueryContext(); const auto & function_storage = query_context->executeTableFunction(table_function); auto function_metadata_snapshot = function_storage->getInMemoryMetadataPtr(); const auto & columns = function_metadata_snapshot->getColumns(); @@ -100,7 +100,7 @@ static NamesAndTypesList getColumnsFromTableExpression( } else if (table_expression.database_and_table_name) { - auto table_id = context.resolveStorageID(table_expression.database_and_table_name); + auto table_id = context->resolveStorageID(table_expression.database_and_table_name); const auto & table = DatabaseCatalog::instance().getTable(table_id, context); auto table_metadata_snapshot = table->getInMemoryMetadataPtr(); const auto & columns = table_metadata_snapshot->getColumns(); @@ -113,7 +113,7 @@ static NamesAndTypesList getColumnsFromTableExpression( return names_and_type_list; } -NamesAndTypesList getColumnsFromTableExpression(const ASTTableExpression & table_expression, const Context & context) +NamesAndTypesList getColumnsFromTableExpression(const ASTTableExpression & table_expression, ContextPtr context) { NamesAndTypesList materialized; NamesAndTypesList aliases; @@ -121,15 +121,15 @@ NamesAndTypesList getColumnsFromTableExpression(const ASTTableExpression & table return getColumnsFromTableExpression(table_expression, context, materialized, aliases, virtuals); } -TablesWithColumns getDatabaseAndTablesWithColumns(const std::vector<const ASTTableExpression *> & table_expressions, const Context & context) +TablesWithColumns getDatabaseAndTablesWithColumns(const std::vector<const ASTTableExpression *> & table_expressions, ContextPtr context) { TablesWithColumns tables_with_columns; if (!table_expressions.empty()) { - String current_database = context.getCurrentDatabase(); - bool include_alias_cols = context.getSettingsRef().asterisk_include_alias_columns; - bool include_materialized_cols = context.getSettingsRef().asterisk_include_materialized_columns; + String current_database = context->getCurrentDatabase(); + bool include_alias_cols = context->getSettingsRef().asterisk_include_alias_columns; + bool include_materialized_cols = context->getSettingsRef().asterisk_include_materialized_columns; for (const ASTTableExpression * table_expression : table_expressions) { diff --git a/src/Interpreters/getTableExpressions.h b/src/Interpreters/getTableExpressions.h index 9254fb9d6a0..961176437b5 100644 --- a/src/Interpreters/getTableExpressions.h +++ b/src/Interpreters/getTableExpressions.h @@ -1,6 +1,7 @@ #pragma once #include +#include #include namespace DB @@ -8,7 +9,6 @@ namespace DB struct ASTTableExpression; class ASTSelectQuery; -class Context; NameSet removeDuplicateColumns(NamesAndTypesList & columns); @@ -16,7 +16,7 @@ std::vector<const ASTTableExpression *> getTableExpressions(const ASTSelectQuery const ASTTableExpression * getTableExpression(const ASTSelectQuery & select, size_t table_number); ASTPtr extractTableExpression(const ASTSelectQuery & select, size_t table_number); -NamesAndTypesList getColumnsFromTableExpression(const ASTTableExpression & table_expression, const Context & context); -TablesWithColumns getDatabaseAndTablesWithColumns(const std::vector<const ASTTableExpression *> & table_expressions, const Context & context); +NamesAndTypesList getColumnsFromTableExpression(const
ASTTableExpression & table_expression, ContextPtr context); +TablesWithColumns getDatabaseAndTablesWithColumns(const std::vector<const ASTTableExpression *> & table_expressions, ContextPtr context); } diff --git a/src/Interpreters/inplaceBlockConversions.cpp b/src/Interpreters/inplaceBlockConversions.cpp index d06cde99425..ff16c7b3ff6 100644 --- a/src/Interpreters/inplaceBlockConversions.cpp +++ b/src/Interpreters/inplaceBlockConversions.cpp @@ -24,13 +24,22 @@ namespace { /// Add all required expressions for missing columns calculation -void addDefaultRequiredExpressionsRecursively(const Block & block, const String & required_column, const ColumnsDescription & columns, ASTPtr default_expr_list_accum, NameSet & added_columns) +void addDefaultRequiredExpressionsRecursively( + const Block & block, const String & required_column_name, DataTypePtr required_column_type, + const ColumnsDescription & columns, ASTPtr default_expr_list_accum, NameSet & added_columns, bool null_as_default) { checkStackSize(); - if (block.has(required_column) || added_columns.count(required_column)) + + bool is_column_in_query = block.has(required_column_name); + bool convert_null_to_default = false; + + if (is_column_in_query) + convert_null_to_default = null_as_default && block.findByName(required_column_name)->type->isNullable() && !required_column_type->isNullable(); + + if ((is_column_in_query && !convert_null_to_default) || added_columns.count(required_column_name)) return; - auto column_default = columns.getDefault(required_column); + auto column_default = columns.getDefault(required_column_name); if (column_default) { @@ -43,22 +52,25 @@ void addDefaultRequiredExpressionsRecursively(const Block & block, const String RequiredSourceColumnsVisitor(columns_context).visit(column_default_expr); NameSet required_columns_names = columns_context.requiredColumns(); - auto cast_func = makeASTFunction("CAST", column_default_expr, std::make_shared<ASTLiteral>(columns.get(required_column).type->getName())); - default_expr_list_accum->children.emplace_back(setAlias(cast_func, required_column)); - added_columns.emplace(required_column); + auto expr = makeASTFunction("CAST", column_default_expr, std::make_shared<ASTLiteral>(columns.get(required_column_name).type->getName())); + if (is_column_in_query && convert_null_to_default) + expr = makeASTFunction("ifNull", std::make_shared<ASTIdentifier>(required_column_name), std::move(expr)); + default_expr_list_accum->children.emplace_back(setAlias(expr, required_column_name)); - for (const auto & required_column_name : required_columns_names) - addDefaultRequiredExpressionsRecursively(block, required_column_name, columns, default_expr_list_accum, added_columns); + added_columns.emplace(required_column_name); + + for (const auto & next_required_column_name : required_columns_names) + addDefaultRequiredExpressionsRecursively(block, next_required_column_name, required_column_type, columns, default_expr_list_accum, added_columns, null_as_default); } } -ASTPtr defaultRequiredExpressions(const Block & block, const NamesAndTypesList & required_columns, const ColumnsDescription & columns) +ASTPtr defaultRequiredExpressions(const Block & block, const NamesAndTypesList & required_columns, const ColumnsDescription & columns, bool null_as_default) { ASTPtr default_expr_list = std::make_shared<ASTExpressionList>(); NameSet added_columns; for (const auto & column : required_columns) - addDefaultRequiredExpressionsRecursively(block, column.name, columns, default_expr_list, added_columns); + addDefaultRequiredExpressionsRecursively(block, column.name, column.type, columns,
default_expr_list, added_columns, null_as_default); if (default_expr_list->children.empty()) return nullptr; @@ -92,7 +104,7 @@ ActionsDAGPtr createExpressions( ASTPtr expr_list, bool save_unneeded_columns, const NamesAndTypesList & required_columns, - const Context & context) + ContextPtr context) { if (!expr_list) return nullptr; @@ -114,7 +126,7 @@ ActionsDAGPtr createExpressions( } -void performRequiredConversions(Block & block, const NamesAndTypesList & required_columns, const Context & context) +void performRequiredConversions(Block & block, const NamesAndTypesList & required_columns, ContextPtr context) { ASTPtr conversion_expr_list = convertRequiredExpressions(block, required_columns); if (conversion_expr_list->children.empty()) @@ -122,7 +134,7 @@ void performRequiredConversions(Block & block, const NamesAndTypesList & require if (auto dag = createExpressions(block, conversion_expr_list, true, required_columns, context)) { - auto expression = std::make_shared<ExpressionActions>(std::move(dag)); + auto expression = std::make_shared<ExpressionActions>(std::move(dag), ExpressionActionsSettings::fromContext(context)); expression->execute(block); } } @@ -131,12 +143,14 @@ ActionsDAGPtr evaluateMissingDefaults( const Block & header, const NamesAndTypesList & required_columns, const ColumnsDescription & columns, - const Context & context, bool save_unneeded_columns) + ContextPtr context, + bool save_unneeded_columns, + bool null_as_default) { if (!columns.hasDefaults()) return nullptr; - ASTPtr expr_list = defaultRequiredExpressions(header, required_columns, columns); + ASTPtr expr_list = defaultRequiredExpressions(header, required_columns, columns, null_as_default); return createExpressions(header, expr_list, save_unneeded_columns, required_columns, context); } diff --git a/src/Interpreters/inplaceBlockConversions.h b/src/Interpreters/inplaceBlockConversions.h index 63540e2994d..cc8261693f9 100644 --- a/src/Interpreters/inplaceBlockConversions.h +++ b/src/Interpreters/inplaceBlockConversions.h @@ -1,31 +1,34 @@ #pragma once -#include -#include +#include + #include +#include +#include namespace DB { class Block; -class Context; class NamesAndTypesList; class ColumnsDescription; class ActionsDAG; using ActionsDAGPtr = std::shared_ptr<ActionsDAG>; -/// Create actions which adds missing defaults to block according to required_columns using columns description. +/// Create actions which adds missing defaults to block according to required_columns using columns description +/// or substitute NULL into DEFAULT value in case of INSERT SELECT query (null_as_default) if according setting is 1. /// Return nullptr if no actions required.
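The null_as_default path added above builds, per column, a CAST of the default expression that is additionally wrapped in ifNull when the source column is Nullable but the target is not; each such column is effectively computed as ifNull(col, CAST(default_expr, 'Type')). The AST construction, condensed from the hunk above:

    // Condensed from addDefaultRequiredExpressionsRecursively above.
    auto expr = makeASTFunction("CAST", column_default_expr,
                                std::make_shared<ASTLiteral>(columns.get(required_column_name).type->getName()));
    if (is_column_in_query && convert_null_to_default)
        expr = makeASTFunction("ifNull", std::make_shared<ASTIdentifier>(required_column_name), std::move(expr));
    default_expr_list_accum->children.emplace_back(setAlias(expr, required_column_name));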
ActionsDAGPtr evaluateMissingDefaults( const Block & header, const NamesAndTypesList & required_columns, const ColumnsDescription & columns, - const Context & context, bool save_unneeded_columns = true); + ContextPtr context, + bool save_unneeded_columns = true, + bool null_as_default = false); /// Tries to convert columns in block to required_columns -void performRequiredConversions(Block & block, - const NamesAndTypesList & required_columns, - const Context & context); +void performRequiredConversions(Block & block, const NamesAndTypesList & required_columns, ContextPtr context); + } diff --git a/src/Interpreters/interpretSubquery.cpp b/src/Interpreters/interpretSubquery.cpp index cf343a4fda2..2fb2f390b67 100644 --- a/src/Interpreters/interpretSubquery.cpp +++ b/src/Interpreters/interpretSubquery.cpp @@ -22,14 +22,14 @@ namespace ErrorCodes } std::shared_ptr<InterpreterSelectWithUnionQuery> interpretSubquery( - const ASTPtr & table_expression, const Context & context, size_t subquery_depth, const Names & required_source_columns) + const ASTPtr & table_expression, ContextPtr context, size_t subquery_depth, const Names & required_source_columns) { auto subquery_options = SelectQueryOptions(QueryProcessingStage::Complete, subquery_depth); return interpretSubquery(table_expression, context, required_source_columns, subquery_options); } std::shared_ptr<InterpreterSelectWithUnionQuery> interpretSubquery( - const ASTPtr & table_expression, const Context & context, const Names & required_source_columns, const SelectQueryOptions & options) + const ASTPtr & table_expression, ContextPtr context, const Names & required_source_columns, const SelectQueryOptions & options) { if (auto * expr = table_expression->as<ASTTableExpression>()) { @@ -59,13 +59,13 @@ std::shared_ptr<InterpreterSelectWithUnionQuery> interpretSubquery( * max_rows_in_join, max_bytes_in_join, join_overflow_mode, * which are checked separately (in the Set, Join objects). */ - Context subquery_context = context; - Settings subquery_settings = context.getSettings(); + auto subquery_context = Context::createCopy(context); + Settings subquery_settings = context->getSettings(); subquery_settings.max_result_rows = 0; subquery_settings.max_result_bytes = 0; /// The calculation of `extremes` does not make sense and is not necessary (if you do it, then the `extremes` of the subquery can be taken instead of the whole query). subquery_settings.extremes = false; - subquery_context.setSettings(subquery_settings); + subquery_context->setSettings(subquery_settings); auto subquery_options = options.subquery(); @@ -88,14 +88,14 @@ std::shared_ptr<InterpreterSelectWithUnionQuery> interpretSubquery( /// get columns list for target table if (function) { - auto * query_context = const_cast<Context *>(&context.getQueryContext()); + auto query_context = context->getQueryContext(); const auto & storage = query_context->executeTableFunction(table_expression); columns = storage->getInMemoryMetadataPtr()->getColumns().getOrdinary(); select_query->addTableFunction(*const_cast<ASTPtr *>(&table_expression)); // XXX: const_cast should be avoided!
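interpretSubquery now clones the context with Context::createCopy instead of copying the Context object itself; the settings relaxation around it is unchanged. The resulting pattern, isolated from the hunk above:

    // The subquery gets its own context copy with result limits disabled
    // (0 means "no limit"), so limits and extremes apply only to the outer query.
    auto subquery_context = Context::createCopy(context);
    Settings subquery_settings = context->getSettings();
    subquery_settings.max_result_rows = 0;
    subquery_settings.max_result_bytes = 0;
    subquery_settings.extremes = false;
    subquery_context->setSettings(subquery_settings);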
} else { - auto table_id = context.resolveStorageID(table_expression); + auto table_id = context->resolveStorageID(table_expression); const auto & storage = DatabaseCatalog::instance().getTable(table_id, context); columns = storage->getInMemoryMetadataPtr()->getColumns().getOrdinary(); select_query->replaceDatabaseAndTable(table_id); diff --git a/src/Interpreters/interpretSubquery.h b/src/Interpreters/interpretSubquery.h index 2aee6ffd81a..3836d1f7664 100644 --- a/src/Interpreters/interpretSubquery.h +++ b/src/Interpreters/interpretSubquery.h @@ -6,12 +6,10 @@ namespace DB { -class Context; +std::shared_ptr<InterpreterSelectWithUnionQuery> interpretSubquery( + const ASTPtr & table_expression, ContextPtr context, size_t subquery_depth, const Names & required_source_columns); std::shared_ptr<InterpreterSelectWithUnionQuery> interpretSubquery( - const ASTPtr & table_expression, const Context & context, size_t subquery_depth, const Names & required_source_columns); - -std::shared_ptr<InterpreterSelectWithUnionQuery> interpretSubquery( - const ASTPtr & table_expression, const Context & context, const Names & required_source_columns, const SelectQueryOptions & options); + const ASTPtr & table_expression, ContextPtr context, const Names & required_source_columns, const SelectQueryOptions & options); } diff --git a/src/Interpreters/join_common.cpp b/src/Interpreters/join_common.cpp index 4c124f99e57..cc51848b4a4 100644 --- a/src/Interpreters/join_common.cpp +++ b/src/Interpreters/join_common.cpp @@ -49,20 +49,53 @@ ColumnPtr changeLowCardinality(const ColumnPtr & column, const ColumnPtr & dst_s namespace JoinCommon { -void convertColumnToNullable(ColumnWithTypeAndName & column, bool low_card_nullability) + +bool canBecomeNullable(const DataTypePtr & type) { - if (low_card_nullability && column.type->lowCardinality()) + bool can_be_inside = type->canBeInsideNullable(); + if (const auto * low_cardinality_type = typeid_cast<const DataTypeLowCardinality *>(type.get())) + can_be_inside |= low_cardinality_type->getDictionaryType()->canBeInsideNullable(); + return can_be_inside; +} + +/// Add nullability to type.
+/// Note: LowCardinality(T) transformed to LowCardinality(Nullable(T)) +DataTypePtr convertTypeToNullable(const DataTypePtr & type) +{ + if (const auto * low_cardinality_type = typeid_cast<const DataTypeLowCardinality *>(type.get())) + { + const auto & dict_type = low_cardinality_type->getDictionaryType(); + if (dict_type->canBeInsideNullable()) + return std::make_shared<DataTypeLowCardinality>(makeNullable(dict_type)); + } + return makeNullable(type); +} + +void convertColumnToNullable(ColumnWithTypeAndName & column, bool remove_low_card) +{ + if (remove_low_card && column.type->lowCardinality()) { column.column = recursiveRemoveLowCardinality(column.column); column.type = recursiveRemoveLowCardinality(column.type); } - if (column.type->isNullable() || !column.type->canBeInsideNullable()) + if (column.type->isNullable() || !canBecomeNullable(column.type)) return; - column.type = makeNullable(column.type); + column.type = convertTypeToNullable(column.type); + if (column.column) - column.column = makeNullable(column.column); + { + if (column.column->lowCardinality()) + { + /// Convert nested to nullable, not LowCardinality itself + ColumnLowCardinality * col_as_lc = assert_cast<ColumnLowCardinality *>(column.column->assumeMutable().get()); + if (!col_as_lc->nestedIsNullable()) + col_as_lc->nestedToNullable(); + } + else + column.column = makeNullable(column.column); + } } void convertColumnsToNullable(Block & block, size_t starting_pos) @@ -96,7 +129,7 @@ void changeColumnRepresentation(const ColumnPtr & src_column, ColumnPtr & dst_co ColumnPtr dst_not_null = JoinCommon::emptyNotNullableClone(dst_column); bool lowcard_src = JoinCommon::emptyNotNullableClone(src_column)->lowCardinality(); bool lowcard_dst = dst_not_null->lowCardinality(); - bool change_lowcard = (!lowcard_src && lowcard_dst) || (lowcard_src && !lowcard_dst); + bool change_lowcard = lowcard_src != lowcard_dst; if (nullable_src && !nullable_dst) { @@ -268,6 +301,10 @@ void joinTotals(const Block & totals, const Block & columns_to_add, const TableJ { if (table_join.rightBecomeNullable(col.type)) JoinCommon::convertColumnToNullable(col); + + /// In case of arrayJoin it can be not one row + if (col.column->size() != 1) + col.column = col.column->cloneResized(1); } for (size_t i = 0; i < totals_without_keys.columns(); ++i) @@ -280,6 +317,11 @@ void joinTotals(const Block & totals, const Block & columns_to_add, const TableJ for (size_t i = 0; i < columns_to_add.columns(); ++i) { const auto & col = columns_to_add.getByPosition(i); + if (block.has(col.name)) + { + /// For StorageJoin we discarded table qualifiers, so some names may clash + continue; + } block.insert({ col.type->createColumnConstWithDefaultValue(1)->convertToFullColumnIfConst(), col.type, diff --git a/src/Interpreters/join_common.h b/src/Interpreters/join_common.h index cec41438448..9a000aa107a 100644 --- a/src/Interpreters/join_common.h +++ b/src/Interpreters/join_common.h @@ -15,8 +15,9 @@ using ColumnRawPtrs = std::vector<const IColumn *>; namespace JoinCommon { - -void convertColumnToNullable(ColumnWithTypeAndName & column, bool low_card_nullability = false); +bool canBecomeNullable(const DataTypePtr & type); +DataTypePtr convertTypeToNullable(const DataTypePtr & type); +void convertColumnToNullable(ColumnWithTypeAndName & column, bool remove_low_card = false); void convertColumnsToNullable(Block & block, size_t starting_pos = 0); void removeColumnNullability(ColumnWithTypeAndName & column); void changeColumnRepresentation(const ColumnPtr & src_column, ColumnPtr & dst_column); diff --git a/src/Interpreters/loadMetadata.cpp b/src/Interpreters/loadMetadata.cpp
index 71d3c7e6e5b..79076e57328 100644 --- a/src/Interpreters/loadMetadata.cpp +++ b/src/Interpreters/loadMetadata.cpp @@ -25,13 +25,14 @@ namespace DB static void executeCreateQuery( const String & query, - Context & context, + ContextPtr context, const String & database, const String & file_name, bool has_force_restore_data_flag) { ParserCreateQuery parser; - ASTPtr ast = parseQuery(parser, query.data(), query.data() + query.size(), "in file " + file_name, 0, context.getSettingsRef().max_parser_depth); + ASTPtr ast = parseQuery( + parser, query.data(), query.data() + query.size(), "in file " + file_name, 0, context->getSettingsRef().max_parser_depth); auto & ast_create_query = ast->as<ASTCreateQuery &>(); ast_create_query.database = database; @@ -45,7 +46,7 @@ static void executeCreateQuery( static void loadDatabase( - Context & context, + ContextPtr context, const String & database, const String & database_path, bool force_restore_data) @@ -73,8 +74,7 @@ static void loadDatabase( try { - executeCreateQuery(database_attach_query, context, database, - database_metadata_file, force_restore_data); + executeCreateQuery(database_attach_query, context, database, database_metadata_file, force_restore_data); } catch (Exception & e) { @@ -84,18 +84,18 @@ static void loadDatabase( } -void loadMetadata(Context & context, const String & default_database_name) +void loadMetadata(ContextPtr context, const String & default_database_name) { Poco::Logger * log = &Poco::Logger::get("loadMetadata"); - String path = context.getPath() + "metadata"; + String path = context->getPath() + "metadata"; /** There may exist 'force_restore_data' file, that means, * skip safety threshold on difference of data parts while initializing tables. * This file is deleted after successful loading of tables. * (flag is "one-shot") */ - Poco::File force_restore_data_flag_file(context.getFlagsPath() + "force_restore_data"); + Poco::File force_restore_data_flag_file(context->getFlagsPath() + "force_restore_data"); bool has_force_restore_data_flag = force_restore_data_flag_file.exists(); /// Loop over databases. @@ -168,9 +168,9 @@ void loadMetadata(Context & context, const String & default_database_name) } -void loadMetadataSystem(Context & context) +void loadMetadataSystem(ContextPtr context) { - String path = context.getPath() + "metadata/" + DatabaseCatalog::SYSTEM_DATABASE; + String path = context->getPath() + "metadata/" + DatabaseCatalog::SYSTEM_DATABASE; String metadata_file = path + ".sql"; if (Poco::File(path).exists() || Poco::File(metadata_file).exists()) { diff --git a/src/Interpreters/loadMetadata.h b/src/Interpreters/loadMetadata.h index b23887d5282..047def84bba 100644 --- a/src/Interpreters/loadMetadata.h +++ b/src/Interpreters/loadMetadata.h @@ -1,16 +1,16 @@ #pragma once +#include + namespace DB { -class Context; - /// Load tables from system database. Only real tables like query_log, part_log. /// You should first load system database, then attach system tables that you need into it, then load other databases. -void loadMetadataSystem(Context & context); +void loadMetadataSystem(ContextPtr context); /// Load tables from databases and add them to context. Database 'system' is ignored. Use separate function to load system tables.
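Returning to the join_common.cpp change further up: convertTypeToNullable pushes nullability inside LowCardinality rather than wrapping it, so JOIN result types stay low-cardinality. Expected behaviour, assuming the standard type factory (the assertions are illustrative, not taken from the patch):

    // Illustrative expectations for JoinCommon::convertTypeToNullable:
    //   String                  -> Nullable(String)
    //   LowCardinality(String)  -> LowCardinality(Nullable(String))
    // i.e. never Nullable(LowCardinality(String)).
    auto t = DataTypeFactory::instance().get("LowCardinality(String)");
    assert(JoinCommon::convertTypeToNullable(t)->getName() == "LowCardinality(Nullable(String))");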
-void loadMetadata(Context & context, const String & default_database_name = {}); +void loadMetadata(ContextPtr context, const String & default_database_name = {}); } diff --git a/src/Interpreters/replaceAliasColumnsInQuery.cpp b/src/Interpreters/replaceAliasColumnsInQuery.cpp index 4daa787c397..4c8367b269a 100644 --- a/src/Interpreters/replaceAliasColumnsInQuery.cpp +++ b/src/Interpreters/replaceAliasColumnsInQuery.cpp @@ -6,7 +6,7 @@ namespace DB { -void replaceAliasColumnsInQuery(ASTPtr & ast, const ColumnsDescription & columns, const NameSet & forbidden_columns, const Context & context) +void replaceAliasColumnsInQuery(ASTPtr & ast, const ColumnsDescription & columns, const NameSet & forbidden_columns, ContextPtr context) { ColumnAliasesVisitor::Data aliase_column_data(columns, forbidden_columns, context); ColumnAliasesVisitor aliase_column_visitor(aliase_column_data); diff --git a/src/Interpreters/replaceAliasColumnsInQuery.h b/src/Interpreters/replaceAliasColumnsInQuery.h index bf7143ba099..92d2686b45b 100644 --- a/src/Interpreters/replaceAliasColumnsInQuery.h +++ b/src/Interpreters/replaceAliasColumnsInQuery.h @@ -1,14 +1,15 @@ #pragma once -#include #include +#include #include +#include namespace DB { class ColumnsDescription; -class Context; -void replaceAliasColumnsInQuery(ASTPtr & ast, const ColumnsDescription & columns, const NameSet & forbidden_columns, const Context & context); + +void replaceAliasColumnsInQuery(ASTPtr & ast, const ColumnsDescription & columns, const NameSet & forbidden_columns, ContextPtr context); } diff --git a/src/Interpreters/tests/CMakeLists.txt b/src/Interpreters/tests/CMakeLists.txt index 1bc9d7fbacb..e69de29bb2d 100644 --- a/src/Interpreters/tests/CMakeLists.txt +++ b/src/Interpreters/tests/CMakeLists.txt @@ -1,46 +0,0 @@ -add_executable (hash_map hash_map.cpp) -target_include_directories (hash_map SYSTEM BEFORE PRIVATE ${SPARSEHASH_INCLUDE_DIR}) -target_link_libraries (hash_map PRIVATE dbms) - -add_executable (hash_map_lookup hash_map_lookup.cpp) -target_include_directories (hash_map_lookup SYSTEM BEFORE PRIVATE ${SPARSEHASH_INCLUDE_DIR}) -target_link_libraries (hash_map_lookup PRIVATE dbms) - -add_executable (hash_map3 hash_map3.cpp) -target_link_libraries (hash_map3 PRIVATE dbms ${FARMHASH_LIBRARIES} metrohash) - -add_executable (hash_map_string hash_map_string.cpp) -target_include_directories (hash_map_string SYSTEM BEFORE PRIVATE ${SPARSEHASH_INCLUDE_DIR}) -target_link_libraries (hash_map_string PRIVATE dbms) - -add_executable (hash_map_string_2 hash_map_string_2.cpp) -target_link_libraries (hash_map_string_2 PRIVATE dbms) - -add_executable (hash_map_string_3 hash_map_string_3.cpp) -target_link_libraries (hash_map_string_3 PRIVATE dbms ${FARMHASH_LIBRARIES} metrohash) - -add_executable (hash_map_string_small hash_map_string_small.cpp) -target_include_directories (hash_map_string_small SYSTEM BEFORE PRIVATE ${SPARSEHASH_INCLUDE_DIR}) -target_link_libraries (hash_map_string_small PRIVATE dbms) - -add_executable (string_hash_map string_hash_map.cpp) -target_link_libraries (string_hash_map PRIVATE dbms) - -add_executable (string_hash_map_aggregation string_hash_map.cpp) -target_link_libraries (string_hash_map_aggregation PRIVATE dbms) - -add_executable (string_hash_set string_hash_set.cpp) -target_link_libraries (string_hash_set PRIVATE dbms) - -add_executable (two_level_hash_map two_level_hash_map.cpp) -target_include_directories (two_level_hash_map SYSTEM BEFORE PRIVATE ${SPARSEHASH_INCLUDE_DIR}) -target_link_libraries (two_level_hash_map PRIVATE 
dbms) - -add_executable (in_join_subqueries_preprocessor in_join_subqueries_preprocessor.cpp) -target_link_libraries (in_join_subqueries_preprocessor PRIVATE clickhouse_aggregate_functions dbms clickhouse_parsers) -add_check(in_join_subqueries_preprocessor) - -if (OS_LINUX) - add_executable (internal_iotop internal_iotop.cpp) - target_link_libraries (internal_iotop PRIVATE dbms) -endif () diff --git a/src/Interpreters/tests/gtest_cycle_aliases.cpp b/src/Interpreters/tests/gtest_cycle_aliases.cpp index 56e23c6a497..c13e98cd69f 100644 --- a/src/Interpreters/tests/gtest_cycle_aliases.cpp +++ b/src/Interpreters/tests/gtest_cycle_aliases.cpp @@ -20,6 +20,6 @@ TEST(QueryNormalizer, SimpleCycleAlias) aliases["b"] = parseQuery(parser, "a as b", 0, 0)->children[0]; Settings settings; - QueryNormalizer::Data normalizer_data(aliases, settings); + QueryNormalizer::Data normalizer_data(aliases, {}, settings); EXPECT_THROW(QueryNormalizer(normalizer_data).visit(ast), Exception); } diff --git a/src/Interpreters/tests/in_join_subqueries_preprocessor.cpp b/src/Interpreters/tests/in_join_subqueries_preprocessor.cpp deleted file mode 100644 index 2b53277d02f..00000000000 --- a/src/Interpreters/tests/in_join_subqueries_preprocessor.cpp +++ /dev/null @@ -1,1264 +0,0 @@ -#include - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#include -#include -#include -#include - - -namespace DB -{ - namespace ErrorCodes - { - extern const int DISTRIBUTED_IN_JOIN_SUBQUERY_DENIED; - } -} - - -/// Simplified version of the StorageDistributed class. -class StorageDistributedFake final : public ext::shared_ptr_helper<StorageDistributedFake>, public DB::IStorage -{ - friend struct ext::shared_ptr_helper<StorageDistributedFake>; -public: - std::string getName() const override { return "DistributedFake"; } - bool isRemote() const override { return true; } - size_t getShardCount() const { return shard_count; } - std::string getRemoteDatabaseName() const { return remote_database; } - std::string getRemoteTableName() const { return remote_table; } - -protected: - StorageDistributedFake(const std::string & remote_database_, const std::string & remote_table_, size_t shard_count_) - : IStorage({"", ""}), remote_database(remote_database_), remote_table(remote_table_), shard_count(shard_count_) - { - } - -private: - const std::string remote_database; - const std::string remote_table; - size_t shard_count; -}; - - -class CheckShardsAndTablesMock : public DB::InJoinSubqueriesPreprocessor::CheckShardsAndTables -{ -public: - bool hasAtLeastTwoShards(const DB::IStorage & table) const override - { - if (!table.isRemote()) - return false; - - const StorageDistributedFake * distributed = dynamic_cast<const StorageDistributedFake *>(&table); - if (!distributed) - return false; - - return distributed->getShardCount() >= 2; - } - - std::pair<std::string, std::string> - getRemoteDatabaseAndTableName(const DB::IStorage & table) const override - { - const StorageDistributedFake & distributed = dynamic_cast<const StorageDistributedFake &>(table); - return { distributed.getRemoteDatabaseName(), distributed.getRemoteTableName() }; - } -}; - - -struct TestEntry -{ - unsigned int line_num; - std::string input; - std::string expected_output; - size_t shard_count; - DB::DistributedProductMode mode; - bool expected_success; -}; - -using TestEntries = std::vector<TestEntry>; -using TestResult = std::pair<bool, std::string>; - -TestResult check(const TestEntry & entry); -bool parse(DB::ASTPtr & ast, const std::string & query); -bool equals(const DB::ASTPtr & lhs, const DB::ASTPtr & rhs); -void reorder(DB::IAST * ast); - - -TestEntries entries = -{ - /// Trivial
query. - - { - __LINE__, - "SELECT 1", - "SELECT 1", - 0, - DB::DistributedProductMode::ALLOW, - true - }, - - { - __LINE__, - "SELECT 1", - "SELECT 1", - 1, - DB::DistributedProductMode::ALLOW, - true - }, - - { - __LINE__, - "SELECT 1", - "SELECT 1", - 2, - DB::DistributedProductMode::ALLOW, - true - }, - - { - __LINE__, - "SELECT 1", - "SELECT 1", - 0, - DB::DistributedProductMode::DENY, - true - }, - - { - __LINE__, - "SELECT 1", - "SELECT 1", - 1, - DB::DistributedProductMode::DENY, - true - }, - - { - __LINE__, - "SELECT 1", - "SELECT 1", - 2, - DB::DistributedProductMode::DENY, - true - }, - - { - __LINE__, - "SELECT 1", - "SELECT 1", - 0, - DB::DistributedProductMode::LOCAL, - true - }, - - { - __LINE__, - "SELECT 1", - "SELECT 1", - 1, - DB::DistributedProductMode::LOCAL, - true - }, - - { - __LINE__, - "SELECT 1", - "SELECT 1", - 2, - DB::DistributedProductMode::LOCAL, - true - }, - - { - __LINE__, - "SELECT 1", - "SELECT 1", - 0, - DB::DistributedProductMode::GLOBAL, - true - }, - - { - __LINE__, - "SELECT 1", - "SELECT 1", - 1, - DB::DistributedProductMode::GLOBAL, - true - }, - - { - __LINE__, - "SELECT 1", - "SELECT 1", - 2, - DB::DistributedProductMode::GLOBAL, - true - }, - - /// Section IN / depth 1 - - { - __LINE__, - "SELECT count() FROM test.visits_all WHERE UserID IN (SELECT UserID FROM test.visits_all)", - "SELECT count() FROM test.visits_all WHERE UserID IN (SELECT UserID FROM test.visits_all)", - 1, - DB::DistributedProductMode::ALLOW, - true - }, - - { - __LINE__, - "SELECT count() FROM test.visits_all WHERE UserID IN (SELECT UserID FROM test.visits_all)", - "SELECT count() FROM test.visits_all WHERE UserID IN (SELECT UserID FROM test.visits_all)", - 2, - DB::DistributedProductMode::ALLOW, - true - }, - - { - __LINE__, - "SELECT count() FROM test.visits_all WHERE UserID IN (SELECT UserID FROM test.visits_all)", - "SELECT count() FROM test.visits_all WHERE UserID IN (SELECT UserID FROM test.visits_all)", - 1, - DB::DistributedProductMode::DENY, - true - }, - - { - __LINE__, - "SELECT count() FROM test.visits_all WHERE UserID IN (SELECT UserID FROM test.visits_all)", - "SELECT count() FROM test.visits_all WHERE UserID IN (SELECT UserID FROM test.visits_all)", - 2, - DB::DistributedProductMode::DENY, - false - }, - - { - __LINE__, - "SELECT count() FROM test.visits_all WHERE UserID IN (SELECT UserID FROM test.visits_all)", - "SELECT count() FROM test.visits_all WHERE UserID GLOBAL IN (SELECT UserID FROM test.visits_all)", - 2, - DB::DistributedProductMode::GLOBAL, - true - }, - - { - __LINE__, - "SELECT count() FROM test.visits_all WHERE UserID IN (SELECT UserID FROM remote_db.remote_visits)", - "SELECT count() FROM test.visits_all WHERE UserID IN (SELECT UserID FROM remote_db.remote_visits)", - 2, - DB::DistributedProductMode::GLOBAL, - true - }, - - { - __LINE__, - "SELECT count() FROM test.visits_all WHERE UserID IN (SELECT UserID FROM test.visits_all)", - "SELECT count() FROM test.visits_all WHERE UserID IN (SELECT UserID FROM remote_db.remote_visits)", - 2, - DB::DistributedProductMode::LOCAL, - true - }, - - { - __LINE__, - "SELECT count() FROM test.visits_all WHERE UserID IN (SELECT UserID FROM remote_db.remote_visits)", - "SELECT count() FROM test.visits_all WHERE UserID IN (SELECT UserID FROM remote_db.remote_visits)", - 2, - DB::DistributedProductMode::LOCAL, - true - }, - - /// Section NOT IN / depth 1 - - { - __LINE__, - "SELECT count() FROM test.visits_all WHERE UserID NOT IN (SELECT UserID FROM test.visits_all)", - "SELECT count() FROM test.visits_all 
WHERE UserID NOT IN (SELECT UserID FROM test.visits_all)", - 2, - DB::DistributedProductMode::ALLOW, - true - }, - - { - __LINE__, - "SELECT count() FROM test.visits_all WHERE UserID NOT IN (SELECT UserID FROM test.visits_all)", - "SELECT count() FROM test.visits_all WHERE UserID NOT IN (SELECT UserID FROM test.visits_all)", - 2, - DB::DistributedProductMode::DENY, - false - }, - - { - __LINE__, - "SELECT count() FROM test.visits_all WHERE UserID NOT IN (SELECT UserID FROM test.visits_all)", - "SELECT count() FROM test.visits_all WHERE UserID GLOBAL NOT IN (SELECT UserID FROM test.visits_all)", - 2, - DB::DistributedProductMode::GLOBAL, - true - }, - - { - __LINE__, - "SELECT count() FROM test.visits_all WHERE UserID NOT IN (SELECT UserID FROM remote_db.remote_visits)", - "SELECT count() FROM test.visits_all WHERE UserID NOT IN (SELECT UserID FROM remote_db.remote_visits)", - 2, - DB::DistributedProductMode::GLOBAL, - true - }, - - { - __LINE__, - "SELECT count() FROM test.visits_all WHERE UserID NOT IN (SELECT UserID FROM test.visits_all)", - "SELECT count() FROM test.visits_all WHERE UserID NOT IN (SELECT UserID FROM remote_db.remote_visits)", - 2, - DB::DistributedProductMode::LOCAL, - true - }, - - { - __LINE__, - "SELECT count() FROM test.visits_all WHERE UserID NOT IN (SELECT UserID FROM remote_db.remote_visits)", - "SELECT count() FROM test.visits_all WHERE UserID NOT IN (SELECT UserID FROM remote_db.remote_visits)", - 2, - DB::DistributedProductMode::LOCAL, - true - }, - - /// Section GLOBAL IN / depth 1 - - { - __LINE__, - "SELECT count() FROM test.visits_all WHERE UserID GLOBAL IN (SELECT UserID FROM test.visits_all)", - "SELECT count() FROM test.visits_all WHERE UserID GLOBAL IN (SELECT UserID FROM test.visits_all)", - 2, - DB::DistributedProductMode::ALLOW, - true - }, - - { - __LINE__, - "SELECT count() FROM test.visits_all WHERE UserID GLOBAL IN (SELECT UserID FROM test.visits_all)", - "SELECT count() FROM test.visits_all WHERE UserID GLOBAL IN (SELECT UserID FROM test.visits_all)", - 2, - DB::DistributedProductMode::DENY, - true - }, - - { - __LINE__, - "SELECT count() FROM test.visits_all WHERE UserID GLOBAL IN (SELECT UserID FROM test.visits_all)", - "SELECT count() FROM test.visits_all WHERE UserID GLOBAL IN (SELECT UserID FROM test.visits_all)", - 2, - DB::DistributedProductMode::GLOBAL, - true - }, - - { - __LINE__, - "SELECT count() FROM test.visits_all WHERE UserID GLOBAL IN (SELECT UserID FROM test.visits_all)", - "SELECT count() FROM test.visits_all WHERE UserID GLOBAL IN (SELECT UserID FROM test.visits_all)", - 2, - DB::DistributedProductMode::LOCAL, - true - }, - - /// Section GLOBAL NOT IN / depth 1 - - { - __LINE__, - "SELECT count() FROM test.visits_all WHERE UserID GLOBAL NOT IN (SELECT UserID FROM test.visits_all)", - "SELECT count() FROM test.visits_all WHERE UserID GLOBAL NOT IN (SELECT UserID FROM test.visits_all)", - 2, - DB::DistributedProductMode::ALLOW, - true - }, - - { - __LINE__, - "SELECT count() FROM test.visits_all WHERE UserID GLOBAL NOT IN (SELECT UserID FROM test.visits_all)", - "SELECT count() FROM test.visits_all WHERE UserID GLOBAL NOT IN (SELECT UserID FROM test.visits_all)", - 2, - DB::DistributedProductMode::DENY, - true - }, - - { - __LINE__, - "SELECT count() FROM test.visits_all WHERE UserID GLOBAL NOT IN (SELECT UserID FROM test.visits_all)", - "SELECT count() FROM test.visits_all WHERE UserID GLOBAL NOT IN (SELECT UserID FROM test.visits_all)", - 2, - DB::DistributedProductMode::GLOBAL, - true - }, - - { - __LINE__, - "SELECT 
count() FROM test.visits_all WHERE UserID GLOBAL NOT IN (SELECT UserID FROM test.visits_all)", - "SELECT count() FROM test.visits_all WHERE UserID GLOBAL NOT IN (SELECT UserID FROM test.visits_all)", - 2, - DB::DistributedProductMode::LOCAL, - true - }, - - /// Section JOIN / depth 1 - - { - __LINE__, - "SELECT UserID FROM test.visits_all ALL INNER JOIN (SELECT UserID FROM test.visits_all) USING UserID", - "SELECT UserID FROM test.visits_all ALL INNER JOIN (SELECT UserID FROM test.visits_all) USING UserID", - 2, - DB::DistributedProductMode::ALLOW, - true - }, - - { - __LINE__, - "SELECT UserID FROM test.visits_all ALL INNER JOIN (SELECT UserID FROM test.visits_all) USING UserID", - "SELECT UserID FROM test.visits_all ALL INNER JOIN (SELECT UserID FROM test.visits_all) USING UserID", - 2, - DB::DistributedProductMode::DENY, - false - }, - - { - __LINE__, - "SELECT UserID FROM test.visits_all ALL INNER JOIN (SELECT UserID FROM test.visits_all) USING UserID", - "SELECT UserID FROM test.visits_all GLOBAL ALL INNER JOIN (SELECT UserID FROM test.visits_all) USING UserID", - 2, - DB::DistributedProductMode::GLOBAL, - true - }, - - { - __LINE__, - "SELECT UserID FROM test.visits_all ALL INNER JOIN (SELECT UserID FROM remote_db.remote_visits) USING UserID", - "SELECT UserID FROM test.visits_all ALL INNER JOIN (SELECT UserID FROM remote_db.remote_visits) USING UserID", - 2, - DB::DistributedProductMode::GLOBAL, - true - }, - - { - __LINE__, - "SELECT UserID FROM test.visits_all ALL INNER JOIN (SELECT UserID FROM test.visits_all) USING UserID", - "SELECT UserID FROM test.visits_all ALL INNER JOIN (SELECT UserID FROM remote_db.remote_visits) USING UserID", - 2, - DB::DistributedProductMode::LOCAL, - true - }, - - { - __LINE__, - "SELECT UserID FROM test.visits_all ALL INNER JOIN (SELECT UserID FROM remote_db.remote_visits) USING UserID", - "SELECT UserID FROM test.visits_all ALL INNER JOIN (SELECT UserID FROM remote_db.remote_visits) USING UserID", - 2, - DB::DistributedProductMode::LOCAL, - true - }, - - /// Section GLOBAL JOIN / depth 1 - - { - __LINE__, - "SELECT UserID FROM test.visits_all GLOBAL ALL INNER JOIN (SELECT UserID FROM test.visits_all) USING UserID", - "SELECT UserID FROM test.visits_all GLOBAL ALL INNER JOIN (SELECT UserID FROM test.visits_all) USING UserID", - 2, - DB::DistributedProductMode::ALLOW, - true - }, - - { - __LINE__, - "SELECT UserID FROM test.visits_all GLOBAL ALL INNER JOIN (SELECT UserID FROM test.visits_all) USING UserID", - "SELECT UserID FROM test.visits_all GLOBAL ALL INNER JOIN (SELECT UserID FROM test.visits_all) USING UserID", - 2, - DB::DistributedProductMode::DENY, - true - }, - - { - __LINE__, - "SELECT UserID FROM test.visits_all GLOBAL ALL INNER JOIN (SELECT UserID FROM test.visits_all) USING UserID", - "SELECT UserID FROM test.visits_all GLOBAL ALL INNER JOIN (SELECT UserID FROM test.visits_all) USING UserID", - 2, - DB::DistributedProductMode::GLOBAL, - true - }, - - { - __LINE__, - "SELECT UserID FROM test.visits_all GLOBAL ALL INNER JOIN (SELECT UserID FROM test.visits_all) USING UserID", - "SELECT UserID FROM test.visits_all GLOBAL ALL INNER JOIN (SELECT UserID FROM test.visits_all) USING UserID", - 2, - DB::DistributedProductMode::LOCAL, - true - }, - - /// Section JOIN / depth 1 / 2 of the subquery. 
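Taken together, these entries (continued below) encode the contract of the `distributed_product_mode` setting: a nested query over a second distributed table is passed through, rejected, or rewritten depending on the mode. A minimal sketch of that contract, reusing the fake fixture tables this test attaches (they are not real tables):

```sql
-- 'deny': a distributed table inside IN/JOIN is rejected once there are 2+ shards.
SET distributed_product_mode = 'deny';
SELECT count() FROM test.visits_all
WHERE UserID IN (SELECT UserID FROM test.visits_all);
-- fails with DISTRIBUTED_IN_JOIN_SUBQUERY_DENIED

-- 'global': the preprocessor rewrites IN to GLOBAL IN (and JOIN to GLOBAL JOIN).
SET distributed_product_mode = 'global';
SELECT count() FROM test.visits_all
WHERE UserID IN (SELECT UserID FROM test.visits_all);
-- is executed as:
-- SELECT count() FROM test.visits_all WHERE UserID GLOBAL IN (SELECT UserID FROM test.visits_all)
```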
- - { - __LINE__, - "SELECT UserID FROM (SELECT UserID FROM test.visits_all WHERE RegionID = 1) ALL INNER JOIN (SELECT UserID FROM test.visits_all WHERE RegionID = 2) USING UserID", - "SELECT UserID FROM (SELECT UserID FROM test.visits_all WHERE RegionID = 1) ALL INNER JOIN (SELECT UserID FROM test.visits_all WHERE RegionID = 2) USING UserID", - 2, - DB::DistributedProductMode::ALLOW, - true - }, - - { - __LINE__, - "SELECT UserID FROM (SELECT UserID FROM test.visits_all WHERE RegionID = 1) ALL INNER JOIN (SELECT UserID FROM remote_db.remote_visits WHERE RegionID = 2) USING UserID", - "SELECT UserID FROM (SELECT UserID FROM test.visits_all WHERE RegionID = 1) ALL INNER JOIN (SELECT UserID FROM remote_db.remote_visits WHERE RegionID = 2) USING UserID", - 2, - DB::DistributedProductMode::DENY, - true - }, - - { - __LINE__, - "SELECT UserID FROM (SELECT UserID FROM test.visits_all WHERE RegionID = 1) ALL INNER JOIN (SELECT UserID FROM test.visits_all WHERE RegionID = 2) USING UserID", - "SELECT UserID FROM (SELECT UserID FROM test.visits_all WHERE RegionID = 1) ALL INNER JOIN (SELECT UserID FROM test.visits_all WHERE RegionID = 2) USING UserID", - 2, - DB::DistributedProductMode::DENY, - true - }, - - { - __LINE__, - "SELECT UserID FROM (SELECT UserID FROM test.visits_all WHERE RegionID = 1) ALL INNER JOIN (SELECT UserID FROM test.visits_all WHERE RegionID = 2) USING UserID", - "SELECT UserID FROM (SELECT UserID FROM test.visits_all WHERE RegionID = 1) ALL INNER JOIN (SELECT UserID FROM test.visits_all WHERE RegionID = 2) USING UserID", - 2, - DB::DistributedProductMode::GLOBAL, - true - }, - - { - __LINE__, - "SELECT UserID FROM (SELECT UserID FROM test.visits_all WHERE RegionID = 1) ALL INNER JOIN (SELECT UserID FROM test.visits_all WHERE RegionID = 2) USING UserID", - "SELECT UserID FROM (SELECT UserID FROM test.visits_all WHERE RegionID = 1) ALL INNER JOIN (SELECT UserID FROM test.visits_all WHERE RegionID = 2) USING UserID", - 2, - DB::DistributedProductMode::LOCAL, - true - }, - - /// Section IN / depth 1 / table at level 2 - - { - __LINE__, - "SELECT count() FROM test.visits_all WHERE UserID IN (SELECT UserID FROM (SELECT UserID FROM test.visits_all))", - "SELECT count() FROM test.visits_all WHERE UserID IN (SELECT UserID FROM (SELECT UserID FROM test.visits_all))", - 2, - DB::DistributedProductMode::ALLOW, - true - }, - - { - __LINE__, - "SELECT count() FROM test.visits_all WHERE UserID IN (SELECT UserID FROM (SELECT UserID FROM test.visits_all))", - "SELECT count() FROM test.visits_all WHERE UserID IN (SELECT UserID FROM (SELECT UserID FROM test.visits_all))", - 2, - DB::DistributedProductMode::DENY, - false - }, - - { - __LINE__, - "SELECT count() FROM test.visits_all WHERE UserID IN (SELECT UserID FROM (SELECT UserID FROM test.visits_all))", - "SELECT count() FROM test.visits_all WHERE UserID GLOBAL IN (SELECT UserID FROM (SELECT UserID FROM test.visits_all))", - 2, - DB::DistributedProductMode::GLOBAL, - true - }, - - { - __LINE__, - "SELECT count() FROM test.visits_all WHERE UserID IN (SELECT UserID FROM (SELECT UserID FROM remote_db.remote_visits))", - "SELECT count() FROM test.visits_all WHERE UserID IN (SELECT UserID FROM (SELECT UserID FROM remote_db.remote_visits))", - 2, - DB::DistributedProductMode::GLOBAL, - true - }, - - { - __LINE__, - "SELECT count() FROM test.visits_all WHERE UserID IN (SELECT UserID FROM (SELECT UserID FROM test.visits_all))", - "SELECT count() FROM test.visits_all WHERE UserID IN (SELECT UserID FROM (SELECT UserID FROM remote_db.remote_visits))", - 
2, - DB::DistributedProductMode::LOCAL, - true - }, - - { - __LINE__, - "SELECT count() FROM test.visits_all WHERE UserID IN (SELECT UserID FROM (SELECT UserID FROM remote_db.remote_visits))", - "SELECT count() FROM test.visits_all WHERE UserID IN (SELECT UserID FROM (SELECT UserID FROM remote_db.remote_visits))", - 2, - DB::DistributedProductMode::LOCAL, - true - }, - - /// Section GLOBAL IN / depth 1 / table at level 2 - - { - __LINE__, - "SELECT UserID FROM test.visits_all WHERE UserID GLOBAL IN (SELECT UserID FROM (SELECT UserID FROM test.visits_all))", - "SELECT UserID FROM test.visits_all WHERE UserID GLOBAL IN (SELECT UserID FROM (SELECT UserID FROM test.visits_all))", - 2, - DB::DistributedProductMode::ALLOW, - true - }, - - { - __LINE__, - "SELECT UserID FROM test.visits_all WHERE UserID GLOBAL IN (SELECT UserID FROM (SELECT UserID FROM test.visits_all))", - "SELECT UserID FROM test.visits_all WHERE UserID GLOBAL IN (SELECT UserID FROM (SELECT UserID FROM test.visits_all))", - 2, - DB::DistributedProductMode::DENY, - true - }, - - { - __LINE__, - "SELECT UserID FROM test.visits_all WHERE UserID GLOBAL IN (SELECT UserID FROM (SELECT UserID FROM test.visits_all))", - "SELECT UserID FROM test.visits_all WHERE UserID GLOBAL IN (SELECT UserID FROM (SELECT UserID FROM test.visits_all))", - 2, - DB::DistributedProductMode::GLOBAL, - true - }, - - { - __LINE__, - "SELECT UserID FROM test.visits_all WHERE UserID GLOBAL IN (SELECT UserID FROM (SELECT UserID FROM test.visits_all))", - "SELECT UserID FROM test.visits_all WHERE UserID GLOBAL IN (SELECT UserID FROM (SELECT UserID FROM test.visits_all))", - 2, - DB::DistributedProductMode::LOCAL, - true - }, - - /// Section IN at level 1, GLOBAL IN section at level 2. - - { - __LINE__, - "SELECT UserID FROM test.visits_all WHERE UserID IN (SELECT UserID FROM remote_db.remote_visits WHERE UserID GLOBAL IN (SELECT UserID FROM (SELECT UserID FROM test.visits_all)))", - "SELECT UserID FROM test.visits_all WHERE UserID IN (SELECT UserID FROM remote_db.remote_visits WHERE UserID GLOBAL IN (SELECT UserID FROM (SELECT UserID FROM test.visits_all)))", - 2, - DB::DistributedProductMode::ALLOW, - true - }, - - { - __LINE__, - "SELECT UserID FROM test.visits_all WHERE UserID IN (SELECT UserID FROM remote_db.remote_visits WHERE UserID GLOBAL IN (SELECT UserID FROM (SELECT UserID FROM test.visits_all)))", - "SELECT UserID FROM test.visits_all WHERE UserID IN (SELECT UserID FROM remote_db.remote_visits WHERE UserID GLOBAL IN (SELECT UserID FROM (SELECT UserID FROM test.visits_all)))", - 2, - DB::DistributedProductMode::DENY, - false - }, - - { - __LINE__, - "SELECT UserID FROM test.visits_all WHERE UserID IN (SELECT UserID FROM remote_db.remote_visits WHERE UserID GLOBAL IN (SELECT UserID FROM (SELECT UserID FROM test.visits_all)))", - "SELECT UserID FROM test.visits_all WHERE UserID IN (SELECT UserID FROM remote_db.remote_visits WHERE UserID GLOBAL IN (SELECT UserID FROM (SELECT UserID FROM remote_db.remote_visits)))", - 2, - DB::DistributedProductMode::LOCAL, - true - }, - - /// Section JOIN / depth 1 / table at level 2 - - { - __LINE__, - "SELECT UserID FROM test.visits_all ALL INNER JOIN (SELECT UserID FROM (SELECT UserID FROM test.visits_all)) USING UserID", - "SELECT UserID FROM test.visits_all ALL INNER JOIN (SELECT UserID FROM (SELECT UserID FROM test.visits_all)) USING UserID", - 2, - DB::DistributedProductMode::ALLOW, - true - }, - - { - __LINE__, - "SELECT UserID FROM test.visits_all ALL INNER JOIN (SELECT UserID FROM (SELECT UserID FROM 
test.visits_all)) USING UserID", - "SELECT UserID FROM test.visits_all ALL INNER JOIN (SELECT UserID FROM (SELECT UserID FROM test.visits_all)) USING UserID", - 2, - DB::DistributedProductMode::DENY, - false - }, - - { - __LINE__, - "SELECT UserID FROM test.visits_all ALL INNER JOIN (SELECT UserID FROM (SELECT UserID FROM test.visits_all)) USING UserID", - "SELECT UserID FROM test.visits_all GLOBAL ALL INNER JOIN (SELECT UserID FROM (SELECT UserID FROM test.visits_all)) USING UserID", - 2, - DB::DistributedProductMode::GLOBAL, - true - }, - - { - __LINE__, - "SELECT UserID FROM test.visits_all ALL INNER JOIN (SELECT UserID FROM (SELECT UserID FROM remote_db.remote_visits)) USING UserID", - "SELECT UserID FROM test.visits_all ALL INNER JOIN (SELECT UserID FROM (SELECT UserID FROM remote_db.remote_visits)) USING UserID", - 2, - DB::DistributedProductMode::GLOBAL, - true - }, - - { - __LINE__, - "SELECT UserID FROM test.visits_all ALL INNER JOIN (SELECT UserID FROM (SELECT UserID FROM test.visits_all)) USING UserID", - "SELECT UserID FROM test.visits_all ALL INNER JOIN (SELECT UserID FROM (SELECT UserID FROM remote_db.remote_visits)) USING UserID", - 2, - DB::DistributedProductMode::LOCAL, - true - }, - - { - __LINE__, - "SELECT UserID FROM test.visits_all ALL INNER JOIN (SELECT UserID FROM (SELECT UserID FROM remote_db.remote_visits)) USING UserID", - "SELECT UserID FROM test.visits_all ALL INNER JOIN (SELECT UserID FROM (SELECT UserID FROM remote_db.remote_visits)) USING UserID", - 2, - DB::DistributedProductMode::LOCAL, - true - }, - - /// Section IN / depth 2 - - { - __LINE__, - "SELECT UserID, RegionID FROM test.visits_all WHERE CounterID IN (SELECT CounterID FROM test.visits_all WHERE BrowserID IN (SELECT BrowserID FROM test.visits_all WHERE OtherID = 1))", - "SELECT UserID, RegionID FROM test.visits_all WHERE CounterID IN (SELECT CounterID FROM test.visits_all WHERE BrowserID IN (SELECT BrowserID FROM test.visits_all WHERE OtherID = 1))", - 2, - DB::DistributedProductMode::ALLOW, - true - }, - - { - __LINE__, - "SELECT UserID, RegionID FROM test.visits_all WHERE CounterID GLOBAL IN (SELECT CounterID FROM test.visits_all WHERE BrowserID IN (SELECT BrowserID FROM remote_db.remote_visits WHERE OtherID = 1))", - "SELECT UserID, RegionID FROM test.visits_all WHERE CounterID GLOBAL IN (SELECT CounterID FROM test.visits_all WHERE BrowserID IN (SELECT BrowserID FROM remote_db.remote_visits WHERE OtherID = 1))", - 2, - DB::DistributedProductMode::DENY, - true - }, - - { - __LINE__, - "SELECT UserID, RegionID FROM test.visits_all WHERE CounterID IN (SELECT CounterID FROM test.visits_all WHERE BrowserID IN (SELECT BrowserID FROM test.visits_all WHERE OtherID = 1))", - "SELECT UserID, RegionID FROM test.visits_all WHERE CounterID IN (SELECT CounterID FROM test.visits_all WHERE BrowserID IN (SELECT BrowserID FROM test.visits_all WHERE OtherID = 1))", - 2, - DB::DistributedProductMode::DENY, - false - }, - - { - __LINE__, - "SELECT UserID, RegionID FROM test.visits_all WHERE CounterID GLOBAL IN (SELECT CounterID FROM test.visits_all WHERE BrowserID IN (SELECT BrowserID FROM test.visits_all WHERE OtherID = 1))", - "SELECT UserID, RegionID FROM test.visits_all WHERE CounterID GLOBAL IN (SELECT CounterID FROM test.visits_all WHERE BrowserID IN (SELECT BrowserID FROM test.visits_all WHERE OtherID = 1))", - 2, - DB::DistributedProductMode::DENY, - true - }, - - { - __LINE__, - "SELECT UserID, RegionID FROM test.visits_all WHERE CounterID IN (SELECT CounterID FROM test.visits_all WHERE BrowserID IN 
(SELECT BrowserID FROM remote_db.remote_visits WHERE OtherID = 1))", - "SELECT UserID, RegionID FROM test.visits_all WHERE CounterID GLOBAL IN (SELECT CounterID FROM test.visits_all WHERE BrowserID IN (SELECT BrowserID FROM remote_db.remote_visits WHERE OtherID = 1))", - 2, - DB::DistributedProductMode::GLOBAL, - true - }, - - { - __LINE__, - "SELECT UserID, RegionID FROM test.visits_all WHERE CounterID IN (SELECT CounterID FROM test.visits_all WHERE BrowserID IN (SELECT BrowserID FROM test.visits_all WHERE OtherID = 1))", - "SELECT UserID, RegionID FROM test.visits_all WHERE CounterID IN (SELECT CounterID FROM remote_db.remote_visits WHERE BrowserID IN (SELECT BrowserID FROM remote_db.remote_visits WHERE OtherID = 1))", - 2, - DB::DistributedProductMode::LOCAL, - true - }, - - { - __LINE__, - "SELECT UserID, RegionID FROM test.visits_all WHERE CounterID GLOBAL IN (SELECT CounterID FROM test.visits_all WHERE BrowserID IN (SELECT BrowserID FROM test.visits_all WHERE OtherID = 1))", - "SELECT UserID, RegionID FROM test.visits_all WHERE CounterID GLOBAL IN (SELECT CounterID FROM test.visits_all WHERE BrowserID IN (SELECT BrowserID FROM test.visits_all WHERE OtherID = 1))", - 2, - DB::DistributedProductMode::LOCAL, - true - }, - - { - __LINE__, - "SELECT UserID, RegionID FROM test.visits_all WHERE CounterID IN (SELECT CounterID FROM test.visits_all WHERE BrowserID IN (SELECT BrowserID FROM remote_db.remote_visits WHERE OtherID = 1))", - "SELECT UserID, RegionID FROM test.visits_all WHERE CounterID IN (SELECT CounterID FROM remote_db.remote_visits WHERE BrowserID IN (SELECT BrowserID FROM remote_db.remote_visits WHERE OtherID = 1))", - 2, - DB::DistributedProductMode::LOCAL, - true - }, - - /// Section JOIN / depth 2 - - { - __LINE__, - "SELECT UserID, RegionID FROM test.visits_all WHERE CounterID IN (SELECT CounterID FROM test.visits_all ALL INNER JOIN (SELECT CounterID FROM (SELECT CounterID FROM test.visits_all)) USING CounterID)", - "SELECT UserID, RegionID FROM test.visits_all WHERE CounterID IN (SELECT CounterID FROM test.visits_all ALL INNER JOIN (SELECT CounterID FROM (SELECT CounterID FROM test.visits_all)) USING CounterID)", - 2, - DB::DistributedProductMode::ALLOW, - true - }, - - { - __LINE__, - "SELECT UserID, RegionID FROM test.visits_all WHERE CounterID GLOBAL IN (SELECT CounterID FROM test.visits_all ALL INNER JOIN (SELECT CounterID FROM (SELECT CounterID FROM remote_db.remote_visits)) USING CounterID)", - "SELECT UserID, RegionID FROM test.visits_all WHERE CounterID GLOBAL IN (SELECT CounterID FROM test.visits_all ALL INNER JOIN (SELECT CounterID FROM (SELECT CounterID FROM remote_db.remote_visits)) USING CounterID)", - 2, - DB::DistributedProductMode::DENY, - true - }, - - { - __LINE__, - "SELECT UserID, RegionID FROM test.visits_all WHERE CounterID GLOBAL IN (SELECT CounterID FROM test.visits_all ALL INNER JOIN (SELECT CounterID FROM (SELECT CounterID FROM test.visits_all)) USING CounterID)", - "SELECT UserID, RegionID FROM test.visits_all WHERE CounterID GLOBAL IN (SELECT CounterID FROM test.visits_all ALL INNER JOIN (SELECT CounterID FROM (SELECT CounterID FROM test.visits_all)) USING CounterID)", - 2, - DB::DistributedProductMode::DENY, - true - }, - - { - __LINE__, - "SELECT UserID, RegionID FROM test.visits_all WHERE CounterID IN (SELECT CounterID FROM test.visits_all ALL INNER JOIN (SELECT CounterID FROM (SELECT CounterID FROM test.visits_all)) USING CounterID)", - "SELECT UserID, RegionID FROM test.visits_all WHERE CounterID GLOBAL IN (SELECT CounterID FROM 
test.visits_all GLOBAL ALL INNER JOIN (SELECT CounterID FROM (SELECT CounterID FROM test.visits_all)) USING CounterID)", - 2, - DB::DistributedProductMode::GLOBAL, - true - }, - - { - __LINE__, - "SELECT UserID, RegionID FROM test.visits_all WHERE CounterID IN (SELECT CounterID FROM test.visits_all ALL INNER JOIN (SELECT CounterID FROM (SELECT CounterID FROM test.visits_all)) USING CounterID)", - "SELECT UserID, RegionID FROM test.visits_all WHERE CounterID IN (SELECT CounterID FROM remote_db.remote_visits ALL INNER JOIN (SELECT CounterID FROM (SELECT CounterID FROM remote_db.remote_visits)) USING CounterID)", - 2, - DB::DistributedProductMode::LOCAL, - true - }, - - /// Section JOIN / depth 2 - - { - __LINE__, - "SELECT UserID FROM test.visits_all WHERE OtherID IN (SELECT OtherID FROM (SELECT OtherID FROM test.visits_all WHERE RegionID = 1) ALL INNER JOIN (SELECT OtherID FROM test.visits_all WHERE RegionID = 2) USING OtherID)", - "SELECT UserID FROM test.visits_all WHERE OtherID IN (SELECT OtherID FROM (SELECT OtherID FROM test.visits_all WHERE RegionID = 1) ALL INNER JOIN (SELECT OtherID FROM test.visits_all WHERE RegionID = 2) USING OtherID)", - 2, - DB::DistributedProductMode::ALLOW, - true - }, - - { - __LINE__, - "SELECT UserID FROM test.visits_all WHERE OtherID GLOBAL IN (SELECT OtherID FROM (SELECT OtherID FROM test.visits_all WHERE RegionID = 1) ALL INNER JOIN (SELECT OtherID FROM remote_db.remote_visits WHERE RegionID = 2) USING OtherID)", - "SELECT UserID FROM test.visits_all WHERE OtherID GLOBAL IN (SELECT OtherID FROM (SELECT OtherID FROM test.visits_all WHERE RegionID = 1) ALL INNER JOIN (SELECT OtherID FROM remote_db.remote_visits WHERE RegionID = 2) USING OtherID)", - 2, - DB::DistributedProductMode::DENY, - true - }, - - { - __LINE__, - "SELECT UserID FROM test.visits_all WHERE OtherID GLOBAL IN (SELECT OtherID FROM (SELECT OtherID FROM test.visits_all WHERE RegionID = 1) ALL INNER JOIN (SELECT OtherID FROM test.visits_all WHERE RegionID = 2) USING OtherID)", - "SELECT UserID FROM test.visits_all WHERE OtherID GLOBAL IN (SELECT OtherID FROM (SELECT OtherID FROM test.visits_all WHERE RegionID = 1) ALL INNER JOIN (SELECT OtherID FROM test.visits_all WHERE RegionID = 2) USING OtherID)", - 2, - DB::DistributedProductMode::DENY, - true - }, - - { - __LINE__, - "SELECT UserID FROM test.visits_all WHERE OtherID IN (SELECT OtherID FROM (SELECT OtherID FROM test.visits_all WHERE RegionID = 1) ALL INNER JOIN (SELECT OtherID FROM test.visits_all WHERE RegionID = 2) USING OtherID)", - "SELECT UserID FROM test.visits_all WHERE OtherID IN (SELECT OtherID FROM (SELECT OtherID FROM test.visits_all WHERE RegionID = 1) ALL INNER JOIN (SELECT OtherID FROM test.visits_all WHERE RegionID = 2) USING OtherID)", - 2, - DB::DistributedProductMode::DENY, - false - }, - - { - __LINE__, - "SELECT UserID FROM test.visits_all WHERE OtherID IN (SELECT OtherID FROM (SELECT OtherID FROM test.visits_all WHERE RegionID = 1) ALL INNER JOIN (SELECT OtherID FROM test.visits_all WHERE RegionID = 2) USING OtherID)", - "SELECT UserID FROM test.visits_all WHERE OtherID GLOBAL IN (SELECT OtherID FROM (SELECT OtherID FROM test.visits_all WHERE RegionID = 1) GLOBAL ALL INNER JOIN (SELECT OtherID FROM test.visits_all WHERE RegionID = 2) USING OtherID)", - 2, - DB::DistributedProductMode::GLOBAL, - true - }, - - { - __LINE__, - "SELECT UserID FROM test.visits_all WHERE OtherID IN (SELECT OtherID FROM (SELECT OtherID FROM test.visits_all WHERE RegionID = 1) ALL INNER JOIN (SELECT OtherID FROM test.visits_all WHERE 
RegionID = 2) USING OtherID)", - "SELECT UserID FROM test.visits_all WHERE OtherID IN (SELECT OtherID FROM (SELECT OtherID FROM remote_db.remote_visits WHERE RegionID = 1) ALL INNER JOIN (SELECT OtherID FROM remote_db.remote_visits WHERE RegionID = 2) USING OtherID)", - 2, - DB::DistributedProductMode::LOCAL, - true - }, - - /// Section JOIN / section IN - - { - __LINE__, - "SELECT UserID FROM test.visits_all ALL INNER JOIN (SELECT UserID FROM test.visits_all WHERE OtherID IN (SELECT OtherID FROM test.visits_all WHERE RegionID = 2)) USING UserID", - "SELECT UserID FROM test.visits_all ALL INNER JOIN (SELECT UserID FROM test.visits_all WHERE OtherID IN (SELECT OtherID FROM test.visits_all WHERE RegionID = 2)) USING UserID", - 2, - DB::DistributedProductMode::ALLOW, - true - }, - - { - __LINE__, - "SELECT UserID FROM test.visits_all ALL INNER JOIN (SELECT UserID FROM test.visits_all WHERE OtherID IN (SELECT OtherID FROM test.visits_all WHERE RegionID = 2)) USING UserID", - "SELECT UserID FROM test.visits_all ALL INNER JOIN (SELECT UserID FROM test.visits_all WHERE OtherID IN (SELECT OtherID FROM test.visits_all WHERE RegionID = 2)) USING UserID", - 2, - DB::DistributedProductMode::DENY, - false - }, - - { - __LINE__, - "SELECT UserID FROM test.visits_all GLOBAL ALL INNER JOIN (SELECT UserID FROM test.visits_all WHERE OtherID IN (SELECT OtherID FROM test.visits_all WHERE RegionID = 2)) USING UserID", - "SELECT UserID FROM test.visits_all GLOBAL ALL INNER JOIN (SELECT UserID FROM test.visits_all WHERE OtherID IN (SELECT OtherID FROM test.visits_all WHERE RegionID = 2)) USING UserID", - 2, - DB::DistributedProductMode::DENY, - true - }, - - { - __LINE__, - "SELECT UserID FROM test.visits_all GLOBAL ALL INNER JOIN (SELECT UserID FROM test.visits_all WHERE OtherID IN (SELECT OtherID FROM remote_db.remote_visits WHERE RegionID = 2)) USING UserID", - "SELECT UserID FROM test.visits_all GLOBAL ALL INNER JOIN (SELECT UserID FROM test.visits_all WHERE OtherID IN (SELECT OtherID FROM remote_db.remote_visits WHERE RegionID = 2)) USING UserID", - 2, - DB::DistributedProductMode::DENY, - true - }, - - { - __LINE__, - "SELECT UserID FROM test.visits_all ALL INNER JOIN (SELECT UserID FROM test.visits_all WHERE OtherID IN (SELECT OtherID FROM test.visits_all WHERE RegionID = 2)) USING UserID", - "SELECT UserID FROM test.visits_all ALL INNER JOIN (SELECT UserID FROM remote_db.remote_visits WHERE OtherID IN (SELECT OtherID FROM remote_db.remote_visits WHERE RegionID = 2)) USING UserID", - 2, - DB::DistributedProductMode::LOCAL, - true - }, - - /// Table function. 
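The table-function entries that follow check that queries addressing shards via the `remote` table function are left untouched in every mode; for example (the host pattern and tables are this test's fixtures):

```sql
SET distributed_product_mode = 'deny';
-- Accepted and not rewritten, per the expected outputs below.
SELECT count()
FROM remote('127.0.0.{1,2}', test, visits_all)
WHERE UserID IN (SELECT UserID FROM test.visits_all);
```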
- - { - __LINE__, - "SELECT count() FROM remote('127.0.0.{1,2}', test, visits_all) WHERE UserID IN (SELECT UserID FROM test.visits_all)", - "SELECT count() FROM remote('127.0.0.{1,2}', test, visits_all) WHERE UserID IN (SELECT UserID FROM test.visits_all)", - 2, - DB::DistributedProductMode::ALLOW, - true - }, - - { - __LINE__, - "SELECT count() FROM remote('127.0.0.{1,2}', test, visits_all) WHERE UserID IN (SELECT UserID FROM test.visits_all)", - "SELECT count() FROM remote('127.0.0.{1,2}', test, visits_all) WHERE UserID IN (SELECT UserID FROM test.visits_all)", - 2, - DB::DistributedProductMode::DENY, - true - }, - - { - __LINE__, - "SELECT count() FROM test.visits_all WHERE UserID IN (SELECT UserID FROM remote('127.0.0.{1,2}', test, visits_all))", - "SELECT count() FROM test.visits_all WHERE UserID IN (SELECT UserID FROM remote('127.0.0.{1,2}', test, visits_all))", - 2, - DB::DistributedProductMode::DENY, - true - }, - - { - __LINE__, - "SELECT count() FROM remote('127.0.0.{1,2}', test, visits_all) WHERE UserID IN (SELECT UserID FROM test.visits_all)", - "SELECT count() FROM remote('127.0.0.{1,2}', test, visits_all) WHERE UserID IN (SELECT UserID FROM test.visits_all)", - 2, - DB::DistributedProductMode::GLOBAL, - true - }, - - { - __LINE__, - "SELECT count() FROM test.visits_all WHERE UserID IN (SELECT UserID FROM remote('127.0.0.{1,2}', test, visits_all))", - "SELECT count() FROM test.visits_all WHERE UserID IN (SELECT UserID FROM remote('127.0.0.{1,2}', test, visits_all))", - 2, - DB::DistributedProductMode::GLOBAL, - true - }, - - { - __LINE__, - "SELECT count() FROM remote('127.0.0.{1,2}', test, visits_all) WHERE UserID IN (SELECT UserID FROM test.visits_all)", - "SELECT count() FROM remote('127.0.0.{1,2}', test, visits_all) WHERE UserID IN (SELECT UserID FROM test.visits_all)", - 2, - DB::DistributedProductMode::LOCAL, - true - }, - - { - __LINE__, - "SELECT count() FROM test.visits_all WHERE UserID IN (SELECT UserID FROM remote('127.0.0.{1,2}', test, visits_all))", - "SELECT count() FROM test.visits_all WHERE UserID IN (SELECT UserID FROM remote('127.0.0.{1,2}', test, visits_all))", - 2, - DB::DistributedProductMode::LOCAL, - true - }, - - /// Section IN / depth 2 / two distributed tables - - { - __LINE__, - "SELECT UserID, RegionID FROM test.visits_all WHERE CounterID IN (SELECT CounterID FROM test.hits_all WHERE BrowserID IN (SELECT BrowserID FROM test.visits_all WHERE OtherID = 1))", - "SELECT UserID, RegionID FROM test.visits_all WHERE CounterID IN (SELECT CounterID FROM distant_db.distant_hits WHERE BrowserID IN (SELECT BrowserID FROM remote_db.remote_visits WHERE OtherID = 1))", - 2, - DB::DistributedProductMode::LOCAL, - true - }, - - /// Aggregate function. 
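The aggregate-function entries that follow verify that the rewrite also reaches subqueries nested inside aggregate arguments; a sketch under the same fixtures:

```sql
SET distributed_product_mode = 'global';
SELECT sum(RegionID IN (SELECT RegionID FROM test.hits_all))
FROM test.visits_all;
-- is executed as:
-- SELECT sum(RegionID GLOBAL IN (SELECT RegionID FROM test.hits_all)) FROM test.visits_all
```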
- - { - __LINE__, - "SELECT sum(RegionID IN (SELECT RegionID from test.hits_all)) FROM test.visits_all", - "SELECT sum(RegionID IN (SELECT RegionID from test.hits_all)) FROM test.visits_all", - 2, - DB::DistributedProductMode::ALLOW, - true - }, - - { - __LINE__, - "SELECT sum(RegionID IN (SELECT RegionID from test.hits_all)) FROM test.visits_all", - "SELECT sum(RegionID IN (SELECT RegionID from test.hits_all)) FROM test.visits_all", - 2, - DB::DistributedProductMode::DENY, - false - }, - - { - __LINE__, - "SELECT sum(RegionID GLOBAL IN (SELECT RegionID from test.hits_all)) FROM test.visits_all", - "SELECT sum(RegionID GLOBAL IN (SELECT RegionID from test.hits_all)) FROM test.visits_all", - 2, - DB::DistributedProductMode::DENY, - true - }, - - { - __LINE__, - "SELECT sum(RegionID IN (SELECT RegionID from test.hits_all)) FROM test.visits_all", - "SELECT sum(RegionID GLOBAL IN (SELECT RegionID from test.hits_all)) FROM test.visits_all", - 2, - DB::DistributedProductMode::GLOBAL, - true - }, - - { - __LINE__, - "SELECT sum(RegionID GLOBAL IN (SELECT RegionID from test.hits_all)) FROM test.visits_all", - "SELECT sum(RegionID GLOBAL IN (SELECT RegionID from test.hits_all)) FROM test.visits_all", - 2, - DB::DistributedProductMode::LOCAL, - true - }, - - { - __LINE__, - "SELECT sum(RegionID IN (SELECT RegionID from test.hits_all)) FROM test.visits_all", - "SELECT sum(RegionID IN (SELECT RegionID from distant_db.distant_hits)) FROM test.visits_all", - 2, - DB::DistributedProductMode::LOCAL, - true - }, - - /// Miscellaneous. - - { - __LINE__, - "SELECT count() FROM test.visits_all WHERE x GLOBAL IN (SELECT x FROM test.visits_all WHERE x GLOBAL IN (SELECT x FROM test.visits_all))", - "SELECT count() FROM test.visits_all WHERE x GLOBAL IN (SELECT x FROM test.visits_all WHERE x GLOBAL IN (SELECT x FROM test.visits_all))", - 2, - DB::DistributedProductMode::DENY, - true - }, - - { - __LINE__, - "SELECT count() FROM test.visits_all WHERE x GLOBAL IN (SELECT x FROM test.visits_all WHERE x GLOBAL IN (SELECT x FROM test.visits_all))", - "SELECT count() FROM test.visits_all WHERE x GLOBAL IN (SELECT x FROM test.visits_all WHERE x GLOBAL IN (SELECT x FROM test.visits_all))", - 2, - DB::DistributedProductMode::LOCAL, - true - }, - - { - __LINE__, - "SELECT count() FROM test.visits_all WHERE x GLOBAL IN (SELECT x FROM test.visits_all WHERE x GLOBAL IN (SELECT x FROM test.visits_all))", - "SELECT count() FROM test.visits_all WHERE x GLOBAL IN (SELECT x FROM test.visits_all WHERE x GLOBAL IN (SELECT x FROM test.visits_all))", - 2, - DB::DistributedProductMode::GLOBAL, - true - }, - - { - __LINE__, - "SELECT UserID FROM (SELECT UserID FROM test.visits_all WHERE UserID GLOBAL IN (SELECT UserID FROM test.hits_all))", - "SELECT UserID FROM (SELECT UserID FROM test.visits_all WHERE UserID GLOBAL IN (SELECT UserID FROM test.hits_all))", - 2, - DB::DistributedProductMode::DENY, - true - } -}; - - -static bool run() -{ - unsigned int count = 0; - unsigned int i = 1; - - for (const auto & entry : entries) - { - auto res = check(entry); - if (res.first) - { - ++count; - } - else - std::cout << "Test " << i << " at line " << entry.line_num << " failed.\n" - "Expected: " << entry.expected_output << ".\n" - "Received: " << res.second << "\n"; - - ++i; - } - - std::cout << count << " out of " << entries.size() << " test(s) passed.\n"; - - return count == entries.size(); -} - - -TestResult check(const TestEntry & entry) -{ - static DB::SharedContextHolder shared_context = DB::Context::createShared(); - static DB::Context 
context = DB::Context::createGlobal(shared_context.get()); - context.makeGlobalContext(); - - try - { - - auto storage_distributed_visits = StorageDistributedFake::create("remote_db", "remote_visits", entry.shard_count); - auto storage_distributed_hits = StorageDistributedFake::create("distant_db", "distant_hits", entry.shard_count); - - DB::DatabasePtr database = std::make_shared("test", "./metadata/test/", context); - DB::DatabaseCatalog::instance().attachDatabase("test", database); - database->attachTable("visits_all", storage_distributed_visits); - database->attachTable("hits_all", storage_distributed_hits); - context.setCurrentDatabase("test"); - context.setSetting("distributed_product_mode", entry.mode); - - /// Parse and process the incoming query. - DB::ASTPtr ast_input; - if (!parse(ast_input, entry.input)) - return TestResult(false, "parse error"); - - bool success = true; - - try - { - DB::InJoinSubqueriesPreprocessor::SubqueryTables renamed; - DB::InJoinSubqueriesPreprocessor(context, renamed, std::make_unique()).visit(ast_input); - } - catch (const DB::Exception & ex) - { - if (ex.code() == DB::ErrorCodes::DISTRIBUTED_IN_JOIN_SUBQUERY_DENIED) - success = false; - else - throw; - } - catch (...) - { - throw; - } - - if (success != entry.expected_success) - return TestResult(false, "unexpected result"); - - /// Parse the expected result. - DB::ASTPtr ast_expected; - if (!parse(ast_expected, entry.expected_output)) - return TestResult(false, "parse error"); - - /// Compare the processed query and the expected result. - bool res = equals(ast_input, ast_expected); - std::string output = DB::queryToString(ast_input); - - DB::DatabaseCatalog::instance().detachDatabase("test"); - return TestResult(res, output); - } - catch (DB::Exception & e) - { - DB::DatabaseCatalog::instance().detachDatabase("test"); - return TestResult(false, e.displayText()); - } -} - -bool parse(DB::ASTPtr & ast, const std::string & query) -{ - DB::ParserSelectQuery parser; - std::string message; - const auto * begin = query.data(); - const auto * end = begin + query.size(); - ast = DB::tryParseQuery(parser, begin, end, message, false, "", false, 0, 0); - return ast != nullptr; -} - -bool equals(const DB::ASTPtr & lhs, const DB::ASTPtr & rhs) -{ - DB::ASTPtr lhs_reordered = lhs->clone(); - reorder(&*lhs_reordered); - - DB::ASTPtr rhs_reordered = rhs->clone(); - reorder(&*rhs_reordered); - - return lhs_reordered->getTreeHash() == rhs_reordered->getTreeHash(); -} - -void reorder(DB::IAST * ast) -{ - if (ast == nullptr) - return; - - auto & children = ast->children; - if (children.empty()) - return; - - for (auto & child : children) - reorder(&*child); - - std::sort(children.begin(), children.end(), [](const DB::ASTPtr & lhs, const DB::ASTPtr & rhs) - { - return lhs->getTreeHash() < rhs->getTreeHash(); - }); -} - -int main() -{ - return run() ? 
EXIT_SUCCESS : EXIT_FAILURE; -} diff --git a/src/Interpreters/tests/internal_iotop.cpp b/src/Interpreters/tests/internal_iotop.cpp deleted file mode 100644 index 6025a46e9b7..00000000000 --- a/src/Interpreters/tests/internal_iotop.cpp +++ /dev/null @@ -1,157 +0,0 @@ -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - - -std::mutex mutex; - - -static std::ostream & operator << (std::ostream & stream, const ::taskstats & stat) -{ -#define PRINT(field) (stream << #field << " " << stat.field) - - PRINT(ac_pid) << ", "; - - PRINT(read_bytes) << ", "; - PRINT(write_bytes) << ", "; - - PRINT(read_char) << ", "; - PRINT(write_char) << ", "; - - PRINT(swapin_delay_total) << ", "; - PRINT(blkio_delay_total) << ", "; - PRINT(cpu_delay_total) << ", "; - - PRINT(ac_pid) << ", "; - - PRINT(ac_utime) << ", "; - PRINT(ac_stime) << ", "; - -#undef PRINT - - return stream; -} - -using namespace DB; - - -static void do_io(size_t id) -{ - ::taskstats stat; - int tid = getThreadId(); - TaskStatsInfoGetter get_info; - - get_info.getStat(stat, tid); - { - std::lock_guard lock(mutex); - std::cerr << "#" << id << ", tid " << tid << ", initial\n" << stat << "\n"; - } - - size_t copy_size = 1048576 * (1 + id); - std::string path_dst = "test_out_" + std::to_string(id); - - { - size_t page_size = static_cast(::getPageSize()); - ReadBufferFromFile rb("/dev/urandom"); - WriteBufferFromFile wb(path_dst, DBMS_DEFAULT_BUFFER_SIZE, O_WRONLY | O_CREAT | O_TRUNC | O_DIRECT, 0666, nullptr, page_size); - copyData(rb, wb, copy_size); - wb.close(); - } - - get_info.getStat(stat, tid); - { - std::lock_guard lock(mutex); - std::cerr << "#" << id << ", tid " << tid << ", step1\n" << stat << "\n"; - } - - { - ReadBufferFromFile rb(path_dst); - WriteBufferFromOwnString wb; - copyData(rb, wb, copy_size); - } - - get_info.getStat(stat, tid); - { - std::lock_guard lock(mutex); - std::cerr << "#" << id << ", tid " << tid << ", step2\n" << stat << "\n"; - } - - { - ReadBufferFromFile rb(path_dst); - WriteBufferFromOwnString wb; - copyData(rb, wb, copy_size); - } - - get_info.getStat(stat, tid); - { - std::lock_guard lock(mutex); - std::cerr << "#" << id << ", tid " << tid << ", step3\n" << stat << "\n"; - } - - Poco::File(path_dst).remove(false); -} - -static void test_perf() -{ - - ::taskstats stat; - int tid = getThreadId(); - TaskStatsInfoGetter get_info; - - rusage rusage; - - constexpr size_t num_samples = 1000000; - { - Stopwatch watch; - for (size_t i = 0; i < num_samples; ++i) - getrusage(RUSAGE_THREAD, &rusage); - - auto ms = watch.elapsedMilliseconds(); - if (ms > 0) - std::cerr << "RUsage: " << double(ms) / num_samples << " ms per call, " << 1000 * num_samples / ms << " calls per second\n"; - } - - { - Stopwatch watch; - for (size_t i = 0; i < num_samples; ++i) - get_info.getStat(stat, tid); - - auto ms = watch.elapsedMilliseconds(); - if (ms > 0) - std::cerr << "Netlink: " << double(ms) / num_samples << " ms per call, " << 1000 * num_samples / ms << " calls per second\n"; - } - - std::cerr << stat << "\n"; -} - -int main() -try -{ - std::cerr << "pid " << getpid() << "\n"; - - size_t num_threads = 2; - ThreadPool pool(num_threads); - for (size_t i = 0; i < num_threads; ++i) - pool.scheduleOrThrowOnError([i]() { do_io(i); }); - pool.wait(); - - test_perf(); - return 0; -} -catch (...) 
-{ - std::cerr << getCurrentExceptionMessage(true); - return -1; -} - diff --git a/src/Interpreters/ya.make b/src/Interpreters/ya.make index 3eab077df86..105e1e11365 100644 --- a/src/Interpreters/ya.make +++ b/src/Interpreters/ya.make @@ -50,10 +50,11 @@ SRCS( EmbeddedDictionaries.cpp ExecuteScalarSubqueriesVisitor.cpp ExpressionActions.cpp + ExpressionActionsSettings.cpp ExpressionAnalyzer.cpp ExternalDictionariesLoader.cpp ExternalLoader.cpp - ExternalLoaderDatabaseConfigRepository.cpp + ExternalLoaderDictionaryStorageConfigRepository.cpp ExternalLoaderTempConfigRepository.cpp ExternalLoaderXMLConfigRepository.cpp ExternalModelsLoader.cpp @@ -102,6 +103,7 @@ SRCS( InterpreterSystemQuery.cpp InterpreterUseQuery.cpp InterpreterWatchQuery.cpp + InterserverCredentials.cpp JoinSwitcher.cpp JoinToSubqueryTransformVisitor.cpp JoinedTables.cpp @@ -116,6 +118,7 @@ SRCS( OpenTelemetrySpanLog.cpp OptimizeIfChains.cpp OptimizeIfWithConstantConditionVisitor.cpp + OptimizeShardingKeyRewriteInVisitor.cpp PartLog.cpp PredicateExpressionsOptimizer.cpp PredicateRewriteVisitor.cpp diff --git a/src/Interpreters/ya.make.in b/src/Interpreters/ya.make.in index 6be5a5f2db7..4481f8c6136 100644 --- a/src/Interpreters/ya.make.in +++ b/src/Interpreters/ya.make.in @@ -17,7 +17,7 @@ NO_COMPILER_WARNINGS() SRCS( - + ) END() diff --git a/src/Parsers/ASTAlterQuery.cpp b/src/Parsers/ASTAlterQuery.cpp index f24b26d5b54..5b052bae856 100644 --- a/src/Parsers/ASTAlterQuery.cpp +++ b/src/Parsers/ASTAlterQuery.cpp @@ -245,7 +245,7 @@ void ASTAlterCommand::formatImpl( else if (type == ASTAlterCommand::FETCH_PARTITION) { settings.ostr << (settings.hilite ? hilite_keyword : "") << indent_str << "FETCH " - << "PARTITION " << (settings.hilite ? hilite_none : ""); + << (part ? "PART " : "PARTITION ") << (settings.hilite ? hilite_none : ""); partition->formatImpl(settings, state, frame); settings.ostr << (settings.hilite ? hilite_keyword : "") << " FROM " << (settings.hilite ? hilite_none : "") << DB::quote << from; @@ -271,6 +271,27 @@ void ASTAlterCommand::formatImpl( << " " << DB::quote << with_name; } } + else if (type == ASTAlterCommand::UNFREEZE_PARTITION) + { + settings.ostr << (settings.hilite ? hilite_keyword : "") << indent_str << "UNFREEZE PARTITION " << (settings.hilite ? hilite_none : ""); + partition->formatImpl(settings, state, frame); + + if (!with_name.empty()) + { + settings.ostr << " " << (settings.hilite ? hilite_keyword : "") << "WITH NAME" << (settings.hilite ? hilite_none : "") + << " " << DB::quote << with_name; + } + } + else if (type == ASTAlterCommand::UNFREEZE_ALL) + { + settings.ostr << (settings.hilite ? hilite_keyword : "") << indent_str << "UNFREEZE"; + + if (!with_name.empty()) + { + settings.ostr << " " << (settings.hilite ? hilite_keyword : "") << "WITH NAME" << (settings.hilite ? hilite_none : "") + << " " << DB::quote << with_name; + } + } else if (type == ASTAlterCommand::DELETE) { settings.ostr << (settings.hilite ? hilite_keyword : "") << indent_str << "DELETE" << (settings.hilite ? 
hilite_none : ""); @@ -368,7 +389,8 @@ bool ASTAlterQuery::isSettingsAlter() const bool ASTAlterQuery::isFreezeAlter() const { - return isOneCommandTypeOnly(ASTAlterCommand::FREEZE_PARTITION) || isOneCommandTypeOnly(ASTAlterCommand::FREEZE_ALL); + return isOneCommandTypeOnly(ASTAlterCommand::FREEZE_PARTITION) || isOneCommandTypeOnly(ASTAlterCommand::FREEZE_ALL) + || isOneCommandTypeOnly(ASTAlterCommand::UNFREEZE_PARTITION) || isOneCommandTypeOnly(ASTAlterCommand::UNFREEZE_ALL); } /** Get the text that identifies this element. */ diff --git a/src/Parsers/ASTAlterQuery.h b/src/Parsers/ASTAlterQuery.h index 4cc01aa889e..8a2bfdb1960 100644 --- a/src/Parsers/ASTAlterQuery.h +++ b/src/Parsers/ASTAlterQuery.h @@ -54,6 +54,8 @@ public: FETCH_PARTITION, FREEZE_PARTITION, FREEZE_ALL, + UNFREEZE_PARTITION, + UNFREEZE_ALL, DELETE, UPDATE, @@ -153,7 +155,9 @@ public: */ String from; - /** For FREEZE PARTITION - place local backup to directory with specified name. + /** + * For FREEZE PARTITION - place local backup to directory with specified name. + * For UNFREEZE - delete local backup at directory with specified name. */ String with_name; diff --git a/src/Parsers/ASTCreateQuery.cpp b/src/Parsers/ASTCreateQuery.cpp index 2af0d2d4a45..1192fcc6ebd 100644 --- a/src/Parsers/ASTCreateQuery.cpp +++ b/src/Parsers/ASTCreateQuery.cpp @@ -297,12 +297,20 @@ void ASTCreateQuery::formatQueryImpl(const FormatSettings & settings, FormatStat if (to_table_id) { + assert(is_materialized_view && to_inner_uuid == UUIDHelpers::Nil); settings.ostr << (settings.hilite ? hilite_keyword : "") << " TO " << (settings.hilite ? hilite_none : "") << (!to_table_id.database_name.empty() ? backQuoteIfNeed(to_table_id.database_name) + "." : "") << backQuoteIfNeed(to_table_id.table_name); } + if (to_inner_uuid != UUIDHelpers::Nil) + { + assert(is_materialized_view && !to_table_id); + settings.ostr << (settings.hilite ? hilite_keyword : "") << " TO INNER UUID " << (settings.hilite ? hilite_none : "") + << quoteString(toString(to_inner_uuid)); + } + if (!as_table.empty()) { settings.ostr diff --git a/src/Parsers/ASTCreateQuery.h b/src/Parsers/ASTCreateQuery.h index c9a6251cb94..d6d5c22240c 100644 --- a/src/Parsers/ASTCreateQuery.h +++ b/src/Parsers/ASTCreateQuery.h @@ -66,6 +66,7 @@ public: ASTExpressionList * tables = nullptr; StorageID to_table_id = StorageID::createEmpty(); /// For CREATE MATERIALIZED VIEW mv TO table. 
+ UUID to_inner_uuid = UUIDHelpers::Nil; /// For materialized view with inner table ASTStorage * storage = nullptr; String as_database; String as_table; diff --git a/src/Parsers/ASTCreateQuotaQuery.cpp b/src/Parsers/ASTCreateQuotaQuery.cpp index 7e570b889e3..18f72d61319 100644 --- a/src/Parsers/ASTCreateQuotaQuery.cpp +++ b/src/Parsers/ASTCreateQuotaQuery.cpp @@ -185,10 +185,10 @@ void ASTCreateQuotaQuery::formatImpl(const FormatSettings & settings, FormatStat } -void ASTCreateQuotaQuery::replaceCurrentUserTagWithName(const String & current_user_name) const +void ASTCreateQuotaQuery::replaceCurrentUserTag(const String & current_user_name) const { if (roles) - roles->replaceCurrentUserTagWithName(current_user_name); + roles->replaceCurrentUserTag(current_user_name); } } diff --git a/src/Parsers/ASTCreateQuotaQuery.h b/src/Parsers/ASTCreateQuotaQuery.h index a1269afafa6..00984d4b4c9 100644 --- a/src/Parsers/ASTCreateQuotaQuery.h +++ b/src/Parsers/ASTCreateQuotaQuery.h @@ -56,7 +56,7 @@ public: String getID(char) const override; ASTPtr clone() const override; void formatImpl(const FormatSettings & settings, FormatState &, FormatStateStacked) const override; - void replaceCurrentUserTagWithName(const String & current_user_name) const; + void replaceCurrentUserTag(const String & current_user_name) const; ASTPtr getRewrittenASTWithoutOnCluster(const std::string &) const override { return removeOnCluster(clone()); } }; } diff --git a/src/Parsers/ASTCreateRowPolicyQuery.cpp b/src/Parsers/ASTCreateRowPolicyQuery.cpp index 30b001feeca..3b4c2484acf 100644 --- a/src/Parsers/ASTCreateRowPolicyQuery.cpp +++ b/src/Parsers/ASTCreateRowPolicyQuery.cpp @@ -169,15 +169,15 @@ void ASTCreateRowPolicyQuery::formatImpl(const FormatSettings & settings, Format } -void ASTCreateRowPolicyQuery::replaceCurrentUserTagWithName(const String & current_user_name) const +void ASTCreateRowPolicyQuery::replaceCurrentUserTag(const String & current_user_name) const { if (roles) - roles->replaceCurrentUserTagWithName(current_user_name); + roles->replaceCurrentUserTag(current_user_name); } -void ASTCreateRowPolicyQuery::replaceEmptyDatabaseWithCurrent(const String & current_database) const +void ASTCreateRowPolicyQuery::replaceEmptyDatabase(const String & current_database) const { if (names) - names->replaceEmptyDatabaseWithCurrent(current_database); + names->replaceEmptyDatabase(current_database); } } diff --git a/src/Parsers/ASTCreateRowPolicyQuery.h b/src/Parsers/ASTCreateRowPolicyQuery.h index 9d0e2fcce7b..46a7578726e 100644 --- a/src/Parsers/ASTCreateRowPolicyQuery.h +++ b/src/Parsers/ASTCreateRowPolicyQuery.h @@ -49,7 +49,7 @@ public: void formatImpl(const FormatSettings & settings, FormatState &, FormatStateStacked) const override; ASTPtr getRewrittenASTWithoutOnCluster(const std::string &) const override { return removeOnCluster(clone()); } - void replaceCurrentUserTagWithName(const String & current_user_name) const; - void replaceEmptyDatabaseWithCurrent(const String & current_database) const; + void replaceCurrentUserTag(const String & current_user_name) const; + void replaceEmptyDatabase(const String & current_database) const; }; } diff --git a/src/Parsers/ASTCreateSettingsProfileQuery.cpp b/src/Parsers/ASTCreateSettingsProfileQuery.cpp index 84f8309462e..e99c40ca681 100644 --- a/src/Parsers/ASTCreateSettingsProfileQuery.cpp +++ b/src/Parsers/ASTCreateSettingsProfileQuery.cpp @@ -86,9 +86,9 @@ void ASTCreateSettingsProfileQuery::formatImpl(const FormatSettings & format, Fo } -void 
ASTCreateSettingsProfileQuery::replaceCurrentUserTagWithName(const String & current_user_name) const +void ASTCreateSettingsProfileQuery::replaceCurrentUserTag(const String & current_user_name) const { if (to_roles) - to_roles->replaceCurrentUserTagWithName(current_user_name); + to_roles->replaceCurrentUserTag(current_user_name); } } diff --git a/src/Parsers/ASTCreateSettingsProfileQuery.h b/src/Parsers/ASTCreateSettingsProfileQuery.h index 119019093b2..df0a11456bc 100644 --- a/src/Parsers/ASTCreateSettingsProfileQuery.h +++ b/src/Parsers/ASTCreateSettingsProfileQuery.h @@ -39,7 +39,7 @@ public: String getID(char) const override; ASTPtr clone() const override; void formatImpl(const FormatSettings & format, FormatState &, FormatStateStacked) const override; - void replaceCurrentUserTagWithName(const String & current_user_name) const; + void replaceCurrentUserTag(const String & current_user_name) const; ASTPtr getRewrittenASTWithoutOnCluster(const std::string &) const override { return removeOnCluster(clone()); } }; } diff --git a/src/Parsers/ASTCreateUserQuery.cpp b/src/Parsers/ASTCreateUserQuery.cpp index e2e477fa622..696b88ea9c1 100644 --- a/src/Parsers/ASTCreateUserQuery.cpp +++ b/src/Parsers/ASTCreateUserQuery.cpp @@ -203,6 +203,13 @@ namespace format.ostr << (format.hilite ? IAST::hilite_keyword : "") << " SETTINGS " << (format.hilite ? IAST::hilite_none : ""); settings.format(format); } + + + void formatGrantees(const ASTRolesOrUsersSet & grantees, const IAST::FormatSettings & settings) + { + settings.ostr << (settings.hilite ? IAST::hilite_keyword : "") << " GRANTEES " << (settings.hilite ? IAST::hilite_none : ""); + grantees.format(settings); + } } @@ -260,5 +267,8 @@ void ASTCreateUserQuery::formatImpl(const FormatSettings & format, FormatState & if (settings && (!settings->empty() || alter)) formatSettings(*settings, format); + + if (grantees) + formatGrantees(*grantees, format); } } diff --git a/src/Parsers/ASTCreateUserQuery.h b/src/Parsers/ASTCreateUserQuery.h index 7acfd87909a..1612c213f34 100644 --- a/src/Parsers/ASTCreateUserQuery.h +++ b/src/Parsers/ASTCreateUserQuery.h @@ -17,13 +17,15 @@ class ASTSettingsProfileElements; * [HOST {LOCAL | NAME 'name' | REGEXP 'name_regexp' | IP 'address' | LIKE 'pattern'} [,...] | ANY | NONE] * [DEFAULT ROLE role [,...]] * [SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [READONLY|WRITABLE] | PROFILE 'profile_name'] [,...] + * [GRANTEES {user | role | ANY | NONE} [,...] [EXCEPT {user | role} [,...]]] * * ALTER USER [IF EXISTS] name - * [RENAME TO new_name] - * [NOT IDENTIFIED | IDENTIFIED {[WITH {no_password|plaintext_password|sha256_password|sha256_hash|double_sha1_password|double_sha1_hash}] BY {'password'|'hash'}}|{WITH ldap SERVER 'server_name'}|{WITH kerberos [REALM 'realm']}] - * [[ADD|DROP] HOST {LOCAL | NAME 'name' | REGEXP 'name_regexp' | IP 'address' | LIKE 'pattern'} [,...] | ANY | NONE] - * [DEFAULT ROLE role [,...] | ALL | ALL EXCEPT role [,...] ] - * [SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [READONLY|WRITABLE] | PROFILE 'profile_name'] [,...] + * [RENAME TO new_name] + * [NOT IDENTIFIED | IDENTIFIED {[WITH {no_password|plaintext_password|sha256_password|sha256_hash|double_sha1_password|double_sha1_hash}] BY {'password'|'hash'}}|{WITH ldap SERVER 'server_name'}|{WITH kerberos [REALM 'realm']}] + * [[ADD|DROP] HOST {LOCAL | NAME 'name' | REGEXP 'name_regexp' | IP 'address' | LIKE 'pattern'} [,...] | ANY | NONE] + * [DEFAULT ROLE role [,...] | ALL | ALL EXCEPT role [,...] 
] + * [SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [READONLY|WRITABLE] | PROFILE 'profile_name'] [,...] + * [GRANTEES {user | role | ANY | NONE} [,...] [EXCEPT {user | role} [,...]]] */ class ASTCreateUserQuery : public IAST, public ASTQueryWithOnCluster { @@ -46,8 +48,8 @@ public: std::optional remove_hosts; std::shared_ptr default_roles; - std::shared_ptr settings; + std::shared_ptr grantees; String getID(char) const override; ASTPtr clone() const override; diff --git a/src/Parsers/ASTDropAccessEntityQuery.cpp b/src/Parsers/ASTDropAccessEntityQuery.cpp index 1df176c24ec..6c19c9f8af3 100644 --- a/src/Parsers/ASTDropAccessEntityQuery.cpp +++ b/src/Parsers/ASTDropAccessEntityQuery.cpp @@ -54,9 +54,9 @@ void ASTDropAccessEntityQuery::formatImpl(const FormatSettings & settings, Forma } -void ASTDropAccessEntityQuery::replaceEmptyDatabaseWithCurrent(const String & current_database) const +void ASTDropAccessEntityQuery::replaceEmptyDatabase(const String & current_database) const { if (row_policy_names) - row_policy_names->replaceEmptyDatabaseWithCurrent(current_database); + row_policy_names->replaceEmptyDatabase(current_database); } } diff --git a/src/Parsers/ASTDropAccessEntityQuery.h b/src/Parsers/ASTDropAccessEntityQuery.h index 76a5f450566..df78acef6f4 100644 --- a/src/Parsers/ASTDropAccessEntityQuery.h +++ b/src/Parsers/ASTDropAccessEntityQuery.h @@ -30,6 +30,6 @@ public: void formatImpl(const FormatSettings & settings, FormatState &, FormatStateStacked) const override; ASTPtr getRewrittenASTWithoutOnCluster(const std::string &) const override { return removeOnCluster(clone()); } - void replaceEmptyDatabaseWithCurrent(const String & current_database) const; + void replaceEmptyDatabase(const String & current_database) const; }; } diff --git a/src/Parsers/ASTFunction.cpp b/src/Parsers/ASTFunction.cpp index 806b8e6c5b9..a0662a47152 100644 --- a/src/Parsers/ASTFunction.cpp +++ b/src/Parsers/ASTFunction.cpp @@ -3,6 +3,7 @@ #include #include #include +#include #include #include #include @@ -15,8 +16,16 @@ namespace DB { +namespace ErrorCodes +{ + extern const int UNEXPECTED_EXPRESSION; +} + void ASTFunction::appendColumnNameImpl(WriteBuffer & ostr) const { + if (name == "view") + throw Exception("Table function view cannot be used as an expression", ErrorCodes::UNEXPECTED_EXPRESSION); + writeString(name, ostr); if (parameters) @@ -214,27 +223,77 @@ void ASTFunction::formatImplWithoutAlias(const FormatSettings & settings, Format for (const char ** func = operators; *func; func += 2) { - if (0 == strcmp(name.c_str(), func[0])) + if (strcmp(name.c_str(), func[0]) != 0) { - if (frame.need_parens) - settings.ostr << '('; - - settings.ostr << (settings.hilite ? hilite_operator : "") << func[1] << (settings.hilite ? hilite_none : ""); - - /** A particularly stupid case. If we have a unary minus before a literal that is a negative number - * "-(-1)" or "- -1", this can not be formatted as `--1`, since this will be interpreted as a comment. - * Instead, add a space. - * PS. You can not just ask to add parentheses - see formatImpl for ASTLiteral. - */ - if (name == "negate" && arguments->children[0]->as()) - settings.ostr << ' '; - - arguments->formatImpl(settings, state, nested_need_parens); - written = true; - - if (frame.need_parens) - settings.ostr << ')'; + continue; } + + const auto * literal = arguments->children[0]->as(); + /* A particularly stupid case. 
If we have a unary minus before + * a literal that is a negative number "-(-1)" or "- -1", this + * cannot be formatted as `--1`, since this will be + * interpreted as a comment. Instead, negate the literal + * in place. Another possible solution is to use parentheses, + * but the old comment said it is impossible, without mentioning + * the reason. We should also negate the nonnegative literals, + * for symmetry. We print the negated value without parentheses, + * because they are not needed around a single literal. Also we + * use formatting from FieldVisitorToString, so that the type is + * preserved (e.g. -0. is printed with a trailing period). + */ + if (literal && name == "negate") + { + written = applyVisitor( + [&settings](const auto & value) + // -INT_MAX is negated to -INT_MAX by the negate() + // function, so we can implement this behavior here as + // well. Technically it is UB to perform such negation + // w/o a cast to unsigned type. + NO_SANITIZE_UNDEFINED + { + using ValueType = std::decay_t<decltype(value)>; + if constexpr (isDecimalField<ValueType>()) + { + // The parser doesn't create decimal literals, but + // they can be produced by constant folding or the + // fuzzer. Decimals are always signed, so no need + // to deduce the result type like we do for ints. + const auto int_value = value.getValue().value; + settings.ostr << FieldVisitorToString{}(ValueType{ + -int_value, + value.getScale()}); + } + else if constexpr (std::is_arithmetic_v<ValueType>) + { + using ResultType = typename NumberTraits::ResultOfNegate<ValueType>::Type; + settings.ostr << FieldVisitorToString{}( + -static_cast<ResultType>(value)); + return true; + } + + return false; + }, + literal->value); + + if (written) + { + break; + } + } + + // We don't need parentheses around a single literal. + if (!literal && frame.need_parens) + settings.ostr << '('; + + settings.ostr << (settings.hilite ? hilite_operator : "") << func[1] << (settings.hilite ? hilite_none : ""); + + arguments->formatImpl(settings, state, nested_need_parens); + written = true; + + if (!literal && frame.need_parens) + settings.ostr << ')'; + + break; + } } @@ -324,10 +383,40 @@ void ASTFunction::formatImplWithoutAlias(const FormatSettings & settings, Format if (!written && 0 == strcmp(name.c_str(), "tupleElement")) { - /// It can be printed in a form of 'x.1' only if right hand side is unsigned integer literal. - if (const auto * lit = arguments->children[1]->as<ASTLiteral>()) + // the fuzzer may sometimes insert tupleElement() created from ASTLiteral: + // + // Function_tupleElement, 0xx + // -ExpressionList_, 0xx + // --Literal_Int64_255, 0xx + // --Literal_Int64_100, 0xx + // + // And in this case it will be printed as "255.100", which + // later will be parsed as float, and formatting will be + // inconsistent. + // + // So instead of printing it as a regular tuple, + // let's print it as an ExpressionList instead (i.e. with ", " delimiter). + bool tuple_arguments_valid = true; + const auto * lit_left = arguments->children[0]->as<ASTLiteral>(); + const auto * lit_right = arguments->children[1]->as<ASTLiteral>(); + + if (lit_left) { - if (lit->value.getType() == Field::Types::UInt64) + Field::Types::Which type = lit_left->value.getType(); + if (type != Field::Types::Tuple && type != Field::Types::Array) + { + tuple_arguments_valid = false; + } + } + + // It can be printed in a form of 'x.1' only if the right hand side + // is an unsigned integer literal. We also allow nonnegative + // signed integer literals, because the fuzzer sometimes inserts + // them, and we want to have consistent formatting.
+ if (tuple_arguments_valid && lit_right) + { + if (isInt64FieldType(lit_right->value.getType()) + && lit_right->value.get() >= 0) { if (frame.need_parens) settings.ostr << '('; @@ -425,14 +514,14 @@ void ASTFunction::formatImplWithoutAlias(const FormatSettings & settings, Format if (!written && 0 == strcmp(name.c_str(), "map")) { - settings.ostr << (settings.hilite ? hilite_operator : "") << '{' << (settings.hilite ? hilite_none : ""); + settings.ostr << (settings.hilite ? hilite_operator : "") << "map(" << (settings.hilite ? hilite_none : ""); for (size_t i = 0; i < arguments->children.size(); ++i) { if (i != 0) settings.ostr << ", "; arguments->children[i]->formatImpl(settings, state, nested_dont_need_parens); } - settings.ostr << (settings.hilite ? hilite_operator : "") << '}' << (settings.hilite ? hilite_none : ""); + settings.ostr << (settings.hilite ? hilite_operator : "") << ')' << (settings.hilite ? hilite_none : ""); written = true; } } diff --git a/src/Parsers/ASTFunctionWithKeyValueArguments.h b/src/Parsers/ASTFunctionWithKeyValueArguments.h index 88ab712cc04..f5eaa33bfc7 100644 --- a/src/Parsers/ASTFunctionWithKeyValueArguments.h +++ b/src/Parsers/ASTFunctionWithKeyValueArguments.h @@ -20,7 +20,7 @@ public: bool second_with_brackets; public: - ASTPair(bool second_with_brackets_) + explicit ASTPair(bool second_with_brackets_) : second_with_brackets(second_with_brackets_) { } @@ -49,7 +49,7 @@ public: /// Has brackets around arguments bool has_brackets; - ASTFunctionWithKeyValueArguments(bool has_brackets_ = true) + explicit ASTFunctionWithKeyValueArguments(bool has_brackets_ = true) : has_brackets(has_brackets_) { } diff --git a/src/Parsers/ASTGrantQuery.cpp b/src/Parsers/ASTGrantQuery.cpp index 2610836c759..aca53868226 100644 --- a/src/Parsers/ASTGrantQuery.cpp +++ b/src/Parsers/ASTGrantQuery.cpp @@ -27,7 +27,26 @@ namespace } - void formatAccessRightsElements(const AccessRightsElements & elements, const IAST::FormatSettings & settings) + void formatONClause(const String & database, bool any_database, const String & table, bool any_table, const IAST::FormatSettings & settings) + { + settings.ostr << (settings.hilite ? IAST::hilite_keyword : "") << "ON " << (settings.hilite ? IAST::hilite_none : ""); + if (any_database) + { + settings.ostr << "*.*"; + } + else + { + if (!database.empty()) + settings.ostr << backQuoteIfNeed(database) << "."; + if (any_table) + settings.ostr << "*"; + else + settings.ostr << backQuoteIfNeed(table); + } + } + + + void formatElementsWithoutOptions(const AccessRightsElements & elements, const IAST::FormatSettings & settings) { bool no_output = true; for (size_t i = 0; i != elements.size(); ++i) @@ -58,31 +77,14 @@ namespace if (!next_element_on_same_db_and_table) { - settings.ostr << (settings.hilite ? IAST::hilite_keyword : "") << " ON " << (settings.hilite ? IAST::hilite_none : ""); - if (element.any_database) - settings.ostr << "*."; - else if (!element.database.empty()) - settings.ostr << backQuoteIfNeed(element.database) + "."; - - if (element.any_table) - settings.ostr << "*"; - else - settings.ostr << backQuoteIfNeed(element.table); + settings.ostr << " "; + formatONClause(element.database, element.any_database, element.table, element.any_table, settings); } } if (no_output) settings.ostr << (settings.hilite ? IAST::hilite_keyword : "") << "USAGE ON " << (settings.hilite ? 
IAST::hilite_none : "") << "*.*"; } - - - void formatToRoles(const ASTRolesOrUsersSet & to_roles, ASTGrantQuery::Kind kind, const IAST::FormatSettings & settings) - { - using Kind = ASTGrantQuery::Kind; - settings.ostr << (settings.hilite ? IAST::hilite_keyword : "") << ((kind == Kind::GRANT) ? " TO " : " FROM ") - << (settings.hilite ? IAST::hilite_none : ""); - to_roles.format(settings); - } } @@ -100,12 +102,18 @@ ASTPtr ASTGrantQuery::clone() const void ASTGrantQuery::formatImpl(const FormatSettings & settings, FormatState &, FormatStateStacked) const { - settings.ostr << (settings.hilite ? IAST::hilite_keyword : "") << (attach ? "ATTACH " : "") << ((kind == Kind::GRANT) ? "GRANT" : "REVOKE") + settings.ostr << (settings.hilite ? IAST::hilite_keyword : "") << (attach_mode ? "ATTACH " : "") << (is_revoke ? "REVOKE" : "GRANT") << (settings.hilite ? IAST::hilite_none : ""); + if (!access_rights_elements.sameOptions()) + throw Exception("Elements of an ASTGrantQuery are expected to have the same options", ErrorCodes::LOGICAL_ERROR); + if (!access_rights_elements.empty() && access_rights_elements[0].is_partial_revoke && !is_revoke) + throw Exception("A partial revoke should be revoked, not granted", ErrorCodes::LOGICAL_ERROR); + bool grant_option = !access_rights_elements.empty() && access_rights_elements[0].grant_option; + formatOnCluster(settings); - if (kind == Kind::REVOKE) + if (is_revoke) { if (grant_option) settings.ostr << (settings.hilite ? hilite_keyword : "") << " GRANT OPTION FOR" << (settings.hilite ? hilite_none : ""); @@ -113,18 +121,21 @@ void ASTGrantQuery::formatImpl(const FormatSettings & settings, FormatState &, F settings.ostr << (settings.hilite ? hilite_keyword : "") << " ADMIN OPTION FOR" << (settings.hilite ? hilite_none : ""); } - if (roles && !access_rights_elements.empty()) - throw Exception("Either roles or access rights elements should be set", ErrorCodes::LOGICAL_ERROR); - settings.ostr << " "; if (roles) + { roles->format(settings); + if (!access_rights_elements.empty()) + throw Exception("ASTGrantQuery can contain either roles or access rights elements to grant or revoke, not both of them", ErrorCodes::LOGICAL_ERROR); + } else - formatAccessRightsElements(access_rights_elements, settings); + formatElementsWithoutOptions(access_rights_elements, settings); - formatToRoles(*to_roles, kind, settings); + settings.ostr << (settings.hilite ? IAST::hilite_keyword : "") << (is_revoke ? " FROM " : " TO ") + << (settings.hilite ? IAST::hilite_none : ""); + grantees->format(settings); - if (kind == Kind::GRANT) + if (!is_revoke) { if (grant_option) settings.ostr << (settings.hilite ? hilite_keyword : "") << " WITH GRANT OPTION" << (settings.hilite ? 
hilite_none : ""); @@ -134,16 +145,16 @@ void ASTGrantQuery::formatImpl(const FormatSettings & settings, FormatState &, F } -void ASTGrantQuery::replaceEmptyDatabaseWithCurrent(const String & current_database) +void ASTGrantQuery::replaceEmptyDatabase(const String & current_database) { access_rights_elements.replaceEmptyDatabase(current_database); } -void ASTGrantQuery::replaceCurrentUserTagWithName(const String & current_user_name) const +void ASTGrantQuery::replaceCurrentUserTag(const String & current_user_name) const { - if (to_roles) - to_roles->replaceCurrentUserTagWithName(current_user_name); + if (grantees) + grantees->replaceCurrentUserTag(current_user_name); } } diff --git a/src/Parsers/ASTGrantQuery.h b/src/Parsers/ASTGrantQuery.h index c36e42689a5..833c4db8ec6 100644 --- a/src/Parsers/ASTGrantQuery.h +++ b/src/Parsers/ASTGrantQuery.h @@ -19,20 +19,18 @@ class ASTRolesOrUsersSet; class ASTGrantQuery : public IAST, public ASTQueryWithOnCluster { public: - using Kind = AccessRightsElementWithOptions::Kind; - Kind kind = Kind::GRANT; - bool attach = false; + bool attach_mode = false; + bool is_revoke = false; AccessRightsElements access_rights_elements; std::shared_ptr roles; - std::shared_ptr to_roles; - bool grant_option = false; bool admin_option = false; + std::shared_ptr grantees; String getID(char) const override; ASTPtr clone() const override; void formatImpl(const FormatSettings & settings, FormatState &, FormatStateStacked) const override; - void replaceEmptyDatabaseWithCurrent(const String & current_database); - void replaceCurrentUserTagWithName(const String & current_user_name) const; + void replaceEmptyDatabase(const String & current_database); + void replaceCurrentUserTag(const String & current_user_name) const; ASTPtr getRewrittenASTWithoutOnCluster(const std::string &) const override { return removeOnCluster(clone()); } }; } diff --git a/src/Parsers/ASTRolesOrUsersSet.cpp b/src/Parsers/ASTRolesOrUsersSet.cpp index 1e7cd79f527..fc5385e4a58 100644 --- a/src/Parsers/ASTRolesOrUsersSet.cpp +++ b/src/Parsers/ASTRolesOrUsersSet.cpp @@ -7,7 +7,7 @@ namespace DB { namespace { - void formatRoleNameOrID(const String & str, bool is_id, const IAST::FormatSettings & settings) + void formatNameOrID(const String & str, bool is_id, const IAST::FormatSettings & settings) { if (is_id) { @@ -30,19 +30,21 @@ void ASTRolesOrUsersSet::formatImpl(const FormatSettings & settings, FormatState } bool need_comma = false; + if (all) { if (std::exchange(need_comma, true)) settings.ostr << ", "; - settings.ostr << (settings.hilite ? IAST::hilite_keyword : "") << "ALL" << (settings.hilite ? IAST::hilite_none : ""); + settings.ostr << (settings.hilite ? IAST::hilite_keyword : "") << (use_keyword_any ? "ANY" : "ALL") + << (settings.hilite ? IAST::hilite_none : ""); } else { - for (const auto & role : names) + for (const auto & name : names) { if (std::exchange(need_comma, true)) settings.ostr << ", "; - formatRoleNameOrID(role, id_mode, settings); + formatNameOrID(name, id_mode, settings); } if (current_user) @@ -58,11 +60,11 @@ void ASTRolesOrUsersSet::formatImpl(const FormatSettings & settings, FormatState settings.ostr << (settings.hilite ? IAST::hilite_keyword : "") << " EXCEPT " << (settings.hilite ? 
IAST::hilite_none : ""); need_comma = false; - for (const auto & except_role : except_names) + for (const auto & name : except_names) { if (std::exchange(need_comma, true)) settings.ostr << ", "; - formatRoleNameOrID(except_role, id_mode, settings); + formatNameOrID(name, id_mode, settings); } if (except_current_user) @@ -75,7 +77,7 @@ void ASTRolesOrUsersSet::formatImpl(const FormatSettings & settings, FormatState } -void ASTRolesOrUsersSet::replaceCurrentUserTagWithName(const String & current_user_name) +void ASTRolesOrUsersSet::replaceCurrentUserTag(const String & current_user_name) { if (current_user) { diff --git a/src/Parsers/ASTRolesOrUsersSet.h b/src/Parsers/ASTRolesOrUsersSet.h index f18aa0bdd73..15d42ee39a0 100644 --- a/src/Parsers/ASTRolesOrUsersSet.h +++ b/src/Parsers/ASTRolesOrUsersSet.h @@ -9,22 +9,24 @@ namespace DB using Strings = std::vector; /// Represents a set of users/roles like -/// {user_name | role_name | CURRENT_USER} [,...] | NONE | ALL | ALL EXCEPT {user_name | role_name | CURRENT_USER} [,...] +/// {user_name | role_name | CURRENT_USER | ALL | NONE} [,...] +/// [EXCEPT {user_name | role_name | CURRENT_USER | ALL | NONE} [,...]] class ASTRolesOrUsersSet : public IAST { public: + bool all = false; Strings names; bool current_user = false; - bool all = false; Strings except_names; bool except_current_user = false; - bool id_mode = false; /// true if `names` and `except_names` keep UUIDs, not names. - bool allow_role_names = true; /// true if this set can contain names of roles. - bool allow_user_names = true; /// true if this set can contain names of users. + bool allow_users = true; /// whether this set can contain names of users + bool allow_roles = true; /// whether this set can contain names of roles + bool id_mode = false; /// whether this set keep UUIDs instead of names + bool use_keyword_any = false; /// whether the keyword ANY should be used instead of the keyword ALL bool empty() const { return names.empty() && !current_user && !all; } - void replaceCurrentUserTagWithName(const String & current_user_name); + void replaceCurrentUserTag(const String & current_user_name); String getID(char) const override { return "RolesOrUsersSet"; } ASTPtr clone() const override { return std::make_shared(*this); } diff --git a/src/Parsers/ASTRowPolicyName.cpp b/src/Parsers/ASTRowPolicyName.cpp index 3d1ac5621db..0b69c1a46b3 100644 --- a/src/Parsers/ASTRowPolicyName.cpp +++ b/src/Parsers/ASTRowPolicyName.cpp @@ -23,7 +23,7 @@ void ASTRowPolicyName::formatImpl(const FormatSettings & settings, FormatState & } -void ASTRowPolicyName::replaceEmptyDatabaseWithCurrent(const String & current_database) +void ASTRowPolicyName::replaceEmptyDatabase(const String & current_database) { if (name_parts.database.empty()) name_parts.database = current_database; @@ -125,7 +125,7 @@ Strings ASTRowPolicyNames::toStrings() const } -void ASTRowPolicyNames::replaceEmptyDatabaseWithCurrent(const String & current_database) +void ASTRowPolicyNames::replaceEmptyDatabase(const String & current_database) { for (auto & np : name_parts) if (np.database.empty()) diff --git a/src/Parsers/ASTRowPolicyName.h b/src/Parsers/ASTRowPolicyName.h index ac2f84f5d8b..b195596225b 100644 --- a/src/Parsers/ASTRowPolicyName.h +++ b/src/Parsers/ASTRowPolicyName.h @@ -22,7 +22,7 @@ public: void formatImpl(const FormatSettings & settings, FormatState &, FormatStateStacked) const override; ASTPtr getRewrittenASTWithoutOnCluster(const std::string &) const override { return removeOnCluster(clone()); } - void 
replaceEmptyDatabaseWithCurrent(const String & current_database); + void replaceEmptyDatabase(const String & current_database); }; @@ -44,6 +44,6 @@ public: void formatImpl(const FormatSettings & settings, FormatState &, FormatStateStacked) const override; ASTPtr getRewrittenASTWithoutOnCluster(const std::string &) const override { return removeOnCluster(clone()); } - void replaceEmptyDatabaseWithCurrent(const String & current_database); + void replaceEmptyDatabase(const String & current_database); }; } diff --git a/src/Parsers/ASTSelectQuery.cpp b/src/Parsers/ASTSelectQuery.cpp index aa5508bf190..4715c7f201b 100644 --- a/src/Parsers/ASTSelectQuery.cpp +++ b/src/Parsers/ASTSelectQuery.cpp @@ -137,8 +137,8 @@ void ASTSelectQuery::formatImpl(const FormatSettings & s, FormatState & state, F if (window()) { s.ostr << (s.hilite ? hilite_keyword : "") << s.nl_or_ws << indent_str << - "WINDOW " << (s.hilite ? hilite_none : ""); - window()->formatImpl(s, state, frame); + "WINDOW" << (s.hilite ? hilite_none : ""); + window()->as().formatImplMultiline(s, state, frame); } if (orderBy()) diff --git a/src/Parsers/ASTShowAccessEntitiesQuery.cpp b/src/Parsers/ASTShowAccessEntitiesQuery.cpp index bacde098640..6dd53fd5cde 100644 --- a/src/Parsers/ASTShowAccessEntitiesQuery.cpp +++ b/src/Parsers/ASTShowAccessEntitiesQuery.cpp @@ -43,7 +43,7 @@ void ASTShowAccessEntitiesQuery::formatQueryImpl(const FormatSettings & settings } -void ASTShowAccessEntitiesQuery::replaceEmptyDatabaseWithCurrent(const String & current_database) +void ASTShowAccessEntitiesQuery::replaceEmptyDatabase(const String & current_database) { if (database_and_table_name) { diff --git a/src/Parsers/ASTShowAccessEntitiesQuery.h b/src/Parsers/ASTShowAccessEntitiesQuery.h index 7ccd76bfe5e..2be1e0b92f0 100644 --- a/src/Parsers/ASTShowAccessEntitiesQuery.h +++ b/src/Parsers/ASTShowAccessEntitiesQuery.h @@ -31,7 +31,7 @@ public: String getID(char) const override; ASTPtr clone() const override { return std::make_shared(*this); } - void replaceEmptyDatabaseWithCurrent(const String & current_database); + void replaceEmptyDatabase(const String & current_database); protected: void formatQueryImpl(const FormatSettings & settings, FormatState &, FormatStateStacked) const override; diff --git a/src/Parsers/ASTShowCreateAccessEntityQuery.cpp b/src/Parsers/ASTShowCreateAccessEntityQuery.cpp index f870c98071c..5ff51a47002 100644 --- a/src/Parsers/ASTShowCreateAccessEntityQuery.cpp +++ b/src/Parsers/ASTShowCreateAccessEntityQuery.cpp @@ -72,10 +72,10 @@ void ASTShowCreateAccessEntityQuery::formatQueryImpl(const FormatSettings & sett } -void ASTShowCreateAccessEntityQuery::replaceEmptyDatabaseWithCurrent(const String & current_database) +void ASTShowCreateAccessEntityQuery::replaceEmptyDatabase(const String & current_database) { if (row_policy_names) - row_policy_names->replaceEmptyDatabaseWithCurrent(current_database); + row_policy_names->replaceEmptyDatabase(current_database); if (database_and_table_name) { diff --git a/src/Parsers/ASTShowCreateAccessEntityQuery.h b/src/Parsers/ASTShowCreateAccessEntityQuery.h index 10c4c0ca511..e20bb4f022e 100644 --- a/src/Parsers/ASTShowCreateAccessEntityQuery.h +++ b/src/Parsers/ASTShowCreateAccessEntityQuery.h @@ -40,7 +40,7 @@ public: String getID(char) const override; ASTPtr clone() const override; - void replaceEmptyDatabaseWithCurrent(const String & current_database); + void replaceEmptyDatabase(const String & current_database); protected: String getKeyword() const; diff --git a/src/Parsers/ASTSystemQuery.cpp 
b/src/Parsers/ASTSystemQuery.cpp index f3a43d7f3fd..bf94df0bf50 100644 --- a/src/Parsers/ASTSystemQuery.cpp +++ b/src/Parsers/ASTSystemQuery.cpp @@ -30,6 +30,8 @@ const char * ASTSystemQuery::typeToString(Type type) return "DROP MARK CACHE"; case Type::DROP_UNCOMPRESSED_CACHE: return "DROP UNCOMPRESSED CACHE"; + case Type::DROP_MMAP_CACHE: + return "DROP MMAP CACHE"; #if USE_EMBEDDED_COMPILER case Type::DROP_COMPILED_EXPRESSION_CACHE: return "DROP COMPILED EXPRESSION CACHE"; @@ -52,6 +54,10 @@ const char * ASTSystemQuery::typeToString(Type type) return "RELOAD DICTIONARY"; case Type::RELOAD_DICTIONARIES: return "RELOAD DICTIONARIES"; + case Type::RELOAD_MODEL: + return "RELOAD MODEL"; + case Type::RELOAD_MODELS: + return "RELOAD MODELS"; case Type::RELOAD_EMBEDDED_DICTIONARIES: return "RELOAD EMBEDDED DICTIONARIES"; case Type::RELOAD_CONFIG: @@ -88,6 +94,8 @@ const char * ASTSystemQuery::typeToString(Type type) return "START DISTRIBUTED SENDS"; case Type::FLUSH_LOGS: return "FLUSH LOGS"; + case Type::RESTART_DISK: + return "RESTART DISK"; default: throw Exception("Unknown SYSTEM query command", ErrorCodes::LOGICAL_ERROR); } diff --git a/src/Parsers/ASTSystemQuery.h b/src/Parsers/ASTSystemQuery.h index ad7eb664659..6cd1443155f 100644 --- a/src/Parsers/ASTSystemQuery.h +++ b/src/Parsers/ASTSystemQuery.h @@ -24,6 +24,7 @@ public: DROP_DNS_CACHE, DROP_MARK_CACHE, DROP_UNCOMPRESSED_CACHE, + DROP_MMAP_CACHE, #if USE_EMBEDDED_COMPILER DROP_COMPILED_EXPRESSION_CACHE, #endif @@ -35,9 +36,12 @@ public: SYNC_REPLICA, RELOAD_DICTIONARY, RELOAD_DICTIONARIES, + RELOAD_MODEL, + RELOAD_MODELS, RELOAD_EMBEDDED_DICTIONARIES, RELOAD_CONFIG, RELOAD_SYMBOLS, + RESTART_DISK, STOP_MERGES, START_MERGES, STOP_TTL_MERGES, @@ -62,6 +66,7 @@ public: Type type = Type::UNKNOWN; String target_dictionary; + String target_model; String database; String table; String replica; @@ -69,6 +74,7 @@ public: bool is_drop_whole_replica{}; String storage_policy; String volume; + String disk; UInt64 seconds{}; String getID(char) const override { return "SYSTEM query"; } diff --git a/src/Parsers/ASTWindowDefinition.cpp b/src/Parsers/ASTWindowDefinition.cpp index aee951fc1f3..35374df6177 100644 --- a/src/Parsers/ASTWindowDefinition.cpp +++ b/src/Parsers/ASTWindowDefinition.cpp @@ -35,6 +35,8 @@ String ASTWindowDefinition::getID(char) const void ASTWindowDefinition::formatImpl(const FormatSettings & settings, FormatState & state, FormatStateStacked format_frame) const { + format_frame.expression_list_prepend_whitespace = false; + if (partition_by) { settings.ostr << "PARTITION BY "; @@ -70,7 +72,8 @@ void ASTWindowDefinition::formatImpl(const FormatSettings & settings, } else { - settings.ostr << abs(frame.begin_offset); + settings.ostr << applyVisitor(FieldVisitorToString(), + frame.begin_offset); settings.ostr << " " << (!frame.begin_preceding ? "FOLLOWING" : "PRECEDING"); } @@ -85,7 +88,8 @@ void ASTWindowDefinition::formatImpl(const FormatSettings & settings, } else { - settings.ostr << abs(frame.end_offset); + settings.ostr << applyVisitor(FieldVisitorToString(), + frame.end_offset); settings.ostr << " " << (!frame.end_preceding ? "FOLLOWING" : "PRECEDING"); } diff --git a/src/Parsers/CMakeLists.txt b/src/Parsers/CMakeLists.txt index 13e460da4e4..5aaa5c32f92 100644 --- a/src/Parsers/CMakeLists.txt +++ b/src/Parsers/CMakeLists.txt @@ -1,14 +1,14 @@ -include(${ClickHouse_SOURCE_DIR}/cmake/dbms_glob_sources.cmake) +include("${ClickHouse_SOURCE_DIR}/cmake/dbms_glob_sources.cmake") add_headers_and_sources(clickhouse_parsers .) 
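
Stepping back from the build-file tweak for a moment: the `ASTSystemQuery` hunks above add four new query types. The statements below are inferred from the `typeToString` additions and the corresponding `ParserSystemQuery` hunks later in this patch (exact grammar may differ; the `main` scaffolding is purely illustrative):

```cpp
#include <iostream>
#include <string>
#include <vector>

int main()
{
    // New SYSTEM query variants surfaced by this patch, as a client
    // would write them. 'model_name' and disk_name are placeholders.
    const std::vector<std::string> new_system_queries = {
        "SYSTEM DROP MMAP CACHE",
        "SYSTEM RELOAD MODEL 'model_name'",
        "SYSTEM RELOAD MODELS",
        "SYSTEM RESTART DISK disk_name",
    };

    for (const auto & query : new_system_queries)
        std::cout << query << '\n';
}
```
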
add_headers_and_sources(clickhouse_parsers ./MySQL) add_library(clickhouse_parsers ${clickhouse_parsers_headers} ${clickhouse_parsers_sources}) target_link_libraries(clickhouse_parsers PUBLIC clickhouse_common_io) if (USE_DEBUG_HELPERS) - set (INCLUDE_DEBUG_HELPERS "-I${ClickHouse_SOURCE_DIR}/base -include ${ClickHouse_SOURCE_DIR}/src/Parsers/iostream_debug_helpers.h") + set (INCLUDE_DEBUG_HELPERS "-I\"${ClickHouse_SOURCE_DIR}/base\" -include \"${ClickHouse_SOURCE_DIR}/src/Parsers/iostream_debug_helpers.h\"") set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${INCLUDE_DEBUG_HELPERS}") endif () -if(ENABLE_TESTS) - add_subdirectory(tests) +if(ENABLE_EXAMPLES) + add_subdirectory(examples) endif() diff --git a/src/Parsers/ExpressionElementParsers.cpp b/src/Parsers/ExpressionElementParsers.cpp index 41191f6ebaa..3e635b2accc 100644 --- a/src/Parsers/ExpressionElementParsers.cpp +++ b/src/Parsers/ExpressionElementParsers.cpp @@ -49,7 +49,6 @@ namespace ErrorCodes extern const int BAD_ARGUMENTS; extern const int SYNTAX_ERROR; extern const int LOGICAL_ERROR; - extern const int NOT_IMPLEMENTED; } @@ -533,6 +532,7 @@ static bool tryParseFrameDefinition(ASTWindowDefinition * node, IParser::Pos & p ParserKeyword keyword_groups("GROUPS"); ParserKeyword keyword_range("RANGE"); + node->frame.is_default = false; if (keyword_rows.ignore(pos, expected)) { node->frame.type = WindowFrame::FrameType::Rows; @@ -548,6 +548,7 @@ static bool tryParseFrameDefinition(ASTWindowDefinition * node, IParser::Pos & p else { /* No frame clause. */ + node->frame.is_default = true; return true; } @@ -579,30 +580,8 @@ static bool tryParseFrameDefinition(ASTWindowDefinition * node, IParser::Pos & p else if (parser_literal.parse(pos, ast_literal, expected)) { const Field & value = ast_literal->as().value; - if (!isInt64FieldType(value.getType())) - { - throw Exception(ErrorCodes::BAD_ARGUMENTS, - "Only integer frame offsets are supported, '{}' is not supported.", - Field::Types::toString(value.getType())); - } - node->frame.begin_offset = value.get(); + node->frame.begin_offset = value; node->frame.begin_type = WindowFrame::BoundaryType::Offset; - // We can easily get a UINT64_MAX here, which doesn't even fit into - // int64_t. Not sure what checks we are going to need here after we - // support floats and dates. 
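
The range and sign checks deleted just below disappear from the parser because the frame offset is now stored as an untyped `Field`; rejecting bad offsets becomes a later validation concern, once the frame is actually used. A hedged sketch of that split, with illustrative types rather than the real `Field`/`WindowFrame` API:

```cpp
#include <stdexcept>
#include <string>
#include <variant>

// Illustrative stand-in for an untyped literal value: the parser can
// now accept integer or floating-point offsets alike.
using Offset = std::variant<long long, double>;

// The parser just stores the literal; a later validation step,
// sketched here, rejects offsets that make no sense for a frame.
void checkFrameOffset(const Offset & offset)
{
    std::visit([](const auto & value)
    {
        if (value < 0)
            throw std::invalid_argument(
                "Frame offset must be nonnegative, got " + std::to_string(value));
    }, offset);
}

int main()
{
    checkFrameOffset(Offset{3LL});  // fine
    checkFrameOffset(Offset{1.5});  // fine: floats are now representable
    // checkFrameOffset(Offset{-1LL}); // would throw at validation time
}
```
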
- if (node->frame.begin_offset > INT_MAX || node->frame.begin_offset < INT_MIN) - { - throw Exception(ErrorCodes::BAD_ARGUMENTS, - "Frame offset must be between {} and {}, but {} is given", - INT_MAX, INT_MIN, node->frame.begin_offset); - } - - if (node->frame.begin_offset < 0) - { - throw Exception(ErrorCodes::BAD_ARGUMENTS, - "Frame start offset must be greater than zero, {} given", - node->frame.begin_offset); - } } else { @@ -618,8 +597,8 @@ static bool tryParseFrameDefinition(ASTWindowDefinition * node, IParser::Pos & p node->frame.begin_preceding = false; if (node->frame.begin_type == WindowFrame::BoundaryType::Unbounded) { - throw Exception(ErrorCodes::NOT_IMPLEMENTED, - "Frame start UNBOUNDED FOLLOWING is not implemented"); + throw Exception(ErrorCodes::BAD_ARGUMENTS, + "Frame start cannot be UNBOUNDED FOLLOWING"); } } else @@ -650,28 +629,8 @@ static bool tryParseFrameDefinition(ASTWindowDefinition * node, IParser::Pos & p else if (parser_literal.parse(pos, ast_literal, expected)) { const Field & value = ast_literal->as().value; - if (!isInt64FieldType(value.getType())) - { - throw Exception(ErrorCodes::BAD_ARGUMENTS, - "Only integer frame offsets are supported, '{}' is not supported.", - Field::Types::toString(value.getType())); - } - node->frame.end_offset = value.get(); + node->frame.end_offset = value; node->frame.end_type = WindowFrame::BoundaryType::Offset; - - if (node->frame.end_offset > INT_MAX || node->frame.end_offset < INT_MIN) - { - throw Exception(ErrorCodes::BAD_ARGUMENTS, - "Frame offset must be between {} and {}, but {} is given", - INT_MAX, INT_MIN, node->frame.end_offset); - } - - if (node->frame.end_offset < 0) - { - throw Exception(ErrorCodes::BAD_ARGUMENTS, - "Frame end offset must be greater than zero, {} given", - node->frame.end_offset); - } } else { @@ -683,8 +642,8 @@ static bool tryParseFrameDefinition(ASTWindowDefinition * node, IParser::Pos & p node->frame.end_preceding = true; if (node->frame.end_type == WindowFrame::BoundaryType::Unbounded) { - throw Exception(ErrorCodes::NOT_IMPLEMENTED, - "Frame end UNBOUNDED PRECEDING is not implemented"); + throw Exception(ErrorCodes::BAD_ARGUMENTS, + "Frame end cannot be UNBOUNDED PRECEDING"); } } else if (keyword_following.ignore(pos, expected)) @@ -699,11 +658,6 @@ static bool tryParseFrameDefinition(ASTWindowDefinition * node, IParser::Pos & p } } - if (!(node->frame == WindowFrame{})) - { - node->frame.is_default = false; - } - return true; } diff --git a/src/Parsers/ExpressionElementParsers.h b/src/Parsers/ExpressionElementParsers.h index cbbbd3f6d3b..f8b2408ac16 100644 --- a/src/Parsers/ExpressionElementParsers.h +++ b/src/Parsers/ExpressionElementParsers.h @@ -45,7 +45,7 @@ protected: class ParserIdentifier : public IParserBase { public: - ParserIdentifier(bool allow_query_parameter_ = false) : allow_query_parameter(allow_query_parameter_) {} + explicit ParserIdentifier(bool allow_query_parameter_ = false) : allow_query_parameter(allow_query_parameter_) {} protected: const char * getName() const override { return "identifier"; } bool parseImpl(Pos & pos, ASTPtr & node, Expected & expected) override; @@ -59,7 +59,7 @@ protected: class ParserCompoundIdentifier : public IParserBase { public: - ParserCompoundIdentifier(bool table_name_with_optional_uuid_ = false, bool allow_query_parameter_ = false) + explicit ParserCompoundIdentifier(bool table_name_with_optional_uuid_ = false, bool allow_query_parameter_ = false) : table_name_with_optional_uuid(table_name_with_optional_uuid_), 
allow_query_parameter(allow_query_parameter_) { } @@ -85,7 +85,7 @@ public: using ColumnTransformers = MultiEnum; static constexpr auto AllTransformers = ColumnTransformers{ColumnTransformer::APPLY, ColumnTransformer::EXCEPT, ColumnTransformer::REPLACE}; - ParserColumnsTransformers(ColumnTransformers allowed_transformers_ = AllTransformers, bool is_strict_ = false) + explicit ParserColumnsTransformers(ColumnTransformers allowed_transformers_ = AllTransformers, bool is_strict_ = false) : allowed_transformers(allowed_transformers_) , is_strict(is_strict_) {} @@ -103,7 +103,7 @@ class ParserAsterisk : public IParserBase { public: using ColumnTransformers = ParserColumnsTransformers::ColumnTransformers; - ParserAsterisk(ColumnTransformers allowed_transformers_ = ParserColumnsTransformers::AllTransformers) + explicit ParserAsterisk(ColumnTransformers allowed_transformers_ = ParserColumnsTransformers::AllTransformers) : allowed_transformers(allowed_transformers_) {} @@ -129,7 +129,7 @@ class ParserColumnsMatcher : public IParserBase { public: using ColumnTransformers = ParserColumnsTransformers::ColumnTransformers; - ParserColumnsMatcher(ColumnTransformers allowed_transformers_ = ParserColumnsTransformers::AllTransformers) + explicit ParserColumnsMatcher(ColumnTransformers allowed_transformers_ = ParserColumnsTransformers::AllTransformers) : allowed_transformers(allowed_transformers_) {} @@ -149,7 +149,7 @@ protected: class ParserFunction : public IParserBase { public: - ParserFunction(bool allow_function_parameters_ = true, bool is_table_function_ = false) + explicit ParserFunction(bool allow_function_parameters_ = true, bool is_table_function_ = false) : allow_function_parameters(allow_function_parameters_), is_table_function(is_table_function_) { } diff --git a/src/Parsers/Lexer.cpp b/src/Parsers/Lexer.cpp index ffa8250a3f3..1fa4d396113 100644 --- a/src/Parsers/Lexer.cpp +++ b/src/Parsers/Lexer.cpp @@ -275,7 +275,8 @@ Token Lexer::nextTokenImpl() else ++pos; } - return Token(TokenType::ErrorMultilineCommentIsNotClosed, token_begin, end); + pos = end; + return Token(TokenType::ErrorMultilineCommentIsNotClosed, token_begin, pos); } } return Token(TokenType::Slash, token_begin, pos); diff --git a/src/Parsers/New/AST/Identifier.cpp b/src/Parsers/New/AST/Identifier.cpp index a5c41bf9876..3b931d19720 100644 --- a/src/Parsers/New/AST/Identifier.cpp +++ b/src/Parsers/New/AST/Identifier.cpp @@ -142,16 +142,19 @@ antlrcpp::Any ParseTreeVisitor::visitIdentifierOrNull(ClickHouseParser::Identifi antlrcpp::Any ParseTreeVisitor::visitInterval(ClickHouseParser::IntervalContext *) { + asm (""); // prevent symbol removal __builtin_unreachable(); } antlrcpp::Any ParseTreeVisitor::visitKeyword(ClickHouseParser::KeywordContext *) { + asm (""); // prevent symbol removal __builtin_unreachable(); } antlrcpp::Any ParseTreeVisitor::visitKeywordForAlias(ClickHouseParser::KeywordForAliasContext *) { + asm (""); // prevent symbol removal __builtin_unreachable(); } diff --git a/src/Parsers/New/CharInputStream.cpp b/src/Parsers/New/CharInputStream.cpp index 820666ff801..71cccafae50 100644 --- a/src/Parsers/New/CharInputStream.cpp +++ b/src/Parsers/New/CharInputStream.cpp @@ -40,7 +40,7 @@ void CharInputStream::consume() throw IllegalStateException("cannot consume EOF"); } - if (p < s) ++p; + ++p; } void CharInputStream::seek(size_t i) diff --git a/src/Parsers/New/ClickHouseLexer.h b/src/Parsers/New/ClickHouseLexer.h index e925c5d271f..62de0792f3c 100644 --- a/src/Parsers/New/ClickHouseLexer.h +++ 
b/src/Parsers/New/ClickHouseLexer.h @@ -13,51 +13,51 @@ namespace DB { class ClickHouseLexer : public antlr4::Lexer { public: enum { - ADD = 1, AFTER = 2, ALIAS = 3, ALL = 4, ALTER = 5, AND = 6, ANTI = 7, - ANY = 8, ARRAY = 9, AS = 10, ASCENDING = 11, ASOF = 12, ASYNC = 13, - ATTACH = 14, BETWEEN = 15, BOTH = 16, BY = 17, CASE = 18, CAST = 19, - CHECK = 20, CLEAR = 21, CLUSTER = 22, CODEC = 23, COLLATE = 24, COLUMN = 25, - COMMENT = 26, CONSTRAINT = 27, CREATE = 28, CROSS = 29, CUBE = 30, DATABASE = 31, - DATABASES = 32, DATE = 33, DAY = 34, DEDUPLICATE = 35, DEFAULT = 36, - DELAY = 37, DELETE = 38, DESC = 39, DESCENDING = 40, DESCRIBE = 41, - DETACH = 42, DICTIONARIES = 43, DICTIONARY = 44, DISK = 45, DISTINCT = 46, - DISTRIBUTED = 47, DROP = 48, ELSE = 49, END = 50, ENGINE = 51, EVENTS = 52, - EXISTS = 53, EXPLAIN = 54, EXPRESSION = 55, EXTRACT = 56, FETCHES = 57, - FINAL = 58, FIRST = 59, FLUSH = 60, FOR = 61, FORMAT = 62, FREEZE = 63, - FROM = 64, FULL = 65, FUNCTION = 66, GLOBAL = 67, GRANULARITY = 68, - GROUP = 69, HAVING = 70, HIERARCHICAL = 71, HOUR = 72, ID = 73, IF = 74, - ILIKE = 75, IN = 76, INDEX = 77, INF = 78, INJECTIVE = 79, INNER = 80, - INSERT = 81, INTERVAL = 82, INTO = 83, IS = 84, IS_OBJECT_ID = 85, JOIN = 86, - KEY = 87, KILL = 88, LAST = 89, LAYOUT = 90, LEADING = 91, LEFT = 92, - LIFETIME = 93, LIKE = 94, LIMIT = 95, LIVE = 96, LOCAL = 97, LOGS = 98, - MATERIALIZED = 99, MAX = 100, MERGES = 101, MIN = 102, MINUTE = 103, - MODIFY = 104, MONTH = 105, MOVE = 106, MUTATION = 107, NAN_SQL = 108, - NO = 109, NOT = 110, NULL_SQL = 111, NULLS = 112, OFFSET = 113, ON = 114, - OPTIMIZE = 115, OR = 116, ORDER = 117, OUTER = 118, OUTFILE = 119, PARTITION = 120, - POPULATE = 121, PREWHERE = 122, PRIMARY = 123, QUARTER = 124, RANGE = 125, - RELOAD = 126, REMOVE = 127, RENAME = 128, REPLACE = 129, REPLICA = 130, - REPLICATED = 131, RIGHT = 132, ROLLUP = 133, SAMPLE = 134, SECOND = 135, - SELECT = 136, SEMI = 137, SENDS = 138, SET = 139, SETTINGS = 140, SHOW = 141, - SOURCE = 142, START = 143, STOP = 144, SUBSTRING = 145, SYNC = 146, - SYNTAX = 147, SYSTEM = 148, TABLE = 149, TABLES = 150, TEMPORARY = 151, - TEST = 152, THEN = 153, TIES = 154, TIMEOUT = 155, TIMESTAMP = 156, - TO = 157, TOP = 158, TOTALS = 159, TRAILING = 160, TRIM = 161, TRUNCATE = 162, - TTL = 163, TYPE = 164, UNION = 165, UPDATE = 166, USE = 167, USING = 168, - UUID = 169, VALUES = 170, VIEW = 171, VOLUME = 172, WATCH = 173, WEEK = 174, - WHEN = 175, WHERE = 176, WITH = 177, YEAR = 178, JSON_FALSE = 179, JSON_TRUE = 180, - IDENTIFIER = 181, FLOATING_LITERAL = 182, OCTAL_LITERAL = 183, DECIMAL_LITERAL = 184, - HEXADECIMAL_LITERAL = 185, STRING_LITERAL = 186, ARROW = 187, ASTERISK = 188, - BACKQUOTE = 189, BACKSLASH = 190, COLON = 191, COMMA = 192, CONCAT = 193, - DASH = 194, DOT = 195, EQ_DOUBLE = 196, EQ_SINGLE = 197, GE = 198, GT = 199, - LBRACE = 200, LBRACKET = 201, LE = 202, LPAREN = 203, LT = 204, NOT_EQ = 205, - PERCENT = 206, PLUS = 207, QUERY = 208, QUOTE_DOUBLE = 209, QUOTE_SINGLE = 210, - RBRACE = 211, RBRACKET = 212, RPAREN = 213, SEMICOLON = 214, SLASH = 215, - UNDERSCORE = 216, MULTI_LINE_COMMENT = 217, SINGLE_LINE_COMMENT = 218, + ADD = 1, AFTER = 2, ALIAS = 3, ALL = 4, ALTER = 5, AND = 6, ANTI = 7, + ANY = 8, ARRAY = 9, AS = 10, ASCENDING = 11, ASOF = 12, ASYNC = 13, + ATTACH = 14, BETWEEN = 15, BOTH = 16, BY = 17, CASE = 18, CAST = 19, + CHECK = 20, CLEAR = 21, CLUSTER = 22, CODEC = 23, COLLATE = 24, COLUMN = 25, + COMMENT = 26, CONSTRAINT = 27, CREATE = 28, CROSS = 29, CUBE = 30, DATABASE = 31, 
+ DATABASES = 32, DATE = 33, DAY = 34, DEDUPLICATE = 35, DEFAULT = 36, + DELAY = 37, DELETE = 38, DESC = 39, DESCENDING = 40, DESCRIBE = 41, + DETACH = 42, DICTIONARIES = 43, DICTIONARY = 44, DISK = 45, DISTINCT = 46, + DISTRIBUTED = 47, DROP = 48, ELSE = 49, END = 50, ENGINE = 51, EVENTS = 52, + EXISTS = 53, EXPLAIN = 54, EXPRESSION = 55, EXTRACT = 56, FETCHES = 57, + FINAL = 58, FIRST = 59, FLUSH = 60, FOR = 61, FORMAT = 62, FREEZE = 63, + FROM = 64, FULL = 65, FUNCTION = 66, GLOBAL = 67, GRANULARITY = 68, + GROUP = 69, HAVING = 70, HIERARCHICAL = 71, HOUR = 72, ID = 73, IF = 74, + ILIKE = 75, IN = 76, INDEX = 77, INF = 78, INJECTIVE = 79, INNER = 80, + INSERT = 81, INTERVAL = 82, INTO = 83, IS = 84, IS_OBJECT_ID = 85, JOIN = 86, + KEY = 87, KILL = 88, LAST = 89, LAYOUT = 90, LEADING = 91, LEFT = 92, + LIFETIME = 93, LIKE = 94, LIMIT = 95, LIVE = 96, LOCAL = 97, LOGS = 98, + MATERIALIZED = 99, MAX = 100, MERGES = 101, MIN = 102, MINUTE = 103, + MODIFY = 104, MONTH = 105, MOVE = 106, MUTATION = 107, NAN_SQL = 108, + NO = 109, NOT = 110, NULL_SQL = 111, NULLS = 112, OFFSET = 113, ON = 114, + OPTIMIZE = 115, OR = 116, ORDER = 117, OUTER = 118, OUTFILE = 119, PARTITION = 120, + POPULATE = 121, PREWHERE = 122, PRIMARY = 123, QUARTER = 124, RANGE = 125, + RELOAD = 126, REMOVE = 127, RENAME = 128, REPLACE = 129, REPLICA = 130, + REPLICATED = 131, RIGHT = 132, ROLLUP = 133, SAMPLE = 134, SECOND = 135, + SELECT = 136, SEMI = 137, SENDS = 138, SET = 139, SETTINGS = 140, SHOW = 141, + SOURCE = 142, START = 143, STOP = 144, SUBSTRING = 145, SYNC = 146, + SYNTAX = 147, SYSTEM = 148, TABLE = 149, TABLES = 150, TEMPORARY = 151, + TEST = 152, THEN = 153, TIES = 154, TIMEOUT = 155, TIMESTAMP = 156, + TO = 157, TOP = 158, TOTALS = 159, TRAILING = 160, TRIM = 161, TRUNCATE = 162, + TTL = 163, TYPE = 164, UNION = 165, UPDATE = 166, USE = 167, USING = 168, + UUID = 169, VALUES = 170, VIEW = 171, VOLUME = 172, WATCH = 173, WEEK = 174, + WHEN = 175, WHERE = 176, WITH = 177, YEAR = 178, JSON_FALSE = 179, JSON_TRUE = 180, + IDENTIFIER = 181, FLOATING_LITERAL = 182, OCTAL_LITERAL = 183, DECIMAL_LITERAL = 184, + HEXADECIMAL_LITERAL = 185, STRING_LITERAL = 186, ARROW = 187, ASTERISK = 188, + BACKQUOTE = 189, BACKSLASH = 190, COLON = 191, COMMA = 192, CONCAT = 193, + DASH = 194, DOT = 195, EQ_DOUBLE = 196, EQ_SINGLE = 197, GE = 198, GT = 199, + LBRACE = 200, LBRACKET = 201, LE = 202, LPAREN = 203, LT = 204, NOT_EQ = 205, + PERCENT = 206, PLUS = 207, QUERY = 208, QUOTE_DOUBLE = 209, QUOTE_SINGLE = 210, + RBRACE = 211, RBRACKET = 212, RPAREN = 213, SEMICOLON = 214, SLASH = 215, + UNDERSCORE = 216, MULTI_LINE_COMMENT = 217, SINGLE_LINE_COMMENT = 218, WHITESPACE = 219 }; ClickHouseLexer(antlr4::CharStream *input); - ~ClickHouseLexer(); + ~ClickHouseLexer() override; virtual std::string getGrammarFileName() const override; virtual const std::vector& getRuleNames() const override; diff --git a/src/Parsers/New/ClickHouseParser.h b/src/Parsers/New/ClickHouseParser.h index 11beadb182e..35e8d81d7f8 100644 --- a/src/Parsers/New/ClickHouseParser.h +++ b/src/Parsers/New/ClickHouseParser.h @@ -91,7 +91,7 @@ public: }; ClickHouseParser(antlr4::TokenStream *input); - ~ClickHouseParser(); + ~ClickHouseParser() override; virtual std::string getGrammarFileName() const override; virtual const antlr4::atn::ATN& getATN() const override { return _atn; }; diff --git a/src/Parsers/ParserAlterQuery.cpp b/src/Parsers/ParserAlterQuery.cpp index 5d20e27e486..de524342fb4 100644 --- a/src/Parsers/ParserAlterQuery.cpp +++ 
b/src/Parsers/ParserAlterQuery.cpp @@ -61,8 +61,10 @@ bool ParserAlterCommand::parseImpl(Pos & pos, ASTPtr & node, Expected & expected ParserKeyword s_drop_detached_partition("DROP DETACHED PARTITION"); ParserKeyword s_drop_detached_part("DROP DETACHED PART"); ParserKeyword s_fetch_partition("FETCH PARTITION"); + ParserKeyword s_fetch_part("FETCH PART"); ParserKeyword s_replace_partition("REPLACE PARTITION"); ParserKeyword s_freeze("FREEZE"); + ParserKeyword s_unfreeze("UNFREEZE"); ParserKeyword s_partition("PARTITION"); ParserKeyword s_first("FIRST"); @@ -427,6 +429,21 @@ bool ParserAlterCommand::parseImpl(Pos & pos, ASTPtr & node, Expected & expected command->from = ast_from->as().value.get(); command->type = ASTAlterCommand::FETCH_PARTITION; } + else if (s_fetch_part.ignore(pos, expected)) + { + if (!parser_string_literal.parse(pos, command->partition, expected)) + return false; + + if (!s_from.ignore(pos, expected)) + return false; + + ASTPtr ast_from; + if (!parser_string_literal.parse(pos, ast_from, expected)) + return false; + command->from = ast_from->as().value.get(); + command->part = true; + command->type = ASTAlterCommand::FETCH_PARTITION; + } else if (s_freeze.ignore(pos, expected)) { if (s_partition.ignore(pos, expected)) @@ -454,6 +471,37 @@ bool ParserAlterCommand::parseImpl(Pos & pos, ASTPtr & node, Expected & expected command->with_name = ast_with_name->as().value.get(); } } + else if (s_unfreeze.ignore(pos, expected)) + { + if (s_partition.ignore(pos, expected)) + { + if (!parser_partition.parse(pos, command->partition, expected)) + return false; + + command->type = ASTAlterCommand::UNFREEZE_PARTITION; + } + else + { + command->type = ASTAlterCommand::UNFREEZE_ALL; + } + + /// WITH NAME 'name' - remove local backup to directory with specified name + if (s_with.ignore(pos, expected)) + { + if (!s_name.ignore(pos, expected)) + return false; + + ASTPtr ast_with_name; + if (!parser_string_literal.parse(pos, ast_with_name, expected)) + return false; + + command->with_name = ast_with_name->as().value.get(); + } + else + { + return false; + } + } else if (s_modify_column.ignore(pos, expected)) { if (s_if_exists.ignore(pos, expected)) diff --git a/src/Parsers/ParserCreateQuery.cpp b/src/Parsers/ParserCreateQuery.cpp index 4cef79fdf42..bfd51b7633d 100644 --- a/src/Parsers/ParserCreateQuery.cpp +++ b/src/Parsers/ParserCreateQuery.cpp @@ -780,6 +780,7 @@ bool ParserCreateViewQuery::parseImpl(Pos & pos, ASTPtr & node, Expected & expec ASTPtr table; ASTPtr to_table; + ASTPtr to_inner_uuid; ASTPtr columns_list; ASTPtr storage; ASTPtr as_database; @@ -830,9 +831,16 @@ bool ParserCreateViewQuery::parseImpl(Pos & pos, ASTPtr & node, Expected & expec return false; } - // TO [db.]table - if (ParserKeyword{"TO"}.ignore(pos, expected)) + + if (ParserKeyword{"TO INNER UUID"}.ignore(pos, expected)) { + ParserLiteral literal_p; + if (!literal_p.parse(pos, to_inner_uuid, expected)) + return false; + } + else if (ParserKeyword{"TO"}.ignore(pos, expected)) + { + // TO [db.]table if (!table_name_p.parse(pos, to_table, expected)) return false; } @@ -883,6 +891,8 @@ bool ParserCreateViewQuery::parseImpl(Pos & pos, ASTPtr & node, Expected & expec if (to_table) query->to_table_id = getTableIdentifier(to_table); + if (to_inner_uuid) + query->to_inner_uuid = parseFromString(to_inner_uuid->as()->value.get()); query->set(query->columns_list, columns_list); query->set(query->storage, storage); diff --git a/src/Parsers/ParserCreateQuotaQuery.cpp b/src/Parsers/ParserCreateQuotaQuery.cpp index 
68c53d2fc1d..a8779a68600 100644 --- a/src/Parsers/ParserCreateQuotaQuery.cpp +++ b/src/Parsers/ParserCreateQuotaQuery.cpp @@ -226,7 +226,7 @@ namespace { ASTPtr node; ParserRolesOrUsersSet roles_p; - roles_p.allowAll().allowRoleNames().allowUserNames().allowCurrentUser().useIDMode(id_mode); + roles_p.allowAll().allowRoles().allowUsers().allowCurrentUser().useIDMode(id_mode); if (!ParserKeyword{"TO"}.ignore(pos, expected) || !roles_p.parse(pos, node, expected)) return false; diff --git a/src/Parsers/ParserCreateRowPolicyQuery.cpp b/src/Parsers/ParserCreateRowPolicyQuery.cpp index fae5bd35b43..534f781a273 100644 --- a/src/Parsers/ParserCreateRowPolicyQuery.cpp +++ b/src/Parsers/ParserCreateRowPolicyQuery.cpp @@ -187,7 +187,7 @@ namespace return false; ParserRolesOrUsersSet roles_p; - roles_p.allowAll().allowRoleNames().allowUserNames().allowCurrentUser().useIDMode(id_mode); + roles_p.allowAll().allowRoles().allowUsers().allowCurrentUser().useIDMode(id_mode); if (!roles_p.parse(pos, ast, expected)) return false; diff --git a/src/Parsers/ParserCreateSettingsProfileQuery.cpp b/src/Parsers/ParserCreateSettingsProfileQuery.cpp index 797379509e4..2d1e6824b50 100644 --- a/src/Parsers/ParserCreateSettingsProfileQuery.cpp +++ b/src/Parsers/ParserCreateSettingsProfileQuery.cpp @@ -53,7 +53,7 @@ namespace return false; ParserRolesOrUsersSet roles_p; - roles_p.allowAll().allowRoleNames().allowUserNames().allowCurrentUser().useIDMode(id_mode); + roles_p.allowAll().allowRoles().allowUsers().allowCurrentUser().useIDMode(id_mode); if (!roles_p.parse(pos, ast, expected)) return false; diff --git a/src/Parsers/ParserCreateUserQuery.cpp b/src/Parsers/ParserCreateUserQuery.cpp index 16c539d3ebc..84bf60d56d3 100644 --- a/src/Parsers/ParserCreateUserQuery.cpp +++ b/src/Parsers/ParserCreateUserQuery.cpp @@ -246,12 +246,12 @@ namespace ASTPtr ast; ParserRolesOrUsersSet default_roles_p; - default_roles_p.allowAll().allowRoleNames().useIDMode(id_mode); + default_roles_p.allowAll().allowRoles().useIDMode(id_mode); if (!default_roles_p.parse(pos, ast, expected)) return false; default_roles = typeid_cast>(ast); - default_roles->allow_user_names = false; + default_roles->allow_users = false; return true; }); } @@ -275,6 +275,24 @@ namespace }); } + bool parseGrantees(IParserBase::Pos & pos, Expected & expected, bool id_mode, std::shared_ptr & grantees) + { + return IParserBase::wrapParseImpl(pos, [&] + { + if (!ParserKeyword{"GRANTEES"}.ignore(pos, expected)) + return false; + + ASTPtr ast; + ParserRolesOrUsersSet grantees_p; + grantees_p.allowAny().allowUsers().allowCurrentUser().allowRoles().useIDMode(id_mode); + if (!grantees_p.parse(pos, ast, expected)) + return false; + + grantees = typeid_cast>(ast); + return true; + }); + } + bool parseOnCluster(IParserBase::Pos & pos, Expected & expected, String & cluster) { return IParserBase::wrapParseImpl(pos, [&] @@ -330,6 +348,7 @@ bool ParserCreateUserQuery::parseImpl(Pos & pos, ASTPtr & node, Expected & expec std::optional remove_hosts; std::shared_ptr default_roles; std::shared_ptr settings; + std::shared_ptr grantees; String cluster; while (true) @@ -368,6 +387,9 @@ bool ParserCreateUserQuery::parseImpl(Pos & pos, ASTPtr & node, Expected & expec if (cluster.empty() && parseOnCluster(pos, expected, cluster)) continue; + if (!grantees && parseGrantees(pos, expected, attach_mode, grantees)) + continue; + if (alter) { if (new_name.empty() && (names->size() == 1) && parseRenameTo(pos, expected, new_name)) @@ -422,6 +444,7 @@ bool ParserCreateUserQuery::parseImpl(Pos & pos, 
ASTPtr & node, Expected & expec query->remove_hosts = std::move(remove_hosts); query->default_roles = std::move(default_roles); query->settings = std::move(settings); + query->grantees = std::move(grantees); return true; } diff --git a/src/Parsers/ParserCreateUserQuery.h b/src/Parsers/ParserCreateUserQuery.h index 5b83a261fa2..215133a777c 100644 --- a/src/Parsers/ParserCreateUserQuery.h +++ b/src/Parsers/ParserCreateUserQuery.h @@ -9,13 +9,17 @@ namespace DB * CREATE USER [IF NOT EXISTS | OR REPLACE] name * [NOT IDENTIFIED | IDENTIFIED {[WITH {no_password|plaintext_password|sha256_password|sha256_hash|double_sha1_password|double_sha1_hash}] BY {'password'|'hash'}}|{WITH ldap SERVER 'server_name'}|{WITH kerberos [REALM 'realm']}] * [HOST {LOCAL | NAME 'name' | REGEXP 'name_regexp' | IP 'address' | LIKE 'pattern'} [,...] | ANY | NONE] + * [DEFAULT ROLE role [,...]] * [SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [READONLY|WRITABLE] | PROFILE 'profile_name'] [,...] + * [GRANTEES {user | role | ANY | NONE} [,...] [EXCEPT {user | role} [,...]]] * * ALTER USER [IF EXISTS] name * [RENAME TO new_name] * [NOT IDENTIFIED | IDENTIFIED {[WITH {no_password|plaintext_password|sha256_password|sha256_hash|double_sha1_password|double_sha1_hash}] BY {'password'|'hash'}}|{WITH ldap SERVER 'server_name'}|{WITH kerberos [REALM 'realm']}] * [[ADD|DROP] HOST {LOCAL | NAME 'name' | REGEXP 'name_regexp' | IP 'address' | LIKE 'pattern'} [,...] | ANY | NONE] + * [DEFAULT ROLE role [,...] | ALL | ALL EXCEPT role [,...] ] * [SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [READONLY|WRITABLE] | PROFILE 'profile_name'] [,...] + * [GRANTEES {user | role | ANY | NONE} [,...] [EXCEPT {user | role} [,...]]] */ class ParserCreateUserQuery : public IParserBase { diff --git a/src/Parsers/ParserGrantQuery.cpp b/src/Parsers/ParserGrantQuery.cpp index 7dd721c9af2..d3aa62e73da 100644 --- a/src/Parsers/ParserGrantQuery.cpp +++ b/src/Parsers/ParserGrantQuery.cpp @@ -8,6 +8,7 @@ #include #include #include +#include namespace DB @@ -20,8 +21,6 @@ namespace ErrorCodes namespace { - using Kind = ASTGrantQuery::Kind; - bool parseAccessFlags(IParser::Pos & pos, Expected & expected, AccessFlags & access_flags) { static constexpr auto is_one_of_access_type_words = [](IParser::Pos & pos_) @@ -87,7 +86,7 @@ namespace }); } - bool parseAccessTypesWithColumns(IParser::Pos & pos, Expected & expected, + bool parseAccessFlagsWithColumns(IParser::Pos & pos, Expected & expected, std::vector> & access_and_columns) { std::vector> res; @@ -112,7 +111,7 @@ namespace } - bool parseAccessRightsElements(IParser::Pos & pos, Expected & expected, AccessRightsElements & elements) + bool parseElementsWithoutOptions(IParser::Pos & pos, Expected & expected, AccessRightsElements & elements) { return IParserBase::wrapParseImpl(pos, [&] { @@ -121,7 +120,7 @@ namespace auto parse_around_on = [&] { std::vector> access_and_columns; - if (!parseAccessTypesWithColumns(pos, expected, access_and_columns)) + if (!parseAccessFlagsWithColumns(pos, expected, access_and_columns)) return false; if (!ParserKeyword{"ON"}.ignore(pos, expected)) @@ -157,16 +156,16 @@ namespace } - void removeNonGrantableFlags(AccessRightsElements & elements) + void eraseNonGrantable(AccessRightsElements & elements) { - for (auto & element : elements) + boost::range::remove_erase_if(elements, [](AccessRightsElement & element) { if (element.empty()) - continue; + return true; auto old_flags = element.access_flags; - element.removeNonGrantableFlags(); + 
element.eraseNonGrantable(); if (!element.empty()) - continue; + return false; if (!element.any_column) throw Exception(old_flags.toString() + " cannot be granted on the column level", ErrorCodes::INVALID_GRANT); @@ -176,17 +175,17 @@ namespace throw Exception(old_flags.toString() + " cannot be granted on the database level", ErrorCodes::INVALID_GRANT); else throw Exception(old_flags.toString() + " cannot be granted", ErrorCodes::INVALID_GRANT); - } + }); } - bool parseRoles(IParser::Pos & pos, Expected & expected, Kind kind, bool id_mode, std::shared_ptr & roles) + bool parseRoles(IParser::Pos & pos, Expected & expected, bool is_revoke, bool id_mode, std::shared_ptr & roles) { return IParserBase::wrapParseImpl(pos, [&] { ParserRolesOrUsersSet roles_p; - roles_p.allowRoleNames().useIDMode(id_mode); - if (kind == Kind::REVOKE) + roles_p.allowRoles().useIDMode(id_mode); + if (is_revoke) roles_p.allowAll(); ASTPtr ast; @@ -199,28 +198,20 @@ namespace } - bool parseToRoles(IParser::Pos & pos, Expected & expected, ASTGrantQuery::Kind kind, std::shared_ptr & to_roles) + bool parseToGrantees(IParser::Pos & pos, Expected & expected, bool is_revoke, std::shared_ptr & grantees) { return IParserBase::wrapParseImpl(pos, [&] { - if (kind == Kind::GRANT) - { - if (!ParserKeyword{"TO"}.ignore(pos, expected)) - return false; - } - else - { - if (!ParserKeyword{"FROM"}.ignore(pos, expected)) - return false; - } + if (!ParserKeyword{is_revoke ? "FROM" : "TO"}.ignore(pos, expected)) + return false; ASTPtr ast; ParserRolesOrUsersSet roles_p; - roles_p.allowRoleNames().allowUserNames().allowCurrentUser().allowAll(kind == Kind::REVOKE); + roles_p.allowRoles().allowUsers().allowCurrentUser().allowAll(is_revoke); if (!roles_p.parse(pos, ast, expected)) return false; - to_roles = typeid_cast>(ast); + grantees = typeid_cast>(ast); return true; }); } @@ -237,20 +228,13 @@ namespace bool ParserGrantQuery::parseImpl(Pos & pos, ASTPtr & node, Expected & expected) { - bool attach = false; - if (attach_mode) - { - if (!ParserKeyword{"ATTACH"}.ignore(pos, expected)) - return false; - attach = true; - } + if (attach_mode && !ParserKeyword{"ATTACH"}.ignore(pos, expected)) + return false; - Kind kind; - if (ParserKeyword{"GRANT"}.ignore(pos, expected)) - kind = Kind::GRANT; - else if (ParserKeyword{"REVOKE"}.ignore(pos, expected)) - kind = Kind::REVOKE; - else + bool is_revoke = false; + if (ParserKeyword{"REVOKE"}.ignore(pos, expected)) + is_revoke = true; + else if (!ParserKeyword{"GRANT"}.ignore(pos, expected)) return false; String cluster; @@ -259,7 +243,7 @@ bool ParserGrantQuery::parseImpl(Pos & pos, ASTPtr & node, Expected & expected) bool grant_option = false; bool admin_option = false; - if (kind == Kind::REVOKE) + if (is_revoke) { if (ParserKeyword{"GRANT OPTION FOR"}.ignore(pos, expected)) grant_option = true; @@ -269,20 +253,20 @@ bool ParserGrantQuery::parseImpl(Pos & pos, ASTPtr & node, Expected & expected) AccessRightsElements elements; std::shared_ptr roles; - if (!parseAccessRightsElements(pos, expected, elements) && !parseRoles(pos, expected, kind, attach, roles)) + if (!parseElementsWithoutOptions(pos, expected, elements) && !parseRoles(pos, expected, is_revoke, attach_mode, roles)) return false; if (cluster.empty()) parseOnCluster(pos, expected, cluster); - std::shared_ptr to_roles; - if (!parseToRoles(pos, expected, kind, to_roles)) + std::shared_ptr grantees; + if (!parseToGrantees(pos, expected, is_revoke, grantees)) return false; if (cluster.empty()) parseOnCluster(pos, expected, cluster); - if (kind 
== Kind::GRANT) + if (!is_revoke) { if (ParserKeyword{"WITH GRANT OPTION"}.ignore(pos, expected)) grant_option = true; @@ -298,19 +282,24 @@ bool ParserGrantQuery::parseImpl(Pos & pos, ASTPtr & node, Expected & expected) if (admin_option && !elements.empty()) throw Exception("ADMIN OPTION should be specified for roles", ErrorCodes::SYNTAX_ERROR); - if (kind == Kind::GRANT) - removeNonGrantableFlags(elements); + if (grant_option) + { + for (auto & element : elements) + element.grant_option = true; + } + + if (!is_revoke) + eraseNonGrantable(elements); auto query = std::make_shared(); node = query; - query->kind = kind; - query->attach = attach; + query->is_revoke = is_revoke; + query->attach_mode = attach_mode; query->cluster = std::move(cluster); query->access_rights_elements = std::move(elements); query->roles = std::move(roles); - query->to_roles = std::move(to_roles); - query->grant_option = grant_option; + query->grantees = std::move(grantees); query->admin_option = admin_option; return true; diff --git a/src/Parsers/ParserRenameQuery.cpp b/src/Parsers/ParserRenameQuery.cpp index 7fa4e6e5408..e3b35249cd6 100644 --- a/src/Parsers/ParserRenameQuery.cpp +++ b/src/Parsers/ParserRenameQuery.cpp @@ -42,6 +42,7 @@ bool ParserRenameQuery::parseImpl(Pos & pos, ASTPtr & node, Expected & expected) ParserKeyword s_rename_table("RENAME TABLE"); ParserKeyword s_exchange_tables("EXCHANGE TABLES"); ParserKeyword s_rename_dictionary("RENAME DICTIONARY"); + ParserKeyword s_exchange_dictionaries("EXCHANGE DICTIONARIES"); ParserKeyword s_rename_database("RENAME DATABASE"); ParserKeyword s_to("TO"); ParserKeyword s_and("AND"); @@ -56,6 +57,11 @@ bool ParserRenameQuery::parseImpl(Pos & pos, ASTPtr & node, Expected & expected) exchange = true; else if (s_rename_dictionary.ignore(pos, expected)) dictionary = true; + else if (s_exchange_dictionaries.ignore(pos, expected)) + { + exchange = true; + dictionary = true; + } else if (s_rename_database.ignore(pos, expected)) { ASTPtr from_db; diff --git a/src/Parsers/ParserRolesOrUsersSet.cpp b/src/Parsers/ParserRolesOrUsersSet.cpp index 0f3ba3f0f84..41e9ee6501d 100644 --- a/src/Parsers/ParserRolesOrUsersSet.cpp +++ b/src/Parsers/ParserRolesOrUsersSet.cpp @@ -12,11 +12,7 @@ namespace DB { namespace { - bool parseRoleNameOrID( - IParserBase::Pos & pos, - Expected & expected, - bool id_mode, - String & res) + bool parseNameOrID(IParserBase::Pos & pos, Expected & expected, bool id_mode, String & res) { return IParserBase::wrapParseImpl(pos, [&] { @@ -39,20 +35,21 @@ namespace }); } - bool parseBeforeExcept( IParserBase::Pos & pos, Expected & expected, bool id_mode, bool allow_all, + bool allow_any, bool allow_current_user, - Strings & names, bool & all, + Strings & names, bool & current_user) { bool res_all = false; - bool res_current_user = false; Strings res_names; + bool res_current_user = false; + Strings res_with_roles_names; auto parse_element = [&] { @@ -65,6 +62,12 @@ namespace return true; } + if (allow_any && ParserKeyword{"ANY"}.ignore(pos, expected)) + { + res_all = true; + return true; + } + if (allow_current_user && parseCurrentUserTag(pos, expected)) { res_current_user = true; @@ -72,7 +75,7 @@ namespace } String name; - if (parseRoleNameOrID(pos, expected, id_mode, name)) + if (parseNameOrID(pos, expected, id_mode, name)) { res_names.emplace_back(std::move(name)); return true; @@ -85,8 +88,8 @@ namespace return false; names = std::move(res_names); - all = res_all; current_user = res_current_user; + all = res_all; return true; } @@ -98,13 +101,12 @@ 
namespace Strings & except_names, bool & except_current_user) { - return IParserBase::wrapParseImpl(pos, [&] - { + return IParserBase::wrapParseImpl(pos, [&] { if (!ParserKeyword{"EXCEPT"}.ignore(pos, expected)) return false; bool unused; - return parseBeforeExcept(pos, expected, id_mode, false, allow_current_user, except_names, unused, except_current_user); + return parseBeforeExcept(pos, expected, id_mode, false, false, allow_current_user, unused, except_names, except_current_user); }); } } @@ -112,13 +114,13 @@ namespace bool ParserRolesOrUsersSet::parseImpl(Pos & pos, ASTPtr & node, Expected & expected) { + bool all = false; Strings names; bool current_user = false; - bool all = false; Strings except_names; bool except_current_user = false; - if (!parseBeforeExcept(pos, expected, id_mode, allow_all, allow_current_user, names, all, current_user)) + if (!parseBeforeExcept(pos, expected, id_mode, allow_all, allow_any, allow_current_user, all, names, current_user)) return false; parseExceptAndAfterExcept(pos, expected, id_mode, allow_current_user, except_names, except_current_user); @@ -132,9 +134,10 @@ bool ParserRolesOrUsersSet::parseImpl(Pos & pos, ASTPtr & node, Expected & expec result->all = all; result->except_names = std::move(except_names); result->except_current_user = except_current_user; + result->allow_users = allow_users; + result->allow_roles = allow_roles; result->id_mode = id_mode; - result->allow_user_names = allow_user_names; - result->allow_role_names = allow_role_names; + result->use_keyword_any = all && allow_any && !allow_all; node = result; return true; } diff --git a/src/Parsers/ParserRolesOrUsersSet.h b/src/Parsers/ParserRolesOrUsersSet.h index c71012e874c..9ae9937e784 100644 --- a/src/Parsers/ParserRolesOrUsersSet.h +++ b/src/Parsers/ParserRolesOrUsersSet.h @@ -6,15 +6,17 @@ namespace DB { /** Parses a string like this: - * {role|CURRENT_USER} [,...] | NONE | ALL | ALL EXCEPT {role|CURRENT_USER} [,...] + * {user_name | role_name | CURRENT_USER | ALL | NONE} [,...] 
+ * [EXCEPT {user_name | role_name | CURRENT_USER | ALL | NONE} [,...]] */ class ParserRolesOrUsersSet : public IParserBase { public: ParserRolesOrUsersSet & allowAll(bool allow_all_ = true) { allow_all = allow_all_; return *this; } - ParserRolesOrUsersSet & allowUserNames(bool allow_user_names_ = true) { allow_user_names = allow_user_names_; return *this; } - ParserRolesOrUsersSet & allowRoleNames(bool allow_role_names_ = true) { allow_role_names = allow_role_names_; return *this; } + ParserRolesOrUsersSet & allowAny(bool allow_any_ = true) { allow_any = allow_any_; return *this; } + ParserRolesOrUsersSet & allowUsers(bool allow_users_ = true) { allow_users = allow_users_; return *this; } ParserRolesOrUsersSet & allowCurrentUser(bool allow_current_user_ = true) { allow_current_user = allow_current_user_; return *this; } + ParserRolesOrUsersSet & allowRoles(bool allow_roles_ = true) { allow_roles = allow_roles_; return *this; } ParserRolesOrUsersSet & useIDMode(bool id_mode_ = true) { id_mode = id_mode_; return *this; } protected: @@ -23,9 +25,10 @@ protected: private: bool allow_all = false; - bool allow_user_names = false; - bool allow_role_names = false; + bool allow_any = false; + bool allow_users = false; bool allow_current_user = false; + bool allow_roles = false; bool id_mode = false; }; diff --git a/src/Parsers/ParserSetRoleQuery.cpp b/src/Parsers/ParserSetRoleQuery.cpp index e8734f8dfc1..678474af040 100644 --- a/src/Parsers/ParserSetRoleQuery.cpp +++ b/src/Parsers/ParserSetRoleQuery.cpp @@ -15,12 +15,12 @@ namespace { ASTPtr ast; ParserRolesOrUsersSet roles_p; - roles_p.allowRoleNames().allowAll(); + roles_p.allowRoles().allowAll(); if (!roles_p.parse(pos, ast, expected)) return false; roles = typeid_cast>(ast); - roles->allow_user_names = false; + roles->allow_users = false; return true; }); } @@ -34,12 +34,12 @@ namespace ASTPtr ast; ParserRolesOrUsersSet users_p; - users_p.allowUserNames().allowCurrentUser(); + users_p.allowUsers().allowCurrentUser(); if (!users_p.parse(pos, ast, expected)) return false; to_users = typeid_cast>(ast); - to_users->allow_role_names = false; + to_users->allow_roles = false; return true; }); } diff --git a/src/Parsers/ParserShowGrantsQuery.cpp b/src/Parsers/ParserShowGrantsQuery.cpp index d25527754be..bd9e4012771 100644 --- a/src/Parsers/ParserShowGrantsQuery.cpp +++ b/src/Parsers/ParserShowGrantsQuery.cpp @@ -19,7 +19,7 @@ bool ParserShowGrantsQuery::parseImpl(Pos & pos, ASTPtr & node, Expected & expec { ASTPtr for_roles_ast; ParserRolesOrUsersSet for_roles_p; - for_roles_p.allowUserNames().allowRoleNames().allowAll().allowCurrentUser(); + for_roles_p.allowUsers().allowRoles().allowAll().allowCurrentUser(); if (!for_roles_p.parse(pos, for_roles_ast, expected)) return false; diff --git a/src/Parsers/ParserSystemQuery.cpp b/src/Parsers/ParserSystemQuery.cpp index 491037da9a9..a1487468ab3 100644 --- a/src/Parsers/ParserSystemQuery.cpp +++ b/src/Parsers/ParserSystemQuery.cpp @@ -57,7 +57,35 @@ bool ParserSystemQuery::parseImpl(IParser::Pos & pos, ASTPtr & node, Expected & return false; break; } + case Type::RELOAD_MODEL: + { + String cluster_str; + if (ParserKeyword{"ON"}.ignore(pos, expected)) + { + if (!ASTQueryWithOnCluster::parse(pos, cluster_str, expected)) + return false; + } + res->cluster = cluster_str; + ASTPtr ast; + if (ParserStringLiteral{}.parse(pos, ast, expected)) + { + res->target_model = ast->as().value.safeGet(); + } + else + { + ParserIdentifier model_parser; + ASTPtr model; + String target_model; + if (!model_parser.parse(pos, 
model, expected)) + return false; + + if (!tryGetIdentifierNameInto(model, res->target_model)) + return false; + } + + break; + } case Type::DROP_REPLICA: { ASTPtr ast; @@ -106,6 +134,17 @@ bool ParserSystemQuery::parseImpl(IParser::Pos & pos, ASTPtr & node, Expected & return false; break; + case Type::RESTART_DISK: + { + ASTPtr ast; + if (ParserIdentifier{}.parse(pos, ast, expected)) + res->disk = ast->as().name(); + else + return false; + + break; + } + case Type::STOP_DISTRIBUTED_SENDS: case Type::START_DISTRIBUTED_SENDS: case Type::FLUSH_DISTRIBUTED: diff --git a/src/Parsers/tests/CMakeLists.txt b/src/Parsers/examples/CMakeLists.txt similarity index 100% rename from src/Parsers/tests/CMakeLists.txt rename to src/Parsers/examples/CMakeLists.txt diff --git a/src/Parsers/tests/create_parser.cpp b/src/Parsers/examples/create_parser.cpp similarity index 100% rename from src/Parsers/tests/create_parser.cpp rename to src/Parsers/examples/create_parser.cpp diff --git a/src/Parsers/tests/create_parser_fuzzer.cpp b/src/Parsers/examples/create_parser_fuzzer.cpp similarity index 100% rename from src/Parsers/tests/create_parser_fuzzer.cpp rename to src/Parsers/examples/create_parser_fuzzer.cpp diff --git a/src/Parsers/tests/lexer.cpp b/src/Parsers/examples/lexer.cpp similarity index 100% rename from src/Parsers/tests/lexer.cpp rename to src/Parsers/examples/lexer.cpp diff --git a/src/Parsers/tests/lexer_fuzzer.cpp b/src/Parsers/examples/lexer_fuzzer.cpp similarity index 100% rename from src/Parsers/tests/lexer_fuzzer.cpp rename to src/Parsers/examples/lexer_fuzzer.cpp diff --git a/src/Parsers/tests/select_parser.cpp b/src/Parsers/examples/select_parser.cpp similarity index 100% rename from src/Parsers/tests/select_parser.cpp rename to src/Parsers/examples/select_parser.cpp diff --git a/src/Parsers/tests/select_parser_fuzzer.cpp b/src/Parsers/examples/select_parser_fuzzer.cpp similarity index 100% rename from src/Parsers/tests/select_parser_fuzzer.cpp rename to src/Parsers/examples/select_parser_fuzzer.cpp diff --git a/src/Parsers/ya.make.in b/src/Parsers/ya.make.in index 0dbd6b5b593..168fe7ceb34 100644 --- a/src/Parsers/ya.make.in +++ b/src/Parsers/ya.make.in @@ -8,7 +8,7 @@ PEERDIR( SRCS( - + ) END() diff --git a/src/Processors/CMakeLists.txt b/src/Processors/CMakeLists.txt index 99ba159eaf4..7e965188b4c 100644 --- a/src/Processors/CMakeLists.txt +++ b/src/Processors/CMakeLists.txt @@ -1,4 +1,4 @@ -if (ENABLE_TESTS) - add_subdirectory (tests) +if (ENABLE_EXAMPLES) + add_subdirectory(examples) endif () diff --git a/src/Processors/DelayedPortsProcessor.cpp b/src/Processors/DelayedPortsProcessor.cpp index ae4ba4659aa..8174619f8ce 100644 --- a/src/Processors/DelayedPortsProcessor.cpp +++ b/src/Processors/DelayedPortsProcessor.cpp @@ -8,9 +8,35 @@ namespace ErrorCodes extern const int LOGICAL_ERROR; } +InputPorts createInputPorts( + const Block & header, + size_t num_ports, + IProcessor::PortNumbers delayed_ports, + bool assert_main_ports_empty) +{ + if (!assert_main_ports_empty) + return InputPorts(num_ports, header); + + InputPorts res; + std::sort(delayed_ports.begin(), delayed_ports.end()); + size_t next_delayed_port = 0; + for (size_t i = 0; i < num_ports; ++i) + { + if (next_delayed_port < delayed_ports.size() && i == delayed_ports[next_delayed_port]) + { + res.emplace_back(header); + ++next_delayed_port; + } + else + res.emplace_back(Block()); + } + + return res; +} + DelayedPortsProcessor::DelayedPortsProcessor( const Block & header, size_t num_ports, const PortNumbers & delayed_ports, bool 
assert_main_ports_empty) - : IProcessor(InputPorts(num_ports, header), + : IProcessor(createInputPorts(header, num_ports, delayed_ports, assert_main_ports_empty), OutputPorts((assert_main_ports_empty ? delayed_ports.size() : num_ports), header)) , num_delayed_ports(delayed_ports.size()) { diff --git a/src/Processors/Executors/PipelineExecutor.cpp b/src/Processors/Executors/PipelineExecutor.cpp index a724f22ed31..b1751dfd030 100644 --- a/src/Processors/Executors/PipelineExecutor.cpp +++ b/src/Processors/Executors/PipelineExecutor.cpp @@ -1,14 +1,15 @@ -#include #include #include -#include #include -#include #include -#include #include +#include +#include +#include +#include #include #include +#include #ifndef NDEBUG #include @@ -740,7 +741,7 @@ void PipelineExecutor::executeImpl(size_t num_threads) bool finished_flag = false; - SCOPE_EXIT( + SCOPE_EXIT_SAFE( if (!finished_flag) { finish(); @@ -766,9 +767,9 @@ void PipelineExecutor::executeImpl(size_t num_threads) if (thread_group) CurrentThread::attachTo(thread_group); - SCOPE_EXIT( - if (thread_group) - CurrentThread::detachQueryIfNotDetached(); + SCOPE_EXIT_SAFE( + if (thread_group) + CurrentThread::detachQueryIfNotDetached(); ); try diff --git a/src/Processors/Executors/PullingAsyncPipelineExecutor.cpp b/src/Processors/Executors/PullingAsyncPipelineExecutor.cpp index f1626414375..9f1999bc4a3 100644 --- a/src/Processors/Executors/PullingAsyncPipelineExecutor.cpp +++ b/src/Processors/Executors/PullingAsyncPipelineExecutor.cpp @@ -5,7 +5,7 @@ #include #include -#include +#include namespace DB { @@ -72,7 +72,7 @@ static void threadFunction(PullingAsyncPipelineExecutor::Data & data, ThreadGrou if (thread_group) CurrentThread::attachTo(thread_group); - SCOPE_EXIT( + SCOPE_EXIT_SAFE( if (thread_group) CurrentThread::detachQueryIfNotDetached(); ); diff --git a/src/Processors/Formats/IInputFormat.cpp b/src/Processors/Formats/IInputFormat.cpp index 0fbc78ea8c0..5594e04dc74 100644 --- a/src/Processors/Formats/IInputFormat.cpp +++ b/src/Processors/Formats/IInputFormat.cpp @@ -5,21 +5,15 @@ namespace DB { -namespace ErrorCodes -{ - extern const int LOGICAL_ERROR; -} - IInputFormat::IInputFormat(Block header, ReadBuffer & in_) : ISource(std::move(header)), in(in_) { + column_mapping = std::make_shared(); } void IInputFormat::resetParser() { - if (in.hasPendingData()) - throw Exception("Unread data in IInputFormat::resetParser. Most likely it's a bug.", ErrorCodes::LOGICAL_ERROR); - + in.ignoreAll(); // those are protected attributes from ISource (I didn't want to propagate resetParser up there) finished = false; got_exception = false; diff --git a/src/Processors/Formats/IInputFormat.h b/src/Processors/Formats/IInputFormat.h index e1537aff6c5..95910bf51e5 100644 --- a/src/Processors/Formats/IInputFormat.h +++ b/src/Processors/Formats/IInputFormat.h @@ -2,9 +2,29 @@ #include +#include + namespace DB { +/// Used to pass info from header between different InputFormats in ParallelParsing +struct ColumnMapping +{ + /// Non-atomic because there is strict `happens-before` between read and write access + /// See InputFormatParallelParsing + bool is_set; + /// Maps indexes of columns in the input file to indexes of table columns + using OptionalIndexes = std::vector>; + OptionalIndexes column_indexes_for_input_fields; + + /// Tracks which columns we have read in a single read() call. + /// For columns that are never read, it is initialized to false when we + /// read the file header, and never changed afterwards. 
+ /// For other columns, it is updated on each read() call. + std::vector read_columns; +}; + +using ColumnMappingPtr = std::shared_ptr; class ReadBuffer; @@ -39,9 +59,17 @@ public: return none; } + /// Must be called from ParallelParsingInputFormat after readSuffix + ColumnMappingPtr getColumnMapping() const { return column_mapping; } + /// Must be called from ParallelParsingInputFormat before readPrefix + void setColumnMapping(ColumnMappingPtr column_mapping_) { column_mapping = column_mapping_; } + size_t getCurrentUnitNumber() const { return current_unit_number; } void setCurrentUnitNumber(size_t current_unit_number_) { current_unit_number = current_unit_number_; } +protected: + ColumnMappingPtr column_mapping{}; + private: /// Number of currently parsed chunk (if parallel parsing is enabled) size_t current_unit_number = 0; diff --git a/src/Processors/Formats/IRowInputFormat.cpp b/src/Processors/Formats/IRowInputFormat.cpp index 79090ae2b89..52e64a9d90d 100644 --- a/src/Processors/Formats/IRowInputFormat.cpp +++ b/src/Processors/Formats/IRowInputFormat.cpp @@ -39,6 +39,16 @@ bool isParseError(int code) || code == ErrorCodes::INCORRECT_DATA; /// For some ReadHelpers } +IRowInputFormat::IRowInputFormat(Block header, ReadBuffer & in_, Params params_) + : IInputFormat(std::move(header), in_), params(params_) +{ + const auto & port_header = getPort().getHeader(); + size_t num_columns = port_header.columns(); + serializations.resize(num_columns); + for (size_t i = 0; i < num_columns; ++i) + serializations[i] = port_header.getByPosition(i).type->getDefaultSerialization(); +} + Chunk IRowInputFormat::generate() { @@ -180,7 +190,7 @@ Chunk IRowInputFormat::generate() if (num_errors && (params.allow_errors_num > 0 || params.allow_errors_ratio > 0)) { Poco::Logger * log = &Poco::Logger::get("IRowInputFormat"); - LOG_TRACE(log, "Skipped {} rows with errors while reading the input stream", num_errors); + LOG_DEBUG(log, "Skipped {} rows with errors while reading the input stream", num_errors); } readSuffix(); diff --git a/src/Processors/Formats/IRowInputFormat.h b/src/Processors/Formats/IRowInputFormat.h index b7863704062..8c600ad7285 100644 --- a/src/Processors/Formats/IRowInputFormat.h +++ b/src/Processors/Formats/IRowInputFormat.h @@ -14,7 +14,7 @@ namespace DB /// Contains extra information about read data. struct RowReadExtension { - /// IRowInputStream.read() output. It contains non zero for columns that actually read from the source and zero otherwise. + /// IRowInputFormat::read output. It contains non zero for columns that actually read from the source and zero otherwise. /// It's used to attach defaults for partially filled rows. 
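Aside: the `read_columns` flags documented above are what drives default-filling for partially filled rows. A minimal self-contained sketch of that contract, with a toy `Column` type standing in for the real `IColumn`:

    #include <iostream>
    #include <string>
    #include <vector>

    // Toy stand-in for one output column (the real code works on IColumn).
    struct Column
    {
        std::string name;
        std::vector<int> data;
        int default_value = 0;
    };

    int main()
    {
        std::vector<Column> columns{{"a", {}, 0}, {"b", {}, 42}};

        // One flag per column, true if read() actually parsed a value for
        // this row (cf. RowReadExtension::read_columns).
        std::vector<bool> read_columns{true, false};

        columns[0].data.push_back(7); // the value that was present in the input

        // Attach defaults for the partially filled row, so every column
        // ends up with the same number of rows.
        for (size_t i = 0; i < columns.size(); ++i)
            if (!read_columns[i])
                columns[i].data.push_back(columns[i].default_value);

        for (const auto & col : columns)
            std::cout << col.name << " -> " << col.data.back() << '\n';
    }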
std::vector read_columns; }; @@ -40,13 +40,7 @@ class IRowInputFormat : public IInputFormat public: using Params = RowInputFormatParams; - IRowInputFormat( - Block header, - ReadBuffer & in_, - Params params_) - : IInputFormat(std::move(header), in_), params(params_) - { - } + IRowInputFormat(Block header, ReadBuffer & in_, Params params_); Chunk generate() override; @@ -76,6 +70,8 @@ protected: size_t getTotalRows() const { return total_rows; } + Serializations serializations; + private: Params params; diff --git a/src/Processors/Formats/IRowOutputFormat.cpp b/src/Processors/Formats/IRowOutputFormat.cpp index f5f01643f4e..b714844feea 100644 --- a/src/Processors/Formats/IRowOutputFormat.cpp +++ b/src/Processors/Formats/IRowOutputFormat.cpp @@ -10,6 +10,16 @@ namespace ErrorCodes extern const int LOGICAL_ERROR; } +IRowOutputFormat::IRowOutputFormat(const Block & header, WriteBuffer & out_, const Params & params_) + : IOutputFormat(header, out_) + , types(header.getDataTypes()) + , params(params_) +{ + serializations.reserve(types.size()); + for (const auto & type : types) + serializations.push_back(type->getDefaultSerialization()); +} + void IRowOutputFormat::consume(DB::Chunk chunk) { writePrefixIfNot(); @@ -82,7 +92,7 @@ void IRowOutputFormat::write(const Columns & columns, size_t row_num) if (i != 0) writeFieldDelimiter(); - writeField(*columns[i], *types[i], row_num); + writeField(*columns[i], *serializations[i], row_num); } writeRowEndDelimiter(); diff --git a/src/Processors/Formats/IRowOutputFormat.h b/src/Processors/Formats/IRowOutputFormat.h index 4fb94f7b7f7..c35d93b6133 100644 --- a/src/Processors/Formats/IRowOutputFormat.h +++ b/src/Processors/Formats/IRowOutputFormat.h @@ -25,6 +25,7 @@ class IRowOutputFormat : public IOutputFormat { protected: DataTypes types; + Serializations serializations; bool first_row = true; void consume(Chunk chunk) override; @@ -35,10 +36,7 @@ protected: public: using Params = RowOutputFormatParams; - IRowOutputFormat(const Block & header, WriteBuffer & out_, const Params & params_) - : IOutputFormat(header, out_), types(header.getDataTypes()), params(params_) - { - } + IRowOutputFormat(const Block & header, WriteBuffer & out_, const Params & params_); /** Write a row. * Default implementation calls methods to write single values and delimiters @@ -50,7 +48,7 @@ public: virtual void writeTotals(const Columns & columns, size_t row_num); /** Write single value. */ - virtual void writeField(const IColumn & column, const IDataType & type, size_t row_num) = 0; + virtual void writeField(const IColumn & column, const ISerialization & serialization, size_t row_num) = 0; /** Write delimiter. 
*/ virtual void writeFieldDelimiter() {} /// delimiter between values diff --git a/src/Processors/Formats/Impl/ArrowBlockInputFormat.cpp b/src/Processors/Formats/Impl/ArrowBlockInputFormat.cpp index 4edef1f1365..52d2cf98c25 100644 --- a/src/Processors/Formats/Impl/ArrowBlockInputFormat.cpp +++ b/src/Processors/Formats/Impl/ArrowBlockInputFormat.cpp @@ -24,7 +24,6 @@ namespace ErrorCodes ArrowBlockInputFormat::ArrowBlockInputFormat(ReadBuffer & in_, const Block & header_, bool stream_) : IInputFormat(header_, in_), stream{stream_} { - prepareReader(); } Chunk ArrowBlockInputFormat::generate() @@ -35,12 +34,18 @@ Chunk ArrowBlockInputFormat::generate() if (stream) { + if (!stream_reader) + prepareReader(); + batch_result = stream_reader->Next(); if (batch_result.ok() && !(*batch_result)) return res; } else { + if (!file_reader) + prepareReader(); + if (record_batch_current >= record_batch_total) return res; @@ -71,14 +76,14 @@ void ArrowBlockInputFormat::resetParser() stream_reader.reset(); else file_reader.reset(); - prepareReader(); + record_batch_current = 0; } void ArrowBlockInputFormat::prepareReader() { if (stream) { - auto stream_reader_status = arrow::ipc::RecordBatchStreamReader::Open(asArrowFile(in)); + auto stream_reader_status = arrow::ipc::RecordBatchStreamReader::Open(std::make_unique(in)); if (!stream_reader_status.ok()) throw Exception(ErrorCodes::UNKNOWN_EXCEPTION, "Error while opening a table: {}", stream_reader_status.status().ToString()); @@ -101,7 +106,7 @@ void ArrowBlockInputFormat::prepareReader() record_batch_current = 0; } -void registerInputFormatProcessorArrow(FormatFactory &factory) +void registerInputFormatProcessorArrow(FormatFactory & factory) { factory.registerInputFormatProcessor( "Arrow", @@ -112,7 +117,7 @@ void registerInputFormatProcessorArrow(FormatFactory &factory) { return std::make_shared(buf, sample, false); }); - + factory.markFormatAsColumnOriented("Arrow"); factory.registerInputFormatProcessor( "ArrowStream", [](ReadBuffer & buf, diff --git a/src/Processors/Formats/Impl/ArrowBufferedStreams.cpp b/src/Processors/Formats/Impl/ArrowBufferedStreams.cpp index c783e10debb..9582e0c3312 100644 --- a/src/Processors/Formats/Impl/ArrowBufferedStreams.cpp +++ b/src/Processors/Formats/Impl/ArrowBufferedStreams.cpp @@ -55,26 +55,23 @@ arrow::Status RandomAccessFileFromSeekableReadBuffer::Close() arrow::Result RandomAccessFileFromSeekableReadBuffer::Tell() const { - return arrow::Result(in.getPosition()); + return in.getPosition(); } arrow::Result RandomAccessFileFromSeekableReadBuffer::Read(int64_t nbytes, void * out) { - int64_t bytes_read = in.readBig(reinterpret_cast(out), nbytes); - return arrow::Result(bytes_read); + return in.readBig(reinterpret_cast(out), nbytes); } arrow::Result> RandomAccessFileFromSeekableReadBuffer::Read(int64_t nbytes) { - auto buffer_status = arrow::AllocateBuffer(nbytes); - ARROW_RETURN_NOT_OK(buffer_status); + ARROW_ASSIGN_OR_RAISE(auto buffer, arrow::AllocateResizableBuffer(nbytes)) + ARROW_ASSIGN_OR_RAISE(int64_t bytes_read, Read(nbytes, buffer->mutable_data())) - auto shared_buffer = std::shared_ptr(std::move(std::move(*buffer_status))); + if (bytes_read < nbytes) + RETURN_NOT_OK(buffer->Resize(bytes_read)); - size_t n = in.readBig(reinterpret_cast(shared_buffer->mutable_data()), nbytes); - - auto read_buffer = arrow::SliceBuffer(shared_buffer, 0, n); - return arrow::Result>(shared_buffer); + return buffer; } arrow::Status RandomAccessFileFromSeekableReadBuffer::Seek(int64_t position) @@ -83,6 +80,43 @@ arrow::Status 
RandomAccessFileFromSeekableReadBuffer::Seek(int64_t position) return arrow::Status::OK(); } + +ArrowInputStreamFromReadBuffer::ArrowInputStreamFromReadBuffer(ReadBuffer & in_) : in(in_), is_open{true} +{ +} + +arrow::Result ArrowInputStreamFromReadBuffer::Read(int64_t nbytes, void * out) +{ + return in.readBig(reinterpret_cast(out), nbytes); +} + +arrow::Result> ArrowInputStreamFromReadBuffer::Read(int64_t nbytes) +{ + ARROW_ASSIGN_OR_RAISE(auto buffer, arrow::AllocateResizableBuffer(nbytes)) + ARROW_ASSIGN_OR_RAISE(int64_t bytes_read, Read(nbytes, buffer->mutable_data())) + + if (bytes_read < nbytes) + RETURN_NOT_OK(buffer->Resize(bytes_read)); + + return buffer; +} + +arrow::Status ArrowInputStreamFromReadBuffer::Abort() +{ + return arrow::Status(); +} + +arrow::Result ArrowInputStreamFromReadBuffer::Tell() const +{ + return in.count(); +} + +arrow::Status ArrowInputStreamFromReadBuffer::Close() +{ + is_open = false; + return arrow::Status(); +} + std::shared_ptr asArrowFile(ReadBuffer & in) { if (auto * fd_in = dynamic_cast(&in)) diff --git a/src/Processors/Formats/Impl/ArrowBufferedStreams.h b/src/Processors/Formats/Impl/ArrowBufferedStreams.h index bb94535549c..a10a5bcabdb 100644 --- a/src/Processors/Formats/Impl/ArrowBufferedStreams.h +++ b/src/Processors/Formats/Impl/ArrowBufferedStreams.h @@ -61,6 +61,24 @@ private: ARROW_DISALLOW_COPY_AND_ASSIGN(RandomAccessFileFromSeekableReadBuffer); }; +class ArrowInputStreamFromReadBuffer : public arrow::io::InputStream +{ +public: + explicit ArrowInputStreamFromReadBuffer(ReadBuffer & in); + arrow::Result Read(int64_t nbytes, void* out) override; + arrow::Result> Read(int64_t nbytes) override; + arrow::Status Abort() override; + arrow::Result Tell() const override; + arrow::Status Close() override; + bool closed() const override { return !is_open; } + +private: + ReadBuffer & in; + bool is_open = false; + + ARROW_DISALLOW_COPY_AND_ASSIGN(ArrowInputStreamFromReadBuffer); +}; + std::shared_ptr asArrowFile(ReadBuffer & in); } diff --git a/src/Processors/Formats/Impl/AvroRowInputFormat.cpp b/src/Processors/Formats/Impl/AvroRowInputFormat.cpp index a8d71790f41..95ee42b4d09 100644 --- a/src/Processors/Formats/Impl/AvroRowInputFormat.cpp +++ b/src/Processors/Formats/Impl/AvroRowInputFormat.cpp @@ -554,7 +554,7 @@ AvroDeserializer::Action AvroDeserializer::createAction(const Block & header, co } } -AvroDeserializer::AvroDeserializer(const Block & header, avro::ValidSchema schema, const FormatSettings & format_settings) +AvroDeserializer::AvroDeserializer(const Block & header, avro::ValidSchema schema, bool allow_missing_fields) { const auto & schema_root = schema.root(); if (schema_root->type() != avro::AVRO_RECORD) @@ -565,7 +565,7 @@ AvroDeserializer::AvroDeserializer(const Block & header, avro::ValidSchema schem column_found.resize(header.columns()); row_action = createAction(header, schema_root); // fail on missing fields when allow_missing_fields = false - if (!format_settings.avro.allow_missing_fields) + if (!allow_missing_fields) { for (size_t i = 0; i < header.columns(); ++i) { @@ -592,19 +592,24 @@ void AvroDeserializer::deserializeRow(MutableColumns & columns, avro::Decoder & AvroRowInputFormat::AvroRowInputFormat(const Block & header_, ReadBuffer & in_, Params params_, const FormatSettings & format_settings_) - : IRowInputFormat(header_, in_, params_) - , file_reader(std::make_unique(in_)) - , deserializer(output.getHeader(), file_reader.dataSchema(), format_settings_) + : IRowInputFormat(header_, in_, params_), + 
allow_missing_fields(format_settings_.avro.allow_missing_fields) { - file_reader.init(); +} + +void AvroRowInputFormat::readPrefix() +{ + file_reader_ptr = std::make_unique(std::make_unique(in)); + deserializer_ptr = std::make_unique(output.getHeader(), file_reader_ptr->dataSchema(), allow_missing_fields); + file_reader_ptr->init(); } bool AvroRowInputFormat::readRow(MutableColumns & columns, RowReadExtension &ext) { - if (file_reader.hasMore()) + if (file_reader_ptr->hasMore()) { - file_reader.decr(); - deserializer.deserializeRow(columns, file_reader.decoder(), ext); + file_reader_ptr->decr(); + deserializer_ptr->deserializeRow(columns, file_reader_ptr->decoder(), ext); return true; } return false; @@ -781,7 +786,7 @@ const AvroDeserializer & AvroConfluentRowInputFormat::getOrCreateDeserializer(Sc if (it == deserializer_cache.end()) { auto schema = schema_registry->getSchema(schema_id); - AvroDeserializer deserializer(output.getHeader(), schema, format_settings); + AvroDeserializer deserializer(output.getHeader(), schema, format_settings.avro.allow_missing_fields); it = deserializer_cache.emplace(schema_id, deserializer).first; } return it->second; diff --git a/src/Processors/Formats/Impl/AvroRowInputFormat.h b/src/Processors/Formats/Impl/AvroRowInputFormat.h index e3de3bf59a7..5617b4a7661 100644 --- a/src/Processors/Formats/Impl/AvroRowInputFormat.h +++ b/src/Processors/Formats/Impl/AvroRowInputFormat.h @@ -25,7 +25,7 @@ namespace DB class AvroDeserializer { public: - AvroDeserializer(const Block & header, avro::ValidSchema schema, const FormatSettings & format_settings); + AvroDeserializer(const Block & header, avro::ValidSchema schema, bool allow_missing_fields); void deserializeRow(MutableColumns & columns, avro::Decoder & decoder, RowReadExtension & ext) const; private: @@ -107,12 +107,15 @@ class AvroRowInputFormat : public IRowInputFormat { public: AvroRowInputFormat(const Block & header_, ReadBuffer & in_, Params params_, const FormatSettings & format_settings_); - virtual bool readRow(MutableColumns & columns, RowReadExtension & ext) override; + bool readRow(MutableColumns & columns, RowReadExtension & ext) override; + void readPrefix() override; + String getName() const override { return "AvroRowInputFormat"; } private: - avro::DataFileReaderBase file_reader; - AvroDeserializer deserializer; + std::unique_ptr file_reader_ptr; + std::unique_ptr deserializer_ptr; + bool allow_missing_fields; }; /// Confluent framing + Avro binary datum encoding. Mainly used for Kafka. 
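Aside: the net effect of the Avro changes above is that `file_reader_ptr` and `deserializer_ptr` are now constructed lazily in `readPrefix()` rather than in the constructor, so the input stream is not touched until rows are actually pulled. A minimal self-contained sketch of this deferred-construction pattern, using toy stand-in types (not the real `avro::DataFileReaderBase` API):

    #include <iostream>
    #include <memory>
    #include <string>

    // Toy stand-in for a reader that parses the file header/schema as soon
    // as it is constructed (the expensive part we want to postpone).
    struct FileReader
    {
        explicit FileReader(const std::string & schema_) : schema(schema_)
        {
            std::cout << "header/schema read: " << schema << '\n';
        }
        std::string schema;
    };

    class RowInput
    {
    public:
        // The constructor stays cheap: nothing reads from the source yet,
        // which matters when the format object is created before any data
        // has arrived (e.g. a Kafka consumer with an empty topic).
        explicit RowInput(std::string source_) : source(std::move(source_)) {}

        // Deferred construction, mirroring the readPrefix() change above.
        void readPrefix() { reader = std::make_unique<FileReader>(source); }

        bool readRow() { return reader != nullptr; }

    private:
        std::string source;
        std::unique_ptr<FileReader> reader;
    };

    int main()
    {
        RowInput input("record { x: int }");
        std::cout << "format constructed, header not read yet\n";
        input.readPrefix(); // the expensive work happens only here
        std::cout << "have rows: " << std::boolalpha << input.readRow() << '\n';
    }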
diff --git a/src/Processors/Formats/Impl/AvroRowOutputFormat.h b/src/Processors/Formats/Impl/AvroRowOutputFormat.h index 08370154d9a..8d0581d3307 100644 --- a/src/Processors/Formats/Impl/AvroRowOutputFormat.h +++ b/src/Processors/Formats/Impl/AvroRowOutputFormat.h @@ -48,7 +48,7 @@ public: String getName() const override { return "AvroRowOutputFormat"; } void write(const Columns & columns, size_t row_num) override; - void writeField(const IColumn &, const IDataType &, size_t) override {} + void writeField(const IColumn &, const ISerialization &, size_t) override {} virtual void writePrefix() override; virtual void writeSuffix() override; diff --git a/src/Processors/Formats/Impl/BinaryRowInputFormat.cpp b/src/Processors/Formats/Impl/BinaryRowInputFormat.cpp index f49f521d474..36b57e242d7 100644 --- a/src/Processors/Formats/Impl/BinaryRowInputFormat.cpp +++ b/src/Processors/Formats/Impl/BinaryRowInputFormat.cpp @@ -20,7 +20,7 @@ bool BinaryRowInputFormat::readRow(MutableColumns & columns, RowReadExtension &) size_t num_columns = columns.size(); for (size_t i = 0; i < num_columns; ++i) - getPort().getHeader().getByPosition(i).type->deserializeBinary(*columns[i], in); + serializations[i]->deserializeBinary(*columns[i], in); return true; } diff --git a/src/Processors/Formats/Impl/BinaryRowOutputFormat.cpp b/src/Processors/Formats/Impl/BinaryRowOutputFormat.cpp index d74a0a075fe..424eb375fa3 100644 --- a/src/Processors/Formats/Impl/BinaryRowOutputFormat.cpp +++ b/src/Processors/Formats/Impl/BinaryRowOutputFormat.cpp @@ -41,9 +41,9 @@ void BinaryRowOutputFormat::writePrefix() } } -void BinaryRowOutputFormat::writeField(const IColumn & column, const IDataType & type, size_t row_num) +void BinaryRowOutputFormat::writeField(const IColumn & column, const ISerialization & serialization, size_t row_num) { - type.serializeBinary(column, row_num, out); + serialization.serializeBinary(column, row_num, out); } diff --git a/src/Processors/Formats/Impl/BinaryRowOutputFormat.h b/src/Processors/Formats/Impl/BinaryRowOutputFormat.h index 562ed7b18aa..36a62098b75 100644 --- a/src/Processors/Formats/Impl/BinaryRowOutputFormat.h +++ b/src/Processors/Formats/Impl/BinaryRowOutputFormat.h @@ -21,7 +21,7 @@ public: String getName() const override { return "BinaryRowOutputFormat"; } - void writeField(const IColumn & column, const IDataType & type, size_t row_num) override; + void writeField(const IColumn & column, const ISerialization & serialization, size_t row_num) override; void writePrefix() override; String getContentType() const override { return "application/octet-stream"; } diff --git a/src/Processors/Formats/Impl/CSVRowInputFormat.cpp b/src/Processors/Formats/Impl/CSVRowInputFormat.cpp index f7f08411dfa..4ccc0db4cfe 100644 --- a/src/Processors/Formats/Impl/CSVRowInputFormat.cpp +++ b/src/Processors/Formats/Impl/CSVRowInputFormat.cpp @@ -4,7 +4,7 @@ #include #include #include -#include +#include #include @@ -55,13 +55,13 @@ void CSVRowInputFormat::addInputColumn(const String & column_name) { if (format_settings.skip_unknown_fields) { - column_indexes_for_input_fields.push_back(std::nullopt); + column_mapping->column_indexes_for_input_fields.push_back(std::nullopt); return; } throw Exception( "Unknown field found in CSV header: '" + column_name + "' " + - "at position " + std::to_string(column_indexes_for_input_fields.size()) + + "at position " + std::to_string(column_mapping->column_indexes_for_input_fields.size()) + "\nSet the 'input_format_skip_unknown_fields' parameter explicitly to ignore and proceed", 
ErrorCodes::INCORRECT_DATA ); @@ -69,11 +69,11 @@ void CSVRowInputFormat::addInputColumn(const String & column_name) const auto column_index = column_it->second; - if (read_columns[column_index]) + if (column_mapping->read_columns[column_index]) throw Exception("Duplicate field found while parsing CSV header: " + column_name, ErrorCodes::INCORRECT_DATA); - read_columns[column_index] = true; - column_indexes_for_input_fields.emplace_back(column_index); + column_mapping->read_columns[column_index] = true; + column_mapping->column_indexes_for_input_fields.emplace_back(column_index); } static void skipEndOfLine(ReadBuffer & in) @@ -145,6 +145,16 @@ static void skipRow(ReadBuffer & in, const FormatSettings::CSV & settings, size_ } } +void CSVRowInputFormat::setupAllColumnsByTableSchema() +{ + const auto & header = getPort().getHeader(); + column_mapping->read_columns.assign(header.columns(), true); + column_mapping->column_indexes_for_input_fields.resize(header.columns()); + + for (size_t i = 0; i < column_mapping->column_indexes_for_input_fields.size(); ++i) + column_mapping->column_indexes_for_input_fields[i] = i; +} + void CSVRowInputFormat::readPrefix() { @@ -155,7 +165,9 @@ void CSVRowInputFormat::readPrefix() size_t num_columns = data_types.size(); const auto & header = getPort().getHeader(); - if (with_names) + /// This is a bit of abstraction leakage, but we have almost the same code in other places. + /// Thus, we check if this InputFormat is working with the "real" beginning of the data in case of parallel parsing. + if (with_names && getCurrentUnitNumber() == 0) { /// This CSV file has a header row with column names. Depending on the /// settings, use it or skip it. @@ -163,7 +175,7 @@ void CSVRowInputFormat::readPrefix() { /// Look at the file header to see which columns we have there. /// The missing columns are filled with defaults. - read_columns.assign(header.columns(), false); + column_mapping->read_columns.assign(header.columns(), false); do { String column_name; @@ -177,7 +189,7 @@ void CSVRowInputFormat::readPrefix() skipDelimiter(in, format_settings.csv.delimiter, true); - for (auto read_column : read_columns) + for (auto read_column : column_mapping->read_columns) { if (!read_column) { @@ -189,18 +201,13 @@ void CSVRowInputFormat::readPrefix() return; } else + { skipRow(in, format_settings.csv, num_columns); + setupAllColumnsByTableSchema(); + } } - - /// The default: map each column of the file to the column of the table with - /// the same index. - read_columns.assign(header.columns(), true); - column_indexes_for_input_fields.resize(header.columns()); - - for (size_t i = 0; i < column_indexes_for_input_fields.size(); ++i) - { - column_indexes_for_input_fields[i] = i; - } + else if (!column_mapping->is_set) + setupAllColumnsByTableSchema(); } @@ -216,17 +223,19 @@ bool CSVRowInputFormat::readRow(MutableColumns & columns, RowReadExtension & ext /// it doesn't have to check it. 
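Aside: `addInputColumn` above is what fills the shared `column_mapping` from a CSV header row. A rough self-contained sketch of that header-to-schema mapping, with plain standard-library containers standing in for the real format classes:

    #include <iostream>
    #include <optional>
    #include <stdexcept>
    #include <string>
    #include <unordered_map>
    #include <vector>

    int main()
    {
        // Table schema, name -> position (cf. column_indexes_by_names).
        const std::unordered_map<std::string, size_t> table{{"id", 0}, {"name", 1}, {"ts", 2}};

        // Header row of the input file; "extra" is unknown to the table.
        const std::vector<std::string> header{"name", "extra", "id"};
        const bool skip_unknown_fields = true;

        // File position -> table position; std::nullopt marks fields that
        // must be skipped while parsing (cf. column_indexes_for_input_fields).
        std::vector<std::optional<size_t>> mapping;
        std::vector<bool> read_columns(table.size(), false);

        for (const auto & name : header)
        {
            const auto it = table.find(name);
            if (it == table.end())
            {
                if (!skip_unknown_fields)
                    throw std::runtime_error("Unknown field found in header: " + name);
                mapping.push_back(std::nullopt);
                continue;
            }
            read_columns[it->second] = true;
            mapping.push_back(it->second);
        }

        // Table columns never mentioned in the header ("ts" here) keep
        // read_columns == false and are later filled with defaults.
        for (size_t i = 0; i < mapping.size(); ++i)
            std::cout << "file column " << i << " -> "
                      << (mapping[i] ? std::to_string(*mapping[i]) : "skipped") << '\n';
    }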
bool have_default_columns = have_always_default_columns; - ext.read_columns.assign(read_columns.size(), true); + ext.read_columns.assign(column_mapping->read_columns.size(), true); const auto delimiter = format_settings.csv.delimiter; - for (size_t file_column = 0; file_column < column_indexes_for_input_fields.size(); ++file_column) + for (size_t file_column = 0; file_column < column_mapping->column_indexes_for_input_fields.size(); ++file_column) { - const auto & table_column = column_indexes_for_input_fields[file_column]; - const bool is_last_file_column = file_column + 1 == column_indexes_for_input_fields.size(); + const auto & table_column = column_mapping->column_indexes_for_input_fields[file_column]; + const bool is_last_file_column = file_column + 1 == column_mapping->column_indexes_for_input_fields.size(); if (table_column) { skipWhitespacesAndTabs(in); - ext.read_columns[*table_column] = readField(*columns[*table_column], data_types[*table_column], is_last_file_column); + ext.read_columns[*table_column] = readField(*columns[*table_column], data_types[*table_column], + serializations[*table_column], is_last_file_column); + if (!ext.read_columns[*table_column]) have_default_columns = true; skipWhitespacesAndTabs(in); @@ -243,9 +252,9 @@ bool CSVRowInputFormat::readRow(MutableColumns & columns, RowReadExtension & ext if (have_default_columns) { - for (size_t i = 0; i < read_columns.size(); i++) + for (size_t i = 0; i < column_mapping->read_columns.size(); i++) { - if (!read_columns[i]) + if (!column_mapping->read_columns[i]) { /// The column value for this row is going to be overwritten /// with default by the caller, but the general assumption is @@ -266,7 +275,7 @@ bool CSVRowInputFormat::parseRowAndPrintDiagnosticInfo(MutableColumns & columns, { const char delimiter = format_settings.csv.delimiter; - for (size_t file_column = 0; file_column < column_indexes_for_input_fields.size(); ++file_column) + for (size_t file_column = 0; file_column < column_mapping->column_indexes_for_input_fields.size(); ++file_column) { if (file_column == 0 && in.eof()) { @@ -275,10 +284,10 @@ bool CSVRowInputFormat::parseRowAndPrintDiagnosticInfo(MutableColumns & columns, } skipWhitespacesAndTabs(in); - if (column_indexes_for_input_fields[file_column].has_value()) + if (column_mapping->column_indexes_for_input_fields[file_column].has_value()) { const auto & header = getPort().getHeader(); - size_t col_idx = column_indexes_for_input_fields[file_column].value(); + size_t col_idx = column_mapping->column_indexes_for_input_fields[file_column].value(); if (!deserializeFieldAndPrintDiagnosticInfo(header.getByPosition(col_idx).name, data_types[col_idx], *columns[col_idx], out, file_column)) return false; @@ -294,7 +303,7 @@ bool CSVRowInputFormat::parseRowAndPrintDiagnosticInfo(MutableColumns & columns, skipWhitespacesAndTabs(in); /// Delimiters - if (file_column + 1 == column_indexes_for_input_fields.size()) + if (file_column + 1 == column_mapping->column_indexes_for_input_fields.size()) { if (in.eof()) return false; @@ -356,10 +365,11 @@ void CSVRowInputFormat::syncAfterError() void CSVRowInputFormat::tryDeserializeField(const DataTypePtr & type, IColumn & column, size_t file_column) { - if (column_indexes_for_input_fields[file_column]) + const auto & index = column_mapping->column_indexes_for_input_fields[file_column]; + if (index) { - const bool is_last_file_column = file_column + 1 == column_indexes_for_input_fields.size(); - readField(column, type, is_last_file_column); + const bool is_last_file_column = 
file_column + 1 == column_mapping->column_indexes_for_input_fields.size(); + readField(column, type, serializations[*index], is_last_file_column); } else { @@ -368,7 +378,7 @@ void CSVRowInputFormat::tryDeserializeField(const DataTypePtr & type, IColumn & } } -bool CSVRowInputFormat::readField(IColumn & column, const DataTypePtr & type, bool is_last_file_column) +bool CSVRowInputFormat::readField(IColumn & column, const DataTypePtr & type, const SerializationPtr & serialization, bool is_last_file_column) { const bool at_delimiter = !in.eof() && *in.position() == format_settings.csv.delimiter; const bool at_last_column_line_end = is_last_file_column @@ -391,12 +401,12 @@ bool CSVRowInputFormat::readField(IColumn & column, const DataTypePtr & type, bo else if (format_settings.null_as_default && !type->isNullable()) { /// If value is null but type is not nullable then use default value instead. - return DataTypeNullable::deserializeTextCSV(column, in, format_settings, type); + return SerializationNullable::deserializeTextCSVImpl(column, in, format_settings, serialization); } else { /// Read the column normally. - type->deserializeAsTextCSV(column, in, format_settings); + serialization->deserializeTextCSV(column, in, format_settings); return true; } } @@ -404,8 +414,8 @@ bool CSVRowInputFormat::readField(IColumn & column, const DataTypePtr & type, bo void CSVRowInputFormat::resetParser() { RowInputFormatWithDiagnosticInfo::resetParser(); - column_indexes_for_input_fields.clear(); - read_columns.clear(); + column_mapping->column_indexes_for_input_fields.clear(); + column_mapping->read_columns.clear(); have_always_default_columns = false; } @@ -492,6 +502,7 @@ static std::pair fileSegmentationEngineCSVImpl(ReadBuffer & in, DB void registerFileSegmentationEngineCSV(FormatFactory & factory) { factory.registerFileSegmentationEngine("CSV", &fileSegmentationEngineCSVImpl); + factory.registerFileSegmentationEngine("CSVWithNames", &fileSegmentationEngineCSVImpl); } } diff --git a/src/Processors/Formats/Impl/CSVRowInputFormat.h b/src/Processors/Formats/Impl/CSVRowInputFormat.h index c884eb6c3db..230acc51268 100644 --- a/src/Processors/Formats/Impl/CSVRowInputFormat.h +++ b/src/Processors/Formats/Impl/CSVRowInputFormat.h @@ -38,22 +38,13 @@ private: using IndexesMap = std::unordered_map; IndexesMap column_indexes_by_names; - /// Maps indexes of columns in the input file to indexes of table columns - using OptionalIndexes = std::vector>; - OptionalIndexes column_indexes_for_input_fields; - - /// Tracks which columns we have read in a single read() call. - /// For columns that are never read, it is initialized to false when we - /// read the file header, and never changed afterwards. - /// For other columns, it is updated on each read() call. - std::vector read_columns; - /// Whether we have any columns that are not read from file at all, /// and must be always initialized with defaults. 
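Aside: `readField` above returns false when a `\N` value arrives in a non-nullable column, so the caller can substitute the column default (this is the `null_as_default` path through `SerializationNullable`). A toy sketch of that return-value contract; `readIntFieldCSV` is a made-up helper, not ClickHouse API:

    #include <iostream>
    #include <string>
    #include <vector>

    // Returns true if a real value was parsed, false if the field was NULL
    // ("\N" in CSV) and the non-nullable column should get its default
    // value instead -- the same contract as readField() above.
    bool readIntFieldCSV(const std::string & field, std::vector<int> & column)
    {
        if (field == "\\N")
        {
            column.push_back(0); // placeholder; the caller overwrites it
            return false;
        }
        column.push_back(std::stoi(field));
        return true;
    }

    int main()
    {
        std::vector<int> column;
        std::vector<bool> read_columns;

        for (const std::string & field : {"7", "\\N", "19"})
            read_columns.push_back(readIntFieldCSV(field, column));

        for (size_t i = 0; i < column.size(); ++i)
            std::cout << column[i] << (read_columns[i] ? "" : " (defaulted)") << '\n';
    }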
bool have_always_default_columns = false; void addInputColumn(const String & column_name); + void setupAllColumnsByTableSchema(); bool parseRowAndPrintDiagnosticInfo(MutableColumns & columns, WriteBuffer & out) override; void tryDeserializeField(const DataTypePtr & type, IColumn & column, size_t file_column) override; bool isGarbageAfterField(size_t, ReadBuffer::Position pos) override @@ -61,7 +52,7 @@ private: return *pos != '\n' && *pos != '\r' && *pos != format_settings.csv.delimiter && *pos != ' ' && *pos != '\t'; } - bool readField(IColumn & column, const DataTypePtr & type, bool is_last_file_column); + bool readField(IColumn & column, const DataTypePtr & type, const SerializationPtr & serialization, bool is_last_file_column); }; } diff --git a/src/Processors/Formats/Impl/CSVRowOutputFormat.cpp b/src/Processors/Formats/Impl/CSVRowOutputFormat.cpp index 90fc768d311..b9945ddec15 100644 --- a/src/Processors/Formats/Impl/CSVRowOutputFormat.cpp +++ b/src/Processors/Formats/Impl/CSVRowOutputFormat.cpp @@ -40,9 +40,9 @@ void CSVRowOutputFormat::doWritePrefix() } -void CSVRowOutputFormat::writeField(const IColumn & column, const IDataType & type, size_t row_num) +void CSVRowOutputFormat::writeField(const IColumn & column, const ISerialization & serialization, size_t row_num) { - type.serializeAsTextCSV(column, row_num, out, format_settings); + serialization.serializeTextCSV(column, row_num, out, format_settings); } diff --git a/src/Processors/Formats/Impl/CSVRowOutputFormat.h b/src/Processors/Formats/Impl/CSVRowOutputFormat.h index 55803aeb53e..780a6c4d3ce 100644 --- a/src/Processors/Formats/Impl/CSVRowOutputFormat.h +++ b/src/Processors/Formats/Impl/CSVRowOutputFormat.h @@ -24,7 +24,7 @@ public: String getName() const override { return "CSVRowOutputFormat"; } - void writeField(const IColumn & column, const IDataType & type, size_t row_num) override; + void writeField(const IColumn & column, const ISerialization & serialization, size_t row_num) override; void writeFieldDelimiter() override; void writeRowEndDelimiter() override; void writeBeforeTotals() override; diff --git a/src/Processors/Formats/Impl/ConstantExpressionTemplate.cpp b/src/Processors/Formats/Impl/ConstantExpressionTemplate.cpp index 3e870b409d3..288c6ee09ef 100644 --- a/src/Processors/Formats/Impl/ConstantExpressionTemplate.cpp +++ b/src/Processors/Formats/Impl/ConstantExpressionTemplate.cpp @@ -144,9 +144,9 @@ class ReplaceLiteralsVisitor { public: LiteralsInfo replaced_literals; - const Context & context; + ContextPtr context; - explicit ReplaceLiteralsVisitor(const Context & context_) : context(context_) { } + explicit ReplaceLiteralsVisitor(ContextPtr context_) : context(context_) { } void visit(ASTPtr & ast, bool force_nullable) { @@ -293,7 +293,7 @@ private: /// E.g. template of "position('some string', 'other string') != 0" is /// ["position", "(", DataTypeString, ",", DataTypeString, ")", "!=", DataTypeUInt64] ConstantExpressionTemplate::TemplateStructure::TemplateStructure(LiteralsInfo & replaced_literals, TokenIterator expression_begin, TokenIterator expression_end, - ASTPtr & expression, const IDataType & result_type, bool null_as_default_, const Context & context) + ASTPtr & expression, const IDataType & result_type, bool null_as_default_, ContextPtr context) { null_as_default = null_as_default_; @@ -305,6 +305,7 @@ ConstantExpressionTemplate::TemplateStructure::TemplateStructure(LiteralsInfo & /// Make sequence of tokens and determine IDataType by Field::Types:Which for each literal. 
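Aside: as with the row formats earlier in this diff, `ConstantExpressionTemplate` now resolves one serialization per literal up front instead of dispatching through the data type on every use. A small sketch of the resolve-once, use-per-row pattern, with `std::function` standing in for `ISerialization`:

    #include <functional>
    #include <iostream>
    #include <string>
    #include <vector>

    // Stand-in for ISerialization: how to render one value of one column.
    using Serialization = std::function<void(std::ostream &, const std::string &)>;

    // Resolve the serializer for a type name once (cf. getDefaultSerialization).
    Serialization makeSerialization(const std::string & type_name)
    {
        if (type_name == "String")
            return [](std::ostream & out, const std::string & v) { out << '\'' << v << '\''; };
        return [](std::ostream & out, const std::string & v) { out << v; };
    }

    int main()
    {
        const std::vector<std::string> types{"UInt64", "String"};

        // Built once next to the header, then reused for every row -- the
        // same shape as the `serializations` members added in this diff.
        std::vector<Serialization> serializations;
        for (const auto & t : types)
            serializations.push_back(makeSerialization(t));

        const std::vector<std::vector<std::string>> rows{{"1", "a"}, {"2", "b"}};
        for (const auto & row : rows)
        {
            for (size_t i = 0; i < row.size(); ++i)
            {
                if (i != 0)
                    std::cout << ", ";
                serializations[i](std::cout, row[i]);
            }
            std::cout << '\n';
        }
    }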
token_after_literal_idx.reserve(replaced_literals.size()); special_parser.resize(replaced_literals.size()); + serializations.resize(replaced_literals.size()); TokenIterator prev_end = expression_begin; for (size_t i = 0; i < replaced_literals.size(); ++i) @@ -325,6 +326,8 @@ ConstantExpressionTemplate::TemplateStructure::TemplateStructure(LiteralsInfo & literals.insert({nullptr, info.type, info.dummy_column_name}); prev_end = info.literal->end.value(); + + serializations[i] = info.type->getDefaultSerialization(); } while (prev_end < expression_end) @@ -374,7 +377,7 @@ ConstantExpressionTemplate::Cache::getFromCacheOrConstruct(const DataTypePtr & r TokenIterator expression_begin, TokenIterator expression_end, const ASTPtr & expression_, - const Context & context, + ContextPtr context, bool * found_in_cache, const String & salt) { @@ -382,7 +385,7 @@ ConstantExpressionTemplate::Cache::getFromCacheOrConstruct(const DataTypePtr & r ASTPtr expression = expression_->clone(); ReplaceLiteralsVisitor visitor(context); visitor.visit(expression, result_column_type->isNullable() || null_as_default); - ReplaceQueryParameterVisitor param_visitor(context.getQueryParameters()); + ReplaceQueryParameterVisitor param_visitor(context->getQueryParameters()); param_visitor.visit(expression); size_t template_hash = TemplateStructure::getTemplateHash(expression, visitor.replaced_literals, result_column_type, null_as_default, salt); @@ -458,7 +461,7 @@ bool ConstantExpressionTemplate::tryParseExpression(ReadBuffer & istr, const For return false; } else - type->deserializeAsTextQuoted(*columns[cur_column], istr, format_settings); + structure->serializations[cur_column]->deserializeTextQuoted(*columns[cur_column], istr, format_settings); ++cur_column; } diff --git a/src/Processors/Formats/Impl/ConstantExpressionTemplate.h b/src/Processors/Formats/Impl/ConstantExpressionTemplate.h index 299ce4c9925..6659243df63 100644 --- a/src/Processors/Formats/Impl/ConstantExpressionTemplate.h +++ b/src/Processors/Formats/Impl/ConstantExpressionTemplate.h @@ -23,7 +23,7 @@ class ConstantExpressionTemplate : boost::noncopyable struct TemplateStructure : boost::noncopyable { TemplateStructure(LiteralsInfo & replaced_literals, TokenIterator expression_begin, TokenIterator expression_end, - ASTPtr & expr, const IDataType & result_type, bool null_as_default_, const Context & context); + ASTPtr & expr, const IDataType & result_type, bool null_as_default_, ContextPtr context); static void addNodesToCastResult(const IDataType & result_column_type, ASTPtr & expr, bool null_as_default); static size_t getTemplateHash(const ASTPtr & expression, const LiteralsInfo & replaced_literals, @@ -36,6 +36,7 @@ class ConstantExpressionTemplate : boost::noncopyable Block literals; ExpressionActionsPtr actions_on_literals; + Serializations serializations; std::vector special_parser; bool null_as_default; @@ -58,7 +59,7 @@ public: TokenIterator expression_begin, TokenIterator expression_end, const ASTPtr & expression_, - const Context & context, + ContextPtr context, bool * found_in_cache = nullptr, const String & salt = {}); }; diff --git a/src/Processors/Formats/Impl/JSONCompactEachRowRowInputFormat.cpp b/src/Processors/Formats/Impl/JSONCompactEachRowRowInputFormat.cpp index 1fc5041b1f3..682a4fbf69a 100644 --- a/src/Processors/Formats/Impl/JSONCompactEachRowRowInputFormat.cpp +++ b/src/Processors/Formats/Impl/JSONCompactEachRowRowInputFormat.cpp @@ -4,7 +4,7 @@ #include #include #include -#include +#include namespace DB { @@ -202,6 +202,7 @@ void 
JSONCompactEachRowRowInputFormat::readField(size_t index, MutableColumns & { read_columns[index] = true; const auto & type = data_types[index]; + const auto & serialization = serializations[index]; if (yield_strings) { @@ -211,16 +212,16 @@ void JSONCompactEachRowRowInputFormat::readField(size_t index, MutableColumns & ReadBufferFromString buf(str); if (format_settings.null_as_default && !type->isNullable()) - read_columns[index] = DataTypeNullable::deserializeWholeText(*columns[index], buf, format_settings, type); + read_columns[index] = SerializationNullable::deserializeWholeTextImpl(*columns[index], buf, format_settings, serialization); else - type->deserializeAsWholeText(*columns[index], buf, format_settings); + serialization->deserializeWholeText(*columns[index], buf, format_settings); } else { if (format_settings.null_as_default && !type->isNullable()) - read_columns[index] = DataTypeNullable::deserializeTextJSON(*columns[index], in, format_settings, type); + read_columns[index] = SerializationNullable::deserializeTextJSONImpl(*columns[index], in, format_settings, serialization); else - type->deserializeAsTextJSON(*columns[index], in, format_settings); + serialization->deserializeTextJSON(*columns[index], in, format_settings); } } catch (Exception & e) diff --git a/src/Processors/Formats/Impl/JSONCompactEachRowRowOutputFormat.cpp b/src/Processors/Formats/Impl/JSONCompactEachRowRowOutputFormat.cpp index 11134499984..a3055873c01 100644 --- a/src/Processors/Formats/Impl/JSONCompactEachRowRowOutputFormat.cpp +++ b/src/Processors/Formats/Impl/JSONCompactEachRowRowOutputFormat.cpp @@ -22,17 +22,17 @@ JSONCompactEachRowRowOutputFormat::JSONCompactEachRowRowOutputFormat(WriteBuffer } -void JSONCompactEachRowRowOutputFormat::writeField(const IColumn & column, const IDataType & type, size_t row_num) +void JSONCompactEachRowRowOutputFormat::writeField(const IColumn & column, const ISerialization & serialization, size_t row_num) { if (yield_strings) { WriteBufferFromOwnString buf; - type.serializeAsText(column, row_num, buf, settings); + serialization.serializeText(column, row_num, buf, settings); writeJSONString(buf.str(), out, settings); } else - type.serializeAsTextJSON(column, row_num, out, settings); + serialization.serializeTextJSON(column, row_num, out, settings); } @@ -63,7 +63,7 @@ void JSONCompactEachRowRowOutputFormat::writeTotals(const Columns & columns, siz if (i != 0) JSONCompactEachRowRowOutputFormat::writeFieldDelimiter(); - JSONCompactEachRowRowOutputFormat::writeField(*columns[i], *types[i], row_num); + JSONCompactEachRowRowOutputFormat::writeField(*columns[i], *serializations[i], row_num); } writeCString("]\n", out); } diff --git a/src/Processors/Formats/Impl/JSONCompactEachRowRowOutputFormat.h b/src/Processors/Formats/Impl/JSONCompactEachRowRowOutputFormat.h index 3d4b80247b8..792eb906f4b 100644 --- a/src/Processors/Formats/Impl/JSONCompactEachRowRowOutputFormat.h +++ b/src/Processors/Formats/Impl/JSONCompactEachRowRowOutputFormat.h @@ -31,7 +31,7 @@ public: void writeTotals(const Columns & columns, size_t row_num) override; void writeAfterTotals() override {} - void writeField(const IColumn & column, const IDataType & type, size_t row_num) override; + void writeField(const IColumn & column, const ISerialization & serialization, size_t row_num) override; void writeFieldDelimiter() override; void writeRowStartDelimiter() override; void writeRowEndDelimiter() override; diff --git a/src/Processors/Formats/Impl/JSONCompactRowOutputFormat.cpp 
b/src/Processors/Formats/Impl/JSONCompactRowOutputFormat.cpp index 97304afbebd..cefaded6912 100644 --- a/src/Processors/Formats/Impl/JSONCompactRowOutputFormat.cpp +++ b/src/Processors/Formats/Impl/JSONCompactRowOutputFormat.cpp @@ -18,17 +18,17 @@ JSONCompactRowOutputFormat::JSONCompactRowOutputFormat( } -void JSONCompactRowOutputFormat::writeField(const IColumn & column, const IDataType & type, size_t row_num) +void JSONCompactRowOutputFormat::writeField(const IColumn & column, const ISerialization & serialization, size_t row_num) { if (yield_strings) { WriteBufferFromOwnString buf; - type.serializeAsText(column, row_num, buf, settings); + serialization.serializeText(column, row_num, buf, settings); writeJSONString(buf.str(), *ostr, settings); } else - type.serializeAsTextJSON(column, row_num, *ostr, settings); + serialization.serializeTextJSON(column, row_num, *ostr, settings); ++field_number; } @@ -82,7 +82,7 @@ void JSONCompactRowOutputFormat::writeExtremesElement(const char * title, const if (i != 0) writeTotalsFieldDelimiter(); - writeField(*columns[i], *types[i], row_num); + writeField(*columns[i], *serializations[i], row_num); } writeChar(']', *ostr); diff --git a/src/Processors/Formats/Impl/JSONCompactRowOutputFormat.h b/src/Processors/Formats/Impl/JSONCompactRowOutputFormat.h index 71ba3579837..9bb433c50b1 100644 --- a/src/Processors/Formats/Impl/JSONCompactRowOutputFormat.h +++ b/src/Processors/Formats/Impl/JSONCompactRowOutputFormat.h @@ -25,7 +25,7 @@ public: String getName() const override { return "JSONCompactRowOutputFormat"; } - void writeField(const IColumn & column, const IDataType & type, size_t row_num) override; + void writeField(const IColumn & column, const ISerialization & serialization, size_t row_num) override; void writeFieldDelimiter() override; void writeRowStartDelimiter() override; void writeRowEndDelimiter() override; @@ -36,9 +36,9 @@ public: protected: void writeExtremesElement(const char * title, const Columns & columns, size_t row_num) override; - void writeTotalsField(const IColumn & column, const IDataType & type, size_t row_num) override + void writeTotalsField(const IColumn & column, const ISerialization & serialization, size_t row_num) override { - return writeField(column, type, row_num); + return writeField(column, serialization, row_num); } void writeTotalsFieldDelimiter() override; diff --git a/src/Processors/Formats/Impl/JSONEachRowRowInputFormat.cpp b/src/Processors/Formats/Impl/JSONEachRowRowInputFormat.cpp index 720b606be4f..e0f6514295b 100644 --- a/src/Processors/Formats/Impl/JSONEachRowRowInputFormat.cpp +++ b/src/Processors/Formats/Impl/JSONEachRowRowInputFormat.cpp @@ -5,7 +5,7 @@ #include #include #include -#include +#include namespace DB { @@ -140,6 +140,7 @@ void JSONEachRowRowInputFormat::readField(size_t index, MutableColumns & columns { seen_columns[index] = read_columns[index] = true; const auto & type = getPort().getHeader().getByPosition(index).type; + const auto & serialization = serializations[index]; if (yield_strings) { @@ -149,16 +150,16 @@ void JSONEachRowRowInputFormat::readField(size_t index, MutableColumns & columns ReadBufferFromString buf(str); if (format_settings.null_as_default && !type->isNullable()) - read_columns[index] = DataTypeNullable::deserializeWholeText(*columns[index], buf, format_settings, type); + read_columns[index] = SerializationNullable::deserializeWholeTextImpl(*columns[index], buf, format_settings, serialization); else - type->deserializeAsWholeText(*columns[index], buf, format_settings); + 
serialization->deserializeWholeText(*columns[index], buf, format_settings); } else { if (format_settings.null_as_default && !type->isNullable()) - read_columns[index] = DataTypeNullable::deserializeTextJSON(*columns[index], in, format_settings, type); + read_columns[index] = SerializationNullable::deserializeTextJSONImpl(*columns[index], in, format_settings, serialization); else - type->deserializeAsTextJSON(*columns[index], in, format_settings); + serialization->deserializeTextJSON(*columns[index], in, format_settings); } } catch (Exception & e) diff --git a/src/Processors/Formats/Impl/JSONEachRowRowOutputFormat.cpp b/src/Processors/Formats/Impl/JSONEachRowRowOutputFormat.cpp index 30cd0660682..a69499de813 100644 --- a/src/Processors/Formats/Impl/JSONEachRowRowOutputFormat.cpp +++ b/src/Processors/Formats/Impl/JSONEachRowRowOutputFormat.cpp @@ -28,7 +28,7 @@ JSONEachRowRowOutputFormat::JSONEachRowRowOutputFormat( } -void JSONEachRowRowOutputFormat::writeField(const IColumn & column, const IDataType & type, size_t row_num) +void JSONEachRowRowOutputFormat::writeField(const IColumn & column, const ISerialization & serialization, size_t row_num) { writeString(fields[field_number], out); writeChar(':', out); @@ -37,11 +37,11 @@ void JSONEachRowRowOutputFormat::writeField(const IColumn & column, const IDataT { WriteBufferFromOwnString buf; - type.serializeAsText(column, row_num, buf, settings); + serialization.serializeText(column, row_num, buf, settings); writeJSONString(buf.str(), out, settings); } else - type.serializeAsTextJSON(column, row_num, out, settings); + serialization.serializeTextJSON(column, row_num, out, settings); ++field_number; } diff --git a/src/Processors/Formats/Impl/JSONEachRowRowOutputFormat.h b/src/Processors/Formats/Impl/JSONEachRowRowOutputFormat.h index 38760379056..10b15f3e7b2 100644 --- a/src/Processors/Formats/Impl/JSONEachRowRowOutputFormat.h +++ b/src/Processors/Formats/Impl/JSONEachRowRowOutputFormat.h @@ -23,7 +23,7 @@ public: String getName() const override { return "JSONEachRowRowOutputFormat"; } - void writeField(const IColumn & column, const IDataType & type, size_t row_num) override; + void writeField(const IColumn & column, const ISerialization & serialization, size_t row_num) override; void writeFieldDelimiter() override; void writeRowStartDelimiter() override; void writeRowEndDelimiter() override; diff --git a/src/Processors/Formats/Impl/JSONRowOutputFormat.cpp b/src/Processors/Formats/Impl/JSONRowOutputFormat.cpp index 517f126060f..38c6eefac1c 100644 --- a/src/Processors/Formats/Impl/JSONRowOutputFormat.cpp +++ b/src/Processors/Formats/Impl/JSONRowOutputFormat.cpp @@ -71,7 +71,7 @@ void JSONRowOutputFormat::writePrefix() } -void JSONRowOutputFormat::writeField(const IColumn & column, const IDataType & type, size_t row_num) +void JSONRowOutputFormat::writeField(const IColumn & column, const ISerialization & serialization, size_t row_num) { writeCString("\t\t\t", *ostr); writeString(fields[field_number].name, *ostr); @@ -81,16 +81,16 @@ void JSONRowOutputFormat::writeField(const IColumn & column, const IDataType & t { WriteBufferFromOwnString buf; - type.serializeAsText(column, row_num, buf, settings); + serialization.serializeText(column, row_num, buf, settings); writeJSONString(buf.str(), *ostr, settings); } else - type.serializeAsTextJSON(column, row_num, *ostr, settings); + serialization.serializeTextJSON(column, row_num, *ostr, settings); ++field_number; } -void JSONRowOutputFormat::writeTotalsField(const IColumn & column, const IDataType & type, 
size_t row_num) +void JSONRowOutputFormat::writeTotalsField(const IColumn & column, const ISerialization & serialization, size_t row_num) { writeCString("\t\t", *ostr); writeString(fields[field_number].name, *ostr); @@ -100,11 +100,11 @@ void JSONRowOutputFormat::writeTotalsField(const IColumn & column, const IDataTy { WriteBufferFromOwnString buf; - type.serializeAsText(column, row_num, buf, settings); + serialization.serializeText(column, row_num, buf, settings); writeJSONString(buf.str(), *ostr, settings); } else - type.serializeAsTextJSON(column, row_num, *ostr, settings); + serialization.serializeTextJSON(column, row_num, *ostr, settings); ++field_number; } @@ -159,7 +159,7 @@ void JSONRowOutputFormat::writeTotals(const Columns & columns, size_t row_num) if (i != 0) writeTotalsFieldDelimiter(); - writeTotalsField(*columns[i], *types[i], row_num); + writeTotalsField(*columns[i], *serializations[i], row_num); } } @@ -191,7 +191,7 @@ void JSONRowOutputFormat::writeExtremesElement(const char * title, const Columns if (i != 0) writeFieldDelimiter(); - writeField(*columns[i], *types[i], row_num); + writeField(*columns[i], *serializations[i], row_num); } writeChar('\n', *ostr); diff --git a/src/Processors/Formats/Impl/JSONRowOutputFormat.h b/src/Processors/Formats/Impl/JSONRowOutputFormat.h index 88b74afbabd..75d4aa5d201 100644 --- a/src/Processors/Formats/Impl/JSONRowOutputFormat.h +++ b/src/Processors/Formats/Impl/JSONRowOutputFormat.h @@ -25,7 +25,7 @@ public: String getName() const override { return "JSONRowOutputFormat"; } - void writeField(const IColumn & column, const IDataType & type, size_t row_num) override; + void writeField(const IColumn & column, const ISerialization & serialization, size_t row_num) override; void writeFieldDelimiter() override; void writeRowStartDelimiter() override; void writeRowEndDelimiter() override; @@ -63,7 +63,7 @@ public: String getContentType() const override { return "application/json; charset=UTF-8"; } protected: - virtual void writeTotalsField(const IColumn & column, const IDataType & type, size_t row_num); + virtual void writeTotalsField(const IColumn & column, const ISerialization & serialization, size_t row_num); virtual void writeExtremesElement(const char * title, const Columns & columns, size_t row_num); virtual void writeTotalsFieldDelimiter() { writeFieldDelimiter(); } diff --git a/src/Processors/Formats/Impl/MarkdownRowOutputFormat.cpp b/src/Processors/Formats/Impl/MarkdownRowOutputFormat.cpp index 51bba07d995..ee5d4193a45 100644 --- a/src/Processors/Formats/Impl/MarkdownRowOutputFormat.cpp +++ b/src/Processors/Formats/Impl/MarkdownRowOutputFormat.cpp @@ -21,16 +21,13 @@ void MarkdownRowOutputFormat::writePrefix() } writeCString("\n|", out); String left_alignment = ":-|"; - String central_alignment = ":-:|"; String right_alignment = "-:|"; for (size_t i = 0; i < columns; ++i) { - if (isInteger(types[i])) + if (types[i]->shouldAlignRightInPrettyFormats()) writeString(right_alignment, out); - else if (isString(types[i])) - writeString(left_alignment, out); else - writeString(central_alignment, out); + writeString(left_alignment, out); } writeChar('\n', out); } @@ -50,9 +47,9 @@ void MarkdownRowOutputFormat::writeRowEndDelimiter() writeCString(" |\n", out); } -void MarkdownRowOutputFormat::writeField(const IColumn & column, const IDataType & type, size_t row_num) +void MarkdownRowOutputFormat::writeField(const IColumn & column, const ISerialization & serialization, size_t row_num) { - type.serializeAsTextEscaped(column, row_num, out, 
format_settings); + serialization.serializeTextEscaped(column, row_num, out, format_settings); } void registerOutputFormatProcessorMarkdown(FormatFactory & factory) diff --git a/src/Processors/Formats/Impl/MarkdownRowOutputFormat.h b/src/Processors/Formats/Impl/MarkdownRowOutputFormat.h index 6bfb763d818..0b2a4dd0b23 100644 --- a/src/Processors/Formats/Impl/MarkdownRowOutputFormat.h +++ b/src/Processors/Formats/Impl/MarkdownRowOutputFormat.h @@ -28,7 +28,7 @@ public: /// Write '|\n' after each row void writeRowEndDelimiter() override ; - void writeField(const IColumn & column, const IDataType & type, size_t row_num) override; + void writeField(const IColumn & column, const ISerialization & serialization, size_t row_num) override; String getName() const override { return "MarkdownRowOutputFormat"; } protected: diff --git a/src/Processors/Formats/Impl/MsgPackRowOutputFormat.h b/src/Processors/Formats/Impl/MsgPackRowOutputFormat.h index b6764ed4a4f..9c66bb9d207 100644 --- a/src/Processors/Formats/Impl/MsgPackRowOutputFormat.h +++ b/src/Processors/Formats/Impl/MsgPackRowOutputFormat.h @@ -25,7 +25,7 @@ public: String getName() const override { return "MsgPackRowOutputFormat"; } void write(const Columns & columns, size_t row_num) override; - void writeField(const IColumn &, const IDataType &, size_t) override {} + void writeField(const IColumn &, const ISerialization &, size_t) override {} void serializeField(const IColumn & column, DataTypePtr data_type, size_t row_num); private: diff --git a/src/Processors/Formats/Impl/MySQLOutputFormat.cpp b/src/Processors/Formats/Impl/MySQLOutputFormat.cpp index f40261b4561..0f73349c271 100644 --- a/src/Processors/Formats/Impl/MySQLOutputFormat.cpp +++ b/src/Processors/Formats/Impl/MySQLOutputFormat.cpp @@ -26,6 +26,10 @@ void MySQLOutputFormat::initialize() const auto & header = getPort(PortKind::Main).getHeader(); data_types = header.getDataTypes(); + serializations.reserve(data_types.size()); + for (const auto & type : data_types) + serializations.emplace_back(type->getDefaultSerialization()); + if (header.columns()) { packet_endpoint->sendPacket(LengthEncodedNumber(header.columns())); @@ -36,7 +40,7 @@ void MySQLOutputFormat::initialize() packet_endpoint->sendPacket(getColumnDefinition(column_name, data_types[i]->getTypeId())); } - if (!(context->mysql.client_capabilities & Capability::CLIENT_DEPRECATE_EOF)) + if (!(getContext()->mysql.client_capabilities & Capability::CLIENT_DEPRECATE_EOF)) { packet_endpoint->sendPacket(EOFPacket(0, 0)); } @@ -51,7 +55,7 @@ void MySQLOutputFormat::consume(Chunk chunk) for (size_t i = 0; i < chunk.getNumRows(); i++) { - ProtocolText::ResultSetRow row_packet(data_types, chunk.getColumns(), i); + ProtocolText::ResultSetRow row_packet(serializations, chunk.getColumns(), i); packet_endpoint->sendPacket(row_packet); } } @@ -60,7 +64,7 @@ void MySQLOutputFormat::finalize() { size_t affected_rows = 0; std::string human_readable_info; - if (QueryStatus * process_list_elem = context->getProcessListElement()) + if (QueryStatus * process_list_elem = getContext()->getProcessListElement()) { CurrentThread::finalizePerformanceCounters(); QueryStatusInfo info = process_list_elem->getInfo(); @@ -74,10 +78,11 @@ void MySQLOutputFormat::finalize() const auto & header = getPort(PortKind::Main).getHeader(); if (header.columns() == 0) - packet_endpoint->sendPacket(OKPacket(0x0, context->mysql.client_capabilities, affected_rows, 0, 0, "", human_readable_info), true); - else - if (context->mysql.client_capabilities & CLIENT_DEPRECATE_EOF) - 
packet_endpoint->sendPacket(OKPacket(0xfe, context->mysql.client_capabilities, affected_rows, 0, 0, "", human_readable_info), true); + packet_endpoint->sendPacket( + OKPacket(0x0, getContext()->mysql.client_capabilities, affected_rows, 0, 0, "", human_readable_info), true); + else if (getContext()->mysql.client_capabilities & CLIENT_DEPRECATE_EOF) + packet_endpoint->sendPacket( + OKPacket(0xfe, getContext()->mysql.client_capabilities, affected_rows, 0, 0, "", human_readable_info), true); else packet_endpoint->sendPacket(EOFPacket(0, 0), true); } diff --git a/src/Processors/Formats/Impl/MySQLOutputFormat.h b/src/Processors/Formats/Impl/MySQLOutputFormat.h index c0300675240..01a892410df 100644 --- a/src/Processors/Formats/Impl/MySQLOutputFormat.h +++ b/src/Processors/Formats/Impl/MySQLOutputFormat.h @@ -15,21 +15,20 @@ namespace DB class IColumn; class IDataType; class WriteBuffer; -class Context; /** A stream for outputting data in a binary line-by-line format. */ -class MySQLOutputFormat final : public IOutputFormat +class MySQLOutputFormat final : public IOutputFormat, WithConstContext { public: MySQLOutputFormat(WriteBuffer & out_, const Block & header_, const FormatSettings & settings_); String getName() const override { return "MySQLOutputFormat"; } - void setContext(const Context & context_) + void setContext(ContextConstPtr context_) { - context = &context_; - packet_endpoint = std::make_unique(out, const_cast(context_.mysql.sequence_id)); /// TODO: fix it + context = context_; + packet_endpoint = std::make_unique(out, const_cast(getContext()->mysql.sequence_id)); /// TODO: fix it } void consume(Chunk) override; @@ -40,13 +39,12 @@ public: void initialize(); private: - bool initialized = false; - const Context * context = nullptr; std::unique_ptr packet_endpoint; FormatSettings format_settings; DataTypes data_types; + Serializations serializations; }; } diff --git a/src/Processors/Formats/Impl/ODBCDriver2BlockOutputFormat.cpp b/src/Processors/Formats/Impl/ODBCDriver2BlockOutputFormat.cpp index 3dd72a7a5c7..7a14966e220 100644 --- a/src/Processors/Formats/Impl/ODBCDriver2BlockOutputFormat.cpp +++ b/src/Processors/Formats/Impl/ODBCDriver2BlockOutputFormat.cpp @@ -23,7 +23,7 @@ static void writeODBCString(WriteBuffer & out, const std::string & str) out.write(str.data(), str.size()); } -void ODBCDriver2BlockOutputFormat::writeRow(const Block & header, const Columns & columns, size_t row_idx, std::string & buffer) +void ODBCDriver2BlockOutputFormat::writeRow(const Serializations & serializations, const Columns & columns, size_t row_idx, std::string & buffer) { size_t num_columns = columns.size(); for (size_t column_idx = 0; column_idx < num_columns; ++column_idx) @@ -39,7 +39,7 @@ void ODBCDriver2BlockOutputFormat::writeRow(const Block & header, const Columns { { WriteBufferFromString text_out(buffer); - header.getByPosition(column_idx).type->serializeAsText(*column, row_idx, text_out, format_settings); + serializations[column_idx]->serializeText(*column, row_idx, text_out, format_settings); } writeODBCString(out, buffer); } @@ -51,9 +51,15 @@ void ODBCDriver2BlockOutputFormat::write(Chunk chunk, PortKind port_kind) String text_value; const auto & header = getPort(port_kind).getHeader(); const auto & columns = chunk.getColumns(); + + size_t num_columns = columns.size(); + Serializations serializations(num_columns); + for (size_t i = 0; i < num_columns; ++i) + serializations[i] = header.getByPosition(i).type->getDefaultSerialization(); + const size_t rows = chunk.getNumRows(); for (size_t i 
= 0; i < rows; ++i) - writeRow(header, columns, i, text_value); + writeRow(serializations, columns, i, text_value); } void ODBCDriver2BlockOutputFormat::consume(Chunk chunk) diff --git a/src/Processors/Formats/Impl/ODBCDriver2BlockOutputFormat.h b/src/Processors/Formats/Impl/ODBCDriver2BlockOutputFormat.h index 2a20c7a0619..4545e429cc2 100644 --- a/src/Processors/Formats/Impl/ODBCDriver2BlockOutputFormat.h +++ b/src/Processors/Formats/Impl/ODBCDriver2BlockOutputFormat.h @@ -45,7 +45,7 @@ private: prefix_written = true; } - void writeRow(const Block & header, const Columns & columns, size_t row_idx, std::string & buffer); + void writeRow(const Serializations & serializations, const Columns & columns, size_t row_idx, std::string & buffer); void write(Chunk chunk, PortKind port_kind); void writePrefix(); }; diff --git a/src/Processors/Formats/Impl/ORCBlockInputFormat.cpp b/src/Processors/Formats/Impl/ORCBlockInputFormat.cpp index 7776a904f1c..6f43addc4ed 100644 --- a/src/Processors/Formats/Impl/ORCBlockInputFormat.cpp +++ b/src/Processors/Formats/Impl/ORCBlockInputFormat.cpp @@ -19,6 +19,13 @@ namespace ErrorCodes extern const int CANNOT_READ_ALL_DATA; } +#define THROW_ARROW_NOT_OK(status) \ + do \ + { \ + if (::arrow::Status _s = (status); !_s.ok()) \ + throw Exception(_s.ToString(), ErrorCodes::BAD_ARGUMENTS); \ + } while (false) + ORCBlockInputFormat::ORCBlockInputFormat(ReadBuffer & in_, Block header_) : IInputFormat(std::move(header_), in_) { } @@ -28,21 +35,26 @@ Chunk ORCBlockInputFormat::generate() { Chunk res; const Block & header = getPort().getHeader(); - if (file_reader) + if (!file_reader) + prepareReader(); + + if (stripe_current >= stripe_total) return res; - arrow::Status open_status = arrow::adapters::orc::ORCFileReader::Open(asArrowFile(in), arrow::default_memory_pool(), &file_reader); - if (!open_status.ok()) - throw Exception(open_status.ToString(), ErrorCodes::BAD_ARGUMENTS); + std::shared_ptr<arrow::RecordBatch> batch_result; + arrow::Status batch_status = file_reader->ReadStripe(stripe_current, include_indices, &batch_result); + if (!batch_status.ok()) + throw ParsingException(ErrorCodes::CANNOT_READ_ALL_DATA, + "Error while reading batch of ORC data: {}", batch_status.ToString()); - std::shared_ptr<arrow::Table> table; - arrow::Status read_status = file_reader->Read(&table); - if (!read_status.ok()) - throw ParsingException{"Error while reading ORC data: " + read_status.ToString(), - ErrorCodes::CANNOT_READ_ALL_DATA}; + auto table_result = arrow::Table::FromRecordBatches({batch_result}); + if (!table_result.ok()) + throw ParsingException(ErrorCodes::CANNOT_READ_ALL_DATA, + "Error while reading batch of ORC data: {}", table_result.status().ToString()); - ArrowColumnToCHColumn::arrowTableToCHChunk(res, table, header, "ORC"); + ++stripe_current; + ArrowColumnToCHColumn::arrowTableToCHChunk(res, *table_result, header, "ORC"); return res; } @@ -51,6 +63,26 @@ void ORCBlockInputFormat::resetParser() { IInputFormat::resetParser(); file_reader.reset(); + include_indices.clear(); + stripe_current = 0; +} + +void ORCBlockInputFormat::prepareReader() +{ + THROW_ARROW_NOT_OK(arrow::adapters::orc::ORCFileReader::Open(asArrowFile(in), arrow::default_memory_pool(), &file_reader)); + stripe_total = file_reader->NumberOfStripes(); + stripe_current = 0; + + std::shared_ptr<arrow::Schema> schema; + THROW_ARROW_NOT_OK(file_reader->ReadSchema(&schema)); + + for (int i = 0; i < schema->num_fields(); ++i) + { + if (getPort().getHeader().has(schema->field(i)->name())) + { + include_indices.push_back(i+1); + } + } }
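
The `prepareReader()` hunk above is where the column projection comes from. As a hedged, self-contained restatement (the helper name and the plain-std types are hypothetical; the real code walks an `arrow::Schema` against the query header exactly as shown in the hunk):

```cpp
#include <string>
#include <unordered_set>
#include <vector>

/// Collect the 1-based indices of the file columns the query actually needs,
/// so that ReadStripe() can skip deserializing everything else.
std::vector<int> selectIncludeIndices(
    const std::vector<std::string> & file_schema,         /// column names in file order
    const std::unordered_set<std::string> & header_names) /// columns requested by the query
{
    std::vector<int> include_indices;
    for (size_t i = 0; i < file_schema.size(); ++i)
        if (header_names.count(file_schema[i]))
            include_indices.push_back(static_cast<int>(i) + 1); /// the ORC adapter numbers fields from 1, matching i+1 above
    return include_indices;
}
```
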
void registerInputFormatProcessorORC(FormatFactory &factory) @@ -64,6 +96,7 @@ void registerInputFormatProcessorORC(FormatFactory &factory) { return std::make_shared<ORCBlockInputFormat>(buf, sample); }); + factory.markFormatAsColumnOriented("ORC"); } } diff --git a/src/Processors/Formats/Impl/ORCBlockInputFormat.h b/src/Processors/Formats/Impl/ORCBlockInputFormat.h index cff42560366..0c78290f3cc 100644 --- a/src/Processors/Formats/Impl/ORCBlockInputFormat.h +++ b/src/Processors/Formats/Impl/ORCBlockInputFormat.h @@ -25,6 +25,15 @@ private: // TODO: check that this class implements every part of its parent std::unique_ptr<arrow::adapters::orc::ORCFileReader> file_reader; + + int stripe_total = 0; + + int stripe_current = 0; + + // indices of columns to read from ORC file + std::vector<int> include_indices; + + void prepareReader(); }; } diff --git a/src/Processors/Formats/Impl/ParallelFormattingOutputFormat.cpp b/src/Processors/Formats/Impl/ParallelFormattingOutputFormat.cpp index 0ebca3661b4..ce7dd1abd51 100644 --- a/src/Processors/Formats/Impl/ParallelFormattingOutputFormat.cpp +++ b/src/Processors/Formats/Impl/ParallelFormattingOutputFormat.cpp @@ -80,9 +80,11 @@ namespace DB } - void ParallelFormattingOutputFormat::collectorThreadFunction() + void ParallelFormattingOutputFormat::collectorThreadFunction(const ThreadGroupStatusPtr & thread_group) { setThreadName("Collector"); + if (thread_group) + CurrentThread::attachToIfDetached(thread_group); try { @@ -135,9 +137,11 @@ namespace DB } - void ParallelFormattingOutputFormat::formatterThreadFunction(size_t current_unit_number) + void ParallelFormattingOutputFormat::formatterThreadFunction(size_t current_unit_number, const ThreadGroupStatusPtr & thread_group) { setThreadName("Formatter"); + if (thread_group) + CurrentThread::attachToIfDetached(thread_group); try { diff --git a/src/Processors/Formats/Impl/ParallelFormattingOutputFormat.h b/src/Processors/Formats/Impl/ParallelFormattingOutputFormat.h index 7e7c44a8aae..8b9e8293c69 100644 --- a/src/Processors/Formats/Impl/ParallelFormattingOutputFormat.h +++ b/src/Processors/Formats/Impl/ParallelFormattingOutputFormat.h @@ -6,6 +6,7 @@ #include #include #include +#include "IO/WriteBufferFromString.h" #include #include #include @@ -75,7 +76,10 @@ public: /// Just heuristic. We need one thread for collecting, one thread for receiving chunks /// and n threads for formatting.
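
Condensed from the two hunks above and the constructor change just below (these are ClickHouse-internal APIs exactly as the diff uses them, so treat this as an excerpt rather than standalone-compilable code): the query thread captures its `ThreadGroupStatusPtr` at the moment the worker is created, and the worker attaches itself to that group, keeping memory tracking and profile events attributed to the originating query.

```cpp
/// At creation time, on the query thread:
collector_thread = ThreadFromGlobalPool([thread_group = CurrentThread::getGroup(), this]
{
    collectorThreadFunction(thread_group);
});

/// Inside the worker:
void ParallelFormattingOutputFormat::collectorThreadFunction(const ThreadGroupStatusPtr & thread_group)
{
    setThreadName("Collector");
    if (thread_group)
        CurrentThread::attachToIfDetached(thread_group); /// adopt the query's thread group
    /// ... collect formatted buffers into the output WriteBuffer ...
}
```
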
processing_units.resize(params.max_threads_for_parallel_formatting + 2); - collector_thread = ThreadFromGlobalPool([&] { collectorThreadFunction(); }); + collector_thread = ThreadFromGlobalPool([thread_group = CurrentThread::getGroup(), this] + { + collectorThreadFunction(thread_group); + }); LOG_TRACE(&Poco::Logger::get("ParallelFormattingOutputFormat"), "Parallel formatting is being used"); } @@ -101,6 +105,15 @@ public: finishAndWait(); } + /// There are no formats which support parallel formatting and progress writing at the same time + void onProgress(const Progress &) override {} + + String getContentType() const override + { + WriteBufferFromOwnString buffer; + return internal_formatter_creator(buffer)->getContentType(); + } + protected: void consume(Chunk chunk) override final { @@ -190,14 +203,17 @@ private: void scheduleFormatterThreadForUnitWithNumber(size_t ticket_number) { - pool.scheduleOrThrowOnError([this, ticket_number] { formatterThreadFunction(ticket_number); }); + pool.scheduleOrThrowOnError([this, thread_group = CurrentThread::getGroup(), ticket_number] + { + formatterThreadFunction(ticket_number, thread_group); + }); } /// Collects all temporary buffers into main WriteBuffer. - void collectorThreadFunction(); + void collectorThreadFunction(const ThreadGroupStatusPtr & thread_group); /// This function is executed in ThreadPool and the only purpose of it is to format one Chunk into a continuous buffer in memory. - void formatterThreadFunction(size_t current_unit_number); + void formatterThreadFunction(size_t current_unit_number, const ThreadGroupStatusPtr & thread_group); }; } diff --git a/src/Processors/Formats/Impl/ParallelParsingInputFormat.cpp b/src/Processors/Formats/Impl/ParallelParsingInputFormat.cpp index d1660b53019..7152c9d9916 100644 --- a/src/Processors/Formats/Impl/ParallelParsingInputFormat.cpp +++ b/src/Processors/Formats/Impl/ParallelParsingInputFormat.cpp @@ -2,14 +2,14 @@ #include #include #include -#include +#include namespace DB { void ParallelParsingInputFormat::segmentatorThreadFunction(ThreadGroupStatusPtr thread_group) { - SCOPE_EXIT( + SCOPE_EXIT_SAFE( if (thread_group) CurrentThread::detachQueryIfNotDetached(); ); @@ -60,7 +60,7 @@ void ParallelParsingInputFormat::segmentatorThreadFunction(ThreadGroupStatusPtr void ParallelParsingInputFormat::parserThreadFunction(ThreadGroupStatusPtr thread_group, size_t current_ticket_number) { - SCOPE_EXIT( + SCOPE_EXIT_SAFE( if (thread_group) CurrentThread::detachQueryIfNotDetached(); ); @@ -89,6 +89,11 @@ void ParallelParsingInputFormat::parserThreadFunction(ThreadGroupStatusPtr threa unit.chunk_ext.chunk.clear(); unit.chunk_ext.block_missing_values.clear(); + /// Propagate column_mapping to other parsers. + /// Note: column_mapping is used only for *WithNames types + if (current_ticket_number != 0) + input_format->setColumnMapping(column_mapping); + // We don't know how many blocks will be. So we have to read them all // until an empty block occurred. Chunk chunk; @@ -100,6 +105,14 @@ void ParallelParsingInputFormat::parserThreadFunction(ThreadGroupStatusPtr threa unit.chunk_ext.block_missing_values.emplace_back(parser.getMissingValues()); } + /// Extract column_mapping from first parser to propagate it to others + if (current_ticket_number == 0) + { + column_mapping = input_format->getColumnMapping(); + column_mapping->is_set = true; + first_parser_finished.set(); + } + // We suppose we will get at least some blocks for a non-empty buffer, // except at the end of file. 
Also see a matching assert in readImpl(). assert(unit.is_last || !unit.chunk_ext.chunk.empty() || parsing_finished); @@ -117,8 +130,6 @@ void ParallelParsingInputFormat::parserThreadFunction(ThreadGroupStatusPtr threa void ParallelParsingInputFormat::onBackgroundException(size_t offset) { - tryLogCurrentException(__PRETTY_FUNCTION__); - std::unique_lock lock(mutex); if (!background_exception) { @@ -129,12 +140,20 @@ void ParallelParsingInputFormat::onBackgroundException(size_t offset) } tryLogCurrentException(__PRETTY_FUNCTION__); parsing_finished = true; + first_parser_finished.set(); reader_condvar.notify_all(); segmentator_condvar.notify_all(); } Chunk ParallelParsingInputFormat::generate() { + /// Delayed launching of segmentator thread + if (unlikely(!parsing_started.exchange(true))) + { + segmentator_thread = ThreadFromGlobalPool( + &ParallelParsingInputFormat::segmentatorThreadFunction, this, CurrentThread::getGroup()); + } + if (isCancelled() || parsing_finished) { /** diff --git a/src/Processors/Formats/Impl/ParallelParsingInputFormat.h b/src/Processors/Formats/Impl/ParallelParsingInputFormat.h index 9dda2dfe55d..dafaf9bed72 100644 --- a/src/Processors/Formats/Impl/ParallelParsingInputFormat.h +++ b/src/Processors/Formats/Impl/ParallelParsingInputFormat.h @@ -10,6 +10,8 @@ #include #include #include +#include +#include namespace DB { @@ -95,8 +97,7 @@ public: // bump into reader thread on wraparound. processing_units.resize(params.max_threads + 2); - segmentator_thread = ThreadFromGlobalPool( - &ParallelParsingInputFormat::segmentatorThreadFunction, this, CurrentThread::getGroup()); + LOG_TRACE(&Poco::Logger::get("ParallelParsingInputFormat"), "Parallel parsing is used"); } ~ParallelParsingInputFormat() override @@ -199,6 +200,9 @@ private: std::condition_variable reader_condvar; std::condition_variable segmentator_condvar; + Poco::Event first_parser_finished; + + std::atomic parsing_started{false}; std::atomic parsing_finished{false}; /// There are multiple "parsers", that's why we use thread pool. @@ -250,6 +254,9 @@ private: { parserThreadFunction(group, ticket_number); }); + /// We have to wait here to possibly extract ColumnMappingPtr from the first parser. 
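
A self-contained sketch of that gate, with simplified stand-ins (`std::promise`/`std::shared_future` here play the role of `Poco::Event` plus the shared `ColumnMappingPtr`; the real classes are the ones in the diff): after dispatching unit 0, the scheduler blocks until the first parser has published the header-derived mapping, and only then dispatches later units, which reuse it via `setColumnMapping()`.

```cpp
#include <future>
#include <memory>
#include <optional>
#include <vector>

struct ColumnMapping
{
    std::vector<std::optional<size_t>> column_indexes_for_input_fields;
    bool is_set = false;
};

std::promise<std::shared_ptr<ColumnMapping>> first_parser_result;
std::shared_future<std::shared_ptr<ColumnMapping>> first_parser_done
    = first_parser_result.get_future().share();

void parserThread(size_t ticket_number)
{
    if (ticket_number == 0)
    {
        auto mapping = std::make_shared<ColumnMapping>(); /// filled while parsing the header row
        mapping->is_set = true;
        first_parser_result.set_value(mapping);           /// analogue of first_parser_finished.set()
    }
    /// later tickets read first_parser_done.get() instead of re-parsing the header
}

void scheduleParser(size_t ticket_number)
{
    /// pool.scheduleOrThrowOnError([=] { parserThread(ticket_number); });
    if (ticket_number == 0)
        first_parser_done.wait(); /// don't schedule unit 1+ until the mapping exists
}
```
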
+ if (ticket_number == 0) + first_parser_finished.wait(); } void finishAndWait() diff --git a/src/Processors/Formats/Impl/ParquetBlockInputFormat.cpp b/src/Processors/Formats/Impl/ParquetBlockInputFormat.cpp index bb55c71b7ca..162185e75b8 100644 --- a/src/Processors/Formats/Impl/ParquetBlockInputFormat.cpp +++ b/src/Processors/Formats/Impl/ParquetBlockInputFormat.cpp @@ -94,6 +94,7 @@ void registerInputFormatProcessorParquet(FormatFactory &factory) { return std::make_shared(buf, sample); }); + factory.markFormatAsColumnOriented("Parquet"); } } diff --git a/src/Processors/Formats/Impl/PostgreSQLOutputFormat.cpp b/src/Processors/Formats/Impl/PostgreSQLOutputFormat.cpp index 50b3def929e..8c4da279fc5 100644 --- a/src/Processors/Formats/Impl/PostgreSQLOutputFormat.cpp +++ b/src/Processors/Formats/Impl/PostgreSQLOutputFormat.cpp @@ -18,7 +18,7 @@ void PostgreSQLOutputFormat::doWritePrefix() initialized = true; const auto & header = getPort(PortKind::Main).getHeader(); - data_types = header.getDataTypes(); + auto data_types = header.getDataTypes(); if (header.columns()) { @@ -29,6 +29,7 @@ void PostgreSQLOutputFormat::doWritePrefix() { const auto & column_name = header.getColumnsWithTypeAndName()[i].name; columns.emplace_back(column_name, data_types[i]->getTypeId()); + serializations.emplace_back(data_types[i]->getDefaultSerialization()); } message_transport.send(PostgreSQLProtocol::Messaging::RowDescription(columns)); } @@ -51,7 +52,7 @@ void PostgreSQLOutputFormat::consume(Chunk chunk) else { WriteBufferFromOwnString ostr; - data_types[j]->serializeAsText(*columns[j], i, ostr, format_settings); + serializations[j]->serializeText(*columns[j], i, ostr, format_settings); row.push_back(std::make_shared(std::move(ostr.str()))); } } diff --git a/src/Processors/Formats/Impl/PostgreSQLOutputFormat.h b/src/Processors/Formats/Impl/PostgreSQLOutputFormat.h index 8ff5aae5067..257fbdff341 100644 --- a/src/Processors/Formats/Impl/PostgreSQLOutputFormat.h +++ b/src/Processors/Formats/Impl/PostgreSQLOutputFormat.h @@ -27,7 +27,7 @@ private: FormatSettings format_settings; PostgreSQLProtocol::Messaging::MessageTransport message_transport; - DataTypes data_types; + Serializations serializations; }; } diff --git a/src/Processors/Formats/Impl/PrettyBlockOutputFormat.cpp b/src/Processors/Formats/Impl/PrettyBlockOutputFormat.cpp index 8bd4d36532d..8e178b6629e 100644 --- a/src/Processors/Formats/Impl/PrettyBlockOutputFormat.cpp +++ b/src/Processors/Formats/Impl/PrettyBlockOutputFormat.cpp @@ -1,4 +1,7 @@ #include +#if defined(OS_SUNOS) +# include +#endif #include #include #include @@ -59,7 +62,8 @@ void PrettyBlockOutputFormat::calculateWidths( { { WriteBufferFromString out_serialize(serialized_value); - elem.type->serializeAsText(*column, j, out_serialize, format_settings); + auto serialization = elem.type->getDefaultSerialization(); + serialization->serializeText(*column, j, out_serialize, format_settings); } /// Avoid calculating width of too long strings by limiting the size in bytes. 
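
Every serialization hunk in this diff is the same mechanical rewrite; isolated, with the ClickHouse types involved, the shape is:

```cpp
/// Before: formats called the text-serialization virtuals on the type itself:
///     type->serializeAsText(column, row_num, out, format_settings);

/// After: the format asks the type for its default ISerialization once
/// (typically cached per column, as above) and calls the serializeText*
/// family on it:
SerializationPtr serialization = type->getDefaultSerialization();
serialization->serializeText(column, row_num, out, format_settings);
```

The method name `getDefaultSerialization` suggests the point of the indirection: it leaves room for a type to offer serializations other than the default one without touching the formats again.
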
@@ -154,6 +158,10 @@ void PrettyBlockOutputFormat::write(const Chunk & chunk, PortKind port_kind) const auto & columns = chunk.getColumns(); const auto & header = getPort(port_kind).getHeader(); + Serializations serializations(num_columns); + for (size_t i = 0; i < num_columns; ++i) + serializations[i] = header.getByPosition(i).type->getDefaultSerialization(); + WidthsPerColumn widths; Widths max_widths; Widths name_widths; @@ -290,11 +298,10 @@ void PrettyBlockOutputFormat::write(const Chunk & chunk, PortKind port_kind) { if (j != 0) writeCString(grid_symbols.bar, out); - const auto & type = *header.getByPosition(j).type; - writeValueWithPadding(*columns[j], type, i, + writeValueWithPadding(*columns[j], *serializations[j], i, widths[j].empty() ? max_widths[j] : widths[j][i], - max_widths[j]); + max_widths[j], type.shouldAlignRightInPrettyFormats()); } writeCString(grid_symbols.bar, out); @@ -313,12 +320,13 @@ void PrettyBlockOutputFormat::write(const Chunk & chunk, PortKind port_kind) void PrettyBlockOutputFormat::writeValueWithPadding( - const IColumn & column, const IDataType & type, size_t row_num, size_t value_width, size_t pad_to_width) + const IColumn & column, const ISerialization & serialization, size_t row_num, + size_t value_width, size_t pad_to_width, bool align_right) { String serialized_value = " "; { WriteBufferFromString out_serialize(serialized_value, WriteBufferFromString::AppendModeTag()); - type.serializeAsText(column, row_num, out_serialize, format_settings); + serialization.serializeText(column, row_num, out_serialize, format_settings); } if (value_width > format_settings.pretty.max_value_width) @@ -348,7 +356,7 @@ void PrettyBlockOutputFormat::writeValueWithPadding( writeChar(' ', out); }; - if (type.shouldAlignRightInPrettyFormats()) + if (align_right) { write_padding(); out.write(serialized_value.data(), serialized_value.size()); diff --git a/src/Processors/Formats/Impl/PrettyBlockOutputFormat.h b/src/Processors/Formats/Impl/PrettyBlockOutputFormat.h index de79fe5ee2a..02b438d2571 100644 --- a/src/Processors/Formats/Impl/PrettyBlockOutputFormat.h +++ b/src/Processors/Formats/Impl/PrettyBlockOutputFormat.h @@ -57,7 +57,8 @@ protected: WidthsPerColumn & widths, Widths & max_padded_widths, Widths & name_widths); void writeValueWithPadding( - const IColumn & column, const IDataType & type, size_t row_num, size_t value_width, size_t pad_to_width); + const IColumn & column, const ISerialization & serialization, size_t row_num, + size_t value_width, size_t pad_to_width, bool align_right); }; } diff --git a/src/Processors/Formats/Impl/PrettyCompactBlockOutputFormat.cpp b/src/Processors/Formats/Impl/PrettyCompactBlockOutputFormat.cpp index cfa669ae8ad..c4902ea4c26 100644 --- a/src/Processors/Formats/Impl/PrettyCompactBlockOutputFormat.cpp +++ b/src/Processors/Formats/Impl/PrettyCompactBlockOutputFormat.cpp @@ -149,6 +149,7 @@ void PrettyCompactBlockOutputFormat::writeBottom(const Widths & max_widths) void PrettyCompactBlockOutputFormat::writeRow( size_t row_num, const Block & header, + const Serializations & serializations, const Columns & columns, const WidthsPerColumn & widths, const Widths & max_widths) @@ -179,7 +180,7 @@ void PrettyCompactBlockOutputFormat::writeRow( const auto & type = *header.getByPosition(j).type; const auto & cur_widths = widths[j].empty() ? 
max_widths[j] : widths[j][row_num]; - writeValueWithPadding(*columns[j], type, row_num, cur_widths, max_widths[j]); + writeValueWithPadding(*columns[j], *serializations[j], row_num, cur_widths, max_widths[j], type.shouldAlignRightInPrettyFormats()); } writeCString(grid_symbols.bar, out); @@ -240,8 +241,13 @@ void PrettyCompactBlockOutputFormat::writeChunk(const Chunk & chunk, PortKind po writeHeader(header, max_widths, name_widths); + size_t num_columns = header.columns(); + Serializations serializations(num_columns); + for (size_t i = 0; i < num_columns; ++i) + serializations[i] = header.getByPosition(i).type->getDefaultSerialization(); + for (size_t i = 0; i < num_rows && total_rows + i < max_rows; ++i) - writeRow(i, header, columns, widths, max_widths); + writeRow(i, header, serializations, columns, widths, max_widths); writeBottom(max_widths); diff --git a/src/Processors/Formats/Impl/PrettyCompactBlockOutputFormat.h b/src/Processors/Formats/Impl/PrettyCompactBlockOutputFormat.h index 90c9d375192..96344397a0c 100644 --- a/src/Processors/Formats/Impl/PrettyCompactBlockOutputFormat.h +++ b/src/Processors/Formats/Impl/PrettyCompactBlockOutputFormat.h @@ -23,6 +23,7 @@ protected: void writeRow( size_t row_num, const Block & header, + const Serializations & serializations, const Columns & columns, const WidthsPerColumn & widths, const Widths & max_widths); diff --git a/src/Processors/Formats/Impl/PrettySpaceBlockOutputFormat.cpp b/src/Processors/Formats/Impl/PrettySpaceBlockOutputFormat.cpp index f3fb27a5558..fa987c6b949 100644 --- a/src/Processors/Formats/Impl/PrettySpaceBlockOutputFormat.cpp +++ b/src/Processors/Formats/Impl/PrettySpaceBlockOutputFormat.cpp @@ -24,6 +24,10 @@ void PrettySpaceBlockOutputFormat::write(const Chunk & chunk, PortKind port_kind const auto & header = getPort(port_kind).getHeader(); const auto & columns = chunk.getColumns(); + Serializations serializations(num_columns); + for (size_t i = 0; i < num_columns; ++i) + serializations[i] = header.getByPosition(i).type->getDefaultSerialization(); + WidthsPerColumn widths; Widths max_widths; Widths name_widths; @@ -87,7 +91,8 @@ void PrettySpaceBlockOutputFormat::write(const Chunk & chunk, PortKind port_kind const auto & type = *header.getByPosition(column).type; auto & cur_width = widths[column].empty() ? 
max_widths[column] : widths[column][row]; - writeValueWithPadding(*columns[column], type, row, cur_width, max_widths[column]); + writeValueWithPadding(*columns[column], *serializations[column], + row, cur_width, max_widths[column], type.shouldAlignRightInPrettyFormats()); } writeChar('\n', out); diff --git a/src/Processors/Formats/Impl/ProtobufRowOutputFormat.h b/src/Processors/Formats/Impl/ProtobufRowOutputFormat.h index 5f82950e891..54324490a3b 100644 --- a/src/Processors/Formats/Impl/ProtobufRowOutputFormat.h +++ b/src/Processors/Formats/Impl/ProtobufRowOutputFormat.h @@ -42,7 +42,7 @@ public: String getName() const override { return "ProtobufRowOutputFormat"; } void write(const Columns & columns, size_t row_num) override; - void writeField(const IColumn &, const IDataType &, size_t) override {} + void writeField(const IColumn &, const ISerialization &, size_t) override {} std::string getContentType() const override { return "application/octet-stream"; } private: diff --git a/src/Processors/Formats/Impl/RawBLOBRowOutputFormat.cpp b/src/Processors/Formats/Impl/RawBLOBRowOutputFormat.cpp index bcee94d8ad5..49f1159d48d 100644 --- a/src/Processors/Formats/Impl/RawBLOBRowOutputFormat.cpp +++ b/src/Processors/Formats/Impl/RawBLOBRowOutputFormat.cpp @@ -15,7 +15,7 @@ RawBLOBRowOutputFormat::RawBLOBRowOutputFormat( } -void RawBLOBRowOutputFormat::writeField(const IColumn & column, const IDataType &, size_t row_num) +void RawBLOBRowOutputFormat::writeField(const IColumn & column, const ISerialization &, size_t row_num) { StringRef value = column.getDataAt(row_num); out.write(value.data, value.size); diff --git a/src/Processors/Formats/Impl/RawBLOBRowOutputFormat.h b/src/Processors/Formats/Impl/RawBLOBRowOutputFormat.h index 6a9a70bb12f..7a29c62e4d8 100644 --- a/src/Processors/Formats/Impl/RawBLOBRowOutputFormat.h +++ b/src/Processors/Formats/Impl/RawBLOBRowOutputFormat.h @@ -34,7 +34,7 @@ public: String getName() const override { return "RawBLOBRowOutputFormat"; } - void writeField(const IColumn & column, const IDataType &, size_t row_num) override; + void writeField(const IColumn & column, const ISerialization &, size_t row_num) override; }; } diff --git a/src/Processors/Formats/Impl/RegexpRowInputFormat.cpp b/src/Processors/Formats/Impl/RegexpRowInputFormat.cpp index 108f4d9d321..555c79f8064 100644 --- a/src/Processors/Formats/Impl/RegexpRowInputFormat.cpp +++ b/src/Processors/Formats/Impl/RegexpRowInputFormat.cpp @@ -1,7 +1,7 @@ #include #include #include -#include +#include #include namespace DB @@ -65,37 +65,38 @@ bool RegexpRowInputFormat::readField(size_t index, MutableColumns & columns) ReadBuffer field_buf(const_cast(matched_fields[index].data()), matched_fields[index].size(), 0); try { + const auto & serialization = serializations[index]; switch (field_format) { case ColumnFormat::Escaped: if (parse_as_nullable) - read = DataTypeNullable::deserializeTextEscaped(*columns[index], field_buf, format_settings, type); + read = SerializationNullable::deserializeTextEscapedImpl(*columns[index], field_buf, format_settings, serialization); else - type->deserializeAsTextEscaped(*columns[index], field_buf, format_settings); + serialization->deserializeTextEscaped(*columns[index], field_buf, format_settings); break; case ColumnFormat::Quoted: if (parse_as_nullable) - read = DataTypeNullable::deserializeTextQuoted(*columns[index], field_buf, format_settings, type); + read = SerializationNullable::deserializeTextQuotedImpl(*columns[index], field_buf, format_settings, serialization); else - 
type->deserializeAsTextQuoted(*columns[index], field_buf, format_settings); + serialization->deserializeTextQuoted(*columns[index], field_buf, format_settings); break; case ColumnFormat::Csv: if (parse_as_nullable) - read = DataTypeNullable::deserializeTextCSV(*columns[index], field_buf, format_settings, type); + read = SerializationNullable::deserializeTextCSVImpl(*columns[index], field_buf, format_settings, serialization); else - type->deserializeAsTextCSV(*columns[index], field_buf, format_settings); + serialization->deserializeTextCSV(*columns[index], field_buf, format_settings); break; case ColumnFormat::Json: if (parse_as_nullable) - read = DataTypeNullable::deserializeTextJSON(*columns[index], field_buf, format_settings, type); + read = SerializationNullable::deserializeTextJSONImpl(*columns[index], field_buf, format_settings, serialization); else - type->deserializeAsTextJSON(*columns[index], field_buf, format_settings); + serialization->deserializeTextJSON(*columns[index], field_buf, format_settings); break; case ColumnFormat::Raw: if (parse_as_nullable) - read = DataTypeNullable::deserializeWholeText(*columns[index], field_buf, format_settings, type); + read = SerializationNullable::deserializeWholeTextImpl(*columns[index], field_buf, format_settings, serialization); else - type->deserializeAsWholeText(*columns[index], field_buf, format_settings); + serialization->deserializeWholeText(*columns[index], field_buf, format_settings); break; default: break; diff --git a/src/Processors/Formats/Impl/TSKVRowInputFormat.cpp b/src/Processors/Formats/Impl/TSKVRowInputFormat.cpp index 8d769cab346..ee6fce83358 100644 --- a/src/Processors/Formats/Impl/TSKVRowInputFormat.cpp +++ b/src/Processors/Formats/Impl/TSKVRowInputFormat.cpp @@ -1,7 +1,7 @@ #include #include #include -#include +#include namespace DB @@ -142,10 +142,11 @@ bool TSKVRowInputFormat::readRow(MutableColumns & columns, RowReadExtension & ex seen_columns[index] = read_columns[index] = true; const auto & type = getPort().getHeader().getByPosition(index).type; + const auto & serialization = serializations[index]; if (format_settings.null_as_default && !type->isNullable()) - read_columns[index] = DataTypeNullable::deserializeTextEscaped(*columns[index], in, format_settings, type); + read_columns[index] = SerializationNullable::deserializeTextEscapedImpl(*columns[index], in, format_settings, serialization); else - header.getByPosition(index).type->deserializeAsTextEscaped(*columns[index], in, format_settings); + serialization->deserializeTextEscaped(*columns[index], in, format_settings); } } else diff --git a/src/Processors/Formats/Impl/TSKVRowOutputFormat.cpp b/src/Processors/Formats/Impl/TSKVRowOutputFormat.cpp index 149ba3f0a2a..627ae67fa31 100644 --- a/src/Processors/Formats/Impl/TSKVRowOutputFormat.cpp +++ b/src/Processors/Formats/Impl/TSKVRowOutputFormat.cpp @@ -24,10 +24,10 @@ TSKVRowOutputFormat::TSKVRowOutputFormat(WriteBuffer & out_, const Block & heade } -void TSKVRowOutputFormat::writeField(const IColumn & column, const IDataType & type, size_t row_num) +void TSKVRowOutputFormat::writeField(const IColumn & column, const ISerialization & serialization, size_t row_num) { writeString(fields[field_number].name, out); - type.serializeAsTextEscaped(column, row_num, out, format_settings); + serialization.serializeTextEscaped(column, row_num, out, format_settings); ++field_number; } diff --git a/src/Processors/Formats/Impl/TSKVRowOutputFormat.h b/src/Processors/Formats/Impl/TSKVRowOutputFormat.h index 1b341cbbc72..24c4e5ca866 
100644 --- a/src/Processors/Formats/Impl/TSKVRowOutputFormat.h +++ b/src/Processors/Formats/Impl/TSKVRowOutputFormat.h @@ -18,7 +18,7 @@ public: String getName() const override { return "TSKVRowOutputFormat"; } - void writeField(const IColumn & column, const IDataType & type, size_t row_num) override; + void writeField(const IColumn & column, const ISerialization & serialization, size_t row_num) override; void writeRowEndDelimiter() override; protected: diff --git a/src/Processors/Formats/Impl/TabSeparatedRawRowInputFormat.h b/src/Processors/Formats/Impl/TabSeparatedRawRowInputFormat.h index bbcfec8e6da..07c8edf9e6e 100644 --- a/src/Processors/Formats/Impl/TabSeparatedRawRowInputFormat.h +++ b/src/Processors/Formats/Impl/TabSeparatedRawRowInputFormat.h @@ -31,7 +31,7 @@ public: String getName() const override { return "TabSeparatedRawRowInputFormat"; } - bool readField(IColumn & column, const DataTypePtr & type, bool) override + bool readField(IColumn & column, const DataTypePtr &, const SerializationPtr & serialization, bool) override { String tmp; @@ -49,8 +49,7 @@ public: } ReadBufferFromString cell(tmp); - - type->deserializeAsWholeText(column, cell, format_settings); + serialization->deserializeWholeText(column, cell, format_settings); return true; } diff --git a/src/Processors/Formats/Impl/TabSeparatedRawRowOutputFormat.h b/src/Processors/Formats/Impl/TabSeparatedRawRowOutputFormat.h index 6aa7f7bdfad..dc9312e53bc 100644 --- a/src/Processors/Formats/Impl/TabSeparatedRawRowOutputFormat.h +++ b/src/Processors/Formats/Impl/TabSeparatedRawRowOutputFormat.h @@ -26,9 +26,9 @@ public: String getName() const override { return "TabSeparatedRawRowOutputFormat"; } - void writeField(const IColumn & column, const IDataType & type, size_t row_num) override + void writeField(const IColumn & column, const ISerialization & serialization, size_t row_num) override { - type.serializeAsText(column, row_num, out, format_settings); + serialization.serializeText(column, row_num, out, format_settings); } }; diff --git a/src/Processors/Formats/Impl/TabSeparatedRowInputFormat.cpp b/src/Processors/Formats/Impl/TabSeparatedRowInputFormat.cpp index 96b01a5bd9b..f89b76342a4 100644 --- a/src/Processors/Formats/Impl/TabSeparatedRowInputFormat.cpp +++ b/src/Processors/Formats/Impl/TabSeparatedRowInputFormat.cpp @@ -7,7 +7,8 @@ #include #include #include -#include +#include +#include namespace DB { @@ -62,19 +63,19 @@ TabSeparatedRowInputFormat::TabSeparatedRowInputFormat(const Block & header_, Re column_indexes_by_names.emplace(column_info.name, i); } - column_indexes_for_input_fields.reserve(num_columns); - read_columns.assign(num_columns, false); + column_mapping->column_indexes_for_input_fields.reserve(num_columns); + column_mapping->read_columns.assign(num_columns, false); } void TabSeparatedRowInputFormat::setupAllColumnsByTableSchema() { const auto & header = getPort().getHeader(); - read_columns.assign(header.columns(), true); - column_indexes_for_input_fields.resize(header.columns()); + column_mapping->read_columns.assign(header.columns(), true); + column_mapping->column_indexes_for_input_fields.resize(header.columns()); - for (size_t i = 0; i < column_indexes_for_input_fields.size(); ++i) - column_indexes_for_input_fields[i] = i; + for (size_t i = 0; i < column_mapping->column_indexes_for_input_fields.size(); ++i) + column_mapping->column_indexes_for_input_fields[i] = i; } @@ -85,13 +86,13 @@ void TabSeparatedRowInputFormat::addInputColumn(const String & column_name) { if 
(format_settings.skip_unknown_fields) { - column_indexes_for_input_fields.push_back(std::nullopt); + column_mapping->column_indexes_for_input_fields.push_back(std::nullopt); return; } throw Exception( "Unknown field found in TSV header: '" + column_name + "' " + - "at position " + std::to_string(column_indexes_for_input_fields.size()) + + "at position " + std::to_string(column_mapping->column_indexes_for_input_fields.size()) + "\nSet the 'input_format_skip_unknown_fields' parameter explicitly to ignore and proceed", ErrorCodes::INCORRECT_DATA ); @@ -99,11 +100,11 @@ void TabSeparatedRowInputFormat::addInputColumn(const String & column_name) const auto column_index = column_it->second; - if (read_columns[column_index]) + if (column_mapping->read_columns[column_index]) throw Exception("Duplicate field found while parsing TSV header: " + column_name, ErrorCodes::INCORRECT_DATA); - read_columns[column_index] = true; - column_indexes_for_input_fields.emplace_back(column_index); + column_mapping->read_columns[column_index] = true; + column_mapping->column_indexes_for_input_fields.emplace_back(column_index); } @@ -113,8 +114,8 @@ void TabSeparatedRowInputFormat::fillUnreadColumnsWithDefaults(MutableColumns & if (unlikely(row_num == 1)) { columns_to_fill_with_default_values.clear(); - for (size_t index = 0; index < read_columns.size(); ++index) - if (read_columns[index] == 0) + for (size_t index = 0; index < column_mapping->read_columns.size(); ++index) + if (column_mapping->read_columns[index] == 0) columns_to_fill_with_default_values.push_back(index); } @@ -136,7 +137,9 @@ void TabSeparatedRowInputFormat::readPrefix() skipBOMIfExists(in); } - if (with_names) + /// This is a bit of abstraction leakage, but we have almost the same code in other places. + /// Thus, we check if this InputFormat is working with the "real" beginning of the data in case of parallel parsing. 
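
The condensed shape of `readPrefix()` after this change (identifiers as in the diff): only the unit that owns the true start of the stream may consume the `WithNames` header row; a unit that starts mid-file reuses the mapping published by unit 0, or falls back to plain table order.

```cpp
if (with_names && getCurrentUnitNumber() == 0)
{
    /// read the header row, filling column_mapping via addInputColumn()
}
else if (!column_mapping->is_set)
    setupAllColumnsByTableSchema(); /// no header available: map columns in table order
```
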
+ if (with_names && getCurrentUnitNumber() == 0) { if (format_settings.with_names_use_header) { @@ -165,15 +168,15 @@ void TabSeparatedRowInputFormat::readPrefix() else { setupAllColumnsByTableSchema(); - skipTSVRow(in, column_indexes_for_input_fields.size()); + skipTSVRow(in, column_mapping->column_indexes_for_input_fields.size()); } } - else + else if (!column_mapping->is_set) setupAllColumnsByTableSchema(); if (with_types) { - skipTSVRow(in, column_indexes_for_input_fields.size()); + skipTSVRow(in, column_mapping->column_indexes_for_input_fields.size()); } } @@ -185,15 +188,15 @@ bool TabSeparatedRowInputFormat::readRow(MutableColumns & columns, RowReadExtens updateDiagnosticInfo(); - ext.read_columns.assign(read_columns.size(), true); - for (size_t file_column = 0; file_column < column_indexes_for_input_fields.size(); ++file_column) + ext.read_columns.assign(column_mapping->read_columns.size(), true); + for (size_t file_column = 0; file_column < column_mapping->column_indexes_for_input_fields.size(); ++file_column) { - const auto & column_index = column_indexes_for_input_fields[file_column]; - const bool is_last_file_column = file_column + 1 == column_indexes_for_input_fields.size(); + const auto & column_index = column_mapping->column_indexes_for_input_fields[file_column]; + const bool is_last_file_column = file_column + 1 == column_mapping->column_indexes_for_input_fields.size(); if (column_index) { const auto & type = data_types[*column_index]; - ext.read_columns[*column_index] = readField(*columns[*column_index], type, is_last_file_column); + ext.read_columns[*column_index] = readField(*columns[*column_index], type, serializations[*column_index], is_last_file_column); } else { @@ -202,7 +205,7 @@ bool TabSeparatedRowInputFormat::readRow(MutableColumns & columns, RowReadExtens } /// skip separators - if (file_column + 1 < column_indexes_for_input_fields.size()) + if (file_column + 1 < column_mapping->column_indexes_for_input_fields.size()) { assertChar('\t', in); } @@ -221,24 +224,27 @@ bool TabSeparatedRowInputFormat::readRow(MutableColumns & columns, RowReadExtens } -bool TabSeparatedRowInputFormat::readField(IColumn & column, const DataTypePtr & type, bool is_last_file_column) +bool TabSeparatedRowInputFormat::readField(IColumn & column, const DataTypePtr & type, + const SerializationPtr & serialization, bool is_last_file_column) { const bool at_delimiter = !is_last_file_column && !in.eof() && *in.position() == '\t'; const bool at_last_column_line_end = is_last_file_column && (in.eof() || *in.position() == '\n'); + if (format_settings.tsv.empty_as_default && (at_delimiter || at_last_column_line_end)) { column.insertDefault(); return false; } else if (format_settings.null_as_default && !type->isNullable()) - return DataTypeNullable::deserializeTextEscaped(column, in, format_settings, type); - type->deserializeAsTextEscaped(column, in, format_settings); + return SerializationNullable::deserializeTextEscapedImpl(column, in, format_settings, serialization); + + serialization->deserializeTextEscaped(column, in, format_settings); return true; } bool TabSeparatedRowInputFormat::parseRowAndPrintDiagnosticInfo(MutableColumns & columns, WriteBuffer & out) { - for (size_t file_column = 0; file_column < column_indexes_for_input_fields.size(); ++file_column) + for (size_t file_column = 0; file_column < column_mapping->column_indexes_for_input_fields.size(); ++file_column) { if (file_column == 0 && in.eof()) { @@ -246,10 +252,10 @@ bool 
TabSeparatedRowInputFormat::parseRowAndPrintDiagnosticInfo(MutableColumns & return false; } - if (column_indexes_for_input_fields[file_column].has_value()) + if (column_mapping->column_indexes_for_input_fields[file_column].has_value()) { const auto & header = getPort().getHeader(); - size_t col_idx = column_indexes_for_input_fields[file_column].value(); + size_t col_idx = column_mapping->column_indexes_for_input_fields[file_column].value(); if (!deserializeFieldAndPrintDiagnosticInfo(header.getByPosition(col_idx).name, data_types[col_idx], *columns[col_idx], out, file_column)) return false; @@ -264,7 +270,7 @@ bool TabSeparatedRowInputFormat::parseRowAndPrintDiagnosticInfo(MutableColumns & } /// Delimiters - if (file_column + 1 == column_indexes_for_input_fields.size()) + if (file_column + 1 == column_mapping->column_indexes_for_input_fields.size()) { if (!in.eof()) { @@ -330,10 +336,13 @@ bool TabSeparatedRowInputFormat::parseRowAndPrintDiagnosticInfo(MutableColumns & void TabSeparatedRowInputFormat::tryDeserializeField(const DataTypePtr & type, IColumn & column, size_t file_column) { - if (column_indexes_for_input_fields[file_column]) + const auto & index = column_mapping->column_indexes_for_input_fields[file_column]; + if (index) { + bool can_be_parsed_as_null = removeLowCardinality(type)->isNullable(); + // check null value for type is not nullable. don't cross buffer bound for simplicity, so maybe missing some case - if (!type->isNullable() && !in.eof()) + if (!can_be_parsed_as_null && !in.eof()) { if (*in.position() == '\\' && in.available() >= 2) { @@ -349,8 +358,9 @@ void TabSeparatedRowInputFormat::tryDeserializeField(const DataTypePtr & type, I } } } - const bool is_last_file_column = file_column + 1 == column_indexes_for_input_fields.size(); - readField(column, type, is_last_file_column); + + const bool is_last_file_column = file_column + 1 == column_mapping->column_indexes_for_input_fields.size(); + readField(column, type, serializations[*index], is_last_file_column); } else { @@ -368,8 +378,8 @@ void TabSeparatedRowInputFormat::resetParser() { RowInputFormatWithDiagnosticInfo::resetParser(); const auto & sample = getPort().getHeader(); - read_columns.assign(sample.columns(), false); - column_indexes_for_input_fields.clear(); + column_mapping->read_columns.assign(sample.columns(), false); + column_mapping->column_indexes_for_input_fields.clear(); columns_to_fill_with_default_values.clear(); } @@ -463,7 +473,7 @@ static std::pair fileSegmentationEngineTabSeparatedImpl(ReadBuffer void registerFileSegmentationEngineTabSeparated(FormatFactory & factory) { // We can use the same segmentation engine for TSKV. 
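
A hypothetical, simplified model of the guarantee a segmentation engine provides (the real `fileSegmentationEngineTabSeparatedImpl` works on a `ReadBuffer` and also steps over backslash escapes): a chunk is at least `min_chunk_size` bytes and always ends on a row boundary. Because every format name registered in the loop just below terminates rows with `'\n'`, they can all share one engine; the header row of the `WithNames` variants is handled separately by the unit-0 check in `readPrefix()`.

```cpp
#include <cstddef>
#include <string_view>

/// Return the length of a prefix of `data` that is at least min_chunk_size
/// bytes and ends just after a row delimiter (or at end of input).
size_t segmentOnRowBoundary(std::string_view data, size_t min_chunk_size)
{
    if (data.size() <= min_chunk_size)
        return data.size();
    size_t pos = data.find('\n', min_chunk_size);
    return pos == std::string_view::npos ? data.size() : pos + 1; /// include the delimiter
}
```
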
- for (const auto * name : {"TabSeparated", "TSV", "TSKV"}) + for (const auto & name : {"TabSeparated", "TSV", "TSKV", "TabSeparatedWithNames", "TSVWithNames"}) { factory.registerFileSegmentationEngine(name, &fileSegmentationEngineTabSeparatedImpl); } diff --git a/src/Processors/Formats/Impl/TabSeparatedRowInputFormat.h b/src/Processors/Formats/Impl/TabSeparatedRowInputFormat.h index 0141d87403a..8127b5ceba7 100644 --- a/src/Processors/Formats/Impl/TabSeparatedRowInputFormat.h +++ b/src/Processors/Formats/Impl/TabSeparatedRowInputFormat.h @@ -33,7 +33,8 @@ protected: bool with_types; const FormatSettings format_settings; - virtual bool readField(IColumn & column, const DataTypePtr & type, bool is_last_file_column); + virtual bool readField(IColumn & column, const DataTypePtr & type, + const SerializationPtr & serialization, bool is_last_file_column); private: DataTypes data_types; @@ -41,10 +42,6 @@ private: using IndexesMap = std::unordered_map; IndexesMap column_indexes_by_names; - using OptionalIndexes = std::vector>; - OptionalIndexes column_indexes_for_input_fields; - - std::vector read_columns; std::vector columns_to_fill_with_default_values; void addInputColumn(const String & column_name); diff --git a/src/Processors/Formats/Impl/TabSeparatedRowOutputFormat.cpp b/src/Processors/Formats/Impl/TabSeparatedRowOutputFormat.cpp index dd3adfa40eb..3e99264785e 100644 --- a/src/Processors/Formats/Impl/TabSeparatedRowOutputFormat.cpp +++ b/src/Processors/Formats/Impl/TabSeparatedRowOutputFormat.cpp @@ -43,9 +43,9 @@ void TabSeparatedRowOutputFormat::doWritePrefix() } -void TabSeparatedRowOutputFormat::writeField(const IColumn & column, const IDataType & type, size_t row_num) +void TabSeparatedRowOutputFormat::writeField(const IColumn & column, const ISerialization & serialization, size_t row_num) { - type.serializeAsTextEscaped(column, row_num, out, format_settings); + serialization.serializeTextEscaped(column, row_num, out, format_settings); } diff --git a/src/Processors/Formats/Impl/TabSeparatedRowOutputFormat.h b/src/Processors/Formats/Impl/TabSeparatedRowOutputFormat.h index 7985d6a1c86..e3190be70e8 100644 --- a/src/Processors/Formats/Impl/TabSeparatedRowOutputFormat.h +++ b/src/Processors/Formats/Impl/TabSeparatedRowOutputFormat.h @@ -28,7 +28,7 @@ public: String getName() const override { return "TabSeparatedRowOutputFormat"; } - void writeField(const IColumn & column, const IDataType & type, size_t row_num) override; + void writeField(const IColumn & column, const ISerialization & serialization, size_t row_num) override; void writeFieldDelimiter() override; void writeRowEndDelimiter() override; void writeBeforeTotals() override; diff --git a/src/Processors/Formats/Impl/TemplateBlockOutputFormat.cpp b/src/Processors/Formats/Impl/TemplateBlockOutputFormat.cpp index 6e33c7d90c9..d65f6dd9e38 100644 --- a/src/Processors/Formats/Impl/TemplateBlockOutputFormat.cpp +++ b/src/Processors/Formats/Impl/TemplateBlockOutputFormat.cpp @@ -21,9 +21,9 @@ TemplateBlockOutputFormat::TemplateBlockOutputFormat(const Block & header_, Writ { const auto & sample = getPort(PortKind::Main).getHeader(); size_t columns = sample.columns(); - types.resize(columns); + serializations.resize(columns); for (size_t i = 0; i < columns; ++i) - types[i] = sample.safeGetByPosition(i).type; + serializations[i] = sample.safeGetByPosition(i).type->getDefaultSerialization(); /// Validate format string for whole output size_t data_idx = format.format_idx_to_column_idx.size() + 1; @@ -105,32 +105,32 @@ void 
TemplateBlockOutputFormat::writeRow(const Chunk & chunk, size_t row_num) writeString(row_format.delimiters[j], out); size_t col_idx = *row_format.format_idx_to_column_idx[j]; - serializeField(*chunk.getColumns()[col_idx], *types[col_idx], row_num, row_format.formats[j]); + serializeField(*chunk.getColumns()[col_idx], *serializations[col_idx], row_num, row_format.formats[j]); } writeString(row_format.delimiters[columns], out); } -void TemplateBlockOutputFormat::serializeField(const IColumn & column, const IDataType & type, size_t row_num, ColumnFormat col_format) +void TemplateBlockOutputFormat::serializeField(const IColumn & column, const ISerialization & serialization, size_t row_num, ColumnFormat col_format) { switch (col_format) { case ColumnFormat::Escaped: - type.serializeAsTextEscaped(column, row_num, out, settings); + serialization.serializeTextEscaped(column, row_num, out, settings); break; case ColumnFormat::Quoted: - type.serializeAsTextQuoted(column, row_num, out, settings); + serialization.serializeTextQuoted(column, row_num, out, settings); break; case ColumnFormat::Csv: - type.serializeAsTextCSV(column, row_num, out, settings); + serialization.serializeTextCSV(column, row_num, out, settings); break; case ColumnFormat::Json: - type.serializeAsTextJSON(column, row_num, out, settings); + serialization.serializeTextJSON(column, row_num, out, settings); break; case ColumnFormat::Xml: - type.serializeAsTextXML(column, row_num, out, settings); + serialization.serializeTextXML(column, row_num, out, settings); break; case ColumnFormat::Raw: - type.serializeAsText(column, row_num, out, settings); + serialization.serializeText(column, row_num, out, settings); break; default: __builtin_unreachable(); @@ -142,7 +142,7 @@ template void TemplateBlockOutputFormat::writeValue(U v auto type = std::make_unique(); auto col = type->createColumn(); col->insert(value); - serializeField(*col, *type, 0, col_format); + serializeField(*col, *type->getDefaultSerialization(), 0, col_format); } void TemplateBlockOutputFormat::consume(Chunk chunk) diff --git a/src/Processors/Formats/Impl/TemplateBlockOutputFormat.h b/src/Processors/Formats/Impl/TemplateBlockOutputFormat.h index f29d31eb3f1..0d41b8888d4 100644 --- a/src/Processors/Formats/Impl/TemplateBlockOutputFormat.h +++ b/src/Processors/Formats/Impl/TemplateBlockOutputFormat.h @@ -47,12 +47,12 @@ protected: void finalize() override; void writeRow(const Chunk & chunk, size_t row_num); - void serializeField(const IColumn & column, const IDataType & type, size_t row_num, ColumnFormat format); + void serializeField(const IColumn & column, const ISerialization & serialization, size_t row_num, ColumnFormat format); template void writeValue(U value, ColumnFormat col_format); protected: const FormatSettings settings; - DataTypes types; + Serializations serializations; ParsedTemplateFormatString format; ParsedTemplateFormatString row_format; diff --git a/src/Processors/Formats/Impl/TemplateRowInputFormat.cpp b/src/Processors/Formats/Impl/TemplateRowInputFormat.cpp index 6023b38e4de..0e5a962a037 100644 --- a/src/Processors/Formats/Impl/TemplateRowInputFormat.cpp +++ b/src/Processors/Formats/Impl/TemplateRowInputFormat.cpp @@ -4,7 +4,7 @@ #include #include #include -#include +#include namespace DB { @@ -173,7 +173,7 @@ bool TemplateRowInputFormat::readRow(MutableColumns & columns, RowReadExtension if (row_format.format_idx_to_column_idx[i]) { size_t col_idx = *row_format.format_idx_to_column_idx[i]; - extra.read_columns[col_idx] = 
deserializeField(data_types[col_idx], *columns[col_idx], i); + extra.read_columns[col_idx] = deserializeField(data_types[col_idx], serializations[col_idx], *columns[col_idx], i); } else skipField(row_format.formats[i]); @@ -189,7 +189,8 @@ bool TemplateRowInputFormat::readRow(MutableColumns & columns, RowReadExtension return true; } -bool TemplateRowInputFormat::deserializeField(const DataTypePtr & type, IColumn & column, size_t file_column) +bool TemplateRowInputFormat::deserializeField(const DataTypePtr & type, + const SerializationPtr & serialization, IColumn & column, size_t file_column) { ColumnFormat col_format = row_format.formats[file_column]; bool read = true; @@ -200,30 +201,30 @@ bool TemplateRowInputFormat::deserializeField(const DataTypePtr & type, IColumn { case ColumnFormat::Escaped: if (parse_as_nullable) - read = DataTypeNullable::deserializeTextEscaped(column, buf, settings, type); + read = SerializationNullable::deserializeTextEscapedImpl(column, buf, settings, serialization); else - type->deserializeAsTextEscaped(column, buf, settings); + serialization->deserializeTextEscaped(column, buf, settings); break; case ColumnFormat::Quoted: if (parse_as_nullable) - read = DataTypeNullable::deserializeTextQuoted(column, buf, settings, type); + read = SerializationNullable::deserializeTextQuotedImpl(column, buf, settings, serialization); else - type->deserializeAsTextQuoted(column, buf, settings); + serialization->deserializeTextQuoted(column, buf, settings); break; case ColumnFormat::Csv: /// Will read unquoted string until settings.csv.delimiter settings.csv.delimiter = row_format.delimiters[file_column + 1].empty() ? default_csv_delimiter : row_format.delimiters[file_column + 1].front(); if (parse_as_nullable) - read = DataTypeNullable::deserializeTextCSV(column, buf, settings, type); + read = SerializationNullable::deserializeTextCSVImpl(column, buf, settings, serialization); else - type->deserializeAsTextCSV(column, buf, settings); + serialization->deserializeTextCSV(column, buf, settings); break; case ColumnFormat::Json: if (parse_as_nullable) - read = DataTypeNullable::deserializeTextJSON(column, buf, settings, type); + read = SerializationNullable::deserializeTextJSONImpl(column, buf, settings, serialization); else - type->deserializeAsTextJSON(column, buf, settings); + serialization->deserializeTextJSON(column, buf, settings); break; default: __builtin_unreachable(); @@ -412,8 +413,9 @@ void TemplateRowInputFormat::writeErrorStringForWrongDelimiter(WriteBuffer & out void TemplateRowInputFormat::tryDeserializeField(const DataTypePtr & type, IColumn & column, size_t file_column) { - if (row_format.format_idx_to_column_idx[file_column]) - deserializeField(type, column, file_column); + const auto & index = row_format.format_idx_to_column_idx[file_column]; + if (index) + deserializeField(type, serializations[*index], column, file_column); else skipField(row_format.formats[file_column]); } diff --git a/src/Processors/Formats/Impl/TemplateRowInputFormat.h b/src/Processors/Formats/Impl/TemplateRowInputFormat.h index 6adfe0a34b4..322f8570ab7 100644 --- a/src/Processors/Formats/Impl/TemplateRowInputFormat.h +++ b/src/Processors/Formats/Impl/TemplateRowInputFormat.h @@ -32,7 +32,9 @@ public: void resetParser() override; private: - bool deserializeField(const DataTypePtr & type, IColumn & column, size_t file_column); + bool deserializeField(const DataTypePtr & type, + const SerializationPtr & serialization, IColumn & column, size_t file_column); + void skipField(ColumnFormat 
col_format); inline void skipSpaces() { if (ignore_spaces) skipWhitespaceIfAny(buf); } @@ -43,6 +45,7 @@ private: bool parseRowAndPrintDiagnosticInfo(MutableColumns & columns, WriteBuffer & out) override; void tryDeserializeField(const DataTypePtr & type, IColumn & column, size_t file_column) override; + bool isGarbageAfterField(size_t after_col_idx, ReadBuffer::Position pos) override; void writeErrorStringForWrongDelimiter(WriteBuffer & out, const String & description, const String & delim); diff --git a/src/Processors/Formats/Impl/ValuesBlockInputFormat.cpp b/src/Processors/Formats/Impl/ValuesBlockInputFormat.cpp index 1455b8f6740..701385447b4 100644 --- a/src/Processors/Formats/Impl/ValuesBlockInputFormat.cpp +++ b/src/Processors/Formats/Impl/ValuesBlockInputFormat.cpp @@ -12,6 +12,7 @@ #include #include #include +#include #include #include #include @@ -40,6 +41,9 @@ ValuesBlockInputFormat::ValuesBlockInputFormat(ReadBuffer & in_, const Block & h attempts_to_deduce_template(num_columns), attempts_to_deduce_template_cached(num_columns), rows_parsed_using_template(num_columns), templates(num_columns), types(header_.getDataTypes()) { + serializations.resize(types.size()); + for (size_t i = 0; i < types.size(); ++i) + serializations[i] = types[i]->getDefaultSerialization(); } Chunk ValuesBlockInputFormat::generate() @@ -164,10 +168,12 @@ bool ValuesBlockInputFormat::tryReadValue(IColumn & column, size_t column_idx) { bool read = true; const auto & type = types[column_idx]; + const auto & serialization = serializations[column_idx]; if (format_settings.null_as_default && !type->isNullable()) - read = DataTypeNullable::deserializeTextQuoted(column, buf, format_settings, type); + read = SerializationNullable::deserializeTextQuotedImpl(column, buf, format_settings, serialization); else - type->deserializeAsTextQuoted(column, buf, format_settings); + serialization->deserializeTextQuoted(column, buf, format_settings); + rollback_on_exception = true; skipWhitespaceIfAny(buf); @@ -310,7 +316,8 @@ bool ValuesBlockInputFormat::parseExpression(IColumn & column, size_t column_idx bool ok = false; try { - header.getByPosition(column_idx).type->deserializeAsTextQuoted(column, buf, format_settings); + const auto & serialization = serializations[column_idx]; + serialization->deserializeTextQuoted(column, buf, format_settings); rollback_on_exception = true; skipWhitespaceIfAny(buf); if (checkDelimiterAfterValue(column_idx)) @@ -351,7 +358,7 @@ bool ValuesBlockInputFormat::parseExpression(IColumn & column, size_t column_idx TokenIterator(tokens), token_iterator, ast, - *context, + context, &found_in_cache, delimiter); templates[column_idx].emplace(structure); @@ -393,7 +400,7 @@ bool ValuesBlockInputFormat::parseExpression(IColumn & column, size_t column_idx /// Try to evaluate single expression if other parsers don't work buf.position() = const_cast(token_iterator->begin); - std::pair value_raw = evaluateConstantExpression(ast, *context); + std::pair value_raw = evaluateConstantExpression(ast, context); Field & expression_value = value_raw.first; diff --git a/src/Processors/Formats/Impl/ValuesBlockInputFormat.h b/src/Processors/Formats/Impl/ValuesBlockInputFormat.h index a541870e484..ea5ab9239e0 100644 --- a/src/Processors/Formats/Impl/ValuesBlockInputFormat.h +++ b/src/Processors/Formats/Impl/ValuesBlockInputFormat.h @@ -1,21 +1,19 @@ #pragma once #include -#include -#include #include -#include - +#include #include #include +#include +#include +#include namespace DB { -class Context; class ReadBuffer; - /** 
Stream to read data in VALUES format (as in INSERT query). */ class ValuesBlockInputFormat final : public IInputFormat @@ -36,7 +34,7 @@ public: void resetParser() override; /// TODO: remove context somehow. - void setContext(const Context & context_) { context = std::make_unique(context_); } + void setContext(ContextConstPtr context_) { context = Context::createCopy(context_); } const BlockMissingValues & getMissingValues() const override { return block_missing_values; } @@ -68,12 +66,11 @@ private: bool skipToNextRow(size_t min_chunk_bytes = 0, int balance = 0); -private: PeekableReadBuffer buf; const RowInputFormatParams params; - std::unique_ptr context; /// pimpl + ContextPtr context; /// pimpl const FormatSettings format_settings; const size_t num_columns; @@ -89,6 +86,7 @@ private: ConstantExpressionTemplate::Cache templates_cache; const DataTypes types; + Serializations serializations; BlockMissingValues block_missing_values; }; diff --git a/src/Processors/Formats/Impl/ValuesRowOutputFormat.cpp b/src/Processors/Formats/Impl/ValuesRowOutputFormat.cpp index 7791e1296e0..e0152a7ffee 100644 --- a/src/Processors/Formats/Impl/ValuesRowOutputFormat.cpp +++ b/src/Processors/Formats/Impl/ValuesRowOutputFormat.cpp @@ -15,9 +15,9 @@ ValuesRowOutputFormat::ValuesRowOutputFormat(WriteBuffer & out_, const Block & h { } -void ValuesRowOutputFormat::writeField(const IColumn & column, const IDataType & type, size_t row_num) +void ValuesRowOutputFormat::writeField(const IColumn & column, const ISerialization & serialization, size_t row_num) { - type.serializeAsTextQuoted(column, row_num, out, format_settings); + serialization.serializeTextQuoted(column, row_num, out, format_settings); } void ValuesRowOutputFormat::writeFieldDelimiter() diff --git a/src/Processors/Formats/Impl/ValuesRowOutputFormat.h b/src/Processors/Formats/Impl/ValuesRowOutputFormat.h index 73f91866f43..493ce458b1e 100644 --- a/src/Processors/Formats/Impl/ValuesRowOutputFormat.h +++ b/src/Processors/Formats/Impl/ValuesRowOutputFormat.h @@ -19,7 +19,7 @@ public: String getName() const override { return "ValuesRowOutputFormat"; } - void writeField(const IColumn & column, const IDataType & type, size_t row_num) override; + void writeField(const IColumn & column, const ISerialization & serialization, size_t row_num) override; void writeFieldDelimiter() override; void writeRowStartDelimiter() override; void writeRowEndDelimiter() override; diff --git a/src/Processors/Formats/Impl/VerticalRowOutputFormat.cpp b/src/Processors/Formats/Impl/VerticalRowOutputFormat.cpp index a3c71cbde59..c6f37d270b0 100644 --- a/src/Processors/Formats/Impl/VerticalRowOutputFormat.cpp +++ b/src/Processors/Formats/Impl/VerticalRowOutputFormat.cpp @@ -50,22 +50,22 @@ VerticalRowOutputFormat::VerticalRowOutputFormat( } -void VerticalRowOutputFormat::writeField(const IColumn & column, const IDataType & type, size_t row_num) +void VerticalRowOutputFormat::writeField(const IColumn & column, const ISerialization & serialization, size_t row_num) { if (row_number > format_settings.pretty.max_rows) return; writeString(names_and_paddings[field_number], out); - writeValue(column, type, row_num); + writeValue(column, serialization, row_num); writeChar('\n', out); ++field_number; } -void VerticalRowOutputFormat::writeValue(const IColumn & column, const IDataType & type, size_t row_num) const +void VerticalRowOutputFormat::writeValue(const IColumn & column, const ISerialization & serialization, size_t row_num) const { - type.serializeAsText(column, row_num, out, 
format_settings); + serialization.serializeText(column, row_num, out, format_settings); } @@ -123,26 +123,25 @@ void VerticalRowOutputFormat::writeBeforeExtremes() void VerticalRowOutputFormat::writeMinExtreme(const Columns & columns, size_t row_num) { - writeSpecialRow(columns, row_num, PortKind::Totals, "Min"); + writeSpecialRow(columns, row_num, "Min"); } void VerticalRowOutputFormat::writeMaxExtreme(const Columns & columns, size_t row_num) { - writeSpecialRow(columns, row_num, PortKind::Totals, "Max"); + writeSpecialRow(columns, row_num, "Max"); } void VerticalRowOutputFormat::writeTotals(const Columns & columns, size_t row_num) { - writeSpecialRow(columns, row_num, PortKind::Totals, "Totals"); + writeSpecialRow(columns, row_num, "Totals"); was_totals_written = true; } -void VerticalRowOutputFormat::writeSpecialRow(const Columns & columns, size_t row_num, PortKind port_kind, const char * title) +void VerticalRowOutputFormat::writeSpecialRow(const Columns & columns, size_t row_num, const char * title) { row_number = 0; field_number = 0; - const auto & header = getPort(port_kind).getHeader(); size_t num_columns = columns.size(); writeCString(title, out); @@ -158,8 +157,7 @@ void VerticalRowOutputFormat::writeSpecialRow(const Columns & columns, size_t ro if (i != 0) writeFieldDelimiter(); - const auto & col = header.getByPosition(i); - writeField(*columns[i], *col.type, row_num); + writeField(*columns[i], *serializations[i], row_num); } } diff --git a/src/Processors/Formats/Impl/VerticalRowOutputFormat.h b/src/Processors/Formats/Impl/VerticalRowOutputFormat.h index d372f5f611a..9e89f677f87 100644 --- a/src/Processors/Formats/Impl/VerticalRowOutputFormat.h +++ b/src/Processors/Formats/Impl/VerticalRowOutputFormat.h @@ -22,7 +22,7 @@ public: String getName() const override { return "VerticalRowOutputFormat"; } - void writeField(const IColumn & column, const IDataType & type, size_t row_num) override; + void writeField(const IColumn & column, const ISerialization & serialization, size_t row_num) override; void writeRowStartDelimiter() override; void writeRowBetweenDelimiter() override; void writeSuffix() override; @@ -35,10 +35,10 @@ public: void writeBeforeExtremes() override; protected: - virtual void writeValue(const IColumn & column, const IDataType & type, size_t row_num) const; + virtual void writeValue(const IColumn & column, const ISerialization & serialization, size_t row_num) const; /// For totals and extremes. 
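Across the hunks above, every row output format now receives an ISerialization instead of an IDataType in writeField (the writeSpecialRow declaration documented by the comment above continues directly below this sketch). A minimal sketch of the resulting pattern, not taken from this diff, using a hypothetical MyRowOutputFormat and assuming it is built inside the ClickHouse tree:

#include <Core/Block.h>
#include <DataTypes/Serializations/ISerialization.h>
#include <Formats/FormatSettings.h>
#include <IO/WriteBuffer.h>

namespace DB
{

/// Hypothetical format illustrating the new contract: resolve one default
/// serialization per column up front, then render every value through it.
class MyRowOutputFormat
{
public:
    explicit MyRowOutputFormat(const Block & header)
    {
        /// Resolved once per column instead of consulting IDataType per value.
        for (const auto & column : header)
            serializations.push_back(column.type->getDefaultSerialization());
    }

    void writeField(const IColumn & column, size_t column_idx, size_t row_num,
                    WriteBuffer & out, const FormatSettings & settings) const
    {
        serializations[column_idx]->serializeText(column, row_num, out, settings);
    }

private:
    Serializations serializations; /// alias for std::vector<SerializationPtr>
};

}

This is also why writeSpecialRow below drops its PortKind argument: it reads the cached serializations member instead of fetching types from a port header.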
- void writeSpecialRow(const Columns & columns, size_t row_num, PortKind port_kind, const char * title); + void writeSpecialRow(const Columns & columns, size_t row_num, const char * title); const FormatSettings format_settings; size_t field_number = 0; diff --git a/src/Processors/Formats/Impl/XMLRowOutputFormat.cpp b/src/Processors/Formats/Impl/XMLRowOutputFormat.cpp index 6fd63a18147..893c4e229c7 100644 --- a/src/Processors/Formats/Impl/XMLRowOutputFormat.cpp +++ b/src/Processors/Formats/Impl/XMLRowOutputFormat.cpp @@ -82,12 +82,12 @@ void XMLRowOutputFormat::writePrefix() } -void XMLRowOutputFormat::writeField(const IColumn & column, const IDataType & type, size_t row_num) +void XMLRowOutputFormat::writeField(const IColumn & column, const ISerialization & serialization, size_t row_num) { writeCString("\t\t\t<", *ostr); writeString(field_tag_names[field_number], *ostr); writeCString(">", *ostr); - type.serializeAsTextXML(column, row_num, *ostr, format_settings); + serialization.serializeTextXML(column, row_num, *ostr, format_settings); writeCString("</", *ostr); writeString(field_tag_names[field_number], *ostr); writeCString(">\n", *ostr); @@ -132,7 +132,7 @@ void XMLRowOutputFormat::writeTotals(const Columns & columns, size_t row_num) writeCString("\t\t<", *ostr); writeString(field_tag_names[i], *ostr); writeCString(">", *ostr); - column.type->serializeAsTextXML(*columns[i], row_num, *ostr, format_settings); + column.type->getDefaultSerialization()->serializeTextXML(*columns[i], row_num, *ostr, format_settings); writeCString("</", *ostr); writeString(field_tag_names[i], *ostr); writeCString(">\n", *ostr); @@ -181,7 +181,7 @@ void XMLRowOutputFormat::writeExtremesElement(const char * title, const Columns writeCString("\t\t\t<", *ostr); writeString(field_tag_names[i], *ostr); writeCString(">", *ostr); - column.type->serializeAsTextXML(*columns[i], row_num, *ostr, format_settings); + column.type->getDefaultSerialization()->serializeTextXML(*columns[i], row_num, *ostr, format_settings); writeCString("</", *ostr); writeString(field_tag_names[i], *ostr); writeCString(">\n", *ostr); diff --git a/src/Processors/Formats/Impl/XMLRowOutputFormat.h b/src/Processors/Formats/Impl/XMLRowOutputFormat.h index 233ee773c1c..8ca4721c459 100644 --- a/src/Processors/Formats/Impl/XMLRowOutputFormat.h +++ b/src/Processors/Formats/Impl/XMLRowOutputFormat.h @@ -20,7 +20,7 @@ public: String getName() const override { return "XMLRowOutputFormat"; } - void writeField(const IColumn & column, const IDataType & type, size_t row_num) override; + void writeField(const IColumn & column, const ISerialization & serialization, size_t row_num) override; void writeRowStartDelimiter() override; void writeRowEndDelimiter() override; void writePrefix() override; diff --git a/src/Processors/ISource.h b/src/Processors/ISource.h index b7e2b5dce8e..db91c0c5bce 100644 --- a/src/Processors/ISource.h +++ b/src/Processors/ISource.h @@ -19,7 +19,7 @@ protected: virtual std::optional<Chunk> tryGenerate(); public: - ISource(Block header); + explicit ISource(Block header); Status prepare() override; void work() override; diff --git a/src/Processors/Merges/Algorithms/CollapsingSortedAlgorithm.cpp b/src/Processors/Merges/Algorithms/CollapsingSortedAlgorithm.cpp index ccb66259e2e..0db99fc7b0e 100644 --- a/src/Processors/Merges/Algorithms/CollapsingSortedAlgorithm.cpp +++ b/src/Processors/Merges/Algorithms/CollapsingSortedAlgorithm.cpp @@ -66,14 +66,16 @@ void CollapsingSortedAlgorithm::insertRow(RowRef & row) merged_data.insertRow(*row.all_columns, row.row_num, row.owned_chunk->getNumRows()); } -void CollapsingSortedAlgorithm::insertRows() +std::optional<Chunk> CollapsingSortedAlgorithm::insertRows() { if (count_positive == 0 && count_negative == 0) { /// No input rows
have been read. - return; + return {}; } + std::optional<Chunk> res; + if (last_is_positive || count_positive != count_negative) { if (count_positive <= count_negative && !only_positive_sign) @@ -86,6 +88,9 @@ void CollapsingSortedAlgorithm::insertRows() if (count_positive >= count_negative) { + if (merged_data.hasEnoughRows()) + res = merged_data.pull(); + insertRow(last_positive_row); if (out_row_sources_buf) @@ -107,10 +112,16 @@ void CollapsingSortedAlgorithm::insertRows() out_row_sources_buf->write( reinterpret_cast<const char *>(current_row_sources.data()), current_row_sources.size() * sizeof(RowSourcePart)); + + return res; } IMergingAlgorithm::Status CollapsingSortedAlgorithm::merge() { + /// Rare case, which may happen when index_granularity is 1, but we need to insert 2 rows inside insertRows(). + if (merged_data.hasEnoughRows()) + return Status(merged_data.pull()); + /// Take rows in required order and put them into `merged_data`, while the rows are no more than `max_block_size` while (queue.isValid()) { @@ -132,15 +143,14 @@ IMergingAlgorithm::Status CollapsingSortedAlgorithm::merge() setRowRef(last_row, current); bool key_differs = !last_row.hasEqualSortColumnsWith(current_row); - - /// if there are enough rows and the last one is calculated completely - if (key_differs && merged_data.hasEnoughRows()) - return Status(merged_data.pull()); - if (key_differs) { + /// if there are enough rows and the last one is calculated completely + if (merged_data.hasEnoughRows()) + return Status(merged_data.pull()); + /// We write data for the previous primary key. - insertRows(); + auto res = insertRows(); current_row.swap(last_row); @@ -151,6 +161,12 @@ IMergingAlgorithm::Status CollapsingSortedAlgorithm::merge() first_negative_pos = 0; last_positive_pos = 0; current_row_sources.resize(0); + + /// Here we can return a ready chunk. + /// Next iteration, last_row == current_row, and all the counters are zeroed. + /// So, current_row should be correctly processed. + if (res) + return Status(std::move(*res)); } /// Initially, skip all rows. On insert, unskip "corner" rows. @@ -194,7 +210,15 @@ IMergingAlgorithm::Status CollapsingSortedAlgorithm::merge() } } - insertRows(); + if (auto res = insertRows()) + { + /// Queue is empty, and we have inserted all the rows. + /// Set counters to zero so that insertRows() will return immediately next time. + count_positive = 0; + count_negative = 0; + return Status(std::move(*res)); + } + return Status(merged_data.pull(), true); } diff --git a/src/Processors/Merges/Algorithms/CollapsingSortedAlgorithm.h b/src/Processors/Merges/Algorithms/CollapsingSortedAlgorithm.h index 028715f715b..18ebaad5596 100644 --- a/src/Processors/Merges/Algorithms/CollapsingSortedAlgorithm.h +++ b/src/Processors/Merges/Algorithms/CollapsingSortedAlgorithm.h @@ -66,7 +66,11 @@ private: void reportIncorrectData(); void insertRow(RowRef & row); - void insertRows(); + + /// Insert ready rows into merged_data. We may want to insert 0, 1 or 2 rows. + /// It may happen that 2 rows are going to be inserted, but merged_data has free space for only 1 row. + /// In this case, a Chunk with the ready rows is pulled from merged_data before the second insertion.
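The comment above documents the std::optional<Chunk> insertRows() declaration that immediately follows. The control-flow change it describes can be shown in isolation; this is a self-contained toy with simplified stand-in types, not ClickHouse code:

#include <optional>
#include <vector>

/// Simplified stand-ins for Chunk and MergedData, for illustration only.
using Chunk = std::vector<int>;

struct MergedDataStub
{
    std::vector<int> rows;
    size_t max_block_size = 1;

    bool hasEnoughRows() const { return rows.size() >= max_block_size; }
    Chunk pull() { Chunk chunk; chunk.swap(rows); return chunk; }
    void insertRow(int value) { rows.push_back(value); }
};

/// The pattern from the hunk above: if a second row must be emitted while the
/// output block is already full, pull the finished chunk between the two
/// insertions and return it to the caller instead of overflowing max_block_size.
std::optional<Chunk> insertRowsSketch(MergedDataStub & merged_data, bool need_second_row)
{
    std::optional<Chunk> res;

    merged_data.insertRow(-1); /// e.g. the last negative row

    if (need_second_row)
    {
        if (merged_data.hasEnoughRows())
            res = merged_data.pull(); /// free space before the second insertion

        merged_data.insertRow(1); /// e.g. the last positive row
    }

    return res; /// merge() wraps this into Status(std::move(*res)) when set
}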
+ std::optional<Chunk> insertRows(); }; } diff --git a/src/Processors/Merges/Algorithms/FinishAggregatingInOrderAlgorithm.cpp b/src/Processors/Merges/Algorithms/FinishAggregatingInOrderAlgorithm.cpp index 3ef0caefd8f..3eb94ba78b7 100644 --- a/src/Processors/Merges/Algorithms/FinishAggregatingInOrderAlgorithm.cpp +++ b/src/Processors/Merges/Algorithms/FinishAggregatingInOrderAlgorithm.cpp @@ -24,11 +24,13 @@ FinishAggregatingInOrderAlgorithm::FinishAggregatingInOrderAlgorithm( const Block & header_, size_t num_inputs_, AggregatingTransformParamsPtr params_, - SortDescription description_) + SortDescription description_, + size_t max_block_size_) : header(header_) , num_inputs(num_inputs_) , params(params_) , description(std::move(description_)) + , max_block_size(max_block_size_) { /// Replace column names in description to positions. for (auto & column_description : description) @@ -56,6 +58,13 @@ void FinishAggregatingInOrderAlgorithm::consume(Input & input, size_t source_num IMergingAlgorithm::Status FinishAggregatingInOrderAlgorithm::merge() { + if (!inputs_to_update.empty()) + { + Status status(inputs_to_update.back()); + inputs_to_update.pop_back(); + return status; + } + /// Find the input with smallest last row. std::optional<size_t> best_input; for (size_t i = 0; i < num_inputs; ++i) @@ -94,16 +103,30 @@ IMergingAlgorithm::Status FinishAggregatingInOrderAlgorithm::merge() states[i].to_row = (it == indices.end() ? states[i].num_rows : *it); } - Status status(*best_input); - status.chunk = aggregate(); + addToAggregation(); + + /// At least one chunk should be fully aggregated. + assert(!inputs_to_update.empty()); + Status status(inputs_to_update.back()); + inputs_to_update.pop_back(); + + /// Do not merge blocks if there are too few rows. + if (accumulated_rows >= max_block_size) + status.chunk = aggregate(); return status; } Chunk FinishAggregatingInOrderAlgorithm::aggregate() { - BlocksList blocks; + auto aggregated = params->aggregator.mergeBlocks(blocks, false); + blocks.clear(); + accumulated_rows = 0; + return {aggregated.getColumns(), aggregated.rows()}; +} +void FinishAggregatingInOrderAlgorithm::addToAggregation() +{ for (size_t i = 0; i < num_inputs; ++i) { const auto & state = states[i]; @@ -112,7 +135,7 @@ Chunk FinishAggregatingInOrderAlgorithm::aggregate() if (state.to_row - state.current_row == state.num_rows) { - blocks.emplace_back(header.cloneWithColumns(states[i].all_columns)); + blocks.emplace_back(header.cloneWithColumns(state.all_columns)); } else { @@ -125,10 +148,11 @@ Chunk FinishAggregatingInOrderAlgorithm::aggregate() } states[i].current_row = states[i].to_row; + accumulated_rows += blocks.back().rows(); + + if (!states[i].isValid()) + inputs_to_update.push_back(i); } - - auto aggregated = params->aggregator.mergeBlocks(blocks, false); - return {aggregated.getColumns(), aggregated.rows()}; } } diff --git a/src/Processors/Merges/Algorithms/FinishAggregatingInOrderAlgorithm.h b/src/Processors/Merges/Algorithms/FinishAggregatingInOrderAlgorithm.h index 2f9cd5d71a2..119aefb0ab0 100644 --- a/src/Processors/Merges/Algorithms/FinishAggregatingInOrderAlgorithm.h +++ b/src/Processors/Merges/Algorithms/FinishAggregatingInOrderAlgorithm.h @@ -37,7 +37,8 @@ public: const Block & header_, size_t num_inputs_, AggregatingTransformParamsPtr params_, - SortDescription description_); + SortDescription description_, + size_t max_block_size_); void initialize(Inputs inputs) override; void consume(Input & input, size_t source_num) override; @@ -45,6 +46,7 @@ public: private: Chunk
aggregate(); + void addToAggregation(); struct State { @@ -66,8 +68,13 @@ private: size_t num_inputs; AggregatingTransformParamsPtr params; SortDescription description; + size_t max_block_size; + Inputs current_inputs; std::vector<State> states; + std::vector<size_t> inputs_to_update; + BlocksList blocks; + size_t accumulated_rows = 0; }; } diff --git a/src/Processors/Merges/Algorithms/ReplacingSortedAlgorithm.cpp b/src/Processors/Merges/Algorithms/ReplacingSortedAlgorithm.cpp index 132241844d7..b8c788ed1fc 100644 --- a/src/Processors/Merges/Algorithms/ReplacingSortedAlgorithm.cpp +++ b/src/Processors/Merges/Algorithms/ReplacingSortedAlgorithm.cpp @@ -93,6 +93,10 @@ IMergingAlgorithm::Status ReplacingSortedAlgorithm::merge() } } + /// If we have enough rows, return the block, because it is prohibited to overflow the requested number of rows. + if (merged_data.hasEnoughRows()) + return Status(merged_data.pull()); + /// We will write the data for the last primary key. if (!selected_row.empty()) insertRow(); diff --git a/src/Processors/Merges/FinishAggregatingInOrderTransform.h b/src/Processors/Merges/FinishAggregatingInOrderTransform.h index e067b9472d9..4f9e53bd7d5 100644 --- a/src/Processors/Merges/FinishAggregatingInOrderTransform.h +++ b/src/Processors/Merges/FinishAggregatingInOrderTransform.h @@ -16,13 +16,15 @@ public: const Block & header, size_t num_inputs, AggregatingTransformParamsPtr params, - SortDescription description) + SortDescription description, + size_t max_block_size) : IMergingTransform( num_inputs, header, header, true, header, num_inputs, params, - std::move(description)) + std::move(description), + max_block_size) { } diff --git a/src/Processors/Pipe.cpp b/src/Processors/Pipe.cpp index 129bebf452a..044975448ad 100644 --- a/src/Processors/Pipe.cpp +++ b/src/Processors/Pipe.cpp @@ -8,6 +8,7 @@ #include #include #include +#include namespace DB { @@ -250,12 +251,53 @@ static Pipes removeEmptyPipes(Pipes pipes) return res; } -Pipe Pipe::unitePipes(Pipes pipes) +/// Calculate common header for pipes. +/// This function is needed only to remove ColumnConst from the common header in case some columns are const and some are not. +/// E.g. if the first header is `x, const y, const z` and the second is `const x, y, const z`, the common header will be `x, y, const z`. +static Block getCommonHeader(const Pipes & pipes) { - return Pipe::unitePipes(std::move(pipes), nullptr); + Block res; + + for (const auto & pipe : pipes) + { + if (const auto & header = pipe.getHeader()) + { + res = header; + break; + } + } + + for (const auto & pipe : pipes) + { + const auto & header = pipe.getHeader(); + for (size_t i = 0; i < res.columns(); ++i) + { + /// We do not check that headers are compatible here. Will do it later. + + if (i >= header.columns()) + break; + + auto & common = res.getByPosition(i).column; + const auto & cur = header.getByPosition(i).column; + + /// Only remove const from the common header if it is not const for the current pipe.
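The comment above belongs to the branch that opens just below; stated in isolation, the demotion rule looks like this (a minimal sketch against ClickHouse headers, not code from this diff):

#include <Columns/ColumnConst.h>
#include <Common/typeid_cast.h>

namespace DB
{

/// `common` is the column at position i of the candidate common header,
/// `cur` the column at the same position of the header being examined.
/// If any pipe delivers the column as non-const, the united header must be
/// non-const too, so the ColumnConst is demoted to its nested data column.
void demoteConstIfNeeded(ColumnPtr & common, const ColumnPtr & cur)
{
    if (cur && common && !isColumnConst(*cur))
        if (const auto * column_const = typeid_cast<const ColumnConst *>(common.get()))
            common = column_const->getDataColumnPtr();
}

}

Applied to the example in the comment above (`x, const y, const z` vs `const x, y, const z`), positions 0 and 1 are demoted and only `z` stays const in the common header.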
+ if (cur && common && !isColumnConst(*cur)) + { + if (const auto * column_const = typeid_cast<const ColumnConst *>(common.get())) + common = column_const->getDataColumnPtr(); + } + } + } + + return res; } -Pipe Pipe::unitePipes(Pipes pipes, Processors * collected_processors) +Pipe Pipe::unitePipes(Pipes pipes) +{ + return Pipe::unitePipes(std::move(pipes), nullptr, false); +} + +Pipe Pipe::unitePipes(Pipes pipes, Processors * collected_processors, bool allow_empty_header) { Pipe res; @@ -275,12 +317,14 @@ Pipe Pipe::unitePipes(Pipes pipes, Processors * collected_processors) OutputPortRawPtrs totals; OutputPortRawPtrs extremes; - res.header = pipes.front().header; res.collected_processors = collected_processors; + res.header = getCommonHeader(pipes); for (auto & pipe : pipes) { - assertBlocksHaveEqualStructure(res.header, pipe.header, "Pipe::unitePipes"); + if (!allow_empty_header || pipe.header) + assertCompatibleHeader(pipe.header, res.header, "Pipe::unitePipes"); + res.processors.insert(res.processors.end(), pipe.processors.begin(), pipe.processors.end()); res.output_ports.insert(res.output_ports.end(), pipe.output_ports.begin(), pipe.output_ports.end()); diff --git a/src/Processors/Pipe.h b/src/Processors/Pipe.h index f21f4761977..4ba08787579 100644 --- a/src/Processors/Pipe.h +++ b/src/Processors/Pipe.h @@ -1,8 +1,9 @@ #pragma once + #include -#include #include #include +#include namespace DB { @@ -155,7 +156,7 @@ private: /// This methods are for QueryPipeline. It is allowed to complete graph only there. /// So, we may be sure that Pipe always has output port if not empty. bool isCompleted() const { return !empty() && output_ports.empty(); } - static Pipe unitePipes(Pipes pipes, Processors * collected_processors); + static Pipe unitePipes(Pipes pipes, Processors * collected_processors, bool allow_empty_header); void setSinks(const Pipe::ProcessorGetterWithStreamKind & getter); void setOutputFormat(ProcessorPtr output); diff --git a/src/Processors/Port.cpp b/src/Processors/Port.cpp index 7e7ccb1adad..0a6026b27f2 100644 --- a/src/Processors/Port.cpp +++ b/src/Processors/Port.cpp @@ -16,7 +16,7 @@ void connect(OutputPort & output, InputPort & input) auto out_name = output.getProcessor().getName(); auto in_name = input.getProcessor().getName(); - assertBlocksHaveEqualStructure(input.getHeader(), output.getHeader(), " function connect between " + out_name + " and " + in_name); + assertCompatibleHeader(output.getHeader(), input.getHeader(), " function connect between " + out_name + " and " + in_name); input.output_port = &output; output.input_port = &input; diff --git a/src/Processors/QueryPipeline.cpp b/src/Processors/QueryPipeline.cpp index 97d753bf5fb..14b60d0b14c 100644 --- a/src/Processors/QueryPipeline.cpp +++ b/src/Processors/QueryPipeline.cpp @@ -7,6 +7,7 @@ #include #include #include +#include #include #include #include @@ -96,6 +97,12 @@ void QueryPipeline::addTransform(ProcessorPtr transform) pipe.addTransform(std::move(transform)); } +void QueryPipeline::addTransform(ProcessorPtr transform, InputPort * totals, InputPort * extremes) +{ + checkInitializedAndNotCompleted(); + pipe.addTransform(std::move(transform), totals, extremes); +} + void QueryPipeline::transform(const Transformer & transformer) { checkInitializedAndNotCompleted(); @@ -211,10 +218,14 @@ void QueryPipeline::setOutputFormat(ProcessorPtr output) QueryPipeline QueryPipeline::unitePipelines( std::vector<std::unique_ptr<QueryPipeline>> pipelines, - const Block & common_header, size_t max_threads_limit, Processors * collected_processors) { + if (pipelines.empty()) + 
throw Exception(ErrorCodes::LOGICAL_ERROR, "Cannot unite an empty set of pipelines"); + + Block common_header = pipelines.front()->getHeader(); + /// Should we limit the number of threads for united pipeline. True if all pipelines have max_threads != 0. /// If true, result max_threads will be sum(max_threads). /// Note: it may be > than settings.max_threads, so we should apply this limit again. @@ -228,20 +239,6 @@ QueryPipeline QueryPipeline::unitePipelines( pipeline.checkInitialized(); pipeline.pipe.collected_processors = collected_processors; - if (!pipeline.isCompleted()) - { - auto actions_dag = ActionsDAG::makeConvertingActions( - pipeline.getHeader().getColumnsWithTypeAndName(), - common_header.getColumnsWithTypeAndName(), - ActionsDAG::MatchColumnsMode::Position); - auto actions = std::make_shared(actions_dag); - - pipeline.addSimpleTransform([&](const Block & header) - { - return std::make_shared(header, actions); - }); - } - pipes.emplace_back(std::move(pipeline.pipe)); max_threads += pipeline.max_threads; @@ -254,7 +251,7 @@ QueryPipeline QueryPipeline::unitePipelines( } QueryPipeline pipeline; - pipeline.init(Pipe::unitePipes(std::move(pipes), collected_processors)); + pipeline.init(Pipe::unitePipes(std::move(pipes), collected_processors, false)); if (will_limit_max_threads) { @@ -265,8 +262,98 @@ QueryPipeline QueryPipeline::unitePipelines( return pipeline; } +std::unique_ptr QueryPipeline::joinPipelines( + std::unique_ptr left, + std::unique_ptr right, + JoinPtr join, + size_t max_block_size, + Processors * collected_processors) +{ + left->checkInitializedAndNotCompleted(); + right->checkInitializedAndNotCompleted(); -void QueryPipeline::addCreatingSetsTransform(const Block & res_header, SubqueryForSet subquery_for_set, const SizeLimits & limits, const Context & context) + /// Extremes before join are useless. They will be calculated after if needed. + left->pipe.dropExtremes(); + right->pipe.dropExtremes(); + + left->pipe.collected_processors = collected_processors; + right->pipe.collected_processors = collected_processors; + + /// In case joined subquery has totals, and we don't, add default chunk to totals. + bool default_totals = false; + if (!left->hasTotals() && right->hasTotals()) + { + left->addDefaultTotals(); + default_totals = true; + } + + /// (left) ──────┐ + /// ╞> Joining ─> (joined) + /// (left) ─┐┌───┘ + /// └┼───┐ + /// (right) ┐ (totals) ──┼─┐ ╞> Joining ─> (joined) + /// ╞> Resize ┐ ╓─┘┌┼─┘ + /// (right) ┘ │ ╟──┘└─┐ + /// ╞> FillingJoin ─> Resize ╣ ╞> Joining ─> (totals) + /// (totals) ─────────┘ ╙─────┘ + + size_t num_streams = left->getNumStreams(); + right->resize(1); + + auto adding_joined = std::make_shared(right->getHeader(), join); + InputPort * totals_port = nullptr; + if (right->hasTotals()) + totals_port = adding_joined->addTotalsPort(); + + right->addTransform(std::move(adding_joined), totals_port, nullptr); + + size_t num_streams_including_totals = num_streams + (left->hasTotals() ? 1 : 0); + right->resize(num_streams_including_totals); + + /// This counter is needed for every Joining except totals, to decide which Joining will generate non joined rows. 
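The counter created on the first line below decides which of the parallel Joining transforms emits the non-joined rows. Its behaviour can be shown with a generic stand-in (plain C++ with assumed semantics, not the class used by the diff):

#include <atomic>
#include <cstddef>

/// Shared by all per-stream joining transforms. Every stream calls isLast()
/// exactly once when it runs out of input; only the final caller gets `true`
/// and becomes responsible for emitting the non-joined rows (needed for
/// RIGHT and FULL JOIN), no matter which stream happens to drain last.
class FinishCounterStub
{
public:
    explicit FinishCounterStub(size_t total_) : total(total_) {}

    bool isLast() { return finished.fetch_add(1) + 1 == total; }

private:
    const size_t total;
    std::atomic<size_t> finished{0};
};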
+ auto finish_counter = std::make_shared(num_streams); + + auto lit = left->pipe.output_ports.begin(); + auto rit = right->pipe.output_ports.begin(); + + for (size_t i = 0; i < num_streams; ++i) + { + auto joining = std::make_shared(left->getHeader(), join, max_block_size, false, default_totals, finish_counter); + connect(**lit, joining->getInputs().front()); + connect(**rit, joining->getInputs().back()); + *lit = &joining->getOutputs().front(); + + ++lit; + ++rit; + + if (collected_processors) + collected_processors->emplace_back(joining); + + left->pipe.processors.emplace_back(std::move(joining)); + } + + if (left->hasTotals()) + { + auto joining = std::make_shared(left->getHeader(), join, max_block_size, true, default_totals); + connect(*left->pipe.totals_port, joining->getInputs().front()); + connect(**rit, joining->getInputs().back()); + left->pipe.totals_port = &joining->getOutputs().front(); + + ++rit; + + if (collected_processors) + collected_processors->emplace_back(joining); + + left->pipe.processors.emplace_back(std::move(joining)); + } + + left->pipe.processors.insert(left->pipe.processors.end(), right->pipe.processors.begin(), right->pipe.processors.end()); + left->pipe.holder = std::move(right->pipe.holder); + left->pipe.header = left->pipe.output_ports.front()->getHeader(); + return left; +} + +void QueryPipeline::addCreatingSetsTransform(const Block & res_header, SubqueryForSet subquery_for_set, const SizeLimits & limits, ContextPtr context) { resize(1); @@ -288,7 +375,9 @@ void QueryPipeline::addCreatingSetsTransform(const Block & res_header, SubqueryF void QueryPipeline::addPipelineBefore(QueryPipeline pipeline) { checkInitializedAndNotCompleted(); - assertBlocksHaveEqualStructure(getHeader(), pipeline.getHeader(), "QueryPipeline"); + if (pipeline.getHeader()) + throw Exception(ErrorCodes::LOGICAL_ERROR, "Pipeline for CreatingSets should have empty header. Got: {}", + pipeline.getHeader().dumpStructure()); IProcessor::PortNumbers delayed_streams(pipe.numOutputPorts()); for (size_t i = 0; i < delayed_streams.size(); ++i) @@ -299,7 +388,7 @@ void QueryPipeline::addPipelineBefore(QueryPipeline pipeline) Pipes pipes; pipes.emplace_back(std::move(pipe)); pipes.emplace_back(QueryPipeline::getPipe(std::move(pipeline))); - pipe = Pipe::unitePipes(std::move(pipes), collected_processors); + pipe = Pipe::unitePipes(std::move(pipes), collected_processors, true); auto processor = std::make_shared(getHeader(), pipe.numOutputPorts(), delayed_streams, true); addTransform(std::move(processor)); diff --git a/src/Processors/QueryPipeline.h b/src/Processors/QueryPipeline.h index d459faae997..0c8caa93539 100644 --- a/src/Processors/QueryPipeline.h +++ b/src/Processors/QueryPipeline.h @@ -1,19 +1,16 @@ #pragma once -#include -#include -#include #include #include - +#include +#include +#include #include #include namespace DB { -class Context; - class IOutputFormat; class QueryPipelineProcessorsCollector; @@ -28,6 +25,11 @@ using SubqueriesForSets = std::unordered_map; struct SizeLimits; +struct ExpressionActionsSettings; + +class IJoin; +using JoinPtr = std::shared_ptr; + class QueryPipeline { public: @@ -53,6 +55,7 @@ public: void addSimpleTransform(const Pipe::ProcessorGetterWithStreamKind & getter); /// Add transform with getNumStreams() input ports. void addTransform(ProcessorPtr transform); + void addTransform(ProcessorPtr transform, InputPort * totals, InputPort * extremes); using Transformer = std::function; /// Transform pipeline in general way. 
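The joinPipelines() implementation above gets its public declaration in the hunk that follows. A usage sketch (in-tree code; assumes two already-initialized pipelines, a prepared join object, and a hypothetical buildJoinedPipeline helper):

#include <Processors/QueryPipeline.h>

namespace DB
{

std::unique_ptr<QueryPipeline> buildJoinedPipeline(
    std::unique_ptr<QueryPipeline> left,
    std::unique_ptr<QueryPipeline> right,
    JoinPtr join,
    size_t max_block_size)
{
    /// Ownership of both pipelines moves into the result: the right side is
    /// resized to one stream and filled into the join, and every left stream
    /// gets its own Joining transform.
    return QueryPipeline::joinPipelines(std::move(left), std::move(right), join, max_block_size);
}

}

collected_processors defaults to nullptr; JoinStep::updatePipeline (added later in this diff) passes &processors instead, so the created transforms are recorded for describePipeline().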
@@ -88,15 +91,24 @@ public: /// If collector is used, it will collect only newly-added processors, but not processors from pipelines. static QueryPipeline unitePipelines( std::vector> pipelines, - const Block & common_header, size_t max_threads_limit = 0, Processors * collected_processors = nullptr); + /// Join two pipelines together using JoinPtr. + /// If collector is used, it will collect only newly-added processors, but not processors from pipelines. + static std::unique_ptr joinPipelines( + std::unique_ptr left, + std::unique_ptr right, + JoinPtr join, + size_t max_block_size, + Processors * collected_processors = nullptr); + /// Add other pipeline and execute it before current one. - /// Pipeline must have same header. + /// Pipeline must have empty header, it should not generate any chunk. + /// This is used for CreatingSets. void addPipelineBefore(QueryPipeline pipeline); - void addCreatingSetsTransform(const Block & res_header, SubqueryForSet subquery_for_set, const SizeLimits & limits, const Context & context); + void addCreatingSetsTransform(const Block & res_header, SubqueryForSet subquery_for_set, const SizeLimits & limits, ContextPtr context); PipelineExecutorPtr execute(); diff --git a/src/Processors/QueryPlan/AddingDelayedSourceStep.cpp b/src/Processors/QueryPlan/AddingDelayedSourceStep.cpp deleted file mode 100644 index 99db95059a0..00000000000 --- a/src/Processors/QueryPlan/AddingDelayedSourceStep.cpp +++ /dev/null @@ -1,42 +0,0 @@ -#include -#include - -namespace DB -{ - -static ITransformingStep::Traits getTraits() -{ - return ITransformingStep::Traits - { - { - .preserves_distinct_columns = false, - .returns_single_stream = false, - .preserves_number_of_streams = false, - .preserves_sorting = false, - }, - { - .preserves_number_of_rows = false, /// New rows are added from delayed stream - } - }; -} - -AddingDelayedSourceStep::AddingDelayedSourceStep( - const DataStream & input_stream_, - ProcessorPtr source_) - : ITransformingStep(input_stream_, input_stream_.header, getTraits()) - , source(std::move(source_)) -{ -} - -void AddingDelayedSourceStep::transformPipeline(QueryPipeline & pipeline) -{ - source->setQueryPlanStep(this); - pipeline.addDelayedStream(source); - - /// Now, after adding delayed stream, it has implicit dependency on other port. - /// Here we add resize processor to remove this dependency. - /// Otherwise, if we add MergeSorting + MergingSorted transform to pipeline, we could get `Pipeline stuck` - pipeline.resize(pipeline.getNumStreams(), true); -} - -} diff --git a/src/Processors/QueryPlan/AddingDelayedSourceStep.h b/src/Processors/QueryPlan/AddingDelayedSourceStep.h deleted file mode 100644 index 5ea8f23008b..00000000000 --- a/src/Processors/QueryPlan/AddingDelayedSourceStep.h +++ /dev/null @@ -1,28 +0,0 @@ -#pragma once -#include -#include - -namespace DB -{ - -class IProcessor; -using ProcessorPtr = std::shared_ptr; - -/// Adds another source to pipeline. Data from this source will be read after data from all other sources. -/// NOTE: tis step is needed because of non-joined data from JOIN. Remove this step after adding JoinStep. 
-class AddingDelayedSourceStep : public ITransformingStep -{ -public: - AddingDelayedSourceStep( - const DataStream & input_stream_, - ProcessorPtr source_); - - String getName() const override { return "AddingDelayedSource"; } - - void transformPipeline(QueryPipeline & pipeline) override; - -private: - ProcessorPtr source; -}; - -} diff --git a/src/Processors/QueryPlan/AggregatingStep.cpp b/src/Processors/QueryPlan/AggregatingStep.cpp index 813d86b50c0..772390acb32 100644 --- a/src/Processors/QueryPlan/AggregatingStep.cpp +++ b/src/Processors/QueryPlan/AggregatingStep.cpp @@ -46,7 +46,7 @@ AggregatingStep::AggregatingStep( { } -void AggregatingStep::transformPipeline(QueryPipeline & pipeline) +void AggregatingStep::transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) { QueryPipelineProcessorsCollector collector(pipeline, this); @@ -100,7 +100,8 @@ void AggregatingStep::transformPipeline(QueryPipeline & pipeline) pipeline.getHeader(), pipeline.getNumStreams(), transform_params, - group_by_sort_description); + group_by_sort_description, + max_block_size); pipeline.addTransform(std::move(transform)); aggregating_sorted = collector.detachProcessors(1); @@ -162,6 +163,11 @@ void AggregatingStep::describeActions(FormatSettings & settings) const params.explain(settings.out, settings.offset); } +void AggregatingStep::describeActions(JSONBuilder::JSONMap & map) const +{ + params.explain(map); +} + void AggregatingStep::describePipeline(FormatSettings & settings) const { if (!aggregating.empty()) diff --git a/src/Processors/QueryPlan/AggregatingStep.h b/src/Processors/QueryPlan/AggregatingStep.h index 6be92394fab..696aabd4de7 100644 --- a/src/Processors/QueryPlan/AggregatingStep.h +++ b/src/Processors/QueryPlan/AggregatingStep.h @@ -27,7 +27,9 @@ public: String getName() const override { return "Aggregating"; } - void transformPipeline(QueryPipeline & pipeline) override; + void transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) override; + + void describeActions(JSONBuilder::JSONMap & map) const override; void describeActions(FormatSettings &) const override; void describePipeline(FormatSettings & settings) const override; diff --git a/src/Processors/QueryPlan/ArrayJoinStep.cpp b/src/Processors/QueryPlan/ArrayJoinStep.cpp index b71d2176e52..9089bb8e5a2 100644 --- a/src/Processors/QueryPlan/ArrayJoinStep.cpp +++ b/src/Processors/QueryPlan/ArrayJoinStep.cpp @@ -5,7 +5,7 @@ #include #include #include - +#include namespace DB { @@ -46,7 +46,7 @@ void ArrayJoinStep::updateInputStream(DataStream input_stream, Block result_head res_header = std::move(result_header); } -void ArrayJoinStep::transformPipeline(QueryPipeline & pipeline) +void ArrayJoinStep::transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings & settings) { pipeline.addSimpleTransform([&](const Block & header, QueryPipeline::StreamType stream_type) { @@ -60,7 +60,7 @@ void ArrayJoinStep::transformPipeline(QueryPipeline & pipeline) pipeline.getHeader().getColumnsWithTypeAndName(), res_header.getColumnsWithTypeAndName(), ActionsDAG::MatchColumnsMode::Name); - auto actions = std::make_shared(actions_dag); + auto actions = std::make_shared(actions_dag, settings.getActionsSettings()); pipeline.addSimpleTransform([&](const Block & header) { @@ -87,4 +87,15 @@ void ArrayJoinStep::describeActions(FormatSettings & settings) const settings.out << '\n'; } +void ArrayJoinStep::describeActions(JSONBuilder::JSONMap & map) const +{ + map.add("Left", array_join->is_left); + + auto 
columns_array = std::make_unique(); + for (const auto & column : array_join->columns) + columns_array->add(column); + + map.add("Columns", std::move(columns_array)); +} + } diff --git a/src/Processors/QueryPlan/ArrayJoinStep.h b/src/Processors/QueryPlan/ArrayJoinStep.h index 92c7e0a1304..b3e08c2023c 100644 --- a/src/Processors/QueryPlan/ArrayJoinStep.h +++ b/src/Processors/QueryPlan/ArrayJoinStep.h @@ -13,8 +13,9 @@ public: explicit ArrayJoinStep(const DataStream & input_stream_, ArrayJoinActionPtr array_join_); String getName() const override { return "ArrayJoin"; } - void transformPipeline(QueryPipeline & pipeline) override; + void transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings & settings) override; + void describeActions(JSONBuilder::JSONMap & map) const override; void describeActions(FormatSettings & settings) const override; void updateInputStream(DataStream input_stream, Block result_header); diff --git a/src/Processors/QueryPlan/BuildQueryPipelineSettings.cpp b/src/Processors/QueryPlan/BuildQueryPipelineSettings.cpp new file mode 100644 index 00000000000..9691da4a362 --- /dev/null +++ b/src/Processors/QueryPlan/BuildQueryPipelineSettings.cpp @@ -0,0 +1,21 @@ +#include +#include +#include +#include + +namespace DB +{ + +BuildQueryPipelineSettings BuildQueryPipelineSettings::fromSettings(const Settings & from) +{ + BuildQueryPipelineSettings settings; + settings.actions_settings = ExpressionActionsSettings::fromSettings(from); + return settings; +} + +BuildQueryPipelineSettings BuildQueryPipelineSettings::fromContext(ContextPtr from) +{ + return fromSettings(from->getSettingsRef()); +} + +} diff --git a/src/Processors/QueryPlan/BuildQueryPipelineSettings.h b/src/Processors/QueryPlan/BuildQueryPipelineSettings.h new file mode 100644 index 00000000000..c3282d43778 --- /dev/null +++ b/src/Processors/QueryPlan/BuildQueryPipelineSettings.h @@ -0,0 +1,22 @@ +#pragma once + +#include + +#include + +namespace DB +{ + +struct Settings; + +struct BuildQueryPipelineSettings +{ + ExpressionActionsSettings actions_settings; + + const ExpressionActionsSettings & getActionsSettings() const { return actions_settings; } + + static BuildQueryPipelineSettings fromSettings(const Settings & from); + static BuildQueryPipelineSettings fromContext(ContextPtr from); +}; + +} diff --git a/src/Processors/QueryPlan/CreatingSetsStep.cpp b/src/Processors/QueryPlan/CreatingSetsStep.cpp index 5868a7045f7..811e5885219 100644 --- a/src/Processors/QueryPlan/CreatingSetsStep.cpp +++ b/src/Processors/QueryPlan/CreatingSetsStep.cpp @@ -2,6 +2,8 @@ #include #include #include +#include +#include namespace DB { @@ -29,22 +31,21 @@ static ITransformingStep::Traits getTraits() CreatingSetStep::CreatingSetStep( const DataStream & input_stream_, - Block header, String description_, SubqueryForSet subquery_for_set_, SizeLimits network_transfer_limits_, - const Context & context_) - : ITransformingStep(input_stream_, header, getTraits()) + ContextPtr context_) + : ITransformingStep(input_stream_, Block{}, getTraits()) + , WithContext(context_) , description(std::move(description_)) , subquery_for_set(std::move(subquery_for_set_)) , network_transfer_limits(std::move(network_transfer_limits_)) - , context(context_) { } -void CreatingSetStep::transformPipeline(QueryPipeline & pipeline) +void CreatingSetStep::transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) { - pipeline.addCreatingSetsTransform(getOutputStream().header, std::move(subquery_for_set), network_transfer_limits, 
context); + pipeline.addCreatingSetsTransform(getOutputStream().header, std::move(subquery_for_set), network_transfer_limits, getContext()); } void CreatingSetStep::describeActions(FormatSettings & settings) const @@ -54,12 +55,21 @@ void CreatingSetStep::describeActions(FormatSettings & settings) const settings.out << prefix; if (subquery_for_set.set) settings.out << "Set: "; - else if (subquery_for_set.join) - settings.out << "Join: "; + // else if (subquery_for_set.join) + // settings.out << "Join: "; settings.out << description << '\n'; } +void CreatingSetStep::describeActions(JSONBuilder::JSONMap & map) const +{ + if (subquery_for_set.set) + map.add("Set", description); + // else if (subquery_for_set.join) + // map.add("Join", description); +} + + CreatingSetsStep::CreatingSetsStep(DataStreams input_streams_) { if (input_streams_.empty()) @@ -69,10 +79,12 @@ CreatingSetsStep::CreatingSetsStep(DataStreams input_streams_) output_stream = input_streams.front(); for (size_t i = 1; i < input_streams.size(); ++i) - assertBlocksHaveEqualStructure(output_stream->header, input_streams[i].header, "CreatingSets"); + if (input_streams[i].header) + throw Exception(ErrorCodes::LOGICAL_ERROR, "Creating set input must have empty header. Got: {}", + input_streams[i].header.dumpStructure()); } -QueryPipelinePtr CreatingSetsStep::updatePipeline(QueryPipelines pipelines) +QueryPipelinePtr CreatingSetsStep::updatePipeline(QueryPipelines pipelines, const BuildQueryPipelineSettings &) { if (pipelines.empty()) throw Exception("CreatingSetsStep cannot be created with no inputs", ErrorCodes::LOGICAL_ERROR); @@ -81,14 +93,13 @@ QueryPipelinePtr CreatingSetsStep::updatePipeline(QueryPipelines pipelines) if (pipelines.size() == 1) return main_pipeline; - std::swap(pipelines.front(), pipelines.back()); - pipelines.pop_back(); + pipelines.erase(pipelines.begin()); QueryPipeline delayed_pipeline; if (pipelines.size() > 1) { QueryPipelineProcessorsCollector collector(delayed_pipeline, this); - delayed_pipeline = QueryPipeline::unitePipelines(std::move(pipelines), output_stream->header); + delayed_pipeline = QueryPipeline::unitePipelines(std::move(pipelines)); processors = collector.detachProcessors(); } else @@ -108,7 +119,7 @@ void CreatingSetsStep::describePipeline(FormatSettings & settings) const } void addCreatingSetsStep( - QueryPlan & query_plan, SubqueriesForSets subqueries_for_sets, const SizeLimits & limits, const Context & context) + QueryPlan & query_plan, SubqueriesForSets subqueries_for_sets, const SizeLimits & limits, ContextPtr context) { DataStreams input_streams; input_streams.emplace_back(query_plan.getCurrentDataStream()); @@ -123,17 +134,14 @@ void addCreatingSetsStep( continue; auto plan = std::move(set.source); - std::string type = (set.join != nullptr) ? 
"JOIN" - : "subquery"; auto creating_set = std::make_unique( plan->getCurrentDataStream(), - input_streams.front().header, std::move(description), std::move(set), limits, context); - creating_set->setStepDescription("Create set for " + type); + creating_set->setStepDescription("Create set for subquery"); plan->addStep(std::move(creating_set)); input_streams.emplace_back(plan->getCurrentDataStream()); diff --git a/src/Processors/QueryPlan/CreatingSetsStep.h b/src/Processors/QueryPlan/CreatingSetsStep.h index 97821cb63d3..fa6d34ef667 100644 --- a/src/Processors/QueryPlan/CreatingSetsStep.h +++ b/src/Processors/QueryPlan/CreatingSetsStep.h @@ -1,34 +1,35 @@ #pragma once + #include #include #include +#include namespace DB { /// Creates sets for subqueries and JOIN. See CreatingSetsTransform. -class CreatingSetStep : public ITransformingStep +class CreatingSetStep : public ITransformingStep, WithContext { public: CreatingSetStep( const DataStream & input_stream_, - Block header, String description_, SubqueryForSet subquery_for_set_, SizeLimits network_transfer_limits_, - const Context & context_); + ContextPtr context_); String getName() const override { return "CreatingSet"; } - void transformPipeline(QueryPipeline & pipeline) override; + void transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) override; + void describeActions(JSONBuilder::JSONMap & map) const override; void describeActions(FormatSettings & settings) const override; private: String description; SubqueryForSet subquery_for_set; SizeLimits network_transfer_limits; - const Context & context; }; class CreatingSetsStep : public IQueryPlanStep @@ -38,7 +39,7 @@ public: String getName() const override { return "CreatingSets"; } - QueryPipelinePtr updatePipeline(QueryPipelines pipelines) override; + QueryPipelinePtr updatePipeline(QueryPipelines pipelines, const BuildQueryPipelineSettings &) override; void describePipeline(FormatSettings & settings) const override; @@ -50,6 +51,6 @@ void addCreatingSetsStep( QueryPlan & query_plan, SubqueriesForSets subqueries_for_sets, const SizeLimits & limits, - const Context & context); + ContextPtr context); } diff --git a/src/Processors/QueryPlan/CubeStep.cpp b/src/Processors/QueryPlan/CubeStep.cpp index 6a0ec33402b..1a3016b7106 100644 --- a/src/Processors/QueryPlan/CubeStep.cpp +++ b/src/Processors/QueryPlan/CubeStep.cpp @@ -30,7 +30,7 @@ CubeStep::CubeStep(const DataStream & input_stream_, AggregatingTransformParamsP output_stream->distinct_columns.insert(params->params.src_header.getByPosition(key).name); } -void CubeStep::transformPipeline(QueryPipeline & pipeline) +void CubeStep::transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) { pipeline.resize(1); diff --git a/src/Processors/QueryPlan/CubeStep.h b/src/Processors/QueryPlan/CubeStep.h index f67a03dc7e2..0e06ffc598a 100644 --- a/src/Processors/QueryPlan/CubeStep.h +++ b/src/Processors/QueryPlan/CubeStep.h @@ -17,7 +17,7 @@ public: String getName() const override { return "Cube"; } - void transformPipeline(QueryPipeline & pipeline) override; + void transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) override; const Aggregator::Params & getParams() const; private: diff --git a/src/Processors/QueryPlan/DistinctStep.cpp b/src/Processors/QueryPlan/DistinctStep.cpp index 60966b08beb..5edd2f52f47 100644 --- a/src/Processors/QueryPlan/DistinctStep.cpp +++ b/src/Processors/QueryPlan/DistinctStep.cpp @@ -2,6 +2,7 @@ #include #include #include +#include namespace DB 
{ @@ -62,7 +63,7 @@ DistinctStep::DistinctStep( } } -void DistinctStep::transformPipeline(QueryPipeline & pipeline) +void DistinctStep::transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) { if (checkColumnsAlreadyDistinct(columns, input_streams.front().distinct_columns)) return; @@ -102,4 +103,13 @@ void DistinctStep::describeActions(FormatSettings & settings) const settings.out << '\n'; } +void DistinctStep::describeActions(JSONBuilder::JSONMap & map) const +{ + auto columns_array = std::make_unique(); + for (const auto & column : columns) + columns_array->add(column); + + map.add("Columns", std::move(columns_array)); +} + } diff --git a/src/Processors/QueryPlan/DistinctStep.h b/src/Processors/QueryPlan/DistinctStep.h index 4bfd73ce044..815601d6253 100644 --- a/src/Processors/QueryPlan/DistinctStep.h +++ b/src/Processors/QueryPlan/DistinctStep.h @@ -18,8 +18,9 @@ public: String getName() const override { return "Distinct"; } - void transformPipeline(QueryPipeline & pipeline) override; + void transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) override; + void describeActions(JSONBuilder::JSONMap & map) const override; void describeActions(FormatSettings & settings) const override; private: diff --git a/src/Processors/QueryPlan/ExpressionStep.cpp b/src/Processors/QueryPlan/ExpressionStep.cpp index e23954cc063..ddf6ed00c3f 100644 --- a/src/Processors/QueryPlan/ExpressionStep.cpp +++ b/src/Processors/QueryPlan/ExpressionStep.cpp @@ -4,6 +4,10 @@ #include #include #include +#include +#include + +#include namespace DB { @@ -24,26 +28,10 @@ static ITransformingStep::Traits getTraits(const ActionsDAGPtr & actions) }; } -static ITransformingStep::Traits getJoinTraits() -{ - return ITransformingStep::Traits - { - { - .preserves_distinct_columns = false, - .returns_single_stream = false, - .preserves_number_of_streams = true, - .preserves_sorting = false, - }, - { - .preserves_number_of_rows = false, - } - }; -} - ExpressionStep::ExpressionStep(const DataStream & input_stream_, ActionsDAGPtr actions_dag_) : ITransformingStep( input_stream_, - Transform::transformHeader(input_stream_.header, std::make_shared(actions_dag_)), + Transform::transformHeader(input_stream_.header, std::make_shared(actions_dag_, ExpressionActionsSettings{})), getTraits(actions_dag_)) , actions_dag(std::move(actions_dag_)) { @@ -54,7 +42,8 @@ ExpressionStep::ExpressionStep(const DataStream & input_stream_, ActionsDAGPtr a void ExpressionStep::updateInputStream(DataStream input_stream, bool keep_header) { Block out_header = keep_header ? 
std::move(output_stream->header) - : Transform::transformHeader(input_stream.header, std::make_shared(actions_dag)); + : Transform::transformHeader(input_stream.header, + std::make_shared(actions_dag, ExpressionActionsSettings{})); output_stream = createOutputStream( input_stream, std::move(out_header), @@ -64,9 +53,9 @@ void ExpressionStep::updateInputStream(DataStream input_stream, bool keep_header input_streams.emplace_back(std::move(input_stream)); } -void ExpressionStep::transformPipeline(QueryPipeline & pipeline) +void ExpressionStep::transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings & settings) { - auto expression = std::make_shared(actions_dag); + auto expression = std::make_shared(actions_dag, settings.getActionsSettings()); pipeline.addSimpleTransform([&](const Block & header) { return std::make_shared(header, expression); @@ -78,7 +67,7 @@ void ExpressionStep::transformPipeline(QueryPipeline & pipeline) pipeline.getHeader().getColumnsWithTypeAndName(), output_stream->header.getColumnsWithTypeAndName(), ActionsDAG::MatchColumnsMode::Name); - auto convert_actions = std::make_shared(convert_actions_dag); + auto convert_actions = std::make_shared(convert_actions_dag, settings.getActionsSettings()); pipeline.addSimpleTransform([&](const Block & header) { @@ -92,7 +81,7 @@ void ExpressionStep::describeActions(FormatSettings & settings) const String prefix(settings.offset, ' '); bool first = true; - auto expression = std::make_shared(actions_dag); + auto expression = std::make_shared(actions_dag, ExpressionActionsSettings{}); for (const auto & action : expression->getActions()) { settings.out << prefix << (first ? "Actions: " @@ -107,30 +96,10 @@ void ExpressionStep::describeActions(FormatSettings & settings) const settings.out << '\n'; } -JoinStep::JoinStep(const DataStream & input_stream_, JoinPtr join_) - : ITransformingStep( - input_stream_, - Transform::transformHeader(input_stream_.header, join_), - getJoinTraits()) - , join(std::move(join_)) +void ExpressionStep::describeActions(JSONBuilder::JSONMap & map) const { -} - -void JoinStep::transformPipeline(QueryPipeline & pipeline) -{ - /// In case joined subquery has totals, and we don't, add default chunk to totals. 
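The removed transformPipeline body (its deletion continues below) patched JoiningTransforms onto an already-built single pipeline; after this diff, totals handling lives in QueryPipeline::joinPipelines and plans are built with the two-input JoinStep instead. A sketch of the intended wiring, hedged: QueryPlan::unitePlans is assumed from the surrounding codebase, and makeJoinedPlan is a hypothetical helper:

#include <Processors/QueryPlan/JoinStep.h>
#include <Processors/QueryPlan/QueryPlan.h>

namespace DB
{

QueryPlanPtr makeJoinedPlan(QueryPlanPtr left, QueryPlanPtr right, JoinPtr join, size_t max_block_size)
{
    auto step = std::make_unique<JoinStep>(
        left->getCurrentDataStream(),
        right->getCurrentDataStream(),
        join,
        max_block_size);
    step->setStepDescription("JOIN");

    std::vector<QueryPlanPtr> plans;
    plans.emplace_back(std::move(left));
    plans.emplace_back(std::move(right));

    /// One step with two child plans: updatePipeline() receives both
    /// pipelines and delegates to QueryPipeline::joinPipelines.
    auto joined = std::make_unique<QueryPlan>();
    joined->unitePlans(std::move(step), std::move(plans));
    return joined;
}

}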
- bool add_default_totals = false; - if (!pipeline.hasTotals()) - { - pipeline.addDefaultTotals(); - add_default_totals = true; - } - - pipeline.addSimpleTransform([&](const Block & header, QueryPipeline::StreamType stream_type) - { - bool on_totals = stream_type == QueryPipeline::StreamType::Totals; - return std::make_shared(header, join, on_totals, add_default_totals); - }); + auto expression = std::make_shared(actions_dag, ExpressionActionsSettings{}); + map.add("Expression", expression->toTree()); } } diff --git a/src/Processors/QueryPlan/ExpressionStep.h b/src/Processors/QueryPlan/ExpressionStep.h index 60f186688b0..753d446f1f3 100644 --- a/src/Processors/QueryPlan/ExpressionStep.h +++ b/src/Processors/QueryPlan/ExpressionStep.h @@ -7,9 +7,6 @@ namespace DB class ActionsDAG; using ActionsDAGPtr = std::shared_ptr; -class IJoin; -using JoinPtr = std::shared_ptr; - class ExpressionTransform; class JoiningTransform; @@ -22,7 +19,7 @@ public: explicit ExpressionStep(const DataStream & input_stream_, ActionsDAGPtr actions_dag_); String getName() const override { return "Expression"; } - void transformPipeline(QueryPipeline & pipeline) override; + void transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings & settings) override; void updateInputStream(DataStream input_stream, bool keep_header); @@ -30,23 +27,10 @@ public: const ActionsDAGPtr & getExpression() const { return actions_dag; } + void describeActions(JSONBuilder::JSONMap & map) const override; + private: ActionsDAGPtr actions_dag; }; -/// TODO: add separate step for join. -class JoinStep : public ITransformingStep -{ -public: - using Transform = JoiningTransform; - - explicit JoinStep(const DataStream & input_stream_, JoinPtr join_); - String getName() const override { return "Join"; } - - void transformPipeline(QueryPipeline & pipeline) override; - -private: - JoinPtr join; -}; - } diff --git a/src/Processors/QueryPlan/ExtremesStep.cpp b/src/Processors/QueryPlan/ExtremesStep.cpp index 59dce0b40b7..d3ec403f37e 100644 --- a/src/Processors/QueryPlan/ExtremesStep.cpp +++ b/src/Processors/QueryPlan/ExtremesStep.cpp @@ -25,7 +25,7 @@ ExtremesStep::ExtremesStep(const DataStream & input_stream_) { } -void ExtremesStep::transformPipeline(QueryPipeline & pipeline) +void ExtremesStep::transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) { pipeline.addExtremesTransform(); } diff --git a/src/Processors/QueryPlan/ExtremesStep.h b/src/Processors/QueryPlan/ExtremesStep.h index cd6f27228f9..960b046b955 100644 --- a/src/Processors/QueryPlan/ExtremesStep.h +++ b/src/Processors/QueryPlan/ExtremesStep.h @@ -11,7 +11,7 @@ public: String getName() const override { return "Extremes"; } - void transformPipeline(QueryPipeline & pipeline) override; + void transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) override; }; } diff --git a/src/Processors/QueryPlan/FillingStep.cpp b/src/Processors/QueryPlan/FillingStep.cpp index 1a8fba97ee2..a4306ffed2b 100644 --- a/src/Processors/QueryPlan/FillingStep.cpp +++ b/src/Processors/QueryPlan/FillingStep.cpp @@ -2,6 +2,7 @@ #include #include #include +#include namespace DB { @@ -35,7 +36,7 @@ FillingStep::FillingStep(const DataStream & input_stream_, SortDescription sort_ throw Exception("FillingStep expects single input", ErrorCodes::LOGICAL_ERROR); } -void FillingStep::transformPipeline(QueryPipeline & pipeline) +void FillingStep::transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) { 
pipeline.addSimpleTransform([&](const Block & header) { @@ -50,4 +51,9 @@ void FillingStep::describeActions(FormatSettings & settings) const settings.out << '\n'; } +void FillingStep::describeActions(JSONBuilder::JSONMap & map) const +{ + map.add("Sort Description", explainSortDescription(sort_description, input_streams.front().header)); +} + } diff --git a/src/Processors/QueryPlan/FillingStep.h b/src/Processors/QueryPlan/FillingStep.h index c8d1f74c6ca..f4c6782e9df 100644 --- a/src/Processors/QueryPlan/FillingStep.h +++ b/src/Processors/QueryPlan/FillingStep.h @@ -13,8 +13,9 @@ public: String getName() const override { return "Filling"; } - void transformPipeline(QueryPipeline & pipeline) override; + void transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) override; + void describeActions(JSONBuilder::JSONMap & map) const override; void describeActions(FormatSettings & settings) const override; const SortDescription & getSortDescription() const { return sort_description; } diff --git a/src/Processors/QueryPlan/FilterStep.cpp b/src/Processors/QueryPlan/FilterStep.cpp index 921c1351511..522e7dabba8 100644 --- a/src/Processors/QueryPlan/FilterStep.cpp +++ b/src/Processors/QueryPlan/FilterStep.cpp @@ -4,6 +4,7 @@ #include #include #include +#include namespace DB { @@ -31,7 +32,11 @@ FilterStep::FilterStep( bool remove_filter_column_) : ITransformingStep( input_stream_, - FilterTransform::transformHeader(input_stream_.header, std::make_shared(actions_dag_), filter_column_name_, remove_filter_column_), + FilterTransform::transformHeader( + input_stream_.header, + std::make_shared(actions_dag_, ExpressionActionsSettings{}), + filter_column_name_, + remove_filter_column_), getTraits(actions_dag_)) , actions_dag(std::move(actions_dag_)) , filter_column_name(std::move(filter_column_name_)) @@ -45,7 +50,11 @@ void FilterStep::updateInputStream(DataStream input_stream, bool keep_header) { Block out_header = std::move(output_stream->header); if (keep_header) - out_header = FilterTransform::transformHeader(input_stream.header, std::make_shared(actions_dag), filter_column_name, remove_filter_column); + out_header = FilterTransform::transformHeader( + input_stream.header, + std::make_shared(actions_dag, ExpressionActionsSettings{}), + filter_column_name, + remove_filter_column); output_stream = createOutputStream( input_stream, @@ -56,9 +65,9 @@ void FilterStep::updateInputStream(DataStream input_stream, bool keep_header) input_streams.emplace_back(std::move(input_stream)); } -void FilterStep::transformPipeline(QueryPipeline & pipeline) +void FilterStep::transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings & settings) { - auto expression = std::make_shared(actions_dag); + auto expression = std::make_shared(actions_dag, settings.getActionsSettings()); pipeline.addSimpleTransform([&](const Block & header, QueryPipeline::StreamType stream_type) { bool on_totals = stream_type == QueryPipeline::StreamType::Totals; @@ -71,7 +80,7 @@ void FilterStep::transformPipeline(QueryPipeline & pipeline) pipeline.getHeader().getColumnsWithTypeAndName(), output_stream->header.getColumnsWithTypeAndName(), ActionsDAG::MatchColumnsMode::Name); - auto convert_actions = std::make_shared(convert_actions_dag); + auto convert_actions = std::make_shared(convert_actions_dag, settings.getActionsSettings()); pipeline.addSimpleTransform([&](const Block & header) { @@ -90,7 +99,7 @@ void FilterStep::describeActions(FormatSettings & settings) const settings.out << '\n'; bool first = 
true; - auto expression = std::make_shared(actions_dag); + auto expression = std::make_shared(actions_dag, ExpressionActionsSettings{}); for (const auto & action : expression->getActions()) { settings.out << prefix << (first ? "Actions: " @@ -105,4 +114,13 @@ void FilterStep::describeActions(FormatSettings & settings) const settings.out << '\n'; } +void FilterStep::describeActions(JSONBuilder::JSONMap & map) const +{ + map.add("Filter Column", filter_column_name); + map.add("Removes Filter", remove_filter_column); + + auto expression = std::make_shared(actions_dag, ExpressionActionsSettings{}); + map.add("Expression", expression->toTree()); +} + } diff --git a/src/Processors/QueryPlan/FilterStep.h b/src/Processors/QueryPlan/FilterStep.h index 72b02624faa..d01d128a08c 100644 --- a/src/Processors/QueryPlan/FilterStep.h +++ b/src/Processors/QueryPlan/FilterStep.h @@ -18,10 +18,11 @@ public: bool remove_filter_column_); String getName() const override { return "Filter"; } - void transformPipeline(QueryPipeline & pipeline) override; + void transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings & settings) override; void updateInputStream(DataStream input_stream, bool keep_header); + void describeActions(JSONBuilder::JSONMap & map) const override; void describeActions(FormatSettings & settings) const override; const ActionsDAGPtr & getExpression() const { return actions_dag; } diff --git a/src/Processors/QueryPlan/FinishSortingStep.cpp b/src/Processors/QueryPlan/FinishSortingStep.cpp index d883bd0e0dd..a2e056b3029 100644 --- a/src/Processors/QueryPlan/FinishSortingStep.cpp +++ b/src/Processors/QueryPlan/FinishSortingStep.cpp @@ -5,6 +5,7 @@ #include #include #include +#include namespace DB { @@ -47,11 +48,11 @@ void FinishSortingStep::updateLimit(size_t limit_) if (limit_ && (limit == 0 || limit_ < limit)) { limit = limit_; - transform_traits.preserves_number_of_rows = limit == 0; + transform_traits.preserves_number_of_rows = false; } } -void FinishSortingStep::transformPipeline(QueryPipeline & pipeline) +void FinishSortingStep::transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) { bool need_finish_sorting = (prefix_description.size() < result_description.size()); if (pipeline.getNumStreams() > 1) @@ -101,4 +102,13 @@ void FinishSortingStep::describeActions(FormatSettings & settings) const settings.out << prefix << "Limit " << limit << '\n'; } +void FinishSortingStep::describeActions(JSONBuilder::JSONMap & map) const +{ + map.add("Prefix Sort Description", explainSortDescription(prefix_description, input_streams.front().header)); + map.add("Result Sort Description", explainSortDescription(result_description, input_streams.front().header)); + + if (limit) + map.add("Limit", limit); +} + } diff --git a/src/Processors/QueryPlan/FinishSortingStep.h b/src/Processors/QueryPlan/FinishSortingStep.h index 4bb62037faa..9fe031e792d 100644 --- a/src/Processors/QueryPlan/FinishSortingStep.h +++ b/src/Processors/QueryPlan/FinishSortingStep.h @@ -18,8 +18,9 @@ public: String getName() const override { return "FinishSorting"; } - void transformPipeline(QueryPipeline & pipeline) override; + void transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) override; + void describeActions(JSONBuilder::JSONMap & map) const override; void describeActions(FormatSettings & settings) const override; /// Add limit or change it to lower value. 
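Nearly every step in this part of the diff gains a JSONBuilder twin of its FormatSettings-based describeActions. The recurring pattern, as a sketch for a hypothetical step with one scalar and one list-valued property (JSONBuilder usage mirrored from the hunks above; the header path is an assumption):

#include <Common/JSONBuilder.h>

#include <memory>
#include <string>
#include <vector>

namespace DB
{

void describeActionsSketch(JSONBuilder::JSONMap & map,
                           const std::vector<std::string> & columns,
                           size_t limit)
{
    /// Scalar properties map one-to-one onto keys.
    if (limit)
        map.add("Limit", limit);

    /// List-valued properties become nested JSON arrays.
    auto columns_array = std::make_unique<JSONBuilder::JSONArray>();
    for (const auto & column : columns)
        columns_array->add(column);

    map.add("Columns", std::move(columns_array));
}

}

These overloads feed the JSON form of EXPLAIN output, while the FormatSettings overloads keep serving the plain-text form.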
diff --git a/src/Processors/QueryPlan/IQueryPlanStep.h b/src/Processors/QueryPlan/IQueryPlanStep.h
index dfdbc9df442..9ff2b22e5b8 100644
--- a/src/Processors/QueryPlan/IQueryPlanStep.h
+++ b/src/Processors/QueryPlan/IQueryPlanStep.h
@@ -1,6 +1,9 @@
#pragma once
#include
#include
+#include
+
+namespace JSONBuilder { class JSONMap; }

namespace DB
{
@@ -13,6 +16,8 @@
class IProcessor;
using ProcessorPtr = std::shared_ptr<IProcessor>;
using Processors = std::vector<ProcessorPtr>;

+namespace JSONBuilder { class JSONMap; }
+
/// Description of data stream.
/// Single logical data stream may relate to many ports of pipeline.
class DataStream
@@ -75,7 +80,7 @@ public:
/// * header from each pipeline is the same as header from corresponding input_streams
/// Result pipeline must contain any number of streams with compatible output header if hasOutputStream(),
/// or pipeline should be completed otherwise.
- virtual QueryPipelinePtr updatePipeline(QueryPipelines pipelines) = 0;
+ virtual QueryPipelinePtr updatePipeline(QueryPipelines pipelines, const BuildQueryPipelineSettings & settings) = 0;

const DataStreams & getInputStreams() const { return input_streams; }
@@ -96,8 +101,13 @@ public:
};

/// Get detailed description of step actions. This is shown in EXPLAIN query with options `actions = 1`.
+ virtual void describeActions(JSONBuilder::JSONMap & /*map*/) const {}
virtual void describeActions(FormatSettings & /*settings*/) const {}

+ /// Get detailed description of read-from-storage step indexes (if any). Shown in EXPLAIN query with options `indexes = 1`.
+ virtual void describeIndexes(JSONBuilder::JSONMap & /*map*/) const {}
+ virtual void describeIndexes(FormatSettings & /*settings*/) const {}
+
/// Get description of processors added in current step. Should be called after updatePipeline().
virtual void describePipeline(FormatSettings & /*settings*/) const {}
diff --git a/src/Processors/QueryPlan/ISourceStep.cpp b/src/Processors/QueryPlan/ISourceStep.cpp
index 1796f033896..ec82e42fa34 100644
--- a/src/Processors/QueryPlan/ISourceStep.cpp
+++ b/src/Processors/QueryPlan/ISourceStep.cpp
@@ -9,11 +9,11 @@ ISourceStep::ISourceStep(DataStream output_stream_)
output_stream = std::move(output_stream_);
}

-QueryPipelinePtr ISourceStep::updatePipeline(QueryPipelines)
+QueryPipelinePtr ISourceStep::updatePipeline(QueryPipelines, const BuildQueryPipelineSettings & settings)
{
auto pipeline = std::make_unique<QueryPipeline>();
QueryPipelineProcessorsCollector collector(*pipeline, this);
- initializePipeline(*pipeline);
+ initializePipeline(*pipeline, settings);
auto added_processors = collector.detachProcessors();
processors.insert(processors.end(), added_processors.begin(), added_processors.end());
return pipeline;
diff --git a/src/Processors/QueryPlan/ISourceStep.h b/src/Processors/QueryPlan/ISourceStep.h
index fdb3dd566cb..fbef0fcce38 100644
--- a/src/Processors/QueryPlan/ISourceStep.h
+++ b/src/Processors/QueryPlan/ISourceStep.h
@@ -10,9 +10,9 @@ class ISourceStep : public IQueryPlanStep
public:
explicit ISourceStep(DataStream output_stream_);

- QueryPipelinePtr updatePipeline(QueryPipelines pipelines) override;
+ QueryPipelinePtr updatePipeline(QueryPipelines pipelines, const BuildQueryPipelineSettings & settings) override;

- virtual void initializePipeline(QueryPipeline & pipeline) = 0;
+ virtual void initializePipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings & settings) = 0;

void describePipeline(FormatSettings & settings) const override;
diff --git a/src/Processors/QueryPlan/ITransformingStep.cpp
b/src/Processors/QueryPlan/ITransformingStep.cpp index cb27bf38278..e71afd94c46 100644 --- a/src/Processors/QueryPlan/ITransformingStep.cpp +++ b/src/Processors/QueryPlan/ITransformingStep.cpp @@ -36,16 +36,16 @@ DataStream ITransformingStep::createOutputStream( } -QueryPipelinePtr ITransformingStep::updatePipeline(QueryPipelines pipelines) +QueryPipelinePtr ITransformingStep::updatePipeline(QueryPipelines pipelines, const BuildQueryPipelineSettings & settings) { if (collect_processors) { QueryPipelineProcessorsCollector collector(*pipelines.front(), this); - transformPipeline(*pipelines.front()); + transformPipeline(*pipelines.front(), settings); processors = collector.detachProcessors(); } else - transformPipeline(*pipelines.front()); + transformPipeline(*pipelines.front(), settings); return std::move(pipelines.front()); } diff --git a/src/Processors/QueryPlan/ITransformingStep.h b/src/Processors/QueryPlan/ITransformingStep.h index bd99478ec37..9abe025729d 100644 --- a/src/Processors/QueryPlan/ITransformingStep.h +++ b/src/Processors/QueryPlan/ITransformingStep.h @@ -48,9 +48,9 @@ public: ITransformingStep(DataStream input_stream, Block output_header, Traits traits, bool collect_processors_ = true); - QueryPipelinePtr updatePipeline(QueryPipelines pipelines) override; + QueryPipelinePtr updatePipeline(QueryPipelines pipelines, const BuildQueryPipelineSettings & settings) override; - virtual void transformPipeline(QueryPipeline & pipeline) = 0; + virtual void transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings & settings) = 0; const TransformTraits & getTransformTraits() const { return transform_traits; } const DataStreamTraits & getDataStreamTraits() const { return data_stream_traits; } diff --git a/src/Processors/QueryPlan/JoinStep.cpp b/src/Processors/QueryPlan/JoinStep.cpp new file mode 100644 index 00000000000..b06d6628dcb --- /dev/null +++ b/src/Processors/QueryPlan/JoinStep.cpp @@ -0,0 +1,89 @@ +#include +#include +#include +#include + +namespace DB +{ + +namespace ErrorCodes +{ + extern const int LOGICAL_ERROR; +} + +JoinStep::JoinStep( + const DataStream & left_stream_, + const DataStream & right_stream_, + JoinPtr join_, + size_t max_block_size_) + : join(std::move(join_)) + , max_block_size(max_block_size_) +{ + input_streams = {left_stream_, right_stream_}; + output_stream = DataStream + { + .header = JoiningTransform::transformHeader(left_stream_.header, join), + }; +} + +QueryPipelinePtr JoinStep::updatePipeline(QueryPipelines pipelines, const BuildQueryPipelineSettings &) +{ + if (pipelines.size() != 2) + throw Exception(ErrorCodes::LOGICAL_ERROR, "JoinStep expect two input steps"); + + return QueryPipeline::joinPipelines(std::move(pipelines[0]), std::move(pipelines[1]), join, max_block_size, &processors); +} + +void JoinStep::describePipeline(FormatSettings & settings) const +{ + IQueryPlanStep::describePipeline(processors, settings); +} + +static ITransformingStep::Traits getStorageJoinTraits() +{ + return ITransformingStep::Traits + { + { + .preserves_distinct_columns = false, + .returns_single_stream = false, + .preserves_number_of_streams = true, + .preserves_sorting = false, + }, + { + .preserves_number_of_rows = false, + } + }; +} + +FilledJoinStep::FilledJoinStep(const DataStream & input_stream_, JoinPtr join_, size_t max_block_size_) + : ITransformingStep( + input_stream_, + JoiningTransform::transformHeader(input_stream_.header, join_), + getStorageJoinTraits()) + , join(std::move(join_)) + , max_block_size(max_block_size_) +{ + if 
(!join->isFilled()) + throw Exception(ErrorCodes::LOGICAL_ERROR, "FilledJoinStep expects Join to be filled"); +} + +void FilledJoinStep::transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) +{ + bool default_totals = false; + if (!pipeline.hasTotals() && join->hasTotals()) + { + pipeline.addDefaultTotals(); + default_totals = true; + } + + auto finish_counter = std::make_shared(pipeline.getNumStreams()); + + pipeline.addSimpleTransform([&](const Block & header, QueryPipeline::StreamType stream_type) + { + bool on_totals = stream_type == QueryPipeline::StreamType::Totals; + auto counter = on_totals ? nullptr : finish_counter; + return std::make_shared(header, join, max_block_size, on_totals, default_totals, counter); + }); +} + +} diff --git a/src/Processors/QueryPlan/JoinStep.h b/src/Processors/QueryPlan/JoinStep.h new file mode 100644 index 00000000000..6430f7cbd59 --- /dev/null +++ b/src/Processors/QueryPlan/JoinStep.h @@ -0,0 +1,50 @@ +#pragma once +#include +#include + +namespace DB +{ + +class IJoin; +using JoinPtr = std::shared_ptr; + +/// Join two data streams. +class JoinStep : public IQueryPlanStep +{ +public: + JoinStep( + const DataStream & left_stream_, + const DataStream & right_stream_, + JoinPtr join_, + size_t max_block_size_); + + String getName() const override { return "Join"; } + + QueryPipelinePtr updatePipeline(QueryPipelines pipelines, const BuildQueryPipelineSettings &) override; + + void describePipeline(FormatSettings & settings) const override; + + const JoinPtr & getJoin() const { return join; } + +private: + JoinPtr join; + size_t max_block_size; + Processors processors; +}; + +/// Special step for the case when Join is already filled. +/// For StorageJoin and Dictionary. +class FilledJoinStep : public ITransformingStep +{ +public: + FilledJoinStep(const DataStream & input_stream_, JoinPtr join_, size_t max_block_size_); + + String getName() const override { return "FilledJoin"; } + void transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) override; + +private: + JoinPtr join; + size_t max_block_size; +}; + +} diff --git a/src/Processors/QueryPlan/LimitByStep.cpp b/src/Processors/QueryPlan/LimitByStep.cpp index 9fcf5b60164..8ded0784b41 100644 --- a/src/Processors/QueryPlan/LimitByStep.cpp +++ b/src/Processors/QueryPlan/LimitByStep.cpp @@ -2,6 +2,7 @@ #include #include #include +#include namespace DB { @@ -33,7 +34,7 @@ LimitByStep::LimitByStep( } -void LimitByStep::transformPipeline(QueryPipeline & pipeline) +void LimitByStep::transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) { pipeline.resize(1); @@ -72,4 +73,15 @@ void LimitByStep::describeActions(FormatSettings & settings) const settings.out << prefix << "Offset " << group_offset << '\n'; } +void LimitByStep::describeActions(JSONBuilder::JSONMap & map) const +{ + auto columns_array = std::make_unique(); + for (const auto & column : columns) + columns_array->add(column); + + map.add("Columns", std::move(columns_array)); + map.add("Length", group_length); + map.add("Offset", group_offset); +} + } diff --git a/src/Processors/QueryPlan/LimitByStep.h b/src/Processors/QueryPlan/LimitByStep.h index 9320735640c..1b574cd02a1 100644 --- a/src/Processors/QueryPlan/LimitByStep.h +++ b/src/Processors/QueryPlan/LimitByStep.h @@ -14,8 +14,9 @@ public: String getName() const override { return "LimitBy"; } - void transformPipeline(QueryPipeline & pipeline) override; + void transformPipeline(QueryPipeline & pipeline, const 
BuildQueryPipelineSettings &) override; + void describeActions(JSONBuilder::JSONMap & map) const override; void describeActions(FormatSettings & settings) const override; private: diff --git a/src/Processors/QueryPlan/LimitStep.cpp b/src/Processors/QueryPlan/LimitStep.cpp index 565d05956e5..5f5a0bd0d64 100644 --- a/src/Processors/QueryPlan/LimitStep.cpp +++ b/src/Processors/QueryPlan/LimitStep.cpp @@ -2,6 +2,7 @@ #include #include #include +#include namespace DB { @@ -42,7 +43,7 @@ void LimitStep::updateInputStream(DataStream input_stream) output_stream = createOutputStream(input_streams.front(), output_stream->header, getDataStreamTraits()); } -void LimitStep::transformPipeline(QueryPipeline & pipeline) +void LimitStep::transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) { auto transform = std::make_shared( pipeline.getHeader(), limit, offset, pipeline.getNumStreams(), always_read_till_end, with_ties, description); @@ -76,4 +77,12 @@ void LimitStep::describeActions(FormatSettings & settings) const } } +void LimitStep::describeActions(JSONBuilder::JSONMap & map) const +{ + map.add("Limit", limit); + map.add("Offset", offset); + map.add("With Ties", with_ties); + map.add("Reads All Data", always_read_till_end); +} + } diff --git a/src/Processors/QueryPlan/LimitStep.h b/src/Processors/QueryPlan/LimitStep.h index eea186fea4a..772ba0722a7 100644 --- a/src/Processors/QueryPlan/LimitStep.h +++ b/src/Processors/QueryPlan/LimitStep.h @@ -18,8 +18,9 @@ public: String getName() const override { return "Limit"; } - void transformPipeline(QueryPipeline & pipeline) override; + void transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) override; + void describeActions(JSONBuilder::JSONMap & map) const override; void describeActions(FormatSettings & settings) const override; size_t getLimitForSorting() const diff --git a/src/Processors/QueryPlan/MergeSortingStep.cpp b/src/Processors/QueryPlan/MergeSortingStep.cpp index b30286130b1..c9e141281f4 100644 --- a/src/Processors/QueryPlan/MergeSortingStep.cpp +++ b/src/Processors/QueryPlan/MergeSortingStep.cpp @@ -2,6 +2,7 @@ #include #include #include +#include namespace DB { @@ -52,11 +53,11 @@ void MergeSortingStep::updateLimit(size_t limit_) if (limit_ && (limit == 0 || limit_ < limit)) { limit = limit_; - transform_traits.preserves_number_of_rows = limit == 0; + transform_traits.preserves_number_of_rows = false; } } -void MergeSortingStep::transformPipeline(QueryPipeline & pipeline) +void MergeSortingStep::transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) { pipeline.addSimpleTransform([&](const Block & header, QueryPipeline::StreamType stream_type) -> ProcessorPtr { @@ -84,4 +85,12 @@ void MergeSortingStep::describeActions(FormatSettings & settings) const settings.out << prefix << "Limit " << limit << '\n'; } +void MergeSortingStep::describeActions(JSONBuilder::JSONMap & map) const +{ + map.add("Sort Description", explainSortDescription(description, input_streams.front().header)); + + if (limit) + map.add("Limit", limit); +} + } diff --git a/src/Processors/QueryPlan/MergeSortingStep.h b/src/Processors/QueryPlan/MergeSortingStep.h index a385a8a3e93..dcecdffd122 100644 --- a/src/Processors/QueryPlan/MergeSortingStep.h +++ b/src/Processors/QueryPlan/MergeSortingStep.h @@ -24,8 +24,9 @@ public: String getName() const override { return "MergeSorting"; } - void transformPipeline(QueryPipeline & pipeline) override; + void transformPipeline(QueryPipeline & pipeline, const 
BuildQueryPipelineSettings &) override; + void describeActions(JSONBuilder::JSONMap & map) const override; void describeActions(FormatSettings & settings) const override; /// Add limit or change it to lower value. diff --git a/src/Processors/QueryPlan/MergingAggregatedStep.cpp b/src/Processors/QueryPlan/MergingAggregatedStep.cpp index 85bbbeab59a..71efb37b363 100644 --- a/src/Processors/QueryPlan/MergingAggregatedStep.cpp +++ b/src/Processors/QueryPlan/MergingAggregatedStep.cpp @@ -40,7 +40,7 @@ MergingAggregatedStep::MergingAggregatedStep( output_stream->distinct_columns.insert(params->params.intermediate_header.getByPosition(key).name); } -void MergingAggregatedStep::transformPipeline(QueryPipeline & pipeline) +void MergingAggregatedStep::transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) { if (!memory_efficient_aggregation) { @@ -68,4 +68,9 @@ void MergingAggregatedStep::describeActions(FormatSettings & settings) const return params->params.explain(settings.out, settings.offset); } +void MergingAggregatedStep::describeActions(JSONBuilder::JSONMap & map) const +{ + params->params.explain(map); +} + } diff --git a/src/Processors/QueryPlan/MergingAggregatedStep.h b/src/Processors/QueryPlan/MergingAggregatedStep.h index 5ffceb7f938..2e94d536a8c 100644 --- a/src/Processors/QueryPlan/MergingAggregatedStep.h +++ b/src/Processors/QueryPlan/MergingAggregatedStep.h @@ -21,8 +21,9 @@ public: String getName() const override { return "MergingAggregated"; } - void transformPipeline(QueryPipeline & pipeline) override; + void transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) override; + void describeActions(JSONBuilder::JSONMap & map) const override; void describeActions(FormatSettings & settings) const override; private: diff --git a/src/Processors/QueryPlan/MergingFinal.cpp b/src/Processors/QueryPlan/MergingFinal.cpp index c1dec231f11..c564a28d377 100644 --- a/src/Processors/QueryPlan/MergingFinal.cpp +++ b/src/Processors/QueryPlan/MergingFinal.cpp @@ -9,6 +9,7 @@ #include #include #include +#include namespace DB { @@ -53,7 +54,7 @@ MergingFinal::MergingFinal( // output_stream->sort_mode = DataStream::SortMode::Stream; } -void MergingFinal::transformPipeline(QueryPipeline & pipeline) +void MergingFinal::transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) { const auto & header = pipeline.getHeader(); size_t num_outputs = pipeline.getNumStreams(); @@ -161,4 +162,9 @@ void MergingFinal::describeActions(FormatSettings & settings) const settings.out << '\n'; } +void MergingFinal::describeActions(JSONBuilder::JSONMap & map) const +{ + map.add("Sort Description", explainSortDescription(sort_description, input_streams.front().header)); +} + } diff --git a/src/Processors/QueryPlan/MergingFinal.h b/src/Processors/QueryPlan/MergingFinal.h index c01f5c7f9a1..ed0394a62f4 100644 --- a/src/Processors/QueryPlan/MergingFinal.h +++ b/src/Processors/QueryPlan/MergingFinal.h @@ -20,8 +20,9 @@ public: String getName() const override { return "MergingFinal"; } - void transformPipeline(QueryPipeline & pipeline) override; + void transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) override; + void describeActions(JSONBuilder::JSONMap & map) const override; void describeActions(FormatSettings & settings) const override; private: diff --git a/src/Processors/QueryPlan/MergingSortedStep.cpp b/src/Processors/QueryPlan/MergingSortedStep.cpp index c59540b009f..7e866f4ccd2 100644 --- 
a/src/Processors/QueryPlan/MergingSortedStep.cpp +++ b/src/Processors/QueryPlan/MergingSortedStep.cpp @@ -2,6 +2,7 @@ #include #include #include +#include namespace DB { @@ -42,11 +43,11 @@ void MergingSortedStep::updateLimit(size_t limit_) if (limit_ && (limit == 0 || limit_ < limit)) { limit = limit_; - transform_traits.preserves_number_of_rows = limit == 0; + transform_traits.preserves_number_of_rows = false; } } -void MergingSortedStep::transformPipeline(QueryPipeline & pipeline) +void MergingSortedStep::transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) { /// If there are several streams, then we merge them into one if (pipeline.getNumStreams() > 1) @@ -73,4 +74,12 @@ void MergingSortedStep::describeActions(FormatSettings & settings) const settings.out << prefix << "Limit " << limit << '\n'; } +void MergingSortedStep::describeActions(JSONBuilder::JSONMap & map) const +{ + map.add("Sort Description", explainSortDescription(sort_description, input_streams.front().header)); + + if (limit) + map.add("Limit", limit); +} + } diff --git a/src/Processors/QueryPlan/MergingSortedStep.h b/src/Processors/QueryPlan/MergingSortedStep.h index 483cfa5e8a7..4f82e3830d0 100644 --- a/src/Processors/QueryPlan/MergingSortedStep.h +++ b/src/Processors/QueryPlan/MergingSortedStep.h @@ -19,8 +19,9 @@ public: String getName() const override { return "MergingSorted"; } - void transformPipeline(QueryPipeline & pipeline) override; + void transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) override; + void describeActions(JSONBuilder::JSONMap & map) const override; void describeActions(FormatSettings & settings) const override; /// Add limit or change it to lower value. diff --git a/src/Processors/QueryPlan/OffsetStep.cpp b/src/Processors/QueryPlan/OffsetStep.cpp index 7ac3d3f2110..34ddb687ddd 100644 --- a/src/Processors/QueryPlan/OffsetStep.cpp +++ b/src/Processors/QueryPlan/OffsetStep.cpp @@ -2,6 +2,7 @@ #include #include #include +#include namespace DB { @@ -28,7 +29,7 @@ OffsetStep::OffsetStep(const DataStream & input_stream_, size_t offset_) { } -void OffsetStep::transformPipeline(QueryPipeline & pipeline) +void OffsetStep::transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) { auto transform = std::make_shared( pipeline.getHeader(), offset, pipeline.getNumStreams()); @@ -41,4 +42,9 @@ void OffsetStep::describeActions(FormatSettings & settings) const settings.out << String(settings.offset, ' ') << "Offset " << offset << '\n'; } +void OffsetStep::describeActions(JSONBuilder::JSONMap & map) const +{ + map.add("Offset", offset); +} + } diff --git a/src/Processors/QueryPlan/OffsetStep.h b/src/Processors/QueryPlan/OffsetStep.h index 17949371edf..a10fcc7baec 100644 --- a/src/Processors/QueryPlan/OffsetStep.h +++ b/src/Processors/QueryPlan/OffsetStep.h @@ -13,8 +13,9 @@ public: String getName() const override { return "Offset"; } - void transformPipeline(QueryPipeline & pipeline) override; + void transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) override; + void describeActions(JSONBuilder::JSONMap & map) const override; void describeActions(FormatSettings & settings) const override; private: diff --git a/src/Processors/QueryPlan/Optimizations/Optimizations.h b/src/Processors/QueryPlan/Optimizations/Optimizations.h index f96237fc71a..7e946a71fad 100644 --- a/src/Processors/QueryPlan/Optimizations/Optimizations.h +++ b/src/Processors/QueryPlan/Optimizations/Optimizations.h @@ -1,5 +1,6 @@ #pragma once 
#include +#include #include namespace DB @@ -23,6 +24,7 @@ struct Optimization using Function = size_t (*)(QueryPlan::Node *, QueryPlan::Nodes &); const Function apply = nullptr; const char * name; + const bool QueryPlanOptimizationSettings::* const is_enabled; }; /// Move ARRAY JOIN up if possible. @@ -46,11 +48,11 @@ inline const auto & getOptimizations() { static const std::array optimizations = {{ - {tryLiftUpArrayJoin, "liftUpArrayJoin"}, - {tryPushDownLimit, "pushDownLimit"}, - {trySplitFilter, "splitFilter"}, - {tryMergeExpressions, "mergeExpressions"}, - {tryPushDownFilter, "pushDownFilter"}, + {tryLiftUpArrayJoin, "liftUpArrayJoin", &QueryPlanOptimizationSettings::optimize_plan}, + {tryPushDownLimit, "pushDownLimit", &QueryPlanOptimizationSettings::optimize_plan}, + {trySplitFilter, "splitFilter", &QueryPlanOptimizationSettings::optimize_plan}, + {tryMergeExpressions, "mergeExpressions", &QueryPlanOptimizationSettings::optimize_plan}, + {tryPushDownFilter, "pushDownFilter", &QueryPlanOptimizationSettings::filter_push_down}, }}; return optimizations; diff --git a/src/Processors/QueryPlan/Optimizations/QueryPlanOptimizationSettings.cpp b/src/Processors/QueryPlan/Optimizations/QueryPlanOptimizationSettings.cpp index cbd38d46ebf..1472fb87a89 100644 --- a/src/Processors/QueryPlan/Optimizations/QueryPlanOptimizationSettings.cpp +++ b/src/Processors/QueryPlan/Optimizations/QueryPlanOptimizationSettings.cpp @@ -1,12 +1,22 @@ #include #include +#include namespace DB { -QueryPlanOptimizationSettings::QueryPlanOptimizationSettings(const Settings & settings) +QueryPlanOptimizationSettings QueryPlanOptimizationSettings::fromSettings(const Settings & from) { - max_optimizations_to_apply = settings.query_plan_max_optimizations_to_apply; + QueryPlanOptimizationSettings settings; + settings.optimize_plan = from.query_plan_enable_optimizations; + settings.max_optimizations_to_apply = from.query_plan_max_optimizations_to_apply; + settings.filter_push_down = from.query_plan_filter_push_down; + return settings; +} + +QueryPlanOptimizationSettings QueryPlanOptimizationSettings::fromContext(ContextPtr from) +{ + return fromSettings(from->getSettingsRef()); } } diff --git a/src/Processors/QueryPlan/Optimizations/QueryPlanOptimizationSettings.h b/src/Processors/QueryPlan/Optimizations/QueryPlanOptimizationSettings.h index 074298e24a1..b5a37bf69d6 100644 --- a/src/Processors/QueryPlan/Optimizations/QueryPlanOptimizationSettings.h +++ b/src/Processors/QueryPlan/Optimizations/QueryPlanOptimizationSettings.h @@ -1,5 +1,7 @@ #pragma once +#include + #include namespace DB @@ -9,12 +11,18 @@ struct Settings; struct QueryPlanOptimizationSettings { - QueryPlanOptimizationSettings() = delete; - explicit QueryPlanOptimizationSettings(const Settings & settings); - /// If not zero, throw if too many optimizations were applied to query plan. /// It helps to avoid infinite optimization loop. size_t max_optimizations_to_apply = 0; + + /// If disabled, no optimization applied. + bool optimize_plan = true; + + /// If filter push down optimization is enabled. 
+ bool filter_push_down = true; + + static QueryPlanOptimizationSettings fromSettings(const Settings & from); + static QueryPlanOptimizationSettings fromContext(ContextPtr from); }; } diff --git a/src/Processors/QueryPlan/Optimizations/filterPushDown.cpp b/src/Processors/QueryPlan/Optimizations/filterPushDown.cpp index d64f082b7ee..e1fac21d5c1 100644 --- a/src/Processors/QueryPlan/Optimizations/filterPushDown.cpp +++ b/src/Processors/QueryPlan/Optimizations/filterPushDown.cpp @@ -3,7 +3,9 @@ #include #include #include +#include #include +#include #include #include #include @@ -11,8 +13,10 @@ #include #include #include +#include #include #include +#include #include #include @@ -43,43 +47,38 @@ static size_t tryAddNewFilterStep( // std::cerr << "Filter: \n" << expression->dumpDAG() << std::endl; - auto split_filter = expression->splitActionsForFilter(filter_column_name, removes_filter, allowed_inputs); + const auto & all_inputs = child->getInputStreams().front().header.getColumnsWithTypeAndName(); + + auto split_filter = expression->cloneActionsForFilterPushDown(filter_column_name, removes_filter, allowed_inputs, all_inputs); if (!split_filter) return 0; // std::cerr << "===============\n" << expression->dumpDAG() << std::endl; // std::cerr << "---------------\n" << split_filter->dumpDAG() << std::endl; - const auto & index = expression->getIndex(); - auto it = index.begin(); - for (; it != index.end(); ++it) - if ((*it)->result_name == filter_column_name) - break; - - const bool found_filter_column = it != expression->getIndex().end(); - - if (!found_filter_column && !removes_filter) + const auto * filter_node = expression->tryFindInIndex(filter_column_name); + if (!filter_node && !removes_filter) throw Exception(ErrorCodes::LOGICAL_ERROR, "Filter column {} was removed from ActionsDAG but it is needed in result. DAG:\n{}", filter_column_name, expression->dumpDAG()); /// Filter column was replaced to constant. - const bool filter_is_constant = found_filter_column && (*it)->column && isColumnConst(*(*it)->column); + const bool filter_is_constant = filter_node && filter_node->column && isColumnConst(*filter_node->column); - if (!found_filter_column || filter_is_constant) - /// This means that all predicates of filter were pused down. + if (!filter_node || filter_is_constant) + /// This means that all predicates of filter were pushed down. /// Replace current actions to expression, as we don't need to filter anything. parent = std::make_unique(child->getOutputStream(), expression); /// Add new Filter step before Aggregating. /// Expression/Filter -> Aggregating -> Something auto & node = nodes.emplace_back(); - node.children.swap(child_node->children); - child_node->children.emplace_back(&node); + node.children.emplace_back(&node); + std::swap(node.children[0], child_node->children[0]); /// Expression/Filter -> Aggregating -> Filter -> Something - /// New filter column is added to the end. - auto split_filter_column_name = (*split_filter->getIndex().rbegin())->result_name; + /// New filter column is the first one. 
+ auto split_filter_column_name = (*split_filter->getIndex().begin())->result_name;
node.step = std::make_unique<FilterStep>(
node.children.at(0)->step->getOutputStream(), std::move(split_filter), std::move(split_filter_column_name), true);
@@ -87,7 +86,7 @@ static size_t tryAddNewFilterStep(
return 3;
}

-static Names getAggregatinKeys(const Aggregator::Params & params)
+static Names getAggregatingKeys(const Aggregator::Params & params)
{
Names keys;
keys.reserve(params.keys.size());
@@ -117,17 +116,36 @@ size_t tryPushDownFilter(QueryPlan::Node * parent_node, QueryPlan::Nodes & nodes
if (auto * aggregating = typeid_cast<AggregatingStep *>(child.get()))
{
const auto & params = aggregating->getParams();
- Names keys = getAggregatinKeys(params);
+ Names keys = getAggregatingKeys(params);
if (auto updated_steps = tryAddNewFilterStep(parent_node, nodes, keys))
return updated_steps;
}

+ if (typeid_cast<CreatingSetsStep *>(child.get()))
+ {
+ /// CreatingSets does not change header.
+ /// We can push down filter and update header.
+ ///                       - Something
+ /// Filter - CreatingSets - CreatingSet
+ ///                       - CreatingSet
+ auto input_streams = child->getInputStreams();
+ input_streams.front() = filter->getOutputStream();
+ child = std::make_unique<CreatingSetsStep>(input_streams);
+ std::swap(parent, child);
+ std::swap(parent_node->children, child_node->children);
+ std::swap(parent_node->children.front(), child_node->children.front());
+ ///              - Filter - Something
+ /// CreatingSets - CreatingSet
+ ///              - CreatingSet
+ return 2;
+ }
+
if (auto * totals_having = typeid_cast<TotalsHavingStep *>(child.get()))
{
/// If totals step has HAVING expression, skip it for now.
/// TODO:
- /// We can merge HAING expression with current filer.
+ /// We can merge HAVING expression with current filter.
/// Also, we can push down part of HAVING which depend only on aggregation keys.
if (totals_having->getActions())
return 0;
@@ -173,6 +191,36 @@ size_t tryPushDownFilter(QueryPlan::Node * parent_node, QueryPlan::Nodes & nodes
return updated_steps;
}

+ if (auto * join = typeid_cast<JoinStep *>(child.get()))
+ {
+ const auto & table_join = join->getJoin()->getTableJoin();
+ /// Push down is for left table only. We need to update JoinStep for push down into right.
+ /// Only inner and left join are supported. Other types may generate default values for left table keys.
+ /// So, if we push down a condition like `key != 0`, not all rows may be filtered.
+ if (table_join.kind() == ASTTableJoin::Kind::Inner || table_join.kind() == ASTTableJoin::Kind::Left)
+ {
+ const auto & left_header = join->getInputStreams().front().header;
+ const auto & res_header = join->getOutputStream().header;
+ Names allowed_keys;
+ for (const auto & name : table_join.keyNamesLeft())
+ {
+ /// Skip key if it is renamed.
+ /// I don't know if it is possible. Just in case.
+ if (!left_header.has(name) || !res_header.has(name))
+ continue;
+
+ /// Skip if type is changed. Push down expression expects equal types.
+ if (!left_header.getByName(name).type->equals(*res_header.getByName(name).type))
+ continue;
+
+ allowed_keys.push_back(name);
+ }
+
+ if (auto updated_steps = tryAddNewFilterStep(parent_node, nodes, allowed_keys))
+ return updated_steps;
+ }
+ }
+
/// TODO.
/// We can filter earlier if expression does not depend on WITH FILL columns.
/// But we cannot just push down condition, because other column may be filled with defaults.
@@ -198,6 +246,48 @@ size_t tryPushDownFilter(QueryPlan::Node * parent_node, QueryPlan::Nodes & nodes
return updated_steps;
}

+ if (auto * union_step = typeid_cast<UnionStep *>(child.get()))
+ {
+ /// Union does not change header.
+ /// We can push down filter and update header. + auto union_input_streams = child->getInputStreams(); + for (auto & input_stream : union_input_streams) + input_stream.header = filter->getOutputStream().header; + + /// - Something + /// Filter - Union - Something + /// - Something + + child = std::make_unique(union_input_streams, union_step->getMaxThreads()); + + std::swap(parent, child); + std::swap(parent_node->children, child_node->children); + std::swap(parent_node->children.front(), child_node->children.front()); + + /// - Filter - Something + /// Union - Something + /// - Something + + for (size_t i = 1; i < parent_node->children.size(); ++i) + { + auto & filter_node = nodes.emplace_back(); + filter_node.children.push_back(parent_node->children[i]); + parent_node->children[i] = &filter_node; + + filter_node.step = std::make_unique( + filter_node.children.front()->step->getOutputStream(), + filter->getExpression()->clone(), + filter->getFilterColumnName(), + filter->removesFilterColumn()); + } + + /// - Filter - Something + /// Union - Filter - Something + /// - Filter - Something + + return 3; + } + return 0; } diff --git a/src/Processors/QueryPlan/Optimizations/mergeExpressions.cpp b/src/Processors/QueryPlan/Optimizations/mergeExpressions.cpp index dfd15a2a929..a5cb5972bd8 100644 --- a/src/Processors/QueryPlan/Optimizations/mergeExpressions.cpp +++ b/src/Processors/QueryPlan/Optimizations/mergeExpressions.cpp @@ -50,8 +50,10 @@ size_t tryMergeExpressions(QueryPlan::Node * parent_node, QueryPlan::Nodes &) auto merged = ActionsDAG::merge(std::move(*child_actions), std::move(*parent_actions)); - auto filter = std::make_unique(child_expr->getInputStreams().front(), merged, - parent_filter->getFilterColumnName(), parent_filter->removesFilterColumn()); + auto filter = std::make_unique(child_expr->getInputStreams().front(), + merged, + parent_filter->getFilterColumnName(), + parent_filter->removesFilterColumn()); filter->setStepDescription("(" + parent_filter->getStepDescription() + " + " + child_expr->getStepDescription() + ")"); parent_node->step = std::move(filter); diff --git a/src/Processors/QueryPlan/Optimizations/optimizeTree.cpp b/src/Processors/QueryPlan/Optimizations/optimizeTree.cpp index 858bde9c660..da9b1e26f68 100644 --- a/src/Processors/QueryPlan/Optimizations/optimizeTree.cpp +++ b/src/Processors/QueryPlan/Optimizations/optimizeTree.cpp @@ -16,6 +16,9 @@ namespace QueryPlanOptimizations void optimizeTree(const QueryPlanOptimizationSettings & settings, QueryPlan::Node & root, QueryPlan::Nodes & nodes) { + if (!settings.optimize_plan) + return; + const auto & optimizations = getOptimizations(); struct Frame @@ -63,6 +66,9 @@ void optimizeTree(const QueryPlanOptimizationSettings & settings, QueryPlan::Nod /// Apply all optimizations. for (const auto & optimization : optimizations) { + if (!(settings.*(optimization.is_enabled))) + continue; + /// Just in case, skip optimization if it is not initialized. 
if (!optimization.apply) continue; diff --git a/src/Processors/QueryPlan/PartialSortingStep.cpp b/src/Processors/QueryPlan/PartialSortingStep.cpp index ce34eca9112..f4abea440fe 100644 --- a/src/Processors/QueryPlan/PartialSortingStep.cpp +++ b/src/Processors/QueryPlan/PartialSortingStep.cpp @@ -3,6 +3,7 @@ #include #include #include +#include namespace DB { @@ -42,11 +43,11 @@ void PartialSortingStep::updateLimit(size_t limit_) if (limit_ && (limit == 0 || limit_ < limit)) { limit = limit_; - transform_traits.preserves_number_of_rows = limit == 0; + transform_traits.preserves_number_of_rows = false; } } -void PartialSortingStep::transformPipeline(QueryPipeline & pipeline) +void PartialSortingStep::transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) { pipeline.addSimpleTransform([&](const Block & header, QueryPipeline::StreamType stream_type) -> ProcessorPtr { @@ -81,4 +82,12 @@ void PartialSortingStep::describeActions(FormatSettings & settings) const settings.out << prefix << "Limit " << limit << '\n'; } +void PartialSortingStep::describeActions(JSONBuilder::JSONMap & map) const +{ + map.add("Sort Description", explainSortDescription(sort_description, input_streams.front().header)); + + if (limit) + map.add("Limit", limit); +} + } diff --git a/src/Processors/QueryPlan/PartialSortingStep.h b/src/Processors/QueryPlan/PartialSortingStep.h index 172ef25c300..aeca42f7096 100644 --- a/src/Processors/QueryPlan/PartialSortingStep.h +++ b/src/Processors/QueryPlan/PartialSortingStep.h @@ -18,8 +18,9 @@ public: String getName() const override { return "PartialSorting"; } - void transformPipeline(QueryPipeline & pipeline) override; + void transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) override; + void describeActions(JSONBuilder::JSONMap & map) const override; void describeActions(FormatSettings & settings) const override; /// Add limit or change it to lower value. 
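One detail in the Optimizations.h hunk above is easy to miss: `is_enabled` is a pointer to a data member of `QueryPlanOptimizationSettings`, and `optimizeTree` gates each pass with `settings.*(optimization.is_enabled)`. The following self-contained sketch shows just that idiom, with `Settings` and `Optimization` reduced to toy stand-ins for the structs in the patch:

#include <array>
#include <cstdio>

struct Settings
{
    bool optimize_plan = true;
    bool filter_push_down = true;
};

struct Optimization
{
    const char * name;
    const bool Settings::* is_enabled; // which flag controls this pass
};

int main()
{
    static const std::array<Optimization, 2> optimizations{{
        {"pushDownLimit", &Settings::optimize_plan},
        {"pushDownFilter", &Settings::filter_push_down},
    }};

    Settings settings;
    settings.filter_push_down = false; // e.g. query_plan_filter_push_down = 0

    for (const auto & optimization : optimizations)
        if (settings.*(optimization.is_enabled)) // same check as in optimizeTree()
            std::printf("applying %s\n", optimization.name); // prints only pushDownLimit
}

The benefit is that the optimization table stays a flat constant array; adding a per-pass switch only means pointing a new entry at a new bool field.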
diff --git a/src/Processors/QueryPlan/QueryPlan.cpp b/src/Processors/QueryPlan/QueryPlan.cpp index f5d5e0d99b7..3e46adb9d9c 100644 --- a/src/Processors/QueryPlan/QueryPlan.cpp +++ b/src/Processors/QueryPlan/QueryPlan.cpp @@ -7,6 +7,9 @@ #include #include #include +#include +#include +#include namespace DB { @@ -130,7 +133,9 @@ void QueryPlan::addStep(QueryPlanStepPtr step) " input expected", ErrorCodes::LOGICAL_ERROR); } -QueryPipelinePtr QueryPlan::buildQueryPipeline(const QueryPlanOptimizationSettings & optimization_settings) +QueryPipelinePtr QueryPlan::buildQueryPipeline( + const QueryPlanOptimizationSettings & optimization_settings, + const BuildQueryPipelineSettings & build_pipeline_settings) { checkInitialized(); optimize(optimization_settings); @@ -160,7 +165,7 @@ QueryPipelinePtr QueryPlan::buildQueryPipeline(const QueryPlanOptimizationSettin if (next_child == frame.node->children.size()) { bool limit_max_threads = frame.pipelines.empty(); - last_pipeline = frame.node->step->updatePipeline(std::move(frame.pipelines)); + last_pipeline = frame.node->step->updatePipeline(std::move(frame.pipelines), build_pipeline_settings); if (limit_max_threads && max_threads) last_pipeline->limitMaxThreads(max_threads); @@ -177,7 +182,9 @@ QueryPipelinePtr QueryPlan::buildQueryPipeline(const QueryPlanOptimizationSettin return last_pipeline; } -Pipe QueryPlan::convertToPipe(const QueryPlanOptimizationSettings & optimization_settings) +Pipe QueryPlan::convertToPipe( + const QueryPlanOptimizationSettings & optimization_settings, + const BuildQueryPipelineSettings & build_pipeline_settings) { if (!isInitialized()) return {}; @@ -185,7 +192,7 @@ Pipe QueryPlan::convertToPipe(const QueryPlanOptimizationSettings & optimization if (isCompleted()) throw Exception("Cannot convert completed QueryPlan to Pipe", ErrorCodes::LOGICAL_ERROR); - return QueryPipeline::getPipe(std::move(*buildQueryPipeline(optimization_settings))); + return QueryPipeline::getPipe(std::move(*buildQueryPipeline(optimization_settings, build_pipeline_settings))); } void QueryPlan::addInterpreterContext(std::shared_ptr context) @@ -194,6 +201,92 @@ void QueryPlan::addInterpreterContext(std::shared_ptr context) } +static void explainStep(const IQueryPlanStep & step, JSONBuilder::JSONMap & map, const QueryPlan::ExplainPlanOptions & options) +{ + map.add("Node Type", step.getName()); + + if (options.description) + { + const auto & description = step.getStepDescription(); + if (!description.empty()) + map.add("Description", description); + } + + if (options.header && step.hasOutputStream()) + { + auto header_array = std::make_unique(); + + for (const auto & output_column : step.getOutputStream().header) + { + auto column_map = std::make_unique(); + column_map->add("Name", output_column.name); + if (output_column.type) + column_map->add("Type", output_column.type->getName()); + + header_array->add(std::move(column_map)); + } + + map.add("Header", std::move(header_array)); + } + + if (options.actions) + step.describeActions(map); + + if (options.indexes) + step.describeIndexes(map); +} + +JSONBuilder::ItemPtr QueryPlan::explainPlan(const ExplainPlanOptions & options) +{ + checkInitialized(); + + struct Frame + { + Node * node; + size_t next_child = 0; + std::unique_ptr node_map = {}; + std::unique_ptr children_array = {}; + }; + + std::stack stack; + stack.push(Frame{.node = root}); + + std::unique_ptr tree; + + while (!stack.empty()) + { + auto & frame = stack.top(); + + if (frame.next_child == 0) + { + if (!frame.node->children.empty()) + 
frame.children_array = std::make_unique(); + + frame.node_map = std::make_unique(); + explainStep(*frame.node->step, *frame.node_map, options); + } + + if (frame.next_child < frame.node->children.size()) + { + stack.push(Frame{frame.node->children[frame.next_child]}); + ++frame.next_child; + } + else + { + if (frame.children_array) + frame.node_map->add("Plans", std::move(frame.children_array)); + + tree.swap(frame.node_map); + stack.pop(); + + if (!stack.empty()) + stack.top().children_array->add(std::move(tree)); + } + } + + return tree; +} + static void explainStep( const IQueryPlanStep & step, IQueryPlanStep::FormatSettings & settings, @@ -237,6 +330,9 @@ static void explainStep( if (options.actions) step.describeActions(settings); + + if (options.indexes) + step.describeIndexes(settings); } std::string debugExplainStep(const IQueryPlanStep & step) diff --git a/src/Processors/QueryPlan/QueryPlan.h b/src/Processors/QueryPlan/QueryPlan.h index 7973f9af45a..4c75f00cf4d 100644 --- a/src/Processors/QueryPlan/QueryPlan.h +++ b/src/Processors/QueryPlan/QueryPlan.h @@ -1,11 +1,12 @@ #pragma once -#include -#include -#include -#include #include -#include +#include + +#include +#include +#include +#include namespace DB { @@ -18,7 +19,6 @@ using QueryPlanStepPtr = std::unique_ptr; class QueryPipeline; using QueryPipelinePtr = std::unique_ptr; -class Context; class WriteBuffer; class QueryPlan; @@ -26,6 +26,15 @@ using QueryPlanPtr = std::unique_ptr; class Pipe; +struct QueryPlanOptimizationSettings; +struct BuildQueryPipelineSettings; + +namespace JSONBuilder +{ + class IItem; + using ItemPtr = std::unique_ptr; +} + /// A tree of query steps. /// The goal of QueryPlan is to build QueryPipeline. /// QueryPlan let delay pipeline creation which is helpful for pipeline-level optimizations. @@ -46,10 +55,14 @@ public: void optimize(const QueryPlanOptimizationSettings & optimization_settings); - QueryPipelinePtr buildQueryPipeline(const QueryPlanOptimizationSettings & optimization_settings); + QueryPipelinePtr buildQueryPipeline( + const QueryPlanOptimizationSettings & optimization_settings, + const BuildQueryPipelineSettings & build_pipeline_settings); /// If initialized, build pipeline and convert to pipe. Otherwise, return empty pipe. - Pipe convertToPipe(const QueryPlanOptimizationSettings & optimization_settings); + Pipe convertToPipe( + const QueryPlanOptimizationSettings & optimization_settings, + const BuildQueryPipelineSettings & build_pipeline_settings); struct ExplainPlanOptions { @@ -59,6 +72,8 @@ public: bool description = true; /// Add detailed information about step actions. bool actions = false; + /// Add information about indexes actions. 
+ bool indexes = false; }; struct ExplainPipelineOptions @@ -67,6 +82,7 @@ public: bool header = false; }; + JSONBuilder::ItemPtr explainPlan(const ExplainPlanOptions & options); void explainPlan(WriteBuffer & buffer, const ExplainPlanOptions & options); void explainPipeline(WriteBuffer & buffer, const ExplainPipelineOptions & options); diff --git a/src/Processors/QueryPlan/ReadFromMergeTree.cpp b/src/Processors/QueryPlan/ReadFromMergeTree.cpp new file mode 100644 index 00000000000..8a7fca63d8c --- /dev/null +++ b/src/Processors/QueryPlan/ReadFromMergeTree.cpp @@ -0,0 +1,315 @@ +#include +#include +#include +#include +#include +#include +#include +#include +#include + +namespace DB +{ + +ReadFromMergeTree::ReadFromMergeTree( + const MergeTreeData & storage_, + StorageMetadataPtr metadata_snapshot_, + String query_id_, + Names required_columns_, + RangesInDataParts parts_, + IndexStatPtr index_stats_, + PrewhereInfoPtr prewhere_info_, + Names virt_column_names_, + Settings settings_, + size_t num_streams_, + ReadType read_type_) + : ISourceStep(DataStream{.header = MergeTreeBaseSelectProcessor::transformHeader( + metadata_snapshot_->getSampleBlockForColumns(required_columns_, storage_.getVirtuals(), storage_.getStorageID()), + prewhere_info_, + virt_column_names_)}) + , storage(storage_) + , metadata_snapshot(std::move(metadata_snapshot_)) + , query_id(std::move(query_id_)) + , required_columns(std::move(required_columns_)) + , parts(std::move(parts_)) + , index_stats(std::move(index_stats_)) + , prewhere_info(std::move(prewhere_info_)) + , virt_column_names(std::move(virt_column_names_)) + , settings(std::move(settings_)) + , num_streams(num_streams_) + , read_type(read_type_) +{ +} + +Pipe ReadFromMergeTree::readFromPool() +{ + Pipes pipes; + size_t sum_marks = 0; + size_t total_rows = 0; + + for (const auto & part : parts) + { + sum_marks += part.getMarksCount(); + total_rows += part.getRowsCount(); + } + + auto pool = std::make_shared( + num_streams, + sum_marks, + settings.min_marks_for_concurrent_read, + std::move(parts), + storage, + metadata_snapshot, + prewhere_info, + true, + required_columns, + settings.backoff_settings, + settings.preferred_block_size_bytes, + false); + + auto * logger = &Poco::Logger::get(storage.getLogName() + " (SelectExecutor)"); + LOG_DEBUG(logger, "Reading approx. 
{} rows with {} streams", total_rows, num_streams); + + for (size_t i = 0; i < num_streams; ++i) + { + auto source = std::make_shared( + i, pool, settings.min_marks_for_concurrent_read, settings.max_block_size, + settings.preferred_block_size_bytes, settings.preferred_max_column_in_block_size_bytes, + storage, metadata_snapshot, settings.use_uncompressed_cache, + prewhere_info, settings.reader_settings, virt_column_names); + + if (i == 0) + { + /// Set the approximate number of rows for the first source only + source->addTotalRowsApprox(total_rows); + } + + pipes.emplace_back(std::move(source)); + } + + return Pipe::unitePipes(std::move(pipes)); +} + +template +ProcessorPtr ReadFromMergeTree::createSource(const RangesInDataPart & part) +{ + return std::make_shared( + storage, metadata_snapshot, part.data_part, settings.max_block_size, settings.preferred_block_size_bytes, + settings.preferred_max_column_in_block_size_bytes, required_columns, part.ranges, settings.use_uncompressed_cache, + prewhere_info, true, settings.reader_settings, virt_column_names, part.part_index_in_query); +} + +Pipe ReadFromMergeTree::readInOrder() +{ + Pipes pipes; + for (const auto & part : parts) + { + auto source = read_type == ReadType::InReverseOrder + ? createSource(part) + : createSource(part); + + pipes.emplace_back(std::move(source)); + } + + auto pipe = Pipe::unitePipes(std::move(pipes)); + + if (read_type == ReadType::InReverseOrder) + { + pipe.addSimpleTransform([&](const Block & header) + { + return std::make_shared(header); + }); + } + + return pipe; +} + +Pipe ReadFromMergeTree::read() +{ + if (read_type == ReadType::Default && num_streams > 1) + return readFromPool(); + + auto pipe = readInOrder(); + + /// Use ConcatProcessor to concat sources together. + /// It is needed to read in parts order (and so in PK order) if single thread is used. 
+ if (read_type == ReadType::Default && pipe.numOutputPorts() > 1)
+ pipe.addTransform(std::make_shared<ConcatProcessor>(pipe.getHeader(), pipe.numOutputPorts()));
+
+ return pipe;
+}
+
+void ReadFromMergeTree::initializePipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &)
+{
+ Pipe pipe = read();
+
+ for (const auto & processor : pipe.getProcessors())
+ processors.emplace_back(processor);
+
+ // Attach QueryIdHolder if needed
+ if (!query_id.empty())
+ pipe.addQueryIdHolder(std::make_shared<QueryIdHolder>(query_id, storage));
+
+ pipeline.init(std::move(pipe));
+}
+
+static const char * indexTypeToString(ReadFromMergeTree::IndexType type)
+{
+ switch (type)
+ {
+ case ReadFromMergeTree::IndexType::None:
+ return "None";
+ case ReadFromMergeTree::IndexType::MinMax:
+ return "MinMax";
+ case ReadFromMergeTree::IndexType::Partition:
+ return "Partition";
+ case ReadFromMergeTree::IndexType::PrimaryKey:
+ return "PrimaryKey";
+ case ReadFromMergeTree::IndexType::Skip:
+ return "Skip";
+ }
+
+ __builtin_unreachable();
+}
+
+static const char * readTypeToString(ReadFromMergeTree::ReadType type)
+{
+ switch (type)
+ {
+ case ReadFromMergeTree::ReadType::Default:
+ return "Default";
+ case ReadFromMergeTree::ReadType::InOrder:
+ return "InOrder";
+ case ReadFromMergeTree::ReadType::InReverseOrder:
+ return "InReverseOrder";
+ }
+
+ __builtin_unreachable();
+}
+
+void ReadFromMergeTree::describeActions(FormatSettings & format_settings) const
+{
+ std::string prefix(format_settings.offset, format_settings.indent_char);
+ format_settings.out << prefix << "ReadType: " << readTypeToString(read_type) << '\n';
+
+ if (index_stats && !index_stats->empty())
+ {
+ format_settings.out << prefix << "Parts: " << index_stats->back().num_parts_after << '\n';
+ format_settings.out << prefix << "Granules: " << index_stats->back().num_granules_after << '\n';
+ }
+}
+
+void ReadFromMergeTree::describeActions(JSONBuilder::JSONMap & map) const
+{
+ map.add("Read Type", readTypeToString(read_type));
+ if (index_stats && !index_stats->empty())
+ {
+ map.add("Parts", index_stats->back().num_parts_after);
+ map.add("Granules", index_stats->back().num_granules_after);
+ }
+}
+
+void ReadFromMergeTree::describeIndexes(FormatSettings & format_settings) const
+{
+ std::string prefix(format_settings.offset, format_settings.indent_char);
+ if (index_stats && !index_stats->empty())
+ {
+ /// Do not print anything if no indexes are applied.
+ if (index_stats->size() == 1 && index_stats->front().type == IndexType::None)
+ return;
+
+ std::string indent(format_settings.indent, format_settings.indent_char);
+ format_settings.out << prefix << "Indexes:\n";
+
+ for (size_t i = 0; i < index_stats->size(); ++i)
+ {
+ const auto & stat = (*index_stats)[i];
+ if (stat.type == IndexType::None)
+ continue;
+
+ format_settings.out << prefix << indent << indexTypeToString(stat.type) << '\n';
+
+ if (!stat.name.empty())
+ format_settings.out << prefix << indent << indent << "Name: " << stat.name << '\n';
+
+ if (!stat.description.empty())
+ format_settings.out << prefix << indent << indent << "Description: " << stat.description << '\n';
+
+ if (!stat.used_keys.empty())
+ {
+ format_settings.out << prefix << indent << indent << "Keys:" << '\n';
+ for (const auto & used_key : stat.used_keys)
+ format_settings.out << prefix << indent << indent << indent << used_key << '\n';
+ }
+
+ if (!stat.condition.empty())
+ format_settings.out << prefix << indent << indent << "Condition: " << stat.condition << '\n';
+
+ format_settings.out << prefix << indent << indent << "Parts: " << stat.num_parts_after;
+ if (i)
+ format_settings.out << '/' << (*index_stats)[i - 1].num_parts_after;
+ format_settings.out << '\n';
+
+ format_settings.out << prefix << indent << indent << "Granules: " << stat.num_granules_after;
+ if (i)
+ format_settings.out << '/' << (*index_stats)[i - 1].num_granules_after;
+ format_settings.out << '\n';
+ }
+ }
+}
+
+void ReadFromMergeTree::describeIndexes(JSONBuilder::JSONMap & map) const
+{
+ if (index_stats && !index_stats->empty())
+ {
+ /// Do not print anything if no indexes are applied.
+ if (index_stats->size() == 1 && index_stats->front().type == IndexType::None)
+ return;
+
+ auto indexes_array = std::make_unique<JSONBuilder::JSONArray>();
+
+ for (size_t i = 0; i < index_stats->size(); ++i)
+ {
+ const auto & stat = (*index_stats)[i];
+ if (stat.type == IndexType::None)
+ continue;
+
+ auto index_map = std::make_unique<JSONBuilder::JSONMap>();
+
+ index_map->add("Type", indexTypeToString(stat.type));
+
+ if (!stat.name.empty())
+ index_map->add("Name", stat.name);
+
+ if (!stat.description.empty())
+ index_map->add("Description", stat.description);
+
+ if (!stat.used_keys.empty())
+ {
+ auto keys_array = std::make_unique<JSONBuilder::JSONArray>();
+
+ for (const auto & used_key : stat.used_keys)
+ keys_array->add(used_key);
+
+ index_map->add("Keys", std::move(keys_array));
+ }
+
+ if (!stat.condition.empty())
+ index_map->add("Condition", stat.condition);
+
+ if (i)
+ index_map->add("Initial Parts", (*index_stats)[i - 1].num_parts_after);
+ index_map->add("Selected Parts", stat.num_parts_after);
+
+ if (i)
+ index_map->add("Initial Granules", (*index_stats)[i - 1].num_granules_after);
+ index_map->add("Selected Granules", stat.num_granules_after);
+
+ indexes_array->add(std::move(index_map));
+ }
+
+ map.add("Indexes", std::move(indexes_array));
+ }
+}
+
+}
diff --git a/src/Processors/QueryPlan/ReadFromMergeTree.h b/src/Processors/QueryPlan/ReadFromMergeTree.h
new file mode 100644
index 00000000000..479610b3edc
--- /dev/null
+++ b/src/Processors/QueryPlan/ReadFromMergeTree.h
@@ -0,0 +1,116 @@
+#pragma once
+#include
+#include
+#include
+#include
+
+namespace DB
+{
+
+/// This step is created to read from MergeTree* table.
+/// For now, it takes a list of parts and creates source from it.
+class ReadFromMergeTree final : public ISourceStep +{ +public: + + enum class IndexType + { + None, + MinMax, + Partition, + PrimaryKey, + Skip, + }; + + /// This is a struct with information about applied indexes. + /// Is used for introspection only, in EXPLAIN query. + struct IndexStat + { + IndexType type; + std::string name; + std::string description; + std::string condition; + std::vector used_keys; + size_t num_parts_after; + size_t num_granules_after; + }; + + using IndexStats = std::vector; + using IndexStatPtr = std::unique_ptr; + + /// Part of settings which are needed for reading. + struct Settings + { + UInt64 max_block_size; + size_t preferred_block_size_bytes; + size_t preferred_max_column_in_block_size_bytes; + size_t min_marks_for_concurrent_read; + bool use_uncompressed_cache; + + MergeTreeReaderSettings reader_settings; + MergeTreeReadPool::BackoffSettings backoff_settings; + }; + + enum class ReadType + { + /// By default, read will use MergeTreeReadPool and return pipe with num_streams outputs. + /// If num_streams == 1, will read without pool, in order specified in parts. + Default, + /// Read in sorting key order. + /// Returned pipe will have the number of ports equals to parts.size(). + /// Parameter num_streams_ is ignored in this case. + /// User should add MergingSorted itself if needed. + InOrder, + /// The same as InOrder, but in reverse order. + /// For every part, read ranges and granules from end to begin. Also add ReverseTransform. + InReverseOrder, + }; + + ReadFromMergeTree( + const MergeTreeData & storage_, + StorageMetadataPtr metadata_snapshot_, + String query_id_, + Names required_columns_, + RangesInDataParts parts_, + IndexStatPtr index_stats_, + PrewhereInfoPtr prewhere_info_, + Names virt_column_names_, + Settings settings_, + size_t num_streams_, + ReadType read_type_ + ); + + String getName() const override { return "ReadFromMergeTree"; } + + void initializePipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) override; + + void describeActions(FormatSettings & format_settings) const override; + void describeIndexes(FormatSettings & format_settings) const override; + + void describeActions(JSONBuilder::JSONMap & map) const override; + void describeIndexes(JSONBuilder::JSONMap & map) const override; + +private: + const MergeTreeData & storage; + StorageMetadataPtr metadata_snapshot; + String query_id; + + Names required_columns; + RangesInDataParts parts; + IndexStatPtr index_stats; + PrewhereInfoPtr prewhere_info; + Names virt_column_names; + Settings settings; + + size_t num_streams; + ReadType read_type; + + Pipe read(); + Pipe readFromPool(); + Pipe readInOrder(); + + template + ProcessorPtr createSource(const RangesInDataPart & part); +}; + +} diff --git a/src/Processors/QueryPlan/ReadFromPreparedSource.cpp b/src/Processors/QueryPlan/ReadFromPreparedSource.cpp index 2d4b759b637..d7711abb3e1 100644 --- a/src/Processors/QueryPlan/ReadFromPreparedSource.cpp +++ b/src/Processors/QueryPlan/ReadFromPreparedSource.cpp @@ -11,7 +11,7 @@ ReadFromPreparedSource::ReadFromPreparedSource(Pipe pipe_, std::shared_ptr(getOutputStream().header))); } diff --git a/src/Processors/QueryPlan/ReadNothingStep.h b/src/Processors/QueryPlan/ReadNothingStep.h index da4740574da..4c5b4adb7ce 100644 --- a/src/Processors/QueryPlan/ReadNothingStep.h +++ b/src/Processors/QueryPlan/ReadNothingStep.h @@ -12,7 +12,7 @@ public: String getName() const override { return "ReadNothing"; } - void initializePipeline(QueryPipeline & pipeline) override; + void 
initializePipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) override; }; } diff --git a/src/Processors/QueryPlan/ReverseRowsStep.cpp b/src/Processors/QueryPlan/ReverseRowsStep.cpp deleted file mode 100644 index 32e16937611..00000000000 --- a/src/Processors/QueryPlan/ReverseRowsStep.cpp +++ /dev/null @@ -1,37 +0,0 @@ -#include -#include -#include - -namespace DB -{ - -static ITransformingStep::Traits getTraits() -{ - return ITransformingStep::Traits - { - { - .preserves_distinct_columns = true, - .returns_single_stream = false, - .preserves_number_of_streams = true, - .preserves_sorting = false, - }, - { - .preserves_number_of_rows = true, - } - }; -} - -ReverseRowsStep::ReverseRowsStep(const DataStream & input_stream_) - : ITransformingStep(input_stream_, input_stream_.header, getTraits()) -{ -} - -void ReverseRowsStep::transformPipeline(QueryPipeline & pipeline) -{ - pipeline.addSimpleTransform([&](const Block & header) - { - return std::make_shared(header); - }); -} - -} diff --git a/src/Processors/QueryPlan/ReverseRowsStep.h b/src/Processors/QueryPlan/ReverseRowsStep.h deleted file mode 100644 index 955d022cde0..00000000000 --- a/src/Processors/QueryPlan/ReverseRowsStep.h +++ /dev/null @@ -1,18 +0,0 @@ -#pragma once -#include - -namespace DB -{ - -/// Reverse rows in chunk. -class ReverseRowsStep : public ITransformingStep -{ -public: - ReverseRowsStep(const DataStream & input_stream_); - - String getName() const override { return "ReverseRows"; } - - void transformPipeline(QueryPipeline & pipeline) override; -}; - -} diff --git a/src/Processors/QueryPlan/RollupStep.cpp b/src/Processors/QueryPlan/RollupStep.cpp index 5f9931030c7..45573b352d6 100644 --- a/src/Processors/QueryPlan/RollupStep.cpp +++ b/src/Processors/QueryPlan/RollupStep.cpp @@ -30,7 +30,7 @@ RollupStep::RollupStep(const DataStream & input_stream_, AggregatingTransformPar output_stream->distinct_columns.insert(params->params.src_header.getByPosition(key).name); } -void RollupStep::transformPipeline(QueryPipeline & pipeline) +void RollupStep::transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) { pipeline.resize(1); diff --git a/src/Processors/QueryPlan/RollupStep.h b/src/Processors/QueryPlan/RollupStep.h index b6d9b2af5bf..21faf539990 100644 --- a/src/Processors/QueryPlan/RollupStep.h +++ b/src/Processors/QueryPlan/RollupStep.h @@ -16,7 +16,7 @@ public: String getName() const override { return "Rollup"; } - void transformPipeline(QueryPipeline & pipeline) override; + void transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) override; private: AggregatingTransformParamsPtr params; diff --git a/src/Processors/QueryPlan/SettingQuotaAndLimitsStep.cpp b/src/Processors/QueryPlan/SettingQuotaAndLimitsStep.cpp index 588dd7599a1..734e6db318d 100644 --- a/src/Processors/QueryPlan/SettingQuotaAndLimitsStep.cpp +++ b/src/Processors/QueryPlan/SettingQuotaAndLimitsStep.cpp @@ -28,7 +28,7 @@ SettingQuotaAndLimitsStep::SettingQuotaAndLimitsStep( StreamLocalLimits & limits_, SizeLimits & leaf_limits_, std::shared_ptr quota_, - std::shared_ptr context_) + ContextPtr context_) : ITransformingStep(input_stream_, input_stream_.header, getTraits()) , context(std::move(context_)) , storage(std::move(storage_)) @@ -39,7 +39,7 @@ SettingQuotaAndLimitsStep::SettingQuotaAndLimitsStep( { } -void SettingQuotaAndLimitsStep::transformPipeline(QueryPipeline & pipeline) +void SettingQuotaAndLimitsStep::transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) 
{ /// Table lock is stored inside pipeline here. pipeline.setLimits(limits); diff --git a/src/Processors/QueryPlan/SettingQuotaAndLimitsStep.h b/src/Processors/QueryPlan/SettingQuotaAndLimitsStep.h index 66e44e18cd4..3c73c208b70 100644 --- a/src/Processors/QueryPlan/SettingQuotaAndLimitsStep.h +++ b/src/Processors/QueryPlan/SettingQuotaAndLimitsStep.h @@ -1,4 +1,6 @@ #pragma once + +#include #include #include #include @@ -26,14 +28,14 @@ public: StreamLocalLimits & limits_, SizeLimits & leaf_limits_, std::shared_ptr quota_, - std::shared_ptr context_); + ContextPtr context_); String getName() const override { return "SettingQuotaAndLimits"; } - void transformPipeline(QueryPipeline & pipeline) override; + void transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) override; private: - std::shared_ptr context; + ContextPtr context; StoragePtr storage; TableLockHolder table_lock; StreamLocalLimits limits; diff --git a/src/Processors/QueryPlan/TotalsHavingStep.cpp b/src/Processors/QueryPlan/TotalsHavingStep.cpp index 9947875e679..4966c04dee7 100644 --- a/src/Processors/QueryPlan/TotalsHavingStep.cpp +++ b/src/Processors/QueryPlan/TotalsHavingStep.cpp @@ -4,6 +4,7 @@ #include #include #include +#include namespace DB { @@ -36,7 +37,7 @@ TotalsHavingStep::TotalsHavingStep( input_stream_, TotalsHavingTransform::transformHeader( input_stream_.header, - (actions_dag_ ? std::make_shared(actions_dag_) : nullptr), + (actions_dag_ ? std::make_shared(actions_dag_, ExpressionActionsSettings{}) : nullptr), final_), getTraits(!filter_column_.empty())) , overflow_row(overflow_row_) @@ -48,10 +49,11 @@ TotalsHavingStep::TotalsHavingStep( { } -void TotalsHavingStep::transformPipeline(QueryPipeline & pipeline) +void TotalsHavingStep::transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings & settings) { auto totals_having = std::make_shared( - pipeline.getHeader(), overflow_row, (actions_dag ? std::make_shared(actions_dag) : nullptr), + pipeline.getHeader(), overflow_row, + (actions_dag ? std::make_shared(actions_dag, settings.getActionsSettings()) : nullptr), filter_column_name, totals_mode, auto_include_threshold, final); pipeline.addTotalsHavingTransform(std::move(totals_having)); @@ -83,7 +85,7 @@ void TotalsHavingStep::describeActions(FormatSettings & settings) const if (actions_dag) { bool first = true; - auto expression = std::make_shared(actions_dag); + auto expression = std::make_shared(actions_dag, ExpressionActionsSettings{}); for (const auto & action : expression->getActions()) { settings.out << prefix << (first ? 
"Actions: " @@ -94,4 +96,15 @@ void TotalsHavingStep::describeActions(FormatSettings & settings) const } } +void TotalsHavingStep::describeActions(JSONBuilder::JSONMap & map) const +{ + map.add("Mode", totalsModeToString(totals_mode, auto_include_threshold)); + if (actions_dag) + { + map.add("Filter column", filter_column_name); + auto expression = std::make_shared(actions_dag, ExpressionActionsSettings{}); + map.add("Expression", expression->toTree()); + } +} + } diff --git a/src/Processors/QueryPlan/TotalsHavingStep.h b/src/Processors/QueryPlan/TotalsHavingStep.h index 57d5cf7aad5..bc053c96970 100644 --- a/src/Processors/QueryPlan/TotalsHavingStep.h +++ b/src/Processors/QueryPlan/TotalsHavingStep.h @@ -24,8 +24,9 @@ public: String getName() const override { return "TotalsHaving"; } - void transformPipeline(QueryPipeline & pipeline) override; + void transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings & settings) override; + void describeActions(JSONBuilder::JSONMap & map) const override; void describeActions(FormatSettings & settings) const override; const ActionsDAGPtr & getActions() const { return actions_dag; } diff --git a/src/Processors/QueryPlan/UnionStep.cpp b/src/Processors/QueryPlan/UnionStep.cpp index 630ff53f47d..7403dd0a12a 100644 --- a/src/Processors/QueryPlan/UnionStep.cpp +++ b/src/Processors/QueryPlan/UnionStep.cpp @@ -1,13 +1,30 @@ #include #include #include -#include +#include namespace DB { -UnionStep::UnionStep(DataStreams input_streams_, Block result_header, size_t max_threads_) - : header(std::move(result_header)) +namespace ErrorCodes +{ + extern const int LOGICAL_ERROR; +} + +static Block checkHeaders(const DataStreams & input_streams) +{ + if (input_streams.empty()) + throw Exception(ErrorCodes::LOGICAL_ERROR, "Cannot unite an empty set of query plan steps"); + + Block res = input_streams.front().header; + for (const auto & stream : input_streams) + assertBlocksHaveEqualStructure(stream.header, res, "UnionStep"); + + return res; +} + +UnionStep::UnionStep(DataStreams input_streams_, size_t max_threads_) + : header(checkHeaders(input_streams_)) , max_threads(max_threads_) { input_streams = std::move(input_streams_); @@ -18,7 +35,7 @@ UnionStep::UnionStep(DataStreams input_streams_, Block result_header, size_t max output_stream = DataStream{.header = header}; } -QueryPipelinePtr UnionStep::updatePipeline(QueryPipelines pipelines) +QueryPipelinePtr UnionStep::updatePipeline(QueryPipelines pipelines, const BuildQueryPipelineSettings &) { auto pipeline = std::make_unique(); QueryPipelineProcessorsCollector collector(*pipeline, this); @@ -30,7 +47,7 @@ QueryPipelinePtr UnionStep::updatePipeline(QueryPipelines pipelines) return pipeline; } - *pipeline = QueryPipeline::unitePipelines(std::move(pipelines), output_stream->header, max_threads); + *pipeline = QueryPipeline::unitePipelines(std::move(pipelines), max_threads); processors = collector.detachProcessors(); return pipeline; diff --git a/src/Processors/QueryPlan/UnionStep.h b/src/Processors/QueryPlan/UnionStep.h index e2e1f2c9efa..81bd033d045 100644 --- a/src/Processors/QueryPlan/UnionStep.h +++ b/src/Processors/QueryPlan/UnionStep.h @@ -9,14 +9,16 @@ class UnionStep : public IQueryPlanStep { public: /// max_threads is used to limit the number of threads for result pipeline. 
- UnionStep(DataStreams input_streams_, Block result_header, size_t max_threads_ = 0); + explicit UnionStep(DataStreams input_streams_, size_t max_threads_ = 0); String getName() const override { return "Union"; } - QueryPipelinePtr updatePipeline(QueryPipelines pipelines) override; + QueryPipelinePtr updatePipeline(QueryPipelines pipelines, const BuildQueryPipelineSettings &) override; void describePipeline(FormatSettings & settings) const override; + size_t getMaxThreads() const { return max_threads; } + private: Block header; size_t max_threads; diff --git a/src/Processors/QueryPlan/WindowStep.cpp b/src/Processors/QueryPlan/WindowStep.cpp index 2b824f91b45..29f2999ec83 100644 --- a/src/Processors/QueryPlan/WindowStep.cpp +++ b/src/Processors/QueryPlan/WindowStep.cpp @@ -5,6 +5,7 @@ #include #include #include +#include namespace DB { @@ -62,8 +63,13 @@ WindowStep::WindowStep(const DataStream & input_stream_, } -void WindowStep::transformPipeline(QueryPipeline & pipeline) +void WindowStep::transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) { + // This resize is needed for cases such as `over ()` when we don't have a + // sort node, and the input might have multiple streams. The sort node would + // have resized it. + pipeline.resize(1); + pipeline.addSimpleTransform([&](const Block & /*header*/) { return std::make_shared(input_header, @@ -111,4 +117,25 @@ void WindowStep::describeActions(FormatSettings & settings) const } } +void WindowStep::describeActions(JSONBuilder::JSONMap & map) const +{ + if (!window_description.partition_by.empty()) + { + auto partion_columns_array = std::make_unique(); + for (const auto & descr : window_description.partition_by) + partion_columns_array->add(descr.column_name); + + map.add("Partition By", std::move(partion_columns_array)); + } + + if (!window_description.order_by.empty()) + map.add("Sort Description", explainSortDescription(window_description.order_by, {})); + + auto functions_array = std::make_unique(); + for (const auto & func : window_functions) + functions_array->add(func.column_name); + + map.add("Functions", std::move(functions_array)); +} + } diff --git a/src/Processors/QueryPlan/WindowStep.h b/src/Processors/QueryPlan/WindowStep.h index ffd5e78df67..b5018b1d5a7 100644 --- a/src/Processors/QueryPlan/WindowStep.h +++ b/src/Processors/QueryPlan/WindowStep.h @@ -20,8 +20,9 @@ public: String getName() const override { return "Window"; } - void transformPipeline(QueryPipeline & pipeline) override; + void transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) override; + void describeActions(JSONBuilder::JSONMap & map) const override; void describeActions(FormatSettings & settings) const override; private: diff --git a/src/Processors/Transforms/AggregatingInOrderTransform.cpp b/src/Processors/Transforms/AggregatingInOrderTransform.cpp index d448d31611d..c19d6bd00b5 100644 --- a/src/Processors/Transforms/AggregatingInOrderTransform.cpp +++ b/src/Processors/Transforms/AggregatingInOrderTransform.cpp @@ -1,6 +1,7 @@ #include #include #include +#include namespace DB { @@ -58,6 +59,7 @@ void AggregatingInOrderTransform::consume(Chunk chunk) LOG_TRACE(log, "Aggregating in order"); is_consume_started = true; } + src_rows += rows; src_bytes += chunk.bytes(); @@ -82,58 +84,55 @@ void AggregatingInOrderTransform::consume(Chunk chunk) res_aggregate_columns.resize(params->params.aggregates_size); for (size_t i = 0; i < params->params.keys_size; ++i) - { res_key_columns[i] = 
res_header.safeGetByPosition(i).type->createColumn(); - } + for (size_t i = 0; i < params->params.aggregates_size; ++i) - { res_aggregate_columns[i] = res_header.safeGetByPosition(i + params->params.keys_size).type->createColumn(); - } + params->aggregator.createStatesAndFillKeyColumnsWithSingleKey(variants, key_columns, key_begin, res_key_columns); + params->aggregator.addArenasToAggregateColumns(variants, res_aggregate_columns); ++cur_block_size; } - ssize_t mid = 0; - ssize_t high = 0; - ssize_t low = -1; + + /// Will split block into segments with the same key while (key_end != rows) { - high = rows; /// Find the first position of new (not current) key in current chunk - while (high - low > 1) - { - mid = (low + high) / 2; - if (!less(res_key_columns, key_columns, cur_block_size - 1, mid, group_by_description)) - low = mid; - else - high = mid; - } - key_end = high; + auto indices = ext::range(key_begin, rows); + auto it = std::upper_bound(indices.begin(), indices.end(), cur_block_size - 1, + [&](size_t lhs_row, size_t rhs_row) + { + return less(res_key_columns, key_columns, lhs_row, rhs_row, group_by_description); + }); + + key_end = (it == indices.end() ? rows : *it); + /// Add data to aggr. state if interval is not empty. Empty when haven't found current key in new block. if (key_begin != key_end) - { params->aggregator.executeOnIntervalWithoutKeyImpl(variants.without_key, key_begin, key_end, aggregate_function_instructions.data(), variants.aggregates_pool); - } - low = key_begin = key_end; /// We finalize last key aggregation state if a new key found. - if (key_begin != rows) + if (key_end != rows) { - params->aggregator.fillAggregateColumnsWithSingleKey(variants, res_aggregate_columns); + params->aggregator.addSingleKeyToAggregateColumns(variants, res_aggregate_columns); + /// If res_block_size is reached we have to stop consuming and generate the block. Save the extra rows into new chunk. if (cur_block_size == res_block_size) { Columns source_columns = chunk.detachColumns(); for (auto & source_column : source_columns) - source_column = source_column->cut(key_begin, rows - key_begin); + source_column = source_column->cut(key_end, rows - key_end); - current_chunk = Chunk(source_columns, rows - key_begin); + current_chunk = Chunk(source_columns, rows - key_end); src_rows -= current_chunk.getNumRows(); block_end_reached = true; need_generate = true; cur_block_size = 0; + variants.without_key = nullptr; + /// Arenas cannot be destroyed here, since later, in FinalizingSimpleTransform /// there will be finalizeChunk(), but even after /// finalizeChunk() we cannot destroy arena, since some memory @@ -155,10 +154,13 @@ void AggregatingInOrderTransform::consume(Chunk chunk) } /// We create a new state for the new key and update res_key_columns - params->aggregator.createStatesAndFillKeyColumnsWithSingleKey(variants, key_columns, key_begin, res_key_columns); + params->aggregator.createStatesAndFillKeyColumnsWithSingleKey(variants, key_columns, key_end, res_key_columns); ++cur_block_size; } + + key_begin = key_end; } + block_end_reached = false; } @@ -212,8 +214,8 @@ IProcessor::Status AggregatingInOrderTransform::prepare() { output.push(std::move(to_push_chunk)); output.finish(); - LOG_TRACE(log, "Aggregated. {} to {} rows (from {})", src_rows, res_rows, - formatReadableSizeWithBinarySuffix(src_bytes)); + LOG_DEBUG(log, "Aggregated. 
{} to {} rows (from {})", + src_rows, res_rows, formatReadableSizeWithBinarySuffix(src_bytes)); return Status::Finished; } if (input.isFinished()) @@ -227,14 +229,18 @@ IProcessor::Status AggregatingInOrderTransform::prepare() input.setNeeded(); return Status::NeedData; } - current_chunk = input.pull(!is_consume_finished); + assert(!is_consume_finished); + current_chunk = input.pull(true /* set_not_needed */); return Status::Ready; } void AggregatingInOrderTransform::generate() { if (cur_block_size && is_consume_finished) - params->aggregator.fillAggregateColumnsWithSingleKey(variants, res_aggregate_columns); + { + params->aggregator.addSingleKeyToAggregateColumns(variants, res_aggregate_columns); + variants.without_key = nullptr; + } Block res = res_header.cloneEmpty(); diff --git a/src/Processors/Transforms/AggregatingTransform.cpp b/src/Processors/Transforms/AggregatingTransform.cpp index c6907202d31..3400d06dae3 100644 --- a/src/Processors/Transforms/AggregatingTransform.cpp +++ b/src/Processors/Transforms/AggregatingTransform.cpp @@ -541,7 +541,7 @@ void AggregatingTransform::initGenerate() double elapsed_seconds = watch.elapsedSeconds(); size_t rows = variants.sizeWithoutOverflowRow(); - LOG_TRACE(log, "Aggregated. {} to {} rows (from {}) in {} sec. ({} rows/sec., {}/sec.)", + LOG_DEBUG(log, "Aggregated. {} to {} rows (from {}) in {} sec. ({} rows/sec., {}/sec.)", src_rows, rows, ReadableSize(src_bytes), elapsed_seconds, src_rows / elapsed_seconds, ReadableSize(src_bytes / elapsed_seconds)); @@ -599,7 +599,7 @@ void AggregatingTransform::initGenerate() pipe = Pipe::unitePipes(std::move(pipes)); } - LOG_TRACE(log, "Will merge {} temporary files of size {} compressed, {} uncompressed.", files.files.size(), ReadableSize(files.sum_size_compressed), ReadableSize(files.sum_size_uncompressed)); + LOG_DEBUG(log, "Will merge {} temporary files of size {} compressed, {} uncompressed.", files.files.size(), ReadableSize(files.sum_size_compressed), ReadableSize(files.sum_size_uncompressed)); addMergingAggregatedMemoryEfficientTransform(pipe, params, temporary_data_merge_threads); diff --git a/src/Processors/Transforms/CreatingSetsTransform.cpp b/src/Processors/Transforms/CreatingSetsTransform.cpp index c5fb4f3a952..86051019235 100644 --- a/src/Processors/Transforms/CreatingSetsTransform.cpp +++ b/src/Processors/Transforms/CreatingSetsTransform.cpp @@ -25,11 +25,11 @@ CreatingSetsTransform::CreatingSetsTransform( Block out_header_, SubqueryForSet subquery_for_set_, SizeLimits network_transfer_limits_, - const Context & context_) + ContextPtr context_) : IAccumulatingTransform(std::move(in_header_), std::move(out_header_)) + , WithContext(context_) , subquery(std::move(subquery_for_set_)) , network_transfer_limits(std::move(network_transfer_limits_)) - , context(context_) { } @@ -45,19 +45,16 @@ void CreatingSetsTransform::startSubquery() { if (subquery.set) LOG_TRACE(log, "Creating set."); - if (subquery.join) - LOG_TRACE(log, "Creating join."); if (subquery.table) LOG_TRACE(log, "Filling temporary table."); if (subquery.table) - table_out = subquery.table->write({}, subquery.table->getInMemoryMetadataPtr(), context); + table_out = subquery.table->write({}, subquery.table->getInMemoryMetadataPtr(), getContext()); done_with_set = !subquery.set; - done_with_join = !subquery.join; done_with_table = !subquery.table; - if (done_with_set && done_with_join && done_with_table) + if (done_with_set /*&& done_with_join*/ && done_with_table) throw Exception("Logical error: nothing to do with subquery", 
ErrorCodes::LOGICAL_ERROR); if (table_out) @@ -72,8 +69,6 @@ void CreatingSetsTransform::finishSubquery() if (subquery.set) LOG_DEBUG(log, "Created Set with {} entries from {} rows in {} sec.", subquery.set->getTotalRowCount(), read_rows, seconds); - if (subquery.join) - LOG_DEBUG(log, "Created Join with {} entries from {} rows in {} sec.", subquery.join->getTotalRowCount(), read_rows, seconds); if (subquery.table) LOG_DEBUG(log, "Created Table with {} rows in {} sec.", read_rows, seconds); } @@ -81,12 +76,6 @@ { LOG_DEBUG(log, "Subquery has empty result."); } - - if (totals) - subquery.setTotals(getInputPort().getHeader().cloneWithColumns(totals.detachColumns())); - else - /// Set empty totals anyway, it is needed for MergeJoin. - subquery.setTotals({}); } void CreatingSetsTransform::init() @@ -111,12 +100,6 @@ void CreatingSetsTransform::consume(Chunk chunk) done_with_set = true; } - if (!done_with_join) - { - if (!subquery.insertJoinedBlock(block)) - done_with_join = true; - } - if (!done_with_table) { block = materializeBlock(block); @@ -130,7 +113,7 @@ done_with_table = true; } - if (done_with_set && done_with_join && done_with_table) + if (done_with_set && done_with_table) finishConsume(); } diff --git a/src/Processors/Transforms/CreatingSetsTransform.h b/src/Processors/Transforms/CreatingSetsTransform.h index 3452de63ea0..a847582a988 100644 --- a/src/Processors/Transforms/CreatingSetsTransform.h +++ b/src/Processors/Transforms/CreatingSetsTransform.h @@ -1,9 +1,13 @@ #pragma once -#include -#include -#include -#include + #include +#include +#include +#include +#include +#include + +#include namespace DB { @@ -16,7 +20,7 @@ using ProgressCallback = std::function; /// Don't return any data. Sets are created when Finish status is returned. /// In general, several work() methods need to be called to finish. /// Independent processors are created for each subquery.
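// NOTE (illustrative sketch, not part of this patch): a few hunks above,
// AggregatingInOrderTransform::consume() replaced a hand-rolled binary search
// with std::upper_bound over a range of row indices to find where the run of
// rows sharing the current key ends. The same technique on a plain sorted
// vector, self-contained and with hypothetical names:
#include <algorithm>
#include <cstddef>
#include <vector>

/// Returns the index one past the last element equal to current_key,
/// assuming keys[begin..] is sorted ascending.
static size_t findKeySegmentEnd(const std::vector<int> & keys, size_t begin, int current_key)
{
    const auto it = std::upper_bound(keys.begin() + begin, keys.end(), current_key);
    return static_cast<size_t>(it - keys.begin());
}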
-class CreatingSetsTransform : public IAccumulatingTransform +class CreatingSetsTransform : public IAccumulatingTransform, WithContext { public: CreatingSetsTransform( @@ -24,7 +28,7 @@ public: Block out_header_, SubqueryForSet subquery_for_set_, SizeLimits network_transfer_limits_, - const Context & context_); + ContextPtr context_); String getName() const override { return "CreatingSetsTransform"; } @@ -40,11 +44,10 @@ private: Stopwatch watch; bool done_with_set = true; - bool done_with_join = true; + //bool done_with_join = true; bool done_with_table = true; SizeLimits network_transfer_limits; - const Context & context; size_t rows_to_transfer = 0; size_t bytes_to_transfer = 0; diff --git a/src/Processors/Transforms/JoiningTransform.cpp b/src/Processors/Transforms/JoiningTransform.cpp index 26630f80b17..31b2da46ab3 100644 --- a/src/Processors/Transforms/JoiningTransform.cpp +++ b/src/Processors/Transforms/JoiningTransform.cpp @@ -1,10 +1,17 @@ #include #include #include +#include +#include namespace DB { +namespace ErrorCodes +{ + extern const int LOGICAL_ERROR; +} + Block JoiningTransform::transformHeader(Block header, const JoinPtr & join) { ExtraBlockPtr tmp; @@ -12,13 +19,128 @@ Block JoiningTransform::transformHeader(Block header, const JoinPtr & join) return header; } -JoiningTransform::JoiningTransform(Block input_header, JoinPtr join_, - bool on_totals_, bool default_totals_) - : ISimpleTransform(input_header, transformHeader(input_header, join_), on_totals_) +JoiningTransform::JoiningTransform( + Block input_header, + JoinPtr join_, + size_t max_block_size_, + bool on_totals_, + bool default_totals_, + FinishCounterPtr finish_counter_) + : IProcessor({input_header}, {transformHeader(input_header, join_)}) , join(std::move(join_)) , on_totals(on_totals_) , default_totals(default_totals_) -{} + , finish_counter(std::move(finish_counter_)) + , max_block_size(max_block_size_) +{ + if (!join->isFilled()) + inputs.emplace_back(Block(), this); +} + +IProcessor::Status JoiningTransform::prepare() +{ + auto & output = outputs.front(); + + /// Check can output. + if (output.isFinished() || stop_reading) + { + output.finish(); + for (auto & input : inputs) + input.close(); + return Status::Finished; + } + + if (!output.canPush()) + { + for (auto & input : inputs) + input.setNotNeeded(); + return Status::PortFull; + } + + /// Output if has data. 
+ if (has_output) + { + output.push(std::move(output_chunk)); + has_output = false; + + return Status::PortFull; + } + + if (inputs.size() > 1) + { + auto & last_in = inputs.back(); + if (!last_in.isFinished()) + { + last_in.setNeeded(); + if (last_in.hasData()) + throw Exception(ErrorCodes::LOGICAL_ERROR, "No data is expected from second JoiningTransform port"); + + return Status::NeedData; + } + } + + if (has_input) + return Status::Ready; + + auto & input = inputs.front(); + if (input.isFinished()) + { + if (process_non_joined) + return Status::Ready; + + output.finish(); + return Status::Finished; + } + + input.setNeeded(); + + if (!input.hasData()) + return Status::NeedData; + + input_chunk = input.pull(true); + has_input = true; + return Status::Ready; +} + +void JoiningTransform::work() +{ + if (has_input) + { + transform(input_chunk); + output_chunk.swap(input_chunk); + has_input = not_processed != nullptr; + has_output = !output_chunk.empty(); + } + else + { + if (!non_joined_stream) + { + if (!finish_counter || !finish_counter->isLast()) + { + process_non_joined = false; + return; + } + + non_joined_stream = join->createStreamWithNonJoinedRows(outputs.front().getHeader(), max_block_size); + if (!non_joined_stream) + { + process_non_joined = false; + return; + } + } + + auto block = non_joined_stream->read(); + if (!block) + { + process_non_joined = false; + return; + } + + auto rows = block.rows(); + output_chunk.setColumns(block.getColumns(), rows); + has_output = true; + } +} void JoiningTransform::transform(Chunk & chunk) { @@ -28,7 +150,7 @@ void JoiningTransform::transform(Chunk & chunk) if (join->alwaysReturnsEmptySet() && !on_totals) { - stopReading(); + stop_reading = true; chunk.clear(); return; } @@ -38,7 +160,11 @@ void JoiningTransform::transform(Chunk & chunk) if (on_totals) { /// We have to make chunk empty before return - block = getInputPort().getHeader().cloneWithColumns(chunk.detachColumns()); + /// In case of using `arrayJoin` we can get more or less rows than one + auto cols = chunk.detachColumns(); + for (auto & col : cols) + col = col->cloneResized(1); + block = inputs.front().getHeader().cloneWithColumns(std::move(cols)); /// Drop totals if both out stream and joined stream doesn't have ones. /// See comment in ExpressionTransform.h @@ -57,29 +183,122 @@ void JoiningTransform::transform(Chunk & chunk) Block JoiningTransform::readExecute(Chunk & chunk) { Block res; + // std::cerr << "=== Chunk rows " << chunk.getNumRows() << " cols " << chunk.getNumColumns() << std::endl; if (!not_processed) { + // std::cerr << "!not_processed " << std::endl; if (chunk.hasColumns()) - res = getInputPort().getHeader().cloneWithColumns(chunk.detachColumns()); + res = inputs.front().getHeader().cloneWithColumns(chunk.detachColumns()); if (res) join->joinBlock(res, not_processed); } else if (not_processed->empty()) /// There's not processed data inside expression. 
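// NOTE (illustrative sketch, not part of this patch): JoiningTransform moved from
// ISimpleTransform to a hand-written IProcessor, so prepare() above now spells out
// the port protocol explicitly. The single-input core of that protocol, distilled
// into a free function over the port types used above:
IProcessor::Status prepareSingleInput(OutputPort & output, InputPort & input)
{
    if (output.isFinished())        /// Downstream cancelled: close the upstream too.
    {
        input.close();
        return IProcessor::Status::Finished;
    }
    if (!output.canPush())          /// Downstream is busy: pause pulling.
    {
        input.setNotNeeded();
        return IProcessor::Status::PortFull;
    }
    if (input.isFinished())
    {
        output.finish();
        return IProcessor::Status::Finished;
    }
    input.setNeeded();              /// Ask for data and wait for the next prepare().
    if (!input.hasData())
        return IProcessor::Status::NeedData;
    return IProcessor::Status::Ready;   /// work() will pull and process the chunk.
}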
{ + // std::cerr << "not_processed->empty() " << std::endl; if (chunk.hasColumns()) - res = getInputPort().getHeader().cloneWithColumns(chunk.detachColumns()); + res = inputs.front().getHeader().cloneWithColumns(chunk.detachColumns()); not_processed.reset(); join->joinBlock(res, not_processed); } else { + // std::cerr << "not not_processed->empty() " << std::endl; res = std::move(not_processed->block); join->joinBlock(res, not_processed); } + + // std::cerr << "Res block rows " << res.rows() << " cols " << res.columns() << std::endl; return res; } +FillingRightJoinSideTransform::FillingRightJoinSideTransform(Block input_header, JoinPtr join_) + : IProcessor({input_header}, {Block()}) + , join(std::move(join_)) +{} + +InputPort * FillingRightJoinSideTransform::addTotalsPort() +{ + if (inputs.size() > 1) + throw Exception(ErrorCodes::LOGICAL_ERROR, "Totals port was already added to FillingRightJoinSideTransform"); + + return &inputs.emplace_back(inputs.front().getHeader(), this); +} + +IProcessor::Status FillingRightJoinSideTransform::prepare() +{ + auto & output = outputs.front(); + + /// Check can output. + if (output.isFinished()) + { + for (auto & input : inputs) + input.close(); + return Status::Finished; + } + + if (!output.canPush()) + { + for (auto & input : inputs) + input.setNotNeeded(); + return Status::PortFull; + } + + auto & input = inputs.front(); + + if (stop_reading) + { + input.close(); + } + else if (!input.isFinished()) + { + input.setNeeded(); + + if (!input.hasData()) + return Status::NeedData; + + chunk = input.pull(true); + return Status::Ready; + } + + if (inputs.size() > 1) + { + auto & totals_input = inputs.back(); + if (!totals_input.isFinished()) + { + totals_input.setNeeded(); + + if (!totals_input.hasData()) + return Status::NeedData; + + chunk = totals_input.pull(true); + for_totals = true; + return Status::Ready; + } + } + else if (!set_totals) + { + chunk.setColumns(inputs.front().getHeader().cloneEmpty().getColumns(), 0); + for_totals = true; + return Status::Ready; + } + + output.finish(); + return Status::Finished; +} + +void FillingRightJoinSideTransform::work() +{ + auto block = inputs.front().getHeader().cloneWithColumns(chunk.detachColumns()); + + if (for_totals) + join->setTotals(block); + else + stop_reading = !join->addJoinedBlock(block); + + set_totals = for_totals; +} + } diff --git a/src/Processors/Transforms/JoiningTransform.h b/src/Processors/Transforms/JoiningTransform.h index 15a203635e2..98038946f3b 100644 --- a/src/Processors/Transforms/JoiningTransform.h +++ b/src/Processors/Transforms/JoiningTransform.h @@ -1,5 +1,5 @@ #pragma once -#include +#include namespace DB @@ -8,21 +8,63 @@ namespace DB class IJoin; using JoinPtr = std::shared_ptr; -class JoiningTransform : public ISimpleTransform +class IBlockInputStream; +using BlockInputStreamPtr = std::shared_ptr; + +/// Join rows to chunk from left table. +/// This transform usually has two input ports and one output. +/// First input is for data from left table. +/// Second input has empty header and is connected with FillingRightJoinSide. +/// We can process left table only when Join is filled. Second input is used to signal that FillingRightJoinSide is finished. +class JoiningTransform : public IProcessor { public: - JoiningTransform(Block input_header, JoinPtr join_, - bool on_totals_ = false, bool default_totals_ = false); + + /// Count streams and check which is last. + /// The last one should process non-joined rows.
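// NOTE (illustrative sketch, not part of this patch): the FinishCounter declared
// next implements the "last finisher" idiom: each of the parallel joining streams
// bumps an atomic counter when its input is exhausted, and only the stream whose
// increment reaches the total proceeds to emit the non-joined rows. A
// self-contained equivalent:
#include <atomic>
#include <cstddef>

class LastFinisher
{
public:
    explicit LastFinisher(size_t total_) : total(total_) {}

    /// True only for the caller whose increment makes the count reach `total`,
    /// assuming each of the `total` participants calls this exactly once.
    bool isLast() { return finished.fetch_add(1) + 1 >= total; }

private:
    const size_t total;
    std::atomic<size_t> finished{0};
};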
+ class FinishCounter + { + public: + explicit FinishCounter(size_t total_) : total(total_) {} + + bool isLast() + { + return finished.fetch_add(1) + 1 >= total; + } + + private: + const size_t total; + std::atomic finished{0}; + }; + + using FinishCounterPtr = std::shared_ptr; + + JoiningTransform( + Block input_header, + JoinPtr join_, + size_t max_block_size_, + bool on_totals_ = false, + bool default_totals_ = false, + FinishCounterPtr finish_counter_ = nullptr); String getName() const override { return "JoiningTransform"; } static Block transformHeader(Block header, const JoinPtr & join); + Status prepare() override; + void work() override; + protected: - void transform(Chunk & chunk) override; - bool needInputData() const override { return !not_processed; } + void transform(Chunk & chunk); private: + Chunk input_chunk; + Chunk output_chunk; + bool has_input = false; + bool has_output = false; + bool stop_reading = false; + bool process_non_joined = true; + JoinPtr join; bool on_totals; /// This flag means that we have manually added totals to our pipeline. @@ -33,7 +75,33 @@ private: ExtraBlockPtr not_processed; + FinishCounterPtr finish_counter; + BlockInputStreamPtr non_joined_stream; + size_t max_block_size; + Block readExecute(Chunk & chunk); }; +/// Fills Join with block from right table. +/// Has single input and single output port. +/// Output port has empty header. It is closed when all data is inserted in join. +class FillingRightJoinSideTransform : public IProcessor +{ +public: + FillingRightJoinSideTransform(Block input_header, JoinPtr join_); + String getName() const override { return "FillingRightJoinSide"; } + + InputPort * addTotalsPort(); + + Status prepare() override; + void work() override; + +private: + JoinPtr join; + Chunk chunk; + bool stop_reading = false; + bool for_totals = false; + bool set_totals = false; +}; + } diff --git a/src/Processors/Transforms/MergingAggregatedTransform.cpp b/src/Processors/Transforms/MergingAggregatedTransform.cpp index 1a04f85fd9c..ddc58d830da 100644 --- a/src/Processors/Transforms/MergingAggregatedTransform.cpp +++ b/src/Processors/Transforms/MergingAggregatedTransform.cpp @@ -52,7 +52,7 @@ Chunk MergingAggregatedTransform::generate() if (!generate_started) { generate_started = true; - LOG_TRACE(log, "Read {} blocks of partially aggregated data, total {} rows.", total_input_blocks, total_input_rows); + LOG_DEBUG(log, "Read {} blocks of partially aggregated data, total {} rows.", total_input_blocks, total_input_rows); /// Exception safety. Make iterator valid in case any method below throws. next_block = blocks.begin(); diff --git a/src/Processors/Transforms/PartialSortingTransform.cpp b/src/Processors/Transforms/PartialSortingTransform.cpp index 2fd0a64ee92..3a75571872f 100644 --- a/src/Processors/Transforms/PartialSortingTransform.cpp +++ b/src/Processors/Transforms/PartialSortingTransform.cpp @@ -10,6 +10,8 @@ PartialSortingTransform::PartialSortingTransform( : ISimpleTransform(header_, header_, false) , description(description_), limit(limit_) { + // Sorting by no columns doesn't make sense. + assert(!description.empty()); } static ColumnRawPtrs extractColumns(const Block & block, const SortDescription & description) @@ -91,6 +93,14 @@ size_t getFilterMask(const ColumnRawPtrs & lhs, const ColumnRawPtrs & rhs, size_ void PartialSortingTransform::transform(Chunk & chunk) { + if (chunk.getNumRows()) + { + // The following code works with Blocks and will lose the number of
We shouldn't get such block, because + // we have to sort by at least one column. + assert(chunk.getNumColumns()); + } + if (read_rows) read_rows->add(chunk.getNumRows()); diff --git a/src/Processors/Transforms/WindowTransform.cpp b/src/Processors/Transforms/WindowTransform.cpp index 0013e0061e2..df92f911325 100644 --- a/src/Processors/Transforms/WindowTransform.cpp +++ b/src/Processors/Transforms/WindowTransform.cpp @@ -2,7 +2,9 @@ #include #include +#include #include +#include #include #include @@ -27,7 +29,8 @@ public: virtual ~IWindowFunction() = default; // Must insert the result for current_row. - virtual void windowInsertResultInto(IColumn & to, const WindowTransform * transform) = 0; + virtual void windowInsertResultInto(const WindowTransform * transform, + size_t function_index) = 0; }; // Compares ORDER BY column values at given rows to find the boundaries of frame: @@ -37,7 +40,7 @@ template static int compareValuesWithOffset(const IColumn * _compared_column, size_t compared_row, const IColumn * _reference_column, size_t reference_row, - uint64_t _offset, + const Field & _offset, bool offset_is_preceding) { // Casting the columns to the known type here makes it faster, probably @@ -46,7 +49,11 @@ static int compareValuesWithOffset(const IColumn * _compared_column, _compared_column); const auto * reference_column = assert_cast( _reference_column); - const auto offset = static_cast(_offset); + // Note that the storage type of offset returned by get<> is different, so + // we need to specify the type explicitly. + const typename ColumnType::ValueType offset + = _offset.get(); + assert(offset >= 0); const auto compared_value_data = compared_column->getDataAt(compared_row); assert(compared_value_data.size == sizeof(typename ColumnType::ValueType)); @@ -59,32 +66,32 @@ static int compareValuesWithOffset(const IColumn * _compared_column, reference_value_data.data); bool is_overflow; - bool overflow_to_negative; if (offset_is_preceding) { is_overflow = __builtin_sub_overflow(reference_value, offset, &reference_value); - overflow_to_negative = offset > 0; } else { is_overflow = __builtin_add_overflow(reference_value, offset, &reference_value); - overflow_to_negative = offset < 0; } // fmt::print(stderr, -// "compared [{}] = {}, ref [{}] = {}, offset {} preceding {} overflow {} to negative {}\n", +// "compared [{}] = {}, old ref {}, shifted ref [{}] = {}, offset {} preceding {} overflow {} to negative {}\n", // compared_row, toString(compared_value), +// // fmt doesn't like char8_t. +// static_cast(unalignedLoad(reference_value_data.data)), // reference_row, toString(reference_value), // toString(offset), offset_is_preceding, -// is_overflow, overflow_to_negative); +// is_overflow, offset_is_preceding); if (is_overflow) { - if (overflow_to_negative) + if (offset_is_preceding) { // Overflow to the negative, [compared] must be greater. + // We know that because offset is >= 0. return 1; } else @@ -101,6 +108,53 @@ static int compareValuesWithOffset(const IColumn * _compared_column, } } +// A specialization of compareValuesWithOffset for floats. +template +static int compareValuesWithOffsetFloat(const IColumn * _compared_column, + size_t compared_row, const IColumn * _reference_column, + size_t reference_row, + const Field & _offset, + bool offset_is_preceding) +{ + // Casting the columns to the known type here makes it faster, probably + // because the getData call can be devirtualized. 
+ const auto * compared_column = assert_cast( + _compared_column); + const auto * reference_column = assert_cast( + _reference_column); + const auto offset = _offset.get(); + assert(offset >= 0); + + const auto compared_value_data = compared_column->getDataAt(compared_row); + assert(compared_value_data.size == sizeof(typename ColumnType::ValueType)); + auto compared_value = unalignedLoad( + compared_value_data.data); + + const auto reference_value_data = reference_column->getDataAt(reference_row); + assert(reference_value_data.size == sizeof(typename ColumnType::ValueType)); + auto reference_value = unalignedLoad( + reference_value_data.data); + + // Floats overflow to Inf and the comparison will work normally, so we don't + // have to do anything. + if (offset_is_preceding) + { + reference_value -= offset; + } + else + { + reference_value += offset; + } + + const auto result = compared_value < reference_value ? -1 + : compared_value == reference_value ? 0 : 1; + +// fmt::print(stderr, "compared {}, offset {}, reference {}, result {}\n", +// compared_value, offset, reference_value, result); + + return result; +} + // Helper macros to dispatch on type of the ORDER BY column #define APPLY_FOR_ONE_TYPE(FUNCTION, TYPE) \ else if (typeid_cast(column)) \ @@ -114,14 +168,20 @@ if (false) /* NOLINT */ \ { \ /* Do nothing, a starter condition. */ \ } \ -APPLY_FOR_ONE_TYPE(FUNCTION, ColumnVector) \ APPLY_FOR_ONE_TYPE(FUNCTION, ColumnVector) \ -APPLY_FOR_ONE_TYPE(FUNCTION, ColumnVector) \ APPLY_FOR_ONE_TYPE(FUNCTION, ColumnVector) \ -APPLY_FOR_ONE_TYPE(FUNCTION, ColumnVector) \ APPLY_FOR_ONE_TYPE(FUNCTION, ColumnVector) \ -APPLY_FOR_ONE_TYPE(FUNCTION, ColumnVector) \ APPLY_FOR_ONE_TYPE(FUNCTION, ColumnVector) \ +\ +APPLY_FOR_ONE_TYPE(FUNCTION, ColumnVector) \ +APPLY_FOR_ONE_TYPE(FUNCTION, ColumnVector) \ +APPLY_FOR_ONE_TYPE(FUNCTION, ColumnVector) \ +APPLY_FOR_ONE_TYPE(FUNCTION, ColumnVector) \ +APPLY_FOR_ONE_TYPE(FUNCTION, ColumnVector) \ +\ +APPLY_FOR_ONE_TYPE(FUNCTION##Float, ColumnVector) \ +APPLY_FOR_ONE_TYPE(FUNCTION##Float, ColumnVector) \ +\ else \ { \ throw Exception(ErrorCodes::NOT_IMPLEMENTED, \ @@ -193,9 +253,43 @@ WindowTransform::WindowTransform(const Block & input_header_, == WindowFrame::BoundaryType::Offset)) { assert(order_by_indices.size() == 1); - const IColumn * column = input_header.getByPosition( - order_by_indices[0]).column.get(); + const auto & entry = input_header.getByPosition(order_by_indices[0]); + const IColumn * column = entry.column.get(); APPLY_FOR_TYPES(compareValuesWithOffset) + + // Convert the offsets to the ORDER BY column type. We can't just check + // that the type matches, because e.g. the int literals are always + // (U)Int64, but the column might be Int8 and so on. 
+ if (window_description.frame.begin_type + == WindowFrame::BoundaryType::Offset) + { + window_description.frame.begin_offset = convertFieldToTypeOrThrow( + window_description.frame.begin_offset, + *entry.type); + + if (applyVisitor(FieldVisitorAccurateLess{}, + window_description.frame.begin_offset, Field(0))) + { + throw Exception(ErrorCodes::BAD_ARGUMENTS, + "Window frame start offset must be nonnegative, {} given", + window_description.frame.begin_offset); + } + } + if (window_description.frame.end_type + == WindowFrame::BoundaryType::Offset) + { + window_description.frame.end_offset = convertFieldToTypeOrThrow( + window_description.frame.end_offset, + *entry.type); + + if (applyVisitor(FieldVisitorAccurateLess{}, + window_description.frame.end_offset, Field(0))) + { + throw Exception(ErrorCodes::BAD_ARGUMENTS, + "Window frame end offset must be nonnegative, {} given", + window_description.frame.end_offset); + } + } } } @@ -340,6 +434,9 @@ auto WindowTransform::moveRowNumberNoCheck(const RowNumber & _x, int offset) con assertValid(x); assert(offset <= 0); + // abs(offset) is less than INT_MAX, as checked in the parser, so + // this negation should always work. + assert(offset >= -INT_MAX); if (x.row >= static_cast(-offset)) { x.row -= -offset; @@ -391,7 +488,7 @@ void WindowTransform::advanceFrameStartRowsOffset() { // Just recalculate it each time by walking blocks. const auto [moved_row, offset_left] = moveRowNumber(current_row, - window_description.frame.begin_offset + window_description.frame.begin_offset.get() * (window_description.frame.begin_preceding ? -1 : 1)); frame_start = moved_row; @@ -638,7 +735,7 @@ void WindowTransform::advanceFrameEndRowsOffset() // Walk the specified offset from the current row. The "+1" is needed // because the frame_end is a past-the-end pointer. const auto [moved_row, offset_left] = moveRowNumber(current_row, - window_description.frame.end_offset + window_description.frame.end_offset.get() * (window_description.frame.end_preceding ? -1 : 1) + 1); @@ -852,14 +949,14 @@ void WindowTransform::writeOutCurrentRow() for (size_t wi = 0; wi < workspaces.size(); ++wi) { auto & ws = workspaces[wi]; - IColumn * result_column = block.output_columns[wi].get(); if (ws.window_function_impl) { - ws.window_function_impl->windowInsertResultInto(*result_column, this); + ws.window_function_impl->windowInsertResultInto(this, wi); } else { + IColumn * result_column = block.output_columns[wi].get(); const auto * a = ws.aggregate_function.get(); auto * buf = ws.aggregate_function_state.data(); // FIXME does it also allocate the result on the arena? @@ -878,15 +975,23 @@ void WindowTransform::appendChunk(Chunk & chunk) // have it if it's end of data, though. if (!input_is_finished) { - assert(chunk.hasRows()); + if (!chunk.hasRows()) + { + // Joins may generate empty input chunks when it's not yet end of + // input. Just ignore them. They probably shouldn't be sending empty + // chunks up the pipeline, but oh well. + return; + } + blocks.push_back({}); auto & block = blocks.back(); + // Use the number of rows from the Chunk, because it is correct even in + // the case where the Chunk has no columns. Not sure if this actually + // happens, because even in the case of `count() over ()` we have a dummy + // input column. + block.rows = chunk.getNumRows(); block.input_columns = chunk.detachColumns(); - // Even in case of `count() over ()` we should have a dummy input column. - // Not sure how reliable this is...
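// NOTE (illustrative sketch, not part of this patch): the validation added above
// reduces to "cast the frame-offset literal to the ORDER BY column type, then
// reject negatives". Distilled into a hypothetical helper (checkFrameOffset,
// raw_offset and order_by_type are not names from the patch):
Field checkFrameOffset(const Field & raw_offset, const IDataType & order_by_type)
{
    /// E.g. the literal 10 arrives as UInt64 but the column may be Int8 or Float64.
    Field offset = convertFieldToTypeOrThrow(raw_offset, order_by_type);
    if (applyVisitor(FieldVisitorAccurateLess{}, offset, Field(0)))
        throw Exception(ErrorCodes::BAD_ARGUMENTS,
            "Window frame offset must be nonnegative, {} given", offset);
    return offset;
}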
- block.rows = block.input_columns[0]->size(); - for (auto & ws : workspaces) { // Aggregate functions can't work with constant columns, so we have to @@ -1109,9 +1214,7 @@ IProcessor::Status WindowTransform::prepare() if (output.canPush()) { // Output the ready block. -// fmt::print(stderr, "output block {}\n", next_output_block_number); const auto i = next_output_block_number - first_block_number; - ++next_output_block_number; auto & block = blocks[i]; auto columns = block.input_columns; for (auto & res : block.output_columns) @@ -1120,6 +1223,12 @@ IProcessor::Status WindowTransform::prepare() } output_data.chunk.setColumns(columns, block.rows); +// fmt::print(stderr, "output block {} as chunk '{}'\n", +// next_output_block_number, +// output_data.chunk.dumpStructure()); + + ++next_output_block_number; + output.pushData(std::move(output_data)); } @@ -1275,8 +1384,13 @@ struct WindowFunctionRank final : public WindowFunction DataTypePtr getReturnType() const override { return std::make_shared(); } - void windowInsertResultInto(IColumn & to, const WindowTransform * transform) override + bool allocatesMemoryInArena() const override { return false; } + + void windowInsertResultInto(const WindowTransform * transform, + size_t function_index) override { + IColumn & to = *transform->blockAt(transform->current_row) + .output_columns[function_index]; assert_cast(to).getData().push_back( transform->peer_group_start_row_number); } @@ -1292,8 +1406,13 @@ struct WindowFunctionDenseRank final : public WindowFunction DataTypePtr getReturnType() const override { return std::make_shared(); } - void windowInsertResultInto(IColumn & to, const WindowTransform * transform) override + bool allocatesMemoryInArena() const override { return false; } + + void windowInsertResultInto(const WindowTransform * transform, + size_t function_index) override { + IColumn & to = *transform->blockAt(transform->current_row) + .output_columns[function_index]; assert_cast(to).getData().push_back( transform->peer_group_number); } @@ -1309,13 +1428,133 @@ struct WindowFunctionRowNumber final : public WindowFunction DataTypePtr getReturnType() const override { return std::make_shared(); } - void windowInsertResultInto(IColumn & to, const WindowTransform * transform) override + bool allocatesMemoryInArena() const override { return false; } + + void windowInsertResultInto(const WindowTransform * transform, + size_t function_index) override { + IColumn & to = *transform->blockAt(transform->current_row) + .output_columns[function_index]; assert_cast(to).getData().push_back( transform->current_row_number); } }; +// ClickHouse-specific variant of lag/lead that respects the window frame. 
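// NOTE (illustrative sketch, not part of this patch): the struct that follows
// reduces lagInFrame/leadInFrame to "walk N rows from the current row, then check
// that the target is still inside [frame_start, frame_end)". The core decision,
// with row positions simplified to plain indices (the real code walks across blocks):
#include <cstdint>
#include <optional>

/// Returns the source row for leadInFrame(x, offset), or for lagInFrame with a
/// negative offset; nullopt means "outside the frame, use the default value".
static std::optional<int64_t> lagLeadSourceRow(int64_t current_row, int64_t signed_offset,
    int64_t frame_start, int64_t frame_end)
{
    const int64_t target = current_row + signed_offset;
    if (target < frame_start || target >= frame_end)
        return std::nullopt;
    return target;
}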
+template +struct WindowFunctionLagLeadInFrame final : public WindowFunction +{ + WindowFunctionLagLeadInFrame(const std::string & name_, + const DataTypes & argument_types_, const Array & parameters_) + : WindowFunction(name_, argument_types_, parameters_) + { + if (!parameters.empty()) + { + throw Exception(ErrorCodes::BAD_ARGUMENTS, + "Function {} cannot be parameterized", name_); + } + + if (argument_types.empty()) + { + throw Exception(ErrorCodes::BAD_ARGUMENTS, + "Function {} takes at least one argument", name_); + } + + if (argument_types.size() == 1) + { + return; + } + + if (!isInt64FieldType(argument_types[1]->getDefault().getType())) + { + throw Exception(ErrorCodes::BAD_ARGUMENTS, + "Offset must be an integer, '{}' given", + argument_types[1]->getName()); + } + + if (argument_types.size() == 2) + { + return; + } + + if (!getLeastSupertype({argument_types[0], argument_types[2]})) + { + throw Exception(ErrorCodes::BAD_ARGUMENTS, + "The default value type '{}' is not convertible to the argument type '{}'", + argument_types[2]->getName(), + argument_types[0]->getName()); + } + + if (argument_types.size() > 3) + { + throw Exception(ErrorCodes::BAD_ARGUMENTS, + "Function '{}' accepts at most 3 arguments, {} given", + name, argument_types.size()); + } + } + + DataTypePtr getReturnType() const override + { return argument_types[0]; } + + bool allocatesMemoryInArena() const override { return false; } + + void windowInsertResultInto(const WindowTransform * transform, + size_t function_index) override + { + const auto & current_block = transform->blockAt(transform->current_row); + IColumn & to = *current_block.output_columns[function_index]; + const auto & workspace = transform->workspaces[function_index]; + + int64_t offset = 1; + if (argument_types.size() > 1) + { + offset = (*current_block.input_columns[ + workspace.argument_column_indices[1]])[ + transform->current_row.row].get(); + if (offset < 0) + { + throw Exception(ErrorCodes::BAD_ARGUMENTS, + "The offset for function {} must be nonnegative, {} given", + getName(), offset); + } + if (offset > INT_MAX) + { + throw Exception(ErrorCodes::BAD_ARGUMENTS, + "The offset for function {} must be less than {}, {} given", + getName(), INT_MAX, offset); + } + } + + const auto [target_row, offset_left] = transform->moveRowNumber( + transform->current_row, offset * (is_lead ? 1 : -1)); + + if (offset_left != 0 + || target_row < transform->frame_start + || transform->frame_end <= target_row) + { + // Offset is outside the frame. + if (argument_types.size() > 2) + { + // Column with default values is specified. + to.insertFrom(*current_block.input_columns[ + workspace.argument_column_indices[2]], + transform->current_row.row); + } + else + { + to.insertDefault(); + } + } + else + { + // Offset is inside the frame. + to.insertFrom(*transform->blockAt(target_row).input_columns[ + workspace.argument_column_indices[0]], + target_row.row); + } + } +}; + void registerWindowFunctions(AggregateFunctionFactory & factory) { // Why didn't I implement lag/lead yet? Because they are a mess. I imagine @@ -1327,9 +1566,10 @@ void registerWindowFunctions(AggregateFunctionFactory & factory) // the whole partition like Postgres does, because using a linear amount // of additional memory is not an option when we have a lot of data. We must // be able to process at least the lag/lead in streaming fashion. 
- // Our best bet is probably rewriting, say `lag(value, offset)` to - `any(value) over (rows between offset preceding and offset preceding)`, - at the query planning stage. + // A partial solution for constant offsets is rewriting, say `lag(value, offset)` + // to `any(value) over (rows between offset preceding and offset preceding)`. + // We also implement non-standard functions `lag/leadInFrame`, which are + // analogous to `lag/lead` but respect the frame. // Functions like cume_dist() do require materializing the entire // partition, but it's probably also simpler to implement them by rewriting // to a (rows between unbounded preceding and unbounded following) frame, @@ -1355,6 +1595,20 @@ void registerWindowFunctions(AggregateFunctionFactory & factory) return std::make_shared(name, argument_types, parameters); }); + + factory.registerFunction("lagInFrame", [](const std::string & name, + const DataTypes & argument_types, const Array & parameters) + { + return std::make_shared>( + name, argument_types, parameters); + }); + + factory.registerFunction("leadInFrame", [](const std::string & name, + const DataTypes & argument_types, const Array & parameters) + { + return std::make_shared>( + name, argument_types, parameters); + }); } } diff --git a/src/Processors/Transforms/WindowTransform.h b/src/Processors/Transforms/WindowTransform.h index 5001b984e9a..882bf429c0a 100644 --- a/src/Processors/Transforms/WindowTransform.h +++ b/src/Processors/Transforms/WindowTransform.h @@ -110,7 +110,9 @@ public: Status prepare() override; void work() override; -private: + /* + * Implementation details. + */ void advancePartitionEnd(); bool arePeers(const RowNumber & x, const RowNumber & y) const; @@ -321,10 +323,7 @@ public: int (* compare_values_with_offset) ( const IColumn * compared_column, size_t compared_row, const IColumn * reference_column, size_t reference_row, - // We can make it a Field later if we need the Decimals. Now we only - // have ints and datetime, and the underlying Field type for them is - // uint64_t anyway.
- uint64_t offset, + const Field & offset, bool offset_is_preceding); }; diff --git a/src/Functions/tests/CMakeLists.txt b/src/Processors/examples/CMakeLists.txt similarity index 100% rename from src/Functions/tests/CMakeLists.txt rename to src/Processors/examples/CMakeLists.txt diff --git a/src/Processors/tests/processors_test_aggregation.cpp b/src/Processors/examples/processors_test_aggregation.cpp similarity index 100% rename from src/Processors/tests/processors_test_aggregation.cpp rename to src/Processors/examples/processors_test_aggregation.cpp diff --git a/src/Processors/tests/processors_test_merge_sorting_transform.cpp b/src/Processors/examples/processors_test_merge_sorting_transform.cpp similarity index 100% rename from src/Processors/tests/processors_test_merge_sorting_transform.cpp rename to src/Processors/examples/processors_test_merge_sorting_transform.cpp diff --git a/src/Processors/ya.make b/src/Processors/ya.make index 5724e9a592f..5ab9c79511f 100644 --- a/src/Processors/ya.make +++ b/src/Processors/ya.make @@ -93,9 +93,9 @@ SRCS( Pipe.cpp Port.cpp QueryPipeline.cpp - QueryPlan/AddingDelayedSourceStep.cpp QueryPlan/AggregatingStep.cpp QueryPlan/ArrayJoinStep.cpp + QueryPlan/BuildQueryPipelineSettings.cpp QueryPlan/CreatingSetsStep.cpp QueryPlan/CubeStep.cpp QueryPlan/DistinctStep.cpp @@ -107,6 +107,7 @@ SRCS( QueryPlan/IQueryPlanStep.cpp QueryPlan/ISourceStep.cpp QueryPlan/ITransformingStep.cpp + QueryPlan/JoinStep.cpp QueryPlan/LimitByStep.cpp QueryPlan/LimitStep.cpp QueryPlan/MergeSortingStep.cpp @@ -124,9 +125,9 @@ SRCS( QueryPlan/PartialSortingStep.cpp QueryPlan/QueryIdHolder.cpp QueryPlan/QueryPlan.cpp + QueryPlan/ReadFromMergeTree.cpp QueryPlan/ReadFromPreparedSource.cpp QueryPlan/ReadNothingStep.cpp - QueryPlan/ReverseRowsStep.cpp QueryPlan/RollupStep.cpp QueryPlan/SettingQuotaAndLimitsStep.cpp QueryPlan/TotalsHavingStep.cpp diff --git a/src/Processors/ya.make.in b/src/Processors/ya.make.in index f33dd041a32..06230b96be8 100644 --- a/src/Processors/ya.make.in +++ b/src/Processors/ya.make.in @@ -10,7 +10,7 @@ PEERDIR( SRCS( - + ) END() diff --git a/src/Server/GRPCServer.cpp b/src/Server/GRPCServer.cpp index ede9bbff063..6f0f2d30123 100644 --- a/src/Server/GRPCServer.cpp +++ b/src/Server/GRPCServer.cpp @@ -521,7 +521,7 @@ namespace Poco::Logger * log = nullptr; std::shared_ptr session; - std::optional query_context; + ContextPtr query_context; std::optional query_scope; String query_text; ASTPtr ast; @@ -651,7 +651,7 @@ namespace } /// Create context. - query_context.emplace(iserver.context()); + query_context = Context::createCopy(iserver.context()); /// Authentication. query_context->setUser(user, password, user_address); @@ -665,11 +665,11 @@ namespace { session = query_context->acquireNamedSession( query_info.session_id(), getSessionTimeout(query_info, iserver.config()), query_info.session_check()); - query_context = session->context; + query_context = Context::createCopy(session->context); query_context->setSessionContext(session->context); } - query_scope.emplace(*query_context); + query_scope.emplace(query_context); /// Set client info. 
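// NOTE (illustrative, not part of this patch): the GRPCServer hunks above show the
// pattern this refactoring applies everywhere: a query no longer copies Context by
// value or holds std::optional<Context>; it forks the server context explicitly
// and mutates only the copy through a shared pointer:
ContextPtr query_context = Context::createCopy(iserver.context());
query_context->setUser(user, password, user_address);  /// Affects only this query's copy.
query_scope.emplace(query_context);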
ClientInfo & client_info = query_context->getClientInfo(); @@ -741,26 +741,26 @@ namespace output_format = query_context->getDefaultFormat(); /// Set callback to create and fill external tables - query_context->setExternalTablesInitializer([this] (Context & context) + query_context->setExternalTablesInitializer([this] (ContextPtr context) { - if (&context != &*query_context) + if (context != query_context) throw Exception("Unexpected context in external tables initializer", ErrorCodes::LOGICAL_ERROR); createExternalTables(); }); /// Set callbacks to execute function input(). - query_context->setInputInitializer([this] (Context & context, const StoragePtr & input_storage) + query_context->setInputInitializer([this] (ContextPtr context, const StoragePtr & input_storage) { - if (&context != &query_context.value()) + if (context != query_context) throw Exception("Unexpected context in Input initializer", ErrorCodes::LOGICAL_ERROR); input_function_is_used = true; initializeBlockInputStream(input_storage->getInMemoryMetadataPtr()->getSampleBlock()); block_input_stream->readPrefix(); }); - query_context->setInputBlocksReaderCallback([this](Context & context) -> Block + query_context->setInputBlocksReaderCallback([this](ContextPtr context) -> Block { - if (&context != &query_context.value()) + if (context != query_context) throw Exception("Unexpected context in InputBlocksReader", ErrorCodes::LOGICAL_ERROR); auto block = block_input_stream->read(); if (!block) @@ -775,7 +775,7 @@ namespace query_end = insert_query->data; } String query(begin, query_end); - io = ::DB::executeQuery(query, *query_context, false, QueryProcessingStage::Complete, true, true); + io = ::DB::executeQuery(query, query_context, false, QueryProcessingStage::Complete, true, true); } void Call::processInput() @@ -783,8 +783,6 @@ namespace if (!io.out) return; - initializeBlockInputStream(io.out->getHeader()); - bool has_data_to_insert = (insert_query && insert_query->data) || !query_info.input_data().empty() || query_info.next_query_info(); if (!has_data_to_insert) @@ -795,6 +793,10 @@ namespace throw Exception("No data to insert", ErrorCodes::NO_DATA_TO_INSERT); } + /// This is significant, because parallel parsing may be used. + /// So we mustn't touch the input stream from other thread. 
+ initializeBlockInputStream(io.out->getHeader()); + block_input_stream->readPrefix(); io.out->writePrefix(); @@ -876,10 +878,10 @@ namespace auto table_id = query_context->resolveStorageID(insert_query->table_id, Context::ResolveOrdinary); if (query_context->getSettingsRef().input_format_defaults_for_omitted_fields && table_id) { - StoragePtr storage = DatabaseCatalog::instance().getTable(table_id, *query_context); + StoragePtr storage = DatabaseCatalog::instance().getTable(table_id, query_context); const auto & columns = storage->getInMemoryMetadataPtr()->getColumns(); if (!columns.empty()) - block_input_stream = std::make_shared(block_input_stream, columns, *query_context); + block_input_stream = std::make_shared(block_input_stream, columns, query_context); } } } @@ -901,7 +903,7 @@ namespace StoragePtr storage; if (auto resolved = query_context->tryResolveStorageID(temporary_id, Context::ResolveExternal)) { - storage = DatabaseCatalog::instance().getTable(resolved, *query_context); + storage = DatabaseCatalog::instance().getTable(resolved, query_context); } else { @@ -916,7 +918,7 @@ namespace column.type = DataTypeFactory::instance().get(name_and_type.type()); columns.emplace_back(std::move(column)); } - auto temporary_table = TemporaryTableHolder(*query_context, ColumnsDescription{columns}, {}); + auto temporary_table = TemporaryTableHolder(query_context, ColumnsDescription{columns}, {}); storage = temporary_table.getTable(); query_context->addExternalTable(temporary_id.table_name, std::move(temporary_table)); } @@ -925,17 +927,17 @@ namespace { /// The data will be written directly to the table. auto metadata_snapshot = storage->getInMemoryMetadataPtr(); - auto out_stream = storage->write(ASTPtr(), metadata_snapshot, *query_context); + auto out_stream = storage->write(ASTPtr(), metadata_snapshot, query_context); ReadBufferFromMemory data(external_table.data().data(), external_table.data().size()); String format = external_table.format(); if (format.empty()) format = "TabSeparated"; - Context * external_table_context = &*query_context; - std::optional temp_context; + ContextPtr external_table_context = query_context; + ContextPtr temp_context; if (!external_table.settings().empty()) { - temp_context = *query_context; - external_table_context = &*temp_context; + temp_context = Context::createCopy(query_context); + external_table_context = temp_context; SettingsChanges settings_changes; for (const auto & [key, value] : external_table.settings()) settings_changes.push_back({key, value}); diff --git a/src/Server/HTTP/HTMLForm.cpp b/src/Server/HTTP/HTMLForm.cpp index ca407858c33..7a87f484b5c 100644 --- a/src/Server/HTTP/HTMLForm.cpp +++ b/src/Server/HTTP/HTMLForm.cpp @@ -71,23 +71,6 @@ HTMLForm::HTMLForm(const Poco::URI & uri) : field_limit(DFL_FIELD_LIMIT), value_ } -void HTMLForm::setEncoding(const std::string & encoding_) -{ - encoding = encoding_; -} - - -void HTMLForm::addPart(const std::string & name, Poco::Net::PartSource * source) -{ - poco_check_ptr(source); - - Part part; - part.name = name; - part.source = std::unique_ptr(source); - parts.push_back(std::move(part)); -} - - void HTMLForm::load(const Poco::Net::HTTPRequest & request, ReadBuffer & requestBody, PartHandler & handler) { clear(); @@ -126,36 +109,12 @@ void HTMLForm::load(const Poco::Net::HTTPRequest & request, ReadBuffer & request } -void HTMLForm::load(const Poco::Net::HTTPRequest & request) -{ - NullPartHandler nah; - EmptyReadBuffer nis; - load(request, nis, nah); -} - - -void HTMLForm::read(ReadBuffer & in, 
void HTMLForm::read(ReadBuffer & in, PartHandler & handler) -{ - if (encoding == ENCODING_URL) - readQuery(in); - else - readMultipart(in, handler); -} - - void HTMLForm::read(ReadBuffer & in) { readQuery(in); } -void HTMLForm::read(const std::string & queryString) -{ - ReadBufferFromString istr(queryString); - readQuery(istr); -} - - void HTMLForm::readQuery(ReadBuffer & in) { size_t fields = 0; @@ -269,22 +228,6 @@ void HTMLForm::readMultipart(ReadBuffer & in_, PartHandler & handler) } -void HTMLForm::setFieldLimit(int limit) -{ - poco_assert(limit >= 0); - - field_limit = limit; -} - - -void HTMLForm::setValueLengthLimit(int limit) -{ - poco_assert(limit >= 0); - - value_length_limit = limit; -} - - HTMLForm::MultipartReadBuffer::MultipartReadBuffer(ReadBuffer & in_, const std::string & boundary_) : ReadBuffer(nullptr, 0), in(in_), boundary("--" + boundary_) { @@ -369,6 +312,11 @@ bool HTMLForm::MultipartReadBuffer::nextImpl() else boundary_hit = startsWith(line, boundary); + if (!line.empty()) + /// If we don't make sure that the memory is contiguous, we may end up with part of the line inside the internal memory + /// and the other part inside the sub-buffer, and then we won't be able to set up our working buffer properly. + in.makeContinuousMemoryFromCheckpointToPos(); + in.rollbackToCheckpoint(true); /// Rolling back to checkpoint may change underlying buffers. diff --git a/src/Server/HTTP/HTMLForm.h b/src/Server/HTTP/HTMLForm.h index 27be712e1d5..8d8fb0d1719 100644 --- a/src/Server/HTTP/HTMLForm.h +++ b/src/Server/HTTP/HTMLForm.h @@ -52,24 +52,6 @@ public: return (it != end()) ? DB::parse<T>(it->second) : default_value; } - template <typename T> - T getParsed(const std::string & key) - { - return DB::parse<T>(get(key)); - } - - /// Sets the encoding used for posting the form. - /// Encoding must be either "application/x-www-form-urlencoded" (which is the default) or "multipart/form-data". - void setEncoding(const std::string & encoding); - - /// Returns the encoding used for posting the form. - const std::string & getEncoding() const { return encoding; } - - /// Adds a part/attachment (file upload) to the form. - /// The form takes ownership of the PartSource and deletes it when it is no longer needed. - /// The part will only be sent if the encoding set for the form is "multipart/form-data" - void addPart(const std::string & name, Poco::Net::PartSource * pSource); - /// Reads the form data from the given HTTP request. /// Uploaded files are passed to the given PartHandler. void load(const Poco::Net::HTTPRequest & request, ReadBuffer & requestBody, PartHandler & handler); @@ -78,41 +60,10 @@ public: /// Uploaded files are silently discarded. void load(const Poco::Net::HTTPRequest & request, ReadBuffer & requestBody); - /// Reads the form data from the given HTTP request. - /// The request must be a GET request and the form data must be in the query string (URL encoded). - /// For POST requests, you must use one of the overloads taking an additional input stream for the request body. - void load(const Poco::Net::HTTPRequest & request); - - /// Reads the form data from the given input stream. - /// The form data read from the stream must be in the encoding specified for the form. - /// Note that read() does not clear the form before reading the new values. - void read(ReadBuffer & in, PartHandler & handler); - /// Reads the URL-encoded form data from the given input stream. /// Note that read() does not clear the form before reading the new values. void read(ReadBuffer & in); - /// Reads the form data from the given HTTP query string.
- /// Note that read() does not clear the form before reading the new values. - void read(const std::string & queryString); - - /// Returns the MIME boundary used for writing multipart form data. - const std::string & getBoundary() const { return boundary; } - - /// Returns the maximum number of header fields allowed. - /// See setFieldLimit() for more information. - int getFieldLimit() const { return field_limit; } - - /// Sets the maximum number of header fields allowed. This limit is used to defend certain kinds of denial-of-service attacks. - /// Specify 0 for unlimited (not recommended). The default limit is 100. - void setFieldLimit(int limit); - - /// Sets the maximum size for form field values stored as strings. - void setValueLengthLimit(int limit); - - /// Returns the maximum size for form field values stored as strings. - int getValueLengthLimit() const { return value_length_limit; } - static const std::string ENCODING_URL; /// "application/x-www-form-urlencoded" static const std::string ENCODING_MULTIPART; /// "multipart/form-data" static const int UNKNOWN_CONTENT_LENGTH; diff --git a/src/Server/HTTP/HTTPServer.cpp b/src/Server/HTTP/HTTPServer.cpp index 3e050080bdd..42e6467d0af 100644 --- a/src/Server/HTTP/HTTPServer.cpp +++ b/src/Server/HTTP/HTTPServer.cpp @@ -6,16 +6,16 @@ namespace DB { HTTPServer::HTTPServer( - const Context & context, + ContextPtr context, HTTPRequestHandlerFactoryPtr factory_, - UInt16 portNumber, + UInt16 port_number, Poco::Net::HTTPServerParams::Ptr params) - : TCPServer(new HTTPServerConnectionFactory(context, params, factory_), portNumber, params), factory(factory_) + : TCPServer(new HTTPServerConnectionFactory(context, params, factory_), port_number, params), factory(factory_) { } HTTPServer::HTTPServer( - const Context & context, + ContextPtr context, HTTPRequestHandlerFactoryPtr factory_, const Poco::Net::ServerSocket & socket, Poco::Net::HTTPServerParams::Ptr params) @@ -24,12 +24,12 @@ HTTPServer::HTTPServer( } HTTPServer::HTTPServer( - const Context & context, + ContextPtr context, HTTPRequestHandlerFactoryPtr factory_, - Poco::ThreadPool & threadPool, + Poco::ThreadPool & thread_pool, const Poco::Net::ServerSocket & socket, Poco::Net::HTTPServerParams::Ptr params) - : TCPServer(new HTTPServerConnectionFactory(context, params, factory_), threadPool, socket, params), factory(factory_) + : TCPServer(new HTTPServerConnectionFactory(context, params, factory_), thread_pool, socket, params), factory(factory_) { } diff --git a/src/Server/HTTP/HTTPServer.h b/src/Server/HTTP/HTTPServer.h index 1ce62c65ca2..d95bdff0baa 100644 --- a/src/Server/HTTP/HTTPServer.h +++ b/src/Server/HTTP/HTTPServer.h @@ -17,27 +17,27 @@ class HTTPServer : public Poco::Net::TCPServer { public: explicit HTTPServer( - const Context & context, + ContextPtr context, HTTPRequestHandlerFactoryPtr factory, - UInt16 portNumber = 80, + UInt16 port_number = 80, Poco::Net::HTTPServerParams::Ptr params = new Poco::Net::HTTPServerParams); HTTPServer( - const Context & context, + ContextPtr context, HTTPRequestHandlerFactoryPtr factory, const Poco::Net::ServerSocket & socket, Poco::Net::HTTPServerParams::Ptr params); HTTPServer( - const Context & context, + ContextPtr context, HTTPRequestHandlerFactoryPtr factory, - Poco::ThreadPool & threadPool, + Poco::ThreadPool & thread_pool, const Poco::Net::ServerSocket & socket, Poco::Net::HTTPServerParams::Ptr params); ~HTTPServer() override; - void stopAll(bool abortCurrent = false); + void stopAll(bool abort_current = false); private: 
HTTPRequestHandlerFactoryPtr factory; diff --git a/src/Server/HTTP/HTTPServerConnection.cpp b/src/Server/HTTP/HTTPServerConnection.cpp index e2ee4c8882b..19985949005 100644 --- a/src/Server/HTTP/HTTPServerConnection.cpp +++ b/src/Server/HTTP/HTTPServerConnection.cpp @@ -6,11 +6,11 @@ namespace DB { HTTPServerConnection::HTTPServerConnection( - const Context & context_, + ContextPtr context_, const Poco::Net::StreamSocket & socket, Poco::Net::HTTPServerParams::Ptr params_, HTTPRequestHandlerFactoryPtr factory_) - : TCPServerConnection(socket), context(context_), params(params_), factory(factory_), stopped(false) + : TCPServerConnection(socket), context(Context::createCopy(context_)), params(params_), factory(factory_), stopped(false) { poco_check_ptr(factory); } @@ -67,15 +67,15 @@ void HTTPServerConnection::run() } } } - catch (Poco::Net::NoMessageException &) + catch (const Poco::Net::NoMessageException &) { break; } - catch (Poco::Net::MessageException &) + catch (const Poco::Net::MessageException &) { sendErrorResponse(session, Poco::Net::HTTPResponse::HTTP_BAD_REQUEST); } - catch (Poco::Exception &) + catch (const Poco::Exception &) { if (session.networkException()) { @@ -98,31 +98,4 @@ void HTTPServerConnection::sendErrorResponse(Poco::Net::HTTPServerSession & sess session.setKeepAlive(false); } -void HTTPServerConnection::onServerStopped(const bool & abortCurrent) -{ - stopped = true; - if (abortCurrent) - { - try - { - socket().shutdown(); - } - catch (...) - { - } - } - else - { - std::unique_lock lock(mutex); - - try - { - socket().shutdown(); - } - catch (...) - { - } - } -} - } diff --git a/src/Server/HTTP/HTTPServerConnection.h b/src/Server/HTTP/HTTPServerConnection.h index 589c33025bf..1c7ae6cd2b7 100644 --- a/src/Server/HTTP/HTTPServerConnection.h +++ b/src/Server/HTTP/HTTPServerConnection.h @@ -14,7 +14,7 @@ class HTTPServerConnection : public Poco::Net::TCPServerConnection { public: HTTPServerConnection( - const Context & context, + ContextPtr context, const Poco::Net::StreamSocket & socket, Poco::Net::HTTPServerParams::Ptr params, HTTPRequestHandlerFactoryPtr factory); @@ -23,10 +23,9 @@ public: protected: static void sendErrorResponse(Poco::Net::HTTPServerSession & session, Poco::Net::HTTPResponse::HTTPStatus status); - void onServerStopped(const bool & abortCurrent); private: - Context context; + ContextPtr context; Poco::Net::HTTPServerParams::Ptr params; HTTPRequestHandlerFactoryPtr factory; bool stopped; diff --git a/src/Server/HTTP/HTTPServerConnectionFactory.cpp b/src/Server/HTTP/HTTPServerConnectionFactory.cpp index 876ccb9096b..0e4fb6cfcec 100644 --- a/src/Server/HTTP/HTTPServerConnectionFactory.cpp +++ b/src/Server/HTTP/HTTPServerConnectionFactory.cpp @@ -5,8 +5,8 @@ namespace DB { HTTPServerConnectionFactory::HTTPServerConnectionFactory( - const Context & context_, Poco::Net::HTTPServerParams::Ptr params_, HTTPRequestHandlerFactoryPtr factory_) - : context(context_), params(params_), factory(factory_) + ContextPtr context_, Poco::Net::HTTPServerParams::Ptr params_, HTTPRequestHandlerFactoryPtr factory_) + : context(Context::createCopy(context_)), params(params_), factory(factory_) { poco_check_ptr(factory); } diff --git a/src/Server/HTTP/HTTPServerConnectionFactory.h b/src/Server/HTTP/HTTPServerConnectionFactory.h index 4f8ca43cbfb..3f11eca0f69 100644 --- a/src/Server/HTTP/HTTPServerConnectionFactory.h +++ b/src/Server/HTTP/HTTPServerConnectionFactory.h @@ -12,12 +12,12 @@ namespace DB class HTTPServerConnectionFactory : public 
Poco::Net::TCPServerConnectionFactory { public: - HTTPServerConnectionFactory(const Context & context, Poco::Net::HTTPServerParams::Ptr params, HTTPRequestHandlerFactoryPtr factory); + HTTPServerConnectionFactory(ContextPtr context, Poco::Net::HTTPServerParams::Ptr params, HTTPRequestHandlerFactoryPtr factory); Poco::Net::TCPServerConnection * createConnection(const Poco::Net::StreamSocket & socket) override; private: - Context context; + ContextPtr context; Poco::Net::HTTPServerParams::Ptr params; HTTPRequestHandlerFactoryPtr factory; }; diff --git a/src/Server/HTTP/HTTPServerRequest.cpp b/src/Server/HTTP/HTTPServerRequest.cpp index bdba6a51d91..69dc8d4dbda 100644 --- a/src/Server/HTTP/HTTPServerRequest.cpp +++ b/src/Server/HTTP/HTTPServerRequest.cpp @@ -15,8 +15,8 @@ namespace DB { - -HTTPServerRequest::HTTPServerRequest(const Context & context, HTTPServerResponse & response, Poco::Net::HTTPServerSession & session) +HTTPServerRequest::HTTPServerRequest(ContextPtr context, HTTPServerResponse & response, Poco::Net::HTTPServerSession & session) + : max_uri_size(context->getSettingsRef().http_max_uri_size) { response.attachRequest(this); @@ -24,9 +24,8 @@ HTTPServerRequest::HTTPServerRequest(const Context & context, HTTPServerResponse client_address = session.clientAddress(); server_address = session.serverAddress(); - auto receive_timeout = context.getSettingsRef().http_receive_timeout; - auto send_timeout = context.getSettingsRef().http_send_timeout; - auto max_query_size = context.getSettingsRef().max_query_size; + auto receive_timeout = context->getSettingsRef().http_receive_timeout; + auto send_timeout = context->getSettingsRef().http_send_timeout; session.socket().setReceiveTimeout(receive_timeout); session.socket().setSendTimeout(send_timeout); @@ -37,7 +36,7 @@ HTTPServerRequest::HTTPServerRequest(const Context & context, HTTPServerResponse readRequest(*in); /// Try parse according to RFC7230 if (getChunkedTransferEncoding()) - stream = std::make_unique(std::move(in), max_query_size); + stream = std::make_unique(std::move(in)); else if (hasContentLength()) stream = std::make_unique(std::move(in), getContentLength(), false); else if (getMethod() != HTTPRequest::HTTP_GET && getMethod() != HTTPRequest::HTTP_HEAD && getMethod() != HTTPRequest::HTTP_DELETE) @@ -93,10 +92,10 @@ void HTTPServerRequest::readRequest(ReadBuffer & in) skipWhitespaceIfAny(in); - while (in.read(ch) && !Poco::Ascii::isSpace(ch) && uri.size() <= MAX_URI_LENGTH) + while (in.read(ch) && !Poco::Ascii::isSpace(ch) && uri.size() <= max_uri_size) uri += ch; - if (uri.size() > MAX_URI_LENGTH) + if (uri.size() > max_uri_size) throw Poco::Net::MessageException("HTTP request URI invalid or too long"); skipWhitespaceIfAny(in); diff --git a/src/Server/HTTP/HTTPServerRequest.h b/src/Server/HTTP/HTTPServerRequest.h index 7fd54850212..a560f907cf0 100644 --- a/src/Server/HTTP/HTTPServerRequest.h +++ b/src/Server/HTTP/HTTPServerRequest.h @@ -1,5 +1,6 @@ #pragma once +#include #include #include @@ -8,14 +9,13 @@ namespace DB { -class Context; class HTTPServerResponse; class ReadBufferFromPocoSocket; class HTTPServerRequest : public HTTPRequest { public: - HTTPServerRequest(const Context & context, HTTPServerResponse & response, Poco::Net::HTTPServerSession & session); + HTTPServerRequest(ContextPtr context, HTTPServerResponse & response, Poco::Net::HTTPServerSession & session); /// FIXME: it's a little bit inconvenient interface. 
The rationale is that all other ReadBuffer's wrap each other /// via unique_ptr - but we can't inherit HTTPServerRequest from ReadBuffer and pass it around, @@ -43,11 +43,12 @@ private: MAX_NAME_LENGTH = 256, MAX_VALUE_LENGTH = 8192, MAX_METHOD_LENGTH = 32, - MAX_URI_LENGTH = 16384, MAX_VERSION_LENGTH = 8, MAX_FIELDS_NUMBER = 100, }; + const size_t max_uri_size; + std::unique_ptr stream; Poco::Net::SocketImpl * socket; Poco::Net::SocketAddress client_address; diff --git a/src/Server/HTTP/HTTPServerResponse.cpp b/src/Server/HTTP/HTTPServerResponse.cpp index e3d52fffa80..db5cfb132e3 100644 --- a/src/Server/HTTP/HTTPServerResponse.cpp +++ b/src/Server/HTTP/HTTPServerResponse.cpp @@ -94,32 +94,6 @@ std::pair, std::shared_ptr> HTTPServ return std::make_pair(header_stream, stream); } -void HTTPServerResponse::sendFile(const std::string & path, const std::string & mediaType) -{ - poco_assert(!stream); - - Poco::File f(path); - Poco::Timestamp date_time = f.getLastModified(); - Poco::File::FileSize length = f.getSize(); - set("Last-Modified", Poco::DateTimeFormatter::format(date_time, Poco::DateTimeFormat::HTTP_FORMAT)); - setContentLength64(length); - setContentType(mediaType); - setChunkedTransferEncoding(false); - - Poco::FileInputStream istr(path); - if (istr.good()) - { - stream = std::make_shared(session); - write(*stream); - if (request && request->getMethod() != HTTPRequest::HTTP_HEAD) - { - Poco::StreamCopier::copyStream(istr, *stream); - } - } - else - throw Poco::OpenFileException(path); -} - void HTTPServerResponse::sendBuffer(const void * buffer, std::size_t length) { poco_assert(!stream); @@ -135,20 +109,6 @@ void HTTPServerResponse::sendBuffer(const void * buffer, std::size_t length) } } -void HTTPServerResponse::redirect(const std::string & uri, HTTPStatus status) -{ - poco_assert(!stream); - - setContentLength(0); - setChunkedTransferEncoding(false); - - setStatusAndReason(status); - set("Location", uri); - - stream = std::make_shared(session); - write(*stream); -} - void HTTPServerResponse::requireAuthentication(const std::string & realm) { poco_assert(!stream); diff --git a/src/Server/HTTP/HTTPServerResponse.h b/src/Server/HTTP/HTTPServerResponse.h index 82221ce3a83..f5b7a70dc79 100644 --- a/src/Server/HTTP/HTTPServerResponse.h +++ b/src/Server/HTTP/HTTPServerResponse.h @@ -36,17 +36,6 @@ public: /// or redirect() has been called. std::pair, std::shared_ptr> beginSend(); /// TODO: use some WriteBuffer implementation here. - /// Sends the response header to the client, followed - /// by the content of the given file. - /// - /// Must not be called after send(), sendBuffer() - /// or redirect() has been called. - /// - /// Throws a FileNotFoundException if the file - /// cannot be found, or an OpenFileException if - /// the file cannot be opened. - void sendFile(const std::string & path, const std::string & mediaType); - /// Sends the response header to the client, followed /// by the contents of the given buffer. /// @@ -61,16 +50,6 @@ public: /// or redirect() has been called. void sendBuffer(const void * pBuffer, std::size_t length); /// FIXME: do we need this one? - /// Sets the status code, which must be one of - /// HTTP_MOVED_PERMANENTLY (301), HTTP_FOUND (302), - /// or HTTP_SEE_OTHER (303), - /// and sets the "Location" header field - /// to the given URI, which according to - /// the HTTP specification, must be absolute. - /// - /// Must not be called after send() has been called. 
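The `readRequest` change above replaces the hard-coded `MAX_URI_LENGTH` with the `http_max_uri_size` setting read from the context. A small sketch of the same bounded-read pattern over a plain `std::istream`; `readUri` and its limit handling are illustrative stand-ins for the `ReadBuffer`-based parser:

```cpp
#include <cctype>
#include <sstream>
#include <stdexcept>
#include <string>

/// Read the request-target until whitespace, refusing to grow past the limit.
/// Mirrors the diff: the loop allows one character past the limit, then the
/// explicit check rejects the request.
std::string readUri(std::istream & in, size_t max_uri_size)
{
    std::string uri;
    char ch;
    while (in.get(ch) && !std::isspace(static_cast<unsigned char>(ch)) && uri.size() <= max_uri_size)
        uri += ch;
    if (uri.size() > max_uri_size)
        throw std::runtime_error("HTTP request URI invalid or too long");
    return uri;
}

int main()
{
    std::istringstream request("/ping HTTP/1.1");
    return readUri(request, 16384) == "/ping" ? 0 : 1;
}
```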
- void redirect(const std::string & uri, Poco::Net::HTTPResponse::HTTPStatus status = Poco::Net::HTTPResponse::HTTP_FOUND); - void requireAuthentication(const std::string & realm); /// Sets the status code to 401 (Unauthorized) /// and sets the "WWW-Authenticate" header field @@ -83,7 +62,7 @@ public: private: Poco::Net::HTTPServerSession & session; - HTTPServerRequest * request; + HTTPServerRequest * request = nullptr; std::shared_ptr<std::ostream> stream; std::shared_ptr<std::ostream> header_stream; }; diff --git a/src/Server/HTTP/README.md b/src/Server/HTTP/README.md new file mode 100644 index 00000000000..71730962780 --- /dev/null +++ b/src/Server/HTTP/README.md @@ -0,0 +1,3 @@ +# Notice + +The source code in this folder is based on some files from the POCO project, which are vendored in this repository under `contrib/poco/Net/src`. diff --git a/src/Server/HTTP/ReadHeaders.cpp b/src/Server/HTTP/ReadHeaders.cpp index 77ec48c11b1..2fc2de8321a 100644 --- a/src/Server/HTTP/ReadHeaders.cpp +++ b/src/Server/HTTP/ReadHeaders.cpp @@ -51,7 +51,7 @@ void readHeaders( if (name.size() > max_name_length) throw Poco::Net::MessageException("Field name is too long"); if (ch != ':') - throw Poco::Net::MessageException("Field name is invalid or no colon found"); + throw Poco::Net::MessageException(fmt::format("Field name is invalid or no colon found: \"{}\"", name)); } in.ignore(); diff --git a/src/Server/HTTP/WriteBufferFromHTTPServerResponse.cpp b/src/Server/HTTP/WriteBufferFromHTTPServerResponse.cpp index 355af038da9..a4fe3649e6f 100644 --- a/src/Server/HTTP/WriteBufferFromHTTPServerResponse.cpp +++ b/src/Server/HTTP/WriteBufferFromHTTPServerResponse.cpp @@ -196,7 +196,7 @@ void WriteBufferFromHTTPServerResponse::finalize() WriteBufferFromHTTPServerResponse::~WriteBufferFromHTTPServerResponse() { /// FIXME move final flush into the caller - MemoryTracker::LockExceptionInThread lock; + MemoryTracker::LockExceptionInThread lock(VariableContext::Global); finalize(); } diff --git a/src/Server/HTTPHandler.cpp b/src/Server/HTTPHandler.cpp index 6b4981beae0..8aed5d20f74 100644 --- a/src/Server/HTTPHandler.cpp +++ b/src/Server/HTTPHandler.cpp @@ -277,7 +277,7 @@ HTTPHandler::~HTTPHandler() bool HTTPHandler::authenticateUser( - Context & context, + ContextPtr context, HTTPServerRequest & request, HTMLForm & params, HTTPServerResponse & response) @@ -381,7 +381,7 @@ bool HTTPHandler::authenticateUser( /// Set client info. It will be used for quota accounting parameters in 'setUser' method. - ClientInfo & client_info = context.getClientInfo(); + ClientInfo & client_info = context->getClientInfo(); client_info.query_kind = ClientInfo::QueryKind::INITIAL_QUERY; client_info.interface = ClientInfo::Interface::HTTP; @@ -398,7 +398,7 @@ bool HTTPHandler::authenticateUser( try { - context.setUser(*request_credentials, request.clientAddress()); + context->setUser(*request_credentials, request.clientAddress()); } catch (const Authentication::Require & required_credentials) { @@ -430,7 +430,7 @@ bool HTTPHandler::authenticateUser( request_credentials.reset(); if (!quota_key.empty()) - context.setQuotaKey(quota_key); + context->setQuotaKey(quota_key); /// Query sent through HTTP interface is initial.
client_info.initial_user = client_info.current_user; @@ -441,7 +441,7 @@ bool HTTPHandler::authenticateUser( void HTTPHandler::processQuery( - Context & context, + ContextPtr context, HTTPServerRequest & request, HTMLForm & params, HTTPServerResponse & response, @@ -470,10 +470,10 @@ void HTTPHandler::processQuery( session_timeout = parseSessionTimeout(config, params); std::string session_check = params.get("session_check", ""); - session = context.acquireNamedSession(session_id, session_timeout, session_check == "1"); + session = context->acquireNamedSession(session_id, session_timeout, session_check == "1"); - context = session->context; - context.setSessionContext(session->context); + context->copyFrom(session->context); /// FIXME: maybe move this part to HandleRequest(), copyFrom() is used only here. + context->setSessionContext(session->context); } SCOPE_EXIT({ @@ -489,7 +489,7 @@ void HTTPHandler::processQuery( { std::string opentelemetry_traceparent = request.get("traceparent"); std::string error; - if (!context.getClientInfo().client_trace_context.parseTraceparentHeader( + if (!context->getClientInfo().client_trace_context.parseTraceparentHeader( opentelemetry_traceparent, error)) { throw Exception(ErrorCodes::BAD_REQUEST_PARAMETER, @@ -497,14 +497,14 @@ void HTTPHandler::processQuery( opentelemetry_traceparent, error); } - context.getClientInfo().client_trace_context.tracestate = request.get("tracestate", ""); + context->getClientInfo().client_trace_context.tracestate = request.get("tracestate", ""); } #endif // Set the query id supplied by the user, if any, and also update the OpenTelemetry fields. - context.setCurrentQueryId(params.get("query_id", request.get("X-ClickHouse-Query-Id", ""))); + context->setCurrentQueryId(params.get("query_id", request.get("X-ClickHouse-Query-Id", ""))); - ClientInfo & client_info = context.getClientInfo(); + ClientInfo & client_info = context->getClientInfo(); client_info.initial_query_id = client_info.current_query_id; /// The client can pass a HTTP header indicating supported compression method (gzip or deflate). @@ -570,7 +570,7 @@ void HTTPHandler::processQuery( if (buffer_until_eof) { - const std::string tmp_path(context.getTemporaryVolume()->getDisk()->getPath()); + const std::string tmp_path(context->getTemporaryVolume()->getDisk()->getPath()); const std::string tmp_path_template(tmp_path + "http_buffers/"); auto create_tmp_disk_buffer = [tmp_path_template] (const WriteBufferPtr &) @@ -658,13 +658,13 @@ void HTTPHandler::processQuery( /// In theory if initially readonly = 0, the client can change any setting and then set readonly /// to some other value. - const auto & settings = context.getSettingsRef(); + const auto & settings = context->getSettingsRef(); /// Only readonly queries are allowed for HTTP GET requests. 
if (request.getMethod() == HTTPServerRequest::HTTP_GET) { if (settings.readonly == 0) - context.setSetting("readonly", 2); + context->setSetting("readonly", 2); } bool has_external_data = startsWith(request.getContentType(), "multipart/form-data"); @@ -707,14 +707,14 @@ void HTTPHandler::processQuery( } if (!database.empty()) - context.setCurrentDatabase(database); + context->setCurrentDatabase(database); if (!default_format.empty()) - context.setDefaultFormat(default_format); + context->setDefaultFormat(default_format); /// For external data we also want settings - context.checkSettingsConstraints(settings_changes); - context.applySettingsChanges(settings_changes); + context->checkSettingsConstraints(settings_changes); + context->applySettingsChanges(settings_changes); const auto & query = getQuery(request, params, context); std::unique_ptr in_param = std::make_unique(query); @@ -737,11 +737,11 @@ void HTTPHandler::processQuery( /// Origin header. used_output.out->addHeaderCORS(settings.add_http_cors_header && !request.get("Origin", "").empty()); - auto append_callback = [&context] (ProgressCallback callback) + auto append_callback = [context] (ProgressCallback callback) { - auto prev = context.getProgressCallback(); + auto prev = context->getProgressCallback(); - context.setProgressCallback([prev, callback] (const Progress & progress) + context->setProgressCallback([prev, callback] (const Progress & progress) { if (prev) prev(progress); @@ -756,12 +756,12 @@ void HTTPHandler::processQuery( if (settings.readonly > 0 && settings.cancel_http_readonly_queries_on_client_close) { - append_callback([&context, &request](const Progress &) + append_callback([context, &request](const Progress &) { /// Assume that at the point this method is called no one is reading data from the socket any more: /// should be true for read-only queries. if (!request.checkPeerConnected()) - context.killCurrentQuery(); + context->killCurrentQuery(); }); } @@ -877,7 +877,7 @@ void HTTPHandler::handleRequest(HTTPServerRequest & request, HTTPServerResponse if (!request_context) { // Context should be initialized before anything, for correct memory accounting. - request_context = std::make_unique(server.context()); + request_context = Context::createCopy(server.context()); request_credentials.reset(); } @@ -899,6 +899,7 @@ void HTTPHandler::handleRequest(HTTPServerRequest & request, HTTPServerResponse HTMLForm params(request); with_stacktrace = params.getParsed("stacktrace", false); + /// FIXME: maybe this check is already unnecessary. /// Workaround. Poco does not detect 411 Length Required case. if (request.getMethod() == HTTPRequest::HTTP_POST && !request.getChunkedTransferEncoding() && !request.hasContentLength()) { @@ -907,7 +908,7 @@ void HTTPHandler::handleRequest(HTTPServerRequest & request, HTTPServerResponse ErrorCodes::HTTP_LENGTH_REQUIRED); } - processQuery(*request_context, request, params, response, used_output, query_scope); + processQuery(request_context, request, params, response, used_output, query_scope); LOG_DEBUG(log, (request_credentials ? "Authentication in progress..." : "Done processing query")); } catch (...) 
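The `append_callback` lambda in the hunk above chains a new progress observer onto whatever callback is already installed, so the client-disconnect watchdog can coexist with ordinary progress reporting. A compact sketch of that chaining pattern; `Progress`, `QueryContext`, and the member names are simplified stand-ins:

```cpp
#include <functional>
#include <iostream>

struct Progress { size_t read_rows = 0; };
using ProgressCallback = std::function<void(const Progress &)>;

class QueryContext
{
public:
    void setProgressCallback(ProgressCallback cb) { callback = std::move(cb); }
    ProgressCallback getProgressCallback() const { return callback; }

    /// Wrap the previously installed callback so both the old and the new
    /// observer fire on every progress event.
    void appendCallback(ProgressCallback extra)
    {
        auto prev = getProgressCallback();
        setProgressCallback([prev, extra](const Progress & p)
        {
            if (prev)
                prev(p);
            extra(p);
        });
    }

private:
    ProgressCallback callback;
};

int main()
{
    QueryContext ctx;
    ctx.appendCallback([](const Progress & p) { std::cout << "rows: " << p.read_rows << '\n'; });
    ctx.appendCallback([](const Progress &) { /* e.g. kill the query if the peer disconnected */ });
    ctx.getProgressCallback()(Progress{42});
}
```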
@@ -936,7 +937,7 @@ DynamicQueryHandler::DynamicQueryHandler(IServer & server_, const std::string & { } -bool DynamicQueryHandler::customizeQueryParam(Context & context, const std::string & key, const std::string & value) +bool DynamicQueryHandler::customizeQueryParam(ContextPtr context, const std::string & key, const std::string & value) { if (key == param_name) return true; /// do nothing @@ -945,14 +946,14 @@ bool DynamicQueryHandler::customizeQueryParam(Context & context, const std::stri { /// Save name and values of substitution in dictionary. const String parameter_name = key.substr(strlen("param_")); - context.setQueryParameter(parameter_name, value); + context->setQueryParameter(parameter_name, value); return true; } return false; } -std::string DynamicQueryHandler::getQuery(HTTPServerRequest & request, HTMLForm & params, Context & context) +std::string DynamicQueryHandler::getQuery(HTTPServerRequest & request, HTMLForm & params, ContextPtr context) { if (likely(!startsWith(request.getContentType(), "multipart/form-data"))) { @@ -978,25 +979,31 @@ std::string DynamicQueryHandler::getQuery(HTTPServerRequest & request, HTMLForm } PredefinedQueryHandler::PredefinedQueryHandler( - IServer & server_, const NameSet & receive_params_, const std::string & predefined_query_ - , const CompiledRegexPtr & url_regex_, const std::unordered_map & header_name_with_regex_) - : HTTPHandler(server_, "PredefinedQueryHandler"), receive_params(receive_params_), predefined_query(predefined_query_) - , url_regex(url_regex_), header_name_with_capture_regex(header_name_with_regex_) + IServer & server_, + const NameSet & receive_params_, + const std::string & predefined_query_, + const CompiledRegexPtr & url_regex_, + const std::unordered_map & header_name_with_regex_) + : HTTPHandler(server_, "PredefinedQueryHandler") + , receive_params(receive_params_) + , predefined_query(predefined_query_) + , url_regex(url_regex_) + , header_name_with_capture_regex(header_name_with_regex_) { } -bool PredefinedQueryHandler::customizeQueryParam(Context & context, const std::string & key, const std::string & value) +bool PredefinedQueryHandler::customizeQueryParam(ContextPtr context, const std::string & key, const std::string & value) { if (receive_params.count(key)) { - context.setQueryParameter(key, value); + context->setQueryParameter(key, value); return true; } return false; } -void PredefinedQueryHandler::customizeContext(HTTPServerRequest & request, DB::Context & context) +void PredefinedQueryHandler::customizeContext(HTTPServerRequest & request, ContextPtr context) { /// If in the configuration file, the handler's header is regex and contains named capture group /// We will extract regex named capture groups as query parameters @@ -1014,7 +1021,7 @@ void PredefinedQueryHandler::customizeContext(HTTPServerRequest & request, DB::C const auto & capturing_value = matches[capturing_index]; if (capturing_value.data()) - context.setQueryParameter(capturing_name, String(capturing_value.data(), capturing_value.size())); + context->setQueryParameter(capturing_name, String(capturing_value.data(), capturing_value.size())); } } }; @@ -1032,7 +1039,7 @@ void PredefinedQueryHandler::customizeContext(HTTPServerRequest & request, DB::C } } -std::string PredefinedQueryHandler::getQuery(HTTPServerRequest & request, HTMLForm & params, Context & context) +std::string PredefinedQueryHandler::getQuery(HTTPServerRequest & request, HTMLForm & params, ContextPtr context) { if (unlikely(startsWith(request.getContentType(), 
"multipart/form-data"))) { diff --git a/src/Server/HTTPHandler.h b/src/Server/HTTPHandler.h index 0f1d75664bd..4715949cb87 100644 --- a/src/Server/HTTPHandler.h +++ b/src/Server/HTTPHandler.h @@ -18,7 +18,6 @@ namespace Poco { class Logger; } namespace DB { -class Context; class Credentials; class IServer; class WriteBufferFromHTTPServerResponse; @@ -34,11 +33,11 @@ public: void handleRequest(HTTPServerRequest & request, HTTPServerResponse & response) override; /// This method is called right before the query execution. - virtual void customizeContext(HTTPServerRequest & /* request */, Context & /* context */) {} + virtual void customizeContext(HTTPServerRequest & /* request */, ContextPtr /* context */) {} - virtual bool customizeQueryParam(Context & context, const std::string & key, const std::string & value) = 0; + virtual bool customizeQueryParam(ContextPtr context, const std::string & key, const std::string & value) = 0; - virtual std::string getQuery(HTTPServerRequest & request, HTMLForm & params, Context & context) = 0; + virtual std::string getQuery(HTTPServerRequest & request, HTMLForm & params, ContextPtr context) = 0; private: struct Output @@ -74,7 +73,7 @@ private: // The request_context and the request_credentials instances may outlive a single request/response loop. // This happens only when the authentication mechanism requires more than a single request/response exchange (e.g., SPNEGO). - std::unique_ptr request_context; + ContextPtr request_context; std::unique_ptr request_credentials; // Returns true when the user successfully authenticated, @@ -83,14 +82,14 @@ private: // the request_context and request_credentials instances are preserved. // Throws an exception if authentication failed. bool authenticateUser( - Context & context, + ContextPtr context, HTTPServerRequest & request, HTMLForm & params, HTTPServerResponse & response); /// Also initializes 'used_output'. 
void processQuery( - Context & context, + ContextPtr context, HTTPServerRequest & request, HTMLForm & params, HTTPServerResponse & response, @@ -114,9 +113,9 @@ private: public: explicit DynamicQueryHandler(IServer & server_, const std::string & param_name_ = "query"); - std::string getQuery(HTTPServerRequest & request, HTMLForm & params, Context & context) override; + std::string getQuery(HTTPServerRequest & request, HTMLForm & params, ContextPtr context) override; - bool customizeQueryParam(Context &context, const std::string &key, const std::string &value) override; + bool customizeQueryParam(ContextPtr context, const std::string &key, const std::string &value) override; }; class PredefinedQueryHandler : public HTTPHandler @@ -131,11 +130,11 @@ public: IServer & server_, const NameSet & receive_params_, const std::string & predefined_query_ , const CompiledRegexPtr & url_regex_, const std::unordered_map & header_name_with_regex_); - virtual void customizeContext(HTTPServerRequest & request, Context & context) override; + virtual void customizeContext(HTTPServerRequest & request, ContextPtr context) override; - std::string getQuery(HTTPServerRequest & request, HTMLForm & params, Context & context) override; + std::string getQuery(HTTPServerRequest & request, HTMLForm & params, ContextPtr context) override; - bool customizeQueryParam(Context & context, const std::string & key, const std::string & value) override; + bool customizeQueryParam(ContextPtr context, const std::string & key, const std::string & value) override; }; } diff --git a/src/Server/IServer.h b/src/Server/IServer.h index 131e7443646..80736fda3ea 100644 --- a/src/Server/IServer.h +++ b/src/Server/IServer.h @@ -1,5 +1,7 @@ #pragma once +#include + namespace Poco { @@ -7,6 +9,7 @@ namespace Util { class LayeredConfiguration; } + class Logger; } @@ -15,8 +18,6 @@ class Logger; namespace DB { -class Context; - class IServer { public: @@ -27,12 +28,12 @@ public: virtual Poco::Logger & logger() const = 0; /// Returns global application's context. - virtual Context & context() const = 0; + virtual ContextPtr context() const = 0; /// Returns true if shutdown signaled. 
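Further down, `InterserverIOHTTPHandler::checkAuthentication` reports failures as a `{message, success}` pair that the caller unpacks with an init-statement structured binding, `if (auto [message, success] = ...; success)`. A minimal sketch of that shape; the credential values here are made up for illustration:

```cpp
#include <iostream>
#include <string>
#include <utility>

/// Return a message/success pair instead of throwing, so the caller can
/// branch on `success` and log or send back `message`.
std::pair<std::string, bool> checkAuthentication(const std::string & user, const std::string & password)
{
    if (user == "interserver" && password == "secret") // assumed credentials, demo only
        return {"", true};
    return {"Incorrect user or password in HTTP Basic authentication", false};
}

int main()
{
    if (auto [message, success] = checkAuthentication("interserver", "wrong"); !success)
        std::cout << message << '\n';
}
```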
virtual bool isCancelled() const = 0; - virtual ~IServer() {} + virtual ~IServer() = default; }; } diff --git a/src/Server/InterserverIOHTTPHandler.cpp b/src/Server/InterserverIOHTTPHandler.cpp index 740072e8e9f..64af8860b23 100644 --- a/src/Server/InterserverIOHTTPHandler.cpp +++ b/src/Server/InterserverIOHTTPHandler.cpp @@ -25,29 +25,26 @@ namespace ErrorCodes std::pair InterserverIOHTTPHandler::checkAuthentication(HTTPServerRequest & request) const { - const auto & config = server.config(); - - if (config.has("interserver_http_credentials.user")) + auto server_credentials = server.context()->getInterserverCredentials(); + if (server_credentials) { if (!request.hasCredentials()) - return {"Server requires HTTP Basic authentication, but client doesn't provide it", false}; + return server_credentials->isValidUser("", ""); + String scheme, info; request.getCredentials(scheme, info); if (scheme != "Basic") return {"Server requires HTTP Basic authentication but client provides another method", false}; - String user = config.getString("interserver_http_credentials.user"); - String password = config.getString("interserver_http_credentials.password", ""); - Poco::Net::HTTPBasicCredentials credentials(info); - if (std::make_pair(user, password) != std::make_pair(credentials.getUsername(), credentials.getPassword())) - return {"Incorrect user or password in HTTP Basic authentication", false}; + return server_credentials->isValidUser(credentials.getUsername(), credentials.getPassword()); } else if (request.hasCredentials()) { return {"Client requires HTTP Basic authentication, but server doesn't provide it", false}; } + return {"", true}; } @@ -62,7 +59,7 @@ void InterserverIOHTTPHandler::processQuery(HTTPServerRequest & request, HTTPSer auto & body = request.getStream(); - auto endpoint = server.context().getInterserverIOHandler().getEndpoint(endpoint_name); + auto endpoint = server.context()->getInterserverIOHandler().getEndpoint(endpoint_name); /// Locked for read while query processing std::shared_lock lock(endpoint->rwlock); if (endpoint->blocker.isCancelled()) @@ -107,6 +104,7 @@ void InterserverIOHTTPHandler::handleRequest(HTTPServerRequest & request, HTTPSe } catch (...) 
{ + tryLogCurrentException(log); out.finalize(); } }; @@ -116,6 +114,7 @@ void InterserverIOHTTPHandler::handleRequest(HTTPServerRequest & request, HTTPSe if (auto [message, success] = checkAuthentication(request); success) { processQuery(request, response, used_output); + used_output.out->finalize(); LOG_DEBUG(log, "Done processing query"); } else diff --git a/src/Server/InterserverIOHTTPHandler.h b/src/Server/InterserverIOHTTPHandler.h index 47892aa678f..c0d776115e1 100644 --- a/src/Server/InterserverIOHTTPHandler.h +++ b/src/Server/InterserverIOHTTPHandler.h @@ -2,10 +2,12 @@ #include #include +#include #include #include +#include namespace CurrentMetrics diff --git a/src/Server/NuKeeperTCPHandler.cpp b/src/Server/KeeperTCPHandler.cpp similarity index 90% rename from src/Server/NuKeeperTCPHandler.cpp rename to src/Server/KeeperTCPHandler.cpp index b283356d27d..1dadd3437f7 100644 --- a/src/Server/NuKeeperTCPHandler.cpp +++ b/src/Server/KeeperTCPHandler.cpp @@ -1,4 +1,4 @@ -#include +#include #if USE_NURAFT @@ -189,20 +189,20 @@ struct SocketInterruptablePollWrapper #endif }; -NuKeeperTCPHandler::NuKeeperTCPHandler(IServer & server_, const Poco::Net::StreamSocket & socket_) +KeeperTCPHandler::KeeperTCPHandler(IServer & server_, const Poco::Net::StreamSocket & socket_) : Poco::Net::TCPServerConnection(socket_) , server(server_) , log(&Poco::Logger::get("NuKeeperTCPHandler")) - , global_context(server.context()) - , nu_keeper_storage_dispatcher(global_context.getNuKeeperStorageDispatcher()) - , operation_timeout(0, global_context.getConfigRef().getUInt("test_keeper_server.operation_timeout_ms", Coordination::DEFAULT_OPERATION_TIMEOUT_MS) * 1000) - , session_timeout(0, global_context.getConfigRef().getUInt("test_keeper_server.session_timeout_ms", Coordination::DEFAULT_SESSION_TIMEOUT_MS) * 1000) + , global_context(Context::createCopy(server.context())) + , nu_keeper_storage_dispatcher(global_context->getKeeperStorageDispatcher()) + , operation_timeout(0, global_context->getConfigRef().getUInt("test_keeper_server.operation_timeout_ms", Coordination::DEFAULT_OPERATION_TIMEOUT_MS) * 1000) + , session_timeout(0, global_context->getConfigRef().getUInt("test_keeper_server.session_timeout_ms", Coordination::DEFAULT_SESSION_TIMEOUT_MS) * 1000) , poll_wrapper(std::make_unique(socket_)) , responses(std::make_unique()) { } -void NuKeeperTCPHandler::sendHandshake(bool has_leader) +void KeeperTCPHandler::sendHandshake(bool has_leader) { Coordination::write(Coordination::SERVER_HANDSHAKE_LENGTH, *out); if (has_leader) @@ -217,12 +217,12 @@ void NuKeeperTCPHandler::sendHandshake(bool has_leader) out->next(); } -void NuKeeperTCPHandler::run() +void KeeperTCPHandler::run() { runImpl(); } -Poco::Timespan NuKeeperTCPHandler::receiveHandshake() +Poco::Timespan KeeperTCPHandler::receiveHandshake() { int32_t handshake_length; int32_t protocol_version; @@ -240,16 +240,10 @@ Poco::Timespan NuKeeperTCPHandler::receiveHandshake() throw Exception("Unexpected protocol version: " + toString(protocol_version), ErrorCodes::UNEXPECTED_PACKET_FROM_CLIENT); Coordination::read(last_zxid_seen, *in); - - if (last_zxid_seen != 0) - throw Exception("Non zero last_zxid_seen is not supported", ErrorCodes::UNEXPECTED_PACKET_FROM_CLIENT); - Coordination::read(timeout_ms, *in); + + /// TODO Stop ignoring this value Coordination::read(previous_session_id, *in); - - if (previous_session_id != 0) - throw Exception("Non zero previous session id is not supported", ErrorCodes::UNEXPECTED_PACKET_FROM_CLIENT); - Coordination::read(passwd, 
*in); int8_t readonly; @@ -260,12 +254,12 @@ Poco::Timespan NuKeeperTCPHandler::receiveHandshake() } -void NuKeeperTCPHandler::runImpl() +void KeeperTCPHandler::runImpl() { setThreadName("TstKprHandler"); ThreadStatus thread_status; - auto global_receive_timeout = global_context.getSettingsRef().receive_timeout; - auto global_send_timeout = global_context.getSettingsRef().send_timeout; + auto global_receive_timeout = global_context->getSettingsRef().receive_timeout; + auto global_send_timeout = global_context->getSettingsRef().send_timeout; socket().setReceiveTimeout(global_receive_timeout); socket().setSendTimeout(global_send_timeout); @@ -399,7 +393,7 @@ void NuKeeperTCPHandler::runImpl() } } -std::pair NuKeeperTCPHandler::receiveRequest() +std::pair KeeperTCPHandler::receiveRequest() { int32_t length; Coordination::read(length, *in); diff --git a/src/Server/NuKeeperTCPHandler.h b/src/Server/KeeperTCPHandler.h similarity index 83% rename from src/Server/NuKeeperTCPHandler.h rename to src/Server/KeeperTCPHandler.h index 03a857ad1d7..6c3929198c0 100644 --- a/src/Server/NuKeeperTCPHandler.h +++ b/src/Server/KeeperTCPHandler.h @@ -13,7 +13,7 @@ #include #include #include -#include +#include #include #include #include @@ -29,16 +29,16 @@ using ThreadSafeResponseQueue = ThreadSafeQueue; -class NuKeeperTCPHandler : public Poco::Net::TCPServerConnection +class KeeperTCPHandler : public Poco::Net::TCPServerConnection { public: - NuKeeperTCPHandler(IServer & server_, const Poco::Net::StreamSocket & socket_); + KeeperTCPHandler(IServer & server_, const Poco::Net::StreamSocket & socket_); void run() override; private: IServer & server; Poco::Logger * log; - Context global_context; - std::shared_ptr nu_keeper_storage_dispatcher; + ContextPtr global_context; + std::shared_ptr nu_keeper_storage_dispatcher; Poco::Timespan operation_timeout; Poco::Timespan session_timeout; int64_t session_id{-1}; diff --git a/src/Server/NuKeeperTCPHandlerFactory.h b/src/Server/KeeperTCPHandlerFactory.h similarity index 65% rename from src/Server/NuKeeperTCPHandlerFactory.h rename to src/Server/KeeperTCPHandlerFactory.h index 0fd86ebc21f..132a8b96c23 100644 --- a/src/Server/NuKeeperTCPHandlerFactory.h +++ b/src/Server/KeeperTCPHandlerFactory.h @@ -1,15 +1,16 @@ #pragma once -#include +#include #include #include #include #include +#include namespace DB { -class NuKeeperTCPHandlerFactory : public Poco::Net::TCPServerConnectionFactory +class KeeperTCPHandlerFactory : public Poco::Net::TCPServerConnectionFactory { private: IServer & server; @@ -21,9 +22,9 @@ private: void run() override {} }; public: - NuKeeperTCPHandlerFactory(IServer & server_) + KeeperTCPHandlerFactory(IServer & server_, bool secure) : server(server_) - , log(&Poco::Logger::get("NuKeeperTCPHandlerFactory")) + , log(&Poco::Logger::get(std::string{"KeeperTCP"} + (secure ? "S" : "") + "HandlerFactory")) { } @@ -31,8 +32,8 @@ public: { try { - LOG_TRACE(log, "NuKeeper request. Address: {}", socket.peerAddress().toString()); - return new NuKeeperTCPHandler(server, socket); + LOG_TRACE(log, "Keeper request. 
Address: {}", socket.peerAddress().toString()); + return new KeeperTCPHandler(server, socket); } catch (const Poco::Net::NetException &) { diff --git a/src/Server/MySQLHandler.cpp b/src/Server/MySQLHandler.cpp index 75c88a6ff93..7b1df092aa1 100644 --- a/src/Server/MySQLHandler.cpp +++ b/src/Server/MySQLHandler.cpp @@ -72,7 +72,7 @@ MySQLHandler::MySQLHandler(IServer & server_, const Poco::Net::StreamSocket & so : Poco::Net::TCPServerConnection(socket_) , server(server_) , log(&Poco::Logger::get("MySQLHandler")) - , connection_context(server.context()) + , connection_context(Context::createCopy(server.context())) , connection_id(connection_id_) , auth_plugin(new MySQLProtocol::Authentication::Native41()) { @@ -89,14 +89,14 @@ void MySQLHandler::run() { setThreadName("MySQLHandler"); ThreadStatus thread_status; - connection_context.makeSessionContext(); - connection_context.getClientInfo().interface = ClientInfo::Interface::MYSQL; - connection_context.setDefaultFormat("MySQLWire"); - connection_context.getClientInfo().connection_id = connection_id; + connection_context->makeSessionContext(); + connection_context->getClientInfo().interface = ClientInfo::Interface::MYSQL; + connection_context->setDefaultFormat("MySQLWire"); + connection_context->getClientInfo().connection_id = connection_id; in = std::make_shared(socket()); out = std::make_shared(socket()); - packet_endpoint = std::make_shared(*in, *out, connection_context.mysql.sequence_id); + packet_endpoint = std::make_shared(*in, *out, connection_context->mysql.sequence_id); try { @@ -108,11 +108,11 @@ void MySQLHandler::run() HandshakeResponse handshake_response; finishHandshake(handshake_response); - connection_context.mysql.client_capabilities = handshake_response.capability_flags; + connection_context->mysql.client_capabilities = handshake_response.capability_flags; if (handshake_response.max_packet_size) - connection_context.mysql.max_packet_size = handshake_response.max_packet_size; - if (!connection_context.mysql.max_packet_size) - connection_context.mysql.max_packet_size = MAX_PACKET_LENGTH; + connection_context->mysql.max_packet_size = handshake_response.max_packet_size; + if (!connection_context->mysql.max_packet_size) + connection_context->mysql.max_packet_size = MAX_PACKET_LENGTH; LOG_TRACE(log, "Capabilities: {}, max_packet_size: {}, character_set: {}, user: {}, auth_response length: {}, database: {}, auth_plugin_name: {}", @@ -133,8 +133,8 @@ void MySQLHandler::run() try { if (!handshake_response.database.empty()) - connection_context.setCurrentDatabase(handshake_response.database); - connection_context.setCurrentQueryId(Poco::format("mysql:%lu", connection_id)); + connection_context->setCurrentDatabase(handshake_response.database); + connection_context->setCurrentQueryId(Poco::format("mysql:%lu", connection_id)); } catch (const Exception & exc) @@ -252,7 +252,7 @@ void MySQLHandler::authenticate(const String & user_name, const String & auth_pl try { // For compatibility with JavaScript MySQL client, Native41 authentication plugin is used when possible (if password is specified using double SHA1). Otherwise SHA256 plugin is used. 
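The `MySQLHandler::authenticate` hunk above mentions that the Native41 plugin is used when the stored credential is a double SHA1. Roughly, the check works as sketched below, using OpenSSL's `SHA1` (link with `-lcrypto`); this is a simplified reading of the handshake, not the actual `MySQLProtocol::Authentication::Native41` code:

```cpp
#include <openssl/sha.h>
#include <string>

std::string sha1(const std::string & data)
{
    unsigned char digest[SHA_DIGEST_LENGTH];
    SHA1(reinterpret_cast<const unsigned char *>(data.data()), data.size(), digest);
    return std::string(reinterpret_cast<char *>(digest), SHA_DIGEST_LENGTH);
}

/// The server stores double_sha1 = SHA1(SHA1(password)); the client sends
/// SHA1(password) XOR SHA1(scramble + double_sha1), proving knowledge of the
/// password without transmitting it.
bool checkNative41(const std::string & scramble,        // 20 random bytes sent by the server
                   const std::string & double_sha1,     // stored SHA1(SHA1(password))
                   const std::string & client_response) // 20-byte client proof
{
    if (client_response.size() != SHA_DIGEST_LENGTH)
        return false;
    std::string mask = sha1(scramble + double_sha1);
    std::string candidate(SHA_DIGEST_LENGTH, '\0');
    for (size_t i = 0; i < SHA_DIGEST_LENGTH; ++i)
        candidate[i] = static_cast<char>(client_response[i] ^ mask[i]); // recover SHA1(password)
    return sha1(candidate) == double_sha1;
}

int main()
{
    // Round trip: build the response the way a client would, then verify it.
    std::string password = "top_secret";
    std::string scramble(20, '\x2a');               // fixed scramble for the demo
    std::string double_sha1 = sha1(sha1(password)); // what the server stores
    std::string mask = sha1(scramble + double_sha1);
    std::string response = sha1(password);
    for (size_t i = 0; i < response.size(); ++i)
        response[i] = static_cast<char>(response[i] ^ mask[i]);
    return checkNative41(scramble, double_sha1, response) ? 0 : 1;
}
```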
- auto user = connection_context.getAccessControlManager().read(user_name); + auto user = connection_context->getAccessControlManager().read(user_name); const DB::Authentication::Type user_auth_type = user->authentication.getType(); if (user_auth_type == DB::Authentication::SHA256_PASSWORD) { @@ -276,7 +276,7 @@ void MySQLHandler::comInitDB(ReadBuffer & payload) String database; readStringUntilEOF(database, payload); LOG_DEBUG(log, "Setting current database to {}", database); - connection_context.setCurrentDatabase(database); + connection_context->setCurrentDatabase(database); packet_endpoint->sendPacket(OKPacket(0, client_capability_flags, 0, 0, 1), true); } @@ -284,7 +284,7 @@ void MySQLHandler::comFieldList(ReadBuffer & payload) { ComFieldList packet; packet.readPayloadWithUnpacked(payload); - String database = connection_context.getCurrentDatabase(); + String database = connection_context->getCurrentDatabase(); StoragePtr table_ptr = DatabaseCatalog::instance().getTable({database, packet.table}, connection_context); auto metadata_snapshot = table_ptr->getInMemoryMetadataPtr(); for (const NameAndTypePair & column : metadata_snapshot->getColumns().getAll()) @@ -332,11 +332,11 @@ void MySQLHandler::comQuery(ReadBuffer & payload) ReadBufferFromString replacement(replacement_query); - Context query_context = connection_context; + auto query_context = Context::createCopy(connection_context); std::atomic affected_rows {0}; - auto prev = query_context.getProgressCallback(); - query_context.setProgressCallback([&, prev = prev](const Progress & progress) + auto prev = query_context->getProgressCallback(); + query_context->setProgressCallback([&, prev = prev](const Progress & progress) { if (prev) prev(progress); @@ -391,14 +391,14 @@ void MySQLHandlerSSL::finishHandshakeSSL( ReadBufferFromMemory payload(buf, pos); payload.ignore(PACKET_HEADER_SIZE); ssl_request.readPayloadWithUnpacked(payload); - connection_context.mysql.client_capabilities = ssl_request.capability_flags; - connection_context.mysql.max_packet_size = ssl_request.max_packet_size ? ssl_request.max_packet_size : MAX_PACKET_LENGTH; + connection_context->mysql.client_capabilities = ssl_request.capability_flags; + connection_context->mysql.max_packet_size = ssl_request.max_packet_size ? ssl_request.max_packet_size : MAX_PACKET_LENGTH; secure_connection = true; ss = std::make_shared(SecureStreamSocket::attach(socket(), SSLManager::instance().defaultServerContext())); in = std::make_shared(*ss); out = std::make_shared(*ss); - connection_context.mysql.sequence_id = 2; - packet_endpoint = std::make_shared(*in, *out, connection_context.mysql.sequence_id); + connection_context->mysql.sequence_id = 2; + packet_endpoint = std::make_shared(*in, *out, connection_context->mysql.sequence_id); packet_endpoint->receivePacket(packet); /// Reading HandshakeResponse from secure socket. 
} diff --git a/src/Server/MySQLHandler.h b/src/Server/MySQLHandler.h index 1418d068ffd..f5fb82b5bef 100644 --- a/src/Server/MySQLHandler.h +++ b/src/Server/MySQLHandler.h @@ -56,7 +56,7 @@ private: protected: Poco::Logger * log; - Context connection_context; + ContextPtr connection_context; std::shared_ptr packet_endpoint; diff --git a/src/Server/PostgreSQLHandler.cpp b/src/Server/PostgreSQLHandler.cpp index b3a3bbf2aaa..01887444c65 100644 --- a/src/Server/PostgreSQLHandler.cpp +++ b/src/Server/PostgreSQLHandler.cpp @@ -33,7 +33,7 @@ PostgreSQLHandler::PostgreSQLHandler( std::vector> & auth_methods_) : Poco::Net::TCPServerConnection(socket_) , server(server_) - , connection_context(server.context()) + , connection_context(Context::createCopy(server.context())) , ssl_enabled(ssl_enabled_) , connection_id(connection_id_) , authentication_manager(auth_methods_) @@ -52,9 +52,9 @@ void PostgreSQLHandler::run() { setThreadName("PostgresHandler"); ThreadStatus thread_status; - connection_context.makeSessionContext(); - connection_context.getClientInfo().interface = ClientInfo::Interface::POSTGRESQL; - connection_context.setDefaultFormat("PostgreSQLWire"); + connection_context->makeSessionContext(); + connection_context->getClientInfo().interface = ClientInfo::Interface::POSTGRESQL; + connection_context->setDefaultFormat("PostgreSQLWire"); try { @@ -132,8 +132,8 @@ bool PostgreSQLHandler::startup() try { if (!start_up_msg->database.empty()) - connection_context.setCurrentDatabase(start_up_msg->database); - connection_context.setCurrentQueryId(Poco::format("postgres:%d:%d", connection_id, secret_key)); + connection_context->setCurrentDatabase(start_up_msg->database); + connection_context->setCurrentQueryId(Poco::format("postgres:%d:%d", connection_id, secret_key)); } catch (const Exception & exc) { @@ -213,8 +213,8 @@ void PostgreSQLHandler::sendParameterStatusData(PostgreSQLProtocol::Messaging::S void PostgreSQLHandler::cancelRequest() { - connection_context.setCurrentQueryId(""); - connection_context.setDefaultFormat("Null"); + connection_context->setCurrentQueryId(""); + connection_context->setDefaultFormat("Null"); std::unique_ptr msg = message_transport->receiveWithPayloadSize(8); @@ -268,7 +268,7 @@ void PostgreSQLHandler::processQuery() return; } - const auto & settings = connection_context.getSettingsRef(); + const auto & settings = connection_context->getSettingsRef(); std::vector queries; auto parse_res = splitMultipartQuery(query->query, queries, settings.max_query_size, settings.max_parser_depth); if (!parse_res.second) diff --git a/src/Server/PostgreSQLHandler.h b/src/Server/PostgreSQLHandler.h index 697aa9b6744..cc30c85d8bb 100644 --- a/src/Server/PostgreSQLHandler.h +++ b/src/Server/PostgreSQLHandler.h @@ -37,7 +37,7 @@ private: Poco::Logger * log = &Poco::Logger::get("PostgreSQLHandler"); IServer & server; - Context connection_context; + ContextPtr connection_context; bool ssl_enabled; Int32 connection_id; Int32 secret_key; diff --git a/src/Server/ReplicasStatusHandler.cpp b/src/Server/ReplicasStatusHandler.cpp index 778f9827131..86295cc5170 100644 --- a/src/Server/ReplicasStatusHandler.cpp +++ b/src/Server/ReplicasStatusHandler.cpp @@ -34,7 +34,7 @@ void ReplicasStatusHandler::handleRequest(HTTPServerRequest & request, HTTPServe /// Even if lag is small, output detailed information about the lag. 
bool verbose = params.get("verbose", "") == "1"; - const MergeTreeSettings & settings = context.getReplicatedMergeTreeSettings(); + const MergeTreeSettings & settings = context->getReplicatedMergeTreeSettings(); bool ok = true; WriteBufferFromOwnString message; @@ -73,7 +73,7 @@ void ReplicasStatusHandler::handleRequest(HTTPServerRequest & request, HTTPServe } } - const auto & config = context.getConfigRef(); + const auto & config = context->getConfigRef(); setResponseDefaultHeaders(response, config.getUInt("keep_alive_timeout", 10)); if (!ok) diff --git a/src/Server/ReplicasStatusHandler.h b/src/Server/ReplicasStatusHandler.h index 8a790b13ad6..eda0b15ed6f 100644 --- a/src/Server/ReplicasStatusHandler.h +++ b/src/Server/ReplicasStatusHandler.h @@ -12,7 +12,7 @@ class IServer; class ReplicasStatusHandler : public HTTPRequestHandler { private: - Context & context; + ContextPtr context; public: explicit ReplicasStatusHandler(IServer & server_); diff --git a/src/Server/StaticRequestHandler.cpp b/src/Server/StaticRequestHandler.cpp index 9f959239be9..169d6859b43 100644 --- a/src/Server/StaticRequestHandler.cpp +++ b/src/Server/StaticRequestHandler.cpp @@ -137,7 +137,7 @@ void StaticRequestHandler::writeResponse(WriteBuffer & out) if (startsWith(response_expression, file_prefix)) { - const auto & user_files_absolute_path = Poco::Path(server.context().getUserFilesPath()).makeAbsolute().makeDirectory().toString(); + const auto & user_files_absolute_path = Poco::Path(server.context()->getUserFilesPath()).makeAbsolute().makeDirectory().toString(); const auto & file_name = response_expression.substr(file_prefix.size(), response_expression.size() - file_prefix.size()); const auto & file_path = Poco::Path(user_files_absolute_path, file_name).makeAbsolute().toString(); diff --git a/src/Server/TCPHandler.cpp b/src/Server/TCPHandler.cpp index 5765c3ec43e..916f29ba1d4 100644 --- a/src/Server/TCPHandler.cpp +++ b/src/Server/TCPHandler.cpp @@ -25,6 +25,7 @@ #include #include #include +#include #include #include #include @@ -33,6 +34,7 @@ #include +#include "Core/Protocol.h" #include "TCPHandler.h" #if !defined(ARCADIA_BUILD) @@ -55,6 +57,7 @@ namespace ErrorCodes extern const int SOCKET_TIMEOUT; extern const int UNEXPECTED_PACKET_FROM_CLIENT; extern const int SUPPORT_IS_DISABLED; + extern const int UNKNOWN_PROTOCOL; } TCPHandler::TCPHandler(IServer & server_, const Poco::Net::StreamSocket & socket_, bool parse_proxy_protocol_, std::string server_display_name_) @@ -62,8 +65,8 @@ TCPHandler::TCPHandler(IServer & server_, const Poco::Net::StreamSocket & socket , server(server_) , parse_proxy_protocol(parse_proxy_protocol_) , log(&Poco::Logger::get("TCPHandler")) - , connection_context(server.context()) - , query_context(server.context()) + , connection_context(Context::createCopy(server.context())) + , query_context(Context::createCopy(server.context())) , server_display_name(std::move(server_display_name_)) { } @@ -72,7 +75,8 @@ TCPHandler::~TCPHandler() try { state.reset(); - out->next(); + if (out) + out->next(); } catch (...) { @@ -85,13 +89,13 @@ void TCPHandler::runImpl() setThreadName("TCPHandler"); ThreadStatus thread_status; - connection_context = server.context(); - connection_context.makeSessionContext(); + connection_context = Context::createCopy(server.context()); + connection_context->makeSessionContext(); /// These timeouts can be changed after receiving query. 
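The loop that follows polls the socket in slices of `min(poll_interval, idle_connection_timeout)` so a server shutdown is noticed promptly while the idle budget is still enforced. A standalone sketch of that loop's shape; the names and the `poll`/`server_cancelled` hooks are stand-ins for the socket poll and the shutdown flag:

```cpp
#include <algorithm>
#include <chrono>

/// Returns true if data arrived, false if the connection idled out.
bool waitForData(std::chrono::seconds poll_interval,
                 std::chrono::seconds idle_connection_timeout,
                 bool (*poll)(std::chrono::seconds timeout),
                 bool (*server_cancelled)())
{
    auto idle_started = std::chrono::steady_clock::now();
    auto slice = std::min(poll_interval, idle_connection_timeout);
    while (!server_cancelled() && !poll(slice))
    {
        // Each short slice re-checks the shutdown flag; the idle budget is
        // tracked separately so the total wait stays bounded.
        if (std::chrono::steady_clock::now() - idle_started > idle_connection_timeout)
            return false; // closing idle connection
    }
    return !server_cancelled();
}

int main()
{
    return waitForData(std::chrono::seconds(1), std::chrono::seconds(10),
                       [](std::chrono::seconds) { return true; },
                       [] { return false; }) ? 0 : 1;
}
```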
- auto global_receive_timeout = connection_context.getSettingsRef().receive_timeout; - auto global_send_timeout = connection_context.getSettingsRef().send_timeout; + auto global_receive_timeout = connection_context->getSettingsRef().receive_timeout; + auto global_send_timeout = connection_context->getSettingsRef().send_timeout; socket().setReceiveTimeout(global_receive_timeout); socket().setSendTimeout(global_send_timeout); @@ -132,7 +136,7 @@ void TCPHandler::runImpl() try { /// We try to send error information to the client. - sendException(e, connection_context.getSettingsRef().calculate_text_stack_trace); + sendException(e, connection_context->getSettingsRef().calculate_text_stack_trace); } catch (...) {} @@ -146,28 +150,30 @@ void TCPHandler::runImpl() { Exception e("Database " + backQuote(default_database) + " doesn't exist", ErrorCodes::UNKNOWN_DATABASE); LOG_ERROR(log, "Code: {}, e.displayText() = {}, Stack trace:\n\n{}", e.code(), e.displayText(), e.getStackTraceString()); - sendException(e, connection_context.getSettingsRef().calculate_text_stack_trace); + sendException(e, connection_context->getSettingsRef().calculate_text_stack_trace); return; } - connection_context.setCurrentDatabase(default_database); + connection_context->setCurrentDatabase(default_database); } - Settings connection_settings = connection_context.getSettings(); + Settings connection_settings = connection_context->getSettings(); + UInt64 idle_connection_timeout = connection_settings.idle_connection_timeout; + UInt64 poll_interval = connection_settings.poll_interval; sendHello(); - connection_context.setProgressCallback([this] (const Progress & value) { return this->updateProgress(value); }); + connection_context->setProgressCallback([this] (const Progress & value) { return this->updateProgress(value); }); while (true) { /// We are waiting for a packet from the client. Thus, every `poll_interval` seconds check whether we need to shut down. { Stopwatch idle_time; - while (!server.isCancelled() && !static_cast<ReadBufferFromPocoSocket &>(*in).poll( - std::min(connection_settings.poll_interval, connection_settings.idle_connection_timeout) * 1000000)) + UInt64 timeout_ms = std::min(poll_interval, idle_connection_timeout) * 1000000; + while (!server.isCancelled() && !static_cast<ReadBufferFromPocoSocket &>(*in).poll(timeout_ms)) { - if (idle_time.elapsedSeconds() > connection_settings.idle_connection_timeout) + if (idle_time.elapsedSeconds() > idle_connection_timeout) { LOG_TRACE(log, "Closing idle connection"); return; } @@ -180,7 +186,7 @@ void TCPHandler::runImpl() break; /// Set context of request. - query_context = connection_context; + query_context = Context::createCopy(connection_context); Stopwatch watch; state.reset(); @@ -208,12 +214,21 @@ void TCPHandler::runImpl() if (!receivePacket()) continue; + /** If a Query packet was received, the settings in query_context have been updated, + * so refresh the connection settings that depend on them, for flexibility. + */ + { + const Settings & settings = query_context->getSettingsRef(); + idle_connection_timeout = settings.idle_connection_timeout; + poll_interval = settings.poll_interval; + } + /** If part_uuids were received in the previous packet, try to read again.
*/ if (state.empty() && state.part_uuids && !receivePacket()) continue; - query_scope.emplace(*query_context); + query_scope.emplace(query_context); send_exception_with_stack_trace = query_context->getSettingsRef().calculate_text_stack_trace; @@ -228,9 +243,9 @@ void TCPHandler::runImpl() CurrentThread::setFatalErrorCallback([this]{ sendLogs(); }); } - query_context->setExternalTablesInitializer([&connection_settings, this] (Context & context) + query_context->setExternalTablesInitializer([&connection_settings, this] (ContextPtr context) { - if (&context != &*query_context) + if (context != query_context) throw Exception("Unexpected context in external tables initializer", ErrorCodes::LOGICAL_ERROR); /// Get blocks of temporary tables @@ -245,9 +260,9 @@ void TCPHandler::runImpl() }); /// Send structure of columns to client for function input() - query_context->setInputInitializer([this] (Context & context, const StoragePtr & input_storage) + query_context->setInputInitializer([this] (ContextPtr context, const StoragePtr & input_storage) { - if (&context != &query_context.value()) + if (context != query_context) throw Exception("Unexpected context in Input initializer", ErrorCodes::LOGICAL_ERROR); auto metadata_snapshot = input_storage->getInMemoryMetadataPtr(); @@ -265,15 +280,15 @@ void TCPHandler::runImpl() sendData(state.input_header); }); - query_context->setInputBlocksReaderCallback([&connection_settings, this] (Context & context) -> Block + query_context->setInputBlocksReaderCallback([&connection_settings, this] (ContextPtr context) -> Block { - if (&context != &query_context.value()) + if (context != query_context) throw Exception("Unexpected context in InputBlocksReader", ErrorCodes::LOGICAL_ERROR); - size_t poll_interval; + size_t poll_interval_ms; int receive_timeout; - std::tie(poll_interval, receive_timeout) = getReadTimeouts(connection_settings); - if (!readDataNext(poll_interval, receive_timeout)) + std::tie(poll_interval_ms, receive_timeout) = getReadTimeouts(connection_settings); + if (!readDataNext(poll_interval_ms, receive_timeout)) { state.block_in.reset(); state.maybe_compressed_in.reset(); @@ -282,11 +297,21 @@ void TCPHandler::runImpl() return state.block_for_input; }); - customizeContext(*query_context); + customizeContext(query_context); + + /// This callback is needed for requesting read tasks inside pipeline for distributed processing + query_context->setReadTaskCallback([this]() -> String + { + std::lock_guard lock(task_callback_mutex); + sendReadTaskRequestAssumeLocked(); + return receiveReadTaskResponseAssumeLocked(); + }); bool may_have_embedded_data = client_tcp_protocol_version >= DBMS_MIN_REVISION_WITH_CLIENT_SUPPORT_EMBEDDED_DATA; /// Processing Query - state.io = executeQuery(state.query, *query_context, false, state.stage, may_have_embedded_data); + state.io = executeQuery(state.query, query_context, false, state.stage, may_have_embedded_data); + + unknown_packet_in_send_data = query_context->getSettingsRef().unknown_packet_in_send_data; after_check_cancelled.restart(); after_send_progress.restart(); @@ -450,7 +475,7 @@ void TCPHandler::runImpl() } -bool TCPHandler::readDataNext(const size_t & poll_interval, const int & receive_timeout) +bool TCPHandler::readDataNext(size_t poll_interval, time_t receive_timeout) { Stopwatch watch(CLOCK_MONOTONIC_COARSE); @@ -468,8 +493,8 @@ bool TCPHandler::readDataNext(const size_t & poll_interval, const int & receive_ * If we periodically poll, the receive_timeout of the socket itself does not work. 
* Therefore, an additional check is added. */ - double elapsed = watch.elapsedSeconds(); - if (elapsed > receive_timeout) + Float64 elapsed = watch.elapsedSeconds(); + if (elapsed > static_cast(receive_timeout)) { throw Exception(ErrorCodes::SOCKET_TIMEOUT, "Timeout exceeded while receiving data from client. Waited for {} seconds, timeout is {} seconds.", @@ -510,10 +535,7 @@ std::tuple TCPHandler::getReadTimeouts(const Settings & connection_ void TCPHandler::readData(const Settings & connection_settings) { - size_t poll_interval; - int receive_timeout; - - std::tie(poll_interval, receive_timeout) = getReadTimeouts(connection_settings); + auto [poll_interval, receive_timeout] = getReadTimeouts(connection_settings); sendLogs(); while (readDataNext(poll_interval, receive_timeout)) @@ -536,7 +558,7 @@ void TCPHandler::processInsertQuery(const Settings & connection_settings) { if (!table_id.empty()) { - auto storage_ptr = DatabaseCatalog::instance().getTable(table_id, *query_context); + auto storage_ptr = DatabaseCatalog::instance().getTable(table_id, query_context); sendTableColumns(storage_ptr->getInMemoryMetadataPtr()->getColumns()); } } @@ -545,7 +567,16 @@ void TCPHandler::processInsertQuery(const Settings & connection_settings) /// Send block to the client - table structure. sendData(state.io.out->getHeader()); - readData(connection_settings); + try + { + readData(connection_settings); + } + catch (...) + { + /// To avoid flushing from the destructor, that may lead to uncaught exception. + state.io.out->writeSuffix(); + throw; + } state.io.out->writeSuffix(); } @@ -643,6 +674,8 @@ void TCPHandler::processOrdinaryQueryWithProcessors() Block block; while (executor.pull(block, query_context->getSettingsRef().interactive_delay / 1000)) { + std::lock_guard lock(task_callback_mutex); + if (isQueryCancelled()) { /// A packet was received requesting to stop execution of the request. 
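`processInsertQuery` above now drains the client's data inside a try/catch and calls `writeSuffix()` on the failure path before rethrowing, so the output stream is never finalized from a destructor, where a throwing flush would escape as an uncaught exception. A sketch of the pattern with a toy sink (hypothetical types, not the real block output stream interface):

```cpp
#include <iostream>
#include <stdexcept>

// Toy sink standing in for state.io.out: writeSuffix() finalizes and may
// flush, i.e. it may itself throw - which must never happen in a destructor.
struct Sink
{
    bool finalized = false;
    void write(int /*row*/) {}
    void writeSuffix() { finalized = true; /* may flush and throw */ }
};

void processInsert(Sink & out, bool fail_while_reading)
{
    try
    {
        // ... read blocks from the client and write them ...
        out.write(1);
        if (fail_while_reading)
            throw std::runtime_error("connection reset while reading data");
    }
    catch (...)
    {
        // Finalize explicitly on the error path: if this were left to a
        // destructor, a throwing flush would lead straight to std::terminate.
        out.writeSuffix();
        throw;
    }
    out.writeSuffix(); // normal path
}

int main()
{
    Sink ok_sink, failed_sink;
    processInsert(ok_sink, false);
    try { processInsert(failed_sink, true); }
    catch (const std::exception & e) { std::cout << e.what() << '\n'; }
    return (ok_sink.finalized && failed_sink.finalized) ? 0 : 1;
}
```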
@@ -700,7 +733,7 @@ void TCPHandler::processTablesStatusRequest() TablesStatusResponse response; for (const QualifiedTableName & table_name: request.tables) { - auto resolved_id = connection_context.tryResolveStorageID({table_name.database, table_name.table}); + auto resolved_id = connection_context->tryResolveStorageID({table_name.database, table_name.table}); StoragePtr table = DatabaseCatalog::instance().tryGetTable(resolved_id, connection_context); if (!table) continue; @@ -722,11 +755,11 @@ void TCPHandler::processTablesStatusRequest() /// For testing hedged requests const Settings & settings = query_context->getSettingsRef(); - if (settings.sleep_in_send_tables_status) + if (settings.sleep_in_send_tables_status_ms.totalMilliseconds()) { out->next(); - std::chrono::seconds sec(settings.sleep_in_send_tables_status); - std::this_thread::sleep_for(sec); + std::chrono::milliseconds ms(settings.sleep_in_send_tables_status_ms.totalMilliseconds()); + std::this_thread::sleep_for(ms); } response.write(*out, client_tcp_protocol_version); @@ -754,6 +787,13 @@ void TCPHandler::sendPartUUIDs() } } + +void TCPHandler::sendReadTaskRequestAssumeLocked() +{ + writeVarUInt(Protocol::Server::ReadTaskRequest, *out); + out->next(); +} + void TCPHandler::sendProfileInfo(const BlockStreamProfileInfo & info) { writeVarUInt(Protocol::Server::ProfileInfo, *out); @@ -861,7 +901,7 @@ bool TCPHandler::receiveProxyHeader() } LOG_TRACE(log, "Forwarded client address from PROXY header: {}", forwarded_address); - connection_context.getClientInfo().forwarded_for = forwarded_address; + connection_context->getClientInfo().forwarded_for = forwarded_address; return true; } @@ -914,7 +954,7 @@ void TCPHandler::receiveHello() if (user != USER_INTERSERVER_MARKER) { - connection_context.setUser(user, password, socket().peerAddress()); + connection_context->setUser(user, password, socket().peerAddress()); } else { @@ -962,8 +1002,6 @@ bool TCPHandler::receivePacket() UInt64 packet_type = 0; readVarUInt(packet_type, *in); -// std::cerr << "Server got packet: " << Protocol::Client::toString(packet_type) << "\n"; - switch (packet_type) { case Protocol::Client::IgnoredPartUUIDs: @@ -1015,6 +1053,34 @@ void TCPHandler::receiveIgnoredPartUUIDs() query_context->getIgnoredPartUUIDs()->add(uuids); } + +String TCPHandler::receiveReadTaskResponseAssumeLocked() +{ + UInt64 packet_type = 0; + readVarUInt(packet_type, *in); + if (packet_type != Protocol::Client::ReadTaskResponse) + { + if (packet_type == Protocol::Client::Cancel) + { + state.is_cancelled = true; + return {}; + } + else + { + throw Exception(fmt::format("Received {} packet after requesting read task", + Protocol::Client::toString(packet_type)), ErrorCodes::UNEXPECTED_PACKET_FROM_CLIENT); + } + } + UInt64 version; + readVarUInt(version, *in); + if (version != DBMS_CLUSTER_PROCESSING_PROTOCOL_VERSION) + throw Exception("Protocol version for distributed processing mismatched", ErrorCodes::UNKNOWN_PROTOCOL); + String response; + readStringBinary(response, *in); + return response; +} + + void TCPHandler::receiveClusterNameAndSalt() { readStringBinary(cluster, *in); @@ -1032,7 +1098,7 @@ void TCPHandler::receiveClusterNameAndSalt() try { /// We try to send error information to the client. - sendException(e, connection_context.getSettingsRef().calculate_text_stack_trace); + sendException(e, connection_context->getSettingsRef().calculate_text_stack_trace); } catch (...) 
{} @@ -1235,18 +1301,18 @@ bool TCPHandler::receiveData(bool scalar) /// If such a table does not exist, create it. if (resolved) { - storage = DatabaseCatalog::instance().getTable(resolved, *query_context); + storage = DatabaseCatalog::instance().getTable(resolved, query_context); } else { NamesAndTypesList columns = block.getNamesAndTypesList(); - auto temporary_table = TemporaryTableHolder(*query_context, ColumnsDescription{columns}, {}); + auto temporary_table = TemporaryTableHolder(query_context, ColumnsDescription{columns}, {}); storage = temporary_table.getTable(); query_context->addExternalTable(temporary_id.table_name, std::move(temporary_table)); } auto metadata_snapshot = storage->getInMemoryMetadataPtr(); /// The data will be written directly to the table. - auto temporary_table_out = storage->write(ASTPtr(), metadata_snapshot, *query_context); + auto temporary_table_out = storage->write(ASTPtr(), metadata_snapshot, query_context); temporary_table_out->write(block); temporary_table_out->writeSuffix(); @@ -1345,7 +1411,7 @@ void TCPHandler::initBlockOutput(const Block & block) *state.maybe_compressed_out, client_tcp_protocol_version, block.cloneEmpty(), - !connection_context.getSettingsRef().low_cardinality_allow_in_native_format); + !connection_context->getSettingsRef().low_cardinality_allow_in_native_format); } } @@ -1358,7 +1424,7 @@ void TCPHandler::initLogsBlockOutput(const Block & block) *out, client_tcp_protocol_version, block.cloneEmpty(), - !connection_context.getSettingsRef().low_cardinality_allow_in_native_format); + !connection_context->getSettingsRef().low_cardinality_allow_in_native_format); } } @@ -1409,22 +1475,57 @@ void TCPHandler::sendData(const Block & block) { initBlockOutput(block); - writeVarUInt(Protocol::Server::Data, *out); - /// Send external table name (empty name is the main table) - writeStringBinary("", *out); + auto prev_bytes_written_out = out->count(); + auto prev_bytes_written_compressed_out = state.maybe_compressed_out->count(); - /// For testing hedged requests - const Settings & settings = query_context->getSettingsRef(); - if (block.rows() > 0 && settings.sleep_in_send_data) + try { - out->next(); - std::chrono::seconds sec(settings.sleep_in_send_data); - std::this_thread::sleep_for(sec); - } + /// For testing hedged requests + if (unknown_packet_in_send_data) + { + --unknown_packet_in_send_data; + if (unknown_packet_in_send_data == 0) + writeVarUInt(UInt64(-1), *out); + } - state.block_out->write(block); - state.maybe_compressed_out->next(); - out->next(); + writeVarUInt(Protocol::Server::Data, *out); + /// Send external table name (empty name is the main table) + writeStringBinary("", *out); + + /// For testing hedged requests + const Settings & settings = query_context->getSettingsRef(); + if (block.rows() > 0 && settings.sleep_in_send_data_ms.totalMilliseconds()) + { + out->next(); + std::chrono::milliseconds ms(settings.sleep_in_send_data_ms.totalMilliseconds()); + std::this_thread::sleep_for(ms); + } + + state.block_out->write(block); + state.maybe_compressed_out->next(); + out->next(); + } + catch (...) + { + /// In case of unsuccessful write, if the buffer with written data was not flushed, + /// we will rollback write to avoid breaking the protocol. + /// (otherwise the client will not be able to receive exception after unfinished data + /// as it will expect the continuation of the data). + /// It looks like hangs on client side or a message like "Data compressed with different methods". 
+ + if (state.compression == Protocol::Compression::Enable) + { + auto extra_bytes_written_compressed = state.maybe_compressed_out->count() - prev_bytes_written_compressed_out; + if (state.maybe_compressed_out->offset() >= extra_bytes_written_compressed) + state.maybe_compressed_out->position() -= extra_bytes_written_compressed; + } + + auto extra_bytes_written_out = out->count() - prev_bytes_written_out; + if (out->offset() >= extra_bytes_written_out) + out->position() -= extra_bytes_written_out; + + throw; + } } diff --git a/src/Server/TCPHandler.h b/src/Server/TCPHandler.h index ee2f7c96b5a..8387ca5f254 100644 --- a/src/Server/TCPHandler.h +++ b/src/Server/TCPHandler.h @@ -8,10 +8,10 @@ #include #include #include +#include #include #include #include -#include #include "IServer.h" @@ -89,7 +89,7 @@ struct QueryState *this = QueryState(); } - bool empty() + bool empty() const { return is_empty; } @@ -113,14 +113,13 @@ public: * because it allows to check the IP ranges of the trusted proxy. * Proxy-forwarded (original client) IP address is used for quota accounting if quota is keyed by forwarded IP. */ - TCPHandler(IServer & server_, const Poco::Net::StreamSocket & socket_, bool parse_proxy_protocol_, - std::string server_display_name_); + TCPHandler(IServer & server_, const Poco::Net::StreamSocket & socket_, bool parse_proxy_protocol_, std::string server_display_name_); ~TCPHandler() override; void run() override; /// This method is called right before the query execution. - virtual void customizeContext(DB::Context & /*context*/) {} + virtual void customizeContext(ContextPtr /*context*/) {} private: IServer & server; @@ -133,8 +132,10 @@ private: UInt64 client_version_patch = 0; UInt64 client_tcp_protocol_version = 0; - Context connection_context; - std::optional query_context; + ContextPtr connection_context; + ContextPtr query_context; + + size_t unknown_packet_in_send_data = 0; /// Streams for reading/writing from/to client connection socket. std::shared_ptr in; @@ -151,6 +152,7 @@ private: String cluster; String cluster_secret; + std::mutex task_callback_mutex; /// At the moment, only one ongoing query in the connection is supported at a time. QueryState state; @@ -170,9 +172,11 @@ private: bool receivePacket(); void receiveQuery(); void receiveIgnoredPartUUIDs(); + String receiveReadTaskResponseAssumeLocked(); bool receiveData(bool scalar); - bool readDataNext(const size_t & poll_interval, const int & receive_timeout); + bool readDataNext(size_t poll_interval, time_t receive_timeout); void readData(const Settings & connection_settings); + void receiveClusterNameAndSalt(); std::tuple getReadTimeouts(const Settings & connection_settings); [[noreturn]] void receiveUnexpectedData(); @@ -199,12 +203,11 @@ private: void sendLogs(); void sendEndOfStream(); void sendPartUUIDs(); + void sendReadTaskRequestAssumeLocked(); void sendProfileInfo(const BlockStreamProfileInfo & info); void sendTotals(const Block & totals); void sendExtremes(const Block & extremes); - void receiveClusterNameAndSalt(); - /// Creates state.block_in/block_out for blocks read/write, depending on whether compression is enabled. 
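The catch block above rewinds the output buffer by however many bytes this packet added but had not yet flushed, so a failed serialization never leaves a half-written `Data` packet on the wire for the client to choke on. The same idea over a simple counting buffer; this is a sketch only, as the real `WriteBuffer` position arithmetic is richer and the compressed layer gets its own rollback, as shown above.

```cpp
#include <cassert>
#include <cstddef>
#include <stdexcept>
#include <string>
#include <vector>

// Minimal model of a buffered writer with count()/offset() semantics like
// the rollback above: count() is total bytes ever written, offset() is the
// unflushed tail still sitting in memory.
class CountingBuffer
{
public:
    void write(const std::string & data) { pending.insert(pending.end(), data.begin(), data.end()); }
    void flush() { flushed += pending.size(); pending.clear(); }

    size_t count() const { return flushed + pending.size(); }
    size_t offset() const { return pending.size(); }
    void rollback(size_t bytes) { pending.resize(pending.size() - bytes); }

private:
    std::vector<char> pending;
    size_t flushed = 0;
};

// Write a framed packet; on failure, drop only this packet's bytes that
// were not flushed yet, keeping earlier complete packets intact.
void sendPacketWithRollback(CountingBuffer & out, const std::string & header, bool fail_mid_packet)
{
    const size_t bytes_before = out.count();
    try
    {
        out.write(header);
        if (fail_mid_packet)
            throw std::runtime_error("serialization failed mid-packet");
        out.write("payload");
        out.flush();
    }
    catch (...)
    {
        const size_t extra = out.count() - bytes_before;
        if (out.offset() >= extra)   // nothing of this packet was flushed yet
            out.rollback(extra);     // erase the partial packet entirely
        throw;
    }
}

int main()
{
    CountingBuffer out;
    sendPacketWithRollback(out, "DATA:", false);
    try { sendPacketWithRollback(out, "DATA:", true); } catch (...) {}
    assert(out.count() == std::string("DATA:payload").size()); // only the complete packet remains
    return 0;
}
```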
void initBlockInput(); void initBlockOutput(const Block & block); diff --git a/src/Server/ya.make b/src/Server/ya.make index ef5ef6d5f57..6a6a442fce8 100644 --- a/src/Server/ya.make +++ b/src/Server/ya.make @@ -22,10 +22,10 @@ SRCS( HTTPHandler.cpp HTTPHandlerFactory.cpp InterserverIOHTTPHandler.cpp + KeeperTCPHandler.cpp MySQLHandler.cpp MySQLHandlerFactory.cpp NotFoundHandler.cpp - NuKeeperTCPHandler.cpp PostgreSQLHandler.cpp PostgreSQLHandlerFactory.cpp PrometheusMetricsWriter.cpp diff --git a/src/Server/ya.make.in b/src/Server/ya.make.in index c0c1dcc7b15..fd1b414edf5 100644 --- a/src/Server/ya.make.in +++ b/src/Server/ya.make.in @@ -9,7 +9,7 @@ PEERDIR( SRCS( - + ) END() diff --git a/src/Storages/AlterCommands.cpp b/src/Storages/AlterCommands.cpp index 7043a32760b..e3177c167c5 100644 --- a/src/Storages/AlterCommands.cpp +++ b/src/Storages/AlterCommands.cpp @@ -299,7 +299,7 @@ std::optional AlterCommand::parse(const ASTAlterCommand * command_ } -void AlterCommand::apply(StorageInMemoryMetadata & metadata, const Context & context) const +void AlterCommand::apply(StorageInMemoryMetadata & metadata, ContextPtr context) const { if (type == ADD_COLUMN) { @@ -320,7 +320,7 @@ void AlterCommand::apply(StorageInMemoryMetadata & metadata, const Context & con metadata.columns.add(column, after_column, first); /// Slow, because each time a list is copied - if (context.getSettingsRef().flatten_nested) + if (context->getSettingsRef().flatten_nested) metadata.columns.flattenNested(); } else if (type == DROP_COLUMN) @@ -702,7 +702,7 @@ bool AlterCommand::isRemovingProperty() const return to_remove != RemoveProperty::NO_PROPERTY; } -std::optional AlterCommand::tryConvertToMutationCommand(StorageInMemoryMetadata & metadata, const Context & context) const +std::optional AlterCommand::tryConvertToMutationCommand(StorageInMemoryMetadata & metadata, ContextPtr context) const { if (!isRequireMutationStage(metadata)) return {}; @@ -788,7 +788,7 @@ String alterTypeToString(const AlterCommand::Type type) __builtin_unreachable(); } -void AlterCommands::apply(StorageInMemoryMetadata & metadata, const Context & context) const +void AlterCommands::apply(StorageInMemoryMetadata & metadata, ContextPtr context) const { if (!prepared) throw DB::Exception("Alter commands is not prepared. Cannot apply. 
It's a bug", ErrorCodes::LOGICAL_ERROR); @@ -880,7 +880,7 @@ void AlterCommands::prepare(const StorageInMemoryMetadata & metadata) prepared = true; } -void AlterCommands::validate(const StorageInMemoryMetadata & metadata, const Context & context) const +void AlterCommands::validate(const StorageInMemoryMetadata & metadata, ContextPtr context) const { auto all_columns = metadata.columns; /// Default expression for all added/modified columns @@ -907,7 +907,7 @@ void AlterCommands::validate(const StorageInMemoryMetadata & metadata, const Con ErrorCodes::BAD_ARGUMENTS}; if (command.codec) - CompressionCodecFactory::instance().validateCodecAndGetPreprocessedAST(command.codec, command.data_type, !context.getSettingsRef().allow_suspicious_codecs); + CompressionCodecFactory::instance().validateCodecAndGetPreprocessedAST(command.codec, command.data_type, !context->getSettingsRef().allow_suspicious_codecs); all_columns.add(ColumnDescription(column_name, command.data_type)); } @@ -927,7 +927,7 @@ void AlterCommands::validate(const StorageInMemoryMetadata & metadata, const Con ErrorCodes::NOT_IMPLEMENTED}; if (command.codec) - CompressionCodecFactory::instance().validateCodecAndGetPreprocessedAST(command.codec, command.data_type, !context.getSettingsRef().allow_suspicious_codecs); + CompressionCodecFactory::instance().validateCodecAndGetPreprocessedAST(command.codec, command.data_type, !context->getSettingsRef().allow_suspicious_codecs); auto column_default = all_columns.getDefault(column_name); if (column_default) { @@ -1172,7 +1172,7 @@ static MutationCommand createMaterializeTTLCommand() return command; } -MutationCommands AlterCommands::getMutationCommands(StorageInMemoryMetadata metadata, bool materialize_ttl, const Context & context) const +MutationCommands AlterCommands::getMutationCommands(StorageInMemoryMetadata metadata, bool materialize_ttl, ContextPtr context) const { MutationCommands result; for (const auto & alter_cmd : *this) diff --git a/src/Storages/AlterCommands.h b/src/Storages/AlterCommands.h index c973b0b6a6f..d6c80bc5ed4 100644 --- a/src/Storages/AlterCommands.h +++ b/src/Storages/AlterCommands.h @@ -128,7 +128,7 @@ struct AlterCommand static std::optional parse(const ASTAlterCommand * command); - void apply(StorageInMemoryMetadata & metadata, const Context & context) const; + void apply(StorageInMemoryMetadata & metadata, ContextPtr context) const; /// Check that alter command require data modification (mutation) to be /// executed. For example, cast from Date to UInt16 type can be executed @@ -151,7 +151,7 @@ struct AlterCommand /// If possible, convert alter command to mutation command. In other case /// return empty optional. Some storages may execute mutations after /// metadata changes. - std::optional tryConvertToMutationCommand(StorageInMemoryMetadata & metadata, const Context & context) const; + std::optional tryConvertToMutationCommand(StorageInMemoryMetadata & metadata, ContextPtr context) const; }; /// Return string representation of AlterCommand::Type @@ -170,7 +170,7 @@ public: /// Checks that all columns exist and dependencies between them. /// This check is lightweight and base only on metadata. /// More accurate check have to be performed with storage->checkAlterIsPossible. - void validate(const StorageInMemoryMetadata & metadata, const Context & context) const; + void validate(const StorageInMemoryMetadata & metadata, ContextPtr context) const; /// Prepare alter commands. 
Set ignore flag to some of them and set some /// parts to commands from storage's metadata (for example, absent default) @@ -178,7 +178,7 @@ public: /// Apply all alter command in sequential order to storage metadata. /// Commands have to be prepared before apply. - void apply(StorageInMemoryMetadata & metadata, const Context & context) const; + void apply(StorageInMemoryMetadata & metadata, ContextPtr context) const; /// At least one command modify settings. bool isSettingsAlter() const; @@ -190,7 +190,7 @@ public: /// alter. If alter can be performed as pure metadata update, than result is /// empty. If some TTL changes happened than, depending on materialize_ttl /// additional mutation command (MATERIALIZE_TTL) will be returned. - MutationCommands getMutationCommands(StorageInMemoryMetadata metadata, bool materialize_ttl, const Context & context) const; + MutationCommands getMutationCommands(StorageInMemoryMetadata metadata, bool materialize_ttl, ContextPtr context) const; }; } diff --git a/src/Storages/CMakeLists.txt b/src/Storages/CMakeLists.txt index deb1c9f6716..ff22e9fa9e1 100644 --- a/src/Storages/CMakeLists.txt +++ b/src/Storages/CMakeLists.txt @@ -1,6 +1,6 @@ add_subdirectory(MergeTree) add_subdirectory(System) -if(ENABLE_TESTS) - add_subdirectory(tests) +if(ENABLE_EXAMPLES) + add_subdirectory(examples) endif() diff --git a/src/Storages/ColumnDefault.h b/src/Storages/ColumnDefault.h index 1035bfcc834..38b61415a9a 100644 --- a/src/Storages/ColumnDefault.h +++ b/src/Storages/ColumnDefault.h @@ -1,10 +1,10 @@ #pragma once +#include + #include #include -#include - namespace DB { @@ -18,7 +18,7 @@ enum class ColumnDefaultKind ColumnDefaultKind columnDefaultKindFromString(const std::string & str); -std::string toString(const ColumnDefaultKind kind); +std::string toString(ColumnDefaultKind kind); struct ColumnDefault diff --git a/src/Storages/ColumnsDescription.cpp b/src/Storages/ColumnsDescription.cpp index fc6bb661986..545911f1465 100644 --- a/src/Storages/ColumnsDescription.cpp +++ b/src/Storages/ColumnsDescription.cpp @@ -582,7 +582,7 @@ void ColumnsDescription::removeSubcolumns(const String & name_in_storage, const subcolumns.erase(name_in_storage + "." + subcolumn_name); } -Block validateColumnsDefaultsAndGetSampleBlock(ASTPtr default_expr_list, const NamesAndTypesList & all_columns, const Context & context) +Block validateColumnsDefaultsAndGetSampleBlock(ASTPtr default_expr_list, const NamesAndTypesList & all_columns, ContextPtr context) { for (const auto & child : default_expr_list->children) if (child->as() || child->as() || child->as()) diff --git a/src/Storages/ColumnsDescription.h b/src/Storages/ColumnsDescription.h index 26e30004544..7fff22abf71 100644 --- a/src/Storages/ColumnsDescription.h +++ b/src/Storages/ColumnsDescription.h @@ -1,18 +1,20 @@ #pragma once -#include -#include -#include -#include -#include -#include -#include #include +#include +#include +#include +#include +#include +#include +#include -#include -#include -#include #include +#include +#include +#include + +#include namespace DB @@ -159,5 +161,5 @@ private: /// default expression result can be casted to column_type. Also checks, that we /// don't have strange constructions in default expression like SELECT query or /// arrayJoin function. 
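The `AlterCommands` interface above distinguishes alters that are pure metadata updates (`apply` on `StorageInMemoryMetadata`) from those that must rewrite data (`tryConvertToMutationCommand`). A compact sketch of that decision with invented types, reusing the Date to UInt16 example from the header comment:

```cpp
#include <optional>
#include <string>
#include <vector>

// Illustrative-only model of the metadata-vs-mutation split in the
// AlterCommands interface above; none of these types are the real ones.
struct ColumnChange
{
    std::string column;
    std::string from_type;
    std::string to_type;
};

struct MutationCommand
{
    std::string description;
};

// A type change that is not representation-compatible must rewrite data,
// i.e. it becomes a mutation instead of a metadata-only update.
bool requiresMutation(const ColumnChange & change)
{
    // Toy rule: a Date -> UInt16 cast keeps the on-disk representation,
    // anything else rewrites the column.
    return !(change.from_type == "Date" && change.to_type == "UInt16");
}

std::optional<MutationCommand> tryConvertToMutation(const ColumnChange & change)
{
    if (!requiresMutation(change))
        return std::nullopt;                 // apply() alone is enough
    return MutationCommand{"rewrite column " + change.column + " as " + change.to_type};
}

int main()
{
    std::vector<ColumnChange> alters{
        {"event_date", "Date", "UInt16"},    // metadata-only
        {"payload", "String", "UInt64"},     // needs a rewrite
    };

    size_t mutations = 0;
    for (const auto & alter : alters)
        if (tryConvertToMutation(alter))
            ++mutations;

    return mutations == 1 ? 0 : 1;
}
```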
-Block validateColumnsDefaultsAndGetSampleBlock(ASTPtr default_expr_list, const NamesAndTypesList & all_columns, const Context & context); +Block validateColumnsDefaultsAndGetSampleBlock(ASTPtr default_expr_list, const NamesAndTypesList & all_columns, ContextPtr context); } diff --git a/src/Storages/ConstraintsDescription.cpp b/src/Storages/ConstraintsDescription.cpp index e6315872a66..1e86a17523b 100644 --- a/src/Storages/ConstraintsDescription.cpp +++ b/src/Storages/ConstraintsDescription.cpp @@ -41,7 +41,7 @@ ConstraintsDescription ConstraintsDescription::parse(const String & str) return res; } -ConstraintsExpressions ConstraintsDescription::getExpressions(const DB::Context & context, +ConstraintsExpressions ConstraintsDescription::getExpressions(const DB::ContextPtr context, const DB::NamesAndTypesList & source_columns_) const { ConstraintsExpressions res; diff --git a/src/Storages/ConstraintsDescription.h b/src/Storages/ConstraintsDescription.h index d6d2baefbd2..5e6416822bb 100644 --- a/src/Storages/ConstraintsDescription.h +++ b/src/Storages/ConstraintsDescription.h @@ -19,7 +19,7 @@ struct ConstraintsDescription static ConstraintsDescription parse(const String & str); - ConstraintsExpressions getExpressions(const Context & context, const NamesAndTypesList & source_columns_) const; + ConstraintsExpressions getExpressions(ContextPtr context, const NamesAndTypesList & source_columns_) const; ConstraintsDescription(const ConstraintsDescription & other); ConstraintsDescription & operator=(const ConstraintsDescription & other); diff --git a/src/Storages/Distributed/DirectoryMonitor.cpp b/src/Storages/Distributed/DirectoryMonitor.cpp index fb5e5080314..55ee884646e 100644 --- a/src/Storages/Distributed/DirectoryMonitor.cpp +++ b/src/Storages/Distributed/DirectoryMonitor.cpp @@ -9,6 +9,8 @@ #include #include #include +#include +#include #include #include #include @@ -34,6 +36,7 @@ namespace CurrentMetrics { extern const Metric DistributedSend; extern const Metric DistributedFilesToInsert; + extern const Metric BrokenDistributedFilesToInsert; } namespace DB @@ -104,12 +107,14 @@ namespace size_t rows = 0; size_t bytes = 0; - std::string header; + /// dumpStructure() of the header -- obsolete + std::string block_header_string; + Block block_header; }; - DistributedHeader readDistributedHeader(ReadBuffer & in, Poco::Logger * log) + DistributedHeader readDistributedHeader(ReadBufferFromFile & in, Poco::Logger * log) { - DistributedHeader header; + DistributedHeader distributed_header; UInt64 query_size; readVarUInt(query_size, in); @@ -135,17 +140,25 @@ namespace LOG_WARNING(log, "ClickHouse shard version is older than ClickHouse initiator version. 
It may lack support for new features."); } - readStringBinary(header.insert_query, header_buf); - header.insert_settings.read(header_buf); + readStringBinary(distributed_header.insert_query, header_buf); + distributed_header.insert_settings.read(header_buf); if (header_buf.hasPendingData()) - header.client_info.read(header_buf, initiator_revision); + distributed_header.client_info.read(header_buf, initiator_revision); if (header_buf.hasPendingData()) { - readVarUInt(header.rows, header_buf); - readVarUInt(header.bytes, header_buf); - readStringBinary(header.header, header_buf); + readVarUInt(distributed_header.rows, header_buf); + readVarUInt(distributed_header.bytes, header_buf); + readStringBinary(distributed_header.block_header_string, header_buf); + } + + if (header_buf.hasPendingData()) + { + NativeBlockInputStream header_block_in(header_buf, DBMS_TCP_PROTOCOL_VERSION); + distributed_header.block_header = header_block_in.read(); + if (!distributed_header.block_header) + throw Exception(ErrorCodes::CANNOT_READ_ALL_DATA, "Cannot read header from the {} batch", in.getFileName()); } /// Add handling new data here, for example: @@ -155,20 +168,20 @@ namespace /// /// And note that it is safe, because we have checksum and size for header. - return header; + return distributed_header; } if (query_size == DBMS_DISTRIBUTED_SIGNATURE_HEADER_OLD_FORMAT) { - header.insert_settings.read(in, SettingsWriteFormat::BINARY); - readStringBinary(header.insert_query, in); - return header; + distributed_header.insert_settings.read(in, SettingsWriteFormat::BINARY); + readStringBinary(distributed_header.insert_query, in); + return distributed_header; } - header.insert_query.resize(query_size); - in.readStrict(header.insert_query.data(), query_size); + distributed_header.insert_query.resize(query_size); + in.readStrict(distributed_header.insert_query.data(), query_size); - return header; + return distributed_header; } /// remote_error argument is used to decide whether some errors should be @@ -200,35 +213,71 @@ namespace return nullptr; } - void writeRemoteConvert(const DistributedHeader & header, RemoteBlockOutputStream & remote, ReadBufferFromFile & in, Poco::Logger * log) + void writeAndConvert(RemoteBlockOutputStream & remote, ReadBufferFromFile & in) { - if (remote.getHeader() && header.header != remote.getHeader().dumpStructure()) + CompressedReadBuffer decompressing_in(in); + NativeBlockInputStream block_in(decompressing_in, DBMS_TCP_PROTOCOL_VERSION); + block_in.readPrefix(); + + while (Block block = block_in.read()) { - LOG_WARNING(log, - "Structure does not match (remote: {}, local: {}), implicit conversion will be done", - remote.getHeader().dumpStructure(), header.header); - - CompressedReadBuffer decompressing_in(in); - /// Lack of header, requires to read blocks - NativeBlockInputStream block_in(decompressing_in, DBMS_TCP_PROTOCOL_VERSION); - - block_in.readPrefix(); - while (Block block = block_in.read()) - { - ConvertingBlockInputStream convert( - std::make_shared(block), - remote.getHeader(), - ConvertingBlockInputStream::MatchColumnsMode::Name); - auto adopted_block = convert.read(); - remote.write(adopted_block); - } - block_in.readSuffix(); + ConvertingBlockInputStream convert( + std::make_shared(block), + remote.getHeader(), + ConvertingBlockInputStream::MatchColumnsMode::Name); + auto adopted_block = convert.read(); + remote.write(adopted_block); } - else + + block_in.readSuffix(); + } + + void writeRemoteConvert( + const DistributedHeader & distributed_header, + RemoteBlockOutputStream 
& remote, + bool compression_expected, + ReadBufferFromFile & in, + Poco::Logger * log) + { + if (!remote.getHeader()) { CheckingCompressedReadBuffer checking_in(in); remote.writePrepared(checking_in); + return; } + + /// This is old format, that does not have header for the block in the file header, + /// applying ConvertingBlockInputStream in this case is not a big overhead. + /// + /// Anyway we can get header only from the first block, which contain all rows anyway. + if (!distributed_header.block_header) + { + LOG_TRACE(log, "Processing batch {} with old format (no header)", in.getFileName()); + + writeAndConvert(remote, in); + return; + } + + if (!blocksHaveEqualStructure(distributed_header.block_header, remote.getHeader())) + { + LOG_WARNING(log, + "Structure does not match (remote: {}, local: {}), implicit conversion will be done", + remote.getHeader().dumpStructure(), distributed_header.block_header.dumpStructure()); + + writeAndConvert(remote, in); + return; + } + + /// If connection does not use compression, we have to uncompress the data. + if (!compression_expected) + { + writeAndConvert(remote, in); + return; + } + + /// Otherwise write data as it was already prepared (more efficient path). + CheckingCompressedReadBuffer checking_in(in); + remote.writePrepared(checking_in); } } @@ -245,17 +294,18 @@ StorageDistributedDirectoryMonitor::StorageDistributedDirectoryMonitor( , disk(disk_) , relative_path(relative_path_) , path(disk->getPath() + relative_path + '/') - , should_batch_inserts(storage.global_context.getSettingsRef().distributed_directory_monitor_batch_inserts) + , should_batch_inserts(storage.getContext()->getSettingsRef().distributed_directory_monitor_batch_inserts) , dir_fsync(storage.getDistributedSettingsRef().fsync_directories) - , min_batched_block_size_rows(storage.global_context.getSettingsRef().min_insert_block_size_rows) - , min_batched_block_size_bytes(storage.global_context.getSettingsRef().min_insert_block_size_bytes) + , min_batched_block_size_rows(storage.getContext()->getSettingsRef().min_insert_block_size_rows) + , min_batched_block_size_bytes(storage.getContext()->getSettingsRef().min_insert_block_size_bytes) , current_batch_file_path(path + "current_batch.txt") - , default_sleep_time(storage.global_context.getSettingsRef().distributed_directory_monitor_sleep_time_ms.totalMilliseconds()) + , default_sleep_time(storage.getContext()->getSettingsRef().distributed_directory_monitor_sleep_time_ms.totalMilliseconds()) , sleep_time(default_sleep_time) - , max_sleep_time(storage.global_context.getSettingsRef().distributed_directory_monitor_max_sleep_time_ms.totalMilliseconds()) + , max_sleep_time(storage.getContext()->getSettingsRef().distributed_directory_monitor_max_sleep_time_ms.totalMilliseconds()) , log(&Poco::Logger::get(getLoggerName())) , monitor_blocker(monitor_blocker_) , metric_pending_files(CurrentMetrics::DistributedFilesToInsert, 0) + , metric_broken_files(CurrentMetrics::BrokenDistributedFilesToInsert, 0) { task_handle = bg_pool.createTask(getLoggerName() + "/Bg", [this]{ run(); }); task_handle->activateAndSchedule(); @@ -320,20 +370,20 @@ void StorageDistributedDirectoryMonitor::run() { do_sleep = !processFiles(files); - std::lock_guard metrics_lock(metrics_mutex); - last_exception = std::exception_ptr{}; + std::lock_guard status_lock(status_mutex); + status.last_exception = std::exception_ptr{}; } catch (...) 
{ - std::lock_guard metrics_lock(metrics_mutex); + std::lock_guard status_lock(status_mutex); do_sleep = true; - ++error_count; + ++status.error_count; sleep_time = std::min( - std::chrono::milliseconds{Int64(default_sleep_time.count() * std::exp2(error_count))}, + std::chrono::milliseconds{Int64(default_sleep_time.count() * std::exp2(status.error_count))}, max_sleep_time); tryLogCurrentException(getLoggerName().data()); - last_exception = std::current_exception(); + status.last_exception = std::current_exception(); } } else @@ -344,9 +394,9 @@ void StorageDistributedDirectoryMonitor::run() const auto now = std::chrono::system_clock::now(); if (now - last_decrease_time > decrease_error_count_period) { - std::lock_guard metrics_lock(metrics_mutex); + std::lock_guard status_lock(status_mutex); - error_count /= 2; + status.error_count /= 2; last_decrease_time = now; } @@ -427,7 +477,7 @@ ConnectionPoolPtr StorageDistributedDirectoryMonitor::createPool(const std::stri auto pools = createPoolsForAddresses(name, pool_factory, storage.log); - const auto settings = storage.global_context.getSettings(); + const auto settings = storage.getContext()->getSettings(); return pools.size() == 1 ? pools.front() : std::make_shared(pools, settings.load_balancing, settings.distributed_replica_error_half_life.totalSeconds(), @@ -454,16 +504,16 @@ std::map StorageDistributedDirectoryMonitor::getFiles() } { - std::lock_guard metrics_lock(metrics_mutex); + std::lock_guard status_lock(status_mutex); - if (files_count != files.size()) - LOG_TRACE(log, "Files set to {} (was {})", files.size(), files_count); - if (bytes_count != new_bytes_count) - LOG_TRACE(log, "Bytes set to {} (was {})", new_bytes_count, bytes_count); + if (status.files_count != files.size()) + LOG_TRACE(log, "Files set to {} (was {})", files.size(), status.files_count); + if (status.bytes_count != new_bytes_count) + LOG_TRACE(log, "Bytes set to {} (was {})", new_bytes_count, status.bytes_count); metric_pending_files.changeTo(files.size()); - files_count = files.size(); - bytes_count = new_bytes_count; + status.files_count = files.size(); + status.bytes_count = new_bytes_count; } return files; @@ -490,32 +540,40 @@ bool StorageDistributedDirectoryMonitor::processFiles(const std::mapgetSettingsRef()); try { CurrentMetrics::Increment metric_increment{CurrentMetrics::DistributedSend}; ReadBufferFromFile in(file_path); - const auto & header = readDistributedHeader(in, log); + const auto & distributed_header = readDistributedHeader(in, log); - auto connection = pool->get(timeouts, &header.insert_settings); + LOG_DEBUG(log, "Started processing `{}` ({} rows, {} bytes)", file_path, + formatReadableQuantity(distributed_header.rows), + formatReadableSizeWithBinarySuffix(distributed_header.bytes)); + + auto connection = pool->get(timeouts, &distributed_header.insert_settings); RemoteBlockOutputStream remote{*connection, timeouts, - header.insert_query, header.insert_settings, header.client_info}; + distributed_header.insert_query, + distributed_header.insert_settings, + distributed_header.client_info}; remote.writePrefix(); - writeRemoteConvert(header, remote, in, log); + bool compression_expected = connection->getCompression() == Protocol::Compression::Enable; + writeRemoteConvert(distributed_header, remote, compression_expected, in, log); remote.writeSuffix(); } - catch (const Exception & e) + catch (Exception & e) { + e.addMessage(fmt::format("While sending {}", file_path)); maybeMarkAsBroken(file_path, e); throw; } auto dir_sync_guard = 
getDirectorySyncGuard(dir_fsync, disk, relative_path); markAsSend(file_path); - LOG_TRACE(log, "Finished processing `{}`", file_path); + LOG_TRACE(log, "Finished processing `{}` (took {} ms)", file_path, watch.elapsedMilliseconds()); } struct StorageDistributedDirectoryMonitor::BatchHeader @@ -523,20 +581,21 @@ struct StorageDistributedDirectoryMonitor::BatchHeader Settings settings; String query; ClientInfo client_info; - String sample_block_structure; + Block header; - BatchHeader(Settings settings_, String query_, ClientInfo client_info_, String sample_block_structure_) + BatchHeader(Settings settings_, String query_, ClientInfo client_info_, Block header_) : settings(std::move(settings_)) , query(std::move(query_)) , client_info(std::move(client_info_)) - , sample_block_structure(std::move(sample_block_structure_)) + , header(std::move(header_)) { } bool operator==(const BatchHeader & other) const { - return std::tie(settings, query, client_info.query_kind, sample_block_structure) == - std::tie(other.settings, other.query, other.client_info.query_kind, other.sample_block_structure); + return std::tie(settings, query, client_info.query_kind) == + std::tie(other.settings, other.query, other.client_info.query_kind) && + blocksHaveEqualStructure(header, other.header); } struct Hash @@ -545,7 +604,7 @@ struct StorageDistributedDirectoryMonitor::BatchHeader { SipHash hash_state; hash_state.update(batch_header.query.data(), batch_header.query.size()); - hash_state.update(batch_header.sample_block_structure.data(), batch_header.sample_block_structure.size()); + batch_header.header.updateHash(hash_state); return hash_state.get64(); } }; @@ -587,6 +646,12 @@ struct StorageDistributedDirectoryMonitor::Batch CurrentMetrics::Increment metric_increment{CurrentMetrics::DistributedSend}; + Stopwatch watch; + + LOG_DEBUG(parent.log, "Sending a batch of {} files ({} rows, {} bytes).", file_indices.size(), + formatReadableQuantity(total_rows), + formatReadableSizeWithBinarySuffix(total_bytes)); + if (!recovered) { /// For deduplication in Replicated tables to work, in case of error @@ -613,7 +678,7 @@ struct StorageDistributedDirectoryMonitor::Batch Poco::File{tmp_file}.renameTo(parent.current_batch_file_path); } - auto timeouts = ConnectionTimeouts::getTCPTimeoutsWithFailover(parent.storage.global_context.getSettingsRef()); + auto timeouts = ConnectionTimeouts::getTCPTimeoutsWithFailover(parent.storage.getContext()->getSettingsRef()); auto connection = parent.pool->get(timeouts); bool batch_broken = false; @@ -632,22 +697,24 @@ struct StorageDistributedDirectoryMonitor::Batch } ReadBufferFromFile in(file_path->second); - const auto & header = readDistributedHeader(in, parent.log); + const auto & distributed_header = readDistributedHeader(in, parent.log); if (!remote) { remote = std::make_unique(*connection, timeouts, - header.insert_query, header.insert_settings, header.client_info); + distributed_header.insert_query, + distributed_header.insert_settings, + distributed_header.client_info); remote->writePrefix(); } - - writeRemoteConvert(header, *remote, in, parent.log); + bool compression_expected = connection->getCompression() == Protocol::Compression::Enable; + writeRemoteConvert(distributed_header, *remote, compression_expected, in, parent.log); } if (remote) remote->writeSuffix(); } - catch (const Exception & e) + catch (Exception & e) { if (isFileBrokenErrorCode(e.code(), e.isRemoteException())) { @@ -655,12 +722,19 @@ struct StorageDistributedDirectoryMonitor::Batch batch_broken = true; } else + { 
+ std::vector files(file_index_to_path.size()); + for (const auto & [index, name] : file_index_to_path) + files.push_back(name); + e.addMessage(fmt::format("While sending batch {}", fmt::join(files, "\n"))); + throw; + } } if (!batch_broken) { - LOG_TRACE(parent.log, "Sent a batch of {} files.", file_indices.size()); + LOG_TRACE(parent.log, "Sent a batch of {} files (took {} ms).", file_indices.size(), watch.elapsedMilliseconds()); auto dir_sync_guard = getDirectorySyncGuard(dir_fsync, parent.disk, parent.relative_path); for (UInt64 file_index : file_indices) @@ -756,10 +830,10 @@ bool StorageDistributedDirectoryMonitor::addAndSchedule(size_t file_size, size_t return false; { - std::lock_guard metrics_lock(metrics_mutex); + std::lock_guard status_lock(status_mutex); metric_pending_files.add(); - bytes_count += file_size; - ++files_count; + status.bytes_count += file_size; + ++status.files_count; } return task_handle->scheduleAfter(ms, false); @@ -767,16 +841,9 @@ bool StorageDistributedDirectoryMonitor::addAndSchedule(size_t file_size, size_t StorageDistributedDirectoryMonitor::Status StorageDistributedDirectoryMonitor::getStatus() { - std::lock_guard metrics_lock(metrics_mutex); - - return Status{ - path, - last_exception, - error_count, - files_count, - bytes_count, - monitor_blocker.isCancelled(), - }; + std::lock_guard status_lock(status_mutex); + Status current_status{status, path, monitor_blocker.isCancelled()}; + return current_status; } void StorageDistributedDirectoryMonitor::processFilesWithBatching(const std::map & files) @@ -808,22 +875,27 @@ void StorageDistributedDirectoryMonitor::processFilesWithBatching(const std::map size_t total_rows = 0; size_t total_bytes = 0; - std::string sample_block_structure; - DistributedHeader header; + Block header; + DistributedHeader distributed_header; try { /// Determine metadata of the current file and check if it is not broken. 
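With the header carried as a real `Block`, the batches above are keyed on (settings, query, client info, header structure), and the hash is computed from the block itself rather than from a `dumpStructure()` string. A simplified model of such a grouping key, with `std::hash` standing in for SipHash and column-description strings standing in for a `Block`:

```cpp
#include <cstddef>
#include <functional>
#include <string>
#include <unordered_map>
#include <vector>

// Sketch of the batching key used above: files may share one INSERT batch
// only if the query text and the block structure match.
struct BatchKey
{
    std::string query;
    std::vector<std::string> header_columns; // stand-in for the Block header

    bool operator==(const BatchKey & other) const
    {
        return query == other.query && header_columns == other.header_columns;
    }
};

struct BatchKeyHash
{
    size_t operator()(const BatchKey & key) const
    {
        size_t h = std::hash<std::string>{}(key.query);
        for (const auto & column : key.header_columns)
            h ^= std::hash<std::string>{}(column) + 0x9e3779b97f4a7c15ULL + (h << 6) + (h >> 2);
        return h;
    }
};

int main()
{
    std::unordered_map<BatchKey, std::vector<std::string>, BatchKeyHash> batches;

    batches[{"INSERT INTO t", {"x UInt64", "s String"}}].push_back("1.bin");
    batches[{"INSERT INTO t", {"x UInt64", "s String"}}].push_back("2.bin");
    batches[{"INSERT INTO t", {"x UInt64"}}].push_back("3.bin"); // different structure

    return batches.size() == 2 ? 0 : 1; // two distinct batches
}
```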
ReadBufferFromFile in{file_path}; - header = readDistributedHeader(in, log); + distributed_header = readDistributedHeader(in, log); - if (header.rows) + if (distributed_header.rows) { - total_rows += header.rows; - total_bytes += header.bytes; - sample_block_structure = header.header; + total_rows += distributed_header.rows; + total_bytes += distributed_header.bytes; } - else + + if (distributed_header.block_header) + header = distributed_header.block_header; + + if (!total_rows || !header) { + LOG_DEBUG(log, "Processing batch {} with old format (no header/rows)", in.getFileName()); + CompressedReadBuffer decompressing_in(in); NativeBlockInputStream block_in(decompressing_in, DBMS_TCP_PROTOCOL_VERSION); block_in.readPrefix(); @@ -833,8 +905,8 @@ void StorageDistributedDirectoryMonitor::processFilesWithBatching(const std::map total_rows += block.rows(); total_bytes += block.bytes(); - if (sample_block_structure.empty()) - sample_block_structure = block.cloneEmpty().dumpStructure(); + if (!header) + header = block.cloneEmpty(); } block_in.readSuffix(); } @@ -850,7 +922,12 @@ void StorageDistributedDirectoryMonitor::processFilesWithBatching(const std::map throw; } - BatchHeader batch_header(std::move(header.insert_settings), std::move(header.insert_query), std::move(header.client_info), std::move(sample_block_structure)); + BatchHeader batch_header( + std::move(distributed_header.insert_settings), + std::move(distributed_header.insert_query), + std::move(distributed_header.client_info), + std::move(header) + ); Batch & batch = header_to_batch.try_emplace(batch_header, *this, files).first->second; batch.file_indices.push_back(file_idx); @@ -895,11 +972,17 @@ void StorageDistributedDirectoryMonitor::markAsBroken(const std::string & file_p Poco::File file(file_path); { - std::lock_guard metrics_lock(metrics_mutex); + std::lock_guard status_lock(status_mutex); size_t file_size = file.getSize(); - --files_count; - bytes_count -= file_size; + + --status.files_count; + status.bytes_count -= file_size; + + ++status.broken_files_count; + status.broken_bytes_count += file_size; + + metric_broken_files.add(); } file.renameTo(broken_file_path); @@ -913,10 +996,10 @@ void StorageDistributedDirectoryMonitor::markAsSend(const std::string & file_pat size_t file_size = file.getSize(); { - std::lock_guard metrics_lock(metrics_mutex); + std::lock_guard status_lock(status_mutex); metric_pending_files.sub(); - --files_count; - bytes_count -= file_size; + --status.files_count; + status.bytes_count -= file_size; } file.remove(); @@ -945,7 +1028,7 @@ void StorageDistributedDirectoryMonitor::updatePath(const std::string & new_rela std::lock_guard lock{mutex}; { - std::lock_guard metrics_lock(metrics_mutex); + std::lock_guard status_lock(status_mutex); relative_path = new_relative_path; path = disk->getPath() + relative_path + '/'; } diff --git a/src/Storages/Distributed/DirectoryMonitor.h b/src/Storages/Distributed/DirectoryMonitor.h index 1ccac4522d7..ab9b8592294 100644 --- a/src/Storages/Distributed/DirectoryMonitor.h +++ b/src/Storages/Distributed/DirectoryMonitor.h @@ -50,15 +50,23 @@ public: /// For scheduling via DistributedBlockOutputStream bool addAndSchedule(size_t file_size, size_t ms); + struct InternalStatus + { + std::exception_ptr last_exception; + + size_t error_count = 0; + + size_t files_count = 0; + size_t bytes_count = 0; + + size_t broken_files_count = 0; + size_t broken_bytes_count = 0; + }; /// system.distribution_queue interface - struct Status + struct Status : InternalStatus { std::string 
path; - std::exception_ptr last_exception; - size_t error_count; - size_t files_count; - size_t bytes_count; - bool is_blocked; + bool is_blocked = false; }; Status getStatus(); @@ -92,11 +100,8 @@ private: struct BatchHeader; struct Batch; - std::mutex metrics_mutex; - size_t error_count = 0; - size_t files_count = 0; - size_t bytes_count = 0; - std::exception_ptr last_exception; + std::mutex status_mutex; + InternalStatus status; const std::chrono::milliseconds default_sleep_time; std::chrono::milliseconds sleep_time; @@ -110,6 +115,7 @@ private: BackgroundSchedulePoolTaskHolder task_handle; CurrentMetrics::Increment metric_pending_files; + CurrentMetrics::Increment metric_broken_files; friend class DirectoryMonitorBlockInputStream; }; diff --git a/src/Storages/Distributed/DistributedBlockOutputStream.cpp b/src/Storages/Distributed/DistributedBlockOutputStream.cpp index f8ba4221842..a4aa2779771 100644 --- a/src/Storages/Distributed/DistributedBlockOutputStream.cpp +++ b/src/Storages/Distributed/DistributedBlockOutputStream.cpp @@ -58,6 +58,7 @@ namespace ErrorCodes { extern const int LOGICAL_ERROR; extern const int TIMEOUT_EXCEEDED; + extern const int TOO_LARGE_DISTRIBUTED_DEPTH; } static Block adoptBlock(const Block & header, const Block & block, Poco::Logger * log) @@ -86,14 +87,15 @@ static void writeBlockConvert(const BlockOutputStreamPtr & out, const Block & bl DistributedBlockOutputStream::DistributedBlockOutputStream( - const Context & context_, + ContextPtr context_, StorageDistributed & storage_, const StorageMetadataPtr & metadata_snapshot_, const ASTPtr & query_ast_, const ClusterPtr & cluster_, bool insert_sync_, - UInt64 insert_timeout_) - : context(context_) + UInt64 insert_timeout_, + StorageID main_table_) + : context(Context::createCopy(context_)) , storage(storage_) , metadata_snapshot(metadata_snapshot_) , query_ast(query_ast_) @@ -101,8 +103,13 @@ DistributedBlockOutputStream::DistributedBlockOutputStream( , cluster(cluster_) , insert_sync(insert_sync_) , insert_timeout(insert_timeout_) + , main_table(main_table_) , log(&Poco::Logger::get("DistributedBlockOutputStream")) { + const auto & settings = context->getSettingsRef(); + if (settings.max_distributed_depth && context->getClientInfo().distributed_depth > settings.max_distributed_depth) + throw Exception("Maximum distributed depth exceeded", ErrorCodes::TOO_LARGE_DISTRIBUTED_DEPTH); + context->getClientInfo().distributed_depth += 1; } @@ -143,7 +150,7 @@ void DistributedBlockOutputStream::write(const Block & block) void DistributedBlockOutputStream::writeAsync(const Block & block) { - const Settings & settings = context.getSettingsRef(); + const Settings & settings = context->getSettingsRef(); bool random_shard_insert = settings.insert_distributed_one_random_shard && !storage.has_sharding_key; if (random_shard_insert) @@ -194,7 +201,7 @@ std::string DistributedBlockOutputStream::getCurrentStateDescription() void DistributedBlockOutputStream::initWritingJobs(const Block & first_block, size_t start, size_t end) { - const Settings & settings = context.getSettingsRef(); + const Settings & settings = context->getSettingsRef(); const auto & addresses_with_failovers = cluster->getShardsAddresses(); const auto & shards_info = cluster->getShardsInfo(); size_t num_shards = end - start; @@ -303,7 +310,7 @@ DistributedBlockOutputStream::runWritingJob(DistributedBlockOutputStream::JobRep } const Block & shard_block = (num_shards > 1) ? 
job.current_shard_block : current_block; - const Settings & settings = context.getSettingsRef(); + const Settings & settings = context->getSettingsRef(); /// Do not initiate INSERT for empty block. if (shard_block.rows() == 0) @@ -321,11 +328,11 @@ DistributedBlockOutputStream::runWritingJob(DistributedBlockOutputStream::JobRep throw Exception("There are several writing job for an automatically replicated shard", ErrorCodes::LOGICAL_ERROR); /// TODO: it make sense to rewrite skip_unavailable_shards and max_parallel_replicas here - auto connections = shard_info.pool->getMany(timeouts, &settings, PoolMode::GET_ONE); - if (connections.empty() || connections.front().isNull()) + auto results = shard_info.pool->getManyChecked(timeouts, &settings, PoolMode::GET_ONE, main_table.getQualifiedName()); + if (results.empty() || results.front().entry.isNull()) throw Exception("Expected exactly one connection for shard " + toString(job.shard_index), ErrorCodes::LOGICAL_ERROR); - job.connection_entry = std::move(connections.front()); + job.connection_entry = std::move(results.front().entry); } else { @@ -343,7 +350,8 @@ DistributedBlockOutputStream::runWritingJob(DistributedBlockOutputStream::JobRep if (throttler) job.connection_entry->setThrottler(throttler); - job.stream = std::make_shared(*job.connection_entry, timeouts, query_string, settings, context.getClientInfo()); + job.stream = std::make_shared( + *job.connection_entry, timeouts, query_string, settings, context->getClientInfo()); job.stream->writePrefix(); } @@ -357,7 +365,7 @@ DistributedBlockOutputStream::runWritingJob(DistributedBlockOutputStream::JobRep if (!job.stream) { /// Forward user settings - job.local_context = std::make_unique(context); + job.local_context = Context::createCopy(context); /// Copying of the query AST is required to avoid race, /// in case of INSERT into multiple local shards. @@ -367,7 +375,7 @@ DistributedBlockOutputStream::runWritingJob(DistributedBlockOutputStream::JobRep /// to resolve tables (in InterpreterInsertQuery::getTable()) auto copy_query_ast = query_ast->clone(); - InterpreterInsertQuery interp(copy_query_ast, *job.local_context); + InterpreterInsertQuery interp(copy_query_ast, job.local_context); auto block_io = interp.execute(); job.stream = block_io.out; @@ -385,7 +393,7 @@ DistributedBlockOutputStream::runWritingJob(DistributedBlockOutputStream::JobRep void DistributedBlockOutputStream::writeSync(const Block & block) { - const Settings & settings = context.getSettingsRef(); + const Settings & settings = context->getSettingsRef(); const auto & shards_info = cluster->getShardsInfo(); bool random_shard_insert = settings.insert_distributed_one_random_shard && !storage.has_sharding_key; size_t start = 0; @@ -471,7 +479,9 @@ void DistributedBlockOutputStream::writeSuffix() LOG_DEBUG(log, "It took {} sec. to insert {} blocks, {} rows per second. {}", elapsed, inserted_blocks, inserted_rows / elapsed, getCurrentStateDescription()); }; - if (insert_sync && pool) + /// Pool finished means that some exception had been thrown before, + /// and scheduling new jobs will return "Cannot schedule a task" error. 
+ if (insert_sync && pool && !pool->finished()) { finished_jobs_count = 0; try @@ -562,7 +572,7 @@ void DistributedBlockOutputStream::writeSplitAsync(const Block & block) void DistributedBlockOutputStream::writeAsyncImpl(const Block & block, size_t shard_id) { const auto & shard_info = cluster->getShardsInfo()[shard_id]; - const auto & settings = context.getSettingsRef(); + const auto & settings = context->getSettingsRef(); if (shard_info.hasInternalReplication()) { @@ -610,7 +620,7 @@ void DistributedBlockOutputStream::writeToLocal(const Block & block, size_t repe void DistributedBlockOutputStream::writeToShard(const Block & block, const std::vector & dir_names) { - const auto & settings = context.getSettingsRef(); + const auto & settings = context->getSettingsRef(); const auto & distributed_settings = storage.getDistributedSettingsRef(); bool fsync = distributed_settings.fsync_after_insert; @@ -675,11 +685,17 @@ void DistributedBlockOutputStream::writeToShard(const Block & block, const std:: WriteBufferFromOwnString header_buf; writeVarUInt(DBMS_TCP_PROTOCOL_VERSION, header_buf); writeStringBinary(query_string, header_buf); - context.getSettingsRef().write(header_buf); - context.getClientInfo().write(header_buf, DBMS_TCP_PROTOCOL_VERSION); + context->getSettingsRef().write(header_buf); + context->getClientInfo().write(header_buf, DBMS_TCP_PROTOCOL_VERSION); writeVarUInt(block.rows(), header_buf); writeVarUInt(block.bytes(), header_buf); - writeStringBinary(block.cloneEmpty().dumpStructure(), header_buf); + writeStringBinary(block.cloneEmpty().dumpStructure(), header_buf); /// obsolete + /// Write block header separately in the batch header. + /// It is required for checking does conversion is required or not. + { + NativeBlockOutputStream header_stream{header_buf, DBMS_TCP_PROTOCOL_VERSION, block.cloneEmpty()}; + header_stream.write(block.cloneEmpty()); + } /// Add new fields here, for example: /// writeVarUInt(my_new_data, header_buf); @@ -724,7 +740,7 @@ void DistributedBlockOutputStream::writeToShard(const Block & block, const std:: Poco::File(first_file_tmp_path).remove(); /// Notify - auto sleep_ms = context.getSettingsRef().distributed_directory_monitor_sleep_time_ms; + auto sleep_ms = context->getSettingsRef().distributed_directory_monitor_sleep_time_ms; for (const auto & dir_name : dir_names) { auto & directory_monitor = storage.requireDirectoryMonitor(disk, dir_name); @@ -732,5 +748,4 @@ void DistributedBlockOutputStream::writeToShard(const Block & block, const std:: } } - } diff --git a/src/Storages/Distributed/DistributedBlockOutputStream.h b/src/Storages/Distributed/DistributedBlockOutputStream.h index ca57ad46fbb..f574702f35f 100644 --- a/src/Storages/Distributed/DistributedBlockOutputStream.h +++ b/src/Storages/Distributed/DistributedBlockOutputStream.h @@ -38,13 +38,14 @@ class DistributedBlockOutputStream : public IBlockOutputStream { public: DistributedBlockOutputStream( - const Context & context_, + ContextPtr context_, StorageDistributed & storage_, const StorageMetadataPtr & metadata_snapshot_, const ASTPtr & query_ast_, const ClusterPtr & cluster_, bool insert_sync_, - UInt64 insert_timeout_); + UInt64 insert_timeout_, + StorageID main_table_); Block getHeader() const override; void write(const Block & block) override; @@ -83,8 +84,7 @@ private: /// Returns the number of blocks was written for each cluster node. Uses during exception handling. 
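`writeToShard` above keeps writing the obsolete `dumpStructure()` string and then appends the native-serialized header block as a new trailing field: old servers stop after the fields they know, while new servers detect the extra field via `hasPendingData()`. A toy model of that append-only, length-prefixed header layout (text framing here is purely illustrative; the real format uses the native block protocol):

```cpp
#include <optional>
#include <sstream>
#include <string>

// Each field is written as "<length>:<bytes>". A reader that predates a
// trailing field simply never attempts to read it; a newer reader keeps
// decoding optional fields while data remains.
static void writeField(std::ostream & out, const std::string & value)
{
    out << value.size() << ':' << value;
}

static std::optional<std::string> readField(std::istream & in)
{
    size_t len = 0;
    char sep = 0;
    if (!(in >> len) || !in.get(sep) || sep != ':')
        return std::nullopt;                 // no more trailing fields
    std::string value(len, '\0');
    in.read(value.data(), static_cast<std::streamsize>(len));
    return value;
}

int main()
{
    std::ostringstream header;
    writeField(header, "INSERT INTO t VALUES");          // always present
    writeField(header, "x UInt64, s String (obsolete)"); // dumpStructure(), kept for old readers
    writeField(header, "<native block header bytes>");   // new optional trailing field

    std::istringstream in(header.str());
    auto query = readField(in);
    auto obsolete_structure = readField(in);
    auto block_header = readField(in); // an old reader would simply not read this

    return (query && obsolete_structure && block_header) ? 0 : 1;
}
```

Carrying the actual header block, rather than its dumped description, is what lets the receiving side compare structures with `blocksHaveEqualStructure` and skip the conversion path when nothing changed.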
std::string getCurrentStateDescription(); -private: - const Context & context; + ContextPtr context; StorageDistributed & storage; StorageMetadataPtr metadata_snapshot; ASTPtr query_ast; @@ -97,6 +97,7 @@ private: /// Sync-related stuff UInt64 insert_timeout; // in seconds + StorageID main_table; Stopwatch watch; Stopwatch watch_current_block; std::optional pool; @@ -115,7 +116,7 @@ private: Block current_shard_block; ConnectionPool::Entry connection_entry; - std::unique_ptr local_context; + ContextPtr local_context; BlockOutputStreamPtr stream; UInt64 blocks_written = 0; diff --git a/src/Storages/HDFS/HDFSCommon.cpp b/src/Storages/HDFS/HDFSCommon.cpp index e5ec8a06139..40f52921008 100644 --- a/src/Storages/HDFS/HDFSCommon.cpp +++ b/src/Storages/HDFS/HDFSCommon.cpp @@ -9,14 +9,15 @@ #include #include + namespace DB { namespace ErrorCodes { -extern const int BAD_ARGUMENTS; -extern const int NETWORK_ERROR; -extern const int EXCESSIVE_ELEMENT_IN_CONFIG; -extern const int NO_ELEMENTS_IN_CONFIG; + extern const int BAD_ARGUMENTS; + extern const int NETWORK_ERROR; + extern const int EXCESSIVE_ELEMENT_IN_CONFIG; + extern const int NO_ELEMENTS_IN_CONFIG; } const String HDFSBuilderWrapper::CONFIG_PREFIX = "hdfs"; diff --git a/src/Storages/HDFS/HDFSCommon.h b/src/Storages/HDFS/HDFSCommon.h index fa1ca88464e..154c253a76b 100644 --- a/src/Storages/HDFS/HDFSCommon.h +++ b/src/Storages/HDFS/HDFSCommon.h @@ -17,6 +17,7 @@ namespace DB { + namespace detail { struct HDFSFsDeleter @@ -28,16 +29,14 @@ namespace detail }; } + struct HDFSFileInfo { hdfsFileInfo * file_info; int length; - HDFSFileInfo() - : file_info(nullptr) - , length(0) - { - } + HDFSFileInfo() : file_info(nullptr) , length(0) {} + HDFSFileInfo(const HDFSFileInfo & other) = delete; HDFSFileInfo(HDFSFileInfo && other) = default; HDFSFileInfo & operator=(const HDFSFileInfo & other) = delete; @@ -49,17 +48,30 @@ struct HDFSFileInfo } }; + class HDFSBuilderWrapper { - hdfsBuilder * hdfs_builder; - String hadoop_kerberos_keytab; - String hadoop_kerberos_principal; - String hadoop_kerberos_kinit_command = "kinit"; - String hadoop_security_kerberos_ticket_cache_path; - static std::mutex kinit_mtx; +friend HDFSBuilderWrapper createHDFSBuilder(const String & uri_str, const Poco::Util::AbstractConfiguration &); - std::vector> config_stor; +static const String CONFIG_PREFIX; + +public: + HDFSBuilderWrapper() : hdfs_builder(hdfsNewBuilder()) {} + + ~HDFSBuilderWrapper() { hdfsFreeBuilder(hdfs_builder); } + + HDFSBuilderWrapper(const HDFSBuilderWrapper &) = delete; + HDFSBuilderWrapper(HDFSBuilderWrapper &&) = default; + + hdfsBuilder * get() { return hdfs_builder; } + +private: + void loadFromConfig(const Poco::Util::AbstractConfiguration & config, const String & config_path, bool isUser = false); + + String getKinitCmd(); + + void runKinit(); // hdfs builder relies on an external config data storage std::pair& keep(const String & k, const String & v) @@ -67,48 +79,24 @@ class HDFSBuilderWrapper return config_stor.emplace_back(std::make_pair(k, v)); } + hdfsBuilder * hdfs_builder; + String hadoop_kerberos_keytab; + String hadoop_kerberos_principal; + String hadoop_kerberos_kinit_command = "kinit"; + String hadoop_security_kerberos_ticket_cache_path; + + static std::mutex kinit_mtx; + std::vector> config_stor; bool need_kinit{false}; - - static const String CONFIG_PREFIX; - -private: - - void loadFromConfig(const Poco::Util::AbstractConfiguration & config, const String & config_path, bool isUser = false); - - String getKinitCmd(); - - void runKinit(); - 
-public: - - hdfsBuilder * - get() - { - return hdfs_builder; - } - - HDFSBuilderWrapper() - : hdfs_builder(hdfsNewBuilder()) - { - } - - ~HDFSBuilderWrapper() - { - hdfsFreeBuilder(hdfs_builder); - - } - - HDFSBuilderWrapper(const HDFSBuilderWrapper &) = delete; - HDFSBuilderWrapper(HDFSBuilderWrapper &&) = default; - - friend HDFSBuilderWrapper createHDFSBuilder(const String & uri_str, const Poco::Util::AbstractConfiguration &); }; using HDFSFSPtr = std::unique_ptr, detail::HDFSFsDeleter>; + // Set the read/connect timeout; the default value in libhdfs3 is about 1 hour, which is too large /// TODO Allow to tune from query Settings. HDFSBuilderWrapper createHDFSBuilder(const String & uri_str, const Poco::Util::AbstractConfiguration &); HDFSFSPtr createHDFSFS(hdfsBuilder * builder); + } #endif diff --git a/src/Storages/HDFS/ReadBufferFromHDFS.cpp b/src/Storages/HDFS/ReadBufferFromHDFS.cpp index f3b0e3022f1..29ea46c7590 100644 --- a/src/Storages/HDFS/ReadBufferFromHDFS.cpp +++ b/src/Storages/HDFS/ReadBufferFromHDFS.cpp @@ -8,6 +8,7 @@ namespace DB { + namespace ErrorCodes { extern const int NETWORK_ERROR; @@ -21,34 +22,39 @@ struct ReadBufferFromHDFS::ReadBufferFromHDFSImpl { /// HDFS create/open functions are not thread safe static std::mutex hdfs_init_mutex; - std::string hdfs_uri; + String hdfs_uri; + String hdfs_file_path; + hdfsFile fin; HDFSBuilderWrapper builder; HDFSFSPtr fs; - explicit ReadBufferFromHDFSImpl(const std::string & hdfs_name_, + explicit ReadBufferFromHDFSImpl( + const std::string & hdfs_uri_, + const std::string & hdfs_file_path_, const Poco::Util::AbstractConfiguration & config_) - : hdfs_uri(hdfs_name_), - builder(createHDFSBuilder(hdfs_uri, config_)) + : hdfs_uri(hdfs_uri_) + , hdfs_file_path(hdfs_file_path_) + , builder(createHDFSBuilder(hdfs_uri_, config_)) { std::lock_guard lock(hdfs_init_mutex); fs = createHDFSFS(builder.get()); - const size_t begin_of_path = hdfs_uri.find('/', hdfs_uri.find("//") + 2); - const std::string path = hdfs_uri.substr(begin_of_path); - fin = hdfsOpenFile(fs.get(), path.c_str(), O_RDONLY, 0, 0, 0); + fin = hdfsOpenFile(fs.get(), hdfs_file_path.c_str(), O_RDONLY, 0, 0, 0); if (fin == nullptr) - throw Exception("Unable to open HDFS file: " + path + " error: " + std::string(hdfsGetLastError()), - ErrorCodes::CANNOT_OPEN_FILE); + throw Exception(ErrorCodes::CANNOT_OPEN_FILE, + "Unable to open HDFS file: {}. Error: {}", + hdfs_uri + hdfs_file_path, std::string(hdfsGetLastError())); } int read(char * start, size_t size) const { int bytes_read = hdfsRead(fs.get(), fin, start, size); if (bytes_read < 0) - throw Exception("Fail to read HDFS file: " + hdfs_uri + " " + std::string(hdfsGetLastError()), - ErrorCodes::NETWORK_ERROR); + throw Exception(ErrorCodes::NETWORK_ERROR, + "Failed to read from HDFS: {}, file path: {}. 
Error: {}", + hdfs_uri, hdfs_file_path, std::string(hdfsGetLastError())); return bytes_read; } @@ -62,11 +68,13 @@ struct ReadBufferFromHDFS::ReadBufferFromHDFSImpl std::mutex ReadBufferFromHDFS::ReadBufferFromHDFSImpl::hdfs_init_mutex; -ReadBufferFromHDFS::ReadBufferFromHDFS(const std::string & hdfs_name_, - const Poco::Util::AbstractConfiguration & config_, - size_t buf_size_) +ReadBufferFromHDFS::ReadBufferFromHDFS( + const String & hdfs_uri_, + const String & hdfs_file_path_, + const Poco::Util::AbstractConfiguration & config_, + size_t buf_size_) : BufferWithOwnMemory(buf_size_) - , impl(std::make_unique(hdfs_name_, config_)) + , impl(std::make_unique(hdfs_uri_, hdfs_file_path_, config_)) { } diff --git a/src/Storages/HDFS/ReadBufferFromHDFS.h b/src/Storages/HDFS/ReadBufferFromHDFS.h index 8d26c001b2e..bd14e3d3792 100644 --- a/src/Storages/HDFS/ReadBufferFromHDFS.h +++ b/src/Storages/HDFS/ReadBufferFromHDFS.h @@ -7,11 +7,8 @@ #include #include #include - #include - #include - #include @@ -22,13 +19,19 @@ namespace DB */ class ReadBufferFromHDFS : public BufferWithOwnMemory { - struct ReadBufferFromHDFSImpl; - std::unique_ptr impl; +struct ReadBufferFromHDFSImpl; + public: - ReadBufferFromHDFS(const std::string & hdfs_name_, const Poco::Util::AbstractConfiguration &, size_t buf_size_ = DBMS_DEFAULT_BUFFER_SIZE); + ReadBufferFromHDFS(const String & hdfs_uri_, const String & hdfs_file_path_, + const Poco::Util::AbstractConfiguration &, size_t buf_size_ = DBMS_DEFAULT_BUFFER_SIZE); + ~ReadBufferFromHDFS() override; bool nextImpl() override; + +private: + std::unique_ptr impl; }; } + #endif diff --git a/src/Storages/HDFS/StorageHDFS.cpp b/src/Storages/HDFS/StorageHDFS.cpp index f7afd4a497d..c08e487f179 100644 --- a/src/Storages/HDFS/StorageHDFS.cpp +++ b/src/Storages/HDFS/StorageHDFS.cpp @@ -40,15 +40,15 @@ StorageHDFS::StorageHDFS(const String & uri_, const String & format_name_, const ColumnsDescription & columns_, const ConstraintsDescription & constraints_, - Context & context_, + ContextPtr context_, const String & compression_method_ = "") : IStorage(table_id_) + , WithContext(context_) , uri(uri_) , format_name(format_name_) - , context(context_) , compression_method(compression_method_) { - context.getRemoteHostFilter().checkURL(Poco::URI(uri)); + context_->getRemoteHostFilter().checkURL(Poco::URI(uri)); StorageInMemoryMetadata storage_metadata; storage_metadata.setColumns(columns_); @@ -59,7 +59,7 @@ StorageHDFS::StorageHDFS(const String & uri_, namespace { -class HDFSSource : public SourceWithProgress +class HDFSSource : public SourceWithProgress, WithContext { public: struct SourcesInfo @@ -90,16 +90,16 @@ public: String format_, String compression_method_, Block sample_block_, - const Context & context_, + ContextPtr context_, UInt64 max_block_size_) : SourceWithProgress(getHeader(sample_block_, source_info_->need_path_column, source_info_->need_file_column)) + , WithContext(context_) , source_info(std::move(source_info_)) , uri(std::move(uri_)) , format(std::move(format_)) , compression_method(compression_method_) , max_block_size(max_block_size_) , sample_block(std::move(sample_block_)) - , context(context_) { } @@ -122,11 +122,10 @@ public: current_path = uri + path; auto compression = chooseCompressionMethod(path, compression_method); - auto read_buf = wrapReadBufferWithCompressionMethod(std::make_unique(current_path, context.getGlobalContext().getConfigRef()), compression); - auto input_format = FormatFactory::instance().getInput(format, *read_buf, sample_block, context, 
max_block_size); - auto input_stream = std::make_shared(input_format); + read_buf = wrapReadBufferWithCompressionMethod(std::make_unique(uri, path, getContext()->getGlobalContext()->getConfigRef()), compression); + auto input_format = FormatFactory::instance().getInput(format, *read_buf, sample_block, getContext(), max_block_size); - reader = std::make_shared>(input_stream, std::move(read_buf)); + reader = std::make_shared(input_format); reader->readPrefix(); } @@ -156,10 +155,12 @@ public: reader->readSuffix(); reader.reset(); + read_buf.reset(); } } private: + std::unique_ptr read_buf; BlockInputStreamPtr reader; SourcesInfoPtr source_info; String uri; @@ -169,7 +170,6 @@ private: UInt64 max_block_size; Block sample_block; - const Context & context; }; class HDFSBlockOutputStream : public IBlockOutputStream @@ -178,12 +178,12 @@ public: HDFSBlockOutputStream(const String & uri, const String & format, const Block & sample_block_, - const Context & context, + ContextPtr context, const CompressionMethod compression_method) : sample_block(sample_block_) { - write_buf = wrapWriteBufferWithCompressionMethod(std::make_unique(uri, context.getGlobalContext().getConfigRef()), compression_method, 3); - writer = FormatFactory::instance().getOutputStream(format, *write_buf, sample_block, context); + write_buf = wrapWriteBufferWithCompressionMethod(std::make_unique(uri, context->getGlobalContext()->getConfigRef()), compression_method, 3); + writer = FormatFactory::instance().getOutputStreamParallelIfPossible(format, *write_buf, sample_block, context); } Block getHeader() const override @@ -267,21 +267,32 @@ Pipe StorageHDFS::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & /*query_info*/, - const Context & context_, + ContextPtr context_, QueryProcessingStage::Enum /*processed_stage*/, size_t max_block_size, unsigned num_streams) { - const size_t begin_of_path = uri.find('/', uri.find("//") + 2); + size_t begin_of_path; + /// This uri is checked for correctness in constructor of StorageHDFS and never modified afterwards + auto two_slash = uri.find("//"); + + if (two_slash == std::string::npos) + begin_of_path = uri.find('/'); + else + begin_of_path = uri.find('/', two_slash + 2); + const String path_from_uri = uri.substr(begin_of_path); const String uri_without_path = uri.substr(0, begin_of_path); - HDFSBuilderWrapper builder = createHDFSBuilder(uri_without_path + "/", context_.getGlobalContext().getConfigRef()); + HDFSBuilderWrapper builder = createHDFSBuilder(uri_without_path + "/", context_->getGlobalContext()->getConfigRef()); HDFSFSPtr fs = createHDFSFS(builder.get()); auto sources_info = std::make_shared(); sources_info->uris = LSWithRegexpMatching("/", fs, path_from_uri); + if (sources_info->uris.empty()) + LOG_WARNING(log, "No file in HDFS matches the path: {}", uri); + for (const auto & column : column_names) { if (column == "_path") @@ -302,12 +313,12 @@ Pipe StorageHDFS::read( return Pipe::unitePipes(std::move(pipes)); } -BlockOutputStreamPtr StorageHDFS::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, const Context & /*context*/) +BlockOutputStreamPtr StorageHDFS::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, ContextPtr /*context*/) { return std::make_shared(uri, format_name, metadata_snapshot->getSampleBlock(), - context, + getContext(), chooseCompressionMethod(uri, compression_method)); } @@ -321,22 +332,22 @@ void registerStorageHDFS(StorageFactory & factory) throw Exception( 
"Storage HDFS requires 2 or 3 arguments: url, name of used format and optional compression method.", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); - engine_args[0] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[0], args.local_context); + engine_args[0] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[0], args.getLocalContext()); String url = engine_args[0]->as().value.safeGet(); - engine_args[1] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[1], args.local_context); + engine_args[1] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[1], args.getLocalContext()); String format_name = engine_args[1]->as().value.safeGet(); String compression_method; if (engine_args.size() == 3) { - engine_args[2] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[2], args.local_context); + engine_args[2] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[2], args.getLocalContext()); compression_method = engine_args[2]->as().value.safeGet(); } else compression_method = "auto"; - return StorageHDFS::create(url, args.table_id, format_name, args.columns, args.constraints, args.context, compression_method); + return StorageHDFS::create(url, args.table_id, format_name, args.columns, args.constraints, args.getContext(), compression_method); }, { .source_access_type = AccessType::HDFS, diff --git a/src/Storages/HDFS/StorageHDFS.h b/src/Storages/HDFS/StorageHDFS.h index 4172bce1cd1..e3f235296ac 100644 --- a/src/Storages/HDFS/StorageHDFS.h +++ b/src/Storages/HDFS/StorageHDFS.h @@ -13,7 +13,7 @@ namespace DB * This class represents table engine for external hdfs files. * Read method is supported for now. */ -class StorageHDFS final : public ext::shared_ptr_helper, public IStorage +class StorageHDFS final : public ext::shared_ptr_helper, public IStorage, WithContext { friend struct ext::shared_ptr_helper; public: @@ -23,12 +23,12 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; - BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, const Context & context) override; + BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, ContextPtr context) override; NamesAndTypesList getVirtuals() const override; @@ -38,13 +38,12 @@ protected: const String & format_name_, const ColumnsDescription & columns_, const ConstraintsDescription & constraints_, - Context & context_, + ContextPtr context_, const String & compression_method_); private: - String uri; + const String uri; String format_name; - Context & context; String compression_method; Poco::Logger * log = &Poco::Logger::get("StorageHDFS"); diff --git a/src/Storages/IStorage.cpp b/src/Storages/IStorage.cpp index 2cbc36e02fe..f7fb359432e 100644 --- a/src/Storages/IStorage.cpp +++ b/src/Storages/IStorage.cpp @@ -1,8 +1,5 @@ #include -#include -#include - #include #include #include @@ -87,7 +84,7 @@ Pipe IStorage::read( const Names & /*column_names*/, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & /*query_info*/, - const Context & /*context*/, + ContextPtr /*context*/, QueryProcessingStage::Enum /*processed_stage*/, size_t /*max_block_size*/, unsigned /*num_streams*/) @@ -100,7 +97,7 @@ void IStorage::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, 
SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) @@ -119,12 +116,12 @@ void IStorage::read( } Pipe IStorage::alterPartition( - const StorageMetadataPtr & /* metadata_snapshot */, const PartitionCommands & /* commands */, const Context & /* context */) + const StorageMetadataPtr & /* metadata_snapshot */, const PartitionCommands & /* commands */, ContextPtr /* context */) { throw Exception("Partition operations are not supported by storage " + getName(), ErrorCodes::NOT_IMPLEMENTED); } -void IStorage::alter(const AlterCommands & params, const Context & context, TableLockHolder &) +void IStorage::alter(const AlterCommands & params, ContextPtr context, TableLockHolder &) { auto table_id = getStorageID(); StorageInMemoryMetadata new_metadata = getInMemoryMetadata(); @@ -134,7 +131,7 @@ void IStorage::alter(const AlterCommands & params, const Context & context, Tabl } -void IStorage::checkAlterIsPossible(const AlterCommands & commands, const Context & /* context */) const +void IStorage::checkAlterIsPossible(const AlterCommands & commands, ContextPtr /* context */) const { for (const auto & command : commands) { @@ -182,7 +179,7 @@ Names IStorage::getAllRegisteredNames() const return result; } -NameDependencies IStorage::getDependentViewsByColumn(const Context & context) const +NameDependencies IStorage::getDependentViewsByColumn(ContextPtr context) const { NameDependencies name_deps; auto dependencies = DatabaseCatalog::instance().getDependencies(storage_id); diff --git a/src/Storages/IStorage.h b/src/Storages/IStorage.h index 4dfd2ca50f3..a0fb7c70843 100644 --- a/src/Storages/IStorage.h +++ b/src/Storages/IStorage.h @@ -5,13 +5,15 @@ #include #include #include -#include +#include #include -#include +#include #include -#include #include +#include #include +#include +#include #include #include #include @@ -29,11 +31,10 @@ namespace ErrorCodes { extern const int NOT_IMPLEMENTED; } -class Context; - using StorageActionBlockType = size_t; class ASTCreateQuery; +class ASTInsertQuery; struct Settings; @@ -50,6 +51,9 @@ class Pipe; class QueryPlan; using QueryPlanPtr = std::unique_ptr; +class QueryPipeline; +using QueryPipelinePtr = std::unique_ptr; + class IStoragePolicy; using StoragePolicyPtr = std::shared_ptr; @@ -104,6 +108,9 @@ public: /// Returns true if the storage is a view of a table or another view. virtual bool isView() const { return false; } + /// Returns true if the storage is a dictionary + virtual bool isDictionary() const { return false; } + /// Returns true if the storage supports queries with the SAMPLE section. virtual bool supportsSampling() const { return getInMemoryMetadataPtr()->hasSamplingKey(); } @@ -176,7 +183,7 @@ public: Names getAllRegisteredNames() const override; - NameDependencies getDependentViewsByColumn(const Context & context) const; + NameDependencies getDependentViewsByColumn(ContextPtr context) const; protected: /// Returns whether the column is virtual - by default all columns are real. @@ -226,7 +233,7 @@ public: * QueryProcessingStage::Enum required for Distributed over Distributed, * since it can never return Complete for intermediate queries. 
*/ - virtual QueryProcessingStage::Enum getQueryProcessingStage(const Context &, QueryProcessingStage::Enum /*to_stage*/, SelectQueryInfo &) const + virtual QueryProcessingStage::Enum getQueryProcessingStage(ContextPtr, QueryProcessingStage::Enum /*to_stage*/, SelectQueryInfo &) const { return QueryProcessingStage::FetchColumns; } @@ -253,7 +260,7 @@ public: virtual BlockInputStreams watch( const Names & /*column_names*/, const SelectQueryInfo & /*query_info*/, - const Context & /*context*/, + ContextPtr /*context*/, QueryProcessingStage::Enum & /*processed_stage*/, size_t /*max_block_size*/, unsigned /*num_streams*/) @@ -285,7 +292,7 @@ public: const Names & /*column_names*/, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & /*query_info*/, - const Context & /*context*/, + ContextPtr /*context*/, QueryProcessingStage::Enum /*processed_stage*/, size_t /*max_block_size*/, unsigned /*num_streams*/); @@ -297,7 +304,7 @@ public: const Names & /*column_names*/, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & /*query_info*/, - const Context & /*context*/, + ContextPtr /*context*/, QueryProcessingStage::Enum /*processed_stage*/, size_t /*max_block_size*/, unsigned /*num_streams*/); @@ -314,11 +321,24 @@ public: virtual BlockOutputStreamPtr write( const ASTPtr & /*query*/, const StorageMetadataPtr & /*metadata_snapshot*/, - const Context & /*context*/) + ContextPtr /*context*/) { throw Exception("Method write is not supported by storage " + getName(), ErrorCodes::NOT_IMPLEMENTED); } + /** Writes the data to a table in a distributed manner. + * It is supposed that the implementation looks into the SELECT part of the query and executes a distributed + * INSERT SELECT if it is possible, with the current storage as the receiver and the query's SELECT part as the producer. + * + * Returns a query pipeline if distributed writing is possible, and nullptr otherwise. + */ + virtual QueryPipelinePtr distributedWrite( + const ASTInsertQuery & /*query*/, + ContextPtr /*context*/) + { + return nullptr; + } + /** Delete the table data. Called before deleting the directory with the data. * The method can be called only after detaching table from Context (when no queries are performed with table). * The table is not usable during and after call to this method. @@ -333,7 +353,7 @@ public: virtual void truncate( const ASTPtr & /*query*/, const StorageMetadataPtr & /* metadata_snapshot */, - const Context & /* context */, + ContextPtr /* context */, TableExclusiveLockHolder &) { throw Exception("Truncate is not supported by storage " + getName(), ErrorCodes::NOT_IMPLEMENTED); } @@ -361,12 +381,12 @@ public: /** ALTER tables in the form of column changes that do not affect the change * to Storage or its parameters. Executes under alter lock (lockForAlter). */ - virtual void alter(const AlterCommands & params, const Context & context, TableLockHolder & alter_lock_holder); + virtual void alter(const AlterCommands & params, ContextPtr context, TableLockHolder & alter_lock_holder); /** Checks that alter commands can be applied to storage. For example, columns can be modified, * or the primary key can be changed, etc. */ - virtual void checkAlterIsPossible(const AlterCommands & commands, const Context & context) const; + virtual void checkAlterIsPossible(const AlterCommands & commands, ContextPtr context) const; /** * Checks that mutation commands can be applied to storage. 
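
// Illustrative sketch (not part of this patch) of an override of the
// distributedWrite() hook declared above. The storage class and both helper
// functions are hypothetical; only the contract is taken from the
// declaration: return a pipeline when the INSERT SELECT was handled, or
// nullptr to fall back to the ordinary write() path.
QueryPipelinePtr ExampleShardedStorage::distributedWrite(const ASTInsertQuery & query, ContextPtr local_context)
{
    /// Claim the query only when its SELECT part reads from a table that is
    /// sharded compatibly with this storage (hypothetical check).
    if (!selectReadsFromCompatibleShards(query, local_context))
        return nullptr;

    /// Build a pipeline that executes the INSERT SELECT shard-locally instead
    /// of funnelling all data through the initiator (hypothetical helper).
    return makeShardLocalInsertSelectPipeline(query, local_context);
}
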
@@ -379,7 +399,7 @@ public: virtual Pipe alterPartition( const StorageMetadataPtr & /* metadata_snapshot */, const PartitionCommands & /* commands */, - const Context & /* context */); + ContextPtr /* context */); /// Checks that partition commands can be applied to storage. virtual void checkAlterPartitionIsPossible(const PartitionCommands & commands, const StorageMetadataPtr & metadata_snapshot, const Settings & settings) const; @@ -394,13 +414,13 @@ public: bool /*final*/, bool /*deduplicate*/, const Names & /* deduplicate_by_columns */, - const Context & /*context*/) + ContextPtr /*context*/) { throw Exception("Method optimize is not supported by storage " + getName(), ErrorCodes::NOT_IMPLEMENTED); } /// Mutate the table contents - virtual void mutate(const MutationCommands &, const Context &) + virtual void mutate(const MutationCommands &, ContextPtr) { throw Exception("Mutations are not supported by storage " + getName(), ErrorCodes::NOT_IMPLEMENTED); } @@ -444,10 +464,10 @@ public: virtual bool supportsIndexForIn() const { return false; } /// Provides a hint that the storage engine may evaluate the IN-condition by using an index. - virtual bool mayBenefitFromIndexForIn(const ASTPtr & /* left_in_operand */, const Context & /* query_context */, const StorageMetadataPtr & /* metadata_snapshot */) const { return false; } + virtual bool mayBenefitFromIndexForIn(const ASTPtr & /* left_in_operand */, ContextPtr /* query_context */, const StorageMetadataPtr & /* metadata_snapshot */) const { return false; } /// Checks validity of the data - virtual CheckResults checkData(const ASTPtr & /* query */, const Context & /* context */) { throw Exception("Check query is not supported for " + getName() + " storage", ErrorCodes::NOT_IMPLEMENTED); } + virtual CheckResults checkData(const ASTPtr & /* query */, ContextPtr /* context */) { throw Exception("Check query is not supported for " + getName() + " storage", ErrorCodes::NOT_IMPLEMENTED); } /// Checks that table could be dropped right now /// Otherwise - throws an exception with detailed information. @@ -480,7 +500,7 @@ public: virtual std::optional totalRows(const Settings &) const { return {}; } /// Same as above but also take partition predicate into account. 
- virtual std::optional totalRowsByPartitionPredicate(const SelectQueryInfo &, const Context &) const { return {}; } + virtual std::optional totalRowsByPartitionPredicate(const SelectQueryInfo &, ContextPtr) const { return {}; } /// If it is possible to quickly determine exact number of bytes for the table on storage: /// - memory (approximated, resident) diff --git a/src/Storages/IndicesDescription.cpp b/src/Storages/IndicesDescription.cpp index dbc95615383..3147ad70696 100644 --- a/src/Storages/IndicesDescription.cpp +++ b/src/Storages/IndicesDescription.cpp @@ -67,7 +67,7 @@ IndexDescription & IndexDescription::operator=(const IndexDescription & other) return *this; } -IndexDescription IndexDescription::getIndexFromAST(const ASTPtr & definition_ast, const ColumnsDescription & columns, const Context & context) +IndexDescription IndexDescription::getIndexFromAST(const ASTPtr & definition_ast, const ColumnsDescription & columns, ContextPtr context) { const auto * index_definition = definition_ast->as(); if (!index_definition) @@ -118,7 +118,7 @@ IndexDescription IndexDescription::getIndexFromAST(const ASTPtr & definition_ast return result; } -void IndexDescription::recalculateWithNewColumns(const ColumnsDescription & new_columns, const Context & context) +void IndexDescription::recalculateWithNewColumns(const ColumnsDescription & new_columns, ContextPtr context) { *this = getIndexFromAST(definition_ast, new_columns, context); } @@ -144,7 +144,7 @@ String IndicesDescription::toString() const } -IndicesDescription IndicesDescription::parse(const String & str, const ColumnsDescription & columns, const Context & context) +IndicesDescription IndicesDescription::parse(const String & str, const ColumnsDescription & columns, ContextPtr context) { IndicesDescription result; if (str.empty()) @@ -160,7 +160,7 @@ IndicesDescription IndicesDescription::parse(const String & str, const ColumnsDe } -ExpressionActionsPtr IndicesDescription::getSingleExpressionForIndices(const ColumnsDescription & columns, const Context & context) const +ExpressionActionsPtr IndicesDescription::getSingleExpressionForIndices(const ColumnsDescription & columns, ContextPtr context) const { ASTPtr combined_expr_list = std::make_shared(); for (const auto & index : *this) diff --git a/src/Storages/IndicesDescription.h b/src/Storages/IndicesDescription.h index f383029837e..d9c7efdb75c 100644 --- a/src/Storages/IndicesDescription.h +++ b/src/Storages/IndicesDescription.h @@ -46,7 +46,7 @@ struct IndexDescription size_t granularity; /// Parse index from definition AST - static IndexDescription getIndexFromAST(const ASTPtr & definition_ast, const ColumnsDescription & columns, const Context & context); + static IndexDescription getIndexFromAST(const ASTPtr & definition_ast, const ColumnsDescription & columns, ContextPtr context); IndexDescription() = default; @@ -57,7 +57,7 @@ struct IndexDescription /// Recalculate index with new columns because index expression may change /// if something changes in columns. 
- void recalculateWithNewColumns(const ColumnsDescription & new_columns, const Context & context); + void recalculateWithNewColumns(const ColumnsDescription & new_columns, ContextPtr context); }; /// All secondary indices in storage @@ -68,10 +68,10 @@ struct IndicesDescription : public std::vector /// Convert description to string String toString() const; /// Parse description from string - static IndicesDescription parse(const String & str, const ColumnsDescription & columns, const Context & context); + static IndicesDescription parse(const String & str, const ColumnsDescription & columns, ContextPtr context); /// Return common expression for all stored indices - ExpressionActionsPtr getSingleExpressionForIndices(const ColumnsDescription & columns, const Context & context) const; + ExpressionActionsPtr getSingleExpressionForIndices(const ColumnsDescription & columns, ContextPtr context) const; }; } diff --git a/src/Storages/Kafka/KafkaBlockInputStream.cpp b/src/Storages/Kafka/KafkaBlockInputStream.cpp index bf985902b4d..5d9b19b1972 100644 --- a/src/Storages/Kafka/KafkaBlockInputStream.cpp +++ b/src/Storages/Kafka/KafkaBlockInputStream.cpp @@ -35,8 +35,8 @@ KafkaBlockInputStream::KafkaBlockInputStream( , max_block_size(max_block_size_) , commit_in_suffix(commit_in_suffix_) , non_virtual_header(metadata_snapshot->getSampleBlockNonMaterialized()) - , virtual_header(metadata_snapshot->getSampleBlockForColumns( - {"_topic", "_key", "_offset", "_partition", "_timestamp", "_timestamp_ms", "_headers.name", "_headers.value"}, storage.getVirtuals(), storage.getStorageID())) + , virtual_header(metadata_snapshot->getSampleBlockForColumns(storage.getVirtualColumnNames(), storage.getVirtuals(), storage.getStorageID())) + , handle_error_mode(storage.getHandleKafkaErrorMode()) { } @@ -78,21 +78,22 @@ Block KafkaBlockInputStream::readImpl() // now it's a one-time usage InputStream // one block of the needed size (or with desired flush timeout) is formed in one internal iteration // otherwise external iteration will reuse that and the logic will become even more fuzzy - MutableColumns result_columns = non_virtual_header.cloneEmptyColumns(); MutableColumns virtual_columns = virtual_header.cloneEmptyColumns(); + auto put_error_to_stream = handle_error_mode == HandleKafkaErrorMode::STREAM; + auto input_format = FormatFactory::instance().getInputFormat( - storage.getFormatName(), *buffer, non_virtual_header, *context, max_block_size); + storage.getFormatName(), *buffer, non_virtual_header, context, max_block_size); InputPort port(input_format->getPort().getHeader(), input_format.get()); connect(input_format->getPort(), port); port.setNeeded(); + std::optional exception_message; auto read_kafka_message = [&] { size_t new_rows = 0; - while (true) { auto status = input_format->prepare(); @@ -136,7 +137,41 @@ Block KafkaBlockInputStream::readImpl() while (true) { - auto new_rows = buffer->poll() ? read_kafka_message() : 0; + size_t new_rows = 0; + exception_message.reset(); + if (buffer->poll()) + { + try + { + new_rows = read_kafka_message(); + } + catch (Exception & e) + { + if (put_error_to_stream) + { + input_format->resetParser(); + exception_message = e.message(); + for (auto & column : result_columns) + { + // read_kafka_message could have already pushed some rows to result_columns + // before the exception, so we need to fix that. 
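+ // cur_rows may exceed total_rows if the parser appended partial rows before throwing;
+ // popBack() trims them, and insertDefault() below adds a single placeholder row so the
+ // broken message still maps to exactly one row, aligned with the _raw_message and
+ // _error virtual columns that are filled further down.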
+ auto cur_rows = column->size(); + if (cur_rows > total_rows) + { + column->popBack(cur_rows - total_rows); + } + // all data columns will get a default value in case of error + column->insertDefault(); + } + new_rows = 1; + } + else + { + e.addMessage("while parsing Kafka message (topic: {}, partition: {}, offset: {})", buffer->currentTopic(), buffer->currentPartition(), buffer->currentOffset()); + throw; + } + } + } if (new_rows) { @@ -189,6 +224,20 @@ Block KafkaBlockInputStream::readImpl() } virtual_columns[6]->insert(headers_names); virtual_columns[7]->insert(headers_values); + if (put_error_to_stream) + { + if (exception_message) + { + auto payload = buffer->currentPayload(); + virtual_columns[8]->insert(payload); + virtual_columns[9]->insert(*exception_message); + } + else + { + virtual_columns[8]->insertDefault(); + virtual_columns[9]->insertDefault(); + } + } } total_rows = total_rows + new_rows; diff --git a/src/Storages/Kafka/KafkaBlockInputStream.h b/src/Storages/Kafka/KafkaBlockInputStream.h index 517df6ecaf7..98e4b8982e0 100644 --- a/src/Storages/Kafka/KafkaBlockInputStream.h +++ b/src/Storages/Kafka/KafkaBlockInputStream.h @@ -39,7 +39,7 @@ public: private: StorageKafka & storage; StorageMetadataPtr metadata_snapshot; - const std::shared_ptr context; + ContextPtr context; Names column_names; Poco::Logger * log; UInt64 max_block_size; @@ -51,6 +51,7 @@ private: const Block non_virtual_header; const Block virtual_header; + const HandleKafkaErrorMode handle_error_mode; }; } diff --git a/src/Storages/Kafka/KafkaBlockOutputStream.cpp b/src/Storages/Kafka/KafkaBlockOutputStream.cpp index 2cb0fd98c71..21de27708b4 100644 --- a/src/Storages/Kafka/KafkaBlockOutputStream.cpp +++ b/src/Storages/Kafka/KafkaBlockOutputStream.cpp @@ -9,7 +9,7 @@ namespace DB KafkaBlockOutputStream::KafkaBlockOutputStream( StorageKafka & storage_, const StorageMetadataPtr & metadata_snapshot_, - const std::shared_ptr & context_) + const ContextPtr & context_) : storage(storage_) , metadata_snapshot(metadata_snapshot_) , context(context_) @@ -25,11 +25,11 @@ void KafkaBlockOutputStream::writePrefix() { buffer = storage.createWriteBuffer(getHeader()); - auto format_settings = getFormatSettings(*context); + auto format_settings = getFormatSettings(context); format_settings.protobuf.allow_multiple_rows_without_delimiter = true; child = FormatFactory::instance().getOutputStream(storage.getFormatName(), *buffer, - getHeader(), *context, + getHeader(), context, [this](const Columns & columns, size_t row) { buffer->countRow(columns, row); diff --git a/src/Storages/Kafka/KafkaSettings.h b/src/Storages/Kafka/KafkaSettings.h index 1df10d16339..1010c486abb 100644 --- a/src/Storages/Kafka/KafkaSettings.h +++ b/src/Storages/Kafka/KafkaSettings.h @@ -29,7 +29,8 @@ class ASTStorage; M(Char, kafka_row_delimiter, '\0', "The character to be considered as a delimiter in Kafka message.", 0) \ M(String, kafka_schema, "", "Schema identifier (used by schema-based formats) for Kafka engine", 0) \ M(UInt64, kafka_skip_broken_messages, 0, "Skip at least this number of broken messages from Kafka topic per block", 0) \ - M(Bool, kafka_thread_per_consumer, false, "Provide independent thread for each consumer", 0) + M(Bool, kafka_thread_per_consumer, false, "Provide independent thread for each consumer", 0) \ + M(HandleKafkaErrorMode, kafka_handle_error_mode, HandleKafkaErrorMode::DEFAULT, "How to handle errors for Kafka engine. 
Possible values: default, stream.", 0) \ /** TODO: */ /* https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md */ diff --git a/src/Storages/Kafka/ReadBufferFromKafkaConsumer.h b/src/Storages/Kafka/ReadBufferFromKafkaConsumer.h index 1d889655941..49d3df0e180 100644 --- a/src/Storages/Kafka/ReadBufferFromKafkaConsumer.h +++ b/src/Storages/Kafka/ReadBufferFromKafkaConsumer.h @@ -63,6 +63,7 @@ public: auto currentPartition() const { return current[-1].get_partition(); } auto currentTimestamp() const { return current[-1].get_timestamp(); } const auto & currentHeaderList() const { return current[-1].get_header_list(); } + String currentPayload() const { return current[-1].get_payload(); } private: using Messages = std::vector; diff --git a/src/Storages/Kafka/StorageKafka.cpp b/src/Storages/Kafka/StorageKafka.cpp index 45e4ec538a1..15dd5b553b0 100644 --- a/src/Storages/Kafka/StorageKafka.cpp +++ b/src/Storages/Kafka/StorageKafka.cpp @@ -28,6 +28,7 @@ #include #include #include +#include #include #include #include @@ -169,20 +170,19 @@ namespace } StorageKafka::StorageKafka( - const StorageID & table_id_, - const Context & context_, - const ColumnsDescription & columns_, - std::unique_ptr kafka_settings_) + const StorageID & table_id_, ContextPtr context_, const ColumnsDescription & columns_, std::unique_ptr kafka_settings_) : IStorage(table_id_) - , global_context(context_.getGlobalContext()) + , WithContext(context_->getGlobalContext()) , kafka_settings(std::move(kafka_settings_)) - , topics(parseTopics(global_context.getMacros()->expand(kafka_settings->kafka_topic_list.value))) - , brokers(global_context.getMacros()->expand(kafka_settings->kafka_broker_list.value)) - , group(global_context.getMacros()->expand(kafka_settings->kafka_group_name.value)) - , client_id(kafka_settings->kafka_client_id.value.empty() ? getDefaultClientId(table_id_) : global_context.getMacros()->expand(kafka_settings->kafka_client_id.value)) - , format_name(global_context.getMacros()->expand(kafka_settings->kafka_format.value)) + , topics(parseTopics(getContext()->getMacros()->expand(kafka_settings->kafka_topic_list.value))) + , brokers(getContext()->getMacros()->expand(kafka_settings->kafka_broker_list.value)) + , group(getContext()->getMacros()->expand(kafka_settings->kafka_group_name.value)) + , client_id( + kafka_settings->kafka_client_id.value.empty() ? getDefaultClientId(table_id_) + : getContext()->getMacros()->expand(kafka_settings->kafka_client_id.value)) + , format_name(getContext()->getMacros()->expand(kafka_settings->kafka_format.value)) , row_delimiter(kafka_settings->kafka_row_delimiter.value) - , schema_name(global_context.getMacros()->expand(kafka_settings->kafka_schema.value)) + , schema_name(getContext()->getMacros()->expand(kafka_settings->kafka_schema.value)) , num_consumers(kafka_settings->kafka_num_consumers.value) , log(&Poco::Logger::get("StorageKafka (" + table_id_.table_name + ")")) , semaphore(0, num_consumers) @@ -190,13 +190,18 @@ StorageKafka::StorageKafka( , settings_adjustments(createSettingsAdjustments()) , thread_per_consumer(kafka_settings->kafka_thread_per_consumer.value) { + if (kafka_settings->kafka_handle_error_mode == HandleKafkaErrorMode::STREAM) + { + kafka_settings->input_format_allow_errors_num = 0; + kafka_settings->input_format_allow_errors_ratio = 0; + } StorageInMemoryMetadata storage_metadata; storage_metadata.setColumns(columns_); setInMemoryMetadata(storage_metadata); auto task_count = thread_per_consumer ? 
num_consumers : 1; for (size_t i = 0; i < task_count; ++i) { - auto task = global_context.getMessageBrokerSchedulePool().createTask(log->name(), [this, i]{ threadFunc(i); }); + auto task = getContext()->getMessageBrokerSchedulePool().createTask(log->name(), [this, i]{ threadFunc(i); }); task->deactivate(); tasks.emplace_back(std::make_shared(std::move(task))); } @@ -255,7 +260,7 @@ Pipe StorageKafka::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & /* query_info */, - const Context & context, + ContextPtr local_context, QueryProcessingStage::Enum /* processed_stage */, size_t /* max_block_size */, unsigned /* num_streams */) @@ -266,7 +271,7 @@ Pipe StorageKafka::read( /// Always use all consumers at once, otherwise SELECT may not read messages from all partitions. Pipes pipes; pipes.reserve(num_created_consumers); - auto modified_context = std::make_shared(context); + auto modified_context = Context::createCopy(local_context); modified_context->applySettingsChanges(settings_adjustments); // Claim as many consumers as requested, but don't block @@ -284,9 +289,9 @@ Pipe StorageKafka::read( } -BlockOutputStreamPtr StorageKafka::write(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, const Context & context) +BlockOutputStreamPtr StorageKafka::write(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, ContextPtr local_context) { - auto modified_context = std::make_shared(context); + auto modified_context = Context::createCopy(local_context); modified_context->applySettingsChanges(settings_adjustments); if (topics.size() > 1) @@ -382,7 +387,7 @@ ProducerBufferPtr StorageKafka::createWriteBuffer(const Block & header) updateConfiguration(conf); auto producer = std::make_shared(conf); - const Settings & settings = global_context.getSettingsRef(); + const Settings & settings = getContext()->getSettingsRef(); size_t poll_timeout = settings.stream_poll_timeout_ms.totalMilliseconds(); return std::make_shared( @@ -438,14 +443,14 @@ size_t StorageKafka::getMaxBlockSize() const { return kafka_settings->kafka_max_block_size.changed ? kafka_settings->kafka_max_block_size.value - : (global_context.getSettingsRef().max_insert_block_size.value / num_consumers); + : (getContext()->getSettingsRef().max_insert_block_size.value / num_consumers); } size_t StorageKafka::getPollMaxBatchSize() const { size_t batch_size = kafka_settings->kafka_poll_max_batch_size.changed ? kafka_settings->kafka_poll_max_batch_size.value - : global_context.getSettingsRef().max_block_size.value; + : getContext()->getSettingsRef().max_block_size.value; return std::min(batch_size,getMaxBlockSize()); } @@ -454,13 +459,13 @@ size_t StorageKafka::getPollTimeoutMillisecond() const { return kafka_settings->kafka_poll_timeout_ms.changed ? kafka_settings->kafka_poll_timeout_ms.totalMilliseconds() - : global_context.getSettingsRef().stream_poll_timeout_ms.totalMilliseconds(); + : getContext()->getSettingsRef().stream_poll_timeout_ms.totalMilliseconds(); } void StorageKafka::updateConfiguration(cppkafka::Configuration & conf) { // Update consumer configuration from the configuration - const auto & config = global_context.getConfigRef(); + const auto & config = getContext()->getConfigRef(); if (config.has(CONFIG_PREFIX)) loadFromConfig(conf, config, CONFIG_PREFIX); @@ -512,7 +517,7 @@ bool StorageKafka::checkDependencies(const StorageID & table_id) // Check whether the dependencies are ready 
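+ // "Ready" means every dependent table can still be resolved through the
+ // DatabaseCatalog; a single missing dependency fails the whole check.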
for (const auto & db_tab : dependencies) { - auto table = DatabaseCatalog::instance().tryGetTable(db_tab, global_context); + auto table = DatabaseCatalog::instance().tryGetTable(db_tab, getContext()); if (!table) return false; @@ -581,8 +586,10 @@ void StorageKafka::threadFunc(size_t idx) bool StorageKafka::streamToViews() { + Stopwatch watch; + auto table_id = getStorageID(); - auto table = DatabaseCatalog::instance().getTable(table_id, global_context); + auto table = DatabaseCatalog::instance().getTable(table_id, getContext()); if (!table) throw Exception("Engine table " + table_id.getNameForLogs() + " doesn't exist.", ErrorCodes::LOGICAL_ERROR); auto metadata_snapshot = getInMemoryMetadataPtr(); @@ -593,13 +600,13 @@ bool StorageKafka::streamToViews() size_t block_size = getMaxBlockSize(); - auto kafka_context = std::make_shared(global_context); + auto kafka_context = Context::createCopy(getContext()); kafka_context->makeQueryContext(); kafka_context->applySettingsChanges(settings_adjustments); // Create a stream for each consumer and join them in a union stream // Only insert into dependent views and expect that input blocks contain virtual columns - InterpreterInsertQuery interpreter(insert, *kafka_context, false, true, true); + InterpreterInsertQuery interpreter(insert, kafka_context, false, true, true); auto block_io = interpreter.execute(); // Create a stream for each consumer and join them in a union stream @@ -617,7 +624,7 @@ bool StorageKafka::streamToViews() limits.speed_limits.max_execution_time = kafka_settings->kafka_flush_interval_ms.changed ? kafka_settings->kafka_flush_interval_ms - : global_context.getSettingsRef().stream_flush_interval_ms; + : getContext()->getSettingsRef().stream_flush_interval_ms; limits.timeout_overflow_mode = OverflowMode::BREAK; stream->setLimits(limits); @@ -633,7 +640,11 @@ bool StorageKafka::streamToViews() // We can't cancel during copyData, as it's not aware of commits and other kafka-related stuff. 
// It will be cancelled on underlying layer (kafka buffer) std::atomic stub = {false}; - copyData(*in, *block_io.out, &stub); + size_t rows = 0; + copyData(*in, *block_io.out, [&rows](const Block & block) + { + rows += block.rows(); + }, &stub); bool some_stream_is_stalled = false; for (auto & stream : streams) @@ -642,6 +653,10 @@ bool StorageKafka::streamToViews() stream->as()->commit(); } + UInt64 milliseconds = watch.elapsedMilliseconds(); + LOG_DEBUG(log, "Pushing {} rows to {} took {} ms.", + formatReadableQuantity(rows), table_id.getNameForLogs(), milliseconds); + return some_stream_is_stalled; } @@ -690,14 +705,14 @@ void registerStorageKafka(StorageFactory & factory) engine_args[(ARG_NUM)-1] = \ evaluateConstantExpressionAsLiteral( \ engine_args[(ARG_NUM)-1], \ - args.local_context); \ + args.getLocalContext()); \ } \ if ((EVAL) == 2) \ { \ engine_args[(ARG_NUM)-1] = \ evaluateConstantExpressionOrIdentifierAsLiteral( \ engine_args[(ARG_NUM)-1], \ - args.local_context); \ + args.getLocalContext()); \ } \ kafka_settings->PAR_NAME = \ engine_args[(ARG_NUM)-1]->as().value; \ @@ -752,7 +767,7 @@ void registerStorageKafka(StorageFactory & factory) throw Exception("kafka_poll_max_batch_size can not be lower than 1", ErrorCodes::BAD_ARGUMENTS); } - return StorageKafka::create(args.table_id, args.context, args.columns, std::move(kafka_settings)); + return StorageKafka::create(args.table_id, args.getContext(), args.columns, std::move(kafka_settings)); }; factory.registerStorage("Kafka", creator_fn, StorageFactory::StorageFeatures{ .supports_settings = true, }); @@ -760,7 +775,7 @@ void registerStorageKafka(StorageFactory & factory) NamesAndTypesList StorageKafka::getVirtuals() const { - return NamesAndTypesList{ + auto result = NamesAndTypesList{ {"_topic", std::make_shared()}, {"_key", std::make_shared()}, {"_offset", std::make_shared()}, @@ -770,6 +785,32 @@ NamesAndTypesList StorageKafka::getVirtuals() const {"_headers.name", std::make_shared(std::make_shared())}, {"_headers.value", std::make_shared(std::make_shared())} }; + if (kafka_settings->kafka_handle_error_mode == HandleKafkaErrorMode::STREAM) + { + result.push_back({"_raw_message", std::make_shared()}); + result.push_back({"_error", std::make_shared()}); + } + return result; +} + +Names StorageKafka::getVirtualColumnNames() const +{ + auto result = Names { + "_topic", + "_key", + "_offset", + "_partition", + "_timestamp", + "_timestamp_ms", + "_headers.name", + "_headers.value", + }; + if (kafka_settings->kafka_handle_error_mode == HandleKafkaErrorMode::STREAM) + { + result.push_back({"_raw_message"}); + result.push_back({"_error"}); + } + return result; } } diff --git a/src/Storages/Kafka/StorageKafka.h b/src/Storages/Kafka/StorageKafka.h index 53871990810..b09b2ecd39e 100644 --- a/src/Storages/Kafka/StorageKafka.h +++ b/src/Storages/Kafka/StorageKafka.h @@ -28,7 +28,7 @@ struct StorageKafkaInterceptors; /** Implements a Kafka queue table engine that can be used as a persistent queue / buffer, * or as a basic building block for creating pipelines with a continuous insertion / ETL. 
*/ -class StorageKafka final : public ext::shared_ptr_helper, public IStorage +class StorageKafka final : public ext::shared_ptr_helper, public IStorage, WithContext { friend struct ext::shared_ptr_helper; friend struct StorageKafkaInterceptors; @@ -45,7 +45,7 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; @@ -53,7 +53,7 @@ public: BlockOutputStreamPtr write( const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, - const Context & context) override; + ContextPtr context) override; void pushReadBuffer(ConsumerBufferPtr buf); ConsumerBufferPtr popReadBuffer(); @@ -64,16 +64,17 @@ public: const auto & getFormatName() const { return format_name; } NamesAndTypesList getVirtuals() const override; + Names getVirtualColumnNames() const; + HandleKafkaErrorMode getHandleKafkaErrorMode() const { return kafka_settings->kafka_handle_error_mode; } protected: StorageKafka( const StorageID & table_id_, - const Context & context_, + ContextPtr context_, const ColumnsDescription & columns_, std::unique_ptr kafka_settings_); private: // Configuration and state - const Context & global_context; std::unique_ptr kafka_settings; const Names topics; const String brokers; @@ -112,6 +113,9 @@ private: std::mutex thread_statuses_mutex; std::list> thread_statuses; + /// Handle error mode + HandleKafkaErrorMode handle_error_mode; + SettingsChanges createSettingsAdjustments(); ConsumerBufferPtr createReadBuffer(const size_t consumer_number); diff --git a/src/Storages/KeyDescription.cpp b/src/Storages/KeyDescription.cpp index ee4a20bfc4f..be327313b4d 100644 --- a/src/Storages/KeyDescription.cpp +++ b/src/Storages/KeyDescription.cpp @@ -66,14 +66,14 @@ KeyDescription & KeyDescription::operator=(const KeyDescription & other) void KeyDescription::recalculateWithNewAST( const ASTPtr & new_ast, const ColumnsDescription & columns, - const Context & context) + ContextPtr context) { *this = getSortingKeyFromAST(new_ast, columns, context, additional_column); } void KeyDescription::recalculateWithNewColumns( const ColumnsDescription & new_columns, - const Context & context) + ContextPtr context) { *this = getSortingKeyFromAST(definition_ast, new_columns, context, additional_column); } @@ -81,7 +81,7 @@ void KeyDescription::recalculateWithNewColumns( KeyDescription KeyDescription::getKeyFromAST( const ASTPtr & definition_ast, const ColumnsDescription & columns, - const Context & context) + ContextPtr context) { return getSortingKeyFromAST(definition_ast, columns, context, {}); } @@ -89,7 +89,7 @@ KeyDescription KeyDescription::getKeyFromAST( KeyDescription KeyDescription::getSortingKeyFromAST( const ASTPtr & definition_ast, const ColumnsDescription & columns, - const Context & context, + ContextPtr context, const std::optional & additional_column) { KeyDescription result; diff --git a/src/Storages/KeyDescription.h b/src/Storages/KeyDescription.h index 7d1e7efb55f..194aad4d5b2 100644 --- a/src/Storages/KeyDescription.h +++ b/src/Storages/KeyDescription.h @@ -40,28 +40,28 @@ struct KeyDescription static KeyDescription getKeyFromAST( const ASTPtr & definition_ast, const ColumnsDescription & columns, - const Context & context); + ContextPtr context); /// Sorting key can contain additional column defined by storage type (like /// Version column in VersionedCollapsingMergeTree). 
static KeyDescription getSortingKeyFromAST( const ASTPtr & definition_ast, const ColumnsDescription & columns, - const Context & context, + ContextPtr context, const std::optional & additional_column); /// Recalculate all expressions and fields for key with new columns without /// changes in constant fields. Just wrapper for static methods. void recalculateWithNewColumns( const ColumnsDescription & new_columns, - const Context & context); + ContextPtr context); /// Recalculate all expressions and fields for key with new ast without /// changes in constant fields. Just wrapper for static methods. void recalculateWithNewAST( const ASTPtr & new_ast, const ColumnsDescription & columns, - const Context & context); + ContextPtr context); KeyDescription() = default; diff --git a/src/Storages/LiveView/StorageBlocks.h b/src/Storages/LiveView/StorageBlocks.h index 4ad0ffb93ca..f4ba8d7b09c 100644 --- a/src/Storages/LiveView/StorageBlocks.h +++ b/src/Storages/LiveView/StorageBlocks.h @@ -33,13 +33,13 @@ public: bool supportsSampling() const override { return true; } bool supportsFinal() const override { return true; } - QueryProcessingStage::Enum getQueryProcessingStage(const Context &, QueryProcessingStage::Enum /*to_stage*/, SelectQueryInfo &) const override { return to_stage; } + QueryProcessingStage::Enum getQueryProcessingStage(ContextPtr, QueryProcessingStage::Enum /*to_stage*/, SelectQueryInfo &) const override { return to_stage; } Pipe read( const Names & /*column_names*/, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & /*query_info*/, - const Context & /*context*/, + ContextPtr /*context*/, QueryProcessingStage::Enum /*processed_stage*/, size_t /*max_block_size*/, unsigned /*num_streams*/) override diff --git a/src/Storages/LiveView/StorageLiveView.cpp b/src/Storages/LiveView/StorageLiveView.cpp index bfec7bffc8c..1d81405ec26 100644 --- a/src/Storages/LiveView/StorageLiveView.cpp +++ b/src/Storages/LiveView/StorageLiveView.cpp @@ -56,13 +56,13 @@ namespace ErrorCodes } -static StorageID extractDependentTable(ASTPtr & query, Context & context, const String & table_name, ASTPtr & inner_subquery) +static StorageID extractDependentTable(ASTPtr & query, ContextPtr context, const String & table_name, ASTPtr & inner_subquery) { ASTSelectQuery & select_query = typeid_cast(*query); if (auto db_and_table = getDatabaseAndTable(select_query, 0)) { - String select_database_name = context.getCurrentDatabase(); + String select_database_name = context->getCurrentDatabase(); String select_table_name = db_and_table->table; if (db_and_table->database.empty()) @@ -98,7 +98,7 @@ static StorageID extractDependentTable(ASTPtr & query, Context & context, const } } -MergeableBlocksPtr StorageLiveView::collectMergeableBlocks(const Context & context) +MergeableBlocksPtr StorageLiveView::collectMergeableBlocks(ContextPtr local_context) { ASTPtr mergeable_query = inner_query; @@ -109,7 +109,7 @@ MergeableBlocksPtr StorageLiveView::collectMergeableBlocks(const Context & conte BlocksPtrs new_blocks = std::make_shared>(); BlocksPtr base_blocks = std::make_shared(); - InterpreterSelectQuery interpreter(mergeable_query->clone(), context, SelectQueryOptions(QueryProcessingStage::WithMergeableState), Names()); + InterpreterSelectQuery interpreter(mergeable_query->clone(), local_context, SelectQueryOptions(QueryProcessingStage::WithMergeableState), Names()); auto view_mergeable_stream = std::make_shared(interpreter.execute().getInputStream()); @@ -137,7 +137,7 @@ Pipes 
StorageLiveView::blocksToPipes(BlocksPtrs blocks, Block & sample_block) BlockInputStreamPtr StorageLiveView::completeQuery(Pipes pipes) { //FIXME it's dangerous to create Context on stack - auto block_context = std::make_unique(global_context); + auto block_context = Context::createCopy(getContext()); block_context->makeQueryContext(); auto creator = [&](const StorageID & blocks_id_global) @@ -147,17 +147,17 @@ BlockInputStreamPtr StorageLiveView::completeQuery(Pipes pipes) blocks_id_global, parent_table_metadata->getColumns(), std::move(pipes), QueryProcessingStage::WithMergeableState); }; - block_context->addExternalTable(getBlocksTableName(), TemporaryTableHolder(global_context, creator)); + block_context->addExternalTable(getBlocksTableName(), TemporaryTableHolder(getContext(), creator)); - InterpreterSelectQuery select(getInnerBlocksQuery(), *block_context, StoragePtr(), nullptr, SelectQueryOptions(QueryProcessingStage::Complete)); + InterpreterSelectQuery select(getInnerBlocksQuery(), block_context, StoragePtr(), nullptr, SelectQueryOptions(QueryProcessingStage::Complete)); BlockInputStreamPtr data = std::make_shared(select.execute().getInputStream()); /// Squashing is needed here because the view query can generate a lot of blocks /// even when only one block is inserted into the parent table (e.g. if the query is a GROUP BY /// and two-level aggregation is triggered). data = std::make_shared( - data, global_context.getSettingsRef().min_insert_block_size_rows, - global_context.getSettingsRef().min_insert_block_size_bytes); + data, getContext()->getSettingsRef().min_insert_block_size_rows, + getContext()->getSettingsRef().min_insert_block_size_bytes); return data; } @@ -165,7 +165,7 @@ BlockInputStreamPtr StorageLiveView::completeQuery(Pipes pipes) void StorageLiveView::writeIntoLiveView( StorageLiveView & live_view, const Block & block, - const Context & context) + ContextPtr local_context) { BlockOutputStreamPtr output = std::make_shared(live_view); @@ -190,9 +190,9 @@ void StorageLiveView::writeIntoLiveView( std::lock_guard lock(live_view.mutex); mergeable_blocks = live_view.getMergeableBlocks(); - if (!mergeable_blocks || mergeable_blocks->blocks->size() >= context.getGlobalContext().getSettingsRef().max_live_view_insert_blocks_before_refresh) + if (!mergeable_blocks || mergeable_blocks->blocks->size() >= local_context->getGlobalContext()->getSettingsRef().max_live_view_insert_blocks_before_refresh) { - mergeable_blocks = live_view.collectMergeableBlocks(context); + mergeable_blocks = live_view.collectMergeableBlocks(local_context); live_view.setMergeableBlocks(mergeable_blocks); from = live_view.blocksToPipes(mergeable_blocks->blocks, mergeable_blocks->sample_block); is_block_processed = true; @@ -216,9 +216,9 @@ void StorageLiveView::writeIntoLiveView( blocks_id_global, parent_metadata->getColumns(), std::move(pipes), QueryProcessingStage::FetchColumns); }; - TemporaryTableHolder blocks_storage(context, creator); + TemporaryTableHolder blocks_storage(local_context, creator); - InterpreterSelectQuery select_block(mergeable_query, context, blocks_storage.getTable(), blocks_storage.getTable()->getInMemoryMetadataPtr(), + InterpreterSelectQuery select_block(mergeable_query, local_context, blocks_storage.getTable(), blocks_storage.getTable()->getInMemoryMetadataPtr(), QueryProcessingStage::WithMergeableState); auto data_mergeable_stream = std::make_shared( @@ -246,13 +246,13 @@ void StorageLiveView::writeIntoLiveView( StorageLiveView::StorageLiveView( const StorageID & table_id_, - 
Context & local_context, + ContextPtr context_, const ASTCreateQuery & query, const ColumnsDescription & columns_) : IStorage(table_id_) - , global_context(local_context.getGlobalContext()) + , WithContext(context_->getGlobalContext()) { - live_view_context = std::make_unique(global_context); + live_view_context = Context::createCopy(getContext()); live_view_context->makeQueryContext(); log = &Poco::Logger::get("StorageLiveView (" + table_id_.database_name + "." + table_id_.table_name + ")"); @@ -271,7 +271,7 @@ StorageLiveView::StorageLiveView( inner_query = query.select->list_of_selects->children.at(0); auto inner_query_tmp = inner_query->clone(); - select_table_id = extractDependentTable(inner_query_tmp, global_context, table_id_.table_name, inner_subquery); + select_table_id = extractDependentTable(inner_query_tmp, getContext(), table_id_.table_name, inner_subquery); DatabaseCatalog::instance().addDependency(select_table_id, table_id_); @@ -291,7 +291,7 @@ StorageLiveView::StorageLiveView( blocks_metadata_ptr = std::make_shared(); active_ptr = std::make_shared(true); - periodic_refresh_task = global_context.getSchedulePool().createTask("LieViewPeriodicRefreshTask", [this]{ periodicRefreshTaskFunc(); }); + periodic_refresh_task = getContext()->getSchedulePool().createTask("LiveViewPeriodicRefreshTask", [this]{ periodicRefreshTaskFunc(); }); periodic_refresh_task->deactivate(); } @@ -301,7 +301,7 @@ Block StorageLiveView::getHeader() const if (!sample_block) { - sample_block = InterpreterSelectQuery(inner_query->clone(), *live_view_context, SelectQueryOptions(QueryProcessingStage::Complete)).getSampleBlock(); + sample_block = InterpreterSelectQuery(inner_query->clone(), live_view_context, SelectQueryOptions(QueryProcessingStage::Complete)).getSampleBlock(); sample_block.insert({DataTypeUInt64().createColumnConst( sample_block.rows(), 0)->convertToFullColumnIfConst(), std::make_shared(), @@ -318,7 +318,7 @@ Block StorageLiveView::getHeader() const StoragePtr StorageLiveView::getParentStorage() const { - return DatabaseCatalog::instance().getTable(select_table_id, global_context); + return DatabaseCatalog::instance().getTable(select_table_id, getContext()); } ASTPtr StorageLiveView::getInnerBlocksQuery() { @@ -330,9 +330,9 @@ ASTPtr StorageLiveView::getInnerBlocksQuery() /// Rewrite inner query with right aliases for JOIN. 
/// It cannot be done in constructor or startup() because InterpreterSelectQuery may access table, /// which is not loaded yet during server startup, so we do it lazily - InterpreterSelectQuery(inner_blocks_query, *live_view_context, SelectQueryOptions().modify().analyze()); // NOLINT + InterpreterSelectQuery(inner_blocks_query, live_view_context, SelectQueryOptions().modify().analyze()); // NOLINT auto table_id = getStorageID(); - extractDependentTable(inner_blocks_query, global_context, table_id.table_name, inner_subquery); + extractDependentTable(inner_blocks_query, getContext(), table_id.table_name, inner_subquery); } return inner_blocks_query->clone(); } @@ -350,7 +350,7 @@ bool StorageLiveView::getNewBlocks() /// called before writeIntoLiveView function is called which can lead to /// the same block added twice to the mergeable_blocks leading to /// inserted data to be duplicated - auto new_mergeable_blocks = collectMergeableBlocks(*live_view_context); + auto new_mergeable_blocks = collectMergeableBlocks(live_view_context); Pipes from = blocksToPipes(new_mergeable_blocks->blocks, new_mergeable_blocks->sample_block); BlockInputStreamPtr data = completeQuery(std::move(from)); @@ -500,7 +500,7 @@ Pipe StorageLiveView::read( const Names & /*column_names*/, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & /*query_info*/, - const Context & /*context*/, + ContextPtr /*context*/, QueryProcessingStage::Enum /*processed_stage*/, const size_t /*max_block_size*/, const unsigned /*num_streams*/) @@ -525,7 +525,7 @@ Pipe StorageLiveView::read( BlockInputStreams StorageLiveView::watch( const Names & /*column_names*/, const SelectQueryInfo & query_info, - const Context & context, + ContextPtr local_context, QueryProcessingStage::Enum & processed_stage, size_t /*max_block_size*/, const unsigned /*num_streams*/) @@ -546,12 +546,12 @@ BlockInputStreams StorageLiveView::watch( reader = std::make_shared( std::static_pointer_cast(shared_from_this()), blocks_ptr, blocks_metadata_ptr, active_ptr, has_limit, limit, - context.getSettingsRef().live_view_heartbeat_interval.totalSeconds()); + local_context->getSettingsRef().live_view_heartbeat_interval.totalSeconds()); else reader = std::make_shared( std::static_pointer_cast(shared_from_this()), blocks_ptr, blocks_metadata_ptr, active_ptr, has_limit, limit, - context.getSettingsRef().live_view_heartbeat_interval.totalSeconds()); + local_context->getSettingsRef().live_view_heartbeat_interval.totalSeconds()); { std::lock_guard lock(mutex); @@ -578,10 +578,12 @@ void registerStorageLiveView(StorageFactory & factory) { factory.registerStorage("LiveView", [](const StorageFactory::Arguments & args) { - if (!args.attach && !args.local_context.getSettingsRef().allow_experimental_live_view) - throw Exception("Experimental LIVE VIEW feature is not enabled (the setting 'allow_experimental_live_view')", ErrorCodes::SUPPORT_IS_DISABLED); + if (!args.attach && !args.getLocalContext()->getSettingsRef().allow_experimental_live_view) + throw Exception( + "Experimental LIVE VIEW feature is not enabled (the setting 'allow_experimental_live_view')", + ErrorCodes::SUPPORT_IS_DISABLED); - return StorageLiveView::create(args.table_id, args.local_context, args.query, args.columns); + return StorageLiveView::create(args.table_id, args.getLocalContext(), args.query, args.columns); }); } diff --git a/src/Storages/LiveView/StorageLiveView.h b/src/Storages/LiveView/StorageLiveView.h index e30a8f51705..df09316f333 100644 --- a/src/Storages/LiveView/StorageLiveView.h +++ 
b/src/Storages/LiveView/StorageLiveView.h @@ -49,7 +49,7 @@ class Pipe; using Pipes = std::vector; -class StorageLiveView final : public ext::shared_ptr_helper, public IStorage +class StorageLiveView final : public ext::shared_ptr_helper, public IStorage, WithContext { friend struct ext::shared_ptr_helper; friend class LiveViewBlockInputStream; @@ -142,13 +142,13 @@ public: void startup() override; void shutdown() override; - void refresh(const bool grab_lock = true); + void refresh(bool grab_lock = true); Pipe read( const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; @@ -156,7 +156,7 @@ public: BlockInputStreams watch( const Names & column_names, const SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum & processed_stage, size_t max_block_size, unsigned num_streams) override; @@ -165,7 +165,7 @@ public: MergeableBlocksPtr getMergeableBlocks() { return mergeable_blocks; } /// Collect mergeable blocks and their sample. Must be called holding mutex - MergeableBlocksPtr collectMergeableBlocks(const Context & context); + MergeableBlocksPtr collectMergeableBlocks(ContextPtr context); /// Complete query using input streams from mergeable blocks BlockInputStreamPtr completeQuery(Pipes pipes); @@ -183,7 +183,7 @@ public: static void writeIntoLiveView( StorageLiveView & live_view, const Block & block, - const Context & context); + ContextPtr context); private: /// TODO move to common struct SelectQueryDescription @@ -191,8 +191,7 @@ private: ASTPtr inner_query; /// stored query : SELECT * FROM ( SELECT a FROM A) ASTPtr inner_subquery; /// stored query's innermost subquery if any ASTPtr inner_blocks_query; /// query over the mergeable blocks to produce final result - Context & global_context; - std::unique_ptr live_view_context; + ContextPtr live_view_context; Poco::Logger * log; @@ -231,7 +230,7 @@ private: StorageLiveView( const StorageID & table_id_, - Context & local_context, + ContextPtr context_, const ASTCreateQuery & query, const ColumnsDescription & columns ); diff --git a/src/Storages/LiveView/TemporaryLiveViewCleaner.cpp b/src/Storages/LiveView/TemporaryLiveViewCleaner.cpp index 143e7460cc3..7294b82f10d 100644 --- a/src/Storages/LiveView/TemporaryLiveViewCleaner.cpp +++ b/src/Storages/LiveView/TemporaryLiveViewCleaner.cpp @@ -1,8 +1,9 @@ #include -#include + #include #include #include +#include namespace DB @@ -15,7 +16,7 @@ namespace ErrorCodes namespace { - void executeDropQuery(const StorageID & storage_id, Context & context) + void executeDropQuery(const StorageID & storage_id, ContextPtr context) { if (!DatabaseCatalog::instance().isTableExist(storage_id, context)) return; @@ -41,45 +42,20 @@ namespace std::unique_ptr TemporaryLiveViewCleaner::the_instance; -void TemporaryLiveViewCleaner::init(Context & global_context_) +void TemporaryLiveViewCleaner::init(ContextPtr global_context_) { if (the_instance) throw Exception("TemporaryLiveViewCleaner already initialized", ErrorCodes::LOGICAL_ERROR); the_instance.reset(new TemporaryLiveViewCleaner(global_context_)); } -void TemporaryLiveViewCleaner::startupIfNecessary() +void TemporaryLiveViewCleaner::startup() { + background_thread_can_start = true; + std::lock_guard lock{mutex}; - if (background_thread_should_exit) - return; if (!views.empty()) - startupIfNecessaryImpl(lock); - else - 
can_start_background_thread = true; -} - -void TemporaryLiveViewCleaner::startupIfNecessaryImpl(const std::lock_guard &) -{ - /// If views.empty() the background thread isn't running or it's going to stop right now. - /// If can_start_background_thread is false, then the thread has not been started previously. - bool background_thread_is_running; - if (can_start_background_thread) - { - background_thread_is_running = !views.empty(); - } - else - { - can_start_background_thread = true; - background_thread_is_running = false; - } - - if (!background_thread_is_running) - { - if (background_thread.joinable()) - background_thread.join(); - background_thread = ThreadFromGlobalPool{&TemporaryLiveViewCleaner::backgroundThreadFunc, this}; - } + startBackgroundThread(); } void TemporaryLiveViewCleaner::shutdown() @@ -87,13 +63,10 @@ void TemporaryLiveViewCleaner::shutdown() the_instance.reset(); } - -TemporaryLiveViewCleaner::TemporaryLiveViewCleaner(Context & global_context_) - : global_context(global_context_) +TemporaryLiveViewCleaner::TemporaryLiveViewCleaner(ContextPtr global_context_) : WithContext(global_context_) { } - TemporaryLiveViewCleaner::~TemporaryLiveViewCleaner() { stopBackgroundThread(); @@ -108,27 +81,29 @@ void TemporaryLiveViewCleaner::addView(const std::shared_ptr & auto current_time = std::chrono::system_clock::now(); auto time_of_next_check = current_time + view->getTimeout(); - std::lock_guard lock{mutex}; - if (background_thread_should_exit) - return; - - if (can_start_background_thread) - startupIfNecessaryImpl(lock); - /// Keep the vector `views` sorted by time of next check. StorageAndTimeOfCheck storage_and_time_of_check{view, time_of_next_check}; + std::lock_guard lock{mutex}; views.insert(std::upper_bound(views.begin(), views.end(), storage_and_time_of_check), storage_and_time_of_check); - background_thread_wake_up.notify_one(); + if (background_thread_can_start) + { + startBackgroundThread(); + background_thread_wake_up.notify_one(); + } } void TemporaryLiveViewCleaner::backgroundThreadFunc() { std::unique_lock lock{mutex}; - while (!background_thread_should_exit && !views.empty()) + while (!background_thread_should_exit) { - background_thread_wake_up.wait_until(lock, views.front().time_of_check); + if (views.empty()) + background_thread_wake_up.wait(lock); + else + background_thread_wake_up.wait_until(lock, views.front().time_of_check); + if (background_thread_should_exit) break; @@ -167,20 +142,24 @@ void TemporaryLiveViewCleaner::backgroundThreadFunc() lock.unlock(); for (const auto & storage_id : storages_to_drop) - executeDropQuery(storage_id, global_context); + executeDropQuery(storage_id, getContext()); lock.lock(); } } +void TemporaryLiveViewCleaner::startBackgroundThread() +{ + if (!background_thread.joinable() && background_thread_can_start && !background_thread_should_exit) + background_thread = ThreadFromGlobalPool{&TemporaryLiveViewCleaner::backgroundThreadFunc, this}; +} + void TemporaryLiveViewCleaner::stopBackgroundThread() { + background_thread_should_exit = true; + background_thread_wake_up.notify_one(); if (background_thread.joinable()) - { - background_thread_should_exit = true; - background_thread_wake_up.notify_one(); background_thread.join(); - } } } diff --git a/src/Storages/LiveView/TemporaryLiveViewCleaner.h b/src/Storages/LiveView/TemporaryLiveViewCleaner.h index 8d57aa9fbfa..9b31bf9c999 100644 --- a/src/Storages/LiveView/TemporaryLiveViewCleaner.h +++ b/src/Storages/LiveView/TemporaryLiveViewCleaner.h @@ -1,17 +1,20 @@ #pragma once 
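The rewritten cleaner above reduces the old startupIfNecessary()/startupIfNecessaryImpl() dance to one background thread sleeping on a condition variable: it waits indefinitely while there are no views, waits until the next scheduled check otherwise, and is woken by addView() and stopBackgroundThread(). A minimal self-contained sketch of that wake-up pattern (illustrative types, not the actual ClickHouse classes):

```cpp
#include <atomic>
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <vector>

// Sketch of the cleaner's wake-up pattern: the worker sleeps until the next
// scheduled check (or indefinitely when idle) and producers notify it.
struct CleanerSketch
{
    std::mutex mutex;
    std::condition_variable wake_up;
    std::atomic<bool> should_exit{false};
    std::vector<std::chrono::steady_clock::time_point> checks; /// kept sorted

    void threadFunc()
    {
        std::unique_lock lock{mutex};
        while (!should_exit)
        {
            if (checks.empty())
                wake_up.wait(lock);
            else
                wake_up.wait_until(lock, checks.front());
            if (should_exit)
                break;
            /// ... drop expired views here, as backgroundThreadFunc() does ...
        }
    }

    void stop()
    {
        {
            std::lock_guard guard{mutex};
            should_exit = true;
        }
        wake_up.notify_one(); /// wake the worker so it re-checks the flag
    }
};
```

Taking the mutex in stop() before setting the flag closes the window in which the worker has tested should_exit but has not yet begun waiting; the atomic flag alone would not prevent that lost wakeup.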
+#include #include + #include namespace DB { + class StorageLiveView; struct StorageID; /// This class removes temporary live views in the background thread when it's possible. /// There should only be a single instance of this class. -class TemporaryLiveViewCleaner +class TemporaryLiveViewCleaner : WithContext { public: static TemporaryLiveViewCleaner & instance() { return *the_instance; } @@ -20,19 +23,19 @@ public: void addView(const std::shared_ptr & view); /// Should be called once. - static void init(Context & global_context_); + static void init(ContextPtr global_context_); static void shutdown(); - void startupIfNecessary(); - void startupIfNecessaryImpl(const std::lock_guard &); + void startup(); private: friend std::unique_ptr::deleter_type; - TemporaryLiveViewCleaner(Context & global_context_); + TemporaryLiveViewCleaner(ContextPtr global_context_); ~TemporaryLiveViewCleaner(); void backgroundThreadFunc(); + void startBackgroundThread(); void stopBackgroundThread(); struct StorageAndTimeOfCheck @@ -43,11 +46,10 @@ private: }; static std::unique_ptr the_instance; - Context & global_context; std::mutex mutex; std::vector views; ThreadFromGlobalPool background_thread; - bool can_start_background_thread = false; + std::atomic background_thread_can_start = false; std::atomic background_thread_should_exit = false; std::condition_variable background_thread_wake_up; }; diff --git a/src/Storages/MergeTree/BackgroundJobsExecutor.cpp b/src/Storages/MergeTree/BackgroundJobsExecutor.cpp index 8e5a0e8a3b8..ae06721b43d 100644 --- a/src/Storages/MergeTree/BackgroundJobsExecutor.cpp +++ b/src/Storages/MergeTree/BackgroundJobsExecutor.cpp @@ -16,10 +16,10 @@ namespace DB { IBackgroundJobExecutor::IBackgroundJobExecutor( - Context & global_context_, + ContextPtr global_context_, const BackgroundTaskSchedulingSettings & sleep_settings_, const std::vector & pools_configs_) - : global_context(global_context_) + : WithContext(global_context_) , sleep_settings(sleep_settings_) , rng(randomSeed()) { @@ -155,7 +155,7 @@ void IBackgroundJobExecutor::start() std::lock_guard lock(scheduling_task_mutex); if (!scheduling_task) { - scheduling_task = global_context.getSchedulePool().createTask( + scheduling_task = getContext()->getSchedulePool().createTask( getBackgroundTaskName(), [this]{ jobExecutingTask(); }); } @@ -187,12 +187,12 @@ IBackgroundJobExecutor::~IBackgroundJobExecutor() BackgroundJobsExecutor::BackgroundJobsExecutor( MergeTreeData & data_, - Context & global_context_) + ContextPtr global_context_) : IBackgroundJobExecutor( global_context_, - global_context_.getBackgroundProcessingTaskSchedulingSettings(), - {PoolConfig{PoolType::MERGE_MUTATE, global_context_.getSettingsRef().background_pool_size, CurrentMetrics::BackgroundPoolTask}, - PoolConfig{PoolType::FETCH, global_context_.getSettingsRef().background_fetches_pool_size, CurrentMetrics::BackgroundFetchesPoolTask}}) + global_context_->getBackgroundProcessingTaskSchedulingSettings(), + {PoolConfig{PoolType::MERGE_MUTATE, global_context_->getSettingsRef().background_pool_size, CurrentMetrics::BackgroundPoolTask}, + PoolConfig{PoolType::FETCH, global_context_->getSettingsRef().background_fetches_pool_size, CurrentMetrics::BackgroundFetchesPoolTask}}) , data(data_) { } @@ -209,11 +209,11 @@ std::optional BackgroundJobsExecutor::getBackgroundJob() BackgroundMovesExecutor::BackgroundMovesExecutor( MergeTreeData & data_, - Context & global_context_) + ContextPtr global_context_) : IBackgroundJobExecutor( global_context_, -
global_context_.getBackgroundMoveTaskSchedulingSettings(), - {PoolConfig{PoolType::MOVE, global_context_.getSettingsRef().background_move_pool_size, CurrentMetrics::BackgroundMovePoolTask}}) + global_context_->getBackgroundMoveTaskSchedulingSettings(), + {PoolConfig{PoolType::MOVE, global_context_->getSettingsRef().background_move_pool_size, CurrentMetrics::BackgroundMovePoolTask}}) , data(data_) { } diff --git a/src/Storages/MergeTree/BackgroundJobsExecutor.h b/src/Storages/MergeTree/BackgroundJobsExecutor.h index da22c752e1b..e9cefc7a6b0 100644 --- a/src/Storages/MergeTree/BackgroundJobsExecutor.h +++ b/src/Storages/MergeTree/BackgroundJobsExecutor.h @@ -50,11 +50,9 @@ struct JobAndPool /// Consists of two important parts: /// 1) Task in background scheduling pool which receives new jobs from storages and put them into required pool. /// 2) One or more ThreadPool objects, which execute background jobs. -class IBackgroundJobExecutor +class IBackgroundJobExecutor : protected WithContext { protected: - Context & global_context; - /// Configuration for single background ThreadPool struct PoolConfig { @@ -106,7 +104,7 @@ public: protected: IBackgroundJobExecutor( - Context & global_context_, + ContextPtr global_context_, const BackgroundTaskSchedulingSettings & sleep_settings_, const std::vector & pools_configs_); @@ -134,7 +132,7 @@ private: public: BackgroundJobsExecutor( MergeTreeData & data_, - Context & global_context_); + ContextPtr global_context_); protected: String getBackgroundTaskName() const override; @@ -150,7 +148,7 @@ private: public: BackgroundMovesExecutor( MergeTreeData & data_, - Context & global_context_); + ContextPtr global_context_); protected: String getBackgroundTaskName() const override; diff --git a/src/Storages/MergeTree/CMakeLists.txt b/src/Storages/MergeTree/CMakeLists.txt index 36cab0b3590..390835f17ae 100644 --- a/src/Storages/MergeTree/CMakeLists.txt +++ b/src/Storages/MergeTree/CMakeLists.txt @@ -1,3 +1,3 @@ -if(ENABLE_TESTS) - add_subdirectory(tests) +if(ENABLE_EXAMPLES) + add_subdirectory(examples) endif() diff --git a/src/Storages/MergeTree/DataPartsExchange.cpp b/src/Storages/MergeTree/DataPartsExchange.cpp index de7f3b6c0f4..205d57f533e 100644 --- a/src/Storages/MergeTree/DataPartsExchange.cpp +++ b/src/Storages/MergeTree/DataPartsExchange.cpp @@ -11,6 +11,7 @@ #include #include #include +#include #include #include @@ -36,6 +37,8 @@ namespace ErrorCodes extern const int INSECURE_PATH; extern const int CORRUPTED_DATA; extern const int LOGICAL_ERROR; + extern const int S3_ERROR; + extern const int INCORRECT_PART_TYPE; } namespace DataPartsExchange @@ -48,6 +51,7 @@ constexpr auto REPLICATION_PROTOCOL_VERSION_WITH_PARTS_SIZE_AND_TTL_INFOS = 2; constexpr auto REPLICATION_PROTOCOL_VERSION_WITH_PARTS_TYPE = 3; constexpr auto REPLICATION_PROTOCOL_VERSION_WITH_PARTS_DEFAULT_COMPRESSION = 4; constexpr auto REPLICATION_PROTOCOL_VERSION_WITH_PARTS_UUID = 5; +constexpr auto REPLICATION_PROTOCOL_VERSION_WITH_PARTS_S3_COPY = 6; std::string getEndpointId(const std::string & node_id) @@ -112,7 +116,7 @@ void Service::processQuery(const HTMLForm & params, ReadBuffer & /*body*/, Write } /// We pretend to work as older server version, to be sure that client will correctly process our version - response.addCookie({"server_protocol_version", toString(std::min(client_protocol_version, REPLICATION_PROTOCOL_VERSION_WITH_PARTS_UUID))}); + response.addCookie({"server_protocol_version", toString(std::min(client_protocol_version, REPLICATION_PROTOCOL_VERSION_WITH_PARTS_S3_COPY))}); 
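The cookie just above is one half of a version negotiation: the fetcher advertises its protocol revision, the server clamps it with std::min() against its own, and optional features such as the S3 metadata transfer are gated on the negotiated value. A minimal sketch of that rule, with illustrative constants mirroring the REPLICATION_PROTOCOL_VERSION_* values defined earlier in this file:

```cpp
#include <algorithm>

// Each side supports every revision up to its maximum, so both proceed with
// the minimum of the two advertised values; an old replica is then never sent
// frames (such as S3 metadata) that it cannot parse.
constexpr int WITH_PARTS_UUID = 5;     // illustrative stand-ins for the
constexpr int WITH_PARTS_S3_COPY = 6;  // protocol constants above

int negotiateProtocol(int client_version, int server_version)
{
    return std::min(client_version, server_version);
}

bool canUseS3Copy(int client_version, int server_version, bool part_is_on_s3)
{
    /// The server answers with "send_s3_metadata=1" only when both sides
    /// speak revision 6+ and the requested part actually lives on S3.
    return negotiateProtocol(client_version, server_version) >= WITH_PARTS_S3_COPY
        && part_is_on_s3;
}
```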
++total_sends; SCOPE_EXIT({--total_sends;}); @@ -148,7 +152,30 @@ void Service::processQuery(const HTMLForm & params, ReadBuffer & /*body*/, Write sendPartFromMemory(part, out); else { - sendPartFromDisk(part, out, client_protocol_version); + bool try_use_s3_copy = false; + + if (data_settings->allow_s3_zero_copy_replication + && client_protocol_version >= REPLICATION_PROTOCOL_VERSION_WITH_PARTS_S3_COPY) + { /// if source and destination are in the same S3 storage we try to use S3 CopyObject request first + int send_s3_metadata = parse(params.get("send_s3_metadata", "0")); + if (send_s3_metadata == 1) + { + auto disk = part->volume->getDisk(); + if (disk->getType() == DB::DiskType::Type::S3) + { + try_use_s3_copy = true; + } + } + } + if (try_use_s3_copy) + { + response.addCookie({"send_s3_metadata", "1"}); + sendPartS3Metadata(part, out); + } + else + { + sendPartFromDisk(part, out, client_protocol_version); + } } } catch (const NetException &) @@ -229,6 +256,55 @@ void Service::sendPartFromDisk(const MergeTreeData::DataPartPtr & part, WriteBuf part->checksums.checkEqual(data_checksums, false); } +void Service::sendPartS3Metadata(const MergeTreeData::DataPartPtr & part, WriteBuffer & out) +{ + /// We'll take a list of files from the list of checksums. + MergeTreeData::DataPart::Checksums checksums = part->checksums; + /// Add files that are not in the checksum list. + auto file_names_without_checksums = part->getFileNamesWithoutChecksums(); + for (const auto & file_name : file_names_without_checksums) + checksums.files[file_name] = {}; + + auto disk = part->volume->getDisk(); + if (disk->getType() != DB::DiskType::Type::S3) + throw Exception("S3 disk is not S3 anymore", ErrorCodes::LOGICAL_ERROR); + + part->storage.lockSharedData(*part); + + String part_id = part->getUniqueId(); + writeStringBinary(part_id, out); + + writeBinary(checksums.files.size(), out); + for (const auto & it : checksums.files) + { + String file_name = it.first; + + String metadata_file = disk->getPath() + part->getFullRelativePath() + file_name; + + Poco::File metadata(metadata_file); + + if (!metadata.exists()) + throw Exception("S3 metadata '" + file_name + "' does not exist", ErrorCodes::CORRUPTED_DATA); + if (!metadata.isFile()) + throw Exception("S3 metadata '" + file_name + "' is not a file", ErrorCodes::CORRUPTED_DATA); + UInt64 file_size = metadata.getSize(); + + writeStringBinary(it.first, out); + writeBinary(file_size, out); + + auto file_in = createReadBufferFromFileBase(metadata_file, 0, 0, 0, nullptr, DBMS_DEFAULT_BUFFER_SIZE); + HashingWriteBuffer hashing_out(out); + copyData(*file_in, hashing_out, blocker.getCounter()); + if (blocker.isCancelled()) + throw Exception("Transferring part to replica was cancelled", ErrorCodes::ABORTED); + + if (hashing_out.count() != file_size) + throw Exception("Unexpected size of file " + metadata_file, ErrorCodes::BAD_SIZE_OF_FILE_IN_DATA_PART); + + writePODBinary(hashing_out.getHash(), out); + } +} + MergeTreeData::DataPartPtr Service::findPart(const String & name) { /// It is important to include PreCommitted and Outdated parts here because remote replicas cannot reliably @@ -252,7 +328,10 @@ MergeTreeData::MutableDataPartPtr Fetcher::fetchPart( const String & password, const String & interserver_scheme, bool to_detached, - const String & tmp_prefix_) + const String & tmp_prefix_, + std::optional * tagger_ptr, + bool try_use_s3_copy, + const DiskPtr disk_s3) { if (blocker.isCancelled()) throw Exception("Fetching of part was cancelled", ErrorCodes::ABORTED); @@ -269,10
+348,36 @@ MergeTreeData::MutableDataPartPtr Fetcher::fetchPart( { {"endpoint", getEndpointId(replica_path)}, {"part", part_name}, - {"client_protocol_version", toString(REPLICATION_PROTOCOL_VERSION_WITH_PARTS_UUID)}, + {"client_protocol_version", toString(REPLICATION_PROTOCOL_VERSION_WITH_PARTS_S3_COPY)}, {"compress", "false"} }); + if (try_use_s3_copy && disk_s3 && disk_s3->getType() != DB::DiskType::Type::S3) + throw Exception("Trying to fetch shared S3 part on non-S3 disk", ErrorCodes::LOGICAL_ERROR); + + Disks disks_s3; + + if (!data_settings->allow_s3_zero_copy_replication) + try_use_s3_copy = false; + + if (try_use_s3_copy) + { + if (disk_s3) + disks_s3.push_back(disk_s3); + else + { + disks_s3 = data.getDisksByType(DiskType::Type::S3); + + if (disks_s3.empty()) + try_use_s3_copy = false; + } + } + + if (try_use_s3_copy) + { + uri.addQueryParameter("send_s3_metadata", "1"); + } + Poco::Net::HTTPBasicCredentials creds{}; if (!user.empty()) { @@ -293,6 +398,44 @@ MergeTreeData::MutableDataPartPtr Fetcher::fetchPart( int server_protocol_version = parse(in.getResponseCookie("server_protocol_version", "0")); + int send_s3 = parse(in.getResponseCookie("send_s3_metadata", "0")); + + if (send_s3 == 1) + { + if (server_protocol_version < REPLICATION_PROTOCOL_VERSION_WITH_PARTS_S3_COPY) + throw Exception("Got 'send_s3_metadata' cookie with old protocol version", ErrorCodes::LOGICAL_ERROR); + if (!try_use_s3_copy) + throw Exception("Got 'send_s3_metadata' cookie when it was not requested", ErrorCodes::LOGICAL_ERROR); + + size_t sum_files_size = 0; + readBinary(sum_files_size, in); + IMergeTreeDataPart::TTLInfos ttl_infos; + /// Skip ttl infos, not required for S3 metadata + String ttl_infos_string; + readBinary(ttl_infos_string, in); + String part_type = "Wide"; + readStringBinary(part_type, in); + if (part_type == "InMemory") + throw Exception("Got 'send_s3_metadata' cookie for in-memory part", ErrorCodes::INCORRECT_PART_TYPE); + + UUID part_uuid = UUIDHelpers::Nil; + if (server_protocol_version >= REPLICATION_PROTOCOL_VERSION_WITH_PARTS_UUID) + readUUIDText(part_uuid, in); + + try + { + return downloadPartToS3(part_name, replica_path, to_detached, tmp_prefix_, std::move(disks_s3), in); + } + catch (const Exception & e) + { + if (e.code() != ErrorCodes::S3_ERROR) + throw; + /// Try again but without S3 copy + return fetchPart(metadata_snapshot, part_name, replica_path, host, port, timeouts, + user, password, interserver_scheme, to_detached, tmp_prefix_, nullptr, false); + } + } + ReservationPtr reservation; size_t sum_files_size = 0; if (server_protocol_version >= REPLICATION_PROTOCOL_VERSION_WITH_PARTS_SIZE) @@ -306,10 +449,18 @@ MergeTreeData::MutableDataPartPtr Fetcher::fetchPart( ReadBufferFromString ttl_infos_buffer(ttl_infos_string); assertString("ttl format version: 1\n", ttl_infos_buffer); ttl_infos.read(ttl_infos_buffer); - reservation = data.reserveSpacePreferringTTLRules(metadata_snapshot, sum_files_size, ttl_infos, std::time(nullptr), 0, true); + reservation + = data.balancedReservation(metadata_snapshot, sum_files_size, 0, part_name, part_info, {}, tagger_ptr, &ttl_infos, true); + if (!reservation) + reservation + = data.reserveSpacePreferringTTLRules(metadata_snapshot, sum_files_size, ttl_infos, std::time(nullptr), 0, true); } else - reservation = data.reserveSpace(sum_files_size); + { + reservation = data.balancedReservation(metadata_snapshot, sum_files_size, 0, part_name, part_info, {}, tagger_ptr, nullptr); + if (!reservation) + reservation = data.reserveSpace(sum_files_size); + }
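The try/catch above implements graceful degradation: if the zero-copy S3 path fails with an S3-specific error, the fetch is re-issued as an ordinary full download rather than failing outright. A compact sketch of the pattern (the exception type and both download functions are illustrative stand-ins):

```cpp
#include <stdexcept>
#include <string>

struct S3Error : std::runtime_error { using std::runtime_error::runtime_error; };

std::string downloadViaS3Copy(const std::string & part)
{
    /// Simulates the zero-copy path failing, e.g. the object is already gone.
    throw S3Error("S3 object for part " + part + " is unavailable");
}

std::string downloadFullData(const std::string & part)
{
    return "data for " + part; /// placeholder for the regular streamed download
}

std::string fetchWithFallback(const std::string & part, bool try_s3)
{
    if (try_s3)
    {
        try
        {
            return downloadViaS3Copy(part);
        }
        catch (const S3Error &)
        {
            /// Fall through and try again without the S3 copy,
            /// mirroring the retry in fetchPart() above.
        }
    }
    return downloadFullData(part);
}
```

Only the S3-specific error is swallowed; any other exception still propagates, exactly as the `e.code() != ErrorCodes::S3_ERROR` check above rethrows.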
} else { @@ -330,7 +481,7 @@ MergeTreeData::MutableDataPartPtr Fetcher::fetchPart( auto storage_id = data.getStorageID(); String new_part_path = part_type == "InMemory" ? "memory" : data.getFullPathOnDisk(reservation->getDisk()) + part_name + "/"; - auto entry = data.global_context.getReplicatedFetchList().insert( + auto entry = data.getContext()->getReplicatedFetchList().insert( storage_id.getDatabaseName(), storage_id.getTableName(), part_info.partition_id, part_name, new_part_path, replica_path, uri, to_detached, sum_files_size); @@ -392,11 +543,22 @@ MergeTreeData::MutableDataPartPtr Fetcher::downloadPartToDisk( static const String TMP_PREFIX = "tmp_fetch_"; String tmp_prefix = tmp_prefix_.empty() ? TMP_PREFIX : tmp_prefix_; + /// We will remove the directory if it already exists. Take precautions. + if (tmp_prefix.empty() + || part_name.empty() + || std::string::npos != tmp_prefix.find_first_of("/.") + || std::string::npos != part_name.find_first_of("/.")) + throw Exception("Logical error: tmp_prefix and part_name cannot be empty or contain '.' or '/' characters.", ErrorCodes::LOGICAL_ERROR); + String part_relative_path = String(to_detached ? "detached/" : "") + tmp_prefix + part_name; String part_download_path = data.getRelativeDataPath() + part_relative_path + "/"; if (disk->exists(part_download_path)) - throw Exception("Directory " + fullPath(disk, part_download_path) + " already exists.", ErrorCodes::DIRECTORY_ALREADY_EXISTS); + { + LOG_WARNING(log, "Directory {} already exists, probably the result of a failed fetch. Will remove it before fetching the part.", + fullPath(disk, part_download_path)); + disk->removeRecursive(part_download_path); + } disk->createDirectories(part_download_path); @@ -462,6 +624,100 @@ MergeTreeData::MutableDataPartPtr Fetcher::downloadPartToDisk( return new_data_part; } +MergeTreeData::MutableDataPartPtr Fetcher::downloadPartToS3( + const String & part_name, + const String & replica_path, + bool to_detached, + const String & tmp_prefix_, + const Disks & disks_s3, + PooledReadWriteBufferFromHTTP & in + ) +{ + if (disks_s3.empty()) + throw Exception("No S3 disks anymore", ErrorCodes::LOGICAL_ERROR); + + String part_id; + readStringBinary(part_id, in); + + DiskPtr disk = disks_s3[0]; + + for (const auto & disk_s3 : disks_s3) + { + if (disk_s3->checkUniqueId(part_id)) + { + disk = disk_s3; + break; + } + } + + static const String TMP_PREFIX = "tmp_fetch_"; + String tmp_prefix = tmp_prefix_.empty() ? TMP_PREFIX : tmp_prefix_; + + String part_relative_path = String(to_detached ?
"detached/" : "") + tmp_prefix + part_name; + String part_download_path = data.getRelativeDataPath() + part_relative_path + "/"; + + if (disk->exists(part_download_path)) + throw Exception("Directory " + fullPath(disk, part_download_path) + " already exists.", ErrorCodes::DIRECTORY_ALREADY_EXISTS); + + CurrentMetrics::Increment metric_increment{CurrentMetrics::ReplicatedFetch}; + + disk->createDirectories(part_download_path); + + size_t files; + readBinary(files, in); + + auto volume = std::make_shared("volume_" + part_name, disk); + MergeTreeData::MutableDataPartPtr new_data_part = data.createPart(part_name, volume, part_relative_path); + + for (size_t i = 0; i < files; ++i) + { + String file_name; + UInt64 file_size; + + readStringBinary(file_name, in); + readBinary(file_size, in); + + String data_path = new_data_part->getFullRelativePath() + file_name; + String metadata_file = fullPath(disk, data_path); + + { + auto file_out = std::make_unique(metadata_file, DBMS_DEFAULT_BUFFER_SIZE, -1, 0666, nullptr, 0); + + HashingWriteBuffer hashing_out(*file_out); + + copyData(in, hashing_out, file_size, blocker.getCounter()); + + if (blocker.isCancelled()) + { + /// NOTE The is_cancelled flag also makes sense to check every time you read over the network, + /// performing a poll with a not very large timeout. + /// And now we check it only between read chunks (in the `copyData` function). + disk->removeSharedRecursive(part_download_path, true); + throw Exception("Fetching of part was cancelled", ErrorCodes::ABORTED); + } + + MergeTreeDataPartChecksum::uint128 expected_hash; + readPODBinary(expected_hash, in); + + if (expected_hash != hashing_out.getHash()) + { + throw Exception("Checksum mismatch for file " + metadata_file + " transferred from " + replica_path, + ErrorCodes::CHECKSUM_DOESNT_MATCH); + } + } + } + + assertEOF(in); + + new_data_part->is_temp = true; + new_data_part->modification_time = time(nullptr); + new_data_part->loadColumnsChecksumsIndexes(true, false); + + new_data_part->storage.lockSharedData(*new_data_part); + + return new_data_part; +} + } } diff --git a/src/Storages/MergeTree/DataPartsExchange.h b/src/Storages/MergeTree/DataPartsExchange.h index 834fed1182f..e9b3d443fcd 100644 --- a/src/Storages/MergeTree/DataPartsExchange.h +++ b/src/Storages/MergeTree/DataPartsExchange.h @@ -9,6 +9,12 @@ #include +namespace zkutil +{ + class ZooKeeper; + using ZooKeeperPtr = std::shared_ptr; +} + namespace DB { @@ -32,6 +38,7 @@ private: MergeTreeData::DataPartPtr findPart(const String & name); void sendPartFromMemory(const MergeTreeData::DataPartPtr & part, WriteBuffer & out); void sendPartFromDisk(const MergeTreeData::DataPartPtr & part, WriteBuffer & out, int client_protocol_version); + void sendPartS3Metadata(const MergeTreeData::DataPartPtr & part, WriteBuffer & out); /// StorageReplicatedMergeTree::shutdown() waits for all parts exchange handlers to finish, /// so Service will never access dangling reference to storage @@ -58,7 +65,10 @@ public: const String & password, const String & interserver_scheme, bool to_detached = false, - const String & tmp_prefix_ = ""); + const String & tmp_prefix_ = "", + std::optional * tagger_ptr = nullptr, + bool try_use_s3_copy = true, + const DiskPtr disk_s3 = nullptr); /// You need to stop the data transfer. 
ActionBlocker blocker; @@ -80,6 +90,14 @@ private: ReservationPtr reservation, PooledReadWriteBufferFromHTTP & in); + MergeTreeData::MutableDataPartPtr downloadPartToS3( + const String & part_name, + const String & replica_path, + bool to_detached, + const String & tmp_prefix_, + const Disks & disks_s3, + PooledReadWriteBufferFromHTTP & in); + MergeTreeData & data; Poco::Logger * log; }; diff --git a/src/Storages/MergeTree/IMergeTreeDataPart.cpp b/src/Storages/MergeTree/IMergeTreeDataPart.cpp index 1568ca16254..36032f9208f 100644 --- a/src/Storages/MergeTree/IMergeTreeDataPart.cpp +++ b/src/Storages/MergeTree/IMergeTreeDataPart.cpp @@ -9,8 +9,10 @@ #include #include #include +#include #include #include +#include #include #include #include @@ -35,6 +37,7 @@ namespace CurrentMetrics namespace DB { + namespace ErrorCodes { extern const int DIRECTORY_ALREADY_EXISTS; @@ -68,12 +71,12 @@ void IMergeTreeDataPart::MinMaxIndex::load(const MergeTreeData & data, const Dis { String file_name = part_path + "minmax_" + escapeForFileName(minmax_column_names[i]) + ".idx"; auto file = openForReading(disk_, file_name); - const DataTypePtr & data_type = minmax_column_types[i]; + auto serialization = minmax_column_types[i]->getDefaultSerialization(); Field min_val; - data_type->deserializeBinary(min_val, *file); + serialization->deserializeBinary(min_val, *file); Field max_val; - data_type->deserializeBinary(max_val, *file); + serialization->deserializeBinary(max_val, *file); hyperrectangle.emplace_back(min_val, true, max_val, true); } @@ -106,12 +109,12 @@ void IMergeTreeDataPart::MinMaxIndex::store( for (size_t i = 0; i < column_names.size(); ++i) { String file_name = "minmax_" + escapeForFileName(column_names[i]) + ".idx"; - const DataTypePtr & data_type = data_types.at(i); + auto serialization = data_types.at(i)->getDefaultSerialization(); auto out = disk_->writeFile(part_path + file_name); HashingWriteBuffer out_hashing(*out); - data_type->serializeBinary(hyperrectangle[i].left, out_hashing); - data_type->serializeBinary(hyperrectangle[i].right, out_hashing); + serialization->serializeBinary(hyperrectangle[i].left, out_hashing); + serialization->serializeBinary(hyperrectangle[i].right, out_hashing); out_hashing.next(); out_checksums.files[file_name].file_size = out_hashing.count(); out_checksums.files[file_name].file_hash = out_hashing.getHash(); @@ -330,40 +333,49 @@ IMergeTreeDataPart::State IMergeTreeDataPart::getState() const } -DayNum IMergeTreeDataPart::getMinDate() const +std::pair IMergeTreeDataPart::getMinMaxDate() const { if (storage.minmax_idx_date_column_pos != -1 && minmax_idx.initialized) - return DayNum(minmax_idx.hyperrectangle[storage.minmax_idx_date_column_pos].left.get()); + { + const auto & hyperrectangle = minmax_idx.hyperrectangle[storage.minmax_idx_date_column_pos]; + return {DayNum(hyperrectangle.left.get()), DayNum(hyperrectangle.right.get())}; + } else - return DayNum(); + return {}; } - -DayNum IMergeTreeDataPart::getMaxDate() const -{ - if (storage.minmax_idx_date_column_pos != -1 && minmax_idx.initialized) - return DayNum(minmax_idx.hyperrectangle[storage.minmax_idx_date_column_pos].right.get()); - else - return DayNum(); -} - -time_t IMergeTreeDataPart::getMinTime() const +std::pair IMergeTreeDataPart::getMinMaxTime() const { if (storage.minmax_idx_time_column_pos != -1 && minmax_idx.initialized) - return minmax_idx.hyperrectangle[storage.minmax_idx_time_column_pos].left.get(); + { + const auto & hyperrectangle = minmax_idx.hyperrectangle[storage.minmax_idx_time_column_pos]; 
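MinMaxIndex::store() and load() above persist just two values per minmax column, the minimum and the maximum, in a small per-column .idx file; the loaded pairs form the hyperrectangle that getMinMaxDate() and getMinMaxTime() read. A standalone sketch of that round trip, with plain int64 values and std::fstream standing in for Field serialization and the IDisk abstraction:

```cpp
#include <cstdint>
#include <fstream>
#include <string>
#include <utility>

/// Write the (min, max) pair for one column into its own index file.
void storeMinMax(const std::string & path, int64_t min_val, int64_t max_val)
{
    std::ofstream out(path, std::ios::binary);
    out.write(reinterpret_cast<const char *>(&min_val), sizeof(min_val));
    out.write(reinterpret_cast<const char *>(&max_val), sizeof(max_val));
}

/// Read the pair back; this is the per-column edge of the hyperrectangle.
std::pair<int64_t, int64_t> loadMinMax(const std::string & path)
{
    std::ifstream in(path, std::ios::binary);
    int64_t min_val = 0;
    int64_t max_val = 0;
    in.read(reinterpret_cast<char *>(&min_val), sizeof(min_val));
    in.read(reinterpret_cast<char *>(&max_val), sizeof(max_val));
    return {min_val, max_val};
}
```

Because only the two boundary values are stored, the index stays constant-sized per column no matter how many rows the part contains.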
+ + /// The case of DateTime + if (hyperrectangle.left.getType() == Field::Types::UInt64) + { + assert(hyperrectangle.right.getType() == Field::Types::UInt64); + return {hyperrectangle.left.get(), hyperrectangle.right.get()}; + } + /// The case of DateTime64 + else if (hyperrectangle.left.getType() == Field::Types::Decimal64) + { + assert(hyperrectangle.right.getType() == Field::Types::Decimal64); + + auto left = hyperrectangle.left.get>(); + auto right = hyperrectangle.right.get>(); + + assert(left.getScale() == right.getScale()); + + return { left.getValue() / left.getScaleMultiplier(), right.getValue() / right.getScaleMultiplier() }; + } + else + throw Exception(ErrorCodes::LOGICAL_ERROR, "Part minmax index by time is neither DateTime nor DateTime64"); + } else - return 0; + return {}; } -time_t IMergeTreeDataPart::getMaxTime() const -{ - if (storage.minmax_idx_time_column_pos != -1 && minmax_idx.initialized) - return minmax_idx.hyperrectangle[storage.minmax_idx_time_column_pos].right.get(); - else - return 0; -} - void IMergeTreeDataPart::setColumns(const NamesAndTypesList & new_columns) { columns = new_columns; @@ -404,7 +416,7 @@ void IMergeTreeDataPart::removeIfNeeded() } } - remove(); + remove(false); if (state == State::DeleteOnDestroy) { @@ -597,9 +609,13 @@ void IMergeTreeDataPart::loadIndex() size_t marks_count = index_granularity.getMarksCount(); + Serializations serializations(key_size); + for (size_t j = 0; j < key_size; ++j) + serializations[j] = primary_key.data_types[j]->getDefaultSerialization(); + for (size_t i = 0; i < marks_count; ++i) //-V756 for (size_t j = 0; j < key_size; ++j) - primary_key.data_types[j]->deserializeBinary(*loaded_index[j], *index_file); + serializations[j]->deserializeBinary(*loaded_index[j], *index_file); for (size_t i = 0; i < key_size; ++i) { @@ -690,12 +706,18 @@ CompressionCodecPtr IMergeTreeDataPart::detectDefaultCompressionCodec() const auto column_size = getColumnSize(part_column.name, *part_column.type); if (column_size.data_compressed != 0 && !storage_columns.hasCompressionCodec(part_column.name)) { + auto serialization = IDataType::getSerialization(part_column, + [&](const String & stream_name) + { + return volume->getDisk()->exists(stream_name + IMergeTreeDataPart::DATA_FILE_EXTENSION); + }); + String path_to_data_file; - part_column.type->enumerateStreams([&](const IDataType::SubstreamPath & substream_path, const IDataType & /* substream_type */) + serialization->enumerateStreams([&](const ISerialization::SubstreamPath & substream_path) { if (path_to_data_file.empty()) { - String candidate_path = getFullRelativePath() + IDataType::getFileNameForStream(part_column, substream_path) + ".bin"; + String candidate_path = getFullRelativePath() + ISerialization::getFileNameForStream(part_column, substream_path) + ".bin"; /// We can have existing, but empty .bin files. Example: LowCardinality(Nullable(...)) columns and column_name.dict.null.bin file.
if (volume->getDisk()->exists(candidate_path) && volume->getDisk()->getFileSize(candidate_path) != 0) @@ -751,7 +773,8 @@ void IMergeTreeDataPart::loadPartitionAndMinMaxIndex() void IMergeTreeDataPart::loadChecksums(bool require) { - String path = getFullRelativePath() + "checksums.txt"; + const String path = getFullRelativePath() + "checksums.txt"; + if (volume->getDisk()->exists(path)) { auto buf = openForReading(volume->getDisk(), path); @@ -766,12 +789,14 @@ void IMergeTreeDataPart::loadChecksums(bool require) else { if (require) - throw Exception("No checksums.txt in part " + name, ErrorCodes::NO_FILE_IN_DATA_PART); + throw Exception(ErrorCodes::NO_FILE_IN_DATA_PART, "No checksums.txt in part {}", name); /// If the checksums file is not present, calculate the checksums and write them to disk. /// Check the data while we are at it. LOG_WARNING(storage.log, "Checksums for part {} not found. Will calculate them from data on disk.", name); + checksums = checkDataPart(shared_from_this(), false); + { auto out = volume->getDisk()->writeFile(getFullRelativePath() + "checksums.txt.tmp", 4096); checksums.write(*out); @@ -934,7 +959,8 @@ void IMergeTreeDataPart::loadColumns(bool require) { /// We can get list of columns only from columns.txt in compact parts. if (require || part_type == Type::COMPACT) - throw Exception("No columns.txt in part " + name, ErrorCodes::NO_FILE_IN_DATA_PART); + throw Exception("No columns.txt in part " + name + ", expected path " + path + " on drive " + volume->getDisk()->getName(), + ErrorCodes::NO_FILE_IN_DATA_PART); /// If there is no file with a list of columns, write it down. for (const NameAndTypePair & column : metadata_snapshot->getColumns().getAllPhysical()) @@ -1009,16 +1035,18 @@ void IMergeTreeDataPart::renameTo(const String & new_relative_path, bool remove_ } volume->getDisk()->setLastModified(from, Poco::Timestamp::fromEpochTime(time(nullptr))); - volume->getDisk()->moveFile(from, to); + volume->getDisk()->moveDirectory(from, to); relative_path = new_relative_path; SyncGuardPtr sync_guard; if (storage.getSettings()->fsync_part_directory) sync_guard = volume->getDisk()->getDirectorySyncGuard(to); + + storage.lockSharedData(*this); } -void IMergeTreeDataPart::remove() const +void IMergeTreeDataPart::remove(bool keep_s3) const { if (!isStoredOnDisk()) return; @@ -1048,7 +1076,7 @@ void IMergeTreeDataPart::remove() const try { - volume->getDisk()->removeRecursive(to + "/"); + volume->getDisk()->removeSharedRecursive(to + "/", keep_s3); } catch (...) { @@ -1059,7 +1087,7 @@ void IMergeTreeDataPart::remove() const try { - volume->getDisk()->moveFile(from, to); + volume->getDisk()->moveDirectory(from, to); } catch (const Poco::FileNotFoundException &) { @@ -1071,7 +1099,7 @@ void IMergeTreeDataPart::remove() const if (checksums.empty()) { /// If the part is not completely written, we cannot use fast path by listing files. - volume->getDisk()->removeRecursive(to + "/"); + volume->getDisk()->removeSharedRecursive(to + "/", keep_s3); } else { @@ -1079,21 +1107,21 @@ void IMergeTreeDataPart::remove() const { /// Remove each expected file in directory, then remove directory itself. 
- #if !__clang__ + #if !defined(__clang__) # pragma GCC diagnostic push # pragma GCC diagnostic ignored "-Wunused-variable" #endif for (const auto & [file, _] : checksums.files) - volume->getDisk()->removeFile(to + "/" + file); - #if !__clang__ + volume->getDisk()->removeSharedFile(to + "/" + file, keep_s3); + #if !defined(__clang__) # pragma GCC diagnostic pop #endif for (const auto & file : {"checksums.txt", "columns.txt"}) - volume->getDisk()->removeFile(to + "/" + file); + volume->getDisk()->removeSharedFile(to + "/" + file, keep_s3); - volume->getDisk()->removeFileIfExists(to + "/" + DEFAULT_COMPRESSION_CODEC_FILE_NAME); - volume->getDisk()->removeFileIfExists(to + "/" + DELETE_ON_DESTROY_MARKER_FILE_NAME); + volume->getDisk()->removeSharedFileIfExists(to + "/" + DEFAULT_COMPRESSION_CODEC_FILE_NAME, keep_s3); + volume->getDisk()->removeSharedFileIfExists(to + "/" + DELETE_ON_DESTROY_MARKER_FILE_NAME, keep_s3); volume->getDisk()->removeDirectory(to); } @@ -1103,7 +1131,7 @@ void IMergeTreeDataPart::remove() const LOG_ERROR(storage.log, "Cannot quickly remove directory {} by removing files; fallback to recursive removal. Reason: {}", fullPath(volume->getDisk(), to), getCurrentExceptionMessage(false)); - volume->getDisk()->removeRecursive(to + "/"); + volume->getDisk()->removeSharedRecursive(to + "/", keep_s3); } } } @@ -1168,7 +1196,6 @@ void IMergeTreeDataPart::makeCloneOnDisk(const DiskPtr & disk, const String & di disk->removeRecursive(path_to_clone + relative_path + '/'); } disk->createDirectories(path_to_clone); - volume->getDisk()->copy(getFullRelativePath(), disk, path_to_clone); volume->getDisk()->removeFileIfExists(path_to_clone + '/' + DELETE_ON_DESTROY_MARKER_FILE_NAME); } @@ -1305,6 +1332,48 @@ bool IMergeTreeDataPart::checkAllTTLCalculated(const StorageMetadataPtr & metada return true; } +SerializationPtr IMergeTreeDataPart::getSerializationForColumn(const NameAndTypePair & column) const +{ + return IDataType::getSerialization(column, + [&](const String & stream_name) + { + return checksums.files.count(stream_name + DATA_FILE_EXTENSION) != 0; + }); +} + +String IMergeTreeDataPart::getUniqueId() const +{ + String id; + + auto disk = volume->getDisk(); + + if (disk->getType() == DB::DiskType::Type::S3) + id = disk->getUniqueId(getFullRelativePath() + "checksums.txt"); + + if (id.empty()) + throw Exception("Can't get unique S3 object", ErrorCodes::LOGICAL_ERROR); + + return id; +} + + +String IMergeTreeDataPart::getZeroLevelPartBlockID() const +{ + if (info.level != 0) + throw Exception(ErrorCodes::LOGICAL_ERROR, "Trying to get block id for non zero level part {}", name); + + SipHash hash; + checksums.computeTotalChecksumDataOnly(hash); + union + { + char bytes[16]; + UInt64 words[2]; + } hash_value; + hash.get128(hash_value.bytes); + + return info.partition_id + "_" + toString(hash_value.words[0]) + "_" + toString(hash_value.words[1]); +} + bool isCompactPart(const MergeTreeDataPartPtr & data_part) { return (data_part && data_part->getType() == MergeTreeDataPartType::COMPACT); diff --git a/src/Storages/MergeTree/IMergeTreeDataPart.h b/src/Storages/MergeTree/IMergeTreeDataPart.h index 2f531bd8391..4e531826c98 100644 --- a/src/Storages/MergeTree/IMergeTreeDataPart.h +++ b/src/Storages/MergeTree/IMergeTreeDataPart.h @@ -16,12 +16,17 @@ #include #include #include -#include #include #include +namespace zkutil +{ + class ZooKeeper; + using ZooKeeperPtr = std::shared_ptr; +} + namespace DB { @@ -48,6 +53,8 @@ namespace ErrorCodes class IMergeTreeDataPart : public 
std::enable_shared_from_this { public: + static constexpr auto DATA_FILE_EXTENSION = ".bin"; + using Checksums = MergeTreeDataPartChecksums; using Checksum = MergeTreeDataPartChecksums::Checksum; using ValueSizeMap = std::map; @@ -56,7 +63,7 @@ public: using MergeTreeWriterPtr = std::unique_ptr; using ColumnSizeByName = std::unordered_map; - using NameToPosition = std::unordered_map; + using NameToNumber = std::unordered_map; using Type = MergeTreeDataPartType; @@ -124,7 +131,7 @@ public: /// Throws an exception if part is not stored in on-disk format. void assertOnDisk() const; - void remove() const; + void remove(bool keep_s3 = false) const; /// Initialize columns (from columns.txt if exists, or create from column files if not). /// Load checksums from checksums.txt if exists. Load index if required. @@ -149,16 +156,17 @@ public: bool contains(const IMergeTreeDataPart & other) const { return info.contains(other.info); } - /// If the partition key includes date column (a common case), these functions will return min and max values for this column. - DayNum getMinDate() const; - DayNum getMaxDate() const; + /// If the partition key includes a date column (a common case), this function will return min and max values for that column. + std::pair getMinMaxDate() const; - /// otherwise, if the partition key includes dateTime column (also a common case), these functions will return min and max values for this column. - time_t getMinTime() const; - time_t getMaxTime() const; + /// Otherwise, if the partition key includes a dateTime column (also a common case), this function will return min and max values for that column. + std::pair getMinMaxTime() const; bool isEmpty() const { return rows_count == 0; } + /// Compute the block id for a zero-level part; throws an exception if the part's level is not zero. + String getZeroLevelPartBlockID() const; + const MergeTreeData & storage; String name; @@ -198,8 +206,8 @@ public: * * Possible state transitions: * Temporary -> Precommitted: we are trying to commit a fetched, inserted or merged part to active set - * Precommitted -> Outdated: we could not to add a part to active set and doing a rollback (for example it is duplicated part) - * Precommitted -> Committed: we successfully committed a part to active dataset + * Precommitted -> Outdated: we could not add a part to the active set and are doing a rollback (for example, it is a duplicated part) + * Precommitted -> Committed: we successfully committed a part to the active dataset * Precommitted -> Outdated: a part was replaced by a covering part or DROP PARTITION * Outdated -> Deleting: a cleaner selected this part for deletion * Deleting -> Outdated: if a ZooKeeper error occurred during the deletion, we will retry deletion @@ -361,6 +369,13 @@ public: /// part creation (using alter query with materialize_ttl setting). bool checkAllTTLCalculated(const StorageMetadataPtr & metadata_snapshot) const; + /// Returns the serialization for a column according to the files in which the column is written in this part.
+ SerializationPtr getSerializationForColumn(const NameAndTypePair & column) const; + + /// Returns some unique string for the file. + /// Required to distinguish different copies of the same part on S3 + String getUniqueId() const; + protected: /// Total size of all columns, calculated once in calculateColumnSizesOnDisk @@ -390,7 +405,7 @@ protected: private: /// In compact parts order of columns is necessary - NameToPosition column_name_to_position; + NameToNumber column_name_to_position; /// Reads part unique identifier (if exists) from uuid.txt void loadUUID(); diff --git a/src/Storages/MergeTree/IMergeTreeReader.cpp b/src/Storages/MergeTree/IMergeTreeReader.cpp index f28ca28b124..52d3e7ca9ab 100644 --- a/src/Storages/MergeTree/IMergeTreeReader.cpp +++ b/src/Storages/MergeTree/IMergeTreeReader.cpp @@ -187,10 +187,12 @@ void IMergeTreeReader::evaluateMissingDefaults(Block additional_columns, Columns } auto dag = DB::evaluateMissingDefaults( - additional_columns, columns, metadata_snapshot->getColumns(), storage.global_context); + additional_columns, columns, metadata_snapshot->getColumns(), storage.getContext()); if (dag) { - auto actions = std::make_shared(std::move(dag)); + auto actions = std::make_shared< ExpressionActions>(std::move(dag), + ExpressionActionsSettings::fromSettings(storage.getContext()->getSettingsRef())); actions->execute(additional_columns); } @@ -229,8 +231,9 @@ NameAndTypePair IMergeTreeReader::getColumnFromPart(const NameAndTypePair & requ { auto subcolumn_name = required_column.getSubcolumnName(); auto subcolumn_type = it->second->tryGetSubcolumnType(subcolumn_name); + if (!subcolumn_type) - subcolumn_type = required_column.type; + return required_column; return {it->first, subcolumn_name, it->second, subcolumn_type}; } @@ -267,7 +270,7 @@ void IMergeTreeReader::performRequiredConversions(Columns & res_columns) copy_block.insert({res_columns[pos], getColumnFromPart(*name_and_type).type, name_and_type->name}); } - DB::performRequiredConversions(copy_block, columns, storage.global_context); + DB::performRequiredConversions(copy_block, columns, storage.getContext()); /// Move columns from block.
name_and_type = columns.begin(); diff --git a/src/Storages/MergeTree/IMergeTreeReader.h b/src/Storages/MergeTree/IMergeTreeReader.h index d192339432f..0771bc3d5cb 100644 --- a/src/Storages/MergeTree/IMergeTreeReader.h +++ b/src/Storages/MergeTree/IMergeTreeReader.h @@ -16,7 +16,7 @@ class IMergeTreeReader : private boost::noncopyable { public: using ValueSizeMap = std::map; - using DeserializeBinaryBulkStateMap = std::map; + using DeserializeBinaryBulkStateMap = std::map; IMergeTreeReader( const MergeTreeData::DataPartPtr & data_part_, diff --git a/src/Storages/MergeTree/IMergedBlockOutputStream.cpp b/src/Storages/MergeTree/IMergedBlockOutputStream.cpp index 7e562ae03d6..e334cd486ef 100644 --- a/src/Storages/MergeTree/IMergedBlockOutputStream.cpp +++ b/src/Storages/MergeTree/IMergedBlockOutputStream.cpp @@ -30,10 +30,11 @@ NameSet IMergedBlockOutputStream::removeEmptyColumnsFromPart( std::map stream_counts; for (const NameAndTypePair & column : columns) { - column.type->enumerateStreams( - [&](const IDataType::SubstreamPath & substream_path, const IDataType & /* substream_path */) + auto serialization = data_part->getSerializationForColumn(column); + serialization->enumerateStreams( + [&](const ISerialization::SubstreamPath & substream_path) { - ++stream_counts[IDataType::getFileNameForStream(column, substream_path)]; + ++stream_counts[ISerialization::getFileNameForStream(column, substream_path)]; }, {}); } @@ -46,9 +47,9 @@ NameSet IMergedBlockOutputStream::removeEmptyColumnsFromPart( if (!column_with_type) continue; - IDataType::StreamCallback callback = [&](const IDataType::SubstreamPath & substream_path, const IDataType & /* substream_path */) + ISerialization::StreamCallback callback = [&](const ISerialization::SubstreamPath & substream_path) { - String stream_name = IDataType::getFileNameForStream(*column_with_type, substream_path); + String stream_name = ISerialization::getFileNameForStream(*column_with_type, substream_path); /// Delete files if they are no longer shared with another column. if (--stream_counts[stream_name] == 0) { @@ -57,8 +58,8 @@ NameSet IMergedBlockOutputStream::removeEmptyColumnsFromPart( } }; - IDataType::SubstreamPath stream_path; - column_with_type->type->enumerateStreams(callback, stream_path); + auto serialization = data_part->getSerializationForColumn(*column_with_type); + serialization->enumerateStreams(callback); } /// Remove files on disk and checksums diff --git a/src/Storages/MergeTree/IMergedBlockOutputStream.h b/src/Storages/MergeTree/IMergedBlockOutputStream.h index ed8da4d334b..b2ad5309017 100644 --- a/src/Storages/MergeTree/IMergedBlockOutputStream.h +++ b/src/Storages/MergeTree/IMergedBlockOutputStream.h @@ -24,9 +24,9 @@ public: } protected: - using SerializationState = IDataType::SerializeBinaryBulkStatePtr; + // using SerializationState = ISerialization::SerializeBinaryBulkStatePtr; - IDataType::OutputStreamGetter createStreamGetter(const String & name, WrittenOffsetColumns & offset_columns); + // ISerialization::OutputStreamGetter createStreamGetter(const String & name, WrittenOffsetColumns & offset_columns); /// Remove all columns marked expired in data_part. Also, clears checksums /// and columns array. Return set of removed files names. 
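removeEmptyColumnsFromPart() above has to cope with substream files shared between columns, so it first counts how many columns reference each stream and deletes a file only when the last reference disappears. A minimal sketch of that reference-counting step, with plain strings standing in for ISerialization substream paths:

```cpp
#include <map>
#include <set>
#include <string>
#include <vector>

using Streams = std::vector<std::string>;

/// Given the streams of all columns in the part and the streams of the
/// columns being removed, return the files that are now safe to delete.
std::set<std::string> filesToRemove(
    const std::vector<Streams> & all_columns,
    const std::vector<Streams> & removed_columns)
{
    std::map<std::string, int> counts;
    for (const auto & column : all_columns)
        for (const auto & stream : column)
            ++counts[stream];

    std::set<std::string> removable;
    for (const auto & column : removed_columns)
        for (const auto & stream : column)
            if (--counts[stream] == 0) /// last reference gone
                removable.insert(stream);
    return removable;
}
```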
diff --git a/src/Storages/MergeTree/KeyCondition.cpp b/src/Storages/MergeTree/KeyCondition.cpp index 8f5dec8077d..f0d23041279 100644 --- a/src/Storages/MergeTree/KeyCondition.cpp +++ b/src/Storages/MergeTree/KeyCondition.cpp @@ -24,6 +24,8 @@ #include #include +#include + namespace DB { @@ -91,7 +93,7 @@ static String extractFixedPrefixFromLikePattern(const String & like_pattern) */ static String firstStringThatIsGreaterThanAllStringsWithPrefix(const String & prefix) { - /** Increment the last byte of the prefix by one. But if it is 255, then remove it and increase the previous one. + /** Increment the last byte of the prefix by one. But if it is max (255), then remove it and increase the previous one. * Example (for convenience, suppose that the maximum value of byte is `z`) * abcx -> abcy * abcz -> abd @@ -101,7 +103,7 @@ static String firstStringThatIsGreaterThanAllStringsWithPrefix(const String & pr String res = prefix; - while (!res.empty() && static_cast(res.back()) == 255) + while (!res.empty() && static_cast(res.back()) == std::numeric_limits::max()) res.pop_back(); if (res.empty()) @@ -309,11 +311,11 @@ static const std::map inverse_relations = { bool isLogicalOperator(const String & func_name) { - return (func_name == "and" || func_name == "or" || func_name == "not"); + return (func_name == "and" || func_name == "or" || func_name == "not" || func_name == "indexHint"); } /// The node can be one of: -/// - Logical operator (AND, OR, NOT) +/// - Logical operator (AND, OR, NOT and indexHint() - logical NOOP) /// - An "atom" (relational operator, constant, expression) /// - A logical constant expression /// - Any other function @@ -330,7 +332,8 @@ ASTPtr cloneASTWithInversionPushDown(const ASTPtr node, const bool need_inversio const auto result_node = makeASTFunction(func->name); - if (need_inversion) + /// indexHint() is a special case - logical NOOP function + if (result_node->name != "indexHint" && need_inversion) { result_node->name = (result_node->name == "and") ? "or" : "and"; } @@ -370,7 +373,7 @@ inline bool Range::less(const Field & lhs, const Field & rhs) { return applyVisi * For index to work when something like "WHERE Date = toDate(now())" is written. 
*/ Block KeyCondition::getBlockWithConstants( - const ASTPtr & query, const TreeRewriterResultPtr & syntax_analyzer_result, const Context & context) + const ASTPtr & query, const TreeRewriterResultPtr & syntax_analyzer_result, ContextPtr context) { Block result { @@ -387,7 +390,7 @@ Block KeyCondition::getBlockWithConstants( KeyCondition::KeyCondition( const SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, const Names & key_column_names, const ExpressionActionsPtr & key_expr_, bool single_point_, @@ -444,7 +447,8 @@ bool KeyCondition::addCondition(const String & column, const Range & range) */ bool KeyCondition::getConstant(const ASTPtr & expr, Block & block_with_constants, Field & out_value, DataTypePtr & out_type) { - String column_name = expr->getColumnNameWithoutAlias(); + // Constant expr should use alias names if any + String column_name = expr->getColumnName(); if (const auto * lit = expr->as()) { @@ -555,7 +559,7 @@ static FieldRef applyFunction(const FunctionBasePtr & func, const DataTypePtr & return {field.columns, field.row_idx, result_idx}; } -void KeyCondition::traverseAST(const ASTPtr & node, const Context & context, Block & block_with_constants) +void KeyCondition::traverseAST(const ASTPtr & node, ContextPtr context, Block & block_with_constants) { RPNElement element; @@ -595,19 +599,8 @@ bool KeyCondition::canConstantBeWrappedByMonotonicFunctions( Field & out_value, DataTypePtr & out_type) { - /// We don't look for inversed key transformations when strict is true, which is required for trivial count(). - /// Consider the following test case: - /// - /// create table test1(p DateTime, k int) engine MergeTree partition by toDate(p) order by k; - /// insert into test1 values ('2020-09-01 00:01:02', 1), ('2020-09-01 20:01:03', 2), ('2020-09-02 00:01:03', 3); - /// select count() from test1 where p > toDateTime('2020-09-01 10:00:00'); - /// - /// toDate(DateTime) is always monotonic, but we cannot relaxing the predicates to be - /// >= toDate(toDateTime('2020-09-01 10:00:00')), which returns 3 instead of the right count: 2. - if (strict) - return false; - - String expr_name = node->getColumnNameWithoutAlias(); + // Constant expr should use alias names if any + String expr_name = node->getColumnName(); const auto & sample_block = key_expr->getSampleBlock(); if (!sample_block.has(expr_name)) return false; @@ -619,8 +612,8 @@ bool KeyCondition::canConstantBeWrappedByMonotonicFunctions( bool found_transformation = false; auto input_column = sample_block.getByName(expr_name); auto const_column = out_type->createColumnConst(1, out_value); - out_value = (*castColumn({const_column, out_type, "c"}, input_column.type))[0]; - out_type = input_column.type; + auto const_value = (*castColumn({const_column, out_type, "c"}, input_column.type))[0]; + auto const_type = input_column.type; for (const auto & action : key_expr->getActions()) { /** The key functional expression constraint may be inferred from a plain column in the expression. @@ -642,14 +635,14 @@ bool KeyCondition::canConstantBeWrappedByMonotonicFunctions( return false; /// Range is irrelevant in this case. - IFunction::Monotonicity monotonicity = action.node->function_base->getMonotonicityForRange(*out_type, Field(), Field()); + IFunction::Monotonicity monotonicity = action.node->function_base->getMonotonicityForRange(*const_type, Field(), Field()); if (!monotonicity.is_always_monotonic) return false; /// Apply the next transformation step. 
- std::tie(out_value, out_type) = applyFunctionForFieldOfUnknownType( + std::tie(const_value, const_type) = applyFunctionForFieldOfUnknownType( action.node->function_builder, - out_type, out_value); + const_type, const_value); expr_name = action.node->result_name; @@ -659,6 +652,8 @@ bool KeyCondition::canConstantBeWrappedByMonotonicFunctions( { out_key_column_num = it->second; out_key_column_type = sample_block.getByName(it->first).type; + out_value = const_value; + out_type = const_type; found_transformation = true; break; } @@ -672,10 +667,8 @@ bool KeyCondition::canConstantBeWrappedByMonotonicFunctions( bool KeyCondition::canConstantBeWrappedByFunctions( const ASTPtr & ast, size_t & out_key_column_num, DataTypePtr & out_key_column_type, Field & out_value, DataTypePtr & out_type) { - if (strict) - return false; - - String expr_name = ast->getColumnNameWithoutAlias(); + // Constant expr should use alias names if any + String expr_name = ast->getColumnName(); const auto & sample_block = key_expr->getSampleBlock(); if (!sample_block.has(expr_name)) return false; @@ -731,12 +724,10 @@ bool KeyCondition::canConstantBeWrappedByFunctions( if (is_valid_chain) { - { - auto input_column = sample_block.getByName(expr_name); - auto const_column = out_type->createColumnConst(1, out_value); - out_value = (*castColumn({const_column, out_type, "c"}, input_column.type))[0]; - out_type = input_column.type; - } + auto input_column = sample_block.getByName(expr_name); + auto const_column = out_type->createColumnConst(1, out_value); + auto const_value = (*castColumn({const_column, out_type, "c"}, input_column.type))[0]; + auto const_type = input_column.type; while (!chain.empty()) { @@ -748,7 +739,7 @@ bool KeyCondition::canConstantBeWrappedByFunctions( if (func->children.size() == 1) { - std::tie(out_value, out_type) = applyFunctionForFieldOfUnknownType(func->function_builder, out_type, out_value); + std::tie(const_value, const_type) = applyFunctionForFieldOfUnknownType(func->function_builder, const_type, const_value); } else if (func->children.size() == 2) { @@ -758,21 +749,23 @@ bool KeyCondition::canConstantBeWrappedByFunctions( { auto left_arg_type = left->result_type; auto left_arg_value = (*left->column)[0]; - std::tie(out_value, out_type) = applyBinaryFunctionForFieldOfUnknownType( - func->function_builder, left_arg_type, left_arg_value, out_type, out_value); + std::tie(const_value, const_type) = applyBinaryFunctionForFieldOfUnknownType( + func->function_builder, left_arg_type, left_arg_value, const_type, const_value); } else { auto right_arg_type = right->result_type; auto right_arg_value = (*right->column)[0]; - std::tie(out_value, out_type) = applyBinaryFunctionForFieldOfUnknownType( - func->function_builder, out_type, out_value, right_arg_type, right_arg_value); + std::tie(const_value, const_type) = applyBinaryFunctionForFieldOfUnknownType( + func->function_builder, const_type, const_value, right_arg_type, right_arg_value); } } } out_key_column_num = it->second; out_key_column_type = sample_block.getByName(it->first).type; + out_value = const_value; + out_type = const_type; return true; } } @@ -783,7 +776,7 @@ bool KeyCondition::canConstantBeWrappedByFunctions( bool KeyCondition::tryPrepareSetIndex( const ASTs & args, - const Context & context, + ContextPtr context, RPNElement & out, size_t & out_key_column_num) { @@ -935,6 +928,9 @@ public: return func->getMonotonicityForRange(type, left, right); } + Kind getKind() const { return kind; } + const ColumnWithTypeAndName & getConstArg() const { 
return const_arg; } + private: FunctionBasePtr func; ColumnWithTypeAndName const_arg; @@ -944,7 +940,7 @@ private: bool KeyCondition::isKeyPossiblyWrappedByMonotonicFunctions( const ASTPtr & node, - const Context & context, + ContextPtr context, size_t & out_key_column_num, DataTypePtr & out_key_res_column_type, MonotonicFunctionsChain & out_functions_chain) @@ -959,6 +955,8 @@ bool KeyCondition::isKeyPossiblyWrappedByMonotonicFunctions( { const auto & args = (*it)->arguments->children; auto func_builder = FunctionFactory::instance().tryGet((*it)->name, context); + if (!func_builder) + return false; ColumnsWithTypeAndName arguments; ColumnWithTypeAndName const_arg; FunctionWithOptionalConstArg::Kind kind = FunctionWithOptionalConstArg::Kind::NO_CONST; @@ -1011,6 +1009,8 @@ bool KeyCondition::isKeyPossiblyWrappedByMonotonicFunctionsImpl( * Therefore, use the full name of the expression for search. */ const auto & sample_block = key_expr->getSampleBlock(); + + // Key columns should use canonical names for index analysis String name = node->getColumnNameWithoutAlias(); auto it = key_columns.find(name); @@ -1070,7 +1070,7 @@ static void castValueToType(const DataTypePtr & desired_type, Field & src_value, } -bool KeyCondition::tryParseAtomFromAST(const ASTPtr & node, const Context & context, Block & block_with_constants, RPNElement & out) +bool KeyCondition::tryParseAtomFromAST(const ASTPtr & node, ContextPtr context, Block & block_with_constants, RPNElement & out) { /** Functions < > = != <= >= in `notIn`, where one argument is a constant, and the other is one of columns of key, * or itself, wrapped in a chain of possibly-monotonic functions, @@ -1104,6 +1104,23 @@ bool KeyCondition::tryParseAtomFromAST(const ASTPtr & node, const Context & cont bool is_set_const = false; bool is_constant_transformed = false; + /// We don't look for inversed key transformations when strict is true, which is required for trivial count(). + /// Consider the following test case: + /// + /// create table test1(p DateTime, k int) engine MergeTree partition by toDate(p) order by k; + /// insert into test1 values ('2020-09-01 00:01:02', 1), ('2020-09-01 20:01:03', 2), ('2020-09-02 00:01:03', 3); + /// select count() from test1 where p > toDateTime('2020-09-01 10:00:00'); + /// + /// toDate(DateTime) is always monotonic, but we cannot relax the predicates to be + /// >= toDate(toDateTime('2020-09-01 10:00:00')), which returns 3 instead of the right count: 2. + bool strict_condition = strict; + + /// If we use this key condition to prune partitions by single value, we cannot relax conditions for NOT. 
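To make the new strict/single-point guard concrete: relaxing a negated predicate through a monotonic but non-injective key transform can wrongly prune data. A minimal SQL sketch, with illustrative table and values in the spirit of the test case quoted in the comment above:

CREATE TABLE t (p DateTime, k Int32) ENGINE = MergeTree PARTITION BY toDate(p) ORDER BY k;
INSERT INTO t VALUES ('2020-09-01 00:01:02', 1), ('2020-09-01 20:01:03', 2);

-- Both rows match, but relaxing the predicate to toDate(p) != toDate('2020-09-01 10:00:00')
-- would prune the whole 2020-09-01 partition and return 0 instead of 2.
SELECT count() FROM t WHERE p != toDateTime('2020-09-01 10:00:00');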
+    if (single_point
+        && (func_name == "notLike" || func_name == "notIn" || func_name == "globalNotIn" || func_name == "notEquals"
+            || func_name == "notEmpty"))
+        strict_condition = true;
+
     if (functionIsInOrGlobalInOperator(func_name))
     {
         if (tryPrepareSetIndex(args, context, out, key_column_num))
@@ -1120,13 +1137,15 @@ bool KeyCondition::tryParseAtomFromAST(const ASTPtr & node, const Cont
             {
                 key_arg_pos = 0;
             }
-            else if (canConstantBeWrappedByMonotonicFunctions(args[0], key_column_num, key_expr_type, const_value, const_type))
+            else if (
+                !strict_condition
+                && canConstantBeWrappedByMonotonicFunctions(args[0], key_column_num, key_expr_type, const_value, const_type))
             {
                 key_arg_pos = 0;
                 is_constant_transformed = true;
             }
             else if (
-                single_point && func_name == "equals"
+                single_point && func_name == "equals" && !strict_condition
                 && canConstantBeWrappedByFunctions(args[0], key_column_num, key_expr_type, const_value, const_type))
             {
                 key_arg_pos = 0;
@@ -1141,13 +1160,15 @@ bool KeyCondition::tryParseAtomFromAST(const ASTPtr & node, const Cont
             {
                 key_arg_pos = 1;
             }
-            else if (canConstantBeWrappedByMonotonicFunctions(args[1], key_column_num, key_expr_type, const_value, const_type))
+            else if (
+                !strict_condition
+                && canConstantBeWrappedByMonotonicFunctions(args[1], key_column_num, key_expr_type, const_value, const_type))
             {
                 key_arg_pos = 1;
                 is_constant_transformed = true;
             }
             else if (
-                single_point && func_name == "equals"
+                single_point && func_name == "equals" && !strict_condition
                 && canConstantBeWrappedByFunctions(args[1], key_column_num, key_expr_type, const_value, const_type))
             {
                 key_arg_pos = 0;
@@ -1269,6 +1290,8 @@ bool KeyCondition::tryParseAtomFromAST(const ASTPtr & node, const Cont
 bool KeyCondition::tryParseLogicalOperatorFromAST(const ASTFunction * func, RPNElement & out)
 {
     /// Functions AND, OR, NOT.
+    /// Also a special function `indexHint` - it works as if, instead of calling a function, there were just parentheses
+    /// (or, which is the same, calling the function `and` with one argument).
     const ASTs & args = func->arguments->children;

     if (func->name == "not")
@@ -1280,7 +1303,7 @@
     }
     else
     {
-        if (func->name == "and")
+        if (func->name == "and" || func->name == "indexHint")
             out.function = RPNElement::FUNCTION_AND;
         else if (func->name == "or")
             out.function = RPNElement::FUNCTION_OR;
@@ -1303,6 +1326,235 @@ String KeyCondition::toString() const
     return res;
 }

+KeyCondition::Description KeyCondition::getDescription() const
+{
+    /// This code may seem too complicated.
+    /// Here we want to convert the RPN back to a tree, and also simplify some logical expressions like `and(x, true) -> x`.
+    Description description;
+
+    /// That's an explicit binary tree.
+    /// Build and optimize it simultaneously.
+    struct Node
+    {
+        enum class Type
+        {
+            /// Leaf, which is RPNElement.
+            Leaf,
+            /// Leaves, which are logical constants.
+            True,
+            False,
+            /// Binary operators.
+            And,
+            Or,
+        };
+
+        Type type{};
+
+        /// Only for Leaf
+        const RPNElement * element = nullptr;
+        /// This means that logical NOT is applied to the leaf.
+        bool negate = false;
+
+        std::unique_ptr<Node> left = nullptr;
+        std::unique_ptr<Node> right = nullptr;
+    };
+
+    /// The algorithm is the same as in KeyCondition::checkInHyperrectangle.
+    /// We build a pair of trees on the stack: one for checking if the key condition may be true, and one for if it may be false.
+    /// We need only `can_be_true` in the result.
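For context, `indexHint` is an existing ClickHouse function that always returns 1 but lets its argument participate in index analysis; treating it as a logical NOOP above means the wrapped condition is used to select parts and ranges without filtering the returned rows. A hedged usage sketch (the `hits` table is illustrative):

-- Parts and granules are pruned as if the condition were ANDed into WHERE,
-- but the returned rows are not actually filtered by it.
SELECT count() FROM hits WHERE indexHint(EventDate = toDate('2021-04-01'));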
+    struct Frame
+    {
+        std::unique_ptr<Node> can_be_true;
+        std::unique_ptr<Node> can_be_false;
+    };
+
+    /// Combine two subtrees using a logical operator.
+    auto combine = [](std::unique_ptr<Node> left, std::unique_ptr<Node> right, Node::Type type)
+    {
+        /// Simplify the operator when one of the conditions is a logical constant.
+
+        if (type == Node::Type::And)
+        {
+            /// false AND right
+            if (left->type == Node::Type::False)
+                return left;
+
+            /// left AND false
+            if (right->type == Node::Type::False)
+                return right;
+
+            /// true AND right
+            if (left->type == Node::Type::True)
+                return right;
+
+            /// left AND true
+            if (right->type == Node::Type::True)
+                return left;
+        }
+
+        if (type == Node::Type::Or)
+        {
+            /// false OR right
+            if (left->type == Node::Type::False)
+                return right;
+
+            /// left OR false
+            if (right->type == Node::Type::False)
+                return left;
+
+            /// true OR right
+            if (left->type == Node::Type::True)
+                return left;
+
+            /// left OR true
+            if (right->type == Node::Type::True)
+                return right;
+        }
+
+        return std::make_unique<Node>(Node{
+            .type = type,
+            .left = std::move(left),
+            .right = std::move(right)
+        });
+    };
+
+    std::vector<Frame> rpn_stack;
+    for (const auto & element : rpn)
+    {
+        if (element.function == RPNElement::FUNCTION_UNKNOWN)
+        {
+            auto can_be_true = std::make_unique<Node>(Node{.type = Node::Type::True});
+            auto can_be_false = std::make_unique<Node>(Node{.type = Node::Type::True});
+            rpn_stack.emplace_back(Frame{.can_be_true = std::move(can_be_true), .can_be_false = std::move(can_be_false)});
+        }
+        else if (
+            element.function == RPNElement::FUNCTION_IN_RANGE
+            || element.function == RPNElement::FUNCTION_NOT_IN_RANGE
+            || element.function == RPNElement::FUNCTION_IN_SET
+            || element.function == RPNElement::FUNCTION_NOT_IN_SET)
+        {
+            auto can_be_true = std::make_unique<Node>(Node{.type = Node::Type::Leaf, .element = &element, .negate = false});
+            auto can_be_false = std::make_unique<Node>(Node{.type = Node::Type::Leaf, .element = &element, .negate = true});
+            rpn_stack.emplace_back(Frame{.can_be_true = std::move(can_be_true), .can_be_false = std::move(can_be_false)});
+        }
+        else if (element.function == RPNElement::FUNCTION_NOT)
+        {
+            assert(!rpn_stack.empty());
+
+            std::swap(rpn_stack.back().can_be_true, rpn_stack.back().can_be_false);
+        }
+        else if (element.function == RPNElement::FUNCTION_AND)
+        {
+            assert(!rpn_stack.empty());
+            auto arg1 = std::move(rpn_stack.back());
+
+            rpn_stack.pop_back();
+
+            assert(!rpn_stack.empty());
+            auto arg2 = std::move(rpn_stack.back());
+
+            Frame frame;
+            frame.can_be_true = combine(std::move(arg1.can_be_true), std::move(arg2.can_be_true), Node::Type::And);
+            frame.can_be_false = combine(std::move(arg1.can_be_false), std::move(arg2.can_be_false), Node::Type::Or);
+
+            rpn_stack.back() = std::move(frame);
+        }
+        else if (element.function == RPNElement::FUNCTION_OR)
+        {
+            assert(!rpn_stack.empty());
+            auto arg1 = std::move(rpn_stack.back());
+
+            rpn_stack.pop_back();
+
+            assert(!rpn_stack.empty());
+            auto arg2 = std::move(rpn_stack.back());
+
+            Frame frame;
+            frame.can_be_true = combine(std::move(arg1.can_be_true), std::move(arg2.can_be_true), Node::Type::Or);
+            frame.can_be_false = combine(std::move(arg1.can_be_false), std::move(arg2.can_be_false), Node::Type::And);
+
+            rpn_stack.back() = std::move(frame);
+        }
+        else if (element.function == RPNElement::ALWAYS_FALSE)
+        {
+            auto can_be_true = std::make_unique<Node>(Node{.type = Node::Type::False});
+            auto can_be_false = std::make_unique<Node>(Node{.type = Node::Type::True});
+
+            rpn_stack.emplace_back(Frame{.can_be_true = std::move(can_be_true), .can_be_false = std::move(can_be_false)});
+        }
+        else if (element.function == RPNElement::ALWAYS_TRUE)
+        {
+            auto can_be_true = std::make_unique<Node>(Node{.type = Node::Type::True});
+            auto can_be_false = std::make_unique<Node>(Node{.type = Node::Type::False});
+            rpn_stack.emplace_back(Frame{.can_be_true = std::move(can_be_true), .can_be_false = std::move(can_be_false)});
+        }
+        else
+            throw Exception("Unexpected function type in KeyCondition::RPNElement", ErrorCodes::LOGICAL_ERROR);
+    }
+
+    if (rpn_stack.size() != 1)
+        throw Exception("Unexpected stack size in KeyCondition::getDescription", ErrorCodes::LOGICAL_ERROR);
+
+    std::vector<String> key_names(key_columns.size());
+    std::vector<bool> is_key_used(key_columns.size(), false);
+
+    for (const auto & key : key_columns)
+        key_names[key.second] = key.first;
+
+    WriteBufferFromOwnString buf;
+
+    std::function<void(const Node *)> describe;
+    describe = [&describe, &key_names, &is_key_used, &buf](const Node * node)
+    {
+        switch (node->type)
+        {
+            case Node::Type::Leaf:
+            {
+                is_key_used[node->element->key_column] = true;
+
+                /// Note: for a condition with double negation, like `not(x not in set)`,
+                /// we could replace it with `x in set` here.
+                /// But I won't do it, because `cloneASTWithInversionPushDown` already pushes down `not`.
+                /// So this seems to be impossible for the `can_be_true` tree.
+                if (node->negate)
+                    buf << "not(";
+                buf << node->element->toString(key_names[node->element->key_column], true);
+                if (node->negate)
+                    buf << ")";
+                break;
+            }
+            case Node::Type::True:
+                buf << "true";
+                break;
+            case Node::Type::False:
+                buf << "false";
+                break;
+            case Node::Type::And:
+                buf << "and(";
+                describe(node->left.get());
+                buf << ", ";
+                describe(node->right.get());
+                buf << ")";
+                break;
+            case Node::Type::Or:
+                buf << "or(";
+                describe(node->left.get());
+                buf << ", ";
+                describe(node->right.get());
+                buf << ")";
+                break;
+        }
+    };
+
+    describe(rpn_stack.front().can_be_true.get());
+    description.condition = std::move(buf.str());
+
+    for (size_t i = 0; i < key_names.size(); ++i)
+        if (is_key_used[i])
+            description.used_keys.emplace_back(key_names[i]);
+
+    return description;
+}

 /** Index is the value of key every `index_granularity` rows.
  * This value is called a "mark". That is, the index consists of marks.
@@ -1321,11 +1573,12 @@ String KeyCondition::toString() const
  * The set of all possible tuples can be considered as an n-dimensional space, where n is the size of the tuple.
  * A range of tuples specifies some subset of this space.
  *
- * Hyperrectangles (you can also find the term "rail")
- * will be the subrange of an n-dimensional space that is a direct product of one-dimensional ranges.
- * In this case, the one-dimensional range can be: a period, a segment, an interval, a half-interval, unlimited on the left, unlimited on the right ...
+ * Hyperrectangles will be the subrange of an n-dimensional space that is a direct product of one-dimensional ranges.
+ * In this case, the one-dimensional range can be:
+ * a point, a segment, an open interval, a half-open interval;
+ * unlimited on the left, unlimited on the right ...
 *
- * The range of tuples can always be represented as a combination of hyperrectangles.
+ * The range of tuples can always be represented as a combination (union) of hyperrectangles.
 * For example, the range [ x1 y1 .. x2 y2 ] given x1 != x2 is equal to the union of the following three hyperrectangles:
 * [x1] x [y1 .. +inf)
 * (x1 .. x2) x (-inf ..
+inf) @@ -1727,18 +1980,38 @@ bool KeyCondition::mayBeTrueAfter( return checkInRange(used_key_size, left_key, nullptr, data_types, false, BoolMask::consider_only_can_be_true).can_be_true; } - -String KeyCondition::RPNElement::toString() const +String KeyCondition::RPNElement::toString() const { return toString("column " + std::to_string(key_column), false); } +String KeyCondition::RPNElement::toString(const std::string_view & column_name, bool print_constants) const { - auto print_wrapped_column = [this](WriteBuffer & buf) + auto print_wrapped_column = [this, &column_name, print_constants](WriteBuffer & buf) { for (auto it = monotonic_functions_chain.rbegin(); it != monotonic_functions_chain.rend(); ++it) + { buf << (*it)->getName() << "("; + if (print_constants) + { + if (const auto * func = typeid_cast(it->get())) + { + if (func->getKind() == FunctionWithOptionalConstArg::Kind::LEFT_CONST) + buf << applyVisitor(FieldVisitorToString(), (*func->getConstArg().column)[0]) << ", "; + } + } + } - buf << "column " << key_column; + buf << column_name; for (auto it = monotonic_functions_chain.rbegin(); it != monotonic_functions_chain.rend(); ++it) + { + if (print_constants) + { + if (const auto * func = typeid_cast(it->get())) + { + if (func->getKind() == FunctionWithOptionalConstArg::Kind::RIGHT_CONST) + buf << ", " << applyVisitor(FieldVisitorToString(), (*func->getConstArg().column)[0]); + } + } buf << ")"; + } }; WriteBufferFromOwnString buf; diff --git a/src/Storages/MergeTree/KeyCondition.h b/src/Storages/MergeTree/KeyCondition.h index b8167f406bd..bd51769ad1f 100644 --- a/src/Storages/MergeTree/KeyCondition.h +++ b/src/Storages/MergeTree/KeyCondition.h @@ -229,7 +229,7 @@ public: /// Does not take into account the SAMPLE section. all_columns - the set of all columns of the table. KeyCondition( const SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, const Names & key_column_names, const ExpressionActionsPtr & key_expr, bool single_point_ = false, @@ -293,6 +293,16 @@ public: String toString() const; + /// Condition description for EXPLAIN query. + struct Description + { + /// Which columns from PK were used, in PK order. + std::vector used_keys; + /// Condition which was applied, mostly human-readable. + std::string condition; + }; + + Description getDescription() const; /** A chain of possibly monotone functions. 
* If the key column is wrapped in functions that can be monotonous in some value ranges @@ -307,7 +317,7 @@ public: const ASTPtr & expr, Block & block_with_constants, Field & out_value, DataTypePtr & out_type); static Block getBlockWithConstants( - const ASTPtr & query, const TreeRewriterResultPtr & syntax_analyzer_result, const Context & context); + const ASTPtr & query, const TreeRewriterResultPtr & syntax_analyzer_result, ContextPtr context); static std::optional applyMonotonicFunctionsChainToRange( Range key_range, @@ -345,6 +355,7 @@ private: : function(function_), range(range_), key_column(key_column_) {} String toString() const; + String toString(const std::string_view & column_name, bool print_constants) const; Function function = FUNCTION_UNKNOWN; @@ -375,8 +386,8 @@ private: bool right_bounded, BoolMask initial_mask) const; - void traverseAST(const ASTPtr & node, const Context & context, Block & block_with_constants); - bool tryParseAtomFromAST(const ASTPtr & node, const Context & context, Block & block_with_constants, RPNElement & out); + void traverseAST(const ASTPtr & node, ContextPtr context, Block & block_with_constants); + bool tryParseAtomFromAST(const ASTPtr & node, ContextPtr context, Block & block_with_constants, RPNElement & out); static bool tryParseLogicalOperatorFromAST(const ASTFunction * func, RPNElement & out); /** Is node the key column @@ -387,7 +398,7 @@ private: */ bool isKeyPossiblyWrappedByMonotonicFunctions( const ASTPtr & node, - const Context & context, + ContextPtr context, size_t & out_key_column_num, DataTypePtr & out_key_res_column_type, MonotonicFunctionsChain & out_functions_chain); @@ -413,7 +424,7 @@ private: /// do it and return true. bool tryPrepareSetIndex( const ASTs & args, - const Context & context, + ContextPtr context, RPNElement & out, size_t & out_key_column_num); diff --git a/src/Storages/MergeTree/MergeTreeBaseSelectProcessor.cpp b/src/Storages/MergeTree/MergeTreeBaseSelectProcessor.cpp index 6bf164dd824..41ad71c89ce 100644 --- a/src/Storages/MergeTree/MergeTreeBaseSelectProcessor.cpp +++ b/src/Storages/MergeTree/MergeTreeBaseSelectProcessor.cpp @@ -30,7 +30,7 @@ MergeTreeBaseSelectProcessor::MergeTreeBaseSelectProcessor( const MergeTreeReaderSettings & reader_settings_, bool use_uncompressed_cache_, const Names & virt_column_names_) - : SourceWithProgress(getHeader(std::move(header), prewhere_info_, virt_column_names_)) + : SourceWithProgress(transformHeader(std::move(header), prewhere_info_, virt_column_names_)) , storage(storage_) , metadata_snapshot(metadata_snapshot_) , prewhere_info(prewhere_info_) @@ -370,7 +370,7 @@ void MergeTreeBaseSelectProcessor::executePrewhereActions(Block & block, const P } } -Block MergeTreeBaseSelectProcessor::getHeader( +Block MergeTreeBaseSelectProcessor::transformHeader( Block block, const PrewhereInfoPtr & prewhere_info, const Names & virtual_columns) { executePrewhereActions(block, prewhere_info); diff --git a/src/Storages/MergeTree/MergeTreeBaseSelectProcessor.h b/src/Storages/MergeTree/MergeTreeBaseSelectProcessor.h index 00ef131ae45..a4c55cbae45 100644 --- a/src/Storages/MergeTree/MergeTreeBaseSelectProcessor.h +++ b/src/Storages/MergeTree/MergeTreeBaseSelectProcessor.h @@ -33,6 +33,8 @@ public: ~MergeTreeBaseSelectProcessor() override; + static Block transformHeader(Block block, const PrewhereInfoPtr & prewhere_info, const Names & virtual_columns); + static void executePrewhereActions(Block & block, const PrewhereInfoPtr & prewhere_info); protected: @@ -49,8 +51,6 @@ protected: static void 
injectVirtualColumns(Block & block, MergeTreeReadTask * task, const Names & virtual_columns);
     static void injectVirtualColumns(Chunk & chunk, MergeTreeReadTask * task, const Names & virtual_columns);

-    static Block getHeader(Block block, const PrewhereInfoPtr & prewhere_info, const Names & virtual_columns);
-
     void initializeRangeReaders(MergeTreeReadTask & task);

 protected:
diff --git a/src/Storages/MergeTree/MergeTreeBlockOutputStream.cpp b/src/Storages/MergeTree/MergeTreeBlockOutputStream.cpp
index 904081cc1df..bc91e29d900 100644
--- a/src/Storages/MergeTree/MergeTreeBlockOutputStream.cpp
+++ b/src/Storages/MergeTree/MergeTreeBlockOutputStream.cpp
@@ -29,12 +29,20 @@ void MergeTreeBlockOutputStream::write(const Block & block)
         Stopwatch watch;

         MergeTreeData::MutableDataPartPtr part = storage.writer.writeTempPart(current_block, metadata_snapshot, optimize_on_insert);
-        storage.renameTempPartAndAdd(part, &storage.increment);
-        PartLog::addNewPart(storage.global_context, part, watch.elapsed());

+        /// If the optimize_on_insert setting is true, current_block could become empty after the merge,
+        /// in which case no part was created.
+        if (!part)
+            continue;

-        /// Initiate async merge - it will be done if it's good time for merge and if there are space in 'background_pool'.
-        storage.background_executor.triggerTask();
+        /// The part can be deduplicated, so increment counters and add to the part log only if it was really added.
+        if (storage.renameTempPartAndAdd(part, &storage.increment, nullptr, storage.getDeduplicationLog()))
+        {
+            PartLog::addNewPart(storage.getContext(), part, watch.elapsed());
+
+            /// Initiate async merge - it will be done if it's a good time for merge and if there is space in 'background_pool'.
+            storage.background_executor.triggerTask();
+        }
     }
 }
diff --git a/src/Storages/MergeTree/MergeTreeData.cpp b/src/Storages/MergeTree/MergeTreeData.cpp
index 0c22d5fbc0f..f677f745080 100644
--- a/src/Storages/MergeTree/MergeTreeData.cpp
+++ b/src/Storages/MergeTree/MergeTreeData.cpp
@@ -42,6 +42,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -55,6 +56,8 @@
 #include
 #include
+#include
+
 #include
 #include
 #include
@@ -70,6 +73,7 @@ namespace ProfileEvents
     extern const Event RejectedInserts;
     extern const Event DelayedInserts;
     extern const Event DelayedInsertsMilliseconds;
+    extern const Event DuplicatedInsertedBlocks;
 }

 namespace CurrentMetrics
@@ -131,7 +135,7 @@ MergeTreeData::MergeTreeData(
     const StorageID & table_id_,
     const String & relative_data_path_,
     const StorageInMemoryMetadata & metadata_,
-    Context & context_,
+    ContextPtr context_,
     const String & date_column_name,
     const MergingParams & merging_params_,
     std::unique_ptr<MergeTreeSettings> storage_settings_,
     bool require_part_metadata_,
     bool attach,
     BrokenPartCallback broken_part_callback_)
     : IStorage(table_id_)
-    , global_context(context_.getGlobalContext())
+    , WithContext(context_->getGlobalContext())
     , merging_params(merging_params_)
     , require_part_metadata(require_part_metadata_)
     , relative_data_path(relative_data_path_)
@@ -159,7 +163,7 @@
     /// Check sanity of MergeTreeSettings. Only when table is created.
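The rewritten write path above only counts a part and logs it when `renameTempPartAndAdd` really added it, which is what makes INSERT deduplication observable for non-replicated MergeTree. A sketch of the expected behaviour, assuming the `non_replicated_deduplication_window` table setting that ships with this feature (names and values are illustrative):

CREATE TABLE dedup_test (d Date, x UInt32) ENGINE = MergeTree ORDER BY x
SETTINGS non_replicated_deduplication_window = 100;

INSERT INTO dedup_test VALUES ('2021-04-01', 1);
INSERT INTO dedup_test VALUES ('2021-04-01', 1);  -- identical block: dropped by the deduplication log

SELECT count() FROM dedup_test;  -- 1; the second insert bumps the DuplicatedInsertedBlocks event instead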
 if (!attach)
-        settings->sanityCheck(global_context.getSettingsRef());
+        settings->sanityCheck(getContext()->getSettingsRef());

     MergeTreeDataFormatVersion min_format_version(0);
     if (!date_column_name.empty())
@@ -204,8 +208,8 @@ MergeTreeData::MergeTreeData(
     for (const auto & [path, disk] : getRelativeDataPathsWithDisks())
     {
         disk->createDirectories(path);
-        disk->createDirectories(path + "detached");
-        auto current_version_file_path = path + "format_version.txt";
+        disk->createDirectories(path + MergeTreeData::DETACHED_DIR_NAME);
+        auto current_version_file_path = path + MergeTreeData::FORMAT_VERSION_FILE_NAME;
         if (disk->exists(current_version_file_path))
         {
             if (!version_file.first.empty())
@@ -219,7 +223,7 @@ MergeTreeData::MergeTreeData(
     /// If not choose any
     if (version_file.first.empty())
-        version_file = {relative_data_path + "format_version.txt", getStoragePolicy()->getAnyDisk()};
+        version_file = {relative_data_path + MergeTreeData::FORMAT_VERSION_FILE_NAME, getStoragePolicy()->getAnyDisk()};

     bool version_file_exists = version_file.second->exists(version_file.first);
@@ -229,7 +233,7 @@ MergeTreeData::MergeTreeData(
         format_version = min_format_version;
         auto buf = version_file.second->writeFile(version_file.first);
         writeIntText(format_version.toUnderType(), *buf);
-        if (global_context.getSettingsRef().fsync_metadata)
+        if (getContext()->getSettingsRef().fsync_metadata)
             buf->sync();
     }
     else
@@ -258,7 +262,7 @@ MergeTreeData::MergeTreeData(
 StoragePolicyPtr MergeTreeData::getStoragePolicy() const
 {
-    return global_context.getStoragePolicy(getSettings()->storage_policy);
+    return getContext()->getStoragePolicy(getSettings()->storage_policy);
 }

 static void checkKeyExpression(const ExpressionActions & expr, const Block & sample_block, const String & key_name, bool allow_nullable_key)
@@ -315,8 +319,8 @@ void MergeTreeData::checkProperties(
     {
         const String & pk_column = new_primary_key.column_names[i];
         if (pk_column != sorting_key_column)
-            throw Exception("Primary key must be a prefix of the sorting key, but in position "
-                + toString(i) + " its column is " + pk_column + ", not " + sorting_key_column,
+            throw Exception("Primary key must be a prefix of the sorting key, but the column at position "
+                + toString(i) + " is " + sorting_key_column + ", not " + pk_column,
                 ErrorCodes::BAD_ARGUMENTS);

         if (!primary_key_columns_set.emplace(pk_column).second)
@@ -353,7 +357,7 @@ void MergeTreeData::checkProperties(
     if (!added_key_column_expr_list->children.empty())
     {
-        auto syntax = TreeRewriter(global_context).analyze(added_key_column_expr_list, all_columns);
+        auto syntax = TreeRewriter(getContext()).analyze(added_key_column_expr_list, all_columns);
         Names used_columns = syntax->requiredSourceColumns();

         NamesAndTypesList deleted_columns;
@@ -410,7 +414,7 @@ ExpressionActionsPtr getCombinedIndicesExpression(
     const KeyDescription & key,
     const IndicesDescription & indices,
     const ColumnsDescription & columns,
-    const Context & context)
+    ContextPtr context)
 {
     ASTPtr combined_expr_list = key.expression_list_ast->clone();
@@ -424,13 +428,13 @@
 }

-ExpressionActionsPtr MergeTreeData::getMinMaxExpr(const KeyDescription & partition_key)
+ExpressionActionsPtr MergeTreeData::getMinMaxExpr(const KeyDescription & partition_key, const ExpressionActionsSettings & settings)
 {
     NamesAndTypesList partition_key_columns;
     if (!partition_key.column_names.empty())
         partition_key_columns = partition_key.expression->getRequiredColumnsWithTypes();

-    return std::make_shared<ExpressionActions>(std::make_shared<ActionsDAG>(partition_key_columns));
+    return std::make_shared<ExpressionActions>(std::make_shared<ActionsDAG>(partition_key_columns), settings);
 }

 Names MergeTreeData::getMinMaxColumnsNames(const KeyDescription & partition_key)
@@ -449,12 +453,12 @@ DataTypes MergeTreeData::getMinMaxColumnsTypes(const KeyDescription & partition_

 ExpressionActionsPtr MergeTreeData::getPrimaryKeyAndSkipIndicesExpression(const StorageMetadataPtr & metadata_snapshot) const
 {
-    return getCombinedIndicesExpression(metadata_snapshot->getPrimaryKey(), metadata_snapshot->getSecondaryIndices(), metadata_snapshot->getColumns(), global_context);
+    return getCombinedIndicesExpression(metadata_snapshot->getPrimaryKey(), metadata_snapshot->getSecondaryIndices(), metadata_snapshot->getColumns(), getContext());
 }

 ExpressionActionsPtr MergeTreeData::getSortingKeyAndSkipIndicesExpression(const StorageMetadataPtr & metadata_snapshot) const
 {
-    return getCombinedIndicesExpression(metadata_snapshot->getSortingKey(), metadata_snapshot->getSecondaryIndices(), metadata_snapshot->getColumns(), global_context);
+    return getCombinedIndicesExpression(metadata_snapshot->getSortingKey(), metadata_snapshot->getSecondaryIndices(), metadata_snapshot->getColumns(), getContext());
 }

@@ -469,15 +473,19 @@ void MergeTreeData::checkPartitionKeyAndInitMinMax(const KeyDescription & new_pa
     DataTypes minmax_idx_columns_types = getMinMaxColumnsTypes(new_partition_key);

     /// Try to find the date column in columns used by the partition key (a common case).
-    bool encountered_date_column = false;
+    /// If there is none, a DateTime or DateTime64 column would also suffice.
+
+    bool has_date_column = false;
+    bool has_datetime_column = false;
+
     for (size_t i = 0; i < minmax_idx_columns_types.size(); ++i)
     {
-        if (typeid_cast<const DataTypeDate *>(minmax_idx_columns_types[i].get()))
+        if (isDate(minmax_idx_columns_types[i]))
         {
-            if (!encountered_date_column)
+            if (!has_date_column)
             {
                 minmax_idx_date_column_pos = i;
-                encountered_date_column = true;
+                has_date_column = true;
             }
             else
             {
@@ -486,21 +494,23 @@ void MergeTreeData::checkPartitionKeyAndInitMinMax(const KeyDescription & new_pa
             }
         }
     }
-    if (!encountered_date_column)
+    if (!has_date_column)
     {
         for (size_t i = 0; i < minmax_idx_columns_types.size(); ++i)
         {
-            if (typeid_cast<const DataTypeDateTime *>(minmax_idx_columns_types[i].get()))
+            if (isDateTime(minmax_idx_columns_types[i])
+                || isDateTime64(minmax_idx_columns_types[i])
+            )
             {
-                if (!encountered_date_column)
+                if (!has_datetime_column)
                 {
                     minmax_idx_time_column_pos = i;
-                    encountered_date_column = true;
+                    has_datetime_column = true;
                 }
                 else
                 {
                     /// There is more than one DateTime column in partition key and we don't know which one to choose.
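With the branch above, a partition key that uses no Date column can now feed the min/max time statistics from a DateTime or DateTime64 source column instead. A small sketch with illustrative names (the min/max index is built over the partition key's source columns, here `ts`):

-- No Date column: the DateTime64 source column of the partition key
-- now provides minmax_idx_time_column_pos.
CREATE TABLE events (ts DateTime64(3), v UInt64)
ENGINE = MergeTree PARTITION BY toDate(ts) ORDER BY v;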
-                    minmax_idx_time_column_pos = -1;
+                    minmax_idx_time_column_pos = -1;
                 }
             }
         }
@@ -675,6 +685,43 @@ void MergeTreeData::MergingParams::check(const StorageInMemoryMetadata & metadat
 }

+std::optional<UInt64> MergeTreeData::totalRowsByPartitionPredicateImpl(
+    const SelectQueryInfo & query_info, ContextPtr local_context, const DataPartsVector & parts) const
+{
+    if (parts.empty())
+        return 0u;
+    auto metadata_snapshot = getInMemoryMetadataPtr();
+    ASTPtr expression_ast;
+    Block virtual_columns_block = MergeTreeDataSelectExecutor::getBlockWithVirtualPartColumns(parts, true /* one_part */);
+
+    // Generate valid expressions for filtering
+    bool valid = VirtualColumnUtils::prepareFilterBlockWithQuery(query_info.query, local_context, virtual_columns_block, expression_ast);
+
+    PartitionPruner partition_pruner(metadata_snapshot->getPartitionKey(), query_info, local_context, true /* strict */);
+    if (partition_pruner.isUseless() && !valid)
+        return {};
+
+    std::unordered_set<String> part_values;
+    if (valid && expression_ast)
+    {
+        virtual_columns_block = MergeTreeDataSelectExecutor::getBlockWithVirtualPartColumns(parts, false /* one_part */);
+        VirtualColumnUtils::filterBlockWithQuery(query_info.query, virtual_columns_block, local_context, expression_ast);
+        part_values = VirtualColumnUtils::extractSingleValueFromBlock<String>(virtual_columns_block, "_part");
+        if (part_values.empty())
+            return 0;
+    }
+    // At this point, empty `part_values` means all parts.
+
+    size_t res = 0;
+    for (const auto & part : parts)
+    {
+        if ((part_values.empty() || part_values.find(part->name) != part_values.end()) && !partition_pruner.canBePruned(part))
+            res += part->rows_count;
+    }
+    return res;
+}
+

 String MergeTreeData::MergingParams::getModeName() const
 {
     switch (mode)
@@ -723,7 +770,7 @@ void MergeTreeData::loadDataParts(bool skip_sanity_checks)
     for (const auto & disk_ptr : disks)
         defined_disk_names.insert(disk_ptr->getName());

-    for (const auto & [disk_name, disk] : global_context.getDisksMap())
+    for (const auto & [disk_name, disk] : getContext()->getDisksMap())
     {
         if (defined_disk_names.count(disk_name) == 0 && disk->exists(relative_data_path))
         {
@@ -744,8 +791,8 @@ void MergeTreeData::loadDataParts(bool skip_sanity_checks)
         auto disk_ptr = *disk_it;
         for (auto it = disk_ptr->iterateDirectory(relative_data_path); it->isValid(); it->next())
         {
-            /// Skip temporary directories.
-            if (startsWith(it->name(), "tmp"))
+            /// Skip temporary directories, file 'format_version.txt' and directory 'detached'.
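The helper added above backs trivial count() over a partition predicate: when the strict pruner keeps whole parts (optionally narrowed by the `_part` virtual column), the per-part `rows_count` metadata is summed instead of scanning any rows. Illustrative query shapes, assuming the usual `optimize_trivial_count_query` setting is enabled:

-- Answerable from part metadata: each surviving part is fully covered
-- by the predicate, so its rows_count is added directly.
SELECT count() FROM test1 WHERE toDate(p) = '2020-09-01';
SELECT count() FROM test1 WHERE _part = 'all_1_1_0';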
+            if (startsWith(it->name(), "tmp") || it->name() == MergeTreeData::FORMAT_VERSION_FILE_NAME || it->name() == MergeTreeData::DETACHED_DIR_NAME)
                 continue;

             if (!startsWith(it->name(), MergeTreeWriteAheadLog::WAL_FILE_NAME))
@@ -771,7 +818,7 @@ void MergeTreeData::loadDataParts(bool skip_sanity_checks)
     if (part_names_with_disks.empty() && parts_from_wal.empty())
     {
-        LOG_DEBUG(log, "There is no data parts");
+        LOG_DEBUG(log, "There are no data parts");
         return;
     }
@@ -1126,7 +1173,7 @@ void MergeTreeData::removePartsFinally(const MergeTreeData::DataPartsVector & pa
     /// NOTE: There is no need to log parts deletion somewhere else, all deleting parts pass through this function and pass away

     auto table_id = getStorageID();
-    if (auto part_log = global_context.getPartLog(table_id.database_name))
+    if (auto part_log = getContext()->getPartLog(table_id.database_name))
     {
         PartLogElement part_log_elem;
@@ -1154,6 +1201,11 @@ void MergeTreeData::clearOldPartsFromFilesystem(bool force)
     DataPartsVector parts_to_remove = grabOldParts(force);
     clearPartsFromFilesystem(parts_to_remove);
     removePartsFinally(parts_to_remove);
+
+    /// This is needed to close files so that they don't reside on disk after being deleted.
+    /// NOTE: we could drop files from the cache more selectively, but this is good enough.
+    if (!parts_to_remove.empty())
+        getContext()->dropMMappedFileCache();
 }

 void MergeTreeData::clearPartsFromFilesystem(const DataPartsVector & parts_to_remove)
@@ -1163,14 +1215,21 @@
     {
         /// Parallel parts removal.

-        size_t num_threads = std::min(size_t(settings->max_part_removal_threads), parts_to_remove.size());
+        size_t num_threads = std::min<size_t>(settings->max_part_removal_threads, parts_to_remove.size());
         ThreadPool pool(num_threads);

         /// NOTE: Under heavy system load you may get "Cannot schedule a task" from ThreadPool.
         for (const DataPartPtr & part : parts_to_remove)
         {
-            pool.scheduleOrThrowOnError([&]
+            pool.scheduleOrThrowOnError([&, thread_group = CurrentThread::getGroup()]
             {
+                SCOPE_EXIT_SAFE(
+                    if (thread_group)
+                        CurrentThread::detachQueryIfNotDetached();
+                );
+                if (thread_group)
+                    CurrentThread::attachTo(thread_group);
+
                 LOG_DEBUG(log, "Removing part from filesystem {}", part->name);
                 part->remove();
             });
@@ -1255,7 +1314,7 @@ void MergeTreeData::clearEmptyParts()
     {
         ASTPtr literal = std::make_shared<ASTLiteral>(part->name);
         /// If another replica has already started drop, it's ok, no need to throw.
-        dropPartition(literal, /* detach = */ false, /*drop_part = */ true, global_context, /* throw_if_noop = */ false);
+        dropPartition(literal, /* detach = */ false, /*drop_part = */ true, getContext(), /* throw_if_noop = */ false);
     }
 }
@@ -1278,7 +1337,7 @@ void MergeTreeData::rename(const String & new_table_path, const StorageID & new_
     }

     if (!getStorageID().hasUUID())
-        global_context.dropCaches();
+        getContext()->dropCaches();

     relative_data_path = new_table_path;
     renameInMemory(new_table_id);
@@ -1300,7 +1359,7 @@ void MergeTreeData::dropAllData()
     /// Tables in atomic databases have UUID and stored in persistent locations.
     /// No need to drop caches (that are keyed by filesystem path) because collision is not possible.
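Dropping the mmapped-file cache after part removal matters because that cache is keyed by file path, so stale mappings could otherwise keep deleted files alive. For reference, the same cache can be cleared by hand, assuming the SYSTEM statement introduced together with this cache in the same release:

SYSTEM DROP MMAP CACHE;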
if (!getStorageID().hasUUID()) - global_context.dropCaches(); + getContext()->dropCaches(); LOG_TRACE(log, "dropAllData: removing data from filesystem."); @@ -1334,12 +1393,20 @@ void MergeTreeData::dropIfEmpty() if (!data_parts_by_info.empty()) return; - for (const auto & [path, disk] : getRelativeDataPathsWithDisks()) + try { - /// Non recursive, exception is thrown if there are more files. - disk->removeFile(path + "format_version.txt"); - disk->removeDirectory(path + "detached"); - disk->removeDirectory(path); + for (const auto & [path, disk] : getRelativeDataPathsWithDisks()) + { + /// Non recursive, exception is thrown if there are more files. + disk->removeFileIfExists(path + MergeTreeData::FORMAT_VERSION_FILE_NAME); + disk->removeDirectory(path + MergeTreeData::DETACHED_DIR_NAME); + disk->removeDirectory(path); + } + } + catch (...) + { + // On unsuccessful creation of ReplicatedMergeTree table with multidisk configuration some files may not exist. + tryLogCurrentException(__PRETTY_FUNCTION__); } } @@ -1425,23 +1492,23 @@ void checkVersionColumnTypesConversion(const IDataType * old_type, const IDataTy } -void MergeTreeData::checkAlterIsPossible(const AlterCommands & commands, const Context & context) const +void MergeTreeData::checkAlterIsPossible(const AlterCommands & commands, ContextPtr local_context) const { /// Check that needed transformations can be applied to the list of columns without considering type conversions. StorageInMemoryMetadata new_metadata = getInMemoryMetadata(); StorageInMemoryMetadata old_metadata = getInMemoryMetadata(); - const auto & settings = context.getSettingsRef(); + const auto & settings = local_context->getSettingsRef(); if (!settings.allow_non_metadata_alters) { - auto mutation_commands = commands.getMutationCommands(new_metadata, settings.materialize_ttl_after_modify, global_context); + auto mutation_commands = commands.getMutationCommands(new_metadata, settings.materialize_ttl_after_modify, getContext()); if (!mutation_commands.empty()) throw Exception(ErrorCodes::ALTER_OF_COLUMN_IS_FORBIDDEN, "The following alter commands: '{}' will modify data on disk, but setting `allow_non_metadata_alters` is disabled", queryToString(mutation_commands.ast())); } - commands.apply(new_metadata, global_context); + commands.apply(new_metadata, getContext()); /// Set of columns that shouldn't be altered. 
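A sketch of the guard in `checkAlterIsPossible` above: when `allow_non_metadata_alters` is disabled, any ALTER that would be implemented via a data-rewriting mutation is rejected up front (table and column names are illustrative):

SET allow_non_metadata_alters = 0;

-- Changing a column type requires rewriting data on disk, so this now throws
-- ALTER_OF_COLUMN_IS_FORBIDDEN instead of scheduling a heavy mutation.
ALTER TABLE t MODIFY COLUMN v UInt8;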
NameSet columns_alter_type_forbidden; @@ -1504,13 +1571,13 @@ void MergeTreeData::checkAlterIsPossible(const AlterCommands & commands, const C old_types.emplace(column.name, column.type.get()); NamesAndTypesList columns_to_check_conversion; - auto name_deps = getDependentViewsByColumn(context); + auto name_deps = getDependentViewsByColumn(local_context); for (const AlterCommand & command : commands) { /// Just validate partition expression if (command.partition) { - getPartitionIDFromQuery(command.partition, global_context); + getPartitionIDFromQuery(command.partition, getContext()); } if (command.column_name == merging_params.version_column) @@ -1521,7 +1588,8 @@ void MergeTreeData::checkAlterIsPossible(const AlterCommands & commands, const C const IDataType * new_type = command.data_type.get(); const IDataType * old_type = old_types[command.column_name]; - checkVersionColumnTypesConversion(old_type, new_type, command.column_name); + if (new_type) + checkVersionColumnTypesConversion(old_type, new_type, command.column_name); /// No other checks required continue; @@ -1585,13 +1653,16 @@ void MergeTreeData::checkAlterIsPossible(const AlterCommands & commands, const C ErrorCodes::ALTER_OF_COLUMN_IS_FORBIDDEN); } - const auto & deps_mv = name_deps[command.column_name]; - if (!deps_mv.empty()) + if (!command.clear) { - throw Exception( - "Trying to ALTER DROP column " + backQuoteIfNeed(command.column_name) + " which is referenced by materialized view " - + toString(deps_mv), - ErrorCodes::ALTER_OF_COLUMN_IS_FORBIDDEN); + const auto & deps_mv = name_deps[command.column_name]; + if (!deps_mv.empty()) + { + throw Exception( + "Trying to ALTER DROP column " + backQuoteIfNeed(command.column_name) + " which is referenced by materialized view " + + toString(deps_mv), + ErrorCodes::ALTER_OF_COLUMN_IS_FORBIDDEN); + } } dropped_columns.emplace(command.column_name); @@ -1644,7 +1715,7 @@ void MergeTreeData::checkAlterIsPossible(const AlterCommands & commands, const C if (!columns_to_check_conversion.empty()) { auto old_header = old_metadata.getSampleBlock(); - performRequiredConversions(old_header, columns_to_check_conversion, global_context); + performRequiredConversions(old_header, columns_to_check_conversion, getContext()); } if (old_metadata.hasSettingsChanges()) @@ -1675,7 +1746,7 @@ void MergeTreeData::checkAlterIsPossible(const AlterCommands & commands, const C } if (setting_name == "storage_policy") - checkStoragePolicy(global_context.getStoragePolicy(new_value.safeGet())); + checkStoragePolicy(getContext()->getStoragePolicy(new_value.safeGet())); } } @@ -1800,7 +1871,7 @@ void MergeTreeData::changeSettings( { if (change.name == "storage_policy") { - StoragePolicyPtr new_storage_policy = global_context.getStoragePolicy(change.value.safeGet()); + StoragePolicyPtr new_storage_policy = getContext()->getStoragePolicy(change.value.safeGet()); StoragePolicyPtr old_storage_policy = getStoragePolicy(); /// StoragePolicy of different version or name is guaranteed to have different pointer @@ -1825,7 +1896,7 @@ void MergeTreeData::changeSettings( { auto disk = new_storage_policy->getDiskByName(disk_name); disk->createDirectories(relative_data_path); - disk->createDirectories(relative_data_path + "detached"); + disk->createDirectories(relative_data_path + MergeTreeData::DETACHED_DIR_NAME); } /// FIXME how would that be done while reloading configuration??? 
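The `command.clear` exemption above means that clearing a column no longer trips the materialized-view dependency check; only dropping the column does. An illustrative sketch, assuming a materialized view `mv` that reads `src.val`:

ALTER TABLE src DROP COLUMN val;                             -- still throws ALTER_OF_COLUMN_IS_FORBIDDEN
ALTER TABLE src CLEAR COLUMN val IN PARTITION '2021-04-01';  -- now allowed: the column stays in place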
@@ -1837,7 +1908,7 @@ void MergeTreeData::changeSettings(
         MergeTreeSettings copy = *getSettings();
         copy.applyChanges(new_changes);
-        copy.sanityCheck(global_context.getSettingsRef());
+        copy.sanityCheck(getContext()->getSettingsRef());

         storage_settings.set(std::make_unique<MergeTreeSettings>(copy));
         StorageInMemoryMetadata new_metadata = getInMemoryMetadata();
@@ -1849,11 +1920,6 @@ void MergeTreeData::changeSettings(
     }
 }

-PartitionCommandsResultInfo MergeTreeData::freezeAll(const String & with_name, const StorageMetadataPtr & metadata_snapshot, const Context & context, TableLockHolder &)
-{
-    return freezePartitionsByMatcher([] (const DataPartPtr &) { return true; }, metadata_snapshot, with_name, context);
-}
-
 void MergeTreeData::PartsTemporaryRename::addPart(const String & old_name, const String & new_name)
 {
     old_and_new_names.push_back({old_name, new_name});
@@ -1980,7 +2046,7 @@ MergeTreeData::DataPartsVector MergeTreeData::getActivePartsToReplace(
 }

-bool MergeTreeData::renameTempPartAndAdd(MutableDataPartPtr & part, SimpleIncrement * increment, Transaction * out_transaction)
+bool MergeTreeData::renameTempPartAndAdd(MutableDataPartPtr & part, SimpleIncrement * increment, Transaction * out_transaction, MergeTreeDeduplicationLog * deduplication_log)
 {
     if (out_transaction && &out_transaction->data != this)
         throw Exception("MergeTreeData::Transaction for one table cannot be used with another. It is a bug.",
@@ -1989,7 +2055,7 @@ bool MergeTreeData::renameTempPartAndAdd(MutableDataPartPtr & part, SimpleIncrem
     DataPartsVector covered_parts;
     {
         auto lock = lockParts();
-        if (!renameTempPartAndReplace(part, increment, out_transaction, lock, &covered_parts))
+        if (!renameTempPartAndReplace(part, increment, out_transaction, lock, &covered_parts, deduplication_log))
             return false;
     }
     if (!covered_parts.empty())
@@ -2002,7 +2068,7 @@

 bool MergeTreeData::renameTempPartAndReplace(
     MutableDataPartPtr & part, SimpleIncrement * increment, Transaction * out_transaction,
-    std::unique_lock<std::mutex> & lock, DataPartsVector * out_covered_parts)
+    std::unique_lock<std::mutex> & lock, DataPartsVector * out_covered_parts, MergeTreeDeduplicationLog * deduplication_log)
 {
     if (out_transaction && &out_transaction->data != this)
         throw Exception("MergeTreeData::Transaction for one table cannot be used with another. It is a bug.",
@@ -2057,6 +2123,22 @@ bool MergeTreeData::renameTempPartAndReplace(
         return false;
     }

+    /// The deduplication log is used only by non-replicated MergeTree tables. Replicated
+    /// tables have their own mechanism. We try to deduplicate at such a deep
+    /// level because only here we know the real part name, which is required
+    /// for deduplication.
+    if (deduplication_log)
+    {
+        String block_id = part->getZeroLevelPartBlockID();
+        auto res = deduplication_log->addPart(block_id, part_info);
+        if (!res.second)
+        {
+            ProfileEvents::increment(ProfileEvents::DuplicatedInsertedBlocks);
+            LOG_INFO(log, "Block with ID {} already exists as part {}; ignoring it", block_id, res.first.getPartName());
+            return false;
+        }
+    }
+
     /// All checks are passed. Now we can rename the part on disk.
/// So, we maintain invariant: if a non-temporary part in filesystem then it is in data_parts /// @@ -2113,7 +2195,7 @@ bool MergeTreeData::renameTempPartAndReplace( } MergeTreeData::DataPartsVector MergeTreeData::renameTempPartAndReplace( - MutableDataPartPtr & part, SimpleIncrement * increment, Transaction * out_transaction) + MutableDataPartPtr & part, SimpleIncrement * increment, Transaction * out_transaction, MergeTreeDeduplicationLog * deduplication_log) { if (out_transaction && &out_transaction->data != this) throw Exception("MergeTreeData::Transaction for one table cannot be used with another. It is a bug.", @@ -2122,7 +2204,7 @@ MergeTreeData::DataPartsVector MergeTreeData::renameTempPartAndReplace( DataPartsVector covered_parts; { auto lock = lockParts(); - renameTempPartAndReplace(part, increment, out_transaction, lock, &covered_parts); + renameTempPartAndReplace(part, increment, out_transaction, lock, &covered_parts, deduplication_log); } return covered_parts; } @@ -2238,7 +2320,7 @@ MergeTreeData::DataPartsVector MergeTreeData::removePartsInRangeFromWorkingSet(c void MergeTreeData::forgetPartAndMoveToDetached(const MergeTreeData::DataPartPtr & part_to_detach, const String & prefix, bool restore_covered) { - LOG_INFO(log, "Renaming {} to {}{} and forgiving it.", part_to_detach->relative_path, prefix, part_to_detach->name); + LOG_INFO(log, "Renaming {} to {}{} and forgetting it.", part_to_detach->relative_path, prefix, part_to_detach->name); auto lock = lockParts(); @@ -2479,7 +2561,7 @@ void MergeTreeData::delayInsertOrThrowIfNeeded(Poco::Event * until) const if (settings->inactive_parts_to_throw_insert > 0 || settings->inactive_parts_to_delay_insert > 0) { size_t inactive_parts_count_in_partition = getMaxInactivePartsCountForPartition(); - if (inactive_parts_count_in_partition >= settings->inactive_parts_to_throw_insert) + if (settings->inactive_parts_to_throw_insert > 0 && inactive_parts_count_in_partition >= settings->inactive_parts_to_throw_insert) { ProfileEvents::increment(ProfileEvents::RejectedInserts); throw Exception( @@ -2495,7 +2577,7 @@ void MergeTreeData::delayInsertOrThrowIfNeeded(Poco::Event * until) const ProfileEvents::increment(ProfileEvents::RejectedInserts); throw Exception( ErrorCodes::TOO_MANY_PARTS, - "Too many parts ({}). Parts cleaning are processing significantly slower than inserts", + "Too many parts ({}). Merges are processing significantly slower than inserts", parts_count_in_partition); } @@ -2690,45 +2772,8 @@ void MergeTreeData::removePartContributionToColumnSizes(const DataPartPtr & part } } - -PartitionCommandsResultInfo MergeTreeData::freezePartition(const ASTPtr & partition_ast, const StorageMetadataPtr & metadata_snapshot, const String & with_name, const Context & context, TableLockHolder &) -{ - std::optional prefix; - String partition_id; - - if (format_version < MERGE_TREE_DATA_MIN_FORMAT_VERSION_WITH_CUSTOM_PARTITIONING) - { - /// Month-partitioning specific - partition value can represent a prefix of the partition to freeze. - if (const auto * partition_lit = partition_ast->as().value->as()) - prefix = partition_lit->value.getType() == Field::Types::UInt64 - ? 
toString(partition_lit->value.get()) - : partition_lit->value.safeGet(); - else - partition_id = getPartitionIDFromQuery(partition_ast, context); - } - else - partition_id = getPartitionIDFromQuery(partition_ast, context); - - if (prefix) - LOG_DEBUG(log, "Freezing parts with prefix {}", *prefix); - else - LOG_DEBUG(log, "Freezing parts with partition ID {}", partition_id); - - - return freezePartitionsByMatcher( - [&prefix, &partition_id](const DataPartPtr & part) - { - if (prefix) - return startsWith(part->info.partition_id, *prefix); - else - return part->info.partition_id == partition_id; - }, - metadata_snapshot, - with_name, - context); -} - -void MergeTreeData::checkAlterPartitionIsPossible(const PartitionCommands & commands, const StorageMetadataPtr & /*metadata_snapshot*/, const Settings & settings) const +void MergeTreeData::checkAlterPartitionIsPossible( + const PartitionCommands & commands, const StorageMetadataPtr & /*metadata_snapshot*/, const Settings & settings) const { for (const auto & command : commands) { @@ -2742,13 +2787,13 @@ void MergeTreeData::checkAlterPartitionIsPossible(const PartitionCommands & comm if (command.part) { auto part_name = command.partition->as().value.safeGet(); - /// We able to parse it + /// We are able to parse it MergeTreePartInfo::fromPartName(part_name, format_version); } else { - /// We able to parse it - getPartitionIDFromQuery(command.partition, global_context); + /// We are able to parse it + getPartitionIDFromQuery(command.partition, getContext()); } } } @@ -2756,7 +2801,7 @@ void MergeTreeData::checkAlterPartitionIsPossible(const PartitionCommands & comm void MergeTreeData::checkPartitionCanBeDropped(const ASTPtr & partition) { - const String partition_id = getPartitionIDFromQuery(partition, global_context); + const String partition_id = getPartitionIDFromQuery(partition, getContext()); auto parts_to_remove = getDataPartsVectorInPartition(MergeTreeDataPartState::Committed, partition_id); UInt64 partition_size = 0; @@ -2765,7 +2810,7 @@ void MergeTreeData::checkPartitionCanBeDropped(const ASTPtr & partition) partition_size += part->getBytesOnDisk(); auto table_id = getStorageID(); - global_context.checkPartitionCanBeDropped(table_id.database_name, table_id.table_name, partition_size); + getContext()->checkPartitionCanBeDropped(table_id.database_name, table_id.table_name, partition_size); } void MergeTreeData::checkPartCanBeDropped(const ASTPtr & part_ast) @@ -2776,17 +2821,17 @@ void MergeTreeData::checkPartCanBeDropped(const ASTPtr & part_ast) throw Exception(ErrorCodes::NO_SUCH_DATA_PART, "No part {} in committed state", part_name); auto table_id = getStorageID(); - global_context.checkPartitionCanBeDropped(table_id.database_name, table_id.table_name, part->getBytesOnDisk()); + getContext()->checkPartitionCanBeDropped(table_id.database_name, table_id.table_name, part->getBytesOnDisk()); } -void MergeTreeData::movePartitionToDisk(const ASTPtr & partition, const String & name, bool moving_part, const Context & context) +void MergeTreeData::movePartitionToDisk(const ASTPtr & partition, const String & name, bool moving_part, ContextPtr local_context) { String partition_id; if (moving_part) partition_id = partition->as().value.safeGet(); else - partition_id = getPartitionIDFromQuery(partition, context); + partition_id = getPartitionIDFromQuery(partition, local_context); DataPartsVector parts; if (moving_part) @@ -2824,14 +2869,14 @@ void MergeTreeData::movePartitionToDisk(const ASTPtr & partition, const String & } -void 
MergeTreeData::movePartitionToVolume(const ASTPtr & partition, const String & name, bool moving_part, const Context & context)
+void MergeTreeData::movePartitionToVolume(const ASTPtr & partition, const String & name, bool moving_part, ContextPtr local_context)
 {
     String partition_id;

     if (moving_part)
         partition_id = partition->as<ASTLiteral &>().value.safeGet<String>();
     else
-        partition_id = getPartitionIDFromQuery(partition, context);
+        partition_id = getPartitionIDFromQuery(partition, local_context);

     DataPartsVector parts;
     if (moving_part)
@@ -2849,7 +2894,7 @@ void MergeTreeData::movePartitionToVolume(const ASTPtr & partition, const String
         throw Exception("Volume " + name + " does not exists on policy " + getStoragePolicy()->getName(), ErrorCodes::UNKNOWN_DISK);

     if (parts.empty())
-        throw Exception("Nothing to move", ErrorCodes::NO_SUCH_DATA_PART);
+        throw Exception("Nothing to move (check that the partition exists).", ErrorCodes::NO_SUCH_DATA_PART);

     parts.erase(std::remove_if(parts.begin(), parts.end(), [&](auto part_ptr)
     {
@@ -2878,7 +2923,12 @@ void MergeTreeData::movePartitionToVolume(const ASTPtr & partition, const String
         throw Exception("Cannot move parts because moves are manually disabled", ErrorCodes::ABORTED);
 }

-void MergeTreeData::fetchPartition(const ASTPtr & /*partition*/, const StorageMetadataPtr & /*metadata_snapshot*/, const String & /*from*/, const Context & /*query_context*/)
+void MergeTreeData::fetchPartition(
+    const ASTPtr & /*partition*/,
+    const StorageMetadataPtr & /*metadata_snapshot*/,
+    const String & /*from*/,
+    bool /*fetch_part*/,
+    ContextPtr /*query_context*/)
 {
     throw Exception(ErrorCodes::NOT_IMPLEMENTED, "FETCH PARTITION is not supported by storage {}", getName());
 }
@@ -2886,7 +2936,7 @@
 Pipe MergeTreeData::alterPartition(
     const StorageMetadataPtr & metadata_snapshot,
     const PartitionCommands & commands,
-    const Context & query_context)
+    ContextPtr query_context)
 {
     PartitionCommandsResultInfo result;
     for (const PartitionCommand & command : commands)
@@ -2923,7 +2973,7 @@ Pipe MergeTreeData::alterPartition(
             case PartitionCommand::MoveDestinationType::TABLE:
                 checkPartitionCanBeDropped(command.partition);
-                String dest_database = query_context.resolveDatabase(command.to_database);
+                String dest_database = query_context->resolveDatabase(command.to_database);
                 auto dest_storage = DatabaseCatalog::instance().getTable({dest_database, command.to_table}, query_context);
                 movePartitionToTable(dest_storage, command.partition, query_context);
                 break;
@@ -2934,42 +2984,57 @@ Pipe MergeTreeData::alterPartition(
             case PartitionCommand::REPLACE_PARTITION:
             {
                 checkPartitionCanBeDropped(command.partition);
-                String from_database = query_context.resolveDatabase(command.from_database);
+                String from_database = query_context->resolveDatabase(command.from_database);
                 auto from_storage = DatabaseCatalog::instance().getTable({from_database, command.from_table}, query_context);
                 replacePartitionFrom(from_storage, command.partition, command.replace, query_context);
             }
             break;

             case PartitionCommand::FETCH_PARTITION:
-                fetchPartition(command.partition, metadata_snapshot, command.from_zookeeper_path, query_context);
+                fetchPartition(command.partition, metadata_snapshot, command.from_zookeeper_path, command.part, query_context);
                 break;

             case PartitionCommand::FREEZE_PARTITION:
             {
-                auto lock = lockForShare(query_context.getCurrentQueryId(), query_context.getSettingsRef().lock_acquire_timeout);
+                auto lock =
lockForShare(query_context->getCurrentQueryId(), query_context->getSettingsRef().lock_acquire_timeout); current_command_results = freezePartition(command.partition, metadata_snapshot, command.with_name, query_context, lock); } break; case PartitionCommand::FREEZE_ALL_PARTITIONS: { - auto lock = lockForShare(query_context.getCurrentQueryId(), query_context.getSettingsRef().lock_acquire_timeout); + auto lock = lockForShare(query_context->getCurrentQueryId(), query_context->getSettingsRef().lock_acquire_timeout); current_command_results = freezeAll(command.with_name, metadata_snapshot, query_context, lock); } break; + + case PartitionCommand::UNFREEZE_PARTITION: + { + auto lock = lockForShare(query_context->getCurrentQueryId(), query_context->getSettingsRef().lock_acquire_timeout); + current_command_results = unfreezePartition(command.partition, command.with_name, query_context, lock); + } + break; + + case PartitionCommand::UNFREEZE_ALL_PARTITIONS: + { + auto lock = lockForShare(query_context->getCurrentQueryId(), query_context->getSettingsRef().lock_acquire_timeout); + current_command_results = unfreezeAll(command.with_name, query_context, lock); + } + + break; } for (auto & command_result : current_command_results) command_result.command_type = command.typeToString(); result.insert(result.end(), current_command_results.begin(), current_command_results.end()); } - if (query_context.getSettingsRef().alter_partition_verbose_result) + if (query_context->getSettingsRef().alter_partition_verbose_result) return convertCommandsResultToSource(result); return {}; } -String MergeTreeData::getPartitionIDFromQuery(const ASTPtr & ast, const Context & context) const +String MergeTreeData::getPartitionIDFromQuery(const ASTPtr & ast, ContextPtr local_context) const { const auto & partition_ast = ast->as(); @@ -3011,7 +3076,12 @@ String MergeTreeData::getPartitionIDFromQuery(const ASTPtr & ast, const Context ReadBufferFromMemory right_paren_buf(")", 1); ConcatReadBuffer buf({&left_paren_buf, &fields_buf, &right_paren_buf}); - auto input_format = FormatFactory::instance().getInput("Values", buf, metadata_snapshot->getPartitionKey().sample_block, context, context.getSettingsRef().max_block_size); + auto input_format = FormatFactory::instance().getInput( + "Values", + buf, + metadata_snapshot->getPartitionKey().sample_block, + local_context, + local_context->getSettingsRef().max_block_size); auto input_stream = std::make_shared(input_format); auto block = input_stream->read(); @@ -3096,7 +3166,7 @@ MergeTreeData::getDetachedParts() const for (const auto & [path, disk] : getRelativeDataPathsWithDisks()) { - for (auto it = disk->iterateDirectory(path + "detached"); it->isValid(); it->next()) + for (auto it = disk->iterateDirectory(path + MergeTreeData::DETACHED_DIR_NAME); it->isValid(); it->next()) { res.emplace_back(); auto & part = res.back(); @@ -3124,7 +3194,7 @@ void MergeTreeData::validateDetachedPartName(const String & name) const ErrorCodes::BAD_DATA_PART_NAME); } -void MergeTreeData::dropDetached(const ASTPtr & partition, bool part, const Context & context) +void MergeTreeData::dropDetached(const ASTPtr & partition, bool part, ContextPtr local_context) { PartsTemporaryRename renamed_parts(*this, "detached/"); @@ -3136,7 +3206,7 @@ void MergeTreeData::dropDetached(const ASTPtr & partition, bool part, const Cont } else { - String partition_id = getPartitionIDFromQuery(partition, context); + String partition_id = getPartitionIDFromQuery(partition, local_context); DetachedPartsInfo detached_parts = 
getDetachedParts(); for (const auto & part_info : detached_parts) if (part_info.valid_name && part_info.partition_id == partition_id @@ -3158,33 +3228,38 @@ void MergeTreeData::dropDetached(const ASTPtr & partition, bool part, const Cont } MergeTreeData::MutableDataPartsVector MergeTreeData::tryLoadPartsToAttach(const ASTPtr & partition, bool attach_part, - const Context & context, PartsTemporaryRename & renamed_parts) + ContextPtr local_context, PartsTemporaryRename & renamed_parts) { - String source_dir = "detached/"; + const String source_dir = "detached/"; std::map name_to_disk; + /// Let's compose a list of parts that should be added. if (attach_part) { - String part_id = partition->as().value.safeGet(); + const String part_id = partition->as().value.safeGet(); + validateDetachedPartName(part_id); renamed_parts.addPart(part_id, "attaching_" + part_id); + if (MergeTreePartInfo::tryParsePartName(part_id, nullptr, format_version)) name_to_disk[part_id] = getDiskForPart(part_id, source_dir); } else { - String partition_id = getPartitionIDFromQuery(partition, context); + String partition_id = getPartitionIDFromQuery(partition, local_context); LOG_DEBUG(log, "Looking for parts for partition {} in {}", partition_id, source_dir); ActiveDataPartSet active_parts(format_version); const auto disks = getStoragePolicy()->getDisks(); + for (const auto & disk : disks) { for (auto it = disk->iterateDirectory(relative_data_path + source_dir); it->isValid(); it->next()) { const String & name = it->name(); MergeTreePartInfo part_info; + // TODO what if name contains "_tryN" suffix? /// Parts with prefix in name (e.g. attaching_1_3_3_0, deleting_1_3_3_0) will be ignored if (!MergeTreePartInfo::tryParsePartName(name, &part_info, format_version) @@ -3192,21 +3267,23 @@ MergeTreeData::MutableDataPartsVector MergeTreeData::tryLoadPartsToAttach(const { continue; } + LOG_DEBUG(log, "Found part {}", name); active_parts.add(name); name_to_disk[name] = disk; } } LOG_DEBUG(log, "{} of them are active", active_parts.size()); - /// Inactive parts rename so they can not be attached in case of repeated ATTACH. + + /// Inactive parts are renamed so they can not be attached in case of repeated ATTACH. for (const auto & [name, disk] : name_to_disk) { - String containing_part = active_parts.getContainingPart(name); + const String containing_part = active_parts.getContainingPart(name); + if (!containing_part.empty() && containing_part != name) - { // TODO maybe use PartsTemporaryRename here? 
- disk->moveDirectory(relative_data_path + source_dir + name, relative_data_path + source_dir + "inactive_" + name); - } + disk->moveDirectory(relative_data_path + source_dir + name, + relative_data_path + source_dir + "inactive_" + name); else renamed_parts.addPart(name, "attaching_" + name); } @@ -3221,11 +3298,13 @@ MergeTreeData::MutableDataPartsVector MergeTreeData::tryLoadPartsToAttach(const MutableDataPartsVector loaded_parts; loaded_parts.reserve(renamed_parts.old_and_new_names.size()); - for (const auto & part_names : renamed_parts.old_and_new_names) + for (const auto & [old_name, new_name] : renamed_parts.old_and_new_names) { - LOG_DEBUG(log, "Checking part {}", part_names.second); - auto single_disk_volume = std::make_shared("volume_" + part_names.first, name_to_disk[part_names.first], 0); - MutableDataPartPtr part = createPart(part_names.first, single_disk_volume, source_dir + part_names.second); + LOG_DEBUG(log, "Checking part {}", new_name); + + auto single_disk_volume = std::make_shared("volume_" + old_name, name_to_disk[old_name]); + MutableDataPartPtr part = createPart(old_name, single_disk_volume, source_dir + new_name); + loadPartAndFixMetadataImpl(part); loaded_parts.push_back(part); } @@ -3271,11 +3350,13 @@ ReservationPtr MergeTreeData::reserveSpacePreferringTTLRules( const IMergeTreeDataPart::TTLInfos & ttl_infos, time_t time_of_move, size_t min_volume_index, - bool is_insert) const + bool is_insert, + DiskPtr selected_disk) const { expected_size = std::max(RESERVATION_MIN_ESTIMATION_SIZE, expected_size); - ReservationPtr reservation = tryReserveSpacePreferringTTLRules(metadata_snapshot, expected_size, ttl_infos, time_of_move, min_volume_index, is_insert); + ReservationPtr reservation = tryReserveSpacePreferringTTLRules( + metadata_snapshot, expected_size, ttl_infos, time_of_move, min_volume_index, is_insert, selected_disk); return checkAndReturnReservation(expected_size, std::move(reservation)); } @@ -3286,7 +3367,8 @@ ReservationPtr MergeTreeData::tryReserveSpacePreferringTTLRules( const IMergeTreeDataPart::TTLInfos & ttl_infos, time_t time_of_move, size_t min_volume_index, - bool is_insert) const + bool is_insert, + DiskPtr selected_disk) const { expected_size = std::max(RESERVATION_MIN_ESTIMATION_SIZE, expected_size); @@ -3321,7 +3403,12 @@ ReservationPtr MergeTreeData::tryReserveSpacePreferringTTLRules( } } - reservation = getStoragePolicy()->reserve(expected_size, min_volume_index); + // Prefer selected_disk + if (selected_disk) + reservation = selected_disk->reserve(expected_size); + + if (!reservation) + reservation = getStoragePolicy()->reserve(expected_size, min_volume_index); return reservation; } @@ -3386,7 +3473,7 @@ CompressionCodecPtr MergeTreeData::getCompressionCodecForPart(size_t part_size_c if (best_ttl_entry) return CompressionCodecFactory::instance().get(best_ttl_entry->recompression_codec, {}); - return global_context.chooseCompressionCodec( + return getContext()->chooseCompressionCodec( part_size_compressed, static_cast(part_size_compressed) / getTotalActiveSizeInBytes()); } @@ -3547,7 +3634,7 @@ bool MergeTreeData::isPrimaryOrMinMaxKeyColumnPossiblyWrappedInFunctions( } bool MergeTreeData::mayBenefitFromIndexForIn( - const ASTPtr & left_in_operand, const Context &, const StorageMetadataPtr & metadata_snapshot) const + const ASTPtr & left_in_operand, ContextPtr, const StorageMetadataPtr & metadata_snapshot) const { /// Make sure that the left side of the IN operator contain part of the key. 
/// If there is a tuple on the left side of the IN operator, at least one item of the tuple @@ -3711,9 +3798,62 @@ MergeTreeData::PathsWithDisks MergeTreeData::getRelativeDataPathsWithDisks() con return res; } -PartitionCommandsResultInfo MergeTreeData::freezePartitionsByMatcher(MatcherFn matcher, const StorageMetadataPtr & metadata_snapshot, const String & with_name, const Context & context) +MergeTreeData::MatcherFn MergeTreeData::getPartitionMatcher(const ASTPtr & partition_ast, ContextPtr local_context) const { - String clickhouse_path = Poco::Path(context.getPath()).makeAbsolute().toString(); + bool prefixed = false; + String id; + + if (format_version < MERGE_TREE_DATA_MIN_FORMAT_VERSION_WITH_CUSTOM_PARTITIONING) + { + /// Month-partitioning specific - partition value can represent a prefix of the partition to freeze. + if (const auto * partition_lit = partition_ast->as().value->as()) + { + id = partition_lit->value.getType() == Field::Types::UInt64 + ? toString(partition_lit->value.get()) + : partition_lit->value.safeGet(); + prefixed = true; + } + else + id = getPartitionIDFromQuery(partition_ast, local_context); + } + else + id = getPartitionIDFromQuery(partition_ast, local_context); + + return [prefixed, id](const String & partition_id) + { + if (prefixed) + return startsWith(partition_id, id); + else + return id == partition_id; + }; +} + +PartitionCommandsResultInfo MergeTreeData::freezePartition( + const ASTPtr & partition_ast, + const StorageMetadataPtr & metadata_snapshot, + const String & with_name, + ContextPtr local_context, + TableLockHolder &) +{ + return freezePartitionsByMatcher(getPartitionMatcher(partition_ast, local_context), metadata_snapshot, with_name, local_context); +} + +PartitionCommandsResultInfo MergeTreeData::freezeAll( + const String & with_name, + const StorageMetadataPtr & metadata_snapshot, + ContextPtr local_context, + TableLockHolder &) +{ + return freezePartitionsByMatcher([] (const String &) { return true; }, metadata_snapshot, with_name, local_context); +} + +PartitionCommandsResultInfo MergeTreeData::freezePartitionsByMatcher( + MatcherFn matcher, + const StorageMetadataPtr & metadata_snapshot, + const String & with_name, + ContextPtr local_context) +{ + String clickhouse_path = Poco::Path(local_context->getPath()).makeAbsolute().toString(); String default_shadow_path = clickhouse_path + "shadow/"; Poco::File(default_shadow_path).createDirectories(); auto increment = Increment(default_shadow_path + "increment.txt").get(true); @@ -3734,7 +3874,7 @@ PartitionCommandsResultInfo MergeTreeData::freezePartitionsByMatcher(MatcherFn m size_t parts_processed = 0; for (const auto & part : data_parts) { - if (!matcher(part)) + if (!matcher(part->info.partition_id)) continue; LOG_DEBUG(log, "Freezing part {} snapshot will be placed at {}", part->name, backup_path); @@ -3764,6 +3904,70 @@ PartitionCommandsResultInfo MergeTreeData::freezePartitionsByMatcher(MatcherFn m return result; } +PartitionCommandsResultInfo MergeTreeData::unfreezePartition( + const ASTPtr & partition, + const String & backup_name, + ContextPtr local_context, + TableLockHolder &) +{ + return unfreezePartitionsByMatcher(getPartitionMatcher(partition, local_context), backup_name, local_context); +} + +PartitionCommandsResultInfo MergeTreeData::unfreezeAll( + const String & backup_name, + ContextPtr local_context, + TableLockHolder &) +{ + return unfreezePartitionsByMatcher([] (const String &) { return true; }, backup_name, local_context); +} + +PartitionCommandsResultInfo 
MergeTreeData::unfreezePartitionsByMatcher(MatcherFn matcher, const String & backup_name, ContextPtr) +{ + auto backup_path = std::filesystem::path("shadow") / escapeForFileName(backup_name) / relative_data_path; + + LOG_DEBUG(log, "Unfreezing parts by path {}", backup_path.generic_string()); + + PartitionCommandsResultInfo result; + + for (const auto & disk : getStoragePolicy()->getDisks()) + { + if (!disk->exists(backup_path)) + continue; + + for (auto it = disk->iterateDirectory(backup_path); it->isValid(); it->next()) + { + const auto & partition_directory = it->name(); + + /// Partition ID is prefix of part directory name: <partition_id>_<rest_of_part_name> + auto found = partition_directory.find('_'); + if (found == std::string::npos) + continue; + auto partition_id = partition_directory.substr(0, found); + + if (!matcher(partition_id)) + continue; + + const auto & path = it->path(); + + disk->removeRecursive(path); + + result.push_back(PartitionCommandResultInfo{ + .partition_id = partition_id, + .part_name = partition_directory, + .backup_path = disk->getPath() + backup_path.generic_string(), + .part_backup_path = disk->getPath() + path, + .backup_name = backup_name, + }); + + LOG_DEBUG(log, "Unfrozen part by path {}", disk->getPath() + path); + } + } + + LOG_DEBUG(log, "Unfrozen {} parts", result.size()); + + return result; +} + bool MergeTreeData::canReplacePartition(const DataPartPtr & src_part) const { const auto settings = getSettings(); @@ -3800,7 +4004,7 @@ void MergeTreeData::writePartLog( try { auto table_id = getStorageID(); - auto part_log = global_context.getPartLog(table_id.database_name); + auto part_log = getContext()->getPartLog(table_id.database_name); if (!part_log) return; @@ -4067,7 +4271,7 @@ NamesAndTypesList MergeTreeData::getVirtuals() const size_t MergeTreeData::getTotalMergesWithTTLInMergeList() const { - return global_context.getMergeList().getMergesWithTTLCount(); + return getContext()->getMergeList().getMergesWithTTLCount(); } void MergeTreeData::addPartContributionToDataVolume(const DataPartPtr & part) @@ -4120,4 +4324,187 @@ void MergeTreeData::removeQueryId(const String & query_id) const else query_id_set.erase(query_id); } + +ReservationPtr MergeTreeData::balancedReservation( + const StorageMetadataPtr & metadata_snapshot, + size_t part_size, + size_t max_volume_index, + const String & part_name, + const MergeTreePartInfo & part_info, + MergeTreeData::DataPartsVector covered_parts, + std::optional * tagger_ptr, + const IMergeTreeDataPart::TTLInfos * ttl_infos, + bool is_insert) +{ + ReservationPtr reserved_space; + auto min_bytes_to_rebalance_partition_over_jbod = getSettings()->min_bytes_to_rebalance_partition_over_jbod; + if (tagger_ptr && min_bytes_to_rebalance_partition_over_jbod > 0 && part_size >= min_bytes_to_rebalance_partition_over_jbod) + try + { + const auto & disks = getStoragePolicy()->getVolume(max_volume_index)->getDisks(); + std::map disk_occupation; + std::map> disk_parts_for_logging; + for (const auto & disk : disks) + disk_occupation.emplace(disk->getName(), 0); + + std::set committed_big_parts_from_partition; + std::set submerging_big_parts_from_partition; + std::lock_guard lock(currently_submerging_emerging_mutex); + + for (const auto & part : currently_submerging_big_parts) + { + if (part_info.partition_id == part->info.partition_id) + submerging_big_parts_from_partition.insert(part->name); + } + + { + auto lock_parts = lockParts(); + if (covered_parts.empty()) + { + // It's a part fetch. Calculate `covered_parts` here.
+ MergeTreeData::DataPartPtr covering_part; + covered_parts = getActivePartsToReplace(part_info, part_name, covering_part, lock_parts); + } + + // Remove irrelevant parts. + covered_parts.erase( + std::remove_if( + covered_parts.begin(), + covered_parts.end(), + [min_bytes_to_rebalance_partition_over_jbod](const auto & part) + { + return !(part->isStoredOnDisk() && part->getBytesOnDisk() >= min_bytes_to_rebalance_partition_over_jbod); + }), + covered_parts.end()); + + // Include current submerging big parts which are not yet in `currently_submerging_big_parts` + for (const auto & part : covered_parts) + submerging_big_parts_from_partition.insert(part->name); + + for (const auto & part : getDataPartsStateRange(MergeTreeData::DataPartState::Committed)) + { + if (part->isStoredOnDisk() && part->getBytesOnDisk() >= min_bytes_to_rebalance_partition_over_jbod + && part_info.partition_id == part->info.partition_id) + { + auto name = part->volume->getDisk()->getName(); + auto it = disk_occupation.find(name); + if (it != disk_occupation.end()) + { + if (submerging_big_parts_from_partition.find(part->name) == submerging_big_parts_from_partition.end()) + { + it->second += part->getBytesOnDisk(); + disk_parts_for_logging[name].push_back(formatReadableSizeWithBinarySuffix(part->getBytesOnDisk())); + committed_big_parts_from_partition.insert(part->name); + } + else + { + disk_parts_for_logging[name].push_back(formatReadableSizeWithBinarySuffix(part->getBytesOnDisk()) + " (submerging)"); + } + } + else + { + // Part is on different volume. Ignore it. + } + } + } + } + + for (const auto & [name, emerging_part] : currently_emerging_big_parts) + { + // It's possible that the emerging big parts are committed and get added twice. Thus a set is used to deduplicate. + if (committed_big_parts_from_partition.find(name) == committed_big_parts_from_partition.end() + && part_info.partition_id == emerging_part.partition_id) + { + auto it = disk_occupation.find(emerging_part.disk_name); + if (it != disk_occupation.end()) + { + it->second += emerging_part.estimate_bytes; + disk_parts_for_logging[emerging_part.disk_name].push_back( + formatReadableSizeWithBinarySuffix(emerging_part.estimate_bytes) + " (emerging)"); + } + else + { + // Part is on different volume. Ignore it. 
+ } + } + } + + size_t min_occupation_size = std::numeric_limits::max(); + std::vector candidates; + for (const auto & [disk_name, size] : disk_occupation) + { + if (size < min_occupation_size) + { + min_occupation_size = size; + candidates = {disk_name}; + } + else if (size == min_occupation_size) + { + candidates.push_back(disk_name); + } + } + + if (!candidates.empty()) + { + // Randomly pick one disk from the best candidates + std::shuffle(candidates.begin(), candidates.end(), thread_local_rng); + String selected_disk_name = candidates.front(); + WriteBufferFromOwnString log_str; + writeCString("\nbalancer: \n", log_str); + for (const auto & [disk_name, per_disk_parts] : disk_parts_for_logging) + writeString(fmt::format(" {}: [{}]\n", disk_name, fmt::join(per_disk_parts, ", ")), log_str); + LOG_DEBUG(log, log_str.str()); + + if (ttl_infos) + reserved_space = tryReserveSpacePreferringTTLRules( + metadata_snapshot, + part_size, + *ttl_infos, + time(nullptr), + max_volume_index, + is_insert, + getStoragePolicy()->getDiskByName(selected_disk_name)); + else + reserved_space = tryReserveSpace(part_size, getStoragePolicy()->getDiskByName(selected_disk_name)); + + if (reserved_space) + { + currently_emerging_big_parts.emplace( + part_name, EmergingPartInfo{reserved_space->getDisk(0)->getName(), part_info.partition_id, part_size}); + + for (const auto & part : covered_parts) + { + if (currently_submerging_big_parts.count(part)) + LOG_WARNING(log, "currently_submerging_big_parts contains duplicates. JBOD might lose balance"); + else + currently_submerging_big_parts.insert(part); + } + + // Record submerging big parts in the tagger to clean them up. + tagger_ptr->emplace(*this, part_name, std::move(covered_parts), log); + } + } + } + catch (...) + { + LOG_DEBUG(log, "JBOD balancer encountered an error. Falling back to random disk selection"); + tryLogCurrentException(log); + } + return reserved_space; +} + +CurrentlySubmergingEmergingTagger::~CurrentlySubmergingEmergingTagger() +{ + std::lock_guard lock(storage.currently_submerging_emerging_mutex); + + for (const auto & part : submerging_parts) + { + if (!storage.currently_submerging_big_parts.count(part)) + LOG_WARNING(log, "currently_submerging_big_parts doesn't contain part {} to erase. This is a bug", part->name); + else + storage.currently_submerging_big_parts.erase(part); + } + storage.currently_emerging_big_parts.erase(emerging_part_name); +} + } diff --git a/src/Storages/MergeTree/MergeTreeData.h b/src/Storages/MergeTree/MergeTreeData.h index 15059ab47e5..46c0014d9f7 100644 --- a/src/Storages/MergeTree/MergeTreeData.h +++ b/src/Storages/MergeTree/MergeTreeData.h @@ -41,9 +41,20 @@ class MutationCommands; class Context; struct JobAndPool; +/// Auxiliary struct holding information about the future merged or mutated part. +struct EmergingPartInfo +{ + String disk_name; + String partition_id; + size_t estimate_bytes; +}; + +struct CurrentlySubmergingEmergingTagger; + class ExpressionActions; using ExpressionActionsPtr = std::shared_ptr; using ManyExpressionActions = std::vector; +class MergeTreeDeduplicationLog; namespace ErrorCodes { @@ -100,7 +111,7 @@ namespace ErrorCodes /// - MergeTreeDataWriter /// - MergeTreeDataMergerMutator -class MergeTreeData : public IStorage +class MergeTreeData : public IStorage, public WithContext { public: /// Function to call if the part is suspected to contain corrupt data.
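The disk-selection step inside `balancedReservation` above boils down to: tally the bytes of big parts per JBOD disk, keep every disk tied for the minimum, and pick one of the tied disks at random so that concurrent reservations spread out. A minimal standalone sketch of just that step, with made-up occupation numbers and `std::mt19937` standing in for ClickHouse's `thread_local_rng`:

```cpp
#include <algorithm>
#include <iostream>
#include <limits>
#include <map>
#include <random>
#include <string>
#include <vector>

int main()
{
    /// Bytes of big parts from this partition already placed on each JBOD disk (made-up numbers).
    std::map<std::string, size_t> disk_occupation = {
        {"jbod1", 500}, {"jbod2", 100}, {"jbod3", 100}};

    /// Collect all disks tied for the minimum occupation.
    size_t min_occupation_size = std::numeric_limits<size_t>::max();
    std::vector<std::string> candidates;
    for (const auto & [disk_name, size] : disk_occupation)
    {
        if (size < min_occupation_size)
        {
            min_occupation_size = size;
            candidates = {disk_name};
        }
        else if (size == min_occupation_size)
            candidates.push_back(disk_name);
    }

    /// Randomly pick one of the best candidates (the real code shuffles with thread_local_rng).
    std::mt19937 rng{std::random_device{}()};
    std::shuffle(candidates.begin(), candidates.end(), rng);

    std::cout << "selected: " << candidates.front() << '\n'; /// jbod2 or jbod3
}
```

Shuffling the tied candidates instead of always taking the first keeps simultaneous inserts into the same partition from racing onto a single disk.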
@@ -116,6 +127,9 @@ public: using DataPartStates = std::initializer_list; using DataPartStateVector = std::vector; + constexpr static auto FORMAT_VERSION_FILE_NAME = "format_version.txt"; + constexpr static auto DETACHED_DIR_NAME = "detached"; + /// Auxiliary structure for index comparison. Keep in mind lifetime of MergeTreePartInfo. struct DataPartStateAndInfo { @@ -335,7 +349,7 @@ public: MergeTreeData(const StorageID & table_id_, const String & relative_data_path_, const StorageInMemoryMetadata & metadata_, - Context & context_, + ContextPtr context_, const String & date_column_name, const MergingParams & merging_params_, std::unique_ptr settings_, @@ -361,7 +375,7 @@ public: NamesAndTypesList getVirtuals() const override; - bool mayBenefitFromIndexForIn(const ASTPtr & left_in_operand, const Context &, const StorageMetadataPtr & metadata_snapshot) const override; + bool mayBenefitFromIndexForIn(const ASTPtr & left_in_operand, ContextPtr, const StorageMetadataPtr & metadata_snapshot) const override; /// Load the set of data parts from disk. Call once - immediately after the object is created. void loadDataParts(bool skip_sanity_checks); @@ -384,10 +398,10 @@ public: void validateDetachedPartName(const String & name) const; - void dropDetached(const ASTPtr & partition, bool part, const Context & context); + void dropDetached(const ASTPtr & partition, bool part, ContextPtr context); MutableDataPartsVector tryLoadPartsToAttach(const ASTPtr & partition, bool attach_part, - const Context & context, PartsTemporaryRename & renamed_parts); + ContextPtr context, PartsTemporaryRename & renamed_parts); /// Returns Committed parts DataParts getDataParts() const; @@ -434,23 +448,23 @@ public: /// active set later with out_transaction->commit()). /// Else, commits the part immediately. /// Returns true if part was added. Returns false if part is covered by bigger part. - bool renameTempPartAndAdd(MutableDataPartPtr & part, SimpleIncrement * increment = nullptr, Transaction * out_transaction = nullptr); + bool renameTempPartAndAdd(MutableDataPartPtr & part, SimpleIncrement * increment = nullptr, Transaction * out_transaction = nullptr, MergeTreeDeduplicationLog * deduplication_log = nullptr); /// The same as renameTempPartAndAdd but the block range of the part can contain existing parts. /// Returns all parts covered by the added part (in ascending order). /// If out_transaction == nullptr, marks covered parts as Outdated. DataPartsVector renameTempPartAndReplace( - MutableDataPartPtr & part, SimpleIncrement * increment = nullptr, Transaction * out_transaction = nullptr); + MutableDataPartPtr & part, SimpleIncrement * increment = nullptr, Transaction * out_transaction = nullptr, MergeTreeDeduplicationLog * deduplication_log = nullptr); /// Low-level version of previous one, doesn't lock mutex bool renameTempPartAndReplace( MutableDataPartPtr & part, SimpleIncrement * increment, Transaction * out_transaction, DataPartsLock & lock, - DataPartsVector * out_covered_parts = nullptr); + DataPartsVector * out_covered_parts = nullptr, MergeTreeDeduplicationLog * deduplication_log = nullptr); /// Remove parts from working set immediately (without wait for background /// process). Transfer part state to temporary. Have very limited usage only - /// for new parts which don't already present in table. + /// for new parts which aren't already present in table. void removePartsFromWorkingSetImmediatelyAndSetTemporaryState(const DataPartsVector & remove); /// Removes parts from the working set parts. 
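One mechanical change running through this whole diff is `const Context &` becoming `ContextPtr` (a shared pointer to a const `Context`), with `MergeTreeData` now inheriting `WithContext` so the old `global_context` member can be replaced by `getContext()`. A minimal sketch of what that means at call sites, using simplified stand-in types rather than the real ClickHouse classes:

```cpp
#include <iostream>
#include <memory>
#include <string>

/// Simplified stand-ins for the real ClickHouse Context/Settings classes.
struct Settings { size_t max_block_size = 65536; };

class Context
{
public:
    const Settings & getSettingsRef() const { return settings; }
    std::string getCurrentQueryId() const { return "query-1"; }
private:
    Settings settings;
};

using ContextPtr = std::shared_ptr<const Context>;

/// Before: void alterPartition(..., const Context & query_context);
/// After:  void alterPartition(..., ContextPtr query_context);
void alterPartition(ContextPtr query_context)
{
    /// Member access switches from '.' to '->' everywhere.
    std::cout << query_context->getSettingsRef().max_block_size << '\n';
    std::cout << query_context->getCurrentQueryId() << '\n';
}

int main()
{
    auto ctx = std::make_shared<const Context>();
    alterPartition(ctx);
}
```

Passing the context by shared pointer also makes per-operation copies cheap to express, which is what the mutation code further down does with `Context::createCopy(context)`.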
@@ -517,7 +531,7 @@ public: /// - all type conversions can be done. /// - columns corresponding to primary key, indices, sign, sampling expression and date are not affected. /// If something is wrong, throws an exception. - void checkAlterIsPossible(const AlterCommands & commands, const Context & context) const override; + void checkAlterIsPossible(const AlterCommands & commands, ContextPtr context) const override; /// Checks if the Mutation can be performed. /// (currently no additional checks: always ok) @@ -531,13 +545,6 @@ public: const ASTPtr & new_settings, TableLockHolder & table_lock_holder); - /// Freezes all parts. - PartitionCommandsResultInfo freezeAll( - const String & with_name, - const StorageMetadataPtr & metadata_snapshot, - const Context & context, - TableLockHolder & table_lock_holder); - /// Should be called if part data is suspected to be corrupted. void reportBrokenPart(const String & name) const { @@ -555,15 +562,38 @@ public: * Backup is created in directory clickhouse_dir/shadow/i/, where i - incremental number, * or if 'with_name' is specified - backup is created in directory with specified name. */ - PartitionCommandsResultInfo freezePartition(const ASTPtr & partition, const StorageMetadataPtr & metadata_snapshot, const String & with_name, const Context & context, TableLockHolder & table_lock_holder); + PartitionCommandsResultInfo freezePartition( + const ASTPtr & partition, + const StorageMetadataPtr & metadata_snapshot, + const String & with_name, + ContextPtr context, + TableLockHolder & table_lock_holder); + /// Freezes all parts. + PartitionCommandsResultInfo freezeAll( + const String & with_name, + const StorageMetadataPtr & metadata_snapshot, + ContextPtr context, + TableLockHolder & table_lock_holder); + + /// Unfreezes particular partition. + PartitionCommandsResultInfo unfreezePartition( + const ASTPtr & partition, + const String & backup_name, + ContextPtr context, + TableLockHolder & table_lock_holder); + + /// Unfreezes all parts. + PartitionCommandsResultInfo unfreezeAll( + const String & backup_name, + ContextPtr context, + TableLockHolder & table_lock_holder); -public: /// Moves partition to specified Disk - void movePartitionToDisk(const ASTPtr & partition, const String & name, bool moving_part, const Context & context); + void movePartitionToDisk(const ASTPtr & partition, const String & name, bool moving_part, ContextPtr context); /// Moves partition to specified Volume - void movePartitionToVolume(const ASTPtr & partition, const String & name, bool moving_part, const Context & context); + void movePartitionToVolume(const ASTPtr & partition, const String & name, bool moving_part, ContextPtr context); void checkPartitionCanBeDropped(const ASTPtr & partition) override; @@ -572,7 +602,7 @@ public: Pipe alterPartition( const StorageMetadataPtr & metadata_snapshot, const PartitionCommands & commands, - const Context & query_context) override; + ContextPtr query_context) override; size_t getColumnCompressedSize(const std::string & name) const { @@ -588,7 +618,7 @@ public: } /// For ATTACH/DETACH/DROP PARTITION. 
- String getPartitionIDFromQuery(const ASTPtr & ast, const Context & context) const; + String getPartitionIDFromQuery(const ASTPtr & ast, ContextPtr context) const; /// Extracts MergeTreeData of other *MergeTree* storage /// and checks that their structure suitable for ALTER TABLE ATTACH PARTITION FROM @@ -651,7 +681,8 @@ public: const IMergeTreeDataPart::TTLInfos & ttl_infos, time_t time_of_move, size_t min_volume_index = 0, - bool is_insert = false) const; + bool is_insert = false, + DiskPtr selected_disk = nullptr) const; ReservationPtr tryReserveSpacePreferringTTLRules( const StorageMetadataPtr & metadata_snapshot, @@ -659,14 +690,31 @@ public: const IMergeTreeDataPart::TTLInfos & ttl_infos, time_t time_of_move, size_t min_volume_index = 0, - bool is_insert = false) const; + bool is_insert = false, + DiskPtr selected_disk = nullptr) const; + + /// Reserves space for the part based on the distribution of "big parts" in the same partition. + /// Parts with estimated size larger than `min_bytes_to_rebalance_partition_over_jbod` are + /// considered as big. The priority is lower than TTL. If reservation fails, return nullptr. + ReservationPtr balancedReservation( + const StorageMetadataPtr & metadata_snapshot, + size_t part_size, + size_t max_volume_index, + const String & part_name, + const MergeTreePartInfo & part_info, + MergeTreeData::DataPartsVector covered_parts, + std::optional * tagger_ptr, + const IMergeTreeDataPart::TTLInfos * ttl_infos, + bool is_insert = false); /// Choose disk with max available free space /// Reserves 0 bytes - ReservationPtr makeEmptyReservationOnLargestDisk() { return getStoragePolicy()->makeEmptyReservationOnLargestDisk(); } + ReservationPtr makeEmptyReservationOnLargestDisk() const { return getStoragePolicy()->makeEmptyReservationOnLargestDisk(); } + + Disks getDisksByType(DiskType::Type type) const { return getStoragePolicy()->getDisksByType(type); } /// Return alter conversions for part which must be applied on fly. - AlterConversions getAlterConversionsForPart(const MergeTreeDataPartPtr part) const; + AlterConversions getAlterConversionsForPart(MergeTreeDataPartPtr part) const; /// Returns destination disk or volume for the TTL rule according to current storage policy /// 'is_insert' - is TTL move performed on new data part insert. SpacePtr getDestinationForMoveTTL(const TTLDescription & move_ttl, bool is_insert = false) const; @@ -685,8 +733,6 @@ public: MergeTreeDataFormatVersion format_version; - Context & global_context; - /// Merging params - what additional actions to perform during merge. const MergingParams merging_params; @@ -697,7 +743,7 @@ public: Int64 minmax_idx_time_column_pos = -1; /// In other cases, minmax index often includes a dateTime column. 
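The new optional `selected_disk` parameter on the reservation methods threads the JBOD balancer's choice into the reservation path: try the explicitly chosen disk first, and only fall back to the storage policy when that attempt fails. A compact sketch of the two-step fallback, with a toy `Disk` type and `std::optional` reservations standing in for `DiskPtr`/`ReservationPtr`:

```cpp
#include <iostream>
#include <memory>
#include <optional>
#include <string>

/// Toy stand-in for a disk that can hand out reservations.
struct Disk
{
    std::string name;
    size_t free_bytes;

    std::optional<std::string> reserve(size_t bytes)
    {
        if (bytes > free_bytes)
            return std::nullopt; /// not enough space on this disk
        free_bytes -= bytes;
        return name;
    }
};

using DiskPtr = std::shared_ptr<Disk>;

std::optional<std::string> reservePreferring(size_t bytes, DiskPtr selected_disk, Disk & policy_choice)
{
    std::optional<std::string> reservation;

    /// Prefer the explicitly selected disk, as tryReserveSpacePreferringTTLRules now does.
    if (selected_disk)
        reservation = selected_disk->reserve(bytes);

    /// Fall back to whatever the storage policy would pick.
    if (!reservation)
        reservation = policy_choice.reserve(bytes);

    return reservation;
}

int main()
{
    auto jbod1 = std::make_shared<Disk>(Disk{"jbod1", 100});
    Disk fallback{"default", 1000};

    std::cout << reservePreferring(50, jbod1, fallback).value_or("none") << '\n';  /// jbod1
    std::cout << reservePreferring(500, jbod1, fallback).value_or("none") << '\n'; /// default
}
```

Defaulting `selected_disk` to `nullptr` keeps every existing caller working unchanged; only the balancer passes a disk.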
/// Get partition key expression on required columns - static ExpressionActionsPtr getMinMaxExpr(const KeyDescription & partition_key); + static ExpressionActionsPtr getMinMaxExpr(const KeyDescription & partition_key, const ExpressionActionsSettings & settings); /// Get column names required for partition key static Names getMinMaxColumnsNames(const KeyDescription & partition_key); /// Get column types required for partition key @@ -741,6 +787,26 @@ public: std::optional getDataMovingJob(); bool areBackgroundMovesNeeded() const; + /// Lock part in ZooKeeper to allow using its common S3 data from several nodes + /// Overridden in StorageReplicatedMergeTree + virtual void lockSharedData(const IMergeTreeDataPart &) const {} + + /// Unlock common S3 data part in ZooKeeper + /// Overridden in StorageReplicatedMergeTree + virtual bool unlockSharedData(const IMergeTreeDataPart &) const { return true; } + + /// Fetch part only if some replica has it on shared storage like S3 + /// Overridden in StorageReplicatedMergeTree + virtual bool tryToFetchIfShared(const IMergeTreeDataPart &, const DiskPtr &, const String &) { return false; } + + /// Parts that are currently submerging (merging into bigger parts) or emerging + /// (appearing after a merge has finished). These two variables have to be used + /// with `currently_submerging_emerging_mutex`. + DataParts currently_submerging_big_parts; + std::map currently_emerging_big_parts; + /// Mutex for currently_submerging_big_parts and currently_emerging_big_parts + mutable std::mutex currently_submerging_emerging_mutex; + protected: friend class IMergeTreeDataPart; @@ -825,6 +891,9 @@ protected: return {begin, end}; } + std::optional totalRowsByPartitionPredicateImpl( + const SelectQueryInfo & query_info, ContextPtr context, const DataPartsVector & parts) const; + static decltype(auto) getStateModifier(DataPartState state) { return [state] (const DataPartPtr & part) { part->setState(state); }; @@ -888,19 +957,25 @@ protected: bool isPrimaryOrMinMaxKeyColumnPossiblyWrappedInFunctions(const ASTPtr & node, const StorageMetadataPtr & metadata_snapshot) const; /// Common part for |freezePartition()| and |freezeAll()|.
- using MatcherFn = std::function; - PartitionCommandsResultInfo freezePartitionsByMatcher(MatcherFn matcher, const StorageMetadataPtr & metadata_snapshot, const String & with_name, const Context & context); + using MatcherFn = std::function; + PartitionCommandsResultInfo freezePartitionsByMatcher(MatcherFn matcher, const StorageMetadataPtr & metadata_snapshot, const String & with_name, ContextPtr context); + PartitionCommandsResultInfo unfreezePartitionsByMatcher(MatcherFn matcher, const String & backup_name, ContextPtr context); // Partition helpers bool canReplacePartition(const DataPartPtr & src_part) const; - virtual void dropPartition(const ASTPtr & partition, bool detach, bool drop_part, const Context & context, bool throw_if_noop = true) = 0; - virtual PartitionCommandsResultInfo attachPartition(const ASTPtr & partition, const StorageMetadataPtr & metadata_snapshot, bool part, const Context & context) = 0; - virtual void replacePartitionFrom(const StoragePtr & source_table, const ASTPtr & partition, bool replace, const Context & context) = 0; - virtual void movePartitionToTable(const StoragePtr & dest_table, const ASTPtr & partition, const Context & context) = 0; + virtual void dropPartition(const ASTPtr & partition, bool detach, bool drop_part, ContextPtr context, bool throw_if_noop = true) = 0; + virtual PartitionCommandsResultInfo attachPartition(const ASTPtr & partition, const StorageMetadataPtr & metadata_snapshot, bool part, ContextPtr context) = 0; + virtual void replacePartitionFrom(const StoragePtr & source_table, const ASTPtr & partition, bool replace, ContextPtr context) = 0; + virtual void movePartitionToTable(const StoragePtr & dest_table, const ASTPtr & partition, ContextPtr context) = 0; /// Makes sense only for replicated tables - virtual void fetchPartition(const ASTPtr & partition, const StorageMetadataPtr & metadata_snapshot, const String & from, const Context & query_context); + virtual void fetchPartition( + const ASTPtr & partition, + const StorageMetadataPtr & metadata_snapshot, + const String & from, + bool fetch_part, + ContextPtr query_context); void writePartLog( PartLogElement::Type type, @@ -973,6 +1048,27 @@ private: // Record all query ids which access the table. It's guarded by `query_id_set_mutex` and is always mutable. mutable std::set query_id_set; mutable std::mutex query_id_set_mutex; + + // Get partition matcher for FREEZE / UNFREEZE queries. + MatcherFn getPartitionMatcher(const ASTPtr & partition, ContextPtr context) const; +}; + +/// RAII struct to record big parts that are submerging or emerging. +/// It's used to calculate the balanced statistics of JBOD array. 
+struct CurrentlySubmergingEmergingTagger +{ + MergeTreeData & storage; + String emerging_part_name; + MergeTreeData::DataPartsVector submerging_parts; + Poco::Logger * log; + + CurrentlySubmergingEmergingTagger( + MergeTreeData & storage_, const String & name_, MergeTreeData::DataPartsVector && parts_, Poco::Logger * log_) + : storage(storage_), emerging_part_name(name_), submerging_parts(std::move(parts_)), log(log_) + { + } + + ~CurrentlySubmergingEmergingTagger(); }; } diff --git a/src/Storages/MergeTree/MergeTreeDataMergerMutator.cpp b/src/Storages/MergeTree/MergeTreeDataMergerMutator.cpp index f2f8172837c..dfebd88abe9 100644 --- a/src/Storages/MergeTree/MergeTreeDataMergerMutator.cpp +++ b/src/Storages/MergeTree/MergeTreeDataMergerMutator.cpp @@ -637,7 +637,7 @@ public: } }; -static bool needSyncPart(const size_t input_rows, size_t input_bytes, const MergeTreeSettings & settings) +static bool needSyncPart(size_t input_rows, size_t input_bytes, const MergeTreeSettings & settings) { return ((settings.min_rows_to_fsync_after_merge && input_rows >= settings.min_rows_to_fsync_after_merge) || (settings.min_compressed_bytes_to_fsync_after_merge && input_bytes >= settings.min_compressed_bytes_to_fsync_after_merge)); @@ -651,7 +651,7 @@ MergeTreeData::MutableDataPartPtr MergeTreeDataMergerMutator::mergePartsToTempor MergeList::Entry & merge_entry, TableLockHolder &, time_t time_of_merge, - const Context & context, + ContextPtr context, const ReservationPtr & space_reservation, bool deduplicate, const Names & deduplicate_by_columns) @@ -751,12 +751,14 @@ MergeTreeData::MutableDataPartPtr MergeTreeDataMergerMutator::mergePartsToTempor /// deadlock is impossible. auto compression_codec = data.getCompressionCodecForPart(merge_entry->total_size_bytes_compressed, new_data_part->ttl_infos, time_of_merge); - auto tmp_disk = context.getTemporaryVolume()->getDisk(); + auto tmp_disk = context->getTemporaryVolume()->getDisk(); String rows_sources_file_path; std::unique_ptr rows_sources_uncompressed_write_buf; std::unique_ptr rows_sources_write_buf; std::optional column_sizes; + SyncGuardPtr sync_guard; + if (chosen_merge_algorithm == MergeAlgorithm::Vertical) { tmp_disk->createDirectories(new_part_tmp_path); @@ -769,6 +771,9 @@ MergeTreeData::MutableDataPartPtr MergeTreeDataMergerMutator::mergePartsToTempor part->accumulateColumnSizes(merged_column_to_size); column_sizes = ColumnSizeEstimator(merged_column_to_size, merging_column_names, gathering_column_names); + + if (data.getSettings()->fsync_part_directory) + sync_guard = disk->getDirectorySyncGuard(new_part_tmp_path); } else { @@ -778,10 +783,6 @@ MergeTreeData::MutableDataPartPtr MergeTreeDataMergerMutator::mergePartsToTempor gathering_column_names.clear(); } - SyncGuardPtr sync_guard; - if (data.getSettings()->fsync_part_directory) - sync_guard = disk->getDirectorySyncGuard(new_part_tmp_path); - /** Read from all parts, merge and write into a new one. * In passing, we calculate expression for sorting. 
*/ @@ -909,7 +910,7 @@ MergeTreeData::MutableDataPartPtr MergeTreeDataMergerMutator::mergePartsToTempor { const auto & indices = metadata_snapshot->getSecondaryIndices(); merged_stream = std::make_shared( - merged_stream, indices.getSingleExpressionForIndices(metadata_snapshot->getColumns(), data.global_context)); + merged_stream, indices.getSingleExpressionForIndices(metadata_snapshot->getColumns(), data.getContext())); merged_stream = std::make_shared(merged_stream); } @@ -1098,7 +1099,7 @@ MergeTreeData::MutableDataPartPtr MergeTreeDataMergerMutator::mutatePartToTempor const MutationCommands & commands, MergeListEntry & merge_entry, time_t time_of_mutation, - const Context & context, + ContextPtr context, const ReservationPtr & space_reservation, TableLockHolder &) { @@ -1112,12 +1113,12 @@ MergeTreeData::MutableDataPartPtr MergeTreeDataMergerMutator::mutatePartToTempor const auto & source_part = future_part.parts[0]; auto storage_from_source_part = StorageFromMergeTreeDataPart::create(source_part); - auto context_for_reading = context; - context_for_reading.setSetting("max_streams_to_max_threads_ratio", 1); - context_for_reading.setSetting("max_threads", 1); + auto context_for_reading = Context::createCopy(context); + context_for_reading->setSetting("max_streams_to_max_threads_ratio", 1); + context_for_reading->setSetting("max_threads", 1); /// Allow mutations to work when force_index_by_date or force_primary_key is on. - context_for_reading.setSetting("force_index_by_date", Field(0)); - context_for_reading.setSetting("force_primary_key", Field(0)); + context_for_reading->setSetting("force_index_by_date", Field(0)); + context_for_reading->setSetting("force_primary_key", Field(0)); MutationCommands commands_for_part; for (const auto & command : commands) @@ -1128,7 +1129,7 @@ MergeTreeData::MutableDataPartPtr MergeTreeDataMergerMutator::mutatePartToTempor } if (source_part->isStoredOnDisk() && !isStorageTouchedByMutations( - storage_from_source_part, metadata_snapshot, commands_for_part, context_for_reading)) + storage_from_source_part, metadata_snapshot, commands_for_part, Context::createCopy(context_for_reading))) { LOG_TRACE(log, "Part {} doesn't change up to mutation version {}", source_part->name, future_part.part_info.mutation); return data.cloneAndLoadDataPartOnSameDisk(source_part, "tmp_clone_", future_part.part_info, metadata_snapshot); @@ -1488,10 +1489,11 @@ NameToNameVector MergeTreeDataMergerMutator::collectFilesForRenames( std::map stream_counts; for (const NameAndTypePair & column : source_part->getColumns()) { - column.type->enumerateStreams( - [&](const IDataType::SubstreamPath & substream_path, const IDataType & /* substream_type */) + auto serialization = source_part->getSerializationForColumn(column); + serialization->enumerateStreams( + [&](const ISerialization::SubstreamPath & substream_path) { - ++stream_counts[IDataType::getFileNameForStream(column, substream_path)]; + ++stream_counts[ISerialization::getFileNameForStream(column, substream_path)]; }, {}); } @@ -1507,9 +1509,9 @@ NameToNameVector MergeTreeDataMergerMutator::collectFilesForRenames( } else if (command.type == MutationCommand::Type::DROP_COLUMN) { - IDataType::StreamCallback callback = [&](const IDataType::SubstreamPath & substream_path, const IDataType & /* substream_type */) + ISerialization::StreamCallback callback = [&](const ISerialization::SubstreamPath & substream_path) { - String stream_name = IDataType::getFileNameForStream({command.column_name, command.data_type}, substream_path); + String 
stream_name = ISerialization::getFileNameForStream({command.column_name, command.data_type}, substream_path); /// Delete files if they are no longer shared with another column. if (--stream_counts[stream_name] == 0) { @@ -1518,19 +1520,21 @@ NameToNameVector MergeTreeDataMergerMutator::collectFilesForRenames( } }; - IDataType::SubstreamPath stream_path; auto column = source_part->getColumns().tryGetByName(command.column_name); if (column) - column->type->enumerateStreams(callback, stream_path); + { + auto serialization = source_part->getSerializationForColumn(*column); + serialization->enumerateStreams(callback); + } } else if (command.type == MutationCommand::Type::RENAME_COLUMN) { String escaped_name_from = escapeForFileName(command.column_name); String escaped_name_to = escapeForFileName(command.rename_to); - IDataType::StreamCallback callback = [&](const IDataType::SubstreamPath & substream_path, const IDataType & /* substream_type */) + ISerialization::StreamCallback callback = [&](const ISerialization::SubstreamPath & substream_path) { - String stream_from = IDataType::getFileNameForStream({command.column_name, command.data_type}, substream_path); + String stream_from = ISerialization::getFileNameForStream({command.column_name, command.data_type}, substream_path); String stream_to = boost::replace_first_copy(stream_from, escaped_name_from, escaped_name_to); @@ -1540,10 +1544,13 @@ NameToNameVector MergeTreeDataMergerMutator::collectFilesForRenames( rename_vector.emplace_back(stream_from + mrk_extension, stream_to + mrk_extension); } }; - IDataType::SubstreamPath stream_path; + auto column = source_part->getColumns().tryGetByName(command.column_name); if (column) - column->type->enumerateStreams(callback, stream_path); + { + auto serialization = source_part->getSerializationForColumn(*column); + serialization->enumerateStreams(callback); + } } } @@ -1561,15 +1568,15 @@ NameSet MergeTreeDataMergerMutator::collectFilesToSkip( /// Skip updated files for (const auto & entry : updated_header) { - IDataType::StreamCallback callback = [&](const IDataType::SubstreamPath & substream_path, const IDataType & /* substream_type */) + ISerialization::StreamCallback callback = [&](const ISerialization::SubstreamPath & substream_path) { - String stream_name = IDataType::getFileNameForStream({entry.name, entry.type}, substream_path); + String stream_name = ISerialization::getFileNameForStream({entry.name, entry.type}, substream_path); files_to_skip.insert(stream_name + ".bin"); files_to_skip.insert(stream_name + mrk_extension); }; - IDataType::SubstreamPath stream_path; - entry.type->enumerateStreams(callback, stream_path); + auto serialization = source_part->getSerializationForColumn({entry.name, entry.type}); + serialization->enumerateStreams(callback); } for (const auto & index : indices_to_recalc) { @@ -1683,7 +1690,7 @@ std::set MergeTreeDataMergerMutator::getIndicesToRecalculate( BlockInputStreamPtr & input_stream, const NamesAndTypesList & updated_columns, const StorageMetadataPtr & metadata_snapshot, - const Context & context) + ContextPtr context) { /// Checks if columns used in skipping indexes modified. 
const auto & index_factory = MergeTreeIndexFactory::instance(); @@ -1894,9 +1901,9 @@ void MergeTreeDataMergerMutator::finalizeMutatedPart( MergeTreeData::DataPart::calculateTotalSizeOnDisk(new_data_part->volume->getDisk(), new_data_part->getFullRelativePath())); new_data_part->default_codec = codec; new_data_part->calculateColumnsSizesOnDisk(); + new_data_part->storage.lockSharedData(*new_data_part); } - bool MergeTreeDataMergerMutator::checkOperationIsNotCanceled(const MergeListEntry & merge_entry) const { if (merges_blocker.isCancelled() || merge_entry->is_cancelled) diff --git a/src/Storages/MergeTree/MergeTreeDataMergerMutator.h b/src/Storages/MergeTree/MergeTreeDataMergerMutator.h index 2f3a898ba84..d4dc0ce8499 100644 --- a/src/Storages/MergeTree/MergeTreeDataMergerMutator.h +++ b/src/Storages/MergeTree/MergeTreeDataMergerMutator.h @@ -125,7 +125,7 @@ public: MergeListEntry & merge_entry, TableLockHolder & table_lock_holder, time_t time_of_merge, - const Context & context, + ContextPtr context, const ReservationPtr & space_reservation, bool deduplicate, const Names & deduplicate_by_columns); @@ -137,7 +137,7 @@ public: const MutationCommands & commands, MergeListEntry & merge_entry, time_t time_of_mutation, - const Context & context, + ContextPtr context, const ReservationPtr & space_reservation, TableLockHolder & table_lock_holder); @@ -199,7 +199,7 @@ private: BlockInputStreamPtr & input_stream, const NamesAndTypesList & updated_columns, const StorageMetadataPtr & metadata_snapshot, - const Context & context); + ContextPtr context); /// Override all columns of new part using mutating_stream void mutateAllPartColumns( diff --git a/src/Storages/MergeTree/MergeTreeDataPartChecksum.cpp b/src/Storages/MergeTree/MergeTreeDataPartChecksum.cpp index dd141a68248..b0eb1cbea70 100644 --- a/src/Storages/MergeTree/MergeTreeDataPartChecksum.cpp +++ b/src/Storages/MergeTree/MergeTreeDataPartChecksum.cpp @@ -293,11 +293,8 @@ String MergeTreeDataPartChecksums::getTotalChecksumHex() const { SipHash hash_of_all_files; - for (const auto & elem : files) + for (const auto & [name, checksum] : files) { - const String & name = elem.first; - const auto & checksum = elem.second; - updateHash(hash_of_all_files, name); hash_of_all_files.update(checksum.file_hash); } @@ -376,11 +373,8 @@ void MinimalisticDataPartChecksums::computeTotalChecksums(const MergeTreeDataPar SipHash hash_of_uncompressed_files_state; SipHash uncompressed_hash_of_compressed_files_state; - for (const auto & elem : full_checksums_.files) + for (const auto & [name, checksum] : full_checksums_.files) { - const String & name = elem.first; - const auto & checksum = elem.second; - updateHash(hash_of_all_files_state, name); hash_of_all_files_state.update(checksum.file_hash); diff --git a/src/Storages/MergeTree/MergeTreeDataPartCompact.h b/src/Storages/MergeTree/MergeTreeDataPartCompact.h index 2c0c4020bb0..564d59c9198 100644 --- a/src/Storages/MergeTree/MergeTreeDataPartCompact.h +++ b/src/Storages/MergeTree/MergeTreeDataPartCompact.h @@ -19,7 +19,6 @@ class MergeTreeDataPartCompact : public IMergeTreeDataPart { public: static constexpr auto DATA_FILE_NAME = "data"; - static constexpr auto DATA_FILE_EXTENSION = ".bin"; static constexpr auto DATA_FILE_NAME_WITH_EXTENSION = "data.bin"; MergeTreeDataPartCompact( diff --git a/src/Storages/MergeTree/MergeTreeDataPartInMemory.cpp b/src/Storages/MergeTree/MergeTreeDataPartInMemory.cpp index 96fa411339c..045ab488ada 100644 --- a/src/Storages/MergeTree/MergeTreeDataPartInMemory.cpp +++ 
b/src/Storages/MergeTree/MergeTreeDataPartInMemory.cpp @@ -88,7 +88,7 @@ void MergeTreeDataPartInMemory::flushToDisk(const String & base_path, const Stri disk->createDirectories(destination_path); - auto compression_codec = storage.global_context.chooseCompressionCodec(0, 0); + auto compression_codec = storage.getContext()->chooseCompressionCodec(0, 0); auto indices = MergeTreeIndexFactory::instance().getMany(metadata_snapshot->getSecondaryIndices()); MergedBlockOutputStream out(new_data_part, metadata_snapshot, columns, indices, compression_codec); out.writePrefix(); diff --git a/src/Storages/MergeTree/MergeTreeDataPartWide.cpp b/src/Storages/MergeTree/MergeTreeDataPartWide.cpp index 5dd8c26f224..1da115efa70 100644 --- a/src/Storages/MergeTree/MergeTreeDataPartWide.cpp +++ b/src/Storages/MergeTree/MergeTreeDataPartWide.cpp @@ -82,9 +82,10 @@ ColumnSize MergeTreeDataPartWide::getColumnSizeImpl( if (checksums.empty()) return size; - column.type->enumerateStreams([&](const IDataType::SubstreamPath & substream_path, const IDataType & /* substream_type */) + auto serialization = getSerializationForColumn(column); + serialization->enumerateStreams([&](const ISerialization::SubstreamPath & substream_path) { - String file_name = IDataType::getFileNameForStream(column, substream_path); + String file_name = ISerialization::getFileNameForStream(column, substream_path); if (processed_substreams && !processed_substreams->insert(file_name).second) return; @@ -159,19 +160,19 @@ void MergeTreeDataPartWide::checkConsistency(bool require_part_metadata) const { for (const NameAndTypePair & name_type : columns) { - IDataType::SubstreamPath stream_path; - name_type.type->enumerateStreams([&](const IDataType::SubstreamPath & substream_path, const IDataType & /* substream_type */) + auto serialization = getSerializationForColumn(name_type); + serialization->enumerateStreams([&](const ISerialization::SubstreamPath & substream_path) { - String file_name = IDataType::getFileNameForStream(name_type, substream_path); + String file_name = ISerialization::getFileNameForStream(name_type, substream_path); String mrk_file_name = file_name + index_granularity_info.marks_file_extension; - String bin_file_name = file_name + ".bin"; + String bin_file_name = file_name + DATA_FILE_EXTENSION; if (!checksums.files.count(mrk_file_name)) throw Exception("No " + mrk_file_name + " file checksum for column " + name_type.name + " in part " + fullPath(volume->getDisk(), path), ErrorCodes::NO_FILE_IN_DATA_PART); if (!checksums.files.count(bin_file_name)) throw Exception("No " + bin_file_name + " file checksum for column " + name_type.name + " in part " + fullPath(volume->getDisk(), path), ErrorCodes::NO_FILE_IN_DATA_PART); - }, stream_path); + }); } } @@ -182,9 +183,15 @@ void MergeTreeDataPartWide::checkConsistency(bool require_part_metadata) const std::optional marks_size; for (const NameAndTypePair & name_type : columns) { - name_type.type->enumerateStreams([&](const IDataType::SubstreamPath & substream_path, const IDataType & /* substream_type */) + auto serialization = IDataType::getSerialization(name_type, + [&](const String & stream_name) + { + return volume->getDisk()->exists(stream_name + DATA_FILE_EXTENSION); + }); + + serialization->enumerateStreams([&](const ISerialization::SubstreamPath & substream_path) { - auto file_path = path + IDataType::getFileNameForStream(name_type, substream_path) + index_granularity_info.marks_file_extension; + auto file_path = path + ISerialization::getFileNameForStream(name_type, substream_path) 
+ index_granularity_info.marks_file_extension; /// Missing file is Ok for case when new column was added. if (volume->getDisk()->exists(file_path)) @@ -208,18 +215,22 @@ void MergeTreeDataPartWide::checkConsistency(bool require_part_metadata) const bool MergeTreeDataPartWide::hasColumnFiles(const NameAndTypePair & column) const { - bool res = true; - - column.type->enumerateStreams([&](const IDataType::SubstreamPath & substream_path, const IDataType & /* substream_type */) + auto check_stream_exists = [this](const String & stream_name) { - String file_name = IDataType::getFileNameForStream(column, substream_path); + auto bin_checksum = checksums.files.find(stream_name + DATA_FILE_EXTENSION); + auto mrk_checksum = checksums.files.find(stream_name + index_granularity_info.marks_file_extension); - auto bin_checksum = checksums.files.find(file_name + ".bin"); - auto mrk_checksum = checksums.files.find(file_name + index_granularity_info.marks_file_extension); + return bin_checksum != checksums.files.end() && mrk_checksum != checksums.files.end(); + }; - if (bin_checksum == checksums.files.end() || mrk_checksum == checksums.files.end()) + bool res = true; + auto serialization = IDataType::getSerialization(column, check_stream_exists); + serialization->enumerateStreams([&](const ISerialization::SubstreamPath & substream_path) + { + String file_name = ISerialization::getFileNameForStream(column, substream_path); + if (!check_stream_exists(file_name)) res = false; - }, {}); + }); return res; } @@ -227,10 +238,11 @@ bool MergeTreeDataPartWide::hasColumnFiles(const NameAndTypePair & column) const String MergeTreeDataPartWide::getFileNameForColumn(const NameAndTypePair & column) const { String filename; - column.type->enumerateStreams([&](const IDataType::SubstreamPath & substream_path, const IDataType & /* substream_type */) + auto serialization = column.type->getDefaultSerialization(); + serialization->enumerateStreams([&](const ISerialization::SubstreamPath & substream_path) { if (filename.empty()) - filename = IDataType::getFileNameForStream(column, substream_path); + filename = ISerialization::getFileNameForStream(column, substream_path); }); return filename; } diff --git a/src/Storages/MergeTree/MergeTreeDataPartWriterCompact.cpp b/src/Storages/MergeTree/MergeTreeDataPartWriterCompact.cpp index 7ec5cf8920c..2efda206cf9 100644 --- a/src/Storages/MergeTree/MergeTreeDataPartWriterCompact.cpp +++ b/src/Storages/MergeTree/MergeTreeDataPartWriterCompact.cpp @@ -39,9 +39,9 @@ MergeTreeDataPartWriterCompact::MergeTreeDataPartWriterCompact( void MergeTreeDataPartWriterCompact::addStreams(const NameAndTypePair & column, const ASTPtr & effective_codec_desc) { - IDataType::StreamCallback callback = [&] (const IDataType::SubstreamPath & substream_path, const IDataType & substream_type) + IDataType::StreamCallbackWithType callback = [&] (const ISerialization::SubstreamPath & substream_path, const IDataType & substream_type) { - String stream_name = IDataType::getFileNameForStream(column, substream_path); + String stream_name = ISerialization::getFileNameForStream(column, substream_path); /// Shared offsets for Nested type. 
if (compressed_streams.count(stream_name)) @@ -50,7 +50,7 @@ void MergeTreeDataPartWriterCompact::addStreams(const NameAndTypePair & column, CompressionCodecPtr compression_codec; /// If we can use special codec than just get it - if (IDataType::isSpecialCompressionAllowed(substream_path)) + if (ISerialization::isSpecialCompressionAllowed(substream_path)) compression_codec = CompressionCodecFactory::instance().get(effective_codec_desc, &substream_type, default_codec); else /// otherwise return only generic codecs and don't use info about data_type compression_codec = CompressionCodecFactory::instance().get(effective_codec_desc, nullptr, default_codec, true); @@ -63,8 +63,7 @@ void MergeTreeDataPartWriterCompact::addStreams(const NameAndTypePair & column, compressed_streams.emplace(stream_name, stream); }; - IDataType::SubstreamPath stream_path; - column.type->enumerateStreams(callback, stream_path); + column.type->enumerateStreams(serializations[column.name], callback); } namespace @@ -106,20 +105,21 @@ Granules getGranulesToWrite(const MergeTreeIndexGranularity & index_granularity, /// Write single granule of one column (rows between 2 marks) void writeColumnSingleGranule( const ColumnWithTypeAndName & column, - IDataType::OutputStreamGetter stream_getter, + const SerializationPtr & serialization, + ISerialization::OutputStreamGetter stream_getter, size_t from_row, size_t number_of_rows) { - IDataType::SerializeBinaryBulkStatePtr state; - IDataType::SerializeBinaryBulkSettings serialize_settings; + ISerialization::SerializeBinaryBulkStatePtr state; + ISerialization::SerializeBinaryBulkSettings serialize_settings; serialize_settings.getter = stream_getter; serialize_settings.position_independent_encoding = true; serialize_settings.low_cardinality_max_dictionary_size = 0; - column.type->serializeBinaryBulkStatePrefix(serialize_settings, state); - column.type->serializeBinaryBulkWithMultipleStreams(*column.column, from_row, number_of_rows, serialize_settings, state); - column.type->serializeBinaryBulkStateSuffix(serialize_settings, state); + serialization->serializeBinaryBulkStatePrefix(serialize_settings, state); + serialization->serializeBinaryBulkWithMultipleStreams(*column.column, from_row, number_of_rows, serialize_settings, state); + serialization->serializeBinaryBulkStateSuffix(serialize_settings, state); } } @@ -181,9 +181,9 @@ void MergeTreeDataPartWriterCompact::writeDataBlock(const Block & block, const G /// So we flush each stream (using next()) before using new one, because otherwise we will override /// data in result file. CompressedStreamPtr prev_stream; - auto stream_getter = [&, this](const IDataType::SubstreamPath & substream_path) -> WriteBuffer * + auto stream_getter = [&, this](const ISerialization::SubstreamPath & substream_path) -> WriteBuffer * { - String stream_name = IDataType::getFileNameForStream(*name_and_type, substream_path); + String stream_name = ISerialization::getFileNameForStream(*name_and_type, substream_path); auto & result_stream = compressed_streams[stream_name]; /// Write one compressed block per column in granule for more optimal reading. 
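`writeColumnSingleGranule` above drives the serialization bulk protocol in three fixed phases: a state prefix, the rows themselves, then a state suffix. The toy serializer below mirrors only that shape; the real `ISerialization` state carries the stream getter, LowCardinality dictionary limits, and per-substream buffers:

```cpp
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

/// Toy stand-in for ISerialization's serialize-binary-bulk state.
struct SerializeState { size_t rows_written = 0; };

void serializeBinaryBulkStatePrefix(SerializeState & state, std::ostream & out)
{
    out << "[prefix]"; /// e.g. dictionary headers for LowCardinality
    state.rows_written = 0;
}

void serializeBinaryBulkWithMultipleStreams(
    const std::vector<int> & column, size_t from_row, size_t rows, SerializeState & state, std::ostream & out)
{
    for (size_t i = from_row; i < from_row + rows; ++i)
        out << column[i] << ',';
    state.rows_written += rows;
}

void serializeBinaryBulkStateSuffix(const SerializeState & state, std::ostream & out)
{
    out << "[suffix rows=" << state.rows_written << "]";
}

int main()
{
    std::vector<int> column = {10, 20, 30, 40, 50};
    std::ostringstream out;

    /// One granule of 3 rows starting at row 1, as in writeColumnSingleGranule.
    SerializeState state;
    serializeBinaryBulkStatePrefix(state, out);
    serializeBinaryBulkWithMultipleStreams(column, /*from_row=*/1, /*rows=*/3, state, out);
    serializeBinaryBulkStateSuffix(state, out);

    std::cout << out.str() << '\n'; /// [prefix]20,30,40,[suffix rows=3]
}
```

Writing the prefix and suffix per granule is what lets the compact format keep each granule's column data independently decodable.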
@@ -203,7 +203,9 @@ void MergeTreeDataPartWriterCompact::writeDataBlock(const Block & block, const G writeIntBinary(plain_hashing.count(), marks); writeIntBinary(UInt64(0), marks); - writeColumnSingleGranule(block.getByName(name_and_type->name), stream_getter, granule.start_row, granule.rows_to_write); + writeColumnSingleGranule( + block.getByName(name_and_type->name), serializations[name_and_type->name], + stream_getter, granule.start_row, granule.rows_to_write); /// Each type always have at least one substream prev_stream->hashing_buf.next(); //-V522 diff --git a/src/Storages/MergeTree/MergeTreeDataPartWriterOnDisk.cpp b/src/Storages/MergeTree/MergeTreeDataPartWriterOnDisk.cpp index fcc4f92be24..9902add9847 100644 --- a/src/Storages/MergeTree/MergeTreeDataPartWriterOnDisk.cpp +++ b/src/Storages/MergeTree/MergeTreeDataPartWriterOnDisk.cpp @@ -90,6 +90,9 @@ MergeTreeDataPartWriterOnDisk::MergeTreeDataPartWriterOnDisk( if (!disk->exists(part_path)) disk->createDirectories(part_path); + for (const auto & column : columns_list) + serializations.emplace(column.name, column.type->getDefaultSerialization()); + if (settings.rewrite_primary_key) initPrimaryIndex(); initSkipIndices(); @@ -200,7 +203,7 @@ void MergeTreeDataPartWriterOnDisk::calculateAndSerializePrimaryIndex(const Bloc { const auto & primary_column = primary_index_block.getByPosition(j); index_columns[j]->insertFrom(*primary_column.column, granule.start_row); - primary_column.type->serializeBinary(*primary_column.column, granule.start_row, *index_stream); + primary_column.type->getDefaultSerialization()->serializeBinary(*primary_column.column, granule.start_row, *index_stream); } } } @@ -265,7 +268,7 @@ void MergeTreeDataPartWriterOnDisk::finishPrimaryIndexSerialization( const auto & column = *last_block_index_columns[j]; size_t last_row_number = column.size() - 1; index_columns[j]->insertFrom(column, last_row_number); - index_types[j]->serializeBinary(column, last_row_number, *index_stream); + index_types[j]->getDefaultSerialization()->serializeBinary(column, last_row_number, *index_stream); } last_block_index_columns.clear(); } diff --git a/src/Storages/MergeTree/MergeTreeDataPartWriterOnDisk.h b/src/Storages/MergeTree/MergeTreeDataPartWriterOnDisk.h index 704b38ba6d5..d952950e461 100644 --- a/src/Storages/MergeTree/MergeTreeDataPartWriterOnDisk.h +++ b/src/Storages/MergeTree/MergeTreeDataPartWriterOnDisk.h @@ -132,6 +132,9 @@ protected: MergeTreeIndexAggregators skip_indices_aggregators; std::vector skip_index_accumulated_marks; + using SerializationsMap = std::unordered_map; + SerializationsMap serializations; + std::unique_ptr index_file_stream; std::unique_ptr index_stream; DataTypes index_types; diff --git a/src/Storages/MergeTree/MergeTreeDataPartWriterWide.cpp b/src/Storages/MergeTree/MergeTreeDataPartWriterWide.cpp index f2bbf53bd97..57e8cca46cd 100644 --- a/src/Storages/MergeTree/MergeTreeDataPartWriterWide.cpp +++ b/src/Storages/MergeTree/MergeTreeDataPartWriterWide.cpp @@ -2,6 +2,8 @@ #include #include #include +#include +#include namespace DB { @@ -88,18 +90,18 @@ void MergeTreeDataPartWriterWide::addStreams( const NameAndTypePair & column, const ASTPtr & effective_codec_desc) { - IDataType::StreamCallback callback = [&] (const IDataType::SubstreamPath & substream_path, const IDataType & substream_type) + IDataType::StreamCallbackWithType callback = [&] (const ISerialization::SubstreamPath & substream_path, const IDataType & substream_type) { - String stream_name = IDataType::getFileNameForStream(column, 
substream_path); + String stream_name = ISerialization::getFileNameForStream(column, substream_path); /// Shared offsets for Nested type. if (column_streams.count(stream_name)) return; CompressionCodecPtr compression_codec; /// If we can use special codec then just get it - if (IDataType::isSpecialCompressionAllowed(substream_path)) + if (ISerialization::isSpecialCompressionAllowed(substream_path)) compression_codec = CompressionCodecFactory::instance().get(effective_codec_desc, &substream_type, default_codec); - else /// otherwise return only generic codecs and don't use info about the data_type + else /// otherwise return only generic codecs and don't use info about the data_type compression_codec = CompressionCodecFactory::instance().get(effective_codec_desc, nullptr, default_codec, true); column_streams[stream_name] = std::make_unique( @@ -111,19 +113,18 @@ void MergeTreeDataPartWriterWide::addStreams( settings.max_compress_block_size); }; - IDataType::SubstreamPath stream_path; - column.type->enumerateStreams(callback, stream_path); + column.type->enumerateStreams(serializations[column.name], callback); } -IDataType::OutputStreamGetter MergeTreeDataPartWriterWide::createStreamGetter( +ISerialization::OutputStreamGetter MergeTreeDataPartWriterWide::createStreamGetter( const NameAndTypePair & column, WrittenOffsetColumns & offset_columns) const { - return [&, this] (const IDataType::SubstreamPath & substream_path) -> WriteBuffer * + return [&, this] (const ISerialization::SubstreamPath & substream_path) -> WriteBuffer * { - bool is_offsets = !substream_path.empty() && substream_path.back().type == IDataType::Substream::ArraySizes; + bool is_offsets = !substream_path.empty() && substream_path.back().type == ISerialization::Substream::ArraySizes; - String stream_name = IDataType::getFileNameForStream(column, substream_path); + String stream_name = ISerialization::getFileNameForStream(column, substream_path); /// Don't write offsets more than one time for Nested type. if (is_offsets && offset_columns.count(stream_name)) @@ -242,7 +243,7 @@ void MergeTreeDataPartWriterWide::writeSingleMark( const NameAndTypePair & column, WrittenOffsetColumns & offset_columns, size_t number_of_rows, - DB::IDataType::SubstreamPath & path) + ISerialization::SubstreamPath & path) { StreamsWithMarks marks = getCurrentMarksForColumn(column, offset_columns, path); for (const auto & mark : marks) @@ -261,14 +262,14 @@ void MergeTreeDataPartWriterWide::flushMarkToFile(const StreamNameAndMark & stre StreamsWithMarks MergeTreeDataPartWriterWide::getCurrentMarksForColumn( const NameAndTypePair & column, WrittenOffsetColumns & offset_columns, - DB::IDataType::SubstreamPath & path) + ISerialization::SubstreamPath & path) { StreamsWithMarks result; - column.type->enumerateStreams([&] (const IDataType::SubstreamPath & substream_path, const IDataType & /* substream_type */) + serializations[column.name]->enumerateStreams([&] (const ISerialization::SubstreamPath & substream_path) { - bool is_offsets = !substream_path.empty() && substream_path.back().type == IDataType::Substream::ArraySizes; + bool is_offsets = !substream_path.empty() && substream_path.back().type == ISerialization::Substream::ArraySizes; - String stream_name = IDataType::getFileNameForStream(column, substream_path); + String stream_name = ISerialization::getFileNameForStream(column, substream_path); /// Don't write offsets more than one time for Nested type.
if (is_offsets && offset_columns.count(stream_name)) @@ -295,18 +296,19 @@ void MergeTreeDataPartWriterWide::writeSingleGranule( const NameAndTypePair & name_and_type, const IColumn & column, WrittenOffsetColumns & offset_columns, - IDataType::SerializeBinaryBulkStatePtr & serialization_state, - IDataType::SerializeBinaryBulkSettings & serialize_settings, + ISerialization::SerializeBinaryBulkStatePtr & serialization_state, + ISerialization::SerializeBinaryBulkSettings & serialize_settings, const Granule & granule) { - name_and_type.type->serializeBinaryBulkWithMultipleStreams(column, granule.start_row, granule.rows_to_write, serialize_settings, serialization_state); + const auto & serialization = serializations[name_and_type.name]; + serialization->serializeBinaryBulkWithMultipleStreams(column, granule.start_row, granule.rows_to_write, serialize_settings, serialization_state); /// So that instead of the marks pointing to the end of the compressed block, there will be marks pointing to the beginning of the next one. - name_and_type.type->enumerateStreams([&] (const IDataType::SubstreamPath & substream_path, const IDataType & /* substream_type */) + serialization->enumerateStreams([&] (const ISerialization::SubstreamPath & substream_path) { - bool is_offsets = !substream_path.empty() && substream_path.back().type == IDataType::Substream::ArraySizes; + bool is_offsets = !substream_path.empty() && substream_path.back().type == ISerialization::Substream::ArraySizes; - String stream_name = IDataType::getFileNameForStream(name_and_type, substream_path); + String stream_name = ISerialization::getFileNameForStream(name_and_type, substream_path); /// Don't write offsets more than one time for Nested type. if (is_offsets && offset_columns.count(stream_name)) @@ -331,13 +333,13 @@ void MergeTreeDataPartWriterWide::writeColumn( if (inserted) { - IDataType::SerializeBinaryBulkSettings serialize_settings; + ISerialization::SerializeBinaryBulkSettings serialize_settings; serialize_settings.getter = createStreamGetter(name_and_type, offset_columns); - type->serializeBinaryBulkStatePrefix(serialize_settings, it->second); + serializations[name]->serializeBinaryBulkStatePrefix(serialize_settings, it->second); } - const auto & global_settings = storage.global_context.getSettingsRef(); - IDataType::SerializeBinaryBulkSettings serialize_settings; + const auto & global_settings = storage.getContext()->getSettingsRef(); + ISerialization::SerializeBinaryBulkSettings serialize_settings; serialize_settings.getter = createStreamGetter(name_and_type, offset_columns); serialize_settings.low_cardinality_max_dictionary_size = global_settings.low_cardinality_max_dictionary_size; serialize_settings.low_cardinality_use_single_dictionary_for_part = global_settings.low_cardinality_use_single_dictionary_for_part != 0; @@ -374,12 +376,12 @@ void MergeTreeDataPartWriterWide::writeColumn( } } - name_and_type.type->enumerateStreams([&] (const IDataType::SubstreamPath & substream_path, const IDataType & /* substream_type */) + serializations[name]->enumerateStreams([&] (const ISerialization::SubstreamPath & substream_path) { - bool is_offsets = !substream_path.empty() && substream_path.back().type == IDataType::Substream::ArraySizes; + bool is_offsets = !substream_path.empty() && substream_path.back().type == ISerialization::Substream::ArraySizes; if (is_offsets) { - String stream_name = IDataType::getFileNameForStream(name_and_type, substream_path); + String stream_name = ISerialization::getFileNameForStream(name_and_type,
substream_path); offset_columns.insert(stream_name); } }, serialize_settings.path); @@ -392,10 +394,11 @@ void MergeTreeDataPartWriterWide::validateColumnOfFixedSize(const String & name, throw Exception(ErrorCodes::LOGICAL_ERROR, "Cannot validate column of non fixed type {}", type.getName()); auto disk = data_part->volume->getDisk(); - String mrk_path = fullPath(disk, part_path + name + marks_file_extension); - String bin_path = fullPath(disk, part_path + name + DATA_FILE_EXTENSION); + String escaped_name = escapeForFileName(name); + String mrk_path = fullPath(disk, part_path + escaped_name + marks_file_extension); + String bin_path = fullPath(disk, part_path + escaped_name + DATA_FILE_EXTENSION); DB::ReadBufferFromFile mrk_in(mrk_path); - DB::CompressedReadBufferFromFile bin_in(bin_path, 0, 0, 0); + DB::CompressedReadBufferFromFile bin_in(bin_path, 0, 0, 0, nullptr); bool must_be_last = false; UInt64 offset_in_compressed_file = 0; UInt64 offset_in_decompressed_block = 0; @@ -403,6 +406,7 @@ void MergeTreeDataPartWriterWide::validateColumnOfFixedSize(const String & name, size_t mark_num; + const auto & serialization = serializations[name]; for (mark_num = 0; !mrk_in.eof(); ++mark_num) { if (mark_num > index_granularity.getMarksCount()) @@ -430,7 +434,7 @@ void MergeTreeDataPartWriterWide::validateColumnOfFixedSize(const String & name, { auto column = type.createColumn(); - type.deserializeBinaryBulk(*column, bin_in, 1000000000, 0.0); + serialization->deserializeBinaryBulk(*column, bin_in, 1000000000, 0.0); throw Exception(ErrorCodes::LOGICAL_ERROR, "Still have {} rows in bin stream, last mark #{} index granularity size {}, last rows {}", column->size(), mark_num, index_granularity.getMarksCount(), index_granularity_rows); @@ -450,7 +454,7 @@ void MergeTreeDataPartWriterWide::validateColumnOfFixedSize(const String & name, auto column = type.createColumn(); - type.deserializeBinaryBulk(*column, bin_in, index_granularity_rows, 0.0); + serialization->deserializeBinaryBulk(*column, bin_in, index_granularity_rows, 0.0); if (bin_in.eof()) { @@ -489,7 +493,7 @@ void MergeTreeDataPartWriterWide::validateColumnOfFixedSize(const String & name, { auto column = type.createColumn(); - type.deserializeBinaryBulk(*column, bin_in, 1000000000, 0.0); + serialization->deserializeBinaryBulk(*column, bin_in, 1000000000, 0.0); throw Exception(ErrorCodes::LOGICAL_ERROR, "Still have {} rows in bin stream, last mark #{} index granularity size {}, last rows {}", column->size(), mark_num, index_granularity.getMarksCount(), index_granularity_rows); @@ -499,8 +503,8 @@ void MergeTreeDataPartWriterWide::validateColumnOfFixedSize(const String & name, void MergeTreeDataPartWriterWide::finishDataSerialization(IMergeTreeDataPart::Checksums & checksums, bool sync) { - const auto & global_settings = storage.global_context.getSettingsRef(); - IDataType::SerializeBinaryBulkSettings serialize_settings; + const auto & global_settings = storage.getContext()->getSettingsRef(); + ISerialization::SerializeBinaryBulkSettings serialize_settings; serialize_settings.low_cardinality_max_dictionary_size = global_settings.low_cardinality_max_dictionary_size; serialize_settings.low_cardinality_use_single_dictionary_for_part = global_settings.low_cardinality_use_single_dictionary_for_part != 0; WrittenOffsetColumns offset_columns; @@ -523,7 +527,7 @@ void MergeTreeDataPartWriterWide::finishDataSerialization(IMergeTreeDataPart::Ch if (!serialization_states.empty()) { serialize_settings.getter = createStreamGetter(*it, written_offset_columns ? 
*written_offset_columns : offset_columns); - it->type->serializeBinaryBulkStateSuffix(serialize_settings, serialization_states[it->name]); + serializations[it->name]->serializeBinaryBulkStateSuffix(serialize_settings, serialization_states[it->name]); } if (write_final_mark) @@ -565,16 +569,16 @@ void MergeTreeDataPartWriterWide::finish(IMergeTreeDataPart::Checksums & checksu void MergeTreeDataPartWriterWide::writeFinalMark( const NameAndTypePair & column, WrittenOffsetColumns & offset_columns, - DB::IDataType::SubstreamPath & path) + ISerialization::SubstreamPath & path) { writeSingleMark(column, offset_columns, 0, path); /// Memoize information about offsets - column.type->enumerateStreams([&] (const IDataType::SubstreamPath & substream_path, const IDataType & /* substream_type */) + serializations[column.name]->enumerateStreams([&] (const ISerialization::SubstreamPath & substream_path) { - bool is_offsets = !substream_path.empty() && substream_path.back().type == IDataType::Substream::ArraySizes; + bool is_offsets = !substream_path.empty() && substream_path.back().type == ISerialization::Substream::ArraySizes; if (is_offsets) { - String stream_name = IDataType::getFileNameForStream(column, substream_path); + String stream_name = ISerialization::getFileNameForStream(column, substream_path); offset_columns.insert(stream_name); } }, path); diff --git a/src/Storages/MergeTree/MergeTreeDataPartWriterWide.h b/src/Storages/MergeTree/MergeTreeDataPartWriterWide.h index e6f96f3f146..5eaaa0c1bbe 100644 --- a/src/Storages/MergeTree/MergeTreeDataPartWriterWide.h +++ b/src/Storages/MergeTree/MergeTreeDataPartWriterWide.h @@ -50,15 +50,15 @@ private: const NameAndTypePair & name_and_type, const IColumn & column, WrittenOffsetColumns & offset_columns, - IDataType::SerializeBinaryBulkStatePtr & serialization_state, - IDataType::SerializeBinaryBulkSettings & serialize_settings, + ISerialization::SerializeBinaryBulkStatePtr & serialization_state, + ISerialization::SerializeBinaryBulkSettings & serialize_settings, const Granule & granule); /// Take offsets from column and return as MarkInCompressed file with stream name StreamsWithMarks getCurrentMarksForColumn( const NameAndTypePair & column, WrittenOffsetColumns & offset_columns, - DB::IDataType::SubstreamPath & path); + ISerialization::SubstreamPath & path); /// Write mark to disk using stream and rows count void flushMarkToFile( @@ -70,12 +70,12 @@ private: const NameAndTypePair & column, WrittenOffsetColumns & offset_columns, size_t number_of_rows, - DB::IDataType::SubstreamPath & path); + ISerialization::SubstreamPath & path); void writeFinalMark( const NameAndTypePair & column, WrittenOffsetColumns & offset_columns, - DB::IDataType::SubstreamPath & path); + ISerialization::SubstreamPath & path); void addStreams( const NameAndTypePair & column, @@ -100,15 +100,16 @@ private: /// Also useful to have exact amount of rows in last (non-final) mark. 
void adjustLastMarkIfNeedAndFlushToDisk(size_t new_rows_in_last_mark); - IDataType::OutputStreamGetter createStreamGetter(const NameAndTypePair & column, WrittenOffsetColumns & offset_columns) const; + ISerialization::OutputStreamGetter createStreamGetter(const NameAndTypePair & column, WrittenOffsetColumns & offset_columns) const; - using SerializationState = IDataType::SerializeBinaryBulkStatePtr; + using SerializationState = ISerialization::SerializeBinaryBulkStatePtr; using SerializationStates = std::unordered_map; SerializationStates serialization_states; using ColumnStreams = std::map; ColumnStreams column_streams; + /// Marks that are not yet written to disk (for each column). Waiting until all rows for /// these marks are written to disk. using MarksForColumns = std::unordered_map; diff --git a/src/Storages/MergeTree/MergeTreeDataSelectExecutor.cpp b/src/Storages/MergeTree/MergeTreeDataSelectExecutor.cpp index b1f3f524beb..8245364d87a 100644 --- a/src/Storages/MergeTree/MergeTreeDataSelectExecutor.cpp +++ b/src/Storages/MergeTree/MergeTreeDataSelectExecutor.cpp @@ -1,5 +1,5 @@ #include /// For calculations related to sampling coefficients. -#include +#include #include #include @@ -28,7 +28,7 @@ #include #include #include -#include +#include #include #include #include @@ -70,29 +70,35 @@ MergeTreeDataSelectExecutor::MergeTreeDataSelectExecutor(const MergeTreeData & d { } - -/// Construct a block consisting only of possible values of virtual columns -static Block getBlockWithVirtualPartColumns(const MergeTreeData::DataPartsVector & parts, bool with_uuid) +Block MergeTreeDataSelectExecutor::getBlockWithVirtualPartColumns(const MergeTreeData::DataPartsVector & parts, bool one_part) { - auto part_column = ColumnString::create(); - auto part_uuid_column = ColumnUUID::create(); + Block block(std::initializer_list{ + ColumnWithTypeAndName(ColumnString::create(), std::make_shared(), "_part"), + ColumnWithTypeAndName(ColumnString::create(), std::make_shared(), "_partition_id"), + ColumnWithTypeAndName(ColumnUUID::create(), std::make_shared(), "_part_uuid")}); + + MutableColumns columns = block.mutateColumns(); + + auto & part_column = columns[0]; + auto & partition_id_column = columns[1]; + auto & part_uuid_column = columns[2]; for (const auto & part : parts) { part_column->insert(part->name); - if (with_uuid) - part_uuid_column->insert(part->uuid); + partition_id_column->insert(part->info.partition_id); + part_uuid_column->insert(part->uuid); + if (one_part) + { + part_column = ColumnConst::create(std::move(part_column), 1); + partition_id_column = ColumnConst::create(std::move(partition_id_column), 1); + part_uuid_column = ColumnConst::create(std::move(part_uuid_column), 1); + break; + } } - if (with_uuid) - { - return Block(std::initializer_list{ - ColumnWithTypeAndName(std::move(part_column), std::make_shared(), "_part"), - ColumnWithTypeAndName(std::move(part_uuid_column), std::make_shared(), "_part_uuid"), - }); - } - - return Block{ColumnWithTypeAndName(std::move(part_column), std::make_shared(), "_part")}; + block.setColumns(std::move(columns)); + return block; } @@ -149,7 +155,7 @@ QueryPlanPtr MergeTreeDataSelectExecutor::read( const Names & column_names_to_return, const StorageMetadataPtr & metadata_snapshot, const SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, const UInt64 max_block_size, const unsigned num_streams, const PartitionIdToMaxBlock * max_block_numbers_to_read) const @@ -165,7 +171,7 @@ QueryPlanPtr MergeTreeDataSelectExecutor::readFromParts(
const Names & column_names_to_return, const StorageMetadataPtr & metadata_snapshot, const SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, const UInt64 max_block_size, const unsigned num_streams, const PartitionIdToMaxBlock * max_block_numbers_to_read) const @@ -176,8 +182,8 @@ QueryPlanPtr MergeTreeDataSelectExecutor::readFromParts( Names real_column_names; size_t total_parts = parts.size(); - bool part_column_queried = false; - bool part_uuid_column_queried = false; + if (total_parts == 0) + return std::make_unique(); bool sample_factor_column_queried = false; Float64 used_sample_factor = 1; @@ -186,7 +192,6 @@ QueryPlanPtr MergeTreeDataSelectExecutor::readFromParts( { if (name == "_part") { - part_column_queried = true; virt_column_names.push_back(name); } else if (name == "_part_index") @@ -199,7 +204,6 @@ QueryPlanPtr MergeTreeDataSelectExecutor::readFromParts( } else if (name == "_part_uuid") { - part_uuid_column_queried = true; virt_column_names.push_back(name); } else if (name == "_sample_factor") @@ -219,16 +223,27 @@ QueryPlanPtr MergeTreeDataSelectExecutor::readFromParts( if (real_column_names.empty()) real_column_names.push_back(ExpressionActions::getSmallestColumn(available_real_columns)); - /// If `_part` or `_part_uuid` virtual columns are requested, we try to filter out data by them. - Block virtual_columns_block = getBlockWithVirtualPartColumns(parts, part_uuid_column_queried); - if (part_column_queried || part_uuid_column_queried) - VirtualColumnUtils::filterBlockWithQuery(query_info.query, virtual_columns_block, context); + std::unordered_set part_values; + ASTPtr expression_ast; + auto virtual_columns_block = getBlockWithVirtualPartColumns(parts, true /* one_part */); - auto part_values = VirtualColumnUtils::extractSingleValueFromBlock(virtual_columns_block, "_part"); + // Generate valid expressions for filtering + VirtualColumnUtils::prepareFilterBlockWithQuery(query_info.query, context, virtual_columns_block, expression_ast); + + // If there is still something left, fill the virtual block and do the filtering. + if (expression_ast) + { + virtual_columns_block = getBlockWithVirtualPartColumns(parts, false /* one_part */); + VirtualColumnUtils::filterBlockWithQuery(query_info.query, virtual_columns_block, context, expression_ast); + part_values = VirtualColumnUtils::extractSingleValueFromBlock(virtual_columns_block, "_part"); + if (part_values.empty()) + return std::make_unique(); + } + // At this point, empty `part_values` means all parts. 
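The part_values contract established above (an empty set means the query imposed no usable condition on _part, so nothing is filtered; a non-empty set whitelists part names) can be illustrated with a small self-contained sketch. PartStub and filterByPartValues are hypothetical names used only for this example:

#include <iostream>
#include <string>
#include <unordered_set>
#include <vector>

/// Hypothetical stand-in for a data part; only the name matters here.
struct PartStub { std::string name; };

/// Mirrors the convention above: an empty part_values set means the query
/// had no usable condition on _part, so every part stays selected.
std::vector<PartStub> filterByPartValues(
    std::vector<PartStub> parts, const std::unordered_set<std::string> & part_values)
{
    if (part_values.empty())
        return parts;

    std::vector<PartStub> filtered;
    for (auto & part : parts)
        if (part_values.count(part.name))
            filtered.push_back(std::move(part));
    return filtered;
}

int main()
{
    std::vector<PartStub> parts{{"all_1_1_0"}, {"all_2_2_0"}, {"all_3_3_0"}};

    for (const auto & part : filterByPartValues(parts, {"all_2_2_0"}))
        std::cout << part.name << '\n';   /// prints only all_2_2_0

    for (const auto & part : filterByPartValues(parts, {}))
        std::cout << part.name << '\n';   /// prints all three parts
}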
metadata_snapshot->check(real_column_names, data.getVirtuals(), data.getStorageID()); - const Settings & settings = context.getSettingsRef(); + const Settings & settings = context->getSettingsRef(); const auto & primary_key = metadata_snapshot->getPrimaryKey(); Names primary_key_columns = primary_key.column_names; @@ -249,7 +264,7 @@ QueryPlanPtr MergeTreeDataSelectExecutor::readFromParts( auto minmax_columns_names = data.getMinMaxColumnsNames(partition_key); minmax_columns_types = data.getMinMaxColumnsTypes(partition_key); - minmax_idx_condition.emplace(query_info, context, minmax_columns_names, data.getMinMaxExpr(partition_key)); + minmax_idx_condition.emplace(query_info, context, minmax_columns_names, data.getMinMaxExpr(partition_key, ExpressionActionsSettings::fromContext(context))); partition_pruner.emplace(metadata_snapshot->getPartitionKey(), query_info, context, false /* strict */); if (settings.force_index_by_date && (minmax_idx_condition->alwaysUnknownOrTrue() && partition_pruner->isUseless())) @@ -270,13 +285,42 @@ QueryPlanPtr MergeTreeDataSelectExecutor::readFromParts( } } - const Context & query_context = context.hasQueryContext() ? context.getQueryContext() : context; + auto query_context = context->hasQueryContext() ? context->getQueryContext() : context; - if (query_context.getSettingsRef().allow_experimental_query_deduplication) - selectPartsToReadWithUUIDFilter(parts, part_values, minmax_idx_condition, minmax_columns_types, partition_pruner, max_block_numbers_to_read, query_context); + PartFilterCounters part_filter_counters; + auto index_stats = std::make_unique(); + + if (query_context->getSettingsRef().allow_experimental_query_deduplication) + selectPartsToReadWithUUIDFilter(parts, part_values, minmax_idx_condition, minmax_columns_types, partition_pruner, max_block_numbers_to_read, query_context, part_filter_counters); else - selectPartsToRead(parts, part_values, minmax_idx_condition, minmax_columns_types, partition_pruner, max_block_numbers_to_read); + selectPartsToRead(parts, part_values, minmax_idx_condition, minmax_columns_types, partition_pruner, max_block_numbers_to_read, part_filter_counters); + index_stats->emplace_back(ReadFromMergeTree::IndexStat{ + .type = ReadFromMergeTree::IndexType::None, + .num_parts_after = part_filter_counters.num_initial_selected_parts, + .num_granules_after = part_filter_counters.num_initial_selected_granules}); + + if (minmax_idx_condition) + { + auto description = minmax_idx_condition->getDescription(); + index_stats->emplace_back(ReadFromMergeTree::IndexStat{ + .type = ReadFromMergeTree::IndexType::MinMax, + .condition = std::move(description.condition), + .used_keys = std::move(description.used_keys), + .num_parts_after = part_filter_counters.num_parts_after_minmax, + .num_granules_after = part_filter_counters.num_granules_after_minmax}); + } + + if (partition_pruner) + { + auto description = partition_pruner->getKeyCondition().getDescription(); + index_stats->emplace_back(ReadFromMergeTree::IndexStat{ + .type = ReadFromMergeTree::IndexType::Partition, + .condition = std::move(description.condition), + .used_keys = std::move(description.used_keys), + .num_parts_after = part_filter_counters.num_parts_after_partition_pruner, + .num_granules_after = part_filter_counters.num_granules_after_partition_pruner}); + } /// Sampling. 
Names column_names_to_read = real_column_names; @@ -373,7 +417,7 @@ QueryPlanPtr MergeTreeDataSelectExecutor::readFromParts( { LOG_DEBUG(log, "Will use no data on this replica because parallel replicas processing has been requested" " (the setting 'max_parallel_replicas') but the table does not support sampling and this replica is not the first."); - return {}; + return std::make_unique(); } bool use_sampling = relative_sample_size > 0 || (settings.parallel_replicas_count > 1 && data.supportsSampling()); @@ -546,29 +590,26 @@ QueryPlanPtr MergeTreeDataSelectExecutor::readFromParts( { .min_bytes_to_use_direct_io = settings.min_bytes_to_use_direct_io, .min_bytes_to_use_mmap_io = settings.min_bytes_to_use_mmap_io, + .mmap_cache = context->getMMappedFileCache(), .max_read_buffer_size = settings.max_read_buffer_size, .save_marks_in_cache = true, .checksum_on_read = settings.checksum_on_read, }; - /// PREWHERE - String prewhere_column; - if (select.prewhere()) - prewhere_column = select.prewhere()->getColumnName(); - struct DataSkippingIndexAndCondition { MergeTreeIndexPtr index; MergeTreeIndexConditionPtr condition; - std::atomic total_granules; - std::atomic granules_dropped; + std::atomic total_granules{0}; + std::atomic granules_dropped{0}; + std::atomic total_parts{0}; + std::atomic parts_dropped{0}; DataSkippingIndexAndCondition(MergeTreeIndexPtr index_, MergeTreeIndexConditionPtr condition_) : index(index_) , condition(condition_) - , total_granules(0) - , granules_dropped(0) - {} + { + } }; std::list useful_indices; @@ -615,6 +656,7 @@ QueryPlanPtr MergeTreeDataSelectExecutor::readFromParts( RangesInDataParts parts_with_ranges(parts.size()); size_t sum_marks = 0; std::atomic sum_marks_pk = 0; + std::atomic sum_parts_pk = 0; std::atomic total_marks_pk = 0; size_t sum_ranges = 0; @@ -637,25 +679,29 @@ QueryPlanPtr MergeTreeDataSelectExecutor::readFromParts( RangesInDataPart ranges(part, part_index); - total_marks_pk.fetch_add(part->index_granularity.getMarksCount(), std::memory_order_relaxed); + size_t total_marks_count = part->getMarksCount(); + if (total_marks_count && part->index_granularity.hasFinalMark()) + --total_marks_count; + + total_marks_pk.fetch_add(total_marks_count, std::memory_order_relaxed); if (metadata_snapshot->hasPrimaryKey()) ranges.ranges = markRangesFromPKRange(part, metadata_snapshot, key_condition, settings, log); - else - { - size_t total_marks_count = part->getMarksCount(); - if (total_marks_count) - { - if (part->index_granularity.hasFinalMark()) - --total_marks_count; - ranges.ranges = MarkRanges{MarkRange{0, total_marks_count}}; - } - } + else if (total_marks_count) + ranges.ranges = MarkRanges{MarkRange{0, total_marks_count}}; sum_marks_pk.fetch_add(ranges.getMarksCount(), std::memory_order_relaxed); + if (!ranges.ranges.empty()) + sum_parts_pk.fetch_add(1, std::memory_order_relaxed); + for (auto & index_and_condition : useful_indices) { + if (ranges.ranges.empty()) + break; + + index_and_condition.total_parts.fetch_add(1, std::memory_order_relaxed); + size_t total_granules = 0; size_t granules_dropped = 0; ranges.ranges = filterMarksUsingIndex( @@ -667,6 +713,9 @@ QueryPlanPtr MergeTreeDataSelectExecutor::readFromParts( index_and_condition.total_granules.fetch_add(total_granules, std::memory_order_relaxed); index_and_condition.granules_dropped.fetch_add(granules_dropped, std::memory_order_relaxed); + + if (ranges.ranges.empty()) + index_and_condition.parts_dropped.fetch_add(1, std::memory_order_relaxed); } if (!ranges.ranges.empty()) @@ -699,7 +748,7 @@ 
QueryPlanPtr MergeTreeDataSelectExecutor::readFromParts( for (size_t part_index = 0; part_index < parts.size(); ++part_index) pool.scheduleOrThrowOnError([&, part_index, thread_group = CurrentThread::getGroup()] { - SCOPE_EXIT( + SCOPE_EXIT_SAFE( if (thread_group) CurrentThread::detachQueryIfNotDetached(); ); @@ -732,12 +781,34 @@ QueryPlanPtr MergeTreeDataSelectExecutor::readFromParts( parts_with_ranges.resize(next_part); } + if (metadata_snapshot->hasPrimaryKey()) + { + auto description = key_condition.getDescription(); + + index_stats->emplace_back(ReadFromMergeTree::IndexStat{ + .type = ReadFromMergeTree::IndexType::PrimaryKey, + .condition = std::move(description.condition), + .used_keys = std::move(description.used_keys), + .num_parts_after = sum_parts_pk.load(std::memory_order_relaxed), + .num_granules_after = sum_marks_pk.load(std::memory_order_relaxed)}); + } + for (const auto & index_and_condition : useful_indices) { const auto & index_name = index_and_condition.index->index.name; LOG_DEBUG(log, "Index {} has dropped {}/{} granules.", backQuote(index_name), index_and_condition.granules_dropped, index_and_condition.total_granules); + + std::string description = index_and_condition.index->index.type + + " GRANULARITY " + std::to_string(index_and_condition.index->index.granularity); + + index_stats->emplace_back(ReadFromMergeTree::IndexStat{ + .type = ReadFromMergeTree::IndexType::Skip, + .name = index_name, + .description = std::move(description), + .num_parts_after = index_and_condition.total_parts - index_and_condition.parts_dropped, + .num_granules_after = index_and_condition.total_granules - index_and_condition.granules_dropped}); } LOG_DEBUG(log, "Selected {}/{} parts by partition key, {} parts by primary key, {}/{} marks by primary key, {} marks to read from {} ranges", @@ -771,7 +842,7 @@ QueryPlanPtr MergeTreeDataSelectExecutor::readFromParts( if (data_settings->min_marks_to_honor_max_concurrent_queries > 0 && sum_marks >= data_settings->min_marks_to_honor_max_concurrent_queries) { - query_id = context.getCurrentQueryId(); + query_id = context->getCurrentQueryId(); if (!query_id.empty()) data.insertQueryIdOrThrow(query_id, data_settings->max_concurrent_queries); } @@ -804,6 +875,7 @@ QueryPlanPtr MergeTreeDataSelectExecutor::readFromParts( plan = spreadMarkRangesAmongStreamsFinal( std::move(parts_with_ranges), + std::move(index_stats), num_streams, column_names_to_read, metadata_snapshot, @@ -827,6 +899,7 @@ QueryPlanPtr MergeTreeDataSelectExecutor::readFromParts( plan = spreadMarkRangesAmongStreamsWithOrder( std::move(parts_with_ranges), + std::move(index_stats), num_streams, column_names_to_read, metadata_snapshot, @@ -844,6 +917,7 @@ QueryPlanPtr MergeTreeDataSelectExecutor::readFromParts( { plan = spreadMarkRangesAmongStreams( std::move(parts_with_ranges), + std::move(index_stats), num_streams, column_names_to_read, metadata_snapshot, @@ -955,25 +1029,9 @@ size_t minMarksForConcurrentRead( } -static QueryPlanPtr createPlanFromPipe(Pipe pipe, const String & query_id, const MergeTreeData & data, const std::string & description = "") -{ - auto plan = std::make_unique(); - - std::string storage_name = "MergeTree"; - if (!description.empty()) - storage_name += ' ' + description; - - // Attach QueryIdHolder if needed - if (!query_id.empty()) - pipe.addQueryIdHolder(std::make_shared(query_id, data)); - - auto step = std::make_unique(std::move(pipe), storage_name); - plan->addStep(std::move(step)); - return plan; -} - QueryPlanPtr 
MergeTreeDataSelectExecutor::spreadMarkRangesAmongStreams( RangesInDataParts && parts, + ReadFromMergeTree::IndexStatPtr index_stats, size_t num_streams, const Names & column_names, const StorageMetadataPtr & metadata_snapshot, @@ -1025,75 +1083,32 @@ QueryPlanPtr MergeTreeDataSelectExecutor::spreadMarkRangesAmongStreams( if (0 == sum_marks) return {}; + ReadFromMergeTree::Settings step_settings + { + .max_block_size = max_block_size, + .preferred_block_size_bytes = settings.preferred_block_size_bytes, + .preferred_max_column_in_block_size_bytes = settings.preferred_max_column_in_block_size_bytes, + .min_marks_for_concurrent_read = min_marks_for_concurrent_read, + .use_uncompressed_cache = use_uncompressed_cache, + .reader_settings = reader_settings, + .backoff_settings = MergeTreeReadPool::BackoffSettings(settings), + }; + if (num_streams > 1) { - /// Parallel query execution. - Pipes res; - /// Reduce the number of num_streams if the data is small. if (sum_marks < num_streams * min_marks_for_concurrent_read && parts.size() < num_streams) num_streams = std::max((sum_marks + min_marks_for_concurrent_read - 1) / min_marks_for_concurrent_read, parts.size()); - - MergeTreeReadPoolPtr pool = std::make_shared( - num_streams, - sum_marks, - min_marks_for_concurrent_read, - std::move(parts), - data, - metadata_snapshot, - query_info.prewhere_info, - true, - column_names, - MergeTreeReadPool::BackoffSettings(settings), - settings.preferred_block_size_bytes, - false); - - /// Let's estimate total number of rows for progress bar. - LOG_TRACE(log, "Reading approx. {} rows with {} streams", total_rows, num_streams); - - for (size_t i = 0; i < num_streams; ++i) - { - auto source = std::make_shared( - i, pool, min_marks_for_concurrent_read, max_block_size, - settings.preferred_block_size_bytes, settings.preferred_max_column_in_block_size_bytes, - data, metadata_snapshot, use_uncompressed_cache, - query_info.prewhere_info, reader_settings, virt_columns); - - if (i == 0) - { - /// Set the approximate number of rows for the first source only - source->addTotalRowsApprox(total_rows); - } - - res.emplace_back(std::move(source)); - } - - return createPlanFromPipe(Pipe::unitePipes(std::move(res)), query_id, data); } - else - { - /// Sequential query execution. - Pipes res; - for (const auto & part : parts) - { - auto source = std::make_shared( - data, metadata_snapshot, part.data_part, max_block_size, settings.preferred_block_size_bytes, - settings.preferred_max_column_in_block_size_bytes, column_names, part.ranges, use_uncompressed_cache, - query_info.prewhere_info, true, reader_settings, virt_columns, part.part_index_in_query); + auto plan = std::make_unique(); + auto step = std::make_unique( + data, metadata_snapshot, query_id, + column_names, std::move(parts), std::move(index_stats), query_info.prewhere_info, virt_columns, + step_settings, num_streams, ReadFromMergeTree::ReadType::Default); - res.emplace_back(std::move(source)); - } - - auto pipe = Pipe::unitePipes(std::move(res)); - - /// Use ConcatProcessor to concat sources together. - /// It is needed to read in parts order (and so in PK order) if single thread is used. 
- if (pipe.numOutputPorts() > 1) - pipe.addTransform(std::make_shared(pipe.getHeader(), pipe.numOutputPorts())); - - return createPlanFromPipe(std::move(pipe), query_id, data); - } + plan->addStep(std::move(step)); + return plan; } static ActionsDAGPtr createProjection(const Block & header) @@ -1106,6 +1121,7 @@ static ActionsDAGPtr createProjection(const Block & header) QueryPlanPtr MergeTreeDataSelectExecutor::spreadMarkRangesAmongStreamsWithOrder( RangesInDataParts && parts, + ReadFromMergeTree::IndexStatPtr index_stats, size_t num_streams, const Names & column_names, const StorageMetadataPtr & metadata_snapshot, @@ -1213,8 +1229,7 @@ QueryPlanPtr MergeTreeDataSelectExecutor::spreadMarkRangesAmongStreamsWithOrder( for (size_t i = 0; i < num_streams && !parts.empty(); ++i) { size_t need_marks = min_marks_per_stream; - - Pipes pipes; + RangesInDataParts new_parts; /// Loop over parts. /// We will iteratively take part or some subrange of a part from the back @@ -1269,53 +1284,31 @@ QueryPlanPtr MergeTreeDataSelectExecutor::spreadMarkRangesAmongStreamsWithOrder( parts.emplace_back(part); } ranges_to_get_from_part = split_ranges(ranges_to_get_from_part, input_order_info->direction); - - if (input_order_info->direction == 1) - { - pipes.emplace_back(std::make_shared( - data, - metadata_snapshot, - part.data_part, - max_block_size, - settings.preferred_block_size_bytes, - settings.preferred_max_column_in_block_size_bytes, - column_names, - ranges_to_get_from_part, - use_uncompressed_cache, - query_info.prewhere_info, - true, - reader_settings, - virt_columns, - part.part_index_in_query)); - } - else - { - pipes.emplace_back(std::make_shared( - data, - metadata_snapshot, - part.data_part, - max_block_size, - settings.preferred_block_size_bytes, - settings.preferred_max_column_in_block_size_bytes, - column_names, - ranges_to_get_from_part, - use_uncompressed_cache, - query_info.prewhere_info, - true, - reader_settings, - virt_columns, - part.part_index_in_query)); - } + new_parts.emplace_back(part.data_part, part.part_index_in_query, std::move(ranges_to_get_from_part)); } - auto plan = createPlanFromPipe(Pipe::unitePipes(std::move(pipes)), query_id, data, "with order"); - - if (input_order_info->direction != 1) + ReadFromMergeTree::Settings step_settings { - auto reverse_step = std::make_unique(plan->getCurrentDataStream()); - plan->addStep(std::move(reverse_step)); - } + .max_block_size = max_block_size, + .preferred_block_size_bytes = settings.preferred_block_size_bytes, + .preferred_max_column_in_block_size_bytes = settings.preferred_max_column_in_block_size_bytes, + .min_marks_for_concurrent_read = min_marks_for_concurrent_read, + .use_uncompressed_cache = use_uncompressed_cache, + .reader_settings = reader_settings, + .backoff_settings = MergeTreeReadPool::BackoffSettings(settings), + }; + auto read_type = input_order_info->direction == 1 + ? 
ReadFromMergeTree::ReadType::InOrder + : ReadFromMergeTree::ReadType::InReverseOrder; + + auto plan = std::make_unique(); + auto step = std::make_unique( + data, metadata_snapshot, query_id, + column_names, std::move(new_parts), std::move(index_stats), query_info.prewhere_info, virt_columns, + step_settings, num_streams, read_type); + + plan->addStep(std::move(step)); plans.emplace_back(std::move(plan)); } @@ -1355,8 +1348,7 @@ QueryPlanPtr MergeTreeDataSelectExecutor::spreadMarkRangesAmongStreamsWithOrder( for (const auto & plan : plans) input_streams.emplace_back(plan->getCurrentDataStream()); - const auto & common_header = plans.front()->getCurrentDataStream().header; - auto union_step = std::make_unique(std::move(input_streams), common_header); + auto union_step = std::make_unique(std::move(input_streams)); auto plan = std::make_unique(); plan->unitePlans(std::move(union_step), std::move(plans)); @@ -1367,6 +1359,7 @@ QueryPlanPtr MergeTreeDataSelectExecutor::spreadMarkRangesAmongStreamsFinal( RangesInDataParts && parts, + ReadFromMergeTree::IndexStatPtr index_stats, size_t num_streams, const Names & column_names, const StorageMetadataPtr & metadata_snapshot, @@ -1408,7 +1401,7 @@ QueryPlanPtr MergeTreeDataSelectExecutor::spreadMarkRangesAmongStreamsFinal( num_streams = settings.max_final_threads; /// If setting do_not_merge_across_partitions_select_final is true then we won't merge parts from different partitions. - /// We have all parts in parts vector, where parts with same partition are nerby. + /// We have all parts in parts vector, where parts with same partition are nearby. /// So we will store iterators pointing to the beginning of each partition range (and parts.end()), /// then we will create a pipe for each partition that will run selecting processor and merging processor /// for the parts with this partition. In the end we will unite all the pipes.
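A self-contained sketch of the iterator bookkeeping the comment above describes, assuming (as stated) that parts of the same partition are adjacent in the vector; PartStub is a hypothetical stand-in for the real data part type:

#include <iostream>
#include <string>
#include <vector>

/// Hypothetical stand-in for a data part with its partition id.
struct PartStub { std::string partition_id; std::string name; };

int main()
{
    /// Assumed sorted so that parts of one partition are adjacent.
    std::vector<PartStub> parts{
        {"2021-01", "a_1"}, {"2021-01", "a_2"}, {"2021-02", "b_1"}, {"2021-03", "c_1"}};

    /// Store iterators pointing to the beginning of each partition range, plus parts.end().
    using It = std::vector<PartStub>::const_iterator;
    std::vector<It> partition_ranges;
    partition_ranges.push_back(parts.begin());
    for (auto it = parts.begin(); it != parts.end(); ++it)
        if (it->partition_id != partition_ranges.back()->partition_id)
            partition_ranges.push_back(it);
    partition_ranges.push_back(parts.end());

    /// Each adjacent pair [ranges[i], ranges[i + 1]) is one partition; in the real
    /// code every such range gets its own select/merge pipeline before the union.
    for (size_t i = 0; i + 1 < partition_ranges.size(); ++i)
    {
        std::cout << partition_ranges[i]->partition_id << ':';
        for (auto it = partition_ranges[i]; it != partition_ranges[i + 1]; ++it)
            std::cout << ' ' << it->name;
        std::cout << '\n';
    }
}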
@@ -1447,7 +1440,7 @@ QueryPlanPtr MergeTreeDataSelectExecutor::spreadMarkRangesAmongStreamsFinal( QueryPlanPtr plan; { - Pipes pipes; + RangesInDataParts new_parts; /// If do_not_merge_across_partitions_select_final is true and there is only one part in partition /// with level > 0 then we won't postprocess this part and if num_streams > 1 we @@ -1466,36 +1459,35 @@ QueryPlanPtr MergeTreeDataSelectExecutor::spreadMarkRangesAmongStreamsFinal( { for (auto part_it = parts_to_merge_ranges[range_index]; part_it != parts_to_merge_ranges[range_index + 1]; ++part_it) { - auto source_processor = std::make_shared( - data, - metadata_snapshot, - part_it->data_part, - max_block_size, - settings.preferred_block_size_bytes, - settings.preferred_max_column_in_block_size_bytes, - column_names, - part_it->ranges, - use_uncompressed_cache, - query_info.prewhere_info, - true, - reader_settings, - virt_columns, - part_it->part_index_in_query); - - pipes.emplace_back(std::move(source_processor)); + new_parts.emplace_back(part_it->data_part, part_it->part_index_in_query, part_it->ranges); } } - if (pipes.empty()) + if (new_parts.empty()) continue; - auto pipe = Pipe::unitePipes(std::move(pipes)); + ReadFromMergeTree::Settings step_settings + { + .max_block_size = max_block_size, + .preferred_block_size_bytes = settings.preferred_block_size_bytes, + .preferred_max_column_in_block_size_bytes = settings.preferred_max_column_in_block_size_bytes, + .min_marks_for_concurrent_read = 0, /// this setting is not used for reading in order + .use_uncompressed_cache = use_uncompressed_cache, + .reader_settings = reader_settings, + .backoff_settings = MergeTreeReadPool::BackoffSettings(settings), + }; + + plan = std::make_unique(); + auto step = std::make_unique( + data, metadata_snapshot, query_id, + column_names, std::move(new_parts), std::move(index_stats), query_info.prewhere_info, virt_columns, + step_settings, num_streams, ReadFromMergeTree::ReadType::InOrder); + + plan->addStep(std::move(step)); /// Drop temporary columns, added by 'sorting_key_expr' if (!out_projection) - out_projection = createProjection(pipe.getHeader()); - - plan = createPlanFromPipe(std::move(pipe), query_id, data, "with final"); + out_projection = createProjection(plan->getCurrentDataStream().header); } auto expression_step = std::make_unique( @@ -1542,7 +1534,7 @@ QueryPlanPtr MergeTreeDataSelectExecutor::spreadMarkRangesAmongStreamsFinal( if (!lonely_parts.empty()) { - Pipes pipes; + RangesInDataParts new_parts; size_t num_streams_for_lonely_parts = num_streams * lonely_parts.size(); @@ -1557,41 +1549,28 @@ QueryPlanPtr MergeTreeDataSelectExecutor::spreadMarkRangesAmongStreamsFinal( if (sum_marks_in_lonely_parts < num_streams_for_lonely_parts * min_marks_for_concurrent_read && lonely_parts.size() < num_streams_for_lonely_parts) num_streams_for_lonely_parts = std::max((sum_marks_in_lonely_parts + min_marks_for_concurrent_read - 1) / min_marks_for_concurrent_read, lonely_parts.size()); - - MergeTreeReadPoolPtr pool = std::make_shared( - num_streams_for_lonely_parts, - sum_marks_in_lonely_parts, - min_marks_for_concurrent_read, - std::move(lonely_parts), - data, - metadata_snapshot, - query_info.prewhere_info, - true, - column_names, - MergeTreeReadPool::BackoffSettings(settings), - settings.preferred_block_size_bytes, - false); - - LOG_TRACE(log, "Reading approx. 
{} rows with {} streams", total_rows_in_lonely_parts, num_streams_for_lonely_parts); - - for (size_t i = 0; i < num_streams_for_lonely_parts; ++i) + ReadFromMergeTree::Settings step_settings { - auto source = std::make_shared( - i, pool, min_marks_for_concurrent_read, max_block_size, - settings.preferred_block_size_bytes, settings.preferred_max_column_in_block_size_bytes, - data, metadata_snapshot, use_uncompressed_cache, - query_info.prewhere_info, reader_settings, virt_columns); + .max_block_size = max_block_size, + .preferred_block_size_bytes = settings.preferred_block_size_bytes, + .preferred_max_column_in_block_size_bytes = settings.preferred_max_column_in_block_size_bytes, + .min_marks_for_concurrent_read = min_marks_for_concurrent_read, + .use_uncompressed_cache = use_uncompressed_cache, + .reader_settings = reader_settings, + .backoff_settings = MergeTreeReadPool::BackoffSettings(settings), + }; - pipes.emplace_back(std::move(source)); - } + auto plan = std::make_unique(); + auto step = std::make_unique( + data, metadata_snapshot, query_id, + column_names, std::move(lonely_parts), std::move(index_stats), query_info.prewhere_info, virt_columns, + step_settings, num_streams_for_lonely_parts, ReadFromMergeTree::ReadType::Default); - auto pipe = Pipe::unitePipes(std::move(pipes)); + plan->addStep(std::move(step)); /// Drop temporary columns, added by 'sorting_key_expr' if (!out_projection) - out_projection = createProjection(pipe.getHeader()); - - QueryPlanPtr plan = createPlanFromPipe(std::move(pipe), query_id, data, "with final"); + out_projection = createProjection(plan->getCurrentDataStream().header); auto expression_step = std::make_unique( plan->getCurrentDataStream(), @@ -1892,29 +1871,20 @@ void MergeTreeDataSelectExecutor::selectPartsToRead( const std::optional & minmax_idx_condition, const DataTypes & minmax_columns_types, std::optional & partition_pruner, - const PartitionIdToMaxBlock * max_block_numbers_to_read) + const PartitionIdToMaxBlock * max_block_numbers_to_read, + PartFilterCounters & counters) { auto prev_parts = parts; parts.clear(); for (const auto & part : prev_parts) { - if (part_values.find(part->name) == part_values.end()) + if (!part_values.empty() && part_values.find(part->name) == part_values.end()) continue; if (part->isEmpty()) continue; - if (minmax_idx_condition && !minmax_idx_condition->checkInHyperrectangle( - part->minmax_idx.hyperrectangle, minmax_columns_types).can_be_true) - continue; - - if (partition_pruner) - { - if (partition_pruner->canBePruned(part)) - continue; - } - if (max_block_numbers_to_read) { auto blocks_iterator = max_block_numbers_to_read->find(part->info.partition_id); @@ -1922,6 +1892,29 @@ void MergeTreeDataSelectExecutor::selectPartsToRead( continue; } + size_t num_granules = part->getMarksCount(); + if (num_granules && part->index_granularity.hasFinalMark()) + --num_granules; + + counters.num_initial_selected_parts += 1; + counters.num_initial_selected_granules += num_granules; + + if (minmax_idx_condition && !minmax_idx_condition->checkInHyperrectangle( + part->minmax_idx.hyperrectangle, minmax_columns_types).can_be_true) + continue; + + counters.num_parts_after_minmax += 1; + counters.num_granules_after_minmax += num_granules; + + if (partition_pruner) + { + if (partition_pruner->canBePruned(part)) + continue; + } + + counters.num_parts_after_partition_pruner += 1; + counters.num_granules_after_partition_pruner += num_granules; + parts.push_back(part); } } @@ -1933,16 +1926,14 @@ void 
MergeTreeDataSelectExecutor::selectPartsToReadWithUUIDFilter( const DataTypes & minmax_columns_types, std::optional & partition_pruner, const PartitionIdToMaxBlock * max_block_numbers_to_read, - const Context & query_context) const + ContextPtr query_context, + PartFilterCounters & counters) const { - /// const_cast to add UUIDs to context. Bad practice. - Context & non_const_context = const_cast(query_context); - /// process_parts prepare parts that have to be read for the query, /// returns false if duplicated parts' UUID have been met auto select_parts = [&] (MergeTreeData::DataPartsVector & selected_parts) -> bool { - auto ignored_part_uuids = non_const_context.getIgnoredPartUUIDs(); + auto ignored_part_uuids = query_context->getIgnoredPartUUIDs(); std::unordered_set temp_part_uuids; auto prev_parts = selected_parts; @@ -1950,23 +1941,12 @@ void MergeTreeDataSelectExecutor::selectPartsToReadWithUUIDFilter( for (const auto & part : prev_parts) { - if (part_values.find(part->name) == part_values.end()) + if (!part_values.empty() && part_values.find(part->name) == part_values.end()) continue; if (part->isEmpty()) continue; - if (minmax_idx_condition - && !minmax_idx_condition->checkInHyperrectangle(part->minmax_idx.hyperrectangle, minmax_columns_types) - .can_be_true) - continue; - - if (partition_pruner) - { - if (partition_pruner->canBePruned(part)) - continue; - } - if (max_block_numbers_to_read) { auto blocks_iterator = max_block_numbers_to_read->find(part->info.partition_id); @@ -1974,13 +1954,37 @@ void MergeTreeDataSelectExecutor::selectPartsToReadWithUUIDFilter( continue; } + /// Skip the part if its uuid is meant to be excluded + if (part->uuid != UUIDHelpers::Nil && ignored_part_uuids->has(part->uuid)) + continue; + + size_t num_granules = part->getMarksCount(); + if (num_granules && part->index_granularity.hasFinalMark()) + --num_granules; + + counters.num_initial_selected_parts += 1; + counters.num_initial_selected_granules += num_granules; + + if (minmax_idx_condition + && !minmax_idx_condition->checkInHyperrectangle(part->minmax_idx.hyperrectangle, minmax_columns_types) + .can_be_true) + continue; + + counters.num_parts_after_minmax += 1; + counters.num_granules_after_minmax += num_granules; + + if (partition_pruner) + { + if (partition_pruner->canBePruned(part)) + continue; + } + + counters.num_parts_after_partition_pruner += 1; + counters.num_granules_after_partition_pruner += num_granules; + /// populate UUIDs and exclude ignored parts if enabled if (part->uuid != UUIDHelpers::Nil) { - /// Skip the part if its uuid is meant to be excluded - if (ignored_part_uuids->has(part->uuid)) - continue; - auto result = temp_part_uuids.insert(part->uuid); if (!result.second) throw Exception("Found a part with the same UUID on the same replica.", ErrorCodes::LOGICAL_ERROR); @@ -1991,12 +1995,12 @@ void MergeTreeDataSelectExecutor::selectPartsToReadWithUUIDFilter( if (!temp_part_uuids.empty()) { - auto duplicates = non_const_context.getPartUUIDs()->add(std::vector{temp_part_uuids.begin(), temp_part_uuids.end()}); + auto duplicates = query_context->getPartUUIDs()->add(std::vector{temp_part_uuids.begin(), temp_part_uuids.end()}); if (!duplicates.empty()) { /// on a local replica with prefer_localhost_replica=1 if any duplicates appeared during the first pass, /// adding them to the exclusion, so they will be skipped on second pass - non_const_context.getIgnoredPartUUIDs()->add(duplicates); + query_context->getIgnoredPartUUIDs()->add(duplicates); return false; } } @@ -2012,6 +2016,8 @@ 
void MergeTreeDataSelectExecutor::selectPartsToReadWithUUIDFilter( { LOG_DEBUG(log, "Found duplicate uuids locally, will retry part selection without them"); + counters = PartFilterCounters(); + /// Second attempt didn't help, throw an exception if (!select_parts(parts)) throw Exception("Found duplicate UUIDs while processing query.", ErrorCodes::DUPLICATED_PART_UUIDS); diff --git a/src/Storages/MergeTree/MergeTreeDataSelectExecutor.h b/src/Storages/MergeTree/MergeTreeDataSelectExecutor.h index 634719639ad..d7193fbfbfa 100644 --- a/src/Storages/MergeTree/MergeTreeDataSelectExecutor.h +++ b/src/Storages/MergeTree/MergeTreeDataSelectExecutor.h @@ -5,6 +5,7 @@ #include #include #include +#include namespace DB @@ -29,7 +30,7 @@ public: const Names & column_names, const StorageMetadataPtr & metadata_snapshot, const SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, UInt64 max_block_size, unsigned num_streams, const PartitionIdToMaxBlock * max_block_numbers_to_read = nullptr) const; @@ -39,11 +40,15 @@ public: const Names & column_names, const StorageMetadataPtr & metadata_snapshot, const SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, UInt64 max_block_size, unsigned num_streams, const PartitionIdToMaxBlock * max_block_numbers_to_read = nullptr) const; + /// Construct a block consisting only of possible virtual columns for part pruning. + /// If one_part is true, fill in at most one part. + static Block getBlockWithVirtualPartColumns(const MergeTreeData::DataPartsVector & parts, bool one_part); + private: const MergeTreeData & data; @@ -51,6 +56,7 @@ private: QueryPlanPtr spreadMarkRangesAmongStreams( RangesInDataParts && parts, + ReadFromMergeTree::IndexStatPtr index_stats, size_t num_streams, const Names & column_names, const StorageMetadataPtr & metadata_snapshot, @@ -65,6 +71,7 @@ private: /// out_projection - save projection only with columns requested to read QueryPlanPtr spreadMarkRangesAmongStreamsWithOrder( RangesInDataParts && parts, + ReadFromMergeTree::IndexStatPtr index_stats, size_t num_streams, const Names & column_names, const StorageMetadataPtr & metadata_snapshot, @@ -80,6 +87,7 @@ private: QueryPlanPtr spreadMarkRangesAmongStreamsFinal( RangesInDataParts && parts, + ReadFromMergeTree::IndexStatPtr index_stats, size_t num_streams, const Names & column_names, const StorageMetadataPtr & metadata_snapshot, @@ -117,6 +125,16 @@ private: size_t & granules_dropped, Poco::Logger * log); + struct PartFilterCounters + { + size_t num_initial_selected_parts = 0; + size_t num_initial_selected_granules = 0; + size_t num_parts_after_minmax = 0; + size_t num_granules_after_minmax = 0; + size_t num_parts_after_partition_pruner = 0; + size_t num_granules_after_partition_pruner = 0; + }; + /// Select the parts in which there can be data that satisfy `minmax_idx_condition` and that match the condition on `_part`, /// as well as `max_block_number_to_read`. static void selectPartsToRead( @@ -125,7 +143,8 @@ private: const std::optional & minmax_idx_condition, const DataTypes & minmax_columns_types, std::optional & partition_pruner, - const PartitionIdToMaxBlock * max_block_numbers_to_read); + const PartitionIdToMaxBlock * max_block_numbers_to_read, + PartFilterCounters & counters); /// Same as previous, but also adds part uuids (if any) to the query context and skips parts whose uuids are marked as excluded.
void selectPartsToReadWithUUIDFilter( @@ -135,7 +154,8 @@ private: const DataTypes & minmax_columns_types, std::optional & partition_pruner, const PartitionIdToMaxBlock * max_block_numbers_to_read, - const Context & query_context) const; + ContextPtr query_context, + PartFilterCounters & counters) const; }; } diff --git a/src/Storages/MergeTree/MergeTreeDataWriter.cpp b/src/Storages/MergeTree/MergeTreeDataWriter.cpp index f478cdba40a..79d95eb03ee 100644 --- a/src/Storages/MergeTree/MergeTreeDataWriter.cpp +++ b/src/Storages/MergeTree/MergeTreeDataWriter.cpp @@ -327,6 +327,11 @@ MergeTreeData::MutableDataPartPtr MergeTreeDataWriter::writeTempPart(BlockWithPa /// Size of part would not be greater than block.bytes() + epsilon size_t expected_size = block.bytes(); + /// If optimize_on_insert is true, the block may become empty after merging. + /// There is no need to create an empty part. + if (expected_size == 0) + return nullptr; + DB::IMergeTreeDataPart::TTLInfos move_ttl_infos; const auto & move_ttl_entries = metadata_snapshot->getMoveTTLs(); for (const auto & ttl_entry : move_ttl_entries) @@ -347,6 +352,7 @@ MergeTreeData::MutableDataPartPtr MergeTreeDataWriter::writeTempPart(BlockWithPa new_data_part->uuid = UUIDHelpers::generateV4(); new_data_part->setColumns(columns); + new_data_part->rows_count = block.rows(); new_data_part->partition = std::move(partition); new_data_part->minmax_idx = std::move(minmax_idx); new_data_part->is_temp = true; @@ -390,7 +396,7 @@ MergeTreeData::MutableDataPartPtr MergeTreeDataWriter::writeTempPart(BlockWithPa /// This effectively chooses minimal compression method: /// either default lz4 or compression method with zero thresholds on absolute and relative part size. - auto compression_codec = data.global_context.chooseCompressionCodec(0, 0); + auto compression_codec = data.getContext()->chooseCompressionCodec(0, 0); const auto & index_factory = MergeTreeIndexFactory::instance(); MergedBlockOutputStream out(new_data_part, metadata_snapshot, columns, index_factory.getMany(metadata_snapshot->getSecondaryIndices()), compression_codec); diff --git a/src/Storages/MergeTree/MergeTreeDeduplicationLog.cpp b/src/Storages/MergeTree/MergeTreeDeduplicationLog.cpp new file mode 100644 index 00000000000..33960e2e1ff --- /dev/null +++ b/src/Storages/MergeTree/MergeTreeDeduplicationLog.cpp @@ -0,0 +1,311 @@ +#include +#include +#include +#include +#include +#include +#include +#include + +namespace DB +{ + +namespace +{ + +/// Deduplication operation: a part was dropped or added +enum class MergeTreeDeduplicationOp : uint8_t +{ + ADD = 1, + DROP = 2, +}; + +/// Record for deduplication on disk +struct MergeTreeDeduplicationLogRecord +{ + MergeTreeDeduplicationOp operation; + std::string part_name; + std::string block_id; +}; + +void writeRecord(const MergeTreeDeduplicationLogRecord & record, WriteBuffer & out) +{ + writeIntText(static_cast(record.operation), out); + writeChar('\t', out); + writeString(record.part_name, out); + writeChar('\t', out); + writeString(record.block_id, out); + writeChar('\n', out); + out.next(); +} + +void readRecord(MergeTreeDeduplicationLogRecord & record, ReadBuffer & in) +{ + uint8_t op; + readIntText(op, in); + record.operation = static_cast(op); + assertChar('\t', in); + readString(record.part_name, in); + assertChar('\t', in); + readString(record.block_id, in); + assertChar('\n', in); +} + + +std::string getLogPath(const std::string & prefix, size_t number) +{ + std::filesystem::path path(prefix); + path /= 
std::filesystem::path(std::string{"deduplication_log_"} + std::to_string(number) + ".txt"); + return path; +} + +size_t getLogNumber(const std::string & path_str) +{ + std::filesystem::path path(path_str); + std::string filename = path.stem(); + Strings filename_parts; + boost::split(filename_parts, filename, boost::is_any_of("_")); + + return parse(filename_parts[2]); +} + +} + +MergeTreeDeduplicationLog::MergeTreeDeduplicationLog( + const std::string & logs_dir_, + size_t deduplication_window_, + const MergeTreeDataFormatVersion & format_version_) + : logs_dir(logs_dir_) + , deduplication_window(deduplication_window_) + , rotate_interval(deduplication_window_ * 2) /// actually it doesn't matter + , format_version(format_version_) + , deduplication_map(deduplication_window) +{ + namespace fs = std::filesystem; + if (deduplication_window != 0 && !fs::exists(logs_dir)) + fs::create_directories(logs_dir); +} + +void MergeTreeDeduplicationLog::load() +{ + namespace fs = std::filesystem; + if (!fs::exists(logs_dir)) + return; + + for (const auto & p : fs::directory_iterator(logs_dir)) + { + const auto & path = p.path(); + auto log_number = getLogNumber(path); + existing_logs[log_number] = {path, 0}; + } + + /// We should know which logs exist even in case + /// of deduplication_window = 0 + if (!existing_logs.empty()) + current_log_number = existing_logs.rbegin()->first; + + if (deduplication_window != 0) + { + /// Order is important, we load history from the beginning to the end + for (auto & [log_number, desc] : existing_logs) + { + try + { + desc.entries_count = loadSingleLog(desc.path); + } + catch (...) + { + tryLogCurrentException(__PRETTY_FUNCTION__, "Error while loading MergeTree deduplication log on path " + desc.path); + } + } + + /// Start new log, drop previous + rotateAndDropIfNeeded(); + + /// Can happen in case we have an unfinished log + if (!current_writer) + current_writer = std::make_unique(existing_logs.rbegin()->second.path, DBMS_DEFAULT_BUFFER_SIZE, O_APPEND | O_CREAT | O_WRONLY); + } +} + +size_t MergeTreeDeduplicationLog::loadSingleLog(const std::string & path) +{ + ReadBufferFromFile read_buf(path); + + size_t total_entries = 0; + while (!read_buf.eof()) + { + MergeTreeDeduplicationLogRecord record; + readRecord(record, read_buf); + if (record.operation == MergeTreeDeduplicationOp::DROP) + deduplication_map.erase(record.block_id); + else + deduplication_map.insert(record.block_id, MergeTreePartInfo::fromPartName(record.part_name, format_version)); + total_entries++; + } + return total_entries; +} + +void MergeTreeDeduplicationLog::rotate() +{ + /// We don't deduplicate anything so we don't need any writers + if (deduplication_window == 0) + return; + + current_log_number++; + auto new_path = getLogPath(logs_dir, current_log_number); + MergeTreeDeduplicationLogNameDescription log_description{new_path, 0}; + existing_logs.emplace(current_log_number, log_description); + + if (current_writer) + current_writer->sync(); + + current_writer = std::make_unique(log_description.path, DBMS_DEFAULT_BUFFER_SIZE, O_APPEND | O_CREAT | O_WRONLY); +} + +void MergeTreeDeduplicationLog::dropOutdatedLogs() +{ + size_t current_sum = 0; + size_t remove_from_value = 0; + /// Go from the end to the beginning + for (auto itr = existing_logs.rbegin(); itr != existing_logs.rend(); ++itr) + { + if (current_sum > deduplication_window) + { + /// We have more logs than required, all older files (including current) can be dropped + remove_from_value = itr->first; + break; + } + + auto & description = 
+void MergeTreeDeduplicationLog::dropOutdatedLogs()
+{
+    size_t current_sum = 0;
+    size_t remove_from_value = 0;
+    /// Go from the end to the beginning
+    for (auto itr = existing_logs.rbegin(); itr != existing_logs.rend(); ++itr)
+    {
+        if (current_sum > deduplication_window)
+        {
+            /// We have more logs than required, all older files (including the current one) can be dropped
+            remove_from_value = itr->first;
+            break;
+        }
+
+        auto & description = itr->second;
+        current_sum += description.entries_count;
+    }
+
+    /// If we found some logs to drop
+    if (remove_from_value != 0)
+    {
+        /// Go from the beginning to the end and drop all outdated logs
+        for (auto itr = existing_logs.begin(); itr != existing_logs.end();)
+        {
+            size_t number = itr->first;
+            std::filesystem::remove(itr->second.path);
+            itr = existing_logs.erase(itr);
+            if (remove_from_value == number)
+                break;
+        }
+    }
+
+}
+
+void MergeTreeDeduplicationLog::rotateAndDropIfNeeded()
+{
+    /// If we don't have logs at all or already have enough records in the current one
+    if (existing_logs.empty() || existing_logs[current_log_number].entries_count >= rotate_interval)
+    {
+        rotate();
+        dropOutdatedLogs();
+    }
+}
+
+std::pair<MergeTreePartInfo, bool> MergeTreeDeduplicationLog::addPart(const std::string & block_id, const MergeTreePartInfo & part_info)
+{
+    std::lock_guard lock(state_mutex);
+
+    /// We support the zero case because the user may want to disable deduplication with
+    /// an ALTER MODIFY SETTING query. It's much simpler to handle the zero case
+    /// here than to destroy the whole object, check for null pointers from different
+    /// threads and so on.
+    if (deduplication_window == 0)
+        return std::make_pair(part_info, true);
+
+    /// If we already have this block let's deduplicate it
+    if (deduplication_map.contains(block_id))
+    {
+        auto info = deduplication_map.get(block_id);
+        return std::make_pair(info, false);
+    }
+
+    assert(current_writer != nullptr);
+
+    /// Create new record
+    MergeTreeDeduplicationLogRecord record;
+    record.operation = MergeTreeDeduplicationOp::ADD;
+    record.part_name = part_info.getPartName();
+    record.block_id = block_id;
+    /// Write it to disk
+    writeRecord(record, *current_writer);
+    /// We have one more record in the current log
+    existing_logs[current_log_number].entries_count++;
+    /// Add to the deduplication map
+    deduplication_map.insert(record.block_id, part_info);
+    /// Rotate and drop old logs if needed
+    rotateAndDropIfNeeded();
+
+    return std::make_pair(part_info, true);
+}
+
+void MergeTreeDeduplicationLog::dropPart(const MergeTreePartInfo & drop_part_info)
+{
+    std::lock_guard lock(state_mutex);
+
+    /// We support the zero case because the user may want to disable deduplication with
+    /// an ALTER MODIFY SETTING query. It's much simpler to handle the zero case
+    /// here than to destroy the whole object, check for null pointers from different
+    /// threads and so on.
+    if (deduplication_window == 0)
+        return;
+
+    assert(current_writer != nullptr);
+
+    for (auto itr = deduplication_map.begin(); itr != deduplication_map.end(); /* no increment here, we are erasing from the map */)
+    {
+        const auto & part_info = itr->value;
+        /// Part is covered by the dropped part, let's remove it from
+        /// deduplication history
+        if (drop_part_info.contains(part_info))
+        {
+            /// Create drop record
+            MergeTreeDeduplicationLogRecord record;
+            record.operation = MergeTreeDeduplicationOp::DROP;
+            record.part_name = part_info.getPartName();
+            record.block_id = itr->key;
+            /// Write it to disk
+            writeRecord(record, *current_writer);
+            /// We have one more record on disk
+            existing_logs[current_log_number].entries_count++;
+
+            /// Increment itr before erase, otherwise it will be invalidated
+            ++itr;
+            /// Remove block_id from the in-memory table
+            deduplication_map.erase(record.block_id);
+
+            /// Rotate and drop old logs if needed
+            rotateAndDropIfNeeded();
+        }
+        else
+        {
+            ++itr;
+        }
+    }
+}
+
+void MergeTreeDeduplicationLog::setDeduplicationWindowSize(size_t deduplication_window_)
+{
+    std::lock_guard lock(state_mutex);
+
+    deduplication_window = deduplication_window_;
+    rotate_interval = deduplication_window * 2;
+
+    /// If the setting was set for the first time with an ALTER MODIFY SETTING query
+    if (deduplication_window != 0 && !std::filesystem::exists(logs_dir))
+        std::filesystem::create_directories(logs_dir);
+
+    deduplication_map.setMaxSize(deduplication_window);
+    rotateAndDropIfNeeded();
+
+    /// Can happen in case we have an unfinished log
+    if (!current_writer)
+        current_writer = std::make_unique<WriteBufferFromFile>(existing_logs.rbegin()->second.path, DBMS_DEFAULT_BUFFER_SIZE, O_APPEND | O_CREAT | O_WRONLY);
+}
+
+}
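A sketch of how a caller might use the log (not part of the patch; the part names and block id are made up, and MERGE_TREE_DATA_MIN_FORMAT_VERSION_WITH_CUSTOM_PARTITIONING is the existing format-version constant): the non-replicated insert path asks the log whether a block hash was seen among the last deduplication_window inserts, and DROP PART[ITION] handling purges covered entries.

void deduplicationFlowExample(MergeTreeDeduplicationLog & log)
{
    const auto version = MERGE_TREE_DATA_MIN_FORMAT_VERSION_WITH_CUSTOM_PARTITIONING;
    auto part_info = MergeTreePartInfo::fromPartName("all_10_10_0", version);

    /// First time this block hash is seen: an ADD record is persisted.
    auto [existing, is_new] = log.addPart("all_1234_5678", part_info);
    assert(is_new);

    /// Same hash again: deduplicated; `existing` names the part that already owns it.
    std::tie(existing, is_new) = log.addPart("all_1234_5678", part_info);
    assert(!is_new && existing.getPartName() == "all_10_10_0");

    /// Dropping a part that covers it writes DROP records and forgets the
    /// covered block hashes, so the same data can be inserted again.
    log.dropPart(MergeTreePartInfo::fromPartName("all_1_20_1", version));
}
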
diff --git a/src/Storages/MergeTree/MergeTreeDeduplicationLog.h b/src/Storages/MergeTree/MergeTreeDeduplicationLog.h
new file mode 100644
index 00000000000..281a76050a2
--- /dev/null
+++ b/src/Storages/MergeTree/MergeTreeDeduplicationLog.h
@@ -0,0 +1,192 @@
+#pragma once
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+namespace DB
+{
+
+/// Description of a deduplication log
+struct MergeTreeDeduplicationLogNameDescription
+{
+    /// Path to log
+    std::string path;
+
+    /// How many entries we have in the log
+    size_t entries_count;
+};
+
+/// Simple string-key HashTable with fixed size based on STL containers.
+/// Preserves order using a linked list and removes elements
+/// on overflow in FIFO order.
+template <typename V>
+class LimitedOrderedHashMap
+{
+private:
+    struct ListNode
+    {
+        std::string key;
+        V value;
+    };
+    using Queue = std::list<ListNode>;
+    using IndexMap = std::unordered_map<std::string, typename Queue::iterator>;
+
+    Queue queue;
+    IndexMap map;
+    size_t max_size;
+public:
+    using iterator = typename Queue::iterator;
+    using const_iterator = typename Queue::const_iterator;
+    using reverse_iterator = typename Queue::reverse_iterator;
+    using const_reverse_iterator = typename Queue::const_reverse_iterator;
+
+    explicit LimitedOrderedHashMap(size_t max_size_)
+        : max_size(max_size_)
+    {}
+
+    bool contains(const std::string & key) const
+    {
+        return map.find(key) != map.end();
+    }
+
+    V get(const std::string & key) const
+    {
+        return map.at(key)->value;
+    }
+
+    size_t size() const
+    {
+        return queue.size();
+    }
+
+    void setMaxSize(size_t max_size_)
+    {
+        max_size = max_size_;
+        while (size() > max_size)
+        {
+            map.erase(queue.front().key);
+            queue.pop_front();
+        }
+    }
+
+    bool erase(const std::string & key)
+    {
+        auto it = map.find(key);
+        if (it == map.end())
+            return false;
+
+        auto queue_itr = it->second;
+        map.erase(it);
+        queue.erase(queue_itr);
+
+        return true;
+    }
+
+    bool insert(const std::string & key, const V & value)
+    {
+        auto it = map.find(key);
+        if (it != map.end())
+            return false;
+
+        if (size() == max_size)
+        {
+            map.erase(queue.front().key);
+            queue.pop_front();
+        }
+
+        ListNode elem{key, value};
+        auto itr = queue.insert(queue.end(), elem);
+        map.emplace(itr->key, itr);
+        return true;
+    }
+
+    void clear()
+    {
+        map.clear();
+        queue.clear();
+    }
+
+    iterator begin() { return queue.begin(); }
+    const_iterator begin() const { return queue.cbegin(); }
+    iterator end() { return queue.end(); }
+    const_iterator end() const { return queue.cend(); }
+
+    reverse_iterator rbegin() { return queue.rbegin(); }
+    const_reverse_iterator rbegin() const { return queue.crbegin(); }
+    reverse_iterator rend() { return queue.rend(); }
+    const_reverse_iterator rend() const { return queue.crend(); }
+};
+
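A minimal sketch of the container's FIFO eviction behaviour, assuming the class above is in scope (not part of the patch):

void limitedOrderedHashMapExample()
{
    LimitedOrderedHashMap<int> cache(/* max_size_ = */ 2);

    cache.insert("a", 1);
    cache.insert("b", 2);
    cache.insert("c", 3);   /// at capacity: the oldest key "a" is evicted

    assert(!cache.contains("a"));   /// needs <cassert>
    assert(cache.get("b") == 2 && cache.get("c") == 3);

    cache.setMaxSize(1);    /// shrinking evicts in the same FIFO order, dropping "b"
    assert(!cache.contains("b") && cache.contains("c"));
}
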
+/// Fixed-size log for deduplication in non-replicated MergeTree.
+/// Stores records on disk for zero-level parts in human-readable format:
+/// operation part_name partition_id_check_sum
+/// 1 88_18_18_0 88_10619499460461868496_9553701830997749308
+/// 2 77_14_14_0 77_15147918179036854170_6725063583757244937
+/// 2 77_15_15_0 77_14977227047908934259_8047656067364802772
+/// 1 77_20_20_0 77_15147918179036854170_6725063583757244937
+/// Also stores them in memory in a hash table with limited size.
+class MergeTreeDeduplicationLog
+{
+public:
+    MergeTreeDeduplicationLog(
+        const std::string & logs_dir_,
+        size_t deduplication_window_,
+        const MergeTreeDataFormatVersion & format_version_);
+
+    /// Add part into in-memory hash table and to disk.
+    /// Returns true and the part info if insertion was successful.
+    /// Otherwise, in case of a duplicate, returns false and the previous part name with the same hash (useful for logging).
+    std::pair<MergeTreePartInfo, bool> addPart(const std::string & block_id, const MergeTreePartInfo & part);
+
+    /// Remove all covered parts from the in-memory table and add DROP records to the disk
+    void dropPart(const MergeTreePartInfo & drop_part_info);
+
+    /// Load history from disk. Ignores broken logs.
+    void load();
+
+    void setDeduplicationWindowSize(size_t deduplication_window_);
+private:
+    const std::string logs_dir;
+    /// Size of deduplication window
+    size_t deduplication_window;
+
+    /// How often we create new logs. Not very important,
+    /// the default value equals deduplication_window * 2
+    size_t rotate_interval;
+    const MergeTreeDataFormatVersion format_version;
+
+    /// Current log number. Always growing number.
+    size_t current_log_number = 0;
+
+    /// All existing logs in order of their numbers
+    std::map<size_t, MergeTreeDeduplicationLogNameDescription> existing_logs;
+
+    /// In-memory hash table
+    LimitedOrderedHashMap<MergeTreePartInfo> deduplication_map;
+
+    /// Writer to the current log file
+    std::unique_ptr<WriteBufferFromFile> current_writer;
+
+    /// Overall mutex because we can have a lot of concurrent inserts
+    std::mutex state_mutex;
+
+    /// Start new log
+    void rotate();
+
+    /// Remove all old logs whose records are no longer needed for deduplication_window
+    void dropOutdatedLogs();
+
+    /// Execute both previous methods if needed
+    void rotateAndDropIfNeeded();
+
+    /// Load a single log from disk. In case of corruption throws exceptions
+    size_t loadSingleLog(const std::string & path);
+};
+
+}
diff --git a/src/Storages/MergeTree/MergeTreeIOSettings.h b/src/Storages/MergeTree/MergeTreeIOSettings.h
index f2469494792..dd241cfd591 100644
--- a/src/Storages/MergeTree/MergeTreeIOSettings.h
+++ b/src/Storages/MergeTree/MergeTreeIOSettings.h
@@ -3,13 +3,19 @@
 #include
 #include
+
 namespace DB
 {
+class MMappedFileCache;
+using MMappedFileCachePtr = std::shared_ptr<MMappedFileCache>;
+
+
 struct MergeTreeReaderSettings
 {
     size_t min_bytes_to_use_direct_io = 0;
     size_t min_bytes_to_use_mmap_io = 0;
+    MMappedFileCachePtr mmap_cache;
     size_t max_read_buffer_size = DBMS_DEFAULT_BUFFER_SIZE;
     /// If save_marks_in_cache is false, then, if marks are not in cache,
     /// we will load them but won't save in the cache, to avoid evicting other data.
diff --git a/src/Storages/MergeTree/MergeTreeIndexAggregatorBloomFilter.h b/src/Storages/MergeTree/MergeTreeIndexAggregatorBloomFilter.h
index ebbe9865313..9877db8ee30 100644
--- a/src/Storages/MergeTree/MergeTreeIndexAggregatorBloomFilter.h
+++ b/src/Storages/MergeTree/MergeTreeIndexAggregatorBloomFilter.h
@@ -6,7 +6,7 @@
 namespace DB
 {
-class MergeTreeIndexAggregatorBloomFilter : public IMergeTreeIndexAggregator
+class MergeTreeIndexAggregatorBloomFilter final : public IMergeTreeIndexAggregator
 {
 public:
     MergeTreeIndexAggregatorBloomFilter(size_t bits_per_row_, size_t hash_functions_, const Names & columns_name_);
diff --git a/src/Storages/MergeTree/MergeTreeIndexBloomFilter.cpp b/src/Storages/MergeTree/MergeTreeIndexBloomFilter.cpp
index a98ba16978d..c37d710ec8f 100644
--- a/src/Storages/MergeTree/MergeTreeIndexBloomFilter.cpp
+++ b/src/Storages/MergeTree/MergeTreeIndexBloomFilter.cpp
@@ -67,7 +67,7 @@ MergeTreeIndexAggregatorPtr MergeTreeIndexBloomFilter::createIndexAggregator() c
     return std::make_shared<MergeTreeIndexAggregatorBloomFilter>(bits_per_row, hash_functions, index.column_names);
 }
-MergeTreeIndexConditionPtr MergeTreeIndexBloomFilter::createIndexCondition(const SelectQueryInfo & query_info, const Context & context) const
+MergeTreeIndexConditionPtr MergeTreeIndexBloomFilter::createIndexCondition(const SelectQueryInfo & query_info, ContextPtr context) const
 {
     return std::make_shared<MergeTreeIndexConditionBloomFilter>(query_info, context, index.sample_block, hash_functions);
 }
diff --git a/src/Storages/MergeTree/MergeTreeIndexBloomFilter.h b/src/Storages/MergeTree/MergeTreeIndexBloomFilter.h
index b0d9a295bcd..9112f23ee64 100644
--- a/src/Storages/MergeTree/MergeTreeIndexBloomFilter.h
+++ b/src/Storages/MergeTree/MergeTreeIndexBloomFilter.h
@@ -8,7 +8,7 @@
 namespace DB
 {
-class MergeTreeIndexBloomFilter : public IMergeTreeIndex
+class MergeTreeIndexBloomFilter final : public IMergeTreeIndex
 {
 public:
     MergeTreeIndexBloomFilter(
@@ -20,7 +20,7 @@
public: MergeTreeIndexAggregatorPtr createIndexAggregator() const override; - MergeTreeIndexConditionPtr createIndexCondition(const SelectQueryInfo & query_info, const Context & context) const override; + MergeTreeIndexConditionPtr createIndexCondition(const SelectQueryInfo & query_info, ContextPtr context) const override; bool mayBenefitFromIndexForIn(const ASTPtr & node) const override; diff --git a/src/Storages/MergeTree/MergeTreeIndexConditionBloomFilter.cpp b/src/Storages/MergeTree/MergeTreeIndexConditionBloomFilter.cpp index a9915f01645..031129a35f4 100644 --- a/src/Storages/MergeTree/MergeTreeIndexConditionBloomFilter.cpp +++ b/src/Storages/MergeTree/MergeTreeIndexConditionBloomFilter.cpp @@ -87,11 +87,11 @@ bool maybeTrueOnBloomFilter(const IColumn * hash_column, const BloomFilterPtr & } MergeTreeIndexConditionBloomFilter::MergeTreeIndexConditionBloomFilter( - const SelectQueryInfo & info_, const Context & context_, const Block & header_, size_t hash_functions_) - : header(header_), context(context_), query_info(info_), hash_functions(hash_functions_) + const SelectQueryInfo & info_, ContextPtr context_, const Block & header_, size_t hash_functions_) + : WithContext(context_), header(header_), query_info(info_), hash_functions(hash_functions_) { - auto atom_from_ast = [this](auto & node, auto &, auto & constants, auto & out) { return traverseAtomAST(node, constants, out); }; - rpn = std::move(RPNBuilder(info_, context, atom_from_ast).extractRPN()); + auto atom_from_ast = [this](auto & node, auto, auto & constants, auto & out) { return traverseAtomAST(node, constants, out); }; + rpn = std::move(RPNBuilder(info_, getContext(), atom_from_ast).extractRPN()); } bool MergeTreeIndexConditionBloomFilter::alwaysUnknownOrTrue() const diff --git a/src/Storages/MergeTree/MergeTreeIndexConditionBloomFilter.h b/src/Storages/MergeTree/MergeTreeIndexConditionBloomFilter.h index 34fb45c86a5..61e796fb6f7 100644 --- a/src/Storages/MergeTree/MergeTreeIndexConditionBloomFilter.h +++ b/src/Storages/MergeTree/MergeTreeIndexConditionBloomFilter.h @@ -13,7 +13,7 @@ namespace ErrorCodes extern const int LOGICAL_ERROR; } -class MergeTreeIndexConditionBloomFilter : public IMergeTreeIndexCondition +class MergeTreeIndexConditionBloomFilter final : public IMergeTreeIndexCondition, WithContext { public: struct RPNElement @@ -42,7 +42,7 @@ public: std::vector> predicate; }; - MergeTreeIndexConditionBloomFilter(const SelectQueryInfo & info_, const Context & context_, const Block & header_, size_t hash_functions_); + MergeTreeIndexConditionBloomFilter(const SelectQueryInfo & info_, ContextPtr context_, const Block & header_, size_t hash_functions_); bool alwaysUnknownOrTrue() const override; @@ -56,7 +56,6 @@ public: private: const Block & header; - const Context & context; const SelectQueryInfo & query_info; const size_t hash_functions; std::vector rpn; diff --git a/src/Storages/MergeTree/MergeTreeIndexFullText.cpp b/src/Storages/MergeTree/MergeTreeIndexFullText.cpp index 3e8b9cc704b..10136cd1069 100644 --- a/src/Storages/MergeTree/MergeTreeIndexFullText.cpp +++ b/src/Storages/MergeTree/MergeTreeIndexFullText.cpp @@ -43,15 +43,29 @@ namespace ErrorCodes /// Adds all tokens from string to bloom filter. 
static void stringToBloomFilter( + const String & string, TokenExtractorPtr token_extractor, BloomFilter & bloom_filter) +{ + const char * data = string.data(); + size_t size = string.size(); + + size_t cur = 0; + size_t token_start = 0; + size_t token_len = 0; + while (cur < size && token_extractor->nextInField(data, size, &cur, &token_start, &token_len)) + bloom_filter.add(data + token_start, token_len); +} + +static void columnToBloomFilter( const char * data, size_t size, TokenExtractorPtr token_extractor, BloomFilter & bloom_filter) { size_t cur = 0; size_t token_start = 0; size_t token_len = 0; - while (cur < size && token_extractor->next(data, size, &cur, &token_start, &token_len)) + while (cur < size && token_extractor->nextInColumn(data, size, &cur, &token_start, &token_len)) bloom_filter.add(data + token_start, token_len); } + /// Adds all tokens from like pattern string to bloom filter. (Because like pattern can contain `\%` and `\_`.) static void likeStringToBloomFilter( const String & data, TokenExtractorPtr token_extractor, BloomFilter & bloom_filter) @@ -61,15 +75,14 @@ static void likeStringToBloomFilter( while (cur < data.size() && token_extractor->nextLike(data, &cur, token)) bloom_filter.add(token.c_str(), token.size()); } + /// Unified condition for equals, startsWith and endsWith bool MergeTreeConditionFullText::createFunctionEqualsCondition( RPNElement & out, const Field & value, const BloomFilterParameters & params, TokenExtractorPtr token_extractor) { out.function = RPNElement::FUNCTION_EQUALS; out.bloom_filter = std::make_unique(params); - - const auto & str = value.get(); - stringToBloomFilter(str.c_str(), str.size(), token_extractor, *out.bloom_filter); + stringToBloomFilter(value.get(), token_extractor, *out.bloom_filter); return true; } @@ -143,7 +156,7 @@ void MergeTreeIndexAggregatorFullText::update(const Block & block, size_t * pos, for (size_t i = 0; i < rows_read; ++i) { auto ref = column->getDataAt(*pos + i); - stringToBloomFilter(ref.data, ref.size, token_extractor, granule->bloom_filters[col]); + columnToBloomFilter(ref.data, ref.size, token_extractor, granule->bloom_filters[col]); } } granule->has_elems = true; @@ -153,7 +166,7 @@ void MergeTreeIndexAggregatorFullText::update(const Block & block, size_t * pos, MergeTreeConditionFullText::MergeTreeConditionFullText( const SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, const Block & index_sample_block, const BloomFilterParameters & params_, TokenExtractorPtr token_extactor_) @@ -166,7 +179,7 @@ MergeTreeConditionFullText::MergeTreeConditionFullText( rpn = std::move( RPNBuilder( query_info, context, - [this] (const ASTPtr & node, const Context & /* context */, Block & block_with_constants, RPNElement & out) -> bool + [this] (const ASTPtr & node, ContextPtr /* context */, Block & block_with_constants, RPNElement & out) -> bool { return this->atomFromAST(node, block_with_constants, out); }).extractRPN()); @@ -367,9 +380,7 @@ bool MergeTreeConditionFullText::atomFromAST( out.key_column = key_column_num; out.function = RPNElement::FUNCTION_NOT_EQUALS; out.bloom_filter = std::make_unique(params); - - const auto & str = const_value.get(); - stringToBloomFilter(str.c_str(), str.size(), token_extractor, *out.bloom_filter); + stringToBloomFilter(const_value.get(), token_extractor, *out.bloom_filter); return true; } else if (func_name == "equals") @@ -382,9 +393,7 @@ bool MergeTreeConditionFullText::atomFromAST( out.key_column = key_column_num; out.function = 
RPNElement::FUNCTION_EQUALS; out.bloom_filter = std::make_unique(params); - - const auto & str = const_value.get(); - likeStringToBloomFilter(str, token_extractor, *out.bloom_filter); + likeStringToBloomFilter(const_value.get(), token_extractor, *out.bloom_filter); return true; } else if (func_name == "notLike") @@ -392,9 +401,7 @@ bool MergeTreeConditionFullText::atomFromAST( out.key_column = key_column_num; out.function = RPNElement::FUNCTION_NOT_EQUALS; out.bloom_filter = std::make_unique(params); - - const auto & str = const_value.get(); - likeStringToBloomFilter(str, token_extractor, *out.bloom_filter); + likeStringToBloomFilter(const_value.get(), token_extractor, *out.bloom_filter); return true; } else if (func_name == "hasToken") @@ -402,9 +409,7 @@ bool MergeTreeConditionFullText::atomFromAST( out.key_column = key_column_num; out.function = RPNElement::FUNCTION_EQUALS; out.bloom_filter = std::make_unique(params); - - const auto & str = const_value.get(); - stringToBloomFilter(str.c_str(), str.size(), token_extractor, *out.bloom_filter); + stringToBloomFilter(const_value.get(), token_extractor, *out.bloom_filter); return true; } else if (func_name == "startsWith") @@ -431,8 +436,7 @@ bool MergeTreeConditionFullText::atomFromAST( return false; bloom_filters.back().emplace_back(params); - const auto & str = element.get(); - stringToBloomFilter(str.c_str(), str.size(), token_extractor, bloom_filters.back().back()); + stringToBloomFilter(element.get(), token_extractor, bloom_filters.back().back()); } out.set_bloom_filters = std::move(bloom_filters); return true; @@ -541,7 +545,7 @@ bool MergeTreeConditionFullText::tryPrepareSetBloomFilter( { bloom_filters.back().emplace_back(params); auto ref = column->getDataAt(row); - stringToBloomFilter(ref.data, ref.size, token_extractor, bloom_filters.back().back()); + columnToBloomFilter(ref.data, ref.size, token_extractor, bloom_filters.back().back()); } } @@ -562,7 +566,7 @@ MergeTreeIndexAggregatorPtr MergeTreeIndexFullText::createIndexAggregator() cons } MergeTreeIndexConditionPtr MergeTreeIndexFullText::createIndexCondition( - const SelectQueryInfo & query, const Context & context) const + const SelectQueryInfo & query, ContextPtr context) const { return std::make_shared(query, context, index.sample_block, params, token_extractor.get()); }; @@ -573,7 +577,7 @@ bool MergeTreeIndexFullText::mayBenefitFromIndexForIn(const ASTPtr & node) const } -bool NgramTokenExtractor::next(const char * data, size_t len, size_t * pos, size_t * token_start, size_t * token_len) const +bool NgramTokenExtractor::nextInField(const char * data, size_t len, size_t * pos, size_t * token_start, size_t * token_len) const { *token_start = *pos; *token_len = 0; @@ -635,7 +639,33 @@ bool NgramTokenExtractor::nextLike(const String & str, size_t * pos, String & to return false; } -bool SplitTokenExtractor::next(const char * data, size_t len, size_t * pos, size_t * token_start, size_t * token_len) const + +bool SplitTokenExtractor::nextInField(const char * data, size_t len, size_t * pos, size_t * token_start, size_t * token_len) const +{ + *token_start = *pos; + *token_len = 0; + + while (*pos < len) + { + if (isASCII(data[*pos]) && !isAlphaNumericASCII(data[*pos])) + { + /// Finish current token if any + if (*token_len > 0) + return true; + *token_start = ++*pos; + } + else + { + /// Note that UTF-8 sequence is completely consisted of non-ASCII bytes. 
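+            /// (each byte of a multi-byte code point has the high bit set, so it never
+            /// passes the isASCII delimiter check above and always stays inside the
+            /// token: e.g. "hello, мир42" yields the tokens "hello" and "мир42")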
+ ++*pos; + ++*token_len; + } + } + + return *token_len > 0; +} + +bool SplitTokenExtractor::nextInColumn(const char * data, size_t len, size_t * pos, size_t * token_start, size_t * token_len) const { *token_start = *pos; *token_len = 0; diff --git a/src/Storages/MergeTree/MergeTreeIndexFullText.h b/src/Storages/MergeTree/MergeTreeIndexFullText.h index c3c1ff8de8b..1385621f97f 100644 --- a/src/Storages/MergeTree/MergeTreeIndexFullText.h +++ b/src/Storages/MergeTree/MergeTreeIndexFullText.h @@ -14,10 +14,18 @@ namespace DB struct ITokenExtractor { virtual ~ITokenExtractor() = default; + /// Fast inplace implementation for regular use. /// Gets string (data ptr and len) and start position for extracting next token (state of extractor). /// Returns false if parsing is finished, otherwise returns true. - virtual bool next(const char * data, size_t len, size_t * pos, size_t * token_start, size_t * token_len) const = 0; + virtual bool nextInField(const char * data, size_t len, size_t * pos, size_t * token_start, size_t * token_len) const = 0; + + /// Optimized version that can assume at least 15 padding bytes after data + len (as our Columns provide). + virtual bool nextInColumn(const char * data, size_t len, size_t * pos, size_t * token_start, size_t * token_len) const + { + return nextInField(data, len, pos, token_start, token_len); + } + /// Special implementation for creating bloom filter for LIKE function. /// It skips unescaped `%` and `_` and supports escaping symbols, but it is less lightweight. virtual bool nextLike(const String & str, size_t * pos, String & out) const = 0; @@ -27,7 +35,7 @@ struct ITokenExtractor using TokenExtractorPtr = const ITokenExtractor *; -struct MergeTreeIndexGranuleFullText : public IMergeTreeIndexGranule +struct MergeTreeIndexGranuleFullText final : public IMergeTreeIndexGranule { explicit MergeTreeIndexGranuleFullText( const String & index_name_, @@ -50,7 +58,7 @@ struct MergeTreeIndexGranuleFullText : public IMergeTreeIndexGranule using MergeTreeIndexGranuleFullTextPtr = std::shared_ptr; -struct MergeTreeIndexAggregatorFullText : IMergeTreeIndexAggregator +struct MergeTreeIndexAggregatorFullText final : IMergeTreeIndexAggregator { explicit MergeTreeIndexAggregatorFullText( const Names & index_columns_, @@ -74,12 +82,12 @@ struct MergeTreeIndexAggregatorFullText : IMergeTreeIndexAggregator }; -class MergeTreeConditionFullText : public IMergeTreeIndexCondition +class MergeTreeConditionFullText final : public IMergeTreeIndexCondition { public: MergeTreeConditionFullText( const SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, const Block & index_sample_block, const BloomFilterParameters & params_, TokenExtractorPtr token_extactor_); @@ -156,13 +164,13 @@ private: /// Parser extracting all ngrams from string. -struct NgramTokenExtractor : public ITokenExtractor +struct NgramTokenExtractor final : public ITokenExtractor { NgramTokenExtractor(size_t n_) : n(n_) {} static String getName() { return "ngrambf_v1"; } - bool next(const char * data, size_t len, size_t * pos, size_t * token_start, size_t * token_len) const override; + bool nextInField(const char * data, size_t len, size_t * pos, size_t * token_start, size_t * token_len) const override; bool nextLike(const String & str, size_t * pos, String & token) const override; bool supportLike() const override { return true; } @@ -171,18 +179,19 @@ struct NgramTokenExtractor : public ITokenExtractor }; /// Parser extracting tokens (sequences of numbers and ascii letters). 
-struct SplitTokenExtractor : public ITokenExtractor +struct SplitTokenExtractor final : public ITokenExtractor { static String getName() { return "tokenbf_v1"; } - bool next(const char * data, size_t len, size_t * pos, size_t * token_start, size_t * token_len) const override; + bool nextInField(const char * data, size_t len, size_t * pos, size_t * token_start, size_t * token_len) const override; + bool nextInColumn(const char * data, size_t len, size_t * pos, size_t * token_start, size_t * token_len) const override; bool nextLike(const String & str, size_t * pos, String & token) const override; bool supportLike() const override { return true; } }; -class MergeTreeIndexFullText : public IMergeTreeIndex +class MergeTreeIndexFullText final : public IMergeTreeIndex { public: MergeTreeIndexFullText( @@ -199,7 +208,7 @@ public: MergeTreeIndexAggregatorPtr createIndexAggregator() const override; MergeTreeIndexConditionPtr createIndexCondition( - const SelectQueryInfo & query, const Context & context) const override; + const SelectQueryInfo & query, ContextPtr context) const override; bool mayBenefitFromIndexForIn(const ASTPtr & node) const override; diff --git a/src/Storages/MergeTree/MergeTreeIndexGranuleBloomFilter.h b/src/Storages/MergeTree/MergeTreeIndexGranuleBloomFilter.h index 54e2c105db8..cdd4b92f80c 100644 --- a/src/Storages/MergeTree/MergeTreeIndexGranuleBloomFilter.h +++ b/src/Storages/MergeTree/MergeTreeIndexGranuleBloomFilter.h @@ -6,7 +6,7 @@ namespace DB { -class MergeTreeIndexGranuleBloomFilter : public IMergeTreeIndexGranule +class MergeTreeIndexGranuleBloomFilter final : public IMergeTreeIndexGranule { public: MergeTreeIndexGranuleBloomFilter(size_t bits_per_row_, size_t hash_functions_, size_t index_columns_); diff --git a/src/Storages/MergeTree/MergeTreeIndexMinMax.cpp b/src/Storages/MergeTree/MergeTreeIndexMinMax.cpp index de89a27ab46..099d561cf80 100644 --- a/src/Storages/MergeTree/MergeTreeIndexMinMax.cpp +++ b/src/Storages/MergeTree/MergeTreeIndexMinMax.cpp @@ -37,10 +37,12 @@ void MergeTreeIndexGranuleMinMax::serializeBinary(WriteBuffer & ostr) const for (size_t i = 0; i < index_sample_block.columns(); ++i) { const DataTypePtr & type = index_sample_block.getByPosition(i).type; + auto serialization = type->getDefaultSerialization(); + if (!type->isNullable()) { - type->serializeBinary(hyperrectangle[i].left, ostr); - type->serializeBinary(hyperrectangle[i].right, ostr); + serialization->serializeBinary(hyperrectangle[i].left, ostr); + serialization->serializeBinary(hyperrectangle[i].right, ostr); } else { @@ -48,8 +50,8 @@ void MergeTreeIndexGranuleMinMax::serializeBinary(WriteBuffer & ostr) const writeBinary(is_null, ostr); if (!is_null) { - type->serializeBinary(hyperrectangle[i].left, ostr); - type->serializeBinary(hyperrectangle[i].right, ostr); + serialization->serializeBinary(hyperrectangle[i].left, ostr); + serialization->serializeBinary(hyperrectangle[i].right, ostr); } } } @@ -60,13 +62,17 @@ void MergeTreeIndexGranuleMinMax::deserializeBinary(ReadBuffer & istr) hyperrectangle.clear(); Field min_val; Field max_val; + + for (size_t i = 0; i < index_sample_block.columns(); ++i) { const DataTypePtr & type = index_sample_block.getByPosition(i).type; + auto serialization = type->getDefaultSerialization(); + if (!type->isNullable()) { - type->deserializeBinary(min_val, istr); - type->deserializeBinary(max_val, istr); + serialization->deserializeBinary(min_val, istr); + serialization->deserializeBinary(max_val, istr); } else { @@ -74,8 +80,8 @@ void 
MergeTreeIndexGranuleMinMax::deserializeBinary(ReadBuffer & istr) readBinary(is_null, istr); if (!is_null) { - type->deserializeBinary(min_val, istr); - type->deserializeBinary(max_val, istr); + serialization->deserializeBinary(min_val, istr); + serialization->deserializeBinary(max_val, istr); } else { @@ -132,7 +138,7 @@ void MergeTreeIndexAggregatorMinMax::update(const Block & block, size_t * pos, s MergeTreeIndexConditionMinMax::MergeTreeIndexConditionMinMax( const IndexDescription & index, const SelectQueryInfo & query, - const Context & context) + ContextPtr context) : index_data_types(index.data_types) , condition(query, context, index.column_names, index.expression) { @@ -169,7 +175,7 @@ MergeTreeIndexAggregatorPtr MergeTreeIndexMinMax::createIndexAggregator() const } MergeTreeIndexConditionPtr MergeTreeIndexMinMax::createIndexCondition( - const SelectQueryInfo & query, const Context & context) const + const SelectQueryInfo & query, ContextPtr context) const { return std::make_shared(index, query, context); }; diff --git a/src/Storages/MergeTree/MergeTreeIndexMinMax.h b/src/Storages/MergeTree/MergeTreeIndexMinMax.h index 3956b1d9f9a..97b9b874484 100644 --- a/src/Storages/MergeTree/MergeTreeIndexMinMax.h +++ b/src/Storages/MergeTree/MergeTreeIndexMinMax.h @@ -10,7 +10,7 @@ namespace DB { -struct MergeTreeIndexGranuleMinMax : public IMergeTreeIndexGranule +struct MergeTreeIndexGranuleMinMax final : public IMergeTreeIndexGranule { MergeTreeIndexGranuleMinMax(const String & index_name_, const Block & index_sample_block_); MergeTreeIndexGranuleMinMax( @@ -31,7 +31,7 @@ struct MergeTreeIndexGranuleMinMax : public IMergeTreeIndexGranule }; -struct MergeTreeIndexAggregatorMinMax : IMergeTreeIndexAggregator +struct MergeTreeIndexAggregatorMinMax final : IMergeTreeIndexAggregator { MergeTreeIndexAggregatorMinMax(const String & index_name_, const Block & index_sample_block); ~MergeTreeIndexAggregatorMinMax() override = default; @@ -46,13 +46,13 @@ struct MergeTreeIndexAggregatorMinMax : IMergeTreeIndexAggregator }; -class MergeTreeIndexConditionMinMax : public IMergeTreeIndexCondition +class MergeTreeIndexConditionMinMax final : public IMergeTreeIndexCondition { public: MergeTreeIndexConditionMinMax( const IndexDescription & index, const SelectQueryInfo & query, - const Context & context); + ContextPtr context); bool alwaysUnknownOrTrue() const override; @@ -78,7 +78,7 @@ public: MergeTreeIndexAggregatorPtr createIndexAggregator() const override; MergeTreeIndexConditionPtr createIndexCondition( - const SelectQueryInfo & query, const Context & context) const override; + const SelectQueryInfo & query, ContextPtr context) const override; bool mayBenefitFromIndexForIn(const ASTPtr & node) const override; }; diff --git a/src/Storages/MergeTree/MergeTreeIndexSet.cpp b/src/Storages/MergeTree/MergeTreeIndexSet.cpp index b6706367bfa..ff875b185e9 100644 --- a/src/Storages/MergeTree/MergeTreeIndexSet.cpp +++ b/src/Storages/MergeTree/MergeTreeIndexSet.cpp @@ -52,28 +52,31 @@ void MergeTreeIndexGranuleSet::serializeBinary(WriteBuffer & ostr) const "Attempt to write empty set index " + backQuote(index_name), ErrorCodes::LOGICAL_ERROR); const auto & size_type = DataTypePtr(std::make_shared()); + auto size_serialization = size_type->getDefaultSerialization(); if (max_rows != 0 && size() > max_rows) { - size_type->serializeBinary(0, ostr); + size_serialization->serializeBinary(0, ostr); return; } - size_type->serializeBinary(size(), ostr); + size_serialization->serializeBinary(size(), ostr); for (size_t i = 0; 
i < index_sample_block.columns(); ++i) { const auto & type = index_sample_block.getByPosition(i).type; - IDataType::SerializeBinaryBulkSettings settings; - settings.getter = [&ostr](IDataType::SubstreamPath) -> WriteBuffer * { return &ostr; }; + ISerialization::SerializeBinaryBulkSettings settings; + settings.getter = [&ostr](ISerialization::SubstreamPath) -> WriteBuffer * { return &ostr; }; settings.position_independent_encoding = false; settings.low_cardinality_max_dictionary_size = 0; - IDataType::SerializeBinaryBulkStatePtr state; - type->serializeBinaryBulkStatePrefix(settings, state); - type->serializeBinaryBulkWithMultipleStreams(*block.getByPosition(i).column, 0, size(), settings, state); - type->serializeBinaryBulkStateSuffix(settings, state); + auto serialization = type->getDefaultSerialization(); + ISerialization::SerializeBinaryBulkStatePtr state; + + serialization->serializeBinaryBulkStatePrefix(settings, state); + serialization->serializeBinaryBulkWithMultipleStreams(*block.getByPosition(i).column, 0, size(), settings, state); + serialization->serializeBinaryBulkStateSuffix(settings, state); } } @@ -83,7 +86,7 @@ void MergeTreeIndexGranuleSet::deserializeBinary(ReadBuffer & istr) Field field_rows; const auto & size_type = DataTypePtr(std::make_shared()); - size_type->deserializeBinary(field_rows, istr); + size_type->getDefaultSerialization()->deserializeBinary(field_rows, istr); size_t rows_to_read = field_rows.get(); if (rows_to_read == 0) @@ -95,13 +98,16 @@ void MergeTreeIndexGranuleSet::deserializeBinary(ReadBuffer & istr) const auto & type = column.type; ColumnPtr new_column = type->createColumn(); - IDataType::DeserializeBinaryBulkSettings settings; - settings.getter = [&](IDataType::SubstreamPath) -> ReadBuffer * { return &istr; }; + + ISerialization::DeserializeBinaryBulkSettings settings; + settings.getter = [&](ISerialization::SubstreamPath) -> ReadBuffer * { return &istr; }; settings.position_independent_encoding = false; - IDataType::DeserializeBinaryBulkStatePtr state; - type->deserializeBinaryBulkStatePrefix(settings, state); - type->deserializeBinaryBulkWithMultipleStreams(new_column, rows_to_read, settings, state); + ISerialization::DeserializeBinaryBulkStatePtr state; + auto serialization = type->getDefaultSerialization(); + + serialization->deserializeBinaryBulkStatePrefix(settings, state); + serialization->deserializeBinaryBulkWithMultipleStreams(new_column, rows_to_read, settings, state, nullptr); block.insert(ColumnWithTypeAndName(new_column, type, column.name)); } @@ -235,7 +241,7 @@ MergeTreeIndexConditionSet::MergeTreeIndexConditionSet( const Block & index_sample_block_, size_t max_rows_, const SelectQueryInfo & query, - const Context & context) + ContextPtr context) : index_name(index_name_) , max_rows(max_rows_) , index_sample_block(index_sample_block_) @@ -293,6 +299,10 @@ bool MergeTreeIndexConditionSet::mayBeTrueOnGranule(MergeTreeIndexGranulePtr idx auto column = result.getByName(expression_ast->getColumnName()).column->convertToFullColumnIfConst()->convertToFullColumnIfLowCardinality(); + + if (column->onlyNull()) + return false; + const auto * col_uint8 = typeid_cast(column.get()); const NullMap * null_map = nullptr; @@ -382,7 +392,7 @@ bool MergeTreeIndexConditionSet::operatorFromAST(ASTPtr & node) func->name = "__bitSwapLastTwo"; } - else if (func->name == "and") + else if (func->name == "and" || func->name == "indexHint") { auto last_arg = args.back(); args.pop_back(); @@ -438,7 +448,7 @@ bool 
MergeTreeIndexConditionSet::checkASTUseless(const ASTPtr & node, bool atomi const ASTs & args = func->arguments->children; - if (func->name == "and") + if (func->name == "and" || func->name == "indexHint") return checkASTUseless(args[0], atomic) && checkASTUseless(args[1], atomic); else if (func->name == "or") return checkASTUseless(args[0], atomic) || checkASTUseless(args[1], atomic); @@ -468,7 +478,7 @@ MergeTreeIndexAggregatorPtr MergeTreeIndexSet::createIndexAggregator() const } MergeTreeIndexConditionPtr MergeTreeIndexSet::createIndexCondition( - const SelectQueryInfo & query, const Context & context) const + const SelectQueryInfo & query, ContextPtr context) const { return std::make_shared(index.name, index.sample_block, max_rows, query, context); }; diff --git a/src/Storages/MergeTree/MergeTreeIndexSet.h b/src/Storages/MergeTree/MergeTreeIndexSet.h index d84991f5e85..28afe4f714d 100644 --- a/src/Storages/MergeTree/MergeTreeIndexSet.h +++ b/src/Storages/MergeTree/MergeTreeIndexSet.h @@ -14,7 +14,7 @@ namespace DB class MergeTreeIndexSet; -struct MergeTreeIndexGranuleSet : public IMergeTreeIndexGranule +struct MergeTreeIndexGranuleSet final : public IMergeTreeIndexGranule { explicit MergeTreeIndexGranuleSet( const String & index_name_, @@ -42,7 +42,7 @@ struct MergeTreeIndexGranuleSet : public IMergeTreeIndexGranule }; -struct MergeTreeIndexAggregatorSet : IMergeTreeIndexAggregator +struct MergeTreeIndexAggregatorSet final : IMergeTreeIndexAggregator { explicit MergeTreeIndexAggregatorSet( const String & index_name_, @@ -79,7 +79,7 @@ private: }; -class MergeTreeIndexConditionSet : public IMergeTreeIndexCondition +class MergeTreeIndexConditionSet final : public IMergeTreeIndexCondition { public: MergeTreeIndexConditionSet( @@ -87,7 +87,7 @@ public: const Block & index_sample_block_, size_t max_rows_, const SelectQueryInfo & query, - const Context & context); + ContextPtr context); bool alwaysUnknownOrTrue() const override; @@ -113,7 +113,7 @@ private: }; -class MergeTreeIndexSet : public IMergeTreeIndex +class MergeTreeIndexSet final : public IMergeTreeIndex { public: MergeTreeIndexSet( @@ -129,7 +129,7 @@ public: MergeTreeIndexAggregatorPtr createIndexAggregator() const override; MergeTreeIndexConditionPtr createIndexCondition( - const SelectQueryInfo & query, const Context & context) const override; + const SelectQueryInfo & query, ContextPtr context) const override; bool mayBenefitFromIndexForIn(const ASTPtr & node) const override; diff --git a/src/Storages/MergeTree/MergeTreeIndices.h b/src/Storages/MergeTree/MergeTreeIndices.h index c7b9dfb123e..674daeb480d 100644 --- a/src/Storages/MergeTree/MergeTreeIndices.h +++ b/src/Storages/MergeTree/MergeTreeIndices.h @@ -84,7 +84,7 @@ struct IMergeTreeIndex virtual MergeTreeIndexAggregatorPtr createIndexAggregator() const = 0; virtual MergeTreeIndexConditionPtr createIndexCondition( - const SelectQueryInfo & query_info, const Context & context) const = 0; + const SelectQueryInfo & query_info, ContextPtr context) const = 0; Names getColumnsRequiredForIndexCalc() const { return index.expression->getRequiredColumns(); } diff --git a/src/Storages/MergeTree/MergeTreeMutationEntry.cpp b/src/Storages/MergeTree/MergeTreeMutationEntry.cpp index 44c4b3c4d10..49c4e93eb1d 100644 --- a/src/Storages/MergeTree/MergeTreeMutationEntry.cpp +++ b/src/Storages/MergeTree/MergeTreeMutationEntry.cpp @@ -75,7 +75,9 @@ MergeTreeMutationEntry::MergeTreeMutationEntry(DiskPtr disk_, const String & pat LocalDateTime create_time_dt; *buf >> "create time: " >> 
create_time_dt >> "\n"; - create_time = create_time_dt; + create_time = DateLUT::instance().makeDateTime( + create_time_dt.year(), create_time_dt.month(), create_time_dt.day(), + create_time_dt.hour(), create_time_dt.minute(), create_time_dt.second()); *buf >> "commands: "; commands.readText(*buf); diff --git a/src/Storages/MergeTree/MergeTreePartition.cpp b/src/Storages/MergeTree/MergeTreePartition.cpp index 9b02b9f1fd8..897b868db25 100644 --- a/src/Storages/MergeTree/MergeTreePartition.cpp +++ b/src/Storages/MergeTree/MergeTreePartition.cpp @@ -102,7 +102,7 @@ void MergeTreePartition::serializeText(const MergeTreeData & storage, WriteBuffe const DataTypePtr & type = partition_key_sample.getByPosition(0).type; auto column = type->createColumn(); column->insert(value[0]); - type->serializeAsText(*column, 0, out, format_settings); + type->getDefaultSerialization()->serializeText(*column, 0, out, format_settings); } else { @@ -117,9 +117,9 @@ void MergeTreePartition::serializeText(const MergeTreeData & storage, WriteBuffe columns.push_back(std::move(column)); } - DataTypeTuple tuple_type(types); + auto tuple_serialization = DataTypeTuple(types).getDefaultSerialization(); auto tuple_column = ColumnTuple::create(columns); - tuple_type.serializeText(*tuple_column, 0, out, format_settings); + tuple_serialization->serializeText(*tuple_column, 0, out, format_settings); } } @@ -134,7 +134,7 @@ void MergeTreePartition::load(const MergeTreeData & storage, const DiskPtr & dis auto file = openForReading(disk, partition_file_path); value.resize(partition_key_sample.columns()); for (size_t i = 0; i < partition_key_sample.columns(); ++i) - partition_key_sample.getByPosition(i).type->deserializeBinary(value[i], *file); + partition_key_sample.getByPosition(i).type->getDefaultSerialization()->deserializeBinary(value[i], *file); } void MergeTreePartition::store(const MergeTreeData & storage, const DiskPtr & disk, const String & part_path, MergeTreeDataPartChecksums & checksums) const @@ -152,7 +152,7 @@ void MergeTreePartition::store(const Block & partition_key_sample, const DiskPtr auto out = disk->writeFile(part_path + "partition.dat"); HashingWriteBuffer out_hashing(*out); for (size_t i = 0; i < value.size(); ++i) - partition_key_sample.getByPosition(i).type->serializeBinary(value[i], out_hashing); + partition_key_sample.getByPosition(i).type->getDefaultSerialization()->serializeBinary(value[i], out_hashing); out_hashing.next(); checksums.files["partition.dat"].file_size = out_hashing.count(); checksums.files["partition.dat"].file_hash = out_hashing.getHash(); diff --git a/src/Storages/MergeTree/MergeTreePartsMover.cpp b/src/Storages/MergeTree/MergeTreePartsMover.cpp index 7b8c88b1bff..f9e3883d5e2 100644 --- a/src/Storages/MergeTree/MergeTreePartsMover.cpp +++ b/src/Storages/MergeTree/MergeTreePartsMover.cpp @@ -182,7 +182,7 @@ bool MergeTreePartsMover::selectPartsForMove( if (!parts_to_move.empty()) { - LOG_TRACE(log, "Selected {} parts to move according to storage policy rules and {} parts according to TTL rules, {} total", parts_to_move_by_policy_rules, parts_to_move_by_ttl_rules, ReadableSize(parts_to_move_total_size_bytes)); + LOG_DEBUG(log, "Selected {} parts to move according to storage policy rules and {} parts according to TTL rules, {} total", parts_to_move_by_policy_rules, parts_to_move_by_ttl_rules, ReadableSize(parts_to_move_total_size_bytes)); return true; } else @@ -194,15 +194,40 @@ MergeTreeData::DataPartPtr MergeTreePartsMover::clonePart(const MergeTreeMoveEnt if 
(moves_blocker.isCancelled()) throw Exception("Cancelled moving parts.", ErrorCodes::ABORTED); - LOG_TRACE(log, "Cloning part {}", moving_part.part->name); + auto settings = data->getSettings(); + auto part = moving_part.part; + LOG_TRACE(log, "Cloning part {}", part->name); + + auto disk = moving_part.reserved_space->getDisk(); const String directory_to_move = "moving"; - moving_part.part->makeCloneOnDisk(moving_part.reserved_space->getDisk(), directory_to_move); + if (settings->allow_s3_zero_copy_replication) + { + /// Try to fetch part from S3 without copy and fallback to default copy + /// if it's not possible + moving_part.part->assertOnDisk(); + String path_to_clone = data->getRelativeDataPath() + directory_to_move + "/"; + String relative_path = part->relative_path; + if (disk->exists(path_to_clone + relative_path)) + { + LOG_WARNING(log, "Path " + fullPath(disk, path_to_clone + relative_path) + " already exists. Will remove it and clone again."); + disk->removeRecursive(path_to_clone + relative_path + "/"); + } + disk->createDirectories(path_to_clone); + bool is_fetched = data->tryToFetchIfShared(*part, disk, path_to_clone + "/" + part->name); + if (!is_fetched) + part->volume->getDisk()->copy(data->getRelativeDataPath() + relative_path + "/", disk, path_to_clone); + part->volume->getDisk()->removeFileIfExists(path_to_clone + "/" + IMergeTreeDataPart::DELETE_ON_DESTROY_MARKER_FILE_NAME); + } + else + { + part->makeCloneOnDisk(disk, directory_to_move); + } - auto single_disk_volume = std::make_shared("volume_" + moving_part.part->name, moving_part.reserved_space->getDisk(), 0); + auto single_disk_volume = std::make_shared("volume_" + part->name, moving_part.reserved_space->getDisk(), 0); MergeTreeData::MutableDataPartPtr cloned_part = - data->createPart(moving_part.part->name, single_disk_volume, directory_to_move + '/' + moving_part.part->name); - LOG_TRACE(log, "Part {} was cloned to {}", moving_part.part->name, cloned_part->getFullPath()); + data->createPart(part->name, single_disk_volume, directory_to_move + '/' + part->name); + LOG_TRACE(log, "Part {} was cloned to {}", part->name, cloned_part->getFullPath()); cloned_part->loadColumnsChecksumsIndexes(true, true); return cloned_part; diff --git a/src/Storages/MergeTree/MergeTreeRangeReader.cpp b/src/Storages/MergeTree/MergeTreeRangeReader.cpp index e72039f7172..0bd3d384cba 100644 --- a/src/Storages/MergeTree/MergeTreeRangeReader.cpp +++ b/src/Storages/MergeTree/MergeTreeRangeReader.cpp @@ -486,9 +486,13 @@ void MergeTreeRangeReader::ReadResult::setFilter(const ColumnPtr & new_filter) ConstantFilterDescription const_description(*new_filter); if (const_description.always_true) + { setFilterConstTrue(); + } else if (const_description.always_false) + { clear(); + } else { FilterDescription filter_description(*new_filter); @@ -937,7 +941,10 @@ void MergeTreeRangeReader::executePrewhereActionsAndFilterColumns(ReadResult & r auto columns = block.getColumns(); filterColumns(columns, row_level_filter); - block.setColumns(columns); + if (columns.empty()) + block = block.cloneEmpty(); + else + block.setColumns(columns); } prewhere_info->prewhere_actions->execute(block); diff --git a/src/Storages/MergeTree/MergeTreeReadPool.h b/src/Storages/MergeTree/MergeTreeReadPool.h index 366e9a2381a..9949bdf86f8 100644 --- a/src/Storages/MergeTree/MergeTreeReadPool.h +++ b/src/Storages/MergeTree/MergeTreeReadPool.h @@ -100,7 +100,7 @@ private: const MergeTreeData & data; StorageMetadataPtr metadata_snapshot; - Names column_names; + const Names 
column_names; bool do_not_steal_tasks; bool predict_block_size_bytes; std::vector per_part_column_name_set; diff --git a/src/Storages/MergeTree/MergeTreeReaderCompact.cpp b/src/Storages/MergeTree/MergeTreeReaderCompact.cpp index 67268e8afd8..da28f75b57f 100644 --- a/src/Storages/MergeTree/MergeTreeReaderCompact.cpp +++ b/src/Storages/MergeTree/MergeTreeReaderCompact.cpp @@ -84,7 +84,8 @@ MergeTreeReaderCompact::MergeTreeReaderCompact( buffer_size, 0, settings.min_bytes_to_use_direct_io, - settings.min_bytes_to_use_mmap_io); + settings.min_bytes_to_use_mmap_io, + settings.mmap_cache.get()); }, uncompressed_cache, /* allow_different_codecs = */ true); @@ -103,7 +104,12 @@ MergeTreeReaderCompact::MergeTreeReaderCompact( auto buffer = std::make_unique( data_part->volume->getDisk()->readFile( - full_data_path, buffer_size, 0, settings.min_bytes_to_use_direct_io, settings.min_bytes_to_use_mmap_io), + full_data_path, + buffer_size, + 0, + settings.min_bytes_to_use_direct_io, + settings.min_bytes_to_use_mmap_io, + settings.mmap_cache.get()), /* allow_different_codecs = */ true); if (profile_callback_) @@ -200,16 +206,16 @@ void MergeTreeReaderCompact::readData( if (!isContinuousReading(from_mark, column_position)) seekToMark(from_mark, column_position); - auto buffer_getter = [&](const IDataType::SubstreamPath & substream_path) -> ReadBuffer * + auto buffer_getter = [&](const ISerialization::SubstreamPath & substream_path) -> ReadBuffer * { - if (only_offsets && (substream_path.size() != 1 || substream_path[0].type != IDataType::Substream::ArraySizes)) + if (only_offsets && (substream_path.size() != 1 || substream_path[0].type != ISerialization::Substream::ArraySizes)) return nullptr; return data_buffer; }; - IDataType::DeserializeBinaryBulkStatePtr state; - IDataType::DeserializeBinaryBulkSettings deserialize_settings; + ISerialization::DeserializeBinaryBulkStatePtr state; + ISerialization::DeserializeBinaryBulkSettings deserialize_settings; deserialize_settings.getter = buffer_getter; deserialize_settings.avg_value_size_hint = avg_value_size_hints[name]; @@ -218,14 +224,16 @@ void MergeTreeReaderCompact::readData( auto type_in_storage = name_and_type.getTypeInStorage(); ColumnPtr temp_column = type_in_storage->createColumn(); - type_in_storage->deserializeBinaryBulkStatePrefix(deserialize_settings, state); - type_in_storage->deserializeBinaryBulkWithMultipleStreams(temp_column, rows_to_read, deserialize_settings, state); + auto serialization = type_in_storage->getDefaultSerialization(); + serialization->deserializeBinaryBulkStatePrefix(deserialize_settings, state); + serialization->deserializeBinaryBulkWithMultipleStreams(temp_column, rows_to_read, deserialize_settings, state, nullptr); column = type_in_storage->getSubcolumn(name_and_type.getSubcolumnName(), *temp_column); } else { - type->deserializeBinaryBulkStatePrefix(deserialize_settings, state); - type->deserializeBinaryBulkWithMultipleStreams(column, rows_to_read, deserialize_settings, state); + auto serialization = type->getDefaultSerialization(); + serialization->deserializeBinaryBulkStatePrefix(deserialize_settings, state); + serialization->deserializeBinaryBulkWithMultipleStreams(column, rows_to_read, deserialize_settings, state, nullptr); } /// The buffer is left in inconsistent state after reading single offsets diff --git a/src/Storages/MergeTree/MergeTreeReaderStream.cpp b/src/Storages/MergeTree/MergeTreeReaderStream.cpp index fd251497d7c..774c5bcf3d8 100644 --- a/src/Storages/MergeTree/MergeTreeReaderStream.cpp +++ 
b/src/Storages/MergeTree/MergeTreeReaderStream.cpp @@ -89,7 +89,8 @@ MergeTreeReaderStream::MergeTreeReaderStream( buffer_size, sum_mark_range_bytes, settings.min_bytes_to_use_direct_io, - settings.min_bytes_to_use_mmap_io); + settings.min_bytes_to_use_mmap_io, + settings.mmap_cache.get()); }, uncompressed_cache); @@ -105,8 +106,13 @@ MergeTreeReaderStream::MergeTreeReaderStream( else { auto buffer = std::make_unique( - disk->readFile(path_prefix + data_file_extension, buffer_size, - sum_mark_range_bytes, settings.min_bytes_to_use_direct_io, settings.min_bytes_to_use_mmap_io) + disk->readFile( + path_prefix + data_file_extension, + buffer_size, + sum_mark_range_bytes, + settings.min_bytes_to_use_direct_io, + settings.min_bytes_to_use_mmap_io, + settings.mmap_cache.get()) ); if (profile_callback) diff --git a/src/Storages/MergeTree/MergeTreeReaderWide.cpp b/src/Storages/MergeTree/MergeTreeReaderWide.cpp index 30db54fc8e0..0da2f643eb0 100644 --- a/src/Storages/MergeTree/MergeTreeReaderWide.cpp +++ b/src/Storages/MergeTree/MergeTreeReaderWide.cpp @@ -72,7 +72,7 @@ size_t MergeTreeReaderWide::readRows(size_t from_mark, bool continue_reading, si /// If append is true, then the value will be equal to nullptr and will be used only to /// check that the offsets column has been already read. OffsetColumns offset_columns; - std::unordered_map caches; + std::unordered_map caches; auto name_and_type = columns.begin(); for (size_t pos = 0; pos < num_columns; ++pos, ++name_and_type) @@ -137,9 +137,9 @@ size_t MergeTreeReaderWide::readRows(size_t from_mark, bool continue_reading, si void MergeTreeReaderWide::addStreams(const NameAndTypePair & name_and_type, const ReadBufferFromFileBase::ProfileCallback & profile_callback, clockid_t clock_type) { - IDataType::StreamCallback callback = [&] (const IDataType::SubstreamPath & substream_path, const IDataType & /* substream_type */) + ISerialization::StreamCallback callback = [&] (const ISerialization::SubstreamPath & substream_path) { - String stream_name = IDataType::getFileNameForStream(name_and_type, substream_path); + String stream_name = ISerialization::getFileNameForStream(name_and_type, substream_path); if (streams.count(stream_name)) return; @@ -160,25 +160,26 @@ void MergeTreeReaderWide::addStreams(const NameAndTypePair & name_and_type, profile_callback, clock_type)); }; - IDataType::SubstreamPath substream_path; - name_and_type.type->enumerateStreams(callback, substream_path); + auto serialization = data_part->getSerializationForColumn(name_and_type); + serialization->enumerateStreams(callback); + serializations.emplace(name_and_type.name, std::move(serialization)); } void MergeTreeReaderWide::readData( const NameAndTypePair & name_and_type, ColumnPtr & column, size_t from_mark, bool continue_reading, size_t max_rows_to_read, - IDataType::SubstreamsCache & cache) + ISerialization::SubstreamsCache & cache) { - auto get_stream_getter = [&](bool stream_for_prefix) -> IDataType::InputStreamGetter + auto get_stream_getter = [&](bool stream_for_prefix) -> ISerialization::InputStreamGetter { - return [&, stream_for_prefix](const IDataType::SubstreamPath & substream_path) -> ReadBuffer * + return [&, stream_for_prefix](const ISerialization::SubstreamPath & substream_path) -> ReadBuffer * { /// If substream have already been read. 
- if (cache.count(IDataType::getSubcolumnNameForStream(substream_path))) + if (cache.count(ISerialization::getSubcolumnNameForStream(substream_path))) return nullptr; - String stream_name = IDataType::getFileNameForStream(name_and_type, substream_path); + String stream_name = ISerialization::getFileNameForStream(name_and_type, substream_path); auto it = streams.find(stream_name); if (it == streams.end()) @@ -199,19 +200,23 @@ void MergeTreeReaderWide::readData( }; double & avg_value_size_hint = avg_value_size_hints[name_and_type.name]; - IDataType::DeserializeBinaryBulkSettings deserialize_settings; + ISerialization::DeserializeBinaryBulkSettings deserialize_settings; deserialize_settings.avg_value_size_hint = avg_value_size_hint; - if (deserialize_binary_bulk_state_map.count(name_and_type.name) == 0) + const auto & name = name_and_type.name; + auto serialization = serializations[name]; + + if (deserialize_binary_bulk_state_map.count(name) == 0) { deserialize_settings.getter = get_stream_getter(true); - name_and_type.type->deserializeBinaryBulkStatePrefix(deserialize_settings, deserialize_binary_bulk_state_map[name_and_type.name]); + serialization->deserializeBinaryBulkStatePrefix(deserialize_settings, deserialize_binary_bulk_state_map[name]); } deserialize_settings.getter = get_stream_getter(false); deserialize_settings.continuous_reading = continue_reading; - auto & deserialize_state = deserialize_binary_bulk_state_map[name_and_type.name]; - name_and_type.type->deserializeBinaryBulkWithMultipleStreams(column, max_rows_to_read, deserialize_settings, deserialize_state, &cache); + auto & deserialize_state = deserialize_binary_bulk_state_map[name]; + + serializations[name]->deserializeBinaryBulkWithMultipleStreams(column, max_rows_to_read, deserialize_settings, deserialize_state, &cache); IDataType::updateAvgValueSizeHint(*column, avg_value_size_hint); } diff --git a/src/Storages/MergeTree/MergeTreeReaderWide.h b/src/Storages/MergeTree/MergeTreeReaderWide.h index bf9e97035d0..1afbca4bf41 100644 --- a/src/Storages/MergeTree/MergeTreeReaderWide.h +++ b/src/Storages/MergeTree/MergeTreeReaderWide.h @@ -34,8 +34,10 @@ public: private: using FileStreams = std::map>; + using Serializations = std::map; FileStreams streams; + Serializations serializations; void addStreams(const NameAndTypePair & name_and_type, const ReadBufferFromFileBase::ProfileCallback & profile_callback, clockid_t clock_type); @@ -43,7 +45,7 @@ private: void readData( const NameAndTypePair & name_and_type, ColumnPtr & column, size_t from_mark, bool continue_reading, size_t max_rows_to_read, - IDataType::SubstreamsCache & cache); + ISerialization::SubstreamsCache & cache); }; } diff --git a/src/Storages/MergeTree/MergeTreeReverseSelectProcessor.cpp b/src/Storages/MergeTree/MergeTreeReverseSelectProcessor.cpp index ee0a77ba3cf..e9527efaa4a 100644 --- a/src/Storages/MergeTree/MergeTreeReverseSelectProcessor.cpp +++ b/src/Storages/MergeTree/MergeTreeReverseSelectProcessor.cpp @@ -44,12 +44,11 @@ MergeTreeReverseSelectProcessor::MergeTreeReverseSelectProcessor( for (const auto & range : all_mark_ranges) total_marks_count += range.end - range.begin; - size_t total_rows = data_part->index_granularity.getTotalRows(); + size_t total_rows = data_part->index_granularity.getRowsCountInRanges(all_mark_ranges); if (!quiet) - LOG_TRACE(log, "Reading {} ranges in reverse order from part {}, approx. {}, up to {} rows starting from {}", + LOG_DEBUG(log, "Reading {} ranges in reverse order from part {}, approx. 
{} rows starting from {}", all_mark_ranges.size(), data_part->name, total_rows, - data_part->index_granularity.getRowsCountInRanges(all_mark_ranges), data_part->index_granularity.getMarkStartingRow(all_mark_ranges.front().begin)); addTotalRowsApprox(total_rows); @@ -63,9 +62,9 @@ MergeTreeReverseSelectProcessor::MergeTreeReverseSelectProcessor( column_name_set = NameSet{column_names.begin(), column_names.end()}; if (use_uncompressed_cache) - owned_uncompressed_cache = storage.global_context.getUncompressedCache(); + owned_uncompressed_cache = storage.getContext()->getUncompressedCache(); - owned_mark_cache = storage.global_context.getMarkCache(); + owned_mark_cache = storage.getContext()->getMarkCache(); reader = data_part->getReader(task_columns.columns, metadata_snapshot, all_mark_ranges, owned_uncompressed_cache.get(), diff --git a/src/Storages/MergeTree/MergeTreeSelectProcessor.cpp b/src/Storages/MergeTree/MergeTreeSelectProcessor.cpp index 65f9b1eba3b..980afa170e9 100644 --- a/src/Storages/MergeTree/MergeTreeSelectProcessor.cpp +++ b/src/Storages/MergeTree/MergeTreeSelectProcessor.cpp @@ -47,7 +47,7 @@ MergeTreeSelectProcessor::MergeTreeSelectProcessor( size_t total_rows = data_part->index_granularity.getRowsCountInRanges(all_mark_ranges); if (!quiet) - LOG_TRACE(log, "Reading {} ranges from part {}, approx. {} rows starting from {}", + LOG_DEBUG(log, "Reading {} ranges from part {}, approx. {} rows starting from {}", all_mark_ranges.size(), data_part->name, total_rows, data_part->index_granularity.getMarkStartingRow(all_mark_ranges.front().begin)); @@ -87,9 +87,9 @@ try if (!reader) { if (use_uncompressed_cache) - owned_uncompressed_cache = storage.global_context.getUncompressedCache(); + owned_uncompressed_cache = storage.getContext()->getUncompressedCache(); - owned_mark_cache = storage.global_context.getMarkCache(); + owned_mark_cache = storage.getContext()->getMarkCache(); reader = data_part->getReader(task_columns.columns, metadata_snapshot, all_mark_ranges, owned_uncompressed_cache.get(), owned_mark_cache.get(), reader_settings); diff --git a/src/Storages/MergeTree/MergeTreeSequentialSource.cpp b/src/Storages/MergeTree/MergeTreeSequentialSource.cpp index edd63aadd29..e82b1966461 100644 --- a/src/Storages/MergeTree/MergeTreeSequentialSource.cpp +++ b/src/Storages/MergeTree/MergeTreeSequentialSource.cpp @@ -23,16 +23,16 @@ MergeTreeSequentialSource::MergeTreeSequentialSource( , data_part(std::move(data_part_)) , columns_to_read(std::move(columns_to_read_)) , read_with_direct_io(read_with_direct_io_) - , mark_cache(storage.global_context.getMarkCache()) + , mark_cache(storage.getContext()->getMarkCache()) { if (!quiet) { /// Print column name but don't pollute logs in case of many columns. 
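Both select processors above now estimate progress with index_granularity.getRowsCountInRanges() over the mark ranges actually selected, instead of the whole part's row count. A rough sketch of that computation over cumulative per-mark row counts; the data layout is a hypothetical simplification, not the real IndexGranularity class.

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

struct MarkRange { size_t begin = 0; size_t end = 0; };   // [begin, end) in marks

// rows_end_for_mark[i] = total rows covered by marks 0..i (prefix sums),
// mirroring how adaptive index granularity stores cumulative row counts.
size_t rowsCountInRanges(const std::vector<size_t> & rows_end_for_mark,
                         const std::vector<MarkRange> & ranges)
{
    size_t rows = 0;
    for (const auto & range : ranges)
    {
        size_t end_rows = range.end ? rows_end_for_mark[range.end - 1] : 0;
        size_t begin_rows = range.begin ? rows_end_for_mark[range.begin - 1] : 0;
        rows += end_rows - begin_rows;
    }
    return rows;
}

int main()
{
    // Four marks covering 8192, 8192, 4096 and 100 rows respectively.
    std::vector<size_t> prefix = {8192, 16384, 20480, 20580};
    std::vector<MarkRange> ranges = {{1, 3}};   // marks 1 and 2 only
    std::cout << rowsCountInRanges(prefix, ranges) << '\n';   // 12288
}
```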
        if (columns_to_read.size() == 1)
-            LOG_TRACE(log, "Reading {} marks from part {}, total {} rows starting from the beginning of the part, column {}",
+            LOG_DEBUG(log, "Reading {} marks from part {}, total {} rows starting from the beginning of the part, column {}",
                data_part->getMarksCount(), data_part->name, data_part->rows_count, columns_to_read.front());
        else
-            LOG_TRACE(log, "Reading {} marks from part {}, total {} rows starting from the beginning of the part",
+            LOG_DEBUG(log, "Reading {} marks from part {}, total {} rows starting from the beginning of the part",
                data_part->getMarksCount(), data_part->name, data_part->rows_count);
    }

diff --git a/src/Storages/MergeTree/MergeTreeSettings.cpp b/src/Storages/MergeTree/MergeTreeSettings.cpp
index e77668e8900..dfaaf4a942b 100644
--- a/src/Storages/MergeTree/MergeTreeSettings.cpp
+++ b/src/Storages/MergeTree/MergeTreeSettings.cpp
@@ -98,6 +98,31 @@ void MergeTreeSettings::sanityCheck(const Settings & query_settings) const
            number_of_free_entries_in_pool_to_lower_max_size_of_merge,
            query_settings.background_pool_size);
    }
-}
+    // The default min_index_granularity_bytes is 1024 bytes and the default index_granularity_bytes is 10 MiB.
+    // If index_granularity_bytes is not disabled (i.e. > 0), always ensure that it's greater than
+    // min_index_granularity_bytes. This is mainly a safeguard against accidents whereby a really low
+    // index_granularity_bytes setting of e.g. 1 byte can create really large parts with large marks.
+    if (index_granularity_bytes > 0 && index_granularity_bytes < min_index_granularity_bytes)
+    {
+        throw Exception(
+            ErrorCodes::BAD_ARGUMENTS,
+            "index_granularity_bytes: {} is lower than specified min_index_granularity_bytes: {}",
+            index_granularity_bytes,
+            min_index_granularity_bytes);
+    }
+
+    // If min_bytes_to_rebalance_partition_over_jbod is not disabled (i.e. > 0), always ensure that
+    // it's not less than max_bytes_to_merge_at_max_space_in_pool / 1024. This is a safeguard that keeps
+    // tiny parts from participating in the JBOD balancer, which would slow down the merge process.
+    if (min_bytes_to_rebalance_partition_over_jbod > 0
+        && min_bytes_to_rebalance_partition_over_jbod < max_bytes_to_merge_at_max_space_in_pool / 1024)
+    {
+        throw Exception(
+            ErrorCodes::BAD_ARGUMENTS,
+            "min_bytes_to_rebalance_partition_over_jbod: {} is lower than specified max_bytes_to_merge_at_max_space_in_pool / 1024: {}",
+            min_bytes_to_rebalance_partition_over_jbod,
+            max_bytes_to_merge_at_max_space_in_pool / 1024);
+    }
+}
}

diff --git a/src/Storages/MergeTree/MergeTreeSettings.h b/src/Storages/MergeTree/MergeTreeSettings.h
index 16657b4083d..f422f00f4dc 100644
--- a/src/Storages/MergeTree/MergeTreeSettings.h
+++ b/src/Storages/MergeTree/MergeTreeSettings.h
@@ -2,6 +2,7 @@
#include
#include
+#include

namespace Poco::Util
@@ -54,6 +55,7 @@ struct Settings;
    M(UInt64, write_ahead_log_bytes_to_fsync, 100ULL * 1024 * 1024, "Amount of bytes, accumulated in WAL to do fsync.", 0) \
    M(UInt64, write_ahead_log_interval_ms_to_fsync, 100, "Interval in milliseconds after which fsync for WAL is being done.", 0) \
    M(Bool, in_memory_parts_insert_sync, false, "If true insert of part with in-memory format will wait for fsync of WAL", 0) \
+    M(UInt64, non_replicated_deduplication_window, 0, "How many last blocks of hashes should be kept on disk (0 - disabled).", 0) \
    \
    /** Inserts settings.
    */ \
    M(UInt64, parts_to_delay_insert, 150, "If table contains at least that many active parts in single partition, artificially slow down insert into table.", 0) \
@@ -71,6 +73,7 @@ struct Settings;
    M(Seconds, prefer_fetch_merged_part_time_threshold, 3600, "If time passed after replication log entry creation exceeds this threshold and sum size of parts is greater than \"prefer_fetch_merged_part_size_threshold\", prefer fetching merged part from replica instead of doing merge locally. To speed up very long merges.", 0) \
    M(UInt64, prefer_fetch_merged_part_size_threshold, 10ULL * 1024 * 1024 * 1024, "If sum size of parts exceeds this threshold and time passed after replication log entry creation is greater than \"prefer_fetch_merged_part_time_threshold\", prefer fetching merged part from replica instead of doing merge locally. To speed up very long merges.", 0) \
    M(Seconds, execute_merges_on_single_replica_time_threshold, 0, "When greater than zero only a single replica starts the merge immediately, others wait up to that amount of time to download the result instead of doing merges locally. If the chosen replica doesn't finish the merge during that amount of time, fallback to standard behavior happens.", 0) \
+    M(Seconds, s3_execute_merges_on_single_replica_time_threshold, 3 * 60 * 60, "When greater than zero only a single replica starts the merge immediately when the merged part is on S3 storage and 'allow_s3_zero_copy_replication' is enabled.", 0) \
    M(Seconds, try_fetch_recompressed_part_timeout, 7200, "Recompression works slow in most cases, so we don't start merge with recompression until this timeout and trying to fetch recompressed part from replica which assigned this merge with recompression.", 0) \
    M(Bool, always_fetch_merged_part, 0, "If true, replica never merge parts and always download merged parts from other replicas.", 0) \
    M(UInt64, max_suspicious_broken_parts, 10, "Max broken parts, if more - deny automatic deletion.", 0) \
@@ -82,6 +85,9 @@ struct Settings;
    M(UInt64, replicated_max_parallel_fetches_for_host, DEFAULT_COUNT_OF_HTTP_CONNECTIONS_PER_ENDPOINT, "Limit parallel fetches from endpoint (actually pool size).", 0) \
    M(UInt64, replicated_max_parallel_sends, 0, "Limit parallel sends.", 0) \
    M(UInt64, replicated_max_parallel_sends_for_table, 0, "Limit parallel sends for one table.", 0) \
+    M(Seconds, replicated_fetches_http_connection_timeout, 0, "HTTP connection timeout for part fetch requests. Inherited from default profile `http_connection_timeout` if not set explicitly.", 0) \
+    M(Seconds, replicated_fetches_http_send_timeout, 0, "HTTP send timeout for part fetch requests. Inherited from default profile `http_send_timeout` if not set explicitly.", 0) \
+    M(Seconds, replicated_fetches_http_receive_timeout, 0, "HTTP receive timeout for part fetch requests.
Inherited from default profile `http_receive_timeout` if not set explicitly.", 0) \ M(Bool, replicated_can_become_leader, true, "If true, Replicated tables replicas on this node will try to acquire leadership.", 0) \ M(Seconds, zookeeper_session_expiration_check_period, 60, "ZooKeeper session expiration check period, in seconds.", 0) \ M(Bool, detach_old_local_parts_when_cloning_replica, 1, "Do not remove old local parts when repairing lost replica.", 0) \ @@ -114,11 +120,13 @@ struct Settings; M(UInt64, concurrent_part_removal_threshold, 100, "Activate concurrent part removal (see 'max_part_removal_threads') only if the number of inactive data parts is at least this.", 0) \ M(String, storage_policy, "default", "Name of storage disk policy", 0) \ M(Bool, allow_nullable_key, false, "Allow Nullable types as primary keys.", 0) \ + M(Bool, allow_s3_zero_copy_replication, false, "Allow Zero-copy replication over S3", 0) \ M(Bool, remove_empty_parts, true, "Remove empty parts after they were pruned by TTL, mutation, or collapsing merge algorithm", 0) \ M(Bool, assign_part_uuids, false, "Generate UUIDs for parts. Before enabling check that all replicas support new format.", 0) \ M(Int64, max_partitions_to_read, -1, "Limit the max number of partitions that can be accessed in one query. <= 0 means unlimited. This setting is the default that can be overridden by the query-level setting with the same name.", 0) \ M(UInt64, max_concurrent_queries, 0, "Max number of concurrently executed queries related to the MergeTree table (0 - disabled). Queries will still be limited by other max_concurrent_queries settings.", 0) \ M(UInt64, min_marks_to_honor_max_concurrent_queries, 0, "Minimal number of marks to honor the MergeTree-level's max_concurrent_queries (0 - disabled). Queries will still be limited by other max_concurrent_queries settings.", 0) \ + M(UInt64, min_bytes_to_rebalance_partition_over_jbod, 0, "Minimal amount of bytes to enable part rebalance over JBOD array (0 - disabled).", 0) \ \ /** Obsolete settings. Kept for backward compatibility only. 
*/ \ M(UInt64, min_relative_delay_to_yield_leadership, 120, "Obsolete setting, does nothing.", 0) \ diff --git a/src/Storages/MergeTree/MergeTreeThreadSelectBlockInputProcessor.cpp b/src/Storages/MergeTree/MergeTreeThreadSelectBlockInputProcessor.cpp index f57247e39ab..ba9216ac1b0 100644 --- a/src/Storages/MergeTree/MergeTreeThreadSelectBlockInputProcessor.cpp +++ b/src/Storages/MergeTree/MergeTreeThreadSelectBlockInputProcessor.cpp @@ -71,8 +71,8 @@ bool MergeTreeThreadSelectBlockInputProcessor::getNewTask() auto rest_mark_ranges = pool->getRestMarks(*task->data_part, task->mark_ranges[0]); if (use_uncompressed_cache) - owned_uncompressed_cache = storage.global_context.getUncompressedCache(); - owned_mark_cache = storage.global_context.getMarkCache(); + owned_uncompressed_cache = storage.getContext()->getUncompressedCache(); + owned_mark_cache = storage.getContext()->getMarkCache(); reader = task->data_part->getReader(task->columns, metadata_snapshot, rest_mark_ranges, owned_uncompressed_cache.get(), owned_mark_cache.get(), reader_settings, diff --git a/src/Storages/MergeTree/MergeTreeWhereOptimizer.cpp b/src/Storages/MergeTree/MergeTreeWhereOptimizer.cpp index 34cac56d74c..2b27a51ba5e 100644 --- a/src/Storages/MergeTree/MergeTreeWhereOptimizer.cpp +++ b/src/Storages/MergeTree/MergeTreeWhereOptimizer.cpp @@ -29,7 +29,7 @@ static constexpr auto threshold = 2; MergeTreeWhereOptimizer::MergeTreeWhereOptimizer( SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, std::unordered_map column_sizes_, const StorageMetadataPtr & metadata_snapshot, const Names & queried_columns_, @@ -37,7 +37,9 @@ MergeTreeWhereOptimizer::MergeTreeWhereOptimizer( : table_columns{ext::map( metadata_snapshot->getColumns().getAllPhysical(), [](const NameAndTypePair & col) { return col.name; })} , queried_columns{queried_columns_} - , block_with_constants{KeyCondition::getBlockWithConstants(query_info.query, query_info.syntax_analyzer_result, context)} + , sorting_key_names{NameSet( + metadata_snapshot->getSortingKey().column_names.begin(), metadata_snapshot->getSortingKey().column_names.end())} + , block_with_constants{KeyCondition::getBlockWithConstants(query_info.query->clone(), query_info.syntax_analyzer_result, context)} , log{log_} , column_sizes{std::move(column_sizes_)} { @@ -114,12 +116,12 @@ static bool isConditionGood(const ASTPtr & condition) } -void MergeTreeWhereOptimizer::analyzeImpl(Conditions & res, const ASTPtr & node) const +void MergeTreeWhereOptimizer::analyzeImpl(Conditions & res, const ASTPtr & node, bool is_final) const { if (const auto * func_and = node->as(); func_and && func_and->name == "and") { for (const auto & elem : func_and->arguments->children) - analyzeImpl(res, elem); + analyzeImpl(res, elem, is_final); } else { @@ -133,7 +135,7 @@ void MergeTreeWhereOptimizer::analyzeImpl(Conditions & res, const ASTPtr & node) cond.viable = /// Condition depend on some column. Constant expressions are not moved. !cond.identifiers.empty() - && !cannotBeMoved(node) + && !cannotBeMoved(node, is_final) /// Do not take into consideration the conditions consisting only of the first primary key column && !hasPrimaryKeyAtoms(node) /// Only table columns are considered. Not array joined columns. NOTE We're assuming that aliases was expanded. @@ -149,10 +151,10 @@ void MergeTreeWhereOptimizer::analyzeImpl(Conditions & res, const ASTPtr & node) } /// Transform conjunctions chain in WHERE expression to Conditions list. 
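The analyze()/analyzeImpl() pair above flattens the WHERE conjunction chain into a Conditions list so that each conjunct can be scored for PREWHERE independently (and, under FINAL, rejected when it touches non-sorting-key columns). A simplified sketch of the flattening over a toy expression tree; the Node type is illustrative, not ClickHouse's AST.

```cpp
#include <iostream>
#include <memory>
#include <string>
#include <vector>

// Toy expression tree: either a named atom or an "and" over children.
struct Node
{
    std::string name;                            // "and" or an atom like "x > 1"
    std::vector<std::shared_ptr<Node>> children;
};

// Recursively flatten AND chains into a flat list of conjuncts,
// the way analyzeImpl() builds the Conditions list.
void flattenAnd(const std::shared_ptr<Node> & node, std::vector<std::string> & out)
{
    if (node->name == "and")
    {
        for (const auto & child : node->children)
            flattenAnd(child, out);
    }
    else
        out.push_back(node->name);
}

int main()
{
    auto atom = [](std::string s) { auto n = std::make_shared<Node>(); n->name = std::move(s); return n; };
    auto land = [](auto a, auto b) { auto n = std::make_shared<Node>(); n->name = "and"; n->children = {a, b}; return n; };

    // (a = 1) AND ((b < 2) AND (c != 3))
    auto where = land(atom("a = 1"), land(atom("b < 2"), atom("c != 3")));

    std::vector<std::string> conditions;
    flattenAnd(where, conditions);
    for (const auto & c : conditions)
        std::cout << c << '\n';                  // three independent conjuncts
}
```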
-MergeTreeWhereOptimizer::Conditions MergeTreeWhereOptimizer::analyze(const ASTPtr & expression) const +MergeTreeWhereOptimizer::Conditions MergeTreeWhereOptimizer::analyze(const ASTPtr & expression, bool is_final) const { Conditions res; - analyzeImpl(res, expression); + analyzeImpl(res, expression, is_final); return res; } @@ -183,7 +185,7 @@ void MergeTreeWhereOptimizer::optimize(ASTSelectQuery & select) const if (!select.where() || select.prewhere()) return; - Conditions where_conditions = analyze(select.where()); + Conditions where_conditions = analyze(select.where(), select.final()); Conditions prewhere_conditions; UInt64 total_size_of_moved_conditions = 0; @@ -216,14 +218,20 @@ void MergeTreeWhereOptimizer::optimize(ASTSelectQuery & select) const if (!it->viable) break; - /// 10% ratio is just a guess. - /// If sizes of compressed columns cannot be calculated, e.g. for compact parts, - /// use number of moved columns as a fallback. - bool moved_enough = - (total_size_of_queried_columns > 0 && total_size_of_moved_conditions > 0 - && (total_size_of_moved_conditions + it->columns_size) * 10 > total_size_of_queried_columns) - || (total_number_of_moved_columns > 0 - && (total_number_of_moved_columns + it->identifiers.size()) * 10 > queried_columns.size()); + bool moved_enough = false; + if (total_size_of_queried_columns > 0) + { + /// If we know size of queried columns use it as threshold. 10% ratio is just a guess. + moved_enough = total_size_of_moved_conditions > 0 + && (total_size_of_moved_conditions + it->columns_size) * 10 > total_size_of_queried_columns; + } + else + { + /// Otherwise, use number of moved columns as a fallback. + /// It can happen, if table has only compact parts. 25% ratio is just a guess. + moved_enough = total_number_of_moved_columns > 0 + && (total_number_of_moved_columns + it->identifiers.size()) * 4 > queried_columns.size(); + } if (moved_enough) break; @@ -300,6 +308,12 @@ bool MergeTreeWhereOptimizer::isPrimaryKeyAtom(const ASTPtr & ast) const } +bool MergeTreeWhereOptimizer::isSortingKey(const String & column_name) const +{ + return sorting_key_names.count(column_name); +} + + bool MergeTreeWhereOptimizer::isConstant(const ASTPtr & expr) const { const auto column_name = expr->getColumnName(); @@ -319,7 +333,7 @@ bool MergeTreeWhereOptimizer::isSubsetOfTableColumns(const NameSet & identifiers } -bool MergeTreeWhereOptimizer::cannotBeMoved(const ASTPtr & ptr) const +bool MergeTreeWhereOptimizer::cannotBeMoved(const ASTPtr & ptr, bool is_final) const { if (const auto * function_ptr = ptr->as()) { @@ -331,17 +345,22 @@ bool MergeTreeWhereOptimizer::cannotBeMoved(const ASTPtr & ptr) const if ("globalIn" == function_ptr->name || "globalNotIn" == function_ptr->name) return true; + + /// indexHint is a special function that it does not make sense to transfer to PREWHERE + if ("indexHint" == function_ptr->name) + return true; } else if (auto opt_name = IdentifierSemantic::getColumnName(ptr)) { /// disallow moving result of ARRAY JOIN to PREWHERE if (array_joined_names.count(*opt_name) || - array_joined_names.count(Nested::extractTableName(*opt_name))) + array_joined_names.count(Nested::extractTableName(*opt_name)) || + (is_final && !isSortingKey(*opt_name))) return true; } for (const auto & child : ptr->children) - if (cannotBeMoved(child)) + if (cannotBeMoved(child, is_final)) return true; return false; diff --git a/src/Storages/MergeTree/MergeTreeWhereOptimizer.h b/src/Storages/MergeTree/MergeTreeWhereOptimizer.h index cad77fb9eed..0559fdee2ae 100644 --- 
a/src/Storages/MergeTree/MergeTreeWhereOptimizer.h +++ b/src/Storages/MergeTree/MergeTreeWhereOptimizer.h @@ -1,12 +1,15 @@ #pragma once -#include -#include -#include -#include #include +#include #include +#include + +#include +#include +#include + namespace Poco { class Logger; } @@ -32,7 +35,7 @@ class MergeTreeWhereOptimizer : private boost::noncopyable public: MergeTreeWhereOptimizer( SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, std::unordered_map column_sizes_, const StorageMetadataPtr & metadata_snapshot, const Names & queried_columns_, @@ -67,10 +70,10 @@ private: using Conditions = std::list; - void analyzeImpl(Conditions & res, const ASTPtr & node) const; + void analyzeImpl(Conditions & res, const ASTPtr & node, bool is_final) const; /// Transform conjunctions chain in WHERE expression to Conditions list. - Conditions analyze(const ASTPtr & expression) const; + Conditions analyze(const ASTPtr & expression, bool is_final) const; /// Transform Conditions list to WHERE or PREWHERE expression. static ASTPtr reconstruct(const Conditions & conditions); @@ -85,6 +88,8 @@ private: bool isPrimaryKeyAtom(const ASTPtr & ast) const; + bool isSortingKey(const String & column_name) const; + bool isConstant(const ASTPtr & expr) const; bool isSubsetOfTableColumns(const NameSet & identifiers) const; @@ -95,7 +100,7 @@ private: * * Also, disallow moving expressions with GLOBAL [NOT] IN. */ - bool cannotBeMoved(const ASTPtr & ptr) const; + bool cannotBeMoved(const ASTPtr & ptr, bool is_final) const; void determineArrayJoinedNames(ASTSelectQuery & select); @@ -104,6 +109,7 @@ private: String first_primary_key_column; const StringSet table_columns; const Names queried_columns; + const NameSet sorting_key_names; const Block block_with_constants; Poco::Logger * log; std::unordered_map column_sizes; diff --git a/src/Storages/MergeTree/MergeTreeWriteAheadLog.cpp b/src/Storages/MergeTree/MergeTreeWriteAheadLog.cpp index 4ca20572e90..4c92d4f6136 100644 --- a/src/Storages/MergeTree/MergeTreeWriteAheadLog.cpp +++ b/src/Storages/MergeTree/MergeTreeWriteAheadLog.cpp @@ -30,7 +30,7 @@ MergeTreeWriteAheadLog::MergeTreeWriteAheadLog( , disk(disk_) , name(name_) , path(storage.getRelativeDataPath() + name_) - , pool(storage.global_context.getSchedulePool()) + , pool(storage.getContext()->getSchedulePool()) { init(); sync_task = pool.createTask("MergeTreeWriteAheadLog::sync", [this] diff --git a/src/Storages/MergeTree/MergedBlockOutputStream.cpp b/src/Storages/MergeTree/MergedBlockOutputStream.cpp index 1605ec693cb..ab364e0e5aa 100644 --- a/src/Storages/MergeTree/MergedBlockOutputStream.cpp +++ b/src/Storages/MergeTree/MergedBlockOutputStream.cpp @@ -26,7 +26,7 @@ MergedBlockOutputStream::MergedBlockOutputStream( , default_codec(default_codec_) { MergeTreeWriterSettings writer_settings( - storage.global_context.getSettings(), + storage.getContext()->getSettings(), storage.getSettings(), data_part->index_granularity_info.is_adaptive, /* rewrite_primary_key = */ true, @@ -91,6 +91,7 @@ void MergedBlockOutputStream::writeSuffixAndFinalizePart( new_part->calculateColumnsSizesOnDisk(); if (default_codec != nullptr) new_part->default_codec = default_codec; + new_part->storage.lockSharedData(*new_part); } void MergedBlockOutputStream::finalizePartOnDisk( @@ -185,7 +186,6 @@ void MergedBlockOutputStream::writeImpl(const Block & block, const IColumn::Perm return; writer->write(block, permutation); - rows_count += rows; } diff --git a/src/Storages/MergeTree/MergedColumnOnlyOutputStream.cpp 
b/src/Storages/MergeTree/MergedColumnOnlyOutputStream.cpp index 41479f104f3..298c550d496 100644 --- a/src/Storages/MergeTree/MergedColumnOnlyOutputStream.cpp +++ b/src/Storages/MergeTree/MergedColumnOnlyOutputStream.cpp @@ -21,7 +21,7 @@ MergedColumnOnlyOutputStream::MergedColumnOnlyOutputStream( : IMergedBlockOutputStream(data_part, metadata_snapshot_) , header(header_) { - const auto & global_settings = data_part->storage.global_context.getSettings(); + const auto & global_settings = data_part->storage.getContext()->getSettings(); const auto & storage_settings = data_part->storage.getSettings(); MergeTreeWriterSettings writer_settings( diff --git a/src/Storages/MergeTree/PartitionPruner.h b/src/Storages/MergeTree/PartitionPruner.h index 3cb7552c427..a4035087b89 100644 --- a/src/Storages/MergeTree/PartitionPruner.h +++ b/src/Storages/MergeTree/PartitionPruner.h @@ -21,7 +21,7 @@ private: using DataPartPtr = std::shared_ptr; public: - PartitionPruner(const KeyDescription & partition_key_, const SelectQueryInfo & query_info, const Context & context, bool strict) + PartitionPruner(const KeyDescription & partition_key_, const SelectQueryInfo & query_info, ContextPtr context, bool strict) : partition_key(partition_key_) , partition_condition( query_info, context, partition_key.column_names, partition_key.expression, true /* single_point */, strict) @@ -32,6 +32,8 @@ public: bool canBePruned(const DataPartPtr & part); bool isUseless() const { return useless; } + + const KeyCondition & getKeyCondition() const { return partition_condition; } }; } diff --git a/src/Storages/MergeTree/RPNBuilder.h b/src/Storages/MergeTree/RPNBuilder.h index 292a120d28a..d63781db67d 100644 --- a/src/Storages/MergeTree/RPNBuilder.h +++ b/src/Storages/MergeTree/RPNBuilder.h @@ -1,35 +1,34 @@ #pragma once -#include #include #include #include -#include #include -#include +#include #include +#include +#include namespace DB { -class Context; /// Builds reverse polish notation template -class RPNBuilder +class RPNBuilder : WithContext { public: using RPN = std::vector; using AtomFromASTFunc = std::function< - bool(const ASTPtr & node, const Context & context, Block & block_with_constants, RPNElement & out)>; + bool(const ASTPtr & node, ContextPtr context, Block & block_with_constants, RPNElement & out)>; - RPNBuilder(const SelectQueryInfo & query_info, const Context & context_, const AtomFromASTFunc & atomFromAST_) - : context(context_), atomFromAST(atomFromAST_) + RPNBuilder(const SelectQueryInfo & query_info, ContextPtr context_, const AtomFromASTFunc & atomFromAST_) + : WithContext(context_), atomFromAST(atomFromAST_) { /** Evaluation of expressions that depend only on constants. * For the index to be used, if it is written, for example `WHERE Date = toDate(now())`. */ - block_with_constants = KeyCondition::getBlockWithConstants(query_info.query, query_info.syntax_analyzer_result, context); + block_with_constants = KeyCondition::getBlockWithConstants(query_info.query, query_info.syntax_analyzer_result, getContext()); /// Transform WHERE section to Reverse Polish notation const ASTSelectQuery & select = typeid_cast(*query_info.query); @@ -80,7 +79,7 @@ private: } } - if (!atomFromAST(node, context, block_with_constants, element)) + if (!atomFromAST(node, getContext(), block_with_constants, element)) { element.function = RPNElement::FUNCTION_UNKNOWN; } @@ -91,6 +90,8 @@ private: bool operatorFromAST(const ASTFunction * func, RPNElement & out) { /// Functions AND, OR, NOT. 
+ /// Also a special function `indexHint` - works as if instead of calling a function there are just parentheses + /// (or, the same thing - calling the function `and` from one argument). const ASTs & args = typeid_cast(*func->arguments).children; if (func->name == "not") @@ -102,7 +103,7 @@ private: } else { - if (func->name == "and") + if (func->name == "and" || func->name == "indexHint") out.function = RPNElement::FUNCTION_AND; else if (func->name == "or") out.function = RPNElement::FUNCTION_OR; @@ -113,7 +114,6 @@ private: return true; } - const Context & context; const AtomFromASTFunc & atomFromAST; Block block_with_constants; RPN rpn; diff --git a/src/Storages/MergeTree/ReplicatedFetchList.cpp b/src/Storages/MergeTree/ReplicatedFetchList.cpp index 82bc8ae21e0..6d4451ad3e0 100644 --- a/src/Storages/MergeTree/ReplicatedFetchList.cpp +++ b/src/Storages/MergeTree/ReplicatedFetchList.cpp @@ -39,7 +39,6 @@ ReplicatedFetchInfo ReplicatedFetchListElement::getInfo() const res.source_replica_port = source_replica_port; res.interserver_scheme = interserver_scheme; res.uri = uri; - res.interserver_scheme = interserver_scheme; res.to_detached = to_detached; res.elapsed = watch.elapsedSeconds(); res.progress = progress.load(std::memory_order_relaxed); diff --git a/src/Storages/MergeTree/ReplicatedMergeTreeBlockOutputStream.cpp b/src/Storages/MergeTree/ReplicatedMergeTreeBlockOutputStream.cpp index 7046a510f75..df4f9124980 100644 --- a/src/Storages/MergeTree/ReplicatedMergeTreeBlockOutputStream.cpp +++ b/src/Storages/MergeTree/ReplicatedMergeTreeBlockOutputStream.cpp @@ -41,12 +41,14 @@ ReplicatedMergeTreeBlockOutputStream::ReplicatedMergeTreeBlockOutputStream( size_t max_parts_per_block_, bool quorum_parallel_, bool deduplicate_, - bool optimize_on_insert_) + bool optimize_on_insert_, + bool is_attach_) : storage(storage_) , metadata_snapshot(metadata_snapshot_) , quorum(quorum_) , quorum_timeout_ms(quorum_timeout_ms_) , max_parts_per_block(max_parts_per_block_) + , is_attach(is_attach_) , quorum_parallel(quorum_parallel_) , deduplicate(deduplicate_) , log(&Poco::Logger::get(storage.getLogName() + " (Replicated OutputStream)")) @@ -144,22 +146,18 @@ void ReplicatedMergeTreeBlockOutputStream::write(const Block & block) MergeTreeData::MutableDataPartPtr part = storage.writer.writeTempPart(current_block, metadata_snapshot, optimize_on_insert); + /// If optimize_on_insert setting is true, current_block could become empty after merge + /// and we didn't create part. + if (!part) + continue; + String block_id; if (deduplicate) { - SipHash hash; - part->checksums.computeTotalChecksumDataOnly(hash); - union - { - char bytes[16]; - UInt64 words[2]; - } hash_value; - hash.get128(hash_value.bytes); - /// We add the hash from the data and partition identifier to deduplication ID. /// That is, do not insert the same data to the same partition twice. - block_id = part->info.partition_id + "_" + toString(hash_value.words[0]) + "_" + toString(hash_value.words[1]); + block_id = part->getZeroLevelPartBlockID(); LOG_DEBUG(log, "Wrote block with ID '{}', {} rows", block_id, current_block.block.rows()); } @@ -174,11 +172,11 @@ void ReplicatedMergeTreeBlockOutputStream::write(const Block & block) /// Set a special error code if the block is duplicate int error = (deduplicate && last_block_is_duplicate) ? 
ErrorCodes::INSERT_WAS_DEDUPLICATED : 0; - PartLog::addNewPart(storage.global_context, part, watch.elapsed(), ExecutionStatus(error)); + PartLog::addNewPart(storage.getContext(), part, watch.elapsed(), ExecutionStatus(error)); } catch (...) { - PartLog::addNewPart(storage.global_context, part, watch.elapsed(), ExecutionStatus::fromCurrentException(__PRETTY_FUNCTION__)); + PartLog::addNewPart(storage.getContext(), part, watch.elapsed(), ExecutionStatus::fromCurrentException(__PRETTY_FUNCTION__)); throw; } } @@ -202,11 +200,11 @@ void ReplicatedMergeTreeBlockOutputStream::writeExistingPart(MergeTreeData::Muta try { commitPart(zookeeper, part, ""); - PartLog::addNewPart(storage.global_context, part, watch.elapsed()); + PartLog::addNewPart(storage.getContext(), part, watch.elapsed()); } catch (...) { - PartLog::addNewPart(storage.global_context, part, watch.elapsed(), ExecutionStatus::fromCurrentException(__PRETTY_FUNCTION__)); + PartLog::addNewPart(storage.getContext(), part, watch.elapsed(), ExecutionStatus::fromCurrentException(__PRETTY_FUNCTION__)); throw; } } @@ -258,10 +256,20 @@ void ReplicatedMergeTreeBlockOutputStream::commitPart( part->name = part->getNewName(part->info); - /// Will add log entry about new part. - StorageReplicatedMergeTree::LogEntry log_entry; - log_entry.type = StorageReplicatedMergeTree::LogEntry::GET_PART; + + if (is_attach) + { + log_entry.type = StorageReplicatedMergeTree::LogEntry::ATTACH_PART; + + /// We don't need to involve ZooKeeper to obtain the checksums as by the time we get + /// the MutableDataPartPtr here, we already have the data thus being able to + /// calculate the checksums. + log_entry.part_checksum = part->checksums.getTotalChecksumHex(); + } + else + log_entry.type = StorageReplicatedMergeTree::LogEntry::GET_PART; + log_entry.create_time = time(nullptr); log_entry.source_replica = storage.replica_name; log_entry.new_part_name = part->name; @@ -333,13 +341,28 @@ void ReplicatedMergeTreeBlockOutputStream::commitPart( /// If it exists on our replica, ignore it. if (storage.getActiveContainingPart(existing_part_name)) { - LOG_INFO(log, "Block with ID {} already exists locally as part {}; ignoring it.", block_id, existing_part_name); part->is_duplicate = true; last_block_is_duplicate = true; ProfileEvents::increment(ProfileEvents::DuplicatedInsertedBlocks); + if (quorum) + { + LOG_INFO(log, "Block with ID {} already exists locally as part {}; ignoring it, but checking quorum.", block_id, existing_part_name); + + std::string quorum_path; + if (quorum_parallel) + quorum_path = storage.zookeeper_path + "/quorum/parallel/" + existing_part_name; + else + quorum_path = storage.zookeeper_path + "/quorum/status"; + + waitForQuorum(zookeeper, existing_part_name, quorum_path, quorum_info.is_active_node_value); + } + else + { + LOG_INFO(log, "Block with ID {} already exists locally as part {}; ignoring it.", block_id, existing_part_name); + } + return; } - LOG_INFO(log, "Block with ID {} already exists on other replicas as part {}; will write it locally with that name.", block_id, existing_part_name); @@ -478,50 +501,7 @@ void ReplicatedMergeTreeBlockOutputStream::commitPart( storage.updateQuorum(part->name, false); } - /// We are waiting for quorum to be satisfied. - LOG_TRACE(log, "Waiting for quorum"); - - try - { - while (true) - { - zkutil::EventPtr event = std::make_shared(); - - std::string value; - /// `get` instead of `exists` so that `watch` does not leak if the node is no longer there. 
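A note on the deduplication change earlier in this hunk: write() now takes the block ID from part->getZeroLevelPartBlockID() instead of hashing inline, but the shape of the ID is unchanged: the partition ID plus the two 64-bit words of a 128-bit hash of the part's data. The sketch below only reproduces that shape; FNV-1a here is a stand-in for the SipHash used by ClickHouse.

```cpp
#include <cstdint>
#include <iostream>
#include <string>

// Stand-in 64-bit hash (FNV-1a); ClickHouse hashes the part's checksum data
// with SipHash to get 128 bits, i.e. two 64-bit words.
uint64_t fnv1a64(const std::string & data, uint64_t seed)
{
    uint64_t h = 14695981039346656037ULL ^ seed;
    for (unsigned char c : data)
    {
        h ^= c;
        h *= 1099511628211ULL;
    }
    return h;
}

// block_id = <partition_id>_<hash_word0>_<hash_word1>; inserting identical data
// into the same partition twice yields the same ID and is deduplicated.
std::string makeBlockID(const std::string & partition_id, const std::string & data)
{
    return partition_id + "_" + std::to_string(fnv1a64(data, 0))
                        + "_" + std::to_string(fnv1a64(data, 1));
}

int main()
{
    std::cout << makeBlockID("202104", "rows payload") << '\n';
    // Same partition + same data => same block ID => the duplicate insert is skipped.
    std::cout << (makeBlockID("202104", "rows payload")
               == makeBlockID("202104", "rows payload")) << '\n';   // 1
}
```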
- if (!zookeeper->tryGet(quorum_info.status_path, value, nullptr, event)) - break; - - LOG_TRACE(log, "Quorum node {} still exists, will wait for updates", quorum_info.status_path); - - ReplicatedMergeTreeQuorumEntry quorum_entry(value); - - /// If the node has time to disappear, and then appear again for the next insert. - if (quorum_entry.part_name != part->name) - break; - - if (!event->tryWait(quorum_timeout_ms)) - throw Exception("Timeout while waiting for quorum", ErrorCodes::TIMEOUT_EXCEEDED); - - LOG_TRACE(log, "Quorum {} updated, will check quorum node still exists", quorum_info.status_path); - } - - /// And what if it is possible that the current replica at this time has ceased to be active - /// and the quorum is marked as failed and deleted? - String value; - if (!zookeeper->tryGet(storage.replica_path + "/is_active", value, nullptr) - || value != quorum_info.is_active_node_value) - throw Exception("Replica become inactive while waiting for quorum", ErrorCodes::NO_ACTIVE_REPLICAS); - } - catch (...) - { - /// We do not know whether or not data has been inserted - /// - whether other replicas have time to download the part and mark the quorum as done. - throw Exception("Unknown status, client must retry. Reason: " + getCurrentExceptionMessage(false), - ErrorCodes::UNKNOWN_STATUS_OF_INSERT); - } - - LOG_TRACE(log, "Quorum satisfied"); + waitForQuorum(zookeeper, part->name, quorum_info.status_path, quorum_info.is_active_node_value); } } @@ -533,4 +513,57 @@ void ReplicatedMergeTreeBlockOutputStream::writePrefix() } +void ReplicatedMergeTreeBlockOutputStream::waitForQuorum( + zkutil::ZooKeeperPtr & zookeeper, + const std::string & part_name, + const std::string & quorum_path, + const std::string & is_active_node_value) const +{ + /// We are waiting for quorum to be satisfied. + LOG_TRACE(log, "Waiting for quorum"); + + try + { + while (true) + { + zkutil::EventPtr event = std::make_shared(); + + std::string value; + /// `get` instead of `exists` so that `watch` does not leak if the node is no longer there. + if (!zookeeper->tryGet(quorum_path, value, nullptr, event)) + break; + + LOG_TRACE(log, "Quorum node {} still exists, will wait for updates", quorum_path); + + ReplicatedMergeTreeQuorumEntry quorum_entry(value); + + /// If the node has time to disappear, and then appear again for the next insert. + if (quorum_entry.part_name != part_name) + break; + + if (!event->tryWait(quorum_timeout_ms)) + throw Exception("Timeout while waiting for quorum", ErrorCodes::TIMEOUT_EXCEEDED); + + LOG_TRACE(log, "Quorum {} updated, will check quorum node still exists", quorum_path); + } + + /// And what if it is possible that the current replica at this time has ceased to be active + /// and the quorum is marked as failed and deleted? + String value; + if (!zookeeper->tryGet(storage.replica_path + "/is_active", value, nullptr) + || value != is_active_node_value) + throw Exception("Replica become inactive while waiting for quorum", ErrorCodes::NO_ACTIVE_REPLICAS); + } + catch (...) + { + /// We do not know whether or not data has been inserted + /// - whether other replicas have time to download the part and mark the quorum as done. + throw Exception("Unknown status, client must retry. 
Reason: " + getCurrentExceptionMessage(false), + ErrorCodes::UNKNOWN_STATUS_OF_INSERT); + } + + LOG_TRACE(log, "Quorum satisfied"); +} + + } diff --git a/src/Storages/MergeTree/ReplicatedMergeTreeBlockOutputStream.h b/src/Storages/MergeTree/ReplicatedMergeTreeBlockOutputStream.h index 3ac2c4bcfcb..6ea16491d64 100644 --- a/src/Storages/MergeTree/ReplicatedMergeTreeBlockOutputStream.h +++ b/src/Storages/MergeTree/ReplicatedMergeTreeBlockOutputStream.h @@ -30,7 +30,10 @@ public: size_t max_parts_per_block_, bool quorum_parallel_, bool deduplicate_, - bool optimize_on_insert); + bool optimize_on_insert, + // special flag to determine the ALTER TABLE ATTACH PART without the query context, + // needed to set the special LogEntryType::ATTACH_PART + bool is_attach_ = false); Block getHeader() const override; void writePrefix() override; @@ -60,12 +63,19 @@ private: /// Rename temporary part and commit to ZooKeeper. void commitPart(zkutil::ZooKeeperPtr & zookeeper, MergeTreeData::MutableDataPartPtr & part, const String & block_id); + /// Wait for quorum to be satisfied on path (quorum_path) form part (part_name) + /// Also checks that replica still alive. + void waitForQuorum( + zkutil::ZooKeeperPtr & zookeeper, const std::string & part_name, + const std::string & quorum_path, const std::string & is_active_node_value) const; + StorageReplicatedMergeTree & storage; StorageMetadataPtr metadata_snapshot; size_t quorum; size_t quorum_timeout_ms; size_t max_parts_per_block; + bool is_attach = false; bool quorum_parallel = false; bool deduplicate = true; bool last_block_is_duplicate = false; diff --git a/src/Storages/MergeTree/ReplicatedMergeTreeCleanupThread.cpp b/src/Storages/MergeTree/ReplicatedMergeTreeCleanupThread.cpp index 701cb2fa1ed..d3496d99cef 100644 --- a/src/Storages/MergeTree/ReplicatedMergeTreeCleanupThread.cpp +++ b/src/Storages/MergeTree/ReplicatedMergeTreeCleanupThread.cpp @@ -24,7 +24,7 @@ ReplicatedMergeTreeCleanupThread::ReplicatedMergeTreeCleanupThread(StorageReplic , log_name(storage.getStorageID().getFullTableName() + " (ReplicatedMergeTreeCleanupThread)") , log(&Poco::Logger::get(log_name)) { - task = storage.global_context.getSchedulePool().createTask(log_name, [this]{ run(); }); + task = storage.getContext()->getSchedulePool().createTask(log_name, [this]{ run(); }); } void ReplicatedMergeTreeCleanupThread::run() @@ -307,8 +307,9 @@ struct ReplicatedMergeTreeCleanupThread::NodeWithStat { String node; Int64 ctime = 0; + Int32 version = 0; - NodeWithStat(String node_, Int64 ctime_) : node(std::move(node_)), ctime(ctime_) {} + NodeWithStat(String node_, Int64 ctime_, Int32 version_) : node(std::move(node_)), ctime(ctime_), version(version_) {} static bool greaterByTime(const NodeWithStat & lhs, const NodeWithStat & rhs) { @@ -334,7 +335,7 @@ void ReplicatedMergeTreeCleanupThread::clearOldBlocks() current_time - static_cast(1000 * storage_settings->replicated_deduplication_window_seconds)); /// Virtual node, all nodes that are "greater" than this one will be deleted - NodeWithStat block_threshold{{}, time_threshold}; + NodeWithStat block_threshold{{}, time_threshold, 0}; size_t current_deduplication_window = std::min(timed_blocks.size(), storage_settings->replicated_deduplication_window); auto first_outdated_block_fixed_threshold = timed_blocks.begin() + current_deduplication_window; @@ -342,11 +343,20 @@ void ReplicatedMergeTreeCleanupThread::clearOldBlocks() timed_blocks.begin(), timed_blocks.end(), block_threshold, NodeWithStat::greaterByTime); auto first_outdated_block = 
std::min(first_outdated_block_fixed_threshold, first_outdated_block_time_threshold); + auto num_nodes_to_delete = timed_blocks.end() - first_outdated_block; + if (!num_nodes_to_delete) + return; + + auto last_outdated_block = timed_blocks.end() - 1; + LOG_TRACE(log, "Will clear {} old blocks from {} (ctime {}) to {} (ctime {})", num_nodes_to_delete, + first_outdated_block->node, first_outdated_block->ctime, + last_outdated_block->node, last_outdated_block->ctime); + zkutil::AsyncResponses try_remove_futures; for (auto it = first_outdated_block; it != timed_blocks.end(); ++it) { String path = storage.zookeeper_path + "/blocks/" + it->node; - try_remove_futures.emplace_back(path, zookeeper->asyncTryRemove(path)); + try_remove_futures.emplace_back(path, zookeeper->asyncTryRemove(path, it->version)); } for (auto & pair : try_remove_futures) @@ -359,7 +369,7 @@ void ReplicatedMergeTreeCleanupThread::clearOldBlocks() zookeeper->removeRecursive(path); cached_block_stats.erase(first_outdated_block->node); } - else if (rc == Coordination::Error::ZOK || rc == Coordination::Error::ZNONODE) + else if (rc == Coordination::Error::ZOK || rc == Coordination::Error::ZNONODE || rc == Coordination::Error::ZBADVERSION) { /// No node is Ok. Another replica is removing nodes concurrently. /// Successfully removed blocks have to be removed from cache @@ -372,9 +382,7 @@ void ReplicatedMergeTreeCleanupThread::clearOldBlocks() first_outdated_block++; } - auto num_nodes_to_delete = timed_blocks.end() - first_outdated_block; - if (num_nodes_to_delete) - LOG_TRACE(log, "Cleared {} old blocks from ZooKeeper", num_nodes_to_delete); + LOG_TRACE(log, "Cleared {} old blocks from ZooKeeper", num_nodes_to_delete); } @@ -419,7 +427,8 @@ void ReplicatedMergeTreeCleanupThread::getBlocksSortedByTime(zkutil::ZooKeeper & else { /// Cached block - timed_blocks.emplace_back(block, it->second); + const auto & ctime_and_version = it->second; + timed_blocks.emplace_back(block, ctime_and_version.first, ctime_and_version.second); } } @@ -429,8 +438,8 @@ void ReplicatedMergeTreeCleanupThread::getBlocksSortedByTime(zkutil::ZooKeeper & auto status = elem.second.get(); if (status.error != Coordination::Error::ZNONODE) { - cached_block_stats.emplace(elem.first, status.stat.ctime); - timed_blocks.emplace_back(elem.first, status.stat.ctime); + cached_block_stats.emplace(elem.first, std::make_pair(status.stat.ctime, status.stat.version)); + timed_blocks.emplace_back(elem.first, status.stat.ctime, status.stat.version); } } diff --git a/src/Storages/MergeTree/ReplicatedMergeTreeCleanupThread.h b/src/Storages/MergeTree/ReplicatedMergeTreeCleanupThread.h index 520af888621..939a40db8c8 100644 --- a/src/Storages/MergeTree/ReplicatedMergeTreeCleanupThread.h +++ b/src/Storages/MergeTree/ReplicatedMergeTreeCleanupThread.h @@ -58,8 +58,8 @@ private: /// Remove old mutations that are done from ZooKeeper. This is done by the leader replica. void clearOldMutations(); - using NodeCTimeCache = std::map; - NodeCTimeCache cached_block_stats; + using NodeCTimeAndVersionCache = std::map>; + NodeCTimeAndVersionCache cached_block_stats; struct NodeWithStat; /// Returns list of blocks (with their stat) sorted by ctime in descending order. 
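The cleanup-thread hunk above caches each block node's ZooKeeper version alongside its ctime and removes with asyncTryRemove(path, version), treating ZBADVERSION like ZNONODE: if another replica removed and re-created the node in the meantime, the stale delete becomes a no-op instead of dropping a fresh deduplication block. A toy compare-and-delete against an in-memory map; the API is hypothetical, not zkutil.

```cpp
#include <iostream>
#include <map>
#include <string>

enum class Error { Ok, NoNode, BadVersion };

struct Node { std::string value; int version = 0; };
using Store = std::map<std::string, Node>;

// Delete only if the version still matches what we cached earlier,
// mirroring the intent of zookeeper->asyncTryRemove(path, version).
Error tryRemove(Store & store, const std::string & path, int expected_version)
{
    auto it = store.find(path);
    if (it == store.end())
        return Error::NoNode;               // another replica already removed it
    if (it->second.version != expected_version)
        return Error::BadVersion;           // node changed meanwhile: keep it
    store.erase(it);
    return Error::Ok;
}

int main()
{
    Store zk = {{"/blocks/abc", {"", 0}}};

    int cached_version = zk["/blocks/abc"].version;
    zk["/blocks/abc"].version = 1;          // concurrent update bumps the version

    Error rc = tryRemove(zk, "/blocks/abc", cached_version);
    std::cout << (rc == Error::BadVersion ? "kept (changed concurrently)" : "removed") << '\n';
}
```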
diff --git a/src/Storages/MergeTree/ReplicatedMergeTreeLogEntry.cpp b/src/Storages/MergeTree/ReplicatedMergeTreeLogEntry.cpp index 9a9f25fd470..7d8ba0e4a30 100644 --- a/src/Storages/MergeTree/ReplicatedMergeTreeLogEntry.cpp +++ b/src/Storages/MergeTree/ReplicatedMergeTreeLogEntry.cpp @@ -1,4 +1,5 @@ #include +#include "Access/IAccessEntity.h" #include #include @@ -52,6 +53,11 @@ void ReplicatedMergeTreeLogEntryData::writeText(WriteBuffer & out) const out << "get\n" << new_part_name; break; + case ATTACH_PART: + out << "attach\n" << new_part_name << "\n" + << "part_checksum: " << part_checksum; + break; + case MERGE_PARTS: out << "merge\n"; for (const String & s : source_parts) @@ -136,7 +142,7 @@ void ReplicatedMergeTreeLogEntryData::writeText(WriteBuffer & out) const break; default: - throw Exception("Unknown log entry type: " + DB::toString(type), ErrorCodes::LOGICAL_ERROR); + throw Exception(ErrorCodes::LOGICAL_ERROR, "Unknown log entry type: {}", static_cast(type)); } out << '\n'; @@ -156,13 +162,16 @@ void ReplicatedMergeTreeLogEntryData::readText(ReadBuffer & in) in >> "format version: " >> format_version >> "\n"; if (format_version < 1 || format_version >= FORMAT_LAST) - throw Exception("Unknown ReplicatedMergeTreeLogEntry format version: " + DB::toString(format_version), ErrorCodes::UNKNOWN_FORMAT_VERSION); + throw Exception(ErrorCodes::UNKNOWN_FORMAT_VERSION, "Unknown ReplicatedMergeTreeLogEntry format version: {}", + DB::toString(format_version)); if (format_version >= FORMAT_WITH_CREATE_TIME) { LocalDateTime create_time_dt; in >> "create_time: " >> create_time_dt >> "\n"; - create_time = create_time_dt; + create_time = DateLUT::instance().makeDateTime( + create_time_dt.year(), create_time_dt.month(), create_time_dt.day(), + create_time_dt.hour(), create_time_dt.minute(), create_time_dt.second()); } in >> "source replica: " >> source_replica >> "\n"; @@ -175,11 +184,17 @@ void ReplicatedMergeTreeLogEntryData::readText(ReadBuffer & in) in >> type_str >> "\n"; bool trailing_newline_found = false; + if (type_str == "get") { type = GET_PART; in >> new_part_name; } + else if (type_str == "attach") + { + type = ATTACH_PART; + in >> new_part_name >> "\npart_checksum: " >> part_checksum; + } else if (type_str == "merge") { type = MERGE_PARTS; diff --git a/src/Storages/MergeTree/ReplicatedMergeTreeLogEntry.h b/src/Storages/MergeTree/ReplicatedMergeTreeLogEntry.h index cdf5a40d5a9..309120560e7 100644 --- a/src/Storages/MergeTree/ReplicatedMergeTreeLogEntry.h +++ b/src/Storages/MergeTree/ReplicatedMergeTreeLogEntry.h @@ -6,6 +6,7 @@ #include #include #include +#include #include #include @@ -31,6 +32,8 @@ struct ReplicatedMergeTreeLogEntryData { EMPTY, /// Not used. GET_PART, /// Get the part from another replica. + ATTACH_PART, /// Attach the part, possibly from our own replica (if found in /detached folder). + /// You may think of it as a GET_PART with some optimisations as they're nearly identical. MERGE_PARTS, /// Merge the parts. DROP_RANGE, /// Delete the parts in the specified partition in the specified number range. CLEAR_COLUMN, /// NOTE: Deprecated. Drop specific column from specified partition. 
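The writeText()/readText() hunks above give the new ATTACH_PART entry a plain-text replication-log representation: the literal `attach`, the part name, and a `part_checksum:` line. A sketch of round-tripping that shape with standard streams; the parsing here is simplified and does not reproduce ClickHouse's ReadBuffer machinery.

```cpp
#include <iostream>
#include <sstream>
#include <string>

struct Entry { std::string part_name; std::string part_checksum; };

// Serialize the way writeText() does for ATTACH_PART:
//   attach\n<part name>\npart_checksum: <hex>\n
std::string writeAttach(const Entry & e)
{
    std::ostringstream out;
    out << "attach\n" << e.part_name << "\n"
        << "part_checksum: " << e.part_checksum << "\n";
    return out.str();
}

// Parse it back, as readText() does for the "attach" entry type.
Entry readAttach(const std::string & text)
{
    std::istringstream in(text);
    std::string type, label;
    Entry e;
    in >> type >> e.part_name >> label >> e.part_checksum;   // label == "part_checksum:"
    return e;
}

int main()
{
    Entry e{"20210401_0_0_0", "a1b2c3d4"};
    Entry parsed = readAttach(writeAttach(e));
    std::cout << parsed.part_name << ' ' << parsed.part_checksum << '\n';
}
```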
@@ -45,6 +48,7 @@ struct ReplicatedMergeTreeLogEntryData switch (type) { case ReplicatedMergeTreeLogEntryData::GET_PART: return "GET_PART"; + case ReplicatedMergeTreeLogEntryData::ATTACH_PART: return "ATTACH_PART"; case ReplicatedMergeTreeLogEntryData::MERGE_PARTS: return "MERGE_PARTS"; case ReplicatedMergeTreeLogEntryData::DROP_RANGE: return "DROP_RANGE"; case ReplicatedMergeTreeLogEntryData::CLEAR_COLUMN: return "CLEAR_COLUMN"; @@ -71,6 +75,8 @@ struct ReplicatedMergeTreeLogEntryData Type type = EMPTY; String source_replica; /// Empty string means that this entry was added to the queue immediately, and not copied from the log. + String part_checksum; /// Part checksum for ATTACH_PART, empty otherwise. + /// The name of resulting part for GET_PART and MERGE_PARTS /// Part range for DROP_RANGE and CLEAR_COLUMN String new_part_name; diff --git a/src/Storages/MergeTree/ReplicatedMergeTreeMergeStrategyPicker.cpp b/src/Storages/MergeTree/ReplicatedMergeTreeMergeStrategyPicker.cpp index d90183abd95..65da6080e86 100644 --- a/src/Storages/MergeTree/ReplicatedMergeTreeMergeStrategyPicker.cpp +++ b/src/Storages/MergeTree/ReplicatedMergeTreeMergeStrategyPicker.cpp @@ -56,6 +56,17 @@ bool ReplicatedMergeTreeMergeStrategyPicker::shouldMergeOnSingleReplica(const Re } +bool ReplicatedMergeTreeMergeStrategyPicker::shouldMergeOnSingleReplicaS3Shared(const ReplicatedMergeTreeLogEntryData & entry) const +{ + time_t threshold = s3_execute_merges_on_single_replica_time_threshold; + return ( + threshold > 0 /// feature turned on + && entry.type == ReplicatedMergeTreeLogEntry::MERGE_PARTS /// it is a merge log entry + && entry.create_time + threshold > time(nullptr) /// not too much time waited + ); +} + + /// that will return the same replica name for ReplicatedMergeTreeLogEntry on all the replicas (if the replica set is the same). /// that way each replica knows who is responsible for doing a certain merge. 
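shouldMergeOnSingleReplicaS3Shared() above gates the same mechanism already used for execute_merges_on_single_replica_time_threshold: every replica hashes the log entry and maps it onto the shared list of active replicas, so all of them independently agree on who executes the merge. A sketch of that deterministic pick; the hash choice is illustrative only.

```cpp
#include <cstdint>
#include <functional>
#include <iostream>
#include <optional>
#include <string>
#include <vector>

// Every replica evaluates this with the same sorted list of active replicas and
// the same entry, so they agree on the executor without extra coordination.
// std::hash is illustrative; in practice the hash must be stable across
// binaries and replicas (ClickHouse hashes the entry for that reason).
std::optional<std::string> pickReplicaToExecuteMerge(
    const std::vector<std::string> & active_replicas_sorted,
    const std::string & entry_new_part_name)
{
    if (active_replicas_sorted.empty())
        return std::nullopt;
    std::uint64_t h = std::hash<std::string>{}(entry_new_part_name);
    return active_replicas_sorted[h % active_replicas_sorted.size()];
}

int main()
{
    std::vector<std::string> replicas = {"replica1", "replica2", "replica3"};
    if (auto chosen = pickReplicaToExecuteMerge(replicas, "all_0_5_1"))
        std::cout << "merge assigned to: " << *chosen << '\n';
    // The others postpone the merge and try to fetch the result, falling back
    // to merging locally once the time threshold expires.
}
```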
@@ -90,18 +101,23 @@ std::optional ReplicatedMergeTreeMergeStrategyPicker::pickReplicaToExecu void ReplicatedMergeTreeMergeStrategyPicker::refreshState() { auto threshold = storage.getSettings()->execute_merges_on_single_replica_time_threshold.totalSeconds(); + auto threshold_s3 = 0; + if (storage.getSettings()->allow_s3_zero_copy_replication) + threshold_s3 = storage.getSettings()->s3_execute_merges_on_single_replica_time_threshold.totalSeconds(); if (threshold == 0) - { /// we can reset the settings w/o lock (it's atomic) execute_merges_on_single_replica_time_threshold = threshold; + if (threshold_s3 == 0) + s3_execute_merges_on_single_replica_time_threshold = threshold_s3; + if (threshold == 0 && threshold_s3 == 0) return; - } auto now = time(nullptr); /// the setting was already enabled, and last state refresh was done recently - if (execute_merges_on_single_replica_time_threshold != 0 + if (((threshold != 0 && execute_merges_on_single_replica_time_threshold != 0) + || (threshold_s3 != 0 && s3_execute_merges_on_single_replica_time_threshold != 0)) && now - last_refresh_time < REFRESH_STATE_MINIMUM_INTERVAL_SECONDS) return; @@ -130,11 +146,15 @@ void ReplicatedMergeTreeMergeStrategyPicker::refreshState() LOG_WARNING(storage.log, "Can't find current replica in the active replicas list, or too few active replicas to use execute_merges_on_single_replica_time_threshold!"); /// we can reset the settings w/o lock (it's atomic) execute_merges_on_single_replica_time_threshold = 0; + s3_execute_merges_on_single_replica_time_threshold = 0; return; } std::lock_guard lock(mutex); - execute_merges_on_single_replica_time_threshold = threshold; + if (threshold != 0) /// Zeros already reset + execute_merges_on_single_replica_time_threshold = threshold; + if (threshold_s3 != 0) + s3_execute_merges_on_single_replica_time_threshold = threshold_s3; last_refresh_time = now; current_replica_index = current_replica_index_tmp; active_replicas = active_replicas_tmp; diff --git a/src/Storages/MergeTree/ReplicatedMergeTreeMergeStrategyPicker.h b/src/Storages/MergeTree/ReplicatedMergeTreeMergeStrategyPicker.h index 02a760d1ace..8adf206676a 100644 --- a/src/Storages/MergeTree/ReplicatedMergeTreeMergeStrategyPicker.h +++ b/src/Storages/MergeTree/ReplicatedMergeTreeMergeStrategyPicker.h @@ -52,6 +52,10 @@ public: /// and we may need to do a fetch (or postpone) instead of merge bool shouldMergeOnSingleReplica(const ReplicatedMergeTreeLogEntryData & entry) const; + /// return true if s3_execute_merges_on_single_replica_time_threshold feature is active + /// and we may need to do a fetch (or postpone) instead of merge + bool shouldMergeOnSingleReplicaS3Shared(const ReplicatedMergeTreeLogEntryData & entry) const; + /// returns the replica name /// and it's not current replica should do the merge /// used in shouldExecuteLogEntry and in tryExecuteMerge @@ -68,6 +72,7 @@ private: uint64_t getEntryHash(const ReplicatedMergeTreeLogEntryData & entry) const; std::atomic execute_merges_on_single_replica_time_threshold = 0; + std::atomic s3_execute_merges_on_single_replica_time_threshold = 0; std::atomic last_refresh_time = 0; std::mutex mutex; diff --git a/src/Storages/MergeTree/ReplicatedMergeTreeMutationEntry.cpp b/src/Storages/MergeTree/ReplicatedMergeTreeMutationEntry.cpp index b2299b2cbbd..c617befe9c4 100644 --- a/src/Storages/MergeTree/ReplicatedMergeTreeMutationEntry.cpp +++ b/src/Storages/MergeTree/ReplicatedMergeTreeMutationEntry.cpp @@ -37,7 +37,9 @@ void ReplicatedMergeTreeMutationEntry::readText(ReadBuffer & in) 
LocalDateTime create_time_dt; in >> "create time: " >> create_time_dt >> "\n"; - create_time = create_time_dt; + create_time = DateLUT::instance().makeDateTime( + create_time_dt.year(), create_time_dt.month(), create_time_dt.day(), + create_time_dt.hour(), create_time_dt.minute(), create_time_dt.second()); in >> "source replica: " >> source_replica >> "\n"; diff --git a/src/Storages/MergeTree/ReplicatedMergeTreePartCheckThread.cpp b/src/Storages/MergeTree/ReplicatedMergeTreePartCheckThread.cpp index b2a144ca748..09b2a23767c 100644 --- a/src/Storages/MergeTree/ReplicatedMergeTreePartCheckThread.cpp +++ b/src/Storages/MergeTree/ReplicatedMergeTreePartCheckThread.cpp @@ -28,7 +28,7 @@ ReplicatedMergeTreePartCheckThread::ReplicatedMergeTreePartCheckThread(StorageRe , log_name(storage.getStorageID().getFullTableName() + " (ReplicatedMergeTreePartCheckThread)") , log(&Poco::Logger::get(log_name)) { - task = storage.global_context.getSchedulePool().createTask(log_name, [this] { run(); }); + task = storage.getContext()->getSchedulePool().createTask(log_name, [this] { run(); }); task->schedule(); } @@ -213,7 +213,7 @@ std::pair ReplicatedMergeTreePartCheckThread::findLo /// because our checks of local storage and zookeeper are not consistent. /// If part exists in zookeeper and doesn't exists in local storage definitely require /// to fetch this part. But if we check local storage first and than check zookeeper - /// some background process can successfully commit part between this checks (both to the local stoarge and zookeeper), + /// some background process can successfully commit part between this checks (both to the local storage and zookeeper), /// but checker thread will remove part from zookeeper and queue fetch. bool exists_in_zookeeper = zookeeper->exists(part_path); @@ -234,6 +234,8 @@ CheckResult ReplicatedMergeTreePartCheckThread::checkPart(const String & part_na auto [exists_in_zookeeper, part] = findLocalPart(part_name); + LOG_TRACE(log, "Part {} in zookeeper: {}, locally: {}", part_name, exists_in_zookeeper, part != nullptr); + /// We do not have this or a covering part. if (!part) { @@ -250,6 +252,9 @@ CheckResult ReplicatedMergeTreePartCheckThread::checkPart(const String & part_na auto local_part_header = ReplicatedMergeTreePartHeader::fromColumnsAndChecksums( part->getColumns(), part->checksums); + /// The double get scheme is needed to retain compatibility with very old parts that were created + /// before the ReplicatedMergeTreePartHeader was introduced. + String part_path = storage.replica_path + "/parts/" + part_name; String part_znode; /// If the part is in ZooKeeper, check its data with its checksums, and them with ZooKeeper. diff --git a/src/Storages/MergeTree/ReplicatedMergeTreeQueue.cpp b/src/Storages/MergeTree/ReplicatedMergeTreeQueue.cpp index 26a916d2356..ad41bbe1a08 100644 --- a/src/Storages/MergeTree/ReplicatedMergeTreeQueue.cpp +++ b/src/Storages/MergeTree/ReplicatedMergeTreeQueue.cpp @@ -145,7 +145,7 @@ void ReplicatedMergeTreeQueue::insertUnlocked( else queue.push_front(entry); - if (entry->type == LogEntry::GET_PART) + if (entry->type == LogEntry::GET_PART || entry->type == LogEntry::ATTACH_PART) { inserts_by_time.insert(entry); @@ -184,7 +184,7 @@ void ReplicatedMergeTreeQueue::updateStateOnQueueEntryRemoval( std::unique_lock & state_lock) { /// Update insert times. 
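On the create_time handling earlier in this span (here and in ReplicatedMergeTreeLogEntry::readText): the parsed LocalDateTime is now converted field by field through DateLUT::instance().makeDateTime() rather than by implicit conversion. A stand-in using only the standard library; note std::mktime applies the process-local time zone, whereas DateLUT applies the server's configured one.

```cpp
#include <ctime>
#include <iostream>

// Rebuild a time_t from broken-down fields, analogous to the diff's
// DateLUT::instance().makeDateTime(year, month, day, hour, minute, second).
std::time_t makeDateTime(int year, int month, int day, int hour, int minute, int second)
{
    std::tm tm{};
    tm.tm_year = year - 1900;   // tm_year counts from 1900
    tm.tm_mon = month - 1;      // tm_mon is zero-based
    tm.tm_mday = day;
    tm.tm_hour = hour;
    tm.tm_min = minute;
    tm.tm_sec = second;
    tm.tm_isdst = -1;           // let the C library determine DST
    return std::mktime(&tm);
}

int main()
{
    std::time_t t = makeDateTime(2021, 4, 12, 10, 30, 0);
    std::cout << "epoch seconds: " << static_cast<long long>(t) << '\n';
}
```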
- if (entry->type == LogEntry::GET_PART) + if (entry->type == LogEntry::GET_PART || entry->type == LogEntry::ATTACH_PART) { inserts_by_time.erase(entry); @@ -563,7 +563,7 @@ int32_t ReplicatedMergeTreeQueue::pullLogsToQueue(zkutil::ZooKeeperPtr zookeeper replica_path + "/queue/queue-", res.data, zkutil::CreateMode::PersistentSequential)); const auto & entry = *copied_entries.back(); - if (entry.type == LogEntry::GET_PART) + if (entry.type == LogEntry::GET_PART || entry.type == LogEntry::ATTACH_PART) { std::lock_guard state_lock(state_mutex); if (entry.create_time && (!min_unprocessed_insert_time || entry.create_time < min_unprocessed_insert_time)) @@ -871,7 +871,12 @@ ReplicatedMergeTreeQueue::StringSet ReplicatedMergeTreeQueue::moveSiblingPartsFo if (it0 == merge_entry) break; - if (((*it0)->type == LogEntry::MERGE_PARTS || (*it0)->type == LogEntry::GET_PART || (*it0)->type == LogEntry::MUTATE_PART) + const auto t = (*it0)->type; + + if ((t == LogEntry::MERGE_PARTS || + t == LogEntry::GET_PART || + t == LogEntry::ATTACH_PART || + t == LogEntry::MUTATE_PART) && parts_for_merge.count((*it0)->new_part_name)) { queue.splice(queue.end(), queue, it0, it); @@ -921,7 +926,10 @@ void ReplicatedMergeTreeQueue::removePartProducingOpsInRange( { auto type = (*it)->type; - if (((type == LogEntry::GET_PART || type == LogEntry::MERGE_PARTS || type == LogEntry::MUTATE_PART) + if (((type == LogEntry::GET_PART || + type == LogEntry::ATTACH_PART || + type == LogEntry::MERGE_PARTS || + type == LogEntry::MUTATE_PART) && part_info.contains(MergeTreePartInfo::fromPartName((*it)->new_part_name, format_version))) || checkReplaceRangeCanBeRemoved(part_info, *it, current)) { @@ -1066,6 +1074,7 @@ bool ReplicatedMergeTreeQueue::shouldExecuteLogEntry( /// some other entry which is currently executing, then we can postpone this entry. if (entry.type == LogEntry::MERGE_PARTS || entry.type == LogEntry::GET_PART + || entry.type == LogEntry::ATTACH_PART || entry.type == LogEntry::MUTATE_PART) { for (const String & new_part_name : entry.getBlockingPartNames()) @@ -1076,7 +1085,8 @@ bool ReplicatedMergeTreeQueue::shouldExecuteLogEntry( } /// Check that fetches pool is not overloaded - if (entry.type == LogEntry::GET_PART && !storage.canExecuteFetch(entry, out_postpone_reason)) + if ((entry.type == LogEntry::GET_PART || entry.type == LogEntry::ATTACH_PART) + && !storage.canExecuteFetch(entry, out_postpone_reason)) { /// Don't print log message about this, because we can have a lot of fetches, /// for example during replica recovery. 
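The queue hunks above extend each GET_PART check with ATTACH_PART, since the new entry type behaves like a fetchable part-producing entry everywhere except in how the part may be found (the local detached/ folder is tried first). A hypothetical helper, not present in the diff, that expresses the repeated predicate once:

```cpp
#include <iostream>

enum class LogEntryType { EMPTY, GET_PART, ATTACH_PART, MERGE_PARTS, MUTATE_PART, DROP_RANGE };

// Hypothetical helper: both entry types produce a new part that may have to be
// fetched, so insert-time tracking and fetch-pool limits treat them alike.
bool producesPartViaFetch(LogEntryType type)
{
    return type == LogEntryType::GET_PART || type == LogEntryType::ATTACH_PART;
}

int main()
{
    std::cout << producesPartViaFetch(LogEntryType::ATTACH_PART) << '\n';  // 1
    std::cout << producesPartViaFetch(LogEntryType::MERGE_PARTS) << '\n';  // 0
}
```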
@@ -1643,7 +1653,7 @@ ReplicatedMergeTreeQueue::Status ReplicatedMergeTreeQueue::getStatus() const if (entry->create_time && (!res.queue_oldest_time || entry->create_time < res.queue_oldest_time)) res.queue_oldest_time = entry->create_time; - if (entry->type == LogEntry::GET_PART) + if (entry->type == LogEntry::GET_PART || entry->type == LogEntry::ATTACH_PART) { ++res.inserts_in_queue; diff --git a/src/Storages/MergeTree/ReplicatedMergeTreeRestartingThread.cpp b/src/Storages/MergeTree/ReplicatedMergeTreeRestartingThread.cpp index b3cb7c92def..ca6ea3103d1 100644 --- a/src/Storages/MergeTree/ReplicatedMergeTreeRestartingThread.cpp +++ b/src/Storages/MergeTree/ReplicatedMergeTreeRestartingThread.cpp @@ -47,7 +47,7 @@ ReplicatedMergeTreeRestartingThread::ReplicatedMergeTreeRestartingThread(Storage const auto storage_settings = storage.getSettings(); check_period_ms = storage_settings->zookeeper_session_expiration_check_period.totalSeconds() * 1000; - task = storage.global_context.getSchedulePool().createTask(log_name, [this]{ run(); }); + task = storage.getContext()->getSchedulePool().createTask(log_name, [this]{ run(); }); } void ReplicatedMergeTreeRestartingThread::run() diff --git a/src/Storages/MergeTree/ReplicatedMergeTreeTableMetadata.cpp b/src/Storages/MergeTree/ReplicatedMergeTreeTableMetadata.cpp index ac1c92849d5..de72ad1168b 100644 --- a/src/Storages/MergeTree/ReplicatedMergeTreeTableMetadata.cpp +++ b/src/Storages/MergeTree/ReplicatedMergeTreeTableMetadata.cpp @@ -205,7 +205,7 @@ void ReplicatedMergeTreeTableMetadata::checkImmutableFieldsEquals(const Replicat } -void ReplicatedMergeTreeTableMetadata::checkEquals(const ReplicatedMergeTreeTableMetadata & from_zk, const ColumnsDescription & columns, const Context & context) const +void ReplicatedMergeTreeTableMetadata::checkEquals(const ReplicatedMergeTreeTableMetadata & from_zk, const ColumnsDescription & columns, ContextPtr context) const { checkImmutableFieldsEquals(from_zk); diff --git a/src/Storages/MergeTree/ReplicatedMergeTreeTableMetadata.h b/src/Storages/MergeTree/ReplicatedMergeTreeTableMetadata.h index c1c34637664..f398547e992 100644 --- a/src/Storages/MergeTree/ReplicatedMergeTreeTableMetadata.h +++ b/src/Storages/MergeTree/ReplicatedMergeTreeTableMetadata.h @@ -63,7 +63,7 @@ struct ReplicatedMergeTreeTableMetadata } }; - void checkEquals(const ReplicatedMergeTreeTableMetadata & from_zk, const ColumnsDescription & columns, const Context & context) const; + void checkEquals(const ReplicatedMergeTreeTableMetadata & from_zk, const ColumnsDescription & columns, ContextPtr context) const; Diff checkAndFindDiff(const ReplicatedMergeTreeTableMetadata & from_zk) const; diff --git a/src/Storages/MergeTree/StorageFromMergeTreeDataPart.h b/src/Storages/MergeTree/StorageFromMergeTreeDataPart.h index 1d011effc69..9f1a28a1522 100644 --- a/src/Storages/MergeTree/StorageFromMergeTreeDataPart.h +++ b/src/Storages/MergeTree/StorageFromMergeTreeDataPart.h @@ -4,6 +4,8 @@ #include #include #include +#include +#include #include #include @@ -24,7 +26,7 @@ public: const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum /*processed_stage*/, size_t max_block_size, unsigned num_streams) override @@ -33,14 +35,14 @@ public: std::move(*MergeTreeDataSelectExecutor(part->storage) .readFromParts({part}, column_names, metadata_snapshot, query_info, context, max_block_size, num_streams)); - return 
query_plan.convertToPipe(QueryPlanOptimizationSettings(context.getSettingsRef())); + return query_plan.convertToPipe(QueryPlanOptimizationSettings::fromContext(context), BuildQueryPipelineSettings::fromContext(context)); } bool supportsIndexForIn() const override { return true; } bool mayBenefitFromIndexForIn( - const ASTPtr & left_in_operand, const Context & query_context, const StorageMetadataPtr & metadata_snapshot) const override + const ASTPtr & left_in_operand, ContextPtr query_context, const StorageMetadataPtr & metadata_snapshot) const override { return part->storage.mayBenefitFromIndexForIn(left_in_operand, query_context, metadata_snapshot); } @@ -55,7 +57,7 @@ public: return part->info.partition_id; } - String getPartitionIDFromQuery(const ASTPtr & ast, const Context & context) const + String getPartitionIDFromQuery(const ASTPtr & ast, ContextPtr context) const { return part->storage.getPartitionIDFromQuery(ast, context); } diff --git a/src/Storages/MergeTree/checkDataPart.cpp b/src/Storages/MergeTree/checkDataPart.cpp index c9da156dc97..ac28f84db43 100644 --- a/src/Storages/MergeTree/checkDataPart.cpp +++ b/src/Storages/MergeTree/checkDataPart.cpp @@ -120,9 +120,15 @@ IMergeTreeDataPart::Checksums checkDataPart( { for (const auto & column : columns_list) { - column.type->enumerateStreams([&](const IDataType::SubstreamPath & substream_path, const IDataType & /* substream_type */) + auto serialization = IDataType::getSerialization(column, + [&](const String & stream_name) + { + return disk->exists(stream_name + IMergeTreeDataPart::DATA_FILE_EXTENSION); + }); + + serialization->enumerateStreams([&](const ISerialization::SubstreamPath & substream_path) { - String file_name = IDataType::getFileNameForStream(column, substream_path) + ".bin"; + String file_name = ISerialization::getFileNameForStream(column, substream_path) + ".bin"; checksums_data.files[file_name] = checksum_compressed_file(disk, path + file_name); }, {}); } diff --git a/src/Storages/MergeTree/tests/CMakeLists.txt b/src/Storages/MergeTree/examples/CMakeLists.txt similarity index 100% rename from src/Storages/MergeTree/tests/CMakeLists.txt rename to src/Storages/MergeTree/examples/CMakeLists.txt diff --git a/src/Storages/MergeTree/tests/wal_action_metadata.cpp b/src/Storages/MergeTree/examples/wal_action_metadata.cpp similarity index 100% rename from src/Storages/MergeTree/tests/wal_action_metadata.cpp rename to src/Storages/MergeTree/examples/wal_action_metadata.cpp diff --git a/src/Storages/MergeTree/registerStorageMergeTree.cpp b/src/Storages/MergeTree/registerStorageMergeTree.cpp index 31687abf2f3..0cff7e00bc6 100644 --- a/src/Storages/MergeTree/registerStorageMergeTree.cpp +++ b/src/Storages/MergeTree/registerStorageMergeTree.cpp @@ -20,6 +20,7 @@ #include #include +#include namespace DB @@ -183,9 +184,9 @@ appendGraphitePattern(const Poco::Util::AbstractConfiguration & config, const St patterns.emplace_back(pattern); } -static void setGraphitePatternsFromConfig(const Context & context, const String & config_element, Graphite::Params & params) +static void setGraphitePatternsFromConfig(ContextPtr context, const String & config_element, Graphite::Params & params) { - const auto & config = context.getConfigRef(); + const auto & config = context->getConfigRef(); if (!config.has(config_element)) throw Exception("No '" + config_element + "' element in configuration file", ErrorCodes::NO_ELEMENTS_IN_CONFIG); @@ -410,6 +411,35 @@ static StoragePtr create(const StorageFactory::Arguments & args) throw Exception(msg, 
ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); } + if (is_extended_storage_def) + { + /// Allow expressions in engine arguments. + /// In the new syntax an argument can be a literal, an identifier, or an array/tuple of identifiers. + size_t arg_idx = 0; + try + { + for (; arg_idx < engine_args.size(); ++arg_idx) + { + auto & arg = engine_args[arg_idx]; + auto * arg_func = arg->as(); + if (!arg_func) + continue; + + /// If we got ASTFunction, let's evaluate it and replace with ASTLiteral. + /// Do not try to evaluate an array or tuple, because it is an array or tuple of column identifiers. + if (arg_func->name == "array" || arg_func->name == "tuple") + continue; + Field value = evaluateConstantExpression(arg, args.getLocalContext()).first; + arg = std::make_shared(value); + } + } + catch (Exception & e) + { + throw Exception(ErrorCodes::BAD_ARGUMENTS, "Cannot evaluate engine argument {}: {} {}", + arg_idx, e.message(), getMergeTreeVerboseHelp(is_extended_storage_def)); + } + } + /// For Replicated. String zookeeper_path; String replica_name; @@ -451,9 +481,9 @@ static StoragePtr create(const StorageFactory::Arguments & args) { /// Try to use default values if arguments are not specified. /// Note: {uuid} macro works for ON CLUSTER queries when database engine is Atomic. - zookeeper_path = args.context.getConfigRef().getString("default_replica_path", "/clickhouse/tables/{uuid}/{shard}"); + zookeeper_path = args.getContext()->getConfigRef().getString("default_replica_path", "/clickhouse/tables/{uuid}/{shard}"); /// TODO maybe use hostname if {replica} is not defined? - replica_name = args.context.getConfigRef().getString("default_replica_name", "{replica}"); + replica_name = args.getContext()->getConfigRef().getString("default_replica_name", "{replica}"); /// Modify query, so default values will be written to metadata assert(arg_num == 0); @@ -473,8 +503,8 @@ static StoragePtr create(const StorageFactory::Arguments & args) throw Exception("Expected two string literal arguments: zookeeper_path and replica_name", ErrorCodes::BAD_ARGUMENTS); /// Allow implicit {uuid} macros only for zookeeper_path in ON CLUSTER queries - bool is_on_cluster = args.local_context.getClientInfo().query_kind == ClientInfo::QueryKind::SECONDARY_QUERY; - bool is_replicated_database = args.local_context.getClientInfo().query_kind == ClientInfo::QueryKind::SECONDARY_QUERY && + bool is_on_cluster = args.getLocalContext()->getClientInfo().query_kind == ClientInfo::QueryKind::SECONDARY_QUERY; + bool is_replicated_database = args.getLocalContext()->getClientInfo().query_kind == ClientInfo::QueryKind::SECONDARY_QUERY && DatabaseCatalog::instance().getDatabase(args.table_id.database_name)->getEngineName() == "Replicated"; bool allow_uuid_macro = is_on_cluster || is_replicated_database || args.query.attach; @@ -491,11 +521,11 @@ static StoragePtr create(const StorageFactory::Arguments & args) info.table_id = args.table_id; if (!allow_uuid_macro) info.table_id.uuid = UUIDHelpers::Nil; - zookeeper_path = args.context.getMacros()->expand(zookeeper_path, info); + zookeeper_path = args.getContext()->getMacros()->expand(zookeeper_path, info); info.level = 0; info.table_id.uuid = UUIDHelpers::Nil; - replica_name = args.context.getMacros()->expand(replica_name, info); + replica_name = args.getContext()->getMacros()->expand(replica_name, info); } ast_zk_path->value = zookeeper_path; @@ -507,11 +537,11 @@ static StoragePtr create(const StorageFactory::Arguments & args) info.table_id = args.table_id; if (!allow_uuid_macro) info.table_id.uuid = UUIDHelpers::Nil; -
zookeeper_path = args.context.getMacros()->expand(zookeeper_path, info); + zookeeper_path = args.getContext()->getMacros()->expand(zookeeper_path, info); info.level = 0; info.table_id.uuid = UUIDHelpers::Nil; - replica_name = args.context.getMacros()->expand(replica_name, info); + replica_name = args.getContext()->getMacros()->expand(replica_name, info); /// We do not allow renaming table with these macros in metadata, because zookeeper_path will be broken after RENAME TABLE. /// NOTE: it may happen if table was created by an older version of ClickHouse (< 20.10) and macros were not unfolded on table creation @@ -570,7 +600,7 @@ static StoragePtr create(const StorageFactory::Arguments & args) throw Exception(error_msg, ErrorCodes::BAD_ARGUMENTS); --arg_cnt; - setGraphitePatternsFromConfig(args.context, graphite_config_name, merging_params.graphite_params); + setGraphitePatternsFromConfig(args.getContext(), graphite_config_name, merging_params.graphite_params); } else if (merging_params.mode == MergeTreeData::MergingParams::VersionedCollapsing) { @@ -599,9 +629,9 @@ static StoragePtr create(const StorageFactory::Arguments & args) std::unique_ptr storage_settings; if (replicated) - storage_settings = std::make_unique(args.context.getReplicatedMergeTreeSettings()); + storage_settings = std::make_unique(args.getContext()->getReplicatedMergeTreeSettings()); else - storage_settings = std::make_unique(args.context.getMergeTreeSettings()); + storage_settings = std::make_unique(args.getContext()->getMergeTreeSettings()); if (is_extended_storage_def) { @@ -612,7 +642,7 @@ static StoragePtr create(const StorageFactory::Arguments & args) /// Partition key may be undefined, but despite this we store its empty /// value in partition_key structure. MergeTree checks this case and uses a /// single default partition with name "all". - metadata.partition_key = KeyDescription::getKeyFromAST(partition_by_key, metadata.columns, args.context); + metadata.partition_key = KeyDescription::getKeyFromAST(partition_by_key, metadata.columns, args.getContext()); /// PRIMARY KEY without ORDER BY is allowed and considered as ORDER BY. if (!args.storage_def->order_by && args.storage_def->primary_key) @@ -630,33 +660,33 @@ static StoragePtr create(const StorageFactory::Arguments & args) /// before storage creation. After that storage will just copy this /// column if sorting key will be changed. metadata.sorting_key = KeyDescription::getSortingKeyFromAST( - args.storage_def->order_by->ptr(), metadata.columns, args.context, merging_param_key_arg); + args.storage_def->order_by->ptr(), metadata.columns, args.getContext(), merging_param_key_arg); /// If primary key is explicitly defined, then get it from AST if (args.storage_def->primary_key) { - metadata.primary_key = KeyDescription::getKeyFromAST(args.storage_def->primary_key->ptr(), metadata.columns, args.context); + metadata.primary_key = KeyDescription::getKeyFromAST(args.storage_def->primary_key->ptr(), metadata.columns, args.getContext()); } else /// Otherwise we don't have explicit primary key and copy it from order by { - metadata.primary_key = KeyDescription::getKeyFromAST(args.storage_def->order_by->ptr(), metadata.columns, args.context); + metadata.primary_key = KeyDescription::getKeyFromAST(args.storage_def->order_by->ptr(), metadata.columns, args.getContext()); /// and set its definition_ast to nullptr (so isPrimaryKeyDefined() /// will return false but hasPrimaryKey() will return true).
metadata.primary_key.definition_ast = nullptr; } if (args.storage_def->sample_by) - metadata.sampling_key = KeyDescription::getKeyFromAST(args.storage_def->sample_by->ptr(), metadata.columns, args.context); + metadata.sampling_key = KeyDescription::getKeyFromAST(args.storage_def->sample_by->ptr(), metadata.columns, args.getContext()); if (args.storage_def->ttl_table) { metadata.table_ttl = TTLTableDescription::getTTLForTableFromAST( - args.storage_def->ttl_table->ptr(), metadata.columns, args.context, metadata.primary_key); + args.storage_def->ttl_table->ptr(), metadata.columns, args.getContext(), metadata.primary_key); } if (args.query.columns_list && args.query.columns_list->indices) for (auto & index : args.query.columns_list->indices->children) - metadata.secondary_indices.push_back(IndexDescription::getIndexFromAST(index, args.columns, args.context)); + metadata.secondary_indices.push_back(IndexDescription::getIndexFromAST(index, args.columns, args.getContext())); if (args.query.columns_list && args.query.columns_list->constraints) for (auto & constraint : args.query.columns_list->constraints->children) @@ -665,7 +695,7 @@ static StoragePtr create(const StorageFactory::Arguments & args) auto column_ttl_asts = args.columns.getColumnTTLs(); for (const auto & [name, ast] : column_ttl_asts) { - auto new_ttl_entry = TTLDescription::getTTLFromAST(ast, args.columns, args.context, metadata.primary_key); + auto new_ttl_entry = TTLDescription::getTTLFromAST(ast, args.columns, args.getContext(), metadata.primary_key); metadata.column_ttls_by_name[name] = new_ttl_entry; } @@ -674,25 +704,6 @@ static StoragePtr create(const StorageFactory::Arguments & args) // updates the default storage_settings with settings specified via SETTINGS arg in a query if (args.storage_def->settings) metadata.settings_changes = args.storage_def->settings->ptr(); - - size_t index_granularity_bytes = 0; - size_t min_index_granularity_bytes = 0; - - index_granularity_bytes = storage_settings->index_granularity_bytes; - min_index_granularity_bytes = storage_settings->min_index_granularity_bytes; - - /* the min_index_granularity_bytes value is 1024 b and index_granularity_bytes is 10 mb by default - * if index_granularity_bytes is not disabled i.e > 0 b, then always ensure that it's greater than - * min_index_granularity_bytes. This is mainly a safeguard against accidents whereby a really low - * index_granularity_bytes SETTING of 1b can create really large parts with large marks. 
- */ - if (index_granularity_bytes > 0 && index_granularity_bytes < min_index_granularity_bytes) - { - throw Exception( - "index_granularity_bytes: " + std::to_string(index_granularity_bytes) - + " is lesser than specified min_index_granularity_bytes: " + std::to_string(min_index_granularity_bytes), - ErrorCodes::BAD_ARGUMENTS); - } } else { @@ -705,7 +716,7 @@ static StoragePtr create(const StorageFactory::Arguments & args) auto partition_by_ast = makeASTFunction("toYYYYMM", std::make_shared(date_column_name)); - metadata.partition_key = KeyDescription::getKeyFromAST(partition_by_ast, metadata.columns, args.context); + metadata.partition_key = KeyDescription::getKeyFromAST(partition_by_ast, metadata.columns, args.getContext()); ++arg_num; @@ -713,7 +724,7 @@ static StoragePtr create(const StorageFactory::Arguments & args) /// If there is an expression for sampling if (arg_cnt - arg_num == 3) { - metadata.sampling_key = KeyDescription::getKeyFromAST(engine_args[arg_num], metadata.columns, args.context); + metadata.sampling_key = KeyDescription::getKeyFromAST(engine_args[arg_num], metadata.columns, args.getContext()); ++arg_num; } @@ -723,10 +734,10 @@ static StoragePtr create(const StorageFactory::Arguments & args) /// before storage creation. After that storage will just copy this /// column if sorting key will be changed. metadata.sorting_key - = KeyDescription::getSortingKeyFromAST(engine_args[arg_num], metadata.columns, args.context, merging_param_key_arg); + = KeyDescription::getSortingKeyFromAST(engine_args[arg_num], metadata.columns, args.getContext(), merging_param_key_arg); /// In old syntax primary_key always equals to sorting key. - metadata.primary_key = KeyDescription::getKeyFromAST(engine_args[arg_num], metadata.columns, args.context); + metadata.primary_key = KeyDescription::getKeyFromAST(engine_args[arg_num], metadata.columns, args.getContext()); /// But it's not explicitly defined, so we evaluate definition to /// nullptr metadata.primary_key.definition_ast = nullptr; @@ -751,8 +762,8 @@ static StoragePtr create(const StorageFactory::Arguments & args) { for (size_t i = 0; i < data_types.size(); ++i) if (isFloat(data_types[i])) - throw Exception( - "Donot support float point as partition key: " + metadata.partition_key.column_names[i], ErrorCodes::BAD_ARGUMENTS); + throw Exception(ErrorCodes::BAD_ARGUMENTS, + "Floating point partition key is not supported: {}", metadata.partition_key.column_names[i]); } if (arg_num != arg_cnt) @@ -766,7 +777,7 @@ static StoragePtr create(const StorageFactory::Arguments & args) args.table_id, args.relative_data_path, metadata, - args.context, + args.getContext(), date_column_name, merging_params, std::move(storage_settings), @@ -778,7 +789,7 @@ static StoragePtr create(const StorageFactory::Arguments & args) args.relative_data_path, metadata, args.attach, - args.context, + args.getContext(), date_column_name, merging_params, std::move(storage_settings), diff --git a/src/Storages/PartitionCommands.cpp b/src/Storages/PartitionCommands.cpp index 76c2af17256..f09f60887e8 100644 --- a/src/Storages/PartitionCommands.cpp +++ b/src/Storages/PartitionCommands.cpp @@ -82,6 +82,7 @@ std::optional PartitionCommand::parse(const ASTAlterCommand * res.type = FETCH_PARTITION; res.partition = command_ast->partition; res.from_zookeeper_path = command_ast->from; + res.part = command_ast->part; return res; } else if (command_ast->type == ASTAlterCommand::FREEZE_PARTITION) @@ -94,10 +95,25 @@ std::optional PartitionCommand::parse(const ASTAlterCommand * } else 
if (command_ast->type == ASTAlterCommand::FREEZE_ALL) { - PartitionCommand command; - command.type = PartitionCommand::FREEZE_ALL_PARTITIONS; - command.with_name = command_ast->with_name; - return command; + PartitionCommand res; + res.type = PartitionCommand::FREEZE_ALL_PARTITIONS; + res.with_name = command_ast->with_name; + return res; + } + else if (command_ast->type == ASTAlterCommand::UNFREEZE_PARTITION) + { + PartitionCommand res; + res.type = PartitionCommand::UNFREEZE_PARTITION; + res.partition = command_ast->partition; + res.with_name = command_ast->with_name; + return res; + } + else if (command_ast->type == ASTAlterCommand::UNFREEZE_ALL) + { + PartitionCommand res; + res.type = PartitionCommand::UNFREEZE_ALL_PARTITIONS; + res.with_name = command_ast->with_name; + return res; } else return {}; @@ -125,11 +141,18 @@ std::string PartitionCommand::typeToString() const else return "DROP DETACHED PARTITION"; case PartitionCommand::Type::FETCH_PARTITION: - return "FETCH PARTITION"; + if (part) + return "FETCH PART"; + else + return "FETCH PARTITION"; case PartitionCommand::Type::FREEZE_ALL_PARTITIONS: return "FREEZE ALL"; case PartitionCommand::Type::FREEZE_PARTITION: return "FREEZE PARTITION"; + case PartitionCommand::Type::UNFREEZE_PARTITION: + return "UNFREEZE PARTITION"; + case PartitionCommand::Type::UNFREEZE_ALL_PARTITIONS: + return "UNFREEZE ALL"; case PartitionCommand::Type::REPLACE_PARTITION: return "REPLACE PARTITION"; } diff --git a/src/Storages/PartitionCommands.h b/src/Storages/PartitionCommands.h index e4f70305dbd..9f89d44bd4e 100644 --- a/src/Storages/PartitionCommands.h +++ b/src/Storages/PartitionCommands.h @@ -27,6 +27,8 @@ struct PartitionCommand FETCH_PARTITION, FREEZE_ALL_PARTITIONS, FREEZE_PARTITION, + UNFREEZE_ALL_PARTITIONS, + UNFREEZE_PARTITION, REPLACE_PARTITION, }; @@ -52,7 +54,7 @@ struct PartitionCommand /// For FETCH PARTITION - path in ZK to the shard, from which to download the partition. 
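    /// (Editor's illustrative note, not part of the patch: e.g. `ALTER TABLE t FETCH PARTITION 201902 FROM '/clickhouse/tables/01/t'`; the new FETCH PART form reuses this field together with the `part` flag set above.)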
String from_zookeeper_path; - /// For FREEZE PARTITION + /// For FREEZE PARTITION and UNFREEZE String with_name; enum MoveDestinationType diff --git a/src/Storages/PostgreSQL/PostgreSQLConnection.cpp b/src/Storages/PostgreSQL/PostgreSQLConnection.cpp index 668550ec721..53cf5159c5a 100644 --- a/src/Storages/PostgreSQL/PostgreSQLConnection.cpp +++ b/src/Storages/PostgreSQL/PostgreSQLConnection.cpp @@ -3,36 +3,68 @@ #endif #if USE_LIBPQXX -#include -#include +#include "PostgreSQLConnection.h" +#include #include -namespace DB +namespace postgres { -PostgreSQLConnection::ConnectionPtr PostgreSQLConnection::conn() +Connection::Connection( + const String & connection_str_, + const String & address_) + : connection_str(connection_str_) + , address(address_) { - checkUpdateConnection(); +} + + +pqxx::ConnectionPtr Connection::get() +{ + connectIfNeeded(); return connection; } -void PostgreSQLConnection::checkUpdateConnection() + +pqxx::ConnectionPtr Connection::tryGet() { - if (!connection || !connection->is_open()) - connection = std::make_unique(connection_str); + if (tryConnectIfNeeded()) + return connection; + return nullptr; } -std::string PostgreSQLConnection::formatConnectionString( - std::string dbname, std::string host, UInt16 port, std::string user, std::string password) + +void Connection::connectIfNeeded() { - WriteBufferFromOwnString out; - out << "dbname=" << quote << dbname - << " host=" << quote << host - << " port=" << port - << " user=" << quote << user - << " password=" << quote << password; - return out.str(); + if (!connection || !connection->is_open()) + { + LOG_DEBUG(&Poco::Logger::get("PostgreSQLConnection"), "New connection to {}", getAddress()); + connection = std::make_shared(connection_str); + } +} + + +bool Connection::tryConnectIfNeeded() +{ + try + { + connectIfNeeded(); + } + catch (const pqxx::broken_connection & pqxx_error) + { + LOG_ERROR( + &Poco::Logger::get("PostgreSQLConnection"), + "Unable to setup connection to {}, reason: {}", + getAddress(), pqxx_error.what()); + return false; + } + catch (...) + { + throw; + } + + return true; } } diff --git a/src/Storages/PostgreSQL/PostgreSQLConnection.h b/src/Storages/PostgreSQL/PostgreSQLConnection.h index ae79a3436e0..488f45a068d 100644 --- a/src/Storages/PostgreSQL/PostgreSQLConnection.h +++ b/src/Storages/PostgreSQL/PostgreSQLConnection.h @@ -7,42 +7,76 @@ #if USE_LIBPQXX #include // Y_IGNORE #include +#include -namespace DB -{ - -/// Tiny connection class to make it more convenient to use. -/// Connection is not made until actually used. 
-class PostgreSQLConnection +namespace pqxx { using ConnectionPtr = std::shared_ptr; +} + +namespace postgres +{ + +class Connection +{ public: - PostgreSQLConnection(std::string dbname, std::string host, UInt16 port, std::string user, std::string password) - : connection_str(formatConnectionString(std::move(dbname), std::move(host), port, std::move(user), std::move(password))) {} + Connection( + const String & connection_str_, + const String & address_); - PostgreSQLConnection(const std::string & connection_str_) : connection_str(connection_str_) {} + Connection(const Connection & other) = delete; - PostgreSQLConnection(const PostgreSQLConnection &) = delete; - PostgreSQLConnection operator =(const PostgreSQLConnection &) = delete; + pqxx::ConnectionPtr get(); - ConnectionPtr conn(); + pqxx::ConnectionPtr tryGet(); - std::string & conn_str() { return connection_str; } + bool isConnected() { return tryConnectIfNeeded(); } + +private: + void connectIfNeeded(); + + bool tryConnectIfNeeded(); + + const std::string & getAddress() { return address; } + + pqxx::ConnectionPtr connection; + std::string connection_str, address; +}; + +using ConnectionPtr = std::shared_ptr; + + +class ConnectionHolder +{ + +using Pool = ConcurrentBoundedQueue; +static constexpr inline auto POSTGRESQL_POOL_WAIT_MS = 50; + +public: + ConnectionHolder(ConnectionPtr connection_, Pool & pool_) + : connection(std::move(connection_)) + , pool(pool_) + { + } + + ConnectionHolder(const ConnectionHolder & other) = delete; + + ~ConnectionHolder() { pool.tryPush(connection, POSTGRESQL_POOL_WAIT_MS); } + + pqxx::connection & conn() const { return *connection->get(); } + + bool isConnected() { return connection->isConnected(); } private: ConnectionPtr connection; - std::string connection_str; - - static std::string formatConnectionString( - std::string dbname, std::string host, UInt16 port, std::string user, std::string password); - - void checkUpdateConnection(); + Pool & pool; }; -using PostgreSQLConnectionPtr = std::shared_ptr; +using ConnectionHolderPtr = std::shared_ptr; } + #endif diff --git a/src/Storages/PostgreSQL/PostgreSQLConnectionPool.cpp b/src/Storages/PostgreSQL/PostgreSQLConnectionPool.cpp new file mode 100644 index 00000000000..42c716dcf14 --- /dev/null +++ b/src/Storages/PostgreSQL/PostgreSQLConnectionPool.cpp @@ -0,0 +1,96 @@ +#if !defined(ARCADIA_BUILD) +#include "config_core.h" +#endif + +#if USE_LIBPQXX +#include +#include +#include "PostgreSQLConnectionPool.h" +#include "PostgreSQLConnection.h" +#include + + +namespace postgres +{ + +ConnectionPool::ConnectionPool( + std::string dbname, + std::string host, + UInt16 port, + std::string user, + std::string password, + size_t pool_size_, + int64_t pool_wait_timeout_) + : pool(std::make_shared(pool_size_)) + , pool_size(pool_size_) + , pool_wait_timeout(pool_wait_timeout_) + , block_on_empty_pool(pool_wait_timeout == -1) +{ + LOG_INFO( + &Poco::Logger::get("PostgreSQLConnectionPool"), + "New connection pool. 
Size: {}, blocks on empty pool: {}", + pool_size, block_on_empty_pool); + + address = host + ':' + std::to_string(port); + connection_str = formatConnectionString(std::move(dbname), std::move(host), port, std::move(user), std::move(password)); + initialize(); +} + + +ConnectionPool::ConnectionPool(const ConnectionPool & other) + : pool(std::make_shared(other.pool_size)) + , connection_str(other.connection_str) + , address(other.address) + , pool_size(other.pool_size) + , pool_wait_timeout(other.pool_wait_timeout) + , block_on_empty_pool(other.block_on_empty_pool) +{ + initialize(); +} + + +void ConnectionPool::initialize() +{ + /// No connection is made, just fill pool with non-connected connection objects. + for (size_t i = 0; i < pool_size; ++i) + pool->push(std::make_shared(connection_str, address)); +} + + +std::string ConnectionPool::formatConnectionString( + std::string dbname, std::string host, UInt16 port, std::string user, std::string password) +{ + DB::WriteBufferFromOwnString out; + out << "dbname=" << DB::quote << dbname + << " host=" << DB::quote << host + << " port=" << port + << " user=" << DB::quote << user + << " password=" << DB::quote << password; + return out.str(); +} + + +ConnectionHolderPtr ConnectionPool::get() +{ + ConnectionPtr connection; + + /// Always blocks by default. + if (block_on_empty_pool) + { + /// pop to ConcurrentBoundedQueue will block until it is non-empty. + pool->pop(connection); + return std::make_shared(connection, *pool); + } + + if (pool->tryPop(connection, pool_wait_timeout)) + { + return std::make_shared(connection, *pool); + } + + connection = std::make_shared(connection_str, address); + return std::make_shared(connection, *pool); +} + +} + +#endif diff --git a/src/Storages/PostgreSQL/PostgreSQLConnectionPool.h b/src/Storages/PostgreSQL/PostgreSQLConnectionPool.h new file mode 100644 index 00000000000..f1239fc78b5 --- /dev/null +++ b/src/Storages/PostgreSQL/PostgreSQLConnectionPool.h @@ -0,0 +1,64 @@ +#pragma once + +#if !defined(ARCADIA_BUILD) +#include "config_core.h" +#endif + +#if USE_LIBPQXX +#include "PostgreSQLConnection.h" + + +namespace postgres +{ + +class PoolWithFailover; + +/// Connection pool size is defined by user with setting `postgresql_connection_pool_size` (default 16). +/// If pool is empty, it will block until there are available connections. +/// If setting `connection_pool_wait_timeout` is defined, it will not block on empty pool and will +/// wait until the timeout and then create a new connection. 
(only for storage/db engine) +class ConnectionPool +{ + +friend class PoolWithFailover; + +static constexpr inline auto POSTGRESQL_POOL_DEFAULT_SIZE = 16; + +public: + + ConnectionPool( + std::string dbname, + std::string host, + UInt16 port, + std::string user, + std::string password, + size_t pool_size_ = POSTGRESQL_POOL_DEFAULT_SIZE, + int64_t pool_wait_timeout_ = -1); + + ConnectionPool(const ConnectionPool & other); + + ConnectionPool operator =(const ConnectionPool &) = delete; + + ConnectionHolderPtr get(); + +private: + using Pool = ConcurrentBoundedQueue; + using PoolPtr = std::shared_ptr; + + static std::string formatConnectionString( + std::string dbname, std::string host, UInt16 port, std::string user, std::string password); + + void initialize(); + + PoolPtr pool; + std::string connection_str, address; + size_t pool_size; + int64_t pool_wait_timeout; + bool block_on_empty_pool; +}; + +using ConnectionPoolPtr = std::shared_ptr; + +} + +#endif diff --git a/src/Storages/PostgreSQL/PostgreSQLPoolWithFailover.cpp b/src/Storages/PostgreSQL/PostgreSQLPoolWithFailover.cpp new file mode 100644 index 00000000000..6230bb4bc3b --- /dev/null +++ b/src/Storages/PostgreSQL/PostgreSQLPoolWithFailover.cpp @@ -0,0 +1,110 @@ +#include "PostgreSQLPoolWithFailover.h" +#include "PostgreSQLConnection.h" +#include +#include +#include + + +namespace DB +{ +namespace ErrorCodes +{ + extern const int POSTGRESQL_CONNECTION_FAILURE; +} +} + +namespace postgres +{ + +PoolWithFailover::PoolWithFailover( + const Poco::Util::AbstractConfiguration & config, + const std::string & config_prefix, + const size_t max_tries_) + : max_tries(max_tries_) +{ + auto db = config.getString(config_prefix + ".db", ""); + auto host = config.getString(config_prefix + ".host", ""); + auto port = config.getUInt(config_prefix + ".port", 0); + auto user = config.getString(config_prefix + ".user", ""); + auto password = config.getString(config_prefix + ".password", ""); + + if (config.has(config_prefix + ".replica")) + { + Poco::Util::AbstractConfiguration::Keys config_keys; + config.keys(config_prefix, config_keys); + + for (const auto & config_key : config_keys) + { + if (config_key.starts_with("replica")) + { + std::string replica_name = config_prefix + "." + config_key; + size_t priority = config.getInt(replica_name + ".priority", 0); + + auto replica_host = config.getString(replica_name + ".host", host); + auto replica_port = config.getUInt(replica_name + ".port", port); + auto replica_user = config.getString(replica_name + ".user", user); + auto replica_password = config.getString(replica_name + ".password", password); + + replicas_with_priority[priority].emplace_back(std::make_shared(db, replica_host, replica_port, replica_user, replica_password)); + } + } + } + else + { + replicas_with_priority[0].emplace_back(std::make_shared(db, host, port, user, password)); + } +} + + +PoolWithFailover::PoolWithFailover( + const std::string & database, + const RemoteDescription & addresses, + const std::string & user, + const std::string & password, + size_t pool_size, + int64_t pool_wait_timeout, + size_t max_tries_) + : max_tries(max_tries_) +{ + /// Replicas have the same priority, but traversed replicas are moved to the end of the queue. 
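+    /// (Editor's illustrative note, not part of the patch: with replicas [A, B, C], if A fails and B connects, get() rotates the vector to [C, A, B], so the next call tries C first and the replica that just failed is tried last.)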
+ for (const auto & [host, port] : addresses) + { + LOG_DEBUG(&Poco::Logger::get("PostgreSQLPoolWithFailover"), "Adding address host: {}, port: {} to connection pool", host, port); + replicas_with_priority[0].emplace_back(std::make_shared(database, host, port, user, password, pool_size, pool_wait_timeout)); + } +} + + +PoolWithFailover::PoolWithFailover(const PoolWithFailover & other) + : replicas_with_priority(other.replicas_with_priority) + , max_tries(other.max_tries) +{ +} + + +ConnectionHolderPtr PoolWithFailover::get() +{ + std::lock_guard lock(mutex); + + for (size_t try_idx = 0; try_idx < max_tries; ++try_idx) + { + for (auto & priority : replicas_with_priority) + { + auto & replicas = priority.second; + for (size_t i = 0; i < replicas.size(); ++i) + { + auto connection = replicas[i]->get(); + if (connection->isConnected()) + { + /// Move all traversed replicas to the end. + std::rotate(replicas.begin(), replicas.begin() + i + 1, replicas.end()); + return connection; + } + } + } + } + + throw DB::Exception(DB::ErrorCodes::POSTGRESQL_CONNECTION_FAILURE, "Unable to connect to any of the replicas"); +} + +} diff --git a/src/Storages/PostgreSQL/PostgreSQLPoolWithFailover.h b/src/Storages/PostgreSQL/PostgreSQLPoolWithFailover.h new file mode 100644 index 00000000000..048200f012a --- /dev/null +++ b/src/Storages/PostgreSQL/PostgreSQLPoolWithFailover.h @@ -0,0 +1,52 @@ +#pragma once + +#include +#include +#include +#include "PostgreSQLConnectionPool.h" + + +namespace postgres +{ + +class PoolWithFailover +{ + +using RemoteDescription = std::vector>; + +public: + static constexpr inline auto POSTGRESQL_POOL_WITH_FAILOVER_DEFAULT_MAX_TRIES = 5; + static constexpr inline auto POSTGRESQL_POOL_DEFAULT_SIZE = 16; + + PoolWithFailover( + const Poco::Util::AbstractConfiguration & config, + const std::string & config_prefix, + const size_t max_tries_ = POSTGRESQL_POOL_WITH_FAILOVER_DEFAULT_MAX_TRIES); + + PoolWithFailover( + const std::string & database, + const RemoteDescription & addresses, + const std::string & user, + const std::string & password, + size_t pool_size = POSTGRESQL_POOL_DEFAULT_SIZE, + int64_t pool_wait_timeout = -1, + size_t max_tries_ = POSTGRESQL_POOL_WITH_FAILOVER_DEFAULT_MAX_TRIES); + + PoolWithFailover(const PoolWithFailover & other); + + ConnectionHolderPtr get(); + + +private: + /// Highest priority is 0; the bigger the number in the map, the lower the priority + using Replicas = std::vector; + using ReplicasWithPriority = std::map; + + ReplicasWithPriority replicas_with_priority; + size_t max_tries; + std::mutex mutex; +}; + +using PoolWithFailoverPtr = std::shared_ptr; + +} diff --git a/src/Storages/PostgreSQL/PostgreSQLReplicaConnection.h b/src/Storages/PostgreSQL/PostgreSQLReplicaConnection.h new file mode 100644 index 00000000000..1ed442873a2 --- /dev/null +++ b/src/Storages/PostgreSQL/PostgreSQLReplicaConnection.h @@ -0,0 +1,39 @@ +#pragma once + +#include +#include +#include "PostgreSQLConnectionPool.h" +#include + + +namespace DB +{ + +class PostgreSQLReplicaConnection +{ + +public: + static constexpr inline auto POSTGRESQL_CONNECTION_DEFAULT_RETRIES_NUM = 5; + +  PostgreSQLReplicaConnection( + const Poco::Util::AbstractConfiguration & config, + const String & config_prefix, + const size_t num_retries_ = POSTGRESQL_CONNECTION_DEFAULT_RETRIES_NUM); + + PostgreSQLReplicaConnection(const PostgreSQLReplicaConnection & other); + + PostgreSQLConnectionHolderPtr get(); + + +private: + /// Highest priority is 0; the bigger the number in the map, the lower the priority + using
ReplicasByPriority = std::map; + + ReplicasByPriority replicas; + size_t num_retries; + std::mutex mutex; +}; + +using PostgreSQLReplicaConnectionPtr = std::shared_ptr; + +} diff --git a/src/Storages/RabbitMQ/RabbitMQBlockInputStream.cpp b/src/Storages/RabbitMQ/RabbitMQBlockInputStream.cpp index c5c43440228..6c3d3a53c21 100644 --- a/src/Storages/RabbitMQ/RabbitMQBlockInputStream.cpp +++ b/src/Storages/RabbitMQ/RabbitMQBlockInputStream.cpp @@ -16,7 +16,7 @@ namespace DB RabbitMQBlockInputStream::RabbitMQBlockInputStream( StorageRabbitMQ & storage_, const StorageMetadataPtr & metadata_snapshot_, - std::shared_ptr context_, + ContextPtr context_, const Names & columns, size_t max_block_size_, bool ack_in_suffix_) @@ -91,7 +91,7 @@ Block RabbitMQBlockInputStream::readImpl() MutableColumns virtual_columns = virtual_header.cloneEmptyColumns(); auto input_format = FormatFactory::instance().getInputFormat( - storage.getFormatName(), *buffer, non_virtual_header, *context, max_block_size); + storage.getFormatName(), *buffer, non_virtual_header, context, max_block_size); InputPort port(input_format->getPort().getHeader(), input_format.get()); connect(input_format->getPort(), port); diff --git a/src/Storages/RabbitMQ/RabbitMQBlockInputStream.h b/src/Storages/RabbitMQ/RabbitMQBlockInputStream.h index 8b93ca4e911..5ce1c96bf33 100644 --- a/src/Storages/RabbitMQ/RabbitMQBlockInputStream.h +++ b/src/Storages/RabbitMQ/RabbitMQBlockInputStream.h @@ -15,7 +15,7 @@ public: RabbitMQBlockInputStream( StorageRabbitMQ & storage_, const StorageMetadataPtr & metadata_snapshot_, - std::shared_ptr context_, + ContextPtr context_, const Names & columns, size_t max_block_size_, bool ack_in_suffix = true); @@ -38,7 +38,7 @@ public: private: StorageRabbitMQ & storage; StorageMetadataPtr metadata_snapshot; - std::shared_ptr context; + ContextPtr context; Names column_names; const size_t max_block_size; bool ack_in_suffix; diff --git a/src/Storages/RabbitMQ/RabbitMQBlockOutputStream.cpp b/src/Storages/RabbitMQ/RabbitMQBlockOutputStream.cpp index a987fff3c64..3c837cb95b1 100644 --- a/src/Storages/RabbitMQ/RabbitMQBlockOutputStream.cpp +++ b/src/Storages/RabbitMQ/RabbitMQBlockOutputStream.cpp @@ -11,7 +11,7 @@ namespace DB RabbitMQBlockOutputStream::RabbitMQBlockOutputStream( StorageRabbitMQ & storage_, const StorageMetadataPtr & metadata_snapshot_, - const Context & context_) + ContextPtr context_) : storage(storage_) , metadata_snapshot(metadata_snapshot_) , context(context_) diff --git a/src/Storages/RabbitMQ/RabbitMQBlockOutputStream.h b/src/Storages/RabbitMQ/RabbitMQBlockOutputStream.h index 7e5c22f9f39..3941875ea86 100644 --- a/src/Storages/RabbitMQ/RabbitMQBlockOutputStream.h +++ b/src/Storages/RabbitMQ/RabbitMQBlockOutputStream.h @@ -11,7 +11,7 @@ class RabbitMQBlockOutputStream : public IBlockOutputStream { public: - explicit RabbitMQBlockOutputStream(StorageRabbitMQ & storage_, const StorageMetadataPtr & metadata_snapshot_, const Context & context_); + explicit RabbitMQBlockOutputStream(StorageRabbitMQ & storage_, const StorageMetadataPtr & metadata_snapshot_, ContextPtr context_); Block getHeader() const override; @@ -22,7 +22,7 @@ public: private: StorageRabbitMQ & storage; StorageMetadataPtr metadata_snapshot; - const Context & context; + ContextPtr context; ProducerBufferPtr buffer; BlockOutputStreamPtr child; }; diff --git a/src/Storages/RabbitMQ/RabbitMQSettings.h b/src/Storages/RabbitMQ/RabbitMQSettings.h index 66348d61424..c6725903898 100644 --- a/src/Storages/RabbitMQ/RabbitMQSettings.h +++ 
b/src/Storages/RabbitMQ/RabbitMQSettings.h @@ -24,6 +24,7 @@ namespace DB M(UInt64, rabbitmq_skip_broken_messages, 0, "Skip at least this number of broken messages from RabbitMQ per block", 0) \ M(UInt64, rabbitmq_max_block_size, 0, "Number of row collected before flushing data from RabbitMQ.", 0) \ M(Milliseconds, rabbitmq_flush_interval_ms, 0, "Timeout for flushing data from RabbitMQ.", 0) \ + M(String, rabbitmq_vhost, "/", "RabbitMQ vhost.", 0) \ #define LIST_OF_RABBITMQ_SETTINGS(M) \ RABBITMQ_RELATED_SETTINGS(M) \ diff --git a/src/Storages/RabbitMQ/StorageRabbitMQ.cpp b/src/Storages/RabbitMQ/StorageRabbitMQ.cpp index 0ecf85e5c3d..525a08784be 100644 --- a/src/Storages/RabbitMQ/StorageRabbitMQ.cpp +++ b/src/Storages/RabbitMQ/StorageRabbitMQ.cpp @@ -70,31 +70,31 @@ namespace ExchangeType StorageRabbitMQ::StorageRabbitMQ( const StorageID & table_id_, - const Context & context_, + ContextPtr context_, const ColumnsDescription & columns_, std::unique_ptr rabbitmq_settings_) : IStorage(table_id_) - , global_context(context_.getGlobalContext()) + , WithContext(context_->getGlobalContext()) , rabbitmq_settings(std::move(rabbitmq_settings_)) - , exchange_name(global_context.getMacros()->expand(rabbitmq_settings->rabbitmq_exchange_name.value)) - , format_name(global_context.getMacros()->expand(rabbitmq_settings->rabbitmq_format.value)) - , exchange_type(defineExchangeType(global_context.getMacros()->expand(rabbitmq_settings->rabbitmq_exchange_type.value))) - , routing_keys(parseRoutingKeys(global_context.getMacros()->expand(rabbitmq_settings->rabbitmq_routing_key_list.value))) + , exchange_name(getContext()->getMacros()->expand(rabbitmq_settings->rabbitmq_exchange_name.value)) + , format_name(getContext()->getMacros()->expand(rabbitmq_settings->rabbitmq_format.value)) + , exchange_type(defineExchangeType(getContext()->getMacros()->expand(rabbitmq_settings->rabbitmq_exchange_type.value))) + , routing_keys(parseRoutingKeys(getContext()->getMacros()->expand(rabbitmq_settings->rabbitmq_routing_key_list.value))) , row_delimiter(rabbitmq_settings->rabbitmq_row_delimiter.value) - , schema_name(global_context.getMacros()->expand(rabbitmq_settings->rabbitmq_schema.value)) + , schema_name(getContext()->getMacros()->expand(rabbitmq_settings->rabbitmq_schema.value)) , num_consumers(rabbitmq_settings->rabbitmq_num_consumers.value) , num_queues(rabbitmq_settings->rabbitmq_num_queues.value) - , queue_base(global_context.getMacros()->expand(rabbitmq_settings->rabbitmq_queue_base.value)) - , deadletter_exchange(global_context.getMacros()->expand(rabbitmq_settings->rabbitmq_deadletter_exchange.value)) + , queue_base(getContext()->getMacros()->expand(rabbitmq_settings->rabbitmq_queue_base.value)) + , deadletter_exchange(getContext()->getMacros()->expand(rabbitmq_settings->rabbitmq_deadletter_exchange.value)) , persistent(rabbitmq_settings->rabbitmq_persistent.value) , hash_exchange(num_consumers > 1 || num_queues > 1) , log(&Poco::Logger::get("StorageRabbitMQ (" + table_id_.table_name + ")")) - , address(global_context.getMacros()->expand(rabbitmq_settings->rabbitmq_host_port.value)) + , address(getContext()->getMacros()->expand(rabbitmq_settings->rabbitmq_host_port.value)) , parsed_address(parseAddress(address, 5672)) , login_password(std::make_pair( - global_context.getConfigRef().getString("rabbitmq.username"), - global_context.getConfigRef().getString("rabbitmq.password"))) - , vhost(global_context.getConfigRef().getString("rabbitmq.vhost", "/")) + getContext()->getConfigRef().getString("rabbitmq.username"), + 
getContext()->getConfigRef().getString("rabbitmq.password"))) + , vhost(getContext()->getConfigRef().getString("rabbitmq.vhost", rabbitmq_settings->rabbitmq_vhost.value)) , semaphore(0, num_consumers) , unique_strbase(getRandomName()) , queue_size(std::max(QUEUE_SIZE, static_cast(getMaxBlockSize()))) @@ -106,18 +106,18 @@ StorageRabbitMQ::StorageRabbitMQ( storage_metadata.setColumns(columns_); setInMemoryMetadata(storage_metadata); - rabbitmq_context = addSettings(global_context); + rabbitmq_context = addSettings(getContext()); rabbitmq_context->makeQueryContext(); /// One looping task for all consumers as they share the same connection == the same handler == the same event loop event_handler->updateLoopState(Loop::STOP); - looping_task = global_context.getMessageBrokerSchedulePool().createTask("RabbitMQLoopingTask", [this]{ loopingFunc(); }); + looping_task = getContext()->getMessageBrokerSchedulePool().createTask("RabbitMQLoopingTask", [this]{ loopingFunc(); }); looping_task->deactivate(); - streaming_task = global_context.getMessageBrokerSchedulePool().createTask("RabbitMQStreamingTask", [this]{ streamingToViewsFunc(); }); + streaming_task = getContext()->getMessageBrokerSchedulePool().createTask("RabbitMQStreamingTask", [this]{ streamingToViewsFunc(); }); streaming_task->deactivate(); - connection_task = global_context.getMessageBrokerSchedulePool().createTask("RabbitMQConnectionTask", [this]{ connectionFunc(); }); + connection_task = getContext()->getMessageBrokerSchedulePool().createTask("RabbitMQConnectionTask", [this]{ connectionFunc(); }); connection_task->deactivate(); if (queue_base.empty()) @@ -188,9 +188,9 @@ String StorageRabbitMQ::getTableBasedName(String name, const StorageID & table_i } -std::shared_ptr StorageRabbitMQ::addSettings(const Context & context) const +std::shared_ptr StorageRabbitMQ::addSettings(ContextPtr local_context) const { - auto modified_context = std::make_shared(context); + auto modified_context = Context::createCopy(local_context); modified_context->setSetting("input_format_skip_unknown_fields", true); modified_context->setSetting("input_format_allow_errors_ratio", 0.); modified_context->setSetting("input_format_allow_errors_num", rabbitmq_settings->rabbitmq_skip_broken_messages.value); @@ -253,7 +253,7 @@ size_t StorageRabbitMQ::getMaxBlockSize() const { return rabbitmq_settings->rabbitmq_max_block_size.changed ? 
rabbitmq_settings->rabbitmq_max_block_size.value - : (global_context.getSettingsRef().max_insert_block_size.value / num_consumers); + : (getContext()->getSettingsRef().max_insert_block_size.value / num_consumers); } @@ -562,7 +562,7 @@ Pipe StorageRabbitMQ::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & /* query_info */, - const Context & context, + ContextPtr local_context, QueryProcessingStage::Enum /* processed_stage */, size_t /* max_block_size */, unsigned /* num_streams */) @@ -574,7 +574,7 @@ Pipe StorageRabbitMQ::read( return {}; auto sample_block = metadata_snapshot->getSampleBlockForColumns(column_names, getVirtuals(), getStorageID()); - auto modified_context = addSettings(context); + auto modified_context = addSettings(local_context); auto block_size = getMaxBlockSize(); if (!event_handler->connectionRunning()) @@ -607,9 +607,9 @@ Pipe StorageRabbitMQ::read( } -BlockOutputStreamPtr StorageRabbitMQ::write(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, const Context & context) +BlockOutputStreamPtr StorageRabbitMQ::write(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, ContextPtr local_context) { - return std::make_shared(*this, metadata_snapshot, context); + return std::make_shared(*this, metadata_snapshot, local_context); } @@ -712,7 +712,7 @@ ConsumerBufferPtr StorageRabbitMQ::createReadBuffer() ProducerBufferPtr StorageRabbitMQ::createWriteBuffer() { return std::make_shared( - parsed_address, global_context, login_password, vhost, routing_keys, exchange_name, exchange_type, + parsed_address, getContext(), login_password, vhost, routing_keys, exchange_name, exchange_type, producer_id.fetch_add(1), persistent, wait_confirm, log, row_delimiter ? std::optional{row_delimiter} : std::nullopt, 1, 1024); } @@ -728,7 +728,7 @@ bool StorageRabbitMQ::checkDependencies(const StorageID & table_id) // Check the dependencies are ready? for (const auto & db_tab : dependencies) { - auto table = DatabaseCatalog::instance().tryGetTable(db_tab, global_context); + auto table = DatabaseCatalog::instance().tryGetTable(db_tab, getContext()); if (!table) return false; @@ -798,7 +798,7 @@ void StorageRabbitMQ::streamingToViewsFunc() bool StorageRabbitMQ::streamToViews() { auto table_id = getStorageID(); - auto table = DatabaseCatalog::instance().getTable(table_id, global_context); + auto table = DatabaseCatalog::instance().getTable(table_id, getContext()); if (!table) throw Exception("Engine table " + table_id.getNameForLogs() + " doesn't exist.", ErrorCodes::LOGICAL_ERROR); @@ -807,7 +807,7 @@ bool StorageRabbitMQ::streamToViews() insert->table_id = table_id; // Only insert into dependent views and expect that input blocks contain virtual columns - InterpreterInsertQuery interpreter(insert, *rabbitmq_context, false, true, true); + InterpreterInsertQuery interpreter(insert, rabbitmq_context, false, true, true); auto block_io = interpreter.execute(); auto metadata_snapshot = getInMemoryMetadataPtr(); @@ -831,7 +831,7 @@ bool StorageRabbitMQ::streamToViews() limits.speed_limits.max_execution_time = rabbitmq_settings->rabbitmq_flush_interval_ms.changed ? 
rabbitmq_settings->rabbitmq_flush_interval_ms - : global_context.getSettingsRef().stream_flush_interval_ms; + : getContext()->getSettingsRef().stream_flush_interval_ms; limits.timeout_overflow_mode = OverflowMode::BREAK; @@ -988,9 +988,11 @@ void registerStorageRabbitMQ(StorageFactory & factory) CHECK_RABBITMQ_STORAGE_ARGUMENT(14, rabbitmq_max_block_size) CHECK_RABBITMQ_STORAGE_ARGUMENT(15, rabbitmq_flush_interval_ms) + CHECK_RABBITMQ_STORAGE_ARGUMENT(16, rabbitmq_vhost) + #undef CHECK_RABBITMQ_STORAGE_ARGUMENT - return StorageRabbitMQ::create(args.table_id, args.context, args.columns, std::move(rabbitmq_settings)); + return StorageRabbitMQ::create(args.table_id, args.getContext(), args.columns, std::move(rabbitmq_settings)); }; factory.registerStorage("RabbitMQ", creator_fn, StorageFactory::StorageFeatures{ .supports_settings = true, }); diff --git a/src/Storages/RabbitMQ/StorageRabbitMQ.h b/src/Storages/RabbitMQ/StorageRabbitMQ.h index 9f573ea4a3e..eeda6b9fdca 100644 --- a/src/Storages/RabbitMQ/StorageRabbitMQ.h +++ b/src/Storages/RabbitMQ/StorageRabbitMQ.h @@ -19,11 +19,9 @@ namespace DB { -class Context; - using ChannelPtr = std::shared_ptr; -class StorageRabbitMQ final: public ext::shared_ptr_helper, public IStorage +class StorageRabbitMQ final: public ext::shared_ptr_helper, public IStorage, WithContext { friend struct ext::shared_ptr_helper; @@ -40,7 +38,7 @@ public: const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; @@ -48,7 +46,7 @@ public: BlockOutputStreamPtr write( const ASTPtr & query, const StorageMetadataPtr & metadata_snapshot, - const Context & context) override; + ContextPtr context) override; void pushReadBuffer(ConsumerBufferPtr buf); ConsumerBufferPtr popReadBuffer(); @@ -59,7 +57,7 @@ public: const String & getFormatName() const { return format_name; } NamesAndTypesList getVirtuals() const override; - const String getExchange() const { return exchange_name; } + String getExchange() const { return exchange_name; } void unbindExchange(); bool exchangeRemoved() { return exchange_removed.load(); } @@ -69,13 +67,12 @@ public: protected: StorageRabbitMQ( const StorageID & table_id_, - const Context & context_, + ContextPtr context_, const ColumnsDescription & columns_, std::unique_ptr rabbitmq_settings_); private: - const Context & global_context; - std::shared_ptr rabbitmq_context; + ContextPtr rabbitmq_context; std::unique_ptr rabbitmq_settings; const String exchange_name; @@ -139,7 +136,7 @@ private: static AMQP::ExchangeType defineExchangeType(String exchange_type_); static String getTableBasedName(String name, const StorageID & table_id); - std::shared_ptr addSettings(const Context & context) const; + std::shared_ptr addSettings(ContextPtr context) const; size_t getMaxBlockSize() const; void deactivateTask(BackgroundSchedulePool::TaskHolder & task, bool wait, bool stop_loop); diff --git a/src/Storages/RabbitMQ/WriteBufferToRabbitMQProducer.cpp b/src/Storages/RabbitMQ/WriteBufferToRabbitMQProducer.cpp index ebee27faf17..b9af60eb66f 100644 --- a/src/Storages/RabbitMQ/WriteBufferToRabbitMQProducer.cpp +++ b/src/Storages/RabbitMQ/WriteBufferToRabbitMQProducer.cpp @@ -27,7 +27,7 @@ namespace ErrorCodes WriteBufferToRabbitMQProducer::WriteBufferToRabbitMQProducer( std::pair & parsed_address_, - const Context & global_context, + ContextPtr global_context, const std::pair & 
login_password_, const String & vhost_, const Names & routing_keys_, @@ -72,7 +72,7 @@ WriteBufferToRabbitMQProducer::WriteBufferToRabbitMQProducer( ErrorCodes::CANNOT_CONNECT_RABBITMQ); } - writing_task = global_context.getSchedulePool().createTask("RabbitMQWritingTask", [this]{ writingFunc(); }); + writing_task = global_context->getSchedulePool().createTask("RabbitMQWritingTask", [this]{ writingFunc(); }); writing_task->deactivate(); if (exchange_type == AMQP::ExchangeType::headers) diff --git a/src/Storages/RabbitMQ/WriteBufferToRabbitMQProducer.h b/src/Storages/RabbitMQ/WriteBufferToRabbitMQProducer.h index e88f5e10e74..452cc38d751 100644 --- a/src/Storages/RabbitMQ/WriteBufferToRabbitMQProducer.h +++ b/src/Storages/RabbitMQ/WriteBufferToRabbitMQProducer.h @@ -20,7 +20,7 @@ class WriteBufferToRabbitMQProducer : public WriteBuffer public: WriteBufferToRabbitMQProducer( std::pair & parsed_address_, - const Context & global_context, + ContextPtr global_context, const std::pair & login_password_, const String & vhost_, const Names & routing_keys_, diff --git a/src/Storages/ReadInOrderOptimizer.cpp b/src/Storages/ReadInOrderOptimizer.cpp index 2b751329208..3bb7034b588 100644 --- a/src/Storages/ReadInOrderOptimizer.cpp +++ b/src/Storages/ReadInOrderOptimizer.cpp @@ -34,7 +34,7 @@ ReadInOrderOptimizer::ReadInOrderOptimizer( forbidden_columns = syntax_result->getArrayJoinSourceNameSet(); } -InputOrderInfoPtr ReadInOrderOptimizer::getInputOrder(const StorageMetadataPtr & metadata_snapshot, const Context & context) const +InputOrderInfoPtr ReadInOrderOptimizer::getInputOrder(const StorageMetadataPtr & metadata_snapshot, ContextPtr context) const { Names sorting_key_columns = metadata_snapshot->getSortingKeyColumns(); if (!metadata_snapshot->hasSortingKey()) @@ -44,7 +44,7 @@ InputOrderInfoPtr ReadInOrderOptimizer::getInputOrder(const StorageMetadataPtr & int read_direction = required_sort_description.at(0).direction; size_t prefix_size = std::min(required_sort_description.size(), sorting_key_columns.size()); - auto aliase_columns = metadata_snapshot->getColumns().getAliases(); + auto aliased_columns = metadata_snapshot->getColumns().getAliases(); for (size_t i = 0; i < prefix_size; ++i) { @@ -55,13 +55,18 @@ InputOrderInfoPtr ReadInOrderOptimizer::getInputOrder(const StorageMetadataPtr & /// or in some simple cases when order key element is wrapped into monotonic function. auto apply_order_judge = [&] (const ExpressionActions::Actions & actions, const String & sort_column) { + /// If the required order depends on collation, it cannot be matched with the primary key order, + /// because primary keys cannot have collations. + if (required_sort_description[i].collator) + return false; + int current_direction = required_sort_description[i].direction; - /// For the path: order by (sort_column, ...) + /// For the path: order by (sort_column, ...) if (sort_column == sorting_key_columns[i] && current_direction == read_direction) { return true; } - /// For the path: order by (function(sort_column), ...) + /// For the path: order by (function(sort_column), ...) /// Allow only one simple monotonic function with one argument /// Why not allow multiple monotonic functions? else @@ -125,7 +130,7 @@ InputOrderInfoPtr ReadInOrderOptimizer::getInputOrder(const StorageMetadataPtr & /// currently we only support alias columns without any function wrapper /// ie: `order by aliased_column` can have this optimization, but `order by function(aliased_column)` can not. /// This suits most cases.
- if (context.getSettingsRef().optimize_respect_aliases && aliase_columns.contains(required_sort_description[i].column_name)) + if (context->getSettingsRef().optimize_respect_aliases && aliased_columns.contains(required_sort_description[i].column_name)) { auto column_expr = metadata_snapshot->getColumns().get(required_sort_description[i].column_name).default_desc.expression->clone(); replaceAliasColumnsInQuery(column_expr, metadata_snapshot->getColumns(), forbidden_columns, context); diff --git a/src/Storages/ReadInOrderOptimizer.h b/src/Storages/ReadInOrderOptimizer.h index 3676f4cc88c..0af1121db32 100644 --- a/src/Storages/ReadInOrderOptimizer.h +++ b/src/Storages/ReadInOrderOptimizer.h @@ -22,7 +22,7 @@ public: const SortDescription & required_sort_description, const TreeRewriterResultPtr & syntax_result); - InputOrderInfoPtr getInputOrder(const StorageMetadataPtr & metadata_snapshot, const Context & context) const; + InputOrderInfoPtr getInputOrder(const StorageMetadataPtr & metadata_snapshot, ContextPtr context) const; private: /// Actions for every element of order expression to analyze functions for monotonicity diff --git a/src/Storages/RocksDB/EmbeddedRocksDBBlockInputStream.cpp b/src/Storages/RocksDB/EmbeddedRocksDBBlockInputStream.cpp index 35c41cabd8b..4900e17ad91 100644 --- a/src/Storages/RocksDB/EmbeddedRocksDBBlockInputStream.cpp +++ b/src/Storages/RocksDB/EmbeddedRocksDBBlockInputStream.cpp @@ -48,7 +48,8 @@ Block EmbeddedRocksDBBlockInputStream::readImpl() size_t idx = 0; for (const auto & elem : sample_block) { - elem.type->deserializeBinary(*columns[idx], idx == primary_key_pos ? key_buffer : value_buffer); + auto serialization = elem.type->getDefaultSerialization(); + serialization->deserializeBinary(*columns[idx], idx == primary_key_pos ? key_buffer : value_buffer); ++idx; } } diff --git a/src/Storages/RocksDB/EmbeddedRocksDBBlockOutputStream.cpp b/src/Storages/RocksDB/EmbeddedRocksDBBlockOutputStream.cpp index 1edbdc25942..d7b125cb41f 100644 --- a/src/Storages/RocksDB/EmbeddedRocksDBBlockOutputStream.cpp +++ b/src/Storages/RocksDB/EmbeddedRocksDBBlockOutputStream.cpp @@ -51,7 +51,7 @@ void EmbeddedRocksDBBlockOutputStream::write(const Block & block) size_t idx = 0; for (const auto & elem : block) { - elem.type->serializeBinary(*elem.column, i, idx == primary_key_pos ? wb_key : wb_value); + elem.type->getDefaultSerialization()->serializeBinary(*elem.column, i, idx == primary_key_pos ? wb_key : wb_value); ++idx; } status = batch.Put(wb_key.str(), wb_value.str()); diff --git a/src/Storages/RocksDB/StorageEmbeddedRocksDB.cpp b/src/Storages/RocksDB/StorageEmbeddedRocksDB.cpp index d7456966467..9173c23ec5a 100644 --- a/src/Storages/RocksDB/StorageEmbeddedRocksDB.cpp +++ b/src/Storages/RocksDB/StorageEmbeddedRocksDB.cpp @@ -198,7 +198,7 @@ public: while (it < end && rows_processed < max_block_size) { WriteBufferFromString wb(serialized_keys[rows_processed]); - key_column.type->serializeBinary(*it, wb); + key_column.type->getDefaultSerialization()->serializeBinary(*it, wb); wb.finalize(); slices_keys[rows_processed] = std::move(serialized_keys[rows_processed]); @@ -219,7 +219,7 @@ public: size_t idx = 0; for (const auto & elem : sample_block) { - elem.type->deserializeBinary(*columns[idx], idx == primary_key_pos ? key_buffer : value_buffer); + elem.type->getDefaultSerialization()->deserializeBinary(*columns[idx], idx == primary_key_pos ? 
key_buffer : value_buffer); ++idx; } } @@ -245,12 +245,12 @@ StorageEmbeddedRocksDB::StorageEmbeddedRocksDB(const StorageID & table_id_, const String & relative_data_path_, const StorageInMemoryMetadata & metadata_, bool attach, - Context & context_, + ContextPtr context_, const String & primary_key_) : IStorage(table_id_), primary_key{primary_key_} { setInMemoryMetadata(metadata_); - rocksdb_dir = context_.getPath() + relative_data_path_; + rocksdb_dir = context_->getPath() + relative_data_path_; if (!attach) { Poco::File(rocksdb_dir).createDirectories(); @@ -258,7 +258,7 @@ StorageEmbeddedRocksDB::StorageEmbeddedRocksDB(const StorageID & table_id_, initDb(); } -void StorageEmbeddedRocksDB::truncate(const ASTPtr &, const StorageMetadataPtr & , const Context &, TableExclusiveLockHolder &) +void StorageEmbeddedRocksDB::truncate(const ASTPtr &, const StorageMetadataPtr & , ContextPtr, TableExclusiveLockHolder &) { rocksdb_ptr->Close(); Poco::File(rocksdb_dir).remove(true); @@ -284,7 +284,7 @@ Pipe StorageEmbeddedRocksDB::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & /*context*/, + ContextPtr /*context*/, QueryProcessingStage::Enum /*processed_stage*/, size_t max_block_size, unsigned num_streams) @@ -331,7 +331,7 @@ Pipe StorageEmbeddedRocksDB::read( } BlockOutputStreamPtr StorageEmbeddedRocksDB::write( - const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, const Context & /*context*/) + const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, ContextPtr /*context*/) { return std::make_shared(*this, metadata_snapshot); } @@ -352,13 +352,13 @@ static StoragePtr create(const StorageFactory::Arguments & args) if (!args.storage_def->primary_key) throw Exception("StorageEmbeddedRocksDB must require one column in primary key", ErrorCodes::BAD_ARGUMENTS); - metadata.primary_key = KeyDescription::getKeyFromAST(args.storage_def->primary_key->ptr(), metadata.columns, args.context); + metadata.primary_key = KeyDescription::getKeyFromAST(args.storage_def->primary_key->ptr(), metadata.columns, args.getContext()); auto primary_key_names = metadata.getColumnsRequiredForPrimaryKey(); if (primary_key_names.size() != 1) { throw Exception("StorageEmbeddedRocksDB must require one column in primary key", ErrorCodes::BAD_ARGUMENTS); } - return StorageEmbeddedRocksDB::create(args.table_id, args.relative_data_path, metadata, args.attach, args.context, primary_key_names[0]); + return StorageEmbeddedRocksDB::create(args.table_id, args.relative_data_path, metadata, args.attach, args.getContext(), primary_key_names[0]); } diff --git a/src/Storages/RocksDB/StorageEmbeddedRocksDB.h b/src/Storages/RocksDB/StorageEmbeddedRocksDB.h index bd700a35809..64255392c35 100644 --- a/src/Storages/RocksDB/StorageEmbeddedRocksDB.h +++ b/src/Storages/RocksDB/StorageEmbeddedRocksDB.h @@ -29,28 +29,31 @@ public: const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; - BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, const Context & context) override; - void truncate(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, const Context &, TableExclusiveLockHolder &) override; + BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, 
ContextPtr context) override;
+    void truncate(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, ContextPtr, TableExclusiveLockHolder &) override;
     bool supportsParallelInsert() const override { return true; }
     bool supportsIndexForIn() const override { return true; }
     bool mayBenefitFromIndexForIn(
-        const ASTPtr & node, const Context & /*query_context*/, const StorageMetadataPtr & /*metadata_snapshot*/) const override
+        const ASTPtr & node, ContextPtr /*query_context*/, const StorageMetadataPtr & /*metadata_snapshot*/) const override
     {
         return node->getColumnName() == primary_key;
     }
+    bool storesDataOnDisk() const override { return true; }
+    Strings getDataPaths() const override { return {rocksdb_dir}; }
+
 protected:
     StorageEmbeddedRocksDB(const StorageID & table_id_,
         const String & relative_data_path_,
         const StorageInMemoryMetadata & metadata,
         bool attach,
-        Context & context_,
+        ContextPtr context_,
         const String & primary_key_);
 private:
diff --git a/src/Storages/SelectQueryDescription.cpp b/src/Storages/SelectQueryDescription.cpp
index c11e6bd74f8..05747a9a260 100644
--- a/src/Storages/SelectQueryDescription.cpp
+++ b/src/Storages/SelectQueryDescription.cpp
@@ -44,11 +44,11 @@ SelectQueryDescription & SelectQueryDescription::SelectQueryDescription::operato
 namespace
 {
-StorageID extractDependentTableFromSelectQuery(ASTSelectQuery & query, const Context & context, bool add_default_db = true)
+StorageID extractDependentTableFromSelectQuery(ASTSelectQuery & query, ContextPtr context, bool add_default_db = true)
 {
     if (add_default_db)
     {
-        AddDefaultDatabaseVisitor visitor(context.getCurrentDatabase(), false, nullptr);
+        AddDefaultDatabaseVisitor visitor(context->getCurrentDatabase(), false, nullptr);
         visitor.visit(query);
     }
@@ -114,7 +114,7 @@ static bool isSingleSelect(const ASTPtr & select, ASTPtr & res)
     return isSingleSelect(new_inner_query, res);
 }
-SelectQueryDescription SelectQueryDescription::getSelectQueryFromASTForMatView(const ASTPtr & select, const Context & context)
+SelectQueryDescription SelectQueryDescription::getSelectQueryFromASTForMatView(const ASTPtr & select, ContextPtr context)
 {
     ASTPtr new_inner_query;
diff --git a/src/Storages/SelectQueryDescription.h b/src/Storages/SelectQueryDescription.h
index ce3ca44c147..28a0a186a07 100644
--- a/src/Storages/SelectQueryDescription.h
+++ b/src/Storages/SelectQueryDescription.h
@@ -1,5 +1,6 @@
 #pragma once
+#include
 #include
 namespace DB
@@ -17,7 +18,7 @@ struct SelectQueryDescription
     /// Parse description from select query for materialized view. Also
     /// validates query.
-    static SelectQueryDescription getSelectQueryFromASTForMatView(const ASTPtr & select, const Context & context);
+    static SelectQueryDescription getSelectQueryFromASTForMatView(const ASTPtr & select, ContextPtr context);
     SelectQueryDescription() = default;
     SelectQueryDescription(const SelectQueryDescription & other);
diff --git a/src/Storages/SelectQueryInfo.h b/src/Storages/SelectQueryInfo.h
index fea9a7bad68..b4ac07c612a 100644
--- a/src/Storages/SelectQueryInfo.h
+++ b/src/Storages/SelectQueryInfo.h
@@ -119,9 +119,13 @@ struct SelectQueryInfo
     ASTPtr query;
     ASTPtr view_query; /// Optimized VIEW query
-    /// For optimize_skip_unused_shards.
-    /// Can be modified in getQueryProcessingStage()
+    /// Cluster for the query.
     ClusterPtr cluster;
+    /// Optimized cluster for the query.
+    /// In case of optimize_skip_unused_shards it may differ from the original cluster.
+ /// + /// Configured in StorageDistributed::getQueryProcessingStage() + ClusterPtr optimized_cluster; TreeRewriterResultPtr syntax_analyzer_result; @@ -134,6 +138,8 @@ struct SelectQueryInfo /// Prepared sets are used for indices by storage engine. /// Example: x IN (1, 2, 3) PreparedSets sets; + + ClusterPtr getCluster() const { return !optimized_cluster ? cluster : optimized_cluster; } }; } diff --git a/src/Storages/StorageBuffer.cpp b/src/Storages/StorageBuffer.cpp index 07850560c23..58161a94f45 100644 --- a/src/Storages/StorageBuffer.cpp +++ b/src/Storages/StorageBuffer.cpp @@ -29,7 +29,8 @@ #include #include #include - +#include +#include namespace ProfileEvents { @@ -39,6 +40,11 @@ namespace ProfileEvents extern const Event StorageBufferPassedTimeMaxThreshold; extern const Event StorageBufferPassedRowsMaxThreshold; extern const Event StorageBufferPassedBytesMaxThreshold; + extern const Event StorageBufferPassedTimeFlushThreshold; + extern const Event StorageBufferPassedRowsFlushThreshold; + extern const Event StorageBufferPassedBytesFlushThreshold; + extern const Event StorageBufferLayerLockReadersWaitMilliseconds; + extern const Event StorageBufferLayerLockWritersWaitMilliseconds; } namespace CurrentMetrics @@ -62,25 +68,57 @@ namespace ErrorCodes } +std::unique_lock StorageBuffer::Buffer::lockForReading() const +{ + return lockImpl(/* read= */true); +} +std::unique_lock StorageBuffer::Buffer::lockForWriting() const +{ + return lockImpl(/* read= */false); +} +std::unique_lock StorageBuffer::Buffer::tryLock() const +{ + std::unique_lock lock(mutex, std::try_to_lock); + return lock; +} +std::unique_lock StorageBuffer::Buffer::lockImpl(bool read) const +{ + std::unique_lock lock(mutex, std::defer_lock); + + Stopwatch watch(CLOCK_MONOTONIC_COARSE); + lock.lock(); + UInt64 elapsed = watch.elapsedMilliseconds(); + + if (read) + ProfileEvents::increment(ProfileEvents::StorageBufferLayerLockReadersWaitMilliseconds, elapsed); + else + ProfileEvents::increment(ProfileEvents::StorageBufferLayerLockWritersWaitMilliseconds, elapsed); + + return lock; +} + + StorageBuffer::StorageBuffer( const StorageID & table_id_, const ColumnsDescription & columns_, const ConstraintsDescription & constraints_, - const Context & context_, + ContextPtr context_, size_t num_shards_, const Thresholds & min_thresholds_, const Thresholds & max_thresholds_, + const Thresholds & flush_thresholds_, const StorageID & destination_id_, bool allow_materialized_) : IStorage(table_id_) - , buffer_context(context_.getBufferContext()) + , WithContext(context_->getBufferContext()) , num_shards(num_shards_), buffers(num_shards_) , min_thresholds(min_thresholds_) , max_thresholds(max_thresholds_) + , flush_thresholds(flush_thresholds_) , destination_id(destination_id_) , allow_materialized(allow_materialized_) , log(&Poco::Logger::get("StorageBuffer (" + table_id_.getFullTableName() + ")")) - , bg_pool(buffer_context.getBufferFlushSchedulePool()) + , bg_pool(getContext()->getBufferFlushSchedulePool()) { StorageInMemoryMetadata storage_metadata; storage_metadata.setColumns(columns_); @@ -110,7 +148,7 @@ protected: return res; has_been_read = true; - std::lock_guard lock(buffer.mutex); + std::unique_lock lock(buffer.lockForReading()); if (!buffer.data.rows()) return res; @@ -140,16 +178,16 @@ private: }; -QueryProcessingStage::Enum StorageBuffer::getQueryProcessingStage(const Context & context, QueryProcessingStage::Enum to_stage, SelectQueryInfo & query_info) const +QueryProcessingStage::Enum 
StorageBuffer::getQueryProcessingStage(ContextPtr local_context, QueryProcessingStage::Enum to_stage, SelectQueryInfo & query_info) const { if (destination_id) { - auto destination = DatabaseCatalog::instance().getTable(destination_id, context); + auto destination = DatabaseCatalog::instance().getTable(destination_id, local_context); if (destination.get() == this) throw Exception("Destination table is myself. Read will cause infinite loop.", ErrorCodes::INFINITE_LOOP); - return destination->getQueryProcessingStage(context, to_stage, query_info); + return destination->getQueryProcessingStage(local_context, to_stage, query_info); } return QueryProcessingStage::FetchColumns; @@ -160,14 +198,16 @@ Pipe StorageBuffer::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr local_context, QueryProcessingStage::Enum processed_stage, const size_t max_block_size, const unsigned num_streams) { QueryPlan plan; - read(plan, column_names, metadata_snapshot, query_info, context, processed_stage, max_block_size, num_streams); - return plan.convertToPipe(QueryPlanOptimizationSettings(context.getSettingsRef())); + read(plan, column_names, metadata_snapshot, query_info, local_context, processed_stage, max_block_size, num_streams); + return plan.convertToPipe( + QueryPlanOptimizationSettings::fromContext(local_context), + BuildQueryPipelineSettings::fromContext(local_context)); } void StorageBuffer::read( @@ -175,19 +215,19 @@ void StorageBuffer::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr local_context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) { if (destination_id) { - auto destination = DatabaseCatalog::instance().getTable(destination_id, context); + auto destination = DatabaseCatalog::instance().getTable(destination_id, local_context); if (destination.get() == this) throw Exception("Destination table is myself. Read will cause infinite loop.", ErrorCodes::INFINITE_LOOP); - auto destination_lock = destination->lockForShare(context.getCurrentQueryId(), context.getSettingsRef().lock_acquire_timeout); + auto destination_lock = destination->lockForShare(local_context->getCurrentQueryId(), local_context->getSettingsRef().lock_acquire_timeout); auto destination_metadata_snapshot = destination->getInMemoryMetadataPtr(); @@ -202,12 +242,12 @@ void StorageBuffer::read( if (dst_has_same_structure) { if (query_info.order_optimizer) - query_info.input_order_info = query_info.order_optimizer->getInputOrder(destination_metadata_snapshot, context); + query_info.input_order_info = query_info.order_optimizer->getInputOrder(destination_metadata_snapshot, local_context); /// The destination table has the same structure of the requested columns and we can simply read blocks from there. 
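An aside on the fast path taken just below: destination->read() is called directly only when every requested column exists in the destination table with an identical type; otherwise the code falls back to reading the intersection of columns and converting afterwards. A minimal sketch of that structural check, with hypothetical NameToType maps standing in for the real column descriptions:

```cpp
#include <map>
#include <string>
#include <vector>

// Hypothetical stand-in: column name -> type name.
using NameToType = std::map<std::string, std::string>;

// True when every requested column exists in the destination with exactly
// the same type, i.e. buffered blocks can be forwarded without conversion.
bool hasSameStructure(const std::vector<std::string> & requested,
                      const NameToType & buffer_columns,
                      const NameToType & destination_columns)
{
    for (const auto & name : requested)
    {
        auto src = buffer_columns.find(name);
        auto dst = destination_columns.find(name);
        if (src == buffer_columns.end() || dst == destination_columns.end() || src->second != dst->second)
            return false; // fall back to column intersection + conversion
    }
    return true;
}
```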
destination->read( query_plan, column_names, destination_metadata_snapshot, query_info, - context, processed_stage, max_block_size, num_streams); + local_context, processed_stage, max_block_size, num_streams); } else { @@ -242,7 +282,7 @@ void StorageBuffer::read( { destination->read( query_plan, columns_intersection, destination_metadata_snapshot, query_info, - context, processed_stage, max_block_size, num_streams); + local_context, processed_stage, max_block_size, num_streams); if (query_plan.isInitialized()) { @@ -251,7 +291,7 @@ void StorageBuffer::read( query_plan.getCurrentDataStream().header, header_after_adding_defaults.getNamesAndTypesList(), metadata_snapshot->getColumns(), - context); + local_context); auto adding_missed = std::make_unique( query_plan.getCurrentDataStream(), @@ -314,7 +354,7 @@ void StorageBuffer::read( if (processed_stage > QueryProcessingStage::FetchColumns) { auto interpreter = InterpreterSelectQuery( - query_info.query, context, std::move(pipe_from_buffers), + query_info.query, local_context, std::move(pipe_from_buffers), SelectQueryOptions(processed_stage)); interpreter.buildQueryPlan(buffers_plan); } @@ -388,7 +428,7 @@ void StorageBuffer::read( plans.emplace_back(std::make_unique(std::move(buffers_plan))); query_plan = QueryPlan(); - auto union_step = std::make_unique(std::move(input_streams), result_header); + auto union_step = std::make_unique(std::move(input_streams)); union_step->setStepDescription("Unite sources from Buffer table"); query_plan.unitePlans(std::move(union_step), std::move(plans)); } @@ -492,7 +532,7 @@ public: StoragePtr destination; if (storage.destination_id) { - destination = DatabaseCatalog::instance().tryGetTable(storage.destination_id, storage.buffer_context); + destination = DatabaseCatalog::instance().tryGetTable(storage.destination_id, storage.getContext()); if (destination.get() == &storage) throw Exception("Destination table is myself. Write will cause infinite loop.", ErrorCodes::INFINITE_LOOP); } @@ -507,7 +547,7 @@ public: { if (storage.destination_id) { - LOG_TRACE(storage.log, "Writing block with {} rows, {} bytes directly.", rows, bytes); + LOG_DEBUG(storage.log, "Writing block with {} rows, {} bytes directly.", rows, bytes); storage.writeBlockToDestination(block, destination); } return; @@ -525,7 +565,7 @@ public: for (size_t try_no = 0; try_no < storage.num_shards; ++try_no) { - std::unique_lock lock(storage.buffers[shard_num].mutex, std::try_to_lock); + std::unique_lock lock(storage.buffers[shard_num].tryLock()); if (lock.owns_lock()) { @@ -545,7 +585,7 @@ public: if (!least_busy_buffer) { least_busy_buffer = &storage.buffers[start_shard_num]; - least_busy_lock = std::unique_lock(least_busy_buffer->mutex); + least_busy_lock = least_busy_buffer->lockForWriting(); } insertIntoBuffer(block, *least_busy_buffer); least_busy_lock.unlock(); @@ -567,7 +607,7 @@ private: { buffer.data = sorted_block.cloneEmpty(); } - else if (storage.checkThresholds(buffer, current_time, sorted_block.rows(), sorted_block.bytes())) + else if (storage.checkThresholds(buffer, /* direct= */true, current_time, sorted_block.rows(), sorted_block.bytes())) { /** If, after inserting the buffer, the constraints are exceeded, then we will reset the buffer. 
* This also protects against unlimited consumption of RAM, since if it is impossible to write to the table, @@ -585,14 +625,14 @@ private: }; -BlockOutputStreamPtr StorageBuffer::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, const Context & /*context*/) +BlockOutputStreamPtr StorageBuffer::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, ContextPtr /*context*/) { return std::make_shared(*this, metadata_snapshot); } bool StorageBuffer::mayBenefitFromIndexForIn( - const ASTPtr & left_in_operand, const Context & query_context, const StorageMetadataPtr & /*metadata_snapshot*/) const + const ASTPtr & left_in_operand, ContextPtr query_context, const StorageMetadataPtr & /*metadata_snapshot*/) const { if (!destination_id) return false; @@ -608,7 +648,7 @@ bool StorageBuffer::mayBenefitFromIndexForIn( void StorageBuffer::startup() { - if (buffer_context.getSettingsRef().readonly) + if (getContext()->getSettingsRef().readonly) { LOG_WARNING(log, "Storage {} is run with readonly settings, it will not be able to insert data. Set appropriate buffer_profile to fix this.", getName()); } @@ -627,7 +667,7 @@ void StorageBuffer::shutdown() try { - optimize(nullptr /*query*/, getInMemoryMetadataPtr(), {} /*partition*/, false /*final*/, false /*deduplicate*/, {}, buffer_context); + optimize(nullptr /*query*/, getInMemoryMetadataPtr(), {} /*partition*/, false /*final*/, false /*deduplicate*/, {}, getContext()); } catch (...) { @@ -653,7 +693,7 @@ bool StorageBuffer::optimize( bool final, bool deduplicate, const Names & /* deduplicate_by_columns */, - const Context & /*context*/) + ContextPtr /*context*/) { if (partition) throw Exception("Partition cannot be specified when optimizing table of type Buffer", ErrorCodes::NOT_IMPLEMENTED); @@ -672,13 +712,13 @@ bool StorageBuffer::supportsPrewhere() const { if (!destination_id) return false; - auto dest = DatabaseCatalog::instance().tryGetTable(destination_id, buffer_context); + auto dest = DatabaseCatalog::instance().tryGetTable(destination_id, getContext()); if (dest && dest.get() != this) return dest->supportsPrewhere(); return false; } -bool StorageBuffer::checkThresholds(const Buffer & buffer, time_t current_time, size_t additional_rows, size_t additional_bytes) const +bool StorageBuffer::checkThresholds(const Buffer & buffer, bool direct, time_t current_time, size_t additional_rows, size_t additional_bytes) const { time_t time_passed = 0; if (buffer.first_write_time) @@ -687,11 +727,11 @@ bool StorageBuffer::checkThresholds(const Buffer & buffer, time_t current_time, size_t rows = buffer.data.rows() + additional_rows; size_t bytes = buffer.data.bytes() + additional_bytes; - return checkThresholdsImpl(rows, bytes, time_passed); + return checkThresholdsImpl(direct, rows, bytes, time_passed); } -bool StorageBuffer::checkThresholdsImpl(size_t rows, size_t bytes, time_t time_passed) const +bool StorageBuffer::checkThresholdsImpl(bool direct, size_t rows, size_t bytes, time_t time_passed) const { if (time_passed > min_thresholds.time && rows > min_thresholds.rows && bytes > min_thresholds.bytes) { @@ -717,6 +757,27 @@ bool StorageBuffer::checkThresholdsImpl(size_t rows, size_t bytes, time_t time_p return true; } + if (!direct) + { + if (flush_thresholds.time && time_passed > flush_thresholds.time) + { + ProfileEvents::increment(ProfileEvents::StorageBufferPassedTimeFlushThreshold); + return true; + } + + if (flush_thresholds.rows && rows > flush_thresholds.rows) + { + 
ProfileEvents::increment(ProfileEvents::StorageBufferPassedRowsFlushThreshold); + return true; + } + + if (flush_thresholds.bytes && bytes > flush_thresholds.bytes) + { + ProfileEvents::increment(ProfileEvents::StorageBufferPassedBytesFlushThreshold); + return true; + } + } + return false; } @@ -737,9 +798,9 @@ void StorageBuffer::flushBuffer(Buffer & buffer, bool check_thresholds, bool loc size_t bytes = 0; time_t time_passed = 0; - std::unique_lock lock(buffer.mutex, std::defer_lock); + std::optional> lock; if (!locked) - lock.lock(); + lock.emplace(buffer.lockForReading()); block_to_write = buffer.data.cloneEmpty(); @@ -750,7 +811,7 @@ void StorageBuffer::flushBuffer(Buffer & buffer, bool check_thresholds, bool loc if (check_thresholds) { - if (!checkThresholdsImpl(rows, bytes, time_passed)) + if (!checkThresholdsImpl(/* direct= */false, rows, bytes, time_passed)) return; } else @@ -769,7 +830,7 @@ void StorageBuffer::flushBuffer(Buffer & buffer, bool check_thresholds, bool loc if (!destination_id) { - LOG_TRACE(log, "Flushing buffer with {} rows (discarded), {} bytes, age {} seconds {}.", rows, bytes, time_passed, (check_thresholds ? "(bg)" : "(direct)")); + LOG_DEBUG(log, "Flushing buffer with {} rows (discarded), {} bytes, age {} seconds {}.", rows, bytes, time_passed, (check_thresholds ? "(bg)" : "(direct)")); return; } @@ -783,7 +844,7 @@ void StorageBuffer::flushBuffer(Buffer & buffer, bool check_thresholds, bool loc Stopwatch watch; try { - writeBlockToDestination(block_to_write, DatabaseCatalog::instance().tryGetTable(destination_id, buffer_context)); + writeBlockToDestination(block_to_write, DatabaseCatalog::instance().tryGetTable(destination_id, getContext())); if (reset_block_structure) buffer.data.clear(); } @@ -806,7 +867,7 @@ void StorageBuffer::flushBuffer(Buffer & buffer, bool check_thresholds, bool loc } UInt64 milliseconds = watch.elapsedMilliseconds(); - LOG_TRACE(log, "Flushing buffer with {} rows, {} bytes, age {} seconds, took {} ms {}.", rows, bytes, time_passed, milliseconds, (check_thresholds ? "(bg)" : "(direct)")); + LOG_DEBUG(log, "Flushing buffer with {} rows, {} bytes, age {} seconds, took {} ms {}.", rows, bytes, time_passed, milliseconds, (check_thresholds ? "(bg)" : "(direct)")); } @@ -865,8 +926,8 @@ void StorageBuffer::writeBlockToDestination(const Block & block, StoragePtr tabl for (const auto & column : block_to_write) list_of_columns->children.push_back(std::make_shared(column.name)); - auto insert_context = Context(buffer_context); - insert_context.makeQueryContext(); + auto insert_context = Context::createCopy(getContext()); + insert_context->makeQueryContext(); InterpreterInsertQuery interpreter{insert, insert_context, allow_materialized}; @@ -907,7 +968,7 @@ void StorageBuffer::reschedule() /// try_to_lock is also ok for background flush, since if there is /// INSERT contended, then the reschedule will be done after /// INSERT will be done. 
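The locking helpers introduced in this hunk replace raw std::mutex usage: lockForReading() and lockForWriting() both go through Buffer::lockImpl(), whose purpose is to attribute lock-wait time to the new StorageBufferLayerLock* profile events, while tryLock() (used just below) never blocks. A freestanding approximation of the timed-lock pattern, with a plain std::atomic counter standing in for ProfileEvents::increment():

```cpp
#include <atomic>
#include <chrono>
#include <cstdint>
#include <mutex>

std::atomic<uint64_t> lock_wait_ms{0}; // stand-in for a profile event counter

std::unique_lock<std::mutex> timedLock(std::mutex & m)
{
    std::unique_lock<std::mutex> lock(m, std::defer_lock);
    const auto start = std::chrono::steady_clock::now();
    lock.lock(); // this is the (possibly long) wait being measured
    const auto elapsed = std::chrono::steady_clock::now() - start;
    lock_wait_ms += std::chrono::duration_cast<std::chrono::milliseconds>(elapsed).count();
    return lock; // std::unique_lock is movable, so ownership transfers to the caller
}
```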
- std::unique_lock lock(buffer.mutex, std::try_to_lock); + std::unique_lock lock(buffer.tryLock()); if (lock.owns_lock()) { min_first_write_time = buffer.first_write_time; @@ -927,9 +988,9 @@ void StorageBuffer::reschedule() flush_handle->scheduleAfter(std::min(min, max) * 1000); } -void StorageBuffer::checkAlterIsPossible(const AlterCommands & commands, const Context & context) const +void StorageBuffer::checkAlterIsPossible(const AlterCommands & commands, ContextPtr local_context) const { - auto name_deps = getDependentViewsByColumn(context); + auto name_deps = getDependentViewsByColumn(local_context); for (const auto & command : commands) { if (command.type != AlterCommand::Type::ADD_COLUMN && command.type != AlterCommand::Type::MODIFY_COLUMN @@ -937,7 +998,7 @@ void StorageBuffer::checkAlterIsPossible(const AlterCommands & commands, const C throw Exception( "Alter of type '" + alterTypeToString(command.type) + "' is not supported by storage " + getName(), ErrorCodes::NOT_IMPLEMENTED); - if (command.type == AlterCommand::Type::DROP_COLUMN) + if (command.type == AlterCommand::Type::DROP_COLUMN && !command.clear) { const auto & deps_mv = name_deps[command.column_name]; if (!deps_mv.empty()) @@ -954,7 +1015,7 @@ void StorageBuffer::checkAlterIsPossible(const AlterCommands & commands, const C std::optional StorageBuffer::totalRows(const Settings & settings) const { std::optional underlying_rows; - auto underlying = DatabaseCatalog::instance().tryGetTable(destination_id, buffer_context); + auto underlying = DatabaseCatalog::instance().tryGetTable(destination_id, getContext()); if (underlying) underlying_rows = underlying->totalRows(settings); @@ -964,7 +1025,7 @@ std::optional StorageBuffer::totalRows(const Settings & settings) const UInt64 rows = 0; for (const auto & buffer : buffers) { - std::lock_guard lock(buffer.mutex); + const auto lock(buffer.lockForReading()); rows += buffer.data.rows(); } return rows + *underlying_rows; @@ -975,26 +1036,26 @@ std::optional StorageBuffer::totalBytes(const Settings & /*settings*/) c UInt64 bytes = 0; for (const auto & buffer : buffers) { - std::lock_guard lock(buffer.mutex); + const auto lock(buffer.lockForReading()); bytes += buffer.data.allocatedBytes(); } return bytes; } -void StorageBuffer::alter(const AlterCommands & params, const Context & context, TableLockHolder &) +void StorageBuffer::alter(const AlterCommands & params, ContextPtr local_context, TableLockHolder &) { auto table_id = getStorageID(); - checkAlterIsPossible(params, context); + checkAlterIsPossible(params, local_context); auto metadata_snapshot = getInMemoryMetadataPtr(); /// Flush all buffers to storages, so that no non-empty blocks of the old /// structure remain. Structure of empty blocks will be updated during first /// insert. 
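Just above, totalRows() and totalBytes() walk every shard buffer and hold the new read lock only long enough to read a single counter, so a concurrent INSERT into another shard is never blocked for long. A condensed standalone version of that aggregation, with a hypothetical ShardBuffer standing in for StorageBuffer::Buffer:

```cpp
#include <cstdint>
#include <mutex>
#include <vector>

struct ShardBuffer
{
    mutable std::mutex mutex; // lockable from const methods, as in the patch
    uint64_t rows = 0;
};

// Sum buffered rows across shards, locking each shard only briefly.
uint64_t bufferedRows(const std::vector<ShardBuffer> & shards)
{
    uint64_t total = 0;
    for (const auto & shard : shards)
    {
        std::lock_guard<std::mutex> lock(shard.mutex);
        total += shard.rows;
    }
    return total;
}
```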
-    optimize({} /*query*/, metadata_snapshot, {} /*partition_id*/, false /*final*/, false /*deduplicate*/, {}, context);
+    optimize({} /*query*/, metadata_snapshot, {} /*partition_id*/, false /*final*/, false /*deduplicate*/, {}, local_context);
     StorageInMemoryMetadata new_metadata = *metadata_snapshot;
-    params.apply(new_metadata, context);
-    DatabaseCatalog::instance().getDatabase(table_id.database_name)->alterTable(context, table_id, new_metadata);
+    params.apply(new_metadata, local_context);
+    DatabaseCatalog::instance().getDatabase(table_id.database_name)->alterTable(local_context, table_id, new_metadata);
     setInMemoryMetadata(new_metadata);
 }
@@ -1005,25 +1066,26 @@ void registerStorageBuffer(StorageFactory & factory)
   *
   * db, table - in which table to put data from buffer.
   * num_buckets - level of parallelism.
-  * min_time, max_time, min_rows, max_rows, min_bytes, max_bytes - conditions for flushing the buffer.
+  * min_time, max_time, min_rows, max_rows, min_bytes, max_bytes - conditions for flushing the buffer,
+  * flush_time, flush_rows, flush_bytes - conditions for flushing the buffer in the background only.
   */
 factory.registerStorage("Buffer", [](const StorageFactory::Arguments & args)
 {
     ASTs & engine_args = args.engine_args;
-    if (engine_args.size() != 9)
-        throw Exception("Storage Buffer requires 9 parameters: " " destination_database, destination_table, num_buckets, min_time, max_time, min_rows, max_rows, min_bytes, max_bytes.",
+    if (engine_args.size() < 9 || engine_args.size() > 12)
+        throw Exception("Storage Buffer requires from 9 to 12 parameters: " " destination_database, destination_table, num_buckets, min_time, max_time, min_rows, max_rows, min_bytes, max_bytes[, flush_time, flush_rows, flush_bytes].",
         ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH);
     // Table and database name arguments accept expressions, evaluate them.
-    engine_args[0] = evaluateConstantExpressionForDatabaseName(engine_args[0], args.local_context);
-    engine_args[1] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[1], args.local_context);
+    engine_args[0] = evaluateConstantExpressionForDatabaseName(engine_args[0], args.getLocalContext());
+    engine_args[1] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[1], args.getLocalContext());
    // After we evaluated all expressions, check that all arguments are
    // literals.
- for (size_t i = 0; i < 9; i++) + for (size_t i = 0; i < engine_args.size(); i++) { if (!typeid_cast(engine_args[i].get())) { @@ -1033,23 +1095,35 @@ void registerStorageBuffer(StorageFactory & factory) } } - String destination_database = engine_args[0]->as().value.safeGet(); - String destination_table = engine_args[1]->as().value.safeGet(); + size_t i = 0; - UInt64 num_buckets = applyVisitor(FieldVisitorConvertToNumber(), engine_args[2]->as().value); + String destination_database = engine_args[i++]->as().value.safeGet(); + String destination_table = engine_args[i++]->as().value.safeGet(); - Int64 min_time = applyVisitor(FieldVisitorConvertToNumber(), engine_args[3]->as().value); - Int64 max_time = applyVisitor(FieldVisitorConvertToNumber(), engine_args[4]->as().value); - UInt64 min_rows = applyVisitor(FieldVisitorConvertToNumber(), engine_args[5]->as().value); - UInt64 max_rows = applyVisitor(FieldVisitorConvertToNumber(), engine_args[6]->as().value); - UInt64 min_bytes = applyVisitor(FieldVisitorConvertToNumber(), engine_args[7]->as().value); - UInt64 max_bytes = applyVisitor(FieldVisitorConvertToNumber(), engine_args[8]->as().value); + UInt64 num_buckets = applyVisitor(FieldVisitorConvertToNumber(), engine_args[i++]->as().value); + + StorageBuffer::Thresholds min; + StorageBuffer::Thresholds max; + StorageBuffer::Thresholds flush; + + min.time = applyVisitor(FieldVisitorConvertToNumber(), engine_args[i++]->as().value); + max.time = applyVisitor(FieldVisitorConvertToNumber(), engine_args[i++]->as().value); + min.rows = applyVisitor(FieldVisitorConvertToNumber(), engine_args[i++]->as().value); + max.rows = applyVisitor(FieldVisitorConvertToNumber(), engine_args[i++]->as().value); + min.bytes = applyVisitor(FieldVisitorConvertToNumber(), engine_args[i++]->as().value); + max.bytes = applyVisitor(FieldVisitorConvertToNumber(), engine_args[i++]->as().value); + if (engine_args.size() > i) + flush.time = applyVisitor(FieldVisitorConvertToNumber(), engine_args[i++]->as().value); + if (engine_args.size() > i) + flush.rows = applyVisitor(FieldVisitorConvertToNumber(), engine_args[i++]->as().value); + if (engine_args.size() > i) + flush.bytes = applyVisitor(FieldVisitorConvertToNumber(), engine_args[i++]->as().value); /// If destination_id is not set, do not write data from the buffer, but simply empty the buffer. 
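The argument handling above replaces nine fixed positions with one running index i so that the trailing flush_time/flush_rows/flush_bytes parameters can be omitted. A simplified, self-contained rendering of the same scheme, using plain integers where the real code applies FieldVisitorConvertToNumber to AST literals:

```cpp
#include <cstddef>
#include <cstdint>
#include <stdexcept>
#include <vector>

struct Thresholds { int64_t time = 0; int64_t rows = 0; int64_t bytes = 0; };

void parseBufferArgs(const std::vector<int64_t> & args, Thresholds & min, Thresholds & max, Thresholds & flush)
{
    // Mirrors the 9..12 arity check; in the real engine the first three
    // positions are destination_database, destination_table and num_buckets.
    if (args.size() < 9 || args.size() > 12)
        throw std::invalid_argument("Storage Buffer requires from 9 to 12 parameters");

    size_t i = 3; // skip the three non-threshold arguments
    min.time  = args[i++]; max.time  = args[i++];
    min.rows  = args[i++]; max.rows  = args[i++];
    min.bytes = args[i++]; max.bytes = args[i++];

    // Optional flush_* thresholds keep their zero ("disabled") defaults when omitted.
    if (args.size() > i) flush.time  = args[i++];
    if (args.size() > i) flush.rows  = args[i++];
    if (args.size() > i) flush.bytes = args[i++];
}
```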
StorageID destination_id = StorageID::createEmpty();
     if (!destination_table.empty())
     {
-        destination_id.database_name = args.context.resolveDatabase(destination_database);
+        destination_id.database_name = args.getContext()->resolveDatabase(destination_database);
         destination_id.table_name = destination_table;
     }
@@ -1057,12 +1131,11 @@ void registerStorageBuffer(StorageFactory & factory)
         args.table_id,
         args.columns,
         args.constraints,
-        args.context,
+        args.getContext(),
         num_buckets,
-        StorageBuffer::Thresholds{min_time, min_rows, min_bytes},
-        StorageBuffer::Thresholds{max_time, max_rows, max_bytes},
+        min, max, flush,
         destination_id,
-        static_cast(args.local_context.getSettingsRef().insert_allow_materialized_columns));
+        static_cast(args.getLocalContext()->getSettingsRef().insert_allow_materialized_columns));
 },
 {
     .supports_parallel_insert = true,
diff --git a/src/Storages/StorageBuffer.h b/src/Storages/StorageBuffer.h
index f6904ddb0e4..1747c024a74 100644
--- a/src/Storages/StorageBuffer.h
+++ b/src/Storages/StorageBuffer.h
@@ -1,15 +1,17 @@
 #pragma once
-#include
-#include
-#include
-#include
-#include
 #include
-#include
+#include
 #include
+#include
+#include
+
 #include
+#include
+#include
+#include
+
 namespace Poco { class Logger; }
@@ -33,33 +35,36 @@ namespace DB
  * Thresholds can be exceeded. For example, if max_rows = 1 000 000, the buffer already had 500 000 rows,
  * and a part of 800 000 rows is added, then there will be 1 300 000 rows in the buffer, and then such a block will be written to the subordinate table.
  *
+ * There are also separate thresholds for flush; those thresholds are checked only for a non-direct (background) flush.
+ * This may be useful if you do not want to add extra latency for INSERT queries:
+ * you can set max_rows=1e6 and flush_rows=500e3, and then every 500e3 rows the buffer will be flushed in the background only.
+ *
  * When you destroy a Buffer table, all remaining data is flushed to the subordinate table.
 * The data in the buffer is not replicated, not logged to disk, not indexed. With a rough restart of the server, the data is lost.
 */
-class StorageBuffer final : public ext::shared_ptr_helper, public IStorage
+class StorageBuffer final : public ext::shared_ptr_helper, public IStorage, WithContext
 {
 friend struct ext::shared_ptr_helper;
 friend class BufferSource;
 friend class BufferBlockOutputStream;
public:
-    /// Thresholds.
     struct Thresholds
     {
-        time_t time; /// The number of seconds from the insertion of the first row into the block.
-        size_t rows; /// The number of rows in the block.
-        size_t bytes; /// The number of (uncompressed) bytes in the block.
+        time_t time = 0; /// The number of seconds from the insertion of the first row into the block.
+        size_t rows = 0; /// The number of rows in the block.
+        size_t bytes = 0; /// The number of (uncompressed) bytes in the block.
}; std::string getName() const override { return "Buffer"; } - QueryProcessingStage::Enum getQueryProcessingStage(const Context &, QueryProcessingStage::Enum /*to_stage*/, SelectQueryInfo &) const override; + QueryProcessingStage::Enum getQueryProcessingStage(ContextPtr, QueryProcessingStage::Enum /*to_stage*/, SelectQueryInfo &) const override; Pipe read( const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; @@ -69,7 +74,7 @@ public: const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; @@ -78,7 +83,7 @@ public: bool supportsSubcolumns() const override { return true; } - BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, const Context & context) override; + BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, ContextPtr context) override; void startup() override; /// Flush all buffers into the subordinate table and stop background thread. @@ -90,19 +95,19 @@ public: bool final, bool deduplicate, const Names & deduplicate_by_columns, - const Context & context) override; + ContextPtr context) override; bool supportsSampling() const override { return true; } bool supportsPrewhere() const override; bool supportsFinal() const override { return true; } bool supportsIndexForIn() const override { return true; } - bool mayBenefitFromIndexForIn(const ASTPtr & left_in_operand, const Context & query_context, const StorageMetadataPtr & metadata_snapshot) const override; + bool mayBenefitFromIndexForIn(const ASTPtr & left_in_operand, ContextPtr query_context, const StorageMetadataPtr & metadata_snapshot) const override; - void checkAlterIsPossible(const AlterCommands & commands, const Context & context) const override; + void checkAlterIsPossible(const AlterCommands & commands, ContextPtr context) const override; /// The structure of the subordinate table is not checked and does not change. - void alter(const AlterCommands & params, const Context & context, TableLockHolder & table_lock_holder) override; + void alter(const AlterCommands & params, ContextPtr context, TableLockHolder & table_lock_holder) override; std::optional totalRows(const Settings & settings) const override; std::optional totalBytes(const Settings & settings) const override; @@ -112,13 +117,19 @@ public: private: - const Context & buffer_context; - struct Buffer { time_t first_write_time = 0; Block data; + + std::unique_lock lockForReading() const; + std::unique_lock lockForWriting() const; + std::unique_lock tryLock() const; + + private: mutable std::mutex mutex; + + std::unique_lock lockImpl(bool read) const; }; /// There are `num_shards` of independent buffers. @@ -127,6 +138,7 @@ private: const Thresholds min_thresholds; const Thresholds max_thresholds; + const Thresholds flush_thresholds; StorageID destination_id; bool allow_materialized; @@ -145,8 +157,8 @@ private: /// are exceeded. If reset_block_structure is set - clears inner block /// structure inside buffer (useful in OPTIMIZE and ALTER). 
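Taken together, the rules behind checkThresholds()/checkThresholdsImpl() (declared just below, implemented earlier in the .cpp hunk) appear to be: all min_* limits must be exceeded at once, any single max_* limit suffices, and the flush_* limits are consulted only when the check does not happen on the direct INSERT path. A compact restatement of that logic in plain C++, as understood from this patch:

```cpp
#include <cstdint>

struct Thresholds { int64_t time = 0; int64_t rows = 0; int64_t bytes = 0; };

// `direct` is true for the check performed while inserting, false for the
// periodic background flush; only the latter honours the flush_* limits.
bool shouldFlush(const Thresholds & min, const Thresholds & max, const Thresholds & flush,
                 bool direct, int64_t time_passed, int64_t rows, int64_t bytes)
{
    if (time_passed > min.time && rows > min.rows && bytes > min.bytes)
        return true; // all minimum thresholds exceeded together

    if (time_passed > max.time || rows > max.rows || bytes > max.bytes)
        return true; // any maximum threshold exceeded on its own

    if (!direct)
    {
        // Background-only thresholds: zero means "disabled", so INSERT
        // latency is unaffected unless these are configured explicitly.
        if (flush.time && time_passed > flush.time) return true;
        if (flush.rows && rows > flush.rows) return true;
        if (flush.bytes && bytes > flush.bytes) return true;
    }
    return false;
}
```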
void flushBuffer(Buffer & buffer, bool check_thresholds, bool locked = false, bool reset_block_structure = false); - bool checkThresholds(const Buffer & buffer, time_t current_time, size_t additional_rows = 0, size_t additional_bytes = 0) const; - bool checkThresholdsImpl(size_t rows, size_t bytes, time_t time_passed) const; + bool checkThresholds(const Buffer & buffer, bool direct, time_t current_time, size_t additional_rows = 0, size_t additional_bytes = 0) const; + bool checkThresholdsImpl(bool direct, size_t rows, size_t bytes, time_t time_passed) const; /// `table` argument is passed, as it is sometimes evaluated beforehand. It must match the `destination`. void writeBlockToDestination(const Block & block, StoragePtr table); @@ -165,10 +177,11 @@ protected: const StorageID & table_id_, const ColumnsDescription & columns_, const ConstraintsDescription & constraints_, - const Context & context_, + ContextPtr context_, size_t num_shards_, const Thresholds & min_thresholds_, const Thresholds & max_thresholds_, + const Thresholds & flush_thresholds_, const StorageID & destination_id, bool allow_materialized_); }; diff --git a/src/Storages/StorageDictionary.cpp b/src/Storages/StorageDictionary.cpp index 32fe7b4c026..16818c9ea18 100644 --- a/src/Storages/StorageDictionary.cpp +++ b/src/Storages/StorageDictionary.cpp @@ -5,11 +5,13 @@ #include #include #include +#include #include #include #include #include #include +#include namespace DB @@ -19,6 +21,7 @@ namespace ErrorCodes extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH; extern const int THERE_IS_NO_COLUMN; extern const int CANNOT_DETACH_DICTIONARY_AS_TABLE; + extern const int DICTIONARY_ALREADY_EXISTS; } namespace @@ -89,19 +92,14 @@ String StorageDictionary::generateNamesAndTypesDescription(const NamesAndTypesLi return ss.str(); } -String StorageDictionary::resolvedDictionaryName() const -{ - if (location == Location::SameDatabaseAndNameAsDictionary) - return dictionary_name; - return DatabaseCatalog::instance().resolveDictionaryName(dictionary_name); -} - StorageDictionary::StorageDictionary( const StorageID & table_id_, const String & dictionary_name_, const ColumnsDescription & columns_, - Location location_) + Location location_, + ContextPtr context_) : IStorage(table_id_) + , WithContext(context_->getGlobalContext()) , dictionary_name(dictionary_name_) , location(location_) { @@ -112,18 +110,52 @@ StorageDictionary::StorageDictionary( StorageDictionary::StorageDictionary( - const StorageID & table_id_, const String & dictionary_name_, const DictionaryStructure & dictionary_structure_, Location location_) - : StorageDictionary(table_id_, dictionary_name_, ColumnsDescription{getNamesAndTypes(dictionary_structure_)}, location_) + const StorageID & table_id_, + const String & dictionary_name_, + const DictionaryStructure & dictionary_structure_, + Location location_, + ContextPtr context_) + : StorageDictionary( + table_id_, + dictionary_name_, + ColumnsDescription{getNamesAndTypes(dictionary_structure_)}, + location_, + context_) { } +StorageDictionary::StorageDictionary( + const StorageID & table_id, + LoadablesConfigurationPtr dictionary_configuration, + ContextPtr context_) + : StorageDictionary( + table_id, + table_id.getFullNameNotQuoted(), + context_->getExternalDictionariesLoader().getDictionaryStructure(*dictionary_configuration), + Location::SameDatabaseAndNameAsDictionary, + context_) +{ + configuration = dictionary_configuration; + + auto repository = std::make_unique(*this); + remove_repository_callback = 
context_->getExternalDictionariesLoader().addConfigRepository(std::move(repository)); +} + +StorageDictionary::~StorageDictionary() +{ + removeDictionaryConfigurationFromRepository(); +} void StorageDictionary::checkTableCanBeDropped() const { if (location == Location::SameDatabaseAndNameAsDictionary) - throw Exception("Cannot drop/detach dictionary " + backQuote(dictionary_name) + " as table, use DROP DICTIONARY or DETACH DICTIONARY query instead", ErrorCodes::CANNOT_DETACH_DICTIONARY_AS_TABLE); + throw Exception(ErrorCodes::CANNOT_DETACH_DICTIONARY_AS_TABLE, + "Cannot drop/detach dictionary {} as table, use DROP DICTIONARY or DETACH DICTIONARY query instead", + dictionary_name); if (location == Location::DictionaryDatabase) - throw Exception("Cannot drop/detach table " + getStorageID().getFullTableName() + " from a database with DICTIONARY engine", ErrorCodes::CANNOT_DETACH_DICTIONARY_AS_TABLE); + throw Exception(ErrorCodes::CANNOT_DETACH_DICTIONARY_AS_TABLE, + "Cannot drop/detach table from a database with DICTIONARY engine, use DROP DICTIONARY or DETACH DICTIONARY query instead", + dictionary_name); } void StorageDictionary::checkTableCanBeDetached() const @@ -135,38 +167,130 @@ Pipe StorageDictionary::read( const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & /*query_info*/, - const Context & context, + ContextPtr local_context, QueryProcessingStage::Enum /*processed_stage*/, const size_t max_block_size, const unsigned /*threads*/) { - auto dictionary = context.getExternalDictionariesLoader().getDictionary(resolvedDictionaryName()); + auto dictionary = getContext()->getExternalDictionariesLoader().getDictionary(dictionary_name, local_context); auto stream = dictionary->getBlockInputStream(column_names, max_block_size); /// TODO: update dictionary interface for processors. return Pipe(std::make_shared(stream)); } +void StorageDictionary::shutdown() +{ + removeDictionaryConfigurationFromRepository(); +} + +void StorageDictionary::startup() +{ + auto global_context = getContext(); + + bool lazy_load = global_context->getConfigRef().getBool("dictionaries_lazy_load", true); + if (!lazy_load) + { + auto & external_dictionaries_loader = global_context->getExternalDictionariesLoader(); + + /// reloadConfig() is called here to force loading the dictionary. 
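removeDictionaryConfigurationFromRepository() is reachable from shutdown() and from the destructor above, so its implementation just below guards the scope_guard reset with a flag to make the cleanup run at most once. A freestanding sketch of that idempotent-cleanup pattern, with std::function standing in for ext::scope_guard and an atomic exchange as one race-free way to implement the guard:

```cpp
#include <atomic>
#include <functional>
#include <utility>

struct ScopedRegistration
{
    explicit ScopedRegistration(std::function<void()> cleanup_) : cleanup(std::move(cleanup_)) {}

    // Safe to call any number of times, from shutdown() and the destructor alike.
    void remove()
    {
        if (executed.exchange(true))
            return; // somebody already ran the cleanup
        if (cleanup)
            cleanup();
    }

    ~ScopedRegistration() { remove(); }

private:
    std::function<void()> cleanup; // stand-in for ext::scope_guard
    std::atomic<bool> executed{false};
};
```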
+        external_dictionaries_loader.reloadConfig(getStorageID().getInternalDictionaryName());
+    }
+}
+
+void StorageDictionary::removeDictionaryConfigurationFromRepository()
+{
+    if (remove_repository_callback_executed)
+        return;
+
+    remove_repository_callback_executed = true;
+    remove_repository_callback.reset();
+}
+
+Poco::Timestamp StorageDictionary::getUpdateTime() const
+{
+    std::lock_guard lock(dictionary_config_mutex);
+    return update_time;
+}
+
+LoadablesConfigurationPtr StorageDictionary::getConfiguration() const
+{
+    std::lock_guard lock(dictionary_config_mutex);
+    return configuration;
+}
+
+void StorageDictionary::renameInMemory(const StorageID & new_table_id)
+{
+    if (configuration)
+    {
+        configuration->setString("dictionary.database", new_table_id.database_name);
+        configuration->setString("dictionary.name", new_table_id.table_name);
+
+        auto & external_dictionaries_loader = getContext()->getExternalDictionariesLoader();
+        external_dictionaries_loader.reloadConfig(getStorageID().getInternalDictionaryName());
+
+        auto result = external_dictionaries_loader.getLoadResult(getStorageID().getInternalDictionaryName());
+        if (!result.object)
+            return;
+
+        const auto dictionary = std::static_pointer_cast(result.object);
+        dictionary->updateDictionaryName(new_table_id);
+    }
+
+    IStorage::renameInMemory(new_table_id);
+}
 void registerStorageDictionary(StorageFactory & factory)
 {
     factory.registerStorage("Dictionary", [](const StorageFactory::Arguments & args)
     {
-        if (args.engine_args.size() != 1)
-            throw Exception("Storage Dictionary requires single parameter: name of dictionary", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH);
+        auto query = args.query;
-        args.engine_args[0] = evaluateConstantExpressionOrIdentifierAsLiteral(args.engine_args[0], args.local_context);
-        String dictionary_name = args.engine_args[0]->as().value.safeGet();
+        auto local_context = args.getLocalContext();
-        if (!args.attach)
+        if (query.is_dictionary)
         {
-            auto resolved = DatabaseCatalog::instance().resolveDictionaryName(dictionary_name);
-            const auto & dictionary = args.context.getExternalDictionariesLoader().getDictionary(resolved);
-            const DictionaryStructure & dictionary_structure = dictionary->getStructure();
-            checkNamesAndTypesCompatibleWithDictionary(dictionary_name, args.columns, dictionary_structure);
-        }
+            auto dictionary_id = args.table_id;
+            auto & external_dictionaries_loader = local_context->getExternalDictionariesLoader();
-        return StorageDictionary::create(args.table_id, dictionary_name, args.columns, StorageDictionary::Location::Custom);
+            /// A dictionary with the same full name could be defined in *.xml config files.
+            if (external_dictionaries_loader.getCurrentStatus(dictionary_id.getFullNameNotQuoted()) != ExternalLoader::Status::NOT_EXIST)
+                throw Exception(ErrorCodes::DICTIONARY_ALREADY_EXISTS, "Dictionary {} already exists.", dictionary_id.getFullNameNotQuoted());
+
+            /// Create dictionary storage that owns underlying dictionary
+            auto abstract_dictionary_configuration = getDictionaryConfigurationFromAST(args.query, local_context, dictionary_id.database_name);
+            auto result_storage = StorageDictionary::create(dictionary_id, abstract_dictionary_configuration, local_context);
+
+            bool lazy_load = local_context->getConfigRef().getBool("dictionaries_lazy_load", true);
+            if (!args.attach && !lazy_load)
+            {
+                /// load() is called here to force loading the dictionary, wait until the loading is finished,
+                /// and throw an exception if the loading fails.
+ external_dictionaries_loader.load(dictionary_id.getInternalDictionaryName()); + } + + return result_storage; + } + else + { + /// Create dictionary storage that is view of underlying dictionary + + if (args.engine_args.size() != 1) + throw Exception("Storage Dictionary requires single parameter: name of dictionary", + ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); + + args.engine_args[0] = evaluateConstantExpressionOrIdentifierAsLiteral(args.engine_args[0], local_context); + String dictionary_name = args.engine_args[0]->as().value.safeGet(); + + if (!args.attach) + { + const auto & dictionary = args.getContext()->getExternalDictionariesLoader().getDictionary(dictionary_name, args.getContext()); + const DictionaryStructure & dictionary_structure = dictionary->getStructure(); + checkNamesAndTypesCompatibleWithDictionary(dictionary_name, args.columns, dictionary_structure); + } + + return StorageDictionary::create(args.table_id, dictionary_name, args.columns, StorageDictionary::Location::Custom, local_context); + } }); } diff --git a/src/Storages/StorageDictionary.h b/src/Storages/StorageDictionary.h index 589ff7d4654..c22c337d40a 100644 --- a/src/Storages/StorageDictionary.h +++ b/src/Storages/StorageDictionary.h @@ -1,19 +1,26 @@ #pragma once -#include +#include #include +#include +#include + namespace DB { struct DictionaryStructure; +class TableFunctionDictionary; -class StorageDictionary final : public ext::shared_ptr_helper, public IStorage +class StorageDictionary final : public ext::shared_ptr_helper, public IStorage, public WithContext { friend struct ext::shared_ptr_helper; + friend class TableFunctionDictionary; public: std::string getName() const override { return "Dictionary"; } + ~StorageDictionary() override; + void checkTableCanBeDropped() const override; void checkTableCanBeDetached() const override; @@ -21,7 +28,7 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned threads) override; @@ -29,8 +36,16 @@ public: static NamesAndTypesList getNamesAndTypes(const DictionaryStructure & dictionary_structure); static String generateNamesAndTypesDescription(const NamesAndTypesList & list); - const String & dictionaryName() const { return dictionary_name; } - String resolvedDictionaryName() const; + bool isDictionary() const override { return true; } + void shutdown() override; + void startup() override; + + void renameInMemory(const StorageID & new_table_id) override; + + Poco::Timestamp getUpdateTime() const; + LoadablesConfigurationPtr getConfiguration() const; + + const String & getDictionaryName() const { return dictionary_name; } /// Specifies where the table is located relative to the dictionary. 
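Both startup() earlier in this hunk and the CREATE path above branch on the server-wide dictionaries_lazy_load setting, but only CREATE DICTIONARY blocks on load(), so a broken definition fails the query immediately instead of surfacing on first use. Schematically, under a hypothetical minimal loader interface (the real ExternalDictionariesLoader API is much wider):

```cpp
#include <functional>
#include <string>

// Hypothetical interface, for illustration only.
struct Loader
{
    std::function<void(const std::string &)> reload_config; // triggers (re)loading
    std::function<void(const std::string &)> load;          // blocks; throws on failure
};

// Server start / ATTACH: kick off the load without blocking startup.
void onStartup(Loader & loader, const std::string & name, bool lazy_load)
{
    if (!lazy_load)
        loader.reload_config(name);
}

// CREATE DICTIONARY: wait for the load so errors reach the client.
void onCreate(Loader & loader, const std::string & name, bool lazy_load, bool attach)
{
    if (!attach && !lazy_load)
        loader.load(name);
}
```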
enum class Location @@ -54,18 +69,33 @@ private: const String dictionary_name; const Location location; -protected: + mutable std::mutex dictionary_config_mutex; + Poco::Timestamp update_time; + LoadablesConfigurationPtr configuration; + + std::atomic remove_repository_callback_executed = false; + ext::scope_guard remove_repository_callback; + + void removeDictionaryConfigurationFromRepository(); + StorageDictionary( const StorageID & table_id_, const String & dictionary_name_, const ColumnsDescription & columns_, - Location location_); + Location location_, + ContextPtr context_); StorageDictionary( const StorageID & table_id_, const String & dictionary_name_, const DictionaryStructure & dictionary_structure, - Location location_); + Location location_, + ContextPtr context_); + + StorageDictionary( + const StorageID & table_id_, + LoadablesConfigurationPtr dictionary_configuration_, + ContextPtr context_); }; } diff --git a/src/Storages/StorageDistributed.cpp b/src/Storages/StorageDistributed.cpp index 2a05d92ace1..a402c3e0218 100644 --- a/src/Storages/StorageDistributed.cpp +++ b/src/Storages/StorageDistributed.cpp @@ -1,8 +1,11 @@ #include #include + #include +#include + #include #include #include @@ -31,6 +34,7 @@ #include #include #include +#include #include #include @@ -39,6 +43,7 @@ #include #include #include +#include #include #include #include @@ -48,12 +53,20 @@ #include #include +#include +#include +#include +#include +#include +#include + #include #include #include #include #include +#include #include @@ -69,6 +82,8 @@ const UInt64 FORCE_OPTIMIZE_SKIP_UNUSED_SHARDS_HAS_SHARDING_KEY = 1; const UInt64 FORCE_OPTIMIZE_SKIP_UNUSED_SHARDS_ALWAYS = 2; const UInt64 DISTRIBUTED_GROUP_BY_NO_MERGE_AFTER_AGGREGATION = 2; + +const UInt64 PARALLEL_DISTRIBUTED_INSERT_SELECT_ALL = 2; } namespace ProfileEvents @@ -83,6 +98,7 @@ namespace DB namespace ErrorCodes { + extern const int LOGICAL_ERROR; extern const int NOT_IMPLEMENTED; extern const int STORAGE_REQUIRES_PARAMETER; extern const int BAD_ARGUMENTS; @@ -205,7 +221,7 @@ std::string makeFormattedListOfShards(const ClusterPtr & cluster) return buf.str(); } -ExpressionActionsPtr buildShardingKeyExpression(const ASTPtr & sharding_key, const Context & context, const NamesAndTypesList & columns, bool project) +ExpressionActionsPtr buildShardingKeyExpression(const ASTPtr & sharding_key, ContextPtr context, const NamesAndTypesList & columns, bool project) { ASTPtr query = sharding_key; auto syntax_result = TreeRewriter(context).analyze(query, columns); @@ -253,7 +269,7 @@ public: void replaceConstantExpressions( ASTPtr & node, - const Context & context, + ContextPtr context, const NamesAndTypesList & columns, ConstStoragePtr storage, const StorageMetadataPtr & metadata_snapshot) @@ -373,7 +389,7 @@ StorageDistributed::StorageDistributed( const String & remote_database_, const String & remote_table_, const String & cluster_name_, - const Context & context_, + ContextPtr context_, const ASTPtr & sharding_key_, const String & storage_policy_name_, const String & relative_data_path_, @@ -381,12 +397,12 @@ StorageDistributed::StorageDistributed( bool attach_, ClusterPtr owned_cluster_) : IStorage(id_) + , WithContext(context_->getGlobalContext()) , remote_database(remote_database_) , remote_table(remote_table_) - , global_context(context_.getGlobalContext()) , log(&Poco::Logger::get("StorageDistributed (" + id_.table_name + ")")) , owned_cluster(std::move(owned_cluster_)) - , cluster_name(global_context.getMacros()->expand(cluster_name_)) + , 
cluster_name(getContext()->getMacros()->expand(cluster_name_)) , has_sharding_key(sharding_key_) , relative_data_path(relative_data_path_) , distributed_settings(distributed_settings_) @@ -399,14 +415,14 @@ StorageDistributed::StorageDistributed( if (sharding_key_) { - sharding_key_expr = buildShardingKeyExpression(sharding_key_, global_context, storage_metadata.getColumns().getAllPhysical(), false); + sharding_key_expr = buildShardingKeyExpression(sharding_key_, getContext(), storage_metadata.getColumns().getAllPhysical(), false); sharding_key_column_name = sharding_key_->getColumnName(); sharding_key_is_deterministic = isExpressionActionsDeterministics(sharding_key_expr); } if (!relative_data_path.empty()) { - storage_policy = global_context.getStoragePolicy(storage_policy_name_); + storage_policy = getContext()->getStoragePolicy(storage_policy_name_); data_volume = storage_policy->getVolume(0); if (storage_policy->getVolumes().size() > 1) LOG_WARNING(log, "Storage policy for Distributed table has multiple volumes. " @@ -416,7 +432,7 @@ StorageDistributed::StorageDistributed( /// Sanity check. Skip check if the table is already created to allow the server to start. if (!attach_ && !cluster_name.empty()) { - size_t num_local_shards = global_context.getCluster(cluster_name)->getLocalShardCount(); + size_t num_local_shards = getContext()->getCluster(cluster_name)->getLocalShardCount(); if (num_local_shards && remote_database == id_.database_name && remote_table == id_.table_name) throw Exception("Distributed table " + id_.table_name + " looks at itself", ErrorCodes::INFINITE_LOOP); } @@ -429,22 +445,23 @@ StorageDistributed::StorageDistributed( const ConstraintsDescription & constraints_, ASTPtr remote_table_function_ptr_, const String & cluster_name_, - const Context & context_, + ContextPtr context_, const ASTPtr & sharding_key_, const String & storage_policy_name_, const String & relative_data_path_, const DistributedSettings & distributed_settings_, bool attach, ClusterPtr owned_cluster_) - : StorageDistributed(id_, columns_, constraints_, String{}, String{}, cluster_name_, context_, sharding_key_, storage_policy_name_, relative_data_path_, distributed_settings_, attach, std::move(owned_cluster_)) + : StorageDistributed(id_, columns_, constraints_, String{}, String{}, cluster_name_, context_, sharding_key_, + storage_policy_name_, relative_data_path_, distributed_settings_, attach, std::move(owned_cluster_)) { remote_table_function_ptr = std::move(remote_table_function_ptr_); } QueryProcessingStage::Enum StorageDistributed::getQueryProcessingStage( - const Context & context, QueryProcessingStage::Enum to_stage, SelectQueryInfo & query_info) const + ContextPtr local_context, QueryProcessingStage::Enum to_stage, SelectQueryInfo & query_info) const { - const auto & settings = context.getSettingsRef(); + const auto & settings = local_context->getSettingsRef(); auto metadata_snapshot = getInMemoryMetadataPtr(); ClusterPtr cluster = getCluster(); @@ -452,18 +469,20 @@ QueryProcessingStage::Enum StorageDistributed::getQueryProcessingStage( /// Always calculate optimized cluster here, to avoid conditions during read() /// (Anyway it will be calculated in the read()) - if (settings.optimize_skip_unused_shards) + if (getClusterQueriedNodes(settings, cluster) > 1 && settings.optimize_skip_unused_shards) { - ClusterPtr optimized_cluster = getOptimizedCluster(context, metadata_snapshot, query_info.query); + ClusterPtr optimized_cluster = getOptimizedCluster(local_context, metadata_snapshot, 
query_info.query); if (optimized_cluster) { - LOG_DEBUG(log, "Skipping irrelevant shards - the query will be sent to the following shards of the cluster (shard numbers): {}", makeFormattedListOfShards(optimized_cluster)); + LOG_DEBUG(log, "Skipping irrelevant shards - the query will be sent to the following shards of the cluster (shard numbers): {}", + makeFormattedListOfShards(optimized_cluster)); cluster = optimized_cluster; - query_info.cluster = cluster; + query_info.optimized_cluster = cluster; } else { - LOG_DEBUG(log, "Unable to figure out irrelevant shards from WHERE/PREWHERE clauses - the query will be sent to all shards of the cluster{}", has_sharding_key ? "" : " (no sharding key)"); + LOG_DEBUG(log, "Unable to figure out irrelevant shards from WHERE/PREWHERE clauses - the query will be sent to all shards of the cluster{}", + has_sharding_key ? "" : " (no sharding key)"); } } @@ -506,14 +525,16 @@ Pipe StorageDistributed::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr local_context, QueryProcessingStage::Enum processed_stage, const size_t max_block_size, const unsigned num_streams) { QueryPlan plan; - read(plan, column_names, metadata_snapshot, query_info, context, processed_stage, max_block_size, num_streams); - return plan.convertToPipe(QueryPlanOptimizationSettings(context.getSettingsRef())); + read(plan, column_names, metadata_snapshot, query_info, local_context, processed_stage, max_block_size, num_streams); + return plan.convertToPipe( + QueryPlanOptimizationSettings::fromContext(local_context), + BuildQueryPipelineSettings::fromContext(local_context)); } void StorageDistributed::read( @@ -521,7 +542,7 @@ void StorageDistributed::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr local_context, QueryProcessingStage::Enum processed_stage, const size_t /*max_block_size*/, const unsigned /*num_streams*/) @@ -530,9 +551,20 @@ void StorageDistributed::read( query_info.query, remote_database, remote_table, remote_table_function_ptr); Block header = - InterpreterSelectQuery(query_info.query, context, SelectQueryOptions(processed_stage).analyze()).getSampleBlock(); + InterpreterSelectQuery(query_info.query, local_context, SelectQueryOptions(processed_stage).analyze()).getSampleBlock(); - const Scalars & scalars = context.hasQueryContext() ? context.getQueryContext().getScalars() : Scalars{}; + /// Return directly (with correct header) if no shard to query. + if (query_info.getCluster()->getShardsInfo().empty()) + { + Pipe pipe(std::make_shared(header)); + auto read_from_pipe = std::make_unique(std::move(pipe)); + read_from_pipe->setStepDescription("Read from NullSource (Distributed)"); + query_plan.addStep(std::move(read_from_pipe)); + + return; + } + + const Scalars & scalars = local_context->hasQueryContext() ? local_context->getQueryContext()->getScalars() : Scalars{}; bool has_virtual_shard_num_column = std::find(column_names.begin(), column_names.end(), "_shard_num") != column_names.end(); if (has_virtual_shard_num_column && !isVirtualColumn("_shard_num", metadata_snapshot)) @@ -540,19 +572,30 @@ void StorageDistributed::read( ClusterProxy::SelectStreamFactory select_stream_factory = remote_table_function_ptr ? 
ClusterProxy::SelectStreamFactory(
-            header, processed_stage, remote_table_function_ptr, scalars, has_virtual_shard_num_column, context.getExternalTables())
+            header, processed_stage, remote_table_function_ptr, scalars, has_virtual_shard_num_column, local_context->getExternalTables())
         : ClusterProxy::SelectStreamFactory(
-            header, processed_stage, StorageID{remote_database, remote_table}, scalars, has_virtual_shard_num_column, context.getExternalTables());
+            header,
+            processed_stage,
+            StorageID{remote_database, remote_table},
+            scalars,
+            has_virtual_shard_num_column,
+            local_context->getExternalTables());
     ClusterProxy::executeQuery(query_plan, select_stream_factory, log,
-        modified_query_ast, context, query_info);
+        modified_query_ast, local_context, query_info,
+        sharding_key_expr, sharding_key_column_name,
+        getCluster());
+
+    /// This is a bug: it is possible only when there are no shards to query, and that case is handled earlier.
+    if (!query_plan.isInitialized())
+        throw Exception("Pipeline is not initialized", ErrorCodes::LOGICAL_ERROR);
 }
-BlockOutputStreamPtr StorageDistributed::write(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, const Context & context)
+BlockOutputStreamPtr StorageDistributed::write(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, ContextPtr local_context)
 {
     auto cluster = getCluster();
-    const auto & settings = context.getSettingsRef();
+    const auto & settings = local_context->getSettingsRef();
     /// Ban an attempt to make async insert into the table belonging to DatabaseMemory
     if (!storage_policy && !owned_cluster && !settings.insert_distributed_sync && !settings.insert_shard_id)
@@ -582,16 +625,94 @@ BlockOutputStreamPtr StorageDistributed::write(const ASTPtr &, const StorageMeta
     /// DistributedBlockOutputStream will not own cluster, but will own ConnectionPools of the cluster
     return std::make_shared(
-        context, *this, metadata_snapshot,
+        local_context, *this, metadata_snapshot,
         createInsertToRemoteTableQuery(
             remote_database, remote_table, metadata_snapshot->getSampleBlockNonMaterialized()),
-        cluster, insert_sync, timeout);
+        cluster, insert_sync, timeout, StorageID{remote_database, remote_table});
 }
-void StorageDistributed::checkAlterIsPossible(const AlterCommands & commands, const Context & context) const
+QueryPipelinePtr StorageDistributed::distributedWrite(const ASTInsertQuery & query, ContextPtr local_context)
 {
-    auto name_deps = getDependentViewsByColumn(context);
+    const Settings & settings = local_context->getSettingsRef();
+    std::shared_ptr storage_src;
+    auto & select = query.select->as();
+    auto new_query = std::dynamic_pointer_cast(query.clone());
+    if (select.list_of_selects->children.size() == 1)
+    {
+        if (auto * select_query = select.list_of_selects->children.at(0)->as())
+        {
+            JoinedTables joined_tables(Context::createCopy(local_context), *select_query);
+
+            if (joined_tables.tablesCount() == 1)
+            {
+                storage_src = std::dynamic_pointer_cast(joined_tables.getLeftTableStorage());
+                if (storage_src)
+                {
+                    const auto select_with_union_query = std::make_shared();
+                    select_with_union_query->list_of_selects = std::make_shared();
+
+                    auto new_select_query = std::dynamic_pointer_cast(select_query->clone());
+                    select_with_union_query->list_of_selects->children.push_back(new_select_query);
+
+                    new_select_query->replaceDatabaseAndTable(storage_src->getRemoteDatabaseName(), storage_src->getRemoteTableName());
+
+                    new_query->select = select_with_union_query;
+                }
+            }
+        }
+    }
+
+    if (!storage_src || storage_src->getClusterName()
!= getClusterName()) + { + return nullptr; + } + + if (settings.parallel_distributed_insert_select == PARALLEL_DISTRIBUTED_INSERT_SELECT_ALL) + { + new_query->table_id = StorageID(getRemoteDatabaseName(), getRemoteTableName()); + } + + const auto & cluster = getCluster(); + const auto & shards_info = cluster->getShardsInfo(); + + std::vector> pipelines; + + String new_query_str = queryToString(new_query); + for (size_t shard_index : ext::range(0, shards_info.size())) + { + const auto & shard_info = shards_info[shard_index]; + if (shard_info.isLocal()) + { + InterpreterInsertQuery interpreter(new_query, local_context); + pipelines.emplace_back(std::make_unique(interpreter.execute().pipeline)); + } + else + { + auto timeouts = ConnectionTimeouts::getTCPTimeoutsWithFailover(settings); + auto connections = shard_info.pool->getMany(timeouts, &settings, PoolMode::GET_ONE); + if (connections.empty() || connections.front().isNull()) + throw Exception( + "Expected exactly one connection for shard " + toString(shard_info.shard_num), ErrorCodes::LOGICAL_ERROR); + + /// INSERT SELECT query returns empty block + auto in_stream = std::make_shared(std::move(connections), new_query_str, Block{}, local_context); + pipelines.emplace_back(std::make_unique()); + pipelines.back()->init(Pipe(std::make_shared(std::move(in_stream)))); + pipelines.back()->setSinks([](const Block & header, QueryPipeline::StreamType) -> ProcessorPtr + { + return std::make_shared(header); + }); + } + } + + return std::make_unique(QueryPipeline::unitePipelines(std::move(pipelines))); +} + + +void StorageDistributed::checkAlterIsPossible(const AlterCommands & commands, ContextPtr local_context) const +{ + auto name_deps = getDependentViewsByColumn(local_context); for (const auto & command : commands) { if (command.type != AlterCommand::Type::ADD_COLUMN @@ -602,7 +723,7 @@ void StorageDistributed::checkAlterIsPossible(const AlterCommands & commands, co throw Exception("Alter of type '" + alterTypeToString(command.type) + "' is not supported by storage " + getName(), ErrorCodes::NOT_IMPLEMENTED); - if (command.type == AlterCommand::DROP_COLUMN) + if (command.type == AlterCommand::DROP_COLUMN && !command.clear) { const auto & deps_mv = name_deps[command.column_name]; if (!deps_mv.empty()) @@ -616,21 +737,21 @@ void StorageDistributed::checkAlterIsPossible(const AlterCommands & commands, co } } -void StorageDistributed::alter(const AlterCommands & params, const Context & context, TableLockHolder &) +void StorageDistributed::alter(const AlterCommands & params, ContextPtr local_context, TableLockHolder &) { auto table_id = getStorageID(); - checkAlterIsPossible(params, context); + checkAlterIsPossible(params, local_context); StorageInMemoryMetadata new_metadata = getInMemoryMetadata(); - params.apply(new_metadata, context); - DatabaseCatalog::instance().getDatabase(table_id.database_name)->alterTable(context, table_id, new_metadata); + params.apply(new_metadata, local_context); + DatabaseCatalog::instance().getDatabase(table_id.database_name)->alterTable(local_context, table_id, new_metadata); setInMemoryMetadata(new_metadata); } void StorageDistributed::startup() { - if (remote_database.empty() && !remote_table_function_ptr) + if (remote_database.empty() && !remote_table_function_ptr && !getCluster()->maybeCrossReplication()) LOG_WARNING(log, "Name of remote database is empty. 
Default database will be used implicitly."); if (!storage_policy) @@ -697,7 +818,7 @@ Strings StorageDistributed::getDataPaths() const return paths; } -void StorageDistributed::truncate(const ASTPtr &, const StorageMetadataPtr &, const Context &, TableExclusiveLockHolder &) +void StorageDistributed::truncate(const ASTPtr &, const StorageMetadataPtr &, ContextPtr, TableExclusiveLockHolder &) { std::lock_guard lock(cluster_nodes_mutex); @@ -764,7 +885,7 @@ StorageDistributedDirectoryMonitor& StorageDistributed::requireDirectoryMonitor( *this, disk, relative_data_path + name, node_data.connection_pool, monitors_blocker, - global_context.getDistributedSchedulePool()); + getContext()->getDistributedSchedulePool()); } return *node_data.directory_monitor; } @@ -794,19 +915,20 @@ size_t StorageDistributed::getShardCount() const ClusterPtr StorageDistributed::getCluster() const { - return owned_cluster ? owned_cluster : global_context.getCluster(cluster_name); + return owned_cluster ? owned_cluster : getContext()->getCluster(cluster_name); } -ClusterPtr StorageDistributed::getOptimizedCluster(const Context & context, const StorageMetadataPtr & metadata_snapshot, const ASTPtr & query_ptr) const +ClusterPtr StorageDistributed::getOptimizedCluster( + ContextPtr local_context, const StorageMetadataPtr & metadata_snapshot, const ASTPtr & query_ptr) const { ClusterPtr cluster = getCluster(); - const Settings & settings = context.getSettingsRef(); + const Settings & settings = local_context->getSettingsRef(); bool sharding_key_is_usable = settings.allow_nondeterministic_optimize_skip_unused_shards || sharding_key_is_deterministic; if (has_sharding_key && sharding_key_is_usable) { - ClusterPtr optimized = skipUnusedShards(cluster, query_ptr, metadata_snapshot, context); + ClusterPtr optimized = skipUnusedShards(cluster, query_ptr, metadata_snapshot, local_context); if (optimized) return optimized; } @@ -828,7 +950,7 @@ ClusterPtr StorageDistributed::getOptimizedCluster(const Context & context, cons throw Exception(exception_message.str(), ErrorCodes::UNABLE_TO_SKIP_UNUSED_SHARDS); } - return cluster; + return {}; } IColumn::Selector StorageDistributed::createSelector(const ClusterPtr cluster, const ColumnWithTypeAndName & result) @@ -863,7 +985,7 @@ ClusterPtr StorageDistributed::skipUnusedShards( ClusterPtr cluster, const ASTPtr & query_ptr, const StorageMetadataPtr & metadata_snapshot, - const Context & context) const + ContextPtr local_context) const { const auto & select = query_ptr->as(); @@ -882,8 +1004,25 @@ ClusterPtr StorageDistributed::skipUnusedShards( condition_ast = select.prewhere() ? 
select.prewhere()->clone() : select.where()->clone(); } - replaceConstantExpressions(condition_ast, context, metadata_snapshot->getColumns().getAll(), shared_from_this(), metadata_snapshot); - const auto blocks = evaluateExpressionOverConstantCondition(condition_ast, sharding_key_expr); + replaceConstantExpressions(condition_ast, local_context, metadata_snapshot->getColumns().getAll(), shared_from_this(), metadata_snapshot); + + size_t limit = local_context->getSettingsRef().optimize_skip_unused_shards_limit; + if (!limit || limit > SSIZE_MAX) + { + throw Exception(ErrorCodes::ARGUMENT_OUT_OF_BOUND, "optimize_skip_unused_shards_limit out of range (0, {}]", SSIZE_MAX); + } + // Pass limit + 1 so that limit == 0 after the call means the limit was reached + ++limit; + const auto blocks = evaluateExpressionOverConstantCondition(condition_ast, sharding_key_expr, limit); + + if (!limit) + { + LOG_TRACE(log, + "Number of values for sharding key exceeds optimize_skip_unused_shards_limit={}, " + "try to increase it, but note that this may increase query processing time.", + local_context->getSettingsRef().optimize_skip_unused_shards_limit); + return nullptr; + } // Can't get definite answer if we can skip any shards if (!blocks) @@ -914,10 +1053,10 @@ ActionLock StorageDistributed::getActionLock(StorageActionBlockType type) return {}; } -void StorageDistributed::flushClusterNodesAllData(const Context & context) +void StorageDistributed::flushClusterNodesAllData(ContextPtr local_context) { /// Sync SYSTEM FLUSH DISTRIBUTED with TRUNCATE - auto table_lock = lockForShare(context.getCurrentQueryId(), context.getSettingsRef().lock_acquire_timeout); + auto table_lock = lockForShare(local_context->getCurrentQueryId(), local_context->getSettingsRef().lock_acquire_timeout); std::vector> directory_monitors; @@ -992,7 +1131,7 @@ void StorageDistributed::delayInsertOrThrowIfNeeded() const !distributed_settings.bytes_to_delay_insert) return; - UInt64 total_bytes = *totalBytes(global_context.getSettingsRef()); + UInt64 total_bytes = *totalBytes(getContext()->getSettingsRef()); if (distributed_settings.bytes_to_throw_insert && total_bytes > distributed_settings.bytes_to_throw_insert) { @@ -1013,12 +1152,12 @@ void StorageDistributed::delayInsertOrThrowIfNeeded() const do { delayed_ms += step_ms; std::this_thread::sleep_for(std::chrono::milliseconds(step_ms)); - } while (*totalBytes(global_context.getSettingsRef()) > distributed_settings.bytes_to_delay_insert && delayed_ms < distributed_settings.max_delay_to_insert*1000); + } while (*totalBytes(getContext()->getSettingsRef()) > distributed_settings.bytes_to_delay_insert && delayed_ms < distributed_settings.max_delay_to_insert*1000); ProfileEvents::increment(ProfileEvents::DistributedDelayedInserts); ProfileEvents::increment(ProfileEvents::DistributedDelayedInsertsMilliseconds, delayed_ms); - UInt64 new_total_bytes = *totalBytes(global_context.getSettingsRef()); + UInt64 new_total_bytes = *totalBytes(getContext()->getSettingsRef()); LOG_INFO(log, "Too many bytes pending for async INSERT: was {}, now {}, INSERT was delayed to {} ms", formatReadableSizeWithBinarySuffix(total_bytes), formatReadableSizeWithBinarySuffix(new_total_bytes), @@ -1068,8 +1207,8 @@ void registerStorageDistributed(StorageFactory & factory) String cluster_name = getClusterNameAndMakeLiteral(engine_args[0]); - engine_args[1] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[1], args.local_context); - engine_args[2] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[2], args.local_context); + engine_args[1] 
= evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[1], args.getLocalContext()); + engine_args[2] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[2], args.getLocalContext()); String remote_database = engine_args[1]->as().value.safeGet(); String remote_table = engine_args[2]->as().value.safeGet(); @@ -1080,7 +1219,7 @@ void registerStorageDistributed(StorageFactory & factory) /// Check that sharding_key exists in the table and has numeric type. if (sharding_key) { - auto sharding_expr = buildShardingKeyExpression(sharding_key, args.context, args.columns.getAllPhysical(), true); + auto sharding_expr = buildShardingKeyExpression(sharding_key, args.getContext(), args.columns.getAllPhysical(), true); const Block & block = sharding_expr->getSampleBlock(); if (block.columns() != 1) @@ -1114,7 +1253,7 @@ void registerStorageDistributed(StorageFactory & factory) return StorageDistributed::create( args.table_id, args.columns, args.constraints, remote_database, remote_table, cluster_name, - args.context, + args.getContext(), sharding_key, storage_policy, args.relative_data_path, diff --git a/src/Storages/StorageDistributed.h b/src/Storages/StorageDistributed.h index 5904124505a..886a8e032de 100644 --- a/src/Storages/StorageDistributed.h +++ b/src/Storages/StorageDistributed.h @@ -36,7 +36,7 @@ using ExpressionActionsPtr = std::shared_ptr; * You can pass one address, not several. * In this case, the table can be considered remote, rather than distributed. */ -class StorageDistributed final : public ext::shared_ptr_helper, public IStorage +class StorageDistributed final : public ext::shared_ptr_helper, public IStorage, WithContext { friend struct ext::shared_ptr_helper; friend class DistributedBlockOutputStream; @@ -55,13 +55,13 @@ public: bool isRemote() const override { return true; } - QueryProcessingStage::Enum getQueryProcessingStage(const Context &, QueryProcessingStage::Enum /*to_stage*/, SelectQueryInfo &) const override; + QueryProcessingStage::Enum getQueryProcessingStage(ContextPtr, QueryProcessingStage::Enum /*to_stage*/, SelectQueryInfo &) const override; Pipe read( const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; @@ -71,7 +71,7 @@ public: const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t /*max_block_size*/, unsigned /*num_streams*/) override; @@ -79,18 +79,20 @@ public: bool supportsParallelInsert() const override { return true; } std::optional totalBytes(const Settings &) const override; - BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, const Context & context) override; + BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, ContextPtr context) override; + + QueryPipelinePtr distributedWrite(const ASTInsertQuery & query, ContextPtr context) override; /// Removes temporary data in local filesystem. 
- void truncate(const ASTPtr &, const StorageMetadataPtr &, const Context &, TableExclusiveLockHolder &) override; + void truncate(const ASTPtr &, const StorageMetadataPtr &, ContextPtr, TableExclusiveLockHolder &) override; void rename(const String & new_path_to_table_data, const StorageID & new_table_id) override; - void checkAlterIsPossible(const AlterCommands & commands, const Context & context) const override; + void checkAlterIsPossible(const AlterCommands & commands, ContextPtr context) const override; /// in the sub-tables, you need to manually add and delete columns /// the structure of the sub-table is not checked - void alter(const AlterCommands & params, const Context & context, TableLockHolder & table_lock_holder) override; + void alter(const AlterCommands & params, ContextPtr context, TableLockHolder & table_lock_holder) override; void startup() override; void shutdown() override; @@ -111,7 +113,7 @@ public: ClusterPtr getCluster() const; /// Used by InterpreterSystemQuery - void flushClusterNodesAllData(const Context & context); + void flushClusterNodesAllData(ContextPtr context); /// Used by ClusterCopier size_t getShardCount() const; @@ -124,7 +126,7 @@ private: const String & remote_database_, const String & remote_table_, const String & cluster_name_, - const Context & context_, + ContextPtr context_, const ASTPtr & sharding_key_, const String & storage_policy_name_, const String & relative_data_path_, @@ -138,7 +140,7 @@ private: const ConstraintsDescription & constraints_, ASTPtr remote_table_function_ptr_, const String & cluster_name_, - const Context & context_, + ContextPtr context_, const ASTPtr & sharding_key_, const String & storage_policy_name_, const String & relative_data_path_, @@ -163,12 +165,13 @@ private: /// Used by StorageSystemDistributionQueue std::vector getDirectoryMonitorsStatuses() const; - static IColumn::Selector createSelector(const ClusterPtr cluster, const ColumnWithTypeAndName & result); + static IColumn::Selector createSelector(ClusterPtr cluster, const ColumnWithTypeAndName & result); /// Apply the following settings: /// - optimize_skip_unused_shards /// - force_optimize_skip_unused_shards - ClusterPtr getOptimizedCluster(const Context &, const StorageMetadataPtr & metadata_snapshot, const ASTPtr & query_ptr) const; - ClusterPtr skipUnusedShards(ClusterPtr cluster, const ASTPtr & query_ptr, const StorageMetadataPtr & metadata_snapshot, const Context & context) const; + ClusterPtr getOptimizedCluster(ContextPtr, const StorageMetadataPtr & metadata_snapshot, const ASTPtr & query_ptr) const; + ClusterPtr + skipUnusedShards(ClusterPtr cluster, const ASTPtr & query_ptr, const StorageMetadataPtr & metadata_snapshot, ContextPtr context) const; size_t getRandomShardIndex(const Cluster::ShardsInfo & shards); @@ -176,12 +179,10 @@ private: void delayInsertOrThrowIfNeeded() const; -private: String remote_database; String remote_table; ASTPtr remote_table_function_ptr; - const Context & global_context; Poco::Logger * log; /// Used to implement TableFunctionRemote. 
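An aside on the `optimize_skip_unused_shards_limit` handling in `skipUnusedShards()` above: the setting is enforced with a small counting trick. The caller passes `limit + 1` into `evaluateExpressionOverConstantCondition`, the enumerator decrements the counter for every sharding-key value it produces, and `limit == 0` on return means the budget was exhausted. A minimal self-contained model of that contract, with `enumerateValues` as a hypothetical stand-in for the real function:

```cpp
#include <iostream>
#include <optional>
#include <vector>

// Toy stand-in for evaluateExpressionOverConstantCondition: enumerates values,
// decrementing the shared budget, and bails out once one value too many is seen.
std::optional<std::vector<int>> enumerateValues(int upto, size_t & limit)
{
    std::vector<int> values;
    for (int v = 0; v < upto; ++v)
    {
        if (--limit == 0)
            return std::nullopt;  // Budget exhausted: the caller observes limit == 0.
        values.push_back(v);
    }
    return values;
}

int main()
{
    size_t limit = 3 + 1;                     // The "++limit": reserve one extra slot.
    auto blocks = enumerateValues(5, limit);  // 5 candidate values > limit of 3.
    if (limit == 0)
        std::cout << "values exceed the limit; cannot skip any shards\n";
    else
        std::cout << "can skip shards using " << blocks->size() << " values\n";
}
```

The extra slot lets the caller distinguish "exactly `limit` values" (counter ends above zero) from "more than `limit` values" (counter ends at zero) without a separate flag.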
diff --git a/src/Storages/StorageExternalDistributed.cpp b/src/Storages/StorageExternalDistributed.cpp new file mode 100644 index 00000000000..7c1bcd1e83a --- /dev/null +++ b/src/Storages/StorageExternalDistributed.cpp @@ -0,0 +1,282 @@ +#include "StorageExternalDistributed.h" + + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + + +namespace DB +{ + +namespace ErrorCodes +{ + extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH; + extern const int BAD_ARGUMENTS; +} + +StorageExternalDistributed::StorageExternalDistributed( + const StorageID & table_id_, + ExternalStorageEngine table_engine, + const String & cluster_description, + const String & remote_database, + const String & remote_table, + const String & username, + const String & password, + const ColumnsDescription & columns_, + const ConstraintsDescription & constraints_, + ContextPtr context) + : IStorage(table_id_) +{ + StorageInMemoryMetadata storage_metadata; + storage_metadata.setColumns(columns_); + storage_metadata.setConstraints(constraints_); + setInMemoryMetadata(storage_metadata); + + size_t max_addresses = context->getSettingsRef().glob_expansion_max_elements; + std::vector shards_descriptions = parseRemoteDescription(cluster_description, 0, cluster_description.size(), ',', max_addresses); + std::vector> addresses; + +#if USE_MYSQL || USE_LIBPQXX + + /// For each shard pass replicas description into storage, replicas are managed by storage's PoolWithFailover. + for (const auto & shard_description : shards_descriptions) + { + StoragePtr shard; + + switch (table_engine) + { +#if USE_MYSQL + case ExternalStorageEngine::MySQL: + { + addresses = parseRemoteDescriptionForExternalDatabase(shard_description, max_addresses, 3306); + + mysqlxx::PoolWithFailover pool( + remote_database, + addresses, + username, password); + + shard = StorageMySQL::create( + table_id_, + std::move(pool), + remote_database, + remote_table, + /* replace_query = */ false, + /* on_duplicate_clause = */ "", + columns_, constraints_, + context); + break; + } +#endif +#if USE_LIBPQXX + + case ExternalStorageEngine::PostgreSQL: + { + addresses = parseRemoteDescriptionForExternalDatabase(shard_description, max_addresses, 5432); + + postgres::PoolWithFailover pool( + remote_database, + addresses, + username, password, + context->getSettingsRef().postgresql_connection_pool_size, + context->getSettingsRef().postgresql_connection_pool_wait_timeout); + + shard = StoragePostgreSQL::create( + table_id_, + std::move(pool), + remote_table, + columns_, constraints_, + context); + break; + } +#endif + default: + { + throw Exception(ErrorCodes::BAD_ARGUMENTS, + "Unsupported table engine. 
Supported engines are: MySQL, PostgreSQL, URL"); + } + } + + shards.emplace(std::move(shard)); + } + +#else + (void)table_engine; + (void)remote_database; + (void)remote_table; + (void)username; + (void)password; + (void)shards_descriptions; + (void)addresses; +#endif +} + + +StorageExternalDistributed::StorageExternalDistributed( + const String & addresses_description, + const StorageID & table_id, + const String & format_name, + const std::optional & format_settings, + const String & compression_method, + const ColumnsDescription & columns, + const ConstraintsDescription & constraints, + ContextPtr context) + : IStorage(table_id) +{ + StorageInMemoryMetadata storage_metadata; + storage_metadata.setColumns(columns); + storage_metadata.setConstraints(constraints); + setInMemoryMetadata(storage_metadata); + + size_t max_addresses = context->getSettingsRef().glob_expansion_max_elements; + /// Generate addresses without splitting for failover options + std::vector url_descriptions = parseRemoteDescription(addresses_description, 0, addresses_description.size(), ',', max_addresses); + std::vector uri_options; + + for (const auto & url_description : url_descriptions) + { + /// For each uri (which acts like shard) check if it has failover options + uri_options = parseRemoteDescription(url_description, 0, url_description.size(), '|', max_addresses); + StoragePtr shard; + + if (uri_options.size() > 1) + { + shard = std::make_shared( + uri_options, + table_id, + format_name, + format_settings, + columns, constraints, context, + compression_method); + } + else + { + Poco::URI uri(url_description); + shard = std::make_shared( + uri, + table_id, + format_name, + format_settings, + columns, constraints, context, + compression_method); + + LOG_DEBUG(&Poco::Logger::get("StorageURLDistributed"), "Adding URL: {}", url_description); + } + + shards.emplace(std::move(shard)); + } +} + + +Pipe StorageExternalDistributed::read( + const Names & column_names, + const StorageMetadataPtr & metadata_snapshot, + SelectQueryInfo & query_info, + ContextPtr context, + QueryProcessingStage::Enum processed_stage, + size_t max_block_size, + unsigned num_streams) +{ + Pipes pipes; + for (const auto & shard : shards) + { + pipes.emplace_back(shard->read( + column_names, + metadata_snapshot, + query_info, + context, + processed_stage, + max_block_size, + num_streams + )); + } + + return Pipe::unitePipes(std::move(pipes)); +} + + +void registerStorageExternalDistributed(StorageFactory & factory) +{ + factory.registerStorage("ExternalDistributed", [](const StorageFactory::Arguments & args) + { + ASTs & engine_args = args.engine_args; + + if (engine_args.size() != 6) + throw Exception( + "Storage ExternalDistributed requires 6 parameters: ExternalDistributed('engine_name', 'cluster_description', database, table, 'user', 'password').", + ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); + + for (auto & engine_arg : engine_args) + engine_arg = evaluateConstantExpressionOrIdentifierAsLiteral(engine_arg, args.getLocalContext()); + + const String & engine_name = engine_args[0]->as().value.safeGet(); + const String & addresses_description = engine_args[1]->as().value.safeGet(); + + StorageExternalDistributed::ExternalStorageEngine table_engine; + if (engine_name == "URL") + { + table_engine = StorageExternalDistributed::ExternalStorageEngine::URL; + + const String & format_name = engine_args[2]->as().value.safeGet(); + String compression_method = "auto"; + if (engine_args.size() == 4) + compression_method = 
engine_args[3]->as().value.safeGet(); + + auto format_settings = StorageURL::getFormatSettingsFromArgs(args); + + return StorageExternalDistributed::create( + addresses_description, + args.table_id, + format_name, + format_settings, + compression_method, + args.columns, + args.constraints, + args.getContext()); + } + else + { + if (engine_name == "MySQL") + table_engine = StorageExternalDistributed::ExternalStorageEngine::MySQL; + else if (engine_name == "PostgreSQL") + table_engine = StorageExternalDistributed::ExternalStorageEngine::PostgreSQL; + else + throw Exception(ErrorCodes::BAD_ARGUMENTS, + "External storage engine {} is not supported for StorageExternalDistributed. Supported engines are: MySQL, PostgreSQL, URL", + engine_name); + + const String & remote_database = engine_args[2]->as().value.safeGet(); + const String & remote_table = engine_args[3]->as().value.safeGet(); + const String & username = engine_args[4]->as().value.safeGet(); + const String & password = engine_args[5]->as().value.safeGet(); + + return StorageExternalDistributed::create( + args.table_id, + table_engine, + addresses_description, + remote_database, + remote_table, + username, + password, + args.columns, + args.constraints, + args.getContext()); + } + }, + { + .source_access_type = AccessType::SOURCES, + }); +} + +} diff --git a/src/Storages/StorageExternalDistributed.h b/src/Storages/StorageExternalDistributed.h new file mode 100644 index 00000000000..a6718398a3a --- /dev/null +++ b/src/Storages/StorageExternalDistributed.h @@ -0,0 +1,70 @@ +#pragma once + +#if !defined(ARCADIA_BUILD) +#include "config_core.h" +#endif + +#include +#include + + +namespace DB +{ + +/// Storages MySQL and PostgreSQL use ConnectionPoolWithFailover and support multiple replicas. +/// This class unites multiple storages with replicas into multiple shards with replicas. +/// A query to external database is passed to one replica on each shard, the result is united. +/// Replicas on each shard have the same priority, traversed replicas are moved to the end of the queue. 
+/// TODO: try `load_balancing` setting for replicas priorities same way as for table function `remote` +class StorageExternalDistributed final : public ext::shared_ptr_helper, public DB::IStorage +{ + friend struct ext::shared_ptr_helper; + +public: + enum class ExternalStorageEngine + { + MySQL, + PostgreSQL, + URL + }; + + std::string getName() const override { return "ExternalDistributed"; } + + Pipe read( + const Names & column_names, + const StorageMetadataPtr & /*metadata_snapshot*/, + SelectQueryInfo & query_info, + ContextPtr context, + QueryProcessingStage::Enum processed_stage, + size_t max_block_size, + unsigned num_streams) override; + +protected: + StorageExternalDistributed( + const StorageID & table_id_, + ExternalStorageEngine table_engine, + const String & cluster_description, + const String & remote_database_, + const String & remote_table_, + const String & username, + const String & password, + const ColumnsDescription & columns_, + const ConstraintsDescription & constraints_, + ContextPtr context_); + + StorageExternalDistributed( + const String & addresses_description, + const StorageID & table_id, + const String & format_name, + const std::optional & format_settings, + const String & compression_method, + const ColumnsDescription & columns, + const ConstraintsDescription & constraints, + ContextPtr context); + +private: + using Shards = std::unordered_set; + Shards shards; +}; + +} diff --git a/src/Storages/StorageFactory.cpp b/src/Storages/StorageFactory.cpp index 85f3bea9e0c..a775ac43c29 100644 --- a/src/Storages/StorageFactory.cpp +++ b/src/Storages/StorageFactory.cpp @@ -31,6 +31,23 @@ static void checkAllTypesAreAllowedInTable(const NamesAndTypesList & names_and_t } +ContextPtr StorageFactory::Arguments::getContext() const +{ + auto ptr = context.lock(); + if (!ptr) + throw Exception("Context has expired", ErrorCodes::LOGICAL_ERROR); + return ptr; +} + +ContextPtr StorageFactory::Arguments::getLocalContext() const +{ + auto ptr = local_context.lock(); + if (!ptr) + throw Exception("Context has expired", ErrorCodes::LOGICAL_ERROR); + return ptr; +} + + void StorageFactory::registerStorage(const std::string & name, CreatorFn creator_fn, StorageFeatures features) { if (!storages.emplace(name, Creator{std::move(creator_fn), features}).second) @@ -42,8 +59,8 @@ void StorageFactory::registerStorage(const std::string & name, CreatorFn creator StoragePtr StorageFactory::get( const ASTCreateQuery & query, const String & relative_data_path, - Context & local_context, - Context & context, + ContextPtr local_context, + ContextPtr context, const ColumnsDescription & columns, const ConstraintsDescription & constraints, bool has_force_restore_data_flag) const @@ -62,12 +79,18 @@ StoragePtr StorageFactory::get( } else if (query.is_live_view) { - if (query.storage) throw Exception("Specifying ENGINE is not allowed for a LiveView", ErrorCodes::INCORRECT_QUERY); name = "LiveView"; } + else if (query.is_dictionary) + { + if (query.storage) + throw Exception("Specifying ENGINE is not allowed for a Dictionary", ErrorCodes::INCORRECT_QUERY); + + name = "Dictionary"; + } else { /// Check for some special types, that are not allowed to be stored in tables. Example: NULL data type. 
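The `getContext()`/`getLocalContext()` helpers above implement a pattern worth spelling out: `StorageFactory::Arguments` now holds `ContextWeakPtr` and promotes it to a strong `ContextPtr` on demand, throwing if the owning context has already died. A self-contained sketch of that promotion, with a stand-in `Context` struct rather than the real class:

```cpp
#include <iostream>
#include <memory>
#include <stdexcept>

struct Context { int dummy = 0; };                    // Stand-in for DB::Context.
using ContextPtr = std::shared_ptr<const Context>;
using ContextWeakPtr = std::weak_ptr<const Context>;

struct Arguments
{
    ContextWeakPtr context;

    ContextPtr getContext() const
    {
        auto ptr = context.lock();                    // Try to pin the context.
        if (!ptr)
            throw std::logic_error("Context has expired");
        return ptr;                                   // Strong reference for the caller.
    }
};

int main()
{
    auto global = std::make_shared<const Context>();
    Arguments args{global};

    auto ctx = args.getContext();                     // OK: the context is still alive.
    std::cout << ctx->dummy << '\n';

    global.reset();                                   // Owner drops its strong reference.
    ctx.reset();                                      // The context is destroyed here...
    try
    {
        args.getContext();                            // ...so further access throws.
    }
    catch (const std::logic_error & e)
    {
        std::cout << e.what() << '\n';
    }
}
```

Storing only a weak pointer avoids ownership cycles between long-lived factory arguments and contexts, while the returned shared pointer keeps the context alive for the duration of each call.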
@@ -179,9 +202,10 @@ StoragePtr StorageFactory::get( .attach = query.attach, .has_force_restore_data_flag = has_force_restore_data_flag }; + assert(arguments.getContext() == arguments.getContext()->getGlobalContext()); auto res = storages.at(name).creator_fn(arguments); - if (!empty_engine_args.empty()) + if (!empty_engine_args.empty()) //-V547 { /// Storage creator modified empty arguments list, so we should modify the query assert(storage_def && storage_def->engine && !storage_def->engine->arguments); @@ -190,8 +214,8 @@ StoragePtr StorageFactory::get( storage_def->engine->arguments->children = empty_engine_args; } - if (local_context.hasQueryContext() && context.getSettingsRef().log_queries) - local_context.getQueryContext().addQueryFactoriesInfo(Context::QueryLogFactories::Storage, name); + if (local_context->hasQueryContext() && context->getSettingsRef().log_queries) + local_context->getQueryContext()->addQueryFactoriesInfo(Context::QueryLogFactories::Storage, name); return res; } diff --git a/src/Storages/StorageFactory.h b/src/Storages/StorageFactory.h index 18dd24e10db..43f6a6d6f7d 100644 --- a/src/Storages/StorageFactory.h +++ b/src/Storages/StorageFactory.h @@ -39,12 +39,15 @@ public: /// Relative to <path> from server config (possibly <path> of some <disk> for *MergeTree) const String & relative_data_path; const StorageID & table_id; - Context & local_context; - Context & context; + ContextWeakPtr local_context; + ContextWeakPtr context; const ColumnsDescription & columns; const ConstraintsDescription & constraints; bool attach; bool has_force_restore_data_flag; + + ContextPtr getContext() const; + ContextPtr getLocalContext() const; }; /// Analog of the IStorage::supports*() helpers @@ -76,8 +79,8 @@ public: StoragePtr get( const ASTCreateQuery & query, const String & relative_data_path, - Context & local_context, - Context & context, + ContextPtr local_context, + ContextPtr context, const ColumnsDescription & columns, const ConstraintsDescription & constraints, bool has_force_restore_data_flag) const; diff --git a/src/Storages/StorageFile.cpp b/src/Storages/StorageFile.cpp index 5524569e1f0..14b91d29805 100644 --- a/src/Storages/StorageFile.cpp +++ b/src/Storages/StorageFile.cpp @@ -22,6 +22,8 @@ #include #include #include +#include +#include #include #include @@ -114,9 +116,9 @@ std::string getTablePath(const std::string & table_dir_path, const std::string & } /// Both db_dir_path and table_path must be converted to absolute paths (in particular, path cannot contain '..'). 
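The server-mode check in `checkCreationIsAllowed()` below rejects table paths that escape the user_files directory, and the absolute-path requirement stated above is what makes a plain component-prefix comparison sound. A standalone illustration of the containment rule using `std::filesystem` (the helper name is hypothetical; the real code works with Poco paths):

```cpp
#include <algorithm>
#include <filesystem>
#include <iostream>

namespace fs = std::filesystem;

// True if table_path resolves to a location under user_files_root.
bool isContainedIn(const fs::path & user_files_root, const fs::path & table_path)
{
    // Normalize both sides so ".", ".." and redundant separators
    // cannot defeat the prefix comparison.
    const auto root = fs::weakly_canonical(user_files_root);
    const auto target = fs::weakly_canonical(table_path);

    auto res = std::mismatch(root.begin(), root.end(), target.begin(), target.end());
    return res.first == root.end();  // Every component of root prefixes target.
}

int main()
{
    std::cout << isContainedIn("/var/lib/clickhouse/user_files",
                               "/var/lib/clickhouse/user_files/data.csv") << '\n';      // 1
    std::cout << isContainedIn("/var/lib/clickhouse/user_files",
                               "/var/lib/clickhouse/user_files/../config.xml") << '\n'; // 0
}
```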
-void checkCreationIsAllowed(const Context & context_global, const std::string & db_dir_path, const std::string & table_path) +void checkCreationIsAllowed(ContextPtr context_global, const std::string & db_dir_path, const std::string & table_path) { - if (context_global.getApplicationType() != Context::ApplicationType::SERVER) + if (context_global->getApplicationType() != Context::ApplicationType::SERVER) return; /// "/dev/null" is allowed for perf testing @@ -129,7 +131,7 @@ void checkCreationIsAllowed(const Context & context_global, const std::string & } } -Strings StorageFile::getPathsList(const String & table_path, const String & user_files_path, const Context & context) +Strings StorageFile::getPathsList(const String & table_path, const String & user_files_path, ContextPtr context) { String user_files_absolute_path = Poco::Path(user_files_path).makeAbsolute().makeDirectory().toString(); Poco::Path poco_path = Poco::Path(table_path); @@ -149,10 +151,15 @@ Strings StorageFile::getPathsList(const String & table_path, const String & user return paths; } +bool StorageFile::isColumnOriented() const +{ + return format_name != "Distributed" && FormatFactory::instance().checkIfFormatIsColumnOriented(format_name); +} + StorageFile::StorageFile(int table_fd_, CommonArguments args) : StorageFile(args) { - if (args.context.getApplicationType() == Context::ApplicationType::SERVER) + if (args.getContext()->getApplicationType() == Context::ApplicationType::SERVER) throw Exception("Using file descriptor as source of storage isn't allowed for server daemons", ErrorCodes::DATABASE_ACCESS_DENIED); if (args.format_name == "Distributed") throw Exception("Distributed format is allowed only with explicit file path", ErrorCodes::INCORRECT_FILE_NAME); @@ -170,7 +177,7 @@ StorageFile::StorageFile(const std::string & table_path_, const std::string & us : StorageFile(args) { is_db_table = false; - paths = getPathsList(table_path_, user_files_path, args.context); + paths = getPathsList(table_path_, user_files_path, args.getContext()); if (args.format_name == "Distributed") { @@ -207,7 +214,7 @@ StorageFile::StorageFile(CommonArguments args) , format_name(args.format_name) , format_settings(args.format_settings) , compression_method(args.compression_method) - , base_path(args.context.getPath()) + , base_path(args.getContext()->getPath()) { StorageInMemoryMetadata storage_metadata; if (args.format_name != "Distributed") @@ -218,15 +225,17 @@ StorageFile::StorageFile(CommonArguments args) } -static std::chrono::seconds getLockTimeout(const Context & context) +static std::chrono::seconds getLockTimeout(ContextPtr context) { - const Settings & settings = context.getSettingsRef(); + const Settings & settings = context->getSettingsRef(); Int64 lock_timeout = settings.lock_acquire_timeout.totalSeconds(); if (settings.max_execution_time.totalSeconds() != 0 && settings.max_execution_time.totalSeconds() < lock_timeout) lock_timeout = settings.max_execution_time.totalSeconds(); return std::chrono::seconds{lock_timeout}; } +using StorageFilePtr = std::shared_ptr; + class StorageFileSource : public SourceWithProgress { @@ -257,14 +266,26 @@ public: return header; } + static Block getBlockForSource( + const StorageFilePtr & storage, + const StorageMetadataPtr & metadata_snapshot, + const ColumnsDescription & columns_description, + const FilesInfoPtr & files_info) + { + if (storage->isColumnOriented()) + return metadata_snapshot->getSampleBlockForColumns(columns_description.getNamesOfPhysical(), storage->getVirtuals(), 
storage->getStorageID()); + else + return getHeader(metadata_snapshot, files_info->need_path_column, files_info->need_file_column); + } + StorageFileSource( std::shared_ptr storage_, const StorageMetadataPtr & metadata_snapshot_, - const Context & context_, + ContextPtr context_, UInt64 max_block_size_, FilesInfoPtr files_info_, ColumnsDescription columns_description_) - : SourceWithProgress(getHeader(metadata_snapshot_, files_info_->need_path_column, files_info_->need_file_column)) + : SourceWithProgress(getBlockForSource(storage_, metadata_snapshot_, columns_description_, files_info_)) , storage(std::move(storage_)) , metadata_snapshot(metadata_snapshot_) , files_info(std::move(files_info_)) @@ -344,8 +365,16 @@ public: } read_buf = wrapReadBufferWithCompressionMethod(std::move(nested_buffer), method); + + auto get_block_for_format = [&]() -> Block + { + if (storage->isColumnOriented()) + return metadata_snapshot->getSampleBlockForColumns(columns_description.getNamesOfPhysical()); + return metadata_snapshot->getSampleBlock(); + }; + auto format = FormatFactory::instance().getInput( - storage->format_name, *read_buf, metadata_snapshot->getSampleBlock(), context, max_block_size, storage->format_settings); + storage->format_name, *read_buf, get_block_for_format(), context, max_block_size, storage->format_settings); reader = std::make_shared(format); @@ -403,7 +432,7 @@ private: ColumnsDescription columns_description; - const Context & context; /// TODO Untangle potential issues with context lifetime. + ContextPtr context; /// TODO Untangle potential issues with context lifetime. UInt64 max_block_size; bool finished_generate = false; @@ -412,12 +441,11 @@ private: std::unique_lock unique_lock; }; - Pipe StorageFile::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & /*query_info*/, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum /*processed_stage*/, size_t max_block_size, unsigned num_streams) @@ -429,7 +457,7 @@ Pipe StorageFile::read( else if (paths.size() == 1 && !Poco::File(paths[0]).exists()) { - if (context.getSettingsRef().engine_file_empty_if_not_exists) + if (context->getSettingsRef().engine_file_empty_if_not_exists) return Pipe(std::make_shared(metadata_snapshot->getSampleBlockForColumns(column_names, getVirtuals(), getStorageID()))); else throw Exception("File " + paths[0] + " doesn't exist", ErrorCodes::FILE_DOESNT_EXIST); @@ -457,9 +485,16 @@ Pipe StorageFile::read( for (size_t i = 0; i < num_streams; ++i) { + const auto get_columns_for_format = [&]() -> ColumnsDescription + { + if (isColumnOriented()) + return ColumnsDescription{ + metadata_snapshot->getSampleBlockForColumns(column_names, getVirtuals(), getStorageID()).getNamesAndTypesList()}; + else + return metadata_snapshot->getColumns(); + }; pipes.emplace_back(std::make_shared( - this_ptr, metadata_snapshot, context, max_block_size, files_info, - metadata_snapshot->getColumns())); + this_ptr, metadata_snapshot, context, max_block_size, files_info, get_columns_for_format())); } return Pipe::unitePipes(std::move(pipes)); @@ -474,7 +509,7 @@ public: const StorageMetadataPtr & metadata_snapshot_, std::unique_lock && lock_, const CompressionMethod compression_method, - const Context & context, + ContextPtr context, const std::optional & format_settings, int & flags) : storage(storage_) @@ -549,7 +584,7 @@ private: BlockOutputStreamPtr StorageFile::write( const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, - const Context & context) + 
ContextPtr context) { if (format_name == "Distributed") throw Exception("Method write is not implemented for Distributed format", ErrorCodes::NOT_IMPLEMENTED); @@ -557,7 +592,7 @@ BlockOutputStreamPtr StorageFile::write( int flags = 0; std::string path; - if (context.getSettingsRef().engine_file_truncate_on_insert) + if (context->getSettingsRef().engine_file_truncate_on_insert) flags |= O_TRUNC; if (!paths.empty()) @@ -610,7 +645,7 @@ void StorageFile::rename(const String & new_path_to_table_data, const StorageID void StorageFile::truncate( const ASTPtr & /*query*/, const StorageMetadataPtr & /* metadata_snapshot */, - const Context & /* context */, + ContextPtr /* context */, TableExclusiveLockHolder &) { if (paths.size() != 1) @@ -643,11 +678,15 @@ void registerStorageFile(StorageFactory & factory) "File", [](const StorageFactory::Arguments & factory_args) { - StorageFile::CommonArguments storage_args{ - .table_id = factory_args.table_id, - .columns = factory_args.columns, - .constraints = factory_args.constraints, - .context = factory_args.context + StorageFile::CommonArguments storage_args + { + WithContext(factory_args.getContext()), + factory_args.table_id, + {}, + {}, + {}, + factory_args.columns, + factory_args.constraints, }; ASTs & engine_args_ast = factory_args.engine_args; @@ -657,7 +696,7 @@ void registerStorageFile(StorageFactory & factory) "Storage File requires from 1 to 3 arguments: name of used format, source and compression_method.", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); - engine_args_ast[0] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args_ast[0], factory_args.local_context); + engine_args_ast[0] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args_ast[0], factory_args.getLocalContext()); storage_args.format_name = engine_args_ast[0]->as().value.safeGet(); // Use format settings from global server context + settings from @@ -669,7 +708,7 @@ void registerStorageFile(StorageFactory & factory) // Apply changed settings from global context, but ignore the // unknown ones, because we only have the format settings here. 
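The loop in the hunk below performs exactly that filtering: a change is applied only when its name is already known to the format settings. A minimal standalone model, with a map-based `Settings` and a simple `SettingChange` as stand-ins for the real types:

```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

using Settings = std::map<std::string, std::string>;  // Stand-in for DB::Settings.
struct SettingChange { std::string name, value; };

// Apply only the changes whose names the format settings already know;
// unrelated server-level settings are silently ignored.
void applyKnownChanges(Settings & user_format_settings, const std::vector<SettingChange> & changes)
{
    for (const auto & change : changes)
    {
        auto it = user_format_settings.find(change.name);
        if (it != user_format_settings.end())
            it->second = change.value;
    }
}

int main()
{
    Settings user_format_settings{{"format_csv_delimiter", ","}};
    applyKnownChanges(user_format_settings,
                      {{"format_csv_delimiter", ";"}, {"max_threads", "8"}});
    for (const auto & [name, value] : user_format_settings)
        std::cout << name << " = " << value << '\n';   // Only the delimiter changed.
}
```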
- const auto & changes = factory_args.context.getSettingsRef().changes(); + const auto & changes = factory_args.getContext()->getSettingsRef().changes(); for (const auto & change : changes) { if (user_format_settings.has(change.name)) @@ -683,12 +722,12 @@ void registerStorageFile(StorageFactory & factory) factory_args.storage_def->settings->changes); storage_args.format_settings = getFormatSettings( - factory_args.context, user_format_settings); + factory_args.getContext(), user_format_settings); } else { storage_args.format_settings = getFormatSettings( - factory_args.context); + factory_args.getContext()); } if (engine_args_ast.size() == 1) /// Table in database @@ -725,7 +764,7 @@ void registerStorageFile(StorageFactory & factory) if (engine_args_ast.size() == 3) { - engine_args_ast[2] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args_ast[2], factory_args.local_context); + engine_args_ast[2] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args_ast[2], factory_args.getLocalContext()); storage_args.compression_method = engine_args_ast[2]->as().value.safeGet(); } else @@ -734,7 +773,7 @@ void registerStorageFile(StorageFactory & factory) if (0 <= source_fd) /// File descriptor return StorageFile::create(source_fd, storage_args); else /// User's file - return StorageFile::create(source_path, factory_args.context.getUserFilesPath(), storage_args); + return StorageFile::create(source_path, factory_args.getContext()->getUserFilesPath(), storage_args); }, storage_features); } diff --git a/src/Storages/StorageFile.h b/src/Storages/StorageFile.h index c316412f808..a277dda7cc0 100644 --- a/src/Storages/StorageFile.h +++ b/src/Storages/StorageFile.h @@ -28,7 +28,7 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; @@ -36,12 +36,12 @@ public: BlockOutputStreamPtr write( const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, - const Context & context) override; + ContextPtr context) override; void truncate( const ASTPtr & /*query*/, const StorageMetadataPtr & /* metadata_snapshot */, - const Context & /* context */, + ContextPtr /* context */, TableExclusiveLockHolder &) override; void rename(const String & new_path_to_table_data, const StorageID & new_table_id) override; @@ -49,7 +49,7 @@ public: bool storesDataOnDisk() const override; Strings getDataPaths() const override; - struct CommonArguments + struct CommonArguments : public WithContext { StorageID table_id; std::string format_name; @@ -57,12 +57,17 @@ public: std::string compression_method; const ColumnsDescription & columns; const ConstraintsDescription & constraints; - const Context & context; }; NamesAndTypesList getVirtuals() const override; - static Strings getPathsList(const String & table_path, const String & user_files_path, const Context & context); + static Strings getPathsList(const String & table_path, const String & user_files_path, ContextPtr context); + + /// Check if the format is column-oriented. + /// It is useful because column oriented formats could effectively skip unknown columns, + /// so we can create a header of only required columns in the read method and ask the + /// format to read only them. Note: this hack cannot be done with ordinary formats like TSV. 
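Concretely, when `isColumnOriented()` holds, the source header is narrowed to the requested columns before the format reader is created, so the other columns are never materialized. A self-contained sketch of that narrowing step, with a simplified `Block` type (the real declaration follows):

```cpp
#include <iostream>
#include <string>
#include <vector>

struct ColumnWithName { std::string name; /* column data omitted */ };
using Block = std::vector<ColumnWithName>;   // Simplified stand-in for DB::Block.

// Keep only the requested columns, in the order the query asked for them.
Block narrowHeader(const Block & full_header, const std::vector<std::string> & required)
{
    Block narrowed;
    for (const auto & name : required)
        for (const auto & column : full_header)
            if (column.name == name)
                narrowed.push_back(column);
    return narrowed;
}

int main()
{
    Block header{{"id"}, {"payload"}, {"ts"}};
    for (const auto & column : narrowHeader(header, {"ts", "id"}))
        std::cout << column.name << '\n';    // Prints ts and id; "payload" is never read.
}
```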
+ bool isColumnOriented() const; protected: friend class StorageFileSource; diff --git a/src/Storages/StorageGenerateRandom.cpp b/src/Storages/StorageGenerateRandom.cpp index fd10691ecc4..bc158c38f37 100644 --- a/src/Storages/StorageGenerateRandom.cpp +++ b/src/Storages/StorageGenerateRandom.cpp @@ -65,7 +65,7 @@ ColumnPtr fillColumnWithRandomData( UInt64 max_array_length, UInt64 max_string_length, pcg64 & rng, - const Context & context) + ContextPtr context) { TypeIndex idx = type->getTypeId(); @@ -215,7 +215,7 @@ ColumnPtr fillColumnWithRandomData( column->getData().resize(limit); for (size_t i = 0; i < limit; ++i) - column->getData()[i] = rng() % (DATE_LUT_MAX_DAY_NUM + 1); /// Slow + column->getData()[i] = rng() % (DATE_LUT_MAX_DAY_NUM + 1); return column; } @@ -339,7 +339,7 @@ ColumnPtr fillColumnWithRandomData( class GenerateSource : public SourceWithProgress { public: - GenerateSource(UInt64 block_size_, UInt64 max_array_length_, UInt64 max_string_length_, UInt64 random_seed_, Block block_header_, const Context & context_) + GenerateSource(UInt64 block_size_, UInt64 max_array_length_, UInt64 max_string_length_, UInt64 random_seed_, Block block_header_, ContextPtr context_) : SourceWithProgress(Nested::flatten(prepareBlockToFill(block_header_))) , block_size(block_size_), max_array_length(max_array_length_), max_string_length(max_string_length_) , block_to_fill(std::move(block_header_)), rng(random_seed_), context(context_) {} @@ -367,7 +367,7 @@ private: pcg64 rng; - const Context & context; + ContextPtr context; static Block & prepareBlockToFill(Block & block) { @@ -442,7 +442,7 @@ Pipe StorageGenerateRandom::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & /*query_info*/, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum /*processed_stage*/, size_t max_block_size, unsigned num_streams) diff --git a/src/Storages/StorageGenerateRandom.h b/src/Storages/StorageGenerateRandom.h index 965c5b3a9d3..d9c2acb782b 100644 --- a/src/Storages/StorageGenerateRandom.h +++ b/src/Storages/StorageGenerateRandom.h @@ -19,7 +19,7 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; diff --git a/src/Storages/StorageInMemoryMetadata.cpp b/src/Storages/StorageInMemoryMetadata.cpp index 871ff38c07f..2f4a24a5c60 100644 --- a/src/Storages/StorageInMemoryMetadata.cpp +++ b/src/Storages/StorageInMemoryMetadata.cpp @@ -291,9 +291,10 @@ Block StorageInMemoryMetadata::getSampleBlockForColumns( { Block res; - std::unordered_map columns_map; - auto all_columns = getColumns().getAllWithSubcolumns(); + std::unordered_map columns_map; + columns_map.reserve(all_columns.size()); + for (const auto & elem : all_columns) columns_map.emplace(elem.name, elem.type); @@ -306,15 +307,11 @@ Block StorageInMemoryMetadata::getSampleBlockForColumns( { auto it = columns_map.find(name); if (it != columns_map.end()) - { res.insert({it->second->createColumn(), it->second, it->first}); - } else - { throw Exception( - "Column " + backQuote(name) + " not found in table " + storage_id.getNameForLogs(), + "Column " + backQuote(name) + " not found in table " + (storage_id.empty() ? 
"" : storage_id.getNameForLogs()), ErrorCodes::NOT_FOUND_COLUMN_IN_BLOCK); - } } return res; diff --git a/src/Storages/StorageInMemoryMetadata.h b/src/Storages/StorageInMemoryMetadata.h index 038416aff7d..00fb944c0b5 100644 --- a/src/Storages/StorageInMemoryMetadata.h +++ b/src/Storages/StorageInMemoryMetadata.h @@ -85,9 +85,10 @@ struct StorageInMemoryMetadata /// Returns combined set of columns const ColumnsDescription & getColumns() const; - /// Returns secondary indices + /// Returns secondary indices const IndicesDescription & getSecondaryIndices() const; + /// Has at least one non primary index bool hasSecondaryIndices() const; @@ -146,8 +147,7 @@ struct StorageInMemoryMetadata /// Storage metadata. StorageID required only for more clear exception /// message. Block getSampleBlockForColumns( - const Names & column_names, const NamesAndTypesList & virtuals, const StorageID & storage_id) const; - + const Names & column_names, const NamesAndTypesList & virtuals = {}, const StorageID & storage_id = StorageID::createEmpty()) const; /// Returns structure with partition key. const KeyDescription & getPartitionKey() const; /// Returns ASTExpressionList of partition key expression for storage or nullptr if there is none. diff --git a/src/Storages/StorageInput.cpp b/src/Storages/StorageInput.cpp index 1f881bccf07..63b440aff08 100644 --- a/src/Storages/StorageInput.cpp +++ b/src/Storages/StorageInput.cpp @@ -27,17 +27,14 @@ StorageInput::StorageInput(const StorageID & table_id, const ColumnsDescription } -class StorageInputSource : public SourceWithProgress +class StorageInputSource : public SourceWithProgress, WithContext { public: - StorageInputSource(Context & context_, Block sample_block) - : SourceWithProgress(std::move(sample_block)), context(context_) - { - } + StorageInputSource(ContextPtr context_, Block sample_block) : SourceWithProgress(std::move(sample_block)), WithContext(context_) {} Chunk generate() override { - auto block = context.getInputBlocksReaderCallback()(context); + auto block = getContext()->getInputBlocksReaderCallback()(getContext()); if (!block) return {}; @@ -46,9 +43,6 @@ public: } String getName() const override { return "Input"; } - -private: - Context & context; }; @@ -62,18 +56,18 @@ Pipe StorageInput::read( const Names & /*column_names*/, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & /*query_info*/, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum /*processed_stage*/, size_t /*max_block_size*/, unsigned /*num_streams*/) { Pipes pipes; - Context & query_context = const_cast(context).getQueryContext(); + auto query_context = context->getQueryContext(); /// It is TCP request if we have callbacks for input(). - if (query_context.getInputBlocksReaderCallback()) + if (query_context->getInputBlocksReaderCallback()) { /// Send structure to the client. 
- query_context.initializeInput(shared_from_this()); + query_context->initializeInput(shared_from_this()); return Pipe(std::make_shared(query_context, metadata_snapshot->getSampleBlock())); } diff --git a/src/Storages/StorageInput.h b/src/Storages/StorageInput.h index 3cb64993d45..106c22db385 100644 --- a/src/Storages/StorageInput.h +++ b/src/Storages/StorageInput.h @@ -21,7 +21,7 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; diff --git a/src/Storages/StorageJoin.cpp b/src/Storages/StorageJoin.cpp index a449cebba51..983b9213a35 100644 --- a/src/Storages/StorageJoin.cpp +++ b/src/Storages/StorageJoin.cpp @@ -68,7 +68,7 @@ StorageJoin::StorageJoin( void StorageJoin::truncate( - const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, const Context &, TableExclusiveLockHolder&) + const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, ContextPtr, TableExclusiveLockHolder&) { disk->removeRecursive(path); disk->createDirectories(path); @@ -146,7 +146,7 @@ void registerStorageJoin(StorageFactory & factory) ASTs & engine_args = args.engine_args; - const auto & settings = args.context.getSettingsRef(); + const auto & settings = args.getContext()->getSettingsRef(); auto join_use_nulls = settings.join_use_nulls; auto max_rows_in_join = settings.max_rows_in_join; @@ -186,7 +186,7 @@ void registerStorageJoin(StorageFactory & factory) } } - DiskPtr disk = args.context.getDisk(disk_name); + DiskPtr disk = args.getContext()->getDisk(disk_name); if (engine_args.size() < 3) throw Exception( @@ -492,7 +492,7 @@ Pipe StorageJoin::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & /*query_info*/, - const Context & /*context*/, + ContextPtr /*context*/, QueryProcessingStage::Enum /*processed_stage*/, size_t max_block_size, unsigned /*num_streams*/) diff --git a/src/Storages/StorageJoin.h b/src/Storages/StorageJoin.h index 5f0f9f92404..4baac53c69c 100644 --- a/src/Storages/StorageJoin.h +++ b/src/Storages/StorageJoin.h @@ -27,7 +27,7 @@ class StorageJoin final : public ext::shared_ptr_helper, public Sto public: String getName() const override { return "Join"; } - void truncate(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, const Context &, TableExclusiveLockHolder &) override; + void truncate(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, ContextPtr, TableExclusiveLockHolder &) override; /// Return instance of HashJoin holding lock that protects from insertions to StorageJoin. /// HashJoin relies on structure of hash table that's why we need to return it with locked mutex. 
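A standalone sketch of the locking idiom described in the comment above: the accessor hands back the join object bundled with a shared lock on the storage's rwlock, so insertions, which take the exclusive side, cannot restructure the hash table while a reader holds it. All names here are illustrative stand-ins, not the real StorageJoin API:

```cpp
#include <iostream>
#include <shared_mutex>
#include <vector>

struct HashJoin { std::vector<int> buckets; };   // Stand-in for the real join.

// The lock lives as long as the returned object, keeping the join readable.
struct LockedJoin
{
    std::shared_lock<std::shared_mutex> lock;
    HashJoin & join;
};

class StorageJoinLike
{
public:
    LockedJoin getJoinLocked()
    {
        return LockedJoin{std::shared_lock(rwlock), join};
    }

    void insertBlock(int value)                  // Writers take the exclusive side.
    {
        std::unique_lock exclusive(rwlock);
        join.buckets.push_back(value);
    }

private:
    std::shared_mutex rwlock;
    HashJoin join;
};

int main()
{
    StorageJoinLike storage;
    storage.insertBlock(42);
    auto locked = storage.getJoinLocked();       // Safe to read while the lock is held.
    std::cout << locked.join.buckets.size() << '\n';
}
```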
@@ -45,7 +45,7 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; diff --git a/src/Storages/StorageLog.cpp b/src/Storages/StorageLog.cpp index 02172517eb1..8ed68e0b44d 100644 --- a/src/Storages/StorageLog.cpp +++ b/src/Storages/StorageLog.cpp @@ -87,6 +87,8 @@ private: size_t rows_read = 0; size_t max_read_buffer_size; + std::unordered_map serializations; + struct Stream { Stream(const DiskPtr & disk, const String & data_path, size_t offset, size_t max_read_buffer_size_) @@ -104,11 +106,11 @@ private: using FileStreams = std::map; FileStreams streams; - using DeserializeState = IDataType::DeserializeBinaryBulkStatePtr; + using DeserializeState = ISerialization::DeserializeBinaryBulkStatePtr; using DeserializeStates = std::map; DeserializeStates deserialize_states; - void readData(const NameAndTypePair & name_and_type, ColumnPtr & column, size_t max_rows_to_read, IDataType::SubstreamsCache & cache); + void readData(const NameAndTypePair & name_and_type, ColumnPtr & column, size_t max_rows_to_read, ISerialization::SubstreamsCache & cache); }; @@ -124,7 +126,7 @@ Chunk LogSource::generate() /// How many rows to read for the next block. size_t max_rows_to_read = std::min(block_size, rows_limit - rows_read); - std::unordered_map caches; + std::unordered_map caches; for (const auto & name_type : columns) { @@ -162,19 +164,20 @@ Chunk LogSource::generate() void LogSource::readData(const NameAndTypePair & name_and_type, ColumnPtr & column, - size_t max_rows_to_read, IDataType::SubstreamsCache & cache) + size_t max_rows_to_read, ISerialization::SubstreamsCache & cache) { - IDataType::DeserializeBinaryBulkSettings settings; /// TODO Use avg_value_size_hint. + ISerialization::DeserializeBinaryBulkSettings settings; /// TODO Use avg_value_size_hint. 
const auto & [name, type] = name_and_type; + auto serialization = IDataType::getSerialization(name_and_type); auto create_stream_getter = [&](bool stream_for_prefix) { - return [&, stream_for_prefix] (const IDataType::SubstreamPath & path) -> ReadBuffer * + return [&, stream_for_prefix] (const ISerialization::SubstreamPath & path) -> ReadBuffer * { - if (cache.count(IDataType::getSubcolumnNameForStream(path))) + if (cache.count(ISerialization::getSubcolumnNameForStream(path))) return nullptr; - String stream_name = IDataType::getFileNameForStream(name_and_type, path); + String stream_name = ISerialization::getFileNameForStream(name_and_type, path); const auto & file_it = storage.files.find(stream_name); if (storage.files.end() == file_it) throw Exception("Logical error: no information about file " + stream_name + " in StorageLog", ErrorCodes::LOGICAL_ERROR); @@ -193,11 +196,11 @@ void LogSource::readData(const NameAndTypePair & name_and_type, ColumnPtr & colu if (deserialize_states.count(name) == 0) { settings.getter = create_stream_getter(true); - type->deserializeBinaryBulkStatePrefix(settings, deserialize_states[name]); + serialization->deserializeBinaryBulkStatePrefix(settings, deserialize_states[name]); } settings.getter = create_stream_getter(false); - type->deserializeBinaryBulkWithMultipleStreams(column, max_rows_to_read, settings, deserialize_states[name], &cache); + serialization->deserializeBinaryBulkWithMultipleStreams(column, max_rows_to_read, settings, deserialize_states[name], &cache); } @@ -282,11 +285,11 @@ private: std::unique_ptr marks_stream; /// Declared below `lock` to make the file open when rwlock is captured. - using SerializeState = IDataType::SerializeBinaryBulkStatePtr; + using SerializeState = ISerialization::SerializeBinaryBulkStatePtr; using SerializeStates = std::map; SerializeStates serialize_states; - IDataType::OutputStreamGetter createStreamGetter(const NameAndTypePair & name_and_type, WrittenStreams & written_streams); + ISerialization::OutputStreamGetter createStreamGetter(const NameAndTypePair & name_and_type, WrittenStreams & written_streams); void writeData( const NameAndTypePair & name_and_type, @@ -324,14 +327,15 @@ void LogBlockOutputStream::writeSuffix() return; WrittenStreams written_streams; - IDataType::SerializeBinaryBulkSettings settings; + ISerialization::SerializeBinaryBulkSettings settings; for (const auto & column : getHeader()) { auto it = serialize_states.find(column.name); if (it != serialize_states.end()) { settings.getter = createStreamGetter(NameAndTypePair(column.name, column.type), written_streams); - column.type->serializeBinaryBulkStateSuffix(settings, it->second); + auto serialization = column.type->getDefaultSerialization(); + serialization->serializeBinaryBulkStateSuffix(settings, it->second); } } @@ -353,15 +357,20 @@ void LogBlockOutputStream::writeSuffix() streams.clear(); done = true; + + /// Unlock must be done from the same thread that took the lock, and the dtor may be + /// called from a different thread, so the unlock is done here (at least in + /// the case when no exception occurred) + lock.unlock(); } -IDataType::OutputStreamGetter LogBlockOutputStream::createStreamGetter(const NameAndTypePair & name_and_type, +ISerialization::OutputStreamGetter LogBlockOutputStream::createStreamGetter(const NameAndTypePair & name_and_type, WrittenStreams & written_streams) { - return [&] (const IDataType::SubstreamPath & path) -> WriteBuffer * + return [&] (const ISerialization::SubstreamPath & path) -> WriteBuffer * { - String stream_name = 
IDataType::getFileNameForStream(name_and_type, path); + String stream_name = ISerialization::getFileNameForStream(name_and_type, path); if (written_streams.count(stream_name)) return nullptr; @@ -377,12 +386,13 @@ IDataType::OutputStreamGetter LogBlockOutputStream::createStreamGetter(const Nam void LogBlockOutputStream::writeData(const NameAndTypePair & name_and_type, const IColumn & column, MarksForColumns & out_marks, WrittenStreams & written_streams) { - IDataType::SerializeBinaryBulkSettings settings; + ISerialization::SerializeBinaryBulkSettings settings; const auto & [name, type] = name_and_type; + auto serialization = type->getDefaultSerialization(); - type->enumerateStreams([&] (const IDataType::SubstreamPath & path, const IDataType & /* substream_type */) + serialization->enumerateStreams([&] (const ISerialization::SubstreamPath & path) { - String stream_name = IDataType::getFileNameForStream(name_and_type, path); + String stream_name = ISerialization::getFileNameForStream(name_and_type, path); if (written_streams.count(stream_name)) return; @@ -398,11 +408,11 @@ void LogBlockOutputStream::writeData(const NameAndTypePair & name_and_type, cons settings.getter = createStreamGetter(name_and_type, written_streams); if (serialize_states.count(name) == 0) - type->serializeBinaryBulkStatePrefix(settings, serialize_states[name]); + serialization->serializeBinaryBulkStatePrefix(settings, serialize_states[name]); - type->enumerateStreams([&] (const IDataType::SubstreamPath & path, const IDataType & /* substream_type */) + serialization->enumerateStreams([&] (const ISerialization::SubstreamPath & path) { - String stream_name = IDataType::getFileNameForStream(name_and_type, path); + String stream_name = ISerialization::getFileNameForStream(name_and_type, path); if (written_streams.count(stream_name)) return; @@ -416,11 +426,11 @@ void LogBlockOutputStream::writeData(const NameAndTypePair & name_and_type, cons out_marks.emplace_back(file.column_index, mark); }, settings.path); - type->serializeBinaryBulkWithMultipleStreams(column, 0, 0, settings, serialize_states[name]); + serialization->serializeBinaryBulkWithMultipleStreams(column, 0, 0, settings, serialize_states[name]); - type->enumerateStreams([&] (const IDataType::SubstreamPath & path, const IDataType & /* substream_type */) + serialization->enumerateStreams([&] (const ISerialization::SubstreamPath & path) { - String stream_name = IDataType::getFileNameForStream(name_and_type, path); + String stream_name = ISerialization::getFileNameForStream(name_and_type, path); if (!written_streams.emplace(stream_name).second) return; @@ -501,9 +511,9 @@ void StorageLog::addFiles(const NameAndTypePair & column) throw Exception("Duplicate column with name " + column.name + " in constructor of StorageLog.", ErrorCodes::DUPLICATE_COLUMN); - IDataType::StreamCallback stream_callback = [&] (const IDataType::SubstreamPath & substream_path, const IDataType & /* substream_type */) + ISerialization::StreamCallback stream_callback = [&] (const ISerialization::SubstreamPath & substream_path) { - String stream_name = IDataType::getFileNameForStream(column, substream_path); + String stream_name = ISerialization::getFileNameForStream(column, substream_path); if (!files.count(stream_name)) { @@ -516,8 +526,8 @@ void StorageLog::addFiles(const NameAndTypePair & column) } }; - IDataType::SubstreamPath substream_path; - column.type->enumerateStreams(stream_callback, substream_path); + auto serialization = column.type->getDefaultSerialization(); + 
serialization->enumerateStreams(stream_callback); } @@ -581,7 +591,7 @@ void StorageLog::rename(const String & new_path_to_table_data, const StorageID & renameInMemory(new_table_id); } -void StorageLog::truncate(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, const Context &, TableExclusiveLockHolder &) +void StorageLog::truncate(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, ContextPtr, TableExclusiveLockHolder &) { files.clear(); file_count = 0; @@ -607,11 +617,12 @@ const StorageLog::Marks & StorageLog::getMarksWithRealRowCount(const StorageMeta * If this is a data type with multiple stream, get the first stream, that we assume have real row count. * (Example: for Array data type, first stream is array sizes; and number of array sizes is the number of arrays). */ - IDataType::SubstreamPath substream_root_path; - column.type->enumerateStreams([&](const IDataType::SubstreamPath & substream_path, const IDataType & /* substream_type */) + ISerialization::SubstreamPath substream_root_path; + auto serialization = column.type->getDefaultSerialization(); + serialization->enumerateStreams([&](const ISerialization::SubstreamPath & substream_path) { if (filename.empty()) - filename = IDataType::getFileNameForStream(column, substream_path); + filename = ISerialization::getFileNameForStream(column, substream_path); }, substream_root_path); Files::const_iterator it = files.find(filename); @@ -622,9 +633,9 @@ const StorageLog::Marks & StorageLog::getMarksWithRealRowCount(const StorageMeta } -static std::chrono::seconds getLockTimeout(const Context & context) +static std::chrono::seconds getLockTimeout(ContextPtr context) { - const Settings & settings = context.getSettingsRef(); + const Settings & settings = context->getSettingsRef(); Int64 lock_timeout = settings.lock_acquire_timeout.totalSeconds(); if (settings.max_execution_time.totalSeconds() != 0 && settings.max_execution_time.totalSeconds() < lock_timeout) lock_timeout = settings.max_execution_time.totalSeconds(); @@ -636,7 +647,7 @@ Pipe StorageLog::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & /*query_info*/, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum /*processed_stage*/, size_t max_block_size, unsigned num_streams) @@ -661,7 +672,7 @@ Pipe StorageLog::read( if (num_streams > marks_size) num_streams = marks_size; - size_t max_read_buffer_size = context.getSettingsRef().max_read_buffer_size; + size_t max_read_buffer_size = context->getSettingsRef().max_read_buffer_size; for (size_t stream = 0; stream < num_streams; ++stream) { @@ -684,7 +695,7 @@ Pipe StorageLog::read( return Pipe::unitePipes(std::move(pipes)); } -BlockOutputStreamPtr StorageLog::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, const Context & context) +BlockOutputStreamPtr StorageLog::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, ContextPtr context) { auto lock_timeout = getLockTimeout(context); loadMarks(lock_timeout); @@ -696,7 +707,7 @@ BlockOutputStreamPtr StorageLog::write(const ASTPtr & /*query*/, const StorageMe return std::make_shared(*this, metadata_snapshot, std::move(lock)); } -CheckResults StorageLog::checkData(const ASTPtr & /* query */, const Context & context) +CheckResults StorageLog::checkData(const ASTPtr & /* query */, ContextPtr context) { std::shared_lock lock(rwlock, getLockTimeout(context)); if (!lock) @@ -720,11 +731,11 @@ void registerStorageLog(StorageFactory & factory) 
ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); String disk_name = getDiskName(*args.storage_def); - DiskPtr disk = args.context.getDisk(disk_name); + DiskPtr disk = args.getContext()->getDisk(disk_name); return StorageLog::create( disk, args.relative_data_path, args.table_id, args.columns, args.constraints, - args.attach, args.context.getSettings().max_compress_block_size); + args.attach, args.getContext()->getSettings().max_compress_block_size); }, features); } diff --git a/src/Storages/StorageLog.h b/src/Storages/StorageLog.h index acb03658182..4fbaf53529f 100644 --- a/src/Storages/StorageLog.h +++ b/src/Storages/StorageLog.h @@ -29,18 +29,18 @@ public: const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; - BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, const Context & context) override; + BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, ContextPtr context) override; void rename(const String & new_path_to_table_data, const StorageID & new_table_id) override; - CheckResults checkData(const ASTPtr & /* query */, const Context & /* context */) override; + CheckResults checkData(const ASTPtr & /* query */, ContextPtr /* context */) override; - void truncate(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, const Context &, TableExclusiveLockHolder &) override; + void truncate(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, ContextPtr, TableExclusiveLockHolder &) override; bool storesDataOnDisk() const override { return true; } Strings getDataPaths() const override { return {DB::fullPath(disk, table_path)}; } @@ -86,7 +86,7 @@ private: DiskPtr disk; String table_path; - mutable std::shared_timed_mutex rwlock; + std::shared_timed_mutex rwlock; Files files; diff --git a/src/Storages/StorageMaterializeMySQL.cpp b/src/Storages/StorageMaterializeMySQL.cpp index e59f1e22958..8e6f2e1ad63 100644 --- a/src/Storages/StorageMaterializeMySQL.cpp +++ b/src/Storages/StorageMaterializeMySQL.cpp @@ -39,7 +39,7 @@ Pipe StorageMaterializeMySQL::read( const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned int num_streams) @@ -48,7 +48,7 @@ Pipe StorageMaterializeMySQL::read( rethrowSyncExceptionIfNeed(database); NameSet column_names_set = NameSet(column_names.begin(), column_names.end()); - auto lock = nested_storage->lockForShare(context.getCurrentQueryId(), context.getSettingsRef().lock_acquire_timeout); + auto lock = nested_storage->lockForShare(context->getCurrentQueryId(), context->getSettingsRef().lock_acquire_timeout); const StorageMetadataPtr & nested_metadata = nested_storage->getInMemoryMetadataPtr(); Block nested_header = nested_metadata->getSampleBlock(); @@ -92,7 +92,7 @@ Pipe StorageMaterializeMySQL::read( { Block pipe_header = pipe.getHeader(); auto syntax = TreeRewriter(context).analyze(expressions, pipe_header.getNamesAndTypesList()); - ExpressionActionsPtr expression_actions = ExpressionAnalyzer(expressions, syntax, context).getActions(true); + ExpressionActionsPtr expression_actions = ExpressionAnalyzer(expressions, syntax, context).getActions(true /* add_aliases */, false /* project_result */); 
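The other change threaded through every storage in this diff is mechanical: each `const Context &` parameter becomes a `ContextPtr` (a shared pointer alias from src/Interpreters/Context_fwd.h), so member access flips from `context.getSettingsRef()` to `context->getSettingsRef()` and factory registrators go through `args.getContext()`. A minimal compilable sketch of the pattern; the `Settings` field and the const-ness of the alias here are assumptions for the demo:

```cpp
#include <iostream>
#include <memory>

struct Settings { size_t max_read_buffer_size = 1048576; };  // invented field for the demo

class Context
{
public:
    const Settings & getSettingsRef() const { return settings; }
private:
    Settings settings;
};

// Assumed shape of the alias; the real one lives in src/Interpreters/Context_fwd.h.
using ContextPtr = std::shared_ptr<const Context>;

// Before: size_t f(const Context & context) { return context.getSettingsRef()...; }
size_t maxReadBufferSize(ContextPtr context)
{
    return context->getSettingsRef().max_read_buffer_size;  // '.' becomes '->'
}

int main()
{
    ContextPtr context = std::make_shared<Context>();
    std::cout << maxReadBufferSize(context) << '\n';
}
```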
pipe.addSimpleTransform([&](const Block & header) { diff --git a/src/Storages/StorageMaterializeMySQL.h b/src/Storages/StorageMaterializeMySQL.h index f787470e2d2..8a4b88cbbb4 100644 --- a/src/Storages/StorageMaterializeMySQL.h +++ b/src/Storages/StorageMaterializeMySQL.h @@ -26,9 +26,9 @@ public: Pipe read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; - BlockOutputStreamPtr write(const ASTPtr &, const StorageMetadataPtr &, const Context &) override { throwNotAllowed(); } + BlockOutputStreamPtr write(const ASTPtr &, const StorageMetadataPtr &, ContextPtr) override { throwNotAllowed(); } NamesAndTypesList getVirtuals() const override; ColumnSizeByName getColumnSizes() const override; diff --git a/src/Storages/StorageMaterializedView.cpp b/src/Storages/StorageMaterializedView.cpp index d2cb418ce97..89b8bc72526 100644 --- a/src/Storages/StorageMaterializedView.cpp +++ b/src/Storages/StorageMaterializedView.cpp @@ -26,7 +26,8 @@ #include #include #include - +#include +#include namespace DB { @@ -49,11 +50,11 @@ static inline String generateInnerTableName(const StorageID & view_id) StorageMaterializedView::StorageMaterializedView( const StorageID & table_id_, - Context & local_context, + ContextPtr local_context, const ASTCreateQuery & query, const ColumnsDescription & columns_, bool attach_) - : IStorage(table_id_), global_context(local_context.getGlobalContext()) + : IStorage(table_id_), WithContext(local_context->getGlobalContext()) { StorageInMemoryMetadata storage_metadata; storage_metadata.setColumns(columns_); @@ -75,24 +76,30 @@ StorageMaterializedView::StorageMaterializedView( storage_metadata.setSelectQuery(select); setInMemoryMetadata(storage_metadata); + bool point_to_itself_by_uuid = has_inner_table && query.to_inner_uuid != UUIDHelpers::Nil + && query.to_inner_uuid == table_id_.uuid; + bool point_to_itself_by_name = !has_inner_table && query.to_table_id.database_name == table_id_.database_name + && query.to_table_id.table_name == table_id_.table_name; + if (point_to_itself_by_uuid || point_to_itself_by_name) + throw Exception(ErrorCodes::BAD_ARGUMENTS, "Materialized view {} cannot point to itself", table_id_.getFullTableName()); + if (!has_inner_table) { - if (query.to_table_id.database_name == table_id_.database_name && query.to_table_id.table_name == table_id_.table_name) - throw Exception(ErrorCodes::BAD_ARGUMENTS, "Materialized view {} cannot point to itself", table_id_.getFullTableName()); target_table_id = query.to_table_id; } else if (attach_) { /// If there is an ATTACH request, then the internal table must already be created. - target_table_id = StorageID(getStorageID().database_name, generateInnerTableName(getStorageID())); + target_table_id = StorageID(getStorageID().database_name, generateInnerTableName(getStorageID()), query.to_inner_uuid); } else { /// We will create a query to create an internal table. 
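The new `point_to_itself_by_uuid` / `point_to_itself_by_name` guard in the constructor above rejects a materialized view whose target resolves back to the view itself, either through `TO INNER UUID` or through a plain `TO database.table` clause. A reduced, runnable sketch of the same two-condition check (StorageID and the UUID type are simplified stand-ins):

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Simplified stand-ins: the real check compares query.to_inner_uuid / query.to_table_id
// against the view's own StorageID.
struct StorageID { std::string database; std::string table; uint64_t uuid = 0; };
constexpr uint64_t NIL_UUID = 0;  // stands in for UUIDHelpers::Nil

bool pointsToItself(const StorageID & view_id, const StorageID & to_table_id,
                    uint64_t to_inner_uuid, bool has_inner_table)
{
    // With an inner table, TO INNER UUID must not equal the view's own UUID;
    // without one, the TO table must not name the view itself.
    bool by_uuid = has_inner_table && to_inner_uuid != NIL_UUID && to_inner_uuid == view_id.uuid;
    bool by_name = !has_inner_table && to_table_id.database == view_id.database
        && to_table_id.table == view_id.table;
    return by_uuid || by_name;
}

int main()
{
    StorageID view{"db", "mv", 42};
    assert(pointsToItself(view, view, NIL_UUID, /*has_inner_table=*/ false));  // TO db.mv
    assert(!pointsToItself(view, {"db", "dst"}, NIL_UUID, false));             // TO db.dst is fine
    assert(pointsToItself(view, {}, 42, /*has_inner_table=*/ true));           // TO INNER UUID = own UUID
}
```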
- auto create_context = Context(local_context); + auto create_context = Context::createCopy(local_context); auto manual_create_query = std::make_shared(); manual_create_query->database = getStorageID().database_name; manual_create_query->table = generateInnerTableName(getStorageID()); + manual_create_query->uuid = query.to_inner_uuid; auto new_columns_list = std::make_shared(); new_columns_list->set(new_columns_list->columns, query.columns_list->columns->ptr()); @@ -104,30 +111,33 @@ StorageMaterializedView::StorageMaterializedView( create_interpreter.setInternal(true); create_interpreter.execute(); - target_table_id = DatabaseCatalog::instance().getTable({manual_create_query->database, manual_create_query->table}, global_context)->getStorageID(); + target_table_id = DatabaseCatalog::instance().getTable({manual_create_query->database, manual_create_query->table}, getContext())->getStorageID(); } if (!select.select_table_id.empty()) DatabaseCatalog::instance().addDependency(select.select_table_id, getStorageID()); } -QueryProcessingStage::Enum StorageMaterializedView::getQueryProcessingStage(const Context & context, QueryProcessingStage::Enum to_stage, SelectQueryInfo & query_info) const +QueryProcessingStage::Enum StorageMaterializedView::getQueryProcessingStage( + ContextPtr local_context, QueryProcessingStage::Enum to_stage, SelectQueryInfo & query_info) const { - return getTargetTable()->getQueryProcessingStage(context, to_stage, query_info); + return getTargetTable()->getQueryProcessingStage(local_context, to_stage, query_info); } Pipe StorageMaterializedView::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr local_context, QueryProcessingStage::Enum processed_stage, const size_t max_block_size, const unsigned num_streams) { QueryPlan plan; - read(plan, column_names, metadata_snapshot, query_info, context, processed_stage, max_block_size, num_streams); - return plan.convertToPipe(QueryPlanOptimizationSettings(context.getSettingsRef())); + read(plan, column_names, metadata_snapshot, query_info, local_context, processed_stage, max_block_size, num_streams); + return plan.convertToPipe( + QueryPlanOptimizationSettings::fromContext(local_context), + BuildQueryPipelineSettings::fromContext(local_context)); } void StorageMaterializedView::read( @@ -135,23 +145,23 @@ void StorageMaterializedView::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr local_context, QueryProcessingStage::Enum processed_stage, const size_t max_block_size, const unsigned num_streams) { auto storage = getTargetTable(); - auto lock = storage->lockForShare(context.getCurrentQueryId(), context.getSettingsRef().lock_acquire_timeout); + auto lock = storage->lockForShare(local_context->getCurrentQueryId(), local_context->getSettingsRef().lock_acquire_timeout); auto target_metadata_snapshot = storage->getInMemoryMetadataPtr(); if (query_info.order_optimizer) - query_info.input_order_info = query_info.order_optimizer->getInputOrder(target_metadata_snapshot, context); + query_info.input_order_info = query_info.order_optimizer->getInputOrder(target_metadata_snapshot, local_context); - storage->read(query_plan, column_names, target_metadata_snapshot, query_info, context, processed_stage, max_block_size, num_streams); + storage->read(query_plan, column_names, target_metadata_snapshot, query_info, local_context, processed_stage, 
max_block_size, num_streams); if (query_plan.isInitialized()) { - auto mv_header = getHeaderForProcessingStage(*this, column_names, metadata_snapshot, query_info, context, processed_stage); + auto mv_header = getHeaderForProcessingStage(*this, column_names, metadata_snapshot, query_info, local_context, processed_stage); auto target_header = query_plan.getCurrentDataStream().header; if (!blocksHaveEqualStructure(mv_header, target_header)) { @@ -181,20 +191,20 @@ void StorageMaterializedView::read( } } -BlockOutputStreamPtr StorageMaterializedView::write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, const Context & context) +BlockOutputStreamPtr StorageMaterializedView::write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, ContextPtr local_context) { auto storage = getTargetTable(); - auto lock = storage->lockForShare(context.getCurrentQueryId(), context.getSettingsRef().lock_acquire_timeout); + auto lock = storage->lockForShare(local_context->getCurrentQueryId(), local_context->getSettingsRef().lock_acquire_timeout); auto metadata_snapshot = storage->getInMemoryMetadataPtr(); - auto stream = storage->write(query, metadata_snapshot, context); + auto stream = storage->write(query, metadata_snapshot, local_context); stream->addTableLock(lock); return stream; } -static void executeDropQuery(ASTDropQuery::Kind kind, const Context & global_context, const Context & current_context, const StorageID & target_table_id, bool no_delay) +static void executeDropQuery(ASTDropQuery::Kind kind, ContextPtr global_context, ContextPtr current_context, const StorageID & target_table_id, bool no_delay) { if (DatabaseCatalog::instance().tryGetTable(target_table_id, current_context)) { @@ -210,13 +220,13 @@ static void executeDropQuery(ASTDropQuery::Kind kind, const Context & global_con /// to avoid "Not enough privileges" error if current user has only DROP VIEW ON mat_view_name privilege /// and not allowed to drop inner table explicitly. Allowing to drop inner table without explicit grant /// looks like expected behaviour and we have tests for it. 
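Note how read() above now assembles its pipe via plan.convertToPipe(QueryPlanOptimizationSettings::fromContext(...), BuildQueryPipelineSettings::fromContext(...)): each settings bundle is built from the context by a static factory instead of being constructed from a raw Settings reference at the call site. A toy version of that factory pattern, with invented field names (the drop-query code continues right below):

```cpp
#include <iostream>
#include <memory>

struct Settings { bool optimize_plan = true; size_t max_threads = 8; };  // invented fields

class Context
{
public:
    const Settings & getSettingsRef() const { return settings; }
private:
    Settings settings;
};
using ContextPtr = std::shared_ptr<const Context>;

// Each settings bundle knows how to extract what it needs from a context,
// so call sites no longer pick individual fields out of Settings themselves.
struct QueryPlanOptimizationSettings
{
    bool optimize_plan = true;
    static QueryPlanOptimizationSettings fromContext(ContextPtr context)
    {
        return {context->getSettingsRef().optimize_plan};
    }
};

struct BuildQueryPipelineSettings
{
    size_t max_threads = 1;
    static BuildQueryPipelineSettings fromContext(ContextPtr context)
    {
        return {context->getSettingsRef().max_threads};
    }
};

int main()
{
    auto context = std::make_shared<const Context>();
    auto opt = QueryPlanOptimizationSettings::fromContext(context);
    auto pipeline = BuildQueryPipelineSettings::fromContext(context);
    std::cout << opt.optimize_plan << ' ' << pipeline.max_threads << '\n';
}
```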
- auto drop_context = Context(global_context); - drop_context.getClientInfo().query_kind = ClientInfo::QueryKind::SECONDARY_QUERY; - if (auto txn = current_context.getZooKeeperMetadataTransaction()) + auto drop_context = Context::createCopy(global_context); + drop_context->getClientInfo().query_kind = ClientInfo::QueryKind::SECONDARY_QUERY; + if (auto txn = current_context->getZooKeeperMetadataTransaction()) { /// For Replicated database - drop_context.setQueryContext(const_cast(current_context)); - drop_context.initZooKeeperMetadataTransaction(txn, true); + drop_context->setQueryContext(current_context); + drop_context->initZooKeeperMetadataTransaction(txn, true); } InterpreterDropQuery drop_interpreter(ast_drop_query, drop_context); drop_interpreter.execute(); @@ -231,19 +241,19 @@ void StorageMaterializedView::drop() if (!select_query.select_table_id.empty()) DatabaseCatalog::instance().removeDependency(select_query.select_table_id, table_id); - dropInnerTable(true, global_context); + dropInnerTable(true, getContext()); } -void StorageMaterializedView::dropInnerTable(bool no_delay, const Context & context) +void StorageMaterializedView::dropInnerTable(bool no_delay, ContextPtr local_context) { if (has_inner_table && tryGetTargetTable()) - executeDropQuery(ASTDropQuery::Kind::Drop, global_context, context, target_table_id, no_delay); + executeDropQuery(ASTDropQuery::Kind::Drop, getContext(), local_context, target_table_id, no_delay); } -void StorageMaterializedView::truncate(const ASTPtr &, const StorageMetadataPtr &, const Context & context, TableExclusiveLockHolder &) +void StorageMaterializedView::truncate(const ASTPtr &, const StorageMetadataPtr &, ContextPtr local_context, TableExclusiveLockHolder &) { if (has_inner_table) - executeDropQuery(ASTDropQuery::Kind::Truncate, global_context, context, target_table_id, true); + executeDropQuery(ASTDropQuery::Kind::Truncate, getContext(), local_context, target_table_id, true); } void StorageMaterializedView::checkStatementCanBeForwarded() const @@ -261,26 +271,26 @@ bool StorageMaterializedView::optimize( bool final, bool deduplicate, const Names & deduplicate_by_columns, - const Context & context) + ContextPtr local_context) { checkStatementCanBeForwarded(); auto storage_ptr = getTargetTable(); auto metadata_snapshot = storage_ptr->getInMemoryMetadataPtr(); - return getTargetTable()->optimize(query, metadata_snapshot, partition, final, deduplicate, deduplicate_by_columns, context); + return getTargetTable()->optimize(query, metadata_snapshot, partition, final, deduplicate, deduplicate_by_columns, local_context); } void StorageMaterializedView::alter( const AlterCommands & params, - const Context & context, + ContextPtr local_context, TableLockHolder &) { auto table_id = getStorageID(); StorageInMemoryMetadata new_metadata = getInMemoryMetadata(); StorageInMemoryMetadata old_metadata = getInMemoryMetadata(); - params.apply(new_metadata, context); + params.apply(new_metadata, local_context); /// start modify query - if (context.getSettingsRef().allow_experimental_alter_materialized_view_structure) + if (local_context->getSettingsRef().allow_experimental_alter_materialized_view_structure) { const auto & new_select = new_metadata.select; const auto & old_select = old_metadata.getSelectQuery(); @@ -291,14 +301,14 @@ void StorageMaterializedView::alter( } /// end modify query - DatabaseCatalog::instance().getDatabase(table_id.database_name)->alterTable(context, table_id, new_metadata); + 
DatabaseCatalog::instance().getDatabase(table_id.database_name)->alterTable(local_context, table_id, new_metadata); setInMemoryMetadata(new_metadata); } -void StorageMaterializedView::checkAlterIsPossible(const AlterCommands & commands, const Context & context) const +void StorageMaterializedView::checkAlterIsPossible(const AlterCommands & commands, ContextPtr local_context) const { - const auto & settings = context.getSettingsRef(); + const auto & settings = local_context->getSettingsRef(); if (settings.allow_experimental_alter_materialized_view_structure) { for (const auto & command : commands) @@ -328,10 +338,10 @@ void StorageMaterializedView::checkMutationIsPossible(const MutationCommands & c } Pipe StorageMaterializedView::alterPartition( - const StorageMetadataPtr & metadata_snapshot, const PartitionCommands & commands, const Context & context) + const StorageMetadataPtr & metadata_snapshot, const PartitionCommands & commands, ContextPtr local_context) { checkStatementCanBeForwarded(); - return getTargetTable()->alterPartition(metadata_snapshot, commands, context); + return getTargetTable()->alterPartition(metadata_snapshot, commands, local_context); } void StorageMaterializedView::checkAlterPartitionIsPossible( @@ -341,10 +351,10 @@ void StorageMaterializedView::checkAlterPartitionIsPossible( getTargetTable()->checkAlterPartitionIsPossible(commands, metadata_snapshot, settings); } -void StorageMaterializedView::mutate(const MutationCommands & commands, const Context & context) +void StorageMaterializedView::mutate(const MutationCommands & commands, ContextPtr local_context) { checkStatementCanBeForwarded(); - getTargetTable()->mutate(commands, context); + getTargetTable()->mutate(commands, local_context); } void StorageMaterializedView::renameInMemory(const StorageID & new_table_id) @@ -371,7 +381,7 @@ void StorageMaterializedView::renameInMemory(const StorageID & new_table_id) elem.to = to; rename->elements.emplace_back(elem); - InterpreterRenameQuery(rename, global_context).execute(); + InterpreterRenameQuery(rename, getContext()).execute(); target_table_id.table_name = new_target_table_name; } @@ -393,13 +403,13 @@ void StorageMaterializedView::shutdown() StoragePtr StorageMaterializedView::getTargetTable() const { checkStackSize(); - return DatabaseCatalog::instance().getTable(target_table_id, global_context); + return DatabaseCatalog::instance().getTable(target_table_id, getContext()); } StoragePtr StorageMaterializedView::tryGetTargetTable() const { checkStackSize(); - return DatabaseCatalog::instance().tryGetTable(target_table_id, global_context); + return DatabaseCatalog::instance().tryGetTable(target_table_id, getContext()); } Strings StorageMaterializedView::getDataPaths() const @@ -420,7 +430,7 @@ void registerStorageMaterializedView(StorageFactory & factory) { /// Pass local_context here to convey setting for inner table return StorageMaterializedView::create( - args.table_id, args.local_context, args.query, + args.table_id, args.getLocalContext(), args.query, args.columns, args.attach); }); } diff --git a/src/Storages/StorageMaterializedView.h b/src/Storages/StorageMaterializedView.h index a5dc089d68e..cda8112a8c3 100644 --- a/src/Storages/StorageMaterializedView.h +++ b/src/Storages/StorageMaterializedView.h @@ -12,7 +12,7 @@ namespace DB { -class StorageMaterializedView final : public ext::shared_ptr_helper<StorageMaterializedView>, public IStorage +class StorageMaterializedView final : public ext::shared_ptr_helper<StorageMaterializedView>, public IStorage, WithContext { friend struct ext::shared_ptr_helper<StorageMaterializedView>;
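The header change above drops the `Context & global_context` member in favour of inheriting WithContext, whose getContext() accessor the .cpp hunks already use. A sketch of what such a mixin could look like; the real one lives in src/Interpreters/Context_fwd.h and may differ in ownership details (plain vs. weak pointer):

```cpp
#include <iostream>
#include <memory>

class Context
{
public:
    int settingsVersion() const { return 1; }  // invented member for the demo
};
using ContextPtr = std::shared_ptr<const Context>;

// Assumed shape of the mixin: it stores a ContextPtr and exposes getContext(),
// replacing ad-hoc `Context & global_context` members in each storage.
class WithContext
{
public:
    explicit WithContext(ContextPtr context_) : context(std::move(context_)) {}
    ContextPtr getContext() const { return context; }
private:
    ContextPtr context;
};

class StorageLike : public WithContext
{
public:
    explicit StorageLike(ContextPtr global_context) : WithContext(std::move(global_context)) {}
    int probe() const { return getContext()->settingsVersion(); }
};

int main()
{
    auto global_context = std::make_shared<const Context>();
    StorageLike storage(global_context);
    std::cout << storage.probe() << '\n';
}
```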
public: @@ -27,19 +27,19 @@ public: bool supportsIndexForIn() const override { return getTargetTable()->supportsIndexForIn(); } bool supportsParallelInsert() const override { return getTargetTable()->supportsParallelInsert(); } bool supportsSubcolumns() const override { return getTargetTable()->supportsSubcolumns(); } - bool mayBenefitFromIndexForIn(const ASTPtr & left_in_operand, const Context & query_context, const StorageMetadataPtr & /* metadata_snapshot */) const override + bool mayBenefitFromIndexForIn(const ASTPtr & left_in_operand, ContextPtr query_context, const StorageMetadataPtr & /* metadata_snapshot */) const override { auto target_table = getTargetTable(); auto metadata_snapshot = target_table->getInMemoryMetadataPtr(); return target_table->mayBenefitFromIndexForIn(left_in_operand, query_context, metadata_snapshot); } - BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, const Context & context) override; + BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, ContextPtr context) override; void drop() override; - void dropInnerTable(bool no_delay, const Context & context); + void dropInnerTable(bool no_delay, ContextPtr context); - void truncate(const ASTPtr &, const StorageMetadataPtr &, const Context &, TableExclusiveLockHolder &) override; + void truncate(const ASTPtr &, const StorageMetadataPtr &, ContextPtr, TableExclusiveLockHolder &) override; bool optimize( const ASTPtr & query, @@ -48,25 +48,25 @@ public: bool final, bool deduplicate, const Names & deduplicate_by_columns, - const Context & context) override; + ContextPtr context) override; - void alter(const AlterCommands & params, const Context & context, TableLockHolder & table_lock_holder) override; + void alter(const AlterCommands & params, ContextPtr context, TableLockHolder & table_lock_holder) override; void checkMutationIsPossible(const MutationCommands & commands, const Settings & settings) const override; - void checkAlterIsPossible(const AlterCommands & commands, const Context & context) const override; + void checkAlterIsPossible(const AlterCommands & commands, ContextPtr context) const override; - Pipe alterPartition(const StorageMetadataPtr & metadata_snapshot, const PartitionCommands & commands, const Context & context) override; + Pipe alterPartition(const StorageMetadataPtr & metadata_snapshot, const PartitionCommands & commands, ContextPtr context) override; void checkAlterPartitionIsPossible(const PartitionCommands & commands, const StorageMetadataPtr & metadata_snapshot, const Settings & settings) const override; - void mutate(const MutationCommands & commands, const Context & context) override; + void mutate(const MutationCommands & commands, ContextPtr context) override; void renameInMemory(const StorageID & new_table_id) override; void shutdown() override; - QueryProcessingStage::Enum getQueryProcessingStage(const Context &, QueryProcessingStage::Enum /*to_stage*/, SelectQueryInfo &) const override; + QueryProcessingStage::Enum getQueryProcessingStage(ContextPtr, QueryProcessingStage::Enum /*to_stage*/, SelectQueryInfo &) const override; StoragePtr getTargetTable() const; StoragePtr tryGetTargetTable() const; @@ -77,7 +77,7 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; @@ -87,7 +87,7 @@ 
public: const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; @@ -98,7 +98,6 @@ private: /// Will be initialized in constructor StorageID target_table_id = StorageID::createEmpty(); - Context & global_context; bool has_inner_table = false; void checkStatementCanBeForwarded() const; @@ -106,7 +105,7 @@ private: protected: StorageMaterializedView( const StorageID & table_id_, - Context & local_context, + ContextPtr local_context, const ASTCreateQuery & query, const ColumnsDescription & columns_, bool attach_); diff --git a/src/Storages/StorageMemory.cpp b/src/Storages/StorageMemory.cpp index d98cd4212e9..4cae7367606 100644 --- a/src/Storages/StorageMemory.cpp +++ b/src/Storages/StorageMemory.cpp @@ -26,10 +26,6 @@ class MemorySource : public SourceWithProgress { using InitializerFunc = std::function<void(std::shared_ptr<const Blocks> &)>; public: - /// Blocks are stored in std::list which may be appended in another thread. - /// We use pointer to the beginning of the list and its current size. - /// We don't need synchronisation in this reader, because while we hold SharedLock on storage, - /// only new elements can be added to the back of the list, so our iterators remain valid MemorySource( Names column_names_, @@ -59,7 +55,7 @@ protected: size_t current_index = getAndIncrementExecutionIndex(); - if (current_index >= data->size()) + if (!data || current_index >= data->size()) { return {}; } @@ -182,7 +178,7 @@ Pipe StorageMemory::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & /*query_info*/, - const Context & /*context*/, + ContextPtr /*context*/, QueryProcessingStage::Enum /*processed_stage*/, size_t /*max_block_size*/, unsigned num_streams) @@ -230,7 +226,7 @@ Pipe StorageMemory::read( } -BlockOutputStreamPtr StorageMemory::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, const Context & /*context*/) +BlockOutputStreamPtr StorageMemory::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, ContextPtr /*context*/) { return std::make_shared<MemoryBlockOutputStream>(*this, metadata_snapshot); } @@ -258,7 +254,7 @@ void StorageMemory::checkMutationIsPossible(const MutationCommands & /*commands* /// Some validation will be added } -void StorageMemory::mutate(const MutationCommands & commands, const Context & context) +void StorageMemory::mutate(const MutationCommands & commands, ContextPtr context) { std::lock_guard lock(mutex); auto metadata_snapshot = getInMemoryMetadataPtr(); @@ -320,7 +316,7 @@ void StorageMemory::mutate(const MutationCommands & commands, const Context & co void StorageMemory::truncate( - const ASTPtr &, const StorageMetadataPtr &, const Context &, TableExclusiveLockHolder &) + const ASTPtr &, const StorageMetadataPtr &, ContextPtr, TableExclusiveLockHolder &) { data.set(std::make_unique<Blocks>()); total_size_bytes.store(0, std::memory_order_relaxed); diff --git a/src/Storages/StorageMemory.h b/src/Storages/StorageMemory.h index b7fa4d7b222..1118474deee 100644 --- a/src/Storages/StorageMemory.h +++ b/src/Storages/StorageMemory.h @@ -34,7 +34,7 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; @@ -47,14 +47,14 @@ public: bool
hasEvenlyDistributedRead() const override { return true; } - BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & metadata_snapshot, const Context & context) override; + BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & metadata_snapshot, ContextPtr context) override; void drop() override; void checkMutationIsPossible(const MutationCommands & commands, const Settings & settings) const override; - void mutate(const MutationCommands & commands, const Context & context) override; + void mutate(const MutationCommands & commands, ContextPtr context) override; - void truncate(const ASTPtr &, const StorageMetadataPtr &, const Context &, TableExclusiveLockHolder &) override; + void truncate(const ASTPtr &, const StorageMetadataPtr &, ContextPtr, TableExclusiveLockHolder &) override; std::optional<UInt64> totalRows(const Settings &) const override; std::optional<UInt64> totalBytes(const Settings &) const override; @@ -97,7 +97,7 @@ public: void delayReadForGlobalSubqueries() { delay_read_for_global_subqueries = true; } private: - /// MultiVersion data storage, so that we can copy the list of blocks to readers. + /// MultiVersion data storage, so that we can copy the vector of blocks to readers. MultiVersion<Blocks> data; diff --git a/src/Storages/StorageMerge.cpp b/src/Storages/StorageMerge.cpp index 46be91ba258..bf7141576e8 100644 --- a/src/Storages/StorageMerge.cpp +++ b/src/Storages/StorageMerge.cpp @@ -9,6 +9,7 @@ #include #include #include +#include #include #include #include @@ -43,12 +44,15 @@ namespace ErrorCodes namespace { -void modifySelect(ASTSelectQuery & select, const TreeRewriterResult & rewriter_result) +TreeRewriterResult modifySelect(ASTSelectQuery & select, const TreeRewriterResult & rewriter_result, ContextPtr context) { + + TreeRewriterResult new_rewriter_result = rewriter_result; if (removeJoin(select)) { /// Also remove GROUP BY cause ExpressionAnalyzer would check if it has all aggregate columns but joined columns would be missed. select.setExpression(ASTSelectQuery::Expression::GROUP_BY, {}); + new_rewriter_result.aggregates.clear(); /// Replace select list to remove joined columns auto select_list = std::make_shared<ASTExpressionList>(); @@ -57,12 +61,40 @@ void modifySelect(ASTSelectQuery & select, const TreeRewriterResult & rewriter_r select.setExpression(ASTSelectQuery::Expression::SELECT, select_list); - /// TODO: keep WHERE/PREWHERE. We have to remove joined columns and their expressions but keep others.
- select.setExpression(ASTSelectQuery::Expression::WHERE, {}); - select.setExpression(ASTSelectQuery::Expression::PREWHERE, {}); + const DB::IdentifierMembershipCollector membership_collector{select, context}; + + /// Remove unknown identifiers from where, leave only ones from left table + auto replace_where = [&membership_collector](ASTSelectQuery & query, ASTSelectQuery::Expression expr) + { + auto where = query.getExpression(expr, false); + if (!where) + return; + + const size_t left_table_pos = 0; + /// Test each argument of `and` function and select ones related to only left table + std::shared_ptr<ASTFunction> new_conj = makeASTFunction("and"); + for (const auto & node : collectConjunctions(where)) + { + if (membership_collector.getIdentsMembership(node) == left_table_pos) + new_conj->arguments->children.push_back(std::move(node)); + } + + if (new_conj->arguments->children.empty()) + /// No identifiers from left table + query.setExpression(expr, {}); + else if (new_conj->arguments->children.size() == 1) + /// Only one expression, lift from `and` + query.setExpression(expr, std::move(new_conj->arguments->children[0])); + else + /// Set new expression + query.setExpression(expr, std::move(new_conj)); + }; + replace_where(select, ASTSelectQuery::Expression::WHERE); + replace_where(select, ASTSelectQuery::Expression::PREWHERE); select.setExpression(ASTSelectQuery::Expression::HAVING, {}); select.setExpression(ASTSelectQuery::Expression::ORDER_BY, {}); } + return new_rewriter_result; } } @@ -72,11 +104,11 @@ StorageMerge::StorageMerge( const ColumnsDescription & columns_, const String & source_database_, const Strings & source_tables_, - const Context & context_) + ContextPtr context_) : IStorage(table_id_) + , WithContext(context_->getGlobalContext()) , source_database(source_database_) , source_tables(std::in_place, source_tables_.begin(), source_tables_.end()) - , global_context(context_.getGlobalContext()) { StorageInMemoryMetadata storage_metadata; storage_metadata.setColumns(columns_); @@ -88,11 +120,11 @@ StorageMerge::StorageMerge( const ColumnsDescription & columns_, const String & source_database_, const String & source_table_regexp_, - const Context & context_) + ContextPtr context_) : IStorage(table_id_) + , WithContext(context_->getGlobalContext()) , source_database(source_database_) , source_table_regexp(source_table_regexp_) - , global_context(context_.getGlobalContext()) { StorageInMemoryMetadata storage_metadata; storage_metadata.setColumns(columns_); @@ -102,7 +134,7 @@ StorageMerge::StorageMerge( template <typename F> StoragePtr StorageMerge::getFirstTable(F && predicate) const { - auto iterator = getDatabaseIterator(global_context); + auto iterator = getDatabaseIterator(getContext()); while (iterator->isValid()) { @@ -124,11 +156,10 @@ bool StorageMerge::isRemote() const } -bool StorageMerge::mayBenefitFromIndexForIn(const ASTPtr & left_in_operand, const Context & query_context, const StorageMetadataPtr & /*metadata_snapshot*/) const +bool StorageMerge::mayBenefitFromIndexForIn(const ASTPtr & left_in_operand, ContextPtr query_context, const StorageMetadataPtr & /*metadata_snapshot*/) const { /// It's beneficial if it is true for at least one table.
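replace_where above is the heart of the new behaviour: instead of dropping WHERE/PREWHERE wholesale when the JOIN is stripped, it splits the predicate into AND-ed conjuncts and keeps those whose identifiers all belong to the left table. The same logic over plain strings, runnable; collectConjunctions and IdentifierMembershipCollector are simulated here by a vector and a callback:

```cpp
#include <functional>
#include <iostream>
#include <string>
#include <vector>

// Toy conjunct filter mirroring replace_where: split WHERE into AND-ed parts,
// keep only parts that belong to the left table, then rebuild the predicate.
std::string filterWhere(const std::vector<std::string> & conjuncts,
                        const std::function<bool(const std::string &)> & belongs_to_left_table)
{
    std::vector<std::string> kept;
    for (const auto & node : conjuncts)
        if (belongs_to_left_table(node))
            kept.push_back(node);

    if (kept.empty())
        return "";            // no usable predicate: drop WHERE entirely
    if (kept.size() == 1)
        return kept.front();  // single conjunct: lift it out of the AND

    std::string result = kept.front();
    for (size_t i = 1; i < kept.size(); ++i)
        result += " AND " + kept[i];
    return result;
}

int main()
{
    auto is_left = [](const std::string & e) { return e.find("t2.") == std::string::npos; };
    std::vector<std::string> conjuncts{"a > 1", "t2.b = 3", "c < 10"};
    std::cout << filterWhere(conjuncts, is_left) << '\n';  // a > 1 AND c < 10
}
```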
- StorageListWithLocks selected_tables = getSelectedTables( - query_context.getCurrentQueryId(), query_context.getSettingsRef()); + StorageListWithLocks selected_tables = getSelectedTables(query_context); size_t i = 0; for (const auto & table : selected_tables) @@ -148,10 +179,9 @@ bool StorageMerge::mayBenefitFromIndexForIn(const ASTPtr & left_in_operand, cons } -QueryProcessingStage::Enum StorageMerge::getQueryProcessingStage(const Context & context, QueryProcessingStage::Enum to_stage, SelectQueryInfo & query_info) const +QueryProcessingStage::Enum +StorageMerge::getQueryProcessingStage(ContextPtr local_context, QueryProcessingStage::Enum to_stage, SelectQueryInfo & query_info) const { - ASTPtr modified_query = query_info.query->clone(); - auto & modified_select = modified_query->as<ASTSelectQuery &>(); /// In case of JOIN the first stage (which includes JOIN) /// should be done on the initiator always. /// /// Since in case of JOIN query on shards will receive query without JOIN (and their columns). /// (see modifySelect()/removeJoin()) /// /// And for this we need to return FetchColumns. - if (removeJoin(modified_select)) + if (const auto * select = query_info.query->as<ASTSelectQuery>(); select && hasJoin(*select)) return QueryProcessingStage::FetchColumns; auto stage_in_source_tables = QueryProcessingStage::FetchColumns; - DatabaseTablesIteratorPtr iterator = getDatabaseIterator(context); + DatabaseTablesIteratorPtr iterator = getDatabaseIterator(local_context); size_t selected_table_size = 0; @@ -174,7 +204,7 @@ QueryProcessingStage::Enum StorageMerge::getQueryProcessingStage(const Context & if (table && table.get() != this) { ++selected_table_size; - stage_in_source_tables = std::max(stage_in_source_tables, table->getQueryProcessingStage(context, to_stage, query_info)); + stage_in_source_tables = std::max(stage_in_source_tables, table->getQueryProcessingStage(local_context, to_stage, query_info)); } iterator->next(); @@ -188,7 +218,7 @@ Pipe StorageMerge::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr local_context, QueryProcessingStage::Enum processed_stage, const size_t max_block_size, unsigned num_streams) @@ -210,17 +240,16 @@ Pipe StorageMerge::read( /** Just in case, turn off optimization "transfer to PREWHERE", * since there is no certainty that it works when one of table is MergeTree and other is not. */ - auto modified_context = std::make_shared<Context>(context); + auto modified_context = Context::createCopy(local_context); modified_context->setSetting("optimize_move_to_prewhere", false); /// What will be result structure depending on query processed stage in source tables? - Block header = getHeaderForProcessingStage(*this, column_names, metadata_snapshot, query_info, context, processed_stage); + Block header = getHeaderForProcessingStage(*this, column_names, metadata_snapshot, query_info, local_context, processed_stage); /** First we make list of selected tables to find out its size. * This is necessary to correctly pass the recommended number of threads to each table. */ - StorageListWithLocks selected_tables = getSelectedTables( - query_info.query, has_table_virtual_column, context.getCurrentQueryId(), context.getSettingsRef()); + StorageListWithLocks selected_tables = getSelectedTables(local_context, query_info.query, has_table_virtual_column); if (selected_tables.empty()) /// FIXME: do we support sampling in this case?
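StorageMerge::read() above copies the caller's context with Context::createCopy() and switches off optimize_move_to_prewhere only in the copy, leaving the original query context untouched. A toy context showing that copy-then-override pattern; the class itself is invented and only the call shape mirrors the diff:

```cpp
#include <iostream>
#include <map>
#include <memory>
#include <string>

class Context
{
public:
    static std::shared_ptr<Context> createCopy(const std::shared_ptr<const Context> & other)
    {
        return std::make_shared<Context>(*other);  // the original context is left untouched
    }
    void setSetting(const std::string & name, bool value) { settings[name] = value; }
    bool getSetting(const std::string & name) const { return settings.at(name); }
private:
    std::map<std::string, bool> settings{{"optimize_move_to_prewhere", true}};
};

int main()
{
    std::shared_ptr<const Context> local_context = std::make_shared<Context>();
    auto modified_context = Context::createCopy(local_context);
    modified_context->setSetting("optimize_move_to_prewhere", false);

    std::cout << local_context->getSetting("optimize_move_to_prewhere") << ' '       // 1
              << modified_context->getSetting("optimize_move_to_prewhere") << '\n';  // 0
}
```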
@@ -228,7 +257,8 @@ Pipe StorageMerge::read( {}, query_info, processed_stage, max_block_size, header, {}, real_column_names, modified_context, 0, has_table_virtual_column); size_t tables_count = selected_tables.size(); - Float64 num_streams_multiplier = std::min(unsigned(tables_count), std::max(1U, unsigned(context.getSettingsRef().max_streams_multiplier_for_merge_tables))); + Float64 num_streams_multiplier + = std::min(unsigned(tables_count), std::max(1U, unsigned(local_context->getSettingsRef().max_streams_multiplier_for_merge_tables))); num_streams *= num_streams_multiplier; size_t remaining_streams = num_streams; @@ -239,7 +269,7 @@ Pipe StorageMerge::read( { auto storage_ptr = std::get<0>(*it); auto storage_metadata_snapshot = storage_ptr->getInMemoryMetadataPtr(); - auto current_info = query_info.order_optimizer->getInputOrder(storage_metadata_snapshot, context); + auto current_info = query_info.order_optimizer->getInputOrder(storage_metadata_snapshot, local_context); if (it == selected_tables.begin()) input_sorting_info = current_info; else if (!current_info || (input_sorting_info && *current_info != *input_sorting_info)) @@ -293,7 +323,7 @@ Pipe StorageMerge::createSources( const Block & header, const StorageWithLockAndName & storage_with_lock, Names & real_column_names, - const std::shared_ptr<Context> & modified_context, + ContextPtr modified_context, size_t streams_num, bool has_table_virtual_column, bool concat_streams) @@ -304,7 +334,8 @@ Pipe StorageMerge::createSources( /// Original query could contain JOIN but we need only the first joined table and its columns. auto & modified_select = modified_query_info.query->as<ASTSelectQuery &>(); - modifySelect(modified_select, *query_info.syntax_analyzer_result); + auto new_analyzer_res = modifySelect(modified_select, *query_info.syntax_analyzer_result, modified_context); + modified_query_info.syntax_analyzer_result = std::make_shared<TreeRewriterResult>(std::move(new_analyzer_res)); VirtualColumnUtils::rewriteEntityInAst(modified_query_info.query, "_table", table_name); @@ -313,7 +344,7 @@ Pipe StorageMerge::createSources( if (!storage) { pipe = QueryPipeline::getPipe(InterpreterSelectQuery( - modified_query_info.query, *modified_context, + modified_query_info.query, modified_context, std::make_shared<OneBlockInputStream>(header), SelectQueryOptions(processed_stage).analyze()).execute().pipeline); @@ -321,15 +352,21 @@ return pipe; } - auto storage_stage = storage->getQueryProcessingStage(*modified_context, QueryProcessingStage::Complete, modified_query_info); + auto storage_stage = storage->getQueryProcessingStage(modified_context, QueryProcessingStage::Complete, modified_query_info); if (processed_stage <= storage_stage) { /// If there are only virtual columns in query, you must request at least one other column.
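The `processed_stage <= storage_stage` comparison above decides, per source table, whether createSources() can simply read columns from the table or must run a local InterpreterSelectQuery to finish the query (the branch bodies continue below). Reduced to a compilable decision function; the real enum in src/Core/QueryProcessingStage.h has more values:

```cpp
#include <iostream>

// Stage values are ordered by how much of the query has already been executed.
enum QueryProcessingStage { FetchColumns = 0, WithMergeableState = 1, Complete = 2 };

// Sketch of the dispatch: if the caller needs no more than the source table can
// produce, read from the table; otherwise run a local select on top of it.
const char * chooseSourcePlan(QueryProcessingStage processed_stage, QueryProcessingStage storage_stage)
{
    if (processed_stage <= storage_stage)
        return "storage->read(...)";
    return "InterpreterSelectQuery(...).execute()";
}

int main()
{
    std::cout << chooseSourcePlan(FetchColumns, Complete) << '\n';  // table pipeline is enough
    std::cout << chooseSourcePlan(Complete, FetchColumns) << '\n';  // finish the query locally
}
```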
if (real_column_names.empty()) real_column_names.push_back(ExpressionActions::getSmallestColumn(metadata_snapshot->getColumns().getAllPhysical())); - - pipe = storage->read(real_column_names, metadata_snapshot, modified_query_info, *modified_context, processed_stage, max_block_size, UInt32(streams_num)); + pipe = storage->read( + real_column_names, + metadata_snapshot, + modified_query_info, + modified_context, + processed_stage, + max_block_size, + UInt32(streams_num)); } else if (processed_stage > storage_stage) { @@ -339,7 +376,7 @@ Pipe StorageMerge::createSources( modified_context->setSetting("max_threads", streams_num); modified_context->setSetting("max_streams_to_max_threads_ratio", 1); - InterpreterSelectQuery interpreter{modified_query_info.query, *modified_context, SelectQueryOptions(processed_stage)}; + InterpreterSelectQuery interpreter{modified_query_info.query, modified_context, SelectQueryOptions(processed_stage)}; pipe = QueryPipeline::getPipe(interpreter.execute().pipeline); @@ -366,7 +403,9 @@ Pipe StorageMerge::createSources( column.column = column.type->createColumnConst(0, Field(table_name)); auto adding_column_dag = ActionsDAG::makeAddingColumnActions(std::move(column)); - auto adding_column_actions = std::make_shared<ExpressionActions>(std::move(adding_column_dag)); + auto adding_column_actions = std::make_shared<ExpressionActions>( + std::move(adding_column_dag), + ExpressionActionsSettings::fromContext(modified_context)); pipe.addSimpleTransform([&](const Block & stream_header) { @@ -376,7 +415,7 @@ Pipe StorageMerge::createSources( /// Subordinary tables could have different but convertible types, like numeric types of different width. /// We must return streams with structure equals to structure of Merge table. - convertingSourceStream(header, metadata_snapshot, *modified_context, modified_query_info.query, pipe, processed_stage); + convertingSourceStream(header, metadata_snapshot, modified_context, modified_query_info.query, pipe, processed_stage); pipe.addTableLock(struct_lock); pipe.addStorageHolder(storage); @@ -386,33 +425,20 @@ Pipe StorageMerge::createSources( return pipe; } - -StorageMerge::StorageListWithLocks StorageMerge::getSelectedTables(const String & query_id, const Settings & settings) const -{ - StorageListWithLocks selected_tables; - auto iterator = getDatabaseIterator(global_context); - - while (iterator->isValid()) - { - const auto & table = iterator->table(); - if (table && table.get() != this) - selected_tables.emplace_back( - table, table->lockForShare(query_id, settings.lock_acquire_timeout), iterator->name()); - - iterator->next(); - } - - return selected_tables; -} - - StorageMerge::StorageListWithLocks StorageMerge::getSelectedTables( - const ASTPtr & query, bool has_virtual_column, const String & query_id, const Settings & settings) const + ContextPtr query_context, + const ASTPtr & query /* = nullptr */, + bool filter_by_virtual_column /* = false */) const { - StorageListWithLocks selected_tables; - DatabaseTablesIteratorPtr iterator = getDatabaseIterator(global_context); + assert(!filter_by_virtual_column || query); - auto virtual_column = ColumnString::create(); + const Settings & settings = query_context->getSettingsRef(); + StorageListWithLocks selected_tables; + DatabaseTablesIteratorPtr iterator = getDatabaseIterator(getContext()); + + MutableColumnPtr table_name_virtual_column; + if (filter_by_virtual_column) + table_name_virtual_column = ColumnString::create(); while (iterator->isValid()) { @@ -425,18 +451,20 @@ StorageMerge::StorageListWithLocks
StorageMerge::getSelectedTables( if (storage.get() != this) { - selected_tables.emplace_back( - storage, storage->lockForShare(query_id, settings.lock_acquire_timeout), iterator->name()); - virtual_column->insert(iterator->name()); + auto table_lock = storage->lockForShare(query_context->getCurrentQueryId(), settings.lock_acquire_timeout); + selected_tables.emplace_back(storage, std::move(table_lock), iterator->name()); + if (filter_by_virtual_column) + table_name_virtual_column->insert(iterator->name()); } iterator->next(); } - if (has_virtual_column) + if (filter_by_virtual_column) { - Block virtual_columns_block = Block{ColumnWithTypeAndName(std::move(virtual_column), std::make_shared<DataTypeString>(), "_table")}; - VirtualColumnUtils::filterBlockWithQuery(query, virtual_columns_block, global_context); + /// Filter names of selected tables if there is a condition on "_table" virtual column in WHERE clause + Block virtual_columns_block = Block{ColumnWithTypeAndName(std::move(table_name_virtual_column), std::make_shared<DataTypeString>(), "_table")}; + VirtualColumnUtils::filterBlockWithQuery(query, virtual_columns_block, query_context); auto values = VirtualColumnUtils::extractSingleValueFromBlock(virtual_columns_block, "_table"); /// Remove unused tables from the list @@ -446,8 +474,7 @@ StorageMerge::StorageListWithLocks StorageMerge::getSelectedTables( return selected_tables; } - -DatabaseTablesIteratorPtr StorageMerge::getDatabaseIterator(const Context & context) const +DatabaseTablesIteratorPtr StorageMerge::getDatabaseIterator(ContextPtr local_context) const { try { @@ -469,13 +496,13 @@ DatabaseTablesIteratorPtr StorageMerge::getDatabaseIterator(const Context & cont return source_table_regexp->match(table_name_); }; - return database->getTablesIterator(context, table_name_match); + return database->getTablesIterator(local_context, table_name_match); } -void StorageMerge::checkAlterIsPossible(const AlterCommands & commands, const Context & context) const +void StorageMerge::checkAlterIsPossible(const AlterCommands & commands, ContextPtr local_context) const { - auto name_deps = getDependentViewsByColumn(context); + auto name_deps = getDependentViewsByColumn(local_context); for (const auto & command : commands) { if (command.type != AlterCommand::Type::ADD_COLUMN && command.type != AlterCommand::Type::MODIFY_COLUMN @@ -483,7 +510,7 @@ void StorageMerge::checkAlterIsPossible(const AlterCommands & commands, const Co throw Exception( "Alter of type '" + alterTypeToString(command.type) + "' is not supported by storage " + getName(), ErrorCodes::NOT_IMPLEMENTED); - if (command.type == AlterCommand::Type::DROP_COLUMN) + if (command.type == AlterCommand::Type::DROP_COLUMN && !command.clear) { const auto & deps_mv = name_deps[command.column_name]; if (!deps_mv.empty()) @@ -498,20 +525,20 @@ void StorageMerge::checkAlterIsPossible(const AlterCommands & commands, const Co } void StorageMerge::alter( - const AlterCommands & params, const Context & context, TableLockHolder &) + const AlterCommands & params, ContextPtr local_context, TableLockHolder &) { auto table_id = getStorageID(); StorageInMemoryMetadata storage_metadata = getInMemoryMetadata(); - params.apply(storage_metadata, context); - DatabaseCatalog::instance().getDatabase(table_id.database_name)->alterTable(context, table_id, storage_metadata); + params.apply(storage_metadata, local_context); + DatabaseCatalog::instance().getDatabase(table_id.database_name)->alterTable(local_context, table_id, storage_metadata); setInMemoryMetadata(storage_metadata); } void
StorageMerge::convertingSourceStream( const Block & header, const StorageMetadataPtr & metadata_snapshot, - const Context & context, + ContextPtr local_context, ASTPtr & query, Pipe & pipe, QueryProcessingStage::Enum processed_stage) @@ -522,7 +549,7 @@ void StorageMerge::convertingSourceStream( pipe.getHeader().getColumnsWithTypeAndName(), header.getColumnsWithTypeAndName(), ActionsDAG::MatchColumnsMode::Name); - auto convert_actions = std::make_shared<ExpressionActions>(convert_actions_dag); + auto convert_actions = std::make_shared<ExpressionActions>(convert_actions_dag, ExpressionActionsSettings::fromContext(local_context)); pipe.addSimpleTransform([&](const Block & stream_header) { @@ -546,8 +573,8 @@ void StorageMerge::convertingSourceStream( NamesAndTypesList source_columns = metadata_snapshot->getSampleBlock().getNamesAndTypesList(); auto virtual_column = *getVirtuals().tryGetByName("_table"); source_columns.emplace_back(NameAndTypePair{virtual_column.name, virtual_column.type}); - auto syntax_result = TreeRewriter(context).analyze(where_expression, source_columns); - ExpressionActionsPtr actions = ExpressionAnalyzer{where_expression, syntax_result, context}.getActions(false, false); + auto syntax_result = TreeRewriter(local_context).analyze(where_expression, source_columns); + ExpressionActionsPtr actions = ExpressionAnalyzer{where_expression, syntax_result, local_context}.getActions(false, false); Names required_columns = actions->getRequiredColumns(); for (const auto & required_column : required_columns) @@ -584,15 +611,15 @@ void registerStorageMerge(StorageFactory & factory) " - name of source database and regexp for table names.", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); - engine_args[0] = evaluateConstantExpressionForDatabaseName(engine_args[0], args.local_context); - engine_args[1] = evaluateConstantExpressionAsLiteral(engine_args[1], args.local_context); + engine_args[0] = evaluateConstantExpressionForDatabaseName(engine_args[0], args.getLocalContext()); + engine_args[1] = evaluateConstantExpressionAsLiteral(engine_args[1], args.getLocalContext()); String source_database = engine_args[0]->as<ASTLiteral &>().value.safeGet<String>(); String table_name_regexp = engine_args[1]->as<ASTLiteral &>().value.safeGet<String>(); return StorageMerge::create( args.table_id, args.columns, - source_database, table_name_regexp, args.context); + source_database, table_name_regexp, args.getContext()); }); } diff --git a/src/Storages/StorageMerge.h b/src/Storages/StorageMerge.h index eaffd34a379..ff016952686 100644 --- a/src/Storages/StorageMerge.h +++ b/src/Storages/StorageMerge.h @@ -12,7 +12,7 @@ namespace DB /** A table that represents the union of an arbitrary number of other tables. * All tables must have the same structure.
*/ -class StorageMerge final : public ext::shared_ptr_helper<StorageMerge>, public IStorage +class StorageMerge final : public ext::shared_ptr_helper<StorageMerge>, public IStorage, WithContext { friend struct ext::shared_ptr_helper<StorageMerge>; public: @@ -27,44 +27,41 @@ public: bool supportsIndexForIn() const override { return true; } bool supportsSubcolumns() const override { return true; } - QueryProcessingStage::Enum getQueryProcessingStage(const Context &, QueryProcessingStage::Enum /*to_stage*/, SelectQueryInfo &) const override; + QueryProcessingStage::Enum getQueryProcessingStage(ContextPtr, QueryProcessingStage::Enum /*to_stage*/, SelectQueryInfo &) const override; Pipe read( const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; - void checkAlterIsPossible(const AlterCommands & commands, const Context & context) const override; + void checkAlterIsPossible(const AlterCommands & commands, ContextPtr context) const override; /// you need to add and remove columns in the sub-tables manually /// the structure of sub-tables is not checked - void alter(const AlterCommands & params, const Context & context, TableLockHolder & table_lock_holder) override; + void alter(const AlterCommands & params, ContextPtr context, TableLockHolder & table_lock_holder) override; bool mayBenefitFromIndexForIn( - const ASTPtr & left_in_operand, const Context & query_context, const StorageMetadataPtr & metadata_snapshot) const override; + const ASTPtr & left_in_operand, ContextPtr query_context, const StorageMetadataPtr & metadata_snapshot) const override; private: String source_database; std::optional<std::unordered_set<String>> source_tables; std::optional<OptimizedRegularExpression> source_table_regexp; - const Context & global_context; using StorageWithLockAndName = std::tuple<StoragePtr, TableLockHolder, String>; using StorageListWithLocks = std::list<StorageWithLockAndName>; - StorageListWithLocks getSelectedTables(const String & query_id, const Settings & settings) const; - StorageMerge::StorageListWithLocks getSelectedTables( - const ASTPtr & query, bool has_virtual_column, const String & query_id, const Settings & settings) const; + ContextPtr query_context, const ASTPtr & query = nullptr, bool filter_by_virtual_column = false) const; template <typename F> StoragePtr getFirstTable(F && predicate) const; - DatabaseTablesIteratorPtr getDatabaseIterator(const Context & context) const; + DatabaseTablesIteratorPtr getDatabaseIterator(ContextPtr context) const; NamesAndTypesList getVirtuals() const override; ColumnSizeByName getColumnSizes() const override; @@ -75,31 +72,31 @@ protected: const ColumnsDescription & columns_, const String & source_database_, const Strings & source_tables_, - const Context & context_); + ContextPtr context_); StorageMerge( const StorageID & table_id_, const ColumnsDescription & columns_, const String & source_database_, const String & source_table_regexp_, - const Context & context_); + ContextPtr context_); Pipe createSources( const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, const QueryProcessingStage::Enum & processed_stage, - const UInt64 max_block_size, + UInt64 max_block_size, const Block & header, const StorageWithLockAndName & storage_with_lock, Names & real_column_names, - const std::shared_ptr<Context> & modified_context, + ContextPtr modified_context, size_t streams_num, bool has_table_virtual_column, bool concat_streams = false); void convertingSourceStream( const Block & header, const StorageMetadataPtr &
metadata_snapshot, - const Context & context, ASTPtr & query, + ContextPtr context, ASTPtr & query, Pipe & pipe, QueryProcessingStage::Enum processed_stage); }; diff --git a/src/Storages/StorageMergeTree.cpp b/src/Storages/StorageMergeTree.cpp index c8f44c78e6e..b3165febd7c 100644 --- a/src/Storages/StorageMergeTree.cpp +++ b/src/Storages/StorageMergeTree.cpp @@ -25,6 +25,8 @@ #include #include #include +#include +#include #include namespace CurrentMetrics @@ -61,7 +63,7 @@ StorageMergeTree::StorageMergeTree( const String & relative_data_path_, const StorageInMemoryMetadata & metadata_, bool attach, - Context & context_, + ContextPtr context_, const String & date_column_name, const MergingParams & merging_params_, std::unique_ptr<MergeTreeSettings> storage_settings_, @@ -78,9 +80,9 @@ StorageMergeTree::StorageMergeTree( attach) , reader(*this) , writer(*this) - , merger_mutator(*this, global_context.getSettingsRef().background_pool_size) - , background_executor(*this, global_context) - , background_moves_executor(*this, global_context) + , merger_mutator(*this, getContext()->getSettingsRef().background_pool_size) + , background_executor(*this, getContext()) + , background_moves_executor(*this, getContext()) { loadDataParts(has_force_restore_data_flag); @@ -91,6 +93,8 @@ StorageMergeTree::StorageMergeTree( increment.set(getMaxBlockNumber()); loadMutations(); + + loadDeduplicationLog(); } @@ -178,12 +182,12 @@ void StorageMergeTree::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr local_context, QueryProcessingStage::Enum /*processed_stage*/, size_t max_block_size, unsigned num_streams) { - if (auto plan = reader.read(column_names, metadata_snapshot, query_info, context, max_block_size, num_streams)) + if (auto plan = reader.read(column_names, metadata_snapshot, query_info, local_context, max_block_size, num_streams)) query_plan = std::move(*plan); } @@ -191,14 +195,16 @@ Pipe StorageMergeTree::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr local_context, QueryProcessingStage::Enum processed_stage, const size_t max_block_size, const unsigned num_streams) { QueryPlan plan; - read(plan, column_names, metadata_snapshot, query_info, context, processed_stage, max_block_size, num_streams); - return plan.convertToPipe(QueryPlanOptimizationSettings(context.getSettingsRef())); + read(plan, column_names, metadata_snapshot, query_info, local_context, processed_stage, max_block_size, num_streams); + return plan.convertToPipe( + QueryPlanOptimizationSettings::fromContext(local_context), + BuildQueryPipelineSettings::fromContext(local_context)); } std::optional<UInt64> StorageMergeTree::totalRows(const Settings &) const @@ -206,20 +212,10 @@ std::optional<UInt64> StorageMergeTree::totalRows(const Settings &) const return getTotalActiveSizeInRows(); } -std::optional<UInt64> StorageMergeTree::totalRowsByPartitionPredicate(const SelectQueryInfo & query_info, const Context & context) const +std::optional<UInt64> StorageMergeTree::totalRowsByPartitionPredicate(const SelectQueryInfo & query_info, ContextPtr local_context) const { - auto metadata_snapshot = getInMemoryMetadataPtr(); - PartitionPruner partition_pruner(metadata_snapshot->getPartitionKey(), query_info, context, true /* strict */); - if (partition_pruner.isUseless()) - return {}; - size_t res = 0; - auto lock = lockParts(); - for (const auto & part : getDataPartsStateRange(DataPartState::Committed))
- { - if (!partition_pruner.canBePruned(part)) - res += part->rows_count; - } - return res; + auto parts = getDataPartsVector({DataPartState::Committed}); + return totalRowsByPartitionPredicateImpl(query_info, local_context, parts); } std::optional<UInt64> StorageMergeTree::totalBytes(const Settings &) const @@ -227,18 +223,18 @@ std::optional<UInt64> StorageMergeTree::totalBytes(const Settings &) const return getTotalActiveSizeInBytes(); } -BlockOutputStreamPtr StorageMergeTree::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, const Context & context) +BlockOutputStreamPtr +StorageMergeTree::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, ContextPtr local_context) { - - const auto & settings = context.getSettingsRef(); + const auto & settings = local_context->getSettingsRef(); return std::make_shared<MergeTreeBlockOutputStream>( - *this, metadata_snapshot, settings.max_partitions_per_insert_block, context.getSettingsRef().optimize_on_insert); + *this, metadata_snapshot, settings.max_partitions_per_insert_block, local_context->getSettingsRef().optimize_on_insert); } void StorageMergeTree::checkTableCanBeDropped() const { auto table_id = getStorageID(); - global_context.checkTableCanBeDropped(table_id.database_name, table_id.table_name, getTotalActiveSizeInBytes()); + getContext()->checkTableCanBeDropped(table_id.database_name, table_id.table_name, getTotalActiveSizeInBytes()); } void StorageMergeTree::drop() @@ -247,7 +243,7 @@ void StorageMergeTree::drop() dropAllData(); } -void StorageMergeTree::truncate(const ASTPtr &, const StorageMetadataPtr &, const Context &, TableExclusiveLockHolder &) +void StorageMergeTree::truncate(const ASTPtr &, const StorageMetadataPtr &, ContextPtr, TableExclusiveLockHolder &) { { /// Asks to complete merges and does not allow them to start. @@ -267,24 +263,25 @@ void StorageMergeTree::truncate(const ASTPtr &, const StorageMetadataPtr &, cons void StorageMergeTree::alter( const AlterCommands & commands, - const Context & context, + ContextPtr local_context, TableLockHolder & table_lock_holder) { auto table_id = getStorageID(); + auto old_storage_settings = getSettings(); StorageInMemoryMetadata new_metadata = getInMemoryMetadata(); StorageInMemoryMetadata old_metadata = getInMemoryMetadata(); - auto maybe_mutation_commands = commands.getMutationCommands(new_metadata, context.getSettingsRef().materialize_ttl_after_modify, context); + auto maybe_mutation_commands = commands.getMutationCommands(new_metadata, local_context->getSettingsRef().materialize_ttl_after_modify, local_context); String mutation_file_name; Int64 mutation_version = -1; - commands.apply(new_metadata, context); + commands.apply(new_metadata, local_context); /// This alter can be performed at new_metadata level only if (commands.isSettingsAlter()) { changeSettings(new_metadata.settings_changes, table_lock_holder); - DatabaseCatalog::instance().getDatabase(table_id.database_name)->alterTable(context, table_id, new_metadata); + DatabaseCatalog::instance().getDatabase(table_id.database_name)->alterTable(local_context, table_id, new_metadata); } else { @@ -294,7 +291,7 @@ void StorageMergeTree::alter( /// Reinitialize primary key because primary key column types might have changed.
setProperties(new_metadata, old_metadata); - DatabaseCatalog::instance().getDatabase(table_id.database_name)->alterTable(context, table_id, new_metadata); + DatabaseCatalog::instance().getDatabase(table_id.database_name)->alterTable(local_context, table_id, new_metadata); if (!maybe_mutation_commands.empty()) mutation_version = startMutation(maybe_mutation_commands, mutation_file_name); @@ -305,6 +302,21 @@ void StorageMergeTree::alter( if (!maybe_mutation_commands.empty()) waitForMutation(mutation_version, mutation_file_name); } + + { + /// Some additional changes in settings + auto new_storage_settings = getSettings(); + + if (old_storage_settings->non_replicated_deduplication_window != new_storage_settings->non_replicated_deduplication_window) + { + /// We cannot place this check into settings sanityCheck because it depends on format_version. + /// sanityCheck must work even without storage. + if (new_storage_settings->non_replicated_deduplication_window != 0 && format_version < MERGE_TREE_DATA_MIN_FORMAT_VERSION_WITH_CUSTOM_PARTITIONING) + throw Exception("Deduplication for non-replicated MergeTree in old syntax is not supported", ErrorCodes::BAD_ARGUMENTS); + + deduplication_log->setDeduplicationWindowSize(new_storage_settings->non_replicated_deduplication_window); + } + } } @@ -332,12 +344,25 @@ StorageMergeTree::CurrentlyMergingPartsTagger::CurrentlyMergingPartsTagger( max_volume_index = std::max(max_volume_index, storage.getStoragePolicy()->getVolumeIndexByDisk(part_ptr->volume->getDisk())); } - reserved_space = storage.tryReserveSpacePreferringTTLRules(metadata_snapshot, total_size, ttl_infos, time(nullptr), max_volume_index); + reserved_space = storage.balancedReservation( + metadata_snapshot, + total_size, + max_volume_index, + future_part.name, + future_part.part_info, + future_part.parts, + &tagger, + &ttl_infos); + + if (!reserved_space) + reserved_space + = storage.tryReserveSpacePreferringTTLRules(metadata_snapshot, total_size, ttl_infos, time(nullptr), max_volume_index); } + if (!reserved_space) { if (is_mutation) - throw Exception("Not enough space for mutating part '" + future_part_.parts[0]->name + "'", ErrorCodes::NOT_ENOUGH_SPACE); + throw Exception("Not enough space for mutating part '" + future_part.parts[0]->name + "'", ErrorCodes::NOT_ENOUGH_SPACE); else throw Exception("Not enough space for merging parts", ErrorCodes::NOT_ENOUGH_SPACE); } @@ -454,12 +479,12 @@ void StorageMergeTree::waitForMutation(Int64 version, const String & file_name) LOG_INFO(log, "Mutation {} done", file_name); } -void StorageMergeTree::mutate(const MutationCommands & commands, const Context & query_context) +void StorageMergeTree::mutate(const MutationCommands & commands, ContextPtr query_context) { String mutation_file_name; Int64 version = startMutation(commands, mutation_file_name); - if (query_context.getSettingsRef().mutations_sync > 0) + if (query_context->getSettingsRef().mutations_sync > 0) waitForMutation(version, mutation_file_name); } @@ -593,7 +618,7 @@ CancellationCode StorageMergeTree::killMutation(const String & mutation_id) if (!to_kill) return CancellationCode::NotFound; - global_context.getMergeList().cancelPartMutations({}, to_kill->block_number); + getContext()->getMergeList().cancelPartMutations({}, to_kill->block_number); to_kill->removeFile(); LOG_TRACE(log, "Cancelled part mutations and removed mutation file {}", mutation_id); { @@ -607,6 +632,16 @@ CancellationCode StorageMergeTree::killMutation(const String & mutation_id) return CancellationCode::CancelSent; }
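The `non_replicated_deduplication_window` handling above behaves like a bounded log of recent block hashes: an INSERT whose block hash is already in the window is skipped, and shrinking the window evicts the oldest entries. The following is a minimal illustrative sketch of that idea only; the real MergeTreeDeduplicationLog (loaded in loadDeduplicationLog() below) additionally persists entries on disk and maps each hash to a part name so that dropPart() can evict it.

#include <cstddef>
#include <deque>
#include <string>
#include <unordered_set>

class DedupWindow
{
public:
    explicit DedupWindow(size_t window_size_) : window_size(window_size_) {}

    /// Returns true if the block is new and was recorded, false if it is
    /// a duplicate within the window and the insert should be skipped.
    bool tryRecord(const std::string & block_hash)
    {
        if (window_size == 0)                 /// a zero window disables deduplication
            return true;
        if (!seen.insert(block_hash).second)
            return false;
        order.push_back(block_hash);
        evictOverflow();
        return true;
    }

    /// Mirrors setDeduplicationWindowSize(): shrinking drops the oldest hashes.
    void setWindowSize(size_t new_size)
    {
        window_size = new_size;
        evictOverflow();
    }

private:
    void evictOverflow()
    {
        while (order.size() > window_size)
        {
            seen.erase(order.front());
            order.pop_front();
        }
    }

    size_t window_size;
    std::deque<std::string> order;
    std::unordered_set<std::string> seen;
};

Keeping both a FIFO queue and a hash set makes the duplicate check O(1) while still allowing oldest-first eviction when the window overflows or is resized.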
+void StorageMergeTree::loadDeduplicationLog() +{ + auto settings = getSettings(); + if (settings->non_replicated_deduplication_window != 0 && format_version < MERGE_TREE_DATA_MIN_FORMAT_VERSION_WITH_CUSTOM_PARTITIONING) + throw Exception("Deduplication for non-replicated MergeTree in old syntax is not supported", ErrorCodes::BAD_ARGUMENTS); + + std::string path = getDataPaths()[0] + "/deduplication_logs"; + deduplication_log = std::make_unique<MergeTreeDeduplicationLog>(path, settings->non_replicated_deduplication_window, format_version); + deduplication_log->load(); +} void StorageMergeTree::loadMutations() { @@ -688,9 +723,10 @@ std::shared_ptr<StorageMergeTree::MergeMutateSelectedEntry> StorageMergeTree::se UInt64 disk_space = getStoragePolicy()->getMaxUnreservedFreeSpace(); select_decision = merger_mutator.selectAllPartsToMergeWithinPartition( future_part, disk_space, can_merge, partition_id, final, metadata_snapshot, out_disable_reason, optimize_skip_merged_partitions); + auto timeout_ms = getSettings()->lock_acquire_timeout_for_background_operations.totalMilliseconds(); + auto timeout = std::chrono::milliseconds(timeout_ms); /// If final - we will wait for currently processing merges to finish and continue. - /// TODO Respect query settings for timeout if (final && select_decision != SelectPartsDecision::SELECTED && !currently_merging_mutating_parts.empty() @@ -700,10 +736,9 @@ std::shared_ptr<StorageMergeTree::MergeMutateSelectedEntry> StorageMergeTree::se LOG_DEBUG(log, "Waiting for currently running merges ({} parts are merging right now) to perform OPTIMIZE FINAL", currently_merging_mutating_parts.size()); - if (std::cv_status::timeout == currently_processing_in_background_condition.wait_for( - lock, std::chrono::seconds(DBMS_DEFAULT_LOCK_ACQUIRE_TIMEOUT_SEC))) + if (std::cv_status::timeout == currently_processing_in_background_condition.wait_for(lock, timeout)) { - *out_disable_reason = "Timeout while waiting for already running merges before running OPTIMIZE with FINAL"; + *out_disable_reason = fmt::format("Timeout ({} ms) while waiting for already running merges before running OPTIMIZE with FINAL", timeout_ms); break; } } @@ -733,7 +768,7 @@ std::shared_ptr<StorageMergeTree::MergeMutateSelectedEntry> StorageMergeTree::se /// Account TTL merge here to avoid exceeding the max_number_of_merges_with_ttl_in_pool limit if (isTTLMergeType(future_part.merge_type)) - global_context.getMergeList().bookMergeWithTTL(); + getContext()->getMergeList().bookMergeWithTTL(); merging_tagger = std::make_unique<CurrentlyMergingPartsTagger>(future_part, MergeTreeDataMergerMutator::estimateNeededDiskSpace(future_part.parts), *this, metadata_snapshot, false); return std::make_shared<MergeMutateSelectedEntry>(future_part, std::move(merging_tagger), MutationCommands{}); } @@ -777,7 +812,7 @@ bool StorageMergeTree::mergeSelectedParts( MutableDataPartPtr new_part; auto table_id = getStorageID(); - auto merge_list_entry = global_context.getMergeList().insert(table_id.database_name, table_id.table_name, future_part); + auto merge_list_entry = getContext()->getMergeList().insert(table_id.database_name, table_id.table_name, future_part); auto write_part_log = [&] (const ExecutionStatus & execution_status) { @@ -795,7 +830,7 @@ bool StorageMergeTree::mergeSelectedParts( { new_part = merger_mutator.mergePartsToTemporaryPart( future_part, metadata_snapshot, *(merge_list_entry), table_lock_holder, time(nullptr), - global_context, merge_mutate_entry.tagger->reserved_space, deduplicate, deduplicate_by_columns); + getContext(), merge_mutate_entry.tagger->reserved_space, deduplicate, deduplicate_by_columns); merger_mutator.renameMergedTemporaryPart(new_part, future_part.parts, nullptr); write_part_log({}); @@ -819,7 +854,7 @@
std::shared_ptr<StorageMergeTree::MergeMutateSelectedEntry> StorageMergeTree::se const StorageMetadataPtr & metadata_snapshot, String * /* disable_reason */, TableLockHolder & /* table_lock_holder */) { std::lock_guard lock(currently_processing_in_background_mutex); - size_t max_ast_elements = global_context.getSettingsRef().max_expanded_ast_elements; + size_t max_ast_elements = getContext()->getSettingsRef().max_expanded_ast_elements; FutureMergedMutatedPart future_part; if (storage_settings.get()->assign_part_uuids) @@ -874,7 +909,7 @@ std::shared_ptr<StorageMergeTree::MergeMutateSelectedEntry> StorageMergeTree::se if (!commands_for_size_validation.empty()) { MutationsInterpreter interpreter( - shared_from_this(), metadata_snapshot, commands_for_size_validation, global_context, false); + shared_from_this(), metadata_snapshot, commands_for_size_validation, getContext(), false); commands_size += interpreter.evaluateCommandsSize(); } @@ -904,7 +939,7 @@ bool StorageMergeTree::mutateSelectedPart(const StorageMetadataPtr & metadata_sn auto & future_part = merge_mutate_entry.future_part; auto table_id = getStorageID(); - auto merge_list_entry = global_context.getMergeList().insert(table_id.database_name, table_id.table_name, future_part); + auto merge_list_entry = getContext()->getMergeList().insert(table_id.database_name, table_id.table_name, future_part); Stopwatch stopwatch; MutableDataPartPtr new_part; @@ -924,7 +959,7 @@ bool StorageMergeTree::mutateSelectedPart(const StorageMetadataPtr & metadata_sn { new_part = merger_mutator.mutatePartToTemporaryPart( future_part, metadata_snapshot, merge_mutate_entry.commands, *(merge_list_entry), - time(nullptr), global_context, merge_mutate_entry.tagger->reserved_space, table_lock_holder); + time(nullptr), getContext(), merge_mutate_entry.tagger->reserved_space, table_lock_holder); renameTempPartAndReplace(new_part); @@ -1049,7 +1084,7 @@ bool StorageMergeTree::optimize( bool final, bool deduplicate, const Names & deduplicate_by_columns, - const Context & context) + ContextPtr local_context) { if (deduplicate) { @@ -1070,14 +1105,21 @@ bool StorageMergeTree::optimize( for (const String & partition_id : partition_ids) { - if (!merge(true, partition_id, true, deduplicate, deduplicate_by_columns, &disable_reason, context.getSettingsRef().optimize_skip_merged_partitions)) + if (!merge( + true, + partition_id, + true, + deduplicate, + deduplicate_by_columns, + &disable_reason, + local_context->getSettingsRef().optimize_skip_merged_partitions)) { constexpr const char * message = "Cannot OPTIMIZE table: {}"; if (disable_reason.empty()) disable_reason = "unknown reason"; LOG_INFO(log, message, disable_reason); - if (context.getSettingsRef().optimize_throw_if_noop) + if (local_context->getSettingsRef().optimize_throw_if_noop) throw Exception(ErrorCodes::CANNOT_ASSIGN_OPTIMIZE, message, disable_reason); return false; } @@ -1087,16 +1129,23 @@ bool StorageMergeTree::optimize( { String partition_id; if (partition) - partition_id = getPartitionIDFromQuery(partition, context); + partition_id = getPartitionIDFromQuery(partition, local_context); - if (!merge(true, partition_id, final, deduplicate, deduplicate_by_columns, &disable_reason, context.getSettingsRef().optimize_skip_merged_partitions)) + if (!merge( + true, + partition_id, + final, + deduplicate, + deduplicate_by_columns, + &disable_reason, + local_context->getSettingsRef().optimize_skip_merged_partitions)) { constexpr const char * message = "Cannot OPTIMIZE table: {}"; if (disable_reason.empty()) disable_reason = "unknown reason"; LOG_INFO(log, message, disable_reason); - if
(context.getSettingsRef().optimize_throw_if_noop) + if (local_context->getSettingsRef().optimize_throw_if_noop) throw Exception(ErrorCodes::CANNOT_ASSIGN_OPTIMIZE, message, disable_reason); return false; } @@ -1164,7 +1213,7 @@ MergeTreeDataPartPtr StorageMergeTree::outdatePart(const String & part_name, boo } } -void StorageMergeTree::dropPartition(const ASTPtr & partition, bool detach, bool drop_part, const Context & context, bool throw_if_noop) +void StorageMergeTree::dropPartition(const ASTPtr & partition, bool detach, bool drop_part, ContextPtr local_context, bool throw_if_noop) { { MergeTreeData::DataPartsVector parts_to_remove; @@ -1184,7 +1233,7 @@ void StorageMergeTree::dropPartition(const ASTPtr & partition, bool detach, bool /// Asks to complete merges and does not allow them to start. /// This protects against "revival" of data for a removed partition after completion of merge. auto merge_blocker = stopMergesAndWait(); - String partition_id = getPartitionIDFromQuery(partition, context); + String partition_id = getPartitionIDFromQuery(partition, local_context); parts_to_remove = getDataPartsVectorInPartition(MergeTreeDataPartState::Committed, partition_id); /// TODO should we throw an exception if parts_to_remove is empty? @@ -1202,6 +1251,12 @@ void StorageMergeTree::dropPartition(const ASTPtr & partition, bool detach, bool } } + if (deduplication_log) + { + for (const auto & part : parts_to_remove) + deduplication_log->dropPart(part->info); + } + if (detach) LOG_INFO(log, "Detached {} parts.", parts_to_remove.size()); else @@ -1214,11 +1269,11 @@ void StorageMergeTree::dropPartition(const ASTPtr & partition, bool detach, bool PartitionCommandsResultInfo StorageMergeTree::attachPartition( const ASTPtr & partition, const StorageMetadataPtr & /* metadata_snapshot */, - bool attach_part, const Context & context) + bool attach_part, ContextPtr local_context) { PartitionCommandsResultInfo results; PartsTemporaryRename renamed_parts(*this, "detached/"); - MutableDataPartsVector loaded_parts = tryLoadPartsToAttach(partition, attach_part, context, renamed_parts); + MutableDataPartsVector loaded_parts = tryLoadPartsToAttach(partition, attach_part, local_context, renamed_parts); for (size_t i = 0; i < loaded_parts.size(); ++i) { @@ -1237,20 +1292,20 @@ PartitionCommandsResultInfo StorageMergeTree::attachPartition( } /// New parts with other data may appear in place of deleted parts. 
- context.dropCaches(); + local_context->dropCaches(); return results; } -void StorageMergeTree::replacePartitionFrom(const StoragePtr & source_table, const ASTPtr & partition, bool replace, const Context & context) +void StorageMergeTree::replacePartitionFrom(const StoragePtr & source_table, const ASTPtr & partition, bool replace, ContextPtr local_context) { - auto lock1 = lockForShare(context.getCurrentQueryId(), context.getSettingsRef().lock_acquire_timeout); - auto lock2 = source_table->lockForShare(context.getCurrentQueryId(), context.getSettingsRef().lock_acquire_timeout); + auto lock1 = lockForShare(local_context->getCurrentQueryId(), local_context->getSettingsRef().lock_acquire_timeout); + auto lock2 = source_table->lockForShare(local_context->getCurrentQueryId(), local_context->getSettingsRef().lock_acquire_timeout); auto source_metadata_snapshot = source_table->getInMemoryMetadataPtr(); auto my_metadata_snapshot = getInMemoryMetadataPtr(); Stopwatch watch; MergeTreeData & src_data = checkStructureAndGetMergeTreeData(source_table, source_metadata_snapshot, my_metadata_snapshot); - String partition_id = getPartitionIDFromQuery(partition, context); + String partition_id = getPartitionIDFromQuery(partition, local_context); DataPartsVector src_parts = src_data.getDataPartsVectorInPartition(MergeTreeDataPartState::Committed, partition_id); MutableDataPartsVector dst_parts; @@ -1305,19 +1360,19 @@ void StorageMergeTree::replacePartitionFrom(const StoragePtr & source_table, con removePartsInRangeFromWorkingSet(drop_range, true, false, data_parts_lock); } - PartLog::addNewParts(global_context, dst_parts, watch.elapsed()); + PartLog::addNewParts(getContext(), dst_parts, watch.elapsed()); } catch (...) { - PartLog::addNewParts(global_context, dst_parts, watch.elapsed(), ExecutionStatus::fromCurrentException()); + PartLog::addNewParts(getContext(), dst_parts, watch.elapsed(), ExecutionStatus::fromCurrentException()); throw; } } -void StorageMergeTree::movePartitionToTable(const StoragePtr & dest_table, const ASTPtr & partition, const Context & context) +void StorageMergeTree::movePartitionToTable(const StoragePtr & dest_table, const ASTPtr & partition, ContextPtr local_context) { - auto lock1 = lockForShare(context.getCurrentQueryId(), context.getSettingsRef().lock_acquire_timeout); - auto lock2 = dest_table->lockForShare(context.getCurrentQueryId(), context.getSettingsRef().lock_acquire_timeout); + auto lock1 = lockForShare(local_context->getCurrentQueryId(), local_context->getSettingsRef().lock_acquire_timeout); + auto lock2 = dest_table->lockForShare(local_context->getCurrentQueryId(), local_context->getSettingsRef().lock_acquire_timeout); auto dest_table_storage = std::dynamic_pointer_cast<StorageMergeTree>(dest_table); if (!dest_table_storage) @@ -1334,7 +1389,7 @@ void StorageMergeTree::movePartitionToTable(const StoragePtr & dest_table, const Stopwatch watch; MergeTreeData & src_data = dest_table_storage->checkStructureAndGetMergeTreeData(*this, metadata_snapshot, dest_metadata_snapshot); - String partition_id = getPartitionIDFromQuery(partition, context); + String partition_id = getPartitionIDFromQuery(partition, local_context); DataPartsVector src_parts = src_data.getDataPartsVectorInPartition(MergeTreeDataPartState::Committed, partition_id); MutableDataPartsVector dst_parts; @@ -1372,7 +1427,7 @@ void StorageMergeTree::movePartitionToTable(const StoragePtr & dest_table, const DataPartsLock lock(mutex); for (MutableDataPartPtr & part : dst_parts) - dest_table_storage->renameTempPartAndReplace(part,
&increment, &transaction, lock); + dest_table_storage->renameTempPartAndReplace(part, &dest_table_storage->increment, &transaction, lock); removePartsFromWorkingSet(src_parts, true, lock); transaction.commit(&lock); @@ -1381,11 +1436,11 @@ void StorageMergeTree::movePartitionToTable(const StoragePtr & dest_table, const clearOldMutations(true); clearOldPartsFromFilesystem(); - PartLog::addNewParts(global_context, dst_parts, watch.elapsed()); + PartLog::addNewParts(getContext(), dst_parts, watch.elapsed()); } catch (...) { - PartLog::addNewParts(global_context, dst_parts, watch.elapsed(), ExecutionStatus::fromCurrentException()); + PartLog::addNewParts(getContext(), dst_parts, watch.elapsed(), ExecutionStatus::fromCurrentException()); throw; } } @@ -1410,13 +1465,13 @@ void StorageMergeTree::onActionLockRemove(StorageActionBlockType action_type) background_moves_executor.triggerTask(); } -CheckResults StorageMergeTree::checkData(const ASTPtr & query, const Context & context) +CheckResults StorageMergeTree::checkData(const ASTPtr & query, ContextPtr local_context) { CheckResults results; DataPartsVector data_parts; if (const auto & check_query = query->as<ASTCheckQuery &>(); check_query.partition) { - String partition_id = getPartitionIDFromQuery(check_query.partition, context); + String partition_id = getPartitionIDFromQuery(check_query.partition, local_context); data_parts = getDataPartsVectorInPartition(MergeTreeDataPartState::Committed, partition_id); } else diff --git a/src/Storages/StorageMergeTree.h b/src/Storages/StorageMergeTree.h index 9dd62439814..2a50cb33912 100644 --- a/src/Storages/StorageMergeTree.h +++ b/src/Storages/StorageMergeTree.h @@ -12,6 +12,8 @@ #include #include #include +#include + #include #include #include @@ -40,7 +42,7 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; @@ -50,16 +52,16 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; std::optional<UInt64> totalRows(const Settings &) const override; - std::optional<UInt64> totalRowsByPartitionPredicate(const SelectQueryInfo &, const Context &) const override; + std::optional<UInt64> totalRowsByPartitionPredicate(const SelectQueryInfo &, ContextPtr) const override; std::optional<UInt64> totalBytes(const Settings &) const override; - BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, const Context & context) override; + BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, ContextPtr context) override; /** Perform the next step in combining the parts. */ @@ -70,9 +72,9 @@ public: bool final, bool deduplicate, const Names & deduplicate_by_columns, - const Context & context) override; + ContextPtr context) override; - void mutate(const MutationCommands & commands, const Context & context) override; + void mutate(const MutationCommands & commands, ContextPtr context) override; /// Return introspection information about currently processing or recently processed mutations.
std::vector<MergeTreeMutationStatus> getMutationsStatus() const override; @@ -80,9 +82,9 @@ public: CancellationCode killMutation(const String & mutation_id) override; void drop() override; - void truncate(const ASTPtr &, const StorageMetadataPtr &, const Context &, TableExclusiveLockHolder &) override; + void truncate(const ASTPtr &, const StorageMetadataPtr &, ContextPtr, TableExclusiveLockHolder &) override; - void alter(const AlterCommands & commands, const Context & context, TableLockHolder & table_lock_holder) override; + void alter(const AlterCommands & commands, ContextPtr context, TableLockHolder & table_lock_holder) override; void checkTableCanBeDropped() const override; @@ -90,9 +92,11 @@ public: void onActionLockRemove(StorageActionBlockType action_type) override; - CheckResults checkData(const ASTPtr & query, const Context & context) override; + CheckResults checkData(const ASTPtr & query, ContextPtr context) override; std::optional<JobAndPool> getDataProcessingJob() override; + + MergeTreeDeduplicationLog * getDeduplicationLog() { return deduplication_log.get(); } private: /// Mutex and condvar for synchronous mutations wait @@ -105,6 +109,8 @@ private: BackgroundJobsExecutor background_executor; BackgroundMovesExecutor background_moves_executor; + std::unique_ptr<MergeTreeDeduplicationLog> deduplication_log; + /// For block numbers. SimpleIncrement increment; @@ -128,6 +134,10 @@ private: void loadMutations(); + /// Load and initialize deduplication logs. Even if the deduplication setting + /// equals zero, creates an object with a deduplication window of zero. + void loadDeduplicationLog(); + /** Determines which parts should be merged and merges them. * If aggressive, the selection of parts does not take into account their size ratio or novelty (used for the OPTIMIZE query). * Returns true if the merge finished successfully. @@ -150,8 +160,9 @@ private: { FutureMergedMutatedPart future_part; ReservationPtr reserved_space; - StorageMergeTree & storage; + // Optional tagger to maintain volatile parts for the JBOD balancer + std::optional<CurrentlySubmergingEmergingTagger> tagger; CurrentlyMergingPartsTagger( FutureMergedMutatedPart & future_part_, @@ -200,11 +211,11 @@ private: void clearOldMutations(bool truncate = false); // Partition helpers - void dropPartition(const ASTPtr & partition, bool detach, bool drop_part, const Context & context, bool throw_if_noop) override; - PartitionCommandsResultInfo attachPartition(const ASTPtr & partition, const StorageMetadataPtr & metadata_snapshot, bool part, const Context & context) override; + void dropPartition(const ASTPtr & partition, bool detach, bool drop_part, ContextPtr context, bool throw_if_noop) override; + PartitionCommandsResultInfo attachPartition(const ASTPtr & partition, const StorageMetadataPtr & metadata_snapshot, bool part, ContextPtr context) override; - void replacePartitionFrom(const StoragePtr & source_table, const ASTPtr & partition, bool replace, const Context & context) override; - void movePartitionToTable(const StoragePtr & dest_table, const ASTPtr & partition, const Context & context) override; + void replacePartitionFrom(const StoragePtr & source_table, const ASTPtr & partition, bool replace, ContextPtr context) override; + void movePartitionToTable(const StoragePtr & dest_table, const ASTPtr & partition, ContextPtr context) override; bool partIsAssignedToBackgroundOperation(const DataPartPtr & part) const override; /// Update mutation entries after part mutation execution. May reset old /// errors if mutation was successful.
Otherwise update last_failed* fields @@ -238,7 +249,7 @@ protected: const String & relative_data_path_, const StorageInMemoryMetadata & metadata, bool attach, - Context & context_, + ContextPtr context_, const String & date_column_name, const MergingParams & merging_params_, std::unique_ptr<MergeTreeSettings> settings_, diff --git a/src/Storages/StorageMongoDB.cpp b/src/Storages/StorageMongoDB.cpp index 09fd413af75..2b0200f3643 100644 --- a/src/Storages/StorageMongoDB.cpp +++ b/src/Storages/StorageMongoDB.cpp @@ -74,7 +74,7 @@ Pipe StorageMongoDB::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & /*query_info*/, - const Context & /*context*/, + ContextPtr /*context*/, QueryProcessingStage::Enum /*processed_stage*/, size_t max_block_size, unsigned) @@ -106,7 +106,7 @@ void registerStorageMongoDB(StorageFactory & factory) ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); for (auto & engine_arg : engine_args) - engine_arg = evaluateConstantExpressionOrIdentifierAsLiteral(engine_arg, args.local_context); + engine_arg = evaluateConstantExpressionOrIdentifierAsLiteral(engine_arg, args.getLocalContext()); /// 27017 is the default MongoDB port. auto parsed_host_port = parseAddress(engine_args[0]->as<ASTLiteral &>().value.safeGet<String>(), 27017); diff --git a/src/Storages/StorageMongoDB.h b/src/Storages/StorageMongoDB.h index 589ab276539..5e96d1543a2 100644 --- a/src/Storages/StorageMongoDB.h +++ b/src/Storages/StorageMongoDB.h @@ -35,7 +35,7 @@ public: const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; diff --git a/src/Storages/StorageMySQL.cpp b/src/Storages/StorageMySQL.cpp index caac7c5d95e..35eb85e41d2 100644 --- a/src/Storages/StorageMySQL.cpp +++ b/src/Storages/StorageMySQL.cpp @@ -18,6 +18,7 @@ #include #include #include +#include namespace DB @@ -41,21 +42,21 @@ static String backQuoteMySQL(const String & x) StorageMySQL::StorageMySQL( const StorageID & table_id_, - mysqlxx::Pool && pool_, + mysqlxx::PoolWithFailover && pool_, const std::string & remote_database_name_, const std::string & remote_table_name_, const bool replace_query_, const std::string & on_duplicate_clause_, const ColumnsDescription & columns_, const ConstraintsDescription & constraints_, - const Context & context_) + ContextPtr context_) : IStorage(table_id_) + , WithContext(context_->getGlobalContext()) , remote_database_name(remote_database_name_) , remote_table_name(remote_table_name_) , replace_query{replace_query_} , on_duplicate_clause{on_duplicate_clause_} - , pool(std::move(pool_)) - , global_context(context_.getGlobalContext()) + , pool(std::make_shared<mysqlxx::PoolWithFailover>(pool_)) { StorageInMemoryMetadata storage_metadata; storage_metadata.setColumns(columns_); @@ -68,9 +69,9 @@ Pipe StorageMySQL::read( const Names & column_names_, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info_, - const Context & context_, + ContextPtr context_, QueryProcessingStage::Enum /*processed_stage*/, - size_t max_block_size_, + size_t /*max_block_size*/, unsigned) { metadata_snapshot->check(column_names_, getVirtuals(), getStorageID()); @@ -94,9 +95,10 @@ Pipe StorageMySQL::read( sample_block.insert({ column_data.type, column_data.name }); } - /// TODO: rewrite MySQLBlockInputStream + + StreamSettings mysql_input_stream_settings(context_->getSettingsRef(), true, false); return Pipe(std::make_shared<SourceFromInputStream>( - std::make_shared<MySQLBlockInputStream>(pool,
query, sample_block, max_block_size_, /* auto_close = */ true))); + std::make_shared<MySQLWithFailoverBlockInputStream>(pool, query, sample_block, mysql_input_stream_settings))); } @@ -144,10 +146,12 @@ public: { WriteBufferFromOwnString sqlbuf; sqlbuf << (storage.replace_query ? "REPLACE" : "INSERT") << " INTO "; - sqlbuf << backQuoteMySQL(remote_database_name) << "." << backQuoteMySQL(remote_table_name); + if (!remote_database_name.empty()) + sqlbuf << backQuoteMySQL(remote_database_name) << "."; + sqlbuf << backQuoteMySQL(remote_table_name); sqlbuf << " (" << dumpNamesWithBackQuote(block) << ") VALUES "; - auto writer = FormatFactory::instance().getOutputStream("Values", sqlbuf, metadata_snapshot->getSampleBlock(), storage.global_context); + auto writer = FormatFactory::instance().getOutputStream("Values", sqlbuf, metadata_snapshot->getSampleBlock(), storage.getContext()); writer->write(block); if (!storage.on_duplicate_clause.empty()) @@ -211,9 +215,15 @@ private: }; -BlockOutputStreamPtr StorageMySQL::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, const Context & context) +BlockOutputStreamPtr StorageMySQL::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, ContextPtr local_context) { - return std::make_shared<StorageMySQLBlockOutputStream>(*this, metadata_snapshot, remote_database_name, remote_table_name, pool.get(), context.getSettingsRef().mysql_max_rows_to_insert); + return std::make_shared<StorageMySQLBlockOutputStream>( + *this, + metadata_snapshot, + remote_database_name, + remote_table_name, + pool->get(), + local_context->getSettingsRef().mysql_max_rows_to_insert); } void registerStorageMySQL(StorageFactory & factory) @@ -224,21 +234,22 @@ void registerStorageMySQL(StorageFactory & factory) if (engine_args.size() < 5 || engine_args.size() > 7) throw Exception( - "Storage MySQL requires 5-7 parameters: MySQL('host:port', database, table, 'user', 'password'[, replace_query, 'on_duplicate_clause']).", + "Storage MySQL requires 5-7 parameters: MySQL('host:port' (or 'addresses_pattern'), database, table, 'user', 'password'[, replace_query, 'on_duplicate_clause']).", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); for (auto & engine_arg : engine_args) - engine_arg = evaluateConstantExpressionOrIdentifierAsLiteral(engine_arg, args.local_context); + engine_arg = evaluateConstantExpressionOrIdentifierAsLiteral(engine_arg, args.getLocalContext()); /// 3306 is the default MySQL port.
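The change below swaps the single-address parseAddress() for parseRemoteDescriptionForExternalDatabase(), so the first engine argument may describe several replicas feeding a mysqlxx::PoolWithFailover. A rough sketch of the expansion step, assuming only plain comma-separated host[:port] entries (the real helper also expands {a|b} alternatives and caps the result at glob_expansion_max_elements); the name parseAddressList is illustrative:

#include <cstdint>
#include <stdexcept>
#include <string>
#include <utility>
#include <vector>

/// Expand "host1:3306,host2" into (host, port) pairs with a default port.
std::vector<std::pair<std::string, uint16_t>> parseAddressList(
    const std::string & description, size_t max_addresses, uint16_t default_port)
{
    std::vector<std::pair<std::string, uint16_t>> addresses;
    size_t pos = 0;
    while (true)
    {
        size_t comma = description.find(',', pos);
        std::string token = description.substr(
            pos, comma == std::string::npos ? std::string::npos : comma - pos);

        size_t colon = token.rfind(':');
        if (colon == std::string::npos)
            addresses.emplace_back(token, default_port);  /// no port given: use default
        else
            addresses.emplace_back(token.substr(0, colon),
                                   static_cast<uint16_t>(std::stoi(token.substr(colon + 1))));

        if (addresses.size() > max_addresses)
            throw std::runtime_error("Too many replica addresses in description");
        if (comma == std::string::npos)
            break;
        pos = comma + 1;
    }
    return addresses;
}

Expanding the description once at table-registration time keeps the pool itself simple: it only ever sees a flat list of replica endpoints to try in order.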
- auto parsed_host_port = parseAddress(engine_args[0]->as<ASTLiteral &>().value.safeGet<String>(), 3306); - + const String & host_port = engine_args[0]->as<ASTLiteral &>().value.safeGet<String>(); const String & remote_database = engine_args[1]->as<ASTLiteral &>().value.safeGet<String>(); const String & remote_table = engine_args[2]->as<ASTLiteral &>().value.safeGet<String>(); const String & username = engine_args[3]->as<ASTLiteral &>().value.safeGet<String>(); const String & password = engine_args[4]->as<ASTLiteral &>().value.safeGet<String>(); + size_t max_addresses = args.getContext()->getSettingsRef().glob_expansion_max_elements; - mysqlxx::Pool pool(remote_database, parsed_host_port.first, username, password, parsed_host_port.second); + auto addresses = parseRemoteDescriptionForExternalDatabase(host_port, max_addresses, 3306); + mysqlxx::PoolWithFailover pool(remote_database, addresses, username, password); bool replace_query = false; std::string on_duplicate_clause; @@ -261,7 +272,7 @@ void registerStorageMySQL(StorageFactory & factory) on_duplicate_clause, args.columns, args.constraints, - args.context); + args.getContext()); }, { .source_access_type = AccessType::MYSQL, diff --git a/src/Storages/StorageMySQL.h b/src/Storages/StorageMySQL.h index 645f3600eee..a68c06c1abe 100644 --- a/src/Storages/StorageMySQL.h +++ b/src/Storages/StorageMySQL.h @@ -1,15 +1,15 @@ #pragma once #if !defined(ARCADIA_BUILD) -# include "config_core.h" +#include "config_core.h" #endif #if USE_MYSQL -# include +#include -# include -# include +#include +#include namespace DB @@ -19,20 +19,20 @@ namespace DB * Use ENGINE = mysql(host_port, database_name, table_name, user_name, password) * Read only. */ -class StorageMySQL final : public ext::shared_ptr_helper<StorageMySQL>, public IStorage +class StorageMySQL final : public ext::shared_ptr_helper<StorageMySQL>, public IStorage, WithContext { friend struct ext::shared_ptr_helper<StorageMySQL>; public: StorageMySQL( const StorageID & table_id_, - mysqlxx::Pool && pool_, + mysqlxx::PoolWithFailover && pool_, const std::string & remote_database_name_, const std::string & remote_table_name_, - const bool replace_query_, + bool replace_query_, const std::string & on_duplicate_clause_, const ColumnsDescription & columns_, const ConstraintsDescription & constraints_, - const Context & context_); + ContextPtr context_); std::string getName() const override { return "MySQL"; } @@ -40,12 +40,12 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; - BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, const Context & context) override; + BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, ContextPtr context) override; private: friend class StorageMySQLBlockOutputStream; @@ -55,8 +55,7 @@ private: bool replace_query; std::string on_duplicate_clause; - mysqlxx::Pool pool; - const Context & global_context; + mysqlxx::PoolWithFailoverPtr pool; }; } diff --git a/src/Storages/StorageNull.cpp b/src/Storages/StorageNull.cpp index ed9a7fffc63..6c8a21db571 100644 --- a/src/Storages/StorageNull.cpp +++ b/src/Storages/StorageNull.cpp @@ -36,7 +36,7 @@ void registerStorageNull(StorageFactory & factory) }); } -void StorageNull::checkAlterIsPossible(const AlterCommands & commands, const Context & context) const +void StorageNull::checkAlterIsPossible(const AlterCommands & commands, ContextPtr context) const { auto name_deps =
getDependentViewsByColumn(context); for (const auto & command : commands) @@ -46,7 +46,7 @@ void StorageNull::checkAlterIsPossible(const AlterCommands & commands, const Con throw Exception( "Alter of type '" + alterTypeToString(command.type) + "' is not supported by storage " + getName(), ErrorCodes::NOT_IMPLEMENTED); - if (command.type == AlterCommand::DROP_COLUMN) + if (command.type == AlterCommand::DROP_COLUMN && !command.clear) { const auto & deps_mv = name_deps[command.column_name]; if (!deps_mv.empty()) @@ -61,7 +61,7 @@ void StorageNull::checkAlterIsPossible(const AlterCommands & commands, const Con } -void StorageNull::alter(const AlterCommands & params, const Context & context, TableLockHolder &) +void StorageNull::alter(const AlterCommands & params, ContextPtr context, TableLockHolder &) { auto table_id = getStorageID(); diff --git a/src/Storages/StorageNull.h b/src/Storages/StorageNull.h index 943c056a588..7fe65eb25dc 100644 --- a/src/Storages/StorageNull.h +++ b/src/Storages/StorageNull.h @@ -25,7 +25,7 @@ public: const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo &, - const Context & /*context*/, + ContextPtr /*context*/, QueryProcessingStage::Enum /*processing_stage*/, size_t, unsigned) override @@ -36,14 +36,14 @@ public: bool supportsParallelInsert() const override { return true; } - BlockOutputStreamPtr write(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, const Context &) override + BlockOutputStreamPtr write(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, ContextPtr) override { return std::make_shared<NullBlockOutputStream>(metadata_snapshot->getSampleBlock()); } - void checkAlterIsPossible(const AlterCommands & commands, const Context & context) const override; + void checkAlterIsPossible(const AlterCommands & commands, ContextPtr context) const override; - void alter(const AlterCommands & params, const Context & context, TableLockHolder & table_lock_holder) override; + void alter(const AlterCommands & params, ContextPtr context, TableLockHolder & table_lock_holder) override; std::optional<UInt64> totalRows(const Settings &) const override { diff --git a/src/Storages/StoragePostgreSQL.cpp b/src/Storages/StoragePostgreSQL.cpp index 78ec8c34e41..a99568c0036 100644 --- a/src/Storages/StoragePostgreSQL.cpp +++ b/src/Storages/StoragePostgreSQL.cpp @@ -26,6 +26,7 @@ #include #include #include +#include #include #include @@ -41,15 +42,17 @@ namespace ErrorCodes StoragePostgreSQL::StoragePostgreSQL( const StorageID & table_id_, + const postgres::PoolWithFailover & pool_, const String & remote_table_name_, - PostgreSQLConnectionPtr connection_, const ColumnsDescription & columns_, const ConstraintsDescription & constraints_, - const Context & context_) + ContextPtr context_, + const String & remote_table_schema_) : IStorage(table_id_) , remote_table_name(remote_table_name_) + , remote_table_schema(remote_table_schema_) , global_context(context_) - , connection(std::move(connection_)) + , pool(std::make_shared<postgres::PoolWithFailover>(pool_)) { StorageInMemoryMetadata storage_metadata; storage_metadata.setColumns(columns_); @@ -62,16 +65,18 @@ Pipe StoragePostgreSQL::read( const Names & column_names_, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info_, - const Context & context_, + ContextPtr context_, QueryProcessingStage::Enum /*processed_stage*/, size_t max_block_size_, unsigned) { metadata_snapshot->check(column_names_, getVirtuals(), getStorageID()); + /// Connection is already made to the needed database, so it should not be present in
the query; + /// remote_table_schema is empty if it is not specified, will access only table_name. String query = transformQueryForExternalDatabase( query_info_, metadata_snapshot->getColumns().getOrdinary(), - IdentifierQuotingStyle::DoubleQuotes, "", remote_table_name, context_); + IdentifierQuotingStyle::DoubleQuotes, remote_table_schema, remote_table_name, context_); Block sample_block; for (const String & column_name : column_names_) @@ -84,7 +89,7 @@ Pipe StoragePostgreSQL::read( } return Pipe(std::make_shared<SourceFromInputStream>( - std::make_shared<PostgreSQLBlockInputStream>(connection->conn(), query, sample_block, max_block_size_))); + std::make_shared<PostgreSQLBlockInputStream>(pool->get(), query, sample_block, max_block_size_))); } @@ -93,10 +98,10 @@ class PostgreSQLBlockOutputStream : public IBlockOutputStream { public: explicit PostgreSQLBlockOutputStream( const StorageMetadataPtr & metadata_snapshot_, - ConnectionPtr connection_, + postgres::ConnectionHolderPtr connection_, const std::string & remote_table_name_) : metadata_snapshot(metadata_snapshot_) , connection(std::move(connection_)) , remote_table_name(remote_table_name_) { } @@ -106,7 +111,7 @@ public: void writePrefix() override { - work = std::make_unique<pqxx::work>(*connection); + work = std::make_unique<pqxx::work>(connection->conn()); } @@ -143,7 +148,7 @@ public: } else { - data_types[j]->serializeAsText(*columns[j], i, ostr, FormatSettings{}); + data_types[j]->getDefaultSerialization()->serializeText(*columns[j], i, ostr, FormatSettings{}); } row[j] = ostr.str(); @@ -210,7 +215,7 @@ public: auto array_column = ColumnArray::create(createNested(nested)); array_column->insert(array_field); WriteBufferFromOwnString ostr; - data_type->serializeAsText(*array_column, 0, ostr, FormatSettings{}); + data_type->getDefaultSerialization()->serializeText(*array_column, 0, ostr, FormatSettings{}); /// ostr is guaranteed to be at least '[]', i.e.
size is at least 2 and 2 only if ostr.str() == '[]' assert(ostr.str().size() >= 2); @@ -272,7 +277,7 @@ public: private: StorageMetadataPtr metadata_snapshot; - ConnectionPtr connection; + postgres::ConnectionHolderPtr connection; std::string remote_table_name; std::unique_ptr<pqxx::work> work; @@ -281,9 +286,9 @@ private: BlockOutputStreamPtr StoragePostgreSQL::write( - const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, const Context & /* context */) + const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, ContextPtr /* context */) { - return std::make_shared<PostgreSQLBlockOutputStream>(metadata_snapshot, connection->conn(), remote_table_name); + return std::make_shared<PostgreSQLBlockOutputStream>(metadata_snapshot, pool->get(), remote_table_name); } @@ -293,26 +298,39 @@ void registerStoragePostgreSQL(StorageFactory & factory) { ASTs & engine_args = args.engine_args; - if (engine_args.size() != 5) - throw Exception("Storage PostgreSQL requires 5 parameters: " - "PostgreSQL('host:port', 'database', 'table', 'username', 'password'.", + if (engine_args.size() < 5 || engine_args.size() > 6) + throw Exception("Storage PostgreSQL requires from 5 to 6 parameters: " + "PostgreSQL('host:port', 'database', 'table', 'username', 'password' [, 'schema']", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); for (auto & engine_arg : engine_args) - engine_arg = evaluateConstantExpressionOrIdentifierAsLiteral(engine_arg, args.local_context); + engine_arg = evaluateConstantExpressionOrIdentifierAsLiteral(engine_arg, args.getLocalContext()); - auto parsed_host_port = parseAddress(engine_args[0]->as<ASTLiteral &>().value.safeGet<String>(), 5432); + auto host_port = engine_args[0]->as<ASTLiteral &>().value.safeGet<String>(); + /// Split into replicas if needed. + size_t max_addresses = args.getContext()->getSettingsRef().glob_expansion_max_elements; + auto addresses = parseRemoteDescriptionForExternalDatabase(host_port, max_addresses, 5432); + + const String & remote_database = engine_args[1]->as<ASTLiteral &>().value.safeGet<String>(); const String & remote_table = engine_args[2]->as<ASTLiteral &>().value.safeGet<String>(); + const String & username = engine_args[3]->as<ASTLiteral &>().value.safeGet<String>(); + const String & password = engine_args[4]->as<ASTLiteral &>().value.safeGet<String>(); - auto connection = std::make_shared<PostgreSQLConnection>( - engine_args[1]->as<ASTLiteral &>().value.safeGet<String>(), - parsed_host_port.first, - parsed_host_port.second, - engine_args[3]->as<ASTLiteral &>().value.safeGet<String>(), - engine_args[4]->as<ASTLiteral &>().value.safeGet<String>()); + String remote_table_schema; + if (engine_args.size() == 6) + remote_table_schema = engine_args[5]->as<ASTLiteral &>().value.safeGet<String>(); + + postgres::PoolWithFailover pool( + remote_database, + addresses, + username, + password, + args.getContext()->getSettingsRef().postgresql_connection_pool_size, + args.getContext()->getSettingsRef().postgresql_connection_pool_wait_timeout); return StoragePostgreSQL::create( - args.table_id, remote_table, connection, args.columns, args.constraints, args.context); + args.table_id, pool, remote_table, + args.columns, args.constraints, args.getContext(), remote_table_schema); }, { .source_access_type = AccessType::POSTGRES, diff --git a/src/Storages/StoragePostgreSQL.h b/src/Storages/StoragePostgreSQL.h index 8aebae5896b..e4ab59f7a06 100644 --- a/src/Storages/StoragePostgreSQL.h +++ b/src/Storages/StoragePostgreSQL.h @@ -9,14 +9,12 @@ #include #include #include -#include +#include namespace DB { -class PostgreSQLConnection; -using PostgreSQLConnectionPtr = std::shared_ptr<PostgreSQLConnection>; class StoragePostgreSQL final : public ext::shared_ptr_helper<StoragePostgreSQL>, public IStorage { @@ -24,11 +22,12 @@ class StoragePostgreSQL final : public ext::shared_ptr_helper<StoragePostgreSQL> public:
StoragePostgreSQL( const StorageID & table_id_, - const std::string & remote_table_name_, - PostgreSQLConnectionPtr connection_, + const postgres::PoolWithFailover & pool_, + const String & remote_table_name_, const ColumnsDescription & columns_, const ConstraintsDescription & constraints_, - const Context & context_); + ContextPtr context_, + const std::string & remote_table_schema_ = ""); String getName() const override { return "PostgreSQL"; } @@ -36,19 +35,20 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; - BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, const Context & context) override; + BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, ContextPtr context) override; private: friend class PostgreSQLBlockOutputStream; String remote_table_name; - Context global_context; - PostgreSQLConnectionPtr connection; + String remote_table_schema; + ContextPtr global_context; + postgres::PoolWithFailoverPtr pool; }; } diff --git a/src/Storages/StorageProxy.h b/src/Storages/StorageProxy.h index 0349319d8fa..2c3e9d610b0 100644 --- a/src/Storages/StorageProxy.h +++ b/src/Storages/StorageProxy.h @@ -11,7 +11,7 @@ class StorageProxy : public IStorage { public: - StorageProxy(const StorageID & table_id_) : IStorage(table_id_) {} + explicit StorageProxy(const StorageID & table_id_) : IStorage(table_id_) {} virtual StoragePtr getNested() const = 0; @@ -32,7 +32,7 @@ public: NamesAndTypesList getVirtuals() const override { return getNested()->getVirtuals(); } QueryProcessingStage::Enum getQueryProcessingStage( - const Context & context, QueryProcessingStage::Enum to_stage, SelectQueryInfo & ast) const override + ContextPtr context, QueryProcessingStage::Enum to_stage, SelectQueryInfo & ast) const override { return getNested()->getQueryProcessingStage(context, to_stage, ast); } @@ -40,7 +40,7 @@ public: BlockInputStreams watch( const Names & column_names, const SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum & processed_stage, size_t max_block_size, unsigned num_streams) override @@ -52,7 +52,7 @@ public: const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override @@ -60,10 +60,7 @@ public: return getNested()->read(column_names, metadata_snapshot, query_info, context, processed_stage, max_block_size, num_streams); } - BlockOutputStreamPtr write( - const ASTPtr & query, - const StorageMetadataPtr & metadata_snapshot, - const Context & context) override + BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & metadata_snapshot, ContextPtr context) override { return getNested()->write(query, metadata_snapshot, context); } @@ -73,7 +70,7 @@ public: void truncate( const ASTPtr & query, const StorageMetadataPtr & metadata_snapshot, - const Context & context, + ContextPtr context, TableExclusiveLockHolder & lock) override { getNested()->truncate(query, metadata_snapshot, context, lock); @@ -91,13 +88,13 @@ public: IStorage::renameInMemory(new_table_id); } - void alter(const AlterCommands & params, const Context & context, 
TableLockHolder & alter_lock_holder) override + void alter(const AlterCommands & params, ContextPtr context, TableLockHolder & alter_lock_holder) override { getNested()->alter(params, context, alter_lock_holder); IStorage::setInMemoryMetadata(getNested()->getInMemoryMetadata()); } - void checkAlterIsPossible(const AlterCommands & commands, const Context & context) const override + void checkAlterIsPossible(const AlterCommands & commands, ContextPtr context) const override { getNested()->checkAlterIsPossible(commands, context); } @@ -105,7 +102,7 @@ public: Pipe alterPartition( const StorageMetadataPtr & metadata_snapshot, const PartitionCommands & commands, - const Context & context) override + ContextPtr context) override { return getNested()->alterPartition(metadata_snapshot, commands, context); } @@ -122,12 +119,12 @@ public: bool final, bool deduplicate, const Names & deduplicate_by_columns, - const Context & context) override + ContextPtr context) override { return getNested()->optimize(query, metadata_snapshot, partition, final, deduplicate, deduplicate_by_columns, context); } - void mutate(const MutationCommands & commands, const Context & context) override { getNested()->mutate(commands, context); } + void mutate(const MutationCommands & commands, ContextPtr context) override { getNested()->mutate(commands, context); } CancellationCode killMutation(const String & mutation_id) override { return getNested()->killMutation(mutation_id); } @@ -137,12 +134,12 @@ public: ActionLock getActionLock(StorageActionBlockType action_type) override { return getNested()->getActionLock(action_type); } bool supportsIndexForIn() const override { return getNested()->supportsIndexForIn(); } - bool mayBenefitFromIndexForIn(const ASTPtr & left_in_operand, const Context & query_context, const StorageMetadataPtr & metadata_snapshot) const override + bool mayBenefitFromIndexForIn(const ASTPtr & left_in_operand, ContextPtr query_context, const StorageMetadataPtr & metadata_snapshot) const override { return getNested()->mayBenefitFromIndexForIn(left_in_operand, query_context, metadata_snapshot); } - CheckResults checkData(const ASTPtr & query , const Context & context) override { return getNested()->checkData(query, context); } + CheckResults checkData(const ASTPtr & query , ContextPtr context) override { return getNested()->checkData(query, context); } void checkTableCanBeDropped() const override { getNested()->checkTableCanBeDropped(); } void checkPartitionCanBeDropped(const ASTPtr & partition) override { getNested()->checkPartitionCanBeDropped(partition); } bool storesDataOnDisk() const override { return getNested()->storesDataOnDisk(); } diff --git a/src/Storages/StorageReplicatedMergeTree.cpp b/src/Storages/StorageReplicatedMergeTree.cpp index 68f3b6d80d1..864c31ec05d 100644 --- a/src/Storages/StorageReplicatedMergeTree.cpp +++ b/src/Storages/StorageReplicatedMergeTree.cpp @@ -26,6 +26,7 @@ #include #include #include +#include #include @@ -39,6 +40,9 @@ #include #include +#include +#include + #include #include #include @@ -48,6 +52,7 @@ #include #include #include +#include #include #include @@ -56,6 +61,7 @@ #include #include +#include #include #include @@ -124,6 +130,7 @@ namespace ErrorCodes extern const int UNKNOWN_POLICY; extern const int NO_SUCH_DATA_PART; extern const int INTERSERVER_SCHEME_DOESNT_MATCH; + extern const int DUPLICATE_DATA_PART; } namespace ActionLocks @@ -142,6 +149,10 @@ static const auto MERGE_SELECTING_SLEEP_MS = 5 * 1000; static const auto MUTATIONS_FINALIZING_SLEEP_MS = 1 * 
1000; static const auto MUTATIONS_FINALIZING_IDLE_SLEEP_MS = 5 * 1000; + +std::atomic_uint StorageReplicatedMergeTree::total_fetches {0}; + + void StorageReplicatedMergeTree::setZooKeeper() { /// Every ReplicatedMergeTree table is using only one ZooKeeper session. @@ -153,11 +164,11 @@ void StorageReplicatedMergeTree::setZooKeeper() std::lock_guard lock(current_zookeeper_mutex); if (zookeeper_name == default_zookeeper_name) { - current_zookeeper = global_context.getZooKeeper(); + current_zookeeper = getContext()->getZooKeeper(); } else { - current_zookeeper = global_context.getAuxiliaryZooKeeper(zookeeper_name); + current_zookeeper = getContext()->getAuxiliaryZooKeeper(zookeeper_name); } } @@ -221,7 +232,7 @@ StorageReplicatedMergeTree::StorageReplicatedMergeTree( const StorageID & table_id_, const String & relative_data_path_, const StorageInMemoryMetadata & metadata_, - Context & context_, + ContextPtr context_, const String & date_column_name, const MergingParams & merging_params_, std::unique_ptr settings_, @@ -243,34 +254,34 @@ StorageReplicatedMergeTree::StorageReplicatedMergeTree( , replica_path(zookeeper_path + "/replicas/" + replica_name_) , reader(*this) , writer(*this) - , merger_mutator(*this, global_context.getSettingsRef().background_pool_size) + , merger_mutator(*this, getContext()->getSettingsRef().background_pool_size) , merge_strategy_picker(*this) , queue(*this, merge_strategy_picker) , fetcher(*this) - , background_executor(*this, global_context) - , background_moves_executor(*this, global_context) + , background_executor(*this, getContext()) + , background_moves_executor(*this, getContext()) , cleanup_thread(*this) , part_check_thread(*this) , restarting_thread(*this) , allow_renaming(allow_renaming_) - , replicated_fetches_pool_size(global_context.getSettingsRef().background_fetches_pool_size) + , replicated_fetches_pool_size(getContext()->getSettingsRef().background_fetches_pool_size) { - queue_updating_task = global_context.getSchedulePool().createTask( + queue_updating_task = getContext()->getSchedulePool().createTask( getStorageID().getFullTableName() + " (StorageReplicatedMergeTree::queueUpdatingTask)", [this]{ queueUpdatingTask(); }); - mutations_updating_task = global_context.getSchedulePool().createTask( + mutations_updating_task = getContext()->getSchedulePool().createTask( getStorageID().getFullTableName() + " (StorageReplicatedMergeTree::mutationsUpdatingTask)", [this]{ mutationsUpdatingTask(); }); - merge_selecting_task = global_context.getSchedulePool().createTask( + merge_selecting_task = getContext()->getSchedulePool().createTask( getStorageID().getFullTableName() + " (StorageReplicatedMergeTree::mergeSelectingTask)", [this] { mergeSelectingTask(); }); /// Will be activated if we win leader election. merge_selecting_task->deactivate(); - mutations_finalizing_task = global_context.getSchedulePool().createTask( + mutations_finalizing_task = getContext()->getSchedulePool().createTask( getStorageID().getFullTableName() + " (StorageReplicatedMergeTree::mutationsFinalizingTask)", [this] { mutationsFinalizingTask(); }); - if (global_context.hasZooKeeper() || global_context.hasAuxiliaryZooKeeper(zookeeper_name)) + if (getContext()->hasZooKeeper() || getContext()->hasAuxiliaryZooKeeper(zookeeper_name)) { /// It's possible for getZooKeeper() to timeout if zookeeper host(s) can't /// be reached. 
In such cases Poco::Exception is thrown after a connection @@ -289,11 +300,11 @@ StorageReplicatedMergeTree::StorageReplicatedMergeTree( { if (zookeeper_name == default_zookeeper_name) { - current_zookeeper = global_context.getZooKeeper(); + current_zookeeper = getContext()->getZooKeeper(); } else { - current_zookeeper = global_context.getAuxiliaryZooKeeper(zookeeper_name); + current_zookeeper = getContext()->getAuxiliaryZooKeeper(zookeeper_name); } } catch (...) @@ -447,12 +458,12 @@ void StorageReplicatedMergeTree::waitMutationToFinishOnReplicas( if (replicas.empty()) return; - zkutil::EventPtr wait_event = std::make_shared<Poco::Event>(); std::set<String> inactive_replicas; for (const String & replica : replicas) { LOG_DEBUG(log, "Waiting for {} to apply mutation {}", replica, mutation_id); + zkutil::EventPtr wait_event = std::make_shared<Poco::Event>(); while (!partial_shutdown_called) { @@ -476,9 +487,8 @@ void StorageReplicatedMergeTree::waitMutationToFinishOnReplicas( String mutation_pointer = zookeeper_path + "/replicas/" + replica + "/mutation_pointer"; std::string mutation_pointer_value; - Coordination::Stat get_stat; /// Replica could be removed - if (!zookeeper->tryGet(mutation_pointer, mutation_pointer_value, &get_stat, wait_event)) + if (!zookeeper->tryGet(mutation_pointer, mutation_pointer_value, nullptr, wait_event)) { LOG_WARNING(log, "Replica {} was removed", replica); break; } @@ -488,8 +498,10 @@ void StorageReplicatedMergeTree::waitMutationToFinishOnReplicas( /// Replica can become inactive, so wait with timeout and recheck it if (wait_event->tryWait(1000)) - break; + continue; + /// Here we check mutation for errors or kill on local replica. If they happen on this replica + /// they will happen on each replica, so we can check only in-memory info. auto mutation_status = queue.getIncompleteMutationsStatus(mutation_id); if (!mutation_status || !mutation_status->latest_fail_reason.empty()) break; @@ -506,6 +518,8 @@ void StorageReplicatedMergeTree::waitMutationToFinishOnReplicas( std::set<String> mutation_ids; mutation_ids.insert(mutation_id); + /// Here we check mutation for errors or kill on local replica. If they happen on this replica + /// they will happen on each replica, so we can check only in-memory info. auto mutation_status = queue.getIncompleteMutationsStatus(mutation_id, &mutation_ids); checkMutationStatus(mutation_status, mutation_ids); @@ -539,6 +553,13 @@ void StorageReplicatedMergeTree::createNewZooKeeperNodes() /// Mutations zookeeper->createIfNotExists(zookeeper_path + "/mutations", String()); zookeeper->createIfNotExists(replica_path + "/mutation_pointer", String()); + + /// Nodes for zero-copy S3 replication + if (storage_settings.get()->allow_s3_zero_copy_replication) + { + zookeeper->createIfNotExists(zookeeper_path + "/zero_copy_s3", String()); + zookeeper->createIfNotExists(zookeeper_path + "/zero_copy_s3/shared", String()); + } } @@ -564,42 +585,24 @@ bool StorageReplicatedMergeTree::createTableIfNotExists(const StorageMetadataPtr /// This is Ok because another replica is definitely going to drop the table.
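The hunk above changes two things in `waitMutationToFinishOnReplicas`: the wait event becomes per-replica (a notification left over from one replica can no longer satisfy the wait for the next one), and a successful `tryWait` now `continue`s to re-read the mutation pointer instead of giving up. A minimal sketch of that corrected loop shape, using `std::condition_variable` in place of the ZooKeeper watch (all names here are illustrative, not ClickHouse code):

```cpp
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <string>

struct MutationWaiter
{
    std::mutex mutex;
    std::condition_variable event;   // plays the role of the per-replica wait_event
    std::string mutation_pointer;    // last mutation the replica has applied

    bool waitFor(const std::string & target_mutation, const bool & shutdown_called)
    {
        std::unique_lock lock(mutex);
        while (!shutdown_called)
        {
            if (mutation_pointer >= target_mutation)
                return true;   // replica caught up

            /// Woken up within the timeout: the pointer may have advanced, so
            /// `continue` to re-read it instead of giving up (the old code broke here).
            if (event.wait_for(lock, std::chrono::seconds(1)) == std::cv_status::no_timeout)
                continue;

            /// Timed out: this is where the real code checks the in-memory
            /// mutation status for failures or a KILL before looping again.
        }
        return false;
    }
};
```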
LOG_WARNING(log, "Removing leftovers from table {} (this might take several minutes)", zookeeper_path); + String drop_lock_path = zookeeper_path + "/dropped/lock"; + Coordination::Error code = zookeeper->tryCreate(drop_lock_path, "", zkutil::CreateMode::Ephemeral); - Strings children; - Coordination::Error code = zookeeper->tryGetChildren(zookeeper_path, children); - if (code == Coordination::Error::ZNONODE) + if (code == Coordination::Error::ZNONODE || code == Coordination::Error::ZNODEEXISTS) { - LOG_WARNING(log, "Table {} is already finished removing by another replica right now", replica_path); + LOG_WARNING(log, "The leftovers from table {} were removed by another replica", zookeeper_path); + } + else if (code != Coordination::Error::ZOK) + { + throw Coordination::Exception(code, drop_lock_path); } else { - for (const auto & child : children) - if (child != "dropped") - zookeeper->tryRemoveRecursive(zookeeper_path + "/" + child); - - Coordination::Requests ops; - Coordination::Responses responses; - ops.emplace_back(zkutil::makeRemoveRequest(zookeeper_path + "/dropped", -1)); - ops.emplace_back(zkutil::makeRemoveRequest(zookeeper_path, -1)); - code = zookeeper->tryMulti(ops, responses); - - if (code == Coordination::Error::ZNONODE) + auto metadata_drop_lock = zkutil::EphemeralNodeHolder::existing(drop_lock_path, *zookeeper); + if (!removeTableNodesFromZooKeeper(zookeeper, zookeeper_path, metadata_drop_lock, log)) { - LOG_WARNING(log, "Table {} is already finished removing by another replica right now", replica_path); - } - else if (code == Coordination::Error::ZNOTEMPTY) - { - throw Exception(fmt::format( - "The old table was not completely removed from ZooKeeper, {} still exists and may contain some garbage. But it should never happen according to the logic of operations (it's a bug).", zookeeper_path), ErrorCodes::LOGICAL_ERROR); - } - else if (code != Coordination::Error::ZOK) - { - /// It is still possible that ZooKeeper session is expired or server is killed in the middle of the delete operation. - zkutil::KeeperMultiException::check(code, ops, responses); - } - else - { - LOG_WARNING(log, "The leftovers from table {} was successfully removed from ZooKeeper", zookeeper_path); + /// Someone is recursively removing table right now, we cannot create new table until old one is removed + continue; } } } @@ -612,10 +615,6 @@ bool StorageReplicatedMergeTree::createTableIfNotExists(const StorageMetadataPtr Coordination::Requests ops; ops.emplace_back(zkutil::makeCreateRequest(zookeeper_path, "", zkutil::CreateMode::Persistent)); - /// Check that the table is not being dropped right now. - ops.emplace_back(zkutil::makeCreateRequest(zookeeper_path + "/dropped", "", zkutil::CreateMode::Persistent)); - ops.emplace_back(zkutil::makeRemoveRequest(zookeeper_path + "/dropped", -1)); - ops.emplace_back(zkutil::makeCreateRequest(zookeeper_path + "/metadata", metadata_str, zkutil::CreateMode::Persistent)); ops.emplace_back(zkutil::makeCreateRequest(zookeeper_path + "/columns", metadata_snapshot->getColumns().toString(), @@ -758,9 +757,9 @@ void StorageReplicatedMergeTree::drop() /// and calling StorageReplicatedMergeTree::getZooKeeper()/getAuxiliaryZooKeeper() won't suffice. 
zkutil::ZooKeeperPtr zookeeper; if (zookeeper_name == default_zookeeper_name) - zookeeper = global_context.getZooKeeper(); + zookeeper = getContext()->getZooKeeper(); else - zookeeper = global_context.getAuxiliaryZooKeeper(zookeeper_name); + zookeeper = getContext()->getAuxiliaryZooKeeper(zookeeper_name); /// If probably there is metadata in ZooKeeper, we don't allow to drop the table. if (!zookeeper) @@ -803,10 +802,18 @@ void StorageReplicatedMergeTree::dropReplica(zkutil::ZooKeeperPtr zookeeper, con * because table creation is executed in single transaction that will conflict with remaining nodes. */ + /// Node /dropped works like a lock that protects from concurrent removal of old table and creation of new table. + /// But recursive removal may fail in the middle of operation leaving some garbage in zookeeper_path, so + /// we remove it on table creation if there is /dropped node. Creating thread may remove /dropped node created by + /// removing thread, and it causes race condition if removing thread is not finished yet. + /// To avoid this we also create ephemeral child before starting recursive removal. + /// (The existence of child node does not allow to remove parent node). Coordination::Requests ops; Coordination::Responses responses; + String drop_lock_path = zookeeper_path + "/dropped/lock"; ops.emplace_back(zkutil::makeRemoveRequest(zookeeper_path + "/replicas", -1)); ops.emplace_back(zkutil::makeCreateRequest(zookeeper_path + "/dropped", "", zkutil::CreateMode::Persistent)); + ops.emplace_back(zkutil::makeCreateRequest(drop_lock_path, "", zkutil::CreateMode::Ephemeral)); Coordination::Error code = zookeeper->tryMulti(ops, responses); if (code == Coordination::Error::ZNONODE || code == Coordination::Error::ZNODEEXISTS) @@ -823,48 +830,57 @@ void StorageReplicatedMergeTree::dropReplica(zkutil::ZooKeeperPtr zookeeper, con } else { + auto metadata_drop_lock = zkutil::EphemeralNodeHolder::existing(drop_lock_path, *zookeeper); LOG_INFO(logger, "Removing table {} (this might take several minutes)", zookeeper_path); - - Strings children; - code = zookeeper->tryGetChildren(zookeeper_path, children); - if (code == Coordination::Error::ZNONODE) - { - LOG_WARNING(logger, "Table {} is already finished removing by another replica right now", remote_replica_path); - } - else - { - for (const auto & child : children) - if (child != "dropped") - zookeeper->tryRemoveRecursive(zookeeper_path + "/" + child); - - ops.clear(); - responses.clear(); - ops.emplace_back(zkutil::makeRemoveRequest(zookeeper_path + "/dropped", -1)); - ops.emplace_back(zkutil::makeRemoveRequest(zookeeper_path, -1)); - code = zookeeper->tryMulti(ops, responses); - - if (code == Coordination::Error::ZNONODE) - { - LOG_WARNING(logger, "Table {} is already finished removing by another replica right now", remote_replica_path); - } - else if (code == Coordination::Error::ZNOTEMPTY) - { - LOG_ERROR(logger, "Table was not completely removed from ZooKeeper, {} still exists and may contain some garbage.", - zookeeper_path); - } - else if (code != Coordination::Error::ZOK) - { - /// It is still possible that ZooKeeper session is expired or server is killed in the middle of the delete operation. 
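The `zkutil::EphemeralNodeHolder` used around here is essentially an RAII guard whose destructor deletes the lock node, unless `setAlreadyRemoved()` was called because the node was already deleted inside the final multi-op. A simplified stand-alone version (hypothetical names, not the zkutil implementation):

```cpp
#include <functional>
#include <string>

class EphemeralHolder
{
    std::string path;
    std::function<void(const std::string &)> remove;   // stand-in for the client call
    bool already_removed = false;

public:
    EphemeralHolder(std::string path_, std::function<void(const std::string &)> remove_)
        : path(std::move(path_)), remove(std::move(remove_)) {}

    const std::string & getPath() const { return path; }

    /// Call after the node was removed inside a transaction (multi-op),
    /// so the destructor does not try to remove it a second time.
    void setAlreadyRemoved() { already_removed = true; }

    ~EphemeralHolder()
    {
        if (!already_removed)
            remove(path);   // best-effort cleanup on early exit or exception
    }
};
```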
- zkutil::KeeperMultiException::check(code, ops, responses); - } - else - { - LOG_INFO(logger, "Table {} was successfully removed from ZooKeeper", zookeeper_path); - } - } + removeTableNodesFromZooKeeper(zookeeper, zookeeper_path, metadata_drop_lock, logger); } } +bool StorageReplicatedMergeTree::removeTableNodesFromZooKeeper(zkutil::ZooKeeperPtr zookeeper, + const String & zookeeper_path, const zkutil::EphemeralNodeHolder::Ptr & metadata_drop_lock, Poco::Logger * logger) +{ + bool completely_removed = false; + Strings children; + Coordination::Error code = zookeeper->tryGetChildren(zookeeper_path, children); + if (code == Coordination::Error::ZNONODE) + throw Exception(ErrorCodes::LOGICAL_ERROR, "There is a race condition between creation and removal of replicated table. It's a bug"); + + for (const auto & child : children) + if (child != "dropped") + zookeeper->tryRemoveRecursive(zookeeper_path + "/" + child); + + Coordination::Requests ops; + Coordination::Responses responses; + ops.emplace_back(zkutil::makeRemoveRequest(metadata_drop_lock->getPath(), -1)); + ops.emplace_back(zkutil::makeRemoveRequest(zookeeper_path + "/dropped", -1)); + ops.emplace_back(zkutil::makeRemoveRequest(zookeeper_path, -1)); + code = zookeeper->tryMulti(ops, responses); + + if (code == Coordination::Error::ZNONODE) + { + throw Exception(ErrorCodes::LOGICAL_ERROR, "There is a race condition between creation and removal of replicated table. It's a bug"); + } + else if (code == Coordination::Error::ZNOTEMPTY) + { + LOG_ERROR(logger, "Table was not completely removed from ZooKeeper, {} still exists and may contain some garbage, " + "but someone is removing it right now.", zookeeper_path); + } + else if (code != Coordination::Error::ZOK) + { + /// It is still possible that ZooKeeper session is expired or server is killed in the middle of the delete operation. + zkutil::KeeperMultiException::check(code, ops, responses); + } + else + { + metadata_drop_lock->setAlreadyRemoved(); + completely_removed = true; + LOG_INFO(logger, "Table {} was successfully removed from ZooKeeper", zookeeper_path); + } + + return completely_removed; +} + /** Verify that list of columns and table storage_settings_ptr match those specified in ZK (/metadata). * If not, throw an exception.
@@ -878,7 +894,7 @@ void StorageReplicatedMergeTree::checkTableStructure(const String & zookeeper_pr Coordination::Stat metadata_stat; String metadata_str = zookeeper->get(zookeeper_prefix + "/metadata", &metadata_stat); auto metadata_from_zk = ReplicatedMergeTreeTableMetadata::parse(metadata_str); - old_metadata.checkEquals(metadata_from_zk, metadata_snapshot->getColumns(), global_context); + old_metadata.checkEquals(metadata_from_zk, metadata_snapshot->getColumns(), getContext()); Coordination::Stat columns_stat; auto columns_from_zk = ColumnsDescription::parse(zookeeper->get(zookeeper_prefix + "/columns", &columns_stat)); @@ -896,8 +912,7 @@ void StorageReplicatedMergeTree::setTableStructure( StorageInMemoryMetadata new_metadata = getInMemoryMetadata(); StorageInMemoryMetadata old_metadata = getInMemoryMetadata(); - if (new_columns != new_metadata.columns) - new_metadata.columns = new_columns; + new_metadata.columns = new_columns; if (!metadata_diff.empty()) { @@ -924,7 +939,7 @@ void StorageReplicatedMergeTree::setTableStructure( auto & sorting_key = new_metadata.sorting_key; auto & primary_key = new_metadata.primary_key; - sorting_key.recalculateWithNewAST(order_by_ast, new_metadata.columns, global_context); + sorting_key.recalculateWithNewAST(order_by_ast, new_metadata.columns, getContext()); if (primary_key.definition_ast == nullptr) { @@ -932,18 +947,18 @@ void StorageReplicatedMergeTree::setTableStructure( /// save the old ORDER BY expression as the new primary key. auto old_sorting_key_ast = old_metadata.getSortingKey().definition_ast; primary_key = KeyDescription::getKeyFromAST( - old_sorting_key_ast, new_metadata.columns, global_context); + old_sorting_key_ast, new_metadata.columns, getContext()); } } if (metadata_diff.sampling_expression_changed) { auto sample_by_ast = parse_key_expr(metadata_diff.new_sampling_expression); - new_metadata.sampling_key.recalculateWithNewAST(sample_by_ast, new_metadata.columns, global_context); + new_metadata.sampling_key.recalculateWithNewAST(sample_by_ast, new_metadata.columns, getContext()); } if (metadata_diff.skip_indices_changed) - new_metadata.secondary_indices = IndicesDescription::parse(metadata_diff.new_skip_indices, new_columns, global_context); + new_metadata.secondary_indices = IndicesDescription::parse(metadata_diff.new_skip_indices, new_columns, getContext()); if (metadata_diff.constraints_changed) new_metadata.constraints = ConstraintsDescription::parse(metadata_diff.new_constraints); @@ -955,7 +970,7 @@ void StorageReplicatedMergeTree::setTableStructure( ParserTTLExpressionList parser; auto ttl_for_table_ast = parseQuery(parser, metadata_diff.new_ttl_table, 0, DBMS_DEFAULT_MAX_PARSER_DEPTH); new_metadata.table_ttl = TTLTableDescription::getTTLForTableFromAST( - ttl_for_table_ast, new_metadata.columns, global_context, new_metadata.primary_key); + ttl_for_table_ast, new_metadata.columns, getContext(), new_metadata.primary_key); } else /// TTL was removed { @@ -965,53 +980,50 @@ void StorageReplicatedMergeTree::setTableStructure( } /// Changes in columns may affect following metadata fields - if (new_metadata.columns != old_metadata.columns) + new_metadata.column_ttls_by_name.clear(); + for (const auto & [name, ast] : new_metadata.columns.getColumnTTLs()) { - new_metadata.column_ttls_by_name.clear(); - for (const auto & [name, ast] : new_metadata.columns.getColumnTTLs()) - { - auto new_ttl_entry = TTLDescription::getTTLFromAST(ast, new_metadata.columns, global_context, new_metadata.primary_key); - 
new_metadata.column_ttls_by_name[name] = new_ttl_entry; - } - - if (new_metadata.partition_key.definition_ast != nullptr) - new_metadata.partition_key.recalculateWithNewColumns(new_metadata.columns, global_context); - - if (!metadata_diff.sorting_key_changed) /// otherwise already updated - new_metadata.sorting_key.recalculateWithNewColumns(new_metadata.columns, global_context); - - /// Primary key is special, it exists even if not defined - if (new_metadata.primary_key.definition_ast != nullptr) - { - new_metadata.primary_key.recalculateWithNewColumns(new_metadata.columns, global_context); - } - else - { - new_metadata.primary_key = KeyDescription::getKeyFromAST(new_metadata.sorting_key.definition_ast, new_metadata.columns, global_context); - new_metadata.primary_key.definition_ast = nullptr; - } - - if (!metadata_diff.sampling_expression_changed && new_metadata.sampling_key.definition_ast != nullptr) - new_metadata.sampling_key.recalculateWithNewColumns(new_metadata.columns, global_context); - - if (!metadata_diff.skip_indices_changed) /// otherwise already updated - { - for (auto & index : new_metadata.secondary_indices) - index.recalculateWithNewColumns(new_metadata.columns, global_context); - } - - if (!metadata_diff.ttl_table_changed && new_metadata.table_ttl.definition_ast != nullptr) - new_metadata.table_ttl = TTLTableDescription::getTTLForTableFromAST( - new_metadata.table_ttl.definition_ast, new_metadata.columns, global_context, new_metadata.primary_key); + auto new_ttl_entry = TTLDescription::getTTLFromAST(ast, new_metadata.columns, getContext(), new_metadata.primary_key); + new_metadata.column_ttls_by_name[name] = new_ttl_entry; } + if (new_metadata.partition_key.definition_ast != nullptr) + new_metadata.partition_key.recalculateWithNewColumns(new_metadata.columns, getContext()); + + if (!metadata_diff.sorting_key_changed) /// otherwise already updated + new_metadata.sorting_key.recalculateWithNewColumns(new_metadata.columns, getContext()); + + /// Primary key is special, it exists even if not defined + if (new_metadata.primary_key.definition_ast != nullptr) + { + new_metadata.primary_key.recalculateWithNewColumns(new_metadata.columns, getContext()); + } + else + { + new_metadata.primary_key = KeyDescription::getKeyFromAST(new_metadata.sorting_key.definition_ast, new_metadata.columns, getContext()); + new_metadata.primary_key.definition_ast = nullptr; + } + + if (!metadata_diff.sampling_expression_changed && new_metadata.sampling_key.definition_ast != nullptr) + new_metadata.sampling_key.recalculateWithNewColumns(new_metadata.columns, getContext()); + + if (!metadata_diff.skip_indices_changed) /// otherwise already updated + { + for (auto & index : new_metadata.secondary_indices) + index.recalculateWithNewColumns(new_metadata.columns, getContext()); + } + + if (!metadata_diff.ttl_table_changed && new_metadata.table_ttl.definition_ast != nullptr) + new_metadata.table_ttl = TTLTableDescription::getTTLForTableFromAST( + new_metadata.table_ttl.definition_ast, new_metadata.columns, getContext(), new_metadata.primary_key); + /// Even if the primary/sorting/partition keys didn't change we must reinitialize it /// because primary/partition key column types might have changed. 
checkTTLExpressions(new_metadata, old_metadata); setProperties(new_metadata, old_metadata); auto table_id = getStorageID(); - DatabaseCatalog::instance().getDatabase(table_id.database_name)->alterTable(global_context, table_id, new_metadata); + DatabaseCatalog::instance().getDatabase(table_id.database_name)->alterTable(getContext(), table_id, new_metadata); } @@ -1343,6 +1355,48 @@ String StorageReplicatedMergeTree::getChecksumsForZooKeeper(const MergeTreeDataP getSettings()->use_minimalistic_checksums_in_zookeeper); } +MergeTreeData::MutableDataPartPtr StorageReplicatedMergeTree::attachPartHelperFoundValidPart(const LogEntry& entry) const +{ + const MergeTreePartInfo actual_part_info = MergeTreePartInfo::fromPartName(entry.new_part_name, format_version); + const String part_new_name = actual_part_info.getPartName(); + + for (const DiskPtr & disk : getStoragePolicy()->getDisks()) + for (const auto it = disk->iterateDirectory(relative_data_path + "detached/"); it->isValid(); it->next()) + { + MergeTreePartInfo part_info; + + if (!MergeTreePartInfo::tryParsePartName(it->name(), &part_info, format_version) || + part_info.partition_id != actual_part_info.partition_id) + continue; + + const String part_old_name = part_info.getPartName(); + const String part_path = "detached/" + part_old_name; + + const VolumePtr volume = std::make_shared("volume_" + part_old_name, disk); + + /// actual_part_info is more recent than part_info so we use it + MergeTreeData::MutableDataPartPtr part = createPart(part_new_name, actual_part_info, volume, part_path); + + try + { + part->loadColumnsChecksumsIndexes(true, true); + } + catch (const Exception&) + { + /// This method throws if the part data is corrupted or partly missing. In this case, we simply don't + /// process the part. + continue; + } + + if (entry.part_checksum == part->checksums.getTotalChecksumHex()) + { + part->modification_time = disk->getLastModified(part->getFullRelativePath()).epochTime(); + return part; + } + } + + return {}; +} bool StorageReplicatedMergeTree::executeLogEntry(LogEntry & entry) { @@ -1358,32 +1412,54 @@ bool StorageReplicatedMergeTree::executeLogEntry(LogEntry & entry) return true; } - if (entry.type == LogEntry::GET_PART || - entry.type == LogEntry::MERGE_PARTS || - entry.type == LogEntry::MUTATE_PART) + const bool is_get_or_attach = entry.type == LogEntry::GET_PART || entry.type == LogEntry::ATTACH_PART; + + if (is_get_or_attach || entry.type == LogEntry::MERGE_PARTS || entry.type == LogEntry::MUTATE_PART) { /// If we already have this part or a part covering it, we do not need to do anything. /// The part may be still in the PreCommitted -> Committed transition so we first search /// among PreCommitted parts to definitely find the desired part if it exists. DataPartPtr existing_part = getPartIfExists(entry.new_part_name, {MergeTreeDataPartState::PreCommitted}); + if (!existing_part) existing_part = getActiveContainingPart(entry.new_part_name); - /// Even if the part is locally, it (in exceptional cases) may not be in ZooKeeper. Let's check that it is there. + /// Even if the part is local, it (in exceptional cases) may not be in ZooKeeper. Let's check that it is there. 
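`attachPartHelperFoundValidPart` above walks the `detached/` directory, skips entries from other partitions, and accepts the first candidate whose total checksum matches the one recorded in the log entry. A rough stand-alone sketch of that scan (`parse_part_name` and `checksum_of` are toy stand-ins for the real part-name parsing and checksum loading):

```cpp
#include <filesystem>
#include <optional>
#include <string>

struct PartInfo { std::string partition_id; };

/// Toy parser: partition id is the prefix before the first '_' (assumption;
/// real part names carry block numbers and a level as well).
std::optional<PartInfo> parse_part_name(const std::string & name)
{
    auto pos = name.find('_');
    if (pos == std::string::npos)
        return std::nullopt;
    return PartInfo{name.substr(0, pos)};
}

/// Stub for loading a part and computing its total checksum; the real code
/// loads columns/checksums/indexes and may throw on corrupted leftovers,
/// in which case the candidate is simply skipped.
std::string checksum_of(const std::filesystem::path &) { return "deadbeef"; }

std::optional<std::filesystem::path> find_attachable_part(
    const std::filesystem::path & detached_dir,
    const PartInfo & wanted,
    const std::string & entry_checksum)
{
    for (const auto & dir_entry : std::filesystem::directory_iterator(detached_dir))
    {
        auto info = parse_part_name(dir_entry.path().filename().string());
        if (!info || info->partition_id != wanted.partition_id)
            continue;   // wrong partition: skip cheaply before any disk I/O

        if (checksum_of(dir_entry.path()) == entry_checksum)
            return dir_entry.path();   // byte-identical data found locally
    }
    return std::nullopt;   // nothing usable: fall back to fetching the part
}
```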
if (existing_part && getZooKeeper()->exists(replica_path + "/parts/" + existing_part->name)) { - if (!(entry.type == LogEntry::GET_PART && entry.source_replica == replica_name)) - { - LOG_DEBUG(log, "Skipping action for part {} because part {} already exists.", entry.new_part_name, existing_part->name); - } + if (!is_get_or_attach || entry.source_replica != replica_name) + LOG_DEBUG(log, "Skipping action for part {} because part {} already exists.", + entry.new_part_name, existing_part->name); + return true; } } - if (entry.type == LogEntry::GET_PART && entry.source_replica == replica_name) + if (entry.type == LogEntry::ATTACH_PART) + { + if (MutableDataPartPtr part = attachPartHelperFoundValidPart(entry); part) + { + LOG_TRACE(log, "Found valid part to attach from local data, preparing the transaction"); + + Transaction transaction(*this); + + renameTempPartAndReplace(part, nullptr, &transaction); + checkPartChecksumsAndCommit(transaction, part); + + writePartLog(PartLogElement::Type::NEW_PART, {}, 0 /** log entry is fake so we don't measure the time */, + part->name, part, {} /** log entry is fake so there are no initial parts */, nullptr); + + return true; + } + + LOG_TRACE(log, "Didn't find part with the correct checksums, will fetch it from other replica"); + } + + if (is_get_or_attach && entry.source_replica == replica_name) LOG_WARNING(log, "Part {} from own log doesn't exist.", entry.new_part_name); - /// Perhaps we don't need this part, because during write with quorum, the quorum has failed (see below about `/quorum/failed_parts`). + /// Perhaps we don't need this part, because during write with quorum, the quorum has failed + /// (see below about `/quorum/failed_parts`). if (entry.quorum && getZooKeeper()->exists(zookeeper_path + "/quorum/failed_parts/" + entry.new_part_name)) { LOG_DEBUG(log, "Skipping action for part {} because quorum for that part was failed.", entry.new_part_name); @@ -1391,28 +1467,28 @@ bool StorageReplicatedMergeTree::executeLogEntry(LogEntry & entry) } bool do_fetch = false; - if (entry.type == LogEntry::GET_PART) + + switch (entry.type) { - do_fetch = true; - } - else if (entry.type == LogEntry::MERGE_PARTS) - { - /// Sometimes it's better to fetch merged part instead of merge - /// For example when we don't have all source parts for merge - do_fetch = !tryExecuteMerge(entry); - } - else if (entry.type == LogEntry::MUTATE_PART) - { - /// Sometimes it's better to fetch mutated part instead of merge - do_fetch = !tryExecutePartMutation(entry); - } - else if (entry.type == LogEntry::ALTER_METADATA) - { - return executeMetadataAlter(entry); - } - else - { - throw Exception("Unexpected log entry type: " + toString(static_cast(entry.type)), ErrorCodes::LOGICAL_ERROR); + case LogEntry::ATTACH_PART: + /// We surely don't have this part locally as we've checked it before, so download it. + [[fallthrough]]; + case LogEntry::GET_PART: + do_fetch = true; + break; + case LogEntry::MERGE_PARTS: + /// Sometimes it's better to fetch the merged part instead of merging, + /// e.g when we don't have all the source parts. + do_fetch = !tryExecuteMerge(entry); + break; + case LogEntry::MUTATE_PART: + /// Sometimes it's better to fetch mutated part instead of merging. 
+ do_fetch = !tryExecutePartMutation(entry); + break; + case LogEntry::ALTER_METADATA: + return executeMetadataAlter(entry); + default: + throw Exception(ErrorCodes::LOGICAL_ERROR, "Unexpected log entry type: {}", static_cast<int>(entry.type)); } if (do_fetch) @@ -1423,7 +1499,8 @@ bool StorageReplicatedMergeTree::executeLogEntry(LogEntry & entry) bool StorageReplicatedMergeTree::tryExecuteMerge(const LogEntry & entry) { - LOG_TRACE(log, "Executing log entry to merge parts {} to {}", boost::algorithm::join(entry.source_parts, ", "), entry.new_part_name); + LOG_TRACE(log, "Executing log entry to merge parts {} to {}", + fmt::join(entry.source_parts, ", "), entry.new_part_name); const auto storage_settings_ptr = getSettings(); @@ -1439,50 +1516,64 @@ bool StorageReplicatedMergeTree::tryExecuteMerge(const LogEntry & entry) { LOG_INFO(log, "Will try to fetch part {} until '{}' because this part is assigned to recompression merge. " "Source replica {} will try to merge this part first", entry.new_part_name, - LocalDateTime(entry.create_time + storage_settings_ptr->try_fetch_recompressed_part_timeout.totalSeconds()), entry.source_replica); + DateLUT::instance().timeToString(entry.create_time + storage_settings_ptr->try_fetch_recompressed_part_timeout.totalSeconds()), entry.source_replica); return false; } /// In some use cases merging can be more expensive than fetching /// and it may be better to spread merge tasks across the replicas /// instead of doing exactly the same merge cluster-wise + std::optional<String> replica_to_execute_merge; + bool replica_to_execute_merge_picked = false; + if (merge_strategy_picker.shouldMergeOnSingleReplica(entry)) { - auto replica_to_execute_merge = merge_strategy_picker.pickReplicaToExecuteMerge(entry); + replica_to_execute_merge = merge_strategy_picker.pickReplicaToExecuteMerge(entry); + replica_to_execute_merge_picked = true; if (replica_to_execute_merge) { - LOG_DEBUG(log, "Prefer fetching part {} from replica {} due execute_merges_on_single_replica_time_threshold", entry.new_part_name, replica_to_execute_merge.value()); + LOG_DEBUG(log, + "Prefer fetching part {} from replica {} due to execute_merges_on_single_replica_time_threshold", + entry.new_part_name, replica_to_execute_merge.value()); + return false; } } DataPartsVector parts; - bool have_all_parts = true; - for (const String & name : entry.source_parts) + + for (const String & source_part_name : entry.source_parts) { - DataPartPtr part = getActiveContainingPart(name); - if (!part) + DataPartPtr source_part_or_covering = getActiveContainingPart(source_part_name); + + if (!source_part_or_covering) { - have_all_parts = false; - break; + /// We do not have one of the source parts locally, try to take some already merged part from someone. + LOG_DEBUG(log, "Don't have all parts for merge {}; will try to fetch it instead", entry.new_part_name); + return false; } - if (part->name != name) + + if (source_part_or_covering->name != source_part_name) { - LOG_WARNING(log, "Part {} is covered by {} but should be merged into {}. This shouldn't happen often.", name, part->name, entry.new_part_name); - have_all_parts = false; - break; + /// We do not have source part locally, but we have some covering part. Possible options: + /// 1. We already have merged part (source_part_or_covering->name == new_part_name) + /// 2. We have some larger merged part which covers new_part_name (and therefore it covers source_part_name too) + /// 3. We have two intersecting parts, both cover source_part_name. It's a logical error.
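The new `LOGICAL_ERROR` check in this hunk relies on part containment: a covering part must include the whole block range of the part it claims to cover. A toy model of that containment test, keeping only the block numbers (real part names also encode partition id and level):

```cpp
#include <cassert>

struct BlockRange
{
    long min_block;
    long max_block;

    /// A part covers another if its block range includes the other's range.
    bool contains(const BlockRange & other) const
    {
        return min_block <= other.min_block && other.max_block <= max_block;
    }
};

int main()
{
    BlockRange covering{0, 10}, source{3, 5}, target{2, 7};

    assert(covering.contains(source));   // fine: options 1/2 from the comment above
    assert(covering.contains(target));   // a true covering part also covers the merge target

    BlockRange intersecting{4, 12};      // overlaps `target` without containing it:
    assert(!intersecting.contains(target));   // option 3, treated as a logical error
    return 0;
}
```

This is why the code only throws when the covering part fails to contain `new_part_name`: in the benign cases the covering part subsumes the whole merge and fetching (or doing nothing) is safe.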
+ /// TODO Why 1 and 2 can happen? Do we need more assertions here or somewhere else? + constexpr const char * message = "Part {} is covered by {} but should be merged into {}. This shouldn't happen often."; + LOG_WARNING(log, message, source_part_name, source_part_or_covering->name, entry.new_part_name); + if (!source_part_or_covering->info.contains(MergeTreePartInfo::fromPartName(entry.new_part_name, format_version))) + throw Exception(ErrorCodes::LOGICAL_ERROR, message, source_part_name, source_part_or_covering->name, entry.new_part_name); + return false; } - parts.push_back(part); + + parts.push_back(source_part_or_covering); } - if (!have_all_parts) - { - /// If you do not have all the necessary parts, try to take some already merged part from someone. - LOG_DEBUG(log, "Don't have all parts for merge {}; will try to fetch it instead", entry.new_part_name); - return false; - } - else if (entry.create_time + storage_settings_ptr->prefer_fetch_merged_part_time_threshold.totalSeconds() <= time(nullptr)) + /// All source parts are found locally, we can execute merge + + if (entry.create_time + storage_settings_ptr->prefer_fetch_merged_part_time_threshold.totalSeconds() <= time(nullptr)) { /// If entry is old enough, and have enough size, and part are exists in any replica, /// then prefer fetching of merged part from replica. @@ -1516,26 +1607,60 @@ bool StorageReplicatedMergeTree::tryExecuteMerge(const LogEntry & entry) auto table_lock = lockForShare(RWLockImpl::NO_QUERY, storage_settings_ptr->lock_acquire_timeout_for_background_operations); StorageMetadataPtr metadata_snapshot = getInMemoryMetadataPtr(); - ReservationPtr reserved_space = reserveSpacePreferringTTLRules( - metadata_snapshot, estimated_space_for_merge, ttl_infos, time(nullptr), max_volume_index); - FutureMergedMutatedPart future_merged_part(parts, entry.new_part_type); if (future_merged_part.name != entry.new_part_name) { throw Exception("Future merged part name " + backQuote(future_merged_part.name) + " differs from part name in log entry: " + backQuote(entry.new_part_name), ErrorCodes::BAD_DATA_PART_NAME); } + + std::optional tagger; + ReservationPtr reserved_space = balancedReservation( + metadata_snapshot, + estimated_space_for_merge, + max_volume_index, + future_merged_part.name, + future_merged_part.part_info, + future_merged_part.parts, + &tagger, + &ttl_infos); + + if (!reserved_space) + reserved_space + = reserveSpacePreferringTTLRules(metadata_snapshot, estimated_space_for_merge, ttl_infos, time(nullptr), max_volume_index); + future_merged_part.uuid = entry.new_part_uuid; future_merged_part.updatePath(*this, reserved_space); future_merged_part.merge_type = entry.merge_type; + if (storage_settings_ptr->allow_s3_zero_copy_replication) + { + if (auto disk = reserved_space->getDisk(); disk->getType() == DB::DiskType::Type::S3) + { + if (merge_strategy_picker.shouldMergeOnSingleReplicaS3Shared(entry)) + { + if (!replica_to_execute_merge_picked) + replica_to_execute_merge = merge_strategy_picker.pickReplicaToExecuteMerge(entry); + + if (replica_to_execute_merge) + { + LOG_DEBUG(log, + "Prefer fetching part {} from replica {} due s3_execute_merges_on_single_replica_time_threshold", + entry.new_part_name, replica_to_execute_merge.value()); + return false; + } + } + } + } + /// Account TTL merge if (isTTLMergeType(future_merged_part.merge_type)) - global_context.getMergeList().bookMergeWithTTL(); + getContext()->getMergeList().bookMergeWithTTL(); auto table_id = getStorageID(); + /// Add merge to list - MergeList::EntryPtr 
merge_entry = global_context.getMergeList().insert(table_id.database_name, table_id.table_name, future_merged_part); + MergeList::EntryPtr merge_entry = getContext()->getMergeList().insert(table_id.database_name, table_id.table_name, future_merged_part); Transaction transaction(*this); MutableDataPartPtr part; @@ -1553,7 +1678,7 @@ bool StorageReplicatedMergeTree::tryExecuteMerge(const LogEntry & entry) { part = merger_mutator.mergePartsToTemporaryPart( future_merged_part, metadata_snapshot, *merge_entry, - table_lock, entry.create_time, global_context, reserved_space, entry.deduplicate, entry.deduplicate_by_columns); + table_lock, entry.create_time, getContext(), reserved_space, entry.deduplicate, entry.deduplicate_by_columns); merger_mutator.renameMergedTemporaryPart(part, parts, &transaction); @@ -1569,7 +1694,16 @@ bool StorageReplicatedMergeTree::tryExecuteMerge(const LogEntry & entry) ProfileEvents::increment(ProfileEvents::DataAfterMergeDiffersFromReplica); - LOG_ERROR(log, "{}. Data after merge is not byte-identical to data on another replicas. There could be several reasons: 1. Using newer version of compression library after server update. 2. Using another compression method. 3. Non-deterministic compression algorithm (highly unlikely). 4. Non-deterministic merge algorithm due to logical error in code. 5. Data corruption in memory due to bug in code. 6. Data corruption in memory due to hardware issue. 7. Manual modification of source data after server startup. 8. Manual modification of checksums stored in ZooKeeper. 9. Part format related settings like 'enable_mixed_granularity_parts' are different on different replicas. We will download merged part from replica to force byte-identical result.", getCurrentExceptionMessage(false)); + LOG_ERROR(log, + "{}. Data after merge is not byte-identical to data on another replicas. There could be several" + " reasons: 1. Using newer version of compression library after server update. 2. Using another" + " compression method. 3. Non-deterministic compression algorithm (highly unlikely). 4." + " Non-deterministic merge algorithm due to logical error in code. 5. Data corruption in memory due" + " to bug in code. 6. Data corruption in memory due to hardware issue. 7. Manual modification of" + " source data after server startup. 8. Manual modification of checksums stored in ZooKeeper. 9." + " Part format related settings like 'enable_mixed_granularity_parts' are different on different" + " replicas. We will download merged part from replica to force byte-identical result.", + getCurrentExceptionMessage(false)); write_part_log(ExecutionStatus::fromCurrentException()); @@ -1618,9 +1752,10 @@ bool StorageReplicatedMergeTree::tryExecutePartMutation(const StorageReplicatedM if (source_part->name != source_part_name) { - throw Exception("Part " + source_part_name + " is covered by " + source_part->name - + " but should be mutated to " + entry.new_part_name + ". This is a bug.", - ErrorCodes::LOGICAL_ERROR); + LOG_WARNING(log, "Part " + source_part_name + " is covered by " + source_part->name + + " but should be mutated to " + entry.new_part_name + ". " + + "Possibly the mutation of this part is not needed and will be skipped. This shouldn't happen often."); + return false; } /// TODO - some better heuristic? 
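Several branches in `tryExecuteMerge`/`tryExecutePartMutation` prefer downloading the finished part over redoing the work locally once a log entry is old and large enough. A sketch of that age/size heuristic (setting names mirror the ones used above; the real code additionally checks that some replica actually has the merged part):

```cpp
#include <ctime>

struct LogEntryLite
{
    std::time_t create_time;        // when the merge/mutation was assigned
    unsigned long estimated_bytes;  // estimated size of the result
};

bool prefer_fetch(const LogEntryLite & entry,
                  std::time_t prefer_fetch_time_threshold_sec,
                  unsigned long prefer_fetch_size_threshold_bytes)
{
    /// Old enough and big enough: some replica has probably produced the
    /// result already, so downloading it is cheaper than redoing the work.
    const bool old_enough =
        entry.create_time + prefer_fetch_time_threshold_sec <= std::time(nullptr);
    const bool big_enough = entry.estimated_bytes >= prefer_fetch_size_threshold_bytes;
    return old_enough && big_enough;
}
```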
@@ -1663,7 +1798,7 @@ bool StorageReplicatedMergeTree::tryExecutePartMutation(const StorageReplicatedM future_mutated_part.type = source_part->getType(); auto table_id = getStorageID(); - MergeList::EntryPtr merge_entry = global_context.getMergeList().insert( + MergeList::EntryPtr merge_entry = getContext()->getMergeList().insert( table_id.database_name, table_id.table_name, future_mutated_part); Stopwatch stopwatch; @@ -1679,7 +1814,7 @@ bool StorageReplicatedMergeTree::tryExecutePartMutation(const StorageReplicatedM { new_part = merger_mutator.mutatePartToTemporaryPart( future_mutated_part, metadata_snapshot, commands, *merge_entry, - entry.create_time, global_context, reserved_space, table_lock); + entry.create_time, getContext(), reserved_space, table_lock); renameTempPartAndReplace(new_part, nullptr, &transaction); try @@ -1732,22 +1867,18 @@ bool StorageReplicatedMergeTree::executeFetch(LogEntry & entry) const auto storage_settings_ptr = getSettings(); auto metadata_snapshot = getInMemoryMetadataPtr(); - static std::atomic_uint total_fetches {0}; - if (storage_settings_ptr->replicated_max_parallel_fetches && total_fetches >= storage_settings_ptr->replicated_max_parallel_fetches) - { - throw Exception("Too many total fetches from replicas, maximum: " + storage_settings_ptr->replicated_max_parallel_fetches.toString(), - ErrorCodes::TOO_MANY_FETCHES); - } + if (storage_settings_ptr->replicated_max_parallel_fetches && + total_fetches >= storage_settings_ptr->replicated_max_parallel_fetches) + throw Exception(ErrorCodes::TOO_MANY_FETCHES, "Too many total fetches from replicas, maximum: {} ", + storage_settings_ptr->replicated_max_parallel_fetches.toString()); ++total_fetches; SCOPE_EXIT({--total_fetches;}); if (storage_settings_ptr->replicated_max_parallel_fetches_for_table && current_table_fetches >= storage_settings_ptr->replicated_max_parallel_fetches_for_table) - { - throw Exception("Too many fetches from replicas for table, maximum: " + storage_settings_ptr->replicated_max_parallel_fetches_for_table.toString(), - ErrorCodes::TOO_MANY_FETCHES); - } + throw Exception(ErrorCodes::TOO_MANY_FETCHES, "Too many fetches from replicas for table, maximum: {}", + storage_settings_ptr->replicated_max_parallel_fetches_for_table.toString()); ++current_table_fetches; SCOPE_EXIT({--current_table_fetches;}); @@ -1920,6 +2051,57 @@ bool StorageReplicatedMergeTree::executeFetch(LogEntry & entry) } +bool StorageReplicatedMergeTree::executeFetchShared( + const String & source_replica, + const String & new_part_name, + const DiskPtr & disk, + const String & path) +{ + if (source_replica.empty()) + { + LOG_INFO(log, "No active replica has part {} on S3.", new_part_name); + return false; + } + + const auto storage_settings_ptr = getSettings(); + auto metadata_snapshot = getInMemoryMetadataPtr(); + + if (storage_settings_ptr->replicated_max_parallel_fetches && total_fetches >= storage_settings_ptr->replicated_max_parallel_fetches) + { + throw Exception("Too many total fetches from replicas, maximum: " + storage_settings_ptr->replicated_max_parallel_fetches.toString(), + ErrorCodes::TOO_MANY_FETCHES); + } + + ++total_fetches; + SCOPE_EXIT({--total_fetches;}); + + if (storage_settings_ptr->replicated_max_parallel_fetches_for_table + && current_table_fetches >= storage_settings_ptr->replicated_max_parallel_fetches_for_table) + { + throw Exception("Too many fetches from replicas for table, maximum: " + storage_settings_ptr->replicated_max_parallel_fetches_for_table.toString(), + ErrorCodes::TOO_MANY_FETCHES); + 
} + + ++current_table_fetches; + SCOPE_EXIT({--current_table_fetches;}); + + try + { + if (!fetchExistsPart(new_part_name, metadata_snapshot, zookeeper_path + "/replicas/" + source_replica, disk, path)) + return false; + } + catch (Exception & e) + { + if (e.code() == ErrorCodes::RECEIVED_ERROR_TOO_MANY_REQUESTS) + e.addMessage("Too busy replica. Will try later."); + tryLogCurrentException(log, __PRETTY_FUNCTION__); + throw; + } + + return true; +} + + void StorageReplicatedMergeTree::executeDropRange(const LogEntry & entry) { auto drop_range_info = MergeTreePartInfo::fromPartName(entry.new_part_name, format_version); @@ -1982,12 +2164,20 @@ bool StorageReplicatedMergeTree::executeReplaceRange(const LogEntry & entry) struct PartDescription { - PartDescription(size_t index_, const String & src_part_name_, const String & new_part_name_, const String & checksum_hex_, - MergeTreeDataFormatVersion format_version) - : index(index_), - src_part_name(src_part_name_), src_part_info(MergeTreePartInfo::fromPartName(src_part_name_, format_version)), - new_part_name(new_part_name_), new_part_info(MergeTreePartInfo::fromPartName(new_part_name_, format_version)), - checksum_hex(checksum_hex_) {} + PartDescription( + size_t index_, + const String & src_part_name_, + const String & new_part_name_, + const String & checksum_hex_, + MergeTreeDataFormatVersion format_version) + : index(index_) + , src_part_name(src_part_name_) + , src_part_info(MergeTreePartInfo::fromPartName(src_part_name_, format_version)) + , new_part_name(new_part_name_) + , new_part_info(MergeTreePartInfo::fromPartName(new_part_name_, format_version)) + , checksum_hex(checksum_hex_) + { + } size_t index; // in log entry arrays String src_part_name; @@ -2062,7 +2252,7 @@ bool StorageReplicatedMergeTree::executeReplaceRange(const LogEntry & entry) auto clone_data_parts_from_source_table = [&] () -> size_t { - source_table = DatabaseCatalog::instance().tryGetTable(source_table_id, global_context); + source_table = DatabaseCatalog::instance().tryGetTable(source_table_id, getContext()); if (!source_table) { LOG_DEBUG(log, "Can't use {} as source table for REPLACE PARTITION command. 
It does not exist.", source_table_id.getNameForLogs()); @@ -2218,16 +2408,17 @@ bool StorageReplicatedMergeTree::executeReplaceRange(const LogEntry & entry) { String source_replica_path = zookeeper_path + "/replicas/" + part_desc->replica; ReplicatedMergeTreeAddress address(getZooKeeper()->get(source_replica_path + "/host")); - auto timeouts = ConnectionTimeouts::getHTTPTimeouts(global_context); - auto [user, password] = global_context.getInterserverCredentials(); - String interserver_scheme = global_context.getInterserverScheme(); + auto timeouts = getFetchPartHTTPTimeouts(getContext()); + + auto credentials = getContext()->getInterserverCredentials(); + String interserver_scheme = getContext()->getInterserverScheme(); if (interserver_scheme != address.scheme) throw Exception("Interserver schemas are different '" + interserver_scheme + "' != '" + address.scheme + "', can't fetch part from " + address.host, ErrorCodes::LOGICAL_ERROR); part_desc->res_part = fetcher.fetchPart( metadata_snapshot, part_desc->found_new_part_name, source_replica_path, - address.host, address.replication_port, timeouts, user, password, interserver_scheme, false, TMP_PREFIX + "fetch_"); + address.host, address.replication_port, timeouts, credentials->getUser(), credentials->getPassword(), interserver_scheme, false, TMP_PREFIX + "fetch_"); /// TODO: check columns_version of fetched part @@ -2276,11 +2467,11 @@ bool StorageReplicatedMergeTree::executeReplaceRange(const LogEntry & entry) parts_to_remove = removePartsInRangeFromWorkingSet(drop_range, true, false, data_parts_lock); } - PartLog::addNewParts(global_context, res_parts, watch.elapsed()); + PartLog::addNewParts(getContext(), res_parts, watch.elapsed()); } catch (...) { - PartLog::addNewParts(global_context, res_parts, watch.elapsed(), ExecutionStatus::fromCurrentException()); + PartLog::addNewParts(getContext(), res_parts, watch.elapsed(), ExecutionStatus::fromCurrentException()); throw; } @@ -2937,7 +3128,6 @@ StorageReplicatedMergeTree::CreateMergeEntryResult StorageReplicatedMergeTree::c entry.merge_type = merge_type; entry.deduplicate = deduplicate; entry.deduplicate_by_columns = deduplicate_by_columns; - entry.merge_type = merge_type; entry.create_time = time(nullptr); for (const auto & part : parts) @@ -3115,7 +3305,7 @@ void StorageReplicatedMergeTree::enterLeaderElection() try { leader_election = std::make_shared( - global_context.getSchedulePool(), + getContext()->getSchedulePool(), zookeeper_path + "/leader_election", *current_zookeeper, /// current_zookeeper lives for the lifetime of leader_election, /// since before changing `current_zookeeper`, `leader_election` object is destroyed in `partialShutdown` method. 
@@ -3151,6 +3341,23 @@ void StorageReplicatedMergeTree::exitLeaderElection() leader_election = nullptr; } +ConnectionTimeouts StorageReplicatedMergeTree::getFetchPartHTTPTimeouts(ContextPtr local_context) +{ + auto timeouts = ConnectionTimeouts::getHTTPTimeouts(local_context); + auto settings = getSettings(); + + if (settings->replicated_fetches_http_connection_timeout.changed) + timeouts.connection_timeout = settings->replicated_fetches_http_connection_timeout; + + if (settings->replicated_fetches_http_send_timeout.changed) + timeouts.send_timeout = settings->replicated_fetches_http_send_timeout; + + if (settings->replicated_fetches_http_receive_timeout.changed) + timeouts.receive_timeout = settings->replicated_fetches_http_receive_timeout; + + return timeouts; +} + bool StorageReplicatedMergeTree::checkReplicaHavePart(const String & replica, const String & part_name) { auto zookeeper = getZooKeeper(); @@ -3165,12 +3372,16 @@ String StorageReplicatedMergeTree::findReplicaHavingPart(const String & part_nam /// Select replicas in uniformly random order. std::shuffle(replicas.begin(), replicas.end(), thread_local_rng); + LOG_TRACE(log, "Candidate replicas: {}", replicas.size()); + for (const String & replica : replicas) { - /// We don't interested in ourself. + /// We aren't interested in ourself. if (replica == replica_name) continue; + LOG_TRACE(log, "Candidate replica: {}", replica); + if (checkReplicaHavePart(replica, part_name) && (!active || zookeeper->exists(zookeeper_path + "/replicas/" + replica + "/is_active"))) return replica; @@ -3181,7 +3392,6 @@ String StorageReplicatedMergeTree::findReplicaHavingPart(const String & part_nam return {}; } - String StorageReplicatedMergeTree::findReplicaHavingCoveringPart(LogEntry & entry, bool active) { auto zookeeper = getZooKeeper(); @@ -3443,6 +3653,7 @@ bool StorageReplicatedMergeTree::partIsInsertingWithParallelQuorum(const MergeTr return zookeeper->exists(zookeeper_path + "/quorum/parallel/" + part_info.getPartName()); } + bool StorageReplicatedMergeTree::partIsLastQuorumPart(const MergeTreePartInfo & part_info) const { auto zookeeper = getZooKeeper(); @@ -3464,6 +3675,7 @@ bool StorageReplicatedMergeTree::partIsLastQuorumPart(const MergeTreePartInfo & return partition_it->second == part_info.getPartName(); } + bool StorageReplicatedMergeTree::fetchPart(const String & part_name, const StorageMetadataPtr & metadata_snapshot, const String & source_replica_path, bool to_detached, size_t quorum, zkutil::ZooKeeper::Ptr zookeeper_) { @@ -3490,7 +3702,7 @@ bool StorageReplicatedMergeTree::fetchPart(const String & part_name, const Stora } } - SCOPE_EXIT + SCOPE_EXIT_MEMORY ({ std::lock_guard lock(currently_fetching_parts_mutex); currently_fetching_parts.erase(part_name); @@ -3549,7 +3761,13 @@ bool StorageReplicatedMergeTree::fetchPart(const String & part_name, const Stora } + ReplicatedMergeTreeAddress address; + ConnectionTimeouts timeouts; + String interserver_scheme; + InterserverCredentialsPtr credentials; + std::optional tagger_ptr; std::function get_part; + if (part_to_clone) { get_part = [&, part_to_clone]() @@ -3559,12 +3777,13 @@ bool StorageReplicatedMergeTree::fetchPart(const String & part_name, const Stora } else { - ReplicatedMergeTreeAddress address(zookeeper->get(source_replica_path + "/host")); - auto timeouts = ConnectionTimeouts::getHTTPTimeouts(global_context); - auto user_password = global_context.getInterserverCredentials(); - String interserver_scheme = global_context.getInterserverScheme(); + 
address.fromString(zookeeper->get(source_replica_path + "/host")); + timeouts = getFetchPartHTTPTimeouts(getContext()); - get_part = [&, address, timeouts, user_password, interserver_scheme]() + credentials = getContext()->getInterserverCredentials(); + interserver_scheme = getContext()->getInterserverScheme(); + + get_part = [&, address, timeouts, credentials, interserver_scheme]() { if (interserver_scheme != address.scheme) throw Exception("Interserver schemes are different: '" + interserver_scheme @@ -3572,9 +3791,19 @@ bool StorageReplicatedMergeTree::fetchPart(const String & part_name, const Stora ErrorCodes::INTERSERVER_SCHEME_DOESNT_MATCH); return fetcher.fetchPart( - metadata_snapshot, part_name, source_replica_path, - address.host, address.replication_port, - timeouts, user_password.first, user_password.second, interserver_scheme, to_detached); + metadata_snapshot, + part_name, + source_replica_path, + address.host, + address.replication_port, + timeouts, + credentials->getUser(), + credentials->getPassword(), + interserver_scheme, + to_detached, + "", + &tagger_ptr, + true); }; } @@ -3587,11 +3816,6 @@ bool StorageReplicatedMergeTree::fetchPart(const String & part_name, const Stora Transaction transaction(*this); renameTempPartAndReplace(part, nullptr, &transaction); - /** NOTE - * Here, an error occurs if ALTER occurred with a change in the column type or column deletion, - * and the part on remote server has not yet been modified. - * After a while, one of the following attempts to make `fetchPart` succeed. - */ replaced_parts = checkPartChecksumsAndCommit(transaction, part); /** If a quorum is tracked for this part, you must update it. @@ -3663,6 +3887,104 @@ bool StorageReplicatedMergeTree::fetchPart(const String & part_name, const Stora } +bool StorageReplicatedMergeTree::fetchExistsPart(const String & part_name, const StorageMetadataPtr & metadata_snapshot, + const String & source_replica_path, DiskPtr replaced_disk, String replaced_part_path) +{ + auto zookeeper = getZooKeeper(); + const auto part_info = MergeTreePartInfo::fromPartName(part_name, format_version); + + if (auto part = getPartIfExists(part_info, {IMergeTreeDataPart::State::Outdated, IMergeTreeDataPart::State::Deleting})) + { + LOG_DEBUG(log, "Part {} should be deleted after previous attempt before fetch", part->name); + /// Force immediate parts cleanup to delete the part that was left from the previous fetch attempt. 
+ cleanup_thread.wakeup(); + return false; + } + + { + std::lock_guard lock(currently_fetching_parts_mutex); + if (!currently_fetching_parts.insert(part_name).second) + { + LOG_DEBUG(log, "Part {} is already fetching right now", part_name); + return false; + } + } + + SCOPE_EXIT_MEMORY + ({ + std::lock_guard lock(currently_fetching_parts_mutex); + currently_fetching_parts.erase(part_name); + }); + + LOG_DEBUG(log, "Fetching part {} from {}", part_name, source_replica_path); + + TableLockHolder table_lock_holder = lockForShare(RWLockImpl::NO_QUERY, getSettings()->lock_acquire_timeout_for_background_operations); + + /// Logging + Stopwatch stopwatch; + MutableDataPartPtr part; + DataPartsVector replaced_parts; + + auto write_part_log = [&] (const ExecutionStatus & execution_status) + { + writePartLog( + PartLogElement::DOWNLOAD_PART, execution_status, stopwatch.elapsed(), + part_name, part, replaced_parts, nullptr); + }; + + std::function get_part; + + ReplicatedMergeTreeAddress address(zookeeper->get(source_replica_path + "/host")); + auto timeouts = ConnectionTimeouts::getHTTPTimeouts(getContext()); + auto credentials = getContext()->getInterserverCredentials(); + String interserver_scheme = getContext()->getInterserverScheme(); + + get_part = [&, address, timeouts, interserver_scheme, credentials]() + { + if (interserver_scheme != address.scheme) + throw Exception("Interserver schemes are different: '" + interserver_scheme + + "' != '" + address.scheme + "', can't fetch part from " + address.host, + ErrorCodes::INTERSERVER_SCHEME_DOESNT_MATCH); + + return fetcher.fetchPart( + metadata_snapshot, part_name, source_replica_path, + address.host, address.replication_port, + timeouts, credentials->getUser(), credentials->getPassword(), interserver_scheme, false, "", nullptr, true, + replaced_disk); + }; + + try + { + part = get_part(); + + if (part->volume->getDisk()->getName() != replaced_disk->getName()) + throw Exception("Part " + part->name + " fetched on wrong disk " + part->volume->getDisk()->getName(), ErrorCodes::LOGICAL_ERROR); + replaced_disk->removeFileIfExists(replaced_part_path); + replaced_disk->moveDirectory(part->getFullRelativePath(), replaced_part_path); + } + catch (const Exception & e) + { + /// The same part is being written right now (but probably it's not committed yet). + /// We will check the need for fetch later. + if (e.code() == ErrorCodes::DIRECTORY_ALREADY_EXISTS) + return false; + + throw; + } + catch (...) + { + write_part_log(ExecutionStatus::fromCurrentException()); + throw; + } + + ProfileEvents::increment(ProfileEvents::ReplicatedPartFetches); + + LOG_DEBUG(log, "Fetched part {} from {}", part_name, source_replica_path); + + return true; +} + + void StorageReplicatedMergeTree::startup() { if (is_readonly) @@ -3675,7 +3997,7 @@ void StorageReplicatedMergeTree::startup() InterserverIOEndpointPtr data_parts_exchange_ptr = std::make_shared(*this); [[maybe_unused]] auto prev_ptr = std::atomic_exchange(&data_parts_exchange_endpoint, data_parts_exchange_ptr); assert(prev_ptr == nullptr); - global_context.getInterserverIOHandler().addEndpoint(data_parts_exchange_ptr->getId(replica_path), data_parts_exchange_ptr); + getContext()->getInterserverIOHandler().addEndpoint(data_parts_exchange_ptr->getId(replica_path), data_parts_exchange_ptr); /// In this thread replica will be activated. 
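Both fetch paths also deduplicate concurrent downloads of the same part through the `currently_fetching_parts` set: `insert().second` tells whether we are first, and a scope-exit erases the name afterwards. A stand-alone version of that idiom:

```cpp
#include <mutex>
#include <set>
#include <string>

std::mutex currently_fetching_mutex;
std::set<std::string> currently_fetching;

bool start_fetch(const std::string & part_name)
{
    std::lock_guard lock(currently_fetching_mutex);
    /// insert().second is false when the name is already present,
    /// i.e. another thread is fetching this very part right now.
    return currently_fetching.insert(part_name).second;
}

void finish_fetch(const std::string & part_name)
{
    std::lock_guard lock(currently_fetching_mutex);
    currently_fetching.erase(part_name);   // the real code does this via SCOPE_EXIT_MEMORY
}
```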
restarting_thread.start(); @@ -3730,7 +4052,7 @@ void StorageReplicatedMergeTree::shutdown() auto data_parts_exchange_ptr = std::atomic_exchange(&data_parts_exchange_endpoint, InterserverIOEndpointPtr{}); if (data_parts_exchange_ptr) { - global_context.getInterserverIOHandler().removeEndpointIfExists(data_parts_exchange_ptr->getId(replica_path)); + getContext()->getInterserverIOHandler().removeEndpointIfExists(data_parts_exchange_ptr->getId(replica_path)); /// Ask all parts exchange handlers to finish asap. New ones will fail to start data_parts_exchange_ptr->blocker.cancelForever(); /// Wait for all of them @@ -3816,7 +4138,7 @@ void StorageReplicatedMergeTree::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr local_context, QueryProcessingStage::Enum /*processed_stage*/, const size_t max_block_size, const unsigned num_streams) @@ -3826,15 +4148,15 @@ void StorageReplicatedMergeTree::read( * 2. Do not read parts that have not yet been written to the quorum of the replicas. * For this you have to synchronously go to ZooKeeper. */ - if (context.getSettingsRef().select_sequential_consistency) + if (local_context->getSettingsRef().select_sequential_consistency) { auto max_added_blocks = getMaxAddedBlocks(); - if (auto plan = reader.read(column_names, metadata_snapshot, query_info, context, max_block_size, num_streams, &max_added_blocks)) + if (auto plan = reader.read(column_names, metadata_snapshot, query_info, local_context, max_block_size, num_streams, &max_added_blocks)) query_plan = std::move(*plan); return; } - if (auto plan = reader.read(column_names, metadata_snapshot, query_info, context, max_block_size, num_streams)) + if (auto plan = reader.read(column_names, metadata_snapshot, query_info, local_context, max_block_size, num_streams)) query_plan = std::move(*plan); } @@ -3842,14 +4164,16 @@ Pipe StorageReplicatedMergeTree::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr local_context, QueryProcessingStage::Enum processed_stage, const size_t max_block_size, const unsigned num_streams) { QueryPlan plan; - read(plan, column_names, metadata_snapshot, query_info, context, processed_stage, max_block_size, num_streams); - return plan.convertToPipe(QueryPlanOptimizationSettings(context.getSettingsRef())); + read(plan, column_names, metadata_snapshot, query_info, local_context, processed_stage, max_block_size, num_streams); + return plan.convertToPipe( + QueryPlanOptimizationSettings::fromContext(local_context), + BuildQueryPipelineSettings::fromContext(local_context)); } @@ -3888,19 +4212,11 @@ std::optional StorageReplicatedMergeTree::totalRows(const Settings & set return res; } -std::optional StorageReplicatedMergeTree::totalRowsByPartitionPredicate(const SelectQueryInfo & query_info, const Context & context) const +std::optional StorageReplicatedMergeTree::totalRowsByPartitionPredicate(const SelectQueryInfo & query_info, ContextPtr local_context) const { - auto metadata_snapshot = getInMemoryMetadataPtr(); - PartitionPruner partition_pruner(metadata_snapshot->getPartitionKey(), query_info, context, true /* strict */); - if (partition_pruner.isUseless()) - return {}; - size_t res = 0; - foreachCommittedParts([&](auto & part) - { - if (!partition_pruner.canBePruned(part)) - res += part->rows_count; - }, context.getSettingsRef().select_sequential_consistency); - return res; + DataPartsVector 
parts; + foreachCommittedParts([&](auto & part) { parts.push_back(part); }, local_context->getSettingsRef().select_sequential_consistency); + return totalRowsByPartitionPredicateImpl(query_info, local_context, parts); } std::optional StorageReplicatedMergeTree::totalBytes(const Settings & settings) const @@ -3918,12 +4234,12 @@ void StorageReplicatedMergeTree::assertNotReadonly() const } -BlockOutputStreamPtr StorageReplicatedMergeTree::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, const Context & context) +BlockOutputStreamPtr StorageReplicatedMergeTree::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, ContextPtr local_context) { const auto storage_settings_ptr = getSettings(); assertNotReadonly(); - const Settings & query_settings = context.getSettingsRef(); + const Settings & query_settings = local_context->getSettingsRef(); bool deduplicate = storage_settings_ptr->replicated_deduplication_window != 0 && query_settings.insert_deduplicate; // TODO: should we also somehow pass list of columns to deduplicate on to the ReplicatedMergeTreeBlockOutputStream ? @@ -3933,7 +4249,7 @@ BlockOutputStreamPtr StorageReplicatedMergeTree::write(const ASTPtr & /*query*/, query_settings.max_partitions_per_insert_block, query_settings.insert_quorum_parallel, deduplicate, - context.getSettingsRef().optimize_on_insert); + local_context->getSettingsRef().optimize_on_insert); } @@ -3944,8 +4260,12 @@ bool StorageReplicatedMergeTree::optimize( bool final, bool deduplicate, const Names & deduplicate_by_columns, - const Context & query_context) + ContextPtr query_context) { + /// NOTE: exclusive lock cannot be used here, since this may lead to deadlock (see comments below), + /// but it should be safe to use non-exclusive to avoid dropping parts that may be required for processing queue. 
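The comment above is the crux of the locking change in `optimize()`: a shared lock is enough to keep parts from being dropped underneath the command, while an exclusive lock here could deadlock against the queue-processing threads. Modelled with `std::shared_mutex` (illustrative only, not the `IStorage` lock API):

```cpp
#include <shared_mutex>

std::shared_mutex table_lock;

void optimize_table()
{
    std::shared_lock lock(table_lock);   // non-exclusive, as with lockForShare()
    // ... assign and wait for merges; other readers stay unblocked ...
}

void drop_table()
{
    std::unique_lock lock(table_lock);   // DROP still needs exclusivity
    // ... remove parts ...
}
```

The shared lock gives exactly the guarantee the comment asks for: DROP cannot proceed while OPTIMIZE holds it, yet concurrent readers and queue tasks are not serialized behind the command.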
+ auto table_lock = lockForShare(query_context->getCurrentQueryId(), query_context->getSettingsRef().lock_acquire_timeout); + assertNotReadonly(); if (!is_leader) @@ -3959,7 +4279,7 @@ bool StorageReplicatedMergeTree::optimize( auto handle_noop = [&] (const String & message) { - if (query_context.getSettingsRef().optimize_throw_if_noop) + if (query_context->getSettingsRef().optimize_throw_if_noop) throw Exception(message, ErrorCodes::CANNOT_ASSIGN_OPTIMIZE); return false; }; @@ -3993,7 +4313,7 @@ bool StorageReplicatedMergeTree::optimize( future_merged_part.uuid = UUIDHelpers::generateV4(); SelectPartsDecision select_decision = merger_mutator.selectAllPartsToMergeWithinPartition( - future_merged_part, disk_space, can_merge, partition_id, true, metadata_snapshot, nullptr, query_context.getSettingsRef().optimize_skip_merged_partitions); + future_merged_part, disk_space, can_merge, partition_id, true, metadata_snapshot, nullptr, query_context->getSettingsRef().optimize_skip_merged_partitions); if (select_decision != SelectPartsDecision::SELECTED) break; @@ -4044,7 +4364,7 @@ bool StorageReplicatedMergeTree::optimize( UInt64 disk_space = getStoragePolicy()->getMaxUnreservedFreeSpace(); String partition_id = getPartitionIDFromQuery(partition, query_context); select_decision = merger_mutator.selectAllPartsToMergeWithinPartition( - future_merged_part, disk_space, can_merge, partition_id, final, metadata_snapshot, &disable_reason, query_context.getSettingsRef().optimize_skip_merged_partitions); + future_merged_part, disk_space, can_merge, partition_id, final, metadata_snapshot, &disable_reason, query_context->getSettingsRef().optimize_skip_merged_partitions); } /// If there is nothing to merge then we treat this merge as successful (needed for optimize final optimization) @@ -4082,7 +4402,7 @@ bool StorageReplicatedMergeTree::optimize( } } - if (query_context.getSettingsRef().replication_alter_partitions_sync != 0) + if (query_context->getSettingsRef().replication_alter_partitions_sync != 0) { /// NOTE Table lock must not be held while waiting. Some combination of R-W-R locks from different threads will yield to deadlock. 
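To make the R-W-R hazard in the note concrete: thread A holds a read lock and blocks waiting on other threads, a writer W queues behind A, and a later reader that A is (transitively) waiting for queues behind W, so nobody can proceed. The safe shape, sketched here with standard primitives (a simplified model, not the actual TableLockHolder code), is to drop the read lock before blocking:

#include <condition_variable>
#include <mutex>
#include <shared_mutex>

std::shared_timed_mutex table_rwlock;   // stand-in for the table's R-W lock
std::mutex wait_mutex;
std::condition_variable replicas_done;
bool all_replicas_processed = false;    // set elsewhere once replicas catch up

void waitForReplicasSafely()
{
    {
        std::shared_lock<std::shared_timed_mutex> read_lock(table_rwlock);
        // ... create and enqueue the merge entries under the read lock ...
    }   // the read lock is released here, *before* any waiting ...

    // ... so a writer queued on table_rwlock cannot wedge between this
    // thread and the readers whose progress it is waiting for.
    std::unique_lock<std::mutex> lk(wait_mutex);
    replicas_done.wait(lk, [] { return all_replicas_processed; });
}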
for (auto & merge_entry : merge_entries) @@ -4127,7 +4447,7 @@ bool StorageReplicatedMergeTree::executeMetadataAlter(const StorageReplicatedMer std::set StorageReplicatedMergeTree::getPartitionIdsAffectedByCommands( - const MutationCommands & commands, const Context & query_context) const + const MutationCommands & commands, ContextPtr query_context) const { std::set affected_partition_ids; @@ -4149,7 +4469,7 @@ std::set StorageReplicatedMergeTree::getPartitionIdsAffectedByCommands( PartitionBlockNumbersHolder StorageReplicatedMergeTree::allocateBlockNumbersInAffectedPartitions( - const MutationCommands & commands, const Context & query_context, const zkutil::ZooKeeperPtr & zookeeper) const + const MutationCommands & commands, ContextPtr query_context, const zkutil::ZooKeeperPtr & zookeeper) const { const std::set mutation_affected_partition_ids = getPartitionIdsAffectedByCommands(commands, query_context); @@ -4181,7 +4501,7 @@ PartitionBlockNumbersHolder StorageReplicatedMergeTree::allocateBlockNumbersInAf void StorageReplicatedMergeTree::alter( - const AlterCommands & commands, const Context & query_context, TableLockHolder & table_lock_holder) + const AlterCommands & commands, ContextPtr query_context, TableLockHolder & table_lock_holder) { assertNotReadonly(); @@ -4288,7 +4608,7 @@ void StorageReplicatedMergeTree::alter( alter_entry->create_time = time(nullptr); auto maybe_mutation_commands = commands.getMutationCommands( - *current_metadata, query_context.getSettingsRef().materialize_ttl_after_modify, query_context); + *current_metadata, query_context->getSettingsRef().materialize_ttl_after_modify, query_context); alter_entry->have_mutation = !maybe_mutation_commands.empty(); alter_path_idx = ops.size(); @@ -4320,7 +4640,7 @@ void StorageReplicatedMergeTree::alter( zkutil::makeCreateRequest(mutations_path + "/", mutation_entry.toString(), zkutil::CreateMode::PersistentSequential)); } - if (auto txn = query_context.getZooKeeperMetadataTransaction()) + if (auto txn = query_context->getZooKeeperMetadataTransaction()) { txn->moveOpsTo(ops); /// NOTE: IDatabase::alterTable(...) is called when executing ALTER_METADATA queue entry without query context, @@ -4376,12 +4696,12 @@ void StorageReplicatedMergeTree::alter( table_lock_holder.reset(); std::vector unwaited; - if (query_context.getSettingsRef().replication_alter_partitions_sync == 2) + if (query_context->getSettingsRef().replication_alter_partitions_sync == 2) { LOG_DEBUG(log, "Updated shared metadata nodes in ZooKeeper. Waiting for replicas to apply changes."); unwaited = waitForAllReplicasToProcessLogEntry(*alter_entry, false); } - else if (query_context.getSettingsRef().replication_alter_partitions_sync == 1) + else if (query_context->getSettingsRef().replication_alter_partitions_sync == 1) { LOG_DEBUG(log, "Updated shared metadata nodes in ZooKeeper. Waiting for replicas to apply changes."); waitForReplicaToProcessLogEntry(replica_name, *alter_entry); @@ -4393,7 +4713,7 @@ void StorageReplicatedMergeTree::alter( if (mutation_znode) { LOG_DEBUG(log, "Metadata changes applied. Will wait for data changes."); - waitMutation(*mutation_znode, query_context.getSettingsRef().replication_alter_partitions_sync); + waitMutation(*mutation_znode, query_context->getSettingsRef().replication_alter_partitions_sync); LOG_DEBUG(log, "Data changes applied."); } } @@ -4407,7 +4727,7 @@ static String getPartNamePossiblyFake(MergeTreeDataFormatVersion format_version, /// The date range is all month long. 
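For V0 (month-partitioned) part names, the whole month is reconstructed from the partition id alone. The following stand-alone sketch reproduces that arithmetic with plain <ctime> instead of ClickHouse's DateLUT (the helper name and output format are illustrative):

#include <cstdio>
#include <ctime>
#include <string>

// Given a V0 partition id such as "202104", print the first and last day of
// that month -- the [left_date, right_date] pair baked into part names like
// 20210401_20210430_1_10_2.
void monthRangeFromPartitionId(const std::string & partition_id)
{
    int year = std::stoi(partition_id.substr(0, 4));
    int month = std::stoi(partition_id.substr(4, 2));

    std::tm first = {};
    first.tm_year = year - 1900;
    first.tm_mon = month - 1;
    first.tm_mday = 1;

    std::tm last = first;
    last.tm_mon += 1;
    last.tm_mday = 0;      // day 0 of the next month == last day of this month
    std::mktime(&last);    // mktime normalizes the out-of-range fields

    std::printf("%04d%02d01 .. %04d%02d%02d\n", year, month, year, month, last.tm_mday);
}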
const auto & lut = DateLUT::instance(); time_t start_time = lut.YYYYMMDDToDate(parse(part_info.partition_id + "01")); - DayNum left_date = lut.toDayNum(start_time); + DayNum left_date = DayNum(lut.toDayNum(start_time).toUnderType()); DayNum right_date = DayNum(static_cast(left_date) + lut.daysInMonth(start_time) - 1); return part_info.getPartNameV0(left_date, right_date); } @@ -4457,7 +4777,7 @@ bool StorageReplicatedMergeTree::getFakePartCoveringAllPartsInPartition(const St } -void StorageReplicatedMergeTree::dropPartition(const ASTPtr & partition, bool detach, bool drop_part, const Context & query_context, bool throw_if_noop) +void StorageReplicatedMergeTree::dropPartition(const ASTPtr & partition, bool detach, bool drop_part, ContextPtr query_context, bool throw_if_noop) { assertNotReadonly(); if (!is_leader) @@ -4482,9 +4802,9 @@ void StorageReplicatedMergeTree::dropPartition(const ASTPtr & partition, bool de if (did_drop) { /// If necessary, wait until the operation is performed on itself or on all replicas. - if (query_context.getSettingsRef().replication_alter_partitions_sync != 0) + if (query_context->getSettingsRef().replication_alter_partitions_sync != 0) { - if (query_context.getSettingsRef().replication_alter_partitions_sync == 1) + if (query_context->getSettingsRef().replication_alter_partitions_sync == 1) waitForReplicaToProcessLogEntry(replica_name, entry); else waitForAllReplicasToProcessLogEntry(entry); @@ -4500,7 +4820,7 @@ void StorageReplicatedMergeTree::dropPartition(const ASTPtr & partition, bool de void StorageReplicatedMergeTree::truncate( - const ASTPtr &, const StorageMetadataPtr &, const Context & query_context, TableExclusiveLockHolder & table_lock) + const ASTPtr &, const StorageMetadataPtr &, ContextPtr query_context, TableExclusiveLockHolder & table_lock) { table_lock.release(); /// Truncate is done asynchronously. @@ -4526,7 +4846,7 @@ PartitionCommandsResultInfo StorageReplicatedMergeTree::attachPartition( const ASTPtr & partition, const StorageMetadataPtr & metadata_snapshot, bool attach_part, - const Context & query_context) + ContextPtr query_context) { assertNotReadonly(); @@ -4534,13 +4854,20 @@ PartitionCommandsResultInfo StorageReplicatedMergeTree::attachPartition( PartsTemporaryRename renamed_parts(*this, "detached/"); MutableDataPartsVector loaded_parts = tryLoadPartsToAttach(partition, attach_part, query_context, renamed_parts); - ReplicatedMergeTreeBlockOutputStream output(*this, metadata_snapshot, 0, 0, 0, false, false, false); /// TODO Allow to use quorum here. + /// TODO Allow to use quorum here. 
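The TODO above refers to quorum inserts: with insert_quorum = N, an INSERT is acknowledged only after N replicas have confirmed the part, tracked under the /quorum znode. A schematic wait loop against a generic ZooKeeper-style client follows; the exists()-based API and the exact node semantics are simplifying assumptions of this sketch:

#include <chrono>
#include <string>
#include <thread>

// Schematic: the inserting replica waits until the quorum-tracking node
// disappears, which happens once enough replicas have confirmed the part.
template <typename ZooKeeperClient>
bool waitForQuorumSketch(ZooKeeperClient & zk, const std::string & zookeeper_path,
                         std::chrono::milliseconds timeout)
{
    const std::string quorum_status = zookeeper_path + "/quorum/status";
    const auto deadline = std::chrono::steady_clock::now() + timeout;

    while (zk.exists(quorum_status))            // removed when the quorum is reached
    {
        if (std::chrono::steady_clock::now() > deadline)
            return false;                       // caller decides how to report this
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
    return true;
}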
+ ReplicatedMergeTreeBlockOutputStream output(*this, metadata_snapshot, 0, 0, 0, false, false, false, + /*is_attach*/true); + for (size_t i = 0; i < loaded_parts.size(); ++i) { - String old_name = loaded_parts[i]->name; + const String old_name = loaded_parts[i]->name; + output.writeExistingPart(loaded_parts[i]); + renamed_parts.old_and_new_names[i].first.clear(); + LOG_DEBUG(log, "Attached part {} as {}", old_name, loaded_parts[i]->name); + results.push_back(PartitionCommandResultInfo{ .partition_id = loaded_parts[i]->info.partition_id, .part_name = loaded_parts[i]->name, @@ -4554,7 +4881,7 @@ PartitionCommandsResultInfo StorageReplicatedMergeTree::attachPartition( void StorageReplicatedMergeTree::checkTableCanBeDropped() const { auto table_id = getStorageID(); - global_context.checkTableCanBeDropped(table_id.database_name, table_id.table_name, getTotalActiveSizeInBytes()); + getContext()->checkTableCanBeDropped(table_id.database_name, table_id.table_name, getTotalActiveSizeInBytes()); } void StorageReplicatedMergeTree::checkTableCanBeRenamed() const @@ -4722,7 +5049,7 @@ bool StorageReplicatedMergeTree::waitForTableReplicaToProcessLogEntry( const auto & stop_waiting = [&]() { - bool stop_waiting_itself = waiting_itself && is_dropped; + bool stop_waiting_itself = waiting_itself && (partial_shutdown_called || is_dropped); bool stop_waiting_non_active = !wait_for_non_active && !getZooKeeper()->exists(table_zookeeper_path + "/replicas/" + replica + "/is_active"); return stop_waiting_itself || stop_waiting_non_active; }; @@ -4887,11 +5214,7 @@ void StorageReplicatedMergeTree::getStatus(Status & res, bool with_zk_fields) { auto log_entries = zookeeper->getChildren(zookeeper_path + "/log"); - if (log_entries.empty()) - { - res.log_max_index = 0; - } - else + if (!log_entries.empty()) { const String & last_log_entry = *std::max_element(log_entries.begin(), log_entries.end()); res.log_max_index = parse(last_log_entry.substr(strlen("log-"))); @@ -4903,7 +5226,6 @@ void StorageReplicatedMergeTree::getStatus(Status & res, bool with_zk_fields) auto all_replicas = zookeeper->getChildren(zookeeper_path + "/replicas"); res.total_replicas = all_replicas.size(); - res.active_replicas = 0; for (const String & replica : all_replicas) if (zookeeper->exists(zookeeper_path + "/replicas/" + replica + "/is_active")) ++res.active_replicas; @@ -5029,57 +5351,71 @@ void StorageReplicatedMergeTree::getReplicaDelays(time_t & out_absolute_delay, t } } - void StorageReplicatedMergeTree::fetchPartition( const ASTPtr & partition, const StorageMetadataPtr & metadata_snapshot, const String & from_, - const Context & query_context) + bool fetch_part, + ContextPtr query_context) { Macros::MacroExpansionInfo info; - info.expand_special_macros_only = false; + info.expand_special_macros_only = false; //-V1048 info.table_id = getStorageID(); info.table_id.uuid = UUIDHelpers::Nil; - auto expand_from = query_context.getMacros()->expand(from_, info); + auto expand_from = query_context->getMacros()->expand(from_, info); String auxiliary_zookeeper_name = extractZooKeeperName(expand_from); String from = extractZooKeeperPath(expand_from); if (from.empty()) throw Exception("ZooKeeper path should not be empty", ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT); - String partition_id = getPartitionIDFromQuery(partition, query_context); zkutil::ZooKeeperPtr zookeeper; if (auxiliary_zookeeper_name != default_zookeeper_name) - { - zookeeper = global_context.getAuxiliaryZooKeeper(auxiliary_zookeeper_name); - - LOG_INFO(log, "Will fetch partition {} from 
shard {} (auxiliary zookeeper '{}')", partition_id, from_, auxiliary_zookeeper_name); - } + zookeeper = getContext()->getAuxiliaryZooKeeper(auxiliary_zookeeper_name); else - { zookeeper = getZooKeeper(); - LOG_INFO(log, "Will fetch partition {} from shard {}", partition_id, from_); - } - if (from.back() == '/') from.resize(from.size() - 1); + if (fetch_part) + { + String part_name = partition->as().value.safeGet(); + auto part_path = findReplicaHavingPart(part_name, from, zookeeper); + + if (part_path.empty()) + throw Exception(ErrorCodes::NO_REPLICA_HAS_PART, "Part {} does not exist on any replica", part_name); + /** Let's check that there is no such part in the `detached` directory (where we will write the downloaded parts). + * Unreliable (there is a race condition) - such a part may appear a little later. + */ + if (checkIfDetachedPartExists(part_name)) + throw Exception(ErrorCodes::DUPLICATE_DATA_PART, "Detached part " + part_name + " already exists."); + LOG_INFO(log, "Will fetch part {} from shard {} (zookeeper '{}')", part_name, from_, auxiliary_zookeeper_name); + + try + { + /// part name , metadata, part_path , true, 0, zookeeper + if (!fetchPart(part_name, metadata_snapshot, part_path, true, 0, zookeeper)) + throw Exception(ErrorCodes::UNFINISHED, "Failed to fetch part {} from {}", part_name, from_); + } + catch (const DB::Exception & e) + { + if (e.code() != ErrorCodes::RECEIVED_ERROR_FROM_REMOTE_IO_SERVER && e.code() != ErrorCodes::RECEIVED_ERROR_TOO_MANY_REQUESTS + && e.code() != ErrorCodes::CANNOT_READ_ALL_DATA) + throw; + + LOG_INFO(log, e.displayText()); + } + return; + } + + String partition_id = getPartitionIDFromQuery(partition, query_context); + LOG_INFO(log, "Will fetch partition {} from shard {} (zookeeper '{}')", partition_id, from_, auxiliary_zookeeper_name); /** Let's check that there is no such partition in the `detached` directory (where we will write the downloaded parts). * Unreliable (there is a race condition) - such a partition may appear a little later. */ - Poco::DirectoryIterator dir_end; - for (const std::string & path : getDataPaths()) - { - for (Poco::DirectoryIterator dir_it{path + "detached/"}; dir_it != dir_end; ++dir_it) - { - MergeTreePartInfo part_info; - if (MergeTreePartInfo::tryParsePartName(dir_it.name(), &part_info, format_version) - && part_info.partition_id == partition_id) - throw Exception("Detached partition " + partition_id + " already exists.", ErrorCodes::PARTITION_ALREADY_EXISTS); - } - - } + if (checkIfDetachedPartitionExists(partition_id)) + throw Exception("Detached partition " + partition_id + " already exists.", ErrorCodes::PARTITION_ALREADY_EXISTS); zkutil::Strings replicas; zkutil::Strings active_replicas; @@ -5151,7 +5487,7 @@ void StorageReplicatedMergeTree::fetchPartition( if (try_no) LOG_INFO(log, "Some of parts ({}) are missing. 
Will try to fetch covering parts.", missing_parts.size()); - if (try_no >= query_context.getSettings().max_fetch_partition_retries_count) + if (try_no >= query_context->getSettings().max_fetch_partition_retries_count) throw Exception("Too many retries to fetch parts from " + best_replica_path, ErrorCodes::TOO_MANY_RETRIES_TO_FETCH_PARTS); Strings parts = zookeeper->getChildren(best_replica_path + "/parts"); @@ -5216,7 +5552,7 @@ void StorageReplicatedMergeTree::fetchPartition( } -void StorageReplicatedMergeTree::mutate(const MutationCommands & commands, const Context & query_context) +void StorageReplicatedMergeTree::mutate(const MutationCommands & commands, ContextPtr query_context) { /// Overview of the mutation algorithm. /// @@ -5300,7 +5636,7 @@ void StorageReplicatedMergeTree::mutate(const MutationCommands & commands, const requests.emplace_back(zkutil::makeCreateRequest( mutations_path + "/", mutation_entry.toString(), zkutil::CreateMode::PersistentSequential)); - if (auto txn = query_context.getZooKeeperMetadataTransaction()) + if (auto txn = query_context->getZooKeeperMetadataTransaction()) txn->moveOpsTo(requests); Coordination::Responses responses; @@ -5325,7 +5661,7 @@ void StorageReplicatedMergeTree::mutate(const MutationCommands & commands, const throw Coordination::Exception("Unable to create a mutation znode", rc); } - waitMutation(mutation_entry.znode_name, query_context.getSettingsRef().mutations_sync); + waitMutation(mutation_entry.znode_name, query_context->getSettingsRef().mutations_sync); } void StorageReplicatedMergeTree::waitMutation(const String & znode_name, size_t mutations_sync) const @@ -5369,11 +5705,61 @@ CancellationCode StorageReplicatedMergeTree::killMutation(const String & mutatio { const String & partition_id = pair.first; Int64 block_number = pair.second; - global_context.getMergeList().cancelPartMutations(partition_id, block_number); + getContext()->getMergeList().cancelPartMutations(partition_id, block_number); } return CancellationCode::CancelSent; } +void StorageReplicatedMergeTree::removePartsFromFilesystem(const DataPartsVector & parts) +{ + auto remove_part = [&](const auto & part) + { + LOG_DEBUG(log, "Removing part from filesystem {}", part.name); + try + { + bool keep_s3 = !this->unlockSharedData(part); + part.remove(keep_s3); + } + catch (...) + { + tryLogCurrentException(log, "There is a problem with deleting part " + part.name + " from filesystem"); + } + }; + + const auto settings = getSettings(); + if (settings->max_part_removal_threads > 1 && parts.size() > settings->concurrent_part_removal_threshold) + { + /// Parallel parts removal. + + size_t num_threads = std::min(settings->max_part_removal_threads, parts.size()); + ThreadPool pool(num_threads); + + /// NOTE: Under heavy system load you may get "Cannot schedule a task" from ThreadPool. 
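The scheduling loop below fans part removals out over a pool and then joins. Here is a condensed, dependency-free version of the same pattern, with std::thread plus an atomic work index standing in for ClickHouse's ThreadPool and its query-thread attachment:

#include <atomic>
#include <functional>
#include <thread>
#include <vector>

void removeInParallel(const std::vector<std::function<void()>> & remove_jobs, size_t num_threads)
{
    std::atomic<size_t> next{0};
    std::vector<std::thread> workers;
    workers.reserve(num_threads);

    for (size_t t = 0; t < num_threads; ++t)
        workers.emplace_back([&]
        {
            // Each worker pulls the next unclaimed job; each job is expected
            // to log and swallow its own errors, as remove_part does above.
            for (size_t i = next.fetch_add(1); i < remove_jobs.size(); i = next.fetch_add(1))
                remove_jobs[i]();
        });

    for (auto & w : workers)
        w.join();
}

Unlike scheduleOrThrowOnError, this sketch cannot fail to schedule; the real pool can, which is exactly what the note above warns about.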
+ for (const DataPartPtr & part : parts) + { + pool.scheduleOrThrowOnError([&, thread_group = CurrentThread::getGroup()] + { + SCOPE_EXIT_SAFE( + if (thread_group) + CurrentThread::detachQueryIfNotDetached(); + ); + if (thread_group) + CurrentThread::attachTo(thread_group); + + remove_part(*part); + }); + } + + pool.wait(); + } + else + { + for (const DataPartPtr & part : parts) + { + remove_part(*part); + } + } +} void StorageReplicatedMergeTree::clearOldPartsAndRemoveFromZK() { @@ -5399,25 +5785,10 @@ void StorageReplicatedMergeTree::clearOldPartsAndRemoveFromZK() } parts.clear(); - auto remove_parts_from_filesystem = [log=log] (const DataPartsVector & parts_to_remove) - { - for (const auto & part : parts_to_remove) - { - try - { - part->remove(); - } - catch (...) - { - tryLogCurrentException(log, "There is a problem with deleting part " + part->name + " from filesystem"); - } - } - }; - /// Delete duplicate parts from filesystem if (!parts_to_delete_only_from_filesystem.empty()) { - remove_parts_from_filesystem(parts_to_delete_only_from_filesystem); + removePartsFromFilesystem(parts_to_delete_only_from_filesystem); removePartsFinally(parts_to_delete_only_from_filesystem); LOG_DEBUG(log, "Removed {} old duplicate parts", parts_to_delete_only_from_filesystem.size()); @@ -5462,7 +5833,7 @@ void StorageReplicatedMergeTree::clearOldPartsAndRemoveFromZK() /// Remove parts from filesystem and finally from data_parts if (!parts_to_remove_from_filesystem.empty()) { - remove_parts_from_filesystem(parts_to_remove_from_filesystem); + removePartsFromFilesystem(parts_to_remove_from_filesystem); removePartsFinally(parts_to_remove_from_filesystem); LOG_DEBUG(log, "Removed {} old parts", parts_to_remove_from_filesystem.size()); @@ -5667,18 +6038,18 @@ void StorageReplicatedMergeTree::clearBlocksInPartition( } void StorageReplicatedMergeTree::replacePartitionFrom( - const StoragePtr & source_table, const ASTPtr & partition, bool replace, const Context & context) + const StoragePtr & source_table, const ASTPtr & partition, bool replace, ContextPtr query_context) { /// First argument is true, because we possibly will add new data to current table. 
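Both the source and the destination table are locked for share below before any parts move. Since shared locks do not conflict with one another, two concurrent REPLACE PARTITION queries in opposite directions cannot deadlock; with exclusive locks the acquisition order would matter, as this schematic shows (plain std::mutex stand-ins, not the real table locks):

#include <mutex>

std::mutex source_table_mutex;   // stand-ins for two tables' exclusive locks
std::mutex dest_table_mutex;

void replacePartitionSketch()
{
    // Taking exclusive locks one by one in opposite orders from two threads
    // can deadlock; std::scoped_lock acquires both with a deadlock-avoidance
    // algorithm. The code below instead sidesteps the problem by taking
    // shared locks, which never block one another.
    std::scoped_lock both(source_table_mutex, dest_table_mutex);
    // ... move parts between the tables while both locks are held ...
}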
- auto lock1 = lockForShare(context.getCurrentQueryId(), context.getSettingsRef().lock_acquire_timeout); - auto lock2 = source_table->lockForShare(context.getCurrentQueryId(), context.getSettingsRef().lock_acquire_timeout); + auto lock1 = lockForShare(query_context->getCurrentQueryId(), query_context->getSettingsRef().lock_acquire_timeout); + auto lock2 = source_table->lockForShare(query_context->getCurrentQueryId(), query_context->getSettingsRef().lock_acquire_timeout); auto source_metadata_snapshot = source_table->getInMemoryMetadataPtr(); auto metadata_snapshot = getInMemoryMetadataPtr(); Stopwatch watch; MergeTreeData & src_data = checkStructureAndGetMergeTreeData(source_table, source_metadata_snapshot, metadata_snapshot); - String partition_id = getPartitionIDFromQuery(partition, context); + String partition_id = getPartitionIDFromQuery(partition, query_context); DataPartsVector src_all_parts = src_data.getDataPartsVectorInPartition(MergeTreeDataPartState::Committed, partition_id); DataPartsVector src_parts; @@ -5804,7 +6175,7 @@ void StorageReplicatedMergeTree::replacePartitionFrom( } } - if (auto txn = context.getZooKeeperMetadataTransaction()) + if (auto txn = query_context->getZooKeeperMetadataTransaction()) txn->moveOpsTo(ops); ops.emplace_back(zkutil::makeSetRequest(zookeeper_path + "/log", "", -1)); /// Just update version @@ -5828,11 +6199,11 @@ void StorageReplicatedMergeTree::replacePartitionFrom( parts_to_remove = removePartsInRangeFromWorkingSet(drop_range, true, false, data_parts_lock); } - PartLog::addNewParts(global_context, dst_parts, watch.elapsed()); + PartLog::addNewParts(getContext(), dst_parts, watch.elapsed()); } catch (...) { - PartLog::addNewParts(global_context, dst_parts, watch.elapsed(), ExecutionStatus::fromCurrentException()); + PartLog::addNewParts(getContext(), dst_parts, watch.elapsed(), ExecutionStatus::fromCurrentException()); throw; } @@ -5850,7 +6221,7 @@ void StorageReplicatedMergeTree::replacePartitionFrom( cleanup_thread.wakeup(); /// If necessary, wait until the operation is performed on all replicas. 
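The wait that follows is governed by replication_alter_partitions_sync: 0 returns immediately, 1 waits for the local replica, and 2 (or more) waits for every replica. A sketch of that dispatch, templated so it compiles stand-alone; the two wait methods named here are the ones this file actually uses:

#include <cstdint>
#include <string>

template <typename Storage, typename LogEntry>
void waitAccordingToSyncSetting(Storage & storage, const LogEntry & entry,
                                const std::string & replica_name, uint8_t sync)
{
    if (sync == 1)
        storage.waitForReplicaToProcessLogEntry(replica_name, entry);  // this replica only
    else if (sync >= 2)
        storage.waitForAllReplicasToProcessLogEntry(entry);            // every replica
    // sync == 0: fire and forget; replicas apply the entry asynchronously
}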
- if (context.getSettingsRef().replication_alter_partitions_sync > 1) + if (query_context->getSettingsRef().replication_alter_partitions_sync > 1) { lock2.reset(); lock1.reset(); @@ -5858,10 +6229,10 @@ void StorageReplicatedMergeTree::replacePartitionFrom( } } -void StorageReplicatedMergeTree::movePartitionToTable(const StoragePtr & dest_table, const ASTPtr & partition, const Context & query_context) +void StorageReplicatedMergeTree::movePartitionToTable(const StoragePtr & dest_table, const ASTPtr & partition, ContextPtr query_context) { - auto lock1 = lockForShare(query_context.getCurrentQueryId(), query_context.getSettingsRef().lock_acquire_timeout); - auto lock2 = dest_table->lockForShare(query_context.getCurrentQueryId(), query_context.getSettingsRef().lock_acquire_timeout); + auto lock1 = lockForShare(query_context->getCurrentQueryId(), query_context->getSettingsRef().lock_acquire_timeout); + auto lock2 = dest_table->lockForShare(query_context->getCurrentQueryId(), query_context->getSettingsRef().lock_acquire_timeout); auto dest_table_storage = std::dynamic_pointer_cast(dest_table); if (!dest_table_storage) @@ -5940,7 +6311,7 @@ void StorageReplicatedMergeTree::movePartitionToTable(const StoragePtr & dest_ta entry_delete.type = LogEntry::DROP_RANGE; entry_delete.source_replica = replica_name; entry_delete.new_part_name = drop_range_fake_part_name; - entry_delete.detach = false; + entry_delete.detach = false; //-V1048 entry_delete.create_time = time(nullptr); } @@ -6014,11 +6385,11 @@ void StorageReplicatedMergeTree::movePartitionToTable(const StoragePtr & dest_ta transaction.commit(&lock); } - PartLog::addNewParts(global_context, dst_parts, watch.elapsed()); + PartLog::addNewParts(getContext(), dst_parts, watch.elapsed()); } catch (...) { - PartLog::addNewParts(global_context, dst_parts, watch.elapsed(), ExecutionStatus::fromCurrentException()); + PartLog::addNewParts(getContext(), dst_parts, watch.elapsed(), ExecutionStatus::fromCurrentException()); throw; } @@ -6033,7 +6404,7 @@ void StorageReplicatedMergeTree::movePartitionToTable(const StoragePtr & dest_ta parts_to_remove.clear(); cleanup_thread.wakeup(); - if (query_context.getSettingsRef().replication_alter_partitions_sync > 1) + if (query_context->getSettingsRef().replication_alter_partitions_sync > 1) { lock2.reset(); dest_table_storage->waitForAllReplicasToProcessLogEntry(entry); @@ -6050,7 +6421,7 @@ void StorageReplicatedMergeTree::movePartitionToTable(const StoragePtr & dest_ta log_znode_path = dynamic_cast(*op_results.front()).path_created; entry_delete.znode_name = log_znode_path.substr(log_znode_path.find_last_of('/') + 1); - if (query_context.getSettingsRef().replication_alter_partitions_sync > 1) + if (query_context->getSettingsRef().replication_alter_partitions_sync > 1) { lock1.reset(); waitForAllReplicasToProcessLogEntry(entry_delete); @@ -6105,16 +6476,16 @@ void StorageReplicatedMergeTree::getCommitPartOps( ReplicatedMergeTreeAddress StorageReplicatedMergeTree::getReplicatedMergeTreeAddress() const { - auto host_port = global_context.getInterserverIOAddress(); + auto host_port = getContext()->getInterserverIOAddress(); auto table_id = getStorageID(); ReplicatedMergeTreeAddress res; res.host = host_port.first; res.replication_port = host_port.second; - res.queries_port = global_context.getTCPPort(); + res.queries_port = getContext()->getTCPPort(); res.database = table_id.database_name; res.table = table_id.table_name; - res.scheme = global_context.getInterserverScheme(); + res.scheme = 
getContext()->getInterserverScheme(); return res; } @@ -6275,7 +6646,7 @@ bool StorageReplicatedMergeTree::dropPart( } bool StorageReplicatedMergeTree::dropAllPartsInPartition( - zkutil::ZooKeeper & zookeeper, String & partition_id, LogEntry & entry, const Context & query_context, bool detach) + zkutil::ZooKeeper & zookeeper, String & partition_id, LogEntry & entry, ContextPtr query_context, bool detach) { MergeTreePartInfo drop_range_info; if (!getFakePartCoveringAllPartsInPartition(partition_id, drop_range_info)) @@ -6307,7 +6678,7 @@ bool StorageReplicatedMergeTree::dropAllPartsInPartition( Coordination::Requests ops; ops.emplace_back(zkutil::makeCreateRequest(zookeeper_path + "/log/log-", entry.toString(), zkutil::CreateMode::PersistentSequential)); ops.emplace_back(zkutil::makeSetRequest(zookeeper_path + "/log", "", -1)); /// Just update version. - if (auto txn = query_context.getZooKeeperMetadataTransaction()) + if (auto txn = query_context->getZooKeeperMetadataTransaction()) txn->moveOpsTo(ops); Coordination::Responses responses = zookeeper.multi(ops); @@ -6318,13 +6689,13 @@ bool StorageReplicatedMergeTree::dropAllPartsInPartition( } -CheckResults StorageReplicatedMergeTree::checkData(const ASTPtr & query, const Context & context) +CheckResults StorageReplicatedMergeTree::checkData(const ASTPtr & query, ContextPtr local_context) { CheckResults results; DataPartsVector data_parts; if (const auto & check_query = query->as(); check_query.partition) { - String partition_id = getPartitionIDFromQuery(check_query.partition, context); + String partition_id = getPartitionIDFromQuery(check_query.partition, local_context); data_parts = getDataPartsVectorInPartition(MergeTreeDataPartState::Committed, partition_id); } else @@ -6344,6 +6715,7 @@ CheckResults StorageReplicatedMergeTree::checkData(const ASTPtr & query, const C return results; } + bool StorageReplicatedMergeTree::canUseAdaptiveGranularity() const { const auto storage_settings_ptr = getSettings(); @@ -6358,10 +6730,238 @@ MutationCommands StorageReplicatedMergeTree::getFirstAlterMutationCommandsForPar return queue.getFirstAlterMutationCommandsForPart(part); } + void StorageReplicatedMergeTree::startBackgroundMovesIfNeeded() { if (areBackgroundMovesNeeded()) background_moves_executor.start(); } + +void StorageReplicatedMergeTree::lockSharedData(const IMergeTreeDataPart & part) const +{ + if (!part.volume) + return; + DiskPtr disk = part.volume->getDisk(); + if (!disk) + return; + if (disk->getType() != DB::DiskType::Type::S3) + return; + + zkutil::ZooKeeperPtr zookeeper = tryGetZooKeeper(); + if (!zookeeper) + return; + + String id = part.getUniqueId(); + boost::replace_all(id, "/", "_"); + + String zookeeper_node = zookeeper_path + "/zero_copy_s3/shared/" + part.name + "/" + id + "/" + replica_name; + + LOG_TRACE(log, "Set zookeeper lock {}", zookeeper_node); + + /// In a rare case another replica can remove the path between createAncestors and createIfNotExists, + /// so we make up to 5 attempts + for (int attempts = 5; attempts > 0; --attempts) + { + try + { + zookeeper->createAncestors(zookeeper_node); + zookeeper->createIfNotExists(zookeeper_node, "lock"); + break; + } + catch (const zkutil::KeeperException & e) + { + if (e.code == Coordination::Error::ZNONODE) + continue; + throw; + } + } +} + + +bool StorageReplicatedMergeTree::unlockSharedData(const IMergeTreeDataPart & part) const +{ + if (!part.volume) + return true; + DiskPtr disk = part.volume->getDisk(); + if (!disk) + return true; + if (disk->getType() != DB::DiskType::Type::S3)
+ return true; + + zkutil::ZooKeeperPtr zookeeper = tryGetZooKeeper(); + if (!zookeeper) + return true; + + String id = part.getUniqueId(); + boost::replace_all(id, "/", "_"); + + String zookeeper_part_node = zookeeper_path + "/zero_copy_s3/shared/" + part.name; + String zookeeper_part_uniq_node = zookeeper_part_node + "/" + id; + String zookeeper_node = zookeeper_part_uniq_node + "/" + replica_name; + + LOG_TRACE(log, "Remove zookeeper lock {}", zookeeper_node); + + zookeeper->tryRemove(zookeeper_node); + + Strings children; + zookeeper->tryGetChildren(zookeeper_part_uniq_node, children); + + if (!children.empty()) + { + LOG_TRACE(log, "Found zookeeper locks for {}", zookeeper_part_uniq_node); + return false; + } + + zookeeper->tryRemove(zookeeper_part_uniq_node); + + /// Even when we have a lock with the same part name but a different uniq id, we can remove the files on S3 + children.clear(); + zookeeper->tryGetChildren(zookeeper_part_node, children); + if (children.empty()) + /// Clean up after the last uniq id was removed + zookeeper->tryRemove(zookeeper_part_node); + + return true; +} + + +bool StorageReplicatedMergeTree::tryToFetchIfShared( + const IMergeTreeDataPart & part, + const DiskPtr & disk, + const String & path) +{ + const auto data_settings = getSettings(); + if (!data_settings->allow_s3_zero_copy_replication) + return false; + + if (disk->getType() != DB::DiskType::Type::S3) + return false; + + String replica = getSharedDataReplica(part); + + /// We can't fetch the part when no replica has it on S3 + if (replica.empty()) + return false; + + return executeFetchShared(replica, part.name, disk, path); +} + + +String StorageReplicatedMergeTree::getSharedDataReplica( + const IMergeTreeDataPart & part) const +{ + String best_replica; + + zkutil::ZooKeeperPtr zookeeper = tryGetZooKeeper(); + if (!zookeeper) + return best_replica; + + String zookeeper_part_node = zookeeper_path + "/zero_copy_s3/shared/" + part.name; + + Strings ids; + zookeeper->tryGetChildren(zookeeper_part_node, ids); + + Strings replicas; + for (const auto & id : ids) + { + String zookeeper_part_uniq_node = zookeeper_part_node + "/" + id; + Strings id_replicas; + zookeeper->tryGetChildren(zookeeper_part_uniq_node, id_replicas); + LOG_TRACE(log, "Found zookeeper replicas for {}: {}", zookeeper_part_uniq_node, id_replicas.size()); + replicas.insert(replicas.end(), id_replicas.begin(), id_replicas.end()); + } + + LOG_TRACE(log, "Found zookeeper replicas for part {}: {}", part.name, replicas.size()); + + Strings active_replicas; + + /// TODO: Move the best-replica choice into a common method (this is the same code as in StorageReplicatedMergeTree::fetchPartition) + + /// Leave only active replicas. + active_replicas.reserve(replicas.size()); + + for (const String & replica : replicas) + if ((replica != replica_name) && (zookeeper->exists(zookeeper_path + "/replicas/" + replica + "/is_active"))) + active_replicas.push_back(replica); + + LOG_TRACE(log, "Found zookeeper active replicas for part {}: {}", part.name, active_replicas.size()); + + if (active_replicas.empty()) + return best_replica; + + /** You must select the best (most relevant) replica. + * This is a replica with the maximum `log_pointer`, then with the minimum `queue` size. + * NOTE This is not exactly the best criterion. It does not make sense to download old partitions, + * and it would be nice to be able to choose the replica closest by network. + * NOTE Of course, there are data races here. You can solve it by retrying.
+ */ + Int64 max_log_pointer = -1; + UInt64 min_queue_size = std::numeric_limits::max(); + + for (const String & replica : active_replicas) + { + String current_replica_path = zookeeper_path + "/replicas/" + replica; + + String log_pointer_str = zookeeper->get(current_replica_path + "/log_pointer"); + Int64 log_pointer = log_pointer_str.empty() ? 0 : parse(log_pointer_str); + + Coordination::Stat stat; + zookeeper->get(current_replica_path + "/queue", &stat); + size_t queue_size = stat.numChildren; + + if (log_pointer > max_log_pointer + || (log_pointer == max_log_pointer && queue_size < min_queue_size)) + { + max_log_pointer = log_pointer; + min_queue_size = queue_size; + best_replica = replica; + } + } + + return best_replica; +} + +String StorageReplicatedMergeTree::findReplicaHavingPart( + const String & part_name, const String & zookeeper_path_, zkutil::ZooKeeper::Ptr zookeeper_) +{ + Strings replicas = zookeeper_->getChildren(zookeeper_path_ + "/replicas"); + + /// Select replicas in uniformly random order. + std::shuffle(replicas.begin(), replicas.end(), thread_local_rng); + + for (const String & replica : replicas) + { + if (zookeeper_->exists(zookeeper_path_ + "/replicas/" + replica + "/parts/" + part_name) + && zookeeper_->exists(zookeeper_path_ + "/replicas/" + replica + "/is_active")) + return zookeeper_path_ + "/replicas/" + replica; + } + + return {}; +} + +bool StorageReplicatedMergeTree::checkIfDetachedPartExists(const String & part_name) +{ + Poco::DirectoryIterator dir_end; + for (const std::string & path : getDataPaths()) + for (Poco::DirectoryIterator dir_it{path + "detached/"}; dir_it != dir_end; ++dir_it) + if (dir_it.name() == part_name) + return true; + return false; +} + +bool StorageReplicatedMergeTree::checkIfDetachedPartitionExists(const String & partition_name) +{ + Poco::DirectoryIterator dir_end; + for (const std::string & path : getDataPaths()) + { + for (Poco::DirectoryIterator dir_it{path + "detached/"}; dir_it != dir_end; ++dir_it) + { + MergeTreePartInfo part_info; + if (MergeTreePartInfo::tryParsePartName(dir_it.name(), &part_info, format_version) && part_info.partition_id == partition_name) + return true; + } + } + return false; +} } diff --git a/src/Storages/StorageReplicatedMergeTree.h b/src/Storages/StorageReplicatedMergeTree.h index a1a70ada9b2..c70556f40df 100644 --- a/src/Storages/StorageReplicatedMergeTree.h +++ b/src/Storages/StorageReplicatedMergeTree.h @@ -39,13 +39,14 @@ namespace DB * - the structure of the table (/metadata, /columns) * - action log with data (/log/log-...,/replicas/replica_name/queue/queue-...); * - a replica list (/replicas), and replica activity tag (/replicas/replica_name/is_active), replica addresses (/replicas/replica_name/host); - * - select the leader replica (/leader_election) - these are the replicas that assigning merges, mutations and partition manipulations + * - the leader replica election (/leader_election) - these are the replicas that assign merges, mutations + * and partition manipulations. * (after ClickHouse version 20.5 we allow multiple leaders to act concurrently); * - a set of parts of data on each replica (/replicas/replica_name/parts); * - list of the last N blocks of data with checksum, for deduplication (/blocks); * - the list of incremental block numbers (/block_numbers) that we are about to insert, * to ensure the linear order of data insertion and data merge only on the intervals in this sequence; - * - coordinates writes with quorum (/quorum). + * - coordinate writes with quorum (/quorum). 
* - Storage of mutation entries (ALTER DELETE, ALTER UPDATE etc.) to execute (/mutations). * See comments in StorageReplicatedMergeTree::mutate() for details. */ @@ -65,6 +66,8 @@ namespace DB * - if the part is corrupt (removePartAndEnqueueFetch) or absent during the check (at start - checkParts, while running - searchForMissingPart), * actions are put on GET from other replicas; * + * TODO Update the GET part after rewriting the code (search locally). + * * The replica to which INSERT was made in the queue will also have an entry of the GET of this data. * Such an entry is considered to be executed as soon as the queue handler sees it. * @@ -93,7 +96,7 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; @@ -103,16 +106,16 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; std::optional totalRows(const Settings & settings) const override; - std::optional totalRowsByPartitionPredicate(const SelectQueryInfo & query_info, const Context & context) const override; + std::optional totalRowsByPartitionPredicate(const SelectQueryInfo & query_info, ContextPtr context) const override; std::optional totalBytes(const Settings & settings) const override; - BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, const Context & context) override; + BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, ContextPtr context) override; bool optimize( const ASTPtr & query, @@ -121,11 +124,11 @@ public: bool final, bool deduplicate, const Names & deduplicate_by_columns, - const Context & query_context) override; + ContextPtr query_context) override; - void alter(const AlterCommands & commands, const Context & query_context, TableLockHolder & table_lock_holder) override; + void alter(const AlterCommands & commands, ContextPtr query_context, TableLockHolder & table_lock_holder) override; - void mutate(const MutationCommands & commands, const Context & context) override; + void mutate(const MutationCommands & commands, ContextPtr context) override; void waitMutation(const String & znode_name, size_t mutations_sync) const; std::vector getMutationsStatus() const override; CancellationCode killMutation(const String & mutation_id) override; @@ -134,7 +137,7 @@ public: */ void drop() override; - void truncate(const ASTPtr &, const StorageMetadataPtr &, const Context & query_context, TableExclusiveLockHolder &) override; + void truncate(const ASTPtr &, const StorageMetadataPtr &, ContextPtr query_context, TableExclusiveLockHolder &) override; void checkTableCanBeRenamed() const override; @@ -194,7 +197,7 @@ public: part_check_thread.enqueuePart(part_name, delay_to_check_seconds); } - CheckResults checkData(const ASTPtr & query, const Context & context) override; + CheckResults checkData(const ASTPtr & query, ContextPtr context) override; /// Checks ability to use granularity bool canUseAdaptiveGranularity() const override; @@ -205,6 +208,10 @@ public: */ static void dropReplica(zkutil::ZooKeeperPtr zookeeper, const String & zookeeper_path, const String & replica, Poco::Logger * logger); + /// Removes table 
from ZooKeeper after the last replica was dropped + static bool removeTableNodesFromZooKeeper(zkutil::ZooKeeperPtr zookeeper, const String & zookeeper_path, + const zkutil::EphemeralNodeHolder::Ptr & metadata_drop_lock, Poco::Logger * logger); + /// Get job to execute in background pool (merge, mutate, drop range and so on) std::optional getDataProcessingJob() override; @@ -212,6 +219,23 @@ public: /// is not overloaded bool canExecuteFetch(const ReplicatedMergeTreeLogEntry & entry, String & disable_reason) const; + /// Fetch the part only when it is stored on shared storage like S3 + bool executeFetchShared(const String & source_replica, const String & new_part_name, const DiskPtr & disk, const String & path); + + /// Lock the part in ZooKeeper so that common S3 data can be used by several nodes + void lockSharedData(const IMergeTreeDataPart & part) const override; + + /// Unlock the common S3 data part in ZooKeeper. + /// Returns true if the data was unlocked. + /// Returns false if the data is still used by another node. + bool unlockSharedData(const IMergeTreeDataPart & part) const override; + + /// Fetch the part only if some replica has it on shared storage like S3 + bool tryToFetchIfShared(const IMergeTreeDataPart & part, const DiskPtr & disk, const String & path) override; + + /// Get the best replica having this partition on S3 + String getSharedDataReplica(const IMergeTreeDataPart & part) const; + private: /// Get a sequential consistent view of current parts. ReplicatedMergeTreeQuorumAddedParts::PartitionIdToMaxBlock getMaxAddedBlocks() const; @@ -234,6 +258,8 @@ private: using LogEntry = ReplicatedMergeTreeLogEntry; using LogEntryPtr = LogEntry::Ptr; + using MergeTreeData::MutableDataPartPtr; + zkutil::ZooKeeperPtr current_zookeeper; /// Use only the methods below. mutable std::mutex current_zookeeper_mutex; /// To recreate the session in the background thread. @@ -289,6 +315,9 @@ private: /// Event that is signalled (and is reset) by the restarting_thread when the ZooKeeper session expires. Poco::Event partial_shutdown_event {false}; /// Poco::Event::EVENT_MANUALRESET + /// Limiting parallel fetches per node + static std::atomic_uint total_fetches; + /// Limiting parallel fetches per one table std::atomic_uint current_table_fetches {0}; @@ -370,8 +399,7 @@ private: String getChecksumsForZooKeeper(const MergeTreeDataPartChecksums & checksums) const; /// Accepts a PreComitted part, atomically checks its checksums with ones on other replicas and commit the part - DataPartsVector checkPartChecksumsAndCommit(Transaction & transaction, - const DataPartPtr & part); + DataPartsVector checkPartChecksumsAndCommit(Transaction & transaction, const DataPartPtr & part); bool partIsAssignedToBackgroundOperation(const DataPartPtr & part) const override; @@ -384,6 +412,8 @@ private: /// Just removes part from ZooKeeper using previous method void removePartFromZooKeeper(const String & part_name); + void removePartsFromFilesystem(const DataPartsVector & parts); + /// Quickly removes big set of parts from ZooKeeper (using async multi queries) void removePartsFromZooKeeper(zkutil::ZooKeeperPtr & zookeeper, const Strings & part_names, NameSet * parts_should_be_retried = nullptr); @@ -401,6 +431,9 @@ private: */ bool executeLogEntry(LogEntry & entry); + /// Look up the part for the entry in the detached/ folder. + /// Returns nullptr if the part is corrupt or missing. + MutableDataPartPtr attachPartHelperFoundValidPart(const LogEntry& entry) const; void executeDropRange(const LogEntry & entry); @@ -488,11 +521,16 @@ private: /// Exchange parts.
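The shared-data methods declared above coordinate zero-copy S3 replication through plain znodes: every replica that references a part's S3 blobs owns one leaf node under the part, and the blobs may be deleted only when no leaves remain. Here is a sketch of the layout and the create-with-retry pattern against a generic ZooKeeper-style client; createAncestors/createIfNotExists are modeled on zkutil, and the bool return is an assumption of this sketch:

#include <string>

// Node layout used by lockSharedData():
//   <zookeeper_path>/zero_copy_s3/shared/<part_name>/<unique_id>/<replica_name>
template <typename ZooKeeperClient>
void lockSharedDataSketch(ZooKeeperClient & zk, const std::string & zookeeper_path,
                          const std::string & part_name, const std::string & unique_id,
                          const std::string & replica_name)
{
    const std::string node = zookeeper_path + "/zero_copy_s3/shared/"
        + part_name + "/" + unique_id + "/" + replica_name;

    // Another replica may remove the path between the two calls below,
    // so the real code retries up to 5 times on ZNONODE; so does this sketch.
    for (int attempt = 0; attempt < 5; ++attempt)
    {
        zk.createAncestors(node);
        if (zk.createIfNotExists(node, "lock"))
            return;
    }
}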
+ ConnectionTimeouts getFetchPartHTTPTimeouts(ContextPtr context); + /** Returns an empty string if no one has a part. */ String findReplicaHavingPart(const String & part_name, bool active); + static String findReplicaHavingPart(const String & part_name, const String & zookeeper_path_, zkutil::ZooKeeper::Ptr zookeeper_); bool checkReplicaHavePart(const String & replica, const String & part_name); + bool checkIfDetachedPartExists(const String & part_name); + bool checkIfDetachedPartitionExists(const String & partition_name); /** Find replica having specified part or any part that covers it. * If active = true, consider only active replicas. @@ -515,6 +553,17 @@ private: size_t quorum, zkutil::ZooKeeper::Ptr zookeeper_ = nullptr); + /** Download the specified part from the specified replica. + * Used to replace a local part with the same S3-shared part in hybrid storage. + * Returns false if the part is already being fetched right now. + */ + bool fetchExistsPart( + const String & part_name, + const StorageMetadataPtr & metadata_snapshot, + const String & replica_path, + DiskPtr replaced_disk, + String replaced_part_path); + /// Required only to avoid races between executeLogEntry and fetchPartition std::unordered_set currently_fetching_parts; std::mutex currently_fetching_parts_mutex; @@ -577,14 +626,19 @@ private: bool dropPart(zkutil::ZooKeeperPtr & zookeeper, String part_name, LogEntry & entry, bool detach, bool throw_if_noop); bool dropAllPartsInPartition( - zkutil::ZooKeeper & zookeeper, String & partition_id, LogEntry & entry, const Context & query_context, bool detach); + zkutil::ZooKeeper & zookeeper, String & partition_id, LogEntry & entry, ContextPtr query_context, bool detach); // Partition helpers - void dropPartition(const ASTPtr & partition, bool detach, bool drop_part, const Context & query_context, bool throw_if_noop) override; - PartitionCommandsResultInfo attachPartition(const ASTPtr & partition, const StorageMetadataPtr & metadata_snapshot, bool part, const Context & query_context) override; - void replacePartitionFrom(const StoragePtr & source_table, const ASTPtr & partition, bool replace, const Context & query_context) override; - void movePartitionToTable(const StoragePtr & dest_table, const ASTPtr & partition, const Context & query_context) override; - void fetchPartition(const ASTPtr & partition, const StorageMetadataPtr & metadata_snapshot, const String & from, const Context & query_context) override; + void dropPartition(const ASTPtr & partition, bool detach, bool drop_part, ContextPtr query_context, bool throw_if_noop) override; + PartitionCommandsResultInfo attachPartition(const ASTPtr & partition, const StorageMetadataPtr & metadata_snapshot, bool part, ContextPtr query_context) override; + void replacePartitionFrom(const StoragePtr & source_table, const ASTPtr & partition, bool replace, ContextPtr query_context) override; + void movePartitionToTable(const StoragePtr & dest_table, const ASTPtr & partition, ContextPtr query_context) override; + void fetchPartition( + const ASTPtr & partition, + const StorageMetadataPtr & metadata_snapshot, + const String & from, + bool fetch_part, + ContextPtr query_context) override; /// Check granularity of already existing replicated table in zookeeper if it exists /// return true if it's fixed @@ -598,9 +652,9 @@ private: void startBackgroundMovesIfNeeded() override; - std::set getPartitionIdsAffectedByCommands(const
MutationCommands & commands, ContextPtr query_context) const; PartitionBlockNumbersHolder allocateBlockNumbersInAffectedPartitions( - const MutationCommands & commands, const Context & query_context, const zkutil::ZooKeeperPtr & zookeeper) const; + const MutationCommands & commands, ContextPtr query_context, const zkutil::ZooKeeperPtr & zookeeper) const; protected: /** If not 'attach', either creates a new table in ZK, or adds a replica to an existing table. @@ -612,7 +666,7 @@ protected: const StorageID & table_id_, const String & relative_data_path_, const StorageInMemoryMetadata & metadata_, - Context & context_, + ContextPtr context_, const String & date_column_name, const MergingParams & merging_params_, std::unique_ptr settings_, diff --git a/src/Storages/StorageS3.cpp b/src/Storages/StorageS3.cpp index a31a7fa0944..2f25fb43e74 100644 --- a/src/Storages/StorageS3.cpp +++ b/src/Storages/StorageS3.cpp @@ -45,147 +45,276 @@ namespace ErrorCodes extern const int UNEXPECTED_EXPRESSION; extern const int S3_ERROR; } - - -namespace +class StorageS3Source::DisclosedGlobIterator::Impl { - class StorageS3Source : public SourceWithProgress + +public: + Impl(Aws::S3::S3Client & client_, const S3::URI & globbed_uri_) + : client(client_), globbed_uri(globbed_uri_) { - public: + std::lock_guard lock(mutex); - static Block getHeader(Block sample_block, bool with_path_column, bool with_file_column) + if (globbed_uri.bucket.find_first_of("*?{") != globbed_uri.bucket.npos) + throw Exception("Expression cannot have wildcards inside bucket name", ErrorCodes::UNEXPECTED_EXPRESSION); + + const String key_prefix = globbed_uri.key.substr(0, globbed_uri.key.find_first_of("*?{")); + + /// We don't have to list the bucket, because there are no asterisks. + if (key_prefix.size() == globbed_uri.key.size()) { - if (with_path_column) - sample_block.insert({DataTypeString().createColumn(), std::make_shared(), "_path"}); - if (with_file_column) - sample_block.insert({DataTypeString().createColumn(), std::make_shared(), "_file"}); - - return sample_block; + buffer.emplace_back(globbed_uri.key); + buffer_iter = buffer.begin(); + is_finished = true; + return; } - StorageS3Source( - bool need_path, - bool need_file, - const String & format, - String name_, - const Block & sample_block, - const Context & context, - const ColumnsDescription & columns, - UInt64 max_block_size, - const CompressionMethod compression_method, - const std::shared_ptr & client, - const String & bucket, - const String & key) - : SourceWithProgress(getHeader(sample_block, need_path, need_file)) - , name(std::move(name_)) - , with_file_column(need_file) - , with_path_column(need_path) - , file_path(bucket + "/" + key) - { - read_buf = wrapReadBufferWithCompressionMethod(std::make_unique(client, bucket, key), compression_method); - auto input_format = FormatFactory::instance().getInput(format, *read_buf, sample_block, context, max_block_size); - reader = std::make_shared(input_format); + request.SetBucket(globbed_uri.bucket); + request.SetPrefix(key_prefix); + matcher = std::make_unique(makeRegexpPatternFromGlobs(globbed_uri.key)); + fillInternalBufferAssumeLocked(); + } - if (columns.hasDefaults()) - reader = std::make_shared(reader, columns, context); + String next() + { + std::lock_guard lock(mutex); + return nextAssumeLocked(); + } + +private: + + String nextAssumeLocked() + { + if (buffer_iter != buffer.end()) + { + auto answer = *buffer_iter; + ++buffer_iter; + return answer; } - String getName() const override - { - return name; - } - - Chunk
generate() override - { - if (!reader) - return {}; - - if (!initialized) - { - reader->readSuffix(); - initialized = true; - } - - if (auto block = reader->read()) - { - auto columns = block.getColumns(); - UInt64 num_rows = block.rows(); - - if (with_path_column) - columns.push_back(DataTypeString().createColumnConst(num_rows, file_path)->convertToFullColumnIfConst()); - if (with_file_column) - { - size_t last_slash_pos = file_path.find_last_of('/'); - columns.push_back(DataTypeString().createColumnConst(num_rows, file_path.substr( - last_slash_pos + 1))->convertToFullColumnIfConst()); - } - - return Chunk(std::move(columns), num_rows); - } - - reader.reset(); - + if (is_finished) return {}; - } - private: - String name; - std::unique_ptr read_buf; - BlockInputStreamPtr reader; - bool initialized = false; - bool with_file_column = false; - bool with_path_column = false; - String file_path; - }; + fillInternalBufferAssumeLocked(); - class StorageS3BlockOutputStream : public IBlockOutputStream + return nextAssumeLocked(); + } + + void fillInternalBufferAssumeLocked() { - public: - StorageS3BlockOutputStream( - const String & format, - const Block & sample_block_, - const Context & context, - const CompressionMethod compression_method, - const std::shared_ptr & client, - const String & bucket, - const String & key, - size_t min_upload_part_size, - size_t max_single_part_upload_size) - : sample_block(sample_block_) + buffer.clear(); + + outcome = client.ListObjectsV2(request); + if (!outcome.IsSuccess()) + throw Exception(ErrorCodes::S3_ERROR, "Could not list objects in bucket {} with prefix {}, S3 exception: {}, message: {}", + quoteString(request.GetBucket()), quoteString(request.GetPrefix()), + backQuote(outcome.GetError().GetExceptionName()), quoteString(outcome.GetError().GetMessage())); + + const auto & result_batch = outcome.GetResult().GetContents(); + + buffer.reserve(result_batch.size()); + for (const auto & row : result_batch) { - write_buf = wrapWriteBufferWithCompressionMethod( - std::make_unique(client, bucket, key, min_upload_part_size, max_single_part_upload_size), compression_method, 3); - writer = FormatFactory::instance().getOutputStream(format, *write_buf, sample_block, context); + String key = row.GetKey(); + if (re2::RE2::FullMatch(key, *matcher)) + buffer.emplace_back(std::move(key)); + } + /// Set iterator only after the whole batch is processed + buffer_iter = buffer.begin(); + + request.SetContinuationToken(outcome.GetResult().GetNextContinuationToken()); + + /// It returns false when all objects were returned + is_finished = !outcome.GetResult().GetIsTruncated(); + } + + std::mutex mutex; + Strings buffer; + Strings::iterator buffer_iter; + Aws::S3::S3Client client; + S3::URI globbed_uri; + Aws::S3::Model::ListObjectsV2Request request; + Aws::S3::Model::ListObjectsV2Outcome outcome; + std::unique_ptr matcher; + bool is_finished{false}; +}; + +StorageS3Source::DisclosedGlobIterator::DisclosedGlobIterator(Aws::S3::S3Client & client_, const S3::URI & globbed_uri_) + : pimpl(std::make_shared(client_, globbed_uri_)) {} + +String StorageS3Source::DisclosedGlobIterator::next() +{ + return pimpl->next(); +} + + +Block StorageS3Source::getHeader(Block sample_block, bool with_path_column, bool with_file_column) +{ + if (with_path_column) + sample_block.insert({DataTypeString().createColumn(), std::make_shared(), "_path"}); + if (with_file_column) + sample_block.insert({DataTypeString().createColumn(), std::make_shared(), "_file"}); + + return sample_block; +} + 
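DisclosedGlobIterator above hides the paginated ListObjectsV2 calls behind a thread-safe next(). Its core is turning a glob into a regular expression and filtering keys; below is a condensed stand-alone model using std::regex instead of re2, with a pre-fetched vector standing in for the paginated listing (brace alternation like {a,b}, which makeRegexpPatternFromGlobs also supports, is omitted here):

#include <regex>
#include <string>
#include <vector>

std::string globToRegex(const std::string & glob)
{
    static const std::string metachars = R"(\^$.|+(){}[])";
    std::string re;
    for (char c : glob)
    {
        if (c == '*')                                       re += "[^/]*";  // '*' stays within one path segment
        else if (c == '?')                                  re += "[^/]";
        else if (metachars.find(c) != std::string::npos)  { re += '\\'; re += c; }
        else                                                re += c;
    }
    return re;
}

std::vector<std::string> matchKeys(const std::vector<std::string> & listing, const std::string & glob)
{
    const std::regex matcher(globToRegex(glob));
    std::vector<std::string> out;
    for (const auto & key : listing)
        if (std::regex_match(key, matcher))
            out.push_back(key);
    return out;
}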
+StorageS3Source::StorageS3Source( + bool need_path, + bool need_file, + const String & format_, + String name_, + const Block & sample_block_, + ContextPtr context_, + const ColumnsDescription & columns_, + UInt64 max_block_size_, + UInt64 s3_max_single_read_retries_, + const String compression_hint_, + const std::shared_ptr & client_, + const String & bucket_, + std::shared_ptr file_iterator_) + : SourceWithProgress(getHeader(sample_block_, need_path, need_file)) + , WithContext(context_) + , name(std::move(name_)) + , bucket(bucket_) + , format(format_) + , columns_desc(columns_) + , max_block_size(max_block_size_) + , s3_max_single_read_retries(s3_max_single_read_retries_) + , compression_hint(compression_hint_) + , client(client_) + , sample_block(sample_block_) + , with_file_column(need_file) + , with_path_column(need_path) + , file_iterator(file_iterator_) +{ + initialize(); +} + + +bool StorageS3Source::initialize() +{ + String current_key = (*file_iterator)(); + if (current_key.empty()) + return false; + + file_path = bucket + "/" + current_key; + + read_buf = wrapReadBufferWithCompressionMethod( + std::make_unique(client, bucket, current_key, s3_max_single_read_retries), chooseCompressionMethod(current_key, compression_hint)); + auto input_format = FormatFactory::instance().getInput(format, *read_buf, sample_block, getContext(), max_block_size); + reader = std::make_shared(input_format); + + if (columns_desc.hasDefaults()) + reader = std::make_shared(reader, columns_desc, getContext()); + + initialized = false; + return true; +} + +String StorageS3Source::getName() const +{ + return name; +} + +Chunk StorageS3Source::generate() +{ + if (!reader) + return {}; + + if (!initialized) + { + reader->readPrefix(); + initialized = true; + } + + if (auto block = reader->read()) + { + auto columns = block.getColumns(); + UInt64 num_rows = block.rows(); + + if (with_path_column) + columns.push_back(DataTypeString().createColumnConst(num_rows, file_path)->convertToFullColumnIfConst()); + if (with_file_column) + { + size_t last_slash_pos = file_path.find_last_of('/'); + columns.push_back(DataTypeString().createColumnConst(num_rows, file_path.substr( + last_slash_pos + 1))->convertToFullColumnIfConst()); } - Block getHeader() const override - { - return sample_block; - } + return Chunk(std::move(columns), num_rows); + } - void write(const Block & block) override - { - writer->write(block); - } + reader->readSuffix(); + reader.reset(); + read_buf.reset(); - void writePrefix() override - { - writer->writePrefix(); - } + if (!initialize()) + return {}; - void writeSuffix() override + return generate(); +} + + +class StorageS3BlockOutputStream : public IBlockOutputStream +{ +public: + StorageS3BlockOutputStream( + const String & format, + const Block & sample_block_, + ContextPtr context, + const CompressionMethod compression_method, + const std::shared_ptr & client, + const String & bucket, + const String & key, + size_t min_upload_part_size, + size_t max_single_part_upload_size) + : sample_block(sample_block_) + { + write_buf = wrapWriteBufferWithCompressionMethod( + std::make_unique(client, bucket, key, min_upload_part_size, max_single_part_upload_size), compression_method, 3); + writer = FormatFactory::instance().getOutputStreamParallelIfPossible(format, *write_buf, sample_block, context); + } + + Block getHeader() const override + { + return sample_block; + } + + void write(const Block & block) override + { + writer->write(block); + } + + void writePrefix() override + { + 
writer->writePrefix(); + } + + void flush() override + { + writer->flush(); + } + + void writeSuffix() override + { + try { writer->writeSuffix(); writer->flush(); write_buf->finalize(); } + catch (...) + { + /// Stop ParallelFormattingOutputFormat correctly. + writer.reset(); + throw; + } + } - private: - Block sample_block; - std::unique_ptr write_buf; - BlockOutputStreamPtr writer; - }; -} +private: + Block sample_block; + std::unique_ptr write_buf; + BlockOutputStreamPtr writer; +}; StorageS3::StorageS3( @@ -194,90 +323,31 @@ StorageS3::StorageS3( const String & secret_access_key_, const StorageID & table_id_, const String & format_name_, + UInt64 s3_max_single_read_retries_, UInt64 min_upload_part_size_, UInt64 max_single_part_upload_size_, UInt64 max_connections_, const ColumnsDescription & columns_, const ConstraintsDescription & constraints_, - const Context & context_, - const String & compression_method_) + ContextPtr context_, + const String & compression_method_, + bool distributed_processing_) : IStorage(table_id_) - , uri(uri_) - , access_key_id(access_key_id_) - , secret_access_key(secret_access_key_) - , max_connections(max_connections_) - , global_context(context_.getGlobalContext()) + , client_auth{uri_, access_key_id_, secret_access_key_, max_connections_, {}, {}} /// Client and settings will be updated later , format_name(format_name_) + , s3_max_single_read_retries(s3_max_single_read_retries_) , min_upload_part_size(min_upload_part_size_) , max_single_part_upload_size(max_single_part_upload_size_) , compression_method(compression_method_) , name(uri_.storage_name) + , distributed_processing(distributed_processing_) { - global_context.getRemoteHostFilter().checkURL(uri_.uri); + context_->getGlobalContext()->getRemoteHostFilter().checkURL(uri_.uri); StorageInMemoryMetadata storage_metadata; storage_metadata.setColumns(columns_); storage_metadata.setConstraints(constraints_); setInMemoryMetadata(storage_metadata); - updateAuthSettings(context_); -} - - -namespace -{ - /* "Recursive" directory listing with matched paths as a result. - * Have the same method in StorageFile. 
- */ -Strings listFilesWithRegexpMatching(Aws::S3::S3Client & client, const S3::URI & globbed_uri) -{ - if (globbed_uri.bucket.find_first_of("*?{") != globbed_uri.bucket.npos) - { - throw Exception("Expression can not have wildcards inside bucket name", ErrorCodes::UNEXPECTED_EXPRESSION); - } - - const String key_prefix = globbed_uri.key.substr(0, globbed_uri.key.find_first_of("*?{")); - if (key_prefix.size() == globbed_uri.key.size()) - { - return {globbed_uri.key}; - } - - Aws::S3::Model::ListObjectsV2Request request; - request.SetBucket(globbed_uri.bucket); - request.SetPrefix(key_prefix); - - re2::RE2 matcher(makeRegexpPatternFromGlobs(globbed_uri.key)); - Strings result; - Aws::S3::Model::ListObjectsV2Outcome outcome; - int page = 0; - do - { - ++page; - outcome = client.ListObjectsV2(request); - if (!outcome.IsSuccess()) - { - if (page > 1) - throw Exception(ErrorCodes::S3_ERROR, "Could not list objects in bucket {} with prefix {}, page {}, S3 exception: {}, message: {}", - quoteString(request.GetBucket()), quoteString(request.GetPrefix()), page, - backQuote(outcome.GetError().GetExceptionName()), quoteString(outcome.GetError().GetMessage())); - - throw Exception(ErrorCodes::S3_ERROR, "Could not list objects in bucket {} with prefix {}, S3 exception: {}, message: {}", - quoteString(request.GetBucket()), quoteString(request.GetPrefix()), - backQuote(outcome.GetError().GetExceptionName()), quoteString(outcome.GetError().GetMessage())); - } - - for (const auto & row : outcome.GetResult().GetContents()) - { - String key = row.GetKey(); - if (re2::RE2::FullMatch(key, matcher)) - result.emplace_back(std::move(key)); - } - - request.SetContinuationToken(outcome.GetResult().GetNextContinuationToken()); - } - while (outcome.GetResult().GetIsTruncated()); - - return result; -} - + updateClientAndAuthSettings(context_, client_auth); } @@ -285,12 +355,12 @@ Pipe StorageS3::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & /*query_info*/, - const Context & context, + ContextPtr local_context, QueryProcessingStage::Enum /*processed_stage*/, size_t max_block_size, unsigned num_streams) { - updateAuthSettings(context); + updateClientAndAuthSettings(local_context, client_auth); Pipes pipes; bool need_path_column = false; @@ -303,73 +373,93 @@ Pipe StorageS3::read( need_file_column = true; } - for (const String & key : listFilesWithRegexpMatching(*client, uri)) + std::shared_ptr iterator_wrapper{nullptr}; + if (distributed_processing) + { + iterator_wrapper = std::make_shared( + [callback = local_context->getReadTaskCallback()]() -> String { + return callback(); + }); + } + else + { + /// Iterate through disclosed globs and make a source for each file + auto glob_iterator = std::make_shared(*client_auth.client, client_auth.uri); + iterator_wrapper = std::make_shared([glob_iterator]() + { + return glob_iterator->next(); + }); + } + + for (size_t i = 0; i < num_streams; ++i) + { pipes.emplace_back(std::make_shared( need_path_column, need_file_column, format_name, getName(), metadata_snapshot->getSampleBlock(), - context, + local_context, metadata_snapshot->getColumns(), max_block_size, - chooseCompressionMethod(uri.key, compression_method), - client, - uri.bucket, - key)); - + s3_max_single_read_retries, + compression_method, + client_auth.client, + client_auth.uri.bucket, + iterator_wrapper)); + } auto pipe = Pipe::unitePipes(std::move(pipes)); - // It's possible to have many buckets read from s3, resize(num_streams) might open too many handles at the 
same time. - // Using narrowPipe instead. + narrowPipe(pipe, num_streams); return pipe; } -BlockOutputStreamPtr StorageS3::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, const Context & context) +BlockOutputStreamPtr StorageS3::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, ContextPtr local_context) { - updateAuthSettings(context); + updateClientAndAuthSettings(local_context, client_auth); return std::make_shared( format_name, metadata_snapshot->getSampleBlock(), - global_context, - chooseCompressionMethod(uri.key, compression_method), - client, - uri.bucket, - uri.key, + local_context, + chooseCompressionMethod(client_auth.uri.key, compression_method), + client_auth.client, + client_auth.uri.bucket, + client_auth.uri.key, min_upload_part_size, max_single_part_upload_size); } -void StorageS3::updateAuthSettings(const Context & context) +void StorageS3::updateClientAndAuthSettings(ContextPtr ctx, StorageS3::ClientAuthentificaiton & upd) { - auto settings = context.getStorageS3Settings().getSettings(uri.uri.toString()); - if (client && (!access_key_id.empty() || settings == auth_settings)) + auto settings = ctx->getStorageS3Settings().getSettings(upd.uri.uri.toString()); + if (upd.client && (!upd.access_key_id.empty() || settings == upd.auth_settings)) return; - Aws::Auth::AWSCredentials credentials(access_key_id, secret_access_key); + Aws::Auth::AWSCredentials credentials(upd.access_key_id, upd.secret_access_key); HeaderCollection headers; - if (access_key_id.empty()) + if (upd.access_key_id.empty()) { credentials = Aws::Auth::AWSCredentials(settings.access_key_id, settings.secret_access_key); headers = settings.headers; } S3::PocoHTTPClientConfiguration client_configuration = S3::ClientFactory::instance().createClientConfiguration( - context.getRemoteHostFilter(), context.getGlobalContext().getSettingsRef().s3_max_redirects); + ctx->getRemoteHostFilter(), ctx->getGlobalContext()->getSettingsRef().s3_max_redirects); - client_configuration.endpointOverride = uri.endpoint; - client_configuration.maxConnections = max_connections; + client_configuration.endpointOverride = upd.uri.endpoint; + client_configuration.maxConnections = upd.max_connections; - client = S3::ClientFactory::instance().create( + upd.client = S3::ClientFactory::instance().create( client_configuration, - uri.is_virtual_hosted_style, + upd.uri.is_virtual_hosted_style, credentials.GetAWSAccessKeyId(), credentials.GetAWSSecretKey(), settings.server_side_encryption_customer_key_base64, std::move(headers), - settings.use_environment_credentials.value_or(global_context.getConfigRef().getBool("s3.use_environment_credentials", false))); + settings.use_environment_credentials.value_or(ctx->getConfigRef().getBool("s3.use_environment_credentials", false)), + settings.use_insecure_imds_request.value_or(ctx->getConfigRef().getBool("s3.use_insecure_imds_request", false))); - auth_settings = std::move(settings); + upd.auth_settings = std::move(settings); } void registerStorageS3Impl(const String & name, StorageFactory & factory) @@ -380,10 +470,11 @@ void registerStorageS3Impl(const String & name, StorageFactory & factory) if (engine_args.size() < 2 || engine_args.size() > 5) throw Exception( - "Storage S3 requires 2 to 5 arguments: url, [access_key_id, secret_access_key], name of used format and [compression_method].", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); + "Storage S3 requires 2 to 5 arguments: url, [access_key_id, secret_access_key], name of used format and 
[compression_method].", + ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); for (auto & engine_arg : engine_args) - engine_arg = evaluateConstantExpressionOrIdentifierAsLiteral(engine_arg, args.local_context); + engine_arg = evaluateConstantExpressionOrIdentifierAsLiteral(engine_arg, args.getLocalContext()); String url = engine_args[0]->as().value.safeGet(); Poco::URI uri (url); @@ -397,9 +488,10 @@ void registerStorageS3Impl(const String & name, StorageFactory & factory) secret_access_key = engine_args[2]->as().value.safeGet(); } - UInt64 min_upload_part_size = args.local_context.getSettingsRef().s3_min_upload_part_size; - UInt64 max_single_part_upload_size = args.local_context.getSettingsRef().s3_max_single_part_upload_size; - UInt64 max_connections = args.local_context.getSettingsRef().s3_max_connections; + UInt64 s3_max_single_read_retries = args.getLocalContext()->getSettingsRef().s3_max_single_read_retries; + UInt64 min_upload_part_size = args.getLocalContext()->getSettingsRef().s3_min_upload_part_size; + UInt64 max_single_part_upload_size = args.getLocalContext()->getSettingsRef().s3_max_single_part_upload_size; + UInt64 max_connections = args.getLocalContext()->getSettingsRef().s3_max_connections; String compression_method; String format_name; @@ -420,12 +512,13 @@ void registerStorageS3Impl(const String & name, StorageFactory & factory) secret_access_key, args.table_id, format_name, + s3_max_single_read_retries, min_upload_part_size, max_single_part_upload_size, max_connections, args.columns, args.constraints, - args.context, + args.getContext(), compression_method ); }, diff --git a/src/Storages/StorageS3.h b/src/Storages/StorageS3.h index 46d8c9276a2..b068f82cfb1 100644 --- a/src/Storages/StorageS3.h +++ b/src/Storages/StorageS3.h @@ -4,11 +4,20 @@ #if USE_AWS_S3 +#include + +#include + #include #include + +#include #include #include #include +#include +#include +#include namespace Aws::S3 { @@ -18,12 +27,74 @@ namespace Aws::S3 namespace DB { +class StorageS3SequentialSource; +class StorageS3Source : public SourceWithProgress, WithContext +{ +public: + class DisclosedGlobIterator + { + public: + DisclosedGlobIterator(Aws::S3::S3Client &, const S3::URI &); + String next(); + private: + class Impl; + /// shared_ptr to have copy constructor + std::shared_ptr pimpl; + }; + + using IteratorWrapper = std::function; + + static Block getHeader(Block sample_block, bool with_path_column, bool with_file_column); + + StorageS3Source( + bool need_path, + bool need_file, + const String & format, + String name_, + const Block & sample_block, + ContextPtr context_, + const ColumnsDescription & columns_, + UInt64 max_block_size_, + UInt64 s3_max_single_read_retries_, + const String compression_hint_, + const std::shared_ptr & client_, + const String & bucket, + std::shared_ptr file_iterator_); + + String getName() const override; + + Chunk generate() override; + +private: + String name; + String bucket; + String file_path; + String format; + ColumnsDescription columns_desc; + UInt64 max_block_size; + UInt64 s3_max_single_read_retries; + String compression_hint; + std::shared_ptr client; + Block sample_block; + + + std::unique_ptr read_buf; + BlockInputStreamPtr reader; + bool initialized = false; + bool with_file_column = false; + bool with_path_column = false; + std::shared_ptr file_iterator; + + /// Recreate ReadBuffer and BlockInputStream for each file. + bool initialize(); +}; + /** * This class represents table engine for external S3 urls. 
* It sends HTTP GET to server when select is called and * HTTP PUT when insert is called. */ -class StorageS3 : public ext::shared_ptr_helper, public IStorage +class StorageS3 : public ext::shared_ptr_helper, public IStorage, WithContext { public: StorageS3(const S3::URI & uri, @@ -31,13 +102,15 @@ public: const String & secret_access_key, const StorageID & table_id_, const String & format_name_, + UInt64 s3_max_single_read_retries_, UInt64 min_upload_part_size_, UInt64 max_single_part_upload_size_, UInt64 max_connections_, const ColumnsDescription & columns_, const ConstraintsDescription & constraints_, - const Context & context_, - const String & compression_method_ = ""); + ContextPtr context_, + const String & compression_method_ = "", + bool distributed_processing_ = false); String getName() const override { @@ -48,31 +121,41 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; - BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, const Context & context) override; + BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, ContextPtr context) override; NamesAndTypesList getVirtuals() const override; private: - const S3::URI uri; - const String access_key_id; - const String secret_access_key; - const UInt64 max_connections; - const Context & global_context; + + friend class StorageS3Cluster; + friend class TableFunctionS3Cluster; + + struct ClientAuthentificaiton + { + const S3::URI uri; + const String access_key_id; + const String secret_access_key; + const UInt64 max_connections; + std::shared_ptr client; + S3AuthSettings auth_settings; + }; + + ClientAuthentificaiton client_auth; String format_name; + UInt64 s3_max_single_read_retries; size_t min_upload_part_size; size_t max_single_part_upload_size; String compression_method; - std::shared_ptr client; String name; - S3AuthSettings auth_settings; + const bool distributed_processing; - void updateAuthSettings(const Context & context); + static void updateClientAndAuthSettings(ContextPtr, ClientAuthentificaiton &); }; } diff --git a/src/Storages/StorageS3Cluster.cpp b/src/Storages/StorageS3Cluster.cpp new file mode 100644 index 00000000000..8afc0e44023 --- /dev/null +++ b/src/Storages/StorageS3Cluster.cpp @@ -0,0 +1,166 @@ +#include "Storages/StorageS3Cluster.h" + +#if !defined(ARCADIA_BUILD) +#include +#endif + +#if USE_AWS_S3 + +#include "Common/Exception.h" +#include +#include "Client/Connection.h" +#include "Core/QueryProcessingStage.h" +#include +#include "DataStreams/RemoteBlockInputStream.h" +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "Processors/Sources/SourceWithProgress.h" +#include +#include +#include +#include +#include +#include + +#include +#include +#include + +#include +#include +#include +#include +#include + +namespace DB +{ + + +StorageS3Cluster::StorageS3Cluster( + const String & filename_, + const String & access_key_id_, + const String & secret_access_key_, + const StorageID & table_id_, + String cluster_name_, + const String & format_name_, + UInt64 max_connections_, + const ColumnsDescription & columns_, + const ConstraintsDescription & constraints_, + 
ContextPtr context_,
+    const String & compression_method_)
+    : IStorage(table_id_)
+    , client_auth{S3::URI{Poco::URI{filename_}}, access_key_id_, secret_access_key_, max_connections_, {}, {}}
+    , filename(filename_)
+    , cluster_name(cluster_name_)
+    , format_name(format_name_)
+    , compression_method(compression_method_)
+{
+    StorageInMemoryMetadata storage_metadata;
+    storage_metadata.setColumns(columns_);
+    storage_metadata.setConstraints(constraints_);
+    setInMemoryMetadata(storage_metadata);
+    StorageS3::updateClientAndAuthSettings(context_, client_auth);
+}
+
+/// This code is executed on the initiator
+Pipe StorageS3Cluster::read(
+    const Names & column_names,
+    const StorageMetadataPtr & metadata_snapshot,
+    SelectQueryInfo & query_info,
+    ContextPtr context,
+    QueryProcessingStage::Enum processed_stage,
+    size_t /*max_block_size*/,
+    unsigned /*num_streams*/)
+{
+    StorageS3::updateClientAndAuthSettings(context, client_auth);
+
+    auto cluster = context->getCluster(cluster_name)->getClusterWithReplicasAsShards(context->getSettings());
+
+    auto iterator = std::make_shared<StorageS3Source::DisclosedGlobIterator>(*client_auth.client, client_auth.uri);
+    auto callback = std::make_shared<StorageS3Source::IteratorWrapper>([iterator]() mutable -> String
+    {
+        return iterator->next();
+    });
+
+    /// Calculate the header. This is significant because some columns can be thrown away, e.g. in a query with count(*).
+    Block header =
+        InterpreterSelectQuery(query_info.query, context, SelectQueryOptions(processed_stage).analyze()).getSampleBlock();
+
+    const Scalars & scalars = context->hasQueryContext() ? context->getQueryContext()->getScalars() : Scalars{};
+
+    Pipes pipes;
+    connections.reserve(cluster->getShardCount());
+
+    const bool add_agg_info = processed_stage == QueryProcessingStage::WithMergeableState;
+
+    for (const auto & replicas : cluster->getShardsAddresses())
+    {
+        /// There will be only one replica, because we consider each replica as a shard
+        for (const auto & node : replicas)
+        {
+            connections.emplace_back(std::make_shared<Connection>(
+                node.host_name, node.port, context->getGlobalContext()->getCurrentDatabase(),
+                node.user, node.password, node.cluster, node.cluster_secret,
+                "S3ClusterInitiator",
+                node.compression,
+                node.secure
+            ));
+
+            /// For an unknown reason the global context is passed to IStorage::read(),
+            /// so the read task callback is passed as a constructor argument instead. It is more explicit.
+            auto remote_query_executor = std::make_shared<RemoteQueryExecutor>(
+                *connections.back(), queryToString(query_info.query), header, context,
+                /*throttler=*/nullptr, scalars, Tables(), processed_stage, callback);
+
+            pipes.emplace_back(std::make_shared<RemoteSource>(remote_query_executor, add_agg_info, false));
+        }
+    }
+
+    metadata_snapshot->check(column_names, getVirtuals(), getStorageID());
+    return Pipe::unitePipes(std::move(pipes));
+}
+
+QueryProcessingStage::Enum StorageS3Cluster::getQueryProcessingStage(
+    ContextPtr context, QueryProcessingStage::Enum to_stage, SelectQueryInfo &) const
+{
+    /// The initiator executes the query on the remote nodes.
+    if (context->getClientInfo().query_kind == ClientInfo::QueryKind::INITIAL_QUERY)
+        if (to_stage >= QueryProcessingStage::Enum::WithMergeableState)
+            return QueryProcessingStage::Enum::WithMergeableState;
+
+    /// Follower just reads the data.
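+    /// (on the workers the query arrives as a secondary query, so the storage
+    /// only fetches the columns for the S3 keys handed out by the initiator
+    /// via the read task callback; the rest of the pipeline runs above it).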
+ return QueryProcessingStage::Enum::FetchColumns; +} + + +NamesAndTypesList StorageS3Cluster::getVirtuals() const +{ + return NamesAndTypesList{ + {"_path", std::make_shared()}, + {"_file", std::make_shared()} + }; +} + + +} + +#endif diff --git a/src/Storages/StorageS3Cluster.h b/src/Storages/StorageS3Cluster.h new file mode 100644 index 00000000000..c98840d62fc --- /dev/null +++ b/src/Storages/StorageS3Cluster.h @@ -0,0 +1,63 @@ +#pragma once + +#if !defined(ARCADIA_BUILD) +#include +#endif + +#if USE_AWS_S3 + +#include "Client/Connection.h" +#include +#include +#include + +#include +#include +#include "ext/shared_ptr_helper.h" + +namespace DB +{ + +class Context; + +struct ClientAuthentificationBuilder +{ + String access_key_id; + String secret_access_key; + UInt64 max_connections; +}; + +class StorageS3Cluster : public ext::shared_ptr_helper, public IStorage +{ + friend struct ext::shared_ptr_helper; +public: + std::string getName() const override { return "S3Cluster"; } + + Pipe read(const Names &, const StorageMetadataPtr &, SelectQueryInfo &, + ContextPtr, QueryProcessingStage::Enum, size_t /*max_block_size*/, unsigned /*num_streams*/) override; + + QueryProcessingStage::Enum getQueryProcessingStage(ContextPtr, QueryProcessingStage::Enum, SelectQueryInfo &) const override; + + NamesAndTypesList getVirtuals() const override; + +protected: + StorageS3Cluster( + const String & filename_, const String & access_key_id_, const String & secret_access_key_, const StorageID & table_id_, + String cluster_name_, const String & format_name_, UInt64 max_connections_, const ColumnsDescription & columns_, + const ConstraintsDescription & constraints_, ContextPtr context_, const String & compression_method_); + +private: + /// Connections from initiator to other nodes + std::vector> connections; + StorageS3::ClientAuthentificaiton client_auth; + + String filename; + String cluster_name; + String format_name; + String compression_method; +}; + + +} + +#endif diff --git a/src/Storages/StorageS3Settings.cpp b/src/Storages/StorageS3Settings.cpp index 6d97e6fae95..8aafc12a688 100644 --- a/src/Storages/StorageS3Settings.cpp +++ b/src/Storages/StorageS3Settings.cpp @@ -36,6 +36,11 @@ void StorageS3Settings::loadFromConfig(const String & config_elem, const Poco::U { use_environment_credentials = config.getBool(config_elem + "." + key + ".use_environment_credentials"); } + std::optional use_insecure_imds_request; + if (config.has(config_elem + "." + key + ".use_insecure_imds_request")) + { + use_insecure_imds_request = config.getBool(config_elem + "." 
+ key + ".use_insecure_imds_request"); + } HeaderCollection headers; Poco::Util::AbstractConfiguration::Keys subconfig_keys; @@ -52,7 +57,7 @@ void StorageS3Settings::loadFromConfig(const String & config_elem, const Poco::U } } - settings.emplace(endpoint, S3AuthSettings{std::move(access_key_id), std::move(secret_access_key), std::move(server_side_encryption_customer_key_base64), std::move(headers), use_environment_credentials}); + settings.emplace(endpoint, S3AuthSettings{std::move(access_key_id), std::move(secret_access_key), std::move(server_side_encryption_customer_key_base64), std::move(headers), use_environment_credentials, use_insecure_imds_request}); } } } diff --git a/src/Storages/StorageS3Settings.h b/src/Storages/StorageS3Settings.h index 29c6c3bb415..66e776dbea2 100644 --- a/src/Storages/StorageS3Settings.h +++ b/src/Storages/StorageS3Settings.h @@ -33,12 +33,14 @@ struct S3AuthSettings HeaderCollection headers; std::optional use_environment_credentials; + std::optional use_insecure_imds_request; inline bool operator==(const S3AuthSettings & other) const { return access_key_id == other.access_key_id && secret_access_key == other.secret_access_key && server_side_encryption_customer_key_base64 == other.server_side_encryption_customer_key_base64 && headers == other.headers - && use_environment_credentials == other.use_environment_credentials; + && use_environment_credentials == other.use_environment_credentials + && use_insecure_imds_request == other.use_insecure_imds_request; } }; diff --git a/src/Storages/StorageSet.cpp b/src/Storages/StorageSet.cpp index d64042f0c1e..34bbfed874f 100644 --- a/src/Storages/StorageSet.cpp +++ b/src/Storages/StorageSet.cpp @@ -99,7 +99,7 @@ void SetOrJoinBlockOutputStream::writeSuffix() } -BlockOutputStreamPtr StorageSetOrJoinBase::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, const Context & /*context*/) +BlockOutputStreamPtr StorageSetOrJoinBase::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, ContextPtr /*context*/) { UInt64 id = ++increment; return std::make_shared(*this, metadata_snapshot, path, path + "tmp/", toString(id) + ".bin", persistent); @@ -156,7 +156,7 @@ size_t StorageSet::getSize() const { return set->getTotalRowCount(); } std::optional StorageSet::totalRows(const Settings &) const { return set->getTotalRowCount(); } std::optional StorageSet::totalBytes(const Settings &) const { return set->getTotalByteCount(); } -void StorageSet::truncate(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, const Context &, TableExclusiveLockHolder &) +void StorageSet::truncate(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, ContextPtr, TableExclusiveLockHolder &) { disk->removeRecursive(path); disk->createDirectories(path); @@ -246,7 +246,7 @@ void registerStorageSet(StorageFactory & factory) if (has_settings) set_settings.loadFromQuery(*args.storage_def); - DiskPtr disk = args.context.getDisk(set_settings.disk); + DiskPtr disk = args.getContext()->getDisk(set_settings.disk); return StorageSet::create(disk, args.relative_data_path, args.table_id, args.columns, args.constraints, set_settings.persistent); }, StorageFactory::StorageFeatures{ .supports_settings = true, }); } diff --git a/src/Storages/StorageSet.h b/src/Storages/StorageSet.h index 9b9078f7dd5..b87dcf21a23 100644 --- a/src/Storages/StorageSet.h +++ b/src/Storages/StorageSet.h @@ -23,7 +23,7 @@ class StorageSetOrJoinBase : public IStorage public: void rename(const String & new_path_to_table_data, const 
StorageID & new_table_id) override;
 
-    BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, const Context & context) override;
+    BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, ContextPtr context) override;
 
     bool storesDataOnDisk() const override { return true; }
     Strings getDataPaths() const override { return {path}; }
@@ -72,7 +72,7 @@ public:
     /// Access the insides.
     SetPtr & getSet() { return set; }
 
-    void truncate(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, const Context &, TableExclusiveLockHolder &) override;
+    void truncate(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, ContextPtr, TableExclusiveLockHolder &) override;
 
     std::optional<UInt64> totalRows(const Settings & settings) const override;
     std::optional<UInt64> totalBytes(const Settings & settings) const override;
 
diff --git a/src/Storages/StorageStripeLog.cpp b/src/Storages/StorageStripeLog.cpp
index db4fbff78cd..d845dfb71f2 100644
--- a/src/Storages/StorageStripeLog.cpp
+++ b/src/Storages/StorageStripeLog.cpp
@@ -228,6 +228,11 @@ public:
     storage.file_checker.save();
 
     done = true;
+
+    /// The unlock must be done from the same thread as the lock, and the dtor may be
+    /// called from a different thread, so it must be done here (at least in
+    /// the case when no exceptions occur).
+    lock.unlock();
 }
 
 private:
@@ -302,9 +307,9 @@ void StorageStripeLog::rename(const String & new_path_to_table_data, const Stora
 }
 
-static std::chrono::seconds getLockTimeout(const Context & context)
+static std::chrono::seconds getLockTimeout(ContextPtr context)
 {
-    const Settings & settings = context.getSettingsRef();
+    const Settings & settings = context->getSettingsRef();
     Int64 lock_timeout = settings.lock_acquire_timeout.totalSeconds();
     if (settings.max_execution_time.totalSeconds() != 0 && settings.max_execution_time.totalSeconds() < lock_timeout)
         lock_timeout = settings.max_execution_time.totalSeconds();
@@ -316,7 +321,7 @@ Pipe StorageStripeLog::read(
     const Names & column_names,
     const StorageMetadataPtr & metadata_snapshot,
     SelectQueryInfo & /*query_info*/,
-    const Context & context,
+    ContextPtr context,
     QueryProcessingStage::Enum /*processed_stage*/,
     const size_t /*max_block_size*/,
     unsigned num_streams)
@@ -353,7 +358,7 @@ Pipe StorageStripeLog::read(
     std::advance(end, (stream + 1) * size / num_streams);
 
     pipes.emplace_back(std::make_shared<StripeLogSource>(
-        *this, metadata_snapshot, column_names, context.getSettingsRef().max_read_buffer_size, index, begin, end));
+        *this, metadata_snapshot, column_names, context->getSettingsRef().max_read_buffer_size, index, begin, end));
 }
 
 /// We do not keep read lock directly at the time of reading, because we read ranges of data that do not change.
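The `lock.unlock()` added to `writeSuffix` above (and to `TinyLogBlockOutputStream::writeSuffix` further down) encodes a thread-ownership rule: a mutex must be unlocked by the thread that locked it, while a block output stream's destructor may run on a different thread. A minimal sketch of the pattern under that assumption; `WriterSketch` is illustrative, not a ClickHouse class:

#include <chrono>
#include <mutex>
#include <shared_mutex>
#include <stdexcept>

/// The lock is acquired in the thread that creates the stream and must be
/// released in writeSuffix(), which runs in that same thread. Relying on the
/// destructor would be wrong: it may run in a different thread, and unlocking
/// a mutex from a thread that does not own it is undefined behaviour.
struct WriterSketch
{
    WriterSketch(std::shared_timed_mutex & rwlock, std::chrono::seconds timeout)
        : lock(rwlock, timeout)
    {
        if (!lock)
            throw std::runtime_error("Lock timeout exceeded");
    }

    void writeSuffix()
    {
        /// ... flush the data and update the file checker here ...
        lock.unlock();   /// Released by the owning thread on the success path.
        done = true;
    }

    ~WriterSketch()
    {
        /// On the success path the lock is already released, so unique_lock's
        /// destructor is a no-op. On the exception path it still unlocks here,
        /// which is only safe if the destructor runs in the locking thread:
        /// exactly the caveat the patch comment makes about exceptions.
    }

    std::unique_lock<std::shared_timed_mutex> lock;
    bool done = false;
};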
@@ -362,7 +367,7 @@ Pipe StorageStripeLog::read( } -BlockOutputStreamPtr StorageStripeLog::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, const Context & context) +BlockOutputStreamPtr StorageStripeLog::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, ContextPtr context) { std::unique_lock lock(rwlock, getLockTimeout(context)); if (!lock) @@ -372,7 +377,7 @@ BlockOutputStreamPtr StorageStripeLog::write(const ASTPtr & /*query*/, const Sto } -CheckResults StorageStripeLog::checkData(const ASTPtr & /* query */, const Context & context) +CheckResults StorageStripeLog::checkData(const ASTPtr & /* query */, ContextPtr context) { std::shared_lock lock(rwlock, getLockTimeout(context)); if (!lock) @@ -381,7 +386,7 @@ CheckResults StorageStripeLog::checkData(const ASTPtr & /* query */, const Conte return file_checker.check(); } -void StorageStripeLog::truncate(const ASTPtr &, const StorageMetadataPtr &, const Context &, TableExclusiveLockHolder &) +void StorageStripeLog::truncate(const ASTPtr &, const StorageMetadataPtr &, ContextPtr, TableExclusiveLockHolder &) { disk->clearDirectory(table_path); file_checker = FileChecker{disk, table_path + "sizes.json"}; @@ -402,11 +407,11 @@ void registerStorageStripeLog(StorageFactory & factory) ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); String disk_name = getDiskName(*args.storage_def); - DiskPtr disk = args.context.getDisk(disk_name); + DiskPtr disk = args.getContext()->getDisk(disk_name); return StorageStripeLog::create( disk, args.relative_data_path, args.table_id, args.columns, args.constraints, - args.attach, args.context.getSettings().max_compress_block_size); + args.attach, args.getContext()->getSettings().max_compress_block_size); }, features); } diff --git a/src/Storages/StorageStripeLog.h b/src/Storages/StorageStripeLog.h index 5782e2526d3..7fad94870dc 100644 --- a/src/Storages/StorageStripeLog.h +++ b/src/Storages/StorageStripeLog.h @@ -29,21 +29,21 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; - BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, const Context & context) override; + BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, ContextPtr context) override; void rename(const String & new_path_to_table_data, const StorageID & new_table_id) override; - CheckResults checkData(const ASTPtr & /* query */, const Context & /* context */) override; + CheckResults checkData(const ASTPtr & /* query */, ContextPtr /* context */) override; bool storesDataOnDisk() const override { return true; } Strings getDataPaths() const override { return {DB::fullPath(disk, table_path)}; } - void truncate(const ASTPtr &, const StorageMetadataPtr &, const Context &, TableExclusiveLockHolder&) override; + void truncate(const ASTPtr &, const StorageMetadataPtr &, ContextPtr, TableExclusiveLockHolder&) override; protected: StorageStripeLog( @@ -68,7 +68,7 @@ private: size_t max_compress_block_size; FileChecker file_checker; - mutable std::shared_timed_mutex rwlock; + std::shared_timed_mutex rwlock; Poco::Logger * log; }; diff --git a/src/Storages/StorageTableFunction.h b/src/Storages/StorageTableFunction.h index 7bdd6469ebc..7d909165d5f 100644 --- a/src/Storages/StorageTableFunction.h +++ 
b/src/Storages/StorageTableFunction.h @@ -73,7 +73,7 @@ public: const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override @@ -94,7 +94,9 @@ public: pipe.getHeader().getColumnsWithTypeAndName(), to_header.getColumnsWithTypeAndName(), ActionsDAG::MatchColumnsMode::Name); - auto convert_actions = std::make_shared(convert_actions_dag); + auto convert_actions = std::make_shared( + convert_actions_dag, + ExpressionActionsSettings::fromSettings(context->getSettingsRef())); pipe.addSimpleTransform([&](const Block & header) { @@ -107,7 +109,7 @@ public: BlockOutputStreamPtr write( const ASTPtr & query, const StorageMetadataPtr & metadata_snapshot, - const Context & context) override + ContextPtr context) override { auto storage = getNested(); auto cached_structure = metadata_snapshot->getSampleBlock(); diff --git a/src/Storages/StorageTinyLog.cpp b/src/Storages/StorageTinyLog.cpp index 06e2c21b1a8..41c2961e929 100644 --- a/src/Storages/StorageTinyLog.cpp +++ b/src/Storages/StorageTinyLog.cpp @@ -109,11 +109,11 @@ private: using FileStreams = std::map>; FileStreams streams; - using DeserializeState = IDataType::DeserializeBinaryBulkStatePtr; + using DeserializeState = ISerialization::DeserializeBinaryBulkStatePtr; using DeserializeStates = std::map; DeserializeStates deserialize_states; - void readData(const NameAndTypePair & name_and_type, ColumnPtr & column, UInt64 limit, IDataType::SubstreamsCache & cache); + void readData(const NameAndTypePair & name_and_type, ColumnPtr & column, UInt64 limit, ISerialization::SubstreamsCache & cache); }; @@ -132,7 +132,7 @@ Chunk TinyLogSource::generate() return {}; } - std::unordered_map caches; + std::unordered_map caches; for (const auto & name_type : columns) { ColumnPtr column; @@ -162,16 +162,18 @@ Chunk TinyLogSource::generate() void TinyLogSource::readData(const NameAndTypePair & name_and_type, - ColumnPtr & column, UInt64 limit, IDataType::SubstreamsCache & cache) + ColumnPtr & column, UInt64 limit, ISerialization::SubstreamsCache & cache) { - IDataType::DeserializeBinaryBulkSettings settings; /// TODO Use avg_value_size_hint. + ISerialization::DeserializeBinaryBulkSettings settings; /// TODO Use avg_value_size_hint. 
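+    /// The SubstreamsCache passed in from generate() is shared between subcolumns
+    /// of the same column: when the stream getter below returns nullptr, that
+    /// substream has already been deserialized and is reused from `cache`.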
const auto & [name, type] = name_and_type; - settings.getter = [&] (const IDataType::SubstreamPath & path) -> ReadBuffer * + auto serialization = IDataType::getSerialization(name_and_type); + + settings.getter = [&] (const ISerialization::SubstreamPath & path) -> ReadBuffer * { - if (cache.count(IDataType::getSubcolumnNameForStream(path))) + if (cache.count(ISerialization::getSubcolumnNameForStream(path))) return nullptr; - String stream_name = IDataType::getFileNameForStream(name_and_type, path); + String stream_name = ISerialization::getFileNameForStream(name_and_type, path); auto & stream = streams[stream_name]; if (!stream) { @@ -184,9 +186,9 @@ void TinyLogSource::readData(const NameAndTypePair & name_and_type, }; if (deserialize_states.count(name) == 0) - type->deserializeBinaryBulkStatePrefix(settings, deserialize_states[name]); + serialization->deserializeBinaryBulkStatePrefix(settings, deserialize_states[name]); - type->deserializeBinaryBulkWithMultipleStreams(column, limit, settings, deserialize_states[name], &cache); + serialization->deserializeBinaryBulkWithMultipleStreams(column, limit, settings, deserialize_states[name], &cache); } @@ -261,24 +263,24 @@ private: using FileStreams = std::map>; FileStreams streams; - using SerializeState = IDataType::SerializeBinaryBulkStatePtr; + using SerializeState = ISerialization::SerializeBinaryBulkStatePtr; using SerializeStates = std::map; SerializeStates serialize_states; using WrittenStreams = std::set; - IDataType::OutputStreamGetter createStreamGetter(const NameAndTypePair & column, WrittenStreams & written_streams); + ISerialization::OutputStreamGetter createStreamGetter(const NameAndTypePair & column, WrittenStreams & written_streams); void writeData(const NameAndTypePair & name_and_type, const IColumn & column, WrittenStreams & written_streams); }; -IDataType::OutputStreamGetter TinyLogBlockOutputStream::createStreamGetter( +ISerialization::OutputStreamGetter TinyLogBlockOutputStream::createStreamGetter( const NameAndTypePair & column, WrittenStreams & written_streams) { - return [&] (const IDataType::SubstreamPath & path) -> WriteBuffer * + return [&] (const ISerialization::SubstreamPath & path) -> WriteBuffer * { - String stream_name = IDataType::getFileNameForStream(column, path); + String stream_name = ISerialization::getFileNameForStream(column, path); if (!written_streams.insert(stream_name).second) return nullptr; @@ -298,8 +300,9 @@ IDataType::OutputStreamGetter TinyLogBlockOutputStream::createStreamGetter( void TinyLogBlockOutputStream::writeData(const NameAndTypePair & name_and_type, const IColumn & column, WrittenStreams & written_streams) { - IDataType::SerializeBinaryBulkSettings settings; + ISerialization::SerializeBinaryBulkSettings settings; const auto & [name, type] = name_and_type; + auto serialization = type->getDefaultSerialization(); if (serialize_states.count(name) == 0) { @@ -307,11 +310,11 @@ void TinyLogBlockOutputStream::writeData(const NameAndTypePair & name_and_type, /// Use different WrittenStreams set, or we get nullptr for them in `serializeBinaryBulkWithMultipleStreams` WrittenStreams prefix_written_streams; settings.getter = createStreamGetter(name_and_type, prefix_written_streams); - type->serializeBinaryBulkStatePrefix(settings, serialize_states[name]); + serialization->serializeBinaryBulkStatePrefix(settings, serialize_states[name]); } settings.getter = createStreamGetter(name_and_type, written_streams); - type->serializeBinaryBulkWithMultipleStreams(column, 0, 0, settings, 
serialize_states[name]);
+    serialization->serializeBinaryBulkWithMultipleStreams(column, 0, 0, settings, serialize_states[name]);
 }
 
@@ -328,14 +331,15 @@ void TinyLogBlockOutputStream::writeSuffix()
     }
 
     WrittenStreams written_streams;
-    IDataType::SerializeBinaryBulkSettings settings;
+    ISerialization::SerializeBinaryBulkSettings settings;
     for (const auto & column : getHeader())
     {
         auto it = serialize_states.find(column.name);
         if (it != serialize_states.end())
         {
             settings.getter = createStreamGetter(NameAndTypePair(column.name, column.type), written_streams);
-            column.type->serializeBinaryBulkStateSuffix(settings, it->second);
+            auto serialization = column.type->getDefaultSerialization();
+            serialization->serializeBinaryBulkStateSuffix(settings, it->second);
         }
     }
 
@@ -353,6 +357,11 @@ void TinyLogBlockOutputStream::writeSuffix()
     for (const auto & file : column_files)
         storage.file_checker.update(file);
     storage.file_checker.save();
+
+    /// The unlock must be done from the same thread as the lock, and the dtor may be
+    /// called from a different thread, so it must be done here (at least in
+    /// the case when no exceptions occur).
+    lock.unlock();
 }
 
@@ -423,9 +432,9 @@ void StorageTinyLog::addFiles(const NameAndTypePair & column)
         throw Exception("Duplicate column with name " + name + " in constructor of StorageTinyLog.", ErrorCodes::DUPLICATE_COLUMN);
 
-    IDataType::StreamCallback stream_callback = [&] (const IDataType::SubstreamPath & substream_path, const IDataType & /* substream_type */)
+    ISerialization::StreamCallback stream_callback = [&] (const ISerialization::SubstreamPath & substream_path)
     {
-        String stream_name = IDataType::getFileNameForStream(column, substream_path);
+        String stream_name = ISerialization::getFileNameForStream(column, substream_path);
         if (!files.count(stream_name))
         {
             ColumnData column_data;
@@ -434,8 +443,9 @@ void StorageTinyLog::addFiles(const NameAndTypePair & column)
         }
     };
 
-    IDataType::SubstreamPath substream_path;
-    type->enumerateStreams(stream_callback, substream_path);
+    ISerialization::SubstreamPath substream_path;
+    auto serialization = type->getDefaultSerialization();
+    serialization->enumerateStreams(stream_callback, substream_path);
 }
 
@@ -455,9 +465,9 @@ void StorageTinyLog::rename(const String & new_path_to_table_data, const Storage
 }
 
-static std::chrono::seconds getLockTimeout(const Context & context)
+static std::chrono::seconds getLockTimeout(ContextPtr context)
 {
-    const Settings & settings = context.getSettingsRef();
+    const Settings & settings = context->getSettingsRef();
     Int64 lock_timeout = settings.lock_acquire_timeout.totalSeconds();
     if (settings.max_execution_time.totalSeconds() != 0 && settings.max_execution_time.totalSeconds() < lock_timeout)
         lock_timeout = settings.max_execution_time.totalSeconds();
@@ -469,7 +479,7 @@ Pipe StorageTinyLog::read(
     const Names & column_names,
     const StorageMetadataPtr & metadata_snapshot,
     SelectQueryInfo & /*query_info*/,
-    const Context & context,
+    ContextPtr context,
     QueryProcessingStage::Enum /*processed_stage*/,
     const size_t max_block_size,
     const unsigned /*num_streams*/)
@@ -480,7 +490,7 @@ Pipe StorageTinyLog::read(
     // When reading, we lock the entire storage, because we only have one file
     // per column and can't modify it concurrently.
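    // The shared_lock below is constructed with the timeout computed by
    // getLockTimeout(); its operator bool() is false when the lock could not
    // be acquired within lock_acquire_timeout, capped by max_execution_time.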
- const Settings & settings = context.getSettingsRef(); + const Settings & settings = context->getSettingsRef(); std::shared_lock lock{rwlock, getLockTimeout(context)}; if (!lock) @@ -496,13 +506,13 @@ Pipe StorageTinyLog::read( } -BlockOutputStreamPtr StorageTinyLog::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, const Context & context) +BlockOutputStreamPtr StorageTinyLog::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, ContextPtr context) { return std::make_shared(*this, metadata_snapshot, std::unique_lock{rwlock, getLockTimeout(context)}); } -CheckResults StorageTinyLog::checkData(const ASTPtr & /* query */, const Context & context) +CheckResults StorageTinyLog::checkData(const ASTPtr & /* query */, ContextPtr context) { std::shared_lock lock(rwlock, getLockTimeout(context)); if (!lock) @@ -512,7 +522,7 @@ CheckResults StorageTinyLog::checkData(const ASTPtr & /* query */, const Context } void StorageTinyLog::truncate( - const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, const Context &, TableExclusiveLockHolder &) + const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, ContextPtr, TableExclusiveLockHolder &) { disk->clearDirectory(table_path); @@ -538,11 +548,11 @@ void registerStorageTinyLog(StorageFactory & factory) ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); String disk_name = getDiskName(*args.storage_def); - DiskPtr disk = args.context.getDisk(disk_name); + DiskPtr disk = args.getContext()->getDisk(disk_name); return StorageTinyLog::create( disk, args.relative_data_path, args.table_id, args.columns, args.constraints, - args.attach, args.context.getSettings().max_compress_block_size); + args.attach, args.getContext()->getSettings().max_compress_block_size); }, features); } diff --git a/src/Storages/StorageTinyLog.h b/src/Storages/StorageTinyLog.h index b76e8e34dfb..01652169b62 100644 --- a/src/Storages/StorageTinyLog.h +++ b/src/Storages/StorageTinyLog.h @@ -28,22 +28,22 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; - BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, const Context & context) override; + BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, ContextPtr context) override; void rename(const String & new_path_to_table_data, const StorageID & new_table_id) override; - CheckResults checkData(const ASTPtr & /* query */, const Context & /* context */) override; + CheckResults checkData(const ASTPtr & /* query */, ContextPtr /* context */) override; bool storesDataOnDisk() const override { return true; } Strings getDataPaths() const override { return {DB::fullPath(disk, table_path)}; } bool supportsSubcolumns() const override { return true; } - void truncate(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, const Context &, TableExclusiveLockHolder &) override; + void truncate(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, ContextPtr, TableExclusiveLockHolder &) override; protected: StorageTinyLog( @@ -70,7 +70,7 @@ private: Files files; FileChecker file_checker; - mutable std::shared_timed_mutex rwlock; + std::shared_timed_mutex rwlock; Poco::Logger * log; diff --git a/src/Storages/StorageURL.cpp b/src/Storages/StorageURL.cpp index ca984f9ece9..e6c2f52f925 
100644 --- a/src/Storages/StorageURL.cpp +++ b/src/Storages/StorageURL.cpp @@ -1,4 +1,3 @@ -#include #include #include @@ -22,6 +21,8 @@ #include #include #include +#include +#include namespace DB @@ -29,11 +30,12 @@ namespace DB namespace ErrorCodes { extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH; + extern const int NETWORK_ERROR; } IStorageURLBase::IStorageURLBase( const Poco::URI & uri_, - const Context & context_, + ContextPtr /*context_*/, const StorageID & table_id_, const String & format_name_, const std::optional & format_settings_, @@ -42,13 +44,10 @@ IStorageURLBase::IStorageURLBase( const String & compression_method_) : IStorage(table_id_) , uri(uri_) - , context_global(context_) , compression_method(compression_method_) , format_name(format_name_) , format_settings(format_settings_) { - context_global.getRemoteHostFilter().checkURL(uri); - StorageInMemoryMetadata storage_metadata; storage_metadata.setColumns(columns_); storage_metadata.setConstraints(constraints_); @@ -67,7 +66,7 @@ namespace const std::optional & format_settings, String name_, const Block & sample_block, - const Context & context, + ContextPtr context, const ColumnsDescription & columns, UInt64 max_block_size, const ConnectionTimeouts & timeouts, @@ -99,11 +98,11 @@ namespace method, std::move(callback), timeouts, - context.getSettingsRef().max_http_get_redirects, + context->getSettingsRef().max_http_get_redirects, Poco::Net::HTTPBasicCredentials{}, DBMS_DEFAULT_BUFFER_SIZE, header, - context.getRemoteHostFilter()), + context->getRemoteHostFilter()), compression_method); auto input_format = FormatFactory::instance().getInput(format, *read_buf, sample_block, context, max_block_size, format_settings); @@ -147,7 +146,7 @@ StorageURLBlockOutputStream::StorageURLBlockOutputStream(const Poco::URI & uri, const String & format, const std::optional & format_settings, const Block & sample_block_, - const Context & context, + ContextPtr context, const ConnectionTimeouts & timeouts, const CompressionMethod compression_method) : sample_block(sample_block_) @@ -187,7 +186,7 @@ std::vector> IStorageURLBase::getReadURIPara const Names & /*column_names*/, const StorageMetadataPtr & /*metadata_snapshot*/, const SelectQueryInfo & /*query_info*/, - const Context & /*context*/, + ContextPtr /*context*/, QueryProcessingStage::Enum & /*processed_stage*/, size_t /*max_block_size*/) const { @@ -198,7 +197,7 @@ std::function IStorageURLBase::getReadPOSTDataCallback( const Names & /*column_names*/, const StorageMetadataPtr & /*metadata_snapshot*/, const SelectQueryInfo & /*query_info*/, - const Context & /*context*/, + ContextPtr /*context*/, QueryProcessingStage::Enum & /*processed_stage*/, size_t /*max_block_size*/) const { @@ -210,13 +209,13 @@ Pipe IStorageURLBase::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr local_context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned /*num_streams*/) { auto request_uri = uri; - auto params = getReadURIParams(column_names, metadata_snapshot, query_info, context, processed_stage, max_block_size); + auto params = getReadURIParams(column_names, metadata_snapshot, query_info, local_context, processed_stage, max_block_size); for (const auto & [param, value] : params) request_uri.addQueryParameter(param, value); @@ -225,26 +224,153 @@ Pipe IStorageURLBase::read( getReadMethod(), getReadPOSTDataCallback( column_names, metadata_snapshot, query_info, - context, 
processed_stage, max_block_size),
+            local_context, processed_stage, max_block_size),
         format_name,
         format_settings,
         getName(),
         getHeaderBlock(column_names, metadata_snapshot),
-        context,
+        local_context,
         metadata_snapshot->getColumns(),
         max_block_size,
-        ConnectionTimeouts::getHTTPTimeouts(context),
+        ConnectionTimeouts::getHTTPTimeouts(local_context),
         chooseCompressionMethod(request_uri.getPath(), compression_method)));
 }
 
-BlockOutputStreamPtr IStorageURLBase::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, const Context & /*context*/)
+
+Pipe StorageURLWithFailover::read(
+    const Names & column_names,
+    const StorageMetadataPtr & metadata_snapshot,
+    SelectQueryInfo & query_info,
+    ContextPtr local_context,
+    QueryProcessingStage::Enum processed_stage,
+    size_t max_block_size,
+    unsigned /*num_streams*/)
+{
+    auto params = getReadURIParams(column_names, metadata_snapshot, query_info, local_context, processed_stage, max_block_size);
+    WriteBufferFromOwnString error_message;
+    error_message << "Detailed description:";
+
+    for (const auto & uri_option : uri_options)
+    {
+        auto request_uri = uri_option;
+        for (const auto & [param, value] : params)
+            request_uri.addQueryParameter(param, value);
+        try
+        {
+            /// The check for URI accessibility is done in the constructor of ReadWriteBufferFromHTTP while creating StorageURLSource.
+            auto url_source = std::make_shared<StorageURLSource>(
+                request_uri,
+                getReadMethod(),
+                getReadPOSTDataCallback(
+                    column_names, metadata_snapshot, query_info,
+                    local_context, processed_stage, max_block_size),
+                format_name,
+                format_settings,
+                getName(),
+                getHeaderBlock(column_names, metadata_snapshot),
+                local_context,
+                metadata_snapshot->getColumns(),
+                max_block_size,
+                ConnectionTimeouts::getHTTPTimeouts(local_context),
+                chooseCompressionMethod(request_uri.getPath(), compression_method));
+
+            std::shuffle(uri_options.begin(), uri_options.end(), thread_local_rng);
+
+            return Pipe(url_source);
+        }
+        catch (...)
+        {
+            error_message << " Host: " << uri_option.getHost() << ", port: " << uri_option.getPort() << ", path: " << uri_option.getPath();
+            error_message << ", error: " << getCurrentExceptionMessage(false) << ";";
+
+            tryLogCurrentException(__PRETTY_FUNCTION__);
+        }
+    }
+
+    throw Exception(ErrorCodes::NETWORK_ERROR, "All URI options are unreachable.
{}", error_message.str()); +} + + +BlockOutputStreamPtr IStorageURLBase::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, ContextPtr context) { return std::make_shared(uri, format_name, - format_settings, metadata_snapshot->getSampleBlock(), context_global, - ConnectionTimeouts::getHTTPTimeouts(context_global), + format_settings, metadata_snapshot->getSampleBlock(), context, + ConnectionTimeouts::getHTTPTimeouts(context), chooseCompressionMethod(uri.toString(), compression_method)); } +StorageURL::StorageURL(const Poco::URI & uri_, + const StorageID & table_id_, + const String & format_name_, + const std::optional & format_settings_, + const ColumnsDescription & columns_, + const ConstraintsDescription & constraints_, + ContextPtr context_, + const String & compression_method_) + : IStorageURLBase(uri_, context_, table_id_, format_name_, + format_settings_, columns_, constraints_, compression_method_) +{ + context_->getRemoteHostFilter().checkURL(uri); +} + + +StorageURLWithFailover::StorageURLWithFailover( + const std::vector & uri_options_, + const StorageID & table_id_, + const String & format_name_, + const std::optional & format_settings_, + const ColumnsDescription & columns_, + const ConstraintsDescription & constraints_, + ContextPtr context_, + const String & compression_method_) + : StorageURL(Poco::URI(), table_id_, format_name_, format_settings_, columns_, constraints_, context_, compression_method_) +{ + for (const auto & uri_option : uri_options_) + { + Poco::URI poco_uri(uri_option); + context_->getRemoteHostFilter().checkURL(poco_uri); + uri_options.emplace_back(std::move(poco_uri)); + LOG_DEBUG(&Poco::Logger::get("StorageURLDistributed"), "Adding URL option: {}", uri_option); + } +} + + +FormatSettings StorageURL::getFormatSettingsFromArgs(const StorageFactory::Arguments & args) +{ + // Use format settings from global server context + settings from + // the SETTINGS clause of the create query. Settings from current + // session and user are ignored. + FormatSettings format_settings; + if (args.storage_def->settings) + { + FormatFactorySettings user_format_settings; + + // Apply changed settings from global context, but ignore the + // unknown ones, because we only have the format settings here. + const auto & changes = args.getContext()->getSettingsRef().changes(); + for (const auto & change : changes) + { + if (user_format_settings.has(change.name)) + { + user_format_settings.set(change.name, change.value); + } + } + + // Apply changes from SETTINGS clause, with validation. 
+ user_format_settings.applyChanges(args.storage_def->settings->changes); + + format_settings = getFormatSettings(args.getContext(), + user_format_settings); + } + else + { + format_settings = getFormatSettings(args.getContext()); + } + + return format_settings; +} + + void registerStorageURL(StorageFactory & factory) { factory.registerStorage("URL", [](const StorageFactory::Arguments & args) @@ -255,62 +381,30 @@ void registerStorageURL(StorageFactory & factory) throw Exception( "Storage URL requires 2 or 3 arguments: url, name of used format and optional compression method.", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); - engine_args[0] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[0], args.local_context); + engine_args[0] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[0], args.getLocalContext()); - String url = engine_args[0]->as().value.safeGet(); + const String & url = engine_args[0]->as().value.safeGet(); Poco::URI uri(url); - engine_args[1] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[1], args.local_context); + engine_args[1] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[1], args.getLocalContext()); - String format_name = engine_args[1]->as().value.safeGet(); + const String & format_name = engine_args[1]->as().value.safeGet(); - String compression_method; + String compression_method = "auto"; if (engine_args.size() == 3) { - engine_args[2] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[2], args.local_context); + engine_args[2] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[2], args.getLocalContext()); compression_method = engine_args[2]->as().value.safeGet(); } - else - { - compression_method = "auto"; - } - // Use format settings from global server context + settings from - // the SETTINGS clause of the create query. Settings from current - // session and user are ignored. - FormatSettings format_settings; - if (args.storage_def->settings) - { - FormatFactorySettings user_format_settings; - - // Apply changed settings from global context, but ignore the - // unknown ones, because we only have the format settings here. - const auto & changes = args.context.getSettingsRef().changes(); - for (const auto & change : changes) - { - if (user_format_settings.has(change.name)) - { - user_format_settings.set(change.name, change.value); - } - } - - // Apply changes from SETTINGS clause, with validation. 
- user_format_settings.applyChanges(args.storage_def->settings->changes); - - format_settings = getFormatSettings(args.context, - user_format_settings); - } - else - { - format_settings = getFormatSettings(args.context); - } + auto format_settings = StorageURL::getFormatSettingsFromArgs(args); return StorageURL::create( uri, args.table_id, format_name, format_settings, - args.columns, args.constraints, args.context, + args.columns, args.constraints, args.getContext(), compression_method); }, { diff --git a/src/Storages/StorageURL.h b/src/Storages/StorageURL.h index 21b2e3e27a1..012915c9b24 100644 --- a/src/Storages/StorageURL.h +++ b/src/Storages/StorageURL.h @@ -6,6 +6,7 @@ #include #include #include +#include namespace DB @@ -26,17 +27,17 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; - BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, const Context & context) override; + BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, ContextPtr context) override; protected: IStorageURLBase( const Poco::URI & uri_, - const Context & context_, + ContextPtr context_, const StorageID & id_, const String & format_name_, const std::optional & format_settings_, @@ -45,7 +46,6 @@ protected: const String & compression_method_); Poco::URI uri; - const Context & context_global; String compression_method; String format_name; // For URL engine, we use format settings from server context + `SETTINGS` @@ -54,14 +54,13 @@ protected: // In this case, format_settings is not set. 
     std::optional<FormatSettings> format_settings;
 
-private:
     virtual std::string getReadMethod() const;
 
     virtual std::vector<std::pair<std::string, std::string>> getReadURIParams(
         const Names & column_names,
         const StorageMetadataPtr & metadata_snapshot,
         const SelectQueryInfo & query_info,
-        const Context & context,
+        ContextPtr context,
         QueryProcessingStage::Enum & processed_stage,
         size_t max_block_size) const;
 
@@ -69,10 +68,11 @@ private:
         const Names & column_names,
         const StorageMetadataPtr & /*metadata_snapshot*/,
         const SelectQueryInfo & query_info,
-        const Context & context,
+        ContextPtr context,
         QueryProcessingStage::Enum & processed_stage,
         size_t max_block_size) const;
 
+private:
     virtual Block getHeaderBlock(const Names & column_names, const StorageMetadataPtr & metadata_snapshot) const = 0;
 };
 
@@ -84,9 +84,9 @@ public:
         const String & format,
         const std::optional<FormatSettings> & format_settings,
         const Block & sample_block_,
-        const Context & context,
+        ContextPtr context,
         const ConnectionTimeouts & timeouts,
-        const CompressionMethod compression_method);
+        CompressionMethod compression_method);
 
     Block getHeader() const override
     {
@@ -103,7 +103,7 @@ private:
     BlockOutputStreamPtr writer;
 };
 
-class StorageURL final : public ext::shared_ptr_helper<StorageURL>, public IStorageURLBase
+class StorageURL : public ext::shared_ptr_helper<StorageURL>, public IStorageURLBase
 {
     friend struct ext::shared_ptr_helper<StorageURL>;
 public:
@@ -113,12 +113,8 @@ public:
         const std::optional<FormatSettings> & format_settings_,
         const ColumnsDescription & columns_,
         const ConstraintsDescription & constraints_,
-        Context & context_,
-        const String & compression_method_)
-        : IStorageURLBase(uri_, context_, table_id_, format_name_,
-            format_settings_, columns_, constraints_, compression_method_)
-    {
-    }
+        ContextPtr context_,
+        const String & compression_method_);
 
     String getName() const override
     {
@@ -129,5 +125,35 @@ public:
     {
         return metadata_snapshot->getSampleBlock();
     }
+
+    static FormatSettings getFormatSettingsFromArgs(const StorageFactory::Arguments & args);
+};
+
+
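The class declared just below keeps a list of candidate URLs. This diff only declares it, so the following is a hedged sketch of the failover idea rather than the real `StorageURLWithFailover::read`:

```cpp
// Sketch under assumptions: try each candidate URL in order and rethrow the
// last error if none works; assumes uri_options is non-empty. try_read_one is
// a stand-in callable that would open e.g. an HTTP read buffer for one URL.
template <typename TryReadOne>
Pipe readFirstWorkingOption(const std::vector<String> & uri_options, TryReadOne try_read_one)
{
    std::exception_ptr last_error;
    for (const auto & option : uri_options)
    {
        try
        {
            return try_read_one(option);
        }
        catch (...)
        {
            last_error = std::current_exception();  // remember, then try the next URL
        }
    }
    std::rethrow_exception(last_error);
}
```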
+/// StorageURLWithFailover is allowed only for URL table function, not as a separate storage.
+class StorageURLWithFailover final : public StorageURL
+{
+public:
+    StorageURLWithFailover(
+        const std::vector<String> & uri_options_,
+        const StorageID & table_id_,
+        const String & format_name_,
+        const std::optional<FormatSettings> & format_settings_,
+        const ColumnsDescription & columns_,
+        const ConstraintsDescription & constraints_,
+        ContextPtr context_,
+        const String & compression_method_);
+
+    Pipe read(
+        const Names & column_names,
+        const StorageMetadataPtr & /*metadata_snapshot*/,
+        SelectQueryInfo & query_info,
+        ContextPtr context,
+        QueryProcessingStage::Enum processed_stage,
+        size_t max_block_size,
+        unsigned num_streams) override;
+
+private:
+    std::vector<String> uri_options;
 };
 }
diff --git a/src/Storages/StorageValues.cpp b/src/Storages/StorageValues.cpp
index 500deac5f25..ace5ca3667c 100644
--- a/src/Storages/StorageValues.cpp
+++ b/src/Storages/StorageValues.cpp
@@ -24,7 +24,7 @@ Pipe StorageValues::read(
     const Names & column_names,
     const StorageMetadataPtr & metadata_snapshot,
     SelectQueryInfo & /*query_info*/,
-    const Context & /*context*/,
+    ContextPtr /*context*/,
     QueryProcessingStage::Enum /*processed_stage*/,
     size_t /*max_block_size*/,
     unsigned /*num_streams*/)
diff --git a/src/Storages/StorageValues.h b/src/Storages/StorageValues.h
index 5729f245149..6ae33ed70f1 100644
--- a/src/Storages/StorageValues.h
+++ b/src/Storages/StorageValues.h
@@ -19,7 +19,7 @@ public:
         const Names & column_names,
         const StorageMetadataPtr & /*metadata_snapshot*/,
         SelectQueryInfo & query_info,
-        const Context & context,
+        ContextPtr context,
         QueryProcessingStage::Enum processed_stage,
         size_t max_block_size,
         unsigned num_streams) override;
diff --git a/src/Storages/StorageView.cpp b/src/Storages/StorageView.cpp
index 4d2bb6bdf15..75bd4b2967f 100644
--- a/src/Storages/StorageView.cpp
+++ b/src/Storages/StorageView.cpp
@@ -17,6 +17,8 @@
 #include
 #include
 #include
+#include <Processors/QueryPlan/Optimizations/QueryPlanOptimizationSettings.h>
+#include <Processors/QueryPlan/BuildQueryPipelineSettings.h>
 
 namespace DB
 {
@@ -52,14 +54,16 @@ Pipe StorageView::read(
     const Names & column_names,
     const StorageMetadataPtr & metadata_snapshot,
     SelectQueryInfo & query_info,
-    const Context & context,
+    ContextPtr context,
     QueryProcessingStage::Enum processed_stage,
     const size_t max_block_size,
     const unsigned num_streams)
 {
     QueryPlan plan;
     read(plan, column_names, metadata_snapshot, query_info, context, processed_stage, max_block_size, num_streams);
-    return plan.convertToPipe(QueryPlanOptimizationSettings(context.getSettingsRef()));
+    return plan.convertToPipe(
+        QueryPlanOptimizationSettings::fromContext(context),
+        BuildQueryPipelineSettings::fromContext(context));
 }
 
 void StorageView::read(
@@ -67,7 +71,7 @@ void StorageView::read(
     const Names & column_names,
     const StorageMetadataPtr & metadata_snapshot,
     SelectQueryInfo & query_info,
-    const Context & context,
+    ContextPtr context,
     QueryProcessingStage::Enum /*processed_stage*/,
     const size_t /*max_block_size*/,
     const unsigned /*num_streams*/)
diff --git a/src/Storages/StorageView.h b/src/Storages/StorageView.h
index 6f894ce2775..fa11472218d 100644
--- a/src/Storages/StorageView.h
+++ b/src/Storages/StorageView.h
@@ -25,7 +25,7 @@ public:
         const Names & column_names,
         const StorageMetadataPtr & /*metadata_snapshot*/,
         SelectQueryInfo & query_info,
-        const Context & context,
+        ContextPtr context,
         QueryProcessingStage::Enum processed_stage,
         size_t max_block_size,
         unsigned num_streams) override;
@@ -35,7 +35,7 @@ public:
         const Names & column_names,
         const StorageMetadataPtr & metadata_snapshot,
         SelectQueryInfo & query_info,
-        const Context & context,
+        ContextPtr context,
         QueryProcessingStage::Enum processed_stage,
         size_t max_block_size,
         unsigned num_streams) override;
diff --git a/src/Storages/StorageXDBC.cpp b/src/Storages/StorageXDBC.cpp
index f2f8cdb23f5..f94696c716b 100644
--- a/src/Storages/StorageXDBC.cpp
+++ b/src/Storages/StorageXDBC.cpp
@@ -14,6 +14,8 @@
 #include
 #include
 #include
+#include <Common/escapeForFileName.h>
+
 
 namespace DB
 {
@@ -28,7 +30,7 @@ StorageXDBC::StorageXDBC(
     const std::string & remote_database_name_,
     const std::string & remote_table_name_,
     const ColumnsDescription & columns_,
-    const Context & context_,
+    ContextPtr context_,
     const BridgeHelperPtr bridge_helper_)
     /// Please add support for constraints as soon as StorageODBC or JDBC will support insertion.
     : IStorageURLBase(Poco::URI(),
@@ -53,27 +55,21 @@ std::string StorageXDBC::getReadMethod() const
 }
 
 std::vector<std::pair<std::string, std::string>> StorageXDBC::getReadURIParams(
-    const Names & column_names,
-    const StorageMetadataPtr & metadata_snapshot,
+    const Names & /* column_names */,
+    const StorageMetadataPtr & /* metadata_snapshot */,
     const SelectQueryInfo & /*query_info*/,
-    const Context & /*context*/,
+    ContextPtr /*context*/,
     QueryProcessingStage::Enum & /*processed_stage*/,
     size_t max_block_size) const
 {
-    NamesAndTypesList cols;
-    for (const String & name : column_names)
-    {
-        auto column_data = metadata_snapshot->getColumns().getPhysical(name);
-        cols.emplace_back(column_data.name, column_data.type);
-    }
-    return bridge_helper->getURLParams(cols.toString(), max_block_size);
+    return bridge_helper->getURLParams(max_block_size);
 }
 
 std::function<void(std::ostream &)> StorageXDBC::getReadPOSTDataCallback(
-    const Names & /*column_names*/,
+    const Names & column_names,
     const StorageMetadataPtr & metadata_snapshot,
     const SelectQueryInfo & query_info,
-    const Context & context,
+    ContextPtr local_context,
     QueryProcessingStage::Enum & /*processed_stage*/,
     size_t /*max_block_size*/) const
 {
@@ -82,16 +78,30 @@ std::function<void(std::ostream &)> StorageXDBC::getReadPOSTDataCallback(
         bridge_helper->getIdentifierQuotingStyle(),
         remote_database_name,
         remote_table_name,
-        context);
+        local_context);
 
-    return [query](std::ostream & os) { os << "query=" << query; };
+    NamesAndTypesList cols;
+    for (const String & name : column_names)
+    {
+        auto column_data = metadata_snapshot->getColumns().getPhysical(name);
+        cols.emplace_back(column_data.name, column_data.type);
+    }
+
+    auto write_body_callback = [query, cols](std::ostream & os)
+    {
+        os << "sample_block=" << escapeForFileName(cols.toString());
+        os << "&";
+        os << "query=" << escapeForFileName(query);
+    };
+
+    return write_body_callback;
 }
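The new `write_body_callback` serializes two form fields instead of the previous single `query=` field. A standalone illustration of the body shape (the column dump and query text are stand-ins, and `escapeForFileName` is replaced by identity here, so real bodies are escaped):

```cpp
#include <iostream>
#include <sstream>
#include <string>

int main()
{
    std::string cols = "`a` UInt64, `b` String";     // stand-in for cols.toString()
    std::string query = "SELECT `a`, `b` FROM `t`";  // stand-in for the transformed query

    std::ostringstream os;
    os << "sample_block=" << cols;   // escapeForFileName(cols.toString()) in the real code
    os << "&";
    os << "query=" << query;         // escapeForFileName(query) in the real code

    std::cout << os.str() << '\n';   // sample_block=`a` UInt64, `b` String&query=SELECT ...
}
```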
 
 Pipe StorageXDBC::read(
     const Names & column_names,
     const StorageMetadataPtr & metadata_snapshot,
     SelectQueryInfo & query_info,
-    const Context & context,
+    ContextPtr local_context,
     QueryProcessingStage::Enum processed_stage,
     size_t max_block_size,
     unsigned num_streams)
@@ -99,35 +109,32 @@ Pipe StorageXDBC::read(
     metadata_snapshot->check(column_names, getVirtuals(), getStorageID());
 
     bridge_helper->startBridgeSync();
-    return IStorageURLBase::read(column_names, metadata_snapshot, query_info, context, processed_stage, max_block_size, num_streams);
+    return IStorageURLBase::read(column_names, metadata_snapshot, query_info, local_context, processed_stage, max_block_size, num_streams);
 }
 
-BlockOutputStreamPtr StorageXDBC::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, const Context & context)
+BlockOutputStreamPtr StorageXDBC::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, ContextPtr local_context)
 {
     bridge_helper->startBridgeSync();
 
-    NamesAndTypesList cols;
     Poco::URI request_uri = uri;
     request_uri.setPath("/write");
-    for (const String & name : metadata_snapshot->getSampleBlock().getNames())
-    {
-        auto column_data = metadata_snapshot->getColumns().getPhysical(name);
-        cols.emplace_back(column_data.name, column_data.type);
-    }
-    auto url_params = bridge_helper->getURLParams(cols.toString(), 65536);
+
+    auto url_params = bridge_helper->getURLParams(65536);
     for (const auto & [param, value] : url_params)
         request_uri.addQueryParameter(param, value);
+
     request_uri.addQueryParameter("db_name", remote_database_name);
     request_uri.addQueryParameter("table_name", remote_table_name);
     request_uri.addQueryParameter("format_name", format_name);
+    request_uri.addQueryParameter("sample_block", metadata_snapshot->getSampleBlock().getNamesAndTypesList().toString());
 
     return std::make_shared<StorageURLBlockOutputStream>(
         request_uri,
         format_name,
-        getFormatSettings(context),
+        getFormatSettings(local_context),
         metadata_snapshot->getSampleBlock(),
-        context,
-        ConnectionTimeouts::getHTTPTimeouts(context),
+        local_context,
+        ConnectionTimeouts::getHTTPTimeouts(local_context),
         chooseCompressionMethod(uri.toString(), compression_method));
 }
 
@@ -155,16 +162,16 @@ namespace
                 ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH);
 
             for (size_t i = 0; i < 3; ++i)
-                engine_args[i] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[i], args.local_context);
+                engine_args[i] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[i], args.getLocalContext());
 
-            BridgeHelperPtr bridge_helper = std::make_shared<XDBCBridgeHelper<BridgeHelperMixin>>(args.context,
-                args.context.getSettingsRef().http_receive_timeout.value,
+            BridgeHelperPtr bridge_helper = std::make_shared<XDBCBridgeHelper<BridgeHelperMixin>>(args.getContext(),
+                args.getContext()->getSettingsRef().http_receive_timeout.value,
                 engine_args[0]->as<ASTLiteral &>().value.safeGet<String>());
             return std::make_shared<StorageXDBC>(args.table_id,
                 engine_args[1]->as<ASTLiteral &>().value.safeGet<String>(),
                 engine_args[2]->as<ASTLiteral &>().value.safeGet<String>(),
                 args.columns,
-                args.context,
+                args.getContext(),
                 bridge_helper);
         },
diff --git a/src/Storages/StorageXDBC.h b/src/Storages/StorageXDBC.h
index 8524a03503a..064912fda92 100644
--- a/src/Storages/StorageXDBC.h
+++ b/src/Storages/StorageXDBC.h
@@ -1,7 +1,7 @@
 #pragma once
 
 #include
-#include
+#include
 
 namespace DB
@@ -19,7 +19,7 @@ public:
         const Names & column_names,
         const StorageMetadataPtr & /*metadata_snapshot*/,
         SelectQueryInfo & query_info,
-        const Context & context,
+        ContextPtr context,
         QueryProcessingStage::Enum processed_stage,
         size_t max_block_size,
         unsigned num_streams) override;
@@ -29,10 +29,10 @@ public:
         const std::string & remote_database_name,
         const std::string & remote_table_name,
         const ColumnsDescription & columns_,
-        const Context & context_,
+        ContextPtr context_,
         BridgeHelperPtr bridge_helper_);
 
-    BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, const Context & context) override;
+    BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, ContextPtr context) override;
 
     std::string getName() const override;
 private:
@@ -49,7 +49,7 @@ private:
         const Names & column_names,
         const StorageMetadataPtr & metadata_snapshot,
         const SelectQueryInfo & query_info,
-        const Context & context,
+        ContextPtr context,
         QueryProcessingStage::Enum & processed_stage,
         size_t max_block_size) const override;
 
@@ -57,7 +57,7 @@ private:
         const Names & column_names,
         const StorageMetadataPtr & metadata_snapshot,
         const SelectQueryInfo & query_info,
-        const Context & context,
+        ContextPtr context,
         QueryProcessingStage::Enum & processed_stage,
         size_t max_block_size) const override;
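On the write path the sample block likewise moves out of `getURLParams` and into an explicit URL parameter. Condensed from the `write()` hunk above (the helper name and free-function form are illustrative only):

```cpp
#include <Poco/URI.h>
#include <string>

// Illustrative condensation of the URI assembly in StorageXDBC::write: the
// bridge's /write endpoint receives the target table and its sample block
// as query parameters.
Poco::URI makeWriteURI(Poco::URI uri, const std::string & db_name, const std::string & table_name,
                       const std::string & format_name, const std::string & sample_block)
{
    uri.setPath("/write");
    uri.addQueryParameter("db_name", db_name);
    uri.addQueryParameter("table_name", table_name);
    uri.addQueryParameter("format_name", format_name);
    uri.addQueryParameter("sample_block", sample_block);  // NamesAndTypesList::toString() in the real code
    return uri;
}
```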
diff --git a/src/Storages/System/CMakeLists.txt b/src/Storages/System/CMakeLists.txt
index a1eb525dceb..7e350932038 100644
--- a/src/Storages/System/CMakeLists.txt
+++ b/src/Storages/System/CMakeLists.txt
@@ -1,17 +1,16 @@
 # The file StorageSystemContributors.cpp is generated at release time and committed to the source tree.
 # You can also regenerate it manually this way:
-# execute_process(COMMAND ${CMAKE_CURRENT_SOURCE_DIR}/StorageSystemContributors.sh)
+# execute_process(COMMAND "${CMAKE_CURRENT_SOURCE_DIR}/StorageSystemContributors.sh")
 
-set (CONFIG_BUILD ${CMAKE_CURRENT_BINARY_DIR}/StorageSystemBuildOptions.generated.cpp)
+set (CONFIG_BUILD "${CMAKE_CURRENT_BINARY_DIR}/StorageSystemBuildOptions.generated.cpp")
 get_property (BUILD_COMPILE_DEFINITIONS DIRECTORY ${ClickHouse_SOURCE_DIR} PROPERTY COMPILE_DEFINITIONS)
 get_property (BUILD_INCLUDE_DIRECTORIES DIRECTORY ${ClickHouse_SOURCE_DIR} PROPERTY INCLUDE_DIRECTORIES)
 get_property(TZDATA_VERSION GLOBAL PROPERTY TZDATA_VERSION_PROP)
-string (TIMESTAMP BUILD_DATE "%Y-%m-%d" UTC)
 configure_file (StorageSystemBuildOptions.generated.cpp.in ${CONFIG_BUILD})
 
-include(${ClickHouse_SOURCE_DIR}/cmake/dbms_glob_sources.cmake)
+include("${ClickHouse_SOURCE_DIR}/cmake/dbms_glob_sources.cmake")
 add_headers_and_sources(storages_system .)
 list (APPEND storages_system_sources ${CONFIG_BUILD})
 
@@ -28,8 +27,8 @@ endif()
 
 add_dependencies(generate-source generate-contributors)
 
-set(GENERATED_LICENSES_SRC ${CMAKE_CURRENT_BINARY_DIR}/StorageSystemLicenses.generated.cpp)
-set(GENERATED_TIMEZONES_SRC ${CMAKE_CURRENT_BINARY_DIR}/StorageSystemTimeZones.generated.cpp)
+set(GENERATED_LICENSES_SRC "${CMAKE_CURRENT_BINARY_DIR}/StorageSystemLicenses.generated.cpp")
+set(GENERATED_TIMEZONES_SRC "${CMAKE_CURRENT_BINARY_DIR}/StorageSystemTimeZones.generated.cpp")
 
 add_custom_command(
     OUTPUT StorageSystemLicenses.generated.cpp
diff --git a/src/Storages/System/IStorageSystemOneBlock.h b/src/Storages/System/IStorageSystemOneBlock.h
index d83a71c2592..fdc966130ad 100644
--- a/src/Storages/System/IStorageSystemOneBlock.h
+++ b/src/Storages/System/IStorageSystemOneBlock.h
@@ -12,13 +12,22 @@ namespace DB
 
 class Context;
 
-/** Base class for system tables whose all columns have String type.
+/** IStorageSystemOneBlock is the base class for system tables whose columns can all be fetched synchronously, in one block.
+ *
+ * The client class must provide a static method getNamesAndTypes() that returns the list of column names and
+ * their types. During a read, IStorageSystemOneBlock creates the result columns in the same order as
+ * getNamesAndTypes returns them and passes them to the fillData method.
+ *
+ * The client class must also override fillData and fill the result columns.
+ *
+ * If a subclass wants to support virtual columns, it should override the getVirtuals method of the IStorage
+ * interface. IStorageSystemOneBlock appends the virtual columns at the end of the result columns given to fillData.
 */
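A minimal sketch of a subclass following the contract just described; the class name and column are invented for illustration, but the shape mirrors the system tables touched later in this diff:

```cpp
// Illustrative only: a one-column system table built on IStorageSystemOneBlock.
class StorageSystemExample final : public ext::shared_ptr_helper<StorageSystemExample>,
                                   public IStorageSystemOneBlock<StorageSystemExample>
{
    friend struct ext::shared_ptr_helper<StorageSystemExample>;
public:
    std::string getName() const override { return "SystemExample"; }

    // Column order is fixed here; fillData receives columns in the same order.
    static NamesAndTypesList getNamesAndTypes()
    {
        return {{"name", std::make_shared<DataTypeString>()}};
    }

protected:
    using IStorageSystemOneBlock::IStorageSystemOneBlock;

    void fillData(MutableColumns & res_columns, ContextPtr, const SelectQueryInfo &) const override
    {
        res_columns[0]->insert(String("example"));  // one synchronously produced row
    }
};
```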
 template <typename Self>
 class IStorageSystemOneBlock : public IStorage
 {
 protected:
-    virtual void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const = 0;
+    virtual void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const = 0;
 
 public:
 #if defined(ARCADIA_BUILD)
@@ -36,14 +45,15 @@ public:
         const Names & column_names,
         const StorageMetadataPtr & metadata_snapshot,
         SelectQueryInfo & query_info,
-        const Context & context,
+        ContextPtr context,
         QueryProcessingStage::Enum /*processed_stage*/,
         size_t /*max_block_size*/,
         unsigned /*num_streams*/) override
     {
-        metadata_snapshot->check(column_names, getVirtuals(), getStorageID());
+        auto virtuals_names_and_types = getVirtuals();
+        metadata_snapshot->check(column_names, virtuals_names_and_types, getStorageID());
 
-        Block sample_block = metadata_snapshot->getSampleBlock();
+        Block sample_block = metadata_snapshot->getSampleBlockWithVirtuals(virtuals_names_and_types);
         MutableColumns res_columns = sample_block.cloneEmptyColumns();
         fillData(res_columns, context, query_info);
diff --git a/src/Storages/System/StorageSystemAggregateFunctionCombinators.cpp b/src/Storages/System/StorageSystemAggregateFunctionCombinators.cpp
index c0dd5cc85d3..c2d82c6cd7c 100644
--- a/src/Storages/System/StorageSystemAggregateFunctionCombinators.cpp
+++ b/src/Storages/System/StorageSystemAggregateFunctionCombinators.cpp
@@ -12,13 +12,13 @@ NamesAndTypesList StorageSystemAggregateFunctionCombinators::getNamesAndTypes()
     };
 }
 
-void StorageSystemAggregateFunctionCombinators::fillData(MutableColumns & res_columns, const Context &, const SelectQueryInfo &) const
+void StorageSystemAggregateFunctionCombinators::fillData(MutableColumns & res_columns, ContextPtr, const SelectQueryInfo &) const
 {
     const auto & combinators = AggregateFunctionCombinatorFactory::instance().getAllAggregateFunctionCombinators();
     for (const auto & pair : combinators)
     {
-        res_columns[0]->insert(pair.first);
-        res_columns[1]->insert(pair.second->isForInternalUsageOnly());
+        res_columns[0]->insert(pair.name);
+        res_columns[1]->insert(pair.combinator_ptr->isForInternalUsageOnly());
     }
 }
 
diff --git a/src/Storages/System/StorageSystemAggregateFunctionCombinators.h b/src/Storages/System/StorageSystemAggregateFunctionCombinators.h
index 8d204020160..a978bfbface 100644
--- a/src/Storages/System/StorageSystemAggregateFunctionCombinators.h
+++ b/src/Storages/System/StorageSystemAggregateFunctionCombinators.h
@@ -12,7 +12,7 @@ class StorageSystemAggregateFunctionCombinators final : public ext::shared_ptr_h
 {
     friend struct ext::shared_ptr_helper<StorageSystemAggregateFunctionCombinators>;
 protected:
-    void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override;
+    void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override;
 
     using IStorageSystemOneBlock::IStorageSystemOneBlock;
 public:
diff --git a/src/Storages/System/StorageSystemAsynchronousMetrics.cpp b/src/Storages/System/StorageSystemAsynchronousMetrics.cpp
index 8dabac4fb49..70e12440678 100644
--- a/src/Storages/System/StorageSystemAsynchronousMetrics.cpp
+++ b/src/Storages/System/StorageSystemAsynchronousMetrics.cpp
@@ -21,7 +21,7 @@ StorageSystemAsynchronousMetrics::StorageSystemAsynchronousMetrics(const Storage
 {
 }
 
-void StorageSystemAsynchronousMetrics::fillData(MutableColumns & res_columns, const Context &, const SelectQueryInfo &) const
+void StorageSystemAsynchronousMetrics::fillData(MutableColumns &
res_columns, ContextPtr, const SelectQueryInfo &) const { auto async_metrics_values = async_metrics.getValues(); for (const auto & name_value : async_metrics_values) diff --git a/src/Storages/System/StorageSystemAsynchronousMetrics.h b/src/Storages/System/StorageSystemAsynchronousMetrics.h index a2a92d248d8..eee029bbe51 100644 --- a/src/Storages/System/StorageSystemAsynchronousMetrics.h +++ b/src/Storages/System/StorageSystemAsynchronousMetrics.h @@ -33,7 +33,7 @@ protected: #endif StorageSystemAsynchronousMetrics(const StorageID & table_id_, const AsynchronousMetrics & async_metrics_); - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; }; } diff --git a/src/Storages/System/StorageSystemBuildOptions.cpp b/src/Storages/System/StorageSystemBuildOptions.cpp index a63afcf4ce5..01a60a0235c 100644 --- a/src/Storages/System/StorageSystemBuildOptions.cpp +++ b/src/Storages/System/StorageSystemBuildOptions.cpp @@ -16,7 +16,7 @@ NamesAndTypesList StorageSystemBuildOptions::getNamesAndTypes() }; } -void StorageSystemBuildOptions::fillData(MutableColumns & res_columns, const Context &, const SelectQueryInfo &) const +void StorageSystemBuildOptions::fillData(MutableColumns & res_columns, ContextPtr, const SelectQueryInfo &) const { #if !defined(ARCADIA_BUILD) for (auto * it = auto_config_build; *it; it += 2) diff --git a/src/Storages/System/StorageSystemBuildOptions.generated.cpp.in b/src/Storages/System/StorageSystemBuildOptions.generated.cpp.in index 8ece4219d0c..8fe574da643 100644 --- a/src/Storages/System/StorageSystemBuildOptions.generated.cpp.in +++ b/src/Storages/System/StorageSystemBuildOptions.generated.cpp.in @@ -1,22 +1,14 @@ // .cpp autogenerated by cmake -#cmakedefine01 BUILD_DETERMINISTIC - const char * auto_config_build[] { "VERSION_FULL", "@VERSION_FULL@", "VERSION_DESCRIBE", "@VERSION_DESCRIBE@", "VERSION_INTEGER", "@VERSION_INTEGER@", - -#if BUILD_DETERMINISTIC "SYSTEM", "@CMAKE_SYSTEM_NAME@", -#else "VERSION_GITHASH", "@VERSION_GITHASH@", "VERSION_REVISION", "@VERSION_REVISION@", - "BUILD_DATE", "@BUILD_DATE@", - "SYSTEM", "@CMAKE_SYSTEM@", -#endif - + "VERSION_DATE", "@VERSION_DATE@", "BUILD_TYPE", "@CMAKE_BUILD_TYPE@", "SYSTEM_PROCESSOR", "@CMAKE_SYSTEM_PROCESSOR@", "LIBRARY_ARCHITECTURE", "@CMAKE_LIBRARY_ARCHITECTURE@", diff --git a/src/Storages/System/StorageSystemBuildOptions.h b/src/Storages/System/StorageSystemBuildOptions.h index afd27f00bcc..8a22a3dcb45 100644 --- a/src/Storages/System/StorageSystemBuildOptions.h +++ b/src/Storages/System/StorageSystemBuildOptions.h @@ -16,7 +16,7 @@ class StorageSystemBuildOptions final : public ext::shared_ptr_helper; protected: - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; using IStorageSystemOneBlock::IStorageSystemOneBlock; diff --git a/src/Storages/System/StorageSystemClusters.cpp b/src/Storages/System/StorageSystemClusters.cpp index 25b432252f9..8a3227aafdb 100644 --- a/src/Storages/System/StorageSystemClusters.cpp +++ b/src/Storages/System/StorageSystemClusters.cpp @@ -29,16 +29,30 @@ NamesAndTypesList StorageSystemClusters::getNamesAndTypes() } -void StorageSystemClusters::fillData(MutableColumns & res_columns, const Context & context, const 
SelectQueryInfo &) const +void StorageSystemClusters::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const { - for (const auto & name_and_cluster : context.getClusters().getContainer()) + for (const auto & name_and_cluster : context->getClusters().getContainer()) writeCluster(res_columns, name_and_cluster); const auto databases = DatabaseCatalog::instance().getDatabases(); for (const auto & name_and_database : databases) { if (const auto * replicated = typeid_cast(name_and_database.second.get())) - writeCluster(res_columns, {name_and_database.first, replicated->getCluster()}); + { + // A quick fix for stateless tests with DatabaseReplicated. Its ZK + // node can be destroyed at any time. If another test lists + // system.clusters to get client command line suggestions, it will + // get an error when trying to get the info about DB from ZK. + // Just ignore these inaccessible databases. A good example of a + // failing test is `01526_client_start_and_exit`. + try { + writeCluster(res_columns, {name_and_database.first, replicated->getCluster()}); + } + catch (...) + { + tryLogCurrentException(__PRETTY_FUNCTION__); + } + } } } diff --git a/src/Storages/System/StorageSystemClusters.h b/src/Storages/System/StorageSystemClusters.h index 4f2a843999f..81aefaff1c4 100644 --- a/src/Storages/System/StorageSystemClusters.h +++ b/src/Storages/System/StorageSystemClusters.h @@ -28,7 +28,7 @@ protected: using IStorageSystemOneBlock::IStorageSystemOneBlock; using NameAndCluster = std::pair>; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; static void writeCluster(MutableColumns & res_columns, const NameAndCluster & name_and_cluster); }; diff --git a/src/Storages/System/StorageSystemCollations.cpp b/src/Storages/System/StorageSystemCollations.cpp index a870a7c7c78..c9343ccd146 100644 --- a/src/Storages/System/StorageSystemCollations.cpp +++ b/src/Storages/System/StorageSystemCollations.cpp @@ -13,7 +13,7 @@ NamesAndTypesList StorageSystemCollations::getNamesAndTypes() }; } -void StorageSystemCollations::fillData(MutableColumns & res_columns, const Context &, const SelectQueryInfo &) const +void StorageSystemCollations::fillData(MutableColumns & res_columns, ContextPtr, const SelectQueryInfo &) const { for (const auto & [locale, lang]: AvailableCollationLocales::instance().getAvailableCollations()) { diff --git a/src/Storages/System/StorageSystemCollations.h b/src/Storages/System/StorageSystemCollations.h index 133acd937a1..454fd968511 100644 --- a/src/Storages/System/StorageSystemCollations.h +++ b/src/Storages/System/StorageSystemCollations.h @@ -10,7 +10,7 @@ class StorageSystemCollations final : public ext::shared_ptr_helper; protected: - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; using IStorageSystemOneBlock::IStorageSystemOneBlock; public: diff --git a/src/Storages/System/StorageSystemColumns.cpp b/src/Storages/System/StorageSystemColumns.cpp index 6726d502071..8f65147bb11 100644 --- a/src/Storages/System/StorageSystemColumns.cpp +++ b/src/Storages/System/StorageSystemColumns.cpp @@ -65,12 +65,12 @@ public: ColumnPtr databases_, ColumnPtr tables_, Storages storages_, - const Context 
& context) + ContextPtr context) : SourceWithProgress(header_) , columns_mask(std::move(columns_mask_)), max_block_size(max_block_size_) , databases(std::move(databases_)), tables(std::move(tables_)), storages(std::move(storages_)) - , total_tables(tables->size()), access(context.getAccess()) - , query_id(context.getCurrentQueryId()), lock_acquire_timeout(context.getSettingsRef().lock_acquire_timeout) + , total_tables(tables->size()), access(context->getAccess()) + , query_id(context->getCurrentQueryId()), lock_acquire_timeout(context->getSettingsRef().lock_acquire_timeout) { } @@ -243,7 +243,7 @@ Pipe StorageSystemColumns::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum /*processed_stage*/, const size_t max_block_size, const unsigned /*num_streams*/) @@ -289,9 +289,9 @@ Pipe StorageSystemColumns::read( } Tables external_tables; - if (context.hasSessionContext()) + if (context->hasSessionContext()) { - external_tables = context.getSessionContext().getExternalTables(); + external_tables = context->getSessionContext()->getExternalTables(); if (!external_tables.empty()) database_column_mut->insertDefault(); /// Empty database for external tables. } diff --git a/src/Storages/System/StorageSystemColumns.h b/src/Storages/System/StorageSystemColumns.h index c4f35485612..5cd8c5b38fd 100644 --- a/src/Storages/System/StorageSystemColumns.h +++ b/src/Storages/System/StorageSystemColumns.h @@ -21,7 +21,7 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; diff --git a/src/Storages/System/StorageSystemContributors.cpp b/src/Storages/System/StorageSystemContributors.cpp index cd0f31975cc..ed28be2a4ab 100644 --- a/src/Storages/System/StorageSystemContributors.cpp +++ b/src/Storages/System/StorageSystemContributors.cpp @@ -16,7 +16,7 @@ NamesAndTypesList StorageSystemContributors::getNamesAndTypes() }; } -void StorageSystemContributors::fillData(MutableColumns & res_columns, const Context &, const SelectQueryInfo &) const +void StorageSystemContributors::fillData(MutableColumns & res_columns, ContextPtr, const SelectQueryInfo &) const { std::vector contributors; for (auto * it = auto_contributors; *it; ++it) diff --git a/src/Storages/System/StorageSystemContributors.generated.cpp b/src/Storages/System/StorageSystemContributors.generated.cpp index fd4807e550c..b8741e6951c 100644 --- a/src/Storages/System/StorageSystemContributors.generated.cpp +++ b/src/Storages/System/StorageSystemContributors.generated.cpp @@ -4,6 +4,7 @@ const char * auto_contributors[] { "20018712", "243f6a88 85a308d3", "243f6a8885a308d313198a2e037", + "3ldar-nasyrov", "821008736@qq.com", "Akazz", "Alain BERRIER", @@ -16,6 +17,7 @@ const char * auto_contributors[] { "Aleksei Semiglazov", "Aleksey", "Aleksey Akulovich", + "Alex", "Alex Bocharov", "Alex Karo", "Alex Krash", @@ -58,6 +60,7 @@ const char * auto_contributors[] { "Alexey Vasiliev", "Alexey Zatelepin", "Alexsey Shestakov", + "Ali Demirci", "Aliaksandr Pliutau", "Aliaksandr Shylau", "Amos Bird", @@ -138,9 +141,11 @@ const char * auto_contributors[] { "Brett Hoerner", "Bulat Gaifullin", "Carbyn", + "Chao Ma", "Chao Wang", "Chen Yufei", "Chienlung Cheung", + "Christian", "Ciprian Hacman", "Clement Rodriguez", "Clément Rodriguez", @@ 
-172,6 +177,7 @@ const char * auto_contributors[] { "Dmitry Belyavtsev", "Dmitry Bilunov", "Dmitry Galuza", + "Dmitry Krylov", "Dmitry Luhtionov", "Dmitry Moskowski", "Dmitry Muzyka", @@ -182,6 +188,7 @@ const char * auto_contributors[] { "Dongdong Yang", "DoomzD", "Dr. Strange Looker", + "Egor O'Sten", "Ekaterina", "Eldar Zaitov", "Elena Baskakova", @@ -258,6 +265,7 @@ const char * auto_contributors[] { "Ilya Skrypitsa", "Ilya Yatsishin", "ImgBotApp", + "Islam Israfilov", "Islam Israfilov (Islam93)", "Ivan", "Ivan A. Torgashov", @@ -282,6 +290,7 @@ const char * auto_contributors[] { "Jochen Schalanda", "John", "John Hummel", + "John Skopis", "Jonatas Freitas", "Kang Liu", "Karl Pietrzak", @@ -367,6 +376,7 @@ const char * auto_contributors[] { "Mikahil Nacharov", "Mike", "Mike F", + "Mike Kot", "Mikhail", "Mikhail Cheshkov", "Mikhail Fandyushin", @@ -378,6 +388,7 @@ const char * auto_contributors[] { "Mikhail Salosin", "Mikhail Surin", "Mikhail f. Shiryaev", + "MikuSugar", "Milad Arabi", "Mohammad Hossein Sekhavat", "MovElb", @@ -388,6 +399,8 @@ const char * auto_contributors[] { "Narek Galstyan", "NeZeD [Mac Pro]", "Neeke Gao", + "Neng Liu", + "Nickolay Yastrebov", "Nico Mandery", "Nico Piderman", "Nicolae Vartolomei", @@ -439,6 +452,7 @@ const char * auto_contributors[] { "Philippe Ombredanne", "Potya", "Pradeep Chhetri", + "Pysaoke", "Quid37", "Rafael David Tinoco", "Ramazan Polat", @@ -455,6 +469,7 @@ const char * auto_contributors[] { "Roman Peshkurov", "Roman Tsisyk", "Ruslan", + "Ruslan Savchenko", "Russ Frank", "Ruzal Ibragimov", "S.M.A. Djawadi", @@ -463,11 +478,13 @@ const char * auto_contributors[] { "Sami Kerola", "Samuel Chou", "Saulius Valatka", + "Serg Kulakov", "Serge Rider", "Sergei Bocharov", "Sergei Semin", "Sergei Shtykov", "Sergei Tsetlin (rekub)", + "Sergey Demurin", "Sergey Elantsev", "Sergey Fedorov", "Sergey Kononenko", @@ -483,6 +500,7 @@ const char * auto_contributors[] { "SevaCode", "Sherry Wang", "Silviu Caragea", + "Simeon Emanuilov", "Simon Liu", "Simon Podlipsky", "Sina", @@ -504,7 +522,9 @@ const char * auto_contributors[] { "TCeason", "Tagir Kuskarov", "Tai White", + "Taleh Zaliyev", "Tangaev", + "Tatiana Kirillova", "Tema Novikov", "The-Alchemist", "TiunovNN", @@ -534,6 +554,7 @@ const char * auto_contributors[] { "Veselkov Konstantin", "Victor Tarnavsky", "Viktor Taranenko", + "Vitaliy Fedorchenko", "Vitaliy Karnienko", "Vitaliy Kozlovskiy", "Vitaliy Lyudvichenko", @@ -566,6 +587,7 @@ const char * auto_contributors[] { "William Shallum", "Winter Zhang", "Xianda Ke", + "Xiang Zhou", "Y Lu", "Yangkuan Liu", "Yatsishin Ilya", @@ -591,6 +613,7 @@ const char * auto_contributors[] { "abyss7", "achimbab", "achulkov2", + "adevyatova", "ageraab", "akazz", "akonyaev", @@ -616,6 +639,7 @@ const char * auto_contributors[] { "artpaul", "asiana21", "avasiliev", + "avogar", "avsharapov", "awesomeleo", "benamazing", @@ -632,6 +656,8 @@ const char * auto_contributors[] { "centos7", "champtar", "chang.chen", + "changvvb", + "chasingegg", "chengy8934", "chenqi", "chenxing-xc", @@ -683,6 +709,7 @@ const char * auto_contributors[] { "frank", "franklee", "fredchenbj", + "fuqi", "fuwhu", "g-arslan", "ggerogery", @@ -701,8 +728,10 @@ const char * auto_contributors[] { "idfer", "igor", "igor.lapko", + "ikarishinjieva", "ikopylov", "imgbot[bot]", + "ip", "it1804", "ivan-kush", "ivanzhukov", @@ -715,6 +744,8 @@ const char * auto_contributors[] { "jianmei zhang", "jyz0309", "keenwolf", + "kevin wan", + "kirillikoff", "kmeaw", "koshachy", "kreuzerkrieg", @@ -744,16 +775,19 @@ const 
char * auto_contributors[] { "malkfilipp", "manmitya", "maqroll", + "mastertheknife", "maxim", "maxim-babenko", "maxkuzn", "maxulan", + "mehanizm", "melin", "memo", "meo", "mergify[bot]", "mf5137", "mfridental", + "michon470", "miha-g", "mikepop7", "millb", @@ -791,6 +825,7 @@ const char * auto_contributors[] { "r1j1k", "rainbowsysu", "ritaank", + "robert", "robot-clickhouse", "robot-metrika-test", "rodrigargar", @@ -808,6 +843,7 @@ const char * auto_contributors[] { "shangshujie", "shedx", "simon-says", + "songenjie", "spff", "spongedc", "spyros87", @@ -850,6 +886,7 @@ const char * auto_contributors[] { "ygrek", "yhgcn", "yiguolei", + "yingjinghan", "ylchou", "yonesko", "yuefoo", @@ -863,6 +900,7 @@ const char * auto_contributors[] { "zhen ni", "zhukai", "zlx19950903", + "zvonand", "zvrr", "zvvr", "zzsmdfj", @@ -879,6 +917,7 @@ const char * auto_contributors[] { "张健", "张风啸", "徐炘", + "曲正鹏", "极客青年", "谢磊", "贾顺名(Jarvis)", diff --git a/src/Storages/System/StorageSystemContributors.h b/src/Storages/System/StorageSystemContributors.h index 0fd77f78655..c4a50263c5b 100644 --- a/src/Storages/System/StorageSystemContributors.h +++ b/src/Storages/System/StorageSystemContributors.h @@ -16,7 +16,7 @@ class StorageSystemContributors final : public ext::shared_ptr_helper; protected: - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; using IStorageSystemOneBlock::IStorageSystemOneBlock; diff --git a/src/Storages/System/StorageSystemContributors.sh b/src/Storages/System/StorageSystemContributors.sh index c4c4eb5ad30..e34bf8ee7c8 100755 --- a/src/Storages/System/StorageSystemContributors.sh +++ b/src/Storages/System/StorageSystemContributors.sh @@ -2,6 +2,8 @@ set -x +LC_ALL=C + # doesn't actually cd to directory, but return absolute path CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) # cd to directory @@ -10,7 +12,7 @@ cd $CUR_DIR CONTRIBUTORS_FILE=${CONTRIBUTORS_FILE=$CUR_DIR/StorageSystemContributors.generated.cpp} # if you don't specify HEAD here, without terminal `git shortlog` would expect input from stdin -git shortlog HEAD --summary | perl -lnE 's/^\s+\d+\s+(.+)/ "$1",/; next unless $1; say $_' > $CONTRIBUTORS_FILE.tmp +git shortlog HEAD --summary | perl -lnE 's/^\s+\d+\s+(.+)/ "$1",/; next unless $1; say $_' | sort > $CONTRIBUTORS_FILE.tmp # If git history not available - dont make target file if [ ! 
-s $CONTRIBUTORS_FILE.tmp ]; then diff --git a/src/Storages/System/StorageSystemCurrentRoles.cpp b/src/Storages/System/StorageSystemCurrentRoles.cpp index b0667f2f3ca..a5b3566f5f7 100644 --- a/src/Storages/System/StorageSystemCurrentRoles.cpp +++ b/src/Storages/System/StorageSystemCurrentRoles.cpp @@ -22,10 +22,10 @@ NamesAndTypesList StorageSystemCurrentRoles::getNamesAndTypes() } -void StorageSystemCurrentRoles::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const +void StorageSystemCurrentRoles::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const { - auto roles_info = context.getRolesInfo(); - auto user = context.getUser(); + auto roles_info = context->getRolesInfo(); + auto user = context->getUser(); if (!roles_info || !user) return; diff --git a/src/Storages/System/StorageSystemCurrentRoles.h b/src/Storages/System/StorageSystemCurrentRoles.h index 807db661371..77ab95547fa 100644 --- a/src/Storages/System/StorageSystemCurrentRoles.h +++ b/src/Storages/System/StorageSystemCurrentRoles.h @@ -18,7 +18,7 @@ public: protected: friend struct ext::shared_ptr_helper; using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const override; }; } diff --git a/src/Storages/System/StorageSystemDDLWorkerQueue.cpp b/src/Storages/System/StorageSystemDDLWorkerQueue.cpp index 04321544f5d..98b15bfa6e2 100644 --- a/src/Storages/System/StorageSystemDDLWorkerQueue.cpp +++ b/src/Storages/System/StorageSystemDDLWorkerQueue.cpp @@ -95,7 +95,7 @@ NamesAndTypesList StorageSystemDDLWorkerQueue::getNamesAndTypes() }; } -static String clusterNameFromDDLQuery(const Context & context, const DDLLogEntry & entry) +static String clusterNameFromDDLQuery(ContextPtr context, const DDLLogEntry & entry) { const char * begin = entry.query.data(); const char * end = begin + entry.query.size(); @@ -104,15 +104,15 @@ static String clusterNameFromDDLQuery(const Context & context, const DDLLogEntry String cluster_name; ParserQuery parser_query(end); String description; - query = parseQuery(parser_query, begin, end, description, 0, context.getSettingsRef().max_parser_depth); + query = parseQuery(parser_query, begin, end, description, 0, context->getSettingsRef().max_parser_depth); if (query && (query_on_cluster = dynamic_cast(query.get()))) cluster_name = query_on_cluster->cluster; return cluster_name; } -void StorageSystemDDLWorkerQueue::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const +void StorageSystemDDLWorkerQueue::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const { - zkutil::ZooKeeperPtr zookeeper = context.getZooKeeper(); + zkutil::ZooKeeperPtr zookeeper = context->getZooKeeper(); Coordination::Error zk_exception_code = Coordination::Error::ZOK; String ddl_zookeeper_path = config.getString("distributed_ddl.path", "/clickhouse/task_queue/ddl/"); String ddl_query_path; @@ -130,7 +130,7 @@ void StorageSystemDDLWorkerQueue::fillData(MutableColumns & res_columns, const C if (code != Coordination::Error::ZOK && code != Coordination::Error::ZNONODE) zk_exception_code = code; - const auto & clusters = context.getClusters(); + const auto & clusters = context->getClusters(); for (const auto & name_and_cluster : clusters.getContainer()) { const ClusterPtr & cluster = 
name_and_cluster.second; diff --git a/src/Storages/System/StorageSystemDDLWorkerQueue.h b/src/Storages/System/StorageSystemDDLWorkerQueue.h index 9326d4dcb26..d1afa2d546a 100644 --- a/src/Storages/System/StorageSystemDDLWorkerQueue.h +++ b/src/Storages/System/StorageSystemDDLWorkerQueue.h @@ -22,7 +22,7 @@ class StorageSystemDDLWorkerQueue final : public ext::shared_ptr_helper; protected: - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; using IStorageSystemOneBlock::IStorageSystemOneBlock; diff --git a/src/Storages/System/StorageSystemDatabases.cpp b/src/Storages/System/StorageSystemDatabases.cpp index 88ac987014d..e09e47d8baf 100644 --- a/src/Storages/System/StorageSystemDatabases.cpp +++ b/src/Storages/System/StorageSystemDatabases.cpp @@ -20,9 +20,9 @@ NamesAndTypesList StorageSystemDatabases::getNamesAndTypes() }; } -void StorageSystemDatabases::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const +void StorageSystemDatabases::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const { - const auto access = context.getAccess(); + const auto access = context->getAccess(); const bool check_access_for_databases = !access->isGranted(AccessType::SHOW_DATABASES); const auto databases = DatabaseCatalog::instance().getDatabases(); @@ -36,7 +36,7 @@ void StorageSystemDatabases::fillData(MutableColumns & res_columns, const Contex res_columns[0]->insert(database_name); res_columns[1]->insert(database->getEngineName()); - res_columns[2]->insert(context.getPath() + database->getDataPath()); + res_columns[2]->insert(context->getPath() + database->getDataPath()); res_columns[3]->insert(database->getMetadataPath()); res_columns[4]->insert(database->getUUID()); } diff --git a/src/Storages/System/StorageSystemDatabases.h b/src/Storages/System/StorageSystemDatabases.h index fe517c0f651..33f91fee837 100644 --- a/src/Storages/System/StorageSystemDatabases.h +++ b/src/Storages/System/StorageSystemDatabases.h @@ -26,7 +26,7 @@ public: protected: using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const override; }; } diff --git a/src/Storages/System/StorageSystemDetachedParts.cpp b/src/Storages/System/StorageSystemDetachedParts.cpp index f96566026b1..56644620f97 100644 --- a/src/Storages/System/StorageSystemDetachedParts.cpp +++ b/src/Storages/System/StorageSystemDetachedParts.cpp @@ -33,7 +33,7 @@ Pipe StorageSystemDetachedParts::read( const Names & /* column_names */, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum /*processed_stage*/, const size_t /*max_block_size*/, const unsigned /*num_streams*/) diff --git a/src/Storages/System/StorageSystemDetachedParts.h b/src/Storages/System/StorageSystemDetachedParts.h index 4c6970dadd6..18a6f5576d6 100644 --- a/src/Storages/System/StorageSystemDetachedParts.h +++ b/src/Storages/System/StorageSystemDetachedParts.h @@ -26,7 +26,7 @@ protected: const Names & /* column_names */, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, 
QueryProcessingStage::Enum /*processed_stage*/, const size_t /*max_block_size*/, const unsigned /*num_streams*/) override; diff --git a/src/Storages/System/StorageSystemDictionaries.cpp b/src/Storages/System/StorageSystemDictionaries.cpp index 6661f51b02f..5f4d5df2036 100644 --- a/src/Storages/System/StorageSystemDictionaries.cpp +++ b/src/Storages/System/StorageSystemDictionaries.cpp @@ -30,7 +30,8 @@ NamesAndTypesList StorageSystemDictionaries::getNamesAndTypes() {"status", std::make_shared(getStatusEnumAllPossibleValues())}, {"origin", std::make_shared()}, {"type", std::make_shared()}, - {"key", std::make_shared()}, + {"key.names", std::make_shared(std::make_shared())}, + {"key.types", std::make_shared(std::make_shared())}, {"attribute.names", std::make_shared(std::make_shared())}, {"attribute.types", std::make_shared(std::make_shared())}, {"bytes_allocated", std::make_shared()}, @@ -49,15 +50,27 @@ NamesAndTypesList StorageSystemDictionaries::getNamesAndTypes() }; } -void StorageSystemDictionaries::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & /*query_info*/) const +NamesAndTypesList StorageSystemDictionaries::getVirtuals() const { - const auto access = context.getAccess(); - const bool check_access_for_dictionaries = !access->isGranted(AccessType::SHOW_DICTIONARIES); + return { + {"key", std::make_shared()} + }; +} + +void StorageSystemDictionaries::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & /*query_info*/) const +{ + const auto access = context->getAccess(); + const bool check_access_for_dictionaries = access->isGranted(AccessType::SHOW_DICTIONARIES); + + const auto & external_dictionaries = context->getExternalDictionariesLoader(); + + if (!check_access_for_dictionaries) + return; - const auto & external_dictionaries = context.getExternalDictionariesLoader(); for (const auto & load_result : external_dictionaries.getLoadResults()) { - const auto dict_ptr = std::dynamic_pointer_cast(load_result.object); + const auto dict_ptr = std::dynamic_pointer_cast(load_result.object); + DictionaryStructure dictionary_structure = ExternalDictionariesLoader::getDictionaryStructure(*load_result.config); StorageID dict_id = StorageID::createEmpty(); if (dict_ptr) @@ -68,8 +81,7 @@ void StorageSystemDictionaries::fillData(MutableColumns & res_columns, const Con dict_id.table_name = load_result.name; String db_or_tag = dict_id.database_name.empty() ? 
IDictionary::NO_DATABASE_TAG : dict_id.database_name; - if (check_access_for_dictionaries - && !access->isGranted(AccessType::SHOW_DICTIONARIES, db_or_tag, dict_id.table_name)) + if (!access->isGranted(AccessType::SHOW_DICTIONARIES, db_or_tag, dict_id.table_name)) continue; size_t i = 0; @@ -82,13 +94,22 @@ void StorageSystemDictionaries::fillData(MutableColumns & res_columns, const Con std::exception_ptr last_exception = load_result.exception; if (dict_ptr) - { res_columns[i++]->insert(dict_ptr->getTypeName()); + else + res_columns[i++]->insertDefault(); - const auto & dict_struct = dict_ptr->getStructure(); - res_columns[i++]->insert(dict_struct.getKeyDescription()); - res_columns[i++]->insert(ext::map(dict_struct.attributes, [] (auto & attr) { return attr.name; })); - res_columns[i++]->insert(ext::map(dict_struct.attributes, [] (auto & attr) { return attr.type->getName(); })); + res_columns[i++]->insert(ext::map(dictionary_structure.getKeysNames(), [] (auto & name) { return name; })); + + if (dictionary_structure.id) + res_columns[i++]->insert(Array({"UInt64"})); + else + res_columns[i++]->insert(ext::map(*dictionary_structure.key, [] (auto & attr) { return attr.type->getName(); })); + + res_columns[i++]->insert(ext::map(dictionary_structure.attributes, [] (auto & attr) { return attr.name; })); + res_columns[i++]->insert(ext::map(dictionary_structure.attributes, [] (auto & attr) { return attr.type->getName(); })); + + if (dict_ptr) + { res_columns[i++]->insert(dict_ptr->getBytesAllocated()); res_columns[i++]->insert(dict_ptr->getQueryCount()); res_columns[i++]->insert(dict_ptr->getHitRate()); @@ -104,7 +125,7 @@ void StorageSystemDictionaries::fillData(MutableColumns & res_columns, const Con } else { - for (size_t j = 0; j != 12; ++j) // Number of empty fields if dict_ptr is null + for (size_t j = 0; j != 8; ++j) // Number of empty fields if dict_ptr is null res_columns[i++]->insertDefault(); } @@ -117,6 +138,9 @@ void StorageSystemDictionaries::fillData(MutableColumns & res_columns, const Con else res_columns[i++]->insertDefault(); + /// Start fill virtual columns + + res_columns[i++]->insert(dictionary_structure.getKeyDescription()); } } diff --git a/src/Storages/System/StorageSystemDictionaries.h b/src/Storages/System/StorageSystemDictionaries.h index 5139ce3c5f6..aa65a946127 100644 --- a/src/Storages/System/StorageSystemDictionaries.h +++ b/src/Storages/System/StorageSystemDictionaries.h @@ -18,10 +18,12 @@ public: static NamesAndTypesList getNamesAndTypes(); + NamesAndTypesList getVirtuals() const override; + protected: using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; }; } diff --git a/src/Storages/System/StorageSystemDisks.cpp b/src/Storages/System/StorageSystemDisks.cpp index b04d24cc705..5d7628acb2a 100644 --- a/src/Storages/System/StorageSystemDisks.cpp +++ b/src/Storages/System/StorageSystemDisks.cpp @@ -30,7 +30,7 @@ Pipe StorageSystemDisks::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & /*query_info*/, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum /*processed_stage*/, const size_t /*max_block_size*/, const unsigned /*num_streams*/) @@ -44,7 +44,7 @@ Pipe StorageSystemDisks::read( MutableColumnPtr col_keep = ColumnUInt64::create(); MutableColumnPtr 
col_type = ColumnString::create(); - for (const auto & [disk_name, disk_ptr] : context.getDisksMap()) + for (const auto & [disk_name, disk_ptr] : context->getDisksMap()) { col_name->insert(disk_name); col_path->insert(disk_ptr->getPath()); diff --git a/src/Storages/System/StorageSystemDisks.h b/src/Storages/System/StorageSystemDisks.h index cff05242019..fa0f6fe4b8a 100644 --- a/src/Storages/System/StorageSystemDisks.h +++ b/src/Storages/System/StorageSystemDisks.h @@ -24,7 +24,7 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; diff --git a/src/Storages/System/StorageSystemDistributionQueue.cpp b/src/Storages/System/StorageSystemDistributionQueue.cpp index db649e7e1ba..d8879c3655e 100644 --- a/src/Storages/System/StorageSystemDistributionQueue.cpp +++ b/src/Storages/System/StorageSystemDistributionQueue.cpp @@ -98,14 +98,16 @@ NamesAndTypesList StorageSystemDistributionQueue::getNamesAndTypes() { "error_count", std::make_shared() }, { "data_files", std::make_shared() }, { "data_compressed_bytes", std::make_shared() }, + { "broken_data_files", std::make_shared() }, + { "broken_data_compressed_bytes", std::make_shared() }, { "last_exception", std::make_shared() }, }; } -void StorageSystemDistributionQueue::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const +void StorageSystemDistributionQueue::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const { - const auto access = context.getAccess(); + const auto access = context->getAccess(); const bool check_access_for_databases = !access->isGranted(AccessType::SHOW_TABLES); std::map> tables; @@ -181,6 +183,8 @@ void StorageSystemDistributionQueue::fillData(MutableColumns & res_columns, cons res_columns[col_num++]->insert(status.error_count); res_columns[col_num++]->insert(status.files_count); res_columns[col_num++]->insert(status.bytes_count); + res_columns[col_num++]->insert(status.broken_files_count); + res_columns[col_num++]->insert(status.broken_bytes_count); if (status.last_exception) res_columns[col_num++]->insert(getExceptionMessage(status.last_exception, false)); diff --git a/src/Storages/System/StorageSystemDistributionQueue.h b/src/Storages/System/StorageSystemDistributionQueue.h index 88e7fa45cf5..9314418d242 100644 --- a/src/Storages/System/StorageSystemDistributionQueue.h +++ b/src/Storages/System/StorageSystemDistributionQueue.h @@ -23,7 +23,7 @@ public: protected: using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; }; } diff --git a/src/Storages/System/StorageSystemEnabledRoles.cpp b/src/Storages/System/StorageSystemEnabledRoles.cpp index 27a42ca6f8b..99370dd647d 100644 --- a/src/Storages/System/StorageSystemEnabledRoles.cpp +++ b/src/Storages/System/StorageSystemEnabledRoles.cpp @@ -23,10 +23,10 @@ NamesAndTypesList StorageSystemEnabledRoles::getNamesAndTypes() } -void StorageSystemEnabledRoles::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const +void StorageSystemEnabledRoles::fillData(MutableColumns & res_columns, ContextPtr context, 
const SelectQueryInfo &) const { - auto roles_info = context.getRolesInfo(); - auto user = context.getUser(); + auto roles_info = context->getRolesInfo(); + auto user = context->getUser(); if (!roles_info || !user) return; diff --git a/src/Storages/System/StorageSystemEnabledRoles.h b/src/Storages/System/StorageSystemEnabledRoles.h index 18df31c646a..13b0533b790 100644 --- a/src/Storages/System/StorageSystemEnabledRoles.h +++ b/src/Storages/System/StorageSystemEnabledRoles.h @@ -18,7 +18,7 @@ public: protected: friend struct ext::shared_ptr_helper; using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const override; }; } diff --git a/src/Storages/System/StorageSystemErrors.cpp b/src/Storages/System/StorageSystemErrors.cpp index 89df058900b..d08ffd730ac 100644 --- a/src/Storages/System/StorageSystemErrors.cpp +++ b/src/Storages/System/StorageSystemErrors.cpp @@ -1,5 +1,7 @@ #include #include +#include +#include #include #include #include @@ -10,30 +12,51 @@ namespace DB NamesAndTypesList StorageSystemErrors::getNamesAndTypes() { return { - { "name", std::make_shared() }, - { "code", std::make_shared() }, - { "value", std::make_shared() }, + { "name", std::make_shared() }, + { "code", std::make_shared() }, + { "value", std::make_shared() }, + { "last_error_time", std::make_shared() }, + { "last_error_message", std::make_shared() }, + { "last_error_trace", std::make_shared(std::make_shared()) }, + { "remote", std::make_shared() }, }; } -void StorageSystemErrors::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const +void StorageSystemErrors::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const { + auto add_row = [&](std::string_view name, size_t code, const auto & error, bool remote) + { + if (error.count || context->getSettingsRef().system_events_show_zero_values) + { + size_t col_num = 0; + res_columns[col_num++]->insert(name); + res_columns[col_num++]->insert(code); + res_columns[col_num++]->insert(error.count); + res_columns[col_num++]->insert(error.error_time_ms / 1000); + res_columns[col_num++]->insert(error.message); + { + Array trace_array; + trace_array.reserve(error.trace.size()); + for (size_t i = 0; i < error.trace.size(); ++i) + trace_array.emplace_back(reinterpret_cast(error.trace[i])); + + res_columns[col_num++]->insert(trace_array); + } + res_columns[col_num++]->insert(remote); + } + }; + for (size_t i = 0, end = ErrorCodes::end(); i < end; ++i) { - UInt64 value = ErrorCodes::values[i]; + const auto & error = ErrorCodes::values[i].get(); std::string_view name = ErrorCodes::getName(i); if (name.empty()) continue; - if (value || context.getSettingsRef().system_events_show_zero_values) - { - size_t col_num = 0; - res_columns[col_num++]->insert(name); - res_columns[col_num++]->insert(i); - res_columns[col_num++]->insert(value); - } + add_row(name, i, error.local, /* remote= */ false); + add_row(name, i, error.remote, /* remote= */ true); } } diff --git a/src/Storages/System/StorageSystemErrors.h b/src/Storages/System/StorageSystemErrors.h index 569a7a998b7..ff3af11d251 100644 --- a/src/Storages/System/StorageSystemErrors.h +++ b/src/Storages/System/StorageSystemErrors.h @@ -25,7 +25,7 @@ public: protected: using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, 
const Context &, const SelectQueryInfo &) const override; + void fillData(MutableColumns & res_columns, ContextPtr, const SelectQueryInfo &) const override; }; } diff --git a/src/Storages/System/StorageSystemEvents.cpp b/src/Storages/System/StorageSystemEvents.cpp index ddb00659473..be2d3f8d49e 100644 --- a/src/Storages/System/StorageSystemEvents.cpp +++ b/src/Storages/System/StorageSystemEvents.cpp @@ -16,13 +16,13 @@ NamesAndTypesList StorageSystemEvents::getNamesAndTypes() }; } -void StorageSystemEvents::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const +void StorageSystemEvents::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const { for (size_t i = 0, end = ProfileEvents::end(); i < end; ++i) { UInt64 value = ProfileEvents::global_counters[i]; - if (0 != value || context.getSettingsRef().system_events_show_zero_values) + if (0 != value || context->getSettingsRef().system_events_show_zero_values) { res_columns[0]->insert(ProfileEvents::getName(ProfileEvents::Event(i))); res_columns[1]->insert(value); diff --git a/src/Storages/System/StorageSystemEvents.h b/src/Storages/System/StorageSystemEvents.h index f1687e42233..6071cb7b2b3 100644 --- a/src/Storages/System/StorageSystemEvents.h +++ b/src/Storages/System/StorageSystemEvents.h @@ -22,7 +22,7 @@ public: protected: using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; }; } diff --git a/src/Storages/System/StorageSystemFormats.cpp b/src/Storages/System/StorageSystemFormats.cpp index 7048ab98a0d..86e0212a523 100644 --- a/src/Storages/System/StorageSystemFormats.cpp +++ b/src/Storages/System/StorageSystemFormats.cpp @@ -15,7 +15,7 @@ NamesAndTypesList StorageSystemFormats::getNamesAndTypes() }; } -void StorageSystemFormats::fillData(MutableColumns & res_columns, const Context &, const SelectQueryInfo &) const +void StorageSystemFormats::fillData(MutableColumns & res_columns, ContextPtr, const SelectQueryInfo &) const { const auto & formats = FormatFactory::instance().getAllFormats(); for (const auto & pair : formats) diff --git a/src/Storages/System/StorageSystemFormats.h b/src/Storages/System/StorageSystemFormats.h index f90839e44e9..ed65cd2af88 100644 --- a/src/Storages/System/StorageSystemFormats.h +++ b/src/Storages/System/StorageSystemFormats.h @@ -9,7 +9,7 @@ class StorageSystemFormats final : public ext::shared_ptr_helper; protected: - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; using IStorageSystemOneBlock::IStorageSystemOneBlock; public: diff --git a/src/Storages/System/StorageSystemFunctions.cpp b/src/Storages/System/StorageSystemFunctions.cpp index e46b7007dc2..973bf493cd1 100644 --- a/src/Storages/System/StorageSystemFunctions.cpp +++ b/src/Storages/System/StorageSystemFunctions.cpp @@ -34,7 +34,7 @@ NamesAndTypesList StorageSystemFunctions::getNamesAndTypes() }; } -void StorageSystemFunctions::fillData(MutableColumns & res_columns, const Context &, const SelectQueryInfo &) const +void StorageSystemFunctions::fillData(MutableColumns & res_columns, ContextPtr, const SelectQueryInfo &) const { const auto & 
diff --git a/src/Storages/System/StorageSystemEvents.cpp b/src/Storages/System/StorageSystemEvents.cpp
index ddb00659473..be2d3f8d49e 100644
--- a/src/Storages/System/StorageSystemEvents.cpp
+++ b/src/Storages/System/StorageSystemEvents.cpp
@@ -16,13 +16,13 @@ NamesAndTypesList StorageSystemEvents::getNamesAndTypes()
     };
 }

-void StorageSystemEvents::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const
+void StorageSystemEvents::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const
 {
     for (size_t i = 0, end = ProfileEvents::end(); i < end; ++i)
     {
         UInt64 value = ProfileEvents::global_counters[i];

-        if (0 != value || context.getSettingsRef().system_events_show_zero_values)
+        if (0 != value || context->getSettingsRef().system_events_show_zero_values)
         {
             res_columns[0]->insert(ProfileEvents::getName(ProfileEvents::Event(i)));
             res_columns[1]->insert(value);
diff --git a/src/Storages/System/StorageSystemEvents.h b/src/Storages/System/StorageSystemEvents.h
index f1687e42233..6071cb7b2b3 100644
--- a/src/Storages/System/StorageSystemEvents.h
+++ b/src/Storages/System/StorageSystemEvents.h
@@ -22,7 +22,7 @@ public:
 protected:
     using IStorageSystemOneBlock::IStorageSystemOneBlock;
-    void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override;
+    void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override;
 };
 }
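`system.events` gets the same gating: zero-valued profile counters stay hidden unless `system_events_show_zero_values` is enabled. A minimal check, assuming the table's usual `event`/`value` column names (they are not spelled out in this hunk):

    SELECT event, value
    FROM system.events
    WHERE event LIKE 'Select%'
    SETTINGS system_events_show_zero_values = 1;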
diff --git a/src/Storages/System/StorageSystemFormats.cpp b/src/Storages/System/StorageSystemFormats.cpp
index 7048ab98a0d..86e0212a523 100644
--- a/src/Storages/System/StorageSystemFormats.cpp
+++ b/src/Storages/System/StorageSystemFormats.cpp
@@ -15,7 +15,7 @@ NamesAndTypesList StorageSystemFormats::getNamesAndTypes()
     };
 }

-void StorageSystemFormats::fillData(MutableColumns & res_columns, const Context &, const SelectQueryInfo &) const
+void StorageSystemFormats::fillData(MutableColumns & res_columns, ContextPtr, const SelectQueryInfo &) const
 {
     const auto & formats = FormatFactory::instance().getAllFormats();
     for (const auto & pair : formats)
diff --git a/src/Storages/System/StorageSystemFormats.h b/src/Storages/System/StorageSystemFormats.h
index f90839e44e9..ed65cd2af88 100644
--- a/src/Storages/System/StorageSystemFormats.h
+++ b/src/Storages/System/StorageSystemFormats.h
@@ -9,7 +9,7 @@ class StorageSystemFormats final : public ext::shared_ptr_helper;
 protected:
-    void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override;
+    void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override;
     using IStorageSystemOneBlock::IStorageSystemOneBlock;
 public:
diff --git a/src/Storages/System/StorageSystemFunctions.cpp b/src/Storages/System/StorageSystemFunctions.cpp
index e46b7007dc2..973bf493cd1 100644
--- a/src/Storages/System/StorageSystemFunctions.cpp
+++ b/src/Storages/System/StorageSystemFunctions.cpp
@@ -34,7 +34,7 @@ NamesAndTypesList StorageSystemFunctions::getNamesAndTypes()
     };
 }

-void StorageSystemFunctions::fillData(MutableColumns & res_columns, const Context &, const SelectQueryInfo &) const
+void StorageSystemFunctions::fillData(MutableColumns & res_columns, ContextPtr, const SelectQueryInfo &) const
 {
     const auto & functions_factory = FunctionFactory::instance();
     const auto & function_names = functions_factory.getAllRegisteredNames();
diff --git a/src/Storages/System/StorageSystemFunctions.h b/src/Storages/System/StorageSystemFunctions.h
index f62d731f288..62942721995 100644
--- a/src/Storages/System/StorageSystemFunctions.h
+++ b/src/Storages/System/StorageSystemFunctions.h
@@ -24,7 +24,7 @@ public:
 protected:
     using IStorageSystemOneBlock::IStorageSystemOneBlock;
-    void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override;
+    void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override;
 };
 }
diff --git a/src/Storages/System/StorageSystemGrants.cpp b/src/Storages/System/StorageSystemGrants.cpp
index 360256c1f45..1ba5e6d96a4 100644
--- a/src/Storages/System/StorageSystemGrants.cpp
+++ b/src/Storages/System/StorageSystemGrants.cpp
@@ -18,7 +18,6 @@ namespace DB
 {
 using EntityType = IAccessEntity::Type;
-using Kind = AccessRightsElementWithOptions::Kind;

 NamesAndTypesList StorageSystemGrants::getNamesAndTypes()
 {
@@ -36,10 +35,10 @@ NamesAndTypesList StorageSystemGrants::getNamesAndTypes()
 }

-void StorageSystemGrants::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const
+void StorageSystemGrants::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const
 {
-    context.checkAccess(AccessType::SHOW_USERS | AccessType::SHOW_ROLES);
-    const auto & access_control = context.getAccessControlManager();
+    context->checkAccess(AccessType::SHOW_USERS | AccessType::SHOW_ROLES);
+    const auto & access_control = context->getAccessControlManager();
     std::vector ids = access_control.findAll();
     boost::range::push_back(ids, access_control.findAll());

@@ -64,7 +63,7 @@ void StorageSystemGrants::fillData(MutableColumns & res_columns, const Context &
         const String * database,
         const String * table,
         const String * column,
-        Kind kind,
+        bool is_partial_revoke,
         bool grant_option)
     {
         if (grantee_type == EntityType::USER)
@@ -119,13 +118,13 @@ void StorageSystemGrants::fillData(MutableColumns & res_columns, const Context &
             column_column_null_map.push_back(true);
         }

-        column_is_partial_revoke.push_back(kind == Kind::REVOKE);
+        column_is_partial_revoke.push_back(is_partial_revoke);
         column_grant_option.push_back(grant_option);
     };

     auto add_rows = [&](const String & grantee_name,
                         IAccessEntity::Type grantee_type,
-                        const AccessRightsElementsWithOptions & elements)
+                        const AccessRightsElements & elements)
     {
         for (const auto & element : elements)
         {
@@ -139,13 +138,13 @@ void StorageSystemGrants::fillData(MutableColumns & res_columns, const Context &
             if (element.any_column)
             {
                 for (const auto & access_type : access_types)
-                    add_row(grantee_name, grantee_type, access_type, database, table, nullptr, element.kind, element.grant_option);
+                    add_row(grantee_name, grantee_type, access_type, database, table, nullptr, element.is_partial_revoke, element.grant_option);
             }
             else
             {
                 for (const auto & access_type : access_types)
                     for (const auto & column : element.columns)
-                        add_row(grantee_name, grantee_type, access_type, database, table, &column, element.kind, element.grant_option);
+                        add_row(grantee_name, grantee_type, access_type, database, table, &column, element.is_partial_revoke, element.grant_option);
             }
         }
     };
diff --git a/src/Storages/System/StorageSystemGrants.h b/src/Storages/System/StorageSystemGrants.h
index 39c38deed85..8c8a0f9f7bf 100644
--- a/src/Storages/System/StorageSystemGrants.h
+++ b/src/Storages/System/StorageSystemGrants.h
@@ -18,7 +18,7 @@ public:
 protected:
     friend struct ext::shared_ptr_helper;
     using IStorageSystemOneBlock::IStorageSystemOneBlock;
-    void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const override;
+    void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const override;
 };
 }
diff --git a/src/Storages/System/StorageSystemGraphite.cpp b/src/Storages/System/StorageSystemGraphite.cpp
index 93bc16785b2..dd592600d18 100644
--- a/src/Storages/System/StorageSystemGraphite.cpp
+++ b/src/Storages/System/StorageSystemGraphite.cpp
@@ -25,7 +25,7 @@ NamesAndTypesList StorageSystemGraphite::getNamesAndTypes()
 /*
  * Looking for (Replicated)*GraphiteMergeTree and get all configuration parameters for them
  */
-static StorageSystemGraphite::Configs getConfigs(const Context & context)
+static StorageSystemGraphite::Configs getConfigs(ContextPtr context)
 {
     const Databases databases = DatabaseCatalog::instance().getDatabases();
     StorageSystemGraphite::Configs graphite_configs;
@@ -73,7 +73,7 @@ static StorageSystemGraphite::Configs getConfigs(const Context & context)
     return graphite_configs;
 }

-void StorageSystemGraphite::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const
+void StorageSystemGraphite::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const
 {
     Configs graphite_configs = getConfigs(context);
diff --git a/src/Storages/System/StorageSystemGraphite.h b/src/Storages/System/StorageSystemGraphite.h
index 703db41dc39..256ad50e472 100644
--- a/src/Storages/System/StorageSystemGraphite.h
+++ b/src/Storages/System/StorageSystemGraphite.h
@@ -32,7 +32,7 @@ public:
 protected:
     using IStorageSystemOneBlock::IStorageSystemOneBlock;
-    void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override;
+    void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override;
 };
 }
diff --git a/src/Storages/System/StorageSystemLicenses.cpp b/src/Storages/System/StorageSystemLicenses.cpp
index 894c861de29..6f880f03e10 100644
--- a/src/Storages/System/StorageSystemLicenses.cpp
+++ b/src/Storages/System/StorageSystemLicenses.cpp
@@ -18,7 +18,7 @@ NamesAndTypesList StorageSystemLicenses::getNamesAndTypes()
     };
 }

-void StorageSystemLicenses::fillData(MutableColumns & res_columns, const Context &, const SelectQueryInfo &) const
+void StorageSystemLicenses::fillData(MutableColumns & res_columns, ContextPtr, const SelectQueryInfo &) const
 {
     for (const auto * it = library_licenses; *it; it += 4)
     {
diff --git a/src/Storages/System/StorageSystemLicenses.h b/src/Storages/System/StorageSystemLicenses.h
index cee48abacab..43bb1c20c22 100644
--- a/src/Storages/System/StorageSystemLicenses.h
+++ b/src/Storages/System/StorageSystemLicenses.h
@@ -17,7 +17,7 @@ class StorageSystemLicenses final :
 {
     friend struct ext::shared_ptr_helper;
 protected:
-    void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override;
+    void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override;

     using IStorageSystemOneBlock::IStorageSystemOneBlock;
diff --git a/src/Storages/System/StorageSystemMacros.cpp b/src/Storages/System/StorageSystemMacros.cpp
index 8e6420add8b..576fbc69039 100644
--- a/src/Storages/System/StorageSystemMacros.cpp
+++ b/src/Storages/System/StorageSystemMacros.cpp
@@ -14,9 +14,9 @@ NamesAndTypesList StorageSystemMacros::getNamesAndTypes()
     };
 }

-void StorageSystemMacros::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const
+void StorageSystemMacros::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const
 {
-    auto macros = context.getMacros();
+    auto macros = context->getMacros();

     for (const auto & macro : macros->getMacroMap())
     {
diff --git a/src/Storages/System/StorageSystemMacros.h b/src/Storages/System/StorageSystemMacros.h
index 52336bd6f69..298aa488265 100644
--- a/src/Storages/System/StorageSystemMacros.h
+++ b/src/Storages/System/StorageSystemMacros.h
@@ -24,7 +24,7 @@ public:
 protected:
     using IStorageSystemOneBlock::IStorageSystemOneBlock;
-    void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override;
+    void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override;
 };
 }
diff --git a/src/Storages/System/StorageSystemMergeTreeSettings.cpp b/src/Storages/System/StorageSystemMergeTreeSettings.cpp
index 19cbf76f252..626319af63f 100644
--- a/src/Storages/System/StorageSystemMergeTreeSettings.cpp
+++ b/src/Storages/System/StorageSystemMergeTreeSettings.cpp
@@ -20,9 +20,9 @@ NamesAndTypesList SystemMergeTreeSettings::getNamesAndTypes()
 }

 template
-void SystemMergeTreeSettings::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const
+void SystemMergeTreeSettings::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const
 {
-    const auto & settings = replicated ? context.getReplicatedMergeTreeSettings().all() : context.getMergeTreeSettings().all();
+    const auto & settings = replicated ? context->getReplicatedMergeTreeSettings().all() : context->getMergeTreeSettings().all();
     for (const auto & setting : settings)
     {
         res_columns[0]->insert(setting.getName());
diff --git a/src/Storages/System/StorageSystemMergeTreeSettings.h b/src/Storages/System/StorageSystemMergeTreeSettings.h
index 9f61fa6f780..b02b191fb69 100644
--- a/src/Storages/System/StorageSystemMergeTreeSettings.h
+++ b/src/Storages/System/StorageSystemMergeTreeSettings.h
@@ -28,7 +28,7 @@ public:
 protected:
     using IStorageSystemOneBlock>::IStorageSystemOneBlock;
-    void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override;
+    void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override;
 };
 }
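The `replicated` template parameter above backs two storages, one for plain MergeTree defaults and one for the replicated variant. A sketch of comparing the two (table names assumed from the storage class; settings left at their defaults show `changed = 0`):

    SELECT name, value, changed FROM system.merge_tree_settings WHERE changed;
    SELECT name, value, changed FROM system.replicated_merge_tree_settings WHERE changed;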
diff --git a/src/Storages/System/StorageSystemMerges.cpp b/src/Storages/System/StorageSystemMerges.cpp
index b61324818e4..b29836206d0 100644
--- a/src/Storages/System/StorageSystemMerges.cpp
+++ b/src/Storages/System/StorageSystemMerges.cpp
@@ -36,12 +36,12 @@ NamesAndTypesList StorageSystemMerges::getNamesAndTypes()
 }

-void StorageSystemMerges::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const
+void StorageSystemMerges::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const
 {
-    const auto access = context.getAccess();
+    const auto access = context->getAccess();
     const bool check_access_for_tables = !access->isGranted(AccessType::SHOW_TABLES);

-    for (const auto & merge : context.getMergeList().get())
+    for (const auto & merge : context->getMergeList().get())
     {
         if (check_access_for_tables && !access->isGranted(AccessType::SHOW_TABLES, merge.database, merge.table))
             continue;
diff --git a/src/Storages/System/StorageSystemMerges.h b/src/Storages/System/StorageSystemMerges.h
index 81c03c4e397..5898bf62825 100644
--- a/src/Storages/System/StorageSystemMerges.h
+++ b/src/Storages/System/StorageSystemMerges.h
@@ -24,7 +24,7 @@ public:
 protected:
     using IStorageSystemOneBlock::IStorageSystemOneBlock;
-    void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override;
+    void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override;
 };
 }
diff --git a/src/Storages/System/StorageSystemMetrics.cpp b/src/Storages/System/StorageSystemMetrics.cpp
index b2332c52817..6007c8a7c71 100644
--- a/src/Storages/System/StorageSystemMetrics.cpp
+++ b/src/Storages/System/StorageSystemMetrics.cpp
@@ -17,7 +17,7 @@ NamesAndTypesList StorageSystemMetrics::getNamesAndTypes()
     };
 }

-void StorageSystemMetrics::fillData(MutableColumns & res_columns, const Context &, const SelectQueryInfo &) const
+void StorageSystemMetrics::fillData(MutableColumns & res_columns, ContextPtr, const SelectQueryInfo &) const
 {
     for (size_t i = 0, end = CurrentMetrics::end(); i < end; ++i)
     {
diff --git a/src/Storages/System/StorageSystemMetrics.h b/src/Storages/System/StorageSystemMetrics.h
index c47bcea656f..af5d32ec46b 100644
--- a/src/Storages/System/StorageSystemMetrics.h
+++ b/src/Storages/System/StorageSystemMetrics.h
@@ -23,7 +23,7 @@ public:
 protected:
     using IStorageSystemOneBlock::IStorageSystemOneBlock;
-    void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override;
+    void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override;
 };
 }
diff --git a/src/Storages/System/StorageSystemModels.cpp b/src/Storages/System/StorageSystemModels.cpp
index 9fae9803b96..3df48e830bb 100644
--- a/src/Storages/System/StorageSystemModels.cpp
+++ b/src/Storages/System/StorageSystemModels.cpp
@@ -25,9 +25,9 @@ NamesAndTypesList StorageSystemModels::getNamesAndTypes()
     };
 }

-void StorageSystemModels::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const
+void StorageSystemModels::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const
 {
-    const auto & external_models_loader = context.getExternalModelsLoader();
+    const auto & external_models_loader = context->getExternalModelsLoader();
     auto load_results = external_models_loader.getLoadResults();

     for (const auto & load_result : load_results)
diff --git a/src/Storages/System/StorageSystemModels.h b/src/Storages/System/StorageSystemModels.h
index cee5200e7de..832a9d550db 100644
--- a/src/Storages/System/StorageSystemModels.h
+++ b/src/Storages/System/StorageSystemModels.h
@@ -21,7 +21,7 @@ public:
 protected:
     using IStorageSystemOneBlock::IStorageSystemOneBlock;
-    void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override;
+    void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override;
 };
 }
diff --git a/src/Storages/System/StorageSystemMutations.cpp b/src/Storages/System/StorageSystemMutations.cpp
index f66f57ef5d1..fa521c632b8 100644
--- a/src/Storages/System/StorageSystemMutations.cpp
+++ b/src/Storages/System/StorageSystemMutations.cpp
@@ -35,9 +35,9 @@ NamesAndTypesList StorageSystemMutations::getNamesAndTypes()
 }

-void StorageSystemMutations::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const
+void StorageSystemMutations::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const
 {
-    const auto access = context.getAccess();
+    const auto access = context->getAccess();
     const bool check_access_for_databases = !access->isGranted(AccessType::SHOW_TABLES);

     /// Collect a set of *MergeTree tables.
diff --git a/src/Storages/System/StorageSystemMutations.h b/src/Storages/System/StorageSystemMutations.h
index f7bc5f6f33c..1f41ff6051b 100644
--- a/src/Storages/System/StorageSystemMutations.h
+++ b/src/Storages/System/StorageSystemMutations.h
@@ -23,7 +23,7 @@ public:
 protected:
     using IStorageSystemOneBlock::IStorageSystemOneBlock;
-    void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override;
+    void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override;
 };
 }
diff --git a/src/Storages/System/StorageSystemNumbers.cpp b/src/Storages/System/StorageSystemNumbers.cpp
index 677e0c02400..f8a0e94bf98 100644
--- a/src/Storages/System/StorageSystemNumbers.cpp
+++ b/src/Storages/System/StorageSystemNumbers.cpp
@@ -126,7 +126,7 @@ Pipe StorageSystemNumbers::read(
     const Names & column_names,
     const StorageMetadataPtr & metadata_snapshot,
     SelectQueryInfo &,
-    const Context & /*context*/,
+    ContextPtr /*context*/,
     QueryProcessingStage::Enum /*processed_stage*/,
     size_t max_block_size,
     unsigned num_streams)
diff --git a/src/Storages/System/StorageSystemNumbers.h b/src/Storages/System/StorageSystemNumbers.h
index d12c28c1ce2..708ace7a4cd 100644
--- a/src/Storages/System/StorageSystemNumbers.h
+++ b/src/Storages/System/StorageSystemNumbers.h
@@ -33,7 +33,7 @@ public:
         const Names & column_names,
         const StorageMetadataPtr & /*metadata_snapshot*/,
         SelectQueryInfo & query_info,
-        const Context & context,
+        ContextPtr context,
         QueryProcessingStage::Enum processed_stage,
         size_t max_block_size,
         unsigned num_streams) override;
diff --git a/src/Storages/System/StorageSystemOne.cpp b/src/Storages/System/StorageSystemOne.cpp
index c456b22e97b..7c28f897121 100644
--- a/src/Storages/System/StorageSystemOne.cpp
+++ b/src/Storages/System/StorageSystemOne.cpp
@@ -24,7 +24,7 @@ Pipe StorageSystemOne::read(
     const Names & column_names,
     const StorageMetadataPtr & metadata_snapshot,
     SelectQueryInfo &,
-    const Context & /*context*/,
+    ContextPtr /*context*/,
     QueryProcessingStage::Enum /*processed_stage*/,
     const size_t /*max_block_size*/,
     const unsigned /*num_streams*/)
diff --git a/src/Storages/System/StorageSystemOne.h b/src/Storages/System/StorageSystemOne.h
index 8228ce465e0..a14d5e15726 100644
--- a/src/Storages/System/StorageSystemOne.h
+++ b/src/Storages/System/StorageSystemOne.h
@@ -25,13 +25,13 @@ public:
         const Names & column_names,
         const StorageMetadataPtr & /*metadata_snapshot*/,
         SelectQueryInfo & query_info,
-        const Context & context,
+        ContextPtr context,
         QueryProcessingStage::Enum processed_stage,
         size_t max_block_size,
         unsigned num_streams) override;

 protected:
-    StorageSystemOne(const StorageID & table_id_);
+    explicit StorageSystemOne(const StorageID & table_id_);
 };
 }
diff --git a/src/Storages/System/StorageSystemParts.cpp b/src/Storages/System/StorageSystemParts.cpp
index eece092206d..6a643dbe1b9 100644
--- a/src/Storages/System/StorageSystemParts.cpp
+++ b/src/Storages/System/StorageSystemParts.cpp
@@ -137,14 +137,17 @@ void StorageSystemParts::processNextStorage(
         if (columns_mask[src_index++])
             columns[res_index++]->insert(static_cast(part.use_count() - 1));

+        auto min_max_date = part->getMinMaxDate();
+        auto min_max_time = part->getMinMaxTime();
+
         if (columns_mask[src_index++])
-            columns[res_index++]->insert(part->getMinDate());
+            columns[res_index++]->insert(min_max_date.first);
         if (columns_mask[src_index++])
-            columns[res_index++]->insert(part->getMaxDate());
+            columns[res_index++]->insert(min_max_date.second);
         if (columns_mask[src_index++])
-            columns[res_index++]->insert(static_cast(part->getMinTime()));
+            columns[res_index++]->insert(static_cast(min_max_time.first));
         if (columns_mask[src_index++])
-            columns[res_index++]->insert(static_cast(part->getMaxTime()));
+            columns[res_index++]->insert(static_cast(min_max_time.second));
         if (columns_mask[src_index++])
             columns[res_index++]->insert(part->info.partition_id);
         if (columns_mask[src_index++])
diff --git a/src/Storages/System/StorageSystemPartsBase.cpp b/src/Storages/System/StorageSystemPartsBase.cpp
index 39cc651e147..f1c82aa4c63 100644
--- a/src/Storages/System/StorageSystemPartsBase.cpp
+++ b/src/Storages/System/StorageSystemPartsBase.cpp
@@ -62,8 +62,8 @@ StoragesInfo::getParts(MergeTreeData::DataPartStateVector & state, bool has_stat
     return data->getDataPartsVector({State::Committed}, &state);
 }

-StoragesInfoStream::StoragesInfoStream(const SelectQueryInfo & query_info, const Context & context)
-    : query_id(context.getCurrentQueryId()), settings(context.getSettings())
+StoragesInfoStream::StoragesInfoStream(const SelectQueryInfo & query_info, ContextPtr context)
+    : query_id(context->getCurrentQueryId()), settings(context->getSettings())
 {
     /// Will apply WHERE to subset of columns and then add more columns.
     /// This is kind of complicated, but we use WHERE to do less work.
@@ -74,7 +74,7 @@ StoragesInfoStream::StoragesInfoStream(const SelectQueryInfo & query_info, const
     MutableColumnPtr engine_column_mut = ColumnString::create();
     MutableColumnPtr active_column_mut = ColumnUInt8::create();

-    const auto access = context.getAccess();
+    const auto access = context->getAccess();
     const bool check_access_for_tables = !access->isGranted(AccessType::SHOW_TABLES);

     {
@@ -84,7 +84,7 @@ StoragesInfoStream::StoragesInfoStream(const SelectQueryInfo & query_info, const
         MutableColumnPtr database_column_mut = ColumnString::create();
         for (const auto & database : databases)
         {
-            /// Checck if database can contain MergeTree tables,
+            /// Check if database can contain MergeTree tables,
             /// if not it's unnecessary to load all tables of database just to filter all of them.
             if (database.second->canContainMergeTreeTables())
                 database_column_mut->insert(database.first);
@@ -234,7 +234,7 @@ Pipe StorageSystemPartsBase::read(
     const Names & column_names,
     const StorageMetadataPtr & metadata_snapshot,
     SelectQueryInfo & query_info,
-    const Context & context,
+    ContextPtr context,
     QueryProcessingStage::Enum /*processed_stage*/,
     const size_t /*max_block_size*/,
     const unsigned /*num_streams*/)
diff --git a/src/Storages/System/StorageSystemPartsBase.h b/src/Storages/System/StorageSystemPartsBase.h
index 3f63d75e2b6..33f82d04252 100644
--- a/src/Storages/System/StorageSystemPartsBase.h
+++ b/src/Storages/System/StorageSystemPartsBase.h
@@ -31,7 +31,7 @@ struct StoragesInfo
 class StoragesInfoStream
 {
 public:
-    StoragesInfoStream(const SelectQueryInfo & query_info, const Context & context);
+    StoragesInfoStream(const SelectQueryInfo & query_info, ContextPtr context);
     StoragesInfo next();

 private:
@@ -59,7 +59,7 @@ public:
         const Names & column_names,
         const StorageMetadataPtr & metadata_snapshot,
         SelectQueryInfo & query_info,
-        const Context & context,
+        ContextPtr context,
         QueryProcessingStage::Enum processed_stage,
         size_t max_block_size,
         unsigned num_streams) override;
diff --git a/src/Storages/System/StorageSystemPartsColumns.cpp b/src/Storages/System/StorageSystemPartsColumns.cpp
index 8754e424281..703de70d17f 100644
--- a/src/Storages/System/StorageSystemPartsColumns.cpp
+++ b/src/Storages/System/StorageSystemPartsColumns.cpp
@@ -32,6 +32,8 @@ StorageSystemPartsColumns::StorageSystemPartsColumns(const StorageID & table_id_
         {"refcount", std::make_shared()},
         {"min_date", std::make_shared()},
         {"max_date", std::make_shared()},
+        {"min_time", std::make_shared()},
+        {"max_time", std::make_shared()},
         {"partition_id", std::make_shared()},
         {"min_block_number", std::make_shared()},
         {"max_block_number", std::make_shared()},
@@ -95,8 +97,10 @@ void StorageSystemPartsColumns::processNextStorage(
         /// For convenience, in returned refcount, don't add references that was due to local variables in this method: all_parts, active_parts.
         auto use_count = part.use_count() - 1;

-        auto min_date = part->getMinDate();
-        auto max_date = part->getMaxDate();
+
+        auto min_max_date = part->getMinMaxDate();
+        auto min_max_time = part->getMinMaxTime();
+
         auto index_size_in_bytes = part->getIndexSizeInBytes();
         auto index_size_in_allocated_bytes = part->getIndexSizeInAllocatedBytes();

@@ -141,9 +145,14 @@ void StorageSystemPartsColumns::processNextStorage(
             columns[res_index++]->insert(UInt64(use_count));

         if (columns_mask[src_index++])
-            columns[res_index++]->insert(min_date);
+            columns[res_index++]->insert(min_max_date.first);
         if (columns_mask[src_index++])
-            columns[res_index++]->insert(max_date);
+            columns[res_index++]->insert(min_max_date.second);
+        if (columns_mask[src_index++])
+            columns[res_index++]->insert(static_cast(min_max_time.first));
+        if (columns_mask[src_index++])
+            columns[res_index++]->insert(static_cast(min_max_time.second));
+
         if (columns_mask[src_index++])
             columns[res_index++]->insert(part->info.partition_id);
         if (columns_mask[src_index++])
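With `getMinMaxDate()`/`getMinMaxTime()` wired in above, `system.parts_columns` exposes the new `min_time`/`max_time` columns next to the existing `min_date`/`max_date`; the time pair is only meaningful when the part's minmax index is built over a `DateTime` column. A sketch against a hypothetical table `hits`:

    SELECT partition_id, min_date, max_date, min_time, max_time
    FROM system.parts_columns
    WHERE table = 'hits' AND active
    LIMIT 5;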
diff --git a/src/Storages/System/StorageSystemPrivileges.cpp b/src/Storages/System/StorageSystemPrivileges.cpp
index 5dda0caf201..ca369efe43a 100644
--- a/src/Storages/System/StorageSystemPrivileges.cpp
+++ b/src/Storages/System/StorageSystemPrivileges.cpp
@@ -74,7 +74,7 @@ NamesAndTypesList StorageSystemPrivileges::getNamesAndTypes()
 }

-void StorageSystemPrivileges::fillData(MutableColumns & res_columns, const Context &, const SelectQueryInfo &) const
+void StorageSystemPrivileges::fillData(MutableColumns & res_columns, ContextPtr, const SelectQueryInfo &) const
 {
     size_t column_index = 0;
     auto & column_access_type = assert_cast(*res_columns[column_index++]).getData();
diff --git a/src/Storages/System/StorageSystemPrivileges.h b/src/Storages/System/StorageSystemPrivileges.h
index 8540e3d7ec3..618e1c91597 100644
--- a/src/Storages/System/StorageSystemPrivileges.h
+++ b/src/Storages/System/StorageSystemPrivileges.h
@@ -19,7 +19,7 @@ public:
 protected:
     friend struct ext::shared_ptr_helper;
     using IStorageSystemOneBlock::IStorageSystemOneBlock;
-    void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const override;
+    void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const override;
 };
 }
diff --git a/src/Storages/System/StorageSystemProcesses.cpp b/src/Storages/System/StorageSystemProcesses.cpp
index b397d941786..785b4c0df11 100644
--- a/src/Storages/System/StorageSystemProcesses.cpp
+++ b/src/Storages/System/StorageSystemProcesses.cpp
@@ -64,13 +64,15 @@ NamesAndTypesList StorageSystemProcesses::getNamesAndTypes()
         {"ProfileEvents.Values", std::make_shared(std::make_shared())},
         {"Settings.Names", std::make_shared(std::make_shared())},
         {"Settings.Values", std::make_shared(std::make_shared())},
+
+        {"current_database", std::make_shared()},
     };
 }

-void StorageSystemProcesses::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const
+void StorageSystemProcesses::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const
 {
-    ProcessList::Info info = context.getProcessList().getInfo(true, true, true);
+    ProcessList::Info info = context->getProcessList().getInfo(true, true, true);

     for (const auto & process : info)
     {
@@ -149,6 +151,8 @@ void StorageSystemProcesses::fillData(MutableColumns & res_columns, const Contex
                 column_settings_values->insertDefault();
             }
         }
+
+        res_columns[i++]->insert(process.current_database);
     }
 }
diff --git a/src/Storages/System/StorageSystemProcesses.h b/src/Storages/System/StorageSystemProcesses.h
index 62c568970e7..4f876348a4b 100644
--- a/src/Storages/System/StorageSystemProcesses.h
+++ b/src/Storages/System/StorageSystemProcesses.h
@@ -23,7 +23,7 @@ public:
 protected:
     using IStorageSystemOneBlock::IStorageSystemOneBlock;
-    void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override;
+    void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override;
 };
 }
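The new `current_database` column makes it possible to see which database each running query resolves unqualified table names against. A sketch (`current_database` comes from the hunk above; the other columns are long-standing members of `system.processes`):

    SELECT query_id, user, current_database, elapsed, query
    FROM system.processes;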
index abb9505eb5a..806c3eb3f4a 100644 --- a/src/Storages/System/StorageSystemQuotaUsage.h +++ b/src/Storages/System/StorageSystemQuotaUsage.h @@ -20,12 +20,12 @@ public: static NamesAndTypesList getNamesAndTypes(); static NamesAndTypesList getNamesAndTypesImpl(bool add_column_is_current); - static void fillDataImpl(MutableColumns & res_columns, const Context & context, bool add_column_is_current, const std::vector & quotas_usage); + static void fillDataImpl(MutableColumns & res_columns, ContextPtr context, bool add_column_is_current, const std::vector & quotas_usage); protected: friend struct ext::shared_ptr_helper; using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const override; }; } diff --git a/src/Storages/System/StorageSystemQuotas.cpp b/src/Storages/System/StorageSystemQuotas.cpp index fab6384e6a8..4bba082f66e 100644 --- a/src/Storages/System/StorageSystemQuotas.cpp +++ b/src/Storages/System/StorageSystemQuotas.cpp @@ -52,10 +52,10 @@ NamesAndTypesList StorageSystemQuotas::getNamesAndTypes() } -void StorageSystemQuotas::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const +void StorageSystemQuotas::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const { - context.checkAccess(AccessType::SHOW_QUOTAS); - const auto & access_control = context.getAccessControlManager(); + context->checkAccess(AccessType::SHOW_QUOTAS); + const auto & access_control = context->getAccessControlManager(); std::vector ids = access_control.findAll(); size_t column_index = 0; diff --git a/src/Storages/System/StorageSystemQuotas.h b/src/Storages/System/StorageSystemQuotas.h index 8d1da53d641..fb74ea9b05f 100644 --- a/src/Storages/System/StorageSystemQuotas.h +++ b/src/Storages/System/StorageSystemQuotas.h @@ -19,7 +19,7 @@ public: protected: friend struct ext::shared_ptr_helper; using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const override; }; } diff --git a/src/Storages/System/StorageSystemQuotasUsage.cpp b/src/Storages/System/StorageSystemQuotasUsage.cpp index 5c6879cd143..363562bce19 100644 --- a/src/Storages/System/StorageSystemQuotasUsage.cpp +++ b/src/Storages/System/StorageSystemQuotasUsage.cpp @@ -13,10 +13,10 @@ NamesAndTypesList StorageSystemQuotasUsage::getNamesAndTypes() return StorageSystemQuotaUsage::getNamesAndTypesImpl(/* add_column_is_current = */ true); } -void StorageSystemQuotasUsage::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const +void StorageSystemQuotasUsage::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const { - context.checkAccess(AccessType::SHOW_QUOTAS); - auto all_quotas_usage = context.getAccessControlManager().getAllQuotasUsage(); + context->checkAccess(AccessType::SHOW_QUOTAS); + auto all_quotas_usage = context->getAccessControlManager().getAllQuotasUsage(); StorageSystemQuotaUsage::fillDataImpl(res_columns, context, /* add_column_is_current = */ true, all_quotas_usage); } } diff --git a/src/Storages/System/StorageSystemQuotasUsage.h b/src/Storages/System/StorageSystemQuotasUsage.h index 
d4fd93b577d..1f29ea9b886 100644 --- a/src/Storages/System/StorageSystemQuotasUsage.h +++ b/src/Storages/System/StorageSystemQuotasUsage.h @@ -20,7 +20,7 @@ public: protected: friend struct ext::shared_ptr_helper; using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const override; }; } diff --git a/src/Storages/System/StorageSystemReplicas.cpp b/src/Storages/System/StorageSystemReplicas.cpp index 0af67ab6986..fc33c6b421b 100644 --- a/src/Storages/System/StorageSystemReplicas.cpp +++ b/src/Storages/System/StorageSystemReplicas.cpp @@ -60,14 +60,14 @@ Pipe StorageSystemReplicas::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum /*processed_stage*/, const size_t /*max_block_size*/, const unsigned /*num_streams*/) { metadata_snapshot->check(column_names, getVirtuals(), getStorageID()); - const auto access = context.getAccess(); + const auto access = context->getAccess(); const bool check_access_for_databases = !access->isGranted(AccessType::SHOW_TABLES); /// We collect a set of replicated tables. diff --git a/src/Storages/System/StorageSystemReplicas.h b/src/Storages/System/StorageSystemReplicas.h index d9e364a28c0..2352d7ccdf2 100644 --- a/src/Storages/System/StorageSystemReplicas.h +++ b/src/Storages/System/StorageSystemReplicas.h @@ -22,7 +22,7 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; diff --git a/src/Storages/System/StorageSystemReplicatedFetches.cpp b/src/Storages/System/StorageSystemReplicatedFetches.cpp index 53bec5aa42f..453568e3b86 100644 --- a/src/Storages/System/StorageSystemReplicatedFetches.cpp +++ b/src/Storages/System/StorageSystemReplicatedFetches.cpp @@ -30,12 +30,12 @@ NamesAndTypesList StorageSystemReplicatedFetches::getNamesAndTypes() }; } -void StorageSystemReplicatedFetches::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const +void StorageSystemReplicatedFetches::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const { - const auto access = context.getAccess(); + const auto access = context->getAccess(); const bool check_access_for_tables = !access->isGranted(AccessType::SHOW_TABLES); - for (const auto & fetch : context.getReplicatedFetchList().get()) + for (const auto & fetch : context->getReplicatedFetchList().get()) { if (check_access_for_tables && !access->isGranted(AccessType::SHOW_TABLES, fetch.database, fetch.table)) continue; diff --git a/src/Storages/System/StorageSystemReplicatedFetches.h b/src/Storages/System/StorageSystemReplicatedFetches.h index 34081923e4f..ed25e75eb70 100644 --- a/src/Storages/System/StorageSystemReplicatedFetches.h +++ b/src/Storages/System/StorageSystemReplicatedFetches.h @@ -22,7 +22,7 @@ public: protected: using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; }; } diff 
--git a/src/Storages/System/StorageSystemReplicationQueue.cpp b/src/Storages/System/StorageSystemReplicationQueue.cpp index 9cd5e8b8ff3..8acd192eac4 100644 --- a/src/Storages/System/StorageSystemReplicationQueue.cpp +++ b/src/Storages/System/StorageSystemReplicationQueue.cpp @@ -47,9 +47,9 @@ NamesAndTypesList StorageSystemReplicationQueue::getNamesAndTypes() } -void StorageSystemReplicationQueue::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const +void StorageSystemReplicationQueue::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const { - const auto access = context.getAccess(); + const auto access = context->getAccess(); const bool check_access_for_databases = !access->isGranted(AccessType::SHOW_TABLES); std::map> replicated_tables; diff --git a/src/Storages/System/StorageSystemReplicationQueue.h b/src/Storages/System/StorageSystemReplicationQueue.h index 36841fb9be9..f85f23a2b20 100644 --- a/src/Storages/System/StorageSystemReplicationQueue.h +++ b/src/Storages/System/StorageSystemReplicationQueue.h @@ -23,7 +23,7 @@ public: protected: using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; }; } diff --git a/src/Storages/System/StorageSystemRoleGrants.cpp b/src/Storages/System/StorageSystemRoleGrants.cpp index 0f0fcd831d9..32984afcfc5 100644 --- a/src/Storages/System/StorageSystemRoleGrants.cpp +++ b/src/Storages/System/StorageSystemRoleGrants.cpp @@ -31,10 +31,10 @@ NamesAndTypesList StorageSystemRoleGrants::getNamesAndTypes() } -void StorageSystemRoleGrants::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const +void StorageSystemRoleGrants::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const { - context.checkAccess(AccessType::SHOW_USERS | AccessType::SHOW_ROLES); - const auto & access_control = context.getAccessControlManager(); + context->checkAccess(AccessType::SHOW_USERS | AccessType::SHOW_ROLES); + const auto & access_control = context->getAccessControlManager(); std::vector ids = access_control.findAll(); boost::range::push_back(ids, access_control.findAll()); @@ -80,15 +80,17 @@ void StorageSystemRoleGrants::fillData(MutableColumns & res_columns, const Conte const GrantedRoles & granted_roles, const RolesOrUsersSet * default_roles) { - for (const auto & role_id : granted_roles.roles) + for (const auto & element : granted_roles.getElements()) { - auto role_name = access_control.tryReadName(role_id); - if (!role_name) - continue; + for (const auto & role_id : element.ids) + { + auto role_name = access_control.tryReadName(role_id); + if (!role_name) + continue; - bool is_default = !default_roles || default_roles->match(role_id); - bool with_admin_option = granted_roles.roles_with_admin_option.count(role_id); - add_row(grantee_name, grantee_type, *role_name, is_default, with_admin_option); + bool is_default = !default_roles || default_roles->match(role_id); + add_row(grantee_name, grantee_type, *role_name, is_default, element.admin_option); + } } }; diff --git a/src/Storages/System/StorageSystemRoleGrants.h b/src/Storages/System/StorageSystemRoleGrants.h index 0a02303abc3..a290dcf320d 100644 --- a/src/Storages/System/StorageSystemRoleGrants.h +++ 
b/src/Storages/System/StorageSystemRoleGrants.h @@ -18,7 +18,7 @@ public: protected: friend struct ext::shared_ptr_helper; using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const override; }; } diff --git a/src/Storages/System/StorageSystemRoles.cpp b/src/Storages/System/StorageSystemRoles.cpp index c560bc2bc6e..65ae74887a7 100644 --- a/src/Storages/System/StorageSystemRoles.cpp +++ b/src/Storages/System/StorageSystemRoles.cpp @@ -23,10 +23,10 @@ NamesAndTypesList StorageSystemRoles::getNamesAndTypes() } -void StorageSystemRoles::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const +void StorageSystemRoles::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const { - context.checkAccess(AccessType::SHOW_ROLES); - const auto & access_control = context.getAccessControlManager(); + context->checkAccess(AccessType::SHOW_ROLES); + const auto & access_control = context->getAccessControlManager(); std::vector ids = access_control.findAll(); size_t column_index = 0; diff --git a/src/Storages/System/StorageSystemRoles.h b/src/Storages/System/StorageSystemRoles.h index fb44194baff..38c7ed05f1e 100644 --- a/src/Storages/System/StorageSystemRoles.h +++ b/src/Storages/System/StorageSystemRoles.h @@ -18,7 +18,7 @@ public: protected: friend struct ext::shared_ptr_helper; using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const override; }; } diff --git a/src/Storages/System/StorageSystemRowPolicies.cpp b/src/Storages/System/StorageSystemRowPolicies.cpp index 9b11a781d6f..f9d6b14957e 100644 --- a/src/Storages/System/StorageSystemRowPolicies.cpp +++ b/src/Storages/System/StorageSystemRowPolicies.cpp @@ -52,10 +52,10 @@ NamesAndTypesList StorageSystemRowPolicies::getNamesAndTypes() } -void StorageSystemRowPolicies::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const +void StorageSystemRowPolicies::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const { - context.checkAccess(AccessType::SHOW_ROW_POLICIES); - const auto & access_control = context.getAccessControlManager(); + context->checkAccess(AccessType::SHOW_ROW_POLICIES); + const auto & access_control = context->getAccessControlManager(); std::vector ids = access_control.findAll(); size_t column_index = 0; diff --git a/src/Storages/System/StorageSystemRowPolicies.h b/src/Storages/System/StorageSystemRowPolicies.h index b81020b421c..3b9ebfcc25a 100644 --- a/src/Storages/System/StorageSystemRowPolicies.h +++ b/src/Storages/System/StorageSystemRowPolicies.h @@ -20,7 +20,7 @@ public: protected: friend struct ext::shared_ptr_helper; using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const override; }; } diff --git a/src/Storages/System/StorageSystemSettings.cpp b/src/Storages/System/StorageSystemSettings.cpp index 07a2d450b12..1aca7e45190 100644 --- a/src/Storages/System/StorageSystemSettings.cpp +++ 
b/src/Storages/System/StorageSystemSettings.cpp @@ -26,10 +26,10 @@ NamesAndTypesList StorageSystemSettings::getNamesAndTypes() #pragma GCC optimize("-fno-var-tracking-assignments") #endif -void StorageSystemSettings::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const +void StorageSystemSettings::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const { - const Settings & settings = context.getSettingsRef(); - auto settings_constraints = context.getSettingsConstraints(); + const Settings & settings = context->getSettingsRef(); + auto settings_constraints = context->getSettingsConstraints(); for (const auto & setting : settings.all()) { const auto & setting_name = setting.getName(); diff --git a/src/Storages/System/StorageSystemSettings.h b/src/Storages/System/StorageSystemSettings.h index 6cb5e18e1d7..d93c09d3f80 100644 --- a/src/Storages/System/StorageSystemSettings.h +++ b/src/Storages/System/StorageSystemSettings.h @@ -23,7 +23,7 @@ public: protected: using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; }; } diff --git a/src/Storages/System/StorageSystemSettingsProfileElements.cpp b/src/Storages/System/StorageSystemSettingsProfileElements.cpp index cf47416e188..fa824091238 100644 --- a/src/Storages/System/StorageSystemSettingsProfileElements.cpp +++ b/src/Storages/System/StorageSystemSettingsProfileElements.cpp @@ -37,10 +37,10 @@ NamesAndTypesList StorageSystemSettingsProfileElements::getNamesAndTypes() } -void StorageSystemSettingsProfileElements::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const +void StorageSystemSettingsProfileElements::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const { - context.checkAccess(AccessType::SHOW_SETTINGS_PROFILES); - const auto & access_control = context.getAccessControlManager(); + context->checkAccess(AccessType::SHOW_SETTINGS_PROFILES); + const auto & access_control = context->getAccessControlManager(); std::vector ids = access_control.findAll(); boost::range::push_back(ids, access_control.findAll()); boost::range::push_back(ids, access_control.findAll()); diff --git a/src/Storages/System/StorageSystemSettingsProfileElements.h b/src/Storages/System/StorageSystemSettingsProfileElements.h index 2dc79fed0e7..2262ea96dde 100644 --- a/src/Storages/System/StorageSystemSettingsProfileElements.h +++ b/src/Storages/System/StorageSystemSettingsProfileElements.h @@ -18,7 +18,7 @@ public: protected: friend struct ext::shared_ptr_helper; using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const override; }; } diff --git a/src/Storages/System/StorageSystemSettingsProfiles.cpp b/src/Storages/System/StorageSystemSettingsProfiles.cpp index a678290d447..c726f54a324 100644 --- a/src/Storages/System/StorageSystemSettingsProfiles.cpp +++ b/src/Storages/System/StorageSystemSettingsProfiles.cpp @@ -30,10 +30,10 @@ NamesAndTypesList StorageSystemSettingsProfiles::getNamesAndTypes() } -void StorageSystemSettingsProfiles::fillData(MutableColumns & res_columns, const Context & 
context, const SelectQueryInfo &) const +void StorageSystemSettingsProfiles::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const { - context.checkAccess(AccessType::SHOW_SETTINGS_PROFILES); - const auto & access_control = context.getAccessControlManager(); + context->checkAccess(AccessType::SHOW_SETTINGS_PROFILES); + const auto & access_control = context->getAccessControlManager(); std::vector ids = access_control.findAll(); size_t column_index = 0; diff --git a/src/Storages/System/StorageSystemSettingsProfiles.h b/src/Storages/System/StorageSystemSettingsProfiles.h index c6b887c99df..580430dc28b 100644 --- a/src/Storages/System/StorageSystemSettingsProfiles.h +++ b/src/Storages/System/StorageSystemSettingsProfiles.h @@ -18,7 +18,7 @@ public: protected: friend struct ext::shared_ptr_helper; using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const override; }; } diff --git a/src/Storages/System/StorageSystemStackTrace.cpp b/src/Storages/System/StorageSystemStackTrace.cpp index e74d56108ad..a6651aff8be 100644 --- a/src/Storages/System/StorageSystemStackTrace.cpp +++ b/src/Storages/System/StorageSystemStackTrace.cpp @@ -183,7 +183,7 @@ NamesAndTypesList StorageSystemStackTrace::getNamesAndTypes() } -void StorageSystemStackTrace::fillData(MutableColumns & res_columns, const Context &, const SelectQueryInfo &) const +void StorageSystemStackTrace::fillData(MutableColumns & res_columns, ContextPtr, const SelectQueryInfo &) const { /// It shouldn't be possible to do concurrent reads from this table. std::lock_guard lock(mutex); diff --git a/src/Storages/System/StorageSystemStackTrace.h b/src/Storages/System/StorageSystemStackTrace.h index 582618d2ecd..7f10e309775 100644 --- a/src/Storages/System/StorageSystemStackTrace.h +++ b/src/Storages/System/StorageSystemStackTrace.h @@ -31,7 +31,7 @@ public: protected: using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; mutable std::mutex mutex; diff --git a/src/Storages/System/StorageSystemStoragePolicies.cpp b/src/Storages/System/StorageSystemStoragePolicies.cpp index 7a10b986c11..48dfadd2b3c 100644 --- a/src/Storages/System/StorageSystemStoragePolicies.cpp +++ b/src/Storages/System/StorageSystemStoragePolicies.cpp @@ -39,7 +39,7 @@ Pipe StorageSystemStoragePolicies::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & /*query_info*/, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum /*processed_stage*/, const size_t /*max_block_size*/, const unsigned /*num_streams*/) @@ -55,7 +55,7 @@ Pipe StorageSystemStoragePolicies::read( MutableColumnPtr col_move_factor = ColumnFloat32::create(); MutableColumnPtr col_prefer_not_to_merge = ColumnUInt8::create(); - for (const auto & [policy_name, policy_ptr] : context.getPoliciesMap()) + for (const auto & [policy_name, policy_ptr] : context->getPoliciesMap()) { const auto & volumes = policy_ptr->getVolumes(); for (size_t i = 0; i != volumes.size(); ++i) diff --git a/src/Storages/System/StorageSystemStoragePolicies.h b/src/Storages/System/StorageSystemStoragePolicies.h 
index afd5e672d66..70053ebc1bc 100644 --- a/src/Storages/System/StorageSystemStoragePolicies.h +++ b/src/Storages/System/StorageSystemStoragePolicies.h @@ -24,7 +24,7 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; diff --git a/src/Storages/System/StorageSystemTableEngines.cpp b/src/Storages/System/StorageSystemTableEngines.cpp index 3f06faf6736..bc33cd9189c 100644 --- a/src/Storages/System/StorageSystemTableEngines.cpp +++ b/src/Storages/System/StorageSystemTableEngines.cpp @@ -20,7 +20,7 @@ NamesAndTypesList StorageSystemTableEngines::getNamesAndTypes() }; } -void StorageSystemTableEngines::fillData(MutableColumns & res_columns, const Context &, const SelectQueryInfo &) const +void StorageSystemTableEngines::fillData(MutableColumns & res_columns, ContextPtr, const SelectQueryInfo &) const { for (const auto & pair : StorageFactory::instance().getAllStorages()) { diff --git a/src/Storages/System/StorageSystemTableEngines.h b/src/Storages/System/StorageSystemTableEngines.h index 1c080c3040b..37f7f354073 100644 --- a/src/Storages/System/StorageSystemTableEngines.h +++ b/src/Storages/System/StorageSystemTableEngines.h @@ -12,7 +12,7 @@ class StorageSystemTableEngines final : public ext::shared_ptr_helper; protected: - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; using IStorageSystemOneBlock::IStorageSystemOneBlock; diff --git a/src/Storages/System/StorageSystemTableFunctions.cpp b/src/Storages/System/StorageSystemTableFunctions.cpp index 65b1dc41879..2824e1726e9 100644 --- a/src/Storages/System/StorageSystemTableFunctions.cpp +++ b/src/Storages/System/StorageSystemTableFunctions.cpp @@ -9,7 +9,7 @@ NamesAndTypesList StorageSystemTableFunctions::getNamesAndTypes() return {{"name", std::make_shared()}}; } -void StorageSystemTableFunctions::fillData(MutableColumns & res_columns, const Context &, const SelectQueryInfo &) const +void StorageSystemTableFunctions::fillData(MutableColumns & res_columns, ContextPtr, const SelectQueryInfo &) const { const auto & functions_names = TableFunctionFactory::instance().getAllRegisteredNames(); for (const auto & function_name : functions_names) diff --git a/src/Storages/System/StorageSystemTableFunctions.h b/src/Storages/System/StorageSystemTableFunctions.h index 95e025b9881..a5db5450d20 100644 --- a/src/Storages/System/StorageSystemTableFunctions.h +++ b/src/Storages/System/StorageSystemTableFunctions.h @@ -14,7 +14,7 @@ protected: using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; public: diff --git a/src/Storages/System/StorageSystemTables.cpp b/src/Storages/System/StorageSystemTables.cpp index 132ed234323..9602339f381 100644 --- a/src/Storages/System/StorageSystemTables.cpp +++ b/src/Storages/System/StorageSystemTables.cpp @@ -62,7 +62,7 @@ StorageSystemTables::StorageSystemTables(const StorageID & table_id_) } -static ColumnPtr getFilteredDatabases(const ASTPtr & query, const Context & context) +static 
ColumnPtr getFilteredDatabases(const SelectQueryInfo & query_info, ContextPtr context)
 {
     MutableColumnPtr column = ColumnString::create();
@@ -76,7 +76,7 @@ static ColumnPtr getFilteredDatabases(const ASTPtr & query, const Context & cont
     }

     Block block { ColumnWithTypeAndName(std::move(column), std::make_shared<DataTypeString>(), "database") };
-    VirtualColumnUtils::filterBlockWithQuery(query, block, context);
+    VirtualColumnUtils::filterBlockWithQuery(query_info.query, block, context);
     return block.getByPosition(0).column;
 }

@@ -104,12 +104,12 @@ public:
         Block header,
         UInt64 max_block_size_,
         ColumnPtr databases_,
-        const Context & context_)
+        ContextPtr context_)
         : SourceWithProgress(std::move(header))
         , columns_mask(std::move(columns_mask_))
         , max_block_size(max_block_size_)
         , databases(std::move(databases_))
-        , context(context_) {}
+        , context(Context::createCopy(context_)) {}

     String getName() const override { return "Tables"; }

@@ -121,7 +121,7 @@ protected:
         MutableColumns res_columns = getPort().getHeader().cloneEmptyColumns();

-        const auto access = context.getAccess();
+        const auto access = context->getAccess();
         const bool check_access_for_databases = !access->isGranted(AccessType::SHOW_TABLES);

         size_t rows_count = 0;
@@ -148,9 +148,9 @@ protected:
             /// This is for temporary tables. They are output in single block regardless to max_block_size.
             if (database_idx >= databases->size())
             {
-                if (context.hasSessionContext())
+                if (context->hasSessionContext())
                 {
-                    Tables external_tables = context.getSessionContext().getExternalTables();
+                    Tables external_tables = context->getSessionContext()->getExternalTables();

                     for (auto & table : external_tables)
                     {
@@ -278,7 +278,7 @@ protected:
                     }
                     try
                     {
-                        lock = table->lockForShare(context.getCurrentQueryId(), context.getSettingsRef().lock_acquire_timeout);
+                        lock = table->lockForShare(context->getCurrentQueryId(), context->getSettingsRef().lock_acquire_timeout);
                     }
                     catch (const Exception & e)
                     {
@@ -355,10 +355,11 @@ protected:
                     {
                         ASTPtr ast = database->tryGetCreateTableQuery(table_name, context);

-                        if (ast && !context.getSettingsRef().show_table_uuid_in_table_create_query_if_not_nil)
+                        if (ast && !context->getSettingsRef().show_table_uuid_in_table_create_query_if_not_nil)
                         {
                             auto & create = ast->as<ASTCreateQuery &>();
                             create.uuid = UUIDHelpers::Nil;
+                            create.to_inner_uuid = UUIDHelpers::Nil;
                         }

                         if (columns_mask[src_index++])
@@ -441,7 +442,7 @@ protected:
                 if (columns_mask[src_index++])
                 {
                     assert(table != nullptr);
-                    auto total_rows = table->totalRows(context.getSettingsRef());
+                    auto total_rows = table->totalRows(context->getSettingsRef());
                     if (total_rows)
                         res_columns[res_index++]->insert(*total_rows);
                     else
@@ -451,7 +452,7 @@ protected:
                 if (columns_mask[src_index++])
                 {
                     assert(table != nullptr);
-                    auto total_bytes = table->totalBytes(context.getSettingsRef());
+                    auto total_bytes = table->totalBytes(context->getSettingsRef());
                     if (total_bytes)
                         res_columns[res_index++]->insert(*total_bytes);
                     else
@@ -489,7 +490,7 @@ private:
     ColumnPtr databases;
     size_t database_idx = 0;
     DatabaseTablesIteratorPtr tables_it;
-    const Context context;
+    ContextPtr context;
     bool done = false;
     DatabasePtr database;
     std::string database_name;
@@ -500,7 +501,7 @@ Pipe StorageSystemTables::read(
     const Names & column_names,
     const StorageMetadataPtr & metadata_snapshot,
     SelectQueryInfo & query_info,
-    const Context & context,
+    ContextPtr context,
     QueryProcessingStage::Enum /*processed_stage*/,
     const size_t max_block_size,
     const unsigned /*num_streams*/)
@@ -524,7 +525,7 @@ Pipe StorageSystemTables::read(
         }
     }

-    ColumnPtr filtered_databases_column = getFilteredDatabases(query_info.query, context);
+    ColumnPtr filtered_databases_column = getFilteredDatabases(query_info, context);
     return Pipe(std::make_shared<TablesBlockSource>(
         std::move(columns_mask), std::move(res_block), max_block_size, std::move(filtered_databases_column), context));

diff --git a/src/Storages/System/StorageSystemTables.h b/src/Storages/System/StorageSystemTables.h
index 2e0b3386f8c..da5e236b33f 100644
--- a/src/Storages/System/StorageSystemTables.h
+++ b/src/Storages/System/StorageSystemTables.h
@@ -22,7 +22,7 @@ public:
         const Names & column_names,
         const StorageMetadataPtr & /*metadata_*/,
         SelectQueryInfo & query_info,
-        const Context & context,
+        ContextPtr context,
         QueryProcessingStage::Enum processed_stage,
         size_t max_block_size,
         unsigned num_streams) override;

diff --git a/src/Storages/System/StorageSystemTimeZones.cpp b/src/Storages/System/StorageSystemTimeZones.cpp
index e5523f54caf..dc3711812a6 100644
--- a/src/Storages/System/StorageSystemTimeZones.cpp
+++ b/src/Storages/System/StorageSystemTimeZones.cpp
@@ -15,7 +15,7 @@ NamesAndTypesList StorageSystemTimeZones::getNamesAndTypes()
     };
 }

-void StorageSystemTimeZones::fillData(MutableColumns & res_columns, const Context &, const SelectQueryInfo &) const
+void StorageSystemTimeZones::fillData(MutableColumns & res_columns, ContextPtr, const SelectQueryInfo &) const
 {
     for (auto * it = auto_time_zones; *it; ++it)
         res_columns[0]->insert(String(*it));

diff --git a/src/Storages/System/StorageSystemTimeZones.h b/src/Storages/System/StorageSystemTimeZones.h
index b7544ecb16d..0f68b2de293 100644
--- a/src/Storages/System/StorageSystemTimeZones.h
+++ b/src/Storages/System/StorageSystemTimeZones.h
@@ -17,7 +17,7 @@ class StorageSystemTimeZones final : public ext::shared_ptr_helper<StorageSystemTimeZones>
 protected:
-    void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override;
+    void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override;

     using IStorageSystemOneBlock::IStorageSystemOneBlock;

diff --git a/src/Storages/System/StorageSystemUserDirectories.cpp b/src/Storages/System/StorageSystemUserDirectories.cpp
index 519f0c0dcb0..7858af25365 100644
--- a/src/Storages/System/StorageSystemUserDirectories.cpp
+++ b/src/Storages/System/StorageSystemUserDirectories.cpp
@@ -22,9 +22,9 @@ NamesAndTypesList StorageSystemUserDirectories::getNamesAndTypes()
 }

-void StorageSystemUserDirectories::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const
+void StorageSystemUserDirectories::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const
 {
-    const auto & access_control = context.getAccessControlManager();
+    const auto & access_control = context->getAccessControlManager();
     auto storages = access_control.getStorages();

     size_t column_index = 0;

diff --git a/src/Storages/System/StorageSystemUserDirectories.h b/src/Storages/System/StorageSystemUserDirectories.h
index 902c890fe29..0ddb0ad49d8 100644
--- a/src/Storages/System/StorageSystemUserDirectories.h
+++ b/src/Storages/System/StorageSystemUserDirectories.h
@@ -18,7 +18,7 @@ public:
 protected:
     friend struct ext::shared_ptr_helper<StorageSystemUserDirectories>;
     using IStorageSystemOneBlock::IStorageSystemOneBlock;
-    void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const override;
+    void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const override;
 };

 }
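The hunks above all apply one mechanical migration: storages and block sources stop holding `const Context &` (or a `const Context` copy) and instead keep a `ContextPtr` obtained with `Context::createCopy`, with member access switching from `.` to `->`. A minimal sketch of the pattern, reusing the `SourceWithProgress` base and `ContextPtr` alias from the hunks above (illustrative only, not a hunk from this patch; `ExampleBlockSource` is a hypothetical name):

    // Before: `const Context context;` copied by value and accessed with `.`.
    // After: a shared ContextPtr copy that can safely outlive the caller's reference.
    class ExampleBlockSource : public SourceWithProgress
    {
    public:
        ExampleBlockSource(Block header, ContextPtr context_)
            : SourceWithProgress(std::move(header))
            , context(Context::createCopy(context_))
        {}

    protected:
        Chunk generate() override
        {
            const auto access = context->getAccess();   // was: context.getAccess()
            return {};
        }

    private:
        ContextPtr context;
    };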
diff --git a/src/Storages/System/StorageSystemUsers.cpp b/src/Storages/System/StorageSystemUsers.cpp
index bec94bc388c..e60f1372df9 100644
--- a/src/Storages/System/StorageSystemUsers.cpp
+++ b/src/Storages/System/StorageSystemUsers.cpp
@@ -47,15 +47,18 @@ NamesAndTypesList StorageSystemUsers::getNamesAndTypes()
         {"default_roles_all", std::make_shared<DataTypeUInt8>()},
         {"default_roles_list", std::make_shared<DataTypeArray>(std::make_shared<DataTypeString>())},
         {"default_roles_except", std::make_shared<DataTypeArray>(std::make_shared<DataTypeString>())},
+        {"grantees_any", std::make_shared<DataTypeUInt8>()},
+        {"grantees_list", std::make_shared<DataTypeArray>(std::make_shared<DataTypeString>())},
+        {"grantees_except", std::make_shared<DataTypeArray>(std::make_shared<DataTypeString>())},
     };
     return names_and_types;
 }

-void StorageSystemUsers::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const
+void StorageSystemUsers::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const
 {
-    context.checkAccess(AccessType::SHOW_USERS);
-    const auto & access_control = context.getAccessControlManager();
+    context->checkAccess(AccessType::SHOW_USERS);
+    const auto & access_control = context->getAccessControlManager();
     std::vector<UUID> ids = access_control.findAll<User>();

     size_t column_index = 0;
@@ -77,13 +80,19 @@ void StorageSystemUsers::fillData(MutableColumns & res_columns, const Context &
     auto & column_default_roles_list_offsets = assert_cast<ColumnArray &>(*res_columns[column_index++]).getOffsets();
     auto & column_default_roles_except = assert_cast<ColumnString &>(assert_cast<ColumnArray &>(*res_columns[column_index]).getData());
     auto & column_default_roles_except_offsets = assert_cast<ColumnArray &>(*res_columns[column_index++]).getOffsets();
+    auto & column_grantees_any = assert_cast<ColumnUInt8 &>(*res_columns[column_index++]).getData();
+    auto & column_grantees_list = assert_cast<ColumnString &>(assert_cast<ColumnArray &>(*res_columns[column_index]).getData());
+    auto & column_grantees_list_offsets = assert_cast<ColumnArray &>(*res_columns[column_index++]).getOffsets();
+    auto & column_grantees_except = assert_cast<ColumnString &>(assert_cast<ColumnArray &>(*res_columns[column_index]).getData());
+    auto & column_grantees_except_offsets = assert_cast<ColumnArray &>(*res_columns[column_index++]).getOffsets();

     auto add_row = [&](const String & name,
                        const UUID & id,
                        const String & storage_name,
                        const Authentication & authentication,
                        const AllowedClientHosts & allowed_hosts,
-                       const RolesOrUsersSet & default_roles)
+                       const RolesOrUsersSet & default_roles,
+                       const RolesOrUsersSet & grantees)
     {
         column_name.insertData(name.data(), name.length());
         column_id.push_back(id);
@@ -156,14 +165,21 @@ void StorageSystemUsers::fillData(MutableColumns & res_columns, const Context &

         auto default_roles_ast = default_roles.toASTWithNames(access_control);
         column_default_roles_all.push_back(default_roles_ast->all);
-
         for (const auto & role_name : default_roles_ast->names)
             column_default_roles_list.insertData(role_name.data(), role_name.length());
         column_default_roles_list_offsets.push_back(column_default_roles_list.size());
-
-        for (const auto & role_name : default_roles_ast->except_names)
-            column_default_roles_except.insertData(role_name.data(), role_name.length());
+        for (const auto & except_name : default_roles_ast->except_names)
+            column_default_roles_except.insertData(except_name.data(), except_name.length());
         column_default_roles_except_offsets.push_back(column_default_roles_except.size());
+
+        auto grantees_ast = grantees.toASTWithNames(access_control);
+        column_grantees_any.push_back(grantees_ast->all);
+        for (const auto & grantee_name : grantees_ast->names)
+            column_grantees_list.insertData(grantee_name.data(), grantee_name.length());
+        column_grantees_list_offsets.push_back(column_grantees_list.size());
+        for (const auto & except_name : grantees_ast->except_names)
+            column_grantees_except.insertData(except_name.data(), except_name.length());
+        column_grantees_except_offsets.push_back(column_grantees_except.size());
     };

     for (const auto & id : ids)
@@ -176,7 +192,7 @@ void StorageSystemUsers::fillData(MutableColumns & res_columns, const Context &
         if (!storage)
             continue;

-        add_row(user->getName(), id, storage->getStorageName(), user->authentication, user->allowed_client_hosts, user->default_roles);
+        add_row(user->getName(), id, storage->getStorageName(), user->authentication, user->allowed_client_hosts, user->default_roles, user->grantees);
     }
 }

diff --git a/src/Storages/System/StorageSystemUsers.h b/src/Storages/System/StorageSystemUsers.h
index 707ea94591d..3c463a23db9 100644
--- a/src/Storages/System/StorageSystemUsers.h
+++ b/src/Storages/System/StorageSystemUsers.h
@@ -18,7 +18,7 @@ public:
 protected:
     friend struct ext::shared_ptr_helper<StorageSystemUsers>;
     using IStorageSystemOneBlock::IStorageSystemOneBlock;
-    void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const override;
+    void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const override;
 };

 }

diff --git a/src/Storages/System/StorageSystemZeros.cpp b/src/Storages/System/StorageSystemZeros.cpp
index ed5ab93369a..d1456d72685 100644
--- a/src/Storages/System/StorageSystemZeros.cpp
+++ b/src/Storages/System/StorageSystemZeros.cpp
@@ -94,7 +94,7 @@ Pipe StorageSystemZeros::read(
     const Names & column_names,
     const StorageMetadataPtr & metadata_snapshot,
     SelectQueryInfo &,
-    const Context & /*context*/,
+    ContextPtr /*context*/,
     QueryProcessingStage::Enum /*processed_stage*/,
     size_t max_block_size,
     unsigned num_streams)

diff --git a/src/Storages/System/StorageSystemZeros.h b/src/Storages/System/StorageSystemZeros.h
index 04733f550c1..2ccdcf9c944 100644
--- a/src/Storages/System/StorageSystemZeros.h
+++ b/src/Storages/System/StorageSystemZeros.h
@@ -24,7 +24,7 @@ public:
         const Names & column_names,
         const StorageMetadataPtr & /*metadata_snapshot*/,
         SelectQueryInfo & query_info,
-        const Context & context,
+        ContextPtr context,
         QueryProcessingStage::Enum processed_stage,
         size_t max_block_size,
         unsigned num_streams) override;

diff --git a/src/Storages/System/StorageSystemZooKeeper.cpp b/src/Storages/System/StorageSystemZooKeeper.cpp
index 8fa5ccbd630..1a8aac3b277 100644
--- a/src/Storages/System/StorageSystemZooKeeper.cpp
+++ b/src/Storages/System/StorageSystemZooKeeper.cpp
@@ -63,7 +63,7 @@ static String pathCorrected(const String & path)
 }

-static bool extractPathImpl(const IAST & elem, Paths & res, const Context & context)
+static bool extractPathImpl(const IAST & elem, Paths & res, ContextPtr context)
 {
     const auto * function = elem.as<ASTFunction>();
     if (!function)
@@ -94,8 +94,8 @@
         {
             auto interpreter_subquery = interpretSubquery(value, context, {}, {});
             auto stream = interpreter_subquery->execute().getInputStream();
-            SizeLimits limites(context.getSettingsRef().max_rows_in_set, context.getSettingsRef().max_bytes_in_set, OverflowMode::THROW);
-            Set set(limites, true, context.getSettingsRef().transform_null_in);
+            SizeLimits limites(context->getSettingsRef().max_rows_in_set, context->getSettingsRef().max_bytes_in_set, OverflowMode::THROW);
+            Set set(limites, true, context->getSettingsRef().transform_null_in);
             set.setHeader(stream->getHeader());
stream->readPrefix(); @@ -165,7 +165,7 @@ static bool extractPathImpl(const IAST & elem, Paths & res, const Context & cont /** Retrieve from the query a condition of the form `path = 'path'`, from conjunctions in the WHERE clause. */ -static Paths extractPath(const ASTPtr & query, const Context & context) +static Paths extractPath(const ASTPtr & query, ContextPtr context) { const auto & select = query->as(); if (!select.where()) @@ -176,13 +176,13 @@ static Paths extractPath(const ASTPtr & query, const Context & context) } -void StorageSystemZooKeeper::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const +void StorageSystemZooKeeper::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const { const Paths & paths = extractPath(query_info.query, context); if (paths.empty()) throw Exception("SELECT from system.zookeeper table must contain condition like path = 'path' or path IN ('path1','path2'...) or path IN (subquery) in WHERE clause.", ErrorCodes::BAD_ARGUMENTS); - zkutil::ZooKeeperPtr zookeeper = context.getZooKeeper(); + zkutil::ZooKeeperPtr zookeeper = context->getZooKeeper(); std::unordered_set paths_corrected; for (const auto & path : paths) diff --git a/src/Storages/System/StorageSystemZooKeeper.h b/src/Storages/System/StorageSystemZooKeeper.h index 06611f61dae..226ca79facf 100644 --- a/src/Storages/System/StorageSystemZooKeeper.h +++ b/src/Storages/System/StorageSystemZooKeeper.h @@ -23,7 +23,7 @@ public: protected: using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; }; } diff --git a/src/Storages/TTLDescription.cpp b/src/Storages/TTLDescription.cpp index 41c20b2714b..95ea4f07f18 100644 --- a/src/Storages/TTLDescription.cpp +++ b/src/Storages/TTLDescription.cpp @@ -162,7 +162,7 @@ TTLDescription & TTLDescription::operator=(const TTLDescription & other) TTLDescription TTLDescription::getTTLFromAST( const ASTPtr & definition_ast, const ColumnsDescription & columns, - const Context & context, + ContextPtr context, const KeyDescription & primary_key) { TTLDescription result; @@ -289,7 +289,7 @@ TTLDescription TTLDescription::getTTLFromAST( { result.recompression_codec = CompressionCodecFactory::instance().validateCodecAndGetPreprocessedAST( - ttl_element->recompression_codec, {}, !context.getSettingsRef().allow_suspicious_codecs); + ttl_element->recompression_codec, {}, !context->getSettingsRef().allow_suspicious_codecs); } } @@ -330,7 +330,7 @@ TTLTableDescription & TTLTableDescription::operator=(const TTLTableDescription & TTLTableDescription TTLTableDescription::getTTLForTableFromAST( const ASTPtr & definition_ast, const ColumnsDescription & columns, - const Context & context, + ContextPtr context, const KeyDescription & primary_key) { TTLTableDescription result; diff --git a/src/Storages/TTLDescription.h b/src/Storages/TTLDescription.h index a2340ad6bcd..6288098b3c5 100644 --- a/src/Storages/TTLDescription.h +++ b/src/Storages/TTLDescription.h @@ -80,7 +80,7 @@ struct TTLDescription /// Parse TTL structure from definition. Able to parse both column and table /// TTLs. 
- static TTLDescription getTTLFromAST(const ASTPtr & definition_ast, const ColumnsDescription & columns, const Context & context, const KeyDescription & primary_key); + static TTLDescription getTTLFromAST(const ASTPtr & definition_ast, const ColumnsDescription & columns, ContextPtr context, const KeyDescription & primary_key); TTLDescription() = default; TTLDescription(const TTLDescription & other); @@ -117,7 +117,7 @@ struct TTLTableDescription TTLTableDescription & operator=(const TTLTableDescription & other); static TTLTableDescription getTTLForTableFromAST( - const ASTPtr & definition_ast, const ColumnsDescription & columns, const Context & context, const KeyDescription & primary_key); + const ASTPtr & definition_ast, const ColumnsDescription & columns, ContextPtr context, const KeyDescription & primary_key); }; } diff --git a/src/Storages/VirtualColumnUtils.cpp b/src/Storages/VirtualColumnUtils.cpp index 6b99dc25e37..0c6cb563525 100644 --- a/src/Storages/VirtualColumnUtils.cpp +++ b/src/Storages/VirtualColumnUtils.cpp @@ -5,13 +5,16 @@ #include #include #include +#include #include #include #include #include #include +#include +#include #include #include #include @@ -19,40 +22,51 @@ #include #include #include +#include namespace DB { +namespace ErrorCodes +{ + extern const int LOGICAL_ERROR; +} + namespace { /// Verifying that the function depends only on the specified columns -bool isValidFunction(const ASTPtr & expression, const NameSet & columns) +bool isValidFunction(const ASTPtr & expression, const std::function & is_constant) { - for (const auto & child : expression->children) - if (!isValidFunction(child, columns)) - return false; - - if (auto opt_name = IdentifierSemantic::getColumnName(expression)) - return columns.count(*opt_name); - - return true; + const auto * function = expression->as(); + if (function && functionIsInOrGlobalInOperator(function->name)) + { + // Second argument of IN can be a scalar subquery + return isValidFunction(function->arguments->children[0], is_constant); + } + else + return is_constant(expression); } /// Extract all subfunctions of the main conjunction, but depending only on the specified columns -void extractFunctions(const ASTPtr & expression, const NameSet & columns, std::vector & result) +bool extractFunctions(const ASTPtr & expression, const std::function & is_constant, std::vector & result) { const auto * function = expression->as(); - if (function && function->name == "and") + if (function && (function->name == "and" || function->name == "indexHint")) { + bool ret = true; for (const auto & child : function->arguments->children) - extractFunctions(child, columns, result); + ret &= extractFunctions(child, is_constant, result); + return ret; } - else if (isValidFunction(expression, columns)) + else if (isValidFunction(expression, is_constant)) { result.push_back(expression->clone()); + return true; } + else + return false; } /// Construct a conjunction from given functions @@ -65,6 +79,25 @@ ASTPtr buildWhereExpression(const ASTs & functions) return makeASTFunction("and", functions); } +void buildSets(const ASTPtr & expression, ExpressionAnalyzer & analyzer) +{ + const auto * func = expression->as(); + if (func && functionIsInOrGlobalInOperator(func->name)) + { + const IAST & args = *func->arguments; + const ASTPtr & arg = args.children.at(1); + if (arg->as() || arg->as()) + { + analyzer.tryMakeSetForIndexFromSubquery(arg); + } + } + else + { + for (const auto & child : expression->children) + buildSets(child, analyzer); + } +} + } namespace 
VirtualColumnUtils
@@ -76,7 +109,6 @@ void rewriteEntityInAst(ASTPtr ast, const String & column_name, const Field & va
     if (!select.with())
         select.setExpression(ASTSelectQuery::Expression::WITH, std::make_shared<ASTExpressionList>());

-
     if (func.empty())
     {
         auto literal = std::make_shared<ASTLiteral>(value);
@@ -96,30 +128,77 @@ void rewriteEntityInAst(ASTPtr ast, const String & column_name, const Field & va
     }
 }

-void filterBlockWithQuery(const ASTPtr & query, Block & block, const Context & context)
+bool prepareFilterBlockWithQuery(const ASTPtr & query, ContextPtr context, Block block, ASTPtr & expression_ast)
 {
+    if (block.rows() == 0)
+        throw Exception("Cannot prepare filter with empty block", ErrorCodes::LOGICAL_ERROR);
+
+    /// Take the first row of the input block to build a constant block
+    auto columns = block.getColumns();
+    Columns const_columns(columns.size());
+    for (size_t i = 0; i < columns.size(); ++i)
+    {
+        if (isColumnConst(*columns[i]))
+            const_columns[i] = columns[i]->cloneResized(1);
+        else
+            const_columns[i] = ColumnConst::create(columns[i]->cloneResized(1), 1);
+    }
+    block.setColumns(const_columns);
+
+    bool unmodified = true;
     const auto & select = query->as<ASTSelectQuery &>();
     if (!select.where() && !select.prewhere())
-        return;
+        return unmodified;

-    NameSet columns;
-    for (const auto & it : block.getNamesAndTypesList())
-        columns.insert(it.name);
+    ASTPtr condition_ast;
+    if (select.prewhere() && select.where())
+        condition_ast = makeASTFunction("and", select.prewhere()->clone(), select.where()->clone());
+    else
+        condition_ast = select.prewhere() ? select.prewhere()->clone() : select.where()->clone();

-    /// We will create an expression that evaluates the expressions in WHERE and PREWHERE, depending only on the existing columns.
+    // Provide input columns as constant columns to check if an expression is constant.
+    std::function<bool(const ASTPtr &)> is_constant = [&block, &context](const ASTPtr & node)
+    {
+        auto actions = std::make_shared<ActionsDAG>(block.getColumnsWithTypeAndName());
+        PreparedSets prepared_sets;
+        SubqueriesForSets subqueries_for_sets;
+        ActionsVisitor::Data visitor_data(
+            context, SizeLimits{}, 1, {}, std::move(actions), prepared_sets, subqueries_for_sets, true, true, true, false);
+        ActionsVisitor(visitor_data).visit(node);
+        actions = visitor_data.getActions();
+        auto expression_actions = std::make_shared<ExpressionActions>(actions);
+        auto block_with_constants = block;
+        expression_actions->execute(block_with_constants);
+        auto column_name = node->getColumnName();
+        return block_with_constants.has(column_name) && isColumnConst(*block_with_constants.getByName(column_name).column);
+    };
+
+    /// Create an expression that evaluates the expressions in WHERE and PREWHERE, depending only on the existing columns.
     std::vector<ASTPtr> functions;
     if (select.where())
-        extractFunctions(select.where(), columns, functions);
+        unmodified &= extractFunctions(select.where(), is_constant, functions);
     if (select.prewhere())
-        extractFunctions(select.prewhere(), columns, functions);
+        unmodified &= extractFunctions(select.prewhere(), is_constant, functions);
+
+    expression_ast = buildWhereExpression(functions);
+    return unmodified;
+}
+
+void filterBlockWithQuery(const ASTPtr & query, Block & block, ContextPtr context, ASTPtr expression_ast)
+{
+    if (block.rows() == 0)
+        return;
+
+    if (!expression_ast)
+        prepareFilterBlockWithQuery(query, context, block, expression_ast);

-    ASTPtr expression_ast = buildWhereExpression(functions);
     if (!expression_ast)
         return;

-    /// Let's analyze and calculate the expression.
+    /// Let's analyze and calculate the prepared expression.
     auto syntax_result = TreeRewriter(context).analyze(expression_ast, block.getNamesAndTypesList());
     ExpressionAnalyzer analyzer(expression_ast, syntax_result, context);
+    buildSets(expression_ast, analyzer);
     ExpressionActionsPtr actions = analyzer.getActions(false);

     Block block_with_filter = block;
@@ -132,10 +211,15 @@ void filterBlockWithQuery(const ASTPtr & query, Block & block, const Context & c
     ConstantFilterDescription constant_filter(*filter_column);

     if (constant_filter.always_true)
+    {
         return;
+    }

     if (constant_filter.always_false)
+    {
         block = block.cloneEmpty();
+        return;
+    }

     FilterDescription filter(*filter_column);

diff --git a/src/Storages/VirtualColumnUtils.h b/src/Storages/VirtualColumnUtils.h
index 445a996ab87..15783f6e79f 100644
--- a/src/Storages/VirtualColumnUtils.h
+++ b/src/Storages/VirtualColumnUtils.h
@@ -1,15 +1,16 @@
 #pragma once

-#include
-
 #include
+#include
 #include
+#include
+
+#include

 namespace DB
 {

-class Context;
 class NamesAndTypesList;

@@ -24,9 +25,14 @@ namespace VirtualColumnUtils
 /// - `WITH toUInt16(9000) as _port`.
 void rewriteEntityInAst(ASTPtr ast, const String & column_name, const Field & value, const String & func = "");

+/// Prepares `expression_ast` for filtering a block. Returns true if `expression_ast` was not trimmed,
+/// i.e. `block` provides all the columns that `expression_ast` needs; otherwise returns false.
+bool prepareFilterBlockWithQuery(const ASTPtr & query, ContextPtr context, Block block, ASTPtr & expression_ast);
+
 /// Leave in the block only the rows that fit under the WHERE clause and the PREWHERE clause of the query.
 /// Only elements of the outer conjunction are considered, depending only on the columns present in the block.
-void filterBlockWithQuery(const ASTPtr & query, Block & block, const Context & context);
+/// If `expression_ast` is passed, use it to filter block.
+void filterBlockWithQuery(const ASTPtr & query, Block & block, ContextPtr context, ASTPtr expression_ast = {});

 /// Extract from the input stream a set of `name` column values
 template <typename T>

diff --git a/src/Storages/tests/CMakeLists.txt b/src/Storages/examples/CMakeLists.txt
similarity index 94%
rename from src/Storages/tests/CMakeLists.txt
rename to src/Storages/examples/CMakeLists.txt
index b58fed9edf5..59d44829363 100644
--- a/src/Storages/tests/CMakeLists.txt
+++ b/src/Storages/examples/CMakeLists.txt
@@ -1,6 +1,3 @@
-add_executable (part_name part_name.cpp)
-target_link_libraries (part_name PRIVATE dbms)
-
 add_executable (remove_symlink_directory remove_symlink_directory.cpp)
 target_link_libraries (remove_symlink_directory PRIVATE dbms)

diff --git a/src/Storages/tests/active_parts.py b/src/Storages/examples/active_parts.py
similarity index 100%
rename from src/Storages/tests/active_parts.py
rename to src/Storages/examples/active_parts.py
diff --git a/src/Storages/tests/columns_description_fuzzer.cpp b/src/Storages/examples/columns_description_fuzzer.cpp
similarity index 100%
rename from src/Storages/tests/columns_description_fuzzer.cpp
rename to src/Storages/examples/columns_description_fuzzer.cpp
diff --git a/src/Storages/tests/get_abandonable_lock_in_all_partitions.cpp b/src/Storages/examples/get_abandonable_lock_in_all_partitions.cpp
similarity index 100%
rename from src/Storages/tests/get_abandonable_lock_in_all_partitions.cpp
rename to src/Storages/examples/get_abandonable_lock_in_all_partitions.cpp
diff --git a/src/Storages/tests/get_current_inserts_in_replicated.cpp b/src/Storages/examples/get_current_inserts_in_replicated.cpp
similarity index 100%
rename from src/Storages/tests/get_current_inserts_in_replicated.cpp
rename to src/Storages/examples/get_current_inserts_in_replicated.cpp
diff --git a/src/Storages/tests/merge_selector.cpp b/src/Storages/examples/merge_selector.cpp
similarity index 100%
rename from src/Storages/tests/merge_selector.cpp
rename to src/Storages/examples/merge_selector.cpp
diff --git a/src/Storages/tests/merge_selector2.cpp b/src/Storages/examples/merge_selector2.cpp
similarity index 100%
rename from src/Storages/tests/merge_selector2.cpp
rename to src/Storages/examples/merge_selector2.cpp
diff --git a/src/Storages/tests/mergetree_checksum_fuzzer.cpp b/src/Storages/examples/mergetree_checksum_fuzzer.cpp
similarity index 100%
rename from src/Storages/tests/mergetree_checksum_fuzzer.cpp
rename to src/Storages/examples/mergetree_checksum_fuzzer.cpp
diff --git a/src/Storages/tests/remove_symlink_directory.cpp b/src/Storages/examples/remove_symlink_directory.cpp
similarity index 100%
rename from src/Storages/tests/remove_symlink_directory.cpp
rename to src/Storages/examples/remove_symlink_directory.cpp
diff --git a/src/Storages/tests/transform_part_zk_nodes.cpp b/src/Storages/examples/transform_part_zk_nodes.cpp
similarity index 100%
rename from src/Storages/tests/transform_part_zk_nodes.cpp
rename to src/Storages/examples/transform_part_zk_nodes.cpp
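The two declarations above split virtual-column filtering into a prepare step and an apply step: `prepareFilterBlockWithQuery` extracts the usable conjuncts once, and `filterBlockWithQuery` can then reuse the prepared expression. A hedged usage sketch, using only the signatures introduced here (the `sample_block` and `part_blocks` variables are assumed inputs, not part of this patch):

    // Build the filter expression once from a representative block,
    // then apply it to every subsequent block with the same layout.
    ASTPtr expression_ast;
    VirtualColumnUtils::prepareFilterBlockWithQuery(query_info.query, context, sample_block, expression_ast);

    for (Block & block : part_blocks)
        VirtualColumnUtils::filterBlockWithQuery(query_info.query, block, context, expression_ast);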
diff --git a/src/Storages/getStructureOfRemoteTable.cpp b/src/Storages/getStructureOfRemoteTable.cpp
index de5f3924ca9..fb828b8f744 100644
--- a/src/Storages/getStructureOfRemoteTable.cpp
+++ b/src/Storages/getStructureOfRemoteTable.cpp
@@ -29,7 +29,7 @@ ColumnsDescription getStructureOfRemoteTableInShard(
     const Cluster & cluster,
     const Cluster::ShardInfo & shard_info,
     const StorageID & table_id,
-    const Context & context,
+    ContextPtr context,
     const ASTPtr & table_func_ptr)
 {
     String query;
@@ -59,7 +59,7 @@ ColumnsDescription getStructureOfRemoteTableInShard(

     ColumnsDescription res;

-    auto new_context = ClusterProxy::updateSettingsForCluster(cluster, context, context.getSettingsRef());
+    auto new_context = ClusterProxy::updateSettingsForCluster(cluster, context, context->getSettingsRef());

     /// Expect only needed columns from the result of DESC TABLE. NOTE 'comment' column is ignored for compatibility reasons.
     Block sample_block
@@ -71,7 +71,7 @@ ColumnsDescription getStructureOfRemoteTableInShard(
     };

     /// Execute remote query without restrictions (because it's not real user query, but part of implementation)
-    auto input = std::make_shared<RemoteBlockInputStream>(shard_info.pool, query, sample_block, *new_context);
+    auto input = std::make_shared<RemoteBlockInputStream>(shard_info.pool, query, sample_block, new_context);
     input->setPoolMode(PoolMode::GET_ONE);
     if (!table_func_ptr)
         input->setMainTable(table_id);
@@ -104,7 +104,7 @@ ColumnsDescription getStructureOfRemoteTableInShard(
             column.default_desc.kind = columnDefaultKindFromString(kind_name);
             String expr_str = (*default_expr)[i].get<String>();
             column.default_desc.expression = parseQuery(
-                expr_parser, expr_str.data(), expr_str.data() + expr_str.size(), "default expression", 0, context.getSettingsRef().max_parser_depth);
+                expr_parser, expr_str.data(), expr_str.data() + expr_str.size(), "default expression", 0, context->getSettingsRef().max_parser_depth);
         }

         res.add(column);
@@ -117,7 +117,7 @@ ColumnsDescription getStructureOfRemoteTableInShard(
 ColumnsDescription getStructureOfRemoteTable(
     const Cluster & cluster,
     const StorageID & table_id,
-    const Context & context,
+    ContextPtr context,
     const ASTPtr & table_func_ptr)
 {
     const auto & shards_info = cluster.getShardsInfo();

diff --git a/src/Storages/getStructureOfRemoteTable.h b/src/Storages/getStructureOfRemoteTable.h
index af418144cb0..3f77236c756 100644
--- a/src/Storages/getStructureOfRemoteTable.h
+++ b/src/Storages/getStructureOfRemoteTable.h
@@ -16,7 +16,7 @@ struct StorageID;
 ColumnsDescription getStructureOfRemoteTable(
     const Cluster & cluster,
     const StorageID & table_id,
-    const Context & context,
+    ContextPtr context,
     const ASTPtr & table_func_ptr = nullptr);

 }

diff --git a/src/Storages/registerStorages.cpp b/src/Storages/registerStorages.cpp
index 0022ee6bd4f..7100afa6909 100644
--- a/src/Storages/registerStorages.cpp
+++ b/src/Storages/registerStorages.cpp
@@ -62,6 +62,10 @@ void registerStorageEmbeddedRocksDB(StorageFactory & factory);
 void registerStoragePostgreSQL(StorageFactory & factory);
 #endif

+#if USE_MYSQL || USE_LIBPQXX
+void registerStorageExternalDistributed(StorageFactory & factory);
+#endif
+
 void registerStorages()
 {
     auto & factory = StorageFactory::instance();
@@ -118,6 +122,10 @@ void registerStorages()
     #if USE_LIBPQXX
     registerStoragePostgreSQL(factory);
     #endif
+
+    #if USE_MYSQL || USE_LIBPQXX
+    registerStorageExternalDistributed(factory);
+    #endif
 }

 }

diff --git a/src/Storages/tests/gtest_SplitTokenExtractor.cpp b/src/Storages/tests/gtest_SplitTokenExtractor.cpp
index b5a26c9cd8e..ee6a55f50b8 100644
--- a/src/Storages/tests/gtest_SplitTokenExtractor.cpp
+++ b/src/Storages/tests/gtest_SplitTokenExtractor.cpp
@@ -61,12 +61,12 @@ TEST_P(SplitTokenExtractorTest, next)
     for (const auto & expected_token : param.tokens)
     {
         SCOPED_TRACE(++i);
-        ASSERT_TRUE(token_extractor.next(data->data(), data->size(), &pos, &token_start, &token_len));
+        ASSERT_TRUE(token_extractor.nextInColumn(data->data(), data->size(), &pos, &token_start, &token_len));
         EXPECT_EQ(expected_token, std::string_view(data->data() + token_start, token_len))
<< " token_start:" << token_start << " token_len: " << token_len; } - ASSERT_FALSE(token_extractor.next(data->data(), data->size(), &pos, &token_start, &token_len)) + ASSERT_FALSE(token_extractor.nextInColumn(data->data(), data->size(), &pos, &token_start, &token_len)) << "\n\t=> \"" << param.source.substr(token_start, token_len) << "\"" << "\n\t" << token_start << ", " << token_len << ", " << pos << ", " << data->size(); } diff --git a/src/Storages/tests/gtest_background_executor.cpp b/src/Storages/tests/gtest_background_executor.cpp index 0ddf2d9ea2a..283cdf3fbf8 100644 --- a/src/Storages/tests/gtest_background_executor.cpp +++ b/src/Storages/tests/gtest_background_executor.cpp @@ -17,9 +17,9 @@ static std::atomic counter{0}; class TestJobExecutor : public IBackgroundJobExecutor { public: - explicit TestJobExecutor(Context & context) + explicit TestJobExecutor(ContextPtr local_context) :IBackgroundJobExecutor( - context, + local_context, BackgroundTaskSchedulingSettings{}, {PoolConfig{PoolType::MERGE_MUTATE, 4, CurrentMetrics::BackgroundPoolTask}}) {} @@ -43,7 +43,7 @@ TEST(BackgroundExecutor, TestMetric) const auto & context_holder = getContext(); std::vector executors; for (size_t i = 0; i < 100; ++i) - executors.emplace_back(std::make_unique(const_cast(context_holder.context))); + executors.emplace_back(std::make_unique(context_holder.context)); for (size_t i = 0; i < 100; ++i) executors[i]->start(); diff --git a/src/Storages/tests/gtest_storage_log.cpp b/src/Storages/tests/gtest_storage_log.cpp index cbb894c7420..41c1b6ac75a 100644 --- a/src/Storages/tests/gtest_storage_log.cpp +++ b/src/Storages/tests/gtest_storage_log.cpp @@ -19,7 +19,7 @@ #include #include -#if !__clang__ +#if !defined(__clang__) # pragma GCC diagnostic push # pragma GCC diagnostic ignored "-Wsuggest-override" #endif @@ -70,7 +70,7 @@ using DiskImplementations = testing::Types; TYPED_TEST_SUITE(StorageLogTest, DiskImplementations); // Returns data written to table in Values format. -std::string writeData(int rows, DB::StoragePtr & table, const DB::Context & context) +std::string writeData(int rows, DB::StoragePtr & table, const DB::ContextPtr context) { using namespace DB; auto metadata_snapshot = table->getInMemoryMetadataPtr(); @@ -108,7 +108,7 @@ std::string writeData(int rows, DB::StoragePtr & table, const DB::Context & cont } // Returns all table data in Values format. 
-std::string readData(DB::StoragePtr & table, const DB::Context & context) +std::string readData(DB::StoragePtr & table, const DB::ContextPtr context) { using namespace DB; auto metadata_snapshot = table->getInMemoryMetadataPtr(); diff --git a/src/Storages/tests/gtest_transform_query_for_external_database.cpp b/src/Storages/tests/gtest_transform_query_for_external_database.cpp index 99dfc55ed69..d774fd144cf 100644 --- a/src/Storages/tests/gtest_transform_query_for_external_database.cpp +++ b/src/Storages/tests/gtest_transform_query_for_external_database.cpp @@ -22,16 +22,7 @@ struct State { State(const State&) = delete; - Context context; - NamesAndTypesList columns{ - {"column", std::make_shared()}, - {"apply_id", std::make_shared()}, - {"apply_type", std::make_shared()}, - {"apply_status", std::make_shared()}, - {"create_time", std::make_shared()}, - {"field", std::make_shared()}, - {"value", std::make_shared()}, - }; + ContextPtr context; static const State & instance() { @@ -39,27 +30,83 @@ struct State return state; } + const NamesAndTypesList & getColumns() const + { + return tables[0].columns; + } + + std::vector getTables(size_t num = 0) const + { + std::vector res; + for (size_t i = 0; i < std::min(num, tables.size()); ++i) + res.push_back(tables[i]); + return res; + } + private: + + static DatabaseAndTableWithAlias createDBAndTable(String table_name) + { + DatabaseAndTableWithAlias res; + res.database = "test"; + res.table = table_name; + return res; + } + + const std::vector tables{ + TableWithColumnNamesAndTypes( + createDBAndTable("table"), + { + {"column", std::make_shared()}, + {"apply_id", std::make_shared()}, + {"apply_type", std::make_shared()}, + {"apply_status", std::make_shared()}, + {"create_time", std::make_shared()}, + {"field", std::make_shared()}, + {"value", std::make_shared()}, + }), + TableWithColumnNamesAndTypes( + createDBAndTable("table2"), + { + {"num", std::make_shared()}, + {"attr", std::make_shared()}, + }), + }; + explicit State() - : context(getContext().context) + : context(Context::createCopy(getContext().context)) { tryRegisterFunctions(); DatabasePtr database = std::make_shared("test", context); - database->attachTable("table", StorageMemory::create(StorageID("test", "table"), ColumnsDescription{columns}, ConstraintsDescription{})); - DatabaseCatalog::instance().attachDatabase("test", database); - context.setCurrentDatabase("test"); + + for (const auto & tab : tables) + { + const auto & table_name = tab.table.table; + const auto & db_name = tab.table.database; + database->attachTable( + table_name, + StorageMemory::create(StorageID(db_name, table_name), ColumnsDescription{getColumns()}, ConstraintsDescription{})); + } + DatabaseCatalog::instance().attachDatabase(database->getDatabaseName(), database); + context->setCurrentDatabase("test"); } }; - -static void check(const std::string & query, const std::string & expected, const Context & context, const NamesAndTypesList & columns) +static void check( + const State & state, + size_t table_num, + const std::string & query, + const std::string & expected) { ParserSelectQuery parser; ASTPtr ast = parseQuery(parser, query, 1000, 1000); SelectQueryInfo query_info; - query_info.syntax_analyzer_result = TreeRewriter(context).analyzeSelect(ast, columns); + SelectQueryOptions select_options; + query_info.syntax_analyzer_result + = TreeRewriter(state.context).analyzeSelect(ast, state.getColumns(), select_options, state.getTables(table_num)); query_info.query = ast; - std::string transformed_query = 
transformQueryForExternalDatabase(query_info, columns, IdentifierQuotingStyle::DoubleQuotes, "test", "table", context); + std::string transformed_query = transformQueryForExternalDatabase( + query_info, state.getColumns(), IdentifierQuotingStyle::DoubleQuotes, "test", "table", state.context); EXPECT_EQ(transformed_query, expected); } @@ -69,82 +116,93 @@ TEST(TransformQueryForExternalDatabase, InWithSingleElement) { const State & state = State::instance(); - check("SELECT column FROM test.table WHERE 1 IN (1)", - R"(SELECT "column" FROM "test"."table" WHERE 1)", - state.context, state.columns); - check("SELECT column FROM test.table WHERE column IN (1, 2)", - R"(SELECT "column" FROM "test"."table" WHERE "column" IN (1, 2))", - state.context, state.columns); - check("SELECT column FROM test.table WHERE column NOT IN ('hello', 'world')", - R"(SELECT "column" FROM "test"."table" WHERE "column" NOT IN ('hello', 'world'))", - state.context, state.columns); + check(state, 1, + "SELECT column FROM test.table WHERE 1 IN (1)", + R"(SELECT "column" FROM "test"."table" WHERE 1)"); + check(state, 1, + "SELECT column FROM test.table WHERE column IN (1, 2)", + R"(SELECT "column" FROM "test"."table" WHERE "column" IN (1, 2))"); + check(state, 1, + "SELECT column FROM test.table WHERE column NOT IN ('hello', 'world')", + R"(SELECT "column" FROM "test"."table" WHERE "column" NOT IN ('hello', 'world'))"); } TEST(TransformQueryForExternalDatabase, InWithTable) { const State & state = State::instance(); - check("SELECT column FROM test.table WHERE 1 IN external_table", - R"(SELECT "column" FROM "test"."table")", - state.context, state.columns); - check("SELECT column FROM test.table WHERE 1 IN (x)", - R"(SELECT "column" FROM "test"."table")", - state.context, state.columns); - check("SELECT column, field, value FROM test.table WHERE column IN (field, value)", - R"(SELECT "column", "field", "value" FROM "test"."table" WHERE "column" IN ("field", "value"))", - state.context, state.columns); - check("SELECT column FROM test.table WHERE column NOT IN hello AND column = 123", - R"(SELECT "column" FROM "test"."table" WHERE ("column" = 123))", - state.context, state.columns); + check(state, 1, + "SELECT column FROM test.table WHERE 1 IN external_table", + R"(SELECT "column" FROM "test"."table")"); + check(state, 1, + "SELECT column FROM test.table WHERE 1 IN (x)", + R"(SELECT "column" FROM "test"."table")"); + check(state, 1, + "SELECT column, field, value FROM test.table WHERE column IN (field, value)", + R"(SELECT "column", "field", "value" FROM "test"."table" WHERE "column" IN ("field", "value"))"); + check(state, 1, + "SELECT column FROM test.table WHERE column NOT IN hello AND column = 123", + R"(SELECT "column" FROM "test"."table" WHERE "column" = 123)"); } TEST(TransformQueryForExternalDatabase, Like) { const State & state = State::instance(); - check("SELECT column FROM test.table WHERE column LIKE '%hello%'", - R"(SELECT "column" FROM "test"."table" WHERE "column" LIKE '%hello%')", - state.context, state.columns); - check("SELECT column FROM test.table WHERE column NOT LIKE 'w%rld'", - R"(SELECT "column" FROM "test"."table" WHERE "column" NOT LIKE 'w%rld')", - state.context, state.columns); + check(state, 1, + "SELECT column FROM test.table WHERE column LIKE '%hello%'", + R"(SELECT "column" FROM "test"."table" WHERE "column" LIKE '%hello%')"); + check(state, 1, + "SELECT column FROM test.table WHERE column NOT LIKE 'w%rld'", + R"(SELECT "column" FROM "test"."table" WHERE "column" NOT LIKE 'w%rld')"); } 
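These tests drive the same entry point that the external-database storages call. A hedged sketch of a direct invocation, using the signature introduced later in this patch (the local variables are assumed, not part of the patch):

    // Rewrite a SELECT for an external engine: only WHERE conjuncts that are
    // expressible in plain SQL over known columns survive the transformation.
    String external_query = transformQueryForExternalDatabase(
        query_info,
        available_columns,                     // NamesAndTypesList of the remote table
        IdentifierQuotingStyle::DoubleQuotes,  // quoting style for the target dialect
        "test",                                // remote database name
        "table",                               // remote table name
        context);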
TEST(TransformQueryForExternalDatabase, Substring) { const State & state = State::instance(); - check("SELECT column FROM test.table WHERE left(column, 10) = RIGHT(column, 10) AND SUBSTRING(column FROM 1 FOR 2) = 'Hello'", - R"(SELECT "column" FROM "test"."table")", - state.context, state.columns); + check(state, 1, + "SELECT column FROM test.table WHERE left(column, 10) = RIGHT(column, 10) AND SUBSTRING(column FROM 1 FOR 2) = 'Hello'", + R"(SELECT "column" FROM "test"."table")"); } TEST(TransformQueryForExternalDatabase, MultipleAndSubqueries) { const State & state = State::instance(); - check("SELECT column FROM test.table WHERE 1 = 1 AND toString(column) = '42' AND column = 42 AND left(column, 10) = RIGHT(column, 10) AND column IN (1, 42) AND SUBSTRING(column FROM 1 FOR 2) = 'Hello' AND column != 4", - R"(SELECT "column" FROM "test"."table" WHERE 1 AND ("column" = 42) AND ("column" IN (1, 42)) AND ("column" != 4))", - state.context, state.columns); - check("SELECT column FROM test.table WHERE toString(column) = '42' AND left(column, 10) = RIGHT(column, 10) AND column = 42", - R"(SELECT "column" FROM "test"."table" WHERE ("column" = 42))", - state.context, state.columns); + check(state, 1, + "SELECT column FROM test.table WHERE 1 = 1 AND toString(column) = '42' AND column = 42 AND left(column, 10) = RIGHT(column, 10) AND column IN (1, 42) AND SUBSTRING(column FROM 1 FOR 2) = 'Hello' AND column != 4", + R"(SELECT "column" FROM "test"."table" WHERE 1 AND ("column" = 42) AND ("column" IN (1, 42)) AND ("column" != 4))"); + check(state, 1, + "SELECT column FROM test.table WHERE toString(column) = '42' AND left(column, 10) = RIGHT(column, 10) AND column = 42", + R"(SELECT "column" FROM "test"."table" WHERE ("column" = 42))"); } TEST(TransformQueryForExternalDatabase, Issue7245) { const State & state = State::instance(); - check("select apply_id from test.table where apply_type = 2 and create_time > addDays(toDateTime('2019-01-01 01:02:03'),-7) and apply_status in (3,4)", - R"(SELECT "apply_id", "apply_type", "apply_status", "create_time" FROM "test"."table" WHERE ("apply_type" = 2) AND ("create_time" > '2018-12-25 01:02:03') AND ("apply_status" IN (3, 4)))", - state.context, state.columns); + check(state, 1, + "SELECT apply_id FROM test.table WHERE apply_type = 2 AND create_time > addDays(toDateTime('2019-01-01 01:02:03'),-7) AND apply_status IN (3,4)", + R"(SELECT "apply_id", "apply_type", "apply_status", "create_time" FROM "test"."table" WHERE ("apply_type" = 2) AND ("create_time" > '2018-12-25 01:02:03') AND ("apply_status" IN (3, 4)))"); } TEST(TransformQueryForExternalDatabase, Aliases) { const State & state = State::instance(); - check("SELECT field AS value, field AS display WHERE field NOT IN ('') AND display LIKE '%test%'", - R"(SELECT "field" FROM "test"."table" WHERE ("field" NOT IN ('')) AND ("field" LIKE '%test%'))", - state.context, state.columns); + check(state, 1, + "SELECT field AS value, field AS display WHERE field NOT IN ('') AND display LIKE '%test%'", + R"(SELECT "field" FROM "test"."table" WHERE ("field" NOT IN ('')) AND ("field" LIKE '%test%'))"); +} + +TEST(TransformQueryForExternalDatabase, ForeignColumnInWhere) +{ + const State & state = State::instance(); + + check(state, 2, + "SELECT column FROM test.table " + "JOIN test.table2 AS table2 ON (test.table.apply_id = table2.num) " + "WHERE column > 2 AND (apply_id = 1 OR table2.num = 1) AND table2.attr != ''", + R"(SELECT "column", "apply_id" FROM "test"."table" WHERE ("column" > 2) AND ("apply_id" = 1))"); } diff 
--git a/src/Storages/tests/part_name.cpp b/src/Storages/tests/part_name.cpp
deleted file mode 100644
index 79c5578a8ca..00000000000
--- a/src/Storages/tests/part_name.cpp
+++ /dev/null
@@ -1,21 +0,0 @@
-#include
-#include
-#include
-
-
-int main(int, char **)
-{
-    DayNum today = DateLUT::instance().toDayNum(time(nullptr));
-
-    for (DayNum date = today; DayNum(date + 10) > today; --date)
-    {
-        DB::MergeTreePartInfo part_info("partition", 0, 0, 0);
-        std::string name = part_info.getPartNameV0(date, date);
-        std::cerr << name << '\n';
-
-        time_t time = DateLUT::instance().YYYYMMDDToDate(DB::parse(name));
-        std::cerr << LocalDateTime(time) << '\n';
-    }
-
-    return 0;
-}

diff --git a/src/Storages/transformQueryForExternalDatabase.cpp b/src/Storages/transformQueryForExternalDatabase.cpp
index 42daf8cfc26..b3fe788d874 100644
--- a/src/Storages/transformQueryForExternalDatabase.cpp
+++ b/src/Storages/transformQueryForExternalDatabase.cpp
@@ -63,7 +63,7 @@ public:
             const IColumn & inner_column = assert_cast<const ColumnConst &>(*result.column).getDataColumn();

             WriteBufferFromOwnString out;
-            result.type->serializeAsText(inner_column, 0, out, FormatSettings());
+            result.type->getDefaultSerialization()->serializeText(inner_column, 0, out, FormatSettings());
             node = std::make_shared<ASTLiteral>(out.str());
         }
     }
@@ -88,7 +88,7 @@ public:
     }
 };

-void replaceConstantExpressions(ASTPtr & node, const Context & context, const NamesAndTypesList & all_columns)
+void replaceConstantExpressions(ASTPtr & node, ContextPtr context, const NamesAndTypesList & all_columns)
 {
     auto syntax_result = TreeRewriter(context).analyze(node, all_columns);
     Block block_with_constants = KeyCondition::getBlockWithConstants(node, syntax_result, context);
@@ -160,8 +160,78 @@ bool isCompatible(const IAST & node)
     return node.as<ASTIdentifier>();
 }

+bool removeUnknownSubexpressions(ASTPtr & node, const NameSet & known_names);
+
+void removeUnknownChildren(ASTs & children, const NameSet & known_names)
+{
+
+    ASTs new_children;
+    for (auto & child : children)
+    {
+        bool leave_child = removeUnknownSubexpressions(child, known_names);
+        if (leave_child)
+            new_children.push_back(child);
+    }
+    children = std::move(new_children);
+}
+
+/// return `true` if we should leave node in tree
+bool removeUnknownSubexpressions(ASTPtr & node, const NameSet & known_names)
+{
+    if (const auto * ident = node->as<ASTIdentifier>())
+        return known_names.contains(ident->name());
+
+    if (node->as<ASTLiteral>() != nullptr)
+        return true;
+
+    auto * func = node->as<ASTFunction>();
+    if (func && (func->name == "and" || func->name == "or"))
+    {
+        removeUnknownChildren(func->arguments->children, known_names);
+        /// all children removed, current node can be removed too
+        if (func->arguments->children.size() == 1)
+        {
+            /// if only one child left, pull it on top level
+            node = func->arguments->children[0];
+            return true;
+        }
+        return !func->arguments->children.empty();
+    }
+
+    bool leave_child = true;
+    for (auto & child : node->children)
+    {
+        leave_child = leave_child && removeUnknownSubexpressions(child, known_names);
+        if (!leave_child)
+            break;
+    }
+    return leave_child;
+}
+
+// When a query references an external table, such as a table from a MySQL database,
+// the corresponding table storage has to execute the relevant part of the query. We
+// send the query to the storage as an AST. Before that, we have to remove from `WHERE`
+// the conditions that reference other tables, so that the external engine is not
+// confused by the unknown columns.
+bool removeUnknownSubexpressionsFromWhere(ASTPtr & node, const NamesAndTypesList & available_columns)
+{
+    if (!node)
+        return false;
+
+    NameSet known_names;
+    for (const auto & col : available_columns)
+        known_names.insert(col.name);
+
+    if (auto * expr_list = node->as<ASTExpressionList>(); expr_list && !expr_list->children.empty())
+    {
+        /// traverse expression list on top level
+        removeUnknownChildren(expr_list->children, known_names);
+        return !expr_list->children.empty();
+    }
+    return removeUnknownSubexpressions(node, known_names);
+}
+
+}

 String transformQueryForExternalDatabase(
     const SelectQueryInfo & query_info,
@@ -169,7 +239,7 @@ String transformQueryForExternalDatabase(
     IdentifierQuotingStyle identifier_quoting_style,
     const String & database,
     const String & table,
-    const Context & context)
+    ContextPtr context)
 {
     auto clone_query = query_info.query->clone();
     const Names used_columns = query_info.syntax_analyzer_result->requiredSourceColumns();
@@ -191,7 +261,8 @@ String transformQueryForExternalDatabase(
      */
     ASTPtr original_where = clone_query->as<ASTSelectQuery &>().where();
-    if (original_where)
+    bool where_has_known_columns = removeUnknownSubexpressionsFromWhere(original_where, available_columns);
+    if (original_where && where_has_known_columns)
     {
         replaceConstantExpressions(original_where, context, available_columns);

diff --git a/src/Storages/transformQueryForExternalDatabase.h b/src/Storages/transformQueryForExternalDatabase.h
index c760c628970..215afab8b30 100644
--- a/src/Storages/transformQueryForExternalDatabase.h
+++ b/src/Storages/transformQueryForExternalDatabase.h
@@ -4,13 +4,13 @@
 #include
 #include
 #include
+#include

 namespace DB
 {

 class IAST;
-class Context;

 /** For given ClickHouse query,
   * creates another query in a form of
@@ -29,6 +29,6 @@ String transformQueryForExternalDatabase(
     IdentifierQuotingStyle identifier_quoting_style,
     const String & database,
     const String & table,
-    const Context & context);
+    ContextPtr context);

 }

diff --git a/src/Storages/ya.make b/src/Storages/ya.make
index e3e1807c566..ba294b05857 100644
--- a/src/Storages/ya.make
+++ b/src/Storages/ya.make
@@ -57,6 +57,7 @@ SRCS(
     MergeTree/MergeTreeDataPartWriterWide.cpp
     MergeTree/MergeTreeDataSelectExecutor.cpp
     MergeTree/MergeTreeDataWriter.cpp
+    MergeTree/MergeTreeDeduplicationLog.cpp
     MergeTree/MergeTreeIndexAggregatorBloomFilter.cpp
     MergeTree/MergeTreeIndexBloomFilter.cpp
     MergeTree/MergeTreeIndexConditionBloomFilter.cpp
@@ -117,6 +118,7 @@ SRCS(
     StorageBuffer.cpp
     StorageDictionary.cpp
     StorageDistributed.cpp
+    StorageExternalDistributed.cpp
     StorageFactory.cpp
     StorageFile.cpp
     StorageGenerateRandom.cpp

diff --git a/src/Storages/ya.make.in b/src/Storages/ya.make.in
index 2e8727b53fd..f7efe5870d3 100644
--- a/src/Storages/ya.make.in
+++ b/src/Storages/ya.make.in
@@ -10,7 +10,7 @@ PEERDIR(

 SRCS(
-
+
 )

 END()
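The pruning introduced in transformQueryForExternalDatabase.cpp above is easiest to see on the ForeignColumnInWhere test earlier in this patch. For a table joined with `table2`, conjuncts over unknown (foreign) columns are dropped, and an `and`/`or` node left with a single child collapses into that child:

    // Input WHERE (only `column` and `apply_id` belong to the pushed-down table):
    //     column > 2 AND (apply_id = 1 OR table2.num = 1) AND table2.attr != ''
    // After removeUnknownSubexpressionsFromWhere:
    //     ("column" > 2) AND ("apply_id" = 1)
    // - `table2.attr != ''` is dropped as a conjunct over an unknown column;
    // - inside the OR, `table2.num = 1` is dropped and the single remaining
    //   child `apply_id = 1` is pulled up to replace the OR node.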
diff --git a/src/TableFunctions/CMakeLists.txt b/src/TableFunctions/CMakeLists.txt
index 8e9eedadf53..576d1ea23ff 100644
--- a/src/TableFunctions/CMakeLists.txt
+++ b/src/TableFunctions/CMakeLists.txt
@@ -1,4 +1,4 @@
-include(${ClickHouse_SOURCE_DIR}/cmake/dbms_glob_sources.cmake)
+include("${ClickHouse_SOURCE_DIR}/cmake/dbms_glob_sources.cmake")

 add_headers_and_sources(clickhouse_table_functions .)
 list(REMOVE_ITEM clickhouse_table_functions_sources ITableFunction.cpp TableFunctionFactory.cpp)

diff --git a/src/TableFunctions/ITableFunction.cpp b/src/TableFunctions/ITableFunction.cpp
index 804a5b232ec..218d86fe4a2 100644
--- a/src/TableFunctions/ITableFunction.cpp
+++ b/src/TableFunctions/ITableFunction.cpp
@@ -14,18 +14,26 @@ namespace ProfileEvents
 namespace DB
 {

-StoragePtr ITableFunction::execute(const ASTPtr & ast_function, const Context & context, const std::string & table_name,
+StoragePtr ITableFunction::execute(const ASTPtr & ast_function, ContextPtr context, const std::string & table_name,
     ColumnsDescription cached_columns) const
 {
     ProfileEvents::increment(ProfileEvents::TableFunctionExecute);
-    context.checkAccess(AccessType::CREATE_TEMPORARY_TABLE | StorageFactory::instance().getSourceAccessType(getStorageTypeName()));
+    context->checkAccess(AccessType::CREATE_TEMPORARY_TABLE | StorageFactory::instance().getSourceAccessType(getStorageTypeName()));

-    if (cached_columns.empty() || (hasStaticStructure() && cached_columns == getActualTableStructure(context)))
+    if (cached_columns.empty())
         return executeImpl(ast_function, context, table_name, std::move(cached_columns));

-    auto get_storage = [=, tf = shared_from_this()]() -> StoragePtr
+    /// We have table structure, so it's CREATE AS table_function().
+    /// We should use global context here because there will be no query context on server startup
+    /// and because storage lifetime is longer than query context lifetime.
+    auto global_context = context->getGlobalContext();
+    if (hasStaticStructure() && cached_columns == getActualTableStructure(context))
+        return executeImpl(ast_function, global_context, table_name, std::move(cached_columns));
+
+    auto this_table_function = shared_from_this();
+    auto get_storage = [=]() -> StoragePtr
     {
-        return tf->executeImpl(ast_function, context, table_name, cached_columns);
+        return this_table_function->executeImpl(ast_function, global_context, table_name, cached_columns);
     };

     /// It will request actual table structure and create underlying storage lazily

diff --git a/src/TableFunctions/ITableFunction.h b/src/TableFunctions/ITableFunction.h
index 4a73adbdf80..56147ffd598 100644
--- a/src/TableFunctions/ITableFunction.h
+++ b/src/TableFunctions/ITableFunction.h
@@ -47,18 +47,20 @@ public:
     /// Returns false if storage returned by table function supports type conversion (e.g. StorageDistributed)
     virtual bool needStructureConversion() const { return true; }

-    virtual void parseArguments(const ASTPtr & /*ast_function*/, const Context & /*context*/) {}
+    virtual void parseArguments(const ASTPtr & /*ast_function*/, ContextPtr /*context*/) {}

     /// Returns actual table structure probably requested from remote server, may fail
-    virtual ColumnsDescription getActualTableStructure(const Context & /*context*/) const = 0;
+    virtual ColumnsDescription getActualTableStructure(ContextPtr /*context*/) const = 0;

     /// Create storage according to the query.
- StoragePtr execute(const ASTPtr & ast_function, const Context & context, const std::string & table_name, ColumnsDescription cached_columns_ = {}) const; + StoragePtr + execute(const ASTPtr & ast_function, ContextPtr context, const std::string & table_name, ColumnsDescription cached_columns_ = {}) const; - virtual ~ITableFunction() {} + virtual ~ITableFunction() = default; private: - virtual StoragePtr executeImpl(const ASTPtr & ast_function, const Context & context, const std::string & table_name, ColumnsDescription cached_columns) const = 0; + virtual StoragePtr executeImpl( + const ASTPtr & ast_function, ContextPtr context, const std::string & table_name, ColumnsDescription cached_columns) const = 0; virtual const char * getStorageTypeName() const = 0; }; diff --git a/src/TableFunctions/ITableFunctionFileLike.cpp b/src/TableFunctions/ITableFunctionFileLike.cpp index 1349c166474..44a917a0f00 100644 --- a/src/TableFunctions/ITableFunctionFileLike.cpp +++ b/src/TableFunctions/ITableFunctionFileLike.cpp @@ -26,7 +26,7 @@ namespace ErrorCodes extern const int BAD_ARGUMENTS; } -void ITableFunctionFileLike::parseArguments(const ASTPtr & ast_function, const Context & context) +void ITableFunctionFileLike::parseArguments(const ASTPtr & ast_function, ContextPtr context) { /// Parse args ASTs & args_func = ast_function->children; @@ -64,20 +64,20 @@ void ITableFunctionFileLike::parseArguments(const ASTPtr & ast_function, const C compression_method = args[3]->as().value.safeGet(); } -StoragePtr ITableFunctionFileLike::executeImpl(const ASTPtr & /*ast_function*/, const Context & context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const +StoragePtr ITableFunctionFileLike::executeImpl(const ASTPtr & /*ast_function*/, ContextPtr context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const { auto columns = getActualTableStructure(context); - StoragePtr storage = getStorage(filename, format, columns, const_cast(context), table_name, compression_method); + StoragePtr storage = getStorage(filename, format, columns, context, table_name, compression_method); storage->startup(); return storage; } -ColumnsDescription ITableFunctionFileLike::getActualTableStructure(const Context & context) const +ColumnsDescription ITableFunctionFileLike::getActualTableStructure(ContextPtr context) const { if (structure.empty()) { assert(getName() == "file" && format == "Distributed"); - Strings paths = StorageFile::getPathsList(filename, context.getUserFilesPath(), context); + Strings paths = StorageFile::getPathsList(filename, context->getUserFilesPath(), context); if (paths.empty()) throw Exception("Cannot get table structure from file, because no files match specified name", ErrorCodes::INCORRECT_FILE_NAME); auto read_stream = StorageDistributedDirectoryMonitor::createStreamFromFile(paths[0]); diff --git a/src/TableFunctions/ITableFunctionFileLike.h b/src/TableFunctions/ITableFunctionFileLike.h index f1c648ac0aa..7c96ce610b3 100644 --- a/src/TableFunctions/ITableFunctionFileLike.h +++ b/src/TableFunctions/ITableFunctionFileLike.h @@ -13,15 +13,15 @@ class Context; class ITableFunctionFileLike : public ITableFunction { private: - StoragePtr executeImpl(const ASTPtr & ast_function, const Context & context, const std::string & table_name, ColumnsDescription cached_columns) const override; + StoragePtr executeImpl(const ASTPtr & ast_function, ContextPtr context, const std::string & table_name, ColumnsDescription cached_columns) const override; virtual StoragePtr getStorage( 
- const String & source, const String & format, const ColumnsDescription & columns, Context & global_context, + const String & source, const String & format, const ColumnsDescription & columns, ContextPtr global_context, const std::string & table_name, const String & compression_method) const = 0; - ColumnsDescription getActualTableStructure(const Context & context) const override; + ColumnsDescription getActualTableStructure(ContextPtr context) const override; - void parseArguments(const ASTPtr & ast_function, const Context & context) override; + void parseArguments(const ASTPtr & ast_function, ContextPtr context) override; bool hasStaticStructure() const override { return true; } diff --git a/src/TableFunctions/ITableFunctionXDBC.cpp b/src/TableFunctions/ITableFunctionXDBC.cpp index e04a86b5abf..51431a1e3a6 100644 --- a/src/TableFunctions/ITableFunctionXDBC.cpp +++ b/src/TableFunctions/ITableFunctionXDBC.cpp @@ -28,7 +28,7 @@ namespace ErrorCodes extern const int LOGICAL_ERROR; } -void ITableFunctionXDBC::parseArguments(const ASTPtr & ast_function, const Context & context) +void ITableFunctionXDBC::parseArguments(const ASTPtr & ast_function, ContextPtr context) { const auto & args_func = ast_function->as(); @@ -55,15 +55,21 @@ void ITableFunctionXDBC::parseArguments(const ASTPtr & ast_function, const Conte connection_string = args[0]->as().value.safeGet(); remote_table_name = args[1]->as().value.safeGet(); } - - /// Have to const_cast, because bridges store their commands inside context - helper = createBridgeHelper(const_cast(context), context.getSettingsRef().http_receive_timeout.value, connection_string); - helper->startBridgeSync(); } -ColumnsDescription ITableFunctionXDBC::getActualTableStructure(const Context & context) const +void ITableFunctionXDBC::startBridgeIfNot(ContextPtr context) const { - assert(helper); + if (!helper) + { + /// Have to const_cast, because bridges store their commands inside context + helper = createBridgeHelper(context, context->getSettingsRef().http_receive_timeout.value, connection_string); + helper->startBridgeSync(); + } +} + +ColumnsDescription ITableFunctionXDBC::getActualTableStructure(ContextPtr context) const +{ + startBridgeIfNot(context); /* Infer external table structure */ Poco::URI columns_info_uri = helper->getColumnsInfoURI(); @@ -72,7 +78,7 @@ ColumnsDescription ITableFunctionXDBC::getActualTableStructure(const Context & c columns_info_uri.addQueryParameter("schema", schema_name); columns_info_uri.addQueryParameter("table", remote_table_name); - const auto use_nulls = context.getSettingsRef().external_table_functions_use_nulls; + const auto use_nulls = context->getSettingsRef().external_table_functions_use_nulls; columns_info_uri.addQueryParameter("external_table_functions_use_nulls", Poco::NumberFormatter::format(use_nulls)); @@ -85,9 +91,9 @@ ColumnsDescription ITableFunctionXDBC::getActualTableStructure(const Context & c return ColumnsDescription{columns}; } -StoragePtr ITableFunctionXDBC::executeImpl(const ASTPtr & /*ast_function*/, const Context & context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const +StoragePtr ITableFunctionXDBC::executeImpl(const ASTPtr & /*ast_function*/, ContextPtr context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const { - assert(helper); + startBridgeIfNot(context); auto columns = getActualTableStructure(context); auto result = std::make_shared(StorageID(getDatabaseName(), table_name), schema_name, remote_table_name, columns, context, helper); 
result->startup(); diff --git a/src/TableFunctions/ITableFunctionXDBC.h b/src/TableFunctions/ITableFunctionXDBC.h index fb0a0fd1185..4f6656773b1 100644 --- a/src/TableFunctions/ITableFunctionXDBC.h +++ b/src/TableFunctions/ITableFunctionXDBC.h @@ -3,7 +3,7 @@ #include #include #include -#include +#include #if !defined(ARCADIA_BUILD) # include @@ -18,21 +18,23 @@ namespace DB class ITableFunctionXDBC : public ITableFunction { private: - StoragePtr executeImpl(const ASTPtr & ast_function, const Context & context, const std::string & table_name, ColumnsDescription cached_columns) const override; + StoragePtr executeImpl(const ASTPtr & ast_function, ContextPtr context, const std::string & table_name, ColumnsDescription cached_columns) const override; /* A factory method to create bridge helper, that will assist in remote interaction */ - virtual BridgeHelperPtr createBridgeHelper(Context & context, - const Poco::Timespan & http_timeout_, + virtual BridgeHelperPtr createBridgeHelper(ContextPtr context, + Poco::Timespan http_timeout_, const std::string & connection_string_) const = 0; - ColumnsDescription getActualTableStructure(const Context & context) const override; + ColumnsDescription getActualTableStructure(ContextPtr context) const override; - void parseArguments(const ASTPtr & ast_function, const Context & context) override; + void parseArguments(const ASTPtr & ast_function, ContextPtr context) override; + + void startBridgeIfNot(ContextPtr context) const; String connection_string; String schema_name; String remote_table_name; - BridgeHelperPtr helper; + mutable BridgeHelperPtr helper; }; class TableFunctionJDBC : public ITableFunctionXDBC { @@ -45,8 +47,8 @@ public: } private: - BridgeHelperPtr createBridgeHelper(Context & context, - const Poco::Timespan & http_timeout_, + BridgeHelperPtr createBridgeHelper(ContextPtr context, + Poco::Timespan http_timeout_, const std::string & connection_string_) const override { return std::make_shared>(context, http_timeout_, connection_string_); @@ -65,8 +67,8 @@ public: } private: - BridgeHelperPtr createBridgeHelper(Context & context, - const Poco::Timespan & http_timeout_, + BridgeHelperPtr createBridgeHelper(ContextPtr context, + Poco::Timespan http_timeout_, const std::string & connection_string_) const override { return std::make_shared>(context, http_timeout_, connection_string_); diff --git a/src/TableFunctions/TableFunctionDictionary.cpp b/src/TableFunctions/TableFunctionDictionary.cpp new file mode 100644 index 00000000000..268f49b912e --- /dev/null +++ b/src/TableFunctions/TableFunctionDictionary.cpp @@ -0,0 +1,71 @@ +#include + +#include + +#include +#include +#include + +#include + +#include + +namespace DB +{ + +namespace ErrorCodes +{ + extern const int LOGICAL_ERROR; + extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH; +} + +void TableFunctionDictionary::parseArguments(const ASTPtr & ast_function, ContextPtr context) +{ + // Parse args + ASTs & args_func = ast_function->children; + + if (args_func.size() != 1) + throw Exception(ErrorCodes::LOGICAL_ERROR, "Table function ({}) must have arguments.", quoteString(getName())); + + ASTs & args = args_func.at(0)->children; + + if (args.size() != 1) + throw Exception(ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH, "Table function ({}) requires exactly 1 argument", quoteString(getName())); + + for (auto & arg : args) + arg = evaluateConstantExpressionOrIdentifierAsLiteral(arg, context); + + dictionary_name = args[0]->as().value.safeGet(); +} + +ColumnsDescription
TableFunctionDictionary::getActualTableStructure(ContextPtr context) const +{ + const ExternalDictionariesLoader & external_loader = context->getExternalDictionariesLoader(); + auto dictionary_structure = external_loader.getDictionaryStructure(dictionary_name, context); + auto result = ColumnsDescription(StorageDictionary::getNamesAndTypes(dictionary_structure)); + + return result; +} + +StoragePtr TableFunctionDictionary::executeImpl( + const ASTPtr &, ContextPtr context, const std::string & table_name, ColumnsDescription) const +{ + StorageID dict_id(getDatabaseName(), table_name); + auto dictionary_table_structure = getActualTableStructure(context); + + auto result = StorageDictionary::create( + dict_id, + dictionary_name, + std::move(dictionary_table_structure), + StorageDictionary::Location::Custom, + context); + + return result; +} + +void registerTableFunctionDictionary(TableFunctionFactory & factory) +{ + factory.registerFunction(); +} + +} diff --git a/src/TableFunctions/TableFunctionDictionary.h b/src/TableFunctions/TableFunctionDictionary.h new file mode 100644 index 00000000000..aed435bebfd --- /dev/null +++ b/src/TableFunctions/TableFunctionDictionary.h @@ -0,0 +1,34 @@ +#pragma once + +#include + +namespace DB +{ +class Context; + +/* dictionary(name) - creates a temporary storage over an existing dictionary + * + * The dictionary must be known to the server; its column structure is taken + * from the loaded dictionary via the external dictionaries loader. + */ +class TableFunctionDictionary final : public ITableFunction +{ +public: + static constexpr auto name = "dictionary"; + std::string getName() const override + { + return name; + } + + void parseArguments(const ASTPtr & ast_function, ContextPtr context) override; + + ColumnsDescription getActualTableStructure(ContextPtr context) const override; + + StoragePtr executeImpl(const ASTPtr & ast_function, ContextPtr context, const std::string & table_name, ColumnsDescription) const override; + + const char * getStorageTypeName() const override { return "Dictionary"; } + +private: + String dictionary_name; + ColumnsDescription dictionary_columns; +}; +} diff --git a/src/TableFunctions/TableFunctionFactory.cpp b/src/TableFunctions/TableFunctionFactory.cpp index e8f844e8074..15e61354f6d 100644 --- a/src/TableFunctions/TableFunctionFactory.cpp +++ b/src/TableFunctions/TableFunctionFactory.cpp @@ -31,7 +31,7 @@ void TableFunctionFactory::registerFunction(const std::string & name, Value crea TableFunctionPtr TableFunctionFactory::get( const ASTPtr & ast_function, - const Context & context) const + ContextPtr context) const { const auto * table_function = ast_function->as(); auto res = tryGet(table_function->name, context); @@ -50,7 +50,7 @@ TableFunctionPtr TableFunctionFactory::get( TableFunctionPtr TableFunctionFactory::tryGet( const std::string & name_param, - const Context &) const + ContextPtr) const { String name = getAliasToOrName(name_param); TableFunctionPtr res; @@ -70,7 +70,7 @@ TableFunctionPtr TableFunctionFactory::tryGet( if (CurrentThread::isInitialized()) { - const auto * query_context = CurrentThread::get().getQueryContext(); + auto query_context = CurrentThread::get().getQueryContext(); if (query_context && query_context->getSettingsRef().log_queries) query_context->addQueryFactoriesInfo(Context::QueryLogFactories::TableFunction, name); } diff --git a/src/TableFunctions/TableFunctionFactory.h b/src/TableFunctions/TableFunctionFactory.h index 820b5eb1c7b..59b4ffb9fd5 100644 --- a/src/TableFunctions/TableFunctionFactory.h +++ 
b/src/TableFunctions/TableFunctionFactory.h @@ -41,10 +41,10 @@ public: } /// Throws an exception if not found. - TableFunctionPtr get(const ASTPtr & ast_function, const Context & context) const; + TableFunctionPtr get(const ASTPtr & ast_function, ContextPtr context) const; /// Returns nullptr if not found. - TableFunctionPtr tryGet(const std::string & name, const Context & context) const; + TableFunctionPtr tryGet(const std::string & name, ContextPtr context) const; bool isTableFunctionName(const std::string & name) const; diff --git a/src/TableFunctions/TableFunctionFile.cpp b/src/TableFunctions/TableFunctionFile.cpp index 13ac6dc2145..6ecb5606d56 100644 --- a/src/TableFunctions/TableFunctionFile.cpp +++ b/src/TableFunctions/TableFunctionFile.cpp @@ -12,20 +12,23 @@ namespace DB { StoragePtr TableFunctionFile::getStorage(const String & source, const String & format_, const ColumnsDescription & columns, - Context & global_context, const std::string & table_name, + ContextPtr global_context, const std::string & table_name, const std::string & compression_method_) const { // For `file` table function, we are going to use format settings from the // query context. - StorageFile::CommonArguments args{StorageID(getDatabaseName(), table_name), + StorageFile::CommonArguments args + { + WithContext(global_context), + StorageID(getDatabaseName(), table_name), format_, std::nullopt /*format settings*/, compression_method_, columns, ConstraintsDescription{}, - global_context}; + }; - return StorageFile::create(source, global_context.getUserFilesPath(), args); + return StorageFile::create(source, global_context->getUserFilesPath(), args); } void registerTableFunctionFile(TableFunctionFactory & factory) diff --git a/src/TableFunctions/TableFunctionFile.h b/src/TableFunctions/TableFunctionFile.h index 02704e4bf7f..460656a7218 100644 --- a/src/TableFunctions/TableFunctionFile.h +++ b/src/TableFunctions/TableFunctionFile.h @@ -5,7 +5,6 @@ namespace DB { -class Context; /* file(path, format, structure) - creates a temporary storage from file * @@ -23,7 +22,7 @@ public: private: StoragePtr getStorage( - const String & source, const String & format_, const ColumnsDescription & columns, Context & global_context, + const String & source, const String & format_, const ColumnsDescription & columns, ContextPtr global_context, const std::string & table_name, const std::string & compression_method_) const override; const char * getStorageTypeName() const override { return "File"; } };} diff --git a/src/TableFunctions/TableFunctionGenerateRandom.cpp b/src/TableFunctions/TableFunctionGenerateRandom.cpp index 15c2c2bfa1f..b19be7bd7a3 100644 --- a/src/TableFunctions/TableFunctionGenerateRandom.cpp +++ b/src/TableFunctions/TableFunctionGenerateRandom.cpp @@ -26,7 +26,7 @@ namespace ErrorCodes extern const int LOGICAL_ERROR; } -void TableFunctionGenerateRandom::parseArguments(const ASTPtr & ast_function, const Context & /*context*/) +void TableFunctionGenerateRandom::parseArguments(const ASTPtr & ast_function, ContextPtr /*context*/) { ASTs & args_func = ast_function->children; @@ -74,12 +74,12 @@ void TableFunctionGenerateRandom::parseArguments(const ASTPtr & ast_function, co max_array_length = args[3]->as().value.safeGet(); } -ColumnsDescription TableFunctionGenerateRandom::getActualTableStructure(const Context & context) const +ColumnsDescription TableFunctionGenerateRandom::getActualTableStructure(ContextPtr context) const { return parseColumnsListFromString(structure, context); } -StoragePtr 
TableFunctionGenerateRandom::executeImpl(const ASTPtr & /*ast_function*/, const Context & context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const +StoragePtr TableFunctionGenerateRandom::executeImpl(const ASTPtr & /*ast_function*/, ContextPtr context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const { auto columns = getActualTableStructure(context); auto res = StorageGenerateRandom::create(StorageID(getDatabaseName(), table_name), columns, max_array_length, max_string_length, random_seed); diff --git a/src/TableFunctions/TableFunctionGenerateRandom.h b/src/TableFunctions/TableFunctionGenerateRandom.h index 1d8839ba6d4..bcad11156be 100644 --- a/src/TableFunctions/TableFunctionGenerateRandom.h +++ b/src/TableFunctions/TableFunctionGenerateRandom.h @@ -15,11 +15,11 @@ public: std::string getName() const override { return name; } bool hasStaticStructure() const override { return true; } private: - StoragePtr executeImpl(const ASTPtr & ast_function, const Context & context, const std::string & table_name, ColumnsDescription cached_columns) const override; + StoragePtr executeImpl(const ASTPtr & ast_function, ContextPtr context, const std::string & table_name, ColumnsDescription cached_columns) const override; const char * getStorageTypeName() const override { return "GenerateRandom"; } - ColumnsDescription getActualTableStructure(const Context & context) const override; - void parseArguments(const ASTPtr & ast_function, const Context & context) override; + ColumnsDescription getActualTableStructure(ContextPtr context) const override; + void parseArguments(const ASTPtr & ast_function, ContextPtr context) override; String structure; UInt64 max_string_length = 10; diff --git a/src/TableFunctions/TableFunctionHDFS.cpp b/src/TableFunctions/TableFunctionHDFS.cpp index 700cb93ca06..714c6ea1f59 100644 --- a/src/TableFunctions/TableFunctionHDFS.cpp +++ b/src/TableFunctions/TableFunctionHDFS.cpp @@ -10,7 +10,7 @@ namespace DB { StoragePtr TableFunctionHDFS::getStorage( - const String & source, const String & format_, const ColumnsDescription & columns, Context & global_context, + const String & source, const String & format_, const ColumnsDescription & columns, ContextPtr global_context, const std::string & table_name, const String & compression_method_) const { return StorageHDFS::create( diff --git a/src/TableFunctions/TableFunctionHDFS.h b/src/TableFunctions/TableFunctionHDFS.h index 47e040f7593..d9ee9b47868 100644 --- a/src/TableFunctions/TableFunctionHDFS.h +++ b/src/TableFunctions/TableFunctionHDFS.h @@ -26,7 +26,7 @@ public: private: StoragePtr getStorage( - const String & source, const String & format_, const ColumnsDescription & columns, Context & global_context, + const String & source, const String & format_, const ColumnsDescription & columns, ContextPtr global_context, const std::string & table_name, const String & compression_method_) const override; const char * getStorageTypeName() const override { return "HDFS"; } }; diff --git a/src/TableFunctions/TableFunctionInput.cpp b/src/TableFunctions/TableFunctionInput.cpp index 41c41835434..677a6ff3ce4 100644 --- a/src/TableFunctions/TableFunctionInput.cpp +++ b/src/TableFunctions/TableFunctionInput.cpp @@ -22,7 +22,7 @@ namespace ErrorCodes extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH; } -void TableFunctionInput::parseArguments(const ASTPtr & ast_function, const Context & context) +void TableFunctionInput::parseArguments(const ASTPtr & ast_function, ContextPtr context) { const auto 
* function = ast_function->as(); @@ -38,12 +38,12 @@ void TableFunctionInput::parseArguments(const ASTPtr & ast_function, const Conte structure = evaluateConstantExpressionOrIdentifierAsLiteral(args[0], context)->as().value.safeGet(); } -ColumnsDescription TableFunctionInput::getActualTableStructure(const Context & context) const +ColumnsDescription TableFunctionInput::getActualTableStructure(ContextPtr context) const { return parseColumnsListFromString(structure, context); } -StoragePtr TableFunctionInput::executeImpl(const ASTPtr & /*ast_function*/, const Context & context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const +StoragePtr TableFunctionInput::executeImpl(const ASTPtr & /*ast_function*/, ContextPtr context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const { auto storage = StorageInput::create(StorageID(getDatabaseName(), table_name), getActualTableStructure(context)); storage->startup(); diff --git a/src/TableFunctions/TableFunctionInput.h b/src/TableFunctions/TableFunctionInput.h index 5809d48a77c..5953693e711 100644 --- a/src/TableFunctions/TableFunctionInput.h +++ b/src/TableFunctions/TableFunctionInput.h @@ -18,11 +18,11 @@ public: bool hasStaticStructure() const override { return true; } private: - StoragePtr executeImpl(const ASTPtr & ast_function, const Context & context, const std::string & table_name, ColumnsDescription cached_columns) const override; + StoragePtr executeImpl(const ASTPtr & ast_function, ContextPtr context, const std::string & table_name, ColumnsDescription cached_columns) const override; const char * getStorageTypeName() const override { return "Input"; } - ColumnsDescription getActualTableStructure(const Context & context) const override; - void parseArguments(const ASTPtr & ast_function, const Context & context) override; + ColumnsDescription getActualTableStructure(ContextPtr context) const override; + void parseArguments(const ASTPtr & ast_function, ContextPtr context) override; String structure; }; diff --git a/src/TableFunctions/TableFunctionMerge.cpp b/src/TableFunctions/TableFunctionMerge.cpp index c5fb9a7686d..6d10b0d04b6 100644 --- a/src/TableFunctions/TableFunctionMerge.cpp +++ b/src/TableFunctions/TableFunctionMerge.cpp @@ -33,7 +33,7 @@ namespace } -void TableFunctionMerge::parseArguments(const ASTPtr & ast_function, const Context & context) +void TableFunctionMerge::parseArguments(const ASTPtr & ast_function, ContextPtr context) { ASTs & args_func = ast_function->children; @@ -57,7 +57,7 @@ void TableFunctionMerge::parseArguments(const ASTPtr & ast_function, const Conte } -const Strings & TableFunctionMerge::getSourceTables(const Context & context) const +const Strings & TableFunctionMerge::getSourceTables(ContextPtr context) const { if (source_tables) return *source_tables; @@ -67,7 +67,7 @@ const Strings & TableFunctionMerge::getSourceTables(const Context & context) con OptimizedRegularExpression re(source_table_regexp); auto table_name_match = [&](const String & table_name_) { return re.match(table_name_); }; - auto access = context.getAccess(); + auto access = context->getAccess(); bool granted_show_on_all_tables = access->isGranted(AccessType::SHOW_TABLES, source_database); bool granted_select_on_all_tables = access->isGranted(AccessType::SELECT, source_database); @@ -91,7 +91,7 @@ const Strings & TableFunctionMerge::getSourceTables(const Context & context) con } -ColumnsDescription TableFunctionMerge::getActualTableStructure(const Context & context) const +ColumnsDescription 
TableFunctionMerge::getActualTableStructure(ContextPtr context) const { for (const auto & table_name : getSourceTables(context)) { @@ -104,7 +104,7 @@ ColumnsDescription TableFunctionMerge::getActualTableStructure(const Context & c } -StoragePtr TableFunctionMerge::executeImpl(const ASTPtr & /*ast_function*/, const Context & context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const +StoragePtr TableFunctionMerge::executeImpl(const ASTPtr & /*ast_function*/, ContextPtr context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const { auto res = StorageMerge::create( StorageID(getDatabaseName(), table_name), diff --git a/src/TableFunctions/TableFunctionMerge.h b/src/TableFunctions/TableFunctionMerge.h index 8f9f4522d17..04027b9d76a 100644 --- a/src/TableFunctions/TableFunctionMerge.h +++ b/src/TableFunctions/TableFunctionMerge.h @@ -16,12 +16,12 @@ public: static constexpr auto name = "merge"; std::string getName() const override { return name; } private: - StoragePtr executeImpl(const ASTPtr & ast_function, const Context & context, const std::string & table_name, ColumnsDescription cached_columns) const override; + StoragePtr executeImpl(const ASTPtr & ast_function, ContextPtr context, const std::string & table_name, ColumnsDescription cached_columns) const override; const char * getStorageTypeName() const override { return "Merge"; } - const Strings & getSourceTables(const Context & context) const; - ColumnsDescription getActualTableStructure(const Context & context) const override; - void parseArguments(const ASTPtr & ast_function, const Context & context) override; + const Strings & getSourceTables(ContextPtr context) const; + ColumnsDescription getActualTableStructure(ContextPtr context) const override; + void parseArguments(const ASTPtr & ast_function, ContextPtr context) override; String source_database; String source_table_regexp; diff --git a/src/TableFunctions/TableFunctionMySQL.cpp b/src/TableFunctions/TableFunctionMySQL.cpp index d6a62dc68b4..7d3fca58451 100644 --- a/src/TableFunctions/TableFunctionMySQL.cpp +++ b/src/TableFunctions/TableFunctionMySQL.cpp @@ -1,29 +1,30 @@ #if !defined(ARCADIA_BUILD) -# include "config_core.h" +#include "config_core.h" #endif #if USE_MYSQL -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include "registerTableFunctions.h" +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "registerTableFunctions.h" -# include // for fetchTablesColumnsList +#include // for fetchTablesColumnsList +#include namespace DB @@ -37,7 +38,7 @@ namespace ErrorCodes extern const int UNKNOWN_TABLE; } -void TableFunctionMySQL::parseArguments(const ASTPtr & ast_function, const Context & context) +void TableFunctionMySQL::parseArguments(const ASTPtr & ast_function, ContextPtr context) { const auto & args_func = ast_function->as(); @@ -59,6 +60,11 @@ void TableFunctionMySQL::parseArguments(const ASTPtr & ast_function, const Conte user_name = args[3]->as().value.safeGet(); password = args[4]->as().value.safeGet(); + /// Split into replicas if needed. 
3306 is the default MySQL port number + size_t max_addresses = context->getSettingsRef().glob_expansion_max_elements; + auto addresses = parseRemoteDescriptionForExternalDatabase(host_port, max_addresses, 3306); + pool.emplace(remote_database_name, addresses, user_name, password); + if (args.size() >= 6) replace_query = args[5]->as().value.safeGet() > 0; if (args.size() == 7) @@ -68,33 +74,27 @@ void TableFunctionMySQL::parseArguments(const ASTPtr & ast_function, const Conte throw Exception( "Only one of 'replace_query' and 'on_duplicate_clause' can be specified, or none of them", ErrorCodes::BAD_ARGUMENTS); - - /// 3306 is the default MySQL port number - parsed_host_port = parseAddress(host_port, 3306); } -ColumnsDescription TableFunctionMySQL::getActualTableStructure(const Context & context) const +ColumnsDescription TableFunctionMySQL::getActualTableStructure(ContextPtr context) const { - assert(!parsed_host_port.first.empty()); - if (!pool) - pool.emplace(remote_database_name, parsed_host_port.first, user_name, password, parsed_host_port.second); - - const auto & settings = context.getSettingsRef(); - const auto tables_and_columns = fetchTablesColumnsList(*pool, remote_database_name, {remote_table_name}, settings.external_table_functions_use_nulls, settings.mysql_datatypes_support_level); + const auto & settings = context->getSettingsRef(); + const auto tables_and_columns = fetchTablesColumnsList(*pool, remote_database_name, {remote_table_name}, settings, settings.mysql_datatypes_support_level); const auto columns = tables_and_columns.find(remote_table_name); if (columns == tables_and_columns.end()) - throw Exception("MySQL table " + backQuoteIfNeed(remote_database_name) + "." + backQuoteIfNeed(remote_table_name) + " doesn't exist.", ErrorCodes::UNKNOWN_TABLE); + throw Exception("MySQL table " + (remote_database_name.empty() ? 
"" : (backQuote(remote_database_name) + ".")) + + backQuote(remote_table_name) + " doesn't exist.", ErrorCodes::UNKNOWN_TABLE); return ColumnsDescription{columns->second}; } -StoragePtr TableFunctionMySQL::executeImpl(const ASTPtr & /*ast_function*/, const Context & context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const +StoragePtr TableFunctionMySQL::executeImpl( + const ASTPtr & /*ast_function*/, + ContextPtr context, + const std::string & table_name, + ColumnsDescription /*cached_columns*/) const { - assert(!parsed_host_port.first.empty()); - if (!pool) - pool.emplace(remote_database_name, parsed_host_port.first, user_name, password, parsed_host_port.second); - auto columns = getActualTableStructure(context); auto res = StorageMySQL::create( diff --git a/src/TableFunctions/TableFunctionMySQL.h b/src/TableFunctions/TableFunctionMySQL.h index 1fe5a4aa4ac..64c7d56cf2a 100644 --- a/src/TableFunctions/TableFunctionMySQL.h +++ b/src/TableFunctions/TableFunctionMySQL.h @@ -24,13 +24,12 @@ public: return name; } private: - StoragePtr executeImpl(const ASTPtr & ast_function, const Context & context, const std::string & table_name, ColumnsDescription cached_columns) const override; + StoragePtr executeImpl(const ASTPtr & ast_function, ContextPtr context, const std::string & table_name, ColumnsDescription cached_columns) const override; const char * getStorageTypeName() const override { return "MySQL"; } - ColumnsDescription getActualTableStructure(const Context & context) const override; - void parseArguments(const ASTPtr & ast_function, const Context & context) override; + ColumnsDescription getActualTableStructure(ContextPtr context) const override; + void parseArguments(const ASTPtr & ast_function, ContextPtr context) override; - std::pair parsed_host_port; String remote_database_name; String remote_table_name; String user_name; @@ -38,7 +37,7 @@ private: bool replace_query = false; String on_duplicate_clause; - mutable std::optional pool; + mutable std::optional pool; }; } diff --git a/src/TableFunctions/TableFunctionNull.cpp b/src/TableFunctions/TableFunctionNull.cpp index 6abe0319394..334d7c3dcbd 100644 --- a/src/TableFunctions/TableFunctionNull.cpp +++ b/src/TableFunctions/TableFunctionNull.cpp @@ -17,7 +17,7 @@ namespace ErrorCodes extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH; } -void TableFunctionNull::parseArguments(const ASTPtr & ast_function, const Context & context) +void TableFunctionNull::parseArguments(const ASTPtr & ast_function, ContextPtr context) { const auto * function = ast_function->as(); if (!function || !function->arguments) @@ -30,12 +30,12 @@ void TableFunctionNull::parseArguments(const ASTPtr & ast_function, const Contex structure = evaluateConstantExpressionOrIdentifierAsLiteral(arguments[0], context)->as()->value.safeGet(); } -ColumnsDescription TableFunctionNull::getActualTableStructure(const Context & context) const +ColumnsDescription TableFunctionNull::getActualTableStructure(ContextPtr context) const { return parseColumnsListFromString(structure, context); } -StoragePtr TableFunctionNull::executeImpl(const ASTPtr & /*ast_function*/, const Context & context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const +StoragePtr TableFunctionNull::executeImpl(const ASTPtr & /*ast_function*/, ContextPtr context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const { auto columns = getActualTableStructure(context); auto res = StorageNull::create(StorageID(getDatabaseName(), table_name), 
columns, ConstraintsDescription()); diff --git a/src/TableFunctions/TableFunctionNull.h b/src/TableFunctions/TableFunctionNull.h index 4d4cecb0292..6734fb8efb6 100644 --- a/src/TableFunctions/TableFunctionNull.h +++ b/src/TableFunctions/TableFunctionNull.h @@ -17,11 +17,11 @@ public: static constexpr auto name = "null"; std::string getName() const override { return name; } private: - StoragePtr executeImpl(const ASTPtr & ast_function, const Context & context, const String & table_name, ColumnsDescription cached_columns) const override; + StoragePtr executeImpl(const ASTPtr & ast_function, ContextPtr context, const String & table_name, ColumnsDescription cached_columns) const override; const char * getStorageTypeName() const override { return "Null"; } - void parseArguments(const ASTPtr & ast_function, const Context & context) override; - ColumnsDescription getActualTableStructure(const Context & context) const override; + void parseArguments(const ASTPtr & ast_function, ContextPtr context) override; + ColumnsDescription getActualTableStructure(ContextPtr context) const override; String structure; }; diff --git a/src/TableFunctions/TableFunctionNumbers.cpp b/src/TableFunctions/TableFunctionNumbers.cpp index 594075b1c82..01ffd2b2e3d 100644 --- a/src/TableFunctions/TableFunctionNumbers.cpp +++ b/src/TableFunctions/TableFunctionNumbers.cpp @@ -23,14 +23,14 @@ namespace ErrorCodes template -ColumnsDescription TableFunctionNumbers::getActualTableStructure(const Context & /*context*/) const +ColumnsDescription TableFunctionNumbers::getActualTableStructure(ContextPtr /*context*/) const { /// NOTE: https://bugs.llvm.org/show_bug.cgi?id=47418 return ColumnsDescription{{{"number", std::make_shared()}}}; } template -StoragePtr TableFunctionNumbers::executeImpl(const ASTPtr & ast_function, const Context & context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const +StoragePtr TableFunctionNumbers::executeImpl(const ASTPtr & ast_function, ContextPtr context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const { if (const auto * function = ast_function->as()) { @@ -56,7 +56,7 @@ void registerTableFunctionNumbers(TableFunctionFactory & factory) } template -UInt64 TableFunctionNumbers::evaluateArgument(const Context & context, ASTPtr & argument) const +UInt64 TableFunctionNumbers::evaluateArgument(ContextPtr context, ASTPtr & argument) const { const auto & [field, type] = evaluateConstantExpression(argument, context); diff --git a/src/TableFunctions/TableFunctionNumbers.h b/src/TableFunctions/TableFunctionNumbers.h index c27bb7319ba..6cee752390e 100644 --- a/src/TableFunctions/TableFunctionNumbers.h +++ b/src/TableFunctions/TableFunctionNumbers.h @@ -19,12 +19,12 @@ public: std::string getName() const override { return name; } bool hasStaticStructure() const override { return true; } private: - StoragePtr executeImpl(const ASTPtr & ast_function, const Context & context, const std::string & table_name, ColumnsDescription cached_columns) const override; + StoragePtr executeImpl(const ASTPtr & ast_function, ContextPtr context, const std::string & table_name, ColumnsDescription cached_columns) const override; const char * getStorageTypeName() const override { return "SystemNumbers"; } - UInt64 evaluateArgument(const Context & context, ASTPtr & argument) const; + UInt64 evaluateArgument(ContextPtr context, ASTPtr & argument) const; - ColumnsDescription getActualTableStructure(const Context & context) const override; + ColumnsDescription 
getActualTableStructure(ContextPtr context) const override; }; diff --git a/src/TableFunctions/TableFunctionPostgreSQL.cpp b/src/TableFunctions/TableFunctionPostgreSQL.cpp index eefdff1fa87..6e7ba1825fc 100644 --- a/src/TableFunctions/TableFunctionPostgreSQL.cpp +++ b/src/TableFunctions/TableFunctionPostgreSQL.cpp @@ -11,6 +11,8 @@ #include "registerTableFunctions.h" #include #include +#include +#include namespace DB @@ -24,28 +26,32 @@ namespace ErrorCodes StoragePtr TableFunctionPostgreSQL::executeImpl(const ASTPtr & /*ast_function*/, - const Context & context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const + ContextPtr context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const { auto columns = getActualTableStructure(context); auto result = std::make_shared( - StorageID(getDatabaseName(), table_name), remote_table_name, - connection, columns, ConstraintsDescription{}, context); + StorageID(getDatabaseName(), table_name), *connection_pool, remote_table_name, + columns, ConstraintsDescription{}, context, remote_table_schema); result->startup(); return result; } -ColumnsDescription TableFunctionPostgreSQL::getActualTableStructure(const Context & context) const +ColumnsDescription TableFunctionPostgreSQL::getActualTableStructure(ContextPtr context) const { - const bool use_nulls = context.getSettingsRef().external_table_functions_use_nulls; - auto columns = fetchPostgreSQLTableStructure(connection->conn(), remote_table_name, use_nulls); + const bool use_nulls = context->getSettingsRef().external_table_functions_use_nulls; + auto columns = fetchPostgreSQLTableStructure( + connection_pool->get(), + remote_table_schema.empty() ? doubleQuoteString(remote_table_name) + : doubleQuoteString(remote_table_schema) + '.' + doubleQuoteString(remote_table_name), + use_nulls); return ColumnsDescription{*columns}; } -void TableFunctionPostgreSQL::parseArguments(const ASTPtr & ast_function, const Context & context) +void TableFunctionPostgreSQL::parseArguments(const ASTPtr & ast_function, ContextPtr context) { const auto & func_args = ast_function->as(); @@ -54,21 +60,27 @@ void TableFunctionPostgreSQL::parseArguments(const ASTPtr & ast_function, const ASTs & args = func_args.arguments->children; - if (args.size() != 5) - throw Exception("Table function 'PostgreSQL' requires 5 parameters: " - "PostgreSQL('host:port', 'database', 'table', 'user', 'password').", + if (args.size() < 5 || args.size() > 6) + throw Exception("Table function 'PostgreSQL' requires 5 or 6 parameters: " + "PostgreSQL('host:port', 'database', 'table', 'user', 'password'[, 'schema']).", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); for (auto & arg : args) arg = evaluateConstantExpressionOrIdentifierAsLiteral(arg, context); - auto parsed_host_port = parseAddress(args[0]->as().value.safeGet(), 5432); + /// Split into replicas if needed. 5432 is the default PostgreSQL port.
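Both the MySQL and the PostgreSQL table functions now feed their first argument through parseRemoteDescriptionForExternalDatabase, so one host description can expand into several replica addresses, each falling back to a default port. A reduced sketch of just the splitting step; splitReplicas is a hypothetical helper, and the real function also expands {a,b} / {1..n} patterns and honours glob_expansion_max_elements:

#include <cstdint>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

// Break "host1:5433|host2" on '|' and fall back to a default port where none is given.
static std::vector<std::pair<std::string, std::uint16_t>>
splitReplicas(const std::string & description, std::uint16_t default_port)
{
    std::vector<std::pair<std::string, std::uint16_t>> replicas;
    std::size_t pos = 0;
    while (pos <= description.size())
    {
        const std::size_t bar = description.find('|', pos);
        const std::string item = description.substr(
            pos, bar == std::string::npos ? std::string::npos : bar - pos);

        const std::size_t colon = item.find(':');
        if (colon == std::string::npos)
            replicas.emplace_back(item, default_port);  // no explicit port: use the default
        else
            replicas.emplace_back(item.substr(0, colon),
                                  static_cast<std::uint16_t>(std::stoi(item.substr(colon + 1))));

        if (bar == std::string::npos)
            break;
        pos = bar + 1;
    }
    return replicas;
}

int main()
{
    for (const auto & [host, port] : splitReplicas("pg1:5433|pg2", 5432))
        std::cout << host << ':' << port << '\n';  // pg1:5433 then pg2:5432
}

The resulting address list is what the connection pool with failover is constructed from.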
+ const auto & host_port = args[0]->as().value.safeGet(); + size_t max_addresses = context->getSettingsRef().glob_expansion_max_elements; + auto addresses = parseRemoteDescriptionForExternalDatabase(host_port, max_addresses, 5432); + remote_table_name = args[2]->as().value.safeGet(); - connection = std::make_shared( + if (args.size() == 6) + remote_table_schema = args[5]->as().value.safeGet(); + + connection_pool = std::make_shared( args[1]->as().value.safeGet(), - parsed_host_port.first, - parsed_host_port.second, + addresses, args[3]->as().value.safeGet(), args[4]->as().value.safeGet()); } diff --git a/src/TableFunctions/TableFunctionPostgreSQL.h b/src/TableFunctions/TableFunctionPostgreSQL.h index e625cbd9bf6..44f804fbb30 100644 --- a/src/TableFunctions/TableFunctionPostgreSQL.h +++ b/src/TableFunctions/TableFunctionPostgreSQL.h @@ -5,14 +5,12 @@ #if USE_LIBPQXX #include +#include namespace DB { -class PostgreSQLConnection; -using PostgreSQLConnectionPtr = std::shared_ptr; - class TableFunctionPostgreSQL : public ITableFunction { public: @@ -21,17 +19,17 @@ public: private: StoragePtr executeImpl( - const ASTPtr & ast_function, const Context & context, + const ASTPtr & ast_function, ContextPtr context, const std::string & table_name, ColumnsDescription cached_columns) const override; const char * getStorageTypeName() const override { return "PostgreSQL"; } - ColumnsDescription getActualTableStructure(const Context & context) const override; - void parseArguments(const ASTPtr & ast_function, const Context & context) override; + ColumnsDescription getActualTableStructure(ContextPtr context) const override; + void parseArguments(const ASTPtr & ast_function, ContextPtr context) override; String connection_str; - String remote_table_name; - PostgreSQLConnectionPtr connection; + String remote_table_name, remote_table_schema; + postgres::PoolWithFailoverPtr connection_pool; }; } diff --git a/src/TableFunctions/TableFunctionRemote.cpp b/src/TableFunctions/TableFunctionRemote.cpp index 0e7623c0ac3..ab2458b64f4 100644 --- a/src/TableFunctions/TableFunctionRemote.cpp +++ b/src/TableFunctions/TableFunctionRemote.cpp @@ -28,7 +28,7 @@ namespace ErrorCodes } -void TableFunctionRemote::parseArguments(const ASTPtr & ast_function, const Context & context) +void TableFunctionRemote::parseArguments(const ASTPtr & ast_function, ContextPtr context) { ASTs & args_func = ast_function->children; @@ -162,14 +162,14 @@ void TableFunctionRemote::parseArguments(const ASTPtr & ast_function, const Cont { /// Use an existing cluster from the main config if (name != "clusterAllReplicas") - cluster = context.getCluster(cluster_name); + cluster = context->getCluster(cluster_name); else - cluster = context.getCluster(cluster_name)->getClusterWithReplicasAsShards(context.getSettings()); + cluster = context->getCluster(cluster_name)->getClusterWithReplicasAsShards(context->getSettings()); } else { /// Create new cluster from the scratch - size_t max_addresses = context.getSettingsRef().table_function_remote_max_addresses; + size_t max_addresses = context->getSettingsRef().table_function_remote_max_addresses; std::vector shards = parseRemoteDescription(cluster_description, 0, cluster_description.size(), ',', max_addresses); std::vector> names; @@ -180,7 +180,7 @@ void TableFunctionRemote::parseArguments(const ASTPtr & ast_function, const Cont if (names.empty()) throw Exception("Shard list is empty after parsing first argument", ErrorCodes::BAD_ARGUMENTS); - auto maybe_secure_port = context.getTCPPortSecure(); + auto 
maybe_secure_port = context->getTCPPortSecure(); /// Check host and port on affiliation allowed hosts. for (const auto & hosts : names) @@ -189,20 +189,20 @@ void TableFunctionRemote::parseArguments(const ASTPtr & ast_function, const Cont { size_t colon = host.find(':'); if (colon == String::npos) - context.getRemoteHostFilter().checkHostAndPort( + context->getRemoteHostFilter().checkHostAndPort( host, - toString((secure ? (maybe_secure_port ? *maybe_secure_port : DBMS_DEFAULT_SECURE_PORT) : context.getTCPPort()))); + toString((secure ? (maybe_secure_port ? *maybe_secure_port : DBMS_DEFAULT_SECURE_PORT) : context->getTCPPort()))); else - context.getRemoteHostFilter().checkHostAndPort(host.substr(0, colon), host.substr(colon + 1)); + context->getRemoteHostFilter().checkHostAndPort(host.substr(0, colon), host.substr(colon + 1)); } } cluster = std::make_shared( - context.getSettings(), + context->getSettings(), names, username, password, - (secure ? (maybe_secure_port ? *maybe_secure_port : DBMS_DEFAULT_SECURE_PORT) : context.getTCPPort()), + (secure ? (maybe_secure_port ? *maybe_secure_port : DBMS_DEFAULT_SECURE_PORT) : context->getTCPPort()), false, secure); } @@ -214,7 +214,7 @@ void TableFunctionRemote::parseArguments(const ASTPtr & ast_function, const Cont remote_table_id.table_name = remote_table; } -StoragePtr TableFunctionRemote::executeImpl(const ASTPtr & /*ast_function*/, const Context & context, const std::string & table_name, ColumnsDescription cached_columns) const +StoragePtr TableFunctionRemote::executeImpl(const ASTPtr & /*ast_function*/, ContextPtr context, const std::string & table_name, ColumnsDescription cached_columns) const { /// StorageDistributed supports mismatching structure of remote table, so we can use outdated structure for CREATE ... AS remote(...) 
/// without additional conversion in StorageTableFunctionProxy @@ -255,7 +255,7 @@ StoragePtr TableFunctionRemote::executeImpl(const ASTPtr & /*ast_function*/, con return res; } -ColumnsDescription TableFunctionRemote::getActualTableStructure(const Context & context) const +ColumnsDescription TableFunctionRemote::getActualTableStructure(ContextPtr context) const { assert(cluster); return getStructureOfRemoteTable(*cluster, remote_table_id, context, remote_table_function_ptr); diff --git a/src/TableFunctions/TableFunctionRemote.h b/src/TableFunctions/TableFunctionRemote.h index d485440d604..845c36182dc 100644 --- a/src/TableFunctions/TableFunctionRemote.h +++ b/src/TableFunctions/TableFunctionRemote.h @@ -18,19 +18,19 @@ namespace DB class TableFunctionRemote : public ITableFunction { public: - TableFunctionRemote(const std::string & name_, bool secure_ = false); + explicit TableFunctionRemote(const std::string & name_, bool secure_ = false); std::string getName() const override { return name; } - ColumnsDescription getActualTableStructure(const Context & context) const override; + ColumnsDescription getActualTableStructure(ContextPtr context) const override; bool needStructureConversion() const override { return false; } private: - StoragePtr executeImpl(const ASTPtr & ast_function, const Context & context, const std::string & table_name, ColumnsDescription cached_columns) const override; + StoragePtr executeImpl(const ASTPtr & ast_function, ContextPtr context, const std::string & table_name, ColumnsDescription cached_columns) const override; const char * getStorageTypeName() const override { return "Distributed"; } - void parseArguments(const ASTPtr & ast_function, const Context & context) override; + void parseArguments(const ASTPtr & ast_function, ContextPtr context) override; std::string name; bool is_cluster_function; diff --git a/src/TableFunctions/TableFunctionS3.cpp b/src/TableFunctions/TableFunctionS3.cpp index 6dc9230ca46..973899d2101 100644 --- a/src/TableFunctions/TableFunctionS3.cpp +++ b/src/TableFunctions/TableFunctionS3.cpp @@ -17,58 +17,76 @@ namespace DB namespace ErrorCodes { - extern const int LOGICAL_ERROR; extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH; } -void TableFunctionS3::parseArguments(const ASTPtr & ast_function, const Context & context) +void TableFunctionS3::parseArguments(const ASTPtr & ast_function, ContextPtr context) { /// Parse args ASTs & args_func = ast_function->children; + const auto message = fmt::format( + "The signature of table function {} can be one of the following:\n" \ + " - url, format, structure\n" \ + " - url, format, structure, compression_method\n" \ + " - url, access_key_id, secret_access_key, format, structure\n" \ + " - url, access_key_id, secret_access_key, format, structure, compression_method", + getName()); + if (args_func.size() != 1) - throw Exception("Table function '" + getName() + "' must have arguments.", ErrorCodes::LOGICAL_ERROR); + throw Exception("Table function '" + getName() + "' must have arguments.", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); ASTs & args = args_func.at(0)->children; if (args.size() < 3 || args.size() > 6) - throw Exception("Table function '" + getName() + "' requires 3 to 6 arguments: url, [access_key_id, secret_access_key,] format, structure and [compression_method].", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); + throw Exception(message, ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); for (auto & arg : args) arg = evaluateConstantExpressionOrIdentifierAsLiteral(arg, context); + /// Size -> argument indexes + static auto size_to_args = std::map> + { + {3, {{"format", 1}, {"structure", 2}}}, + {4, {{"format", 1}, {"structure", 2}, {"compression_method", 3}}}, + {5, {{"access_key_id", 1}, {"secret_access_key", 2}, {"format", 3}, {"structure", 4}}}, + {6, {{"access_key_id", 1}, {"secret_access_key", 2}, {"format", 3}, {"structure", 4}, {"compression_method", 5}}} + }; + + /// This argument always comes first filename = args[0]->as().value.safeGet(); - if (args.size() < 5) - { - format = args[1]->as().value.safeGet(); - structure = args[2]->as().value.safeGet(); - } - else - { - access_key_id = args[1]->as().value.safeGet(); - secret_access_key = args[2]->as().value.safeGet(); - format = args[3]->as().value.safeGet(); - structure = args[4]->as().value.safeGet(); - } + auto & args_to_idx = size_to_args[args.size()]; - if (args.size() == 4 || args.size() == 6) - compression_method = args.back()->as().value.safeGet(); + if (args_to_idx.contains("format")) + format = args[args_to_idx["format"]]->as().value.safeGet(); + + if (args_to_idx.contains("structure")) + structure = args[args_to_idx["structure"]]->as().value.safeGet(); + + if (args_to_idx.contains("compression_method")) + compression_method = args[args_to_idx["compression_method"]]->as().value.safeGet(); + + if (args_to_idx.contains("access_key_id")) + access_key_id = args[args_to_idx["access_key_id"]]->as().value.safeGet(); + + if (args_to_idx.contains("secret_access_key")) + secret_access_key = args[args_to_idx["secret_access_key"]]->as().value.safeGet(); } -ColumnsDescription TableFunctionS3::getActualTableStructure(const Context & context) const +ColumnsDescription TableFunctionS3::getActualTableStructure(ContextPtr context) const { return parseColumnsListFromString(structure, context); } -StoragePtr TableFunctionS3::executeImpl(const ASTPtr & /*ast_function*/, const Context & context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const +StoragePtr TableFunctionS3::executeImpl(const ASTPtr & /*ast_function*/, ContextPtr context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const { Poco::URI uri (filename); S3::URI s3_uri (uri); - UInt64 min_upload_part_size = context.getSettingsRef().s3_min_upload_part_size; - UInt64 max_single_part_upload_size = context.getSettingsRef().s3_max_single_part_upload_size; - UInt64 max_connections = context.getSettingsRef().s3_max_connections; + UInt64 s3_max_single_read_retries = context->getSettingsRef().s3_max_single_read_retries; + UInt64 min_upload_part_size = context->getSettingsRef().s3_min_upload_part_size; + UInt64 max_single_part_upload_size = context->getSettingsRef().s3_max_single_part_upload_size; + UInt64 max_connections = context->getSettingsRef().s3_max_connections; StoragePtr storage = StorageS3::create( s3_uri, @@ -76,12 +94,13 @@ StoragePtr TableFunctionS3::executeImpl(const ASTPtr & /*ast_function*/, const C secret_access_key, StorageID(getDatabaseName(), table_name), format, + s3_max_single_read_retries, min_upload_part_size, max_single_part_upload_size, max_connections, getActualTableStructure(context), ConstraintsDescription{}, - const_cast(context), + context, compression_method); storage->startup(); diff --git a/src/TableFunctions/TableFunctionS3.h b/src/TableFunctions/TableFunctionS3.h index 722fb9eb23c..1835fa3daa9 100644 --- a/src/TableFunctions/TableFunctionS3.h +++ b/src/TableFunctions/TableFunctionS3.h @@ -27,14 +27,14 @@ public: protected: StoragePtr executeImpl( const ASTPtr & ast_function, - const Context &
context, + ContextPtr context, const std::string & table_name, ColumnsDescription cached_columns) const override; const char * getStorageTypeName() const override { return "S3"; } - ColumnsDescription getActualTableStructure(const Context & context) const override; - void parseArguments(const ASTPtr & ast_function, const Context & context) override; + ColumnsDescription getActualTableStructure(ContextPtr context) const override; + void parseArguments(const ASTPtr & ast_function, ContextPtr context) override; String filename; String format; diff --git a/src/TableFunctions/TableFunctionS3Cluster.cpp b/src/TableFunctions/TableFunctionS3Cluster.cpp new file mode 100644 index 00000000000..16f48c70608 --- /dev/null +++ b/src/TableFunctions/TableFunctionS3Cluster.cpp @@ -0,0 +1,149 @@ +#if !defined(ARCADIA_BUILD) +#include +#endif + +#if USE_AWS_S3 + +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "registerTableFunctions.h" + +#include +#include + +namespace DB +{ + +namespace ErrorCodes +{ + extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH; +} + + +void TableFunctionS3Cluster::parseArguments(const ASTPtr & ast_function, ContextPtr context) +{ + /// Parse args + ASTs & args_func = ast_function->children; + + if (args_func.size() != 1) + throw Exception("Table function '" + getName() + "' must have arguments.", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); + + ASTs & args = args_func.at(0)->children; + + const auto message = fmt::format( + "The signature of table function {} can be one of the following:\n" \ + " - cluster, url, format, structure\n" \ + " - cluster, url, format, structure, compression_method\n" \ + " - cluster, url, access_key_id, secret_access_key, format, structure\n" \ + " - cluster, url, access_key_id, secret_access_key, format, structure, compression_method", + getName()); + + if (args.size() < 4 || args.size() > 7) + throw Exception(message, ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); + + for (auto & arg : args) + arg = evaluateConstantExpressionOrIdentifierAsLiteral(arg, context); + + /// These two arguments always come first + cluster_name = args[0]->as().value.safeGet(); + filename = args[1]->as().value.safeGet(); + + /// Size -> argument indexes + static auto size_to_args = std::map> + { + {4, {{"format", 2}, {"structure", 3}}}, + {5, {{"format", 2}, {"structure", 3}, {"compression_method", 4}}}, + {6, {{"access_key_id", 2}, {"secret_access_key", 3}, {"format", 4}, {"structure", 5}}}, + {7, {{"access_key_id", 2}, {"secret_access_key", 3}, {"format", 4}, {"structure", 5}, {"compression_method", 6}}} + }; + + auto & args_to_idx = size_to_args[args.size()]; + + if (args_to_idx.contains("format")) + format = args[args_to_idx["format"]]->as().value.safeGet(); + + if (args_to_idx.contains("structure")) + structure = args[args_to_idx["structure"]]->as().value.safeGet(); + + if (args_to_idx.contains("compression_method")) + compression_method = args[args_to_idx["compression_method"]]->as().value.safeGet(); + + if (args_to_idx.contains("access_key_id")) + access_key_id = args[args_to_idx["access_key_id"]]->as().value.safeGet(); + + if (args_to_idx.contains("secret_access_key")) + secret_access_key = args[args_to_idx["secret_access_key"]]->as().value.safeGet(); +} + + +ColumnsDescription TableFunctionS3Cluster::getActualTableStructure(ContextPtr context) const +{ + return parseColumnsListFromString(structure, context); +} + +StoragePtr
TableFunctionS3Cluster::executeImpl( + const ASTPtr & /*function*/, ContextPtr context, + const std::string & table_name, ColumnsDescription /*cached_columns*/) const +{ + StoragePtr storage; + if (context->getClientInfo().query_kind == ClientInfo::QueryKind::SECONDARY_QUERY) + { + /// On a worker node this filename won't contain globs + Poco::URI uri (filename); + S3::URI s3_uri (uri); + /// These parameters are not actually used on the worker node + UInt64 s3_max_single_read_retries = context->getSettingsRef().s3_max_single_read_retries; + UInt64 min_upload_part_size = context->getSettingsRef().s3_min_upload_part_size; + UInt64 max_single_part_upload_size = context->getSettingsRef().s3_max_single_part_upload_size; + UInt64 max_connections = context->getSettingsRef().s3_max_connections; + storage = StorageS3::create( + s3_uri, access_key_id, secret_access_key, StorageID(getDatabaseName(), table_name), + format, + s3_max_single_read_retries, + min_upload_part_size, + max_single_part_upload_size, + max_connections, + getActualTableStructure(context), ConstraintsDescription{}, + context, compression_method, /*distributed_processing=*/true); + } + else + { + storage = StorageS3Cluster::create( + filename, access_key_id, secret_access_key, StorageID(getDatabaseName(), table_name), + cluster_name, format, context->getSettingsRef().s3_max_connections, + getActualTableStructure(context), ConstraintsDescription{}, + context, compression_method); + } + + storage->startup(); + + return storage; +} + + +void registerTableFunctionS3Cluster(TableFunctionFactory & factory) +{ + factory.registerFunction(); +} + + +} + +#endif diff --git a/src/TableFunctions/TableFunctionS3Cluster.h b/src/TableFunctions/TableFunctionS3Cluster.h new file mode 100644 index 00000000000..cc857725ce6 --- /dev/null +++ b/src/TableFunctions/TableFunctionS3Cluster.h @@ -0,0 +1,56 @@ +#pragma once + +#include + +#if USE_AWS_S3 + +#include + + +namespace DB +{ + +class Context; + +/** + * s3Cluster(cluster_name, source, [access_key_id, secret_access_key,] format, structure) + * A table function which allows processing many files from S3 on a specific cluster. + * On the initiator it creates connections to _all_ nodes in the cluster, expands the asterisks + * in the S3 file path and dispatches each file dynamically. + * On a worker node it asks the initiator for the next task and processes it. + * This is repeated until all tasks are finished.
+ */ +class TableFunctionS3Cluster : public ITableFunction +{ +public: + static constexpr auto name = "s3Cluster"; + std::string getName() const override + { + return name; + } + bool hasStaticStructure() const override { return true; } + +protected: + StoragePtr executeImpl( + const ASTPtr & ast_function, + ContextPtr context, + const std::string & table_name, + ColumnsDescription cached_columns) const override; + + const char * getStorageTypeName() const override { return "S3Cluster"; } + + ColumnsDescription getActualTableStructure(ContextPtr) const override; + void parseArguments(const ASTPtr &, ContextPtr) override; + + String cluster_name; + String filename; + String format; + String structure; + String access_key_id; + String secret_access_key; + String compression_method = "auto"; +}; + +} + +#endif diff --git a/src/TableFunctions/TableFunctionURL.cpp b/src/TableFunctions/TableFunctionURL.cpp index 1c0109e892b..c2acb3ee207 100644 --- a/src/TableFunctions/TableFunctionURL.cpp +++ b/src/TableFunctions/TableFunctionURL.cpp @@ -7,18 +7,35 @@ #include #include #include +#include namespace DB { StoragePtr TableFunctionURL::getStorage( - const String & source, const String & format_, const ColumnsDescription & columns, Context & global_context, + const String & source, const String & format_, const ColumnsDescription & columns, ContextPtr global_context, const std::string & table_name, const String & compression_method_) const { - Poco::URI uri(source); - return StorageURL::create(uri, StorageID(getDatabaseName(), table_name), - format_, std::nullopt /*format settings*/, columns, - ConstraintsDescription{}, global_context, compression_method_); + /// If url contains {1..k} or failover options with separator `|`, use a separate storage + if ((source.find('{') == std::string::npos || source.find('}') == std::string::npos) && source.find('|') == std::string::npos) + { + Poco::URI uri(source); + return StorageURL::create(uri, StorageID(getDatabaseName(), table_name), + format_, std::nullopt /*format settings*/, columns, + ConstraintsDescription{}, global_context, compression_method_); + } + else + { + return StorageExternalDistributed::create( + source, + StorageID(getDatabaseName(), table_name), + format_, + std::nullopt, + compression_method_, + columns, + ConstraintsDescription{}, + global_context); + } } void registerTableFunctionURL(TableFunctionFactory & factory) diff --git a/src/TableFunctions/TableFunctionURL.h b/src/TableFunctions/TableFunctionURL.h index 5eb027e2b8a..fde361e8bbb 100644 --- a/src/TableFunctions/TableFunctionURL.h +++ b/src/TableFunctions/TableFunctionURL.h @@ -21,7 +21,7 @@ public: private: StoragePtr getStorage( - const String & source, const String & format_, const ColumnsDescription & columns, Context & global_context, + const String & source, const String & format_, const ColumnsDescription & columns, ContextPtr global_context, const std::string & table_name, const String & compression_method_) const override; const char * getStorageTypeName() const override { return "URL"; } }; diff --git a/src/TableFunctions/TableFunctionValues.cpp b/src/TableFunctions/TableFunctionValues.cpp index 4127a30892f..c66ebe7322e 100644 --- a/src/TableFunctions/TableFunctionValues.cpp +++ b/src/TableFunctions/TableFunctionValues.cpp @@ -30,7 +30,7 @@ namespace ErrorCodes extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH; } -static void parseAndInsertValues(MutableColumns & res_columns, const ASTs & args, const Block & sample_block, const Context & context) +static void 
parseAndInsertValues(MutableColumns & res_columns, const ASTs & args, const Block & sample_block, ContextPtr context) { if (res_columns.size() == 1) /// Parsing arguments as Fields { @@ -68,7 +68,7 @@ static void parseAndInsertValues(MutableColumns & res_columns, const ASTs & args } } -void TableFunctionValues::parseArguments(const ASTPtr & ast_function, const Context & /*context*/) +void TableFunctionValues::parseArguments(const ASTPtr & ast_function, ContextPtr /*context*/) { ASTs & args_func = ast_function->children; @@ -93,12 +93,12 @@ void TableFunctionValues::parseArguments(const ASTPtr & ast_function, const Cont structure = args[0]->as().value.safeGet(); } -ColumnsDescription TableFunctionValues::getActualTableStructure(const Context & context) const +ColumnsDescription TableFunctionValues::getActualTableStructure(ContextPtr context) const { return parseColumnsListFromString(structure, context); } -StoragePtr TableFunctionValues::executeImpl(const ASTPtr & ast_function, const Context & context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const +StoragePtr TableFunctionValues::executeImpl(const ASTPtr & ast_function, ContextPtr context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const { auto columns = getActualTableStructure(context); diff --git a/src/TableFunctions/TableFunctionValues.h b/src/TableFunctions/TableFunctionValues.h index 549fa2de507..058f5f1d2ed 100644 --- a/src/TableFunctions/TableFunctionValues.h +++ b/src/TableFunctions/TableFunctionValues.h @@ -14,11 +14,11 @@ public: std::string getName() const override { return name; } bool hasStaticStructure() const override { return true; } private: - StoragePtr executeImpl(const ASTPtr & ast_function, const Context & context, const std::string & table_name, ColumnsDescription cached_columns) const override; + StoragePtr executeImpl(const ASTPtr & ast_function, ContextPtr context, const std::string & table_name, ColumnsDescription cached_columns) const override; const char * getStorageTypeName() const override { return "Values"; } - ColumnsDescription getActualTableStructure(const Context & context) const override; - void parseArguments(const ASTPtr & ast_function, const Context & context) override; + ColumnsDescription getActualTableStructure(ContextPtr context) const override; + void parseArguments(const ASTPtr & ast_function, ContextPtr context) override; String structure; }; diff --git a/src/TableFunctions/TableFunctionView.cpp b/src/TableFunctions/TableFunctionView.cpp index 62a833dabc4..3f51e0bbc95 100644 --- a/src/TableFunctions/TableFunctionView.cpp +++ b/src/TableFunctions/TableFunctionView.cpp @@ -15,7 +15,7 @@ namespace ErrorCodes extern const int BAD_ARGUMENTS; } -void TableFunctionView::parseArguments(const ASTPtr & ast_function, const Context & /*context*/) +void TableFunctionView::parseArguments(const ASTPtr & ast_function, ContextPtr /*context*/) { const auto * function = ast_function->as(); if (function) @@ -29,7 +29,7 @@ void TableFunctionView::parseArguments(const ASTPtr & ast_function, const Contex throw Exception("Table function '" + getName() + "' requires a query argument.", ErrorCodes::BAD_ARGUMENTS); } -ColumnsDescription TableFunctionView::getActualTableStructure(const Context & context) const +ColumnsDescription TableFunctionView::getActualTableStructure(ContextPtr context) const { assert(create.select); assert(create.children.size() == 1); @@ -39,7 +39,7 @@ ColumnsDescription TableFunctionView::getActualTableStructure(const Context & co } 
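The change repeated across every file in this section is const Context & -> ContextPtr. A minimal sketch of why a shared pointer passed by value helps here, assuming ContextPtr is a shared_ptr alias to the query context (consistent with the `context->` accesses and the removed const_casts in these hunks); Context and TableFunctionLike below are simplified stand-ins, not the real classes:

#include <iostream>
#include <memory>
#include <string>

// Assumed shape of the alias: shared ownership of an (effectively read-only) context.
struct Context
{
    std::string user_files_path = "/var/lib/clickhouse/user_files/";
    const std::string & getUserFilesPath() const { return user_files_path; }
};
using ContextPtr = std::shared_ptr<const Context>;

class TableFunctionLike
{
public:
    /// Taking ContextPtr by value lets the function keep the context alive for
    /// as long as it needs it; with `const Context &` a stored reference could
    /// dangle, and handing the context to components that kept it required const_cast.
    void parseArguments(ContextPtr context) { saved_context = std::move(context); }

    /// Member access goes through ->, hence the mechanical
    /// context.getX() -> context->getX() rewrites throughout the diff.
    void execute() const { std::cout << saved_context->getUserFilesPath() << '\n'; }

private:
    ContextPtr saved_context;
};

int main()
{
    TableFunctionLike f;
    {
        auto ctx = std::make_shared<const Context>();
        f.parseArguments(ctx);
    }  /// the original owner is gone, but the context is still alive
    f.execute();
}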
StoragePtr TableFunctionView::executeImpl( - const ASTPtr & /*ast_function*/, const Context & context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const + const ASTPtr & /*ast_function*/, ContextPtr context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const { auto columns = getActualTableStructure(context); auto res = StorageView::create(StorageID(getDatabaseName(), table_name), create, columns); diff --git a/src/TableFunctions/TableFunctionView.h b/src/TableFunctions/TableFunctionView.h index 0ed66ff712c..9ef634746eb 100644 --- a/src/TableFunctions/TableFunctionView.h +++ b/src/TableFunctions/TableFunctionView.h @@ -17,11 +17,11 @@ public: static constexpr auto name = "view"; std::string getName() const override { return name; } private: - StoragePtr executeImpl(const ASTPtr & ast_function, const Context & context, const String & table_name, ColumnsDescription cached_columns) const override; + StoragePtr executeImpl(const ASTPtr & ast_function, ContextPtr context, const String & table_name, ColumnsDescription cached_columns) const override; const char * getStorageTypeName() const override { return "View"; } - void parseArguments(const ASTPtr & ast_function, const Context & context) override; - ColumnsDescription getActualTableStructure(const Context & context) const override; + void parseArguments(const ASTPtr & ast_function, ContextPtr context) override; + ColumnsDescription getActualTableStructure(ContextPtr context) const override; ASTCreateQuery create; }; diff --git a/src/TableFunctions/TableFunctionZeros.cpp b/src/TableFunctions/TableFunctionZeros.cpp index 9b0c6c6e78b..9fd14eec4af 100644 --- a/src/TableFunctions/TableFunctionZeros.cpp +++ b/src/TableFunctions/TableFunctionZeros.cpp @@ -20,14 +20,14 @@ extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH; template -ColumnsDescription TableFunctionZeros::getActualTableStructure(const Context & /*context*/) const +ColumnsDescription TableFunctionZeros::getActualTableStructure(ContextPtr /*context*/) const { /// NOTE: https://bugs.llvm.org/show_bug.cgi?id=47418 return ColumnsDescription{{{"zero", std::make_shared()}}}; } template -StoragePtr TableFunctionZeros::executeImpl(const ASTPtr & ast_function, const Context & context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const +StoragePtr TableFunctionZeros::executeImpl(const ASTPtr & ast_function, ContextPtr context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const { if (const auto * function = ast_function->as()) { @@ -53,7 +53,7 @@ void registerTableFunctionZeros(TableFunctionFactory & factory) } template -UInt64 TableFunctionZeros::evaluateArgument(const Context & context, ASTPtr & argument) const +UInt64 TableFunctionZeros::evaluateArgument(ContextPtr context, ASTPtr & argument) const { return evaluateConstantExpressionOrIdentifierAsLiteral(argument, context)->as().value.safeGet(); } diff --git a/src/TableFunctions/TableFunctionZeros.h b/src/TableFunctions/TableFunctionZeros.h index 48a2d8019f6..0407eff2f78 100644 --- a/src/TableFunctions/TableFunctionZeros.h +++ b/src/TableFunctions/TableFunctionZeros.h @@ -19,12 +19,12 @@ public: std::string getName() const override { return name; } bool hasStaticStructure() const override { return true; } private: - StoragePtr executeImpl(const ASTPtr & ast_function, const Context & context, const std::string & table_name, ColumnsDescription cached_columns) const override; + StoragePtr executeImpl(const ASTPtr & ast_function, ContextPtr 
context, const std::string & table_name, ColumnsDescription cached_columns) const override; const char * getStorageTypeName() const override { return "SystemZeros"; } - UInt64 evaluateArgument(const Context & context, ASTPtr & argument) const; + UInt64 evaluateArgument(ContextPtr context, ASTPtr & argument) const; - ColumnsDescription getActualTableStructure(const Context & context) const override; + ColumnsDescription getActualTableStructure(ContextPtr context) const override; }; diff --git a/src/TableFunctions/parseColumnsListForTableFunction.cpp b/src/TableFunctions/parseColumnsListForTableFunction.cpp index 5221d96e086..08e80ef425a 100644 --- a/src/TableFunctions/parseColumnsListForTableFunction.cpp +++ b/src/TableFunctions/parseColumnsListForTableFunction.cpp @@ -14,10 +14,10 @@ namespace ErrorCodes extern const int LOGICAL_ERROR; } -ColumnsDescription parseColumnsListFromString(const std::string & structure, const Context & context) +ColumnsDescription parseColumnsListFromString(const std::string & structure, ContextPtr context) { ParserColumnDeclarationList parser; - const Settings & settings = context.getSettingsRef(); + const Settings & settings = context->getSettingsRef(); ASTPtr columns_list_raw = parseQuery(parser, structure, "columns declaration list", settings.max_query_size, settings.max_parser_depth); @@ -25,7 +25,7 @@ ColumnsDescription parseColumnsListFromString(const std::string & structure, con if (!columns_list) throw Exception("Could not cast AST to ASTExpressionList", ErrorCodes::LOGICAL_ERROR); - return InterpreterCreateQuery::getColumnsDescription(*columns_list, context, !settings.allow_suspicious_codecs); + return InterpreterCreateQuery::getColumnsDescription(*columns_list, context, false); } } diff --git a/src/TableFunctions/parseColumnsListForTableFunction.h b/src/TableFunctions/parseColumnsListForTableFunction.h index d077d308e37..e0130a2618d 100644 --- a/src/TableFunctions/parseColumnsListForTableFunction.h +++ b/src/TableFunctions/parseColumnsListForTableFunction.h @@ -10,6 +10,6 @@ namespace DB class Context; /// Parses a common argument for table functions such as table structure given in string -ColumnsDescription parseColumnsListFromString(const std::string & structure, const Context & context); +ColumnsDescription parseColumnsListFromString(const std::string & structure, ContextPtr context); } diff --git a/src/TableFunctions/registerTableFunctions.cpp b/src/TableFunctions/registerTableFunctions.cpp index a6640bbb0e9..6cf40c4f090 100644 --- a/src/TableFunctions/registerTableFunctions.cpp +++ b/src/TableFunctions/registerTableFunctions.cpp @@ -21,6 +21,7 @@ void registerTableFunctions() #if USE_AWS_S3 registerTableFunctionS3(factory); + registerTableFunctionS3Cluster(factory); registerTableFunctionCOS(factory); #endif @@ -40,6 +41,8 @@ void registerTableFunctions() #if USE_LIBPQXX registerTableFunctionPostgreSQL(factory); #endif + + registerTableFunctionDictionary(factory); } } diff --git a/src/TableFunctions/registerTableFunctions.h b/src/TableFunctions/registerTableFunctions.h index 7e9a8ab5b61..c49fafc5f86 100644 --- a/src/TableFunctions/registerTableFunctions.h +++ b/src/TableFunctions/registerTableFunctions.h @@ -21,6 +21,7 @@ void registerTableFunctionGenerate(TableFunctionFactory & factory); #if USE_AWS_S3 void registerTableFunctionS3(TableFunctionFactory & factory); +void registerTableFunctionS3Cluster(TableFunctionFactory & factory); void registerTableFunctionCOS(TableFunctionFactory & factory); #endif @@ -41,6 +42,8 @@ void 
registerTableFunctionMySQL(TableFunctionFactory & factory); void registerTableFunctionPostgreSQL(TableFunctionFactory & factory); #endif +void registerTableFunctionDictionary(TableFunctionFactory & factory); + void registerTableFunctions(); } diff --git a/src/TableFunctions/ya.make b/src/TableFunctions/ya.make index 7bcf5fc53b3..f50e345f2d8 100644 --- a/src/TableFunctions/ya.make +++ b/src/TableFunctions/ya.make @@ -12,6 +12,7 @@ SRCS( ITableFunction.cpp ITableFunctionFileLike.cpp ITableFunctionXDBC.cpp + TableFunctionDictionary.cpp TableFunctionFactory.cpp TableFunctionFile.cpp TableFunctionGenerateRandom.cpp diff --git a/src/TableFunctions/ya.make.in b/src/TableFunctions/ya.make.in index 0d5aa172cb4..6f351b3f646 100644 --- a/src/TableFunctions/ya.make.in +++ b/src/TableFunctions/ya.make.in @@ -8,7 +8,7 @@ PEERDIR( SRCS( - + ) END() diff --git a/src/ya.make b/src/ya.make index 5361c8a5695..6537f67d66f 100644 --- a/src/ya.make +++ b/src/ya.make @@ -5,6 +5,7 @@ LIBRARY() PEERDIR( clickhouse/src/Access clickhouse/src/AggregateFunctions + clickhouse/src/Bridge clickhouse/src/Client clickhouse/src/Columns clickhouse/src/Common diff --git a/tests/ci/ci_config.json b/tests/ci/ci_config.json index 0e467319285..54f4e19bd94 100644 --- a/tests/ci/ci_config.json +++ b/tests/ci/ci_config.json @@ -71,16 +71,6 @@ "tidy": "disable", "with_coverage": false }, - { - "compiler": "clang-10", - "build-type": "", - "sanitizer": "", - "package-type": "deb", - "bundled": "bundled", - "splitted": "unsplitted", - "tidy": "disable", - "with_coverage": false - }, { "compiler": "clang-11", "build-type": "debug", @@ -92,7 +82,7 @@ "with_coverage": false }, { - "compiler": "gcc-9", + "compiler": "gcc-10", "build-type": "", "sanitizer": "", "package-type": "deb", @@ -359,7 +349,7 @@ }, "Functional stateless tests (unbundled)": { "required_build_properties": { - "compiler": "gcc-9", + "compiler": "gcc-10", "package_type": "deb", "build_type": "relwithdebuginfo", "sanitizer": "none", diff --git a/tests/clickhouse-test b/tests/clickhouse-test index 64a93416c41..5cbf41102c7 100755 --- a/tests/clickhouse-test +++ b/tests/clickhouse-test @@ -1,10 +1,13 @@ #!/usr/bin/env python3 +import shutil import sys import os import os.path +import signal import re import json +import copy import traceback from argparse import ArgumentParser @@ -34,6 +37,17 @@ MESSAGES_TO_RETRY = [ "ConnectionPoolWithFailover: Connection failed at try", ] +class Terminated(KeyboardInterrupt): + pass +def signal_handler(sig, frame): + raise Terminated(f'Terminated with {sig} signal') + +def stop_tests(): + # send signal to all processes in group to avoid hung check triggering + # (to avoid terminating clickhouse-test itself, the signal should be ignored) + signal.signal(signal.SIGTERM, signal.SIG_IGN) + os.killpg(os.getpgid(os.getpid()), signal.SIGTERM) + signal.signal(signal.SIGTERM, signal.SIG_DFL) def json_minify(string): """ @@ -107,18 +121,21 @@ def remove_control_characters(s): def get_db_engine(args, database_name): if args.replicated_database: - return " ENGINE=Replicated('/test/clickhouse/db/{}', 's1', 'r1')".format(database_name) + return " ON CLUSTER test_cluster_database_replicated ENGINE=Replicated('/test/clickhouse/db/{}', '{{shard}}', '{{replica}}')".format(database_name) if args.db_engine: return " ENGINE=" + args.db_engine return "" # Will use default engine -def run_single_test(args, ext, server_logs_level, client_options, case_file, stdout_file, stderr_file): - # print(client_options) +def configure_testcase_args(args, case_file, 
suite_tmp_dir): + testcase_args = copy.deepcopy(args) - start_time = datetime.now() - if args.database: - database = args.database + testcase_args.testcase_start_time = datetime.now() + testcase_args.testcase_client = f"{testcase_args.client} --log_comment='{case_file}'" + + if testcase_args.database: + database = testcase_args.database os.environ.setdefault("CLICKHOUSE_DATABASE", database) + os.environ.setdefault("CLICKHOUSE_TMP", suite_tmp_dir) else: # If --database is not specified, we will create temporary database with unique name @@ -128,20 +145,35 @@ def run_single_test(args, ext, server_logs_level, client_options, case_file, std return ''.join(random.choice(alphabet) for _ in range(length)) database = 'test_{suffix}'.format(suffix=random_str()) - clickhouse_proc_create = Popen(shlex.split(args.client), stdin=PIPE, stdout=PIPE, stderr=PIPE, universal_newlines=True) + clickhouse_proc_create = Popen(shlex.split(testcase_args.testcase_client), stdin=PIPE, stdout=PIPE, stderr=None, universal_newlines=True) try: - clickhouse_proc_create.communicate(("CREATE DATABASE " + database + get_db_engine(args, database)), timeout=args.timeout) + clickhouse_proc_create.communicate(("CREATE DATABASE " + database + get_db_engine(testcase_args, database)), timeout=testcase_args.timeout) except TimeoutExpired: - total_time = (datetime.now() - start_time).total_seconds() + total_time = (datetime.now() - testcase_args.testcase_start_time).total_seconds() return clickhouse_proc_create, "", "Timeout creating database {} before test".format(database), total_time os.environ["CLICKHOUSE_DATABASE"] = database + # Set temporary directory to match the randomly generated database, + # because .sh tests also use it for temporary files and we want to avoid + # collisions. + testcase_args.test_tmp_dir = os.path.join(suite_tmp_dir, database) + os.mkdir(testcase_args.test_tmp_dir) + os.environ.setdefault("CLICKHOUSE_TMP", testcase_args.test_tmp_dir) + + testcase_args.testcase_database = database + + return testcase_args + +def run_single_test(args, ext, server_logs_level, client_options, case_file, stdout_file, stderr_file): + client = args.client + start_time = args.testcase_start_time + database = args.testcase_database # This is for .sh tests - os.environ.setdefault("CLICKHOUSE_LOG_COMMENT", case_file) + os.environ["CLICKHOUSE_LOG_COMMENT"] = case_file params = { - 'client': args.client + ' --database=' + database, + 'client': client + ' --database=' + database, 'logs_level': server_logs_level, 'options': client_options, 'test': case_file, @@ -152,7 +184,7 @@ def run_single_test(args, ext, server_logs_level, client_options, case_file, std pattern = '{test} > {stdout} 2> {stderr}' if ext == '.sql': - pattern = "{client} --send_logs_level={logs_level} --testmode --multiquery {options} --log_comment='{test}' < " + pattern + pattern = "{client} --send_logs_level={logs_level} --testmode --multiquery {options} < " + pattern command = pattern.format(**params) @@ -169,10 +201,13 @@ def run_single_test(args, ext, server_logs_level, client_options, case_file, std need_drop_database = not maybe_passed if need_drop_database: - clickhouse_proc_create = Popen(shlex.split(args.client), stdin=PIPE, stdout=PIPE, stderr=PIPE, universal_newlines=True) - seconds_left = max(args.timeout - (datetime.now() - start_time).total_seconds(), 10) + clickhouse_proc_create = Popen(shlex.split(client), stdin=PIPE, stdout=PIPE, stderr=None, universal_newlines=True) + seconds_left = max(args.timeout - (datetime.now() - start_time).total_seconds(), 
20) try: - clickhouse_proc_create.communicate(("DROP DATABASE " + database), timeout=seconds_left) + drop_database_query = "DROP DATABASE " + database + if args.replicated_database: + drop_database_query += " ON CLUSTER test_cluster_database_replicated" + clickhouse_proc_create.communicate((drop_database_query), timeout=seconds_left) except TimeoutExpired: # kill test process because it can also hung if proc.returncode is None: @@ -185,12 +220,17 @@ def run_single_test(args, ext, server_logs_level, client_options, case_file, std total_time = (datetime.now() - start_time).total_seconds() return clickhouse_proc_create, "", "Timeout dropping database {} after test".format(database), total_time + shutil.rmtree(args.test_tmp_dir) + total_time = (datetime.now() - start_time).total_seconds() # Normalize randomized database names in stdout, stderr files. os.system("LC_ALL=C sed -i -e 's/{test_db}/default/g' {file}".format(test_db=database, file=stdout_file)) if not args.show_db_name: os.system("LC_ALL=C sed -i -e 's/{test_db}/default/g' {file}".format(test_db=database, file=stderr_file)) + if args.replicated_database: + os.system("LC_ALL=C sed -i -e 's|/auto_{{shard}}||g' {file}".format(file=stdout_file)) + os.system("LC_ALL=C sed -i -e 's|auto_{{replica}}||g' {file}".format(file=stdout_file)) stdout = open(stdout_file, 'rb').read() if os.path.exists(stdout_file) else b'' stdout = str(stdout, errors='replace', encoding='utf-8') @@ -206,8 +246,12 @@ def need_retry(stderr): def get_processlist(args): try: + query = b"SHOW PROCESSLIST FORMAT Vertical" + if args.replicated_database: + query = b"SELECT materialize((hostName(), tcpPort())) as host, * " \ + b"FROM clusterAllReplicas('r', system.processes) WHERE query NOT LIKE '%system.processes%' FORMAT Vertical" clickhouse_proc = Popen(shlex.split(args.client), stdin=PIPE, stdout=PIPE, stderr=PIPE) - (stdout, _) = clickhouse_proc.communicate((b"SHOW PROCESSLIST FORMAT Vertical"), timeout=10) + (stdout, _) = clickhouse_proc.communicate((query), timeout=20) return False, stdout.decode('utf-8') except Exception as ex: print("Exception", ex) @@ -216,11 +260,12 @@ def get_processlist(args): # collect server stacktraces using gdb def get_stacktraces_from_gdb(server_pid): - cmd = "gdb -batch -ex 'thread apply all backtrace' -p {}".format(server_pid) try: + cmd = "gdb -batch -ex 'thread apply all backtrace' -p {}".format(server_pid) return subprocess.check_output(cmd, shell=True).decode('utf-8') except Exception as ex: - return "Error occured while receiving stack traces from gdb: {}".format(str(ex)) + print("Error occurred while receiving stack traces from gdb: {}".format(str(ex))) + return None # collect server stacktraces from system.stack_trace table @@ -230,21 +275,26 @@ def get_stacktraces_from_clickhouse(client): return subprocess.check_output("{} --allow_introspection_functions=1 --query " "\"SELECT arrayStringConcat(arrayMap(x, y -> concat(x, ': ', y), arrayMap(x -> addressToLine(x), trace), " "arrayMap(x -> demangle(addressToSymbol(x)), trace)), '\n') as trace " - "FROM system.stack_trace format Vertical\"".format(client), shell=True).decode('utf-8') + "FROM system.stack_trace format Vertical\"".format(client), shell=True, stderr=subprocess.STDOUT).decode('utf-8') except Exception as ex: - return "Error occured while receiving stack traces from client: {}".format(str(ex)) + print("Error occurred while receiving stack traces from client: {}".format(str(ex))) + return None def get_server_pid(server_tcp_port): - cmd = "lsof -i tcp:{port} -s tcp:LISTEN -Fp 
| awk '/^p[0-9]+$/{{print substr($0, 2)}}'".format(port=server_tcp_port) - try: - output = subprocess.check_output(cmd, shell=True) - if output: - return int(output) - else: - return None # server dead - except Exception: - return None + # lsof does not work in stress tests for some reason + cmd_lsof = "lsof -i tcp:{port} -s tcp:LISTEN -Fp | awk '/^p[0-9]+$/{{print substr($0, 2)}}'".format(port=server_tcp_port) + cmd_pidof = "pidof -s clickhouse-server" + commands = [cmd_lsof, cmd_pidof] + output = None + for cmd in commands: + try: + output = subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT, universal_newlines=True) + if output: + return int(output) + except Exception as e: + print("Cannot get server pid with {}, got {}: {}".format(cmd, output, e)) + return None # most likely server dead def colored(text, args, color=None, on_color=None, attrs=None): @@ -279,6 +329,9 @@ def run_tests_array(all_tests_with_params): failures_total = 0 failures = 0 failures_chain = 0 + start_time = datetime.now() + + is_concurrent = multiprocessing.current_process().name != "MainProcess" client_options = get_additional_client_options(args) @@ -289,14 +342,16 @@ def run_tests_array(all_tests_with_params): return '' if all_tests: - print("\nRunning {} {} tests.".format(len(all_tests), suite) + "\n") + print(f"\nRunning {len(all_tests)} {suite} tests ({multiprocessing.current_process().name}).\n") for case in all_tests: if SERVER_DIED: + stop_tests() break if stop_time and time() > stop_time: print("\nStop tests run because global time limit is exceeded.\n") + stop_tests() break case_file = os.path.join(suite_dir, case) @@ -304,7 +359,6 @@ def run_tests_array(all_tests_with_params): try: status = '' - is_concurrent = multiprocessing.current_process().name != "MainProcess" if not is_concurrent: sys.stdout.flush() sys.stdout.write("{0:72}".format(name + ": ")) @@ -346,7 +400,7 @@ def run_tests_array(all_tests_with_params): clickhouse_proc = Popen(shlex.split(args.client), stdin=PIPE, stdout=PIPE, stderr=PIPE, universal_newlines=True) failed_to_check = False try: - clickhouse_proc.communicate(("SELECT 'Running test {suite}/{case} from pid={pid}';".format(pid = os.getpid(), case = case, suite = suite)), timeout=10) + clickhouse_proc.communicate(("SELECT 'Running test {suite}/{case} from pid={pid}';".format(pid = os.getpid(), case = case, suite = suite)), timeout=20) except: failed_to_check = True @@ -354,6 +408,7 @@ def run_tests_array(all_tests_with_params): failures += 1 print("Server does not respond to health check") SERVER_DIED = True + stop_tests() break file_suffix = ('.' 
+ str(os.getpid())) if is_concurrent and args.test_runs > 1 else '' @@ -361,7 +416,10 @@ def run_tests_array(all_tests_with_params): stdout_file = os.path.join(suite_tmp_dir, name) + file_suffix + '.stdout' stderr_file = os.path.join(suite_tmp_dir, name) + file_suffix + '.stderr' - proc, stdout, stderr, total_time = run_single_test(args, ext, server_logs_level, client_options, case_file, stdout_file, stderr_file) + + testcase_args = configure_testcase_args(args, case_file, suite_tmp_dir) + + proc, stdout, stderr, total_time = run_single_test(testcase_args, ext, server_logs_level, client_options, case_file, stdout_file, stderr_file) if proc.returncode is None: try: @@ -376,10 +434,11 @@ def run_tests_array(all_tests_with_params): status += " - Timeout!\n" if stderr: status += stderr + status += 'Database: ' + testcase_args.testcase_database else: counter = 1 - while proc.returncode != 0 and need_retry(stderr): - proc, stdout, stderr, total_time = run_single_test(args, ext, server_logs_level, client_options, case_file, stdout_file, stderr_file) + while need_retry(stderr): + proc, stdout, stderr, total_time = run_single_test(testcase_args, ext, server_logs_level, client_options, case_file, stdout_file, stderr_file) sleep(2**counter) counter += 1 if counter > 6: @@ -399,7 +458,7 @@ def run_tests_array(all_tests_with_params): if ' ' in stderr: SERVER_DIED = True - if args.stop and ('Connection refused' in stderr or 'Attempt to read after eof' in stderr) and not 'Received exception from server' in stderr: + if testcase_args.stop and ('Connection refused' in stderr or 'Attempt to read after eof' in stderr) and not 'Received exception from server' in stderr: SERVER_DIED = True if os.path.isfile(stdout_file): @@ -408,6 +467,8 @@ def run_tests_array(all_tests_with_params): open(stdout_file).read().split('\n')[:100]) status += '\n' + status += 'Database: ' + testcase_args.testcase_database + elif stderr: failures += 1 failures_chain += 1 @@ -415,6 +476,7 @@ def run_tests_array(all_tests_with_params): status += print_test_time(total_time) status += " - having stderror:\n{}\n".format( '\n'.join(stderr.split('\n')[:100])) + status += 'Database: ' + testcase_args.testcase_database elif 'Exception' in stdout: failures += 1 failures_chain += 1 @@ -422,27 +484,31 @@ def run_tests_array(all_tests_with_params): status += print_test_time(total_time) status += " - having exception:\n{}\n".format( '\n'.join(stdout.split('\n')[:100])) + status += 'Database: ' + testcase_args.testcase_database elif not os.path.isfile(reference_file): status += MSG_UNKNOWN status += print_test_time(total_time) status += " - no reference file\n" + status += 'Database: ' + testcase_args.testcase_database else: result_is_different = subprocess.call(['diff', '-q', reference_file, stdout_file], stdout=PIPE) if result_is_different: - diff = Popen(['diff', '-U', str(args.unified), reference_file, stdout_file], stdout=PIPE, universal_newlines=True).communicate()[0] + diff = Popen(['diff', '-U', str(testcase_args.unified), reference_file, stdout_file], stdout=PIPE, universal_newlines=True).communicate()[0] failures += 1 status += MSG_FAIL status += print_test_time(total_time) status += " - result differs with reference:\n{}\n".format(diff) + status += 'Database: ' + testcase_args.testcase_database else: - if args.test_runs > 1 and total_time > 30 and 'long' not in name: + if testcase_args.test_runs > 1 and total_time > 30 and 'long' not in name: # We're in Flaky Check mode, check the run time as well while we're at it. 
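The retry loop above (rewritten from `while proc.returncode != 0 and need_retry(stderr)` to `while need_retry(stderr)`) re-runs a test whose stderr matches one of MESSAGES_TO_RETRY, with exponential backoff and at most six retries. A standalone sketch of the same pattern, with hypothetical `attempt` and `should_retry` callables standing in for `run_single_test` and `need_retry`:

```python
from time import sleep

def run_with_backoff(attempt, should_retry, max_retries=6):
    """Re-run `attempt` while `should_retry` says the failure is
    transient, sleeping 2**n seconds between attempts (2, 4, ... 64)."""
    result = attempt()
    counter = 1
    while should_retry(result):
        result = attempt()
        sleep(2 ** counter)
        counter += 1
        if counter > max_retries:
            break
    return result
```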
failures += 1 failures_chain += 1 status += MSG_FAIL status += print_test_time(total_time) - status += " - Long test not marked as 'long'" + status += " - Test runs too long (> 30s). Make it faster.\n" + status += 'Database: ' + testcase_args.testcase_database else: passed_total += 1 failures_chain = 0 @@ -461,6 +527,7 @@ def run_tests_array(all_tests_with_params): sys.stdout.flush() except KeyboardInterrupt as e: print(colored("Break tests execution", args, "red")) + stop_tests() raise e except: exc_type, exc_value, tb = sys.exc_info() @@ -468,17 +535,24 @@ def run_tests_array(all_tests_with_params): print("{0} - Test internal error: {1}\n{2}\n{3}".format(MSG_FAIL, exc_type.__name__, exc_value, "\n".join(traceback.format_tb(tb, 10)))) if failures_chain >= 20: + stop_tests() break failures_total = failures_total + failures if failures_total > 0: - print(colored("\nHaving {failures_total} errors! {passed_total} tests passed. {skipped_total} tests skipped.".format( - passed_total = passed_total, skipped_total = skipped_total, failures_total = failures_total), args, "red", attrs=["bold"])) + print(colored(f"\nHaving {failures_total} errors! {passed_total} tests passed." + f" {skipped_total} tests skipped. {(datetime.now() - start_time).total_seconds():.2f} s elapsed" + f' ({multiprocessing.current_process().name}).', + args, "red", attrs=["bold"])) exit_code = 1 else: - print(colored("\n{passed_total} tests passed. {skipped_total} tests skipped.".format( - passed_total = passed_total, skipped_total = skipped_total), args, "green", attrs=["bold"])) + print(colored(f"\n{passed_total} tests passed. {skipped_total} tests skipped." + f" {(datetime.now() - start_time).total_seconds():.2f} s elapsed" + f' ({multiprocessing.current_process().name}).', + args, "green", attrs=["bold"])) + + sys.stdout.flush() server_logs_level = "warning" @@ -643,7 +717,6 @@ def main(args): os.environ.setdefault("CLICKHOUSE_CONFIG", args.configserver) if args.configclient: os.environ.setdefault("CLICKHOUSE_CONFIG_CLIENT", args.configclient) - os.environ.setdefault("CLICKHOUSE_TMP", tmp_dir) # Force to print server warnings in stderr # Shell scripts could change logging level @@ -671,10 +744,10 @@ def main(args): args.shard = False if args.database and args.database != "test": - clickhouse_proc_create = Popen(shlex.split(args.client), stdin=PIPE, stdout=PIPE, stderr=PIPE, universal_newlines=True) + clickhouse_proc_create = Popen(shlex.split(args.client), stdin=PIPE, stdout=PIPE, stderr=None, universal_newlines=True) clickhouse_proc_create.communicate(("CREATE DATABASE IF NOT EXISTS " + args.database + get_db_engine(args, args.database))) - clickhouse_proc_create = Popen(shlex.split(args.client), stdin=PIPE, stdout=PIPE, stderr=PIPE, universal_newlines=True) + clickhouse_proc_create = Popen(shlex.split(args.client), stdin=PIPE, stdout=PIPE, stderr=None, universal_newlines=True) clickhouse_proc_create.communicate(("CREATE DATABASE IF NOT EXISTS test" + get_db_engine(args, 'test'))) def is_test_from_dir(suite_dir, case): @@ -747,7 +820,7 @@ def main(args): all_tests = os.listdir(suite_dir) all_tests = [case for case in all_tests if is_test_from_dir(suite_dir, case)] if args.test: - all_tests = [t for t in all_tests if any([re.search(r, t) for r in args.test])] + all_tests = [t for t in all_tests if any(re.search(r, t) for r in args.test)] all_tests = all_tests * args.test_runs all_tests.sort(key=key_func) @@ -774,7 +847,8 @@ def main(args): if jobs > run_total: run_total = jobs - batch_size = len(parallel_tests) // jobs + 
# Create two batches per process for more uniform execution time. + batch_size = max(1, len(parallel_tests) // (jobs * 2)) parallel_tests_array = [] for i in range(0, len(parallel_tests), batch_size): parallel_tests_array.append((parallel_tests[i:i+batch_size], suite, suite_dir, suite_tmp_dir)) @@ -806,21 +880,26 @@ def main(args): clickhouse_tcp_port = os.getenv("CLICKHOUSE_PORT_TCP", '9000') server_pid = get_server_pid(clickhouse_tcp_port) + bt = None if server_pid: print("\nLocated ClickHouse server process {} listening at TCP port {}".format(server_pid, clickhouse_tcp_port)) - - # It does not work in Sandbox - #print("\nCollecting stacktraces from system.stacktraces table:") - #print(get_stacktraces_from_clickhouse(args.client)) - print("\nCollecting stacktraces from all running threads with gdb:") - print(get_stacktraces_from_gdb(server_pid)) - else: + bt = get_stacktraces_from_gdb(server_pid) + if len(bt) < 1000: + print("Got suspiciously small stacktraces: ", bt) + bt = None + if bt is None: + print("\nCollecting stacktraces from system.stacktraces table:") + bt = get_stacktraces_from_clickhouse(args.client) + if bt is None: print( colored( "\nUnable to locate ClickHouse server process listening at TCP port {}. " "It must have crashed or exited prematurely!".format(clickhouse_tcp_port), args, "red", attrs=["bold"])) + else: + print(bt) + exit_code = 1 else: @@ -829,6 +908,8 @@ def main(args): if total_tests_run == 0: print("No tests were run.") sys.exit(1) + else: + print("All tests have finished.") sys.exit(exit_code) @@ -895,6 +976,14 @@ def collect_sequential_list(skip_list_path): if __name__ == '__main__': + # Move to a new process group and kill it at exit so that we don't have any + # infinite tests processes left + # (new process group is required to avoid killing some parent processes) + os.setpgid(0, 0) + signal.signal(signal.SIGTERM, signal_handler) + signal.signal(signal.SIGINT, signal_handler) + signal.signal(signal.SIGHUP, signal_handler) + parser=ArgumentParser(description='ClickHouse functional tests') parser.add_argument('-q', '--queries', help='Path to queries dir') parser.add_argument('--tmp', help='Path to tmp dir') diff --git a/tests/config/config.d/clusters.xml b/tests/config/config.d/clusters.xml index c0babf0ff89..28d3ae03745 100644 --- a/tests/config/config.d/clusters.xml +++ b/tests/config/config.d/clusters.xml @@ -15,6 +15,32 @@ 9000 - + + + + + 127.0.0.1 + 9000 + + + 127.0.0.2 + 9000 + + + + + + + shard_0 + localhost + 9000 + + + shard_1 + localhost + 9000 + + + - \ No newline at end of file + diff --git a/tests/config/config.d/database_replicated.xml b/tests/config/config.d/database_replicated.xml new file mode 100644 index 00000000000..9a3b4d68ea6 --- /dev/null +++ b/tests/config/config.d/database_replicated.xml @@ -0,0 +1,82 @@ + + + + localhost + 9181 + + + localhost + 19181 + + + localhost + 29181 + + + + + 9181 + 1 + + + 10000 + 30000 + 1000 + 2000 + 4000 + trace + false + + 1000000000000000 + + + + + 1 + localhost + 44444 + true + 3 + + + 2 + localhost + 44445 + true + true + 2 + + + 3 + localhost + 44446 + true + true + 1 + + + + + + + + + localhost + 9000 + + + localhost + 19000 + + + + + localhost + 29000 + + + + + + <_functional_tests_helper_database_replicated_replace_args_macros>1 + diff --git a/tests/config/config.d/test_keeper_port.xml b/tests/config/config.d/keeper_port.xml similarity index 75% rename from tests/config/config.d/test_keeper_port.xml rename to tests/config/config.d/keeper_port.xml index 438ecef60ea..b21df47bc85 100644 --- 
a/tests/config/config.d/test_keeper_port.xml +++ b/tests/config/config.d/keeper_port.xml @@ -1,5 +1,5 @@ - + 9181 1 @@ -8,6 +8,8 @@ 30000 false 60000 + + 1000000000000000 @@ -17,5 +19,5 @@ 44444 - + diff --git a/tests/config/install.sh b/tests/config/install.sh index 1fca2b11e04..7e01860e241 100755 --- a/tests/config/install.sh +++ b/tests/config/install.sh @@ -29,7 +29,7 @@ ln -sf $SRC_PATH/config.d/graphite.xml $DEST_SERVER_PATH/config.d/ ln -sf $SRC_PATH/config.d/database_atomic.xml $DEST_SERVER_PATH/config.d/ ln -sf $SRC_PATH/config.d/max_concurrent_queries.xml $DEST_SERVER_PATH/config.d/ ln -sf $SRC_PATH/config.d/test_cluster_with_incorrect_pw.xml $DEST_SERVER_PATH/config.d/ -ln -sf $SRC_PATH/config.d/test_keeper_port.xml $DEST_SERVER_PATH/config.d/ +ln -sf $SRC_PATH/config.d/keeper_port.xml $DEST_SERVER_PATH/config.d/ ln -sf $SRC_PATH/config.d/logging_no_rotate.xml $DEST_SERVER_PATH/config.d/ ln -sf $SRC_PATH/config.d/tcp_with_proxy.xml $DEST_SERVER_PATH/config.d/ ln -sf $SRC_PATH/config.d/top_level_domains_lists.xml $DEST_SERVER_PATH/config.d/ @@ -65,6 +65,31 @@ if [[ -n "$USE_DATABASE_ORDINARY" ]] && [[ "$USE_DATABASE_ORDINARY" -eq 1 ]]; th fi if [[ -n "$USE_DATABASE_REPLICATED" ]] && [[ "$USE_DATABASE_REPLICATED" -eq 1 ]]; then ln -sf $SRC_PATH/users.d/database_replicated.xml $DEST_SERVER_PATH/users.d/ + ln -sf $SRC_PATH/config.d/database_replicated.xml $DEST_SERVER_PATH/config.d/ + rm /etc/clickhouse-server/config.d/zookeeper.xml + rm /etc/clickhouse-server/config.d/keeper_port.xml + + # There is a bug in config reloading, so we cannot override macros using --macros.replica r2 + # And we have to copy configs... + mkdir -p /etc/clickhouse-server1 + mkdir -p /etc/clickhouse-server2 + chown clickhouse /etc/clickhouse-server1 + chown clickhouse /etc/clickhouse-server2 + chgrp clickhouse /etc/clickhouse-server1 + chgrp clickhouse /etc/clickhouse-server2 + sudo -u clickhouse cp -r /etc/clickhouse-server/* /etc/clickhouse-server1 + sudo -u clickhouse cp -r /etc/clickhouse-server/* /etc/clickhouse-server2 + rm /etc/clickhouse-server1/config.d/macros.xml + rm /etc/clickhouse-server2/config.d/macros.xml + sudo -u clickhouse cat /etc/clickhouse-server/config.d/macros.xml | sed "s|r1|r2|" > /etc/clickhouse-server1/config.d/macros.xml + sudo -u clickhouse cat /etc/clickhouse-server/config.d/macros.xml | sed "s|s1|s2|" > /etc/clickhouse-server2/config.d/macros.xml + + sudo mkdir -p /var/lib/clickhouse1 + sudo mkdir -p /var/lib/clickhouse2 + sudo chown clickhouse /var/lib/clickhouse1 + sudo chown clickhouse /var/lib/clickhouse2 + sudo chgrp clickhouse /var/lib/clickhouse1 + sudo chgrp clickhouse /var/lib/clickhouse2 fi ln -sf $SRC_PATH/client_config.xml $DEST_CLIENT_PATH/config.xml diff --git a/tests/config/users.d/database_replicated.xml b/tests/config/users.d/database_replicated.xml index 23801d00154..903b8a64e22 100644 --- a/tests/config/users.d/database_replicated.xml +++ b/tests/config/users.d/database_replicated.xml @@ -2,9 +2,11 @@ 1 - 0 + none 30 30 + 1 + 2 diff --git a/tests/fuzz/ast.dict b/tests/fuzz/ast.dict index 8327f276b31..7befb36c840 100644 --- a/tests/fuzz/ast.dict +++ b/tests/fuzz/ast.dict @@ -156,6 +156,7 @@ "extractURLParameterNames" "extractURLParameters" "FETCH PARTITION" +"FETCH PART" "FINAL" "FIRST" "firstSignificantSubdomain" diff --git a/tests/integration/README.md b/tests/integration/README.md index cdfb6b1a70a..241d798f044 100644 --- a/tests/integration/README.md +++ b/tests/integration/README.md @@ -12,7 +12,7 @@ You must install latest Docker from 
https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/#set-up-the-repository Don't use Docker from your system repository. -* [pip](https://pypi.python.org/pypi/pip) and `libpq-dev`. To install: `sudo apt-get install python3-pip libpq-dev zlib1g-dev libcrypto++-dev libssl-dev` +* [pip](https://pypi.python.org/pypi/pip) and `libpq-dev`. To install: `sudo apt-get install python3-pip libpq-dev zlib1g-dev libcrypto++-dev libssl-dev libkrb5-dev` * [py.test](https://docs.pytest.org/) testing framework. To install: `sudo -H pip install pytest` * [docker-compose](https://docs.docker.com/compose/) and additional python libraries. To install: @@ -39,7 +39,8 @@ sudo -H pip install \ redis \ tzlocal \ urllib3 \ - requests-kerberos + requests-kerberos \ + dict2xml ``` (highly not recommended) If you really want to use OS packages on modern debian/ubuntu instead of "pip": `sudo apt install -y docker docker-compose python3-pytest python3-dicttoxml python3-docker python3-pymysql python3-pymongo python3-tzlocal python3-kazoo python3-psycopg2 kafka-python python3-pytest-timeout python3-minio` @@ -51,11 +52,10 @@ To check, that you have access to Docker, run `docker ps`. Run the tests with the `pytest` command. To select which tests to run, use: `pytest -k ` By default tests are run with system-wide client binary, server binary and base configs. To change that, -set the following environment variables:` +set the following environment variables: * `CLICKHOUSE_TESTS_SERVER_BIN_PATH` to choose the server binary. * `CLICKHOUSE_TESTS_CLIENT_BIN_PATH` to choose the client binary. -* `CLICKHOUSE_TESTS_BASE_CONFIG_DIR` to choose the directory from which base configs (`config.xml` and - `users.xml`) are taken. +* `CLICKHOUSE_TESTS_BASE_CONFIG_DIR` to choose the directory from which base configs (`config.xml` and `users.xml`) are taken. 
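As a rough sketch of how those variables are consumed (the defaults below match the fallback paths that appear in the `tests/integration/helpers/cluster.py` hunk later in this diff):

```python
import os

# Simplified view of binary selection in helpers/cluster.py: an explicit
# CLICKHOUSE_TESTS_*_BIN_PATH wins, otherwise the system-wide binaries
# are used.
server_bin = os.environ.get('CLICKHOUSE_TESTS_SERVER_BIN_PATH', '/usr/bin/clickhouse')
client_bin = os.environ.get('CLICKHOUSE_TESTS_CLIENT_BIN_PATH', '/usr/bin/clickhouse-client')
print(server_bin, client_bin)
```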
For tests that use common docker compose files you may need to set up their path with environment variable: `DOCKER_COMPOSE_DIR=$HOME/ClickHouse/docker/test/integration/runner/compose` diff --git a/tests/integration/ci-runner.py b/tests/integration/ci-runner.py new file mode 100755 index 00000000000..a21f3d344ba --- /dev/null +++ b/tests/integration/ci-runner.py @@ -0,0 +1,568 @@ +#!/usr/bin/env python3 + +import logging +import subprocess +import os +import time +import shutil +from collections import defaultdict +import random +import json +import csv + + +MAX_RETRY = 2 +SLEEP_BETWEEN_RETRIES = 5 +CLICKHOUSE_BINARY_PATH = "/usr/bin/clickhouse" +CLICKHOUSE_ODBC_BRIDGE_BINARY_PATH = "/usr/bin/clickhouse-odbc-bridge" +CLICKHOUSE_LIBRARY_BRIDGE_BINARY_PATH = "/usr/bin/clickhouse-library-bridge" + +TRIES_COUNT = 10 +MAX_TIME_SECONDS = 3600 + +MAX_TIME_IN_SANDBOX = 20 * 60 # 20 minutes +TASK_TIMEOUT = 8 * 60 * 60 # 8 hours + +def get_tests_to_run(pr_info): + result = set([]) + changed_files = pr_info['changed_files'] + + if changed_files is None: + return [] + + for fpath in changed_files: + if 'tests/integration/test_' in fpath: + logging.info('File %s changed and seems like an integration test', fpath) + result.add(fpath.split('/')[2]) + return list(result) + + +def filter_existing_tests(tests_to_run, repo_path): + result = [] + for relative_test_path in tests_to_run: + if os.path.exists(os.path.join(repo_path, 'tests/integration', relative_test_path)): + result.append(relative_test_path) + else: + logging.info("Skipping test %s, seems like it was removed", relative_test_path) + return result + + +def _get_deselect_option(tests): + return ' '.join(['--deselect {}'.format(t) for t in tests]) + + +def parse_test_results_output(fname): + read = False + description_output = [] + with open(fname, 'r') as out: + for line in out: + if read and line.strip() and not line.startswith('=='): + description_output.append(line.strip()) + if 'short test summary info' in line: + read = True + return description_output + + +def get_counters(output): + counters = { + "ERROR": set([]), + "PASSED": set([]), + "FAILED": set([]), + } + + for line in output: + if '.py' in line: + line_arr = line.strip().split(' ') + state = line_arr[0] + test_name = ' '.join(line_arr[1:]) + if ' - ' in test_name: + test_name = test_name[:test_name.find(' - ')] + if state in counters: + counters[state].add(test_name) + else: + logging.info("Strange line %s", line) + else: + logging.info("Strange line %s", line) + return {k: list(v) for k, v in counters.items()} + + +def parse_test_times(fname): + read = False + description_output = [] + with open(fname, 'r') as out: + for line in out: + if read and '==' in line: + break + if read and line.strip(): + description_output.append(line.strip()) + if 'slowest durations' in line: + read = True + return description_output + + +def get_test_times(output): + result = defaultdict(float) + for line in output: + if '.py' in line: + line_arr = line.strip().split(' ') + test_time = line_arr[0] + test_name = ' '.join([elem for elem in line_arr[2:] if elem]) + if test_name not in result: + result[test_name] = 0.0 + result[test_name] += float(test_time[:-1]) + return result + + +def clear_ip_tables_and_restart_daemons(): + logging.info("Dump iptables after run %s", subprocess.check_output("iptables -L", shell=True)) + try: + logging.info("Killing all alive docker containers") + subprocess.check_output("docker kill $(docker ps -q)", shell=True) + except subprocess.CalledProcessError as err: + 
logging.info("docker kill excepted: " + str(err)) + + try: + logging.info("Removing all docker containers") + subprocess.check_output("docker rm $(docker ps -a -q) --force", shell=True) + except subprocess.CalledProcessError as err: + logging.info("docker rm excepted: " + str(err)) + + try: + logging.info("Stopping docker daemon") + subprocess.check_output("service docker stop", shell=True) + except subprocess.CalledProcessError as err: + logging.info("docker stop excepted: " + str(err)) + + try: + for i in range(200): + try: + logging.info("Restarting docker %s", i) + subprocess.check_output("service docker start", shell=True) + subprocess.check_output("docker ps", shell=True) + break + except subprocess.CalledProcessError as err: + time.sleep(0.5) + logging.info("Waiting docker to start, current %s", str(err)) + else: + raise Exception("Docker daemon doesn't responding") + except subprocess.CalledProcessError as err: + logging.info("Can't reload docker: " + str(err)) + + iptables_iter = 0 + try: + for i in range(1000): + iptables_iter = i + # when rules will be empty, it will raise exception + subprocess.check_output("iptables -D DOCKER-USER 1", shell=True) + except subprocess.CalledProcessError as err: + logging.info("All iptables rules cleared, " + str(iptables_iter) + "iterations, last error: " + str(err)) + + +class ClickhouseIntegrationTestsRunner: + + def __init__(self, result_path, params): + self.result_path = result_path + self.params = params + + self.image_versions = self.params['docker_images_with_versions'] + self.shuffle_groups = self.params['shuffle_test_groups'] + self.flaky_check = 'flaky check' in self.params['context_name'] + self.start_time = time.time() + self.soft_deadline_time = self.start_time + (TASK_TIMEOUT - MAX_TIME_IN_SANDBOX) + + def path(self): + return self.result_path + + def base_path(self): + return os.path.join(str(self.result_path), '../') + + def should_skip_tests(self): + return [] + + def get_image_with_version(self, name): + if name in self.image_versions: + return name + ":" + self.image_versions[name] + logging.warn("Cannot find image %s in params list %s", name, self.image_versions) + if ':' not in name: + return name + ":latest" + return name + + def get_single_image_version(self): + name = self.get_images_names()[0] + if name in self.image_versions: + return self.image_versions[name] + logging.warn("Cannot find image %s in params list %s", name, self.image_versions) + return 'latest' + + def shuffle_test_groups(self): + return self.shuffle_groups != 0 + + @staticmethod + def get_images_names(): + return ["yandex/clickhouse-integration-tests-runner", "yandex/clickhouse-mysql-golang-client", + "yandex/clickhouse-mysql-java-client", "yandex/clickhouse-mysql-js-client", + "yandex/clickhouse-mysql-php-client", "yandex/clickhouse-postgresql-java-client", + "yandex/clickhouse-integration-test", "yandex/clickhouse-kerberos-kdc", + "yandex/clickhouse-integration-helper", ] + + + def _can_run_with(self, path, opt): + with open(path, 'r') as script: + for line in script: + if opt in line: + return True + return False + + def _install_clickhouse(self, debs_path): + for package in ('clickhouse-common-static_', 'clickhouse-server_', 'clickhouse-client', 'clickhouse-common-static-dbg_'): # order matters + logging.info("Installing package %s", package) + for f in os.listdir(debs_path): + if package in f: + full_path = os.path.join(debs_path, f) + logging.info("Package found in %s", full_path) + log_name = "install_" + f + ".log" + log_path = 
os.path.join(str(self.path()), log_name) + with open(log_path, 'w') as log: + cmd = "dpkg -i {}".format(full_path) + logging.info("Executing installation cmd %s", cmd) + retcode = subprocess.Popen(cmd, shell=True, stderr=log, stdout=log).wait() + if retcode == 0: + logging.info("Installation of %s successful", full_path) + else: + raise Exception("Installation of {} failed".format(full_path)) + break + else: + raise Exception("Package with {} not found".format(package)) + logging.info("Unstripping binary") + # logging.info("Unstring %s", subprocess.check_output("eu-unstrip /usr/bin/clickhouse {}".format(CLICKHOUSE_BINARY_PATH), shell=True)) + + logging.info("All packages installed") + os.chmod(CLICKHOUSE_BINARY_PATH, 0o777) + os.chmod(CLICKHOUSE_ODBC_BRIDGE_BINARY_PATH, 0o777) + os.chmod(CLICKHOUSE_LIBRARY_BRIDGE_BINARY_PATH, 0o777) + result_path_bin = os.path.join(str(self.base_path()), "clickhouse") + result_path_odbc_bridge = os.path.join(str(self.base_path()), "clickhouse-odbc-bridge") + result_path_library_bridge = os.path.join(str(self.base_path()), "clickhouse-library-bridge") + shutil.copy(CLICKHOUSE_BINARY_PATH, result_path_bin) + shutil.copy(CLICKHOUSE_ODBC_BRIDGE_BINARY_PATH, result_path_odbc_bridge) + shutil.copy(CLICKHOUSE_LIBRARY_BRIDGE_BINARY_PATH, result_path_library_bridge) + return None, None + + def _compress_logs(self, path, result_path): + subprocess.check_call("tar czf {} -C {} .".format(result_path, path), shell=True) # STYLE_CHECK_ALLOW_SUBPROCESS_CHECK_CALL + + def _get_all_tests(self, repo_path): + image_cmd = self._get_runner_image_cmd(repo_path) + cmd = "cd {}/tests/integration && ./runner {} ' --setup-plan' | grep '::' | sed 's/ (fixtures used:.*//g' | sed 's/^ *//g' > all_tests.txt".format(repo_path, image_cmd) + logging.info("Getting all tests with cmd '%s'", cmd) + subprocess.check_call(cmd, shell=True) # STYLE_CHECK_ALLOW_SUBPROCESS_CHECK_CALL + + all_tests_file_path = "{}/tests/integration/all_tests.txt".format(repo_path) + if not os.path.isfile(all_tests_file_path) or os.path.getsize(all_tests_file_path) == 0: + raise Exception("There is something wrong with getting all tests list: file '{}' is empty or does not exist.".format(all_tests_file_path)) + + all_tests = [] + with open(all_tests_file_path, "r") as all_tests_file: + for line in all_tests_file: + all_tests.append(line.strip()) + return list(sorted(all_tests)) + + def group_test_by_file(self, tests): + result = {} + for test in tests: + test_file = test.split('::')[0] + if test_file not in result: + result[test_file] = [] + result[test_file].append(test) + return result + + def _update_counters(self, main_counters, current_counters): + for test in current_counters["PASSED"]: + if test not in main_counters["PASSED"] and test not in main_counters["FLAKY"]: + is_flaky = False + if test in main_counters["FAILED"]: + main_counters["FAILED"].remove(test) + is_flaky = True + if test in main_counters["ERROR"]: + main_counters["ERROR"].remove(test) + is_flaky = True + + if is_flaky: + main_counters["FLAKY"].append(test) + else: + main_counters["PASSED"].append(test) + + for state in ("ERROR", "FAILED"): + for test in current_counters[state]: + if test in main_counters["FLAKY"]: + continue + if test in main_counters["PASSED"]: + main_counters["PASSED"].remove(test) + main_counters["FLAKY"].append(test) + continue + if test not in main_counters[state]: + main_counters[state].append(test) + + def _get_runner_image_cmd(self, repo_path): + image_cmd = '' + if self._can_run_with(os.path.join(repo_path, 
"tests/integration", "runner"), '--docker-image-version'): + for img in self.get_images_names(): + if img == "yandex/clickhouse-integration-tests-runner": + runner_version = self.get_single_image_version() + logging.info("Can run with custom docker image version %s", runner_version) + image_cmd += ' --docker-image-version={} '.format(runner_version) + else: + if self._can_run_with(os.path.join(repo_path, "tests/integration", "runner"), '--docker-compose-images-tags'): + image_cmd += '--docker-compose-images-tags={} '.format(self.get_image_with_version(img)) + else: + image_cmd = '' + logging.info("Cannot run with custom docker image version :(") + return image_cmd + + def run_test_group(self, repo_path, test_group, tests_in_group, num_tries): + counters = { + "ERROR": [], + "PASSED": [], + "FAILED": [], + "SKIPPED": [], + "FLAKY": [], + } + tests_times = defaultdict(float) + + if self.soft_deadline_time < time.time(): + for test in tests_in_group: + logging.info("Task timeout exceeded, skipping %s", test) + counters["SKIPPED"].append(test) + tests_times[test] = 0 + return counters, tests_times, [] + + image_cmd = self._get_runner_image_cmd(repo_path) + test_group_str = test_group.replace('/', '_').replace('.', '_') + log_paths = [] + + for i in range(num_tries): + logging.info("Running test group %s for the %s retry", test_group, i) + clear_ip_tables_and_restart_daemons() + + output_path = os.path.join(str(self.path()), "test_output_" + test_group_str + "_" + str(i) + ".log") + log_name = "integration_run_" + test_group_str + "_" + str(i) + ".txt" + log_path = os.path.join(str(self.path()), log_name) + log_paths.append(log_path) + logging.info("Will wait output inside %s", output_path) + + test_names = set([]) + for test_name in tests_in_group: + if test_name not in counters["PASSED"]: + if '[' in test_name: + test_names.add(test_name[:test_name.find('[')]) + else: + test_names.add(test_name) + + test_cmd = ' '.join([test for test in sorted(test_names)]) + cmd = "cd {}/tests/integration && ./runner {} '-ss {} -rfEp --color=no --durations=0 {}' | tee {}".format( + repo_path, image_cmd, test_cmd, _get_deselect_option(self.should_skip_tests()), output_path) + + with open(log_path, 'w') as log: + logging.info("Executing cmd: %s", cmd) + retcode = subprocess.Popen(cmd, shell=True, stderr=log, stdout=log).wait() + if retcode == 0: + logging.info("Run %s group successfully", test_group) + else: + logging.info("Some tests failed") + + if os.path.exists(output_path): + lines = parse_test_results_output(output_path) + new_counters = get_counters(lines) + times_lines = parse_test_times(output_path) + new_tests_times = get_test_times(times_lines) + self._update_counters(counters, new_counters) + for test_name, test_time in new_tests_times.items(): + tests_times[test_name] = test_time + os.remove(output_path) + if len(counters["PASSED"]) + len(counters["FLAKY"]) == len(tests_in_group): + logging.info("All tests from group %s passed", test_group) + break + if len(counters["PASSED"]) + len(counters["FLAKY"]) >= 0 and len(counters["FAILED"]) == 0 and len(counters["ERROR"]) == 0: + logging.info("Seems like all tests passed but some of them are skipped or deselected. 
Ignoring them and finishing group.") + break + else: + for test in tests_in_group: + if test not in counters["PASSED"] and test not in counters["ERROR"] and test not in counters["FAILED"]: + counters["ERROR"].append(test) + + return counters, tests_times, log_paths + + def run_flaky_check(self, repo_path, build_path): + pr_info = self.params['pr_info'] + + # pytest swears, if we require to run some tests which was renamed or deleted + tests_to_run = filter_existing_tests(get_tests_to_run(pr_info), repo_path) + if not tests_to_run: + logging.info("No tests to run found") + return 'success', 'Nothing to run', [('Nothing to run', 'OK')], '' + + self._install_clickhouse(build_path) + logging.info("Found '%s' tests to run", ' '.join(tests_to_run)) + result_state = "success" + description_prefix = "No flaky tests: " + start = time.time() + logging.info("Starting check with retries") + final_retry = 0 + logs = [] + for i in range(TRIES_COUNT): + final_retry += 1 + logging.info("Running tests for the %s time", i) + counters, tests_times, log_paths = self.run_test_group(repo_path, "flaky", tests_to_run, 1) + logs += log_paths + if counters["FAILED"]: + logging.info("Found failed tests: %s", ' '.join(counters["FAILED"])) + description_prefix = "Flaky tests found: " + result_state = "failure" + break + if counters["ERROR"]: + description_prefix = "Flaky tests found: " + logging.info("Found error tests: %s", ' '.join(counters["ERROR"])) + # NOTE "error" result state will restart the whole test task, so we use "failure" here + result_state = "failure" + break + assert len(counters["FLAKY"]) == 0 + logging.info("Try is OK, all tests passed, going to clear env") + clear_ip_tables_and_restart_daemons() + logging.info("And going to sleep for some time") + if time.time() - start > MAX_TIME_SECONDS: + logging.info("Timeout reached, going to finish flaky check") + break + time.sleep(5) + + logging.info("Finally all tests done, going to compress test dir") + test_logs = os.path.join(str(self.path()), "./test_dir.tar.gz") + self._compress_logs("{}/tests/integration".format(repo_path), test_logs) + logging.info("Compression finished") + + test_result = [] + for state in ("ERROR", "FAILED", "PASSED", "SKIPPED", "FLAKY"): + if state == "PASSED": + text_state = "OK" + elif state == "FAILED": + text_state = "FAIL" + else: + text_state = state + test_result += [(c + ' (✕' + str(final_retry) + ')', text_state, "{:.2f}".format(tests_times[c])) for c in counters[state]] + status_text = description_prefix + ', '.join([str(n).lower().replace('failed', 'fail') + ': ' + str(len(c)) for n, c in counters.items()]) + + return result_state, status_text, test_result, [test_logs] + logs + + def run_impl(self, repo_path, build_path): + if self.flaky_check: + return self.run_flaky_check(repo_path, build_path) + + self._install_clickhouse(build_path) + logging.info("Dump iptables before run %s", subprocess.check_output("iptables -L", shell=True)) + all_tests = self._get_all_tests(repo_path) + logging.info("Found %s tests first 3 %s", len(all_tests), ' '.join(all_tests[:3])) + grouped_tests = self.group_test_by_file(all_tests) + logging.info("Found %s tests groups", len(grouped_tests)) + + counters = { + "ERROR": [], + "PASSED": [], + "FAILED": [], + "SKIPPED": [], + "FLAKY": [], + } + tests_times = defaultdict(float) + tests_log_paths = defaultdict(list) + + items_to_run = list(grouped_tests.items()) + + logging.info("Total test groups %s", len(items_to_run)) + if self.shuffle_test_groups(): + logging.info("Shuffling test groups") + 
random.shuffle(items_to_run) + + for group, tests in items_to_run: + logging.info("Running test group %s containing %s tests", group, len(tests)) + group_counters, group_test_times, log_paths = self.run_test_group(repo_path, group, tests, MAX_RETRY) + total_tests = 0 + for counter, value in group_counters.items(): + logging.info("Tests from group %s stats, %s count %s", group, counter, len(value)) + counters[counter] += value + logging.info("Totally have %s with status %s", len(counters[counter]), counter) + total_tests += len(counters[counter]) + logging.info("Totally finished tests %s/%s", total_tests, len(all_tests)) + + for test_name, test_time in group_test_times.items(): + tests_times[test_name] = test_time + tests_log_paths[test_name] = log_paths + + if len(counters["FAILED"]) + len(counters["ERROR"]) >= 20: + logging.info("Collected more than 20 failed/error tests, stopping") + break + + logging.info("Finally all tests done, going to compress test dir") + test_logs = os.path.join(str(self.path()), "./test_dir.tar.gz") + self._compress_logs("{}/tests/integration".format(repo_path), test_logs) + logging.info("Compression finished") + + if counters["FAILED"] or counters["ERROR"]: + logging.info("Overall status failure, because we have tests in FAILED or ERROR state") + result_state = "failure" + else: + logging.info("Overall success!") + result_state = "success" + + test_result = [] + for state in ("ERROR", "FAILED", "PASSED", "SKIPPED", "FLAKY"): + if state == "PASSED": + text_state = "OK" + elif state == "FAILED": + text_state = "FAIL" + else: + text_state = state + test_result += [(c, text_state, "{:.2f}".format(tests_times[c]), tests_log_paths[c]) for c in counters[state]] + + failed_sum = len(counters['FAILED']) + len(counters['ERROR']) + status_text = "fail: {}, passed: {}, flaky: {}".format(failed_sum, len(counters['PASSED']), len(counters['FLAKY'])) + + if self.soft_deadline_time < time.time(): + status_text = "Timeout, " + status_text + result_state = "failure" + + counters['FLAKY'] = [] + if not counters or sum(len(counter) for counter in counters.values()) == 0: + status_text = "No tests found for some reason! 
It's a bug" + result_state = "failure" + + if '(memory)' in self.params['context_name']: + result_state = "success" + + return result_state, status_text, test_result, [test_logs] + +def write_results(results_file, status_file, results, status): + with open(results_file, 'w') as f: + out = csv.writer(f, delimiter='\t') + out.writerows(results) + with open(status_file, 'w') as f: + out = csv.writer(f, delimiter='\t') + out.writerow(status) + +if __name__ == "__main__": + logging.basicConfig(level=logging.INFO, format='%(asctime)s %(message)s') + + repo_path = os.environ.get("CLICKHOUSE_TESTS_REPO_PATH") + build_path = os.environ.get("CLICKHOUSE_TESTS_BUILD_PATH") + result_path = os.environ.get("CLICKHOUSE_TESTS_RESULT_PATH") + params_path = os.environ.get("CLICKHOUSE_TESTS_JSON_PARAMS_PATH") + + params = json.loads(open(params_path, 'r').read()) + runner = ClickhouseIntegrationTestsRunner(result_path, params) + + logging.info("Running tests") + state, description, test_results, _ = runner.run_impl(repo_path, build_path) + logging.info("Tests finished") + + status = (state, description) + out_results_file = os.path.join(str(runner.path()), "test_results.tsv") + out_status_file = os.path.join(str(runner.path()), "check_status.tsv") + write_results(out_results_file, out_status_file, test_results, status) + logging.info("Result written") diff --git a/tests/integration/helpers/cluster.py b/tests/integration/helpers/cluster.py index 3872234d36c..ed28c3a7fc4 100644 --- a/tests/integration/helpers/cluster.py +++ b/tests/integration/helpers/cluster.py @@ -54,6 +54,26 @@ def run_and_check(args, env=None, shell=False, stdout=subprocess.PIPE, stderr=su raise Exception('Command {} return non-zero code {}: {}'.format(args, res.returncode, res.stderr.decode('utf-8'))) +def retry_exception(num, delay, func, exception=Exception, *args, **kwargs): + """ + Retry if `func()` throws, `num` times. + + :param func: func to run + :param num: number of retries + + :throws StopIteration + """ + i = 0 + while i <= num: + try: + func(*args, **kwargs) + time.sleep(delay) + except exception: # pylint: disable=broad-except + i += 1 + continue + return + raise StopIteration('Function did not finished successfully') + def subprocess_check_call(args): # Uncomment for debugging # print('run:', ' ' . 
join(args)) @@ -75,6 +95,15 @@ def get_odbc_bridge_path(): return '/usr/bin/clickhouse-odbc-bridge' return path +def get_library_bridge_path(): + path = os.environ.get('CLICKHOUSE_TESTS_LIBRARY_BRIDGE_BIN_PATH') + if path is None: + server_path = os.environ.get('CLICKHOUSE_TESTS_SERVER_BIN_PATH') + if server_path is not None: + return os.path.join(os.path.dirname(server_path), 'clickhouse-library-bridge') + else: + return '/usr/bin/clickhouse-library-bridge' + return path def get_docker_compose_path(): compose_path = os.environ.get('DOCKER_COMPOSE_DIR') @@ -98,7 +127,7 @@ class ClickHouseCluster: """ def __init__(self, base_path, name=None, base_config_dir=None, server_bin_path=None, client_bin_path=None, - odbc_bridge_bin_path=None, zookeeper_config_path=None, custom_dockerd_host=None): + odbc_bridge_bin_path=None, library_bridge_bin_path=None, zookeeper_config_path=None, custom_dockerd_host=None): for param in list(os.environ.keys()): print("ENV %40s %s" % (param, os.environ[param])) self.base_dir = p.dirname(base_path) @@ -109,6 +138,7 @@ class ClickHouseCluster: self.server_bin_path = p.realpath( server_bin_path or os.environ.get('CLICKHOUSE_TESTS_SERVER_BIN_PATH', '/usr/bin/clickhouse')) self.odbc_bridge_bin_path = p.realpath(odbc_bridge_bin_path or get_odbc_bridge_path()) + self.library_bridge_bin_path = p.realpath(library_bridge_bin_path or get_library_bridge_path()) self.client_bin_path = p.realpath( client_bin_path or os.environ.get('CLICKHOUSE_TESTS_CLIENT_BIN_PATH', '/usr/bin/clickhouse-client')) self.zookeeper_config_path = p.join(self.base_dir, zookeeper_config_path) if zookeeper_config_path else p.join( @@ -139,7 +169,9 @@ class ClickHouseCluster: self.instances = {} self.with_zookeeper = False self.with_mysql = False + self.with_mysql_cluster = False self.with_postgres = False + self.with_postgres_cluster = False self.with_kafka = False self.with_kerberized_kafka = False self.with_rabbitmq = False @@ -180,9 +212,9 @@ class ClickHouseCluster: def add_instance(self, name, base_config_dir=None, main_configs=None, user_configs=None, dictionaries=None, macros=None, - with_zookeeper=False, with_mysql=False, with_kafka=False, with_kerberized_kafka=False, with_rabbitmq=False, + with_zookeeper=False, with_mysql=False, with_mysql_cluster=False, with_kafka=False, with_kerberized_kafka=False, with_rabbitmq=False, clickhouse_path_dir=None, - with_odbc_drivers=False, with_postgres=False, with_hdfs=False, with_kerberized_hdfs=False, with_mongo=False, + with_odbc_drivers=False, with_postgres=False, with_postgres_cluster=False, with_hdfs=False, with_kerberized_hdfs=False, with_mongo=False, with_redis=False, with_minio=False, with_cassandra=False, hostname=None, env_variables=None, image="yandex/clickhouse-integration-test", tag=None, stay_alive=False, ipv4_address=None, ipv6_address=None, with_installed_binary=False, tmpfs=None, @@ -223,6 +255,7 @@ class ClickHouseCluster: with_zookeeper=with_zookeeper, zookeeper_config_path=self.zookeeper_config_path, with_mysql=with_mysql, + with_mysql_cluster=with_mysql_cluster, with_kafka=with_kafka, with_kerberized_kafka=with_kerberized_kafka, with_rabbitmq=with_rabbitmq, @@ -233,6 +266,7 @@ class ClickHouseCluster: with_cassandra=with_cassandra, server_bin_path=self.server_bin_path, odbc_bridge_bin_path=self.odbc_bridge_bin_path, + library_bridge_bin_path=self.library_bridge_bin_path, clickhouse_path_dir=clickhouse_path_dir, with_odbc_drivers=with_odbc_drivers, hostname=hostname, @@ -274,6 +308,14 @@ class ClickHouseCluster: 
cmds.append(self.base_mysql_cmd) + if with_mysql_cluster and not self.with_mysql_cluster: + self.with_mysql_cluster = True + self.base_cmd.extend(['--file', p.join(docker_compose_yml_dir, 'docker_compose_mysql_cluster.yml')]) + self.base_mysql_cluster_cmd = ['docker-compose', '--project-name', self.project_name, + '--file', p.join(docker_compose_yml_dir, 'docker_compose_mysql_cluster.yml')] + + cmds.append(self.base_mysql_cluster_cmd) + if with_postgres and not self.with_postgres: self.with_postgres = True self.base_cmd.extend(['--file', p.join(docker_compose_yml_dir, 'docker_compose_postgres.yml')]) @@ -281,6 +323,13 @@ class ClickHouseCluster: '--file', p.join(docker_compose_yml_dir, 'docker_compose_postgres.yml')] cmds.append(self.base_postgres_cmd) + if with_postgres_cluster and not self.with_postgres_cluster: + self.with_postgres_cluster = True + self.base_cmd.extend(['--file', p.join(docker_compose_yml_dir, 'docker_compose_postgres_cluster.yml')]) + self.base_postgres_cluster_cmd = ['docker-compose', '--project-name', self.project_name, + '--file', p.join(docker_compose_yml_dir, 'docker_compose_postgres_cluster.yml')] + cmds.append(self.base_postgres_cluster_cmd) + if with_odbc_drivers and not self.with_odbc_drivers: self.with_odbc_drivers = True if not self.with_mysql: @@ -449,11 +498,11 @@ class ClickHouseCluster: ["bash", "-c", "echo {} | base64 --decode > {}".format(encodedStr, dest_path)], user='root') - def wait_mysql_to_start(self, timeout=60): + def wait_mysql_to_start(self, timeout=60, port=3308): start = time.time() while time.time() - start < timeout: try: - conn = pymysql.connect(user='root', password='clickhouse', host='127.0.0.1', port=3308) + conn = pymysql.connect(user='root', password='clickhouse', host='127.0.0.1', port=port) conn.close() print("Mysql Started") return @@ -464,11 +513,11 @@ class ClickHouseCluster: subprocess_call(['docker-compose', 'ps', '--services', '--all']) raise Exception("Cannot wait MySQL container") - def wait_postgres_to_start(self, timeout=60): + def wait_postgres_to_start(self, timeout=60, port=5432): start = time.time() while time.time() - start < timeout: try: - conn_string = "host='localhost' user='postgres' password='mysecretpassword'" + conn_string = "host='localhost' port={} user='postgres' password='mysecretpassword'".format(port) conn = psycopg2.connect(conn_string) conn.close() print("Postgres Started") @@ -603,16 +652,6 @@ class ClickHouseCluster: if self.is_up: return - # Just in case kill unstopped containers from previous launch - try: - print("Trying to kill unstopped containers...") - - if not subprocess_call(['docker-compose', 'kill']): - subprocess_call(['docker-compose', 'down', '--volumes']) - print("Unstopped containers killed") - except: - pass - try: if destroy_dirs and p.exists(self.instances_dir): print(("Removing instances dir %s", self.instances_dir)) @@ -622,9 +661,24 @@ class ClickHouseCluster: print(('Setup directory for instance: {} destroy_dirs: {}'.format(instance.name, destroy_dirs))) instance.create_dir(destroy_dir=destroy_dirs) + # In case of multiple clusters we should not stop compose services. 
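The wait_mysql_to_start / wait_postgres_to_start helpers above (which now take a port so that each replica of the new MySQL and Postgres clusters can be probed individually) are both instances of the same poll-until-deadline pattern. As a sketch only, not part of this patch, the shared shape could be factored out like this, where wait_for_service and check_conn are hypothetical names:

```python
import time

def wait_for_service(check_conn, timeout=60, poll_interval=0.5):
    # Poll check_conn() until it stops raising or the deadline passes.
    start = time.time()
    last_error = None
    while time.time() - start < timeout:
        try:
            check_conn()  # e.g. lambda: pymysql.connect(..., port=port).close()
            return
        except Exception as ex:  # remember the last error for the final message
            last_error = ex
            time.sleep(poll_interval)
    raise Exception("Service did not start in {}s: {}".format(timeout, last_error))
```

With such a helper, a call like wait_mysql_to_start(120, port=3348) would reduce to a one-line lambda around pymysql.connect.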
+ if destroy_dirs: + # Just in case kill unstopped containers from previous launch + try: + print("Trying to kill unstopped containers...") + subprocess_call(['docker-compose', 'kill']) + subprocess_call(self.base_cmd + ['down', '--volumes', '--remove-orphans']) + print("Unstopped containers killed") + except: + pass + + clickhouse_pull_cmd = self.base_cmd + ['pull'] + print(f"Pulling images for {self.base_cmd}") + retry_exception(10, 5, subprocess_check_call, Exception, clickhouse_pull_cmd) + self.docker_client = docker.from_env(version=self.docker_api_version) - common_opts = ['up', '-d', '--force-recreate'] + common_opts = ['up', '-d'] if self.with_zookeeper and self.base_zookeeper_cmd: print('Setup ZooKeeper') @@ -650,11 +704,25 @@ class ClickHouseCluster: subprocess_check_call(self.base_mysql_cmd + common_opts) self.wait_mysql_to_start(120) + if self.with_mysql_cluster and self.base_mysql_cluster_cmd: + print('Setup MySQL') + subprocess_check_call(self.base_mysql_cluster_cmd + common_opts) + self.wait_mysql_to_start(120, port=3348) + self.wait_mysql_to_start(120, port=3368) + self.wait_mysql_to_start(120, port=3388) + if self.with_postgres and self.base_postgres_cmd: print('Setup Postgres') subprocess_check_call(self.base_postgres_cmd + common_opts) self.wait_postgres_to_start(120) + if self.with_postgres_cluster and self.base_postgres_cluster_cmd: + print('Setup Postgres') + subprocess_check_call(self.base_postgres_cluster_cmd + common_opts) + self.wait_postgres_to_start(120, port=5421) + self.wait_postgres_to_start(120, port=5441) + self.wait_postgres_to_start(120, port=5461) + if self.with_kafka and self.base_kafka_cmd: print('Setup Kafka') subprocess_check_call(self.base_kafka_cmd + common_opts + ['--renew-anon-volumes']) @@ -692,7 +760,7 @@ class ClickHouseCluster: if self.with_redis and self.base_redis_cmd: print('Setup Redis') - subprocess_check_call(self.base_redis_cmd + ['up', '-d', '--force-recreate']) + subprocess_check_call(self.base_redis_cmd + ['up', '-d']) time.sleep(10) if self.with_minio and self.base_minio_cmd: @@ -726,7 +794,7 @@ class ClickHouseCluster: os.environ.pop('SSL_CERT_FILE') if self.with_cassandra and self.base_cassandra_cmd: - subprocess_check_call(self.base_cassandra_cmd + ['up', '-d', '--force-recreate']) + subprocess_check_call(self.base_cassandra_cmd + ['up', '-d']) self.wait_cassandra_to_start() clickhouse_start_cmd = self.base_cmd + ['up', '-d', '--no-recreate'] @@ -861,6 +929,7 @@ services: - /etc/passwd:/etc/passwd:ro {binary_volume} {odbc_bridge_volume} + {library_bridge_volume} {odbc_ini_path} {keytab_path} {krb5_conf} @@ -896,9 +965,9 @@ class ClickHouseInstance: def __init__( self, cluster, base_path, name, base_config_dir, custom_main_configs, custom_user_configs, custom_dictionaries, - macros, with_zookeeper, zookeeper_config_path, with_mysql, with_kafka, with_kerberized_kafka, with_rabbitmq, with_kerberized_hdfs, + macros, with_zookeeper, zookeeper_config_path, with_mysql, with_mysql_cluster, with_kafka, with_kerberized_kafka, with_rabbitmq, with_kerberized_hdfs, with_mongo, with_redis, with_minio, - with_cassandra, server_bin_path, odbc_bridge_bin_path, clickhouse_path_dir, with_odbc_drivers, + with_cassandra, server_bin_path, odbc_bridge_bin_path, library_bridge_bin_path, clickhouse_path_dir, with_odbc_drivers, hostname=None, env_variables=None, image="yandex/clickhouse-integration-test", tag="latest", stay_alive=False, ipv4_address=None, ipv6_address=None, with_installed_binary=False, tmpfs=None): @@ -922,8 +991,10 @@ class 
ClickHouseInstance: self.server_bin_path = server_bin_path self.odbc_bridge_bin_path = odbc_bridge_bin_path + self.library_bridge_bin_path = library_bridge_bin_path self.with_mysql = with_mysql + self.with_mysql_cluster = with_mysql_cluster self.with_kafka = with_kafka self.with_kerberized_kafka = with_kerberized_kafka self.with_rabbitmq = with_rabbitmq @@ -960,13 +1031,18 @@ class ClickHouseInstance: self.ipv6_address = ipv6_address self.with_installed_binary = with_installed_binary - def is_built_with_thread_sanitizer(self): + def is_built_with_sanitizer(self, sanitizer_name=''): build_opts = self.query("SELECT value FROM system.build_options WHERE name = 'CXX_FLAGS'") - return "-fsanitize=thread" in build_opts + return "-fsanitize={}".format(sanitizer_name) in build_opts + + def is_built_with_thread_sanitizer(self): + return self.is_built_with_sanitizer('thread') def is_built_with_address_sanitizer(self): - build_opts = self.query("SELECT value FROM system.build_options WHERE name = 'CXX_FLAGS'") - return "-fsanitize=address" in build_opts + return self.is_built_with_sanitizer('address') + + def is_built_with_memory_sanitizer(self): + return self.is_built_with_sanitizer('memory') # Connects to the instance via clickhouse-client, sends a query (1st argument) and returns the answer def query(self, sql, stdin=None, timeout=None, settings=None, user=None, password=None, database=None, @@ -1053,23 +1129,28 @@ class ClickHouseInstance: return self.http_query(sql=sql, data=data, params=params, user=user, password=password, expect_fail_and_get_error=True) - def stop_clickhouse(self, start_wait_sec=5, kill=False): + def stop_clickhouse(self, stop_wait_sec=30, kill=False): if not self.stay_alive: raise Exception("clickhouse can be stopped only with stay_alive=True instance") self.exec_in_container(["bash", "-c", "pkill {} clickhouse".format("-9" if kill else "")], user='root') - time.sleep(start_wait_sec) + deadline = time.time() + stop_wait_sec + while time.time() < deadline: + time.sleep(0.5) + if self.get_process_pid("clickhouse") is None: + break + assert self.get_process_pid("clickhouse") is None, "ClickHouse was not stopped" - def start_clickhouse(self, stop_wait_sec=5): + def start_clickhouse(self, start_wait_sec=30): if not self.stay_alive: raise Exception("clickhouse can be started again only with stay_alive=True instance") self.exec_in_container(["bash", "-c", "{} --daemon".format(CLICKHOUSE_START_COMMAND)], user=str(os.getuid())) # wait start from helpers.test_tools import assert_eq_with_retry - assert_eq_with_retry(self, "select 1", "1", retry_count=int(stop_wait_sec / 0.5), sleep_time=0.5) + assert_eq_with_retry(self, "select 1", "1", retry_count=int(start_wait_sec / 0.5), sleep_time=0.5) - def restart_clickhouse(self, stop_start_wait_sec=5, kill=False): + def restart_clickhouse(self, stop_start_wait_sec=30, kill=False): self.stop_clickhouse(stop_start_wait_sec, kill) self.start_clickhouse(stop_start_wait_sec) @@ -1082,6 +1163,11 @@ class ClickHouseInstance: ["bash", "-c", 'grep "{}" /var/log/clickhouse-server/clickhouse-server.log || true'.format(substring)]) return len(result) > 0 + def count_in_log(self, substring): + result = self.exec_in_container( + ["bash", "-c", 'grep "{}" /var/log/clickhouse-server/clickhouse-server.log | wc -l'.format(substring)]) + return result + def wait_for_log_line(self, regexp, filename='/var/log/clickhouse-server/clickhouse-server.log', timeout=30, repetitions=1, look_behind_lines=100): start_time = time.time() result = self.exec_in_container( @@ 
-1384,9 +1470,11 @@ class ClickHouseInstance: if not self.with_installed_binary: binary_volume = "- " + self.server_bin_path + ":/usr/bin/clickhouse" odbc_bridge_volume = "- " + self.odbc_bridge_bin_path + ":/usr/bin/clickhouse-odbc-bridge" + library_bridge_volume = "- " + self.library_bridge_bin_path + ":/usr/bin/clickhouse-library-bridge" else: binary_volume = "- " + self.server_bin_path + ":/usr/share/clickhouse_fresh" odbc_bridge_volume = "- " + self.odbc_bridge_bin_path + ":/usr/share/clickhouse-odbc-bridge_fresh" + library_bridge_volume = "- " + self.library_bridge_bin_path + ":/usr/share/clickhouse-library-bridge_fresh" with open(self.docker_compose_path, 'w') as docker_compose: docker_compose.write(DOCKER_COMPOSE_TEMPLATE.format( @@ -1396,6 +1484,7 @@ class ClickHouseInstance: hostname=self.hostname, binary_volume=binary_volume, odbc_bridge_volume=odbc_bridge_volume, + library_bridge_volume=library_bridge_volume, instance_config_dir=instance_config_dir, config_d_dir=self.config_d_dir, db_dir=db_dir, diff --git a/tests/integration/helpers/corrupt_part_data_on_disk.py b/tests/integration/helpers/corrupt_part_data_on_disk.py new file mode 100644 index 00000000000..1a6f384da9e --- /dev/null +++ b/tests/integration/helpers/corrupt_part_data_on_disk.py @@ -0,0 +1,14 @@ +def corrupt_part_data_on_disk(node, table, part_name): + part_path = node.query("SELECT path FROM system.parts WHERE table = '{}' and name = '{}'" + .format(table, part_name)).strip() + + corrupt_part_data_by_path(node, part_path) + +def corrupt_part_data_by_path(node, part_path): + print("Corrupting part", part_path, "at", node.name) + print("Will corrupt: ", + node.exec_in_container(['bash', '-c', 'cd {p} && ls *.bin | head -n 1'.format(p=part_path)])) + + node.exec_in_container(['bash', '-c', + 'cd {p} && ls *.bin | head -n 1 | xargs -I{{}} sh -c \'echo "1" >> $1\' -- {{}}'.format( + p=part_path)], privileged=True) diff --git a/tests/integration/helpers/dictionary.py b/tests/integration/helpers/dictionary.py index b3f7a729777..41d87180c8a 100644 --- a/tests/integration/helpers/dictionary.py +++ b/tests/integration/helpers/dictionary.py @@ -7,12 +7,12 @@ class Layout(object): 'flat': '', 'hashed': '', 'cache': '128', - 'ssd_cache': '/etc/clickhouse/dictionaries/all128', + 'ssd_cache': '/etc/clickhouse/dictionaries/all', 'complex_key_hashed': '', 'complex_key_hashed_one_key': '', 'complex_key_hashed_two_keys': '', 'complex_key_cache': '128', - 'complex_key_ssd_cache': '/etc/clickhouse/dictionaries/all128', + 'complex_key_ssd_cache': '/etc/clickhouse/dictionaries/all', 'range_hashed': '', 'direct': '', 'complex_key_direct': '' diff --git a/tests/integration/helpers/hdfs_api.py b/tests/integration/helpers/hdfs_api.py index cb742662855..ae0e4cffc7a 100644 --- a/tests/integration/helpers/hdfs_api.py +++ b/tests/integration/helpers/hdfs_api.py @@ -90,9 +90,8 @@ class HDFSApi(object): if kerberized: self._run_kinit() self.kerberos_auth = reqkerb.HTTPKerberosAuth(mutual_authentication=reqkerb.DISABLED, hostname_override=self.host, principal=self.principal) - #principal=self.principal, - #hostname_override=self.host, principal=self.principal) - # , mutual_authentication=reqkerb.REQUIRED, force_preemptive=True) + if self.kerberos_auth is None: + print("failed to obtain kerberos_auth") else: self.kerberos_auth = None @@ -122,20 +121,24 @@ class HDFSApi(object): raise Exception("Kinit running failure") - def read_data(self, path, universal_newlines=True): + def req_wrapper(self, func, expected_code, cnt=2, **kwargs): with 
dns_hook(self): - response = requests.get("{protocol}://{host}:{port}/webhdfs/v1{path}?op=OPEN".format(protocol=self.protocol, host=self.host, port=self.proxy_port, path=path), headers={'host': 'localhost'}, allow_redirects=False, verify=False, auth=self.kerberos_auth) - if response.status_code != 307: - response.raise_for_status() + for i in range(0, cnt): + response_data = func(**kwargs) + if response_data.status_code == expected_code: + return response_data + else: + print("Unexpected response status code {}".format(response_data.status_code)) + response_data.raise_for_status() + + def read_data(self, path, universal_newlines=True): + response = self.req_wrapper(requests.get, 307, url="{protocol}://{host}:{port}/webhdfs/v1{path}?op=OPEN".format(protocol=self.protocol, host=self.host, port=self.proxy_port, path=path), headers={'host': 'localhost'}, allow_redirects=False, verify=False, auth=self.kerberos_auth) # additional_params = '&'.join(response.headers['Location'].split('&')[1:2]) url = "{location}".format(location=response.headers['Location']) # print("redirected to ", url) - with dns_hook(self): - response_data = requests.get(url, - headers={'host': 'localhost'}, - verify=False, auth=self.kerberos_auth) - if response_data.status_code != 200: - response_data.raise_for_status() + response_data = self.req_wrapper(requests.get, 200, url=url, + headers={'host': 'localhost'}, + verify=False, auth=self.kerberos_auth) if universal_newlines: return response_data.text else: @@ -149,42 +152,36 @@ class HDFSApi(object): named_file.write(content) named_file.flush() - if self.kerberized: self._run_kinit() self.kerberos_auth = reqkerb.HTTPKerberosAuth(mutual_authentication=reqkerb.DISABLED, hostname_override=self.host, principal=self.principal) # print(self.kerberos_auth) - with dns_hook(self): - response = requests.put( - "{protocol}://{host}:{port}/webhdfs/v1{path}?op=CREATE".format(protocol=self.protocol, host=self.host, - port=self.proxy_port, - path=path, user=self.user), - allow_redirects=False, - headers={'host': 'localhost'}, - params={'overwrite' : 'true'}, - verify=False, auth=self.kerberos_auth + response = self.req_wrapper(requests.put, 307, + url="{protocol}://{host}:{port}/webhdfs/v1{path}?op=CREATE".format( + protocol=self.protocol, host=self.host, + port=self.proxy_port, + path=path, user=self.user), + allow_redirects=False, + headers={'host': 'localhost'}, + params={'overwrite' : 'true'}, + verify=False, auth=self.kerberos_auth ) - if response.status_code != 307: - # print(response.headers) - response.raise_for_status() + additional_params = '&'.join( response.headers['Location'].split('&')[1:2] + ["user.name={}".format(self.user), "overwrite=true"]) - with dns_hook(self), open(fpath, mode="rb") as fh: + with open(fpath, mode="rb") as fh: file_data = fh.read() protocol = "http" # self.protocol - response = requests.put( - "{location}".format(location=response.headers['Location']), - data=file_data, - headers={'content-type':'text/plain', 'host': 'localhost'}, - params={'file': path, 'user.name' : self.user}, - allow_redirects=False, verify=False, auth=self.kerberos_auth + response = self.req_wrapper(requests.put, 201, + url="{location}".format( + location=response.headers['Location']), + data=file_data, + headers={'content-type':'text/plain', 'host': 'localhost'}, + params={'file': path, 'user.name' : self.user}, + allow_redirects=False, verify=False, auth=self.kerberos_auth ) - # print(response) - if response.status_code != 201: - response.raise_for_status() + # print(response) 
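The req_wrapper refactor above removes the status-check boilerplate that was previously copy-pasted around every WebHDFS request. Below is a standalone sketch of the same retry idea with hypothetical names and one deliberate difference: it raises explicitly when the expected code never arrives, whereas req_wrapper can fall through and return None if raise_for_status() sees a non-error status:

```python
import requests

def request_expecting(method, expected_code, retries=2, **kwargs):
    # Call method(**kwargs) up to `retries` times, returning the first
    # response whose status code matches; surface HTTP errors otherwise.
    response = None
    for _ in range(retries):
        response = method(**kwargs)
        if response.status_code == expected_code:
            return response
        print("Unexpected status code {}".format(response.status_code))
        response.raise_for_status()  # raises on 4xx/5xx, loops otherwise
    raise Exception("Expected HTTP {}, last got {}".format(expected_code, response.status_code))
```

A call such as request_expecting(requests.get, 307, url=..., allow_redirects=False) mirrors how read_data obtains the WebHDFS redirect location before fetching the actual data.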
def write_gzip_data(self, path, content): diff --git a/tests/integration/runner b/tests/integration/runner index 6dca7663310..e89e10fbc21 100755 --- a/tests/integration/runner +++ b/tests/integration/runner @@ -33,10 +33,15 @@ def check_args_and_update_paths(args): if not os.path.isabs(args.binary): args.binary = os.path.abspath(os.path.join(CURRENT_WORK_DIR, args.binary)) - if not args.bridge_binary: - args.bridge_binary = os.path.join(os.path.dirname(args.binary), 'clickhouse-odbc-bridge') - elif not os.path.isabs(args.bridge_binary): - args.bridge_binary = os.path.abspath(os.path.join(CURRENT_WORK_DIR, args.bridge_binary)) + if not args.odbc_bridge_binary: + args.odbc_bridge_binary = os.path.join(os.path.dirname(args.binary), 'clickhouse-odbc-bridge') + elif not os.path.isabs(args.odbc_bridge_binary): + args.odbc_bridge_binary = os.path.abspath(os.path.join(CURRENT_WORK_DIR, args.odbc_bridge_binary)) + + if not args.library_bridge_binary: + args.library_bridge_binary = os.path.join(os.path.dirname(args.binary), 'clickhouse-library-bridge') + elif not os.path.isabs(args.library_bridge_binary): + args.library_bridge_binary = os.path.abspath(os.path.join(CURRENT_WORK_DIR, args.library_bridge_binary)) if args.base_configs_dir: if not os.path.isabs(args.base_configs_dir): @@ -61,7 +66,7 @@ def check_args_and_update_paths(args): logging.info("base_configs_dir: {}, binary: {}, cases_dir: {} ".format(args.base_configs_dir, args.binary, args.cases_dir)) - for path in [args.binary, args.bridge_binary, args.base_configs_dir, args.cases_dir, CLICKHOUSE_ROOT]: + for path in [args.binary, args.odbc_bridge_binary, args.library_bridge_binary, args.base_configs_dir, args.cases_dir, CLICKHOUSE_ROOT]: if not os.path.exists(path): raise Exception("Path {} doesn't exist".format(path)) @@ -82,7 +87,8 @@ signal.signal(signal.SIGINT, docker_kill_handler_handler) # To run integration tests, the following artifacts should be sufficient: # - clickhouse binaries (env CLICKHOUSE_TESTS_SERVER_BIN_PATH or --binary arg) # - clickhouse default configs(config.xml, users.xml) from same version as binary (env CLICKHOUSE_TESTS_BASE_CONFIG_DIR or --base-configs-dir arg) -# - odbc bridge binary (env CLICKHOUSE_TESTS_ODBC_BRIDGE_BIN_PATH or --bridge-binary arg) +# - odbc bridge binary (env CLICKHOUSE_TESTS_ODBC_BRIDGE_BIN_PATH or --odbc-bridge-binary arg) +# - library bridge binary (env CLICKHOUSE_TESTS_LIBRARY_BRIDGE_BIN_PATH or --library-bridge-binary arg) # - tests/integration directory with all test cases and configs (env CLICKHOUSE_TESTS_INTEGRATION_PATH or --cases-dir) # # 1) --clickhouse-root is only used to determine other paths on default places @@ -98,10 +104,15 @@ if __name__ == "__main__": help="Path to clickhouse binary. For example /usr/bin/clickhouse") parser.add_argument( - "--bridge-binary", + "--odbc-bridge-binary", default=os.environ.get("CLICKHOUSE_TESTS_ODBC_BRIDGE_BIN_PATH", ""), help="Path to clickhouse-odbc-bridge binary. Defaults to clickhouse-odbc-bridge in the same dir as clickhouse.") + parser.add_argument( + "--library-bridge-binary", + default=os.environ.get("CLICKHOUSE_TESTS_LIBRARY_BRIDGE_BIN_PATH", ""), + help="Path to clickhouse-library-bridge binary. 
Defaults to clickhouse-library-bridge in the same dir as clickhouse.") + parser.add_argument( "--base-configs-dir", default=os.environ.get("CLICKHOUSE_TESTS_BASE_CONFIG_DIR"), @@ -185,14 +196,17 @@ if __name__ == "__main__": if sys.stdout.isatty() and sys.stdin.isatty(): tty = "-it" - cmd = "docker run {net} {tty} --rm --name {name} --privileged --volume={bridge_bin}:/clickhouse-odbc-bridge --volume={bin}:/clickhouse \ + cmd = "docker run {net} {tty} --rm --name {name} --privileged \ + --volume={odbc_bridge_bin}:/clickhouse-odbc-bridge --volume={bin}:/clickhouse \ + --volume={library_bridge_bin}:/clickhouse-library-bridge --volume={bin}:/clickhouse \ --volume={base_cfg}:/clickhouse-config --volume={cases_dir}:/ClickHouse/tests/integration \ --volume={src_dir}/Server/grpc_protos:/ClickHouse/src/Server/grpc_protos \ --volume={name}_volume:/var/lib/docker {env_tags} -e PYTEST_OPTS='{opts}' {img} {command}".format( net=net, tty=tty, bin=args.binary, - bridge_bin=args.bridge_binary, + odbc_bridge_bin=args.odbc_bridge_binary, + library_bridge_bin=args.library_bridge_binary, base_cfg=args.base_configs_dir, cases_dir=args.cases_dir, src_dir=args.src_dir, diff --git a/tests/integration/test_allowed_url_from_config/test.py b/tests/integration/test_allowed_url_from_config/test.py index 6442937c8f4..8001af35913 100644 --- a/tests/integration/test_allowed_url_from_config/test.py +++ b/tests/integration/test_allowed_url_from_config/test.py @@ -5,7 +5,8 @@ cluster = ClickHouseCluster(__file__) node1 = cluster.add_instance('node1', main_configs=['configs/config_with_hosts.xml']) node2 = cluster.add_instance('node2', main_configs=['configs/config_with_only_primary_hosts.xml']) node3 = cluster.add_instance('node3', main_configs=['configs/config_with_only_regexp_hosts.xml']) -node4 = cluster.add_instance('node4', main_configs=['configs/config_without_allowed_hosts.xml']) +node4 = cluster.add_instance('node4', main_configs=[]) # No `remote_url_allow_hosts` at all. 
+node5 = cluster.add_instance('node5', main_configs=['configs/config_without_allowed_hosts.xml']) node6 = cluster.add_instance('node6', main_configs=['configs/config_for_remote.xml']) node7 = cluster.add_instance('node7', main_configs=['configs/config_for_redirect.xml'], with_hdfs=True) @@ -50,11 +51,25 @@ def test_config_with_only_regexp_hosts(start_cluster): "CREATE TABLE table_test_3_4 (word String) Engine=URL('https://yandex2.ru', S3)") -def test_config_without_allowed_hosts(start_cluster): +def test_config_without_allowed_hosts_section(start_cluster): assert node4.query("CREATE TABLE table_test_4_1 (word String) Engine=URL('https://host:80', CSV)") == "" - assert node4.query("CREATE TABLE table_test_4_2 (word String) Engine=URL('https://host', HDFS)") == "" - assert node4.query("CREATE TABLE table_test_4_3 (word String) Engine=URL('https://yandex.ru', CSV)") == "" - assert node4.query("CREATE TABLE table_test_4_4 (word String) Engine=URL('ftp://something.com', S3)") == "" + assert node4.query("CREATE TABLE table_test_4_2 (word String) Engine=S3('https://host:80/bucket/key', CSV)") == "" + assert node4.query("CREATE TABLE table_test_4_3 (word String) Engine=URL('https://host', HDFS)") == "" + assert node4.query("CREATE TABLE table_test_4_4 (word String) Engine=URL('https://yandex.ru', CSV)") == "" + assert node4.query("CREATE TABLE table_test_4_5 (word String) Engine=URL('ftp://something.com', S3)") == "" + + +def test_config_without_allowed_hosts(start_cluster): + assert "not allowed" in node5.query_and_get_error( + "CREATE TABLE table_test_5_1 (word String) Engine=URL('https://host:80', CSV)") + assert "not allowed" in node5.query_and_get_error( + "CREATE TABLE table_test_5_2 (word String) Engine=S3('https://host:80/bucket/key', CSV)") + assert "not allowed" in node5.query_and_get_error( + "CREATE TABLE table_test_5_3 (word String) Engine=URL('https://host', HDFS)") + assert "not allowed" in node5.query_and_get_error( + "CREATE TABLE table_test_5_4 (word String) Engine=URL('https://yandex.ru', CSV)") + assert "not allowed" in node5.query_and_get_error( + "CREATE TABLE table_test_5_5 (word String) Engine=URL('ftp://something.com', S3)") def test_table_function_remote(start_cluster): diff --git a/tests/integration/test_live_view_over_distributed/__init__.py b/tests/integration/test_attach_without_fetching/__init__.py similarity index 100% rename from tests/integration/test_live_view_over_distributed/__init__.py rename to tests/integration/test_attach_without_fetching/__init__.py diff --git a/tests/integration/test_live_view_over_distributed/configs/remote_servers.xml b/tests/integration/test_attach_without_fetching/configs/remote_servers.xml similarity index 53% rename from tests/integration/test_live_view_over_distributed/configs/remote_servers.xml rename to tests/integration/test_attach_without_fetching/configs/remote_servers.xml index ebce4697529..7978f921b2e 100644 --- a/tests/integration/test_live_view_over_distributed/configs/remote_servers.xml +++ b/tests/integration/test_attach_without_fetching/configs/remote_servers.xml @@ -2,14 +2,17 @@ + true - node1 + node_1_1 9000 - - - node2 + node_1_2 + 9000 + + + node_1_3 9000 diff --git a/tests/integration/test_attach_without_fetching/test.py b/tests/integration/test_attach_without_fetching/test.py new file mode 100644 index 00000000000..8b0c1ffbc5c --- /dev/null +++ b/tests/integration/test_attach_without_fetching/test.py @@ -0,0 +1,130 @@ +import time +import pytest + +from helpers.cluster import ClickHouseCluster +from 
helpers.test_tools import assert_eq_with_retry +from helpers.network import PartitionManager +from helpers.corrupt_part_data_on_disk import corrupt_part_data_by_path + +def fill_node(node): + node.query( + ''' + CREATE TABLE test(n UInt32) + ENGINE = ReplicatedMergeTree('/clickhouse/tables/test', '{replica}') + ORDER BY n PARTITION BY n % 10; + '''.format(replica=node.name)) + +cluster = ClickHouseCluster(__file__) +configs =["configs/remote_servers.xml"] + +node_1 = cluster.add_instance('replica1', with_zookeeper=True, main_configs=configs) +node_2 = cluster.add_instance('replica2', with_zookeeper=True, main_configs=configs) +node_3 = cluster.add_instance('replica3', with_zookeeper=True, main_configs=configs) + +@pytest.fixture(scope="module") +def start_cluster(): + try: + cluster.start() + fill_node(node_1) + fill_node(node_2) + # the third node is filled after the DETACH query + yield cluster + + except Exception as ex: + print(ex) + + finally: + cluster.shutdown() + +def check_data(nodes, detached_parts): + for node in nodes: + print("> Replication queue for", node.name, "\n> table\treplica_name\tsource_replica\ttype\tposition\n", + node.query("SELECT table, replica_name, source_replica, type, position FROM system.replication_queue")) + + node.query("SYSTEM SYNC REPLICA test") + + print("> Checking data integrity for", node.name) + + for i in range(10): + assert_eq_with_retry(node, "SELECT count() FROM test WHERE n % 10 == " + str(i), + "0\n" if i in detached_parts else "10\n") + + assert_eq_with_retry(node, "SELECT count() FROM system.parts WHERE table='test'", + str(10 - len(detached_parts)) + "\n") + + res: str = node.query("SELECT * FROM test ORDER BY n") + + for other in nodes: + if other != node: + print("> Checking data consistency,", other.name, "vs", node.name) + assert_eq_with_retry(other, "SELECT * FROM test ORDER BY n", res) + + +# 1. Check that ALTER TABLE ATTACH PART|PARTITION does not fetch data from other replicas if it's present in the +# detached/ folder. +# 2. Check that ALTER TABLE ATTACH PART|PARTITION downloads the data from other replicas if the detached/ folder +# does not contain the part with the correct checksums. +def test_attach_without_fetching(start_cluster): + # Note here requests are used for both PARTITION and PART. This is done for better test diversity. + # The partition and part are used interchangeably which is not true in most cases. + # 0. Insert data on two replicas + node_1.query("INSERT INTO test SELECT * FROM numbers(100)") + + check_data([node_1, node_2], detached_parts=[]) + + # 1. + # This part will be fetched from other replicas as it would be missing in the detached/ folder and + # also attached locally. + node_1.query("ALTER TABLE test DETACH PART '0_0_0_0'") + # This partition will be just fetched from other replicas as the checksums won't match + # (we'll manually break the data). + node_1.query("ALTER TABLE test DETACH PARTITION 1") + # This partition will be just fetched from other replicas as the part data will be corrupted with one of the + # files missing. + node_1.query("ALTER TABLE test DETACH PARTITION 2") + + + check_data([node_1, node_2], detached_parts=[0, 1, 2]) + + # 2. Create the third replica + fill_node(node_3) + + # 3. Break the part data on the second node to corrupt the checksums. + # Replica 3 should download the data from replica 1 as there is no local data. + # Replica 2 should also download the data from 1 as the checksums won't match. 
+ print("Checking attach with corrupted part data with files missing") + + print("Before deleting:", node_2.exec_in_container(['bash', '-c', + 'cd {p} && ls *.bin'.format( + p="/var/lib/clickhouse/data/default/test/detached/2_0_0_0")], privileged=True)) + + node_2.exec_in_container(['bash', '-c', + 'cd {p} && rm -fr *.bin'.format( + p="/var/lib/clickhouse/data/default/test/detached/2_0_0_0")], privileged=True) + + node_1.query("ALTER TABLE test ATTACH PARTITION 2") + check_data([node_1, node_2, node_3], detached_parts=[0, 1]) + + # 4. Break the part data on the second node to corrupt the checksums. + # Replica 3 should download the data from replica 1 as there is no local data. + # Replica 2 should also download the data from 1 as the checksums won't match. + print("Checking attach with corrupted part data with all of the files present") + + corrupt_part_data_by_path(node_2, "/var/lib/clickhouse/data/default/test/detached/1_0_0_0") + + node_1.query("ALTER TABLE test ATTACH PARTITION 1") + check_data([node_1, node_2, node_3], detached_parts=[0]) + + # 5. Attach the first part and check if it has been fetched correctly. + # Replica 2 should attach the local data from detached/. + # Replica 3 should download the data from replica 2 as there is no local data and other connections are broken. + print("Checking attach with valid checksums") + + with PartitionManager() as pm: + # If something goes wrong and replica 2 wants to fetch data, the test will fail. + pm.partition_instances(node_2, node_1, action='REJECT --reject-with tcp-reset') + pm.partition_instances(node_1, node_3, action='REJECT --reject-with tcp-reset') + + node_1.query("ALTER TABLE test ATTACH PART '0_0_0_0'") + + check_data([node_1, node_2, node_3], detached_parts=[]) diff --git a/tests/integration/test_broken_part_during_merge/test.py b/tests/integration/test_broken_part_during_merge/test.py index 33719166f4a..910dbc1d1a9 100644 --- a/tests/integration/test_broken_part_during_merge/test.py +++ b/tests/integration/test_broken_part_during_merge/test.py @@ -3,6 +3,7 @@ import pytest from helpers.cluster import ClickHouseCluster from multiprocessing.dummy import Pool from helpers.network import PartitionManager +from helpers.corrupt_part_data_on_disk import corrupt_part_data_on_disk import time cluster = ClickHouseCluster(__file__) @@ -25,13 +26,6 @@ def started_cluster(): finally: cluster.shutdown() -def corrupt_data_part_on_disk(node, table, part_name): - part_path = node.query( - "SELECT path FROM system.parts WHERE table = '{}' and name = '{}'".format(table, part_name)).strip() - node.exec_in_container(['bash', '-c', - 'cd {p} && ls *.bin | head -n 1 | xargs -I{{}} sh -c \'echo "1" >> $1\' -- {{}}'.format( - p=part_path)], privileged=True) - def test_merge_and_part_corruption(started_cluster): node1.query("SYSTEM STOP REPLICATION QUEUES replicated_mt") @@ -43,7 +37,7 @@ def test_merge_and_part_corruption(started_cluster): # Need to corrupt "border part" (left or right). If we will corrupt something in the middle # clickhouse will not consider merge as broken, because we have parts with the same min and max # block numbers. 
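To make the "border part" comment above concrete: the merge is only considered broken when the corrupted part carries the minimal or maximal block number of the resulting range. Instead of hardcoding 'all_3_3_0', such a border part could in principle be selected from system.parts; this is an illustrative sketch under that assumption, not part of the patch:

```python
# Pick the active part with the largest max_block_number, i.e. the right
# border of any merge spanning the whole table, then corrupt it on disk.
part_to_break = node1.query(
    "SELECT name FROM system.parts "
    "WHERE table = 'replicated_mt' AND active "
    "ORDER BY max_block_number DESC LIMIT 1").strip()
corrupt_part_data_on_disk(node1, 'replicated_mt', part_to_break)
```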
- corrupt_data_part_on_disk(node1, 'replicated_mt', 'all_3_3_0') + corrupt_part_data_on_disk(node1, 'replicated_mt', 'all_3_3_0') with Pool(1) as p: def optimize_with_delay(x): diff --git a/programs/server/data/default/.gitignore b/tests/integration/test_catboost_model_config_reload/__init__.py similarity index 100% rename from programs/server/data/default/.gitignore rename to tests/integration/test_catboost_model_config_reload/__init__.py diff --git a/tests/integration/test_catboost_model_config_reload/config/catboost_lib.xml b/tests/integration/test_catboost_model_config_reload/config/catboost_lib.xml new file mode 100644 index 00000000000..745be7cebe6 --- /dev/null +++ b/tests/integration/test_catboost_model_config_reload/config/catboost_lib.xml @@ -0,0 +1,3 @@ + + /etc/clickhouse-server/model/libcatboostmodel.so + diff --git a/tests/integration/test_catboost_model_config_reload/config/models_config.xml b/tests/integration/test_catboost_model_config_reload/config/models_config.xml new file mode 100644 index 00000000000..7e62283a83c --- /dev/null +++ b/tests/integration/test_catboost_model_config_reload/config/models_config.xml @@ -0,0 +1,2 @@ + + diff --git a/tests/integration/test_catboost_model_config_reload/model/libcatboostmodel.so b/tests/integration/test_catboost_model_config_reload/model/libcatboostmodel.so new file mode 100755 index 00000000000..388d9f887b4 Binary files /dev/null and b/tests/integration/test_catboost_model_config_reload/model/libcatboostmodel.so differ diff --git a/tests/integration/test_catboost_model_config_reload/model/model.bin b/tests/integration/test_catboost_model_config_reload/model/model.bin new file mode 100644 index 00000000000..118e099d176 Binary files /dev/null and b/tests/integration/test_catboost_model_config_reload/model/model.bin differ diff --git a/tests/integration/test_catboost_model_config_reload/model/model_config.xml b/tests/integration/test_catboost_model_config_reload/model/model_config.xml new file mode 100644 index 00000000000..af9778097fa --- /dev/null +++ b/tests/integration/test_catboost_model_config_reload/model/model_config.xml @@ -0,0 +1,8 @@ + + + catboost + model1 + /etc/clickhouse-server/model/model.bin + 0 + + diff --git a/tests/integration/test_catboost_model_config_reload/model/model_config2.xml b/tests/integration/test_catboost_model_config_reload/model/model_config2.xml new file mode 100644 index 00000000000..b81120ec900 --- /dev/null +++ b/tests/integration/test_catboost_model_config_reload/model/model_config2.xml @@ -0,0 +1,8 @@ + + + catboost + model2 + /etc/clickhouse-server/model/model.bin + 0 + + diff --git a/tests/integration/test_catboost_model_config_reload/test.py b/tests/integration/test_catboost_model_config_reload/test.py new file mode 100644 index 00000000000..27652ba365e --- /dev/null +++ b/tests/integration/test_catboost_model_config_reload/test.py @@ -0,0 +1,61 @@ +import os +import sys +import time + +import pytest + +sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) +SCRIPT_DIR = os.path.dirname(os.path.realpath(__file__)) + +from helpers.cluster import ClickHouseCluster + +cluster = ClickHouseCluster(__file__) +node = cluster.add_instance('node', stay_alive=True, main_configs=['config/models_config.xml', 'config/catboost_lib.xml']) + + +def copy_file_to_container(local_path, dist_path, container_id): + os.system("docker cp {local} {cont_id}:{dist}".format(local=local_path, cont_id=container_id, dist=dist_path)) + + +config = ''' + 
/etc/clickhouse-server/model/{model_config} +''' + + +@pytest.fixture(scope="module") +def started_cluster(): + try: + cluster.start() + + copy_file_to_container(os.path.join(SCRIPT_DIR, 'model/.'), '/etc/clickhouse-server/model', node.docker_id) + node.restart_clickhouse() + + yield cluster + + finally: + cluster.shutdown() + + +def change_config(model_config): + node.replace_config("/etc/clickhouse-server/config.d/models_config.xml", config.format(model_config=model_config)) + node.query("SYSTEM RELOAD CONFIG;") + + +def test(started_cluster): + if node.is_built_with_memory_sanitizer(): + pytest.skip("Memory Sanitizer cannot work with third-party shared libraries") + + # Set config with the path to the first model. + change_config("model_config.xml") + + node.query("SELECT modelEvaluate('model1', 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11);") + + # Change path to the second model in config. + change_config("model_config2.xml") + + # Check that the new model is loaded. + node.query("SELECT modelEvaluate('model2', 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11);") + + # Check that the old model was unloaded. + node.query_and_get_error("SELECT modelEvaluate('model1', 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11);") + diff --git a/programs/server/metadata/default/.gitignore b/tests/integration/test_catboost_model_first_evaluate/__init__.py similarity index 100% rename from programs/server/metadata/default/.gitignore rename to tests/integration/test_catboost_model_first_evaluate/__init__.py diff --git a/tests/integration/test_catboost_model_first_evaluate/config/models_config.xml b/tests/integration/test_catboost_model_first_evaluate/config/models_config.xml new file mode 100644 index 00000000000..eab1458031b --- /dev/null +++ b/tests/integration/test_catboost_model_first_evaluate/config/models_config.xml @@ -0,0 +1,4 @@ + + /etc/clickhouse-server/model/libcatboostmodel.so + /etc/clickhouse-server/model/model_config.xml + diff --git a/tests/integration/test_catboost_model_first_evaluate/model/libcatboostmodel.so b/tests/integration/test_catboost_model_first_evaluate/model/libcatboostmodel.so new file mode 100755 index 00000000000..388d9f887b4 Binary files /dev/null and b/tests/integration/test_catboost_model_first_evaluate/model/libcatboostmodel.so differ diff --git a/tests/integration/test_catboost_model_first_evaluate/model/model.bin b/tests/integration/test_catboost_model_first_evaluate/model/model.bin new file mode 100644 index 00000000000..118e099d176 Binary files /dev/null and b/tests/integration/test_catboost_model_first_evaluate/model/model.bin differ diff --git a/tests/integration/test_catboost_model_first_evaluate/model/model_config.xml b/tests/integration/test_catboost_model_first_evaluate/model/model_config.xml new file mode 100644 index 00000000000..2c328167a94 --- /dev/null +++ b/tests/integration/test_catboost_model_first_evaluate/model/model_config.xml @@ -0,0 +1,8 @@ + + + catboost + titanic + /etc/clickhouse-server/model/model.bin + 0 + + diff --git a/tests/integration/test_catboost_model_first_evaluate/test.py b/tests/integration/test_catboost_model_first_evaluate/test.py new file mode 100644 index 00000000000..7e498ccfe21 --- /dev/null +++ b/tests/integration/test_catboost_model_first_evaluate/test.py @@ -0,0 +1,39 @@ +import os +import sys +import time + +import pytest + +sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) +SCRIPT_DIR = os.path.dirname(os.path.realpath(__file__)) + +from helpers.cluster import ClickHouseCluster + +cluster = ClickHouseCluster(__file__) +node = 
cluster.add_instance('node', stay_alive=True, main_configs=['config/models_config.xml']) + + +def copy_file_to_container(local_path, dist_path, container_id): + os.system("docker cp {local} {cont_id}:{dist}".format(local=local_path, cont_id=container_id, dist=dist_path)) + + +@pytest.fixture(scope="module") +def started_cluster(): + try: + cluster.start() + + copy_file_to_container(os.path.join(SCRIPT_DIR, 'model/.'), '/etc/clickhouse-server/model', node.docker_id) + node.restart_clickhouse() + + yield cluster + + finally: + cluster.shutdown() + + +def test(started_cluster): + if node.is_built_with_memory_sanitizer(): + pytest.skip("Memory Sanitizer cannot work with third-party shared libraries") + + node.query("select modelEvaluate('titanic', 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11);") + diff --git a/src/Processors/tests/CMakeLists.txt b/tests/integration/test_catboost_model_reload/__init__.py similarity index 100% rename from src/Processors/tests/CMakeLists.txt rename to tests/integration/test_catboost_model_reload/__init__.py diff --git a/tests/integration/test_catboost_model_reload/config/catboost_lib.xml b/tests/integration/test_catboost_model_reload/config/catboost_lib.xml new file mode 100644 index 00000000000..745be7cebe6 --- /dev/null +++ b/tests/integration/test_catboost_model_reload/config/catboost_lib.xml @@ -0,0 +1,3 @@ + + /etc/clickhouse-server/model/libcatboostmodel.so + diff --git a/tests/integration/test_catboost_model_reload/config/models_config.xml b/tests/integration/test_catboost_model_reload/config/models_config.xml new file mode 100644 index 00000000000..e84ca8b5285 --- /dev/null +++ b/tests/integration/test_catboost_model_reload/config/models_config.xml @@ -0,0 +1,3 @@ + + /etc/clickhouse-server/model/model_config.xml + diff --git a/tests/integration/test_catboost_model_reload/model/conjunction.cbm b/tests/integration/test_catboost_model_reload/model/conjunction.cbm new file mode 100644 index 00000000000..7b75fb5f886 Binary files /dev/null and b/tests/integration/test_catboost_model_reload/model/conjunction.cbm differ diff --git a/tests/integration/test_catboost_model_reload/model/disjunction.cbm b/tests/integration/test_catboost_model_reload/model/disjunction.cbm new file mode 100644 index 00000000000..8145c24637f Binary files /dev/null and b/tests/integration/test_catboost_model_reload/model/disjunction.cbm differ diff --git a/tests/integration/test_catboost_model_reload/model/libcatboostmodel.so b/tests/integration/test_catboost_model_reload/model/libcatboostmodel.so new file mode 100755 index 00000000000..388d9f887b4 Binary files /dev/null and b/tests/integration/test_catboost_model_reload/model/libcatboostmodel.so differ diff --git a/tests/integration/test_catboost_model_reload/model/model_config.xml b/tests/integration/test_catboost_model_reload/model/model_config.xml new file mode 100644 index 00000000000..7cbda165ce9 --- /dev/null +++ b/tests/integration/test_catboost_model_reload/model/model_config.xml @@ -0,0 +1,8 @@ + + + catboost + model + /etc/clickhouse-server/model/model.cbm + 0 + + diff --git a/tests/integration/test_catboost_model_reload/test.py b/tests/integration/test_catboost_model_reload/test.py new file mode 100644 index 00000000000..3d88c19cd2c --- /dev/null +++ b/tests/integration/test_catboost_model_reload/test.py @@ -0,0 +1,80 @@ +import os +import sys +import time + +import pytest + +sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) +SCRIPT_DIR = os.path.dirname(os.path.realpath(__file__)) + +from helpers.cluster 
import ClickHouseCluster + +cluster = ClickHouseCluster(__file__) +node = cluster.add_instance('node', stay_alive=True, main_configs=['config/models_config.xml', 'config/catboost_lib.xml']) + +def copy_file_to_container(local_path, dist_path, container_id): + os.system("docker cp {local} {cont_id}:{dist}".format(local=local_path, cont_id=container_id, dist=dist_path)) + +@pytest.fixture(scope="module") +def started_cluster(): + try: + cluster.start() + + copy_file_to_container(os.path.join(SCRIPT_DIR, 'model/.'), '/etc/clickhouse-server/model', node.docker_id) + node.query("CREATE TABLE binary (x UInt64, y UInt64) ENGINE = TinyLog()") + node.query("INSERT INTO binary VALUES (1, 1), (1, 0), (0, 1), (0, 0)") + + node.restart_clickhouse() + + yield cluster + + finally: + cluster.shutdown() + +def test_model_reload(started_cluster): + if node.is_built_with_memory_sanitizer(): + pytest.skip("Memory Sanitizer cannot work with third-party shared libraries") + + node.exec_in_container(["bash", "-c", "rm -f /etc/clickhouse-server/model/model.cbm"]) + node.exec_in_container(["bash", "-c", "ln /etc/clickhouse-server/model/conjunction.cbm /etc/clickhouse-server/model/model.cbm"]) + node.query("SYSTEM RELOAD MODEL model") + + result = node.query(""" + WITH modelEvaluate('model', toFloat64(x), toFloat64(y)) as prediction, exp(prediction) / (1 + exp(prediction)) as probability + SELECT if(probability > 0.5, 1, 0) FROM binary; + """) + assert result == '1\n0\n0\n0\n' + + node.exec_in_container(["bash", "-c", "rm /etc/clickhouse-server/model/model.cbm"]) + node.exec_in_container(["bash", "-c", "ln /etc/clickhouse-server/model/disjunction.cbm /etc/clickhouse-server/model/model.cbm"]) + node.query("SYSTEM RELOAD MODEL model") + + result = node.query(""" + WITH modelEvaluate('model', toFloat64(x), toFloat64(y)) as prediction, exp(prediction) / (1 + exp(prediction)) as probability + SELECT if(probability > 0.5, 1, 0) FROM binary; + """) + assert result == '1\n1\n1\n0\n' + +def test_models_reload(started_cluster): + if node.is_built_with_memory_sanitizer(): + pytest.skip("Memory Sanitizer cannot work with third-party shared libraries") + + node.exec_in_container(["bash", "-c", "rm -f /etc/clickhouse-server/model/model.cbm"]) + node.exec_in_container(["bash", "-c", "ln /etc/clickhouse-server/model/conjunction.cbm /etc/clickhouse-server/model/model.cbm"]) + node.query("SYSTEM RELOAD MODELS") + + result = node.query(""" + WITH modelEvaluate('model', toFloat64(x), toFloat64(y)) as prediction, exp(prediction) / (1 + exp(prediction)) as probability + SELECT if(probability > 0.5, 1, 0) FROM binary; + """) + assert result == '1\n0\n0\n0\n' + + node.exec_in_container(["bash", "-c", "rm /etc/clickhouse-server/model/model.cbm"]) + node.exec_in_container(["bash", "-c", "ln /etc/clickhouse-server/model/disjunction.cbm /etc/clickhouse-server/model/model.cbm"]) + node.query("SYSTEM RELOAD MODELS") + + result = node.query(""" + WITH modelEvaluate('model', toFloat64(x), toFloat64(y)) as prediction, exp(prediction) / (1 + exp(prediction)) as probability + SELECT if(probability > 0.5, 1, 0) FROM binary; + """) + assert result == '1\n1\n1\n0\n' diff --git a/tests/integration/test_cluster_copier/configs/users.xml b/tests/integration/test_cluster_copier/configs/users.xml index e742d4f05a6..d27ca56eec7 100644 --- a/tests/integration/test_cluster_copier/configs/users.xml +++ b/tests/integration/test_cluster_copier/configs/users.xml @@ -17,6 +17,14 @@ default default + + 12345678 + + ::/0 + + default + default + diff --git 
a/tests/integration/test_cluster_copier/task_self_copy.xml b/tests/integration/test_cluster_copier/task_self_copy.xml new file mode 100644 index 00000000000..e0e35ccfe99 --- /dev/null +++ b/tests/integration/test_cluster_copier/task_self_copy.xml @@ -0,0 +1,64 @@ + + + 9440 + + + + false + + s0_0_0 + 9000 + dbuser + 12345678 + 0 + + + + + + + false + + s0_0_0 + 9000 + dbuser + 12345678 + 0 + + + + + + 2 + + + 1 + + + + 0 + + + + 3 + 1 + + + + + source_cluster + db1 + source_table + + destination_cluster + db2 + destination_table + + + ENGINE = MergeTree PARTITION BY a ORDER BY a SETTINGS index_granularity = 8192 + + + rand() + + + \ No newline at end of file diff --git a/tests/integration/test_cluster_copier/test.py b/tests/integration/test_cluster_copier/test.py index d87969630cd..57f9d150c8d 100644 --- a/tests/integration/test_cluster_copier/test.py +++ b/tests/integration/test_cluster_copier/test.py @@ -251,6 +251,31 @@ class Task_non_partitioned_table: instance = cluster.instances['s1_1_0'] instance.query("DROP TABLE copier_test1_1") +class Task_self_copy: + + def __init__(self, cluster): + self.cluster = cluster + self.zk_task_path = "/clickhouse-copier/task_self_copy" + self.copier_task_config = open(os.path.join(CURRENT_TEST_DIR, 'task_self_copy.xml'), 'r').read() + + def start(self): + instance = cluster.instances['s0_0_0'] + instance.query("CREATE DATABASE db1;") + instance.query( + "CREATE TABLE db1.source_table (`a` Int8, `b` String, `c` Int8) ENGINE = MergeTree PARTITION BY a ORDER BY a SETTINGS index_granularity = 8192") + instance.query("CREATE DATABASE db2;") + instance.query( + "CREATE TABLE db2.destination_table (`a` Int8, `b` String, `c` Int8) ENGINE = MergeTree PARTITION BY a ORDER BY a SETTINGS index_granularity = 8192") + instance.query("INSERT INTO db1.source_table VALUES (1, 'ClickHouse', 1);") + instance.query("INSERT INTO db1.source_table VALUES (2, 'Copier', 2);") + + def check(self): + instance = cluster.instances['s0_0_0'] + assert TSV(instance.query("SELECT * FROM db2.destination_table ORDER BY a")) == TSV(instance.query("SELECT * FROM db1.source_table ORDER BY a")) + instance = cluster.instances['s0_0_0'] + instance.query("DROP DATABASE db1 SYNC") + instance.query("DROP DATABASE db2 SYNC") + def execute_task(task, cmd_options): task.start() @@ -380,9 +405,14 @@ def test_no_index(started_cluster): def test_no_arg(started_cluster): execute_task(Task_no_arg(started_cluster), []) + def test_non_partitioned_table(started_cluster): execute_task(Task_non_partitioned_table(started_cluster), []) + +def test_self_copy(started_cluster): + execute_task(Task_self_copy(started_cluster), []) + if __name__ == '__main__': with contextmanager(started_cluster)() as cluster: for name, instance in list(cluster.instances.items()): diff --git a/tests/integration/test_dictionaries_complex_key_cache_string/configs/dictionaries/ssd_complex_key_cache_string.xml b/tests/integration/test_dictionaries_complex_key_cache_string/configs/dictionaries/ssd_complex_key_cache_string.xml index 85f811d2d85..c8fdbcbe0ef 100644 --- a/tests/integration/test_dictionaries_complex_key_cache_string/configs/dictionaries/ssd_complex_key_cache_string.xml +++ b/tests/integration/test_dictionaries_complex_key_cache_string/configs/dictionaries/ssd_complex_key_cache_string.xml @@ -42,7 +42,6 @@ 131072 1048576 /etc/clickhouse/dictionaries/radars - 1048576 1 diff --git a/tests/integration/test_dictionaries_ddl/test.py b/tests/integration/test_dictionaries_ddl/test.py index 3ea64383fbf..d96d6864ba3 100644 --- 
a/tests/integration/test_dictionaries_ddl/test.py +++ b/tests/integration/test_dictionaries_ddl/test.py @@ -288,6 +288,23 @@ def test_clickhouse_remote(started_cluster): time.sleep(0.5) node3.query("detach dictionary if exists test.clickhouse_remote") + + with pytest.raises(QueryRuntimeException): + node3.query(""" + CREATE DICTIONARY test.clickhouse_remote( + id UInt64, + SomeValue1 UInt8, + SomeValue2 String + ) + PRIMARY KEY id + LAYOUT(FLAT()) + SOURCE(CLICKHOUSE(HOST 'node4' PORT 9000 USER 'default' PASSWORD 'default' TABLE 'xml_dictionary_table' DB 'test')) + LIFETIME(MIN 1 MAX 10) + """) + + node3.query("attach dictionary test.clickhouse_remote") + node3.query("drop dictionary test.clickhouse_remote") + node3.query(""" CREATE DICTIONARY test.clickhouse_remote( id UInt64, diff --git a/tests/integration/test_dictionaries_postgresql/configs/dictionaries/postgres_dict.xml b/tests/integration/test_dictionaries_postgresql/configs/dictionaries/postgres_dict.xml new file mode 100644 index 00000000000..4ee07d0972a --- /dev/null +++ b/tests/integration/test_dictionaries_postgresql/configs/dictionaries/postgres_dict.xml @@ -0,0 +1,83 @@ + + + + dict0 + + + clickhouse + postgres1 + 5432 + postgres + mysecretpassword + test0
+ SELECT value FROM test0 WHERE id = 0 +
+ + + + + + + id + UInt32 + + + id + UInt32 + + + + value + UInt32 + + + + 1 +
+ + dict1 + + + clickhouse + postgres + mysecretpassword + test1
+ + postgres1 + 3 + 5432 + + + postgres2 + 5433 + 1 + + + postgres2 + 5432 + 2 + +
+ + + + + + + id + UInt32 + + + id + UInt32 + + + + value + UInt32 + + + + 1 +
+
diff --git a/tests/integration/test_dictionaries_postgresql/configs/postgres_dict.xml b/tests/integration/test_dictionaries_postgresql/configs/postgres_dict.xml deleted file mode 100644 index 2572930a798..00000000000 --- a/tests/integration/test_dictionaries_postgresql/configs/postgres_dict.xml +++ /dev/null @@ -1,37 +0,0 @@ - - - - dict0 - - - clickhouse - postgres1 - 5432 - postgres - mysecretpassword - test0
- SELECT value FROM test0 WHERE id = 0 -
- - - - - - - id - UInt32 - - - id - UInt32 - - - - value - UInt32 - - - - 1 -
-
diff --git a/tests/integration/test_dictionaries_postgresql/test.py b/tests/integration/test_dictionaries_postgresql/test.py index b83c00409af..5b3b5a5aa45 100644 --- a/tests/integration/test_dictionaries_postgresql/test.py +++ b/tests/integration/test_dictionaries_postgresql/test.py @@ -6,7 +6,10 @@ from helpers.cluster import ClickHouseCluster from psycopg2.extensions import ISOLATION_LEVEL_AUTOCOMMIT cluster = ClickHouseCluster(__file__) -node1 = cluster.add_instance('node1', main_configs=['configs/config.xml', 'configs/postgres_dict.xml', 'configs/log_conf.xml'], with_postgres=True) +node1 = cluster.add_instance('node1', main_configs=[ + 'configs/config.xml', + 'configs/dictionaries/postgres_dict.xml', + 'configs/log_conf.xml'], with_postgres=True, with_postgres_cluster=True) postgres_dict_table_template = """ CREATE TABLE IF NOT EXISTS {} ( @@ -18,11 +21,12 @@ click_dict_table_template = """ ) ENGINE = Dictionary({}) """ -def get_postgres_conn(database=False): +def get_postgres_conn(port=5432, database=False): if database == True: - conn_string = "host='localhost' dbname='clickhouse' user='postgres' password='mysecretpassword'" + conn_string = "host='localhost' port={} dbname='clickhouse' user='postgres' password='mysecretpassword'".format(port) else: - conn_string = "host='localhost' user='postgres' password='mysecretpassword'" + conn_string = "host='localhost' port={} user='postgres' password='mysecretpassword'".format(port) + conn = psycopg2.connect(conn_string) conn.set_isolation_level(ISOLATION_LEVEL_AUTOCOMMIT) conn.autocommit = True @@ -32,15 +36,13 @@ def create_postgres_db(conn, name): cursor = conn.cursor() cursor.execute("CREATE DATABASE {}".format(name)) -def create_postgres_table(conn, table_name): - cursor = conn.cursor() +def create_postgres_table(cursor, table_name): cursor.execute(postgres_dict_table_template.format(table_name)) -def create_and_fill_postgres_table(table_name): - conn = get_postgres_conn(True) - create_postgres_table(conn, table_name) +def create_and_fill_postgres_table(cursor, table_name, host='postgres1', port=5432): + create_postgres_table(cursor, table_name) # Fill postgres table using clickhouse postgres table function and check - table_func = '''postgresql('postgres1:5432', 'clickhouse', '{}', 'postgres', 'mysecretpassword')'''.format(table_name) + table_func = '''postgresql('{}:{}', 'clickhouse', '{}', 'postgres', 'mysecretpassword')'''.format(host, port, table_name) node1.query('''INSERT INTO TABLE FUNCTION {} SELECT number, number from numbers(10000) '''.format(table_func, table_name)) result = node1.query("SELECT count() FROM {}".format(table_func)) @@ -54,10 +56,16 @@ def create_dict(table_name, index=0): def started_cluster(): try: cluster.start() - postgres_conn = get_postgres_conn() node1.query("CREATE DATABASE IF NOT EXISTS test") - print("postgres connected") + + postgres_conn = get_postgres_conn(port=5432) + print("postgres1 connected") create_postgres_db(postgres_conn, 'clickhouse') + + postgres_conn = get_postgres_conn(port=5421) + print("postgres2 connected") + create_postgres_db(postgres_conn, 'clickhouse') + yield cluster finally: @@ -65,25 +73,28 @@ def started_cluster(): def test_load_dictionaries(started_cluster): - conn = get_postgres_conn(True) + conn = get_postgres_conn(database=True) cursor = conn.cursor() table_name = 'test0' - create_and_fill_postgres_table(table_name) + create_and_fill_postgres_table(cursor, table_name) create_dict(table_name) dict_name = 'dict0' - node1.query("SYSTEM RELOAD DICTIONARIES") + 
node1.query("SYSTEM RELOAD DICTIONARY {}".format(dict_name)) assert node1.query("SELECT count() FROM `test`.`dict_table_{}`".format(table_name)).rstrip() == '10000' assert node1.query("SELECT dictGetUInt32('{}', 'id', toUInt64(0))".format(dict_name)) == '0\n' assert node1.query("SELECT dictGetUInt32('{}', 'value', toUInt64(9999))".format(dict_name)) == '9999\n' + cursor.execute("DROP TABLE IF EXISTS {}".format(table_name)) + node1.query("DROP TABLE IF EXISTS {}".format(table_name)) + node1.query("DROP DICTIONARY IF EXISTS {}".format(dict_name)) def test_invalidate_query(started_cluster): - conn = get_postgres_conn(True) + conn = get_postgres_conn(database=True) cursor = conn.cursor() table_name = 'test0' - create_and_fill_postgres_table(table_name) + create_and_fill_postgres_table(cursor, table_name) # invalidate query: SELECT value FROM test0 WHERE id = 0 dict_name = 'dict0' @@ -112,6 +123,40 @@ def test_invalidate_query(started_cluster): assert node1.query("SELECT dictGetUInt32('{}', 'value', toUInt64(0))".format(dict_name)) == '2\n' assert node1.query("SELECT dictGetUInt32('{}', 'value', toUInt64(1))".format(dict_name)) == '2\n' + node1.query("DROP TABLE IF EXISTS {}".format(table_name)) + node1.query("DROP DICTIONARY IF EXISTS {}".format(dict_name)) + cursor.execute("DROP TABLE IF EXISTS {}".format(table_name)) + + +def test_dictionary_with_replicas(started_cluster): + conn1 = get_postgres_conn(port=5432, database=True) + cursor1 = conn1.cursor() + conn2 = get_postgres_conn(port=5421, database=True) + cursor2 = conn2.cursor() + + create_postgres_table(cursor1, 'test1') + create_postgres_table(cursor2, 'test1') + + cursor1.execute('INSERT INTO test1 select i, i from generate_series(0, 99) as t(i);'); + cursor2.execute('INSERT INTO test1 select i, i from generate_series(100, 199) as t(i);'); + + create_dict('test1', 1) + result = node1.query("SELECT * FROM `test`.`dict_table_test1` ORDER BY id") + + # priority 0 - non running port + assert node1.contains_in_log('Unable to setup connection to postgres2:5433*') + + # priority 1 - postgres2, table contains rows with values 100-200 + # priority 2 - postgres1, table contains rows with values 0-100 + expected = node1.query("SELECT number, number FROM numbers(100, 100)") + assert(result == expected) + + cursor1.execute("DROP TABLE IF EXISTS test1") + cursor2.execute("DROP TABLE IF EXISTS test1") + + node1.query("DROP TABLE IF EXISTS test1") + node1.query("DROP DICTIONARY IF EXISTS dict1") + if __name__ == '__main__': cluster.start() diff --git a/tests/integration/test_dictionaries_update_and_reload/test.py b/tests/integration/test_dictionaries_update_and_reload/test.py index 5c8abcda38e..533a29dc245 100644 --- a/tests/integration/test_dictionaries_update_and_reload/test.py +++ b/tests/integration/test_dictionaries_update_and_reload/test.py @@ -141,7 +141,8 @@ def test_reload_after_loading(started_cluster): time.sleep(1) # see the comment above replace_in_file_in_container('/etc/clickhouse-server/config.d/executable.xml', '81', '82') replace_in_file_in_container('/etc/clickhouse-server/config.d/file.txt', '101', '102') - query("SYSTEM RELOAD DICTIONARIES") + query("SYSTEM RELOAD DICTIONARY 'file'") + query("SYSTEM RELOAD DICTIONARY 'executable'") assert query("SELECT dictGetInt32('executable', 'a', toUInt64(7))") == "82\n" assert query("SELECT dictGetInt32('file', 'a', toUInt64(9))") == "102\n" diff --git a/tests/queries/0_stateless/00992_system_parts_race_condition_zookeeper.reference 
b/tests/integration/test_dictionaries_update_field/__init__.py similarity index 100% rename from tests/queries/0_stateless/00992_system_parts_race_condition_zookeeper.reference rename to tests/integration/test_dictionaries_update_field/__init__.py diff --git a/tests/integration/test_dictionaries_update_field/configs/config.xml b/tests/integration/test_dictionaries_update_field/configs/config.xml new file mode 100644 index 00000000000..a1518083be3 --- /dev/null +++ b/tests/integration/test_dictionaries_update_field/configs/config.xml @@ -0,0 +1,30 @@ + + + + trace + /var/log/clickhouse-server/clickhouse-server.log + /var/log/clickhouse-server/clickhouse-server.err.log + 1000M + 10 + + + 9000 + 127.0.0.1 + + + + true + none + + AcceptCertificateHandler + + + + + 500 + 5368709120 + ./clickhouse/ + users.xml + + /etc/clickhouse-server/config.d/*.xml + diff --git a/tests/integration/test_dictionaries_update_field/configs/users.xml b/tests/integration/test_dictionaries_update_field/configs/users.xml new file mode 100644 index 00000000000..6061af8e33d --- /dev/null +++ b/tests/integration/test_dictionaries_update_field/configs/users.xml @@ -0,0 +1,23 @@ + + + + + + + + + + + + ::/0 + + default + default + + + + + + + + diff --git a/tests/integration/test_dictionaries_update_field/test.py b/tests/integration/test_dictionaries_update_field/test.py new file mode 100644 index 00000000000..c52c836b4f7 --- /dev/null +++ b/tests/integration/test_dictionaries_update_field/test.py @@ -0,0 +1,77 @@ +## sudo -H pip install PyMySQL +import time + +import pytest +from helpers.cluster import ClickHouseCluster +from helpers.cluster import ClickHouseKiller +from helpers.network import PartitionManager + +cluster = ClickHouseCluster(__file__) + +node = cluster.add_instance('main_node', main_configs=[]) + +@pytest.fixture(scope="module") +def started_cluster(): + try: + cluster.start() + + node.query( + """ + CREATE TABLE table_for_update_field_dictionary + ( + key UInt64, + value String, + last_insert_time DateTime + ) + ENGINE = TinyLog; + """ + ) + + yield cluster + + finally: + cluster.shutdown() + +@pytest.mark.parametrize("dictionary_name,dictionary_type", [ + ("flat_update_field_dictionary", "FLAT"), + ("simple_key_hashed_update_field_dictionary", "HASHED"), + ("complex_key_hashed_update_field_dictionary", "HASHED") +]) +def test_update_field(started_cluster, dictionary_name, dictionary_type): + create_dictionary_query = """ + CREATE DICTIONARY {dictionary_name} + ( + key UInt64, + value String, + last_insert_time DateTime + ) + PRIMARY KEY key + SOURCE(CLICKHOUSE(table 'table_for_update_field_dictionary' update_field 'last_insert_time')) + LAYOUT({dictionary_type}()) + LIFETIME(1); + """.format(dictionary_name=dictionary_name, dictionary_type=dictionary_type) + + node.query(create_dictionary_query) + + node.query("INSERT INTO table_for_update_field_dictionary VALUES (1, 'First', now());") + query_result = node.query("SELECT key, value FROM {dictionary_name} ORDER BY key ASC".format(dictionary_name=dictionary_name)) + assert query_result == '1\tFirst\n' + + node.query("INSERT INTO table_for_update_field_dictionary VALUES (2, 'Second', now());") + time.sleep(10) + + query_result = node.query("SELECT key, value FROM {dictionary_name} ORDER BY key ASC".format(dictionary_name=dictionary_name)) + + assert query_result == '1\tFirst\n2\tSecond\n' + + node.query("INSERT INTO table_for_update_field_dictionary VALUES (2, 'SecondUpdated', now());") + node.query("INSERT INTO table_for_update_field_dictionary VALUES (3, 
'Third', now());") + + time.sleep(10) + + query_result = node.query("SELECT key, value FROM {dictionary_name} ORDER BY key ASC".format(dictionary_name=dictionary_name)) + + assert query_result == '1\tFirst\n2\tSecondUpdated\n3\tThird\n' + + node.query("TRUNCATE TABLE table_for_update_field_dictionary") + node.query("DROP DICTIONARY {dictionary_name}".format(dictionary_name=dictionary_name)) diff --git a/tests/integration/test_distributed_inter_server_secret/test.py b/tests/integration/test_distributed_inter_server_secret/test.py index b1daf2271d0..1a0e5a3dd91 100644 --- a/tests/integration/test_distributed_inter_server_secret/test.py +++ b/tests/integration/test_distributed_inter_server_secret/test.py @@ -97,12 +97,14 @@ def test_insecure(): n1.query('SELECT * FROM dist_insecure') def test_insecure_insert_async(): + n1.query("TRUNCATE TABLE data") n1.query('INSERT INTO dist_insecure SELECT * FROM numbers(2)') n1.query('SYSTEM FLUSH DISTRIBUTED ON CLUSTER insecure dist_insecure') assert int(n1.query('SELECT count() FROM dist_insecure')) == 2 n1.query('TRUNCATE TABLE data ON CLUSTER insecure') def test_insecure_insert_sync(): + n1.query("TRUNCATE TABLE data") n1.query('INSERT INTO dist_insecure SELECT * FROM numbers(2)', settings={'insert_distributed_sync': 1}) assert int(n1.query('SELECT count() FROM dist_insecure')) == 2 n1.query('TRUNCATE TABLE data ON CLUSTER secure') @@ -111,12 +113,14 @@ def test_secure(): n1.query('SELECT * FROM dist_secure') def test_secure_insert_async(): + n1.query("TRUNCATE TABLE data") n1.query('INSERT INTO dist_secure SELECT * FROM numbers(2)') n1.query('SYSTEM FLUSH DISTRIBUTED ON CLUSTER secure dist_secure') assert int(n1.query('SELECT count() FROM dist_secure')) == 2 n1.query('TRUNCATE TABLE data ON CLUSTER secure') def test_secure_insert_sync(): + n1.query("TRUNCATE TABLE data") n1.query('INSERT INTO dist_secure SELECT * FROM numbers(2)', settings={'insert_distributed_sync': 1}) assert int(n1.query('SELECT count() FROM dist_secure')) == 2 n1.query('TRUNCATE TABLE data ON CLUSTER secure') @@ -126,6 +130,7 @@ def test_secure_insert_sync(): # Buffer() flush happens with global context, that does not have user # And so Context::user/ClientInfo::current_user/ClientInfo::initial_user will be empty def test_secure_insert_buffer_async(): + n1.query("TRUNCATE TABLE data") n1.query('INSERT INTO dist_secure_buffer SELECT * FROM numbers(2)') n1.query('SYSTEM FLUSH DISTRIBUTED ON CLUSTER secure dist_secure') # no Buffer flush happened @@ -141,6 +146,7 @@ def test_secure_disagree(): n1.query('SELECT * FROM dist_secure_disagree') def test_secure_disagree_insert(): + n1.query("TRUNCATE TABLE data") n1.query('INSERT INTO dist_secure_disagree SELECT * FROM numbers(2)') with pytest.raises(QueryRuntimeException, match='.*Hash mismatch.*'): n1.query('SYSTEM FLUSH DISTRIBUTED ON CLUSTER secure_disagree dist_secure_disagree') diff --git a/tests/queries/0_stateless/01641_memory_tracking_insert_optimize_long.reference b/tests/integration/test_distributed_queries_stress/__init__.py similarity index 100% rename from tests/queries/0_stateless/01641_memory_tracking_insert_optimize_long.reference rename to tests/integration/test_distributed_queries_stress/__init__.py diff --git a/tests/integration/test_distributed_queries_stress/configs/remote_servers.xml b/tests/integration/test_distributed_queries_stress/configs/remote_servers.xml new file mode 100644 index 00000000000..7d00cebccfc --- /dev/null +++ b/tests/integration/test_distributed_queries_stress/configs/remote_servers.xml @@ -0,0 
+1,42 @@ + + 1000 + + + + + true + + node1_r1 + 9000 + + + node1_r2 + 9000 + + + + + + true + + node1_r1 + 9000 + + + node1_r2 + 9000 + + + + + node2_r1 + 9000 + + + node2_r2 + 9000 + + + + + diff --git a/tests/integration/test_distributed_queries_stress/test.py b/tests/integration/test_distributed_queries_stress/test.py new file mode 100644 index 00000000000..dcc4943f7e6 --- /dev/null +++ b/tests/integration/test_distributed_queries_stress/test.py @@ -0,0 +1,103 @@ +# pylint: disable=redefined-outer-name +# pylint: disable=unused-argument +# pylint: disable=line-too-long + +import shlex +import itertools +import pytest +from helpers.cluster import ClickHouseCluster + +cluster = ClickHouseCluster(__file__) +node1_r1 = cluster.add_instance('node1_r1', main_configs=['configs/remote_servers.xml']) +node2_r1 = cluster.add_instance('node2_r1', main_configs=['configs/remote_servers.xml']) +node1_r2 = cluster.add_instance('node1_r2', main_configs=['configs/remote_servers.xml']) +node2_r2 = cluster.add_instance('node2_r2', main_configs=['configs/remote_servers.xml']) + +def run_benchmark(payload, settings): + node1_r1.exec_in_container([ + 'bash', '-c', 'echo {} | '.format(shlex.quote(payload.strip())) + ' '.join([ + 'clickhouse', 'benchmark', + '--concurrency=100', + '--cumulative', + '--delay=0', + # NOTE: with the current matrix even 3 seconds is huge... + '--timelimit=3', + # tune some basic timeouts + '--hedged_connection_timeout_ms=200', + '--connect_timeout_with_failover_ms=200', + '--connections_with_failover_max_tries=5', + *settings, + ]) + ]) + +@pytest.fixture(scope='module') +def started_cluster(): + try: + cluster.start() + + for _, instance in cluster.instances.items(): + instance.query(""" + create table if not exists data ( + key Int, + /* just to increase block size */ + v1 UInt64, + v2 UInt64, + v3 UInt64, + v4 UInt64, + v5 UInt64, + v6 UInt64, + v7 UInt64, + v8 UInt64, + v9 UInt64, + v10 UInt64, + v11 UInt64, + v12 UInt64 + ) Engine=MergeTree() order by key partition by key%5; + insert into data (key) select * from numbers(10); + + create table if not exists dist_one as data engine=Distributed(one_shard, currentDatabase(), data, key); + create table if not exists dist_one_over_dist as data engine=Distributed(one_shard, currentDatabase(), dist_one, yandexConsistentHash(key, 2)); + + create table if not exists dist_two as data engine=Distributed(two_shards, currentDatabase(), data, key); + create table if not exists dist_two_over_dist as data engine=Distributed(two_shards, currentDatabase(), dist_two, yandexConsistentHash(key, 2)); + """) + yield cluster + finally: + cluster.shutdown() + +# since it includes the started_cluster fixture at first start +@pytest.mark.timeout(60) +@pytest.mark.parametrize('table,settings', itertools.product( + [ # tables + 'dist_one', + 'dist_one_over_dist', + 'dist_two', + 'dist_two_over_dist', + ], + [ # settings + *list(itertools.combinations([ + '', # defaults + '--prefer_localhost_replica=0', + '--async_socket_for_remote=0', + '--use_hedged_requests=0', + '--optimize_skip_unused_shards=1', + '--distributed_group_by_no_merge=2', + '--optimize_distributed_group_by_sharding_key=1', + + # TODO: enlarge the test matrix (but first make those values accept ms): + # + # - sleep_in_send_tables_status + # - sleep_in_send_data + ], 2)) + # TODO: more combinations than just 2 + ], +)) +def test_stress_distributed(table, settings, started_cluster): + payload = f''' + select * from {table} where key = 0; + select * from {table} where key = 1; + select * from {table} where 
key = 2; + select * from {table} where key = 3; + select * from {table}; + ''' + run_benchmark(payload, settings) diff --git a/tests/integration/test_drop_replica/test.py b/tests/integration/test_drop_replica/test.py index f3af9dcb980..eb67a25f9f5 100644 --- a/tests/integration/test_drop_replica/test.py +++ b/tests/integration/test_drop_replica/test.py @@ -49,7 +49,6 @@ def fill_nodes(nodes, shard): cluster = ClickHouseCluster(__file__) - node_1_1 = cluster.add_instance('node_1_1', with_zookeeper=True, main_configs=['configs/remote_servers.xml']) node_1_2 = cluster.add_instance('node_1_2', with_zookeeper=True, main_configs=['configs/remote_servers.xml']) node_1_3 = cluster.add_instance('node_1_3', with_zookeeper=True, main_configs=['configs/remote_servers.xml']) @@ -72,12 +71,11 @@ def start_cluster(): def test_drop_replica(start_cluster): - for i in range(100): - node_1_1.query("INSERT INTO test.test_table VALUES (1, {})".format(i)) - node_1_1.query("INSERT INTO test1.test_table VALUES (1, {})".format(i)) - node_1_1.query("INSERT INTO test2.test_table VALUES (1, {})".format(i)) - node_1_1.query("INSERT INTO test3.test_table VALUES (1, {})".format(i)) - node_1_1.query("INSERT INTO test4.test_table VALUES (1, {})".format(i)) + node_1_1.query("INSERT INTO test.test_table SELECT number, toString(number) FROM numbers(100)") + node_1_1.query("INSERT INTO test1.test_table SELECT number, toString(number) FROM numbers(100)") + node_1_1.query("INSERT INTO test2.test_table SELECT number, toString(number) FROM numbers(100)") + node_1_1.query("INSERT INTO test3.test_table SELECT number, toString(number) FROM numbers(100)") + node_1_1.query("INSERT INTO test4.test_table SELECT number, toString(number) FROM numbers(100)") zk = cluster.get_kazoo_client('zoo1') assert "can't drop local replica" in node_1_1.query_and_get_error("SYSTEM DROP REPLICA 'node_1_1'") @@ -103,53 +101,52 @@ def test_drop_replica(start_cluster): assert "does not look like a table path" in \ node_1_3.query_and_get_error("SYSTEM DROP REPLICA 'node_1_1' FROM ZKPATH '/clickhouse/tables/test'") - with PartitionManager() as pm: - ## make node_1_1 dead - pm.drop_instance_zk_connections(node_1_1) - time.sleep(10) + node_1_1.query("DETACH DATABASE test") + for i in range(1, 5): + node_1_1.query("DETACH DATABASE test{}".format(i)) - assert "doesn't exist" in node_1_3.query_and_get_error( - "SYSTEM DROP REPLICA 'node_1_1' FROM TABLE test.test_table") + assert "doesn't exist" in node_1_3.query_and_get_error( + "SYSTEM DROP REPLICA 'node_1_1' FROM TABLE test.test_table") - assert "doesn't exist" in node_1_3.query_and_get_error("SYSTEM DROP REPLICA 'node_1_1' FROM DATABASE test1") + assert "doesn't exist" in node_1_3.query_and_get_error("SYSTEM DROP REPLICA 'node_1_1' FROM DATABASE test1") - node_1_3.query("SYSTEM DROP REPLICA 'node_1_1'") - exists_replica_1_1 = zk.exists( - "/clickhouse/tables/test3/{shard}/replicated/test_table/replicas/{replica}".format(shard=1, - replica='node_1_1')) - assert (exists_replica_1_1 != None) + node_1_3.query("SYSTEM DROP REPLICA 'node_1_1'") + exists_replica_1_1 = zk.exists( + "/clickhouse/tables/test3/{shard}/replicated/test_table/replicas/{replica}".format(shard=1, + replica='node_1_1')) + assert (exists_replica_1_1 != None) - ## If you want to drop a inactive/stale replicate table that does not have a local replica, you can following syntax(ZKPATH): - node_1_3.query( - "SYSTEM DROP REPLICA 'node_1_1' FROM ZKPATH '/clickhouse/tables/test2/{shard}/replicated/test_table'".format( - shard=1)) - exists_replica_1_1 
= zk.exists( - "/clickhouse/tables/test2/{shard}/replicated/test_table/replicas/{replica}".format(shard=1, - replica='node_1_1')) - assert (exists_replica_1_1 == None) + ## If you want to drop an inactive/stale replicated table that does not have a local replica, you can use the following syntax (ZKPATH): + node_1_3.query( + "SYSTEM DROP REPLICA 'node_1_1' FROM ZKPATH '/clickhouse/tables/test2/{shard}/replicated/test_table'".format( + shard=1)) + exists_replica_1_1 = zk.exists( + "/clickhouse/tables/test2/{shard}/replicated/test_table/replicas/{replica}".format(shard=1, + replica='node_1_1')) + assert (exists_replica_1_1 == None) - node_1_2.query("SYSTEM DROP REPLICA 'node_1_1' FROM TABLE test.test_table") - exists_replica_1_1 = zk.exists( - "/clickhouse/tables/test/{shard}/replicated/test_table/replicas/{replica}".format(shard=1, - replica='node_1_1')) - assert (exists_replica_1_1 == None) + node_1_2.query("SYSTEM DROP REPLICA 'node_1_1' FROM TABLE test.test_table") + exists_replica_1_1 = zk.exists( + "/clickhouse/tables/test/{shard}/replicated/test_table/replicas/{replica}".format(shard=1, + replica='node_1_1')) + assert (exists_replica_1_1 == None) - node_1_2.query("SYSTEM DROP REPLICA 'node_1_1' FROM DATABASE test1") - exists_replica_1_1 = zk.exists( - "/clickhouse/tables/test1/{shard}/replicated/test_table/replicas/{replica}".format(shard=1, - replica='node_1_1')) - assert (exists_replica_1_1 == None) + node_1_2.query("SYSTEM DROP REPLICA 'node_1_1' FROM DATABASE test1") + exists_replica_1_1 = zk.exists( + "/clickhouse/tables/test1/{shard}/replicated/test_table/replicas/{replica}".format(shard=1, + replica='node_1_1')) + assert (exists_replica_1_1 == None) - node_1_3.query( - "SYSTEM DROP REPLICA 'node_1_1' FROM ZKPATH '/clickhouse/tables/test3/{shard}/replicated/test_table'".format( - shard=1)) - exists_replica_1_1 = zk.exists( - "/clickhouse/tables/test3/{shard}/replicated/test_table/replicas/{replica}".format(shard=1, - replica='node_1_1')) - assert (exists_replica_1_1 == None) + node_1_3.query( + "SYSTEM DROP REPLICA 'node_1_1' FROM ZKPATH '/clickhouse/tables/test3/{shard}/replicated/test_table'".format( + shard=1)) + exists_replica_1_1 = zk.exists( + "/clickhouse/tables/test3/{shard}/replicated/test_table/replicas/{replica}".format(shard=1, + replica='node_1_1')) + assert (exists_replica_1_1 == None) - node_1_2.query("SYSTEM DROP REPLICA 'node_1_1'") - exists_replica_1_1 = zk.exists( - "/clickhouse/tables/test4/{shard}/replicated/test_table/replicas/{replica}".format(shard=1, - replica='node_1_1')) - assert (exists_replica_1_1 == None) + node_1_2.query("SYSTEM DROP REPLICA 'node_1_1'") + exists_replica_1_1 = zk.exists( + "/clickhouse/tables/test4/{shard}/replicated/test_table/replicas/{replica}".format(shard=1, + replica='node_1_1')) + assert (exists_replica_1_1 == None) diff --git a/tests/integration/test_fetch_partition_from_auxiliary_zookeeper/test.py b/tests/integration/test_fetch_partition_from_auxiliary_zookeeper/test.py index f9c10d68fe3..7bce2d50011 100644 --- a/tests/integration/test_fetch_partition_from_auxiliary_zookeeper/test.py +++ b/tests/integration/test_fetch_partition_from_auxiliary_zookeeper/test.py @@ -1,5 +1,3 @@ - - import pytest from helpers.client import QueryRuntimeException from helpers.cluster import ClickHouseCluster @@ -18,23 +16,33 @@ def start_cluster(): cluster.shutdown() -def test_fetch_part_from_allowed_zookeeper(start_cluster): +@pytest.mark.parametrize( + ('part', 'date', 'part_name'), + [ + ('PARTITION', '2020-08-27', '2020-08-27'), + ('PART', 
'2020-08-28', '20200828_0_0_0'), + ] +) +def test_fetch_part_from_allowed_zookeeper(start_cluster, part, date, part_name): node.query( - "CREATE TABLE simple (date Date, id UInt32) ENGINE = ReplicatedMergeTree('/clickhouse/tables/0/simple', 'node') ORDER BY tuple() PARTITION BY date;" + "CREATE TABLE IF NOT EXISTS simple (date Date, id UInt32) ENGINE = ReplicatedMergeTree('/clickhouse/tables/0/simple', 'node') ORDER BY tuple() PARTITION BY date;" ) - node.query("INSERT INTO simple VALUES ('2020-08-27', 1)") + + node.query("""INSERT INTO simple VALUES ('{date}', 1)""".format(date=date)) node.query( - "CREATE TABLE simple2 (date Date, id UInt32) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/simple', 'node') ORDER BY tuple() PARTITION BY date;" + "CREATE TABLE IF NOT EXISTS simple2 (date Date, id UInt32) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/simple', 'node') ORDER BY tuple() PARTITION BY date;" ) + node.query( - "ALTER TABLE simple2 FETCH PARTITION '2020-08-27' FROM 'zookeeper2:/clickhouse/tables/0/simple';" - ) - node.query("ALTER TABLE simple2 ATTACH PARTITION '2020-08-27';") + """ALTER TABLE simple2 FETCH {part} '{part_name}' FROM 'zookeeper2:/clickhouse/tables/0/simple';""".format( + part=part, part_name=part_name)) + + node.query("""ALTER TABLE simple2 ATTACH {part} '{part_name}';""".format(part=part, part_name=part_name)) with pytest.raises(QueryRuntimeException): node.query( - "ALTER TABLE simple2 FETCH PARTITION '2020-08-27' FROM 'zookeeper:/clickhouse/tables/0/simple';" - ) + """ALTER TABLE simple2 FETCH {part} '{part_name}' FROM 'zookeeper:/clickhouse/tables/0/simple';""".format( + part=part, part_name=part_name)) - assert node.query("SELECT id FROM simple2").strip() == "1" + assert node.query("""SELECT id FROM simple2 where date = '{date}'""".format(date=date)).strip() == "1" diff --git a/tests/integration/test_grant_and_revoke/test.py b/tests/integration/test_grant_and_revoke/test.py index e29d63c9e0b..c1be16fe17d 100644 --- a/tests/integration/test_grant_and_revoke/test.py +++ b/tests/integration/test_grant_and_revoke/test.py @@ -26,7 +26,7 @@ def cleanup_after_test(): try: yield finally: - instance.query("DROP USER IF EXISTS A, B") + instance.query("DROP USER IF EXISTS A, B, C") instance.query("DROP TABLE IF EXISTS test.view_1") @@ -106,6 +106,46 @@ def test_revoke_requires_grant_option(): assert instance.query("SHOW GRANTS FOR B") == "" +def test_allowed_grantees(): + instance.query("CREATE USER A") + instance.query("CREATE USER B") + + instance.query('GRANT SELECT ON test.table TO A WITH GRANT OPTION') + instance.query("GRANT SELECT ON test.table TO B", user='A') + assert instance.query("SELECT * FROM test.table", user='B') == "1\t5\n2\t10\n" + instance.query("REVOKE SELECT ON test.table FROM B", user='A') + + instance.query('ALTER USER A GRANTEES NONE') + expected_error = "user `B` is not allowed as grantee" + assert expected_error in instance.query_and_get_error("GRANT SELECT ON test.table TO B", user='A') + + instance.query('ALTER USER A GRANTEES ANY EXCEPT B') + assert instance.query('SHOW CREATE USER A') == "CREATE USER A GRANTEES ANY EXCEPT B\n" + expected_error = "user `B` is not allowed as grantee" + assert expected_error in instance.query_and_get_error("GRANT SELECT ON test.table TO B", user='A') + + instance.query('ALTER USER A GRANTEES B') + instance.query("GRANT SELECT ON test.table TO B", user='A') + assert instance.query("SELECT * FROM test.table", user='B') == "1\t5\n2\t10\n" + instance.query("REVOKE SELECT ON test.table FROM B", user='A') + + 
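The `test_allowed_grantees` assertions around this point encode the rule behind the new `GRANTEES` clause: a grantor may grant or revoke only for users matched by its own `GRANTEES` specification, which defaults to `ANY`. A small sketch of that rule over the cases the test covers:

```python
# Hedged sketch of the GRANTEES matching rule exercised by the test:
# ANY allows everyone, NONE nobody, ANY EXCEPT <list> everyone but the list,
# and an explicit list allows only the named users.
def may_grant(grantees, target):
    kind, names = grantees                    # e.g. ("ANY EXCEPT", {"B"})
    if kind == "NONE":
        return False
    if kind == "ANY":
        return True
    if kind == "ANY EXCEPT":
        return target not in names
    return target in names                    # explicit list: GRANTEES B

assert may_grant(("ANY", set()), "B")         # default: grant succeeds
assert not may_grant(("ANY EXCEPT", {"B"}), "B")
assert not may_grant(("NONE", set()), "B")
assert may_grant(("LIST", {"B"}), "B")
```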
instance.query('ALTER USER A GRANTEES ANY') + assert instance.query('SHOW CREATE USER A') == "CREATE USER A\n" + instance.query("GRANT SELECT ON test.table TO B", user='A') + assert instance.query("SELECT * FROM test.table", user='B') == "1\t5\n2\t10\n" + + instance.query('ALTER USER A GRANTEES NONE') + expected_error = "user `B` is not allowed as grantee" + assert expected_error in instance.query_and_get_error("REVOKE SELECT ON test.table FROM B", user='A') + + instance.query("CREATE USER C GRANTEES ANY EXCEPT C") + assert instance.query('SHOW CREATE USER C') == "CREATE USER C GRANTEES ANY EXCEPT C\n" + instance.query('GRANT SELECT ON test.table TO C WITH GRANT OPTION') + assert instance.query("SELECT * FROM test.table", user='C') == "1\t5\n2\t10\n" + expected_error = "user `C` is not allowed as grantee" + assert expected_error in instance.query_and_get_error("REVOKE SELECT ON test.table FROM C", user='C') + + def test_grant_all_on_table(): instance.query("CREATE USER A, B") instance.query("GRANT ALL ON test.table TO A WITH GRANT OPTION") diff --git a/tests/integration/test_hedged_requests/configs/users.xml b/tests/integration/test_hedged_requests/configs/users.xml index 509d3d12508..ac42155a18a 100644 --- a/tests/integration/test_hedged_requests/configs/users.xml +++ b/tests/integration/test_hedged_requests/configs/users.xml @@ -3,8 +3,10 @@ in_order - 100 - 2 + 100 + 2000 + 1 + 1 diff --git a/tests/integration/test_hedged_requests/test.py b/tests/integration/test_hedged_requests/test.py index 0c0155ff9a2..e40b3109c44 100644 --- a/tests/integration/test_hedged_requests/test.py +++ b/tests/integration/test_hedged_requests/test.py @@ -15,28 +15,30 @@ NODES = {'node_' + str(i): None for i in (1, 2, 3)} NODES['node'] = None -sleep_time = 30 +# Sleep time in milliseconds. 
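The hunks above migrate the fault-injection settings to millisecond granularity (`sleep_in_send_tables_status_ms`, `sleep_in_send_data_ms`), which is why the sleep constant becomes 30000. Since `replace_config` only rewrites the file and the server reloads users.xml asynchronously, `check_settings` polls `system.settings` until the new values are visible; a sketch of that wait loop, keeping the test's attempt budget:

```python
# Poll-until-applied pattern behind check_settings(): after replace_config()
# the reload is asynchronous, so assert on system.settings rather than
# assuming the new values are live immediately.
import time

def wait_for_setting(node, name, expected, attempts=1000, delay=0.1):
    for _ in range(attempts):
        value = node.http_query(
            "SELECT value FROM system.settings WHERE name='{}'".format(name))
        if int(value) == expected:
            return
        time.sleep(delay)
    raise AssertionError("{} never reached {}".format(name, expected))

# e.g. wait_for_setting(NODES['node_1'], 'sleep_in_send_tables_status_ms', 30000)
```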
+sleep_time = 30000 @pytest.fixture(scope="module") def started_cluster(): NODES['node'] = cluster.add_instance( - 'node', with_zookeeper=True, stay_alive=True, main_configs=['configs/remote_servers.xml'], user_configs=['configs/users.xml']) + 'node', stay_alive=True, main_configs=['configs/remote_servers.xml'], user_configs=['configs/users.xml']) for name in NODES: if name != 'node': - NODES[name] = cluster.add_instance(name, with_zookeeper=True, user_configs=['configs/users1.xml']) + NODES[name] = cluster.add_instance(name, user_configs=['configs/users1.xml']) try: cluster.start() for node_id, node in list(NODES.items()): - node.query('''CREATE TABLE replicated (id UInt32, date Date) ENGINE = - ReplicatedMergeTree('/clickhouse/tables/replicated', '{}') ORDER BY id PARTITION BY toYYYYMM(date)'''.format(node_id)) + node.query('''CREATE TABLE test_hedged (id UInt32, date Date) ENGINE = + MergeTree() ORDER BY id PARTITION BY toYYYYMM(date)''') + + node.query("INSERT INTO test_hedged select number, toDate(number) from numbers(100);") + NODES['node'].query('''CREATE TABLE distributed (id UInt32, date Date) ENGINE = - Distributed('test_cluster', 'default', 'replicated')''') - - NODES['node'].query("INSERT INTO distributed select number, toDate(number) from numbers(100);") + Distributed('test_cluster', 'default', 'test_hedged')''') yield cluster @@ -47,8 +49,8 @@ def started_cluster(): config = ''' - {sleep_in_send_tables_status} - {sleep_in_send_data} + {sleep_in_send_tables_status_ms} + {sleep_in_send_data_ms} ''' @@ -70,12 +72,12 @@ def check_query(expected_replica, receive_timeout=300): assert query_time < 10 -def check_settings(node_name, sleep_in_send_tables_status, sleep_in_send_data): +def check_settings(node_name, sleep_in_send_tables_status_ms, sleep_in_send_data_ms): attempts = 0 while attempts < 1000: - setting1 = NODES[node_name].http_query("SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status'") - setting2 = NODES[node_name].http_query("SELECT value FROM system.settings WHERE name='sleep_in_send_data'") - if int(setting1) == sleep_in_send_tables_status and int(setting2) == sleep_in_send_data: + setting1 = NODES[node_name].http_query("SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms'") + setting2 = NODES[node_name].http_query("SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms'") + if int(setting1) == sleep_in_send_tables_status_ms and int(setting2) == sleep_in_send_data_ms: return time.sleep(0.1) attempts += 1 @@ -83,10 +85,41 @@ def check_settings(node_name, sleep_in_send_tables_status, sleep_in_send_data): assert attempts < 1000 +def check_changing_replica_events(expected_count): + result = NODES['node'].query("SELECT value FROM system.events WHERE event='HedgedRequestsChangeReplica'") + + # If server load is high we can see more than expected + # replica change events, but never less than expected + assert int(result) >= expected_count + + +def update_configs(node_1_sleep_in_send_tables_status=0, node_1_sleep_in_send_data=0, + node_2_sleep_in_send_tables_status=0, node_2_sleep_in_send_data=0, + node_3_sleep_in_send_tables_status=0, node_3_sleep_in_send_data=0): + NODES['node_1'].replace_config( + '/etc/clickhouse-server/users.d/users1.xml', + config.format(sleep_in_send_tables_status_ms=node_1_sleep_in_send_tables_status, sleep_in_send_data_ms=node_1_sleep_in_send_data)) + + NODES['node_2'].replace_config( + '/etc/clickhouse-server/users.d/users1.xml', + 
config.format(sleep_in_send_tables_status_ms=node_2_sleep_in_send_tables_status, sleep_in_send_data_ms=node_2_sleep_in_send_data)) + + NODES['node_3'].replace_config( + '/etc/clickhouse-server/users.d/users1.xml', + config.format(sleep_in_send_tables_status_ms=node_3_sleep_in_send_tables_status, sleep_in_send_data_ms=node_3_sleep_in_send_data)) + + check_settings('node_1', node_1_sleep_in_send_tables_status, node_1_sleep_in_send_data) + check_settings('node_2', node_2_sleep_in_send_tables_status, node_2_sleep_in_send_data) + check_settings('node_3', node_3_sleep_in_send_tables_status, node_3_sleep_in_send_data) + + def test_stuck_replica(started_cluster): + update_configs() + cluster.pause_container("node_1") check_query(expected_replica="node_2") + check_changing_replica_events(1) result = NODES['node'].query("SELECT slowdowns_count FROM system.clusters WHERE cluster='test_cluster' and host_name='node_1'") @@ -105,6 +138,8 @@ def test_stuck_replica(started_cluster): def test_long_query(started_cluster): + update_configs() + # Restart to reset pool states. NODES['node'].restart_clickhouse() @@ -115,206 +150,75 @@ def test_long_query(started_cluster): def test_send_table_status_sleep(started_cluster): - NODES['node_1'].replace_config( - '/etc/clickhouse-server/users.d/users1.xml', - config.format(sleep_in_send_tables_status=sleep_time, sleep_in_send_data=0)) - - NODES['node_2'].replace_config( - '/etc/clickhouse-server/users.d/users1.xml', - config.format(sleep_in_send_tables_status=0, sleep_in_send_data=0)) - - NODES['node_3'].replace_config( - '/etc/clickhouse-server/users.d/users1.xml', - config.format(sleep_in_send_tables_status=0, sleep_in_send_data=0)) - - check_settings('node_1', sleep_time, 0) - check_settings('node_2', 0, 0) - check_settings('node_3', 0, 0) - + update_configs(node_1_sleep_in_send_tables_status=sleep_time) check_query(expected_replica="node_2") + check_changing_replica_events(1) def test_send_table_status_sleep2(started_cluster): - NODES['node_1'].replace_config( - '/etc/clickhouse-server/users.d/users1.xml', - config.format(sleep_in_send_tables_status=sleep_time, sleep_in_send_data=0)) - - NODES['node_2'].replace_config( - '/etc/clickhouse-server/users.d/users1.xml', - config.format(sleep_in_send_tables_status=sleep_time, sleep_in_send_data=0)) - - NODES['node_3'].replace_config( - '/etc/clickhouse-server/users.d/users1.xml', - config.format(sleep_in_send_tables_status=0, sleep_in_send_data=0)) - - check_settings('node_1', sleep_time, 0) - check_settings('node_2', sleep_time, 0) - check_settings('node_3', 0, 0) - + update_configs(node_1_sleep_in_send_tables_status=sleep_time, node_2_sleep_in_send_tables_status=sleep_time) check_query(expected_replica="node_3") + check_changing_replica_events(2) def test_send_data(started_cluster): - NODES['node_1'].replace_config( - '/etc/clickhouse-server/users.d/users1.xml', - config.format(sleep_in_send_tables_status=0, sleep_in_send_data=sleep_time)) - - NODES['node_2'].replace_config( - '/etc/clickhouse-server/users.d/users1.xml', - config.format(sleep_in_send_tables_status=0, sleep_in_send_data=0)) - - NODES['node_3'].replace_config( - '/etc/clickhouse-server/users.d/users1.xml', - config.format(sleep_in_send_tables_status=0, sleep_in_send_data=0)) - - check_settings('node_1', 0, sleep_time) - check_settings('node_2', 0, 0) - check_settings('node_3', 0, 0) - + update_configs(node_1_sleep_in_send_data=sleep_time) check_query(expected_replica="node_2") + check_changing_replica_events(1) def test_send_data2(started_cluster): - 
NODES['node_1'].replace_config( - '/etc/clickhouse-server/users.d/users1.xml', - config.format(sleep_in_send_tables_status=0, sleep_in_send_data=sleep_time)) - - NODES['node_2'].replace_config( - '/etc/clickhouse-server/users.d/users1.xml', - config.format(sleep_in_send_tables_status=0, sleep_in_send_data=sleep_time)) - - NODES['node_3'].replace_config( - '/etc/clickhouse-server/users.d/users1.xml', - config.format(sleep_in_send_tables_status=0, sleep_in_send_data=0)) - - check_settings('node_1', 0, sleep_time) - check_settings('node_2', 0, sleep_time) - check_settings('node_3', 0, 0) - + update_configs(node_1_sleep_in_send_data=sleep_time, node_2_sleep_in_send_data=sleep_time) check_query(expected_replica="node_3") + check_changing_replica_events(2) def test_combination1(started_cluster): - NODES['node_1'].replace_config( - '/etc/clickhouse-server/users.d/users1.xml', - config.format(sleep_in_send_tables_status=sleep_time, sleep_in_send_data=0)) - - NODES['node_2'].replace_config( - '/etc/clickhouse-server/users.d/users1.xml', - config.format(sleep_in_send_tables_status=0, sleep_in_send_data=sleep_time)) - - NODES['node_3'].replace_config( - '/etc/clickhouse-server/users.d/users1.xml', - config.format(sleep_in_send_tables_status=0, sleep_in_send_data=0)) - - check_settings('node_1', sleep_time, 0) - check_settings('node_2', 0, sleep_time) - check_settings('node_3', 0, 0) - + update_configs(node_1_sleep_in_send_tables_status=sleep_time, node_2_sleep_in_send_data=sleep_time) check_query(expected_replica="node_3") + check_changing_replica_events(2) def test_combination2(started_cluster): - NODES['node_1'].replace_config( - '/etc/clickhouse-server/users.d/users1.xml', - config.format(sleep_in_send_tables_status=0, sleep_in_send_data=sleep_time)) - - NODES['node_2'].replace_config( - '/etc/clickhouse-server/users.d/users1.xml', - config.format(sleep_in_send_tables_status=sleep_time, sleep_in_send_data=0)) - - NODES['node_3'].replace_config( - '/etc/clickhouse-server/users.d/users1.xml', - config.format(sleep_in_send_tables_status=0, sleep_in_send_data=0)) - - check_settings('node_1', 0, sleep_time) - check_settings('node_2', sleep_time, 0) - check_settings('node_3', 0, 0) - + update_configs(node_1_sleep_in_send_data=sleep_time, node_2_sleep_in_send_tables_status=sleep_time) check_query(expected_replica="node_3") + check_changing_replica_events(2) def test_combination3(started_cluster): - NODES['node_1'].replace_config( - '/etc/clickhouse-server/users.d/users1.xml', - config.format(sleep_in_send_tables_status=0, sleep_in_send_data=sleep_time)) - - NODES['node_2'].replace_config( - '/etc/clickhouse-server/users.d/users1.xml', - config.format(sleep_in_send_tables_status=1, sleep_in_send_data=0)) - - NODES['node_3'].replace_config( - '/etc/clickhouse-server/users.d/users1.xml', - config.format(sleep_in_send_tables_status=0, sleep_in_send_data=sleep_time)) - - check_settings('node_1', 0, sleep_time) - check_settings('node_2', 1, 0) - check_settings('node_3', 0, sleep_time) - + update_configs(node_1_sleep_in_send_data=sleep_time, + node_2_sleep_in_send_tables_status=1000, + node_3_sleep_in_send_data=sleep_time) check_query(expected_replica="node_2") + check_changing_replica_events(3) def test_combination4(started_cluster): - NODES['node_1'].replace_config( - '/etc/clickhouse-server/users.d/users1.xml', - config.format(sleep_in_send_tables_status=1, sleep_in_send_data=sleep_time)) - - NODES['node_2'].replace_config( - '/etc/clickhouse-server/users.d/users1.xml', - 
config.format(sleep_in_send_tables_status=1, sleep_in_send_data=0)) - - NODES['node_3'].replace_config( - '/etc/clickhouse-server/users.d/users1.xml', - config.format(sleep_in_send_tables_status=2, sleep_in_send_data=0)) - - check_settings('node_1', 1, sleep_time) - check_settings('node_2', 1, 0) - check_settings('node_3', 2, 0) - + update_configs(node_1_sleep_in_send_tables_status=1000, + node_1_sleep_in_send_data=sleep_time, + node_2_sleep_in_send_tables_status=1000, + node_3_sleep_in_send_tables_status=1000) check_query(expected_replica="node_2") + check_changing_replica_events(4) def test_receive_timeout1(started_cluster): # Check the situation when first two replicas get receive timeout # in establishing connection, but the third replica is ok. - NODES['node_1'].replace_config( - '/etc/clickhouse-server/users.d/users1.xml', - config.format(sleep_in_send_tables_status=3, sleep_in_send_data=0)) - - NODES['node_2'].replace_config( - '/etc/clickhouse-server/users.d/users1.xml', - config.format(sleep_in_send_tables_status=3, sleep_in_send_data=0)) - - NODES['node_3'].replace_config( - '/etc/clickhouse-server/users.d/users1.xml', - config.format(sleep_in_send_tables_status=0, sleep_in_send_data=1)) - - check_settings('node_1', 3, 0) - check_settings('node_2', 3, 0) - check_settings('node_3', 0, 1) - + update_configs(node_1_sleep_in_send_tables_status=3000, + node_2_sleep_in_send_tables_status=3000, + node_3_sleep_in_send_data=1000) check_query(expected_replica="node_3", receive_timeout=2) + check_changing_replica_events(2) def test_receive_timeout2(started_cluster): # Check the situation when first replica get receive timeout # in packet receiving but there are replicas in process of # connection establishing. - NODES['node_1'].replace_config( - '/etc/clickhouse-server/users.d/users1.xml', - config.format(sleep_in_send_tables_status=0, sleep_in_send_data=4)) - - NODES['node_2'].replace_config( - '/etc/clickhouse-server/users.d/users1.xml', - config.format(sleep_in_send_tables_status=2, sleep_in_send_data=0)) - - NODES['node_3'].replace_config( - '/etc/clickhouse-server/users.d/users1.xml', - config.format(sleep_in_send_tables_status=2, sleep_in_send_data=0)) - - check_settings('node_1', 0, 4) - check_settings('node_2', 2, 0) - check_settings('node_3', 2, 0) - + update_configs(node_1_sleep_in_send_data=4000, + node_2_sleep_in_send_tables_status=2000, + node_3_sleep_in_send_tables_status=2000) check_query(expected_replica="node_2", receive_timeout=3) + check_changing_replica_events(3) diff --git a/tests/integration/test_hedged_requests_parallel/configs/users.xml b/tests/integration/test_hedged_requests_parallel/configs/users.xml index af9d6d96e60..9600c0c7124 100644 --- a/tests/integration/test_hedged_requests_parallel/configs/users.xml +++ b/tests/integration/test_hedged_requests_parallel/configs/users.xml @@ -4,8 +4,10 @@ in_order 2 - 100 - 2 + 100 + 2000 + 1 + 1 diff --git a/tests/integration/test_hedged_requests_parallel/test.py b/tests/integration/test_hedged_requests_parallel/test.py index 17db4af5d41..7abc2eb1d2a 100644 --- a/tests/integration/test_hedged_requests_parallel/test.py +++ b/tests/integration/test_hedged_requests_parallel/test.py @@ -14,29 +14,30 @@ cluster = ClickHouseCluster(__file__) NODES = {'node_' + str(i): None for i in (1, 2, 3, 4)} NODES['node'] = None -sleep_time = 30 +# Sleep time in milliseconds. 
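The receive-timeout tests above choose the expected replica by comparing each node's injected delay against the query's receive timeout. A worked sketch of that arithmetic for `test_receive_timeout1`, with the numbers from the calls above converted to seconds:

```python
# test_receive_timeout1: receive_timeout=2s; node_1 and node_2 sleep 3s while
# sending table status, node_3 only sleeps 1s while sending data. The 3s
# sleeps exceed the timeout during connection establishment, so node_3 wins.
receive_timeout = 2.0
tables_status_sleep = {"node_1": 3.0, "node_2": 3.0, "node_3": 0.0}
survivors = [name for name, sleep in sorted(tables_status_sleep.items())
             if sleep < receive_timeout]
assert survivors == ["node_3"]
```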
+sleep_time = 30000 @pytest.fixture(scope="module") def started_cluster(): cluster = ClickHouseCluster(__file__) NODES['node'] = cluster.add_instance( - 'node', with_zookeeper=True, stay_alive=True, main_configs=['configs/remote_servers.xml'], user_configs=['configs/users.xml']) + 'node', stay_alive=True, main_configs=['configs/remote_servers.xml'], user_configs=['configs/users.xml']) for name in NODES: if name != 'node': - NODES[name] = cluster.add_instance(name, with_zookeeper=True, user_configs=['configs/users1.xml']) + NODES[name] = cluster.add_instance(name, user_configs=['configs/users1.xml']) try: cluster.start() for node_id, node in list(NODES.items()): - node.query('''CREATE TABLE replicated (id UInt32, date Date) ENGINE = - ReplicatedMergeTree('/clickhouse/tables/replicated', '{}') ORDER BY id PARTITION BY toYYYYMM(date)'''.format(node_id)) + node.query('''CREATE TABLE test_hedged (id UInt32, date Date) ENGINE = + MergeTree() ORDER BY id PARTITION BY toYYYYMM(date)''') + + node.query("INSERT INTO test_hedged SELECT number, toDateTime(number) FROM numbers(100)") NODES['node'].query('''CREATE TABLE distributed (id UInt32, date Date) ENGINE = - Distributed('test_cluster', 'default', 'replicated')''') - - NODES['node'].query("INSERT INTO distributed VALUES (1, '2020-01-01'), (2, '2020-01-02')") + Distributed('test_cluster', 'default', 'test_hedged')''') yield cluster @@ -47,33 +48,37 @@ def started_cluster(): config = ''' - {sleep_in_send_tables_status} - {sleep_in_send_data} + {sleep_in_send_tables_status_ms} + {sleep_in_send_data_ms} ''' -def check_query(): +QUERY_1 = "SELECT count() FROM distributed" +QUERY_2 = "SELECT * FROM distributed" + + +def check_query(query=QUERY_1): NODES['node'].restart_clickhouse() # Without hedged requests select query will last more than 30 seconds, # with hedged requests it will last just around 1-2 second start = time.time() - NODES['node'].query("SELECT * FROM distributed"); + NODES['node'].query(query); query_time = time.time() - start print("Query time:", query_time) - + assert query_time < 5 -def check_settings(node_name, sleep_in_send_tables_status, sleep_in_send_data): +def check_settings(node_name, sleep_in_send_tables_status_ms, sleep_in_send_data_ms): attempts = 0 while attempts < 1000: - setting1 = NODES[node_name].http_query("SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status'") - setting2 = NODES[node_name].http_query("SELECT value FROM system.settings WHERE name='sleep_in_send_data'") - if int(setting1) == sleep_in_send_tables_status and int(setting2) == sleep_in_send_data: + setting1 = NODES[node_name].http_query("SELECT value FROM system.settings WHERE name='sleep_in_send_tables_status_ms'") + setting2 = NODES[node_name].http_query("SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms'") + if int(setting1) == sleep_in_send_tables_status_ms and int(setting2) == sleep_in_send_data_ms: return time.sleep(0.1) attempts += 1 @@ -81,78 +86,76 @@ def check_settings(node_name, sleep_in_send_tables_status, sleep_in_send_data): assert attempts < 1000 -def test_send_table_status_sleep(started_cluster): +def check_changing_replica_events(expected_count): + result = NODES['node'].query("SELECT value FROM system.events WHERE event='HedgedRequestsChangeReplica'") + + # If server load is high we can see more than expected + # replica change events, but never less than expected + assert int(result) >= expected_count + + +def update_configs(node_1_sleep_in_send_tables_status=0, node_1_sleep_in_send_data=0, + 
node_2_sleep_in_send_tables_status=0, node_2_sleep_in_send_data=0, + node_3_sleep_in_send_tables_status=0, node_3_sleep_in_send_data=0, + node_4_sleep_in_send_tables_status=0, node_4_sleep_in_send_data=0): NODES['node_1'].replace_config( '/etc/clickhouse-server/users.d/users1.xml', - config.format(sleep_in_send_tables_status=sleep_time, sleep_in_send_data=0)) + config.format(sleep_in_send_tables_status_ms=node_1_sleep_in_send_tables_status, sleep_in_send_data_ms=node_1_sleep_in_send_data)) NODES['node_2'].replace_config( '/etc/clickhouse-server/users.d/users1.xml', - config.format(sleep_in_send_tables_status=sleep_time, sleep_in_send_data=0)) - - check_settings('node_1', sleep_time, 0) - check_settings('node_2', sleep_time, 0) - - check_query() - - -def test_send_data(started_cluster): - NODES['node_1'].replace_config( - '/etc/clickhouse-server/users.d/users1.xml', - config.format(sleep_in_send_tables_status=0, sleep_in_send_data=sleep_time)) - - NODES['node_2'].replace_config( - '/etc/clickhouse-server/users.d/users1.xml', - config.format(sleep_in_send_tables_status=0, sleep_in_send_data=sleep_time)) - - check_settings('node_1', 0, sleep_time) - check_settings('node_2', 0, sleep_time) - - check_query() - - -def test_combination1(started_cluster): - NODES['node_1'].replace_config( - '/etc/clickhouse-server/users.d/users1.xml', - config.format(sleep_in_send_tables_status=1, sleep_in_send_data=0)) - - NODES['node_2'].replace_config( - '/etc/clickhouse-server/users.d/users1.xml', - config.format(sleep_in_send_tables_status=1, sleep_in_send_data=0)) + config.format(sleep_in_send_tables_status_ms=node_2_sleep_in_send_tables_status, sleep_in_send_data_ms=node_2_sleep_in_send_data)) NODES['node_3'].replace_config( '/etc/clickhouse-server/users.d/users1.xml', - config.format(sleep_in_send_tables_status=0, sleep_in_send_data=sleep_time)) - - check_settings('node_1', 1, 0) - check_settings('node_2', 1, 0) - check_settings('node_3', 0, sleep_time) - - check_query() - - -def test_combination2(started_cluster): - NODES['node_1'].replace_config( - '/etc/clickhouse-server/users.d/users1.xml', - config.format(sleep_in_send_tables_status=0, sleep_in_send_data=sleep_time)) - - NODES['node_2'].replace_config( - '/etc/clickhouse-server/users.d/users1.xml', - config.format(sleep_in_send_tables_status=1, sleep_in_send_data=0)) - - NODES['node_3'].replace_config( - '/etc/clickhouse-server/users.d/users1.xml', - config.format(sleep_in_send_tables_status=0, sleep_in_send_data=sleep_time)) + config.format(sleep_in_send_tables_status_ms=node_3_sleep_in_send_tables_status, sleep_in_send_data_ms=node_3_sleep_in_send_data)) NODES['node_4'].replace_config( '/etc/clickhouse-server/users.d/users1.xml', - config.format(sleep_in_send_tables_status=1, sleep_in_send_data=0)) + config.format(sleep_in_send_tables_status_ms=node_4_sleep_in_send_tables_status, sleep_in_send_data_ms=node_4_sleep_in_send_data)) - - check_settings('node_1', 0, sleep_time) - check_settings('node_2', 1, 0) - check_settings('node_3', 0, sleep_time) - check_settings('node_4', 1, 0) - + check_settings('node_1', node_1_sleep_in_send_tables_status, node_1_sleep_in_send_data) + check_settings('node_2', node_2_sleep_in_send_tables_status, node_2_sleep_in_send_data) + check_settings('node_3', node_3_sleep_in_send_tables_status, node_3_sleep_in_send_data) + check_settings('node_4', node_4_sleep_in_send_tables_status, node_4_sleep_in_send_data) + + +def test_send_table_status_sleep(started_cluster): + 
update_configs(node_1_sleep_in_send_tables_status=sleep_time, node_2_sleep_in_send_tables_status=sleep_time) check_query() + check_changing_replica_events(2) + + +def test_send_data(started_cluster): + update_configs(node_1_sleep_in_send_data=sleep_time, node_2_sleep_in_send_data=sleep_time) + check_query() + check_changing_replica_events(2) + + +def test_combination1(started_cluster): + update_configs(node_1_sleep_in_send_tables_status=1000, + node_2_sleep_in_send_tables_status=1000, + node_3_sleep_in_send_data=sleep_time) + check_query() + check_changing_replica_events(3) + + +def test_combination2(started_cluster): + update_configs(node_1_sleep_in_send_data=sleep_time, + node_2_sleep_in_send_tables_status=1000, + node_3_sleep_in_send_data=sleep_time, + node_4_sleep_in_send_tables_status=1000) + check_query() + check_changing_replica_events(4) + + +def test_query_with_no_data_to_sample(started_cluster): + update_configs(node_1_sleep_in_send_data=sleep_time, + node_2_sleep_in_send_data=sleep_time) + + # When there is no way to sample data, the whole query will be performed by + # the first replica and the second replica will just send EndOfStream, + # so we will change only the first replica here. + check_query(query=QUERY_2) + check_changing_replica_events(1) diff --git a/tests/integration/test_insert_into_distributed/configs/remote_servers.xml b/tests/integration/test_insert_into_distributed/configs/remote_servers.xml index 320766c18ae..07fd52751b7 100644 --- a/tests/integration/test_insert_into_distributed/configs/remote_servers.xml +++ b/tests/integration/test_insert_into_distributed/configs/remote_servers.xml @@ -28,14 +28,27 @@ - + + + true + + node1 + 9000 + + + node2 + 9000 + + + + shard1 9000 - - + + shard2 9000 diff --git a/tests/integration/test_insert_into_distributed/test.py b/tests/integration/test_insert_into_distributed/test.py index d71d1075c70..d54336a3027 100644 --- a/tests/integration/test_insert_into_distributed/test.py +++ b/tests/integration/test_insert_into_distributed/test.py @@ -1,6 +1,7 @@ import time import pytest +from helpers.client import QueryRuntimeException from helpers.cluster import ClickHouseCluster from helpers.network import PartitionManager from helpers.test_tools import TSV @@ -75,6 +76,26 @@ CREATE TABLE table_function (n UInt8, s String) ENGINE = MergeTree() ORDER BY n' node2.query(''' CREATE TABLE table_function (n UInt8, s String) ENGINE = MergeTree() ORDER BY n''') + node1.query(''' +CREATE TABLE distributed_one_replica_internal_replication (date Date, id UInt32) ENGINE = Distributed('shard_with_local_replica_internal_replication', 'default', 'single_replicated') +''') + + node2.query(''' +CREATE TABLE distributed_one_replica_internal_replication (date Date, id UInt32) ENGINE = Distributed('shard_with_local_replica_internal_replication', 'default', 'single_replicated') +''') + + node1.query(''' +CREATE TABLE distributed_one_replica_no_internal_replication (date Date, id UInt32) ENGINE = Distributed('shard_with_local_replica', 'default', 'single_replicated') +''') + + node2.query(''' +CREATE TABLE distributed_one_replica_no_internal_replication (date Date, id UInt32) ENGINE = Distributed('shard_with_local_replica', 'default', 'single_replicated') +''') + + node2.query(''' +CREATE TABLE single_replicated(date Date, id UInt32) ENGINE = ReplicatedMergeTree('/clickhouse/tables/0/single_replicated', 'node2', date, id, 8192) +''') + yield cluster finally: @@ -162,6 +183,45 @@ def test_inserts_local(started_cluster): assert instance.query("SELECT 
count(*) FROM local").strip() == '1' +def test_inserts_single_replica_local_internal_replication(started_cluster): + with pytest.raises(QueryRuntimeException, match="Table default.single_replicated doesn't exist"): + node1.query( + "INSERT INTO distributed_one_replica_internal_replication VALUES ('2000-01-01', 1)", + settings={ + "insert_distributed_sync": "1", + "prefer_localhost_replica": "1", + # to make the test more deterministic + "load_balancing": "first_or_random", + }, + ) + assert node2.query("SELECT count(*) FROM single_replicated").strip() == '0' + + +def test_inserts_single_replica_internal_replication(started_cluster): + node1.query( + "INSERT INTO distributed_one_replica_internal_replication VALUES ('2000-01-01', 1)", + settings={ + "insert_distributed_sync": "1", + "prefer_localhost_replica": "0", + # to make the test more deterministic + "load_balancing": "first_or_random", + }, + ) + assert node2.query("SELECT count(*) FROM single_replicated").strip() == '1' + + +def test_inserts_single_replica_no_internal_replication(started_cluster): + with pytest.raises(QueryRuntimeException, match="Table default.single_replicated doesn't exist"): + node1.query( + "INSERT INTO distributed_one_replica_no_internal_replication VALUES ('2000-01-01', 1)", + settings={ + "insert_distributed_sync": "1", + "prefer_localhost_replica": "0", + }, + ) + assert node2.query("SELECT count(*) FROM single_replicated").strip() == '1' + + def test_prefer_localhost_replica(started_cluster): test_query = "SELECT * FROM distributed ORDER BY id" diff --git a/tests/queries/0_stateless/01671_ddl_hang_timeout.reference b/tests/integration/test_jbod_balancer/__init__.py similarity index 100% rename from tests/queries/0_stateless/01671_ddl_hang_timeout.reference rename to tests/integration/test_jbod_balancer/__init__.py diff --git a/tests/integration/test_jbod_balancer/configs/config.d/storage_configuration.xml b/tests/integration/test_jbod_balancer/configs/config.d/storage_configuration.xml new file mode 100644 index 00000000000..62b0ffacaf0 --- /dev/null +++ b/tests/integration/test_jbod_balancer/configs/config.d/storage_configuration.xml @@ -0,0 +1,29 @@ + + + + + 1024 + + + /jbod1/ + + + /jbod2/ + + + /jbod3/ + + + + + + + jbod1 + jbod2 + jbod3 + + + + + + diff --git a/tests/integration/test_jbod_balancer/test.py b/tests/integration/test_jbod_balancer/test.py new file mode 100644 index 00000000000..abc6a0bff11 --- /dev/null +++ b/tests/integration/test_jbod_balancer/test.py @@ -0,0 +1,182 @@ +import json +import random +import re +import string +import threading +import time +from multiprocessing.dummy import Pool + +import pytest +from helpers.client import QueryRuntimeException +from helpers.cluster import ClickHouseCluster + +cluster = ClickHouseCluster(__file__) + +node1 = cluster.add_instance( + "node1", + main_configs=["configs/config.d/storage_configuration.xml",], + with_zookeeper=True, + stay_alive=True, + tmpfs=["/jbod1:size=100M", "/jbod2:size=100M", "/jbod3:size=100M"], + macros={"shard": 0, "replica": 1}, +) + + +node2 = cluster.add_instance( + "node2", + main_configs=["configs/config.d/storage_configuration.xml"], + with_zookeeper=True, + stay_alive=True, + tmpfs=["/jbod1:size=100M", "/jbod2:size=100M", "/jbod3:size=100M"], + macros={"shard": 0, "replica": 2}, +) + + +@pytest.fixture(scope="module") +def start_cluster(): + try: + cluster.start() + yield cluster + + finally: + cluster.shutdown() + + +def check_balance(node, table): + + partitions = node.query( + """ + WITH + 
arraySort(groupArray(c)) AS array_c, + arrayEnumerate(array_c) AS array_i, + sum(c) AS sum_c, + count() AS n, + if(sum_c = 0, 0, (((2. * arraySum(arrayMap((c, i) -> (c * i), array_c, array_i))) / n) / sum_c) - (((n + 1) * 1.) / n)) AS gini + SELECT + partition + FROM + ( + SELECT + partition, + disk_name, + sum(bytes_on_disk) AS c + FROM system.parts + WHERE active AND level > 0 AND disk_name like 'jbod%' AND table = '{}' + GROUP BY + partition, disk_name + ) + GROUP BY partition + HAVING gini < 0.1 + """.format( + table + ) + ).splitlines() + + assert set(partitions) == set(["0", "1"]) + + +def test_jbod_balanced_merge(start_cluster): + try: + node1.query( + """ + CREATE TABLE tbl (p UInt8, d String) + ENGINE = MergeTree + PARTITION BY p + ORDER BY tuple() + SETTINGS + storage_policy = 'jbod', + min_bytes_to_rebalance_partition_over_jbod = 1024, + max_bytes_to_merge_at_max_space_in_pool = 4096 + """ + ) + node1.query("create table tmp1 as tbl") + node1.query("create table tmp2 as tbl") + + for i in range(200): + # around 1k per block + node1.query( + "insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50)" + ) + node1.query( + "insert into tmp1 select randConstant() % 2, randomPrintableASCII(16) from numbers(50)" + ) + node1.query( + "insert into tmp2 select randConstant() % 2, randomPrintableASCII(16) from numbers(50)" + ) + + time.sleep(1) + + check_balance(node1, "tbl") + + finally: + node1.query(f"DROP TABLE IF EXISTS tbl SYNC") + node1.query(f"DROP TABLE IF EXISTS tmp1 SYNC") + node1.query(f"DROP TABLE IF EXISTS tmp2 SYNC") + + +def test_replicated_balanced_merge_fetch(start_cluster): + try: + for i, node in enumerate([node1, node2]): + node.query( + """ + CREATE TABLE tbl (p UInt8, d String) + ENGINE = ReplicatedMergeTree('/clickhouse/tbl', '{}') + PARTITION BY p + ORDER BY tuple() + SETTINGS + storage_policy = 'jbod', + old_parts_lifetime = 1, + cleanup_delay_period = 1, + cleanup_delay_period_random_add = 2, + min_bytes_to_rebalance_partition_over_jbod = 1024, + max_bytes_to_merge_at_max_space_in_pool = 4096 + """.format( + i + ) + ) + + node.query( + """ + CREATE TABLE tmp1 (p UInt8, d String) + ENGINE = MergeTree + PARTITION BY p + ORDER BY tuple() + SETTINGS + storage_policy = 'jbod', + min_bytes_to_rebalance_partition_over_jbod = 1024, + max_bytes_to_merge_at_max_space_in_pool = 4096 + """ + ) + + node.query("create table tmp2 as tmp1") + + node2.query("alter table tbl modify setting always_fetch_merged_part = 1") + + for i in range(200): + # around 1k per block + node1.query( + "insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50)" + ) + node1.query( + "insert into tmp1 select randConstant() % 2, randomPrintableASCII(16) from numbers(50)" + ) + node1.query( + "insert into tmp2 select randConstant() % 2, randomPrintableASCII(16) from numbers(50)" + ) + node2.query( + "insert into tmp1 select randConstant() % 2, randomPrintableASCII(16) from numbers(50)" + ) + node2.query( + "insert into tmp2 select randConstant() % 2, randomPrintableASCII(16) from numbers(50)" + ) + + node2.query("SYSTEM SYNC REPLICA tbl", timeout=10) + + check_balance(node1, "tbl") + check_balance(node2, "tbl") + + finally: + for node in [node1, node2]: + node.query("DROP TABLE IF EXISTS tbl SYNC") + node.query("DROP TABLE IF EXISTS tmp1 SYNC") + node.query("DROP TABLE IF EXISTS tmp2 SYNC") diff --git a/tests/integration/test_testkeeper_back_to_back/__init__.py b/tests/integration/test_keeper_back_to_back/__init__.py similarity index 100% rename 
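The `check_balance` query above scores how evenly a partition's parts are spread across the JBOD disks using a Gini coefficient over per-disk byte totals, accepting a partition only when the coefficient stays below 0.1. A worked Python equivalent of the same formula:

```python
# Python equivalent of the Gini computation in check_balance():
#   gini = 2 * sum(i * c_i) / (n * sum(c)) - (n + 1) / n
# over the sorted per-disk byte totals c_1..c_n with 1-based ranks i.
def gini(bytes_per_disk):
    c = sorted(bytes_per_disk)
    n, total = len(c), sum(c)
    if total == 0:
        return 0.0
    return 2.0 * sum(i * v for i, v in enumerate(c, start=1)) / (n * total) \
        - (n + 1.0) / n

assert gini([100, 100, 100]) == 0.0   # evenly spread over jbod1..jbod3
assert gini([0, 0, 300]) > 0.1        # everything on one disk: fails the check
```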
from tests/integration/test_testkeeper_back_to_back/__init__.py rename to tests/integration/test_keeper_back_to_back/__init__.py diff --git a/tests/integration/test_testkeeper_back_to_back/configs/enable_test_keeper.xml b/tests/integration/test_keeper_back_to_back/configs/enable_keeper.xml similarity index 93% rename from tests/integration/test_testkeeper_back_to_back/configs/enable_test_keeper.xml rename to tests/integration/test_keeper_back_to_back/configs/enable_keeper.xml index 137b46ab5a9..dc8d3a8dfc9 100644 --- a/tests/integration/test_testkeeper_back_to_back/configs/enable_test_keeper.xml +++ b/tests/integration/test_keeper_back_to_back/configs/enable_keeper.xml @@ -1,5 +1,5 @@ - + 9181 1 /var/lib/clickhouse/coordination/log @@ -19,5 +19,5 @@ 44444
- + diff --git a/tests/integration/test_testkeeper_back_to_back/configs/logs_conf.xml b/tests/integration/test_keeper_back_to_back/configs/logs_conf.xml similarity index 100% rename from tests/integration/test_testkeeper_back_to_back/configs/logs_conf.xml rename to tests/integration/test_keeper_back_to_back/configs/logs_conf.xml diff --git a/tests/integration/test_testkeeper_back_to_back/configs/use_test_keeper.xml b/tests/integration/test_keeper_back_to_back/configs/use_keeper.xml similarity index 100% rename from tests/integration/test_testkeeper_back_to_back/configs/use_test_keeper.xml rename to tests/integration/test_keeper_back_to_back/configs/use_keeper.xml diff --git a/tests/integration/test_testkeeper_back_to_back/test.py b/tests/integration/test_keeper_back_to_back/test.py similarity index 99% rename from tests/integration/test_testkeeper_back_to_back/test.py rename to tests/integration/test_keeper_back_to_back/test.py index dd4e1f98cfd..8abd4321fd9 100644 --- a/tests/integration/test_testkeeper_back_to_back/test.py +++ b/tests/integration/test_keeper_back_to_back/test.py @@ -7,7 +7,7 @@ import time from multiprocessing.dummy import Pool cluster = ClickHouseCluster(__file__) -node = cluster.add_instance('node', main_configs=['configs/enable_test_keeper.xml', 'configs/logs_conf.xml'], with_zookeeper=True) +node = cluster.add_instance('node', main_configs=['configs/enable_keeper.xml', 'configs/logs_conf.xml'], with_zookeeper=True) from kazoo.client import KazooClient, KazooState, KeeperState def get_genuine_zk(): diff --git a/tests/integration/test_testkeeper_multinode_blocade_leader/__init__.py b/tests/integration/test_keeper_internal_secure/__init__.py similarity index 100% rename from tests/integration/test_testkeeper_multinode_blocade_leader/__init__.py rename to tests/integration/test_keeper_internal_secure/__init__.py diff --git a/tests/integration/test_keeper_internal_secure/configs/enable_secure_keeper1.xml b/tests/integration/test_keeper_internal_secure/configs/enable_secure_keeper1.xml new file mode 100644 index 00000000000..ecbd50c72a6 --- /dev/null +++ b/tests/integration/test_keeper_internal_secure/configs/enable_secure_keeper1.xml @@ -0,0 +1,42 @@ + + + 9181 + 1 + /var/lib/clickhouse/coordination/log + /var/lib/clickhouse/coordination/snapshots + + + 5000 + 10000 + 75 + trace + + + + true + + 1 + node1 + 44444 + true + 3 + + + 2 + node2 + 44444 + true + true + 2 + + + 3 + node3 + 44444 + true + true + 1 + + + + diff --git a/tests/integration/test_keeper_internal_secure/configs/enable_secure_keeper2.xml b/tests/integration/test_keeper_internal_secure/configs/enable_secure_keeper2.xml new file mode 100644 index 00000000000..53129ae0a75 --- /dev/null +++ b/tests/integration/test_keeper_internal_secure/configs/enable_secure_keeper2.xml @@ -0,0 +1,42 @@ + + + 9181 + 2 + /var/lib/clickhouse/coordination/log + /var/lib/clickhouse/coordination/snapshots + + + 5000 + 10000 + 75 + trace + + + + true + + 1 + node1 + 44444 + true + 3 + + + 2 + node2 + 44444 + true + true + 2 + + + 3 + node3 + 44444 + true + true + 1 + + + + diff --git a/tests/integration/test_keeper_internal_secure/configs/enable_secure_keeper3.xml b/tests/integration/test_keeper_internal_secure/configs/enable_secure_keeper3.xml new file mode 100644 index 00000000000..4c685764ec0 --- /dev/null +++ b/tests/integration/test_keeper_internal_secure/configs/enable_secure_keeper3.xml @@ -0,0 +1,42 @@ + + + 9181 + 3 + /var/lib/clickhouse/coordination/log + /var/lib/clickhouse/coordination/snapshots + + + 5000 + 10000 + 
75 + trace + + + + true + + 1 + node1 + 44444 + true + 3 + + + 2 + node2 + 44444 + true + true + 2 + + + 3 + node3 + 44444 + true + true + 1 + + + + diff --git a/tests/integration/test_keeper_internal_secure/configs/rootCA.pem b/tests/integration/test_keeper_internal_secure/configs/rootCA.pem new file mode 100644 index 00000000000..ec16533d98a --- /dev/null +++ b/tests/integration/test_keeper_internal_secure/configs/rootCA.pem @@ -0,0 +1,21 @@ +-----BEGIN CERTIFICATE----- +MIIDazCCAlOgAwIBAgIUUiyhAav08YhTLfUIXLN/0Ln09n4wDQYJKoZIhvcNAQEL +BQAwRTELMAkGA1UEBhMCQVUxEzARBgNVBAgMClNvbWUtU3RhdGUxITAfBgNVBAoM +GEludGVybmV0IFdpZGdpdHMgUHR5IEx0ZDAeFw0yMTA0MTIxMTQ1MjBaFw0yMTA1 +MTIxMTQ1MjBaMEUxCzAJBgNVBAYTAkFVMRMwEQYDVQQIDApTb21lLVN0YXRlMSEw +HwYDVQQKDBhJbnRlcm5ldCBXaWRnaXRzIFB0eSBMdGQwggEiMA0GCSqGSIb3DQEB +AQUAA4IBDwAwggEKAoIBAQDK0Ww4voPlkePBPS2MsEi7e1ePS+CDxTdDuOwWWEA7 +JiOyqIGqdyL6AE2EqjL3sSdVFVxytpGQWDuM6JHXdb01AnMngBuql9Jkiln7i267 +v54HtMWdm8o3rik/b/mB+kkn/sP715tI49Ybh/RobtvtK16ZgHr1ombkq6rXiom2 +8GmSmpYFwZtZsXtm2JwbZVayupQpWwdu3KrTXKBtVyKVvvWdgkf47DWYtWDS3vqE +cShM1H97G4DvI+4RX1WtQevQ0yCx1aFTg4xMHFkpUxlP8iW6mQaQPqy9rnI57e3L +RHc2I/B56xa43R3GmQ2S7bE4hvm1SrZDtVgrZLf4nvwNAgMBAAGjUzBRMB0GA1Ud +DgQWBBQ4+o0x1FzK7nRbcnm2pNLwaywCdzAfBgNVHSMEGDAWgBQ4+o0x1FzK7nRb +cnm2pNLwaywCdzAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IBAQDE +YmM8MH6RKcaqMqCBefWLj0LTcZ/Wm4G/eCFC51PkAIsf7thnzViemBHRXUSF8wzc +1MBPD6II6OB1F0i7ntGjtlhnL2WcPYbo2Np59p7fo9SMbYwF49OZ40twsuKeeoAp +pfow+y/EBZqa99MY2q6FU6FDA3Rpv0Sdk+/5PHdsSP6cgeMszFBUS0tCQEvEl83n +FJUb0vjEX4x3J64XO/0DKXyCxFyF77OwHG2ZV5BeCpIhGXu+d/e221LJkGI2orKR +kgsaUwrkS8HQt3Hd0gYpLI1Opx/JlRpB0VLYLzRGj7kDpbAcTj3SMEUp/FAZmlXR +Iiebt73eE3rOWVFgyY9f +-----END CERTIFICATE----- diff --git a/tests/integration/test_keeper_internal_secure/configs/server.crt b/tests/integration/test_keeper_internal_secure/configs/server.crt new file mode 100644 index 00000000000..dfa32da5444 --- /dev/null +++ b/tests/integration/test_keeper_internal_secure/configs/server.crt @@ -0,0 +1,19 @@ +-----BEGIN CERTIFICATE----- +MIIDETCCAfkCFHL+gKBQnU0P73/nrFrGaVPauTPmMA0GCSqGSIb3DQEBCwUAMEUx +CzAJBgNVBAYTAkFVMRMwEQYDVQQIDApTb21lLVN0YXRlMSEwHwYDVQQKDBhJbnRl +cm5ldCBXaWRnaXRzIFB0eSBMdGQwHhcNMjEwNDEyMTE0NzI5WhcNMjEwNTEyMTE0 +NzI5WjBFMQswCQYDVQQGEwJBVTETMBEGA1UECAwKU29tZS1TdGF0ZTEhMB8GA1UE +CgwYSW50ZXJuZXQgV2lkZ2l0cyBQdHkgTHRkMIIBIjANBgkqhkiG9w0BAQEFAAOC +AQ8AMIIBCgKCAQEA1iPeYn1Vy4QnQi6uNVqQnFLr0u3qdrMjGEBNAOuGmtIdhIn8 +rMCzaehNr3y2YTMRbZAqmv28P/wOXpzR1uQaFlQzTOjmsn/HOZ9JX2hv5sBUv7SU +UiPJS7UtptKDPbLv3N/v1dOXbY+vVyzo8U1Q9OS1J5yhYW6KtxP++hfSrOsFu669 +d1pqWFWaNBsmf0zF+ETvi6lywhyTFA1/PazcStP5GntcDL7eDvGq+DDsRC40oRpy +S4xRQRSteCTtGGmWpx+Jmt+90wFnLgruUbWT0veCoLxLvz0tJUk3ueUVnMkrxBQG +Fz+IWm+SQppNU5LlAcBcu9wJfo3h34BXp0NFNQIDAQABMA0GCSqGSIb3DQEBCwUA +A4IBAQCUnvQsv+GsPwGnIWqH9iiFVhgDx5QbSTW94Fyqk8dcIJBzWAiCshmLBWPJ +pfy4y2nxJbzovFsd9DA49pxqqILeLjue99yma2DVKeo+XDLDN3OX5faIMTBd7AnL +0MKqW7gUSLRUZrNOvFciAY8xRezgBQQBo4mcmmMbAbk5wKndGY6ZZOcY+JwXlqGB +5hyi6ishO8ciiZi3GMFNWWk9ViSfo27IqjKdSkQq1pr3FULvepd6SkdX+NvfZTAH +rG+CSoFGiJcOBbhDkvpY32cAJEnJOA1vHpFxfnGP8/1haeVZHqSwH1cySD78HVtF +fBs000wGHzBYWNI2KkwjNtYf06P4 +-----END CERTIFICATE----- diff --git a/tests/integration/test_keeper_internal_secure/configs/server.key b/tests/integration/test_keeper_internal_secure/configs/server.key new file mode 100644 index 00000000000..7e57c8b6b34 --- /dev/null +++ b/tests/integration/test_keeper_internal_secure/configs/server.key @@ -0,0 +1,27 @@ +-----BEGIN RSA PRIVATE KEY----- +MIIEowIBAAKCAQEA1iPeYn1Vy4QnQi6uNVqQnFLr0u3qdrMjGEBNAOuGmtIdhIn8 
+rMCzaehNr3y2YTMRbZAqmv28P/wOXpzR1uQaFlQzTOjmsn/HOZ9JX2hv5sBUv7SU +UiPJS7UtptKDPbLv3N/v1dOXbY+vVyzo8U1Q9OS1J5yhYW6KtxP++hfSrOsFu669 +d1pqWFWaNBsmf0zF+ETvi6lywhyTFA1/PazcStP5GntcDL7eDvGq+DDsRC40oRpy +S4xRQRSteCTtGGmWpx+Jmt+90wFnLgruUbWT0veCoLxLvz0tJUk3ueUVnMkrxBQG +Fz+IWm+SQppNU5LlAcBcu9wJfo3h34BXp0NFNQIDAQABAoIBAHYDso2o8V2F6XTp +8QxqawQcFudaQztDonW9CjMVmks8vRPMUDqMwNP/OMEcBA8xa8tsBm8Ao3zH1suB +tYuujkn8AYHDYVDCZvN0u6UfE3yiRpKYXJ2gJ1HX+d7UaYvZT6P0rmKzh+LTqxhq +Ib7Kk3FDkirQgYgGueAH3x/JfUvaAGvFrq+HvvlhHOs7M7iFU4nJA8jNfBolpTnG +v5MMI+f8/GHGreVICJUoclE+4V/4LDHUlrc3l1kQk0keeD6ECw/pl48TNL6ncXKu +baez1rfKbMPjhLUy2q5UZa93oW+olchEOXs1nUNKUhIOOr0f0YweYhUHNTineVM9 +yTecMIkCgYEA7CFQMyeLVeBA6C9AHBe8Zf/k64cMPyr0lUz6548ulil580PNPbvW +kd2vIKfUMgCO5lMA47ArL4bXZ7cjTvJmPYE1Yv8z+F0Tk03fnTrudHOSBEiGXAu3 +MPTxCDU7Se5Dwj0Fq81aFRtCHl8Rrss+WiBD8eRoxb/vwXKFc6VUAWMCgYEA6CjZ +XrZz11lySBhjkyVXcdLj89hDZ+bPxA7b3VB7TfCxsn5xVck7U3TFkg5Z9XwEQ7Ob +XFAPuwT9GKm7QPp6L8T2RltoJ3ys40UH1RtcNLz2aIo/xSP7lopPdAfWHef5r4y9 +kRw+Gh4NP/l5wefXsRz/D0jY3+t+QnwnhuCKbocCgYEAiR6bPOlkvzyXVH1DxEyA +Sdb8b00f7nqaRyzJsrfxvJ9fQsWHpKa0ZkYOUW9ECLlMQjHHHXEK0vGBmqe9qDWY +63RhtRgvbLVYDb018k7rc9I846Hd7AudmJ9UbIjE4hyrWlsnNOntur32ej6IvTEn +Bx0fd5NEyDi6GGLRXiOOkbMCgYAressLE/yqDlR68CZl/o5cAPU0TAKDyRSMUYQX +9OTC+hstpMSxHlkADlSaQBnVAf8CdvbX2R65FfwYzGEHkGGl5KuDDcd57b2rathG +rzMbpXA4r/u1fkG2Nf0fbABL5ZA7so4mSTXQSmSM4LpO+I7K2vVh9XC4rzAcX4g/ +mHoUrQKBgBf3rxp5h9P3HWoZYjzBDo2FqXUjKLLjE9ed5e/VqecqfHIkmueuNHlN +xifHr7lpsYu6IXkTnlK14pvLoPuwP59dCIOUYwAFz8RlH4MSUGNhYeGA8cqRrhmJ +tYk3OKExuN/+O12kUPVTy6BMH1hBdRJP+7y7lapWsRhZt18y+8Za +-----END RSA PRIVATE KEY----- diff --git a/tests/integration/test_keeper_internal_secure/configs/ssl_conf.xml b/tests/integration/test_keeper_internal_secure/configs/ssl_conf.xml new file mode 100644 index 00000000000..babc7cf0f18 --- /dev/null +++ b/tests/integration/test_keeper_internal_secure/configs/ssl_conf.xml @@ -0,0 +1,15 @@ + + + + + /etc/clickhouse-server/config.d/server.crt + /etc/clickhouse-server/config.d/server.key + /etc/clickhouse-server/config.d/rootCA.pem + true + none + true + sslv2,sslv3 + true + + + diff --git a/tests/integration/test_keeper_internal_secure/test.py b/tests/integration/test_keeper_internal_secure/test.py new file mode 100644 index 00000000000..d9fbca624e1 --- /dev/null +++ b/tests/integration/test_keeper_internal_secure/test.py @@ -0,0 +1,58 @@ +#!/usr/bin/env python3 + +import pytest +from helpers.cluster import ClickHouseCluster +import random +import string +import os +import time + +cluster = ClickHouseCluster(__file__) +node1 = cluster.add_instance('node1', main_configs=['configs/enable_secure_keeper1.xml', 'configs/ssl_conf.xml', 'configs/server.crt', 'configs/server.key', 'configs/rootCA.pem']) +node2 = cluster.add_instance('node2', main_configs=['configs/enable_secure_keeper2.xml', 'configs/ssl_conf.xml', 'configs/server.crt', 'configs/server.key', 'configs/rootCA.pem']) +node3 = cluster.add_instance('node3', main_configs=['configs/enable_secure_keeper3.xml', 'configs/ssl_conf.xml', 'configs/server.crt', 'configs/server.key', 'configs/rootCA.pem']) + +from kazoo.client import KazooClient, KazooState + +@pytest.fixture(scope="module") +def started_cluster(): + try: + cluster.start() + + yield cluster + + finally: + cluster.shutdown() + +def get_fake_zk(nodename, timeout=30.0): + _fake_zk_instance = KazooClient(hosts=cluster.get_instance_ip(nodename) + ":9181", timeout=timeout) + def reset_listener(state): + nonlocal _fake_zk_instance + print("Fake zk callback called for state", state) + if state != 
KazooState.CONNECTED: + _fake_zk_instance._reset() + + _fake_zk_instance.add_listener(reset_listener) + _fake_zk_instance.start() + return _fake_zk_instance + +def test_secure_raft_works(started_cluster): + try: + node1_zk = get_fake_zk("node1") + node2_zk = get_fake_zk("node2") + node3_zk = get_fake_zk("node3") + + node1_zk.create("/test_node", b"somedata1") + node2_zk.sync("/test_node") + node3_zk.sync("/test_node") + + assert node1_zk.exists("/test_node") is not None + assert node2_zk.exists("/test_node") is not None + assert node3_zk.exists("/test_node") is not None + finally: + try: + for zk_conn in [node1_zk, node2_zk, node3_zk]: + zk_conn.stop() + zk_conn.close() + except: + pass diff --git a/tests/integration/test_testkeeper_multinode_simple/__init__.py b/tests/integration/test_keeper_multinode_blocade_leader/__init__.py similarity index 100% rename from tests/integration/test_testkeeper_multinode_simple/__init__.py rename to tests/integration/test_keeper_multinode_blocade_leader/__init__.py diff --git a/tests/integration/test_testkeeper_multinode_blocade_leader/configs/enable_test_keeper1.xml b/tests/integration/test_keeper_multinode_blocade_leader/configs/enable_keeper1.xml similarity index 96% rename from tests/integration/test_testkeeper_multinode_blocade_leader/configs/enable_test_keeper1.xml rename to tests/integration/test_keeper_multinode_blocade_leader/configs/enable_keeper1.xml index d60e2536faf..5a75bf524ec 100644 --- a/tests/integration/test_testkeeper_multinode_blocade_leader/configs/enable_test_keeper1.xml +++ b/tests/integration/test_keeper_multinode_blocade_leader/configs/enable_keeper1.xml @@ -1,5 +1,5 @@ - + 9181 1 /var/lib/clickhouse/coordination/log @@ -37,5 +37,5 @@ 1
- + diff --git a/tests/integration/test_testkeeper_multinode_simple/configs/enable_test_keeper2.xml b/tests/integration/test_keeper_multinode_blocade_leader/configs/enable_keeper2.xml similarity index 96% rename from tests/integration/test_testkeeper_multinode_simple/configs/enable_test_keeper2.xml rename to tests/integration/test_keeper_multinode_blocade_leader/configs/enable_keeper2.xml index ec5af6f746f..246fb08c9e7 100644 --- a/tests/integration/test_testkeeper_multinode_simple/configs/enable_test_keeper2.xml +++ b/tests/integration/test_keeper_multinode_blocade_leader/configs/enable_keeper2.xml @@ -1,5 +1,5 @@ - + 9181 2 /var/lib/clickhouse/coordination/log @@ -37,5 +37,5 @@ 1
- + diff --git a/tests/integration/test_testkeeper_multinode_blocade_leader/configs/enable_test_keeper3.xml b/tests/integration/test_keeper_multinode_blocade_leader/configs/enable_keeper3.xml similarity index 96% rename from tests/integration/test_testkeeper_multinode_blocade_leader/configs/enable_test_keeper3.xml rename to tests/integration/test_keeper_multinode_blocade_leader/configs/enable_keeper3.xml index 4c8eb9e1d48..b5958dd7f41 100644 --- a/tests/integration/test_testkeeper_multinode_blocade_leader/configs/enable_test_keeper3.xml +++ b/tests/integration/test_keeper_multinode_blocade_leader/configs/enable_keeper3.xml @@ -1,5 +1,5 @@ - + 9181 3 /var/lib/clickhouse/coordination/log @@ -37,5 +37,5 @@ 1
- + diff --git a/tests/integration/test_testkeeper_multinode_blocade_leader/configs/log_conf.xml b/tests/integration/test_keeper_multinode_blocade_leader/configs/log_conf.xml similarity index 100% rename from tests/integration/test_testkeeper_multinode_blocade_leader/configs/log_conf.xml rename to tests/integration/test_keeper_multinode_blocade_leader/configs/log_conf.xml diff --git a/tests/integration/test_testkeeper_multinode_blocade_leader/configs/use_test_keeper.xml b/tests/integration/test_keeper_multinode_blocade_leader/configs/use_keeper.xml similarity index 100% rename from tests/integration/test_testkeeper_multinode_blocade_leader/configs/use_test_keeper.xml rename to tests/integration/test_keeper_multinode_blocade_leader/configs/use_keeper.xml diff --git a/tests/integration/test_testkeeper_multinode_blocade_leader/test.py b/tests/integration/test_keeper_multinode_blocade_leader/test.py similarity index 98% rename from tests/integration/test_testkeeper_multinode_blocade_leader/test.py rename to tests/integration/test_keeper_multinode_blocade_leader/test.py index 47064413b45..e74ccc80fe8 100644 --- a/tests/integration/test_testkeeper_multinode_blocade_leader/test.py +++ b/tests/integration/test_keeper_multinode_blocade_leader/test.py @@ -9,9 +9,9 @@ from helpers.network import PartitionManager from helpers.test_tools import assert_eq_with_retry cluster = ClickHouseCluster(__file__) -node1 = cluster.add_instance('node1', main_configs=['configs/enable_test_keeper1.xml', 'configs/log_conf.xml', 'configs/use_test_keeper.xml'], stay_alive=True) -node2 = cluster.add_instance('node2', main_configs=['configs/enable_test_keeper2.xml', 'configs/log_conf.xml', 'configs/use_test_keeper.xml'], stay_alive=True) -node3 = cluster.add_instance('node3', main_configs=['configs/enable_test_keeper3.xml', 'configs/log_conf.xml', 'configs/use_test_keeper.xml'], stay_alive=True) +node1 = cluster.add_instance('node1', main_configs=['configs/enable_keeper1.xml', 'configs/log_conf.xml', 'configs/use_keeper.xml'], stay_alive=True) +node2 = cluster.add_instance('node2', main_configs=['configs/enable_keeper2.xml', 'configs/log_conf.xml', 'configs/use_keeper.xml'], stay_alive=True) +node3 = cluster.add_instance('node3', main_configs=['configs/enable_keeper3.xml', 'configs/log_conf.xml', 'configs/use_keeper.xml'], stay_alive=True) from kazoo.client import KazooClient, KazooState diff --git a/tests/integration/test_testkeeper_persistent_log/__init__.py b/tests/integration/test_keeper_multinode_simple/__init__.py similarity index 100% rename from tests/integration/test_testkeeper_persistent_log/__init__.py rename to tests/integration/test_keeper_multinode_simple/__init__.py diff --git a/tests/integration/test_testkeeper_multinode_simple/configs/enable_test_keeper1.xml b/tests/integration/test_keeper_multinode_simple/configs/enable_keeper1.xml similarity index 96% rename from tests/integration/test_testkeeper_multinode_simple/configs/enable_test_keeper1.xml rename to tests/integration/test_keeper_multinode_simple/configs/enable_keeper1.xml index d60e2536faf..5a75bf524ec 100644 --- a/tests/integration/test_testkeeper_multinode_simple/configs/enable_test_keeper1.xml +++ b/tests/integration/test_keeper_multinode_simple/configs/enable_keeper1.xml @@ -1,5 +1,5 @@ - + 9181 1 /var/lib/clickhouse/coordination/log @@ -37,5 +37,5 @@ 1
- + diff --git a/tests/integration/test_testkeeper_multinode_blocade_leader/configs/enable_test_keeper2.xml b/tests/integration/test_keeper_multinode_simple/configs/enable_keeper2.xml similarity index 96% rename from tests/integration/test_testkeeper_multinode_blocade_leader/configs/enable_test_keeper2.xml rename to tests/integration/test_keeper_multinode_simple/configs/enable_keeper2.xml index ec5af6f746f..246fb08c9e7 100644 --- a/tests/integration/test_testkeeper_multinode_blocade_leader/configs/enable_test_keeper2.xml +++ b/tests/integration/test_keeper_multinode_simple/configs/enable_keeper2.xml @@ -1,5 +1,5 @@ - + 9181 2 /var/lib/clickhouse/coordination/log @@ -37,5 +37,5 @@ 1
- + diff --git a/tests/integration/test_testkeeper_multinode_simple/configs/enable_test_keeper3.xml b/tests/integration/test_keeper_multinode_simple/configs/enable_keeper3.xml similarity index 96% rename from tests/integration/test_testkeeper_multinode_simple/configs/enable_test_keeper3.xml rename to tests/integration/test_keeper_multinode_simple/configs/enable_keeper3.xml index 4c8eb9e1d48..b5958dd7f41 100644 --- a/tests/integration/test_testkeeper_multinode_simple/configs/enable_test_keeper3.xml +++ b/tests/integration/test_keeper_multinode_simple/configs/enable_keeper3.xml @@ -1,5 +1,5 @@ - + 9181 3 /var/lib/clickhouse/coordination/log @@ -37,5 +37,5 @@ 1
- + diff --git a/tests/integration/test_testkeeper_multinode_simple/configs/log_conf.xml b/tests/integration/test_keeper_multinode_simple/configs/log_conf.xml similarity index 100% rename from tests/integration/test_testkeeper_multinode_simple/configs/log_conf.xml rename to tests/integration/test_keeper_multinode_simple/configs/log_conf.xml diff --git a/tests/integration/test_testkeeper_multinode_simple/configs/use_test_keeper.xml b/tests/integration/test_keeper_multinode_simple/configs/use_keeper.xml similarity index 100% rename from tests/integration/test_testkeeper_multinode_simple/configs/use_test_keeper.xml rename to tests/integration/test_keeper_multinode_simple/configs/use_keeper.xml diff --git a/tests/integration/test_testkeeper_multinode_simple/test.py b/tests/integration/test_keeper_multinode_simple/test.py similarity index 96% rename from tests/integration/test_testkeeper_multinode_simple/test.py rename to tests/integration/test_keeper_multinode_simple/test.py index 985915e10a1..18420c9910d 100644 --- a/tests/integration/test_testkeeper_multinode_simple/test.py +++ b/tests/integration/test_keeper_multinode_simple/test.py @@ -9,9 +9,9 @@ from helpers.network import PartitionManager from helpers.test_tools import assert_eq_with_retry cluster = ClickHouseCluster(__file__) -node1 = cluster.add_instance('node1', main_configs=['configs/enable_test_keeper1.xml', 'configs/log_conf.xml', 'configs/use_test_keeper.xml'], stay_alive=True) -node2 = cluster.add_instance('node2', main_configs=['configs/enable_test_keeper2.xml', 'configs/log_conf.xml', 'configs/use_test_keeper.xml'], stay_alive=True) -node3 = cluster.add_instance('node3', main_configs=['configs/enable_test_keeper3.xml', 'configs/log_conf.xml', 'configs/use_test_keeper.xml'], stay_alive=True) +node1 = cluster.add_instance('node1', main_configs=['configs/enable_keeper1.xml', 'configs/log_conf.xml', 'configs/use_keeper.xml'], stay_alive=True) +node2 = cluster.add_instance('node2', main_configs=['configs/enable_keeper2.xml', 'configs/log_conf.xml', 'configs/use_keeper.xml'], stay_alive=True) +node3 = cluster.add_instance('node3', main_configs=['configs/enable_keeper3.xml', 'configs/log_conf.xml', 'configs/use_keeper.xml'], stay_alive=True) from kazoo.client import KazooClient, KazooState diff --git a/tests/integration/test_testkeeper_persistent_log_multinode/__init__.py b/tests/integration/test_keeper_persistent_log/__init__.py similarity index 100% rename from tests/integration/test_testkeeper_persistent_log_multinode/__init__.py rename to tests/integration/test_keeper_persistent_log/__init__.py diff --git a/tests/integration/test_testkeeper_persistent_log/configs/enable_test_keeper.xml b/tests/integration/test_keeper_persistent_log/configs/enable_keeper.xml similarity index 93% rename from tests/integration/test_testkeeper_persistent_log/configs/enable_test_keeper.xml rename to tests/integration/test_keeper_persistent_log/configs/enable_keeper.xml index e5046f2831d..13fb98a0e7a 100644 --- a/tests/integration/test_testkeeper_persistent_log/configs/enable_test_keeper.xml +++ b/tests/integration/test_keeper_persistent_log/configs/enable_keeper.xml @@ -1,5 +1,5 @@ - + 9181 1 /var/lib/clickhouse/coordination/log @@ -18,5 +18,5 @@ 44444
- + diff --git a/tests/integration/test_testkeeper_persistent_log/configs/logs_conf.xml b/tests/integration/test_keeper_persistent_log/configs/logs_conf.xml similarity index 100% rename from tests/integration/test_testkeeper_persistent_log/configs/logs_conf.xml rename to tests/integration/test_keeper_persistent_log/configs/logs_conf.xml diff --git a/tests/integration/test_testkeeper_persistent_log/configs/use_test_keeper.xml b/tests/integration/test_keeper_persistent_log/configs/use_keeper.xml similarity index 100% rename from tests/integration/test_testkeeper_persistent_log/configs/use_test_keeper.xml rename to tests/integration/test_keeper_persistent_log/configs/use_keeper.xml diff --git a/tests/integration/test_testkeeper_persistent_log/test.py b/tests/integration/test_keeper_persistent_log/test.py similarity index 97% rename from tests/integration/test_testkeeper_persistent_log/test.py rename to tests/integration/test_keeper_persistent_log/test.py index 71fee94088f..36ee20ae9e7 100644 --- a/tests/integration/test_testkeeper_persistent_log/test.py +++ b/tests/integration/test_keeper_persistent_log/test.py @@ -10,7 +10,7 @@ from kazoo.client import KazooClient, KazooState cluster = ClickHouseCluster(__file__) -node = cluster.add_instance('node', main_configs=['configs/enable_test_keeper.xml', 'configs/logs_conf.xml', 'configs/use_test_keeper.xml'], stay_alive=True) +node = cluster.add_instance('node', main_configs=['configs/enable_keeper.xml', 'configs/logs_conf.xml', 'configs/use_keeper.xml'], stay_alive=True) def random_string(length): diff --git a/tests/integration/test_testkeeper_restore_from_snapshot/__init__.py b/tests/integration/test_keeper_persistent_log_multinode/__init__.py similarity index 100% rename from tests/integration/test_testkeeper_restore_from_snapshot/__init__.py rename to tests/integration/test_keeper_persistent_log_multinode/__init__.py diff --git a/tests/integration/test_testkeeper_persistent_log_multinode/configs/enable_test_keeper1.xml b/tests/integration/test_keeper_persistent_log_multinode/configs/enable_keeper1.xml similarity index 96% rename from tests/integration/test_testkeeper_persistent_log_multinode/configs/enable_test_keeper1.xml rename to tests/integration/test_keeper_persistent_log_multinode/configs/enable_keeper1.xml index 009ea95a1a2..073616695cd 100644 --- a/tests/integration/test_testkeeper_persistent_log_multinode/configs/enable_test_keeper1.xml +++ b/tests/integration/test_keeper_persistent_log_multinode/configs/enable_keeper1.xml @@ -1,5 +1,5 @@ - + 9181 1 /var/lib/clickhouse/coordination/log @@ -36,5 +36,5 @@ 1
- + diff --git a/tests/integration/test_testkeeper_persistent_log_multinode/configs/enable_test_keeper2.xml b/tests/integration/test_keeper_persistent_log_multinode/configs/enable_keeper2.xml similarity index 96% rename from tests/integration/test_testkeeper_persistent_log_multinode/configs/enable_test_keeper2.xml rename to tests/integration/test_keeper_persistent_log_multinode/configs/enable_keeper2.xml index 5afe84e8d20..c94b6c953fd 100644 --- a/tests/integration/test_testkeeper_persistent_log_multinode/configs/enable_test_keeper2.xml +++ b/tests/integration/test_keeper_persistent_log_multinode/configs/enable_keeper2.xml @@ -1,5 +1,5 @@ - + 9181 2 /var/lib/clickhouse/coordination/log @@ -36,5 +36,5 @@ 1
- + diff --git a/tests/integration/test_testkeeper_persistent_log_multinode/configs/enable_test_keeper3.xml b/tests/integration/test_keeper_persistent_log_multinode/configs/enable_keeper3.xml similarity index 96% rename from tests/integration/test_testkeeper_persistent_log_multinode/configs/enable_test_keeper3.xml rename to tests/integration/test_keeper_persistent_log_multinode/configs/enable_keeper3.xml index 10a50db1444..10955abab02 100644 --- a/tests/integration/test_testkeeper_persistent_log_multinode/configs/enable_test_keeper3.xml +++ b/tests/integration/test_keeper_persistent_log_multinode/configs/enable_keeper3.xml @@ -1,5 +1,5 @@ - + 9181 3 /var/lib/clickhouse/coordination/log @@ -36,5 +36,5 @@ 1
- + diff --git a/tests/integration/test_testkeeper_persistent_log_multinode/configs/log_conf.xml b/tests/integration/test_keeper_persistent_log_multinode/configs/log_conf.xml similarity index 100% rename from tests/integration/test_testkeeper_persistent_log_multinode/configs/log_conf.xml rename to tests/integration/test_keeper_persistent_log_multinode/configs/log_conf.xml diff --git a/tests/integration/test_testkeeper_persistent_log_multinode/configs/use_test_keeper.xml b/tests/integration/test_keeper_persistent_log_multinode/configs/use_keeper.xml similarity index 100% rename from tests/integration/test_testkeeper_persistent_log_multinode/configs/use_test_keeper.xml rename to tests/integration/test_keeper_persistent_log_multinode/configs/use_keeper.xml diff --git a/tests/integration/test_testkeeper_persistent_log_multinode/test.py b/tests/integration/test_keeper_persistent_log_multinode/test.py similarity index 92% rename from tests/integration/test_testkeeper_persistent_log_multinode/test.py rename to tests/integration/test_keeper_persistent_log_multinode/test.py index cb9cf5a59d1..052f38b0bff 100644 --- a/tests/integration/test_testkeeper_persistent_log_multinode/test.py +++ b/tests/integration/test_keeper_persistent_log_multinode/test.py @@ -7,9 +7,9 @@ import os import time cluster = ClickHouseCluster(__file__) -node1 = cluster.add_instance('node1', main_configs=['configs/enable_test_keeper1.xml', 'configs/log_conf.xml', 'configs/use_test_keeper.xml'], stay_alive=True) -node2 = cluster.add_instance('node2', main_configs=['configs/enable_test_keeper2.xml', 'configs/log_conf.xml', 'configs/use_test_keeper.xml'], stay_alive=True) -node3 = cluster.add_instance('node3', main_configs=['configs/enable_test_keeper3.xml', 'configs/log_conf.xml', 'configs/use_test_keeper.xml'], stay_alive=True) +node1 = cluster.add_instance('node1', main_configs=['configs/enable_keeper1.xml', 'configs/log_conf.xml', 'configs/use_keeper.xml'], stay_alive=True) +node2 = cluster.add_instance('node2', main_configs=['configs/enable_keeper2.xml', 'configs/log_conf.xml', 'configs/use_keeper.xml'], stay_alive=True) +node3 = cluster.add_instance('node3', main_configs=['configs/enable_keeper3.xml', 'configs/log_conf.xml', 'configs/use_keeper.xml'], stay_alive=True) from kazoo.client import KazooClient, KazooState diff --git a/tests/integration/test_testkeeper_snapshots/__init__.py b/tests/integration/test_keeper_restore_from_snapshot/__init__.py similarity index 100% rename from tests/integration/test_testkeeper_snapshots/__init__.py rename to tests/integration/test_keeper_restore_from_snapshot/__init__.py diff --git a/tests/integration/test_testkeeper_restore_from_snapshot/configs/enable_test_keeper1.xml b/tests/integration/test_keeper_restore_from_snapshot/configs/enable_keeper1.xml similarity index 96% rename from tests/integration/test_testkeeper_restore_from_snapshot/configs/enable_test_keeper1.xml rename to tests/integration/test_keeper_restore_from_snapshot/configs/enable_keeper1.xml index 7d56a9137e9..526ff259d40 100644 --- a/tests/integration/test_testkeeper_restore_from_snapshot/configs/enable_test_keeper1.xml +++ b/tests/integration/test_keeper_restore_from_snapshot/configs/enable_keeper1.xml @@ -1,5 +1,5 @@ - + 9181 1 /var/lib/clickhouse/coordination/log @@ -39,5 +39,5 @@ 10
- + diff --git a/tests/integration/test_testkeeper_restore_from_snapshot/configs/enable_test_keeper2.xml b/tests/integration/test_keeper_restore_from_snapshot/configs/enable_keeper2.xml similarity index 96% rename from tests/integration/test_testkeeper_restore_from_snapshot/configs/enable_test_keeper2.xml rename to tests/integration/test_keeper_restore_from_snapshot/configs/enable_keeper2.xml index e4c031281ac..0c1d3781c3a 100644 --- a/tests/integration/test_testkeeper_restore_from_snapshot/configs/enable_test_keeper2.xml +++ b/tests/integration/test_keeper_restore_from_snapshot/configs/enable_keeper2.xml @@ -1,5 +1,5 @@ - + 9181 2 /var/lib/clickhouse/coordination/log @@ -39,5 +39,5 @@ 10
- + diff --git a/tests/integration/test_testkeeper_restore_from_snapshot/configs/enable_test_keeper3.xml b/tests/integration/test_keeper_restore_from_snapshot/configs/enable_keeper3.xml similarity index 96% rename from tests/integration/test_testkeeper_restore_from_snapshot/configs/enable_test_keeper3.xml rename to tests/integration/test_keeper_restore_from_snapshot/configs/enable_keeper3.xml index 8db7647c877..b0694540292 100644 --- a/tests/integration/test_testkeeper_restore_from_snapshot/configs/enable_test_keeper3.xml +++ b/tests/integration/test_keeper_restore_from_snapshot/configs/enable_keeper3.xml @@ -1,5 +1,5 @@ - + 9181 3 /var/lib/clickhouse/coordination/log @@ -39,5 +39,5 @@ 10
- + diff --git a/tests/integration/test_testkeeper_restore_from_snapshot/configs/log_conf.xml b/tests/integration/test_keeper_restore_from_snapshot/configs/log_conf.xml similarity index 100% rename from tests/integration/test_testkeeper_restore_from_snapshot/configs/log_conf.xml rename to tests/integration/test_keeper_restore_from_snapshot/configs/log_conf.xml diff --git a/tests/integration/test_testkeeper_restore_from_snapshot/test.py b/tests/integration/test_keeper_restore_from_snapshot/test.py similarity index 95% rename from tests/integration/test_testkeeper_restore_from_snapshot/test.py rename to tests/integration/test_keeper_restore_from_snapshot/test.py index 62122751623..eee49f8a319 100644 --- a/tests/integration/test_testkeeper_restore_from_snapshot/test.py +++ b/tests/integration/test_keeper_restore_from_snapshot/test.py @@ -7,9 +7,9 @@ import os import time cluster = ClickHouseCluster(__file__) -node1 = cluster.add_instance('node1', main_configs=['configs/enable_test_keeper1.xml', 'configs/log_conf.xml'], stay_alive=True) -node2 = cluster.add_instance('node2', main_configs=['configs/enable_test_keeper2.xml', 'configs/log_conf.xml'], stay_alive=True) -node3 = cluster.add_instance('node3', main_configs=['configs/enable_test_keeper3.xml', 'configs/log_conf.xml'], stay_alive=True) +node1 = cluster.add_instance('node1', main_configs=['configs/enable_keeper1.xml', 'configs/log_conf.xml'], stay_alive=True) +node2 = cluster.add_instance('node2', main_configs=['configs/enable_keeper2.xml', 'configs/log_conf.xml'], stay_alive=True) +node3 = cluster.add_instance('node3', main_configs=['configs/enable_keeper3.xml', 'configs/log_conf.xml'], stay_alive=True) from kazoo.client import KazooClient, KazooState diff --git a/tests/integration/test_testkeeper_snapshots_multinode/__init__.py b/tests/integration/test_keeper_secure_client/__init__.py similarity index 100% rename from tests/integration/test_testkeeper_snapshots_multinode/__init__.py rename to tests/integration/test_keeper_secure_client/__init__.py diff --git a/tests/integration/test_keeper_secure_client/configs/dhparam.pem b/tests/integration/test_keeper_secure_client/configs/dhparam.pem new file mode 100644 index 00000000000..2e6cee0798d --- /dev/null +++ b/tests/integration/test_keeper_secure_client/configs/dhparam.pem @@ -0,0 +1,8 @@ +-----BEGIN DH PARAMETERS----- +MIIBCAKCAQEAua92DDli13gJ+//ZXyGaggjIuidqB0crXfhUlsrBk9BV1hH3i7fR +XGP9rUdk2ubnB3k2ejBStL5oBrkHm9SzUFSQHqfDjLZjKoUpOEmuDc4cHvX1XTR5 +Pr1vf5cd0yEncJWG5W4zyUB8k++SUdL2qaeslSs+f491HBLDYn/h8zCgRbBvxhxb +9qeho1xcbnWeqkN6Kc9bgGozA16P9NLuuLttNnOblkH+lMBf42BSne/TWt3AlGZf +slKmmZcySUhF8aKfJnLKbkBCFqOtFRh8zBA9a7g+BT/lSANATCDPaAk1YVih2EKb +dpc3briTDbRsiqg2JKMI7+VdULY9bh3EawIBAg== +-----END DH PARAMETERS----- diff --git a/tests/integration/test_keeper_secure_client/configs/enable_secure_keeper.xml b/tests/integration/test_keeper_secure_client/configs/enable_secure_keeper.xml new file mode 100644 index 00000000000..af815f4a3bc --- /dev/null +++ b/tests/integration/test_keeper_secure_client/configs/enable_secure_keeper.xml @@ -0,0 +1,24 @@ + + + + 10181 + 1 + /var/lib/clickhouse/coordination/log + /var/lib/clickhouse/coordination/snapshots + + + 10000 + 30000 + trace + false + + + + + 1 + localhost + 44444 + + + + diff --git a/tests/integration/test_keeper_secure_client/configs/server.crt b/tests/integration/test_keeper_secure_client/configs/server.crt new file mode 100644 index 00000000000..7ade2d96273 --- /dev/null +++ b/tests/integration/test_keeper_secure_client/configs/server.crt @@ 
-0,0 +1,19 @@ +-----BEGIN CERTIFICATE----- +MIIC/TCCAeWgAwIBAgIJANjx1QSR77HBMA0GCSqGSIb3DQEBCwUAMBQxEjAQBgNV +BAMMCWxvY2FsaG9zdDAgFw0xODA3MzAxODE2MDhaGA8yMjkyMDUxNDE4MTYwOFow +FDESMBAGA1UEAwwJbG9jYWxob3N0MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIB +CgKCAQEAs9uSo6lJG8o8pw0fbVGVu0tPOljSWcVSXH9uiJBwlZLQnhN4SFSFohfI +4K8U1tBDTnxPLUo/V1K9yzoLiRDGMkwVj6+4+hE2udS2ePTQv5oaMeJ9wrs+5c9T +4pOtlq3pLAdm04ZMB1nbrEysceVudHRkQbGHzHp6VG29Fw7Ga6YpqyHQihRmEkTU +7UCYNA+Vk7aDPdMS/khweyTpXYZimaK9f0ECU3/VOeG3fH6Sp2X6FN4tUj/aFXEj +sRmU5G2TlYiSIUMF2JPdhSihfk1hJVALrHPTU38SOL+GyyBRWdNcrIwVwbpvsvPg +pryMSNxnpr0AK0dFhjwnupIv5hJIOQIDAQABo1AwTjAdBgNVHQ4EFgQUjPLb3uYC +kcamyZHK4/EV8jAP0wQwHwYDVR0jBBgwFoAUjPLb3uYCkcamyZHK4/EV8jAP0wQw +DAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEAM/ocuDvfPus/KpMVD51j +4IdlU8R0vmnYLQ+ygzOAo7+hUWP5j0yvq4ILWNmQX6HNvUggCgFv9bjwDFhb/5Vr +85ieWfTd9+LTjrOzTw4avdGwpX9G+6jJJSSq15tw5ElOIFb/qNA9O4dBiu8vn03C +L/zRSXrARhSqTW5w/tZkUcSTT+M5h28+Lgn9ysx4Ff5vi44LJ1NnrbJbEAIYsAAD ++UA+4MBFKx1r6hHINULev8+lCfkpwIaeS8RL+op4fr6kQPxnULw8wT8gkuc8I4+L +P9gg/xDHB44T3ADGZ5Ib6O0DJaNiToO6rnoaaxs0KkotbvDWvRoxEytSbXKoYjYp +0g== +-----END CERTIFICATE----- diff --git a/tests/integration/test_keeper_secure_client/configs/server.key b/tests/integration/test_keeper_secure_client/configs/server.key new file mode 100644 index 00000000000..f0fb61ac443 --- /dev/null +++ b/tests/integration/test_keeper_secure_client/configs/server.key @@ -0,0 +1,28 @@ +-----BEGIN PRIVATE KEY----- +MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCz25KjqUkbyjyn +DR9tUZW7S086WNJZxVJcf26IkHCVktCeE3hIVIWiF8jgrxTW0ENOfE8tSj9XUr3L +OguJEMYyTBWPr7j6ETa51LZ49NC/mhox4n3Cuz7lz1Pik62WreksB2bThkwHWdus +TKxx5W50dGRBsYfMenpUbb0XDsZrpimrIdCKFGYSRNTtQJg0D5WTtoM90xL+SHB7 +JOldhmKZor1/QQJTf9U54bd8fpKnZfoU3i1SP9oVcSOxGZTkbZOViJIhQwXYk92F +KKF+TWElUAusc9NTfxI4v4bLIFFZ01ysjBXBum+y8+CmvIxI3GemvQArR0WGPCe6 +ki/mEkg5AgMBAAECggEATrbIBIxwDJOD2/BoUqWkDCY3dGevF8697vFuZKIiQ7PP +TX9j4vPq0DfsmDjHvAPFkTHiTQXzlroFik3LAp+uvhCCVzImmHq0IrwvZ9xtB43f +7Pkc5P6h1l3Ybo8HJ6zRIY3TuLtLxuPSuiOMTQSGRL0zq3SQ5DKuGwkz+kVjHXUN +MR2TECFwMHKQ5VLrC+7PMpsJYyOMlDAWhRfUalxC55xOXTpaN8TxNnwQ8K2ISVY5 +212Jz/a4hn4LdwxSz3Tiu95PN072K87HLWx3EdT6vW4Ge5P/A3y+smIuNAlanMnu +plHBRtpATLiTxZt/n6npyrfQVbYjSH7KWhB8hBHtaQKBgQDh9Cq1c/KtqDtE0Ccr +/r9tZNTUwBE6VP+3OJeKdEdtsfuxjOCkS1oAjgBJiSDOiWPh1DdoDeVZjPKq6pIu +Mq12OE3Doa8znfCXGbkSzEKOb2unKZMJxzrz99kXt40W5DtrqKPNb24CNqTiY8Aa +CjtcX+3weat82VRXvph6U8ltMwKBgQDLxjiQQzNoY7qvg7CwJCjf9qq8jmLK766g +1FHXopqS+dTxDLM8eJSRrpmxGWJvNeNc1uPhsKsKgotqAMdBUQTf7rSTbt4MyoH5 +bUcRLtr+0QTK9hDWMOOvleqNXha68vATkohWYfCueNsC60qD44o8RZAS6UNy3ENq +cM1cxqe84wKBgQDKkHutWnooJtajlTxY27O/nZKT/HA1bDgniMuKaz4R4Gr1PIez +on3YW3V0d0P7BP6PWRIm7bY79vkiMtLEKdiKUGWeyZdo3eHvhDb/3DCawtau8L2K +GZsHVp2//mS1Lfz7Qh8/L/NedqCQ+L4iWiPnZ3THjjwn3CoZ05ucpvrAMwKBgB54 +nay039MUVq44Owub3KDg+dcIU62U+cAC/9oG7qZbxYPmKkc4oL7IJSNecGHA5SbU +2268RFdl/gLz6tfRjbEOuOHzCjFPdvAdbysanpTMHLNc6FefJ+zxtgk9sJh0C4Jh +vxFrw9nTKKzfEl12gQ1SOaEaUIO0fEBGbe8ZpauRAoGAMAlGV+2/K4ebvAJKOVTa +dKAzQ+TD2SJmeR1HZmKDYddNqwtZlzg3v4ZhCk4eaUmGeC1Bdh8MDuB3QQvXz4Dr +vOIP4UVaOr+uM+7TgAgVnP4/K6IeJGzUDhX93pmpWhODfdu/oojEKVcpCojmEmS1 +KCBtmIrQLqzMpnBpLNuSY+Q= +-----END PRIVATE KEY----- diff --git a/tests/integration/test_keeper_secure_client/configs/ssl_conf.xml b/tests/integration/test_keeper_secure_client/configs/ssl_conf.xml new file mode 100644 index 00000000000..7ca51acde22 --- /dev/null +++ b/tests/integration/test_keeper_secure_client/configs/ssl_conf.xml @@ -0,0 +1,26 @@ + + + + /etc/clickhouse-server/config.d/server.crt + /etc/clickhouse-server/config.d/server.key + 
/etc/clickhouse-server/config.d/dhparam.pem + none + true + true + sslv2,sslv3 + true + + + /etc/clickhouse-server/config.d/server.crt + /etc/clickhouse-server/config.d/server.key + true + true + sslv2,sslv3 + true + none + + RejectCertificateHandler + + + + diff --git a/tests/integration/test_keeper_secure_client/configs/use_secure_keeper.xml b/tests/integration/test_keeper_secure_client/configs/use_secure_keeper.xml new file mode 100644 index 00000000000..a0d19300022 --- /dev/null +++ b/tests/integration/test_keeper_secure_client/configs/use_secure_keeper.xml @@ -0,0 +1,9 @@ + + + + node1 + 10181 + 1 + + + diff --git a/tests/integration/test_keeper_secure_client/test.py b/tests/integration/test_keeper_secure_client/test.py new file mode 100644 index 00000000000..fe03ed8dcf8 --- /dev/null +++ b/tests/integration/test_keeper_secure_client/test.py @@ -0,0 +1,26 @@ +#!/usr/bin/env python3 +import pytest +from helpers.cluster import ClickHouseCluster +import string +import os +import time + +cluster = ClickHouseCluster(__file__) +node1 = cluster.add_instance('node1', main_configs=['configs/enable_secure_keeper.xml', 'configs/ssl_conf.xml', "configs/dhparam.pem", "configs/server.crt", "configs/server.key"]) +node2 = cluster.add_instance('node2', main_configs=['configs/use_secure_keeper.xml', 'configs/ssl_conf.xml', "configs/server.crt", "configs/server.key"]) + + +@pytest.fixture(scope="module") +def started_cluster(): + try: + cluster.start() + + yield cluster + + finally: + cluster.shutdown() + + +def test_connection(started_cluster): + # just nothrow + node2.query("SELECT * FROM system.zookeeper WHERE path = '/'") diff --git a/tests/integration/test_keeper_snapshots/__init__.py b/tests/integration/test_keeper_snapshots/__init__.py new file mode 100644 index 00000000000..e5a0d9b4834 --- /dev/null +++ b/tests/integration/test_keeper_snapshots/__init__.py @@ -0,0 +1 @@ +#!/usr/bin/env python3 diff --git a/tests/integration/test_testkeeper_snapshots/configs/enable_test_keeper.xml b/tests/integration/test_keeper_snapshots/configs/enable_keeper.xml similarity index 94% rename from tests/integration/test_testkeeper_snapshots/configs/enable_test_keeper.xml rename to tests/integration/test_keeper_snapshots/configs/enable_keeper.xml index f7d3c548f87..11039882f8c 100644 --- a/tests/integration/test_testkeeper_snapshots/configs/enable_test_keeper.xml +++ b/tests/integration/test_keeper_snapshots/configs/enable_keeper.xml @@ -1,5 +1,5 @@ - + 9181 1 /var/lib/clickhouse/coordination/log @@ -20,5 +20,5 @@ 44444
- + diff --git a/tests/integration/test_testkeeper_snapshots/configs/logs_conf.xml b/tests/integration/test_keeper_snapshots/configs/logs_conf.xml similarity index 100% rename from tests/integration/test_testkeeper_snapshots/configs/logs_conf.xml rename to tests/integration/test_keeper_snapshots/configs/logs_conf.xml diff --git a/tests/integration/test_testkeeper_snapshots/test.py b/tests/integration/test_keeper_snapshots/test.py similarity index 98% rename from tests/integration/test_testkeeper_snapshots/test.py rename to tests/integration/test_keeper_snapshots/test.py index 8dece678652..9879c0003be 100644 --- a/tests/integration/test_testkeeper_snapshots/test.py +++ b/tests/integration/test_keeper_snapshots/test.py @@ -13,7 +13,7 @@ from kazoo.client import KazooClient, KazooState cluster = ClickHouseCluster(__file__) # clickhouse itself will use external zookeeper -node = cluster.add_instance('node', main_configs=['configs/enable_test_keeper.xml', 'configs/logs_conf.xml'], stay_alive=True, with_zookeeper=True) +node = cluster.add_instance('node', main_configs=['configs/enable_keeper.xml', 'configs/logs_conf.xml'], stay_alive=True, with_zookeeper=True) def random_string(length): return ''.join(random.choices(string.ascii_lowercase + string.digits, k=length)) diff --git a/tests/integration/test_keeper_snapshots_multinode/__init__.py b/tests/integration/test_keeper_snapshots_multinode/__init__.py new file mode 100644 index 00000000000..e5a0d9b4834 --- /dev/null +++ b/tests/integration/test_keeper_snapshots_multinode/__init__.py @@ -0,0 +1 @@ +#!/usr/bin/env python3 diff --git a/tests/integration/test_testkeeper_snapshots_multinode/configs/enable_test_keeper1.xml b/tests/integration/test_keeper_snapshots_multinode/configs/enable_keeper1.xml similarity index 96% rename from tests/integration/test_testkeeper_snapshots_multinode/configs/enable_test_keeper1.xml rename to tests/integration/test_keeper_snapshots_multinode/configs/enable_keeper1.xml index 46b352f498e..6826d762652 100644 --- a/tests/integration/test_testkeeper_snapshots_multinode/configs/enable_test_keeper1.xml +++ b/tests/integration/test_keeper_snapshots_multinode/configs/enable_keeper1.xml @@ -1,5 +1,5 @@ - + 9181 1 /var/lib/clickhouse/coordination/log @@ -38,5 +38,5 @@ 1
- + diff --git a/tests/integration/test_testkeeper_snapshots_multinode/configs/enable_test_keeper2.xml b/tests/integration/test_keeper_snapshots_multinode/configs/enable_keeper2.xml similarity index 96% rename from tests/integration/test_testkeeper_snapshots_multinode/configs/enable_test_keeper2.xml rename to tests/integration/test_keeper_snapshots_multinode/configs/enable_keeper2.xml index 848a14b9feb..7f9bdde59da 100644 --- a/tests/integration/test_testkeeper_snapshots_multinode/configs/enable_test_keeper2.xml +++ b/tests/integration/test_keeper_snapshots_multinode/configs/enable_keeper2.xml @@ -1,5 +1,5 @@ - + 9181 2 /var/lib/clickhouse/coordination/log @@ -38,5 +38,5 @@ 1
- + diff --git a/tests/integration/test_testkeeper_snapshots_multinode/configs/enable_test_keeper3.xml b/tests/integration/test_keeper_snapshots_multinode/configs/enable_keeper3.xml similarity index 96% rename from tests/integration/test_testkeeper_snapshots_multinode/configs/enable_test_keeper3.xml rename to tests/integration/test_keeper_snapshots_multinode/configs/enable_keeper3.xml index 6b37884ab3e..32e2743e7b8 100644 --- a/tests/integration/test_testkeeper_snapshots_multinode/configs/enable_test_keeper3.xml +++ b/tests/integration/test_keeper_snapshots_multinode/configs/enable_keeper3.xml @@ -1,5 +1,5 @@ - + 9181 3 /var/lib/clickhouse/coordination/log @@ -38,5 +38,5 @@ 1
-    </test_keeper_server>
+    </keeper_server>
 </yandex>
diff --git a/tests/integration/test_testkeeper_snapshots_multinode/configs/log_conf.xml b/tests/integration/test_keeper_snapshots_multinode/configs/log_conf.xml
similarity index 100%
rename from tests/integration/test_testkeeper_snapshots_multinode/configs/log_conf.xml
rename to tests/integration/test_keeper_snapshots_multinode/configs/log_conf.xml
diff --git a/tests/integration/test_testkeeper_snapshots_multinode/test.py b/tests/integration/test_keeper_snapshots_multinode/test.py
similarity index 94%
rename from tests/integration/test_testkeeper_snapshots_multinode/test.py
rename to tests/integration/test_keeper_snapshots_multinode/test.py
index 3ddb7767f2a..b4e82d62f09 100644
--- a/tests/integration/test_testkeeper_snapshots_multinode/test.py
+++ b/tests/integration/test_keeper_snapshots_multinode/test.py
@@ -7,9 +7,9 @@ import os
 import time
 
 cluster = ClickHouseCluster(__file__)
-node1 = cluster.add_instance('node1', main_configs=['configs/enable_test_keeper1.xml', 'configs/log_conf.xml'], stay_alive=True)
-node2 = cluster.add_instance('node2', main_configs=['configs/enable_test_keeper2.xml', 'configs/log_conf.xml'], stay_alive=True)
-node3 = cluster.add_instance('node3', main_configs=['configs/enable_test_keeper3.xml', 'configs/log_conf.xml'], stay_alive=True)
+node1 = cluster.add_instance('node1', main_configs=['configs/enable_keeper1.xml', 'configs/log_conf.xml'], stay_alive=True)
+node2 = cluster.add_instance('node2', main_configs=['configs/enable_keeper2.xml', 'configs/log_conf.xml'], stay_alive=True)
+node3 = cluster.add_instance('node3', main_configs=['configs/enable_keeper3.xml', 'configs/log_conf.xml'], stay_alive=True)
 
 from kazoo.client import KazooClient, KazooState
diff --git a/tests/queries/0_stateless/01676_clickhouse_client_autocomplete.reference b/tests/integration/test_library_bridge/__init__.py
similarity index 100%
rename from tests/queries/0_stateless/01676_clickhouse_client_autocomplete.reference
rename to tests/integration/test_library_bridge/__init__.py
diff --git a/tests/integration/test_library_bridge/configs/config.d/config.xml b/tests/integration/test_library_bridge/configs/config.d/config.xml
new file mode 100644
index 00000000000..9bea75fbb6f
--- /dev/null
+++ b/tests/integration/test_library_bridge/configs/config.d/config.xml
@@ -0,0 +1,12 @@
+<yandex>
+    <dictionaries_lib_path>/etc/clickhouse-server/config.d/dictionaries_lib</dictionaries_lib_path>
+    <logger>
+        <level>trace</level>
+        <log>/var/log/clickhouse-server/log.log</log>
+        <errorlog>/var/log/clickhouse-server/log.err.log</errorlog>
+        <size>1000M</size>
+        <count>10</count>
+        <stderr>/var/log/clickhouse-server/stderr.log</stderr>
+        <stdout>/var/log/clickhouse-server/stdout.log</stdout>
+    </logger>
+</yandex>
diff --git a/tests/integration/test_library_bridge/configs/dict_lib.cpp b/tests/integration/test_library_bridge/configs/dict_lib.cpp
new file mode 100644
index 00000000000..be25804ed64
--- /dev/null
+++ b/tests/integration/test_library_bridge/configs/dict_lib.cpp
@@ -0,0 +1,298 @@
+/// c++ sample dictionary library
+
+#include <cstdint>
+#include <functional>
+#include <memory>
+#include <sstream>
+#include <string>
+#include <vector>
+
+namespace ClickHouseLibrary
+{
+using CString = const char *;
+using ColumnName = CString;
+using ColumnNames = ColumnName[];
+
+struct CStrings
+{
+    CString * data = nullptr;
+    uint64_t size = 0;
+};
+
+struct VectorUInt64
+{
+    const uint64_t * data = nullptr;
+    uint64_t size = 0;
+};
+
+struct ColumnsUInt64
+{
+    VectorUInt64 * data = nullptr;
+    uint64_t size = 0;
+};
+
+struct Field
+{
+    const void * data = nullptr;
+    uint64_t size = 0;
+};
+
+struct Row
+{
+    const Field * data = nullptr;
+    uint64_t size = 0;
+};
+
+struct Table
+{
+    const Row * data = nullptr;
+    uint64_t size = 0;
+    uint64_t error_code = 0; // 0 = ok; !0 = error, with message in error_string
+    const char * error_string = nullptr;
+};
+
+enum LogLevel
+{
+    FATAL = 1,
+    CRITICAL,
+    ERROR,
+    WARNING,
+    NOTICE,
+    INFORMATION,
+    DEBUG,
+    TRACE,
+};
+
+void log(LogLevel level, CString msg);
+}
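+
+/// Explanatory note (added commentary, not part of the original sample):
+/// everything in namespace ClickHouseLibrary mirrors the C-style ABI shared
+/// with clickhouse-library-bridge. Results come back as a Table -- an array
+/// of Rows, each an array of Fields pointing at memory owned by the library
+/// -- and error_code/error_string report failures back to the bridge.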
+
+
+#define LOG(logger, message) \
+    do \
+    { \
+        std::stringstream builder; \
+        builder << message; \
+        (logger)(ClickHouseLibrary::INFORMATION, builder.str().c_str()); \
+    } while (false)
+
+
+struct LibHolder
+{
+    std::function<void(ClickHouseLibrary::LogLevel, ClickHouseLibrary::CString)> log;
+};
+
+
+struct DataHolder
+{
+    std::vector<std::vector<uint64_t>> dataHolder; // Actual data storage
+    std::vector<std::vector<ClickHouseLibrary::Field>> fieldHolder; // Pointers and sizes of data
+    std::unique_ptr<ClickHouseLibrary::Row[]> rowHolder;
+    ClickHouseLibrary::Table ctable; // Result data prepared for transfer via c-style interface
+    LibHolder * lib = nullptr;
+
+    size_t num_rows;
+    size_t num_cols;
+};
+
+
+template <typename T>
+void MakeColumnsFromVector(T * ptr)
+{
+    if (ptr->dataHolder.empty())
+    {
+        LOG(ptr->lib->log, "generating null values, cols: " << ptr->num_cols);
+        std::vector<ClickHouseLibrary::Field> fields;
+        for (size_t i = 0; i < ptr->num_cols; ++i)
+            fields.push_back({nullptr, 0});
+        ptr->fieldHolder.push_back(fields);
+    }
+    else
+    {
+        for (const auto & row : ptr->dataHolder)
+        {
+            std::vector<ClickHouseLibrary::Field> fields;
+            for (const auto & field : row)
+                fields.push_back({&field, sizeof(field)});
+            ptr->fieldHolder.push_back(fields);
+        }
+    }
+
+    const auto rows_num = ptr->fieldHolder.size();
+    ptr->rowHolder = std::make_unique<ClickHouseLibrary::Row[]>(rows_num);
+    size_t i = 0;
+    for (auto & row : ptr->fieldHolder)
+    {
+        ptr->rowHolder[i].size = row.size();
+        ptr->rowHolder[i].data = row.data();
+        ++i;
+    }
+    ptr->ctable.size = rows_num;
+    ptr->ctable.data = ptr->rowHolder.get();
+}
+
+
+extern "C"
+{
+
+void * ClickHouseDictionary_v3_loadIds(void * data_ptr,
+    ClickHouseLibrary::CStrings * settings,
+    ClickHouseLibrary::CStrings * columns,
+    const struct ClickHouseLibrary::VectorUInt64 * ids)
+{
+    auto ptr = static_cast<DataHolder *>(data_ptr);
+
+    if (ids)
+        LOG(ptr->lib->log, "loadIds lib call ptr=" << data_ptr << " => " << ptr << " size=" << ids->size);
+
+    if (!ptr)
+        return nullptr;
+
+    if (settings)
+    {
+        LOG(ptr->lib->log, "settings passed: " << settings->size);
+        for (size_t i = 0; i < settings->size; ++i)
+        {
+            LOG(ptr->lib->log, "setting " << i << " :" << settings->data[i]);
+        }
+    }
+
+    if (columns)
+    {
+        LOG(ptr->lib->log, "columns passed:" << columns->size);
+        for (size_t i = 0; i < columns->size; ++i)
+        {
+            LOG(ptr->lib->log, "column " << i << " :" << columns->data[i]);
+        }
+    }
+
+    if (ids)
+    {
+        LOG(ptr->lib->log, "ids passed: " << ids->size);
+        for (size_t i = 0; i < ids->size; ++i)
+        {
+            LOG(ptr->lib->log, "id " << i << " :" << ids->data[i] << " generating.");
+            ptr->dataHolder.emplace_back(std::vector<uint64_t>{ids->data[i], ids->data[i] + 100, ids->data[i] + 200, ids->data[i] + 300});
+        }
+    }
+
+    MakeColumnsFromVector(ptr);
+    return static_cast<void *>(&ptr->ctable);
+}
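+
+/// Explanatory note (added commentary, not part of the original sample): the
+/// bridge drives a dictionary load as libNew -> dataNew -> one of
+/// loadAll/loadIds/loadKeys -> dataDelete -> libDelete. Each load call
+/// returns a pointer to the DataHolder's ctable, which must stay valid until
+/// dataDelete frees the holder.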
value " + setting_value); + + if (setting_name == "num_rows") + num_rows = std::atoi(setting_value.data()); + else if (setting_name == "num_cols") + num_cols = std::atoi(setting_value.data()); + else if (setting_name == "test_type") + test_type = setting_value; + else + { + LOG(ptr->lib->log, "Adding setting " + setting_name); + settings_values.push_back(setting_value); + } + } + } + + if (test_type == "test_simple") + { + for (size_t i = 0; i < 10; ++i) + { + LOG(ptr->lib->log, "id " << i << " :" << " generating."); + ptr->dataHolder.emplace_back(std::vector{i, i + 10, i + 20, i + 30}); + } + } + else if (test_type == "test_many_rows" && num_rows) + { + for (size_t i = 0; i < num_rows; ++i) + { + ptr->dataHolder.emplace_back(std::vector{i, i, i, i}); + } + } + + ptr->num_cols = num_cols; + ptr->num_rows = num_rows; + + MakeColumnsFromVector(ptr); + return static_cast(&ptr->ctable); +} + + +void * ClickHouseDictionary_v3_loadKeys(void * data_ptr, ClickHouseLibrary::CStrings * settings, ClickHouseLibrary::Table * requested_keys) +{ + auto ptr = static_cast(data_ptr); + LOG(ptr->lib->log, "loadKeys lib call ptr=" << data_ptr << " => " << ptr); + if (settings) + { + LOG(ptr->lib->log, "settings passed: " << settings->size); + for (size_t i = 0; i < settings->size; ++i) + { + LOG(ptr->lib->log, "setting " << i << " :" << settings->data[i]); + } + } + if (requested_keys) + { + LOG(ptr->lib->log, "requested_keys columns passed: " << requested_keys->size); + for (size_t i = 0; i < requested_keys->size; ++i) + { + LOG(ptr->lib->log, "requested_keys at column " << i << " passed: " << requested_keys->data[i].size); + ptr->dataHolder.emplace_back(std::vector{i, i + 100, i + 200, i + 300}); + } + } + + MakeColumnsFromVector(ptr); + return static_cast(&ptr->ctable); +} + +void * ClickHouseDictionary_v3_libNew( + ClickHouseLibrary::CStrings * /*settings*/, void (*logFunc)(ClickHouseLibrary::LogLevel, ClickHouseLibrary::CString)) +{ + auto lib_ptr = new LibHolder; + lib_ptr->log = logFunc; + return lib_ptr; +} + +void ClickHouseDictionary_v3_libDelete(void * lib_ptr) +{ + auto ptr = static_cast(lib_ptr); + delete ptr; + return; +} + +void * ClickHouseDictionary_v3_dataNew(void * lib_ptr) +{ + auto data_ptr = new DataHolder; + data_ptr->lib = static_castlib)>(lib_ptr); + return data_ptr; +} + +void ClickHouseDictionary_v3_dataDelete(void * /*lib_ptr*/, void * data_ptr) +{ + auto ptr = static_cast(data_ptr); + delete ptr; + return; +} + +} diff --git a/tests/integration/test_library_bridge/configs/dictionaries/dict1.xml b/tests/integration/test_library_bridge/configs/dictionaries/dict1.xml new file mode 100644 index 00000000000..9be21aea1e3 --- /dev/null +++ b/tests/integration/test_library_bridge/configs/dictionaries/dict1.xml @@ -0,0 +1,86 @@ + + + + dict1 + + + /etc/clickhouse-server/config.d/dictionaries_lib/dict_lib.so + + test_simple + nice key + interesting, nice value + //home/interesting-path/to-/interesting_data
+ 11 + user-u -user +
+
+ + + + + + 1 + 1 + + + + key + UInt64 + + + value1 + + UInt64 + + + value2 + + UInt64 + + + value3 + + UInt64 + + +
+ + dict2 + + + /etc/clickhouse-server/config.d/dictionaries_lib/dict_lib.so + + test_nulls + + + + + + + + 1 + 1 + + + + key + UInt64 + + + value1 + 12 + UInt64 + + + value2 + 12 + UInt64 + + + value3 + 12 + UInt64 + + + +
diff --git a/tests/integration/test_library_bridge/configs/enable_dict.xml b/tests/integration/test_library_bridge/configs/enable_dict.xml
new file mode 100644
index 00000000000..264f1f667b1
--- /dev/null
+++ b/tests/integration/test_library_bridge/configs/enable_dict.xml
@@ -0,0 +1,4 @@
+<?xml version="1.0"?>
+<yandex>
+    <dictionaries_config>/etc/clickhouse-server/config.d/dict*.xml</dictionaries_config>
+</yandex>
diff --git a/tests/integration/test_library_bridge/configs/log_conf.xml b/tests/integration/test_library_bridge/configs/log_conf.xml
new file mode 100644
index 00000000000..eed7a435b68
--- /dev/null
+++ b/tests/integration/test_library_bridge/configs/log_conf.xml
@@ -0,0 +1,17 @@
+
+
+
+        trace
+        /var/log/clickhouse-server/log.log
+        /var/log/clickhouse-server/log.err.log
+        1000M
+        10
+        /var/log/clickhouse-server/stderr.log
+        /var/log/clickhouse-server/stdout.log
+        /var/log/clickhouse-server/clickhouse-library-bridge.log
+        /var/log/clickhouse-server/clickhouse-library-bridge.err.log
+        /var/log/clickhouse-server/clickhouse-library-bridge.stdout
+        /var/log/clickhouse-server/clickhouse-library-bridge.stderr
+        trace
+
+
diff --git a/tests/integration/test_library_bridge/test.py b/tests/integration/test_library_bridge/test.py
new file mode 100644
index 00000000000..422a7ee954f
--- /dev/null
+++ b/tests/integration/test_library_bridge/test.py
@@ -0,0 +1,169 @@
+import os
+import os.path as p
+import pytest
+import time
+
+from helpers.cluster import ClickHouseCluster, run_and_check
+
+cluster = ClickHouseCluster(__file__)
+
+instance = cluster.add_instance('instance',
+        main_configs=[
+            'configs/enable_dict.xml',
+            'configs/config.d/config.xml',
+            'configs/dictionaries/dict1.xml',
+            'configs/log_conf.xml'])
+
+@pytest.fixture(scope="module")
+def ch_cluster():
+    try:
+        cluster.start()
+        instance.query('CREATE DATABASE test')
+
+        instance.copy_file_to_container(os.path.join(os.path.dirname(os.path.realpath(__file__)), "configs/dict_lib.cpp"),
+                                        "/etc/clickhouse-server/config.d/dictionaries_lib/dict_lib.cpp")
+
+        instance.query("SYSTEM RELOAD CONFIG")
+
+        instance.exec_in_container(
+            ['bash', '-c',
+             '/usr/bin/g++ -shared -o /etc/clickhouse-server/config.d/dictionaries_lib/dict_lib.so -fPIC /etc/clickhouse-server/config.d/dictionaries_lib/dict_lib.cpp'],
+            user='root')
+
+        yield cluster
+
+    finally:
+        cluster.shutdown()
+
+
+@pytest.fixture(autouse=True)
+def setup_teardown():
+    yield  # run test
+
+
+def test_load_all(ch_cluster):
+    if instance.is_built_with_memory_sanitizer():
+        pytest.skip("Memory Sanitizer cannot work with third-party shared libraries")
+
+    instance.query('''
+        CREATE DICTIONARY lib_dict (key UInt64, value1 UInt64, value2 UInt64, value3 UInt64)
+        PRIMARY KEY key
+        SOURCE(library(
+            PATH '/etc/clickhouse-server/config.d/dictionaries_lib/dict_lib.so'
+            SETTINGS (test_type test_simple)))
+        LAYOUT(HASHED())
+        LIFETIME (MIN 0 MAX 10)
+    ''')
+
+    result = instance.query('SELECT * FROM lib_dict ORDER BY key')
+    expected = (
+        "0\t10\t20\t30\n"
+        "1\t11\t21\t31\n"
+        "2\t12\t22\t32\n"
+        "3\t13\t23\t33\n"
+        "4\t14\t24\t34\n"
+        "5\t15\t25\t35\n"
+        "6\t16\t26\t36\n"
+        "7\t17\t27\t37\n"
+        "8\t18\t28\t38\n"
+        "9\t19\t29\t39\n"
+    )
+    instance.query('SYSTEM RELOAD DICTIONARY dict1')
+    instance.query('DROP DICTIONARY lib_dict')
+    assert(result == expected)
+
+    instance.query("""
+        CREATE TABLE IF NOT EXISTS `dict1_table` (
+             key UInt64, value1 UInt64, value2 UInt64, value3 UInt64
+        ) ENGINE = Dictionary(dict1)
+        """)
+
+    result = instance.query('SELECT * FROM dict1_table ORDER BY key')
+    assert(result == expected)
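+
+# Illustrative note (added commentary, not part of the original test): for
+# test_simple the sample library emits rows (i, i + 10, i + 20, i + 30) for
+# i in 0..9, so the expected block above could equally be generated as:
+#
+#     expected = "".join("{}\t{}\t{}\t{}\n".format(i, i + 10, i + 20, i + 30)
+#                        for i in range(10))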
ORDER BY key') + assert(result == expected) + + +def test_load_ids(ch_cluster): + if instance.is_built_with_memory_sanitizer(): + pytest.skip("Memory Sanitizer cannot work with third-party shared libraries") + + instance.query(''' + CREATE DICTIONARY lib_dict_c (key UInt64, value1 UInt64, value2 UInt64, value3 UInt64) + PRIMARY KEY key SOURCE(library(PATH '/etc/clickhouse-server/config.d/dictionaries_lib/dict_lib.so')) + LAYOUT(CACHE( + SIZE_IN_CELLS 10000000 + BLOCK_SIZE 4096 + FILE_SIZE 16777216 + READ_BUFFER_SIZE 1048576 + MAX_STORED_KEYS 1048576)) + LIFETIME(2) ; + ''') + + result = instance.query('''select dictGet(lib_dict_c, 'value1', toUInt64(0));''') + assert(result.strip() == '100') + result = instance.query('''select dictGet(lib_dict_c, 'value1', toUInt64(1));''') + assert(result.strip() == '101') + instance.query('DROP DICTIONARY lib_dict_c') + + +def test_load_keys(ch_cluster): + if instance.is_built_with_memory_sanitizer(): + pytest.skip("Memory Sanitizer cannot work with third-party shared libraries") + + instance.query(''' + CREATE DICTIONARY lib_dict_ckc (key UInt64, value1 UInt64, value2 UInt64, value3 UInt64) + PRIMARY KEY key + SOURCE(library(PATH '/etc/clickhouse-server/config.d/dictionaries_lib/dict_lib.so')) + LAYOUT(COMPLEX_KEY_CACHE( SIZE_IN_CELLS 10000000)) + LIFETIME(2); + ''') + + result = instance.query('''select dictGet(lib_dict_ckc, 'value1', tuple(toUInt64(0)));''') + assert(result.strip() == '100') + result = instance.query('''select dictGet(lib_dict_ckc, 'value2', tuple(toUInt64(0)));''') + assert(result.strip() == '200') + instance.query('DROP DICTIONARY lib_dict_ckc') + + +def test_load_all_many_rows(ch_cluster): + if instance.is_built_with_memory_sanitizer(): + pytest.skip("Memory Sanitizer cannot work with third-party shared libraries") + + num_rows = [1000, 10000, 100000, 1000000] + for num in num_rows: + instance.query(''' + CREATE DICTIONARY lib_dict (key UInt64, value1 UInt64, value2 UInt64, value3 UInt64) + PRIMARY KEY key + SOURCE(library( + PATH '/etc/clickhouse-server/config.d/dictionaries_lib/dict_lib.so' + SETTINGS (num_rows {} test_type test_many_rows))) + LAYOUT(HASHED()) + LIFETIME (MIN 0 MAX 10) + '''.format(num)) + + result = instance.query('SELECT * FROM lib_dict ORDER BY key') + expected = instance.query('SELECT number, number, number, number FROM numbers({})'.format(num)) + instance.query('DROP DICTIONARY lib_dict') + assert(result == expected) + + +def test_null_values(ch_cluster): + if instance.is_built_with_memory_sanitizer(): + pytest.skip("Memory Sanitizer cannot work with third-party shared libraries") + + instance.query('SYSTEM RELOAD DICTIONARY dict2') + instance.query(""" + CREATE TABLE IF NOT EXISTS `dict2_table` ( + key UInt64, value1 UInt64, value2 UInt64, value3 UInt64 + ) ENGINE = Dictionary(dict2) + """) + + result = instance.query('SELECT * FROM dict2_table ORDER BY key') + expected = "0\t12\t12\t12\n" + assert(result == expected) + + +if __name__ == '__main__': + cluster.start() + input("Cluster created, press any key to destroy...") + cluster.shutdown() diff --git a/tests/integration/test_limited_replicated_fetches/configs/custom_settings.xml b/tests/integration/test_limited_replicated_fetches/configs/custom_settings.xml new file mode 100644 index 00000000000..b5e6bb80891 --- /dev/null +++ b/tests/integration/test_limited_replicated_fetches/configs/custom_settings.xml @@ -0,0 +1,7 @@ + + + + 3 + + + diff --git a/tests/integration/test_limited_replicated_fetches/test.py 
b/tests/integration/test_limited_replicated_fetches/test.py index 9b9b8befd67..7b0c7aed15d 100644 --- a/tests/integration/test_limited_replicated_fetches/test.py +++ b/tests/integration/test_limited_replicated_fetches/test.py @@ -6,12 +6,14 @@ from helpers.cluster import ClickHouseCluster from helpers.network import PartitionManager import random import string +import os cluster = ClickHouseCluster(__file__) -node1 = cluster.add_instance('node1', with_zookeeper=True) -node2 = cluster.add_instance('node2', with_zookeeper=True) +SCRIPT_DIR = os.path.dirname(os.path.realpath(__file__)) +node1 = cluster.add_instance('node1', user_configs=['configs/custom_settings.xml'], with_zookeeper=True) +node2 = cluster.add_instance('node2', user_configs=['configs/custom_settings.xml'], with_zookeeper=True) -DEFAULT_MAX_THREADS_FOR_FETCH = 3 +MAX_THREADS_FOR_FETCH = 3 @pytest.fixture(scope="module") def started_cluster(): @@ -64,11 +66,11 @@ def test_limited_fetches(started_cluster): time.sleep(0.1) for concurrently_fetching_parts in fetches_result: - if len(concurrently_fetching_parts) > DEFAULT_MAX_THREADS_FOR_FETCH: - assert False, "Found more than {} concurrently fetching parts: {}".format(DEFAULT_MAX_THREADS_FOR_FETCH, ', '.join(concurrently_fetching_parts)) + if len(concurrently_fetching_parts) > MAX_THREADS_FOR_FETCH: + assert False, "Found more than {} concurrently fetching parts: {}".format(MAX_THREADS_FOR_FETCH, ', '.join(concurrently_fetching_parts)) assert max([len(parts) for parts in fetches_result]) == 3, "Strange, but we don't utilize max concurrent threads for fetches" assert(max(background_fetches_metric)) == 3, "Just checking metric consistent with table" node1.query("DROP TABLE IF EXISTS t SYNC") - node2.query("DROP TABLE IF EXISTS t SYNC") \ No newline at end of file + node2.query("DROP TABLE IF EXISTS t SYNC") diff --git a/tests/integration/test_live_view_over_distributed/configs/set_distributed_defaults.xml b/tests/integration/test_live_view_over_distributed/configs/set_distributed_defaults.xml deleted file mode 100644 index 194eb1ebb87..00000000000 --- a/tests/integration/test_live_view_over_distributed/configs/set_distributed_defaults.xml +++ /dev/null @@ -1,35 +0,0 @@ - - - - 3 - 1000 - 1 - - - 5 - 3000 - 1 - - - - - - - - ::/0 - - default - default - - - - - ::/0 - - delays - default - - - - - diff --git a/tests/integration/test_live_view_over_distributed/test.py b/tests/integration/test_live_view_over_distributed/test.py deleted file mode 100644 index a21eeb772e5..00000000000 --- a/tests/integration/test_live_view_over_distributed/test.py +++ /dev/null @@ -1,272 +0,0 @@ - - -import sys - -import pytest -from helpers.cluster import ClickHouseCluster -from helpers.uclient import client, prompt, end_of_block - -cluster = ClickHouseCluster(__file__) -# log = sys.stdout -log = None - -NODES = {'node' + str(i): cluster.add_instance( - 'node' + str(i), - main_configs=['configs/remote_servers.xml'], - user_configs=['configs/set_distributed_defaults.xml'], -) for i in (1, 2)} - -CREATE_TABLES_SQL = ''' -DROP TABLE IF EXISTS lv_over_distributed_table; -DROP TABLE IF EXISTS distributed_table; -DROP TABLE IF EXISTS base_table; - -SET allow_experimental_live_view = 1; - -CREATE TABLE - base_table( - node String, - key Int32, - value Int32 - ) -ENGINE = Memory; - -CREATE TABLE - distributed_table -AS base_table -ENGINE = Distributed(test_cluster, default, base_table, rand()); - -CREATE LIVE VIEW lv_over_distributed_table AS SELECT * FROM distributed_table; -''' - -INSERT_SQL_TEMPLATE = "INSERT 
INTO base_table VALUES ('{node_id}', {key}, {value})" - - -@pytest.fixture(scope="function") -def started_cluster(): - try: - cluster.start() - for node_index, (node_name, node) in enumerate(NODES.items()): - node.query(CREATE_TABLES_SQL) - for i in range(0, 2): - sql = INSERT_SQL_TEMPLATE.format(node_id=node_name, key=i, value=i + (node_index * 10)) - node.query(sql) - yield cluster - - finally: - cluster.shutdown() - - -@pytest.mark.parametrize("node", list(NODES.values())[:1]) -@pytest.mark.parametrize("source", ["lv_over_distributed_table"]) -class TestLiveViewOverDistributedSuite: - def test_select_with_order_by_node(self, started_cluster, node, source): - r = node.query("SELECT * FROM {source} ORDER BY node, key".format(source=source)) - assert r == """node1\t0\t0 -node1\t1\t1 -node2\t0\t10 -node2\t1\t11 -""" - - def test_select_with_order_by_key(self, started_cluster, node, source): - assert node.query("SELECT * FROM {source} ORDER BY key, node".format(source=source)) \ - == """node1\t0\t0 -node2\t0\t10 -node1\t1\t1 -node2\t1\t11 -""" - - def test_select_with_group_by_node(self, started_cluster, node, source): - assert node.query("SELECT node, SUM(value) FROM {source} GROUP BY node ORDER BY node".format(source=source)) \ - == "node1\t1\nnode2\t21\n" - - def test_select_with_group_by_key(self, started_cluster, node, source): - assert node.query("SELECT key, SUM(value) FROM {source} GROUP BY key ORDER BY key".format(source=source)) \ - == "0\t10\n1\t12\n" - - def test_select_sum(self, started_cluster, node, source): - assert node.query("SELECT SUM(value) FROM {source}".format(source=source)) \ - == "22\n" - - def test_watch_live_view_order_by_node(self, started_cluster, node, source): - command = " ".join(node.client.command) - args = dict(log=log, command=command) - - with client(name="client1> ", **args) as client1, client(name="client2> ", **args) as client2: - client1.expect(prompt) - client2.expect(prompt) - - client1.send("SET allow_experimental_live_view = 1") - client1.expect(prompt) - client2.send("SET allow_experimental_live_view = 1") - client2.expect(prompt) - - client1.send("DROP TABLE IF EXISTS lv") - client1.expect(prompt) - client1.send("CREATE LIVE VIEW lv AS SELECT * FROM distributed_table ORDER BY node, key") - client1.expect(prompt) - - client1.send("WATCH lv FORMAT CSV") - client1.expect('"node1",0,0,1') - client1.expect('"node1",1,1,1') - client1.expect('"node2",0,10,1') - client1.expect('"node2",1,11,1') - - client2.send("INSERT INTO distributed_table VALUES ('node1', 2, 2)") - client2.expect(prompt) - client1.expect('"node1",0,0,2') - client1.expect('"node1",1,1,2') - client1.expect('"node1",2,2,2') - client1.expect('"node2",0,10,2') - client1.expect('"node2",1,11,2') - - client2.send("INSERT INTO distributed_table VALUES ('node1', 0, 3), ('node3', 3, 3)") - client2.expect(prompt) - client1.expect('"node1",0,0,3') - client1.expect('"node1",0,3,3') - client1.expect('"node1",1,1,3') - client1.expect('"node1",2,2,3') - client1.expect('"node2",0,10,3') - client1.expect('"node2",1,11,3') - client1.expect('"node3",3,3,3') - - def test_watch_live_view_order_by_key(self, started_cluster, node, source): - command = " ".join(node.client.command) - args = dict(log=log, command=command) - - with client(name="client1> ", **args) as client1, client(name="client2> ", **args) as client2: - client1.expect(prompt) - client2.expect(prompt) - - client1.send("SET allow_experimental_live_view = 1") - client1.expect(prompt) - client2.send("SET allow_experimental_live_view = 1") - 
client2.expect(prompt) - - client1.send("DROP TABLE IF EXISTS lv") - client1.expect(prompt) - client1.send("CREATE LIVE VIEW lv AS SELECT * FROM distributed_table ORDER BY key, node") - client1.expect(prompt) - - client1.send("WATCH lv FORMAT CSV") - client1.expect('"node1",0,0,1') - client1.expect('"node2",0,10,1') - client1.expect('"node1",1,1,1') - client1.expect('"node2",1,11,1') - - client2.send("INSERT INTO distributed_table VALUES ('node1', 2, 2)") - client2.expect(prompt) - client1.expect('"node1",0,0,2') - client1.expect('"node2",0,10,2') - client1.expect('"node1",1,1,2') - client1.expect('"node2",1,11,2') - client1.expect('"node1",2,2,2') - - client2.send("INSERT INTO distributed_table VALUES ('node1', 0, 3), ('node3', 3, 3)") - client2.expect(prompt) - client1.expect('"node1",0,0,3') - client1.expect('"node1",0,3,3') - client1.expect('"node2",0,10,3') - client1.expect('"node1",1,1,3') - client1.expect('"node2",1,11,3') - client1.expect('"node1",2,2,3') - client1.expect('"node3",3,3,3') - - def test_watch_live_view_group_by_node(self, started_cluster, node, source): - command = " ".join(node.client.command) - args = dict(log=log, command=command) - - with client(name="client1> ", **args) as client1, client(name="client2> ", **args) as client2: - client1.expect(prompt) - client2.expect(prompt) - - client1.send("SET allow_experimental_live_view = 1") - client1.expect(prompt) - client2.send("SET allow_experimental_live_view = 1") - client2.expect(prompt) - - client1.send("DROP TABLE IF EXISTS lv") - client1.expect(prompt) - client1.send( - "CREATE LIVE VIEW lv AS SELECT node, SUM(value) FROM distributed_table GROUP BY node ORDER BY node") - client1.expect(prompt) - - client1.send("WATCH lv FORMAT CSV") - client1.expect('"node1",1,1') - client1.expect('"node2",21,1') - - client2.send("INSERT INTO distributed_table VALUES ('node1', 2, 2)") - client2.expect(prompt) - client1.expect('"node1",3,2') - client1.expect('"node2",21,2') - - client2.send("INSERT INTO distributed_table VALUES ('node1', 0, 3), ('node3', 3, 3)") - client2.expect(prompt) - client1.expect('"node1",6,3') - client1.expect('"node2",21,3') - client1.expect('"node3",3,3') - - def test_watch_live_view_group_by_key(self, started_cluster, node, source): - command = " ".join(node.client.command) - args = dict(log=log, command=command) - sep = ' \xe2\x94\x82' - with client(name="client1> ", **args) as client1, client(name="client2> ", **args) as client2: - client1.expect(prompt) - client2.expect(prompt) - - client1.send("SET allow_experimental_live_view = 1") - client1.expect(prompt) - client2.send("SET allow_experimental_live_view = 1") - client2.expect(prompt) - - client1.send("DROP TABLE IF EXISTS lv") - client1.expect(prompt) - client1.send( - "CREATE LIVE VIEW lv AS SELECT key, SUM(value) FROM distributed_table GROUP BY key ORDER BY key") - client1.expect(prompt) - - client1.send("WATCH lv FORMAT CSV") - client1.expect('0,10,1') - client1.expect('1,12,1') - - client2.send("INSERT INTO distributed_table VALUES ('node1', 2, 2)") - client2.expect(prompt) - client1.expect('0,10,2') - client1.expect('1,12,2') - client1.expect('2,2,2') - - client2.send("INSERT INTO distributed_table VALUES ('node1', 0, 3), ('node1', 3, 3)") - client2.expect(prompt) - client1.expect('0,13,3') - client1.expect('1,12,3') - client1.expect('2,2,3') - client1.expect('3,3,3') - - def test_watch_live_view_sum(self, started_cluster, node, source): - command = " ".join(node.client.command) - args = dict(log=log, command=command) - - with 
client(name="client1> ", **args) as client1, client(name="client2> ", **args) as client2: - client1.expect(prompt) - client2.expect(prompt) - - client1.send("SET allow_experimental_live_view = 1") - client1.expect(prompt) - client2.send("SET allow_experimental_live_view = 1") - client2.expect(prompt) - - client1.send("DROP TABLE IF EXISTS lv") - client1.expect(prompt) - client1.send("CREATE LIVE VIEW lv AS SELECT sum(value) FROM distributed_table") - client1.expect(prompt) - - client1.send("WATCH lv") - client1.expect(r"22.*1" + end_of_block) - - client2.send("INSERT INTO distributed_table VALUES ('node1', 2, 2)") - client2.expect(prompt) - client1.expect(r"24.*2" + end_of_block) - - client2.send("INSERT INTO distributed_table VALUES ('node1', 3, 3), ('node1', 4, 4)") - client2.expect(prompt) - client1.expect(r"31.*3" + end_of_block) diff --git a/tests/integration/test_materialize_mysql_database/configs/users_disable_bytes_settings.xml b/tests/integration/test_materialize_mysql_database/configs/users_disable_bytes_settings.xml new file mode 100644 index 00000000000..4516cb80c17 --- /dev/null +++ b/tests/integration/test_materialize_mysql_database/configs/users_disable_bytes_settings.xml @@ -0,0 +1,21 @@ + + + + + 1 + Atomic + 1 + 0 + + + + + + + + ::/0 + + default + + + diff --git a/tests/integration/test_materialize_mysql_database/configs/users_disable_rows_settings.xml b/tests/integration/test_materialize_mysql_database/configs/users_disable_rows_settings.xml new file mode 100644 index 00000000000..dea20eb9e12 --- /dev/null +++ b/tests/integration/test_materialize_mysql_database/configs/users_disable_rows_settings.xml @@ -0,0 +1,21 @@ + + + + + 1 + Atomic + 0 + 1 + + + + + + + + ::/0 + + default + + + diff --git a/tests/integration/test_materialize_mysql_database/materialize_with_ddl.py b/tests/integration/test_materialize_mysql_database/materialize_with_ddl.py index f906c309443..813a654add3 100644 --- a/tests/integration/test_materialize_mysql_database/materialize_with_ddl.py +++ b/tests/integration/test_materialize_mysql_database/materialize_with_ddl.py @@ -117,6 +117,45 @@ def dml_with_materialize_mysql_database(clickhouse_node, mysql_node, service_nam mysql_node.query("DROP DATABASE test_database") +def materialize_mysql_database_with_views(clickhouse_node, mysql_node, service_name): + mysql_node.query("DROP DATABASE IF EXISTS test_database") + clickhouse_node.query("DROP DATABASE IF EXISTS test_database") + mysql_node.query("CREATE DATABASE test_database DEFAULT CHARACTER SET 'utf8'") + # existed before the mapping was created + + mysql_node.query("CREATE TABLE test_database.test_table_1 (" + "`key` INT NOT NULL PRIMARY KEY, " + "unsigned_tiny_int TINYINT UNSIGNED, tiny_int TINYINT, " + "unsigned_small_int SMALLINT UNSIGNED, small_int SMALLINT, " + "unsigned_medium_int MEDIUMINT UNSIGNED, medium_int MEDIUMINT, " + "unsigned_int INT UNSIGNED, _int INT, " + "unsigned_integer INTEGER UNSIGNED, _integer INTEGER, " + "unsigned_bigint BIGINT UNSIGNED, _bigint BIGINT, " + "/* Need ClickHouse support read mysql decimal unsigned_decimal DECIMAL(19, 10) UNSIGNED, _decimal DECIMAL(19, 10), */" + "unsigned_float FLOAT UNSIGNED, _float FLOAT, " + "unsigned_double DOUBLE UNSIGNED, _double DOUBLE, " + "_varchar VARCHAR(10), _char CHAR(10), binary_col BINARY(8), " + "/* Need ClickHouse support Enum('a', 'b', 'v') _enum ENUM('a', 'b', 'c'), */" + "_date Date, _datetime DateTime, _timestamp TIMESTAMP, _bool BOOLEAN) ENGINE = InnoDB;") + + mysql_node.query("CREATE VIEW test_database.test_table_1_view 
AS SELECT SUM(tiny_int) FROM test_database.test_table_1 GROUP BY _date;") + + # it already has some data + mysql_node.query(""" + INSERT INTO test_database.test_table_1 VALUES(1, 1, -1, 2, -2, 3, -3, 4, -4, 5, -5, 6, -6, 3.2, -3.2, 3.4, -3.4, 'varchar', 'char', 'binary', + '2020-01-01', '2020-01-01 00:00:00', '2020-01-01 00:00:00', true); + """) + clickhouse_node.query( + "CREATE DATABASE test_database ENGINE = MaterializeMySQL('{}:3306', 'test_database', 'root', 'clickhouse')".format( + service_name)) + + assert "test_database" in clickhouse_node.query("SHOW DATABASES") + check_query(clickhouse_node, "SHOW TABLES FROM test_database FORMAT TSV", "test_table_1\n") + + clickhouse_node.query("DROP DATABASE test_database") + mysql_node.query("DROP DATABASE test_database") + + def materialize_mysql_database_with_datetime_and_decimal(clickhouse_node, mysql_node, service_name): mysql_node.query("DROP DATABASE IF EXISTS test_database") clickhouse_node.query("DROP DATABASE IF EXISTS test_database") @@ -653,10 +692,17 @@ def mysql_kill_sync_thread_restore_test(clickhouse_node, mysql_node, service_nam check_query(clickhouse_node, "SELECT * FROM test_database.test_table FORMAT TSV", '1\n') check_query(clickhouse_node, "SELECT * FROM test_database_auto.test_table FORMAT TSV", '11\n') + + # Once ClickHouse has dumped all the history data we can query it in ClickHouse, + # but that does not mean the sync thread has already connected to MySQL. + # So after ClickHouse can query the data, insert some rows into MySQL and use them to re-check that the sync succeeded. + mysql_node.query("INSERT INTO test_database_auto.test_table VALUES (22)") + mysql_node.query("INSERT INTO test_database.test_table VALUES (2)") + check_query(clickhouse_node, "SELECT * FROM test_database.test_table ORDER BY id FORMAT TSV", '1\n2\n') + check_query(clickhouse_node, "SELECT * FROM test_database_auto.test_table ORDER BY id FORMAT TSV", '11\n22\n') + get_sync_id_query = "select id from information_schema.processlist where STATE='Master has sent all binlog to slave; waiting for more updates'" result = mysql_node.query_and_get_data(get_sync_id_query) - assert len(result) == 2 - for row in result: row_result = {} query = "kill " + str(row[0]) + ";" @@ -671,13 +717,13 @@ def mysql_kill_sync_thread_restore_tes clickhouse_node.query("DETACH DATABASE test_database") clickhouse_node.query("ATTACH DATABASE test_database") - check_query(clickhouse_node, "SELECT * FROM test_database.test_table FORMAT TSV", '1\n') - - mysql_node.query("INSERT INTO test_database.test_table VALUES (2)") check_query(clickhouse_node, "SELECT * FROM test_database.test_table ORDER BY id FORMAT TSV", '1\n2\n') - mysql_node.query("INSERT INTO test_database_auto.test_table VALUES (12)") - check_query(clickhouse_node, "SELECT * FROM test_database_auto.test_table ORDER BY id FORMAT TSV", '11\n12\n') + mysql_node.query("INSERT INTO test_database.test_table VALUES (3)") + check_query(clickhouse_node, "SELECT * FROM test_database.test_table ORDER BY id FORMAT TSV", '1\n2\n3\n') + + mysql_node.query("INSERT INTO test_database_auto.test_table VALUES (33)") + check_query(clickhouse_node, "SELECT * FROM test_database_auto.test_table ORDER BY id FORMAT TSV", '11\n22\n33\n') clickhouse_node.query("DROP DATABASE test_database") clickhouse_node.query("DROP DATABASE test_database_auto") @@ -756,6 +802,7 @@ def utf8mb4_test(clickhouse_node, mysql_node, service_name): mysql_node.query("CREATE TABLE utf8mb4_test.test (id INT(11) NOT NULL PRIMARY KEY, name VARCHAR(255)) 
ENGINE=InnoDB DEFAULT CHARACTER SET utf8mb4") mysql_node.query("INSERT INTO utf8mb4_test.test VALUES(1, '🦄'),(2, '\u2601')") clickhouse_node.query("CREATE DATABASE utf8mb4_test ENGINE = MaterializeMySQL('{}:3306', 'utf8mb4_test', 'root', 'clickhouse')".format(service_name)) + check_query(clickhouse_node, "SHOW TABLES FROM utf8mb4_test FORMAT TSV", "test\n") check_query(clickhouse_node, "SELECT id, name FROM utf8mb4_test.test ORDER BY id", "1\t\U0001F984\n2\t\u2601\n") def system_parts_test(clickhouse_node, mysql_node, service_name): @@ -795,3 +842,31 @@ def system_tables_test(clickhouse_node, mysql_node, service_name): mysql_node.query("CREATE TABLE system_tables_test.test (id int NOT NULL PRIMARY KEY) ENGINE=InnoDB") clickhouse_node.query("CREATE DATABASE system_tables_test ENGINE = MaterializeMySQL('{}:3306', 'system_tables_test', 'root', 'clickhouse')".format(service_name)) check_query(clickhouse_node, "SELECT partition_key, sorting_key, primary_key FROM system.tables WHERE database = 'system_tables_test' AND name = 'test'", "intDiv(id, 4294967)\tid\tid\n") + +def move_to_prewhere_and_column_filtering(clickhouse_node, mysql_node, service_name): + clickhouse_node.query("DROP DATABASE IF EXISTS cond_on_key_col") + mysql_node.query("DROP DATABASE IF EXISTS cond_on_key_col") + mysql_node.query("CREATE DATABASE cond_on_key_col") + clickhouse_node.query("CREATE DATABASE cond_on_key_col ENGINE = MaterializeMySQL('{}:3306', 'cond_on_key_col', 'root', 'clickhouse')".format(service_name)) + mysql_node.query("create table cond_on_key_col.products (id int primary key, product_id int not null, catalog_id int not null, brand_id int not null, name text)") + mysql_node.query("insert into cond_on_key_col.products (id, name, catalog_id, brand_id, product_id) values (915, 'ertyui', 5287, 15837, 0), (990, 'wer', 1053, 24390, 1), (781, 'qwerty', 1041, 1176, 2);") + check_query(clickhouse_node, "SELECT DISTINCT P.id, P.name, P.catalog_id FROM cond_on_key_col.products P WHERE P.name ILIKE '%e%' and P.catalog_id=5287", '915\tertyui\t5287\n') + clickhouse_node.query("DROP DATABASE cond_on_key_col") + mysql_node.query("DROP DATABASE cond_on_key_col") + +def mysql_settings_test(clickhouse_node, mysql_node, service_name): + mysql_node.query("DROP DATABASE IF EXISTS test_database") + clickhouse_node.query("DROP DATABASE IF EXISTS test_database") + mysql_node.query("CREATE DATABASE test_database") + mysql_node.query("CREATE TABLE test_database.a (id INT(11) NOT NULL PRIMARY KEY, value VARCHAR(255))") + mysql_node.query("INSERT INTO test_database.a VALUES(1, 'foo')") + mysql_node.query("INSERT INTO test_database.a VALUES(2, 'bar')") + + clickhouse_node.query("CREATE DATABASE test_database ENGINE = MaterializeMySQL('{}:3306', 'test_database', 'root', 'clickhouse')".format(service_name)) + check_query(clickhouse_node, "SELECT COUNT() FROM test_database.a FORMAT TSV", "2\n") + + assert clickhouse_node.query("SELECT COUNT(DISTINCT blockNumber()) FROM test_database.a FORMAT TSV") == "2\n" + + clickhouse_node.query("DROP DATABASE test_database") + mysql_node.query("DROP DATABASE test_database") + diff --git a/tests/integration/test_materialize_mysql_database/test.py b/tests/integration/test_materialize_mysql_database/test.py index ced9a978d02..19067db3eca 100644 --- a/tests/integration/test_materialize_mysql_database/test.py +++ b/tests/integration/test_materialize_mysql_database/test.py @@ -16,7 +16,8 @@ cluster = ClickHouseCluster(__file__) node_db_ordinary = cluster.add_instance('node1', 
user_configs=["configs/users.xml"], with_mysql=False, stay_alive=True) node_db_atomic = cluster.add_instance('node2', user_configs=["configs/users_db_atomic.xml"], with_mysql=False, stay_alive=True) - +node_disable_bytes_settings = cluster.add_instance('node3', user_configs=["configs/users_disable_bytes_settings.xml"], with_mysql=False, stay_alive=True) +node_disable_rows_settings = cluster.add_instance('node4', user_configs=["configs/users_disable_rows_settings.xml"], with_mysql=False, stay_alive=True) @pytest.fixture(scope="module") def started_cluster(): @@ -42,7 +43,7 @@ class MySQLNodeInstance: if not os.path.exists(self.instances_dir): os.mkdir(self.instances_dir) self.docker_logs_path = p.join(self.instances_dir, 'docker_mysql.log') - + self.start_up = False def alloc_connection(self): if self.mysql_connection is None: @@ -78,12 +79,16 @@ class MySQLNodeInstance: return cursor.fetchall() def start_and_wait(self): + if self.start_up: + return + run_and_check(['docker-compose', - '-p', cluster.project_name, - '-f', self.docker_compose, - 'up', '--no-recreate', '-d', - ]) + '-p', cluster.project_name, + '-f', self.docker_compose, + 'up', '--no-recreate', '-d', + ]) self.wait_mysql_to_start(120) + self.start_up = True def close(self): if self.mysql_connection is not None: @@ -99,6 +104,8 @@ class MySQLNodeInstance: except Exception as e: print("Unable to get logs from docker mysql.") + self.start_up = False + def wait_mysql_to_start(self, timeout=60): start = time.time() while time.time() - start < timeout: @@ -113,43 +120,49 @@ class MySQLNodeInstance: run_and_check(['docker-compose', 'ps', '--services', 'all']) raise Exception("Cannot wait MySQL container") + +mysql_5_7_docker_compose = os.path.join(DOCKER_COMPOSE_PATH, 'docker_compose_mysql_5_7_for_materialize_mysql.yml') +mysql_5_7_node = MySQLNodeInstance('root', 'clickhouse', '127.0.0.1', 3308, mysql_5_7_docker_compose) + +mysql_8_0_docker_compose = os.path.join(DOCKER_COMPOSE_PATH, 'docker_compose_mysql_8_0_for_materialize_mysql.yml') +mysql_8_0_node = MySQLNodeInstance('root', 'clickhouse', '127.0.0.1', 3309, mysql_8_0_docker_compose) + + @pytest.fixture(scope="module") def started_mysql_5_7(): - docker_compose = os.path.join(DOCKER_COMPOSE_PATH, 'docker_compose_mysql_5_7_for_materialize_mysql.yml') - mysql_node = MySQLNodeInstance('root', 'clickhouse', '127.0.0.1', 3308, docker_compose) - try: - mysql_node.start_and_wait() - yield mysql_node + mysql_5_7_node.start_and_wait() + yield mysql_5_7_node finally: - mysql_node.close() - run_and_check(['docker-compose', '-p', cluster.project_name, '-f', docker_compose, 'down', '--volumes', - '--remove-orphans']) + mysql_5_7_node.close() + run_and_check(['docker-compose', '-p', cluster.project_name, '-f', mysql_5_7_docker_compose, 'down', '--volumes', '--remove-orphans']) @pytest.fixture(scope="module") def started_mysql_8_0(): - docker_compose = os.path.join(DOCKER_COMPOSE_PATH, 'docker_compose_mysql_8_0_for_materialize_mysql.yml') - mysql_node = MySQLNodeInstance('root', 'clickhouse', '127.0.0.1', 33308, docker_compose) - try: - mysql_node.start_and_wait() - yield mysql_node + mysql_8_0_node.start_and_wait() + yield mysql_8_0_node finally: - mysql_node.close() - run_and_check(['docker-compose', '-p', cluster.project_name, '-f', docker_compose, 'down', '--volumes', - '--remove-orphans']) + mysql_8_0_node.close() + run_and_check(['docker-compose', '-p', cluster.project_name, '-f', mysql_8_0_docker_compose, 'down', '--volumes', '--remove-orphans']) 
@pytest.mark.parametrize(('clickhouse_node'), [node_db_ordinary, node_db_atomic]) def test_materialize_database_dml_with_mysql_5_7(started_cluster, started_mysql_5_7, clickhouse_node): materialize_with_ddl.dml_with_materialize_mysql_database(clickhouse_node, started_mysql_5_7, "mysql1") + materialize_with_ddl.materialize_mysql_database_with_views(clickhouse_node, started_mysql_5_7, "mysql1") materialize_with_ddl.materialize_mysql_database_with_datetime_and_decimal(clickhouse_node, started_mysql_5_7, "mysql1") + materialize_with_ddl.move_to_prewhere_and_column_filtering(clickhouse_node, started_mysql_5_7, "mysql1") + @pytest.mark.parametrize(('clickhouse_node'), [node_db_ordinary, node_db_atomic]) def test_materialize_database_dml_with_mysql_8_0(started_cluster, started_mysql_8_0, clickhouse_node): materialize_with_ddl.dml_with_materialize_mysql_database(clickhouse_node, started_mysql_8_0, "mysql8_0") + materialize_with_ddl.materialize_mysql_database_with_views(clickhouse_node, started_mysql_8_0, "mysql8_0") materialize_with_ddl.materialize_mysql_database_with_datetime_and_decimal(clickhouse_node, started_mysql_8_0, "mysql8_0") + materialize_with_ddl.move_to_prewhere_and_column_filtering(clickhouse_node, started_mysql_8_0, "mysql8_0") + @pytest.mark.parametrize(('clickhouse_node'), [node_db_ordinary, node_db_atomic]) def test_materialize_database_ddl_with_mysql_5_7(started_cluster, started_mysql_5_7, clickhouse_node): @@ -163,6 +176,7 @@ def test_materialize_database_ddl_with_mysql_5_7(started_cluster, started_mysql_ materialize_with_ddl.alter_rename_table_with_materialize_mysql_database(clickhouse_node, started_mysql_5_7, "mysql1") materialize_with_ddl.alter_modify_column_with_materialize_mysql_database(clickhouse_node, started_mysql_5_7, "mysql1") + @pytest.mark.parametrize(('clickhouse_node'), [node_db_ordinary, node_db_atomic]) def test_materialize_database_ddl_with_mysql_8_0(started_cluster, started_mysql_8_0, clickhouse_node): materialize_with_ddl.drop_table_with_materialize_mysql_database(clickhouse_node, started_mysql_8_0, "mysql8_0") @@ -179,10 +193,12 @@ def test_materialize_database_ddl_with_mysql_8_0(started_cluster, started_mysql_ materialize_with_ddl.alter_modify_column_with_materialize_mysql_database(clickhouse_node, started_mysql_8_0, "mysql8_0") + @pytest.mark.parametrize(('clickhouse_node'), [node_db_ordinary, node_db_atomic]) def test_materialize_database_ddl_with_empty_transaction_5_7(started_cluster, started_mysql_5_7, clickhouse_node): materialize_with_ddl.query_event_with_empty_transaction(clickhouse_node, started_mysql_5_7, "mysql1") + @pytest.mark.parametrize(('clickhouse_node'), [node_db_ordinary, node_db_atomic]) def test_materialize_database_ddl_with_empty_transaction_8_0(started_cluster, started_mysql_8_0, clickhouse_node): materialize_with_ddl.query_event_with_empty_transaction(clickhouse_node, started_mysql_8_0, "mysql8_0") @@ -217,52 +233,71 @@ def test_materialize_database_err_sync_user_privs_5_7(started_cluster, started_m def test_materialize_database_err_sync_user_privs_8_0(started_cluster, started_mysql_8_0, clickhouse_node): materialize_with_ddl.err_sync_user_privs_with_materialize_mysql_database(clickhouse_node, started_mysql_8_0, "mysql8_0") + @pytest.mark.parametrize(('clickhouse_node'), [node_db_ordinary, node_db_atomic]) def test_network_partition_5_7(started_cluster, started_mysql_5_7, clickhouse_node): materialize_with_ddl.network_partition_test(clickhouse_node, started_mysql_5_7, "mysql1") + @pytest.mark.parametrize(('clickhouse_node'), 
[node_db_ordinary, node_db_atomic]) def test_network_partition_8_0(started_cluster, started_mysql_8_0, clickhouse_node): materialize_with_ddl.network_partition_test(clickhouse_node, started_mysql_8_0, "mysql8_0") + @pytest.mark.parametrize(('clickhouse_node'), [node_db_ordinary, node_db_atomic]) def test_mysql_kill_sync_thread_restore_5_7(started_cluster, started_mysql_5_7, clickhouse_node): materialize_with_ddl.mysql_kill_sync_thread_restore_test(clickhouse_node, started_mysql_5_7, "mysql1") + @pytest.mark.parametrize(('clickhouse_node'), [node_db_ordinary, node_db_atomic]) def test_mysql_kill_sync_thread_restore_8_0(started_cluster, started_mysql_8_0, clickhouse_node): materialize_with_ddl.mysql_kill_sync_thread_restore_test(clickhouse_node, started_mysql_8_0, "mysql8_0") + @pytest.mark.parametrize(('clickhouse_node'), [node_db_ordinary, node_db_atomic]) def test_mysql_killed_while_insert_5_7(started_cluster, started_mysql_5_7, clickhouse_node): materialize_with_ddl.mysql_killed_while_insert(clickhouse_node, started_mysql_5_7, "mysql1") + @pytest.mark.parametrize(('clickhouse_node'), [node_db_ordinary, node_db_atomic]) def test_mysql_killed_while_insert_8_0(started_cluster, started_mysql_8_0, clickhouse_node): materialize_with_ddl.mysql_killed_while_insert(clickhouse_node, started_mysql_8_0, "mysql8_0") + @pytest.mark.parametrize(('clickhouse_node'), [node_db_ordinary, node_db_atomic]) def test_clickhouse_killed_while_insert_5_7(started_cluster, started_mysql_5_7, clickhouse_node): materialize_with_ddl.clickhouse_killed_while_insert(clickhouse_node, started_mysql_5_7, "mysql1") + @pytest.mark.parametrize(('clickhouse_node'), [node_db_ordinary, node_db_atomic]) def test_clickhouse_killed_while_insert_8_0(started_cluster, started_mysql_8_0, clickhouse_node): materialize_with_ddl.clickhouse_killed_while_insert(clickhouse_node, started_mysql_8_0, "mysql8_0") + @pytest.mark.parametrize(('clickhouse_node'), [node_db_ordinary, node_db_ordinary]) def test_utf8mb4(started_cluster, started_mysql_8_0, started_mysql_5_7, clickhouse_node): materialize_with_ddl.utf8mb4_test(clickhouse_node, started_mysql_5_7, "mysql1") materialize_with_ddl.utf8mb4_test(clickhouse_node, started_mysql_8_0, "mysql8_0") + @pytest.mark.parametrize(('clickhouse_node'), [node_db_ordinary, node_db_ordinary]) def test_system_parts_table(started_cluster, started_mysql_8_0, clickhouse_node): materialize_with_ddl.system_parts_test(clickhouse_node, started_mysql_8_0, "mysql8_0") + @pytest.mark.parametrize(('clickhouse_node'), [node_db_ordinary, node_db_ordinary]) def test_multi_table_update(started_cluster, started_mysql_8_0, started_mysql_5_7, clickhouse_node): materialize_with_ddl.multi_table_update_test(clickhouse_node, started_mysql_5_7, "mysql1") materialize_with_ddl.multi_table_update_test(clickhouse_node, started_mysql_8_0, "mysql8_0") + @pytest.mark.parametrize(('clickhouse_node'), [node_db_ordinary, node_db_ordinary]) -def test_system_tables_table(started_cluster, started_mysql_8_0, clickhouse_node): +def test_system_tables_table(started_cluster, started_mysql_8_0, started_mysql_5_7, clickhouse_node): + materialize_with_ddl.system_tables_test(clickhouse_node, started_mysql_5_7, "mysql1") materialize_with_ddl.system_tables_test(clickhouse_node, started_mysql_8_0, "mysql8_0") + + +@pytest.mark.parametrize(('clickhouse_node'), [node_disable_bytes_settings, node_disable_rows_settings]) +def test_mysql_settings(started_cluster, started_mysql_8_0, started_mysql_5_7, clickhouse_node): + 
materialize_with_ddl.mysql_settings_test(clickhouse_node, started_mysql_5_7, "mysql1") + materialize_with_ddl.mysql_settings_test(clickhouse_node, started_mysql_8_0, "mysql8_0") diff --git a/tests/integration/test_max_http_connections_for_replication/test.py b/tests/integration/test_max_http_connections_for_replication/test.py index 2dc4e2a8810..634697c8668 100644 --- a/tests/integration/test_max_http_connections_for_replication/test.py +++ b/tests/integration/test_max_http_connections_for_replication/test.py @@ -43,6 +43,8 @@ def start_small_cluster(): def test_single_endpoint_connections_count(start_small_cluster): + node1.query("TRUNCATE TABLE test_table") + node2.query("SYSTEM SYNC REPLICA test_table") def task(count): print(("Inserting ten times from {}".format(count))) for i in range(count, count + 10): @@ -58,9 +60,11 @@ def test_single_endpoint_connections_count(start_small_cluster): def test_keepalive_timeout(start_small_cluster): - current_count = int(node1.query("select count() from test_table").strip()) + node1.query("TRUNCATE TABLE test_table") + node2.query("SYSTEM SYNC REPLICA test_table") + node1.query("insert into test_table values ('2017-06-16', 777, 0)") - assert_eq_with_retry(node2, "select count() from test_table", str(current_count + 1)) + assert_eq_with_retry(node2, "select count() from test_table", str(1)) # Server keepAliveTimeout is 3 seconds, default client session timeout is 8 # lets sleep in that interval time.sleep(4) @@ -69,7 +73,7 @@ def test_keepalive_timeout(start_small_cluster): time.sleep(3) - assert_eq_with_retry(node2, "select count() from test_table", str(current_count + 2)) + assert_eq_with_retry(node2, "select count() from test_table", str(2)) assert not node2.contains_in_log("No message received"), "Found 'No message received' in clickhouse-server.log" diff --git a/tests/integration/test_merge_tree_s3/configs/config.d/storage_conf.xml b/tests/integration/test_merge_tree_s3/configs/config.d/storage_conf.xml index 343f248c5fb..2d9778af32a 100644 --- a/tests/integration/test_merge_tree_s3/configs/config.d/storage_conf.xml +++ b/tests/integration/test_merge_tree_s3/configs/config.d/storage_conf.xml @@ -6,6 +6,7 @@ http://minio1:9001/root/data/ minio minio123 + 33554432 local diff --git a/tests/integration/test_merge_tree_s3/test.py b/tests/integration/test_merge_tree_s3/test.py index 45b3c3c65f0..c0c05355def 100644 --- a/tests/integration/test_merge_tree_s3/test.py +++ b/tests/integration/test_merge_tree_s3/test.py @@ -2,6 +2,8 @@ import logging import random import string import time +import threading +import os import pytest from helpers.cluster import ClickHouseCluster @@ -10,13 +12,47 @@ logging.getLogger().setLevel(logging.INFO) logging.getLogger().addHandler(logging.StreamHandler()) +# By default, exceptions thrown in threads are ignored +# (they do not mark the test as failed, they are only printed to stderr). 
+# +# Wrap threading.Thread and re-throw the exception on join() +class SafeThread(threading.Thread): + def __init__(self, target): + super().__init__() + self.target = target + self.exception = None + def run(self): + try: + self.target() + except Exception as e: # pylint: disable=broad-except + self.exception = e + def join(self, timeout=None): + super().join(timeout) + if self.exception: + raise self.exception + + +SCRIPT_DIR = os.path.dirname(os.path.realpath(__file__)) +CONFIG_PATH = os.path.join(SCRIPT_DIR, './_instances/node/configs/config.d/storage_conf.xml') + + +def replace_config(old, new): + config = open(CONFIG_PATH, 'r') + config_lines = config.readlines() + config.close() + config_lines = [line.replace(old, new) for line in config_lines] + config = open(CONFIG_PATH, 'w') + config.writelines(config_lines) + config.close() + + @pytest.fixture(scope="module") def cluster(): try: cluster = ClickHouseCluster(__file__) cluster.add_instance("node", main_configs=["configs/config.d/storage_conf.xml", "configs/config.d/bg_processing_pool_conf.xml", - "configs/config.d/log_conf.xml"], user_configs=[], with_minio=True) + "configs/config.d/log_conf.xml"], with_minio=True) logging.info("Starting cluster...") cluster.start() logging.info("Cluster started") @@ -68,6 +104,16 @@ def create_table(cluster, table_name, additional_settings=None): node.query(create_table_statement) +def wait_for_delete_s3_objects(cluster, expected, timeout=30): + minio = cluster.minio_client + while timeout > 0: + if len(list(minio.list_objects(cluster.minio_bucket, 'data/'))) == expected: + return + timeout -= 1 + time.sleep(1) + assert(len(list(minio.list_objects(cluster.minio_bucket, 'data/'))) == expected) + + @pytest.fixture(autouse=True) def drop_table(cluster): yield @@ -75,8 +121,9 @@ def drop_table(cluster): minio = cluster.minio_client node.query("DROP TABLE IF EXISTS s3_test NO DELAY") + try: - assert len(list(minio.list_objects(cluster.minio_bucket, 'data/'))) == 0 + wait_for_delete_s3_objects(cluster, 0) finally: # Remove extra objects to prevent tests cascade failing for obj in list(minio.list_objects(cluster.minio_bucket, 'data/')): @@ -137,12 +184,21 @@ def test_insert_same_partition_and_merge(cluster, merge_vertical): list(minio.list_objects(cluster.minio_bucket, 'data/'))) == FILES_OVERHEAD_PER_PART_WIDE * 6 + FILES_OVERHEAD node.query("SYSTEM START MERGES s3_test") + # Wait for merges and old parts deletion - time.sleep(3) + for attempt in range(0, 10): + parts_count = node.query("SELECT COUNT(*) FROM system.parts WHERE table = 's3_test' FORMAT Values") + if parts_count == "(1)": + break + + if attempt == 9: + assert parts_count == "(1)" + + time.sleep(1) assert node.query("SELECT sum(id) FROM s3_test FORMAT Values") == "(0)" assert node.query("SELECT count(distinct(id)) FROM s3_test FORMAT Values") == "(8192)" - assert len(list(minio.list_objects(cluster.minio_bucket, 'data/'))) == FILES_OVERHEAD_PER_PART_WIDE + FILES_OVERHEAD + wait_for_delete_s3_objects(cluster, FILES_OVERHEAD_PER_PART_WIDE + FILES_OVERHEAD) def test_alter_table_columns(cluster): @@ -158,32 +214,20 @@ def test_alter_table_columns(cluster): # To ensure parts have merged node.query("OPTIMIZE TABLE s3_test") - # Wait for merges, mutations and old parts deletion - time.sleep(3) - assert node.query("SELECT sum(col1) FROM s3_test FORMAT Values") == "(8192)" assert node.query("SELECT sum(col1) FROM s3_test WHERE id > 0 FORMAT Values") == "(4096)" - assert len(list(minio.list_objects(cluster.minio_bucket, - 'data/'))) == FILES_OVERHEAD + 
FILES_OVERHEAD_PER_PART_WIDE + FILES_OVERHEAD_PER_COLUMN + wait_for_delete_s3_objects(cluster, FILES_OVERHEAD + FILES_OVERHEAD_PER_PART_WIDE + FILES_OVERHEAD_PER_COLUMN) node.query("ALTER TABLE s3_test MODIFY COLUMN col1 String", settings={"mutations_sync": 2}) - # Wait for old parts deletion - time.sleep(3) - assert node.query("SELECT distinct(col1) FROM s3_test FORMAT Values") == "('1')" # and file with mutation - assert len(list(minio.list_objects(cluster.minio_bucket, 'data/'))) == ( - FILES_OVERHEAD + FILES_OVERHEAD_PER_PART_WIDE + FILES_OVERHEAD_PER_COLUMN + 1) + wait_for_delete_s3_objects(cluster, FILES_OVERHEAD + FILES_OVERHEAD_PER_PART_WIDE + FILES_OVERHEAD_PER_COLUMN + 1) node.query("ALTER TABLE s3_test DROP COLUMN col1", settings={"mutations_sync": 2}) - # Wait for old parts deletion - time.sleep(3) - # and 2 files with mutations - assert len( - list(minio.list_objects(cluster.minio_bucket, 'data/'))) == FILES_OVERHEAD + FILES_OVERHEAD_PER_PART_WIDE + 2 + wait_for_delete_s3_objects(cluster, FILES_OVERHEAD + FILES_OVERHEAD_PER_PART_WIDE + 2) def test_attach_detach_partition(cluster): @@ -311,9 +355,7 @@ def test_move_replace_partition_to_another_table(cluster): assert node.query("SELECT count(*) FROM s3_clone FORMAT Values") == "(8192)" # Wait for outdated partitions deletion. - time.sleep(3) - assert len(list( - minio.list_objects(cluster.minio_bucket, 'data/'))) == FILES_OVERHEAD * 2 + FILES_OVERHEAD_PER_PART_WIDE * 4 + wait_for_delete_s3_objects(cluster, FILES_OVERHEAD * 2 + FILES_OVERHEAD_PER_PART_WIDE * 4) node.query("DROP TABLE s3_clone NO DELAY") assert node.query("SELECT sum(id) FROM s3_test FORMAT Values") == "(0)" @@ -329,7 +371,93 @@ def test_move_replace_partition_to_another_table(cluster): node.query("DROP TABLE s3_test NO DELAY") # Backup data should remain in S3. - assert len(list(minio.list_objects(cluster.minio_bucket, 'data/'))) == FILES_OVERHEAD_PER_PART_WIDE * 4 + + wait_for_delete_s3_objects(cluster, FILES_OVERHEAD_PER_PART_WIDE * 4) for obj in list(minio.list_objects(cluster.minio_bucket, 'data/')): minio.remove_object(cluster.minio_bucket, obj.object_name) + + +def test_freeze_unfreeze(cluster): + create_table(cluster, "s3_test") + + node = cluster.instances["node"] + minio = cluster.minio_client + + node.query("INSERT INTO s3_test VALUES {}".format(generate_values('2020-01-03', 4096))) + node.query("ALTER TABLE s3_test FREEZE WITH NAME 'backup1'") + node.query("INSERT INTO s3_test VALUES {}".format(generate_values('2020-01-04', 4096))) + node.query("ALTER TABLE s3_test FREEZE WITH NAME 'backup2'") + + node.query("TRUNCATE TABLE s3_test") + assert len( + list(minio.list_objects(cluster.minio_bucket, 'data/'))) == FILES_OVERHEAD + FILES_OVERHEAD_PER_PART_WIDE * 2 + + # Unfreeze single partition from backup1. + node.query("ALTER TABLE s3_test UNFREEZE PARTITION '2020-01-03' WITH NAME 'backup1'") + # Unfreeze all partitions from backup2. + node.query("ALTER TABLE s3_test UNFREEZE WITH NAME 'backup2'") + + # Data should be removed from S3. 
+ assert len( + list(minio.list_objects(cluster.minio_bucket, 'data/'))) == FILES_OVERHEAD + + +def test_s3_disk_apply_new_settings(cluster): + create_table(cluster, "s3_test") + node = cluster.instances["node"] + + def get_s3_requests(): + node.query("SYSTEM FLUSH LOGS") + return int(node.query("SELECT value FROM system.events WHERE event='S3WriteRequestsCount'")) + + s3_requests_before = get_s3_requests() + node.query("INSERT INTO s3_test VALUES {}".format(generate_values('2020-01-03', 4096))) + s3_requests_to_write_partition = get_s3_requests() - s3_requests_before + + # Force multi-part upload mode. + replace_config("33554432", + "0") + + node.query("SYSTEM RELOAD CONFIG") + + s3_requests_before = get_s3_requests() + node.query("INSERT INTO s3_test VALUES {}".format(generate_values('2020-01-04', 4096, -1))) + + # There should be 3 times more S3 requests because multi-part upload mode uses 3 requests to upload object. + assert get_s3_requests() - s3_requests_before == s3_requests_to_write_partition * 3 + + +def test_s3_disk_restart_during_load(cluster): + create_table(cluster, "s3_test") + + node = cluster.instances["node"] + + node.query("INSERT INTO s3_test VALUES {}".format(generate_values('2020-01-04', 1024 * 1024))) + node.query("INSERT INTO s3_test VALUES {}".format(generate_values('2020-01-05', 1024 * 1024, -1))) + + def read(): + for ii in range(0, 20): + logging.info("Executing %d query", ii) + assert node.query("SELECT sum(id) FROM s3_test FORMAT Values") == "(0)" + logging.info("Query %d executed", ii) + time.sleep(0.2) + + def restart_disk(): + for iii in range(0, 5): + logging.info("Restarting disk, attempt %d", iii) + node.query("SYSTEM RESTART DISK s3") + logging.info("Disk restarted, attempt %d", iii) + time.sleep(0.5) + + threads = [] + for i in range(0, 4): + threads.append(SafeThread(target=read)) + + threads.append(SafeThread(target=restart_disk)) + + for thread in threads: + thread.start() + + for thread in threads: + thread.join() diff --git a/tests/integration/test_merge_tree_s3_restore/configs/config.d/clusters.xml b/tests/integration/test_merge_tree_s3_restore/configs/config.d/clusters.xml new file mode 100644 index 00000000000..4808ae4bc4a --- /dev/null +++ b/tests/integration/test_merge_tree_s3_restore/configs/config.d/clusters.xml @@ -0,0 +1,23 @@ + + + + + + true + + node + 9000 + + + + + + true + + node_another_bucket + 9000 + + + + + diff --git a/tests/integration/test_merge_tree_s3_restore/configs/config.d/storage_conf_not_restorable.xml b/tests/integration/test_merge_tree_s3_restore/configs/config.d/storage_conf_not_restorable.xml new file mode 100644 index 00000000000..c682ddae785 --- /dev/null +++ b/tests/integration/test_merge_tree_s3_restore/configs/config.d/storage_conf_not_restorable.xml @@ -0,0 +1,35 @@ + + + + + s3 + http://minio1:9001/root/another_data/ + minio + minio123 + false + 1 + 0 + + + local + / + + + + + +
+ s3 +
+ + hdd + +
+
+
+
+ + + 0 + +
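In the test changes that follow, create_restore_file() now writes the restore manifest at /var/lib/clickhouse/disks/s3/restore as key=value lines (revision, source_bucket, source_path, detached) instead of bare positional values. A small sketch of composing the same manifest locally (the helper name here is ours; the option keys are the ones used in the diff below):

    def render_restore_manifest(revision=None, bucket=None, path=None, detached=False):
        # Produces the key=value lines that create_restore_file() appends
        # to /var/lib/clickhouse/disks/s3/restore inside the container.
        lines = []
        if revision:
            lines.append('revision={}'.format(revision))
        if bucket:
            lines.append('source_bucket={}'.format(bucket))
        if path:
            lines.append('source_path={}'.format(path))
        if detached:
            lines.append('detached=true')
        return ''.join(line + '\n' for line in lines)

    # For example, render_restore_manifest(revision=42, bucket='root')
    # returns 'revision=42\nsource_bucket=root\n' (revision 42 is a made-up example).
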
diff --git a/tests/integration/test_merge_tree_s3_restore/test.py b/tests/integration/test_merge_tree_s3_restore/test.py index 346d9aced3f..1ad49eec7d2 100644 --- a/tests/integration/test_merge_tree_s3_restore/test.py +++ b/tests/integration/test_merge_tree_s3_restore/test.py @@ -1,3 +1,4 @@ +import os import logging import random import string @@ -6,26 +7,45 @@ import time import pytest from helpers.cluster import ClickHouseCluster + logging.getLogger().setLevel(logging.INFO) logging.getLogger().addHandler(logging.StreamHandler()) +SCRIPT_DIR = os.path.dirname(os.path.realpath(__file__)) +NOT_RESTORABLE_CONFIG_PATH = os.path.join(SCRIPT_DIR, './_instances/node_not_restorable/configs/config.d/storage_conf_not_restorable.xml') +COMMON_CONFIGS = ["configs/config.d/bg_processing_pool_conf.xml", "configs/config.d/log_conf.xml", "configs/config.d/clusters.xml"] + + +def replace_config(old, new): + config = open(NOT_RESTORABLE_CONFIG_PATH, 'r') + config_lines = config.readlines() + config.close() + config_lines = [line.replace(old, new) for line in config_lines] + config = open(NOT_RESTORABLE_CONFIG_PATH, 'w') + config.writelines(config_lines) + config.close() + @pytest.fixture(scope="module") def cluster(): try: cluster = ClickHouseCluster(__file__) - cluster.add_instance("node", main_configs=[ - "configs/config.d/storage_conf.xml", - "configs/config.d/bg_processing_pool_conf.xml", - "configs/config.d/log_conf.xml"], user_configs=[], with_minio=True, stay_alive=True) - cluster.add_instance("node_another_bucket", main_configs=[ - "configs/config.d/storage_conf_another_bucket.xml", - "configs/config.d/bg_processing_pool_conf.xml", - "configs/config.d/log_conf.xml"], user_configs=[], stay_alive=True) - cluster.add_instance("node_another_bucket_path", main_configs=[ - "configs/config.d/storage_conf_another_bucket_path.xml", - "configs/config.d/bg_processing_pool_conf.xml", - "configs/config.d/log_conf.xml"], user_configs=[], stay_alive=True) + + cluster.add_instance("node", + main_configs=COMMON_CONFIGS + ["configs/config.d/storage_conf.xml"], + macros={"cluster": "node", "replica": "0"}, + with_minio=True, with_zookeeper=True, stay_alive=True) + cluster.add_instance("node_another_bucket", + main_configs=COMMON_CONFIGS + ["configs/config.d/storage_conf_another_bucket.xml"], + macros={"cluster": "node_another_bucket", "replica": "0"}, + with_zookeeper=True, stay_alive=True) + cluster.add_instance("node_another_bucket_path", + main_configs=COMMON_CONFIGS + ["configs/config.d/storage_conf_another_bucket_path.xml"], + stay_alive=True) + cluster.add_instance("node_not_restorable", + main_configs=COMMON_CONFIGS + ["configs/config.d/storage_conf_not_restorable.xml"], + stay_alive=True) + logging.info("Starting cluster...") cluster.start() logging.info("Cluster started") @@ -46,28 +66,27 @@ def generate_values(date_str, count, sign=1): return ",".join(["('{}',{},'{}',{})".format(x, y, z, 0) for x, y, z in data]) -def create_table(node, table_name, additional_settings=None): +def create_table(node, table_name, attach=False, replicated=False): node.query("CREATE DATABASE IF NOT EXISTS s3 ENGINE = Ordinary") create_table_statement = """ - CREATE TABLE s3.{} ( + {create} TABLE s3.{table_name} {on_cluster} ( dt Date, id Int64, data String, counter Int64, INDEX min_max (id) TYPE minmax GRANULARITY 3 - ) ENGINE=MergeTree() + ) ENGINE={engine} PARTITION BY dt ORDER BY (dt, id) SETTINGS storage_policy='s3', old_parts_lifetime=600, index_granularity=512 - """.format(table_name) - - if additional_settings: - 
create_table_statement += "," - create_table_statement += additional_settings + """.format(create="ATTACH" if attach else "CREATE", + table_name=table_name, + on_cluster="ON CLUSTER '{}'".format(node.name) if replicated else "", + engine="ReplicatedMergeTree('/clickhouse/tables/{cluster}/test', '{replica}')" if replicated else "MergeTree()") node.query(create_table_statement) @@ -75,6 +94,8 @@ def create_table(node, table_name, additional_settings=None): def purge_s3(cluster, bucket): minio = cluster.minio_client for obj in list(minio.list_objects(bucket, recursive=True)): + if str(obj.object_name).find(".SCHEMA_VERSION") != -1: + continue minio.remove_object(bucket, obj.object_name) @@ -86,28 +107,36 @@ def drop_shadow_information(node): node.exec_in_container(['bash', '-c', 'rm -rf /var/lib/clickhouse/shadow/*'], user='root') -def create_restore_file(node, revision=0, bucket=None, path=None): - add_restore_option = 'echo -en "{}\n" >> /var/lib/clickhouse/disks/s3/restore' - node.exec_in_container(['bash', '-c', add_restore_option.format(revision)], user='root') +def create_restore_file(node, revision=None, bucket=None, path=None, detached=None): + node.exec_in_container(['bash', '-c', 'mkdir -p /var/lib/clickhouse/disks/s3/'], user='root') + node.exec_in_container(['bash', '-c', 'touch /var/lib/clickhouse/disks/s3/restore'], user='root') + + add_restore_option = 'echo -en "{}={}\n" >> /var/lib/clickhouse/disks/s3/restore' + if revision: + node.exec_in_container(['bash', '-c', add_restore_option.format('revision', revision)], user='root') if bucket: - node.exec_in_container(['bash', '-c', add_restore_option.format(bucket)], user='root') + node.exec_in_container(['bash', '-c', add_restore_option.format('source_bucket', bucket)], user='root') if path: - node.exec_in_container(['bash', '-c', add_restore_option.format(path)], user='root') + node.exec_in_container(['bash', '-c', add_restore_option.format('source_path', path)], user='root') + if detached: + node.exec_in_container(['bash', '-c', add_restore_option.format('detached', 'true')], user='root') def get_revision_counter(node, backup_number): - return int(node.exec_in_container(['bash', '-c', 'cat /var/lib/clickhouse/disks/s3/shadow/{}/revision.txt'.format(backup_number)], user='root')) + return int(node.exec_in_container( + ['bash', '-c', 'cat /var/lib/clickhouse/disks/s3/shadow/{}/revision.txt'.format(backup_number)], user='root')) @pytest.fixture(autouse=True) def drop_table(cluster): yield - node_names = ["node", "node_another_bucket", "node_another_bucket_path"] + node_names = ["node", "node_another_bucket", "node_another_bucket_path", "node_not_restorable"] for node_name in node_names: node = cluster.instances[node_name] - node.query("DROP TABLE IF EXISTS s3.test NO DELAY") + node.query("DROP TABLE IF EXISTS s3.test SYNC") + node.query("DROP DATABASE IF EXISTS s3 SYNC") drop_s3_metadata(node) drop_shadow_information(node) @@ -117,32 +146,24 @@ def drop_table(cluster): purge_s3(cluster, bucket) -def test_full_restore(cluster): +@pytest.mark.parametrize( + "replicated", [False, True] +) +def test_full_restore(cluster, replicated): node = cluster.instances["node"] - create_table(node, "test") + create_table(node, "test", attach=False, replicated=replicated) node.query("INSERT INTO s3.test VALUES {}".format(generate_values('2020-01-03', 4096))) node.query("INSERT INTO s3.test VALUES {}".format(generate_values('2020-01-04', 4096, -1))) node.query("INSERT INTO s3.test VALUES {}".format(generate_values('2020-01-05', 4096))) 
node.query("INSERT INTO s3.test VALUES {}".format(generate_values('2020-01-05', 4096, -1))) - # To ensure parts have merged - node.query("OPTIMIZE TABLE s3.test") - - assert node.query("SELECT count(*) FROM s3.test FORMAT Values") == "({})".format(4096 * 4) - assert node.query("SELECT sum(id) FROM s3.test FORMAT Values") == "({})".format(0) - - node.stop_clickhouse() + node.query("DETACH TABLE s3.test") drop_s3_metadata(node) - node.start_clickhouse() - - # All data is removed. - assert node.query("SELECT count(*) FROM s3.test FORMAT Values") == "({})".format(0) - - node.stop_clickhouse() create_restore_file(node) - node.start_clickhouse(10) + node.query("SYSTEM RESTART DISK s3") + node.query("ATTACH TABLE s3.test") assert node.query("SELECT count(*) FROM s3.test FORMAT Values") == "({})".format(4096 * 4) assert node.query("SELECT sum(id) FROM s3.test FORMAT Values") == "({})".format(0) @@ -166,22 +187,18 @@ def test_restore_another_bucket_path(cluster): node_another_bucket = cluster.instances["node_another_bucket"] - create_table(node_another_bucket, "test") - - node_another_bucket.stop_clickhouse() create_restore_file(node_another_bucket, bucket="root") - node_another_bucket.start_clickhouse(10) + node_another_bucket.query("SYSTEM RESTART DISK s3") + create_table(node_another_bucket, "test", attach=True) assert node_another_bucket.query("SELECT count(*) FROM s3.test FORMAT Values") == "({})".format(4096 * 4) assert node_another_bucket.query("SELECT sum(id) FROM s3.test FORMAT Values") == "({})".format(0) node_another_bucket_path = cluster.instances["node_another_bucket_path"] - create_table(node_another_bucket_path, "test") - - node_another_bucket_path.stop_clickhouse() create_restore_file(node_another_bucket_path, bucket="root2", path="data") - node_another_bucket_path.start_clickhouse(10) + node_another_bucket_path.query("SYSTEM RESTART DISK s3") + create_table(node_another_bucket_path, "test", attach=True) assert node_another_bucket_path.query("SELECT count(*) FROM s3.test FORMAT Values") == "({})".format(4096 * 4) assert node_another_bucket_path.query("SELECT sum(id) FROM s3.test FORMAT Values") == "({})".format(0) @@ -216,36 +233,30 @@ def test_restore_different_revisions(cluster): node_another_bucket = cluster.instances["node_another_bucket"] - create_table(node_another_bucket, "test") - # Restore to revision 1 (2 parts). - node_another_bucket.stop_clickhouse() - drop_s3_metadata(node_another_bucket) - purge_s3(cluster, cluster.minio_bucket_2) create_restore_file(node_another_bucket, revision=revision1, bucket="root") - node_another_bucket.start_clickhouse(10) + node_another_bucket.query("SYSTEM RESTART DISK s3") + create_table(node_another_bucket, "test", attach=True) assert node_another_bucket.query("SELECT count(*) FROM s3.test FORMAT Values") == "({})".format(4096 * 2) assert node_another_bucket.query("SELECT sum(id) FROM s3.test FORMAT Values") == "({})".format(0) assert node_another_bucket.query("SELECT count(*) from system.parts where table = 'test'") == '2\n' # Restore to revision 2 (4 parts). 
- node_another_bucket.stop_clickhouse() - drop_s3_metadata(node_another_bucket) - purge_s3(cluster, cluster.minio_bucket_2) + node_another_bucket.query("DETACH TABLE s3.test") create_restore_file(node_another_bucket, revision=revision2, bucket="root") - node_another_bucket.start_clickhouse(10) + node_another_bucket.query("SYSTEM RESTART DISK s3") + node_another_bucket.query("ATTACH TABLE s3.test") assert node_another_bucket.query("SELECT count(*) FROM s3.test FORMAT Values") == "({})".format(4096 * 4) assert node_another_bucket.query("SELECT sum(id) FROM s3.test FORMAT Values") == "({})".format(0) assert node_another_bucket.query("SELECT count(*) from system.parts where table = 'test'") == '4\n' # Restore to revision 3 (4 parts + 1 merged). - node_another_bucket.stop_clickhouse() - drop_s3_metadata(node_another_bucket) - purge_s3(cluster, cluster.minio_bucket_2) + node_another_bucket.query("DETACH TABLE s3.test") create_restore_file(node_another_bucket, revision=revision3, bucket="root") - node_another_bucket.start_clickhouse(10) + node_another_bucket.query("SYSTEM RESTART DISK s3") + node_another_bucket.query("ATTACH TABLE s3.test") assert node_another_bucket.query("SELECT count(*) FROM s3.test FORMAT Values") == "({})".format(4096 * 4) assert node_another_bucket.query("SELECT sum(id) FROM s3.test FORMAT Values") == "({})".format(0) @@ -270,25 +281,20 @@ def test_restore_mutations(cluster): node_another_bucket = cluster.instances["node_another_bucket"] - create_table(node_another_bucket, "test") - # Restore to revision before mutation. - node_another_bucket.stop_clickhouse() - drop_s3_metadata(node_another_bucket) - purge_s3(cluster, cluster.minio_bucket_2) create_restore_file(node_another_bucket, revision=revision_before_mutation, bucket="root") - node_another_bucket.start_clickhouse(10) + node_another_bucket.query("SYSTEM RESTART DISK s3") + create_table(node_another_bucket, "test", attach=True) assert node_another_bucket.query("SELECT count(*) FROM s3.test FORMAT Values") == "({})".format(4096 * 2) assert node_another_bucket.query("SELECT sum(id) FROM s3.test FORMAT Values") == "({})".format(0) assert node_another_bucket.query("SELECT sum(counter) FROM s3.test FORMAT Values") == "({})".format(0) # Restore to revision after mutation. - node_another_bucket.stop_clickhouse() - drop_s3_metadata(node_another_bucket) - purge_s3(cluster, cluster.minio_bucket_2) + node_another_bucket.query("DETACH TABLE s3.test") create_restore_file(node_another_bucket, revision=revision_after_mutation, bucket="root") - node_another_bucket.start_clickhouse(10) + node_another_bucket.query("SYSTEM RESTART DISK s3") + node_another_bucket.query("ATTACH TABLE s3.test") assert node_another_bucket.query("SELECT count(*) FROM s3.test FORMAT Values") == "({})".format(4096 * 2) assert node_another_bucket.query("SELECT sum(id) FROM s3.test FORMAT Values") == "({})".format(0) @@ -297,12 +303,11 @@ def test_restore_mutations(cluster): # Restore to revision in the middle of mutation. # Unfinished mutation should be completed after table startup. 
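# --- Editor's aside (illustrative sketch, not part of the patch) ---
# Every "restore to revision N" step repeats the same cycle; factored out it
# would look like this (all helper names are the ones defined in this file):
def restore_to_revision(node, revision, bucket):
    node.query("DETACH TABLE s3.test")
    create_restore_file(node, revision=revision, bucket=bucket)
    node.query("SYSTEM RESTART DISK s3")
    node.query("ATTACH TABLE s3.test")
# --- end aside ---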
-    node_another_bucket.stop_clickhouse()
-    drop_s3_metadata(node_another_bucket)
-    purge_s3(cluster, cluster.minio_bucket_2)
+    node_another_bucket.query("DETACH TABLE s3.test")
     revision = (revision_before_mutation + revision_after_mutation) // 2
     create_restore_file(node_another_bucket, revision=revision, bucket="root")
-    node_another_bucket.start_clickhouse(10)
+    node_another_bucket.query("SYSTEM RESTART DISK s3")
+    node_another_bucket.query("ATTACH TABLE s3.test")
 
     # Wait for unfinished mutation completion.
     time.sleep(3)
@@ -311,3 +316,83 @@
     assert node_another_bucket.query("SELECT sum(id) FROM s3.test FORMAT Values") == "({})".format(0)
     assert node_another_bucket.query("SELECT sum(counter) FROM s3.test FORMAT Values") == "({})".format(4096 * 2)
     assert node_another_bucket.query("SELECT sum(counter) FROM s3.test WHERE id > 0 FORMAT Values") == "({})".format(4096)
+
+
+def test_migrate_to_restorable_schema(cluster):
+    node = cluster.instances["node_not_restorable"]
+
+    create_table(node, "test")
+
+    node.query("INSERT INTO s3.test VALUES {}".format(generate_values('2020-01-03', 4096)))
+    node.query("INSERT INTO s3.test VALUES {}".format(generate_values('2020-01-04', 4096, -1)))
+    node.query("INSERT INTO s3.test VALUES {}".format(generate_values('2020-01-05', 4096)))
+    node.query("INSERT INTO s3.test VALUES {}".format(generate_values('2020-01-05', 4096, -1)))
+
+    replace_config("<send_metadata>false</send_metadata>", "<send_metadata>true</send_metadata>")
+    node.restart_clickhouse()
+
+    node.query("INSERT INTO s3.test VALUES {}".format(generate_values('2020-01-06', 4096)))
+    node.query("INSERT INTO s3.test VALUES {}".format(generate_values('2020-01-06', 4096, -1)))
+
+    node.query("ALTER TABLE s3.test FREEZE")
+    revision = get_revision_counter(node, 1)
+
+    assert revision != 0
+
+    node_another_bucket = cluster.instances["node_another_bucket"]
+
+    # Restore to the revision captured after the migration.
+    create_restore_file(node_another_bucket, revision=revision, bucket="root", path="another_data")
+    node_another_bucket.query("SYSTEM RESTART DISK s3")
+    create_table(node_another_bucket, "test", attach=True)
+
+    assert node_another_bucket.query("SELECT count(*) FROM s3.test FORMAT Values") == "({})".format(4096 * 6)
+    assert node_another_bucket.query("SELECT sum(id) FROM s3.test FORMAT Values") == "({})".format(0)
+
+
+@pytest.mark.parametrize(
+    "replicated", [False, True]
+)
+def test_restore_to_detached(cluster, replicated):
+    node = cluster.instances["node"]
+
+    create_table(node, "test", attach=False, replicated=replicated)
+
+    node.query("INSERT INTO s3.test VALUES {}".format(generate_values('2020-01-03', 4096)))
+    node.query("INSERT INTO s3.test VALUES {}".format(generate_values('2020-01-04', 4096, -1)))
+    node.query("INSERT INTO s3.test VALUES {}".format(generate_values('2020-01-05', 4096)))
+    node.query("INSERT INTO s3.test VALUES {}".format(generate_values('2020-01-06', 4096, -1)))
+    node.query("INSERT INTO s3.test VALUES {}".format(generate_values('2020-01-07', 4096, 0)))
+
+    # Add some mutation.
+    node.query("ALTER TABLE s3.test UPDATE counter = 1 WHERE 1", settings={"mutations_sync": 2})
+
+    # Detach some partition.
+ node.query("ALTER TABLE s3.test DETACH PARTITION '2020-01-07'") + + node.query("ALTER TABLE s3.test FREEZE") + revision = get_revision_counter(node, 1) + + node_another_bucket = cluster.instances["node_another_bucket"] + + create_restore_file(node_another_bucket, revision=revision, bucket="root", path="data", detached=True) + node_another_bucket.query("SYSTEM RESTART DISK s3") + create_table(node_another_bucket, "test", replicated=replicated) + + assert node_another_bucket.query("SELECT count(*) FROM s3.test FORMAT Values") == "({})".format(0) + + node_another_bucket.query("ALTER TABLE s3.test ATTACH PARTITION '2020-01-03'") + node_another_bucket.query("ALTER TABLE s3.test ATTACH PARTITION '2020-01-04'") + node_another_bucket.query("ALTER TABLE s3.test ATTACH PARTITION '2020-01-05'") + node_another_bucket.query("ALTER TABLE s3.test ATTACH PARTITION '2020-01-06'") + + assert node_another_bucket.query("SELECT count(*) FROM s3.test FORMAT Values") == "({})".format(4096 * 4) + assert node_another_bucket.query("SELECT sum(id) FROM s3.test FORMAT Values") == "({})".format(0) + assert node_another_bucket.query("SELECT sum(counter) FROM s3.test FORMAT Values") == "({})".format(4096 * 4) + + # Attach partition that was already detached before backup-restore. + node_another_bucket.query("ALTER TABLE s3.test ATTACH PARTITION '2020-01-07'") + + assert node_another_bucket.query("SELECT count(*) FROM s3.test FORMAT Values") == "({})".format(4096 * 5) + assert node_another_bucket.query("SELECT sum(id) FROM s3.test FORMAT Values") == "({})".format(0) + assert node_another_bucket.query("SELECT sum(counter) FROM s3.test FORMAT Values") == "({})".format(4096 * 5) diff --git a/tests/integration/test_mysql_database_engine/test.py b/tests/integration/test_mysql_database_engine/test.py index 4d10e2ea6f5..f4b0bb1b2fc 100644 --- a/tests/integration/test_mysql_database_engine/test.py +++ b/tests/integration/test_mysql_database_engine/test.py @@ -146,10 +146,14 @@ def test_clickhouse_join_for_mysql_database(started_cluster): "CREATE TABLE default.t1_remote_mysql AS mysql('mysql1:3306','test','t1_mysql_local','root','clickhouse')") clickhouse_node.query( "CREATE TABLE default.t2_remote_mysql AS mysql('mysql1:3306','test','t2_mysql_local','root','clickhouse')") + clickhouse_node.query("INSERT INTO `default`.`t1_remote_mysql` VALUES ('EN','A',''),('RU','B','AAA')") + clickhouse_node.query("INSERT INTO `default`.`t2_remote_mysql` VALUES ('A','AAA'),('Z','')") + assert clickhouse_node.query("SELECT s.pays " "FROM default.t1_remote_mysql AS s " "LEFT JOIN default.t1_remote_mysql AS s_ref " - "ON (s_ref.opco = s.opco AND s_ref.service = s.service)") == '' + "ON (s_ref.opco = s.opco AND s_ref.service = s.service) " + "WHERE s_ref.opco != '' AND s.opco != '' ").rstrip() == 'RU' mysql_node.query("DROP DATABASE test") diff --git a/tests/integration/test_mysql_protocol/test.py b/tests/integration/test_mysql_protocol/test.py index 7f7d59674bc..43daeebeaf5 100644 --- a/tests/integration/test_mysql_protocol/test.py +++ b/tests/integration/test_mysql_protocol/test.py @@ -149,8 +149,8 @@ def test_mysql_client_exception(mysql_client, server_address): -e "CREATE TABLE default.t1_remote_mysql AS mysql('127.0.0.1:10086','default','t1_local','default','');" '''.format(host=server_address, port=server_port), demux=True) - assert stderr[0:266].decode() == "mysql: [Warning] Using a password on the command line interface can be insecure.\n" \ - "ERROR 1000 (00000) at line 1: Poco::Exception. 
Code: 1000, e.code() = 2002, e.displayText() = mysqlxx::ConnectionFailed: Can't connect to MySQL server on '127.0.0.1' (115) ((nullptr):0)" + assert stderr[0:258].decode() == "mysql: [Warning] Using a password on the command line interface can be insecure.\n" \ + "ERROR 1000 (00000) at line 1: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = Exception: Connections to all replicas failed: default@127.0.0.1:10086 as user default" def test_mysql_affected_rows(mysql_client, server_address): diff --git a/tests/integration/test_odbc_interaction/test.py b/tests/integration/test_odbc_interaction/test.py index 6bb6a6ee777..25668737885 100644 --- a/tests/integration/test_odbc_interaction/test.py +++ b/tests/integration/test_odbc_interaction/test.py @@ -6,6 +6,7 @@ import pytest from helpers.cluster import ClickHouseCluster from helpers.test_tools import assert_eq_with_retry from psycopg2.extensions import ISOLATION_LEVEL_AUTOCOMMIT +from multiprocessing.dummy import Pool cluster = ClickHouseCluster(__file__) node1 = cluster.add_instance('node1', with_odbc_drivers=True, with_mysql=True, @@ -26,6 +27,11 @@ create_table_sql_template = """ """ +def skip_test_msan(instance): + if instance.is_built_with_memory_sanitizer(): + pytest.skip("Memory Sanitizer cannot work with third-party shared libraries") + + def get_mysql_conn(): conn = pymysql.connect(user='root', password='clickhouse', host='127.0.0.1', port=3308) return conn @@ -74,6 +80,9 @@ def started_cluster(): node1.exec_in_container( ["bash", "-c", "echo 'CREATE TABLE t4(X INTEGER PRIMARY KEY ASC, Y, Z);' | sqlite3 {}".format(sqlite_db)], privileged=True, user='root') + node1.exec_in_container( + ["bash", "-c", "echo 'CREATE TABLE tf1(x INTEGER PRIMARY KEY ASC, y, z);' | sqlite3 {}".format(sqlite_db)], + privileged=True, user='root') print("sqlite tables created") mysql_conn = get_mysql_conn() print("mysql connection received") @@ -101,6 +110,8 @@ def started_cluster(): def test_mysql_simple_select_works(started_cluster): + skip_test_msan(node1) + mysql_setup = node1.odbc_drivers["MySQL"] table_name = 'test_insert_select' @@ -141,6 +152,8 @@ CREATE TABLE {}(id UInt32, name String, age UInt32, money UInt32, column_x Nulla def test_mysql_insert(started_cluster): + skip_test_msan(node1) + mysql_setup = node1.odbc_drivers["MySQL"] table_name = 'test_insert' conn = get_mysql_conn() @@ -162,6 +175,8 @@ def test_mysql_insert(started_cluster): def test_sqlite_simple_select_function_works(started_cluster): + skip_test_msan(node1) + sqlite_setup = node1.odbc_drivers["SQLite3"] sqlite_db = sqlite_setup["Database"] @@ -177,8 +192,27 @@ def test_sqlite_simple_select_function_works(started_cluster): assert node1.query( "select count(), sum(x) from odbc('DSN={}', '{}') group by x".format(sqlite_setup["DSN"], 't1')) == "1\t1\n" +def test_sqlite_table_function(started_cluster): + skip_test_msan(node1) + + sqlite_setup = node1.odbc_drivers["SQLite3"] + sqlite_db = sqlite_setup["Database"] + + node1.exec_in_container(["bash", "-c", "echo 'INSERT INTO tf1 values(1, 2, 3);' | sqlite3 {}".format(sqlite_db)], + privileged=True, user='root') + node1.query("create table odbc_tf as odbc('DSN={}', '{}')".format(sqlite_setup["DSN"], 'tf1')) + assert node1.query("select * from odbc_tf") == "1\t2\t3\n" + + assert node1.query("select y from odbc_tf") == "2\n" + assert node1.query("select z from odbc_tf") == "3\n" + assert node1.query("select x from odbc_tf") == "1\n" + assert node1.query("select x, y from odbc_tf") == "1\t2\n" + assert node1.query("select z, x, y from 
odbc_tf") == "3\t1\t2\n" + assert node1.query("select count(), sum(x) from odbc_tf group by x") == "1\t1\n" def test_sqlite_simple_select_storage_works(started_cluster): + skip_test_msan(node1) + sqlite_setup = node1.odbc_drivers["SQLite3"] sqlite_db = sqlite_setup["Database"] @@ -197,6 +231,8 @@ def test_sqlite_simple_select_storage_works(started_cluster): def test_sqlite_odbc_hashed_dictionary(started_cluster): + skip_test_msan(node1) + sqlite_db = node1.odbc_drivers["SQLite3"]["Database"] node1.exec_in_container(["bash", "-c", "echo 'INSERT INTO t2 values(1, 2, 3);' | sqlite3 {}".format(sqlite_db)], privileged=True, user='root') @@ -241,6 +277,8 @@ def test_sqlite_odbc_hashed_dictionary(started_cluster): def test_sqlite_odbc_cached_dictionary(started_cluster): + skip_test_msan(node1) + sqlite_db = node1.odbc_drivers["SQLite3"]["Database"] node1.exec_in_container(["bash", "-c", "echo 'INSERT INTO t3 values(1, 2, 3);' | sqlite3 {}".format(sqlite_db)], privileged=True, user='root') @@ -251,7 +289,7 @@ def test_sqlite_odbc_cached_dictionary(started_cluster): node1.exec_in_container(["bash", "-c", "chmod a+rw /tmp"], privileged=True, user='root') node1.exec_in_container(["bash", "-c", "chmod a+rw {}".format(sqlite_db)], privileged=True, user='root') - node1.query("insert into table function odbc('DSN={};', '', 't3') values (200, 2, 7)".format( + node1.query("insert into table function odbc('DSN={};ReadOnly=0', '', 't3') values (200, 2, 7)".format( node1.odbc_drivers["SQLite3"]["DSN"])) assert node1.query("select dictGetUInt8('sqlite3_odbc_cached', 'Z', toUInt64(200))") == "7\n" # new value @@ -263,6 +301,8 @@ def test_sqlite_odbc_cached_dictionary(started_cluster): def test_postgres_odbc_hashed_dictionary_with_schema(started_cluster): + skip_test_msan(node1) + conn = get_postgres_conn() cursor = conn.cursor() cursor.execute("truncate table clickhouse.test_table") @@ -273,6 +313,8 @@ def test_postgres_odbc_hashed_dictionary_with_schema(started_cluster): def test_postgres_odbc_hashed_dictionary_no_tty_pipe_overflow(started_cluster): + skip_test_msan(node1) + conn = get_postgres_conn() cursor = conn.cursor() cursor.execute("truncate table clickhouse.test_table") @@ -287,6 +329,8 @@ def test_postgres_odbc_hashed_dictionary_no_tty_pipe_overflow(started_cluster): def test_postgres_insert(started_cluster): + skip_test_msan(node1) + conn = get_postgres_conn() conn.cursor().execute("truncate table clickhouse.test_table") @@ -307,11 +351,13 @@ def test_postgres_insert(started_cluster): def test_bridge_dies_with_parent(started_cluster): + skip_test_msan(node1) + if node1.is_built_with_address_sanitizer(): # TODO: Leak sanitizer falsely reports about a leak of 16 bytes in clickhouse-odbc-bridge in this test and # that's linked somehow with that we have replaced getauxval() in glibc-compatibility. # The leak sanitizer calls getauxval() for its own purposes, and our replaced version doesn't seem to be equivalent in that case. 
- return + pytest.skip("Leak sanitizer falsely reports about a leak of 16 bytes in clickhouse-odbc-bridge") node1.query("select dictGetString('postgres_odbc_hashed', 'column2', toUInt64(1))") @@ -342,9 +388,12 @@ def test_bridge_dies_with_parent(started_cluster): assert clickhouse_pid is None assert bridge_pid is None + node1.start_clickhouse(20) def test_odbc_postgres_date_data_type(started_cluster): + skip_test_msan(node1) + conn = get_postgres_conn(); cursor = conn.cursor() cursor.execute("CREATE TABLE IF NOT EXISTS clickhouse.test_date (column1 integer, column2 date)") @@ -362,5 +411,194 @@ def test_odbc_postgres_date_data_type(started_cluster): expected = '1\t2020-12-01\n2\t2020-12-02\n3\t2020-12-03\n' result = node1.query('SELECT * FROM test_date'); assert(result == expected) + cursor.execute("DROP TABLE IF EXISTS clickhouse.test_date") + node1.query("DROP TABLE IF EXISTS test_date") +def test_odbc_postgres_conversions(started_cluster): + skip_test_msan(node1) + + conn = get_postgres_conn() + cursor = conn.cursor() + + cursor.execute( + '''CREATE TABLE IF NOT EXISTS clickhouse.test_types ( + a smallint, b integer, c bigint, d real, e double precision, f serial, g bigserial, + h timestamp)''') + + node1.query(''' + INSERT INTO TABLE FUNCTION + odbc('DSN=postgresql_odbc; Servername=postgre-sql.local', 'clickhouse', 'test_types') + VALUES (-32768, -2147483648, -9223372036854775808, 1.12345, 1.1234567890, 2147483647, 9223372036854775807, '2000-05-12 12:12:12')''') + + result = node1.query(''' + SELECT a, b, c, d, e, f, g, h + FROM odbc('DSN=postgresql_odbc; Servername=postgre-sql.local', 'clickhouse', 'test_types') + ''') + + assert(result == '-32768\t-2147483648\t-9223372036854775808\t1.12345\t1.123456789\t2147483647\t9223372036854775807\t2000-05-12 12:12:12\n') + cursor.execute("DROP TABLE IF EXISTS clickhouse.test_types") + + cursor.execute("""CREATE TABLE IF NOT EXISTS clickhouse.test_types (column1 Timestamp, column2 Numeric)""") + + node1.query( + ''' + CREATE TABLE test_types (column1 DateTime64, column2 Decimal(5, 1)) + ENGINE=ODBC('DSN=postgresql_odbc; Servername=postgre-sql.local', 'clickhouse', 'test_types')''') + + node1.query( + """INSERT INTO test_types + SELECT toDateTime64('2019-01-01 00:00:00', 3, 'Europe/Moscow'), toDecimal32(1.1, 1)""") + + expected = node1.query("SELECT toDateTime64('2019-01-01 00:00:00', 3, 'Europe/Moscow'), toDecimal32(1.1, 1)") + result = node1.query("SELECT * FROM test_types") + print(result) + cursor.execute("DROP TABLE IF EXISTS clickhouse.test_types") + assert(result == expected) + + +def test_odbc_cyrillic_with_varchar(started_cluster): + skip_test_msan(node1) + + conn = get_postgres_conn() + cursor = conn.cursor() + + cursor.execute("DROP TABLE IF EXISTS clickhouse.test_cyrillic") + cursor.execute("CREATE TABLE clickhouse.test_cyrillic (name varchar(11))") + + node1.query(''' + CREATE TABLE test_cyrillic (name String) + ENGINE = ODBC('DSN=postgresql_odbc; Servername=postgre-sql.local', 'clickhouse', 'test_cyrillic')''') + + cursor.execute("INSERT INTO clickhouse.test_cyrillic VALUES ('A-nice-word')") + cursor.execute("INSERT INTO clickhouse.test_cyrillic VALUES ('Красивенько')") + + result = node1.query(''' SELECT * FROM test_cyrillic ORDER BY name''') + assert(result == 'A-nice-word\nКрасивенько\n') + result = node1.query(''' SELECT name FROM odbc('DSN=postgresql_odbc; Servername=postgre-sql.local', 'clickhouse', 'test_cyrillic') ''') + assert(result == 'A-nice-word\nКрасивенько\n') + + +def test_many_connections(started_cluster): + 
skip_test_msan(node1) + + conn = get_postgres_conn() + cursor = conn.cursor() + + cursor.execute('DROP TABLE IF EXISTS clickhouse.test_pg_table') + cursor.execute('CREATE TABLE clickhouse.test_pg_table (key integer, value integer)') + + node1.query(''' + DROP TABLE IF EXISTS test_pg_table; + CREATE TABLE test_pg_table (key UInt32, value UInt32) + ENGINE = ODBC('DSN=postgresql_odbc; Servername=postgre-sql.local', 'clickhouse', 'test_pg_table')''') + + node1.query("INSERT INTO test_pg_table SELECT number, number FROM numbers(10)") + + query = "SELECT count() FROM (" + for i in range (24): + query += "SELECT key FROM {t} UNION ALL " + query += "SELECT key FROM {t})" + + assert node1.query(query.format(t='test_pg_table')) == '250\n' + + +def test_concurrent_queries(started_cluster): + skip_test_msan(node1) + + conn = get_postgres_conn() + cursor = conn.cursor() + + node1.query(''' + DROP TABLE IF EXISTS test_pg_table; + CREATE TABLE test_pg_table (key UInt32, value UInt32) + ENGINE = ODBC('DSN=postgresql_odbc; Servername=postgre-sql.local', 'clickhouse', 'test_pg_table')''') + + cursor.execute('DROP TABLE IF EXISTS clickhouse.test_pg_table') + cursor.execute('CREATE TABLE clickhouse.test_pg_table (key integer, value integer)') + + def node_insert(_): + for i in range(5): + node1.query("INSERT INTO test_pg_table SELECT number, number FROM numbers(1000)", user='default') + + busy_pool = Pool(5) + p = busy_pool.map_async(node_insert, range(5)) + p.wait() + result = node1.query("SELECT count() FROM test_pg_table", user='default') + print(result) + assert(int(result) == 5 * 5 * 1000) + + def node_insert_select(_): + for i in range(5): + result = node1.query("INSERT INTO test_pg_table SELECT number, number FROM numbers(1000)", user='default') + result = node1.query("SELECT * FROM test_pg_table LIMIT 100", user='default') + + busy_pool = Pool(5) + p = busy_pool.map_async(node_insert_select, range(5)) + p.wait() + result = node1.query("SELECT count() FROM test_pg_table", user='default') + print(result) + assert(int(result) == 5 * 5 * 1000 * 2) + + node1.query('DROP TABLE test_pg_table;') + cursor.execute('DROP TABLE clickhouse.test_pg_table;') + + +def test_odbc_long_column_names(started_cluster): + skip_test_msan(node1) + + conn = get_postgres_conn(); + cursor = conn.cursor() + + column_name = "column" * 8 + create_table = "CREATE TABLE clickhouse.test_long_column_names (" + for i in range(1000): + if i != 0: + create_table += ", " + create_table += "{} integer".format(column_name + str(i)) + create_table += ")" + cursor.execute(create_table) + insert = "INSERT INTO clickhouse.test_long_column_names SELECT i" + ", i" * 999 + " FROM generate_series(0, 99) as t(i)" + cursor.execute(insert) + conn.commit() + + create_table = "CREATE TABLE test_long_column_names (" + for i in range(1000): + if i != 0: + create_table += ", " + create_table += "{} UInt32".format(column_name + str(i)) + create_table += ") ENGINE=ODBC('DSN=postgresql_odbc; Servername=postgre-sql.local', 'clickhouse', 'test_long_column_names')" + result = node1.query(create_table); + + result = node1.query('SELECT * FROM test_long_column_names'); + expected = node1.query("SELECT number" + ", number" * 999 + " FROM numbers(100)") + assert(result == expected) + + cursor.execute("DROP TABLE IF EXISTS clickhouse.test_long_column_names") + node1.query("DROP TABLE IF EXISTS test_long_column_names") + + +def test_odbc_long_text(started_cluster): + skip_test_msan(node1) + + conn = get_postgres_conn() + cursor = conn.cursor() + cursor.execute("drop 
table if exists clickhouse.test_long_text") + cursor.execute("create table clickhouse.test_long_text(flen int, field1 text)"); + + # sample test from issue 9363 + text_from_issue = """BEGIN These examples only show the order that data is arranged in. The values from different columns are stored separately, and data from the same column is stored together. Examples of a column-oriented DBMS: Vertica, Paraccel (Actian Matrix and Amazon Redshift), Sybase IQ, Exasol, Infobright, InfiniDB, MonetDB (VectorWise and Actian Vector), LucidDB, SAP HANA, Google Dremel, Google PowerDrill, Druid, and kdb+. Different orders for storing data are better suited to different scenarios. The data access scenario refers to what queries are made, how often, and in what proportion; how much data is read for each type of query – rows, columns, and bytes; the relationship between reading and updating data; the working size of the data and how locally it is used; whether transactions are used, and how isolated they are; requirements for data replication and logical integrity; requirements for latency and throughput for each type of query, and so on. The higher the load on the system, the more important it is to customize the system set up to match the requirements of the usage scenario, and the more fine grained this customization becomes. There is no system that is equally well-suited to significantly different scenarios. If a system is adaptable to a wide set of scenarios, under a high load, the system will handle all the scenarios equally poorly, or will work well for just one or few of possible scenarios. Key Properties of OLAP Scenario¶ The vast majority of requests are for read access. Data is updated in fairly large batches (> 1000 rows), not by single rows; or it is not updated at all. Data is added to the DB but is not modified. For reads, quite a large number of rows are extracted from the DB, but only a small subset of columns. Tables are "wide," meaning they contain a large number of columns. Queries are relatively rare (usually hundreds of queries per server or less per second). For simple queries, latencies around 50 ms are allowed. Column values are fairly small: numbers and short strings (for example, 60 bytes per URL). Requires high throughput when processing a single query (up to billions of rows per second per server). Transactions are not necessary. Low requirements for data consistency. There is one large table per query. All tables are small, except for one. A query result is significantly smaller than the source data. In other words, data is filtered or aggregated, so the result fits in a single server"s RAM. It is easy to see that the OLAP scenario is very different from other popular scenarios (such as OLTP or Key-Value access). So it doesn"t make sense to try to use OLTP or a Key-Value DB for processing analytical queries if you want to get decent performance. For example, if you try to use MongoDB or Redis for analytics, you will get very poor performance compared to OLAP databases. Why Column-Oriented Databases Work Better in the OLAP Scenario¶ Column-oriented databases are better suited to OLAP scenarios: they are at least 100 times faster in processing most queries. The reasons are explained in detail below, but the fact is easier to demonstrate visually. 
END""" + cursor.execute("""insert into clickhouse.test_long_text (flen, field1) values (3248, '{}')""".format(text_from_issue)); + + node1.query(''' + DROP TABLE IF EXISTS test_long_test; + CREATE TABLE test_long_text (flen UInt32, field1 String) + ENGINE = ODBC('DSN=postgresql_odbc; Servername=postgre-sql.local', 'clickhouse', 'test_long_text')''') + result = node1.query("select field1 from test_long_text;") + assert(result.strip() == text_from_issue) + + long_text = "text" * 1000000 + cursor.execute("""insert into clickhouse.test_long_text (flen, field1) values (400000, '{}')""".format(long_text)); + result = node1.query("select field1 from test_long_text where flen=400000;") + assert(result.strip() == long_text) + diff --git a/tests/integration/test_optimize_on_insert/__init__.py b/tests/integration/test_optimize_on_insert/__init__.py new file mode 100644 index 00000000000..e5a0d9b4834 --- /dev/null +++ b/tests/integration/test_optimize_on_insert/__init__.py @@ -0,0 +1 @@ +#!/usr/bin/env python3 diff --git a/tests/integration/test_optimize_on_insert/test.py b/tests/integration/test_optimize_on_insert/test.py new file mode 100644 index 00000000000..da4e20edf0c --- /dev/null +++ b/tests/integration/test_optimize_on_insert/test.py @@ -0,0 +1,48 @@ +#!/usr/bin/env python3 + +import pytest +from helpers.client import QueryRuntimeException +from helpers.cluster import ClickHouseCluster + +cluster = ClickHouseCluster(__file__) +node1 = cluster.add_instance('node1', with_zookeeper=True) +node2 = cluster.add_instance('node2', with_zookeeper=True) + +@pytest.fixture(scope="module") +def start_cluster(): + try: + cluster.start() + + yield cluster + + finally: + cluster.shutdown() + + +def get_data_files_for_table(node, table_name): + raw_output = node.exec_in_container(["bash", "-c", "ls /var/lib/clickhouse/data/default/{}".format(table_name)]) + return raw_output.strip().split("\n") + +def test_empty_parts_optimize(start_cluster): + for n, node in enumerate([node1, node2]): + node.query(""" + CREATE TABLE empty (key UInt32, val UInt32, date Datetime) + ENGINE=ReplicatedSummingMergeTree('/clickhouse/01560_optimize_on_insert', '{}', val) + PARTITION BY date ORDER BY key; + """.format(n+1)) + + node1.query("INSERT INTO empty VALUES (1, 1, '2020-01-01'), (1, 1, '2020-01-01'), (1, -2, '2020-01-01')") + + node2.query("SYSTEM SYNC REPLICA empty", timeout=15) + + assert node1.query("SELECT * FROM empty") == "" + assert node2.query("SELECT * FROM empty") == "" + + # No other tmp files exists + assert set(get_data_files_for_table(node1, "empty")) == {"detached", "format_version.txt"} + assert set(get_data_files_for_table(node2, "empty")) == {"detached", "format_version.txt"} + + node1.query("INSERT INTO empty VALUES (1, 1, '2020-02-01'), (1, 1, '2020-02-01'), (1, -2, '2020-02-01')", settings={"insert_quorum": 2}) + + assert node1.query("SELECT * FROM empty") == "" + assert node2.query("SELECT * FROM empty") == "" diff --git a/tests/integration/test_polymorphic_parts/test.py b/tests/integration/test_polymorphic_parts/test.py index 50a8192fbc5..dc16bab0ca4 100644 --- a/tests/integration/test_polymorphic_parts/test.py +++ b/tests/integration/test_polymorphic_parts/test.py @@ -376,74 +376,6 @@ def test_in_memory(start_cluster): "Wide\t1\n") -def test_in_memory_wal(start_cluster): - # Merges are disabled in config - - for i in range(5): - insert_random_data('wal_table', node11, 50) - node12.query("SYSTEM SYNC REPLICA wal_table", timeout=20) - - def check(node, rows, parts): - node.query("SELECT count() FROM 
wal_table") == "{}\n".format(rows) - node.query( - "SELECT count() FROM system.parts WHERE table = 'wal_table' AND part_type = 'InMemory'") == "{}\n".format( - parts) - - check(node11, 250, 5) - check(node12, 250, 5) - - # WAL works at inserts - node11.restart_clickhouse(kill=True) - check(node11, 250, 5) - - # WAL works at fetches - node12.restart_clickhouse(kill=True) - check(node12, 250, 5) - - insert_random_data('wal_table', node11, 50) - node12.query("SYSTEM SYNC REPLICA wal_table", timeout=20) - - # Disable replication - with PartitionManager() as pm: - pm.partition_instances(node11, node12) - check(node11, 300, 6) - - wal_file = "/var/lib/clickhouse/data/default/wal_table/wal.bin" - # Corrupt wal file - # Truncate it to it's size minus 10 bytes - node11.exec_in_container(['bash', '-c', 'truncate --size="$(($(stat -c "%s" {}) - 10))" {}'.format(wal_file, wal_file)], - privileged=True, user='root') - node11.restart_clickhouse(kill=True) - - # Broken part is lost, but other restored successfully - check(node11, 250, 5) - # WAL with blocks from 0 to 4 - broken_wal_file = "/var/lib/clickhouse/data/default/wal_table/wal_0_4.bin" - # Check file exists - node11.exec_in_container(['bash', '-c', 'test -f {}'.format(broken_wal_file)]) - - # Fetch lost part from replica - node11.query("SYSTEM SYNC REPLICA wal_table", timeout=20) - check(node11, 300, 6) - - # Check that new data is written to new wal, but old is still exists for restoring - # Check file not empty - node11.exec_in_container(['bash', '-c', 'test -s {}'.format(wal_file)]) - # Check file exists - node11.exec_in_container(['bash', '-c', 'test -f {}'.format(broken_wal_file)]) - - # Data is lost without WAL - node11.query("ALTER TABLE wal_table MODIFY SETTING in_memory_parts_enable_wal = 0") - with PartitionManager() as pm: - pm.partition_instances(node11, node12) - - insert_random_data('wal_table', node11, 50) - check(node11, 350, 7) - - node11.restart_clickhouse(kill=True) - check(node11, 300, 6) - - def test_in_memory_wal_rotate(start_cluster): # Write every part to single wal node11.query("ALTER TABLE restore_table MODIFY SETTING write_ahead_log_max_bytes = 10") diff --git a/tests/integration/test_quota/test.py b/tests/integration/test_quota/test.py index 4374f46a39f..0f59ae27583 100644 --- a/tests/integration/test_quota/test.py +++ b/tests/integration/test_quota/test.py @@ -372,6 +372,7 @@ def test_dcl_management(): def test_users_xml_is_readonly(): assert re.search("storage is readonly", instance.query_and_get_error("DROP QUOTA myQuota")) + def test_query_inserts(): check_system_quotas([["myQuota", "e651da9c-a748-8703-061a-7e5e5096dae7", "users.xml", "['user_name']", [31556952], 0, "['default']", "[]"]]) @@ -380,9 +381,16 @@ def test_query_inserts(): system_quotas_usage( [["myQuota", "default", 1, 31556952, 0, 1000, 0, 500, 0, 500, 0, "\\N", 0, "\\N", 0, "\\N", 0, 1000, 0, "\\N", "\\N"]]) - instance.query("INSERT INTO test_table values(1)") + instance.query("DROP TABLE IF EXISTS test_table_ins") + instance.query("CREATE TABLE test_table_ins(x UInt32) ENGINE = MergeTree ORDER BY tuple()") system_quota_usage( - [["myQuota", "default", 31556952, 1, 1000, 0, 500, 1, 500, 0, "\\N", 0, "\\N", 0, "\\N", 0, 1000, 0, "\\N", "\\N"]]) + [["myQuota", "default", 31556952, 2, 1000, 0, 500, 0, 500, 0, "\\N", 0, "\\N", 0, "\\N", 0, 1000, 0, "\\N", "\\N"]]) + + instance.query("INSERT INTO test_table_ins values(1)") + system_quota_usage( + [["myQuota", "default", 31556952, 3, 1000, 0, 500, 1, 500, 0, "\\N", 0, "\\N", 0, "\\N", 0, 1000, 0, "\\N", 
"\\N"]]) + instance.query("DROP TABLE test_table_ins") + def test_consumption_of_show_tables(): assert instance.query("SHOW TABLES") == "test_table\n" diff --git a/tests/integration/test_redirect_url_storage/test.py b/tests/integration/test_redirect_url_storage/test.py index f2731794d43..d899d316abc 100644 --- a/tests/integration/test_redirect_url_storage/test.py +++ b/tests/integration/test_redirect_url_storage/test.py @@ -26,6 +26,32 @@ def test_url_without_redirect(started_cluster): assert node1.query("select * from WebHDFSStorage") == "1\tMark\t72.53\n" +def test_url_with_globs(started_cluster): + started_cluster.hdfs_api.write_data("/simple_storage_1_1", "1\n") + started_cluster.hdfs_api.write_data("/simple_storage_1_2", "2\n") + started_cluster.hdfs_api.write_data("/simple_storage_1_3", "3\n") + started_cluster.hdfs_api.write_data("/simple_storage_2_1", "4\n") + started_cluster.hdfs_api.write_data("/simple_storage_2_2", "5\n") + started_cluster.hdfs_api.write_data("/simple_storage_2_3", "6\n") + + result = node1.query( + "select * from url('http://hdfs1:50075/webhdfs/v1/simple_storage_{1..2}_{1..3}?op=OPEN&namenoderpcaddress=hdfs1:9000&offset=0', 'TSV', 'data String') as data order by data") + assert result == "1\n2\n3\n4\n5\n6\n" + + +def test_url_with_globs_and_failover(started_cluster): + started_cluster.hdfs_api.write_data("/simple_storage_1_1", "1\n") + started_cluster.hdfs_api.write_data("/simple_storage_1_2", "2\n") + started_cluster.hdfs_api.write_data("/simple_storage_1_3", "3\n") + started_cluster.hdfs_api.write_data("/simple_storage_3_1", "4\n") + started_cluster.hdfs_api.write_data("/simple_storage_3_2", "5\n") + started_cluster.hdfs_api.write_data("/simple_storage_3_3", "6\n") + + result = node1.query( + "select * from url('http://hdfs1:50075/webhdfs/v1/simple_storage_{0|1|2|3}_{1..3}?op=OPEN&namenoderpcaddress=hdfs1:9000&offset=0', 'TSV', 'data String') as data order by data") + assert result == "1\n2\n3\n" + + def test_url_with_redirect_not_allowed(started_cluster): started_cluster.hdfs_api.write_data("/simple_storage", "1\tMark\t72.53\n") assert started_cluster.hdfs_api.read_data("/simple_storage") == "1\tMark\t72.53\n" diff --git a/tests/queries/0_stateless/01709_inactive_parts_to_delay_throw.reference b/tests/integration/test_reload_clusters_config/__init__.py similarity index 100% rename from tests/queries/0_stateless/01709_inactive_parts_to_delay_throw.reference rename to tests/integration/test_reload_clusters_config/__init__.py diff --git a/tests/integration/test_reload_clusters_config/configs/remote_servers.xml b/tests/integration/test_reload_clusters_config/configs/remote_servers.xml new file mode 100644 index 00000000000..b827fce02be --- /dev/null +++ b/tests/integration/test_reload_clusters_config/configs/remote_servers.xml @@ -0,0 +1,30 @@ + + + + + true + + node_1 + 9000 + + + node_2 + 9000 + + + + + + true + + node_1 + 9000 + + + node_2 + 9000 + + + + + diff --git a/tests/integration/test_reload_clusters_config/test.py b/tests/integration/test_reload_clusters_config/test.py new file mode 100644 index 00000000000..f1fb0d820d4 --- /dev/null +++ b/tests/integration/test_reload_clusters_config/test.py @@ -0,0 +1,235 @@ +import os +import sys +import time + +import pytest + +sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) + +from helpers.cluster import ClickHouseCluster +from helpers.network import PartitionManager +from helpers.test_tools import TSV + +cluster = ClickHouseCluster(__file__) +node = cluster.add_instance('node', 
diff --git a/tests/integration/test_reload_clusters_config/test.py b/tests/integration/test_reload_clusters_config/test.py
new file mode 100644
index 00000000000..f1fb0d820d4
--- /dev/null
+++ b/tests/integration/test_reload_clusters_config/test.py
@@ -0,0 +1,235 @@
+import os
+import sys
+import time
+
+import pytest
+
+sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
+
+from helpers.cluster import ClickHouseCluster
+from helpers.network import PartitionManager
+from helpers.test_tools import TSV
+
+cluster = ClickHouseCluster(__file__)
+node = cluster.add_instance('node', with_zookeeper=True, main_configs=['configs/remote_servers.xml'])
+node_1 = cluster.add_instance('node_1', with_zookeeper=True)
+node_2 = cluster.add_instance('node_2', with_zookeeper=True)
+
+@pytest.fixture(scope="module")
+def started_cluster():
+    try:
+        cluster.start()
+
+        node.query('''CREATE TABLE distributed (id UInt32) ENGINE =
+            Distributed('test_cluster', 'default', 'replicated')''')
+
+        node.query('''CREATE TABLE distributed2 (id UInt32) ENGINE =
+            Distributed('test_cluster2', 'default', 'replicated')''')
+
+        cluster.pause_container('node_1')
+        cluster.pause_container('node_2')
+
+        yield cluster
+
+    finally:
+        cluster.shutdown()
+
+
+base_config = '''
+<yandex>
+    <remote_servers>
+        <test_cluster>
+            <shard>
+                <internal_replication>true</internal_replication>
+                <replica>
+                    <host>node_1</host>
+                    <port>9000</port>
+                </replica>
+                <replica>
+                    <host>node_2</host>
+                    <port>9000</port>
+                </replica>
+            </shard>
+        </test_cluster>
+        <test_cluster2>
+            <shard>
+                <internal_replication>true</internal_replication>
+                <replica>
+                    <host>node_1</host>
+                    <port>9000</port>
+                </replica>
+                <replica>
+                    <host>node_2</host>
+                    <port>9000</port>
+                </replica>
+            </shard>
+        </test_cluster2>
+    </remote_servers>
+</yandex>
+'''
+
+test_config1 = '''
+<yandex>
+    <remote_servers>
+        <test_cluster>
+            <shard>
+                <internal_replication>true</internal_replication>
+                <replica>
+                    <host>node_1</host>
+                    <port>9000</port>
+                </replica>
+            </shard>
+        </test_cluster>
+        <test_cluster2>
+            <shard>
+                <internal_replication>true</internal_replication>
+                <replica>
+                    <host>node_1</host>
+                    <port>9000</port>
+                </replica>
+                <replica>
+                    <host>node_2</host>
+                    <port>9000</port>
+                </replica>
+            </shard>
+        </test_cluster2>
+    </remote_servers>
+</yandex>
+'''
+
+test_config2 = '''
+<yandex>
+    <remote_servers>
+        <test_cluster>
+            <shard>
+                <internal_replication>true</internal_replication>
+                <replica>
+                    <host>node_1</host>
+                    <port>9000</port>
+                </replica>
+                <replica>
+                    <host>node_2</host>
+                    <port>9000</port>
+                </replica>
+            </shard>
+        </test_cluster>
+    </remote_servers>
+</yandex>
+'''
+
+test_config3 = '''
+<yandex>
+    <remote_servers>
+        <test_cluster>
+            <shard>
+                <internal_replication>true</internal_replication>
+                <replica>
+                    <host>node_1</host>
+                    <port>9000</port>
+                </replica>
+                <replica>
+                    <host>node_2</host>
+                    <port>9000</port>
+                </replica>
+            </shard>
+        </test_cluster>
+        <test_cluster2>
+            <shard>
+                <internal_replication>true</internal_replication>
+                <replica>
+                    <host>node_1</host>
+                    <port>9000</port>
+                </replica>
+                <replica>
+                    <host>node_2</host>
+                    <port>9000</port>
+                </replica>
+            </shard>
+        </test_cluster2>
+        <test_cluster3>
+            <shard>
+                <internal_replication>true</internal_replication>
+                <replica>
+                    <host>node_1</host>
+                    <port>9000</port>
+                </replica>
+            </shard>
+        </test_cluster3>
+    </remote_servers>
+</yandex>
+'''
+
+
+def send_repeated_query(table, count=5):
+    for i in range(count):
+        node.query_and_get_error("SELECT count() FROM {} SETTINGS receive_timeout=1".format(table))
+
+
+def get_errors_count(cluster, host_name="node_1"):
+    return int(node.query("SELECT errors_count FROM system.clusters WHERE cluster='{}' and host_name='{}'".format(cluster, host_name)))
+
+
+def set_config(config):
+    node.replace_config("/etc/clickhouse-server/config.d/remote_servers.xml", config)
+    node.query("SYSTEM RELOAD CONFIG")
+
+
+def test_simple_reload(started_cluster):
+    send_repeated_query("distributed")
+
+    assert get_errors_count("test_cluster") > 0
+
+    node.query("SYSTEM RELOAD CONFIG")
+
+    assert get_errors_count("test_cluster") > 0
+
+
+def test_update_one_cluster(started_cluster):
+    send_repeated_query("distributed")
+    send_repeated_query("distributed2")
+
+    assert get_errors_count("test_cluster") > 0
+    assert get_errors_count("test_cluster2") > 0
+
+    set_config(test_config1)
+
+    assert get_errors_count("test_cluster") == 0
+    assert get_errors_count("test_cluster2") > 0
+
+    set_config(base_config)
+
+
+def test_delete_cluster(started_cluster):
+    send_repeated_query("distributed")
+    send_repeated_query("distributed2")
+
+    assert get_errors_count("test_cluster") > 0
+    assert get_errors_count("test_cluster2") > 0
+
+    set_config(test_config2)
+
+    assert get_errors_count("test_cluster") > 0
+
+    result = node.query("SELECT * FROM system.clusters WHERE cluster='test_cluster2'")
+    assert result == ''
+
+    set_config(base_config)
+
+
+def test_add_cluster(started_cluster):
+    send_repeated_query("distributed")
+    send_repeated_query("distributed2")
+
+    assert get_errors_count("test_cluster") > 0
+    assert get_errors_count("test_cluster2") > 0
+
+    set_config(test_config3)
+
+    assert get_errors_count("test_cluster") > 0
+    assert get_errors_count("test_cluster2") > 0
+
+    result = node.query("SELECT * FROM system.clusters WHERE cluster='test_cluster3'")
+    assert result != ''
+
+    set_config(base_config)
+
diff --git a/tests/integration/test_replace_partition/test.py b/tests/integration/test_replace_partition/test.py
index 06e7f4be82b..d30a038825f 100644
--- a/tests/integration/test_replace_partition/test.py
+++ b/tests/integration/test_replace_partition/test.py
@@ -1,3 +1,8 @@
+# pylint: disable=line-too-long
+# pylint: disable=unused-argument
+# pylint: disable=redefined-outer-name
+
+import time
 import pytest
 
 from helpers.cluster import ClickHouseCluster
@@ -13,13 +18,13 @@ def _fill_nodes(nodes, shard):
         node.query(
             '''
                 CREATE DATABASE test;
-    
+
                 CREATE TABLE real_table(date Date, id UInt32, dummy UInt32)
                 ENGINE = MergeTree(date, id, 8192);
-    
+
                 CREATE TABLE other_table(date Date, id UInt32, dummy UInt32)
                 ENGINE = MergeTree(date, id, 8192);
-    
+
                 CREATE TABLE test_table(date Date, id UInt32, dummy UInt32)
                 ENGINE = ReplicatedMergeTree('/clickhouse/tables/test{shard}/replicated', '{replica}', date, id, 8192);
             '''.format(shard=shard, replica=node.name))
@@ -97,12 +102,13 @@ def test_drop_failover(drop_failover):
 
     # Drop partition on source node
     node3.query("ALTER TABLE test_table DROP PARTITION 201706")
 
-    # connection restored
+    # Wait a few seconds for connection to zookeeper to be restored
+    time.sleep(5)
 
-    node4.query_with_retry("select last_exception from system.replication_queue where type = 'REPLACE_RANGE'",
-                           check_callback=lambda x: 'Not found part' not in x, sleep_time=1)
-    assert 'Not found part' not in node4.query(
-        "select last_exception from system.replication_queue where type = 'REPLACE_RANGE'")
+    msg = node4.query_with_retry(
+        "select last_exception from system.replication_queue where type = 'REPLACE_RANGE'",
+        check_callback=lambda x: 'Not found part' not in x, sleep_time=1)
+    assert 'Not found part' not in msg
     assert_eq_with_retry(node4, "SELECT id FROM test_table order by id", '')
@@ -151,8 +157,11 @@ def test_replace_after_replace_failover(replace_after_replace_failover):
 
     assert_eq_with_retry(node5, "SELECT id FROM test_table order by id", '333')
 
-    node6.query_with_retry("select last_exception from system.replication_queue where type = 'REPLACE_RANGE'",
-                           check_callback=lambda x: 'Not found part' not in x, sleep_time=1)
-    assert 'Not found part' not in node6.query(
-        "select last_exception from system.replication_queue where type = 'REPLACE_RANGE'")
+    # Wait a few seconds for connection to zookeeper to be restored
+    time.sleep(5)
+
+    msg = node6.query_with_retry(
+        "select last_exception from system.replication_queue where type = 'REPLACE_RANGE'",
+        check_callback=lambda x: 'Not found part' not in x, sleep_time=1)
+    assert 'Not found part' not in msg
     assert_eq_with_retry(node6, "SELECT id FROM test_table order by id", '333')
diff --git a/tests/integration/test_replicated_database/configs/config.xml b/tests/integration/test_replicated_database/configs/config.xml
index ebceee3aa5c..d751454437c 100644
--- a/tests/integration/test_replicated_database/configs/config.xml
+++ b/tests/integration/test_replicated_database/configs/config.xml
@@ -1,34 +1,3 @@
 <yandex>
     <database_atomic_delay_before_drop_table_sec>10</database_atomic_delay_before_drop_table_sec>
-
-    <remote_servers>
-        <cluster>
-            <shard>
-                <internal_replication>true</internal_replication>
-                <replica>
-                    <host>main_node</host>
-                    <port>9000</port>
-                </replica>
-                <replica>
-                    <host>dummy_node</host>
-                    <port>9000</port>
-                </replica>
-                <replica>
-                    <host>competing_node</host>
-                    <port>9000</port>
-                </replica>
-            </shard>
-            <shard>
-                <internal_replication>true</internal_replication>
-                <replica>
-                    <host>snapshotting_node</host>
-                    <port>9000</port>
-                </replica>
-                <replica>
-                    <host>snapshot_recovering_node</host>
-                    <port>9000</port>
-                </replica>
-            </shard>
-        </cluster>
-    </remote_servers>
 </yandex>
diff --git a/tests/integration/test_replicated_database/configs/settings.xml b/tests/integration/test_replicated_database/configs/settings.xml
index e0f7e8691e6..7f45502e20d 100644
--- a/tests/integration/test_replicated_database/configs/settings.xml
+++ b/tests/integration/test_replicated_database/configs/settings.xml
@@ -2,6 +2,7 @@
     <profiles>
         <default>
             <allow_experimental_database_replicated>1</allow_experimental_database_replicated>
+            <allow_experimental_alter_materialized_view_structure>1</allow_experimental_alter_materialized_view_structure>
         </default>
     </profiles>
 </yandex>
diff --git a/tests/integration/test_replicated_database/test.py b/tests/integration/test_replicated_database/test.py
index 99e7d6077f8..70d779ea737 100644
--- a/tests/integration/test_replicated_database/test.py
+++ b/tests/integration/test_replicated_database/test.py
@@ -35,8 +35,17 @@ def
test_create_replicated_table(started_cluster): + assert "Explicit zookeeper_path and replica_name are specified" in \ + main_node.query_and_get_error("CREATE TABLE testdb.replicated_table (d Date, k UInt64, i32 Int32) " + "ENGINE=ReplicatedMergeTree('/test/tmp', 'r') ORDER BY k PARTITION BY toYYYYMM(d);") + + assert "Explicit zookeeper_path and replica_name are specified" in \ + main_node.query_and_get_error("CREATE TABLE testdb.replicated_table (d Date, k UInt64, i32 Int32) " + "ENGINE=ReplicatedMergeTree('/test/tmp', 'r', d, k, 8192);") + assert "Old syntax is not allowed" in \ - main_node.query_and_get_error("CREATE TABLE testdb.replicated_table (d Date, k UInt64, i32 Int32) ENGINE=ReplicatedMergeTree('/test/tmp', 'r', d, k, 8192);") + main_node.query_and_get_error("CREATE TABLE testdb.replicated_table (d Date, k UInt64, i32 Int32) " + "ENGINE=ReplicatedMergeTree('/test/tmp/{shard}', '{replica}', d, k, 8192);") main_node.query("CREATE TABLE testdb.replicated_table (d Date, k UInt64, i32 Int32) ENGINE=ReplicatedMergeTree ORDER BY k PARTITION BY toYYYYMM(d);") @@ -99,16 +108,20 @@ def test_alters_from_different_replicas(started_cluster): "(CounterID UInt32, StartDate Date, UserID UInt32, VisitID UInt32, NestedColumn Nested(A UInt8, S String), ToDrop UInt32) " "ENGINE = MergeTree(StartDate, intHash32(UserID), (CounterID, StartDate, intHash32(UserID), VisitID), 8192);") - main_node.query("CREATE TABLE testdb.dist AS testdb.concurrent_test ENGINE = Distributed(cluster, testdb, concurrent_test, CounterID)") + main_node.query("CREATE TABLE testdb.dist AS testdb.concurrent_test ENGINE = Distributed(testdb, testdb, concurrent_test, CounterID)") dummy_node.stop_clickhouse(kill=True) - settings = {"distributed_ddl_task_timeout": 10} + settings = {"distributed_ddl_task_timeout": 5} assert "There are 1 unfinished hosts (0 of them are currently active)" in \ competing_node.query_and_get_error("ALTER TABLE testdb.concurrent_test ADD COLUMN Added0 UInt32;", settings=settings) + settings = {"distributed_ddl_task_timeout": 5, "distributed_ddl_output_mode": "null_status_on_timeout"} + assert "shard1|replica2\t\\N\t\\N" in \ + main_node.query("ALTER TABLE testdb.concurrent_test ADD COLUMN Added2 UInt32;", settings=settings) + settings = {"distributed_ddl_task_timeout": 5, "distributed_ddl_output_mode": "never_throw"} + assert "shard1|replica2\t\\N\t\\N" in \ + competing_node.query("ALTER TABLE testdb.concurrent_test ADD COLUMN Added1 UInt32 AFTER Added0;", settings=settings) dummy_node.start_clickhouse() - main_node.query("ALTER TABLE testdb.concurrent_test ADD COLUMN Added2 UInt32;") - competing_node.query("ALTER TABLE testdb.concurrent_test ADD COLUMN Added1 UInt32 AFTER Added0;") main_node.query("ALTER TABLE testdb.concurrent_test ADD COLUMN AddedNested1 Nested(A UInt32, B UInt64) AFTER Added2;") competing_node.query("ALTER TABLE testdb.concurrent_test ADD COLUMN AddedNested1.C Array(String) AFTER AddedNested1.B;") main_node.query("ALTER TABLE testdb.concurrent_test ADD COLUMN AddedNested2 Nested(A UInt32, B UInt64) AFTER AddedNested1;") @@ -198,8 +211,14 @@ def test_recover_staled_replica(started_cluster): dummy_node.query("CREATE TABLE recover.rmt2 (n int) ENGINE=ReplicatedMergeTree order by n", settings=settings) main_node.query("CREATE TABLE recover.rmt3 (n int) ENGINE=ReplicatedMergeTree order by n", settings=settings) dummy_node.query("CREATE TABLE recover.rmt5 (n int) ENGINE=ReplicatedMergeTree order by n", settings=settings) - main_node.query("CREATE DICTIONARY recover.d1 (n int DEFAULT 0, m int 
DEFAULT 1) PRIMARY KEY n SOURCE(CLICKHOUSE(HOST 'localhost' PORT 9000 USER 'default' TABLE 'rmt1' PASSWORD '' DB 'recover')) LIFETIME(MIN 1 MAX 10) LAYOUT(FLAT())") - dummy_node.query("CREATE DICTIONARY recover.d2 (n int DEFAULT 0, m int DEFAULT 1) PRIMARY KEY n SOURCE(CLICKHOUSE(HOST 'localhost' PORT 9000 USER 'default' TABLE 'rmt2' PASSWORD '' DB 'recover')) LIFETIME(MIN 1 MAX 10) LAYOUT(FLAT())") + main_node.query("CREATE MATERIALIZED VIEW recover.mv1 (n int) ENGINE=ReplicatedMergeTree order by n AS SELECT n FROM recover.rmt1", settings=settings) + dummy_node.query("CREATE MATERIALIZED VIEW recover.mv2 (n int) ENGINE=ReplicatedMergeTree order by n AS SELECT n FROM recover.rmt2", settings=settings) + main_node.query("CREATE DICTIONARY recover.d1 (n int DEFAULT 0, m int DEFAULT 1) PRIMARY KEY n " + "SOURCE(CLICKHOUSE(HOST 'localhost' PORT 9000 USER 'default' TABLE 'rmt1' PASSWORD '' DB 'recover')) " + "LIFETIME(MIN 1 MAX 10) LAYOUT(FLAT())") + dummy_node.query("CREATE DICTIONARY recover.d2 (n int DEFAULT 0, m int DEFAULT 1) PRIMARY KEY n " + "SOURCE(CLICKHOUSE(HOST 'localhost' PORT 9000 USER 'default' TABLE 'rmt2' PASSWORD '' DB 'recover')) " + "LIFETIME(MIN 1 MAX 10) LAYOUT(FLAT())") for table in ['t1', 't2', 'mt1', 'mt2', 'rmt1', 'rmt2', 'rmt3', 'rmt5']: main_node.query("INSERT INTO recover.{} VALUES (42)".format(table)) @@ -217,35 +236,44 @@ def test_recover_staled_replica(started_cluster): main_node.query("RENAME TABLE recover.rmt3 TO recover.rmt4", settings=settings) main_node.query("DROP TABLE recover.rmt5", settings=settings) main_node.query("DROP DICTIONARY recover.d2", settings=settings) - main_node.query("CREATE DICTIONARY recover.d2 (n int DEFAULT 0, m int DEFAULT 1) PRIMARY KEY n SOURCE(CLICKHOUSE(HOST 'localhost' PORT 9000 USER 'default' TABLE 'rmt1' PASSWORD '' DB 'recover')) LIFETIME(MIN 1 MAX 10) LAYOUT(FLAT());", settings=settings) + main_node.query("CREATE DICTIONARY recover.d2 (n int DEFAULT 0, m int DEFAULT 1) PRIMARY KEY n " + "SOURCE(CLICKHOUSE(HOST 'localhost' PORT 9000 USER 'default' TABLE 'rmt1' PASSWORD '' DB 'recover')) " + "LIFETIME(MIN 1 MAX 10) LAYOUT(FLAT());", settings=settings) + + inner_table = ".inner_id." 
+ dummy_node.query("SELECT uuid FROM system.tables WHERE database='recover' AND name='mv1'").strip() + main_node.query("ALTER TABLE recover.`{}` MODIFY COLUMN n int DEFAULT 42".format(inner_table), settings=settings) + main_node.query("ALTER TABLE recover.mv1 MODIFY QUERY SELECT m FROM recover.rmt1".format(inner_table), settings=settings) + main_node.query("RENAME TABLE recover.mv2 TO recover.mv3".format(inner_table), settings=settings) main_node.query("CREATE TABLE recover.tmp AS recover.m1", settings=settings) main_node.query("DROP TABLE recover.tmp", settings=settings) main_node.query("CREATE TABLE recover.tmp AS recover.m1", settings=settings) main_node.query("DROP TABLE recover.tmp", settings=settings) main_node.query("CREATE TABLE recover.tmp AS recover.m1", settings=settings) - main_node.query("DROP TABLE recover.tmp", settings=settings) - main_node.query("CREATE TABLE recover.tmp AS recover.m1", settings=settings) - assert main_node.query("SELECT name FROM system.tables WHERE database='recover' ORDER BY name") == "d1\nd2\nm1\nmt1\nmt2\nrmt1\nrmt2\nrmt4\nt2\ntmp\n" - query = "SELECT name, uuid, create_table_query FROM system.tables WHERE database='recover' ORDER BY name" + assert main_node.query("SELECT name FROM system.tables WHERE database='recover' AND name NOT LIKE '.inner_id.%' ORDER BY name") == \ + "d1\nd2\nm1\nmt1\nmt2\nmv1\nmv3\nrmt1\nrmt2\nrmt4\nt2\ntmp\n" + query = "SELECT name, uuid, create_table_query FROM system.tables WHERE database='recover' AND name NOT LIKE '.inner_id.%' " \ + "ORDER BY name SETTINGS show_table_uuid_in_table_create_query_if_not_nil=1" expected = main_node.query(query) assert_eq_with_retry(dummy_node, query, expected) + assert main_node.query("SELECT count() FROM system.tables WHERE database='recover' AND name LIKE '.inner_id.%'") == "2\n" + assert dummy_node.query("SELECT count() FROM system.tables WHERE database='recover' AND name LIKE '.inner_id.%'") == "2\n" - for table in ['m1', 't2', 'mt1', 'mt2', 'rmt1', 'rmt2', 'rmt4', 'd1', 'd2']: + for table in ['m1', 't2', 'mt1', 'mt2', 'rmt1', 'rmt2', 'rmt4', 'd1', 'd2', 'mv1', 'mv3']: assert main_node.query("SELECT (*,).1 FROM recover.{}".format(table)) == "42\n" - for table in ['t2', 'rmt1', 'rmt2', 'rmt4', 'd1', 'd2', 'mt2']: + for table in ['t2', 'rmt1', 'rmt2', 'rmt4', 'd1', 'd2', 'mt2', 'mv1', 'mv3']: assert dummy_node.query("SELECT (*,).1 FROM recover.{}".format(table)) == "42\n" for table in ['m1', 'mt1']: assert dummy_node.query("SELECT count() FROM recover.{}".format(table)) == "0\n" assert dummy_node.query("SELECT count() FROM system.tables WHERE database='recover_broken_tables'") == "2\n" - table = dummy_node.query("SHOW TABLES FROM recover_broken_tables LIKE 'mt1_26_%'").strip() + table = dummy_node.query("SHOW TABLES FROM recover_broken_tables LIKE 'mt1_29_%'").strip() assert dummy_node.query("SELECT (*,).1 FROM recover_broken_tables.{}".format(table)) == "42\n" - table = dummy_node.query("SHOW TABLES FROM recover_broken_tables LIKE 'rmt5_26_%'").strip() + table = dummy_node.query("SHOW TABLES FROM recover_broken_tables LIKE 'rmt5_29_%'").strip() assert dummy_node.query("SELECT (*,).1 FROM recover_broken_tables.{}".format(table)) == "42\n" - expected = "Cleaned 4 outdated objects: dropped 1 dictionaries and 1 tables, moved 2 tables" + expected = "Cleaned 6 outdated objects: dropped 1 dictionaries and 3 tables, moved 2 tables" assert_logs_contain(dummy_node, expected) dummy_node.query("DROP TABLE recover.tmp") diff --git a/tests/integration/test_replicated_fetches_timeouts/__init__.py 
diff --git a/tests/integration/test_replicated_fetches_timeouts/__init__.py b/tests/integration/test_replicated_fetches_timeouts/__init__.py
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/tests/integration/test_replicated_fetches_timeouts/configs/server.xml b/tests/integration/test_replicated_fetches_timeouts/configs/server.xml
new file mode 100644
index 00000000000..d4b441b91fb
--- /dev/null
+++ b/tests/integration/test_replicated_fetches_timeouts/configs/server.xml
@@ -0,0 +1,3 @@
+<yandex>
+    <merge_tree><replicated_fetches_http_connection_timeout>0.1</replicated_fetches_http_connection_timeout></merge_tree>
+</yandex>
diff --git a/tests/integration/test_replicated_fetches_timeouts/test.py b/tests/integration/test_replicated_fetches_timeouts/test.py
new file mode 100644
index 00000000000..963ec2487fd
--- /dev/null
+++ b/tests/integration/test_replicated_fetches_timeouts/test.py
@@ -0,0 +1,95 @@
+#!/usr/bin/env python3
+
+import random
+import string
+import time
+
+import pytest
+from helpers.cluster import ClickHouseCluster
+from helpers.network import PartitionManager
+
+cluster = ClickHouseCluster(__file__)
+node1 = cluster.add_instance(
+    'node1', with_zookeeper=True,
+    main_configs=['configs/server.xml'])
+
+node2 = cluster.add_instance(
+    'node2', with_zookeeper=True,
+    main_configs=['configs/server.xml'])
+
+
+@pytest.fixture(scope="module")
+def started_cluster():
+    try:
+        cluster.start()
+
+        yield cluster
+
+    finally:
+        cluster.shutdown()
+
+
+def get_random_string(length):
+    return ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(length))
+
+
+def test_no_stall(started_cluster):
+    for instance in started_cluster.instances.values():
+        instance.query("""
+            CREATE TABLE t (key UInt64, data String)
+            ENGINE = ReplicatedMergeTree('/clickhouse/test/t', '{instance}')
+            ORDER BY tuple()
+            PARTITION BY key""")
+
+    # Pause fetches on node2 until the test setup is prepared.
+    node2.query("SYSTEM STOP FETCHES t")
+
+    node1.query("INSERT INTO t SELECT 1, '{}' FROM numbers(500)".format(get_random_string(104857)))
+    node1.query("INSERT INTO t SELECT 2, '{}' FROM numbers(500)".format(get_random_string(104857)))
+
+    with PartitionManager() as pm:
+        pm.add_network_delay(node1, 2000)
+        node2.query("SYSTEM START FETCHES t")
+
+        # Wait for timeout exceptions to confirm that timeout is triggered.
+        while True:
+            conn_timeout_exceptions = int(node2.query(
+                """
+                SELECT count()
+                FROM system.replication_queue
+                WHERE last_exception LIKE '%connect timed out%'
+                """))
+
+            if conn_timeout_exceptions >= 2:
+                break
+
+            time.sleep(0.1)
+
+        print("Connection timeouts tested!")
+
+        # Increase connection timeout and wait for receive timeouts.
+        node2.query("""
+            ALTER TABLE t
+            MODIFY SETTING replicated_fetches_http_connection_timeout = 30,
+                           replicated_fetches_http_receive_timeout = 1""")
+
+        while True:
+            timeout_exceptions = int(node2.query(
+                """
+                SELECT count()
+                FROM system.replication_queue
+                WHERE last_exception LIKE '%e.displayText() = Timeout%'
+                    AND last_exception NOT LIKE '%connect timed out%'
+                """).strip())
+
+            if timeout_exceptions >= 2:
+                break
+
+            time.sleep(0.1)
+
+    for instance in started_cluster.instances.values():
+        # Workaround for DROP TABLE not finishing if it is started while table is readonly.
+        instance.query("SYSTEM RESTART REPLICA t")
+
+        # Cleanup data directory from test results archive.
+        instance.query("DROP TABLE t SYNC")
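(Editor's aside, an illustrative sketch rather than part of the patch: the two busy-wait loops in test_no_stall share one shape and could be expressed as a generic poller over system.replication_queue:)

def wait_for_queue_exceptions(node, like, not_like=None, min_count=2):
    query = ("SELECT count() FROM system.replication_queue "
             "WHERE last_exception LIKE '{}'".format(like))
    if not_like:
        query += " AND last_exception NOT LIKE '{}'".format(not_like)
    while int(node.query(query)) < min_count:
        time.sleep(0.1)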
+ instance.query("DROP TABLE t SYNC") diff --git a/tests/integration/test_replicated_merge_tree_s3/configs/config.d/storage_conf.xml b/tests/integration/test_replicated_merge_tree_s3/configs/config.d/storage_conf.xml index 20b750ffff3..1f75a4efeae 100644 --- a/tests/integration/test_replicated_merge_tree_s3/configs/config.d/storage_conf.xml +++ b/tests/integration/test_replicated_merge_tree_s3/configs/config.d/storage_conf.xml @@ -21,6 +21,7 @@ 0 + 0 diff --git a/tests/integration/test_replicated_merge_tree_s3_zero_copy/__init__.py b/tests/integration/test_replicated_merge_tree_s3_zero_copy/__init__.py new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/integration/test_replicated_merge_tree_s3_zero_copy/configs/config.d/storage_conf.xml b/tests/integration/test_replicated_merge_tree_s3_zero_copy/configs/config.d/storage_conf.xml new file mode 100644 index 00000000000..d8c7f49fc49 --- /dev/null +++ b/tests/integration/test_replicated_merge_tree_s3_zero_copy/configs/config.d/storage_conf.xml @@ -0,0 +1,50 @@ + + + + + s3 + http://minio1:9001/root/data/ + minio + minio123 + + + + + +
+ s3 +
+
+
+
+
+ + + 0 + 1 + + + + + + + node1 + 9000 + + + node2 + 9000 + + + node3 + 9000 + + + + + + + 0 + + +
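(Not part of the diff: the new test below checks S3 zero-copy replication by counting the objects that end up in MinIO. A minimal sketch of that accounting, assuming the `cluster.minio_client` (a `minio.Minio` instance) and `cluster.minio_bucket` attributes exposed by the integration-test harness; `count_s3_objects` is a hypothetical helper, not part of the test:)

    def count_s3_objects(cluster, prefix='data/'):
        # Part files are written under the 'data/' prefix of the test bucket.
        client = cluster.minio_client
        return len(list(client.list_objects(cluster.minio_bucket, prefix, recursive=True)))

    # For three replicas and three inserted parts the test below expects
    #     count_s3_objects(cluster) == 3 * FILES_OVERHEAD + 3 * files_per_part
    # i.e. each part is stored in S3 once regardless of replica count, plus service-file overhead.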
diff --git a/tests/integration/test_replicated_merge_tree_s3_zero_copy/test.py b/tests/integration/test_replicated_merge_tree_s3_zero_copy/test.py
new file mode 100644
index 00000000000..793abc53566
--- /dev/null
+++ b/tests/integration/test_replicated_merge_tree_s3_zero_copy/test.py
@@ -0,0 +1,105 @@
+import logging
+import random
+import string
+
+import pytest
+from helpers.cluster import ClickHouseCluster
+
+logging.getLogger().setLevel(logging.INFO)
+logging.getLogger().addHandler(logging.StreamHandler())
+
+
+@pytest.fixture(scope="module")
+def cluster():
+    try:
+        cluster = ClickHouseCluster(__file__)
+
+        cluster.add_instance("node1", main_configs=["configs/config.d/storage_conf.xml"], macros={'replica': '1'},
+                             with_minio=True, with_zookeeper=True)
+        cluster.add_instance("node2", main_configs=["configs/config.d/storage_conf.xml"], macros={'replica': '2'},
+                             with_zookeeper=True)
+        cluster.add_instance("node3", main_configs=["configs/config.d/storage_conf.xml"], macros={'replica': '3'},
+                             with_zookeeper=True)
+
+        logging.info("Starting cluster...")
+        cluster.start()
+        logging.info("Cluster started")
+
+        yield cluster
+    finally:
+        cluster.shutdown()
+
+
+FILES_OVERHEAD = 1
+FILES_OVERHEAD_PER_COLUMN = 2  # Data and mark files
+FILES_OVERHEAD_PER_PART_WIDE = FILES_OVERHEAD_PER_COLUMN * 3 + 2 + 6 + 1
+FILES_OVERHEAD_PER_PART_COMPACT = 10 + 1
+
+
+def random_string(length):
+    letters = string.ascii_letters
+    return ''.join(random.choice(letters) for i in range(length))
+
+
+def generate_values(date_str, count, sign=1):
+    data = [[date_str, sign * (i + 1), random_string(10)] for i in range(count)]
+    data.sort(key=lambda tup: tup[1])
+    return ",".join(["('{}',{},'{}')".format(x, y, z) for x, y, z in data])
+
+
+def create_table(cluster, additional_settings=None):
+    create_table_statement = """
+        CREATE TABLE s3_test ON CLUSTER cluster(
+            dt Date,
+            id Int64,
+            data String,
+            INDEX min_max (id) TYPE minmax GRANULARITY 3
+        ) ENGINE=ReplicatedMergeTree()
+        PARTITION BY dt
+        ORDER BY (dt, id)
+        SETTINGS storage_policy='s3'
+        """
+    if additional_settings:
+        create_table_statement += ","
+        create_table_statement += additional_settings
+
+    list(cluster.instances.values())[0].query(create_table_statement)
+
+
+@pytest.fixture(autouse=True)
+def drop_table(cluster):
+    yield
+    for node in list(cluster.instances.values()):
+        node.query("DROP TABLE IF EXISTS s3_test")
+
+    minio = cluster.minio_client
+    # Remove leftover objects so that a failure here does not cascade into other tests.
+    for obj in list(minio.list_objects(cluster.minio_bucket, 'data/')):
+        minio.remove_object(cluster.minio_bucket, obj.object_name)
+
+@pytest.mark.parametrize(
+    "min_rows_for_wide_part,files_per_part",
+    [
+        (0, FILES_OVERHEAD_PER_PART_WIDE),
+        (8192, FILES_OVERHEAD_PER_PART_COMPACT)
+    ]
+)
+def test_insert_select_replicated(cluster, min_rows_for_wide_part, files_per_part):
+    create_table(cluster, additional_settings="min_rows_for_wide_part={}".format(min_rows_for_wide_part))
+
+    all_values = ""
+    for node_idx in range(1, 4):
+        node = cluster.instances["node" + str(node_idx)]
+        values = generate_values("2020-01-0" + str(node_idx), 4096)
+        node.query("INSERT INTO s3_test VALUES {}".format(values), settings={"insert_quorum": 3})
+        if node_idx != 1:
+            all_values += ","
+        all_values += values
+
+    for node_idx in range(1, 4):
+        node = cluster.instances["node" + str(node_idx)]
+        assert node.query("SELECT * FROM s3_test order by dt, id FORMAT Values",
+                          settings={"select_sequential_consistency": 1}) == all_values
+
+    minio = cluster.minio_client
+    assert
len(list(minio.list_objects(cluster.minio_bucket, 'data/'))) == (3 * FILES_OVERHEAD) + (files_per_part * 3) diff --git a/tests/integration/test_replication_credentials/test.py b/tests/integration/test_replication_credentials/test.py index 4f07d6966a6..9181c515adf 100644 --- a/tests/integration/test_replication_credentials/test.py +++ b/tests/integration/test_replication_credentials/test.py @@ -9,7 +9,6 @@ def _fill_nodes(nodes, shard): node.query( ''' CREATE DATABASE test; - CREATE TABLE test_table(date Date, id UInt32, dummy UInt32) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test{shard}/replicated', '{replica}', date, id, 8192); '''.format(shard=shard, replica=node.name)) @@ -114,6 +113,32 @@ def test_different_credentials(different_credentials_cluster): assert node5.query("SELECT id FROM test_table order by id") == '111\n' assert node6.query("SELECT id FROM test_table order by id") == '222\n' + add_old = """ + + 9009 + + admin + 222 + + root + 111 + + + aaa + 333 + + + + """ + + node5.replace_config("/etc/clickhouse-server/config.d/credentials1.xml", add_old) + + node5.query("SYSTEM RELOAD CONFIG") + node5.query("INSERT INTO test_table values('2017-06-21', 333, 1)") + node6.query("SYSTEM SYNC REPLICA test_table", timeout=10) + + assert node6.query("SELECT id FROM test_table order by id") == '111\n222\n333\n' + node7 = cluster.add_instance('node7', main_configs=['configs/remote_servers.xml', 'configs/credentials1.xml'], with_zookeeper=True) @@ -146,3 +171,23 @@ def test_credentials_and_no_credentials(credentials_and_no_credentials_cluster): assert node7.query("SELECT id FROM test_table order by id") == '111\n' assert node8.query("SELECT id FROM test_table order by id") == '222\n' + + allow_empty = """ + + 9009 + + admin + 222 + true + + + """ + + # change state: Flip node7 to mixed auth/non-auth (allow node8) + node7.replace_config("/etc/clickhouse-server/config.d/credentials1.xml", + allow_empty) + + node7.query("SYSTEM RELOAD CONFIG") + node7.query("insert into test_table values ('2017-06-22', 333, 1)") + node8.query("SYSTEM SYNC REPLICA test_table", timeout=10) + assert node8.query("SELECT id FROM test_table order by id") == '111\n222\n333\n' diff --git a/tests/integration/test_row_policy/test.py b/tests/integration/test_row_policy/test.py index ffb6dcb0588..76aef325bfa 100644 --- a/tests/integration/test_row_policy/test.py +++ b/tests/integration/test_row_policy/test.py @@ -409,32 +409,31 @@ def test_tags_with_db_and_table_names(): def test_miscellaneous_engines(): - copy_policy_xml('normal_filters.xml') + node.query("CREATE ROW POLICY OR REPLACE pC ON mydb.other_table FOR SELECT USING a = 1 TO default") + assert node.query("SHOW ROW POLICIES ON mydb.other_table") == "pC\n" # ReplicatedMergeTree - node.query("DROP TABLE mydb.filtered_table1") - node.query( - "CREATE TABLE mydb.filtered_table1 (a UInt8, b UInt8) ENGINE ReplicatedMergeTree('/clickhouse/tables/00-00/filtered_table1', 'replica1') ORDER BY a") - node.query("INSERT INTO mydb.filtered_table1 values (0, 0), (0, 1), (1, 0), (1, 1)") - assert node.query("SELECT * FROM mydb.filtered_table1") == TSV([[1, 0], [1, 1]]) + node.query("DROP TABLE IF EXISTS mydb.other_table") + node.query("CREATE TABLE mydb.other_table (a UInt8, b UInt8) ENGINE ReplicatedMergeTree('/clickhouse/tables/00-00/filtered_table1', 'replica1') ORDER BY a") + node.query("INSERT INTO mydb.other_table values (0, 0), (0, 1), (1, 0), (1, 1)") + assert node.query("SELECT * FROM mydb.other_table") == TSV([[1, 0], [1, 1]]) # CollapsingMergeTree - 
node.query("DROP TABLE mydb.filtered_table1") - node.query("CREATE TABLE mydb.filtered_table1 (a UInt8, b Int8) ENGINE CollapsingMergeTree(b) ORDER BY a") - node.query("INSERT INTO mydb.filtered_table1 values (0, 1), (0, 1), (1, 1), (1, 1)") - assert node.query("SELECT * FROM mydb.filtered_table1") == TSV([[1, 1], [1, 1]]) + node.query("DROP TABLE mydb.other_table") + node.query("CREATE TABLE mydb.other_table (a UInt8, b Int8) ENGINE CollapsingMergeTree(b) ORDER BY a") + node.query("INSERT INTO mydb.other_table values (0, 1), (0, 1), (1, 1), (1, 1)") + assert node.query("SELECT * FROM mydb.other_table") == TSV([[1, 1], [1, 1]]) # ReplicatedCollapsingMergeTree - node.query("DROP TABLE mydb.filtered_table1") - node.query( - "CREATE TABLE mydb.filtered_table1 (a UInt8, b Int8) ENGINE ReplicatedCollapsingMergeTree('/clickhouse/tables/00-01/filtered_table1', 'replica1', b) ORDER BY a") - node.query("INSERT INTO mydb.filtered_table1 values (0, 1), (0, 1), (1, 1), (1, 1)") - assert node.query("SELECT * FROM mydb.filtered_table1") == TSV([[1, 1], [1, 1]]) + node.query("DROP TABLE mydb.other_table") + node.query("CREATE TABLE mydb.other_table (a UInt8, b Int8) ENGINE ReplicatedCollapsingMergeTree('/clickhouse/tables/00-01/filtered_table1', 'replica1', b) ORDER BY a") + node.query("INSERT INTO mydb.other_table values (0, 1), (0, 1), (1, 1), (1, 1)") + assert node.query("SELECT * FROM mydb.other_table") == TSV([[1, 1], [1, 1]]) + + node.query("DROP ROW POLICY pC ON mydb.other_table") # DistributedMergeTree - node.query("DROP TABLE IF EXISTS mydb.not_filtered_table") - node.query( - "CREATE TABLE mydb.not_filtered_table (a UInt8, b UInt8) ENGINE Distributed('test_local_cluster', mydb, local)") - assert node.query("SELECT * FROM mydb.not_filtered_table", user="another") == TSV([[1, 0], [1, 1], [1, 0], [1, 1]]) - assert node.query("SELECT sum(a), b FROM mydb.not_filtered_table GROUP BY b ORDER BY b", user="another") == TSV( - [[2, 0], [2, 1]]) + node.query("DROP TABLE IF EXISTS mydb.other_table") + node.query("CREATE TABLE mydb.other_table (a UInt8, b UInt8) ENGINE Distributed('test_local_cluster', mydb, local)") + assert node.query("SELECT * FROM mydb.other_table", user="another") == TSV([[1, 0], [1, 1], [1, 0], [1, 1]]) + assert node.query("SELECT sum(a), b FROM mydb.other_table GROUP BY b ORDER BY b", user="another") == TSV([[2, 0], [2, 1]]) diff --git a/tests/integration/test_s3_cluster/__init__.py b/tests/integration/test_s3_cluster/__init__.py new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/integration/test_s3_cluster/configs/cluster.xml b/tests/integration/test_s3_cluster/configs/cluster.xml new file mode 100644 index 00000000000..8334ace15eb --- /dev/null +++ b/tests/integration/test_s3_cluster/configs/cluster.xml @@ -0,0 +1,24 @@ + + + + + + + s0_0_0 + 9000 + + + s0_0_1 + 9000 + + + + + s0_1_0 + 9000 + + + + + + \ No newline at end of file diff --git a/tests/integration/test_s3_cluster/data/clickhouse/part1.csv b/tests/integration/test_s3_cluster/data/clickhouse/part1.csv new file mode 100644 index 00000000000..a44d3ca1ffb --- /dev/null +++ b/tests/integration/test_s3_cluster/data/clickhouse/part1.csv @@ -0,0 +1,10 @@ 
+"fSRH",976027584,"[[(-1.5346513608456012e-204,-2.867937504545497e266),(3.1627675144114637e-231,-2.20343471241604e-54),(-1.866886218651809e-89,-7.695893036366416e100),(8.196307577166986e-169,-8.203793887684096e-263),(-1.6150328830402252e-215,8.531116551449711e-296),(4.3378407855931477e92,1.1313645428723989e117),(-4.238081208165573e137,-8.969951719788361e67)],[(-3.409639554701108e169,-7.277093176871153e-254),(1.1466207153308928e-226,3.429893348531029e96),(6.451302850199177e-189,-7.52379443153242e125),(-1.7132078539493614e-127,-2.3177814806867505e241),(1.4996520594989919e-257,4.271017883966942e128)],[(65460976657479156000,1.7055814145588595e253),(-1.921491101580189e154,3.2912740465446566e-286),(0.0008437955075350972,-5.143493717005472e-107),(8.637208599142187e-150,7.825076274945548e136),(1.8077733932468565e-159,5.51061479974026e-77),(1.300406236793709e-260,10669142.497111017),(-1.731981751951159e91,-1.270795062098902e102)],[(3.336706342781395e-7,-1.1919528866481513e266)]]" +"sX6>",733011552,"[[(-3.737863336077909e-44,3.066510481088993e-161),(-1.0047259170558555e-31,8.066145272086467e-274)],[(1.2261835328136691e-58,-6.154561379350395e258),(8.26019994651558e35,-6.736984599062694e-19),(-1.4143671344485664e-238,-1.220003479858045e203),(2.466089772925698e-207,1.0025476904532926e-242),(-6.3786667153054354e240,-7.010834902137467e-103),(-6.766918514324285e-263,7.404639608483947e188),(2.753493977757937e126,-4.089565842001999e-152)],[(4.339873790493155e239,-5.022554811588342e24),(-1.7712390083519473e-66,1.3290563068463308e112),(3.3648764781548893e233,1.1123394188044336e112),(-5.415278137684864e195,5.590597851016202e-270),(-2.1032310903543943e99,-2.2335799924679948e-184)]]" +"",2396526460,"[[(1.9925796792641788e-261,1.647618305107044e158),(3.014593666207223e-222,-9.016473078578002e-20),(-1.5307802021477097e-230,-7.867078587209265e-243),(-7.330317098800564e295,1.7496539408601967e-281)],[(2.2816938730052074e98,-3.3089122320442997e-136),(-4.930983789361344e-263,-6.526758521792829e59),(-2.6482873886835413e34,-4.1985691142515947e83),(1.5496810029349365e238,-4.790553105593492e71),(-7.597436233325566e83,-1.3791763752378415e137),(-1.917321980700588e-307,-1.5913257477581824e62)]]" +"=@ep",3618088392,"[[(-2.2303235811290024e-306,8.64070367587338e-13),(-7.403012423264767e-129,-1.0825508572345856e-147),(-3.6080301450167e286,1.7302718548299961e285),(-1.3839239794870825e-156,4.255424291564323e107),(2.3191305762555e-33,-2.873899421579949e-145),(7.237414513124649e-159,-4.926574547865783e178),(4.251831312243431e-199,1.2164714479391436e201)],[(-5.114074387943793e242,2.0119340496886292e295),(-3.3663670765548e-262,-6.1992631068472835e221),(1.1539386993255106e-261,1.582903697171063e-33),(-6.1914577817088e118,-1.0401495621681123e145)],[],[(-5.9815907467493136e82,4.369047439375412e219),(-4.485368440431237e89,-3.633023372434946e-59),(-2.087497331251707e-180,1.0524018118646965e257)],[(-1.2636503461000215e-228,-4.8426877075223456e204),(2.74943107551342e281,-7.453097760262003e-14)]]" +"",3467776823,"[]" +"b'zQ",484159052,"[[(3.041838095219909e276,-6.956822159518612e-87)],[(6.636906358770296e-97,1.0531865724169307e-214)],[(-8.429249069245283e-243,-2.134779842898037e243)],[(-0.4657586598569572,2.799768548127799e187),(-5.961335445789657e-129,2.560331789344886e293),(-3.139409694983184e45,2.8011384557268085e-47)]]" +"6xGw",4126129912,"[]" 
+"Q",3109335413,"[[(-2.8435266267772945e39,9.548278488724291e26),(-1.1682790407223344e46,-3.925561182768867e-266),(2.8381633655721614e-202,-3.472921303086527e40),(3.3968328275944204e-150,-2.2188876184777275e-69),(-1.2612795000783405e-88,-1.2942793285205966e-49),(1.3678466236967012e179,1.721664680964459e97),(-1.1020844667744628e198,-3.403142062758506e-47)],[],[(1.343149099058239e-279,9.397894929770352e-132),(-5.280854317597215e250,9.862550191577643e-292),(-7.11468799151533e-58,7.510011657942604e96),(1.183774454157175e-288,-1.5697197095936546e272),(-3.727289017361602e120,2.422831380775067e-107),(1.4345094301262986e-177,2.4990983297605437e-91)],[(9.195226893854516e169,6.546374357272709e-236),(2.320311199531441e-126,2.2257031285964243e-185),(3.351868475505779e-184,1.84394695526876e88)],[(1.6290814396647987e-112,-3.589542711073253e38),(4.0060174859833907e-261,-1.9900431208726192e-296),(2.047468933030435e56,8.483912759156179e-57),(3.1165727272872075e191,-1.5487136748040008e-156),(0.43564020198461034,4.618165048931035e-244),(-7.674951896752824e-214,1.1652522629091777e-105),(4.838653901829244e-89,5.3085904574780206e169)],[(1.8286703553352283e-246,2.0403170465657044e255),(2.040810692623279e267,4.3956975402250484e-8),(2.4101343663018673e131,-8.672394158504762e167),(3.092080945239809e-219,-3.775474693770226e293),(-1.527991241079512e-15,-1.2603969180963007e226),(9.17470637459212e-56,1.6021090930395906e-133),(7.877647227721046e58,3.2592118033868903e-108)],[(1.4334765313272463e170,2.6971234798957105e-50)]]" +"^ip",1015254922,"[[(-2.227414144223298e-63,1.2391785738638914e276),(1.2668491759136862e207,2.5656762953078853e-67),(2.385410876813441e-268,1.451107969531624e25),(-5.475956161647574e131,2239495689376746),(1.5591286361054593e180,3.672868971445151e117)]]" +"5N]",1720727300,"[[(-2.0670321228319122e-258,-2.6893477429616666e-32),(-2.2424105705209414e225,3.547832127050775e25),(4.452916756606404e-121,-3.71114618421911e156),(-1.966961937965055e-110,3.1217044497868816e227),(20636923519704216,1.3500210618276638e30),(3.3195926701816527e-276,1.5557140338374535e234)],[]]" diff --git a/tests/integration/test_s3_cluster/data/clickhouse/part123.csv b/tests/integration/test_s3_cluster/data/clickhouse/part123.csv new file mode 100644 index 00000000000..1ca3353b741 --- /dev/null +++ b/tests/integration/test_s3_cluster/data/clickhouse/part123.csv @@ -0,0 +1,3 @@ +"b'zQ",2960084897,"[[(3.014593666207223e-222,-7.277093176871153e-254),(-1.5307802021477097e-230,3.429893348531029e96),(-7.330317098800564e295,-7.52379443153242e125),(2.2816938730052074e98,-2.3177814806867505e241),(-4.930983789361344e-263,4.271017883966942e128)],[(-2.6482873886835413e34,1.7055814145588595e253),(1.5496810029349365e238,3.2912740465446566e-286),(-7.597436233325566e83,-5.143493717005472e-107),(-1.917321980700588e-307,7.825076274945548e136)],[(-2.2303235811290024e-306,5.51061479974026e-77),(-7.403012423264767e-129,10669142.497111017),(-3.6080301450167e286,-1.270795062098902e102),(-1.3839239794870825e-156,-1.1919528866481513e266),(2.3191305762555e-33,3.066510481088993e-161),(7.237414513124649e-159,8.066145272086467e-274)],[(4.251831312243431e-199,-6.154561379350395e258),(-5.114074387943793e242,-6.736984599062694e-19),(-3.3663670765548e-262,-1.220003479858045e203),(1.1539386993255106e-261,1.0025476904532926e-242),(-6.1914577817088e118,-7.010834902137467e-103),(-5.9815907467493136e82,7.404639608483947e188),(-4.485368440431237e89,-4.089565842001999e-152)]]" 
+"6xGw",2107128550,"[[(-2.087497331251707e-180,-5.022554811588342e24),(-1.2636503461000215e-228,1.3290563068463308e112),(2.74943107551342e281,1.1123394188044336e112),(3.041838095219909e276,5.590597851016202e-270)],[],[(6.636906358770296e-97,-2.2335799924679948e-184),(-8.429249069245283e-243,1.647618305107044e158),(-0.4657586598569572,-9.016473078578002e-20)]]" +"Q",2713167232,"[[(-5.961335445789657e-129,-7.867078587209265e-243),(-3.139409694983184e45,1.7496539408601967e-281)],[(-2.8435266267772945e39,-3.3089122320442997e-136)]]" diff --git a/tests/integration/test_s3_cluster/data/database/part2.csv b/tests/integration/test_s3_cluster/data/database/part2.csv new file mode 100644 index 00000000000..572676e47c6 --- /dev/null +++ b/tests/integration/test_s3_cluster/data/database/part2.csv @@ -0,0 +1,5 @@ +"~m`",820408404,"[]" +"~E",3621610983,"[[(1.183772215004139e-238,-1.282774073199881e211),(1.6787305112393978e-46,7.500499989257719e25),(-2.458759475104641e-260,3.1724599388651864e-171),(-2.0163203163062471e118,-4.677226438945462e-162),(-5.52491070012707e-135,7.051780441780731e-236)]]" +"~1",1715555780,"[[(-6.847404226505131e-267,5.939552045362479e-272),(8.02275075985457e-160,8.369250185716419e-104),(-1.193940928527857e-258,-1.132580458849774e39)],[(1.1866087552639048e253,3.104988412734545e57),(-3.37278669639914e84,-2.387628643569968e287),(-2.452136349495753e73,3.194309776006896e-204),(-1001997440265471100,3.482122851077378e-182)],[],[(-5.754682082202988e-20,6.598766936241908e156)],[(8.386764833095757e300,1.2049637765877942e229),(3.136243074210055e53,5.764669663844127e-100),(-4.190632347661851e195,-5.053553379163823e302),(2.0805194731736336e-19,-1.0849036699112485e-271),(1.1292361211411365e227,-8.767824448179629e229),(-3.6938137156625264e-19,-5.387931698392423e109),(-1.2240482125885677e189,-1.5631467861525635e-103)],[(-2.3917431782202442e138,7.817228281030191e-242),(-1.1462343232899826e279,-1.971215065504208e-225),(5.4316119855340265e-62,3.761081156597423e-60),(8.111852137718836e306,8.115485489580134e-208)],[]]" +"~%]",1606443384,"[[]]" +"}or",726681547,"[]" \ No newline at end of file diff --git a/tests/integration/test_s3_cluster/data/database/partition675.csv b/tests/integration/test_s3_cluster/data/database/partition675.csv new file mode 100644 index 00000000000..e8496680368 --- /dev/null +++ b/tests/integration/test_s3_cluster/data/database/partition675.csv @@ -0,0 +1,7 @@ +"kvUES",4281162618,"[[(2.4538308454074088e303,1.2209370543175666e178),(1.4564007891121754e-186,2.340773478952682e-273),(-1.01791181533976e165,-3.9617466227377253e248)]]" +"Gu",4280623186,"[[(-1.623487579335014e38,-1.0633405021023563e225),(-4.373688812751571e180,2.5511550357717127e138)]]" +"J_u1",4277430503,"[[(2.981826196369429e-294,-6.059236590410922e236),(8.502045137575854e-296,3.0210403188125657e-91),(-9.370591842861745e175,4.150870185764185e129),(1.011801592194125e275,-9.236010982686472e266),(-3.1830638196303316e277,2.417706446545472e-105),(-1.4369143023804266e-201,4.7529126795899655e238)],[(-2.118789593804697e186,-1.8760231612433755e-280),(2.5982563179976053e200,-1.4683025762313524e-40)],[(-1.873397623255704e-240,1.4363190147949886e-283),(-1.5760337746177136e153,1.5272278536086246e-34),(-8.117473317695919e155,2.4375370926733504e150),(-1.179230972881795e99,1.7693459774706515e-259),(2.2102106250558424e-40,4.734162675762768e-56),(6.058833110550111e-8,8.892471775821198e164),(-1.8208740799996599e59,6.446958261080721e178)]]" 
+"s:\",4265055390,"[[(-3.291651377214531e-167,3.9198636942402856e185),(2.4897781692770126e176,2.579309759138358e188),(4.653945381397663e205,3.216314556208208e158),(-5.3373279440714224e-39,2.404386813826413e212),(-1.4217294382527138e307,8.874978978402512e-173)],[(8.527603121149904e-58,-5.0520795335878225e88),(-0.00022870878520550814,-3.2334214176860943e-68),(-6.97683613433404e304,-2.1573757788072144e-82),(-1.1394163455875937e36,-3.817990182461824e271),(2.4099027412881423e-209,8.542179392011098e-156),(3.2610511540394803e174,1.1692631657517616e-20)],[(3.625474290538107e261,-5.359205062039837e-193),(-3.574126569378072e-112,-5.421804160994412e265),(-4.873653931207849e-76,3219678918284.317),(-7.030770825898911e-57,1.4647389742249787e-274),(-4.4882439220492357e-203,6.569338333730439e-38)],[(-2.2418056002374865e-136,5.113251922954469e-16),(2.5156744571032497e297,-3.0536957683846124e-192)],[(1.861112291954516e306,-1.8160882143331256e129),(1.982573454900027e290,-2.451412311394593e170)],[(-2.8292230178712157e-18,1.2570198161962067e216),(6.24832495972797e-164,-2.0770908334330718e-273)],[(980143647.1858811,1.2738714961511727e106),(6.516450532397311e-184,4.088688742052062e31),(-2.246311532913914e269,-7.418103885850518e-179),(1.2222973942835046e-289,2.750544834553288e-46),(9.503169349701076e159,-1.355457053256579e215)]]" +":hzO",4263726959,"[[(-2.553206398375626e-90,1.6536977728640226e199),(1.5630078027143848e-36,2.805242683101373e-211),(2.2573933085983554e-92,3.450501333524858e292),(-1.215900901292646e-275,-3.860558658606121e272),(6.65716072773856e-145,2.5359010031217893e217)],[(-1.3308039625779135e308,1.7464622720773261e258),(-3.2986890093446374e179,3.9038871583175653e-69),(-4.3594764087383885e-95,4.229921973278908e-123),(-5.455694205415656e137,3.597894902167716e108),(1.2480860990110662e-29,-1.4873488392480292e-185),(7.563210285835444e55,-5624068447.488605)],[(3.9517937289943195e181,-3.2799189227094424e-68),(8.906762198487649e-167,3.952452177941537e-159)]]" +"a",4258301804,"[[(5.827965576703262e-281,2.2523852665173977e90)],[(-6.837604072282348e-97,8.125864241406046e-61)],[(-2.3047912084435663e53,-8.814499720685194e36),(1.2072558137199047e-79,1.2096862541827071e142),(2.2000026293774143e275,-3.2571689055108606e-199),(1.1822278574921316e134,2.9571188365006754e-86),(1.0448954272555034e-169,1.2182183489600953e-60)],[(-3.1366540817730525e89,9.327128058982966e-306),(6.588968210928936e73,-11533531378.938957),(-2.6715943555840563e44,-4.557428011172859e224),(-3.8334913754415923e285,-4.748721454106074e-173),(-1.6912052107425128e275,-4.789382438422238e-219),(1.8538365229016863e151,-3.5698172075468775e-37)],[(-2.1963131282037294e49,-5.53604352524995e-296)],[(-8.834414834987965e167,1.3186354307320576e247),(2.109209547987338e298,1.2191009105107557e-32),(-3.896880410603213e-92,-3.4589588698231044e-121),(-3.252529090888335e138,-7.862741341454407e204)],[(-9.673078095447289e-207,8.839303128607278e123),(2.6043620378793597e-244,-6.898328199987363e-308),(-2.5921142292355475e-54,1.0352159149517285e-143)]]" 
+"S+",4257734123,"[[(1.5714269203495863e245,-15651321.549208183),(-3.7292056272445236e-254,-4.556927533596056e-234),(-3.0309414401442555e-203,-3.84393827531526e-12)],[(1.7718777510571518e219,3.972086323144777e139),(1.5723805735454373e-67,-3.805243648123396e226),(154531069271292800000,1.1384408025183933e-285),(-2.009892367470994e-247,2.0325742976832167e81)],[(1.2145787097670788e55,-5.0579298233321666e-30),(5.05577441452021e-182,-2.968914705509665e-175),(-1.702335524921919e67,-2.852552828587631e-226),(-2.7664498327826963e-99,-1.2967072085088717e-305),(7.68881162387673e-68,-1.2506915095983359e-142),(-7.60308693295946e-40,5.414853590549086e218)],[(8.595602987813848e226,-3.9708286611967497e-206),(-5.80352787694746e-52,5.610493934761672e236),(2.1336999375861025e217,-5.431988994371099e-154),(-6.2758614367782974e29,-8.359901046980544e-55)],[(1.6910790690897504e54,9.798739710823911e197),(-6.530270107036228e-284,8.758552462406328e-302),(2.931625032390877e-118,2.8793800873550273e83),(-3.293986884112906e-88,11877326093331202),(0.0008071321465157103,1.0720860516457485e-298)]]" diff --git a/tests/integration/test_s3_cluster/test.py b/tests/integration/test_s3_cluster/test.py new file mode 100644 index 00000000000..f60e6e6862f --- /dev/null +++ b/tests/integration/test_s3_cluster/test.py @@ -0,0 +1,129 @@ +import logging +import os + +import pytest +from helpers.cluster import ClickHouseCluster +from helpers.test_tools import TSV + +logging.getLogger().setLevel(logging.INFO) +logging.getLogger().addHandler(logging.StreamHandler()) + +SCRIPT_DIR = os.path.dirname(os.path.realpath(__file__)) +S3_DATA = ['data/clickhouse/part1.csv', 'data/clickhouse/part123.csv', 'data/database/part2.csv', 'data/database/partition675.csv'] + +def create_buckets_s3(cluster): + minio = cluster.minio_client + for file in S3_DATA: + minio.fput_object(bucket_name=cluster.minio_bucket, object_name=file, file_path=os.path.join(SCRIPT_DIR, file)) + for obj in minio.list_objects(cluster.minio_bucket, recursive=True): + print(obj.object_name) + + +@pytest.fixture(scope="module") +def started_cluster(): + try: + cluster = ClickHouseCluster(__file__) + cluster.add_instance('s0_0_0', main_configs=["configs/cluster.xml"], with_minio=True) + cluster.add_instance('s0_0_1', main_configs=["configs/cluster.xml"]) + cluster.add_instance('s0_1_0', main_configs=["configs/cluster.xml"]) + + logging.info("Starting cluster...") + cluster.start() + logging.info("Cluster started") + + create_buckets_s3(cluster) + + yield cluster + finally: + cluster.shutdown() + + +def test_select_all(started_cluster): + node = started_cluster.instances['s0_0_0'] + pure_s3 = node.query(""" + SELECT * from s3( + 'http://minio1:9001/root/data/{clickhouse,database}/*', + 'minio', 'minio123', 'CSV', + 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') + ORDER BY (name, value, polygon)""") + # print(pure_s3) + s3_distibuted = node.query(""" + SELECT * from s3Cluster( + 'cluster_simple', + 'http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', + 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') ORDER BY (name, value, polygon)""") + # print(s3_distibuted) + + assert TSV(pure_s3) == TSV(s3_distibuted) + + +def test_count(started_cluster): + node = started_cluster.instances['s0_0_0'] + pure_s3 = node.query(""" + SELECT count(*) from s3( + 'http://minio1:9001/root/data/{clickhouse,database}/*', + 'minio', 'minio123', 'CSV', + 'name String, value UInt32, polygon Array(Array(Tuple(Float64, 
Float64)))')""") + # print(pure_s3) + s3_distibuted = node.query(""" + SELECT count(*) from s3Cluster( + 'cluster_simple', 'http://minio1:9001/root/data/{clickhouse,database}/*', + 'minio', 'minio123', 'CSV', + 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))')""") + # print(s3_distibuted) + + assert TSV(pure_s3) == TSV(s3_distibuted) + + +def test_union_all(started_cluster): + node = started_cluster.instances['s0_0_0'] + pure_s3 = node.query(""" + SELECT * FROM + ( + SELECT * from s3( + 'http://minio1:9001/root/data/{clickhouse,database}/*', + 'minio', 'minio123', 'CSV', + 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') + UNION ALL + SELECT * from s3( + 'http://minio1:9001/root/data/{clickhouse,database}/*', + 'minio', 'minio123', 'CSV', + 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') + ) + ORDER BY (name, value, polygon) + """) + # print(pure_s3) + s3_distibuted = node.query(""" + SELECT * FROM + ( + SELECT * from s3Cluster( + 'cluster_simple', + 'http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', + 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') + UNION ALL + SELECT * from s3Cluster( + 'cluster_simple', + 'http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', + 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') + ) + ORDER BY (name, value, polygon) + """) + # print(s3_distibuted) + + assert TSV(pure_s3) == TSV(s3_distibuted) + + +def test_wrong_cluster(started_cluster): + node = started_cluster.instances['s0_0_0'] + error = node.query_and_get_error(""" + SELECT count(*) from s3Cluster( + 'non_existent_cluster', + 'http://minio1:9001/root/data/{clickhouse,database}/*', + 'minio', 'minio123', 'CSV', 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') + UNION ALL + SELECT count(*) from s3Cluster( + 'non_existent_cluster', + 'http://minio1:9001/root/data/{clickhouse,database}/*', + 'minio', 'minio123', 'CSV', 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))')""") + + assert "not found" in error \ No newline at end of file diff --git a/tests/integration/test_s3_zero_copy_replication/__init__.py b/tests/integration/test_s3_zero_copy_replication/__init__.py new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/integration/test_s3_zero_copy_replication/configs/config.d/s3.xml b/tests/integration/test_s3_zero_copy_replication/configs/config.d/s3.xml new file mode 100644 index 00000000000..ec28840054a --- /dev/null +++ b/tests/integration/test_s3_zero_copy_replication/configs/config.d/s3.xml @@ -0,0 +1,61 @@ + + + + + + s3 + http://minio1:9001/root/data/ + minio + minio123 + + + + + +
+ s31 +
+
+
+ + +
+ default +
+ + s31 + +
+ 0.0 +
+
+
+ + + 0 + 1 + 1 + + + + + + + node1 + 9000 + + + + + node2 + 9000 + + + + + + + test_cluster + + +
diff --git a/tests/integration/test_s3_zero_copy_replication/test.py b/tests/integration/test_s3_zero_copy_replication/test.py new file mode 100644 index 00000000000..f7078d55c33 --- /dev/null +++ b/tests/integration/test_s3_zero_copy_replication/test.py @@ -0,0 +1,137 @@ +import logging +import time + +import pytest +from helpers.cluster import ClickHouseCluster + +logging.getLogger().setLevel(logging.INFO) +logging.getLogger().addHandler(logging.StreamHandler()) + + +@pytest.fixture(scope="module") +def cluster(): + try: + cluster = ClickHouseCluster(__file__) + cluster.add_instance("node1", main_configs=["configs/config.d/s3.xml"], macros={'replica': '1'}, + with_minio=True, + with_zookeeper=True) + cluster.add_instance("node2", main_configs=["configs/config.d/s3.xml"], macros={'replica': '2'}, + with_minio=True, + with_zookeeper=True) + logging.info("Starting cluster...") + cluster.start() + logging.info("Cluster started") + + yield cluster + finally: + cluster.shutdown() + + +def get_large_objects_count(cluster, size=100): + minio = cluster.minio_client + counter = 0 + for obj in minio.list_objects(cluster.minio_bucket, 'data/'): + if obj.size >= size: + counter = counter + 1 + return counter + + +def wait_for_large_objects_count(cluster, expected, size=100, timeout=30): + while timeout > 0: + if get_large_objects_count(cluster, size) == expected: + return + timeout -= 1 + time.sleep(1) + assert get_large_objects_count(cluster, size) == expected + + +@pytest.mark.parametrize( + "policy", ["s3"] +) +def test_s3_zero_copy_replication(cluster, policy): + node1 = cluster.instances["node1"] + node2 = cluster.instances["node2"] + + node1.query( + """ + CREATE TABLE s3_test ON CLUSTER test_cluster (id UInt32, value String) + ENGINE=ReplicatedMergeTree('/clickhouse/tables/s3_test', '{}') + ORDER BY id + SETTINGS storage_policy='{}' + """ + .format('{replica}', policy) + ) + + node1.query("INSERT INTO s3_test VALUES (0,'data'),(1,'data')") + time.sleep(1) + assert node1.query("SELECT * FROM s3_test order by id FORMAT Values") == "(0,'data'),(1,'data')" + assert node2.query("SELECT * FROM s3_test order by id FORMAT Values") == "(0,'data'),(1,'data')" + + # Based on version 20.x - should be only one file with size 100+ (checksums.txt), used by both nodes + assert get_large_objects_count(cluster) == 1 + + node2.query("INSERT INTO s3_test VALUES (2,'data'),(3,'data')") + time.sleep(1) + assert node2.query("SELECT * FROM s3_test order by id FORMAT Values") == "(0,'data'),(1,'data'),(2,'data'),(3,'data')" + assert node1.query("SELECT * FROM s3_test order by id FORMAT Values") == "(0,'data'),(1,'data'),(2,'data'),(3,'data')" + + # Based on version 20.x - two parts + wait_for_large_objects_count(cluster, 2) + + node1.query("OPTIMIZE TABLE s3_test") + + # Based on version 20.x - after merge, two old parts and one merged + wait_for_large_objects_count(cluster, 3) + + # Based on version 20.x - after cleanup - only one merged part + wait_for_large_objects_count(cluster, 1, timeout=60) + + node1.query("DROP TABLE IF EXISTS s3_test NO DELAY") + node2.query("DROP TABLE IF EXISTS s3_test NO DELAY") + + +def test_s3_zero_copy_on_hybrid_storage(cluster): + node1 = cluster.instances["node1"] + node2 = cluster.instances["node2"] + + node1.query( + """ + CREATE TABLE hybrid_test ON CLUSTER test_cluster (id UInt32, value String) + ENGINE=ReplicatedMergeTree('/clickhouse/tables/hybrid_test', '{}') + ORDER BY id + SETTINGS storage_policy='hybrid' + """ + .format('{replica}') + ) + + node1.query("INSERT INTO 
hybrid_test VALUES (0,'data'),(1,'data')")
+
+    time.sleep(1)
+
+    assert node1.query("SELECT * FROM hybrid_test ORDER BY id FORMAT Values") == "(0,'data'),(1,'data')"
+    assert node2.query("SELECT * FROM hybrid_test ORDER BY id FORMAT Values") == "(0,'data'),(1,'data')"
+
+    assert node1.query("SELECT partition_id,disk_name FROM system.parts WHERE table='hybrid_test' FORMAT Values") == "('all','default')"
+    assert node2.query("SELECT partition_id,disk_name FROM system.parts WHERE table='hybrid_test' FORMAT Values") == "('all','default')"
+
+    node1.query("ALTER TABLE hybrid_test MOVE PARTITION ID 'all' TO DISK 's31'")
+
+    assert node1.query("SELECT partition_id,disk_name FROM system.parts WHERE table='hybrid_test' FORMAT Values") == "('all','s31')"
+    assert node2.query("SELECT partition_id,disk_name FROM system.parts WHERE table='hybrid_test' FORMAT Values") == "('all','default')"
+
+    # Total number of objects in S3 before the second move (size threshold 0 counts everything).
+    s3_objects = get_large_objects_count(cluster, 0)
+
+    node2.query("ALTER TABLE hybrid_test MOVE PARTITION ID 'all' TO DISK 's31'")
+
+    assert node1.query("SELECT partition_id,disk_name FROM system.parts WHERE table='hybrid_test' FORMAT Values") == "('all','s31')"
+    assert node2.query("SELECT partition_id,disk_name FROM system.parts WHERE table='hybrid_test' FORMAT Values") == "('all','s31')"
+
+    # Check that moving the partition on node2 did not create any new objects in S3.
+    wait_for_large_objects_count(cluster, s3_objects, size=0)
+
+    assert node1.query("SELECT * FROM hybrid_test ORDER BY id FORMAT Values") == "(0,'data'),(1,'data')"
+    assert node2.query("SELECT * FROM hybrid_test ORDER BY id FORMAT Values") == "(0,'data'),(1,'data')"
+
+    node1.query("DROP TABLE IF EXISTS hybrid_test NO DELAY")
+    node2.query("DROP TABLE IF EXISTS hybrid_test NO DELAY")
diff --git a/tests/integration/test_secure_socket/__init__.py b/tests/integration/test_secure_socket/__init__.py
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/tests/integration/test_secure_socket/configs_secure/config.d/remote_servers.xml b/tests/integration/test_secure_socket/configs_secure/config.d/remote_servers.xml
new file mode 100644
index 00000000000..0c109d6d768
--- /dev/null
+++ b/tests/integration/test_secure_socket/configs_secure/config.d/remote_servers.xml
@@ -0,0 +1,14 @@
+<yandex>
+    <tcp_port_secure>9440</tcp_port_secure>
+    <remote_servers>
+        <test_cluster>
+            <shard>
+                <replica>
+                    <host>node2</host>
+                    <port>9440</port>
+                    <secure>1</secure>
+                </replica>
+            </shard>
+        </test_cluster>
+    </remote_servers>
+</yandex>
diff --git a/tests/integration/test_secure_socket/configs_secure/config.d/ssl_conf.xml b/tests/integration/test_secure_socket/configs_secure/config.d/ssl_conf.xml
new file mode 100644
index 00000000000..fe39e3712b8
--- /dev/null
+++ b/tests/integration/test_secure_socket/configs_secure/config.d/ssl_conf.xml
@@ -0,0 +1,18 @@
+<yandex>
+    <openSSL>
+        <server>
+            <certificateFile>/etc/clickhouse-server/config.d/server.crt</certificateFile>
+            <privateKeyFile>/etc/clickhouse-server/config.d/server.key</privateKeyFile>
+            <dhParamsFile>/etc/clickhouse-server/config.d/dhparam.pem</dhParamsFile>
+            <verificationMode>none</verificationMode>
+            <loadDefaultCAFile>true</loadDefaultCAFile>
+        </server>
+        <client>
+            <loadDefaultCAFile>true</loadDefaultCAFile>
+            <verificationMode>none</verificationMode>
+            <invalidCertificateHandler>
+                <name>AcceptCertificateHandler</name>
+            </invalidCertificateHandler>
+        </client>
+    </openSSL>
+</yandex>
diff --git a/tests/integration/test_secure_socket/configs_secure/dhparam.pem b/tests/integration/test_secure_socket/configs_secure/dhparam.pem
new file mode 100644
index 00000000000..2e6cee0798d
--- /dev/null
+++ b/tests/integration/test_secure_socket/configs_secure/dhparam.pem
@@ -0,0 +1,8 @@
+-----BEGIN DH PARAMETERS-----
+MIIBCAKCAQEAua92DDli13gJ+//ZXyGaggjIuidqB0crXfhUlsrBk9BV1hH3i7fR
+XGP9rUdk2ubnB3k2ejBStL5oBrkHm9SzUFSQHqfDjLZjKoUpOEmuDc4cHvX1XTR5
+Pr1vf5cd0yEncJWG5W4zyUB8k++SUdL2qaeslSs+f491HBLDYn/h8zCgRbBvxhxb
+9qeho1xcbnWeqkN6Kc9bgGozA16P9NLuuLttNnOblkH+lMBf42BSne/TWt3AlGZf
+slKmmZcySUhF8aKfJnLKbkBCFqOtFRh8zBA9a7g+BT/lSANATCDPaAk1YVih2EKb
+dpc3briTDbRsiqg2JKMI7+VdULY9bh3EawIBAg== +-----END DH PARAMETERS----- diff --git a/tests/integration/test_secure_socket/configs_secure/server.crt b/tests/integration/test_secure_socket/configs_secure/server.crt new file mode 100644 index 00000000000..7ade2d96273 --- /dev/null +++ b/tests/integration/test_secure_socket/configs_secure/server.crt @@ -0,0 +1,19 @@ +-----BEGIN CERTIFICATE----- +MIIC/TCCAeWgAwIBAgIJANjx1QSR77HBMA0GCSqGSIb3DQEBCwUAMBQxEjAQBgNV +BAMMCWxvY2FsaG9zdDAgFw0xODA3MzAxODE2MDhaGA8yMjkyMDUxNDE4MTYwOFow +FDESMBAGA1UEAwwJbG9jYWxob3N0MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIB +CgKCAQEAs9uSo6lJG8o8pw0fbVGVu0tPOljSWcVSXH9uiJBwlZLQnhN4SFSFohfI +4K8U1tBDTnxPLUo/V1K9yzoLiRDGMkwVj6+4+hE2udS2ePTQv5oaMeJ9wrs+5c9T +4pOtlq3pLAdm04ZMB1nbrEysceVudHRkQbGHzHp6VG29Fw7Ga6YpqyHQihRmEkTU +7UCYNA+Vk7aDPdMS/khweyTpXYZimaK9f0ECU3/VOeG3fH6Sp2X6FN4tUj/aFXEj +sRmU5G2TlYiSIUMF2JPdhSihfk1hJVALrHPTU38SOL+GyyBRWdNcrIwVwbpvsvPg +pryMSNxnpr0AK0dFhjwnupIv5hJIOQIDAQABo1AwTjAdBgNVHQ4EFgQUjPLb3uYC +kcamyZHK4/EV8jAP0wQwHwYDVR0jBBgwFoAUjPLb3uYCkcamyZHK4/EV8jAP0wQw +DAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEAM/ocuDvfPus/KpMVD51j +4IdlU8R0vmnYLQ+ygzOAo7+hUWP5j0yvq4ILWNmQX6HNvUggCgFv9bjwDFhb/5Vr +85ieWfTd9+LTjrOzTw4avdGwpX9G+6jJJSSq15tw5ElOIFb/qNA9O4dBiu8vn03C +L/zRSXrARhSqTW5w/tZkUcSTT+M5h28+Lgn9ysx4Ff5vi44LJ1NnrbJbEAIYsAAD ++UA+4MBFKx1r6hHINULev8+lCfkpwIaeS8RL+op4fr6kQPxnULw8wT8gkuc8I4+L +P9gg/xDHB44T3ADGZ5Ib6O0DJaNiToO6rnoaaxs0KkotbvDWvRoxEytSbXKoYjYp +0g== +-----END CERTIFICATE----- diff --git a/tests/integration/test_secure_socket/configs_secure/server.key b/tests/integration/test_secure_socket/configs_secure/server.key new file mode 100644 index 00000000000..f0fb61ac443 --- /dev/null +++ b/tests/integration/test_secure_socket/configs_secure/server.key @@ -0,0 +1,28 @@ +-----BEGIN PRIVATE KEY----- +MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCz25KjqUkbyjyn +DR9tUZW7S086WNJZxVJcf26IkHCVktCeE3hIVIWiF8jgrxTW0ENOfE8tSj9XUr3L +OguJEMYyTBWPr7j6ETa51LZ49NC/mhox4n3Cuz7lz1Pik62WreksB2bThkwHWdus +TKxx5W50dGRBsYfMenpUbb0XDsZrpimrIdCKFGYSRNTtQJg0D5WTtoM90xL+SHB7 +JOldhmKZor1/QQJTf9U54bd8fpKnZfoU3i1SP9oVcSOxGZTkbZOViJIhQwXYk92F +KKF+TWElUAusc9NTfxI4v4bLIFFZ01ysjBXBum+y8+CmvIxI3GemvQArR0WGPCe6 +ki/mEkg5AgMBAAECggEATrbIBIxwDJOD2/BoUqWkDCY3dGevF8697vFuZKIiQ7PP +TX9j4vPq0DfsmDjHvAPFkTHiTQXzlroFik3LAp+uvhCCVzImmHq0IrwvZ9xtB43f +7Pkc5P6h1l3Ybo8HJ6zRIY3TuLtLxuPSuiOMTQSGRL0zq3SQ5DKuGwkz+kVjHXUN +MR2TECFwMHKQ5VLrC+7PMpsJYyOMlDAWhRfUalxC55xOXTpaN8TxNnwQ8K2ISVY5 +212Jz/a4hn4LdwxSz3Tiu95PN072K87HLWx3EdT6vW4Ge5P/A3y+smIuNAlanMnu +plHBRtpATLiTxZt/n6npyrfQVbYjSH7KWhB8hBHtaQKBgQDh9Cq1c/KtqDtE0Ccr +/r9tZNTUwBE6VP+3OJeKdEdtsfuxjOCkS1oAjgBJiSDOiWPh1DdoDeVZjPKq6pIu +Mq12OE3Doa8znfCXGbkSzEKOb2unKZMJxzrz99kXt40W5DtrqKPNb24CNqTiY8Aa +CjtcX+3weat82VRXvph6U8ltMwKBgQDLxjiQQzNoY7qvg7CwJCjf9qq8jmLK766g +1FHXopqS+dTxDLM8eJSRrpmxGWJvNeNc1uPhsKsKgotqAMdBUQTf7rSTbt4MyoH5 +bUcRLtr+0QTK9hDWMOOvleqNXha68vATkohWYfCueNsC60qD44o8RZAS6UNy3ENq +cM1cxqe84wKBgQDKkHutWnooJtajlTxY27O/nZKT/HA1bDgniMuKaz4R4Gr1PIez +on3YW3V0d0P7BP6PWRIm7bY79vkiMtLEKdiKUGWeyZdo3eHvhDb/3DCawtau8L2K +GZsHVp2//mS1Lfz7Qh8/L/NedqCQ+L4iWiPnZ3THjjwn3CoZ05ucpvrAMwKBgB54 +nay039MUVq44Owub3KDg+dcIU62U+cAC/9oG7qZbxYPmKkc4oL7IJSNecGHA5SbU +2268RFdl/gLz6tfRjbEOuOHzCjFPdvAdbysanpTMHLNc6FefJ+zxtgk9sJh0C4Jh +vxFrw9nTKKzfEl12gQ1SOaEaUIO0fEBGbe8ZpauRAoGAMAlGV+2/K4ebvAJKOVTa +dKAzQ+TD2SJmeR1HZmKDYddNqwtZlzg3v4ZhCk4eaUmGeC1Bdh8MDuB3QQvXz4Dr +vOIP4UVaOr+uM+7TgAgVnP4/K6IeJGzUDhX93pmpWhODfdu/oojEKVcpCojmEmS1 +KCBtmIrQLqzMpnBpLNuSY+Q= +-----END PRIVATE KEY----- diff --git 
a/tests/integration/test_merge_tree_s3/configs/config.d/users.xml b/tests/integration/test_secure_socket/configs_secure/users.d/users.xml similarity index 57% rename from tests/integration/test_merge_tree_s3/configs/config.d/users.xml rename to tests/integration/test_secure_socket/configs_secure/users.d/users.xml index 797113053f4..479017f6370 100644 --- a/tests/integration/test_merge_tree_s3/configs/config.d/users.xml +++ b/tests/integration/test_secure_socket/configs_secure/users.d/users.xml @@ -1,5 +1,6 @@ - + + diff --git a/tests/integration/test_secure_socket/test.py b/tests/integration/test_secure_socket/test.py new file mode 100644 index 00000000000..65c789f9d02 --- /dev/null +++ b/tests/integration/test_secure_socket/test.py @@ -0,0 +1,84 @@ +import os.path +import time + +import pytest +from helpers.cluster import ClickHouseCluster +from helpers.test_tools import TSV + +cluster = ClickHouseCluster(__file__) + +NODES = {'node' + str(i): None for i in (1, 2)} + +config = ''' + + + {sleep_in_send_data_ms} + + +''' + + +@pytest.fixture(scope="module") +def started_cluster(): + cluster.__with_ssl_config = True + main_configs = [ + "configs_secure/config.d/remote_servers.xml", + "configs_secure/server.crt", + "configs_secure/server.key", + "configs_secure/dhparam.pem", + "configs_secure/config.d/ssl_conf.xml", + ] + + NODES['node1'] = cluster.add_instance('node1', main_configs=main_configs) + NODES['node2'] = cluster.add_instance('node2', main_configs=main_configs, user_configs=["configs_secure/users.d/users.xml"]) + + try: + cluster.start() + NODES['node2'].query("CREATE TABLE base_table (x UInt64) ENGINE = MergeTree ORDER BY x;") + NODES['node2'].query("INSERT INTO base_table VALUES (5);") + NODES['node1'].query("CREATE TABLE distributed_table (x UInt64) ENGINE = Distributed(test_cluster, default, base_table);") + + yield cluster + + finally: + cluster.shutdown() + + +def test(started_cluster): + NODES['node2'].replace_config('/etc/clickhouse-server/users.d/users.xml', config.format(sleep_in_send_data_ms=1000000)) + + attempts = 0 + while attempts < 1000: + setting = NODES['node2'].http_query("SELECT value FROM system.settings WHERE name='sleep_in_send_data_ms'") + if int(setting) == 1000000: + break + time.sleep(0.1) + attempts += 1 + + assert attempts < 1000 + + + start = time.time() + NODES['node1'].query_and_get_error('SELECT * FROM distributed_table settings receive_timeout=5, send_timeout=5, use_hedged_requests=0, async_socket_for_remote=0;') + end = time.time() + assert end - start < 10 + + start = time.time() + error = NODES['node1'].query_and_get_error('SELECT * FROM distributed_table settings receive_timeout=5, send_timeout=5, use_hedged_requests=0, async_socket_for_remote=1;') + end = time.time() + + assert end - start < 10 + + # Check that exception about timeout wasn't thrown from DB::ReadBufferFromPocoSocket::nextImpl(). + assert error.find('DB::ReadBufferFromPocoSocket::nextImpl()') == -1 + + start = time.time() + error = NODES['node1'].query_and_get_error('SELECT * FROM distributed_table settings receive_timeout=5, send_timeout=5, use_hedged_requests=1, async_socket_for_remote=1;') + end = time.time() + + assert end - start < 10 + + # Check that exception about timeout wasn't thrown from DB::ReadBufferFromPocoSocket::nextImpl(). 
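+    # (with use_hedged_requests enabled, any timeout should be detected by the
+    # hedged-requests machinery rather than by a blocking socket read)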
+    assert error.find('DB::ReadBufferFromPocoSocket::nextImpl()') == -1
+
+
diff --git a/tests/integration/test_send_crash_reports/test.py b/tests/integration/test_send_crash_reports/test.py
index 65d49637b13..3f88f719fe4 100644
--- a/tests/integration/test_send_crash_reports/test.py
+++ b/tests/integration/test_send_crash_reports/test.py
@@ -23,7 +23,7 @@ def started_node():
         cluster.shutdown()
 
 
-def test_send_segfault(started_node, ):
+def test_send_segfault(started_node):
     if started_node.is_built_with_thread_sanitizer():
         pytest.skip("doesn't fit in timeouts for stacktrace generation")
 
diff --git a/tests/integration/test_storage_hdfs/test.py b/tests/integration/test_storage_hdfs/test.py
index a6c8b7e1ee9..a0dc342e910 100644
--- a/tests/integration/test_storage_hdfs/test.py
+++ b/tests/integration/test_storage_hdfs/test.py
@@ -201,6 +201,24 @@ def test_write_gzip_storage(started_cluster):
     assert started_cluster.hdfs_api.read_gzip_data("/gzip_storage") == "1\tMark\t72.53\n"
     assert node1.query("select * from GZIPHDFSStorage") == "1\tMark\t72.53\n"
 
+
+def test_virtual_columns(started_cluster):
+    node1.query("create table virtual_cols (id UInt32) ENGINE = HDFS('hdfs://hdfs1:9000/file*', 'TSV')")
+    started_cluster.hdfs_api.write_data("/file1", "1\n")
+    started_cluster.hdfs_api.write_data("/file2", "2\n")
+    started_cluster.hdfs_api.write_data("/file3", "3\n")
+    expected = "1\tfile1\thdfs://hdfs1:9000//file1\n2\tfile2\thdfs://hdfs1:9000//file2\n3\tfile3\thdfs://hdfs1:9000//file3\n"
+    assert node1.query("select id, _file as file_name, _path as file_path from virtual_cols order by id") == expected
+
+
+def test_read_files_with_spaces(started_cluster):
+    started_cluster.hdfs_api.write_data("/test test test 1.txt", "1\n")
+    started_cluster.hdfs_api.write_data("/test test test 2.txt", "2\n")
+    started_cluster.hdfs_api.write_data("/test test test 3.txt", "3\n")
+    node1.query("create table test (id UInt32) ENGINE = HDFS('hdfs://hdfs1:9000/test*', 'TSV')")
+    assert node1.query("select * from test order by id") == "1\n2\n3\n"
+
+
 if __name__ == '__main__':
     cluster.start()
     input("Cluster created, press any key to destroy...")
diff --git a/tests/integration/test_storage_kafka/test.py b/tests/integration/test_storage_kafka/test.py
index c5691f3534e..8bb36cfa8fd 100644
--- a/tests/integration/test_storage_kafka/test.py
+++ b/tests/integration/test_storage_kafka/test.py
@@ -5,8 +5,12 @@ import socket
 import subprocess
 import threading
 import time
+import io
+import string
 
 import avro.schema
+import avro.io
+import avro.datafile
 from confluent_kafka.avro.cached_schema_registry_client import CachedSchemaRegistryClient
 from confluent_kafka.avro.serializer.message_serializer import MessageSerializer
 from confluent_kafka import admin
@@ -140,6 +144,37 @@ def kafka_produce_protobuf_social(topic, start_index, num_messages):
     producer.flush()
     print(("Produced {} messages for topic {}".format(num_messages, topic)))
 
+def avro_message(value):
+    schema = avro.schema.make_avsc_object({
+        'name': 'row',
+        'type': 'record',
+        'fields': [
+            {'name': 'id', 'type': 'long'},
+            {'name': 'blockNo', 'type': 'int'},
+            {'name': 'val1', 'type': 'string'},
+            {'name': 'val2', 'type': 'float'},
+            {'name': 'val3', 'type': 'int'}
+        ]
+    })
+    bytes_writer = io.BytesIO()
+    # writer = avro.io.DatumWriter(schema)
+    # encoder = avro.io.BinaryEncoder(bytes_writer)
+    # writer.write(value, encoder)
+
+
+    # DataFileWriter seems to be mandatory to get the schema encoded into the output
+    writer = avro.datafile.DataFileWriter(bytes_writer, avro.io.DatumWriter(), schema)
+    if
isinstance(value, list): + for v in value: + writer.append(v) + else: + writer.append(value) + writer.flush() + raw_bytes = bytes_writer.getvalue() + + writer.close() + bytes_writer.close() + return raw_bytes def avro_confluent_message(schema_registry_client, value): # type: (CachedSchemaRegistryClient, dict) -> str @@ -572,13 +607,6 @@ def test_kafka_formats(kafka_cluster): # # '' # # ], # }, - # 'Avro' : { - # 'data_sample' : [ - # b'\x4f\x62\x6a\x01\x04\x16\x61\x76\x72\x6f\x2e\x73\x63\x68\x65\x6d\x61\x82\x03\x7b\x22\x74\x79\x70\x65\x22\x3a\x22\x72\x65\x63\x6f\x72\x64\x22\x2c\x22\x6e\x61\x6d\x65\x22\x3a\x22\x72\x6f\x77\x22\x2c\x22\x66\x69\x65\x6c\x64\x73\x22\x3a\x5b\x7b\x22\x6e\x61\x6d\x65\x22\x3a\x22\x69\x64\x22\x2c\x22\x74\x79\x70\x65\x22\x3a\x22\x6c\x6f\x6e\x67\x22\x7d\x2c\x7b\x22\x6e\x61\x6d\x65\x22\x3a\x22\x62\x6c\x6f\x63\x6b\x4e\x6f\x22\x2c\x22\x74\x79\x70\x65\x22\x3a\x22\x69\x6e\x74\x22\x7d\x2c\x7b\x22\x6e\x61\x6d\x65\x22\x3a\x22\x76\x61\x6c\x31\x22\x2c\x22\x74\x79\x70\x65\x22\x3a\x22\x73\x74\x72\x69\x6e\x67\x22\x7d\x2c\x7b\x22\x6e\x61\x6d\x65\x22\x3a\x22\x76\x61\x6c\x32\x22\x2c\x22\x74\x79\x70\x65\x22\x3a\x22\x66\x6c\x6f\x61\x74\x22\x7d\x2c\x7b\x22\x6e\x61\x6d\x65\x22\x3a\x22\x76\x61\x6c\x33\x22\x2c\x22\x74\x79\x70\x65\x22\x3a\x22\x69\x6e\x74\x22\x7d\x5d\x7d\x14\x61\x76\x72\x6f\x2e\x63\x6f\x64\x65\x63\x08\x6e\x75\x6c\x6c\x00\x8d\x1f\xf2\x17\x71\xa4\x2e\xe4\xc9\x0a\x23\x67\x12\xaa\xc6\xc0\x02\x14\x00\x00\x04\x41\x4d\x00\x00\x00\x3f\x02\x8d\x1f\xf2\x17\x71\xa4\x2e\xe4\xc9\x0a\x23\x67\x12\xaa\xc6\xc0', - # b'\x4f\x62\x6a\x01\x04\x16\x61\x76\x72\x6f\x2e\x73\x63\x68\x65\x6d\x61\x82\x03\x7b\x22\x74\x79\x70\x65\x22\x3a\x22\x72\x65\x63\x6f\x72\x64\x22\x2c\x22\x6e\x61\x6d\x65\x22\x3a\x22\x72\x6f\x77\x22\x2c\x22\x66\x69\x65\x6c\x64\x73\x22\x3a\x5b\x7b\x22\x6e\x61\x6d\x65\x22\x3a\x22\x69\x64\x22\x2c\x22\x74\x79\x70\x65\x22\x3a\x22\x6c\x6f\x6e\x67\x22\x7d\x2c\x7b\x22\x6e\x61\x6d\x65\x22\x3a\x22\x62\x6c\x6f\x63\x6b\x4e\x6f\x22\x2c\x22\x74\x79\x70\x65\x22\x3a\x22\x69\x6e\x74\x22\x7d\x2c\x7b\x22\x6e\x61\x6d\x65\x22\x3a\x22\x76\x61\x6c\x31\x22\x2c\x22\x74\x79\x70\x65\x22\x3a\x22\x73\x74\x72\x69\x6e\x67\x22\x7d\x2c\x7b\x22\x6e\x61\x6d\x65\x22\x3a\x22\x76\x61\x6c\x32\x22\x2c\x22\x74\x79\x70\x65\x22\x3a\x22\x66\x6c\x6f\x61\x74\x22\x7d\x2c\x7b\x22\x6e\x61\x6d\x65\x22\x3a\x22\x76\x61\x6c\x33\x22\x2c\x22\x74\x79\x70\x65\x22\x3a\x22\x69\x6e\x74\x22\x7d\x5d\x7d\x14\x61\x76\x72\x6f\x2e\x63\x6f\x64\x65\x63\x08\x6e\x75\x6c\x6c\x00\xeb\x9d\x51\x82\xf2\x11\x3d\x0b\xc5\x92\x97\xb2\x07\x6d\x72\x5a\x1e\xac\x02\x02\x00\x04\x41\x4d\x00\x00\x00\x3f\x02\x04\x00\x04\x41\x4d\x00\x00\x00\x3f\x02\x06\x00\x04\x41\x4d\x00\x00\x00\x3f\x02\x08\x00\x04\x41\x4d\x00\x00\x00\x3f\x02\x0a\x00\x04\x41\x4d\x00\x00\x00\x3f\x02\x0c\x00\x04\x41\x4d\x00\x00\x00\x3f\x02\x0e\x00\x04\x41\x4d\x00\x00\x00\x3f\x02\x10\x00\x04\x41\x4d\x00\x00\x00\x3f\x02\x12\x00\x04\x41\x4d\x00\x00\x00\x3f\x02\x14\x00\x04\x41\x4d\x00\x00\x00\x3f\x02\x16\x00\x04\x41\x4d\x00\x00\x00\x3f\x02\x18\x00\x04\x41\x4d\x00\x00\x00\x3f\x02\x1a\x00\x04\x41\x4d\x00\x00\x00\x3f\x02\x1c\x00\x04\x41\x4d\x00\x00\x00\x3f\x02\x1e\x00\x04\x41\x4d\x00\x00\x00\x3f\x02\xeb\x9d\x51\x82\xf2\x11\x3d\x0b\xc5\x92\x97\xb2\x07\x6d\x72\x5a', - # 
b'\x4f\x62\x6a\x01\x04\x16\x61\x76\x72\x6f\x2e\x73\x63\x68\x65\x6d\x61\x82\x03\x7b\x22\x74\x79\x70\x65\x22\x3a\x22\x72\x65\x63\x6f\x72\x64\x22\x2c\x22\x6e\x61\x6d\x65\x22\x3a\x22\x72\x6f\x77\x22\x2c\x22\x66\x69\x65\x6c\x64\x73\x22\x3a\x5b\x7b\x22\x6e\x61\x6d\x65\x22\x3a\x22\x69\x64\x22\x2c\x22\x74\x79\x70\x65\x22\x3a\x22\x6c\x6f\x6e\x67\x22\x7d\x2c\x7b\x22\x6e\x61\x6d\x65\x22\x3a\x22\x62\x6c\x6f\x63\x6b\x4e\x6f\x22\x2c\x22\x74\x79\x70\x65\x22\x3a\x22\x69\x6e\x74\x22\x7d\x2c\x7b\x22\x6e\x61\x6d\x65\x22\x3a\x22\x76\x61\x6c\x31\x22\x2c\x22\x74\x79\x70\x65\x22\x3a\x22\x73\x74\x72\x69\x6e\x67\x22\x7d\x2c\x7b\x22\x6e\x61\x6d\x65\x22\x3a\x22\x76\x61\x6c\x32\x22\x2c\x22\x74\x79\x70\x65\x22\x3a\x22\x66\x6c\x6f\x61\x74\x22\x7d\x2c\x7b\x22\x6e\x61\x6d\x65\x22\x3a\x22\x76\x61\x6c\x33\x22\x2c\x22\x74\x79\x70\x65\x22\x3a\x22\x69\x6e\x74\x22\x7d\x5d\x7d\x14\x61\x76\x72\x6f\x2e\x63\x6f\x64\x65\x63\x08\x6e\x75\x6c\x6c\x00\x73\x65\x4f\x7c\xd9\x33\xe1\x18\xdd\x30\xe8\x22\x2a\x58\x20\x6f\x02\x14\x00\x00\x04\x41\x4d\x00\x00\x00\x3f\x02\x73\x65\x4f\x7c\xd9\x33\xe1\x18\xdd\x30\xe8\x22\x2a\x58\x20\x6f', - # ], - # }, 'AvroConfluent': { 'data_sample': [ avro_confluent_message(cluster.schema_registry_client, @@ -596,49 +624,34 @@ def test_kafka_formats(kafka_cluster): cluster.schema_registry_port ), 'supports_empty_value': True, - } - # 'Arrow' : { - # # Not working at all: DB::Exception: Error while opening a table: Invalid: File is too small: 0, Stack trace (when copying this message, always include the lines below): - # # /src/Common/Exception.cpp:37: DB::Exception::Exception(std::__1::basic_string, std::__1::allocator > const&, int) @ 0x15c2d2a3 in /usr/bin/clickhouse - # # /src/Processors/Formats/Impl/ArrowBlockInputFormat.cpp:88: DB::ArrowBlockInputFormat::prepareReader() @ 0x1ddff1c3 in /usr/bin/clickhouse - # # /src/Processors/Formats/Impl/ArrowBlockInputFormat.cpp:26: DB::ArrowBlockInputFormat::ArrowBlockInputFormat(DB::ReadBuffer&, DB::Block const&, bool) @ 0x1ddfef63 in /usr/bin/clickhouse - # # /contrib/libcxx/include/memory:2214: std::__1::__compressed_pair_elem::__compressed_pair_elem(std::__1::piecewise_construct_t, std::__1::tuple, std::__1::__tuple_indices<0ul, 1ul, 2ul>) @ 0x1de0470f in /usr/bin/clickhouse - # # /contrib/libcxx/include/memory:2299: std::__1::__compressed_pair, DB::ArrowBlockInputFormat>::__compressed_pair&, DB::ReadBuffer&, DB::Block const&, bool&&>(std::__1::piecewise_construct_t, std::__1::tuple&>, std::__1::tuple) @ 0x1de04375 in /usr/bin/clickhouse - # # /contrib/libcxx/include/memory:3569: std::__1::__shared_ptr_emplace >::__shared_ptr_emplace(std::__1::allocator, DB::ReadBuffer&, DB::Block const&, bool&&) @ 0x1de03f97 in /usr/bin/clickhouse - # # /contrib/libcxx/include/memory:4400: std::__1::enable_if::value), std::__1::shared_ptr >::type std::__1::make_shared(DB::ReadBuffer&, DB::Block const&, bool&&) @ 0x1de03d4c in /usr/bin/clickhouse - # # /src/Processors/Formats/Impl/ArrowBlockInputFormat.cpp:107: DB::registerInputFormatProcessorArrow(DB::FormatFactory&)::$_0::operator()(DB::ReadBuffer&, DB::Block const&, DB::RowInputFormatParams const&, DB::FormatSettings const&) const @ 0x1de010df in /usr/bin/clickhouse - # 'data_sample' : [ - # 
'\x41\x52\x52\x4f\x57\x31\x00\x00\xff\xff\xff\xff\x48\x01\x00\x00\x10\x00\x00\x00\x00\x00\x0a\x00\x0c\x00\x06\x00\x05\x00\x08\x00\x0a\x00\x00\x00\x00\x01\x03\x00\x0c\x00\x00\x00\x08\x00\x08\x00\x00\x00\x04\x00\x08\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00\xe4\x00\x00\x00\x9c\x00\x00\x00\x6c\x00\x00\x00\x34\x00\x00\x00\x04\x00\x00\x00\x40\xff\xff\xff\x00\x00\x00\x02\x18\x00\x00\x00\x0c\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x72\xff\xff\xff\x08\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x33\x00\x00\x00\x00\x6c\xff\xff\xff\x00\x00\x00\x03\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x06\x00\x06\x00\x00\x00\x00\x00\x01\x00\x04\x00\x00\x00\x76\x61\x6c\x32\x00\x00\x00\x00\xa0\xff\xff\xff\x00\x00\x00\x05\x18\x00\x00\x00\x10\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x04\x00\x04\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x31\x00\x00\x00\x00\xcc\xff\xff\xff\x00\x00\x00\x02\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x04\x00\x06\x00\x00\x00\x10\x00\x00\x00\x07\x00\x00\x00\x62\x6c\x6f\x63\x6b\x4e\x6f\x00\x10\x00\x14\x00\x08\x00\x00\x00\x07\x00\x0c\x00\x00\x00\x10\x00\x10\x00\x00\x00\x00\x00\x00\x02\x24\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x00\x0c\x00\x08\x00\x07\x00\x08\x00\x00\x00\x00\x00\x00\x01\x40\x00\x00\x00\x02\x00\x00\x00\x69\x64\x00\x00\xff\xff\xff\xff\x58\x01\x00\x00\x14\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x16\x00\x06\x00\x05\x00\x08\x00\x0c\x00\x0c\x00\x00\x00\x00\x03\x03\x00\x18\x00\x00\x00\x30\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0a\x00\x18\x00\x0c\x00\x04\x00\x08\x00\x0a\x00\x00\x00\xcc\x00\x00\x00\x10\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0b\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x18\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x28\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x28\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x41\x4d\x00\x00\x00\x00\x00\x00\x00\x00\x00\x3f\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\xff\xff\xff\xff\x00\x00\x00\x00\x10\x00\x00\x00\x0c\x00\x14\x00\x06\x00\x08\x00\x0c\x00\x10\x00\x0c\x00\x00\x00\x00\x00\x03\x00\x3c\x00\x00\x00\x28\x00\x00\x00\x04\x00\x00\x00\x01\x00\x00\x00\x58\x01\x00\x00\x00\x00\x00\x00\x60\x01\x00\x00\x00\x00\x00\x00\x30\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x08\x00\x00\x00\x04\x00\x08\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00\xe4\x00\x00\x00\x9c\x00\x00\x00\x6c\x00\x00\x00\x34\x00\x00\x00\x04\x00\x00\x00\x40\xff\xff\xff\x00\x00\x00\x02\x18\x00\x00\x00\x0c\x00\x00\x00\x04\x00\x00\x00\x
00\x00\x00\x00\x72\xff\xff\xff\x08\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x33\x00\x00\x00\x00\x6c\xff\xff\xff\x00\x00\x00\x03\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x06\x00\x06\x00\x00\x00\x00\x00\x01\x00\x04\x00\x00\x00\x76\x61\x6c\x32\x00\x00\x00\x00\xa0\xff\xff\xff\x00\x00\x00\x05\x18\x00\x00\x00\x10\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x04\x00\x04\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x31\x00\x00\x00\x00\xcc\xff\xff\xff\x00\x00\x00\x02\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x04\x00\x06\x00\x00\x00\x10\x00\x00\x00\x07\x00\x00\x00\x62\x6c\x6f\x63\x6b\x4e\x6f\x00\x10\x00\x14\x00\x08\x00\x00\x00\x07\x00\x0c\x00\x00\x00\x10\x00\x10\x00\x00\x00\x00\x00\x00\x02\x24\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x00\x0c\x00\x08\x00\x07\x00\x08\x00\x00\x00\x00\x00\x00\x01\x40\x00\x00\x00\x02\x00\x00\x00\x69\x64\x00\x00\x78\x01\x00\x00\x41\x52\x52\x4f\x57\x31', - # '\x41\x52\x52\x4f\x57\x31\x00\x00\xff\xff\xff\xff\x48\x01\x00\x00\x10\x00\x00\x00\x00\x00\x0a\x00\x0c\x00\x06\x00\x05\x00\x08\x00\x0a\x00\x00\x00\x00\x01\x03\x00\x0c\x00\x00\x00\x08\x00\x08\x00\x00\x00\x04\x00\x08\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00\xe4\x00\x00\x00\x9c\x00\x00\x00\x6c\x00\x00\x00\x34\x00\x00\x00\x04\x00\x00\x00\x40\xff\xff\xff\x00\x00\x00\x02\x18\x00\x00\x00\x0c\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x72\xff\xff\xff\x08\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x33\x00\x00\x00\x00\x6c\xff\xff\xff\x00\x00\x00\x03\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x06\x00\x06\x00\x00\x00\x00\x00\x01\x00\x04\x00\x00\x00\x76\x61\x6c\x32\x00\x00\x00\x00\xa0\xff\xff\xff\x00\x00\x00\x05\x18\x00\x00\x00\x10\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x04\x00\x04\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x31\x00\x00\x00\x00\xcc\xff\xff\xff\x00\x00\x00\x02\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x04\x00\x06\x00\x00\x00\x10\x00\x00\x00\x07\x00\x00\x00\x62\x6c\x6f\x63\x6b\x4e\x6f\x00\x10\x00\x14\x00\x08\x00\x00\x00\x07\x00\x0c\x00\x00\x00\x10\x00\x10\x00\x00\x00\x00\x00\x00\x02\x24\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x00\x0c\x00\x08\x00\x07\x00\x08\x00\x00\x00\x00\x00\x00\x01\x40\x00\x00\x00\x02\x00\x00\x00\x69\x64\x00\x00\xff\xff\xff\xff\x58\x01\x00\x00\x14\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x16\x00\x06\x00\x05\x00\x08\x00\x0c\x00\x0c\x00\x00\x00\x00\x03\x03\x00\x18\x00\x00\x00\x48\x01\x00\x00\x00\x00\x00\x00\x00\x00\x0a\x00\x18\x00\x0c\x00\x04\x00\x08\x00\x0a\x00\x00\x00\xcc\x00\x00\x00\x10\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0b\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x78\x00\x00\x00\x00\x00\x00\x00\x78\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x78\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x98\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x98\x00\x00\x00\x00\x00\x00\x00\x40\x00\x00\x00\x00\x00\x00\x00\xd8\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\xf8\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xf8\x00\x00\x00\x00\x00\x00\x00\x40\x00\x00\x00\x00\x00\x00\x00\x38\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x38\x01\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0f\x00\x00\x00\x00\
x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\x00\x00\x00\x00\x06\x00\x00\x00\x00\x00\x00\x00\x07\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x09\x00\x00\x00\x00\x00\x00\x00\x0a\x00\x00\x00\x00\x00\x00\x00\x0b\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00\x00\x00\x00\x0d\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x04\x00\x00\x00\x06\x00\x00\x00\x08\x00\x00\x00\x0a\x00\x00\x00\x0c\x00\x00\x00\x0e\x00\x00\x00\x10\x00\x00\x00\x12\x00\x00\x00\x14\x00\x00\x00\x16\x00\x00\x00\x18\x00\x00\x00\x1a\x00\x00\x00\x1c\x00\x00\x00\x1e\x00\x00\x00\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x00\x00\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x00\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x00\xff\xff\xff\xff\x00\x00\x00\x00\x10\x00\x00\x00\x0c\x00\x14\x00\x06\x00\x08\x00\x0c\x00\x10\x00\x0c\x00\x00\x00\x00\x00\x03\x00\x3c\x00\x00\x00\x28\x00\x00\x00\x04\x00\x00\x00\x01\x00\x00\x00\x58\x01\x00\x00\x00\x00\x00\x00\x60\x01\x00\x00\x00\x00\x00\x00\x48\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x08\x00\x00\x00\x04\x00\x08\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00\xe4\x00\x00\x00\x9c\x00\x00\x00\x6c\x00\x00\x00\x34\x00\x00\x00\x04\x00\x00\x00\x40\xff\xff\xff\x00\x00\x00\x02\x18\x00\x00\x00\x0c\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x72\xff\xff\xff\x08\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x33\x00\x00\x00\x00\x6c\xff\xff\xff\x00\x00\x00\x03\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x06\x00\x06\x00\x00\x00\x00\x00\x01\x00\x04\x00\x00\x00\x76\x61\x6c\x32\x00\x00\x00\x00\xa0\xff\xff\xff\x00\x00\x00\x05\x18\x00\x00\x00\x10\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x04\x00\x04\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x31\x00\x00\x00\x00\xcc\xff\xff\xff\x00\x00\x00\x02\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x04\x00\x06\x00\x00\x00\x10\x00\x00\x00\x07\x00\x00\x00\x62\x6c\x6f\x63\x6b\x4e\x6f\x00\x10\x00\x14\x00\x08\x00\x00\x00\x07\x00\x0c\x00\x00\x00\x10\x00\x10\x00\x00\x00\x00\x00\x00\x02\x24\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x00\x0c\x00\x08\x00\x07\x00\x08\x00\x00\x00\x00\x00\x00\x01\x40\x00\x00\x00\x02\x00\x00\x00\x69\x64\x00\x00\x78\x01\x00\x00\x41\x52\x52\x4f\x57\x31', - # 
'\x41\x52\x52\x4f\x57\x31\x00\x00\xff\xff\xff\xff\x48\x01\x00\x00\x10\x00\x00\x00\x00\x00\x0a\x00\x0c\x00\x06\x00\x05\x00\x08\x00\x0a\x00\x00\x00\x00\x01\x03\x00\x0c\x00\x00\x00\x08\x00\x08\x00\x00\x00\x04\x00\x08\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00\xe4\x00\x00\x00\x9c\x00\x00\x00\x6c\x00\x00\x00\x34\x00\x00\x00\x04\x00\x00\x00\x40\xff\xff\xff\x00\x00\x00\x02\x18\x00\x00\x00\x0c\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x72\xff\xff\xff\x08\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x33\x00\x00\x00\x00\x6c\xff\xff\xff\x00\x00\x00\x03\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x06\x00\x06\x00\x00\x00\x00\x00\x01\x00\x04\x00\x00\x00\x76\x61\x6c\x32\x00\x00\x00\x00\xa0\xff\xff\xff\x00\x00\x00\x05\x18\x00\x00\x00\x10\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x04\x00\x04\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x31\x00\x00\x00\x00\xcc\xff\xff\xff\x00\x00\x00\x02\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x04\x00\x06\x00\x00\x00\x10\x00\x00\x00\x07\x00\x00\x00\x62\x6c\x6f\x63\x6b\x4e\x6f\x00\x10\x00\x14\x00\x08\x00\x00\x00\x07\x00\x0c\x00\x00\x00\x10\x00\x10\x00\x00\x00\x00\x00\x00\x02\x24\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x00\x0c\x00\x08\x00\x07\x00\x08\x00\x00\x00\x00\x00\x00\x01\x40\x00\x00\x00\x02\x00\x00\x00\x69\x64\x00\x00\xff\xff\xff\xff\x58\x01\x00\x00\x14\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x16\x00\x06\x00\x05\x00\x08\x00\x0c\x00\x0c\x00\x00\x00\x00\x03\x03\x00\x18\x00\x00\x00\x30\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0a\x00\x18\x00\x0c\x00\x04\x00\x08\x00\x0a\x00\x00\x00\xcc\x00\x00\x00\x10\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0b\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x18\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x28\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x28\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x41\x4d\x00\x00\x00\x00\x00\x00\x00\x00\x00\x3f\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\xff\xff\xff\xff\x00\x00\x00\x00\x10\x00\x00\x00\x0c\x00\x14\x00\x06\x00\x08\x00\x0c\x00\x10\x00\x0c\x00\x00\x00\x00\x00\x03\x00\x3c\x00\x00\x00\x28\x00\x00\x00\x04\x00\x00\x00\x01\x00\x00\x00\x58\x01\x00\x00\x00\x00\x00\x00\x60\x01\x00\x00\x00\x00\x00\x00\x30\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x08\x00\x00\x00\x04\x00\x08\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00\xe4\x00\x00\x00\x9c\x00\x00\x00\x6c\x00\x00\x00\x34\x00\x00\x00\x04\x00\x00\x00\x40\xff\xff\xff\x00\x00\x00\x02\x18\x00\x00\x00\x0c\x00\x00\x00\x04\x00\x00\x00\x
00\x00\x00\x00\x72\xff\xff\xff\x08\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x33\x00\x00\x00\x00\x6c\xff\xff\xff\x00\x00\x00\x03\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x06\x00\x06\x00\x00\x00\x00\x00\x01\x00\x04\x00\x00\x00\x76\x61\x6c\x32\x00\x00\x00\x00\xa0\xff\xff\xff\x00\x00\x00\x05\x18\x00\x00\x00\x10\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x04\x00\x04\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x31\x00\x00\x00\x00\xcc\xff\xff\xff\x00\x00\x00\x02\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x04\x00\x06\x00\x00\x00\x10\x00\x00\x00\x07\x00\x00\x00\x62\x6c\x6f\x63\x6b\x4e\x6f\x00\x10\x00\x14\x00\x08\x00\x00\x00\x07\x00\x0c\x00\x00\x00\x10\x00\x10\x00\x00\x00\x00\x00\x00\x02\x24\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x00\x0c\x00\x08\x00\x07\x00\x08\x00\x00\x00\x00\x00\x00\x01\x40\x00\x00\x00\x02\x00\x00\x00\x69\x64\x00\x00\x78\x01\x00\x00\x41\x52\x52\x4f\x57\x31', - # ], - # }, - # 'ArrowStream' : { - # # Not working at all: - # # Error while opening a table: Invalid: Tried reading schema message, was null or length 0, Stack trace (when copying this message, always include the lines below): - # # /src/Processors/Formats/Impl/ArrowBlockInputFormat.cpp:88: DB::ArrowBlockInputFormat::prepareReader() @ 0x1ddff1c3 in /usr/bin/clickhouse - # # /src/Processors/Formats/Impl/ArrowBlockInputFormat.cpp:26: DB::ArrowBlockInputFormat::ArrowBlockInputFormat(DB::ReadBuffer&, DB::Block const&, bool) @ 0x1ddfef63 in /usr/bin/clickhouse - # # /contrib/libcxx/include/memory:2214: std::__1::__compressed_pair_elem::__compressed_pair_elem(std::__1::piecewise_construct_t, std::__1::tuple, std::__1::__tuple_indices<0ul, 1ul, 2ul>) @ 0x1de0470f in /usr/bin/clickhouse - # # /contrib/libcxx/include/memory:2299: std::__1::__compressed_pair, DB::ArrowBlockInputFormat>::__compressed_pair&, DB::ReadBuffer&, DB::Block const&, bool&&>(std::__1::piecewise_construct_t, std::__1::tuple&>, std::__1::tuple) @ 0x1de04375 in /usr/bin/clickhouse - # # /contrib/libcxx/include/memory:3569: std::__1::__shared_ptr_emplace >::__shared_ptr_emplace(std::__1::allocator, DB::ReadBuffer&, DB::Block const&, bool&&) @ 0x1de03f97 in /usr/bin/clickhouse - # # /contrib/libcxx/include/memory:4400: std::__1::enable_if::value), std::__1::shared_ptr >::type std::__1::make_shared(DB::ReadBuffer&, DB::Block const&, bool&&) @ 0x1de03d4c in /usr/bin/clickhouse - # # /src/Processors/Formats/Impl/ArrowBlockInputFormat.cpp:117: DB::registerInputFormatProcessorArrow(DB::FormatFactory&)::$_1::operator()(DB::ReadBuffer&, DB::Block const&, DB::RowInputFormatParams const&, DB::FormatSettings const&) const @ 0x1de0273f in /usr/bin/clickhouse - # # /contrib/libcxx/include/type_traits:3519: decltype(std::__1::forward(fp)(std::__1::forward(fp0), std::__1::forward(fp0), std::__1::forward(fp0), std::__1::forward(fp0))) std::__1::__invoke(DB::registerInputFormatProcessorArrow(DB::FormatFactory&)::$_1&, DB::ReadBuffer&, DB::Block const&, DB::RowInputFormatParams const&, DB::FormatSettings const&) @ 0x1de026da in /usr/bin/clickhouse - # # /contrib/libcxx/include/__functional_base:317: std::__1::shared_ptr std::__1::__invoke_void_return_wrapper >::__call(DB::registerInputFormatProcessorArrow(DB::FormatFactory&)::$_1&, DB::ReadBuffer&, DB::Block const&, DB::RowInputFormatParams const&, DB::FormatSettings const&) @ 0x1de025ed in /usr/bin/clickhouse - # # /contrib/libcxx/include/functional:1540: std::__1::__function::__alloc_func, std::__1::shared_ptr 
(DB::ReadBuffer&, DB::Block const&, DB::RowInputFormatParams const&, DB::FormatSettings const&)>::operator()(DB::ReadBuffer&, DB::Block const&, DB::RowInputFormatParams const&, DB::FormatSettings const&) @ 0x1de0254a in /usr/bin/clickhouse - # # /contrib/libcxx/include/functional:1714: std::__1::__function::__func, std::__1::shared_ptr (DB::ReadBuffer&, DB::Block const&, DB::RowInputFormatParams const&, DB::FormatSettings const&)>::operator()(DB::ReadBuffer&, DB::Block const&, DB::RowInputFormatParams const&, DB::FormatSettings const&) @ 0x1de0165c in /usr/bin/clickhouse - # # /contrib/libcxx/include/functional:1867: std::__1::__function::__value_func (DB::ReadBuffer&, DB::Block const&, DB::RowInputFormatParams const&, DB::FormatSettings const&)>::operator()(DB::ReadBuffer&, DB::Block const&, DB::RowInputFormatParams const&, DB::FormatSettings const&) const @ 0x1dd14dbd in /usr/bin/clickhouse - # # /contrib/libcxx/include/functional:2473: std::__1::function (DB::ReadBuffer&, DB::Block const&, DB::RowInputFormatParams const&, DB::FormatSettings const&)>::operator()(DB::ReadBuffer&, DB::Block const&, DB::RowInputFormatParams const&, DB::FormatSettings const&) const @ 0x1dd07035 in /usr/bin/clickhouse - # # /src/Formats/FormatFactory.cpp:258: DB::FormatFactory::getInputFormat(std::__1::basic_string, std::__1::allocator > const&, DB::ReadBuffer&, DB::Block const&, DB::Context const&, unsigned long, std::__1::function) const @ 0x1dd04007 in /usr/bin/clickhouse - # # /src/Storages/Kafka/KafkaBlockInputStream.cpp:76: DB::KafkaBlockInputStream::readImpl() @ 0x1d8f6559 in /usr/bin/clickhouse - # # /src/DataStreams/IBlockInputStream.cpp:60: DB::IBlockInputStream::read() @ 0x1c9c92fd in /usr/bin/clickhouse - # # /src/DataStreams/copyData.cpp:26: void DB::copyDataImpl*)::$_0&, void (&)(DB::Block const&)>(DB::IBlockInputStream&, DB::IBlockOutputStream&, DB::copyData(DB::IBlockInputStream&, DB::IBlockOutputStream&, std::__1::atomic*)::$_0&, void (&)(DB::Block const&)) @ 0x1c9ea01c in /usr/bin/clickhouse - # 'data_sample' : [ - # 
'\xff\xff\xff\xff\x48\x01\x00\x00\x10\x00\x00\x00\x00\x00\x0a\x00\x0c\x00\x06\x00\x05\x00\x08\x00\x0a\x00\x00\x00\x00\x01\x03\x00\x0c\x00\x00\x00\x08\x00\x08\x00\x00\x00\x04\x00\x08\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00\xe4\x00\x00\x00\x9c\x00\x00\x00\x6c\x00\x00\x00\x34\x00\x00\x00\x04\x00\x00\x00\x40\xff\xff\xff\x00\x00\x00\x02\x18\x00\x00\x00\x0c\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x72\xff\xff\xff\x08\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x33\x00\x00\x00\x00\x6c\xff\xff\xff\x00\x00\x00\x03\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x06\x00\x06\x00\x00\x00\x00\x00\x01\x00\x04\x00\x00\x00\x76\x61\x6c\x32\x00\x00\x00\x00\xa0\xff\xff\xff\x00\x00\x00\x05\x18\x00\x00\x00\x10\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x04\x00\x04\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x31\x00\x00\x00\x00\xcc\xff\xff\xff\x00\x00\x00\x02\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x04\x00\x06\x00\x00\x00\x10\x00\x00\x00\x07\x00\x00\x00\x62\x6c\x6f\x63\x6b\x4e\x6f\x00\x10\x00\x14\x00\x08\x00\x00\x00\x07\x00\x0c\x00\x00\x00\x10\x00\x10\x00\x00\x00\x00\x00\x00\x02\x24\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x00\x0c\x00\x08\x00\x07\x00\x08\x00\x00\x00\x00\x00\x00\x01\x40\x00\x00\x00\x02\x00\x00\x00\x69\x64\x00\x00\xff\xff\xff\xff\x58\x01\x00\x00\x14\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x16\x00\x06\x00\x05\x00\x08\x00\x0c\x00\x0c\x00\x00\x00\x00\x03\x03\x00\x18\x00\x00\x00\x30\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0a\x00\x18\x00\x0c\x00\x04\x00\x08\x00\x0a\x00\x00\x00\xcc\x00\x00\x00\x10\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0b\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x18\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x28\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x28\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x41\x4d\x00\x00\x00\x00\x00\x00\x00\x00\x00\x3f\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\xff\xff\xff\xff\x00\x00\x00\x00', - # 
'\xff\xff\xff\xff\x48\x01\x00\x00\x10\x00\x00\x00\x00\x00\x0a\x00\x0c\x00\x06\x00\x05\x00\x08\x00\x0a\x00\x00\x00\x00\x01\x03\x00\x0c\x00\x00\x00\x08\x00\x08\x00\x00\x00\x04\x00\x08\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00\xe4\x00\x00\x00\x9c\x00\x00\x00\x6c\x00\x00\x00\x34\x00\x00\x00\x04\x00\x00\x00\x40\xff\xff\xff\x00\x00\x00\x02\x18\x00\x00\x00\x0c\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x72\xff\xff\xff\x08\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x33\x00\x00\x00\x00\x6c\xff\xff\xff\x00\x00\x00\x03\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x06\x00\x06\x00\x00\x00\x00\x00\x01\x00\x04\x00\x00\x00\x76\x61\x6c\x32\x00\x00\x00\x00\xa0\xff\xff\xff\x00\x00\x00\x05\x18\x00\x00\x00\x10\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x04\x00\x04\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x31\x00\x00\x00\x00\xcc\xff\xff\xff\x00\x00\x00\x02\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x04\x00\x06\x00\x00\x00\x10\x00\x00\x00\x07\x00\x00\x00\x62\x6c\x6f\x63\x6b\x4e\x6f\x00\x10\x00\x14\x00\x08\x00\x00\x00\x07\x00\x0c\x00\x00\x00\x10\x00\x10\x00\x00\x00\x00\x00\x00\x02\x24\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x00\x0c\x00\x08\x00\x07\x00\x08\x00\x00\x00\x00\x00\x00\x01\x40\x00\x00\x00\x02\x00\x00\x00\x69\x64\x00\x00\xff\xff\xff\xff\x58\x01\x00\x00\x14\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x16\x00\x06\x00\x05\x00\x08\x00\x0c\x00\x0c\x00\x00\x00\x00\x03\x03\x00\x18\x00\x00\x00\x48\x01\x00\x00\x00\x00\x00\x00\x00\x00\x0a\x00\x18\x00\x0c\x00\x04\x00\x08\x00\x0a\x00\x00\x00\xcc\x00\x00\x00\x10\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0b\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x78\x00\x00\x00\x00\x00\x00\x00\x78\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x78\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x98\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x98\x00\x00\x00\x00\x00\x00\x00\x40\x00\x00\x00\x00\x00\x00\x00\xd8\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\xf8\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xf8\x00\x00\x00\x00\x00\x00\x00\x40\x00\x00\x00\x00\x00\x00\x00\x38\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x38\x01\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\x00\x00\x00\x00\x06\x00\x00\x00\x00\x00\x00\x00\x07\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x09\x00\x00\x00\x00\x00\x00\x00\x0a\x00\x00\x00\x00\x00\x00\x00\x0b\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00\x00\x00\x00\x0d\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x04\x00\x00\x00\x06\x00\x00\x00\x08\x00\x00\x00\x0a\x00\x00\x00\x0c\x00\x00\x00\x0e\x00\x00\x00\x10\x00\x00\x00\x12\x00\x00\x00\x14\x00\x00\x00\x16\x00\x00\x00\x
18\x00\x00\x00\x1a\x00\x00\x00\x1c\x00\x00\x00\x1e\x00\x00\x00\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x00\x00\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x00\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x00\xff\xff\xff\xff\x00\x00\x00\x00', - # '\xff\xff\xff\xff\x48\x01\x00\x00\x10\x00\x00\x00\x00\x00\x0a\x00\x0c\x00\x06\x00\x05\x00\x08\x00\x0a\x00\x00\x00\x00\x01\x03\x00\x0c\x00\x00\x00\x08\x00\x08\x00\x00\x00\x04\x00\x08\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00\xe4\x00\x00\x00\x9c\x00\x00\x00\x6c\x00\x00\x00\x34\x00\x00\x00\x04\x00\x00\x00\x40\xff\xff\xff\x00\x00\x00\x02\x18\x00\x00\x00\x0c\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x72\xff\xff\xff\x08\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x33\x00\x00\x00\x00\x6c\xff\xff\xff\x00\x00\x00\x03\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x06\x00\x06\x00\x00\x00\x00\x00\x01\x00\x04\x00\x00\x00\x76\x61\x6c\x32\x00\x00\x00\x00\xa0\xff\xff\xff\x00\x00\x00\x05\x18\x00\x00\x00\x10\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x04\x00\x04\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x31\x00\x00\x00\x00\xcc\xff\xff\xff\x00\x00\x00\x02\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x04\x00\x06\x00\x00\x00\x10\x00\x00\x00\x07\x00\x00\x00\x62\x6c\x6f\x63\x6b\x4e\x6f\x00\x10\x00\x14\x00\x08\x00\x00\x00\x07\x00\x0c\x00\x00\x00\x10\x00\x10\x00\x00\x00\x00\x00\x00\x02\x24\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x00\x0c\x00\x08\x00\x07\x00\x08\x00\x00\x00\x00\x00\x00\x01\x40\x00\x00\x00\x02\x00\x00\x00\x69\x64\x00\x00\xff\xff\xff\xff\x58\x01\x00\x00\x14\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x16\x00\x06\x00\x05\x00\x08\x00\x0c\x00\x0c\x00\x00\x00\x00\x03\x03\x00\x18\x00\x00\x00\x30\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0a\x00\x18\x00\x0c\x00\x04\x00\x08\x00\x0a\x00\x00\x00\xcc\x00\x00\x00\x10\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0b\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x18\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x28\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x28\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x41\x4d\x00\x00\x00\x00\x00\x00\x00\x00\x00\x3f\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\xff\xff\xff\xff\x00\x00\x00\x00', - # ], - # }, + }, + 
'Avro': { + # It seems impossible to send more than one avro file per a message + # because of nature of Avro: blocks go one after another + 'data_sample': [ + avro_message({'id': 0, 'blockNo': 0, 'val1': str('AM'), 'val2': 0.5, "val3": 1}), + + avro_message([{'id': id, 'blockNo': 0, 'val1': str('AM'), + 'val2': 0.5, "val3": 1} for id in range(1, 16)]), + + avro_message({'id': 0, 'blockNo': 0, 'val1': str('AM'), 'val2': 0.5, "val3": 1}), + ], + 'supports_empty_value': False, + }, + 'Arrow' : { + 'data_sample' : [ + b'\x41\x52\x52\x4f\x57\x31\x00\x00\xff\xff\xff\xff\x48\x01\x00\x00\x10\x00\x00\x00\x00\x00\x0a\x00\x0c\x00\x06\x00\x05\x00\x08\x00\x0a\x00\x00\x00\x00\x01\x03\x00\x0c\x00\x00\x00\x08\x00\x08\x00\x00\x00\x04\x00\x08\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00\xe4\x00\x00\x00\x9c\x00\x00\x00\x6c\x00\x00\x00\x34\x00\x00\x00\x04\x00\x00\x00\x40\xff\xff\xff\x00\x00\x00\x02\x18\x00\x00\x00\x0c\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x72\xff\xff\xff\x08\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x33\x00\x00\x00\x00\x6c\xff\xff\xff\x00\x00\x00\x03\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x06\x00\x06\x00\x00\x00\x00\x00\x01\x00\x04\x00\x00\x00\x76\x61\x6c\x32\x00\x00\x00\x00\xa0\xff\xff\xff\x00\x00\x00\x05\x18\x00\x00\x00\x10\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x04\x00\x04\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x31\x00\x00\x00\x00\xcc\xff\xff\xff\x00\x00\x00\x02\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x04\x00\x06\x00\x00\x00\x10\x00\x00\x00\x07\x00\x00\x00\x62\x6c\x6f\x63\x6b\x4e\x6f\x00\x10\x00\x14\x00\x08\x00\x00\x00\x07\x00\x0c\x00\x00\x00\x10\x00\x10\x00\x00\x00\x00\x00\x00\x02\x24\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x00\x0c\x00\x08\x00\x07\x00\x08\x00\x00\x00\x00\x00\x00\x01\x40\x00\x00\x00\x02\x00\x00\x00\x69\x64\x00\x00\xff\xff\xff\xff\x58\x01\x00\x00\x14\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x16\x00\x06\x00\x05\x00\x08\x00\x0c\x00\x0c\x00\x00\x00\x00\x03\x03\x00\x18\x00\x00\x00\x30\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0a\x00\x18\x00\x0c\x00\x04\x00\x08\x00\x0a\x00\x00\x00\xcc\x00\x00\x00\x10\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0b\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x18\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x28\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x28\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x41\x4d\x00\x00\x00\x00\x00\x00\x00\x00\x00\x3f\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\xff\xff\xff\xff\x00\x00\x00\x00\x10\x00\x00\x00\x0c\x00
\x14\x00\x06\x00\x08\x00\x0c\x00\x10\x00\x0c\x00\x00\x00\x00\x00\x03\x00\x3c\x00\x00\x00\x28\x00\x00\x00\x04\x00\x00\x00\x01\x00\x00\x00\x58\x01\x00\x00\x00\x00\x00\x00\x60\x01\x00\x00\x00\x00\x00\x00\x30\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x08\x00\x00\x00\x04\x00\x08\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00\xe4\x00\x00\x00\x9c\x00\x00\x00\x6c\x00\x00\x00\x34\x00\x00\x00\x04\x00\x00\x00\x40\xff\xff\xff\x00\x00\x00\x02\x18\x00\x00\x00\x0c\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x72\xff\xff\xff\x08\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x33\x00\x00\x00\x00\x6c\xff\xff\xff\x00\x00\x00\x03\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x06\x00\x06\x00\x00\x00\x00\x00\x01\x00\x04\x00\x00\x00\x76\x61\x6c\x32\x00\x00\x00\x00\xa0\xff\xff\xff\x00\x00\x00\x05\x18\x00\x00\x00\x10\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x04\x00\x04\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x31\x00\x00\x00\x00\xcc\xff\xff\xff\x00\x00\x00\x02\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x04\x00\x06\x00\x00\x00\x10\x00\x00\x00\x07\x00\x00\x00\x62\x6c\x6f\x63\x6b\x4e\x6f\x00\x10\x00\x14\x00\x08\x00\x00\x00\x07\x00\x0c\x00\x00\x00\x10\x00\x10\x00\x00\x00\x00\x00\x00\x02\x24\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x00\x0c\x00\x08\x00\x07\x00\x08\x00\x00\x00\x00\x00\x00\x01\x40\x00\x00\x00\x02\x00\x00\x00\x69\x64\x00\x00\x78\x01\x00\x00\x41\x52\x52\x4f\x57\x31', + b'\x41\x52\x52\x4f\x57\x31\x00\x00\xff\xff\xff\xff\x48\x01\x00\x00\x10\x00\x00\x00\x00\x00\x0a\x00\x0c\x00\x06\x00\x05\x00\x08\x00\x0a\x00\x00\x00\x00\x01\x03\x00\x0c\x00\x00\x00\x08\x00\x08\x00\x00\x00\x04\x00\x08\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00\xe4\x00\x00\x00\x9c\x00\x00\x00\x6c\x00\x00\x00\x34\x00\x00\x00\x04\x00\x00\x00\x40\xff\xff\xff\x00\x00\x00\x02\x18\x00\x00\x00\x0c\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x72\xff\xff\xff\x08\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x33\x00\x00\x00\x00\x6c\xff\xff\xff\x00\x00\x00\x03\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x06\x00\x06\x00\x00\x00\x00\x00\x01\x00\x04\x00\x00\x00\x76\x61\x6c\x32\x00\x00\x00\x00\xa0\xff\xff\xff\x00\x00\x00\x05\x18\x00\x00\x00\x10\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x04\x00\x04\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x31\x00\x00\x00\x00\xcc\xff\xff\xff\x00\x00\x00\x02\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x04\x00\x06\x00\x00\x00\x10\x00\x00\x00\x07\x00\x00\x00\x62\x6c\x6f\x63\x6b\x4e\x6f\x00\x10\x00\x14\x00\x08\x00\x00\x00\x07\x00\x0c\x00\x00\x00\x10\x00\x10\x00\x00\x00\x00\x00\x00\x02\x24\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x00\x0c\x00\x08\x00\x07\x00\x08\x00\x00\x00\x00\x00\x00\x01\x40\x00\x00\x00\x02\x00\x00\x00\x69\x64\x00\x00\xff\xff\xff\xff\x58\x01\x00\x00\x14\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x16\x00\x06\x00\x05\x00\x08\x00\x0c\x00\x0c\x00\x00\x00\x00\x03\x03\x00\x18\x00\x00\x00\x48\x01\x00\x00\x00\x00\x00\x00\x00\x00\x0a\x00\x18\x00\x0c\x00\x04\x00\x08\x00\x0a\x00\x00\x00\xcc\x00\x00\x00\x10\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0b\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x78\x00\x00\x00\x00\x00\x00\x00\x78\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x78\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x98\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00
\x00\x00\x00\x00\x00\x98\x00\x00\x00\x00\x00\x00\x00\x40\x00\x00\x00\x00\x00\x00\x00\xd8\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\xf8\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xf8\x00\x00\x00\x00\x00\x00\x00\x40\x00\x00\x00\x00\x00\x00\x00\x38\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x38\x01\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\x00\x00\x00\x00\x06\x00\x00\x00\x00\x00\x00\x00\x07\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x09\x00\x00\x00\x00\x00\x00\x00\x0a\x00\x00\x00\x00\x00\x00\x00\x0b\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00\x00\x00\x00\x0d\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x04\x00\x00\x00\x06\x00\x00\x00\x08\x00\x00\x00\x0a\x00\x00\x00\x0c\x00\x00\x00\x0e\x00\x00\x00\x10\x00\x00\x00\x12\x00\x00\x00\x14\x00\x00\x00\x16\x00\x00\x00\x18\x00\x00\x00\x1a\x00\x00\x00\x1c\x00\x00\x00\x1e\x00\x00\x00\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x00\x00\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x00\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x00\xff\xff\xff\xff\x00\x00\x00\x00\x10\x00\x00\x00\x0c\x00\x14\x00\x06\x00\x08\x00\x0c\x00\x10\x00\x0c\x00\x00\x00\x00\x00\x03\x00\x3c\x00\x00\x00\x28\x00\x00\x00\x04\x00\x00\x00\x01\x00\x00\x00\x58\x01\x00\x00\x00\x00\x00\x00\x60\x01\x00\x00\x00\x00\x00\x00\x48\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x08\x00\x00\x00\x04\x00\x08\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00\xe4\x00\x00\x00\x9c\x00\x00\x00\x6c\x00\x00\x00\x34\x00\x00\x00\x04\x00\x00\x00\x40\xff\xff\xff\x00\x00\x00\x02\x18\x00\x00\x00\x0c\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x72\xff\xff\xff\x08\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x33\x00\x00\x00\x00\x6c\xff\xff\xff\x00\x00\x00\x03\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x06\x00\x06\x00\x00\x00\x00\x00\x01\x00\x04\x00\x00\x00\x76\x61\x6c\x32\x00\x00\x00\x00\xa0\xff\xff\xff\x00\x00\x00\x05\x18\x00\x00\x00\x10\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x04\x00\x04\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x31\x00\x00\x00\x00\xcc\xff\xff\xff\x00\x00\x00\x02\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x04\x00\x06\x00\x00\x00\x10\x00\x00\x00\x07\x00\x00\x00\x62\x6c\x6f\x63\x6b\x4e\x6f\x00\x10\x00\x14\x00\x08\x00\x00\x00\x07\x00\x0c\x00\x00\x00\x10\x00\x10\x00\x00\x00\x00\x00\x00\x02\x24\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x00\x0c\x00\x08\x00\x07\x00\x08\x00\x00\x00\x00\x00\x00\x0
1\x40\x00\x00\x00\x02\x00\x00\x00\x69\x64\x00\x00\x78\x01\x00\x00\x41\x52\x52\x4f\x57\x31', + b'\x41\x52\x52\x4f\x57\x31\x00\x00\xff\xff\xff\xff\x48\x01\x00\x00\x10\x00\x00\x00\x00\x00\x0a\x00\x0c\x00\x06\x00\x05\x00\x08\x00\x0a\x00\x00\x00\x00\x01\x03\x00\x0c\x00\x00\x00\x08\x00\x08\x00\x00\x00\x04\x00\x08\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00\xe4\x00\x00\x00\x9c\x00\x00\x00\x6c\x00\x00\x00\x34\x00\x00\x00\x04\x00\x00\x00\x40\xff\xff\xff\x00\x00\x00\x02\x18\x00\x00\x00\x0c\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x72\xff\xff\xff\x08\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x33\x00\x00\x00\x00\x6c\xff\xff\xff\x00\x00\x00\x03\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x06\x00\x06\x00\x00\x00\x00\x00\x01\x00\x04\x00\x00\x00\x76\x61\x6c\x32\x00\x00\x00\x00\xa0\xff\xff\xff\x00\x00\x00\x05\x18\x00\x00\x00\x10\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x04\x00\x04\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x31\x00\x00\x00\x00\xcc\xff\xff\xff\x00\x00\x00\x02\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x04\x00\x06\x00\x00\x00\x10\x00\x00\x00\x07\x00\x00\x00\x62\x6c\x6f\x63\x6b\x4e\x6f\x00\x10\x00\x14\x00\x08\x00\x00\x00\x07\x00\x0c\x00\x00\x00\x10\x00\x10\x00\x00\x00\x00\x00\x00\x02\x24\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x00\x0c\x00\x08\x00\x07\x00\x08\x00\x00\x00\x00\x00\x00\x01\x40\x00\x00\x00\x02\x00\x00\x00\x69\x64\x00\x00\xff\xff\xff\xff\x58\x01\x00\x00\x14\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x16\x00\x06\x00\x05\x00\x08\x00\x0c\x00\x0c\x00\x00\x00\x00\x03\x03\x00\x18\x00\x00\x00\x30\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0a\x00\x18\x00\x0c\x00\x04\x00\x08\x00\x0a\x00\x00\x00\xcc\x00\x00\x00\x10\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0b\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x18\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x28\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x28\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x41\x4d\x00\x00\x00\x00\x00\x00\x00\x00\x00\x3f\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\xff\xff\xff\xff\x00\x00\x00\x00\x10\x00\x00\x00\x0c\x00\x14\x00\x06\x00\x08\x00\x0c\x00\x10\x00\x0c\x00\x00\x00\x00\x00\x03\x00\x3c\x00\x00\x00\x28\x00\x00\x00\x04\x00\x00\x00\x01\x00\x00\x00\x58\x01\x00\x00\x00\x00\x00\x00\x60\x01\x00\x00\x00\x00\x00\x00\x30\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x08\x00\x00\x00\x04\x00\x08\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00\xe4\x00\x00\x00\x9c\x00\x00\x00\x6c\x00\x00\x00\x34\x00\x00\x00\x0
4\x00\x00\x00\x40\xff\xff\xff\x00\x00\x00\x02\x18\x00\x00\x00\x0c\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x72\xff\xff\xff\x08\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x33\x00\x00\x00\x00\x6c\xff\xff\xff\x00\x00\x00\x03\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x06\x00\x06\x00\x00\x00\x00\x00\x01\x00\x04\x00\x00\x00\x76\x61\x6c\x32\x00\x00\x00\x00\xa0\xff\xff\xff\x00\x00\x00\x05\x18\x00\x00\x00\x10\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x04\x00\x04\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x31\x00\x00\x00\x00\xcc\xff\xff\xff\x00\x00\x00\x02\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x04\x00\x06\x00\x00\x00\x10\x00\x00\x00\x07\x00\x00\x00\x62\x6c\x6f\x63\x6b\x4e\x6f\x00\x10\x00\x14\x00\x08\x00\x00\x00\x07\x00\x0c\x00\x00\x00\x10\x00\x10\x00\x00\x00\x00\x00\x00\x02\x24\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x00\x0c\x00\x08\x00\x07\x00\x08\x00\x00\x00\x00\x00\x00\x01\x40\x00\x00\x00\x02\x00\x00\x00\x69\x64\x00\x00\x78\x01\x00\x00\x41\x52\x52\x4f\x57\x31', + ], + }, + 'ArrowStream' : { + 'data_sample' : [ + b'\xff\xff\xff\xff\x48\x01\x00\x00\x10\x00\x00\x00\x00\x00\x0a\x00\x0c\x00\x06\x00\x05\x00\x08\x00\x0a\x00\x00\x00\x00\x01\x03\x00\x0c\x00\x00\x00\x08\x00\x08\x00\x00\x00\x04\x00\x08\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00\xe4\x00\x00\x00\x9c\x00\x00\x00\x6c\x00\x00\x00\x34\x00\x00\x00\x04\x00\x00\x00\x40\xff\xff\xff\x00\x00\x00\x02\x18\x00\x00\x00\x0c\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x72\xff\xff\xff\x08\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x33\x00\x00\x00\x00\x6c\xff\xff\xff\x00\x00\x00\x03\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x06\x00\x06\x00\x00\x00\x00\x00\x01\x00\x04\x00\x00\x00\x76\x61\x6c\x32\x00\x00\x00\x00\xa0\xff\xff\xff\x00\x00\x00\x05\x18\x00\x00\x00\x10\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x04\x00\x04\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x31\x00\x00\x00\x00\xcc\xff\xff\xff\x00\x00\x00\x02\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x04\x00\x06\x00\x00\x00\x10\x00\x00\x00\x07\x00\x00\x00\x62\x6c\x6f\x63\x6b\x4e\x6f\x00\x10\x00\x14\x00\x08\x00\x00\x00\x07\x00\x0c\x00\x00\x00\x10\x00\x10\x00\x00\x00\x00\x00\x00\x02\x24\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x00\x0c\x00\x08\x00\x07\x00\x08\x00\x00\x00\x00\x00\x00\x01\x40\x00\x00\x00\x02\x00\x00\x00\x69\x64\x00\x00\xff\xff\xff\xff\x58\x01\x00\x00\x14\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x16\x00\x06\x00\x05\x00\x08\x00\x0c\x00\x0c\x00\x00\x00\x00\x03\x03\x00\x18\x00\x00\x00\x30\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0a\x00\x18\x00\x0c\x00\x04\x00\x08\x00\x0a\x00\x00\x00\xcc\x00\x00\x00\x10\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0b\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x18\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x28\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x28\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x00\
x00\x00\x00\x05\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x41\x4d\x00\x00\x00\x00\x00\x00\x00\x00\x00\x3f\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\xff\xff\xff\xff\x00\x00\x00\x00', + b'\xff\xff\xff\xff\x48\x01\x00\x00\x10\x00\x00\x00\x00\x00\x0a\x00\x0c\x00\x06\x00\x05\x00\x08\x00\x0a\x00\x00\x00\x00\x01\x03\x00\x0c\x00\x00\x00\x08\x00\x08\x00\x00\x00\x04\x00\x08\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00\xe4\x00\x00\x00\x9c\x00\x00\x00\x6c\x00\x00\x00\x34\x00\x00\x00\x04\x00\x00\x00\x40\xff\xff\xff\x00\x00\x00\x02\x18\x00\x00\x00\x0c\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x72\xff\xff\xff\x08\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x33\x00\x00\x00\x00\x6c\xff\xff\xff\x00\x00\x00\x03\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x06\x00\x06\x00\x00\x00\x00\x00\x01\x00\x04\x00\x00\x00\x76\x61\x6c\x32\x00\x00\x00\x00\xa0\xff\xff\xff\x00\x00\x00\x05\x18\x00\x00\x00\x10\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x04\x00\x04\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x31\x00\x00\x00\x00\xcc\xff\xff\xff\x00\x00\x00\x02\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x04\x00\x06\x00\x00\x00\x10\x00\x00\x00\x07\x00\x00\x00\x62\x6c\x6f\x63\x6b\x4e\x6f\x00\x10\x00\x14\x00\x08\x00\x00\x00\x07\x00\x0c\x00\x00\x00\x10\x00\x10\x00\x00\x00\x00\x00\x00\x02\x24\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x00\x0c\x00\x08\x00\x07\x00\x08\x00\x00\x00\x00\x00\x00\x01\x40\x00\x00\x00\x02\x00\x00\x00\x69\x64\x00\x00\xff\xff\xff\xff\x58\x01\x00\x00\x14\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x16\x00\x06\x00\x05\x00\x08\x00\x0c\x00\x0c\x00\x00\x00\x00\x03\x03\x00\x18\x00\x00\x00\x48\x01\x00\x00\x00\x00\x00\x00\x00\x00\x0a\x00\x18\x00\x0c\x00\x04\x00\x08\x00\x0a\x00\x00\x00\xcc\x00\x00\x00\x10\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0b\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x78\x00\x00\x00\x00\x00\x00\x00\x78\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x78\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x98\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x98\x00\x00\x00\x00\x00\x00\x00\x40\x00\x00\x00\x00\x00\x00\x00\xd8\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\xf8\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xf8\x00\x00\x00\x00\x00\x00\x00\x40\x00\x00\x00\x00\x00\x00\x00\x38\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x38\x01\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\x00\x00\x00\x00\x06\x00\x00\x00\x00\x00\x00\x00\x07\x00\x00\x00\x00\x00\x00\x00\
x08\x00\x00\x00\x00\x00\x00\x00\x09\x00\x00\x00\x00\x00\x00\x00\x0a\x00\x00\x00\x00\x00\x00\x00\x0b\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00\x00\x00\x00\x0d\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x04\x00\x00\x00\x06\x00\x00\x00\x08\x00\x00\x00\x0a\x00\x00\x00\x0c\x00\x00\x00\x0e\x00\x00\x00\x10\x00\x00\x00\x12\x00\x00\x00\x14\x00\x00\x00\x16\x00\x00\x00\x18\x00\x00\x00\x1a\x00\x00\x00\x1c\x00\x00\x00\x1e\x00\x00\x00\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x00\x00\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x00\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x00\xff\xff\xff\xff\x00\x00\x00\x00', + b'\xff\xff\xff\xff\x48\x01\x00\x00\x10\x00\x00\x00\x00\x00\x0a\x00\x0c\x00\x06\x00\x05\x00\x08\x00\x0a\x00\x00\x00\x00\x01\x03\x00\x0c\x00\x00\x00\x08\x00\x08\x00\x00\x00\x04\x00\x08\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00\xe4\x00\x00\x00\x9c\x00\x00\x00\x6c\x00\x00\x00\x34\x00\x00\x00\x04\x00\x00\x00\x40\xff\xff\xff\x00\x00\x00\x02\x18\x00\x00\x00\x0c\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x72\xff\xff\xff\x08\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x33\x00\x00\x00\x00\x6c\xff\xff\xff\x00\x00\x00\x03\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x06\x00\x06\x00\x00\x00\x00\x00\x01\x00\x04\x00\x00\x00\x76\x61\x6c\x32\x00\x00\x00\x00\xa0\xff\xff\xff\x00\x00\x00\x05\x18\x00\x00\x00\x10\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x04\x00\x04\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x31\x00\x00\x00\x00\xcc\xff\xff\xff\x00\x00\x00\x02\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x04\x00\x06\x00\x00\x00\x10\x00\x00\x00\x07\x00\x00\x00\x62\x6c\x6f\x63\x6b\x4e\x6f\x00\x10\x00\x14\x00\x08\x00\x00\x00\x07\x00\x0c\x00\x00\x00\x10\x00\x10\x00\x00\x00\x00\x00\x00\x02\x24\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x00\x0c\x00\x08\x00\x07\x00\x08\x00\x00\x00\x00\x00\x00\x01\x40\x00\x00\x00\x02\x00\x00\x00\x69\x64\x00\x00\xff\xff\xff\xff\x58\x01\x00\x00\x14\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x16\x00\x06\x00\x05\x00\x08\x00\x0c\x00\x0c\x00\x00\x00\x00\x03\x03\x00\x18\x00\x00\x00\x30\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0a\x00\x18\x00\x0c\x00\x04\x00\x08\x00\x0a\x00\x00\x00\xcc\x00\x00\x00\x10\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0b\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x18\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x28\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x28\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\
x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x41\x4d\x00\x00\x00\x00\x00\x00\x00\x00\x00\x3f\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\xff\xff\xff\xff\x00\x00\x00\x00',
+            ],
+        },
+    }
 
     for format_name, format_opts in list(all_formats.items()):
@@ -2454,8 +2467,6 @@ def test_kafka_issue14202(kafka_cluster):
             kafka_format = 'JSONEachRow';
         ''')
 
-    time.sleep(3)
-
     instance.query(
         'INSERT INTO test.kafka_q SELECT t, some_string FROM ( SELECT dt AS t, some_string FROM test.empty_table )')
     # check instance is alive
@@ -2493,6 +2504,382 @@ def test_kafka_csv_with_thread_per_consumer(kafka_cluster):
     kafka_check_result(result, True)
 
 
+def random_string(size=8):
+    return ''.join(random.choices(string.ascii_uppercase + string.digits, k=size))
+
+@pytest.mark.timeout(180)
+def test_kafka_engine_put_errors_to_stream(kafka_cluster):
+    instance.query('''
+        DROP TABLE IF EXISTS test.kafka;
+        DROP TABLE IF EXISTS test.kafka_data;
+        DROP TABLE IF EXISTS test.kafka_errors;
+        CREATE TABLE test.kafka (i Int64, s String)
+            ENGINE = Kafka
+            SETTINGS kafka_broker_list = 'kafka1:19092',
+                     kafka_topic_list = 'json',
+                     kafka_group_name = 'json',
+                     kafka_format = 'JSONEachRow',
+                     kafka_max_block_size = 128,
+                     kafka_handle_error_mode = 'stream';
+        CREATE MATERIALIZED VIEW test.kafka_data (i Int64, s String)
+            ENGINE = MergeTree
+            ORDER BY i
+            AS SELECT i, s FROM test.kafka WHERE length(_error) == 0;
+        CREATE MATERIALIZED VIEW test.kafka_errors (topic String, partition Int64, offset Int64, raw String, error String)
+            ENGINE = MergeTree
+            ORDER BY (topic, offset)
+            AS SELECT
+                _topic AS topic,
+                _partition AS partition,
+                _offset AS offset,
+                _raw_message AS raw,
+                _error AS error
+            FROM test.kafka WHERE length(_error) > 0;
+    ''')
+
+    messages = []
+    for i in range(128):
+        if i % 2 == 0:
+            messages.append(json.dumps({'i': i, 's': random_string(8)}))
+        else:
+            # Unexpected json content for table test.kafka.
+            messages.append(json.dumps({'i': 'n_' + random_string(4), 's': random_string(8)}))
+
+    kafka_produce('json', messages)
+
+    # 128 messages were produced, alternating between 64 valid rows and 64
+    # broken ones, so poll until each view has consumed its half.
+    while True:
+        total_rows = instance.query('SELECT count() FROM test.kafka_data', ignore_error=True)
+        if total_rows == '64\n':
+            break
+
+    while True:
+        total_error_rows = instance.query('SELECT count() FROM test.kafka_errors', ignore_error=True)
+        if total_error_rows == '64\n':
+            break
+
+    instance.query('''
+        DROP TABLE test.kafka;
+        DROP TABLE test.kafka_data;
+        DROP TABLE test.kafka_errors;
+    ''')
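+
+# A minimal debugging sketch, assuming the test.kafka_errors view created
+# above still exists and using the module-level `instance`; `dump_kafka_errors`
+# is a hypothetical helper (its name, columns and LIMIT are assumptions of
+# this sketch) and is not called by the suite.
+def dump_kafka_errors(limit=10):
+    # Render each captured raw message next to the parse error it triggered.
+    return instance.query(
+        'SELECT offset, raw, error FROM test.kafka_errors ORDER BY offset LIMIT {}'.format(limit))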
+
+def gen_normal_json():
+    return '{"i":1000, "s":"ABC123abc"}'
+
+def gen_malformed_json():
+    return '{"i":"n1000", "s":"1000"}'
+
+def gen_message_with_jsons(jsons=10, malformed=0):
+    # Build one Kafka message that concatenates several space-separated JSON
+    # rows; when malformed is set, roughly half of them are broken at random.
+    s = io.StringIO()
+    for i in range(jsons):
+        if malformed and random.randint(0, 1) == 1:
+            s.write(gen_malformed_json())
+        else:
+            s.write(gen_normal_json())
+        s.write(' ')
+    return s.getvalue()
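+
+# For reference, a quick worked example: with malformed=0 the generator above
+# is deterministic, so gen_message_with_jsons(2, 0) yields one message holding
+# two space-separated JSON rows (note the trailing space):
+#   '{"i":1000, "s":"ABC123abc"} {"i":1000, "s":"ABC123abc"} '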
+
+
+def test_kafka_engine_put_errors_to_stream_with_random_malformed_json(kafka_cluster):
+    instance.query('''
+        DROP TABLE IF EXISTS test.kafka;
+        DROP TABLE IF EXISTS test.kafka_data;
+        DROP TABLE IF EXISTS test.kafka_errors;
+        CREATE TABLE test.kafka (i Int64, s String)
+            ENGINE = Kafka
+            SETTINGS kafka_broker_list = 'kafka1:19092',
+                     kafka_topic_list = 'json',
+                     kafka_group_name = 'json',
+                     kafka_format = 'JSONEachRow',
+                     kafka_max_block_size = 100,
+                     kafka_poll_max_batch_size = 1,
+                     kafka_handle_error_mode = 'stream';
+        CREATE MATERIALIZED VIEW test.kafka_data (i Int64, s String)
+            ENGINE = MergeTree
+            ORDER BY i
+            AS SELECT i, s FROM test.kafka WHERE length(_error) == 0;
+        CREATE MATERIALIZED VIEW test.kafka_errors (topic String, partition Int64, offset Int64, raw String, error String)
+            ENGINE = MergeTree
+            ORDER BY (topic, offset)
+            AS SELECT
+                _topic AS topic,
+                _partition AS partition,
+                _offset AS offset,
+                _raw_message AS raw,
+                _error AS error
+            FROM test.kafka WHERE length(_error) > 0;
+    ''')
+
+    messages = []
+    for i in range(128):
+        if i % 2 == 0:
+            messages.append(gen_message_with_jsons(10, 1))
+        else:
+            messages.append(gen_message_with_jsons(10, 0))
+
+    kafka_produce('json', messages)
+
+    # The 64 fully valid messages carry 10 rows each (640 rows total); each of
+    # the remaining 64 messages almost certainly contains at least one
+    # malformed row (the chance of all ten being clean is 2^-10) and is
+    # expected to surface as a single entry in the error stream.
+    while True:
+        total_rows = instance.query('SELECT count() FROM test.kafka_data', ignore_error=True)
+        if total_rows == '640\n':
+            break
+
+    while True:
+        total_error_rows = instance.query('SELECT count() FROM test.kafka_errors', ignore_error=True)
+        if total_error_rows == '64\n':
+            break
+
+    instance.query('''
+        DROP TABLE test.kafka;
+        DROP TABLE test.kafka_data;
+        DROP TABLE test.kafka_errors;
+    ''')
+
+@pytest.mark.timeout(120)
+def test_kafka_formats_with_broken_message(kafka_cluster):
+    # data was dumped from clickhouse itself in the following manner
+    # clickhouse-client --format=Native --query='SELECT toInt64(number) as id, toUInt16( intDiv( id, 65536 ) ) as blockNo, reinterpretAsString(19777) as val1, toFloat32(0.5) as val2, toUInt8(1) as val3 from numbers(100) ORDER BY id' | xxd -ps | tr -d '\n' | sed 's/\(..\)/\\x\1/g'
+
+    all_formats = {
+        ## Text formats ##
+        # dumped with clickhouse-client ... | perl -pe 's/\n/\\n/; s/\t/\\t/g;'
+        'JSONEachRow': {
+            'data_sample': [
+                '{"id":"0","blockNo":0,"val1":"AM","val2":0.5,"val3":1}\n',
+                '{"id":"1","blockNo":0,"val1":"AM","val2":0.5,"val3":1}\n{"id":"2","blockNo":0,"val1":"AM","val2":0.5,"val3":1}\n{"id":"3","blockNo":0,"val1":"AM","val2":0.5,"val3":1}\n{"id":"4","blockNo":0,"val1":"AM","val2":0.5,"val3":1}\n{"id":"5","blockNo":0,"val1":"AM","val2":0.5,"val3":1}\n{"id":"6","blockNo":0,"val1":"AM","val2":0.5,"val3":1}\n{"id":"7","blockNo":0,"val1":"AM","val2":0.5,"val3":1}\n{"id":"8","blockNo":0,"val1":"AM","val2":0.5,"val3":1}\n{"id":"9","blockNo":0,"val1":"AM","val2":0.5,"val3":1}\n{"id":"10","blockNo":0,"val1":"AM","val2":0.5,"val3":1}\n{"id":"11","blockNo":0,"val1":"AM","val2":0.5,"val3":1}\n{"id":"12","blockNo":0,"val1":"AM","val2":0.5,"val3":1}\n{"id":"13","blockNo":0,"val1":"AM","val2":0.5,"val3":1}\n{"id":"14","blockNo":0,"val1":"AM","val2":0.5,"val3":1}\n{"id":"15","blockNo":0,"val1":"AM","val2":0.5,"val3":1}\n',
+                '{"id":"0","blockNo":0,"val1":"AM","val2":0.5,"val3":1}\n',
+                # broken message
+                '{"id":"0","blockNo":"BAD","val1":"AM","val2":0.5,"val3":1}',
+            ],
+            'expected': '''{"raw_message":"{\\"id\\":\\"0\\",\\"blockNo\\":\\"BAD\\",\\"val1\\":\\"AM\\",\\"val2\\":0.5,\\"val3\\":1}","error":"Cannot parse input: expected '\\"' before: 'BAD\\",\\"val1\\":\\"AM\\",\\"val2\\":0.5,\\"val3\\":1}': (while reading the value of key blockNo)"}''',
+            'supports_empty_value': True,
+            'printable': True,
+        },
+        # JSONAsString doesn't fit this test and is tested separately
+        'JSONCompactEachRow': {
+            'data_sample': [
+                '["0", 0, "AM", 0.5, 1]\n',
+                '["1", 0, "AM", 0.5, 1]\n["2", 0, "AM", 0.5, 1]\n["3", 0, "AM", 0.5, 1]\n["4", 0, "AM", 0.5, 1]\n["5", 0, "AM", 0.5, 1]\n["6", 0, "AM", 0.5, 1]\n["7", 0, "AM", 0.5, 1]\n["8", 0, "AM", 0.5, 1]\n["9", 0, "AM", 0.5, 1]\n["10", 0, "AM", 0.5, 1]\n["11", 0, "AM", 0.5, 1]\n["12", 0, "AM", 0.5, 1]\n["13", 0, "AM", 0.5, 1]\n["14", 0, "AM", 0.5, 1]\n["15", 0, "AM", 0.5, 1]\n',
+                '["0", 0, "AM", 0.5, 1]\n',
+                # broken message
+                '["0", "BAD", "AM", 0.5, 1]',
+            ],
+            'expected': '''{"raw_message":"[\\"0\\", \\"BAD\\", \\"AM\\", 0.5, 1]","error":"Cannot parse input: expected '\\"' before: 'BAD\\", \\"AM\\", 0.5, 1]': (while reading the value of key blockNo)"}''',
+            'supports_empty_value': True,
+            'printable': True,
+        },
+        'JSONCompactEachRowWithNamesAndTypes': {
+            'data_sample': [
+                '["id", "blockNo", "val1", "val2", "val3"]\n["Int64", "UInt16", "String", "Float32", "UInt8"]\n["0", 0, "AM", 0.5, 1]\n',
+                '["id", "blockNo", "val1", "val2", "val3"]\n["Int64", "UInt16", "String", "Float32", "UInt8"]\n["1", 0, "AM", 0.5, 1]\n["2", 0, "AM", 0.5, 1]\n["3", 0, "AM", 0.5, 1]\n["4", 0, "AM", 0.5, 1]\n["5", 0, "AM", 0.5, 1]\n["6", 0, "AM", 0.5, 1]\n["7", 0, "AM", 0.5, 1]\n["8", 0, "AM", 0.5, 1]\n["9", 0, "AM", 0.5, 1]\n["10", 0, "AM", 0.5, 1]\n["11", 0, "AM", 0.5, 1]\n["12", 0, "AM", 0.5, 1]\n["13", 0, "AM", 0.5, 1]\n["14", 0, "AM", 0.5, 1]\n["15", 0, "AM", 0.5, 1]\n',
+                '["id", "blockNo", "val1", "val2", "val3"]\n["Int64", "UInt16", "String", "Float32", "UInt8"]\n["0", 0, "AM", 0.5, 1]\n',
+                # broken message
+                '["0", "BAD", "AM", 0.5, 1]',
+            ],
+            'expected': '''{"raw_message":"[\\"0\\", \\"BAD\\", \\"AM\\", 0.5, 1]","error":"Cannot parse JSON string: expected opening quote"}''',
+            'printable': True,
+        },
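+        # In each entry of this table the last element of 'data_sample' is
+        # deliberately corrupted (see the "# broken message" markers) and
+        # 'expected' is the error-stream row the test asserts for it.
+        # Reading 'printable' as "the sample is plain text rather than
+        # binary" is an assumption of this comment, inferred from which
+        # entries carry the flag.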
'id=1\tblockNo=0\tval1=AM\tval2=0.5\tval3=1\nid=2\tblockNo=0\tval1=AM\tval2=0.5\tval3=1\nid=3\tblockNo=0\tval1=AM\tval2=0.5\tval3=1\nid=4\tblockNo=0\tval1=AM\tval2=0.5\tval3=1\nid=5\tblockNo=0\tval1=AM\tval2=0.5\tval3=1\nid=6\tblockNo=0\tval1=AM\tval2=0.5\tval3=1\nid=7\tblockNo=0\tval1=AM\tval2=0.5\tval3=1\nid=8\tblockNo=0\tval1=AM\tval2=0.5\tval3=1\nid=9\tblockNo=0\tval1=AM\tval2=0.5\tval3=1\nid=10\tblockNo=0\tval1=AM\tval2=0.5\tval3=1\nid=11\tblockNo=0\tval1=AM\tval2=0.5\tval3=1\nid=12\tblockNo=0\tval1=AM\tval2=0.5\tval3=1\nid=13\tblockNo=0\tval1=AM\tval2=0.5\tval3=1\nid=14\tblockNo=0\tval1=AM\tval2=0.5\tval3=1\nid=15\tblockNo=0\tval1=AM\tval2=0.5\tval3=1\n', + 'id=0\tblockNo=0\tval1=AM\tval2=0.5\tval3=1\n', + # broken message + 'id=0\tblockNo=BAD\tval1=AM\tval2=0.5\tval3=1\n', + ], + 'expected':'{"raw_message":"id=0\\tblockNo=BAD\\tval1=AM\\tval2=0.5\\tval3=1\\n","error":"Found garbage after field in TSKV format: blockNo: (at row 1)\\n"}', + 'printable':True, + }, + 'CSV': { + 'data_sample': [ + '0,0,"AM",0.5,1\n', + '1,0,"AM",0.5,1\n2,0,"AM",0.5,1\n3,0,"AM",0.5,1\n4,0,"AM",0.5,1\n5,0,"AM",0.5,1\n6,0,"AM",0.5,1\n7,0,"AM",0.5,1\n8,0,"AM",0.5,1\n9,0,"AM",0.5,1\n10,0,"AM",0.5,1\n11,0,"AM",0.5,1\n12,0,"AM",0.5,1\n13,0,"AM",0.5,1\n14,0,"AM",0.5,1\n15,0,"AM",0.5,1\n', + '0,0,"AM",0.5,1\n', + # broken message + '0,"BAD","AM",0.5,1\n', + ], + 'expected':'''{"raw_message":"0,\\"BAD\\",\\"AM\\",0.5,1\\n","error":"Cannot parse input: expected '\\"' before: 'BAD\\",\\"AM\\",0.5,1\\\\n': Could not print diagnostic info because two last rows aren't in buffer (rare case)\\n"}''', + 'printable':True, + 'supports_empty_value': True, + }, + 'TSV': { + 'data_sample': [ + '0\t0\tAM\t0.5\t1\n', + '1\t0\tAM\t0.5\t1\n2\t0\tAM\t0.5\t1\n3\t0\tAM\t0.5\t1\n4\t0\tAM\t0.5\t1\n5\t0\tAM\t0.5\t1\n6\t0\tAM\t0.5\t1\n7\t0\tAM\t0.5\t1\n8\t0\tAM\t0.5\t1\n9\t0\tAM\t0.5\t1\n10\t0\tAM\t0.5\t1\n11\t0\tAM\t0.5\t1\n12\t0\tAM\t0.5\t1\n13\t0\tAM\t0.5\t1\n14\t0\tAM\t0.5\t1\n15\t0\tAM\t0.5\t1\n', + '0\t0\tAM\t0.5\t1\n', + # broken message + '0\tBAD\tAM\t0.5\t1\n', + ], + 'expected':'''{"raw_message":"0\\tBAD\\tAM\\t0.5\\t1\\n","error":"Cannot parse input: expected '\\\\t' before: 'BAD\\\\tAM\\\\t0.5\\\\t1\\\\n': Could not print diagnostic info because two last rows aren't in buffer (rare case)\\n"}''', + 'supports_empty_value': True, + 'printable':True, + }, + 'CSVWithNames': { + 'data_sample': [ + '"id","blockNo","val1","val2","val3"\n0,0,"AM",0.5,1\n', + '"id","blockNo","val1","val2","val3"\n1,0,"AM",0.5,1\n2,0,"AM",0.5,1\n3,0,"AM",0.5,1\n4,0,"AM",0.5,1\n5,0,"AM",0.5,1\n6,0,"AM",0.5,1\n7,0,"AM",0.5,1\n8,0,"AM",0.5,1\n9,0,"AM",0.5,1\n10,0,"AM",0.5,1\n11,0,"AM",0.5,1\n12,0,"AM",0.5,1\n13,0,"AM",0.5,1\n14,0,"AM",0.5,1\n15,0,"AM",0.5,1\n', + '"id","blockNo","val1","val2","val3"\n0,0,"AM",0.5,1\n', + # broken message + '"id","blockNo","val1","val2","val3"\n0,"BAD","AM",0.5,1\n', + ], + 'expected':'''{"raw_message":"\\"id\\",\\"blockNo\\",\\"val1\\",\\"val2\\",\\"val3\\"\\n0,\\"BAD\\",\\"AM\\",0.5,1\\n","error":"Cannot parse input: expected '\\"' before: 'BAD\\",\\"AM\\",0.5,1\\\\n': Could not print diagnostic info because two last rows aren't in buffer (rare case)\\n"}''', + 'printable':True, + }, + 'Values': { + 'data_sample': [ + "(0,0,'AM',0.5,1)", + "(1,0,'AM',0.5,1),(2,0,'AM',0.5,1),(3,0,'AM',0.5,1),(4,0,'AM',0.5,1),(5,0,'AM',0.5,1),(6,0,'AM',0.5,1),(7,0,'AM',0.5,1),(8,0,'AM',0.5,1),(9,0,'AM',0.5,1),(10,0,'AM',0.5,1),(11,0,'AM',0.5,1),(12,0,'AM',0.5,1),(13,0,'AM',0.5,1),(14,0,'AM',0.5,1),(15,0,'AM',0.5,1)", + "(0,0,'AM',0.5,1)", + 
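+                # Values samples mirror INSERT ... VALUES tuples and need no
+                # trailing row delimiter.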
# broken message + "(0,'BAD','AM',0.5,1)", + ], + 'expected':r'''{"raw_message":"(0,'BAD','AM',0.5,1)","error":"Cannot parse string 'BAD' as UInt16: syntax error at begin of string. Note: there are toUInt16OrZero and toUInt16OrNull functions, which returns zero\/NULL instead of throwing exception.: while executing 'FUNCTION CAST(assumeNotNull(_dummy_0) :: 2, 'UInt16' :: 1) -> CAST(assumeNotNull(_dummy_0), 'UInt16') UInt16 : 4'"}''', + 'supports_empty_value': True, + 'printable':True, + }, + 'TSVWithNames': { + 'data_sample': [ + 'id\tblockNo\tval1\tval2\tval3\n0\t0\tAM\t0.5\t1\n', + 'id\tblockNo\tval1\tval2\tval3\n1\t0\tAM\t0.5\t1\n2\t0\tAM\t0.5\t1\n3\t0\tAM\t0.5\t1\n4\t0\tAM\t0.5\t1\n5\t0\tAM\t0.5\t1\n6\t0\tAM\t0.5\t1\n7\t0\tAM\t0.5\t1\n8\t0\tAM\t0.5\t1\n9\t0\tAM\t0.5\t1\n10\t0\tAM\t0.5\t1\n11\t0\tAM\t0.5\t1\n12\t0\tAM\t0.5\t1\n13\t0\tAM\t0.5\t1\n14\t0\tAM\t0.5\t1\n15\t0\tAM\t0.5\t1\n', + 'id\tblockNo\tval1\tval2\tval3\n0\t0\tAM\t0.5\t1\n', + # broken message + 'id\tblockNo\tval1\tval2\tval3\n0\tBAD\tAM\t0.5\t1\n', + ], + 'expected':'''{"raw_message":"id\\tblockNo\\tval1\\tval2\\tval3\\n0\\tBAD\\tAM\\t0.5\\t1\\n","error":"Cannot parse input: expected '\\\\t' before: 'BAD\\\\tAM\\\\t0.5\\\\t1\\\\n': Could not print diagnostic info because two last rows aren't in buffer (rare case)\\n"}''', + 'supports_empty_value': True, + 'printable':True, + }, + 'TSVWithNamesAndTypes': { + 'data_sample': [ + 'id\tblockNo\tval1\tval2\tval3\nInt64\tUInt16\tString\tFloat32\tUInt8\n0\t0\tAM\t0.5\t1\n', + 'id\tblockNo\tval1\tval2\tval3\nInt64\tUInt16\tString\tFloat32\tUInt8\n1\t0\tAM\t0.5\t1\n2\t0\tAM\t0.5\t1\n3\t0\tAM\t0.5\t1\n4\t0\tAM\t0.5\t1\n5\t0\tAM\t0.5\t1\n6\t0\tAM\t0.5\t1\n7\t0\tAM\t0.5\t1\n8\t0\tAM\t0.5\t1\n9\t0\tAM\t0.5\t1\n10\t0\tAM\t0.5\t1\n11\t0\tAM\t0.5\t1\n12\t0\tAM\t0.5\t1\n13\t0\tAM\t0.5\t1\n14\t0\tAM\t0.5\t1\n15\t0\tAM\t0.5\t1\n', + 'id\tblockNo\tval1\tval2\tval3\nInt64\tUInt16\tString\tFloat32\tUInt8\n0\t0\tAM\t0.5\t1\n', + # broken message + 'id\tblockNo\tval1\tval2\tval3\nInt64\tUInt16\tString\tFloat32\tUInt8\n0\tBAD\tAM\t0.5\t1\n', + ], + 'expected':'''{"raw_message":"id\\tblockNo\\tval1\\tval2\\tval3\\nInt64\\tUInt16\\tString\\tFloat32\\tUInt8\\n0\\tBAD\\tAM\\t0.5\\t1\\n","error":"Cannot parse input: expected '\\\\t' before: 'BAD\\\\tAM\\\\t0.5\\\\t1\\\\n': Could not print diagnostic info because two last rows aren't in buffer (rare case)\\n"}''', + 'printable':True, + }, + 'Native': { + 'data_sample': [ + b'\x05\x01\x02\x69\x64\x05\x49\x6e\x74\x36\x34\x00\x00\x00\x00\x00\x00\x00\x00\x07\x62\x6c\x6f\x63\x6b\x4e\x6f\x06\x55\x49\x6e\x74\x31\x36\x00\x00\x04\x76\x61\x6c\x31\x06\x53\x74\x72\x69\x6e\x67\x02\x41\x4d\x04\x76\x61\x6c\x32\x07\x46\x6c\x6f\x61\x74\x33\x32\x00\x00\x00\x3f\x04\x76\x61\x6c\x33\x05\x55\x49\x6e\x74\x38\x01', + 
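+                # Native samples are raw ClickHouse blocks (hence the bytes
+                # literals); the broken one re-declares blockNo as String carrying
+                # 'BAD', which cannot be converted back to UInt16.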
b'\x05\x0f\x02\x69\x64\x05\x49\x6e\x74\x36\x34\x01\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\x00\x00\x00\x00\x06\x00\x00\x00\x00\x00\x00\x00\x07\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x09\x00\x00\x00\x00\x00\x00\x00\x0a\x00\x00\x00\x00\x00\x00\x00\x0b\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00\x00\x00\x00\x0d\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x07\x62\x6c\x6f\x63\x6b\x4e\x6f\x06\x55\x49\x6e\x74\x31\x36\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x04\x76\x61\x6c\x31\x06\x53\x74\x72\x69\x6e\x67\x02\x41\x4d\x02\x41\x4d\x02\x41\x4d\x02\x41\x4d\x02\x41\x4d\x02\x41\x4d\x02\x41\x4d\x02\x41\x4d\x02\x41\x4d\x02\x41\x4d\x02\x41\x4d\x02\x41\x4d\x02\x41\x4d\x02\x41\x4d\x02\x41\x4d\x04\x76\x61\x6c\x32\x07\x46\x6c\x6f\x61\x74\x33\x32\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x04\x76\x61\x6c\x33\x05\x55\x49\x6e\x74\x38\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01', + b'\x05\x01\x02\x69\x64\x05\x49\x6e\x74\x36\x34\x00\x00\x00\x00\x00\x00\x00\x00\x07\x62\x6c\x6f\x63\x6b\x4e\x6f\x06\x55\x49\x6e\x74\x31\x36\x00\x00\x04\x76\x61\x6c\x31\x06\x53\x74\x72\x69\x6e\x67\x02\x41\x4d\x04\x76\x61\x6c\x32\x07\x46\x6c\x6f\x61\x74\x33\x32\x00\x00\x00\x3f\x04\x76\x61\x6c\x33\x05\x55\x49\x6e\x74\x38\x01', + # broken message + b'\x05\x01\x02\x69\x64\x05\x49\x6e\x74\x36\x34\x00\x00\x00\x00\x00\x00\x00\x00\x07\x62\x6c\x6f\x63\x6b\x4e\x6f\x06\x53\x74\x72\x69\x6e\x67\x03\x42\x41\x44\x04\x76\x61\x6c\x31\x06\x53\x74\x72\x69\x6e\x67\x02\x41\x4d\x04\x76\x61\x6c\x32\x07\x46\x6c\x6f\x61\x74\x33\x32\x00\x00\x00\x3f\x04\x76\x61\x6c\x33\x05\x55\x49\x6e\x74\x38\x01', + ], + 'expected':'''{"raw_message":"050102696405496E743634000000000000000007626C6F636B4E6F06537472696E67034241440476616C3106537472696E6702414D0476616C3207466C6F617433320000003F0476616C330555496E743801","error":"Cannot convert: String to UInt16"}''', + 'printable':False, + }, + 'RowBinary': { + 'data_sample': [ + b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01', + b'\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x02\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x03\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x05\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x06\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x07\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x09\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x0a\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x0b\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x0c\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x0d\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x0e\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01', + 
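+                # RowBinary rows have a fixed layout apart from length-prefixed
+                # Strings; the broken sample swaps blockNo's two UInt16 bytes
+                # for the string '\x03BAD', so decoding runs off the end of the row.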
b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01', + # broken message + b'\x00\x00\x00\x00\x00\x00\x00\x00\x03\x42\x41\x44\x02\x41\x4d\x00\x00\x00\x3f\x01', + ], + 'expected':'{"raw_message":"00000000000000000342414402414D0000003F01","error":"Cannot read all data. Bytes read: 9. Bytes expected: 65.: (at row 1)\\n"}', + 'printable':False, + }, + 'RowBinaryWithNamesAndTypes': { + 'data_sample': [ + b'\x05\x02\x69\x64\x07\x62\x6c\x6f\x63\x6b\x4e\x6f\x04\x76\x61\x6c\x31\x04\x76\x61\x6c\x32\x04\x76\x61\x6c\x33\x05\x49\x6e\x74\x36\x34\x06\x55\x49\x6e\x74\x31\x36\x06\x53\x74\x72\x69\x6e\x67\x07\x46\x6c\x6f\x61\x74\x33\x32\x05\x55\x49\x6e\x74\x38\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01', + b'\x05\x02\x69\x64\x07\x62\x6c\x6f\x63\x6b\x4e\x6f\x04\x76\x61\x6c\x31\x04\x76\x61\x6c\x32\x04\x76\x61\x6c\x33\x05\x49\x6e\x74\x36\x34\x06\x55\x49\x6e\x74\x31\x36\x06\x53\x74\x72\x69\x6e\x67\x07\x46\x6c\x6f\x61\x74\x33\x32\x05\x55\x49\x6e\x74\x38\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x02\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x03\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x05\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x06\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x07\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x09\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x0a\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x0b\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x0c\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x0d\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x0e\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01', + b'\x05\x02\x69\x64\x07\x62\x6c\x6f\x63\x6b\x4e\x6f\x04\x76\x61\x6c\x31\x04\x76\x61\x6c\x32\x04\x76\x61\x6c\x33\x05\x49\x6e\x74\x36\x34\x06\x55\x49\x6e\x74\x31\x36\x06\x53\x74\x72\x69\x6e\x67\x07\x46\x6c\x6f\x61\x74\x33\x32\x05\x55\x49\x6e\x74\x38\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01', + # broken message + b'\x05\x02\x69\x64\x07\x62\x6c\x6f\x63\x6b\x4e\x6f\x04\x76\x61\x6c\x31\x04\x76\x61\x6c\x32\x04\x76\x61\x6c\x33\x05\x49\x6e\x74\x36\x34\x06\x53\x74\x72\x69\x6e\x67\x06\x53\x74\x72\x69\x6e\x67\x07\x46\x6c\x6f\x61\x74\x33\x32\x05\x55\x49\x6e\x74\x38\x00\x00\x00\x00\x00\x00\x00\x00\x03\x42\x41\x44\x02\x41\x4d\x00\x00\x00\x3f\x01', + ], + 'expected':'{"raw_message":"0502696407626C6F636B4E6F0476616C310476616C320476616C3305496E74363406537472696E6706537472696E6707466C6F617433320555496E743800000000000000000342414402414D0000003F01","error":"Cannot read all data. Bytes read: 9. 
Bytes expected: 65.: (at row 1)\\n"}', + 'printable':False, + }, + 'ORC': { + 'data_sample': [ + b'\x4f\x52\x43\x11\x00\x00\x0a\x06\x12\x04\x08\x01\x50\x00\x2b\x00\x00\x0a\x13\x0a\x03\x00\x00\x00\x12\x0c\x08\x01\x12\x06\x08\x00\x10\x00\x18\x00\x50\x00\x30\x00\x00\xe3\x12\xe7\x62\x65\x00\x01\x21\x3e\x0e\x46\x25\x0e\x2e\x46\x03\x21\x46\x03\x09\xa6\x00\x06\x00\x32\x00\x00\xe3\x92\xe4\x62\x65\x00\x01\x21\x01\x0e\x46\x25\x2e\x2e\x26\x47\x5f\x21\x20\x96\x60\x09\x60\x00\x00\x36\x00\x00\xe3\x92\xe1\x62\x65\x00\x01\x21\x61\x0e\x46\x23\x5e\x2e\x46\x03\x21\x66\x03\x3d\x53\x29\x10\x11\xc0\x00\x00\x2b\x00\x00\x0a\x13\x0a\x03\x00\x00\x00\x12\x0c\x08\x01\x12\x06\x08\x02\x10\x02\x18\x02\x50\x00\x05\x00\x00\xff\x00\x03\x00\x00\x30\x07\x00\x00\x40\x00\x80\x05\x00\x00\x41\x4d\x07\x00\x00\x42\x00\x80\x03\x00\x00\x0a\x07\x00\x00\x42\x00\x80\x05\x00\x00\xff\x01\x88\x00\x00\x4d\xca\xc1\x0a\x80\x30\x0c\x03\xd0\x2e\x6b\xcb\x98\x17\xf1\x14\x50\xfc\xff\xcf\xb4\x66\x1e\x3c\x84\x47\x9a\xce\x1c\xb9\x1b\xb7\xf9\xda\x48\x09\x9e\xb2\xf3\x92\xce\x5b\x86\xf6\x56\x7f\x21\x41\x2f\x51\xa6\x7a\xd7\x1d\xe5\xea\xae\x3d\xca\xd5\x83\x71\x60\xd8\x17\xfc\x62\x0f\xa8\x00\x00\xe3\x4a\xe6\x62\xe1\x60\x0c\x60\xe0\xe2\xe3\x60\x14\x62\xe3\x60\x10\x60\x90\x60\x08\x60\x88\x60\xe5\x12\xe0\x60\x54\xe2\xe0\x62\x34\x10\x62\x34\x90\x60\x02\x8a\x70\x71\x09\x01\x45\xb8\xb8\x98\x1c\x7d\x85\x80\x58\x82\x05\x28\xc6\xcd\x25\xca\xc1\x68\xc4\x0b\x52\xc5\x6c\xa0\x67\x2a\x05\x22\xc0\x4a\x21\x86\x31\x09\x30\x81\xb5\xb2\x02\x00\x36\x01\x00\x25\x8c\xbd\x0a\xc2\x30\x14\x85\x73\x6f\x92\xf6\x92\x6a\x09\x01\x21\x64\x92\x4e\x75\x91\x58\x71\xc9\x64\x27\x5d\x2c\x1d\x5d\xfd\x59\xc4\x42\x37\x5f\xc0\x17\xe8\x23\x9b\xc6\xe1\x3b\x70\x0f\xdf\xb9\xc4\xf5\x17\x5d\x41\x5c\x4f\x60\x37\xeb\x53\x0d\x55\x4d\x0b\x23\x01\xb9\x90\x2e\xbf\x0f\xe3\xe3\xdd\x8d\x0e\x5f\x4f\x27\x3e\xb7\x61\x97\xb2\x49\xb9\xaf\x90\x20\x92\x27\x32\x2a\x6b\xf4\xf3\x0d\x1e\x82\x20\xe8\x59\x28\x09\x4c\x46\x4c\x33\xcb\x7a\x76\x95\x41\x47\x9f\x14\x78\x03\xde\x62\x6c\x54\x30\xb1\x51\x0a\xdb\x8b\x89\x58\x11\xbb\x22\xac\x08\x9a\xe5\x6c\x71\xbf\x3d\xb8\x39\x92\xfa\x7f\x86\x1a\xd3\x54\x1e\xa7\xee\xcc\x7e\x08\x9e\x01\x10\x01\x18\x80\x80\x10\x22\x02\x00\x0c\x28\x57\x30\x06\x82\xf4\x03\x03\x4f\x52\x43\x18', + 
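+                # ORC samples are complete files; note the 'ORC' magic bytes
+                # (\x4f\x52\x43) at the start of every sample.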
b'\x4f\x52\x43\x11\x00\x00\x0a\x06\x12\x04\x08\x0f\x50\x00\x2b\x00\x00\x0a\x13\x0a\x03\x00\x00\x00\x12\x0c\x08\x0f\x12\x06\x08\x00\x10\x00\x18\x00\x50\x00\x30\x00\x00\xe3\x12\xe7\x62\x65\x00\x01\x21\x3e\x0e\x7e\x25\x0e\x2e\x46\x43\x21\x46\x4b\x09\xad\x00\x06\x00\x33\x00\x00\x0a\x17\x0a\x03\x00\x00\x00\x12\x10\x08\x0f\x22\x0a\x0a\x02\x41\x4d\x12\x02\x41\x4d\x18\x3c\x50\x00\x3a\x00\x00\xe3\x92\xe1\x62\x65\x00\x01\x21\x61\x0e\x7e\x23\x5e\x2e\x46\x03\x21\x66\x03\x3d\x53\x29\x66\x73\x3d\xd3\x00\x06\x00\x2b\x00\x00\x0a\x13\x0a\x03\x00\x00\x00\x12\x0c\x08\x0f\x12\x06\x08\x02\x10\x02\x18\x1e\x50\x00\x05\x00\x00\x0c\x00\x2b\x00\x00\x31\x32\x33\x34\x35\x36\x37\x38\x39\x31\x30\x31\x31\x31\x32\x31\x33\x31\x34\x31\x35\x09\x00\x00\x06\x01\x03\x02\x09\x00\x00\xc0\x0e\x00\x00\x07\x00\x00\x42\x00\x80\x05\x00\x00\x41\x4d\x0a\x00\x00\xe3\xe2\x42\x01\x00\x09\x00\x00\xc0\x0e\x02\x00\x05\x00\x00\x0c\x01\x94\x00\x00\x2d\xca\xc1\x0e\x80\x30\x08\x03\xd0\xc1\x60\x2e\xf3\x62\x76\x6a\xe2\x0e\xfe\xff\x57\x5a\x3b\x0f\xe4\x51\xe8\x68\xbd\x5d\x05\xe7\xf8\x34\x40\x3a\x6e\x59\xb1\x64\xe0\x91\xa9\xbf\xb1\x97\xd2\x95\x9d\x1e\xca\x55\x3a\x6d\xb4\xd2\xdd\x0b\x74\x9a\x74\xf7\x12\x39\xbd\x97\x7f\x7c\x06\xbb\xa6\x8d\x97\x17\xb4\x00\x00\xe3\x4a\xe6\x62\xe1\xe0\x0f\x60\xe0\xe2\xe3\xe0\x17\x62\xe3\x60\x10\x60\x90\x60\x08\x60\x88\x60\xe5\x12\xe0\xe0\x57\xe2\xe0\x62\x34\x14\x62\xb4\x94\xd0\x02\x8a\xc8\x73\x09\x01\x45\xb8\xb8\x98\x1c\x7d\x85\x80\x58\xc2\x06\x28\x26\xc4\x25\xca\xc1\x6f\xc4\xcb\xc5\x68\x20\xc4\x6c\xa0\x67\x2a\xc5\x6c\xae\x67\x0a\x14\xe6\x87\x1a\xc6\x24\xc0\x24\x21\x07\x32\x0c\x00\x4a\x01\x00\xe3\x60\x16\x58\xc3\x24\xc5\xcd\xc1\x2c\x30\x89\x51\xc2\x4b\xc1\x57\x83\x5f\x49\x83\x83\x47\x88\x95\x91\x89\x99\x85\x55\x8a\x3d\x29\x27\x3f\x39\xdb\x2f\x5f\x8a\x29\x33\x45\x8a\xa5\x2c\x31\xc7\x10\x4c\x1a\x81\x49\x63\x25\x26\x0e\x46\x20\x66\x07\x63\x36\x0e\x3e\x0d\x26\x03\x10\x9f\xd1\x80\xdf\x8a\x85\x83\x3f\x80\xc1\x8a\x8f\x83\x5f\x88\x8d\x83\x41\x80\x41\x82\x21\x80\x21\x82\xd5\x4a\x80\x83\x5f\x89\x83\x8b\xd1\x50\x88\xd1\x52\x42\x0b\x28\x22\x6f\x25\x04\x14\xe1\xe2\x62\x72\xf4\x15\x02\x62\x09\x1b\xa0\x98\x90\x95\x28\x07\xbf\x11\x2f\x17\xa3\x81\x10\xb3\x81\x9e\xa9\x14\xb3\xb9\x9e\x29\x50\x98\x1f\x6a\x18\x93\x00\x93\x84\x1c\xc8\x30\x87\x09\x7e\x1e\x0c\x00\x08\xa8\x01\x10\x01\x18\x80\x80\x10\x22\x02\x00\x0c\x28\x5d\x30\x06\x82\xf4\x03\x03\x4f\x52\x43\x18', + 
b'\x4f\x52\x43\x11\x00\x00\x0a\x06\x12\x04\x08\x01\x50\x00\x2b\x00\x00\x0a\x13\x0a\x03\x00\x00\x00\x12\x0c\x08\x01\x12\x06\x08\x00\x10\x00\x18\x00\x50\x00\x30\x00\x00\xe3\x12\xe7\x62\x65\x00\x01\x21\x3e\x0e\x46\x25\x0e\x2e\x46\x03\x21\x46\x03\x09\xa6\x00\x06\x00\x32\x00\x00\xe3\x92\xe4\x62\x65\x00\x01\x21\x01\x0e\x46\x25\x2e\x2e\x26\x47\x5f\x21\x20\x96\x60\x09\x60\x00\x00\x36\x00\x00\xe3\x92\xe1\x62\x65\x00\x01\x21\x61\x0e\x46\x23\x5e\x2e\x46\x03\x21\x66\x03\x3d\x53\x29\x10\x11\xc0\x00\x00\x2b\x00\x00\x0a\x13\x0a\x03\x00\x00\x00\x12\x0c\x08\x01\x12\x06\x08\x02\x10\x02\x18\x02\x50\x00\x05\x00\x00\xff\x00\x03\x00\x00\x30\x07\x00\x00\x40\x00\x80\x05\x00\x00\x41\x4d\x07\x00\x00\x42\x00\x80\x03\x00\x00\x0a\x07\x00\x00\x42\x00\x80\x05\x00\x00\xff\x01\x88\x00\x00\x4d\xca\xc1\x0a\x80\x30\x0c\x03\xd0\x2e\x6b\xcb\x98\x17\xf1\x14\x50\xfc\xff\xcf\xb4\x66\x1e\x3c\x84\x47\x9a\xce\x1c\xb9\x1b\xb7\xf9\xda\x48\x09\x9e\xb2\xf3\x92\xce\x5b\x86\xf6\x56\x7f\x21\x41\x2f\x51\xa6\x7a\xd7\x1d\xe5\xea\xae\x3d\xca\xd5\x83\x71\x60\xd8\x17\xfc\x62\x0f\xa8\x00\x00\xe3\x4a\xe6\x62\xe1\x60\x0c\x60\xe0\xe2\xe3\x60\x14\x62\xe3\x60\x10\x60\x90\x60\x08\x60\x88\x60\xe5\x12\xe0\x60\x54\xe2\xe0\x62\x34\x10\x62\x34\x90\x60\x02\x8a\x70\x71\x09\x01\x45\xb8\xb8\x98\x1c\x7d\x85\x80\x58\x82\x05\x28\xc6\xcd\x25\xca\xc1\x68\xc4\x0b\x52\xc5\x6c\xa0\x67\x2a\x05\x22\xc0\x4a\x21\x86\x31\x09\x30\x81\xb5\xb2\x02\x00\x36\x01\x00\x25\x8c\xbd\x0a\xc2\x30\x14\x85\x73\x6f\x92\xf6\x92\x6a\x09\x01\x21\x64\x92\x4e\x75\x91\x58\x71\xc9\x64\x27\x5d\x2c\x1d\x5d\xfd\x59\xc4\x42\x37\x5f\xc0\x17\xe8\x23\x9b\xc6\xe1\x3b\x70\x0f\xdf\xb9\xc4\xf5\x17\x5d\x41\x5c\x4f\x60\x37\xeb\x53\x0d\x55\x4d\x0b\x23\x01\xb9\x90\x2e\xbf\x0f\xe3\xe3\xdd\x8d\x0e\x5f\x4f\x27\x3e\xb7\x61\x97\xb2\x49\xb9\xaf\x90\x20\x92\x27\x32\x2a\x6b\xf4\xf3\x0d\x1e\x82\x20\xe8\x59\x28\x09\x4c\x46\x4c\x33\xcb\x7a\x76\x95\x41\x47\x9f\x14\x78\x03\xde\x62\x6c\x54\x30\xb1\x51\x0a\xdb\x8b\x89\x58\x11\xbb\x22\xac\x08\x9a\xe5\x6c\x71\xbf\x3d\xb8\x39\x92\xfa\x7f\x86\x1a\xd3\x54\x1e\xa7\xee\xcc\x7e\x08\x9e\x01\x10\x01\x18\x80\x80\x10\x22\x02\x00\x0c\x28\x57\x30\x06\x82\xf4\x03\x03\x4f\x52\x43\x18', + # broken message + 
b'\x4f\x52\x43\x0a\x0b\x0a\x03\x00\x00\x00\x12\x04\x08\x01\x50\x00\x0a\x15\x0a\x05\x00\x00\x00\x00\x00\x12\x0c\x08\x01\x12\x06\x08\x00\x10\x00\x18\x00\x50\x00\x0a\x12\x0a\x06\x00\x00\x00\x00\x00\x00\x12\x08\x08\x01\x42\x02\x08\x06\x50\x00\x0a\x12\x0a\x06\x00\x00\x00\x00\x00\x00\x12\x08\x08\x01\x42\x02\x08\x04\x50\x00\x0a\x29\x0a\x04\x00\x00\x00\x00\x12\x21\x08\x01\x1a\x1b\x09\x00\x00\x00\x00\x00\x00\xe0\x3f\x11\x00\x00\x00\x00\x00\x00\xe0\x3f\x19\x00\x00\x00\x00\x00\x00\xe0\x3f\x50\x00\x0a\x15\x0a\x05\x00\x00\x00\x00\x00\x12\x0c\x08\x01\x12\x06\x08\x02\x10\x02\x18\x02\x50\x00\xff\x80\xff\x80\xff\x00\xff\x80\xff\x03\x42\x41\x44\xff\x80\xff\x02\x41\x4d\xff\x80\x00\x00\x00\x3f\xff\x80\xff\x01\x0a\x06\x08\x06\x10\x00\x18\x0d\x0a\x06\x08\x06\x10\x01\x18\x17\x0a\x06\x08\x06\x10\x02\x18\x14\x0a\x06\x08\x06\x10\x03\x18\x14\x0a\x06\x08\x06\x10\x04\x18\x2b\x0a\x06\x08\x06\x10\x05\x18\x17\x0a\x06\x08\x00\x10\x00\x18\x02\x0a\x06\x08\x00\x10\x01\x18\x02\x0a\x06\x08\x01\x10\x01\x18\x02\x0a\x06\x08\x00\x10\x02\x18\x02\x0a\x06\x08\x02\x10\x02\x18\x02\x0a\x06\x08\x01\x10\x02\x18\x03\x0a\x06\x08\x00\x10\x03\x18\x02\x0a\x06\x08\x02\x10\x03\x18\x02\x0a\x06\x08\x01\x10\x03\x18\x02\x0a\x06\x08\x00\x10\x04\x18\x02\x0a\x06\x08\x01\x10\x04\x18\x04\x0a\x06\x08\x00\x10\x05\x18\x02\x0a\x06\x08\x01\x10\x05\x18\x02\x12\x04\x08\x00\x10\x00\x12\x04\x08\x00\x10\x00\x12\x04\x08\x00\x10\x00\x12\x04\x08\x00\x10\x00\x12\x04\x08\x00\x10\x00\x12\x04\x08\x00\x10\x00\x1a\x03\x47\x4d\x54\x0a\x59\x0a\x04\x08\x01\x50\x00\x0a\x0c\x08\x01\x12\x06\x08\x00\x10\x00\x18\x00\x50\x00\x0a\x08\x08\x01\x42\x02\x08\x06\x50\x00\x0a\x08\x08\x01\x42\x02\x08\x04\x50\x00\x0a\x21\x08\x01\x1a\x1b\x09\x00\x00\x00\x00\x00\x00\xe0\x3f\x11\x00\x00\x00\x00\x00\x00\xe0\x3f\x19\x00\x00\x00\x00\x00\x00\xe0\x3f\x50\x00\x0a\x0c\x08\x01\x12\x06\x08\x02\x10\x02\x18\x02\x50\x00\x08\x03\x10\xec\x02\x1a\x0c\x08\x03\x10\x8e\x01\x18\x1d\x20\xc1\x01\x28\x01\x22\x2e\x08\x0c\x12\x05\x01\x02\x03\x04\x05\x1a\x02\x69\x64\x1a\x07\x62\x6c\x6f\x63\x6b\x4e\x6f\x1a\x04\x76\x61\x6c\x31\x1a\x04\x76\x61\x6c\x32\x1a\x04\x76\x61\x6c\x33\x20\x00\x28\x00\x30\x00\x22\x08\x08\x04\x20\x00\x28\x00\x30\x00\x22\x08\x08\x08\x20\x00\x28\x00\x30\x00\x22\x08\x08\x08\x20\x00\x28\x00\x30\x00\x22\x08\x08\x05\x20\x00\x28\x00\x30\x00\x22\x08\x08\x01\x20\x00\x28\x00\x30\x00\x30\x01\x3a\x04\x08\x01\x50\x00\x3a\x0c\x08\x01\x12\x06\x08\x00\x10\x00\x18\x00\x50\x00\x3a\x08\x08\x01\x42\x02\x08\x06\x50\x00\x3a\x08\x08\x01\x42\x02\x08\x04\x50\x00\x3a\x21\x08\x01\x1a\x1b\x09\x00\x00\x00\x00\x00\x00\xe0\x3f\x11\x00\x00\x00\x00\x00\x00\xe0\x3f\x19\x00\x00\x00\x00\x00\x00\xe0\x3f\x50\x00\x3a\x0c\x08\x01\x12\x06\x08\x02\x10\x02\x18\x02\x50\x00\x40\x90\x4e\x48\x01\x08\xd5\x01\x10\x00\x18\x80\x80\x04\x22\x02\x00\x0b\x28\x5b\x30\x06\x82\xf4\x03\x03\x4f\x52\x43\x18', + ], + 
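+            # ORC is marked 'printable': False below, so the harness compares
+            # hex(_raw_message); the expected value is the hex dump of the
+            # broken file together with the parse error it provokes.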
'expected':r'''{"raw_message":"4F52430A0B0A030000001204080150000A150A050000000000120C0801120608001000180050000A120A06000000000000120808014202080650000A120A06000000000000120808014202080450000A290A0400000000122108011A1B09000000000000E03F11000000000000E03F19000000000000E03F50000A150A050000000000120C080112060802100218025000FF80FF80FF00FF80FF03424144FF80FF02414DFF800000003FFF80FF010A0608061000180D0A060806100118170A060806100218140A060806100318140A0608061004182B0A060806100518170A060800100018020A060800100118020A060801100118020A060800100218020A060802100218020A060801100218030A060800100318020A060802100318020A060801100318020A060800100418020A060801100418040A060800100518020A060801100518021204080010001204080010001204080010001204080010001204080010001204080010001A03474D540A590A04080150000A0C0801120608001000180050000A0808014202080650000A0808014202080450000A2108011A1B09000000000000E03F11000000000000E03F19000000000000E03F50000A0C080112060802100218025000080310EC021A0C0803108E01181D20C1012801222E080C120501020304051A0269641A07626C6F636B4E6F1A0476616C311A0476616C321A0476616C33200028003000220808042000280030002208080820002800300022080808200028003000220808052000280030002208080120002800300030013A04080150003A0C0801120608001000180050003A0808014202080650003A0808014202080450003A2108011A1B09000000000000E03F11000000000000E03F19000000000000E03F50003A0C08011206080210021802500040904E480108D5011000188080042202000B285B300682F403034F524318","error":"Cannot parse string 'BAD' as UInt16: syntax error at begin of string. Note: there are toUInt16OrZero and toUInt16OrNull functions, which returns zero\/NULL instead of throwing exception."}''', + 'printable':False, + } + } + + topic_name_prefix = 'format_tests_4_stream_' + for format_name, format_opts in list(all_formats.items()): + print(('Set up {}'.format(format_name))) + topic_name = topic_name_prefix + '{}'.format(format_name) + data_sample = format_opts['data_sample'] + data_prefix = [] + raw_message = '_raw_message' + # prepend empty value when supported + if format_opts.get('supports_empty_value', False): + data_prefix = data_prefix + [''] + if format_opts.get('printable', False) == False: + raw_message = 'hex(_raw_message)' + kafka_produce(topic_name, data_prefix + data_sample) + instance.query(''' + DROP TABLE IF EXISTS test.kafka_{format_name}; + + CREATE TABLE test.kafka_{format_name} ( + id Int64, + blockNo UInt16, + val1 String, + val2 Float32, + val3 UInt8 + ) ENGINE = Kafka() + SETTINGS kafka_broker_list = 'kafka1:19092', + kafka_topic_list = '{topic_name}', + kafka_group_name = '{topic_name}', + kafka_format = '{format_name}', + kafka_handle_error_mode = 'stream', + kafka_flush_interval_ms = 1000 {extra_settings}; + + DROP TABLE IF EXISTS test.kafka_data_{format_name}_mv; + CREATE MATERIALIZED VIEW test.kafka_data_{format_name}_mv Engine=Log AS + SELECT *, _topic, _partition, _offset FROM test.kafka_{format_name} + WHERE length(_error) = 0; + + DROP TABLE IF EXISTS test.kafka_errors_{format_name}_mv; + CREATE MATERIALIZED VIEW test.kafka_errors_{format_name}_mv Engine=Log AS + SELECT {raw_message} as raw_message, _error as error, _topic as topic, _partition as partition, _offset as offset FROM test.kafka_{format_name} + WHERE length(_error) > 0; + '''.format(topic_name=topic_name, format_name=format_name, raw_message=raw_message, + extra_settings=format_opts.get('extra_settings') or '')) + + for format_name, format_opts in list(all_formats.items()): + print(('Checking {}'.format(format_name))) + topic_name = topic_name_prefix + '{}'.format(format_name) + # shift 
offsets by 1 if format supports empty value + offsets = [1, 2, 3] if format_opts.get('supports_empty_value', False) else [0, 1, 2] + result = instance.query('SELECT * FROM test.kafka_data_{format_name}_mv;'.format(format_name=format_name)) + expected = '''\ +0 0 AM 0.5 1 {topic_name} 0 {offset_0} +1 0 AM 0.5 1 {topic_name} 0 {offset_1} +2 0 AM 0.5 1 {topic_name} 0 {offset_1} +3 0 AM 0.5 1 {topic_name} 0 {offset_1} +4 0 AM 0.5 1 {topic_name} 0 {offset_1} +5 0 AM 0.5 1 {topic_name} 0 {offset_1} +6 0 AM 0.5 1 {topic_name} 0 {offset_1} +7 0 AM 0.5 1 {topic_name} 0 {offset_1} +8 0 AM 0.5 1 {topic_name} 0 {offset_1} +9 0 AM 0.5 1 {topic_name} 0 {offset_1} +10 0 AM 0.5 1 {topic_name} 0 {offset_1} +11 0 AM 0.5 1 {topic_name} 0 {offset_1} +12 0 AM 0.5 1 {topic_name} 0 {offset_1} +13 0 AM 0.5 1 {topic_name} 0 {offset_1} +14 0 AM 0.5 1 {topic_name} 0 {offset_1} +15 0 AM 0.5 1 {topic_name} 0 {offset_1} +0 0 AM 0.5 1 {topic_name} 0 {offset_2} +'''.format(topic_name=topic_name, offset_0=offsets[0], offset_1=offsets[1], offset_2=offsets[2]) + print(('Checking result\n {result} \n expected \n {expected}\n'.format(result=str(result), expected=str(expected)))) + assert TSV(result) == TSV(expected), 'Proper result for format: {}'.format(format_name) + errors_result = instance.query('SELECT raw_message, error FROM test.kafka_errors_{format_name}_mv format JSONEachRow'.format(format_name=format_name)) + errors_expected = format_opts['expected'] + print(errors_result.strip()) + print(errors_expected.strip()) + assert errors_result.strip() == errors_expected.strip(), 'Proper errors for format: {}'.format(format_name) if __name__ == '__main__': cluster.start() diff --git a/tests/integration/test_storage_mysql/configs/users.xml b/tests/integration/test_storage_mysql/configs/users.xml new file mode 100644 index 00000000000..27c4d46984e --- /dev/null +++ b/tests/integration/test_storage_mysql/configs/users.xml @@ -0,0 +1,18 @@ + + + + + 2 + + + + + + + + ::/0 + + default + + + diff --git a/tests/integration/test_storage_mysql/test.py b/tests/integration/test_storage_mysql/test.py index 7b23e20e200..9c3abd799af 100644 --- a/tests/integration/test_storage_mysql/test.py +++ b/tests/integration/test_storage_mysql/test.py @@ -8,6 +8,9 @@ from helpers.cluster import ClickHouseCluster cluster = ClickHouseCluster(__file__) node1 = cluster.add_instance('node1', main_configs=['configs/remote_servers.xml'], with_mysql=True) +node2 = cluster.add_instance('node2', main_configs=['configs/remote_servers.xml'], with_mysql_cluster=True) +node3 = cluster.add_instance('node3', main_configs=['configs/remote_servers.xml'], user_configs=['configs/users.xml'], with_mysql=True) + create_table_sql_template = """ CREATE TABLE `clickhouse`.`{}` ( `id` int(11) NOT NULL, @@ -18,15 +21,30 @@ create_table_sql_template = """ PRIMARY KEY (`id`)) ENGINE=InnoDB; """ +def get_mysql_conn(port=3308): + conn = pymysql.connect(user='root', password='clickhouse', host='127.0.0.1', port=port) + return conn + + +def create_mysql_db(conn, name): + with conn.cursor() as cursor: + cursor.execute( + "CREATE DATABASE {} DEFAULT CHARACTER SET 'utf8'".format(name)) + + +def create_mysql_table(conn, tableName): + with conn.cursor() as cursor: + cursor.execute(create_table_sql_template.format(tableName)) + @pytest.fixture(scope="module") def started_cluster(): try: cluster.start() - conn = get_mysql_conn() ## create mysql db and table - create_mysql_db(conn, 'clickhouse') + conn1 = get_mysql_conn(port=3308) + create_mysql_db(conn1, 'clickhouse') yield cluster 
finally:
@@ -52,6 +70,7 @@ CREATE TABLE {}(id UInt32, name String, age UInt32, money UInt32) ENGINE = MySQL
     assert node1.query(query.format(t=table_name)) == '250\n'
     conn.close()
 
+
 def test_insert_select(started_cluster):
     table_name = 'test_insert_select'
     conn = get_mysql_conn()
@@ -148,6 +167,7 @@ def test_table_function(started_cluster):
     assert node1.query("SELECT sum(`money`) FROM {}".format(table_function)).rstrip() == '60000'
     conn.close()
 
+
 def test_binary_type(started_cluster):
     conn = get_mysql_conn()
     with conn.cursor() as cursor:
@@ -156,6 +176,7 @@ def test_binary_type(started_cluster):
     node1.query("INSERT INTO {} VALUES (42, 'clickhouse')".format('TABLE FUNCTION ' + table_function))
     assert node1.query("SELECT * FROM {}".format(table_function)) == '42\tclickhouse\\0\\0\\0\\0\\0\\0\n'
 
+
 def test_enum_type(started_cluster):
     table_name = 'test_enum_type'
     conn = get_mysql_conn()
@@ -168,20 +189,95 @@ CREATE TABLE {}(id UInt32, name String, age UInt32, money UInt32, source Enum8('
     conn.close()
 
-def get_mysql_conn():
-    conn = pymysql.connect(user='root', password='clickhouse', host='127.0.0.1', port=3308)
-    return conn
+def test_mysql_distributed(started_cluster):
+    table_name = 'test_replicas'
+
+    conn1 = get_mysql_conn(port=3348)
+    conn2 = get_mysql_conn(port=3388)
+    conn3 = get_mysql_conn(port=3368)
+    conn4 = get_mysql_conn(port=3308)
+
+    create_mysql_db(conn1, 'clickhouse')
+    create_mysql_db(conn2, 'clickhouse')
+    create_mysql_db(conn3, 'clickhouse')
+
+    create_mysql_table(conn1, table_name)
+    create_mysql_table(conn2, table_name)
+    create_mysql_table(conn3, table_name)
+    create_mysql_table(conn4, table_name)
+
+    # Storage with 3 replicas
+    node2.query('''
+        CREATE TABLE test_replicas
+        (id UInt32, name String, age UInt32, money UInt32)
+        ENGINE = MySQL(`mysql{2|3|4}:3306`, 'clickhouse', 'test_replicas', 'root', 'clickhouse'); ''')
+
+    # Fill remote tables with different data to be able to check which replica was read
+    nodes = [node1, node2, node2, node2]
+    for i in range(1, 5):
+        nodes[i-1].query('''
+            CREATE TABLE test_replica{}
+            (id UInt32, name String, age UInt32, money UInt32)
+            ENGINE = MySQL(`mysql{}:3306`, 'clickhouse', 'test_replicas', 'root', 'clickhouse');'''.format(i, i))
+        nodes[i-1].query("INSERT INTO test_replica{} (id, name) SELECT number, 'host{}' from numbers(10) ".format(i, i))
+
+    # test multiple ports parsing
+    result = node2.query('''SELECT DISTINCT(name) FROM mysql(`mysql{1|2|3}:3306`, 'clickhouse', 'test_replicas', 'root', 'clickhouse'); ''')
+    assert(result == 'host1\n' or result == 'host2\n' or result == 'host3\n')
+    result = node2.query('''SELECT DISTINCT(name) FROM mysql(`mysql1:3306|mysql2:3306|mysql3:3306`, 'clickhouse', 'test_replicas', 'root', 'clickhouse'); ''')
+    assert(result == 'host1\n' or result == 'host2\n' or result == 'host3\n')
+
+    # Check all replicas are traversed
+    query = "SELECT * FROM ("
+    for i in range(3):
+        query += "SELECT name FROM test_replicas UNION DISTINCT "
+    query += "SELECT name FROM test_replicas)"
+
+    result = node2.query(query)
+    assert(result == 'host2\nhost3\nhost4\n')
+
+    # Storage with two shards, each with 2 replicas
+    node2.query('''
+        CREATE TABLE test_shards
+        (id UInt32, name String, age UInt32, money UInt32)
+        ENGINE = ExternalDistributed('MySQL', `mysql{1|2}:3306,mysql{3|4}:3306`, 'clickhouse', 'test_replicas', 'root', 'clickhouse'); ''')
+
+    # Check only one replica in each shard is used
+    result = node2.query("SELECT DISTINCT(name) FROM test_shards ORDER BY name")
+    assert(result == 'host1\nhost3\n')
+
+    # Check all replicas are traversed
+    query = "SELECT name FROM ("
+    for i in range(3):
+        query += "SELECT name FROM test_shards UNION DISTINCT "
+    query += "SELECT name FROM test_shards) ORDER BY name"
+    result = node2.query(query)
+    assert(result == 'host1\nhost2\nhost3\nhost4\n')
+
+    # Disconnect mysql1
+    started_cluster.pause_container('mysql1')
+    result = node2.query("SELECT DISTINCT(name) FROM test_shards ORDER BY name")
+    started_cluster.unpause_container('mysql1')
+    assert(result == 'host2\nhost4\n' or result == 'host3\nhost4\n')
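The `mysql{2|3|4}:3306` and `mysql{1|2}:3306,mysql{3|4}:3306` patterns above are the address syntax under test: `|` separates replica alternatives inside a shard, `,` separates shards. As a rough editorial sketch (this is not ClickHouse's actual parser, just an illustration of the expansion), the pattern behaves like:

```python
# Hypothetical helper illustrating the `host{a|b}:port,host{c|d}:port`
# address pattern used by the MySQL / ExternalDistributed engines above:
# commas split shards, `{x|y}` enumerates replica alternatives.
def expand_shards(pattern):
    shards = []
    for shard in pattern.split(','):
        if '{' in shard:
            prefix, rest = shard.split('{', 1)
            alternatives, suffix = rest.split('}', 1)
            shards.append([prefix + alt + suffix for alt in alternatives.split('|')])
        else:
            shards.append([shard])
    return shards

assert expand_shards('mysql{1|2}:3306,mysql{3|4}:3306') == [
    ['mysql1:3306', 'mysql2:3306'],
    ['mysql3:3306', 'mysql4:3306'],
]
```

Only one replica per shard is read at a time, which is why the `pause_container('mysql1')` check above expects the engine to fail over to the shard's remaining replica.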
-def create_mysql_db(conn, name):
-    with conn.cursor() as cursor:
-        cursor.execute(
-            "CREATE DATABASE {} DEFAULT CHARACTER SET 'utf8'".format(name))
+def test_external_settings(started_cluster):
+    table_name = 'test_external_settings'
+    conn = get_mysql_conn()
+    create_mysql_table(conn, table_name)
-
-def create_mysql_table(conn, tableName):
-    with conn.cursor() as cursor:
-        cursor.execute(create_table_sql_template.format(tableName))
+    node3.query('''
+CREATE TABLE {}(id UInt32, name String, age UInt32, money UInt32) ENGINE = MySQL('mysql1:3306', 'clickhouse', '{}', 'root', 'clickhouse');
+'''.format(table_name, table_name))
+    node3.query(
+        "INSERT INTO {}(id, name, money) select number, concat('name_', toString(number)), 3 from numbers(100) ".format(
+            table_name))
+    assert node3.query("SELECT count() FROM {}".format(table_name)).rstrip() == '100'
+    assert node3.query("SELECT sum(money) FROM {}".format(table_name)).rstrip() == '300'
+    assert node3.query("select value from system.settings where name = 'max_block_size' FORMAT TSV") == "2\n"
+    assert node3.query("select value from system.settings where name = 'external_storage_max_read_rows' FORMAT TSV") == "0\n"
+    assert node3.query("SELECT COUNT(DISTINCT blockNumber()) FROM {} FORMAT TSV".format(table_name)) == '50\n'
+    conn.close()
 
 if __name__ == '__main__':
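`test_external_settings` leans on the users.xml added for node3 near the top of this file's diff: the default profile there pins `max_block_size` to 2, so the 100 rows inserted through the MySQL engine must come back as 100 / 2 = 50 distinct blocks, which is what the `COUNT(DISTINCT blockNumber())` assertion checks. A quick sanity sketch of that arithmetic (editor's illustration, not part of the test):

```python
# Expected number of blocks when `rows` rows stream back with a fixed
# max_block_size; full blocks assumed, as in test_external_settings.
def expected_blocks(rows, max_block_size):
    return -(-rows // max_block_size)  # ceiling division

assert expected_blocks(100, 2) == 50  # matches the '50\n' assertion above
```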
diff --git a/tests/integration/test_storage_postgresql/configs/log_conf.xml b/tests/integration/test_storage_postgresql/configs/log_conf.xml
new file mode 100644
index 00000000000..f9d15e572aa
--- /dev/null
+++ b/tests/integration/test_storage_postgresql/configs/log_conf.xml
@@ -0,0 +1,11 @@
+<yandex>
+    <logger>
+        <level>trace</level>
+        <log>/var/log/clickhouse-server/log.log</log>
+        <errorlog>/var/log/clickhouse-server/log.err.log</errorlog>
+        <size>1000M</size>
+        <count>10</count>
+        <stderr>/var/log/clickhouse-server/stderr.log</stderr>
+        <stdout>/var/log/clickhouse-server/stdout.log</stdout>
+    </logger>
+</yandex>
diff --git a/tests/integration/test_storage_postgresql/test.py b/tests/integration/test_storage_postgresql/test.py
index cee495438a2..b1ef58866bc 100644
--- a/tests/integration/test_storage_postgresql/test.py
+++ b/tests/integration/test_storage_postgresql/test.py
@@ -2,18 +2,22 @@ import time
 import pytest
 import psycopg2
+from multiprocessing.dummy import Pool
+
 from helpers.cluster import ClickHouseCluster
 from helpers.test_tools import assert_eq_with_retry
 from psycopg2.extensions import ISOLATION_LEVEL_AUTOCOMMIT
 
 cluster = ClickHouseCluster(__file__)
-node1 = cluster.add_instance('node1', main_configs=[], with_postgres=True)
+node1 = cluster.add_instance('node1', main_configs=["configs/log_conf.xml"], with_postgres=True)
+node2 = cluster.add_instance('node2', main_configs=['configs/log_conf.xml'], with_postgres_cluster=True)
 
-def get_postgres_conn(database=False):
+def get_postgres_conn(database=False, port=5432):
     if database == True:
-        conn_string = "host='localhost' dbname='clickhouse' user='postgres' password='mysecretpassword'"
+        conn_string = "host='localhost' port={} dbname='clickhouse' user='postgres' password='mysecretpassword'".format(port)
     else:
-        conn_string = "host='localhost' user='postgres' password='mysecretpassword'"
+        conn_string = "host='localhost' port={} user='postgres' password='mysecretpassword'".format(port)
+
     conn = psycopg2.connect(conn_string)
     conn.set_isolation_level(ISOLATION_LEVEL_AUTOCOMMIT)
     conn.autocommit = True
@@ -28,9 +32,20 @@ def create_postgres_db(conn, name):
 def started_cluster():
     try:
         cluster.start()
-        postgres_conn = get_postgres_conn()
-        print("postgres connected")
+
+        postgres_conn = get_postgres_conn(port=5432)
         create_postgres_db(postgres_conn, 'clickhouse')
+
+        postgres_conn = get_postgres_conn(port=5421)
+        create_postgres_db(postgres_conn, 'clickhouse')
+
+        postgres_conn = get_postgres_conn(port=5441)
+        create_postgres_db(postgres_conn, 'clickhouse')
+
+        postgres_conn = get_postgres_conn(port=5461)
+        create_postgres_db(postgres_conn, 'clickhouse')
+
+        print("postgres connected")
 
         yield cluster
     finally:
@@ -63,13 +78,19 @@ def test_postgres_conversions(started_cluster):
     cursor.execute(
         '''CREATE TABLE IF NOT EXISTS test_types (
         a smallint, b integer, c bigint, d real, e double precision, f serial, g bigserial,
-        h timestamp, i date, j decimal(5, 3), k numeric)''')
+        h timestamp, i date, j decimal(5, 3), k numeric, l boolean)''')
     node1.query('''
         INSERT INTO TABLE FUNCTION postgresql('postgres1:5432', 'clickhouse', 'test_types', 'postgres', 'mysecretpassword') VALUES
-        (-32768, -2147483648, -9223372036854775808, 1.12345, 1.1234567890, 2147483647, 9223372036854775807, '2000-05-12 12:12:12', '2000-05-12', 22.222, 22.222)''')
+        (-32768, -2147483648, -9223372036854775808, 1.12345, 1.1234567890, 2147483647, 9223372036854775807, '2000-05-12 12:12:12', '2000-05-12', 22.222, 22.222, 1)''')
     result = node1.query('''
-        SELECT a, b, c, d, e, f, g, h, i, j, toDecimal128(k, 3) FROM postgresql('postgres1:5432', 'clickhouse', 'test_types', 'postgres', 'mysecretpassword')''')
-    assert(result == '-32768\t-2147483648\t-9223372036854775808\t1.12345\t1.123456789\t2147483647\t9223372036854775807\t2000-05-12 12:12:12\t2000-05-12\t22.222\t22.222\n')
+        SELECT a, b, c, d, e, f, g, h, i, j, toDecimal128(k, 3), l FROM postgresql('postgres1:5432', 'clickhouse', 'test_types', 'postgres', 'mysecretpassword')''')
+    assert(result == '-32768\t-2147483648\t-9223372036854775808\t1.12345\t1.123456789\t2147483647\t9223372036854775807\t2000-05-12 12:12:12\t2000-05-12\t22.222\t22.222\t1\n')
+
+    cursor.execute("INSERT INTO test_types (l) VALUES (TRUE), (true), ('yes'), ('y'), ('1');")
+    cursor.execute("INSERT INTO test_types (l) VALUES (FALSE), (false), ('no'), ('off'), ('0');")
+    expected = "1\n1\n1\n1\n1\n1\n0\n0\n0\n0\n0\n"
+    result = node1.query('''SELECT l FROM postgresql('postgres1:5432', 'clickhouse', 'test_types', 'postgres', 'mysecretpassword')''')
+    assert(result == expected)
 
     cursor.execute(
         '''CREATE TABLE IF NOT EXISTS test_array_dimensions
@@ -132,6 +153,152 @@ def test_postgres_conversions(started_cluster):
     assert(result == expected)
 
 
+def test_non_default_schema(started_cluster):
+    conn = get_postgres_conn(True)
+    cursor = conn.cursor()
+    cursor.execute('CREATE SCHEMA test_schema')
+    cursor.execute('CREATE TABLE test_schema.test_table (a integer)')
+    cursor.execute('INSERT INTO test_schema.test_table SELECT i FROM generate_series(0, 99) as t(i)')
+
+    node1.query('''
+        CREATE TABLE test_pg_table_schema (a UInt32)
+        ENGINE PostgreSQL('postgres1:5432', 'clickhouse', 'test_table', 'postgres', 'mysecretpassword', 'test_schema');
+    ''')
+
+    result = node1.query('SELECT * FROM test_pg_table_schema')
+    expected = node1.query('SELECT number FROM numbers(100)')
assert(result == expected) + + table_function = '''postgresql('postgres1:5432', 'clickhouse', 'test_table', 'postgres', 'mysecretpassword', 'test_schema')''' + result = node1.query('SELECT * FROM {}'.format(table_function)) + assert(result == expected) + + cursor.execute('''CREATE SCHEMA "test.nice.schema"''') + cursor.execute('''CREATE TABLE "test.nice.schema"."test.nice.table" (a integer)''') + cursor.execute('INSERT INTO "test.nice.schema"."test.nice.table" SELECT i FROM generate_series(0, 99) as t(i)') + + node1.query(''' + CREATE TABLE test_pg_table_schema_with_dots (a UInt32) + ENGINE PostgreSQL('postgres1:5432', 'clickhouse', 'test.nice.table', 'postgres', 'mysecretpassword', 'test.nice.schema'); + ''') + result = node1.query('SELECT * FROM test_pg_table_schema_with_dots') + assert(result == expected) + + +def test_concurrent_queries(started_cluster): + conn = get_postgres_conn(True) + cursor = conn.cursor() + + node1.query(''' + CREATE TABLE test_table (key UInt32, value UInt32) + ENGINE = PostgreSQL('postgres1:5432', 'clickhouse', 'test_table', 'postgres', 'mysecretpassword')''') + + cursor.execute('CREATE TABLE test_table (key integer, value integer)') + + prev_count = node1.count_in_log('New connection to postgres1:5432') + def node_select(_): + for i in range(20): + result = node1.query("SELECT * FROM test_table", user='default') + busy_pool = Pool(20) + p = busy_pool.map_async(node_select, range(20)) + p.wait() + count = node1.count_in_log('New connection to postgres1:5432') + print(count, prev_count) + # 16 is default size for connection pool + assert(int(count) == int(prev_count) + 16) + + def node_insert(_): + for i in range(5): + result = node1.query("INSERT INTO test_table SELECT number, number FROM numbers(1000)", user='default') + + busy_pool = Pool(5) + p = busy_pool.map_async(node_insert, range(5)) + p.wait() + result = node1.query("SELECT count() FROM test_table", user='default') + print(result) + assert(int(result) == 5 * 5 * 1000) + + def node_insert_select(_): + for i in range(5): + result = node1.query("INSERT INTO test_table SELECT number, number FROM numbers(1000)", user='default') + result = node1.query("SELECT * FROM test_table LIMIT 100", user='default') + + busy_pool = Pool(5) + p = busy_pool.map_async(node_insert_select, range(5)) + p.wait() + result = node1.query("SELECT count() FROM test_table", user='default') + print(result) + assert(int(result) == 5 * 5 * 1000 * 2) + + node1.query('DROP TABLE test_table;') + cursor.execute('DROP TABLE test_table;') + + count = node1.count_in_log('New connection to postgres1:5432') + print(count, prev_count) + assert(int(count) == int(prev_count) + 16) + + +def test_postgres_distributed(started_cluster): + conn0 = get_postgres_conn(port=5432, database=True) + conn1 = get_postgres_conn(port=5421, database=True) + conn2 = get_postgres_conn(port=5441, database=True) + conn3 = get_postgres_conn(port=5461, database=True) + + cursor0 = conn0.cursor() + cursor1 = conn1.cursor() + cursor2 = conn2.cursor() + cursor3 = conn3.cursor() + cursors = [cursor0, cursor1, cursor2, cursor3] + + for i in range(4): + cursors[i].execute('CREATE TABLE test_replicas (id Integer, name Text)') + cursors[i].execute("""INSERT INTO test_replicas select i, 'host{}' from generate_series(0, 99) as t(i);""".format(i + 1)); + + # test multiple ports parsing + result = node2.query('''SELECT DISTINCT(name) FROM postgresql(`postgres{1|2|3}:5432`, 'clickhouse', 'test_replicas', 'postgres', 'mysecretpassword'); ''') + assert(result == 'host1\n' or result == 
'host2\n' or result == 'host3\n')
+    result = node2.query('''SELECT DISTINCT(name) FROM postgresql(`postgres2:5431|postgres3:5432`, 'clickhouse', 'test_replicas', 'postgres', 'mysecretpassword'); ''')
+    assert(result == 'host3\n' or result == 'host2\n')
+
+    # Create storage with 3 replicas
+    node2.query('''
+        CREATE TABLE test_replicas
+        (id UInt32, name String)
+        ENGINE = PostgreSQL(`postgres{2|3|4}:5432`, 'clickhouse', 'test_replicas', 'postgres', 'mysecretpassword'); ''')
+
+    # Check all replicas are traversed
+    query = "SELECT name FROM ("
+    for i in range(3):
+        query += "SELECT name FROM test_replicas UNION DISTINCT "
+    query += "SELECT name FROM test_replicas) ORDER BY name"
+    result = node2.query(query)
+    assert(result == 'host2\nhost3\nhost4\n')
+
+    # Create storage with two shards, each with 2 replicas
+    node2.query('''
+        CREATE TABLE test_shards
+        (id UInt32, name String, age UInt32, money UInt32)
+        ENGINE = ExternalDistributed('PostgreSQL', `postgres{1|2}:5432,postgres{3|4}:5432`, 'clickhouse', 'test_replicas', 'postgres', 'mysecretpassword'); ''')
+
+    # Check only one replica in each shard is used
+    result = node2.query("SELECT DISTINCT(name) FROM test_shards ORDER BY name")
+    assert(result == 'host1\nhost3\n')
+
+    # Check all replicas are traversed
+    query = "SELECT name FROM ("
+    for i in range(3):
+        query += "SELECT name FROM test_shards UNION DISTINCT "
+    query += "SELECT name FROM test_shards) ORDER BY name"
+    result = node2.query(query)
+    assert(result == 'host1\nhost2\nhost3\nhost4\n')
+
+    # Disconnect postgres1
+    started_cluster.pause_container('postgres1')
+    result = node2.query("SELECT DISTINCT(name) FROM test_shards ORDER BY name")
+    started_cluster.unpause_container('postgres1')
+    assert(result == 'host2\nhost4\n' or result == 'host3\nhost4\n')
+
+
 if __name__ == '__main__':
     cluster.start()
     input("Cluster created, press any key to destroy...")
diff --git a/tests/integration/test_storage_rabbitmq/test.py b/tests/integration/test_storage_rabbitmq/test.py
index ca89ebdea0a..cab7685d96c 100644
--- a/tests/integration/test_storage_rabbitmq/test.py
+++ b/tests/integration/test_storage_rabbitmq/test.py
@@ -253,12 +253,20 @@ def test_rabbitmq_csv_with_delimiter(rabbitmq_cluster):
 @pytest.mark.timeout(240)
 def test_rabbitmq_tsv_with_delimiter(rabbitmq_cluster):
     instance.query('''
+        DROP TABLE IF EXISTS test.view;
+        DROP TABLE IF EXISTS test.consumer;
         CREATE TABLE test.rabbitmq (key UInt64, value UInt64)
             ENGINE = RabbitMQ
             SETTINGS rabbitmq_host_port = 'rabbitmq1:5672',
                      rabbitmq_exchange_name = 'tsv',
                      rabbitmq_format = 'TSV',
+                     rabbitmq_queue_base = 'tsv',
                      rabbitmq_row_delimiter = '\\n';
+        CREATE TABLE test.view (key UInt64, value UInt64)
+            ENGINE = MergeTree()
+            ORDER BY key;
+        CREATE MATERIALIZED VIEW test.consumer TO test.view AS
+            SELECT * FROM test.rabbitmq;
     ''')
 
     credentials = pika.PlainCredentials('root', 'clickhouse')
@@ -272,13 +280,11 @@ def test_rabbitmq_tsv_with_delimiter(rabbitmq_cluster):
     for message in messages:
         channel.basic_publish(exchange='tsv', routing_key='', body=message)
 
-    connection.close()
-    time.sleep(1)
 
     result = ''
     while True:
-        result += instance.query('SELECT * FROM test.rabbitmq ORDER BY key', ignore_error=True)
+        result = instance.query('SELECT * FROM test.view ORDER BY key')
         if rabbitmq_check_result(result):
             break
@@ -1965,6 +1971,29 @@ def test_rabbitmq_format_factory_settings(rabbitmq_cluster):
     assert(result == expected)
 
+@pytest.mark.timeout(120)
+def test_rabbitmq_vhost(rabbitmq_cluster):
+    instance.query('''
+        CREATE TABLE 
test.rabbitmq_vhost (key UInt64, value UInt64) + ENGINE = RabbitMQ + SETTINGS rabbitmq_host_port = 'rabbitmq1:5672', + rabbitmq_exchange_name = 'vhost', + rabbitmq_format = 'JSONEachRow', + rabbitmq_vhost = '/' + ''') + + credentials = pika.PlainCredentials('root', 'clickhouse') + parameters = pika.ConnectionParameters('localhost', 5672, '/', credentials) + connection = pika.BlockingConnection(parameters) + channel = connection.channel() + channel.basic_publish(exchange='vhost', routing_key='', body=json.dumps({'key': 1, 'value': 2})) + connection.close() + while True: + result = instance.query('SELECT * FROM test.rabbitmq_vhost ORDER BY key', ignore_error=True) + if result == "1\t2\n": + break + + if __name__ == '__main__': cluster.start() input("Cluster created, press any key to destroy...") diff --git a/tests/integration/test_storage_s3/s3_mock/mock_s3.py b/tests/integration/test_storage_s3/s3_mocks/mock_s3.py similarity index 89% rename from tests/integration/test_storage_s3/s3_mock/mock_s3.py rename to tests/integration/test_storage_s3/s3_mocks/mock_s3.py index 088cc883e57..3e876689175 100644 --- a/tests/integration/test_storage_s3/s3_mock/mock_s3.py +++ b/tests/integration/test_storage_s3/s3_mocks/mock_s3.py @@ -1,3 +1,5 @@ +import sys + from bottle import abort, route, run, request, response @@ -21,4 +23,4 @@ def ping(): return 'OK' -run(host='0.0.0.0', port=8080) +run(host='0.0.0.0', port=int(sys.argv[1])) diff --git a/tests/integration/test_storage_s3/s3_mocks/unstable_server.py b/tests/integration/test_storage_s3/s3_mocks/unstable_server.py new file mode 100644 index 00000000000..4a27845ff9f --- /dev/null +++ b/tests/integration/test_storage_s3/s3_mocks/unstable_server.py @@ -0,0 +1,90 @@ +import http.server +import random +import re +import socket +import struct +import sys + + +def gen_n_digit_number(n): + assert 0 < n < 19 + return random.randint(10**(n-1), 10**n-1) + + +def gen_line(): + columns = 4 + + row = [] + def add_number(): + digits = random.randint(1, 18) + row.append(gen_n_digit_number(digits)) + + for i in range(columns // 2): + add_number() + row.append(1) + for i in range(columns - 1 - columns // 2): + add_number() + + line = ",".join(map(str, row)) + "\n" + return line.encode() + + +random.seed("Unstable server/1.0") +lines = b"".join((gen_line() for _ in range(500000))) + + +class RequestHandler(http.server.BaseHTTPRequestHandler): + def do_HEAD(self): + if self.path == "/root/test.csv": + self.from_bytes = 0 + self.end_bytes = len(lines) + self.size = self.end_bytes + self.send_block_size = 256 + self.stop_at = random.randint(900000, 1200000) // self.send_block_size # Block size is 1024**2. 
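+            # Parse a single-range header of the form "Range: bytes=<from>-[<to>]".
+            # re.split("[ -/=]+", ...) below yields ["bytes", "<from>", "<to>"],
+            # where "<to>" may be empty for an open-ended range.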
+ + if "Range" in self.headers: + cr = self.headers["Range"] + parts = re.split("[ -/=]+", cr) + assert parts[0] == "bytes" + self.from_bytes = int(parts[1]) + if parts[2]: + self.end_bytes = int(parts[2])+1 + self.send_response(206) + self.send_header("Content-Range", f"bytes {self.from_bytes}-{self.end_bytes-1}/{self.size}") + else: + self.send_response(200) + + self.send_header("Accept-Ranges", "bytes") + self.send_header("Content-Type", "text/plain") + self.send_header("Content-Length", f"{self.end_bytes-self.from_bytes}") + self.end_headers() + + elif self.path == "/": + self.send_response(200) + self.send_header("Content-Type", "text/plain") + self.end_headers() + + else: + self.send_response(404) + self.send_header("Content-Type", "text/plain") + self.end_headers() + + + def do_GET(self): + self.do_HEAD() + if self.path == "/root/test.csv": + for c, i in enumerate(range(self.from_bytes, self.end_bytes, self.send_block_size)): + self.wfile.write(lines[i:min(i+self.send_block_size, self.end_bytes)]) + if (c + 1) % self.stop_at == 0: + #self.wfile._sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 0, 0)) + #self.wfile._sock.shutdown(socket.SHUT_RDWR) + #self.wfile._sock.close() + print('Dropping connection') + break + + elif self.path == "/": + self.wfile.write(b"OK") + + +httpd = http.server.HTTPServer(("0.0.0.0", int(sys.argv[1])), RequestHandler) +httpd.serve_forever() diff --git a/tests/integration/test_storage_s3/test.py b/tests/integration/test_storage_s3/test.py index 3b4c56b524b..c239dc68810 100644 --- a/tests/integration/test_storage_s3/test.py +++ b/tests/integration/test_storage_s3/test.py @@ -96,7 +96,7 @@ def cluster(): prepare_s3_bucket(cluster) logging.info("S3 bucket created") - run_s3_mock(cluster) + run_s3_mocks(cluster) yield cluster finally: @@ -113,13 +113,18 @@ def run_query(instance, query, stdin=None, settings=None): return result -# Test simple put. -@pytest.mark.parametrize("maybe_auth,positive", [ - ("", True), - ("'minio','minio123',", True), - ("'wrongid','wrongkey',", False) +# Test simple put. Also checks that wrong credentials produce an error with every compression method. +@pytest.mark.parametrize("maybe_auth,positive,compression", [ + ("", True, 'auto'), + ("'minio','minio123',", True, 'auto'), + ("'wrongid','wrongkey',", False, 'auto'), + ("'wrongid','wrongkey',", False, 'gzip'), + ("'wrongid','wrongkey',", False, 'deflate'), + ("'wrongid','wrongkey',", False, 'brotli'), + ("'wrongid','wrongkey',", False, 'xz'), + ("'wrongid','wrongkey',", False, 'zstd') ]) -def test_put(cluster, maybe_auth, positive): +def test_put(cluster, maybe_auth, positive, compression): # type: (ClickHouseCluster) -> None bucket = cluster.minio_bucket if not maybe_auth else cluster.minio_restricted_bucket @@ -128,8 +133,8 @@ def test_put(cluster, maybe_auth, positive): values = "(1, 2, 3), (3, 2, 1), (78, 43, 45)" values_csv = "1,2,3\n3,2,1\n78,43,45\n" filename = "test.csv" - put_query = "insert into table function s3('http://{}:{}/{}/{}', {}'CSV', '{}') values {}".format( - cluster.minio_host, cluster.minio_port, bucket, filename, maybe_auth, table_format, values) + put_query = f"""insert into table function s3('http://{cluster.minio_host}:{cluster.minio_port}/{bucket}/{filename}', + {maybe_auth}'CSV', '{table_format}', {compression}) values {values}""" try: run_query(instance, put_query) @@ -281,9 +286,9 @@ def test_put_get_with_globs(cluster): # Test multipart put. 
@pytest.mark.parametrize("maybe_auth,positive", [ - ("", True) + ("", True), # ("'minio','minio123',",True), Redirect with credentials not working with nginx. - # ("'wrongid','wrongkey',", False) ClickHouse crashes in some time after this test, local integration tests run fails. + ("'wrongid','wrongkey',", False), ]) def test_multipart_put(cluster, maybe_auth, positive): # type: (ClickHouseCluster) -> None @@ -379,26 +384,32 @@ def test_s3_glob_scheherazade(cluster): assert run_query(instance, query).splitlines() == ["1001\t1001\t1001\t1001"] -def run_s3_mock(cluster): - logging.info("Starting s3 mock") - container_id = cluster.get_container_id('resolver') - current_dir = os.path.dirname(__file__) - cluster.copy_file_to_container(container_id, os.path.join(current_dir, "s3_mock", "mock_s3.py"), "mock_s3.py") - cluster.exec_in_container(container_id, ["python", "mock_s3.py"], detach=True) +def run_s3_mocks(cluster): + logging.info("Starting s3 mocks") + mocks = ( + ("mock_s3.py", "resolver", "8080"), + ("unstable_server.py", "resolver", "8081"), + ) + for mock_filename, container, port in mocks: + container_id = cluster.get_container_id(container) + current_dir = os.path.dirname(__file__) + cluster.copy_file_to_container(container_id, os.path.join(current_dir, "s3_mocks", mock_filename), mock_filename) + cluster.exec_in_container(container_id, ["python", mock_filename, port], detach=True) - # Wait for S3 mock start - for attempt in range(10): - ping_response = cluster.exec_in_container(cluster.get_container_id('resolver'), - ["curl", "-s", "http://resolver:8080/"], nothrow=True) - if ping_response != 'OK': - if attempt == 9: - assert ping_response == 'OK', 'Expected "OK", but got "{}"'.format(ping_response) + # Wait for S3 mocks to start + for mock_filename, container, port in mocks: + for attempt in range(10): + ping_response = cluster.exec_in_container(cluster.get_container_id(container), + ["curl", "-s", f"http://{container}:{port}/"], nothrow=True) + if ping_response != 'OK': + if attempt == 9: + assert ping_response == 'OK', 'Expected "OK", but got "{}"'.format(ping_response) + else: + time.sleep(1) else: - time.sleep(1) - else: - break + break - logging.info("S3 mock started") + logging.info("S3 mocks started") def replace_config(old, new): @@ -518,6 +529,15 @@ def test_storage_s3_get_gzip(cluster, extension, method): run_query(instance, f"DROP TABLE {name}") +def test_storage_s3_get_unstable(cluster): + bucket = cluster.minio_bucket + instance = cluster.instances["dummy"] + table_format = "column1 Int64, column2 Int64, column3 Int64, column4 Int64" + get_query = f"SELECT count(), sum(column3) FROM s3('http://resolver:8081/{cluster.minio_bucket}/test.csv', 'CSV', '{table_format}') FORMAT CSV" + result = run_query(instance, get_query) + assert result.splitlines() == ["500000,500000"] + + def test_storage_s3_put_uncompressed(cluster): bucket = cluster.minio_bucket instance = cluster.instances["dummy"] diff --git a/tests/integration/test_system_clusters_actual_information/configs/users.xml b/tests/integration/test_system_clusters_actual_information/configs/users.xml deleted file mode 100644 index 156cd3a6b59..00000000000 --- a/tests/integration/test_system_clusters_actual_information/configs/users.xml +++ /dev/null @@ -1,8 +0,0 @@ - - - - - 5 - - - diff --git a/tests/integration/test_ttl_replicated/test.py b/tests/integration/test_ttl_replicated/test.py index 389e249790f..67614b88029 100644 --- a/tests/integration/test_ttl_replicated/test.py +++ 
b/tests/integration/test_ttl_replicated/test.py @@ -396,6 +396,10 @@ def test_ttl_compatibility(started_cluster, node_left, node_right, num_run): node_right.query("OPTIMIZE TABLE test_ttl_group_by FINAL") node_right.query("OPTIMIZE TABLE test_ttl_where FINAL") + node_left.query("SYSTEM SYNC REPLICA test_ttl_delete", timeout=20) + node_left.query("SYSTEM SYNC REPLICA test_ttl_group_by", timeout=20) + node_left.query("SYSTEM SYNC REPLICA test_ttl_where", timeout=20) + assert node_left.query("SELECT id FROM test_ttl_delete ORDER BY id") == "2\n4\n" assert node_right.query("SELECT id FROM test_ttl_delete ORDER BY id") == "2\n4\n" diff --git a/tests/jepsen.clickhouse-keeper/.gitignore b/tests/jepsen.clickhouse-keeper/.gitignore new file mode 100644 index 00000000000..d956ab0a125 --- /dev/null +++ b/tests/jepsen.clickhouse-keeper/.gitignore @@ -0,0 +1,13 @@ +/target +/classes +/checkouts +profiles.clj +pom.xml +pom.xml.asc +*.jar +*.class +/.lein-* +/.nrepl-port +/.prepl-port +.hgignore +.hg/ diff --git a/tests/jepsen.clickhouse-keeper/LICENSE b/tests/jepsen.clickhouse-keeper/LICENSE new file mode 100644 index 00000000000..231512650b9 --- /dev/null +++ b/tests/jepsen.clickhouse-keeper/LICENSE @@ -0,0 +1,280 @@ +Eclipse Public License - v 2.0 + + THE ACCOMPANYING PROGRAM IS PROVIDED UNDER THE TERMS OF THIS ECLIPSE + PUBLIC LICENSE ("AGREEMENT"). ANY USE, REPRODUCTION OR DISTRIBUTION + OF THE PROGRAM CONSTITUTES RECIPIENT'S ACCEPTANCE OF THIS AGREEMENT. + +1. DEFINITIONS + +"Contribution" means: + + a) in the case of the initial Contributor, the initial content + Distributed under this Agreement, and + + b) in the case of each subsequent Contributor: + i) changes to the Program, and + ii) additions to the Program; + where such changes and/or additions to the Program originate from + and are Distributed by that particular Contributor. A Contribution + "originates" from a Contributor if it was added to the Program by + such Contributor itself or anyone acting on such Contributor's behalf. + Contributions do not include changes or additions to the Program that + are not Modified Works. + +"Contributor" means any person or entity that Distributes the Program. + +"Licensed Patents" mean patent claims licensable by a Contributor which +are necessarily infringed by the use or sale of its Contribution alone +or when combined with the Program. + +"Program" means the Contributions Distributed in accordance with this +Agreement. + +"Recipient" means anyone who receives the Program under this Agreement +or any Secondary License (as applicable), including Contributors. + +"Derivative Works" shall mean any work, whether in Source Code or other +form, that is based on (or derived from) the Program and for which the +editorial revisions, annotations, elaborations, or other modifications +represent, as a whole, an original work of authorship. + +"Modified Works" shall mean any work in Source Code or other form that +results from an addition to, deletion from, or modification of the +contents of the Program, including, for purposes of clarity any new file +in Source Code form that contains any contents of the Program. Modified +Works shall not include works that contain only declarations, +interfaces, types, classes, structures, or files of the Program solely +in each case in order to link to, bind by name, or subclass the Program +or Modified Works thereof. + +"Distribute" means the acts of a) distributing or b) making available +in any manner that enables the transfer of a copy. 
+ +"Source Code" means the form of a Program preferred for making +modifications, including but not limited to software source code, +documentation source, and configuration files. + +"Secondary License" means either the GNU General Public License, +Version 2.0, or any later versions of that license, including any +exceptions or additional permissions as identified by the initial +Contributor. + +2. GRANT OF RIGHTS + + a) Subject to the terms of this Agreement, each Contributor hereby + grants Recipient a non-exclusive, worldwide, royalty-free copyright + license to reproduce, prepare Derivative Works of, publicly display, + publicly perform, Distribute and sublicense the Contribution of such + Contributor, if any, and such Derivative Works. + + b) Subject to the terms of this Agreement, each Contributor hereby + grants Recipient a non-exclusive, worldwide, royalty-free patent + license under Licensed Patents to make, use, sell, offer to sell, + import and otherwise transfer the Contribution of such Contributor, + if any, in Source Code or other form. This patent license shall + apply to the combination of the Contribution and the Program if, at + the time the Contribution is added by the Contributor, such addition + of the Contribution causes such combination to be covered by the + Licensed Patents. The patent license shall not apply to any other + combinations which include the Contribution. No hardware per se is + licensed hereunder. + + c) Recipient understands that although each Contributor grants the + licenses to its Contributions set forth herein, no assurances are + provided by any Contributor that the Program does not infringe the + patent or other intellectual property rights of any other entity. + Each Contributor disclaims any liability to Recipient for claims + brought by any other entity based on infringement of intellectual + property rights or otherwise. As a condition to exercising the + rights and licenses granted hereunder, each Recipient hereby + assumes sole responsibility to secure any other intellectual + property rights needed, if any. For example, if a third party + patent license is required to allow Recipient to Distribute the + Program, it is Recipient's responsibility to acquire that license + before distributing the Program. + + d) Each Contributor represents that to its knowledge it has + sufficient copyright rights in its Contribution, if any, to grant + the copyright license set forth in this Agreement. + + e) Notwithstanding the terms of any Secondary License, no + Contributor makes additional grants to any Recipient (other than + those set forth in this Agreement) as a result of such Recipient's + receipt of the Program under the terms of a Secondary License + (if permitted under the terms of Section 3). + +3. 
REQUIREMENTS + +3.1 If a Contributor Distributes the Program in any form, then: + + a) the Program must also be made available as Source Code, in + accordance with section 3.2, and the Contributor must accompany + the Program with a statement that the Source Code for the Program + is available under this Agreement, and informs Recipients how to + obtain it in a reasonable manner on or through a medium customarily + used for software exchange; and + + b) the Contributor may Distribute the Program under a license + different than this Agreement, provided that such license: + i) effectively disclaims on behalf of all other Contributors all + warranties and conditions, express and implied, including + warranties or conditions of title and non-infringement, and + implied warranties or conditions of merchantability and fitness + for a particular purpose; + + ii) effectively excludes on behalf of all other Contributors all + liability for damages, including direct, indirect, special, + incidental and consequential damages, such as lost profits; + + iii) does not attempt to limit or alter the recipients' rights + in the Source Code under section 3.2; and + + iv) requires any subsequent distribution of the Program by any + party to be under a license that satisfies the requirements + of this section 3. + +3.2 When the Program is Distributed as Source Code: + + a) it must be made available under this Agreement, or if the + Program (i) is combined with other material in a separate file or + files made available under a Secondary License, and (ii) the initial + Contributor attached to the Source Code the notice described in + Exhibit A of this Agreement, then the Program may be made available + under the terms of such Secondary Licenses, and + + b) a copy of this Agreement must be included with each copy of + the Program. + +3.3 Contributors may not remove or alter any copyright, patent, +trademark, attribution notices, disclaimers of warranty, or limitations +of liability ("notices") contained within the Program from any copy of +the Program which they Distribute, provided that Contributors may add +their own appropriate notices. + +4. COMMERCIAL DISTRIBUTION + +Commercial distributors of software may accept certain responsibilities +with respect to end users, business partners and the like. While this +license is intended to facilitate the commercial use of the Program, +the Contributor who includes the Program in a commercial product +offering should do so in a manner which does not create potential +liability for other Contributors. Therefore, if a Contributor includes +the Program in a commercial product offering, such Contributor +("Commercial Contributor") hereby agrees to defend and indemnify every +other Contributor ("Indemnified Contributor") against any losses, +damages and costs (collectively "Losses") arising from claims, lawsuits +and other legal actions brought by a third party against the Indemnified +Contributor to the extent caused by the acts or omissions of such +Commercial Contributor in connection with its distribution of the Program +in a commercial product offering. The obligations in this section do not +apply to any claims or Losses relating to any actual or alleged +intellectual property infringement. 
In order to qualify, an Indemnified +Contributor must: a) promptly notify the Commercial Contributor in +writing of such claim, and b) allow the Commercial Contributor to control, +and cooperate with the Commercial Contributor in, the defense and any +related settlement negotiations. The Indemnified Contributor may +participate in any such claim at its own expense. + +For example, a Contributor might include the Program in a commercial +product offering, Product X. That Contributor is then a Commercial +Contributor. If that Commercial Contributor then makes performance +claims, or offers warranties related to Product X, those performance +claims and warranties are such Commercial Contributor's responsibility +alone. Under this section, the Commercial Contributor would have to +defend claims against the other Contributors related to those performance +claims and warranties, and if a court requires any other Contributor to +pay any damages as a result, the Commercial Contributor must pay +those damages. + +5. NO WARRANTY + +EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, AND TO THE EXTENT +PERMITTED BY APPLICABLE LAW, THE PROGRAM IS PROVIDED ON AN "AS IS" +BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, EITHER EXPRESS OR +IMPLIED INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OR CONDITIONS OF +TITLE, NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR +PURPOSE. Each Recipient is solely responsible for determining the +appropriateness of using and distributing the Program and assumes all +risks associated with its exercise of rights under this Agreement, +including but not limited to the risks and costs of program errors, +compliance with applicable laws, damage to or loss of data, programs +or equipment, and unavailability or interruption of operations. + +6. DISCLAIMER OF LIABILITY + +EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, AND TO THE EXTENT +PERMITTED BY APPLICABLE LAW, NEITHER RECIPIENT NOR ANY CONTRIBUTORS +SHALL HAVE ANY LIABILITY FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, +EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING WITHOUT LIMITATION LOST +PROFITS), HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OR DISTRIBUTION OF THE PROGRAM OR THE +EXERCISE OF ANY RIGHTS GRANTED HEREUNDER, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGES. + +7. GENERAL + +If any provision of this Agreement is invalid or unenforceable under +applicable law, it shall not affect the validity or enforceability of +the remainder of the terms of this Agreement, and without further +action by the parties hereto, such provision shall be reformed to the +minimum extent necessary to make such provision valid and enforceable. + +If Recipient institutes patent litigation against any entity +(including a cross-claim or counterclaim in a lawsuit) alleging that the +Program itself (excluding combinations of the Program with other software +or hardware) infringes such Recipient's patent(s), then such Recipient's +rights granted under Section 2(b) shall terminate as of the date such +litigation is filed. + +All Recipient's rights under this Agreement shall terminate if it +fails to comply with any of the material terms or conditions of this +Agreement and does not cure such failure in a reasonable period of +time after becoming aware of such noncompliance. 
If all Recipient's
+rights under this Agreement terminate, Recipient agrees to cease use
+and distribution of the Program as soon as reasonably practicable.
+However, Recipient's obligations under this Agreement and any licenses
+granted by Recipient relating to the Program shall continue and survive.
+
+Everyone is permitted to copy and distribute copies of this Agreement,
+but in order to avoid inconsistency the Agreement is copyrighted and
+may only be modified in the following manner. The Agreement Steward
+reserves the right to publish new versions (including revisions) of
+this Agreement from time to time. No one other than the Agreement
+Steward has the right to modify this Agreement. The Eclipse Foundation
+is the initial Agreement Steward. The Eclipse Foundation may assign the
+responsibility to serve as the Agreement Steward to a suitable separate
+entity. Each new version of the Agreement will be given a distinguishing
+version number. The Program (including Contributions) may always be
+Distributed subject to the version of the Agreement under which it was
+received. In addition, after a new version of the Agreement is published,
+Contributor may elect to Distribute the Program (including its
+Contributions) under the new version.
+
+Except as expressly stated in Sections 2(a) and 2(b) above, Recipient
+receives no rights or licenses to the intellectual property of any
+Contributor under this Agreement, whether expressly, by implication,
+estoppel or otherwise. All rights in the Program not expressly granted
+under this Agreement are reserved. Nothing in this Agreement is intended
+to be enforceable by any entity that is not a Contributor or Recipient.
+No third-party beneficiary rights are created under this Agreement.
+
+Exhibit A - Form of Secondary Licenses Notice
+
+"This Source Code may also be made available under the following
+Secondary Licenses when the conditions for such availability set forth
+in the Eclipse Public License, v. 2.0 are satisfied: GNU General Public
+License as published by the Free Software Foundation, either version 2
+of the License, or (at your option) any later version, with the GNU
+Classpath Exception which is available at
+https://www.gnu.org/software/classpath/license.html."
+
+  Simply including a copy of this Agreement, including this Exhibit A
+  is not sufficient to license the Source Code under Secondary Licenses.
+
+  If it is not possible or desirable to put the notice in a particular
+  file, then You may include the notice in a location (such as a LICENSE
+  file in a relevant directory) where a recipient would be likely to
+  look for such a notice.
+
+  You may add additional accurate notices of copyright ownership.
diff --git a/tests/jepsen.clickhouse-keeper/README.md b/tests/jepsen.clickhouse-keeper/README.md
new file mode 100644
index 00000000000..8f3754b8f7b
--- /dev/null
+++ b/tests/jepsen.clickhouse-keeper/README.md
@@ -0,0 +1,155 @@
+# Jepsen tests for ClickHouse Keeper
+
+A Clojure library designed to test the ZooKeeper-like implementation inside ClickHouse.
+
+## Test scenarios (workloads)
+
+### CAS register
+
+The CAS register has three operations: read a number, write a number, and compare-and-swap a number. The register is simulated as a single ZooKeeper node. A read maps to ZooKeeper's `getData` request, a write to the `set` request, and compare-and-swap is implemented as `getData`, a comparison in code, and a `set` of the new value with the `version` obtained from `getData`.
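+
+For reference, here is a minimal sketch of that compare-and-swap flow, assuming the zookeeper-clj client used throughout this library (the real implementation is `zk-cas` in `src/jepsen/clickhouse_keeper/utils.clj`; the name `cas!` below is hypothetical):
+
+```clojure
+(require '[zookeeper :as zk]
+         '[zookeeper.data :as data])
+
+(defn cas!
+  "Set path to new-value only if it currently holds old-value.
+  Returns true on success, nil if the stored value differs."
+  [conn path old-value new-value]
+  (let [{:keys [data stat]} (zk/data conn path)
+        current (Long/parseLong (data/to-string data))]
+    (when (= current old-value)
+      ;; set-data throws KeeperException$BadVersionException if the node
+      ;; changed between our read and this write, so the read-compare-set
+      ;; sequence cannot silently overwrite a concurrent update.
+      (zk/set-data conn path (data/to-bytes (str new-value)) (:version stat))
+      true)))
+```
+
+A CAS that loses the version race surfaces as `KeeperException$BadVersionException`, which the register client reports as a failed (`:fail`) operation.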
+
+In this test, we use a linearizable checker, so Jepsen validates that the history was linearizable. This is one of the heaviest workloads.
+
+Strictly requires `quorum_reads` to be true.
+
+### Set
+
+Set has two operations: add a number to the set and read all values from the set. This workload is simulated on a single ZooKeeper node whose string value represents a Clojure set data structure. The add operation is very similar to compare-and-swap: we read the string value from the ZooKeeper node with `getData`, parse it into a Clojure set, add the new value to the set, and try to write it back with the received version.
+
+In this test, Jepsen validates that all successfully added values can be read. The generator for this workload performs only add operations until a timeout, and after that tries to read the set once.
+
+### Unique IDs
+
+The Unique IDs workload has only one operation: generate a new unique number. It's implemented using ZooKeeper's sequential nodes. For each generate request the client simply creates a new sequential node in ZooKeeper with a fixed prefix, then cuts the prefix off the returned path and parses the number from the remainder.
+
+Jepsen checks that all returned IDs were unique.
+
+### Counter
+
+The Counter workload has two operations: read the counter value and add some number to the counter. Its implementation is quite unusual. We add a number `N` to the counter by creating `N` sequential nodes in a single ZooKeeper transaction. A counter read is implemented as a `getChildren` ZooKeeper request and a count of all returned nodes.
+
+Jepsen checks that the counter value lies in the interval of possible values. Strictly requires `quorum_reads` to be true.
+
+### Total queue
+
+Simulates an unordered queue with three operations: enqueue a number, dequeue, and drain. The enqueue operation uses a `create` request with the node name equal to the number. The dequeue operation is more interesting: we list (`getChildren`) all nodes and remember the parent node version, then choose the smallest child and prepare a transaction: `check` the parent node version + `set` an empty value on the parent node + `delete` the smallest child node. The drain operation is just `getChildren` on the parent path.
+
+Jepsen checks that all enqueued values were dequeued or drained. Duplicates are allowed because Jepsen doesn't know the value of an unknown-status (`:info`) dequeue operation. So when we try to `dequeue` some element, we must return it even if our delete transaction failed with a `Connection loss` error.
+
+### Linear queue
+
+Same as the total queue, but without the drain operation. Checks linearizability between enqueue and dequeue. Validation sometimes consumes more than 10GB even for very short histories.
+
+
+## Nemesis
+
+We use almost all standard nemeses, with small changes for our storage.
+
+### Random node killer (random-node-killer)
+
+Sleeps 5 seconds, kills a random node, sleeps for 5 more seconds, and starts it back.
+
+### All nodes killer (all-nodes-killer)
+
+Kills all nodes at once, sleeps for 5 seconds, and starts them back.
+
+### Simple partitioner (simple-partitioner)
+
+Partitions one node from the others using iptables. No one can see the victim, and the victim cannot see anybody.
+
+### Random node stop (random-node-hammer-time)
+
+Sends `SIGSTOP` to a random node, sleeps 5 seconds, then sends `SIGCONT`.
+
+### All nodes stop (all-nodes-hammer-time)
+
+Sends `SIGSTOP` to all nodes, sleeps 5 seconds, then sends `SIGCONT`.
+
+### Logs corruptor (logs-corruptor)
+
+Corrupts the latest log (changes one random byte) in `clickhouse_path/coordination/logs`. Restarts the node.
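+
+All corruptor nemeses share the same corruption step: flip one random byte of the chosen file. A minimal sketch of that step, mirroring `corrupt-file` from `src/jepsen/clickhouse_keeper/nemesis.clj` (the name `corrupt-one-byte!` is hypothetical) and using `jepsen.control` to run commands on the node:
+
+```clojure
+(require '[jepsen.control :as c])
+
+(defn corrupt-one-byte!
+  "Overwrite one random byte of fname with a zero byte, in place."
+  [fname]
+  (let [fsize (Integer/parseInt (c/exec :du :-b fname :| :cut :-f1))]
+    ;; dd with conv=notrunc patches a single byte without truncating the file.
+    (c/exec :dd "if=/dev/zero" (str "of=" fname)
+            "bs=1" "count=1"
+            (str "seek=" (rand-int fsize))
+            "conv=notrunc")))
+```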
+
+### Snapshots corruptor (snapshots-corruptor)
+
+Corrupts the latest snapshot (changes one random byte) in `clickhouse_path/coordination/snapshots`. Restarts the node.
+
+### Logs and snapshots corruptor (logs-and-snapshots-corruptor)
+
+Corrupts both the latest log and the latest snapshot. Restarts the node.
+
+### Drop data corruptor (drop-data-corruptor)
+
+Drops all data from `clickhouse_path/coordination`. Restarts the node.
+
+### Bridge partitioner (bridge-partitioner)
+
+Two nodes don't see each other, but both can see a third node, and that node can see both of them.
+
+### Blind node partitioner (blind-node-partitioner)
+
+One node cannot see the other nodes, but they can see it.
+
+### Blind others partitioner (blind-others-partitioner)
+
+Two nodes don't see one node, but that node can see both.
+
+## Usage
+
+### Dependencies
+
+- leiningen (https://leiningen.org/)
+- clojure (https://clojure.org/)
+- JVM
+
+### Options for `lein run`
+
+- `test` Run a single test.
+- `test-all` Run all available tests from the test set.
+- `-w (--workload)` One of the workloads. Option for a single `test`.
+- `--nemesis` One of the nemeses. Option for a single `test`.
+- `-q (--quorum)` Run the test with quorum reads.
+- `-r (--rate)` How many operations per second Jepsen will generate in a single thread.
+- `-s (--snapshot-distance)` ClickHouse Keeper setting. How often to create a new snapshot.
+- `--stale-log-gap` ClickHouse Keeper setting. The leader sends a snapshot instead of logs to a node if that node's committed index is less than the leader's minus this setting value.
+- `--reserved-log-items` ClickHouse Keeper setting. How many log items to keep after a snapshot.
+- `--ops-per-key` Option for the CAS register workload. Total ops that will be generated for a single register.
+- `--lightweight-run` Run some lightweight tests without linearizability checks. Option for a `test-all` run.
+- `--reuse-binary` Don't download the clickhouse binary if it already exists on the node.
+- `--clickhouse-source` URL to a clickhouse `.deb`, `.tgz` or binary.
+- `--time-limit` (in seconds) How long Jepsen will generate new operations.
+- `--nodes-file` File with nodes for SSH. Newline-separated.
+- `--username` SSH username for the nodes.
+- `--password` SSH password for the nodes.
+- `--concurrency` How many threads Jepsen will use for concurrent requests.
+- `--test-count` How many times to run a single test, or how many tests to run from the test set.
+
+
+### Examples
+
+1. Run the `Set` workload with `logs-and-snapshots-corruptor` ten times:
+
+```sh
+$ lein run test --nodes-file nodes.txt --username root --password '' --time-limit 30 --concurrency 50 -r 50 --workload set --nemesis logs-and-snapshots-corruptor --clickhouse-source 'https://clickhouse-builds.s3.yandex.net/someurl/clickhouse-common-static_21.4.1.6321_amd64.deb' -q --test-count 10 --reuse-binary
+```
+
+2. Run ten random tests from the `lightweight-run` set with some custom Keeper settings:
+
+```sh
+$ lein run test-all --nodes-file nodes.txt --username root --password '' --time-limit 30 --concurrency 50 -r 50 --snapshot-distance 100 --stale-log-gap 100 --reserved-log-items 10 --lightweight-run --clickhouse-source 'someurl' -q --reuse-binary --test-count 10
+```
+
+
+## License
+
+Copyright © 2021 FIXME
+
+This program and the accompanying materials are made available under the
+terms of the Eclipse Public License 2.0 which is available at
+http://www.eclipse.org/legal/epl-2.0.
+
+This Source Code may also be made available under the following Secondary
+Licenses when the conditions for such availability set forth in the Eclipse
+Public License, v. 2.0 are satisfied: GNU General Public License as published by
+the Free Software Foundation, either version 2 of the License, or (at your
+option) any later version, with the GNU Classpath Exception which is available
+at https://www.gnu.org/software/classpath/license.html.
diff --git a/tests/jepsen.clickhouse-keeper/doc/intro.md b/tests/jepsen.clickhouse-keeper/doc/intro.md
new file mode 100644
index 00000000000..09ce235c467
--- /dev/null
+++ b/tests/jepsen.clickhouse-keeper/doc/intro.md
@@ -0,0 +1,3 @@
+# Introduction to jepsen.keeper
+
+TODO: write [great documentation](http://jacobian.org/writing/what-to-write/)
diff --git a/tests/jepsen.clickhouse-keeper/project.clj b/tests/jepsen.clickhouse-keeper/project.clj
new file mode 100644
index 00000000000..c38767a767d
--- /dev/null
+++ b/tests/jepsen.clickhouse-keeper/project.clj
@@ -0,0 +1,13 @@
+(defproject jepsen.keeper "0.1.0-SNAPSHOT"
+  :injections [(.. System (setProperty "zookeeper.request.timeout" "10000"))]
+  :description "Jepsen tests for ClickHouse Keeper"
+  :url "https://clickhouse.tech/"
+  :license {:name "EPL-2.0"
+            :url "https://www.eclipse.org/legal/epl-2.0/"}
+  :main jepsen.clickhouse-keeper.main
+  :plugins [[lein-cljfmt "0.7.0"]]
+  :dependencies [[org.clojure/clojure "1.10.1"]
+                 [jepsen "0.2.3"]
+                 [zookeeper-clj "0.9.4"]
+                 [org.apache.zookeeper/zookeeper "3.6.1" :exclusions [org.slf4j/slf4j-log4j12]]]
+  :repl-options {:init-ns jepsen.clickhouse-keeper.main})
diff --git a/tests/jepsen.clickhouse-keeper/resources/config.xml b/tests/jepsen.clickhouse-keeper/resources/config.xml
new file mode 120000
index 00000000000..c7596baa075
--- /dev/null
+++ b/tests/jepsen.clickhouse-keeper/resources/config.xml
@@ -0,0 +1 @@
+../../../programs/server/config.xml
\ No newline at end of file
diff --git a/tests/jepsen.clickhouse-keeper/resources/keeper_config.xml b/tests/jepsen.clickhouse-keeper/resources/keeper_config.xml
new file mode 100644
index 00000000000..528ea5d77be
--- /dev/null
+++ b/tests/jepsen.clickhouse-keeper/resources/keeper_config.xml
@@ -0,0 +1,39 @@
+<yandex>
+    <keeper_server>
+        <tcp_port>9181</tcp_port>
+        <server_id>{id}</server_id>
+
+        <coordination_settings>
+            <operation_timeout_ms>10000</operation_timeout_ms>
+            <session_timeout_ms>30000</session_timeout_ms>
+            <force_sync>false</force_sync>
+            <startup_timeout>120000</startup_timeout>
+            <raft_logs_level>trace</raft_logs_level>
+            <heart_beat_interval_ms>1000</heart_beat_interval_ms>
+            <election_timeout_lower_bound_ms>2000</election_timeout_lower_bound_ms>
+            <election_timeout_upper_bound_ms>4000</election_timeout_upper_bound_ms>
+            <quorum_reads>{quorum_reads}</quorum_reads>
+            <snapshot_distance>{snapshot_distance}</snapshot_distance>
+            <stale_log_gap>{stale_log_gap}</stale_log_gap>
+            <reserved_log_items>{reserved_log_items}</reserved_log_items>
+        </coordination_settings>
+
+        <raft_configuration>
+            <server>
+                <id>1</id>
+                <hostname>{srv1}</hostname>
+                <port>9444</port>
+            </server>
+            <server>
+                <id>2</id>
+                <hostname>{srv2}</hostname>
+                <port>9444</port>
+            </server>
+            <server>
+                <id>3</id>
+                <hostname>{srv3}</hostname>
+                <port>9444</port>
+            </server>
+        </raft_configuration>
+    </keeper_server>
+</yandex>
diff --git a/tests/jepsen.clickhouse-keeper/resources/listen.xml b/tests/jepsen.clickhouse-keeper/resources/listen.xml
new file mode 100644
index 00000000000..de8c737ff75
--- /dev/null
+++ b/tests/jepsen.clickhouse-keeper/resources/listen.xml
@@ -0,0 +1,3 @@
+<yandex>
+    <listen_host>::</listen_host>
+</yandex>
diff --git a/tests/jepsen.clickhouse-keeper/resources/users.xml b/tests/jepsen.clickhouse-keeper/resources/users.xml
new file mode 120000
index 00000000000..41b137a130f
--- /dev/null
+++ b/tests/jepsen.clickhouse-keeper/resources/users.xml
@@ -0,0 +1 @@
+../../../programs/server/users.xml
\ No newline at end of file
diff --git a/tests/jepsen.clickhouse-keeper/resources/zoo.cfg b/tests/jepsen.clickhouse-keeper/resources/zoo.cfg
new file mode 100644
index 00000000000..fd49be16d0f
--- /dev/null
+++ b/tests/jepsen.clickhouse-keeper/resources/zoo.cfg
@@ -0,0 +1,23 @@
+# http://hadoop.apache.org/zookeeper/docs/current/zookeeperAdmin.html
+
+# The number of milliseconds of each tick
+tickTime=2000
+# The number of ticks that the initial
+# synchronization phase can take
+initLimit=10
+# The number of ticks
that can pass between
+# sending a request and getting an acknowledgement
+syncLimit=5
+# the directory where the snapshot is stored.
+dataDir=/var/lib/zookeeper
+# Place the dataLogDir to a separate physical disc for better performance
+# dataLogDir=/disk2/zookeeper
+
+# the port at which the clients will connect
+clientPort=2181
+
+# Leader accepts client connections. Default value is "yes". The leader machine
+# coordinates updates. For higher update throughput at the slight expense of
+# read throughput the leader can be configured to not accept clients and focus
+# on coordination.
+leaderServes=yes
diff --git a/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/bench.clj b/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/bench.clj
new file mode 100644
index 00000000000..040d2eaa77b
--- /dev/null
+++ b/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/bench.clj
@@ -0,0 +1,39 @@
+(ns jepsen.clickhouse-keeper.bench
+  (:require [clojure.tools.logging :refer :all]
+            [jepsen
+             [client :as client]])
+  (:import (java.lang ProcessBuilder)
+           (java.lang ProcessBuilder$Redirect)))
+
+(defn exec-process-builder
+  [command & args]
+  (let [pbuilder (ProcessBuilder. (into-array (cons command args)))]
+    (.redirectOutput pbuilder ProcessBuilder$Redirect/INHERIT)
+    (.redirectError pbuilder ProcessBuilder$Redirect/INHERIT)
+    (let [p (.start pbuilder)]
+      (.waitFor p))))
+
+(defrecord BenchClient [port]
+  client/Client
+  (open! [this test node]
+    this)
+
+  (setup! [this test]
+    this)
+
+  (invoke! [this test op]
+    (let [bench-opts (into [] (clojure.string/split (:bench-opts op) #" "))
+          bench-path (:bench-path op)
+          nodes (into [] (flatten (map (fn [x] (identity ["-h" (str x ":" port)])) (:nodes test))))
+          all-args (concat [bench-path] bench-opts nodes)]
+      (info "Running cmd" all-args)
+      (apply exec-process-builder all-args)
+      (assoc op :type :ok :value "ok")))
+
+  (teardown! [_ test])
+
+  (close! [_ test]))
+
+(defn bench-client
+  [port]
+  (BenchClient.
port))
diff --git a/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/constants.clj b/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/constants.clj
new file mode 100644
index 00000000000..cd62d66e652
--- /dev/null
+++ b/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/constants.clj
@@ -0,0 +1,20 @@
+(ns jepsen.clickhouse-keeper.constants)
+
+(def common-prefix "/home/robot-clickhouse")
+
+(def binary-name "clickhouse")
+
+(def binary-path (str common-prefix "/" binary-name))
+(def pid-file-path (str common-prefix "/clickhouse.pid"))
+
+(def data-dir (str common-prefix "/db"))
+(def logs-dir (str common-prefix "/logs"))
+(def configs-dir (str common-prefix "/config"))
+(def sub-configs-dir (str configs-dir "/config.d"))
+(def coordination-data-dir (str data-dir "/coordination"))
+(def coordination-snapshots-dir (str coordination-data-dir "/snapshots"))
+(def coordination-logs-dir (str coordination-data-dir "/logs"))
+
+(def stderr-file (str logs-dir "/stderr.log"))
+
+(def binaries-cache-dir (str common-prefix "/binaries"))
diff --git a/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/counter.clj b/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/counter.clj
new file mode 100644
index 00000000000..dfccf7dd635
--- /dev/null
+++ b/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/counter.clj
@@ -0,0 +1,50 @@
+(ns jepsen.clickhouse-keeper.counter
+  (:require
+   [clojure.tools.logging :refer :all]
+   [jepsen
+    [checker :as checker]
+    [client :as client]
+    [generator :as gen]]
+   [jepsen.clickhouse-keeper.utils :refer :all]
+   [zookeeper :as zk])
+  (:import (org.apache.zookeeper ZooKeeper KeeperException KeeperException$BadVersionException)))
+
+(defn r [_ _] {:type :invoke, :f :read})
+(defn add [_ _] {:type :invoke, :f :add, :value (rand-int 5)})
+
+(defrecord CounterClient [conn nodename]
+  client/Client
+  (open! [this test node]
+    (assoc
+     (assoc this
+            :conn (zk-connect node 9181 30000))
+     :nodename node))
+
+  (setup! [this test])
+
+  (invoke! [this test op]
+    (case (:f op)
+      :read (exec-with-retries 30 (fn []
+                                    (assoc op
+                                           :type :ok
+                                           :value (count (zk-list conn "/")))))
+      :add (try
+             (do
+               (zk-multi-create-many-seq-nodes conn "/seq-" (:value op))
+               (assoc op :type :ok))
+             (catch Exception _ (assoc op :type :info, :error :connect-error)))))
+
+  (teardown! [_ test])
+
+  (close! [_ test]
+    (zk/close conn)))
+
+(defn workload
+  "A generator, client, and checker for a counter test."
+  [opts]
+  {:client    (CounterClient. nil nil)
+   :checker   (checker/counter)
+   :generator (->> (range)
+                   (map (fn [x]
+                          (->> (gen/mix [r add])))))
+   :final-generator (gen/once {:type :invoke, :f :read, :value nil})})
diff --git a/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/db.clj b/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/db.clj
new file mode 100644
index 00000000000..fdb6b233fec
--- /dev/null
+++ b/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/db.clj
@@ -0,0 +1,151 @@
+(ns jepsen.clickhouse-keeper.db
+  (:require [clojure.tools.logging :refer :all]
+            [jepsen
+             [control :as c]
+             [db :as db]
+             [util :as util :refer [meh]]]
+            [jepsen.clickhouse-keeper.constants :refer :all]
+            [jepsen.clickhouse-keeper.utils :refer :all]
+            [clojure.java.io :as io]
+            [jepsen.control.util :as cu]
+            [jepsen.os.ubuntu :as ubuntu]))
+
+(defn get-clickhouse-sky
+  [version]
+  (c/exec :sky :get :-d common-prefix :-N :Backbone version)
+  (str common-prefix "/clickhouse"))
+
+(defn get-clickhouse-url
+  [url]
+  (non-precise-cached-wget!
url))
+
+(defn get-clickhouse-scp
+  [path]
+  (c/upload path (str common-prefix "/clickhouse")))
+
+(defn download-clickhouse
+  [source]
+  (info "Downloading clickhouse from" source)
+  (cond
+    (clojure.string/starts-with? source "rbtorrent:") (get-clickhouse-sky source)
+    (clojure.string/starts-with? source "http") (get-clickhouse-url source)
+    (.exists (io/file source)) (get-clickhouse-scp source)
+    :else (throw (Exception. (str "Don't know how to download clickhouse from " source)))))
+
+(defn unpack-deb
+  [path]
+  (do
+    (c/exec :dpkg :-x path common-prefix)
+    (c/exec :rm :-f path)
+    (c/exec :mv (str common-prefix "/usr/bin/clickhouse") common-prefix)
+    (c/exec :rm :-rf (str common-prefix "/usr") (str common-prefix "/etc"))))
+
+(defn unpack-tgz
+  [path]
+  (do
+    (c/exec :mkdir :-p (str common-prefix "/unpacked"))
+    (c/exec :tar :-zxvf path :-C (str common-prefix "/unpacked"))
+    (c/exec :rm :-f path)
+    (let [subdir (c/exec :ls (str common-prefix "/unpacked"))]
+      (c/exec :mv (str common-prefix "/unpacked/" subdir "/usr/bin/clickhouse") common-prefix)
+      (c/exec :rm :-fr (str common-prefix "/unpacked")))))
+
+(defn chmod-binary
+  [path]
+  (info "Binary path chmod" path)
+  (c/exec :chmod :+x path))
+
+(defn install-downloaded-clickhouse
+  [path]
+  (cond
+    (clojure.string/ends-with? path ".deb") (unpack-deb path)
+    (clojure.string/ends-with? path ".tgz") (unpack-tgz path)
+    (clojure.string/ends-with? path "clickhouse") (chmod-binary path)
+    :else (throw (Exception. (str "Don't know how to install clickhouse from path " path)))))
+
+(defn prepare-dirs
+  []
+  (do
+    (c/exec :mkdir :-p common-prefix)
+    (c/exec :mkdir :-p data-dir)
+    (c/exec :mkdir :-p logs-dir)
+    (c/exec :mkdir :-p configs-dir)
+    (c/exec :mkdir :-p sub-configs-dir)
+    (c/exec :touch stderr-file)
+    (c/exec :chown :-R :root common-prefix)))
+
+(defn cluster-config
+  [test node config-template]
+  (let [nodes (:nodes test)
+        replacement-map {#"\{srv1\}" (get nodes 0)
+                         #"\{srv2\}" (get nodes 1)
+                         #"\{srv3\}" (get nodes 2)
+                         #"\{id\}" (str (inc (.indexOf nodes node)))
+                         #"\{quorum_reads\}" (str (boolean (:quorum test)))
+                         #"\{snapshot_distance\}" (str (:snapshot-distance test))
+                         #"\{stale_log_gap\}" (str (:stale-log-gap test))
+                         #"\{reserved_log_items\}" (str (:reserved-log-items test))}]
+    (reduce #(clojure.string/replace %1 (get %2 0) (get %2 1)) config-template replacement-map)))
+
+(defn install-configs
+  [test node]
+  (c/exec :echo (slurp (io/resource "config.xml")) :> (str configs-dir "/config.xml"))
+  (c/exec :echo (slurp (io/resource "users.xml")) :> (str configs-dir "/users.xml"))
+  (c/exec :echo (slurp (io/resource "listen.xml")) :> (str sub-configs-dir "/listen.xml"))
+  (c/exec :echo (cluster-config test node (slurp (io/resource "keeper_config.xml"))) :> (str sub-configs-dir "/keeper_config.xml")))
+
+(defn collect-traces
+  [test node]
+  (let [pid (c/exec :pidof "clickhouse")]
+    (c/exec :timeout :-s "KILL" "60" :gdb :-ex "set pagination off" :-ex (str "set logging file " logs-dir "/gdb.log") :-ex
+            "set logging on" :-ex "backtrace" :-ex "thread apply all backtrace"
+            :-ex "backtrace" :-ex "detach" :-ex "quit" :--pid pid :|| :true)))
+
+(defn db
+  [version reuse-binary]
+  (reify db/DB
+    (setup! [_ test node]
+      (c/su
+       (do
+         (info "Preparing directories")
+         (prepare-dirs)
+         (if (or (not (cu/exists?
binary-path)) (not reuse-binary))
+           (do (info "Downloading clickhouse")
+               (install-downloaded-clickhouse (download-clickhouse version)))
+           (info "Binary already exists at path" binary-path "skipping download"))
+         (info "Installing configs")
+         (install-configs test node)
+         (info "Starting server")
+         (start-clickhouse! node test)
+         (info "ClickHouse started"))))
+
+    (teardown! [_ test node]
+      (info node "Tearing down clickhouse")
+      (c/su
+       (kill-clickhouse! node test)
+       (if (not reuse-binary)
+         (c/exec :rm :-rf binary-path))
+       (c/exec :rm :-rf pid-file-path)
+       (c/exec :rm :-rf data-dir)
+       (c/exec :rm :-rf logs-dir)
+       (c/exec :rm :-rf configs-dir)))
+
+    db/LogFiles
+    (log-files [_ test node]
+      (c/su
+       ;(if (cu/exists? pid-file-path)
+       ;(do
+       ;  (info node "Collecting traces")
+       ;  (collect-traces test node))
+       ;(info node "Pid files doesn't exists"))
+       (kill-clickhouse! node test)
+       (if (cu/exists? coordination-data-dir)
+         (do
+           (info node "Coordination files exist, going to compress")
+           (c/cd data-dir
+                 (c/exec :tar :czf "coordination.tar.gz" "coordination")))))
+      (let [common-logs [stderr-file (str logs-dir "/clickhouse-server.log") (str data-dir "/coordination.tar.gz")]
+            gdb-log (str logs-dir "/gdb.log")]
+        (if (cu/exists? (str logs-dir "/gdb.log"))
+          (conj common-logs gdb-log)
+          common-logs)))))
diff --git a/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/main.clj b/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/main.clj
new file mode 100644
index 00000000000..0384d4d583a
--- /dev/null
+++ b/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/main.clj
@@ -0,0 +1,202 @@
+(ns jepsen.clickhouse-keeper.main
+  (:require [clojure.tools.logging :refer :all]
+            [jepsen.clickhouse-keeper.utils :refer :all]
+            [clojure.pprint :refer [pprint]]
+            [jepsen.clickhouse-keeper.set :as set]
+            [jepsen.clickhouse-keeper.db :refer :all]
+            [jepsen.clickhouse-keeper.zookeeperdb :refer :all]
+            [jepsen.clickhouse-keeper.nemesis :as custom-nemesis]
+            [jepsen.clickhouse-keeper.register :as register]
+            [jepsen.clickhouse-keeper.unique :as unique]
+            [jepsen.clickhouse-keeper.queue :as queue]
+            [jepsen.clickhouse-keeper.counter :as counter]
+            [jepsen.clickhouse-keeper.bench :as bench]
+            [jepsen.clickhouse-keeper.constants :refer :all]
+            [clojure.string :as str]
+            [jepsen
+             [checker :as checker]
+             [cli :as cli]
+             [client :as client]
+             [control :as c]
+             [db :as db]
+             [nemesis :as nemesis]
+             [generator :as gen]
+             [independent :as independent]
+             [tests :as tests]
+             [util :as util :refer [meh]]]
+            [jepsen.control.util :as cu]
+            [jepsen.os.ubuntu :as ubuntu]
+            [jepsen.checker.timeline :as timeline]
+            [clojure.java.io :as io]
+            [zookeeper.data :as data]
+            [zookeeper :as zk])
+  (:import (org.apache.zookeeper ZooKeeper KeeperException KeeperException$BadVersionException)
+           (ch.qos.logback.classic Level)
+           (org.slf4j Logger LoggerFactory)))
+
+(def workloads
+  "A map of workload names to functions that construct workloads, given opts."
+  {"set"          set/workload
+   "register"     register/workload
+   "unique-ids"   unique/workload
+   "counter"      counter/workload
+   "total-queue"  queue/total-workload
+   "linear-queue" queue/linear-workload})
+
+(def cli-opts
+  "Additional command line options."
+  [["-w" "--workload NAME" "What workload should we run?"
+    :default "set"
+    :validate [workloads (cli/one-of workloads)]]
+   [nil "--nemesis NAME" "Which nemesis will poison our lives?"
+ :default "random-node-killer" + :validate [custom-nemesis/custom-nemesises (cli/one-of custom-nemesis/custom-nemesises)]] + ["-q" "--quorum" "Use quorum reads, instead of reading from any primary."] + ["-r" "--rate HZ" "Approximate number of requests per second, per thread." + :default 10 + :parse-fn read-string + :validate [#(and (number? %) (pos? %)) "Must be a positive number"]] + ["-s" "--snapshot-distance NUM" "Number of log entries to create snapshot" + :default 10000 + :parse-fn read-string + :validate [#(and (number? %) (pos? %)) "Must be a positive number"]] + [nil "--stale-log-gap NUM" "Number of log entries to send snapshot instead of separate logs" + :default 1000 + :parse-fn read-string + :validate [#(and (number? %) (pos? %)) "Must be a positive number"]] + [nil "--reserved-log-items NUM" "Number of log entries to keep after snapshot" + :default 1000 + :parse-fn read-string + :validate [#(and (number? %) (pos? %)) "Must be a positive number"]] + [nil "--ops-per-key NUM" "Maximum number of operations on any given key." + :default 100 + :parse-fn parse-long + :validate [pos? "Must be a positive integer."]] + [nil, "--lightweight-run" "Subset of workloads/nemesises which is simple to validate"] + [nil, "--reuse-binary" "Use already downloaded binary if it exists, don't remove it on shutdown"] + [nil, "--bench" "Run perf-test mode"] + [nil, "--zookeeper-version VERSION" "Run zookeeper with version" + :default ""] + [nil, "--bench-opts STR" "Run perf-test mode" + :default "--generator list_medium_nodes -c 30 -i 1000"] + ["-c" "--clickhouse-source URL" "URL for clickhouse deb or tgz package" + :default "https://clickhouse-builds.s3.yandex.net/21677/ef82333089156907a0979669d9374c2e18daabe5/clickhouse_build_check/clang-11_relwithdebuginfo_none_bundled_unsplitted_disable_False_deb/clickhouse-common-static_21.4.1.6313_amd64.deb"] + [nil "--bench-path path" "Path to keeper-bench util" + :default "/home/alesap/code/cpp/BuildCH/utils/keeper-bench/keeper-bench"]]) + +(defn get-db + [opts] + (if (empty? (:zookeeper-version opts)) + (db (:clickhouse-source opts) (boolean (:reuse-binary opts))) + (zookeeper-db (:zookeeper-version opts)))) + +(defn get-port + [opts] + (if (empty? 
(:zookeeper-version opts))
+    9181
+    2181))
+
+(defn clickhouse-func-tests
+  [opts]
+  (info "Test opts\n" (with-out-str (pprint opts)))
+  (let [quorum (boolean (:quorum opts))
+        workload ((get workloads (:workload opts)) opts)
+        current-nemesis (get custom-nemesis/custom-nemesises (:nemesis opts))]
+    (merge tests/noop-test
+           opts
+           {:name (str "clickhouse-keeper-quorum=" quorum "-" (name (:workload opts)) "-" (name (:nemesis opts)))
+            :os ubuntu/os
+            :db (get-db opts)
+            :pure-generators true
+            :client (:client workload)
+            :nemesis (:nemesis current-nemesis)
+            :checker (checker/compose
+                      {:perf     (checker/perf)
+                       :workload (:checker workload)})
+            :generator (gen/phases
+                        (->> (:generator workload)
+                             (gen/stagger (/ (:rate opts)))
+                             (gen/nemesis (:generator current-nemesis))
+                             (gen/time-limit (:time-limit opts)))
+                        (gen/log "Healing cluster")
+                        (gen/nemesis (gen/once {:type :info, :f :stop}))
+                        (gen/log "Waiting for recovery")
+                        (gen/sleep 10)
+                        (gen/clients (:final-generator workload)))})))
+
+(defn clickhouse-perf-test
+  [opts]
+  (info "Starting performance test")
+  (let [dct {:type :invoke :bench-opts (:bench-opts opts) :bench-path (:bench-path opts)}]
+    (merge tests/noop-test
+           opts
+           {:name (str "clickhouse-keeper-perf")
+            :os ubuntu/os
+            :db (get-db opts)
+            :pure-generators true
+            :client (bench/bench-client (get-port opts))
+            :nemesis nemesis/noop
+            :generator (->> dct
+                            (gen/stagger 1)
+                            (gen/nemesis nil))})))
+
+(defn clickhouse-keeper-test
+  "Given an options map from the command line runner (e.g. :nodes, :ssh,
+  :concurrency, ...), constructs a test map."
+  [opts]
+  (if (boolean (:bench opts))
+    (clickhouse-perf-test opts)
+    (clickhouse-func-tests opts)))
+
+(def all-nemesises (keys custom-nemesis/custom-nemesises))
+
+(def all-workloads (keys workloads))
+
+(def lightweight-workloads ["set" "unique-ids" "counter" "total-queue"])
+
+(def useful-nemesises ["random-node-killer"
+                       "simple-partitioner"
+                       "all-nodes-hammer-time"
+                       ; can lead to a very rare data loss https://github.com/eBay/NuRaft/issues/185
+                       ;"logs-and-snapshots-corruptor"
+                       ;"drop-data-corruptor"
+                       "bridge-partitioner"
+                       "blind-node-partitioner"
+                       "blind-others-partitioner"])
+
+(defn cart [colls]
+  (if (empty? colls)
+    '(())
+    (for [more (cart (rest colls))
+          x (first colls)]
+      (cons x more))))
+
+(defn all-test-options
+  "Takes base cli options, a collection of nemeses, workloads, and a test count,
+  and constructs a sequence of test options."
+  [cli workload-nemeses-collection]
+  (take (:test-count cli)
+        (shuffle (for [[workload nemesis] workload-nemeses-collection]
+                   (assoc cli
+                          :nemesis nemesis
+                          :workload workload
+                          :test-count 1)))))
+(defn all-tests
+  "Turns CLI options into a sequence of tests."
+  [test-fn cli]
+  (if (boolean (:lightweight-run cli))
+    (map test-fn (all-test-options cli (cart [lightweight-workloads useful-nemesises])))
+    (map test-fn (all-test-options cli (cart [all-workloads all-nemesises])))))
+
+(defn -main
+  "Handles command line arguments. Can either run a test, or a web server for
+  browsing results."
+  [& args]
+  (.setLevel
+   (LoggerFactory/getLogger "org.apache.zookeeper") Level/OFF)
+  (cli/run!
(merge (cli/single-test-cmd {:test-fn clickhouse-keeper-test + :opt-spec cli-opts}) + (cli/test-all-cmd {:tests-fn (partial all-tests clickhouse-keeper-test) + :opt-spec cli-opts}) + (cli/serve-cmd)) + args)) diff --git a/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/nemesis.clj b/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/nemesis.clj new file mode 100644 index 00000000000..caf59d3a25f --- /dev/null +++ b/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/nemesis.clj @@ -0,0 +1,160 @@ +(ns jepsen.clickhouse-keeper.nemesis + (:require + [clojure.tools.logging :refer :all] + [jepsen + [nemesis :as nemesis] + [control :as c] + [generator :as gen]] + [jepsen.clickhouse-keeper.constants :refer :all] + [jepsen.clickhouse-keeper.utils :refer :all])) + +(defn random-node-killer-nemesis + [] + (nemesis/node-start-stopper + rand-nth + (fn start [test node] (kill-clickhouse! node test)) + (fn stop [test node] (start-clickhouse! node test)))) + +(defn all-nodes-killer-nemesis + [] + (nemesis/node-start-stopper + identity + (fn start [test node] (kill-clickhouse! node test)) + (fn stop [test node] (start-clickhouse! node test)))) + +(defn random-node-hammer-time-nemesis + [] + (nemesis/hammer-time "clickhouse")) + +(defn all-nodes-hammer-time-nemesis + [] + (nemesis/hammer-time identity "clickhouse")) + +(defn select-last-file + [path] + (last (clojure.string/split + (c/exec :find path :-type :f :-printf "%T+ %p\n" :| :grep :-v :tmp_ :| :sort :| :awk "{print $2}") + #"\n"))) + +(defn random-file-pos + [fname] + (let [fsize (Integer/parseInt (c/exec :du :-b fname :| :cut :-f1))] + (rand-int fsize))) + +(defn corrupt-file + [fname] + (if (not (empty? fname)) + (do + (info "Corrupting" fname) + (c/exec :dd "if=/dev/zero" (str "of=" fname) "bs=1" "count=1" (str "seek=" (random-file-pos fname)) "conv=notrunc")) + (info "Nothing to corrupt"))) + +(defn corruptor-nemesis + [path corruption-op] + (reify nemesis/Nemesis + + (setup! [this test] this) + + (invoke! [this test op] + (cond (= (:f op) :corrupt) + (let [nodes (list (rand-nth (:nodes test)))] + (info "Corruption on node" nodes) + (c/on-nodes test nodes + (fn [test node] + (c/su + (kill-clickhouse! node test) + (corruption-op path) + (start-clickhouse! node test)))) + (assoc op :type :info, :value :corrupted)) + :else (do (c/on-nodes test (:nodes test) + (fn [test node] + (c/su + (start-clickhouse! node test)))) + (assoc op :type :info, :value :done)))) + + (teardown! 
[this test]))) + +(defn logs-corruption-nemesis + [] + (corruptor-nemesis coordination-logs-dir #(corrupt-file (select-last-file %1)))) + +(defn snapshots-corruption-nemesis + [] + (corruptor-nemesis coordination-snapshots-dir #(corrupt-file (select-last-file %1)))) + +(defn logs-and-snapshots-corruption-nemesis + [] + (corruptor-nemesis coordination-data-dir (fn [path] + (do + (corrupt-file (select-last-file (str path "/snapshots"))) + (corrupt-file (select-last-file (str path "/logs"))))))) +(defn drop-all-corruption-nemesis + [] + (corruptor-nemesis coordination-data-dir (fn [path] + (c/exec :rm :-fr path)))) + +(defn partition-bridge-nemesis + [] + (nemesis/partitioner nemesis/bridge)) + +(defn blind-node + [nodes] + (let [[[victim] others] (nemesis/split-one nodes)] + {victim (into #{} others)})) + +(defn blind-node-partition-nemesis + [] + (nemesis/partitioner blind-node)) + +(defn blind-others + [nodes] + (let [[[victim] others] (nemesis/split-one nodes)] + (into {} (map (fn [node] [node #{victim}])) others))) + +(defn blind-others-partition-nemesis + [] + (nemesis/partitioner blind-others)) + +(defn network-non-symmetric-nemesis + [] + (nemesis/partitioner nemesis/bridge)) + +(defn start-stop-generator + [time-corrupt time-ok] + (->> + (cycle [(gen/sleep time-ok) + {:type :info, :f :start} + (gen/sleep time-corrupt) + {:type :info, :f :stop}]))) + +(defn corruption-generator + [] + (->> + (cycle [(gen/sleep 5) + {:type :info, :f :corrupt}]))) + +(def custom-nemesises + {"random-node-killer" {:nemesis (random-node-killer-nemesis) + :generator (start-stop-generator 5 5)} + "all-nodes-killer" {:nemesis (all-nodes-killer-nemesis) + :generator (start-stop-generator 1 10)} + "simple-partitioner" {:nemesis (nemesis/partition-random-halves) + :generator (start-stop-generator 5 5)} + "random-node-hammer-time" {:nemesis (random-node-hammer-time-nemesis) + :generator (start-stop-generator 5 5)} + "all-nodes-hammer-time" {:nemesis (all-nodes-hammer-time-nemesis) + :generator (start-stop-generator 1 10)} + "logs-corruptor" {:nemesis (logs-corruption-nemesis) + :generator (corruption-generator)} + "snapshots-corruptor" {:nemesis (snapshots-corruption-nemesis) + :generator (corruption-generator)} + "logs-and-snapshots-corruptor" {:nemesis (logs-and-snapshots-corruption-nemesis) + :generator (corruption-generator)} + "drop-data-corruptor" {:nemesis (drop-all-corruption-nemesis) + :generator (corruption-generator)} + "bridge-partitioner" {:nemesis (partition-bridge-nemesis) + :generator (start-stop-generator 5 5)} + "blind-node-partitioner" {:nemesis (blind-node-partition-nemesis) + :generator (start-stop-generator 5 5)} + "blind-others-partitioner" {:nemesis (blind-others-partition-nemesis) + :generator (start-stop-generator 5 5)}}) diff --git a/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/queue.clj b/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/queue.clj new file mode 100644 index 00000000000..30ff7c01ec4 --- /dev/null +++ b/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/queue.clj @@ -0,0 +1,79 @@ +(ns jepsen.clickhouse-keeper.queue + (:require + [clojure.tools.logging :refer :all] + [jepsen + [checker :as checker] + [client :as client] + [generator :as gen]] + [knossos.model :as model] + [jepsen.checker.timeline :as timeline] + [jepsen.clickhouse-keeper.utils :refer :all] + [zookeeper :as zk]) + (:import (org.apache.zookeeper ZooKeeper KeeperException KeeperException$BadVersionException))) + +(defn enqueue [val _ _] {:type :invoke, :f :enqueue :value 
val})
+(defn dequeue [_ _] {:type :invoke, :f :dequeue})
+
+(defrecord QueueClient [conn nodename]
+  client/Client
+  (open! [this test node]
+    (assoc
+     (assoc this
+            :conn (zk-connect node 9181 30000))
+     :nodename node))
+
+  (setup! [this test])
+
+  (invoke! [this test op]
+    (case (:f op)
+      :enqueue (try
+                 (do
+                   (zk-create-if-not-exists conn (str "/" (:value op)) "")
+                   (assoc op :type :ok))
+                 (catch Exception _ (assoc op :type :info, :error :connect-error)))
+      :dequeue
+      (try
+        (let [result (zk-multi-delete-first-child conn "/")]
+          (if (not (nil? result))
+            (assoc op :type :ok :value result)
+            (assoc op :type :fail :value result)))
+        (catch Exception _ (assoc op :type :info, :error :connect-error)))
+      :drain
+      ; draining via delete takes too long, just list all nodes
+      (exec-with-retries 30 (fn []
+                              (zk-sync conn)
+                              (assoc op :type :ok :value (into #{} (map #(str %1) (zk-list conn "/"))))))))
+
+  (teardown! [_ test])
+
+  (close! [_ test]
+    (zk/close conn)))
+
+(defn sorted-str-range
+  [n]
+  (sort (map (fn [v] (str v)) (take n (range)))))
+
+(defn total-workload
+  "A generator, client, and checker for a total queue test."
+  [opts]
+  {:client    (QueueClient. nil nil)
+   :checker   (checker/compose
+               {:total-queue (checker/total-queue)
+                :timeline (timeline/html)})
+   :generator (->> (sorted-str-range 50000)
+                   (map (fn [x]
+                          (rand-nth [{:type :invoke, :f :enqueue :value x}
+                                     {:type :invoke, :f :dequeue}]))))
+   :final-generator (gen/once {:type :invoke, :f :drain, :value nil})})
+
+(defn linear-workload
+  [opts]
+  {:client    (QueueClient. nil nil)
+   :checker   (checker/compose
+               {:linear (checker/linearizable {:model (model/unordered-queue)
+                                               :algorithm :linear})
+                :timeline (timeline/html)})
+   :generator (->> (sorted-str-range 10000)
+                   (map (fn [x]
+                          (rand-nth [{:type :invoke, :f :enqueue :value x}
+                                     {:type :invoke, :f :dequeue}]))))})
diff --git a/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/register.clj b/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/register.clj
new file mode 100644
index 00000000000..b2f381168bd
--- /dev/null
+++ b/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/register.clj
@@ -0,0 +1,64 @@
+(ns jepsen.clickhouse-keeper.register
+  (:require [jepsen
+             [checker :as checker]
+             [client :as client]
+             [independent :as independent]
+             [generator :as gen]]
+            [jepsen.checker.timeline :as timeline]
+            [knossos.model :as model]
+            [jepsen.clickhouse-keeper.utils :refer :all]
+            [zookeeper :as zk])
+  (:import (org.apache.zookeeper ZooKeeper KeeperException KeeperException$BadVersionException)))
+
+(defn r [_ _] {:type :invoke, :f :read, :value nil})
+(defn w [_ _] {:type :invoke, :f :write, :value (rand-int 5)})
+(defn cas [_ _] {:type :invoke, :f :cas, :value [(rand-int 5) (rand-int 5)]})
+
+(defrecord RegisterClient [conn]
+  client/Client
+  (open! [this test node]
+    (assoc this :conn (zk-connect node 9181 30000)))
+
+  (setup! [this test]
+    (zk-create-range conn 300)) ; 300 nodes to be sure
+
+  (invoke!
[_ test op] + (let [[k v] (:value op) + zk-k (zk-path k)] + (case (:f op) + :read (try + (assoc op :type :ok, :value (independent/tuple k (parse-long (:data (zk-get-str conn zk-k))))) + (catch Exception _ (assoc op :type :fail, :error :connect-error))) + :write (try + (do (zk-set conn zk-k v) + (assoc op :type :ok)) + (catch Exception _ (assoc op :type :info, :error :connect-error))) + :cas (try + (let [[old new] v] + (assoc op :type (if (zk-cas conn zk-k old new) + :ok + :fail))) + (catch KeeperException$BadVersionException _ (assoc op :type :fail, :error :bad-version)) + (catch Exception _ (assoc op :type :info, :error :connect-error)))))) + + (teardown! [this test]) + + (close! [_ test] + (zk/close conn))) + +(defn workload + "Tests linearizable reads, writes, and compare-and-set operations on + independent keys." + [opts] + {:client (RegisterClient. nil) + :checker (independent/checker + (checker/compose + {:linear (checker/linearizable {:model (model/cas-register) + :algorithm :linear}) + :timeline (timeline/html)})) + :generator (independent/concurrent-generator + 10 + (range) + (fn [k] + (->> (gen/mix [r w cas]) + (gen/limit (:ops-per-key opts)))))}) diff --git a/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/set.clj b/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/set.clj new file mode 100644 index 00000000000..79ec4f824bb --- /dev/null +++ b/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/set.clj @@ -0,0 +1,50 @@ +(ns jepsen.clickhouse-keeper.set + (:require + [clojure.tools.logging :refer :all] + [jepsen + [checker :as checker] + [client :as client] + [generator :as gen]] + [jepsen.clickhouse-keeper.utils :refer :all] + [zookeeper :as zk]) + (:import (org.apache.zookeeper ZooKeeper KeeperException KeeperException$BadVersionException))) + +(defrecord SetClient [k conn nodename] + client/Client + (open! [this test node] + (assoc + (assoc this + :conn (zk-connect node 9181 30000)) + :nodename node)) + + (setup! [this test] + (exec-with-retries 30 (fn [] + (zk-create-if-not-exists conn k "#{}")))) + + (invoke! [this test op] + (case (:f op) + :read (exec-with-retries 30 (fn [] + (zk-sync conn) + (assoc op + :type :ok + :value (read-string (:data (zk-get-str conn k)))))) + :add (try + (do + (zk-add-to-set conn k (:value op)) + (assoc op :type :ok)) + (catch KeeperException$BadVersionException _ (assoc op :type :fail, :error :bad-version)) + (catch Exception _ (assoc op :type :info, :error :connect-error))))) + + (teardown! [_ test]) + + (close! [_ test] + (zk/close conn))) + +(defn workload + "A generator, client, and checker for a set test." + [opts] + {:client (SetClient. 
"/a-set" nil nil) + :checker (checker/set) + :generator (->> (range) + (map (fn [x] {:type :invoke, :f :add, :value x}))) + :final-generator (gen/once {:type :invoke, :f :read, :value nil})}) diff --git a/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/unique.clj b/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/unique.clj new file mode 100644 index 00000000000..c50f33924e0 --- /dev/null +++ b/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/unique.clj @@ -0,0 +1,42 @@ +(ns jepsen.clickhouse-keeper.unique + (:require + [clojure.tools.logging :refer :all] + [jepsen + [checker :as checker] + [client :as client] + [generator :as gen]] + [jepsen.clickhouse-keeper.utils :refer :all] + [zookeeper :as zk]) + (:import (org.apache.zookeeper ZooKeeper KeeperException KeeperException$BadVersionException))) + +(defrecord UniqueClient [conn nodename] + client/Client + (open! [this test node] + (assoc + (assoc this + :conn (zk-connect node 9181 30000)) + :nodename node)) + + (setup! [this test]) + + (invoke! [this test op] + (case + :generate + (try + (let [result-path (zk-create-sequential conn "/seq-" "")] + (assoc op :type :ok :value (parse-and-get-counter result-path))) + (catch Exception _ (assoc op :type :info, :error :connect-error))))) + + (teardown! [_ test]) + + (close! [_ test] + (zk/close conn))) + +(defn workload + "A generator, client, and checker for a set test." + [opts] + {:client (UniqueClient. nil nil) + :checker (checker/unique-ids) + :generator (->> + (range) + (map (fn [_] {:type :invoke, :f :generate})))}) diff --git a/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/utils.clj b/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/utils.clj new file mode 100644 index 00000000000..70813457251 --- /dev/null +++ b/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/utils.clj @@ -0,0 +1,203 @@ +(ns jepsen.clickhouse-keeper.utils + (:require [clojure.string :as str] + [zookeeper.data :as data] + [zookeeper :as zk] + [zookeeper.internal :as zi] + [jepsen.control.util :as cu] + [jepsen.clickhouse-keeper.constants :refer :all] + [jepsen.control :as c] + [clojure.tools.logging :refer :all] + [clojure.java.io :as io]) + (:import (org.apache.zookeeper.data Stat) + (org.apache.zookeeper CreateMode + ZooKeeper) + (org.apache.zookeeper ZooKeeper KeeperException KeeperException$BadVersionException) + (java.security MessageDigest))) + +(defn exec-with-retries + [retries f & args] + (let [res (try {:value (apply f args)} + (catch Exception e + (if (zero? retries) + (throw e) + {:exception e})))] + (if (:exception res) + (do (Thread/sleep 1000) (recur (dec retries) f args)) + (:value res)))) + +(defn parse-long + "Parses a string to a Long. Passes through `nil` and empty strings." + [s] + (if (and s (> (count s) 0)) + (Long/parseLong s))) + +(defn parse-and-get-counter + [path] + (Integer/parseInt (apply str (take-last 10 (seq (str path)))))) + +(defn zk-range + [] + (map (fn [v] (str "/" v)) (range))) + +(defn zk-path + [n] + (str "/" n)) + +(defn zk-connect + [host port timeout] + (exec-with-retries 30 (fn [] (zk/connect (str host ":" port) :timeout-msec timeout)))) + +(defn zk-create-range + [conn n] + (dorun (map (fn [v] (zk/create-all conn v :persistent? 
true)) (take n (zk-range)))))
+
+(defn zk-set
+  ([conn path value]
+   (zk/set-data conn path (data/to-bytes (str value)) -1))
+  ([conn path value version]
+   (zk/set-data conn path (data/to-bytes (str value)) version)))
+
+(defn zk-get-str
+  [conn path]
+  (let [zk-result (zk/data conn path)]
+    {:data (data/to-string (:data zk-result))
+     :stat (:stat zk-result)}))
+
+(defn zk-list
+  [conn path]
+  (zk/children conn path))
+
+(defn zk-list-with-stat
+  [conn path]
+  (let [stat (new Stat)
+        children (seq (.getChildren conn path false stat))]
+    {:children children
+     :stat (zi/stat-to-map stat)}))
+
+(defn zk-cas
+  [conn path old-value new-value]
+  (let [current-value (zk-get-str conn path)]
+    (if (= (parse-long (:data current-value)) old-value)
+      (do (zk-set conn path new-value (:version (:stat current-value)))
+          true))))
+
+(defn zk-add-to-set
+  [conn path elem]
+  (let [current-value (zk-get-str conn path)
+        current-set (read-string (:data current-value))
+        new-set (conj current-set elem)]
+    (zk-set conn path (pr-str new-set) (:version (:stat current-value)))))
+
+(defn zk-create-if-not-exists
+  [conn path data]
+  (zk/create conn path :data (data/to-bytes (str data)) :persistent? true))
+
+(defn zk-create-sequential
+  [conn path-prefix data]
+  (zk/create conn path-prefix :data (data/to-bytes (str data)) :persistent? true :sequential? true))
+
+(defn zk-multi-create-many-seq-nodes
+  [conn path-prefix num]
+  (let [txn (.transaction conn)]
+    (loop [i 0]
+      (cond (>= i num) (.commit txn)
+            :else (do (.create txn path-prefix
+                               (data/to-bytes "")
+                               (zi/acls :open-acl-unsafe)
+                               CreateMode/PERSISTENT_SEQUENTIAL)
+                      (recur (inc i)))))))
+
+; the sync call is not implemented in zookeeper-clj, and the Java API has no synchronous version of it
+(defn zk-sync
+  [conn]
+  (zk-set conn "/" "" -1))
+
+(defn zk-parent-path
+  [path]
+  (let [rslash_pos (str/last-index-of path "/")]
+    (if (> rslash_pos 0)
+      (subs path 0 rslash_pos)
+      "/")))
+
+(defn zk-multi-delete-first-child
+  [conn path]
+  (let [{children :children stat :stat} (zk-list-with-stat conn path)
+        txn (.transaction conn)
+        first-child (first (sort children))]
+    (if (not (nil? first-child))
+      (try
+        (do (.check txn path (:version stat))
+            (.setData txn path (data/to-bytes "") -1) ; the setData here just exercises multi-op transactions
+            (.delete txn (str path first-child) -1)
+            (.commit txn)
+            first-child)
+        (catch KeeperException$BadVersionException _ nil)
+        ; Even if we got a connection loss, the delete may actually have been executed.
+        ; This function is used for the queue model, which strictly requires
+        ; all enqueued elements to be dequeued but allows duplicates.
+        ; So even when we are not sure about the delete, we return first-child.
+        (catch Exception _ first-child))
+      nil)))
+
+(defn clickhouse-alive?
+  [node test]
+  (info "Checking server alive on" node)
+  (try
+    (c/exec binary-path :client :--query "SELECT 1")
+    (catch Exception _ false)))
+
+(defn wait-clickhouse-alive!
+  [node test & {:keys [maxtries] :or {maxtries 30}}]
+  (loop [i 0]
+    (cond (> i maxtries) false
+          (clickhouse-alive? node test) true
+          :else (do (Thread/sleep 1000) (recur (inc i))))))
+
+(defn kill-clickhouse!
+  [node test]
+  (info "Killing server on node" node)
+  (c/su
+   (cu/stop-daemon! binary-path pid-file-path)
+   (c/exec :rm :-fr (str data-dir "/status"))))
+
+(defn start-clickhouse!
+  [node test]
+  (info "Starting server on node" node)
+  (c/su
+   (cu/start-daemon!
+ {:pidfile pid-file-path + :logfile stderr-file + :chdir data-dir} + binary-path + :server + :--config (str configs-dir "/config.xml") + :-- + :--path (str data-dir "/") + :--user_files_path (str data-dir "/user_files") + :--top_level_domains_path (str data-dir "/top_level_domains") + :--logger.log (str logs-dir "/clickhouse-server.log") + :--logger.errorlog (str logs-dir "/clickhouse-server.err.log") + :--keeper_server.snapshot_storage_path coordination-snapshots-dir + :--keeper_server.logs_storage_path coordination-logs-dir) + (wait-clickhouse-alive! node test))) + +(defn md5 [^String s] + (let [algorithm (MessageDigest/getInstance "MD5") + raw (.digest algorithm (.getBytes s))] + (format "%032x" (BigInteger. 1 raw)))) + +(defn non-precise-cached-wget! + [url] + (let [encoded-url (md5 url) + expected-file-name (.getName (io/file url)) + dest-file (str binaries-cache-dir "/" encoded-url) + dest-symlink (str common-prefix "/" expected-file-name) + wget-opts (concat cu/std-wget-opts [:-O dest-file])] + (when-not (cu/exists? dest-file) + (info "Downloading" url) + (do (c/exec :mkdir :-p binaries-cache-dir) + (c/cd binaries-cache-dir + (cu/wget-helper! wget-opts url)))) + (c/exec :rm :-rf dest-symlink) + (c/exec :ln :-s dest-file dest-symlink) + dest-symlink)) diff --git a/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/zookeeperdb.clj b/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/zookeeperdb.clj new file mode 100644 index 00000000000..7cb88cd1fd9 --- /dev/null +++ b/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/zookeeperdb.clj @@ -0,0 +1,64 @@ +(ns jepsen.clickhouse-keeper.zookeeperdb + (:require [clojure.tools.logging :refer :all] + [jepsen.clickhouse-keeper.utils :refer :all] + [clojure.java.io :as io] + [jepsen + [control :as c] + [db :as db]] + [jepsen.os.ubuntu :as ubuntu])) + +(defn zk-node-ids + "Returns a map of node names to node ids." + [test] + (->> test + :nodes + (map-indexed (fn [i node] [node (inc i)])) + (into {}))) + +(defn zk-node-id + "Given a test and a node name from that test, returns the ID for that node." + [test node] + ((zk-node-ids test) node)) + +(defn zoo-cfg-servers + "Constructs a zoo.cfg fragment for servers." + [test mynode] + (->> (zk-node-ids test) + (map (fn [[node id]] + (str "server." id "=" (if (= (name node) mynode) "0.0.0.0" (name node)) ":2888:3888"))) + (clojure.string/join "\n"))) + +(defn zookeeper-db + "Zookeeper DB for a particular version." + [version] + (reify db/DB + (setup! [_ test node] + (c/su + (info node "Installing ZK" version) + (c/exec :apt-get :update) + (c/exec :apt-get :install (str "zookeeper=" version)) + (c/exec :apt-get :install (str "zookeeperd=" version)) + (c/exec :echo (zk-node-id test node) :> "/etc/zookeeper/conf/myid") + + (c/exec :echo (str (slurp (io/resource "zoo.cfg")) + "\n" + (zoo-cfg-servers test node)) + :> "/etc/zookeeper/conf/zoo.cfg") + + (info node "ZK restarting") + (c/exec :service :zookeeper :restart) + (info "Connecting to zk" (name node)) + (zk-connect (name node) 2181 1000) + (info node "ZK ready"))) + + (teardown! 
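+ ;; For a hypothetical three-node test (n1 n2 n3), zoo-cfg-servers above
+ ;; renders the ensemble section with the local node bound to all
+ ;; interfaces; on n1 it yields:
+ ;;   server.1=0.0.0.0:2888:3888
+ ;;   server.2=n2:2888:3888
+ ;;   server.3=n3:2888:3888
+ ;; setup! appends this fragment to the zoo.cfg resource before restarting
+ ;; ZooKeeper.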
[_ test node] + (info node "tearing down ZK") + (c/su + (c/exec :service :zookeeper :stop :|| true) + (c/exec :rm :-rf + (c/lit "/var/lib/zookeeper/version-*") + (c/lit "/var/log/zookeeper/*")))) + + db/LogFiles + (log-files [_ test node] + ["/var/log/zookeeper/zookeeper.log"]))) diff --git a/tests/jepsen.clickhouse-keeper/test/jepsen/keeper_test.clj b/tests/jepsen.clickhouse-keeper/test/jepsen/keeper_test.clj new file mode 100644 index 00000000000..25333605351 --- /dev/null +++ b/tests/jepsen.clickhouse-keeper/test/jepsen/keeper_test.clj @@ -0,0 +1,39 @@ +(ns jepsen.keeper-test + (:require [clojure.test :refer :all] + [jepsen.clickhouse-keeper.utils :refer :all] + [zookeeper :as zk] + [zookeeper.data :as data]) + (:import (ch.qos.logback.classic Level) + (org.slf4j Logger LoggerFactory))) + +(defn multicreate + [conn] + (dorun (map (fn [v] (zk/create conn v :persistent? true)) (take 10 (zk-range))))) + +(defn multidelete + [conn] + (dorun (map (fn [v] (zk/delete conn v)) (take 10 (zk-range))))) + +(deftest a-test + (testing "keeper connection" + (.setLevel + (LoggerFactory/getLogger "org.apache.zookeeper") Level/OFF) + (let [conn (zk/connect "localhost:9181" :timeout-msec 5000)] + ;(println (take 10 (zk-range))) + ;(multidelete conn) + ;(multicreate conn) + ;(zk/create-all conn "/0") + ;(zk/create conn "/0") + ;(println (zk/children conn "/")) + ;(zk/set-data conn "/0" (data/to-bytes "777") -1) + (println (zk-parent-path "/sasds/dasda/das")) + (println (zk-parent-path "/sasds")) + (zk-multi-create-many-seq-nodes conn "/a-" 5) + (println (zk/children conn "/")) + (println (zk-list-with-stat conn "/")) + (println (zk-multi-delete-first-child conn "/")) + (println (zk-list-with-stat conn "/")) + ;(Thread/sleep 5000) + ;(println "VALUE" (data/to-string (:data (zk/data conn "/0")))) + ;(is (= (data/to-string (:data (zk/data conn "/0"))) "777")) + (zk/close conn)))) diff --git a/tests/msan_suppressions.txt b/tests/msan_suppressions.txt index e0edd7f3dd7..cf468b0be96 100644 --- a/tests/msan_suppressions.txt +++ b/tests/msan_suppressions.txt @@ -7,12 +7,6 @@ fun:tolower # Suppress some failures in contrib so that we can enable MSan in CI. # Ideally, we should report these upstream. 
-src:*/contrib/zlib-ng/* -src:*/contrib/simdjson/* -src:*/contrib/lz4/* # Hyperscan fun:roseRunProgram - -# mariadbclient, NSS functions from libc -fun:_nss_files_parse_servent diff --git a/tests/performance/ColumnMap.xml b/tests/performance/ColumnMap.xml index f6393985377..874ed638224 100644 --- a/tests/performance/ColumnMap.xml +++ b/tests/performance/ColumnMap.xml @@ -26,10 +26,13 @@ FROM arrayMap(x -> toString(x), range(100)) AS keys, arrayMap(x -> toString(x * x), range(100)) AS values, cast((keys, values), 'Map(String, String)') AS map - FROM numbers(10000) + FROM numbers_mt(10000) ) +SETTINGS max_insert_threads = 8 + optimize table column_map_test final + SELECT count() FROM column_map_test WHERE NOT ignore(arrayMap(x -> map[CONCAT(toString(x), {key_suffix})], range(0, 100, 10))) DROP TABLE IF EXISTS column_map_test diff --git a/tests/performance/agg_functions_min_max_any.xml b/tests/performance/agg_functions_min_max_any.xml index 79c9e2c6976..6ca9e3eb65a 100644 --- a/tests/performance/agg_functions_min_max_any.xml +++ b/tests/performance/agg_functions_min_max_any.xml @@ -6,7 +6,9 @@ group_scale - 1000000 + + 1000000 + diff --git a/tests/performance/aggregation_overflow.xml b/tests/performance/aggregation_overflow.xml new file mode 100644 index 00000000000..71afdf3947a --- /dev/null +++ b/tests/performance/aggregation_overflow.xml @@ -0,0 +1,5 @@ + + select bitAnd(number, 15) as k, sum(number) from numbers(100000000) group by k with totals order by k format Null settings max_rows_to_group_by = 10, group_by_overflow_mode='any', totals_mode = 'after_having_inclusive' + select bitAnd(number, 65535) as k, sum(number) from numbers(100000000) group by k with totals order by k format Null settings max_rows_to_group_by = 10, group_by_overflow_mode='any', totals_mode = 'after_having_inclusive', max_block_size = 65530 + select bitAnd(number, 65535) as k, sum(number) from numbers(100000000) group by k with totals order by k format Null settings max_rows_to_group_by = 10, group_by_overflow_mode='any', max_block_size = 65530 + diff --git a/tests/performance/arithmetic.xml b/tests/performance/arithmetic.xml index 0be61eb5823..bf5e7662e37 100644 --- a/tests/performance/arithmetic.xml +++ b/tests/performance/arithmetic.xml @@ -1,4 +1,4 @@ - + 30000000000 diff --git a/tests/performance/array_join.xml b/tests/performance/array_join.xml index ca280ce28ad..cf92b51f545 100644 --- a/tests/performance/array_join.xml +++ b/tests/performance/array_join.xml @@ -1,4 +1,4 @@ - + diff --git a/tests/performance/async_remote_read.xml b/tests/performance/async_remote_read.xml index 7f0ee6473ab..4ea159f9a97 100644 --- a/tests/performance/async_remote_read.xml +++ b/tests/performance/async_remote_read.xml @@ -1,4 +1,7 @@ + + 1 + SELECT sum(x) FROM diff --git a/tests/performance/avg_weighted.xml b/tests/performance/avg_weighted.xml index df9e7c21068..2476011e6a9 100644 --- a/tests/performance/avg_weighted.xml +++ b/tests/performance/avg_weighted.xml @@ -11,8 +11,8 @@ CREATE TABLE perf_avg( num UInt64, - num_u Decimal256(75) DEFAULT toDecimal256(num / 400000, 75), - num_f Float64 DEFAULT num / 100 + num_u Decimal256(75) MATERIALIZED toDecimal256(num / 400000, 75), + num_f Float64 MATERIALIZED num / 100 ) ENGINE = MergeTree() ORDER BY num @@ -23,6 +23,8 @@ LIMIT 50000000 + optimize table perf_avg final + SELECT avg(num) FROM perf_avg FORMAT Null SELECT avg(2 * num) FROM perf_avg FORMAT Null SELECT avg(num_u) FROM perf_avg FORMAT Null diff --git a/tests/performance/bounding_ratio.xml 
b/tests/performance/bounding_ratio.xml index e3a15f90013..e430136b624 100644 --- a/tests/performance/bounding_ratio.xml +++ b/tests/performance/bounding_ratio.xml @@ -1,4 +1,4 @@ - + SELECT boundingRatio(number, number) FROM numbers(100000000) SELECT (argMax(number, number) - argMin(number, number)) / (max(number) - min(number)) FROM numbers(100000000) diff --git a/tests/performance/codec_none.xml b/tests/performance/codec_none.xml new file mode 100644 index 00000000000..e6eb9773a66 --- /dev/null +++ b/tests/performance/codec_none.xml @@ -0,0 +1,13 @@ + + + hits_10m_single + + + CREATE TABLE hits_none (Title String CODEC(NONE)) ENGINE = MergeTree ORDER BY tuple() + INSERT INTO hits_none SELECT Title FROM test.hits + OPTIMIZE TABLE hits_none FINAL + + + + DROP TABLE hits_none + diff --git a/tests/performance/codecs_float_insert.xml b/tests/performance/codecs_float_insert.xml index a7cb5152c09..b282bcc268f 100644 --- a/tests/performance/codecs_float_insert.xml +++ b/tests/performance/codecs_float_insert.xml @@ -1,5 +1,5 @@ - + 1 diff --git a/tests/performance/codecs_int_insert.xml b/tests/performance/codecs_int_insert.xml index caefaba3725..662df80ae70 100644 --- a/tests/performance/codecs_int_insert.xml +++ b/tests/performance/codecs_int_insert.xml @@ -1,4 +1,4 @@ - + 1 diff --git a/tests/performance/collations.xml b/tests/performance/collations.xml index 17b2d36b7e3..52ccede3798 100644 --- a/tests/performance/collations.xml +++ b/tests/performance/collations.xml @@ -1,4 +1,4 @@ - + diff --git a/tests/performance/conditional.xml b/tests/performance/conditional.xml index 21623f45b05..91b6cb95ff2 100644 --- a/tests/performance/conditional.xml +++ b/tests/performance/conditional.xml @@ -1,4 +1,4 @@ - + SELECT count() FROM zeros(10000000) WHERE NOT ignore(if(rand() % 2, toDateTime('2019-02-04 01:24:31'), toDate('2019-02-04'))) SELECT count() FROM zeros(10000000) WHERE NOT ignore(multiIf(rand() % 2, toDateTime('2019-02-04 01:24:31'), toDate('2019-02-04'))) SELECT count() FROM zeros(10000000) WHERE NOT ignore(if(rand() % 2, [toDateTime('2019-02-04 01:24:31')], [toDate('2019-02-04')])) diff --git a/tests/performance/constant_column_search.xml b/tests/performance/constant_column_search.xml index cb76fd4cefb..71d8185d818 100644 --- a/tests/performance/constant_column_search.xml +++ b/tests/performance/constant_column_search.xml @@ -1,4 +1,4 @@ - + search diff --git a/tests/performance/date_time_64.xml b/tests/performance/date_time_64.xml index 838aba34d87..fd883416a33 100644 --- a/tests/performance/date_time_64.xml +++ b/tests/performance/date_time_64.xml @@ -1,4 +1,4 @@ - + hits_100m_single diff --git a/tests/performance/date_time_long.xml b/tests/performance/date_time_long.xml index 1229631a434..c2eb42d3318 100644 --- a/tests/performance/date_time_long.xml +++ b/tests/performance/date_time_long.xml @@ -1,5 +1,4 @@ - - long + datetime_transform @@ -78,7 +77,6 @@ toYYYYMMDDhhmmss toRelativeQuarterNum - toUnixTimestamp @@ -120,9 +118,9 @@ - SELECT count() FROM numbers(100000000) WHERE NOT ignore(toDateTime('2017-01-01 00:00:00') + number % 100000000 + rand() % 100000 AS t, {datetime_transform}(t, '{time_zone}')) - SELECT count() FROM numbers(100000000) WHERE NOT ignore(toDate('2017-01-01') + number % 1000 + rand() % 10 AS t, {date_transform}(t)) - SELECT count() FROM numbers(100000000) WHERE NOT ignore(toDateTime('2017-01-01 00:00:00') + number % 100000000 + rand() % 100000 AS t, {binary_function}(t, 1)) - SELECT count() FROM numbers(100000000) WHERE NOT ignore(toDateTime('2017-01-01 00:00:00') + 
number % 100000000 + rand() % 100000 AS t, toStartOfInterval(t, INTERVAL 1 month)) - SELECT count() FROM numbers(100000000) WHERE NOT ignore(toDateTime('2017-01-01 00:00:00') + number % 100000000 + rand() % 100000 AS t, date_trunc('month', t)) + SELECT count() FROM numbers(50000000) WHERE NOT ignore(toDateTime('2017-01-01 00:00:00') + number % 100000000 + rand() % 100000 AS t, {datetime_transform}(t, '{time_zone}')) + SELECT count() FROM numbers(50000000) WHERE NOT ignore(toDate('2017-01-01') + number % 1000 + rand() % 10 AS t, {date_transform}(t)) + SELECT count() FROM numbers(50000000) WHERE NOT ignore(toDateTime('2017-01-01 00:00:00') + number % 100000000 + rand() % 100000 AS t, {binary_function}(t, 1)) + SELECT count() FROM numbers(50000000) WHERE NOT ignore(toDateTime('2017-01-01 00:00:00') + number % 100000000 + rand() % 100000 AS t, toStartOfInterval(t, INTERVAL 1 month)) + SELECT count() FROM numbers(50000000) WHERE NOT ignore(toDateTime('2017-01-01 00:00:00') + number % 100000000 + rand() % 100000 AS t, date_trunc('month', t)) diff --git a/tests/performance/decimal_aggregates.xml b/tests/performance/decimal_aggregates.xml index f7bc2ac1868..3fc1408d7e4 100644 --- a/tests/performance/decimal_aggregates.xml +++ b/tests/performance/decimal_aggregates.xml @@ -18,7 +18,7 @@ SELECT uniq(d32), uniqCombined(d32), uniqExact(d32), uniqHLL12(d32) FROM (SELECT * FROM t LIMIT 10000000) SELECT uniq(d64), uniqCombined(d64), uniqExact(d64), uniqHLL12(d64) FROM (SELECT * FROM t LIMIT 10000000) - SELECT uniq(d128), uniqCombined(d128), uniqExact(d128), uniqHLL12(d128) FROM (SELECT * FROM t LIMIT 1000000) + SELECT uniq(d128), uniqCombined(d128), uniqExact(d128), uniqHLL12(d128) FROM (SELECT * FROM t LIMIT 10000000) SELECT median(d32), medianExact(d32), medianExactWeighted(d32, 2) FROM (SELECT * FROM t LIMIT 10000000) SELECT median(d64), medianExact(d64), medianExactWeighted(d64, 2) FROM (SELECT * FROM t LIMIT 1000000) diff --git a/tests/performance/direct_dictionary.xml b/tests/performance/direct_dictionary.xml index eb1b4e0da00..3f01449ed99 100644 --- a/tests/performance/direct_dictionary.xml +++ b/tests/performance/direct_dictionary.xml @@ -1,38 +1,17 @@ - + - CREATE TABLE simple_direct_dictionary_test_table + CREATE TABLE simple_key_direct_dictionary_source_table ( id UInt64, value_int UInt64, value_string String, value_decimal Decimal64(8), value_string_nullable Nullable(String) - ) ENGINE = TinyLog; + ) ENGINE = Memory; - INSERT INTO simple_direct_dictionary_test_table - SELECT number, number, toString(number), toDecimal64(number, 8), toString(number) - FROM system.numbers - LIMIT 100000; - - - - CREATE DICTIONARY simple_direct_dictionary - ( - id UInt64, - value_int UInt64, - value_string String, - value_decimal Decimal64(8), - value_string_nullable Nullable(String) - ) - PRIMARY KEY id - SOURCE(CLICKHOUSE(DB 'default' TABLE 'simple_direct_dictionary_test_table')) - LAYOUT(DIRECT()) - - - - CREATE TABLE complex_direct_dictionary_test_table + CREATE TABLE complex_key_direct_dictionary_source_table ( id UInt64, id_key String, @@ -44,14 +23,21 @@ - INSERT INTO complex_direct_dictionary_test_table - SELECT number, toString(number), number, toString(number), toDecimal64(number, 8), toString(number) - FROM system.numbers - LIMIT 100000; + CREATE DICTIONARY simple_key_direct_dictionary + ( + id UInt64, + value_int UInt64, + value_string String, + value_decimal Decimal64(8), + value_string_nullable Nullable(String) + ) + PRIMARY KEY id + SOURCE(CLICKHOUSE(DB 'default' TABLE 
'simple_key_direct_dictionary_source_table')) + LAYOUT(DIRECT()) - CREATE DICTIONARY complex_direct_dictionary + CREATE DICTIONARY complex_key_direct_dictionary ( id UInt64, id_key String, @@ -61,20 +47,92 @@ value_string_nullable Nullable(String) ) PRIMARY KEY id, id_key - SOURCE(CLICKHOUSE(DB 'default' TABLE 'complex_direct_dictionary_test_table')) + SOURCE(CLICKHOUSE(DB 'default' TABLE 'complex_key_direct_dictionary_source_table')) LAYOUT(COMPLEX_KEY_DIRECT()) - SELECT dictGet('default.simple_direct_dictionary', 'value_int', number) FROM system.numbers LIMIT 150000; - SELECT dictGet('default.simple_direct_dictionary', 'value_string', number) FROM system.numbers LIMIT 150000; - SELECT dictGet('default.simple_direct_dictionary', 'value_decimal', number) FROM system.numbers LIMIT 150000; - SELECT dictGet('default.simple_direct_dictionary', 'value_string_nullable', number) FROM system.numbers LIMIT 150000; - SELECT dictHas('default.simple_direct_dictionary', number) FROM system.numbers LIMIT 150000; + + INSERT INTO simple_key_direct_dictionary_source_table + SELECT number, number, toString(number), toDecimal64(number, 8), toString(number) + FROM system.numbers + LIMIT 50000; + - SELECT dictGet('default.complex_direct_dictionary', 'value_int', (number, toString(number))) FROM system.numbers LIMIT 150000; - SELECT dictGet('default.complex_direct_dictionary', 'value_string', (number, toString(number))) FROM system.numbers LIMIT 150000; - SELECT dictGet('default.complex_direct_dictionary', 'value_decimal', (number, toString(number))) FROM system.numbers LIMIT 150000; - SELECT dictGet('default.complex_direct_dictionary', 'value_string_nullable', (number, toString(number))) FROM system.numbers LIMIT 150000; - SELECT dictHas('default.complex_direct_dictionary', (number, toString(number))) FROM system.numbers LIMIT 150000; + + INSERT INTO complex_key_direct_dictionary_source_table + SELECT number, toString(number), number, toString(number), toDecimal64(number, 8), toString(number) + FROM system.numbers + LIMIT 50000; + + + + + column_name + + 'value_int' + 'value_string' + 'value_decimal' + 'value_string_nullable' + + + + + elements_count + + 50000 + 75000 + + + + + + WITH rand64() % toUInt64({elements_count}) as key + SELECT dictGet('default.simple_key_direct_dictionary', {column_name}, key) + FROM system.numbers + LIMIT {elements_count} + FORMAT Null; + + + WITH rand64() % toUInt64({elements_count}) as key + SELECT dictGet('default.simple_key_direct_dictionary', ('value_int', 'value_string', 'value_decimal', 'value_string_nullable'), key) + FROM system.numbers + LIMIT {elements_count} + FORMAT Null; + + + WITH rand64() % toUInt64({elements_count}) as key + SELECT dictHas('default.simple_key_direct_dictionary', key) + FROM system.numbers + LIMIT {elements_count} + FORMAT Null; + + + + WITH (number, toString(number)) as key + SELECT dictGet('default.complex_key_direct_dictionary', {column_name}, key) + FROM system.numbers + LIMIT {elements_count} + FORMAT Null; + + + WITH (number, toString(number)) as key + SELECT dictGet('default.complex_key_direct_dictionary', ('value_int', 'value_string', 'value_decimal', 'value_string_nullable'), key) + FROM system.numbers + LIMIT {elements_count} + FORMAT Null; + + + WITH (number, toString(number)) as key + SELECT dictHas('default.complex_key_direct_dictionary', key) + FROM system.numbers + LIMIT {elements_count} + FORMAT Null; + + + DROP TABLE IF EXISTS simple_key_direct_dictionary_source_table; + DROP TABLE IF EXISTS 
complex_key_direct_dictionary_source_table; + + DROP DICTIONARY IF EXISTS simple_key_direct_dictionary; + DROP DICTIONARY IF EXISTS complex_key_direct_dictionary; diff --git a/tests/performance/flat_dictionary.xml b/tests/performance/flat_dictionary.xml new file mode 100644 index 00000000000..a80631db541 --- /dev/null +++ b/tests/performance/flat_dictionary.xml @@ -0,0 +1,80 @@ + + + CREATE TABLE simple_key_flat_dictionary_source_table + ( + id UInt64, + value_int UInt64, + value_string String, + value_decimal Decimal64(8), + value_string_nullable Nullable(String) + ) ENGINE = Memory; + + + + CREATE DICTIONARY simple_key_flat_dictionary + ( + id UInt64, + value_int UInt64, + value_string String, + value_decimal Decimal64(8), + value_string_nullable Nullable(String) + ) + PRIMARY KEY id + SOURCE(CLICKHOUSE(DB 'default' TABLE 'simple_key_flat_dictionary_source_table')) + LAYOUT(FLAT(INITIAL_ARRAY_SIZE 50000 MAX_ARRAY_SIZE 5000000)) + LIFETIME(MIN 0 MAX 1000) + + + + INSERT INTO simple_key_flat_dictionary_source_table + SELECT number, number, toString(number), toDecimal64(number, 8), toString(number) + FROM system.numbers + LIMIT 5000000; + + + + + column_name + + 'value_int' + 'value_string' + 'value_decimal' + 'value_string_nullable' + + + + + elements_count + + 5000000 + 7500000 + + + + + + WITH rand64() % toUInt64({elements_count}) as key + SELECT dictGet('default.simple_key_flat_dictionary', {column_name}, key) + FROM system.numbers + LIMIT {elements_count} + FORMAT Null; + + + + SELECT * FROM simple_key_flat_dictionary + FORMAT Null; + + + + WITH rand64() % toUInt64(75000000) as key + SELECT dictHas('default.simple_key_flat_dictionary', key) + FROM system.numbers + LIMIT 75000000 + FORMAT Null; + + + DROP TABLE IF EXISTS simple_key_flat_dictionary_source_table + + DROP DICTIONARY IF EXISTS simple_key_flat_dictionary + + diff --git a/tests/performance/float_formatting.xml b/tests/performance/float_formatting.xml index d24ccd7664c..71d8aee3f89 100644 --- a/tests/performance/float_formatting.xml +++ b/tests/performance/float_formatting.xml @@ -3,7 +3,7 @@ is 10 times faster than toString(number % 100 + 0.5). The shorter queries are somewhat unstable, so ignore differences less than 10%. 
--> - + expr diff --git a/tests/performance/float_parsing.xml b/tests/performance/float_parsing.xml index 33ab8ba6f10..eb8577bd127 100644 --- a/tests/performance/float_parsing.xml +++ b/tests/performance/float_parsing.xml @@ -1,4 +1,4 @@ - + expr diff --git a/tests/performance/fuse_sumcount.xml b/tests/performance/fuse_sumcount.xml new file mode 100644 index 00000000000..b2eb0e678e2 --- /dev/null +++ b/tests/performance/fuse_sumcount.xml @@ -0,0 +1,33 @@ + + + + 1 + + + + + key + + 1 + intHash32(number) % 1000 + + + + + SELECT sum(number) FROM numbers(1000000000) FORMAT Null + SELECT sum(number), count(number) FROM numbers(1000000000) FORMAT Null + SELECT sum(number), count(number) FROM numbers(1000000000) SETTINGS optimize_fuse_sum_count_avg = 0 FORMAT Null + SELECT sum(number), avg(number), count(number) FROM numbers(1000000000) FORMAT Null + SELECT sum(number), avg(number), count(number) FROM numbers(1000000000) SETTINGS optimize_fuse_sum_count_avg = 0 FORMAT Null + + SELECT sum(number) FROM numbers(100000000) GROUP BY intHash32(number) % 1000 FORMAT Null + SELECT sum(number), count(number) FROM numbers(100000000) GROUP BY intHash32(number) % 1000 FORMAT Null + SELECT sum(number), count(number) FROM numbers(100000000) GROUP BY intHash32(number) % 1000 SETTINGS optimize_fuse_sum_count_avg = 0 FORMAT Null + SELECT sum(number), avg(number), count(number) FROM numbers(100000000) GROUP BY intHash32(number) % 1000 FORMAT Null + SELECT sum(number), avg(number), count(number) FROM numbers(100000000) GROUP BY intHash32(number) % 1000 SETTINGS optimize_fuse_sum_count_avg = 0 FORMAT Null + diff --git a/tests/performance/fuzz_bits.xml b/tests/performance/fuzz_bits.xml index 2679977cb1d..87064e520c2 100644 --- a/tests/performance/fuzz_bits.xml +++ b/tests/performance/fuzz_bits.xml @@ -1,4 +1,4 @@ - + diff --git a/tests/performance/general_purpose_hashes.xml b/tests/performance/general_purpose_hashes.xml index bd2fa9674f6..f34554360cf 100644 --- a/tests/performance/general_purpose_hashes.xml +++ b/tests/performance/general_purpose_hashes.xml @@ -1,4 +1,4 @@ - + gp_hash_func diff --git a/tests/performance/generate_table_function.xml b/tests/performance/generate_table_function.xml index bc49a7de1bd..0339a8c19e8 100644 --- a/tests/performance/generate_table_function.xml +++ b/tests/performance/generate_table_function.xml @@ -1,4 +1,4 @@ - + SELECT sum(NOT ignore(*)) FROM (SELECT * FROM generateRandom('ui64 UInt64, i64 Int64, ui32 UInt32, i32 Int32, ui16 UInt16, i16 Int16, ui8 UInt8, i8 Int8') LIMIT 1000000000); SELECT sum(NOT ignore(*)) FROM (SELECT * FROM generateRandom('ui64 UInt64, i64 Int64, ui32 UInt32, i32 Int32, ui16 UInt16, i16 Int16, ui8 UInt8, i8 Int8', 0, 10, 10) LIMIT 1000000000); SELECT sum(NOT ignore(*)) FROM (SELECT * FROM generateRandom('i Enum8(\'hello\' = 1, \'world\' = 5)', 0, 10, 10) LIMIT 1000000000); diff --git a/tests/performance/great_circle_dist.xml b/tests/performance/great_circle_dist.xml index b5e271ddfa8..ad445f34417 100644 --- a/tests/performance/great_circle_dist.xml +++ b/tests/performance/great_circle_dist.xml @@ -2,6 +2,6 @@ SELECT count() FROM numbers(1000000) WHERE NOT ignore(greatCircleDistance((rand(1) % 360) * 1. - 180, (number % 150) * 1.2 - 90, (number % 360) + toFloat64(rand(2)) / 4294967296 - 180, (rand(3) % 180) * 1. - 90)) - SELECT count() FROM zeros(1000000) WHERE NOT ignore(greatCircleDistance(55. + toFloat64(rand(1)) / 4294967296, 37. + toFloat64(rand(2)) / 4294967296, 55. + toFloat64(rand(3)) / 4294967296, 37. 
+ toFloat64(rand(4)) / 4294967296)) + SELECT count() FROM zeros(10000000) WHERE NOT ignore(greatCircleDistance(55. + toFloat64(rand(1)) / 4294967296, 37. + toFloat64(rand(2)) / 4294967296, 55. + toFloat64(rand(3)) / 4294967296, 37. + toFloat64(rand(4)) / 4294967296)) diff --git a/tests/performance/group_by_sundy_li.xml b/tests/performance/group_by_sundy_li.xml index c49712a8519..aebc305335c 100644 --- a/tests/performance/group_by_sundy_li.xml +++ b/tests/performance/group_by_sundy_li.xml @@ -1,4 +1,4 @@ - + 8 diff --git a/tests/performance/hashed_dictionary.xml b/tests/performance/hashed_dictionary.xml new file mode 100644 index 00000000000..26164b4f888 --- /dev/null +++ b/tests/performance/hashed_dictionary.xml @@ -0,0 +1,126 @@ + + + CREATE TABLE simple_key_hashed_dictionary_source_table + ( + id UInt64, + value_int UInt64, + value_string String, + value_decimal Decimal64(8), + value_string_nullable Nullable(String) + ) ENGINE = Memory; + + + + CREATE TABLE complex_key_hashed_dictionary_source_table + ( + id UInt64, + id_key String, + value_int UInt64, + value_string String, + value_decimal Decimal64(8), + value_string_nullable Nullable(String) + ) ENGINE = Memory; + + + + CREATE DICTIONARY simple_key_hashed_dictionary + ( + id UInt64, + value_int UInt64, + value_string String, + value_decimal Decimal64(8), + value_string_nullable Nullable(String) + ) + PRIMARY KEY id + SOURCE(CLICKHOUSE(DB 'default' TABLE 'simple_key_hashed_dictionary_source_table')) + LAYOUT(HASHED()) + LIFETIME(MIN 0 MAX 1000); + + + + CREATE DICTIONARY complex_key_hashed_dictionary + ( + id UInt64, + id_key String, + value_int UInt64, + value_string String, + value_decimal Decimal64(8), + value_string_nullable Nullable(String) + ) + PRIMARY KEY id, id_key + SOURCE(CLICKHOUSE(DB 'default' TABLE 'complex_key_hashed_dictionary_source_table')) + LAYOUT(COMPLEX_KEY_HASHED()) + LIFETIME(MIN 0 MAX 1000); + + + + INSERT INTO simple_key_hashed_dictionary_source_table + SELECT number, number, toString(number), toDecimal64(number, 8), toString(number) + FROM system.numbers + LIMIT 5000000; + + + + INSERT INTO complex_key_hashed_dictionary_source_table + SELECT number, toString(number), number, toString(number), toDecimal64(number, 8), toString(number) + FROM system.numbers + LIMIT 5000000; + + + + + column_name + + 'value_int' + 'value_string' + 'value_decimal' + 'value_string_nullable' + + + + + elements_count + + 5000000 + 7500000 + + + + + + WITH rand64() % toUInt64({elements_count}) as key + SELECT dictGet('default.simple_key_hashed_dictionary', {column_name}, key) + FROM system.numbers + LIMIT {elements_count} + FORMAT Null; + + + WITH rand64() % toUInt64({elements_count}) as key + SELECT dictHas('default.simple_key_hashed_dictionary', key) + FROM system.numbers + LIMIT {elements_count} + FORMAT Null; + + + + WITH (rand64() % toUInt64({elements_count}), toString(rand64() % toUInt64({elements_count}))) as key + SELECT dictGet('default.complex_key_hashed_dictionary', {column_name}, key) + FROM system.numbers + LIMIT {elements_count} + FORMAT Null; + + + WITH (rand64() % toUInt64({elements_count}), toString(rand64() % toUInt64({elements_count}))) as key + SELECT dictHas('default.complex_key_hashed_dictionary', key) + FROM system.numbers + LIMIT {elements_count} + FORMAT Null; + + + DROP TABLE IF EXISTS simple_key_hashed_dictionary_source_table; + DROP TABLE IF EXISTS complex_key_hashed_dictionary_source_table; + + DROP DICTIONARY IF EXISTS simple_key_hashed_dictionary; + DROP DICTIONARY IF EXISTS 
complex_key_hashed_dictionary; + + diff --git a/tests/performance/if_array_string.xml b/tests/performance/if_array_string.xml index 445b3c8c55a..f1752767e76 100644 --- a/tests/performance/if_array_string.xml +++ b/tests/performance/if_array_string.xml @@ -1,8 +1,8 @@ - + SELECT count() FROM zeros(10000000) WHERE NOT ignore(rand() % 2 ? ['Hello', 'World'] : ['a', 'b', 'c']) SELECT count() FROM zeros(10000000) WHERE NOT ignore(rand() % 2 ? materialize(['Hello', 'World']) : ['a', 'b', 'c']) SELECT count() FROM zeros(10000000) WHERE NOT ignore(rand() % 2 ? ['Hello', 'World'] : materialize(['a', 'b', 'c'])) SELECT count() FROM zeros(10000000) WHERE NOT ignore(rand() % 2 ? materialize(['Hello', 'World']) : materialize(['a', 'b', 'c'])) SELECT count() FROM zeros(10000000) WHERE NOT ignore(rand() % 2 ? materialize(['', '']) : emptyArrayString()) - SELECT count() FROM zeros(1000000) WHERE NOT ignore(rand() % 2 ? materialize(['https://github.com/ClickHouse/ClickHouse/pull/1070', 'https://www.google.ru/search?newwindow=1&site=&source=hp&q=zookeeper+wire+protocol+exists&oq=zookeeper+wire+protocol+exists&gs_l=psy-ab.3...330.6300.0.6687.33.28.0.0.0.0.386.4838.0j5j9j5.19.0....0...1.1.64.psy-ab..14.17.4448.0..0j35i39k1j0i131k1j0i22i30k1j0i19k1j33i21k1.r_3uFoNOrSU']) : emptyArrayString()) + SELECT count() FROM zeros(10000000) WHERE NOT ignore(rand() % 2 ? materialize(['https://github.com/ClickHouse/ClickHouse/pull/1070', 'https://www.google.ru/search?newwindow=1&site=&source=hp&q=zookeeper+wire+protocol+exists&oq=zookeeper+wire+protocol+exists&gs_l=psy-ab.3...330.6300.0.6687.33.28.0.0.0.0.386.4838.0j5j9j5.19.0....0...1.1.64.psy-ab..14.17.4448.0..0j35i39k1j0i131k1j0i22i30k1j0i19k1j33i21k1.r_3uFoNOrSU']) : emptyArrayString()) diff --git a/tests/performance/intDiv.xml b/tests/performance/intDiv.xml new file mode 100644 index 00000000000..c6fa0238986 --- /dev/null +++ b/tests/performance/intDiv.xml @@ -0,0 +1,5 @@ + + SELECT count() FROM numbers(200000000) WHERE NOT ignore(intDiv(number, 1000000000)) + SELECT count() FROM numbers(200000000) WHERE NOT ignore(divide(number, 1000000000)) + SELECT count() FROM numbers(200000000) WHERE NOT ignore(toUInt32(divide(number, 1000000000))) + diff --git a/tests/performance/int_parsing.xml b/tests/performance/int_parsing.xml index 3b8620e46c3..32f904331ce 100644 --- a/tests/performance/int_parsing.xml +++ b/tests/performance/int_parsing.xml @@ -1,4 +1,4 @@ - + hits_100m_single hits_10m_single diff --git a/tests/performance/jit_small_requests.xml b/tests/performance/jit_small_requests.xml index c9abec0926b..d8f917fb9af 100644 --- a/tests/performance/jit_small_requests.xml +++ b/tests/performance/jit_small_requests.xml @@ -1,4 +1,4 @@ - + WITH bitXor(number, 0x4CF2D2BAAE6DA887) AS x0, diff --git a/tests/performance/joins_in_memory.xml b/tests/performance/joins_in_memory.xml index bac7679930f..158602e28ab 100644 --- a/tests/performance/joins_in_memory.xml +++ b/tests/performance/joins_in_memory.xml @@ -1,4 +1,4 @@ - + CREATE TABLE ints (i64 Int64, i32 Int32, i16 Int16, i8 Int8) ENGINE = Memory INSERT INTO ints SELECT number AS i64, i64 AS i32, i64 AS i16, i64 AS i8 FROM numbers(10000) @@ -13,12 +13,12 @@ SELECT COUNT() FROM ints l ANY LEFT JOIN ints r USING i64 WHERE i32 IN(42, 10042, 20042, 30042, 40042) SELECT COUNT() FROM ints l INNER JOIN ints r USING i64 WHERE i32 = 20042 - SELECT COUNT() FROM ints l INNER JOIN ints r USING i64,i32,i16,i8 WHERE i32 = 20042 + SELECT COUNT() FROM ints l INNER JOIN ints r USING i64,i32,i16,i8 WHERE i32 = 20042 settings 
query_plan_filter_push_down = 0 SELECT COUNT() FROM ints l INNER JOIN ints r ON l.i64 = r.i64 WHERE i32 = 20042 SELECT COUNT() FROM ints l INNER JOIN ints r USING i64 WHERE i32 IN(42, 10042, 20042, 30042, 40042) SELECT COUNT() FROM ints l LEFT JOIN ints r USING i64 WHERE i32 = 20042 - SELECT COUNT() FROM ints l LEFT JOIN ints r USING i64,i32,i16,i8 WHERE i32 = 20042 + SELECT COUNT() FROM ints l LEFT JOIN ints r USING i64,i32,i16,i8 WHERE i32 = 20042 settings query_plan_filter_push_down = 0 SELECT COUNT() FROM ints l LEFT JOIN ints r ON l.i64 = r.i64 WHERE i32 = 20042 SELECT COUNT() FROM ints l LEFT JOIN ints r USING i64 WHERE i32 IN(42, 10042, 20042, 30042, 40042) diff --git a/tests/performance/joins_in_memory_pmj.xml b/tests/performance/joins_in_memory_pmj.xml index 5dd4395513d..d122dba72c3 100644 --- a/tests/performance/joins_in_memory_pmj.xml +++ b/tests/performance/joins_in_memory_pmj.xml @@ -1,55 +1,56 @@ - + CREATE TABLE ints (i64 Int64, i32 Int32, i16 Int16, i8 Int8) ENGINE = Memory partial_merge + 0 - INSERT INTO ints SELECT number AS i64, i64 AS i32, i64 AS i16, i64 AS i8 FROM numbers(10000) - INSERT INTO ints SELECT 10000 + number % 1000 AS i64, i64 AS i32, i64 AS i16, i64 AS i8 FROM numbers(10000) - INSERT INTO ints SELECT 20000 + number % 100 AS i64, i64 AS i32, i64 AS i16, i64 AS i8 FROM numbers(10000) - INSERT INTO ints SELECT 30000 + number % 10 AS i64, i64 AS i32, i64 AS i16, i64 AS i8 FROM numbers(10000) - INSERT INTO ints SELECT 40000 + number % 1 AS i64, i64 AS i32, i64 AS i16, i64 AS i8 FROM numbers(10000) + INSERT INTO ints SELECT number AS i64, i64 AS i32, i64 AS i16, i64 AS i8 FROM numbers(10000) settings query_plan_filter_push_down = 0 + INSERT INTO ints SELECT 10000 + number % 1000 AS i64, i64 AS i32, i64 AS i16, i64 AS i8 FROM numbers(10000) settings query_plan_filter_push_down = 0 + INSERT INTO ints SELECT 20000 + number % 100 AS i64, i64 AS i32, i64 AS i16, i64 AS i8 FROM numbers(10000) settings query_plan_filter_push_down = 0 + INSERT INTO ints SELECT 30000 + number % 10 AS i64, i64 AS i32, i64 AS i16, i64 AS i8 FROM numbers(10000) settings query_plan_filter_push_down = 0 + INSERT INTO ints SELECT 40000 + number % 1 AS i64, i64 AS i32, i64 AS i16, i64 AS i8 FROM numbers(10000) settings query_plan_filter_push_down = 0 - SELECT COUNT() FROM ints l ANY LEFT JOIN ints r USING i64 WHERE i32 = 20042 - SELECT COUNT() FROM ints l ANY LEFT JOIN ints r USING i64,i32,i16,i8 WHERE i32 = 20042 - SELECT COUNT() FROM ints l ANY LEFT JOIN ints r ON l.i64 = r.i64 WHERE i32 = 20042 - SELECT COUNT() FROM ints l ANY LEFT JOIN ints r USING i64 WHERE i32 IN(42, 10042, 20042, 30042, 40042) + SELECT COUNT() FROM ints l ANY LEFT JOIN ints r USING i64 WHERE i32 = 20042 settings query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l ANY LEFT JOIN ints r USING i64,i32,i16,i8 WHERE i32 = 20042 settings query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l ANY LEFT JOIN ints r ON l.i64 = r.i64 WHERE i32 = 20042 settings query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l ANY LEFT JOIN ints r USING i64 WHERE i32 IN(42, 10042, 20042, 30042, 40042) settings query_plan_filter_push_down = 0 - SELECT COUNT() FROM ints l INNER JOIN ints r USING i64 WHERE i32 = 20042 - SELECT COUNT() FROM ints l INNER JOIN ints r USING i64,i32,i16,i8 WHERE i32 = 20042 - SELECT COUNT() FROM ints l INNER JOIN ints r ON l.i64 = r.i64 WHERE i32 = 20042 - SELECT COUNT() FROM ints l INNER JOIN ints r USING i64 WHERE i32 IN(42, 10042, 20042, 30042, 40042) + SELECT COUNT() FROM ints l INNER JOIN ints 
r USING i64 WHERE i32 = 20042 settings query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l INNER JOIN ints r USING i64,i32,i16,i8 WHERE i32 = 20042 settings query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l INNER JOIN ints r ON l.i64 = r.i64 WHERE i32 = 20042 settings query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l INNER JOIN ints r USING i64 WHERE i32 IN(42, 10042, 20042, 30042, 40042) settings query_plan_filter_push_down = 0 - SELECT COUNT() FROM ints l LEFT JOIN ints r USING i64 WHERE i32 = 20042 - SELECT COUNT() FROM ints l LEFT JOIN ints r USING i64,i32,i16,i8 WHERE i32 = 20042 - SELECT COUNT() FROM ints l LEFT JOIN ints r ON l.i64 = r.i64 WHERE i32 = 20042 - SELECT COUNT() FROM ints l LEFT JOIN ints r USING i64 WHERE i32 IN(42, 10042, 20042, 30042, 40042) + SELECT COUNT() FROM ints l LEFT JOIN ints r USING i64 WHERE i32 = 20042 settings query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l LEFT JOIN ints r USING i64,i32,i16,i8 WHERE i32 = 20042 settings query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l LEFT JOIN ints r ON l.i64 = r.i64 WHERE i32 = 20042 settings query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l LEFT JOIN ints r USING i64 WHERE i32 IN(42, 10042, 20042, 30042, 40042) settings query_plan_filter_push_down = 0 - SELECT COUNT() FROM ints l ANY LEFT JOIN ints r USING i64 WHERE i32 = 20042 SETTINGS partial_merge_join_optimizations = 0 - SELECT COUNT() FROM ints l ANY LEFT JOIN ints r USING i64,i32,i16,i8 WHERE i32 = 20042 SETTINGS partial_merge_join_optimizations = 0 - SELECT COUNT() FROM ints l ANY LEFT JOIN ints r ON l.i64 = r.i64 WHERE i32 = 20042 SETTINGS partial_merge_join_optimizations = 0 - SELECT COUNT() FROM ints l ANY LEFT JOIN ints r USING i64 WHERE i32 IN(42, 10042, 20042, 30042, 40042) SETTINGS partial_merge_join_optimizations = 0 + SELECT COUNT() FROM ints l ANY LEFT JOIN ints r USING i64 WHERE i32 = 20042 SETTINGS partial_merge_join_optimizations = 0, query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l ANY LEFT JOIN ints r USING i64,i32,i16,i8 WHERE i32 = 20042 SETTINGS partial_merge_join_optimizations = 0, query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l ANY LEFT JOIN ints r ON l.i64 = r.i64 WHERE i32 = 20042 SETTINGS partial_merge_join_optimizations = 0, query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l ANY LEFT JOIN ints r USING i64 WHERE i32 IN(42, 10042, 20042, 30042, 40042) SETTINGS partial_merge_join_optimizations = 0, query_plan_filter_push_down = 0 - SELECT COUNT() FROM ints l INNER JOIN ints r USING i64 WHERE i32 = 20042 SETTINGS partial_merge_join_optimizations = 0 - SELECT COUNT() FROM ints l INNER JOIN ints r USING i64,i32,i16,i8 WHERE i32 = 20042 SETTINGS partial_merge_join_optimizations = 0 - SELECT COUNT() FROM ints l INNER JOIN ints r ON l.i64 = r.i64 WHERE i32 = 20042 SETTINGS partial_merge_join_optimizations = 0 - SELECT COUNT() FROM ints l INNER JOIN ints r USING i64 WHERE i32 IN(42, 10042, 20042, 30042, 40042) SETTINGS partial_merge_join_optimizations = 0 + SELECT COUNT() FROM ints l INNER JOIN ints r USING i64 WHERE i32 = 20042 SETTINGS partial_merge_join_optimizations = 0, query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l INNER JOIN ints r USING i64,i32,i16,i8 WHERE i32 = 20042 SETTINGS partial_merge_join_optimizations = 0, query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l INNER JOIN ints r ON l.i64 = r.i64 WHERE i32 = 20042 SETTINGS partial_merge_join_optimizations = 0, query_plan_filter_push_down = 0 + SELECT COUNT() FROM 
ints l INNER JOIN ints r USING i64 WHERE i32 IN(42, 10042, 20042, 30042, 40042) SETTINGS partial_merge_join_optimizations = 0, query_plan_filter_push_down = 0 - SELECT COUNT() FROM ints l LEFT JOIN ints r USING i64 WHERE i32 = 20042 SETTINGS partial_merge_join_optimizations = 0 - SELECT COUNT() FROM ints l LEFT JOIN ints r USING i64,i32,i16,i8 WHERE i32 = 20042 SETTINGS partial_merge_join_optimizations = 0 - SELECT COUNT() FROM ints l LEFT JOIN ints r ON l.i64 = r.i64 WHERE i32 = 20042 SETTINGS partial_merge_join_optimizations = 0 - SELECT COUNT() FROM ints l LEFT JOIN ints r USING i64 WHERE i32 IN(42, 10042, 20042, 30042, 40042) SETTINGS partial_merge_join_optimizations = 0 + SELECT COUNT() FROM ints l LEFT JOIN ints r USING i64 WHERE i32 = 20042 SETTINGS partial_merge_join_optimizations = 0, query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l LEFT JOIN ints r USING i64,i32,i16,i8 WHERE i32 = 20042 SETTINGS partial_merge_join_optimizations = 0, query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l LEFT JOIN ints r ON l.i64 = r.i64 WHERE i32 = 20042 SETTINGS partial_merge_join_optimizations = 0, query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l LEFT JOIN ints r USING i64 WHERE i32 IN(42, 10042, 20042, 30042, 40042) SETTINGS partial_merge_join_optimizations = 0, query_plan_filter_push_down = 0 - SELECT COUNT() FROM ints l RIGHT JOIN ints r USING i64 WHERE i32 = 20042 - SELECT COUNT() FROM ints l RIGHT JOIN ints r USING i64,i32,i16,i8 WHERE i32 = 20042 - SELECT COUNT() FROM ints l RIGHT JOIN ints r ON l.i64 = r.i64 WHERE i32 = 20042 - SELECT COUNT() FROM ints l RIGHT JOIN ints r USING i64 WHERE i32 IN(42, 10042, 20042, 30042, 40042) + SELECT COUNT() FROM ints l RIGHT JOIN ints r USING i64 WHERE i32 = 20042 settings query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l RIGHT JOIN ints r USING i64,i32,i16,i8 WHERE i32 = 20042 settings query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l RIGHT JOIN ints r ON l.i64 = r.i64 WHERE i32 = 20042 settings query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l RIGHT JOIN ints r USING i64 WHERE i32 IN(42, 10042, 20042, 30042, 40042) settings query_plan_filter_push_down = 0 - SELECT COUNT() FROM ints l FULL JOIN ints r USING i64 WHERE i32 = 20042 - SELECT COUNT() FROM ints l FULL JOIN ints r USING i64,i32,i16,i8 WHERE i32 = 20042 - SELECT COUNT() FROM ints l FULL JOIN ints r ON l.i64 = r.i64 WHERE i32 = 20042 - SELECT COUNT() FROM ints l FULL JOIN ints r USING i64 WHERE i32 IN(42, 10042, 20042, 30042, 40042) + SELECT COUNT() FROM ints l FULL JOIN ints r USING i64 WHERE i32 = 20042 settings query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l FULL JOIN ints r USING i64,i32,i16,i8 WHERE i32 = 20042 settings query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l FULL JOIN ints r ON l.i64 = r.i64 WHERE i32 = 20042 settings query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l FULL JOIN ints r USING i64 WHERE i32 IN(42, 10042, 20042, 30042, 40042) settings query_plan_filter_push_down = 0 DROP TABLE IF EXISTS ints diff --git a/tests/performance/json_extract_simdjson.xml b/tests/performance/json_extract_simdjson.xml index f9f6df5140e..9ec3613d5e8 100644 --- a/tests/performance/json_extract_simdjson.xml +++ b/tests/performance/json_extract_simdjson.xml @@ -1,7 +1,4 @@ - - - json @@ -21,19 +18,19 @@ 1 - SELECT 'simdjson-1', count() FROM zeros(1000000) WHERE NOT ignore(JSONExtractString(materialize({json}), 'sparam')) - SELECT 'simdjson-2', count() FROM zeros(1000000) WHERE NOT 
ignore(JSONExtractString(materialize({json}), 'sparam', 'nested_1')) - SELECT 'simdjson-3', count() FROM zeros(1000000) WHERE NOT ignore(JSONExtractInt(materialize({json}), 'nparam')) - SELECT 'simdjson-4', count() FROM zeros(1000000) WHERE NOT ignore(JSONExtractUInt(materialize({json}), 'nparam')) - SELECT 'simdjson-5', count() FROM zeros(1000000) WHERE NOT ignore(JSONExtractFloat(materialize({json}), 'fparam')) + SELECT 'simdjson-1', count() FROM zeros(5000000) WHERE NOT ignore(JSONExtractString(materialize({json}), 'sparam')) + SELECT 'simdjson-2', count() FROM zeros(5000000) WHERE NOT ignore(JSONExtractString(materialize({json}), 'sparam', 'nested_1')) + SELECT 'simdjson-3', count() FROM zeros(5000000) WHERE NOT ignore(JSONExtractInt(materialize({json}), 'nparam')) + SELECT 'simdjson-4', count() FROM zeros(5000000) WHERE NOT ignore(JSONExtractUInt(materialize({json}), 'nparam')) + SELECT 'simdjson-5', count() FROM zeros(5000000) WHERE NOT ignore(JSONExtractFloat(materialize({json}), 'fparam')) - SELECT 'simdjson-6', count() FROM zeros(1000000) WHERE NOT ignore(JSONExtractString(materialize({long_json}), 'sparam')) - SELECT 'simdjson-7', count() FROM zeros(1000000) WHERE NOT ignore(JSONExtractString(materialize({long_json}), 'sparam', 'nested_1')) - SELECT 'simdjson-8', count() FROM zeros(1000000) WHERE NOT ignore(JSONExtractInt(materialize({long_json}), 'nparam')) - SELECT 'simdjson-9', count() FROM zeros(1000000) WHERE NOT ignore(JSONExtractUInt(materialize({long_json}), 'nparam')) - SELECT 'simdjson-10', count() FROM zeros(1000000) WHERE NOT ignore(JSONExtractRaw(materialize({long_json}), 'fparam')) - SELECT 'simdjson-11', count() FROM zeros(1000000) WHERE NOT ignore(JSONExtractFloat(materialize({long_json}), 'fparam')) - SELECT 'simdjson-12', count() FROM zeros(1000000) WHERE NOT ignore(JSONExtractFloat(materialize({long_json}), 'fparam', 'nested_2', -2)) - SELECT 'simdjson-13', count() FROM zeros(1000000) WHERE NOT ignore(JSONExtractBool(materialize({long_json}), 'bparam')) + SELECT 'simdjson-6', count() FROM zeros(5000000) WHERE NOT ignore(JSONExtractString(materialize({long_json}), 'sparam')) + SELECT 'simdjson-7', count() FROM zeros(5000000) WHERE NOT ignore(JSONExtractString(materialize({long_json}), 'sparam', 'nested_1')) + SELECT 'simdjson-8', count() FROM zeros(5000000) WHERE NOT ignore(JSONExtractInt(materialize({long_json}), 'nparam')) + SELECT 'simdjson-9', count() FROM zeros(5000000) WHERE NOT ignore(JSONExtractUInt(materialize({long_json}), 'nparam')) + SELECT 'simdjson-10', count() FROM zeros(3000000) WHERE NOT ignore(JSONExtractRaw(materialize({long_json}), 'fparam')) + SELECT 'simdjson-11', count() FROM zeros(5000000) WHERE NOT ignore(JSONExtractFloat(materialize({long_json}), 'fparam')) + SELECT 'simdjson-12', count() FROM zeros(5000000) WHERE NOT ignore(JSONExtractFloat(materialize({long_json}), 'fparam', 'nested_2', -2)) + SELECT 'simdjson-13', count() FROM zeros(5000000) WHERE NOT ignore(JSONExtractBool(materialize({long_json}), 'bparam')) diff --git a/tests/performance/logical_functions_medium.xml b/tests/performance/logical_functions_medium.xml index be474894b54..19572191532 100644 --- a/tests/performance/logical_functions_medium.xml +++ b/tests/performance/logical_functions_medium.xml @@ -1,4 +1,4 @@ - + 1 diff --git a/tests/performance/logical_functions_small.xml b/tests/performance/logical_functions_small.xml index 3d70ef6811d..d5f6a7b99cb 100644 --- a/tests/performance/logical_functions_small.xml +++ b/tests/performance/logical_functions_small.xml @@ -1,4 
+1,4 @@ - + 1 diff --git a/tests/performance/math.xml b/tests/performance/math.xml index 006e33548c9..35250351683 100644 --- a/tests/performance/math.xml +++ b/tests/performance/math.xml @@ -1,4 +1,4 @@ - + func_slow diff --git a/tests/performance/mmap_io.xml b/tests/performance/mmap_io.xml new file mode 100644 index 00000000000..b8f3f6e69dd --- /dev/null +++ b/tests/performance/mmap_io.xml @@ -0,0 +1,17 @@ + + + hits_10m_single + + + + 1 + + + CREATE TABLE hits_none (WatchID UInt64 CODEC(NONE)) ENGINE = MergeTree ORDER BY tuple() + INSERT INTO hits_none SELECT WatchID FROM test.hits + OPTIMIZE TABLE hits_none FINAL + + + + DROP TABLE hits_none + diff --git a/tests/performance/optimized_select_final.xml b/tests/performance/optimized_select_final.xml index 2c8254d2b88..d70fccc1330 100644 --- a/tests/performance/optimized_select_final.xml +++ b/tests/performance/optimized_select_final.xml @@ -1,4 +1,4 @@ - + 1 diff --git a/tests/performance/optimized_select_final_one_part.xml b/tests/performance/optimized_select_final_one_part.xml index 92c8eed859a..63541313ac9 100644 --- a/tests/performance/optimized_select_final_one_part.xml +++ b/tests/performance/optimized_select_final_one_part.xml @@ -1,4 +1,4 @@ - + 1 diff --git a/tests/performance/or_null_default.xml b/tests/performance/or_null_default.xml index 6fed0cce4d6..009719f66a5 100644 --- a/tests/performance/or_null_default.xml +++ b/tests/performance/or_null_default.xml @@ -1,4 +1,4 @@ - + SELECT sumOrNull(number) FROM numbers(100000000) SELECT sumOrDefault(toNullable(number)) FROM numbers(100000000) SELECT sumOrNull(number) FROM numbers(10000000) GROUP BY number % 1024 diff --git a/tests/performance/order_by_decimals.xml b/tests/performance/order_by_decimals.xml index 4889137865d..20b860f0a2d 100644 --- a/tests/performance/order_by_decimals.xml +++ b/tests/performance/order_by_decimals.xml @@ -4,13 +4,10 @@ comparison + SELECT toInt32(number) AS n FROM numbers(10000000) ORDER BY n DESC FORMAT Null + SELECT toDecimal32(number, 0) AS n FROM numbers(10000000) ORDER BY n FORMAT Null - - SELECT toInt32(number) AS n FROM numbers(1000000) ORDER BY n DESC FORMAT Null - SELECT toDecimal32(number, 0) AS n FROM numbers(1000000) ORDER BY n FORMAT Null - - SELECT toDecimal32(number, 0) AS n FROM numbers(1000000) ORDER BY n DESC FORMAT Null - SELECT toDecimal64(number, 8) AS n FROM numbers(1000000) ORDER BY n DESC FORMAT Null - SELECT toDecimal128(number, 10) AS n FROM numbers(1000000) ORDER BY n DESC FORMAT Null - + SELECT toDecimal32(number, 0) AS n FROM numbers(10000000) ORDER BY n DESC FORMAT Null + SELECT toDecimal64(number, 8) AS n FROM numbers(10000000) ORDER BY n DESC FORMAT Null + SELECT toDecimal128(number, 10) AS n FROM numbers(10000000) ORDER BY n DESC FORMAT Null diff --git a/tests/performance/order_by_read_in_order.xml b/tests/performance/order_by_read_in_order.xml index b91cd14baf4..cdbf477c335 100644 --- a/tests/performance/order_by_read_in_order.xml +++ b/tests/performance/order_by_read_in_order.xml @@ -3,10 +3,11 @@ hits_100m_single -SELECT * FROM hits_100m_single ORDER BY CounterID, EventDate LIMIT 1000 -SELECT * FROM hits_100m_single ORDER BY CounterID DESC, toStartOfWeek(EventDate) DESC LIMIT 100 + +SELECT * FROM hits_100m_single ORDER BY CounterID, EventDate LIMIT 100 +SELECT * FROM hits_100m_single ORDER BY CounterID DESC, toStartOfWeek(EventDate) DESC LIMIT 100 SELECT * FROM hits_100m_single ORDER BY CounterID, EventDate, URL LIMIT 100 -SELECT * FROM hits_100m_single WHERE CounterID IN (152220, 168777, 149234, 149234) ORDER BY 
CounterID DESC, EventDate DESC LIMIT 100 +SELECT * FROM hits_100m_single WHERE CounterID IN (152220, 168777, 149234, 149234) ORDER BY CounterID DESC, EventDate DESC LIMIT 100 SELECT * FROM hits_100m_single WHERE UserID=1988954671305023629 ORDER BY CounterID, EventDate LIMIT 100 diff --git a/tests/performance/parse_engine_file.xml b/tests/performance/parse_engine_file.xml index 2459ed084cd..d0226c3bb68 100644 --- a/tests/performance/parse_engine_file.xml +++ b/tests/performance/parse_engine_file.xml @@ -1,4 +1,4 @@ - + test.hits @@ -30,7 +30,7 @@ INSERT INTO table_{format} SELECT * FROM test.hits LIMIT 100000 -SELECT * FROM table_{format} FORMAT Null +SELECT * FROM table_{format} FORMAT Null DROP TABLE IF EXISTS table_{format} diff --git a/tests/performance/point_in_polygon.xml b/tests/performance/point_in_polygon.xml index 403c2d62cba..31c24eb006f 100644 --- a/tests/performance/point_in_polygon.xml +++ b/tests/performance/point_in_polygon.xml @@ -1,5 +1,9 @@ + 0 @@ -8,7 +12,8 @@ INSERT INTO polygons WITH number + 1 AS radius SELECT [arrayMap(x -> (cos(x / 90. * pi()) * radius, sin(x / 90. * pi()) * radius), range(180))] - FROM numbers(1000000) + FROM numbers_mt(5000000) + SETTINGS max_insert_threads = 2, max_memory_usage = 30000000000 SELECT pointInPolygon((100, 100), polygon) FROM polygons FORMAT Null diff --git a/tests/performance/questdb_sum_int32.xml b/tests/performance/questdb_sum_int32.xml index ae13210107e..613ef3dc058 100644 --- a/tests/performance/questdb_sum_int32.xml +++ b/tests/performance/questdb_sum_int32.xml @@ -25,7 +25,8 @@ CREATE TABLE `zz_{type}_{engine}` (x {type}) ENGINE {engine} - INSERT INTO `zz_{type}_{engine}` SELECT rand() FROM numbers(1000000000) + INSERT INTO `zz_{type}_{engine}` SELECT rand() FROM numbers_mt(1000000000) SETTINGS max_insert_threads = 8 + OPTIMIZE TABLE `zz_{type}_MergeTree ORDER BY tuple()` FINAL SELECT sum(x) FROM `zz_{type}_{engine}` diff --git a/tests/performance/random_string.xml b/tests/performance/random_string.xml index 1a740ae077a..79f12373f1c 100644 --- a/tests/performance/random_string.xml +++ b/tests/performance/random_string.xml @@ -1,4 +1,4 @@ - + SELECT count() FROM zeros(100000000) WHERE NOT ignore(randomString(10)) SELECT count() FROM zeros(100000000) WHERE NOT ignore(randomString(100)) SELECT count() FROM zeros(1000000) WHERE NOT ignore(randomString(1000)) diff --git a/tests/performance/sum.xml b/tests/performance/sum.xml index 32c194dab6f..9bee2a580c3 100644 --- a/tests/performance/sum.xml +++ b/tests/performance/sum.xml @@ -1,4 +1,4 @@ - + SELECT sum(number) FROM numbers(100000000) SELECT sum(toUInt32(number)) FROM numbers(100000000) SELECT sum(toUInt16(number)) FROM numbers(100000000) diff --git a/tests/performance/sum_map.xml b/tests/performance/sum_map.xml index bc9f9be2a18..b732c150220 100644 --- a/tests/performance/sum_map.xml +++ b/tests/performance/sum_map.xml @@ -1,4 +1,4 @@ - + 1 diff --git a/tests/performance/synthetic_hardware_benchmark.xml b/tests/performance/synthetic_hardware_benchmark.xml index 4b94f73a21d..ffcf30db5cb 100644 --- a/tests/performance/synthetic_hardware_benchmark.xml +++ b/tests/performance/synthetic_hardware_benchmark.xml @@ -1,4 +1,4 @@ - + 30000000000 diff --git a/tests/performance/url_hits.xml b/tests/performance/url_hits.xml index a699ef6ba97..1813b2a72cb 100644 --- a/tests/performance/url_hits.xml +++ b/tests/performance/url_hits.xml @@ -1,4 +1,4 @@ - + hits_100m_single hits_10m_single diff --git a/tests/performance/visit_param_extract_raw.xml b/tests/performance/visit_param_extract_raw.xml 
index 67faeb1f743..358dcc9cc0e 100644 --- a/tests/performance/visit_param_extract_raw.xml +++ b/tests/performance/visit_param_extract_raw.xml @@ -1,4 +1,4 @@ - + param diff --git a/tests/performance/window_functions.xml b/tests/performance/window_functions.xml index 622e349d060..6be3d59e2b0 100644 --- a/tests/performance/window_functions.xml +++ b/tests/performance/window_functions.xml @@ -110,4 +110,46 @@ format Null + + + select leadInFrame(number) over w + from + (select number, intDiv(number, 1111) p, mod(number, 111) o + from numbers(10000000)) t + window w as (partition by p order by o + rows between unbounded preceding and unbounded following) + format Null + + + + + select any(number) over w + from + (select number, intDiv(number, 1111) p, mod(number, 111) o + from numbers(10000000)) t + window w as (partition by p order by o + rows between 1 following and 1 following) + format Null + + + + select leadInFrame(number, number) over w + from + (select number, intDiv(number, 1111) p, mod(number, 111) o + from numbers(10000000)) t + window w as (partition by p order by o + rows between unbounded preceding and unbounded following) + format Null + + + + select leadInFrame(number, number, number) over w + from + (select number, intDiv(number, 1111) p, mod(number, 111) o + from numbers(10000000)) t + window w as (partition by p order by o + rows between unbounded preceding and unbounded following) + format Null + + diff --git a/tests/queries/0_stateless/00027_argMinMax.reference b/tests/queries/0_stateless/00027_argMinMax.reference index 5ba447dd04b..101e8c16044 100644 --- a/tests/queries/0_stateless/00027_argMinMax.reference +++ b/tests/queries/0_stateless/00027_argMinMax.reference @@ -1,5 +1,5 @@ -0 (0,1) 9 (9,10) -0 ('0',1) 9 ('9',10) -1970-01-01 ('1970-01-01','1970-01-01 00:00:01') 1970-01-10 ('1970-01-10','1970-01-01 00:00:10') -0.00 (0.00,1.00) 9.00 (9.00,10.00) +0 9 +0 9 +1970-01-01 1970-01-10 +0.00 9.00 4 1 diff --git a/tests/queries/0_stateless/00027_argMinMax.sql b/tests/queries/0_stateless/00027_argMinMax.sql index 2bb3b507df5..2b67b99ec77 100644 --- a/tests/queries/0_stateless/00027_argMinMax.sql +++ b/tests/queries/0_stateless/00027_argMinMax.sql @@ -1,8 +1,8 @@ -- types -select argMin(x.1, x.2), argMin(x), argMax(x.1, x.2), argMax(x) from (select (number, number + 1) as x from numbers(10)); -select argMin(x.1, x.2), argMin(x), argMax(x.1, x.2), argMax(x) from (select (toString(number), toInt32(number) + 1) as x from numbers(10)); -select argMin(x.1, x.2), argMin(x), argMax(x.1, x.2), argMax(x) from (select (toDate(number, 'UTC'), toDateTime(number, 'UTC') + 1) as x from numbers(10)); -select argMin(x.1, x.2), argMin(x), argMax(x.1, x.2), argMax(x) from (select (toDecimal32(number, 2), toDecimal64(number, 2) + 1) as x from numbers(10)); +select argMin(x.1, x.2), argMax(x.1, x.2) from (select (number, number + 1) as x from numbers(10)); +select argMin(x.1, x.2), argMax(x.1, x.2) from (select (toString(number), toInt32(number) + 1) as x from numbers(10)); +select argMin(x.1, x.2), argMax(x.1, x.2) from (select (toDate(number, 'UTC'), toDateTime(number, 'UTC') + 1) as x from numbers(10)); +select argMin(x.1, x.2), argMax(x.1, x.2) from (select (toDecimal32(number, 2), toDecimal64(number, 2) + 1) as x from numbers(10)); -- array SELECT argMinArray(id, num), argMaxArray(id, num) FROM (SELECT arrayJoin([[10, 4, 3], [7, 5, 6], [8, 8, 2]]) AS num, arrayJoin([[1, 2, 4], [2, 3, 3]]) AS id); diff --git a/tests/queries/0_stateless/00027_simple_argMinArray.reference 
b/tests/queries/0_stateless/00027_simple_argMinArray.reference new file mode 100644 index 00000000000..4482956b706 --- /dev/null +++ b/tests/queries/0_stateless/00027_simple_argMinArray.reference @@ -0,0 +1 @@ +4 1 diff --git a/tests/queries/0_stateless/00027_simple_argMinArray.sql b/tests/queries/0_stateless/00027_simple_argMinArray.sql new file mode 100644 index 00000000000..b681a2c53cf --- /dev/null +++ b/tests/queries/0_stateless/00027_simple_argMinArray.sql @@ -0,0 +1 @@ +SELECT argMinArray(id, num), argMaxArray(id, num) FROM (SELECT arrayJoin([[10, 4, 3], [7, 5, 6], [8, 8, 2]]) AS num, arrayJoin([[1, 2, 4], [2, 3, 3]]) AS id) diff --git a/tests/queries/0_stateless/00029_test_zookeeper_optimize_exception.sh b/tests/queries/0_stateless/00029_test_zookeeper_optimize_exception.sh index 86f1d1f161c..3360b8da83d 100755 --- a/tests/queries/0_stateless/00029_test_zookeeper_optimize_exception.sh +++ b/tests/queries/0_stateless/00029_test_zookeeper_optimize_exception.sh @@ -4,12 +4,11 @@ CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) # shellcheck source=../shell_config.sh . "$CUR_DIR"/../shell_config.sh - ${CLICKHOUSE_CLIENT} --query="DROP TABLE IF EXISTS test_optimize_exception" ${CLICKHOUSE_CLIENT} --query="DROP TABLE IF EXISTS test_optimize_exception_replicated" ${CLICKHOUSE_CLIENT} --query="CREATE TABLE test_optimize_exception (date Date) ENGINE=MergeTree() PARTITION BY toYYYYMM(date) ORDER BY date" -${CLICKHOUSE_CLIENT} --query="CREATE TABLE test_optimize_exception_replicated (date Date) ENGINE=ReplicatedMergeTree('/clickhouse/tables/test_00029/optimize', 'r1') PARTITION BY toYYYYMM(date) ORDER BY date" +${CLICKHOUSE_CLIENT} --query="CREATE TABLE test_optimize_exception_replicated (date Date) ENGINE=ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/optimize', 'r1') PARTITION BY toYYYYMM(date) ORDER BY date" ${CLICKHOUSE_CLIENT} --query="INSERT INTO test_optimize_exception VALUES (toDate('2017-09-09')), (toDate('2017-09-10'))" ${CLICKHOUSE_CLIENT} --query="INSERT INTO test_optimize_exception VALUES (toDate('2017-09-09')), (toDate('2017-09-10'))" diff --git a/tests/queries/0_stateless/00119_storage_join.reference b/tests/queries/0_stateless/00119_storage_join.reference index 2689dadfb42..9b1dfe11616 100644 --- a/tests/queries/0_stateless/00119_storage_join.reference +++ b/tests/queries/0_stateless/00119_storage_join.reference @@ -38,3 +38,7 @@ ghi [] 1 [] 2 3 7 4 1 [] 2 3 8 4 1 [] 2 3 9 4 +0 a [] +1 a [0] + +0 a [] diff --git a/tests/queries/0_stateless/00119_storage_join.sql b/tests/queries/0_stateless/00119_storage_join.sql index 735ef2da16a..e1cc7a67588 100644 --- a/tests/queries/0_stateless/00119_storage_join.sql +++ b/tests/queries/0_stateless/00119_storage_join.sql @@ -11,4 +11,8 @@ SELECT s, x FROM (SELECT number AS k FROM system.numbers LIMIT 10) js1 ANY LEFT SELECT x, s, k FROM (SELECT number AS k FROM system.numbers LIMIT 10) js1 ANY LEFT JOIN join USING k; SELECT 1, x, 2, s, 3, k, 4 FROM (SELECT number AS k FROM system.numbers LIMIT 10) js1 ANY LEFT JOIN join USING k; +SELECT t1.k, t1.s, t2.x +FROM ( SELECT number AS k, 'a' AS s FROM numbers(2) GROUP BY number WITH TOTALS ) AS t1 +ANY LEFT JOIN join AS t2 USING(k); + DROP TABLE join; diff --git a/tests/queries/0_stateless/00120_join_and_group_by.reference b/tests/queries/0_stateless/00120_join_and_group_by.reference index 417dbb5b661..9b3afb80a8d 100644 --- a/tests/queries/0_stateless/00120_join_and_group_by.reference +++ b/tests/queries/0_stateless/00120_join_and_group_by.reference @@ -1,11 +1,11 
@@ 0 +4416930539393268817 1241149650 9 +4761183170873013810 4249604106 0 7766709361750702608 3902320246 4 +9624464864560415994 1298551497 3 10577349846663553072 1343103100 1 11700034558374135620 1618865725 8 -18198135717204167749 1996614413 2 -4761183170873013810 4249604106 0 12742043333840853032 1295823179 6 -9624464864560415994 1298551497 3 -4416930539393268817 1241149650 9 -15228578409069794350 2641603337 5 13365811232860260488 3844986530 7 +15228578409069794350 2641603337 5 +18198135717204167749 1996614413 2 diff --git a/tests/queries/0_stateless/00120_join_and_group_by.sql b/tests/queries/0_stateless/00120_join_and_group_by.sql index 21168443ee4..005e25a7127 100644 --- a/tests/queries/0_stateless/00120_join_and_group_by.sql +++ b/tests/queries/0_stateless/00120_join_and_group_by.sql @@ -3,4 +3,4 @@ SELECT value FROM system.one ANY LEFT JOIN (SELECT dummy, dummy AS value) js2 US SELECT value1, value2, sum(number) FROM (SELECT number, intHash64(number) AS value1 FROM system.numbers LIMIT 10) js1 ANY LEFT JOIN (SELECT number, intHash32(number) AS value2 FROM system.numbers LIMIT 10) js2 -USING number GROUP BY value1, value2; +USING number GROUP BY value1, value2 ORDER BY value1, value2; diff --git a/tests/queries/0_stateless/00189_time_zones.reference b/tests/queries/0_stateless/00189_time_zones_long.reference similarity index 97% rename from tests/queries/0_stateless/00189_time_zones.reference rename to tests/queries/0_stateless/00189_time_zones_long.reference index 664c30056de..df42e8f1b6e 100644 --- a/tests/queries/0_stateless/00189_time_zones.reference +++ b/tests/queries/0_stateless/00189_time_zones_long.reference @@ -148,9 +148,9 @@ toStartOfInterval 2019-02-05 00:00:00 2019-02-03 00:00:00 2019-02-06 22:00:00 -2019-02-06 21:00:00 -2019-02-06 21:00:00 -2019-02-06 03:00:00 +2019-02-06 22:00:00 +2019-02-06 18:00:00 +2019-02-06 00:00:00 2019-02-06 22:57:00 2019-02-06 22:56:00 2019-02-06 22:55:00 @@ -179,13 +179,13 @@ toRelativeYearNum 44 44 44 -44 +45 toRelativeMonthNum 536 536 536 537 -536 +537 toRelativeWeekNum 2335 2335 @@ -197,12 +197,13 @@ toRelativeDayNum 16343 16343 16344 -16343 +16344 toRelativeHourNum 392251 392251 392251 392251 +392252 toRelativeMinuteNum 23535110 23535110 diff --git a/tests/queries/0_stateless/00189_time_zones.sql b/tests/queries/0_stateless/00189_time_zones_long.sql similarity index 98% rename from tests/queries/0_stateless/00189_time_zones.sql rename to tests/queries/0_stateless/00189_time_zones_long.sql index a0ef5b59517..36c7dfb402a 100644 --- a/tests/queries/0_stateless/00189_time_zones.sql +++ b/tests/queries/0_stateless/00189_time_zones_long.sql @@ -277,7 +277,8 @@ SELECT toRelativeDayNum(toDateTime(1412106600), 'Europe/Moscow') - toRelativeDay SELECT toRelativeDayNum(toDateTime(1412106600), 'Europe/Paris') - toRelativeDayNum(toDateTime(0), 'Europe/Paris'); SELECT toRelativeDayNum(toDateTime(1412106600), 'Europe/London') - toRelativeDayNum(toDateTime(0), 'Europe/London'); SELECT toRelativeDayNum(toDateTime(1412106600), 'Asia/Tokyo') - toRelativeDayNum(toDateTime(0), 'Asia/Tokyo'); -SELECT toRelativeDayNum(toDateTime(1412106600), 'Pacific/Pitcairn') - toRelativeDayNum(toDateTime(0), 'Pacific/Pitcairn'); +-- NOTE: toRelativeDayNum(toDateTime(0), 'Pacific/Pitcairn') overflows from -1 to 65535 +SELECT toUInt16(toRelativeDayNum(toDateTime(1412106600), 'Pacific/Pitcairn') - toRelativeDayNum(toDateTime(0), 'Pacific/Pitcairn')); /* toRelativeHourNum */ @@ -286,7 +287,7 @@ SELECT toRelativeHourNum(toDateTime(1412106600), 'Europe/Moscow') - toRelativeHo SELECT 
toRelativeHourNum(toDateTime(1412106600), 'Europe/Paris') - toRelativeHourNum(toDateTime(0), 'Europe/Paris'); SELECT toRelativeHourNum(toDateTime(1412106600), 'Europe/London') - toRelativeHourNum(toDateTime(0), 'Europe/London'); SELECT toRelativeHourNum(toDateTime(1412106600), 'Asia/Tokyo') - toRelativeHourNum(toDateTime(0), 'Asia/Tokyo'); --- known wrong result: SELECT toRelativeHourNum(toDateTime(1412106600), 'Pacific/Pitcairn') - toRelativeHourNum(toDateTime(0), 'Pacific/Pitcairn'); +SELECT toRelativeHourNum(toDateTime(1412106600), 'Pacific/Pitcairn') - toRelativeHourNum(toDateTime(0), 'Pacific/Pitcairn'); /* toRelativeMinuteNum */ diff --git a/tests/queries/0_stateless/00206_empty_array_to_single.sql b/tests/queries/0_stateless/00206_empty_array_to_single.sql index 9724aa3fda7..0e3ff4f3537 100644 --- a/tests/queries/0_stateless/00206_empty_array_to_single.sql +++ b/tests/queries/0_stateless/00206_empty_array_to_single.sql @@ -1,5 +1,5 @@ SELECT emptyArrayToSingle(arrayFilter(x -> x != 99, arrayJoin([[1, 2], [99], [4, 5, 6]]))); -SELECT emptyArrayToSingle(emptyArrayString()), emptyArrayToSingle(emptyArrayDate()), emptyArrayToSingle(emptyArrayDateTime()); +SELECT emptyArrayToSingle(emptyArrayString()), emptyArrayToSingle(emptyArrayDate()), emptyArrayToSingle(arrayFilter(x -> 0, [now('Europe/Moscow')])); SELECT emptyArrayToSingle(range(number % 3)), diff --git a/tests/queries/0_stateless/00212_shard_aggregate_function_uniq.reference b/tests/queries/0_stateless/00212_long_shard_aggregate_function_uniq.reference similarity index 100% rename from tests/queries/0_stateless/00212_shard_aggregate_function_uniq.reference rename to tests/queries/0_stateless/00212_long_shard_aggregate_function_uniq.reference index 63686e2e352..33a2bb5437f 100644 --- a/tests/queries/0_stateless/00212_shard_aggregate_function_uniq.reference +++ b/tests/queries/0_stateless/00212_long_shard_aggregate_function_uniq.reference @@ -52,110 +52,110 @@ uniqHLL12 35 54328 36 52997 uniqHLL12 round(float) -0.125 1 -0.5 1 -0.05 1 -0.143 1 -0.056 1 -0.048 2 -0.083 1 -0.25 1 -0.1 1 -0.028 1 0.027 1 +0.028 1 0.031 1 -0.067 1 0.037 1 -0.045 161 -0.125 160 -0.5 164 -0.05 164 -0.143 162 -0.091 81 -0.056 163 -0.048 159 -0.083 158 -0.25 165 -1 159 -0.1 164 -0.028 160 +0.048 2 +0.05 1 +0.056 1 +0.067 1 +0.083 1 +0.1 1 +0.125 1 +0.143 1 +0.25 1 +0.5 1 0.027 161 +0.028 160 0.031 164 -0.067 160 -0.043 159 0.037 160 +0.043 159 +0.045 161 +0.048 159 +0.05 164 +0.056 163 +0.067 160 0.071 161 +0.083 158 +0.091 81 +0.1 164 +0.125 160 +0.143 162 +0.25 165 +0.5 164 +1 159 +0.027 52997 +0.028 54328 +0.031 54151 +0.037 53394 +0.043 54620 0.045 54268 -0.125 54011 -0.5 55013 -0.05 55115 -0.143 52353 -0.091 26870 -0.056 55227 0.048 54370 +0.05 55115 +0.056 55227 +0.067 53396 +0.071 53951 0.083 54554 +0.091 26870 +0.1 54138 +0.125 54011 +0.143 52353 0.25 52912 +0.5 55013 1 54571 -0.1 54138 -0.028 54328 -0.027 52997 -0.031 54151 -0.067 53396 -0.043 54620 -0.037 53394 -0.071 53951 uniqHLL12 round(toFloat32()) -0.5 1 -0.05 1 -0.25 1 -0.048 2 -0.083 1 -0.125 1 -0.031 1 -0.143 1 -0.028 1 -0.067 1 0.027 1 -0.056 1 +0.028 1 +0.031 1 0.037 1 +0.048 2 +0.05 1 +0.056 1 +0.067 1 +0.083 1 0.1 1 -0.5 164 -0.05 164 -0.25 165 -0.048 159 -0.091 81 +0.125 1 +0.143 1 +0.25 1 +0.5 1 +0.027 161 +0.028 160 +0.031 164 +0.037 160 0.043 159 +0.045 161 +0.048 159 +0.05 164 +0.056 163 +0.067 160 0.071 161 0.083 158 -0.125 160 -0.031 164 -0.143 162 -0.028 160 -0.067 160 -0.045 161 -0.027 161 -0.056 163 -0.037 160 +0.091 81 0.1 164 +0.125 160 +0.143 162 +0.25 165 +0.5 164 1 159 
-0.5 55013 -0.05 55115 -0.25 52912 -0.048 54370 -0.091 26870 +0.027 52997 +0.028 54328 +0.031 54151 +0.037 53394 0.043 54620 +0.045 54268 +0.048 54370 +0.05 55115 +0.056 55227 +0.067 53396 0.071 53951 0.083 54554 -0.125 54011 -0.031 54151 -0.143 52353 -0.028 54328 -0.067 53396 -0.045 54268 -0.027 52997 -0.056 55227 -0.037 53394 +0.091 26870 0.1 54138 +0.125 54011 +0.143 52353 +0.25 52912 +0.5 55013 1 54571 uniqHLL12 IPv4NumToString 1 1 @@ -425,428 +425,428 @@ uniqCombined(20) 35 54054 36 54054 uniqCombined(round(float)) -0.125 1 -0.5 1 -0.05 1 -0.143 1 -0.056 1 -0.048 2 -0.083 1 -0.25 1 -0.1 1 -0.028 1 0.027 1 +0.028 1 0.031 1 -0.067 1 0.037 1 -0.045 162 -0.125 163 -0.5 162 -0.05 162 -0.143 162 -0.091 81 -0.056 162 -0.048 162 -0.083 163 -0.25 162 -1 162 -0.1 163 -0.028 162 +0.048 2 +0.05 1 +0.056 1 +0.067 1 +0.083 1 +0.1 1 +0.125 1 +0.143 1 +0.25 1 +0.5 1 0.027 162 +0.028 162 0.031 162 -0.067 162 -0.043 162 0.037 162 +0.043 162 +0.045 162 +0.048 162 +0.05 162 +0.056 162 +0.067 162 0.071 162 -0.045 54117 -0.125 54213 -0.5 54056 -0.05 53923 -0.143 54129 -0.091 26975 -0.056 54129 -0.048 53958 -0.083 54064 -0.25 53999 -1 53901 -0.1 53853 -0.028 53931 +0.083 163 +0.091 81 +0.1 163 +0.125 163 +0.143 162 +0.25 162 +0.5 162 +1 162 0.027 53982 +0.028 53931 0.031 53948 -0.067 53997 -0.043 54150 0.037 54047 +0.043 54150 +0.045 54117 +0.048 53958 +0.05 53923 +0.056 54129 +0.067 53997 0.071 53963 +0.083 54064 +0.091 26975 +0.1 53853 +0.125 54213 +0.143 54129 +0.25 53999 +0.5 54056 +1 53901 uniqCombined(12)(round(float)) -0.125 1 -0.5 1 -0.05 1 -0.143 1 -0.056 1 -0.048 2 -0.083 1 -0.25 1 -0.1 1 -0.028 1 0.027 1 +0.028 1 0.031 1 -0.067 1 0.037 1 -0.045 162 -0.125 163 -0.5 162 -0.05 162 -0.143 162 -0.091 81 -0.056 162 -0.048 162 -0.083 163 -0.25 162 -1 162 -0.1 163 -0.028 162 +0.048 2 +0.05 1 +0.056 1 +0.067 1 +0.083 1 +0.1 1 +0.125 1 +0.143 1 +0.25 1 +0.5 1 0.027 162 +0.028 162 0.031 162 -0.067 162 -0.043 162 0.037 162 +0.043 162 +0.045 162 +0.048 162 +0.05 162 +0.056 162 +0.067 162 0.071 162 -0.045 53928 -0.125 52275 -0.5 53721 -0.05 54123 -0.143 54532 -0.091 26931 -0.056 55120 -0.048 53293 -0.083 54428 -0.25 53226 -1 54708 -0.1 53417 -0.028 54635 +0.083 163 +0.091 81 +0.1 163 +0.125 163 +0.143 162 +0.25 162 +0.5 162 +1 162 0.027 53155 +0.028 54635 0.031 53763 -0.067 53188 -0.043 53827 0.037 53920 +0.043 53827 +0.045 53928 +0.048 53293 +0.05 54123 +0.056 55120 +0.067 53188 0.071 53409 +0.083 54428 +0.091 26931 +0.1 53417 +0.125 52275 +0.143 54532 +0.25 53226 +0.5 53721 +1 54708 uniqCombined(17)(round(float)) -0.125 1 -0.5 1 -0.05 1 -0.143 1 -0.056 1 -0.048 2 -0.083 1 -0.25 1 -0.1 1 -0.028 1 0.027 1 +0.028 1 0.031 1 -0.067 1 0.037 1 -0.045 162 -0.125 163 -0.5 162 -0.05 162 -0.143 162 -0.091 81 -0.056 162 -0.048 162 -0.083 163 -0.25 162 -1 162 -0.1 163 -0.028 162 +0.048 2 +0.05 1 +0.056 1 +0.067 1 +0.083 1 +0.1 1 +0.125 1 +0.143 1 +0.25 1 +0.5 1 0.027 162 +0.028 162 0.031 162 -0.067 162 -0.043 162 0.037 162 +0.043 162 +0.045 162 +0.048 162 +0.05 162 +0.056 162 +0.067 162 0.071 162 +0.083 163 +0.091 81 +0.1 163 +0.125 163 +0.143 162 +0.25 162 +0.5 162 +1 162 +0.027 53982 +0.028 53931 +0.031 53948 +0.037 54047 +0.043 54150 0.045 54117 -0.125 54213 -0.5 54056 -0.05 53923 -0.143 54129 -0.091 26975 -0.056 54129 0.048 53958 +0.05 53923 +0.056 54129 +0.067 53997 +0.071 53963 0.083 54064 +0.091 26975 +0.1 53853 +0.125 54213 +0.143 54129 0.25 53999 +0.5 54056 1 53901 -0.1 53853 -0.028 53931 -0.027 53982 -0.031 53948 -0.067 53997 -0.043 54150 -0.037 54047 -0.071 53963 uniqCombined(20)(round(float)) -0.125 1 -0.5 1 
-0.05 1 -0.143 1 -0.056 1 -0.048 2 -0.083 1 -0.25 1 -0.1 1 -0.028 1 0.027 1 +0.028 1 0.031 1 -0.067 1 0.037 1 -0.045 162 -0.125 163 -0.5 162 -0.05 162 -0.143 162 -0.091 81 -0.056 162 -0.048 162 -0.083 163 -0.25 162 -1 162 -0.1 163 -0.028 162 +0.048 2 +0.05 1 +0.056 1 +0.067 1 +0.083 1 +0.1 1 +0.125 1 +0.143 1 +0.25 1 +0.5 1 0.027 162 +0.028 162 0.031 162 -0.067 162 -0.043 162 0.037 162 +0.043 162 +0.045 162 +0.048 162 +0.05 162 +0.056 162 +0.067 162 0.071 162 -0.045 54054 -0.125 54053 -0.5 54054 -0.05 54053 -0.143 54054 -0.091 27027 -0.056 54054 -0.048 54053 -0.083 54055 -0.25 54054 -1 54054 -0.1 54053 -0.028 54054 +0.083 163 +0.091 81 +0.1 163 +0.125 163 +0.143 162 +0.25 162 +0.5 162 +1 162 0.027 54054 +0.028 54054 0.031 54054 -0.067 54054 -0.043 54053 0.037 54053 +0.043 54053 +0.045 54054 +0.048 54053 +0.05 54053 +0.056 54054 +0.067 54054 0.071 54054 +0.083 54055 +0.091 27027 +0.1 54053 +0.125 54053 +0.143 54054 +0.25 54054 +0.5 54054 +1 54054 uniqCombined(X)(round(toFloat32())) -0.5 1 -0.05 1 -0.25 1 -0.048 2 -0.083 1 -0.125 1 -0.031 1 -0.143 1 -0.028 1 -0.067 1 0.027 1 -0.056 1 +0.028 1 +0.031 1 0.037 1 +0.048 2 +0.05 1 +0.056 1 +0.067 1 +0.083 1 0.1 1 -0.5 162 -0.05 162 -0.25 162 -0.048 162 -0.091 81 +0.125 1 +0.143 1 +0.25 1 +0.5 1 +0.027 162 +0.028 162 +0.031 162 +0.037 162 0.043 162 +0.045 162 +0.048 162 +0.05 162 +0.056 162 +0.067 162 0.071 162 0.083 163 -0.125 163 -0.031 162 -0.143 162 -0.028 162 -0.067 162 -0.045 162 -0.027 162 -0.056 162 -0.037 162 +0.091 81 0.1 163 +0.125 163 +0.143 162 +0.25 162 +0.5 162 1 162 -0.5 54056 -0.05 53923 -0.25 53999 -0.048 53958 -0.091 26975 +0.027 53982 +0.028 53931 +0.031 53948 +0.037 54047 0.043 54150 +0.045 54117 +0.048 53958 +0.05 53923 +0.056 54129 +0.067 53997 0.071 53963 0.083 54064 -0.125 54213 -0.031 53948 -0.143 54129 -0.028 53931 -0.067 53997 -0.045 54117 -0.027 53982 -0.056 54129 -0.037 54047 +0.091 26975 0.1 53853 +0.125 54213 +0.143 54129 +0.25 53999 +0.5 54056 1 53901 uniqCombined(12)(round(toFloat32())) -0.5 1 -0.05 1 -0.25 1 -0.048 2 -0.083 1 -0.125 1 -0.031 1 -0.143 1 -0.028 1 -0.067 1 0.027 1 -0.056 1 +0.028 1 +0.031 1 0.037 1 +0.048 2 +0.05 1 +0.056 1 +0.067 1 +0.083 1 0.1 1 -0.5 162 -0.05 162 -0.25 162 -0.048 162 -0.091 81 +0.125 1 +0.143 1 +0.25 1 +0.5 1 +0.027 162 +0.028 162 +0.031 162 +0.037 162 0.043 162 +0.045 162 +0.048 162 +0.05 162 +0.056 162 +0.067 162 0.071 162 0.083 163 -0.125 163 -0.031 162 -0.143 162 -0.028 162 -0.067 162 -0.045 162 -0.027 162 -0.056 162 -0.037 162 +0.091 81 0.1 163 +0.125 163 +0.143 162 +0.25 162 +0.5 162 1 162 -0.5 53721 -0.05 54123 -0.25 53226 -0.048 53293 -0.091 26931 +0.027 53155 +0.028 54635 +0.031 53763 +0.037 53920 0.043 53827 +0.045 53928 +0.048 53293 +0.05 54123 +0.056 55120 +0.067 53188 0.071 53409 0.083 54428 -0.125 52275 -0.031 53763 -0.143 54532 -0.028 54635 -0.067 53188 -0.045 53928 -0.027 53155 -0.056 55120 -0.037 53920 +0.091 26931 0.1 53417 +0.125 52275 +0.143 54532 +0.25 53226 +0.5 53721 1 54708 uniqCombined(17)(round(toFloat32())) -0.5 1 -0.05 1 -0.25 1 -0.048 2 -0.083 1 -0.125 1 -0.031 1 -0.143 1 -0.028 1 -0.067 1 0.027 1 -0.056 1 +0.028 1 +0.031 1 0.037 1 +0.048 2 +0.05 1 +0.056 1 +0.067 1 +0.083 1 0.1 1 -0.5 162 -0.05 162 -0.25 162 -0.048 162 -0.091 81 +0.125 1 +0.143 1 +0.25 1 +0.5 1 +0.027 162 +0.028 162 +0.031 162 +0.037 162 0.043 162 +0.045 162 +0.048 162 +0.05 162 +0.056 162 +0.067 162 0.071 162 0.083 163 -0.125 163 -0.031 162 -0.143 162 -0.028 162 -0.067 162 -0.045 162 -0.027 162 -0.056 162 -0.037 162 +0.091 81 0.1 163 +0.125 163 +0.143 162 +0.25 162 +0.5 162 1 162 
-0.5 54056 -0.05 53923 -0.25 53999 -0.048 53958 -0.091 26975 +0.027 53982 +0.028 53931 +0.031 53948 +0.037 54047 0.043 54150 +0.045 54117 +0.048 53958 +0.05 53923 +0.056 54129 +0.067 53997 0.071 53963 0.083 54064 -0.125 54213 -0.031 53948 -0.143 54129 -0.028 53931 -0.067 53997 -0.045 54117 -0.027 53982 -0.056 54129 -0.037 54047 +0.091 26975 0.1 53853 +0.125 54213 +0.143 54129 +0.25 53999 +0.5 54056 1 53901 uniqCombined(20)(round(toFloat32())) -0.5 1 -0.05 1 -0.25 1 -0.048 2 -0.083 1 -0.125 1 -0.031 1 -0.143 1 -0.028 1 -0.067 1 0.027 1 -0.056 1 +0.028 1 +0.031 1 0.037 1 +0.048 2 +0.05 1 +0.056 1 +0.067 1 +0.083 1 0.1 1 -0.5 162 -0.05 162 -0.25 162 -0.048 162 -0.091 81 +0.125 1 +0.143 1 +0.25 1 +0.5 1 +0.027 162 +0.028 162 +0.031 162 +0.037 162 0.043 162 +0.045 162 +0.048 162 +0.05 162 +0.056 162 +0.067 162 0.071 162 0.083 163 -0.125 163 -0.031 162 -0.143 162 -0.028 162 -0.067 162 -0.045 162 -0.027 162 -0.056 162 -0.037 162 +0.091 81 0.1 163 +0.125 163 +0.143 162 +0.25 162 +0.5 162 1 162 -0.5 54054 -0.05 54053 -0.25 54054 -0.048 54053 -0.091 27027 +0.027 54054 +0.028 54054 +0.031 54054 +0.037 54053 0.043 54053 +0.045 54054 +0.048 54053 +0.05 54053 +0.056 54054 +0.067 54054 0.071 54054 0.083 54055 -0.125 54053 -0.031 54054 -0.143 54054 -0.028 54054 -0.067 54054 -0.045 54054 -0.027 54054 -0.056 54054 -0.037 54053 +0.091 27027 0.1 54053 +0.125 54053 +0.143 54054 +0.25 54054 +0.5 54054 1 54054 uniqCombined(Z)(IPv4NumToString) 1 1 diff --git a/tests/queries/0_stateless/00212_shard_aggregate_function_uniq.sql b/tests/queries/0_stateless/00212_long_shard_aggregate_function_uniq.sql similarity index 75% rename from tests/queries/0_stateless/00212_shard_aggregate_function_uniq.sql rename to tests/queries/0_stateless/00212_long_shard_aggregate_function_uniq.sql index afef71ae06d..f1b3c82fec3 100644 --- a/tests/queries/0_stateless/00212_shard_aggregate_function_uniq.sql +++ b/tests/queries/0_stateless/00212_long_shard_aggregate_function_uniq.sql @@ -2,27 +2,27 @@ SELECT 'uniqHLL12'; -SELECT Y, uniqHLL12(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 15) GROUP BY Y; -SELECT Y, uniqHLL12(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 3000) GROUP BY Y; -SELECT Y, uniqHLL12(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y; +SELECT Y, uniqHLL12(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 15) GROUP BY Y ORDER BY Y; +SELECT Y, uniqHLL12(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 3000) GROUP BY Y ORDER BY Y; +SELECT Y, uniqHLL12(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y ORDER BY Y; SELECT 'uniqHLL12 round(float)'; -SELECT Y, uniqHLL12(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 15) GROUP BY Y; -SELECT Y, uniqHLL12(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 3000) GROUP BY Y; -SELECT Y, uniqHLL12(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y; +SELECT Y, uniqHLL12(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 15) GROUP BY Y ORDER BY Y; +SELECT Y, uniqHLL12(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 3000) GROUP BY Y ORDER BY Y; +SELECT Y, 
uniqHLL12(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y ORDER BY Y; SELECT 'uniqHLL12 round(toFloat32())'; -SELECT Y, uniqHLL12(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 15) GROUP BY Y; -SELECT Y, uniqHLL12(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 3000) GROUP BY Y; -SELECT Y, uniqHLL12(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y; +SELECT Y, uniqHLL12(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 15) GROUP BY Y ORDER BY Y; +SELECT Y, uniqHLL12(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 3000) GROUP BY Y ORDER BY Y; +SELECT Y, uniqHLL12(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y ORDER BY Y; SELECT 'uniqHLL12 IPv4NumToString'; -SELECT Y, uniqHLL12(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 15) GROUP BY Y; -SELECT Y, uniqHLL12(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 3000) GROUP BY Y; -SELECT Y, uniqHLL12(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y; +SELECT Y, uniqHLL12(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 15) GROUP BY Y ORDER BY Y; +SELECT Y, uniqHLL12(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 3000) GROUP BY Y ORDER BY Y; +SELECT Y, uniqHLL12(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y ORDER BY Y; SELECT 'uniqHLL12 remote()'; @@ -32,99 +32,99 @@ SELECT uniqHLL12(dummy) FROM remote('127.0.0.{2,3}', system.one); SELECT 'uniqCombined'; -SELECT Y, uniqCombined(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 15) GROUP BY Y; -SELECT Y, uniqCombined(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 3000) GROUP BY Y; -SELECT Y, uniqCombined(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y; +SELECT Y, uniqCombined(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 15) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 3000) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y ORDER BY Y; SELECT 'uniqCombined(12)'; -SELECT Y, uniqCombined(12)(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 15) GROUP BY Y; -SELECT Y, uniqCombined(12)(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 3000) GROUP BY Y; -SELECT Y, uniqCombined(12)(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y; +SELECT Y, uniqCombined(12)(X) FROM (SELECT number AS X, 
(3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 15) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(12)(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 3000) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(12)(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y ORDER BY Y; SELECT 'uniqCombined(17)'; -SELECT Y, uniqCombined(17)(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 15) GROUP BY Y; -SELECT Y, uniqCombined(17)(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 3000) GROUP BY Y; -SELECT Y, uniqCombined(17)(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y; +SELECT Y, uniqCombined(17)(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 15) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(17)(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 3000) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(17)(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y ORDER BY Y; SELECT 'uniqCombined(20)'; -SELECT Y, uniqCombined(20)(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 15) GROUP BY Y; -SELECT Y, uniqCombined(20)(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 3000) GROUP BY Y; -SELECT Y, uniqCombined(20)(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y; +SELECT Y, uniqCombined(20)(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 15) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(20)(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 3000) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(20)(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y ORDER BY Y; SELECT 'uniqCombined(round(float))'; -SELECT Y, uniqCombined(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 15) GROUP BY Y; -SELECT Y, uniqCombined(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 3000) GROUP BY Y; -SELECT Y, uniqCombined(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y; +SELECT Y, uniqCombined(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 15) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 3000) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y ORDER BY Y; SELECT 'uniqCombined(12)(round(float))'; -SELECT Y, uniqCombined(12)(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 15) GROUP BY Y; -SELECT Y, uniqCombined(12)(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 3000) GROUP BY Y; -SELECT Y, uniqCombined(12)(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y; +SELECT Y, uniqCombined(12)(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 
7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 15) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(12)(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 3000) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(12)(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y ORDER BY Y; SELECT 'uniqCombined(17)(round(float))'; -SELECT Y, uniqCombined(17)(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 15) GROUP BY Y; -SELECT Y, uniqCombined(17)(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 3000) GROUP BY Y; -SELECT Y, uniqCombined(17)(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y; +SELECT Y, uniqCombined(17)(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 15) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(17)(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 3000) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(17)(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y ORDER BY Y; SELECT 'uniqCombined(20)(round(float))'; -SELECT Y, uniqCombined(20)(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 15) GROUP BY Y; -SELECT Y, uniqCombined(20)(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 3000) GROUP BY Y; -SELECT Y, uniqCombined(20)(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y; +SELECT Y, uniqCombined(20)(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 15) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(20)(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 3000) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(20)(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y ORDER BY Y; SELECT 'uniqCombined(X)(round(toFloat32()))'; -SELECT Y, uniqCombined(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 15) GROUP BY Y; -SELECT Y, uniqCombined(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 3000) GROUP BY Y; -SELECT Y, uniqCombined(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y; +SELECT Y, uniqCombined(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 15) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 3000) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y ORDER BY Y; SELECT 'uniqCombined(12)(round(toFloat32()))'; -SELECT Y, uniqCombined(12)(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 15) GROUP BY Y; -SELECT Y, 
uniqCombined(12)(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 3000) GROUP BY Y; -SELECT Y, uniqCombined(12)(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y; +SELECT Y, uniqCombined(12)(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 15) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(12)(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 3000) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(12)(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y ORDER BY Y; SELECT 'uniqCombined(17)(round(toFloat32()))'; -SELECT Y, uniqCombined(17)(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 15) GROUP BY Y; -SELECT Y, uniqCombined(17)(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 3000) GROUP BY Y; -SELECT Y, uniqCombined(17)(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y; +SELECT Y, uniqCombined(17)(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 15) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(17)(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 3000) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(17)(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y ORDER BY Y; SELECT 'uniqCombined(20)(round(toFloat32()))'; -SELECT Y, uniqCombined(20)(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 15) GROUP BY Y; -SELECT Y, uniqCombined(20)(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 3000) GROUP BY Y; -SELECT Y, uniqCombined(20)(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y; +SELECT Y, uniqCombined(20)(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 15) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(20)(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 3000) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(20)(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y ORDER BY Y; SELECT 'uniqCombined(Z)(IPv4NumToString)'; -SELECT Y, uniqCombined(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 15) GROUP BY Y; -SELECT Y, uniqCombined(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 3000) GROUP BY Y; -SELECT Y, uniqCombined(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y; +SELECT Y, uniqCombined(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM 
system.numbers LIMIT 15) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 3000) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y ORDER BY Y; SELECT 'uniqCombined(12)(IPv4NumToString)'; -SELECT Y, uniqCombined(12)(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 15) GROUP BY Y; -SELECT Y, uniqCombined(12)(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 3000) GROUP BY Y; -SELECT Y, uniqCombined(12)(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y; +SELECT Y, uniqCombined(12)(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 15) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(12)(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 3000) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(12)(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y ORDER BY Y; SELECT 'uniqCombined(17)(IPv4NumToString)'; -SELECT Y, uniqCombined(17)(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 15) GROUP BY Y; -SELECT Y, uniqCombined(17)(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 3000) GROUP BY Y; -SELECT Y, uniqCombined(17)(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y; +SELECT Y, uniqCombined(17)(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 15) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(17)(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 3000) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(17)(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y ORDER BY Y; SELECT 'uniqCombined(20)(IPv4NumToString)'; -SELECT Y, uniqCombined(20)(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 15) GROUP BY Y; -SELECT Y, uniqCombined(20)(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 3000) GROUP BY Y; -SELECT Y, uniqCombined(20)(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y; +SELECT Y, uniqCombined(20)(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 15) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(20)(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 3000) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(20)(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM 
system.numbers LIMIT 1000000) GROUP BY Y ORDER BY Y; SELECT 'uniqCombined remote()'; diff --git a/tests/queries/0_stateless/00232_format_readable_size.reference b/tests/queries/0_stateless/00232_format_readable_size.reference index 0f723e968d9..b9b9b467b50 100644 --- a/tests/queries/0_stateless/00232_format_readable_size.reference +++ b/tests/queries/0_stateless/00232_format_readable_size.reference @@ -20,51 +20,51 @@ 170.21 MiB 170.21 MiB 170.21 MiB 462.69 MiB 462.69 MiB 462.69 MiB 1.23 GiB 1.23 GiB 1.23 GiB -3.34 GiB 3.34 GiB -2.00 GiB -9.08 GiB 9.08 GiB -2.00 GiB -24.67 GiB 24.67 GiB -2.00 GiB -67.06 GiB 67.06 GiB -2.00 GiB -182.29 GiB 182.29 GiB -2.00 GiB -495.51 GiB 495.51 GiB -2.00 GiB -1.32 TiB 1.32 TiB -2.00 GiB -3.58 TiB 3.58 TiB -2.00 GiB -9.72 TiB 9.72 TiB -2.00 GiB -26.42 TiB 26.42 TiB -2.00 GiB -71.82 TiB 71.82 TiB -2.00 GiB -195.22 TiB 195.22 TiB -2.00 GiB -530.66 TiB 530.66 TiB -2.00 GiB -1.41 PiB 1.41 PiB -2.00 GiB -3.83 PiB 3.83 PiB -2.00 GiB -10.41 PiB 10.41 PiB -2.00 GiB -28.29 PiB 28.29 PiB -2.00 GiB -76.91 PiB 76.91 PiB -2.00 GiB -209.06 PiB 209.06 PiB -2.00 GiB -568.30 PiB 568.30 PiB -2.00 GiB -1.51 EiB 1.51 EiB -2.00 GiB -4.10 EiB 4.10 EiB -2.00 GiB -11.15 EiB 11.15 EiB -2.00 GiB -30.30 EiB 0.00 B -2.00 GiB -82.37 EiB 0.00 B -2.00 GiB -223.89 EiB 0.00 B -2.00 GiB -608.60 EiB 0.00 B -2.00 GiB -1.62 ZiB 0.00 B -2.00 GiB -4.39 ZiB 0.00 B -2.00 GiB -11.94 ZiB 0.00 B -2.00 GiB -32.45 ZiB 0.00 B -2.00 GiB -88.21 ZiB 0.00 B -2.00 GiB -239.77 ZiB 0.00 B -2.00 GiB -651.77 ZiB 0.00 B -2.00 GiB -1.73 YiB 0.00 B -2.00 GiB -4.70 YiB 0.00 B -2.00 GiB -12.78 YiB 0.00 B -2.00 GiB -34.75 YiB 0.00 B -2.00 GiB -94.46 YiB 0.00 B -2.00 GiB -256.78 YiB 0.00 B -2.00 GiB -698.00 YiB 0.00 B -2.00 GiB -1897.37 YiB 0.00 B -2.00 GiB -5157.59 YiB 0.00 B -2.00 GiB -14019.80 YiB 0.00 B -2.00 GiB -38109.75 YiB 0.00 B -2.00 GiB -103593.05 YiB 0.00 B -2.00 GiB -281595.11 YiB 0.00 B -2.00 GiB -765454.88 YiB 0.00 B -2.00 GiB +3.34 GiB 3.34 GiB 2.00 GiB +9.08 GiB 9.08 GiB 2.00 GiB +24.67 GiB 24.67 GiB 2.00 GiB +67.06 GiB 67.06 GiB 2.00 GiB +182.29 GiB 182.29 GiB 2.00 GiB +495.51 GiB 495.51 GiB 2.00 GiB +1.32 TiB 1.32 TiB 2.00 GiB +3.58 TiB 3.58 TiB 2.00 GiB +9.72 TiB 9.72 TiB 2.00 GiB +26.42 TiB 26.42 TiB 2.00 GiB +71.82 TiB 71.82 TiB 2.00 GiB +195.22 TiB 195.22 TiB 2.00 GiB +530.66 TiB 530.66 TiB 2.00 GiB +1.41 PiB 1.41 PiB 2.00 GiB +3.83 PiB 3.83 PiB 2.00 GiB +10.41 PiB 10.41 PiB 2.00 GiB +28.29 PiB 28.29 PiB 2.00 GiB +76.91 PiB 76.91 PiB 2.00 GiB +209.06 PiB 209.06 PiB 2.00 GiB +568.30 PiB 568.30 PiB 2.00 GiB +1.51 EiB 1.51 EiB 2.00 GiB +4.10 EiB 4.10 EiB 2.00 GiB +11.15 EiB 11.15 EiB 2.00 GiB +30.30 EiB 16.00 EiB 2.00 GiB +82.37 EiB 16.00 EiB 2.00 GiB +223.89 EiB 16.00 EiB 2.00 GiB +608.60 EiB 16.00 EiB 2.00 GiB +1.62 ZiB 16.00 EiB 2.00 GiB +4.39 ZiB 16.00 EiB 2.00 GiB +11.94 ZiB 16.00 EiB 2.00 GiB +32.45 ZiB 16.00 EiB 2.00 GiB +88.21 ZiB 16.00 EiB 2.00 GiB +239.77 ZiB 16.00 EiB 2.00 GiB +651.77 ZiB 16.00 EiB 2.00 GiB +1.73 YiB 16.00 EiB 2.00 GiB +4.70 YiB 16.00 EiB 2.00 GiB +12.78 YiB 16.00 EiB 2.00 GiB +34.75 YiB 16.00 EiB 2.00 GiB +94.46 YiB 16.00 EiB 2.00 GiB +256.78 YiB 16.00 EiB 2.00 GiB +698.00 YiB 16.00 EiB 2.00 GiB +1897.37 YiB 16.00 EiB 2.00 GiB +5157.59 YiB 16.00 EiB 2.00 GiB +14019.80 YiB 16.00 EiB 2.00 GiB +38109.75 YiB 16.00 EiB 2.00 GiB +103593.05 YiB 16.00 EiB 2.00 GiB +281595.11 YiB 16.00 EiB 2.00 GiB +765454.88 YiB 16.00 EiB 2.00 GiB diff --git a/tests/queries/0_stateless/00232_format_readable_size.sql b/tests/queries/0_stateless/00232_format_readable_size.sql index 
952ee82b81a..e96f7ebeb20 100644 --- a/tests/queries/0_stateless/00232_format_readable_size.sql +++ b/tests/queries/0_stateless/00232_format_readable_size.sql @@ -1,4 +1,4 @@ -WITH round(exp(number), 6) AS x, toUInt64(x) AS y, toInt32(x) AS z +WITH round(exp(number), 6) AS x, x > 0xFFFFFFFFFFFFFFFF ? 0xFFFFFFFFFFFFFFFF : toUInt64(x) AS y, x > 0x7FFFFFFF ? 0x7FFFFFFF : toInt32(x) AS z SELECT formatReadableSize(x), formatReadableSize(y), formatReadableSize(z) FROM system.numbers LIMIT 70; diff --git a/tests/queries/0_stateless/00273_quantiles.reference b/tests/queries/0_stateless/00273_quantiles.reference index 616e06841e4..aefb0084648 100644 --- a/tests/queries/0_stateless/00273_quantiles.reference +++ b/tests/queries/0_stateless/00273_quantiles.reference @@ -4,7 +4,7 @@ [500] [0,1,10,50,100,200,300,400,500,600,700,800,900,950,990,999,1000] [0,1,10,50,100,200,300,400,500,600,700,800,900,950,990,999,1000] -[0,0.50100005,9.51,49.55,99.6,199.7,299.8,399.9,500,600.1,700.2,800.3,900.4,950.45,990.49,999.499,1000] +[0,1,10,50,99.6,199.7,299.8,399.9,500,600.1,700.2,800.3,900.4,950,990,999,1000] [0,1,10,50,100,200,300,400,500,600,700,800,900,950,990,999,1000] 1 333334 [699144.2,835663,967429.2] [699999,833333,966666] 2 266667 [426549.5,536255.5,638957.6] [426665,533332,639999] diff --git a/tests/queries/0_stateless/00411_long_accurate_number_comparison_float.reference b/tests/queries/0_stateless/00411_long_accurate_number_comparison_float.reference index 53cdf1e9393..bc8e5e14552 100644 --- a/tests/queries/0_stateless/00411_long_accurate_number_comparison_float.reference +++ b/tests/queries/0_stateless/00411_long_accurate_number_comparison_float.reference @@ -1 +1,124 @@ -PASSED +0 0.000000000 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 +0 -1.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +0 1.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +0 18446744073709551616.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +0 9223372036854775808.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +0 -9223372036854775808.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +0 9223372036854775808.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +0 2251799813685248.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 
0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +0 4503599627370496.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +0 9007199254740991.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +0 9007199254740992.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +0 9007199254740992.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +0 9007199254740994.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +0 -9007199254740991.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +0 -9007199254740992.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +0 -9007199254740992.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +0 -9007199254740994.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +0 104.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +0 -4503599627370496.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +0 -0.500000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +0 0.500000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +0 -1.500000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +0 1.500000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 
0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +0 9007199254740992.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +0 2251799813685247.500000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +0 2251799813685248.500000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +0 1152921504606846976.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +0 -1152921504606846976.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +0 -9223372036854786048.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +0 9223372036854786048.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +-1 0.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +-1 -1.000000000 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 +-1 1.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +-1 18446744073709551616.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +-1 9223372036854775808.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +-1 -9223372036854775808.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +-1 9223372036854775808.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +-1 2251799813685248.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +-1 4503599627370496.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +-1 9007199254740991.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +-1 9007199254740992.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +-1 9007199254740992.000000000 0 1 1 
1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +-1 9007199254740994.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +-1 -9007199254740991.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +-1 -9007199254740992.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +-1 -9007199254740992.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +-1 -9007199254740994.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +-1 104.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +-1 -4503599627370496.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +-1 -0.500000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +-1 0.500000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +-1 -1.500000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +-1 1.500000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +-1 9007199254740992.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +-1 2251799813685247.500000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +-1 2251799813685248.500000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +-1 1152921504606846976.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +-1 -1152921504606846976.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +-1 -9223372036854786048.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +-1 9223372036854786048.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +1 0.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +1 -1.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +1 1.000000000 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 +1 18446744073709551616.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 
1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +1 9223372036854775808.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +1 -9223372036854775808.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +1 9223372036854775808.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +1 2251799813685248.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +1 4503599627370496.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +1 9007199254740991.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +1 9007199254740992.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +1 9007199254740992.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +1 9007199254740994.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +1 -9007199254740991.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +1 -9007199254740992.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +1 -9007199254740992.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +1 -9007199254740994.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +1 104.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 
0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +1 -4503599627370496.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +1 -0.500000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +1 0.500000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +1 -1.500000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +1 1.500000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +1 9007199254740992.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +1 2251799813685247.500000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +1 2251799813685248.500000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +1 1152921504606846976.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +1 -1152921504606846976.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +1 -9223372036854786048.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +1 9223372036854786048.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +18446744073709551615 0.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +18446744073709551615 -1.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +18446744073709551615 1.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +18446744073709551615 18446744073709551616.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +18446744073709551615 9223372036854775808.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +18446744073709551615 -9223372036854775808.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +18446744073709551615 
9223372036854775808.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0
+18446744073709551615 2251799813685248.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0
+18446744073709551615 4503599627370496.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0
+18446744073709551615 9007199254740991.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0
+18446744073709551615 9007199254740992.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0
+18446744073709551615 9007199254740992.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0
+18446744073709551615 9007199254740994.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0
+18446744073709551615 -9007199254740991.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0
+18446744073709551615 -9007199254740992.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0
+18446744073709551615 -9007199254740992.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0
+18446744073709551615 -9007199254740994.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0
+18446744073709551615 104.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0
+18446744073709551615 -4503599627370496.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0
+18446744073709551615 -0.500000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0
+18446744073709551615 0.500000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0
+18446744073709551615 -1.500000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0
+18446744073709551615 1.500000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0
+18446744073709551615 9007199254740992.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0
+18446744073709551615 2251799813685247.500000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0
+18446744073709551615 2251799813685248.500000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0
+18446744073709551615 1152921504606846976.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0
+18446744073709551615 -1152921504606846976.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0
+18446744073709551615 -9223372036854786048.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0
+18446744073709551615 9223372036854786048.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0
+9223372036854775808 0.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0
+9223372036854775808 -1.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0
+9223372036854775808 1.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0
+9223372036854775808 18446744073709551616.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1
diff --git a/tests/queries/0_stateless/00411_long_accurate_number_comparison_float.sql b/tests/queries/0_stateless/00411_long_accurate_number_comparison_float.sql
new file mode 100644
index 00000000000..16245c42a7a
--- /dev/null
+++ b/tests/queries/0_stateless/00411_long_accurate_number_comparison_float.sql
@@ -0,0 +1,127 @@
+-- The results are different from those in Python. That's why this file is generated and the reference is edited by hand instead of using the Python script.
+-- Example: in ClickHouse, 9223372036854775808.0 != 9223372036854775808.
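+-- A minimal standalone repro of the example above (an ad-hoc illustration, kept commented out so it is not part of the generated test matrix below):
+-- SELECT 9223372036854775808 = 9223372036854775808.0; -- returns 0 in ClickHouse
+-- In Python, 9223372036854775808 == 9223372036854775808.0 evaluates to True (2**63 is exactly representable as a float), hence the hand-edited reference.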
+ +SELECT '0', '0.000000000', 0 = 0.000000000, 0 != 0.000000000, 0 < 0.000000000, 0 <= 0.000000000, 0 > 0.000000000, 0 >= 0.000000000, 0.000000000 = 0, 0.000000000 != 0, 0.000000000 < 0, 0.000000000 <= 0, 0.000000000 > 0, 0.000000000 >= 0 , toUInt8(0) = 0.000000000, toUInt8(0) != 0.000000000, toUInt8(0) < 0.000000000, toUInt8(0) <= 0.000000000, toUInt8(0) > 0.000000000, toUInt8(0) >= 0.000000000, 0.000000000 = toUInt8(0), 0.000000000 != toUInt8(0), 0.000000000 < toUInt8(0), 0.000000000 <= toUInt8(0), 0.000000000 > toUInt8(0), 0.000000000 >= toUInt8(0) , toInt8(0) = 0.000000000, toInt8(0) != 0.000000000, toInt8(0) < 0.000000000, toInt8(0) <= 0.000000000, toInt8(0) > 0.000000000, toInt8(0) >= 0.000000000, 0.000000000 = toInt8(0), 0.000000000 != toInt8(0), 0.000000000 < toInt8(0), 0.000000000 <= toInt8(0), 0.000000000 > toInt8(0), 0.000000000 >= toInt8(0) , toUInt16(0) = 0.000000000, toUInt16(0) != 0.000000000, toUInt16(0) < 0.000000000, toUInt16(0) <= 0.000000000, toUInt16(0) > 0.000000000, toUInt16(0) >= 0.000000000, 0.000000000 = toUInt16(0), 0.000000000 != toUInt16(0), 0.000000000 < toUInt16(0), 0.000000000 <= toUInt16(0), 0.000000000 > toUInt16(0), 0.000000000 >= toUInt16(0) , toInt16(0) = 0.000000000, toInt16(0) != 0.000000000, toInt16(0) < 0.000000000, toInt16(0) <= 0.000000000, toInt16(0) > 0.000000000, toInt16(0) >= 0.000000000, 0.000000000 = toInt16(0), 0.000000000 != toInt16(0), 0.000000000 < toInt16(0), 0.000000000 <= toInt16(0), 0.000000000 > toInt16(0), 0.000000000 >= toInt16(0) , toUInt32(0) = 0.000000000, toUInt32(0) != 0.000000000, toUInt32(0) < 0.000000000, toUInt32(0) <= 0.000000000, toUInt32(0) > 0.000000000, toUInt32(0) >= 0.000000000, 0.000000000 = toUInt32(0), 0.000000000 != toUInt32(0), 0.000000000 < toUInt32(0), 0.000000000 <= toUInt32(0), 0.000000000 > toUInt32(0), 0.000000000 >= toUInt32(0) , toInt32(0) = 0.000000000, toInt32(0) != 0.000000000, toInt32(0) < 0.000000000, toInt32(0) <= 0.000000000, toInt32(0) > 0.000000000, toInt32(0) >= 0.000000000, 0.000000000 = toInt32(0), 0.000000000 != toInt32(0), 0.000000000 < toInt32(0), 0.000000000 <= toInt32(0), 0.000000000 > toInt32(0), 0.000000000 >= toInt32(0) , toUInt64(0) = 0.000000000, toUInt64(0) != 0.000000000, toUInt64(0) < 0.000000000, toUInt64(0) <= 0.000000000, toUInt64(0) > 0.000000000, toUInt64(0) >= 0.000000000, 0.000000000 = toUInt64(0), 0.000000000 != toUInt64(0), 0.000000000 < toUInt64(0), 0.000000000 <= toUInt64(0), 0.000000000 > toUInt64(0), 0.000000000 >= toUInt64(0) , toInt64(0) = 0.000000000, toInt64(0) != 0.000000000, toInt64(0) < 0.000000000, toInt64(0) <= 0.000000000, toInt64(0) > 0.000000000, toInt64(0) >= 0.000000000, 0.000000000 = toInt64(0), 0.000000000 != toInt64(0), 0.000000000 < toInt64(0), 0.000000000 <= toInt64(0), 0.000000000 > toInt64(0), 0.000000000 >= toInt64(0) ; +SELECT '0', '-1.000000000', 0 = -1.000000000, 0 != -1.000000000, 0 < -1.000000000, 0 <= -1.000000000, 0 > -1.000000000, 0 >= -1.000000000, -1.000000000 = 0, -1.000000000 != 0, -1.000000000 < 0, -1.000000000 <= 0, -1.000000000 > 0, -1.000000000 >= 0 , toUInt8(0) = -1.000000000, toUInt8(0) != -1.000000000, toUInt8(0) < -1.000000000, toUInt8(0) <= -1.000000000, toUInt8(0) > -1.000000000, toUInt8(0) >= -1.000000000, -1.000000000 = toUInt8(0), -1.000000000 != toUInt8(0), -1.000000000 < toUInt8(0), -1.000000000 <= toUInt8(0), -1.000000000 > toUInt8(0), -1.000000000 >= toUInt8(0) , toInt8(0) = -1.000000000, toInt8(0) != -1.000000000, toInt8(0) < -1.000000000, toInt8(0) <= -1.000000000, toInt8(0) > -1.000000000, toInt8(0) >= 
-1.000000000, -1.000000000 = toInt8(0), -1.000000000 != toInt8(0), -1.000000000 < toInt8(0), -1.000000000 <= toInt8(0), -1.000000000 > toInt8(0), -1.000000000 >= toInt8(0) , toUInt16(0) = -1.000000000, toUInt16(0) != -1.000000000, toUInt16(0) < -1.000000000, toUInt16(0) <= -1.000000000, toUInt16(0) > -1.000000000, toUInt16(0) >= -1.000000000, -1.000000000 = toUInt16(0), -1.000000000 != toUInt16(0), -1.000000000 < toUInt16(0), -1.000000000 <= toUInt16(0), -1.000000000 > toUInt16(0), -1.000000000 >= toUInt16(0) , toInt16(0) = -1.000000000, toInt16(0) != -1.000000000, toInt16(0) < -1.000000000, toInt16(0) <= -1.000000000, toInt16(0) > -1.000000000, toInt16(0) >= -1.000000000, -1.000000000 = toInt16(0), -1.000000000 != toInt16(0), -1.000000000 < toInt16(0), -1.000000000 <= toInt16(0), -1.000000000 > toInt16(0), -1.000000000 >= toInt16(0) , toUInt32(0) = -1.000000000, toUInt32(0) != -1.000000000, toUInt32(0) < -1.000000000, toUInt32(0) <= -1.000000000, toUInt32(0) > -1.000000000, toUInt32(0) >= -1.000000000, -1.000000000 = toUInt32(0), -1.000000000 != toUInt32(0), -1.000000000 < toUInt32(0), -1.000000000 <= toUInt32(0), -1.000000000 > toUInt32(0), -1.000000000 >= toUInt32(0) , toInt32(0) = -1.000000000, toInt32(0) != -1.000000000, toInt32(0) < -1.000000000, toInt32(0) <= -1.000000000, toInt32(0) > -1.000000000, toInt32(0) >= -1.000000000, -1.000000000 = toInt32(0), -1.000000000 != toInt32(0), -1.000000000 < toInt32(0), -1.000000000 <= toInt32(0), -1.000000000 > toInt32(0), -1.000000000 >= toInt32(0) , toUInt64(0) = -1.000000000, toUInt64(0) != -1.000000000, toUInt64(0) < -1.000000000, toUInt64(0) <= -1.000000000, toUInt64(0) > -1.000000000, toUInt64(0) >= -1.000000000, -1.000000000 = toUInt64(0), -1.000000000 != toUInt64(0), -1.000000000 < toUInt64(0), -1.000000000 <= toUInt64(0), -1.000000000 > toUInt64(0), -1.000000000 >= toUInt64(0) , toInt64(0) = -1.000000000, toInt64(0) != -1.000000000, toInt64(0) < -1.000000000, toInt64(0) <= -1.000000000, toInt64(0) > -1.000000000, toInt64(0) >= -1.000000000, -1.000000000 = toInt64(0), -1.000000000 != toInt64(0), -1.000000000 < toInt64(0), -1.000000000 <= toInt64(0), -1.000000000 > toInt64(0), -1.000000000 >= toInt64(0) ; +SELECT '0', '1.000000000', 0 = 1.000000000, 0 != 1.000000000, 0 < 1.000000000, 0 <= 1.000000000, 0 > 1.000000000, 0 >= 1.000000000, 1.000000000 = 0, 1.000000000 != 0, 1.000000000 < 0, 1.000000000 <= 0, 1.000000000 > 0, 1.000000000 >= 0 , toUInt8(0) = 1.000000000, toUInt8(0) != 1.000000000, toUInt8(0) < 1.000000000, toUInt8(0) <= 1.000000000, toUInt8(0) > 1.000000000, toUInt8(0) >= 1.000000000, 1.000000000 = toUInt8(0), 1.000000000 != toUInt8(0), 1.000000000 < toUInt8(0), 1.000000000 <= toUInt8(0), 1.000000000 > toUInt8(0), 1.000000000 >= toUInt8(0) , toInt8(0) = 1.000000000, toInt8(0) != 1.000000000, toInt8(0) < 1.000000000, toInt8(0) <= 1.000000000, toInt8(0) > 1.000000000, toInt8(0) >= 1.000000000, 1.000000000 = toInt8(0), 1.000000000 != toInt8(0), 1.000000000 < toInt8(0), 1.000000000 <= toInt8(0), 1.000000000 > toInt8(0), 1.000000000 >= toInt8(0) , toUInt16(0) = 1.000000000, toUInt16(0) != 1.000000000, toUInt16(0) < 1.000000000, toUInt16(0) <= 1.000000000, toUInt16(0) > 1.000000000, toUInt16(0) >= 1.000000000, 1.000000000 = toUInt16(0), 1.000000000 != toUInt16(0), 1.000000000 < toUInt16(0), 1.000000000 <= toUInt16(0), 1.000000000 > toUInt16(0), 1.000000000 >= toUInt16(0) , toInt16(0) = 1.000000000, toInt16(0) != 1.000000000, toInt16(0) < 1.000000000, toInt16(0) <= 1.000000000, toInt16(0) > 1.000000000, toInt16(0) >= 1.000000000, 
1.000000000 = toInt16(0), 1.000000000 != toInt16(0), 1.000000000 < toInt16(0), 1.000000000 <= toInt16(0), 1.000000000 > toInt16(0), 1.000000000 >= toInt16(0) , toUInt32(0) = 1.000000000, toUInt32(0) != 1.000000000, toUInt32(0) < 1.000000000, toUInt32(0) <= 1.000000000, toUInt32(0) > 1.000000000, toUInt32(0) >= 1.000000000, 1.000000000 = toUInt32(0), 1.000000000 != toUInt32(0), 1.000000000 < toUInt32(0), 1.000000000 <= toUInt32(0), 1.000000000 > toUInt32(0), 1.000000000 >= toUInt32(0) , toInt32(0) = 1.000000000, toInt32(0) != 1.000000000, toInt32(0) < 1.000000000, toInt32(0) <= 1.000000000, toInt32(0) > 1.000000000, toInt32(0) >= 1.000000000, 1.000000000 = toInt32(0), 1.000000000 != toInt32(0), 1.000000000 < toInt32(0), 1.000000000 <= toInt32(0), 1.000000000 > toInt32(0), 1.000000000 >= toInt32(0) , toUInt64(0) = 1.000000000, toUInt64(0) != 1.000000000, toUInt64(0) < 1.000000000, toUInt64(0) <= 1.000000000, toUInt64(0) > 1.000000000, toUInt64(0) >= 1.000000000, 1.000000000 = toUInt64(0), 1.000000000 != toUInt64(0), 1.000000000 < toUInt64(0), 1.000000000 <= toUInt64(0), 1.000000000 > toUInt64(0), 1.000000000 >= toUInt64(0) , toInt64(0) = 1.000000000, toInt64(0) != 1.000000000, toInt64(0) < 1.000000000, toInt64(0) <= 1.000000000, toInt64(0) > 1.000000000, toInt64(0) >= 1.000000000, 1.000000000 = toInt64(0), 1.000000000 != toInt64(0), 1.000000000 < toInt64(0), 1.000000000 <= toInt64(0), 1.000000000 > toInt64(0), 1.000000000 >= toInt64(0) ; +SELECT '0', '18446744073709551616.000000000', 0 = 18446744073709551616.000000000, 0 != 18446744073709551616.000000000, 0 < 18446744073709551616.000000000, 0 <= 18446744073709551616.000000000, 0 > 18446744073709551616.000000000, 0 >= 18446744073709551616.000000000, 18446744073709551616.000000000 = 0, 18446744073709551616.000000000 != 0, 18446744073709551616.000000000 < 0, 18446744073709551616.000000000 <= 0, 18446744073709551616.000000000 > 0, 18446744073709551616.000000000 >= 0 , toUInt8(0) = 18446744073709551616.000000000, toUInt8(0) != 18446744073709551616.000000000, toUInt8(0) < 18446744073709551616.000000000, toUInt8(0) <= 18446744073709551616.000000000, toUInt8(0) > 18446744073709551616.000000000, toUInt8(0) >= 18446744073709551616.000000000, 18446744073709551616.000000000 = toUInt8(0), 18446744073709551616.000000000 != toUInt8(0), 18446744073709551616.000000000 < toUInt8(0), 18446744073709551616.000000000 <= toUInt8(0), 18446744073709551616.000000000 > toUInt8(0), 18446744073709551616.000000000 >= toUInt8(0) , toInt8(0) = 18446744073709551616.000000000, toInt8(0) != 18446744073709551616.000000000, toInt8(0) < 18446744073709551616.000000000, toInt8(0) <= 18446744073709551616.000000000, toInt8(0) > 18446744073709551616.000000000, toInt8(0) >= 18446744073709551616.000000000, 18446744073709551616.000000000 = toInt8(0), 18446744073709551616.000000000 != toInt8(0), 18446744073709551616.000000000 < toInt8(0), 18446744073709551616.000000000 <= toInt8(0), 18446744073709551616.000000000 > toInt8(0), 18446744073709551616.000000000 >= toInt8(0) , toUInt16(0) = 18446744073709551616.000000000, toUInt16(0) != 18446744073709551616.000000000, toUInt16(0) < 18446744073709551616.000000000, toUInt16(0) <= 18446744073709551616.000000000, toUInt16(0) > 18446744073709551616.000000000, toUInt16(0) >= 18446744073709551616.000000000, 18446744073709551616.000000000 = toUInt16(0), 18446744073709551616.000000000 != toUInt16(0), 18446744073709551616.000000000 < toUInt16(0), 18446744073709551616.000000000 <= toUInt16(0), 18446744073709551616.000000000 > toUInt16(0), 
18446744073709551616.000000000 >= toUInt16(0) , toInt16(0) = 18446744073709551616.000000000, toInt16(0) != 18446744073709551616.000000000, toInt16(0) < 18446744073709551616.000000000, toInt16(0) <= 18446744073709551616.000000000, toInt16(0) > 18446744073709551616.000000000, toInt16(0) >= 18446744073709551616.000000000, 18446744073709551616.000000000 = toInt16(0), 18446744073709551616.000000000 != toInt16(0), 18446744073709551616.000000000 < toInt16(0), 18446744073709551616.000000000 <= toInt16(0), 18446744073709551616.000000000 > toInt16(0), 18446744073709551616.000000000 >= toInt16(0) , toUInt32(0) = 18446744073709551616.000000000, toUInt32(0) != 18446744073709551616.000000000, toUInt32(0) < 18446744073709551616.000000000, toUInt32(0) <= 18446744073709551616.000000000, toUInt32(0) > 18446744073709551616.000000000, toUInt32(0) >= 18446744073709551616.000000000, 18446744073709551616.000000000 = toUInt32(0), 18446744073709551616.000000000 != toUInt32(0), 18446744073709551616.000000000 < toUInt32(0), 18446744073709551616.000000000 <= toUInt32(0), 18446744073709551616.000000000 > toUInt32(0), 18446744073709551616.000000000 >= toUInt32(0) , toInt32(0) = 18446744073709551616.000000000, toInt32(0) != 18446744073709551616.000000000, toInt32(0) < 18446744073709551616.000000000, toInt32(0) <= 18446744073709551616.000000000, toInt32(0) > 18446744073709551616.000000000, toInt32(0) >= 18446744073709551616.000000000, 18446744073709551616.000000000 = toInt32(0), 18446744073709551616.000000000 != toInt32(0), 18446744073709551616.000000000 < toInt32(0), 18446744073709551616.000000000 <= toInt32(0), 18446744073709551616.000000000 > toInt32(0), 18446744073709551616.000000000 >= toInt32(0) , toUInt64(0) = 18446744073709551616.000000000, toUInt64(0) != 18446744073709551616.000000000, toUInt64(0) < 18446744073709551616.000000000, toUInt64(0) <= 18446744073709551616.000000000, toUInt64(0) > 18446744073709551616.000000000, toUInt64(0) >= 18446744073709551616.000000000, 18446744073709551616.000000000 = toUInt64(0), 18446744073709551616.000000000 != toUInt64(0), 18446744073709551616.000000000 < toUInt64(0), 18446744073709551616.000000000 <= toUInt64(0), 18446744073709551616.000000000 > toUInt64(0), 18446744073709551616.000000000 >= toUInt64(0) , toInt64(0) = 18446744073709551616.000000000, toInt64(0) != 18446744073709551616.000000000, toInt64(0) < 18446744073709551616.000000000, toInt64(0) <= 18446744073709551616.000000000, toInt64(0) > 18446744073709551616.000000000, toInt64(0) >= 18446744073709551616.000000000, 18446744073709551616.000000000 = toInt64(0), 18446744073709551616.000000000 != toInt64(0), 18446744073709551616.000000000 < toInt64(0), 18446744073709551616.000000000 <= toInt64(0), 18446744073709551616.000000000 > toInt64(0), 18446744073709551616.000000000 >= toInt64(0) ; +SELECT '0', '9223372036854775808.000000000', 0 = 9223372036854775808.000000000, 0 != 9223372036854775808.000000000, 0 < 9223372036854775808.000000000, 0 <= 9223372036854775808.000000000, 0 > 9223372036854775808.000000000, 0 >= 9223372036854775808.000000000, 9223372036854775808.000000000 = 0, 9223372036854775808.000000000 != 0, 9223372036854775808.000000000 < 0, 9223372036854775808.000000000 <= 0, 9223372036854775808.000000000 > 0, 9223372036854775808.000000000 >= 0 , toUInt8(0) = 9223372036854775808.000000000, toUInt8(0) != 9223372036854775808.000000000, toUInt8(0) < 9223372036854775808.000000000, toUInt8(0) <= 9223372036854775808.000000000, toUInt8(0) > 9223372036854775808.000000000, toUInt8(0) >= 9223372036854775808.000000000, 
9223372036854775808.000000000 = toUInt8(0), 9223372036854775808.000000000 != toUInt8(0), 9223372036854775808.000000000 < toUInt8(0), 9223372036854775808.000000000 <= toUInt8(0), 9223372036854775808.000000000 > toUInt8(0), 9223372036854775808.000000000 >= toUInt8(0) , toInt8(0) = 9223372036854775808.000000000, toInt8(0) != 9223372036854775808.000000000, toInt8(0) < 9223372036854775808.000000000, toInt8(0) <= 9223372036854775808.000000000, toInt8(0) > 9223372036854775808.000000000, toInt8(0) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt8(0), 9223372036854775808.000000000 != toInt8(0), 9223372036854775808.000000000 < toInt8(0), 9223372036854775808.000000000 <= toInt8(0), 9223372036854775808.000000000 > toInt8(0), 9223372036854775808.000000000 >= toInt8(0) , toUInt16(0) = 9223372036854775808.000000000, toUInt16(0) != 9223372036854775808.000000000, toUInt16(0) < 9223372036854775808.000000000, toUInt16(0) <= 9223372036854775808.000000000, toUInt16(0) > 9223372036854775808.000000000, toUInt16(0) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toUInt16(0), 9223372036854775808.000000000 != toUInt16(0), 9223372036854775808.000000000 < toUInt16(0), 9223372036854775808.000000000 <= toUInt16(0), 9223372036854775808.000000000 > toUInt16(0), 9223372036854775808.000000000 >= toUInt16(0) , toInt16(0) = 9223372036854775808.000000000, toInt16(0) != 9223372036854775808.000000000, toInt16(0) < 9223372036854775808.000000000, toInt16(0) <= 9223372036854775808.000000000, toInt16(0) > 9223372036854775808.000000000, toInt16(0) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt16(0), 9223372036854775808.000000000 != toInt16(0), 9223372036854775808.000000000 < toInt16(0), 9223372036854775808.000000000 <= toInt16(0), 9223372036854775808.000000000 > toInt16(0), 9223372036854775808.000000000 >= toInt16(0) , toUInt32(0) = 9223372036854775808.000000000, toUInt32(0) != 9223372036854775808.000000000, toUInt32(0) < 9223372036854775808.000000000, toUInt32(0) <= 9223372036854775808.000000000, toUInt32(0) > 9223372036854775808.000000000, toUInt32(0) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toUInt32(0), 9223372036854775808.000000000 != toUInt32(0), 9223372036854775808.000000000 < toUInt32(0), 9223372036854775808.000000000 <= toUInt32(0), 9223372036854775808.000000000 > toUInt32(0), 9223372036854775808.000000000 >= toUInt32(0) , toInt32(0) = 9223372036854775808.000000000, toInt32(0) != 9223372036854775808.000000000, toInt32(0) < 9223372036854775808.000000000, toInt32(0) <= 9223372036854775808.000000000, toInt32(0) > 9223372036854775808.000000000, toInt32(0) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt32(0), 9223372036854775808.000000000 != toInt32(0), 9223372036854775808.000000000 < toInt32(0), 9223372036854775808.000000000 <= toInt32(0), 9223372036854775808.000000000 > toInt32(0), 9223372036854775808.000000000 >= toInt32(0) , toUInt64(0) = 9223372036854775808.000000000, toUInt64(0) != 9223372036854775808.000000000, toUInt64(0) < 9223372036854775808.000000000, toUInt64(0) <= 9223372036854775808.000000000, toUInt64(0) > 9223372036854775808.000000000, toUInt64(0) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toUInt64(0), 9223372036854775808.000000000 != toUInt64(0), 9223372036854775808.000000000 < toUInt64(0), 9223372036854775808.000000000 <= toUInt64(0), 9223372036854775808.000000000 > toUInt64(0), 9223372036854775808.000000000 >= toUInt64(0) , toInt64(0) = 9223372036854775808.000000000, 
toInt64(0) != 9223372036854775808.000000000, toInt64(0) < 9223372036854775808.000000000, toInt64(0) <= 9223372036854775808.000000000, toInt64(0) > 9223372036854775808.000000000, toInt64(0) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt64(0), 9223372036854775808.000000000 != toInt64(0), 9223372036854775808.000000000 < toInt64(0), 9223372036854775808.000000000 <= toInt64(0), 9223372036854775808.000000000 > toInt64(0), 9223372036854775808.000000000 >= toInt64(0) ; +SELECT '0', '-9223372036854775808.000000000', 0 = -9223372036854775808.000000000, 0 != -9223372036854775808.000000000, 0 < -9223372036854775808.000000000, 0 <= -9223372036854775808.000000000, 0 > -9223372036854775808.000000000, 0 >= -9223372036854775808.000000000, -9223372036854775808.000000000 = 0, -9223372036854775808.000000000 != 0, -9223372036854775808.000000000 < 0, -9223372036854775808.000000000 <= 0, -9223372036854775808.000000000 > 0, -9223372036854775808.000000000 >= 0 , toUInt8(0) = -9223372036854775808.000000000, toUInt8(0) != -9223372036854775808.000000000, toUInt8(0) < -9223372036854775808.000000000, toUInt8(0) <= -9223372036854775808.000000000, toUInt8(0) > -9223372036854775808.000000000, toUInt8(0) >= -9223372036854775808.000000000, -9223372036854775808.000000000 = toUInt8(0), -9223372036854775808.000000000 != toUInt8(0), -9223372036854775808.000000000 < toUInt8(0), -9223372036854775808.000000000 <= toUInt8(0), -9223372036854775808.000000000 > toUInt8(0), -9223372036854775808.000000000 >= toUInt8(0) , toInt8(0) = -9223372036854775808.000000000, toInt8(0) != -9223372036854775808.000000000, toInt8(0) < -9223372036854775808.000000000, toInt8(0) <= -9223372036854775808.000000000, toInt8(0) > -9223372036854775808.000000000, toInt8(0) >= -9223372036854775808.000000000, -9223372036854775808.000000000 = toInt8(0), -9223372036854775808.000000000 != toInt8(0), -9223372036854775808.000000000 < toInt8(0), -9223372036854775808.000000000 <= toInt8(0), -9223372036854775808.000000000 > toInt8(0), -9223372036854775808.000000000 >= toInt8(0) , toUInt16(0) = -9223372036854775808.000000000, toUInt16(0) != -9223372036854775808.000000000, toUInt16(0) < -9223372036854775808.000000000, toUInt16(0) <= -9223372036854775808.000000000, toUInt16(0) > -9223372036854775808.000000000, toUInt16(0) >= -9223372036854775808.000000000, -9223372036854775808.000000000 = toUInt16(0), -9223372036854775808.000000000 != toUInt16(0), -9223372036854775808.000000000 < toUInt16(0), -9223372036854775808.000000000 <= toUInt16(0), -9223372036854775808.000000000 > toUInt16(0), -9223372036854775808.000000000 >= toUInt16(0) , toInt16(0) = -9223372036854775808.000000000, toInt16(0) != -9223372036854775808.000000000, toInt16(0) < -9223372036854775808.000000000, toInt16(0) <= -9223372036854775808.000000000, toInt16(0) > -9223372036854775808.000000000, toInt16(0) >= -9223372036854775808.000000000, -9223372036854775808.000000000 = toInt16(0), -9223372036854775808.000000000 != toInt16(0), -9223372036854775808.000000000 < toInt16(0), -9223372036854775808.000000000 <= toInt16(0), -9223372036854775808.000000000 > toInt16(0), -9223372036854775808.000000000 >= toInt16(0) , toUInt32(0) = -9223372036854775808.000000000, toUInt32(0) != -9223372036854775808.000000000, toUInt32(0) < -9223372036854775808.000000000, toUInt32(0) <= -9223372036854775808.000000000, toUInt32(0) > -9223372036854775808.000000000, toUInt32(0) >= -9223372036854775808.000000000, -9223372036854775808.000000000 = toUInt32(0), -9223372036854775808.000000000 != toUInt32(0), 
-9223372036854775808.000000000 < toUInt32(0), -9223372036854775808.000000000 <= toUInt32(0), -9223372036854775808.000000000 > toUInt32(0), -9223372036854775808.000000000 >= toUInt32(0) , toInt32(0) = -9223372036854775808.000000000, toInt32(0) != -9223372036854775808.000000000, toInt32(0) < -9223372036854775808.000000000, toInt32(0) <= -9223372036854775808.000000000, toInt32(0) > -9223372036854775808.000000000, toInt32(0) >= -9223372036854775808.000000000, -9223372036854775808.000000000 = toInt32(0), -9223372036854775808.000000000 != toInt32(0), -9223372036854775808.000000000 < toInt32(0), -9223372036854775808.000000000 <= toInt32(0), -9223372036854775808.000000000 > toInt32(0), -9223372036854775808.000000000 >= toInt32(0) , toUInt64(0) = -9223372036854775808.000000000, toUInt64(0) != -9223372036854775808.000000000, toUInt64(0) < -9223372036854775808.000000000, toUInt64(0) <= -9223372036854775808.000000000, toUInt64(0) > -9223372036854775808.000000000, toUInt64(0) >= -9223372036854775808.000000000, -9223372036854775808.000000000 = toUInt64(0), -9223372036854775808.000000000 != toUInt64(0), -9223372036854775808.000000000 < toUInt64(0), -9223372036854775808.000000000 <= toUInt64(0), -9223372036854775808.000000000 > toUInt64(0), -9223372036854775808.000000000 >= toUInt64(0) , toInt64(0) = -9223372036854775808.000000000, toInt64(0) != -9223372036854775808.000000000, toInt64(0) < -9223372036854775808.000000000, toInt64(0) <= -9223372036854775808.000000000, toInt64(0) > -9223372036854775808.000000000, toInt64(0) >= -9223372036854775808.000000000, -9223372036854775808.000000000 = toInt64(0), -9223372036854775808.000000000 != toInt64(0), -9223372036854775808.000000000 < toInt64(0), -9223372036854775808.000000000 <= toInt64(0), -9223372036854775808.000000000 > toInt64(0), -9223372036854775808.000000000 >= toInt64(0) ; +SELECT '0', '9223372036854775808.000000000', 0 = 9223372036854775808.000000000, 0 != 9223372036854775808.000000000, 0 < 9223372036854775808.000000000, 0 <= 9223372036854775808.000000000, 0 > 9223372036854775808.000000000, 0 >= 9223372036854775808.000000000, 9223372036854775808.000000000 = 0, 9223372036854775808.000000000 != 0, 9223372036854775808.000000000 < 0, 9223372036854775808.000000000 <= 0, 9223372036854775808.000000000 > 0, 9223372036854775808.000000000 >= 0 , toUInt8(0) = 9223372036854775808.000000000, toUInt8(0) != 9223372036854775808.000000000, toUInt8(0) < 9223372036854775808.000000000, toUInt8(0) <= 9223372036854775808.000000000, toUInt8(0) > 9223372036854775808.000000000, toUInt8(0) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toUInt8(0), 9223372036854775808.000000000 != toUInt8(0), 9223372036854775808.000000000 < toUInt8(0), 9223372036854775808.000000000 <= toUInt8(0), 9223372036854775808.000000000 > toUInt8(0), 9223372036854775808.000000000 >= toUInt8(0) , toInt8(0) = 9223372036854775808.000000000, toInt8(0) != 9223372036854775808.000000000, toInt8(0) < 9223372036854775808.000000000, toInt8(0) <= 9223372036854775808.000000000, toInt8(0) > 9223372036854775808.000000000, toInt8(0) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt8(0), 9223372036854775808.000000000 != toInt8(0), 9223372036854775808.000000000 < toInt8(0), 9223372036854775808.000000000 <= toInt8(0), 9223372036854775808.000000000 > toInt8(0), 9223372036854775808.000000000 >= toInt8(0) , toUInt16(0) = 9223372036854775808.000000000, toUInt16(0) != 9223372036854775808.000000000, toUInt16(0) < 9223372036854775808.000000000, toUInt16(0) <= 9223372036854775808.000000000, 
toUInt16(0) > 9223372036854775808.000000000, toUInt16(0) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toUInt16(0), 9223372036854775808.000000000 != toUInt16(0), 9223372036854775808.000000000 < toUInt16(0), 9223372036854775808.000000000 <= toUInt16(0), 9223372036854775808.000000000 > toUInt16(0), 9223372036854775808.000000000 >= toUInt16(0) , toInt16(0) = 9223372036854775808.000000000, toInt16(0) != 9223372036854775808.000000000, toInt16(0) < 9223372036854775808.000000000, toInt16(0) <= 9223372036854775808.000000000, toInt16(0) > 9223372036854775808.000000000, toInt16(0) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt16(0), 9223372036854775808.000000000 != toInt16(0), 9223372036854775808.000000000 < toInt16(0), 9223372036854775808.000000000 <= toInt16(0), 9223372036854775808.000000000 > toInt16(0), 9223372036854775808.000000000 >= toInt16(0) , toUInt32(0) = 9223372036854775808.000000000, toUInt32(0) != 9223372036854775808.000000000, toUInt32(0) < 9223372036854775808.000000000, toUInt32(0) <= 9223372036854775808.000000000, toUInt32(0) > 9223372036854775808.000000000, toUInt32(0) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toUInt32(0), 9223372036854775808.000000000 != toUInt32(0), 9223372036854775808.000000000 < toUInt32(0), 9223372036854775808.000000000 <= toUInt32(0), 9223372036854775808.000000000 > toUInt32(0), 9223372036854775808.000000000 >= toUInt32(0) , toInt32(0) = 9223372036854775808.000000000, toInt32(0) != 9223372036854775808.000000000, toInt32(0) < 9223372036854775808.000000000, toInt32(0) <= 9223372036854775808.000000000, toInt32(0) > 9223372036854775808.000000000, toInt32(0) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt32(0), 9223372036854775808.000000000 != toInt32(0), 9223372036854775808.000000000 < toInt32(0), 9223372036854775808.000000000 <= toInt32(0), 9223372036854775808.000000000 > toInt32(0), 9223372036854775808.000000000 >= toInt32(0) , toUInt64(0) = 9223372036854775808.000000000, toUInt64(0) != 9223372036854775808.000000000, toUInt64(0) < 9223372036854775808.000000000, toUInt64(0) <= 9223372036854775808.000000000, toUInt64(0) > 9223372036854775808.000000000, toUInt64(0) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toUInt64(0), 9223372036854775808.000000000 != toUInt64(0), 9223372036854775808.000000000 < toUInt64(0), 9223372036854775808.000000000 <= toUInt64(0), 9223372036854775808.000000000 > toUInt64(0), 9223372036854775808.000000000 >= toUInt64(0) , toInt64(0) = 9223372036854775808.000000000, toInt64(0) != 9223372036854775808.000000000, toInt64(0) < 9223372036854775808.000000000, toInt64(0) <= 9223372036854775808.000000000, toInt64(0) > 9223372036854775808.000000000, toInt64(0) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt64(0), 9223372036854775808.000000000 != toInt64(0), 9223372036854775808.000000000 < toInt64(0), 9223372036854775808.000000000 <= toInt64(0), 9223372036854775808.000000000 > toInt64(0), 9223372036854775808.000000000 >= toInt64(0) ; +SELECT '0', '2251799813685248.000000000', 0 = 2251799813685248.000000000, 0 != 2251799813685248.000000000, 0 < 2251799813685248.000000000, 0 <= 2251799813685248.000000000, 0 > 2251799813685248.000000000, 0 >= 2251799813685248.000000000, 2251799813685248.000000000 = 0, 2251799813685248.000000000 != 0, 2251799813685248.000000000 < 0, 2251799813685248.000000000 <= 0, 2251799813685248.000000000 > 0, 2251799813685248.000000000 >= 0 , toUInt8(0) = 2251799813685248.000000000, toUInt8(0) != 
2251799813685248.000000000, toUInt8(0) < 2251799813685248.000000000, toUInt8(0) <= 2251799813685248.000000000, toUInt8(0) > 2251799813685248.000000000, toUInt8(0) >= 2251799813685248.000000000, 2251799813685248.000000000 = toUInt8(0), 2251799813685248.000000000 != toUInt8(0), 2251799813685248.000000000 < toUInt8(0), 2251799813685248.000000000 <= toUInt8(0), 2251799813685248.000000000 > toUInt8(0), 2251799813685248.000000000 >= toUInt8(0) , toInt8(0) = 2251799813685248.000000000, toInt8(0) != 2251799813685248.000000000, toInt8(0) < 2251799813685248.000000000, toInt8(0) <= 2251799813685248.000000000, toInt8(0) > 2251799813685248.000000000, toInt8(0) >= 2251799813685248.000000000, 2251799813685248.000000000 = toInt8(0), 2251799813685248.000000000 != toInt8(0), 2251799813685248.000000000 < toInt8(0), 2251799813685248.000000000 <= toInt8(0), 2251799813685248.000000000 > toInt8(0), 2251799813685248.000000000 >= toInt8(0) , toUInt16(0) = 2251799813685248.000000000, toUInt16(0) != 2251799813685248.000000000, toUInt16(0) < 2251799813685248.000000000, toUInt16(0) <= 2251799813685248.000000000, toUInt16(0) > 2251799813685248.000000000, toUInt16(0) >= 2251799813685248.000000000, 2251799813685248.000000000 = toUInt16(0), 2251799813685248.000000000 != toUInt16(0), 2251799813685248.000000000 < toUInt16(0), 2251799813685248.000000000 <= toUInt16(0), 2251799813685248.000000000 > toUInt16(0), 2251799813685248.000000000 >= toUInt16(0) , toInt16(0) = 2251799813685248.000000000, toInt16(0) != 2251799813685248.000000000, toInt16(0) < 2251799813685248.000000000, toInt16(0) <= 2251799813685248.000000000, toInt16(0) > 2251799813685248.000000000, toInt16(0) >= 2251799813685248.000000000, 2251799813685248.000000000 = toInt16(0), 2251799813685248.000000000 != toInt16(0), 2251799813685248.000000000 < toInt16(0), 2251799813685248.000000000 <= toInt16(0), 2251799813685248.000000000 > toInt16(0), 2251799813685248.000000000 >= toInt16(0) , toUInt32(0) = 2251799813685248.000000000, toUInt32(0) != 2251799813685248.000000000, toUInt32(0) < 2251799813685248.000000000, toUInt32(0) <= 2251799813685248.000000000, toUInt32(0) > 2251799813685248.000000000, toUInt32(0) >= 2251799813685248.000000000, 2251799813685248.000000000 = toUInt32(0), 2251799813685248.000000000 != toUInt32(0), 2251799813685248.000000000 < toUInt32(0), 2251799813685248.000000000 <= toUInt32(0), 2251799813685248.000000000 > toUInt32(0), 2251799813685248.000000000 >= toUInt32(0) , toInt32(0) = 2251799813685248.000000000, toInt32(0) != 2251799813685248.000000000, toInt32(0) < 2251799813685248.000000000, toInt32(0) <= 2251799813685248.000000000, toInt32(0) > 2251799813685248.000000000, toInt32(0) >= 2251799813685248.000000000, 2251799813685248.000000000 = toInt32(0), 2251799813685248.000000000 != toInt32(0), 2251799813685248.000000000 < toInt32(0), 2251799813685248.000000000 <= toInt32(0), 2251799813685248.000000000 > toInt32(0), 2251799813685248.000000000 >= toInt32(0) , toUInt64(0) = 2251799813685248.000000000, toUInt64(0) != 2251799813685248.000000000, toUInt64(0) < 2251799813685248.000000000, toUInt64(0) <= 2251799813685248.000000000, toUInt64(0) > 2251799813685248.000000000, toUInt64(0) >= 2251799813685248.000000000, 2251799813685248.000000000 = toUInt64(0), 2251799813685248.000000000 != toUInt64(0), 2251799813685248.000000000 < toUInt64(0), 2251799813685248.000000000 <= toUInt64(0), 2251799813685248.000000000 > toUInt64(0), 2251799813685248.000000000 >= toUInt64(0) , toInt64(0) = 2251799813685248.000000000, toInt64(0) != 2251799813685248.000000000, 
toInt64(0) < 2251799813685248.000000000, toInt64(0) <= 2251799813685248.000000000, toInt64(0) > 2251799813685248.000000000, toInt64(0) >= 2251799813685248.000000000, 2251799813685248.000000000 = toInt64(0), 2251799813685248.000000000 != toInt64(0), 2251799813685248.000000000 < toInt64(0), 2251799813685248.000000000 <= toInt64(0), 2251799813685248.000000000 > toInt64(0), 2251799813685248.000000000 >= toInt64(0) ; +SELECT '0', '4503599627370496.000000000', 0 = 4503599627370496.000000000, 0 != 4503599627370496.000000000, 0 < 4503599627370496.000000000, 0 <= 4503599627370496.000000000, 0 > 4503599627370496.000000000, 0 >= 4503599627370496.000000000, 4503599627370496.000000000 = 0, 4503599627370496.000000000 != 0, 4503599627370496.000000000 < 0, 4503599627370496.000000000 <= 0, 4503599627370496.000000000 > 0, 4503599627370496.000000000 >= 0 , toUInt8(0) = 4503599627370496.000000000, toUInt8(0) != 4503599627370496.000000000, toUInt8(0) < 4503599627370496.000000000, toUInt8(0) <= 4503599627370496.000000000, toUInt8(0) > 4503599627370496.000000000, toUInt8(0) >= 4503599627370496.000000000, 4503599627370496.000000000 = toUInt8(0), 4503599627370496.000000000 != toUInt8(0), 4503599627370496.000000000 < toUInt8(0), 4503599627370496.000000000 <= toUInt8(0), 4503599627370496.000000000 > toUInt8(0), 4503599627370496.000000000 >= toUInt8(0) , toInt8(0) = 4503599627370496.000000000, toInt8(0) != 4503599627370496.000000000, toInt8(0) < 4503599627370496.000000000, toInt8(0) <= 4503599627370496.000000000, toInt8(0) > 4503599627370496.000000000, toInt8(0) >= 4503599627370496.000000000, 4503599627370496.000000000 = toInt8(0), 4503599627370496.000000000 != toInt8(0), 4503599627370496.000000000 < toInt8(0), 4503599627370496.000000000 <= toInt8(0), 4503599627370496.000000000 > toInt8(0), 4503599627370496.000000000 >= toInt8(0) , toUInt16(0) = 4503599627370496.000000000, toUInt16(0) != 4503599627370496.000000000, toUInt16(0) < 4503599627370496.000000000, toUInt16(0) <= 4503599627370496.000000000, toUInt16(0) > 4503599627370496.000000000, toUInt16(0) >= 4503599627370496.000000000, 4503599627370496.000000000 = toUInt16(0), 4503599627370496.000000000 != toUInt16(0), 4503599627370496.000000000 < toUInt16(0), 4503599627370496.000000000 <= toUInt16(0), 4503599627370496.000000000 > toUInt16(0), 4503599627370496.000000000 >= toUInt16(0) , toInt16(0) = 4503599627370496.000000000, toInt16(0) != 4503599627370496.000000000, toInt16(0) < 4503599627370496.000000000, toInt16(0) <= 4503599627370496.000000000, toInt16(0) > 4503599627370496.000000000, toInt16(0) >= 4503599627370496.000000000, 4503599627370496.000000000 = toInt16(0), 4503599627370496.000000000 != toInt16(0), 4503599627370496.000000000 < toInt16(0), 4503599627370496.000000000 <= toInt16(0), 4503599627370496.000000000 > toInt16(0), 4503599627370496.000000000 >= toInt16(0) , toUInt32(0) = 4503599627370496.000000000, toUInt32(0) != 4503599627370496.000000000, toUInt32(0) < 4503599627370496.000000000, toUInt32(0) <= 4503599627370496.000000000, toUInt32(0) > 4503599627370496.000000000, toUInt32(0) >= 4503599627370496.000000000, 4503599627370496.000000000 = toUInt32(0), 4503599627370496.000000000 != toUInt32(0), 4503599627370496.000000000 < toUInt32(0), 4503599627370496.000000000 <= toUInt32(0), 4503599627370496.000000000 > toUInt32(0), 4503599627370496.000000000 >= toUInt32(0) , toInt32(0) = 4503599627370496.000000000, toInt32(0) != 4503599627370496.000000000, toInt32(0) < 4503599627370496.000000000, toInt32(0) <= 4503599627370496.000000000, toInt32(0) > 
4503599627370496.000000000, toInt32(0) >= 4503599627370496.000000000, 4503599627370496.000000000 = toInt32(0), 4503599627370496.000000000 != toInt32(0), 4503599627370496.000000000 < toInt32(0), 4503599627370496.000000000 <= toInt32(0), 4503599627370496.000000000 > toInt32(0), 4503599627370496.000000000 >= toInt32(0) , toUInt64(0) = 4503599627370496.000000000, toUInt64(0) != 4503599627370496.000000000, toUInt64(0) < 4503599627370496.000000000, toUInt64(0) <= 4503599627370496.000000000, toUInt64(0) > 4503599627370496.000000000, toUInt64(0) >= 4503599627370496.000000000, 4503599627370496.000000000 = toUInt64(0), 4503599627370496.000000000 != toUInt64(0), 4503599627370496.000000000 < toUInt64(0), 4503599627370496.000000000 <= toUInt64(0), 4503599627370496.000000000 > toUInt64(0), 4503599627370496.000000000 >= toUInt64(0) , toInt64(0) = 4503599627370496.000000000, toInt64(0) != 4503599627370496.000000000, toInt64(0) < 4503599627370496.000000000, toInt64(0) <= 4503599627370496.000000000, toInt64(0) > 4503599627370496.000000000, toInt64(0) >= 4503599627370496.000000000, 4503599627370496.000000000 = toInt64(0), 4503599627370496.000000000 != toInt64(0), 4503599627370496.000000000 < toInt64(0), 4503599627370496.000000000 <= toInt64(0), 4503599627370496.000000000 > toInt64(0), 4503599627370496.000000000 >= toInt64(0) ; +SELECT '0', '9007199254740991.000000000', 0 = 9007199254740991.000000000, 0 != 9007199254740991.000000000, 0 < 9007199254740991.000000000, 0 <= 9007199254740991.000000000, 0 > 9007199254740991.000000000, 0 >= 9007199254740991.000000000, 9007199254740991.000000000 = 0, 9007199254740991.000000000 != 0, 9007199254740991.000000000 < 0, 9007199254740991.000000000 <= 0, 9007199254740991.000000000 > 0, 9007199254740991.000000000 >= 0 , toUInt8(0) = 9007199254740991.000000000, toUInt8(0) != 9007199254740991.000000000, toUInt8(0) < 9007199254740991.000000000, toUInt8(0) <= 9007199254740991.000000000, toUInt8(0) > 9007199254740991.000000000, toUInt8(0) >= 9007199254740991.000000000, 9007199254740991.000000000 = toUInt8(0), 9007199254740991.000000000 != toUInt8(0), 9007199254740991.000000000 < toUInt8(0), 9007199254740991.000000000 <= toUInt8(0), 9007199254740991.000000000 > toUInt8(0), 9007199254740991.000000000 >= toUInt8(0) , toInt8(0) = 9007199254740991.000000000, toInt8(0) != 9007199254740991.000000000, toInt8(0) < 9007199254740991.000000000, toInt8(0) <= 9007199254740991.000000000, toInt8(0) > 9007199254740991.000000000, toInt8(0) >= 9007199254740991.000000000, 9007199254740991.000000000 = toInt8(0), 9007199254740991.000000000 != toInt8(0), 9007199254740991.000000000 < toInt8(0), 9007199254740991.000000000 <= toInt8(0), 9007199254740991.000000000 > toInt8(0), 9007199254740991.000000000 >= toInt8(0) , toUInt16(0) = 9007199254740991.000000000, toUInt16(0) != 9007199254740991.000000000, toUInt16(0) < 9007199254740991.000000000, toUInt16(0) <= 9007199254740991.000000000, toUInt16(0) > 9007199254740991.000000000, toUInt16(0) >= 9007199254740991.000000000, 9007199254740991.000000000 = toUInt16(0), 9007199254740991.000000000 != toUInt16(0), 9007199254740991.000000000 < toUInt16(0), 9007199254740991.000000000 <= toUInt16(0), 9007199254740991.000000000 > toUInt16(0), 9007199254740991.000000000 >= toUInt16(0) , toInt16(0) = 9007199254740991.000000000, toInt16(0) != 9007199254740991.000000000, toInt16(0) < 9007199254740991.000000000, toInt16(0) <= 9007199254740991.000000000, toInt16(0) > 9007199254740991.000000000, toInt16(0) >= 9007199254740991.000000000, 9007199254740991.000000000 = toInt16(0), 
9007199254740991.000000000 != toInt16(0), 9007199254740991.000000000 < toInt16(0), 9007199254740991.000000000 <= toInt16(0), 9007199254740991.000000000 > toInt16(0), 9007199254740991.000000000 >= toInt16(0) , toUInt32(0) = 9007199254740991.000000000, toUInt32(0) != 9007199254740991.000000000, toUInt32(0) < 9007199254740991.000000000, toUInt32(0) <= 9007199254740991.000000000, toUInt32(0) > 9007199254740991.000000000, toUInt32(0) >= 9007199254740991.000000000, 9007199254740991.000000000 = toUInt32(0), 9007199254740991.000000000 != toUInt32(0), 9007199254740991.000000000 < toUInt32(0), 9007199254740991.000000000 <= toUInt32(0), 9007199254740991.000000000 > toUInt32(0), 9007199254740991.000000000 >= toUInt32(0) , toInt32(0) = 9007199254740991.000000000, toInt32(0) != 9007199254740991.000000000, toInt32(0) < 9007199254740991.000000000, toInt32(0) <= 9007199254740991.000000000, toInt32(0) > 9007199254740991.000000000, toInt32(0) >= 9007199254740991.000000000, 9007199254740991.000000000 = toInt32(0), 9007199254740991.000000000 != toInt32(0), 9007199254740991.000000000 < toInt32(0), 9007199254740991.000000000 <= toInt32(0), 9007199254740991.000000000 > toInt32(0), 9007199254740991.000000000 >= toInt32(0) , toUInt64(0) = 9007199254740991.000000000, toUInt64(0) != 9007199254740991.000000000, toUInt64(0) < 9007199254740991.000000000, toUInt64(0) <= 9007199254740991.000000000, toUInt64(0) > 9007199254740991.000000000, toUInt64(0) >= 9007199254740991.000000000, 9007199254740991.000000000 = toUInt64(0), 9007199254740991.000000000 != toUInt64(0), 9007199254740991.000000000 < toUInt64(0), 9007199254740991.000000000 <= toUInt64(0), 9007199254740991.000000000 > toUInt64(0), 9007199254740991.000000000 >= toUInt64(0) , toInt64(0) = 9007199254740991.000000000, toInt64(0) != 9007199254740991.000000000, toInt64(0) < 9007199254740991.000000000, toInt64(0) <= 9007199254740991.000000000, toInt64(0) > 9007199254740991.000000000, toInt64(0) >= 9007199254740991.000000000, 9007199254740991.000000000 = toInt64(0), 9007199254740991.000000000 != toInt64(0), 9007199254740991.000000000 < toInt64(0), 9007199254740991.000000000 <= toInt64(0), 9007199254740991.000000000 > toInt64(0), 9007199254740991.000000000 >= toInt64(0) ; +SELECT '0', '9007199254740992.000000000', 0 = 9007199254740992.000000000, 0 != 9007199254740992.000000000, 0 < 9007199254740992.000000000, 0 <= 9007199254740992.000000000, 0 > 9007199254740992.000000000, 0 >= 9007199254740992.000000000, 9007199254740992.000000000 = 0, 9007199254740992.000000000 != 0, 9007199254740992.000000000 < 0, 9007199254740992.000000000 <= 0, 9007199254740992.000000000 > 0, 9007199254740992.000000000 >= 0 , toUInt8(0) = 9007199254740992.000000000, toUInt8(0) != 9007199254740992.000000000, toUInt8(0) < 9007199254740992.000000000, toUInt8(0) <= 9007199254740992.000000000, toUInt8(0) > 9007199254740992.000000000, toUInt8(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt8(0), 9007199254740992.000000000 != toUInt8(0), 9007199254740992.000000000 < toUInt8(0), 9007199254740992.000000000 <= toUInt8(0), 9007199254740992.000000000 > toUInt8(0), 9007199254740992.000000000 >= toUInt8(0) , toInt8(0) = 9007199254740992.000000000, toInt8(0) != 9007199254740992.000000000, toInt8(0) < 9007199254740992.000000000, toInt8(0) <= 9007199254740992.000000000, toInt8(0) > 9007199254740992.000000000, toInt8(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt8(0), 9007199254740992.000000000 != toInt8(0), 9007199254740992.000000000 < toInt8(0), 9007199254740992.000000000 
<= toInt8(0), 9007199254740992.000000000 > toInt8(0), 9007199254740992.000000000 >= toInt8(0) , toUInt16(0) = 9007199254740992.000000000, toUInt16(0) != 9007199254740992.000000000, toUInt16(0) < 9007199254740992.000000000, toUInt16(0) <= 9007199254740992.000000000, toUInt16(0) > 9007199254740992.000000000, toUInt16(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt16(0), 9007199254740992.000000000 != toUInt16(0), 9007199254740992.000000000 < toUInt16(0), 9007199254740992.000000000 <= toUInt16(0), 9007199254740992.000000000 > toUInt16(0), 9007199254740992.000000000 >= toUInt16(0) , toInt16(0) = 9007199254740992.000000000, toInt16(0) != 9007199254740992.000000000, toInt16(0) < 9007199254740992.000000000, toInt16(0) <= 9007199254740992.000000000, toInt16(0) > 9007199254740992.000000000, toInt16(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt16(0), 9007199254740992.000000000 != toInt16(0), 9007199254740992.000000000 < toInt16(0), 9007199254740992.000000000 <= toInt16(0), 9007199254740992.000000000 > toInt16(0), 9007199254740992.000000000 >= toInt16(0) , toUInt32(0) = 9007199254740992.000000000, toUInt32(0) != 9007199254740992.000000000, toUInt32(0) < 9007199254740992.000000000, toUInt32(0) <= 9007199254740992.000000000, toUInt32(0) > 9007199254740992.000000000, toUInt32(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt32(0), 9007199254740992.000000000 != toUInt32(0), 9007199254740992.000000000 < toUInt32(0), 9007199254740992.000000000 <= toUInt32(0), 9007199254740992.000000000 > toUInt32(0), 9007199254740992.000000000 >= toUInt32(0) , toInt32(0) = 9007199254740992.000000000, toInt32(0) != 9007199254740992.000000000, toInt32(0) < 9007199254740992.000000000, toInt32(0) <= 9007199254740992.000000000, toInt32(0) > 9007199254740992.000000000, toInt32(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt32(0), 9007199254740992.000000000 != toInt32(0), 9007199254740992.000000000 < toInt32(0), 9007199254740992.000000000 <= toInt32(0), 9007199254740992.000000000 > toInt32(0), 9007199254740992.000000000 >= toInt32(0) , toUInt64(0) = 9007199254740992.000000000, toUInt64(0) != 9007199254740992.000000000, toUInt64(0) < 9007199254740992.000000000, toUInt64(0) <= 9007199254740992.000000000, toUInt64(0) > 9007199254740992.000000000, toUInt64(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt64(0), 9007199254740992.000000000 != toUInt64(0), 9007199254740992.000000000 < toUInt64(0), 9007199254740992.000000000 <= toUInt64(0), 9007199254740992.000000000 > toUInt64(0), 9007199254740992.000000000 >= toUInt64(0) , toInt64(0) = 9007199254740992.000000000, toInt64(0) != 9007199254740992.000000000, toInt64(0) < 9007199254740992.000000000, toInt64(0) <= 9007199254740992.000000000, toInt64(0) > 9007199254740992.000000000, toInt64(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt64(0), 9007199254740992.000000000 != toInt64(0), 9007199254740992.000000000 < toInt64(0), 9007199254740992.000000000 <= toInt64(0), 9007199254740992.000000000 > toInt64(0), 9007199254740992.000000000 >= toInt64(0) ; +SELECT '0', '9007199254740992.000000000', 0 = 9007199254740992.000000000, 0 != 9007199254740992.000000000, 0 < 9007199254740992.000000000, 0 <= 9007199254740992.000000000, 0 > 9007199254740992.000000000, 0 >= 9007199254740992.000000000, 9007199254740992.000000000 = 0, 9007199254740992.000000000 != 0, 9007199254740992.000000000 < 0, 9007199254740992.000000000 <= 0, 9007199254740992.000000000 > 0, 9007199254740992.000000000 
>= 0 , toUInt8(0) = 9007199254740992.000000000, toUInt8(0) != 9007199254740992.000000000, toUInt8(0) < 9007199254740992.000000000, toUInt8(0) <= 9007199254740992.000000000, toUInt8(0) > 9007199254740992.000000000, toUInt8(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt8(0), 9007199254740992.000000000 != toUInt8(0), 9007199254740992.000000000 < toUInt8(0), 9007199254740992.000000000 <= toUInt8(0), 9007199254740992.000000000 > toUInt8(0), 9007199254740992.000000000 >= toUInt8(0) , toInt8(0) = 9007199254740992.000000000, toInt8(0) != 9007199254740992.000000000, toInt8(0) < 9007199254740992.000000000, toInt8(0) <= 9007199254740992.000000000, toInt8(0) > 9007199254740992.000000000, toInt8(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt8(0), 9007199254740992.000000000 != toInt8(0), 9007199254740992.000000000 < toInt8(0), 9007199254740992.000000000 <= toInt8(0), 9007199254740992.000000000 > toInt8(0), 9007199254740992.000000000 >= toInt8(0) , toUInt16(0) = 9007199254740992.000000000, toUInt16(0) != 9007199254740992.000000000, toUInt16(0) < 9007199254740992.000000000, toUInt16(0) <= 9007199254740992.000000000, toUInt16(0) > 9007199254740992.000000000, toUInt16(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt16(0), 9007199254740992.000000000 != toUInt16(0), 9007199254740992.000000000 < toUInt16(0), 9007199254740992.000000000 <= toUInt16(0), 9007199254740992.000000000 > toUInt16(0), 9007199254740992.000000000 >= toUInt16(0) , toInt16(0) = 9007199254740992.000000000, toInt16(0) != 9007199254740992.000000000, toInt16(0) < 9007199254740992.000000000, toInt16(0) <= 9007199254740992.000000000, toInt16(0) > 9007199254740992.000000000, toInt16(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt16(0), 9007199254740992.000000000 != toInt16(0), 9007199254740992.000000000 < toInt16(0), 9007199254740992.000000000 <= toInt16(0), 9007199254740992.000000000 > toInt16(0), 9007199254740992.000000000 >= toInt16(0) , toUInt32(0) = 9007199254740992.000000000, toUInt32(0) != 9007199254740992.000000000, toUInt32(0) < 9007199254740992.000000000, toUInt32(0) <= 9007199254740992.000000000, toUInt32(0) > 9007199254740992.000000000, toUInt32(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt32(0), 9007199254740992.000000000 != toUInt32(0), 9007199254740992.000000000 < toUInt32(0), 9007199254740992.000000000 <= toUInt32(0), 9007199254740992.000000000 > toUInt32(0), 9007199254740992.000000000 >= toUInt32(0) , toInt32(0) = 9007199254740992.000000000, toInt32(0) != 9007199254740992.000000000, toInt32(0) < 9007199254740992.000000000, toInt32(0) <= 9007199254740992.000000000, toInt32(0) > 9007199254740992.000000000, toInt32(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt32(0), 9007199254740992.000000000 != toInt32(0), 9007199254740992.000000000 < toInt32(0), 9007199254740992.000000000 <= toInt32(0), 9007199254740992.000000000 > toInt32(0), 9007199254740992.000000000 >= toInt32(0) , toUInt64(0) = 9007199254740992.000000000, toUInt64(0) != 9007199254740992.000000000, toUInt64(0) < 9007199254740992.000000000, toUInt64(0) <= 9007199254740992.000000000, toUInt64(0) > 9007199254740992.000000000, toUInt64(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt64(0), 9007199254740992.000000000 != toUInt64(0), 9007199254740992.000000000 < toUInt64(0), 9007199254740992.000000000 <= toUInt64(0), 9007199254740992.000000000 > toUInt64(0), 9007199254740992.000000000 >= toUInt64(0) , toInt64(0) = 
9007199254740992.000000000, toInt64(0) != 9007199254740992.000000000, toInt64(0) < 9007199254740992.000000000, toInt64(0) <= 9007199254740992.000000000, toInt64(0) > 9007199254740992.000000000, toInt64(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt64(0), 9007199254740992.000000000 != toInt64(0), 9007199254740992.000000000 < toInt64(0), 9007199254740992.000000000 <= toInt64(0), 9007199254740992.000000000 > toInt64(0), 9007199254740992.000000000 >= toInt64(0) ; +SELECT '0', '9007199254740994.000000000', 0 = 9007199254740994.000000000, 0 != 9007199254740994.000000000, 0 < 9007199254740994.000000000, 0 <= 9007199254740994.000000000, 0 > 9007199254740994.000000000, 0 >= 9007199254740994.000000000, 9007199254740994.000000000 = 0, 9007199254740994.000000000 != 0, 9007199254740994.000000000 < 0, 9007199254740994.000000000 <= 0, 9007199254740994.000000000 > 0, 9007199254740994.000000000 >= 0 , toUInt8(0) = 9007199254740994.000000000, toUInt8(0) != 9007199254740994.000000000, toUInt8(0) < 9007199254740994.000000000, toUInt8(0) <= 9007199254740994.000000000, toUInt8(0) > 9007199254740994.000000000, toUInt8(0) >= 9007199254740994.000000000, 9007199254740994.000000000 = toUInt8(0), 9007199254740994.000000000 != toUInt8(0), 9007199254740994.000000000 < toUInt8(0), 9007199254740994.000000000 <= toUInt8(0), 9007199254740994.000000000 > toUInt8(0), 9007199254740994.000000000 >= toUInt8(0) , toInt8(0) = 9007199254740994.000000000, toInt8(0) != 9007199254740994.000000000, toInt8(0) < 9007199254740994.000000000, toInt8(0) <= 9007199254740994.000000000, toInt8(0) > 9007199254740994.000000000, toInt8(0) >= 9007199254740994.000000000, 9007199254740994.000000000 = toInt8(0), 9007199254740994.000000000 != toInt8(0), 9007199254740994.000000000 < toInt8(0), 9007199254740994.000000000 <= toInt8(0), 9007199254740994.000000000 > toInt8(0), 9007199254740994.000000000 >= toInt8(0) , toUInt16(0) = 9007199254740994.000000000, toUInt16(0) != 9007199254740994.000000000, toUInt16(0) < 9007199254740994.000000000, toUInt16(0) <= 9007199254740994.000000000, toUInt16(0) > 9007199254740994.000000000, toUInt16(0) >= 9007199254740994.000000000, 9007199254740994.000000000 = toUInt16(0), 9007199254740994.000000000 != toUInt16(0), 9007199254740994.000000000 < toUInt16(0), 9007199254740994.000000000 <= toUInt16(0), 9007199254740994.000000000 > toUInt16(0), 9007199254740994.000000000 >= toUInt16(0) , toInt16(0) = 9007199254740994.000000000, toInt16(0) != 9007199254740994.000000000, toInt16(0) < 9007199254740994.000000000, toInt16(0) <= 9007199254740994.000000000, toInt16(0) > 9007199254740994.000000000, toInt16(0) >= 9007199254740994.000000000, 9007199254740994.000000000 = toInt16(0), 9007199254740994.000000000 != toInt16(0), 9007199254740994.000000000 < toInt16(0), 9007199254740994.000000000 <= toInt16(0), 9007199254740994.000000000 > toInt16(0), 9007199254740994.000000000 >= toInt16(0) , toUInt32(0) = 9007199254740994.000000000, toUInt32(0) != 9007199254740994.000000000, toUInt32(0) < 9007199254740994.000000000, toUInt32(0) <= 9007199254740994.000000000, toUInt32(0) > 9007199254740994.000000000, toUInt32(0) >= 9007199254740994.000000000, 9007199254740994.000000000 = toUInt32(0), 9007199254740994.000000000 != toUInt32(0), 9007199254740994.000000000 < toUInt32(0), 9007199254740994.000000000 <= toUInt32(0), 9007199254740994.000000000 > toUInt32(0), 9007199254740994.000000000 >= toUInt32(0) , toInt32(0) = 9007199254740994.000000000, toInt32(0) != 9007199254740994.000000000, toInt32(0) < 9007199254740994.000000000, 
toInt32(0) <= 9007199254740994.000000000, toInt32(0) > 9007199254740994.000000000, toInt32(0) >= 9007199254740994.000000000, 9007199254740994.000000000 = toInt32(0), 9007199254740994.000000000 != toInt32(0), 9007199254740994.000000000 < toInt32(0), 9007199254740994.000000000 <= toInt32(0), 9007199254740994.000000000 > toInt32(0), 9007199254740994.000000000 >= toInt32(0) , toUInt64(0) = 9007199254740994.000000000, toUInt64(0) != 9007199254740994.000000000, toUInt64(0) < 9007199254740994.000000000, toUInt64(0) <= 9007199254740994.000000000, toUInt64(0) > 9007199254740994.000000000, toUInt64(0) >= 9007199254740994.000000000, 9007199254740994.000000000 = toUInt64(0), 9007199254740994.000000000 != toUInt64(0), 9007199254740994.000000000 < toUInt64(0), 9007199254740994.000000000 <= toUInt64(0), 9007199254740994.000000000 > toUInt64(0), 9007199254740994.000000000 >= toUInt64(0) , toInt64(0) = 9007199254740994.000000000, toInt64(0) != 9007199254740994.000000000, toInt64(0) < 9007199254740994.000000000, toInt64(0) <= 9007199254740994.000000000, toInt64(0) > 9007199254740994.000000000, toInt64(0) >= 9007199254740994.000000000, 9007199254740994.000000000 = toInt64(0), 9007199254740994.000000000 != toInt64(0), 9007199254740994.000000000 < toInt64(0), 9007199254740994.000000000 <= toInt64(0), 9007199254740994.000000000 > toInt64(0), 9007199254740994.000000000 >= toInt64(0) ; +SELECT '0', '-9007199254740991.000000000', 0 = -9007199254740991.000000000, 0 != -9007199254740991.000000000, 0 < -9007199254740991.000000000, 0 <= -9007199254740991.000000000, 0 > -9007199254740991.000000000, 0 >= -9007199254740991.000000000, -9007199254740991.000000000 = 0, -9007199254740991.000000000 != 0, -9007199254740991.000000000 < 0, -9007199254740991.000000000 <= 0, -9007199254740991.000000000 > 0, -9007199254740991.000000000 >= 0 , toUInt8(0) = -9007199254740991.000000000, toUInt8(0) != -9007199254740991.000000000, toUInt8(0) < -9007199254740991.000000000, toUInt8(0) <= -9007199254740991.000000000, toUInt8(0) > -9007199254740991.000000000, toUInt8(0) >= -9007199254740991.000000000, -9007199254740991.000000000 = toUInt8(0), -9007199254740991.000000000 != toUInt8(0), -9007199254740991.000000000 < toUInt8(0), -9007199254740991.000000000 <= toUInt8(0), -9007199254740991.000000000 > toUInt8(0), -9007199254740991.000000000 >= toUInt8(0) , toInt8(0) = -9007199254740991.000000000, toInt8(0) != -9007199254740991.000000000, toInt8(0) < -9007199254740991.000000000, toInt8(0) <= -9007199254740991.000000000, toInt8(0) > -9007199254740991.000000000, toInt8(0) >= -9007199254740991.000000000, -9007199254740991.000000000 = toInt8(0), -9007199254740991.000000000 != toInt8(0), -9007199254740991.000000000 < toInt8(0), -9007199254740991.000000000 <= toInt8(0), -9007199254740991.000000000 > toInt8(0), -9007199254740991.000000000 >= toInt8(0) , toUInt16(0) = -9007199254740991.000000000, toUInt16(0) != -9007199254740991.000000000, toUInt16(0) < -9007199254740991.000000000, toUInt16(0) <= -9007199254740991.000000000, toUInt16(0) > -9007199254740991.000000000, toUInt16(0) >= -9007199254740991.000000000, -9007199254740991.000000000 = toUInt16(0), -9007199254740991.000000000 != toUInt16(0), -9007199254740991.000000000 < toUInt16(0), -9007199254740991.000000000 <= toUInt16(0), -9007199254740991.000000000 > toUInt16(0), -9007199254740991.000000000 >= toUInt16(0) , toInt16(0) = -9007199254740991.000000000, toInt16(0) != -9007199254740991.000000000, toInt16(0) < -9007199254740991.000000000, toInt16(0) <= -9007199254740991.000000000, toInt16(0) > 
-9007199254740991.000000000, toInt16(0) >= -9007199254740991.000000000, -9007199254740991.000000000 = toInt16(0), -9007199254740991.000000000 != toInt16(0), -9007199254740991.000000000 < toInt16(0), -9007199254740991.000000000 <= toInt16(0), -9007199254740991.000000000 > toInt16(0), -9007199254740991.000000000 >= toInt16(0) , toUInt32(0) = -9007199254740991.000000000, toUInt32(0) != -9007199254740991.000000000, toUInt32(0) < -9007199254740991.000000000, toUInt32(0) <= -9007199254740991.000000000, toUInt32(0) > -9007199254740991.000000000, toUInt32(0) >= -9007199254740991.000000000, -9007199254740991.000000000 = toUInt32(0), -9007199254740991.000000000 != toUInt32(0), -9007199254740991.000000000 < toUInt32(0), -9007199254740991.000000000 <= toUInt32(0), -9007199254740991.000000000 > toUInt32(0), -9007199254740991.000000000 >= toUInt32(0) , toInt32(0) = -9007199254740991.000000000, toInt32(0) != -9007199254740991.000000000, toInt32(0) < -9007199254740991.000000000, toInt32(0) <= -9007199254740991.000000000, toInt32(0) > -9007199254740991.000000000, toInt32(0) >= -9007199254740991.000000000, -9007199254740991.000000000 = toInt32(0), -9007199254740991.000000000 != toInt32(0), -9007199254740991.000000000 < toInt32(0), -9007199254740991.000000000 <= toInt32(0), -9007199254740991.000000000 > toInt32(0), -9007199254740991.000000000 >= toInt32(0) , toUInt64(0) = -9007199254740991.000000000, toUInt64(0) != -9007199254740991.000000000, toUInt64(0) < -9007199254740991.000000000, toUInt64(0) <= -9007199254740991.000000000, toUInt64(0) > -9007199254740991.000000000, toUInt64(0) >= -9007199254740991.000000000, -9007199254740991.000000000 = toUInt64(0), -9007199254740991.000000000 != toUInt64(0), -9007199254740991.000000000 < toUInt64(0), -9007199254740991.000000000 <= toUInt64(0), -9007199254740991.000000000 > toUInt64(0), -9007199254740991.000000000 >= toUInt64(0) , toInt64(0) = -9007199254740991.000000000, toInt64(0) != -9007199254740991.000000000, toInt64(0) < -9007199254740991.000000000, toInt64(0) <= -9007199254740991.000000000, toInt64(0) > -9007199254740991.000000000, toInt64(0) >= -9007199254740991.000000000, -9007199254740991.000000000 = toInt64(0), -9007199254740991.000000000 != toInt64(0), -9007199254740991.000000000 < toInt64(0), -9007199254740991.000000000 <= toInt64(0), -9007199254740991.000000000 > toInt64(0), -9007199254740991.000000000 >= toInt64(0) ; +SELECT '0', '-9007199254740992.000000000', 0 = -9007199254740992.000000000, 0 != -9007199254740992.000000000, 0 < -9007199254740992.000000000, 0 <= -9007199254740992.000000000, 0 > -9007199254740992.000000000, 0 >= -9007199254740992.000000000, -9007199254740992.000000000 = 0, -9007199254740992.000000000 != 0, -9007199254740992.000000000 < 0, -9007199254740992.000000000 <= 0, -9007199254740992.000000000 > 0, -9007199254740992.000000000 >= 0 , toUInt8(0) = -9007199254740992.000000000, toUInt8(0) != -9007199254740992.000000000, toUInt8(0) < -9007199254740992.000000000, toUInt8(0) <= -9007199254740992.000000000, toUInt8(0) > -9007199254740992.000000000, toUInt8(0) >= -9007199254740992.000000000, -9007199254740992.000000000 = toUInt8(0), -9007199254740992.000000000 != toUInt8(0), -9007199254740992.000000000 < toUInt8(0), -9007199254740992.000000000 <= toUInt8(0), -9007199254740992.000000000 > toUInt8(0), -9007199254740992.000000000 >= toUInt8(0) , toInt8(0) = -9007199254740992.000000000, toInt8(0) != -9007199254740992.000000000, toInt8(0) < -9007199254740992.000000000, toInt8(0) <= -9007199254740992.000000000, toInt8(0) > 
-9007199254740992.000000000, toInt8(0) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt8(0), -9007199254740992.000000000 != toInt8(0), -9007199254740992.000000000 < toInt8(0), -9007199254740992.000000000 <= toInt8(0), -9007199254740992.000000000 > toInt8(0), -9007199254740992.000000000 >= toInt8(0) , toUInt16(0) = -9007199254740992.000000000, toUInt16(0) != -9007199254740992.000000000, toUInt16(0) < -9007199254740992.000000000, toUInt16(0) <= -9007199254740992.000000000, toUInt16(0) > -9007199254740992.000000000, toUInt16(0) >= -9007199254740992.000000000, -9007199254740992.000000000 = toUInt16(0), -9007199254740992.000000000 != toUInt16(0), -9007199254740992.000000000 < toUInt16(0), -9007199254740992.000000000 <= toUInt16(0), -9007199254740992.000000000 > toUInt16(0), -9007199254740992.000000000 >= toUInt16(0) , toInt16(0) = -9007199254740992.000000000, toInt16(0) != -9007199254740992.000000000, toInt16(0) < -9007199254740992.000000000, toInt16(0) <= -9007199254740992.000000000, toInt16(0) > -9007199254740992.000000000, toInt16(0) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt16(0), -9007199254740992.000000000 != toInt16(0), -9007199254740992.000000000 < toInt16(0), -9007199254740992.000000000 <= toInt16(0), -9007199254740992.000000000 > toInt16(0), -9007199254740992.000000000 >= toInt16(0) , toUInt32(0) = -9007199254740992.000000000, toUInt32(0) != -9007199254740992.000000000, toUInt32(0) < -9007199254740992.000000000, toUInt32(0) <= -9007199254740992.000000000, toUInt32(0) > -9007199254740992.000000000, toUInt32(0) >= -9007199254740992.000000000, -9007199254740992.000000000 = toUInt32(0), -9007199254740992.000000000 != toUInt32(0), -9007199254740992.000000000 < toUInt32(0), -9007199254740992.000000000 <= toUInt32(0), -9007199254740992.000000000 > toUInt32(0), -9007199254740992.000000000 >= toUInt32(0) , toInt32(0) = -9007199254740992.000000000, toInt32(0) != -9007199254740992.000000000, toInt32(0) < -9007199254740992.000000000, toInt32(0) <= -9007199254740992.000000000, toInt32(0) > -9007199254740992.000000000, toInt32(0) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt32(0), -9007199254740992.000000000 != toInt32(0), -9007199254740992.000000000 < toInt32(0), -9007199254740992.000000000 <= toInt32(0), -9007199254740992.000000000 > toInt32(0), -9007199254740992.000000000 >= toInt32(0) , toUInt64(0) = -9007199254740992.000000000, toUInt64(0) != -9007199254740992.000000000, toUInt64(0) < -9007199254740992.000000000, toUInt64(0) <= -9007199254740992.000000000, toUInt64(0) > -9007199254740992.000000000, toUInt64(0) >= -9007199254740992.000000000, -9007199254740992.000000000 = toUInt64(0), -9007199254740992.000000000 != toUInt64(0), -9007199254740992.000000000 < toUInt64(0), -9007199254740992.000000000 <= toUInt64(0), -9007199254740992.000000000 > toUInt64(0), -9007199254740992.000000000 >= toUInt64(0) , toInt64(0) = -9007199254740992.000000000, toInt64(0) != -9007199254740992.000000000, toInt64(0) < -9007199254740992.000000000, toInt64(0) <= -9007199254740992.000000000, toInt64(0) > -9007199254740992.000000000, toInt64(0) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt64(0), -9007199254740992.000000000 != toInt64(0), -9007199254740992.000000000 < toInt64(0), -9007199254740992.000000000 <= toInt64(0), -9007199254740992.000000000 > toInt64(0), -9007199254740992.000000000 >= toInt64(0) ; +SELECT '0', '-9007199254740992.000000000', 0 = -9007199254740992.000000000, 0 != -9007199254740992.000000000, 0 < 
-9007199254740992.000000000, 0 <= -9007199254740992.000000000, 0 > -9007199254740992.000000000, 0 >= -9007199254740992.000000000, -9007199254740992.000000000 = 0, -9007199254740992.000000000 != 0, -9007199254740992.000000000 < 0, -9007199254740992.000000000 <= 0, -9007199254740992.000000000 > 0, -9007199254740992.000000000 >= 0 , toUInt8(0) = -9007199254740992.000000000, toUInt8(0) != -9007199254740992.000000000, toUInt8(0) < -9007199254740992.000000000, toUInt8(0) <= -9007199254740992.000000000, toUInt8(0) > -9007199254740992.000000000, toUInt8(0) >= -9007199254740992.000000000, -9007199254740992.000000000 = toUInt8(0), -9007199254740992.000000000 != toUInt8(0), -9007199254740992.000000000 < toUInt8(0), -9007199254740992.000000000 <= toUInt8(0), -9007199254740992.000000000 > toUInt8(0), -9007199254740992.000000000 >= toUInt8(0) , toInt8(0) = -9007199254740992.000000000, toInt8(0) != -9007199254740992.000000000, toInt8(0) < -9007199254740992.000000000, toInt8(0) <= -9007199254740992.000000000, toInt8(0) > -9007199254740992.000000000, toInt8(0) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt8(0), -9007199254740992.000000000 != toInt8(0), -9007199254740992.000000000 < toInt8(0), -9007199254740992.000000000 <= toInt8(0), -9007199254740992.000000000 > toInt8(0), -9007199254740992.000000000 >= toInt8(0) , toUInt16(0) = -9007199254740992.000000000, toUInt16(0) != -9007199254740992.000000000, toUInt16(0) < -9007199254740992.000000000, toUInt16(0) <= -9007199254740992.000000000, toUInt16(0) > -9007199254740992.000000000, toUInt16(0) >= -9007199254740992.000000000, -9007199254740992.000000000 = toUInt16(0), -9007199254740992.000000000 != toUInt16(0), -9007199254740992.000000000 < toUInt16(0), -9007199254740992.000000000 <= toUInt16(0), -9007199254740992.000000000 > toUInt16(0), -9007199254740992.000000000 >= toUInt16(0) , toInt16(0) = -9007199254740992.000000000, toInt16(0) != -9007199254740992.000000000, toInt16(0) < -9007199254740992.000000000, toInt16(0) <= -9007199254740992.000000000, toInt16(0) > -9007199254740992.000000000, toInt16(0) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt16(0), -9007199254740992.000000000 != toInt16(0), -9007199254740992.000000000 < toInt16(0), -9007199254740992.000000000 <= toInt16(0), -9007199254740992.000000000 > toInt16(0), -9007199254740992.000000000 >= toInt16(0) , toUInt32(0) = -9007199254740992.000000000, toUInt32(0) != -9007199254740992.000000000, toUInt32(0) < -9007199254740992.000000000, toUInt32(0) <= -9007199254740992.000000000, toUInt32(0) > -9007199254740992.000000000, toUInt32(0) >= -9007199254740992.000000000, -9007199254740992.000000000 = toUInt32(0), -9007199254740992.000000000 != toUInt32(0), -9007199254740992.000000000 < toUInt32(0), -9007199254740992.000000000 <= toUInt32(0), -9007199254740992.000000000 > toUInt32(0), -9007199254740992.000000000 >= toUInt32(0) , toInt32(0) = -9007199254740992.000000000, toInt32(0) != -9007199254740992.000000000, toInt32(0) < -9007199254740992.000000000, toInt32(0) <= -9007199254740992.000000000, toInt32(0) > -9007199254740992.000000000, toInt32(0) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt32(0), -9007199254740992.000000000 != toInt32(0), -9007199254740992.000000000 < toInt32(0), -9007199254740992.000000000 <= toInt32(0), -9007199254740992.000000000 > toInt32(0), -9007199254740992.000000000 >= toInt32(0) , toUInt64(0) = -9007199254740992.000000000, toUInt64(0) != -9007199254740992.000000000, toUInt64(0) < -9007199254740992.000000000, toUInt64(0) <= 
-9007199254740992.000000000, toUInt64(0) > -9007199254740992.000000000, toUInt64(0) >= -9007199254740992.000000000, -9007199254740992.000000000 = toUInt64(0), -9007199254740992.000000000 != toUInt64(0), -9007199254740992.000000000 < toUInt64(0), -9007199254740992.000000000 <= toUInt64(0), -9007199254740992.000000000 > toUInt64(0), -9007199254740992.000000000 >= toUInt64(0) , toInt64(0) = -9007199254740992.000000000, toInt64(0) != -9007199254740992.000000000, toInt64(0) < -9007199254740992.000000000, toInt64(0) <= -9007199254740992.000000000, toInt64(0) > -9007199254740992.000000000, toInt64(0) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt64(0), -9007199254740992.000000000 != toInt64(0), -9007199254740992.000000000 < toInt64(0), -9007199254740992.000000000 <= toInt64(0), -9007199254740992.000000000 > toInt64(0), -9007199254740992.000000000 >= toInt64(0) ; +SELECT '0', '-9007199254740994.000000000', 0 = -9007199254740994.000000000, 0 != -9007199254740994.000000000, 0 < -9007199254740994.000000000, 0 <= -9007199254740994.000000000, 0 > -9007199254740994.000000000, 0 >= -9007199254740994.000000000, -9007199254740994.000000000 = 0, -9007199254740994.000000000 != 0, -9007199254740994.000000000 < 0, -9007199254740994.000000000 <= 0, -9007199254740994.000000000 > 0, -9007199254740994.000000000 >= 0 , toUInt8(0) = -9007199254740994.000000000, toUInt8(0) != -9007199254740994.000000000, toUInt8(0) < -9007199254740994.000000000, toUInt8(0) <= -9007199254740994.000000000, toUInt8(0) > -9007199254740994.000000000, toUInt8(0) >= -9007199254740994.000000000, -9007199254740994.000000000 = toUInt8(0), -9007199254740994.000000000 != toUInt8(0), -9007199254740994.000000000 < toUInt8(0), -9007199254740994.000000000 <= toUInt8(0), -9007199254740994.000000000 > toUInt8(0), -9007199254740994.000000000 >= toUInt8(0) , toInt8(0) = -9007199254740994.000000000, toInt8(0) != -9007199254740994.000000000, toInt8(0) < -9007199254740994.000000000, toInt8(0) <= -9007199254740994.000000000, toInt8(0) > -9007199254740994.000000000, toInt8(0) >= -9007199254740994.000000000, -9007199254740994.000000000 = toInt8(0), -9007199254740994.000000000 != toInt8(0), -9007199254740994.000000000 < toInt8(0), -9007199254740994.000000000 <= toInt8(0), -9007199254740994.000000000 > toInt8(0), -9007199254740994.000000000 >= toInt8(0) , toUInt16(0) = -9007199254740994.000000000, toUInt16(0) != -9007199254740994.000000000, toUInt16(0) < -9007199254740994.000000000, toUInt16(0) <= -9007199254740994.000000000, toUInt16(0) > -9007199254740994.000000000, toUInt16(0) >= -9007199254740994.000000000, -9007199254740994.000000000 = toUInt16(0), -9007199254740994.000000000 != toUInt16(0), -9007199254740994.000000000 < toUInt16(0), -9007199254740994.000000000 <= toUInt16(0), -9007199254740994.000000000 > toUInt16(0), -9007199254740994.000000000 >= toUInt16(0) , toInt16(0) = -9007199254740994.000000000, toInt16(0) != -9007199254740994.000000000, toInt16(0) < -9007199254740994.000000000, toInt16(0) <= -9007199254740994.000000000, toInt16(0) > -9007199254740994.000000000, toInt16(0) >= -9007199254740994.000000000, -9007199254740994.000000000 = toInt16(0), -9007199254740994.000000000 != toInt16(0), -9007199254740994.000000000 < toInt16(0), -9007199254740994.000000000 <= toInt16(0), -9007199254740994.000000000 > toInt16(0), -9007199254740994.000000000 >= toInt16(0) , toUInt32(0) = -9007199254740994.000000000, toUInt32(0) != -9007199254740994.000000000, toUInt32(0) < -9007199254740994.000000000, toUInt32(0) <= -9007199254740994.000000000, 
toUInt32(0) > -9007199254740994.000000000, toUInt32(0) >= -9007199254740994.000000000, -9007199254740994.000000000 = toUInt32(0), -9007199254740994.000000000 != toUInt32(0), -9007199254740994.000000000 < toUInt32(0), -9007199254740994.000000000 <= toUInt32(0), -9007199254740994.000000000 > toUInt32(0), -9007199254740994.000000000 >= toUInt32(0) , toInt32(0) = -9007199254740994.000000000, toInt32(0) != -9007199254740994.000000000, toInt32(0) < -9007199254740994.000000000, toInt32(0) <= -9007199254740994.000000000, toInt32(0) > -9007199254740994.000000000, toInt32(0) >= -9007199254740994.000000000, -9007199254740994.000000000 = toInt32(0), -9007199254740994.000000000 != toInt32(0), -9007199254740994.000000000 < toInt32(0), -9007199254740994.000000000 <= toInt32(0), -9007199254740994.000000000 > toInt32(0), -9007199254740994.000000000 >= toInt32(0) , toUInt64(0) = -9007199254740994.000000000, toUInt64(0) != -9007199254740994.000000000, toUInt64(0) < -9007199254740994.000000000, toUInt64(0) <= -9007199254740994.000000000, toUInt64(0) > -9007199254740994.000000000, toUInt64(0) >= -9007199254740994.000000000, -9007199254740994.000000000 = toUInt64(0), -9007199254740994.000000000 != toUInt64(0), -9007199254740994.000000000 < toUInt64(0), -9007199254740994.000000000 <= toUInt64(0), -9007199254740994.000000000 > toUInt64(0), -9007199254740994.000000000 >= toUInt64(0) , toInt64(0) = -9007199254740994.000000000, toInt64(0) != -9007199254740994.000000000, toInt64(0) < -9007199254740994.000000000, toInt64(0) <= -9007199254740994.000000000, toInt64(0) > -9007199254740994.000000000, toInt64(0) >= -9007199254740994.000000000, -9007199254740994.000000000 = toInt64(0), -9007199254740994.000000000 != toInt64(0), -9007199254740994.000000000 < toInt64(0), -9007199254740994.000000000 <= toInt64(0), -9007199254740994.000000000 > toInt64(0), -9007199254740994.000000000 >= toInt64(0) ; +SELECT '0', '104.000000000', 0 = 104.000000000, 0 != 104.000000000, 0 < 104.000000000, 0 <= 104.000000000, 0 > 104.000000000, 0 >= 104.000000000, 104.000000000 = 0, 104.000000000 != 0, 104.000000000 < 0, 104.000000000 <= 0, 104.000000000 > 0, 104.000000000 >= 0 , toUInt8(0) = 104.000000000, toUInt8(0) != 104.000000000, toUInt8(0) < 104.000000000, toUInt8(0) <= 104.000000000, toUInt8(0) > 104.000000000, toUInt8(0) >= 104.000000000, 104.000000000 = toUInt8(0), 104.000000000 != toUInt8(0), 104.000000000 < toUInt8(0), 104.000000000 <= toUInt8(0), 104.000000000 > toUInt8(0), 104.000000000 >= toUInt8(0) , toInt8(0) = 104.000000000, toInt8(0) != 104.000000000, toInt8(0) < 104.000000000, toInt8(0) <= 104.000000000, toInt8(0) > 104.000000000, toInt8(0) >= 104.000000000, 104.000000000 = toInt8(0), 104.000000000 != toInt8(0), 104.000000000 < toInt8(0), 104.000000000 <= toInt8(0), 104.000000000 > toInt8(0), 104.000000000 >= toInt8(0) , toUInt16(0) = 104.000000000, toUInt16(0) != 104.000000000, toUInt16(0) < 104.000000000, toUInt16(0) <= 104.000000000, toUInt16(0) > 104.000000000, toUInt16(0) >= 104.000000000, 104.000000000 = toUInt16(0), 104.000000000 != toUInt16(0), 104.000000000 < toUInt16(0), 104.000000000 <= toUInt16(0), 104.000000000 > toUInt16(0), 104.000000000 >= toUInt16(0) , toInt16(0) = 104.000000000, toInt16(0) != 104.000000000, toInt16(0) < 104.000000000, toInt16(0) <= 104.000000000, toInt16(0) > 104.000000000, toInt16(0) >= 104.000000000, 104.000000000 = toInt16(0), 104.000000000 != toInt16(0), 104.000000000 < toInt16(0), 104.000000000 <= toInt16(0), 104.000000000 > toInt16(0), 104.000000000 >= toInt16(0) , toUInt32(0) = 
104.000000000, toUInt32(0) != 104.000000000, toUInt32(0) < 104.000000000, toUInt32(0) <= 104.000000000, toUInt32(0) > 104.000000000, toUInt32(0) >= 104.000000000, 104.000000000 = toUInt32(0), 104.000000000 != toUInt32(0), 104.000000000 < toUInt32(0), 104.000000000 <= toUInt32(0), 104.000000000 > toUInt32(0), 104.000000000 >= toUInt32(0) , toInt32(0) = 104.000000000, toInt32(0) != 104.000000000, toInt32(0) < 104.000000000, toInt32(0) <= 104.000000000, toInt32(0) > 104.000000000, toInt32(0) >= 104.000000000, 104.000000000 = toInt32(0), 104.000000000 != toInt32(0), 104.000000000 < toInt32(0), 104.000000000 <= toInt32(0), 104.000000000 > toInt32(0), 104.000000000 >= toInt32(0) , toUInt64(0) = 104.000000000, toUInt64(0) != 104.000000000, toUInt64(0) < 104.000000000, toUInt64(0) <= 104.000000000, toUInt64(0) > 104.000000000, toUInt64(0) >= 104.000000000, 104.000000000 = toUInt64(0), 104.000000000 != toUInt64(0), 104.000000000 < toUInt64(0), 104.000000000 <= toUInt64(0), 104.000000000 > toUInt64(0), 104.000000000 >= toUInt64(0) , toInt64(0) = 104.000000000, toInt64(0) != 104.000000000, toInt64(0) < 104.000000000, toInt64(0) <= 104.000000000, toInt64(0) > 104.000000000, toInt64(0) >= 104.000000000, 104.000000000 = toInt64(0), 104.000000000 != toInt64(0), 104.000000000 < toInt64(0), 104.000000000 <= toInt64(0), 104.000000000 > toInt64(0), 104.000000000 >= toInt64(0) ; +SELECT '0', '-4503599627370496.000000000', 0 = -4503599627370496.000000000, 0 != -4503599627370496.000000000, 0 < -4503599627370496.000000000, 0 <= -4503599627370496.000000000, 0 > -4503599627370496.000000000, 0 >= -4503599627370496.000000000, -4503599627370496.000000000 = 0, -4503599627370496.000000000 != 0, -4503599627370496.000000000 < 0, -4503599627370496.000000000 <= 0, -4503599627370496.000000000 > 0, -4503599627370496.000000000 >= 0 , toUInt8(0) = -4503599627370496.000000000, toUInt8(0) != -4503599627370496.000000000, toUInt8(0) < -4503599627370496.000000000, toUInt8(0) <= -4503599627370496.000000000, toUInt8(0) > -4503599627370496.000000000, toUInt8(0) >= -4503599627370496.000000000, -4503599627370496.000000000 = toUInt8(0), -4503599627370496.000000000 != toUInt8(0), -4503599627370496.000000000 < toUInt8(0), -4503599627370496.000000000 <= toUInt8(0), -4503599627370496.000000000 > toUInt8(0), -4503599627370496.000000000 >= toUInt8(0) , toInt8(0) = -4503599627370496.000000000, toInt8(0) != -4503599627370496.000000000, toInt8(0) < -4503599627370496.000000000, toInt8(0) <= -4503599627370496.000000000, toInt8(0) > -4503599627370496.000000000, toInt8(0) >= -4503599627370496.000000000, -4503599627370496.000000000 = toInt8(0), -4503599627370496.000000000 != toInt8(0), -4503599627370496.000000000 < toInt8(0), -4503599627370496.000000000 <= toInt8(0), -4503599627370496.000000000 > toInt8(0), -4503599627370496.000000000 >= toInt8(0) , toUInt16(0) = -4503599627370496.000000000, toUInt16(0) != -4503599627370496.000000000, toUInt16(0) < -4503599627370496.000000000, toUInt16(0) <= -4503599627370496.000000000, toUInt16(0) > -4503599627370496.000000000, toUInt16(0) >= -4503599627370496.000000000, -4503599627370496.000000000 = toUInt16(0), -4503599627370496.000000000 != toUInt16(0), -4503599627370496.000000000 < toUInt16(0), -4503599627370496.000000000 <= toUInt16(0), -4503599627370496.000000000 > toUInt16(0), -4503599627370496.000000000 >= toUInt16(0) , toInt16(0) = -4503599627370496.000000000, toInt16(0) != -4503599627370496.000000000, toInt16(0) < -4503599627370496.000000000, toInt16(0) <= -4503599627370496.000000000, toInt16(0) > 
-4503599627370496.000000000, toInt16(0) >= -4503599627370496.000000000, -4503599627370496.000000000 = toInt16(0), -4503599627370496.000000000 != toInt16(0), -4503599627370496.000000000 < toInt16(0), -4503599627370496.000000000 <= toInt16(0), -4503599627370496.000000000 > toInt16(0), -4503599627370496.000000000 >= toInt16(0) , toUInt32(0) = -4503599627370496.000000000, toUInt32(0) != -4503599627370496.000000000, toUInt32(0) < -4503599627370496.000000000, toUInt32(0) <= -4503599627370496.000000000, toUInt32(0) > -4503599627370496.000000000, toUInt32(0) >= -4503599627370496.000000000, -4503599627370496.000000000 = toUInt32(0), -4503599627370496.000000000 != toUInt32(0), -4503599627370496.000000000 < toUInt32(0), -4503599627370496.000000000 <= toUInt32(0), -4503599627370496.000000000 > toUInt32(0), -4503599627370496.000000000 >= toUInt32(0) , toInt32(0) = -4503599627370496.000000000, toInt32(0) != -4503599627370496.000000000, toInt32(0) < -4503599627370496.000000000, toInt32(0) <= -4503599627370496.000000000, toInt32(0) > -4503599627370496.000000000, toInt32(0) >= -4503599627370496.000000000, -4503599627370496.000000000 = toInt32(0), -4503599627370496.000000000 != toInt32(0), -4503599627370496.000000000 < toInt32(0), -4503599627370496.000000000 <= toInt32(0), -4503599627370496.000000000 > toInt32(0), -4503599627370496.000000000 >= toInt32(0) , toUInt64(0) = -4503599627370496.000000000, toUInt64(0) != -4503599627370496.000000000, toUInt64(0) < -4503599627370496.000000000, toUInt64(0) <= -4503599627370496.000000000, toUInt64(0) > -4503599627370496.000000000, toUInt64(0) >= -4503599627370496.000000000, -4503599627370496.000000000 = toUInt64(0), -4503599627370496.000000000 != toUInt64(0), -4503599627370496.000000000 < toUInt64(0), -4503599627370496.000000000 <= toUInt64(0), -4503599627370496.000000000 > toUInt64(0), -4503599627370496.000000000 >= toUInt64(0) , toInt64(0) = -4503599627370496.000000000, toInt64(0) != -4503599627370496.000000000, toInt64(0) < -4503599627370496.000000000, toInt64(0) <= -4503599627370496.000000000, toInt64(0) > -4503599627370496.000000000, toInt64(0) >= -4503599627370496.000000000, -4503599627370496.000000000 = toInt64(0), -4503599627370496.000000000 != toInt64(0), -4503599627370496.000000000 < toInt64(0), -4503599627370496.000000000 <= toInt64(0), -4503599627370496.000000000 > toInt64(0), -4503599627370496.000000000 >= toInt64(0) ; +SELECT '0', '-0.500000000', 0 = -0.500000000, 0 != -0.500000000, 0 < -0.500000000, 0 <= -0.500000000, 0 > -0.500000000, 0 >= -0.500000000, -0.500000000 = 0, -0.500000000 != 0, -0.500000000 < 0, -0.500000000 <= 0, -0.500000000 > 0, -0.500000000 >= 0 , toUInt8(0) = -0.500000000, toUInt8(0) != -0.500000000, toUInt8(0) < -0.500000000, toUInt8(0) <= -0.500000000, toUInt8(0) > -0.500000000, toUInt8(0) >= -0.500000000, -0.500000000 = toUInt8(0), -0.500000000 != toUInt8(0), -0.500000000 < toUInt8(0), -0.500000000 <= toUInt8(0), -0.500000000 > toUInt8(0), -0.500000000 >= toUInt8(0) , toInt8(0) = -0.500000000, toInt8(0) != -0.500000000, toInt8(0) < -0.500000000, toInt8(0) <= -0.500000000, toInt8(0) > -0.500000000, toInt8(0) >= -0.500000000, -0.500000000 = toInt8(0), -0.500000000 != toInt8(0), -0.500000000 < toInt8(0), -0.500000000 <= toInt8(0), -0.500000000 > toInt8(0), -0.500000000 >= toInt8(0) , toUInt16(0) = -0.500000000, toUInt16(0) != -0.500000000, toUInt16(0) < -0.500000000, toUInt16(0) <= -0.500000000, toUInt16(0) > -0.500000000, toUInt16(0) >= -0.500000000, -0.500000000 = toUInt16(0), -0.500000000 != toUInt16(0), -0.500000000 < toUInt16(0), 
-0.500000000 <= toUInt16(0), -0.500000000 > toUInt16(0), -0.500000000 >= toUInt16(0) , toInt16(0) = -0.500000000, toInt16(0) != -0.500000000, toInt16(0) < -0.500000000, toInt16(0) <= -0.500000000, toInt16(0) > -0.500000000, toInt16(0) >= -0.500000000, -0.500000000 = toInt16(0), -0.500000000 != toInt16(0), -0.500000000 < toInt16(0), -0.500000000 <= toInt16(0), -0.500000000 > toInt16(0), -0.500000000 >= toInt16(0) , toUInt32(0) = -0.500000000, toUInt32(0) != -0.500000000, toUInt32(0) < -0.500000000, toUInt32(0) <= -0.500000000, toUInt32(0) > -0.500000000, toUInt32(0) >= -0.500000000, -0.500000000 = toUInt32(0), -0.500000000 != toUInt32(0), -0.500000000 < toUInt32(0), -0.500000000 <= toUInt32(0), -0.500000000 > toUInt32(0), -0.500000000 >= toUInt32(0) , toInt32(0) = -0.500000000, toInt32(0) != -0.500000000, toInt32(0) < -0.500000000, toInt32(0) <= -0.500000000, toInt32(0) > -0.500000000, toInt32(0) >= -0.500000000, -0.500000000 = toInt32(0), -0.500000000 != toInt32(0), -0.500000000 < toInt32(0), -0.500000000 <= toInt32(0), -0.500000000 > toInt32(0), -0.500000000 >= toInt32(0) , toUInt64(0) = -0.500000000, toUInt64(0) != -0.500000000, toUInt64(0) < -0.500000000, toUInt64(0) <= -0.500000000, toUInt64(0) > -0.500000000, toUInt64(0) >= -0.500000000, -0.500000000 = toUInt64(0), -0.500000000 != toUInt64(0), -0.500000000 < toUInt64(0), -0.500000000 <= toUInt64(0), -0.500000000 > toUInt64(0), -0.500000000 >= toUInt64(0) , toInt64(0) = -0.500000000, toInt64(0) != -0.500000000, toInt64(0) < -0.500000000, toInt64(0) <= -0.500000000, toInt64(0) > -0.500000000, toInt64(0) >= -0.500000000, -0.500000000 = toInt64(0), -0.500000000 != toInt64(0), -0.500000000 < toInt64(0), -0.500000000 <= toInt64(0), -0.500000000 > toInt64(0), -0.500000000 >= toInt64(0) ; +SELECT '0', '0.500000000', 0 = 0.500000000, 0 != 0.500000000, 0 < 0.500000000, 0 <= 0.500000000, 0 > 0.500000000, 0 >= 0.500000000, 0.500000000 = 0, 0.500000000 != 0, 0.500000000 < 0, 0.500000000 <= 0, 0.500000000 > 0, 0.500000000 >= 0 , toUInt8(0) = 0.500000000, toUInt8(0) != 0.500000000, toUInt8(0) < 0.500000000, toUInt8(0) <= 0.500000000, toUInt8(0) > 0.500000000, toUInt8(0) >= 0.500000000, 0.500000000 = toUInt8(0), 0.500000000 != toUInt8(0), 0.500000000 < toUInt8(0), 0.500000000 <= toUInt8(0), 0.500000000 > toUInt8(0), 0.500000000 >= toUInt8(0) , toInt8(0) = 0.500000000, toInt8(0) != 0.500000000, toInt8(0) < 0.500000000, toInt8(0) <= 0.500000000, toInt8(0) > 0.500000000, toInt8(0) >= 0.500000000, 0.500000000 = toInt8(0), 0.500000000 != toInt8(0), 0.500000000 < toInt8(0), 0.500000000 <= toInt8(0), 0.500000000 > toInt8(0), 0.500000000 >= toInt8(0) , toUInt16(0) = 0.500000000, toUInt16(0) != 0.500000000, toUInt16(0) < 0.500000000, toUInt16(0) <= 0.500000000, toUInt16(0) > 0.500000000, toUInt16(0) >= 0.500000000, 0.500000000 = toUInt16(0), 0.500000000 != toUInt16(0), 0.500000000 < toUInt16(0), 0.500000000 <= toUInt16(0), 0.500000000 > toUInt16(0), 0.500000000 >= toUInt16(0) , toInt16(0) = 0.500000000, toInt16(0) != 0.500000000, toInt16(0) < 0.500000000, toInt16(0) <= 0.500000000, toInt16(0) > 0.500000000, toInt16(0) >= 0.500000000, 0.500000000 = toInt16(0), 0.500000000 != toInt16(0), 0.500000000 < toInt16(0), 0.500000000 <= toInt16(0), 0.500000000 > toInt16(0), 0.500000000 >= toInt16(0) , toUInt32(0) = 0.500000000, toUInt32(0) != 0.500000000, toUInt32(0) < 0.500000000, toUInt32(0) <= 0.500000000, toUInt32(0) > 0.500000000, toUInt32(0) >= 0.500000000, 0.500000000 = toUInt32(0), 0.500000000 != toUInt32(0), 0.500000000 < toUInt32(0), 0.500000000 <= 
toUInt32(0), 0.500000000 > toUInt32(0), 0.500000000 >= toUInt32(0) , toInt32(0) = 0.500000000, toInt32(0) != 0.500000000, toInt32(0) < 0.500000000, toInt32(0) <= 0.500000000, toInt32(0) > 0.500000000, toInt32(0) >= 0.500000000, 0.500000000 = toInt32(0), 0.500000000 != toInt32(0), 0.500000000 < toInt32(0), 0.500000000 <= toInt32(0), 0.500000000 > toInt32(0), 0.500000000 >= toInt32(0) , toUInt64(0) = 0.500000000, toUInt64(0) != 0.500000000, toUInt64(0) < 0.500000000, toUInt64(0) <= 0.500000000, toUInt64(0) > 0.500000000, toUInt64(0) >= 0.500000000, 0.500000000 = toUInt64(0), 0.500000000 != toUInt64(0), 0.500000000 < toUInt64(0), 0.500000000 <= toUInt64(0), 0.500000000 > toUInt64(0), 0.500000000 >= toUInt64(0) , toInt64(0) = 0.500000000, toInt64(0) != 0.500000000, toInt64(0) < 0.500000000, toInt64(0) <= 0.500000000, toInt64(0) > 0.500000000, toInt64(0) >= 0.500000000, 0.500000000 = toInt64(0), 0.500000000 != toInt64(0), 0.500000000 < toInt64(0), 0.500000000 <= toInt64(0), 0.500000000 > toInt64(0), 0.500000000 >= toInt64(0) ; +SELECT '0', '-1.500000000', 0 = -1.500000000, 0 != -1.500000000, 0 < -1.500000000, 0 <= -1.500000000, 0 > -1.500000000, 0 >= -1.500000000, -1.500000000 = 0, -1.500000000 != 0, -1.500000000 < 0, -1.500000000 <= 0, -1.500000000 > 0, -1.500000000 >= 0 , toUInt8(0) = -1.500000000, toUInt8(0) != -1.500000000, toUInt8(0) < -1.500000000, toUInt8(0) <= -1.500000000, toUInt8(0) > -1.500000000, toUInt8(0) >= -1.500000000, -1.500000000 = toUInt8(0), -1.500000000 != toUInt8(0), -1.500000000 < toUInt8(0), -1.500000000 <= toUInt8(0), -1.500000000 > toUInt8(0), -1.500000000 >= toUInt8(0) , toInt8(0) = -1.500000000, toInt8(0) != -1.500000000, toInt8(0) < -1.500000000, toInt8(0) <= -1.500000000, toInt8(0) > -1.500000000, toInt8(0) >= -1.500000000, -1.500000000 = toInt8(0), -1.500000000 != toInt8(0), -1.500000000 < toInt8(0), -1.500000000 <= toInt8(0), -1.500000000 > toInt8(0), -1.500000000 >= toInt8(0) , toUInt16(0) = -1.500000000, toUInt16(0) != -1.500000000, toUInt16(0) < -1.500000000, toUInt16(0) <= -1.500000000, toUInt16(0) > -1.500000000, toUInt16(0) >= -1.500000000, -1.500000000 = toUInt16(0), -1.500000000 != toUInt16(0), -1.500000000 < toUInt16(0), -1.500000000 <= toUInt16(0), -1.500000000 > toUInt16(0), -1.500000000 >= toUInt16(0) , toInt16(0) = -1.500000000, toInt16(0) != -1.500000000, toInt16(0) < -1.500000000, toInt16(0) <= -1.500000000, toInt16(0) > -1.500000000, toInt16(0) >= -1.500000000, -1.500000000 = toInt16(0), -1.500000000 != toInt16(0), -1.500000000 < toInt16(0), -1.500000000 <= toInt16(0), -1.500000000 > toInt16(0), -1.500000000 >= toInt16(0) , toUInt32(0) = -1.500000000, toUInt32(0) != -1.500000000, toUInt32(0) < -1.500000000, toUInt32(0) <= -1.500000000, toUInt32(0) > -1.500000000, toUInt32(0) >= -1.500000000, -1.500000000 = toUInt32(0), -1.500000000 != toUInt32(0), -1.500000000 < toUInt32(0), -1.500000000 <= toUInt32(0), -1.500000000 > toUInt32(0), -1.500000000 >= toUInt32(0) , toInt32(0) = -1.500000000, toInt32(0) != -1.500000000, toInt32(0) < -1.500000000, toInt32(0) <= -1.500000000, toInt32(0) > -1.500000000, toInt32(0) >= -1.500000000, -1.500000000 = toInt32(0), -1.500000000 != toInt32(0), -1.500000000 < toInt32(0), -1.500000000 <= toInt32(0), -1.500000000 > toInt32(0), -1.500000000 >= toInt32(0) , toUInt64(0) = -1.500000000, toUInt64(0) != -1.500000000, toUInt64(0) < -1.500000000, toUInt64(0) <= -1.500000000, toUInt64(0) > -1.500000000, toUInt64(0) >= -1.500000000, -1.500000000 = toUInt64(0), -1.500000000 != toUInt64(0), -1.500000000 < toUInt64(0), 
-1.500000000 <= toUInt64(0), -1.500000000 > toUInt64(0), -1.500000000 >= toUInt64(0) , toInt64(0) = -1.500000000, toInt64(0) != -1.500000000, toInt64(0) < -1.500000000, toInt64(0) <= -1.500000000, toInt64(0) > -1.500000000, toInt64(0) >= -1.500000000, -1.500000000 = toInt64(0), -1.500000000 != toInt64(0), -1.500000000 < toInt64(0), -1.500000000 <= toInt64(0), -1.500000000 > toInt64(0), -1.500000000 >= toInt64(0) ; +SELECT '0', '1.500000000', 0 = 1.500000000, 0 != 1.500000000, 0 < 1.500000000, 0 <= 1.500000000, 0 > 1.500000000, 0 >= 1.500000000, 1.500000000 = 0, 1.500000000 != 0, 1.500000000 < 0, 1.500000000 <= 0, 1.500000000 > 0, 1.500000000 >= 0 , toUInt8(0) = 1.500000000, toUInt8(0) != 1.500000000, toUInt8(0) < 1.500000000, toUInt8(0) <= 1.500000000, toUInt8(0) > 1.500000000, toUInt8(0) >= 1.500000000, 1.500000000 = toUInt8(0), 1.500000000 != toUInt8(0), 1.500000000 < toUInt8(0), 1.500000000 <= toUInt8(0), 1.500000000 > toUInt8(0), 1.500000000 >= toUInt8(0) , toInt8(0) = 1.500000000, toInt8(0) != 1.500000000, toInt8(0) < 1.500000000, toInt8(0) <= 1.500000000, toInt8(0) > 1.500000000, toInt8(0) >= 1.500000000, 1.500000000 = toInt8(0), 1.500000000 != toInt8(0), 1.500000000 < toInt8(0), 1.500000000 <= toInt8(0), 1.500000000 > toInt8(0), 1.500000000 >= toInt8(0) , toUInt16(0) = 1.500000000, toUInt16(0) != 1.500000000, toUInt16(0) < 1.500000000, toUInt16(0) <= 1.500000000, toUInt16(0) > 1.500000000, toUInt16(0) >= 1.500000000, 1.500000000 = toUInt16(0), 1.500000000 != toUInt16(0), 1.500000000 < toUInt16(0), 1.500000000 <= toUInt16(0), 1.500000000 > toUInt16(0), 1.500000000 >= toUInt16(0) , toInt16(0) = 1.500000000, toInt16(0) != 1.500000000, toInt16(0) < 1.500000000, toInt16(0) <= 1.500000000, toInt16(0) > 1.500000000, toInt16(0) >= 1.500000000, 1.500000000 = toInt16(0), 1.500000000 != toInt16(0), 1.500000000 < toInt16(0), 1.500000000 <= toInt16(0), 1.500000000 > toInt16(0), 1.500000000 >= toInt16(0) , toUInt32(0) = 1.500000000, toUInt32(0) != 1.500000000, toUInt32(0) < 1.500000000, toUInt32(0) <= 1.500000000, toUInt32(0) > 1.500000000, toUInt32(0) >= 1.500000000, 1.500000000 = toUInt32(0), 1.500000000 != toUInt32(0), 1.500000000 < toUInt32(0), 1.500000000 <= toUInt32(0), 1.500000000 > toUInt32(0), 1.500000000 >= toUInt32(0) , toInt32(0) = 1.500000000, toInt32(0) != 1.500000000, toInt32(0) < 1.500000000, toInt32(0) <= 1.500000000, toInt32(0) > 1.500000000, toInt32(0) >= 1.500000000, 1.500000000 = toInt32(0), 1.500000000 != toInt32(0), 1.500000000 < toInt32(0), 1.500000000 <= toInt32(0), 1.500000000 > toInt32(0), 1.500000000 >= toInt32(0) , toUInt64(0) = 1.500000000, toUInt64(0) != 1.500000000, toUInt64(0) < 1.500000000, toUInt64(0) <= 1.500000000, toUInt64(0) > 1.500000000, toUInt64(0) >= 1.500000000, 1.500000000 = toUInt64(0), 1.500000000 != toUInt64(0), 1.500000000 < toUInt64(0), 1.500000000 <= toUInt64(0), 1.500000000 > toUInt64(0), 1.500000000 >= toUInt64(0) , toInt64(0) = 1.500000000, toInt64(0) != 1.500000000, toInt64(0) < 1.500000000, toInt64(0) <= 1.500000000, toInt64(0) > 1.500000000, toInt64(0) >= 1.500000000, 1.500000000 = toInt64(0), 1.500000000 != toInt64(0), 1.500000000 < toInt64(0), 1.500000000 <= toInt64(0), 1.500000000 > toInt64(0), 1.500000000 >= toInt64(0) ; +SELECT '0', '9007199254740992.000000000', 0 = 9007199254740992.000000000, 0 != 9007199254740992.000000000, 0 < 9007199254740992.000000000, 0 <= 9007199254740992.000000000, 0 > 9007199254740992.000000000, 0 >= 9007199254740992.000000000, 9007199254740992.000000000 = 0, 9007199254740992.000000000 != 0, 
9007199254740992.000000000 < 0, 9007199254740992.000000000 <= 0, 9007199254740992.000000000 > 0, 9007199254740992.000000000 >= 0 , toUInt8(0) = 9007199254740992.000000000, toUInt8(0) != 9007199254740992.000000000, toUInt8(0) < 9007199254740992.000000000, toUInt8(0) <= 9007199254740992.000000000, toUInt8(0) > 9007199254740992.000000000, toUInt8(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt8(0), 9007199254740992.000000000 != toUInt8(0), 9007199254740992.000000000 < toUInt8(0), 9007199254740992.000000000 <= toUInt8(0), 9007199254740992.000000000 > toUInt8(0), 9007199254740992.000000000 >= toUInt8(0) , toInt8(0) = 9007199254740992.000000000, toInt8(0) != 9007199254740992.000000000, toInt8(0) < 9007199254740992.000000000, toInt8(0) <= 9007199254740992.000000000, toInt8(0) > 9007199254740992.000000000, toInt8(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt8(0), 9007199254740992.000000000 != toInt8(0), 9007199254740992.000000000 < toInt8(0), 9007199254740992.000000000 <= toInt8(0), 9007199254740992.000000000 > toInt8(0), 9007199254740992.000000000 >= toInt8(0) , toUInt16(0) = 9007199254740992.000000000, toUInt16(0) != 9007199254740992.000000000, toUInt16(0) < 9007199254740992.000000000, toUInt16(0) <= 9007199254740992.000000000, toUInt16(0) > 9007199254740992.000000000, toUInt16(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt16(0), 9007199254740992.000000000 != toUInt16(0), 9007199254740992.000000000 < toUInt16(0), 9007199254740992.000000000 <= toUInt16(0), 9007199254740992.000000000 > toUInt16(0), 9007199254740992.000000000 >= toUInt16(0) , toInt16(0) = 9007199254740992.000000000, toInt16(0) != 9007199254740992.000000000, toInt16(0) < 9007199254740992.000000000, toInt16(0) <= 9007199254740992.000000000, toInt16(0) > 9007199254740992.000000000, toInt16(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt16(0), 9007199254740992.000000000 != toInt16(0), 9007199254740992.000000000 < toInt16(0), 9007199254740992.000000000 <= toInt16(0), 9007199254740992.000000000 > toInt16(0), 9007199254740992.000000000 >= toInt16(0) , toUInt32(0) = 9007199254740992.000000000, toUInt32(0) != 9007199254740992.000000000, toUInt32(0) < 9007199254740992.000000000, toUInt32(0) <= 9007199254740992.000000000, toUInt32(0) > 9007199254740992.000000000, toUInt32(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt32(0), 9007199254740992.000000000 != toUInt32(0), 9007199254740992.000000000 < toUInt32(0), 9007199254740992.000000000 <= toUInt32(0), 9007199254740992.000000000 > toUInt32(0), 9007199254740992.000000000 >= toUInt32(0) , toInt32(0) = 9007199254740992.000000000, toInt32(0) != 9007199254740992.000000000, toInt32(0) < 9007199254740992.000000000, toInt32(0) <= 9007199254740992.000000000, toInt32(0) > 9007199254740992.000000000, toInt32(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt32(0), 9007199254740992.000000000 != toInt32(0), 9007199254740992.000000000 < toInt32(0), 9007199254740992.000000000 <= toInt32(0), 9007199254740992.000000000 > toInt32(0), 9007199254740992.000000000 >= toInt32(0) , toUInt64(0) = 9007199254740992.000000000, toUInt64(0) != 9007199254740992.000000000, toUInt64(0) < 9007199254740992.000000000, toUInt64(0) <= 9007199254740992.000000000, toUInt64(0) > 9007199254740992.000000000, toUInt64(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt64(0), 9007199254740992.000000000 != toUInt64(0), 9007199254740992.000000000 < toUInt64(0), 9007199254740992.000000000 <= 
toUInt64(0), 9007199254740992.000000000 > toUInt64(0), 9007199254740992.000000000 >= toUInt64(0) , toInt64(0) = 9007199254740992.000000000, toInt64(0) != 9007199254740992.000000000, toInt64(0) < 9007199254740992.000000000, toInt64(0) <= 9007199254740992.000000000, toInt64(0) > 9007199254740992.000000000, toInt64(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt64(0), 9007199254740992.000000000 != toInt64(0), 9007199254740992.000000000 < toInt64(0), 9007199254740992.000000000 <= toInt64(0), 9007199254740992.000000000 > toInt64(0), 9007199254740992.000000000 >= toInt64(0) ; +SELECT '0', '2251799813685247.500000000', 0 = 2251799813685247.500000000, 0 != 2251799813685247.500000000, 0 < 2251799813685247.500000000, 0 <= 2251799813685247.500000000, 0 > 2251799813685247.500000000, 0 >= 2251799813685247.500000000, 2251799813685247.500000000 = 0, 2251799813685247.500000000 != 0, 2251799813685247.500000000 < 0, 2251799813685247.500000000 <= 0, 2251799813685247.500000000 > 0, 2251799813685247.500000000 >= 0 , toUInt8(0) = 2251799813685247.500000000, toUInt8(0) != 2251799813685247.500000000, toUInt8(0) < 2251799813685247.500000000, toUInt8(0) <= 2251799813685247.500000000, toUInt8(0) > 2251799813685247.500000000, toUInt8(0) >= 2251799813685247.500000000, 2251799813685247.500000000 = toUInt8(0), 2251799813685247.500000000 != toUInt8(0), 2251799813685247.500000000 < toUInt8(0), 2251799813685247.500000000 <= toUInt8(0), 2251799813685247.500000000 > toUInt8(0), 2251799813685247.500000000 >= toUInt8(0) , toInt8(0) = 2251799813685247.500000000, toInt8(0) != 2251799813685247.500000000, toInt8(0) < 2251799813685247.500000000, toInt8(0) <= 2251799813685247.500000000, toInt8(0) > 2251799813685247.500000000, toInt8(0) >= 2251799813685247.500000000, 2251799813685247.500000000 = toInt8(0), 2251799813685247.500000000 != toInt8(0), 2251799813685247.500000000 < toInt8(0), 2251799813685247.500000000 <= toInt8(0), 2251799813685247.500000000 > toInt8(0), 2251799813685247.500000000 >= toInt8(0) , toUInt16(0) = 2251799813685247.500000000, toUInt16(0) != 2251799813685247.500000000, toUInt16(0) < 2251799813685247.500000000, toUInt16(0) <= 2251799813685247.500000000, toUInt16(0) > 2251799813685247.500000000, toUInt16(0) >= 2251799813685247.500000000, 2251799813685247.500000000 = toUInt16(0), 2251799813685247.500000000 != toUInt16(0), 2251799813685247.500000000 < toUInt16(0), 2251799813685247.500000000 <= toUInt16(0), 2251799813685247.500000000 > toUInt16(0), 2251799813685247.500000000 >= toUInt16(0) , toInt16(0) = 2251799813685247.500000000, toInt16(0) != 2251799813685247.500000000, toInt16(0) < 2251799813685247.500000000, toInt16(0) <= 2251799813685247.500000000, toInt16(0) > 2251799813685247.500000000, toInt16(0) >= 2251799813685247.500000000, 2251799813685247.500000000 = toInt16(0), 2251799813685247.500000000 != toInt16(0), 2251799813685247.500000000 < toInt16(0), 2251799813685247.500000000 <= toInt16(0), 2251799813685247.500000000 > toInt16(0), 2251799813685247.500000000 >= toInt16(0) , toUInt32(0) = 2251799813685247.500000000, toUInt32(0) != 2251799813685247.500000000, toUInt32(0) < 2251799813685247.500000000, toUInt32(0) <= 2251799813685247.500000000, toUInt32(0) > 2251799813685247.500000000, toUInt32(0) >= 2251799813685247.500000000, 2251799813685247.500000000 = toUInt32(0), 2251799813685247.500000000 != toUInt32(0), 2251799813685247.500000000 < toUInt32(0), 2251799813685247.500000000 <= toUInt32(0), 2251799813685247.500000000 > toUInt32(0), 2251799813685247.500000000 >= toUInt32(0) , toInt32(0) = 
2251799813685247.500000000, toInt32(0) != 2251799813685247.500000000, toInt32(0) < 2251799813685247.500000000, toInt32(0) <= 2251799813685247.500000000, toInt32(0) > 2251799813685247.500000000, toInt32(0) >= 2251799813685247.500000000, 2251799813685247.500000000 = toInt32(0), 2251799813685247.500000000 != toInt32(0), 2251799813685247.500000000 < toInt32(0), 2251799813685247.500000000 <= toInt32(0), 2251799813685247.500000000 > toInt32(0), 2251799813685247.500000000 >= toInt32(0) , toUInt64(0) = 2251799813685247.500000000, toUInt64(0) != 2251799813685247.500000000, toUInt64(0) < 2251799813685247.500000000, toUInt64(0) <= 2251799813685247.500000000, toUInt64(0) > 2251799813685247.500000000, toUInt64(0) >= 2251799813685247.500000000, 2251799813685247.500000000 = toUInt64(0), 2251799813685247.500000000 != toUInt64(0), 2251799813685247.500000000 < toUInt64(0), 2251799813685247.500000000 <= toUInt64(0), 2251799813685247.500000000 > toUInt64(0), 2251799813685247.500000000 >= toUInt64(0) , toInt64(0) = 2251799813685247.500000000, toInt64(0) != 2251799813685247.500000000, toInt64(0) < 2251799813685247.500000000, toInt64(0) <= 2251799813685247.500000000, toInt64(0) > 2251799813685247.500000000, toInt64(0) >= 2251799813685247.500000000, 2251799813685247.500000000 = toInt64(0), 2251799813685247.500000000 != toInt64(0), 2251799813685247.500000000 < toInt64(0), 2251799813685247.500000000 <= toInt64(0), 2251799813685247.500000000 > toInt64(0), 2251799813685247.500000000 >= toInt64(0) ; +SELECT '0', '2251799813685248.500000000', 0 = 2251799813685248.500000000, 0 != 2251799813685248.500000000, 0 < 2251799813685248.500000000, 0 <= 2251799813685248.500000000, 0 > 2251799813685248.500000000, 0 >= 2251799813685248.500000000, 2251799813685248.500000000 = 0, 2251799813685248.500000000 != 0, 2251799813685248.500000000 < 0, 2251799813685248.500000000 <= 0, 2251799813685248.500000000 > 0, 2251799813685248.500000000 >= 0 , toUInt8(0) = 2251799813685248.500000000, toUInt8(0) != 2251799813685248.500000000, toUInt8(0) < 2251799813685248.500000000, toUInt8(0) <= 2251799813685248.500000000, toUInt8(0) > 2251799813685248.500000000, toUInt8(0) >= 2251799813685248.500000000, 2251799813685248.500000000 = toUInt8(0), 2251799813685248.500000000 != toUInt8(0), 2251799813685248.500000000 < toUInt8(0), 2251799813685248.500000000 <= toUInt8(0), 2251799813685248.500000000 > toUInt8(0), 2251799813685248.500000000 >= toUInt8(0) , toInt8(0) = 2251799813685248.500000000, toInt8(0) != 2251799813685248.500000000, toInt8(0) < 2251799813685248.500000000, toInt8(0) <= 2251799813685248.500000000, toInt8(0) > 2251799813685248.500000000, toInt8(0) >= 2251799813685248.500000000, 2251799813685248.500000000 = toInt8(0), 2251799813685248.500000000 != toInt8(0), 2251799813685248.500000000 < toInt8(0), 2251799813685248.500000000 <= toInt8(0), 2251799813685248.500000000 > toInt8(0), 2251799813685248.500000000 >= toInt8(0) , toUInt16(0) = 2251799813685248.500000000, toUInt16(0) != 2251799813685248.500000000, toUInt16(0) < 2251799813685248.500000000, toUInt16(0) <= 2251799813685248.500000000, toUInt16(0) > 2251799813685248.500000000, toUInt16(0) >= 2251799813685248.500000000, 2251799813685248.500000000 = toUInt16(0), 2251799813685248.500000000 != toUInt16(0), 2251799813685248.500000000 < toUInt16(0), 2251799813685248.500000000 <= toUInt16(0), 2251799813685248.500000000 > toUInt16(0), 2251799813685248.500000000 >= toUInt16(0) , toInt16(0) = 2251799813685248.500000000, toInt16(0) != 2251799813685248.500000000, toInt16(0) < 2251799813685248.500000000, 
toInt16(0) <= 2251799813685248.500000000, toInt16(0) > 2251799813685248.500000000, toInt16(0) >= 2251799813685248.500000000, 2251799813685248.500000000 = toInt16(0), 2251799813685248.500000000 != toInt16(0), 2251799813685248.500000000 < toInt16(0), 2251799813685248.500000000 <= toInt16(0), 2251799813685248.500000000 > toInt16(0), 2251799813685248.500000000 >= toInt16(0) , toUInt32(0) = 2251799813685248.500000000, toUInt32(0) != 2251799813685248.500000000, toUInt32(0) < 2251799813685248.500000000, toUInt32(0) <= 2251799813685248.500000000, toUInt32(0) > 2251799813685248.500000000, toUInt32(0) >= 2251799813685248.500000000, 2251799813685248.500000000 = toUInt32(0), 2251799813685248.500000000 != toUInt32(0), 2251799813685248.500000000 < toUInt32(0), 2251799813685248.500000000 <= toUInt32(0), 2251799813685248.500000000 > toUInt32(0), 2251799813685248.500000000 >= toUInt32(0) , toInt32(0) = 2251799813685248.500000000, toInt32(0) != 2251799813685248.500000000, toInt32(0) < 2251799813685248.500000000, toInt32(0) <= 2251799813685248.500000000, toInt32(0) > 2251799813685248.500000000, toInt32(0) >= 2251799813685248.500000000, 2251799813685248.500000000 = toInt32(0), 2251799813685248.500000000 != toInt32(0), 2251799813685248.500000000 < toInt32(0), 2251799813685248.500000000 <= toInt32(0), 2251799813685248.500000000 > toInt32(0), 2251799813685248.500000000 >= toInt32(0) , toUInt64(0) = 2251799813685248.500000000, toUInt64(0) != 2251799813685248.500000000, toUInt64(0) < 2251799813685248.500000000, toUInt64(0) <= 2251799813685248.500000000, toUInt64(0) > 2251799813685248.500000000, toUInt64(0) >= 2251799813685248.500000000, 2251799813685248.500000000 = toUInt64(0), 2251799813685248.500000000 != toUInt64(0), 2251799813685248.500000000 < toUInt64(0), 2251799813685248.500000000 <= toUInt64(0), 2251799813685248.500000000 > toUInt64(0), 2251799813685248.500000000 >= toUInt64(0) , toInt64(0) = 2251799813685248.500000000, toInt64(0) != 2251799813685248.500000000, toInt64(0) < 2251799813685248.500000000, toInt64(0) <= 2251799813685248.500000000, toInt64(0) > 2251799813685248.500000000, toInt64(0) >= 2251799813685248.500000000, 2251799813685248.500000000 = toInt64(0), 2251799813685248.500000000 != toInt64(0), 2251799813685248.500000000 < toInt64(0), 2251799813685248.500000000 <= toInt64(0), 2251799813685248.500000000 > toInt64(0), 2251799813685248.500000000 >= toInt64(0) ; +SELECT '0', '1152921504606846976.000000000', 0 = 1152921504606846976.000000000, 0 != 1152921504606846976.000000000, 0 < 1152921504606846976.000000000, 0 <= 1152921504606846976.000000000, 0 > 1152921504606846976.000000000, 0 >= 1152921504606846976.000000000, 1152921504606846976.000000000 = 0, 1152921504606846976.000000000 != 0, 1152921504606846976.000000000 < 0, 1152921504606846976.000000000 <= 0, 1152921504606846976.000000000 > 0, 1152921504606846976.000000000 >= 0 , toUInt8(0) = 1152921504606846976.000000000, toUInt8(0) != 1152921504606846976.000000000, toUInt8(0) < 1152921504606846976.000000000, toUInt8(0) <= 1152921504606846976.000000000, toUInt8(0) > 1152921504606846976.000000000, toUInt8(0) >= 1152921504606846976.000000000, 1152921504606846976.000000000 = toUInt8(0), 1152921504606846976.000000000 != toUInt8(0), 1152921504606846976.000000000 < toUInt8(0), 1152921504606846976.000000000 <= toUInt8(0), 1152921504606846976.000000000 > toUInt8(0), 1152921504606846976.000000000 >= toUInt8(0) , toInt8(0) = 1152921504606846976.000000000, toInt8(0) != 1152921504606846976.000000000, toInt8(0) < 1152921504606846976.000000000, toInt8(0) <= 
1152921504606846976.000000000, toInt8(0) > 1152921504606846976.000000000, toInt8(0) >= 1152921504606846976.000000000, 1152921504606846976.000000000 = toInt8(0), 1152921504606846976.000000000 != toInt8(0), 1152921504606846976.000000000 < toInt8(0), 1152921504606846976.000000000 <= toInt8(0), 1152921504606846976.000000000 > toInt8(0), 1152921504606846976.000000000 >= toInt8(0) , toUInt16(0) = 1152921504606846976.000000000, toUInt16(0) != 1152921504606846976.000000000, toUInt16(0) < 1152921504606846976.000000000, toUInt16(0) <= 1152921504606846976.000000000, toUInt16(0) > 1152921504606846976.000000000, toUInt16(0) >= 1152921504606846976.000000000, 1152921504606846976.000000000 = toUInt16(0), 1152921504606846976.000000000 != toUInt16(0), 1152921504606846976.000000000 < toUInt16(0), 1152921504606846976.000000000 <= toUInt16(0), 1152921504606846976.000000000 > toUInt16(0), 1152921504606846976.000000000 >= toUInt16(0) , toInt16(0) = 1152921504606846976.000000000, toInt16(0) != 1152921504606846976.000000000, toInt16(0) < 1152921504606846976.000000000, toInt16(0) <= 1152921504606846976.000000000, toInt16(0) > 1152921504606846976.000000000, toInt16(0) >= 1152921504606846976.000000000, 1152921504606846976.000000000 = toInt16(0), 1152921504606846976.000000000 != toInt16(0), 1152921504606846976.000000000 < toInt16(0), 1152921504606846976.000000000 <= toInt16(0), 1152921504606846976.000000000 > toInt16(0), 1152921504606846976.000000000 >= toInt16(0) , toUInt32(0) = 1152921504606846976.000000000, toUInt32(0) != 1152921504606846976.000000000, toUInt32(0) < 1152921504606846976.000000000, toUInt32(0) <= 1152921504606846976.000000000, toUInt32(0) > 1152921504606846976.000000000, toUInt32(0) >= 1152921504606846976.000000000, 1152921504606846976.000000000 = toUInt32(0), 1152921504606846976.000000000 != toUInt32(0), 1152921504606846976.000000000 < toUInt32(0), 1152921504606846976.000000000 <= toUInt32(0), 1152921504606846976.000000000 > toUInt32(0), 1152921504606846976.000000000 >= toUInt32(0) , toInt32(0) = 1152921504606846976.000000000, toInt32(0) != 1152921504606846976.000000000, toInt32(0) < 1152921504606846976.000000000, toInt32(0) <= 1152921504606846976.000000000, toInt32(0) > 1152921504606846976.000000000, toInt32(0) >= 1152921504606846976.000000000, 1152921504606846976.000000000 = toInt32(0), 1152921504606846976.000000000 != toInt32(0), 1152921504606846976.000000000 < toInt32(0), 1152921504606846976.000000000 <= toInt32(0), 1152921504606846976.000000000 > toInt32(0), 1152921504606846976.000000000 >= toInt32(0) , toUInt64(0) = 1152921504606846976.000000000, toUInt64(0) != 1152921504606846976.000000000, toUInt64(0) < 1152921504606846976.000000000, toUInt64(0) <= 1152921504606846976.000000000, toUInt64(0) > 1152921504606846976.000000000, toUInt64(0) >= 1152921504606846976.000000000, 1152921504606846976.000000000 = toUInt64(0), 1152921504606846976.000000000 != toUInt64(0), 1152921504606846976.000000000 < toUInt64(0), 1152921504606846976.000000000 <= toUInt64(0), 1152921504606846976.000000000 > toUInt64(0), 1152921504606846976.000000000 >= toUInt64(0) , toInt64(0) = 1152921504606846976.000000000, toInt64(0) != 1152921504606846976.000000000, toInt64(0) < 1152921504606846976.000000000, toInt64(0) <= 1152921504606846976.000000000, toInt64(0) > 1152921504606846976.000000000, toInt64(0) >= 1152921504606846976.000000000, 1152921504606846976.000000000 = toInt64(0), 1152921504606846976.000000000 != toInt64(0), 1152921504606846976.000000000 < toInt64(0), 1152921504606846976.000000000 <= toInt64(0), 
1152921504606846976.000000000 > toInt64(0), 1152921504606846976.000000000 >= toInt64(0) ; +SELECT '0', '-1152921504606846976.000000000', 0 = -1152921504606846976.000000000, 0 != -1152921504606846976.000000000, 0 < -1152921504606846976.000000000, 0 <= -1152921504606846976.000000000, 0 > -1152921504606846976.000000000, 0 >= -1152921504606846976.000000000, -1152921504606846976.000000000 = 0, -1152921504606846976.000000000 != 0, -1152921504606846976.000000000 < 0, -1152921504606846976.000000000 <= 0, -1152921504606846976.000000000 > 0, -1152921504606846976.000000000 >= 0 , toUInt8(0) = -1152921504606846976.000000000, toUInt8(0) != -1152921504606846976.000000000, toUInt8(0) < -1152921504606846976.000000000, toUInt8(0) <= -1152921504606846976.000000000, toUInt8(0) > -1152921504606846976.000000000, toUInt8(0) >= -1152921504606846976.000000000, -1152921504606846976.000000000 = toUInt8(0), -1152921504606846976.000000000 != toUInt8(0), -1152921504606846976.000000000 < toUInt8(0), -1152921504606846976.000000000 <= toUInt8(0), -1152921504606846976.000000000 > toUInt8(0), -1152921504606846976.000000000 >= toUInt8(0) , toInt8(0) = -1152921504606846976.000000000, toInt8(0) != -1152921504606846976.000000000, toInt8(0) < -1152921504606846976.000000000, toInt8(0) <= -1152921504606846976.000000000, toInt8(0) > -1152921504606846976.000000000, toInt8(0) >= -1152921504606846976.000000000, -1152921504606846976.000000000 = toInt8(0), -1152921504606846976.000000000 != toInt8(0), -1152921504606846976.000000000 < toInt8(0), -1152921504606846976.000000000 <= toInt8(0), -1152921504606846976.000000000 > toInt8(0), -1152921504606846976.000000000 >= toInt8(0) , toUInt16(0) = -1152921504606846976.000000000, toUInt16(0) != -1152921504606846976.000000000, toUInt16(0) < -1152921504606846976.000000000, toUInt16(0) <= -1152921504606846976.000000000, toUInt16(0) > -1152921504606846976.000000000, toUInt16(0) >= -1152921504606846976.000000000, -1152921504606846976.000000000 = toUInt16(0), -1152921504606846976.000000000 != toUInt16(0), -1152921504606846976.000000000 < toUInt16(0), -1152921504606846976.000000000 <= toUInt16(0), -1152921504606846976.000000000 > toUInt16(0), -1152921504606846976.000000000 >= toUInt16(0) , toInt16(0) = -1152921504606846976.000000000, toInt16(0) != -1152921504606846976.000000000, toInt16(0) < -1152921504606846976.000000000, toInt16(0) <= -1152921504606846976.000000000, toInt16(0) > -1152921504606846976.000000000, toInt16(0) >= -1152921504606846976.000000000, -1152921504606846976.000000000 = toInt16(0), -1152921504606846976.000000000 != toInt16(0), -1152921504606846976.000000000 < toInt16(0), -1152921504606846976.000000000 <= toInt16(0), -1152921504606846976.000000000 > toInt16(0), -1152921504606846976.000000000 >= toInt16(0) , toUInt32(0) = -1152921504606846976.000000000, toUInt32(0) != -1152921504606846976.000000000, toUInt32(0) < -1152921504606846976.000000000, toUInt32(0) <= -1152921504606846976.000000000, toUInt32(0) > -1152921504606846976.000000000, toUInt32(0) >= -1152921504606846976.000000000, -1152921504606846976.000000000 = toUInt32(0), -1152921504606846976.000000000 != toUInt32(0), -1152921504606846976.000000000 < toUInt32(0), -1152921504606846976.000000000 <= toUInt32(0), -1152921504606846976.000000000 > toUInt32(0), -1152921504606846976.000000000 >= toUInt32(0) , toInt32(0) = -1152921504606846976.000000000, toInt32(0) != -1152921504606846976.000000000, toInt32(0) < -1152921504606846976.000000000, toInt32(0) <= -1152921504606846976.000000000, toInt32(0) > -1152921504606846976.000000000, 
toInt32(0) >= -1152921504606846976.000000000, -1152921504606846976.000000000 = toInt32(0), -1152921504606846976.000000000 != toInt32(0), -1152921504606846976.000000000 < toInt32(0), -1152921504606846976.000000000 <= toInt32(0), -1152921504606846976.000000000 > toInt32(0), -1152921504606846976.000000000 >= toInt32(0) , toUInt64(0) = -1152921504606846976.000000000, toUInt64(0) != -1152921504606846976.000000000, toUInt64(0) < -1152921504606846976.000000000, toUInt64(0) <= -1152921504606846976.000000000, toUInt64(0) > -1152921504606846976.000000000, toUInt64(0) >= -1152921504606846976.000000000, -1152921504606846976.000000000 = toUInt64(0), -1152921504606846976.000000000 != toUInt64(0), -1152921504606846976.000000000 < toUInt64(0), -1152921504606846976.000000000 <= toUInt64(0), -1152921504606846976.000000000 > toUInt64(0), -1152921504606846976.000000000 >= toUInt64(0) , toInt64(0) = -1152921504606846976.000000000, toInt64(0) != -1152921504606846976.000000000, toInt64(0) < -1152921504606846976.000000000, toInt64(0) <= -1152921504606846976.000000000, toInt64(0) > -1152921504606846976.000000000, toInt64(0) >= -1152921504606846976.000000000, -1152921504606846976.000000000 = toInt64(0), -1152921504606846976.000000000 != toInt64(0), -1152921504606846976.000000000 < toInt64(0), -1152921504606846976.000000000 <= toInt64(0), -1152921504606846976.000000000 > toInt64(0), -1152921504606846976.000000000 >= toInt64(0) ; +SELECT '0', '-9223372036854786048.000000000', 0 = -9223372036854786048.000000000, 0 != -9223372036854786048.000000000, 0 < -9223372036854786048.000000000, 0 <= -9223372036854786048.000000000, 0 > -9223372036854786048.000000000, 0 >= -9223372036854786048.000000000, -9223372036854786048.000000000 = 0, -9223372036854786048.000000000 != 0, -9223372036854786048.000000000 < 0, -9223372036854786048.000000000 <= 0, -9223372036854786048.000000000 > 0, -9223372036854786048.000000000 >= 0 , toUInt8(0) = -9223372036854786048.000000000, toUInt8(0) != -9223372036854786048.000000000, toUInt8(0) < -9223372036854786048.000000000, toUInt8(0) <= -9223372036854786048.000000000, toUInt8(0) > -9223372036854786048.000000000, toUInt8(0) >= -9223372036854786048.000000000, -9223372036854786048.000000000 = toUInt8(0), -9223372036854786048.000000000 != toUInt8(0), -9223372036854786048.000000000 < toUInt8(0), -9223372036854786048.000000000 <= toUInt8(0), -9223372036854786048.000000000 > toUInt8(0), -9223372036854786048.000000000 >= toUInt8(0) , toInt8(0) = -9223372036854786048.000000000, toInt8(0) != -9223372036854786048.000000000, toInt8(0) < -9223372036854786048.000000000, toInt8(0) <= -9223372036854786048.000000000, toInt8(0) > -9223372036854786048.000000000, toInt8(0) >= -9223372036854786048.000000000, -9223372036854786048.000000000 = toInt8(0), -9223372036854786048.000000000 != toInt8(0), -9223372036854786048.000000000 < toInt8(0), -9223372036854786048.000000000 <= toInt8(0), -9223372036854786048.000000000 > toInt8(0), -9223372036854786048.000000000 >= toInt8(0) , toUInt16(0) = -9223372036854786048.000000000, toUInt16(0) != -9223372036854786048.000000000, toUInt16(0) < -9223372036854786048.000000000, toUInt16(0) <= -9223372036854786048.000000000, toUInt16(0) > -9223372036854786048.000000000, toUInt16(0) >= -9223372036854786048.000000000, -9223372036854786048.000000000 = toUInt16(0), -9223372036854786048.000000000 != toUInt16(0), -9223372036854786048.000000000 < toUInt16(0), -9223372036854786048.000000000 <= toUInt16(0), -9223372036854786048.000000000 > toUInt16(0), -9223372036854786048.000000000 >= toUInt16(0) , 
toInt16(0) = -9223372036854786048.000000000, toInt16(0) != -9223372036854786048.000000000, toInt16(0) < -9223372036854786048.000000000, toInt16(0) <= -9223372036854786048.000000000, toInt16(0) > -9223372036854786048.000000000, toInt16(0) >= -9223372036854786048.000000000, -9223372036854786048.000000000 = toInt16(0), -9223372036854786048.000000000 != toInt16(0), -9223372036854786048.000000000 < toInt16(0), -9223372036854786048.000000000 <= toInt16(0), -9223372036854786048.000000000 > toInt16(0), -9223372036854786048.000000000 >= toInt16(0) , toUInt32(0) = -9223372036854786048.000000000, toUInt32(0) != -9223372036854786048.000000000, toUInt32(0) < -9223372036854786048.000000000, toUInt32(0) <= -9223372036854786048.000000000, toUInt32(0) > -9223372036854786048.000000000, toUInt32(0) >= -9223372036854786048.000000000, -9223372036854786048.000000000 = toUInt32(0), -9223372036854786048.000000000 != toUInt32(0), -9223372036854786048.000000000 < toUInt32(0), -9223372036854786048.000000000 <= toUInt32(0), -9223372036854786048.000000000 > toUInt32(0), -9223372036854786048.000000000 >= toUInt32(0) , toInt32(0) = -9223372036854786048.000000000, toInt32(0) != -9223372036854786048.000000000, toInt32(0) < -9223372036854786048.000000000, toInt32(0) <= -9223372036854786048.000000000, toInt32(0) > -9223372036854786048.000000000, toInt32(0) >= -9223372036854786048.000000000, -9223372036854786048.000000000 = toInt32(0), -9223372036854786048.000000000 != toInt32(0), -9223372036854786048.000000000 < toInt32(0), -9223372036854786048.000000000 <= toInt32(0), -9223372036854786048.000000000 > toInt32(0), -9223372036854786048.000000000 >= toInt32(0) , toUInt64(0) = -9223372036854786048.000000000, toUInt64(0) != -9223372036854786048.000000000, toUInt64(0) < -9223372036854786048.000000000, toUInt64(0) <= -9223372036854786048.000000000, toUInt64(0) > -9223372036854786048.000000000, toUInt64(0) >= -9223372036854786048.000000000, -9223372036854786048.000000000 = toUInt64(0), -9223372036854786048.000000000 != toUInt64(0), -9223372036854786048.000000000 < toUInt64(0), -9223372036854786048.000000000 <= toUInt64(0), -9223372036854786048.000000000 > toUInt64(0), -9223372036854786048.000000000 >= toUInt64(0) , toInt64(0) = -9223372036854786048.000000000, toInt64(0) != -9223372036854786048.000000000, toInt64(0) < -9223372036854786048.000000000, toInt64(0) <= -9223372036854786048.000000000, toInt64(0) > -9223372036854786048.000000000, toInt64(0) >= -9223372036854786048.000000000, -9223372036854786048.000000000 = toInt64(0), -9223372036854786048.000000000 != toInt64(0), -9223372036854786048.000000000 < toInt64(0), -9223372036854786048.000000000 <= toInt64(0), -9223372036854786048.000000000 > toInt64(0), -9223372036854786048.000000000 >= toInt64(0) ; +SELECT '0', '9223372036854786048.000000000', 0 = 9223372036854786048.000000000, 0 != 9223372036854786048.000000000, 0 < 9223372036854786048.000000000, 0 <= 9223372036854786048.000000000, 0 > 9223372036854786048.000000000, 0 >= 9223372036854786048.000000000, 9223372036854786048.000000000 = 0, 9223372036854786048.000000000 != 0, 9223372036854786048.000000000 < 0, 9223372036854786048.000000000 <= 0, 9223372036854786048.000000000 > 0, 9223372036854786048.000000000 >= 0 , toUInt8(0) = 9223372036854786048.000000000, toUInt8(0) != 9223372036854786048.000000000, toUInt8(0) < 9223372036854786048.000000000, toUInt8(0) <= 9223372036854786048.000000000, toUInt8(0) > 9223372036854786048.000000000, toUInt8(0) >= 9223372036854786048.000000000, 9223372036854786048.000000000 = toUInt8(0), 
9223372036854786048.000000000 != toUInt8(0), 9223372036854786048.000000000 < toUInt8(0), 9223372036854786048.000000000 <= toUInt8(0), 9223372036854786048.000000000 > toUInt8(0), 9223372036854786048.000000000 >= toUInt8(0) , toInt8(0) = 9223372036854786048.000000000, toInt8(0) != 9223372036854786048.000000000, toInt8(0) < 9223372036854786048.000000000, toInt8(0) <= 9223372036854786048.000000000, toInt8(0) > 9223372036854786048.000000000, toInt8(0) >= 9223372036854786048.000000000, 9223372036854786048.000000000 = toInt8(0), 9223372036854786048.000000000 != toInt8(0), 9223372036854786048.000000000 < toInt8(0), 9223372036854786048.000000000 <= toInt8(0), 9223372036854786048.000000000 > toInt8(0), 9223372036854786048.000000000 >= toInt8(0) , toUInt16(0) = 9223372036854786048.000000000, toUInt16(0) != 9223372036854786048.000000000, toUInt16(0) < 9223372036854786048.000000000, toUInt16(0) <= 9223372036854786048.000000000, toUInt16(0) > 9223372036854786048.000000000, toUInt16(0) >= 9223372036854786048.000000000, 9223372036854786048.000000000 = toUInt16(0), 9223372036854786048.000000000 != toUInt16(0), 9223372036854786048.000000000 < toUInt16(0), 9223372036854786048.000000000 <= toUInt16(0), 9223372036854786048.000000000 > toUInt16(0), 9223372036854786048.000000000 >= toUInt16(0) , toInt16(0) = 9223372036854786048.000000000, toInt16(0) != 9223372036854786048.000000000, toInt16(0) < 9223372036854786048.000000000, toInt16(0) <= 9223372036854786048.000000000, toInt16(0) > 9223372036854786048.000000000, toInt16(0) >= 9223372036854786048.000000000, 9223372036854786048.000000000 = toInt16(0), 9223372036854786048.000000000 != toInt16(0), 9223372036854786048.000000000 < toInt16(0), 9223372036854786048.000000000 <= toInt16(0), 9223372036854786048.000000000 > toInt16(0), 9223372036854786048.000000000 >= toInt16(0) , toUInt32(0) = 9223372036854786048.000000000, toUInt32(0) != 9223372036854786048.000000000, toUInt32(0) < 9223372036854786048.000000000, toUInt32(0) <= 9223372036854786048.000000000, toUInt32(0) > 9223372036854786048.000000000, toUInt32(0) >= 9223372036854786048.000000000, 9223372036854786048.000000000 = toUInt32(0), 9223372036854786048.000000000 != toUInt32(0), 9223372036854786048.000000000 < toUInt32(0), 9223372036854786048.000000000 <= toUInt32(0), 9223372036854786048.000000000 > toUInt32(0), 9223372036854786048.000000000 >= toUInt32(0) , toInt32(0) = 9223372036854786048.000000000, toInt32(0) != 9223372036854786048.000000000, toInt32(0) < 9223372036854786048.000000000, toInt32(0) <= 9223372036854786048.000000000, toInt32(0) > 9223372036854786048.000000000, toInt32(0) >= 9223372036854786048.000000000, 9223372036854786048.000000000 = toInt32(0), 9223372036854786048.000000000 != toInt32(0), 9223372036854786048.000000000 < toInt32(0), 9223372036854786048.000000000 <= toInt32(0), 9223372036854786048.000000000 > toInt32(0), 9223372036854786048.000000000 >= toInt32(0) , toUInt64(0) = 9223372036854786048.000000000, toUInt64(0) != 9223372036854786048.000000000, toUInt64(0) < 9223372036854786048.000000000, toUInt64(0) <= 9223372036854786048.000000000, toUInt64(0) > 9223372036854786048.000000000, toUInt64(0) >= 9223372036854786048.000000000, 9223372036854786048.000000000 = toUInt64(0), 9223372036854786048.000000000 != toUInt64(0), 9223372036854786048.000000000 < toUInt64(0), 9223372036854786048.000000000 <= toUInt64(0), 9223372036854786048.000000000 > toUInt64(0), 9223372036854786048.000000000 >= toUInt64(0) , toInt64(0) = 9223372036854786048.000000000, toInt64(0) != 9223372036854786048.000000000, 
toInt64(0) < 9223372036854786048.000000000, toInt64(0) <= 9223372036854786048.000000000, toInt64(0) > 9223372036854786048.000000000, toInt64(0) >= 9223372036854786048.000000000, 9223372036854786048.000000000 = toInt64(0), 9223372036854786048.000000000 != toInt64(0), 9223372036854786048.000000000 < toInt64(0), 9223372036854786048.000000000 <= toInt64(0), 9223372036854786048.000000000 > toInt64(0), 9223372036854786048.000000000 >= toInt64(0) ; +SELECT '-1', '0.000000000', -1 = 0.000000000, -1 != 0.000000000, -1 < 0.000000000, -1 <= 0.000000000, -1 > 0.000000000, -1 >= 0.000000000, 0.000000000 = -1, 0.000000000 != -1, 0.000000000 < -1, 0.000000000 <= -1, 0.000000000 > -1, 0.000000000 >= -1 , toInt8(-1) = 0.000000000, toInt8(-1) != 0.000000000, toInt8(-1) < 0.000000000, toInt8(-1) <= 0.000000000, toInt8(-1) > 0.000000000, toInt8(-1) >= 0.000000000, 0.000000000 = toInt8(-1), 0.000000000 != toInt8(-1), 0.000000000 < toInt8(-1), 0.000000000 <= toInt8(-1), 0.000000000 > toInt8(-1), 0.000000000 >= toInt8(-1) , toInt16(-1) = 0.000000000, toInt16(-1) != 0.000000000, toInt16(-1) < 0.000000000, toInt16(-1) <= 0.000000000, toInt16(-1) > 0.000000000, toInt16(-1) >= 0.000000000, 0.000000000 = toInt16(-1), 0.000000000 != toInt16(-1), 0.000000000 < toInt16(-1), 0.000000000 <= toInt16(-1), 0.000000000 > toInt16(-1), 0.000000000 >= toInt16(-1) , toInt32(-1) = 0.000000000, toInt32(-1) != 0.000000000, toInt32(-1) < 0.000000000, toInt32(-1) <= 0.000000000, toInt32(-1) > 0.000000000, toInt32(-1) >= 0.000000000, 0.000000000 = toInt32(-1), 0.000000000 != toInt32(-1), 0.000000000 < toInt32(-1), 0.000000000 <= toInt32(-1), 0.000000000 > toInt32(-1), 0.000000000 >= toInt32(-1) , toInt64(-1) = 0.000000000, toInt64(-1) != 0.000000000, toInt64(-1) < 0.000000000, toInt64(-1) <= 0.000000000, toInt64(-1) > 0.000000000, toInt64(-1) >= 0.000000000, 0.000000000 = toInt64(-1), 0.000000000 != toInt64(-1), 0.000000000 < toInt64(-1), 0.000000000 <= toInt64(-1), 0.000000000 > toInt64(-1), 0.000000000 >= toInt64(-1) ; +SELECT '-1', '-1.000000000', -1 = -1.000000000, -1 != -1.000000000, -1 < -1.000000000, -1 <= -1.000000000, -1 > -1.000000000, -1 >= -1.000000000, -1.000000000 = -1, -1.000000000 != -1, -1.000000000 < -1, -1.000000000 <= -1, -1.000000000 > -1, -1.000000000 >= -1 , toInt8(-1) = -1.000000000, toInt8(-1) != -1.000000000, toInt8(-1) < -1.000000000, toInt8(-1) <= -1.000000000, toInt8(-1) > -1.000000000, toInt8(-1) >= -1.000000000, -1.000000000 = toInt8(-1), -1.000000000 != toInt8(-1), -1.000000000 < toInt8(-1), -1.000000000 <= toInt8(-1), -1.000000000 > toInt8(-1), -1.000000000 >= toInt8(-1) , toInt16(-1) = -1.000000000, toInt16(-1) != -1.000000000, toInt16(-1) < -1.000000000, toInt16(-1) <= -1.000000000, toInt16(-1) > -1.000000000, toInt16(-1) >= -1.000000000, -1.000000000 = toInt16(-1), -1.000000000 != toInt16(-1), -1.000000000 < toInt16(-1), -1.000000000 <= toInt16(-1), -1.000000000 > toInt16(-1), -1.000000000 >= toInt16(-1) , toInt32(-1) = -1.000000000, toInt32(-1) != -1.000000000, toInt32(-1) < -1.000000000, toInt32(-1) <= -1.000000000, toInt32(-1) > -1.000000000, toInt32(-1) >= -1.000000000, -1.000000000 = toInt32(-1), -1.000000000 != toInt32(-1), -1.000000000 < toInt32(-1), -1.000000000 <= toInt32(-1), -1.000000000 > toInt32(-1), -1.000000000 >= toInt32(-1) , toInt64(-1) = -1.000000000, toInt64(-1) != -1.000000000, toInt64(-1) < -1.000000000, toInt64(-1) <= -1.000000000, toInt64(-1) > -1.000000000, toInt64(-1) >= -1.000000000, -1.000000000 = toInt64(-1), -1.000000000 != toInt64(-1), -1.000000000 < toInt64(-1), 
-1.000000000 <= toInt64(-1), -1.000000000 > toInt64(-1), -1.000000000 >= toInt64(-1) ; +SELECT '-1', '1.000000000', -1 = 1.000000000, -1 != 1.000000000, -1 < 1.000000000, -1 <= 1.000000000, -1 > 1.000000000, -1 >= 1.000000000, 1.000000000 = -1, 1.000000000 != -1, 1.000000000 < -1, 1.000000000 <= -1, 1.000000000 > -1, 1.000000000 >= -1 , toInt8(-1) = 1.000000000, toInt8(-1) != 1.000000000, toInt8(-1) < 1.000000000, toInt8(-1) <= 1.000000000, toInt8(-1) > 1.000000000, toInt8(-1) >= 1.000000000, 1.000000000 = toInt8(-1), 1.000000000 != toInt8(-1), 1.000000000 < toInt8(-1), 1.000000000 <= toInt8(-1), 1.000000000 > toInt8(-1), 1.000000000 >= toInt8(-1) , toInt16(-1) = 1.000000000, toInt16(-1) != 1.000000000, toInt16(-1) < 1.000000000, toInt16(-1) <= 1.000000000, toInt16(-1) > 1.000000000, toInt16(-1) >= 1.000000000, 1.000000000 = toInt16(-1), 1.000000000 != toInt16(-1), 1.000000000 < toInt16(-1), 1.000000000 <= toInt16(-1), 1.000000000 > toInt16(-1), 1.000000000 >= toInt16(-1) , toInt32(-1) = 1.000000000, toInt32(-1) != 1.000000000, toInt32(-1) < 1.000000000, toInt32(-1) <= 1.000000000, toInt32(-1) > 1.000000000, toInt32(-1) >= 1.000000000, 1.000000000 = toInt32(-1), 1.000000000 != toInt32(-1), 1.000000000 < toInt32(-1), 1.000000000 <= toInt32(-1), 1.000000000 > toInt32(-1), 1.000000000 >= toInt32(-1) , toInt64(-1) = 1.000000000, toInt64(-1) != 1.000000000, toInt64(-1) < 1.000000000, toInt64(-1) <= 1.000000000, toInt64(-1) > 1.000000000, toInt64(-1) >= 1.000000000, 1.000000000 = toInt64(-1), 1.000000000 != toInt64(-1), 1.000000000 < toInt64(-1), 1.000000000 <= toInt64(-1), 1.000000000 > toInt64(-1), 1.000000000 >= toInt64(-1) ; +SELECT '-1', '18446744073709551616.000000000', -1 = 18446744073709551616.000000000, -1 != 18446744073709551616.000000000, -1 < 18446744073709551616.000000000, -1 <= 18446744073709551616.000000000, -1 > 18446744073709551616.000000000, -1 >= 18446744073709551616.000000000, 18446744073709551616.000000000 = -1, 18446744073709551616.000000000 != -1, 18446744073709551616.000000000 < -1, 18446744073709551616.000000000 <= -1, 18446744073709551616.000000000 > -1, 18446744073709551616.000000000 >= -1 , toInt8(-1) = 18446744073709551616.000000000, toInt8(-1) != 18446744073709551616.000000000, toInt8(-1) < 18446744073709551616.000000000, toInt8(-1) <= 18446744073709551616.000000000, toInt8(-1) > 18446744073709551616.000000000, toInt8(-1) >= 18446744073709551616.000000000, 18446744073709551616.000000000 = toInt8(-1), 18446744073709551616.000000000 != toInt8(-1), 18446744073709551616.000000000 < toInt8(-1), 18446744073709551616.000000000 <= toInt8(-1), 18446744073709551616.000000000 > toInt8(-1), 18446744073709551616.000000000 >= toInt8(-1) , toInt16(-1) = 18446744073709551616.000000000, toInt16(-1) != 18446744073709551616.000000000, toInt16(-1) < 18446744073709551616.000000000, toInt16(-1) <= 18446744073709551616.000000000, toInt16(-1) > 18446744073709551616.000000000, toInt16(-1) >= 18446744073709551616.000000000, 18446744073709551616.000000000 = toInt16(-1), 18446744073709551616.000000000 != toInt16(-1), 18446744073709551616.000000000 < toInt16(-1), 18446744073709551616.000000000 <= toInt16(-1), 18446744073709551616.000000000 > toInt16(-1), 18446744073709551616.000000000 >= toInt16(-1) , toInt32(-1) = 18446744073709551616.000000000, toInt32(-1) != 18446744073709551616.000000000, toInt32(-1) < 18446744073709551616.000000000, toInt32(-1) <= 18446744073709551616.000000000, toInt32(-1) > 18446744073709551616.000000000, toInt32(-1) >= 18446744073709551616.000000000, 
18446744073709551616.000000000 = toInt32(-1), 18446744073709551616.000000000 != toInt32(-1), 18446744073709551616.000000000 < toInt32(-1), 18446744073709551616.000000000 <= toInt32(-1), 18446744073709551616.000000000 > toInt32(-1), 18446744073709551616.000000000 >= toInt32(-1) , toInt64(-1) = 18446744073709551616.000000000, toInt64(-1) != 18446744073709551616.000000000, toInt64(-1) < 18446744073709551616.000000000, toInt64(-1) <= 18446744073709551616.000000000, toInt64(-1) > 18446744073709551616.000000000, toInt64(-1) >= 18446744073709551616.000000000, 18446744073709551616.000000000 = toInt64(-1), 18446744073709551616.000000000 != toInt64(-1), 18446744073709551616.000000000 < toInt64(-1), 18446744073709551616.000000000 <= toInt64(-1), 18446744073709551616.000000000 > toInt64(-1), 18446744073709551616.000000000 >= toInt64(-1) ; +SELECT '-1', '9223372036854775808.000000000', -1 = 9223372036854775808.000000000, -1 != 9223372036854775808.000000000, -1 < 9223372036854775808.000000000, -1 <= 9223372036854775808.000000000, -1 > 9223372036854775808.000000000, -1 >= 9223372036854775808.000000000, 9223372036854775808.000000000 = -1, 9223372036854775808.000000000 != -1, 9223372036854775808.000000000 < -1, 9223372036854775808.000000000 <= -1, 9223372036854775808.000000000 > -1, 9223372036854775808.000000000 >= -1 , toInt8(-1) = 9223372036854775808.000000000, toInt8(-1) != 9223372036854775808.000000000, toInt8(-1) < 9223372036854775808.000000000, toInt8(-1) <= 9223372036854775808.000000000, toInt8(-1) > 9223372036854775808.000000000, toInt8(-1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt8(-1), 9223372036854775808.000000000 != toInt8(-1), 9223372036854775808.000000000 < toInt8(-1), 9223372036854775808.000000000 <= toInt8(-1), 9223372036854775808.000000000 > toInt8(-1), 9223372036854775808.000000000 >= toInt8(-1) , toInt16(-1) = 9223372036854775808.000000000, toInt16(-1) != 9223372036854775808.000000000, toInt16(-1) < 9223372036854775808.000000000, toInt16(-1) <= 9223372036854775808.000000000, toInt16(-1) > 9223372036854775808.000000000, toInt16(-1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt16(-1), 9223372036854775808.000000000 != toInt16(-1), 9223372036854775808.000000000 < toInt16(-1), 9223372036854775808.000000000 <= toInt16(-1), 9223372036854775808.000000000 > toInt16(-1), 9223372036854775808.000000000 >= toInt16(-1) , toInt32(-1) = 9223372036854775808.000000000, toInt32(-1) != 9223372036854775808.000000000, toInt32(-1) < 9223372036854775808.000000000, toInt32(-1) <= 9223372036854775808.000000000, toInt32(-1) > 9223372036854775808.000000000, toInt32(-1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt32(-1), 9223372036854775808.000000000 != toInt32(-1), 9223372036854775808.000000000 < toInt32(-1), 9223372036854775808.000000000 <= toInt32(-1), 9223372036854775808.000000000 > toInt32(-1), 9223372036854775808.000000000 >= toInt32(-1) , toInt64(-1) = 9223372036854775808.000000000, toInt64(-1) != 9223372036854775808.000000000, toInt64(-1) < 9223372036854775808.000000000, toInt64(-1) <= 9223372036854775808.000000000, toInt64(-1) > 9223372036854775808.000000000, toInt64(-1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt64(-1), 9223372036854775808.000000000 != toInt64(-1), 9223372036854775808.000000000 < toInt64(-1), 9223372036854775808.000000000 <= toInt64(-1), 9223372036854775808.000000000 > toInt64(-1), 9223372036854775808.000000000 >= toInt64(-1) ; +SELECT '-1', '-9223372036854775808.000000000', -1 = 
-9223372036854775808.000000000, -1 != -9223372036854775808.000000000, -1 < -9223372036854775808.000000000, -1 <= -9223372036854775808.000000000, -1 > -9223372036854775808.000000000, -1 >= -9223372036854775808.000000000, -9223372036854775808.000000000 = -1, -9223372036854775808.000000000 != -1, -9223372036854775808.000000000 < -1, -9223372036854775808.000000000 <= -1, -9223372036854775808.000000000 > -1, -9223372036854775808.000000000 >= -1 , toInt8(-1) = -9223372036854775808.000000000, toInt8(-1) != -9223372036854775808.000000000, toInt8(-1) < -9223372036854775808.000000000, toInt8(-1) <= -9223372036854775808.000000000, toInt8(-1) > -9223372036854775808.000000000, toInt8(-1) >= -9223372036854775808.000000000, -9223372036854775808.000000000 = toInt8(-1), -9223372036854775808.000000000 != toInt8(-1), -9223372036854775808.000000000 < toInt8(-1), -9223372036854775808.000000000 <= toInt8(-1), -9223372036854775808.000000000 > toInt8(-1), -9223372036854775808.000000000 >= toInt8(-1) , toInt16(-1) = -9223372036854775808.000000000, toInt16(-1) != -9223372036854775808.000000000, toInt16(-1) < -9223372036854775808.000000000, toInt16(-1) <= -9223372036854775808.000000000, toInt16(-1) > -9223372036854775808.000000000, toInt16(-1) >= -9223372036854775808.000000000, -9223372036854775808.000000000 = toInt16(-1), -9223372036854775808.000000000 != toInt16(-1), -9223372036854775808.000000000 < toInt16(-1), -9223372036854775808.000000000 <= toInt16(-1), -9223372036854775808.000000000 > toInt16(-1), -9223372036854775808.000000000 >= toInt16(-1) , toInt32(-1) = -9223372036854775808.000000000, toInt32(-1) != -9223372036854775808.000000000, toInt32(-1) < -9223372036854775808.000000000, toInt32(-1) <= -9223372036854775808.000000000, toInt32(-1) > -9223372036854775808.000000000, toInt32(-1) >= -9223372036854775808.000000000, -9223372036854775808.000000000 = toInt32(-1), -9223372036854775808.000000000 != toInt32(-1), -9223372036854775808.000000000 < toInt32(-1), -9223372036854775808.000000000 <= toInt32(-1), -9223372036854775808.000000000 > toInt32(-1), -9223372036854775808.000000000 >= toInt32(-1) , toInt64(-1) = -9223372036854775808.000000000, toInt64(-1) != -9223372036854775808.000000000, toInt64(-1) < -9223372036854775808.000000000, toInt64(-1) <= -9223372036854775808.000000000, toInt64(-1) > -9223372036854775808.000000000, toInt64(-1) >= -9223372036854775808.000000000, -9223372036854775808.000000000 = toInt64(-1), -9223372036854775808.000000000 != toInt64(-1), -9223372036854775808.000000000 < toInt64(-1), -9223372036854775808.000000000 <= toInt64(-1), -9223372036854775808.000000000 > toInt64(-1), -9223372036854775808.000000000 >= toInt64(-1) ; +SELECT '-1', '9223372036854775808.000000000', -1 = 9223372036854775808.000000000, -1 != 9223372036854775808.000000000, -1 < 9223372036854775808.000000000, -1 <= 9223372036854775808.000000000, -1 > 9223372036854775808.000000000, -1 >= 9223372036854775808.000000000, 9223372036854775808.000000000 = -1, 9223372036854775808.000000000 != -1, 9223372036854775808.000000000 < -1, 9223372036854775808.000000000 <= -1, 9223372036854775808.000000000 > -1, 9223372036854775808.000000000 >= -1 , toInt8(-1) = 9223372036854775808.000000000, toInt8(-1) != 9223372036854775808.000000000, toInt8(-1) < 9223372036854775808.000000000, toInt8(-1) <= 9223372036854775808.000000000, toInt8(-1) > 9223372036854775808.000000000, toInt8(-1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt8(-1), 9223372036854775808.000000000 != toInt8(-1), 9223372036854775808.000000000 < 
toInt8(-1), 9223372036854775808.000000000 <= toInt8(-1), 9223372036854775808.000000000 > toInt8(-1), 9223372036854775808.000000000 >= toInt8(-1) , toInt16(-1) = 9223372036854775808.000000000, toInt16(-1) != 9223372036854775808.000000000, toInt16(-1) < 9223372036854775808.000000000, toInt16(-1) <= 9223372036854775808.000000000, toInt16(-1) > 9223372036854775808.000000000, toInt16(-1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt16(-1), 9223372036854775808.000000000 != toInt16(-1), 9223372036854775808.000000000 < toInt16(-1), 9223372036854775808.000000000 <= toInt16(-1), 9223372036854775808.000000000 > toInt16(-1), 9223372036854775808.000000000 >= toInt16(-1) , toInt32(-1) = 9223372036854775808.000000000, toInt32(-1) != 9223372036854775808.000000000, toInt32(-1) < 9223372036854775808.000000000, toInt32(-1) <= 9223372036854775808.000000000, toInt32(-1) > 9223372036854775808.000000000, toInt32(-1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt32(-1), 9223372036854775808.000000000 != toInt32(-1), 9223372036854775808.000000000 < toInt32(-1), 9223372036854775808.000000000 <= toInt32(-1), 9223372036854775808.000000000 > toInt32(-1), 9223372036854775808.000000000 >= toInt32(-1) , toInt64(-1) = 9223372036854775808.000000000, toInt64(-1) != 9223372036854775808.000000000, toInt64(-1) < 9223372036854775808.000000000, toInt64(-1) <= 9223372036854775808.000000000, toInt64(-1) > 9223372036854775808.000000000, toInt64(-1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt64(-1), 9223372036854775808.000000000 != toInt64(-1), 9223372036854775808.000000000 < toInt64(-1), 9223372036854775808.000000000 <= toInt64(-1), 9223372036854775808.000000000 > toInt64(-1), 9223372036854775808.000000000 >= toInt64(-1) ; +SELECT '-1', '2251799813685248.000000000', -1 = 2251799813685248.000000000, -1 != 2251799813685248.000000000, -1 < 2251799813685248.000000000, -1 <= 2251799813685248.000000000, -1 > 2251799813685248.000000000, -1 >= 2251799813685248.000000000, 2251799813685248.000000000 = -1, 2251799813685248.000000000 != -1, 2251799813685248.000000000 < -1, 2251799813685248.000000000 <= -1, 2251799813685248.000000000 > -1, 2251799813685248.000000000 >= -1 , toInt8(-1) = 2251799813685248.000000000, toInt8(-1) != 2251799813685248.000000000, toInt8(-1) < 2251799813685248.000000000, toInt8(-1) <= 2251799813685248.000000000, toInt8(-1) > 2251799813685248.000000000, toInt8(-1) >= 2251799813685248.000000000, 2251799813685248.000000000 = toInt8(-1), 2251799813685248.000000000 != toInt8(-1), 2251799813685248.000000000 < toInt8(-1), 2251799813685248.000000000 <= toInt8(-1), 2251799813685248.000000000 > toInt8(-1), 2251799813685248.000000000 >= toInt8(-1) , toInt16(-1) = 2251799813685248.000000000, toInt16(-1) != 2251799813685248.000000000, toInt16(-1) < 2251799813685248.000000000, toInt16(-1) <= 2251799813685248.000000000, toInt16(-1) > 2251799813685248.000000000, toInt16(-1) >= 2251799813685248.000000000, 2251799813685248.000000000 = toInt16(-1), 2251799813685248.000000000 != toInt16(-1), 2251799813685248.000000000 < toInt16(-1), 2251799813685248.000000000 <= toInt16(-1), 2251799813685248.000000000 > toInt16(-1), 2251799813685248.000000000 >= toInt16(-1) , toInt32(-1) = 2251799813685248.000000000, toInt32(-1) != 2251799813685248.000000000, toInt32(-1) < 2251799813685248.000000000, toInt32(-1) <= 2251799813685248.000000000, toInt32(-1) > 2251799813685248.000000000, toInt32(-1) >= 2251799813685248.000000000, 2251799813685248.000000000 = toInt32(-1), 
2251799813685248.000000000 != toInt32(-1), 2251799813685248.000000000 < toInt32(-1), 2251799813685248.000000000 <= toInt32(-1), 2251799813685248.000000000 > toInt32(-1), 2251799813685248.000000000 >= toInt32(-1) , toInt64(-1) = 2251799813685248.000000000, toInt64(-1) != 2251799813685248.000000000, toInt64(-1) < 2251799813685248.000000000, toInt64(-1) <= 2251799813685248.000000000, toInt64(-1) > 2251799813685248.000000000, toInt64(-1) >= 2251799813685248.000000000, 2251799813685248.000000000 = toInt64(-1), 2251799813685248.000000000 != toInt64(-1), 2251799813685248.000000000 < toInt64(-1), 2251799813685248.000000000 <= toInt64(-1), 2251799813685248.000000000 > toInt64(-1), 2251799813685248.000000000 >= toInt64(-1) ; +SELECT '-1', '4503599627370496.000000000', -1 = 4503599627370496.000000000, -1 != 4503599627370496.000000000, -1 < 4503599627370496.000000000, -1 <= 4503599627370496.000000000, -1 > 4503599627370496.000000000, -1 >= 4503599627370496.000000000, 4503599627370496.000000000 = -1, 4503599627370496.000000000 != -1, 4503599627370496.000000000 < -1, 4503599627370496.000000000 <= -1, 4503599627370496.000000000 > -1, 4503599627370496.000000000 >= -1 , toInt8(-1) = 4503599627370496.000000000, toInt8(-1) != 4503599627370496.000000000, toInt8(-1) < 4503599627370496.000000000, toInt8(-1) <= 4503599627370496.000000000, toInt8(-1) > 4503599627370496.000000000, toInt8(-1) >= 4503599627370496.000000000, 4503599627370496.000000000 = toInt8(-1), 4503599627370496.000000000 != toInt8(-1), 4503599627370496.000000000 < toInt8(-1), 4503599627370496.000000000 <= toInt8(-1), 4503599627370496.000000000 > toInt8(-1), 4503599627370496.000000000 >= toInt8(-1) , toInt16(-1) = 4503599627370496.000000000, toInt16(-1) != 4503599627370496.000000000, toInt16(-1) < 4503599627370496.000000000, toInt16(-1) <= 4503599627370496.000000000, toInt16(-1) > 4503599627370496.000000000, toInt16(-1) >= 4503599627370496.000000000, 4503599627370496.000000000 = toInt16(-1), 4503599627370496.000000000 != toInt16(-1), 4503599627370496.000000000 < toInt16(-1), 4503599627370496.000000000 <= toInt16(-1), 4503599627370496.000000000 > toInt16(-1), 4503599627370496.000000000 >= toInt16(-1) , toInt32(-1) = 4503599627370496.000000000, toInt32(-1) != 4503599627370496.000000000, toInt32(-1) < 4503599627370496.000000000, toInt32(-1) <= 4503599627370496.000000000, toInt32(-1) > 4503599627370496.000000000, toInt32(-1) >= 4503599627370496.000000000, 4503599627370496.000000000 = toInt32(-1), 4503599627370496.000000000 != toInt32(-1), 4503599627370496.000000000 < toInt32(-1), 4503599627370496.000000000 <= toInt32(-1), 4503599627370496.000000000 > toInt32(-1), 4503599627370496.000000000 >= toInt32(-1) , toInt64(-1) = 4503599627370496.000000000, toInt64(-1) != 4503599627370496.000000000, toInt64(-1) < 4503599627370496.000000000, toInt64(-1) <= 4503599627370496.000000000, toInt64(-1) > 4503599627370496.000000000, toInt64(-1) >= 4503599627370496.000000000, 4503599627370496.000000000 = toInt64(-1), 4503599627370496.000000000 != toInt64(-1), 4503599627370496.000000000 < toInt64(-1), 4503599627370496.000000000 <= toInt64(-1), 4503599627370496.000000000 > toInt64(-1), 4503599627370496.000000000 >= toInt64(-1) ; +SELECT '-1', '9007199254740991.000000000', -1 = 9007199254740991.000000000, -1 != 9007199254740991.000000000, -1 < 9007199254740991.000000000, -1 <= 9007199254740991.000000000, -1 > 9007199254740991.000000000, -1 >= 9007199254740991.000000000, 9007199254740991.000000000 = -1, 9007199254740991.000000000 != -1, 9007199254740991.000000000 < -1, 
9007199254740991.000000000 <= -1, 9007199254740991.000000000 > -1, 9007199254740991.000000000 >= -1 , toInt8(-1) = 9007199254740991.000000000, toInt8(-1) != 9007199254740991.000000000, toInt8(-1) < 9007199254740991.000000000, toInt8(-1) <= 9007199254740991.000000000, toInt8(-1) > 9007199254740991.000000000, toInt8(-1) >= 9007199254740991.000000000, 9007199254740991.000000000 = toInt8(-1), 9007199254740991.000000000 != toInt8(-1), 9007199254740991.000000000 < toInt8(-1), 9007199254740991.000000000 <= toInt8(-1), 9007199254740991.000000000 > toInt8(-1), 9007199254740991.000000000 >= toInt8(-1) , toInt16(-1) = 9007199254740991.000000000, toInt16(-1) != 9007199254740991.000000000, toInt16(-1) < 9007199254740991.000000000, toInt16(-1) <= 9007199254740991.000000000, toInt16(-1) > 9007199254740991.000000000, toInt16(-1) >= 9007199254740991.000000000, 9007199254740991.000000000 = toInt16(-1), 9007199254740991.000000000 != toInt16(-1), 9007199254740991.000000000 < toInt16(-1), 9007199254740991.000000000 <= toInt16(-1), 9007199254740991.000000000 > toInt16(-1), 9007199254740991.000000000 >= toInt16(-1) , toInt32(-1) = 9007199254740991.000000000, toInt32(-1) != 9007199254740991.000000000, toInt32(-1) < 9007199254740991.000000000, toInt32(-1) <= 9007199254740991.000000000, toInt32(-1) > 9007199254740991.000000000, toInt32(-1) >= 9007199254740991.000000000, 9007199254740991.000000000 = toInt32(-1), 9007199254740991.000000000 != toInt32(-1), 9007199254740991.000000000 < toInt32(-1), 9007199254740991.000000000 <= toInt32(-1), 9007199254740991.000000000 > toInt32(-1), 9007199254740991.000000000 >= toInt32(-1) , toInt64(-1) = 9007199254740991.000000000, toInt64(-1) != 9007199254740991.000000000, toInt64(-1) < 9007199254740991.000000000, toInt64(-1) <= 9007199254740991.000000000, toInt64(-1) > 9007199254740991.000000000, toInt64(-1) >= 9007199254740991.000000000, 9007199254740991.000000000 = toInt64(-1), 9007199254740991.000000000 != toInt64(-1), 9007199254740991.000000000 < toInt64(-1), 9007199254740991.000000000 <= toInt64(-1), 9007199254740991.000000000 > toInt64(-1), 9007199254740991.000000000 >= toInt64(-1) ; +SELECT '-1', '9007199254740992.000000000', -1 = 9007199254740992.000000000, -1 != 9007199254740992.000000000, -1 < 9007199254740992.000000000, -1 <= 9007199254740992.000000000, -1 > 9007199254740992.000000000, -1 >= 9007199254740992.000000000, 9007199254740992.000000000 = -1, 9007199254740992.000000000 != -1, 9007199254740992.000000000 < -1, 9007199254740992.000000000 <= -1, 9007199254740992.000000000 > -1, 9007199254740992.000000000 >= -1 , toInt8(-1) = 9007199254740992.000000000, toInt8(-1) != 9007199254740992.000000000, toInt8(-1) < 9007199254740992.000000000, toInt8(-1) <= 9007199254740992.000000000, toInt8(-1) > 9007199254740992.000000000, toInt8(-1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt8(-1), 9007199254740992.000000000 != toInt8(-1), 9007199254740992.000000000 < toInt8(-1), 9007199254740992.000000000 <= toInt8(-1), 9007199254740992.000000000 > toInt8(-1), 9007199254740992.000000000 >= toInt8(-1) , toInt16(-1) = 9007199254740992.000000000, toInt16(-1) != 9007199254740992.000000000, toInt16(-1) < 9007199254740992.000000000, toInt16(-1) <= 9007199254740992.000000000, toInt16(-1) > 9007199254740992.000000000, toInt16(-1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt16(-1), 9007199254740992.000000000 != toInt16(-1), 9007199254740992.000000000 < toInt16(-1), 9007199254740992.000000000 <= toInt16(-1), 9007199254740992.000000000 > toInt16(-1), 
9007199254740992.000000000 >= toInt16(-1) , toInt32(-1) = 9007199254740992.000000000, toInt32(-1) != 9007199254740992.000000000, toInt32(-1) < 9007199254740992.000000000, toInt32(-1) <= 9007199254740992.000000000, toInt32(-1) > 9007199254740992.000000000, toInt32(-1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt32(-1), 9007199254740992.000000000 != toInt32(-1), 9007199254740992.000000000 < toInt32(-1), 9007199254740992.000000000 <= toInt32(-1), 9007199254740992.000000000 > toInt32(-1), 9007199254740992.000000000 >= toInt32(-1) , toInt64(-1) = 9007199254740992.000000000, toInt64(-1) != 9007199254740992.000000000, toInt64(-1) < 9007199254740992.000000000, toInt64(-1) <= 9007199254740992.000000000, toInt64(-1) > 9007199254740992.000000000, toInt64(-1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt64(-1), 9007199254740992.000000000 != toInt64(-1), 9007199254740992.000000000 < toInt64(-1), 9007199254740992.000000000 <= toInt64(-1), 9007199254740992.000000000 > toInt64(-1), 9007199254740992.000000000 >= toInt64(-1) ; +SELECT '-1', '9007199254740992.000000000', -1 = 9007199254740992.000000000, -1 != 9007199254740992.000000000, -1 < 9007199254740992.000000000, -1 <= 9007199254740992.000000000, -1 > 9007199254740992.000000000, -1 >= 9007199254740992.000000000, 9007199254740992.000000000 = -1, 9007199254740992.000000000 != -1, 9007199254740992.000000000 < -1, 9007199254740992.000000000 <= -1, 9007199254740992.000000000 > -1, 9007199254740992.000000000 >= -1 , toInt8(-1) = 9007199254740992.000000000, toInt8(-1) != 9007199254740992.000000000, toInt8(-1) < 9007199254740992.000000000, toInt8(-1) <= 9007199254740992.000000000, toInt8(-1) > 9007199254740992.000000000, toInt8(-1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt8(-1), 9007199254740992.000000000 != toInt8(-1), 9007199254740992.000000000 < toInt8(-1), 9007199254740992.000000000 <= toInt8(-1), 9007199254740992.000000000 > toInt8(-1), 9007199254740992.000000000 >= toInt8(-1) , toInt16(-1) = 9007199254740992.000000000, toInt16(-1) != 9007199254740992.000000000, toInt16(-1) < 9007199254740992.000000000, toInt16(-1) <= 9007199254740992.000000000, toInt16(-1) > 9007199254740992.000000000, toInt16(-1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt16(-1), 9007199254740992.000000000 != toInt16(-1), 9007199254740992.000000000 < toInt16(-1), 9007199254740992.000000000 <= toInt16(-1), 9007199254740992.000000000 > toInt16(-1), 9007199254740992.000000000 >= toInt16(-1) , toInt32(-1) = 9007199254740992.000000000, toInt32(-1) != 9007199254740992.000000000, toInt32(-1) < 9007199254740992.000000000, toInt32(-1) <= 9007199254740992.000000000, toInt32(-1) > 9007199254740992.000000000, toInt32(-1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt32(-1), 9007199254740992.000000000 != toInt32(-1), 9007199254740992.000000000 < toInt32(-1), 9007199254740992.000000000 <= toInt32(-1), 9007199254740992.000000000 > toInt32(-1), 9007199254740992.000000000 >= toInt32(-1) , toInt64(-1) = 9007199254740992.000000000, toInt64(-1) != 9007199254740992.000000000, toInt64(-1) < 9007199254740992.000000000, toInt64(-1) <= 9007199254740992.000000000, toInt64(-1) > 9007199254740992.000000000, toInt64(-1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt64(-1), 9007199254740992.000000000 != toInt64(-1), 9007199254740992.000000000 < toInt64(-1), 9007199254740992.000000000 <= toInt64(-1), 9007199254740992.000000000 > toInt64(-1), 9007199254740992.000000000 >= toInt64(-1) ; +SELECT 
'-1', '9007199254740994.000000000', -1 = 9007199254740994.000000000, -1 != 9007199254740994.000000000, -1 < 9007199254740994.000000000, -1 <= 9007199254740994.000000000, -1 > 9007199254740994.000000000, -1 >= 9007199254740994.000000000, 9007199254740994.000000000 = -1, 9007199254740994.000000000 != -1, 9007199254740994.000000000 < -1, 9007199254740994.000000000 <= -1, 9007199254740994.000000000 > -1, 9007199254740994.000000000 >= -1 , toInt8(-1) = 9007199254740994.000000000, toInt8(-1) != 9007199254740994.000000000, toInt8(-1) < 9007199254740994.000000000, toInt8(-1) <= 9007199254740994.000000000, toInt8(-1) > 9007199254740994.000000000, toInt8(-1) >= 9007199254740994.000000000, 9007199254740994.000000000 = toInt8(-1), 9007199254740994.000000000 != toInt8(-1), 9007199254740994.000000000 < toInt8(-1), 9007199254740994.000000000 <= toInt8(-1), 9007199254740994.000000000 > toInt8(-1), 9007199254740994.000000000 >= toInt8(-1) , toInt16(-1) = 9007199254740994.000000000, toInt16(-1) != 9007199254740994.000000000, toInt16(-1) < 9007199254740994.000000000, toInt16(-1) <= 9007199254740994.000000000, toInt16(-1) > 9007199254740994.000000000, toInt16(-1) >= 9007199254740994.000000000, 9007199254740994.000000000 = toInt16(-1), 9007199254740994.000000000 != toInt16(-1), 9007199254740994.000000000 < toInt16(-1), 9007199254740994.000000000 <= toInt16(-1), 9007199254740994.000000000 > toInt16(-1), 9007199254740994.000000000 >= toInt16(-1) , toInt32(-1) = 9007199254740994.000000000, toInt32(-1) != 9007199254740994.000000000, toInt32(-1) < 9007199254740994.000000000, toInt32(-1) <= 9007199254740994.000000000, toInt32(-1) > 9007199254740994.000000000, toInt32(-1) >= 9007199254740994.000000000, 9007199254740994.000000000 = toInt32(-1), 9007199254740994.000000000 != toInt32(-1), 9007199254740994.000000000 < toInt32(-1), 9007199254740994.000000000 <= toInt32(-1), 9007199254740994.000000000 > toInt32(-1), 9007199254740994.000000000 >= toInt32(-1) , toInt64(-1) = 9007199254740994.000000000, toInt64(-1) != 9007199254740994.000000000, toInt64(-1) < 9007199254740994.000000000, toInt64(-1) <= 9007199254740994.000000000, toInt64(-1) > 9007199254740994.000000000, toInt64(-1) >= 9007199254740994.000000000, 9007199254740994.000000000 = toInt64(-1), 9007199254740994.000000000 != toInt64(-1), 9007199254740994.000000000 < toInt64(-1), 9007199254740994.000000000 <= toInt64(-1), 9007199254740994.000000000 > toInt64(-1), 9007199254740994.000000000 >= toInt64(-1) ; +SELECT '-1', '-9007199254740991.000000000', -1 = -9007199254740991.000000000, -1 != -9007199254740991.000000000, -1 < -9007199254740991.000000000, -1 <= -9007199254740991.000000000, -1 > -9007199254740991.000000000, -1 >= -9007199254740991.000000000, -9007199254740991.000000000 = -1, -9007199254740991.000000000 != -1, -9007199254740991.000000000 < -1, -9007199254740991.000000000 <= -1, -9007199254740991.000000000 > -1, -9007199254740991.000000000 >= -1 , toInt8(-1) = -9007199254740991.000000000, toInt8(-1) != -9007199254740991.000000000, toInt8(-1) < -9007199254740991.000000000, toInt8(-1) <= -9007199254740991.000000000, toInt8(-1) > -9007199254740991.000000000, toInt8(-1) >= -9007199254740991.000000000, -9007199254740991.000000000 = toInt8(-1), -9007199254740991.000000000 != toInt8(-1), -9007199254740991.000000000 < toInt8(-1), -9007199254740991.000000000 <= toInt8(-1), -9007199254740991.000000000 > toInt8(-1), -9007199254740991.000000000 >= toInt8(-1) , toInt16(-1) = -9007199254740991.000000000, toInt16(-1) != -9007199254740991.000000000, toInt16(-1) < 
-9007199254740991.000000000, toInt16(-1) <= -9007199254740991.000000000, toInt16(-1) > -9007199254740991.000000000, toInt16(-1) >= -9007199254740991.000000000, -9007199254740991.000000000 = toInt16(-1), -9007199254740991.000000000 != toInt16(-1), -9007199254740991.000000000 < toInt16(-1), -9007199254740991.000000000 <= toInt16(-1), -9007199254740991.000000000 > toInt16(-1), -9007199254740991.000000000 >= toInt16(-1) , toInt32(-1) = -9007199254740991.000000000, toInt32(-1) != -9007199254740991.000000000, toInt32(-1) < -9007199254740991.000000000, toInt32(-1) <= -9007199254740991.000000000, toInt32(-1) > -9007199254740991.000000000, toInt32(-1) >= -9007199254740991.000000000, -9007199254740991.000000000 = toInt32(-1), -9007199254740991.000000000 != toInt32(-1), -9007199254740991.000000000 < toInt32(-1), -9007199254740991.000000000 <= toInt32(-1), -9007199254740991.000000000 > toInt32(-1), -9007199254740991.000000000 >= toInt32(-1) , toInt64(-1) = -9007199254740991.000000000, toInt64(-1) != -9007199254740991.000000000, toInt64(-1) < -9007199254740991.000000000, toInt64(-1) <= -9007199254740991.000000000, toInt64(-1) > -9007199254740991.000000000, toInt64(-1) >= -9007199254740991.000000000, -9007199254740991.000000000 = toInt64(-1), -9007199254740991.000000000 != toInt64(-1), -9007199254740991.000000000 < toInt64(-1), -9007199254740991.000000000 <= toInt64(-1), -9007199254740991.000000000 > toInt64(-1), -9007199254740991.000000000 >= toInt64(-1) ; +SELECT '-1', '-9007199254740992.000000000', -1 = -9007199254740992.000000000, -1 != -9007199254740992.000000000, -1 < -9007199254740992.000000000, -1 <= -9007199254740992.000000000, -1 > -9007199254740992.000000000, -1 >= -9007199254740992.000000000, -9007199254740992.000000000 = -1, -9007199254740992.000000000 != -1, -9007199254740992.000000000 < -1, -9007199254740992.000000000 <= -1, -9007199254740992.000000000 > -1, -9007199254740992.000000000 >= -1 , toInt8(-1) = -9007199254740992.000000000, toInt8(-1) != -9007199254740992.000000000, toInt8(-1) < -9007199254740992.000000000, toInt8(-1) <= -9007199254740992.000000000, toInt8(-1) > -9007199254740992.000000000, toInt8(-1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt8(-1), -9007199254740992.000000000 != toInt8(-1), -9007199254740992.000000000 < toInt8(-1), -9007199254740992.000000000 <= toInt8(-1), -9007199254740992.000000000 > toInt8(-1), -9007199254740992.000000000 >= toInt8(-1) , toInt16(-1) = -9007199254740992.000000000, toInt16(-1) != -9007199254740992.000000000, toInt16(-1) < -9007199254740992.000000000, toInt16(-1) <= -9007199254740992.000000000, toInt16(-1) > -9007199254740992.000000000, toInt16(-1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt16(-1), -9007199254740992.000000000 != toInt16(-1), -9007199254740992.000000000 < toInt16(-1), -9007199254740992.000000000 <= toInt16(-1), -9007199254740992.000000000 > toInt16(-1), -9007199254740992.000000000 >= toInt16(-1) , toInt32(-1) = -9007199254740992.000000000, toInt32(-1) != -9007199254740992.000000000, toInt32(-1) < -9007199254740992.000000000, toInt32(-1) <= -9007199254740992.000000000, toInt32(-1) > -9007199254740992.000000000, toInt32(-1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt32(-1), -9007199254740992.000000000 != toInt32(-1), -9007199254740992.000000000 < toInt32(-1), -9007199254740992.000000000 <= toInt32(-1), -9007199254740992.000000000 > toInt32(-1), -9007199254740992.000000000 >= toInt32(-1) , toInt64(-1) = -9007199254740992.000000000, toInt64(-1) != 
-9007199254740992.000000000, toInt64(-1) < -9007199254740992.000000000, toInt64(-1) <= -9007199254740992.000000000, toInt64(-1) > -9007199254740992.000000000, toInt64(-1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt64(-1), -9007199254740992.000000000 != toInt64(-1), -9007199254740992.000000000 < toInt64(-1), -9007199254740992.000000000 <= toInt64(-1), -9007199254740992.000000000 > toInt64(-1), -9007199254740992.000000000 >= toInt64(-1) ; +SELECT '-1', '-9007199254740992.000000000', -1 = -9007199254740992.000000000, -1 != -9007199254740992.000000000, -1 < -9007199254740992.000000000, -1 <= -9007199254740992.000000000, -1 > -9007199254740992.000000000, -1 >= -9007199254740992.000000000, -9007199254740992.000000000 = -1, -9007199254740992.000000000 != -1, -9007199254740992.000000000 < -1, -9007199254740992.000000000 <= -1, -9007199254740992.000000000 > -1, -9007199254740992.000000000 >= -1 , toInt8(-1) = -9007199254740992.000000000, toInt8(-1) != -9007199254740992.000000000, toInt8(-1) < -9007199254740992.000000000, toInt8(-1) <= -9007199254740992.000000000, toInt8(-1) > -9007199254740992.000000000, toInt8(-1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt8(-1), -9007199254740992.000000000 != toInt8(-1), -9007199254740992.000000000 < toInt8(-1), -9007199254740992.000000000 <= toInt8(-1), -9007199254740992.000000000 > toInt8(-1), -9007199254740992.000000000 >= toInt8(-1) , toInt16(-1) = -9007199254740992.000000000, toInt16(-1) != -9007199254740992.000000000, toInt16(-1) < -9007199254740992.000000000, toInt16(-1) <= -9007199254740992.000000000, toInt16(-1) > -9007199254740992.000000000, toInt16(-1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt16(-1), -9007199254740992.000000000 != toInt16(-1), -9007199254740992.000000000 < toInt16(-1), -9007199254740992.000000000 <= toInt16(-1), -9007199254740992.000000000 > toInt16(-1), -9007199254740992.000000000 >= toInt16(-1) , toInt32(-1) = -9007199254740992.000000000, toInt32(-1) != -9007199254740992.000000000, toInt32(-1) < -9007199254740992.000000000, toInt32(-1) <= -9007199254740992.000000000, toInt32(-1) > -9007199254740992.000000000, toInt32(-1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt32(-1), -9007199254740992.000000000 != toInt32(-1), -9007199254740992.000000000 < toInt32(-1), -9007199254740992.000000000 <= toInt32(-1), -9007199254740992.000000000 > toInt32(-1), -9007199254740992.000000000 >= toInt32(-1) , toInt64(-1) = -9007199254740992.000000000, toInt64(-1) != -9007199254740992.000000000, toInt64(-1) < -9007199254740992.000000000, toInt64(-1) <= -9007199254740992.000000000, toInt64(-1) > -9007199254740992.000000000, toInt64(-1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt64(-1), -9007199254740992.000000000 != toInt64(-1), -9007199254740992.000000000 < toInt64(-1), -9007199254740992.000000000 <= toInt64(-1), -9007199254740992.000000000 > toInt64(-1), -9007199254740992.000000000 >= toInt64(-1) ; +SELECT '-1', '-9007199254740994.000000000', -1 = -9007199254740994.000000000, -1 != -9007199254740994.000000000, -1 < -9007199254740994.000000000, -1 <= -9007199254740994.000000000, -1 > -9007199254740994.000000000, -1 >= -9007199254740994.000000000, -9007199254740994.000000000 = -1, -9007199254740994.000000000 != -1, -9007199254740994.000000000 < -1, -9007199254740994.000000000 <= -1, -9007199254740994.000000000 > -1, -9007199254740994.000000000 >= -1 , toInt8(-1) = -9007199254740994.000000000, toInt8(-1) != -9007199254740994.000000000, 
toInt8(-1) < -9007199254740994.000000000, toInt8(-1) <= -9007199254740994.000000000, toInt8(-1) > -9007199254740994.000000000, toInt8(-1) >= -9007199254740994.000000000, -9007199254740994.000000000 = toInt8(-1), -9007199254740994.000000000 != toInt8(-1), -9007199254740994.000000000 < toInt8(-1), -9007199254740994.000000000 <= toInt8(-1), -9007199254740994.000000000 > toInt8(-1), -9007199254740994.000000000 >= toInt8(-1) , toInt16(-1) = -9007199254740994.000000000, toInt16(-1) != -9007199254740994.000000000, toInt16(-1) < -9007199254740994.000000000, toInt16(-1) <= -9007199254740994.000000000, toInt16(-1) > -9007199254740994.000000000, toInt16(-1) >= -9007199254740994.000000000, -9007199254740994.000000000 = toInt16(-1), -9007199254740994.000000000 != toInt16(-1), -9007199254740994.000000000 < toInt16(-1), -9007199254740994.000000000 <= toInt16(-1), -9007199254740994.000000000 > toInt16(-1), -9007199254740994.000000000 >= toInt16(-1) , toInt32(-1) = -9007199254740994.000000000, toInt32(-1) != -9007199254740994.000000000, toInt32(-1) < -9007199254740994.000000000, toInt32(-1) <= -9007199254740994.000000000, toInt32(-1) > -9007199254740994.000000000, toInt32(-1) >= -9007199254740994.000000000, -9007199254740994.000000000 = toInt32(-1), -9007199254740994.000000000 != toInt32(-1), -9007199254740994.000000000 < toInt32(-1), -9007199254740994.000000000 <= toInt32(-1), -9007199254740994.000000000 > toInt32(-1), -9007199254740994.000000000 >= toInt32(-1) , toInt64(-1) = -9007199254740994.000000000, toInt64(-1) != -9007199254740994.000000000, toInt64(-1) < -9007199254740994.000000000, toInt64(-1) <= -9007199254740994.000000000, toInt64(-1) > -9007199254740994.000000000, toInt64(-1) >= -9007199254740994.000000000, -9007199254740994.000000000 = toInt64(-1), -9007199254740994.000000000 != toInt64(-1), -9007199254740994.000000000 < toInt64(-1), -9007199254740994.000000000 <= toInt64(-1), -9007199254740994.000000000 > toInt64(-1), -9007199254740994.000000000 >= toInt64(-1) ; +SELECT '-1', '104.000000000', -1 = 104.000000000, -1 != 104.000000000, -1 < 104.000000000, -1 <= 104.000000000, -1 > 104.000000000, -1 >= 104.000000000, 104.000000000 = -1, 104.000000000 != -1, 104.000000000 < -1, 104.000000000 <= -1, 104.000000000 > -1, 104.000000000 >= -1 , toInt8(-1) = 104.000000000, toInt8(-1) != 104.000000000, toInt8(-1) < 104.000000000, toInt8(-1) <= 104.000000000, toInt8(-1) > 104.000000000, toInt8(-1) >= 104.000000000, 104.000000000 = toInt8(-1), 104.000000000 != toInt8(-1), 104.000000000 < toInt8(-1), 104.000000000 <= toInt8(-1), 104.000000000 > toInt8(-1), 104.000000000 >= toInt8(-1) , toInt16(-1) = 104.000000000, toInt16(-1) != 104.000000000, toInt16(-1) < 104.000000000, toInt16(-1) <= 104.000000000, toInt16(-1) > 104.000000000, toInt16(-1) >= 104.000000000, 104.000000000 = toInt16(-1), 104.000000000 != toInt16(-1), 104.000000000 < toInt16(-1), 104.000000000 <= toInt16(-1), 104.000000000 > toInt16(-1), 104.000000000 >= toInt16(-1) , toInt32(-1) = 104.000000000, toInt32(-1) != 104.000000000, toInt32(-1) < 104.000000000, toInt32(-1) <= 104.000000000, toInt32(-1) > 104.000000000, toInt32(-1) >= 104.000000000, 104.000000000 = toInt32(-1), 104.000000000 != toInt32(-1), 104.000000000 < toInt32(-1), 104.000000000 <= toInt32(-1), 104.000000000 > toInt32(-1), 104.000000000 >= toInt32(-1) , toInt64(-1) = 104.000000000, toInt64(-1) != 104.000000000, toInt64(-1) < 104.000000000, toInt64(-1) <= 104.000000000, toInt64(-1) > 104.000000000, toInt64(-1) >= 104.000000000, 104.000000000 = toInt64(-1), 104.000000000 != 
toInt64(-1), 104.000000000 < toInt64(-1), 104.000000000 <= toInt64(-1), 104.000000000 > toInt64(-1), 104.000000000 >= toInt64(-1) ; +SELECT '-1', '-4503599627370496.000000000', -1 = -4503599627370496.000000000, -1 != -4503599627370496.000000000, -1 < -4503599627370496.000000000, -1 <= -4503599627370496.000000000, -1 > -4503599627370496.000000000, -1 >= -4503599627370496.000000000, -4503599627370496.000000000 = -1, -4503599627370496.000000000 != -1, -4503599627370496.000000000 < -1, -4503599627370496.000000000 <= -1, -4503599627370496.000000000 > -1, -4503599627370496.000000000 >= -1 , toInt8(-1) = -4503599627370496.000000000, toInt8(-1) != -4503599627370496.000000000, toInt8(-1) < -4503599627370496.000000000, toInt8(-1) <= -4503599627370496.000000000, toInt8(-1) > -4503599627370496.000000000, toInt8(-1) >= -4503599627370496.000000000, -4503599627370496.000000000 = toInt8(-1), -4503599627370496.000000000 != toInt8(-1), -4503599627370496.000000000 < toInt8(-1), -4503599627370496.000000000 <= toInt8(-1), -4503599627370496.000000000 > toInt8(-1), -4503599627370496.000000000 >= toInt8(-1) , toInt16(-1) = -4503599627370496.000000000, toInt16(-1) != -4503599627370496.000000000, toInt16(-1) < -4503599627370496.000000000, toInt16(-1) <= -4503599627370496.000000000, toInt16(-1) > -4503599627370496.000000000, toInt16(-1) >= -4503599627370496.000000000, -4503599627370496.000000000 = toInt16(-1), -4503599627370496.000000000 != toInt16(-1), -4503599627370496.000000000 < toInt16(-1), -4503599627370496.000000000 <= toInt16(-1), -4503599627370496.000000000 > toInt16(-1), -4503599627370496.000000000 >= toInt16(-1) , toInt32(-1) = -4503599627370496.000000000, toInt32(-1) != -4503599627370496.000000000, toInt32(-1) < -4503599627370496.000000000, toInt32(-1) <= -4503599627370496.000000000, toInt32(-1) > -4503599627370496.000000000, toInt32(-1) >= -4503599627370496.000000000, -4503599627370496.000000000 = toInt32(-1), -4503599627370496.000000000 != toInt32(-1), -4503599627370496.000000000 < toInt32(-1), -4503599627370496.000000000 <= toInt32(-1), -4503599627370496.000000000 > toInt32(-1), -4503599627370496.000000000 >= toInt32(-1) , toInt64(-1) = -4503599627370496.000000000, toInt64(-1) != -4503599627370496.000000000, toInt64(-1) < -4503599627370496.000000000, toInt64(-1) <= -4503599627370496.000000000, toInt64(-1) > -4503599627370496.000000000, toInt64(-1) >= -4503599627370496.000000000, -4503599627370496.000000000 = toInt64(-1), -4503599627370496.000000000 != toInt64(-1), -4503599627370496.000000000 < toInt64(-1), -4503599627370496.000000000 <= toInt64(-1), -4503599627370496.000000000 > toInt64(-1), -4503599627370496.000000000 >= toInt64(-1) ; +SELECT '-1', '-0.500000000', -1 = -0.500000000, -1 != -0.500000000, -1 < -0.500000000, -1 <= -0.500000000, -1 > -0.500000000, -1 >= -0.500000000, -0.500000000 = -1, -0.500000000 != -1, -0.500000000 < -1, -0.500000000 <= -1, -0.500000000 > -1, -0.500000000 >= -1 , toInt8(-1) = -0.500000000, toInt8(-1) != -0.500000000, toInt8(-1) < -0.500000000, toInt8(-1) <= -0.500000000, toInt8(-1) > -0.500000000, toInt8(-1) >= -0.500000000, -0.500000000 = toInt8(-1), -0.500000000 != toInt8(-1), -0.500000000 < toInt8(-1), -0.500000000 <= toInt8(-1), -0.500000000 > toInt8(-1), -0.500000000 >= toInt8(-1) , toInt16(-1) = -0.500000000, toInt16(-1) != -0.500000000, toInt16(-1) < -0.500000000, toInt16(-1) <= -0.500000000, toInt16(-1) > -0.500000000, toInt16(-1) >= -0.500000000, -0.500000000 = toInt16(-1), -0.500000000 != toInt16(-1), -0.500000000 < toInt16(-1), -0.500000000 <= toInt16(-1), 
-0.500000000 > toInt16(-1), -0.500000000 >= toInt16(-1) , toInt32(-1) = -0.500000000, toInt32(-1) != -0.500000000, toInt32(-1) < -0.500000000, toInt32(-1) <= -0.500000000, toInt32(-1) > -0.500000000, toInt32(-1) >= -0.500000000, -0.500000000 = toInt32(-1), -0.500000000 != toInt32(-1), -0.500000000 < toInt32(-1), -0.500000000 <= toInt32(-1), -0.500000000 > toInt32(-1), -0.500000000 >= toInt32(-1) , toInt64(-1) = -0.500000000, toInt64(-1) != -0.500000000, toInt64(-1) < -0.500000000, toInt64(-1) <= -0.500000000, toInt64(-1) > -0.500000000, toInt64(-1) >= -0.500000000, -0.500000000 = toInt64(-1), -0.500000000 != toInt64(-1), -0.500000000 < toInt64(-1), -0.500000000 <= toInt64(-1), -0.500000000 > toInt64(-1), -0.500000000 >= toInt64(-1) ; +SELECT '-1', '0.500000000', -1 = 0.500000000, -1 != 0.500000000, -1 < 0.500000000, -1 <= 0.500000000, -1 > 0.500000000, -1 >= 0.500000000, 0.500000000 = -1, 0.500000000 != -1, 0.500000000 < -1, 0.500000000 <= -1, 0.500000000 > -1, 0.500000000 >= -1 , toInt8(-1) = 0.500000000, toInt8(-1) != 0.500000000, toInt8(-1) < 0.500000000, toInt8(-1) <= 0.500000000, toInt8(-1) > 0.500000000, toInt8(-1) >= 0.500000000, 0.500000000 = toInt8(-1), 0.500000000 != toInt8(-1), 0.500000000 < toInt8(-1), 0.500000000 <= toInt8(-1), 0.500000000 > toInt8(-1), 0.500000000 >= toInt8(-1) , toInt16(-1) = 0.500000000, toInt16(-1) != 0.500000000, toInt16(-1) < 0.500000000, toInt16(-1) <= 0.500000000, toInt16(-1) > 0.500000000, toInt16(-1) >= 0.500000000, 0.500000000 = toInt16(-1), 0.500000000 != toInt16(-1), 0.500000000 < toInt16(-1), 0.500000000 <= toInt16(-1), 0.500000000 > toInt16(-1), 0.500000000 >= toInt16(-1) , toInt32(-1) = 0.500000000, toInt32(-1) != 0.500000000, toInt32(-1) < 0.500000000, toInt32(-1) <= 0.500000000, toInt32(-1) > 0.500000000, toInt32(-1) >= 0.500000000, 0.500000000 = toInt32(-1), 0.500000000 != toInt32(-1), 0.500000000 < toInt32(-1), 0.500000000 <= toInt32(-1), 0.500000000 > toInt32(-1), 0.500000000 >= toInt32(-1) , toInt64(-1) = 0.500000000, toInt64(-1) != 0.500000000, toInt64(-1) < 0.500000000, toInt64(-1) <= 0.500000000, toInt64(-1) > 0.500000000, toInt64(-1) >= 0.500000000, 0.500000000 = toInt64(-1), 0.500000000 != toInt64(-1), 0.500000000 < toInt64(-1), 0.500000000 <= toInt64(-1), 0.500000000 > toInt64(-1), 0.500000000 >= toInt64(-1) ; +SELECT '-1', '-1.500000000', -1 = -1.500000000, -1 != -1.500000000, -1 < -1.500000000, -1 <= -1.500000000, -1 > -1.500000000, -1 >= -1.500000000, -1.500000000 = -1, -1.500000000 != -1, -1.500000000 < -1, -1.500000000 <= -1, -1.500000000 > -1, -1.500000000 >= -1 , toInt8(-1) = -1.500000000, toInt8(-1) != -1.500000000, toInt8(-1) < -1.500000000, toInt8(-1) <= -1.500000000, toInt8(-1) > -1.500000000, toInt8(-1) >= -1.500000000, -1.500000000 = toInt8(-1), -1.500000000 != toInt8(-1), -1.500000000 < toInt8(-1), -1.500000000 <= toInt8(-1), -1.500000000 > toInt8(-1), -1.500000000 >= toInt8(-1) , toInt16(-1) = -1.500000000, toInt16(-1) != -1.500000000, toInt16(-1) < -1.500000000, toInt16(-1) <= -1.500000000, toInt16(-1) > -1.500000000, toInt16(-1) >= -1.500000000, -1.500000000 = toInt16(-1), -1.500000000 != toInt16(-1), -1.500000000 < toInt16(-1), -1.500000000 <= toInt16(-1), -1.500000000 > toInt16(-1), -1.500000000 >= toInt16(-1) , toInt32(-1) = -1.500000000, toInt32(-1) != -1.500000000, toInt32(-1) < -1.500000000, toInt32(-1) <= -1.500000000, toInt32(-1) > -1.500000000, toInt32(-1) >= -1.500000000, -1.500000000 = toInt32(-1), -1.500000000 != toInt32(-1), -1.500000000 < toInt32(-1), -1.500000000 <= toInt32(-1), -1.500000000 > 
toInt32(-1), -1.500000000 >= toInt32(-1) , toInt64(-1) = -1.500000000, toInt64(-1) != -1.500000000, toInt64(-1) < -1.500000000, toInt64(-1) <= -1.500000000, toInt64(-1) > -1.500000000, toInt64(-1) >= -1.500000000, -1.500000000 = toInt64(-1), -1.500000000 != toInt64(-1), -1.500000000 < toInt64(-1), -1.500000000 <= toInt64(-1), -1.500000000 > toInt64(-1), -1.500000000 >= toInt64(-1) ; +SELECT '-1', '1.500000000', -1 = 1.500000000, -1 != 1.500000000, -1 < 1.500000000, -1 <= 1.500000000, -1 > 1.500000000, -1 >= 1.500000000, 1.500000000 = -1, 1.500000000 != -1, 1.500000000 < -1, 1.500000000 <= -1, 1.500000000 > -1, 1.500000000 >= -1 , toInt8(-1) = 1.500000000, toInt8(-1) != 1.500000000, toInt8(-1) < 1.500000000, toInt8(-1) <= 1.500000000, toInt8(-1) > 1.500000000, toInt8(-1) >= 1.500000000, 1.500000000 = toInt8(-1), 1.500000000 != toInt8(-1), 1.500000000 < toInt8(-1), 1.500000000 <= toInt8(-1), 1.500000000 > toInt8(-1), 1.500000000 >= toInt8(-1) , toInt16(-1) = 1.500000000, toInt16(-1) != 1.500000000, toInt16(-1) < 1.500000000, toInt16(-1) <= 1.500000000, toInt16(-1) > 1.500000000, toInt16(-1) >= 1.500000000, 1.500000000 = toInt16(-1), 1.500000000 != toInt16(-1), 1.500000000 < toInt16(-1), 1.500000000 <= toInt16(-1), 1.500000000 > toInt16(-1), 1.500000000 >= toInt16(-1) , toInt32(-1) = 1.500000000, toInt32(-1) != 1.500000000, toInt32(-1) < 1.500000000, toInt32(-1) <= 1.500000000, toInt32(-1) > 1.500000000, toInt32(-1) >= 1.500000000, 1.500000000 = toInt32(-1), 1.500000000 != toInt32(-1), 1.500000000 < toInt32(-1), 1.500000000 <= toInt32(-1), 1.500000000 > toInt32(-1), 1.500000000 >= toInt32(-1) , toInt64(-1) = 1.500000000, toInt64(-1) != 1.500000000, toInt64(-1) < 1.500000000, toInt64(-1) <= 1.500000000, toInt64(-1) > 1.500000000, toInt64(-1) >= 1.500000000, 1.500000000 = toInt64(-1), 1.500000000 != toInt64(-1), 1.500000000 < toInt64(-1), 1.500000000 <= toInt64(-1), 1.500000000 > toInt64(-1), 1.500000000 >= toInt64(-1) ; +SELECT '-1', '9007199254740992.000000000', -1 = 9007199254740992.000000000, -1 != 9007199254740992.000000000, -1 < 9007199254740992.000000000, -1 <= 9007199254740992.000000000, -1 > 9007199254740992.000000000, -1 >= 9007199254740992.000000000, 9007199254740992.000000000 = -1, 9007199254740992.000000000 != -1, 9007199254740992.000000000 < -1, 9007199254740992.000000000 <= -1, 9007199254740992.000000000 > -1, 9007199254740992.000000000 >= -1 , toInt8(-1) = 9007199254740992.000000000, toInt8(-1) != 9007199254740992.000000000, toInt8(-1) < 9007199254740992.000000000, toInt8(-1) <= 9007199254740992.000000000, toInt8(-1) > 9007199254740992.000000000, toInt8(-1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt8(-1), 9007199254740992.000000000 != toInt8(-1), 9007199254740992.000000000 < toInt8(-1), 9007199254740992.000000000 <= toInt8(-1), 9007199254740992.000000000 > toInt8(-1), 9007199254740992.000000000 >= toInt8(-1) , toInt16(-1) = 9007199254740992.000000000, toInt16(-1) != 9007199254740992.000000000, toInt16(-1) < 9007199254740992.000000000, toInt16(-1) <= 9007199254740992.000000000, toInt16(-1) > 9007199254740992.000000000, toInt16(-1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt16(-1), 9007199254740992.000000000 != toInt16(-1), 9007199254740992.000000000 < toInt16(-1), 9007199254740992.000000000 <= toInt16(-1), 9007199254740992.000000000 > toInt16(-1), 9007199254740992.000000000 >= toInt16(-1) , toInt32(-1) = 9007199254740992.000000000, toInt32(-1) != 9007199254740992.000000000, toInt32(-1) < 9007199254740992.000000000, toInt32(-1) <= 
9007199254740992.000000000, toInt32(-1) > 9007199254740992.000000000, toInt32(-1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt32(-1), 9007199254740992.000000000 != toInt32(-1), 9007199254740992.000000000 < toInt32(-1), 9007199254740992.000000000 <= toInt32(-1), 9007199254740992.000000000 > toInt32(-1), 9007199254740992.000000000 >= toInt32(-1) , toInt64(-1) = 9007199254740992.000000000, toInt64(-1) != 9007199254740992.000000000, toInt64(-1) < 9007199254740992.000000000, toInt64(-1) <= 9007199254740992.000000000, toInt64(-1) > 9007199254740992.000000000, toInt64(-1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt64(-1), 9007199254740992.000000000 != toInt64(-1), 9007199254740992.000000000 < toInt64(-1), 9007199254740992.000000000 <= toInt64(-1), 9007199254740992.000000000 > toInt64(-1), 9007199254740992.000000000 >= toInt64(-1) ; +SELECT '-1', '2251799813685247.500000000', -1 = 2251799813685247.500000000, -1 != 2251799813685247.500000000, -1 < 2251799813685247.500000000, -1 <= 2251799813685247.500000000, -1 > 2251799813685247.500000000, -1 >= 2251799813685247.500000000, 2251799813685247.500000000 = -1, 2251799813685247.500000000 != -1, 2251799813685247.500000000 < -1, 2251799813685247.500000000 <= -1, 2251799813685247.500000000 > -1, 2251799813685247.500000000 >= -1 , toInt8(-1) = 2251799813685247.500000000, toInt8(-1) != 2251799813685247.500000000, toInt8(-1) < 2251799813685247.500000000, toInt8(-1) <= 2251799813685247.500000000, toInt8(-1) > 2251799813685247.500000000, toInt8(-1) >= 2251799813685247.500000000, 2251799813685247.500000000 = toInt8(-1), 2251799813685247.500000000 != toInt8(-1), 2251799813685247.500000000 < toInt8(-1), 2251799813685247.500000000 <= toInt8(-1), 2251799813685247.500000000 > toInt8(-1), 2251799813685247.500000000 >= toInt8(-1) , toInt16(-1) = 2251799813685247.500000000, toInt16(-1) != 2251799813685247.500000000, toInt16(-1) < 2251799813685247.500000000, toInt16(-1) <= 2251799813685247.500000000, toInt16(-1) > 2251799813685247.500000000, toInt16(-1) >= 2251799813685247.500000000, 2251799813685247.500000000 = toInt16(-1), 2251799813685247.500000000 != toInt16(-1), 2251799813685247.500000000 < toInt16(-1), 2251799813685247.500000000 <= toInt16(-1), 2251799813685247.500000000 > toInt16(-1), 2251799813685247.500000000 >= toInt16(-1) , toInt32(-1) = 2251799813685247.500000000, toInt32(-1) != 2251799813685247.500000000, toInt32(-1) < 2251799813685247.500000000, toInt32(-1) <= 2251799813685247.500000000, toInt32(-1) > 2251799813685247.500000000, toInt32(-1) >= 2251799813685247.500000000, 2251799813685247.500000000 = toInt32(-1), 2251799813685247.500000000 != toInt32(-1), 2251799813685247.500000000 < toInt32(-1), 2251799813685247.500000000 <= toInt32(-1), 2251799813685247.500000000 > toInt32(-1), 2251799813685247.500000000 >= toInt32(-1) , toInt64(-1) = 2251799813685247.500000000, toInt64(-1) != 2251799813685247.500000000, toInt64(-1) < 2251799813685247.500000000, toInt64(-1) <= 2251799813685247.500000000, toInt64(-1) > 2251799813685247.500000000, toInt64(-1) >= 2251799813685247.500000000, 2251799813685247.500000000 = toInt64(-1), 2251799813685247.500000000 != toInt64(-1), 2251799813685247.500000000 < toInt64(-1), 2251799813685247.500000000 <= toInt64(-1), 2251799813685247.500000000 > toInt64(-1), 2251799813685247.500000000 >= toInt64(-1) ; +SELECT '-1', '2251799813685248.500000000', -1 = 2251799813685248.500000000, -1 != 2251799813685248.500000000, -1 < 2251799813685248.500000000, -1 <= 2251799813685248.500000000, -1 > 
2251799813685248.500000000, -1 >= 2251799813685248.500000000, 2251799813685248.500000000 = -1, 2251799813685248.500000000 != -1, 2251799813685248.500000000 < -1, 2251799813685248.500000000 <= -1, 2251799813685248.500000000 > -1, 2251799813685248.500000000 >= -1 , toInt8(-1) = 2251799813685248.500000000, toInt8(-1) != 2251799813685248.500000000, toInt8(-1) < 2251799813685248.500000000, toInt8(-1) <= 2251799813685248.500000000, toInt8(-1) > 2251799813685248.500000000, toInt8(-1) >= 2251799813685248.500000000, 2251799813685248.500000000 = toInt8(-1), 2251799813685248.500000000 != toInt8(-1), 2251799813685248.500000000 < toInt8(-1), 2251799813685248.500000000 <= toInt8(-1), 2251799813685248.500000000 > toInt8(-1), 2251799813685248.500000000 >= toInt8(-1) , toInt16(-1) = 2251799813685248.500000000, toInt16(-1) != 2251799813685248.500000000, toInt16(-1) < 2251799813685248.500000000, toInt16(-1) <= 2251799813685248.500000000, toInt16(-1) > 2251799813685248.500000000, toInt16(-1) >= 2251799813685248.500000000, 2251799813685248.500000000 = toInt16(-1), 2251799813685248.500000000 != toInt16(-1), 2251799813685248.500000000 < toInt16(-1), 2251799813685248.500000000 <= toInt16(-1), 2251799813685248.500000000 > toInt16(-1), 2251799813685248.500000000 >= toInt16(-1) , toInt32(-1) = 2251799813685248.500000000, toInt32(-1) != 2251799813685248.500000000, toInt32(-1) < 2251799813685248.500000000, toInt32(-1) <= 2251799813685248.500000000, toInt32(-1) > 2251799813685248.500000000, toInt32(-1) >= 2251799813685248.500000000, 2251799813685248.500000000 = toInt32(-1), 2251799813685248.500000000 != toInt32(-1), 2251799813685248.500000000 < toInt32(-1), 2251799813685248.500000000 <= toInt32(-1), 2251799813685248.500000000 > toInt32(-1), 2251799813685248.500000000 >= toInt32(-1) , toInt64(-1) = 2251799813685248.500000000, toInt64(-1) != 2251799813685248.500000000, toInt64(-1) < 2251799813685248.500000000, toInt64(-1) <= 2251799813685248.500000000, toInt64(-1) > 2251799813685248.500000000, toInt64(-1) >= 2251799813685248.500000000, 2251799813685248.500000000 = toInt64(-1), 2251799813685248.500000000 != toInt64(-1), 2251799813685248.500000000 < toInt64(-1), 2251799813685248.500000000 <= toInt64(-1), 2251799813685248.500000000 > toInt64(-1), 2251799813685248.500000000 >= toInt64(-1) ; +SELECT '-1', '1152921504606846976.000000000', -1 = 1152921504606846976.000000000, -1 != 1152921504606846976.000000000, -1 < 1152921504606846976.000000000, -1 <= 1152921504606846976.000000000, -1 > 1152921504606846976.000000000, -1 >= 1152921504606846976.000000000, 1152921504606846976.000000000 = -1, 1152921504606846976.000000000 != -1, 1152921504606846976.000000000 < -1, 1152921504606846976.000000000 <= -1, 1152921504606846976.000000000 > -1, 1152921504606846976.000000000 >= -1 , toInt8(-1) = 1152921504606846976.000000000, toInt8(-1) != 1152921504606846976.000000000, toInt8(-1) < 1152921504606846976.000000000, toInt8(-1) <= 1152921504606846976.000000000, toInt8(-1) > 1152921504606846976.000000000, toInt8(-1) >= 1152921504606846976.000000000, 1152921504606846976.000000000 = toInt8(-1), 1152921504606846976.000000000 != toInt8(-1), 1152921504606846976.000000000 < toInt8(-1), 1152921504606846976.000000000 <= toInt8(-1), 1152921504606846976.000000000 > toInt8(-1), 1152921504606846976.000000000 >= toInt8(-1) , toInt16(-1) = 1152921504606846976.000000000, toInt16(-1) != 1152921504606846976.000000000, toInt16(-1) < 1152921504606846976.000000000, toInt16(-1) <= 1152921504606846976.000000000, toInt16(-1) > 1152921504606846976.000000000, 
toInt16(-1) >= 1152921504606846976.000000000, 1152921504606846976.000000000 = toInt16(-1), 1152921504606846976.000000000 != toInt16(-1), 1152921504606846976.000000000 < toInt16(-1), 1152921504606846976.000000000 <= toInt16(-1), 1152921504606846976.000000000 > toInt16(-1), 1152921504606846976.000000000 >= toInt16(-1) , toInt32(-1) = 1152921504606846976.000000000, toInt32(-1) != 1152921504606846976.000000000, toInt32(-1) < 1152921504606846976.000000000, toInt32(-1) <= 1152921504606846976.000000000, toInt32(-1) > 1152921504606846976.000000000, toInt32(-1) >= 1152921504606846976.000000000, 1152921504606846976.000000000 = toInt32(-1), 1152921504606846976.000000000 != toInt32(-1), 1152921504606846976.000000000 < toInt32(-1), 1152921504606846976.000000000 <= toInt32(-1), 1152921504606846976.000000000 > toInt32(-1), 1152921504606846976.000000000 >= toInt32(-1) , toInt64(-1) = 1152921504606846976.000000000, toInt64(-1) != 1152921504606846976.000000000, toInt64(-1) < 1152921504606846976.000000000, toInt64(-1) <= 1152921504606846976.000000000, toInt64(-1) > 1152921504606846976.000000000, toInt64(-1) >= 1152921504606846976.000000000, 1152921504606846976.000000000 = toInt64(-1), 1152921504606846976.000000000 != toInt64(-1), 1152921504606846976.000000000 < toInt64(-1), 1152921504606846976.000000000 <= toInt64(-1), 1152921504606846976.000000000 > toInt64(-1), 1152921504606846976.000000000 >= toInt64(-1) ; +SELECT '-1', '-1152921504606846976.000000000', -1 = -1152921504606846976.000000000, -1 != -1152921504606846976.000000000, -1 < -1152921504606846976.000000000, -1 <= -1152921504606846976.000000000, -1 > -1152921504606846976.000000000, -1 >= -1152921504606846976.000000000, -1152921504606846976.000000000 = -1, -1152921504606846976.000000000 != -1, -1152921504606846976.000000000 < -1, -1152921504606846976.000000000 <= -1, -1152921504606846976.000000000 > -1, -1152921504606846976.000000000 >= -1 , toInt8(-1) = -1152921504606846976.000000000, toInt8(-1) != -1152921504606846976.000000000, toInt8(-1) < -1152921504606846976.000000000, toInt8(-1) <= -1152921504606846976.000000000, toInt8(-1) > -1152921504606846976.000000000, toInt8(-1) >= -1152921504606846976.000000000, -1152921504606846976.000000000 = toInt8(-1), -1152921504606846976.000000000 != toInt8(-1), -1152921504606846976.000000000 < toInt8(-1), -1152921504606846976.000000000 <= toInt8(-1), -1152921504606846976.000000000 > toInt8(-1), -1152921504606846976.000000000 >= toInt8(-1) , toInt16(-1) = -1152921504606846976.000000000, toInt16(-1) != -1152921504606846976.000000000, toInt16(-1) < -1152921504606846976.000000000, toInt16(-1) <= -1152921504606846976.000000000, toInt16(-1) > -1152921504606846976.000000000, toInt16(-1) >= -1152921504606846976.000000000, -1152921504606846976.000000000 = toInt16(-1), -1152921504606846976.000000000 != toInt16(-1), -1152921504606846976.000000000 < toInt16(-1), -1152921504606846976.000000000 <= toInt16(-1), -1152921504606846976.000000000 > toInt16(-1), -1152921504606846976.000000000 >= toInt16(-1) , toInt32(-1) = -1152921504606846976.000000000, toInt32(-1) != -1152921504606846976.000000000, toInt32(-1) < -1152921504606846976.000000000, toInt32(-1) <= -1152921504606846976.000000000, toInt32(-1) > -1152921504606846976.000000000, toInt32(-1) >= -1152921504606846976.000000000, -1152921504606846976.000000000 = toInt32(-1), -1152921504606846976.000000000 != toInt32(-1), -1152921504606846976.000000000 < toInt32(-1), -1152921504606846976.000000000 <= toInt32(-1), -1152921504606846976.000000000 > toInt32(-1), 
-1152921504606846976.000000000 >= toInt32(-1) , toInt64(-1) = -1152921504606846976.000000000, toInt64(-1) != -1152921504606846976.000000000, toInt64(-1) < -1152921504606846976.000000000, toInt64(-1) <= -1152921504606846976.000000000, toInt64(-1) > -1152921504606846976.000000000, toInt64(-1) >= -1152921504606846976.000000000, -1152921504606846976.000000000 = toInt64(-1), -1152921504606846976.000000000 != toInt64(-1), -1152921504606846976.000000000 < toInt64(-1), -1152921504606846976.000000000 <= toInt64(-1), -1152921504606846976.000000000 > toInt64(-1), -1152921504606846976.000000000 >= toInt64(-1) ; +SELECT '-1', '-9223372036854786048.000000000', -1 = -9223372036854786048.000000000, -1 != -9223372036854786048.000000000, -1 < -9223372036854786048.000000000, -1 <= -9223372036854786048.000000000, -1 > -9223372036854786048.000000000, -1 >= -9223372036854786048.000000000, -9223372036854786048.000000000 = -1, -9223372036854786048.000000000 != -1, -9223372036854786048.000000000 < -1, -9223372036854786048.000000000 <= -1, -9223372036854786048.000000000 > -1, -9223372036854786048.000000000 >= -1 , toInt8(-1) = -9223372036854786048.000000000, toInt8(-1) != -9223372036854786048.000000000, toInt8(-1) < -9223372036854786048.000000000, toInt8(-1) <= -9223372036854786048.000000000, toInt8(-1) > -9223372036854786048.000000000, toInt8(-1) >= -9223372036854786048.000000000, -9223372036854786048.000000000 = toInt8(-1), -9223372036854786048.000000000 != toInt8(-1), -9223372036854786048.000000000 < toInt8(-1), -9223372036854786048.000000000 <= toInt8(-1), -9223372036854786048.000000000 > toInt8(-1), -9223372036854786048.000000000 >= toInt8(-1) , toInt16(-1) = -9223372036854786048.000000000, toInt16(-1) != -9223372036854786048.000000000, toInt16(-1) < -9223372036854786048.000000000, toInt16(-1) <= -9223372036854786048.000000000, toInt16(-1) > -9223372036854786048.000000000, toInt16(-1) >= -9223372036854786048.000000000, -9223372036854786048.000000000 = toInt16(-1), -9223372036854786048.000000000 != toInt16(-1), -9223372036854786048.000000000 < toInt16(-1), -9223372036854786048.000000000 <= toInt16(-1), -9223372036854786048.000000000 > toInt16(-1), -9223372036854786048.000000000 >= toInt16(-1) , toInt32(-1) = -9223372036854786048.000000000, toInt32(-1) != -9223372036854786048.000000000, toInt32(-1) < -9223372036854786048.000000000, toInt32(-1) <= -9223372036854786048.000000000, toInt32(-1) > -9223372036854786048.000000000, toInt32(-1) >= -9223372036854786048.000000000, -9223372036854786048.000000000 = toInt32(-1), -9223372036854786048.000000000 != toInt32(-1), -9223372036854786048.000000000 < toInt32(-1), -9223372036854786048.000000000 <= toInt32(-1), -9223372036854786048.000000000 > toInt32(-1), -9223372036854786048.000000000 >= toInt32(-1) , toInt64(-1) = -9223372036854786048.000000000, toInt64(-1) != -9223372036854786048.000000000, toInt64(-1) < -9223372036854786048.000000000, toInt64(-1) <= -9223372036854786048.000000000, toInt64(-1) > -9223372036854786048.000000000, toInt64(-1) >= -9223372036854786048.000000000, -9223372036854786048.000000000 = toInt64(-1), -9223372036854786048.000000000 != toInt64(-1), -9223372036854786048.000000000 < toInt64(-1), -9223372036854786048.000000000 <= toInt64(-1), -9223372036854786048.000000000 > toInt64(-1), -9223372036854786048.000000000 >= toInt64(-1) ; +SELECT '-1', '9223372036854786048.000000000', -1 = 9223372036854786048.000000000, -1 != 9223372036854786048.000000000, -1 < 9223372036854786048.000000000, -1 <= 9223372036854786048.000000000, -1 > 
9223372036854786048.000000000, -1 >= 9223372036854786048.000000000, 9223372036854786048.000000000 = -1, 9223372036854786048.000000000 != -1, 9223372036854786048.000000000 < -1, 9223372036854786048.000000000 <= -1, 9223372036854786048.000000000 > -1, 9223372036854786048.000000000 >= -1 , toInt8(-1) = 9223372036854786048.000000000, toInt8(-1) != 9223372036854786048.000000000, toInt8(-1) < 9223372036854786048.000000000, toInt8(-1) <= 9223372036854786048.000000000, toInt8(-1) > 9223372036854786048.000000000, toInt8(-1) >= 9223372036854786048.000000000, 9223372036854786048.000000000 = toInt8(-1), 9223372036854786048.000000000 != toInt8(-1), 9223372036854786048.000000000 < toInt8(-1), 9223372036854786048.000000000 <= toInt8(-1), 9223372036854786048.000000000 > toInt8(-1), 9223372036854786048.000000000 >= toInt8(-1) , toInt16(-1) = 9223372036854786048.000000000, toInt16(-1) != 9223372036854786048.000000000, toInt16(-1) < 9223372036854786048.000000000, toInt16(-1) <= 9223372036854786048.000000000, toInt16(-1) > 9223372036854786048.000000000, toInt16(-1) >= 9223372036854786048.000000000, 9223372036854786048.000000000 = toInt16(-1), 9223372036854786048.000000000 != toInt16(-1), 9223372036854786048.000000000 < toInt16(-1), 9223372036854786048.000000000 <= toInt16(-1), 9223372036854786048.000000000 > toInt16(-1), 9223372036854786048.000000000 >= toInt16(-1) , toInt32(-1) = 9223372036854786048.000000000, toInt32(-1) != 9223372036854786048.000000000, toInt32(-1) < 9223372036854786048.000000000, toInt32(-1) <= 9223372036854786048.000000000, toInt32(-1) > 9223372036854786048.000000000, toInt32(-1) >= 9223372036854786048.000000000, 9223372036854786048.000000000 = toInt32(-1), 9223372036854786048.000000000 != toInt32(-1), 9223372036854786048.000000000 < toInt32(-1), 9223372036854786048.000000000 <= toInt32(-1), 9223372036854786048.000000000 > toInt32(-1), 9223372036854786048.000000000 >= toInt32(-1) , toInt64(-1) = 9223372036854786048.000000000, toInt64(-1) != 9223372036854786048.000000000, toInt64(-1) < 9223372036854786048.000000000, toInt64(-1) <= 9223372036854786048.000000000, toInt64(-1) > 9223372036854786048.000000000, toInt64(-1) >= 9223372036854786048.000000000, 9223372036854786048.000000000 = toInt64(-1), 9223372036854786048.000000000 != toInt64(-1), 9223372036854786048.000000000 < toInt64(-1), 9223372036854786048.000000000 <= toInt64(-1), 9223372036854786048.000000000 > toInt64(-1), 9223372036854786048.000000000 >= toInt64(-1) ; +SELECT '1', '0.000000000', 1 = 0.000000000, 1 != 0.000000000, 1 < 0.000000000, 1 <= 0.000000000, 1 > 0.000000000, 1 >= 0.000000000, 0.000000000 = 1, 0.000000000 != 1, 0.000000000 < 1, 0.000000000 <= 1, 0.000000000 > 1, 0.000000000 >= 1 , toUInt8(1) = 0.000000000, toUInt8(1) != 0.000000000, toUInt8(1) < 0.000000000, toUInt8(1) <= 0.000000000, toUInt8(1) > 0.000000000, toUInt8(1) >= 0.000000000, 0.000000000 = toUInt8(1), 0.000000000 != toUInt8(1), 0.000000000 < toUInt8(1), 0.000000000 <= toUInt8(1), 0.000000000 > toUInt8(1), 0.000000000 >= toUInt8(1) , toInt8(1) = 0.000000000, toInt8(1) != 0.000000000, toInt8(1) < 0.000000000, toInt8(1) <= 0.000000000, toInt8(1) > 0.000000000, toInt8(1) >= 0.000000000, 0.000000000 = toInt8(1), 0.000000000 != toInt8(1), 0.000000000 < toInt8(1), 0.000000000 <= toInt8(1), 0.000000000 > toInt8(1), 0.000000000 >= toInt8(1) , toUInt16(1) = 0.000000000, toUInt16(1) != 0.000000000, toUInt16(1) < 0.000000000, toUInt16(1) <= 0.000000000, toUInt16(1) > 0.000000000, toUInt16(1) >= 0.000000000, 0.000000000 = toUInt16(1), 0.000000000 != toUInt16(1), 
0.000000000 < toUInt16(1), 0.000000000 <= toUInt16(1), 0.000000000 > toUInt16(1), 0.000000000 >= toUInt16(1) , toInt16(1) = 0.000000000, toInt16(1) != 0.000000000, toInt16(1) < 0.000000000, toInt16(1) <= 0.000000000, toInt16(1) > 0.000000000, toInt16(1) >= 0.000000000, 0.000000000 = toInt16(1), 0.000000000 != toInt16(1), 0.000000000 < toInt16(1), 0.000000000 <= toInt16(1), 0.000000000 > toInt16(1), 0.000000000 >= toInt16(1) , toUInt32(1) = 0.000000000, toUInt32(1) != 0.000000000, toUInt32(1) < 0.000000000, toUInt32(1) <= 0.000000000, toUInt32(1) > 0.000000000, toUInt32(1) >= 0.000000000, 0.000000000 = toUInt32(1), 0.000000000 != toUInt32(1), 0.000000000 < toUInt32(1), 0.000000000 <= toUInt32(1), 0.000000000 > toUInt32(1), 0.000000000 >= toUInt32(1) , toInt32(1) = 0.000000000, toInt32(1) != 0.000000000, toInt32(1) < 0.000000000, toInt32(1) <= 0.000000000, toInt32(1) > 0.000000000, toInt32(1) >= 0.000000000, 0.000000000 = toInt32(1), 0.000000000 != toInt32(1), 0.000000000 < toInt32(1), 0.000000000 <= toInt32(1), 0.000000000 > toInt32(1), 0.000000000 >= toInt32(1) , toUInt64(1) = 0.000000000, toUInt64(1) != 0.000000000, toUInt64(1) < 0.000000000, toUInt64(1) <= 0.000000000, toUInt64(1) > 0.000000000, toUInt64(1) >= 0.000000000, 0.000000000 = toUInt64(1), 0.000000000 != toUInt64(1), 0.000000000 < toUInt64(1), 0.000000000 <= toUInt64(1), 0.000000000 > toUInt64(1), 0.000000000 >= toUInt64(1) , toInt64(1) = 0.000000000, toInt64(1) != 0.000000000, toInt64(1) < 0.000000000, toInt64(1) <= 0.000000000, toInt64(1) > 0.000000000, toInt64(1) >= 0.000000000, 0.000000000 = toInt64(1), 0.000000000 != toInt64(1), 0.000000000 < toInt64(1), 0.000000000 <= toInt64(1), 0.000000000 > toInt64(1), 0.000000000 >= toInt64(1) ; +SELECT '1', '-1.000000000', 1 = -1.000000000, 1 != -1.000000000, 1 < -1.000000000, 1 <= -1.000000000, 1 > -1.000000000, 1 >= -1.000000000, -1.000000000 = 1, -1.000000000 != 1, -1.000000000 < 1, -1.000000000 <= 1, -1.000000000 > 1, -1.000000000 >= 1 , toUInt8(1) = -1.000000000, toUInt8(1) != -1.000000000, toUInt8(1) < -1.000000000, toUInt8(1) <= -1.000000000, toUInt8(1) > -1.000000000, toUInt8(1) >= -1.000000000, -1.000000000 = toUInt8(1), -1.000000000 != toUInt8(1), -1.000000000 < toUInt8(1), -1.000000000 <= toUInt8(1), -1.000000000 > toUInt8(1), -1.000000000 >= toUInt8(1) , toInt8(1) = -1.000000000, toInt8(1) != -1.000000000, toInt8(1) < -1.000000000, toInt8(1) <= -1.000000000, toInt8(1) > -1.000000000, toInt8(1) >= -1.000000000, -1.000000000 = toInt8(1), -1.000000000 != toInt8(1), -1.000000000 < toInt8(1), -1.000000000 <= toInt8(1), -1.000000000 > toInt8(1), -1.000000000 >= toInt8(1) , toUInt16(1) = -1.000000000, toUInt16(1) != -1.000000000, toUInt16(1) < -1.000000000, toUInt16(1) <= -1.000000000, toUInt16(1) > -1.000000000, toUInt16(1) >= -1.000000000, -1.000000000 = toUInt16(1), -1.000000000 != toUInt16(1), -1.000000000 < toUInt16(1), -1.000000000 <= toUInt16(1), -1.000000000 > toUInt16(1), -1.000000000 >= toUInt16(1) , toInt16(1) = -1.000000000, toInt16(1) != -1.000000000, toInt16(1) < -1.000000000, toInt16(1) <= -1.000000000, toInt16(1) > -1.000000000, toInt16(1) >= -1.000000000, -1.000000000 = toInt16(1), -1.000000000 != toInt16(1), -1.000000000 < toInt16(1), -1.000000000 <= toInt16(1), -1.000000000 > toInt16(1), -1.000000000 >= toInt16(1) , toUInt32(1) = -1.000000000, toUInt32(1) != -1.000000000, toUInt32(1) < -1.000000000, toUInt32(1) <= -1.000000000, toUInt32(1) > -1.000000000, toUInt32(1) >= -1.000000000, -1.000000000 = toUInt32(1), -1.000000000 != toUInt32(1), -1.000000000 < 
toUInt32(1), -1.000000000 <= toUInt32(1), -1.000000000 > toUInt32(1), -1.000000000 >= toUInt32(1) , toInt32(1) = -1.000000000, toInt32(1) != -1.000000000, toInt32(1) < -1.000000000, toInt32(1) <= -1.000000000, toInt32(1) > -1.000000000, toInt32(1) >= -1.000000000, -1.000000000 = toInt32(1), -1.000000000 != toInt32(1), -1.000000000 < toInt32(1), -1.000000000 <= toInt32(1), -1.000000000 > toInt32(1), -1.000000000 >= toInt32(1) , toUInt64(1) = -1.000000000, toUInt64(1) != -1.000000000, toUInt64(1) < -1.000000000, toUInt64(1) <= -1.000000000, toUInt64(1) > -1.000000000, toUInt64(1) >= -1.000000000, -1.000000000 = toUInt64(1), -1.000000000 != toUInt64(1), -1.000000000 < toUInt64(1), -1.000000000 <= toUInt64(1), -1.000000000 > toUInt64(1), -1.000000000 >= toUInt64(1) , toInt64(1) = -1.000000000, toInt64(1) != -1.000000000, toInt64(1) < -1.000000000, toInt64(1) <= -1.000000000, toInt64(1) > -1.000000000, toInt64(1) >= -1.000000000, -1.000000000 = toInt64(1), -1.000000000 != toInt64(1), -1.000000000 < toInt64(1), -1.000000000 <= toInt64(1), -1.000000000 > toInt64(1), -1.000000000 >= toInt64(1) ; +SELECT '1', '1.000000000', 1 = 1.000000000, 1 != 1.000000000, 1 < 1.000000000, 1 <= 1.000000000, 1 > 1.000000000, 1 >= 1.000000000, 1.000000000 = 1, 1.000000000 != 1, 1.000000000 < 1, 1.000000000 <= 1, 1.000000000 > 1, 1.000000000 >= 1 , toUInt8(1) = 1.000000000, toUInt8(1) != 1.000000000, toUInt8(1) < 1.000000000, toUInt8(1) <= 1.000000000, toUInt8(1) > 1.000000000, toUInt8(1) >= 1.000000000, 1.000000000 = toUInt8(1), 1.000000000 != toUInt8(1), 1.000000000 < toUInt8(1), 1.000000000 <= toUInt8(1), 1.000000000 > toUInt8(1), 1.000000000 >= toUInt8(1) , toInt8(1) = 1.000000000, toInt8(1) != 1.000000000, toInt8(1) < 1.000000000, toInt8(1) <= 1.000000000, toInt8(1) > 1.000000000, toInt8(1) >= 1.000000000, 1.000000000 = toInt8(1), 1.000000000 != toInt8(1), 1.000000000 < toInt8(1), 1.000000000 <= toInt8(1), 1.000000000 > toInt8(1), 1.000000000 >= toInt8(1) , toUInt16(1) = 1.000000000, toUInt16(1) != 1.000000000, toUInt16(1) < 1.000000000, toUInt16(1) <= 1.000000000, toUInt16(1) > 1.000000000, toUInt16(1) >= 1.000000000, 1.000000000 = toUInt16(1), 1.000000000 != toUInt16(1), 1.000000000 < toUInt16(1), 1.000000000 <= toUInt16(1), 1.000000000 > toUInt16(1), 1.000000000 >= toUInt16(1) , toInt16(1) = 1.000000000, toInt16(1) != 1.000000000, toInt16(1) < 1.000000000, toInt16(1) <= 1.000000000, toInt16(1) > 1.000000000, toInt16(1) >= 1.000000000, 1.000000000 = toInt16(1), 1.000000000 != toInt16(1), 1.000000000 < toInt16(1), 1.000000000 <= toInt16(1), 1.000000000 > toInt16(1), 1.000000000 >= toInt16(1) , toUInt32(1) = 1.000000000, toUInt32(1) != 1.000000000, toUInt32(1) < 1.000000000, toUInt32(1) <= 1.000000000, toUInt32(1) > 1.000000000, toUInt32(1) >= 1.000000000, 1.000000000 = toUInt32(1), 1.000000000 != toUInt32(1), 1.000000000 < toUInt32(1), 1.000000000 <= toUInt32(1), 1.000000000 > toUInt32(1), 1.000000000 >= toUInt32(1) , toInt32(1) = 1.000000000, toInt32(1) != 1.000000000, toInt32(1) < 1.000000000, toInt32(1) <= 1.000000000, toInt32(1) > 1.000000000, toInt32(1) >= 1.000000000, 1.000000000 = toInt32(1), 1.000000000 != toInt32(1), 1.000000000 < toInt32(1), 1.000000000 <= toInt32(1), 1.000000000 > toInt32(1), 1.000000000 >= toInt32(1) , toUInt64(1) = 1.000000000, toUInt64(1) != 1.000000000, toUInt64(1) < 1.000000000, toUInt64(1) <= 1.000000000, toUInt64(1) > 1.000000000, toUInt64(1) >= 1.000000000, 1.000000000 = toUInt64(1), 1.000000000 != toUInt64(1), 1.000000000 < toUInt64(1), 1.000000000 <= toUInt64(1), 
1.000000000 > toUInt64(1), 1.000000000 >= toUInt64(1) , toInt64(1) = 1.000000000, toInt64(1) != 1.000000000, toInt64(1) < 1.000000000, toInt64(1) <= 1.000000000, toInt64(1) > 1.000000000, toInt64(1) >= 1.000000000, 1.000000000 = toInt64(1), 1.000000000 != toInt64(1), 1.000000000 < toInt64(1), 1.000000000 <= toInt64(1), 1.000000000 > toInt64(1), 1.000000000 >= toInt64(1) ; +SELECT '1', '18446744073709551616.000000000', 1 = 18446744073709551616.000000000, 1 != 18446744073709551616.000000000, 1 < 18446744073709551616.000000000, 1 <= 18446744073709551616.000000000, 1 > 18446744073709551616.000000000, 1 >= 18446744073709551616.000000000, 18446744073709551616.000000000 = 1, 18446744073709551616.000000000 != 1, 18446744073709551616.000000000 < 1, 18446744073709551616.000000000 <= 1, 18446744073709551616.000000000 > 1, 18446744073709551616.000000000 >= 1 , toUInt8(1) = 18446744073709551616.000000000, toUInt8(1) != 18446744073709551616.000000000, toUInt8(1) < 18446744073709551616.000000000, toUInt8(1) <= 18446744073709551616.000000000, toUInt8(1) > 18446744073709551616.000000000, toUInt8(1) >= 18446744073709551616.000000000, 18446744073709551616.000000000 = toUInt8(1), 18446744073709551616.000000000 != toUInt8(1), 18446744073709551616.000000000 < toUInt8(1), 18446744073709551616.000000000 <= toUInt8(1), 18446744073709551616.000000000 > toUInt8(1), 18446744073709551616.000000000 >= toUInt8(1) , toInt8(1) = 18446744073709551616.000000000, toInt8(1) != 18446744073709551616.000000000, toInt8(1) < 18446744073709551616.000000000, toInt8(1) <= 18446744073709551616.000000000, toInt8(1) > 18446744073709551616.000000000, toInt8(1) >= 18446744073709551616.000000000, 18446744073709551616.000000000 = toInt8(1), 18446744073709551616.000000000 != toInt8(1), 18446744073709551616.000000000 < toInt8(1), 18446744073709551616.000000000 <= toInt8(1), 18446744073709551616.000000000 > toInt8(1), 18446744073709551616.000000000 >= toInt8(1) , toUInt16(1) = 18446744073709551616.000000000, toUInt16(1) != 18446744073709551616.000000000, toUInt16(1) < 18446744073709551616.000000000, toUInt16(1) <= 18446744073709551616.000000000, toUInt16(1) > 18446744073709551616.000000000, toUInt16(1) >= 18446744073709551616.000000000, 18446744073709551616.000000000 = toUInt16(1), 18446744073709551616.000000000 != toUInt16(1), 18446744073709551616.000000000 < toUInt16(1), 18446744073709551616.000000000 <= toUInt16(1), 18446744073709551616.000000000 > toUInt16(1), 18446744073709551616.000000000 >= toUInt16(1) , toInt16(1) = 18446744073709551616.000000000, toInt16(1) != 18446744073709551616.000000000, toInt16(1) < 18446744073709551616.000000000, toInt16(1) <= 18446744073709551616.000000000, toInt16(1) > 18446744073709551616.000000000, toInt16(1) >= 18446744073709551616.000000000, 18446744073709551616.000000000 = toInt16(1), 18446744073709551616.000000000 != toInt16(1), 18446744073709551616.000000000 < toInt16(1), 18446744073709551616.000000000 <= toInt16(1), 18446744073709551616.000000000 > toInt16(1), 18446744073709551616.000000000 >= toInt16(1) , toUInt32(1) = 18446744073709551616.000000000, toUInt32(1) != 18446744073709551616.000000000, toUInt32(1) < 18446744073709551616.000000000, toUInt32(1) <= 18446744073709551616.000000000, toUInt32(1) > 18446744073709551616.000000000, toUInt32(1) >= 18446744073709551616.000000000, 18446744073709551616.000000000 = toUInt32(1), 18446744073709551616.000000000 != toUInt32(1), 18446744073709551616.000000000 < toUInt32(1), 18446744073709551616.000000000 <= toUInt32(1), 18446744073709551616.000000000 > 
toUInt32(1), 18446744073709551616.000000000 >= toUInt32(1) , toInt32(1) = 18446744073709551616.000000000, toInt32(1) != 18446744073709551616.000000000, toInt32(1) < 18446744073709551616.000000000, toInt32(1) <= 18446744073709551616.000000000, toInt32(1) > 18446744073709551616.000000000, toInt32(1) >= 18446744073709551616.000000000, 18446744073709551616.000000000 = toInt32(1), 18446744073709551616.000000000 != toInt32(1), 18446744073709551616.000000000 < toInt32(1), 18446744073709551616.000000000 <= toInt32(1), 18446744073709551616.000000000 > toInt32(1), 18446744073709551616.000000000 >= toInt32(1) , toUInt64(1) = 18446744073709551616.000000000, toUInt64(1) != 18446744073709551616.000000000, toUInt64(1) < 18446744073709551616.000000000, toUInt64(1) <= 18446744073709551616.000000000, toUInt64(1) > 18446744073709551616.000000000, toUInt64(1) >= 18446744073709551616.000000000, 18446744073709551616.000000000 = toUInt64(1), 18446744073709551616.000000000 != toUInt64(1), 18446744073709551616.000000000 < toUInt64(1), 18446744073709551616.000000000 <= toUInt64(1), 18446744073709551616.000000000 > toUInt64(1), 18446744073709551616.000000000 >= toUInt64(1) , toInt64(1) = 18446744073709551616.000000000, toInt64(1) != 18446744073709551616.000000000, toInt64(1) < 18446744073709551616.000000000, toInt64(1) <= 18446744073709551616.000000000, toInt64(1) > 18446744073709551616.000000000, toInt64(1) >= 18446744073709551616.000000000, 18446744073709551616.000000000 = toInt64(1), 18446744073709551616.000000000 != toInt64(1), 18446744073709551616.000000000 < toInt64(1), 18446744073709551616.000000000 <= toInt64(1), 18446744073709551616.000000000 > toInt64(1), 18446744073709551616.000000000 >= toInt64(1) ; +SELECT '1', '9223372036854775808.000000000', 1 = 9223372036854775808.000000000, 1 != 9223372036854775808.000000000, 1 < 9223372036854775808.000000000, 1 <= 9223372036854775808.000000000, 1 > 9223372036854775808.000000000, 1 >= 9223372036854775808.000000000, 9223372036854775808.000000000 = 1, 9223372036854775808.000000000 != 1, 9223372036854775808.000000000 < 1, 9223372036854775808.000000000 <= 1, 9223372036854775808.000000000 > 1, 9223372036854775808.000000000 >= 1 , toUInt8(1) = 9223372036854775808.000000000, toUInt8(1) != 9223372036854775808.000000000, toUInt8(1) < 9223372036854775808.000000000, toUInt8(1) <= 9223372036854775808.000000000, toUInt8(1) > 9223372036854775808.000000000, toUInt8(1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toUInt8(1), 9223372036854775808.000000000 != toUInt8(1), 9223372036854775808.000000000 < toUInt8(1), 9223372036854775808.000000000 <= toUInt8(1), 9223372036854775808.000000000 > toUInt8(1), 9223372036854775808.000000000 >= toUInt8(1) , toInt8(1) = 9223372036854775808.000000000, toInt8(1) != 9223372036854775808.000000000, toInt8(1) < 9223372036854775808.000000000, toInt8(1) <= 9223372036854775808.000000000, toInt8(1) > 9223372036854775808.000000000, toInt8(1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt8(1), 9223372036854775808.000000000 != toInt8(1), 9223372036854775808.000000000 < toInt8(1), 9223372036854775808.000000000 <= toInt8(1), 9223372036854775808.000000000 > toInt8(1), 9223372036854775808.000000000 >= toInt8(1) , toUInt16(1) = 9223372036854775808.000000000, toUInt16(1) != 9223372036854775808.000000000, toUInt16(1) < 9223372036854775808.000000000, toUInt16(1) <= 9223372036854775808.000000000, toUInt16(1) > 9223372036854775808.000000000, toUInt16(1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = 
toUInt16(1), 9223372036854775808.000000000 != toUInt16(1), 9223372036854775808.000000000 < toUInt16(1), 9223372036854775808.000000000 <= toUInt16(1), 9223372036854775808.000000000 > toUInt16(1), 9223372036854775808.000000000 >= toUInt16(1) , toInt16(1) = 9223372036854775808.000000000, toInt16(1) != 9223372036854775808.000000000, toInt16(1) < 9223372036854775808.000000000, toInt16(1) <= 9223372036854775808.000000000, toInt16(1) > 9223372036854775808.000000000, toInt16(1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt16(1), 9223372036854775808.000000000 != toInt16(1), 9223372036854775808.000000000 < toInt16(1), 9223372036854775808.000000000 <= toInt16(1), 9223372036854775808.000000000 > toInt16(1), 9223372036854775808.000000000 >= toInt16(1) , toUInt32(1) = 9223372036854775808.000000000, toUInt32(1) != 9223372036854775808.000000000, toUInt32(1) < 9223372036854775808.000000000, toUInt32(1) <= 9223372036854775808.000000000, toUInt32(1) > 9223372036854775808.000000000, toUInt32(1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toUInt32(1), 9223372036854775808.000000000 != toUInt32(1), 9223372036854775808.000000000 < toUInt32(1), 9223372036854775808.000000000 <= toUInt32(1), 9223372036854775808.000000000 > toUInt32(1), 9223372036854775808.000000000 >= toUInt32(1) , toInt32(1) = 9223372036854775808.000000000, toInt32(1) != 9223372036854775808.000000000, toInt32(1) < 9223372036854775808.000000000, toInt32(1) <= 9223372036854775808.000000000, toInt32(1) > 9223372036854775808.000000000, toInt32(1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt32(1), 9223372036854775808.000000000 != toInt32(1), 9223372036854775808.000000000 < toInt32(1), 9223372036854775808.000000000 <= toInt32(1), 9223372036854775808.000000000 > toInt32(1), 9223372036854775808.000000000 >= toInt32(1) , toUInt64(1) = 9223372036854775808.000000000, toUInt64(1) != 9223372036854775808.000000000, toUInt64(1) < 9223372036854775808.000000000, toUInt64(1) <= 9223372036854775808.000000000, toUInt64(1) > 9223372036854775808.000000000, toUInt64(1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toUInt64(1), 9223372036854775808.000000000 != toUInt64(1), 9223372036854775808.000000000 < toUInt64(1), 9223372036854775808.000000000 <= toUInt64(1), 9223372036854775808.000000000 > toUInt64(1), 9223372036854775808.000000000 >= toUInt64(1) , toInt64(1) = 9223372036854775808.000000000, toInt64(1) != 9223372036854775808.000000000, toInt64(1) < 9223372036854775808.000000000, toInt64(1) <= 9223372036854775808.000000000, toInt64(1) > 9223372036854775808.000000000, toInt64(1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt64(1), 9223372036854775808.000000000 != toInt64(1), 9223372036854775808.000000000 < toInt64(1), 9223372036854775808.000000000 <= toInt64(1), 9223372036854775808.000000000 > toInt64(1), 9223372036854775808.000000000 >= toInt64(1) ; +SELECT '1', '-9223372036854775808.000000000', 1 = -9223372036854775808.000000000, 1 != -9223372036854775808.000000000, 1 < -9223372036854775808.000000000, 1 <= -9223372036854775808.000000000, 1 > -9223372036854775808.000000000, 1 >= -9223372036854775808.000000000, -9223372036854775808.000000000 = 1, -9223372036854775808.000000000 != 1, -9223372036854775808.000000000 < 1, -9223372036854775808.000000000 <= 1, -9223372036854775808.000000000 > 1, -9223372036854775808.000000000 >= 1 , toUInt8(1) = -9223372036854775808.000000000, toUInt8(1) != -9223372036854775808.000000000, toUInt8(1) < 
-9223372036854775808.000000000, toUInt8(1) <= -9223372036854775808.000000000, toUInt8(1) > -9223372036854775808.000000000, toUInt8(1) >= -9223372036854775808.000000000, -9223372036854775808.000000000 = toUInt8(1), -9223372036854775808.000000000 != toUInt8(1), -9223372036854775808.000000000 < toUInt8(1), -9223372036854775808.000000000 <= toUInt8(1), -9223372036854775808.000000000 > toUInt8(1), -9223372036854775808.000000000 >= toUInt8(1) , toInt8(1) = -9223372036854775808.000000000, toInt8(1) != -9223372036854775808.000000000, toInt8(1) < -9223372036854775808.000000000, toInt8(1) <= -9223372036854775808.000000000, toInt8(1) > -9223372036854775808.000000000, toInt8(1) >= -9223372036854775808.000000000, -9223372036854775808.000000000 = toInt8(1), -9223372036854775808.000000000 != toInt8(1), -9223372036854775808.000000000 < toInt8(1), -9223372036854775808.000000000 <= toInt8(1), -9223372036854775808.000000000 > toInt8(1), -9223372036854775808.000000000 >= toInt8(1) , toUInt16(1) = -9223372036854775808.000000000, toUInt16(1) != -9223372036854775808.000000000, toUInt16(1) < -9223372036854775808.000000000, toUInt16(1) <= -9223372036854775808.000000000, toUInt16(1) > -9223372036854775808.000000000, toUInt16(1) >= -9223372036854775808.000000000, -9223372036854775808.000000000 = toUInt16(1), -9223372036854775808.000000000 != toUInt16(1), -9223372036854775808.000000000 < toUInt16(1), -9223372036854775808.000000000 <= toUInt16(1), -9223372036854775808.000000000 > toUInt16(1), -9223372036854775808.000000000 >= toUInt16(1) , toInt16(1) = -9223372036854775808.000000000, toInt16(1) != -9223372036854775808.000000000, toInt16(1) < -9223372036854775808.000000000, toInt16(1) <= -9223372036854775808.000000000, toInt16(1) > -9223372036854775808.000000000, toInt16(1) >= -9223372036854775808.000000000, -9223372036854775808.000000000 = toInt16(1), -9223372036854775808.000000000 != toInt16(1), -9223372036854775808.000000000 < toInt16(1), -9223372036854775808.000000000 <= toInt16(1), -9223372036854775808.000000000 > toInt16(1), -9223372036854775808.000000000 >= toInt16(1) , toUInt32(1) = -9223372036854775808.000000000, toUInt32(1) != -9223372036854775808.000000000, toUInt32(1) < -9223372036854775808.000000000, toUInt32(1) <= -9223372036854775808.000000000, toUInt32(1) > -9223372036854775808.000000000, toUInt32(1) >= -9223372036854775808.000000000, -9223372036854775808.000000000 = toUInt32(1), -9223372036854775808.000000000 != toUInt32(1), -9223372036854775808.000000000 < toUInt32(1), -9223372036854775808.000000000 <= toUInt32(1), -9223372036854775808.000000000 > toUInt32(1), -9223372036854775808.000000000 >= toUInt32(1) , toInt32(1) = -9223372036854775808.000000000, toInt32(1) != -9223372036854775808.000000000, toInt32(1) < -9223372036854775808.000000000, toInt32(1) <= -9223372036854775808.000000000, toInt32(1) > -9223372036854775808.000000000, toInt32(1) >= -9223372036854775808.000000000, -9223372036854775808.000000000 = toInt32(1), -9223372036854775808.000000000 != toInt32(1), -9223372036854775808.000000000 < toInt32(1), -9223372036854775808.000000000 <= toInt32(1), -9223372036854775808.000000000 > toInt32(1), -9223372036854775808.000000000 >= toInt32(1) , toUInt64(1) = -9223372036854775808.000000000, toUInt64(1) != -9223372036854775808.000000000, toUInt64(1) < -9223372036854775808.000000000, toUInt64(1) <= -9223372036854775808.000000000, toUInt64(1) > -9223372036854775808.000000000, toUInt64(1) >= -9223372036854775808.000000000, -9223372036854775808.000000000 = toUInt64(1), -9223372036854775808.000000000 != 
toUInt64(1), -9223372036854775808.000000000 < toUInt64(1), -9223372036854775808.000000000 <= toUInt64(1), -9223372036854775808.000000000 > toUInt64(1), -9223372036854775808.000000000 >= toUInt64(1) , toInt64(1) = -9223372036854775808.000000000, toInt64(1) != -9223372036854775808.000000000, toInt64(1) < -9223372036854775808.000000000, toInt64(1) <= -9223372036854775808.000000000, toInt64(1) > -9223372036854775808.000000000, toInt64(1) >= -9223372036854775808.000000000, -9223372036854775808.000000000 = toInt64(1), -9223372036854775808.000000000 != toInt64(1), -9223372036854775808.000000000 < toInt64(1), -9223372036854775808.000000000 <= toInt64(1), -9223372036854775808.000000000 > toInt64(1), -9223372036854775808.000000000 >= toInt64(1) ; +SELECT '1', '9223372036854775808.000000000', 1 = 9223372036854775808.000000000, 1 != 9223372036854775808.000000000, 1 < 9223372036854775808.000000000, 1 <= 9223372036854775808.000000000, 1 > 9223372036854775808.000000000, 1 >= 9223372036854775808.000000000, 9223372036854775808.000000000 = 1, 9223372036854775808.000000000 != 1, 9223372036854775808.000000000 < 1, 9223372036854775808.000000000 <= 1, 9223372036854775808.000000000 > 1, 9223372036854775808.000000000 >= 1 , toUInt8(1) = 9223372036854775808.000000000, toUInt8(1) != 9223372036854775808.000000000, toUInt8(1) < 9223372036854775808.000000000, toUInt8(1) <= 9223372036854775808.000000000, toUInt8(1) > 9223372036854775808.000000000, toUInt8(1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toUInt8(1), 9223372036854775808.000000000 != toUInt8(1), 9223372036854775808.000000000 < toUInt8(1), 9223372036854775808.000000000 <= toUInt8(1), 9223372036854775808.000000000 > toUInt8(1), 9223372036854775808.000000000 >= toUInt8(1) , toInt8(1) = 9223372036854775808.000000000, toInt8(1) != 9223372036854775808.000000000, toInt8(1) < 9223372036854775808.000000000, toInt8(1) <= 9223372036854775808.000000000, toInt8(1) > 9223372036854775808.000000000, toInt8(1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt8(1), 9223372036854775808.000000000 != toInt8(1), 9223372036854775808.000000000 < toInt8(1), 9223372036854775808.000000000 <= toInt8(1), 9223372036854775808.000000000 > toInt8(1), 9223372036854775808.000000000 >= toInt8(1) , toUInt16(1) = 9223372036854775808.000000000, toUInt16(1) != 9223372036854775808.000000000, toUInt16(1) < 9223372036854775808.000000000, toUInt16(1) <= 9223372036854775808.000000000, toUInt16(1) > 9223372036854775808.000000000, toUInt16(1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toUInt16(1), 9223372036854775808.000000000 != toUInt16(1), 9223372036854775808.000000000 < toUInt16(1), 9223372036854775808.000000000 <= toUInt16(1), 9223372036854775808.000000000 > toUInt16(1), 9223372036854775808.000000000 >= toUInt16(1) , toInt16(1) = 9223372036854775808.000000000, toInt16(1) != 9223372036854775808.000000000, toInt16(1) < 9223372036854775808.000000000, toInt16(1) <= 9223372036854775808.000000000, toInt16(1) > 9223372036854775808.000000000, toInt16(1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt16(1), 9223372036854775808.000000000 != toInt16(1), 9223372036854775808.000000000 < toInt16(1), 9223372036854775808.000000000 <= toInt16(1), 9223372036854775808.000000000 > toInt16(1), 9223372036854775808.000000000 >= toInt16(1) , toUInt32(1) = 9223372036854775808.000000000, toUInt32(1) != 9223372036854775808.000000000, toUInt32(1) < 9223372036854775808.000000000, toUInt32(1) <= 9223372036854775808.000000000, toUInt32(1) 
> 9223372036854775808.000000000, toUInt32(1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toUInt32(1), 9223372036854775808.000000000 != toUInt32(1), 9223372036854775808.000000000 < toUInt32(1), 9223372036854775808.000000000 <= toUInt32(1), 9223372036854775808.000000000 > toUInt32(1), 9223372036854775808.000000000 >= toUInt32(1) , toInt32(1) = 9223372036854775808.000000000, toInt32(1) != 9223372036854775808.000000000, toInt32(1) < 9223372036854775808.000000000, toInt32(1) <= 9223372036854775808.000000000, toInt32(1) > 9223372036854775808.000000000, toInt32(1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt32(1), 9223372036854775808.000000000 != toInt32(1), 9223372036854775808.000000000 < toInt32(1), 9223372036854775808.000000000 <= toInt32(1), 9223372036854775808.000000000 > toInt32(1), 9223372036854775808.000000000 >= toInt32(1) , toUInt64(1) = 9223372036854775808.000000000, toUInt64(1) != 9223372036854775808.000000000, toUInt64(1) < 9223372036854775808.000000000, toUInt64(1) <= 9223372036854775808.000000000, toUInt64(1) > 9223372036854775808.000000000, toUInt64(1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toUInt64(1), 9223372036854775808.000000000 != toUInt64(1), 9223372036854775808.000000000 < toUInt64(1), 9223372036854775808.000000000 <= toUInt64(1), 9223372036854775808.000000000 > toUInt64(1), 9223372036854775808.000000000 >= toUInt64(1) , toInt64(1) = 9223372036854775808.000000000, toInt64(1) != 9223372036854775808.000000000, toInt64(1) < 9223372036854775808.000000000, toInt64(1) <= 9223372036854775808.000000000, toInt64(1) > 9223372036854775808.000000000, toInt64(1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt64(1), 9223372036854775808.000000000 != toInt64(1), 9223372036854775808.000000000 < toInt64(1), 9223372036854775808.000000000 <= toInt64(1), 9223372036854775808.000000000 > toInt64(1), 9223372036854775808.000000000 >= toInt64(1) ; +SELECT '1', '2251799813685248.000000000', 1 = 2251799813685248.000000000, 1 != 2251799813685248.000000000, 1 < 2251799813685248.000000000, 1 <= 2251799813685248.000000000, 1 > 2251799813685248.000000000, 1 >= 2251799813685248.000000000, 2251799813685248.000000000 = 1, 2251799813685248.000000000 != 1, 2251799813685248.000000000 < 1, 2251799813685248.000000000 <= 1, 2251799813685248.000000000 > 1, 2251799813685248.000000000 >= 1 , toUInt8(1) = 2251799813685248.000000000, toUInt8(1) != 2251799813685248.000000000, toUInt8(1) < 2251799813685248.000000000, toUInt8(1) <= 2251799813685248.000000000, toUInt8(1) > 2251799813685248.000000000, toUInt8(1) >= 2251799813685248.000000000, 2251799813685248.000000000 = toUInt8(1), 2251799813685248.000000000 != toUInt8(1), 2251799813685248.000000000 < toUInt8(1), 2251799813685248.000000000 <= toUInt8(1), 2251799813685248.000000000 > toUInt8(1), 2251799813685248.000000000 >= toUInt8(1) , toInt8(1) = 2251799813685248.000000000, toInt8(1) != 2251799813685248.000000000, toInt8(1) < 2251799813685248.000000000, toInt8(1) <= 2251799813685248.000000000, toInt8(1) > 2251799813685248.000000000, toInt8(1) >= 2251799813685248.000000000, 2251799813685248.000000000 = toInt8(1), 2251799813685248.000000000 != toInt8(1), 2251799813685248.000000000 < toInt8(1), 2251799813685248.000000000 <= toInt8(1), 2251799813685248.000000000 > toInt8(1), 2251799813685248.000000000 >= toInt8(1) , toUInt16(1) = 2251799813685248.000000000, toUInt16(1) != 2251799813685248.000000000, toUInt16(1) < 2251799813685248.000000000, toUInt16(1) <= 
2251799813685248.000000000, toUInt16(1) > 2251799813685248.000000000, toUInt16(1) >= 2251799813685248.000000000, 2251799813685248.000000000 = toUInt16(1), 2251799813685248.000000000 != toUInt16(1), 2251799813685248.000000000 < toUInt16(1), 2251799813685248.000000000 <= toUInt16(1), 2251799813685248.000000000 > toUInt16(1), 2251799813685248.000000000 >= toUInt16(1) , toInt16(1) = 2251799813685248.000000000, toInt16(1) != 2251799813685248.000000000, toInt16(1) < 2251799813685248.000000000, toInt16(1) <= 2251799813685248.000000000, toInt16(1) > 2251799813685248.000000000, toInt16(1) >= 2251799813685248.000000000, 2251799813685248.000000000 = toInt16(1), 2251799813685248.000000000 != toInt16(1), 2251799813685248.000000000 < toInt16(1), 2251799813685248.000000000 <= toInt16(1), 2251799813685248.000000000 > toInt16(1), 2251799813685248.000000000 >= toInt16(1) , toUInt32(1) = 2251799813685248.000000000, toUInt32(1) != 2251799813685248.000000000, toUInt32(1) < 2251799813685248.000000000, toUInt32(1) <= 2251799813685248.000000000, toUInt32(1) > 2251799813685248.000000000, toUInt32(1) >= 2251799813685248.000000000, 2251799813685248.000000000 = toUInt32(1), 2251799813685248.000000000 != toUInt32(1), 2251799813685248.000000000 < toUInt32(1), 2251799813685248.000000000 <= toUInt32(1), 2251799813685248.000000000 > toUInt32(1), 2251799813685248.000000000 >= toUInt32(1) , toInt32(1) = 2251799813685248.000000000, toInt32(1) != 2251799813685248.000000000, toInt32(1) < 2251799813685248.000000000, toInt32(1) <= 2251799813685248.000000000, toInt32(1) > 2251799813685248.000000000, toInt32(1) >= 2251799813685248.000000000, 2251799813685248.000000000 = toInt32(1), 2251799813685248.000000000 != toInt32(1), 2251799813685248.000000000 < toInt32(1), 2251799813685248.000000000 <= toInt32(1), 2251799813685248.000000000 > toInt32(1), 2251799813685248.000000000 >= toInt32(1) , toUInt64(1) = 2251799813685248.000000000, toUInt64(1) != 2251799813685248.000000000, toUInt64(1) < 2251799813685248.000000000, toUInt64(1) <= 2251799813685248.000000000, toUInt64(1) > 2251799813685248.000000000, toUInt64(1) >= 2251799813685248.000000000, 2251799813685248.000000000 = toUInt64(1), 2251799813685248.000000000 != toUInt64(1), 2251799813685248.000000000 < toUInt64(1), 2251799813685248.000000000 <= toUInt64(1), 2251799813685248.000000000 > toUInt64(1), 2251799813685248.000000000 >= toUInt64(1) , toInt64(1) = 2251799813685248.000000000, toInt64(1) != 2251799813685248.000000000, toInt64(1) < 2251799813685248.000000000, toInt64(1) <= 2251799813685248.000000000, toInt64(1) > 2251799813685248.000000000, toInt64(1) >= 2251799813685248.000000000, 2251799813685248.000000000 = toInt64(1), 2251799813685248.000000000 != toInt64(1), 2251799813685248.000000000 < toInt64(1), 2251799813685248.000000000 <= toInt64(1), 2251799813685248.000000000 > toInt64(1), 2251799813685248.000000000 >= toInt64(1) ; +SELECT '1', '4503599627370496.000000000', 1 = 4503599627370496.000000000, 1 != 4503599627370496.000000000, 1 < 4503599627370496.000000000, 1 <= 4503599627370496.000000000, 1 > 4503599627370496.000000000, 1 >= 4503599627370496.000000000, 4503599627370496.000000000 = 1, 4503599627370496.000000000 != 1, 4503599627370496.000000000 < 1, 4503599627370496.000000000 <= 1, 4503599627370496.000000000 > 1, 4503599627370496.000000000 >= 1 , toUInt8(1) = 4503599627370496.000000000, toUInt8(1) != 4503599627370496.000000000, toUInt8(1) < 4503599627370496.000000000, toUInt8(1) <= 4503599627370496.000000000, toUInt8(1) > 4503599627370496.000000000, toUInt8(1) >= 
4503599627370496.000000000, 4503599627370496.000000000 = toUInt8(1), 4503599627370496.000000000 != toUInt8(1), 4503599627370496.000000000 < toUInt8(1), 4503599627370496.000000000 <= toUInt8(1), 4503599627370496.000000000 > toUInt8(1), 4503599627370496.000000000 >= toUInt8(1) , toInt8(1) = 4503599627370496.000000000, toInt8(1) != 4503599627370496.000000000, toInt8(1) < 4503599627370496.000000000, toInt8(1) <= 4503599627370496.000000000, toInt8(1) > 4503599627370496.000000000, toInt8(1) >= 4503599627370496.000000000, 4503599627370496.000000000 = toInt8(1), 4503599627370496.000000000 != toInt8(1), 4503599627370496.000000000 < toInt8(1), 4503599627370496.000000000 <= toInt8(1), 4503599627370496.000000000 > toInt8(1), 4503599627370496.000000000 >= toInt8(1) , toUInt16(1) = 4503599627370496.000000000, toUInt16(1) != 4503599627370496.000000000, toUInt16(1) < 4503599627370496.000000000, toUInt16(1) <= 4503599627370496.000000000, toUInt16(1) > 4503599627370496.000000000, toUInt16(1) >= 4503599627370496.000000000, 4503599627370496.000000000 = toUInt16(1), 4503599627370496.000000000 != toUInt16(1), 4503599627370496.000000000 < toUInt16(1), 4503599627370496.000000000 <= toUInt16(1), 4503599627370496.000000000 > toUInt16(1), 4503599627370496.000000000 >= toUInt16(1) , toInt16(1) = 4503599627370496.000000000, toInt16(1) != 4503599627370496.000000000, toInt16(1) < 4503599627370496.000000000, toInt16(1) <= 4503599627370496.000000000, toInt16(1) > 4503599627370496.000000000, toInt16(1) >= 4503599627370496.000000000, 4503599627370496.000000000 = toInt16(1), 4503599627370496.000000000 != toInt16(1), 4503599627370496.000000000 < toInt16(1), 4503599627370496.000000000 <= toInt16(1), 4503599627370496.000000000 > toInt16(1), 4503599627370496.000000000 >= toInt16(1) , toUInt32(1) = 4503599627370496.000000000, toUInt32(1) != 4503599627370496.000000000, toUInt32(1) < 4503599627370496.000000000, toUInt32(1) <= 4503599627370496.000000000, toUInt32(1) > 4503599627370496.000000000, toUInt32(1) >= 4503599627370496.000000000, 4503599627370496.000000000 = toUInt32(1), 4503599627370496.000000000 != toUInt32(1), 4503599627370496.000000000 < toUInt32(1), 4503599627370496.000000000 <= toUInt32(1), 4503599627370496.000000000 > toUInt32(1), 4503599627370496.000000000 >= toUInt32(1) , toInt32(1) = 4503599627370496.000000000, toInt32(1) != 4503599627370496.000000000, toInt32(1) < 4503599627370496.000000000, toInt32(1) <= 4503599627370496.000000000, toInt32(1) > 4503599627370496.000000000, toInt32(1) >= 4503599627370496.000000000, 4503599627370496.000000000 = toInt32(1), 4503599627370496.000000000 != toInt32(1), 4503599627370496.000000000 < toInt32(1), 4503599627370496.000000000 <= toInt32(1), 4503599627370496.000000000 > toInt32(1), 4503599627370496.000000000 >= toInt32(1) , toUInt64(1) = 4503599627370496.000000000, toUInt64(1) != 4503599627370496.000000000, toUInt64(1) < 4503599627370496.000000000, toUInt64(1) <= 4503599627370496.000000000, toUInt64(1) > 4503599627370496.000000000, toUInt64(1) >= 4503599627370496.000000000, 4503599627370496.000000000 = toUInt64(1), 4503599627370496.000000000 != toUInt64(1), 4503599627370496.000000000 < toUInt64(1), 4503599627370496.000000000 <= toUInt64(1), 4503599627370496.000000000 > toUInt64(1), 4503599627370496.000000000 >= toUInt64(1) , toInt64(1) = 4503599627370496.000000000, toInt64(1) != 4503599627370496.000000000, toInt64(1) < 4503599627370496.000000000, toInt64(1) <= 4503599627370496.000000000, toInt64(1) > 4503599627370496.000000000, toInt64(1) >= 4503599627370496.000000000, 
4503599627370496.000000000 = toInt64(1), 4503599627370496.000000000 != toInt64(1), 4503599627370496.000000000 < toInt64(1), 4503599627370496.000000000 <= toInt64(1), 4503599627370496.000000000 > toInt64(1), 4503599627370496.000000000 >= toInt64(1) ; +SELECT '1', '9007199254740991.000000000', 1 = 9007199254740991.000000000, 1 != 9007199254740991.000000000, 1 < 9007199254740991.000000000, 1 <= 9007199254740991.000000000, 1 > 9007199254740991.000000000, 1 >= 9007199254740991.000000000, 9007199254740991.000000000 = 1, 9007199254740991.000000000 != 1, 9007199254740991.000000000 < 1, 9007199254740991.000000000 <= 1, 9007199254740991.000000000 > 1, 9007199254740991.000000000 >= 1 , toUInt8(1) = 9007199254740991.000000000, toUInt8(1) != 9007199254740991.000000000, toUInt8(1) < 9007199254740991.000000000, toUInt8(1) <= 9007199254740991.000000000, toUInt8(1) > 9007199254740991.000000000, toUInt8(1) >= 9007199254740991.000000000, 9007199254740991.000000000 = toUInt8(1), 9007199254740991.000000000 != toUInt8(1), 9007199254740991.000000000 < toUInt8(1), 9007199254740991.000000000 <= toUInt8(1), 9007199254740991.000000000 > toUInt8(1), 9007199254740991.000000000 >= toUInt8(1) , toInt8(1) = 9007199254740991.000000000, toInt8(1) != 9007199254740991.000000000, toInt8(1) < 9007199254740991.000000000, toInt8(1) <= 9007199254740991.000000000, toInt8(1) > 9007199254740991.000000000, toInt8(1) >= 9007199254740991.000000000, 9007199254740991.000000000 = toInt8(1), 9007199254740991.000000000 != toInt8(1), 9007199254740991.000000000 < toInt8(1), 9007199254740991.000000000 <= toInt8(1), 9007199254740991.000000000 > toInt8(1), 9007199254740991.000000000 >= toInt8(1) , toUInt16(1) = 9007199254740991.000000000, toUInt16(1) != 9007199254740991.000000000, toUInt16(1) < 9007199254740991.000000000, toUInt16(1) <= 9007199254740991.000000000, toUInt16(1) > 9007199254740991.000000000, toUInt16(1) >= 9007199254740991.000000000, 9007199254740991.000000000 = toUInt16(1), 9007199254740991.000000000 != toUInt16(1), 9007199254740991.000000000 < toUInt16(1), 9007199254740991.000000000 <= toUInt16(1), 9007199254740991.000000000 > toUInt16(1), 9007199254740991.000000000 >= toUInt16(1) , toInt16(1) = 9007199254740991.000000000, toInt16(1) != 9007199254740991.000000000, toInt16(1) < 9007199254740991.000000000, toInt16(1) <= 9007199254740991.000000000, toInt16(1) > 9007199254740991.000000000, toInt16(1) >= 9007199254740991.000000000, 9007199254740991.000000000 = toInt16(1), 9007199254740991.000000000 != toInt16(1), 9007199254740991.000000000 < toInt16(1), 9007199254740991.000000000 <= toInt16(1), 9007199254740991.000000000 > toInt16(1), 9007199254740991.000000000 >= toInt16(1) , toUInt32(1) = 9007199254740991.000000000, toUInt32(1) != 9007199254740991.000000000, toUInt32(1) < 9007199254740991.000000000, toUInt32(1) <= 9007199254740991.000000000, toUInt32(1) > 9007199254740991.000000000, toUInt32(1) >= 9007199254740991.000000000, 9007199254740991.000000000 = toUInt32(1), 9007199254740991.000000000 != toUInt32(1), 9007199254740991.000000000 < toUInt32(1), 9007199254740991.000000000 <= toUInt32(1), 9007199254740991.000000000 > toUInt32(1), 9007199254740991.000000000 >= toUInt32(1) , toInt32(1) = 9007199254740991.000000000, toInt32(1) != 9007199254740991.000000000, toInt32(1) < 9007199254740991.000000000, toInt32(1) <= 9007199254740991.000000000, toInt32(1) > 9007199254740991.000000000, toInt32(1) >= 9007199254740991.000000000, 9007199254740991.000000000 = toInt32(1), 9007199254740991.000000000 != toInt32(1), 9007199254740991.000000000 < 
toInt32(1), 9007199254740991.000000000 <= toInt32(1), 9007199254740991.000000000 > toInt32(1), 9007199254740991.000000000 >= toInt32(1) , toUInt64(1) = 9007199254740991.000000000, toUInt64(1) != 9007199254740991.000000000, toUInt64(1) < 9007199254740991.000000000, toUInt64(1) <= 9007199254740991.000000000, toUInt64(1) > 9007199254740991.000000000, toUInt64(1) >= 9007199254740991.000000000, 9007199254740991.000000000 = toUInt64(1), 9007199254740991.000000000 != toUInt64(1), 9007199254740991.000000000 < toUInt64(1), 9007199254740991.000000000 <= toUInt64(1), 9007199254740991.000000000 > toUInt64(1), 9007199254740991.000000000 >= toUInt64(1) , toInt64(1) = 9007199254740991.000000000, toInt64(1) != 9007199254740991.000000000, toInt64(1) < 9007199254740991.000000000, toInt64(1) <= 9007199254740991.000000000, toInt64(1) > 9007199254740991.000000000, toInt64(1) >= 9007199254740991.000000000, 9007199254740991.000000000 = toInt64(1), 9007199254740991.000000000 != toInt64(1), 9007199254740991.000000000 < toInt64(1), 9007199254740991.000000000 <= toInt64(1), 9007199254740991.000000000 > toInt64(1), 9007199254740991.000000000 >= toInt64(1) ; +SELECT '1', '9007199254740992.000000000', 1 = 9007199254740992.000000000, 1 != 9007199254740992.000000000, 1 < 9007199254740992.000000000, 1 <= 9007199254740992.000000000, 1 > 9007199254740992.000000000, 1 >= 9007199254740992.000000000, 9007199254740992.000000000 = 1, 9007199254740992.000000000 != 1, 9007199254740992.000000000 < 1, 9007199254740992.000000000 <= 1, 9007199254740992.000000000 > 1, 9007199254740992.000000000 >= 1 , toUInt8(1) = 9007199254740992.000000000, toUInt8(1) != 9007199254740992.000000000, toUInt8(1) < 9007199254740992.000000000, toUInt8(1) <= 9007199254740992.000000000, toUInt8(1) > 9007199254740992.000000000, toUInt8(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt8(1), 9007199254740992.000000000 != toUInt8(1), 9007199254740992.000000000 < toUInt8(1), 9007199254740992.000000000 <= toUInt8(1), 9007199254740992.000000000 > toUInt8(1), 9007199254740992.000000000 >= toUInt8(1) , toInt8(1) = 9007199254740992.000000000, toInt8(1) != 9007199254740992.000000000, toInt8(1) < 9007199254740992.000000000, toInt8(1) <= 9007199254740992.000000000, toInt8(1) > 9007199254740992.000000000, toInt8(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt8(1), 9007199254740992.000000000 != toInt8(1), 9007199254740992.000000000 < toInt8(1), 9007199254740992.000000000 <= toInt8(1), 9007199254740992.000000000 > toInt8(1), 9007199254740992.000000000 >= toInt8(1) , toUInt16(1) = 9007199254740992.000000000, toUInt16(1) != 9007199254740992.000000000, toUInt16(1) < 9007199254740992.000000000, toUInt16(1) <= 9007199254740992.000000000, toUInt16(1) > 9007199254740992.000000000, toUInt16(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt16(1), 9007199254740992.000000000 != toUInt16(1), 9007199254740992.000000000 < toUInt16(1), 9007199254740992.000000000 <= toUInt16(1), 9007199254740992.000000000 > toUInt16(1), 9007199254740992.000000000 >= toUInt16(1) , toInt16(1) = 9007199254740992.000000000, toInt16(1) != 9007199254740992.000000000, toInt16(1) < 9007199254740992.000000000, toInt16(1) <= 9007199254740992.000000000, toInt16(1) > 9007199254740992.000000000, toInt16(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt16(1), 9007199254740992.000000000 != toInt16(1), 9007199254740992.000000000 < toInt16(1), 9007199254740992.000000000 <= toInt16(1), 9007199254740992.000000000 > toInt16(1), 
9007199254740992.000000000 >= toInt16(1) , toUInt32(1) = 9007199254740992.000000000, toUInt32(1) != 9007199254740992.000000000, toUInt32(1) < 9007199254740992.000000000, toUInt32(1) <= 9007199254740992.000000000, toUInt32(1) > 9007199254740992.000000000, toUInt32(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt32(1), 9007199254740992.000000000 != toUInt32(1), 9007199254740992.000000000 < toUInt32(1), 9007199254740992.000000000 <= toUInt32(1), 9007199254740992.000000000 > toUInt32(1), 9007199254740992.000000000 >= toUInt32(1) , toInt32(1) = 9007199254740992.000000000, toInt32(1) != 9007199254740992.000000000, toInt32(1) < 9007199254740992.000000000, toInt32(1) <= 9007199254740992.000000000, toInt32(1) > 9007199254740992.000000000, toInt32(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt32(1), 9007199254740992.000000000 != toInt32(1), 9007199254740992.000000000 < toInt32(1), 9007199254740992.000000000 <= toInt32(1), 9007199254740992.000000000 > toInt32(1), 9007199254740992.000000000 >= toInt32(1) , toUInt64(1) = 9007199254740992.000000000, toUInt64(1) != 9007199254740992.000000000, toUInt64(1) < 9007199254740992.000000000, toUInt64(1) <= 9007199254740992.000000000, toUInt64(1) > 9007199254740992.000000000, toUInt64(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt64(1), 9007199254740992.000000000 != toUInt64(1), 9007199254740992.000000000 < toUInt64(1), 9007199254740992.000000000 <= toUInt64(1), 9007199254740992.000000000 > toUInt64(1), 9007199254740992.000000000 >= toUInt64(1) , toInt64(1) = 9007199254740992.000000000, toInt64(1) != 9007199254740992.000000000, toInt64(1) < 9007199254740992.000000000, toInt64(1) <= 9007199254740992.000000000, toInt64(1) > 9007199254740992.000000000, toInt64(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt64(1), 9007199254740992.000000000 != toInt64(1), 9007199254740992.000000000 < toInt64(1), 9007199254740992.000000000 <= toInt64(1), 9007199254740992.000000000 > toInt64(1), 9007199254740992.000000000 >= toInt64(1) ; +SELECT '1', '9007199254740992.000000000', 1 = 9007199254740992.000000000, 1 != 9007199254740992.000000000, 1 < 9007199254740992.000000000, 1 <= 9007199254740992.000000000, 1 > 9007199254740992.000000000, 1 >= 9007199254740992.000000000, 9007199254740992.000000000 = 1, 9007199254740992.000000000 != 1, 9007199254740992.000000000 < 1, 9007199254740992.000000000 <= 1, 9007199254740992.000000000 > 1, 9007199254740992.000000000 >= 1 , toUInt8(1) = 9007199254740992.000000000, toUInt8(1) != 9007199254740992.000000000, toUInt8(1) < 9007199254740992.000000000, toUInt8(1) <= 9007199254740992.000000000, toUInt8(1) > 9007199254740992.000000000, toUInt8(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt8(1), 9007199254740992.000000000 != toUInt8(1), 9007199254740992.000000000 < toUInt8(1), 9007199254740992.000000000 <= toUInt8(1), 9007199254740992.000000000 > toUInt8(1), 9007199254740992.000000000 >= toUInt8(1) , toInt8(1) = 9007199254740992.000000000, toInt8(1) != 9007199254740992.000000000, toInt8(1) < 9007199254740992.000000000, toInt8(1) <= 9007199254740992.000000000, toInt8(1) > 9007199254740992.000000000, toInt8(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt8(1), 9007199254740992.000000000 != toInt8(1), 9007199254740992.000000000 < toInt8(1), 9007199254740992.000000000 <= toInt8(1), 9007199254740992.000000000 > toInt8(1), 9007199254740992.000000000 >= toInt8(1) , toUInt16(1) = 9007199254740992.000000000, toUInt16(1) != 
9007199254740992.000000000, toUInt16(1) < 9007199254740992.000000000, toUInt16(1) <= 9007199254740992.000000000, toUInt16(1) > 9007199254740992.000000000, toUInt16(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt16(1), 9007199254740992.000000000 != toUInt16(1), 9007199254740992.000000000 < toUInt16(1), 9007199254740992.000000000 <= toUInt16(1), 9007199254740992.000000000 > toUInt16(1), 9007199254740992.000000000 >= toUInt16(1) , toInt16(1) = 9007199254740992.000000000, toInt16(1) != 9007199254740992.000000000, toInt16(1) < 9007199254740992.000000000, toInt16(1) <= 9007199254740992.000000000, toInt16(1) > 9007199254740992.000000000, toInt16(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt16(1), 9007199254740992.000000000 != toInt16(1), 9007199254740992.000000000 < toInt16(1), 9007199254740992.000000000 <= toInt16(1), 9007199254740992.000000000 > toInt16(1), 9007199254740992.000000000 >= toInt16(1) , toUInt32(1) = 9007199254740992.000000000, toUInt32(1) != 9007199254740992.000000000, toUInt32(1) < 9007199254740992.000000000, toUInt32(1) <= 9007199254740992.000000000, toUInt32(1) > 9007199254740992.000000000, toUInt32(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt32(1), 9007199254740992.000000000 != toUInt32(1), 9007199254740992.000000000 < toUInt32(1), 9007199254740992.000000000 <= toUInt32(1), 9007199254740992.000000000 > toUInt32(1), 9007199254740992.000000000 >= toUInt32(1) , toInt32(1) = 9007199254740992.000000000, toInt32(1) != 9007199254740992.000000000, toInt32(1) < 9007199254740992.000000000, toInt32(1) <= 9007199254740992.000000000, toInt32(1) > 9007199254740992.000000000, toInt32(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt32(1), 9007199254740992.000000000 != toInt32(1), 9007199254740992.000000000 < toInt32(1), 9007199254740992.000000000 <= toInt32(1), 9007199254740992.000000000 > toInt32(1), 9007199254740992.000000000 >= toInt32(1) , toUInt64(1) = 9007199254740992.000000000, toUInt64(1) != 9007199254740992.000000000, toUInt64(1) < 9007199254740992.000000000, toUInt64(1) <= 9007199254740992.000000000, toUInt64(1) > 9007199254740992.000000000, toUInt64(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt64(1), 9007199254740992.000000000 != toUInt64(1), 9007199254740992.000000000 < toUInt64(1), 9007199254740992.000000000 <= toUInt64(1), 9007199254740992.000000000 > toUInt64(1), 9007199254740992.000000000 >= toUInt64(1) , toInt64(1) = 9007199254740992.000000000, toInt64(1) != 9007199254740992.000000000, toInt64(1) < 9007199254740992.000000000, toInt64(1) <= 9007199254740992.000000000, toInt64(1) > 9007199254740992.000000000, toInt64(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt64(1), 9007199254740992.000000000 != toInt64(1), 9007199254740992.000000000 < toInt64(1), 9007199254740992.000000000 <= toInt64(1), 9007199254740992.000000000 > toInt64(1), 9007199254740992.000000000 >= toInt64(1) ; +SELECT '1', '9007199254740994.000000000', 1 = 9007199254740994.000000000, 1 != 9007199254740994.000000000, 1 < 9007199254740994.000000000, 1 <= 9007199254740994.000000000, 1 > 9007199254740994.000000000, 1 >= 9007199254740994.000000000, 9007199254740994.000000000 = 1, 9007199254740994.000000000 != 1, 9007199254740994.000000000 < 1, 9007199254740994.000000000 <= 1, 9007199254740994.000000000 > 1, 9007199254740994.000000000 >= 1 , toUInt8(1) = 9007199254740994.000000000, toUInt8(1) != 9007199254740994.000000000, toUInt8(1) < 9007199254740994.000000000, toUInt8(1) <= 
9007199254740994.000000000, toUInt8(1) > 9007199254740994.000000000, toUInt8(1) >= 9007199254740994.000000000, 9007199254740994.000000000 = toUInt8(1), 9007199254740994.000000000 != toUInt8(1), 9007199254740994.000000000 < toUInt8(1), 9007199254740994.000000000 <= toUInt8(1), 9007199254740994.000000000 > toUInt8(1), 9007199254740994.000000000 >= toUInt8(1) , toInt8(1) = 9007199254740994.000000000, toInt8(1) != 9007199254740994.000000000, toInt8(1) < 9007199254740994.000000000, toInt8(1) <= 9007199254740994.000000000, toInt8(1) > 9007199254740994.000000000, toInt8(1) >= 9007199254740994.000000000, 9007199254740994.000000000 = toInt8(1), 9007199254740994.000000000 != toInt8(1), 9007199254740994.000000000 < toInt8(1), 9007199254740994.000000000 <= toInt8(1), 9007199254740994.000000000 > toInt8(1), 9007199254740994.000000000 >= toInt8(1) , toUInt16(1) = 9007199254740994.000000000, toUInt16(1) != 9007199254740994.000000000, toUInt16(1) < 9007199254740994.000000000, toUInt16(1) <= 9007199254740994.000000000, toUInt16(1) > 9007199254740994.000000000, toUInt16(1) >= 9007199254740994.000000000, 9007199254740994.000000000 = toUInt16(1), 9007199254740994.000000000 != toUInt16(1), 9007199254740994.000000000 < toUInt16(1), 9007199254740994.000000000 <= toUInt16(1), 9007199254740994.000000000 > toUInt16(1), 9007199254740994.000000000 >= toUInt16(1) , toInt16(1) = 9007199254740994.000000000, toInt16(1) != 9007199254740994.000000000, toInt16(1) < 9007199254740994.000000000, toInt16(1) <= 9007199254740994.000000000, toInt16(1) > 9007199254740994.000000000, toInt16(1) >= 9007199254740994.000000000, 9007199254740994.000000000 = toInt16(1), 9007199254740994.000000000 != toInt16(1), 9007199254740994.000000000 < toInt16(1), 9007199254740994.000000000 <= toInt16(1), 9007199254740994.000000000 > toInt16(1), 9007199254740994.000000000 >= toInt16(1) , toUInt32(1) = 9007199254740994.000000000, toUInt32(1) != 9007199254740994.000000000, toUInt32(1) < 9007199254740994.000000000, toUInt32(1) <= 9007199254740994.000000000, toUInt32(1) > 9007199254740994.000000000, toUInt32(1) >= 9007199254740994.000000000, 9007199254740994.000000000 = toUInt32(1), 9007199254740994.000000000 != toUInt32(1), 9007199254740994.000000000 < toUInt32(1), 9007199254740994.000000000 <= toUInt32(1), 9007199254740994.000000000 > toUInt32(1), 9007199254740994.000000000 >= toUInt32(1) , toInt32(1) = 9007199254740994.000000000, toInt32(1) != 9007199254740994.000000000, toInt32(1) < 9007199254740994.000000000, toInt32(1) <= 9007199254740994.000000000, toInt32(1) > 9007199254740994.000000000, toInt32(1) >= 9007199254740994.000000000, 9007199254740994.000000000 = toInt32(1), 9007199254740994.000000000 != toInt32(1), 9007199254740994.000000000 < toInt32(1), 9007199254740994.000000000 <= toInt32(1), 9007199254740994.000000000 > toInt32(1), 9007199254740994.000000000 >= toInt32(1) , toUInt64(1) = 9007199254740994.000000000, toUInt64(1) != 9007199254740994.000000000, toUInt64(1) < 9007199254740994.000000000, toUInt64(1) <= 9007199254740994.000000000, toUInt64(1) > 9007199254740994.000000000, toUInt64(1) >= 9007199254740994.000000000, 9007199254740994.000000000 = toUInt64(1), 9007199254740994.000000000 != toUInt64(1), 9007199254740994.000000000 < toUInt64(1), 9007199254740994.000000000 <= toUInt64(1), 9007199254740994.000000000 > toUInt64(1), 9007199254740994.000000000 >= toUInt64(1) , toInt64(1) = 9007199254740994.000000000, toInt64(1) != 9007199254740994.000000000, toInt64(1) < 9007199254740994.000000000, toInt64(1) <= 9007199254740994.000000000, 
toInt64(1) > 9007199254740994.000000000, toInt64(1) >= 9007199254740994.000000000, 9007199254740994.000000000 = toInt64(1), 9007199254740994.000000000 != toInt64(1), 9007199254740994.000000000 < toInt64(1), 9007199254740994.000000000 <= toInt64(1), 9007199254740994.000000000 > toInt64(1), 9007199254740994.000000000 >= toInt64(1) ; +SELECT '1', '-9007199254740991.000000000', 1 = -9007199254740991.000000000, 1 != -9007199254740991.000000000, 1 < -9007199254740991.000000000, 1 <= -9007199254740991.000000000, 1 > -9007199254740991.000000000, 1 >= -9007199254740991.000000000, -9007199254740991.000000000 = 1, -9007199254740991.000000000 != 1, -9007199254740991.000000000 < 1, -9007199254740991.000000000 <= 1, -9007199254740991.000000000 > 1, -9007199254740991.000000000 >= 1 , toUInt8(1) = -9007199254740991.000000000, toUInt8(1) != -9007199254740991.000000000, toUInt8(1) < -9007199254740991.000000000, toUInt8(1) <= -9007199254740991.000000000, toUInt8(1) > -9007199254740991.000000000, toUInt8(1) >= -9007199254740991.000000000, -9007199254740991.000000000 = toUInt8(1), -9007199254740991.000000000 != toUInt8(1), -9007199254740991.000000000 < toUInt8(1), -9007199254740991.000000000 <= toUInt8(1), -9007199254740991.000000000 > toUInt8(1), -9007199254740991.000000000 >= toUInt8(1) , toInt8(1) = -9007199254740991.000000000, toInt8(1) != -9007199254740991.000000000, toInt8(1) < -9007199254740991.000000000, toInt8(1) <= -9007199254740991.000000000, toInt8(1) > -9007199254740991.000000000, toInt8(1) >= -9007199254740991.000000000, -9007199254740991.000000000 = toInt8(1), -9007199254740991.000000000 != toInt8(1), -9007199254740991.000000000 < toInt8(1), -9007199254740991.000000000 <= toInt8(1), -9007199254740991.000000000 > toInt8(1), -9007199254740991.000000000 >= toInt8(1) , toUInt16(1) = -9007199254740991.000000000, toUInt16(1) != -9007199254740991.000000000, toUInt16(1) < -9007199254740991.000000000, toUInt16(1) <= -9007199254740991.000000000, toUInt16(1) > -9007199254740991.000000000, toUInt16(1) >= -9007199254740991.000000000, -9007199254740991.000000000 = toUInt16(1), -9007199254740991.000000000 != toUInt16(1), -9007199254740991.000000000 < toUInt16(1), -9007199254740991.000000000 <= toUInt16(1), -9007199254740991.000000000 > toUInt16(1), -9007199254740991.000000000 >= toUInt16(1) , toInt16(1) = -9007199254740991.000000000, toInt16(1) != -9007199254740991.000000000, toInt16(1) < -9007199254740991.000000000, toInt16(1) <= -9007199254740991.000000000, toInt16(1) > -9007199254740991.000000000, toInt16(1) >= -9007199254740991.000000000, -9007199254740991.000000000 = toInt16(1), -9007199254740991.000000000 != toInt16(1), -9007199254740991.000000000 < toInt16(1), -9007199254740991.000000000 <= toInt16(1), -9007199254740991.000000000 > toInt16(1), -9007199254740991.000000000 >= toInt16(1) , toUInt32(1) = -9007199254740991.000000000, toUInt32(1) != -9007199254740991.000000000, toUInt32(1) < -9007199254740991.000000000, toUInt32(1) <= -9007199254740991.000000000, toUInt32(1) > -9007199254740991.000000000, toUInt32(1) >= -9007199254740991.000000000, -9007199254740991.000000000 = toUInt32(1), -9007199254740991.000000000 != toUInt32(1), -9007199254740991.000000000 < toUInt32(1), -9007199254740991.000000000 <= toUInt32(1), -9007199254740991.000000000 > toUInt32(1), -9007199254740991.000000000 >= toUInt32(1) , toInt32(1) = -9007199254740991.000000000, toInt32(1) != -9007199254740991.000000000, toInt32(1) < -9007199254740991.000000000, toInt32(1) <= -9007199254740991.000000000, toInt32(1) > 
-9007199254740991.000000000, toInt32(1) >= -9007199254740991.000000000, -9007199254740991.000000000 = toInt32(1), -9007199254740991.000000000 != toInt32(1), -9007199254740991.000000000 < toInt32(1), -9007199254740991.000000000 <= toInt32(1), -9007199254740991.000000000 > toInt32(1), -9007199254740991.000000000 >= toInt32(1) , toUInt64(1) = -9007199254740991.000000000, toUInt64(1) != -9007199254740991.000000000, toUInt64(1) < -9007199254740991.000000000, toUInt64(1) <= -9007199254740991.000000000, toUInt64(1) > -9007199254740991.000000000, toUInt64(1) >= -9007199254740991.000000000, -9007199254740991.000000000 = toUInt64(1), -9007199254740991.000000000 != toUInt64(1), -9007199254740991.000000000 < toUInt64(1), -9007199254740991.000000000 <= toUInt64(1), -9007199254740991.000000000 > toUInt64(1), -9007199254740991.000000000 >= toUInt64(1) , toInt64(1) = -9007199254740991.000000000, toInt64(1) != -9007199254740991.000000000, toInt64(1) < -9007199254740991.000000000, toInt64(1) <= -9007199254740991.000000000, toInt64(1) > -9007199254740991.000000000, toInt64(1) >= -9007199254740991.000000000, -9007199254740991.000000000 = toInt64(1), -9007199254740991.000000000 != toInt64(1), -9007199254740991.000000000 < toInt64(1), -9007199254740991.000000000 <= toInt64(1), -9007199254740991.000000000 > toInt64(1), -9007199254740991.000000000 >= toInt64(1) ; +SELECT '1', '-9007199254740992.000000000', 1 = -9007199254740992.000000000, 1 != -9007199254740992.000000000, 1 < -9007199254740992.000000000, 1 <= -9007199254740992.000000000, 1 > -9007199254740992.000000000, 1 >= -9007199254740992.000000000, -9007199254740992.000000000 = 1, -9007199254740992.000000000 != 1, -9007199254740992.000000000 < 1, -9007199254740992.000000000 <= 1, -9007199254740992.000000000 > 1, -9007199254740992.000000000 >= 1 , toUInt8(1) = -9007199254740992.000000000, toUInt8(1) != -9007199254740992.000000000, toUInt8(1) < -9007199254740992.000000000, toUInt8(1) <= -9007199254740992.000000000, toUInt8(1) > -9007199254740992.000000000, toUInt8(1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toUInt8(1), -9007199254740992.000000000 != toUInt8(1), -9007199254740992.000000000 < toUInt8(1), -9007199254740992.000000000 <= toUInt8(1), -9007199254740992.000000000 > toUInt8(1), -9007199254740992.000000000 >= toUInt8(1) , toInt8(1) = -9007199254740992.000000000, toInt8(1) != -9007199254740992.000000000, toInt8(1) < -9007199254740992.000000000, toInt8(1) <= -9007199254740992.000000000, toInt8(1) > -9007199254740992.000000000, toInt8(1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt8(1), -9007199254740992.000000000 != toInt8(1), -9007199254740992.000000000 < toInt8(1), -9007199254740992.000000000 <= toInt8(1), -9007199254740992.000000000 > toInt8(1), -9007199254740992.000000000 >= toInt8(1) , toUInt16(1) = -9007199254740992.000000000, toUInt16(1) != -9007199254740992.000000000, toUInt16(1) < -9007199254740992.000000000, toUInt16(1) <= -9007199254740992.000000000, toUInt16(1) > -9007199254740992.000000000, toUInt16(1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toUInt16(1), -9007199254740992.000000000 != toUInt16(1), -9007199254740992.000000000 < toUInt16(1), -9007199254740992.000000000 <= toUInt16(1), -9007199254740992.000000000 > toUInt16(1), -9007199254740992.000000000 >= toUInt16(1) , toInt16(1) = -9007199254740992.000000000, toInt16(1) != -9007199254740992.000000000, toInt16(1) < -9007199254740992.000000000, toInt16(1) <= -9007199254740992.000000000, toInt16(1) > -9007199254740992.000000000, 
toInt16(1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt16(1), -9007199254740992.000000000 != toInt16(1), -9007199254740992.000000000 < toInt16(1), -9007199254740992.000000000 <= toInt16(1), -9007199254740992.000000000 > toInt16(1), -9007199254740992.000000000 >= toInt16(1) , toUInt32(1) = -9007199254740992.000000000, toUInt32(1) != -9007199254740992.000000000, toUInt32(1) < -9007199254740992.000000000, toUInt32(1) <= -9007199254740992.000000000, toUInt32(1) > -9007199254740992.000000000, toUInt32(1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toUInt32(1), -9007199254740992.000000000 != toUInt32(1), -9007199254740992.000000000 < toUInt32(1), -9007199254740992.000000000 <= toUInt32(1), -9007199254740992.000000000 > toUInt32(1), -9007199254740992.000000000 >= toUInt32(1) , toInt32(1) = -9007199254740992.000000000, toInt32(1) != -9007199254740992.000000000, toInt32(1) < -9007199254740992.000000000, toInt32(1) <= -9007199254740992.000000000, toInt32(1) > -9007199254740992.000000000, toInt32(1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt32(1), -9007199254740992.000000000 != toInt32(1), -9007199254740992.000000000 < toInt32(1), -9007199254740992.000000000 <= toInt32(1), -9007199254740992.000000000 > toInt32(1), -9007199254740992.000000000 >= toInt32(1) , toUInt64(1) = -9007199254740992.000000000, toUInt64(1) != -9007199254740992.000000000, toUInt64(1) < -9007199254740992.000000000, toUInt64(1) <= -9007199254740992.000000000, toUInt64(1) > -9007199254740992.000000000, toUInt64(1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toUInt64(1), -9007199254740992.000000000 != toUInt64(1), -9007199254740992.000000000 < toUInt64(1), -9007199254740992.000000000 <= toUInt64(1), -9007199254740992.000000000 > toUInt64(1), -9007199254740992.000000000 >= toUInt64(1) , toInt64(1) = -9007199254740992.000000000, toInt64(1) != -9007199254740992.000000000, toInt64(1) < -9007199254740992.000000000, toInt64(1) <= -9007199254740992.000000000, toInt64(1) > -9007199254740992.000000000, toInt64(1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt64(1), -9007199254740992.000000000 != toInt64(1), -9007199254740992.000000000 < toInt64(1), -9007199254740992.000000000 <= toInt64(1), -9007199254740992.000000000 > toInt64(1), -9007199254740992.000000000 >= toInt64(1) ; +SELECT '1', '-9007199254740992.000000000', 1 = -9007199254740992.000000000, 1 != -9007199254740992.000000000, 1 < -9007199254740992.000000000, 1 <= -9007199254740992.000000000, 1 > -9007199254740992.000000000, 1 >= -9007199254740992.000000000, -9007199254740992.000000000 = 1, -9007199254740992.000000000 != 1, -9007199254740992.000000000 < 1, -9007199254740992.000000000 <= 1, -9007199254740992.000000000 > 1, -9007199254740992.000000000 >= 1 , toUInt8(1) = -9007199254740992.000000000, toUInt8(1) != -9007199254740992.000000000, toUInt8(1) < -9007199254740992.000000000, toUInt8(1) <= -9007199254740992.000000000, toUInt8(1) > -9007199254740992.000000000, toUInt8(1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toUInt8(1), -9007199254740992.000000000 != toUInt8(1), -9007199254740992.000000000 < toUInt8(1), -9007199254740992.000000000 <= toUInt8(1), -9007199254740992.000000000 > toUInt8(1), -9007199254740992.000000000 >= toUInt8(1) , toInt8(1) = -9007199254740992.000000000, toInt8(1) != -9007199254740992.000000000, toInt8(1) < -9007199254740992.000000000, toInt8(1) <= -9007199254740992.000000000, toInt8(1) > -9007199254740992.000000000, toInt8(1) >= 
-9007199254740992.000000000, -9007199254740992.000000000 = toInt8(1), -9007199254740992.000000000 != toInt8(1), -9007199254740992.000000000 < toInt8(1), -9007199254740992.000000000 <= toInt8(1), -9007199254740992.000000000 > toInt8(1), -9007199254740992.000000000 >= toInt8(1) , toUInt16(1) = -9007199254740992.000000000, toUInt16(1) != -9007199254740992.000000000, toUInt16(1) < -9007199254740992.000000000, toUInt16(1) <= -9007199254740992.000000000, toUInt16(1) > -9007199254740992.000000000, toUInt16(1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toUInt16(1), -9007199254740992.000000000 != toUInt16(1), -9007199254740992.000000000 < toUInt16(1), -9007199254740992.000000000 <= toUInt16(1), -9007199254740992.000000000 > toUInt16(1), -9007199254740992.000000000 >= toUInt16(1) , toInt16(1) = -9007199254740992.000000000, toInt16(1) != -9007199254740992.000000000, toInt16(1) < -9007199254740992.000000000, toInt16(1) <= -9007199254740992.000000000, toInt16(1) > -9007199254740992.000000000, toInt16(1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt16(1), -9007199254740992.000000000 != toInt16(1), -9007199254740992.000000000 < toInt16(1), -9007199254740992.000000000 <= toInt16(1), -9007199254740992.000000000 > toInt16(1), -9007199254740992.000000000 >= toInt16(1) , toUInt32(1) = -9007199254740992.000000000, toUInt32(1) != -9007199254740992.000000000, toUInt32(1) < -9007199254740992.000000000, toUInt32(1) <= -9007199254740992.000000000, toUInt32(1) > -9007199254740992.000000000, toUInt32(1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toUInt32(1), -9007199254740992.000000000 != toUInt32(1), -9007199254740992.000000000 < toUInt32(1), -9007199254740992.000000000 <= toUInt32(1), -9007199254740992.000000000 > toUInt32(1), -9007199254740992.000000000 >= toUInt32(1) , toInt32(1) = -9007199254740992.000000000, toInt32(1) != -9007199254740992.000000000, toInt32(1) < -9007199254740992.000000000, toInt32(1) <= -9007199254740992.000000000, toInt32(1) > -9007199254740992.000000000, toInt32(1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt32(1), -9007199254740992.000000000 != toInt32(1), -9007199254740992.000000000 < toInt32(1), -9007199254740992.000000000 <= toInt32(1), -9007199254740992.000000000 > toInt32(1), -9007199254740992.000000000 >= toInt32(1) , toUInt64(1) = -9007199254740992.000000000, toUInt64(1) != -9007199254740992.000000000, toUInt64(1) < -9007199254740992.000000000, toUInt64(1) <= -9007199254740992.000000000, toUInt64(1) > -9007199254740992.000000000, toUInt64(1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toUInt64(1), -9007199254740992.000000000 != toUInt64(1), -9007199254740992.000000000 < toUInt64(1), -9007199254740992.000000000 <= toUInt64(1), -9007199254740992.000000000 > toUInt64(1), -9007199254740992.000000000 >= toUInt64(1) , toInt64(1) = -9007199254740992.000000000, toInt64(1) != -9007199254740992.000000000, toInt64(1) < -9007199254740992.000000000, toInt64(1) <= -9007199254740992.000000000, toInt64(1) > -9007199254740992.000000000, toInt64(1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt64(1), -9007199254740992.000000000 != toInt64(1), -9007199254740992.000000000 < toInt64(1), -9007199254740992.000000000 <= toInt64(1), -9007199254740992.000000000 > toInt64(1), -9007199254740992.000000000 >= toInt64(1) ; +SELECT '1', '-9007199254740994.000000000', 1 = -9007199254740994.000000000, 1 != -9007199254740994.000000000, 1 < -9007199254740994.000000000, 1 <= 
-9007199254740994.000000000, 1 > -9007199254740994.000000000, 1 >= -9007199254740994.000000000, -9007199254740994.000000000 = 1, -9007199254740994.000000000 != 1, -9007199254740994.000000000 < 1, -9007199254740994.000000000 <= 1, -9007199254740994.000000000 > 1, -9007199254740994.000000000 >= 1 , toUInt8(1) = -9007199254740994.000000000, toUInt8(1) != -9007199254740994.000000000, toUInt8(1) < -9007199254740994.000000000, toUInt8(1) <= -9007199254740994.000000000, toUInt8(1) > -9007199254740994.000000000, toUInt8(1) >= -9007199254740994.000000000, -9007199254740994.000000000 = toUInt8(1), -9007199254740994.000000000 != toUInt8(1), -9007199254740994.000000000 < toUInt8(1), -9007199254740994.000000000 <= toUInt8(1), -9007199254740994.000000000 > toUInt8(1), -9007199254740994.000000000 >= toUInt8(1) , toInt8(1) = -9007199254740994.000000000, toInt8(1) != -9007199254740994.000000000, toInt8(1) < -9007199254740994.000000000, toInt8(1) <= -9007199254740994.000000000, toInt8(1) > -9007199254740994.000000000, toInt8(1) >= -9007199254740994.000000000, -9007199254740994.000000000 = toInt8(1), -9007199254740994.000000000 != toInt8(1), -9007199254740994.000000000 < toInt8(1), -9007199254740994.000000000 <= toInt8(1), -9007199254740994.000000000 > toInt8(1), -9007199254740994.000000000 >= toInt8(1) , toUInt16(1) = -9007199254740994.000000000, toUInt16(1) != -9007199254740994.000000000, toUInt16(1) < -9007199254740994.000000000, toUInt16(1) <= -9007199254740994.000000000, toUInt16(1) > -9007199254740994.000000000, toUInt16(1) >= -9007199254740994.000000000, -9007199254740994.000000000 = toUInt16(1), -9007199254740994.000000000 != toUInt16(1), -9007199254740994.000000000 < toUInt16(1), -9007199254740994.000000000 <= toUInt16(1), -9007199254740994.000000000 > toUInt16(1), -9007199254740994.000000000 >= toUInt16(1) , toInt16(1) = -9007199254740994.000000000, toInt16(1) != -9007199254740994.000000000, toInt16(1) < -9007199254740994.000000000, toInt16(1) <= -9007199254740994.000000000, toInt16(1) > -9007199254740994.000000000, toInt16(1) >= -9007199254740994.000000000, -9007199254740994.000000000 = toInt16(1), -9007199254740994.000000000 != toInt16(1), -9007199254740994.000000000 < toInt16(1), -9007199254740994.000000000 <= toInt16(1), -9007199254740994.000000000 > toInt16(1), -9007199254740994.000000000 >= toInt16(1) , toUInt32(1) = -9007199254740994.000000000, toUInt32(1) != -9007199254740994.000000000, toUInt32(1) < -9007199254740994.000000000, toUInt32(1) <= -9007199254740994.000000000, toUInt32(1) > -9007199254740994.000000000, toUInt32(1) >= -9007199254740994.000000000, -9007199254740994.000000000 = toUInt32(1), -9007199254740994.000000000 != toUInt32(1), -9007199254740994.000000000 < toUInt32(1), -9007199254740994.000000000 <= toUInt32(1), -9007199254740994.000000000 > toUInt32(1), -9007199254740994.000000000 >= toUInt32(1) , toInt32(1) = -9007199254740994.000000000, toInt32(1) != -9007199254740994.000000000, toInt32(1) < -9007199254740994.000000000, toInt32(1) <= -9007199254740994.000000000, toInt32(1) > -9007199254740994.000000000, toInt32(1) >= -9007199254740994.000000000, -9007199254740994.000000000 = toInt32(1), -9007199254740994.000000000 != toInt32(1), -9007199254740994.000000000 < toInt32(1), -9007199254740994.000000000 <= toInt32(1), -9007199254740994.000000000 > toInt32(1), -9007199254740994.000000000 >= toInt32(1) , toUInt64(1) = -9007199254740994.000000000, toUInt64(1) != -9007199254740994.000000000, toUInt64(1) < -9007199254740994.000000000, toUInt64(1) <= -9007199254740994.000000000, 
toUInt64(1) > -9007199254740994.000000000, toUInt64(1) >= -9007199254740994.000000000, -9007199254740994.000000000 = toUInt64(1), -9007199254740994.000000000 != toUInt64(1), -9007199254740994.000000000 < toUInt64(1), -9007199254740994.000000000 <= toUInt64(1), -9007199254740994.000000000 > toUInt64(1), -9007199254740994.000000000 >= toUInt64(1) , toInt64(1) = -9007199254740994.000000000, toInt64(1) != -9007199254740994.000000000, toInt64(1) < -9007199254740994.000000000, toInt64(1) <= -9007199254740994.000000000, toInt64(1) > -9007199254740994.000000000, toInt64(1) >= -9007199254740994.000000000, -9007199254740994.000000000 = toInt64(1), -9007199254740994.000000000 != toInt64(1), -9007199254740994.000000000 < toInt64(1), -9007199254740994.000000000 <= toInt64(1), -9007199254740994.000000000 > toInt64(1), -9007199254740994.000000000 >= toInt64(1) ; +SELECT '1', '104.000000000', 1 = 104.000000000, 1 != 104.000000000, 1 < 104.000000000, 1 <= 104.000000000, 1 > 104.000000000, 1 >= 104.000000000, 104.000000000 = 1, 104.000000000 != 1, 104.000000000 < 1, 104.000000000 <= 1, 104.000000000 > 1, 104.000000000 >= 1 , toUInt8(1) = 104.000000000, toUInt8(1) != 104.000000000, toUInt8(1) < 104.000000000, toUInt8(1) <= 104.000000000, toUInt8(1) > 104.000000000, toUInt8(1) >= 104.000000000, 104.000000000 = toUInt8(1), 104.000000000 != toUInt8(1), 104.000000000 < toUInt8(1), 104.000000000 <= toUInt8(1), 104.000000000 > toUInt8(1), 104.000000000 >= toUInt8(1) , toInt8(1) = 104.000000000, toInt8(1) != 104.000000000, toInt8(1) < 104.000000000, toInt8(1) <= 104.000000000, toInt8(1) > 104.000000000, toInt8(1) >= 104.000000000, 104.000000000 = toInt8(1), 104.000000000 != toInt8(1), 104.000000000 < toInt8(1), 104.000000000 <= toInt8(1), 104.000000000 > toInt8(1), 104.000000000 >= toInt8(1) , toUInt16(1) = 104.000000000, toUInt16(1) != 104.000000000, toUInt16(1) < 104.000000000, toUInt16(1) <= 104.000000000, toUInt16(1) > 104.000000000, toUInt16(1) >= 104.000000000, 104.000000000 = toUInt16(1), 104.000000000 != toUInt16(1), 104.000000000 < toUInt16(1), 104.000000000 <= toUInt16(1), 104.000000000 > toUInt16(1), 104.000000000 >= toUInt16(1) , toInt16(1) = 104.000000000, toInt16(1) != 104.000000000, toInt16(1) < 104.000000000, toInt16(1) <= 104.000000000, toInt16(1) > 104.000000000, toInt16(1) >= 104.000000000, 104.000000000 = toInt16(1), 104.000000000 != toInt16(1), 104.000000000 < toInt16(1), 104.000000000 <= toInt16(1), 104.000000000 > toInt16(1), 104.000000000 >= toInt16(1) , toUInt32(1) = 104.000000000, toUInt32(1) != 104.000000000, toUInt32(1) < 104.000000000, toUInt32(1) <= 104.000000000, toUInt32(1) > 104.000000000, toUInt32(1) >= 104.000000000, 104.000000000 = toUInt32(1), 104.000000000 != toUInt32(1), 104.000000000 < toUInt32(1), 104.000000000 <= toUInt32(1), 104.000000000 > toUInt32(1), 104.000000000 >= toUInt32(1) , toInt32(1) = 104.000000000, toInt32(1) != 104.000000000, toInt32(1) < 104.000000000, toInt32(1) <= 104.000000000, toInt32(1) > 104.000000000, toInt32(1) >= 104.000000000, 104.000000000 = toInt32(1), 104.000000000 != toInt32(1), 104.000000000 < toInt32(1), 104.000000000 <= toInt32(1), 104.000000000 > toInt32(1), 104.000000000 >= toInt32(1) , toUInt64(1) = 104.000000000, toUInt64(1) != 104.000000000, toUInt64(1) < 104.000000000, toUInt64(1) <= 104.000000000, toUInt64(1) > 104.000000000, toUInt64(1) >= 104.000000000, 104.000000000 = toUInt64(1), 104.000000000 != toUInt64(1), 104.000000000 < toUInt64(1), 104.000000000 <= toUInt64(1), 104.000000000 > toUInt64(1), 104.000000000 >= toUInt64(1) , 
toInt64(1) = 104.000000000, toInt64(1) != 104.000000000, toInt64(1) < 104.000000000, toInt64(1) <= 104.000000000, toInt64(1) > 104.000000000, toInt64(1) >= 104.000000000, 104.000000000 = toInt64(1), 104.000000000 != toInt64(1), 104.000000000 < toInt64(1), 104.000000000 <= toInt64(1), 104.000000000 > toInt64(1), 104.000000000 >= toInt64(1) ; +SELECT '1', '-4503599627370496.000000000', 1 = -4503599627370496.000000000, 1 != -4503599627370496.000000000, 1 < -4503599627370496.000000000, 1 <= -4503599627370496.000000000, 1 > -4503599627370496.000000000, 1 >= -4503599627370496.000000000, -4503599627370496.000000000 = 1, -4503599627370496.000000000 != 1, -4503599627370496.000000000 < 1, -4503599627370496.000000000 <= 1, -4503599627370496.000000000 > 1, -4503599627370496.000000000 >= 1 , toUInt8(1) = -4503599627370496.000000000, toUInt8(1) != -4503599627370496.000000000, toUInt8(1) < -4503599627370496.000000000, toUInt8(1) <= -4503599627370496.000000000, toUInt8(1) > -4503599627370496.000000000, toUInt8(1) >= -4503599627370496.000000000, -4503599627370496.000000000 = toUInt8(1), -4503599627370496.000000000 != toUInt8(1), -4503599627370496.000000000 < toUInt8(1), -4503599627370496.000000000 <= toUInt8(1), -4503599627370496.000000000 > toUInt8(1), -4503599627370496.000000000 >= toUInt8(1) , toInt8(1) = -4503599627370496.000000000, toInt8(1) != -4503599627370496.000000000, toInt8(1) < -4503599627370496.000000000, toInt8(1) <= -4503599627370496.000000000, toInt8(1) > -4503599627370496.000000000, toInt8(1) >= -4503599627370496.000000000, -4503599627370496.000000000 = toInt8(1), -4503599627370496.000000000 != toInt8(1), -4503599627370496.000000000 < toInt8(1), -4503599627370496.000000000 <= toInt8(1), -4503599627370496.000000000 > toInt8(1), -4503599627370496.000000000 >= toInt8(1) , toUInt16(1) = -4503599627370496.000000000, toUInt16(1) != -4503599627370496.000000000, toUInt16(1) < -4503599627370496.000000000, toUInt16(1) <= -4503599627370496.000000000, toUInt16(1) > -4503599627370496.000000000, toUInt16(1) >= -4503599627370496.000000000, -4503599627370496.000000000 = toUInt16(1), -4503599627370496.000000000 != toUInt16(1), -4503599627370496.000000000 < toUInt16(1), -4503599627370496.000000000 <= toUInt16(1), -4503599627370496.000000000 > toUInt16(1), -4503599627370496.000000000 >= toUInt16(1) , toInt16(1) = -4503599627370496.000000000, toInt16(1) != -4503599627370496.000000000, toInt16(1) < -4503599627370496.000000000, toInt16(1) <= -4503599627370496.000000000, toInt16(1) > -4503599627370496.000000000, toInt16(1) >= -4503599627370496.000000000, -4503599627370496.000000000 = toInt16(1), -4503599627370496.000000000 != toInt16(1), -4503599627370496.000000000 < toInt16(1), -4503599627370496.000000000 <= toInt16(1), -4503599627370496.000000000 > toInt16(1), -4503599627370496.000000000 >= toInt16(1) , toUInt32(1) = -4503599627370496.000000000, toUInt32(1) != -4503599627370496.000000000, toUInt32(1) < -4503599627370496.000000000, toUInt32(1) <= -4503599627370496.000000000, toUInt32(1) > -4503599627370496.000000000, toUInt32(1) >= -4503599627370496.000000000, -4503599627370496.000000000 = toUInt32(1), -4503599627370496.000000000 != toUInt32(1), -4503599627370496.000000000 < toUInt32(1), -4503599627370496.000000000 <= toUInt32(1), -4503599627370496.000000000 > toUInt32(1), -4503599627370496.000000000 >= toUInt32(1) , toInt32(1) = -4503599627370496.000000000, toInt32(1) != -4503599627370496.000000000, toInt32(1) < -4503599627370496.000000000, toInt32(1) <= -4503599627370496.000000000, toInt32(1) > 
-4503599627370496.000000000, toInt32(1) >= -4503599627370496.000000000, -4503599627370496.000000000 = toInt32(1), -4503599627370496.000000000 != toInt32(1), -4503599627370496.000000000 < toInt32(1), -4503599627370496.000000000 <= toInt32(1), -4503599627370496.000000000 > toInt32(1), -4503599627370496.000000000 >= toInt32(1) , toUInt64(1) = -4503599627370496.000000000, toUInt64(1) != -4503599627370496.000000000, toUInt64(1) < -4503599627370496.000000000, toUInt64(1) <= -4503599627370496.000000000, toUInt64(1) > -4503599627370496.000000000, toUInt64(1) >= -4503599627370496.000000000, -4503599627370496.000000000 = toUInt64(1), -4503599627370496.000000000 != toUInt64(1), -4503599627370496.000000000 < toUInt64(1), -4503599627370496.000000000 <= toUInt64(1), -4503599627370496.000000000 > toUInt64(1), -4503599627370496.000000000 >= toUInt64(1) , toInt64(1) = -4503599627370496.000000000, toInt64(1) != -4503599627370496.000000000, toInt64(1) < -4503599627370496.000000000, toInt64(1) <= -4503599627370496.000000000, toInt64(1) > -4503599627370496.000000000, toInt64(1) >= -4503599627370496.000000000, -4503599627370496.000000000 = toInt64(1), -4503599627370496.000000000 != toInt64(1), -4503599627370496.000000000 < toInt64(1), -4503599627370496.000000000 <= toInt64(1), -4503599627370496.000000000 > toInt64(1), -4503599627370496.000000000 >= toInt64(1) ; +SELECT '1', '-0.500000000', 1 = -0.500000000, 1 != -0.500000000, 1 < -0.500000000, 1 <= -0.500000000, 1 > -0.500000000, 1 >= -0.500000000, -0.500000000 = 1, -0.500000000 != 1, -0.500000000 < 1, -0.500000000 <= 1, -0.500000000 > 1, -0.500000000 >= 1 , toUInt8(1) = -0.500000000, toUInt8(1) != -0.500000000, toUInt8(1) < -0.500000000, toUInt8(1) <= -0.500000000, toUInt8(1) > -0.500000000, toUInt8(1) >= -0.500000000, -0.500000000 = toUInt8(1), -0.500000000 != toUInt8(1), -0.500000000 < toUInt8(1), -0.500000000 <= toUInt8(1), -0.500000000 > toUInt8(1), -0.500000000 >= toUInt8(1) , toInt8(1) = -0.500000000, toInt8(1) != -0.500000000, toInt8(1) < -0.500000000, toInt8(1) <= -0.500000000, toInt8(1) > -0.500000000, toInt8(1) >= -0.500000000, -0.500000000 = toInt8(1), -0.500000000 != toInt8(1), -0.500000000 < toInt8(1), -0.500000000 <= toInt8(1), -0.500000000 > toInt8(1), -0.500000000 >= toInt8(1) , toUInt16(1) = -0.500000000, toUInt16(1) != -0.500000000, toUInt16(1) < -0.500000000, toUInt16(1) <= -0.500000000, toUInt16(1) > -0.500000000, toUInt16(1) >= -0.500000000, -0.500000000 = toUInt16(1), -0.500000000 != toUInt16(1), -0.500000000 < toUInt16(1), -0.500000000 <= toUInt16(1), -0.500000000 > toUInt16(1), -0.500000000 >= toUInt16(1) , toInt16(1) = -0.500000000, toInt16(1) != -0.500000000, toInt16(1) < -0.500000000, toInt16(1) <= -0.500000000, toInt16(1) > -0.500000000, toInt16(1) >= -0.500000000, -0.500000000 = toInt16(1), -0.500000000 != toInt16(1), -0.500000000 < toInt16(1), -0.500000000 <= toInt16(1), -0.500000000 > toInt16(1), -0.500000000 >= toInt16(1) , toUInt32(1) = -0.500000000, toUInt32(1) != -0.500000000, toUInt32(1) < -0.500000000, toUInt32(1) <= -0.500000000, toUInt32(1) > -0.500000000, toUInt32(1) >= -0.500000000, -0.500000000 = toUInt32(1), -0.500000000 != toUInt32(1), -0.500000000 < toUInt32(1), -0.500000000 <= toUInt32(1), -0.500000000 > toUInt32(1), -0.500000000 >= toUInt32(1) , toInt32(1) = -0.500000000, toInt32(1) != -0.500000000, toInt32(1) < -0.500000000, toInt32(1) <= -0.500000000, toInt32(1) > -0.500000000, toInt32(1) >= -0.500000000, -0.500000000 = toInt32(1), -0.500000000 != toInt32(1), -0.500000000 < toInt32(1), -0.500000000 <= 
toInt32(1), -0.500000000 > toInt32(1), -0.500000000 >= toInt32(1) , toUInt64(1) = -0.500000000, toUInt64(1) != -0.500000000, toUInt64(1) < -0.500000000, toUInt64(1) <= -0.500000000, toUInt64(1) > -0.500000000, toUInt64(1) >= -0.500000000, -0.500000000 = toUInt64(1), -0.500000000 != toUInt64(1), -0.500000000 < toUInt64(1), -0.500000000 <= toUInt64(1), -0.500000000 > toUInt64(1), -0.500000000 >= toUInt64(1) , toInt64(1) = -0.500000000, toInt64(1) != -0.500000000, toInt64(1) < -0.500000000, toInt64(1) <= -0.500000000, toInt64(1) > -0.500000000, toInt64(1) >= -0.500000000, -0.500000000 = toInt64(1), -0.500000000 != toInt64(1), -0.500000000 < toInt64(1), -0.500000000 <= toInt64(1), -0.500000000 > toInt64(1), -0.500000000 >= toInt64(1) ; +SELECT '1', '0.500000000', 1 = 0.500000000, 1 != 0.500000000, 1 < 0.500000000, 1 <= 0.500000000, 1 > 0.500000000, 1 >= 0.500000000, 0.500000000 = 1, 0.500000000 != 1, 0.500000000 < 1, 0.500000000 <= 1, 0.500000000 > 1, 0.500000000 >= 1 , toUInt8(1) = 0.500000000, toUInt8(1) != 0.500000000, toUInt8(1) < 0.500000000, toUInt8(1) <= 0.500000000, toUInt8(1) > 0.500000000, toUInt8(1) >= 0.500000000, 0.500000000 = toUInt8(1), 0.500000000 != toUInt8(1), 0.500000000 < toUInt8(1), 0.500000000 <= toUInt8(1), 0.500000000 > toUInt8(1), 0.500000000 >= toUInt8(1) , toInt8(1) = 0.500000000, toInt8(1) != 0.500000000, toInt8(1) < 0.500000000, toInt8(1) <= 0.500000000, toInt8(1) > 0.500000000, toInt8(1) >= 0.500000000, 0.500000000 = toInt8(1), 0.500000000 != toInt8(1), 0.500000000 < toInt8(1), 0.500000000 <= toInt8(1), 0.500000000 > toInt8(1), 0.500000000 >= toInt8(1) , toUInt16(1) = 0.500000000, toUInt16(1) != 0.500000000, toUInt16(1) < 0.500000000, toUInt16(1) <= 0.500000000, toUInt16(1) > 0.500000000, toUInt16(1) >= 0.500000000, 0.500000000 = toUInt16(1), 0.500000000 != toUInt16(1), 0.500000000 < toUInt16(1), 0.500000000 <= toUInt16(1), 0.500000000 > toUInt16(1), 0.500000000 >= toUInt16(1) , toInt16(1) = 0.500000000, toInt16(1) != 0.500000000, toInt16(1) < 0.500000000, toInt16(1) <= 0.500000000, toInt16(1) > 0.500000000, toInt16(1) >= 0.500000000, 0.500000000 = toInt16(1), 0.500000000 != toInt16(1), 0.500000000 < toInt16(1), 0.500000000 <= toInt16(1), 0.500000000 > toInt16(1), 0.500000000 >= toInt16(1) , toUInt32(1) = 0.500000000, toUInt32(1) != 0.500000000, toUInt32(1) < 0.500000000, toUInt32(1) <= 0.500000000, toUInt32(1) > 0.500000000, toUInt32(1) >= 0.500000000, 0.500000000 = toUInt32(1), 0.500000000 != toUInt32(1), 0.500000000 < toUInt32(1), 0.500000000 <= toUInt32(1), 0.500000000 > toUInt32(1), 0.500000000 >= toUInt32(1) , toInt32(1) = 0.500000000, toInt32(1) != 0.500000000, toInt32(1) < 0.500000000, toInt32(1) <= 0.500000000, toInt32(1) > 0.500000000, toInt32(1) >= 0.500000000, 0.500000000 = toInt32(1), 0.500000000 != toInt32(1), 0.500000000 < toInt32(1), 0.500000000 <= toInt32(1), 0.500000000 > toInt32(1), 0.500000000 >= toInt32(1) , toUInt64(1) = 0.500000000, toUInt64(1) != 0.500000000, toUInt64(1) < 0.500000000, toUInt64(1) <= 0.500000000, toUInt64(1) > 0.500000000, toUInt64(1) >= 0.500000000, 0.500000000 = toUInt64(1), 0.500000000 != toUInt64(1), 0.500000000 < toUInt64(1), 0.500000000 <= toUInt64(1), 0.500000000 > toUInt64(1), 0.500000000 >= toUInt64(1) , toInt64(1) = 0.500000000, toInt64(1) != 0.500000000, toInt64(1) < 0.500000000, toInt64(1) <= 0.500000000, toInt64(1) > 0.500000000, toInt64(1) >= 0.500000000, 0.500000000 = toInt64(1), 0.500000000 != toInt64(1), 0.500000000 < toInt64(1), 0.500000000 <= toInt64(1), 0.500000000 > toInt64(1), 0.500000000 >= toInt64(1) 
; +SELECT '1', '-1.500000000', 1 = -1.500000000, 1 != -1.500000000, 1 < -1.500000000, 1 <= -1.500000000, 1 > -1.500000000, 1 >= -1.500000000, -1.500000000 = 1, -1.500000000 != 1, -1.500000000 < 1, -1.500000000 <= 1, -1.500000000 > 1, -1.500000000 >= 1 , toUInt8(1) = -1.500000000, toUInt8(1) != -1.500000000, toUInt8(1) < -1.500000000, toUInt8(1) <= -1.500000000, toUInt8(1) > -1.500000000, toUInt8(1) >= -1.500000000, -1.500000000 = toUInt8(1), -1.500000000 != toUInt8(1), -1.500000000 < toUInt8(1), -1.500000000 <= toUInt8(1), -1.500000000 > toUInt8(1), -1.500000000 >= toUInt8(1) , toInt8(1) = -1.500000000, toInt8(1) != -1.500000000, toInt8(1) < -1.500000000, toInt8(1) <= -1.500000000, toInt8(1) > -1.500000000, toInt8(1) >= -1.500000000, -1.500000000 = toInt8(1), -1.500000000 != toInt8(1), -1.500000000 < toInt8(1), -1.500000000 <= toInt8(1), -1.500000000 > toInt8(1), -1.500000000 >= toInt8(1) , toUInt16(1) = -1.500000000, toUInt16(1) != -1.500000000, toUInt16(1) < -1.500000000, toUInt16(1) <= -1.500000000, toUInt16(1) > -1.500000000, toUInt16(1) >= -1.500000000, -1.500000000 = toUInt16(1), -1.500000000 != toUInt16(1), -1.500000000 < toUInt16(1), -1.500000000 <= toUInt16(1), -1.500000000 > toUInt16(1), -1.500000000 >= toUInt16(1) , toInt16(1) = -1.500000000, toInt16(1) != -1.500000000, toInt16(1) < -1.500000000, toInt16(1) <= -1.500000000, toInt16(1) > -1.500000000, toInt16(1) >= -1.500000000, -1.500000000 = toInt16(1), -1.500000000 != toInt16(1), -1.500000000 < toInt16(1), -1.500000000 <= toInt16(1), -1.500000000 > toInt16(1), -1.500000000 >= toInt16(1) , toUInt32(1) = -1.500000000, toUInt32(1) != -1.500000000, toUInt32(1) < -1.500000000, toUInt32(1) <= -1.500000000, toUInt32(1) > -1.500000000, toUInt32(1) >= -1.500000000, -1.500000000 = toUInt32(1), -1.500000000 != toUInt32(1), -1.500000000 < toUInt32(1), -1.500000000 <= toUInt32(1), -1.500000000 > toUInt32(1), -1.500000000 >= toUInt32(1) , toInt32(1) = -1.500000000, toInt32(1) != -1.500000000, toInt32(1) < -1.500000000, toInt32(1) <= -1.500000000, toInt32(1) > -1.500000000, toInt32(1) >= -1.500000000, -1.500000000 = toInt32(1), -1.500000000 != toInt32(1), -1.500000000 < toInt32(1), -1.500000000 <= toInt32(1), -1.500000000 > toInt32(1), -1.500000000 >= toInt32(1) , toUInt64(1) = -1.500000000, toUInt64(1) != -1.500000000, toUInt64(1) < -1.500000000, toUInt64(1) <= -1.500000000, toUInt64(1) > -1.500000000, toUInt64(1) >= -1.500000000, -1.500000000 = toUInt64(1), -1.500000000 != toUInt64(1), -1.500000000 < toUInt64(1), -1.500000000 <= toUInt64(1), -1.500000000 > toUInt64(1), -1.500000000 >= toUInt64(1) , toInt64(1) = -1.500000000, toInt64(1) != -1.500000000, toInt64(1) < -1.500000000, toInt64(1) <= -1.500000000, toInt64(1) > -1.500000000, toInt64(1) >= -1.500000000, -1.500000000 = toInt64(1), -1.500000000 != toInt64(1), -1.500000000 < toInt64(1), -1.500000000 <= toInt64(1), -1.500000000 > toInt64(1), -1.500000000 >= toInt64(1) ; +SELECT '1', '1.500000000', 1 = 1.500000000, 1 != 1.500000000, 1 < 1.500000000, 1 <= 1.500000000, 1 > 1.500000000, 1 >= 1.500000000, 1.500000000 = 1, 1.500000000 != 1, 1.500000000 < 1, 1.500000000 <= 1, 1.500000000 > 1, 1.500000000 >= 1 , toUInt8(1) = 1.500000000, toUInt8(1) != 1.500000000, toUInt8(1) < 1.500000000, toUInt8(1) <= 1.500000000, toUInt8(1) > 1.500000000, toUInt8(1) >= 1.500000000, 1.500000000 = toUInt8(1), 1.500000000 != toUInt8(1), 1.500000000 < toUInt8(1), 1.500000000 <= toUInt8(1), 1.500000000 > toUInt8(1), 1.500000000 >= toUInt8(1) , toInt8(1) = 1.500000000, toInt8(1) != 1.500000000, toInt8(1) < 
1.500000000, toInt8(1) <= 1.500000000, toInt8(1) > 1.500000000, toInt8(1) >= 1.500000000, 1.500000000 = toInt8(1), 1.500000000 != toInt8(1), 1.500000000 < toInt8(1), 1.500000000 <= toInt8(1), 1.500000000 > toInt8(1), 1.500000000 >= toInt8(1) , toUInt16(1) = 1.500000000, toUInt16(1) != 1.500000000, toUInt16(1) < 1.500000000, toUInt16(1) <= 1.500000000, toUInt16(1) > 1.500000000, toUInt16(1) >= 1.500000000, 1.500000000 = toUInt16(1), 1.500000000 != toUInt16(1), 1.500000000 < toUInt16(1), 1.500000000 <= toUInt16(1), 1.500000000 > toUInt16(1), 1.500000000 >= toUInt16(1) , toInt16(1) = 1.500000000, toInt16(1) != 1.500000000, toInt16(1) < 1.500000000, toInt16(1) <= 1.500000000, toInt16(1) > 1.500000000, toInt16(1) >= 1.500000000, 1.500000000 = toInt16(1), 1.500000000 != toInt16(1), 1.500000000 < toInt16(1), 1.500000000 <= toInt16(1), 1.500000000 > toInt16(1), 1.500000000 >= toInt16(1) , toUInt32(1) = 1.500000000, toUInt32(1) != 1.500000000, toUInt32(1) < 1.500000000, toUInt32(1) <= 1.500000000, toUInt32(1) > 1.500000000, toUInt32(1) >= 1.500000000, 1.500000000 = toUInt32(1), 1.500000000 != toUInt32(1), 1.500000000 < toUInt32(1), 1.500000000 <= toUInt32(1), 1.500000000 > toUInt32(1), 1.500000000 >= toUInt32(1) , toInt32(1) = 1.500000000, toInt32(1) != 1.500000000, toInt32(1) < 1.500000000, toInt32(1) <= 1.500000000, toInt32(1) > 1.500000000, toInt32(1) >= 1.500000000, 1.500000000 = toInt32(1), 1.500000000 != toInt32(1), 1.500000000 < toInt32(1), 1.500000000 <= toInt32(1), 1.500000000 > toInt32(1), 1.500000000 >= toInt32(1) , toUInt64(1) = 1.500000000, toUInt64(1) != 1.500000000, toUInt64(1) < 1.500000000, toUInt64(1) <= 1.500000000, toUInt64(1) > 1.500000000, toUInt64(1) >= 1.500000000, 1.500000000 = toUInt64(1), 1.500000000 != toUInt64(1), 1.500000000 < toUInt64(1), 1.500000000 <= toUInt64(1), 1.500000000 > toUInt64(1), 1.500000000 >= toUInt64(1) , toInt64(1) = 1.500000000, toInt64(1) != 1.500000000, toInt64(1) < 1.500000000, toInt64(1) <= 1.500000000, toInt64(1) > 1.500000000, toInt64(1) >= 1.500000000, 1.500000000 = toInt64(1), 1.500000000 != toInt64(1), 1.500000000 < toInt64(1), 1.500000000 <= toInt64(1), 1.500000000 > toInt64(1), 1.500000000 >= toInt64(1) ; +SELECT '1', '9007199254740992.000000000', 1 = 9007199254740992.000000000, 1 != 9007199254740992.000000000, 1 < 9007199254740992.000000000, 1 <= 9007199254740992.000000000, 1 > 9007199254740992.000000000, 1 >= 9007199254740992.000000000, 9007199254740992.000000000 = 1, 9007199254740992.000000000 != 1, 9007199254740992.000000000 < 1, 9007199254740992.000000000 <= 1, 9007199254740992.000000000 > 1, 9007199254740992.000000000 >= 1 , toUInt8(1) = 9007199254740992.000000000, toUInt8(1) != 9007199254740992.000000000, toUInt8(1) < 9007199254740992.000000000, toUInt8(1) <= 9007199254740992.000000000, toUInt8(1) > 9007199254740992.000000000, toUInt8(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt8(1), 9007199254740992.000000000 != toUInt8(1), 9007199254740992.000000000 < toUInt8(1), 9007199254740992.000000000 <= toUInt8(1), 9007199254740992.000000000 > toUInt8(1), 9007199254740992.000000000 >= toUInt8(1) , toInt8(1) = 9007199254740992.000000000, toInt8(1) != 9007199254740992.000000000, toInt8(1) < 9007199254740992.000000000, toInt8(1) <= 9007199254740992.000000000, toInt8(1) > 9007199254740992.000000000, toInt8(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt8(1), 9007199254740992.000000000 != toInt8(1), 9007199254740992.000000000 < toInt8(1), 9007199254740992.000000000 <= toInt8(1), 
9007199254740992.000000000 > toInt8(1), 9007199254740992.000000000 >= toInt8(1) , toUInt16(1) = 9007199254740992.000000000, toUInt16(1) != 9007199254740992.000000000, toUInt16(1) < 9007199254740992.000000000, toUInt16(1) <= 9007199254740992.000000000, toUInt16(1) > 9007199254740992.000000000, toUInt16(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt16(1), 9007199254740992.000000000 != toUInt16(1), 9007199254740992.000000000 < toUInt16(1), 9007199254740992.000000000 <= toUInt16(1), 9007199254740992.000000000 > toUInt16(1), 9007199254740992.000000000 >= toUInt16(1) , toInt16(1) = 9007199254740992.000000000, toInt16(1) != 9007199254740992.000000000, toInt16(1) < 9007199254740992.000000000, toInt16(1) <= 9007199254740992.000000000, toInt16(1) > 9007199254740992.000000000, toInt16(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt16(1), 9007199254740992.000000000 != toInt16(1), 9007199254740992.000000000 < toInt16(1), 9007199254740992.000000000 <= toInt16(1), 9007199254740992.000000000 > toInt16(1), 9007199254740992.000000000 >= toInt16(1) , toUInt32(1) = 9007199254740992.000000000, toUInt32(1) != 9007199254740992.000000000, toUInt32(1) < 9007199254740992.000000000, toUInt32(1) <= 9007199254740992.000000000, toUInt32(1) > 9007199254740992.000000000, toUInt32(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt32(1), 9007199254740992.000000000 != toUInt32(1), 9007199254740992.000000000 < toUInt32(1), 9007199254740992.000000000 <= toUInt32(1), 9007199254740992.000000000 > toUInt32(1), 9007199254740992.000000000 >= toUInt32(1) , toInt32(1) = 9007199254740992.000000000, toInt32(1) != 9007199254740992.000000000, toInt32(1) < 9007199254740992.000000000, toInt32(1) <= 9007199254740992.000000000, toInt32(1) > 9007199254740992.000000000, toInt32(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt32(1), 9007199254740992.000000000 != toInt32(1), 9007199254740992.000000000 < toInt32(1), 9007199254740992.000000000 <= toInt32(1), 9007199254740992.000000000 > toInt32(1), 9007199254740992.000000000 >= toInt32(1) , toUInt64(1) = 9007199254740992.000000000, toUInt64(1) != 9007199254740992.000000000, toUInt64(1) < 9007199254740992.000000000, toUInt64(1) <= 9007199254740992.000000000, toUInt64(1) > 9007199254740992.000000000, toUInt64(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt64(1), 9007199254740992.000000000 != toUInt64(1), 9007199254740992.000000000 < toUInt64(1), 9007199254740992.000000000 <= toUInt64(1), 9007199254740992.000000000 > toUInt64(1), 9007199254740992.000000000 >= toUInt64(1) , toInt64(1) = 9007199254740992.000000000, toInt64(1) != 9007199254740992.000000000, toInt64(1) < 9007199254740992.000000000, toInt64(1) <= 9007199254740992.000000000, toInt64(1) > 9007199254740992.000000000, toInt64(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt64(1), 9007199254740992.000000000 != toInt64(1), 9007199254740992.000000000 < toInt64(1), 9007199254740992.000000000 <= toInt64(1), 9007199254740992.000000000 > toInt64(1), 9007199254740992.000000000 >= toInt64(1) ; +SELECT '1', '2251799813685247.500000000', 1 = 2251799813685247.500000000, 1 != 2251799813685247.500000000, 1 < 2251799813685247.500000000, 1 <= 2251799813685247.500000000, 1 > 2251799813685247.500000000, 1 >= 2251799813685247.500000000, 2251799813685247.500000000 = 1, 2251799813685247.500000000 != 1, 2251799813685247.500000000 < 1, 2251799813685247.500000000 <= 1, 2251799813685247.500000000 > 1, 2251799813685247.500000000 >= 1 , 
toUInt8(1) = 2251799813685247.500000000, toUInt8(1) != 2251799813685247.500000000, toUInt8(1) < 2251799813685247.500000000, toUInt8(1) <= 2251799813685247.500000000, toUInt8(1) > 2251799813685247.500000000, toUInt8(1) >= 2251799813685247.500000000, 2251799813685247.500000000 = toUInt8(1), 2251799813685247.500000000 != toUInt8(1), 2251799813685247.500000000 < toUInt8(1), 2251799813685247.500000000 <= toUInt8(1), 2251799813685247.500000000 > toUInt8(1), 2251799813685247.500000000 >= toUInt8(1) , toInt8(1) = 2251799813685247.500000000, toInt8(1) != 2251799813685247.500000000, toInt8(1) < 2251799813685247.500000000, toInt8(1) <= 2251799813685247.500000000, toInt8(1) > 2251799813685247.500000000, toInt8(1) >= 2251799813685247.500000000, 2251799813685247.500000000 = toInt8(1), 2251799813685247.500000000 != toInt8(1), 2251799813685247.500000000 < toInt8(1), 2251799813685247.500000000 <= toInt8(1), 2251799813685247.500000000 > toInt8(1), 2251799813685247.500000000 >= toInt8(1) , toUInt16(1) = 2251799813685247.500000000, toUInt16(1) != 2251799813685247.500000000, toUInt16(1) < 2251799813685247.500000000, toUInt16(1) <= 2251799813685247.500000000, toUInt16(1) > 2251799813685247.500000000, toUInt16(1) >= 2251799813685247.500000000, 2251799813685247.500000000 = toUInt16(1), 2251799813685247.500000000 != toUInt16(1), 2251799813685247.500000000 < toUInt16(1), 2251799813685247.500000000 <= toUInt16(1), 2251799813685247.500000000 > toUInt16(1), 2251799813685247.500000000 >= toUInt16(1) , toInt16(1) = 2251799813685247.500000000, toInt16(1) != 2251799813685247.500000000, toInt16(1) < 2251799813685247.500000000, toInt16(1) <= 2251799813685247.500000000, toInt16(1) > 2251799813685247.500000000, toInt16(1) >= 2251799813685247.500000000, 2251799813685247.500000000 = toInt16(1), 2251799813685247.500000000 != toInt16(1), 2251799813685247.500000000 < toInt16(1), 2251799813685247.500000000 <= toInt16(1), 2251799813685247.500000000 > toInt16(1), 2251799813685247.500000000 >= toInt16(1) , toUInt32(1) = 2251799813685247.500000000, toUInt32(1) != 2251799813685247.500000000, toUInt32(1) < 2251799813685247.500000000, toUInt32(1) <= 2251799813685247.500000000, toUInt32(1) > 2251799813685247.500000000, toUInt32(1) >= 2251799813685247.500000000, 2251799813685247.500000000 = toUInt32(1), 2251799813685247.500000000 != toUInt32(1), 2251799813685247.500000000 < toUInt32(1), 2251799813685247.500000000 <= toUInt32(1), 2251799813685247.500000000 > toUInt32(1), 2251799813685247.500000000 >= toUInt32(1) , toInt32(1) = 2251799813685247.500000000, toInt32(1) != 2251799813685247.500000000, toInt32(1) < 2251799813685247.500000000, toInt32(1) <= 2251799813685247.500000000, toInt32(1) > 2251799813685247.500000000, toInt32(1) >= 2251799813685247.500000000, 2251799813685247.500000000 = toInt32(1), 2251799813685247.500000000 != toInt32(1), 2251799813685247.500000000 < toInt32(1), 2251799813685247.500000000 <= toInt32(1), 2251799813685247.500000000 > toInt32(1), 2251799813685247.500000000 >= toInt32(1) , toUInt64(1) = 2251799813685247.500000000, toUInt64(1) != 2251799813685247.500000000, toUInt64(1) < 2251799813685247.500000000, toUInt64(1) <= 2251799813685247.500000000, toUInt64(1) > 2251799813685247.500000000, toUInt64(1) >= 2251799813685247.500000000, 2251799813685247.500000000 = toUInt64(1), 2251799813685247.500000000 != toUInt64(1), 2251799813685247.500000000 < toUInt64(1), 2251799813685247.500000000 <= toUInt64(1), 2251799813685247.500000000 > toUInt64(1), 2251799813685247.500000000 >= toUInt64(1) , toInt64(1) = 
2251799813685247.500000000, toInt64(1) != 2251799813685247.500000000, toInt64(1) < 2251799813685247.500000000, toInt64(1) <= 2251799813685247.500000000, toInt64(1) > 2251799813685247.500000000, toInt64(1) >= 2251799813685247.500000000, 2251799813685247.500000000 = toInt64(1), 2251799813685247.500000000 != toInt64(1), 2251799813685247.500000000 < toInt64(1), 2251799813685247.500000000 <= toInt64(1), 2251799813685247.500000000 > toInt64(1), 2251799813685247.500000000 >= toInt64(1) ; +SELECT '1', '2251799813685248.500000000', 1 = 2251799813685248.500000000, 1 != 2251799813685248.500000000, 1 < 2251799813685248.500000000, 1 <= 2251799813685248.500000000, 1 > 2251799813685248.500000000, 1 >= 2251799813685248.500000000, 2251799813685248.500000000 = 1, 2251799813685248.500000000 != 1, 2251799813685248.500000000 < 1, 2251799813685248.500000000 <= 1, 2251799813685248.500000000 > 1, 2251799813685248.500000000 >= 1 , toUInt8(1) = 2251799813685248.500000000, toUInt8(1) != 2251799813685248.500000000, toUInt8(1) < 2251799813685248.500000000, toUInt8(1) <= 2251799813685248.500000000, toUInt8(1) > 2251799813685248.500000000, toUInt8(1) >= 2251799813685248.500000000, 2251799813685248.500000000 = toUInt8(1), 2251799813685248.500000000 != toUInt8(1), 2251799813685248.500000000 < toUInt8(1), 2251799813685248.500000000 <= toUInt8(1), 2251799813685248.500000000 > toUInt8(1), 2251799813685248.500000000 >= toUInt8(1) , toInt8(1) = 2251799813685248.500000000, toInt8(1) != 2251799813685248.500000000, toInt8(1) < 2251799813685248.500000000, toInt8(1) <= 2251799813685248.500000000, toInt8(1) > 2251799813685248.500000000, toInt8(1) >= 2251799813685248.500000000, 2251799813685248.500000000 = toInt8(1), 2251799813685248.500000000 != toInt8(1), 2251799813685248.500000000 < toInt8(1), 2251799813685248.500000000 <= toInt8(1), 2251799813685248.500000000 > toInt8(1), 2251799813685248.500000000 >= toInt8(1) , toUInt16(1) = 2251799813685248.500000000, toUInt16(1) != 2251799813685248.500000000, toUInt16(1) < 2251799813685248.500000000, toUInt16(1) <= 2251799813685248.500000000, toUInt16(1) > 2251799813685248.500000000, toUInt16(1) >= 2251799813685248.500000000, 2251799813685248.500000000 = toUInt16(1), 2251799813685248.500000000 != toUInt16(1), 2251799813685248.500000000 < toUInt16(1), 2251799813685248.500000000 <= toUInt16(1), 2251799813685248.500000000 > toUInt16(1), 2251799813685248.500000000 >= toUInt16(1) , toInt16(1) = 2251799813685248.500000000, toInt16(1) != 2251799813685248.500000000, toInt16(1) < 2251799813685248.500000000, toInt16(1) <= 2251799813685248.500000000, toInt16(1) > 2251799813685248.500000000, toInt16(1) >= 2251799813685248.500000000, 2251799813685248.500000000 = toInt16(1), 2251799813685248.500000000 != toInt16(1), 2251799813685248.500000000 < toInt16(1), 2251799813685248.500000000 <= toInt16(1), 2251799813685248.500000000 > toInt16(1), 2251799813685248.500000000 >= toInt16(1) , toUInt32(1) = 2251799813685248.500000000, toUInt32(1) != 2251799813685248.500000000, toUInt32(1) < 2251799813685248.500000000, toUInt32(1) <= 2251799813685248.500000000, toUInt32(1) > 2251799813685248.500000000, toUInt32(1) >= 2251799813685248.500000000, 2251799813685248.500000000 = toUInt32(1), 2251799813685248.500000000 != toUInt32(1), 2251799813685248.500000000 < toUInt32(1), 2251799813685248.500000000 <= toUInt32(1), 2251799813685248.500000000 > toUInt32(1), 2251799813685248.500000000 >= toUInt32(1) , toInt32(1) = 2251799813685248.500000000, toInt32(1) != 2251799813685248.500000000, toInt32(1) < 2251799813685248.500000000, 
toInt32(1) <= 2251799813685248.500000000, toInt32(1) > 2251799813685248.500000000, toInt32(1) >= 2251799813685248.500000000, 2251799813685248.500000000 = toInt32(1), 2251799813685248.500000000 != toInt32(1), 2251799813685248.500000000 < toInt32(1), 2251799813685248.500000000 <= toInt32(1), 2251799813685248.500000000 > toInt32(1), 2251799813685248.500000000 >= toInt32(1) , toUInt64(1) = 2251799813685248.500000000, toUInt64(1) != 2251799813685248.500000000, toUInt64(1) < 2251799813685248.500000000, toUInt64(1) <= 2251799813685248.500000000, toUInt64(1) > 2251799813685248.500000000, toUInt64(1) >= 2251799813685248.500000000, 2251799813685248.500000000 = toUInt64(1), 2251799813685248.500000000 != toUInt64(1), 2251799813685248.500000000 < toUInt64(1), 2251799813685248.500000000 <= toUInt64(1), 2251799813685248.500000000 > toUInt64(1), 2251799813685248.500000000 >= toUInt64(1) , toInt64(1) = 2251799813685248.500000000, toInt64(1) != 2251799813685248.500000000, toInt64(1) < 2251799813685248.500000000, toInt64(1) <= 2251799813685248.500000000, toInt64(1) > 2251799813685248.500000000, toInt64(1) >= 2251799813685248.500000000, 2251799813685248.500000000 = toInt64(1), 2251799813685248.500000000 != toInt64(1), 2251799813685248.500000000 < toInt64(1), 2251799813685248.500000000 <= toInt64(1), 2251799813685248.500000000 > toInt64(1), 2251799813685248.500000000 >= toInt64(1) ; +SELECT '1', '1152921504606846976.000000000', 1 = 1152921504606846976.000000000, 1 != 1152921504606846976.000000000, 1 < 1152921504606846976.000000000, 1 <= 1152921504606846976.000000000, 1 > 1152921504606846976.000000000, 1 >= 1152921504606846976.000000000, 1152921504606846976.000000000 = 1, 1152921504606846976.000000000 != 1, 1152921504606846976.000000000 < 1, 1152921504606846976.000000000 <= 1, 1152921504606846976.000000000 > 1, 1152921504606846976.000000000 >= 1 , toUInt8(1) = 1152921504606846976.000000000, toUInt8(1) != 1152921504606846976.000000000, toUInt8(1) < 1152921504606846976.000000000, toUInt8(1) <= 1152921504606846976.000000000, toUInt8(1) > 1152921504606846976.000000000, toUInt8(1) >= 1152921504606846976.000000000, 1152921504606846976.000000000 = toUInt8(1), 1152921504606846976.000000000 != toUInt8(1), 1152921504606846976.000000000 < toUInt8(1), 1152921504606846976.000000000 <= toUInt8(1), 1152921504606846976.000000000 > toUInt8(1), 1152921504606846976.000000000 >= toUInt8(1) , toInt8(1) = 1152921504606846976.000000000, toInt8(1) != 1152921504606846976.000000000, toInt8(1) < 1152921504606846976.000000000, toInt8(1) <= 1152921504606846976.000000000, toInt8(1) > 1152921504606846976.000000000, toInt8(1) >= 1152921504606846976.000000000, 1152921504606846976.000000000 = toInt8(1), 1152921504606846976.000000000 != toInt8(1), 1152921504606846976.000000000 < toInt8(1), 1152921504606846976.000000000 <= toInt8(1), 1152921504606846976.000000000 > toInt8(1), 1152921504606846976.000000000 >= toInt8(1) , toUInt16(1) = 1152921504606846976.000000000, toUInt16(1) != 1152921504606846976.000000000, toUInt16(1) < 1152921504606846976.000000000, toUInt16(1) <= 1152921504606846976.000000000, toUInt16(1) > 1152921504606846976.000000000, toUInt16(1) >= 1152921504606846976.000000000, 1152921504606846976.000000000 = toUInt16(1), 1152921504606846976.000000000 != toUInt16(1), 1152921504606846976.000000000 < toUInt16(1), 1152921504606846976.000000000 <= toUInt16(1), 1152921504606846976.000000000 > toUInt16(1), 1152921504606846976.000000000 >= toUInt16(1) , toInt16(1) = 1152921504606846976.000000000, toInt16(1) != 1152921504606846976.000000000, 
toInt16(1) < 1152921504606846976.000000000, toInt16(1) <= 1152921504606846976.000000000, toInt16(1) > 1152921504606846976.000000000, toInt16(1) >= 1152921504606846976.000000000, 1152921504606846976.000000000 = toInt16(1), 1152921504606846976.000000000 != toInt16(1), 1152921504606846976.000000000 < toInt16(1), 1152921504606846976.000000000 <= toInt16(1), 1152921504606846976.000000000 > toInt16(1), 1152921504606846976.000000000 >= toInt16(1) , toUInt32(1) = 1152921504606846976.000000000, toUInt32(1) != 1152921504606846976.000000000, toUInt32(1) < 1152921504606846976.000000000, toUInt32(1) <= 1152921504606846976.000000000, toUInt32(1) > 1152921504606846976.000000000, toUInt32(1) >= 1152921504606846976.000000000, 1152921504606846976.000000000 = toUInt32(1), 1152921504606846976.000000000 != toUInt32(1), 1152921504606846976.000000000 < toUInt32(1), 1152921504606846976.000000000 <= toUInt32(1), 1152921504606846976.000000000 > toUInt32(1), 1152921504606846976.000000000 >= toUInt32(1) , toInt32(1) = 1152921504606846976.000000000, toInt32(1) != 1152921504606846976.000000000, toInt32(1) < 1152921504606846976.000000000, toInt32(1) <= 1152921504606846976.000000000, toInt32(1) > 1152921504606846976.000000000, toInt32(1) >= 1152921504606846976.000000000, 1152921504606846976.000000000 = toInt32(1), 1152921504606846976.000000000 != toInt32(1), 1152921504606846976.000000000 < toInt32(1), 1152921504606846976.000000000 <= toInt32(1), 1152921504606846976.000000000 > toInt32(1), 1152921504606846976.000000000 >= toInt32(1) , toUInt64(1) = 1152921504606846976.000000000, toUInt64(1) != 1152921504606846976.000000000, toUInt64(1) < 1152921504606846976.000000000, toUInt64(1) <= 1152921504606846976.000000000, toUInt64(1) > 1152921504606846976.000000000, toUInt64(1) >= 1152921504606846976.000000000, 1152921504606846976.000000000 = toUInt64(1), 1152921504606846976.000000000 != toUInt64(1), 1152921504606846976.000000000 < toUInt64(1), 1152921504606846976.000000000 <= toUInt64(1), 1152921504606846976.000000000 > toUInt64(1), 1152921504606846976.000000000 >= toUInt64(1) , toInt64(1) = 1152921504606846976.000000000, toInt64(1) != 1152921504606846976.000000000, toInt64(1) < 1152921504606846976.000000000, toInt64(1) <= 1152921504606846976.000000000, toInt64(1) > 1152921504606846976.000000000, toInt64(1) >= 1152921504606846976.000000000, 1152921504606846976.000000000 = toInt64(1), 1152921504606846976.000000000 != toInt64(1), 1152921504606846976.000000000 < toInt64(1), 1152921504606846976.000000000 <= toInt64(1), 1152921504606846976.000000000 > toInt64(1), 1152921504606846976.000000000 >= toInt64(1) ; +SELECT '1', '-1152921504606846976.000000000', 1 = -1152921504606846976.000000000, 1 != -1152921504606846976.000000000, 1 < -1152921504606846976.000000000, 1 <= -1152921504606846976.000000000, 1 > -1152921504606846976.000000000, 1 >= -1152921504606846976.000000000, -1152921504606846976.000000000 = 1, -1152921504606846976.000000000 != 1, -1152921504606846976.000000000 < 1, -1152921504606846976.000000000 <= 1, -1152921504606846976.000000000 > 1, -1152921504606846976.000000000 >= 1 , toUInt8(1) = -1152921504606846976.000000000, toUInt8(1) != -1152921504606846976.000000000, toUInt8(1) < -1152921504606846976.000000000, toUInt8(1) <= -1152921504606846976.000000000, toUInt8(1) > -1152921504606846976.000000000, toUInt8(1) >= -1152921504606846976.000000000, -1152921504606846976.000000000 = toUInt8(1), -1152921504606846976.000000000 != toUInt8(1), -1152921504606846976.000000000 < toUInt8(1), -1152921504606846976.000000000 <= toUInt8(1), 
-1152921504606846976.000000000 > toUInt8(1), -1152921504606846976.000000000 >= toUInt8(1) , toInt8(1) = -1152921504606846976.000000000, toInt8(1) != -1152921504606846976.000000000, toInt8(1) < -1152921504606846976.000000000, toInt8(1) <= -1152921504606846976.000000000, toInt8(1) > -1152921504606846976.000000000, toInt8(1) >= -1152921504606846976.000000000, -1152921504606846976.000000000 = toInt8(1), -1152921504606846976.000000000 != toInt8(1), -1152921504606846976.000000000 < toInt8(1), -1152921504606846976.000000000 <= toInt8(1), -1152921504606846976.000000000 > toInt8(1), -1152921504606846976.000000000 >= toInt8(1) , toUInt16(1) = -1152921504606846976.000000000, toUInt16(1) != -1152921504606846976.000000000, toUInt16(1) < -1152921504606846976.000000000, toUInt16(1) <= -1152921504606846976.000000000, toUInt16(1) > -1152921504606846976.000000000, toUInt16(1) >= -1152921504606846976.000000000, -1152921504606846976.000000000 = toUInt16(1), -1152921504606846976.000000000 != toUInt16(1), -1152921504606846976.000000000 < toUInt16(1), -1152921504606846976.000000000 <= toUInt16(1), -1152921504606846976.000000000 > toUInt16(1), -1152921504606846976.000000000 >= toUInt16(1) , toInt16(1) = -1152921504606846976.000000000, toInt16(1) != -1152921504606846976.000000000, toInt16(1) < -1152921504606846976.000000000, toInt16(1) <= -1152921504606846976.000000000, toInt16(1) > -1152921504606846976.000000000, toInt16(1) >= -1152921504606846976.000000000, -1152921504606846976.000000000 = toInt16(1), -1152921504606846976.000000000 != toInt16(1), -1152921504606846976.000000000 < toInt16(1), -1152921504606846976.000000000 <= toInt16(1), -1152921504606846976.000000000 > toInt16(1), -1152921504606846976.000000000 >= toInt16(1) , toUInt32(1) = -1152921504606846976.000000000, toUInt32(1) != -1152921504606846976.000000000, toUInt32(1) < -1152921504606846976.000000000, toUInt32(1) <= -1152921504606846976.000000000, toUInt32(1) > -1152921504606846976.000000000, toUInt32(1) >= -1152921504606846976.000000000, -1152921504606846976.000000000 = toUInt32(1), -1152921504606846976.000000000 != toUInt32(1), -1152921504606846976.000000000 < toUInt32(1), -1152921504606846976.000000000 <= toUInt32(1), -1152921504606846976.000000000 > toUInt32(1), -1152921504606846976.000000000 >= toUInt32(1) , toInt32(1) = -1152921504606846976.000000000, toInt32(1) != -1152921504606846976.000000000, toInt32(1) < -1152921504606846976.000000000, toInt32(1) <= -1152921504606846976.000000000, toInt32(1) > -1152921504606846976.000000000, toInt32(1) >= -1152921504606846976.000000000, -1152921504606846976.000000000 = toInt32(1), -1152921504606846976.000000000 != toInt32(1), -1152921504606846976.000000000 < toInt32(1), -1152921504606846976.000000000 <= toInt32(1), -1152921504606846976.000000000 > toInt32(1), -1152921504606846976.000000000 >= toInt32(1) , toUInt64(1) = -1152921504606846976.000000000, toUInt64(1) != -1152921504606846976.000000000, toUInt64(1) < -1152921504606846976.000000000, toUInt64(1) <= -1152921504606846976.000000000, toUInt64(1) > -1152921504606846976.000000000, toUInt64(1) >= -1152921504606846976.000000000, -1152921504606846976.000000000 = toUInt64(1), -1152921504606846976.000000000 != toUInt64(1), -1152921504606846976.000000000 < toUInt64(1), -1152921504606846976.000000000 <= toUInt64(1), -1152921504606846976.000000000 > toUInt64(1), -1152921504606846976.000000000 >= toUInt64(1) , toInt64(1) = -1152921504606846976.000000000, toInt64(1) != -1152921504606846976.000000000, toInt64(1) < -1152921504606846976.000000000, toInt64(1) <= 
-1152921504606846976.000000000, toInt64(1) > -1152921504606846976.000000000, toInt64(1) >= -1152921504606846976.000000000, -1152921504606846976.000000000 = toInt64(1), -1152921504606846976.000000000 != toInt64(1), -1152921504606846976.000000000 < toInt64(1), -1152921504606846976.000000000 <= toInt64(1), -1152921504606846976.000000000 > toInt64(1), -1152921504606846976.000000000 >= toInt64(1) ; +SELECT '1', '-9223372036854786048.000000000', 1 = -9223372036854786048.000000000, 1 != -9223372036854786048.000000000, 1 < -9223372036854786048.000000000, 1 <= -9223372036854786048.000000000, 1 > -9223372036854786048.000000000, 1 >= -9223372036854786048.000000000, -9223372036854786048.000000000 = 1, -9223372036854786048.000000000 != 1, -9223372036854786048.000000000 < 1, -9223372036854786048.000000000 <= 1, -9223372036854786048.000000000 > 1, -9223372036854786048.000000000 >= 1 , toUInt8(1) = -9223372036854786048.000000000, toUInt8(1) != -9223372036854786048.000000000, toUInt8(1) < -9223372036854786048.000000000, toUInt8(1) <= -9223372036854786048.000000000, toUInt8(1) > -9223372036854786048.000000000, toUInt8(1) >= -9223372036854786048.000000000, -9223372036854786048.000000000 = toUInt8(1), -9223372036854786048.000000000 != toUInt8(1), -9223372036854786048.000000000 < toUInt8(1), -9223372036854786048.000000000 <= toUInt8(1), -9223372036854786048.000000000 > toUInt8(1), -9223372036854786048.000000000 >= toUInt8(1) , toInt8(1) = -9223372036854786048.000000000, toInt8(1) != -9223372036854786048.000000000, toInt8(1) < -9223372036854786048.000000000, toInt8(1) <= -9223372036854786048.000000000, toInt8(1) > -9223372036854786048.000000000, toInt8(1) >= -9223372036854786048.000000000, -9223372036854786048.000000000 = toInt8(1), -9223372036854786048.000000000 != toInt8(1), -9223372036854786048.000000000 < toInt8(1), -9223372036854786048.000000000 <= toInt8(1), -9223372036854786048.000000000 > toInt8(1), -9223372036854786048.000000000 >= toInt8(1) , toUInt16(1) = -9223372036854786048.000000000, toUInt16(1) != -9223372036854786048.000000000, toUInt16(1) < -9223372036854786048.000000000, toUInt16(1) <= -9223372036854786048.000000000, toUInt16(1) > -9223372036854786048.000000000, toUInt16(1) >= -9223372036854786048.000000000, -9223372036854786048.000000000 = toUInt16(1), -9223372036854786048.000000000 != toUInt16(1), -9223372036854786048.000000000 < toUInt16(1), -9223372036854786048.000000000 <= toUInt16(1), -9223372036854786048.000000000 > toUInt16(1), -9223372036854786048.000000000 >= toUInt16(1) , toInt16(1) = -9223372036854786048.000000000, toInt16(1) != -9223372036854786048.000000000, toInt16(1) < -9223372036854786048.000000000, toInt16(1) <= -9223372036854786048.000000000, toInt16(1) > -9223372036854786048.000000000, toInt16(1) >= -9223372036854786048.000000000, -9223372036854786048.000000000 = toInt16(1), -9223372036854786048.000000000 != toInt16(1), -9223372036854786048.000000000 < toInt16(1), -9223372036854786048.000000000 <= toInt16(1), -9223372036854786048.000000000 > toInt16(1), -9223372036854786048.000000000 >= toInt16(1) , toUInt32(1) = -9223372036854786048.000000000, toUInt32(1) != -9223372036854786048.000000000, toUInt32(1) < -9223372036854786048.000000000, toUInt32(1) <= -9223372036854786048.000000000, toUInt32(1) > -9223372036854786048.000000000, toUInt32(1) >= -9223372036854786048.000000000, -9223372036854786048.000000000 = toUInt32(1), -9223372036854786048.000000000 != toUInt32(1), -9223372036854786048.000000000 < toUInt32(1), -9223372036854786048.000000000 <= toUInt32(1), 
-9223372036854786048.000000000 > toUInt32(1), -9223372036854786048.000000000 >= toUInt32(1) , toInt32(1) = -9223372036854786048.000000000, toInt32(1) != -9223372036854786048.000000000, toInt32(1) < -9223372036854786048.000000000, toInt32(1) <= -9223372036854786048.000000000, toInt32(1) > -9223372036854786048.000000000, toInt32(1) >= -9223372036854786048.000000000, -9223372036854786048.000000000 = toInt32(1), -9223372036854786048.000000000 != toInt32(1), -9223372036854786048.000000000 < toInt32(1), -9223372036854786048.000000000 <= toInt32(1), -9223372036854786048.000000000 > toInt32(1), -9223372036854786048.000000000 >= toInt32(1) , toUInt64(1) = -9223372036854786048.000000000, toUInt64(1) != -9223372036854786048.000000000, toUInt64(1) < -9223372036854786048.000000000, toUInt64(1) <= -9223372036854786048.000000000, toUInt64(1) > -9223372036854786048.000000000, toUInt64(1) >= -9223372036854786048.000000000, -9223372036854786048.000000000 = toUInt64(1), -9223372036854786048.000000000 != toUInt64(1), -9223372036854786048.000000000 < toUInt64(1), -9223372036854786048.000000000 <= toUInt64(1), -9223372036854786048.000000000 > toUInt64(1), -9223372036854786048.000000000 >= toUInt64(1) , toInt64(1) = -9223372036854786048.000000000, toInt64(1) != -9223372036854786048.000000000, toInt64(1) < -9223372036854786048.000000000, toInt64(1) <= -9223372036854786048.000000000, toInt64(1) > -9223372036854786048.000000000, toInt64(1) >= -9223372036854786048.000000000, -9223372036854786048.000000000 = toInt64(1), -9223372036854786048.000000000 != toInt64(1), -9223372036854786048.000000000 < toInt64(1), -9223372036854786048.000000000 <= toInt64(1), -9223372036854786048.000000000 > toInt64(1), -9223372036854786048.000000000 >= toInt64(1) ; +SELECT '1', '9223372036854786048.000000000', 1 = 9223372036854786048.000000000, 1 != 9223372036854786048.000000000, 1 < 9223372036854786048.000000000, 1 <= 9223372036854786048.000000000, 1 > 9223372036854786048.000000000, 1 >= 9223372036854786048.000000000, 9223372036854786048.000000000 = 1, 9223372036854786048.000000000 != 1, 9223372036854786048.000000000 < 1, 9223372036854786048.000000000 <= 1, 9223372036854786048.000000000 > 1, 9223372036854786048.000000000 >= 1 , toUInt8(1) = 9223372036854786048.000000000, toUInt8(1) != 9223372036854786048.000000000, toUInt8(1) < 9223372036854786048.000000000, toUInt8(1) <= 9223372036854786048.000000000, toUInt8(1) > 9223372036854786048.000000000, toUInt8(1) >= 9223372036854786048.000000000, 9223372036854786048.000000000 = toUInt8(1), 9223372036854786048.000000000 != toUInt8(1), 9223372036854786048.000000000 < toUInt8(1), 9223372036854786048.000000000 <= toUInt8(1), 9223372036854786048.000000000 > toUInt8(1), 9223372036854786048.000000000 >= toUInt8(1) , toInt8(1) = 9223372036854786048.000000000, toInt8(1) != 9223372036854786048.000000000, toInt8(1) < 9223372036854786048.000000000, toInt8(1) <= 9223372036854786048.000000000, toInt8(1) > 9223372036854786048.000000000, toInt8(1) >= 9223372036854786048.000000000, 9223372036854786048.000000000 = toInt8(1), 9223372036854786048.000000000 != toInt8(1), 9223372036854786048.000000000 < toInt8(1), 9223372036854786048.000000000 <= toInt8(1), 9223372036854786048.000000000 > toInt8(1), 9223372036854786048.000000000 >= toInt8(1) , toUInt16(1) = 9223372036854786048.000000000, toUInt16(1) != 9223372036854786048.000000000, toUInt16(1) < 9223372036854786048.000000000, toUInt16(1) <= 9223372036854786048.000000000, toUInt16(1) > 9223372036854786048.000000000, toUInt16(1) >= 9223372036854786048.000000000, 
9223372036854786048.000000000 = toUInt16(1), 9223372036854786048.000000000 != toUInt16(1), 9223372036854786048.000000000 < toUInt16(1), 9223372036854786048.000000000 <= toUInt16(1), 9223372036854786048.000000000 > toUInt16(1), 9223372036854786048.000000000 >= toUInt16(1) , toInt16(1) = 9223372036854786048.000000000, toInt16(1) != 9223372036854786048.000000000, toInt16(1) < 9223372036854786048.000000000, toInt16(1) <= 9223372036854786048.000000000, toInt16(1) > 9223372036854786048.000000000, toInt16(1) >= 9223372036854786048.000000000, 9223372036854786048.000000000 = toInt16(1), 9223372036854786048.000000000 != toInt16(1), 9223372036854786048.000000000 < toInt16(1), 9223372036854786048.000000000 <= toInt16(1), 9223372036854786048.000000000 > toInt16(1), 9223372036854786048.000000000 >= toInt16(1) , toUInt32(1) = 9223372036854786048.000000000, toUInt32(1) != 9223372036854786048.000000000, toUInt32(1) < 9223372036854786048.000000000, toUInt32(1) <= 9223372036854786048.000000000, toUInt32(1) > 9223372036854786048.000000000, toUInt32(1) >= 9223372036854786048.000000000, 9223372036854786048.000000000 = toUInt32(1), 9223372036854786048.000000000 != toUInt32(1), 9223372036854786048.000000000 < toUInt32(1), 9223372036854786048.000000000 <= toUInt32(1), 9223372036854786048.000000000 > toUInt32(1), 9223372036854786048.000000000 >= toUInt32(1) , toInt32(1) = 9223372036854786048.000000000, toInt32(1) != 9223372036854786048.000000000, toInt32(1) < 9223372036854786048.000000000, toInt32(1) <= 9223372036854786048.000000000, toInt32(1) > 9223372036854786048.000000000, toInt32(1) >= 9223372036854786048.000000000, 9223372036854786048.000000000 = toInt32(1), 9223372036854786048.000000000 != toInt32(1), 9223372036854786048.000000000 < toInt32(1), 9223372036854786048.000000000 <= toInt32(1), 9223372036854786048.000000000 > toInt32(1), 9223372036854786048.000000000 >= toInt32(1) , toUInt64(1) = 9223372036854786048.000000000, toUInt64(1) != 9223372036854786048.000000000, toUInt64(1) < 9223372036854786048.000000000, toUInt64(1) <= 9223372036854786048.000000000, toUInt64(1) > 9223372036854786048.000000000, toUInt64(1) >= 9223372036854786048.000000000, 9223372036854786048.000000000 = toUInt64(1), 9223372036854786048.000000000 != toUInt64(1), 9223372036854786048.000000000 < toUInt64(1), 9223372036854786048.000000000 <= toUInt64(1), 9223372036854786048.000000000 > toUInt64(1), 9223372036854786048.000000000 >= toUInt64(1) , toInt64(1) = 9223372036854786048.000000000, toInt64(1) != 9223372036854786048.000000000, toInt64(1) < 9223372036854786048.000000000, toInt64(1) <= 9223372036854786048.000000000, toInt64(1) > 9223372036854786048.000000000, toInt64(1) >= 9223372036854786048.000000000, 9223372036854786048.000000000 = toInt64(1), 9223372036854786048.000000000 != toInt64(1), 9223372036854786048.000000000 < toInt64(1), 9223372036854786048.000000000 <= toInt64(1), 9223372036854786048.000000000 > toInt64(1), 9223372036854786048.000000000 >= toInt64(1) ; +SELECT '18446744073709551615', '0.000000000', 18446744073709551615 = 0.000000000, 18446744073709551615 != 0.000000000, 18446744073709551615 < 0.000000000, 18446744073709551615 <= 0.000000000, 18446744073709551615 > 0.000000000, 18446744073709551615 >= 0.000000000, 0.000000000 = 18446744073709551615, 0.000000000 != 18446744073709551615, 0.000000000 < 18446744073709551615, 0.000000000 <= 18446744073709551615, 0.000000000 > 18446744073709551615, 0.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = 0.000000000, toUInt64(18446744073709551615) != 0.000000000, 
toUInt64(18446744073709551615) < 0.000000000, toUInt64(18446744073709551615) <= 0.000000000, toUInt64(18446744073709551615) > 0.000000000, toUInt64(18446744073709551615) >= 0.000000000, 0.000000000 = toUInt64(18446744073709551615), 0.000000000 != toUInt64(18446744073709551615), 0.000000000 < toUInt64(18446744073709551615), 0.000000000 <= toUInt64(18446744073709551615), 0.000000000 > toUInt64(18446744073709551615), 0.000000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '-1.000000000', 18446744073709551615 = -1.000000000, 18446744073709551615 != -1.000000000, 18446744073709551615 < -1.000000000, 18446744073709551615 <= -1.000000000, 18446744073709551615 > -1.000000000, 18446744073709551615 >= -1.000000000, -1.000000000 = 18446744073709551615, -1.000000000 != 18446744073709551615, -1.000000000 < 18446744073709551615, -1.000000000 <= 18446744073709551615, -1.000000000 > 18446744073709551615, -1.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = -1.000000000, toUInt64(18446744073709551615) != -1.000000000, toUInt64(18446744073709551615) < -1.000000000, toUInt64(18446744073709551615) <= -1.000000000, toUInt64(18446744073709551615) > -1.000000000, toUInt64(18446744073709551615) >= -1.000000000, -1.000000000 = toUInt64(18446744073709551615), -1.000000000 != toUInt64(18446744073709551615), -1.000000000 < toUInt64(18446744073709551615), -1.000000000 <= toUInt64(18446744073709551615), -1.000000000 > toUInt64(18446744073709551615), -1.000000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '1.000000000', 18446744073709551615 = 1.000000000, 18446744073709551615 != 1.000000000, 18446744073709551615 < 1.000000000, 18446744073709551615 <= 1.000000000, 18446744073709551615 > 1.000000000, 18446744073709551615 >= 1.000000000, 1.000000000 = 18446744073709551615, 1.000000000 != 18446744073709551615, 1.000000000 < 18446744073709551615, 1.000000000 <= 18446744073709551615, 1.000000000 > 18446744073709551615, 1.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = 1.000000000, toUInt64(18446744073709551615) != 1.000000000, toUInt64(18446744073709551615) < 1.000000000, toUInt64(18446744073709551615) <= 1.000000000, toUInt64(18446744073709551615) > 1.000000000, toUInt64(18446744073709551615) >= 1.000000000, 1.000000000 = toUInt64(18446744073709551615), 1.000000000 != toUInt64(18446744073709551615), 1.000000000 < toUInt64(18446744073709551615), 1.000000000 <= toUInt64(18446744073709551615), 1.000000000 > toUInt64(18446744073709551615), 1.000000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '18446744073709551616.000000000', 18446744073709551615 = 18446744073709551616.000000000, 18446744073709551615 != 18446744073709551616.000000000, 18446744073709551615 < 18446744073709551616.000000000, 18446744073709551615 <= 18446744073709551616.000000000, 18446744073709551615 > 18446744073709551616.000000000, 18446744073709551615 >= 18446744073709551616.000000000, 18446744073709551616.000000000 = 18446744073709551615, 18446744073709551616.000000000 != 18446744073709551615, 18446744073709551616.000000000 < 18446744073709551615, 18446744073709551616.000000000 <= 18446744073709551615, 18446744073709551616.000000000 > 18446744073709551615, 18446744073709551616.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = 18446744073709551616.000000000, toUInt64(18446744073709551615) != 18446744073709551616.000000000, toUInt64(18446744073709551615) < 18446744073709551616.000000000, toUInt64(18446744073709551615) <= 
18446744073709551616.000000000, toUInt64(18446744073709551615) > 18446744073709551616.000000000, toUInt64(18446744073709551615) >= 18446744073709551616.000000000, 18446744073709551616.000000000 = toUInt64(18446744073709551615), 18446744073709551616.000000000 != toUInt64(18446744073709551615), 18446744073709551616.000000000 < toUInt64(18446744073709551615), 18446744073709551616.000000000 <= toUInt64(18446744073709551615), 18446744073709551616.000000000 > toUInt64(18446744073709551615), 18446744073709551616.000000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '9223372036854775808.000000000', 18446744073709551615 = 9223372036854775808.000000000, 18446744073709551615 != 9223372036854775808.000000000, 18446744073709551615 < 9223372036854775808.000000000, 18446744073709551615 <= 9223372036854775808.000000000, 18446744073709551615 > 9223372036854775808.000000000, 18446744073709551615 >= 9223372036854775808.000000000, 9223372036854775808.000000000 = 18446744073709551615, 9223372036854775808.000000000 != 18446744073709551615, 9223372036854775808.000000000 < 18446744073709551615, 9223372036854775808.000000000 <= 18446744073709551615, 9223372036854775808.000000000 > 18446744073709551615, 9223372036854775808.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = 9223372036854775808.000000000, toUInt64(18446744073709551615) != 9223372036854775808.000000000, toUInt64(18446744073709551615) < 9223372036854775808.000000000, toUInt64(18446744073709551615) <= 9223372036854775808.000000000, toUInt64(18446744073709551615) > 9223372036854775808.000000000, toUInt64(18446744073709551615) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toUInt64(18446744073709551615), 9223372036854775808.000000000 != toUInt64(18446744073709551615), 9223372036854775808.000000000 < toUInt64(18446744073709551615), 9223372036854775808.000000000 <= toUInt64(18446744073709551615), 9223372036854775808.000000000 > toUInt64(18446744073709551615), 9223372036854775808.000000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '-9223372036854775808.000000000', 18446744073709551615 = -9223372036854775808.000000000, 18446744073709551615 != -9223372036854775808.000000000, 18446744073709551615 < -9223372036854775808.000000000, 18446744073709551615 <= -9223372036854775808.000000000, 18446744073709551615 > -9223372036854775808.000000000, 18446744073709551615 >= -9223372036854775808.000000000, -9223372036854775808.000000000 = 18446744073709551615, -9223372036854775808.000000000 != 18446744073709551615, -9223372036854775808.000000000 < 18446744073709551615, -9223372036854775808.000000000 <= 18446744073709551615, -9223372036854775808.000000000 > 18446744073709551615, -9223372036854775808.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = -9223372036854775808.000000000, toUInt64(18446744073709551615) != -9223372036854775808.000000000, toUInt64(18446744073709551615) < -9223372036854775808.000000000, toUInt64(18446744073709551615) <= -9223372036854775808.000000000, toUInt64(18446744073709551615) > -9223372036854775808.000000000, toUInt64(18446744073709551615) >= -9223372036854775808.000000000, -9223372036854775808.000000000 = toUInt64(18446744073709551615), -9223372036854775808.000000000 != toUInt64(18446744073709551615), -9223372036854775808.000000000 < toUInt64(18446744073709551615), -9223372036854775808.000000000 <= toUInt64(18446744073709551615), -9223372036854775808.000000000 > toUInt64(18446744073709551615), -9223372036854775808.000000000 >= 
toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '9223372036854775808.000000000', 18446744073709551615 = 9223372036854775808.000000000, 18446744073709551615 != 9223372036854775808.000000000, 18446744073709551615 < 9223372036854775808.000000000, 18446744073709551615 <= 9223372036854775808.000000000, 18446744073709551615 > 9223372036854775808.000000000, 18446744073709551615 >= 9223372036854775808.000000000, 9223372036854775808.000000000 = 18446744073709551615, 9223372036854775808.000000000 != 18446744073709551615, 9223372036854775808.000000000 < 18446744073709551615, 9223372036854775808.000000000 <= 18446744073709551615, 9223372036854775808.000000000 > 18446744073709551615, 9223372036854775808.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = 9223372036854775808.000000000, toUInt64(18446744073709551615) != 9223372036854775808.000000000, toUInt64(18446744073709551615) < 9223372036854775808.000000000, toUInt64(18446744073709551615) <= 9223372036854775808.000000000, toUInt64(18446744073709551615) > 9223372036854775808.000000000, toUInt64(18446744073709551615) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toUInt64(18446744073709551615), 9223372036854775808.000000000 != toUInt64(18446744073709551615), 9223372036854775808.000000000 < toUInt64(18446744073709551615), 9223372036854775808.000000000 <= toUInt64(18446744073709551615), 9223372036854775808.000000000 > toUInt64(18446744073709551615), 9223372036854775808.000000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '2251799813685248.000000000', 18446744073709551615 = 2251799813685248.000000000, 18446744073709551615 != 2251799813685248.000000000, 18446744073709551615 < 2251799813685248.000000000, 18446744073709551615 <= 2251799813685248.000000000, 18446744073709551615 > 2251799813685248.000000000, 18446744073709551615 >= 2251799813685248.000000000, 2251799813685248.000000000 = 18446744073709551615, 2251799813685248.000000000 != 18446744073709551615, 2251799813685248.000000000 < 18446744073709551615, 2251799813685248.000000000 <= 18446744073709551615, 2251799813685248.000000000 > 18446744073709551615, 2251799813685248.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = 2251799813685248.000000000, toUInt64(18446744073709551615) != 2251799813685248.000000000, toUInt64(18446744073709551615) < 2251799813685248.000000000, toUInt64(18446744073709551615) <= 2251799813685248.000000000, toUInt64(18446744073709551615) > 2251799813685248.000000000, toUInt64(18446744073709551615) >= 2251799813685248.000000000, 2251799813685248.000000000 = toUInt64(18446744073709551615), 2251799813685248.000000000 != toUInt64(18446744073709551615), 2251799813685248.000000000 < toUInt64(18446744073709551615), 2251799813685248.000000000 <= toUInt64(18446744073709551615), 2251799813685248.000000000 > toUInt64(18446744073709551615), 2251799813685248.000000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '4503599627370496.000000000', 18446744073709551615 = 4503599627370496.000000000, 18446744073709551615 != 4503599627370496.000000000, 18446744073709551615 < 4503599627370496.000000000, 18446744073709551615 <= 4503599627370496.000000000, 18446744073709551615 > 4503599627370496.000000000, 18446744073709551615 >= 4503599627370496.000000000, 4503599627370496.000000000 = 18446744073709551615, 4503599627370496.000000000 != 18446744073709551615, 4503599627370496.000000000 < 18446744073709551615, 4503599627370496.000000000 <= 18446744073709551615, 4503599627370496.000000000 > 
18446744073709551615, 4503599627370496.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = 4503599627370496.000000000, toUInt64(18446744073709551615) != 4503599627370496.000000000, toUInt64(18446744073709551615) < 4503599627370496.000000000, toUInt64(18446744073709551615) <= 4503599627370496.000000000, toUInt64(18446744073709551615) > 4503599627370496.000000000, toUInt64(18446744073709551615) >= 4503599627370496.000000000, 4503599627370496.000000000 = toUInt64(18446744073709551615), 4503599627370496.000000000 != toUInt64(18446744073709551615), 4503599627370496.000000000 < toUInt64(18446744073709551615), 4503599627370496.000000000 <= toUInt64(18446744073709551615), 4503599627370496.000000000 > toUInt64(18446744073709551615), 4503599627370496.000000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '9007199254740991.000000000', 18446744073709551615 = 9007199254740991.000000000, 18446744073709551615 != 9007199254740991.000000000, 18446744073709551615 < 9007199254740991.000000000, 18446744073709551615 <= 9007199254740991.000000000, 18446744073709551615 > 9007199254740991.000000000, 18446744073709551615 >= 9007199254740991.000000000, 9007199254740991.000000000 = 18446744073709551615, 9007199254740991.000000000 != 18446744073709551615, 9007199254740991.000000000 < 18446744073709551615, 9007199254740991.000000000 <= 18446744073709551615, 9007199254740991.000000000 > 18446744073709551615, 9007199254740991.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = 9007199254740991.000000000, toUInt64(18446744073709551615) != 9007199254740991.000000000, toUInt64(18446744073709551615) < 9007199254740991.000000000, toUInt64(18446744073709551615) <= 9007199254740991.000000000, toUInt64(18446744073709551615) > 9007199254740991.000000000, toUInt64(18446744073709551615) >= 9007199254740991.000000000, 9007199254740991.000000000 = toUInt64(18446744073709551615), 9007199254740991.000000000 != toUInt64(18446744073709551615), 9007199254740991.000000000 < toUInt64(18446744073709551615), 9007199254740991.000000000 <= toUInt64(18446744073709551615), 9007199254740991.000000000 > toUInt64(18446744073709551615), 9007199254740991.000000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '9007199254740992.000000000', 18446744073709551615 = 9007199254740992.000000000, 18446744073709551615 != 9007199254740992.000000000, 18446744073709551615 < 9007199254740992.000000000, 18446744073709551615 <= 9007199254740992.000000000, 18446744073709551615 > 9007199254740992.000000000, 18446744073709551615 >= 9007199254740992.000000000, 9007199254740992.000000000 = 18446744073709551615, 9007199254740992.000000000 != 18446744073709551615, 9007199254740992.000000000 < 18446744073709551615, 9007199254740992.000000000 <= 18446744073709551615, 9007199254740992.000000000 > 18446744073709551615, 9007199254740992.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = 9007199254740992.000000000, toUInt64(18446744073709551615) != 9007199254740992.000000000, toUInt64(18446744073709551615) < 9007199254740992.000000000, toUInt64(18446744073709551615) <= 9007199254740992.000000000, toUInt64(18446744073709551615) > 9007199254740992.000000000, toUInt64(18446744073709551615) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt64(18446744073709551615), 9007199254740992.000000000 != toUInt64(18446744073709551615), 9007199254740992.000000000 < toUInt64(18446744073709551615), 9007199254740992.000000000 <= toUInt64(18446744073709551615), 
9007199254740992.000000000 > toUInt64(18446744073709551615), 9007199254740992.000000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '9007199254740992.000000000', 18446744073709551615 = 9007199254740992.000000000, 18446744073709551615 != 9007199254740992.000000000, 18446744073709551615 < 9007199254740992.000000000, 18446744073709551615 <= 9007199254740992.000000000, 18446744073709551615 > 9007199254740992.000000000, 18446744073709551615 >= 9007199254740992.000000000, 9007199254740992.000000000 = 18446744073709551615, 9007199254740992.000000000 != 18446744073709551615, 9007199254740992.000000000 < 18446744073709551615, 9007199254740992.000000000 <= 18446744073709551615, 9007199254740992.000000000 > 18446744073709551615, 9007199254740992.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = 9007199254740992.000000000, toUInt64(18446744073709551615) != 9007199254740992.000000000, toUInt64(18446744073709551615) < 9007199254740992.000000000, toUInt64(18446744073709551615) <= 9007199254740992.000000000, toUInt64(18446744073709551615) > 9007199254740992.000000000, toUInt64(18446744073709551615) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt64(18446744073709551615), 9007199254740992.000000000 != toUInt64(18446744073709551615), 9007199254740992.000000000 < toUInt64(18446744073709551615), 9007199254740992.000000000 <= toUInt64(18446744073709551615), 9007199254740992.000000000 > toUInt64(18446744073709551615), 9007199254740992.000000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '9007199254740994.000000000', 18446744073709551615 = 9007199254740994.000000000, 18446744073709551615 != 9007199254740994.000000000, 18446744073709551615 < 9007199254740994.000000000, 18446744073709551615 <= 9007199254740994.000000000, 18446744073709551615 > 9007199254740994.000000000, 18446744073709551615 >= 9007199254740994.000000000, 9007199254740994.000000000 = 18446744073709551615, 9007199254740994.000000000 != 18446744073709551615, 9007199254740994.000000000 < 18446744073709551615, 9007199254740994.000000000 <= 18446744073709551615, 9007199254740994.000000000 > 18446744073709551615, 9007199254740994.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = 9007199254740994.000000000, toUInt64(18446744073709551615) != 9007199254740994.000000000, toUInt64(18446744073709551615) < 9007199254740994.000000000, toUInt64(18446744073709551615) <= 9007199254740994.000000000, toUInt64(18446744073709551615) > 9007199254740994.000000000, toUInt64(18446744073709551615) >= 9007199254740994.000000000, 9007199254740994.000000000 = toUInt64(18446744073709551615), 9007199254740994.000000000 != toUInt64(18446744073709551615), 9007199254740994.000000000 < toUInt64(18446744073709551615), 9007199254740994.000000000 <= toUInt64(18446744073709551615), 9007199254740994.000000000 > toUInt64(18446744073709551615), 9007199254740994.000000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '-9007199254740991.000000000', 18446744073709551615 = -9007199254740991.000000000, 18446744073709551615 != -9007199254740991.000000000, 18446744073709551615 < -9007199254740991.000000000, 18446744073709551615 <= -9007199254740991.000000000, 18446744073709551615 > -9007199254740991.000000000, 18446744073709551615 >= -9007199254740991.000000000, -9007199254740991.000000000 = 18446744073709551615, -9007199254740991.000000000 != 18446744073709551615, -9007199254740991.000000000 < 18446744073709551615, -9007199254740991.000000000 <= 18446744073709551615, 
-9007199254740991.000000000 > 18446744073709551615, -9007199254740991.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = -9007199254740991.000000000, toUInt64(18446744073709551615) != -9007199254740991.000000000, toUInt64(18446744073709551615) < -9007199254740991.000000000, toUInt64(18446744073709551615) <= -9007199254740991.000000000, toUInt64(18446744073709551615) > -9007199254740991.000000000, toUInt64(18446744073709551615) >= -9007199254740991.000000000, -9007199254740991.000000000 = toUInt64(18446744073709551615), -9007199254740991.000000000 != toUInt64(18446744073709551615), -9007199254740991.000000000 < toUInt64(18446744073709551615), -9007199254740991.000000000 <= toUInt64(18446744073709551615), -9007199254740991.000000000 > toUInt64(18446744073709551615), -9007199254740991.000000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '-9007199254740992.000000000', 18446744073709551615 = -9007199254740992.000000000, 18446744073709551615 != -9007199254740992.000000000, 18446744073709551615 < -9007199254740992.000000000, 18446744073709551615 <= -9007199254740992.000000000, 18446744073709551615 > -9007199254740992.000000000, 18446744073709551615 >= -9007199254740992.000000000, -9007199254740992.000000000 = 18446744073709551615, -9007199254740992.000000000 != 18446744073709551615, -9007199254740992.000000000 < 18446744073709551615, -9007199254740992.000000000 <= 18446744073709551615, -9007199254740992.000000000 > 18446744073709551615, -9007199254740992.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = -9007199254740992.000000000, toUInt64(18446744073709551615) != -9007199254740992.000000000, toUInt64(18446744073709551615) < -9007199254740992.000000000, toUInt64(18446744073709551615) <= -9007199254740992.000000000, toUInt64(18446744073709551615) > -9007199254740992.000000000, toUInt64(18446744073709551615) >= -9007199254740992.000000000, -9007199254740992.000000000 = toUInt64(18446744073709551615), -9007199254740992.000000000 != toUInt64(18446744073709551615), -9007199254740992.000000000 < toUInt64(18446744073709551615), -9007199254740992.000000000 <= toUInt64(18446744073709551615), -9007199254740992.000000000 > toUInt64(18446744073709551615), -9007199254740992.000000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '-9007199254740992.000000000', 18446744073709551615 = -9007199254740992.000000000, 18446744073709551615 != -9007199254740992.000000000, 18446744073709551615 < -9007199254740992.000000000, 18446744073709551615 <= -9007199254740992.000000000, 18446744073709551615 > -9007199254740992.000000000, 18446744073709551615 >= -9007199254740992.000000000, -9007199254740992.000000000 = 18446744073709551615, -9007199254740992.000000000 != 18446744073709551615, -9007199254740992.000000000 < 18446744073709551615, -9007199254740992.000000000 <= 18446744073709551615, -9007199254740992.000000000 > 18446744073709551615, -9007199254740992.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = -9007199254740992.000000000, toUInt64(18446744073709551615) != -9007199254740992.000000000, toUInt64(18446744073709551615) < -9007199254740992.000000000, toUInt64(18446744073709551615) <= -9007199254740992.000000000, toUInt64(18446744073709551615) > -9007199254740992.000000000, toUInt64(18446744073709551615) >= -9007199254740992.000000000, -9007199254740992.000000000 = toUInt64(18446744073709551615), -9007199254740992.000000000 != toUInt64(18446744073709551615), -9007199254740992.000000000 < 
toUInt64(18446744073709551615), -9007199254740992.000000000 <= toUInt64(18446744073709551615), -9007199254740992.000000000 > toUInt64(18446744073709551615), -9007199254740992.000000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '-9007199254740994.000000000', 18446744073709551615 = -9007199254740994.000000000, 18446744073709551615 != -9007199254740994.000000000, 18446744073709551615 < -9007199254740994.000000000, 18446744073709551615 <= -9007199254740994.000000000, 18446744073709551615 > -9007199254740994.000000000, 18446744073709551615 >= -9007199254740994.000000000, -9007199254740994.000000000 = 18446744073709551615, -9007199254740994.000000000 != 18446744073709551615, -9007199254740994.000000000 < 18446744073709551615, -9007199254740994.000000000 <= 18446744073709551615, -9007199254740994.000000000 > 18446744073709551615, -9007199254740994.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = -9007199254740994.000000000, toUInt64(18446744073709551615) != -9007199254740994.000000000, toUInt64(18446744073709551615) < -9007199254740994.000000000, toUInt64(18446744073709551615) <= -9007199254740994.000000000, toUInt64(18446744073709551615) > -9007199254740994.000000000, toUInt64(18446744073709551615) >= -9007199254740994.000000000, -9007199254740994.000000000 = toUInt64(18446744073709551615), -9007199254740994.000000000 != toUInt64(18446744073709551615), -9007199254740994.000000000 < toUInt64(18446744073709551615), -9007199254740994.000000000 <= toUInt64(18446744073709551615), -9007199254740994.000000000 > toUInt64(18446744073709551615), -9007199254740994.000000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '104.000000000', 18446744073709551615 = 104.000000000, 18446744073709551615 != 104.000000000, 18446744073709551615 < 104.000000000, 18446744073709551615 <= 104.000000000, 18446744073709551615 > 104.000000000, 18446744073709551615 >= 104.000000000, 104.000000000 = 18446744073709551615, 104.000000000 != 18446744073709551615, 104.000000000 < 18446744073709551615, 104.000000000 <= 18446744073709551615, 104.000000000 > 18446744073709551615, 104.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = 104.000000000, toUInt64(18446744073709551615) != 104.000000000, toUInt64(18446744073709551615) < 104.000000000, toUInt64(18446744073709551615) <= 104.000000000, toUInt64(18446744073709551615) > 104.000000000, toUInt64(18446744073709551615) >= 104.000000000, 104.000000000 = toUInt64(18446744073709551615), 104.000000000 != toUInt64(18446744073709551615), 104.000000000 < toUInt64(18446744073709551615), 104.000000000 <= toUInt64(18446744073709551615), 104.000000000 > toUInt64(18446744073709551615), 104.000000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '-4503599627370496.000000000', 18446744073709551615 = -4503599627370496.000000000, 18446744073709551615 != -4503599627370496.000000000, 18446744073709551615 < -4503599627370496.000000000, 18446744073709551615 <= -4503599627370496.000000000, 18446744073709551615 > -4503599627370496.000000000, 18446744073709551615 >= -4503599627370496.000000000, -4503599627370496.000000000 = 18446744073709551615, -4503599627370496.000000000 != 18446744073709551615, -4503599627370496.000000000 < 18446744073709551615, -4503599627370496.000000000 <= 18446744073709551615, -4503599627370496.000000000 > 18446744073709551615, -4503599627370496.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = -4503599627370496.000000000, toUInt64(18446744073709551615) != 
-4503599627370496.000000000, toUInt64(18446744073709551615) < -4503599627370496.000000000, toUInt64(18446744073709551615) <= -4503599627370496.000000000, toUInt64(18446744073709551615) > -4503599627370496.000000000, toUInt64(18446744073709551615) >= -4503599627370496.000000000, -4503599627370496.000000000 = toUInt64(18446744073709551615), -4503599627370496.000000000 != toUInt64(18446744073709551615), -4503599627370496.000000000 < toUInt64(18446744073709551615), -4503599627370496.000000000 <= toUInt64(18446744073709551615), -4503599627370496.000000000 > toUInt64(18446744073709551615), -4503599627370496.000000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '-0.500000000', 18446744073709551615 = -0.500000000, 18446744073709551615 != -0.500000000, 18446744073709551615 < -0.500000000, 18446744073709551615 <= -0.500000000, 18446744073709551615 > -0.500000000, 18446744073709551615 >= -0.500000000, -0.500000000 = 18446744073709551615, -0.500000000 != 18446744073709551615, -0.500000000 < 18446744073709551615, -0.500000000 <= 18446744073709551615, -0.500000000 > 18446744073709551615, -0.500000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = -0.500000000, toUInt64(18446744073709551615) != -0.500000000, toUInt64(18446744073709551615) < -0.500000000, toUInt64(18446744073709551615) <= -0.500000000, toUInt64(18446744073709551615) > -0.500000000, toUInt64(18446744073709551615) >= -0.500000000, -0.500000000 = toUInt64(18446744073709551615), -0.500000000 != toUInt64(18446744073709551615), -0.500000000 < toUInt64(18446744073709551615), -0.500000000 <= toUInt64(18446744073709551615), -0.500000000 > toUInt64(18446744073709551615), -0.500000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '0.500000000', 18446744073709551615 = 0.500000000, 18446744073709551615 != 0.500000000, 18446744073709551615 < 0.500000000, 18446744073709551615 <= 0.500000000, 18446744073709551615 > 0.500000000, 18446744073709551615 >= 0.500000000, 0.500000000 = 18446744073709551615, 0.500000000 != 18446744073709551615, 0.500000000 < 18446744073709551615, 0.500000000 <= 18446744073709551615, 0.500000000 > 18446744073709551615, 0.500000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = 0.500000000, toUInt64(18446744073709551615) != 0.500000000, toUInt64(18446744073709551615) < 0.500000000, toUInt64(18446744073709551615) <= 0.500000000, toUInt64(18446744073709551615) > 0.500000000, toUInt64(18446744073709551615) >= 0.500000000, 0.500000000 = toUInt64(18446744073709551615), 0.500000000 != toUInt64(18446744073709551615), 0.500000000 < toUInt64(18446744073709551615), 0.500000000 <= toUInt64(18446744073709551615), 0.500000000 > toUInt64(18446744073709551615), 0.500000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '-1.500000000', 18446744073709551615 = -1.500000000, 18446744073709551615 != -1.500000000, 18446744073709551615 < -1.500000000, 18446744073709551615 <= -1.500000000, 18446744073709551615 > -1.500000000, 18446744073709551615 >= -1.500000000, -1.500000000 = 18446744073709551615, -1.500000000 != 18446744073709551615, -1.500000000 < 18446744073709551615, -1.500000000 <= 18446744073709551615, -1.500000000 > 18446744073709551615, -1.500000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = -1.500000000, toUInt64(18446744073709551615) != -1.500000000, toUInt64(18446744073709551615) < -1.500000000, toUInt64(18446744073709551615) <= -1.500000000, toUInt64(18446744073709551615) > -1.500000000, toUInt64(18446744073709551615) >= -1.500000000, 
-1.500000000 = toUInt64(18446744073709551615), -1.500000000 != toUInt64(18446744073709551615), -1.500000000 < toUInt64(18446744073709551615), -1.500000000 <= toUInt64(18446744073709551615), -1.500000000 > toUInt64(18446744073709551615), -1.500000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '1.500000000', 18446744073709551615 = 1.500000000, 18446744073709551615 != 1.500000000, 18446744073709551615 < 1.500000000, 18446744073709551615 <= 1.500000000, 18446744073709551615 > 1.500000000, 18446744073709551615 >= 1.500000000, 1.500000000 = 18446744073709551615, 1.500000000 != 18446744073709551615, 1.500000000 < 18446744073709551615, 1.500000000 <= 18446744073709551615, 1.500000000 > 18446744073709551615, 1.500000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = 1.500000000, toUInt64(18446744073709551615) != 1.500000000, toUInt64(18446744073709551615) < 1.500000000, toUInt64(18446744073709551615) <= 1.500000000, toUInt64(18446744073709551615) > 1.500000000, toUInt64(18446744073709551615) >= 1.500000000, 1.500000000 = toUInt64(18446744073709551615), 1.500000000 != toUInt64(18446744073709551615), 1.500000000 < toUInt64(18446744073709551615), 1.500000000 <= toUInt64(18446744073709551615), 1.500000000 > toUInt64(18446744073709551615), 1.500000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '9007199254740992.000000000', 18446744073709551615 = 9007199254740992.000000000, 18446744073709551615 != 9007199254740992.000000000, 18446744073709551615 < 9007199254740992.000000000, 18446744073709551615 <= 9007199254740992.000000000, 18446744073709551615 > 9007199254740992.000000000, 18446744073709551615 >= 9007199254740992.000000000, 9007199254740992.000000000 = 18446744073709551615, 9007199254740992.000000000 != 18446744073709551615, 9007199254740992.000000000 < 18446744073709551615, 9007199254740992.000000000 <= 18446744073709551615, 9007199254740992.000000000 > 18446744073709551615, 9007199254740992.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = 9007199254740992.000000000, toUInt64(18446744073709551615) != 9007199254740992.000000000, toUInt64(18446744073709551615) < 9007199254740992.000000000, toUInt64(18446744073709551615) <= 9007199254740992.000000000, toUInt64(18446744073709551615) > 9007199254740992.000000000, toUInt64(18446744073709551615) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt64(18446744073709551615), 9007199254740992.000000000 != toUInt64(18446744073709551615), 9007199254740992.000000000 < toUInt64(18446744073709551615), 9007199254740992.000000000 <= toUInt64(18446744073709551615), 9007199254740992.000000000 > toUInt64(18446744073709551615), 9007199254740992.000000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '2251799813685247.500000000', 18446744073709551615 = 2251799813685247.500000000, 18446744073709551615 != 2251799813685247.500000000, 18446744073709551615 < 2251799813685247.500000000, 18446744073709551615 <= 2251799813685247.500000000, 18446744073709551615 > 2251799813685247.500000000, 18446744073709551615 >= 2251799813685247.500000000, 2251799813685247.500000000 = 18446744073709551615, 2251799813685247.500000000 != 18446744073709551615, 2251799813685247.500000000 < 18446744073709551615, 2251799813685247.500000000 <= 18446744073709551615, 2251799813685247.500000000 > 18446744073709551615, 2251799813685247.500000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = 2251799813685247.500000000, toUInt64(18446744073709551615) != 2251799813685247.500000000, 
toUInt64(18446744073709551615) < 2251799813685247.500000000, toUInt64(18446744073709551615) <= 2251799813685247.500000000, toUInt64(18446744073709551615) > 2251799813685247.500000000, toUInt64(18446744073709551615) >= 2251799813685247.500000000, 2251799813685247.500000000 = toUInt64(18446744073709551615), 2251799813685247.500000000 != toUInt64(18446744073709551615), 2251799813685247.500000000 < toUInt64(18446744073709551615), 2251799813685247.500000000 <= toUInt64(18446744073709551615), 2251799813685247.500000000 > toUInt64(18446744073709551615), 2251799813685247.500000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '2251799813685248.500000000', 18446744073709551615 = 2251799813685248.500000000, 18446744073709551615 != 2251799813685248.500000000, 18446744073709551615 < 2251799813685248.500000000, 18446744073709551615 <= 2251799813685248.500000000, 18446744073709551615 > 2251799813685248.500000000, 18446744073709551615 >= 2251799813685248.500000000, 2251799813685248.500000000 = 18446744073709551615, 2251799813685248.500000000 != 18446744073709551615, 2251799813685248.500000000 < 18446744073709551615, 2251799813685248.500000000 <= 18446744073709551615, 2251799813685248.500000000 > 18446744073709551615, 2251799813685248.500000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = 2251799813685248.500000000, toUInt64(18446744073709551615) != 2251799813685248.500000000, toUInt64(18446744073709551615) < 2251799813685248.500000000, toUInt64(18446744073709551615) <= 2251799813685248.500000000, toUInt64(18446744073709551615) > 2251799813685248.500000000, toUInt64(18446744073709551615) >= 2251799813685248.500000000, 2251799813685248.500000000 = toUInt64(18446744073709551615), 2251799813685248.500000000 != toUInt64(18446744073709551615), 2251799813685248.500000000 < toUInt64(18446744073709551615), 2251799813685248.500000000 <= toUInt64(18446744073709551615), 2251799813685248.500000000 > toUInt64(18446744073709551615), 2251799813685248.500000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '1152921504606846976.000000000', 18446744073709551615 = 1152921504606846976.000000000, 18446744073709551615 != 1152921504606846976.000000000, 18446744073709551615 < 1152921504606846976.000000000, 18446744073709551615 <= 1152921504606846976.000000000, 18446744073709551615 > 1152921504606846976.000000000, 18446744073709551615 >= 1152921504606846976.000000000, 1152921504606846976.000000000 = 18446744073709551615, 1152921504606846976.000000000 != 18446744073709551615, 1152921504606846976.000000000 < 18446744073709551615, 1152921504606846976.000000000 <= 18446744073709551615, 1152921504606846976.000000000 > 18446744073709551615, 1152921504606846976.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = 1152921504606846976.000000000, toUInt64(18446744073709551615) != 1152921504606846976.000000000, toUInt64(18446744073709551615) < 1152921504606846976.000000000, toUInt64(18446744073709551615) <= 1152921504606846976.000000000, toUInt64(18446744073709551615) > 1152921504606846976.000000000, toUInt64(18446744073709551615) >= 1152921504606846976.000000000, 1152921504606846976.000000000 = toUInt64(18446744073709551615), 1152921504606846976.000000000 != toUInt64(18446744073709551615), 1152921504606846976.000000000 < toUInt64(18446744073709551615), 1152921504606846976.000000000 <= toUInt64(18446744073709551615), 1152921504606846976.000000000 > toUInt64(18446744073709551615), 1152921504606846976.000000000 >= toUInt64(18446744073709551615) ; +SELECT 
'18446744073709551615', '-1152921504606846976.000000000', 18446744073709551615 = -1152921504606846976.000000000, 18446744073709551615 != -1152921504606846976.000000000, 18446744073709551615 < -1152921504606846976.000000000, 18446744073709551615 <= -1152921504606846976.000000000, 18446744073709551615 > -1152921504606846976.000000000, 18446744073709551615 >= -1152921504606846976.000000000, -1152921504606846976.000000000 = 18446744073709551615, -1152921504606846976.000000000 != 18446744073709551615, -1152921504606846976.000000000 < 18446744073709551615, -1152921504606846976.000000000 <= 18446744073709551615, -1152921504606846976.000000000 > 18446744073709551615, -1152921504606846976.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = -1152921504606846976.000000000, toUInt64(18446744073709551615) != -1152921504606846976.000000000, toUInt64(18446744073709551615) < -1152921504606846976.000000000, toUInt64(18446744073709551615) <= -1152921504606846976.000000000, toUInt64(18446744073709551615) > -1152921504606846976.000000000, toUInt64(18446744073709551615) >= -1152921504606846976.000000000, -1152921504606846976.000000000 = toUInt64(18446744073709551615), -1152921504606846976.000000000 != toUInt64(18446744073709551615), -1152921504606846976.000000000 < toUInt64(18446744073709551615), -1152921504606846976.000000000 <= toUInt64(18446744073709551615), -1152921504606846976.000000000 > toUInt64(18446744073709551615), -1152921504606846976.000000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '-9223372036854786048.000000000', 18446744073709551615 = -9223372036854786048.000000000, 18446744073709551615 != -9223372036854786048.000000000, 18446744073709551615 < -9223372036854786048.000000000, 18446744073709551615 <= -9223372036854786048.000000000, 18446744073709551615 > -9223372036854786048.000000000, 18446744073709551615 >= -9223372036854786048.000000000, -9223372036854786048.000000000 = 18446744073709551615, -9223372036854786048.000000000 != 18446744073709551615, -9223372036854786048.000000000 < 18446744073709551615, -9223372036854786048.000000000 <= 18446744073709551615, -9223372036854786048.000000000 > 18446744073709551615, -9223372036854786048.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = -9223372036854786048.000000000, toUInt64(18446744073709551615) != -9223372036854786048.000000000, toUInt64(18446744073709551615) < -9223372036854786048.000000000, toUInt64(18446744073709551615) <= -9223372036854786048.000000000, toUInt64(18446744073709551615) > -9223372036854786048.000000000, toUInt64(18446744073709551615) >= -9223372036854786048.000000000, -9223372036854786048.000000000 = toUInt64(18446744073709551615), -9223372036854786048.000000000 != toUInt64(18446744073709551615), -9223372036854786048.000000000 < toUInt64(18446744073709551615), -9223372036854786048.000000000 <= toUInt64(18446744073709551615), -9223372036854786048.000000000 > toUInt64(18446744073709551615), -9223372036854786048.000000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '9223372036854786048.000000000', 18446744073709551615 = 9223372036854786048.000000000, 18446744073709551615 != 9223372036854786048.000000000, 18446744073709551615 < 9223372036854786048.000000000, 18446744073709551615 <= 9223372036854786048.000000000, 18446744073709551615 > 9223372036854786048.000000000, 18446744073709551615 >= 9223372036854786048.000000000, 9223372036854786048.000000000 = 18446744073709551615, 9223372036854786048.000000000 != 18446744073709551615, 
9223372036854786048.000000000 < 18446744073709551615, 9223372036854786048.000000000 <= 18446744073709551615, 9223372036854786048.000000000 > 18446744073709551615, 9223372036854786048.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = 9223372036854786048.000000000, toUInt64(18446744073709551615) != 9223372036854786048.000000000, toUInt64(18446744073709551615) < 9223372036854786048.000000000, toUInt64(18446744073709551615) <= 9223372036854786048.000000000, toUInt64(18446744073709551615) > 9223372036854786048.000000000, toUInt64(18446744073709551615) >= 9223372036854786048.000000000, 9223372036854786048.000000000 = toUInt64(18446744073709551615), 9223372036854786048.000000000 != toUInt64(18446744073709551615), 9223372036854786048.000000000 < toUInt64(18446744073709551615), 9223372036854786048.000000000 <= toUInt64(18446744073709551615), 9223372036854786048.000000000 > toUInt64(18446744073709551615), 9223372036854786048.000000000 >= toUInt64(18446744073709551615) ; +SELECT '9223372036854775808', '0.000000000', 9223372036854775808 = 0.000000000, 9223372036854775808 != 0.000000000, 9223372036854775808 < 0.000000000, 9223372036854775808 <= 0.000000000, 9223372036854775808 > 0.000000000, 9223372036854775808 >= 0.000000000, 0.000000000 = 9223372036854775808, 0.000000000 != 9223372036854775808, 0.000000000 < 9223372036854775808, 0.000000000 <= 9223372036854775808, 0.000000000 > 9223372036854775808, 0.000000000 >= 9223372036854775808 , toUInt64(9223372036854775808) = 0.000000000, toUInt64(9223372036854775808) != 0.000000000, toUInt64(9223372036854775808) < 0.000000000, toUInt64(9223372036854775808) <= 0.000000000, toUInt64(9223372036854775808) > 0.000000000, toUInt64(9223372036854775808) >= 0.000000000, 0.000000000 = toUInt64(9223372036854775808), 0.000000000 != toUInt64(9223372036854775808), 0.000000000 < toUInt64(9223372036854775808), 0.000000000 <= toUInt64(9223372036854775808), 0.000000000 > toUInt64(9223372036854775808), 0.000000000 >= toUInt64(9223372036854775808) ; +SELECT '9223372036854775808', '-1.000000000', 9223372036854775808 = -1.000000000, 9223372036854775808 != -1.000000000, 9223372036854775808 < -1.000000000, 9223372036854775808 <= -1.000000000, 9223372036854775808 > -1.000000000, 9223372036854775808 >= -1.000000000, -1.000000000 = 9223372036854775808, -1.000000000 != 9223372036854775808, -1.000000000 < 9223372036854775808, -1.000000000 <= 9223372036854775808, -1.000000000 > 9223372036854775808, -1.000000000 >= 9223372036854775808 , toUInt64(9223372036854775808) = -1.000000000, toUInt64(9223372036854775808) != -1.000000000, toUInt64(9223372036854775808) < -1.000000000, toUInt64(9223372036854775808) <= -1.000000000, toUInt64(9223372036854775808) > -1.000000000, toUInt64(9223372036854775808) >= -1.000000000, -1.000000000 = toUInt64(9223372036854775808), -1.000000000 != toUInt64(9223372036854775808), -1.000000000 < toUInt64(9223372036854775808), -1.000000000 <= toUInt64(9223372036854775808), -1.000000000 > toUInt64(9223372036854775808), -1.000000000 >= toUInt64(9223372036854775808) ; +SELECT '9223372036854775808', '1.000000000', 9223372036854775808 = 1.000000000, 9223372036854775808 != 1.000000000, 9223372036854775808 < 1.000000000, 9223372036854775808 <= 1.000000000, 9223372036854775808 > 1.000000000, 9223372036854775808 >= 1.000000000, 1.000000000 = 9223372036854775808, 1.000000000 != 9223372036854775808, 1.000000000 < 9223372036854775808, 1.000000000 <= 9223372036854775808, 1.000000000 > 9223372036854775808, 1.000000000 >= 9223372036854775808 , 
toUInt64(9223372036854775808) = 1.000000000, toUInt64(9223372036854775808) != 1.000000000, toUInt64(9223372036854775808) < 1.000000000, toUInt64(9223372036854775808) <= 1.000000000, toUInt64(9223372036854775808) > 1.000000000, toUInt64(9223372036854775808) >= 1.000000000, 1.000000000 = toUInt64(9223372036854775808), 1.000000000 != toUInt64(9223372036854775808), 1.000000000 < toUInt64(9223372036854775808), 1.000000000 <= toUInt64(9223372036854775808), 1.000000000 > toUInt64(9223372036854775808), 1.000000000 >= toUInt64(9223372036854775808) ; +SELECT '9223372036854775808', '18446744073709551616.000000000', 9223372036854775808 = 18446744073709551616.000000000, 9223372036854775808 != 18446744073709551616.000000000, 9223372036854775808 < 18446744073709551616.000000000, 9223372036854775808 <= 18446744073709551616.000000000, 9223372036854775808 > 18446744073709551616.000000000, 9223372036854775808 >= 18446744073709551616.000000000, 18446744073709551616.000000000 = 9223372036854775808, 18446744073709551616.000000000 != 9223372036854775808, 18446744073709551616.000000000 < 9223372036854775808, 18446744073709551616.000000000 <= 9223372036854775808, 18446744073709551616.000000000 > 9223372036854775808, 18446744073709551616.000000000 >= 9223372036854775808 , toUInt64(9223372036854775808) = 18446744073709551616.000000000, toUInt64(9223372036854775808) != 18446744073709551616.000000000, toUInt64(9223372036854775808) < 18446744073709551616.000000000, toUInt64(9223372036854775808) <= 18446744073709551616.000000000, toUInt64(9223372036854775808) > 18446744073709551616.000000000, toUInt64(9223372036854775808) >= 18446744073709551616.000000000, 18446744073709551616.000000000 = toUInt64(9223372036854775808), 18446744073709551616.000000000 != toUInt64(9223372036854775808), 18446744073709551616.000000000 < toUInt64(9223372036854775808), 18446744073709551616.000000000 <= toUInt64(9223372036854775808), 18446744073709551616.000000000 > toUInt64(9223372036854775808), 18446744073709551616.000000000 >= toUInt64(9223372036854775808) ; diff --git a/tests/queries/0_stateless/00416_pocopatch_progress_in_http_headers.sh b/tests/queries/0_stateless/00416_pocopatch_progress_in_http_headers.sh index 5d9cd12e4bf..6e9814cbca8 100755 --- a/tests/queries/0_stateless/00416_pocopatch_progress_in_http_headers.sh +++ b/tests/queries/0_stateless/00416_pocopatch_progress_in_http_headers.sh @@ -7,7 +7,7 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) ${CLICKHOUSE_CURL} -vsS "${CLICKHOUSE_URL}&max_block_size=5&send_progress_in_http_headers=1&http_headers_progress_interval_ms=0" -d 'SELECT max(number) FROM numbers(10)' 2>&1 | grep -E 'Content-Encoding|X-ClickHouse-Progress|^[0-9]' # This test will fail with external poco (progress not supported) -${CLICKHOUSE_CURL} -vsS "${CLICKHOUSE_URL}&max_block_size=1&send_progress_in_http_headers=1&http_headers_progress_interval_ms=0" -d 'SELECT number FROM numbers(10)' 2>&1 | grep -E 'Content-Encoding|X-ClickHouse-Progress|^[0-9]' +${CLICKHOUSE_CURL} -vsS "${CLICKHOUSE_URL}&max_block_size=1&send_progress_in_http_headers=1&http_headers_progress_interval_ms=0&output_format_parallel_formatting=0" -d 'SELECT number FROM numbers(10)' 2>&1 | grep -E 'Content-Encoding|X-ClickHouse-Progress|^[0-9]' ${CLICKHOUSE_CURL} -sS "${CLICKHOUSE_URL}&max_block_size=1&send_progress_in_http_headers=1&http_headers_progress_interval_ms=0&enable_http_compression=1" -H 'Accept-Encoding: gzip' -d 'SELECT number FROM system.numbers LIMIT 10' | gzip -d # 'send_progress_in_http_headers' is false by default diff --git 
a/tests/queries/0_stateless/00417_system_build_options.reference b/tests/queries/0_stateless/00417_system_build_options.reference index 2b2c8f1df33..5decd8ca4ec 100644 --- a/tests/queries/0_stateless/00417_system_build_options.reference +++ b/tests/queries/0_stateless/00417_system_build_options.reference @@ -1,4 +1,3 @@ -BUILD_DATE BUILD_TYPE CXX_COMPILER CXX_FLAGS diff --git a/tests/queries/0_stateless/00417_system_build_options.sh b/tests/queries/0_stateless/00417_system_build_options.sh index bfdfa7d14ce..36c9e9d3885 100755 --- a/tests/queries/0_stateless/00417_system_build_options.sh +++ b/tests/queries/0_stateless/00417_system_build_options.sh @@ -4,4 +4,4 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) # shellcheck source=../shell_config.sh . "$CURDIR"/../shell_config.sh -$CLICKHOUSE_CLIENT --query="SELECT * FROM system.build_options" | perl -lnE 'print $1 if /(BUILD_DATE|BUILD_TYPE|CXX_COMPILER)\s+\S+/ || /(CXX_FLAGS|LINK_FLAGS|TZDATA_VERSION)/'; +$CLICKHOUSE_CLIENT --query="SELECT * FROM system.build_options" | perl -lnE 'print $1 if /(BUILD_TYPE|CXX_COMPILER)\s+\S+/ || /(CXX_FLAGS|LINK_FLAGS|TZDATA_VERSION)/'; diff --git a/tests/queries/0_stateless/00446_clear_column_in_partition_concurrent_zookeeper.sh b/tests/queries/0_stateless/00446_clear_column_in_partition_concurrent_zookeeper.sh index 60de1822318..5c5ecd4564b 100755 --- a/tests/queries/0_stateless/00446_clear_column_in_partition_concurrent_zookeeper.sh +++ b/tests/queries/0_stateless/00446_clear_column_in_partition_concurrent_zookeeper.sh @@ -8,8 +8,8 @@ ch="$CLICKHOUSE_CLIENT --stacktrace -q" $ch "DROP TABLE IF EXISTS clear_column1" $ch "DROP TABLE IF EXISTS clear_column2" -$ch "CREATE TABLE clear_column1 (d Date, i Int64, s String) ENGINE = ReplicatedMergeTree('/clickhouse/test_00446/tables/clear_column_concurrent', '1', d, d, 8192)" -$ch "CREATE TABLE clear_column2 (d Date, i Int64, s String) ENGINE = ReplicatedMergeTree('/clickhouse/test_00446/tables/clear_column_concurrent', '2', d, d, 8192)" +$ch "CREATE TABLE clear_column1 (d Date, i Int64, s String) ENGINE = ReplicatedMergeTree('/clickhouse/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/tables/clear_column_concurrent', '1', d, d, 8192)" +$ch "CREATE TABLE clear_column2 (d Date, i Int64, s String) ENGINE = ReplicatedMergeTree('/clickhouse/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/tables/clear_column_concurrent', '2', d, d, 8192)" $ch "ALTER TABLE clear_column1 CLEAR COLUMN VasyaUnexistingColumn IN PARTITION '200001'" --replication_alter_partitions_sync=2 1>/dev/null 2>/dev/null rc=$? 
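The 00446 hunk above shows a convention applied throughout this patch: hard-coded ZooKeeper paths such as `/clickhouse/test_00446/tables/...` become `/clickhouse/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/tables/...`, so test runs executing in parallel databases get disjoint ZooKeeper subtrees instead of colliding on shared replica metadata. A minimal sketch of the pattern, assuming (as these tests do) that `shell_config.sh` exports `CLICKHOUSE_CLIENT` and `CLICKHOUSE_TEST_ZOOKEEPER_PREFIX`; the table and path names here are illustrative, not taken from the patch:

```bash
#!/usr/bin/env bash
CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
# shellcheck source=../shell_config.sh
. "$CURDIR"/../shell_config.sh

# Two replicas share one ZooKeeper path and differ only in the replica name;
# the prefix isolates concurrent test runs from each other.
$CLICKHOUSE_CLIENT --query="CREATE TABLE t_r1 (d Date, i Int64) ENGINE = ReplicatedMergeTree('/clickhouse/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/tables/t', 'r1') PARTITION BY toYYYYMM(d) ORDER BY i"
$CLICKHOUSE_CLIENT --query="CREATE TABLE t_r2 (d Date, i Int64) ENGINE = ReplicatedMergeTree('/clickhouse/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/tables/t', 'r2') PARTITION BY toYYYYMM(d) ORDER BY i"
```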
diff --git a/tests/queries/0_stateless/00506_union_distributed.reference b/tests/queries/0_stateless/00506_union_distributed.reference index 4a2dcd69dc2..3324c3d5675 100644 --- a/tests/queries/0_stateless/00506_union_distributed.reference +++ b/tests/queries/0_stateless/00506_union_distributed.reference @@ -1,16 +1,16 @@ 3 8 +13 28 23 48 33 68 -13 28 3 8 +13 28 23 48 33 68 -13 28 3 8 +13 28 23 48 33 68 -13 28 3 8 +13 28 23 48 33 68 -13 28 diff --git a/tests/queries/0_stateless/00506_union_distributed.sql b/tests/queries/0_stateless/00506_union_distributed.sql index 3f631b8da56..4c5fd9a1743 100644 --- a/tests/queries/0_stateless/00506_union_distributed.sql +++ b/tests/queries/0_stateless/00506_union_distributed.sql @@ -15,10 +15,10 @@ INSERT INTO union1 VALUES (11,12,13,14,15); INSERT INTO union2 VALUES (21,22,23,24,25); INSERT INTO union3 VALUES (31,32,33,34,35); -select b, sum(c) from ( select a, b, sum(c) as c from union2 where a>1 group by a,b UNION ALL select a, b, sum(c) as c from union2 where b>1 group by a, b ) as a group by b; -select b, sum(c) from ( select a, b, sum(c) as c from union1 where a>1 group by a,b UNION ALL select a, b, sum(c) as c from union2 where b>1 group by a, b ) as a group by b; -select b, sum(c) from ( select a, b, sum(c) as c from union1 where a>1 group by a,b UNION ALL select a, b, sum(c) as c from union1 where b>1 group by a, b ) as a group by b; -select b, sum(c) from ( select a, b, sum(c) as c from union2 where a>1 group by a,b UNION ALL select a, b, sum(c) as c from union3 where b>1 group by a, b ) as a group by b; +select b, sum(c) from ( select a, b, sum(c) as c from union2 where a>1 group by a,b UNION ALL select a, b, sum(c) as c from union2 where b>1 group by a, b order by a, b) as a group by b order by b; +select b, sum(c) from ( select a, b, sum(c) as c from union1 where a>1 group by a,b UNION ALL select a, b, sum(c) as c from union2 where b>1 group by a, b order by a, b) as a group by b order by b; +select b, sum(c) from ( select a, b, sum(c) as c from union1 where a>1 group by a,b UNION ALL select a, b, sum(c) as c from union1 where b>1 group by a, b order by a, b) as a group by b order by b; +select b, sum(c) from ( select a, b, sum(c) as c from union2 where a>1 group by a,b UNION ALL select a, b, sum(c) as c from union3 where b>1 group by a, b order by a, b) as a group by b order by b; DROP TABLE union1; DROP TABLE union2; diff --git a/tests/queries/0_stateless/00536_int_exp.sql b/tests/queries/0_stateless/00536_int_exp.sql index c78a326a3b3..80b88e8f4f8 100644 --- a/tests/queries/0_stateless/00536_int_exp.sql +++ b/tests/queries/0_stateless/00536_int_exp.sql @@ -1 +1 @@ -SELECT exp2(number) AS e2d, intExp2(number) AS e2i, e2d = e2i AS e2eq, exp10(number) AS e10d, intExp10(number) AS e10i, e10d = e10i AS e10eq FROM system.numbers LIMIT 64; +SELECT exp2(number) AS e2d, intExp2(number) AS e2i, toUInt64(e2d) = e2i AS e2eq, exp10(number) AS e10d, intExp10(number) AS e10i, toString(e10d) = toString(e10i) AS e10eq FROM system.numbers LIMIT 64; diff --git a/tests/queries/0_stateless/00539_functions_for_working_with_json.reference b/tests/queries/0_stateless/00539_functions_for_working_with_json.reference index c0399f8ab2e..4d3527722a1 100644 --- a/tests/queries/0_stateless/00539_functions_for_working_with_json.reference +++ b/tests/queries/0_stateless/00539_functions_for_working_with_json.reference @@ -13,3 +13,10 @@ test"string "[" ["]", "2", "3"] {"nested" : [1,2,3]} +-1 +0 +0 +-1 +1 +test_string +test"string diff --git 
a/tests/queries/0_stateless/00539_functions_for_working_with_json.sql b/tests/queries/0_stateless/00539_functions_for_working_with_json.sql index 514b5f2e5ea..31853e92262 100644 --- a/tests/queries/0_stateless/00539_functions_for_working_with_json.sql +++ b/tests/queries/0_stateless/00539_functions_for_working_with_json.sql @@ -15,3 +15,11 @@ SELECT visitParamExtractRaw('{"myparam": "{"}', 'myparam'); SELECT visitParamExtractRaw('{"myparam": "["}', 'myparam'); SELECT visitParamExtractRaw('{"myparam": ["]", "2", "3"], "other":123}', 'myparam'); SELECT visitParamExtractRaw('{"myparam": {"nested" : [1,2,3]}, "other":123}', 'myparam'); + +SELECT simpleJSONExtractInt('{"myparam":-1}', 'myparam'); +SELECT simpleJSONExtractUInt('{"myparam":-1}', 'myparam'); +SELECT simpleJSONExtractFloat('{"myparam":null}', 'myparam'); +SELECT simpleJSONExtractFloat('{"myparam":-1}', 'myparam'); +SELECT simpleJSONExtractBool('{"myparam":true}', 'myparam'); +SELECT simpleJSONExtractString('{"myparam":"test_string"}', 'myparam'); +SELECT simpleJSONExtractString('{"myparam":"test\\"string"}', 'myparam'); diff --git a/tests/queries/0_stateless/00555_hasAll_hasAny.reference b/tests/queries/0_stateless/00555_hasAll_hasAny.reference index b33700bfa02..5608f7b970e 100644 --- a/tests/queries/0_stateless/00555_hasAll_hasAny.reference +++ b/tests/queries/0_stateless/00555_hasAll_hasAny.reference @@ -34,10 +34,6 @@ 1 0 - -0 -0 -0 -0 - 0 1 diff --git a/tests/queries/0_stateless/00555_hasAll_hasAny.sql b/tests/queries/0_stateless/00555_hasAll_hasAny.sql index 9df356dce2e..c8a6c3cecbd 100644 --- a/tests/queries/0_stateless/00555_hasAll_hasAny.sql +++ b/tests/queries/0_stateless/00555_hasAll_hasAny.sql @@ -39,10 +39,10 @@ select hasAny(['a', 'b'], ['a', 'c']); select hasAll(['a', 'b'], ['a', 'c']); select '-'; -select hasAny([1], ['a']); -select hasAll([1], ['a']); -select hasAll([[1, 2], [3, 4]], ['a', 'c']); -select hasAny([[1, 2], [3, 4]], ['a', 'c']); +select hasAny([1], ['a']); -- { serverError 386 } +select hasAll([1], ['a']); -- { serverError 386 } +select hasAll([[1, 2], [3, 4]], ['a', 'c']); -- { serverError 386 } +select hasAny([[1, 2], [3, 4]], ['a', 'c']); -- { serverError 386 } select '-'; select hasAll([[1, 2], [3, 4]], [[1, 2], [3, 5]]); diff --git a/tests/queries/0_stateless/00555_hasSubstr.reference b/tests/queries/0_stateless/00555_hasSubstr.reference index 1051fa28d6c..de97d19c932 100644 --- a/tests/queries/0_stateless/00555_hasSubstr.reference +++ b/tests/queries/0_stateless/00555_hasSubstr.reference @@ -20,8 +20,6 @@ 0 1 - -0 -0 1 1 0 diff --git a/tests/queries/0_stateless/00555_hasSubstr.sql b/tests/queries/0_stateless/00555_hasSubstr.sql index 04c70e4a43b..5f90a69c546 100644 --- a/tests/queries/0_stateless/00555_hasSubstr.sql +++ b/tests/queries/0_stateless/00555_hasSubstr.sql @@ -25,8 +25,8 @@ select hasSubstr(['a', 'b'], ['a', 'c']); select hasSubstr(['a', 'c', 'b'], ['a', 'c']); select '-'; -select hasSubstr([1], ['a']); -select hasSubstr([[1, 2], [3, 4]], ['a', 'c']); +select hasSubstr([1], ['a']); -- { serverError 386 } +select hasSubstr([[1, 2], [3, 4]], ['a', 'c']); -- { serverError 386 } select hasSubstr([[1, 2], [3, 4], [5, 8]], [[3, 4]]); select hasSubstr([[1, 2], [3, 4], [5, 8]], [[3, 4], [5, 8]]); select hasSubstr([[1, 2], [3, 4], [5, 8]], [[1, 2], [5, 8]]); diff --git a/tests/queries/0_stateless/00561_storage_join.reference b/tests/queries/0_stateless/00561_storage_join.reference index 867b945ba1c..2fe12a38360 100644 --- a/tests/queries/0_stateless/00561_storage_join.reference +++ 
b/tests/queries/0_stateless/00561_storage_join.reference @@ -2,3 +2,4 @@ 2 22 92 82 123457 1 11 91 81 123456 2 22 92 82 123457 +11 1 91 81 123456 diff --git a/tests/queries/0_stateless/00561_storage_join.sql b/tests/queries/0_stateless/00561_storage_join.sql index 62ca80d31fe..01e66d0c380 100644 --- a/tests/queries/0_stateless/00561_storage_join.sql +++ b/tests/queries/0_stateless/00561_storage_join.sql @@ -36,5 +36,13 @@ from ( ) js1 SEMI LEFT JOIN joinbug_join using id2; +/* type conversion */ +SELECT * FROM +( + SELECT toUInt32(11) AS id2 +) AS js1 +SEMI LEFT JOIN joinbug_join USING (id2); + + DROP TABLE joinbug; DROP TABLE joinbug_join; diff --git a/tests/queries/0_stateless/00597_push_down_predicate.reference b/tests/queries/0_stateless/00597_push_down_predicate.reference index bd1c4791df4..59313c35b81 100644 --- a/tests/queries/0_stateless/00597_push_down_predicate.reference +++ b/tests/queries/0_stateless/00597_push_down_predicate.reference @@ -585,3 +585,15 @@ SEMI LEFT JOIN ) AS r USING (id) WHERE r.id = 1 2000-01-01 1 test string 1 1 2000-01-01 test string 1 1 +SELECT value + t1.value AS expr +FROM +( + SELECT + value, + t1.value + FROM test_00597 AS t0 + ALL FULL OUTER JOIN test_00597 AS t1 USING (date) + WHERE (value + `t1.value`) < 3 +) +WHERE expr < 3 +2 diff --git a/tests/queries/0_stateless/00597_push_down_predicate.sql b/tests/queries/0_stateless/00597_push_down_predicate.sql index ec306ac6792..2e3357241ad 100644 --- a/tests/queries/0_stateless/00597_push_down_predicate.sql +++ b/tests/queries/0_stateless/00597_push_down_predicate.sql @@ -135,5 +135,9 @@ SELECT * FROM (SELECT * FROM (SELECT * FROM test_00597) AS a ANY LEFT JOIN (SELE EXPLAIN SYNTAX SELECT * FROM (SELECT * FROM test_00597) ANY INNER JOIN (SELECT * FROM (SELECT * FROM test_00597)) as r USING id WHERE r.id = 1; SELECT * FROM (SELECT * FROM test_00597) ANY INNER JOIN (SELECT * FROM (SELECT * FROM test_00597)) as r USING id WHERE r.id = 1; +-- issue 20497 +EXPLAIN SYNTAX SELECT value + t1.value AS expr FROM (SELECT t0.value, t1.value FROM test_00597 AS t0 FULL JOIN test_00597 AS t1 USING date) WHERE expr < 3; +SELECT value + t1.value AS expr FROM (SELECT t0.value, t1.value FROM test_00597 AS t0 FULL JOIN test_00597 AS t1 USING date) WHERE expr < 3; + DROP TABLE IF EXISTS test_00597; DROP TABLE IF EXISTS test_view_00597; diff --git a/tests/queries/0_stateless/00600_replace_running_query.sh b/tests/queries/0_stateless/00600_replace_running_query.sh index be5523e06ea..78ea4daf6bb 100755 --- a/tests/queries/0_stateless/00600_replace_running_query.sh +++ b/tests/queries/0_stateless/00600_replace_running_query.sh @@ -6,6 +6,8 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) # shellcheck source=../shell_config.sh . 
"$CURDIR"/../shell_config.sh +${CLICKHOUSE_CLIENT} -q "drop user if exists u_00600" +${CLICKHOUSE_CLIENT} -q "create user u_00600 settings max_execution_time=60, readonly=1" function wait_for_query_to_start() { @@ -22,7 +24,7 @@ $CLICKHOUSE_CURL -sS "$CLICKHOUSE_URL&query_id=hello&replace_running_query=1" -d # Wait for it to be replaced wait -${CLICKHOUSE_CLIENT_BINARY} --user=readonly --query_id=42 --query='SELECT 2, count() FROM system.numbers' 2>&1 | grep -cF 'was cancelled' & +${CLICKHOUSE_CLIENT_BINARY} --user=u_00600 --query_id=42 --query='SELECT 2, count() FROM system.numbers' 2>&1 | grep -cF 'was cancelled' & wait_for_query_to_start '42' # Trying to run another query with the same query_id @@ -39,3 +41,4 @@ wait_for_query_to_start '42' ${CLICKHOUSE_CLIENT} --query_id=42 --replace_running_query=1 --replace_running_query_max_wait_ms=500 --query='SELECT 43' 2>&1 | grep -F "can't be stopped" > /dev/null wait ${CLICKHOUSE_CLIENT} --query_id=42 --replace_running_query=1 --query='SELECT 44' +${CLICKHOUSE_CLIENT} -q "drop user u_00600" diff --git a/tests/queries/0_stateless/00626_replace_partition_from_table_zookeeper.sh b/tests/queries/0_stateless/00626_replace_partition_from_table_zookeeper.sh index 5aa445858db..443f2856c88 100755 --- a/tests/queries/0_stateless/00626_replace_partition_from_table_zookeeper.sh +++ b/tests/queries/0_stateless/00626_replace_partition_from_table_zookeeper.sh @@ -12,19 +12,22 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) function query_with_retry { - retry=0 + local query="$1" && shift + + local retry=0 until [ $retry -ge 5 ] do - result=$($CLICKHOUSE_CLIENT $2 --query="$1" 2>&1) + local result + result="$($CLICKHOUSE_CLIENT "$@" --query="$query" 2>&1)" if [ "$?" == 0 ]; then echo -n "$result" return else - retry=$(($retry + 1)) + retry=$((retry + 1)) sleep 3 fi done - echo "Query '$1' failed with '$result'" + echo "Query '$query' failed with '$result'" } $CLICKHOUSE_CLIENT --query="DROP TABLE IF EXISTS src;" @@ -32,8 +35,8 @@ $CLICKHOUSE_CLIENT --query="DROP TABLE IF EXISTS dst_r1;" $CLICKHOUSE_CLIENT --query="DROP TABLE IF EXISTS dst_r2;" $CLICKHOUSE_CLIENT --query="CREATE TABLE src (p UInt64, k String, d UInt64) ENGINE = MergeTree PARTITION BY p ORDER BY k;" -$CLICKHOUSE_CLIENT --query="CREATE TABLE dst_r1 (p UInt64, k String, d UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/test_00626/dst_1', '1') PARTITION BY p ORDER BY k SETTINGS old_parts_lifetime=1, cleanup_delay_period=1, cleanup_delay_period_random_add=0;" -$CLICKHOUSE_CLIENT --query="CREATE TABLE dst_r2 (p UInt64, k String, d UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/test_00626/dst_1', '2') PARTITION BY p ORDER BY k SETTINGS old_parts_lifetime=1, cleanup_delay_period=1, cleanup_delay_period_random_add=0;" +$CLICKHOUSE_CLIENT --query="CREATE TABLE dst_r1 (p UInt64, k String, d UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/dst_1', '1') PARTITION BY p ORDER BY k SETTINGS old_parts_lifetime=1, cleanup_delay_period=1, cleanup_delay_period_random_add=0;" +$CLICKHOUSE_CLIENT --query="CREATE TABLE dst_r2 (p UInt64, k String, d UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/dst_1', '2') PARTITION BY p ORDER BY k SETTINGS old_parts_lifetime=1, cleanup_delay_period=1, cleanup_delay_period_random_add=0;" $CLICKHOUSE_CLIENT --query="INSERT INTO src VALUES (0, '0', 1);" $CLICKHOUSE_CLIENT --query="INSERT INTO src VALUES (1, '0', 1);" @@ -139,7 +142,7 @@ $CLICKHOUSE_CLIENT --query="DROP TABLE src;" $CLICKHOUSE_CLIENT 
--query="SELECT count(), sum(d), uniqExact(_part) FROM dst_r1;" $CLICKHOUSE_CLIENT --query="SYSTEM SYNC REPLICA dst_r1;" -query_with_retry "OPTIMIZE TABLE dst_r1 PARTITION 1;" "--replication_alter_partitions_sync=0 --optimize_throw_if_noop=1" +query_with_retry "OPTIMIZE TABLE dst_r1 PARTITION 1;" --replication_alter_partitions_sync=0 --optimize_throw_if_noop=1 $CLICKHOUSE_CLIENT --query="SYSTEM SYNC REPLICA dst_r1;" $CLICKHOUSE_CLIENT --query="SELECT count(), sum(d), uniqExact(_part) FROM dst_r1;" diff --git a/tests/queries/0_stateless/00632_aggregation_window_funnel.reference b/tests/queries/0_stateless/00632_aggregation_window_funnel.reference index 492135567ea..2c68f277bfa 100644 --- a/tests/queries/0_stateless/00632_aggregation_window_funnel.reference +++ b/tests/queries/0_stateless/00632_aggregation_window_funnel.reference @@ -57,3 +57,7 @@ [2, 0] [3, 1] [4, 1] +1 +1 +1 +1 diff --git a/tests/queries/0_stateless/00632_aggregation_window_funnel.sql b/tests/queries/0_stateless/00632_aggregation_window_funnel.sql index 5a1610256ac..aa0dc804238 100644 --- a/tests/queries/0_stateless/00632_aggregation_window_funnel.sql +++ b/tests/queries/0_stateless/00632_aggregation_window_funnel.sql @@ -79,3 +79,13 @@ select u, windowFunnel(86400)(dt, a is null and b is null) as s from funnel_test select u, windowFunnel(86400)(dt, a is null, b = 'b3') as s from funnel_test_non_null group by u order by u format JSONCompactEachRow; select u, windowFunnel(86400, 'strict_order')(dt, a is null, b = 'b3') as s from funnel_test_non_null group by u order by u format JSONCompactEachRow; drop table funnel_test_non_null; + +create table funnel_test_strict_increase (timestamp UInt32, event UInt32) engine=Memory; +insert into funnel_test_strict_increase values (0,1000),(1,1001),(1,1002),(1,1003),(2,1004); + +select 5 = windowFunnel(10000)(timestamp, event = 1000, event = 1001, event = 1002, event = 1003, event = 1004) from funnel_test_strict_increase; +select 2 = windowFunnel(10000, 'strict_increase')(timestamp, event = 1000, event = 1001, event = 1002, event = 1003, event = 1004) from funnel_test_strict_increase; +select 3 = windowFunnel(10000)(timestamp, event = 1004, event = 1004, event = 1004) from funnel_test_strict_increase; +select 1 = windowFunnel(10000, 'strict_increase')(timestamp, event = 1004, event = 1004, event = 1004) from funnel_test_strict_increase; + +drop table funnel_test_strict_increase; diff --git a/tests/queries/0_stateless/00633_materialized_view_and_too_many_parts_zookeeper.sh b/tests/queries/0_stateless/00633_materialized_view_and_too_many_parts_zookeeper.sh index 817da08bfa0..def8e8f4cfe 100755 --- a/tests/queries/0_stateless/00633_materialized_view_and_too_many_parts_zookeeper.sh +++ b/tests/queries/0_stateless/00633_materialized_view_and_too_many_parts_zookeeper.sh @@ -10,10 +10,10 @@ ${CLICKHOUSE_CLIENT} --query "DROP TABLE IF EXISTS a" ${CLICKHOUSE_CLIENT} --query "DROP TABLE IF EXISTS b" ${CLICKHOUSE_CLIENT} --query "DROP TABLE IF EXISTS c" -${CLICKHOUSE_CLIENT} --query "CREATE TABLE root (d UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/test_00633/root', '1') ORDER BY d" -${CLICKHOUSE_CLIENT} --query "CREATE MATERIALIZED VIEW a (d UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/test_00633/a', '1') ORDER BY d AS SELECT * FROM root" -${CLICKHOUSE_CLIENT} --query "CREATE MATERIALIZED VIEW b (d UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/test_00633/b', '1') ORDER BY d SETTINGS parts_to_delay_insert=1, parts_to_throw_insert=1 AS SELECT * FROM root" -${CLICKHOUSE_CLIENT} --query 
"CREATE MATERIALIZED VIEW c (d UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/test_00633/c', '1') ORDER BY d AS SELECT * FROM root" +${CLICKHOUSE_CLIENT} --query "CREATE TABLE root (d UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/root', '1') ORDER BY d" +${CLICKHOUSE_CLIENT} --query "CREATE MATERIALIZED VIEW a (d UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/a', '1') ORDER BY d AS SELECT * FROM root" +${CLICKHOUSE_CLIENT} --query "CREATE MATERIALIZED VIEW b (d UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/b', '1') ORDER BY d SETTINGS parts_to_delay_insert=1, parts_to_throw_insert=1 AS SELECT * FROM root" +${CLICKHOUSE_CLIENT} --query "CREATE MATERIALIZED VIEW c (d UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/c', '1') ORDER BY d AS SELECT * FROM root" ${CLICKHOUSE_CLIENT} --query "INSERT INTO root VALUES (1)"; ${CLICKHOUSE_CLIENT} --query "SELECT _table, d FROM merge('${CLICKHOUSE_DATABASE}', '^[abc]\$') ORDER BY _table" @@ -33,7 +33,7 @@ ${CLICKHOUSE_CLIENT} --query "DROP TABLE c" # Deduplication check for non-replicated root table echo ${CLICKHOUSE_CLIENT} --query "CREATE TABLE root (d UInt64) ENGINE = Null" -${CLICKHOUSE_CLIENT} --query "CREATE MATERIALIZED VIEW d (d UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/test_00633/d', '1') ORDER BY d AS SELECT * FROM root" +${CLICKHOUSE_CLIENT} --query "CREATE MATERIALIZED VIEW d (d UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/d', '1') ORDER BY d AS SELECT * FROM root" ${CLICKHOUSE_CLIENT} --query "INSERT INTO root VALUES (1)"; ${CLICKHOUSE_CLIENT} --query "INSERT INTO root VALUES (1)"; ${CLICKHOUSE_CLIENT} --query "SELECT * FROM d"; diff --git a/tests/queries/0_stateless/00634_performance_introspection_and_logging.sh b/tests/queries/0_stateless/00634_performance_introspection_and_logging.sh index e51e4fea5db..cc5ece15435 100755 --- a/tests/queries/0_stateless/00634_performance_introspection_and_logging.sh +++ b/tests/queries/0_stateless/00634_performance_introspection_and_logging.sh @@ -48,7 +48,7 @@ SELECT threads_realtime >= threads_time_user_system_io, any(length(thread_ids)) >= 1 FROM - (SELECT * FROM system.query_log PREWHERE query='$heavy_cpu_query' WHERE event_date >= today()-1 AND current_database = currentDatabase() AND type=2 ORDER BY event_time DESC LIMIT 1) + (SELECT * FROM system.query_log PREWHERE query='$heavy_cpu_query' WHERE event_date >= today()-2 AND current_database = currentDatabase() AND type=2 ORDER BY event_time DESC LIMIT 1) ARRAY JOIN ProfileEvents.Names AS PN, ProfileEvents.Values AS PV" # Clean diff --git a/tests/queries/0_stateless/00652_replicated_mutations_default_database_zookeeper.sh b/tests/queries/0_stateless/00652_replicated_mutations_default_database_zookeeper.sh index 02f552c250d..58295e17790 100755 --- a/tests/queries/0_stateless/00652_replicated_mutations_default_database_zookeeper.sh +++ b/tests/queries/0_stateless/00652_replicated_mutations_default_database_zookeeper.sh @@ -11,7 +11,7 @@ ${CLICKHOUSE_CLIENT} --multiquery << EOF DROP TABLE IF EXISTS mutations_r1; DROP TABLE IF EXISTS for_subquery; -CREATE TABLE mutations_r1(x UInt32, y UInt32) ENGINE ReplicatedMergeTree('/clickhouse/tables/${CLICKHOUSE_DATABASE}/mutations', 'r1') ORDER BY x; +CREATE TABLE mutations_r1(x UInt32, y UInt32) ENGINE ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/mutations', 'r1') ORDER BY x; INSERT INTO 
mutations_r1 VALUES (123, 1), (234, 2), (345, 3); CREATE TABLE for_subquery(x UInt32) ENGINE TinyLog; diff --git a/tests/queries/0_stateless/00652_replicated_mutations_zookeeper.sh b/tests/queries/0_stateless/00652_replicated_mutations_zookeeper.sh index 08a39c58c3e..3ec6e4e3e90 100755 --- a/tests/queries/0_stateless/00652_replicated_mutations_zookeeper.sh +++ b/tests/queries/0_stateless/00652_replicated_mutations_zookeeper.sh @@ -10,8 +10,8 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) ${CLICKHOUSE_CLIENT} --query="DROP TABLE IF EXISTS mutations_r1" ${CLICKHOUSE_CLIENT} --query="DROP TABLE IF EXISTS mutations_r2" -${CLICKHOUSE_CLIENT} --query="CREATE TABLE mutations_r1(d Date, x UInt32, s String, m MATERIALIZED x + 2) ENGINE ReplicatedMergeTree('/clickhouse/tables/test_00652/mutations', 'r1', d, intDiv(x, 10), 8192)" -${CLICKHOUSE_CLIENT} --query="CREATE TABLE mutations_r2(d Date, x UInt32, s String, m MATERIALIZED x + 2) ENGINE ReplicatedMergeTree('/clickhouse/tables/test_00652/mutations', 'r2', d, intDiv(x, 10), 8192)" +${CLICKHOUSE_CLIENT} --query="CREATE TABLE mutations_r1(d Date, x UInt32, s String, m MATERIALIZED x + 2) ENGINE ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/mutations', 'r1', d, intDiv(x, 10), 8192)" +${CLICKHOUSE_CLIENT} --query="CREATE TABLE mutations_r2(d Date, x UInt32, s String, m MATERIALIZED x + 2) ENGINE ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/mutations', 'r2', d, intDiv(x, 10), 8192)" # Test a mutation on empty table ${CLICKHOUSE_CLIENT} --query="ALTER TABLE mutations_r1 DELETE WHERE x = 1 SETTINGS mutations_sync = 2" @@ -51,11 +51,11 @@ ${CLICKHOUSE_CLIENT} --query="DROP TABLE IF EXISTS mutations_cleaner_r1" ${CLICKHOUSE_CLIENT} --query="DROP TABLE IF EXISTS mutations_cleaner_r2" # Create 2 replicas with finished_mutations_to_keep = 2 -${CLICKHOUSE_CLIENT} --query="CREATE TABLE mutations_cleaner_r1(x UInt32) ENGINE ReplicatedMergeTree('/clickhouse/tables/test_00652/mutations_cleaner', 'r1') ORDER BY x SETTINGS \ +${CLICKHOUSE_CLIENT} --query="CREATE TABLE mutations_cleaner_r1(x UInt32) ENGINE ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/mutations_cleaner', 'r1') ORDER BY x SETTINGS \ finished_mutations_to_keep = 2, cleanup_delay_period = 1, cleanup_delay_period_random_add = 0" -${CLICKHOUSE_CLIENT} --query="CREATE TABLE mutations_cleaner_r2(x UInt32) ENGINE ReplicatedMergeTree('/clickhouse/tables/test_00652/mutations_cleaner', 'r2') ORDER BY x SETTINGS \ +${CLICKHOUSE_CLIENT} --query="CREATE TABLE mutations_cleaner_r2(x UInt32) ENGINE ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/mutations_cleaner', 'r2') ORDER BY x SETTINGS \ finished_mutations_to_keep = 2, cleanup_delay_period = 1, cleanup_delay_period_random_add = 0" diff --git a/tests/queries/0_stateless/00700_decimal_complex_types.reference b/tests/queries/0_stateless/00700_decimal_complex_types.reference index e81dd94513f..9c7c6fefefd 100644 --- a/tests/queries/0_stateless/00700_decimal_complex_types.reference +++ b/tests/queries/0_stateless/00700_decimal_complex_types.reference @@ -39,9 +39,33 @@ Tuple(Decimal(9, 1), Decimal(18, 1), Decimal(38, 1)) Decimal(9, 1) Decimal(18, 1 1 0 1 0 1 0 +1 +1 +1 +1 +1 +1 +1 +1 +1 +1 +1 +1 1 0 2 0 3 0 +1 +1 +1 +1 +1 +1 +1 +1 +1 +1 +1 +1 [0.100,0.200,0.300,0.400,0.500,0.600] Array(Decimal(18, 3)) [0.100,0.200,0.300,0.700,0.800,0.900] Array(Decimal(38, 3)) [0.400,0.500,0.600,0.700,0.800,0.900] Array(Decimal(38, 3)) diff --git 
a/tests/queries/0_stateless/00700_decimal_complex_types.sql b/tests/queries/0_stateless/00700_decimal_complex_types.sql index 2d506b124a2..f4b29e77be9 100644 --- a/tests/queries/0_stateless/00700_decimal_complex_types.sql +++ b/tests/queries/0_stateless/00700_decimal_complex_types.sql @@ -58,35 +58,35 @@ SELECT has(a, toDecimal32(0.1, 3)), has(a, toDecimal32(1.0, 3)) FROM decimal; SELECT has(b, toDecimal64(0.4, 3)), has(b, toDecimal64(1.0, 3)) FROM decimal; SELECT has(c, toDecimal128(0.7, 3)), has(c, toDecimal128(1.0, 3)) FROM decimal; -SELECT has(a, toDecimal32(0.1, 2)) FROM decimal; -- { serverError 43 } -SELECT has(a, toDecimal32(0.1, 4)) FROM decimal; -- { serverError 43 } -SELECT has(a, toDecimal64(0.1, 3)) FROM decimal; -- { serverError 43 } -SELECT has(a, toDecimal128(0.1, 3)) FROM decimal; -- { serverError 43 } -SELECT has(b, toDecimal32(0.4, 3)) FROM decimal; -- { serverError 43 } -SELECT has(b, toDecimal64(0.4, 2)) FROM decimal; -- { serverError 43 } -SELECT has(b, toDecimal64(0.4, 4)) FROM decimal; -- { serverError 43 } -SELECT has(b, toDecimal128(0.4, 3)) FROM decimal; -- { serverError 43 } -SELECT has(c, toDecimal32(0.7, 3)) FROM decimal; -- { serverError 43 } -SELECT has(c, toDecimal64(0.7, 3)) FROM decimal; -- { serverError 43 } -SELECT has(c, toDecimal128(0.7, 2)) FROM decimal; -- { serverError 43 } -SELECT has(c, toDecimal128(0.7, 4)) FROM decimal; -- { serverError 43 } +SELECT has(a, toDecimal32(0.1, 2)) FROM decimal; +SELECT has(a, toDecimal32(0.1, 4)) FROM decimal; +SELECT has(a, toDecimal64(0.1, 3)) FROM decimal; +SELECT has(a, toDecimal128(0.1, 3)) FROM decimal; +SELECT has(b, toDecimal32(0.4, 3)) FROM decimal; +SELECT has(b, toDecimal64(0.4, 2)) FROM decimal; +SELECT has(b, toDecimal64(0.4, 4)) FROM decimal; +SELECT has(b, toDecimal128(0.4, 3)) FROM decimal; +SELECT has(c, toDecimal32(0.7, 3)) FROM decimal; +SELECT has(c, toDecimal64(0.7, 3)) FROM decimal; +SELECT has(c, toDecimal128(0.7, 2)) FROM decimal; +SELECT has(c, toDecimal128(0.7, 4)) FROM decimal; SELECT indexOf(a, toDecimal32(0.1, 3)), indexOf(a, toDecimal32(1.0, 3)) FROM decimal; SELECT indexOf(b, toDecimal64(0.5, 3)), indexOf(b, toDecimal64(1.0, 3)) FROM decimal; SELECT indexOf(c, toDecimal128(0.9, 3)), indexOf(c, toDecimal128(1.0, 3)) FROM decimal; -SELECT indexOf(a, toDecimal32(0.1, 2)) FROM decimal; -- { serverError 43 } -SELECT indexOf(a, toDecimal32(0.1, 4)) FROM decimal; -- { serverError 43 } -SELECT indexOf(a, toDecimal64(0.1, 3)) FROM decimal; -- { serverError 43 } -SELECT indexOf(a, toDecimal128(0.1, 3)) FROM decimal; -- { serverError 43 } -SELECT indexOf(b, toDecimal32(0.4, 3)) FROM decimal; -- { serverError 43 } -SELECT indexOf(b, toDecimal64(0.4, 2)) FROM decimal; -- { serverError 43 } -SELECT indexOf(b, toDecimal64(0.4, 4)) FROM decimal; -- { serverError 43 } -SELECT indexOf(b, toDecimal128(0.4, 3)) FROM decimal; -- { serverError 43 } -SELECT indexOf(c, toDecimal32(0.7, 3)) FROM decimal; -- { serverError 43 } -SELECT indexOf(c, toDecimal64(0.7, 3)) FROM decimal; -- { serverError 43 } -SELECT indexOf(c, toDecimal128(0.7, 2)) FROM decimal; -- { serverError 43 } -SELECT indexOf(c, toDecimal128(0.7, 4)) FROM decimal; -- { serverError 43 } +SELECT indexOf(a, toDecimal32(0.1, 2)) FROM decimal; +SELECT indexOf(a, toDecimal32(0.1, 4)) FROM decimal; +SELECT indexOf(a, toDecimal64(0.1, 3)) FROM decimal; +SELECT indexOf(a, toDecimal128(0.1, 3)) FROM decimal; +SELECT indexOf(b, toDecimal32(0.4, 3)) FROM decimal; +SELECT indexOf(b, toDecimal64(0.4, 2)) FROM decimal; +SELECT indexOf(b, toDecimal64(0.4, 4)) 
FROM decimal; +SELECT indexOf(b, toDecimal128(0.4, 3)) FROM decimal; +SELECT indexOf(c, toDecimal32(0.7, 3)) FROM decimal; +SELECT indexOf(c, toDecimal64(0.7, 3)) FROM decimal; +SELECT indexOf(c, toDecimal128(0.7, 2)) FROM decimal; +SELECT indexOf(c, toDecimal128(0.7, 4)) FROM decimal; SELECT arrayConcat(a, b) AS x, toTypeName(x) FROM decimal; SELECT arrayConcat(a, c) AS x, toTypeName(x) FROM decimal; diff --git a/tests/queries/0_stateless/00715_fetch_merged_or_mutated_part_zookeeper.sh b/tests/queries/0_stateless/00715_fetch_merged_or_mutated_part_zookeeper.sh index 54b6c80f2ac..48833d2643c 100755 --- a/tests/queries/0_stateless/00715_fetch_merged_or_mutated_part_zookeeper.sh +++ b/tests/queries/0_stateless/00715_fetch_merged_or_mutated_part_zookeeper.sh @@ -11,8 +11,8 @@ ${CLICKHOUSE_CLIENT} -n --query=" DROP TABLE IF EXISTS fetches_r1; DROP TABLE IF EXISTS fetches_r2" -${CLICKHOUSE_CLIENT} --query="CREATE TABLE fetches_r1(x UInt32) ENGINE ReplicatedMergeTree('/clickhouse/tables/test_00715/fetches', 'r1') ORDER BY x" -${CLICKHOUSE_CLIENT} --query="CREATE TABLE fetches_r2(x UInt32) ENGINE ReplicatedMergeTree('/clickhouse/tables/test_00715/fetches', 'r2') ORDER BY x \ +${CLICKHOUSE_CLIENT} --query="CREATE TABLE fetches_r1(x UInt32) ENGINE ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/fetches', 'r1') ORDER BY x" +${CLICKHOUSE_CLIENT} --query="CREATE TABLE fetches_r2(x UInt32) ENGINE ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/fetches', 'r2') ORDER BY x \ SETTINGS prefer_fetch_merged_part_time_threshold=0, \ prefer_fetch_merged_part_size_threshold=0" diff --git a/tests/queries/0_stateless/00717_merge_and_distributed.sql b/tests/queries/0_stateless/00717_merge_and_distributed.sql index f0d34b5165f..35dad18937a 100644 --- a/tests/queries/0_stateless/00717_merge_and_distributed.sql +++ b/tests/queries/0_stateless/00717_merge_and_distributed.sql @@ -18,9 +18,9 @@ SELECT * FROM merge(currentDatabase(), 'test_local_1'); SELECT *, _table FROM merge(currentDatabase(), 'test_local_1') ORDER BY _table; SELECT sum(value), _table FROM merge(currentDatabase(), 'test_local_1') GROUP BY _table ORDER BY _table; SELECT * FROM merge(currentDatabase(), 'test_local_1') WHERE _table = 'test_local_1'; -SELECT * FROM merge(currentDatabase(), 'test_local_1') PREWHERE _table = 'test_local_1'; -- { serverError 16 } +SELECT * FROM merge(currentDatabase(), 'test_local_1') PREWHERE _table = 'test_local_1'; -- { serverError 10 } SELECT * FROM merge(currentDatabase(), 'test_local_1') WHERE _table in ('test_local_1', 'test_local_2'); -SELECT * FROM merge(currentDatabase(), 'test_local_1') PREWHERE _table in ('test_local_1', 'test_local_2'); -- { serverError 16 } +SELECT * FROM merge(currentDatabase(), 'test_local_1') PREWHERE _table in ('test_local_1', 'test_local_2'); -- { serverError 10 } SELECT '--------------Single Distributed------------'; SELECT * FROM merge(currentDatabase(), 'test_distributed_1'); @@ -36,9 +36,9 @@ SELECT * FROM merge(currentDatabase(), 'test_local_1|test_local_2') ORDER BY _ta SELECT *, _table FROM merge(currentDatabase(), 'test_local_1|test_local_2') ORDER BY _table; SELECT sum(value), _table FROM merge(currentDatabase(), 'test_local_1|test_local_2') GROUP BY _table ORDER BY _table; SELECT * FROM merge(currentDatabase(), 'test_local_1|test_local_2') WHERE _table = 'test_local_1'; -SELECT * FROM merge(currentDatabase(), 'test_local_1|test_local_2') PREWHERE _table = 'test_local_1'; -- { serverError 16 } +SELECT * FROM 
merge(currentDatabase(), 'test_local_1|test_local_2') PREWHERE _table = 'test_local_1'; -- { serverError 10 } SELECT * FROM merge(currentDatabase(), 'test_local_1|test_local_2') WHERE _table in ('test_local_1', 'test_local_2') ORDER BY value; -SELECT * FROM merge(currentDatabase(), 'test_local_1|test_local_2') PREWHERE _table in ('test_local_1', 'test_local_2') ORDER BY value; -- { serverError 16 } +SELECT * FROM merge(currentDatabase(), 'test_local_1|test_local_2') PREWHERE _table in ('test_local_1', 'test_local_2') ORDER BY value; -- { serverError 10 } SELECT '--------------Local Merge Distributed------------'; SELECT * FROM merge(currentDatabase(), 'test_local_1|test_distributed_2') ORDER BY _table; diff --git a/tests/queries/0_stateless/00725_quantiles_shard.reference b/tests/queries/0_stateless/00725_quantiles_shard.reference index 6974bee9735..ec404bb89a1 100644 --- a/tests/queries/0_stateless/00725_quantiles_shard.reference +++ b/tests/queries/0_stateless/00725_quantiles_shard.reference @@ -1,4 +1,4 @@ [4.5,8.100000000000001] [5,9] -[4.5,8.5] +[4,8] [4.5,8.100000000000001] diff --git a/tests/queries/0_stateless/00800_versatile_storage_join.reference b/tests/queries/0_stateless/00800_versatile_storage_join.reference index f1d3f98e32a..0a143f8bc12 100644 --- a/tests/queries/0_stateless/00800_versatile_storage_join.reference +++ b/tests/queries/0_stateless/00800_versatile_storage_join.reference @@ -1,5 +1,4 @@ --------read-------- -def [1,2] 2 abc [0] 1 def [1,2] 2 abc [0] 1 @@ -7,6 +6,7 @@ def [1,2] 2 abc [0] 1 def [1,2] 2 abc [0] 1 +def [1,2] 2 --------joinGet-------- abc diff --git a/tests/queries/0_stateless/00800_versatile_storage_join.sql b/tests/queries/0_stateless/00800_versatile_storage_join.sql index c1e325ce9aa..b0ec6f69f93 100644 --- a/tests/queries/0_stateless/00800_versatile_storage_join.sql +++ b/tests/queries/0_stateless/00800_versatile_storage_join.sql @@ -22,10 +22,10 @@ INSERT INTO join_all_left VALUES ('abc', [0], 1), ('def', [1, 2], 2); -- read from StorageJoin SELECT '--------read--------'; -SELECT * from join_any_inner; -SELECT * from join_any_left; -SELECT * from join_all_inner; -SELECT * from join_all_left; +SELECT * from join_any_inner ORDER BY k; +SELECT * from join_any_left ORDER BY k; +SELECT * from join_all_inner ORDER BY k; +SELECT * from join_all_left ORDER BY k; -- create StorageJoin tables with customized settings diff --git a/tests/queries/0_stateless/00804_rollup_with_having.reference b/tests/queries/0_stateless/00804_rollup_with_having.reference index 62de36a36ba..0f708e8d900 100644 --- a/tests/queries/0_stateless/00804_rollup_with_having.reference +++ b/tests/queries/0_stateless/00804_rollup_with_having.reference @@ -1,4 +1,4 @@ -a \N 1 a b 1 a \N 2 +a \N 1 a b 1 diff --git a/tests/queries/0_stateless/00804_rollup_with_having.sql b/tests/queries/0_stateless/00804_rollup_with_having.sql index cddaa8b6451..29b9ae19041 100644 --- a/tests/queries/0_stateless/00804_rollup_with_having.sql +++ b/tests/queries/0_stateless/00804_rollup_with_having.sql @@ -8,7 +8,7 @@ INSERT INTO rollup_having VALUES (NULL, NULL); INSERT INTO rollup_having VALUES ('a', NULL); INSERT INTO rollup_having VALUES ('a', 'b'); -SELECT a, b, count(*) FROM rollup_having GROUP BY a, b WITH ROLLUP HAVING a IS NOT NULL; -SELECT a, b, count(*) FROM rollup_having GROUP BY a, b WITH ROLLUP HAVING a IS NOT NULL and b IS NOT NULL; +SELECT a, b, count(*) FROM rollup_having GROUP BY a, b WITH ROLLUP HAVING a IS NOT NULL ORDER BY a, b; +SELECT a, b, count(*) FROM rollup_having GROUP BY a, b 
WITH ROLLUP HAVING a IS NOT NULL and b IS NOT NULL ORDER BY a, b; DROP TABLE rollup_having; diff --git a/tests/queries/0_stateless/00814_replicated_minimalistic_part_header_zookeeper.sql b/tests/queries/0_stateless/00814_replicated_minimalistic_part_header_zookeeper.sql index 0fd760d73d5..63897e225ce 100644 --- a/tests/queries/0_stateless/00814_replicated_minimalistic_part_header_zookeeper.sql +++ b/tests/queries/0_stateless/00814_replicated_minimalistic_part_header_zookeeper.sql @@ -4,13 +4,13 @@ DROP TABLE IF EXISTS part_header_r2; SET replication_alter_partitions_sync = 2; CREATE TABLE part_header_r1(x UInt32, y UInt32) - ENGINE ReplicatedMergeTree('/clickhouse/tables/test_00814/part_header', '1') ORDER BY x + ENGINE ReplicatedMergeTree('/clickhouse/tables/'||currentDatabase()||'/test_00814/part_header/{shard}', '1{replica}') ORDER BY x SETTINGS use_minimalistic_part_header_in_zookeeper = 0, old_parts_lifetime = 1, cleanup_delay_period = 0, cleanup_delay_period_random_add = 0; CREATE TABLE part_header_r2(x UInt32, y UInt32) - ENGINE ReplicatedMergeTree('/clickhouse/tables/test_00814/part_header', '2') ORDER BY x + ENGINE ReplicatedMergeTree('/clickhouse/tables/'||currentDatabase()||'/test_00814/part_header/{shard}', '2{replica}') ORDER BY x SETTINGS use_minimalistic_part_header_in_zookeeper = 1, old_parts_lifetime = 1, cleanup_delay_period = 0, @@ -39,10 +39,10 @@ SELECT sleep(3) FORMAT Null; SELECT '*** Test part removal ***'; SELECT '*** replica 1 ***'; SELECT name FROM system.parts WHERE active AND database = currentDatabase() AND table = 'part_header_r1'; -SELECT name FROM system.zookeeper WHERE path = '/clickhouse/tables/test_00814/part_header/replicas/1/parts'; +SELECT name FROM system.zookeeper WHERE path = '/clickhouse/tables/'||currentDatabase()||'/test_00814/part_header/s1/replicas/1r1/parts'; SELECT '*** replica 2 ***'; SELECT name FROM system.parts WHERE active AND database = currentDatabase() AND table = 'part_header_r2'; -SELECT name FROM system.zookeeper WHERE path = '/clickhouse/tables/test_00814/part_header/replicas/1/parts'; +SELECT name FROM system.zookeeper WHERE path = '/clickhouse/tables/'||currentDatabase()||'/test_00814/part_header/s1/replicas/1r1/parts'; SELECT '*** Test ALTER ***'; ALTER TABLE part_header_r1 MODIFY COLUMN y String; diff --git a/tests/queries/0_stateless/00816_long_concurrent_alter_column.sh b/tests/queries/0_stateless/00816_long_concurrent_alter_column.sh index 63b687d072d..b0244991b3c 100755 --- a/tests/queries/0_stateless/00816_long_concurrent_alter_column.sh +++ b/tests/queries/0_stateless/00816_long_concurrent_alter_column.sh @@ -60,16 +60,10 @@ wait echo "DROP TABLE concurrent_alter_column NO DELAY" | ${CLICKHOUSE_CLIENT} # NO DELAY has effect only for Atomic database -db_engine=`$CLICKHOUSE_CLIENT -q "SELECT engine FROM system.databases WHERE name='$CLICKHOUSE_DATABASE'"` -if [[ $db_engine == "Atomic" ]]; then - # DROP is non-blocking, so wait for alters - while true; do - $CLICKHOUSE_CLIENT -q "SELECT c = 0 FROM (SELECT count() as c FROM system.processes WHERE query_id LIKE 'alter_00816_%')" | grep 1 > /dev/null && break; - sleep 1; - done -fi - -# Check for deadlocks -echo "SELECT * FROM system.processes WHERE query_id LIKE 'alter_00816_%'" | ${CLICKHOUSE_CLIENT} +# Wait for alters and check for deadlocks (in case of deadlock this loop will not finish) +while true; do + echo "SELECT * FROM system.processes WHERE query_id LIKE 'alter\\_00816\\_%'" | ${CLICKHOUSE_CLIENT} | grep -q -F 'alter' || break + sleep 1; +done echo 'did not crash' 
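The 00816 change replaces an Atomic-engine special case with an unconditional drain loop: `DROP TABLE ... NO DELAY` can return while concurrent `ALTER`s are still running, so the script now polls `system.processes` until no `alter_00816_%` query remains, and a deadlocked `ALTER` keeps the loop spinning forever, which is precisely the hang the test exists to surface. A sketch of the drain loop under the same harness assumption (`CLICKHOUSE_CLIENT` comes from `shell_config.sh`):

```bash
# Poll until no matching query is left in system.processes. The doubled
# backslashes survive the shell's double quotes as \_, which escapes '_'
# (an SQL LIKE wildcard) so the pattern matches the literal query_id prefix.
while ${CLICKHOUSE_CLIENT} -q "SELECT query_id FROM system.processes WHERE query_id LIKE 'alter\\_00816\\_%'" | grep -q -F 'alter'; do
    sleep 1
done
```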
diff --git a/tests/queries/0_stateless/00826_cross_to_inner_join.reference b/tests/queries/0_stateless/00826_cross_to_inner_join.reference index 9b630d0d391..973c5b078a3 100644 --- a/tests/queries/0_stateless/00826_cross_to_inner_join.reference +++ b/tests/queries/0_stateless/00826_cross_to_inner_join.reference @@ -109,7 +109,7 @@ SELECT t2_00826.a, t2_00826.b FROM t1_00826 -ALL INNER JOIN t2_00826 ON (((a = t2_00826.a) AND (a = t2_00826.a)) AND (a = t2_00826.a)) AND (b = t2_00826.b) +ALL INNER JOIN t2_00826 ON (a = t2_00826.a) AND (a = t2_00826.a) AND (a = t2_00826.a) AND (b = t2_00826.b) WHERE (a = t2_00826.a) AND ((a = t2_00826.a) AND ((a = t2_00826.a) AND (b = t2_00826.b))) --- cross split conjunction --- SELECT diff --git a/tests/queries/0_stateless/00834_kill_mutation_replicated_zookeeper.sh b/tests/queries/0_stateless/00834_kill_mutation_replicated_zookeeper.sh index d1f938f73fe..92ab6814235 100755 --- a/tests/queries/0_stateless/00834_kill_mutation_replicated_zookeeper.sh +++ b/tests/queries/0_stateless/00834_kill_mutation_replicated_zookeeper.sh @@ -10,8 +10,8 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) ${CLICKHOUSE_CLIENT} --query="DROP TABLE IF EXISTS kill_mutation_r1" ${CLICKHOUSE_CLIENT} --query="DROP TABLE IF EXISTS kill_mutation_r2" -${CLICKHOUSE_CLIENT} --query="CREATE TABLE kill_mutation_r1(d Date, x UInt32, s String) ENGINE ReplicatedMergeTree('/clickhouse/tables/test_00834/kill_mutation', '1') ORDER BY x PARTITION BY d" -${CLICKHOUSE_CLIENT} --query="CREATE TABLE kill_mutation_r2(d Date, x UInt32, s String) ENGINE ReplicatedMergeTree('/clickhouse/tables/test_00834/kill_mutation', '2') ORDER BY x PARTITION BY d" +${CLICKHOUSE_CLIENT} --query="CREATE TABLE kill_mutation_r1(d Date, x UInt32, s String) ENGINE ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/kill_mutation', '1') ORDER BY x PARTITION BY d" +${CLICKHOUSE_CLIENT} --query="CREATE TABLE kill_mutation_r2(d Date, x UInt32, s String) ENGINE ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/kill_mutation', '2') ORDER BY x PARTITION BY d" ${CLICKHOUSE_CLIENT} --query="INSERT INTO kill_mutation_r1 VALUES ('2000-01-01', 1, 'a')" ${CLICKHOUSE_CLIENT} --query="INSERT INTO kill_mutation_r1 VALUES ('2001-01-01', 2, 'b')" diff --git a/tests/queries/0_stateless/00840_long_concurrent_select_and_drop_deadlock.sh b/tests/queries/0_stateless/00840_long_concurrent_select_and_drop_deadlock.sh index 60a2d8eb9a0..0a68225a31a 100755 --- a/tests/queries/0_stateless/00840_long_concurrent_select_and_drop_deadlock.sh +++ b/tests/queries/0_stateless/00840_long_concurrent_select_and_drop_deadlock.sh @@ -6,10 +6,28 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) # shellcheck source=../shell_config.sh . 
"$CURDIR"/../shell_config.sh -for _ in {1..200}; do echo "drop table if exists view_00840" | $CLICKHOUSE_CLIENT; echo "create view view_00840 as select count(*),database,table from system.columns group by database,table" | $CLICKHOUSE_CLIENT; done & -for _ in {1..500}; do echo "select * from view_00840 order by table" | $CLICKHOUSE_CLIENT >/dev/null 2>&1 || true; done & +function cleanup() +{ + echo Failed + wait +} + +trap cleanup EXIT + +$CLICKHOUSE_CLIENT -q "create view view_00840 as select count(*),database,table from system.columns group by database,table" + +for _ in {1..200}; do + $CLICKHOUSE_CLIENT -nm -q " + drop table if exists view_00840; + create view view_00840 as select count(*),database,table from system.columns group by database,table; + " +done & +for _ in {1..500}; do + $CLICKHOUSE_CLIENT -q "select * from view_00840 order by table" >/dev/null 2>&1 || true +done & wait +trap '' EXIT echo "drop table view_00840" | $CLICKHOUSE_CLIENT diff --git a/tests/queries/0_stateless/00849_multiple_comma_join_2.reference b/tests/queries/0_stateless/00849_multiple_comma_join_2.reference index 4db65b0b795..fc39ef13935 100644 --- a/tests/queries/0_stateless/00849_multiple_comma_join_2.reference +++ b/tests/queries/0_stateless/00849_multiple_comma_join_2.reference @@ -127,7 +127,7 @@ FROM ) AS `--.s` CROSS JOIN t3 ) AS `--.s` -ALL INNER JOIN t4 ON ((a = `--t1.a`) AND (a = `--t2.a`)) AND (a = `--t3.a`) +ALL INNER JOIN t4 ON (a = `--t1.a`) AND (a = `--t2.a`) AND (a = `--t3.a`) WHERE (a = `--t1.a`) AND (a = `--t2.a`) AND (a = `--t3.a`) SELECT `--t1.a` AS `t1.a` FROM diff --git a/tests/queries/0_stateless/00855_join_with_array_join.reference b/tests/queries/0_stateless/00855_join_with_array_join.reference index 386bde518ea..88f9253500c 100644 --- a/tests/queries/0_stateless/00855_join_with_array_join.reference +++ b/tests/queries/0_stateless/00855_join_with_array_join.reference @@ -4,3 +4,8 @@ 4 0 5 0 6 0 +- +1 0 +2 2 a2 +1 0 +2 2 a2 diff --git a/tests/queries/0_stateless/00855_join_with_array_join.sql b/tests/queries/0_stateless/00855_join_with_array_join.sql index 10b03fec062..506d9479110 100644 --- a/tests/queries/0_stateless/00855_join_with_array_join.sql +++ b/tests/queries/0_stateless/00855_join_with_array_join.sql @@ -1,10 +1,35 @@ SET joined_subquery_requires_alias = 0; -select ax, c from (select [1,2] ax, 0 c) array join ax join (select 0 c) using(c); -select ax, c from (select [3,4] ax, 0 c) join (select 0 c) using(c) array join ax; -select ax, c from (select [5,6] ax, 0 c) s1 join system.one s2 ON s1.c = s2.dummy array join ax; +SELECT ax, c FROM (SELECT [1,2] ax, 0 c) ARRAY JOIN ax JOIN (SELECT 0 c) USING (c); +SELECT ax, c FROM (SELECT [3,4] ax, 0 c) JOIN (SELECT 0 c) USING (c) ARRAY JOIN ax; +SELECT ax, c FROM (SELECT [5,6] ax, 0 c) s1 JOIN system.one s2 ON s1.c = s2.dummy ARRAY JOIN ax; + + +SELECT ax, c FROM (SELECT [101,102] ax, 0 c) s1 +JOIN system.one s2 ON s1.c = s2.dummy +JOIN system.one s3 ON s1.c = s3.dummy +ARRAY JOIN ax; -- { serverError 48 } + +SELECT '-'; + +SET joined_subquery_requires_alias = 1; + +DROP TABLE IF EXISTS f; +DROP TABLE IF EXISTS d; + +CREATE TABLE f (`d_ids` Array(Int64) ) ENGINE = TinyLog; +INSERT INTO f VALUES ([1, 2]); + +CREATE TABLE d (`id` Int64, `name` String ) ENGINE = TinyLog; + +INSERT INTO d VALUES (2, 'a2'), (3, 'a3'); + +SELECT d_ids, id, name FROM f LEFT ARRAY JOIN d_ids LEFT JOIN d ON d.id = d_ids ORDER BY id; +SELECT did, id, name FROM f LEFT ARRAY JOIN d_ids as did LEFT JOIN d ON d.id = did ORDER BY id; + +-- name clash, doesn't 
work yet +SELECT id, name FROM f LEFT ARRAY JOIN d_ids as id LEFT JOIN d ON d.id = id ORDER BY id; -- { serverError 403 } + +DROP TABLE IF EXISTS f; +DROP TABLE IF EXISTS d; -select ax, c from (select [7,8] ax, 0 c) s1 -join system.one s2 ON s1.c = s2.dummy -join system.one s3 ON s1.c = s3.dummy -array join ax; -- { serverError 48 } diff --git a/tests/queries/0_stateless/00878_join_unexpected_results.reference b/tests/queries/0_stateless/00878_join_unexpected_results.reference index 65fcbc257ca..a389cb47a96 100644 --- a/tests/queries/0_stateless/00878_join_unexpected_results.reference +++ b/tests/queries/0_stateless/00878_join_unexpected_results.reference @@ -23,8 +23,6 @@ join_use_nulls = 1 - \N \N - -1 1 \N \N -2 2 \N \N - 1 1 1 1 2 2 \N \N @@ -51,8 +49,6 @@ join_use_nulls = 0 - - - -1 1 0 0 -2 2 0 0 - 1 1 1 1 2 2 0 0 diff --git a/tests/queries/0_stateless/00878_join_unexpected_results.sql b/tests/queries/0_stateless/00878_join_unexpected_results.sql index 6f6cd6e6479..0aef5208b26 100644 --- a/tests/queries/0_stateless/00878_join_unexpected_results.sql +++ b/tests/queries/0_stateless/00878_join_unexpected_results.sql @@ -30,11 +30,11 @@ select * from t left outer join s on (t.a=s.a and t.b=s.b) where s.a is null; select '-'; select s.* from t left outer join s on (t.a=s.a and t.b=s.b) where s.a is null; select '-'; -select t.*, s.* from t left join s on (s.a=t.a and t.b=s.b and t.a=toInt64(2)) order by t.a; +select t.*, s.* from t left join s on (s.a=t.a and t.b=s.b and t.a=toInt64(2)) order by t.a; -- {serverError 403 } select '-'; select t.*, s.* from t left join s on (s.a=t.a) order by t.a; select '-'; -select t.*, s.* from t left join s on (t.b=toInt64(2) and s.a=t.a) where s.b=2; +select t.*, s.* from t left join s on (t.b=toInt64(2) and s.a=t.a) where s.b=2; -- {serverError 403 } select 'join_use_nulls = 0'; set join_use_nulls = 0; @@ -58,11 +58,11 @@ select '-'; select '-'; -- select s.* from t left outer join s on (t.a=s.a and t.b=s.b) where s.a is null; -- TODO select '-'; -select t.*, s.* from t left join s on (s.a=t.a and t.b=s.b and t.a=toInt64(2)) order by t.a; +select t.*, s.* from t left join s on (s.a=t.a and t.b=s.b and t.a=toInt64(2)) order by t.a; -- {serverError 403 } select '-'; select t.*, s.* from t left join s on (s.a=t.a) order by t.a; select '-'; -select t.*, s.* from t left join s on (t.b=toInt64(2) and s.a=t.a) where s.b=2; +select t.*, s.* from t left join s on (t.b=toInt64(2) and s.a=t.a) where s.b=2; -- {serverError 403 } drop table t; drop table s; diff --git a/tests/queries/0_stateless/00882_multiple_join_no_alias.reference b/tests/queries/0_stateless/00882_multiple_join_no_alias.reference index a3723bc9976..523063f8a3c 100644 --- a/tests/queries/0_stateless/00882_multiple_join_no_alias.reference +++ b/tests/queries/0_stateless/00882_multiple_join_no_alias.reference @@ -1,8 +1,8 @@ 1 1 1 1 0 0 0 0 -0 1 +0 1 1 1 1 1 1 2 2 0 0 0 0 -2 2 0 1 1 1 +2 2 0 diff --git a/tests/queries/0_stateless/00882_multiple_join_no_alias.sql b/tests/queries/0_stateless/00882_multiple_join_no_alias.sql index bd3a2a19913..4a96e73c679 100644 --- a/tests/queries/0_stateless/00882_multiple_join_no_alias.sql +++ b/tests/queries/0_stateless/00882_multiple_join_no_alias.sql @@ -13,22 +13,22 @@ insert into y values (1,1); select s.a, s.a, s.b as s_b, s.b from t left join s on s.a = t.a left join y on s.b = y.b -order by t.a; +order by t.a, s.a, s.b; select max(s.a) from t left join s on s.a = t.a left join y on s.b = y.b -group by t.a; +group by t.a order by t.a; select t.a, t.a as t_a, 
s.a, s.a as s_a, y.a, y.a as y_a from t left join s on t.a = s.a left join y on y.b = s.b -order by t.a; +order by t.a, s.a, y.a; select t.a, t.a as t_a, max(s.a) from t left join s on t.a = s.a left join y on y.b = s.b -group by t.a; +group by t.a order by t.a; drop table t; drop table s; diff --git a/tests/queries/0_stateless/00906_low_cardinality_rollup.reference b/tests/queries/0_stateless/00906_low_cardinality_rollup.reference index 3e287311126..257605d9006 100644 --- a/tests/queries/0_stateless/00906_low_cardinality_rollup.reference +++ b/tests/queries/0_stateless/00906_low_cardinality_rollup.reference @@ -1,18 +1,18 @@ -c d 1 a b 1 -c \N 1 a \N 1 +c d 1 +c \N 1 \N \N 2 -c 1 a 1 +c 1 \N 2 -c d 1 a b 1 -c \N 1 a \N 1 +c d 1 +c \N 1 \N b 1 \N d 1 \N \N 2 -c 1 a 1 +c 1 \N 2 diff --git a/tests/queries/0_stateless/00906_low_cardinality_rollup.sql b/tests/queries/0_stateless/00906_low_cardinality_rollup.sql index 3b8be7b9ac6..125529ad383 100644 --- a/tests/queries/0_stateless/00906_low_cardinality_rollup.sql +++ b/tests/queries/0_stateless/00906_low_cardinality_rollup.sql @@ -3,10 +3,10 @@ CREATE TABLE lc (a LowCardinality(Nullable(String)), b LowCardinality(Nullable(S INSERT INTO lc VALUES ('a', 'b'); INSERT INTO lc VALUES ('c', 'd'); -SELECT a, b, count(a) FROM lc GROUP BY a, b WITH ROLLUP; -SELECT a, count(a) FROM lc GROUP BY a WITH ROLLUP; +SELECT a, b, count(a) FROM lc GROUP BY a, b WITH ROLLUP ORDER BY a, b; +SELECT a, count(a) FROM lc GROUP BY a WITH ROLLUP ORDER BY a; -SELECT a, b, count(a) FROM lc GROUP BY a, b WITH CUBE; -SELECT a, count(a) FROM lc GROUP BY a WITH CUBE; +SELECT a, b, count(a) FROM lc GROUP BY a, b WITH CUBE ORDER BY a, b; +SELECT a, count(a) FROM lc GROUP BY a WITH CUBE ORDER BY a; DROP TABLE if exists lc; diff --git a/tests/queries/0_stateless/00910_decimal_group_array_crash_3783.sql b/tests/queries/0_stateless/00910_decimal_group_array_crash_3783.sql index 29ec19a6efe..cf0e0bac3dd 100644 --- a/tests/queries/0_stateless/00910_decimal_group_array_crash_3783.sql +++ b/tests/queries/0_stateless/00910_decimal_group_array_crash_3783.sql @@ -28,7 +28,7 @@ SELECT `time`, groupArray((sensor_id, volume)) AS groupArr FROM ( WHERE received_at BETWEEN '2018-12-12 00:00:00' AND '2018-12-30 00:00:00' GROUP BY `time`,sensor_id ORDER BY `time` -) GROUP BY `time`; +) GROUP BY `time` ORDER BY `time`; DROP TABLE sensor_value; @@ -59,4 +59,4 @@ select s.a, s.b, max(s.dt1) dt1, s.c, s.d, s.f, s.i, max(s.dt2) dt2 from ( , toDecimal128(268.970000000000, 12) f , toDecimal128(0.000000000000, 12) i , toDateTime('2018-11-02 00:00:00', 'Europe/Moscow') dt2 -) s group by s.a, s.b, s.c, s.d, s.f, s.i; +) s group by s.a, s.b, s.c, s.d, s.f, s.i ORDER BY s.a, s.b, s.c, s.d, s.f, s.i; diff --git a/tests/queries/0_stateless/00915_simple_aggregate_function.reference b/tests/queries/0_stateless/00915_simple_aggregate_function.reference index 8d5d8340f17..6bbe9b1e8b3 100644 --- a/tests/queries/0_stateless/00915_simple_aggregate_function.reference +++ b/tests/queries/0_stateless/00915_simple_aggregate_function.reference @@ -39,7 +39,7 @@ SimpleAggregateFunction(sum, Float64) 7 14 8 16 9 18 -1 1 2 2.2.2.2 3 ([1,2,3],[2,1,1]) ([1,2,3],[1,1,2]) ([1,2,3],[2,1,2]) [1,2,2,3,4] [4,2,1,3] (1,1) (2,2) -10 2222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222 20 20.20.20.20 5 ([2,3,4],[2,1,1]) ([2,3,4],[3,3,4]) ([2,3,4],[4,3,4]) [] [] (3,3) (4,4) -SimpleAggregateFunction(anyLast, Nullable(String)) SimpleAggregateFunction(anyLast, LowCardinality(Nullable(String))) 
SimpleAggregateFunction(anyLast, IPv4) SimpleAggregateFunction(groupBitOr, UInt32) SimpleAggregateFunction(sumMap, Tuple(Array(Int32), Array(Int64))) SimpleAggregateFunction(minMap, Tuple(Array(Int32), Array(Int64))) SimpleAggregateFunction(maxMap, Tuple(Array(Int32), Array(Int64))) SimpleAggregateFunction(groupArrayArray, Array(Int32)) SimpleAggregateFunction(groupUniqArrayArray, Array(Int32)) SimpleAggregateFunction(argMin, Tuple(Int32, Int64)) SimpleAggregateFunction(argMax, Tuple(Int32, Int64)) +1 1 2 2.2.2.2 3 ([1,2,3],[2,1,1]) ([1,2,3],[1,1,2]) ([1,2,3],[2,1,2]) [1,2,2,3,4] [4,2,1,3] +10 2222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222 20 20.20.20.20 5 ([2,3,4],[2,1,1]) ([2,3,4],[3,3,4]) ([2,3,4],[4,3,4]) [] [] +SimpleAggregateFunction(anyLast, Nullable(String)) SimpleAggregateFunction(anyLast, LowCardinality(Nullable(String))) SimpleAggregateFunction(anyLast, IPv4) SimpleAggregateFunction(groupBitOr, UInt32) SimpleAggregateFunction(sumMap, Tuple(Array(Int32), Array(Int64))) SimpleAggregateFunction(minMap, Tuple(Array(Int32), Array(Int64))) SimpleAggregateFunction(maxMap, Tuple(Array(Int32), Array(Int64))) SimpleAggregateFunction(groupArrayArray, Array(Int32)) SimpleAggregateFunction(groupUniqArrayArray, Array(Int32)) with_overflow 1 0 diff --git a/tests/queries/0_stateless/00915_simple_aggregate_function.sql b/tests/queries/0_stateless/00915_simple_aggregate_function.sql index c669f810312..82a7aa2152f 100644 --- a/tests/queries/0_stateless/00915_simple_aggregate_function.sql +++ b/tests/queries/0_stateless/00915_simple_aggregate_function.sql @@ -31,22 +31,16 @@ create table simple ( tup_min SimpleAggregateFunction(minMap, Tuple(Array(Int32), Array(Int64))), tup_max SimpleAggregateFunction(maxMap, Tuple(Array(Int32), Array(Int64))), arr SimpleAggregateFunction(groupArrayArray, Array(Int32)), - uniq_arr SimpleAggregateFunction(groupUniqArrayArray, Array(Int32)), - arg_min SimpleAggregateFunction(argMin, Tuple(Int32, Int64)), - arg_max SimpleAggregateFunction(argMax, Tuple(Int32, Int64)) + uniq_arr SimpleAggregateFunction(groupUniqArrayArray, Array(Int32)) ) engine=AggregatingMergeTree order by id; - -insert into simple values(1,'1','1','1.1.1.1', 1, ([1,2], [1,1]), ([1,2], [1,1]), ([1,2], [1,1]), [1,2], [1,2], (1,1), (1,1)); -insert into simple values(1,null,'2','2.2.2.2', 2, ([1,3], [1,1]), ([1,3], [2,2]), ([1,3], [2,2]), [2,3,4], [2,3,4], (2,2), (2,2)); +insert into simple values(1,'1','1','1.1.1.1', 1, ([1,2], [1,1]), ([1,2], [1,1]), ([1,2], [1,1]), [1,2], [1,2]); +insert into simple values(1,null,'2','2.2.2.2', 2, ([1,3], [1,1]), ([1,3], [2,2]), ([1,3], [2,2]), [2,3,4], [2,3,4]); -- String longer then MAX_SMALL_STRING_SIZE (actual string length is 100) -insert into simple values(10,'10','10','10.10.10.10', 4, ([2,3], [1,1]), ([2,3], [3,3]), ([2,3], [3,3]), [], [], (3,3), (3,3)); -insert into simple values(10,'2222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222','20','20.20.20.20', 1, ([2, 4], [1,1]), ([2, 4], [4,4]), ([2, 4], [4,4]), [], [], (4,4), (4,4)); +insert into simple values(10,'10','10','10.10.10.10', 4, ([2,3], [1,1]), ([2,3], [3,3]), ([2,3], [3,3]), [], []); +insert into simple values(10,'2222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222','20','20.20.20.20', 1, ([2, 4], [1,1]), ([2, 4], [4,4]), ([2, 4], [4,4]), [], []); select * from simple final order by id; -select toTypeName(nullable_str), toTypeName(low_str), 
toTypeName(ip), toTypeName(status), - toTypeName(tup), toTypeName(tup_min), toTypeName(tup_max), toTypeName(arr), - toTypeName(uniq_arr), toTypeName(arg_min), toTypeName(arg_max) -from simple limit 1; +select toTypeName(nullable_str),toTypeName(low_str),toTypeName(ip),toTypeName(status), toTypeName(tup), toTypeName(tup_min), toTypeName(tup_max), toTypeName(arr), toTypeName(uniq_arr) from simple limit 1; optimize table simple final; diff --git a/tests/queries/0_stateless/00918_has_unsufficient_type_check.reference b/tests/queries/0_stateless/00918_has_unsufficient_type_check.reference index 7938dcdde86..b261da18d51 100644 --- a/tests/queries/0_stateless/00918_has_unsufficient_type_check.reference +++ b/tests/queries/0_stateless/00918_has_unsufficient_type_check.reference @@ -1,3 +1,2 @@ -0 1 0 diff --git a/tests/queries/0_stateless/00918_has_unsufficient_type_check.sql b/tests/queries/0_stateless/00918_has_unsufficient_type_check.sql index f76fd446a8e..c40419e4d56 100644 --- a/tests/queries/0_stateless/00918_has_unsufficient_type_check.sql +++ b/tests/queries/0_stateless/00918_has_unsufficient_type_check.sql @@ -1,3 +1,3 @@ -SELECT hasAny([['Hello, world']], [[[]]]); +SELECT hasAny([['Hello, world']], [[[]]]); -- { serverError 386 } SELECT hasAny([['Hello, world']], [['Hello', 'world'], ['Hello, world']]); SELECT hasAll([['Hello, world']], [['Hello', 'world'], ['Hello, world']]); diff --git a/tests/queries/0_stateless/00921_datetime64_compatibility.python b/tests/queries/0_stateless/00921_datetime64_compatibility_long.python similarity index 95% rename from tests/queries/0_stateless/00921_datetime64_compatibility.python rename to tests/queries/0_stateless/00921_datetime64_compatibility_long.python index bf0ae8a72ac..c8b9620629d 100644 --- a/tests/queries/0_stateless/00921_datetime64_compatibility.python +++ b/tests/queries/0_stateless/00921_datetime64_compatibility_long.python @@ -86,8 +86,7 @@ CAST(N as DateTime64(9, 'Europe/Minsk')) formatDateTime(N, '%C %d %D %e %F %H %I %j %m %M %p %R %S %T %u %V %w %y %Y %%') """.splitlines() -# Expanded later to cartesian product of all arguments. -# NOTE: {N} to be turned into N after str.format() for keys (format string), but not for list of values! +# Expanded later to cartesian product of all arguments, using format string. extra_ops = [ # With same type: ( @@ -179,7 +178,7 @@ def escape_string(s): def execute_functions_for_types(functions, types): - # TODO: use string.Template here to allow lines that do not contain type, like: SELECT CAST(toDateTime64(1234567890), 'DateTime64') + # NOTE: use string.Template here to allow lines with missing keys, like type, e.g. 
SELECT CAST(toDateTime64(1234567890), 'DateTime64') for func in functions: print(("""SELECT 'SELECT {func}';""".format(func=escape_string(func)))) for dt in types: diff --git a/tests/queries/0_stateless/00921_datetime64_compatibility.reference b/tests/queries/0_stateless/00921_datetime64_compatibility_long.reference similarity index 99% rename from tests/queries/0_stateless/00921_datetime64_compatibility.reference rename to tests/queries/0_stateless/00921_datetime64_compatibility_long.reference index 004f4f5e824..67413512e06 100644 --- a/tests/queries/0_stateless/00921_datetime64_compatibility.reference +++ b/tests/queries/0_stateless/00921_datetime64_compatibility_long.reference @@ -1,5 +1,4 @@ SELECT toTimeZone(N, \'UTC\') - Code: 43 "DateTime('UTC')","2019-09-16 16:20:11" "DateTime64(3, 'UTC')","2019-09-16 16:20:11.234" @@ -35,25 +34,21 @@ SELECT toDayOfWeek(N) "UInt8",1 ------------------------------------------ SELECT toHour(N) - Code: 43 "UInt8",19 "UInt8",19 ------------------------------------------ SELECT toMinute(N) - Code: 43 "UInt8",20 "UInt8",20 ------------------------------------------ SELECT toSecond(N) - Code: 43 "UInt8",11 "UInt8",11 ------------------------------------------ SELECT toUnixTimestamp(N) - Code: 44 "UInt32",1568650811 "UInt32",1568650811 @@ -94,31 +89,26 @@ SELECT toStartOfDay(N) "DateTime('Europe/Minsk')","2019-09-16 00:00:00" ------------------------------------------ SELECT toStartOfHour(N) - Code: 43 "DateTime('Europe/Minsk')","2019-09-16 19:00:00" "DateTime('Europe/Minsk')","2019-09-16 19:00:00" ------------------------------------------ SELECT toStartOfMinute(N) - Code: 43 "DateTime('Europe/Minsk')","2019-09-16 19:20:00" "DateTime('Europe/Minsk')","2019-09-16 19:20:00" ------------------------------------------ SELECT toStartOfFiveMinute(N) - Code: 43 "DateTime('Europe/Minsk')","2019-09-16 19:20:00" "DateTime('Europe/Minsk')","2019-09-16 19:20:00" ------------------------------------------ SELECT toStartOfTenMinutes(N) - Code: 43 "DateTime('Europe/Minsk')","2019-09-16 19:20:00" "DateTime('Europe/Minsk')","2019-09-16 19:20:00" ------------------------------------------ SELECT toStartOfFifteenMinutes(N) - Code: 43 "DateTime('Europe/Minsk')","2019-09-16 19:15:00" "DateTime('Europe/Minsk')","2019-09-16 19:15:00" @@ -139,7 +129,6 @@ SELECT toStartOfInterval(N, INTERVAL 1 day) "DateTime('Europe/Minsk')","2019-09-16 00:00:00" ------------------------------------------ SELECT toStartOfInterval(N, INTERVAL 15 minute) - Code: 43 "DateTime('Europe/Minsk')","2019-09-16 19:15:00" "DateTime('Europe/Minsk')","2019-09-16 19:15:00" @@ -160,13 +149,11 @@ SELECT date_trunc(\'day\', N) "DateTime('Europe/Minsk')","2019-09-16 00:00:00" ------------------------------------------ SELECT date_trunc(\'minute\', N) - Code: 43 "DateTime('Europe/Minsk')","2019-09-16 19:20:00" "DateTime('Europe/Minsk')","2019-09-16 19:20:00" ------------------------------------------ SELECT toTime(N) - Code: 43 "DateTime('Europe/Minsk')","1970-01-02 19:20:11" "DateTime('Europe/Minsk')","1970-01-02 19:20:11" @@ -232,7 +219,6 @@ SELECT toYearWeek(N) "UInt32",201937 ------------------------------------------ SELECT timeSlot(N) - Code: 43 "DateTime('Europe/Minsk')","2019-09-16 19:00:00" "DateTime('Europe/Minsk')","2019-09-16 19:00:00" @@ -375,15 +361,11 @@ SELECT formatDateTime(N, \'%C %d %D %e %F %H %I %j %m %M %p %R %S %T %u %V %w %y SELECT N - N "Int32",0 "Int32",0 - Code: 43 ------------------------------------------ SELECT N + N - Code: 43 - Code: 43 - Code: 43 
------------------------------------------ SELECT N != N @@ -417,47 +399,33 @@ SELECT N >= N "UInt8",1 ------------------------------------------ SELECT N - DT - Code: 43 "Int32",0 - Code: 43 ------------------------------------------ SELECT DT - N - Code: 43 "Int32",0 - Code: 43 ------------------------------------------ SELECT N - D "Int32",0 - Code: 43 - Code: 43 ------------------------------------------ SELECT D - N "Int32",0 - Code: 43 - Code: 43 ------------------------------------------ SELECT N - DT64 - Code: 43 - Code: 43 - Code: 43 ------------------------------------------ SELECT DT64 - N - Code: 43 - Code: 43 - Code: 43 ------------------------------------------ SELECT N != DT @@ -726,11 +694,8 @@ SELECT N - toUInt8(1) "DateTime64(3, 'Europe/Minsk')","2019-09-16 19:20:10.234" ------------------------------------------ SELECT toUInt8(1) - N - Code: 43 - Code: 43 - Code: 43 ------------------------------------------ SELECT N - toInt8(-1) @@ -739,11 +704,8 @@ SELECT N - toInt8(-1) "DateTime64(3, 'Europe/Minsk')","2019-09-16 19:20:12.234" ------------------------------------------ SELECT toInt8(-1) - N - Code: 43 - Code: 43 - Code: 43 ------------------------------------------ SELECT N - toUInt16(1) @@ -752,11 +714,8 @@ SELECT N - toUInt16(1) "DateTime64(3, 'Europe/Minsk')","2019-09-16 19:20:10.234" ------------------------------------------ SELECT toUInt16(1) - N - Code: 43 - Code: 43 - Code: 43 ------------------------------------------ SELECT N - toInt16(-1) @@ -765,11 +724,8 @@ SELECT N - toInt16(-1) "DateTime64(3, 'Europe/Minsk')","2019-09-16 19:20:12.234" ------------------------------------------ SELECT toInt16(-1) - N - Code: 43 - Code: 43 - Code: 43 ------------------------------------------ SELECT N - toUInt32(1) @@ -778,11 +734,8 @@ SELECT N - toUInt32(1) "DateTime64(3, 'Europe/Minsk')","2019-09-16 19:20:10.234" ------------------------------------------ SELECT toUInt32(1) - N - Code: 43 - Code: 43 - Code: 43 ------------------------------------------ SELECT N - toInt32(-1) @@ -791,11 +744,8 @@ SELECT N - toInt32(-1) "DateTime64(3, 'Europe/Minsk')","2019-09-16 19:20:12.234" ------------------------------------------ SELECT toInt32(-1) - N - Code: 43 - Code: 43 - Code: 43 ------------------------------------------ SELECT N - toUInt64(1) @@ -804,11 +754,8 @@ SELECT N - toUInt64(1) "DateTime64(3, 'Europe/Minsk')","2019-09-16 19:20:10.234" ------------------------------------------ SELECT toUInt64(1) - N - Code: 43 - Code: 43 - Code: 43 ------------------------------------------ SELECT N - toInt64(-1) @@ -817,585 +764,486 @@ SELECT N - toInt64(-1) "DateTime64(3, 'Europe/Minsk')","2019-09-16 19:20:12.234" ------------------------------------------ SELECT toInt64(-1) - N - Code: 43 - Code: 43 - Code: 43 ------------------------------------------ SELECT N == toUInt8(1) - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT toUInt8(1) == N - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT N == toInt8(-1) - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT toInt8(-1) == N - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT N == toUInt16(1) - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT toUInt16(1) == N - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT N == toInt16(-1) - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT toInt16(-1) == N - Code: 43 "UInt8",0 "UInt8",0 
------------------------------------------ SELECT N == toUInt32(1) - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT toUInt32(1) == N - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT N == toInt32(-1) - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT toInt32(-1) == N - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT N == toUInt64(1) - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT toUInt64(1) == N - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT N == toInt64(-1) - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT toInt64(-1) == N - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT N != toUInt8(1) - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT toUInt8(1) != N - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT N != toInt8(-1) - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT toInt8(-1) != N - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT N != toUInt16(1) - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT toUInt16(1) != N - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT N != toInt16(-1) - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT toInt16(-1) != N - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT N != toUInt32(1) - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT toUInt32(1) != N - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT N != toInt32(-1) - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT toInt32(-1) != N - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT N != toUInt64(1) - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT toUInt64(1) != N - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT N != toInt64(-1) - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT toInt64(-1) != N - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT N < toUInt8(1) - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT toUInt8(1) < N - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT N < toInt8(-1) - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT toInt8(-1) < N - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT N < toUInt16(1) - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT toUInt16(1) < N - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT N < toInt16(-1) - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT toInt16(-1) < N - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT N < toUInt32(1) - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT toUInt32(1) < N - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT N < toInt32(-1) - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT toInt32(-1) < N - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ 
SELECT N < toUInt64(1) - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT toUInt64(1) < N - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT N < toInt64(-1) - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT toInt64(-1) < N - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT N <= toUInt8(1) - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT toUInt8(1) <= N - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT N <= toInt8(-1) - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT toInt8(-1) <= N - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT N <= toUInt16(1) - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT toUInt16(1) <= N - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT N <= toInt16(-1) - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT toInt16(-1) <= N - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT N <= toUInt32(1) - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT toUInt32(1) <= N - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT N <= toInt32(-1) - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT toInt32(-1) <= N - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT N <= toUInt64(1) - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT toUInt64(1) <= N - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT N <= toInt64(-1) - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT toInt64(-1) <= N - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT N > toUInt8(1) - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT toUInt8(1) > N - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT N > toInt8(-1) - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT toInt8(-1) > N - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT N > toUInt16(1) - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT toUInt16(1) > N - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT N > toInt16(-1) - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT toInt16(-1) > N - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT N > toUInt32(1) - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT toUInt32(1) > N - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT N > toInt32(-1) - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT toInt32(-1) > N - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT N > toUInt64(1) - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT toUInt64(1) > N - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT N > toInt64(-1) - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT toInt64(-1) > N - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT N >= toUInt8(1) - Code: 43 "UInt8",1 "UInt8",1 
------------------------------------------ SELECT toUInt8(1) >= N - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT N >= toInt8(-1) - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT toInt8(-1) >= N - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT N >= toUInt16(1) - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT toUInt16(1) >= N - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT N >= toInt16(-1) - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT toInt16(-1) >= N - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT N >= toUInt32(1) - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT toUInt32(1) >= N - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT N >= toInt32(-1) - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT toInt32(-1) >= N - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT N >= toUInt64(1) - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT toUInt64(1) >= N - Code: 43 "UInt8",0 "UInt8",0 ------------------------------------------ SELECT N >= toInt64(-1) - Code: 43 "UInt8",1 "UInt8",1 ------------------------------------------ SELECT toInt64(-1) >= N - Code: 43 "UInt8",0 "UInt8",0 diff --git a/tests/queries/0_stateless/00921_datetime64_compatibility.sh b/tests/queries/0_stateless/00921_datetime64_compatibility_long.sh similarity index 82% rename from tests/queries/0_stateless/00921_datetime64_compatibility.sh rename to tests/queries/0_stateless/00921_datetime64_compatibility_long.sh index 1617e5b1f77..52a29c19be1 100755 --- a/tests/queries/0_stateless/00921_datetime64_compatibility.sh +++ b/tests/queries/0_stateless/00921_datetime64_compatibility_long.sh @@ -11,6 +11,6 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) # ${CURDIR}/00921_datetime64_compatibility.python -python3 "${CURDIR}"/00921_datetime64_compatibility.python \ +python3 "${CURDIR}"/00921_datetime64_compatibility_long.python \ | ${CLICKHOUSE_CLIENT} --ignore-error -T -nm --calculate_text_stack_trace 0 --log-level 'error' 2>&1 \ - | sed 's/Received exception .*//g; s/^\(Code: [0-9]\+\).*$/\1/g' + | grep -v 'Received exception .*$' | sed 's/^\(Code: [0-9]\+\).*$/\1/g' diff --git a/tests/queries/0_stateless/00926_adaptive_index_granularity_collapsing_merge_tree.reference b/tests/queries/0_stateless/00926_adaptive_index_granularity_collapsing_merge_tree.reference index e5e283f754b..3a176a17f5a 100644 --- a/tests/queries/0_stateless/00926_adaptive_index_granularity_collapsing_merge_tree.reference +++ b/tests/queries/0_stateless/00926_adaptive_index_granularity_collapsing_merge_tree.reference @@ -1,7 +1,7 @@ 4 4 8 -7 +8 ----- 4 1 diff --git a/tests/queries/0_stateless/00926_adaptive_index_granularity_collapsing_merge_tree.sql b/tests/queries/0_stateless/00926_adaptive_index_granularity_collapsing_merge_tree.sql index d4c19cbe8f2..58b266f106f 100644 --- a/tests/queries/0_stateless/00926_adaptive_index_granularity_collapsing_merge_tree.sql +++ b/tests/queries/0_stateless/00926_adaptive_index_granularity_collapsing_merge_tree.sql @@ -58,7 +58,7 @@ OPTIMIZE TABLE four_rows_per_granule FINAL; SELECT COUNT(*) FROM four_rows_per_granule; -SELECT distinct(marks) from system.parts WHERE table = 'four_rows_per_granule' and database=currentDatabase() and active=1; +SELECT 
sum(marks) from system.parts WHERE table = 'four_rows_per_granule' and database=currentDatabase() and active=1; INSERT INTO four_rows_per_granule (p, k, v1, v2, Sign) VALUES ('2018-05-15', 1, 1000, 2000, 1), ('2018-05-16', 2, 3000, 4000, 1), ('2018-05-17', 3, 5000, 6000, 1), ('2018-05-18', 4, 7000, 8000, 1); diff --git a/tests/queries/0_stateless/00926_adaptive_index_granularity_replacing_merge_tree.reference b/tests/queries/0_stateless/00926_adaptive_index_granularity_replacing_merge_tree.reference index a418a89ccc5..5d5f3af28ab 100644 --- a/tests/queries/0_stateless/00926_adaptive_index_granularity_replacing_merge_tree.reference +++ b/tests/queries/0_stateless/00926_adaptive_index_granularity_replacing_merge_tree.reference @@ -1,7 +1,7 @@ 4 4 8 -7 +8 ----- 4 2 diff --git a/tests/queries/0_stateless/00926_adaptive_index_granularity_versioned_collapsing_merge_tree.reference b/tests/queries/0_stateless/00926_adaptive_index_granularity_versioned_collapsing_merge_tree.reference index 067189f73fc..f93aae0225a 100644 --- a/tests/queries/0_stateless/00926_adaptive_index_granularity_versioned_collapsing_merge_tree.reference +++ b/tests/queries/0_stateless/00926_adaptive_index_granularity_versioned_collapsing_merge_tree.reference @@ -6,11 +6,9 @@ 4 1 0 -0 6 2 ----- 6 3 0 -0 diff --git a/tests/queries/0_stateless/00926_adaptive_index_granularity_versioned_collapsing_merge_tree.sql b/tests/queries/0_stateless/00926_adaptive_index_granularity_versioned_collapsing_merge_tree.sql index 4d4dbda922d..44dd0412aea 100644 --- a/tests/queries/0_stateless/00926_adaptive_index_granularity_versioned_collapsing_merge_tree.sql +++ b/tests/queries/0_stateless/00926_adaptive_index_granularity_versioned_collapsing_merge_tree.sql @@ -62,7 +62,11 @@ OPTIMIZE TABLE four_rows_per_granule FINAL; SELECT COUNT(*) FROM four_rows_per_granule; -SELECT distinct(marks) from system.parts WHERE table = 'four_rows_per_granule' and database=currentDatabase() and active=1; +-- We expect zero marks here, so we might get zero rows if all the parts were +-- deleted already. This can happen in parallel runs where there may be a long delay +-- between queries. So we must write the query in such a way that it always returns +-- zero rows if OK. +SELECT distinct(marks) d from system.parts WHERE table = 'four_rows_per_granule' and database=currentDatabase() and active=1 having d > 0; INSERT INTO four_rows_per_granule (p, k, v1, v2, Sign, Version) VALUES ('2018-05-15', 1, 1000, 2000, 1, 1), ('2018-05-16', 2, 3000, 4000, 1, 1), ('2018-05-17', 3, 5000, 6000, 1, 1), ('2018-05-18', 4, 7000, 8000, 1, 1); @@ -120,6 +124,10 @@ OPTIMIZE TABLE six_rows_per_granule FINAL; SELECT COUNT(*) FROM six_rows_per_granule; -SELECT distinct(marks) from system.parts WHERE table = 'six_rows_per_granule' and database=currentDatabase() and active=1; +-- We expect zero marks here, so we might get zero rows if all the parts were +-- deleted already. This can happen in parallel runs where there may be a long delay +-- between queries. So we must write the query in such a way that it always returns +-- zero rows if OK. 
+SELECT distinct(marks) d from system.parts WHERE table = 'six_rows_per_granule' and database=currentDatabase() and active=1 having d > 0; DROP TABLE IF EXISTS six_rows_per_granule; diff --git a/tests/queries/0_stateless/00933_test_fix_extra_seek_on_compressed_cache.reference b/tests/queries/0_stateless/00933_test_fix_extra_seek_on_compressed_cache.reference index 7a08495654c..f1839bae259 100644 --- a/tests/queries/0_stateless/00933_test_fix_extra_seek_on_compressed_cache.reference +++ b/tests/queries/0_stateless/00933_test_fix_extra_seek_on_compressed_cache.reference @@ -1 +1 @@ -0 36 13 +0 0 13 diff --git a/tests/queries/0_stateless/00950_test_double_delta_codec.sql b/tests/queries/0_stateless/00950_test_double_delta_codec.sql index a71264910ac..6bf9b2628ad 100644 --- a/tests/queries/0_stateless/00950_test_double_delta_codec.sql +++ b/tests/queries/0_stateless/00950_test_double_delta_codec.sql @@ -22,144 +22,145 @@ CREATE TABLE codecTest ( valueI8 Int8 CODEC(DoubleDelta), valueDT DateTime CODEC(DoubleDelta), valueD Date CODEC(DoubleDelta) -) Engine = MergeTree ORDER BY key; +) Engine = MergeTree ORDER BY key SETTINGS min_bytes_for_wide_part = 0; -- checking for overflow INSERT INTO codecTest (key, ref_valueU64, valueU64, ref_valueI64, valueI64) - VALUES (1, 18446744073709551615, 18446744073709551615, 9223372036854775807, 9223372036854775807), (2, 0, 0, -9223372036854775808, -9223372036854775808), (3, 18446744073709551615, 18446744073709551615, 9223372036854775807, 9223372036854775807); + VALUES (1, 18446744073709551615, 18446744073709551615, 9223372036854775807, 9223372036854775807), (2, 0, 0, -9223372036854775808, -9223372036854775808), (3, 18446744073709551615, 18446744073709551615, 9223372036854775807, 9223372036854775807); -- n^3 covers all double delta storage cases, from small difference between neighbouref_values (stride) to big. 
INSERT INTO codecTest (key, ref_valueU64, valueU64, ref_valueU32, valueU32, ref_valueU16, valueU16, ref_valueU8, valueU8, ref_valueI64, valueI64, ref_valueI32, valueI32, ref_valueI16, valueI16, ref_valueI8, valueI8, ref_valueDT, valueDT, ref_valueD, valueD) - SELECT number as n, n * n * n as v, v, v, v, v, v, v, v, v, v, v, v, v, v, v, v, toDateTime(v), toDateTime(v), toDate(v), toDate(v) - FROM system.numbers LIMIT 101, 1000; + SELECT number as n, n * n * n as v, v, v, v, v, v, v, v, v, v, v, v, v, v, v, v, toDateTime(v), toDateTime(v), toDate(v), toDate(v) + FROM system.numbers LIMIT 101, 1000; -- best case - constant stride INSERT INTO codecTest (key, ref_valueU64, valueU64, ref_valueU32, valueU32, ref_valueU16, valueU16, ref_valueU8, valueU8, ref_valueI64, valueI64, ref_valueI32, valueI32, ref_valueI16, valueI16, ref_valueI8, valueI8, ref_valueDT, valueDT, ref_valueD, valueD) - SELECT number as n, n as v, v, v, v, v, v, v, v, v, v, v, v, v, v, v, v, toDateTime(v), toDateTime(v), toDate(v), toDate(v) - FROM system.numbers LIMIT 2001, 1000; + SELECT number as n, n as v, v, v, v, v, v, v, v, v, v, v, v, v, v, v, v, toDateTime(v), toDateTime(v), toDate(v), toDate(v) + FROM system.numbers LIMIT 2001, 1000; -- worst case - random stride INSERT INTO codecTest (key, ref_valueU64, valueU64, ref_valueU32, valueU32, ref_valueU16, valueU16, ref_valueU8, valueU8, ref_valueI64, valueI64, ref_valueI32, valueI32, ref_valueI16, valueI16, ref_valueI8, valueI8, ref_valueDT, valueDT, ref_valueD, valueD) - SELECT number as n, n + (rand64() - 9223372036854775807)/1000 as v, v, v, v, v, v, v, v, v, v, v, v, v, v, v, v, toDateTime(v), toDateTime(v), toDate(v), toDate(v) - FROM system.numbers LIMIT 3001, 1000; + SELECT number as n, n + (rand64() - 9223372036854775807)/1000 as v, v, v, v, v, v, v, v, v, v, v, v, v, v, v, v, toDateTime(v), toDateTime(v), toDate(v), toDate(v) + FROM system.numbers LIMIT 3001, 1000; SELECT 'U64'; SELECT - key, - ref_valueU64, valueU64, ref_valueU64 - valueU64 as dU64 + key, + ref_valueU64, valueU64, ref_valueU64 - valueU64 as dU64 FROM codecTest WHERE - dU64 != 0 + dU64 != 0 LIMIT 10; SELECT 'U32'; SELECT - key, - ref_valueU32, valueU32, ref_valueU32 - valueU32 as dU32 + key, + ref_valueU32, valueU32, ref_valueU32 - valueU32 as dU32 FROM codecTest WHERE - dU32 != 0 + dU32 != 0 LIMIT 10; SELECT 'U16'; SELECT - key, - ref_valueU16, valueU16, ref_valueU16 - valueU16 as dU16 + key, + ref_valueU16, valueU16, ref_valueU16 - valueU16 as dU16 FROM codecTest WHERE - dU16 != 0 + dU16 != 0 LIMIT 10; SELECT 'U8'; SELECT - key, - ref_valueU8, valueU8, ref_valueU8 - valueU8 as dU8 + key, + ref_valueU8, valueU8, ref_valueU8 - valueU8 as dU8 FROM codecTest WHERE - dU8 != 0 + dU8 != 0 LIMIT 10; SELECT 'I64'; SELECT - key, - ref_valueI64, valueI64, ref_valueI64 - valueI64 as dI64 + key, + ref_valueI64, valueI64, ref_valueI64 - valueI64 as dI64 FROM codecTest WHERE - dI64 != 0 + dI64 != 0 LIMIT 10; SELECT 'I32'; SELECT - key, - ref_valueI32, valueI32, ref_valueI32 - valueI32 as dI32 + key, + ref_valueI32, valueI32, ref_valueI32 - valueI32 as dI32 FROM codecTest WHERE - dI32 != 0 + dI32 != 0 LIMIT 10; SELECT 'I16'; SELECT - key, - ref_valueI16, valueI16, ref_valueI16 - valueI16 as dI16 + key, + ref_valueI16, valueI16, ref_valueI16 - valueI16 as dI16 FROM codecTest WHERE - dI16 != 0 + dI16 != 0 LIMIT 10; SELECT 'I8'; SELECT - key, - ref_valueI8, valueI8, ref_valueI8 - valueI8 as dI8 + key, + ref_valueI8, valueI8, ref_valueI8 - valueI8 as dI8 FROM codecTest WHERE - dI8 != 0 + dI8 != 0 LIMIT 10; SELECT 
'DT'; SELECT - key, - ref_valueDT, valueDT, ref_valueDT - valueDT as dDT + key, + ref_valueDT, valueDT, ref_valueDT - valueDT as dDT FROM codecTest WHERE - dDT != 0 + dDT != 0 LIMIT 10; SELECT 'D'; SELECT - key, - ref_valueD, valueD, ref_valueD - valueD as dD + key, + ref_valueD, valueD, ref_valueD - valueD as dD FROM codecTest WHERE - dD != 0 + dD != 0 LIMIT 10; SELECT 'Compression:'; SELECT - table, name, type, - compression_codec, - data_uncompressed_bytes u, - data_compressed_bytes c, - round(u/c,3) ratio + table, name, type, + compression_codec, + data_uncompressed_bytes u, + data_compressed_bytes c, + round(u/c,3) ratio FROM system.columns WHERE - table == 'codecTest' + table = 'codecTest' + AND database = currentDatabase() AND - compression_codec != '' + compression_codec != '' AND - ratio <= 1 + ratio <= 1 ORDER BY - table, name, type; + table, name, type; DROP TABLE IF EXISTS codecTest; diff --git a/tests/queries/0_stateless/00953_zookeeper_suetin_deduplication_bug.sh b/tests/queries/0_stateless/00953_zookeeper_suetin_deduplication_bug.sh index bbc2d957937..71ca29bfd96 100755 --- a/tests/queries/0_stateless/00953_zookeeper_suetin_deduplication_bug.sh +++ b/tests/queries/0_stateless/00953_zookeeper_suetin_deduplication_bug.sh @@ -15,48 +15,44 @@ CREATE TABLE elog ( engine_id UInt32, referrer String ) -ENGINE = ReplicatedMergeTree('/clickhouse/tables/test_00953/elog', 'test') +ENGINE = ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/elog/{shard}', '{replica}') PARTITION BY date ORDER BY (engine_id) SETTINGS replicated_deduplication_window = 2, cleanup_delay_period=4, cleanup_delay_period_random_add=0;" $CLICKHOUSE_CLIENT --query="INSERT INTO elog VALUES (toDate('2018-10-01'), 1, 'hello')" -sleep 1 $CLICKHOUSE_CLIENT --query="INSERT INTO elog VALUES (toDate('2018-10-01'), 2, 'hello')" -sleep 1 $CLICKHOUSE_CLIENT --query="INSERT INTO elog VALUES (toDate('2018-10-01'), 3, 'hello')" $CLICKHOUSE_CLIENT --query="SELECT count(*) from elog" # 3 rows -count=$($CLICKHOUSE_CLIENT --query="SELECT COUNT(*) FROM system.zookeeper where path = '/clickhouse/tables/test_00953/elog/blocks'") - +count=$($CLICKHOUSE_CLIENT --query="SELECT COUNT(*) FROM system.zookeeper where path = '/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/elog/s1/blocks'") while [[ $count != 2 ]] do sleep 1 - count=$($CLICKHOUSE_CLIENT --query="SELECT COUNT(*) FROM system.zookeeper where path = '/clickhouse/tables/test_00953/elog/blocks'") + count=$($CLICKHOUSE_CLIENT --query="SELECT COUNT(*) FROM system.zookeeper where path = '/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/elog/s1/blocks'") done $CLICKHOUSE_CLIENT --query="INSERT INTO elog VALUES (toDate('2018-10-01'), 1, 'hello')" $CLICKHOUSE_CLIENT --query="SELECT count(*) from elog" # 4 rows -count=$($CLICKHOUSE_CLIENT --query="SELECT COUNT(*) FROM system.zookeeper where path = '/clickhouse/tables/test_00953/elog/blocks'") +count=$($CLICKHOUSE_CLIENT --query="SELECT COUNT(*) FROM system.zookeeper where path = '/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/elog/s1/blocks'") while [[ $count != 2 ]] do sleep 1 - count=$($CLICKHOUSE_CLIENT --query="SELECT COUNT(*) FROM system.zookeeper where path = '/clickhouse/tables/test_00953/elog/blocks'") + count=$($CLICKHOUSE_CLIENT --query="SELECT COUNT(*) FROM system.zookeeper where path = '/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/elog/s1/blocks'") done $CLICKHOUSE_CLIENT --query="INSERT INTO elog VALUES (toDate('2018-10-01'), 2, 'hello')" $CLICKHOUSE_CLIENT --query="SELECT count(*) 
from elog" # 5 rows -count=$($CLICKHOUSE_CLIENT --query="SELECT COUNT(*) FROM system.zookeeper where path = '/clickhouse/tables/test_00953/elog/blocks'") - +count=$($CLICKHOUSE_CLIENT --query="SELECT COUNT(*) FROM system.zookeeper where path = '/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/elog/s1/blocks'") while [[ $count != 2 ]] do sleep 1 - count=$($CLICKHOUSE_CLIENT --query="SELECT COUNT(*) FROM system.zookeeper where path = '/clickhouse/tables/test_00953/elog/blocks'") + count=$($CLICKHOUSE_CLIENT --query="SELECT COUNT(*) FROM system.zookeeper where path = '/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/elog/s1/blocks'") done $CLICKHOUSE_CLIENT --query="INSERT INTO elog VALUES (toDate('2018-10-01'), 2, 'hello')" diff --git a/tests/queries/0_stateless/00956_sensitive_data_masking.sh b/tests/queries/0_stateless/00956_sensitive_data_masking.sh index 764cb6a713e..7462dfd5585 100755 --- a/tests/queries/0_stateless/00956_sensitive_data_masking.sh +++ b/tests/queries/0_stateless/00956_sensitive_data_masking.sh @@ -65,7 +65,7 @@ echo '5.1' # wait until the query in background will start (max: 10 seconds as sleepEachRow) for _ in {1..100}; do - $CLICKHOUSE_CLIENT --query="SHOW PROCESSLIST" --log_queries=0 >"$tmp_file" 2>&1 + $CLICKHOUSE_CLIENT --query="SELECT * FROM system.processes WHERE current_database = currentDatabase()" --log_queries=0 >"$tmp_file" 2>&1 grep -q -F 'fwerkh_that_magic_string_make_me_unique' "$tmp_file" && break sleep 0.1 done @@ -97,7 +97,7 @@ echo 7 # and finally querylog $CLICKHOUSE_CLIENT \ --server_logs_file=/dev/null \ - --query="select * from system.query_log where current_database = currentDatabase() AND event_time > now() - 10 and query like '%TOPSECRET%';" + --query="select * from system.query_log where current_database = currentDatabase() AND event_date >= yesterday() and query like '%TOPSECRET%';" rm -f "$tmp_file" >/dev/null 2>&1 @@ -118,8 +118,8 @@ $CLICKHOUSE_CLIENT --query="SYSTEM FLUSH LOGS" --server_logs_file=/dev/null echo 9 $CLICKHOUSE_CLIENT \ --server_logs_file=/dev/null \ - --query="SELECT if( count() > 0, 'text_log non empty', 'text_log empty') FROM system.text_log WHERE event_time>now() - 60 and message like '%find_me%'; - select * from system.text_log where event_time > now() - 60 and message like '%TOPSECRET=TOPSECRET%';" --ignore-error --multiquery + --query="SELECT if( count() > 0, 'text_log non empty', 'text_log empty') FROM system.text_log WHERE event_date >= yesterday() and message like '%find_me%'; + select * from system.text_log where event_date >= yesterday() and message like '%TOPSECRET=TOPSECRET%';" --ignore-error --multiquery echo 'finish' rm -f "$tmp_file" >/dev/null 2>&1 diff --git a/tests/queries/0_stateless/00966_invalid_json_must_not_parse.reference b/tests/queries/0_stateless/00966_invalid_json_must_not_parse.reference index f7eb44d66e0..4521d575ff3 100644 --- a/tests/queries/0_stateless/00966_invalid_json_must_not_parse.reference +++ b/tests/queries/0_stateless/00966_invalid_json_must_not_parse.reference @@ -4,3 +4,7 @@ 0 0 0 +0 +0 +0 +0 diff --git a/tests/queries/0_stateless/00966_invalid_json_must_not_parse.sql b/tests/queries/0_stateless/00966_invalid_json_must_not_parse.sql index afcbc78cfd5..0e7fa55dbae 100644 --- a/tests/queries/0_stateless/00966_invalid_json_must_not_parse.sql +++ b/tests/queries/0_stateless/00966_invalid_json_must_not_parse.sql @@ -3,6 +3,8 @@ SET allow_simdjson=1; SELECT JSONLength('"HX-='); SELECT JSONLength('[9]\0\x42\xD3\x36\xE3'); SELECT JSONLength(unhex('5B30000E06D7AA5D')); +SELECT 
JSONLength('{"success"test:"123"}'); +SELECT isValidJSON('{"success"test:"123"}'); SET allow_simdjson=0; @@ -10,3 +12,5 @@ SET allow_simdjson=0; SELECT JSONLength('"HX-='); SELECT JSONLength('[9]\0\x42\xD3\x36\xE3'); SELECT JSONLength(unhex('5B30000E06D7AA5D')); +SELECT JSONLength('{"success"test:"123"}'); +SELECT isValidJSON('{"success"test:"123"}'); diff --git a/tests/queries/0_stateless/00966_live_view_watch_events_http.py b/tests/queries/0_stateless/00966_live_view_watch_events_http.py index 3d407ec5602..1c00a5d1236 100755 --- a/tests/queries/0_stateless/00966_live_view_watch_events_http.py +++ b/tests/queries/0_stateless/00966_live_view_watch_events_http.py @@ -28,13 +28,14 @@ with client(name='client1>', log=log) as client1: client1.expect(prompt) - with http_client({'method':'GET', 'url': '/?allow_experimental_live_view=1&query=WATCH%20test.lv%20EVENTS'}, name='client2>', log=log) as client2: - client2.expect('.*1\n') - client1.send('INSERT INTO test.mt VALUES (1),(2),(3)') + try: + with http_client({'method':'GET', 'url': '/?allow_experimental_live_view=1&query=WATCH%20test.lv%20EVENTS'}, name='client2>', log=log) as client2: + client2.expect('.*1\n') + client1.send('INSERT INTO test.mt VALUES (1),(2),(3)') + client1.expect(prompt) + client2.expect('.*2\n') + finally: + client1.send('DROP TABLE test.lv') + client1.expect(prompt) + client1.send('DROP TABLE test.mt') client1.expect(prompt) - client2.expect('.*2\n') - - client1.send('DROP TABLE test.lv') - client1.expect(prompt) - client1.send('DROP TABLE test.mt') - client1.expect(prompt) diff --git a/tests/queries/0_stateless/00967_live_view_watch_http.py b/tests/queries/0_stateless/00967_live_view_watch_http.py index d26bb5402e7..c41b9f0c861 100755 --- a/tests/queries/0_stateless/00967_live_view_watch_http.py +++ b/tests/queries/0_stateless/00967_live_view_watch_http.py @@ -27,14 +27,14 @@ with client(name='client1>', log=log) as client1: client1.send('CREATE LIVE VIEW test.lv AS SELECT sum(a) FROM test.mt') client1.expect(prompt) - - with http_client({'method':'GET', 'url':'/?allow_experimental_live_view=1&query=WATCH%20test.lv'}, name='client2>', log=log) as client2: - client2.expect('.*0\t1\n') - client1.send('INSERT INTO test.mt VALUES (1),(2),(3)') + try: + with http_client({'method':'GET', 'url':'/?allow_experimental_live_view=1&query=WATCH%20test.lv'}, name='client2>', log=log) as client2: + client2.expect('.*0\t1\n') + client1.send('INSERT INTO test.mt VALUES (1),(2),(3)') + client1.expect(prompt) + client2.expect('.*6\t2\n') + finally: + client1.send('DROP TABLE test.lv') + client1.expect(prompt) + client1.send('DROP TABLE test.mt') client1.expect(prompt) - client2.expect('.*6\t2\n') - - client1.send('DROP TABLE test.lv') - client1.expect(prompt) - client1.send('DROP TABLE test.mt') - client1.expect(prompt) diff --git a/tests/queries/0_stateless/00971_merge_tree_uniform_read_distribution_and_max_rows_to_read.sql b/tests/queries/0_stateless/00971_merge_tree_uniform_read_distribution_and_max_rows_to_read.sql index d31989d65bd..5abb1af620a 100644 --- a/tests/queries/0_stateless/00971_merge_tree_uniform_read_distribution_and_max_rows_to_read.sql +++ b/tests/queries/0_stateless/00971_merge_tree_uniform_read_distribution_and_max_rows_to_read.sql @@ -10,7 +10,8 @@ SELECT count() FROM merge_tree; SET max_rows_to_read = 900000; -SELECT count() FROM merge_tree WHERE not ignore(); -- { serverError 158 } -SELECT count() FROM merge_tree WHERE not ignore(); -- { serverError 158 } +-- constant ignore will be pruned by part pruner. 
ignore(*) is used. +SELECT count() FROM merge_tree WHERE not ignore(*); -- { serverError 158 } +SELECT count() FROM merge_tree WHERE not ignore(*); -- { serverError 158 } DROP TABLE merge_tree; diff --git a/tests/queries/0_stateless/00975_indices_mutation_replicated_zookeeper.reference b/tests/queries/0_stateless/00975_indices_mutation_replicated_zookeeper_long.reference similarity index 100% rename from tests/queries/0_stateless/00975_indices_mutation_replicated_zookeeper.reference rename to tests/queries/0_stateless/00975_indices_mutation_replicated_zookeeper_long.reference diff --git a/tests/queries/0_stateless/00975_indices_mutation_replicated_zookeeper.sh b/tests/queries/0_stateless/00975_indices_mutation_replicated_zookeeper_long.sh similarity index 90% rename from tests/queries/0_stateless/00975_indices_mutation_replicated_zookeeper.sh rename to tests/queries/0_stateless/00975_indices_mutation_replicated_zookeeper_long.sh index 81c0c563db1..22c404c7712 100755 --- a/tests/queries/0_stateless/00975_indices_mutation_replicated_zookeeper.sh +++ b/tests/queries/0_stateless/00975_indices_mutation_replicated_zookeeper_long.sh @@ -17,17 +17,18 @@ CREATE TABLE indices_mutaions1 i64 Int64, i32 Int32, INDEX idx (i64, u64 * i64) TYPE minmax GRANULARITY 1 -) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test_00975/indices_mutaions', 'r1') +) ENGINE = ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/indices_mutaions', 'r1') PARTITION BY i32 ORDER BY u64 SETTINGS index_granularity = 2; + CREATE TABLE indices_mutaions2 ( u64 UInt64, i64 Int64, i32 Int32, INDEX idx (i64, u64 * i64) TYPE minmax GRANULARITY 1 -) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test_00975/indices_mutaions', 'r2') +) ENGINE = ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/indices_mutaions', 'r2') PARTITION BY i32 ORDER BY u64 SETTINGS index_granularity = 2;" diff --git a/tests/queries/0_stateless/00976_system_stop_ttl_merges.sql b/tests/queries/0_stateless/00976_system_stop_ttl_merges.sql index 41f2428d9e6..b27e4275d5d 100644 --- a/tests/queries/0_stateless/00976_system_stop_ttl_merges.sql +++ b/tests/queries/0_stateless/00976_system_stop_ttl_merges.sql @@ -2,7 +2,7 @@ drop table if exists ttl; create table ttl (d Date, a Int) engine = MergeTree order by a partition by toDayOfMonth(d) ttl d + interval 1 day; -system stop ttl merges; +system stop ttl merges ttl; insert into ttl values (toDateTime('2000-10-10 00:00:00'), 1), (toDateTime('2000-10-10 00:00:00'), 2) insert into ttl values (toDateTime('2100-10-10 00:00:00'), 3), (toDateTime('2100-10-10 00:00:00'), 4); @@ -11,7 +11,7 @@ select sleep(1) format Null; -- wait if very fast merge happen optimize table ttl partition 10 final; select * from ttl order by d, a; -system start ttl merges; +system start ttl merges ttl; optimize table ttl partition 10 final; select * from ttl order by d, a; diff --git a/tests/queries/0_stateless/00979_live_view_watch_continuous_aggregates.py b/tests/queries/0_stateless/00979_live_view_watch_continuous_aggregates.py index 7924aa15d0c..5f5f4d7a960 100755 --- a/tests/queries/0_stateless/00979_live_view_watch_continuous_aggregates.py +++ b/tests/queries/0_stateless/00979_live_view_watch_continuous_aggregates.py @@ -30,7 +30,6 @@ with client(name='client1>', log=log) as client1, client(name='client2>', log=lo client1.send('CREATE LIVE VIEW test.lv AS SELECT toStartOfDay(time) AS day, location, avg(temperature) FROM test.mt GROUP BY day, location ORDER BY day, location') 
     client1.expect(prompt)
     client1.send('WATCH test.lv FORMAT CSVWithNames')
-    client1.expect(r'_version')
     client2.send("INSERT INTO test.mt VALUES ('2019-01-01 00:00:00','New York',60),('2019-01-01 00:10:00','New York',70)")
     client2.expect(prompt)
     client1.expect(r'"2019-01-01 00:00:00","New York",65')
@@ -60,7 +59,7 @@ with client(name='client1>', log=log) as client1, client(name='client2>', log=lo
     match = client1.expect('(%s)|([#\$] )' % prompt)
     if match.groups()[1]:
         client1.send(client1.command)
-    client1.expect(prompt)
+    client1.expect(prompt)
     client1.send('DROP TABLE test.lv')
     client1.expect(prompt)
     client1.send('DROP TABLE test.mt')
diff --git a/tests/queries/0_stateless/00987_distributed_stack_overflow.sql b/tests/queries/0_stateless/00987_distributed_stack_overflow.sql
index 4baa6969b31..d2e2b8f37ef 100644
--- a/tests/queries/0_stateless/00987_distributed_stack_overflow.sql
+++ b/tests/queries/0_stateless/00987_distributed_stack_overflow.sql
@@ -5,13 +5,13 @@ DROP TABLE IF EXISTS distr2;
 CREATE TABLE distr (x UInt8) ENGINE = Distributed(test_shard_localhost, currentDatabase(), distr); -- { serverError 269 }

 CREATE TABLE distr0 (x UInt8) ENGINE = Distributed(test_shard_localhost, '', distr0);
-SELECT * FROM distr0; -- { serverError 306 }
+SELECT * FROM distr0; -- { serverError 581 }

 CREATE TABLE distr1 (x UInt8) ENGINE = Distributed(test_shard_localhost, currentDatabase(), distr2);
 CREATE TABLE distr2 (x UInt8) ENGINE = Distributed(test_shard_localhost, currentDatabase(), distr1);
-SELECT * FROM distr1; -- { serverError 306 }
-SELECT * FROM distr2; -- { serverError 306 }
+SELECT * FROM distr1; -- { serverError 581 }
+SELECT * FROM distr2; -- { serverError 581 }

 DROP TABLE distr0;
 DROP TABLE distr1;
diff --git a/tests/queries/0_stateless/00992_system_parts_race_condition_zookeeper_long.reference b/tests/queries/0_stateless/00992_system_parts_race_condition_zookeeper_long.reference
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/tests/queries/0_stateless/00992_system_parts_race_condition_zookeeper.sh b/tests/queries/0_stateless/00992_system_parts_race_condition_zookeeper_long.sh
similarity index 81%
rename from tests/queries/0_stateless/00992_system_parts_race_condition_zookeeper.sh
rename to tests/queries/0_stateless/00992_system_parts_race_condition_zookeeper_long.sh
index 613e032f42a..fe6246e02f6 100755
--- a/tests/queries/0_stateless/00992_system_parts_race_condition_zookeeper.sh
+++ b/tests/queries/0_stateless/00992_system_parts_race_condition_zookeeper_long.sh
@@ -10,8 +10,8 @@ $CLICKHOUSE_CLIENT -n -q "
     DROP TABLE IF EXISTS alter_table;
     DROP TABLE IF EXISTS alter_table2;

-    CREATE TABLE alter_table (a UInt8, b Int16, c Float32, d String, e Array(UInt8), f Nullable(UUID), g Tuple(UInt8, UInt16)) ENGINE = ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_DATABASE.alter_table', 'r1') ORDER BY a PARTITION BY b % 10 SETTINGS old_parts_lifetime = 1, cleanup_delay_period = 1, cleanup_delay_period_random_add = 0;
-    CREATE TABLE alter_table2 (a UInt8, b Int16, c Float32, d String, e Array(UInt8), f Nullable(UUID), g Tuple(UInt8, UInt16)) ENGINE = ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_DATABASE.alter_table', 'r2') ORDER BY a PARTITION BY b % 10 SETTINGS old_parts_lifetime = 1, cleanup_delay_period = 1, cleanup_delay_period_random_add = 0
+    CREATE TABLE alter_table (a UInt8, b Int16, c Float32, d String, e Array(UInt8), f Nullable(UUID), g Tuple(UInt8, UInt16)) ENGINE = ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/alter_table', 'r1') ORDER BY a PARTITION BY b % 10 SETTINGS old_parts_lifetime = 1, cleanup_delay_period = 1, cleanup_delay_period_random_add = 0;
+    CREATE TABLE alter_table2 (a UInt8, b Int16, c Float32, d String, e Array(UInt8), f Nullable(UUID), g Tuple(UInt8, UInt16)) ENGINE = ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/alter_table', 'r2') ORDER BY a PARTITION BY b % 10 SETTINGS old_parts_lifetime = 1, cleanup_delay_period = 1, cleanup_delay_period_random_add = 0
 "

 function thread1()
@@ -74,7 +74,7 @@ timeout $TIMEOUT bash -c thread5 2> /dev/null &

 wait

-$CLICKHOUSE_CLIENT -n -q "
-    DROP TABLE alter_table;
-    DROP TABLE alter_table2
-"
+$CLICKHOUSE_CLIENT -n -q "DROP TABLE alter_table;" &
+$CLICKHOUSE_CLIENT -n -q "DROP TABLE alter_table2;" &
+
+wait
diff --git a/tests/queries/0_stateless/00993_system_parts_race_condition_drop_zookeeper.sh b/tests/queries/0_stateless/00993_system_parts_race_condition_drop_zookeeper.sh
index 1731148f71f..d960d8ff91d 100755
--- a/tests/queries/0_stateless/00993_system_parts_race_condition_drop_zookeeper.sh
+++ b/tests/queries/0_stateless/00993_system_parts_race_condition_drop_zookeeper.sh
@@ -52,7 +52,7 @@ function thread6()
     while true; do
         REPLICA=$(($RANDOM % 10))
         $CLICKHOUSE_CLIENT -n -q "DROP TABLE IF EXISTS alter_table_$REPLICA;
-            CREATE TABLE alter_table_$REPLICA (a UInt8, b Int16, c Float32, d String, e Array(UInt8), f Nullable(UUID), g Tuple(UInt8, UInt16)) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test_00993/alter_table', 'r_$REPLICA') ORDER BY a PARTITION BY b % 10 SETTINGS old_parts_lifetime = 1, cleanup_delay_period = 0, cleanup_delay_period_random_add = 0;";
+            CREATE TABLE alter_table_$REPLICA (a UInt8, b Int16, c Float32, d String, e Array(UInt8), f Nullable(UUID), g Tuple(UInt8, UInt16)) ENGINE = ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/alter_table', 'r_$REPLICA') ORDER BY a PARTITION BY b % 10 SETTINGS old_parts_lifetime = 1, cleanup_delay_period = 0, cleanup_delay_period_random_add = 0;";
         sleep 0.$RANDOM;
     done
 }
diff --git a/tests/queries/0_stateless/01013_sync_replica_timeout_zookeeper.sh b/tests/queries/0_stateless/01013_sync_replica_timeout_zookeeper.sh
index 724caa7f414..89b178a38ea 100755
--- a/tests/queries/0_stateless/01013_sync_replica_timeout_zookeeper.sh
+++ b/tests/queries/0_stateless/01013_sync_replica_timeout_zookeeper.sh
@@ -4,7 +4,6 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
 # shellcheck source=../shell_config.sh
 . "$CURDIR"/../shell_config.sh
-
 R1=table_1013_1
 R2=table_1013_2
@@ -12,8 +11,8 @@ ${CLICKHOUSE_CLIENT} -n -q "
     DROP TABLE IF EXISTS $R1;
     DROP TABLE IF EXISTS $R2;
-    CREATE TABLE $R1 (x UInt32) ENGINE ReplicatedMergeTree('/clickhouse/tables/${CLICKHOUSE_DATABASE}.table_1013', 'r1') ORDER BY x;
-    CREATE TABLE $R2 (x UInt32) ENGINE ReplicatedMergeTree('/clickhouse/tables/${CLICKHOUSE_DATABASE}.table_1013', 'r2') ORDER BY x;
+    CREATE TABLE $R1 (x UInt32) ENGINE ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/table_1013', 'r1') ORDER BY x;
+    CREATE TABLE $R2 (x UInt32) ENGINE ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/table_1013', 'r2') ORDER BY x;

     SYSTEM STOP FETCHES $R2;
     INSERT INTO $R1 VALUES (1)
diff --git a/tests/queries/0_stateless/01017_mutations_with_nondeterministic_functions_zookeeper.sh b/tests/queries/0_stateless/01017_mutations_with_nondeterministic_functions_zookeeper.sh
index d7d0dab71b9..a10e5fb2788 100755
--- a/tests/queries/0_stateless/01017_mutations_with_nondeterministic_functions_zookeeper.sh
+++ b/tests/queries/0_stateless/01017_mutations_with_nondeterministic_functions_zookeeper.sh
@@ -4,7 +4,6 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
 # shellcheck source=../shell_config.sh
 . "$CURDIR"/../shell_config.sh
-
 R1=table_1017_1
 R2=table_1017_2
 T1=table_1017_merge
@@ -29,8 +28,8 @@ ${CLICKHOUSE_CLIENT} -n -q "
     CREATE TABLE lookup_table (y UInt32, y_new UInt32) ENGINE = Join(ANY, LEFT, y);
     INSERT INTO lookup_table VALUES(1,1001),(2,1002);

-    CREATE TABLE $R1 (x UInt32, y UInt32) ENGINE ReplicatedMergeTree('/clickhouse/tables/${CLICKHOUSE_DATABASE}.table_1017', 'r1') ORDER BY x;
-    CREATE TABLE $R2 (x UInt32, y UInt32) ENGINE ReplicatedMergeTree('/clickhouse/tables/${CLICKHOUSE_DATABASE}.table_1017', 'r2') ORDER BY x;
+    CREATE TABLE $R1 (x UInt32, y UInt32) ENGINE ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/table_1017', 'r1') ORDER BY x;
+    CREATE TABLE $R2 (x UInt32, y UInt32) ENGINE ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/table_1017', 'r2') ORDER BY x;
     CREATE TABLE $T1 (x UInt32, y UInt32) ENGINE MergeTree() ORDER BY x;

     INSERT INTO $R1 VALUES (0, 1)(1, 2)(2, 3)(3, 4);
diff --git a/tests/queries/0_stateless/01018_Distributed__shard_num.reference b/tests/queries/0_stateless/01018_Distributed__shard_num.reference
index f4b92011759..b2c8b77554b 100644
--- a/tests/queries/0_stateless/01018_Distributed__shard_num.reference
+++ b/tests/queries/0_stateless/01018_Distributed__shard_num.reference
@@ -27,8 +27,8 @@ remote(Distributed)
 1 100
 1 100
 JOIN system.clusters
-1 10 localhost ::1 9000
-1 20 localhost ::1 9000
+1 10 localhost 1 9000
+1 20 localhost 1 9000
 dist_3
 100 foo
 foo 100 foo
diff --git a/tests/queries/0_stateless/01018_Distributed__shard_num.sql b/tests/queries/0_stateless/01018_Distributed__shard_num.sql
index f6d5f23eca8..7d4aaf42473 100644
--- a/tests/queries/0_stateless/01018_Distributed__shard_num.sql
+++ b/tests/queries/0_stateless/01018_Distributed__shard_num.sql
@@ -43,13 +43,13 @@ SELECT _shard_num, key FROM remote('127.0.0.1', currentDatabase(), dist_2) order
 -- JOIN system.clusters
 SELECT 'JOIN system.clusters';
-SELECT a._shard_num, a.key, b.host_name, b.host_address, b.port
+SELECT a._shard_num, a.key, b.host_name, b.host_address IN ('::1', '127.0.0.1'), b.port
 FROM (SELECT *, _shard_num FROM dist_1) a
 JOIN system.clusters b
 ON a._shard_num = b.shard_num
 WHERE b.cluster = 'test_cluster_two_shards_localhost';
-SELECT _shard_num, key, b.host_name, b.host_address, b.port
+SELECT _shard_num, key, b.host_name, b.host_address IN ('::1', '127.0.0.1'), b.port
 FROM dist_1 a
 JOIN system.clusters b
 ON _shard_num = b.shard_num
@@ -58,7 +58,7 @@ WHERE b.cluster = 'test_cluster_two_shards_localhost'; -- { serverError 403 }
 -- rewrite does not work with aliases, hence Missing columns (47)
 SELECT a._shard_num, key FROM dist_1 a; -- { serverError 47; }
 -- the same with JOIN, just in case
-SELECT a._shard_num, a.key, b.host_name, b.host_address, b.port
+SELECT a._shard_num, a.key, b.host_name, b.host_address IN ('::1', '127.0.0.1'), b.port
 FROM dist_1 a
 JOIN system.clusters b
 ON a._shard_num = b.shard_num
diff --git a/tests/queries/0_stateless/01018_ddl_dictionaries_create.reference b/tests/queries/0_stateless/01018_ddl_dictionaries_create.reference
index e591300eddc..a4e2f380eb8 100644
--- a/tests/queries/0_stateless/01018_ddl_dictionaries_create.reference
+++ b/tests/queries/0_stateless/01018_ddl_dictionaries_create.reference
@@ -12,7 +12,10 @@ db_01018 dict1
 ==DROP DICTIONARY
 0
 =DICTIONARY in Memory DB
-0
+CREATE DICTIONARY memory_db.dict2\n(\n `key_column` UInt64 DEFAULT 0 INJECTIVE,\n `second_column` UInt8 DEFAULT 1 EXPRESSION rand() % 222,\n `third_column` String DEFAULT \'qqq\'\n)\nPRIMARY KEY key_column\nSOURCE(CLICKHOUSE(HOST \'localhost\' PORT tcpPort() USER \'default\' TABLE \'table_for_dict\' PASSWORD \'\' DB \'database_for_dict_01018\'))\nLIFETIME(MIN 1 MAX 10)\nLAYOUT(FLAT())
+dict2
+1
+memory_db dict2
 =DICTIONARY in Lazy DB
 =DROP DATABASE WITH DICTIONARY
 dict4
diff --git a/tests/queries/0_stateless/01018_ddl_dictionaries_create.sql b/tests/queries/0_stateless/01018_ddl_dictionaries_create.sql
index f7d34071eb0..dd62f1451a8 100644
--- a/tests/queries/0_stateless/01018_ddl_dictionaries_create.sql
+++ b/tests/queries/0_stateless/01018_ddl_dictionaries_create.sql
@@ -89,9 +89,9 @@ CREATE DICTIONARY memory_db.dict2
 PRIMARY KEY key_column
 SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() USER 'default' TABLE 'table_for_dict' PASSWORD '' DB 'database_for_dict_01018'))
 LIFETIME(MIN 1 MAX 10)
-LAYOUT(FLAT()); -- {serverError 48}
+LAYOUT(FLAT());

-SHOW CREATE DICTIONARY memory_db.dict2; -- {serverError 487}
+SHOW CREATE DICTIONARY memory_db.dict2;

 SHOW DICTIONARIES FROM memory_db LIKE 'dict2';
@@ -114,7 +114,7 @@ CREATE DICTIONARY lazy_db.dict3
 PRIMARY KEY key_column, second_column
 SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() USER 'default' TABLE 'table_for_dict' PASSWORD '' DB 'database_for_dict_01018'))
 LIFETIME(MIN 1 MAX 10)
-LAYOUT(COMPLEX_KEY_HASHED()); -- {serverError 48}
+LAYOUT(COMPLEX_KEY_HASHED()); --{serverError 1}

 DROP DATABASE IF EXISTS lazy_db;
diff --git a/tests/queries/0_stateless/01018_ip_dictionary.reference b/tests/queries/0_stateless/01018_ip_dictionary_long.reference
similarity index 100%
rename from tests/queries/0_stateless/01018_ip_dictionary.reference
rename to tests/queries/0_stateless/01018_ip_dictionary_long.reference
diff --git a/tests/queries/0_stateless/01018_ip_dictionary.sql b/tests/queries/0_stateless/01018_ip_dictionary_long.sql
similarity index 99%
rename from tests/queries/0_stateless/01018_ip_dictionary.sql
rename to tests/queries/0_stateless/01018_ip_dictionary_long.sql
index 5df1afcd559..2abd51cc9fe 100644
--- a/tests/queries/0_stateless/01018_ip_dictionary.sql
+++ b/tests/queries/0_stateless/01018_ip_dictionary_long.sql
@@ -41,6 +41,9 @@ SOURCE(CLICKHOUSE(host 'localhost' port 9000 user 'default' db 'database_for_dic
 LAYOUT(IP_TRIE())
 LIFETIME(MIN 10 MAX 100);

+-- fuzzer
+SELECT '127.0.0.0/24' = dictGetString('database_for_dict.dict_ipv4_trie', 'prefixprefixprefixprefix', tuple(IPv4StringToNum('127.0.0.0127.0.0.0'))); -- { serverError 36 }
+
 SELECT 0 == dictGetUInt32('database_for_dict.dict_ipv4_trie', 'asn', tuple(IPv4StringToNum('0.0.0.0')));
 SELECT 1 == dictGetUInt32('database_for_dict.dict_ipv4_trie', 'asn', tuple(IPv4StringToNum('128.0.0.0')));
 SELECT 2 == dictGetUInt32('database_for_dict.dict_ipv4_trie', 'asn', tuple(IPv4StringToNum('192.0.0.0')));
diff --git a/tests/queries/0_stateless/01034_move_partition_from_table_zookeeper.sh b/tests/queries/0_stateless/01034_move_partition_from_table_zookeeper.sh
index 11c7296932c..ae3dd7851c8 100755
--- a/tests/queries/0_stateless/01034_move_partition_from_table_zookeeper.sh
+++ b/tests/queries/0_stateless/01034_move_partition_from_table_zookeeper.sh
@@ -26,8 +26,8 @@ function query_with_retry
 $CLICKHOUSE_CLIENT --query="DROP TABLE IF EXISTS src;"
 $CLICKHOUSE_CLIENT --query="DROP TABLE IF EXISTS dst;"

-$CLICKHOUSE_CLIENT --query="CREATE TABLE src (p UInt64, k String, d UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/$CLICKHOUSE_DATABASE/src1', '1') PARTITION BY p ORDER BY k;"
-$CLICKHOUSE_CLIENT --query="CREATE TABLE dst (p UInt64, k String, d UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/$CLICKHOUSE_DATABASE/dst1', '1') PARTITION BY p ORDER BY k SETTINGS old_parts_lifetime=1, cleanup_delay_period=1, cleanup_delay_period_random_add=0;"
+$CLICKHOUSE_CLIENT --query="CREATE TABLE src (p UInt64, k String, d UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/src1', '1') PARTITION BY p ORDER BY k;"
+$CLICKHOUSE_CLIENT --query="CREATE TABLE dst (p UInt64, k String, d UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/dst1', '1') PARTITION BY p ORDER BY k SETTINGS old_parts_lifetime=1, cleanup_delay_period=1, cleanup_delay_period_random_add=0;"

 $CLICKHOUSE_CLIENT --query="INSERT INTO src VALUES (0, '0', 1);"
 $CLICKHOUSE_CLIENT --query="INSERT INTO src VALUES (1, '0', 1);"
@@ -56,8 +56,8 @@ $CLICKHOUSE_CLIENT --query="DROP TABLE dst;"

 $CLICKHOUSE_CLIENT --query="SELECT 'MOVE incompatible schema missing column';"

-$CLICKHOUSE_CLIENT --query="CREATE TABLE src (p UInt64, k String, d UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/$CLICKHOUSE_DATABASE/src2', '1') PARTITION BY p ORDER BY (d, p);"
-$CLICKHOUSE_CLIENT --query="CREATE TABLE dst (p UInt64, d UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/$CLICKHOUSE_DATABASE/dst2', '1') PARTITION BY p ORDER BY (d, p) SETTINGS old_parts_lifetime=1, cleanup_delay_period=1, cleanup_delay_period_random_add=0;"
+$CLICKHOUSE_CLIENT --query="CREATE TABLE src (p UInt64, k String, d UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/src2', '1') PARTITION BY p ORDER BY (d, p);"
+$CLICKHOUSE_CLIENT --query="CREATE TABLE dst (p UInt64, d UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/dst2', '1') PARTITION BY p ORDER BY (d, p) SETTINGS old_parts_lifetime=1, cleanup_delay_period=1, cleanup_delay_period_random_add=0;"

 $CLICKHOUSE_CLIENT --query="INSERT INTO src VALUES (0, '0', 1);"
 $CLICKHOUSE_CLIENT --query="INSERT INTO src VALUES (1, '0', 1);"
@@ -75,8 +75,8 @@ $CLICKHOUSE_CLIENT --query="DROP TABLE dst;"

 $CLICKHOUSE_CLIENT --query="SELECT 'MOVE incompatible schema different order by';"

-$CLICKHOUSE_CLIENT --query="CREATE TABLE src (p UInt64, k String, d UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/$CLICKHOUSE_DATABASE/src3', '1') PARTITION BY p ORDER BY (p, k, d);"
-$CLICKHOUSE_CLIENT --query="CREATE TABLE dst (p UInt64, k String, d UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/$CLICKHOUSE_DATABASE/dst3', '1') PARTITION BY p ORDER BY (d, k, p);"
+$CLICKHOUSE_CLIENT --query="CREATE TABLE src (p UInt64, k String, d UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/$CURR_DATABASE/src3', '1') PARTITION BY p ORDER BY (p, k, d);"
+$CLICKHOUSE_CLIENT --query="CREATE TABLE dst (p UInt64, k String, d UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/$CURR_DATABASE/dst3', '1') PARTITION BY p ORDER BY (d, k, p);"

 $CLICKHOUSE_CLIENT --query="INSERT INTO src VALUES (0, '0', 1);"
diff --git a/tests/queries/0_stateless/01035_concurrent_move_partition_from_table_zookeeper.sh b/tests/queries/0_stateless/01035_concurrent_move_partition_from_table_zookeeper.sh
index 4eb3cb9a7bd..7c15b795c36 100755
--- a/tests/queries/0_stateless/01035_concurrent_move_partition_from_table_zookeeper.sh
+++ b/tests/queries/0_stateless/01035_concurrent_move_partition_from_table_zookeeper.sh
@@ -9,8 +9,8 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
 $CLICKHOUSE_CLIENT --query="DROP TABLE IF EXISTS src;"
 $CLICKHOUSE_CLIENT --query="DROP TABLE IF EXISTS dst;"

-$CLICKHOUSE_CLIENT --query="CREATE TABLE src (p UInt64, k String) ENGINE = ReplicatedMergeTree('/clickhouse/$CLICKHOUSE_DATABASE/src', '1') PARTITION BY p ORDER BY k;"
-$CLICKHOUSE_CLIENT --query="CREATE TABLE dst (p UInt64, k String) ENGINE = ReplicatedMergeTree('/clickhouse/$CLICKHOUSE_DATABASE/dst', '1') PARTITION BY p ORDER BY k SETTINGS old_parts_lifetime=1, cleanup_delay_period=1, cleanup_delay_period_random_add=0;"
+$CLICKHOUSE_CLIENT --query="CREATE TABLE src (p UInt64, k String) ENGINE = ReplicatedMergeTree('/clickhouse/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/src', '1') PARTITION BY p ORDER BY k;"
+$CLICKHOUSE_CLIENT --query="CREATE TABLE dst (p UInt64, k String) ENGINE = ReplicatedMergeTree('/clickhouse/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/dst', '1') PARTITION BY p ORDER BY k SETTINGS old_parts_lifetime=1, cleanup_delay_period=1, cleanup_delay_period_random_add=0;"

 function thread1()
 {
diff --git a/tests/queries/0_stateless/01035_lc_empty_part_bug.sh b/tests/queries/0_stateless/01035_lc_empty_part_bug.sh
index b65cf87d1ca..185c4ef4a4e 100755
--- a/tests/queries/0_stateless/01035_lc_empty_part_bug.sh
+++ b/tests/queries/0_stateless/01035_lc_empty_part_bug.sh
@@ -8,7 +8,7 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
 ${CLICKHOUSE_CLIENT} --multiquery --query="
     DROP TABLE IF EXISTS lc_empty_part_bug;
-    create table lc_empty_part_bug (id UInt64, s String) Engine=MergeTree ORDER BY id;
+    create table lc_empty_part_bug (id UInt64, s String) Engine=MergeTree ORDER BY id SETTINGS number_of_free_entries_in_pool_to_execute_mutation=0;
     insert into lc_empty_part_bug select number as id, toString(rand()) from numbers(100);
     alter table lc_empty_part_bug delete where id < 100;
 " --mutations_sync=1
diff --git a/tests/queries/0_stateless/00411_long_accurate_number_comparison_float.sh b/tests/queries/0_stateless/01039_row_policy_dcl.sh
similarity index 50%
rename from tests/queries/0_stateless/00411_long_accurate_number_comparison_float.sh
rename to tests/queries/0_stateless/01039_row_policy_dcl.sh
index d03e02efc55..8c2249f2981 100755
--- a/tests/queries/0_stateless/00411_long_accurate_number_comparison_float.sh
+++ b/tests/queries/0_stateless/01039_row_policy_dcl.sh
@@ -4,6 +4,4 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
 # shellcheck source=../shell_config.sh
 . "$CURDIR"/../shell_config.sh

-# We should have correct env vars from shell_config.sh to run this test
-
-python3 "$CURDIR"/00411_long_accurate_number_comparison.python float
+${CLICKHOUSE_CLIENT} -q "SHOW POLICIES ON $CLICKHOUSE_DATABASE.*"
diff --git a/tests/queries/0_stateless/01039_row_policy_dcl.sql b/tests/queries/0_stateless/01039_row_policy_dcl.sql
deleted file mode 100644
index 14742a72914..00000000000
--- a/tests/queries/0_stateless/01039_row_policy_dcl.sql
+++ /dev/null
@@ -1 +0,0 @@
-SHOW POLICIES;
diff --git a/tests/queries/0_stateless/01045_zookeeper_system_mutations_with_parts_names.sh b/tests/queries/0_stateless/01045_zookeeper_system_mutations_with_parts_names.sh
index c035a692d12..6510fcf408d 100755
--- a/tests/queries/0_stateless/01045_zookeeper_system_mutations_with_parts_names.sh
+++ b/tests/queries/0_stateless/01045_zookeeper_system_mutations_with_parts_names.sh
@@ -47,7 +47,7 @@ ${CLICKHOUSE_CLIENT} --query="DROP TABLE IF EXISTS table_for_mutations"

 ${CLICKHOUSE_CLIENT} --query="DROP TABLE IF EXISTS replicated_table_for_mutations"

-${CLICKHOUSE_CLIENT} --query="CREATE TABLE replicated_table_for_mutations(k UInt32, v1 UInt64) ENGINE ReplicatedMergeTree('/clickhouse/tables/test_01045/replicated_table_for_mutations', '1') ORDER BY k PARTITION BY modulo(k, 2)"
+${CLICKHOUSE_CLIENT} --query="CREATE TABLE replicated_table_for_mutations(k UInt32, v1 UInt64) ENGINE ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/replicated_table_for_mutations', '1') ORDER BY k PARTITION BY modulo(k, 2)"

 ${CLICKHOUSE_CLIENT} --query="SYSTEM STOP MERGES replicated_table_for_mutations"
diff --git a/tests/queries/0_stateless/01049_join_low_card_bug.reference b/tests/queries/0_stateless/01049_join_low_card_bug.reference
index ece76c99662..b4ed8176652 100644
--- a/tests/queries/0_stateless/01049_join_low_card_bug.reference
+++ b/tests/queries/0_stateless/01049_join_low_card_bug.reference
@@ -38,22 +38,22 @@ str Nullable(String)
 \N str Nullable(String) LowCardinality(String)
 \N str Nullable(String) LowCardinality(String)
 \N str Nullable(String) LowCardinality(String)
- LowCardinality(String)
-str LowCardinality(String)
- LowCardinality(String)
-str LowCardinality(String)
- str LowCardinality(String)
- str LowCardinality(String)
- str LowCardinality(String)
- str LowCardinality(String)
- LowCardinality(String)
-str LowCardinality(String)
- LowCardinality(String)
-str LowCardinality(String)
- str LowCardinality(String)
- str LowCardinality(String)
- str LowCardinality(String)
- str LowCardinality(String)
+\N LowCardinality(Nullable(String))
+str LowCardinality(Nullable(String))
+\N LowCardinality(Nullable(String))
+str LowCardinality(Nullable(String))
+\N str LowCardinality(Nullable(String))
+\N str LowCardinality(Nullable(String))
+\N str LowCardinality(Nullable(String))
+\N str LowCardinality(Nullable(String))
+\N LowCardinality(Nullable(String))
+str LowCardinality(Nullable(String))
+\N LowCardinality(Nullable(String))
+str LowCardinality(Nullable(String))
+\N str LowCardinality(Nullable(String))
+\N str LowCardinality(Nullable(String))
+\N str LowCardinality(Nullable(String))
+\N str LowCardinality(Nullable(String))
 \N Nullable(String)
 str Nullable(String)
 \N Nullable(String)
@@ -62,14 +62,14 @@ str Nullable(String)
 \N str Nullable(String)
 \N str Nullable(String)
 \N str Nullable(String)
- LowCardinality(String)
-str LowCardinality(String)
- LowCardinality(String)
-str LowCardinality(String)
- str LowCardinality(String)
- str LowCardinality(String)
- str LowCardinality(String)
- str LowCardinality(String)
+\N LowCardinality(Nullable(String))
+str LowCardinality(Nullable(String))
+\N LowCardinality(Nullable(String))
+str LowCardinality(Nullable(String))
+\N str LowCardinality(Nullable(String))
+\N str LowCardinality(Nullable(String))
+\N str LowCardinality(Nullable(String))
+\N str LowCardinality(Nullable(String))
 \N Nullable(String)
 str Nullable(String)
 \N Nullable(String)
diff --git a/tests/queries/0_stateless/01049_join_low_card_bug.sql b/tests/queries/0_stateless/01049_join_low_card_bug.sql
index dee7f5923aa..07558770abf 100644
--- a/tests/queries/0_stateless/01049_join_low_card_bug.sql
+++ b/tests/queries/0_stateless/01049_join_low_card_bug.sql
@@ -76,7 +76,6 @@ SELECT l.lc, r.lc, toTypeName(l.lc), toTypeName(materialize(r.lc)) FROM nl AS l
 SELECT l.lc, r.lc, toTypeName(l.lc), toTypeName(materialize(r.lc)) FROM nl AS l FULL JOIN r_lc AS r USING (x);
 SELECT l.lc, r.lc, toTypeName(l.lc), toTypeName(materialize(r.lc)) FROM nl AS l FULL JOIN r_lc AS r USING (lc);

--- TODO: LC nullability
 SET join_use_nulls = 1;

 SELECT lc, toTypeName(lc) FROM l_lc AS l RIGHT JOIN r_lc AS r USING (x);
diff --git a/tests/queries/0_stateless/01053_ssd_dictionary.sql b/tests/queries/0_stateless/01053_ssd_dictionary.sql
index a23ae7e5e96..23a369cc8a6 100644
--- a/tests/queries/0_stateless/01053_ssd_dictionary.sql
+++ b/tests/queries/0_stateless/01053_ssd_dictionary.sql
@@ -76,7 +76,7 @@ CREATE DICTIONARY 01053_db.ssd_dict
 PRIMARY KEY id
 SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() USER 'default' TABLE 'table_for_dict' PASSWORD '' DB '01053_db'))
 LIFETIME(MIN 1000 MAX 2000)
-LAYOUT(SSD_CACHE(FILE_SIZE 8192 PATH '/var/lib/clickhouse/clickhouse_dicts/1d' BLOCK_SIZE 512 WRITE_BUFFER_SIZE 4096 MAX_STORED_KEYS 1000000));
+LAYOUT(SSD_CACHE(FILE_SIZE 8192 PATH '/var/lib/clickhouse/clickhouse_dicts/1d' BLOCK_SIZE 512 WRITE_BUFFER_SIZE 4096));

 SELECT 'UPDATE DICTIONARY';
 -- 118
@@ -142,7 +142,7 @@ CREATE DICTIONARY 01053_db.ssd_dict
 PRIMARY KEY id
 SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() USER 'default' TABLE 'table_for_dict' PASSWORD '' DB '01053_db'))
 LIFETIME(MIN 1000 MAX 2000)
-LAYOUT(SSD_CACHE(FILE_SIZE 8192 PATH '/var/lib/clickhouse/clickhouse_dicts/2d' BLOCK_SIZE 512 WRITE_BUFFER_SIZE 1024 MAX_STORED_KEYS 10));
+LAYOUT(SSD_CACHE(FILE_SIZE 8192 PATH '/var/lib/clickhouse/clickhouse_dicts/2d' BLOCK_SIZE 512 WRITE_BUFFER_SIZE 1024));

 SELECT 'UPDATE DICTIONARY (MT)';
 -- 118
diff --git a/tests/queries/0_stateless/01076_parallel_alter_replicated_zookeeper.sh b/tests/queries/0_stateless/01076_parallel_alter_replicated_zookeeper.sh
index ca453ee8f0d..efe518046a1 100755
--- a/tests/queries/0_stateless/01076_parallel_alter_replicated_zookeeper.sh
+++ b/tests/queries/0_stateless/01076_parallel_alter_replicated_zookeeper.sh
@@ -20,7 +20,7 @@ for i in $(seq $REPLICAS); do
 done

 for i in $(seq $REPLICAS); do
-    $CLICKHOUSE_CLIENT --query "CREATE TABLE concurrent_mutate_mt_$i (key UInt64, value1 UInt64, value2 String) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test_01076/concurrent_mutate_mt', '$i') ORDER BY key SETTINGS max_replicated_mutations_in_queue=1000, number_of_free_entries_in_pool_to_execute_mutation=0,max_replicated_merges_in_queue=1000,temporary_directories_lifetime=10,cleanup_delay_period=3,cleanup_delay_period_random_add=0"
+    $CLICKHOUSE_CLIENT --query "CREATE TABLE concurrent_mutate_mt_$i (key UInt64, value1 UInt64, value2 String) ENGINE = ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/concurrent_mutate_mt', '$i') ORDER BY key SETTINGS max_replicated_mutations_in_queue=1000, number_of_free_entries_in_pool_to_execute_mutation=0,max_replicated_merges_in_queue=1000,temporary_directories_lifetime=10,cleanup_delay_period=3,cleanup_delay_period_random_add=0"
 done

 $CLICKHOUSE_CLIENT --query "INSERT INTO concurrent_mutate_mt_1 SELECT number, number + 10, toString(number) from numbers(10)"
diff --git a/tests/queries/0_stateless/01079_bad_alters_zookeeper.reference b/tests/queries/0_stateless/01079_bad_alters_zookeeper_long.reference
similarity index 52%
rename from tests/queries/0_stateless/01079_bad_alters_zookeeper.reference
rename to tests/queries/0_stateless/01079_bad_alters_zookeeper_long.reference
index ebefe4b2a29..731cd871b3b 100644
--- a/tests/queries/0_stateless/01079_bad_alters_zookeeper.reference
+++ b/tests/queries/0_stateless/01079_bad_alters_zookeeper_long.reference
@@ -1,6 +1,6 @@
 Wrong column name.
-CREATE TABLE default.table_for_bad_alters\n(\n `key` UInt64,\n `value1` UInt8,\n `value2` String\n)\nENGINE = ReplicatedMergeTree(\'/clickhouse/tables/test_01079/table_for_bad_alters\', \'1\')\nORDER BY key\nSETTINGS index_granularity = 8192
-CREATE TABLE default.table_for_bad_alters\n(\n `key` UInt64,\n `value1` UInt8,\n `value2` UInt32\n)\nENGINE = ReplicatedMergeTree(\'/clickhouse/tables/test_01079/table_for_bad_alters\', \'1\')\nORDER BY key\nSETTINGS index_granularity = 8192
+CREATE TABLE default.table_for_bad_alters\n(\n `key` UInt64,\n `value1` UInt8,\n `value2` String\n)\nENGINE = ReplicatedMergeTree(\'/clickhouse/tables/01079_bad_alters_zookeeper_long_default/table_for_bad_alters\', \'1\')\nORDER BY key\nSETTINGS index_granularity = 8192
+CREATE TABLE default.table_for_bad_alters\n(\n `key` UInt64,\n `value1` UInt8,\n `value2` UInt32\n)\nENGINE = ReplicatedMergeTree(\'/clickhouse/tables/01079_bad_alters_zookeeper_long_default/table_for_bad_alters\', \'1\')\nORDER BY key\nSETTINGS index_granularity = 8192
 syntax error at begin of string.
 7 Hello
diff --git a/tests/queries/0_stateless/01079_bad_alters_zookeeper.sh b/tests/queries/0_stateless/01079_bad_alters_zookeeper_long.sh
similarity index 90%
rename from tests/queries/0_stateless/01079_bad_alters_zookeeper.sh
rename to tests/queries/0_stateless/01079_bad_alters_zookeeper_long.sh
index 1c0206453b7..6e79be90046 100755
--- a/tests/queries/0_stateless/01079_bad_alters_zookeeper.sh
+++ b/tests/queries/0_stateless/01079_bad_alters_zookeeper_long.sh
@@ -10,7 +10,7 @@ $CLICKHOUSE_CLIENT -n --query "CREATE TABLE table_for_bad_alters (
     key UInt64,
     value1 UInt8,
     value2 String
-) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test_01079/table_for_bad_alters', '1')
+) ENGINE = ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/table_for_bad_alters', '1')
 ORDER BY key;"

 $CLICKHOUSE_CLIENT --query "INSERT INTO table_for_bad_alters VALUES(1, 1, 'Hello');"
@@ -21,7 +21,7 @@ $CLICKHOUSE_CLIENT --query "ALTER TABLE table_for_bad_alters MODIFY COLUMN value

 sleep 2

-while [[ $($CLICKHOUSE_CLIENT --query "KILL MUTATION WHERE mutation_id='0000000000'" 2>&1) ]]; do
+while [[ $($CLICKHOUSE_CLIENT --query "KILL MUTATION WHERE mutation_id='0000000000' and database = '$CLICKHOUSE_DATABASE'" 2>&1) ]]; do
     sleep 1
 done
diff --git a/tests/queries/0_stateless/01079_parallel_alter_add_drop_column_zookeeper.sh b/tests/queries/0_stateless/01079_parallel_alter_add_drop_column_zookeeper.sh
index b3a5de8f9bc..7a3e3cf155f 100755
--- a/tests/queries/0_stateless/01079_parallel_alter_add_drop_column_zookeeper.sh
+++ b/tests/queries/0_stateless/01079_parallel_alter_add_drop_column_zookeeper.sh
@@ -12,7 +12,7 @@ done

 for i in $(seq $REPLICAS); do
-    $CLICKHOUSE_CLIENT --query "CREATE TABLE concurrent_alter_add_drop_$i (key UInt64, value0 UInt8) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test_01079/concurrent_alter_add_drop_column', '$i') ORDER BY key SETTINGS max_replicated_mutations_in_queue=1000, number_of_free_entries_in_pool_to_execute_mutation=0,max_replicated_merges_in_queue=1000"
+    $CLICKHOUSE_CLIENT --query "CREATE TABLE concurrent_alter_add_drop_$i (key UInt64, value0 UInt8) ENGINE = ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/concurrent_alter_add_drop_column', '$i') ORDER BY key SETTINGS max_replicated_mutations_in_queue=1000, number_of_free_entries_in_pool_to_execute_mutation=0,max_replicated_merges_in_queue=1000"
 done

 $CLICKHOUSE_CLIENT --query "INSERT INTO concurrent_alter_add_drop_1 SELECT number, number + 10 from numbers(100000)"
diff --git a/tests/queries/0_stateless/01079_parallel_alter_detach_table_zookeeper.sh b/tests/queries/0_stateless/01079_parallel_alter_detach_table_zookeeper.sh
index d5f0c987e5d..83f3196253a 100755
--- a/tests/queries/0_stateless/01079_parallel_alter_detach_table_zookeeper.sh
+++ b/tests/queries/0_stateless/01079_parallel_alter_detach_table_zookeeper.sh
@@ -11,7 +11,7 @@ for i in $(seq $REPLICAS); do
 done

 for i in $(seq $REPLICAS); do
-    $CLICKHOUSE_CLIENT --query "CREATE TABLE concurrent_alter_detach_$i (key UInt64, value1 UInt8, value2 UInt8) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test_01079/concurrent_alter_detach', '$i') ORDER BY key SETTINGS max_replicated_mutations_in_queue=1000, number_of_free_entries_in_pool_to_execute_mutation=0,max_replicated_merges_in_queue=1000,temporary_directories_lifetime=10,cleanup_delay_period=3,cleanup_delay_period_random_add=0"
+    $CLICKHOUSE_CLIENT --query "CREATE TABLE concurrent_alter_detach_$i (key UInt64, value1 UInt8, value2 UInt8) ENGINE = ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/concurrent_alter_detach', '$i') ORDER BY key SETTINGS max_replicated_mutations_in_queue=1000, number_of_free_entries_in_pool_to_execute_mutation=0,max_replicated_merges_in_queue=1000,temporary_directories_lifetime=10,cleanup_delay_period=3,cleanup_delay_period_random_add=0"
 done

 $CLICKHOUSE_CLIENT --query "INSERT INTO concurrent_alter_detach_1 SELECT number, number + 10, number from numbers(10)"
diff --git a/tests/queries/0_stateless/01079_parallel_alter_modify_zookeeper.reference b/tests/queries/0_stateless/01079_parallel_alter_modify_zookeeper_long.reference
similarity index 100%
rename from tests/queries/0_stateless/01079_parallel_alter_modify_zookeeper.reference
rename to tests/queries/0_stateless/01079_parallel_alter_modify_zookeeper_long.reference
diff --git a/tests/queries/0_stateless/01079_parallel_alter_modify_zookeeper.sh b/tests/queries/0_stateless/01079_parallel_alter_modify_zookeeper_long.sh
similarity index 94%
rename from tests/queries/0_stateless/01079_parallel_alter_modify_zookeeper.sh
rename to tests/queries/0_stateless/01079_parallel_alter_modify_zookeeper_long.sh
index 5b14c5a8543..9cca73b5eef 100755
--- a/tests/queries/0_stateless/01079_parallel_alter_modify_zookeeper.sh
+++ b/tests/queries/0_stateless/01079_parallel_alter_modify_zookeeper_long.sh
@@ -11,7 +11,7 @@ for i in $(seq $REPLICAS); do
 done

 for i in $(seq $REPLICAS); do
-    $CLICKHOUSE_CLIENT --query "CREATE TABLE concurrent_alter_mt_$i (key UInt64, value1 UInt64, value2 Int32) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test_01079/concurrent_alter_mt', '$i') ORDER BY key SETTINGS max_replicated_mutations_in_queue=1000, number_of_free_entries_in_pool_to_execute_mutation=0,max_replicated_merges_in_queue=1000"
+    $CLICKHOUSE_CLIENT --query "CREATE TABLE concurrent_alter_mt_$i (key UInt64, value1 UInt64, value2 Int32) ENGINE = ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/concurrent_alter_mt', '$i') ORDER BY key SETTINGS max_replicated_mutations_in_queue=1000, number_of_free_entries_in_pool_to_execute_mutation=0,max_replicated_merges_in_queue=1000"
 done

 $CLICKHOUSE_CLIENT --query "INSERT INTO concurrent_alter_mt_1 SELECT number, number + 10, number from numbers(10)"
diff --git a/tests/queries/0_stateless/01087_table_function_generate.reference b/tests/queries/0_stateless/01087_table_function_generate.reference
index d7cc6b0a933..d8886945caa 100644
--- a/tests/queries/0_stateless/01087_table_function_generate.reference
+++ b/tests/queries/0_stateless/01087_table_function_generate.reference
@@ -1,14 +1,14 @@
 UInt64 Int64 UInt32 Int32 UInt16 Int16 UInt8 Int8
-2804162938822577320 -2776833771540858 3467776823 1163715250 31161 -2916 220 -117
-7885388429666205427 -1363628932535403038 484159052 -308788249 43346 13638 143 -105
-4357435422797280898 1355609803008819271 4126129912 -852056475 34184 9166 49 33
-5935810273536892891 -804738887697332962 3109335413 -80126721 47877 -31421 186 -77
-368066018677693974 -4927165984347126295 1015254922 2026080544 46037 -29626 240 108
-8124171311239967992 -1179703908046100129 1720727300 -138469036 33028 -12819 138 16
-15657812979985370729 -5733276247123822513 3254757884 -500590428 3829 30527 3 -81
-18371568619324220532 -6793779541583578394 1686821450 -455892108 43475 2284 252 -90
-821735343441964030 3148260644406230976 256251035 -885069056 11643 11455 176 90
-9558594037060121162 -2907172753635797124 4276198376 1947296644 45922 26632 97 43
+2804162938822577320 -2776833771540858 3467776823 1163715250 23903 -2916 220 -117
+7885388429666205427 -1363628932535403038 484159052 -308788249 44305 13638 143 -105
+4357435422797280898 1355609803008819271 4126129912 -852056475 58858 9166 49 33
+5935810273536892891 -804738887697332962 3109335413 -80126721 13655 -31421 186 -77
+368066018677693974 -4927165984347126295 1015254922 2026080544 21973 -29626 240 108
+8124171311239967992 -1179703908046100129 1720727300 -138469036 36175 -12819 138 16
+15657812979985370729 -5733276247123822513 3254757884 -500590428 13193 30527 3 -81
+18371568619324220532 -6793779541583578394 1686821450 -455892108 52282 2284 252 -90
+821735343441964030 3148260644406230976 256251035 -885069056 55255 11455 176 90
+9558594037060121162 -2907172753635797124 4276198376 1947296644 48701 26632 97 43
 -
 Enum8(\'hello\' = 1, \'world\' = 5)
 hello
@@ -47,16 +47,16 @@ h
 o
 -
 Date DateTime DateTime(\'Europe/Moscow\')
-2077-09-17 1970-10-09 02:30:14 2074-08-12 11:31:27
-2005-11-19 2106-01-30 21:52:44 2097-05-25 07:54:35
-2007-02-24 2096-12-12 00:40:50 1988-08-10 11:16:31
-2019-06-30 2096-01-15 16:31:33 2063-10-20 08:48:17
-2039-01-16 2103-02-11 16:44:39 2036-10-09 04:29:10
-1994-11-03 1980-01-02 05:18:22 2055-12-23 12:33:52
-2083-08-20 2079-06-11 16:29:02 2000-12-05 17:46:24
-2030-06-25 2100-03-01 18:50:22 1993-03-25 01:19:12
-2087-03-16 2034-08-25 19:46:33 2045-12-10 16:47:40
-2006-04-30 2069-09-30 16:07:48 2084-08-26 03:33:12
+2113-06-12 1970-10-09 02:30:14 2074-08-12 11:31:27
+2103-11-03 2106-01-30 21:52:44 2097-05-25 07:54:35
+2008-03-16 2096-12-12 00:40:50 1988-08-10 11:16:31
+2126-11-26 2096-01-15 16:31:33 2063-10-20 08:48:17
+1991-02-02 2103-02-11 16:44:39 2036-10-09 04:29:10
+2096-11-03 1980-01-02 05:18:22 2055-12-23 12:33:52
+2024-12-16 2079-06-11 16:29:02 2000-12-05 17:46:24
+2085-04-07 2100-03-01 18:50:22 1993-03-25 01:19:12
+2135-05-30 2034-08-25 19:46:33 2045-12-10 16:47:40
+2094-12-18 2069-09-30 16:07:48 2084-08-26 03:33:12
 -
 DateTime64(3) DateTime64(6) DateTime64(6, \'Europe/Moscow\')
 1978-06-07 23:50:57.320 2013-08-28 10:21:54.010758 1991-08-25 16:23:26.140215
@@ -225,14 +225,14 @@ RL,{Xs\\tw
 [114] -84125.1554 ('2023-06-06 06:55:06.492','bf9ab359-ef9f-ad11-7e6c-160368b1e5ea')
 [124] -114719.5228 ('2010-11-11 22:57:23.722','c1046ffb-3415-cc3a-509a-e0005856d7d7')
 -
-[] 1900051923 { -189530.5846 h -5.6279699579452485e47 ('1980-08-29','2090-10-31 19:35:45','2038-07-15 05:22:51.805','63d9a12d-d1cf-1f3a-57c6-9bc6dddd0975') 8502
-[-102,-118] 392272782 Eb -14818.0200 o -2.664492247169164e59 ('2059-02-10','1994-07-16 00:40:02','2034-02-02 05:30:44.960','4fa09948-d32e-8903-63df-43ad759e43f7') DA61
-[-71] 775049089 \N -158115.1178 w 4.1323844687113747e-305 ('1997-02-15','2062-08-12 23:41:53','2074-02-13 10:29:40.749','c4a44dd7-d009-6f65-1494-9daedfa8a124') 83A7
-[-28,100] 3675466147 { -146685.1749 h 3.6676044396877755e142 ('1997-10-26','2002-06-26 03:33:41','2002-12-02 05:46:03.455','98714b2c-65e7-b5cb-a040-421e260c6d8d') 4B94
-[-23] 2514120753 (`u, -119659.6174 w 1.3231258347475906e34 ('2055-11-20','2080-03-28 08:11:25','2073-07-10 12:19:58.146','003b3b6b-088f-f941-aeb9-c26e0ee72b8e') 6B1F
-[11,-36] 3308237300 \N 171205.1896 \N 5.634708707075817e195 ('2009-03-18','2041-11-11 13:19:44','2044-03-18 17:34:17.814','9e60f4cb-6e55-1deb-5ac4-d66a86a8886d') 1964
-[39] 1614362420 `4A8P 157144.0630 o -1.1843143253872814e-255 ('1991-04-27','2066-03-02 11:07:49','1997-10-22 20:14:13.755','97685503-2609-d2b9-981c-02fd75d106cb') A35B
-[48,-120] 3848918261 1 is executing longer than distributed_ddl_task_timeout (=8) seconds. There are 1 unfinished hosts (0 of them are currently active), they are going to execute the query in background.
+throw
+localhost 9000 0 0 0
+localhost 9000 57 Code: 57, e.displayText() = Error: Table default.throw already exists. 0 0
+Received exception from server:
+Code: 57. Error: Received from localhost:9000. Error: There was an error on [localhost:9000]: Code: 57, e.displayText() = Error: Table default.throw already exists
+localhost 9000 0 1 0
+Received exception from server:
+Code: 159. Error: Received from localhost:9000. Error: Watching task is executing longer than distributed_ddl_task_timeout (=8) seconds. There are 1 unfinished hosts (0 of them are currently active), they are going to execute the query in background.
+null_status_on_timeout
+localhost 9000 0 0 0
+localhost 9000 57 Code: 57, e.displayText() = Error: Table default.null_status already exists. 0 0
+Received exception from server:
+Code: 57. Error: Received from localhost:9000. Error: There was an error on [localhost:9000]: Code: 57, e.displayText() = Error: Table default.null_status already exists
+localhost 9000 0 1 0
+localhost 1 \N \N 1 0
+never_throw
+localhost 9000 0 0 0
+localhost 9000 57 Code: 57, e.displayText() = Error: Table default.never_throw already exists. 0 0
+localhost 9000 0 1 0
+localhost 1 \N \N 1 0
diff --git a/tests/queries/0_stateless/01175_distributed_ddl_output_mode_long.sh b/tests/queries/0_stateless/01175_distributed_ddl_output_mode_long.sh
new file mode 100755
index 00000000000..66ceef21682
--- /dev/null
+++ b/tests/queries/0_stateless/01175_distributed_ddl_output_mode_long.sh
@@ -0,0 +1,39 @@
+#!/usr/bin/env bash
+
+CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
+# shellcheck source=../shell_config.sh
+. "$CURDIR"/../shell_config.sh
+
+
+$CLICKHOUSE_CLIENT -q "drop table if exists throw;"
+$CLICKHOUSE_CLIENT -q "drop table if exists null_status;"
+$CLICKHOUSE_CLIENT -q "drop table if exists never_throw;"
+
+CLICKHOUSE_CLIENT_OPT=$(echo ${CLICKHOUSE_CLIENT_OPT} | sed 's/'"--send_logs_level=${CLICKHOUSE_CLIENT_SERVER_LOGS_LEVEL}"'/--send_logs_level=fatal/g')
+
+CLIENT="$CLICKHOUSE_CLIENT_BINARY $CLICKHOUSE_CLIENT_OPT --distributed_ddl_task_timeout=8 --distributed_ddl_output_mode=none"
+$CLIENT -q "select value from system.settings where name='distributed_ddl_output_mode';"
+# Ok
+$CLIENT -q "create table throw on cluster test_shard_localhost (n int) engine=Memory;"
+# Table exists
+$CLIENT -q "create table throw on cluster test_shard_localhost (n int) engine=Memory;" 2>&1| grep -Fv "@ 0x" | sed "s/DB::Exception/Error/g" | sed "s/ (version.*)//" | sed "s/exists.. /exists/"
+# Timeout
+$CLIENT -q "drop table throw on cluster test_unavailable_shard;" 2>&1| grep -Fv "@ 0x" | sed "s/DB::Exception/Error/g" | sed "s/ (version.*)//" | sed "s/Watching task .* is executing longer/Watching task is executing longer/" | sed "s/background. /background./"
+
+CLIENT="$CLICKHOUSE_CLIENT_BINARY $CLICKHOUSE_CLIENT_OPT --distributed_ddl_task_timeout=8 --distributed_ddl_output_mode=throw"
+$CLIENT -q "select value from system.settings where name='distributed_ddl_output_mode';"
+$CLIENT -q "create table throw on cluster test_shard_localhost (n int) engine=Memory;"
+$CLIENT -q "create table throw on cluster test_shard_localhost (n int) engine=Memory;" 2>&1| grep -Fv "@ 0x" | sed "s/DB::Exception/Error/g" | sed "s/ (version.*)//" | sed "s/exists.. /exists/"
+$CLIENT -q "drop table throw on cluster test_unavailable_shard;" 2>&1| grep -Fv "@ 0x" | sed "s/DB::Exception/Error/g" | sed "s/ (version.*)//" | sed "s/Watching task .* is executing longer/Watching task is executing longer/" | sed "s/background. /background./"
+
+CLIENT="$CLICKHOUSE_CLIENT_BINARY $CLICKHOUSE_CLIENT_OPT --distributed_ddl_task_timeout=8 --distributed_ddl_output_mode=null_status_on_timeout"
+$CLIENT -q "select value from system.settings where name='distributed_ddl_output_mode';"
+$CLIENT -q "create table null_status on cluster test_shard_localhost (n int) engine=Memory;"
+$CLIENT -q "create table null_status on cluster test_shard_localhost (n int) engine=Memory;" 2>&1| grep -Fv "@ 0x" | sed "s/DB::Exception/Error/g" | sed "s/ (version.*)//" | sed "s/exists.. /exists/"
+$CLIENT -q "drop table null_status on cluster test_unavailable_shard;"
+
+CLIENT="$CLICKHOUSE_CLIENT_BINARY $CLICKHOUSE_CLIENT_OPT --distributed_ddl_task_timeout=8 --distributed_ddl_output_mode=never_throw"
+$CLIENT -q "select value from system.settings where name='distributed_ddl_output_mode';"
+$CLIENT -q "create table never_throw on cluster test_shard_localhost (n int) engine=Memory;"
+$CLIENT -q "create table never_throw on cluster test_shard_localhost (n int) engine=Memory;" 2>&1| sed "s/DB::Exception/Error/g" | sed "s/ (version.*)//"
+$CLIENT -q "drop table never_throw on cluster test_unavailable_shard;"
diff --git a/tests/queries/0_stateless/01181_db_atomic_drop_on_cluster.sql b/tests/queries/0_stateless/01181_db_atomic_drop_on_cluster.sql
index ca393c36617..24283a0e8e3 100644
--- a/tests/queries/0_stateless/01181_db_atomic_drop_on_cluster.sql
+++ b/tests/queries/0_stateless/01181_db_atomic_drop_on_cluster.sql
@@ -1,5 +1,4 @@
 DROP TABLE IF EXISTS test_repl ON CLUSTER test_shard_localhost SYNC;
-
 CREATE TABLE test_repl ON CLUSTER test_shard_localhost (n UInt64) ENGINE ReplicatedMergeTree('/clickhouse/test_01181/{database}/test_repl','r1') ORDER BY tuple();
 DETACH TABLE test_repl ON CLUSTER test_shard_localhost SYNC;
 ATTACH TABLE test_repl ON CLUSTER test_shard_localhost;
diff --git a/tests/queries/0_stateless/01184_insert_values_huge_strings.sh b/tests/queries/0_stateless/01184_insert_values_huge_strings.sh
deleted file mode 100755
index 9b63f401a59..00000000000
--- a/tests/queries/0_stateless/01184_insert_values_huge_strings.sh
+++ /dev/null
@@ -1,20 +0,0 @@
-#!/usr/bin/env bash
-
-CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
-# shellcheck source=../shell_config.sh
-. "$CURDIR"/../shell_config.sh
-
-$CLICKHOUSE_CLIENT -q "drop table if exists huge_strings"
-$CLICKHOUSE_CLIENT -q "create table huge_strings (n UInt64, l UInt64, s String, h UInt64) engine=MergeTree order by n"
-
-for _ in {1..10}; do
-    $CLICKHOUSE_CLIENT -q "select number, (rand() % 100*1000*1000) as l, repeat(randomString(l/1000/1000), 1000*1000) as s, cityHash64(s) from numbers(10) format Values" | $CLICKHOUSE_CLIENT -q "insert into huge_strings values" &
-    $CLICKHOUSE_CLIENT -q "select number % 10, (rand() % 100) as l, randomString(l) as s, cityHash64(s) from numbers(100000)" | $CLICKHOUSE_CLIENT -q "insert into huge_strings format TSV" &
-done;
-wait
-
-$CLICKHOUSE_CLIENT -q "select count() from huge_strings"
-$CLICKHOUSE_CLIENT -q "select sum(l = length(s)) from huge_strings"
-$CLICKHOUSE_CLIENT -q "select sum(h = cityHash64(s)) from huge_strings"
-
-$CLICKHOUSE_CLIENT -q "drop table huge_strings"
diff --git a/tests/queries/0_stateless/01184_insert_values_huge_strings.reference b/tests/queries/0_stateless/01184_long_insert_values_huge_strings.reference
similarity index 100%
rename from tests/queries/0_stateless/01184_insert_values_huge_strings.reference
rename to tests/queries/0_stateless/01184_long_insert_values_huge_strings.reference
diff --git a/tests/queries/0_stateless/01184_long_insert_values_huge_strings.sh b/tests/queries/0_stateless/01184_long_insert_values_huge_strings.sh
new file mode 100755
index 00000000000..f3b3431dffe
--- /dev/null
+++ b/tests/queries/0_stateless/01184_long_insert_values_huge_strings.sh
@@ -0,0 +1,22 @@
+#!/usr/bin/env bash
+
+CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
+# shellcheck source=../shell_config.sh
+. "$CURDIR"/../shell_config.sh
+
+$CLICKHOUSE_CLIENT -q "drop table if exists huge_strings"
+$CLICKHOUSE_CLIENT -q "create table huge_strings (n UInt64, l UInt64, s String, h UInt64) engine=MergeTree order by n"
+
+# Timeouts are increased, because test can be slow with sanitizers and parallel runs.
+
+for _ in {1..10}; do
+    $CLICKHOUSE_CLIENT --receive_timeout 100 --send_timeout 100 --connect_timeout 100 --query "select number, (rand() % 10*1000*1000) as l, repeat(randomString(l/1000/1000), 1000*1000) as s, cityHash64(s) from numbers(10) format Values" | $CLICKHOUSE_CLIENT --receive_timeout 100 --send_timeout 100 --connect_timeout 100 --query "insert into huge_strings values" &
+    $CLICKHOUSE_CLIENT --receive_timeout 100 --send_timeout 100 --connect_timeout 100 --query "select number % 10, (rand() % 10) as l, randomString(l) as s, cityHash64(s) from numbers(100000)" | $CLICKHOUSE_CLIENT --receive_timeout 100 --send_timeout 100 --connect_timeout 100 --query "insert into huge_strings format TSV" &
+done;
+wait
+
+$CLICKHOUSE_CLIENT -q "select count() from huge_strings"
+$CLICKHOUSE_CLIENT -q "select sum(l = length(s)) from huge_strings"
+$CLICKHOUSE_CLIENT -q "select sum(h = cityHash64(s)) from huge_strings"
+
+$CLICKHOUSE_CLIENT -q "drop table huge_strings"
diff --git a/tests/queries/0_stateless/01190_full_attach_syntax.sql b/tests/queries/0_stateless/01190_full_attach_syntax.sql
index 62e77f9870e..eb739782e5f 100644
--- a/tests/queries/0_stateless/01190_full_attach_syntax.sql
+++ b/tests/queries/0_stateless/01190_full_attach_syntax.sql
@@ -18,7 +18,7 @@ SHOW CREATE DICTIONARY test_01190.dict;
 CREATE TABLE log ENGINE = Log AS SELECT 'test' AS s;
 SHOW CREATE log;
 DETACH TABLE log;
-ATTACH DICTIONARY log; -- { serverError 487 }
+ATTACH DICTIONARY log; -- { serverError 80 }
 ATTACH TABLE log (s String) ENGINE = Log();
 SHOW CREATE log;
 SELECT * FROM log;
diff --git a/tests/queries/0_stateless/01191_rename_dictionary.sql b/tests/queries/0_stateless/01191_rename_dictionary.sql
index 3656d49f6e6..264c527ccca 100644
--- a/tests/queries/0_stateless/01191_rename_dictionary.sql
+++ b/tests/queries/0_stateless/01191_rename_dictionary.sql
@@ -13,11 +13,15 @@ INSERT INTO test_01191._ VALUES (42, 'test');
 SELECT name, status FROM system.dictionaries WHERE database='test_01191';
 SELECT name, engine FROM system.tables WHERE database='test_01191' ORDER BY name;

-RENAME DICTIONARY test_01191.table TO test_01191.table1; -- {serverError 80}
-EXCHANGE TABLES test_01191.table AND test_01191.dict; -- {serverError 48}
+RENAME DICTIONARY test_01191.table TO test_01191.table1; -- {serverError 60}
+EXCHANGE TABLES test_01191.table AND test_01191.dict; -- {serverError 60}
 EXCHANGE TABLES test_01191.dict AND test_01191.table; -- {serverError 80}
 RENAME TABLE test_01191.dict TO test_01191.dict1; -- {serverError 80}
-RENAME DICTIONARY test_01191.dict TO default.dict1; -- {serverError 48}
+
+CREATE DATABASE dummy_db ENGINE=Atomic;
+RENAME DICTIONARY test_01191.dict TO dummy_db.dict1;
+RENAME DICTIONARY dummy_db.dict1 TO test_01191.dict;
+DROP DATABASE dummy_db;
 RENAME DICTIONARY test_01191.dict TO test_01191.dict1;
diff --git a/tests/queries/0_stateless/01192_rename_database_zookeeper.sh b/tests/queries/0_stateless/01192_rename_database_zookeeper.sh
index 90b9baf4ebf..58bdfbf71ad 100755
--- a/tests/queries/0_stateless/01192_rename_database_zookeeper.sh
+++ b/tests/queries/0_stateless/01192_rename_database_zookeeper.sh
@@ -36,7 +36,7 @@ $CLICKHOUSE_CLIENT -q "SELECT count(n), sum(n) FROM test_01192_renamed.mt"
 # 5. check moving tables from Ordinary to Atomic (can be used to "alter" database engine)
 $CLICKHOUSE_CLIENT --default_database_engine=Ordinary -q "CREATE DATABASE test_01192"
 $CLICKHOUSE_CLIENT -q "CREATE TABLE test_01192.mt AS test_01192_renamed.mt ENGINE=MergeTree ORDER BY n"
-$CLICKHOUSE_CLIENT -q "CREATE TABLE test_01192.rmt AS test_01192_renamed.mt ENGINE=ReplicatedMergeTree('/test/01192/', '1') ORDER BY n"
+$CLICKHOUSE_CLIENT -q "CREATE TABLE test_01192.rmt AS test_01192_renamed.mt ENGINE=ReplicatedMergeTree('/test/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/', '1') ORDER BY n"
 $CLICKHOUSE_CLIENT -q "CREATE MATERIALIZED VIEW test_01192.mv TO test_01192.rmt AS SELECT * FROM test_01192.mt"
 $CLICKHOUSE_CLIENT -q "INSERT INTO test_01192.mt SELECT number FROM numbers(10)" && echo "inserted"
diff --git a/tests/queries/0_stateless/01213_alter_rename_column_zookeeper.reference b/tests/queries/0_stateless/01213_alter_rename_column_zookeeper.reference
index fc2a74d1a93..35385731ad3 100644
--- a/tests/queries/0_stateless/01213_alter_rename_column_zookeeper.reference
+++ b/tests/queries/0_stateless/01213_alter_rename_column_zookeeper.reference
@@ -1,7 +1,7 @@
 1
-CREATE TABLE default.table_for_rename_replicated\n(\n `date` Date,\n `key` UInt64,\n `value1` String,\n `value2` String,\n `value3` String\n)\nENGINE = ReplicatedMergeTree(\'/clickhouse/tables/test_01213/table_for_rename_replicated\', \'1\')\nPARTITION BY date\nORDER BY key\nSETTINGS index_granularity = 8192
+CREATE TABLE default.table_for_rename_replicated\n(\n `date` Date,\n `key` UInt64,\n `value1` String,\n `value2` String,\n `value3` String\n)\nENGINE = ReplicatedMergeTree(\'/clickhouse/tables/01213_alter_rename_column_zookeeper_default/table_for_rename_replicated\', \'1\')\nPARTITION BY date\nORDER BY key\nSETTINGS index_granularity = 8192
 renamed_value1
-CREATE TABLE default.table_for_rename_replicated\n(\n `date` Date,\n `key` UInt64,\n `renamed_value1` String,\n `value2` String,\n `value3` String\n)\nENGINE = ReplicatedMergeTree(\'/clickhouse/tables/test_01213/table_for_rename_replicated\', \'1\')\nPARTITION BY date\nORDER BY key\nSETTINGS index_granularity = 8192
+CREATE TABLE default.table_for_rename_replicated\n(\n `date` Date,\n `key` UInt64,\n `renamed_value1` String,\n `value2` String,\n `value3` String\n)\nENGINE = ReplicatedMergeTree(\'/clickhouse/tables/01213_alter_rename_column_zookeeper_default/table_for_rename_replicated\', \'1\')\nPARTITION BY date\nORDER BY key\nSETTINGS index_granularity = 8192
 1
 date key renamed_value1 value2 value3
 2019-10-02 1 1 1 1
diff --git a/tests/queries/0_stateless/01213_alter_rename_column_zookeeper.sh b/tests/queries/0_stateless/01213_alter_rename_column_zookeeper.sh
index 5ab0e800d39..5da8de70c46 100755
--- a/tests/queries/0_stateless/01213_alter_rename_column_zookeeper.sh
+++ b/tests/queries/0_stateless/01213_alter_rename_column_zookeeper.sh
@@ -15,7 +15,7 @@ CREATE TABLE table_for_rename_replicated
     value2 String,
     value3 String
 )
-ENGINE = ReplicatedMergeTree('/clickhouse/tables/test_01213/table_for_rename_replicated', '1')
+ENGINE = ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/table_for_rename_replicated', '1')
 PARTITION BY date
 ORDER BY key;
 "
diff --git a/tests/queries/0_stateless/01231_log_queries_min_type.sql b/tests/queries/0_stateless/01231_log_queries_min_type.sql
index 9659739b61d..b3540f3354b 100644
--- a/tests/queries/0_stateless/01231_log_queries_min_type.sql
+++ b/tests/queries/0_stateless/01231_log_queries_min_type.sql
@@ -2,26 +2,33 @@ set log_queries=1;
 select '01231_log_queries_min_type/QUERY_START';
 system flush logs;
-select count() from system.query_log where current_database = currentDatabase() and query like '%01231_log_queries_min_type/QUERY_START%' and query not like '%system.query_log%' and event_date = today() and event_time >= now() - interval 1 minute;
+select count() from system.query_log where current_database = currentDatabase()
+    and query like 'select \'01231_log_queries_min_type/QUERY_START%'
+    and event_date >= yesterday();

 set log_queries_min_type='EXCEPTION_BEFORE_START';
 select '01231_log_queries_min_type/EXCEPTION_BEFORE_START';
 system flush logs;
-select count() from system.query_log where current_database = currentDatabase() and query like '%01231_log_queries_min_type/EXCEPTION_BEFORE_START%' and query not like '%system.query_log%' and event_date = today() and event_time >= now() - interval 1 minute;
+select count() from system.query_log where current_database = currentDatabase()
+    and query like 'select \'01231_log_queries_min_type/EXCEPTION_BEFORE_START%'
+    and event_date >= yesterday();

 set max_rows_to_read='100K';
 set log_queries_min_type='EXCEPTION_WHILE_PROCESSING';
 select '01231_log_queries_min_type/EXCEPTION_WHILE_PROCESSING', max(number) from system.numbers limit 1e6; -- { serverError 158; }
+set max_rows_to_read=0;
 system flush logs;
-select count() from system.query_log where current_database = currentDatabase() and query like '%01231_log_queries_min_type/EXCEPTION_WHILE_PROCESSING%' and query not like '%system.query_log%' and event_date = today() and event_time >= now() - interval 1 minute and type = 'ExceptionWhileProcessing';
+select count() from system.query_log where current_database = currentDatabase()
+    and query like 'select \'01231_log_queries_min_type/EXCEPTION_WHILE_PROCESSING%'
+    and event_date >= yesterday() and type = 'ExceptionWhileProcessing';

+set max_rows_to_read='100K';
 select '01231_log_queries_min_type w/ Settings/EXCEPTION_WHILE_PROCESSING', max(number) from system.numbers limit 1e6; -- { serverError 158; }
 system flush logs;
+set max_rows_to_read=0;
 select count() from system.query_log where current_database = currentDatabase() and
-    query like '%01231_log_queries_min_type w/ Settings/EXCEPTION_WHILE_PROCESSING%' and
-    query not like '%system.query_log%' and
-    event_date = today() and
-    event_time >= now() - interval 1 minute and
+    query like 'select \'01231_log_queries_min_type w/ Settings/EXCEPTION_WHILE_PROCESSING%' and
+    event_date >= yesterday() and
     type = 'ExceptionWhileProcessing' and
     has(Settings.Names, 'max_rows_to_read');
diff --git a/tests/queries/0_stateless/01231_markdown_format.reference b/tests/queries/0_stateless/01231_markdown_format.reference
index e2ec03b401a..65838bfede7 100644
--- a/tests/queries/0_stateless/01231_markdown_format.reference
+++ b/tests/queries/0_stateless/01231_markdown_format.reference
@@ -1,5 +1,5 @@
-| id | name | array |
-|-:|:-|:-:|
-| 1 | name1 | [1,2,3] |
-| 2 | name2 | [4,5,6] |
-| 3 | name3 | [7,8,9] |
+| id | name | array | nullable | low_cardinality | decimal |
+|-:|:-|:-|:-|:-|-:|
+| 1 | name1 | [1,2,3] | Some long string | name1 | 1.110000 |
+| 2 | name2 | [4,5,60000] | \N | Another long string | 222.222222 |
+| 30000 | One more long string | [7,8,9] | name3 | name3 | 3.330000 |
diff --git a/tests/queries/0_stateless/01231_markdown_format.sql b/tests/queries/0_stateless/01231_markdown_format.sql
index 693664be1ab..65c65389e12 100644
--- a/tests/queries/0_stateless/01231_markdown_format.sql
+++ b/tests/queries/0_stateless/01231_markdown_format.sql
@@ -1,6 +1,6 @@
 DROP TABLE IF EXISTS makrdown;
-CREATE TABLE markdown (id UInt32, name String, array Array(Int8)) ENGINE = Memory;
-INSERT INTO markdown VALUES (1, 'name1', [1,2,3]), (2, 'name2', [4,5,6]), (3, 'name3', [7,8,9]);
+CREATE TABLE markdown (id UInt32, name String, array Array(Int32), nullable Nullable(String), low_cardinality LowCardinality(String), decimal Decimal32(6)) ENGINE = Memory;
+INSERT INTO markdown VALUES (1, 'name1', [1,2,3], 'Some long string', 'name1', 1.11), (2, 'name2', [4,5,60000], Null, 'Another long string', 222.222222), (30000, 'One more long string', [7,8,9], 'name3', 'name3', 3.33);

 SELECT * FROM markdown FORMAT Markdown;
 DROP TABLE IF EXISTS markdown
diff --git a/tests/queries/0_stateless/01246_insert_into_watch_live_view.py b/tests/queries/0_stateless/01246_insert_into_watch_live_view.py
index 0f7c6965b7b..193d23999be 100755
--- a/tests/queries/0_stateless/01246_insert_into_watch_live_view.py
+++ b/tests/queries/0_stateless/01246_insert_into_watch_live_view.py
@@ -25,6 +25,8 @@ with client(name='client1>', log=log) as client1, client(name='client2>', log=lo
     client1.send('DROP TABLE IF EXISTS test.lv')
     client1.expect(prompt)
+    client1.send('DROP TABLE IF EXISTS test.lv_sums')
+    client1.expect(prompt)
     client1.send('DROP TABLE IF EXISTS test.mt')
     client1.expect(prompt)
     client1.send('DROP TABLE IF EXISTS test.sums')
@@ -39,12 +41,10 @@ with client(name='client1>', log=log) as client1, client(name='client2>', log=lo
     client3.expect(prompt)

     client3.send("WATCH test.lv_sums FORMAT CSVWithNames")
-    client3.expect('_version')

     client1.send('INSERT INTO test.sums WATCH test.lv')
     client1.expect(r'INSERT INTO')
-    client1.expect(r'Progress')
-
+    client3.expect('0,1.*\r\n')

     client2.send('INSERT INTO test.mt VALUES (1),(2),(3)')
@@ -67,7 +67,7 @@ with client(name='client1>', log=log) as client1, client(name='client2>', log=lo
     match = client1.expect('(%s)|([#\$] )' % prompt)
     if match.groups()[1]:
         client1.send(client1.command)
-    client1.expect(prompt)
+    client1.expect(prompt)

     client2.send('DROP TABLE test.lv')
     client2.expect(prompt)
diff --git a/tests/queries/0_stateless/01251_dict_is_in_infinite_loop.reference b/tests/queries/0_stateless/01251_dict_is_in_infinite_loop.reference
index 757d2858524..0a2c97efb42 100644
--- a/tests/queries/0_stateless/01251_dict_is_in_infinite_loop.reference
+++ b/tests/queries/0_stateless/01251_dict_is_in_infinite_loop.reference
@@ -29,10 +29,10 @@
 1
 1
 1
-255
-255
 0
-255
+0
+0
+0
 [11,22]
 [22,11]
 [11,22]
diff --git a/tests/queries/0_stateless/01252_weird_time_zone.reference b/tests/queries/0_stateless/01252_weird_time_zone.reference
index f2968d4efa6..90f5bf0e30d 100644
--- a/tests/queries/0_stateless/01252_weird_time_zone.reference
+++ b/tests/queries/0_stateless/01252_weird_time_zone.reference
@@ -1,7 +1,7 @@
-2020-01-02 03:04:05 2020-01-02 00:00:00 3
-2020-01-02 03:04:05 2020-01-02 00:00:00 3
-2020-01-02 03:04:05 2020-01-02 00:00:00 3
-2020-01-02 03:04:05 2020-01-02 00:00:00 3
-2020-01-02 03:04:05 2020-01-02 00:00:00 3
-2020-01-02 03:04:05 2020-01-02 00:00:00 3
-2020-01-02 03:04:05 2020-01-02 00:00:00 3
+Pacific/Kiritimati 2020-01-02 03:04:05 2020-01-02 00:00:00 3
+Africa/El_Aaiun 2020-01-02 03:04:05 2020-01-02 00:00:00 3
+Asia/Pyongyang 2020-01-02 03:04:05 2020-01-02 00:00:00 3
+Pacific/Kwajalein 2020-01-02 03:04:05 2020-01-02 00:00:00 3
+Pacific/Apia 2020-01-02 03:04:05 2020-01-02 00:00:00 3
+Pacific/Enderbury 2020-01-02 03:04:05 2020-01-02 00:00:00 3
+Pacific/Fakaofo 2020-01-02 03:04:05 2020-01-02 00:00:00 3
diff --git a/tests/queries/0_stateless/01252_weird_time_zone.sql b/tests/queries/0_stateless/01252_weird_time_zone.sql
index 68ea903a797..c4919ca4fe0 100644
--- a/tests/queries/0_stateless/01252_weird_time_zone.sql
+++ b/tests/queries/0_stateless/01252_weird_time_zone.sql
@@ -1,15 +1,15 @@
-SELECT toDateTime('2020-01-02 03:04:05', 'Pacific/Kiritimati') AS x, toStartOfDay(x), toHour(x);
-SELECT toDateTime('2020-01-02 03:04:05', 'Africa/El_Aaiun') AS x, toStartOfDay(x), toHour(x);
-SELECT toDateTime('2020-01-02 03:04:05', 'Asia/Pyongyang') AS x, toStartOfDay(x), toHour(x);
-SELECT toDateTime('2020-01-02 03:04:05', 'Pacific/Kwajalein') AS x, toStartOfDay(x), toHour(x);
-SELECT toDateTime('2020-01-02 03:04:05', 'Pacific/Apia') AS x, toStartOfDay(x), toHour(x);
-SELECT toDateTime('2020-01-02 03:04:05', 'Pacific/Enderbury') AS x, toStartOfDay(x), toHour(x);
-SELECT toDateTime('2020-01-02 03:04:05', 'Pacific/Fakaofo') AS x, toStartOfDay(x), toHour(x);
+SELECT 'Pacific/Kiritimati', toDateTime('2020-01-02 03:04:05', 'Pacific/Kiritimati') AS x, toStartOfDay(x), toHour(x);
+SELECT 'Africa/El_Aaiun', toDateTime('2020-01-02 03:04:05', 'Africa/El_Aaiun') AS x, toStartOfDay(x), toHour(x);
+SELECT 'Asia/Pyongyang', toDateTime('2020-01-02 03:04:05', 'Asia/Pyongyang') AS x, toStartOfDay(x), toHour(x);
+SELECT 'Pacific/Kwajalein', toDateTime('2020-01-02 03:04:05', 'Pacific/Kwajalein') AS x, toStartOfDay(x), toHour(x);
+SELECT 'Pacific/Apia', toDateTime('2020-01-02 03:04:05', 'Pacific/Apia') AS x, toStartOfDay(x), toHour(x);
+SELECT 'Pacific/Enderbury', toDateTime('2020-01-02 03:04:05', 'Pacific/Enderbury') AS x, toStartOfDay(x), toHour(x);
+SELECT 'Pacific/Fakaofo', toDateTime('2020-01-02 03:04:05', 'Pacific/Fakaofo') AS x, toStartOfDay(x), toHour(x);

-SELECT toHour(toDateTime(rand(), 'Pacific/Kiritimati') AS t) AS h, t FROM numbers(1000000) WHERE h < 0 OR h > 23 ORDER BY h LIMIT 1 BY h;
-SELECT toHour(toDateTime(rand(), 'Africa/El_Aaiun') AS t) AS h, t FROM numbers(1000000) WHERE h < 0 OR h > 23 ORDER BY h LIMIT 1 BY h;
-SELECT toHour(toDateTime(rand(), 'Asia/Pyongyang') AS t) AS h, t FROM numbers(1000000) WHERE h < 0 OR h > 23 ORDER BY h LIMIT 1 BY h;
-SELECT toHour(toDateTime(rand(), 'Pacific/Kwajalein') AS t) AS h, t FROM numbers(1000000) WHERE h < 0 OR h > 23 ORDER BY h LIMIT 1 BY h;
-SELECT toHour(toDateTime(rand(), 'Pacific/Apia') AS t) AS h, t FROM numbers(1000000) WHERE h < 0 OR h > 23 ORDER BY h LIMIT 1 BY h;
-SELECT toHour(toDateTime(rand(), 'Pacific/Enderbury') AS t) AS h, t FROM numbers(1000000) WHERE h < 0 OR h > 23 ORDER BY h LIMIT 1 BY h;
-SELECT toHour(toDateTime(rand(), 'Pacific/Fakaofo') AS t) AS h, t FROM numbers(1000000) WHERE h < 0 OR h > 23 ORDER BY h LIMIT 1 BY h;
+SELECT 'Pacific/Kiritimati', rand() as r, toHour(toDateTime(r, 'Pacific/Kiritimati') AS t) AS h, t, toTypeName(t) FROM numbers(1000000) WHERE h < 0 OR h > 23 ORDER BY h LIMIT 1 BY h;
+SELECT 'Africa/El_Aaiun', rand() as r, toHour(toDateTime(r, 'Africa/El_Aaiun') AS t) AS h, t, toTypeName(t) FROM numbers(1000000) WHERE h < 0 OR h > 23 ORDER BY h LIMIT 1 BY h;
+SELECT 'Asia/Pyongyang', rand() as r, toHour(toDateTime(r, 'Asia/Pyongyang') AS t) AS h, t, toTypeName(t) FROM numbers(1000000) WHERE h < 0 OR h > 23 ORDER BY h LIMIT 1 BY h;
+SELECT 'Pacific/Kwajalein', rand() as r, toHour(toDateTime(r, 'Pacific/Kwajalein') AS t) AS h, t, toTypeName(t) FROM numbers(1000000) WHERE h < 0 OR h > 23 ORDER BY h LIMIT 1 BY h;
+SELECT 'Pacific/Apia', rand() as r, toHour(toDateTime(r, 'Pacific/Apia') AS t) AS h,
t, toTypeName(t) FROM numbers(1000000) WHERE h < 0 OR h > 23 ORDER BY h LIMIT 1 BY h; +SELECT 'Pacific/Enderbury', rand() as r, toHour(toDateTime(r, 'Pacific/Enderbury') AS t) AS h, t, toTypeName(t) FROM numbers(1000000) WHERE h < 0 OR h > 23 ORDER BY h LIMIT 1 BY h; +SELECT 'Pacific/Fakaofo', rand() as r, toHour(toDateTime(r, 'Pacific/Fakaofo') AS t) AS h, t, toTypeName(t) FROM numbers(1000000) WHERE h < 0 OR h > 23 ORDER BY h LIMIT 1 BY h; diff --git a/tests/queries/0_stateless/01263_type_conversion_nvartolomei.reference b/tests/queries/0_stateless/01263_type_conversion_nvartolomei.reference index 09b593dad3d..97c766822ac 100644 --- a/tests/queries/0_stateless/01263_type_conversion_nvartolomei.reference +++ b/tests/queries/0_stateless/01263_type_conversion_nvartolomei.reference @@ -3,5 +3,3 @@ a a --- -a -a diff --git a/tests/queries/0_stateless/01263_type_conversion_nvartolomei.sql b/tests/queries/0_stateless/01263_type_conversion_nvartolomei.sql index e3d66e9cdba..0eeb97e2b2d 100644 --- a/tests/queries/0_stateless/01263_type_conversion_nvartolomei.sql +++ b/tests/queries/0_stateless/01263_type_conversion_nvartolomei.sql @@ -43,7 +43,7 @@ SELECT * FROM d; SELECT '---'; INSERT INTO m VALUES ('b'); -SELECT v FROM d ORDER BY v; -- { clientError 36 } +SELECT toString(v) FROM (SELECT v FROM d ORDER BY v) FORMAT Null; -- { serverError 36 } DROP TABLE m; diff --git a/tests/queries/0_stateless/01269_create_with_null.reference b/tests/queries/0_stateless/01269_create_with_null.reference index 86be41bc06a..73f834da75a 100644 --- a/tests/queries/0_stateless/01269_create_with_null.reference +++ b/tests/queries/0_stateless/01269_create_with_null.reference @@ -2,3 +2,6 @@ Nullable(Int32) Int32 Nullable(Int32) Int32 CREATE TABLE default.data_null\n(\n `a` Nullable(Int32),\n `b` Int32,\n `c` Nullable(Int32),\n `d` Int32\n)\nENGINE = Memory Nullable(Int32) Int32 Nullable(Int32) Nullable(Int32) CREATE TABLE default.set_null\n(\n `a` Nullable(Int32),\n `b` Int32,\n `c` Nullable(Int32),\n `d` Nullable(Int32)\n)\nENGINE = Memory +CREATE TABLE default.set_null\n(\n `a` Nullable(Int32),\n `b` Int32,\n `c` Nullable(Int32),\n `d` Nullable(Int32)\n)\nENGINE = Memory +CREATE TABLE default.cannot_be_nullable\n(\n `n` Nullable(Int8),\n `a` Array(UInt8)\n)\nENGINE = Memory +CREATE TABLE default.cannot_be_nullable\n(\n `n` Nullable(Int8),\n `a` Array(UInt8)\n)\nENGINE = Memory diff --git a/tests/queries/0_stateless/01269_create_with_null.sql b/tests/queries/0_stateless/01269_create_with_null.sql index 856b6ea75f4..faa6b84e9e4 100644 --- a/tests/queries/0_stateless/01269_create_with_null.sql +++ b/tests/queries/0_stateless/01269_create_with_null.sql @@ -1,5 +1,6 @@ DROP TABLE IF EXISTS data_null; DROP TABLE IF EXISTS set_null; +DROP TABLE IF EXISTS cannot_be_nullable; SET data_type_default_nullable='false'; @@ -45,6 +46,17 @@ INSERT INTO set_null VALUES (NULL, 2, NULL, NULL); SELECT toTypeName(a), toTypeName(b), toTypeName(c), toTypeName(d) FROM set_null; SHOW CREATE TABLE set_null; +DETACH TABLE set_null; +ATTACH TABLE set_null; +SHOW CREATE TABLE set_null; + +CREATE TABLE cannot_be_nullable (n Int8, a Array(UInt8)) ENGINE=Memory; -- { serverError 43 } +CREATE TABLE cannot_be_nullable (n Int8, a Array(UInt8) NOT NULL) ENGINE=Memory; +SHOW CREATE TABLE cannot_be_nullable; +DETACH TABLE cannot_be_nullable; +ATTACH TABLE cannot_be_nullable; +SHOW CREATE TABLE cannot_be_nullable; DROP TABLE data_null; DROP TABLE set_null; +DROP TABLE cannot_be_nullable; diff --git 
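The 01252_weird_time_zone rewrite above makes each reference row self-describing: every SELECT now emits its time-zone literal, and the randomized half materializes `rand()` as `r` and prints `toTypeName(t)`, so a failure identifies both the zone and the offending input at a glance. The per-zone pattern, taken verbatim from the test together with its reference row:

```sql
SELECT 'Asia/Pyongyang', toDateTime('2020-01-02 03:04:05', 'Asia/Pyongyang') AS x, toStartOfDay(x), toHour(x);
-- Asia/Pyongyang   2020-01-02 03:04:05   2020-01-02 00:00:00   3
```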
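On the new `cannot_be_nullable` coverage just above: with nullable-by-default column types in effect at that point in the test (the `data_type_default_nullable` setting is flipped earlier), a bare `Array(UInt8)` would be wrapped into `Nullable(Array(UInt8))`, which is an illegal type, hence the expected error 43; `NOT NULL` opts the column out, and the DETACH/ATTACH round-trip checks that the resulting definition survives a metadata reload. The two key statements, verbatim from the test:

```sql
CREATE TABLE cannot_be_nullable (n Int8, a Array(UInt8)) ENGINE=Memory; -- { serverError 43 }
CREATE TABLE cannot_be_nullable (n Int8, a Array(UInt8) NOT NULL) ENGINE=Memory; -- Array stays non-Nullable
```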
a/tests/queries/0_stateless/01271_show_privileges.reference b/tests/queries/0_stateless/01271_show_privileges.reference index 791f83cca05..2c2a1f8c7b6 100644 --- a/tests/queries/0_stateless/01271_show_privileges.reference +++ b/tests/queries/0_stateless/01271_show_privileges.reference @@ -28,8 +28,8 @@ ALTER TTL ['ALTER MODIFY TTL','MODIFY TTL'] TABLE ALTER TABLE ALTER MATERIALIZE TTL ['MATERIALIZE TTL'] TABLE ALTER TABLE ALTER SETTINGS ['ALTER SETTING','ALTER MODIFY SETTING','MODIFY SETTING'] TABLE ALTER TABLE ALTER MOVE PARTITION ['ALTER MOVE PART','MOVE PARTITION','MOVE PART'] TABLE ALTER TABLE -ALTER FETCH PARTITION ['FETCH PARTITION'] TABLE ALTER TABLE -ALTER FREEZE PARTITION ['FREEZE PARTITION'] TABLE ALTER TABLE +ALTER FETCH PARTITION ['ALTER FETCH PART','FETCH PARTITION'] TABLE ALTER TABLE +ALTER FREEZE PARTITION ['FREEZE PARTITION','UNFREEZE'] TABLE ALTER TABLE ALTER TABLE [] \N ALTER ALTER VIEW REFRESH ['ALTER LIVE VIEW REFRESH','REFRESH VIEW'] VIEW ALTER VIEW ALTER VIEW MODIFY QUERY ['ALTER TABLE MODIFY QUERY'] VIEW ALTER VIEW @@ -76,13 +76,16 @@ SYSTEM SHUTDOWN ['SYSTEM KILL','SHUTDOWN'] GLOBAL SYSTEM SYSTEM DROP DNS CACHE ['SYSTEM DROP DNS','DROP DNS CACHE','DROP DNS'] GLOBAL SYSTEM DROP CACHE SYSTEM DROP MARK CACHE ['SYSTEM DROP MARK','DROP MARK CACHE','DROP MARKS'] GLOBAL SYSTEM DROP CACHE SYSTEM DROP UNCOMPRESSED CACHE ['SYSTEM DROP UNCOMPRESSED','DROP UNCOMPRESSED CACHE','DROP UNCOMPRESSED'] GLOBAL SYSTEM DROP CACHE +SYSTEM DROP MMAP CACHE ['SYSTEM DROP MMAP','DROP MMAP CACHE','DROP MMAP'] GLOBAL SYSTEM DROP CACHE SYSTEM DROP COMPILED EXPRESSION CACHE ['SYSTEM DROP COMPILED EXPRESSION','DROP COMPILED EXPRESSION CACHE','DROP COMPILED EXPRESSIONS'] GLOBAL SYSTEM DROP CACHE SYSTEM DROP CACHE ['DROP CACHE'] \N SYSTEM SYSTEM RELOAD CONFIG ['RELOAD CONFIG'] GLOBAL SYSTEM RELOAD SYSTEM RELOAD SYMBOLS ['RELOAD SYMBOLS'] GLOBAL SYSTEM RELOAD SYSTEM RELOAD DICTIONARY ['SYSTEM RELOAD DICTIONARIES','RELOAD DICTIONARY','RELOAD DICTIONARIES'] GLOBAL SYSTEM RELOAD +SYSTEM RELOAD MODEL ['SYSTEM RELOAD MODELS','RELOAD MODEL','RELOAD MODELS'] GLOBAL SYSTEM RELOAD SYSTEM RELOAD EMBEDDED DICTIONARIES ['RELOAD EMBEDDED DICTIONARIES'] GLOBAL SYSTEM RELOAD SYSTEM RELOAD [] \N SYSTEM +SYSTEM RESTART DISK ['SYSTEM RESTART DISK'] GLOBAL SYSTEM SYSTEM MERGES ['SYSTEM STOP MERGES','SYSTEM START MERGES','STOP_MERGES','START MERGES'] TABLE SYSTEM SYSTEM TTL MERGES ['SYSTEM STOP TTL MERGES','SYSTEM START TTL MERGES','STOP TTL MERGES','START TTL MERGES'] TABLE SYSTEM SYSTEM FETCHES ['SYSTEM STOP FETCHES','SYSTEM START FETCHES','STOP FETCHES','START FETCHES'] TABLE SYSTEM diff --git a/tests/queries/0_stateless/01280_ssd_complex_key_dictionary.sql b/tests/queries/0_stateless/01280_ssd_complex_key_dictionary.sql index 8c304818602..cd3e52c9691 100644 --- a/tests/queries/0_stateless/01280_ssd_complex_key_dictionary.sql +++ b/tests/queries/0_stateless/01280_ssd_complex_key_dictionary.sql @@ -42,8 +42,7 @@ LAYOUT(COMPLEX_KEY_SSD_CACHE(FILE_SIZE 8192 PATH '/var/lib/clickhouse/clickhouse SELECT 'TEST_SMALL'; SELECT 'VALUE FROM RAM BUFFER'; --- NUMBER_OF_ARGUMENTS_DOESNT_MATCH -SELECT dictHas('01280_db.ssd_dict', 'a', tuple('1')); -- { serverError 42 } +SELECT dictHas('01280_db.ssd_dict', 'a', tuple('1')); -- { serverError 43 } SELECT dictGetUInt64('01280_db.ssd_dict', 'a', tuple('1', toInt32(3))); SELECT dictGetInt32('01280_db.ssd_dict', 'b', tuple('1', toInt32(3))); @@ -99,7 +98,7 @@ CREATE DICTIONARY 01280_db.ssd_dict PRIMARY KEY k1, k2 SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() USER 'default' TABLE 
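Two SSD-cache dictionary details in the hunks above: the malformed `dictHas` call is now rejected with code 43 (ILLEGAL_TYPE_OF_ARGUMENT) instead of 42 (NUMBER_OF_ARGUMENTS_DOESNT_MATCH), i.e. the single-element key tuple fails on type grounds before anything else, and `MAX_STORED_KEYS` disappears from the `COMPLEX_KEY_SSD_CACHE` layout, presumably obsoleted by the cache-dictionary rework in this release. The contrast, verbatim from the test, where the dictionary key has two parts:

```sql
SELECT dictHas('01280_db.ssd_dict', 'a', tuple('1'));                   -- { serverError 43 }
SELECT dictGetUInt64('01280_db.ssd_dict', 'a', tuple('1', toInt32(3))); -- well-formed two-part key
```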
'table_for_dict' PASSWORD '' DB '01280_db')) LIFETIME(MIN 1000 MAX 2000) -LAYOUT(COMPLEX_KEY_SSD_CACHE(FILE_SIZE 8192 PATH '/var/lib/clickhouse/clickhouse_dicts/1d' BLOCK_SIZE 512 WRITE_BUFFER_SIZE 4096 MAX_STORED_KEYS 1000000)); +LAYOUT(COMPLEX_KEY_SSD_CACHE(FILE_SIZE 8192 PATH '/var/lib/clickhouse/clickhouse_dicts/1d' BLOCK_SIZE 512 WRITE_BUFFER_SIZE 4096)); SELECT 'UPDATE DICTIONARY'; -- 118 diff --git a/tests/queries/0_stateless/01293_system_distribution_queue.reference b/tests/queries/0_stateless/01293_system_distribution_queue.reference index a2c1e5f2a7b..4a51abdb745 100644 --- a/tests/queries/0_stateless/01293_system_distribution_queue.reference +++ b/tests/queries/0_stateless/01293_system_distribution_queue.reference @@ -1,6 +1,6 @@ INSERT -1 0 1 1 +1 0 1 1 0 0 FLUSH -1 0 0 0 +1 0 0 0 0 0 UNBLOCK -0 0 0 0 +0 0 0 0 0 0 diff --git a/tests/queries/0_stateless/01293_system_distribution_queue.sql b/tests/queries/0_stateless/01293_system_distribution_queue.sql index dc63dece960..b16433029bf 100644 --- a/tests/queries/0_stateless/01293_system_distribution_queue.sql +++ b/tests/queries/0_stateless/01293_system_distribution_queue.sql @@ -10,15 +10,15 @@ select * from system.distribution_queue; select 'INSERT'; system stop distributed sends dist_01293; insert into dist_01293 select * from numbers(10); -select is_blocked, error_count, data_files, data_compressed_bytes>100 from system.distribution_queue where database = currentDatabase(); +select is_blocked, error_count, data_files, data_compressed_bytes>100, broken_data_files, broken_data_compressed_bytes from system.distribution_queue where database = currentDatabase(); system flush distributed dist_01293; select 'FLUSH'; -select is_blocked, error_count, data_files, data_compressed_bytes from system.distribution_queue where database = currentDatabase(); +select is_blocked, error_count, data_files, data_compressed_bytes, broken_data_files, broken_data_compressed_bytes from system.distribution_queue where database = currentDatabase(); select 'UNBLOCK'; system start distributed sends dist_01293; -select is_blocked, error_count, data_files, data_compressed_bytes from system.distribution_queue where database = currentDatabase(); +select is_blocked, error_count, data_files, data_compressed_bytes, broken_data_files, broken_data_compressed_bytes from system.distribution_queue where database = currentDatabase(); drop table null_01293; drop table dist_01293; diff --git a/tests/queries/0_stateless/01294_create_settings_profile.reference b/tests/queries/0_stateless/01294_create_settings_profile.reference index ab1b3833419..da47b084070 100644 --- a/tests/queries/0_stateless/01294_create_settings_profile.reference +++ b/tests/queries/0_stateless/01294_create_settings_profile.reference @@ -38,8 +38,12 @@ CREATE SETTINGS PROFILE s2_01294 SETTINGS max_memory_usage = 5000000 CREATE SETTINGS PROFILE s3_01294 TO ALL CREATE SETTINGS PROFILE s4_01294 TO ALL CREATE SETTINGS PROFILE s1_01294 SETTINGS max_memory_usage = 6000000 +CREATE SETTINGS PROFILE s2_01294 SETTINGS max_memory_usage = 6000000 +CREATE SETTINGS PROFILE s3_01294 TO ALL +CREATE SETTINGS PROFILE s4_01294 TO ALL +CREATE SETTINGS PROFILE s1_01294 SETTINGS max_memory_usage = 6000000 CREATE SETTINGS PROFILE s2_01294 SETTINGS max_memory_usage = 6000000 TO r1_01294 -CREATE SETTINGS PROFILE s3_01294 SETTINGS max_memory_usage = 6000000 TO r1_01294 +CREATE SETTINGS PROFILE s3_01294 TO r1_01294 CREATE SETTINGS PROFILE s4_01294 TO r1_01294 -- readonly ambiguity CREATE SETTINGS PROFILE s1_01294 SETTINGS readonly 
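`system.distribution_queue` gains two columns, `broken_data_files` and `broken_data_compressed_bytes`, and the 01293 test above asserts they stay at zero across the INSERT/FLUSH/UNBLOCK cycle; non-zero values would indicate batches that could not be sent and were set aside as broken. The widened query, as used in the test:

```sql
select is_blocked, error_count, data_files, data_compressed_bytes,
       broken_data_files, broken_data_compressed_bytes
from system.distribution_queue
where database = currentDatabase();
```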
= 1 @@ -53,7 +57,8 @@ s1_01294 local directory 0 0 [] [] s2_01294 local directory 1 0 ['r1_01294'] [] s3_01294 local directory 1 0 ['r1_01294'] [] s4_01294 local directory 1 0 ['r1_01294'] [] -s5_01294 local directory 3 1 [] ['r1_01294'] +s5_01294 local directory 3 0 ['u1_01294'] [] +s6_01294 local directory 0 1 [] ['r1_01294','u1_01294'] -- system.settings_profile_elements s2_01294 \N \N 0 readonly 0 \N \N \N \N s3_01294 \N \N 0 max_memory_usage 5000000 4000000 6000000 1 \N diff --git a/tests/queries/0_stateless/01294_create_settings_profile.sql b/tests/queries/0_stateless/01294_create_settings_profile.sql index 9dbabd3f068..b7dd91ad6ed 100644 --- a/tests/queries/0_stateless/01294_create_settings_profile.sql +++ b/tests/queries/0_stateless/01294_create_settings_profile.sql @@ -82,7 +82,8 @@ SELECT '-- multiple profiles in one command'; CREATE PROFILE s1_01294, s2_01294 SETTINGS max_memory_usage=5000000; CREATE PROFILE s3_01294, s4_01294 TO ALL; SHOW CREATE PROFILE s1_01294, s2_01294, s3_01294, s4_01294; -ALTER PROFILE s1_01294, s2_01294, s3_01294 SETTINGS max_memory_usage=6000000; +ALTER PROFILE s1_01294, s2_01294 SETTINGS max_memory_usage=6000000; +SHOW CREATE PROFILE s1_01294, s2_01294, s3_01294, s4_01294; ALTER PROFILE s2_01294, s3_01294, s4_01294 TO r1_01294; SHOW CREATE PROFILE s1_01294, s2_01294, s3_01294, s4_01294; DROP PROFILE s1_01294, s2_01294, s3_01294, s4_01294; @@ -107,12 +108,13 @@ CREATE PROFILE s1_01294; CREATE PROFILE s2_01294 SETTINGS readonly=0 TO r1_01294;; CREATE PROFILE s3_01294 SETTINGS max_memory_usage=5000000 MIN 4000000 MAX 6000000 READONLY TO r1_01294; CREATE PROFILE s4_01294 SETTINGS max_memory_usage=5000000 TO r1_01294; -CREATE PROFILE s5_01294 SETTINGS INHERIT default, readonly=0, max_memory_usage MAX 6000000 WRITABLE TO ALL EXCEPT r1_01294; +CREATE PROFILE s5_01294 SETTINGS INHERIT default, readonly=0, max_memory_usage MAX 6000000 WRITABLE TO u1_01294; +CREATE PROFILE s6_01294 TO ALL EXCEPT u1_01294, r1_01294; SELECT name, storage, num_elements, apply_to_all, apply_to_list, apply_to_except FROM system.settings_profiles WHERE name LIKE 's%\_01294' ORDER BY name; SELECT '-- system.settings_profile_elements'; SELECT * FROM system.settings_profile_elements WHERE profile_name LIKE 's%\_01294' ORDER BY profile_name, index; -DROP PROFILE s1_01294, s2_01294, s3_01294, s4_01294, s5_01294; +DROP PROFILE s1_01294, s2_01294, s3_01294, s4_01294, s5_01294, s6_01294; DROP ROLE r1_01294; DROP USER u1_01294; diff --git a/tests/queries/0_stateless/01294_lazy_database_concurrent_recreate_reattach_and_show_tables.reference b/tests/queries/0_stateless/01294_lazy_database_concurrent_recreate_reattach_and_show_tables_long.reference similarity index 100% rename from tests/queries/0_stateless/01294_lazy_database_concurrent_recreate_reattach_and_show_tables.reference rename to tests/queries/0_stateless/01294_lazy_database_concurrent_recreate_reattach_and_show_tables_long.reference diff --git a/tests/queries/0_stateless/01294_lazy_database_concurrent_recreate_reattach_and_show_tables.sh b/tests/queries/0_stateless/01294_lazy_database_concurrent_recreate_reattach_and_show_tables_long.sh similarity index 94% rename from tests/queries/0_stateless/01294_lazy_database_concurrent_recreate_reattach_and_show_tables.sh rename to tests/queries/0_stateless/01294_lazy_database_concurrent_recreate_reattach_and_show_tables_long.sh index d8f72c7837d..f5a4a1adac0 100755 --- a/tests/queries/0_stateless/01294_lazy_database_concurrent_recreate_reattach_and_show_tables.sh +++ 
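The settings-profile test above now narrows the first ALTER to `s1_01294, s2_01294` and takes a `SHOW CREATE PROFILE` snapshot between the two ALTERs, so the reference proves that `s3_01294` never received the `max_memory_usage` setting; it also splits the former `s5` case in two to separate `TO <user>` from `TO ALL EXCEPT ...`. The split, verbatim:

```sql
CREATE PROFILE s5_01294 SETTINGS INHERIT default, readonly=0, max_memory_usage MAX 6000000 WRITABLE TO u1_01294;
CREATE PROFILE s6_01294 TO ALL EXCEPT u1_01294, r1_01294;
```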
b/tests/queries/0_stateless/01294_lazy_database_concurrent_recreate_reattach_and_show_tables_long.sh @@ -70,8 +70,10 @@ function recreate_lazy_func4() function test_func() { while true; do - $CLICKHOUSE_CLIENT -q "SYSTEM STOP TTL MERGES"; + for table in log tlog slog tlog2; do + $CLICKHOUSE_CLIENT -q "SYSTEM STOP TTL MERGES $CURR_DATABASE.$table" >& /dev/null done + done } diff --git a/tests/queries/0_stateless/01300_polygon_convex_hull.reference b/tests/queries/0_stateless/01300_polygon_convex_hull.reference new file mode 100644 index 00000000000..47be3068b5c --- /dev/null +++ b/tests/queries/0_stateless/01300_polygon_convex_hull.reference @@ -0,0 +1 @@ +[[(0,0),(0,5),(5,5),(5,0),(0,0)]] diff --git a/tests/queries/0_stateless/01300_polygon_convex_hull.sql b/tests/queries/0_stateless/01300_polygon_convex_hull.sql new file mode 100644 index 00000000000..4a4aa66bbfb --- /dev/null +++ b/tests/queries/0_stateless/01300_polygon_convex_hull.sql @@ -0,0 +1 @@ +select polygonConvexHullCartesian([[[(0., 0.), (0., 5.), (5., 5.), (5., 0.), (2., 3.)]]]); diff --git a/tests/queries/0_stateless/01300_read_wkt.reference b/tests/queries/0_stateless/01300_read_wkt.reference new file mode 100644 index 00000000000..acee1c8f14b --- /dev/null +++ b/tests/queries/0_stateless/01300_read_wkt.reference @@ -0,0 +1,16 @@ +(0,0) +[[(1,0),(10,0),(10,10),(0,10),(1,0)]] +[[(0,0),(10,0),(10,10),(0,10),(0,0)],[(4,4),(5,4),(5,5),(4,5),(4,4)]] +[[[(2,0),(10,0),(10,10),(0,10),(2,0)],[(4,4),(5,4),(5,5),(4,5),(4,4)]],[[(-10,-10),(-10,-9),(-9,10),(-10,-10)]]] +(0,0) +(1,0) +(2,0) +[[(1,0),(10,0),(10,10),(0,10),(1,0)]] +[[(0,0),(10,0),(10,10),(0,10),(0,0)]] +[[(2,0),(10,0),(10,10),(0,10),(2,0)]] +[[(0,0),(10,0),(10,10),(0,10),(0,0)],[(4,4),(5,4),(5,5),(4,5),(4,4)]] +[[(2,0),(10,0),(10,10),(0,10),(2,0)],[(4,4),(5,4),(5,5),(4,5),(4,4)]] +[[(1,0),(10,0),(10,10),(0,10),(1,0)],[(4,4),(5,4),(5,5),(4,5),(4,4)]] +[[[(1,0),(10,0),(10,10),(0,10),(1,0)],[(4,4),(5,4),(5,5),(4,5),(4,4)]],[[(-10,-10),(-10,-9),(-9,10),(-10,-10)]]] +[[[(0,0),(10,0),(10,10),(0,10),(0,0)],[(4,4),(5,4),(5,5),(4,5),(4,4)]],[[(-10,-10),(-10,-9),(-9,10),(-10,-10)]]] +[[[(2,0),(10,0),(10,10),(0,10),(2,0)],[(4,4),(5,4),(5,5),(4,5),(4,4)]],[[(-10,-10),(-10,-9),(-9,10),(-10,-10)]]] diff --git a/tests/queries/0_stateless/01300_read_wkt.sql b/tests/queries/0_stateless/01300_read_wkt.sql new file mode 100644 index 00000000000..8121bdf6084 --- /dev/null +++ b/tests/queries/0_stateless/01300_read_wkt.sql @@ -0,0 +1,30 @@ +SELECT readWktPoint('POINT(0 0)'); +SELECT readWktPolygon('POLYGON((1 0,10 0,10 10,0 10,1 0))'); +SELECT readWktPolygon('POLYGON((0 0,10 0,10 10,0 10,0 0),(4 4,5 4,5 5,4 5,4 4))'); +SELECT readWktMultiPolygon('MULTIPOLYGON(((2 0,10 0,10 10,0 10,2 0),(4 4,5 4,5 5,4 5,4 4)),((-10 -10,-10 -9,-9 10,-10 -10)))'); + +DROP TABLE IF EXISTS geo; +CREATE TABLE geo (s String, id Int) engine=Memory(); +INSERT INTO geo VALUES ('POINT(0 0)', 1); +INSERT INTO geo VALUES ('POINT(1 0)', 2); +INSERT INTO geo VALUES ('POINT(2 0)', 3); +SELECT readWktPoint(s) FROM geo ORDER BY id; + +DROP TABLE IF EXISTS geo; +CREATE TABLE geo (s String, id Int) engine=Memory(); +INSERT INTO geo VALUES ('POLYGON((1 0,10 0,10 10,0 10,1 0))', 1); +INSERT INTO geo VALUES ('POLYGON((0 0,10 0,10 10,0 10,0 0))', 2); +INSERT INTO geo VALUES ('POLYGON((2 0,10 0,10 10,0 10,2 0))', 3); +INSERT INTO geo VALUES ('POLYGON((0 0,10 0,10 10,0 10,0 0),(4 4,5 4,5 5,4 5,4 4))', 4); +INSERT INTO geo VALUES ('POLYGON((2 0,10 0,10 10,0 10,2 0),(4 4,5 4,5 5,4 5,4 4))', 5); +INSERT INTO geo VALUES ('POLYGON((1 0,10 
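The new geometry tests above introduce `polygonConvexHullCartesian` and the `readWktPoint`/`readWktPolygon`/`readWktMultiPolygon` parsers, which turn WKT text into ClickHouse geometry values: a point becomes a `Tuple(Float64, Float64)`, a polygon an array of rings (first outer, the rest holes), a multipolygon an array of polygons. Examples with their outputs, taken from the tests and reference files above:

```sql
select polygonConvexHullCartesian([[[(0., 0.), (0., 5.), (5., 5.), (5., 0.), (2., 3.)]]]);
-- [[(0,0),(0,5),(5,5),(5,0),(0,0)]]   (the interior point (2,3) is dropped)
SELECT readWktPoint('POINT(0 0)');
-- (0,0)
SELECT readWktPolygon('POLYGON((0 0,10 0,10 10,0 10,0 0),(4 4,5 4,5 5,4 5,4 4))');
-- [[(0,0),(10,0),(10,10),(0,10),(0,0)],[(4,4),(5,4),(5,5),(4,5),(4,4)]]
```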
0,10 10,0 10,1 0),(4 4,5 4,5 5,4 5,4 4))', 6); +SELECT readWktPolygon(s) FROM geo ORDER BY id; + +DROP TABLE IF EXISTS geo; +CREATE TABLE geo (s String, id Int) engine=Memory(); +INSERT INTO geo VALUES ('MULTIPOLYGON(((1 0,10 0,10 10,0 10,1 0),(4 4,5 4,5 5,4 5,4 4)),((-10 -10,-10 -9,-9 10,-10 -10)))', 1); +INSERT INTO geo VALUES ('MULTIPOLYGON(((0 0,10 0,10 10,0 10,0 0),(4 4,5 4,5 5,4 5,4 4)),((-10 -10,-10 -9,-9 10,-10 -10)))', 2); +INSERT INTO geo VALUES ('MULTIPOLYGON(((2 0,10 0,10 10,0 10,2 0),(4 4,5 4,5 5,4 5,4 4)),((-10 -10,-10 -9,-9 10,-10 -10)))', 3); +SELECT readWktMultiPolygon(s) FROM geo ORDER BY id; + +DROP TABLE geo; diff --git a/tests/queries/0_stateless/01300_svg.reference b/tests/queries/0_stateless/01300_svg.reference new file mode 100644 index 00000000000..d39d67ff273 --- /dev/null +++ b/tests/queries/0_stateless/01300_svg.reference @@ -0,0 +1,56 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/tests/queries/0_stateless/01300_svg.sql b/tests/queries/0_stateless/01300_svg.sql new file mode 100644 index 00000000000..a1deb1745c3 --- /dev/null +++ b/tests/queries/0_stateless/01300_svg.sql @@ -0,0 +1,50 @@ +SELECT svg((0., 0.)); +SELECT svg([(0., 0.), (10, 0), (10, 10), (0, 10)]); +SELECT svg([[(0., 0.), (10, 0), (10, 10), (0, 10)], [(4., 4.), (5, 4), (5, 5), (4, 5)]]); +SELECT svg([[[(0., 0.), (10, 0), (10, 10), (0, 10)], [(4., 4.), (5, 4), (5, 5), (4, 5)]], [[(-10., -10.), (-10, -9), (-9, 10)]]]); +SELECT svg((0., 0.), 'b'); +SELECT svg([(0., 0.), (10, 0), (10, 10), (0, 10)], 'b'); +SELECT svg([[(0., 0.), (10, 0), (10, 10), (0, 10)], [(4., 4.), (5, 4), (5, 5), (4, 5)]], 'b'); +SELECT svg([[[(0., 0.), (10, 0), (10, 10), (0, 10)], [(4., 4.), (5, 4), (5, 5), (4, 5)]], [[(-10., -10.), (-10, -9), (-9, 10)]]], 'b'); + +DROP TABLE IF EXISTS geo; +CREATE TABLE geo (p Tuple(Float64, Float64), s String, id Int) engine=Memory(); +INSERT INTO geo VALUES ((0., 0.), 'b', 1); +INSERT INTO geo VALUES ((1., 0.), 'c', 2); +INSERT INTO geo VALUES ((2., 0.), 'd', 3); +SELECT svg(p) FROM geo ORDER BY id; +SELECT svg(p, 'b') FROM geo ORDER BY id; +SELECT svg((0., 0.), s) FROM geo ORDER BY id; +SELECT svg(p, s) FROM geo ORDER BY id; + +DROP TABLE IF EXISTS geo; +CREATE TABLE geo (p Array(Tuple(Float64, Float64)), s String, id Int) engine=Memory(); +INSERT INTO geo VALUES ([(0., 0.), (10, 0), (10, 10), (0, 10)], 'b', 1); +INSERT INTO geo VALUES ([(1., 0.), (10, 0), (10, 10), (0, 10)], 'c', 2); +INSERT INTO geo VALUES ([(2., 0.), (10, 0), (10, 10), (0, 10)], 'd', 3); +SELECT svg(p) FROM geo ORDER BY id; +SELECT svg(p, 'b') FROM geo ORDER BY id; +SELECT svg([(0., 0.), (10, 0), (10, 10), (0, 10)], s) FROM geo ORDER BY id; +SELECT svg(p, s) FROM geo ORDER BY id; + +DROP TABLE IF EXISTS geo; +CREATE TABLE geo (p Array(Array(Tuple(Float64, Float64))), s String, id Int) engine=Memory(); +INSERT INTO geo VALUES ([[(0., 0.), (10, 0), (10, 10), (0, 10)], [(4, 4), (5, 4), (5, 5), (4, 5)]], 'b', 1); +INSERT INTO geo VALUES ([[(1., 0.), (10, 0), (10, 10), (0, 10)], [(4, 4), (5, 4), (5, 5), (4, 5)]], 'c', 2); +INSERT INTO geo VALUES ([[(2., 0.), (10, 0), (10, 10), (0, 10)], [(4, 4), (5, 4), (5, 5), (4, 5)]], 'd', 3); +SELECT svg(p) FROM geo ORDER BY id; +SELECT svg(p, 'b') FROM geo ORDER BY id; +SELECT svg([[(0., 0.), (10, 0), (10, 10), (0, 10)], [(4., 4.), (5, 4), (5, 5), (4, 5)]], s) FROM geo ORDER BY id; +SELECT svg(p, s) FROM geo ORDER BY id; + +DROP TABLE IF EXISTS geo; +CREATE TABLE geo (p Array(Array(Array(Tuple(Float64, 
Float64)))), s String, id Int) engine=Memory(); +INSERT INTO geo VALUES ([[[(0., 0.), (10, 0), (10, 10), (0, 10)], [(4., 4.), (5, 4), (5, 5), (4, 5)]], [[(-10., -10.), (-10, -9), (-9, 10)]]], 'b', 1); +INSERT INTO geo VALUES ([[[(1., 0.), (10, 0), (10, 10), (0, 10)], [(4., 4.), (5, 4), (5, 5), (4, 5)]], [[(-10., -10.), (-10, -9), (-9, 10)]]], 'c', 2); +INSERT INTO geo VALUES ([[[(2., 0.), (10, 0), (10, 10), (0, 10)], [(4., 4.), (5, 4), (5, 5), (4, 5)]], [[(-10., -10.), (-10, -9), (-9, 10)]]], 'd', 3); +SELECT svg(p) FROM geo ORDER BY id; +SELECT svg(p, 'b') FROM geo ORDER BY id; +SELECT svg([[[(0., 0.), (10, 0), (10, 10), (0, 10)], [(4., 4.), (5, 4), (5, 5), (4, 5)]], [[(-10., -10.), (-10, -9), (-9, 10)]]], s) FROM geo ORDER BY id; +SELECT svg(p, s) FROM geo ORDER BY id; + +DROP TABLE geo; diff --git a/tests/queries/0_stateless/01300_wkt.reference b/tests/queries/0_stateless/01300_wkt.reference new file mode 100644 index 00000000000..0079e9f32df --- /dev/null +++ b/tests/queries/0_stateless/01300_wkt.reference @@ -0,0 +1,16 @@ +POINT(0 0) +POLYGON((0 0,10 0,10 10,0 10)) +POLYGON((0 0,10 0,10 10,0 10,0 0),(4 4,5 4,5 5,4 5,4 4)) +MULTIPOLYGON(((0 0,10 0,10 10,0 10,0 0),(4 4,5 4,5 5,4 5,4 4)),((-10 -10,-10 -9,-9 10,-10 -10))) +POINT(0 0) +POINT(1 0) +POINT(2 0) +POLYGON((0 0,10 0,10 10,0 10)) +POLYGON((1 0,10 0,10 10,0 10)) +POLYGON((2 0,10 0,10 10,0 10)) +POLYGON((0 0,10 0,10 10,0 10,0 0),(4 4,5 4,5 5,4 5,4 4)) +POLYGON((1 0,10 0,10 10,0 10,1 0),(4 4,5 4,5 5,4 5,4 4)) +POLYGON((2 0,10 0,10 10,0 10,2 0),(4 4,5 4,5 5,4 5,4 4)) +MULTIPOLYGON(((0 0,10 0,10 10,0 10,0 0),(4 4,5 4,5 5,4 5,4 4)),((-10 -10,-10 -9,-9 10,-10 -10))) +MULTIPOLYGON(((1 0,10 0,10 10,0 10,1 0),(4 4,5 4,5 5,4 5,4 4)),((-10 -10,-10 -9,-9 10,-10 -10))) +MULTIPOLYGON(((2 0,10 0,10 10,0 10,2 0),(4 4,5 4,5 5,4 5,4 4)),((-10 -10,-10 -9,-9 10,-10 -10))) diff --git a/tests/queries/0_stateless/01300_wkt.sql b/tests/queries/0_stateless/01300_wkt.sql new file mode 100644 index 00000000000..00063d0a612 --- /dev/null +++ b/tests/queries/0_stateless/01300_wkt.sql @@ -0,0 +1,34 @@ +SELECT wkt((0., 0.)); +SELECT wkt([(0., 0.), (10., 0.), (10., 10.), (0., 10.)]); +SELECT wkt([[(0., 0.), (10., 0.), (10., 10.), (0., 10.)], [(4., 4.), (5., 4.), (5., 5.), (4., 5.)]]); +SELECT wkt([[[(0., 0.), (10., 0.), (10., 10.), (0., 10.)], [(4., 4.), (5., 4.), (5., 5.), (4., 5.)]], [[(-10., -10.), (-10., -9.), (-9., 10.)]]]); + +DROP TABLE IF EXISTS geo; +CREATE TABLE geo (p Tuple(Float64, Float64), id Int) engine=Memory(); +INSERT INTO geo VALUES ((0, 0), 1); +INSERT INTO geo VALUES ((1, 0), 2); +INSERT INTO geo VALUES ((2, 0), 3); +SELECT wkt(p) FROM geo ORDER BY id; + +DROP TABLE IF EXISTS geo; +CREATE TABLE geo (p Array(Tuple(Float64, Float64)), id Int) engine=Memory(); +INSERT INTO geo VALUES ([(0, 0), (10, 0), (10, 10), (0, 10)], 1); +INSERT INTO geo VALUES ([(1, 0), (10, 0), (10, 10), (0, 10)], 2); +INSERT INTO geo VALUES ([(2, 0), (10, 0), (10, 10), (0, 10)], 3); +SELECT wkt(p) FROM geo ORDER BY id; + +DROP TABLE IF EXISTS geo; +CREATE TABLE geo (p Array(Array(Tuple(Float64, Float64))), id Int) engine=Memory(); +INSERT INTO geo VALUES ([[(0, 0), (10, 0), (10, 10), (0, 10)], [(4, 4), (5, 4), (5, 5), (4, 5)]], 1); +INSERT INTO geo VALUES ([[(1, 0), (10, 0), (10, 10), (0, 10)], [(4, 4), (5, 4), (5, 5), (4, 5)]], 2); +INSERT INTO geo VALUES ([[(2, 0), (10, 0), (10, 10), (0, 10)], [(4, 4), (5, 4), (5, 5), (4, 5)]], 3); +SELECT wkt(p) FROM geo ORDER BY id; + +DROP TABLE IF EXISTS geo; +CREATE TABLE geo (p Array(Array(Array(Tuple(Float64, Float64)))), id 
Int) engine=Memory(); +INSERT INTO geo VALUES ([[[(0, 0), (10, 0), (10, 10), (0, 10)], [(4, 4), (5, 4), (5, 5), (4, 5)]], [[(-10, -10), (-10, -9), (-9, 10)]]], 1); +INSERT INTO geo VALUES ([[[(1, 0), (10, 0), (10, 10), (0, 10)], [(4, 4), (5, 4), (5, 5), (4, 5)]], [[(-10, -10), (-10, -9), (-9, 10)]]], 2); +INSERT INTO geo VALUES ([[[(2, 0), (10, 0), (10, 10), (0, 10)], [(4, 4), (5, 4), (5, 5), (4, 5)]], [[(-10, -10), (-10, -9), (-9, 10)]]], 3); +SELECT wkt(p) FROM geo ORDER BY id; + +DROP TABLE geo; diff --git a/tests/queries/0_stateless/01301_polygons_within.reference b/tests/queries/0_stateless/01301_polygons_within.reference new file mode 100644 index 00000000000..ee14f826295 --- /dev/null +++ b/tests/queries/0_stateless/01301_polygons_within.reference @@ -0,0 +1,10 @@ +0 +1 +0 +1 +-------- MultiPolygon with Polygon +0 +-------- MultiPolygon with Polygon with Holes +0 +-------- Polygon with Polygon with Holes +0 diff --git a/tests/queries/0_stateless/01301_polygons_within.sql b/tests/queries/0_stateless/01301_polygons_within.sql new file mode 100644 index 00000000000..901c7909af7 --- /dev/null +++ b/tests/queries/0_stateless/01301_polygons_within.sql @@ -0,0 +1,15 @@ +select polygonsWithinCartesian([[[(0, 0),(0, 3),(1, 2.9),(2, 2.6),(2.6, 2),(2.9, 1),(3, 0),(0, 0)]]], [[[(1., 1.),(1., 4.),(4., 4.),(4., 1.),(1., 1.)]]]); +select polygonsWithinCartesian([[[(2., 2.), (2., 3.), (3., 3.), (3., 2.)]]], [[[(1., 1.),(1., 4.),(4., 4.),(4., 1.),(1., 1.)]]]); + +select polygonsWithinSpherical([[[(4.3613577, 50.8651821), (4.349556, 50.8535879), (4.3602419, 50.8435626), (4.3830299, 50.8428851), (4.3904543, 50.8564867), (4.3613148, 50.8651279)]]], [[[(4.346693, 50.858306), (4.367945, 50.852455), (4.366227, 50.840809), (4.344961, 50.833264), (4.338074, 50.848677), (4.346693, 50.858306)]]]); +select polygonsWithinSpherical([[[(4.3501568, 50.8518269), (4.3444920, 50.8439961), (4.3565941, 50.8443213), (4.3501568, 50.8518269)]]], [[[(4.3679450, 50.8524550),(4.3466930, 50.8583060),(4.3380740, 50.8486770),(4.3449610, 50.8332640),(4.3662270, 50.8408090),(4.3679450, 50.8524550)]]]); + +select '-------- MultiPolygon with Polygon'; +select 
polygonsWithinSpherical([[(29.453587685533865,59.779570356240356),(29.393139070478895,52.276266797422124),(40.636581470703206,59.38168915000267),(41.21084331372543,59.103467777099866),(29.786055068336193,52.146627480315004),(31.23682182965546,52.16517054781818),(41.69443223416517,58.85424941916091),(42.51048853740727,58.47703162291134),(32.59691566839227,52.22075341251539),(34.289476889931414,52.22075341251539),(43.02430176537451,58.07974369546071),(43.02430176537451,57.25537683364851),(35.468224883503325,52.2022335126388),(37.16078610504247,52.23926559241349),(43.02430176537451,56.26136189644947),(43.02430176537451,55.326904361850836),(38.33953409861437,52.16517054781818),(40.09254393520848,52.16517054781818),(44.4146199116388,55.3097062225408),(44.47506852669377,59.80998197603594),(39.72985224487867,59.931351417569715),(30.23941968124846,53.67744677450975),(30.20919537372098,54.63314259659509),(38.73245009647167,59.94649146557819),(37.2816833351524,59.97675082987618),(30.23941968124846,55.2752875586599),(30.33009260383092,56.19415599955667),(36.28428118674541,59.96162460231375),(34.863738732953635,59.97675082987618),(30.178971066193498,56.97640788219866),(30.178971066193498,57.91957806959033),(33.65476643185424,59.94649146557819),(32.32489690064491,59.94649146557819),(30.481214141468342,58.85424941916091),(30.571887064050795,59.99187015036608),(29.453587685533865,59.779570356240356)]], [[[(33.473420586689336,58.85424941916091),(32.23422397806246,58.492830557036),(32.173775363007486,58.03176922751564),(31.508840597402823,57.499784781503735),(31.750635057622702,56.86092686957355),(31.508840597402823,55.941082594334574),(32.20399967053497,55.515591939372456),(31.84130798020516,54.998862226280465),(31.418167674820367,54.422670886434275),(32.47601843828233,53.83826377018255),(32.08310244042503,53.408048308050866),(33.171177511414484,52.82758702113742),(34.77306581037117,52.91880107773494),(34.77306581037117,53.784726518357985),(34.108131044766516,54.17574726780569),(35.07530888564602,54.59813930694554),(34.25925258240394,54.96417435716029),(35.01486027059106,55.361278263643584),(33.50364489421682,55.37845402950552),(32.7480372060297,55.90721384574556),(35.67979503619571,55.68634475630185),(32.83871012861215,56.311688992608396),(34.591719965206266,56.29492065473883),(35.7100193437232,56.311688992608396),(33.83611227701915,56.695333481003644),(32.95960735872209,56.9434497616887),(36.072711034053015,57.091531913901434),(33.171177511414484,57.33702717078384),(36.193608264162954,57.499784781503735),(33.23162612646945,57.77481561306047),(36.43540272438284,58.04776787540811),(33.62454212432676,58.27099811968307),(36.344729801800376,58.54018474404165),(33.83611227701915,58.68186423448108),(34.74284150284369,59.565911441555244),(33.473420586689336,58.85424941916091)]], 
[[(34.65216858026123,58.91672306881671),(37.19101041256995,58.68186423448108),(36.01226241899805,58.28688958537609),(37.16078610504247,58.04776787540811),(35.74024365125068,57.79092907387934),(37.009664567405046,57.499784781503735),(35.77046795877817,57.25537683364851),(36.979440259877556,57.07510745541089),(34.22902827487645,56.794777197297435),(36.7074214921302,56.210968525786996),(34.712617195316206,56.10998276812964),(36.55629995449277,55.63519693782703),(35.13575750070099,55.53270067649592),(36.43540272438284,55.34409504165558),(34.83351442542614,55.01619492319591),(35.61934642114075,54.49294870011772),(34.89396304048112,54.12264226523038),(35.37755196092087,53.046178687628185),(37.43280487278982,52.95523300597458),(35.92158949641559,53.80257986695776),(36.91899164482259,53.856094327816805),(36.01226241899805,54.75541714463799),(37.765272255592166,55.189110239786885),(36.828318722240134,55.44708256557195),(38.03729102333953,55.652253637168315),(36.64697287707522,55.941082594334574),(38.21863686850443,56.05939028508024),(36.37495410932787,56.64551287174558),(38.30930979108689,56.992876013526654),(37.16078610504247,57.25537683364851),(38.127963945921984,57.516020773674256),(37.43280487278982,57.710289827306724),(38.33953409861437,57.935626886818994),(37.40258056526235,58.31865112960426),(38.58132855883426,58.744648733419496),(37.31190764267989,59.02578062465136),(34.65216858026123,58.91672306881671)]], [[(38.52087994377928,59.11898412389468),(39.54850639971376,58.713270635642914),(38.369758406141855,58.28688958537609),(38.85334732658162,58.06375936407028),(38.33953409861437,57.710289827306724),(38.73245009647167,57.48354156434209),(38.21863686850443,57.271721400459285),(38.97424455669155,56.87744603722649),(37.463029180317314,56.5623320541159),(38.94402024916407,56.05939028508024),(38.18841256097694,55.856355210835915),(38.490655636251795,55.53270067649592),(37.795496563119656,55.39562234093384),(38.30930979108689,55.154587013355666),(36.7074214921302,54.65063295250911),(37.31190764267989,53.92734063371401),(36.979440259877556,53.58783775557231),(37.855945178174615,52.91880107773497),(39.57873070724124,52.69956490610895),(38.33953409861437,53.281741738901104),(40.00187101262603,53.35396273604752),(39.54850639971376,53.58783775557231),(40.24366547284591,53.58783775557231),(39.97164670509855,53.98069568468355),(40.60635716317572,54.03398248547225),(40.39478701048334,54.44025165268903),(39.54850639971376,54.56310590284329),(39.54850639971376,54.87732350170489),(40.39478701048334,54.87732350170489),(40.39478701048334,55.24083903654295),(39.82052516746112,55.2752875586599),(39.760076552406154,55.75443792473942),(40.57613285564824,55.78844000174894),(40.425011318010824,56.19415599955667),(39.82052516746112,56.07626182891758),(39.79030085993364,56.41214455508424),(40.48545993306579,56.495655446714636),(40.33433839542836,56.95993246553937),(39.79030085993364,56.992876013526654),(39.72985224487867,57.46729112028032),(40.33433839542836,57.46729112028032),(40.24366547284591,58.04776787540811),(39.63917932229622,58.04776787540811),(39.63917932229622,58.382088724871295),(40.33433839542836,58.382088724871295),(40.45523562553831,58.9011152358548),(38.52087994377928,59.11898412389468)]]]) format TSV; + +select '-------- MultiPolygon with Polygon with Holes'; +select 
polygonsWithinSpherical([[[(33.473420586689336,58.85424941916091),(32.23422397806246,58.492830557036),(32.173775363007486,58.03176922751564),(31.508840597402823,57.499784781503735),(31.750635057622702,56.86092686957355),(31.508840597402823,55.941082594334574),(32.20399967053497,55.515591939372456),(31.84130798020516,54.998862226280465),(31.418167674820367,54.422670886434275),(32.47601843828233,53.83826377018255),(32.08310244042503,53.408048308050866),(33.171177511414484,52.82758702113742),(34.77306581037117,52.91880107773494),(34.77306581037117,53.784726518357985),(34.108131044766516,54.17574726780569),(35.07530888564602,54.59813930694554),(34.25925258240394,54.96417435716029),(35.01486027059106,55.361278263643584),(33.50364489421682,55.37845402950552),(32.7480372060297,55.90721384574556),(35.67979503619571,55.68634475630185),(32.83871012861215,56.311688992608396),(34.591719965206266,56.29492065473883),(35.7100193437232,56.311688992608396),(33.83611227701915,56.695333481003644),(32.95960735872209,56.9434497616887),(36.072711034053015,57.091531913901434),(33.171177511414484,57.33702717078384),(36.193608264162954,57.499784781503735),(33.23162612646945,57.77481561306047),(36.43540272438284,58.04776787540811),(33.62454212432676,58.27099811968307),(36.344729801800376,58.54018474404165),(33.83611227701915,58.68186423448108),(34.74284150284369,59.565911441555244),(33.473420586689336,58.85424941916091)]], [[(34.65216858026123,58.91672306881671),(37.19101041256995,58.68186423448108),(36.01226241899805,58.28688958537609),(37.16078610504247,58.04776787540811),(35.74024365125068,57.79092907387934),(37.009664567405046,57.499784781503735),(35.77046795877817,57.25537683364851),(36.979440259877556,57.07510745541089),(34.22902827487645,56.794777197297435),(36.7074214921302,56.210968525786996),(34.712617195316206,56.10998276812964),(36.55629995449277,55.63519693782703),(35.13575750070099,55.53270067649592),(36.43540272438284,55.34409504165558),(34.83351442542614,55.01619492319591),(35.61934642114075,54.49294870011772),(34.89396304048112,54.12264226523038),(35.37755196092087,53.046178687628185),(37.43280487278982,52.95523300597458),(35.92158949641559,53.80257986695776),(36.91899164482259,53.856094327816805),(36.01226241899805,54.75541714463799),(37.765272255592166,55.189110239786885),(36.828318722240134,55.44708256557195),(38.03729102333953,55.652253637168315),(36.64697287707522,55.941082594334574),(38.21863686850443,56.05939028508024),(36.37495410932787,56.64551287174558),(38.30930979108689,56.992876013526654),(37.16078610504247,57.25537683364851),(38.127963945921984,57.516020773674256),(37.43280487278982,57.710289827306724),(38.33953409861437,57.935626886818994),(37.40258056526235,58.31865112960426),(38.58132855883426,58.744648733419496),(37.31190764267989,59.02578062465136),(34.65216858026123,58.91672306881671)]], 
[[(38.52087994377928,59.11898412389468),(39.54850639971376,58.713270635642914),(38.369758406141855,58.28688958537609),(38.85334732658162,58.06375936407028),(38.33953409861437,57.710289827306724),(38.73245009647167,57.48354156434209),(38.21863686850443,57.271721400459285),(38.97424455669155,56.87744603722649),(37.463029180317314,56.5623320541159),(38.94402024916407,56.05939028508024),(38.18841256097694,55.856355210835915),(38.490655636251795,55.53270067649592),(37.795496563119656,55.39562234093384),(38.30930979108689,55.154587013355666),(36.7074214921302,54.65063295250911),(37.31190764267989,53.92734063371401),(36.979440259877556,53.58783775557231),(37.855945178174615,52.91880107773497),(39.57873070724124,52.69956490610895),(38.33953409861437,53.281741738901104),(40.00187101262603,53.35396273604752),(39.54850639971376,53.58783775557231),(40.24366547284591,53.58783775557231),(39.97164670509855,53.98069568468355),(40.60635716317572,54.03398248547225),(40.39478701048334,54.44025165268903),(39.54850639971376,54.56310590284329),(39.54850639971376,54.87732350170489),(40.39478701048334,54.87732350170489),(40.39478701048334,55.24083903654295),(39.82052516746112,55.2752875586599),(39.760076552406154,55.75443792473942),(40.57613285564824,55.78844000174894),(40.425011318010824,56.19415599955667),(39.82052516746112,56.07626182891758),(39.79030085993364,56.41214455508424),(40.48545993306579,56.495655446714636),(40.33433839542836,56.95993246553937),(39.79030085993364,56.992876013526654),(39.72985224487867,57.46729112028032),(40.33433839542836,57.46729112028032),(40.24366547284591,58.04776787540811),(39.63917932229622,58.04776787540811),(39.63917932229622,58.382088724871295),(40.33433839542836,58.382088724871295),(40.45523562553831,58.9011152358548),(38.52087994377928,59.11898412389468)]]], [[(24.367675781249993,61.45977057029751),(19.577636718749993,58.67693767258692),(19.577636718749993,57.492213666700735),(19.445800781249996,55.87531083569678),(19.445800781249996,54.085173420886775),(17.468261718749996,53.014783245859235),(20.017089843749993,51.563412328675895),(21.203613281249993,50.205033264943324),(26.125488281249993,50.40151532278236),(27.22412109374999,48.980216985374994),(32.80517578124999,49.525208341974405),(35.26611328124999,48.74894534343292),(36.93603515624999,49.66762782262194),(42.56103515625,48.77791275550183),(43.92333984374999,49.8096315635631),(47.17529296875,49.152969656170455),(49.28466796875,50.54136296522162),(48.05419921875,51.17934297928929),(51.39404296875,52.48278022207825),(50.64697265625,53.014783245859235),(52.88818359375,53.93021986394004),(51.65771484374999,54.29088164657006),(52.66845703125,55.825973254619015),(50.25146484375,56.145549500679095),(51.92138671875,57.914847767009206),(49.15283203125,58.17070248348605),(49.59228515625,60.086762746260064),(47.043457031249986,59.88893689676584),(43.57177734375,61.37567331572748),(42.64892578125,60.630101766266705),(36.89208984374999,62.000904713685856),(36.01318359374999,61.143235250840576),(31.398925781249993,62.02152819100766),(30.563964843749996,61.05828537037917),(26.872558593749993,61.71070595883174),(26.652832031249993,61.10078883158897),(24.367675781249993,61.45977057029751)], 
[(24.455566406249993,59.42272750081452),(21.203613281249993,58.49369382056807),(21.335449218749993,56.89700392127261),(21.599121093749993,55.92458580482949),(25.202636718749993,55.998380955359636),(28.850097656249993,57.06463027327854),(27.09228515625,57.844750992890994),(28.806152343749996,59.17592824927138),(26.257324218749993,59.17592824927138),(24.455566406249993,59.42272750081452)], [(35.13427734375,59.84481485969107),(31.970214843749993,58.97266715450152),(33.20068359374999,56.776808316568406),(36.67236328125,56.41390137600675),(39.08935546874999,57.25528054528888),(42.69287109374999,58.03137242177638),(40.89111328124999,59.26588062825809),(37.28759765625,58.722598828043374),(37.11181640624999,59.66774058164964),(35.13427734375,59.84481485969107)], [(29.157714843749993,55.75184939173528),(22.565917968749993,55.128649068488784),(22.565917968749993,53.54030739150019),(22.038574218749996,51.48138289610097),(26.257324218749993,51.42661449707484),(30.124511718749993,50.54136296522162),(32.18994140624999,51.17934297928929),(30.124511718749993,53.173119202640635),(35.09033203124999,53.173119202640635),(33.11279296875,54.085173420886775),(29.597167968749993,55.50374985927513),(29.157714843749993,55.75184939173528)], [(42.82470703125,56.58369172128337),(36.584472656249986,55.329144408405085),(37.99072265625,53.592504809039355),(34.95849609374999,51.48138289610097),(36.54052734374999,50.40151532278236),(39.66064453124999,50.289339253291786),(39.79248046875,52.13348804077148),(41.77001953125,50.68079714532166),(44.49462890624999,51.97134580885171),(47.30712890624999,52.509534770327264),(44.05517578125,53.54030739150019),(46.60400390625,53.696706475303245),(47.61474609375,55.40406982700608),(45.37353515625,55.40406982700608),(42.82470703125,56.58369172128337)]]) format TSV; + +select '-------- Polygon with Polygon with Holes'; +select polygonsWithinSpherical([[(29.453587685533865,59.779570356240356),(29.393139070478895,52.276266797422124),(40.636581470703206,59.38168915000267),(41.21084331372543,59.103467777099866),(29.786055068336193,52.146627480315004),(31.23682182965546,52.16517054781818),(41.69443223416517,58.85424941916091),(42.51048853740727,58.47703162291134),(32.59691566839227,52.22075341251539),(34.289476889931414,52.22075341251539),(43.02430176537451,58.07974369546071),(43.02430176537451,57.25537683364851),(35.468224883503325,52.2022335126388),(37.16078610504247,52.23926559241349),(43.02430176537451,56.26136189644947),(43.02430176537451,55.326904361850836),(38.33953409861437,52.16517054781818),(40.09254393520848,52.16517054781818),(44.4146199116388,55.3097062225408),(44.47506852669377,59.80998197603594),(39.72985224487867,59.931351417569715),(30.23941968124846,53.67744677450975),(30.20919537372098,54.63314259659509),(38.73245009647167,59.94649146557819),(37.2816833351524,59.97675082987618),(30.23941968124846,55.2752875586599),(30.33009260383092,56.19415599955667),(36.28428118674541,59.96162460231375),(34.863738732953635,59.97675082987618),(30.178971066193498,56.97640788219866),(30.178971066193498,57.91957806959033),(33.65476643185424,59.94649146557819),(32.32489690064491,59.94649146557819),(30.481214141468342,58.85424941916091),(30.571887064050795,59.99187015036608),(29.453587685533865,59.779570356240356)]], 
[[(24.367675781249993,61.45977057029751),(19.577636718749993,58.67693767258692),(19.577636718749993,57.492213666700735),(19.445800781249996,55.87531083569678),(19.445800781249996,54.085173420886775),(17.468261718749996,53.014783245859235),(20.017089843749993,51.563412328675895),(21.203613281249993,50.205033264943324),(26.125488281249993,50.40151532278236),(27.22412109374999,48.980216985374994),(32.80517578124999,49.525208341974405),(35.26611328124999,48.74894534343292),(36.93603515624999,49.66762782262194),(42.56103515625,48.77791275550183),(43.92333984374999,49.8096315635631),(47.17529296875,49.152969656170455),(49.28466796875,50.54136296522162),(48.05419921875,51.17934297928929),(51.39404296875,52.48278022207825),(50.64697265625,53.014783245859235),(52.88818359375,53.93021986394004),(51.65771484374999,54.29088164657006),(52.66845703125,55.825973254619015),(50.25146484375,56.145549500679095),(51.92138671875,57.914847767009206),(49.15283203125,58.17070248348605),(49.59228515625,60.086762746260064),(47.043457031249986,59.88893689676584),(43.57177734375,61.37567331572748),(42.64892578125,60.630101766266705),(36.89208984374999,62.000904713685856),(36.01318359374999,61.143235250840576),(31.398925781249993,62.02152819100766),(30.563964843749996,61.05828537037917),(26.872558593749993,61.71070595883174),(26.652832031249993,61.10078883158897),(24.367675781249993,61.45977057029751)], [(24.455566406249993,59.42272750081452),(21.203613281249993,58.49369382056807),(21.335449218749993,56.89700392127261),(21.599121093749993,55.92458580482949),(25.202636718749993,55.998380955359636),(28.850097656249993,57.06463027327854),(27.09228515625,57.844750992890994),(28.806152343749996,59.17592824927138),(26.257324218749993,59.17592824927138),(24.455566406249993,59.42272750081452)], [(35.13427734375,59.84481485969107),(31.970214843749993,58.97266715450152),(33.20068359374999,56.776808316568406),(36.67236328125,56.41390137600675),(39.08935546874999,57.25528054528888),(42.69287109374999,58.03137242177638),(40.89111328124999,59.26588062825809),(37.28759765625,58.722598828043374),(37.11181640624999,59.66774058164964),(35.13427734375,59.84481485969107)], [(29.157714843749993,55.75184939173528),(22.565917968749993,55.128649068488784),(22.565917968749993,53.54030739150019),(22.038574218749996,51.48138289610097),(26.257324218749993,51.42661449707484),(30.124511718749993,50.54136296522162),(32.18994140624999,51.17934297928929),(30.124511718749993,53.173119202640635),(35.09033203124999,53.173119202640635),(33.11279296875,54.085173420886775),(29.597167968749993,55.50374985927513),(29.157714843749993,55.75184939173528)], [(42.82470703125,56.58369172128337),(36.584472656249986,55.329144408405085),(37.99072265625,53.592504809039355),(34.95849609374999,51.48138289610097),(36.54052734374999,50.40151532278236),(39.66064453124999,50.289339253291786),(39.79248046875,52.13348804077148),(41.77001953125,50.68079714532166),(44.49462890624999,51.97134580885171),(47.30712890624999,52.509534770327264),(44.05517578125,53.54030739150019),(46.60400390625,53.696706475303245),(47.61474609375,55.40406982700608),(45.37353515625,55.40406982700608),(42.82470703125,56.58369172128337)]]) format TSV; + diff --git a/tests/queries/0_stateless/01302_polygons_distance.reference b/tests/queries/0_stateless/01302_polygons_distance.reference new file mode 100644 index 00000000000..f5f59e30710 --- /dev/null +++ b/tests/queries/0_stateless/01302_polygons_distance.reference @@ -0,0 +1,4 @@ +0 +1.2727922061357855 +0.3274195462417724 +0.3274195462417724 diff 
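For the 01302_polygons_distance pair (reference just above, queries just below): overlapping polygons report a distance of 0, and in the disjoint case the minimum is attained between the corners (0.1, 0.1) and (1, 1), so the expected value is sqrt(0.9^2 + 0.9^2) = 0.9 * sqrt(2) ≈ 1.2727922061357855. The Cartesian call from the test:

```sql
select polygonsDistanceCartesian(
    [[[(0, 0), (0, 0.1), (0.1, 0.1), (0.1, 0)]]],
    [[[(1., 1.), (1., 4.), (4., 4.), (4., 1.), (1., 1.)]]]);
-- 1.2727922061357855   (= 0.9 * sqrt(2), corner to corner)
```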
--git a/tests/queries/0_stateless/01302_polygons_distance.sql b/tests/queries/0_stateless/01302_polygons_distance.sql new file mode 100644 index 00000000000..a69b5017a5f --- /dev/null +++ b/tests/queries/0_stateless/01302_polygons_distance.sql @@ -0,0 +1,10 @@ +select polygonsDistanceCartesian([[[(0, 0),(0, 3),(1, 2.9),(2, 2.6),(2.6, 2),(2.9, 1),(3, 0),(0, 0)]]], [[[(1., 1.),(1., 4.),(4., 4.),(4., 1.),(1., 1.)]]]); +select polygonsDistanceCartesian([[[(0, 0), (0, 0.1), (0.1, 0.1), (0.1, 0)]]], [[[(1., 1.),(1., 4.),(4., 4.),(4., 1.),(1., 1.)]]]); +select polygonsDistanceSpherical([[[(23.725750, 37.971536)]]], [[[(4.3826169, 50.8119483)]]]); + +drop table if exists polygon_01302; +create table polygon_01302 (x Array(Array(Array(Tuple(Float64, Float64)))), y Array(Array(Array(Tuple(Float64, Float64))))) engine=Memory(); +insert into polygon_01302 values ([[[(23.725750, 37.971536)]]], [[[(4.3826169, 50.8119483)]]]); +select polygonsDistanceSpherical(x, y) from polygon_01302; + +drop table polygon_01302; diff --git a/tests/queries/0_stateless/01303_polygons_equals.reference b/tests/queries/0_stateless/01303_polygons_equals.reference new file mode 100644 index 00000000000..0d66ea1aee9 --- /dev/null +++ b/tests/queries/0_stateless/01303_polygons_equals.reference @@ -0,0 +1,2 @@ +0 +1 diff --git a/tests/queries/0_stateless/01303_polygons_equals.sql b/tests/queries/0_stateless/01303_polygons_equals.sql new file mode 100644 index 00000000000..42f1bd4693c --- /dev/null +++ b/tests/queries/0_stateless/01303_polygons_equals.sql @@ -0,0 +1,2 @@ +select polygonsEqualsCartesian([[[(0, 0),(0, 3),(1, 2.9),(2, 2.6),(2.6, 2),(2.9, 1),(3, 0),(0, 0)]]], [[[(1., 1.),(1., 4.),(4., 4.),(4., 1.),(1., 1.)]]]); +select polygonsEqualsCartesian([[[(1., 1.),(1., 4.),(4., 4.),(4., 1.)]]], [[[(1., 1.),(1., 4.),(4., 4.),(4., 1.),(1., 1.)]]]); diff --git a/tests/queries/0_stateless/01304_direct_io.reference b/tests/queries/0_stateless/01304_direct_io_long.reference similarity index 100% rename from tests/queries/0_stateless/01304_direct_io.reference rename to tests/queries/0_stateless/01304_direct_io_long.reference diff --git a/tests/queries/0_stateless/01304_direct_io.sh b/tests/queries/0_stateless/01304_direct_io_long.sh similarity index 78% rename from tests/queries/0_stateless/01304_direct_io.sh rename to tests/queries/0_stateless/01304_direct_io_long.sh index 3ba3d020d99..7505173ddba 100755 --- a/tests/queries/0_stateless/01304_direct_io.sh +++ b/tests/queries/0_stateless/01304_direct_io_long.sh @@ -9,12 +9,12 @@ $CLICKHOUSE_CLIENT --multiquery --query " CREATE TABLE bug (UserID UInt64, Date Date) ENGINE = MergeTree ORDER BY Date; INSERT INTO bug SELECT rand64(), '2020-06-07' FROM numbers(50000000); OPTIMIZE TABLE bug FINAL;" +LOG="$CLICKHOUSE_TMP/err-$CLICKHOUSE_DATABASE" +$CLICKHOUSE_BENCHMARK --iterations 10 --max_threads 100 --min_bytes_to_use_direct_io 1 <<< "SELECT sum(UserID) FROM bug PREWHERE NOT ignore(Date)" 1>/dev/null 2>"$LOG" +cat "$LOG" | grep Exception +cat "$LOG" | grep Loaded -$CLICKHOUSE_BENCHMARK --iterations 10 --max_threads 100 --min_bytes_to_use_direct_io 1 <<< "SELECT sum(UserID) FROM bug PREWHERE NOT ignore(Date)" 1>/dev/null 2>"$CLICKHOUSE_TMP"/err -cat "$CLICKHOUSE_TMP"/err | grep Exception -cat "$CLICKHOUSE_TMP"/err | grep Loaded - -rm "$CLICKHOUSE_TMP"/err +rm "$LOG" $CLICKHOUSE_CLIENT --multiquery --query " DROP TABLE bug;" diff --git a/tests/queries/0_stateless/01304_polygons_sym_difference.reference b/tests/queries/0_stateless/01304_polygons_sym_difference.reference new file mode 100644 
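And on 01303_polygons_equals above: equality is geometric rather than representational, so an open ring and its explicitly closed copy compare equal, while genuinely different shapes return 0. The closure-insensitive case from the test:

```sql
select polygonsEqualsCartesian(
    [[[(1., 1.), (1., 4.), (4., 4.), (4., 1.)]]],
    [[[(1., 1.), (1., 4.), (4., 4.), (4., 1.), (1., 1.)]]]);
-- 1
```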
index 00000000000..7d16848ac2e --- /dev/null +++ b/tests/queries/0_stateless/01304_polygons_sym_difference.reference @@ -0,0 +1,7 @@ +[[[(1,2.9),(1,1),(2.9,1),(3,0),(0,0),(0,3),(1,2.9)]],[[(1,2.9),(1,4),(4,4),(4,1),(2.9,1),(2.6,2),(2,2.6),(1,2.9)]]] +-------- MultiPolygon with Polygon +MULTIPOLYGON(((36.9725 59.0149,35.5408 58.9593,37.2817 59.9768,38.7325 59.9465,36.9725 59.0149)),((36.9725 59.0149,37.3119 59.0258,37.8553 58.9075,36.5949 58.1673,36.0123 58.2869,37.191 58.6819,36.4989 58.7512,36.9725 59.0149)),((36.151 54.791,37.7653 55.1891,37.06 55.3843,37.2824 55.5258,38.0373 55.6523,37.6238 55.7402,38.1319 56.0534,38.2186 56.0594,38.1688 56.0758,38.4339 56.2361,38.944 56.0594,38.1884 55.8564,38.4907 55.5327,37.7955 55.3956,38.2609 55.1775,38.1601 55.1091,36.7074 54.6506,37.0035 54.2999,36.6985 54.0791,36.0472 54.7217,36.151 54.791)),((36.151 54.791,36.0123 54.7554,36.0472 54.7217,34.9611 53.9765,34.894 54.1226,35.6193 54.4929,34.9706 54.9262,35.2275 55.0993,36.4354 55.3441,35.7505 55.4454,35.9817 55.5958,36.5563 55.6352,36.193 55.7319,37.2281 56.3799,38.1688 56.0758,38.1319 56.0534,36.647 55.9411,37.6238 55.7402,37.2824 55.5258,36.8283 55.4471,37.06 55.3843,36.151 54.791)),((36.5334 56.6753,38.2312 56.9795,37.565 56.5843,37.463 56.5623,37.5054 56.5484,37.2281 56.3799,36.4446 56.6242,36.5334 56.6753)),((36.5334 56.6753,36.375 56.6455,36.4446 56.6242,36.0233 56.3789,35.4083 56.5254,36.1999 57.0022,36.9794 57.0751,36.4587 57.1544,38.0535 58.0542,38.3395 57.9356,37.4328 57.7103,38.0744 57.5312,37.9669 57.4734,37.1608 57.2554,37.4489 57.1909,36.5334 56.6753)),((36.8709 53.2765,37.135 53.4711,37.8559 52.9188,38.0214 52.8989,37.1608 52.2393,35.4682 52.2022,36.5022 53.0008,37.4328 52.9552,36.8709 53.2765)),((36.8709 53.2765,36.5022 53.0008,35.3776 53.0462,35.3645 53.076,36.1528 53.6763,36.8709 53.2765)),((36.6985 54.0791,36.919 53.8561,36.3552 53.8269,36.6985 54.0791)),((35.5408 58.9593,35.3712 58.8556,34.6522 58.9167,35.5408 58.9593)),((36.0848 57.855,36.3932 58.0447,36.4354 58.0478,36.403 58.0507,36.5949 58.1673,37.1608 58.0478,36.0848 57.855)),((36.0848 57.855,35.9179 57.7512,35.7402 57.7909,36.0848 57.855)),((37.135 53.4711,36.9794 53.5878,37.3119 53.9273,37.0035 54.2999,38.1601 55.1091,38.3093 55.1546,38.2609 55.1775,39.8102 56.1914,39.8205 56.0763,40.425 56.1942,40.5716 55.8007,40.5504 55.7875,39.7601 55.7544,39.8151 55.3187,37.135 53.4711)),((38.2312 56.9795,38.2699 57.0021,38.3093 56.9929,38.2312 56.9795)),((36.4989 58.7512,36.1498 58.553,34.9952 58.6226,35.3712 58.8556,36.4989 58.7512)),((36.4587 57.1544,36.1999 57.0022,34.4816 56.8232,34.8098 57.0409,36.0727 57.0915,35.0338 57.1875,35.4682 57.4674,36.1936 57.4998,35.613 57.5595,35.9179 57.7512,37.0097 57.4998,35.7705 57.2554,36.4587 57.1544)),((38.0535 58.0542,37.4026 58.3187,38.5813 58.7446,37.8553 58.9075,39.7299 59.9314,44.4751 59.81,44.4146 55.3097,40.0925 52.1652,38.3395 52.1652,39.1456 52.7573,39.5787 52.6996,39.2704 52.8471,39.9877 53.3534,40.0019 53.354,39.9942 53.358,43.0243 55.3269,43.0243 56.2614,40.2143 54.467,39.5485 54.5631,39.5485 54.8773,40.3948 54.8773,40.3948 55.2408,39.8205 55.2753,39.8151 55.3187,40.5504 55.7875,40.5761 55.7884,40.5716 55.8007,43.0243 57.2554,43.0243 58.0797,40.4543 56.5923,40.3343 56.9599,39.7903 56.9929,39.7863 57.025,42.5105 58.477,41.6944 58.8542,40.1389 58.048,39.6392 58.0478,39.6392 58.3427,39.7184 58.3823,40.3343 58.3821,40.4136 58.7241,41.2108 59.1035,40.6366 59.3817,39.8163 58.9766,38.5209 59.119,39.4085 58.7696,38.7465 58.4255,38.3698 58.2869,38.432 58.2584,38.0535 58.0542)),((34.4996 
55.9565,33.5244 56.1686,33.7222 56.3063,34.5917 56.2949,35.0485 56.303,34.744 56.1118,34.7126 56.11,34.7331 56.1049,34.4996 55.9565)),((34.4996 55.9565,35.0954 55.822,34.9721 55.7463,34.2598 55.8023,34.4996 55.9565)),((31.6069 56.3194,31.5088 55.9411,31.7782 55.7778,30.2092 54.6331,30.2394 53.6774,31.7439 54.8677,31.4182 54.4227,31.8748 54.1736,29.3931 52.2763,29.4536 59.7796,30.5719 59.9919,30.4812 58.8542,32.3249 59.9465,33.6548 59.9465,30.179 57.9196,30.179 56.9764,32.2175 58.3664,32.1738 58.0318,31.5088 57.4998,31.6514 57.1258,30.3301 56.1942,30.2394 55.2753,31.6069 56.3194)),((31.6069 56.3194,31.7506 56.8609,31.6514 57.1258,34.0496 58.6717,34.9952 58.6226,34.6028 58.3749,33.6245 58.271,34.3593 58.2189,33.7581 57.8255,33.2316 57.7748,33.6325 57.7419,31.6069 56.3194)),((33.5244 56.1686,33.1204 55.8832,32.748 55.9072,32.9547 55.7645,31.7439 54.8677,31.8413 54.9989,32.204 55.5156,31.7782 55.7778,33.3418 56.8364,33.8361 56.6953,34.1885 56.6259,33.7222 56.3063,32.8387 56.3117,33.5244 56.1686)),((33.1204 55.8832,34.2598 55.8023,33.6125 55.3778,33.5036 55.3785,32.9547 55.7645,33.1204 55.8832)),((35.3188 55.9582,36.193 55.7319,35.9817 55.5958,35.1358 55.5327,35.7505 55.4454,35.2275 55.0993,34.8335 55.0162,34.9706 54.9262,34.7231 54.7576,34.2593 54.9642,35.0149 55.3613,34.3709 55.3709,34.9721 55.7463,35.6798 55.6863,35.0954 55.822,35.3188 55.9582)),((35.3188 55.9582,34.7331 56.1049,34.744 56.1118,35.6571 56.1619,35.3188 55.9582)),((33.3418 56.8364,32.9596 56.9434,33.5602 56.9781,33.3418 56.8364)),((33.4048 52.8423,34.7731 52.9188,34.7731 53.7847,34.7279 53.8116,34.9611 53.9765,35.3645 53.076,34.2895 52.2208,32.5969 52.2208,33.4048 52.8423)),((33.4048 52.8423,33.1712 52.8276,32.5275 53.1741,34.7231 54.7576,35.0753 54.5981,34.1081 54.1757,34.7279 53.8116,33.4048 52.8423)),((32.2523 53.964,32.476 53.8383,32.0831 53.408,32.5275 53.1741,31.2368 52.1652,29.7861 52.1466,32.2523 53.964)),((32.2523 53.964,31.8748 54.1736,33.6125 55.3778,34.3709 55.3709,32.2523 53.964)),((36.3552 53.8269,36.1528 53.6763,35.9216 53.8026,36.3552 53.8269)),((32.5691 58.5924,34.8637 59.9768,36.2843 59.9616,34.0496 58.6717,33.8361 58.6819,34.7428 59.5659,33.4734 58.8542,32.5691 58.5924)),((32.5691 58.5924,32.2175 58.3664,32.2342 58.4928,32.5691 58.5924)),((33.5602 56.9781,34.0208 57.2724,35.0338 57.1875,34.8098 57.0409,33.5602 56.9781)),((36.3932 58.0447,35.1134 57.9454,35.4314 58.1349,36.403 58.0507,36.3932 58.0447)),((35.1134 57.9454,34.6332 57.6538,33.6325 57.7419,33.7581 57.8255,35.1134 57.9454)),((35.4314 58.1349,34.3593 58.2189,34.6028 58.3749,36.0877 58.5174,35.4314 58.1349)),((35.4682 57.4674,34.2274 57.4023,34.6332 57.6538,35.613 57.5595,35.4682 57.4674)),((34.4816 56.8232,34.3867 56.7596,34.229 56.7948,34.4816 56.8232)),((34.1885 56.6259,34.3867 56.7596,35.4083 56.5254,35.2273 56.414,34.1885 56.6259)),((34.2274 57.4023,34.0208 57.2724,33.1712 57.337,34.2274 57.4023)),((35.0485 56.303,35.2273 56.414,35.71 56.3117,35.0485 56.303)),((35.6571 56.1619,36.0233 56.3789,36.7074 56.211,35.6571 56.1619)),((36.1498 58.553,36.3447 58.5402,36.0877 58.5174,36.1498 58.553)),((40.2143 54.467,40.3948 54.4403,40.6064 54.034,39.9716 53.9807,40.2437 53.5878,39.5485 53.5878,39.9942 53.358,39.9877 53.3534,38.5511 53.2922,40.2143 54.467)),((39.8102 56.1914,39.7903 56.4121,40.2529 56.4682,39.8102 56.1914)),((38.0214 52.8989,38.4609 53.226,39.2704 52.8471,39.1456 52.7573,38.0214 52.8989)),((38.5511 53.2922,38.4609 53.226,38.3395 53.2817,38.5511 53.2922)),((40.4543 56.5923,40.4855 56.4957,40.2529 56.4682,40.4543 56.5923)),((40.1389 
58.048,40.2437 58.0478,40.3343 57.4673,39.7299 57.4673,39.7863 57.025,38.4339 56.2361,37.5054 56.5484,37.565 56.5843,38.9742 56.8774,38.4915 57.1308,40.1389 58.048)),((40.4136 58.7241,39.7184 58.3823,39.6392 58.3821,39.6392 58.3427,38.3737 57.6908,38.3395 57.7103,38.8533 58.0638,38.432 58.2584,38.7465 58.4255,39.5485 58.7133,39.4085 58.7696,39.8163 58.9766,40.4552 58.9011,40.4136 58.7241)),((38.3737 57.6908,38.7325 57.4835,38.2186 57.2717,38.4915 57.1308,38.2699 57.0021,37.4489 57.1909,37.9669 57.4734,38.128 57.516,38.0744 57.5312,38.3737 57.6908))) +-------- MultiPolygon with Polygon with Holes +MULTIPOLYGON(((24.3677 61.4598,26.6528 61.1008,26.8726 61.7107,30.564 61.0583,31.3989 62.0215,36.0132 61.1432,36.8921 62.0009,42.6489 60.6301,43.5718 61.3757,47.0435 59.8889,49.5923 60.0868,49.1528 58.1707,51.9214 57.9148,50.2515 56.1455,52.6685 55.826,51.6577 54.2909,52.8882 53.9302,50.647 53.0148,51.394 52.4828,48.0542 51.1793,49.2847 50.5414,47.1753 49.153,43.9233 49.8096,42.561 48.7779,36.936 49.6676,35.2661 48.7489,32.8052 49.5252,27.2241 48.9802,26.1255 50.4015,21.2036 50.205,20.0171 51.5634,17.4683 53.0148,19.4458 54.0852,19.4458 55.8753,19.5776 57.4922,19.5776 58.6769,24.3677 61.4598),(24.4556 59.4227,21.2036 58.4937,21.3354 56.897,21.5991 55.9246,25.2026 55.9984,28.8501 57.0646,27.0923 57.8448,28.8062 59.1759,26.2573 59.1759,24.4556 59.4227),(35.1489 56.5859,36.7074 56.211,34.7126 56.11,36.5563 55.6352,35.1358 55.5327,36.4354 55.3441,34.8335 55.0162,35.6193 54.4929,34.894 54.1226,35.3776 53.0462,37.0604 52.9744,34.9585 51.4814,36.5405 50.4015,39.6606 50.2893,39.7925 52.1335,41.77 50.6808,44.4946 51.9713,47.3071 52.5095,44.0552 53.5403,46.604 53.6967,47.6147 55.4041,45.3735 55.4041,42.8247 56.5837,40.4412 56.1511,40.425 56.1942,39.8205 56.0763,39.7903 56.4121,40.4855 56.4957,40.3343 56.9599,39.7903 56.9929,39.7379 57.4051,40.0149 57.4677,40.3343 57.4673,40.3237 57.5365,42.6929 58.0314,40.8911 59.2659,39.2792 59.0373,38.5209 59.119,38.8838 58.9777,38.0944 58.8545,37.3119 59.0258,37.2327 59.0233,37.1118 59.6677,35.1343 59.8448,31.9702 58.9727,32.25 58.4976,32.2342 58.4928,32.1738 58.0318,31.5088 57.4998,31.7506 56.8609,31.5088 55.9411,32.204 55.5156,31.8413 54.9989,31.627 54.7093,29.5972 55.5037,29.1577 55.7518,22.5659 55.1286,22.5659 53.5403,22.0386 51.4814,26.2573 51.4266,30.1245 50.5414,32.1899 51.1793,30.1245 53.1731,32.4808 53.1989,33.1712 52.8276,34.7731 52.9188,34.7731 53.1793,35.0903 53.1731,34.7731 53.3243,34.7731 53.7847,34.1081 54.1757,35.0753 54.5981,34.2593 54.9642,35.0149 55.3613,33.5036 55.3785,32.748 55.9072,35.6798 55.6863,32.8387 56.3117,34.5917 56.2949,35.71 56.3117,33.8361 56.6953,33.7182 56.7292,35.1489 56.5859)),((35.1489 56.5859,34.229 56.7948,36.9794 57.0751,35.7705 57.2554,37.0097 57.4998,35.7402 57.7909,37.1608 58.0478,36.0123 58.2869,37.191 58.6819,34.6522 58.9167,37.2327 59.0233,37.2876 58.7226,38.0944 58.8545,38.5813 58.7446,37.4026 58.3187,38.3395 57.9356,37.4328 57.7103,38.128 57.516,37.1608 57.2554,38.3092 56.9929,38.309 56.9928,36.375 56.6455,36.8799 56.4895,36.6724 56.4139,35.1489 56.5859)),((33.1079 56.9523,32.25 58.4976,33.4734 58.8542,34.7428 59.5659,33.8361 58.6819,36.3447 58.5402,33.6245 58.271,36.4354 58.0478,33.2316 57.7748,36.1936 57.4998,33.1712 57.337,36.0727 57.0915,33.1079 56.9523)),((33.1079 56.9523,33.1392 56.8934,32.9596 56.9434,33.1079 56.9523)),((33.7182 56.7292,33.2007 56.7768,33.1392 56.8934,33.7182 56.7292)),((37.0604 52.9744,37.2165 53.0798,37.4328 52.9552,37.0604 52.9744)),((34.7731 53.3243,34.7731 53.1793,32.4808 53.1989,32.0831 
53.408,32.476 53.8383,31.4182 54.4227,31.627 54.7093,33.1128 54.0852,34.7731 53.3243)),((36.9508 55.414,37.7653 55.1891,36.8822 54.975,36.5845 55.3291,36.9508 55.414)),((36.9508 55.414,36.8283 55.4471,37.9482 55.6376,36.9508 55.414)),((37.2165 53.0798,35.9216 53.8026,36.919 53.8561,36.0123 54.7554,36.8822 54.975,37.0572 54.7635,36.7074 54.6506,37.3119 53.9273,36.9794 53.5878,37.4471 53.2343,37.2165 53.0798)),((37.0572 54.7635,38.3093 55.1546,37.7955 55.3956,38.4907 55.5327,38.3184 55.7179,40.4412 56.1511,40.5761 55.7884,39.7601 55.7544,39.8205 55.2753,40.3948 55.2408,40.3948 54.8773,39.5485 54.8773,39.5485 54.5631,40.3948 54.4403,40.6064 54.034,39.9716 53.9807,40.2437 53.5878,39.5485 53.5878,40.0019 53.354,38.3395 53.2817,39.5787 52.6996,37.8559 52.9188,37.4471 53.2343,37.9907 53.5925,37.0572 54.7635)),((38.5798 57.0849,38.2186 57.2717,38.7325 57.4835,38.3395 57.7103,38.8533 58.0638,38.3698 58.2869,39.5485 58.7133,38.8838 58.9777,39.2792 59.0373,40.4552 58.9011,40.3343 58.3821,39.6392 58.3821,39.6392 58.0478,40.2437 58.0478,40.3237 57.5365,40.0149 57.4677,39.7299 57.4673,39.7379 57.4051,39.0894 57.2553,38.5798 57.0849)),((38.5798 57.0849,38.9742 56.8774,37.463 56.5623,38.944 56.0594,38.1884 55.8564,38.3184 55.7179,38.0262 55.6546,36.647 55.9411,38.2186 56.0594,36.8799 56.4895,38.309 56.9928,38.3093 56.9929,38.3092 56.9929,38.5798 57.0849)),((37.9482 55.6376,38.0262 55.6546,38.0373 55.6523,37.9482 55.6376))) +-------- Polygon with Polygon with Holes +MULTIPOLYGON(((24.3677 61.4598,26.6528 61.1008,26.8726 61.7107,30.564 61.0583,31.3989 62.0215,36.0132 61.1432,36.8921 62.0009,42.6489 60.6301,43.5718 61.3757,47.0435 59.8889,49.5923 60.0868,49.1528 58.1707,51.9214 57.9148,50.2515 56.1455,52.6685 55.826,51.6577 54.2909,52.8882 53.9302,50.647 53.0148,51.394 52.4828,48.0542 51.1793,49.2847 50.5414,47.1753 49.153,43.9233 49.8096,42.561 48.7779,36.936 49.6676,35.2661 48.7489,32.8052 49.5252,27.2241 48.9802,26.1255 50.4015,21.2036 50.205,20.0171 51.5634,17.4683 53.0148,19.4458 54.0852,19.4458 55.8753,19.5776 57.4922,19.5776 58.6769,24.3677 61.4598),(24.4556 59.4227,21.2036 58.4937,21.3354 56.897,21.5991 55.9246,25.2026 55.9984,28.8501 57.0646,27.0923 57.8448,28.8062 59.1759,26.2573 59.1759,24.4556 59.4227),(35.9475 59.7758,36.2843 59.9616,34.8637 59.9768,34.2247 59.6064,31.9702 58.9727,32.2964 58.4175,30.179 56.9764,30.179 57.9196,33.6548 59.9465,32.3249 59.9465,30.4812 58.8542,30.5719 59.9919,29.4536 59.7796,29.4171 55.606,29.1577 55.7518,22.5659 55.1286,22.5659 53.5403,22.0386 51.4814,26.2573 51.4266,30.1245 50.5414,32.1899 51.1793,31.1968 52.1649,31.2368 52.1652,32.5603 53.1989,33.8733 53.1922,32.5969 52.2208,34.2895 52.2208,37.2766 54.4948,37.7431 53.9104,35.4682 52.2022,35.9681 52.2157,34.9585 51.4814,36.5405 50.4015,39.6606 50.2893,39.7925 52.1335,41.77 50.6808,44.4946 51.9713,47.3071 52.5095,44.0552 53.5403,46.604 53.6967,47.6147 55.4041,45.3735 55.4041,44.4212 55.8594,44.4751 59.81,39.7299 59.9314,37.6322 58.7797,37.2876 58.7226,37.2102 59.1452,38.7325 59.9465,37.2817 59.9768,36.7912 59.6986,35.9475 59.7758)),((32.6512 57.792,32.2964 58.4175,34.2247 59.6064,35.1343 59.8448,35.9475 59.7758,32.6512 57.792)),((32.6512 57.792,32.9378 57.2699,30.2394 55.2753,30.3301 56.1942,32.6512 57.792)),((33.2446 56.7729,33.2007 56.7768,32.9378 57.2699,36.7912 59.6986,37.1118 59.6677,37.2102 59.1452,33.2446 56.7729)),((33.2446 56.7729,34.2635 56.6767,31.5682 54.7333,30.7705 55.0525,33.2446 56.7729)),((34.2635 56.6767,37.6322 58.7797,40.2079 59.1718,35.4536 56.5531,34.2635 56.6767)),((40.2079 59.1718,40.6366 
59.3817,40.8804 59.2644,40.2079 59.1718)),((34.3351 53.53,35.0903 53.1731,33.8733 53.1922,34.3351 53.53)),((34.3351 53.53,33.5144 53.9057,38.1759 56.9472,39.0894 57.2553,40.9691 57.677,37.1934 55.4694,36.5845 55.3291,36.7219 55.1665,34.3351 53.53)),((32.6907 54.2663,33.1128 54.0852,33.5144 53.9057,32.5603 53.1989,31.1682 53.1903,32.6907 54.2663)),((32.6907 54.2663,32.2591 54.4483,35.4536 56.5531,36.1815 56.4715,32.6907 54.2663)),((38.1759 56.9472,36.6724 56.4139,36.1815 56.4715,41.168 59.0834,41.5887 58.8012,38.1759 56.9472)),((37.2766 54.4948,36.7219 55.1665,37.1934 55.4694,39.4328 55.9511,37.2766 54.4948)),((40.9691 57.677,42.2498 58.3455,42.6929 58.0314,40.9691 57.677)),((30.7705 55.0525,30.2092 54.6331,30.2394 53.6774,31.5682 54.7333,32.2591 54.4483,30.5408 53.1811,30.1245 53.1731,30.3098 53.0028,29.3931 52.2763,29.4171 55.606,29.5972 55.5037,30.7705 55.0525)),((30.5408 53.1811,31.1682 53.1903,30.5785 52.7531,30.3098 53.0028,30.5408 53.1811)),((30.5785 52.7531,31.1968 52.1649,29.7861 52.1466,30.5785 52.7531)),((35.9681 52.2157,37.9907 53.5925,37.7431 53.9104,41.4519 56.3413,42.8247 56.5837,44.4212 55.8594,44.4146 55.3097,40.0925 52.1652,38.3395 52.1652,43.0243 55.3269,43.0243 56.2614,37.1608 52.2393,35.9681 52.2157)),((39.4328 55.9511,43.0243 58.0797,43.0243 57.2554,41.4519 56.3413,39.4328 55.9511)),((41.168 59.0834,40.9299 59.2404,41.2108 59.1035,41.168 59.0834)),((41.5887 58.8012,41.6944 58.8542,42.5105 58.477,42.2498 58.3455,41.5887 58.8012)),((40.9299 59.2404,40.8804 59.2644,40.8911 59.2659,40.9299 59.2404))) diff --git a/tests/queries/0_stateless/01304_polygons_sym_difference.sql b/tests/queries/0_stateless/01304_polygons_sym_difference.sql new file mode 100644 index 00000000000..f4893dd5b33 --- /dev/null +++ b/tests/queries/0_stateless/01304_polygons_sym_difference.sql @@ -0,0 +1,10 @@ +select polygonsSymDifferenceCartesian([[[(0, 0),(0, 3),(1, 2.9),(2, 2.6),(2.6, 2),(2.9, 1),(3, 0),(0, 0)]]], [[[(1., 1.),(1., 4.),(4., 4.),(4., 1.),(1., 1.)]]]); + +select '-------- MultiPolygon with Polygon'; +select wkt(polygonsSymDifferenceSpherical([[(29.453587685533865,59.779570356240356),(29.393139070478895,52.276266797422124),(40.636581470703206,59.38168915000267),(41.21084331372543,59.103467777099866),(29.786055068336193,52.146627480315004),(31.23682182965546,52.16517054781818),(41.69443223416517,58.85424941916091),(42.51048853740727,58.47703162291134),(32.59691566839227,52.22075341251539),(34.289476889931414,52.22075341251539),(43.02430176537451,58.07974369546071),(43.02430176537451,57.25537683364851),(35.468224883503325,52.2022335126388),(37.16078610504247,52.23926559241349),(43.02430176537451,56.26136189644947),(43.02430176537451,55.326904361850836),(38.33953409861437,52.16517054781818),(40.09254393520848,52.16517054781818),(44.4146199116388,55.3097062225408),(44.47506852669377,59.80998197603594),(39.72985224487867,59.931351417569715),(30.23941968124846,53.67744677450975),(30.20919537372098,54.63314259659509),(38.73245009647167,59.94649146557819),(37.2816833351524,59.97675082987618),(30.23941968124846,55.2752875586599),(30.33009260383092,56.19415599955667),(36.28428118674541,59.96162460231375),(34.863738732953635,59.97675082987618),(30.178971066193498,56.97640788219866),(30.178971066193498,57.91957806959033),(33.65476643185424,59.94649146557819),(32.32489690064491,59.94649146557819),(30.481214141468342,58.85424941916091),(30.571887064050795,59.99187015036608),(29.453587685533865,59.779570356240356)]], 
[[[(33.473420586689336,58.85424941916091),(32.23422397806246,58.492830557036),(32.173775363007486,58.03176922751564),(31.508840597402823,57.499784781503735),(31.750635057622702,56.86092686957355),(31.508840597402823,55.941082594334574),(32.20399967053497,55.515591939372456),(31.84130798020516,54.998862226280465),(31.418167674820367,54.422670886434275),(32.47601843828233,53.83826377018255),(32.08310244042503,53.408048308050866),(33.171177511414484,52.82758702113742),(34.77306581037117,52.91880107773494),(34.77306581037117,53.784726518357985),(34.108131044766516,54.17574726780569),(35.07530888564602,54.59813930694554),(34.25925258240394,54.96417435716029),(35.01486027059106,55.361278263643584),(33.50364489421682,55.37845402950552),(32.7480372060297,55.90721384574556),(35.67979503619571,55.68634475630185),(32.83871012861215,56.311688992608396),(34.591719965206266,56.29492065473883),(35.7100193437232,56.311688992608396),(33.83611227701915,56.695333481003644),(32.95960735872209,56.9434497616887),(36.072711034053015,57.091531913901434),(33.171177511414484,57.33702717078384),(36.193608264162954,57.499784781503735),(33.23162612646945,57.77481561306047),(36.43540272438284,58.04776787540811),(33.62454212432676,58.27099811968307),(36.344729801800376,58.54018474404165),(33.83611227701915,58.68186423448108),(34.74284150284369,59.565911441555244),(33.473420586689336,58.85424941916091)]], [[(34.65216858026123,58.91672306881671),(37.19101041256995,58.68186423448108),(36.01226241899805,58.28688958537609),(37.16078610504247,58.04776787540811),(35.74024365125068,57.79092907387934),(37.009664567405046,57.499784781503735),(35.77046795877817,57.25537683364851),(36.979440259877556,57.07510745541089),(34.22902827487645,56.794777197297435),(36.7074214921302,56.210968525786996),(34.712617195316206,56.10998276812964),(36.55629995449277,55.63519693782703),(35.13575750070099,55.53270067649592),(36.43540272438284,55.34409504165558),(34.83351442542614,55.01619492319591),(35.61934642114075,54.49294870011772),(34.89396304048112,54.12264226523038),(35.37755196092087,53.046178687628185),(37.43280487278982,52.95523300597458),(35.92158949641559,53.80257986695776),(36.91899164482259,53.856094327816805),(36.01226241899805,54.75541714463799),(37.765272255592166,55.189110239786885),(36.828318722240134,55.44708256557195),(38.03729102333953,55.652253637168315),(36.64697287707522,55.941082594334574),(38.21863686850443,56.05939028508024),(36.37495410932787,56.64551287174558),(38.30930979108689,56.992876013526654),(37.16078610504247,57.25537683364851),(38.127963945921984,57.516020773674256),(37.43280487278982,57.710289827306724),(38.33953409861437,57.935626886818994),(37.40258056526235,58.31865112960426),(38.58132855883426,58.744648733419496),(37.31190764267989,59.02578062465136),(34.65216858026123,58.91672306881671)]], 
[[(38.52087994377928,59.11898412389468),(39.54850639971376,58.713270635642914),(38.369758406141855,58.28688958537609),(38.85334732658162,58.06375936407028),(38.33953409861437,57.710289827306724),(38.73245009647167,57.48354156434209),(38.21863686850443,57.271721400459285),(38.97424455669155,56.87744603722649),(37.463029180317314,56.5623320541159),(38.94402024916407,56.05939028508024),(38.18841256097694,55.856355210835915),(38.490655636251795,55.53270067649592),(37.795496563119656,55.39562234093384),(38.30930979108689,55.154587013355666),(36.7074214921302,54.65063295250911),(37.31190764267989,53.92734063371401),(36.979440259877556,53.58783775557231),(37.855945178174615,52.91880107773497),(39.57873070724124,52.69956490610895),(38.33953409861437,53.281741738901104),(40.00187101262603,53.35396273604752),(39.54850639971376,53.58783775557231),(40.24366547284591,53.58783775557231),(39.97164670509855,53.98069568468355),(40.60635716317572,54.03398248547225),(40.39478701048334,54.44025165268903),(39.54850639971376,54.56310590284329),(39.54850639971376,54.87732350170489),(40.39478701048334,54.87732350170489),(40.39478701048334,55.24083903654295),(39.82052516746112,55.2752875586599),(39.760076552406154,55.75443792473942),(40.57613285564824,55.78844000174894),(40.425011318010824,56.19415599955667),(39.82052516746112,56.07626182891758),(39.79030085993364,56.41214455508424),(40.48545993306579,56.495655446714636),(40.33433839542836,56.95993246553937),(39.79030085993364,56.992876013526654),(39.72985224487867,57.46729112028032),(40.33433839542836,57.46729112028032),(40.24366547284591,58.04776787540811),(39.63917932229622,58.04776787540811),(39.63917932229622,58.382088724871295),(40.33433839542836,58.382088724871295),(40.45523562553831,58.9011152358548),(38.52087994377928,59.11898412389468)]]])) format TSV; + +select '-------- MultiPolygon with Polygon with Holes'; +select wkt(polygonsSymDifferenceSpherical([[[(33.473420586689336,58.85424941916091),(32.23422397806246,58.492830557036),(32.173775363007486,58.03176922751564),(31.508840597402823,57.499784781503735),(31.750635057622702,56.86092686957355),(31.508840597402823,55.941082594334574),(32.20399967053497,55.515591939372456),(31.84130798020516,54.998862226280465),(31.418167674820367,54.422670886434275),(32.47601843828233,53.83826377018255),(32.08310244042503,53.408048308050866),(33.171177511414484,52.82758702113742),(34.77306581037117,52.91880107773494),(34.77306581037117,53.784726518357985),(34.108131044766516,54.17574726780569),(35.07530888564602,54.59813930694554),(34.25925258240394,54.96417435716029),(35.01486027059106,55.361278263643584),(33.50364489421682,55.37845402950552),(32.7480372060297,55.90721384574556),(35.67979503619571,55.68634475630185),(32.83871012861215,56.311688992608396),(34.591719965206266,56.29492065473883),(35.7100193437232,56.311688992608396),(33.83611227701915,56.695333481003644),(32.95960735872209,56.9434497616887),(36.072711034053015,57.091531913901434),(33.171177511414484,57.33702717078384),(36.193608264162954,57.499784781503735),(33.23162612646945,57.77481561306047),(36.43540272438284,58.04776787540811),(33.62454212432676,58.27099811968307),(36.344729801800376,58.54018474404165),(33.83611227701915,58.68186423448108),(34.74284150284369,59.565911441555244),(33.473420586689336,58.85424941916091)]], 
[[(34.65216858026123,58.91672306881671),(37.19101041256995,58.68186423448108),(36.01226241899805,58.28688958537609),(37.16078610504247,58.04776787540811),(35.74024365125068,57.79092907387934),(37.009664567405046,57.499784781503735),(35.77046795877817,57.25537683364851),(36.979440259877556,57.07510745541089),(34.22902827487645,56.794777197297435),(36.7074214921302,56.210968525786996),(34.712617195316206,56.10998276812964),(36.55629995449277,55.63519693782703),(35.13575750070099,55.53270067649592),(36.43540272438284,55.34409504165558),(34.83351442542614,55.01619492319591),(35.61934642114075,54.49294870011772),(34.89396304048112,54.12264226523038),(35.37755196092087,53.046178687628185),(37.43280487278982,52.95523300597458),(35.92158949641559,53.80257986695776),(36.91899164482259,53.856094327816805),(36.01226241899805,54.75541714463799),(37.765272255592166,55.189110239786885),(36.828318722240134,55.44708256557195),(38.03729102333953,55.652253637168315),(36.64697287707522,55.941082594334574),(38.21863686850443,56.05939028508024),(36.37495410932787,56.64551287174558),(38.30930979108689,56.992876013526654),(37.16078610504247,57.25537683364851),(38.127963945921984,57.516020773674256),(37.43280487278982,57.710289827306724),(38.33953409861437,57.935626886818994),(37.40258056526235,58.31865112960426),(38.58132855883426,58.744648733419496),(37.31190764267989,59.02578062465136),(34.65216858026123,58.91672306881671)]], [[(38.52087994377928,59.11898412389468),(39.54850639971376,58.713270635642914),(38.369758406141855,58.28688958537609),(38.85334732658162,58.06375936407028),(38.33953409861437,57.710289827306724),(38.73245009647167,57.48354156434209),(38.21863686850443,57.271721400459285),(38.97424455669155,56.87744603722649),(37.463029180317314,56.5623320541159),(38.94402024916407,56.05939028508024),(38.18841256097694,55.856355210835915),(38.490655636251795,55.53270067649592),(37.795496563119656,55.39562234093384),(38.30930979108689,55.154587013355666),(36.7074214921302,54.65063295250911),(37.31190764267989,53.92734063371401),(36.979440259877556,53.58783775557231),(37.855945178174615,52.91880107773497),(39.57873070724124,52.69956490610895),(38.33953409861437,53.281741738901104),(40.00187101262603,53.35396273604752),(39.54850639971376,53.58783775557231),(40.24366547284591,53.58783775557231),(39.97164670509855,53.98069568468355),(40.60635716317572,54.03398248547225),(40.39478701048334,54.44025165268903),(39.54850639971376,54.56310590284329),(39.54850639971376,54.87732350170489),(40.39478701048334,54.87732350170489),(40.39478701048334,55.24083903654295),(39.82052516746112,55.2752875586599),(39.760076552406154,55.75443792473942),(40.57613285564824,55.78844000174894),(40.425011318010824,56.19415599955667),(39.82052516746112,56.07626182891758),(39.79030085993364,56.41214455508424),(40.48545993306579,56.495655446714636),(40.33433839542836,56.95993246553937),(39.79030085993364,56.992876013526654),(39.72985224487867,57.46729112028032),(40.33433839542836,57.46729112028032),(40.24366547284591,58.04776787540811),(39.63917932229622,58.04776787540811),(39.63917932229622,58.382088724871295),(40.33433839542836,58.382088724871295),(40.45523562553831,58.9011152358548),(38.52087994377928,59.11898412389468)]]], 
[[(24.367675781249993,61.45977057029751),(19.577636718749993,58.67693767258692),(19.577636718749993,57.492213666700735),(19.445800781249996,55.87531083569678),(19.445800781249996,54.085173420886775),(17.468261718749996,53.014783245859235),(20.017089843749993,51.563412328675895),(21.203613281249993,50.205033264943324),(26.125488281249993,50.40151532278236),(27.22412109374999,48.980216985374994),(32.80517578124999,49.525208341974405),(35.26611328124999,48.74894534343292),(36.93603515624999,49.66762782262194),(42.56103515625,48.77791275550183),(43.92333984374999,49.8096315635631),(47.17529296875,49.152969656170455),(49.28466796875,50.54136296522162),(48.05419921875,51.17934297928929),(51.39404296875,52.48278022207825),(50.64697265625,53.014783245859235),(52.88818359375,53.93021986394004),(51.65771484374999,54.29088164657006),(52.66845703125,55.825973254619015),(50.25146484375,56.145549500679095),(51.92138671875,57.914847767009206),(49.15283203125,58.17070248348605),(49.59228515625,60.086762746260064),(47.043457031249986,59.88893689676584),(43.57177734375,61.37567331572748),(42.64892578125,60.630101766266705),(36.89208984374999,62.000904713685856),(36.01318359374999,61.143235250840576),(31.398925781249993,62.02152819100766),(30.563964843749996,61.05828537037917),(26.872558593749993,61.71070595883174),(26.652832031249993,61.10078883158897),(24.367675781249993,61.45977057029751)], [(24.455566406249993,59.42272750081452),(21.203613281249993,58.49369382056807),(21.335449218749993,56.89700392127261),(21.599121093749993,55.92458580482949),(25.202636718749993,55.998380955359636),(28.850097656249993,57.06463027327854),(27.09228515625,57.844750992890994),(28.806152343749996,59.17592824927138),(26.257324218749993,59.17592824927138),(24.455566406249993,59.42272750081452)], [(35.13427734375,59.84481485969107),(31.970214843749993,58.97266715450152),(33.20068359374999,56.776808316568406),(36.67236328125,56.41390137600675),(39.08935546874999,57.25528054528888),(42.69287109374999,58.03137242177638),(40.89111328124999,59.26588062825809),(37.28759765625,58.722598828043374),(37.11181640624999,59.66774058164964),(35.13427734375,59.84481485969107)], [(29.157714843749993,55.75184939173528),(22.565917968749993,55.128649068488784),(22.565917968749993,53.54030739150019),(22.038574218749996,51.48138289610097),(26.257324218749993,51.42661449707484),(30.124511718749993,50.54136296522162),(32.18994140624999,51.17934297928929),(30.124511718749993,53.173119202640635),(35.09033203124999,53.173119202640635),(33.11279296875,54.085173420886775),(29.597167968749993,55.50374985927513),(29.157714843749993,55.75184939173528)], [(42.82470703125,56.58369172128337),(36.584472656249986,55.329144408405085),(37.99072265625,53.592504809039355),(34.95849609374999,51.48138289610097),(36.54052734374999,50.40151532278236),(39.66064453124999,50.289339253291786),(39.79248046875,52.13348804077148),(41.77001953125,50.68079714532166),(44.49462890624999,51.97134580885171),(47.30712890624999,52.509534770327264),(44.05517578125,53.54030739150019),(46.60400390625,53.696706475303245),(47.61474609375,55.40406982700608),(45.37353515625,55.40406982700608),(42.82470703125,56.58369172128337)]])) format TSV; + +select '-------- Polygon with Polygon with Holes'; +select 
wkt(polygonsSymDifferenceSpherical([[(29.453587685533865,59.779570356240356),(29.393139070478895,52.276266797422124),(40.636581470703206,59.38168915000267),(41.21084331372543,59.103467777099866),(29.786055068336193,52.146627480315004),(31.23682182965546,52.16517054781818),(41.69443223416517,58.85424941916091),(42.51048853740727,58.47703162291134),(32.59691566839227,52.22075341251539),(34.289476889931414,52.22075341251539),(43.02430176537451,58.07974369546071),(43.02430176537451,57.25537683364851),(35.468224883503325,52.2022335126388),(37.16078610504247,52.23926559241349),(43.02430176537451,56.26136189644947),(43.02430176537451,55.326904361850836),(38.33953409861437,52.16517054781818),(40.09254393520848,52.16517054781818),(44.4146199116388,55.3097062225408),(44.47506852669377,59.80998197603594),(39.72985224487867,59.931351417569715),(30.23941968124846,53.67744677450975),(30.20919537372098,54.63314259659509),(38.73245009647167,59.94649146557819),(37.2816833351524,59.97675082987618),(30.23941968124846,55.2752875586599),(30.33009260383092,56.19415599955667),(36.28428118674541,59.96162460231375),(34.863738732953635,59.97675082987618),(30.178971066193498,56.97640788219866),(30.178971066193498,57.91957806959033),(33.65476643185424,59.94649146557819),(32.32489690064491,59.94649146557819),(30.481214141468342,58.85424941916091),(30.571887064050795,59.99187015036608),(29.453587685533865,59.779570356240356)]], [[(24.367675781249993,61.45977057029751),(19.577636718749993,58.67693767258692),(19.577636718749993,57.492213666700735),(19.445800781249996,55.87531083569678),(19.445800781249996,54.085173420886775),(17.468261718749996,53.014783245859235),(20.017089843749993,51.563412328675895),(21.203613281249993,50.205033264943324),(26.125488281249993,50.40151532278236),(27.22412109374999,48.980216985374994),(32.80517578124999,49.525208341974405),(35.26611328124999,48.74894534343292),(36.93603515624999,49.66762782262194),(42.56103515625,48.77791275550183),(43.92333984374999,49.8096315635631),(47.17529296875,49.152969656170455),(49.28466796875,50.54136296522162),(48.05419921875,51.17934297928929),(51.39404296875,52.48278022207825),(50.64697265625,53.014783245859235),(52.88818359375,53.93021986394004),(51.65771484374999,54.29088164657006),(52.66845703125,55.825973254619015),(50.25146484375,56.145549500679095),(51.92138671875,57.914847767009206),(49.15283203125,58.17070248348605),(49.59228515625,60.086762746260064),(47.043457031249986,59.88893689676584),(43.57177734375,61.37567331572748),(42.64892578125,60.630101766266705),(36.89208984374999,62.000904713685856),(36.01318359374999,61.143235250840576),(31.398925781249993,62.02152819100766),(30.563964843749996,61.05828537037917),(26.872558593749993,61.71070595883174),(26.652832031249993,61.10078883158897),(24.367675781249993,61.45977057029751)], [(24.455566406249993,59.42272750081452),(21.203613281249993,58.49369382056807),(21.335449218749993,56.89700392127261),(21.599121093749993,55.92458580482949),(25.202636718749993,55.998380955359636),(28.850097656249993,57.06463027327854),(27.09228515625,57.844750992890994),(28.806152343749996,59.17592824927138),(26.257324218749993,59.17592824927138),(24.455566406249993,59.42272750081452)], 
[(35.13427734375,59.84481485969107),(31.970214843749993,58.97266715450152),(33.20068359374999,56.776808316568406),(36.67236328125,56.41390137600675),(39.08935546874999,57.25528054528888),(42.69287109374999,58.03137242177638),(40.89111328124999,59.26588062825809),(37.28759765625,58.722598828043374),(37.11181640624999,59.66774058164964),(35.13427734375,59.84481485969107)], [(29.157714843749993,55.75184939173528),(22.565917968749993,55.128649068488784),(22.565917968749993,53.54030739150019),(22.038574218749996,51.48138289610097),(26.257324218749993,51.42661449707484),(30.124511718749993,50.54136296522162),(32.18994140624999,51.17934297928929),(30.124511718749993,53.173119202640635),(35.09033203124999,53.173119202640635),(33.11279296875,54.085173420886775),(29.597167968749993,55.50374985927513),(29.157714843749993,55.75184939173528)], [(42.82470703125,56.58369172128337),(36.584472656249986,55.329144408405085),(37.99072265625,53.592504809039355),(34.95849609374999,51.48138289610097),(36.54052734374999,50.40151532278236),(39.66064453124999,50.289339253291786),(39.79248046875,52.13348804077148),(41.77001953125,50.68079714532166),(44.49462890624999,51.97134580885171),(47.30712890624999,52.509534770327264),(44.05517578125,53.54030739150019),(46.60400390625,53.696706475303245),(47.61474609375,55.40406982700608),(45.37353515625,55.40406982700608),(42.82470703125,56.58369172128337)]])) format TSV; diff --git a/tests/queries/0_stateless/01305_polygons_union.reference b/tests/queries/0_stateless/01305_polygons_union.reference new file mode 100644 index 00000000000..f87d03c151c --- /dev/null +++ b/tests/queries/0_stateless/01305_polygons_union.reference @@ -0,0 +1,8 @@ +[[[(1,2.9),(1,4),(4,4),(4,1),(2.9,1),(3,0),(0,0),(0,3),(1,2.9)]]] +[[[(4.3666052904432435,50.84337386140151),(4.366227,50.840809),(4.344961,50.833264),(4.338074,50.848677),(4.346693,50.858306),(4.3526804582393535,50.856658100365976),(4.3613577,50.8651821),(4.3613148,50.8651279),(4.3904543,50.8564867),(4.3830299,50.8428851),(4.3666052904432435,50.84337386140151)]]] +-------- MultiPolygon with Polygon +MULTIPOLYGON(((35.5408 58.9593,37.2817 59.9768,38.7325 59.9465,36.9725 59.0149,37.3119 59.0258,37.8553 58.9075,39.7299 59.9314,44.4751 59.81,44.4146 55.3097,40.0925 52.1652,38.3395 52.1652,39.1456 52.7573,38.0214 52.8989,37.1608 52.2393,35.4682 52.2022,36.5022 53.0008,35.3776 53.0462,35.3645 53.076,34.2895 52.2208,32.5969 52.2208,33.4048 52.8423,33.1712 52.8276,32.5275 53.1741,31.2368 52.1652,29.7861 52.1466,32.2523 53.964,31.8748 54.1736,29.3931 52.2763,29.4536 59.7796,30.5719 59.9919,30.4812 58.8542,32.3249 59.9465,33.6548 59.9465,30.179 57.9196,30.179 56.9764,32.2175 58.3664,32.2342 58.4928,32.5691 58.5924,34.8637 59.9768,36.2843 59.9616,34.0496 58.6717,34.9952 58.6226,35.3712 58.8556,34.6522 58.9167,35.5408 58.9593),(36.4989 58.7512,36.1498 58.553,36.3447 58.5402,36.0877 58.5174,35.4314 58.1349,36.403 58.0507,36.5949 58.1673,36.0123 58.2869,37.191 58.6819,36.4989 58.7512),(34.4816 56.8232,34.8098 57.0409,33.5602 56.9781,33.3418 56.8364,33.8361 56.6953,34.1885 56.6259,34.3867 56.7596,34.229 56.7948,34.4816 56.8232),(35.9179 57.7512,35.7402 57.7909,36.0848 57.855,36.3932 58.0447,35.1134 57.9454,34.6332 57.6538,35.613 57.5595,35.9179 57.7512),(36.8709 53.2765,37.135 53.4711,36.9794 53.5878,37.3119 53.9273,37.0035 54.2999,36.6985 54.0791,36.919 53.8561,36.3552 53.8269,36.1528 53.6763,36.8709 53.2765),(38.1601 55.1091,38.3093 55.1546,38.2609 55.1775,38.1601 55.1091),(38.1688 56.0758,38.4339 56.2361,37.5054 56.5484,37.2281 56.3799,38.1688 
56.0758),(38.1319 56.0534,36.647 55.9411,37.6238 55.7402,38.1319 56.0534),(37.2824 55.5258,36.8283 55.4471,37.06 55.3843,37.2824 55.5258),(36.151 54.791,36.0123 54.7554,36.0472 54.7217,36.151 54.791),(34.9611 53.9765,34.894 54.1226,35.6193 54.4929,34.9706 54.9262,34.7231 54.7576,35.0753 54.5981,34.1081 54.1757,34.7279 53.8116,34.9611 53.9765),(38.2312 56.9795,37.565 56.5843,38.9742 56.8774,38.4915 57.1308,38.2699 57.0021,38.3093 56.9929,38.2312 56.9795),(36.5334 56.6753,36.375 56.6455,36.4446 56.6242,36.5334 56.6753),(36.1999 57.0022,36.9794 57.0751,36.4587 57.1544,36.1999 57.0022),(34.6028 58.3749,33.6245 58.271,34.3593 58.2189,34.6028 58.3749),(33.7581 57.8255,33.2316 57.7748,33.6325 57.7419,33.7581 57.8255),(31.6069 56.3194,31.7506 56.8609,31.6514 57.1258,30.3301 56.1942,30.2394 55.2753,31.6069 56.3194),(34.2274 57.4023,34.0208 57.2724,35.0338 57.1875,35.4682 57.4674,34.2274 57.4023),(31.7782 55.7778,30.2092 54.6331,30.2394 53.6774,31.7439 54.8677,31.8413 54.9989,32.204 55.5156,31.7782 55.7778),(33.7222 56.3063,32.8387 56.3117,33.5244 56.1686,33.7222 56.3063),(33.1204 55.8832,32.748 55.9072,32.9547 55.7645,33.1204 55.8832),(35.2275 55.0993,36.4354 55.3441,35.7505 55.4454,35.2275 55.0993),(35.9817 55.5958,36.5563 55.6352,36.193 55.7319,35.9817 55.5958),(35.0954 55.822,35.3188 55.9582,34.7331 56.1049,34.4996 55.9565,35.0954 55.822),(34.9721 55.7463,34.2598 55.8023,33.6125 55.3778,34.3709 55.3709,34.9721 55.7463),(35.6571 56.1619,36.0233 56.3789,35.4083 56.5254,35.2273 56.414,35.71 56.3117,35.0485 56.303,34.744 56.1118,35.6571 56.1619),(40.2143 54.467,40.3948 54.4403,40.6064 54.034,39.9716 53.9807,40.2437 53.5878,39.5485 53.5878,39.9942 53.358,43.0243 55.3269,43.0243 56.2614,40.2143 54.467),(38.5511 53.2922,38.4609 53.226,39.2704 52.8471,39.9877 53.3534,38.5511 53.2922),(40.5716 55.8007,43.0243 57.2554,43.0243 58.0797,40.4543 56.5923,40.4855 56.4957,40.2529 56.4682,39.8102 56.1914,39.8205 56.0763,40.425 56.1942,40.5716 55.8007),(40.5504 55.7875,39.7601 55.7544,39.8151 55.3187,40.5504 55.7875),(39.7863 57.025,42.5105 58.477,41.6944 58.8542,40.1389 58.048,40.2437 58.0478,40.3343 57.4673,39.7299 57.4673,39.7863 57.025),(38.0744 57.5312,38.3737 57.6908,38.3395 57.7103,38.8533 58.0638,38.432 58.2584,38.0535 58.0542,38.3395 57.9356,37.4328 57.7103,38.0744 57.5312),(37.9669 57.4734,37.1608 57.2554,37.4489 57.1909,37.9669 57.4734),(40.4136 58.7241,41.2108 59.1035,40.6366 59.3817,39.8163 58.9766,40.4552 58.9011,40.4136 58.7241),(39.7184 58.3823,39.6392 58.3821,39.6392 58.3427,39.7184 58.3823),(38.7465 58.4255,39.5485 58.7133,39.4085 58.7696,38.7465 58.4255))) +-------- MultiPolygon with Polygon with Holes +MULTIPOLYGON(((24.3677 61.4598,26.6528 61.1008,26.8726 61.7107,30.564 61.0583,31.3989 62.0215,36.0132 61.1432,36.8921 62.0009,42.6489 60.6301,43.5718 61.3757,47.0435 59.8889,49.5923 60.0868,49.1528 58.1707,51.9214 57.9148,50.2515 56.1455,52.6685 55.826,51.6577 54.2909,52.8882 53.9302,50.647 53.0148,51.394 52.4828,48.0542 51.1793,49.2847 50.5414,47.1753 49.153,43.9233 49.8096,42.561 48.7779,36.936 49.6676,35.2661 48.7489,32.8052 49.5252,27.2241 48.9802,26.1255 50.4015,21.2036 50.205,20.0171 51.5634,17.4683 53.0148,19.4458 54.0852,19.4458 55.8753,19.5776 57.4922,19.5776 58.6769,24.3677 61.4598),(24.4556 59.4227,21.2036 58.4937,21.3354 56.897,21.5991 55.9246,25.2026 55.9984,28.8501 57.0646,27.0923 57.8448,28.8062 59.1759,26.2573 59.1759,24.4556 59.4227),(33.1079 56.9523,33.1392 56.8934,33.7182 56.7292,35.1489 56.5859,34.229 56.7948,36.9794 57.0751,35.7705 57.2554,37.0097 57.4998,35.7402 
57.7909,37.1608 58.0478,36.0123 58.2869,37.191 58.6819,34.6522 58.9167,37.2327 59.0233,37.1118 59.6677,35.1343 59.8448,31.9702 58.9727,32.25 58.4976,33.4734 58.8542,34.7428 59.5659,33.8361 58.6819,36.3447 58.5402,33.6245 58.271,36.4354 58.0478,33.2316 57.7748,36.1936 57.4998,33.1712 57.337,36.0727 57.0915,33.1079 56.9523),(37.0604 52.9744,34.9585 51.4814,36.5405 50.4015,39.6606 50.2893,39.7925 52.1335,41.77 50.6808,44.4946 51.9713,47.3071 52.5095,44.0552 53.5403,46.604 53.6967,47.6147 55.4041,45.3735 55.4041,42.8247 56.5837,40.4412 56.1511,40.5761 55.7884,39.7601 55.7544,39.8205 55.2753,40.3948 55.2408,40.3948 54.8773,39.5485 54.8773,39.5485 54.5631,40.3948 54.4403,40.6064 54.034,39.9716 53.9807,40.2437 53.5878,39.5485 53.5878,40.0019 53.354,38.3395 53.2817,39.5787 52.6996,37.8559 52.9188,37.4471 53.2343,37.2165 53.0798,37.4328 52.9552,37.0604 52.9744),(31.627 54.7093,29.5972 55.5037,29.1577 55.7518,22.5659 55.1286,22.5659 53.5403,22.0386 51.4814,26.2573 51.4266,30.1245 50.5414,32.1899 51.1793,30.1245 53.1731,32.4808 53.1989,32.0831 53.408,32.476 53.8383,31.4182 54.4227,31.627 54.7093),(34.7731 53.3243,34.7731 53.1793,35.0903 53.1731,34.7731 53.3243),(36.9508 55.414,37.7653 55.1891,36.8822 54.975,37.0572 54.7635,38.3093 55.1546,37.7955 55.3956,38.4907 55.5327,38.3184 55.7179,38.0262 55.6546,38.0373 55.6523,37.9482 55.6376,36.9508 55.414),(38.3092 56.9929,38.5798 57.0849,38.2186 57.2717,38.7325 57.4835,38.3395 57.7103,38.8533 58.0638,38.3698 58.2869,39.5485 58.7133,38.8838 58.9777,38.0944 58.8545,38.5813 58.7446,37.4026 58.3187,38.3395 57.9356,37.4328 57.7103,38.128 57.516,37.1608 57.2554,38.3092 56.9929),(38.309 56.9928,36.375 56.6455,36.8799 56.4895,38.309 56.9928),(40.3237 57.5365,42.6929 58.0314,40.8911 59.2659,39.2792 59.0373,40.4552 58.9011,40.3343 58.3821,39.6392 58.3821,39.6392 58.0478,40.2437 58.0478,40.3237 57.5365),(40.0149 57.4677,39.7299 57.4673,39.7379 57.4051,40.0149 57.4677))) +-------- Polygon with Polygon with Holes +MULTIPOLYGON(((24.3677 61.4598,26.6528 61.1008,26.8726 61.7107,30.564 61.0583,31.3989 62.0215,36.0132 61.1432,36.8921 62.0009,42.6489 60.6301,43.5718 61.3757,47.0435 59.8889,49.5923 60.0868,49.1528 58.1707,51.9214 57.9148,50.2515 56.1455,52.6685 55.826,51.6577 54.2909,52.8882 53.9302,50.647 53.0148,51.394 52.4828,48.0542 51.1793,49.2847 50.5414,47.1753 49.153,43.9233 49.8096,42.561 48.7779,36.936 49.6676,35.2661 48.7489,32.8052 49.5252,27.2241 48.9802,26.1255 50.4015,21.2036 50.205,20.0171 51.5634,17.4683 53.0148,19.4458 54.0852,19.4458 55.8753,19.5776 57.4922,19.5776 58.6769,24.3677 61.4598),(24.4556 59.4227,21.2036 58.4937,21.3354 56.897,21.5991 55.9246,25.2026 55.9984,28.8501 57.0646,27.0923 57.8448,28.8062 59.1759,26.2573 59.1759,24.4556 59.4227),(32.6512 57.792,32.9378 57.2699,36.7912 59.6986,35.9475 59.7758,32.6512 57.792),(33.2446 56.7729,34.2635 56.6767,37.6322 58.7797,37.2876 58.7226,37.2102 59.1452,33.2446 56.7729),(36.1815 56.4715,41.168 59.0834,40.9299 59.2404,40.8804 59.2644,40.2079 59.1718,35.4536 56.5531,36.1815 56.4715),(30.7705 55.0525,30.2092 54.6331,30.2394 53.6774,31.5682 54.7333,30.7705 55.0525),(33.8733 53.1922,34.3351 53.53,33.5144 53.9057,32.5603 53.1989,33.8733 53.1922),(31.1968 52.1649,29.7861 52.1466,30.5785 52.7531,30.3098 53.0028,29.3931 52.2763,29.4171 55.606,29.1577 55.7518,22.5659 55.1286,22.5659 53.5403,22.0386 51.4814,26.2573 51.4266,30.1245 50.5414,32.1899 51.1793,31.1968 52.1649),(31.1682 53.1903,32.6907 54.2663,32.2591 54.4483,30.5408 53.1811,31.1682 53.1903),(39.4328 55.9511,37.2766 54.4948,37.7431 53.9104,41.4519 
56.3413,39.4328 55.9511),(40.9691 57.677,42.2498 58.3455,41.5887 58.8012,38.1759 56.9472,39.0894 57.2553,40.9691 57.677),(37.1934 55.4694,36.5845 55.3291,36.7219 55.1665,37.1934 55.4694),(32.2964 58.4175,34.2247 59.6064,31.9702 58.9727,32.2964 58.4175),(35.9681 52.2157,34.9585 51.4814,36.5405 50.4015,39.6606 50.2893,39.7925 52.1335,41.77 50.6808,44.4946 51.9713,47.3071 52.5095,44.0552 53.5403,46.604 53.6967,47.6147 55.4041,45.3735 55.4041,44.4212 55.8594,44.4146 55.3097,40.0925 52.1652,38.3395 52.1652,43.0243 55.3269,43.0243 56.2614,37.1608 52.2393,35.9681 52.2157))) diff --git a/tests/queries/0_stateless/01305_polygons_union.sql b/tests/queries/0_stateless/01305_polygons_union.sql new file mode 100644 index 00000000000..01982c21e6e --- /dev/null +++ b/tests/queries/0_stateless/01305_polygons_union.sql @@ -0,0 +1,15 @@ +select polygonsUnionCartesian([[[(0., 0.),(0., 3.),(1., 2.9),(2., 2.6),(2.6, 2.),(2.9, 1),(3., 0.),(0., 0.)]]], [[[(1., 1.),(1., 4.),(4., 4.),(4., 1.),(1., 1.)]]]); + +SELECT polygonsUnionCartesian([[[(2., 100.0000991821289), (0., 3.), (1., 2.9), (2., 2.6), (2.6, 2.), (2.9, 1), (3., 0.), (100.0000991821289, 2.)]]], [[[(1., 1.), (1000.0001220703125, nan), (4., 4.), (4., 1.), (1., 1.)]]]); -- { serverError 43 } + +select polygonsUnionSpherical([[[(4.3613577, 50.8651821), (4.349556, 50.8535879), (4.3602419, 50.8435626), (4.3830299, 50.8428851), (4.3904543, 50.8564867), (4.3613148, 50.8651279)]]], [[[(4.346693, 50.858306), (4.367945, 50.852455), (4.366227, 50.840809), (4.344961, 50.833264), (4.338074, 50.848677), (4.346693, 50.858306)]]]); + +select '-------- MultiPolygon with Polygon'; +select wkt(polygonsUnionSpherical([[(29.453587685533865,59.779570356240356),(29.393139070478895,52.276266797422124),(40.636581470703206,59.38168915000267),(41.21084331372543,59.103467777099866),(29.786055068336193,52.146627480315004),(31.23682182965546,52.16517054781818),(41.69443223416517,58.85424941916091),(42.51048853740727,58.47703162291134),(32.59691566839227,52.22075341251539),(34.289476889931414,52.22075341251539),(43.02430176537451,58.07974369546071),(43.02430176537451,57.25537683364851),(35.468224883503325,52.2022335126388),(37.16078610504247,52.23926559241349),(43.02430176537451,56.26136189644947),(43.02430176537451,55.326904361850836),(38.33953409861437,52.16517054781818),(40.09254393520848,52.16517054781818),(44.4146199116388,55.3097062225408),(44.47506852669377,59.80998197603594),(39.72985224487867,59.931351417569715),(30.23941968124846,53.67744677450975),(30.20919537372098,54.63314259659509),(38.73245009647167,59.94649146557819),(37.2816833351524,59.97675082987618),(30.23941968124846,55.2752875586599),(30.33009260383092,56.19415599955667),(36.28428118674541,59.96162460231375),(34.863738732953635,59.97675082987618),(30.178971066193498,56.97640788219866),(30.178971066193498,57.91957806959033),(33.65476643185424,59.94649146557819),(32.32489690064491,59.94649146557819),(30.481214141468342,58.85424941916091),(30.571887064050795,59.99187015036608),(29.453587685533865,59.779570356240356)]], 
[[[(33.473420586689336,58.85424941916091),(32.23422397806246,58.492830557036),(32.173775363007486,58.03176922751564),(31.508840597402823,57.499784781503735),(31.750635057622702,56.86092686957355),(31.508840597402823,55.941082594334574),(32.20399967053497,55.515591939372456),(31.84130798020516,54.998862226280465),(31.418167674820367,54.422670886434275),(32.47601843828233,53.83826377018255),(32.08310244042503,53.408048308050866),(33.171177511414484,52.82758702113742),(34.77306581037117,52.91880107773494),(34.77306581037117,53.784726518357985),(34.108131044766516,54.17574726780569),(35.07530888564602,54.59813930694554),(34.25925258240394,54.96417435716029),(35.01486027059106,55.361278263643584),(33.50364489421682,55.37845402950552),(32.7480372060297,55.90721384574556),(35.67979503619571,55.68634475630185),(32.83871012861215,56.311688992608396),(34.591719965206266,56.29492065473883),(35.7100193437232,56.311688992608396),(33.83611227701915,56.695333481003644),(32.95960735872209,56.9434497616887),(36.072711034053015,57.091531913901434),(33.171177511414484,57.33702717078384),(36.193608264162954,57.499784781503735),(33.23162612646945,57.77481561306047),(36.43540272438284,58.04776787540811),(33.62454212432676,58.27099811968307),(36.344729801800376,58.54018474404165),(33.83611227701915,58.68186423448108),(34.74284150284369,59.565911441555244),(33.473420586689336,58.85424941916091)]], [[(34.65216858026123,58.91672306881671),(37.19101041256995,58.68186423448108),(36.01226241899805,58.28688958537609),(37.16078610504247,58.04776787540811),(35.74024365125068,57.79092907387934),(37.009664567405046,57.499784781503735),(35.77046795877817,57.25537683364851),(36.979440259877556,57.07510745541089),(34.22902827487645,56.794777197297435),(36.7074214921302,56.210968525786996),(34.712617195316206,56.10998276812964),(36.55629995449277,55.63519693782703),(35.13575750070099,55.53270067649592),(36.43540272438284,55.34409504165558),(34.83351442542614,55.01619492319591),(35.61934642114075,54.49294870011772),(34.89396304048112,54.12264226523038),(35.37755196092087,53.046178687628185),(37.43280487278982,52.95523300597458),(35.92158949641559,53.80257986695776),(36.91899164482259,53.856094327816805),(36.01226241899805,54.75541714463799),(37.765272255592166,55.189110239786885),(36.828318722240134,55.44708256557195),(38.03729102333953,55.652253637168315),(36.64697287707522,55.941082594334574),(38.21863686850443,56.05939028508024),(36.37495410932787,56.64551287174558),(38.30930979108689,56.992876013526654),(37.16078610504247,57.25537683364851),(38.127963945921984,57.516020773674256),(37.43280487278982,57.710289827306724),(38.33953409861437,57.935626886818994),(37.40258056526235,58.31865112960426),(38.58132855883426,58.744648733419496),(37.31190764267989,59.02578062465136),(34.65216858026123,58.91672306881671)]], 
[[(38.52087994377928,59.11898412389468),(39.54850639971376,58.713270635642914),(38.369758406141855,58.28688958537609),(38.85334732658162,58.06375936407028),(38.33953409861437,57.710289827306724),(38.73245009647167,57.48354156434209),(38.21863686850443,57.271721400459285),(38.97424455669155,56.87744603722649),(37.463029180317314,56.5623320541159),(38.94402024916407,56.05939028508024),(38.18841256097694,55.856355210835915),(38.490655636251795,55.53270067649592),(37.795496563119656,55.39562234093384),(38.30930979108689,55.154587013355666),(36.7074214921302,54.65063295250911),(37.31190764267989,53.92734063371401),(36.979440259877556,53.58783775557231),(37.855945178174615,52.91880107773497),(39.57873070724124,52.69956490610895),(38.33953409861437,53.281741738901104),(40.00187101262603,53.35396273604752),(39.54850639971376,53.58783775557231),(40.24366547284591,53.58783775557231),(39.97164670509855,53.98069568468355),(40.60635716317572,54.03398248547225),(40.39478701048334,54.44025165268903),(39.54850639971376,54.56310590284329),(39.54850639971376,54.87732350170489),(40.39478701048334,54.87732350170489),(40.39478701048334,55.24083903654295),(39.82052516746112,55.2752875586599),(39.760076552406154,55.75443792473942),(40.57613285564824,55.78844000174894),(40.425011318010824,56.19415599955667),(39.82052516746112,56.07626182891758),(39.79030085993364,56.41214455508424),(40.48545993306579,56.495655446714636),(40.33433839542836,56.95993246553937),(39.79030085993364,56.992876013526654),(39.72985224487867,57.46729112028032),(40.33433839542836,57.46729112028032),(40.24366547284591,58.04776787540811),(39.63917932229622,58.04776787540811),(39.63917932229622,58.382088724871295),(40.33433839542836,58.382088724871295),(40.45523562553831,58.9011152358548),(38.52087994377928,59.11898412389468)]]])) format TSV; + +select '-------- MultiPolygon with Polygon with Holes'; +select wkt(polygonsUnionSpherical([[[(33.473420586689336,58.85424941916091),(32.23422397806246,58.492830557036),(32.173775363007486,58.03176922751564),(31.508840597402823,57.499784781503735),(31.750635057622702,56.86092686957355),(31.508840597402823,55.941082594334574),(32.20399967053497,55.515591939372456),(31.84130798020516,54.998862226280465),(31.418167674820367,54.422670886434275),(32.47601843828233,53.83826377018255),(32.08310244042503,53.408048308050866),(33.171177511414484,52.82758702113742),(34.77306581037117,52.91880107773494),(34.77306581037117,53.784726518357985),(34.108131044766516,54.17574726780569),(35.07530888564602,54.59813930694554),(34.25925258240394,54.96417435716029),(35.01486027059106,55.361278263643584),(33.50364489421682,55.37845402950552),(32.7480372060297,55.90721384574556),(35.67979503619571,55.68634475630185),(32.83871012861215,56.311688992608396),(34.591719965206266,56.29492065473883),(35.7100193437232,56.311688992608396),(33.83611227701915,56.695333481003644),(32.95960735872209,56.9434497616887),(36.072711034053015,57.091531913901434),(33.171177511414484,57.33702717078384),(36.193608264162954,57.499784781503735),(33.23162612646945,57.77481561306047),(36.43540272438284,58.04776787540811),(33.62454212432676,58.27099811968307),(36.344729801800376,58.54018474404165),(33.83611227701915,58.68186423448108),(34.74284150284369,59.565911441555244),(33.473420586689336,58.85424941916091)]], 
[[(34.65216858026123,58.91672306881671),(37.19101041256995,58.68186423448108),(36.01226241899805,58.28688958537609),(37.16078610504247,58.04776787540811),(35.74024365125068,57.79092907387934),(37.009664567405046,57.499784781503735),(35.77046795877817,57.25537683364851),(36.979440259877556,57.07510745541089),(34.22902827487645,56.794777197297435),(36.7074214921302,56.210968525786996),(34.712617195316206,56.10998276812964),(36.55629995449277,55.63519693782703),(35.13575750070099,55.53270067649592),(36.43540272438284,55.34409504165558),(34.83351442542614,55.01619492319591),(35.61934642114075,54.49294870011772),(34.89396304048112,54.12264226523038),(35.37755196092087,53.046178687628185),(37.43280487278982,52.95523300597458),(35.92158949641559,53.80257986695776),(36.91899164482259,53.856094327816805),(36.01226241899805,54.75541714463799),(37.765272255592166,55.189110239786885),(36.828318722240134,55.44708256557195),(38.03729102333953,55.652253637168315),(36.64697287707522,55.941082594334574),(38.21863686850443,56.05939028508024),(36.37495410932787,56.64551287174558),(38.30930979108689,56.992876013526654),(37.16078610504247,57.25537683364851),(38.127963945921984,57.516020773674256),(37.43280487278982,57.710289827306724),(38.33953409861437,57.935626886818994),(37.40258056526235,58.31865112960426),(38.58132855883426,58.744648733419496),(37.31190764267989,59.02578062465136),(34.65216858026123,58.91672306881671)]], [[(38.52087994377928,59.11898412389468),(39.54850639971376,58.713270635642914),(38.369758406141855,58.28688958537609),(38.85334732658162,58.06375936407028),(38.33953409861437,57.710289827306724),(38.73245009647167,57.48354156434209),(38.21863686850443,57.271721400459285),(38.97424455669155,56.87744603722649),(37.463029180317314,56.5623320541159),(38.94402024916407,56.05939028508024),(38.18841256097694,55.856355210835915),(38.490655636251795,55.53270067649592),(37.795496563119656,55.39562234093384),(38.30930979108689,55.154587013355666),(36.7074214921302,54.65063295250911),(37.31190764267989,53.92734063371401),(36.979440259877556,53.58783775557231),(37.855945178174615,52.91880107773497),(39.57873070724124,52.69956490610895),(38.33953409861437,53.281741738901104),(40.00187101262603,53.35396273604752),(39.54850639971376,53.58783775557231),(40.24366547284591,53.58783775557231),(39.97164670509855,53.98069568468355),(40.60635716317572,54.03398248547225),(40.39478701048334,54.44025165268903),(39.54850639971376,54.56310590284329),(39.54850639971376,54.87732350170489),(40.39478701048334,54.87732350170489),(40.39478701048334,55.24083903654295),(39.82052516746112,55.2752875586599),(39.760076552406154,55.75443792473942),(40.57613285564824,55.78844000174894),(40.425011318010824,56.19415599955667),(39.82052516746112,56.07626182891758),(39.79030085993364,56.41214455508424),(40.48545993306579,56.495655446714636),(40.33433839542836,56.95993246553937),(39.79030085993364,56.992876013526654),(39.72985224487867,57.46729112028032),(40.33433839542836,57.46729112028032),(40.24366547284591,58.04776787540811),(39.63917932229622,58.04776787540811),(39.63917932229622,58.382088724871295),(40.33433839542836,58.382088724871295),(40.45523562553831,58.9011152358548),(38.52087994377928,59.11898412389468)]]], 
[[(24.367675781249993,61.45977057029751),(19.577636718749993,58.67693767258692),(19.577636718749993,57.492213666700735),(19.445800781249996,55.87531083569678),(19.445800781249996,54.085173420886775),(17.468261718749996,53.014783245859235),(20.017089843749993,51.563412328675895),(21.203613281249993,50.205033264943324),(26.125488281249993,50.40151532278236),(27.22412109374999,48.980216985374994),(32.80517578124999,49.525208341974405),(35.26611328124999,48.74894534343292),(36.93603515624999,49.66762782262194),(42.56103515625,48.77791275550183),(43.92333984374999,49.8096315635631),(47.17529296875,49.152969656170455),(49.28466796875,50.54136296522162),(48.05419921875,51.17934297928929),(51.39404296875,52.48278022207825),(50.64697265625,53.014783245859235),(52.88818359375,53.93021986394004),(51.65771484374999,54.29088164657006),(52.66845703125,55.825973254619015),(50.25146484375,56.145549500679095),(51.92138671875,57.914847767009206),(49.15283203125,58.17070248348605),(49.59228515625,60.086762746260064),(47.043457031249986,59.88893689676584),(43.57177734375,61.37567331572748),(42.64892578125,60.630101766266705),(36.89208984374999,62.000904713685856),(36.01318359374999,61.143235250840576),(31.398925781249993,62.02152819100766),(30.563964843749996,61.05828537037917),(26.872558593749993,61.71070595883174),(26.652832031249993,61.10078883158897),(24.367675781249993,61.45977057029751)], [(24.455566406249993,59.42272750081452),(21.203613281249993,58.49369382056807),(21.335449218749993,56.89700392127261),(21.599121093749993,55.92458580482949),(25.202636718749993,55.998380955359636),(28.850097656249993,57.06463027327854),(27.09228515625,57.844750992890994),(28.806152343749996,59.17592824927138),(26.257324218749993,59.17592824927138),(24.455566406249993,59.42272750081452)], [(35.13427734375,59.84481485969107),(31.970214843749993,58.97266715450152),(33.20068359374999,56.776808316568406),(36.67236328125,56.41390137600675),(39.08935546874999,57.25528054528888),(42.69287109374999,58.03137242177638),(40.89111328124999,59.26588062825809),(37.28759765625,58.722598828043374),(37.11181640624999,59.66774058164964),(35.13427734375,59.84481485969107)], [(29.157714843749993,55.75184939173528),(22.565917968749993,55.128649068488784),(22.565917968749993,53.54030739150019),(22.038574218749996,51.48138289610097),(26.257324218749993,51.42661449707484),(30.124511718749993,50.54136296522162),(32.18994140624999,51.17934297928929),(30.124511718749993,53.173119202640635),(35.09033203124999,53.173119202640635),(33.11279296875,54.085173420886775),(29.597167968749993,55.50374985927513),(29.157714843749993,55.75184939173528)], [(42.82470703125,56.58369172128337),(36.584472656249986,55.329144408405085),(37.99072265625,53.592504809039355),(34.95849609374999,51.48138289610097),(36.54052734374999,50.40151532278236),(39.66064453124999,50.289339253291786),(39.79248046875,52.13348804077148),(41.77001953125,50.68079714532166),(44.49462890624999,51.97134580885171),(47.30712890624999,52.509534770327264),(44.05517578125,53.54030739150019),(46.60400390625,53.696706475303245),(47.61474609375,55.40406982700608),(45.37353515625,55.40406982700608),(42.82470703125,56.58369172128337)]])) format TSV; + +select '-------- Polygon with Polygon with Holes'; +select 
wkt(polygonsUnionSpherical([[(29.453587685533865,59.779570356240356),(29.393139070478895,52.276266797422124),(40.636581470703206,59.38168915000267),(41.21084331372543,59.103467777099866),(29.786055068336193,52.146627480315004),(31.23682182965546,52.16517054781818),(41.69443223416517,58.85424941916091),(42.51048853740727,58.47703162291134),(32.59691566839227,52.22075341251539),(34.289476889931414,52.22075341251539),(43.02430176537451,58.07974369546071),(43.02430176537451,57.25537683364851),(35.468224883503325,52.2022335126388),(37.16078610504247,52.23926559241349),(43.02430176537451,56.26136189644947),(43.02430176537451,55.326904361850836),(38.33953409861437,52.16517054781818),(40.09254393520848,52.16517054781818),(44.4146199116388,55.3097062225408),(44.47506852669377,59.80998197603594),(39.72985224487867,59.931351417569715),(30.23941968124846,53.67744677450975),(30.20919537372098,54.63314259659509),(38.73245009647167,59.94649146557819),(37.2816833351524,59.97675082987618),(30.23941968124846,55.2752875586599),(30.33009260383092,56.19415599955667),(36.28428118674541,59.96162460231375),(34.863738732953635,59.97675082987618),(30.178971066193498,56.97640788219866),(30.178971066193498,57.91957806959033),(33.65476643185424,59.94649146557819),(32.32489690064491,59.94649146557819),(30.481214141468342,58.85424941916091),(30.571887064050795,59.99187015036608),(29.453587685533865,59.779570356240356)]], [[(24.367675781249993,61.45977057029751),(19.577636718749993,58.67693767258692),(19.577636718749993,57.492213666700735),(19.445800781249996,55.87531083569678),(19.445800781249996,54.085173420886775),(17.468261718749996,53.014783245859235),(20.017089843749993,51.563412328675895),(21.203613281249993,50.205033264943324),(26.125488281249993,50.40151532278236),(27.22412109374999,48.980216985374994),(32.80517578124999,49.525208341974405),(35.26611328124999,48.74894534343292),(36.93603515624999,49.66762782262194),(42.56103515625,48.77791275550183),(43.92333984374999,49.8096315635631),(47.17529296875,49.152969656170455),(49.28466796875,50.54136296522162),(48.05419921875,51.17934297928929),(51.39404296875,52.48278022207825),(50.64697265625,53.014783245859235),(52.88818359375,53.93021986394004),(51.65771484374999,54.29088164657006),(52.66845703125,55.825973254619015),(50.25146484375,56.145549500679095),(51.92138671875,57.914847767009206),(49.15283203125,58.17070248348605),(49.59228515625,60.086762746260064),(47.043457031249986,59.88893689676584),(43.57177734375,61.37567331572748),(42.64892578125,60.630101766266705),(36.89208984374999,62.000904713685856),(36.01318359374999,61.143235250840576),(31.398925781249993,62.02152819100766),(30.563964843749996,61.05828537037917),(26.872558593749993,61.71070595883174),(26.652832031249993,61.10078883158897),(24.367675781249993,61.45977057029751)], [(24.455566406249993,59.42272750081452),(21.203613281249993,58.49369382056807),(21.335449218749993,56.89700392127261),(21.599121093749993,55.92458580482949),(25.202636718749993,55.998380955359636),(28.850097656249993,57.06463027327854),(27.09228515625,57.844750992890994),(28.806152343749996,59.17592824927138),(26.257324218749993,59.17592824927138),(24.455566406249993,59.42272750081452)], 
[(35.13427734375,59.84481485969107),(31.970214843749993,58.97266715450152),(33.20068359374999,56.776808316568406),(36.67236328125,56.41390137600675),(39.08935546874999,57.25528054528888),(42.69287109374999,58.03137242177638),(40.89111328124999,59.26588062825809),(37.28759765625,58.722598828043374),(37.11181640624999,59.66774058164964),(35.13427734375,59.84481485969107)], [(29.157714843749993,55.75184939173528),(22.565917968749993,55.128649068488784),(22.565917968749993,53.54030739150019),(22.038574218749996,51.48138289610097),(26.257324218749993,51.42661449707484),(30.124511718749993,50.54136296522162),(32.18994140624999,51.17934297928929),(30.124511718749993,53.173119202640635),(35.09033203124999,53.173119202640635),(33.11279296875,54.085173420886775),(29.597167968749993,55.50374985927513),(29.157714843749993,55.75184939173528)], [(42.82470703125,56.58369172128337),(36.584472656249986,55.329144408405085),(37.99072265625,53.592504809039355),(34.95849609374999,51.48138289610097),(36.54052734374999,50.40151532278236),(39.66064453124999,50.289339253291786),(39.79248046875,52.13348804077148),(41.77001953125,50.68079714532166),(44.49462890624999,51.97134580885171),(47.30712890624999,52.509534770327264),(44.05517578125,53.54030739150019),(46.60400390625,53.696706475303245),(47.61474609375,55.40406982700608),(45.37353515625,55.40406982700608),(42.82470703125,56.58369172128337)]])) format TSV;
+
diff --git a/tests/queries/0_stateless/01305_replica_create_drop_zookeeper.sh b/tests/queries/0_stateless/01305_replica_create_drop_zookeeper.sh
index 5dd3d2b38d6..6248813c9ba 100755
--- a/tests/queries/0_stateless/01305_replica_create_drop_zookeeper.sh
+++ b/tests/queries/0_stateless/01305_replica_create_drop_zookeeper.sh
@@ -8,24 +8,13 @@ set -e
 function thread()
 {
-    db_engine=`$CLICKHOUSE_CLIENT -q "SELECT engine FROM system.databases WHERE name='$CLICKHOUSE_DATABASE'"`
-    if [[ $db_engine == "Atomic" ]]; then
-        # Ignore "Replica already exists" exception
-        while true; do
-            $CLICKHOUSE_CLIENT -n -q "DROP TABLE IF EXISTS test_table_$1 NO DELAY;
-                CREATE TABLE test_table_$1 (a UInt8) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test_01305/alter_table', 'r_$1') ORDER BY tuple();" 2>&1 |
-                grep -vP '(^$)|(^Received exception from server)|(^\d+\. )|because the last replica of the table was dropped right now|is already started to be removing by another replica right now|is already finished removing by another replica right now|Removing leftovers from table|Another replica was suddenly created|was successfully removed from ZooKeeper|was created by another server at the same moment|was suddenly removed|some other replicas were created at the same time|already exists'
-        done
-    else
-        while true; do
-            $CLICKHOUSE_CLIENT -n -q "DROP TABLE IF EXISTS test_table_$1;
-                CREATE TABLE test_table_$1 (a UInt8) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test_01305/alter_table', 'r_$1') ORDER BY tuple();" 2>&1 |
-                grep -vP '(^$)|(^Received exception from server)|(^\d+\. )|because the last replica of the table was dropped right now|is already started to be removing by another replica right now|is already finished removing by another replica right now|Removing leftovers from table|Another replica was suddenly created|was successfully removed from ZooKeeper|was created by another server at the same moment|was suddenly removed|some other replicas were created at the same time'
-        done
-    fi
+    while true; do
+        $CLICKHOUSE_CLIENT -n -q "DROP TABLE IF EXISTS test_table_$1 SYNC;
+            CREATE TABLE test_table_$1 (a UInt8) ENGINE = ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/alter_table', 'r_$1') ORDER BY tuple();" 2>&1 |
+            grep -vP '(^$)|(^Received exception from server)|(^\d+\. )|because the last replica of the table was dropped right now|is already started to be removing by another replica right now| were removed by another replica|Removing leftovers from table|Another replica was suddenly created|was created by another server at the same moment|was suddenly removed|some other replicas were created at the same time'
+    done
 }
-
 # https://stackoverflow.com/questions/9954794/execute-a-shell-function-with-timeout
 export -f thread;
diff --git a/tests/queries/0_stateless/01306_polygons_intersection.reference b/tests/queries/0_stateless/01306_polygons_intersection.reference
new file mode 100644
index 00000000000..43ee975913e
--- /dev/null
+++ b/tests/queries/0_stateless/01306_polygons_intersection.reference
@@ -0,0 +1,10 @@
+[[[(1,2.9),(2,2.6),(2.6,2),(2.9,1),(1,1),(1,2.9)]]]
+[]
+[]
+[[[(4.3666052904432435,50.84337386140151),(4.3602419,50.8435626),(4.349556,50.8535879),(4.3526804582393535,50.856658100365976),(4.367945,50.852455),(4.3666052904432435,50.84337386140151)]]]
+-------- MultiPolygon with Polygon
+MULTIPOLYGON(((35.5408 58.9593,36.9725 59.0149,36.4989 58.7512,35.3712 58.8556,35.5408 58.9593)),((34.4816 56.8232,36.1999 57.0022,35.4083 56.5254,34.3867 56.7596,34.4816 56.8232)),((35.9179 57.7512,36.0848 57.855,37.1608 58.0478,36.5949 58.1673,37.8553 58.9075,38.5813 58.7446,37.4026 58.3187,38.0535 58.0542,36.4587 57.1544,35.7705 57.2554,37.0097 57.4998,35.9179 57.7512)),((36.8709 53.2765,37.4328 52.9552,36.5022 53.0008,36.8709 53.2765)),((36.1528 53.6763,35.3645 53.076,34.9611 53.9765,36.0472 54.7217,36.6985 54.0791,36.3552 53.8269,35.9216 53.8026,36.1528 53.6763)),((37.0035 54.2999,36.7074 54.6506,38.1601 55.1091,37.0035 54.2999)),((38.1688 56.0758,38.2186 56.0594,38.1319 56.0534,38.1688 56.0758)),((37.6238 55.7402,38.0373 55.6523,37.2824 55.5258,37.6238 55.7402)),((37.06 55.3843,37.7653 55.1891,36.151 54.791,37.06 55.3843)),((38.2312 56.9795,36.5334 56.6753,37.4489 57.1909,38.2699 57.0021,38.2312 56.9795)),((37.2281 56.3799,36.193 55.7319,35.3188 55.9582,35.6571 56.1619,36.7074 56.211,36.0233 56.3789,36.4446 56.6242,37.2281 56.3799)),((34.9952 58.6226,36.1498 58.553,36.0877 58.5174,34.6028 58.3749,34.9952 58.6226)),((34.3593 58.2189,35.4314 58.1349,35.1134 57.9454,33.7581 57.8255,34.3593 58.2189)),((33.6325 57.7419,34.6332 57.6538,34.2274 57.4023,33.1712 57.337,34.0208 57.2724,33.5602 56.9781,32.9596 56.9434,33.3418 56.8364,31.7782 55.7778,31.5088 55.9411,31.6069 56.3194,33.6325 57.7419)),((36.403 58.0507,36.4354 58.0478,36.3932 58.0447,36.403 58.0507)),((35.613 57.5595,36.1936 57.4998,35.4682 57.4674,35.613 57.5595)),((35.0338 57.1875,36.0727 57.0915,34.8098 57.0409,35.0338 57.1875)),((34.1885 56.6259,35.2273 56.414,35.0485 56.303,34.5917 56.2949,33.7222 56.3063,34.1885 56.6259)),((33.5244 56.1686,34.4996 55.9565,34.2598 55.8023,33.1204
55.8832,33.5244 56.1686)),((32.9547 55.7645,33.5036 55.3785,33.6125 55.3778,31.8748 54.1736,31.4182 54.4227,31.7439 54.8677,32.9547 55.7645)),((34.7279 53.8116,34.7731 53.7847,34.7731 52.9188,33.4048 52.8423,34.7279 53.8116)),((34.7231 54.7576,32.5275 53.1741,32.0831 53.408,32.476 53.8383,32.2523 53.964,34.3709 55.3709,35.0149 55.3613,34.2593 54.9642,34.7231 54.7576)),((34.9706 54.9262,34.8335 55.0162,35.2275 55.0993,34.9706 54.9262)),((35.7505 55.4454,35.1358 55.5327,35.9817 55.5958,35.7505 55.4454)),((35.0954 55.822,35.6798 55.6863,34.9721 55.7463,35.0954 55.822)),((34.7331 56.1049,34.7126 56.11,34.744 56.1118,34.7331 56.1049)),((40.2143 54.467,38.5511 53.2922,38.3395 53.2817,38.4609 53.226,38.0214 52.8989,37.8559 52.9188,37.135 53.4711,39.8151 55.3187,39.8205 55.2753,40.3948 55.2408,40.3948 54.8773,39.5485 54.8773,39.5485 54.5631,40.2143 54.467)),((40.5716 55.8007,40.5761 55.7884,40.5504 55.7875,40.5716 55.8007)),((40.4543 56.5923,40.2529 56.4682,39.7903 56.4121,39.8102 56.1914,38.2609 55.1775,37.7955 55.3956,38.4907 55.5327,38.1884 55.8564,38.944 56.0594,38.4339 56.2361,39.7863 57.025,39.7903 56.9929,40.3343 56.9599,40.4543 56.5923)),((40.1389 58.048,38.4915 57.1308,38.2186 57.2717,38.7325 57.4835,38.3737 57.6908,39.6392 58.3427,39.6392 58.0478,40.1389 58.048)),((37.5054 56.5484,37.463 56.5623,37.565 56.5843,37.5054 56.5484)),((38.0744 57.5312,38.128 57.516,37.9669 57.4734,38.0744 57.5312)),((40.4136 58.7241,40.3343 58.3821,39.7184 58.3823,40.4136 58.7241)),((39.8163 58.9766,39.4085 58.7696,38.5209 59.119,39.8163 58.9766)),((38.432 58.2584,38.3698 58.2869,38.7465 58.4255,38.432 58.2584)),((32.2175 58.3664,32.5691 58.5924,33.4734 58.8542,34.7428 59.5659,33.8361 58.6819,34.0496 58.6717,31.6514 57.1258,31.5088 57.4998,32.1738 58.0318,32.2175 58.3664)),((39.9942 53.358,40.0019 53.354,39.9877 53.3534,39.9942 53.358)),((39.2704 52.8471,39.5787 52.6996,39.1456 52.7573,39.2704 52.8471))) +-------- MultiPolygon with Polygon with Holes +MULTIPOLYGON(((33.1079 56.9523,32.9596 56.9434,33.1392 56.8934,33.2007 56.7768,33.7182 56.7292,33.8361 56.6953,35.71 56.3117,34.5917 56.2949,32.8387 56.3117,35.6798 55.6863,32.748 55.9072,33.5036 55.3785,35.0149 55.3613,34.2593 54.9642,35.0753 54.5981,34.1081 54.1757,34.7731 53.7847,34.7731 53.3243,33.1128 54.0852,31.627 54.7093,31.8413 54.9989,32.204 55.5156,31.5088 55.9411,31.7506 56.8609,31.5088 57.4998,32.1738 58.0318,32.2342 58.4928,32.25 58.4976,33.1079 56.9523)),((35.1489 56.5859,36.6724 56.4139,36.8799 56.4895,38.2186 56.0594,36.647 55.9411,38.0262 55.6546,37.9482 55.6376,36.8283 55.4471,36.9508 55.414,36.5845 55.3291,36.8822 54.975,36.0123 54.7554,36.919 53.8561,35.9216 53.8026,37.2165 53.0798,37.0604 52.9744,35.3776 53.0462,34.894 54.1226,35.6193 54.4929,34.8335 55.0162,36.4354 55.3441,35.1358 55.5327,36.5563 55.6352,34.7126 56.11,36.7074 56.211,35.1489 56.5859)),((37.2327 59.0233,37.3119 59.0258,38.0944 58.8545,37.2876 58.7226,37.2327 59.0233)),((37.4471 53.2343,36.9794 53.5878,37.3119 53.9273,36.7074 54.6506,37.0572 54.7635,37.9907 53.5925,37.4471 53.2343)),((34.7731 53.1793,34.7731 52.9188,33.1712 52.8276,32.4808 53.1989,34.7731 53.1793)),((40.4412 56.1511,38.3184 55.7179,38.1884 55.8564,38.944 56.0594,37.463 56.5623,38.9742 56.8774,38.5798 57.0849,39.0894 57.2553,39.7379 57.4051,39.7903 56.9929,40.3343 56.9599,40.4855 56.4957,39.7903 56.4121,39.8205 56.0763,40.425 56.1942,40.4412 56.1511)),((38.3092 56.9929,38.3093 56.9929,38.309 56.9928,38.3092 56.9929)),((40.3237 57.5365,40.3343 57.4673,40.0149 57.4677,40.3237 57.5365)),((39.2792 59.0373,38.8838 
58.9777,38.5209 59.119,39.2792 59.0373))) +-------- Polygon with Polygon with Holes +MULTIPOLYGON(((32.6512 57.792,30.3301 56.1942,30.2394 55.2753,32.9378 57.2699,33.2007 56.7768,33.2446 56.7729,30.7705 55.0525,29.5972 55.5037,29.4171 55.606,29.4536 59.7796,30.5719 59.9919,30.4812 58.8542,32.3249 59.9465,33.6548 59.9465,30.179 57.9196,30.179 56.9764,32.2964 58.4175,32.6512 57.792)),((35.9475 59.7758,35.1343 59.8448,34.2247 59.6064,34.8637 59.9768,36.2843 59.9616,35.9475 59.7758)),((36.7912 59.6986,37.2817 59.9768,38.7325 59.9465,37.2102 59.1452,37.1118 59.6677,36.7912 59.6986)),((34.2635 56.6767,35.4536 56.5531,32.2591 54.4483,31.5682 54.7333,34.2635 56.6767)),((36.1815 56.4715,36.6724 56.4139,38.1759 56.9472,33.5144 53.9057,33.1128 54.0852,32.6907 54.2663,36.1815 56.4715)),((33.8733 53.1922,35.0903 53.1731,34.3351 53.53,36.7219 55.1665,37.2766 54.4948,34.2895 52.2208,32.5969 52.2208,33.8733 53.1922)),((31.1968 52.1649,30.5785 52.7531,31.1682 53.1903,32.5603 53.1989,31.2368 52.1652,31.1968 52.1649)),((30.3098 53.0028,30.1245 53.1731,30.5408 53.1811,30.3098 53.0028)),((37.6322 58.7797,39.7299 59.9314,44.4751 59.81,44.4212 55.8594,42.8247 56.5837,41.4519 56.3413,43.0243 57.2554,43.0243 58.0797,39.4328 55.9511,37.1934 55.4694,40.9691 57.677,42.6929 58.0314,42.2498 58.3455,42.5105 58.477,41.6944 58.8542,41.5887 58.8012,41.168 59.0834,41.2108 59.1035,40.9299 59.2404,40.8911 59.2659,40.8804 59.2644,40.6366 59.3817,40.2079 59.1718,37.6322 58.7797)),((35.9681 52.2157,35.4682 52.2022,37.7431 53.9104,37.9907 53.5925,35.9681 52.2157))) diff --git a/tests/queries/0_stateless/01306_polygons_intersection.sql b/tests/queries/0_stateless/01306_polygons_intersection.sql new file mode 100644 index 00000000000..144408ca0ae --- /dev/null +++ b/tests/queries/0_stateless/01306_polygons_intersection.sql @@ -0,0 +1,14 @@ +select polygonsIntersectionCartesian([[[(0., 0.),(0., 3.),(1., 2.9),(2., 2.6),(2.6, 2.),(2.9, 1.),(3., 0.),(0., 0.)]]], [[[(1., 1.),(1., 4.),(4., 4.),(4., 1.),(1., 1.)]]]); +select polygonsIntersectionCartesian([[[(0., 0.),(0., 3.),(1., 2.9),(2., 2.6),(2.6, 2.),(2.9, 1.),(3., 0.),(0., 0.)]]], [[[(3., 3.),(3., 4.),(4., 4.),(4., 3.),(3., 3.)]]]); + +select polygonsIntersectionSpherical([[[(4.346693, 50.858306), (4.367945, 50.852455), (4.366227, 50.840809), (4.344961, 50.833264), (4.338074, 50.848677), (4.346693, 50.858306)]]], [[[(25.0010, 136.9987), (17.7500, 142.5000), (11.3733, 142.5917)]]]); +select polygonsIntersectionSpherical([[[(4.3613577, 50.8651821), (4.349556, 50.8535879), (4.3602419, 50.8435626), (4.3830299, 50.8428851), (4.3904543, 50.8564867), (4.3613148, 50.8651279)]]], [[[(4.346693, 50.858306), (4.367945, 50.852455), (4.366227, 50.840809), (4.344961, 50.833264), (4.338074, 50.848677), (4.346693, 50.858306)]]]); + +select '-------- MultiPolygon with Polygon'; +select 
wkt(polygonsIntersectionSpherical([[(29.453587685533865,59.779570356240356),(29.393139070478895,52.276266797422124),(40.636581470703206,59.38168915000267),(41.21084331372543,59.103467777099866),(29.786055068336193,52.146627480315004),(31.23682182965546,52.16517054781818),(41.69443223416517,58.85424941916091),(42.51048853740727,58.47703162291134),(32.59691566839227,52.22075341251539),(34.289476889931414,52.22075341251539),(43.02430176537451,58.07974369546071),(43.02430176537451,57.25537683364851),(35.468224883503325,52.2022335126388),(37.16078610504247,52.23926559241349),(43.02430176537451,56.26136189644947),(43.02430176537451,55.326904361850836),(38.33953409861437,52.16517054781818),(40.09254393520848,52.16517054781818),(44.4146199116388,55.3097062225408),(44.47506852669377,59.80998197603594),(39.72985224487867,59.931351417569715),(30.23941968124846,53.67744677450975),(30.20919537372098,54.63314259659509),(38.73245009647167,59.94649146557819),(37.2816833351524,59.97675082987618),(30.23941968124846,55.2752875586599),(30.33009260383092,56.19415599955667),(36.28428118674541,59.96162460231375),(34.863738732953635,59.97675082987618),(30.178971066193498,56.97640788219866),(30.178971066193498,57.91957806959033),(33.65476643185424,59.94649146557819),(32.32489690064491,59.94649146557819),(30.481214141468342,58.85424941916091),(30.571887064050795,59.99187015036608),(29.453587685533865,59.779570356240356)]], [[[(33.473420586689336,58.85424941916091),(32.23422397806246,58.492830557036),(32.173775363007486,58.03176922751564),(31.508840597402823,57.499784781503735),(31.750635057622702,56.86092686957355),(31.508840597402823,55.941082594334574),(32.20399967053497,55.515591939372456),(31.84130798020516,54.998862226280465),(31.418167674820367,54.422670886434275),(32.47601843828233,53.83826377018255),(32.08310244042503,53.408048308050866),(33.171177511414484,52.82758702113742),(34.77306581037117,52.91880107773494),(34.77306581037117,53.784726518357985),(34.108131044766516,54.17574726780569),(35.07530888564602,54.59813930694554),(34.25925258240394,54.96417435716029),(35.01486027059106,55.361278263643584),(33.50364489421682,55.37845402950552),(32.7480372060297,55.90721384574556),(35.67979503619571,55.68634475630185),(32.83871012861215,56.311688992608396),(34.591719965206266,56.29492065473883),(35.7100193437232,56.311688992608396),(33.83611227701915,56.695333481003644),(32.95960735872209,56.9434497616887),(36.072711034053015,57.091531913901434),(33.171177511414484,57.33702717078384),(36.193608264162954,57.499784781503735),(33.23162612646945,57.77481561306047),(36.43540272438284,58.04776787540811),(33.62454212432676,58.27099811968307),(36.344729801800376,58.54018474404165),(33.83611227701915,58.68186423448108),(34.74284150284369,59.565911441555244),(33.473420586689336,58.85424941916091)]], 
[[(34.65216858026123,58.91672306881671),(37.19101041256995,58.68186423448108),(36.01226241899805,58.28688958537609),(37.16078610504247,58.04776787540811),(35.74024365125068,57.79092907387934),(37.009664567405046,57.499784781503735),(35.77046795877817,57.25537683364851),(36.979440259877556,57.07510745541089),(34.22902827487645,56.794777197297435),(36.7074214921302,56.210968525786996),(34.712617195316206,56.10998276812964),(36.55629995449277,55.63519693782703),(35.13575750070099,55.53270067649592),(36.43540272438284,55.34409504165558),(34.83351442542614,55.01619492319591),(35.61934642114075,54.49294870011772),(34.89396304048112,54.12264226523038),(35.37755196092087,53.046178687628185),(37.43280487278982,52.95523300597458),(35.92158949641559,53.80257986695776),(36.91899164482259,53.856094327816805),(36.01226241899805,54.75541714463799),(37.765272255592166,55.189110239786885),(36.828318722240134,55.44708256557195),(38.03729102333953,55.652253637168315),(36.64697287707522,55.941082594334574),(38.21863686850443,56.05939028508024),(36.37495410932787,56.64551287174558),(38.30930979108689,56.992876013526654),(37.16078610504247,57.25537683364851),(38.127963945921984,57.516020773674256),(37.43280487278982,57.710289827306724),(38.33953409861437,57.935626886818994),(37.40258056526235,58.31865112960426),(38.58132855883426,58.744648733419496),(37.31190764267989,59.02578062465136),(34.65216858026123,58.91672306881671)]], [[(38.52087994377928,59.11898412389468),(39.54850639971376,58.713270635642914),(38.369758406141855,58.28688958537609),(38.85334732658162,58.06375936407028),(38.33953409861437,57.710289827306724),(38.73245009647167,57.48354156434209),(38.21863686850443,57.271721400459285),(38.97424455669155,56.87744603722649),(37.463029180317314,56.5623320541159),(38.94402024916407,56.05939028508024),(38.18841256097694,55.856355210835915),(38.490655636251795,55.53270067649592),(37.795496563119656,55.39562234093384),(38.30930979108689,55.154587013355666),(36.7074214921302,54.65063295250911),(37.31190764267989,53.92734063371401),(36.979440259877556,53.58783775557231),(37.855945178174615,52.91880107773497),(39.57873070724124,52.69956490610895),(38.33953409861437,53.281741738901104),(40.00187101262603,53.35396273604752),(39.54850639971376,53.58783775557231),(40.24366547284591,53.58783775557231),(39.97164670509855,53.98069568468355),(40.60635716317572,54.03398248547225),(40.39478701048334,54.44025165268903),(39.54850639971376,54.56310590284329),(39.54850639971376,54.87732350170489),(40.39478701048334,54.87732350170489),(40.39478701048334,55.24083903654295),(39.82052516746112,55.2752875586599),(39.760076552406154,55.75443792473942),(40.57613285564824,55.78844000174894),(40.425011318010824,56.19415599955667),(39.82052516746112,56.07626182891758),(39.79030085993364,56.41214455508424),(40.48545993306579,56.495655446714636),(40.33433839542836,56.95993246553937),(39.79030085993364,56.992876013526654),(39.72985224487867,57.46729112028032),(40.33433839542836,57.46729112028032),(40.24366547284591,58.04776787540811),(39.63917932229622,58.04776787540811),(39.63917932229622,58.382088724871295),(40.33433839542836,58.382088724871295),(40.45523562553831,58.9011152358548),(38.52087994377928,59.11898412389468)]]])) format TSV; + +select '-------- MultiPolygon with Polygon with Holes'; +select 
wkt(polygonsIntersectionSpherical([[[(33.473420586689336,58.85424941916091),(32.23422397806246,58.492830557036),(32.173775363007486,58.03176922751564),(31.508840597402823,57.499784781503735),(31.750635057622702,56.86092686957355),(31.508840597402823,55.941082594334574),(32.20399967053497,55.515591939372456),(31.84130798020516,54.998862226280465),(31.418167674820367,54.422670886434275),(32.47601843828233,53.83826377018255),(32.08310244042503,53.408048308050866),(33.171177511414484,52.82758702113742),(34.77306581037117,52.91880107773494),(34.77306581037117,53.784726518357985),(34.108131044766516,54.17574726780569),(35.07530888564602,54.59813930694554),(34.25925258240394,54.96417435716029),(35.01486027059106,55.361278263643584),(33.50364489421682,55.37845402950552),(32.7480372060297,55.90721384574556),(35.67979503619571,55.68634475630185),(32.83871012861215,56.311688992608396),(34.591719965206266,56.29492065473883),(35.7100193437232,56.311688992608396),(33.83611227701915,56.695333481003644),(32.95960735872209,56.9434497616887),(36.072711034053015,57.091531913901434),(33.171177511414484,57.33702717078384),(36.193608264162954,57.499784781503735),(33.23162612646945,57.77481561306047),(36.43540272438284,58.04776787540811),(33.62454212432676,58.27099811968307),(36.344729801800376,58.54018474404165),(33.83611227701915,58.68186423448108),(34.74284150284369,59.565911441555244),(33.473420586689336,58.85424941916091)]], [[(34.65216858026123,58.91672306881671),(37.19101041256995,58.68186423448108),(36.01226241899805,58.28688958537609),(37.16078610504247,58.04776787540811),(35.74024365125068,57.79092907387934),(37.009664567405046,57.499784781503735),(35.77046795877817,57.25537683364851),(36.979440259877556,57.07510745541089),(34.22902827487645,56.794777197297435),(36.7074214921302,56.210968525786996),(34.712617195316206,56.10998276812964),(36.55629995449277,55.63519693782703),(35.13575750070099,55.53270067649592),(36.43540272438284,55.34409504165558),(34.83351442542614,55.01619492319591),(35.61934642114075,54.49294870011772),(34.89396304048112,54.12264226523038),(35.37755196092087,53.046178687628185),(37.43280487278982,52.95523300597458),(35.92158949641559,53.80257986695776),(36.91899164482259,53.856094327816805),(36.01226241899805,54.75541714463799),(37.765272255592166,55.189110239786885),(36.828318722240134,55.44708256557195),(38.03729102333953,55.652253637168315),(36.64697287707522,55.941082594334574),(38.21863686850443,56.05939028508024),(36.37495410932787,56.64551287174558),(38.30930979108689,56.992876013526654),(37.16078610504247,57.25537683364851),(38.127963945921984,57.516020773674256),(37.43280487278982,57.710289827306724),(38.33953409861437,57.935626886818994),(37.40258056526235,58.31865112960426),(38.58132855883426,58.744648733419496),(37.31190764267989,59.02578062465136),(34.65216858026123,58.91672306881671)]], 
[[(38.52087994377928,59.11898412389468),(39.54850639971376,58.713270635642914),(38.369758406141855,58.28688958537609),(38.85334732658162,58.06375936407028),(38.33953409861437,57.710289827306724),(38.73245009647167,57.48354156434209),(38.21863686850443,57.271721400459285),(38.97424455669155,56.87744603722649),(37.463029180317314,56.5623320541159),(38.94402024916407,56.05939028508024),(38.18841256097694,55.856355210835915),(38.490655636251795,55.53270067649592),(37.795496563119656,55.39562234093384),(38.30930979108689,55.154587013355666),(36.7074214921302,54.65063295250911),(37.31190764267989,53.92734063371401),(36.979440259877556,53.58783775557231),(37.855945178174615,52.91880107773497),(39.57873070724124,52.69956490610895),(38.33953409861437,53.281741738901104),(40.00187101262603,53.35396273604752),(39.54850639971376,53.58783775557231),(40.24366547284591,53.58783775557231),(39.97164670509855,53.98069568468355),(40.60635716317572,54.03398248547225),(40.39478701048334,54.44025165268903),(39.54850639971376,54.56310590284329),(39.54850639971376,54.87732350170489),(40.39478701048334,54.87732350170489),(40.39478701048334,55.24083903654295),(39.82052516746112,55.2752875586599),(39.760076552406154,55.75443792473942),(40.57613285564824,55.78844000174894),(40.425011318010824,56.19415599955667),(39.82052516746112,56.07626182891758),(39.79030085993364,56.41214455508424),(40.48545993306579,56.495655446714636),(40.33433839542836,56.95993246553937),(39.79030085993364,56.992876013526654),(39.72985224487867,57.46729112028032),(40.33433839542836,57.46729112028032),(40.24366547284591,58.04776787540811),(39.63917932229622,58.04776787540811),(39.63917932229622,58.382088724871295),(40.33433839542836,58.382088724871295),(40.45523562553831,58.9011152358548),(38.52087994377928,59.11898412389468)]]], [[(24.367675781249993,61.45977057029751),(19.577636718749993,58.67693767258692),(19.577636718749993,57.492213666700735),(19.445800781249996,55.87531083569678),(19.445800781249996,54.085173420886775),(17.468261718749996,53.014783245859235),(20.017089843749993,51.563412328675895),(21.203613281249993,50.205033264943324),(26.125488281249993,50.40151532278236),(27.22412109374999,48.980216985374994),(32.80517578124999,49.525208341974405),(35.26611328124999,48.74894534343292),(36.93603515624999,49.66762782262194),(42.56103515625,48.77791275550183),(43.92333984374999,49.8096315635631),(47.17529296875,49.152969656170455),(49.28466796875,50.54136296522162),(48.05419921875,51.17934297928929),(51.39404296875,52.48278022207825),(50.64697265625,53.014783245859235),(52.88818359375,53.93021986394004),(51.65771484374999,54.29088164657006),(52.66845703125,55.825973254619015),(50.25146484375,56.145549500679095),(51.92138671875,57.914847767009206),(49.15283203125,58.17070248348605),(49.59228515625,60.086762746260064),(47.043457031249986,59.88893689676584),(43.57177734375,61.37567331572748),(42.64892578125,60.630101766266705),(36.89208984374999,62.000904713685856),(36.01318359374999,61.143235250840576),(31.398925781249993,62.02152819100766),(30.563964843749996,61.05828537037917),(26.872558593749993,61.71070595883174),(26.652832031249993,61.10078883158897),(24.367675781249993,61.45977057029751)], 
[(24.455566406249993,59.42272750081452),(21.203613281249993,58.49369382056807),(21.335449218749993,56.89700392127261),(21.599121093749993,55.92458580482949),(25.202636718749993,55.998380955359636),(28.850097656249993,57.06463027327854),(27.09228515625,57.844750992890994),(28.806152343749996,59.17592824927138),(26.257324218749993,59.17592824927138),(24.455566406249993,59.42272750081452)], [(35.13427734375,59.84481485969107),(31.970214843749993,58.97266715450152),(33.20068359374999,56.776808316568406),(36.67236328125,56.41390137600675),(39.08935546874999,57.25528054528888),(42.69287109374999,58.03137242177638),(40.89111328124999,59.26588062825809),(37.28759765625,58.722598828043374),(37.11181640624999,59.66774058164964),(35.13427734375,59.84481485969107)], [(29.157714843749993,55.75184939173528),(22.565917968749993,55.128649068488784),(22.565917968749993,53.54030739150019),(22.038574218749996,51.48138289610097),(26.257324218749993,51.42661449707484),(30.124511718749993,50.54136296522162),(32.18994140624999,51.17934297928929),(30.124511718749993,53.173119202640635),(35.09033203124999,53.173119202640635),(33.11279296875,54.085173420886775),(29.597167968749993,55.50374985927513),(29.157714843749993,55.75184939173528)], [(42.82470703125,56.58369172128337),(36.584472656249986,55.329144408405085),(37.99072265625,53.592504809039355),(34.95849609374999,51.48138289610097),(36.54052734374999,50.40151532278236),(39.66064453124999,50.289339253291786),(39.79248046875,52.13348804077148),(41.77001953125,50.68079714532166),(44.49462890624999,51.97134580885171),(47.30712890624999,52.509534770327264),(44.05517578125,53.54030739150019),(46.60400390625,53.696706475303245),(47.61474609375,55.40406982700608),(45.37353515625,55.40406982700608),(42.82470703125,56.58369172128337)]])) format TSV; + +select '-------- Polygon with Polygon with Holes'; +select wkt(polygonsIntersectionSpherical([[(29.453587685533865,59.779570356240356),(29.393139070478895,52.276266797422124),(40.636581470703206,59.38168915000267),(41.21084331372543,59.103467777099866),(29.786055068336193,52.146627480315004),(31.23682182965546,52.16517054781818),(41.69443223416517,58.85424941916091),(42.51048853740727,58.47703162291134),(32.59691566839227,52.22075341251539),(34.289476889931414,52.22075341251539),(43.02430176537451,58.07974369546071),(43.02430176537451,57.25537683364851),(35.468224883503325,52.2022335126388),(37.16078610504247,52.23926559241349),(43.02430176537451,56.26136189644947),(43.02430176537451,55.326904361850836),(38.33953409861437,52.16517054781818),(40.09254393520848,52.16517054781818),(44.4146199116388,55.3097062225408),(44.47506852669377,59.80998197603594),(39.72985224487867,59.931351417569715),(30.23941968124846,53.67744677450975),(30.20919537372098,54.63314259659509),(38.73245009647167,59.94649146557819),(37.2816833351524,59.97675082987618),(30.23941968124846,55.2752875586599),(30.33009260383092,56.19415599955667),(36.28428118674541,59.96162460231375),(34.863738732953635,59.97675082987618),(30.178971066193498,56.97640788219866),(30.178971066193498,57.91957806959033),(33.65476643185424,59.94649146557819),(32.32489690064491,59.94649146557819),(30.481214141468342,58.85424941916091),(30.571887064050795,59.99187015036608),(29.453587685533865,59.779570356240356)]], 
[[(24.367675781249993,61.45977057029751),(19.577636718749993,58.67693767258692),(19.577636718749993,57.492213666700735),(19.445800781249996,55.87531083569678),(19.445800781249996,54.085173420886775),(17.468261718749996,53.014783245859235),(20.017089843749993,51.563412328675895),(21.203613281249993,50.205033264943324),(26.125488281249993,50.40151532278236),(27.22412109374999,48.980216985374994),(32.80517578124999,49.525208341974405),(35.26611328124999,48.74894534343292),(36.93603515624999,49.66762782262194),(42.56103515625,48.77791275550183),(43.92333984374999,49.8096315635631),(47.17529296875,49.152969656170455),(49.28466796875,50.54136296522162),(48.05419921875,51.17934297928929),(51.39404296875,52.48278022207825),(50.64697265625,53.014783245859235),(52.88818359375,53.93021986394004),(51.65771484374999,54.29088164657006),(52.66845703125,55.825973254619015),(50.25146484375,56.145549500679095),(51.92138671875,57.914847767009206),(49.15283203125,58.17070248348605),(49.59228515625,60.086762746260064),(47.043457031249986,59.88893689676584),(43.57177734375,61.37567331572748),(42.64892578125,60.630101766266705),(36.89208984374999,62.000904713685856),(36.01318359374999,61.143235250840576),(31.398925781249993,62.02152819100766),(30.563964843749996,61.05828537037917),(26.872558593749993,61.71070595883174),(26.652832031249993,61.10078883158897),(24.367675781249993,61.45977057029751)], [(24.455566406249993,59.42272750081452),(21.203613281249993,58.49369382056807),(21.335449218749993,56.89700392127261),(21.599121093749993,55.92458580482949),(25.202636718749993,55.998380955359636),(28.850097656249993,57.06463027327854),(27.09228515625,57.844750992890994),(28.806152343749996,59.17592824927138),(26.257324218749993,59.17592824927138),(24.455566406249993,59.42272750081452)], [(35.13427734375,59.84481485969107),(31.970214843749993,58.97266715450152),(33.20068359374999,56.776808316568406),(36.67236328125,56.41390137600675),(39.08935546874999,57.25528054528888),(42.69287109374999,58.03137242177638),(40.89111328124999,59.26588062825809),(37.28759765625,58.722598828043374),(37.11181640624999,59.66774058164964),(35.13427734375,59.84481485969107)], [(29.157714843749993,55.75184939173528),(22.565917968749993,55.128649068488784),(22.565917968749993,53.54030739150019),(22.038574218749996,51.48138289610097),(26.257324218749993,51.42661449707484),(30.124511718749993,50.54136296522162),(32.18994140624999,51.17934297928929),(30.124511718749993,53.173119202640635),(35.09033203124999,53.173119202640635),(33.11279296875,54.085173420886775),(29.597167968749993,55.50374985927513),(29.157714843749993,55.75184939173528)], [(42.82470703125,56.58369172128337),(36.584472656249986,55.329144408405085),(37.99072265625,53.592504809039355),(34.95849609374999,51.48138289610097),(36.54052734374999,50.40151532278236),(39.66064453124999,50.289339253291786),(39.79248046875,52.13348804077148),(41.77001953125,50.68079714532166),(44.49462890624999,51.97134580885171),(47.30712890624999,52.509534770327264),(44.05517578125,53.54030739150019),(46.60400390625,53.696706475303245),(47.61474609375,55.40406982700608),(45.37353515625,55.40406982700608),(42.82470703125,56.58369172128337)]])) format TSV; diff --git a/tests/queries/0_stateless/01307_multiple_leaders_zookeeper.sh b/tests/queries/0_stateless/01307_multiple_leaders_zookeeper.sh index 24c6199a94a..21fc88d7c2d 100755 --- a/tests/queries/0_stateless/01307_multiple_leaders_zookeeper.sh +++ b/tests/queries/0_stateless/01307_multiple_leaders_zookeeper.sh @@ -12,7 +12,7 @@ DATA_SIZE=200 SEQ=$(seq 0 
$(($NUM_REPLICAS - 1)))
for REPLICA in $SEQ; do $CLICKHOUSE_CLIENT -n --query "DROP TABLE IF EXISTS r$REPLICA"; done
-for REPLICA in $SEQ; do $CLICKHOUSE_CLIENT -n --query "CREATE TABLE r$REPLICA (x UInt64) ENGINE = ReplicatedMergeTree('/test_01307/table', 'r$REPLICA') ORDER BY x SETTINGS min_bytes_for_wide_part = '10M';"; done
+for REPLICA in $SEQ; do $CLICKHOUSE_CLIENT -n --query "CREATE TABLE r$REPLICA (x UInt64) ENGINE = ReplicatedMergeTree('/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/table', 'r$REPLICA') ORDER BY x SETTINGS min_bytes_for_wide_part = '10M';"; done
function thread()
{
diff --git a/tests/queries/0_stateless/01307_polygon_perimeter.reference b/tests/queries/0_stateless/01307_polygon_perimeter.reference
new file mode 100644
index 00000000000..209e3ef4b62
--- /dev/null
+++ b/tests/queries/0_stateless/01307_polygon_perimeter.reference
@@ -0,0 +1 @@
+20
diff --git a/tests/queries/0_stateless/01307_polygon_perimeter.sql b/tests/queries/0_stateless/01307_polygon_perimeter.sql
new file mode 100644
index 00000000000..18f5b385826
--- /dev/null
+++ b/tests/queries/0_stateless/01307_polygon_perimeter.sql
@@ -0,0 +1 @@
+select polygonPerimeterCartesian([[[(0., 0.), (0., 5.), (5., 5.), (5., 0.), (0., 0.)]]]);
diff --git a/tests/queries/0_stateless/01308_polygon_area.reference b/tests/queries/0_stateless/01308_polygon_area.reference
new file mode 100644
index 00000000000..56d0c4ef174
--- /dev/null
+++ b/tests/queries/0_stateless/01308_polygon_area.reference
@@ -0,0 +1,2 @@
+25
+9.387703638370358e-8
diff --git a/tests/queries/0_stateless/01308_polygon_area.sql b/tests/queries/0_stateless/01308_polygon_area.sql
new file mode 100644
index 00000000000..e3a44ad7d51
--- /dev/null
+++ b/tests/queries/0_stateless/01308_polygon_area.sql
@@ -0,0 +1,3 @@
+select polygonAreaCartesian([[[(0., 0.), (0., 5.), (5., 5.), (5., 0.)]]]);
+select polygonAreaSpherical([[[(4.346693, 50.858306), (4.367945, 50.852455), (4.366227, 50.840809), (4.344961, 50.833264), (4.338074, 50.848677), (4.346693, 50.858306)]]]);
+SELECT polygonAreaCartesian([]); -- { serverError 36 }
diff --git a/tests/queries/0_stateless/01318_long_unsuccessful_mutation_zookeeper.sh b/tests/queries/0_stateless/01318_long_unsuccessful_mutation_zookeeper.sh
index ced668e9849..13250e82079 100755
--- a/tests/queries/0_stateless/01318_long_unsuccessful_mutation_zookeeper.sh
+++ b/tests/queries/0_stateless/01318_long_unsuccessful_mutation_zookeeper.sh
@@ -11,7 +11,7 @@ $CLICKHOUSE_CLIENT --query "
key UInt64,
value String
)
- ENGINE = ReplicatedMergeTree('/clickhouse/tables/test_01318/mutation_table', '1')
+ ENGINE = ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/mutation_table', '1')
ORDER BY key
PARTITION BY key % 10
"
@@ -47,7 +47,7 @@ done
echo "$query_result"
-$CLICKHOUSE_CLIENT --query "KILL MUTATION WHERE mutation_id='$first_mutation_id'"
+$CLICKHOUSE_CLIENT --query "KILL MUTATION WHERE mutation_id='$first_mutation_id' and database='$CLICKHOUSE_DATABASE'"
check_query="SELECT sum(parts_to_do) FROM system.mutations WHERE table='mutation_table' and database='$CLICKHOUSE_DATABASE'"
diff --git a/tests/queries/0_stateless/01320_create_sync_race_condition_zookeeper.sh b/tests/queries/0_stateless/01320_create_sync_race_condition_zookeeper.sh
index a15d8c8d2cd..97c200c651f 100755
--- a/tests/queries/0_stateless/01320_create_sync_race_condition_zookeeper.sh
+++ b/tests/queries/0_stateless/01320_create_sync_race_condition_zookeeper.sh
@@ -11,7 +11,7 @@ $CLICKHOUSE_CLIENT --query "CREATE DATABASE test_01320 ENGINE=Ordinary" # Diff
function thread1()
{
- while true; do $CLICKHOUSE_CLIENT -n --query "CREATE TABLE test_01320.r (x UInt64) ENGINE = ReplicatedMergeTree('/test_01320/table', 'r') ORDER BY x; DROP TABLE test_01320.r;"; done
+ while true; do $CLICKHOUSE_CLIENT -n --query "CREATE TABLE test_01320.r (x UInt64) ENGINE = ReplicatedMergeTree('/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/table', 'r') ORDER BY x; DROP TABLE test_01320.r;"; done
}
function thread2()
diff --git a/tests/queries/0_stateless/01338_long_select_and_alter_zookeeper.reference b/tests/queries/0_stateless/01338_long_select_and_alter_zookeeper.reference
index e7db2788824..b4ed8efab63 100644
--- a/tests/queries/0_stateless/01338_long_select_and_alter_zookeeper.reference
+++ b/tests/queries/0_stateless/01338_long_select_and_alter_zookeeper.reference
@@ -1,3 +1,3 @@
10
5
-CREATE TABLE default.alter_mt\n(\n `key` UInt64,\n `value` UInt64\n)\nENGINE = ReplicatedMergeTree(\'/clickhouse/tables/test_01338/alter_mt\', \'1\')\nORDER BY key\nSETTINGS index_granularity = 8192
+CREATE TABLE default.alter_mt\n(\n `key` UInt64,\n `value` UInt64\n)\nENGINE = ReplicatedMergeTree(\'/clickhouse/tables/01338_long_select_and_alter_zookeeper_default/alter_mt\', \'1\')\nORDER BY key\nSETTINGS index_granularity = 8192
diff --git a/tests/queries/0_stateless/01338_long_select_and_alter_zookeeper.sh b/tests/queries/0_stateless/01338_long_select_and_alter_zookeeper.sh
index d990a8a1c08..4aeecc7343d 100755
--- a/tests/queries/0_stateless/01338_long_select_and_alter_zookeeper.sh
+++ b/tests/queries/0_stateless/01338_long_select_and_alter_zookeeper.sh
@@ -6,7 +6,7 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
$CLICKHOUSE_CLIENT --query "DROP TABLE IF EXISTS alter_mt"
-$CLICKHOUSE_CLIENT --query "CREATE TABLE alter_mt (key UInt64, value String) ENGINE=ReplicatedMergeTree('/clickhouse/tables/test_01338/alter_mt', '1') ORDER BY key"
+$CLICKHOUSE_CLIENT --query "CREATE TABLE alter_mt (key UInt64, value String) ENGINE=ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/alter_mt', '1') ORDER BY key"
$CLICKHOUSE_CLIENT --query "INSERT INTO alter_mt SELECT number, toString(number) FROM numbers(5)"
diff --git a/tests/queries/0_stateless/01344_min_bytes_to_use_mmap_io_index.sql b/tests/queries/0_stateless/01344_min_bytes_to_use_mmap_io_index.sql
index 9044ee08f8d..7aab991d203 100644
--- a/tests/queries/0_stateless/01344_min_bytes_to_use_mmap_io_index.sql
+++ b/tests/queries/0_stateless/01344_min_bytes_to_use_mmap_io_index.sql
@@ -6,6 +6,6 @@ SET min_bytes_to_use_mmap_io = 1;
SELECT * FROM test_01344 WHERE x = 'Hello, world';
SYSTEM FLUSH LOGS;
-SELECT PE.Values FROM system.query_log ARRAY JOIN ProfileEvents AS PE WHERE current_database = currentDatabase() AND event_date >= yesterday() AND event_time >= now() - 300 AND query LIKE 'SELECT * FROM test_01344 WHERE x = ''Hello, world''%' AND PE.Names = 'CreatedReadBufferMMap' AND type = 2 ORDER BY event_time DESC LIMIT 1;
+SELECT PE.Values FROM system.query_log ARRAY JOIN ProfileEvents AS PE WHERE current_database = currentDatabase() AND event_date >= yesterday() AND query LIKE 'SELECT * FROM test_01344 WHERE x = ''Hello, world''%' AND PE.Names = 'CreatedReadBufferMMap' AND type = 2 ORDER BY event_time DESC LIMIT 1;
DROP TABLE test_01344;
diff --git a/tests/queries/0_stateless/01357_version_collapsing_attach_detach_zookeeper.sql b/tests/queries/0_stateless/01357_version_collapsing_attach_detach_zookeeper.sql
index 0086ec5c2a3..d8249a603ff 100644
--- a/tests/queries/0_stateless/01357_version_collapsing_attach_detach_zookeeper.sql
+++ b/tests/queries/0_stateless/01357_version_collapsing_attach_detach_zookeeper.sql
@@ -8,13 +8,13 @@ CREATE TABLE versioned_collapsing_table(
sign Int8,
version UInt16
)
-ENGINE = ReplicatedVersionedCollapsingMergeTree('/clickhouse/versioned_collapsing_table', '1', sign, version)
+ENGINE = ReplicatedVersionedCollapsingMergeTree('/clickhouse/versioned_collapsing_table/{shard}', '{replica}', sign, version)
PARTITION BY d
ORDER BY (key1, key2);
INSERT INTO versioned_collapsing_table VALUES (toDate('2019-10-10'), 1, 1, 'Hello', -1, 1);
-SELECT value FROM system.zookeeper WHERE path = '/clickhouse/versioned_collapsing_table' and name = 'metadata';
+SELECT value FROM system.zookeeper WHERE path = '/clickhouse/versioned_collapsing_table/s1' and name = 'metadata';
SELECT COUNT() FROM versioned_collapsing_table;
diff --git a/tests/queries/0_stateless/01360_materialized_view_with_join_on_query_log.sql b/tests/queries/0_stateless/01360_materialized_view_with_join_on_query_log.sql
index 950d4fe097f..3380f04f8c9 100644
--- a/tests/queries/0_stateless/01360_materialized_view_with_join_on_query_log.sql
+++ b/tests/queries/0_stateless/01360_materialized_view_with_join_on_query_log.sql
@@ -17,7 +17,7 @@ CREATE MATERIALIZED VIEW slow_log Engine=Memory AS
extract(query,'/\\*\\s*QUERY_GROUP_ID:(.*?)\\s*\\*/') as QUERY_GROUP_ID,
*
FROM system.query_log
- WHERE type<>1 and event_date >= yesterday() and event_time > now() - 120
+ WHERE type<>1 and event_date >= yesterday()
) as ql
INNER JOIN expected_times USING (QUERY_GROUP_ID)
WHERE query_duration_ms > max_query_duration_ms
@@ -38,7 +38,7 @@ SELECT
extract(query,'/\\*\\s*QUERY_GROUP_ID:(.*?)\\s*\\*/') as QUERY_GROUP_ID,
count()
FROM system.query_log
-WHERE current_database = currentDatabase() AND type<>1 and event_date >= yesterday() and event_time > now() - 20 and QUERY_GROUP_ID<>''
+WHERE current_database = currentDatabase() AND type<>1 and event_date >= yesterday() and QUERY_GROUP_ID<>''
GROUP BY QUERY_GROUP_ID
ORDER BY QUERY_GROUP_ID;
diff --git a/tests/queries/0_stateless/01370_client_autocomplete_word_break_characters.expect b/tests/queries/0_stateless/01370_client_autocomplete_word_break_characters.expect
index 50ef009dee9..a6d52b39918 100755
--- a/tests/queries/0_stateless/01370_client_autocomplete_word_break_characters.expect
+++ b/tests/queries/0_stateless/01370_client_autocomplete_word_break_characters.expect
@@ -23,7 +23,7 @@ set is_done 0
while {$is_done == 0} {
send -- "\t"
expect {
- "_connections" {
+ "_" {
set is_done 1
}
default {
diff --git a/tests/queries/0_stateless/01386_negative_float_constant_key_condition.reference b/tests/queries/0_stateless/01386_negative_float_constant_key_condition.reference
index 44e0be8e356..bb0b1cf658d 100644
--- a/tests/queries/0_stateless/01386_negative_float_constant_key_condition.reference
+++ b/tests/queries/0_stateless/01386_negative_float_constant_key_condition.reference
@@ -1,4 +1,3 @@
0
0
0
-0
diff --git a/tests/queries/0_stateless/01386_negative_float_constant_key_condition.sql b/tests/queries/0_stateless/01386_negative_float_constant_key_condition.sql
index 216f43c4285..c2191d6ab96 100644
--- a/tests/queries/0_stateless/01386_negative_float_constant_key_condition.sql
+++ b/tests/queries/0_stateless/01386_negative_float_constant_key_condition.sql
@@ -12,7 +12,7 @@ SETTINGS index_granularity = 8192;
INSERT INTO t0 VALUES (0, 0);
SELECT t0.c1 FROM t0 WHERE NOT (t0.c1 OR (t0.c0 AND -1524532316));
-SELECT t0.c1 FROM t0 WHERE NOT (t0.c1 OR (t0.c0 AND -1.0));
+SELECT t0.c1 FROM t0 WHERE NOT (t0.c1 OR (t0.c0 AND -1.0)); -- { serverError 70 }
SELECT t0.c1 FROM t0 WHERE NOT (t0.c1 OR (t0.c0 AND inf));
SELECT t0.c1 FROM t0 WHERE NOT (t0.c1 OR (t0.c0 AND nan));
diff --git a/tests/queries/0_stateless/01396_inactive_replica_cleanup_nodes_zookeeper.sh b/tests/queries/0_stateless/01396_inactive_replica_cleanup_nodes_zookeeper.sh
index 30b2b665658..693580bc270 100755
--- a/tests/queries/0_stateless/01396_inactive_replica_cleanup_nodes_zookeeper.sh
+++ b/tests/queries/0_stateless/01396_inactive_replica_cleanup_nodes_zookeeper.sh
@@ -12,8 +12,8 @@ SCALE=5000
$CLICKHOUSE_CLIENT -n --query "
DROP TABLE IF EXISTS r1;
DROP TABLE IF EXISTS r2;
- CREATE TABLE r1 (x UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test_01396/r', '1') ORDER BY x SETTINGS old_parts_lifetime = 1, cleanup_delay_period = 0, cleanup_delay_period_random_add = 1, parts_to_throw_insert = 100000, max_replicated_logs_to_keep = 10;
- CREATE TABLE r2 (x UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test_01396/r', '2') ORDER BY x SETTINGS old_parts_lifetime = 1, cleanup_delay_period = 0, cleanup_delay_period_random_add = 1, parts_to_throw_insert = 100000, max_replicated_logs_to_keep = 10;
+ CREATE TABLE r1 (x UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/{shard}', '1{replica}') ORDER BY x SETTINGS old_parts_lifetime = 1, cleanup_delay_period = 0, cleanup_delay_period_random_add = 1, parts_to_throw_insert = 100000, max_replicated_logs_to_keep = 10;
+ CREATE TABLE r2 (x UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/{shard}', '2{replica}') ORDER BY x SETTINGS old_parts_lifetime = 1, cleanup_delay_period = 0, cleanup_delay_period_random_add = 1, parts_to_throw_insert = 100000, max_replicated_logs_to_keep = 10;
DETACH TABLE r2;
"
@@ -29,16 +29,16 @@ for _ in {1..60}; do
done
-$CLICKHOUSE_CLIENT --query "SELECT numChildren < $((SCALE / 4)) FROM system.zookeeper WHERE path = '/clickhouse/tables/test_01396/r' AND name = 'log'";
+$CLICKHOUSE_CLIENT --query "SELECT numChildren < $((SCALE / 4)) FROM system.zookeeper WHERE path = '/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/s1' AND name = 'log'";
echo -e '\n---\n';
-$CLICKHOUSE_CLIENT --query "SELECT value FROM system.zookeeper WHERE path = '/clickhouse/tables/test_01396/r/replicas/1' AND name = 'is_lost'";
-$CLICKHOUSE_CLIENT --query "SELECT value FROM system.zookeeper WHERE path = '/clickhouse/tables/test_01396/r/replicas/2' AND name = 'is_lost'";
+$CLICKHOUSE_CLIENT --query "SELECT value FROM system.zookeeper WHERE path = '/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/s1/replicas/1r1' AND name = 'is_lost'";
+$CLICKHOUSE_CLIENT --query "SELECT value FROM system.zookeeper WHERE path = '/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/s1/replicas/2r1' AND name = 'is_lost'";
echo -e '\n---\n';
$CLICKHOUSE_CLIENT --query "ATTACH TABLE r2"
$CLICKHOUSE_CLIENT --receive_timeout 600 --query "SYSTEM SYNC REPLICA r2" # Need to increase timeout, otherwise it timed out in debug build
-$CLICKHOUSE_CLIENT --query "SELECT value FROM system.zookeeper WHERE path = '/clickhouse/tables/test_01396/r/replicas/2' AND name = 'is_lost'";
+$CLICKHOUSE_CLIENT --query "SELECT value FROM system.zookeeper WHERE path = '/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/s1/replicas/2r1' AND name = 'is_lost'";
$CLICKHOUSE_CLIENT -n --query "
DROP TABLE IF EXISTS r1;
diff --git a/tests/queries/0_stateless/01414_mutations_and_errors_zookeeper.sh b/tests/queries/0_stateless/01414_mutations_and_errors_zookeeper.sh
index ceeeed41049..6e1a6e01757 100755
--- a/tests/queries/0_stateless/01414_mutations_and_errors_zookeeper.sh
+++ b/tests/queries/0_stateless/01414_mutations_and_errors_zookeeper.sh
@@ -12,7 +12,7 @@ $CLICKHOUSE_CLIENT --query "
key UInt64,
value String
)
- ENGINE = ReplicatedMergeTree('/clickhouse/tables/test_01414/mutation_table', '1')
+ ENGINE = ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/mutation_table', '1')
ORDER BY tuple()
PARTITION BY date
"
diff --git a/tests/queries/0_stateless/01417_freeze_partition_verbose.reference b/tests/queries/0_stateless/01417_freeze_partition_verbose.reference
index e648f619c50..7a895d1ed4f 100644
--- a/tests/queries/0_stateless/01417_freeze_partition_verbose.reference
+++ b/tests/queries/0_stateless/01417_freeze_partition_verbose.reference
@@ -16,3 +16,9 @@ ATTACH PARTITION 3 3_12_12_0 3_4_4_0
command_type partition_id part_name backup_name old_part_name
FREEZE PARTITION 7 7_8_8_0 test_01417_single_part_7
ATTACH PART 5 5_13_13_0 5_6_6_0
+command_type partition_id part_name backup_name
+UNFREEZE PARTITION 7 7_8_8_0 test_01417_single_part_7
+command_type partition_id part_name backup_name
+FREEZE PARTITION 202103 20210301_20210301_1_1_0 test_01417_single_part_old_syntax
+command_type partition_id part_name backup_name
+UNFREEZE PARTITION 20210301 20210301_20210301_1_1_0 test_01417_single_part_old_syntax
diff --git a/tests/queries/0_stateless/01417_freeze_partition_verbose.sh b/tests/queries/0_stateless/01417_freeze_partition_verbose.sh
index 5294f4fe8f1..f5389e3e03d 100755
--- a/tests/queries/0_stateless/01417_freeze_partition_verbose.sh
+++ b/tests/queries/0_stateless/01417_freeze_partition_verbose.sh
@@ -13,6 +13,11 @@ ${CLICKHOUSE_CLIENT} --query "DROP TABLE IF EXISTS table_for_freeze;"
${CLICKHOUSE_CLIENT} --query "CREATE TABLE table_for_freeze (key UInt64, value String) ENGINE = MergeTree() ORDER BY key PARTITION BY key % 10;"
${CLICKHOUSE_CLIENT} --query "INSERT INTO table_for_freeze SELECT number, toString(number) from numbers(10);"
+# also for old syntax
+${CLICKHOUSE_CLIENT} --query "DROP TABLE IF EXISTS table_for_freeze_old_syntax;"
+${CLICKHOUSE_CLIENT} --query "CREATE TABLE table_for_freeze_old_syntax (dt Date, value String) ENGINE = MergeTree(dt, (value), 8192);"
+${CLICKHOUSE_CLIENT} --query "INSERT INTO table_for_freeze_old_syntax SELECT toDate('2021-03-01'), toString(number) from numbers(10);"
+
${CLICKHOUSE_CLIENT} --query "ALTER TABLE table_for_freeze FREEZE WITH NAME 'test_01417' FORMAT TSVWithNames SETTINGS alter_partition_verbose_result = 1;" \
| ${CLICKHOUSE_LOCAL} --structure "$ALTER_OUT_STRUCTURE, $FREEZE_OUT_STRUCTURE" \
--query "SELECT command_type, partition_id, part_name, backup_name FROM table"
@@ -35,5 +40,21 @@ ${CLICKHOUSE_CLIENT} --query "ALTER TABLE table_for_freeze FREEZE PARTITION '7'
| ${CLICKHOUSE_LOCAL} --structure "$ALTER_OUT_STRUCTURE, $FREEZE_OUT_STRUCTURE, $ATTACH_OUT_STRUCTURE" \
--query "SELECT command_type, partition_id, part_name, backup_name, old_part_name FROM table"
+# Unfreeze partition
+${CLICKHOUSE_CLIENT} --query "ALTER TABLE table_for_freeze UNFREEZE PARTITION '7' WITH NAME 'test_01417_single_part_7' FORMAT TSVWithNames SETTINGS alter_partition_verbose_result = 1;" \
+ | ${CLICKHOUSE_LOCAL} --structure "$ALTER_OUT_STRUCTURE, $FREEZE_OUT_STRUCTURE" \
+ --query "SELECT command_type, partition_id, part_name, backup_name FROM table"
+
+# Freeze partition with old syntax
+${CLICKHOUSE_CLIENT} --query "ALTER TABLE table_for_freeze_old_syntax FREEZE PARTITION '202103' WITH NAME 'test_01417_single_part_old_syntax' FORMAT TSVWithNames SETTINGS alter_partition_verbose_result = 1;" \
+ | ${CLICKHOUSE_LOCAL} --structure "$ALTER_OUT_STRUCTURE, $FREEZE_OUT_STRUCTURE" \
+ --query "SELECT command_type, partition_id, part_name, backup_name FROM table"
+
+# Unfreeze partition with old syntax
+${CLICKHOUSE_CLIENT} --query "ALTER TABLE table_for_freeze_old_syntax UNFREEZE PARTITION '202103' WITH NAME 'test_01417_single_part_old_syntax' FORMAT TSVWithNames SETTINGS alter_partition_verbose_result = 1;" \
+ | ${CLICKHOUSE_LOCAL} --structure "$ALTER_OUT_STRUCTURE, $FREEZE_OUT_STRUCTURE" \
+ --query "SELECT command_type, partition_id, part_name, backup_name FROM table"
+
# teardown
${CLICKHOUSE_CLIENT} --query "DROP TABLE IF EXISTS table_for_freeze;"
+${CLICKHOUSE_CLIENT} --query "DROP TABLE IF EXISTS table_for_freeze_old_syntax;"
diff --git a/tests/queries/0_stateless/01417_freeze_partition_verbose_zookeeper.sh b/tests/queries/0_stateless/01417_freeze_partition_verbose_zookeeper.sh
index 480daeefa46..bb935a950ff 100755
--- a/tests/queries/0_stateless/01417_freeze_partition_verbose_zookeeper.sh
+++ b/tests/queries/0_stateless/01417_freeze_partition_verbose_zookeeper.sh
@@ -11,7 +11,7 @@ FREEZE_OUT_STRUCTURE='backup_name String, backup_path String , part_backup_path
# setup
${CLICKHOUSE_CLIENT} --query "DROP TABLE IF EXISTS table_for_freeze_replicated;"
-${CLICKHOUSE_CLIENT} --query "CREATE TABLE table_for_freeze_replicated (key UInt64, value String) ENGINE = ReplicatedMergeTree('/test_01417/table_for_freeze_replicated', '1') ORDER BY key PARTITION BY key % 10;"
+${CLICKHOUSE_CLIENT} --query "CREATE TABLE table_for_freeze_replicated (key UInt64, value String) ENGINE = ReplicatedMergeTree('/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/table_for_freeze_replicated', '1') ORDER BY key PARTITION BY key % 10;"
${CLICKHOUSE_CLIENT} --query "INSERT INTO table_for_freeze_replicated SELECT number, toString(number) from numbers(10);"
${CLICKHOUSE_CLIENT} --query "ALTER TABLE table_for_freeze_replicated FREEZE WITH NAME 'test_01417' FORMAT TSVWithNames SETTINGS alter_partition_verbose_result = 1;" \
diff --git a/tests/queries/0_stateless/01440_to_date_monotonicity.reference b/tests/queries/0_stateless/01440_to_date_monotonicity.reference
index 96732e5996c..74716fe6223 100644
--- a/tests/queries/0_stateless/01440_to_date_monotonicity.reference
+++ b/tests/queries/0_stateless/01440_to_date_monotonicity.reference
@@ -1,4 +1,4 @@
0
-1970-01-01 2106-02-07 1970-04-11 1970-01-01 2106-02-07
+1970-01-01 2106-02-07 1970-04-11 1970-01-01 2149-06-06
1970-01-01 03:00:00 2106-02-07 09:28:15 1970-01-01 03:16:40 2000-01-01 13:12:12
diff --git a/tests/queries/0_stateless/01459_manual_write_to_replicas.sh b/tests/queries/0_stateless/01459_manual_write_to_replicas.sh
index 467c29d3d33..cf239fd7032 100755
--- a/tests/queries/0_stateless/01459_manual_write_to_replicas.sh
+++ b/tests/queries/0_stateless/01459_manual_write_to_replicas.sh
@@ -11,7 +11,7 @@ NUM_REPLICAS=10
for i in $(seq 1 $NUM_REPLICAS); do
$CLICKHOUSE_CLIENT -n -q "
DROP TABLE IF EXISTS r$i;
- CREATE TABLE r$i (x UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/tables/01459_manual_write_ro_replicas/r', 'r$i') ORDER BY x;
+ CREATE TABLE r$i (x UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/r', 'r$i') ORDER BY x;
"
done
diff --git a/tests/queries/0_stateless/01459_manual_write_to_replicas_quorum.sh b/tests/queries/0_stateless/01459_manual_write_to_replicas_quorum.sh
index 376ee58859e..8c322798173 100755
--- a/tests/queries/0_stateless/01459_manual_write_to_replicas_quorum.sh
+++ b/tests/queries/0_stateless/01459_manual_write_to_replicas_quorum.sh
@@ -11,7 +11,7 @@ NUM_REPLICAS=10
for i in $(seq 1 $NUM_REPLICAS); do
$CLICKHOUSE_CLIENT -n -q "
DROP TABLE IF EXISTS r$i;
- CREATE TABLE r$i (x UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/tables/01459_manual_write_ro_replicas_quorum/r', 'r$i') ORDER BY x;
+ CREATE TABLE r$i (x UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/r', 'r$i') ORDER BY x;
"
done
diff --git a/tests/queries/0_stateless/01461_query_start_time_microseconds.sql b/tests/queries/0_stateless/01461_query_start_time_microseconds.sql
index 678b9b3d85e..be1d9897053 100644
--- a/tests/queries/0_stateless/01461_query_start_time_microseconds.sql
+++ b/tests/queries/0_stateless/01461_query_start_time_microseconds.sql
@@ -7,6 +7,8 @@ WITH (
SELECT query_start_time_microseconds
FROM system.query_log
WHERE current_database = currentDatabase()
+ AND query like 'SELECT \'01461_query%'
+ AND event_date >= yesterday()
ORDER BY query_start_time DESC
LIMIT 1
) AS time_with_microseconds,
@@ -14,6 +16,8 @@ WITH (
SELECT query_start_time
FROM system.query_log
WHERE current_database = currentDatabase()
+ AND query like 'SELECT \'01461_query%'
+ AND event_date >= yesterday()
ORDER BY query_start_time DESC
LIMIT 1
) AS t)
@@ -27,6 +31,8 @@ WITH (
SELECT query_start_time_microseconds
FROM system.query_thread_log
WHERE current_database = currentDatabase()
+ AND query like 'SELECT \'01461_query%'
+ AND event_date >= yesterday()
ORDER BY query_start_time DESC
LIMIT 1
) AS time_with_microseconds,
@@ -34,6 +40,8 @@ WITH (
SELECT query_start_time
FROM system.query_thread_log
WHERE current_database = currentDatabase()
+ AND query like 'SELECT \'01461_query%'
+ AND event_date >= yesterday()
ORDER BY query_start_time DESC
LIMIT 1
) AS t)
diff --git a/tests/queries/0_stateless/01475_read_subcolumns.sql b/tests/queries/0_stateless/01475_read_subcolumns.sql
index 16832c4fc59..3457d17dba1 100644
--- a/tests/queries/0_stateless/01475_read_subcolumns.sql
+++ b/tests/queries/0_stateless/01475_read_subcolumns.sql
@@ -10,7 +10,7 @@ SYSTEM FLUSH LOGS;
SELECT ProfileEvents.Values[indexOf(ProfileEvents.Names, 'FileOpen')]
FROM system.query_log
WHERE (type = 'QueryFinish') AND (lower(query) LIKE lower('SELECT a.size0 FROM %t_arr%'))
- AND event_time > now() - INTERVAL 10 SECOND AND current_database = currentDatabase();
+ AND current_database = currentDatabase();
SELECT '====tuple====';
DROP TABLE IF EXISTS t_tup;
@@ -27,7 +27,7 @@ SYSTEM FLUSH LOGS;
SELECT ProfileEvents.Values[indexOf(ProfileEvents.Names, 'FileOpen')]
FROM system.query_log
WHERE (type = 'QueryFinish') AND (lower(query) LIKE lower('SELECT t._ FROM %t_tup%'))
- AND event_time > now() - INTERVAL 10 SECOND AND current_database = currentDatabase();
+ AND current_database = currentDatabase();
SELECT '====nullable====';
DROP TABLE IF EXISTS t_nul;
@@ -41,7 +41,7 @@ SYSTEM FLUSH LOGS;
SELECT ProfileEvents.Values[indexOf(ProfileEvents.Names, 'FileOpen')]
FROM system.query_log
WHERE (type = 'QueryFinish') AND (lower(query) LIKE lower('SELECT n.null FROM %t_nul%'))
- AND event_time > now() - INTERVAL 10 SECOND AND current_database = currentDatabase();
+ AND current_database = currentDatabase();
SELECT '====map====';
SET allow_experimental_map_type = 1;
@@ -60,7 +60,7 @@ SYSTEM FLUSH LOGS;
SELECT ProfileEvents.Values[indexOf(ProfileEvents.Names, 'FileOpen')]
FROM system.query_log
WHERE (type = 'QueryFinish') AND (lower(query) LIKE lower('SELECT m.% FROM %t_map%'))
- AND event_time > now() - INTERVAL 10 SECOND AND current_database = currentDatabase();
+ AND current_database = currentDatabase();
DROP TABLE t_arr;
DROP TABLE t_nul;
diff --git a/tests/queries/0_stateless/01477_lc_in_merge_join_left_key.reference b/tests/queries/0_stateless/01477_lc_in_merge_join_left_key.reference
index 0612b4ca23e..ac4d0a3d21a 100644
--- a/tests/queries/0_stateless/01477_lc_in_merge_join_left_key.reference
+++ b/tests/queries/0_stateless/01477_lc_in_merge_join_left_key.reference
@@ -19,17 +19,17 @@
1 \N l Nullable(String) LowCardinality(String)
-
1 l \N LowCardinality(String) Nullable(String)
-2 \N LowCardinality(String) Nullable(String)
-1 l \N LowCardinality(String) Nullable(String)
-2 \N LowCardinality(String) Nullable(String)
+2 \N \N LowCardinality(Nullable(String)) Nullable(String)
+1 l \N LowCardinality(Nullable(String)) Nullable(String)
+2 \N \N LowCardinality(Nullable(String)) Nullable(String)
-
-\N \N Nullable(String) LowCardinality(String)
+\N \N \N Nullable(String) LowCardinality(Nullable(String))
1 \N l Nullable(String) LowCardinality(String)
-1 \N l Nullable(String) LowCardinality(String)
-\N \N Nullable(String) LowCardinality(String)
+1 \N l Nullable(String) LowCardinality(Nullable(String))
+\N \N \N Nullable(String) LowCardinality(Nullable(String))
-
1 l \N LowCardinality(String) Nullable(String)
-\N \N LowCardinality(String) Nullable(String)
-1 l \N LowCardinality(String) Nullable(String)
-\N \N LowCardinality(String) Nullable(String)
+\N \N \N LowCardinality(Nullable(String)) Nullable(String)
+1 l \N LowCardinality(Nullable(String)) Nullable(String)
+\N \N \N LowCardinality(Nullable(String)) Nullable(String)
-
diff --git a/tests/queries/0_stateless/01483_merge_table_join_and_group_by.reference b/tests/queries/0_stateless/01483_merge_table_join_and_group_by.reference
index b2c3ea56b7f..4261ccd8a1f 100644
--- a/tests/queries/0_stateless/01483_merge_table_join_and_group_by.reference
+++ b/tests/queries/0_stateless/01483_merge_table_join_and_group_by.reference
@@ -5,3 +5,5 @@ 1
0 1
0 1
+1 0
+1
diff --git a/tests/queries/0_stateless/01483_merge_table_join_and_group_by.sql b/tests/queries/0_stateless/01483_merge_table_join_and_group_by.sql
index a6678ca9040..68b4e7d4015 100644
--- a/tests/queries/0_stateless/01483_merge_table_join_and_group_by.sql
+++ b/tests/queries/0_stateless/01483_merge_table_join_and_group_by.sql
@@ -17,6 +17,9 @@ SELECT ID FROM m INNER JOIN b USING(key) GROUP BY ID;
SELECT * FROM m INNER JOIN b USING(key) WHERE ID = 1 HAVING ID = 1 ORDER BY ID;
SELECT * FROM m INNER JOIN b USING(key) WHERE ID = 1 GROUP BY ID, key HAVING ID = 1 ORDER BY ID;
+SELECT sum(b.ID), sum(m.key) FROM m FULL JOIN b ON (m.key == b.key) GROUP BY key;
+SELECT sum(b.ID + m.key) FROM m FULL JOIN b ON (m.key == b.key) GROUP BY key;
+
DROP TABLE IF EXISTS a;
DROP TABLE IF EXISTS b;
DROP TABLE IF EXISTS m;
diff --git a/tests/queries/0_stateless/01508_partition_pruning_long.reference b/tests/queries/0_stateless/01508_partition_pruning_long.reference
index 70f529c6058..334ecb63164 100644
--- a/tests/queries/0_stateless/01508_partition_pruning_long.reference
+++ b/tests/queries/0_stateless/01508_partition_pruning_long.reference
@@ -5,11 +5,11 @@ Selected 0/6 parts by partition key, 0 parts by primary key, 0/0 marks by primar
select uniqExact(_part), count() from tMM where toDate(d)=toDate('2020-09-01');
2 2880
-Selected 2/6 parts by partition key, 2 parts by primary key, 2/4 marks by primary key, 2 marks to read from 2 ranges
+Selected 2/6 parts by partition key, 2 parts by primary key, 2/2 marks by primary key, 2 marks to read from 2 ranges
select uniqExact(_part), count() from tMM where toDate(d)=toDate('2020-10-15');
1 1440
-Selected 1/6 parts by partition key, 1 parts by primary key, 1/2 marks by primary key, 1 marks to read from 1 ranges
+Selected 1/6 parts by partition key, 1 parts by primary key, 1/1 marks by primary key, 1 marks to read from 1 ranges
select uniqExact(_part), count() from tMM where toDate(d)='2020-09-15';
0 0
@@ -17,27 +17,27 @@ Selected 0/6 parts by partition key, 0 parts by primary key, 0/0 marks by primar
select uniqExact(_part), count() from tMM where toYYYYMM(d)=202009;
2 10000
-Selected 2/6 parts by partition key, 2 parts by primary key, 2/4 marks by primary key, 2 marks to read from 2 ranges
+Selected 2/6 parts by partition key, 2 parts by primary key, 2/2 marks by primary key, 2 marks to read from 2 ranges
select uniqExact(_part), count() from tMM where toYYYYMMDD(d)=20200816;
2 2880
-Selected 2/6 parts by partition key, 2 parts by primary key, 2/4 marks by primary key, 2 marks to read from 2 ranges
+Selected 2/6 parts by partition key, 2 parts by primary key, 2/2 marks by primary key, 2 marks to read from 2 ranges
select uniqExact(_part), count() from tMM where toYYYYMMDD(d)=20201015;
1 1440
-Selected 1/6 parts by partition key, 1 parts by primary key, 1/2 marks by primary key, 1 marks to read from 1 ranges
+Selected 1/6 parts by partition key, 1 parts by primary key, 1/1 marks by primary key, 1 marks to read from 1 ranges
select uniqExact(_part), count() from tMM where toDate(d)='2020-10-15';
1 1440
-Selected 1/6 parts by partition key, 1 parts by primary key, 1/2 marks by primary key, 1 marks to read from 1 ranges
+Selected 1/6 parts by partition key, 1 parts by primary key, 1/1 marks by primary key, 1 marks to read from 1 ranges
select uniqExact(_part), count() from tMM where d >= '2020-09-01 00:00:00' and d<'2020-10-15 00:00:00';
3 15000
-Selected 3/6 parts by partition key, 3 parts by primary key, 3/6 marks by primary key, 3 marks to read from 3 ranges
+Selected 3/6 parts by partition key, 3 parts by primary key, 3/3 marks by primary key, 3 marks to read from 3 ranges
select uniqExact(_part), count() from tMM where d >= '2020-01-16 00:00:00' and d < toDateTime('2021-08-17 00:00:00');
6 30000
-Selected 6/6 parts by partition key, 6 parts by primary key, 6/12 marks by primary key, 6 marks to read from 6 ranges
+Selected 6/6 parts by partition key, 6 parts by primary key, 6/6 marks by primary key, 6 marks to read from 6 ranges
select uniqExact(_part), count() from tMM where d >= '2020-09-16 00:00:00' and d < toDateTime('2020-10-01 00:00:00');
0 0
@@ -45,117 +45,117 @@ Selected 0/6 parts by partition key, 0 parts by primary key, 0/0 marks by primar
select uniqExact(_part), count() from tMM where d >= '2020-09-12 00:00:00' and d < '2020-10-16 00:00:00';
2 6440
-Selected 2/6 parts by partition key, 2 parts by primary key, 2/4 marks by primary key, 2 marks to read from 2 ranges
+Selected 2/6 parts by partition key, 2 parts by primary key, 2/2 marks by primary key, 2 marks to read from 2 ranges
select uniqExact(_part), count() from tMM where toStartOfDay(d) >= '2020-09-12 00:00:00';
2 10000
-Selected 2/6 parts by partition key, 2 parts by primary key, 2/4 marks by primary key, 2 marks to read from 2 ranges
+Selected 2/6 parts by partition key, 2 parts by primary key, 2/2 marks by primary key, 2 marks to read from 2 ranges
select uniqExact(_part), count() from tMM where toStartOfDay(d) = '2020-09-01 00:00:00';
2 2880
-Selected 2/6 parts by partition key, 2 parts by primary key, 2/4 marks by primary key, 2 marks to read from 2 ranges
+Selected 2/6 parts by partition key, 2 parts by primary key, 2/2 marks by primary key, 2 marks to read from 2 ranges
select uniqExact(_part), count() from tMM where toStartOfDay(d) = '2020-10-01 00:00:00';
1 1440
-Selected 1/6 parts by partition key, 1 parts by primary key, 1/2 marks by primary key, 1 marks to read from 1 ranges
+Selected 1/6 parts by partition key, 1 parts by primary key, 1/1 marks by primary key, 1 marks to read from 1 ranges
select uniqExact(_part), count() from tMM where toStartOfDay(d) >= '2020-09-15 00:00:00' and d < '2020-10-16 00:00:00';
2 6440
-Selected 2/6 parts by partition key, 2 parts by primary key, 2/4 marks by primary key, 2 marks to read from 2 ranges
+Selected 2/6 parts by partition key, 2 parts by primary key, 2/2 marks by primary key, 2 marks to read from 2 ranges
select uniqExact(_part), count() from tMM where toYYYYMM(d) between 202009 and 202010;
4 20000
-Selected 4/6 parts by partition key, 4 parts by primary key, 4/8 marks by primary key, 4 marks to read from 4 ranges
+Selected 4/6 parts by partition key, 4 parts by primary key, 4/4 marks by primary key, 4 marks to read from 4 ranges
select uniqExact(_part), count() from tMM where toYYYYMM(d) between 202009 and 202009;
2 10000
-Selected 2/6 parts by partition key, 2 parts by primary key, 2/4 marks by primary key, 2 marks to read from 2 ranges
+Selected 2/6 parts by partition key, 2 parts by primary key, 2/2 marks by primary key, 2 marks to read from 2 ranges
select uniqExact(_part), count() from tMM where toYYYYMM(d) between 202009 and 202010 and toStartOfDay(d) = '2020-10-01 00:00:00';
1 1440
-Selected 1/6 parts by partition key, 1 parts by primary key, 1/2 marks by primary key, 1 marks to read from 1 ranges
+Selected 1/6 parts by partition key, 1 parts by primary key, 1/1 marks by primary key, 1 marks to read from 1 ranges
select uniqExact(_part), count() from tMM where toYYYYMM(d) >= 202009 and toStartOfDay(d) < '2020-10-02 00:00:00';
3 11440
-Selected 3/6 parts by partition key, 3 parts by primary key, 3/6 marks by primary key, 3 marks to read from 3 ranges
+Selected 3/6 parts by partition key, 3 parts by primary key, 3/3 marks by primary key, 3 marks to read from 3 ranges
select uniqExact(_part), count() from tMM where toYYYYMM(d) > 202009 and toStartOfDay(d) < '2020-10-02 00:00:00';
1 1440
-Selected 1/6 parts by partition key, 1 parts by primary key, 1/2 marks by primary key, 1 marks to read from 1 ranges
+Selected 1/6 parts by partition key, 1 parts by primary key, 1/1 marks by primary key, 1 marks to read from 1 ranges
select uniqExact(_part), count() from tMM where toYYYYMM(d)+1 > 202009 and toStartOfDay(d) < '2020-10-02 00:00:00';
3 11440
-Selected 3/6 parts by partition key, 3 parts by primary key, 3/6 marks by primary key, 3 marks to read from 3 ranges
+Selected 3/6 parts by partition key, 3 parts by primary key, 3/3 marks by primary key, 3 marks to read from 3 ranges
select uniqExact(_part), count() from tMM where toYYYYMM(d)+1 > 202010 and toStartOfDay(d) < '2020-10-02 00:00:00';
1 1440
-Selected 1/6 parts by partition key, 1 parts by primary key, 1/2 marks by primary key, 1 marks to read from 1 ranges
+Selected 1/6 parts by partition key, 1 parts by primary
key, 1/1 marks by primary key, 1 marks to read from 1 ranges select uniqExact(_part), count() from tMM where toYYYYMM(d)+1 > 202010; 2 10000 -Selected 2/6 parts by partition key, 2 parts by primary key, 2/4 marks by primary key, 2 marks to read from 2 ranges +Selected 2/6 parts by partition key, 2 parts by primary key, 2/2 marks by primary key, 2 marks to read from 2 ranges select uniqExact(_part), count() from tMM where toYYYYMM(d-1)+1 = 202010; 3 9999 -Selected 3/6 parts by partition key, 3 parts by primary key, 3/6 marks by primary key, 3 marks to read from 3 ranges +Selected 3/6 parts by partition key, 3 parts by primary key, 3/3 marks by primary key, 3 marks to read from 3 ranges select uniqExact(_part), count() from tMM where toStartOfMonth(d) >= '2020-09-15'; 2 10000 -Selected 2/6 parts by partition key, 2 parts by primary key, 2/4 marks by primary key, 2 marks to read from 2 ranges +Selected 2/6 parts by partition key, 2 parts by primary key, 2/2 marks by primary key, 2 marks to read from 2 ranges select uniqExact(_part), count() from tMM where toStartOfMonth(d) >= '2020-09-01'; 4 20000 -Selected 4/6 parts by partition key, 4 parts by primary key, 4/8 marks by primary key, 4 marks to read from 4 ranges +Selected 4/6 parts by partition key, 4 parts by primary key, 4/4 marks by primary key, 4 marks to read from 4 ranges select uniqExact(_part), count() from tMM where toStartOfMonth(d) >= '2020-09-01' and toStartOfMonth(d) < '2020-10-01'; 2 10000 -Selected 2/6 parts by partition key, 2 parts by primary key, 2/4 marks by primary key, 2 marks to read from 2 ranges +Selected 2/6 parts by partition key, 2 parts by primary key, 2/2 marks by primary key, 2 marks to read from 2 ranges select uniqExact(_part), count() from tMM where toYYYYMM(d-1)+1 = 202010; 2 9999 -Selected 2/3 parts by partition key, 2 parts by primary key, 2/4 marks by primary key, 2 marks to read from 2 ranges +Selected 2/3 parts by partition key, 2 parts by primary key, 2/2 marks by primary key, 2 marks to read from 2 ranges select uniqExact(_part), count() from tMM where toYYYYMM(d)+1 > 202010; 1 10000 -Selected 1/3 parts by partition key, 1 parts by primary key, 1/2 marks by primary key, 1 marks to read from 1 ranges +Selected 1/3 parts by partition key, 1 parts by primary key, 1/1 marks by primary key, 1 marks to read from 1 ranges select uniqExact(_part), count() from tMM where toYYYYMM(d) between 202009 and 202010; 2 20000 -Selected 2/3 parts by partition key, 2 parts by primary key, 2/4 marks by primary key, 2 marks to read from 2 ranges +Selected 2/3 parts by partition key, 2 parts by primary key, 2/2 marks by primary key, 2 marks to read from 2 ranges --------- tDD ---------------------------- select uniqExact(_part), count() from tDD where toDate(d)=toDate('2020-09-24'); 1 10000 -Selected 1/4 parts by partition key, 1 parts by primary key, 1/2 marks by primary key, 1 marks to read from 1 ranges +Selected 1/4 parts by partition key, 1 parts by primary key, 1/1 marks by primary key, 1 marks to read from 1 ranges select uniqExact(_part), count() FROM tDD WHERE toDate(d) = toDate('2020-09-24'); 1 10000 -Selected 1/4 parts by partition key, 1 parts by primary key, 1/2 marks by primary key, 1 marks to read from 1 ranges +Selected 1/4 parts by partition key, 1 parts by primary key, 1/1 marks by primary key, 1 marks to read from 1 ranges select uniqExact(_part), count() FROM tDD WHERE toDate(d) = '2020-09-24'; 1 10000 -Selected 1/4 parts by partition key, 1 parts by primary key, 1/2 marks by primary key, 1 marks to read 
from 1 ranges +Selected 1/4 parts by partition key, 1 parts by primary key, 1/1 marks by primary key, 1 marks to read from 1 ranges select uniqExact(_part), count() FROM tDD WHERE toDate(d) >= '2020-09-23' and toDate(d) <= '2020-09-26'; 3 40000 -Selected 3/4 parts by partition key, 3 parts by primary key, 4/7 marks by primary key, 4 marks to read from 3 ranges +Selected 3/4 parts by partition key, 3 parts by primary key, 4/4 marks by primary key, 4 marks to read from 3 ranges select uniqExact(_part), count() FROM tDD WHERE toYYYYMMDD(d) >= 20200923 and toDate(d) <= '2020-09-26'; 3 40000 -Selected 3/4 parts by partition key, 3 parts by primary key, 4/7 marks by primary key, 4 marks to read from 3 ranges +Selected 3/4 parts by partition key, 3 parts by primary key, 4/4 marks by primary key, 4 marks to read from 3 ranges --------- sDD ---------------------------- select uniqExact(_part), count() from sDD; 6 30000 -Selected 6/6 parts by partition key, 6 parts by primary key, 6/12 marks by primary key, 6 marks to read from 6 ranges +Selected 6/6 parts by partition key, 6 parts by primary key, 6/6 marks by primary key, 6 marks to read from 6 ranges select uniqExact(_part), count() from sDD where toYYYYMM(toDateTime(intDiv(d,1000),'UTC')-1)+1 = 202010; 3 9999 -Selected 3/6 parts by partition key, 3 parts by primary key, 3/6 marks by primary key, 3 marks to read from 3 ranges +Selected 3/6 parts by partition key, 3 parts by primary key, 3/3 marks by primary key, 3 marks to read from 3 ranges select uniqExact(_part), count() from sDD where toYYYYMM(toDateTime(intDiv(d,1000),'UTC')-1) = 202010; 2 9999 -Selected 2/6 parts by partition key, 2 parts by primary key, 2/4 marks by primary key, 2 marks to read from 2 ranges +Selected 2/6 parts by partition key, 2 parts by primary key, 2/2 marks by primary key, 2 marks to read from 2 ranges select uniqExact(_part), count() from sDD where toYYYYMM(toDateTime(intDiv(d,1000),'UTC')-1) = 202110; 0 0 @@ -163,52 +163,52 @@ Selected 0/6 parts by partition key, 0 parts by primary key, 0/0 marks by primar select uniqExact(_part), count() from sDD where toYYYYMM(toDateTime(intDiv(d,1000),'UTC'))+1 > 202009 and toStartOfDay(toDateTime(intDiv(d,1000),'UTC')) < toDateTime('2020-10-02 00:00:00','UTC'); 3 11440 -Selected 3/6 parts by partition key, 3 parts by primary key, 3/6 marks by primary key, 3 marks to read from 3 ranges +Selected 3/6 parts by partition key, 3 parts by primary key, 3/3 marks by primary key, 3 marks to read from 3 ranges select uniqExact(_part), count() from sDD where toYYYYMM(toDateTime(intDiv(d,1000),'UTC'))+1 > 202009 and toDateTime(intDiv(d,1000),'UTC') < toDateTime('2020-10-01 00:00:00','UTC'); 2 10000 -Selected 2/6 parts by partition key, 2 parts by primary key, 2/4 marks by primary key, 2 marks to read from 2 ranges +Selected 2/6 parts by partition key, 2 parts by primary key, 2/2 marks by primary key, 2 marks to read from 2 ranges select uniqExact(_part), count() from sDD where d >= 1598918400000; 4 20000 -Selected 4/6 parts by partition key, 4 parts by primary key, 4/8 marks by primary key, 4 marks to read from 4 ranges +Selected 4/6 parts by partition key, 4 parts by primary key, 4/4 marks by primary key, 4 marks to read from 4 ranges select uniqExact(_part), count() from sDD where d >= 1598918400000 and toYYYYMM(toDateTime(intDiv(d,1000),'UTC')-1) < 202010; 3 10001 -Selected 3/6 parts by partition key, 3 parts by primary key, 3/6 marks by primary key, 3 marks to read from 3 ranges +Selected 3/6 parts by partition key, 3 parts by primary 
key, 3/3 marks by primary key, 3 marks to read from 3 ranges --------- xMM ---------------------------- select uniqExact(_part), count() from xMM where toStartOfDay(d) >= '2020-10-01 00:00:00'; 2 10000 -Selected 2/6 parts by partition key, 2 parts by primary key, 2/4 marks by primary key, 2 marks to read from 2 ranges +Selected 2/6 parts by partition key, 2 parts by primary key, 2/2 marks by primary key, 2 marks to read from 2 ranges select uniqExact(_part), count() from xMM where d >= '2020-09-01 00:00:00' and d <= '2020-10-01 00:00:00'; 3 10001 -Selected 3/6 parts by partition key, 3 parts by primary key, 3/6 marks by primary key, 3 marks to read from 3 ranges +Selected 3/6 parts by partition key, 3 parts by primary key, 3/3 marks by primary key, 3 marks to read from 3 ranges select uniqExact(_part), count() from xMM where d >= '2020-09-01 00:00:00' and d < '2020-10-01 00:00:00'; 2 10000 -Selected 2/6 parts by partition key, 2 parts by primary key, 2/4 marks by primary key, 2 marks to read from 2 ranges +Selected 2/6 parts by partition key, 2 parts by primary key, 2/2 marks by primary key, 2 marks to read from 2 ranges select uniqExact(_part), count() from xMM where d >= '2020-09-01 00:00:00' and d <= '2020-10-01 00:00:00' and a=1; 1 1 -Selected 1/6 parts by partition key, 1 parts by primary key, 1/2 marks by primary key, 1 marks to read from 1 ranges +Selected 1/6 parts by partition key, 1 parts by primary key, 1/1 marks by primary key, 1 marks to read from 1 ranges select uniqExact(_part), count() from xMM where d >= '2020-09-01 00:00:00' and d <= '2020-10-01 00:00:00' and a<>3; 2 5001 -Selected 2/6 parts by partition key, 2 parts by primary key, 2/4 marks by primary key, 2 marks to read from 2 ranges +Selected 2/6 parts by partition key, 2 parts by primary key, 2/2 marks by primary key, 2 marks to read from 2 ranges select uniqExact(_part), count() from xMM where d >= '2020-09-01 00:00:00' and d < '2020-10-01 00:00:00' and a<>3; 1 5000 -Selected 1/6 parts by partition key, 1 parts by primary key, 1/2 marks by primary key, 1 marks to read from 1 ranges +Selected 1/6 parts by partition key, 1 parts by primary key, 1/1 marks by primary key, 1 marks to read from 1 ranges select uniqExact(_part), count() from xMM where d >= '2020-09-01 00:00:00' and d < '2020-11-01 00:00:00' and a = 1; 2 10000 -Selected 2/6 parts by partition key, 2 parts by primary key, 2/4 marks by primary key, 2 marks to read from 2 ranges +Selected 2/6 parts by partition key, 2 parts by primary key, 2/2 marks by primary key, 2 marks to read from 2 ranges select uniqExact(_part), count() from xMM where a = 1; 3 15000 -Selected 3/6 parts by partition key, 3 parts by primary key, 3/6 marks by primary key, 3 marks to read from 3 ranges +Selected 3/6 parts by partition key, 3 parts by primary key, 3/3 marks by primary key, 3 marks to read from 3 ranges select uniqExact(_part), count() from xMM where a = 66; 0 0 @@ -216,29 +216,29 @@ Selected 0/6 parts by partition key, 0 parts by primary key, 0/0 marks by primar select uniqExact(_part), count() from xMM where a <> 66; 6 30000 -Selected 6/6 parts by partition key, 6 parts by primary key, 6/12 marks by primary key, 6 marks to read from 6 ranges +Selected 6/6 parts by partition key, 6 parts by primary key, 6/6 marks by primary key, 6 marks to read from 6 ranges select uniqExact(_part), count() from xMM where a = 2; 2 10000 -Selected 2/6 parts by partition key, 2 parts by primary key, 2/4 marks by primary key, 2 marks to read from 2 ranges +Selected 2/6 parts by partition key, 
2 parts by primary key, 2/2 marks by primary key, 2 marks to read from 2 ranges select uniqExact(_part), count() from xMM where a = 1; 2 15000 -Selected 2/5 parts by partition key, 2 parts by primary key, 2/4 marks by primary key, 2 marks to read from 2 ranges +Selected 2/5 parts by partition key, 2 parts by primary key, 2/2 marks by primary key, 2 marks to read from 2 ranges select uniqExact(_part), count() from xMM where toStartOfDay(d) >= '2020-10-01 00:00:00'; 1 10000 -Selected 1/5 parts by partition key, 1 parts by primary key, 1/2 marks by primary key, 1 marks to read from 1 ranges +Selected 1/5 parts by partition key, 1 parts by primary key, 1/1 marks by primary key, 1 marks to read from 1 ranges select uniqExact(_part), count() from xMM where a <> 66; 5 30000 -Selected 5/5 parts by partition key, 5 parts by primary key, 5/10 marks by primary key, 5 marks to read from 5 ranges +Selected 5/5 parts by partition key, 5 parts by primary key, 5/5 marks by primary key, 5 marks to read from 5 ranges select uniqExact(_part), count() from xMM where d >= '2020-09-01 00:00:00' and d <= '2020-10-01 00:00:00' and a<>3; 2 5001 -Selected 2/5 parts by partition key, 2 parts by primary key, 2/4 marks by primary key, 2 marks to read from 2 ranges +Selected 2/5 parts by partition key, 2 parts by primary key, 2/2 marks by primary key, 2 marks to read from 2 ranges select uniqExact(_part), count() from xMM where d >= '2020-09-01 00:00:00' and d < '2020-10-01 00:00:00' and a<>3; 1 5000 -Selected 1/5 parts by partition key, 1 parts by primary key, 1/2 marks by primary key, 1 marks to read from 1 ranges +Selected 1/5 parts by partition key, 1 parts by primary key, 1/1 marks by primary key, 1 marks to read from 1 ranges diff --git a/tests/queries/0_stateless/01508_race_condition_rename_clear_zookeeper.reference b/tests/queries/0_stateless/01508_race_condition_rename_clear_zookeeper_long.reference similarity index 100% rename from tests/queries/0_stateless/01508_race_condition_rename_clear_zookeeper.reference rename to tests/queries/0_stateless/01508_race_condition_rename_clear_zookeeper_long.reference diff --git a/tests/queries/0_stateless/01508_race_condition_rename_clear_zookeeper.sh b/tests/queries/0_stateless/01508_race_condition_rename_clear_zookeeper_long.sh similarity index 81% rename from tests/queries/0_stateless/01508_race_condition_rename_clear_zookeeper.sh rename to tests/queries/0_stateless/01508_race_condition_rename_clear_zookeeper_long.sh index 4cb4734b448..156deb60ff9 100755 --- a/tests/queries/0_stateless/01508_race_condition_rename_clear_zookeeper.sh +++ b/tests/queries/0_stateless/01508_race_condition_rename_clear_zookeeper_long.sh @@ -8,7 +8,7 @@ $CLICKHOUSE_CLIENT --query "DROP TABLE IF EXISTS table_for_renames0" $CLICKHOUSE_CLIENT --query "DROP TABLE IF EXISTS table_for_renames50" -$CLICKHOUSE_CLIENT --query "CREATE TABLE table_for_renames0 (value UInt64, data String) ENGINE ReplicatedMergeTree('/clickhouse/tables/test_01508/concurrent_rename', '1') ORDER BY tuple() SETTINGS cleanup_delay_period = 1, cleanup_delay_period_random_add = 0, min_rows_for_compact_part = 100000, min_rows_for_compact_part = 10000000, write_ahead_log_max_bytes = 1" +$CLICKHOUSE_CLIENT --query "CREATE TABLE table_for_renames0 (value UInt64, data String) ENGINE ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/concurrent_rename', '1') ORDER BY tuple() SETTINGS cleanup_delay_period = 1, cleanup_delay_period_random_add = 0, min_rows_for_compact_part = 100000, min_rows_for_compact_part = 
10000000, write_ahead_log_max_bytes = 1" $CLICKHOUSE_CLIENT --query "INSERT INTO table_for_renames0 SELECT number, toString(number) FROM numbers(1000)" diff --git a/tests/queries/0_stateless/01509_check_many_parallel_quorum_inserts.reference b/tests/queries/0_stateless/01509_check_many_parallel_quorum_inserts_long.reference similarity index 100% rename from tests/queries/0_stateless/01509_check_many_parallel_quorum_inserts.reference rename to tests/queries/0_stateless/01509_check_many_parallel_quorum_inserts_long.reference diff --git a/tests/queries/0_stateless/01509_check_many_parallel_quorum_inserts.sh b/tests/queries/0_stateless/01509_check_many_parallel_quorum_inserts_long.sh similarity index 90% rename from tests/queries/0_stateless/01509_check_many_parallel_quorum_inserts.sh rename to tests/queries/0_stateless/01509_check_many_parallel_quorum_inserts_long.sh index c5ffad1c4ca..b71654e7e6c 100755 --- a/tests/queries/0_stateless/01509_check_many_parallel_quorum_inserts.sh +++ b/tests/queries/0_stateless/01509_check_many_parallel_quorum_inserts_long.sh @@ -11,7 +11,7 @@ NUM_REPLICAS=10 for i in $(seq 1 $NUM_REPLICAS); do $CLICKHOUSE_CLIENT -n -q " DROP TABLE IF EXISTS r$i; - CREATE TABLE r$i (x UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test_01509/parallel_quorum_many', 'r$i') ORDER BY x; + CREATE TABLE r$i (x UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/parallel_quorum_many', 'r$i') ORDER BY x; " done diff --git a/tests/queries/0_stateless/01509_check_parallel_quorum_inserts.reference b/tests/queries/0_stateless/01509_check_parallel_quorum_inserts_long.reference similarity index 100% rename from tests/queries/0_stateless/01509_check_parallel_quorum_inserts.reference rename to tests/queries/0_stateless/01509_check_parallel_quorum_inserts_long.reference diff --git a/tests/queries/0_stateless/01509_check_parallel_quorum_inserts.sh b/tests/queries/0_stateless/01509_check_parallel_quorum_inserts_long.sh similarity index 90% rename from tests/queries/0_stateless/01509_check_parallel_quorum_inserts.sh rename to tests/queries/0_stateless/01509_check_parallel_quorum_inserts_long.sh index 898a68d9c77..78336ea073b 100755 --- a/tests/queries/0_stateless/01509_check_parallel_quorum_inserts.sh +++ b/tests/queries/0_stateless/01509_check_parallel_quorum_inserts_long.sh @@ -12,7 +12,7 @@ NUM_INSERTS=5 for i in $(seq 1 $NUM_REPLICAS); do $CLICKHOUSE_CLIENT -n -q " DROP TABLE IF EXISTS r$i; - CREATE TABLE r$i (x UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test_01509/parallel_quorum', 'r$i') ORDER BY x; + CREATE TABLE r$i (x UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/parallel_quorum', 'r$i') ORDER BY x; " done diff --git a/tests/queries/0_stateless/01509_parallel_quorum_and_merge.reference b/tests/queries/0_stateless/01509_parallel_quorum_and_merge_long.reference similarity index 100% rename from tests/queries/0_stateless/01509_parallel_quorum_and_merge.reference rename to tests/queries/0_stateless/01509_parallel_quorum_and_merge_long.reference diff --git a/tests/queries/0_stateless/01509_parallel_quorum_and_merge.sh b/tests/queries/0_stateless/01509_parallel_quorum_and_merge_long.sh similarity index 88% rename from tests/queries/0_stateless/01509_parallel_quorum_and_merge.sh rename to tests/queries/0_stateless/01509_parallel_quorum_and_merge_long.sh index ca5f58512a3..fbeb65419ce 100755 --- a/tests/queries/0_stateless/01509_parallel_quorum_and_merge.sh +++ 
b/tests/queries/0_stateless/01509_parallel_quorum_and_merge_long.sh @@ -10,9 +10,9 @@ $CLICKHOUSE_CLIENT -q "DROP TABLE IF EXISTS parallel_q1" $CLICKHOUSE_CLIENT -q "DROP TABLE IF EXISTS parallel_q2" -$CLICKHOUSE_CLIENT -q "CREATE TABLE parallel_q1 (x UInt64) ENGINE=ReplicatedMergeTree('/clickhouse/tables/test_01509/parallel_q', 'r1') ORDER BY tuple() SETTINGS old_parts_lifetime = 1, cleanup_delay_period = 0, cleanup_delay_period_random_add = 0" +$CLICKHOUSE_CLIENT -q "CREATE TABLE parallel_q1 (x UInt64) ENGINE=ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/parallel_q', 'r1') ORDER BY tuple() SETTINGS old_parts_lifetime = 1, cleanup_delay_period = 0, cleanup_delay_period_random_add = 0" -$CLICKHOUSE_CLIENT -q "CREATE TABLE parallel_q2 (x UInt64) ENGINE=ReplicatedMergeTree('/clickhouse/tables/test_01509/parallel_q', 'r2') ORDER BY tuple() SETTINGS always_fetch_merged_part = 1" +$CLICKHOUSE_CLIENT -q "CREATE TABLE parallel_q2 (x UInt64) ENGINE=ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/parallel_q', 'r2') ORDER BY tuple() SETTINGS always_fetch_merged_part = 1" $CLICKHOUSE_CLIENT -q "SYSTEM STOP REPLICATION QUEUES parallel_q2" diff --git a/tests/queries/0_stateless/01509_parallel_quorum_insert_no_replicas.sql b/tests/queries/0_stateless/01509_parallel_quorum_insert_no_replicas.sql index 1b680cf26c1..16c4a4df936 100644 --- a/tests/queries/0_stateless/01509_parallel_quorum_insert_no_replicas.sql +++ b/tests/queries/0_stateless/01509_parallel_quorum_insert_no_replicas.sql @@ -1,16 +1,16 @@ -DROP TABLE IF EXISTS r1; -DROP TABLE IF EXISTS r2; +DROP TABLE IF EXISTS r1 SYNC; +DROP TABLE IF EXISTS r2 SYNC; CREATE TABLE r1 ( key UInt64, value String ) -ENGINE = ReplicatedMergeTree('/clickhouse/01509_no_repliacs', '1') +ENGINE = ReplicatedMergeTree('/clickhouse/01509_parallel_quorum_insert_no_replicas', '1') ORDER BY tuple(); CREATE TABLE r2 ( key UInt64, value String ) -ENGINE = ReplicatedMergeTree('/clickhouse/01509_no_repliacs', '2') +ENGINE = ReplicatedMergeTree('/clickhouse/01509_parallel_quorum_insert_no_replicas', '2') ORDER BY tuple(); SET insert_quorum_parallel=1; @@ -18,8 +18,13 @@ SET insert_quorum_parallel=1; SET insert_quorum=3; INSERT INTO r1 VALUES(1, '1'); --{serverError 285} +-- retry should still fail despite insert_deduplicate being enabled +INSERT INTO r1 VALUES(1, '1'); --{serverError 285} +INSERT INTO r1 VALUES(1, '1'); --{serverError 285} + SELECT 'insert to two replicas works'; SET insert_quorum=2, insert_quorum_parallel=1; + INSERT INTO r1 VALUES(1, '1'); SELECT COUNT() FROM r1; @@ -29,12 +34,18 @@ DETACH TABLE r2; INSERT INTO r1 VALUES(2, '2'); --{serverError 285} +-- retry should fail despite insert_deduplicate being enabled +INSERT INTO r1 VALUES(2, '2'); --{serverError 285} +INSERT INTO r1 VALUES(2, '2'); --{serverError 285} + SET insert_quorum=1, insert_quorum_parallel=1; SELECT 'insert to single replica works'; INSERT INTO r1 VALUES(2, '2'); ATTACH TABLE r2; +INSERT INTO r2 VALUES(2, '2'); + SYSTEM SYNC REPLICA r2; SET insert_quorum=2, insert_quorum_parallel=1; @@ -47,6 +58,17 @@ SELECT COUNT() FROM r2; SELECT 'deduplication works'; INSERT INTO r2 VALUES(3, '3'); +-- still works if we relax quorum +SET insert_quorum=1, insert_quorum_parallel=1; +INSERT INTO r2 VALUES(3, '3'); +INSERT INTO r1 VALUES(3, '3'); +-- will start failing if we increase quorum +SET insert_quorum=3, insert_quorum_parallel=1; +INSERT INTO r1 VALUES(3, '3'); --{serverError 285} +-- works again when quorum=2 +SET insert_quorum=2,
insert_quorum_parallel=1; +INSERT INTO r2 VALUES(3, '3'); + SELECT COUNT() FROM r1; SELECT COUNT() FROM r2; @@ -56,8 +78,18 @@ SET insert_quorum_timeout=0; INSERT INTO r1 VALUES (4, '4'); -- { serverError 319 } +-- retry should fail despite insert_deduplicate being enabled +INSERT INTO r1 VALUES (4, '4'); -- { serverError 319 } +INSERT INTO r1 VALUES (4, '4'); -- { serverError 319 } +SELECT * FROM r2 WHERE key=4; + SYSTEM START FETCHES r2; +SET insert_quorum_timeout=6000000; + +-- now retry should be successful +INSERT INTO r1 VALUES (4, '4'); + SYSTEM SYNC REPLICA r2; SELECT 'insert happened'; diff --git a/tests/queries/0_stateless/01515_force_data_skipping_indices.reference b/tests/queries/0_stateless/01515_force_data_skipping_indices.reference index e69de29bb2d..d43017edcc5 100644 --- a/tests/queries/0_stateless/01515_force_data_skipping_indices.reference +++ b/tests/queries/0_stateless/01515_force_data_skipping_indices.reference @@ -0,0 +1 @@ +1 2 3 diff --git a/tests/queries/0_stateless/01515_force_data_skipping_indices.sql b/tests/queries/0_stateless/01515_force_data_skipping_indices.sql index 53d3e5c736f..40b66b0ff7b 100644 --- a/tests/queries/0_stateless/01515_force_data_skipping_indices.sql +++ b/tests/queries/0_stateless/01515_force_data_skipping_indices.sql @@ -10,6 +10,8 @@ CREATE TABLE data_01515 Engine=MergeTree() ORDER BY key; +INSERT INTO data_01515 VALUES (1, 2, 3); + SELECT * FROM data_01515; SELECT * FROM data_01515 SETTINGS force_data_skipping_indices=''; -- { serverError 6 } SELECT * FROM data_01515 SETTINGS force_data_skipping_indices='d1_idx'; -- { serverError 277 } diff --git a/tests/queries/0_stateless/01531_query_log_query_comment.sql b/tests/queries/0_stateless/01531_query_log_query_comment.sql index 2e1faf1b9e4..19d6eac0e15 100644 --- a/tests/queries/0_stateless/01531_query_log_query_comment.sql +++ b/tests/queries/0_stateless/01531_query_log_query_comment.sql @@ -1,20 +1,20 @@ set log_queries=1; set log_queries_min_type='QUERY_FINISH'; -set enable_global_with_statement=1; +set enable_global_with_statement=0; select /* test=01531, enable_global_with_statement=0 */ 2; system flush logs; select count() from system.query_log -where event_time >= now() - interval 5 minute - and query like '%select /* test=01531, enable_global_with_statement=0 */ 2%' +where event_date >= yesterday() + and query like 'select /* test=01531, enable_global_with_statement=0 */ 2%' and current_database = currentDatabase() ; set enable_global_with_statement=1; -select /* test=01531 enable_global_with_statement=1 */ 2; +select /* test=01531, enable_global_with_statement=1 */ 2; system flush logs; select count() from system.query_log -where event_time >= now() - interval 5 minute - and query like '%select /* test=01531 enable_global_with_statement=1 */ 2%' +where event_date >= yesterday() + and query like 'select /* test=01531, enable_global_with_statement=1 */ 2%' and current_database = currentDatabase() ; diff --git a/tests/queries/0_stateless/01533_multiple_nested.reference b/tests/queries/0_stateless/01533_multiple_nested.reference index ba37ce1c32c..138a7cfd7f2 100644 --- a/tests/queries/0_stateless/01533_multiple_nested.reference +++ b/tests/queries/0_stateless/01533_multiple_nested.reference @@ -13,16 +13,16 @@ col3 read files 4 6 -0 899984 7199412 -1 899987 7199877 -2 899990 7200255 -3 899993 7199883 -4 899996 7199798 -5 899999 7200306 -6 900002 7200064 -7 900005 7199429 -8 900008 7200067 -9 899992 7199993 +0 89982 719752 +1 89988 720017 +2 89994 720152 +3 90000 720157 +4 90006 720100
+5 90012 720168 +6 90018 720106 +7 90005 719891 +8 89992 719854 +9 89979 719706 0 [] 0 [0] 1 [0,2] diff --git a/tests/queries/0_stateless/01533_multiple_nested.sql b/tests/queries/0_stateless/01533_multiple_nested.sql index 38c80617334..0ddb0cfbfb4 100644 --- a/tests/queries/0_stateless/01533_multiple_nested.sql +++ b/tests/queries/0_stateless/01533_multiple_nested.sql @@ -36,7 +36,7 @@ SYSTEM FLUSH LOGS; SELECT ProfileEvents.Values[indexOf(ProfileEvents.Names, 'FileOpen')] FROM system.query_log WHERE (type = 'QueryFinish') AND (lower(query) LIKE lower('SELECT col1.a FROM %nested%')) - AND event_time > now() - INTERVAL 10 SECOND AND current_database = currentDatabase(); + AND event_date >= yesterday() AND current_database = currentDatabase(); SYSTEM DROP MARK CACHE; SELECT col3.n2.s FROM nested FORMAT Null; @@ -46,7 +46,7 @@ SYSTEM FLUSH LOGS; SELECT ProfileEvents.Values[indexOf(ProfileEvents.Names, 'FileOpen')] FROM system.query_log WHERE (type = 'QueryFinish') AND (lower(query) LIKE lower('SELECT col3.n2.s FROM %nested%')) - AND event_time > now() - INTERVAL 10 SECOND AND current_database = currentDatabase(); + AND event_date >= yesterday() AND current_database = currentDatabase(); DROP TABLE nested; @@ -59,7 +59,7 @@ ENGINE = MergeTree ORDER BY id SETTINGS min_bytes_for_wide_part = 0; -INSERT INTO nested SELECT number, arrayMap(x -> (x, arrayMap(y -> (toString(y * x), y + x), range(number % 17))), range(number % 19)) FROM numbers(1000000); +INSERT INTO nested SELECT number, arrayMap(x -> (x, arrayMap(y -> (toString(y * x), y + x), range(number % 17))), range(number % 19)) FROM numbers(100000); SELECT id % 10, sum(length(col1)), sumArray(arrayMap(x -> length(x), col1.n.b)) FROM nested GROUP BY id % 10; SELECT arraySum(col1.a), arrayMap(x -> x * x * 2, col1.a) FROM nested ORDER BY id LIMIT 5; diff --git a/tests/queries/0_stateless/01540_verbatim_partition_pruning.reference b/tests/queries/0_stateless/01540_verbatim_partition_pruning.reference index b9338e6a9c4..3310ffe7de4 100644 --- a/tests/queries/0_stateless/01540_verbatim_partition_pruning.reference +++ b/tests/queries/0_stateless/01540_verbatim_partition_pruning.reference @@ -2,3 +2,5 @@ 9 5 8 4 1 2 3 +2020-01-02 1 +2021-01-02 2 diff --git a/tests/queries/0_stateless/01540_verbatim_partition_pruning.sql b/tests/queries/0_stateless/01540_verbatim_partition_pruning.sql index 16ab51d1160..2d227856be4 100644 --- a/tests/queries/0_stateless/01540_verbatim_partition_pruning.sql +++ b/tests/queries/0_stateless/01540_verbatim_partition_pruning.sql @@ -28,3 +28,22 @@ create table xyz(x int, y int, z int) engine MergeTree partition by if(toUInt8(x insert into xyz values (1, 2, 3); select * from xyz where y = 2; drop table if exists xyz; + +-- Test if we obey strict rules when facing NOT conditions +drop table if exists test; +create table test(d Date, k Int64, s String) Engine=MergeTree partition by (toYYYYMM(d),k) order by (d, k); + +insert into test values ('2020-01-01', 1, ''); +insert into test values ('2020-01-02', 1, ''); + +select * from test where d != '2020-01-01'; +drop table test; + +-- Test if single value partition pruning works correctly for Date = String +drop table if exists myTable; +CREATE TABLE myTable (myDay Date, myOrder Int32, someData String) ENGINE = ReplacingMergeTree PARTITION BY floor(toYYYYMMDD(myDay), -1) ORDER BY (myOrder); +INSERT INTO myTable (myDay, myOrder) VALUES ('2021-01-01', 1); +INSERT INTO myTable (myDay, myOrder) VALUES ('2021-01-02', 2); -- This row should be returned +INSERT INTO myTable (myDay,
myOrder) VALUES ('2021-01-03', 3); +SELECT * FROM myTable mt WHERE myDay = '2021-01-02'; +drop table myTable; diff --git a/tests/queries/0_stateless/01544_errorCodeToName.reference b/tests/queries/0_stateless/01544_errorCodeToName.reference index ace588644e1..fefccf984be 100644 --- a/tests/queries/0_stateless/01544_errorCodeToName.reference +++ b/tests/queries/0_stateless/01544_errorCodeToName.reference @@ -1,4 +1,5 @@ + OK UNSUPPORTED_METHOD diff --git a/tests/queries/0_stateless/01544_errorCodeToName.sql b/tests/queries/0_stateless/01544_errorCodeToName.sql index 9e28ed1116c..aa32270f00b 100644 --- a/tests/queries/0_stateless/01544_errorCodeToName.sql +++ b/tests/queries/0_stateless/01544_errorCodeToName.sql @@ -1,4 +1,5 @@ SELECT errorCodeToName(toUInt32(-1)); +SELECT errorCodeToName(-1); SELECT errorCodeToName(600); /* gap in error codes */ SELECT errorCodeToName(0); SELECT errorCodeToName(1); diff --git a/tests/queries/0_stateless/01545_system_errors.reference b/tests/queries/0_stateless/01545_system_errors.reference index d00491fd7e5..0e7f2447090 100644 --- a/tests/queries/0_stateless/01545_system_errors.reference +++ b/tests/queries/0_stateless/01545_system_errors.reference @@ -1 +1,2 @@ -1 +local=1 +remote=1 diff --git a/tests/queries/0_stateless/01545_system_errors.sh b/tests/queries/0_stateless/01545_system_errors.sh index 63af6bb8d43..970fd403866 100755 --- a/tests/queries/0_stateless/01545_system_errors.sh +++ b/tests/queries/0_stateless/01545_system_errors.sh @@ -4,7 +4,14 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) # shellcheck source=../shell_config.sh . "$CURDIR"/../shell_config.sh -prev="$(${CLICKHOUSE_CLIENT} -q "SELECT value FROM system.errors WHERE name = 'FUNCTION_THROW_IF_VALUE_IS_NON_ZERO'")" +# local +prev="$(${CLICKHOUSE_CLIENT} -q "SELECT value FROM system.errors WHERE name = 'FUNCTION_THROW_IF_VALUE_IS_NON_ZERO' AND NOT remote")" $CLICKHOUSE_CLIENT -q 'SELECT throwIf(1)' >& /dev/null -cur="$(${CLICKHOUSE_CLIENT} -q "SELECT value FROM system.errors WHERE name = 'FUNCTION_THROW_IF_VALUE_IS_NON_ZERO'")" -echo $((cur - prev)) +cur="$(${CLICKHOUSE_CLIENT} -q "SELECT value FROM system.errors WHERE name = 'FUNCTION_THROW_IF_VALUE_IS_NON_ZERO' AND NOT remote")" +echo local=$((cur - prev)) + +# remote +prev="$(${CLICKHOUSE_CLIENT} -q "SELECT value FROM system.errors WHERE name = 'FUNCTION_THROW_IF_VALUE_IS_NON_ZERO' AND remote")" +${CLICKHOUSE_CLIENT} -q "SELECT * FROM remote('127.2', system.one) where throwIf(not dummy)" >& /dev/null +cur="$(${CLICKHOUSE_CLIENT} -q "SELECT value FROM system.errors WHERE name = 'FUNCTION_THROW_IF_VALUE_IS_NON_ZERO' AND remote")" +echo remote=$((cur - prev)) diff --git a/tests/queries/0_stateless/01546_log_queries_min_query_duration_ms.reference b/tests/queries/0_stateless/01546_log_queries_min_query_duration_ms.reference index 0463db26710..8947c3c2bc3 100644 --- a/tests/queries/0_stateless/01546_log_queries_min_query_duration_ms.reference +++ b/tests/queries/0_stateless/01546_log_queries_min_query_duration_ms.reference @@ -1,4 +1,4 @@ 0 0 1 -1 +OK diff --git a/tests/queries/0_stateless/01546_log_queries_min_query_duration_ms.sql b/tests/queries/0_stateless/01546_log_queries_min_query_duration_ms.sql index f0f681288cf..20854da0e8a 100644 --- a/tests/queries/0_stateless/01546_log_queries_min_query_duration_ms.sql +++ b/tests/queries/0_stateless/01546_log_queries_min_query_duration_ms.sql @@ -12,19 +12,15 @@ system flush logs; select count() from system.query_log where - query like '%01546_log_queries_min_query_duration_ms-fast%' 
- and query not like '%system.query_log%' + query like 'select \'01546_log_queries_min_query_duration_ms-fast%' and current_database = currentDatabase() - and event_date = today() - and event_time >= now() - interval 1 minute; + and event_date >= yesterday(); select count() from system.query_thread_log where - query like '%01546_log_queries_min_query_duration_ms-fast%' - and query not like '%system.query_thread_log%' + query like 'select \'01546_log_queries_min_query_duration_ms-fast%' and current_database = currentDatabase() - and event_date = today() - and event_time >= now() - interval 1 minute; + and event_date >= yesterday(); -- -- slow -- query logged @@ -37,18 +33,14 @@ system flush logs; select count() from system.query_log where - query like '%01546_log_queries_min_query_duration_ms-slow%' - and query not like '%system.query_log%' + query like 'select \'01546_log_queries_min_query_duration_ms-slow%' and current_database = currentDatabase() - and event_date = today() - and event_time >= now() - interval 1 minute; + and event_date >= yesterday(); -- There are at least two threads involved in a simple query -- (one thread just waits for another, sigh) -select count() == 2 +select if(count() == 2, 'OK', 'Fail: ' || toString(count())) from system.query_thread_log where - query like '%01546_log_queries_min_query_duration_ms-slow%' - and query not like '%system.query_thread_log%' + query like 'select \'01546_log_queries_min_query_duration_ms-slow%' and current_database = currentDatabase() - and event_date = today() - and event_time >= now() - interval 1 minute; + and event_date >= yesterday(); diff --git a/tests/queries/0_stateless/01547_query_log_current_database.sql b/tests/queries/0_stateless/01547_query_log_current_database.sql index c0ad22163ba..5eec8a81ccc 100644 --- a/tests/queries/0_stateless/01547_query_log_current_database.sql +++ b/tests/queries/0_stateless/01547_query_log_current_database.sql @@ -21,17 +21,15 @@ system flush logs; select count() from system.query_log where - query like '%01547_query_log_current_database%' + query like 'select \'01547_query_log_current_database%' and current_database = currentDatabase() - and event_date = today() - and event_time >= now() - interval 1 minute; + and event_date >= yesterday(); -- at least two threads for processing -- (but one just waits for another, sigh) select count() == 2 from system.query_thread_log where - query like '%01547_query_log_current_database%' + query like 'select \'01547_query_log_current_database%' and current_database = currentDatabase() - and event_date = today() - and event_time >= now() - interval 1 minute; + and event_date >= yesterday() diff --git a/tests/queries/0_stateless/01548_query_log_query_execution_ms.reference b/tests/queries/0_stateless/01548_query_log_query_execution_ms.reference index 6ed281c757a..e69de29bb2d 100644 --- a/tests/queries/0_stateless/01548_query_log_query_execution_ms.reference +++ b/tests/queries/0_stateless/01548_query_log_query_execution_ms.reference @@ -1,2 +0,0 @@ -1 -1 diff --git a/tests/queries/0_stateless/01548_query_log_query_execution_ms.sh b/tests/queries/0_stateless/01548_query_log_query_execution_ms.sh new file mode 100755 index 00000000000..c973612c80d --- /dev/null +++ b/tests/queries/0_stateless/01548_query_log_query_execution_ms.sh @@ -0,0 +1,59 @@ +#!/usr/bin/env bash + +CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) +# shellcheck source=../shell_config.sh +. 
"$CUR_DIR"/../shell_config.sh + +function random_str() +{ + local n=$1 && shift + tr -cd '[:lower:]' < /dev/urandom | head -c"$n" +} +function test_query_duration_ms() +{ + local query_id + query_id="01548_query_log_query_execution_ms-$SECONDS-$(random_str 6)" + local query_opts=( + "--log_query_threads=1" + "--log_queries_min_type=QUERY_FINISH" + "--log_queries=1" + "--query_id=$query_id" + "--format=Null" + ) + $CLICKHOUSE_CLIENT "${query_opts[@]}" -q "select sleep(0.4)" || exit 1 + $CLICKHOUSE_CLIENT -q "system flush logs" || exit 1 + + $CLICKHOUSE_CLIENT -q " + select count() + from system.query_log + where + query_id = '$query_id' + and current_database = currentDatabase() + and query_duration_ms between 400 and 800 + and event_date >= yesterday() + and event_time >= now() - interval 1 minute; + " || exit 1 + + $CLICKHOUSE_CLIENT -q " + -- at least two threads for processing + -- (but one just waits for another, sigh) + select count() == 2 + from system.query_thread_log + where + query_id = '$query_id' + and current_database = currentDatabase() + and query_duration_ms between 400 and 800 + and event_date >= yesterday() + and event_time >= now() - interval 1 minute; + " || exit 1 +} + +function main() +{ + # retries, since there is no guarantee that every time query will take ~0.4 second. + local retries=20 i=0 + while [ "$(test_query_duration_ms | xargs)" != '1 1' ] && [[ $i < $retries ]]; do + ((++i)) + done +} +main "$@" diff --git a/tests/queries/0_stateless/01548_query_log_query_execution_ms.sql b/tests/queries/0_stateless/01548_query_log_query_execution_ms.sql deleted file mode 100644 index e80e84646be..00000000000 --- a/tests/queries/0_stateless/01548_query_log_query_execution_ms.sql +++ /dev/null @@ -1,28 +0,0 @@ -set log_query_threads=1; -set log_queries_min_type='QUERY_FINISH'; -set log_queries=1; -select '01548_query_log_query_execution_ms', sleep(0.4) format Null; -set log_queries=0; -set log_query_threads=0; - -system flush logs; - -select count() -from system.query_log -where - query like '%01548_query_log_query_execution_ms%' - and current_database = currentDatabase() - and query_duration_ms between 100 and 800 - and event_date = today() - and event_time >= now() - interval 1 minute; - --- at least two threads for processing --- (but one just waits for another, sigh) -select count() == 2 -from system.query_thread_log -where - query like '%01548_query_log_query_execution_ms%' - and current_database = currentDatabase() - and query_duration_ms between 100 and 800 - and event_date = today() - and event_time >= now() - interval 1 minute; diff --git a/tests/queries/0_stateless/01551_mergetree_read_in_order_spread.reference b/tests/queries/0_stateless/01551_mergetree_read_in_order_spread.reference index becc626c1bb..835e2af269a 100644 --- a/tests/queries/0_stateless/01551_mergetree_read_in_order_spread.reference +++ b/tests/queries/0_stateless/01551_mergetree_read_in_order_spread.reference @@ -13,16 +13,16 @@ ExpressionTransform (MergingSorted) (Expression) ExpressionTransform - (ReadFromStorage) + (ReadFromMergeTree) MergeTree 0 → 1 (MergingSorted) MergingSortedTransform 2 → 1 (Expression) ExpressionTransform × 2 - (ReadFromStorage) + (ReadFromMergeTree) MergeTree × 2 0 → 1 (MergingSorted) (Expression) ExpressionTransform - (ReadFromStorage) + (ReadFromMergeTree) MergeTree 0 → 1 diff --git a/tests/queries/0_stateless/01560_mann_whitney.sql b/tests/queries/0_stateless/01560_mann_whitney.sql index 6e1ac55349d..e3a9b4ecd03 100644 --- 
a/tests/queries/0_stateless/01560_mann_whitney.sql +++ b/tests/queries/0_stateless/01560_mann_whitney.sql @@ -6,4 +6,5 @@ SELECT '223.0', '0.5426959774289482'; WITH mannWhitneyUTest(left, right) AS pair SELECT roundBankers(pair.1, 16) as t_stat, roundBankers(pair.2, 16) as p_value from mann_whitney_test; WITH mannWhitneyUTest('two-sided', 1)(left, right) as pair SELECT roundBankers(pair.1, 16) as t_stat, roundBankers(pair.2, 16) as p_value from mann_whitney_test; WITH mannWhitneyUTest('two-sided')(left, right) as pair SELECT roundBankers(pair.1, 16) as t_stat, roundBankers(pair.2, 16) as p_value from mann_whitney_test; +WITH mannWhitneyUTest('two-sided')(1, right) AS pair SELECT roundBankers(pair.1, 16) AS t_stat, roundBankers(pair.2, 16) AS p_value FROM mann_whitney_test; --{serverError 36} DROP TABLE IF EXISTS mann_whitney_test; diff --git a/tests/queries/0_stateless/01560_optimize_on_insert.reference b/tests/queries/0_stateless/01560_optimize_on_insert.reference index 7ace2043be0..477f48be7a9 100644 --- a/tests/queries/0_stateless/01560_optimize_on_insert.reference +++ b/tests/queries/0_stateless/01560_optimize_on_insert.reference @@ -11,3 +11,4 @@ Summing Merge Tree Aggregating Merge Tree 1 5 2020-01-01 00:00:00 2 5 2020-01-02 00:00:00 +Check creating empty parts diff --git a/tests/queries/0_stateless/01560_optimize_on_insert.sql b/tests/queries/0_stateless/01560_optimize_on_insert.sql index 9f6dac686bb..f64f4c75cfe 100644 --- a/tests/queries/0_stateless/01560_optimize_on_insert.sql +++ b/tests/queries/0_stateless/01560_optimize_on_insert.sql @@ -33,3 +33,10 @@ INSERT INTO aggregating_merge_tree VALUES (1, 1, '2020-01-01'), (2, 1, '2020-01- SELECT * FROM aggregating_merge_tree ORDER BY key; DROP TABLE aggregating_merge_tree; +SELECT 'Check creating empty parts'; +DROP TABLE IF EXISTS empty; +CREATE TABLE empty (key UInt32, val UInt32, date Datetime) ENGINE=SummingMergeTree(val) PARTITION BY date ORDER BY key; +INSERT INTO empty VALUES (1, 1, '2020-01-01'), (1, 1, '2020-01-01'), (1, -2, '2020-01-01'); +SELECT * FROM empty ORDER BY key; +SELECT table, partition, active FROM system.parts where table = 'empty' and active = 1; +DROP TABLE empty; diff --git a/tests/queries/0_stateless/01560_optimize_on_insert_zookeeper.reference b/tests/queries/0_stateless/01560_optimize_on_insert_zookeeper.reference new file mode 100644 index 00000000000..e89c6201fb7 --- /dev/null +++ b/tests/queries/0_stateless/01560_optimize_on_insert_zookeeper.reference @@ -0,0 +1 @@ +Check creating empty parts diff --git a/tests/queries/0_stateless/01560_optimize_on_insert_zookeeper.sql b/tests/queries/0_stateless/01560_optimize_on_insert_zookeeper.sql new file mode 100644 index 00000000000..a98818b2195 --- /dev/null +++ b/tests/queries/0_stateless/01560_optimize_on_insert_zookeeper.sql @@ -0,0 +1,36 @@ +DROP TABLE IF EXISTS empty1; +DROP TABLE IF EXISTS empty2; + +SELECT 'Check creating empty parts'; + +CREATE TABLE empty1 (key UInt32, val UInt32, date Datetime) +ENGINE=ReplicatedSummingMergeTree('/clickhouse/01560_optimize_on_insert', '1', val) +PARTITION BY date ORDER BY key; + +CREATE TABLE empty2 (key UInt32, val UInt32, date Datetime) +ENGINE=ReplicatedSummingMergeTree('/clickhouse/01560_optimize_on_insert', '2', val) +PARTITION BY date ORDER BY key; + +INSERT INTO empty2 VALUES (1, 1, '2020-01-01'), (1, 1, '2020-01-01'), (1, -2, '2020-01-01'); + +SYSTEM SYNC REPLICA empty1; + +SELECT * FROM empty1 ORDER BY key; +SELECT * FROM empty2 ORDER BY key; + +SELECT table, partition, active FROM system.parts where table = 
'empty1' and database=currentDatabase() and active = 1; +SELECT table, partition, active FROM system.parts where table = 'empty2' and database=currentDatabase() and active = 1; + +DETACH table empty1; +DETACH table empty2; +ATTACH table empty1; +ATTACH table empty2; + +SELECT * FROM empty1 ORDER BY key; +SELECT * FROM empty2 ORDER BY key; + +SELECT table, partition, active FROM system.parts where table = 'empty1' and database=currentDatabase() and active = 1; +SELECT table, partition, active FROM system.parts where table = 'empty2' and database=currentDatabase() and active = 1; + +DROP TABLE IF EXISTS empty1; +DROP TABLE IF EXISTS empty2; diff --git a/tests/queries/0_stateless/01561_Date_and_DateTime64_comparision.sql b/tests/queries/0_stateless/01561_Date_and_DateTime64_comparision.sql index 7e75d871e07..a61bcff4db7 100644 --- a/tests/queries/0_stateless/01561_Date_and_DateTime64_comparision.sql +++ b/tests/queries/0_stateless/01561_Date_and_DateTime64_comparision.sql @@ -6,7 +6,7 @@ SELECT dt64 < d, toDate(dt64) < d, dt64 < toDateTime64(d, 1, 'UTC'), - + '<=', dt64 <= d, toDate(dt64) <= d, @@ -16,7 +16,7 @@ SELECT dt64 = d, toDate(dt64) = d, dt64 = toDateTime64(d, 1, 'UTC'), - + '>=', dt64 >= d, toDate(dt64) >= d, @@ -31,7 +31,7 @@ SELECT dt64 != d, toDate(dt64) != d, dt64 != toDateTime64(d, 1, 'UTC') -FROM +FROM ( WITH toDateTime('2019-09-16 19:20:11') as val SELECT diff --git a/tests/queries/0_stateless/01562_optimize_monotonous_functions_in_order_by.reference b/tests/queries/0_stateless/01562_optimize_monotonous_functions_in_order_by.reference index a1a1814a581..0eb7e06f724 100644 --- a/tests/queries/0_stateless/01562_optimize_monotonous_functions_in_order_by.reference +++ b/tests/queries/0_stateless/01562_optimize_monotonous_functions_in_order_by.reference @@ -11,7 +11,7 @@ Expression (Projection) PartialSorting (Sort each block for ORDER BY) Expression (Before ORDER BY) SettingQuotaAndLimits (Set limits and quota after reading from storage) - ReadFromStorage (MergeTree) + ReadFromMergeTree SELECT timestamp, key @@ -23,7 +23,7 @@ Expression (Projection) FinishSorting Expression (Before ORDER BY) SettingQuotaAndLimits (Set limits and quota after reading from storage) - ReadFromStorage (MergeTree with order) + ReadFromMergeTree SELECT timestamp, key @@ -37,7 +37,7 @@ Expression (Projection) FinishSorting Expression (Before ORDER BY) SettingQuotaAndLimits (Set limits and quota after reading from storage) - ReadFromStorage (MergeTree with order) + ReadFromMergeTree SELECT timestamp, key diff --git a/tests/queries/0_stateless/01566_negate_formatting.reference b/tests/queries/0_stateless/01566_negate_formatting.reference new file mode 100644 index 00000000000..69d79cf929a --- /dev/null +++ b/tests/queries/0_stateless/01566_negate_formatting.reference @@ -0,0 +1,34 @@ +-- { echo } +explain syntax select negate(1), negate(-1), - -1, -(-1), (-1) in (-1); +SELECT + -1, + 1, + 1, + 1, + -1 IN (-1) +explain syntax select negate(1.), negate(-1.), - -1., -(-1.), (-1.) in (-1.); +SELECT + -1., + 1., + 1., + 1., + -1. IN (-1.) +explain syntax select negate(-9223372036854775808), -(-9223372036854775808), - -9223372036854775808; +SELECT + -9223372036854775808, + -9223372036854775808, + -9223372036854775808 +explain syntax select negate(0), negate(-0), - -0, -(-0), (-0) in (-0); +SELECT + 0, + 0, + 0, + 0, + 0 IN (0) +explain syntax select negate(0.), negate(-0.), - -0., -(-0.), (-0.) in (-0.); +SELECT + -0., + 0., + 0., + 0., + -0. IN (-0.) 
diff --git a/tests/queries/0_stateless/01566_negate_formatting.sql b/tests/queries/0_stateless/01566_negate_formatting.sql new file mode 100644 index 00000000000..65e983fbdd1 --- /dev/null +++ b/tests/queries/0_stateless/01566_negate_formatting.sql @@ -0,0 +1,6 @@ +-- { echo } +explain syntax select negate(1), negate(-1), - -1, -(-1), (-1) in (-1); +explain syntax select negate(1.), negate(-1.), - -1., -(-1.), (-1.) in (-1.); +explain syntax select negate(-9223372036854775808), -(-9223372036854775808), - -9223372036854775808; +explain syntax select negate(0), negate(-0), - -0, -(-0), (-0) in (-0); +explain syntax select negate(0.), negate(-0.), - -0., -(-0.), (-0.) in (-0.); diff --git a/tests/queries/0_stateless/01567_system_processes_current_database.reference b/tests/queries/0_stateless/01567_system_processes_current_database.reference new file mode 100644 index 00000000000..d00491fd7e5 --- /dev/null +++ b/tests/queries/0_stateless/01567_system_processes_current_database.reference @@ -0,0 +1 @@ +1 diff --git a/tests/queries/0_stateless/01567_system_processes_current_database.sql b/tests/queries/0_stateless/01567_system_processes_current_database.sql new file mode 100644 index 00000000000..406120d742d --- /dev/null +++ b/tests/queries/0_stateless/01567_system_processes_current_database.sql @@ -0,0 +1 @@ +select count(*) from system.processes where current_database = currentDatabase(); diff --git a/tests/queries/0_stateless/01568_window_functions_distributed.reference b/tests/queries/0_stateless/01568_window_functions_distributed.reference new file mode 100644 index 00000000000..29d3e5ea885 --- /dev/null +++ b/tests/queries/0_stateless/01568_window_functions_distributed.reference @@ -0,0 +1,52 @@ +-- { echo } +set allow_experimental_window_functions = 1; +select row_number() over (order by dummy) from (select * from remote('127.0.0.{1,2}', system, one)); +1 +2 +select row_number() over (order by dummy) from remote('127.0.0.{1,2}', system, one); +1 +2 +select max(identity(dummy + 1)) over () from remote('127.0.0.{1,2}', system, one); +1 +1 +drop table if exists t_01568; +create table t_01568 engine Log as select intDiv(number, 3) p, number from numbers(9); +select sum(number) over w, max(number) over w from t_01568 window w as (partition by p); +3 2 +3 2 +3 2 +12 5 +12 5 +12 5 +21 8 +21 8 +21 8 +select sum(number) over w, max(number) over w from remote('127.0.0.{1,2}', '', t_01568) window w as (partition by p); +6 2 +6 2 +6 2 +6 2 +6 2 +6 2 +24 5 +24 5 +24 5 +24 5 +24 5 +24 5 +42 8 +42 8 +42 8 +42 8 +42 8 +42 8 +select distinct sum(number) over w, max(number) over w from remote('127.0.0.{1,2}', '', t_01568) window w as (partition by p); +6 2 +24 5 +42 8 +-- window functions + aggregation w/shards +select groupArray(groupArray(number)) over (rows unbounded preceding) from remote('127.0.0.{1,2}', '', t_01568) group by mod(number, 3); +[[0,3,6,0,3,6]] +[[0,3,6,0,3,6],[1,4,7,1,4,7]] +[[0,3,6,0,3,6],[1,4,7,1,4,7],[2,5,8,2,5,8]] +drop table t_01568; diff --git a/tests/queries/0_stateless/01568_window_functions_distributed.sql b/tests/queries/0_stateless/01568_window_functions_distributed.sql new file mode 100644 index 00000000000..7d9d1ea5c92 --- /dev/null +++ b/tests/queries/0_stateless/01568_window_functions_distributed.sql @@ -0,0 +1,23 @@ +-- { echo } +set allow_experimental_window_functions = 1; + +select row_number() over (order by dummy) from (select * from remote('127.0.0.{1,2}', system, one)); + +select row_number() over (order by dummy) from remote('127.0.0.{1,2}', system, one); + 
+select max(identity(dummy + 1)) over () from remote('127.0.0.{1,2}', system, one); + +drop table if exists t_01568; + +create table t_01568 engine Log as select intDiv(number, 3) p, number from numbers(9); + +select sum(number) over w, max(number) over w from t_01568 window w as (partition by p); + +select sum(number) over w, max(number) over w from remote('127.0.0.{1,2}', '', t_01568) window w as (partition by p); + +select distinct sum(number) over w, max(number) over w from remote('127.0.0.{1,2}', '', t_01568) window w as (partition by p); + +-- window functions + aggregation w/shards +select groupArray(groupArray(number)) over (rows unbounded preceding) from remote('127.0.0.{1,2}', '', t_01568) group by mod(number, 3); + +drop table t_01568; diff --git a/tests/queries/0_stateless/01576_alias_column_rewrite.reference b/tests/queries/0_stateless/01576_alias_column_rewrite.reference index 334ebc7eb1f..c5679544e1d 100644 --- a/tests/queries/0_stateless/01576_alias_column_rewrite.reference +++ b/tests/queries/0_stateless/01576_alias_column_rewrite.reference @@ -28,47 +28,47 @@ Expression (Projection) PartialSorting (Sort each block for ORDER BY) Expression ((Before ORDER BY + Add table aliases)) SettingQuotaAndLimits (Set limits and quota after reading from storage) - ReadFromStorage (MergeTree) + ReadFromMergeTree Expression (Projection) Limit (preliminary LIMIT) FinishSorting Expression ((Before ORDER BY + Add table aliases)) SettingQuotaAndLimits (Set limits and quota after reading from storage) Union - ReadFromStorage (MergeTree with order) - ReadFromStorage (MergeTree with order) - ReadFromStorage (MergeTree with order) + ReadFromMergeTree + ReadFromMergeTree + ReadFromMergeTree Expression (Projection) Limit (preliminary LIMIT) FinishSorting Expression (Before ORDER BY) SettingQuotaAndLimits (Set limits and quota after reading from storage) Union - ReadFromStorage (MergeTree with order) - ReadFromStorage (MergeTree with order) - ReadFromStorage (MergeTree with order) + ReadFromMergeTree + ReadFromMergeTree + ReadFromMergeTree optimize_aggregation_in_order Expression ((Projection + Before ORDER BY)) Aggregating Expression ((Before GROUP BY + Add table aliases)) SettingQuotaAndLimits (Set limits and quota after reading from storage) - ReadFromStorage (MergeTree) + ReadFromMergeTree Expression ((Projection + Before ORDER BY)) Aggregating Expression ((Before GROUP BY + Add table aliases)) SettingQuotaAndLimits (Set limits and quota after reading from storage) Union - ReadFromStorage (MergeTree with order) - ReadFromStorage (MergeTree with order) - ReadFromStorage (MergeTree with order) + ReadFromMergeTree + ReadFromMergeTree + ReadFromMergeTree Expression ((Projection + Before ORDER BY)) Aggregating Expression (Before GROUP BY) SettingQuotaAndLimits (Set limits and quota after reading from storage) Union - ReadFromStorage (MergeTree with order) - ReadFromStorage (MergeTree with order) - ReadFromStorage (MergeTree with order) + ReadFromMergeTree + ReadFromMergeTree + ReadFromMergeTree second-index 1 1 diff --git a/tests/queries/0_stateless/01586_replicated_mutations_empty_partition.reference b/tests/queries/0_stateless/01586_replicated_mutations_empty_partition.reference index f79be33624b..2f204867c41 100644 --- a/tests/queries/0_stateless/01586_replicated_mutations_empty_partition.reference +++ b/tests/queries/0_stateless/01586_replicated_mutations_empty_partition.reference @@ -2,4 +2,4 @@ 10 10 24 -CREATE TABLE default.replicated_mutations_empty_partitions\n(\n `key` UInt64,\n `value` 
UInt64\n)\nENGINE = ReplicatedMergeTree(\'/clickhouse/test/01586_replicated_mutations_empty_partitions\', \'1\')\nPARTITION BY key\nORDER BY key\nSETTINGS index_granularity = 8192 +CREATE TABLE default.replicated_mutations_empty_partitions\n(\n `key` UInt64,\n `value` UInt64\n)\nENGINE = ReplicatedMergeTree(\'/clickhouse/test/default/01586_replicated_mutations_empty_partitions/{shard}\', \'{replica}\')\nPARTITION BY key\nORDER BY key\nSETTINGS index_granularity = 8192 diff --git a/tests/queries/0_stateless/01586_replicated_mutations_empty_partition.sql b/tests/queries/0_stateless/01586_replicated_mutations_empty_partition.sql index 659cc060f32..73245fe49ec 100644 --- a/tests/queries/0_stateless/01586_replicated_mutations_empty_partition.sql +++ b/tests/queries/0_stateless/01586_replicated_mutations_empty_partition.sql @@ -5,7 +5,7 @@ CREATE TABLE replicated_mutations_empty_partitions key UInt64, value String ) -ENGINE = ReplicatedMergeTree('/clickhouse/test/01586_replicated_mutations_empty_partitions', '1') +ENGINE = ReplicatedMergeTree('/clickhouse/test/'||currentDatabase()||'/01586_replicated_mutations_empty_partitions/{shard}', '{replica}') ORDER BY key PARTITION by key; @@ -13,7 +13,7 @@ INSERT INTO replicated_mutations_empty_partitions SELECT number, toString(number SELECT count(distinct value) FROM replicated_mutations_empty_partitions; -SELECT count() FROM system.zookeeper WHERE path = '/clickhouse/test/01586_replicated_mutations_empty_partitions/block_numbers'; +SELECT count() FROM system.zookeeper WHERE path = '/clickhouse/test/'||currentDatabase()||'/01586_replicated_mutations_empty_partitions/s1/block_numbers'; ALTER TABLE replicated_mutations_empty_partitions DROP PARTITION '3'; ALTER TABLE replicated_mutations_empty_partitions DROP PARTITION '4'; @@ -21,7 +21,7 @@ ALTER TABLE replicated_mutations_empty_partitions DROP PARTITION '5'; ALTER TABLE replicated_mutations_empty_partitions DROP PARTITION '9'; -- still ten records -SELECT count() FROM system.zookeeper WHERE path = '/clickhouse/test/01586_replicated_mutations_empty_partitions/block_numbers'; +SELECT count() FROM system.zookeeper WHERE path = '/clickhouse/test/'||currentDatabase()||'/01586_replicated_mutations_empty_partitions/s1/block_numbers'; ALTER TABLE replicated_mutations_empty_partitions MODIFY COLUMN value UInt64 SETTINGS replication_alter_partitions_sync=2; diff --git a/tests/queries/0_stateless/01591_window_functions.reference b/tests/queries/0_stateless/01591_window_functions.reference index d2543f0db75..2b54328688d 100644 --- a/tests/queries/0_stateless/01591_window_functions.reference +++ b/tests/queries/0_stateless/01591_window_functions.reference @@ -771,6 +771,28 @@ order by x; 125 124 127 4 126 125 127 3 127 126 127 2 +-- We need large offsets to trigger overflow to positive direction, or +-- else the frame end runs into partition end w/o overflow and doesn't move +-- after that. The frame from this query is equivalent to the entire partition. 
+select x, min(x) over w, max(x) over w, count(x) over w +from ( + select toUInt8(if(mod(number, 2), + toInt64(255 - intDiv(number, 2)), + toInt64(intDiv(number, 2)))) x + from numbers(10) +) +window w as (order by x range between 255 preceding and 255 following) +order by x; +0 0 255 10 +1 0 255 10 +2 0 255 10 +3 0 255 10 +4 0 255 10 +251 0 255 10 +252 0 255 10 +253 0 255 10 +254 0 255 10 +255 0 255 10 -- RANGE OFFSET ORDER BY DESC select x, min(x) over w, max(x) over w, count(x) over w from ( select toUInt8(number) x from numbers(11)) t @@ -920,6 +942,36 @@ FROM numbers(2) ; 1 0 1 1 +-- optimize_read_in_order conflicts with sorting for window functions, check that +-- it is disabled. +drop table if exists window_mt; +create table window_mt engine MergeTree order by number + as select number, mod(number, 3) p from numbers(100); +select number, count(*) over (partition by p) + from window_mt order by number limit 10 settings optimize_read_in_order = 0; +0 34 +1 33 +2 33 +3 34 +4 33 +5 33 +6 34 +7 33 +8 33 +9 34 +select number, count(*) over (partition by p) + from window_mt order by number limit 10 settings optimize_read_in_order = 1; +0 34 +1 33 +2 33 +3 34 +4 33 +5 33 +6 34 +7 33 +8 33 +9 34 +drop table window_mt; -- some true window functions -- rank and friends select number, p, o, count(*) over w, @@ -974,6 +1026,34 @@ from numbers(5); 1 3 2 4 3 \N +-- variants of lag/lead that respect the frame +select number, p, pp, + lagInFrame(number) over w as lag1, + lagInFrame(number, number - pp) over w as lag2, + lagInFrame(number, number - pp, number * 11) over w as lag, + leadInFrame(number, number - pp, number * 11) over w as lead +from (select number, intDiv(number, 5) p, p * 5 pp from numbers(16)) +window w as (partition by p order by number + rows between unbounded preceding and unbounded following) +order by number +settings max_block_size = 3; +; +0 0 0 0 0 0 0 +1 0 0 0 0 0 2 +2 0 0 1 0 0 4 +3 0 0 2 0 0 33 +4 0 0 3 0 0 44 +5 1 5 0 5 5 5 +6 1 5 5 5 5 7 +7 1 5 6 5 5 9 +8 1 5 7 5 5 88 +9 1 5 8 5 5 99 +10 2 10 0 10 10 10 +11 2 10 10 10 10 12 +12 2 10 11 10 10 14 +13 2 10 12 10 10 143 +14 2 10 13 10 10 154 +15 3 15 0 15 15 15 -- case-insensitive SQL-standard synonyms for any and anyLast select number, @@ -993,3 +1073,44 @@ order by number 7 6 8 8 7 9 9 8 9 +-- In this case, we had a problem with PartialSortingTransform returning zero-row +-- chunks for input chunks w/o columns. +select count() over () from numbers(4) where number < 2; +2 +2 +-- floating point RANGE frame +select + count(*) over (order by toFloat32(number) range 5. preceding), + count(*) over (order by toFloat64(number) range 5. preceding), + count(*) over (order by toFloat32(number) range between current row and 5. following), + count(*) over (order by toFloat64(number) range between current row and 5. 
following) +from numbers(7) +; +1 1 6 6 +2 2 6 6 +3 3 5 5 +4 4 4 4 +5 5 3 3 +6 6 2 2 +6 6 1 1 +-- negative offsets should not be allowed +select count() over (order by toInt64(number) range between -1 preceding and unbounded following) from numbers(1); -- { serverError 36 } +select count() over (order by toInt64(number) range between -1 following and unbounded following) from numbers(1); -- { serverError 36 } +select count() over (order by toInt64(number) range between unbounded preceding and -1 preceding) from numbers(1); -- { serverError 36 } +select count() over (order by toInt64(number) range between unbounded preceding and -1 following) from numbers(1); -- { serverError 36 } +-- a test with aggregate function that allocates memory in arena +select sum(a[length(a)]) +from ( + select groupArray(number) over (partition by modulo(number, 11) + order by modulo(number, 1111), number) a + from numbers_mt(10000) +) settings max_block_size = 7; +49995000 +-- -INT_MIN row offset that can lead to problems with negation, found when fuzzing +-- under UBSan. Should be limited to at most INT_MAX. +select count() over (rows between 2147483648 preceding and 2147493648 following) from numbers(2); -- { serverError 36 } +-- Somehow in this case WindowTransform gets empty input chunks not marked as +-- input end, and then two (!) empty input chunks marked as input end. Whatever. +select count() over () from (select 1 a) l inner join (select 2 a) r using a; +-- This case works as expected, one empty input chunk marked as input end. +select count() over () where null; diff --git a/tests/queries/0_stateless/01591_window_functions.sql b/tests/queries/0_stateless/01591_window_functions.sql index 03bd8371e23..9c6f1a7401c 100644 --- a/tests/queries/0_stateless/01591_window_functions.sql +++ b/tests/queries/0_stateless/01591_window_functions.sql @@ -242,6 +242,19 @@ from ( window w as (order by x range between 1 preceding and 2 following) order by x; +-- We need large offsets to trigger overflow to positive direction, or +-- else the frame end runs into partition end w/o overflow and doesn't move +-- after that. The frame from this query is equivalent to the entire partition. +select x, min(x) over w, max(x) over w, count(x) over w +from ( + select toUInt8(if(mod(number, 2), + toInt64(255 - intDiv(number, 2)), + toInt64(intDiv(number, 2)))) x + from numbers(10) +) +window w as (order by x range between 255 preceding and 255 following) +order by x; + -- RANGE OFFSET ORDER BY DESC select x, min(x) over w, max(x) over w, count(x) over w from ( select toUInt8(number) x from numbers(11)) t @@ -316,6 +329,20 @@ SELECT FROM numbers(2) ; +-- optimize_read_in_order conflicts with sorting for window functions, check that +-- it is disabled. 
+drop table if exists window_mt; +create table window_mt engine MergeTree order by number + as select number, mod(number, 3) p from numbers(100); + +select number, count(*) over (partition by p) + from window_mt order by number limit 10 settings optimize_read_in_order = 0; + +select number, count(*) over (partition by p) + from window_mt order by number limit 10 settings optimize_read_in_order = 1; + +drop table window_mt; + -- some true window functions -- rank and friends select number, p, o, count(*) over w, @@ -336,6 +363,19 @@ select over (order by number rows between 1 following and 1 following) from numbers(5); +-- variants of lag/lead that respect the frame +select number, p, pp, + lagInFrame(number) over w as lag1, + lagInFrame(number, number - pp) over w as lag2, + lagInFrame(number, number - pp, number * 11) over w as lag, + leadInFrame(number, number - pp, number * 11) over w as lead +from (select number, intDiv(number, 5) p, p * 5 pp from numbers(16)) +window w as (partition by p order by number + rows between unbounded preceding and unbounded following) +order by number +settings max_block_size = 3; +; + -- case-insensitive SQL-standard synonyms for any and anyLast select number, @@ -345,3 +385,40 @@ from numbers(10) window w as (order by number range between 1 preceding and 1 following) order by number ; + +-- In this case, we had a problem with PartialSortingTransform returning zero-row +-- chunks for input chunks w/o columns. +select count() over () from numbers(4) where number < 2; + +-- floating point RANGE frame +select + count(*) over (order by toFloat32(number) range 5. preceding), + count(*) over (order by toFloat64(number) range 5. preceding), + count(*) over (order by toFloat32(number) range between current row and 5. following), + count(*) over (order by toFloat64(number) range between current row and 5. following) +from numbers(7) +; + +-- negative offsets should not be allowed +select count() over (order by toInt64(number) range between -1 preceding and unbounded following) from numbers(1); -- { serverError 36 } +select count() over (order by toInt64(number) range between -1 following and unbounded following) from numbers(1); -- { serverError 36 } +select count() over (order by toInt64(number) range between unbounded preceding and -1 preceding) from numbers(1); -- { serverError 36 } +select count() over (order by toInt64(number) range between unbounded preceding and -1 following) from numbers(1); -- { serverError 36 } + +-- a test with aggregate function that allocates memory in arena +select sum(a[length(a)]) +from ( + select groupArray(number) over (partition by modulo(number, 11) + order by modulo(number, 1111), number) a + from numbers_mt(10000) +) settings max_block_size = 7; + +-- -INT_MIN row offset that can lead to problems with negation, found when fuzzing +-- under UBSan. Should be limited to at most INT_MAX. +select count() over (rows between 2147483648 preceding and 2147493648 following) from numbers(2); -- { serverError 36 } + +-- Somehow in this case WindowTransform gets empty input chunks not marked as +-- input end, and then two (!) empty input chunks marked as input end. Whatever. +select count() over () from (select 1 a) l inner join (select 2 a) r using a; +-- This case works as expected, one empty input chunk marked as input end. 
+select count() over () where null; diff --git a/tests/queries/0_stateless/01593_concurrent_alter_mutations_kill.reference b/tests/queries/0_stateless/01593_concurrent_alter_mutations_kill.reference index 94e15c09768..4b07f533f5a 100644 --- a/tests/queries/0_stateless/01593_concurrent_alter_mutations_kill.reference +++ b/tests/queries/0_stateless/01593_concurrent_alter_mutations_kill.reference @@ -1,2 +1,2 @@ -CREATE TABLE default.concurrent_mutate_kill\n(\n `key` UInt64,\n `value` Int64\n)\nENGINE = ReplicatedMergeTree(\'/clickhouse/tables/test_01593/concurrent_mutate_kill\', \'1\')\nPARTITION BY key % 100\nORDER BY key\nSETTINGS max_replicated_mutations_in_queue = 1000, number_of_free_entries_in_pool_to_execute_mutation = 0, max_replicated_merges_in_queue = 1000, index_granularity = 8192 +CREATE TABLE default.concurrent_mutate_kill\n(\n `key` UInt64,\n `value` Int64\n)\nENGINE = ReplicatedMergeTree(\'/clickhouse/tables/01593_concurrent_alter_mutations_kill_default/concurrent_mutate_kill\', \'1\')\nPARTITION BY key % 100\nORDER BY key\nSETTINGS max_replicated_mutations_in_queue = 1000, number_of_free_entries_in_pool_to_execute_mutation = 0, max_replicated_merges_in_queue = 1000, index_granularity = 8192 499999500000 diff --git a/tests/queries/0_stateless/01593_concurrent_alter_mutations_kill.sh b/tests/queries/0_stateless/01593_concurrent_alter_mutations_kill.sh index 6ae103bdf6e..d40406222c2 100755 --- a/tests/queries/0_stateless/01593_concurrent_alter_mutations_kill.sh +++ b/tests/queries/0_stateless/01593_concurrent_alter_mutations_kill.sh @@ -6,7 +6,7 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) $CLICKHOUSE_CLIENT --query "DROP TABLE IF EXISTS concurrent_mutate_kill" -$CLICKHOUSE_CLIENT --query "CREATE TABLE concurrent_mutate_kill (key UInt64, value String) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test_01593/concurrent_mutate_kill', '1') ORDER BY key PARTITION BY key % 100 SETTINGS max_replicated_mutations_in_queue=1000, number_of_free_entries_in_pool_to_execute_mutation=0,max_replicated_merges_in_queue=1000" +$CLICKHOUSE_CLIENT --query "CREATE TABLE concurrent_mutate_kill (key UInt64, value String) ENGINE = ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/concurrent_mutate_kill', '1') ORDER BY key PARTITION BY key % 100 SETTINGS max_replicated_mutations_in_queue=1000, number_of_free_entries_in_pool_to_execute_mutation=0,max_replicated_merges_in_queue=1000" $CLICKHOUSE_CLIENT --query "INSERT INTO concurrent_mutate_kill SELECT number, toString(number) FROM numbers(1000000)" diff --git a/tests/queries/0_stateless/01593_concurrent_alter_mutations_kill_many_replicas.reference b/tests/queries/0_stateless/01593_concurrent_alter_mutations_kill_many_replicas.reference deleted file mode 100644 index cb1eace24a2..00000000000 --- a/tests/queries/0_stateless/01593_concurrent_alter_mutations_kill_many_replicas.reference +++ /dev/null @@ -1,16 +0,0 @@ -499999500000 -499999500000 -499999500000 -499999500000 -499999500000 -Metadata version on replica 1 equal with first replica, OK -CREATE TABLE default.concurrent_kill_1\n(\n `key` UInt64,\n `value` Int64\n)\nENGINE = ReplicatedMergeTree(\'/clickhouse/tables/test_01593_concurrent_kill\', \'1\')\nORDER BY key\nSETTINGS max_replicated_mutations_in_queue = 1000, number_of_free_entries_in_pool_to_execute_mutation = 0, max_replicated_merges_in_queue = 1000, index_granularity = 8192 -Metadata version on replica 2 equal with first replica, OK -CREATE TABLE default.concurrent_kill_2\n(\n `key` UInt64,\n `value` 
Int64\n)\nENGINE = ReplicatedMergeTree(\'/clickhouse/tables/test_01593_concurrent_kill\', \'2\')\nORDER BY key\nSETTINGS max_replicated_mutations_in_queue = 1000, number_of_free_entries_in_pool_to_execute_mutation = 0, max_replicated_merges_in_queue = 1000, index_granularity = 8192 -Metadata version on replica 3 equal with first replica, OK -CREATE TABLE default.concurrent_kill_3\n(\n `key` UInt64,\n `value` Int64\n)\nENGINE = ReplicatedMergeTree(\'/clickhouse/tables/test_01593_concurrent_kill\', \'3\')\nORDER BY key\nSETTINGS max_replicated_mutations_in_queue = 1000, number_of_free_entries_in_pool_to_execute_mutation = 0, max_replicated_merges_in_queue = 1000, index_granularity = 8192 -Metadata version on replica 4 equal with first replica, OK -CREATE TABLE default.concurrent_kill_4\n(\n `key` UInt64,\n `value` Int64\n)\nENGINE = ReplicatedMergeTree(\'/clickhouse/tables/test_01593_concurrent_kill\', \'4\')\nORDER BY key\nSETTINGS max_replicated_mutations_in_queue = 1000, number_of_free_entries_in_pool_to_execute_mutation = 0, max_replicated_merges_in_queue = 1000, index_granularity = 8192 -Metadata version on replica 5 equal with first replica, OK -CREATE TABLE default.concurrent_kill_5\n(\n `key` UInt64,\n `value` Int64\n)\nENGINE = ReplicatedMergeTree(\'/clickhouse/tables/test_01593_concurrent_kill\', \'5\')\nORDER BY key\nSETTINGS max_replicated_mutations_in_queue = 1000, number_of_free_entries_in_pool_to_execute_mutation = 0, max_replicated_merges_in_queue = 1000, index_granularity = 8192 -499999500000 diff --git a/tests/queries/0_stateless/01593_concurrent_alter_mutations_kill_many_replicas_long.reference b/tests/queries/0_stateless/01593_concurrent_alter_mutations_kill_many_replicas_long.reference new file mode 100644 index 00000000000..f7c65e36be4 --- /dev/null +++ b/tests/queries/0_stateless/01593_concurrent_alter_mutations_kill_many_replicas_long.reference @@ -0,0 +1,16 @@ +499999500000 +499999500000 +499999500000 +499999500000 +499999500000 +Metadata version on replica 1 equal with first replica, OK +CREATE TABLE default.concurrent_kill_1\n(\n `key` UInt64,\n `value` Int64\n)\nENGINE = ReplicatedMergeTree(\'/clickhouse/tables/01593_concurrent_alter_mutations_kill_many_replicas_long_default/{shard}\', \'{replica}1\')\nORDER BY key\nSETTINGS max_replicated_mutations_in_queue = 1000, number_of_free_entries_in_pool_to_execute_mutation = 0, max_replicated_merges_in_queue = 1000, index_granularity = 8192 +Metadata version on replica 2 equal with first replica, OK +CREATE TABLE default.concurrent_kill_2\n(\n `key` UInt64,\n `value` Int64\n)\nENGINE = ReplicatedMergeTree(\'/clickhouse/tables/01593_concurrent_alter_mutations_kill_many_replicas_long_default/{shard}\', \'{replica}2\')\nORDER BY key\nSETTINGS max_replicated_mutations_in_queue = 1000, number_of_free_entries_in_pool_to_execute_mutation = 0, max_replicated_merges_in_queue = 1000, index_granularity = 8192 +Metadata version on replica 3 equal with first replica, OK +CREATE TABLE default.concurrent_kill_3\n(\n `key` UInt64,\n `value` Int64\n)\nENGINE = ReplicatedMergeTree(\'/clickhouse/tables/01593_concurrent_alter_mutations_kill_many_replicas_long_default/{shard}\', \'{replica}3\')\nORDER BY key\nSETTINGS max_replicated_mutations_in_queue = 1000, number_of_free_entries_in_pool_to_execute_mutation = 0, max_replicated_merges_in_queue = 1000, index_granularity = 8192 +Metadata version on replica 4 equal with first replica, OK +CREATE TABLE default.concurrent_kill_4\n(\n `key` UInt64,\n `value` Int64\n)\nENGINE = 
ReplicatedMergeTree(\'/clickhouse/tables/01593_concurrent_alter_mutations_kill_many_replicas_long_default/{shard}\', \'{replica}4\')\nORDER BY key\nSETTINGS max_replicated_mutations_in_queue = 1000, number_of_free_entries_in_pool_to_execute_mutation = 0, max_replicated_merges_in_queue = 1000, index_granularity = 8192 +Metadata version on replica 5 equal with first replica, OK +CREATE TABLE default.concurrent_kill_5\n(\n `key` UInt64,\n `value` Int64\n)\nENGINE = ReplicatedMergeTree(\'/clickhouse/tables/01593_concurrent_alter_mutations_kill_many_replicas_long_default/{shard}\', \'{replica}5\')\nORDER BY key\nSETTINGS max_replicated_mutations_in_queue = 1000, number_of_free_entries_in_pool_to_execute_mutation = 0, max_replicated_merges_in_queue = 1000, index_granularity = 8192 +499999500000 diff --git a/tests/queries/0_stateless/01593_concurrent_alter_mutations_kill_many_replicas.sh b/tests/queries/0_stateless/01593_concurrent_alter_mutations_kill_many_replicas_long.sh similarity index 84% rename from tests/queries/0_stateless/01593_concurrent_alter_mutations_kill_many_replicas.sh rename to tests/queries/0_stateless/01593_concurrent_alter_mutations_kill_many_replicas_long.sh index bfa68328c06..e263750c431 100755 --- a/tests/queries/0_stateless/01593_concurrent_alter_mutations_kill_many_replicas.sh +++ b/tests/queries/0_stateless/01593_concurrent_alter_mutations_kill_many_replicas_long.sh @@ -11,7 +11,10 @@ for i in $(seq $REPLICAS); do done for i in $(seq $REPLICAS); do - $CLICKHOUSE_CLIENT --query "CREATE TABLE concurrent_kill_$i (key UInt64, value String) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test_01593_concurrent_kill', '$i') ORDER BY key SETTINGS max_replicated_mutations_in_queue=1000, number_of_free_entries_in_pool_to_execute_mutation=0,max_replicated_merges_in_queue=1000" + $CLICKHOUSE_CLIENT --query "CREATE TABLE concurrent_kill_$i (key UInt64, value String) ENGINE = + ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/{shard}', '{replica}$i') ORDER BY key + SETTINGS max_replicated_mutations_in_queue=1000, number_of_free_entries_in_pool_to_execute_mutation=0,max_replicated_merges_in_queue=1000" + done $CLICKHOUSE_CLIENT --query "INSERT INTO concurrent_kill_1 SELECT number, toString(number) FROM numbers(1000000)" @@ -77,9 +80,10 @@ while true; do done -metadata_version=$($CLICKHOUSE_CLIENT --query "SELECT value FROM system.zookeeper WHERE path = '/clickhouse/tables/test_01593_concurrent_kill/replicas/$i/' and name = 'metadata_version'") +metadata_version=$($CLICKHOUSE_CLIENT --query "SELECT value FROM system.zookeeper WHERE path = '/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/s1/replicas/r1$i/' and name = 'metadata_version'") for i in $(seq $REPLICAS); do - replica_metadata_version=$($CLICKHOUSE_CLIENT --query "SELECT value FROM system.zookeeper WHERE path = '/clickhouse/tables/test_01593_concurrent_kill/replicas/$i/' and name = 'metadata_version'") + replica_metadata_version=$($CLICKHOUSE_CLIENT --query "SELECT value FROM system.zookeeper WHERE path = '/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/s1/replicas/r1$i/' and name = 'metadata_version'") + if [ "$metadata_version" != "$replica_metadata_version" ]; then echo "Metadata version on replica $i differs from the first replica, FAIL" else diff --git a/tests/queries/0_stateless/01598_memory_limit_zeros.sql b/tests/queries/0_stateless/01598_memory_limit_zeros.sql index e90d7bbccb7..a07ce0bcca3 100644 --- a/tests/queries/0_stateless/01598_memory_limit_zeros.sql +++ 
b/tests/queries/0_stateless/01598_memory_limit_zeros.sql @@ -1,2 +1,2 @@ -SET max_memory_usage = 1; +SET max_memory_usage = 1, max_untracked_memory = 1000000; select 'test', count(*) from zeros_mt(1000000) where not ignore(zero); -- { serverError 241 } diff --git a/tests/queries/0_stateless/01600_benchmark_query.sh b/tests/queries/0_stateless/01600_benchmark_query.sh index a563c87a10f..1cf9cb23c3c 100755 --- a/tests/queries/0_stateless/01600_benchmark_query.sh +++ b/tests/queries/0_stateless/01600_benchmark_query.sh @@ -4,9 +4,10 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) # shellcheck source=../shell_config.sh . "$CURDIR"/../shell_config.sh -$CLICKHOUSE_BENCHMARK --iterations 10 --query "SELECT 1" 1>/dev/null 2>"$CLICKHOUSE_TMP"/err +LOG="$CLICKHOUSE_TMP/err-$CLICKHOUSE_DATABASE" +$CLICKHOUSE_BENCHMARK --iterations 10 --query "SELECT 1" 1>/dev/null 2>"$LOG" -cat "$CLICKHOUSE_TMP"/err | grep Exception -cat "$CLICKHOUSE_TMP"/err | grep Loaded +cat "$LOG" | grep Exception +cat "$LOG" | grep Loaded -rm "$CLICKHOUSE_TMP"/err +rm "$LOG" diff --git a/tests/queries/0_stateless/01601_custom_tld.reference b/tests/queries/0_stateless/01601_custom_tld.reference index 98b99778396..e056505f273 100644 --- a/tests/queries/0_stateless/01601_custom_tld.reference +++ b/tests/queries/0_stateless/01601_custom_tld.reference @@ -1,11 +1,24 @@ -no-tld +-- no-tld + +foo.there-is-no-such-domain +foo.there-is-no-such-domain foo.there-is-no-such-domain foo.there-is-no-such-domain foo -generic +-- generic kernel kernel.biz.ss -difference +-- difference biz.ss kernel.biz.ss +-- 3+level +xx.blogspot.co.at +blogspot +xx.blogspot.co.at +blogspot +-- url +foobar.com +foobar.com +foobar.com +xx.blogspot.co.at diff --git a/tests/queries/0_stateless/01601_custom_tld.sql b/tests/queries/0_stateless/01601_custom_tld.sql index 6d68299c07d..688dd419858 100644 --- a/tests/queries/0_stateless/01601_custom_tld.sql +++ b/tests/queries/0_stateless/01601_custom_tld.sql @@ -1,16 +1,31 @@ -select 'no-tld'; -select cutToFirstSignificantSubdomainCustom('there-is-no-such-domain', 'public_suffix_list'); +select '-- no-tld'; -- even if there is no TLD, the 2nd level is used by default anyway -- FIXME: make this behavior optional (so that the TLD of the host is never changed: either empty or something real) +select cutToFirstSignificantSubdomain('there-is-no-such-domain'); +select cutToFirstSignificantSubdomain('foo.there-is-no-such-domain'); +select cutToFirstSignificantSubdomain('bar.foo.there-is-no-such-domain'); +select cutToFirstSignificantSubdomainCustom('there-is-no-such-domain', 'public_suffix_list'); select cutToFirstSignificantSubdomainCustom('foo.there-is-no-such-domain', 'public_suffix_list'); select cutToFirstSignificantSubdomainCustom('bar.foo.there-is-no-such-domain', 'public_suffix_list'); select firstSignificantSubdomainCustom('bar.foo.there-is-no-such-domain', 'public_suffix_list'); -select 'generic'; -select firstSignificantSubdomainCustom('foo.kernel.biz.ss', 'public_suffix_list'); -- kernel.biz.ss +select '-- generic'; +select firstSignificantSubdomainCustom('foo.kernel.biz.ss', 'public_suffix_list'); -- kernel select cutToFirstSignificantSubdomainCustom('foo.kernel.biz.ss', 'public_suffix_list'); -- kernel.biz.ss -select 'difference'; +select '-- difference'; -- biz.ss is not in the default TLD list, hence: select cutToFirstSignificantSubdomain('foo.kernel.biz.ss'); -- biz.ss select cutToFirstSignificantSubdomainCustom('foo.kernel.biz.ss', 'public_suffix_list'); -- kernel.biz.ss + +select '-- 3+level'; +select 
cutToFirstSignificantSubdomainCustom('xx.blogspot.co.at', 'public_suffix_list'); -- xx.blogspot.co.at +select firstSignificantSubdomainCustom('xx.blogspot.co.at', 'public_suffix_list'); -- blogspot +select cutToFirstSignificantSubdomainCustom('foo.bar.xx.blogspot.co.at', 'public_suffix_list'); -- xx.blogspot.co.at +select firstSignificantSubdomainCustom('foo.bar.xx.blogspot.co.at', 'public_suffix_list'); -- blogspot + +select '-- url'; +select cutToFirstSignificantSubdomainCustom('http://foobar.com', 'public_suffix_list'); +select cutToFirstSignificantSubdomainCustom('http://foobar.com/foo', 'public_suffix_list'); +select cutToFirstSignificantSubdomainCustom('http://bar.foobar.com/foo', 'public_suffix_list'); +select cutToFirstSignificantSubdomainCustom('http://xx.blogspot.co.at', 'public_suffix_list'); diff --git a/tests/queries/0_stateless/01602_max_distributed_connections.reference b/tests/queries/0_stateless/01602_max_distributed_connections.reference index e69de29bb2d..7326d960397 100644 --- a/tests/queries/0_stateless/01602_max_distributed_connections.reference +++ b/tests/queries/0_stateless/01602_max_distributed_connections.reference @@ -0,0 +1 @@ +Ok diff --git a/tests/queries/0_stateless/01602_max_distributed_connections.sh b/tests/queries/0_stateless/01602_max_distributed_connections.sh index 93c6071c091..772acb39344 100755 --- a/tests/queries/0_stateless/01602_max_distributed_connections.sh +++ b/tests/queries/0_stateless/01602_max_distributed_connections.sh @@ -4,13 +4,31 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) # shellcheck source=../shell_config.sh . "$CURDIR"/../shell_config.sh -common_opts=( - "--format=Null" +# We check that even if max_threads is small, the setting max_distributed_connections +# will allow processing queries on multiple shards concurrently. - "--max_threads=1" - "--max_distributed_connections=3" -) +# We sleep 1.5 seconds on each of ten machines. +# If concurrency is one (bad), the query will take at least 15 seconds and the following loops are guaranteed to be infinite. +# If concurrency is 10 (good), the query may take less than 10 seconds with non-zero probability +# and the following loops will finish with probability 1 assuming independent random variables. -# NOTE: the test use higher timeout to avoid flakiness. -timeout 9s ${CLICKHOUSE_CLIENT} "$@" "${common_opts[@]}" -q "select sleep(3) from remote('127.{1,2,3,4,5}', system.one)" --prefer_localhost_replica=0 -timeout 9s ${CLICKHOUSE_CLIENT} "$@" "${common_opts[@]}" -q "select sleep(3) from remote('127.{1,2,3,4,5}', system.one)" --prefer_localhost_replica=1 +while true; do + timeout 10 ${CLICKHOUSE_CLIENT} --max_threads 1 --max_distributed_connections 10 --query " + SELECT sleep(1.5) FROM remote('127.{1..10}', system.one) FORMAT Null" --prefer_localhost_replica=0 && break +done + +while true; do + timeout 10 ${CLICKHOUSE_CLIENT} --max_threads 1 --max_distributed_connections 10 --query " + SELECT sleep(1.5) FROM remote('127.{1..10}', system.one) FORMAT Null" --prefer_localhost_replica=1 && break +done + +# If max_distributed_connections is low and async_socket_for_remote is disabled, +# the concurrency of distributed queries will also be low.
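+# Roughly: with max_distributed_connections=1 and async_socket_for_remote=0 the ten shards are +# processed one after another, so the query needs about 10 * 0.15 = 1.5 seconds, well over the +# 1-second limit; timeout then kills the client, the "&& echo 'Fail'" branch never fires, and +# only the final 'Ok' is printed.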
+ +timeout 1 ${CLICKHOUSE_CLIENT} --max_threads 1 --max_distributed_connections 1 --async_socket_for_remote 0 --query " + SELECT sleep(0.15) FROM remote('127.{1..10}', system.one) FORMAT Null" --prefer_localhost_replica=0 && echo 'Fail' + +timeout 1 ${CLICKHOUSE_CLIENT} --max_threads 1 --max_distributed_connections 1 --async_socket_for_remote 0 --query " + SELECT sleep(0.15) FROM remote('127.{1..10}', system.one) FORMAT Null" --prefer_localhost_replica=1 && echo 'Fail' + +echo 'Ok' diff --git a/tests/queries/0_stateless/01604_explain_ast_of_nonselect_query.reference b/tests/queries/0_stateless/01604_explain_ast_of_nonselect_query.reference index e674ac5fabb..6ae67d7d9ad 100644 --- a/tests/queries/0_stateless/01604_explain_ast_of_nonselect_query.reference +++ b/tests/queries/0_stateless/01604_explain_ast_of_nonselect_query.reference @@ -1,6 +1,6 @@ AlterQuery t1 (children 1) ExpressionList (children 1) - AlterCommand 25 (children 1) + AlterCommand 27 (children 1) Function equals (children 1) ExpressionList (children 2) Identifier date diff --git a/tests/queries/0_stateless/01631_date_overflow_as_partition_key.reference b/tests/queries/0_stateless/01631_date_overflow_as_partition_key.reference index dbcd92da11c..62f620f3ba9 100644 --- a/tests/queries/0_stateless/01631_date_overflow_as_partition_key.reference +++ b/tests/queries/0_stateless/01631_date_overflow_as_partition_key.reference @@ -1,2 +1,2 @@ -1970-01-01 1 -1970-01-01 1 +2106-11-11 1 +2106-11-12 1 diff --git a/tests/queries/0_stateless/01631_date_overflow_as_partition_key.sql b/tests/queries/0_stateless/01631_date_overflow_as_partition_key.sql index f252e10806a..9a8d37084fb 100644 --- a/tests/queries/0_stateless/01631_date_overflow_as_partition_key.sql +++ b/tests/queries/0_stateless/01631_date_overflow_as_partition_key.sql @@ -6,6 +6,6 @@ insert into dt_overflow values('2106-11-11', 1); insert into dt_overflow values('2106-11-12', 1); -select * from dt_overflow; +select * from dt_overflow ORDER BY d; drop table if exists dt_overflow; diff --git a/tests/queries/0_stateless/01641_memory_tracking_insert_optimize_long.sql b/tests/queries/0_stateless/01641_memory_tracking_insert_optimize_long.sql deleted file mode 100644 index 7a92f40b3f0..00000000000 --- a/tests/queries/0_stateless/01641_memory_tracking_insert_optimize_long.sql +++ /dev/null @@ -1,21 +0,0 @@ -drop table if exists data_01641; - -create table data_01641 (key Int, value String) engine=MergeTree order by (key, repeat(value, 10)) settings old_parts_lifetime=0, min_bytes_for_wide_part=0; - --- peak memory usage is 170MiB -set max_memory_usage='200Mi'; -system stop merges data_01641; -insert into data_01641 select number, toString(number) from numbers(120e6); - --- peak: --- - is 21MiB if background merges already scheduled --- - is ~60MiB otherwise -set max_memory_usage='80Mi'; -system start merges data_01641; -optimize table data_01641 final; - --- definitely should fail -set max_memory_usage='1Mi'; -optimize table data_01641 final; -- { serverError 241 } - -drop table data_01641; diff --git a/tests/queries/0_stateless/01643_merge_tree_fsync_smoke.reference b/tests/queries/0_stateless/01643_merge_tree_fsync_smoke.reference index 654db9dbc86..f57d5df6efd 100644 --- a/tests/queries/0_stateless/01643_merge_tree_fsync_smoke.reference +++ b/tests/queries/0_stateless/01643_merge_tree_fsync_smoke.reference @@ -10,3 +10,5 @@ wide fsync_after_insert,fsync_part_directory 1 memory in_memory_parts_insert_sync 1 +wide fsync_part_directory,vertical +1 diff --git 
a/tests/queries/0_stateless/01643_merge_tree_fsync_smoke.sql b/tests/queries/0_stateless/01643_merge_tree_fsync_smoke.sql index 21ebb607693..644cf063a33 100644 --- a/tests/queries/0_stateless/01643_merge_tree_fsync_smoke.sql +++ b/tests/queries/0_stateless/01643_merge_tree_fsync_smoke.sql @@ -4,34 +4,47 @@ select 'default'; create table data_01643 (key Int) engine=MergeTree() order by key; insert into data_01643 values (1); select * from data_01643; +optimize table data_01643 final; drop table data_01643; select 'compact fsync_after_insert'; create table data_01643 (key Int) engine=MergeTree() order by key settings min_rows_for_wide_part=2, fsync_after_insert=1; insert into data_01643 values (1); select * from data_01643; +optimize table data_01643 final; drop table data_01643; select 'compact fsync_after_insert,fsync_part_directory'; create table data_01643 (key Int) engine=MergeTree() order by key settings min_rows_for_wide_part=2, fsync_after_insert=1, fsync_part_directory=1; insert into data_01643 values (1); select * from data_01643; +optimize table data_01643 final; drop table data_01643; select 'wide fsync_after_insert'; create table data_01643 (key Int) engine=MergeTree() order by key settings min_bytes_for_wide_part=0, fsync_after_insert=1; insert into data_01643 values (1); select * from data_01643; +optimize table data_01643 final; drop table data_01643; select 'wide fsync_after_insert,fsync_part_directory'; create table data_01643 (key Int) engine=MergeTree() order by key settings min_bytes_for_wide_part=0, fsync_after_insert=1, fsync_part_directory=1; insert into data_01643 values (1); select * from data_01643; +optimize table data_01643 final; drop table data_01643; select 'memory in_memory_parts_insert_sync'; create table data_01643 (key Int) engine=MergeTree() order by key settings min_rows_for_compact_part=2, in_memory_parts_insert_sync=1, fsync_after_insert=1, fsync_part_directory=1; insert into data_01643 values (1); select * from data_01643; +optimize table data_01643 final; +drop table data_01643; + +select 'wide fsync_part_directory,vertical'; +create table data_01643 (key Int) engine=MergeTree() order by key settings min_bytes_for_wide_part=0, fsync_part_directory=1, enable_vertical_merge_algorithm=1, vertical_merge_algorithm_min_rows_to_activate=1, vertical_merge_algorithm_min_columns_to_activate=1; +insert into data_01643 values (1); +select * from data_01643; +optimize table data_01643 final; drop table data_01643; diff --git a/tests/queries/0_stateless/01649_with_alias_key_condition.sql b/tests/queries/0_stateless/01649_with_alias_key_condition.sql index b813e6ee84f..0a796f8512e 100644 --- a/tests/queries/0_stateless/01649_with_alias_key_condition.sql +++ b/tests/queries/0_stateless/01649_with_alias_key_condition.sql @@ -6,6 +6,6 @@ insert into alias_key_condition values (1, 2), (3, 4); set force_primary_key = 1; -with i as k select * from alias_key_condition where k = 3; +with i as k select * from alias_key_condition where k = (select i from alias_key_condition where i = 3); drop table if exists alias_key_condition; diff --git a/tests/queries/0_stateless/01651_bugs_from_15889.reference b/tests/queries/0_stateless/01651_bugs_from_15889.reference index 77ac542d4fb..28271a697e2 100644 --- a/tests/queries/0_stateless/01651_bugs_from_15889.reference +++ b/tests/queries/0_stateless/01651_bugs_from_15889.reference @@ -1,2 +1,3 @@ 0 +0 diff --git a/tests/queries/0_stateless/01651_bugs_from_15889.sql b/tests/queries/0_stateless/01651_bugs_from_15889.sql index 
1fbf669a1b8..97da4d78ab6 100644 --- a/tests/queries/0_stateless/01651_bugs_from_15889.sql +++ b/tests/queries/0_stateless/01651_bugs_from_15889.sql @@ -8,7 +8,7 @@ INSERT INTO xp SELECT '2020-01-01', number, '' FROM numbers(100000); CREATE TABLE xp_d AS xp ENGINE = Distributed(test_shard_localhost, currentDatabase(), xp); -SELECT count(7 = (SELECT number FROM numbers(0) ORDER BY number ASC NULLS FIRST LIMIT 7)) FROM xp_d PREWHERE toYYYYMM(A) GLOBAL IN (SELECT NULL = (SELECT number FROM numbers(1) ORDER BY number DESC NULLS LAST LIMIT 1), toYYYYMM(min(A)) FROM xp_d) WHERE B > NULL; -- { serverError 20 } +SELECT count(7 = (SELECT number FROM numbers(0) ORDER BY number ASC NULLS FIRST LIMIT 7)) FROM xp_d PREWHERE toYYYYMM(A) GLOBAL IN (SELECT NULL = (SELECT number FROM numbers(1) ORDER BY number DESC NULLS LAST LIMIT 1), toYYYYMM(min(A)) FROM xp_d) WHERE B > NULL; -- B > NULL is evaluated to 0 and this works SELECT count() FROM xp_d WHERE A GLOBAL IN (SELECT NULL); -- { serverError 53 } diff --git a/tests/queries/0_stateless/01653_move_conditions_from_join_on_to_where.reference b/tests/queries/0_stateless/01653_move_conditions_from_join_on_to_where.reference deleted file mode 100644 index 19487c9f942..00000000000 --- a/tests/queries/0_stateless/01653_move_conditions_from_join_on_to_where.reference +++ /dev/null @@ -1,140 +0,0 @@ ----------Q1---------- -2 2 2 20 -SELECT - a, - b, - table2.a, - table2.b -FROM table1 -ALL INNER JOIN -( - SELECT - a, - b - FROM table2 -) AS table2 ON a = table2.a -WHERE table2.b = toUInt32(20) ----------Q2---------- -2 2 2 20 -SELECT - a, - b, - table2.a, - table2.b -FROM table1 -ALL INNER JOIN -( - SELECT - a, - b - FROM table2 -) AS table2 ON a = table2.a -WHERE (table2.a < table2.b) AND (table2.b = toUInt32(20)) ----------Q3---------- ----------Q4---------- -6 40 -SELECT - a, - table2.b -FROM table1 -ALL INNER JOIN -( - SELECT - a, - b - FROM table2 -) AS table2 ON a = toUInt32(10 - table2.a) -WHERE (b = 6) AND (table2.b > 20) ----------Q5---------- -SELECT - a, - table2.b -FROM table1 -ALL INNER JOIN -( - SELECT - a, - b - FROM table2 - WHERE 0 -) AS table2 ON a = table2.a -WHERE 0 ----------Q6---------- ----------Q7---------- -0 0 0 0 -SELECT - a, - b, - table2.a, - table2.b -FROM table1 -ALL INNER JOIN -( - SELECT - a, - b - FROM table2 -) AS table2 ON a = table2.a -WHERE (table2.b < toUInt32(40)) AND (b < 1) ----------Q8---------- ----------Q9---will not be optimized---------- -SELECT - a, - b, - table2.a, - table2.b -FROM table1 -ALL LEFT JOIN -( - SELECT - a, - b - FROM table2 -) AS table2 ON (a = table2.a) AND (b = toUInt32(10)) -SELECT - a, - b, - table2.a, - table2.b -FROM table1 -ALL RIGHT JOIN -( - SELECT - a, - b - FROM table2 -) AS table2 ON (a = table2.a) AND (b = toUInt32(10)) -SELECT - a, - b, - table2.a, - table2.b -FROM table1 -ALL FULL OUTER JOIN -( - SELECT - a, - b - FROM table2 -) AS table2 ON (a = table2.a) AND (b = toUInt32(10)) -SELECT - a, - b, - table2.a, - table2.b -FROM table1 -ALL FULL OUTER JOIN -( - SELECT - a, - b - FROM table2 -) AS table2 ON (a = table2.a) AND (table2.b = toUInt32(10)) -WHERE a < toUInt32(20) -SELECT - a, - b, - table2.a, - table2.b -FROM table1 -CROSS JOIN table2 diff --git a/tests/queries/0_stateless/01653_move_conditions_from_join_on_to_where.sql b/tests/queries/0_stateless/01653_move_conditions_from_join_on_to_where.sql deleted file mode 100644 index 23871a9c47c..00000000000 --- a/tests/queries/0_stateless/01653_move_conditions_from_join_on_to_where.sql +++ /dev/null @@ -1,48 +0,0 @@ -DROP TABLE IF EXISTS 
table1; -DROP TABLE IF EXISTS table2; - -CREATE TABLE table1 (a UInt32, b UInt32) ENGINE = Memory; -CREATE TABLE table2 (a UInt32, b UInt32) ENGINE = Memory; - -INSERT INTO table1 SELECT number, number FROM numbers(10); -INSERT INTO table2 SELECT number * 2, number * 20 FROM numbers(6); - -SELECT '---------Q1----------'; -SELECT * FROM table1 JOIN table2 ON (table1.a = table2.a) AND (table2.b = toUInt32(20)); -EXPLAIN SYNTAX SELECT * FROM table1 JOIN table2 ON (table1.a = table2.a) AND (table2.b = toUInt32(20)); - -SELECT '---------Q2----------'; -SELECT * FROM table1 JOIN table2 ON (table1.a = table2.a) AND (table2.a < table2.b) AND (table2.b = toUInt32(20)); -EXPLAIN SYNTAX SELECT * FROM table1 JOIN table2 ON (table1.a = table2.a) AND (table2.a < table2.b) AND (table2.b = toUInt32(20)); - -SELECT '---------Q3----------'; -SELECT * FROM table1 JOIN table2 ON (table1.a = toUInt32(table2.a + 5)) AND (table2.a < table1.b) AND (table2.b > toUInt32(20)); -- { serverError 48 } - -SELECT '---------Q4----------'; -SELECT table1.a, table2.b FROM table1 INNER JOIN table2 ON (table1.a = toUInt32(10 - table2.a)) AND (table1.b = 6) AND (table2.b > 20); -EXPLAIN SYNTAX SELECT table1.a, table2.b FROM table1 INNER JOIN table2 ON (table1.a = toUInt32(10 - table2.a)) AND (table1.b = 6) AND (table2.b > 20); - -SELECT '---------Q5----------'; -SELECT table1.a, table2.b FROM table1 JOIN table2 ON (table1.a = table2.a) AND (table1.b = 6) AND (table2.b > 20) AND (10 < 6); -EXPLAIN SYNTAX SELECT table1.a, table2.b FROM table1 JOIN table2 ON (table1.a = table2.a) AND (table1.b = 6) AND (table2.b > 20) AND (10 < 6); - -SELECT '---------Q6----------'; -SELECT table1.a, table2.b FROM table1 JOIN table2 ON (table1.b = 6) AND (table2.b > 20); -- { serverError 403 } - -SELECT '---------Q7----------'; -SELECT * FROM table1 JOIN table2 ON (table1.a = table2.a) AND (table2.b < toUInt32(40)) where table1.b < 1; -EXPLAIN SYNTAX SELECT * FROM table1 JOIN table2 ON (table1.a = table2.a) AND (table2.b < toUInt32(40)) where table1.b < 1; -SELECT * FROM table1 JOIN table2 ON (table1.a = table2.a) AND (table2.b < toUInt32(40)) where table1.b > 10; - -SELECT '---------Q8----------'; -SELECT * FROM table1 INNER JOIN table2 ON (table1.a = table2.a) AND (table2.b < toUInt32(table1, 10)); -- { serverError 47 } - -SELECT '---------Q9---will not be optimized----------'; -EXPLAIN SYNTAX SELECT * FROM table1 LEFT JOIN table2 ON (table1.a = table2.a) AND (table1.b = toUInt32(10)); -EXPLAIN SYNTAX SELECT * FROM table1 RIGHT JOIN table2 ON (table1.a = table2.a) AND (table1.b = toUInt32(10)); -EXPLAIN SYNTAX SELECT * FROM table1 FULL JOIN table2 ON (table1.a = table2.a) AND (table1.b = toUInt32(10)); -EXPLAIN SYNTAX SELECT * FROM table1 FULL JOIN table2 ON (table1.a = table2.a) AND (table2.b = toUInt32(10)) WHERE table1.a < toUInt32(20); -EXPLAIN SYNTAX SELECT * FROM table1 , table2; - -DROP TABLE table1; -DROP TABLE table2; diff --git a/tests/queries/0_stateless/01655_plan_optimizations.reference b/tests/queries/0_stateless/01655_plan_optimizations.reference index 99b32b74ca7..22f5a2e73e3 100644 --- a/tests/queries/0_stateless/01655_plan_optimizations.reference +++ b/tests/queries/0_stateless/01655_plan_optimizations.reference @@ -123,3 +123,26 @@ Filter column: notEquals(y, 2) 3 10 0 37 +> filter is pushed down before CreatingSets +CreatingSets +Filter +Filter +1 +3 +> one condition of filter is pushed down before LEFT JOIN +Join +Filter column: notEquals(number, 1) +Join +0 0 +3 3 +> one condition of filter is pushed down before INNER JOIN 
+Join +Filter column: notEquals(number, 1) +Join +3 3 +> filter is pushed down before UNION +Union +Filter +Filter +2 3 +2 3 diff --git a/tests/queries/0_stateless/01655_plan_optimizations.sh b/tests/queries/0_stateless/01655_plan_optimizations.sh index 3148dc4a597..148e6157773 100755 --- a/tests/queries/0_stateless/01655_plan_optimizations.sh +++ b/tests/queries/0_stateless/01655_plan_optimizations.sh @@ -150,3 +150,49 @@ $CLICKHOUSE_CLIENT -q " select * from ( select y, sum(x) from (select number as x, number % 4 as y from numbers(10)) group by y with totals ) where y != 2" + +echo "> filter is pushed down before CreatingSets" +$CLICKHOUSE_CLIENT -q " + explain select number from ( + select number from numbers(5) where number in (select 1 + number from numbers(3)) + ) where number != 2 settings enable_optimize_predicate_expression=0" | + grep -o "CreatingSets\|Filter" +$CLICKHOUSE_CLIENT -q " + select number from ( + select number from numbers(5) where number in (select 1 + number from numbers(3)) + ) where number != 2 settings enable_optimize_predicate_expression=0" + +echo "> one condition of filter is pushed down before LEFT JOIN" +$CLICKHOUSE_CLIENT -q " + explain actions = 1 + select number as a, r.b from numbers(4) as l any left join ( + select number + 2 as b from numbers(3) + ) as r on a = r.b where a != 1 and b != 2 settings enable_optimize_predicate_expression = 0" | + grep -o "Join\|Filter column: notEquals(number, 1)" +$CLICKHOUSE_CLIENT -q " + select number as a, r.b from numbers(4) as l any left join ( + select number + 2 as b from numbers(3) + ) as r on a = r.b where a != 1 and b != 2 settings enable_optimize_predicate_expression = 0" + +echo "> one condition of filter is pushed down before INNER JOIN" +$CLICKHOUSE_CLIENT -q " + explain actions = 1 + select number as a, r.b from numbers(4) as l any inner join ( + select number + 2 as b from numbers(3) + ) as r on a = r.b where a != 1 and b != 2 settings enable_optimize_predicate_expression = 0" | + grep -o "Join\|Filter column: notEquals(number, 1)" +$CLICKHOUSE_CLIENT -q " + select number as a, r.b from numbers(4) as l any inner join ( + select number + 2 as b from numbers(3) + ) as r on a = r.b where a != 1 and b != 2 settings enable_optimize_predicate_expression = 0" + +echo "> filter is pushed down before UNION" +$CLICKHOUSE_CLIENT -q " + explain select a, b from ( + select number + 1 as a, number + 2 as b from numbers(2) union all select number + 1 as b, number + 2 as a from numbers(2) + ) where a != 1 settings enable_optimize_predicate_expression = 0" | + grep -o "Union\|Filter" +$CLICKHOUSE_CLIENT -q " + select a, b from ( + select number + 1 as a, number + 2 as b from numbers(2) union all select number + 1 as b, number + 2 as a from numbers(2) + ) where a != 1 settings enable_optimize_predicate_expression = 0" diff --git a/tests/queries/0_stateless/01658_read_file_to_stringcolumn.sh b/tests/queries/0_stateless/01658_read_file_to_stringcolumn.sh index 593f0e59ea7..072e8d75f52 100755 --- a/tests/queries/0_stateless/01658_read_file_to_stringcolumn.sh +++ b/tests/queries/0_stateless/01658_read_file_to_stringcolumn.sh @@ -8,7 +8,7 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) # Data preparation. # Now we can get the user_files_path by use the table file function for trick. 
also we can get it by query as: # "insert into function file('exist.txt', 'CSV', 'val1 char') values ('aaaa'); select _path from file('exist.txt', 'CSV', 'val1 char')" -user_files_path=$(clickhouse-client --query "select _path,_file from file('nonexist.txt', 'CSV', 'val1 char')" 2>&1 |grep Exception | awk '{gsub("/nonexist.txt","",$9); print $9}') +user_files_path=$(clickhouse-client --query "select _path,_file from file('nonexist.txt', 'CSV', 'val1 char')" 2>&1 | grep Exception | awk '{gsub("/nonexist.txt","",$9); print $9}') mkdir -p ${user_files_path}/ echo -n aaaaaaaaa > ${user_files_path}/a.txt diff --git a/tests/queries/0_stateless/01661_extract_all_groups_throw_fast.sql b/tests/queries/0_stateless/01661_extract_all_groups_throw_fast.sql index 48d3baba0c5..a056d77896c 100644 --- a/tests/queries/0_stateless/01661_extract_all_groups_throw_fast.sql +++ b/tests/queries/0_stateless/01661_extract_all_groups_throw_fast.sql @@ -1 +1,2 @@ SELECT repeat('abcdefghijklmnopqrstuvwxyz', number * 100) AS haystack, extractAllGroupsHorizontal(haystack, '(\\w)') AS matches FROM numbers(1023); -- { serverError 128 } +SELECT count(extractAllGroupsHorizontal(materialize('a'), '(a)')) FROM numbers(1000000) FORMAT Null; -- shouldn't fail diff --git a/tests/queries/0_stateless/01666_blns.sql b/tests/queries/0_stateless/01666_blns.sql index 787f0369244..be9632092bc 100644 --- a/tests/queries/0_stateless/01666_blns.sql +++ b/tests/queries/0_stateless/01666_blns.sql @@ -554,9 +554,9 @@ SELECT count() FROM test; DROP TABLE IF EXISTS test_r1; DROP TABLE IF EXISTS test_r2; -CREATE TABLE test_r1 AS test ENGINE = ReplicatedMergeTree('/clickhouse/test', 'r1') ORDER BY "\\" SETTINGS min_bytes_for_wide_part = '100G'; +CREATE TABLE test_r1 AS test ENGINE = ReplicatedMergeTree('/clickhouse/test_01666', 'r1') ORDER BY "\\" SETTINGS min_bytes_for_wide_part = '100G'; INSERT INTO test_r1 SELECT * FROM test; -CREATE TABLE test_r2 AS test ENGINE = ReplicatedMergeTree('/clickhouse/test', 'r2') ORDER BY "\\" SETTINGS min_bytes_for_wide_part = '100G'; +CREATE TABLE test_r2 AS test ENGINE = ReplicatedMergeTree('/clickhouse/test_01666', 'r2') ORDER BY "\\" SETTINGS min_bytes_for_wide_part = '100G'; SYSTEM SYNC REPLICA test_r2; diff --git a/tests/queries/0_stateless/01666_merge_tree_max_query_limit.reference b/tests/queries/0_stateless/01666_merge_tree_max_query_limit.reference index a08a20dc95d..25880a7d740 100644 --- a/tests/queries/0_stateless/01666_merge_tree_max_query_limit.reference +++ b/tests/queries/0_stateless/01666_merge_tree_max_query_limit.reference @@ -12,4 +12,3 @@ Check if another query is passed Modify max_concurrent_queries back to 1 Check if another query with less marks to read is throttled yes -finished long_running_query default select sleepEachRow(0.01) from simple settings max_block_size = 1 format Null diff --git a/tests/queries/0_stateless/01666_merge_tree_max_query_limit.sh b/tests/queries/0_stateless/01666_merge_tree_max_query_limit.sh index e32a83c9560..c5fbb35a9cd 100755 --- a/tests/queries/0_stateless/01666_merge_tree_max_query_limit.sh +++ b/tests/queries/0_stateless/01666_merge_tree_max_query_limit.sh @@ -15,12 +15,14 @@ drop table if exists simple; create table simple (i int, j int) engine = MergeTree order by i settings index_granularity = 1, max_concurrent_queries = 1, min_marks_to_honor_max_concurrent_queries = 2; -insert into simple select number, number + 100 from numbers(1000); +insert into simple select number, number + 100 from numbers(5000); " 
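+# Give the long-running query an id scoped to this run's database, so that concurrent test runs +# (each presumably gets its own database) cannot kill or wait on each other's queries.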
+query_id="long_running_query-$CLICKHOUSE_DATABASE" + echo "Spin up a long running query" -${CLICKHOUSE_CLIENT} --query "select sleepEachRow(0.01) from simple settings max_block_size = 1 format Null" --query_id "long_running_query" > /dev/null 2>&1 & -wait_for_query_to_start 'long_running_query' +${CLICKHOUSE_CLIENT} --query "select sleepEachRow(0.1) from simple settings max_block_size = 1 format Null" --query_id "$query_id" > /dev/null 2>&1 & +wait_for_query_to_start "$query_id" # query which reads marks >= min_marks_to_honor_max_concurrent_queries is throttled echo "Check if another query with some marks to read is throttled" @@ -61,7 +63,7 @@ CODE=$? [ "$CODE" -ne "202" ] && echo "Expected error code: 202 but got: $CODE" && exit 1; echo "yes" -${CLICKHOUSE_CLIENT} --query "KILL QUERY WHERE query_id = 'long_running_query' SYNC" +${CLICKHOUSE_CLIENT} --query "KILL QUERY WHERE query_id = '$query_id' SYNC FORMAT Null" wait ${CLICKHOUSE_CLIENT} --multiline --multiquery --query " diff --git a/tests/queries/0_stateless/01670_log_comment.sql b/tests/queries/0_stateless/01670_log_comment.sql index c1496273784..2fb61eb5812 100644 --- a/tests/queries/0_stateless/01670_log_comment.sql +++ b/tests/queries/0_stateless/01670_log_comment.sql @@ -1,5 +1,5 @@ SET log_comment = 'log_comment test', log_queries = 1; SELECT 1; SYSTEM FLUSH LOGS; -SELECT type, query FROM system.query_log WHERE current_database = currentDatabase() AND log_comment = 'log_comment test' AND event_date >= yesterday() AND type = 1 ORDER BY event_time DESC LIMIT 1; -SELECT type, query FROM system.query_log WHERE current_database = currentDatabase() AND log_comment = 'log_comment test' AND event_date >= yesterday() AND type = 2 ORDER BY event_time DESC LIMIT 1; +SELECT type, query FROM system.query_log WHERE current_database = currentDatabase() AND log_comment = 'log_comment test' AND query LIKE 'SELECT 1%' AND event_date >= yesterday() AND type = 1 ORDER BY event_time_microseconds DESC LIMIT 1; +SELECT type, query FROM system.query_log WHERE current_database = currentDatabase() AND log_comment = 'log_comment test' AND query LIKE 'SELECT 1%' AND event_date >= yesterday() AND type = 2 ORDER BY event_time_microseconds DESC LIMIT 1; diff --git a/tests/queries/0_stateless/01671_ddl_hang_timeout_long.reference b/tests/queries/0_stateless/01671_ddl_hang_timeout_long.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01671_ddl_hang_timeout.sh b/tests/queries/0_stateless/01671_ddl_hang_timeout_long.sh similarity index 89% rename from tests/queries/0_stateless/01671_ddl_hang_timeout.sh rename to tests/queries/0_stateless/01671_ddl_hang_timeout_long.sh index 2ca97e3978b..641eba2d8fa 100755 --- a/tests/queries/0_stateless/01671_ddl_hang_timeout.sh +++ b/tests/queries/0_stateless/01671_ddl_hang_timeout_long.sh @@ -6,7 +6,7 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) function thread_create_drop_table { while true; do REPLICA=$(($RANDOM % 10)) - $CLICKHOUSE_CLIENT --query "CREATE TABLE IF NOT EXISTS t1 (x UInt64, s Array(Nullable(String))) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test_01671/test_01671', 'r_$REPLICA') order by x" 2>/dev/null + $CLICKHOUSE_CLIENT --query "CREATE TABLE IF NOT EXISTS t1 (x UInt64, s Array(Nullable(String))) ENGINE = ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/test_01671', 'r_$REPLICA') order by x" 2>/dev/null sleep 0.0$RANDOM $CLICKHOUSE_CLIENT --query "DROP TABLE IF EXISTS t1" done diff --git 
a/tests/queries/0_stateless/01675_distributed_bytes_to_delay_insert.sh b/tests/queries/0_stateless/01675_distributed_bytes_to_delay_insert.sh deleted file mode 100755 index bad12e4cd58..00000000000 --- a/tests/queries/0_stateless/01675_distributed_bytes_to_delay_insert.sh +++ /dev/null @@ -1,86 +0,0 @@ -#!/usr/bin/env bash - -# NOTE: $SECONDS accuracy is second, so we need some delta, hence -1 in time conditions. - -CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) -# shellcheck source=../shell_config.sh -. "$CURDIR"/../shell_config.sh - -max_delay_to_insert=5 - -${CLICKHOUSE_CLIENT} -nq " -drop table if exists dist_01675; -drop table if exists data_01675; -" - -${CLICKHOUSE_CLIENT} -nq " -create table data_01675 (key Int) engine=Null(); -create table dist_01675 (key Int) engine=Distributed(test_shard_localhost, currentDatabase(), data_01675) settings bytes_to_delay_insert=1, max_delay_to_insert=$max_delay_to_insert; -system stop distributed sends dist_01675; -" - -# -# Case 1: max_delay_to_insert will throw. -# -echo "max_delay_to_insert will throw" - -start_seconds=$SECONDS -${CLICKHOUSE_CLIENT} --testmode -nq " --- first batch is always OK, since there is no pending bytes yet -insert into dist_01675 select * from numbers(1) settings prefer_localhost_replica=0; --- second will fail, because of bytes_to_delay_insert=1 and max_delay_to_insert=5, --- while distributed sends is stopped. --- --- (previous block definitelly takes more, since it has header) -insert into dist_01675 select * from numbers(1) settings prefer_localhost_replica=0; -- { serverError 574 } -system flush distributed dist_01675; -" -end_seconds=$SECONDS - -if (( (end_seconds-start_seconds)<(max_delay_to_insert-1) )); then - echo "max_delay_to_insert was not satisfied ($end_seconds-$start_seconds)" -fi - -# -# Case 2: max_delay_to_insert will finally finished. -# -echo "max_delay_to_insert will succeed" - -max_delay_to_insert=10 -${CLICKHOUSE_CLIENT} -nq " -drop table dist_01675; -create table dist_01675 (key Int) engine=Distributed(test_shard_localhost, currentDatabase(), data_01675) settings bytes_to_delay_insert=1, max_delay_to_insert=$max_delay_to_insert; -system stop distributed sends dist_01675; -" - -flush_delay=4 -function flush_distributed_worker() -{ - sleep $flush_delay - ${CLICKHOUSE_CLIENT} -q "system flush distributed dist_01675" - echo flushed -} -flush_distributed_worker & - -start_seconds=$SECONDS -${CLICKHOUSE_CLIENT} --testmode -nq " --- first batch is always OK, since there is no pending bytes yet -insert into dist_01675 select * from numbers(1) settings prefer_localhost_replica=0; --- second will succcedd, due to SYSTEM FLUSH DISTRIBUTED in background. 
-insert into dist_01675 select * from numbers(1) settings prefer_localhost_replica=0; -" -end_seconds=$SECONDS -wait - -if (( (end_seconds-start_seconds)<(flush_delay-1) )); then - echo "max_delay_to_insert was not wait flush_delay ($end_seconds-$start_seconds)" -fi -if (( (end_seconds-start_seconds)>=(max_delay_to_insert-1) )); then - echo "max_delay_to_insert was overcommited ($end_seconds-$start_seconds)" -fi - - -${CLICKHOUSE_CLIENT} -nq " -drop table dist_01675; -drop table data_01675; -" diff --git a/tests/queries/0_stateless/01675_distributed_bytes_to_delay_insert.reference b/tests/queries/0_stateless/01675_distributed_bytes_to_delay_insert_long.reference similarity index 88% rename from tests/queries/0_stateless/01675_distributed_bytes_to_delay_insert.reference rename to tests/queries/0_stateless/01675_distributed_bytes_to_delay_insert_long.reference index d8c50c741ea..343d1f3639f 100644 --- a/tests/queries/0_stateless/01675_distributed_bytes_to_delay_insert.reference +++ b/tests/queries/0_stateless/01675_distributed_bytes_to_delay_insert_long.reference @@ -1,3 +1,2 @@ max_delay_to_insert will throw max_delay_to_insert will succeed -flushed diff --git a/tests/queries/0_stateless/01675_distributed_bytes_to_delay_insert_long.sh b/tests/queries/0_stateless/01675_distributed_bytes_to_delay_insert_long.sh new file mode 100755 index 00000000000..5687fe323d0 --- /dev/null +++ b/tests/queries/0_stateless/01675_distributed_bytes_to_delay_insert_long.sh @@ -0,0 +1,129 @@ +#!/usr/bin/env bash + +# NOTE: $SECONDS has one-second accuracy, so we need some delta, hence -1 in time conditions. + +CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) +# shellcheck source=../shell_config.sh +. "$CURDIR"/../shell_config.sh + +function drop_tables() +{ + ${CLICKHOUSE_CLIENT} -nq " + drop table if exists dist_01675; + drop table if exists data_01675; + " +} + +# +# Case 1: max_delay_to_insert will throw. +# +function test_max_delay_to_insert_will_throw() +{ + echo "max_delay_to_insert will throw" + + local max_delay_to_insert=2 + ${CLICKHOUSE_CLIENT} -nq " + create table data_01675 (key Int) engine=Null(); + create table dist_01675 (key Int) engine=Distributed(test_shard_localhost, currentDatabase(), data_01675) settings bytes_to_delay_insert=1, max_delay_to_insert=$max_delay_to_insert; + system stop distributed sends dist_01675; + " + + local start_seconds=$SECONDS + ${CLICKHOUSE_CLIENT} --testmode -nq " + -- first batch is always OK, since there are no pending bytes yet + insert into dist_01675 select * from numbers(1) settings prefer_localhost_replica=0; + -- second will fail, because of bytes_to_delay_insert=1 and max_delay_to_insert>0, + -- while distributed sends are stopped. + -- + -- (previous block definitely takes more, since it has a header) + insert into dist_01675 select * from numbers(1) settings prefer_localhost_replica=0; -- { serverError 574 } + system flush distributed dist_01675; + " + local end_seconds=$SECONDS + + if (( (end_seconds-start_seconds)<(max_delay_to_insert-1) )); then + echo "max_delay_to_insert was not satisfied ($end_seconds-$start_seconds)" + fi +} + +# +# Case 2: max_delay_to_insert will eventually finish.
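+# (flush_delay is less than max_delay_to_insert, so the background SYSTEM FLUSH DISTRIBUTED +# should unblock the delayed INSERT after about flush_delay seconds, before the +# max_delay_to_insert timeout fires; the elapsed time is checked against both bounds below)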
+# +function test_max_delay_to_insert_will_succeed_once() +{ + local max_delay_to_insert=4 + local flush_delay=2 + + drop_tables + + ${CLICKHOUSE_CLIENT} -nq " + create table data_01675 (key Int) engine=Null(); + create table dist_01675 (key Int) engine=Distributed(test_shard_localhost, currentDatabase(), data_01675) settings bytes_to_delay_insert=1, max_delay_to_insert=$max_delay_to_insert; + system stop distributed sends dist_01675; + " + + function flush_distributed_worker() + { + sleep $flush_delay + ${CLICKHOUSE_CLIENT} -q "system flush distributed dist_01675" + } + flush_distributed_worker & + + local start_seconds=$SECONDS + # ignore stderr, since it may produce an exception if the flushing thread is too slow + # (this is possible on CI) + ${CLICKHOUSE_CLIENT} --testmode -nq " + -- first batch is always OK, since there are no pending bytes yet + insert into dist_01675 select * from numbers(1) settings prefer_localhost_replica=0; + -- second will succeed, due to SYSTEM FLUSH DISTRIBUTED in background. + insert into dist_01675 select * from numbers(1) settings prefer_localhost_replica=0; + " >& /dev/null + local end_seconds=$SECONDS + wait + + local diff=$(( end_seconds-start_seconds )) + + if (( diff<(flush_delay-1) )); then + # this is a fatal error that should not be retried + echo "max_delay_to_insert did not wait for flush_delay ($diff)" + exit 1 + fi + + # retry the test until the diff is satisfied + # (since we cannot assume that there will be no other lags) + if (( diff>=(max_delay_to_insert-1) )); then + return 1 + fi + + return 0 +} +function test_max_delay_to_insert_will_succeed() +{ + echo "max_delay_to_insert will succeed" + + local retries=20 i=0 + while (( (i++) < retries )); do + if test_max_delay_to_insert_will_succeed_once; then + return + fi + done + + echo failed +} + +function run_test() +{ + local test_case=$1 && shift + + drop_tables + $test_case +} + +function main() +{ + run_test test_max_delay_to_insert_will_throw + run_test test_max_delay_to_insert_will_succeed + + drop_tables +} +main "$@" diff --git a/tests/queries/0_stateless/01676_long_clickhouse_client_autocomplete.reference b/tests/queries/0_stateless/01676_long_clickhouse_client_autocomplete.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01676_clickhouse_client_autocomplete.sh b/tests/queries/0_stateless/01676_long_clickhouse_client_autocomplete.sh similarity index 80% rename from tests/queries/0_stateless/01676_clickhouse_client_autocomplete.sh rename to tests/queries/0_stateless/01676_long_clickhouse_client_autocomplete.sh index 08e07044841..1ed5c6be272 100755 --- a/tests/queries/0_stateless/01676_clickhouse_client_autocomplete.sh +++ b/tests/queries/0_stateless/01676_long_clickhouse_client_autocomplete.sh @@ -69,18 +69,6 @@ compwords_positive=( max_concurrent_queries_for_all_users # system.clusters test_shard_localhost - # system.errors, also it is very rare to cover system_events_show_zero_values - CONDITIONAL_TREE_PARENT_NOT_FOUND - # system.events, also it is very rare to cover system_events_show_zero_values - WriteBufferFromFileDescriptorWriteFailed - # system.asynchronous_metrics, also this metric has zero value - # - # NOTE: that there is no ability to complete metrics like - # jemalloc.background_thread.num_runs, due to "."
is used as a word breaker - # (and this cannot be changed -- db.table) - ReplicasMaxAbsoluteDelay - # system.metrics - PartsPreCommitted # system.macros default_path_test # system.storage_policies, egh not uniq diff --git a/tests/queries/0_stateless/01676_range_hashed_dictionary.reference b/tests/queries/0_stateless/01676_range_hashed_dictionary.reference new file mode 100644 index 00000000000..23a5180d99c --- /dev/null +++ b/tests/queries/0_stateless/01676_range_hashed_dictionary.reference @@ -0,0 +1,58 @@ +Dictionary not nullable +dictGet +0.33 +0.42 +0.46 +0.2 +0.4 +dictHas +1 +1 +1 +0 +select columns from dictionary +allColumns +1 2019-05-05 2019-05-20 0.33 +1 2019-05-21 2019-05-30 0.42 +2 2019-05-21 2019-05-30 0.46 +noColumns +1 +1 +1 +onlySpecificColumns +1 2019-05-05 0.33 +1 2019-05-21 0.42 +2 2019-05-21 0.46 +onlySpecificColumn +0.33 +0.42 +0.46 +Dictionary nullable +dictGet +0.33 +0.42 +\N +0.2 +0.4 +dictHas +1 +1 +1 +0 +select columns from dictionary +allColumns +1 2019-05-05 2019-05-20 0.33 +1 2019-05-21 2019-05-30 0.42 +2 2019-05-21 2019-05-30 \N +noColumns +1 +1 +1 +onlySpecificColumns +1 2019-05-05 0.33 +1 2019-05-21 0.42 +2 2019-05-21 \N +onlySpecificColumn +0.33 +0.42 +\N diff --git a/tests/queries/0_stateless/01676_range_hashed_dictionary.sql b/tests/queries/0_stateless/01676_range_hashed_dictionary.sql new file mode 100644 index 00000000000..455e850b239 --- /dev/null +++ b/tests/queries/0_stateless/01676_range_hashed_dictionary.sql @@ -0,0 +1,110 @@ +DROP DATABASE IF EXISTS database_for_range_dict; + +CREATE DATABASE database_for_range_dict; + +CREATE TABLE database_for_range_dict.date_table +( + CountryID UInt64, + StartDate Date, + EndDate Date, + Tax Float64 +) +ENGINE = MergeTree() +ORDER BY CountryID; + +INSERT INTO database_for_range_dict.date_table VALUES(1, toDate('2019-05-05'), toDate('2019-05-20'), 0.33); +INSERT INTO database_for_range_dict.date_table VALUES(1, toDate('2019-05-21'), toDate('2019-05-30'), 0.42); +INSERT INTO database_for_range_dict.date_table VALUES(2, toDate('2019-05-21'), toDate('2019-05-30'), 0.46); + +CREATE DICTIONARY database_for_range_dict.range_dictionary +( + CountryID UInt64, + StartDate Date, + EndDate Date, + Tax Float64 DEFAULT 0.2 +) +PRIMARY KEY CountryID +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() USER 'default' TABLE 'date_table' DB 'database_for_range_dict')) +LIFETIME(MIN 1 MAX 1000) +LAYOUT(RANGE_HASHED()) +RANGE(MIN StartDate MAX EndDate); + +SELECT 'Dictionary not nullable'; +SELECT 'dictGet'; +SELECT dictGet('database_for_range_dict.range_dictionary', 'Tax', toUInt64(1), toDate('2019-05-15')); +SELECT dictGet('database_for_range_dict.range_dictionary', 'Tax', toUInt64(1), toDate('2019-05-29')); +SELECT dictGet('database_for_range_dict.range_dictionary', 'Tax', toUInt64(2), toDate('2019-05-29')); +SELECT dictGet('database_for_range_dict.range_dictionary', 'Tax', toUInt64(2), toDate('2019-05-31')); +SELECT dictGetOrDefault('database_for_range_dict.range_dictionary', 'Tax', toUInt64(2), toDate('2019-05-31'), 0.4); +SELECT 'dictHas'; +SELECT dictHas('database_for_range_dict.range_dictionary', toUInt64(1), toDate('2019-05-15')); +SELECT dictHas('database_for_range_dict.range_dictionary', toUInt64(1), toDate('2019-05-29')); +SELECT dictHas('database_for_range_dict.range_dictionary', toUInt64(2), toDate('2019-05-29')); +SELECT dictHas('database_for_range_dict.range_dictionary', toUInt64(2), toDate('2019-05-31')); +SELECT 'select columns from dictionary'; +SELECT 'allColumns'; +SELECT * FROM 
database_for_range_dict.range_dictionary; +SELECT 'noColumns'; +SELECT 1 FROM database_for_range_dict.range_dictionary; +SELECT 'onlySpecificColumns'; +SELECT CountryID, StartDate, Tax FROM database_for_range_dict.range_dictionary; +SELECT 'onlySpecificColumn'; +SELECT Tax FROM database_for_range_dict.range_dictionary; + +DROP TABLE database_for_range_dict.date_table; +DROP DICTIONARY database_for_range_dict.range_dictionary; + +CREATE TABLE database_for_range_dict.date_table +( + CountryID UInt64, + StartDate Date, + EndDate Date, + Tax Nullable(Float64) +) +ENGINE = MergeTree() +ORDER BY CountryID; + +INSERT INTO database_for_range_dict.date_table VALUES(1, toDate('2019-05-05'), toDate('2019-05-20'), 0.33); +INSERT INTO database_for_range_dict.date_table VALUES(1, toDate('2019-05-21'), toDate('2019-05-30'), 0.42); +INSERT INTO database_for_range_dict.date_table VALUES(2, toDate('2019-05-21'), toDate('2019-05-30'), NULL); + +CREATE DICTIONARY database_for_range_dict.range_dictionary_nullable +( + CountryID UInt64, + StartDate Date, + EndDate Date, + Tax Nullable(Float64) DEFAULT 0.2 +) +PRIMARY KEY CountryID +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() USER 'default' TABLE 'date_table' DB 'database_for_range_dict')) +LIFETIME(MIN 1 MAX 1000) +LAYOUT(RANGE_HASHED()) +RANGE(MIN StartDate MAX EndDate); + +SELECT 'Dictionary nullable'; +SELECT 'dictGet'; +SELECT dictGet('database_for_range_dict.range_dictionary_nullable', 'Tax', toUInt64(1), toDate('2019-05-15')); +SELECT dictGet('database_for_range_dict.range_dictionary_nullable', 'Tax', toUInt64(1), toDate('2019-05-29')); +SELECT dictGet('database_for_range_dict.range_dictionary_nullable', 'Tax', toUInt64(2), toDate('2019-05-29')); +SELECT dictGet('database_for_range_dict.range_dictionary_nullable', 'Tax', toUInt64(2), toDate('2019-05-31')); +SELECT dictGetOrDefault('database_for_range_dict.range_dictionary_nullable', 'Tax', toUInt64(2), toDate('2019-05-31'), 0.4); +SELECT 'dictHas'; +SELECT dictHas('database_for_range_dict.range_dictionary_nullable', toUInt64(1), toDate('2019-05-15')); +SELECT dictHas('database_for_range_dict.range_dictionary_nullable', toUInt64(1), toDate('2019-05-29')); +SELECT dictHas('database_for_range_dict.range_dictionary_nullable', toUInt64(2), toDate('2019-05-29')); +SELECT dictHas('database_for_range_dict.range_dictionary_nullable', toUInt64(2), toDate('2019-05-31')); +SELECT 'select columns from dictionary'; +SELECT 'allColumns'; +SELECT * FROM database_for_range_dict.range_dictionary_nullable; +SELECT 'noColumns'; +SELECT 1 FROM database_for_range_dict.range_dictionary_nullable; +SELECT 'onlySpecificColumns'; +SELECT CountryID, StartDate, Tax FROM database_for_range_dict.range_dictionary_nullable; +SELECT 'onlySpecificColumn'; +SELECT Tax FROM database_for_range_dict.range_dictionary_nullable; + +DROP TABLE database_for_range_dict.date_table; +DROP DICTIONARY database_for_range_dict.range_dictionary_nullable; + +DROP DATABASE database_for_range_dict; + diff --git a/tests/queries/0_stateless/01681_cache_dictionary_simple_key.sql b/tests/queries/0_stateless/01681_cache_dictionary_simple_key.sql index ee2cde963d7..f200ead341b 100644 --- a/tests/queries/0_stateless/01681_cache_dictionary_simple_key.sql +++ b/tests/queries/0_stateless/01681_cache_dictionary_simple_key.sql @@ -40,7 +40,7 @@ SELECT dictGetOrDefault('01681_database_for_cache_dictionary.cache_dictionary_si SELECT 'dictHas'; SELECT dictHas('01681_database_for_cache_dictionary.cache_dictionary_simple_key_simple_attributes', number) FROM 
system.numbers LIMIT 4; SELECT 'select all values as input stream'; -SELECT * FROM 01681_database_for_cache_dictionary.cache_dictionary_simple_key_simple_attributes; +SELECT * FROM 01681_database_for_cache_dictionary.cache_dictionary_simple_key_simple_attributes ORDER BY id; DROP DICTIONARY 01681_database_for_cache_dictionary.cache_dictionary_simple_key_simple_attributes; DROP TABLE 01681_database_for_cache_dictionary.simple_key_simple_attributes_source_table; @@ -84,7 +84,7 @@ SELECT dictGetOrDefault('01681_database_for_cache_dictionary.cache_dictionary_si SELECT 'dictHas'; SELECT dictHas('01681_database_for_cache_dictionary.cache_dictionary_simple_key_complex_attributes', number) FROM system.numbers LIMIT 4; SELECT 'select all values as input stream'; -SELECT * FROM 01681_database_for_cache_dictionary.cache_dictionary_simple_key_complex_attributes; +SELECT * FROM 01681_database_for_cache_dictionary.cache_dictionary_simple_key_complex_attributes ORDER BY id; DROP DICTIONARY 01681_database_for_cache_dictionary.cache_dictionary_simple_key_complex_attributes; DROP TABLE 01681_database_for_cache_dictionary.simple_key_complex_attributes_source_table; diff --git a/tests/queries/0_stateless/01682_cache_dictionary_complex_key.sql b/tests/queries/0_stateless/01682_cache_dictionary_complex_key.sql index 65c56090c47..4cc83412457 100644 --- a/tests/queries/0_stateless/01682_cache_dictionary_complex_key.sql +++ b/tests/queries/0_stateless/01682_cache_dictionary_complex_key.sql @@ -42,7 +42,7 @@ SELECT dictGetOrDefault('01682_database_for_cache_dictionary.cache_dictionary_co SELECT 'dictHas'; SELECT dictHas('01682_database_for_cache_dictionary.cache_dictionary_complex_key_simple_attributes', (number, concat('id_key_', toString(number)))) FROM system.numbers LIMIT 4; SELECT 'select all values as input stream'; -SELECT * FROM 01682_database_for_cache_dictionary.cache_dictionary_complex_key_simple_attributes; +SELECT * FROM 01682_database_for_cache_dictionary.cache_dictionary_complex_key_simple_attributes ORDER BY id; DROP DICTIONARY 01682_database_for_cache_dictionary.cache_dictionary_complex_key_simple_attributes; DROP TABLE 01682_database_for_cache_dictionary.complex_key_simple_attributes_source_table; @@ -89,7 +89,7 @@ SELECT dictGetOrDefault('01682_database_for_cache_dictionary.cache_dictionary_co SELECT 'dictHas'; SELECT dictHas('01682_database_for_cache_dictionary.cache_dictionary_complex_key_complex_attributes', (number, concat('id_key_', toString(number)))) FROM system.numbers LIMIT 4; SELECT 'select all values as input stream'; -SELECT * FROM 01682_database_for_cache_dictionary.cache_dictionary_complex_key_complex_attributes; +SELECT * FROM 01682_database_for_cache_dictionary.cache_dictionary_complex_key_complex_attributes ORDER BY id; DROP DICTIONARY 01682_database_for_cache_dictionary.cache_dictionary_complex_key_complex_attributes; DROP TABLE 01682_database_for_cache_dictionary.complex_key_complex_attributes_source_table; diff --git a/tests/queries/0_stateless/01684_insert_specify_shard_id.sql b/tests/queries/0_stateless/01684_insert_specify_shard_id.sql index ce1c7807b59..712fbfcc506 100644 --- a/tests/queries/0_stateless/01684_insert_specify_shard_id.sql +++ b/tests/queries/0_stateless/01684_insert_specify_shard_id.sql @@ -28,8 +28,8 @@ INSERT INTO x_dist SELECT * FROM numbers(10); -- { serverError 55 } INSERT INTO y_dist SELECT * FROM numbers(10); -- { serverError 55 } -- invalid shard id -INSERT INTO x_dist SELECT * FROM numbers(10) settings insert_shard_id = 3; -- { serverError 1003 } 
-INSERT INTO y_dist SELECT * FROM numbers(10) settings insert_shard_id = 3; -- { serverError 1003 } +INSERT INTO x_dist SELECT * FROM numbers(10) settings insert_shard_id = 3; -- { serverError 577 } +INSERT INTO y_dist SELECT * FROM numbers(10) settings insert_shard_id = 3; -- { serverError 577 } DROP TABLE x; DROP TABLE x_dist; diff --git a/tests/queries/0_stateless/01684_ssd_cache_dictionary_simple_key.sql b/tests/queries/0_stateless/01684_ssd_cache_dictionary_simple_key.sql index 3b327257fc4..9dbad1289f1 100644 --- a/tests/queries/0_stateless/01684_ssd_cache_dictionary_simple_key.sql +++ b/tests/queries/0_stateless/01684_ssd_cache_dictionary_simple_key.sql @@ -40,7 +40,7 @@ SELECT dictGetOrDefault('01684_database_for_cache_dictionary.cache_dictionary_si SELECT 'dictHas'; SELECT dictHas('01684_database_for_cache_dictionary.cache_dictionary_simple_key_simple_attributes', number) FROM system.numbers LIMIT 4; SELECT 'select all values as input stream'; -SELECT * FROM 01684_database_for_cache_dictionary.cache_dictionary_simple_key_simple_attributes; +SELECT * FROM 01684_database_for_cache_dictionary.cache_dictionary_simple_key_simple_attributes ORDER BY id; DROP DICTIONARY 01684_database_for_cache_dictionary.cache_dictionary_simple_key_simple_attributes; DROP TABLE 01684_database_for_cache_dictionary.simple_key_simple_attributes_source_table; @@ -84,7 +84,7 @@ SELECT dictGetOrDefault('01684_database_for_cache_dictionary.cache_dictionary_si SELECT 'dictHas'; SELECT dictHas('01684_database_for_cache_dictionary.cache_dictionary_simple_key_complex_attributes', number) FROM system.numbers LIMIT 4; SELECT 'select all values as input stream'; -SELECT * FROM 01684_database_for_cache_dictionary.cache_dictionary_simple_key_complex_attributes; +SELECT * FROM 01684_database_for_cache_dictionary.cache_dictionary_simple_key_complex_attributes ORDER BY id; DROP DICTIONARY 01684_database_for_cache_dictionary.cache_dictionary_simple_key_complex_attributes; DROP TABLE 01684_database_for_cache_dictionary.simple_key_complex_attributes_source_table; diff --git a/tests/queries/0_stateless/01685_ssd_cache_dictionary_complex_key.sql b/tests/queries/0_stateless/01685_ssd_cache_dictionary_complex_key.sql index 1757b136d3e..03a7e1d80df 100644 --- a/tests/queries/0_stateless/01685_ssd_cache_dictionary_complex_key.sql +++ b/tests/queries/0_stateless/01685_ssd_cache_dictionary_complex_key.sql @@ -42,7 +42,7 @@ SELECT dictGetOrDefault('01685_database_for_cache_dictionary.cache_dictionary_co SELECT 'dictHas'; SELECT dictHas('01685_database_for_cache_dictionary.cache_dictionary_complex_key_simple_attributes', (number, concat('id_key_', toString(number)))) FROM system.numbers LIMIT 4; SELECT 'select all values as input stream'; -SELECT * FROM 01685_database_for_cache_dictionary.cache_dictionary_complex_key_simple_attributes; +SELECT * FROM 01685_database_for_cache_dictionary.cache_dictionary_complex_key_simple_attributes ORDER BY id; DROP DICTIONARY 01685_database_for_cache_dictionary.cache_dictionary_complex_key_simple_attributes; DROP TABLE 01685_database_for_cache_dictionary.complex_key_simple_attributes_source_table; @@ -89,10 +89,10 @@ SELECT dictGetOrDefault('01685_database_for_cache_dictionary.cache_dictionary_co SELECT 'dictHas'; SELECT dictHas('01685_database_for_cache_dictionary.cache_dictionary_complex_key_complex_attributes', (number, concat('id_key_', toString(number)))) FROM system.numbers LIMIT 4; SELECT 'select all values as input stream'; -SELECT * FROM 
01685_database_for_cache_dictionary.cache_dictionary_complex_key_complex_attributes; +SELECT * FROM 01685_database_for_cache_dictionary.cache_dictionary_complex_key_complex_attributes ORDER BY id; DROP DICTIONARY 01685_database_for_cache_dictionary.cache_dictionary_complex_key_complex_attributes; DROP TABLE 01685_database_for_cache_dictionary.complex_key_complex_attributes_source_table; DROP DATABASE 01685_database_for_cache_dictionary; - + diff --git a/tests/queries/0_stateless/01686_event_time_microseconds_part_log.sql b/tests/queries/0_stateless/01686_event_time_microseconds_part_log.sql index a1b419527d4..4a653379ef1 100644 --- a/tests/queries/0_stateless/01686_event_time_microseconds_part_log.sql +++ b/tests/queries/0_stateless/01686_event_time_microseconds_part_log.sql @@ -8,13 +8,15 @@ CREATE TABLE table_with_single_pk ENGINE = MergeTree ORDER BY key; -INSERT INTO table_with_single_pk SELECT number, toString(number % 10) FROM numbers(10000000); +INSERT INTO table_with_single_pk SELECT number, toString(number % 10) FROM numbers(1000000); SYSTEM FLUSH LOGS; WITH ( SELECT (event_time, event_time_microseconds) FROM system.part_log + WHERE "table" = 'table_with_single_pk' + AND "database" = currentDatabase() ORDER BY event_time DESC LIMIT 1 ) AS time diff --git a/tests/queries/0_stateless/01691_DateTime64_clamp.reference b/tests/queries/0_stateless/01691_DateTime64_clamp.reference index 3adc9a17e5c..881ab4feff8 100644 --- a/tests/queries/0_stateless/01691_DateTime64_clamp.reference +++ b/tests/queries/0_stateless/01691_DateTime64_clamp.reference @@ -1,10 +1,11 @@ -- { echo } +-- These values are within the extended range of DateTime64 [1925-01-01, 2284-01-01) SELECT toTimeZone(toDateTime(-2, 2), 'Europe/Moscow'); -1970-01-01 03:00:00.00 +1970-01-01 02:59:58.00 SELECT toDateTime64(-2, 2, 'Europe/Moscow'); -1970-01-01 03:00:00.00 +1970-01-01 02:59:58.00 SELECT CAST(-1 AS DateTime64(0, 'Europe/Moscow')); -1970-01-01 03:00:00 +1970-01-01 02:59:59 SELECT CAST('2020-01-01 00:00:00.3' AS DateTime64(0, 'Europe/Moscow')); 2020-01-01 00:00:00 SELECT toDateTime64(bitShiftLeft(toUInt64(1), 33), 2, 'Europe/Moscow') FORMAT Null; @@ -13,5 +14,14 @@ SELECT toTimeZone(toDateTime(-2., 2), 'Europe/Moscow'); SELECT toDateTime64(-2., 2, 'Europe/Moscow'); 1970-01-01 03:00:00.00 SELECT toDateTime64(toFloat32(bitShiftLeft(toUInt64(1),33)), 2, 'Europe/Moscow'); -2106-02-07 09:00:00.00 +2106-02-07 09:28:16.00 SELECT toDateTime64(toFloat64(bitShiftLeft(toUInt64(1),33)), 2, 'Europe/Moscow') FORMAT Null; +-- These are outside of the extended range and hence clamped +SELECT toDateTime64(-1 * bitShiftLeft(toUInt64(1), 35), 2); +1925-01-01 02:00:00.00 +SELECT CAST(-1 * bitShiftLeft(toUInt64(1), 35) AS DateTime64); +1925-01-01 02:00:00.000 +SELECT CAST(bitShiftLeft(toUInt64(1), 35) AS DateTime64); +2282-12-31 03:00:00.000 +SELECT toDateTime64(bitShiftLeft(toUInt64(1), 35), 2); +2282-12-31 03:00:00.00 diff --git a/tests/queries/0_stateless/01691_DateTime64_clamp.sql b/tests/queries/0_stateless/01691_DateTime64_clamp.sql index 92d5a33328f..c77a66febb3 100644 --- a/tests/queries/0_stateless/01691_DateTime64_clamp.sql +++ b/tests/queries/0_stateless/01691_DateTime64_clamp.sql @@ -1,4 +1,5 @@ -- { echo } +-- These values are within the extended range of DateTime64 [1925-01-01, 2284-01-01) SELECT toTimeZone(toDateTime(-2, 2), 'Europe/Moscow'); SELECT toDateTime64(-2, 2, 'Europe/Moscow'); SELECT CAST(-1 AS DateTime64(0, 'Europe/Moscow')); @@ -8,3 +9,9 @@ SELECT toTimeZone(toDateTime(-2., 2), 'Europe/Moscow'); SELECT
toDateTime64(-2., 2, 'Europe/Moscow'); SELECT toDateTime64(toFloat32(bitShiftLeft(toUInt64(1),33)), 2, 'Europe/Moscow'); SELECT toDateTime64(toFloat64(bitShiftLeft(toUInt64(1),33)), 2, 'Europe/Moscow') FORMAT Null; + +-- These are outside of the extended range and hence clamped +SELECT toDateTime64(-1 * bitShiftLeft(toUInt64(1), 35), 2); +SELECT CAST(-1 * bitShiftLeft(toUInt64(1), 35) AS DateTime64); +SELECT CAST(bitShiftLeft(toUInt64(1), 35) AS DateTime64); +SELECT toDateTime64(bitShiftLeft(toUInt64(1), 35), 2); diff --git a/tests/queries/0_stateless/01699_timezoneOffset.reference b/tests/queries/0_stateless/01699_timezoneOffset.reference index 45f30314f5a..a1cc6391e6f 100644 --- a/tests/queries/0_stateless/01699_timezoneOffset.reference +++ b/tests/queries/0_stateless/01699_timezoneOffset.reference @@ -1,8 +1,8 @@ DST boundary test for Europe/Moscow: -0 1981-04-01 22:40:00 10800 355002000 -1 1981-04-01 22:50:00 10800 355002600 -2 1981-04-02 00:00:00 14400 355003200 -3 1981-04-02 00:10:00 14400 355003800 +0 1981-04-01 22:40:00 14400 354998400 +1 1981-04-01 22:50:00 14400 354999000 +2 1981-04-01 23:00:00 14400 354999600 +3 1981-04-01 23:10:00 14400 355000200 0 1981-09-30 23:00:00 14400 370724400 1 1981-09-30 23:10:00 14400 370725000 2 1981-09-30 23:20:00 14400 370725600 @@ -22,10 +22,10 @@ DST boundary test for Europe/Moscow: 16 1981-10-01 00:40:00 10800 370734000 17 1981-10-01 00:50:00 10800 370734600 DST boundary test for Asia/Tehran: -0 2020-03-21 22:40:00 12600 1584817800 -1 2020-03-21 22:50:00 12600 1584818400 -2 2020-03-22 00:00:00 16200 1584819000 -3 2020-03-22 00:10:00 16200 1584819600 +0 2020-03-21 22:40:00 16200 1584814200 +1 2020-03-21 22:50:00 16200 1584814800 +2 2020-03-21 23:00:00 16200 1584815400 +3 2020-03-21 23:10:00 16200 1584816000 0 2020-09-20 23:00:00 16200 1600626600 1 2020-09-20 23:10:00 16200 1600627200 2 2020-09-20 23:20:00 16200 1600627800 diff --git a/tests/queries/0_stateless/01700_deltasum.reference b/tests/queries/0_stateless/01700_deltasum.reference index be5b176c627..6be953e2b2d 100644 --- a/tests/queries/0_stateless/01700_deltasum.reference +++ b/tests/queries/0_stateless/01700_deltasum.reference @@ -7,3 +7,4 @@ 2 2.25 6.5 +7 diff --git a/tests/queries/0_stateless/01700_deltasum.sql b/tests/queries/0_stateless/01700_deltasum.sql index 93edb2e477d..83d5e0439d2 100644 --- a/tests/queries/0_stateless/01700_deltasum.sql +++ b/tests/queries/0_stateless/01700_deltasum.sql @@ -7,3 +7,4 @@ select deltaSumMerge(rows) from (select deltaSumState(arrayJoin([0, 1])) as rows select deltaSumMerge(rows) from (select deltaSumState(arrayJoin([4, 5])) as rows union all select deltaSumState(arrayJoin([0, 1])) as rows); select deltaSum(arrayJoin([2.25, 3, 4.5])); select deltaSumMerge(rows) from (select deltaSumState(arrayJoin([0.1, 0.3, 0.5])) as rows union all select deltaSumState(arrayJoin([4.1, 5.1, 6.6])) as rows); +select deltaSumMerge(rows) from (select deltaSumState(arrayJoin([3, 5])) as rows union all select deltaSumState(arrayJoin([1, 2])) as rows union all select deltaSumState(arrayJoin([4, 6])) as rows); diff --git a/tests/queries/0_stateless/01700_point_in_polygon_ubsan.sql b/tests/queries/0_stateless/01700_point_in_polygon_ubsan.sql index 97db40ab65e..72317df5439 100644 --- a/tests/queries/0_stateless/01700_point_in_polygon_ubsan.sql +++ b/tests/queries/0_stateless/01700_point_in_polygon_ubsan.sql @@ -1 +1 @@ -SELECT pointInPolygon((0, 0), [[(0, 0), (10, 10), (256, -9223372036854775808)]]) FORMAT Null; +SELECT pointInPolygon((0, 0), [[(0, 0), (10, 10), (256,
-9223372036854775808)]]) FORMAT Null; -- { serverError 36 } diff --git a/tests/queries/0_stateless/01700_system_zookeeper_path_in.reference b/tests/queries/0_stateless/01700_system_zookeeper_path_in.reference index 2fc177c812e..e491dd9e091 100644 --- a/tests/queries/0_stateless/01700_system_zookeeper_path_in.reference +++ b/tests/queries/0_stateless/01700_system_zookeeper_path_in.reference @@ -1,16 +1,14 @@ block_numbers blocks -1 +r1 ======== block_numbers blocks -1 +r1 ======== block_numbers blocks ======== -1 failed_parts last_part -leader_election-0000000000 parallel diff --git a/tests/queries/0_stateless/01700_system_zookeeper_path_in.sql b/tests/queries/0_stateless/01700_system_zookeeper_path_in.sql index d4126098c7c..02457a956a1 100644 --- a/tests/queries/0_stateless/01700_system_zookeeper_path_in.sql +++ b/tests/queries/0_stateless/01700_system_zookeeper_path_in.sql @@ -3,17 +3,20 @@ DROP TABLE IF EXISTS sample_table; CREATE TABLE sample_table ( key UInt64 ) -ENGINE ReplicatedMergeTree('/clickhouse/01700_system_zookeeper_path_in', '1') +ENGINE ReplicatedMergeTree('/clickhouse/01700_system_zookeeper_path_in/{shard}', '{replica}') ORDER BY tuple(); -SELECT name FROM system.zookeeper WHERE path = '/clickhouse/01700_system_zookeeper_path_in' AND name like 'block%' ORDER BY name; -SELECT name FROM system.zookeeper WHERE path = '/clickhouse/01700_system_zookeeper_path_in/replicas' ORDER BY name; +SELECT name FROM system.zookeeper WHERE path = '/clickhouse/01700_system_zookeeper_path_in/s1' AND name like 'block%' ORDER BY name; +SELECT name FROM system.zookeeper WHERE path = '/clickhouse/01700_system_zookeeper_path_in/s1/replicas' AND name LIKE '%r1%' ORDER BY name; + SELECT '========'; -SELECT name FROM system.zookeeper WHERE path IN ('/clickhouse/01700_system_zookeeper_path_in') AND name LIKE 'block%' ORDER BY name; -SELECT name FROM system.zookeeper WHERE path IN ('/clickhouse/01700_system_zookeeper_path_in/replicas') ORDER BY name; +SELECT name FROM system.zookeeper WHERE path IN ('/clickhouse/01700_system_zookeeper_path_in/s1') AND name LIKE 'block%' ORDER BY name; +SELECT name FROM system.zookeeper WHERE path IN ('/clickhouse/01700_system_zookeeper_path_in/s1/replicas') AND name LIKE '%r1%' ORDER BY name; SELECT '========'; -SELECT name FROM system.zookeeper WHERE path IN ('/clickhouse/01700_system_zookeeper_path_in','/clickhouse/01700_system_zookeeper_path_in/replicas') AND name LIKE 'block%' ORDER BY name; +SELECT name FROM system.zookeeper WHERE path IN ('/clickhouse/01700_system_zookeeper_path_in/s1', + '/clickhouse/01700_system_zookeeper_path_in/s1/replicas') AND name LIKE 'block%' ORDER BY name; SELECT '========'; -SELECT name FROM system.zookeeper WHERE path IN (SELECT concat('/clickhouse/01700_system_zookeeper_path_in/', name) FROM system.zookeeper WHERE (path = '/clickhouse/01700_system_zookeeper_path_in')) ORDER BY name; +SELECT name FROM system.zookeeper WHERE path IN (SELECT concat('/clickhouse/01700_system_zookeeper_path_in/s1/', name) + FROM system.zookeeper WHERE (name != 'replicas' AND name NOT LIKE 'leader_election%' AND path = '/clickhouse/01700_system_zookeeper_path_in/s1')) ORDER BY name; DROP TABLE IF EXISTS sample_table; diff --git a/tests/queries/0_stateless/01701_parallel_parsing_infinite_segmentation.sh b/tests/queries/0_stateless/01701_parallel_parsing_infinite_segmentation.sh index d3e634eb560..edc4f6916ff 100755 --- a/tests/queries/0_stateless/01701_parallel_parsing_infinite_segmentation.sh +++
b/tests/queries/0_stateless/01701_parallel_parsing_infinite_segmentation.sh @@ -1,9 +1,11 @@ -#!/usr/bin/env bash - -CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) -# shellcheck source=../shell_config.sh -. "$CURDIR"/../shell_config.sh +#!/usr/bin/env bash + +CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) +# shellcheck source=../shell_config.sh +. "$CURDIR"/../shell_config.sh ${CLICKHOUSE_CLIENT} -q "create table insert_big_json(a String, b String) engine=MergeTree() order by tuple()"; -python3 -c "[print('{{\"a\":\"{}\", \"b\":\"{}\"'.format('clickhouse'* 1000000, 'dbms' * 1000000)) for i in range(10)]; [print('{{\"a\":\"{}\", \"b\":\"{}\"}}'.format('clickhouse'* 100000, 'dbms' * 100000)) for i in range(10)]" 2>/dev/null | ${CLICKHOUSE_CLIENT} --input_format_parallel_parsing=1 --max_memory_usage=0 -q "insert into insert_big_json FORMAT JSONEachRow" 2>&1 | grep -q "min_chunk_bytes_for_parallel_parsing" && echo "Ok." || echo "FAIL" ||: \ No newline at end of file +python3 -c "[print('{{\"a\":\"{}\", \"b\":\"{}\"'.format('clickhouse'* 1000000, 'dbms' * 1000000)) for i in range(10)]; [print('{{\"a\":\"{}\", \"b\":\"{}\"}}'.format('clickhouse'* 100000, 'dbms' * 100000)) for i in range(10)]" 2>/dev/null | ${CLICKHOUSE_CLIENT} --input_format_parallel_parsing=1 --max_memory_usage=0 -q "insert into insert_big_json FORMAT JSONEachRow" 2>&1 | grep -q "min_chunk_bytes_for_parallel_parsing" && echo "Ok." || echo "FAIL" ||: + +${CLICKHOUSE_CLIENT} -q "drop table insert_big_json" diff --git a/tests/queries/0_stateless/01702_system_query_log.reference b/tests/queries/0_stateless/01702_system_query_log.reference new file mode 100644 index 00000000000..1f329feac22 --- /dev/null +++ b/tests/queries/0_stateless/01702_system_query_log.reference @@ -0,0 +1,86 @@ +DROP queries and also a cleanup before the test +CREATE queries +SET queries +ALTER TABLE queries +SYSTEM queries +SHOW queries +GRANT queries +REVOKE queries +Misc queries +ACTUAL LOG CONTENT: +Select SELECT \'DROP queries and also a cleanup before the test\'; +Drop DROP DATABASE IF EXISTS sqllt SYNC; + DROP USER IF EXISTS sqllt_user; + DROP ROLE IF EXISTS sqllt_role; + DROP POLICY IF EXISTS sqllt_policy ON sqllt.table, sqllt.view, sqllt.dictionary; + DROP ROW POLICY IF EXISTS sqllt_row_policy ON sqllt.table, sqllt.view, sqllt.dictionary; + DROP QUOTA IF EXISTS sqllt_quota; + DROP SETTINGS PROFILE IF EXISTS sqllt_settings_profile; +Select SELECT \'CREATE queries\'; +Create CREATE DATABASE sqllt; +Create CREATE TABLE sqllt.table\n(\n i UInt8, s String\n)\nENGINE = MergeTree PARTITION BY tuple() ORDER BY tuple(); +Create CREATE VIEW sqllt.view AS SELECT i, s FROM sqllt.table; +Create CREATE DICTIONARY sqllt.dictionary (key UInt64, value UInt64) PRIMARY KEY key SOURCE(CLICKHOUSE(DB \'sqllt\' TABLE \'table\' HOST \'localhost\' PORT 9001)) LIFETIME(0) LAYOUT(FLAT()); + CREATE USER sqllt_user IDENTIFIED WITH PLAINTEXT_PASSWORD BY \'password\'; + CREATE ROLE sqllt_role; + CREATE POLICY sqllt_policy ON sqllt.table, sqllt.view, sqllt.dictionary AS PERMISSIVE TO ALL; + CREATE POLICY sqllt_row_policy ON sqllt.table, sqllt.view, sqllt.dictionary AS PERMISSIVE TO ALL; + CREATE QUOTA sqllt_quota KEYED BY user_name TO sqllt_role; + CREATE SETTINGS PROFILE sqllt_settings_profile SETTINGS interactive_delay = 200000; +Grant GRANT sqllt_role TO sqllt_user; +Select SELECT \'SET queries\'; + SET log_profile_events=false; + SET DEFAULT ROLE sqllt_role TO sqllt_user; +Select -- SET ROLE sqllt_role; -- tests are executed by user `default` which is defined in XML 
and is impossible to update.\n\nSELECT \'ALTER TABLE queries\'; +Alter ALTER TABLE sqllt.table ADD COLUMN new_col UInt32 DEFAULT 123456789; +Alter ALTER TABLE sqllt.table COMMENT COLUMN new_col \'dummy column with a comment\'; +Alter ALTER TABLE sqllt.table CLEAR COLUMN new_col; +Alter ALTER TABLE sqllt.table MODIFY COLUMN new_col DateTime DEFAULT \'2015-05-18 07:40:13\'; +Alter ALTER TABLE sqllt.table MODIFY COLUMN new_col REMOVE COMMENT; +Alter ALTER TABLE sqllt.table RENAME COLUMN new_col TO the_new_col; +Alter ALTER TABLE sqllt.table DROP COLUMN the_new_col; +Alter ALTER TABLE sqllt.table UPDATE i = i + 1 WHERE 1; +Alter ALTER TABLE sqllt.table DELETE WHERE i > 65535; +Select -- not done, seems too hard, so I\'ve skipped queries of ALTER-X, where X is:\n-- PARTITION\n-- ORDER BY\n-- SAMPLE BY\n-- INDEX\n-- CONSTRAINT\n-- TTL\n-- USER\n-- QUOTA\n-- ROLE\n-- ROW POLICY\n-- SETTINGS PROFILE\n\nSELECT \'SYSTEM queries\'; +System SYSTEM FLUSH LOGS; +System SYSTEM STOP MERGES sqllt.table +System SYSTEM START MERGES sqllt.table +System SYSTEM STOP TTL MERGES sqllt.table +System SYSTEM START TTL MERGES sqllt.table +System SYSTEM STOP MOVES sqllt.table +System SYSTEM START MOVES sqllt.table +System SYSTEM STOP FETCHES sqllt.table +System SYSTEM START FETCHES sqllt.table +System SYSTEM STOP REPLICATED SENDS sqllt.table +System SYSTEM START REPLICATED SENDS sqllt.table +Select -- SYSTEM RELOAD DICTIONARY sqllt.dictionary; -- temporarily out of order: Code: 210, Connection refused (localhost:9001) (version 21.3.1.1)\n-- DROP REPLICA\n-- haha, no\n-- SYSTEM KILL;\n-- SYSTEM SHUTDOWN;\n\n-- Since we don\'t really care about the actual output, suppress it with `FORMAT Null`.\nSELECT \'SHOW queries\'; + SHOW CREATE TABLE sqllt.table FORMAT Null; + SHOW CREATE DICTIONARY sqllt.dictionary FORMAT Null; + SHOW DATABASES LIKE \'sqllt\' FORMAT Null; + SHOW TABLES FROM sqllt FORMAT Null; + SHOW DICTIONARIES FROM sqllt FORMAT Null; + SHOW GRANTS FORMAT Null; + SHOW GRANTS FOR sqllt_user FORMAT Null; + SHOW CREATE USER sqllt_user FORMAT Null; + SHOW CREATE ROLE sqllt_role FORMAT Null; + SHOW CREATE POLICY sqllt_policy FORMAT Null; + SHOW CREATE ROW POLICY sqllt_row_policy FORMAT Null; + SHOW CREATE QUOTA sqllt_quota FORMAT Null; + SHOW CREATE SETTINGS PROFILE sqllt_settings_profile FORMAT Null; +Select SELECT \'GRANT queries\'; +Grant GRANT SELECT ON sqllt.table TO sqllt_user; +Grant GRANT DROP ON sqllt.view TO sqllt_user; +Select SELECT \'REVOKE queries\'; +Revoke REVOKE SELECT ON sqllt.table FROM sqllt_user; +Revoke REVOKE DROP ON sqllt.view FROM sqllt_user; +Select SELECT \'Misc queries\'; + DESCRIBE TABLE sqllt.table FORMAT Null; + CHECK TABLE sqllt.table FORMAT Null; +Drop DETACH TABLE sqllt.table; +Create ATTACH TABLE sqllt.table; +Rename RENAME TABLE sqllt.table TO sqllt.table_new; +Rename RENAME TABLE sqllt.table_new TO sqllt.table; +Drop TRUNCATE TABLE sqllt.table; +Drop DROP TABLE sqllt.table SYNC; + SET log_comment=\'\'; +DROP queries and also a cleanup after the test diff --git a/tests/queries/0_stateless/01702_system_query_log.sql b/tests/queries/0_stateless/01702_system_query_log.sql new file mode 100644 index 00000000000..e3ebf97edb7 --- /dev/null +++ b/tests/queries/0_stateless/01702_system_query_log.sql @@ -0,0 +1,146 @@ +-- fire all kinds of queries and then check whether they are present in system.query_log +SET log_comment='system.query_log logging test'; + +SELECT 'DROP queries and also a cleanup before the test'; +DROP DATABASE IF EXISTS sqllt SYNC; +DROP USER IF EXISTS sqllt_user; +DROP
ROLE IF EXISTS sqllt_role; +DROP POLICY IF EXISTS sqllt_policy ON sqllt.table, sqllt.view, sqllt.dictionary; +DROP ROW POLICY IF EXISTS sqllt_row_policy ON sqllt.table, sqllt.view, sqllt.dictionary; +DROP QUOTA IF EXISTS sqllt_quota; +DROP SETTINGS PROFILE IF EXISTS sqllt_settings_profile; + +SELECT 'CREATE queries'; +CREATE DATABASE sqllt; + +CREATE TABLE sqllt.table +( + i UInt8, s String +) +ENGINE = MergeTree PARTITION BY tuple() ORDER BY tuple(); + +CREATE VIEW sqllt.view AS SELECT i, s FROM sqllt.table; +CREATE DICTIONARY sqllt.dictionary (key UInt64, value UInt64) PRIMARY KEY key SOURCE(CLICKHOUSE(DB 'sqllt' TABLE 'table' HOST 'localhost' PORT 9001)) LIFETIME(0) LAYOUT(FLAT()); + +CREATE USER sqllt_user IDENTIFIED WITH PLAINTEXT_PASSWORD BY 'password'; +CREATE ROLE sqllt_role; + +CREATE POLICY sqllt_policy ON sqllt.table, sqllt.view, sqllt.dictionary AS PERMISSIVE TO ALL; +CREATE POLICY sqllt_row_policy ON sqllt.table, sqllt.view, sqllt.dictionary AS PERMISSIVE TO ALL; + +CREATE QUOTA sqllt_quota KEYED BY user_name TO sqllt_role; +CREATE SETTINGS PROFILE sqllt_settings_profile SETTINGS interactive_delay = 200000; + +GRANT sqllt_role TO sqllt_user; + + +SELECT 'SET queries'; +SET log_profile_events=false; +SET DEFAULT ROLE sqllt_role TO sqllt_user; +-- SET ROLE sqllt_role; -- tests are executed by user `default` which is defined in XML and is impossible to update. + +SELECT 'ALTER TABLE queries'; +ALTER TABLE sqllt.table ADD COLUMN new_col UInt32 DEFAULT 123456789; +ALTER TABLE sqllt.table COMMENT COLUMN new_col 'dummy column with a comment'; +ALTER TABLE sqllt.table CLEAR COLUMN new_col; +ALTER TABLE sqllt.table MODIFY COLUMN new_col DateTime DEFAULT '2015-05-18 07:40:13'; +ALTER TABLE sqllt.table MODIFY COLUMN new_col REMOVE COMMENT; +ALTER TABLE sqllt.table RENAME COLUMN new_col TO the_new_col; +ALTER TABLE sqllt.table DROP COLUMN the_new_col; +ALTER TABLE sqllt.table UPDATE i = i + 1 WHERE 1; +ALTER TABLE sqllt.table DELETE WHERE i > 65535; + +-- not done, seems too hard, so I've skipped queries of ALTER-X, where X is: +-- PARTITION +-- ORDER BY +-- SAMPLE BY +-- INDEX +-- CONSTRAINT +-- TTL +-- USER +-- QUOTA +-- ROLE +-- ROW POLICY +-- SETTINGS PROFILE + +SELECT 'SYSTEM queries'; +SYSTEM FLUSH LOGS; +SYSTEM STOP MERGES sqllt.table; +SYSTEM START MERGES sqllt.table; +SYSTEM STOP TTL MERGES sqllt.table; +SYSTEM START TTL MERGES sqllt.table; +SYSTEM STOP MOVES sqllt.table; +SYSTEM START MOVES sqllt.table; +SYSTEM STOP FETCHES sqllt.table; +SYSTEM START FETCHES sqllt.table; +SYSTEM STOP REPLICATED SENDS sqllt.table; +SYSTEM START REPLICATED SENDS sqllt.table; + +-- SYSTEM RELOAD DICTIONARY sqllt.dictionary; -- temporarily out of order: Code: 210, Connection refused (localhost:9001) (version 21.3.1.1) +-- DROP REPLICA +-- haha, no +-- SYSTEM KILL; +-- SYSTEM SHUTDOWN; + +-- Since we don't really care about the actual output, suppress it with `FORMAT Null`.
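+-- (Editorial aside, not part of the original test: a minimal sketch of the log_comment
+-- filtering idea this file relies on. Everything below is illustrative and 'my-probe' is a
+-- made-up comment value; note that SQL comments become part of the logged query text,
+-- which is why the reference file above quotes them verbatim.
+--     SET log_comment = 'my-probe';
+--     SELECT 1 FORMAT Null;
+--     SYSTEM FLUSH LOGS;
+--     SELECT query_kind, query FROM system.query_log
+--     WHERE log_comment = 'my-probe' AND type = 'QueryStart' AND event_date >= yesterday();
+-- )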
+SELECT 'SHOW queries'; + +SHOW CREATE TABLE sqllt.table FORMAT Null; +SHOW CREATE DICTIONARY sqllt.dictionary FORMAT Null; +SHOW DATABASES LIKE 'sqllt' FORMAT Null; +SHOW TABLES FROM sqllt FORMAT Null; +SHOW DICTIONARIES FROM sqllt FORMAT Null; +SHOW GRANTS FORMAT Null; +SHOW GRANTS FOR sqllt_user FORMAT Null; +SHOW CREATE USER sqllt_user FORMAT Null; +SHOW CREATE ROLE sqllt_role FORMAT Null; +SHOW CREATE POLICY sqllt_policy FORMAT Null; +SHOW CREATE ROW POLICY sqllt_row_policy FORMAT Null; +SHOW CREATE QUOTA sqllt_quota FORMAT Null; +SHOW CREATE SETTINGS PROFILE sqllt_settings_profile FORMAT Null; + +SELECT 'GRANT queries'; +GRANT SELECT ON sqllt.table TO sqllt_user; +GRANT DROP ON sqllt.view TO sqllt_user; + +SELECT 'REVOKE queries'; +REVOKE SELECT ON sqllt.table FROM sqllt_user; +REVOKE DROP ON sqllt.view FROM sqllt_user; + +SELECT 'Misc queries'; +DESCRIBE TABLE sqllt.table FORMAT Null; + +CHECK TABLE sqllt.table FORMAT Null; +DETACH TABLE sqllt.table; +ATTACH TABLE sqllt.table; + +RENAME TABLE sqllt.table TO sqllt.table_new; +RENAME TABLE sqllt.table_new TO sqllt.table; +TRUNCATE TABLE sqllt.table; +DROP TABLE sqllt.table SYNC; + +SET log_comment=''; +--------------------------------------------------------------------------------------------------- -- Now get all logs related to this test --------------------------------------------------------------------------------------------------- + +SYSTEM FLUSH LOGS; +SELECT 'ACTUAL LOG CONTENT:'; + +-- Try to filter out all possible previous junk events by excluding old log entries. +SELECT query_kind, query FROM system.query_log +WHERE + log_comment LIKE '%system.query_log%' AND type == 'QueryStart' AND event_date >= yesterday() + AND current_database == currentDatabase() +ORDER BY event_time_microseconds; + + +-- cleanup +SELECT 'DROP queries and also a cleanup after the test'; +DROP DATABASE IF EXISTS sqllt; +DROP USER IF EXISTS sqllt_user; +DROP ROLE IF EXISTS sqllt_role; +DROP POLICY IF EXISTS sqllt_policy ON sqllt.table, sqllt.view, sqllt.dictionary; +DROP ROW POLICY IF EXISTS sqllt_row_policy ON sqllt.table, sqllt.view, sqllt.dictionary; +DROP QUOTA IF EXISTS sqllt_quota; +DROP SETTINGS PROFILE IF EXISTS sqllt_settings_profile; diff --git a/tests/queries/0_stateless/01702_toDateTime_from_string_clamping.reference b/tests/queries/0_stateless/01702_toDateTime_from_string_clamping.reference index 228086615da..7e8307d66a6 100644 --- a/tests/queries/0_stateless/01702_toDateTime_from_string_clamping.reference +++ b/tests/queries/0_stateless/01702_toDateTime_from_string_clamping.reference @@ -1,9 +1,9 @@ -- { echo } SELECT toString(toDateTime('-922337203.6854775808', 1)); -2106-02-07 15:41:33.6 +1940-10-09 22:13:17.6 SELECT toString(toDateTime('9922337203.6854775808', 1)); -2104-12-30 00:50:11.6 +2283-11-11 23:46:43.6 SELECT toDateTime64(CAST('10000000000.1' AS Decimal64(1)), 1); -2106-02-07 20:50:08.1 +2283-11-11 23:46:40.1 SELECT toDateTime64(CAST('-10000000000.1' AS Decimal64(1)), 1); -2011-12-23 00:38:08.1 +1925-01-01 00:00:00.1 diff --git a/tests/queries/0_stateless/01709_inactive_parts_to_delay_throw.sql b/tests/queries/0_stateless/01709_inactive_parts_to_delay_throw.sql deleted file mode 100644 index fad890c4807..00000000000 --- a/tests/queries/0_stateless/01709_inactive_parts_to_delay_throw.sql +++ /dev/null @@ -1,12 +0,0 @@ -drop table if exists x; - -create table x (i int) engine MergeTree order by i settings old_parts_lifetime = 10000000000, min_bytes_for_wide_part = 0, inactive_parts_to_throw_insert = 1; - -insert into x
values (1); -insert into x values (2); - -optimize table x final; - -insert into x values (3); -- { serverError 252; } - -drop table if exists x; diff --git a/tests/queries/0_stateless/01709_inactive_parts_to_throw_insert.reference b/tests/queries/0_stateless/01709_inactive_parts_to_throw_insert.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01709_inactive_parts_to_throw_insert.sql b/tests/queries/0_stateless/01709_inactive_parts_to_throw_insert.sql new file mode 100644 index 00000000000..6de0d4f4e0c --- /dev/null +++ b/tests/queries/0_stateless/01709_inactive_parts_to_throw_insert.sql @@ -0,0 +1,12 @@ +drop table if exists data_01709; + +create table data_01709 (i int) engine MergeTree order by i settings old_parts_lifetime = 10000000000, min_bytes_for_wide_part = 0, inactive_parts_to_throw_insert = 1; + +insert into data_01709 values (1); +insert into data_01709 values (2); + +optimize table data_01709 final; + +insert into data_01709 values (3); -- { serverError 252 } + +drop table data_01709; diff --git a/tests/queries/0_stateless/01711_cte_subquery_fix.sql b/tests/queries/0_stateless/01711_cte_subquery_fix.sql index ddea548eada..10ad9019209 100644 --- a/tests/queries/0_stateless/01711_cte_subquery_fix.sql +++ b/tests/queries/0_stateless/01711_cte_subquery_fix.sql @@ -1,3 +1,7 @@ drop table if exists t; create table t engine = Memory as with cte as (select * from numbers(10)) select * from cte; drop table t; + +drop table if exists view1; +create view view1 as with t as (select number n from numbers(3)) select n from t; +drop table view1; diff --git a/tests/queries/0_stateless/01715_table_function_view_fix.sql b/tests/queries/0_stateless/01715_table_function_view_fix.sql index de5150b7b70..b96609391b5 100644 --- a/tests/queries/0_stateless/01715_table_function_view_fix.sql +++ b/tests/queries/0_stateless/01715_table_function_view_fix.sql @@ -1 +1,3 @@ SELECT view(SELECT 1); -- { clientError 62 } + +SELECT sumIf(dummy, dummy) FROM remote('127.0.0.{1,2}', numbers(2, 100), view(SELECT CAST(NULL, 'Nullable(UInt8)') AS dummy FROM system.one)); -- { serverError 183 } diff --git a/tests/queries/0_stateless/01720_country_intersection.reference b/tests/queries/0_stateless/01720_country_intersection.reference new file mode 100644 index 00000000000..5b7cee92bdc --- /dev/null +++ b/tests/queries/0_stateless/01720_country_intersection.reference @@ -0,0 +1,104 @@ +Dhekelia Sovereign Base Area Dhekelia Sovereign Base Area
[[[(33.847423,34.94245),(33.819672,34.964748),(33.80421,34.972602),(33.781896,34.976212),(33.784945,34.976212),(33.788046,34.976988),(33.7928,34.977763),(33.79435,34.977763),(33.791146,34.982414),(33.786495,34.984687),(33.782568,34.984687),(33.777917,34.984687),(33.77399,34.988666),(33.766135,34.990268),(33.761484,34.990268),(33.75921,34.988666),(33.765411,34.985566),(33.769339,34.983964),(33.770889,34.980088),(33.77554,34.980088),(33.780191,34.979313),(33.780986,34.976338),(33.780935,34.976345),(33.760427,34.979682),(33.717296,34.977769),(33.70152,34.97289),(33.702935,34.987943),(33.711461,34.985566),(33.71544,34.997296),(33.699731,35.002722),(33.69663,35.008975),(33.705312,35.015228),(33.702211,35.022256),(33.685003,35.029284),(33.679444,35.033891),(33.679435,35.033899),(33.675649,35.037036),(33.674099,35.046441),(33.678853,35.055794),(33.69446,35.058171),(33.705312,35.06675),(33.714717,35.06675),(33.719368,35.06277),(33.711461,35.040963),(33.707585,35.029284),(33.718489,35.032385),(33.739677,35.047216),(33.766135,35.03161),(33.77554,35.040188),(33.786495,35.038534),(33.79435,35.040188),(33.798278,35.052642),(33.824012,35.06675),(33.834865,35.063597),(33.842719,35.056621),(33.853571,35.058171),(33.866904,35.06675),(33.871555,35.073054),(33.876929,35.076826),(33.871555,35.085456),(33.871555,35.100236),(33.876206,35.118994),(33.889435,35.118994),(33.891812,35.110468),(33.89884,35.108814),(33.903594,35.099512),(33.905868,35.09636),(33.905868,35.090882),(33.913619,35.090882),(33.921474,35.080702),(33.914446,35.073054),(33.908245,35.070729),(33.906524,35.069122),(33.906506,35.069105),(33.898116,35.061272),(33.880133,35.073054),(33.874655,35.067525),(33.867627,35.060497),(33.855122,35.053417),(33.841169,35.051092),(33.834865,35.056621),(33.827113,35.061272),(33.813781,35.055794),(33.804375,35.049541),(33.799001,35.038534),(33.822359,35.030059),(33.830214,35.023031),(33.829387,35.001176),(33.829387,35.001172),(33.840342,34.993369),(33.859049,34.991819),(33.859049,34.974662),(33.850471,34.973009),(33.838068,34.963707),(33.84582,34.959728),(33.864423,34.962983),(33.891841,34.958139),(33.8838,34.949123),(33.874522,34.94123),(33.862315,34.937893),(33.847423,34.94245)],[(33.746689,35.002711),(33.752063,35.004323),(33.752063,35.0144),(33.746151,35.015207),(33.741314,35.013729),(33.740239,35.010101),(33.738761,35.005264),(33.739702,35.002576),(33.742792,35.001233),(33.746689,35.002711)]]] +Dhekelia Sovereign Base Area Kyrgyzstan [] +Kyrgyzstan Dhekelia Sovereign Base Area [] +Kyrgyzstan Kyrgyzstan 
[[[(74.746683,40.336505),(74.724668,40.337539),(74.705755,40.331286),(74.654388,40.291547),(74.637645,40.281987),(74.598681,40.266174),(74.565402,40.24695),(74.476622,40.172433),(74.369858,40.105822),(74.333788,40.09373),(74.302885,40.090061),(74.272293,40.093833),(74.237876,40.10391),(74.20315,40.109853),(74.167596,40.10639),(74.13287,40.095332),(74.101244,40.078847),(74.069515,40.067788),(74.003989,40.060812),(73.976704,40.043603),(73.966368,40.033268),(73.957894,40.021434),(73.952106,40.008257),(73.949729,39.993684),(73.944044,39.970068),(73.927301,39.953325),(73.907561,39.937873),(73.893195,39.918133),(73.885443,39.876534),(73.880689,39.86501),(73.872937,39.856845),(73.844102,39.838086),(73.83201,39.82372),(73.823018,39.805685),(73.818574,39.786048),(73.819814,39.766721),(73.829529,39.746981),(73.843068,39.740366),(73.859915,39.737162),(73.879345,39.727912),(73.893505,39.710394),(73.899602,39.689465),(73.904253,39.64647),(73.926784,39.592882),(73.921617,39.582133),(73.899292,39.571539),(73.883066,39.561204000000004),(73.870457,39.546786),(73.859192,39.524255),(73.848029,39.489735),(73.838314,39.475679),(73.820744,39.468186),(73.632642,39.448343),(73.604323,39.459608),(73.512236,39.467411),(73.476889,39.464776),(73.367645,39.443795),(73.343151,39.430669),(73.335503,39.415166),(73.333332,39.40111),(73.326408,39.390878),(73.269357,39.382558),(73.167864,39.355377),(73.136032,39.353465),(73.100995,39.361164),(73.083425,39.367831),(73.071126,39.370725),(73.058414,39.368813),(73.009321,39.348607),(72.994335,39.347832),(72.976765,39.352224),(72.91434,39.362715),(72.893669,39.363903),(72.850468,39.35672),(72.835275,39.356204),(72.65048,39.393772),(72.633737,39.394496),(72.616373,39.390413),(72.606141,39.383489),(72.587021,39.364885),(72.575756,39.359304),(72.559426,39.359562),(72.534104,39.372327),(72.519015,39.375686),(72.50806,39.371965),(72.476744,39.346127),(72.460414,39.344111),(72.443567,39.344783),(72.410701,39.351346),(72.393234,39.350829),(72.355924,39.336101),(72.334117,39.333828),(72.31634,39.328815),(72.304248,39.31357),(72.281407,39.259724),(72.240272,39.189909),(72.228903,39.189237),(72.228668,39.189502),(72.218568,39.20089),(72.209163,39.217297),(72.206476,39.226883),(72.206373,39.234609),(72.203272,39.24081),(72.182395,39.249337),(72.167098,39.258742),(72.158623,39.262617),(72.115525,39.268147),(72.094751,39.275072),(72.08576,39.290109),(72.084106,39.311245),(72.078112,39.336618),(72.06695,39.358477),(72.049793,39.368864),(72.031499,39.366694),(71.996463,39.351863),(71.959152,39.345868),(71.942306,39.339047),(71.843501,39.285045),(71.807017,39.272849),(71.77043,39.269542),(71.751207,39.274245),(71.732913,39.285097),(71.71834,39.301065),(71.710899,39.32096),(71.712966,39.341527),(71.723198,39.353981),(71.736634,39.364575),(71.748106,39.379561),(71.751827,39.399663),(71.749243,39.425502),(71.740251,39.448343),(71.724438,39.459401),(71.704594,39.458988),(71.602792,39.442348),(71.554319,39.444157),(71.512875,39.458833),(71.492101,39.493663),(71.500576,39.509217),(71.526001,39.538156),(71.531375,39.553091),(71.52538,39.569162),(71.510911,39.584148),(71.509654,39.584998),(71.493031,39.596241),(71.459545,39.612105),(71.441251,39.610658),(71.406731,39.598153),(71.378309,39.594173),(71.368543,39.591486),(71.36161799999999,39.587559),(71.347872,39.576242),(71.340586,39.571694),(71.305911,39.557173),(71.291958,39.548957),(71.260074,39.520638),(71.249273,39.514385),(71.236923,39.514488),(71.179975,39.523015),(71.137446,39.521155),(71.095898,39.51237),(71.062928,39.495627),(71.042309,39.46787
6),(71.027581,39.435217),(71.009288,39.407622),(70.977145,39.394909),(70.949808,39.400749),(70.925159,39.412944),(70.903351,39.420851),(70.884489,39.413874),(70.872242,39.402971),(70.864181,39.400594),(70.855292,39.40173),(70.840099,39.40142),(70.828059,39.398578),(70.797621,39.385349),(70.770129,39.382093),(70.739744,39.386228),(70.713285,39.398423),(70.698092,39.41899),(70.698609,39.431393),(70.703053,39.444725),(70.705431,39.458523),(70.699746,39.472165),(70.690134,39.479193),(70.666725,39.486893),(70.655976,39.493663),(70.655976,39.493714),(70.655925,39.49387),(70.655598,39.494664),(70.635977,39.542497),(70.622128,39.563529),(70.59784,39.577792),(70.58027,39.579549),(70.54358,39.574278),(70.526113,39.575622),(70.513504,39.581513),(70.490973,39.596861),(70.477227,39.601201),(70.459967,39.599548),(70.427618,39.590091),(70.411082,39.587456),(70.40147,39.584768),(70.395372,39.579497),(70.390411,39.57402),(70.384313,39.570867),(70.374288,39.571022),(70.353462,39.575932),(70.343179,39.576862),(70.33026,39.573089),(70.241066,39.522395),(70.223289,39.519087999999996),(70.20603,39.5241),(70.204376,39.538156),(70.215021,39.574071),(70.214814,39.591125),(70.210267,39.609728),(70.200655,39.619495),(70.185204,39.610348),(70.15735,39.563891),(70.148979,39.554279),(70.132442,39.550042),(70.116268,39.554641),(70.08273,39.569782000000004),(70.062266,39.573451),(70.040355,39.573606),(70.019684,39.568594),(70.002734,39.556863),(69.987128,39.539603),(69.978757,39.534436),(69.948629,39.545081),(69.907753,39.548492),(69.830445,39.536244),(69.791585,39.545391),(69.74983,39.563839),(69.710349,39.574071),(69.669421,39.577792),(69.582295,39.573555),(69.564932,39.568129),(69.531445,39.545649),(69.514392,39.537588),(69.496099,39.532575),(69.477288,39.530405),(69.455274,39.530456),(69.412176,39.52503),(69.391454,39.530508),(69.367218,39.549938),(69.361637,39.552884),(69.352025,39.549628),(69.348407,39.543169),(69.345927,39.535366),(69.339726,39.528079),(69.300348,39.515625),(69.286189,39.539707),(69.29115,39.658872),(69.287533,39.677786),(69.280091,39.694271),(69.265932,39.707758),(69.248362,39.719385),(69.233324,39.732666),(69.226296,39.751011),(69.229552,39.790544),(69.240714,39.82894),(69.286447,39.935755),(69.305671,39.968621),(69.310064,39.978646),(69.310322,39.984796),(69.313991,39.986914),(69.32784,39.984744),(69.357296,39.959474),(69.405355,39.896119),(69.501576,39.922474),(69.500439,39.935703),(69.477288,39.968156),(69.478115,39.975029),(69.477288,39.981592),(69.475118,39.987741),(69.471191,39.993684),(69.463129,40.025413),(69.46964,40.050993),(69.485712,40.073731),(69.506331,40.096933),(69.509224,40.103445),(69.513462,40.120705),(69.518009,40.125097),(69.526071,40.123185),(69.536613,40.107889),(69.543021,40.103083),(69.558834,40.101739),(69.575474,40.1036),(69.969558,40.211603),(70.004595,40.208761),(70.14717,40.136983),(70.168822,40.13166),(70.218535,40.134141),(70.241531,40.132745),(70.263494,40.12427),(70.277808,40.112075),(70.290262,40.098174),(70.306282,40.08479),(70.324265,40.077503),(70.359509,40.074041),(70.397646,40.061225),(70.477227,40.052027),(70.502135,40.045981),(70.525493,40.033682),(70.535622,40.015957),(70.520946,39.993736),(70.505029,39.985106),(70.488493,39.978543),(70.473197,39.97012),(70.460226,39.956115),(70.450872,39.937408),(70.44803,39.919993),(70.455627,39.906816),(70.477227,39.900925),(70.485961,39.909606),(70.494177,39.931672),(70.500895,39.940561),(70.512574,39.94516),(70.555052,39.946245),(70.576601,39.952239),(70.5966,39.961955),(70.613963,39.975752),(70.627399,39.993891),(7
0.635977,40.028514),(70.634737,40.059313),(70.63944,40.084945),(70.66595,40.104168),(70.729822,40.120705),(70.743464,40.127113),(70.782532,40.152589),(70.788733,40.158997),(70.792144,40.161374),(70.797725,40.161942),(70.802686,40.160134),(70.806045,40.157653),(70.806716,40.156155),(70.824906,40.163544),(70.831831,40.168195),(70.845164,40.171813),(70.898545,40.162614),(70.929293,40.170624),(70.962883,40.189693),(70.979522,40.214084),(70.958955,40.238372),(70.995129,40.266587),(71.053213,40.274235),(71.169382,40.26142),(71.201008,40.263848),(71.215115,40.280747),(71.223952,40.302864),(71.239558,40.321106),(71.253821,40.324413),(71.263949,40.318212),(71.272838,40.307722),(71.283225,40.29842),(71.297074,40.293821),(71.313559,40.292684),(71.344926,40.295216),(71.365132,40.294131),(71.396551,40.271548),(71.441355,40.260903),(71.450966,40.248914),(71.458925,40.234134),(71.477115,40.220802),(71.498405,40.210518),(71.521246,40.203749),(71.568272,40.196566),(71.592973,40.198426),(71.601655,40.20995),(71.604032,40.227261),(71.61013,40.246382),(71.62832,40.258887),(71.645993,40.247105),(71.660153,40.224471),(71.667181,40.204162),(71.666871,40.163131),(71.673072,40.147886),(71.693329,40.141117),(71.707178,40.144269),(71.759785,40.168092),(71.775908,40.179926),(71.78676,40.193517),(71.805467,40.225298),(71.836473,40.249172),(71.872956,40.250774),(71.912334,40.243436),(71.951814,40.240904),(71.96339,40.243746),(71.969178,40.244418),(71.977033,40.243074),(71.989021,40.239302),(72.004834,40.237287),(72.01889,40.240026),(72.025402,40.250878),(72.019821,40.258732),(72.013239,40.262072),(72.005971,40.26576),(71.977033,40.276096),(71.958222,40.286534),(71.951091,40.30152),(71.951094,40.301529),(71.956672,40.31568),(72.043178,40.349321),(72.069637,40.369423),(72.084106,40.39738),(72.090101,40.416035),(72.099299,40.426371),(72.165858,40.454431),(72.182705,40.45779),(72.228328,40.459606),(72.235931,40.459909),(72.254225,40.458307),(72.263415,40.452651),(72.26363,40.452519),(72.259599,40.44239),(72.24513,40.438721),(72.228077,40.437533),(72.216501,40.434949),(72.211768,40.425836),(72.211644,40.425596),(72.224356,40.422443),(72.269624,40.424045),(72.284301,40.41986),(72.343418,40.393401),(72.370497,40.38565),(72.394061,40.389422),(72.414422,40.410713),(72.425171,40.435983),(72.426204,40.45934),(72.426096,40.459545),(72.420028,40.471027),(72.415662,40.479287),(72.415585,40.47933),(72.372254,40.503265),(72.363469,40.512309),(72.363467,40.51241),(72.363262,40.523573999999996),(72.36998,40.539852),(72.370083,40.557732),(72.369928,40.557930999999996),(72.348483,40.585379),(72.348509,40.585508),(72.351893,40.601967),(72.381556,40.612148),(72.414835,40.589875),(72.447908,40.560471),(72.476744,40.549567),(72.515294,40.54564),(72.585781,40.508743),(72.625468,40.5105),(72.640868,40.519853),(72.65048,40.532151999999996),(72.655131,40.546363),(72.656474,40.561143),(72.664226,40.577783),(72.682416,40.577679),(72.719209,40.564864),(72.748355,40.575096),(72.760034,40.641758),(72.783908,40.669663),(72.818945,40.681084),(72.890982,40.695088),(72.976765,40.736068),(73.070609,40.762474),(73.118048,40.782938),(73.148641,40.813686),(73.14337,40.833839),(73.143292,40.833853),(73.135042,40.835274),(73.112467,40.839162),(73.053556,40.83632),(73.033299,40.847224),(73.01945,40.8619),(73.00343,40.870168),(72.929429,40.844175),(72.883127,40.819628),(72.870311,40.818181),(72.872998,40.834821),(72.868658,40.864122),(72.830107,40.87208),(72.701226,40.863243),(72.658231,40.867171),(72.619474,40.88009),(72.588468,40.905825),(72.545577,40.956519),(7
2.52625,40.962204),(72.501445,40.963496),(72.483358,40.970575),(72.483565,40.99352),(72.485219,40.999566),(72.484702,41.004682),(72.481911,41.008816),(72.476744,41.011813),(72.423517,41.01574),(72.395198,41.022045),(72.374321,41.031967),(72.345692,41.06597),(72.33236,41.072843),(72.314066,41.061681),(72.308795,41.054446),(72.297323,41.028143),(72.289778,41.023544),(72.252985,41.019616),(72.195486,41.006358),(72.165135,40.999359),(72.178777,41.023182),(72.185599,41.041062),(72.185599,41.060647),(72.1764,41.112892),(72.174643,41.141418),(72.169889,41.168651),(72.158417,41.187875),(72.132889,41.199295),(72.108497,41.196401),(72.085243,41.184877),(72.063642,41.170201),(72.033773,41.15661),(72.016513,41.163535),(72.001217,41.18002),(71.977033,41.195213),(71.897658,41.184929),(71.871613,41.194489),(71.866342,41.236606),(71.868822,41.279239),(71.863034,41.312208),(71.847015,41.341819),(71.75317,41.44729),(71.745212,41.45282),(71.73684,41.455404),(71.730433,41.451166),(71.729709,41.430082),(71.721648,41.424759),(71.712242,41.428015),(71.706868,41.444086),(71.696119,41.447445),(71.691469,41.441916),(71.687334,41.431012),(71.681443,41.422847),(71.671935,41.425483),(71.671108,41.437627),(71.689505,41.493799),(71.689401,41.514625),(71.68413,41.534055),(71.671418,41.547388),(71.649507,41.54992),(71.62739,41.543202),(71.615297,41.532143),(71.595454,41.493799),(71.595454,41.493747),(71.595557,41.493696),(71.595557,41.493489),(71.605582,41.476849),(71.633694,41.449616),(71.637415,41.431271),(71.633074,41.411324),(71.618811,41.377786),(71.585532,41.323525),(71.557937,41.301718),(71.52383,41.296654),(71.480629,41.310761),(71.432466,41.344816),(71.418772,41.3474),(71.412571,41.334687),(71.421459,41.162088),(71.416085,41.127362),(71.393657,41.112737),(71.325238,41.157334),(71.300226,41.133046),(71.289478,41.113874),(71.276145,41.113151),(71.263381,41.123486),(71.253821,41.13749),(71.241522,41.175162),(71.230153,41.187255),(71.206382,41.188753),(71.185711,41.180227),(71.183024,41.166377),(71.187882,41.148446),(71.189949,41.127465),(71.18013,41.108138),(71.164421,41.116199),(71.139099,41.148394),(71.123079,41.158212),(71.085769,41.162036),(71.067579,41.169736),(71.047012,41.182035),(71.024688,41.189787),(70.977145,41.19635),(70.93415,41.19144),(70.914203,41.193042),(70.895962,41.206116),(70.882164,41.220482),(70.865731,41.233195),(70.847747,41.243065),(70.828524,41.249111),(70.805993,41.247458),(70.786563,41.24043),(70.770439,41.238518),(70.758037,41.25216),(70.75628,41.269627),(70.77106,41.331018),(70.769199,41.352102),(70.759691,41.372515),(70.70357,41.445482),(70.686724,41.462483),(70.66781,41.471372),(70.633807,41.467496),(70.511334,41.414476),(70.477227,41.404657),(70.470782,41.404879),(70.453198,41.405484),(70.438057,41.416078),(70.413614,41.450701),(70.398989,41.464964),(70.382246,41.476436),(70.344936,41.493489),(70.344729,41.493644),(70.344522,41.493696),(70.344419,41.493799),(70.203446,41.505633),(70.166549,41.520206),(70.148255,41.552452),(70.169288,41.578342),(70.33119,41.649629),(70.390824,41.685054),(70.423484,41.696913),(70.453663,41.71208),(70.477227,41.738435),(70.506476,41.78559),(70.550091,41.824063),(70.648948,41.887393),(70.679696,41.901113),(70.779328,41.909665),(70.814003,41.919535),(70.825113,41.93633),(70.828059,41.993743),(70.845474,42.030356),(70.886711,42.038495),(70.935907,42.036867),(70.977145,42.044231),(71.118429,42.122908),(71.201008,42.140788),(71.238215,42.160244),(71.253201,42.197555),(71.249868,42.198385),(71.217854,42.206365),(71.077604,42.281167),(71.045772,42.29096),(71.01
4042,42.287704),(70.9696,42.263468),(70.947793,42.248146),(70.918648,42.253365),(70.897822,42.261608),(70.858134,42.291063),(70.852657,42.306075),(70.864801,42.321552),(70.888825,42.338575),(70.900354,42.346744),(70.932703,42.3762),(70.939835,42.387827),(70.937354,42.394778),(70.931773,42.401496),(70.929913,42.412296),(70.936114,42.431623),(70.947586,42.451286),(70.961952,42.468107),(70.977145,42.478752),(71.024739,42.455963),(71.041017,42.4548),(71.054712,42.460381),(71.064168,42.470329),(71.066287,42.482137),(71.057864,42.493402),(71.057657,42.493402),(71.057554,42.493506),(71.057554,42.493609),(71.022724,42.51645),(71.012699,42.526036),(71.00319,42.563527),(71.040656,42.580529),(71.127007,42.590606),(71.142768,42.602543),(71.148091,42.6174),(71.146437,42.656855),(71.149021,42.677939),(71.157548,42.687938),(71.19522,42.69724),(71.211446,42.705224),(71.223849,42.717936),(71.245449,42.747133),(71.262502,42.753567),(71.284723,42.7484),(71.308495,42.740105),(71.329785,42.737031),(71.347562,42.742844),(71.363065,42.75411),(71.376294,42.768321),(71.386939,42.783074),(71.404613,42.794883),(71.428849,42.79558),(71.477115,42.78354),(71.493651,42.788914),(71.504193,42.789612),(71.51432199999999,42.78571),(71.553803,42.760905),(71.566308,42.757236),(71.583568,42.759639),(71.693536,42.811807),(71.726402,42.819660999999996),(71.796682,42.822297),(71.831408,42.831573),(71.847738,42.834053),(71.863241,42.829041),(71.878847,42.821341),(71.895384,42.816044),(71.956982,42.804598),(72.072737,42.757185),(72.10488,42.750337),(72.11997,42.75181),(72.148805,42.76199),(72.164101,42.765091),(72.292362,42.76106),(72.358405,42.741397),(72.476744,42.682564),(72.51178,42.677551),(72.583817,42.678275),(72.725824,42.652876),(72.756933,42.640215),(72.780601,42.620449),(72.815224,42.573036),(72.840752,42.555698),(72.874549,42.543993),(72.908345,42.536965),(72.942348,42.536035),(73.070609,42.551926),(73.115051,42.550841),(73.152671,42.539213),(73.172722,42.527845),(73.189052,42.520765),(73.206828,42.517173),(73.278348,42.513401),(73.301396,42.507071),(73.316589,42.493712),(73.313385,42.463171),(73.314419,42.441571),(73.326201,42.428677),(73.417461,42.41705),(73.476889,42.399067),(73.505105,42.402943),(73.505518,42.420926),(73.505077,42.421557),(73.432964,42.524847),(73.417565,42.556163),(73.41002,42.58965),(73.412707,42.627322),(73.423869,42.662772),(73.476889,42.75088),(73.503554,42.794082),(73.507068,42.809404),(73.504278,42.827594),(73.491979,42.861003),(73.489498,42.877823),(73.493012,42.894954),(73.501177,42.909346),(73.521744,42.936166),(73.559675,43.017298),(73.57094,43.031922),(73.58665,43.042283),(73.634089,43.062463),(73.805551,43.114734),(73.822811,43.117318),(73.864256,43.116103),(73.90167,43.13096),(73.935053,43.199716),(73.965335,43.216924),(73.985385,43.211679),(74.021456,43.186306),(74.039852,43.181861),(74.197362,43.195788),(74.213899,43.202635),(74.216586,43.216924),(74.206044,43.234106),(74.178759,43.261702),(74.207697,43.24979),(74.25865,43.215761),(74.286969,43.207209),(74.320042,43.201654),(74.348774,43.191034),(74.400554,43.159537),(74.420604,43.151837),(74.462772,43.148634),(74.498222,43.131115),(74.539253,43.122899),(74.558167,43.115044),(74.572016,43.102202),(74.597028,43.070344),(74.61129,43.058174),(74.71392,42.999883),(74.74875,42.990013),(74.862851,42.975828),(74.948531,42.944848),(74.976849,42.926477),(75.005375,42.9164),(75.066146,42.902602),(75.094258,42.890303),(75.118133,42.876531),(75.178594,42.84966),(75.204639,42.84519),(75.271198,42.845758000000004),(75.496611,42.823925),(75.535988,4
2.827025),(75.555936,42.825217),(75.620428,42.805295),(75.646163,42.806096),(75.675308,42.815371999999996),(75.703833,42.830488),(75.727605,42.848678),(75.73825,42.861571),(75.770599,42.92565),(75.782072,42.933324),(75.797161,42.936192),(75.807059,42.936771),(75.858966,42.939809),(75.898033,42.935701),(75.976582,42.918725),(75.999113,42.91764),(76.064535,42.93397),(76.09058,42.93428),(76.163754,42.921154),(76.254498,42.921154),(76.340487,42.901672),(76.351132,42.902886),(76.370976,42.909604),(76.382965,42.910069),(76.3933,42.905263),(76.404463,42.889244),(76.414074,42.885497),(76.432058,42.88958),(76.461927,42.910999),(76.48053,42.916968),(76.506885,42.914488),(76.558768,42.89821),(76.58502,42.894799),(76.605897,42.898494),(76.644345,42.911129),(76.688786,42.910328),(76.707907,42.913661),(76.74625,42.928879),(76.750385,42.932574),(76.751625,42.937096),(76.753899,42.940765),(76.76134,42.942057),(76.77953,42.939292),(76.785214,42.939654),(76.792966,42.943065),(76.814153,42.961487),(76.83348,42.971745),(76.853427,42.974122),(76.897662,42.974536),(76.93735,42.986034),(76.956884,42.988514),(76.976417,42.98221),(77.089899,42.968154),(77.124005,42.958904),(77.135064,42.951746),(77.163693,42.921981),(77.178679,42.913196),(77.195112,42.910069),(77.21189,42.909944),(77.229632,42.909811),(77.328127,42.897564),(77.36089,42.90454),(77.403265,42.919759),(77.418561,42.922239),(77.43396,42.921412),(77.461762,42.914384),(77.501553,42.914539),(77.520984,42.906917),(77.53845,42.896272),(77.557777,42.888133),(77.574417,42.887848),(77.62692,42.906555),(77.647797,42.909501),(77.71384,42.907589),(77.781226,42.895626),(77.787531,42.889709),(77.791665,42.883043),(77.798176,42.877513),(77.809958,42.871286),(77.83559,42.879839),(77.852229,42.887952),(77.861221,42.89082),(77.883855,42.893223),(77.907006,42.891285),(77.929331,42.885006),(77.986175,42.860202),(78.030616,42.854672),(78.137897,42.861984),(78.183579,42.86015),(78.229777,42.865007),(78.249104,42.862398),(78.290136,42.851442),(78.31122,42.850512),(78.328376,42.855292),(78.364653,42.872526),(78.38553,42.878237),(78.429559,42.880665),(78.496118,42.875601),(78.594303,42.850228),(78.635437,42.832477),(78.669957,42.811161),(78.687114,42.804598),(78.807933,42.79558),(78.888135,42.771215),(78.954281,42.768424),(78.992935,42.757133),(79.030866,42.756151),(79.108794,42.785348),(79.148274,42.790981),(79.173389,42.785632),(79.180831,42.77553),(79.175146,42.737031),(79.176076,42.713931),(79.181864,42.693597),(79.192199,42.674838),(79.206359,42.656467),(79.242119,42.629776),(79.321287,42.602181),(79.353016,42.577299),(79.398388,42.496942),(79.425673,42.469605),(79.47642,42.453973),(79.571918,42.449529),(79.652533,42.461053),(79.696458,42.459838),(79.917943,42.42444),(79.960111,42.403511),(79.97396,42.39147),(80.012098,42.349509),(80.077003,42.305765),(80.110076,42.273338),(80.136534,42.23887),(80.16661,42.208717),(80.210328,42.189519),(80.21653,42.174404),(80.219837,42.141873),(80.224591,42.125569),(80.235133,42.110945),(80.247432,42.098413),(80.256527,42.084357),(80.257561,42.065263),(80.231206,42.033689),(80.181906,42.020976),(79.930862,42.023276),(79.879496,42.013199),(79.842909,42.00183),(79.826786,41.992244),(79.812833,41.97762),(79.803842,41.959998),(79.792576,41.922817),(79.783274,41.905092),(79.747514,41.879745),(79.702969,41.874939),(79.655737,41.875843),(79.610985,41.867626),(79.554658,41.837602),(79.489856,41.819257),(79.410997,41.778588),(79.390844,41.772697),(79.367382,41.772387),(79.30413,41.787554),(79.282323,41.783497),(79.264856,41.774506),(79.217521,41.
741226),(79.19592,41.729521),(79.174526,41.722622),(79.127914,41.714302),(79.088743,41.702546),(78.976192,41.6418),(78.915834,41.63317),(78.897437,41.626272),(78.807313,41.578445),(78.672128,41.538448),(78.658278,41.532453),(78.645049,41.52372),(78.637401,41.512868),(78.629133,41.48796),(78.619211,41.478089),(78.583968,41.465997),(78.51038,41.454422),(78.41757,41.400471),(78.377675,41.386622),(78.359899,41.377527),(78.343466,41.362024),(78.339332,41.344041),(78.356901,41.305542),(78.359692,41.287455),(78.34946,41.270402),(78.331477,41.258723),(78.291582,41.240791),(78.275356,41.228854),(78.250345,41.200535),(78.231335,41.172856),(78.204456,41.133718),(78.190503,41.11775),(78.176034,41.105502),(78.074955,41.039512),(78.057178,41.034344),(78.036508,41.036101),(77.997647,41.049795),(77.866699,41.064058),(77.831042,41.062973),(77.797556,41.054704),(77.665574,41.001271),(77.650278,40.997137),(77.631778,40.995793),(77.580721,40.997705),(77.503517,40.981066),(77.47488799999999,40.982047),(77.445226,40.993675),(77.388795,41.011658),(77.332985,41.02065),(77.301152,41.019306),(77.243171,41.005664),(77.118838,41.011658),(77.088555,41.019565),(77.035018,41.040338),(77.007733,41.044214),(76.900143,41.025766),(76.860972,41.013208),(76.834927,40.99352),(76.820871,40.97781),(76.783974,40.957139),(76.766818,40.944685),(76.757413,40.925772),(76.760927,40.9066),(76.768368,40.887014),(76.770848,40.867119),(76.762477,40.847017),(76.746871,40.83446),(76.707286,40.817613),(76.674214,40.795444),(76.647859,40.764851),(76.630805,40.728885),(76.624191,40.627547),(76.62078,40.611321),(76.609411,40.59711),(76.556391,40.565690000000004),(76.531173,40.534995),(76.498927,40.464611),(76.476499,40.436138),(76.449111,40.415519),(76.361984,40.371904),(76.330152,40.348081),(76.313512,40.343327),(76.299353,40.355677),(76.283333,40.417276),(76.273411,40.434122),(76.244162,40.441202),(76.21543,40.416552),(76.185044,40.384203),(76.151351,40.368131),(76.132541,40.371542),(76.095231,40.387355),(76.07611,40.391954),(76.051616,40.390197),(75.962422,40.357331),(75.949297,40.343068),(75.938031,40.326222),(75.921495,40.309117),(75.901858,40.298885),(75.880464,40.295268),(75.858553,40.296353),(75.79375,40.30891),(75.772046,40.31015),(75.750549,40.308342),(75.704454,40.293149),(75.681819,40.291702),(75.664869,40.305706),(75.656291,40.32307),(75.640168,40.367305),(75.638308,40.386115),(75.64636899999999,40.405338),(75.659598,40.419084),(75.669314,40.432365),(75.666936,40.450245),(75.657221,40.461097),(75.629006,40.481303),(75.617947,40.493964),(75.610816,40.512877),(75.605648,40.569411),(75.587665,40.611941),(75.559863,40.63287),(75.523793,40.63318),(75.481832,40.614111),(75.26324,40.480166),(75.253318,40.47598),(75.241536,40.474068),(75.229237,40.470347),(75.223966,40.462441),(75.220142,40.452984),(75.212494,40.444716),(75.19389,40.441254),(75.175287,40.448127),(75.15658,40.458152),(75.13808,40.463888),(75.113172,40.460787),(75.064389,40.443837),(75.039068,40.441099),(75.002378,40.4473),(74.966101,40.459289),(74.879181,40.505074),(74.856443,40.513187),(74.835359,40.511637),(74.832312,40.508116),(74.820063,40.493964),(74.816032,40.48368),(74.815102,40.480114),(74.807041,40.461769),(74.794638,40.440737),(74.787404,40.420738),(74.794638,40.405545),(74.841664,40.372059),(74.853859,40.35883),(74.862128,40.32617),(74.830915,40.319917),(74.746683,40.336505)],[(70.63298,39.79845),(70.661609,39.809819),(70.694527,39.814832),(70.70481,39.822067),(70.706412,39.839998),(70.698919,39.858447),(70.68662,39.860876),(70.654994,39.849765),(70.619906,39.8506
95),(70.498725,39.881908),(70.483739,39.882218),(70.482602,39.866767),(70.490147,39.850179),(70.503479,39.835502),(70.537379,39.817312),(70.547301,39.807649),(70.575878,39.77008),(70.581614,39.766566),(70.63298,39.79845)],[(71.007634,39.911157),(71.009288,39.885732),(71.021897,39.880823),(71.036056,39.887541),(71.049802,39.897979),(71.060964,39.904129),(71.079981,39.903405),(71.084787,39.894827),(71.087216,39.88382),(71.098895,39.8755),(71.113674,39.874466),(71.16101,39.884233),(71.1946,39.884802),(71.208449,39.888781),(71.219094,39.900925),(71.221058,39.931724),(71.17703,39.968156),(71.174704,39.993994),(71.190982,40.006241),(71.237284,40.031098),(71.244002,40.046756),(71.236354,40.056057),(71.223745,40.057866),(71.198217,40.052595),(71.192946,40.05027),(71.181887,40.043397),(71.176926,40.042001),(71.169382,40.042828),(71.165454,40.044637),(71.162043,40.046911),(71.105975,40.064946),(71.078844,40.07864),(71.06174,40.095176),(71.047942,40.122513),(71.031199,40.146905),(71.008564,40.157757),(70.977145,40.144579),(70.959885,40.113418),(70.954046,40.095797),(70.952961,40.079208),(70.958438,40.063085),(70.98314,40.021383),(70.994509,40.008877),(71.007634,40.003503),(71.034299,39.99911),(71.045823,39.99203),(71.050371,39.962885),(71.007634,39.911157)],[(71.757304,39.903095),(71.767123,39.915446),(71.789757,39.979318),(71.78552,39.989705),(71.760612,39.98371),(71.724128,39.962678),(71.706868,39.956115),(71.681753,39.955082),(71.665527,39.940199),(71.675346,39.925781),(71.696843,39.913844),(71.71617,39.906764),(71.741801,39.90077),(71.757304,39.903095)]]] +Aruba Aruba [[[(-69.936391,12.531724),(-69.924672,12.519232),(-69.915761,12.497016),(-69.88019800000001,12.453559),(-69.87682,12.427395),(-69.888092,12.41767),(-69.908803,12.417792),(-69.930531,12.425971),(-69.945139,12.440375),(-69.924672,12.440375),(-69.924672,12.447211),(-69.958567,12.463202),(-70.027659,12.522935),(-70.048085,12.531155),(-70.058095,12.537177),(-70.062408,12.54682),(-70.060374,12.556952),(-70.051096,12.574042),(-70.048736,12.583726),(-70.052642,12.600002),(-70.059641,12.614244),(-70.061106,12.625393),(-70.048736,12.632148),(-70.007151,12.585517),(-69.996938,12.577582),(-69.936391,12.531724)]]] +Aruba Afghanistan [] +Aruba Albania [] +Aruba Andorra [] +Aruba Ashmore and Cartier Islands [] +Aruba Austria [] +Aruba Burundi [] +Aruba Belgium [] +Aruba Benin [] +Aruba Burkina Faso [] +Aruba Bulgaria [] +Aruba Bahrain [] +Aruba Bosnia and Herzegovina [] +Aruba Bajo Nuevo Bank (Petrel Is.) 
[] +Aruba Saint Barthelemy [] +Aruba Belarus [] +Aruba Bolivia [] +Aruba Barbados [] +Aruba Bhutan [] +Aruba Botswana [] +Aruba Central African Republic [] +Aruba Switzerland [] +Aruba Clipperton Island [] +Aruba Cameroon [] +Aruba Republic of Congo [] +Aruba Coral Sea Islands [] +Aruba Curaçao [] +Aruba Czech Republic [] +Aruba Djibouti [] +Aruba Dominica [] +Aruba Algeria [] +Aruba Ethiopia [] +Aruba Georgia [] +Aruba Ghana [] +Aruba Gibraltar [] +Aruba Guinea [] +Aruba Gambia [] +Aruba Guatemala [] +Aruba Guam [] +Aruba Heard Island and McDonald Islands [] +Aruba Hungary [] +Aruba Isle of Man [] +Aruba Iraq [] +Aruba Israel [] +Aruba Jamaica [] +Aruba Jersey [] +Aruba Jordan [] +Aruba Baykonur Cosmodrome [] +Aruba Siachen Glacier [] +Aruba Kosovo [] +Aruba Laos [] +Aruba Lebanon [] +Aruba Liberia [] +Aruba Libya [] +Aruba Saint Lucia [] +Aruba Liechtenstein [] +Aruba Lesotho [] +Aruba Luxembourg [] +Aruba Latvia [] +Aruba Saint Martin [] +Aruba Morocco [] +Aruba Monaco [] +Aruba Moldova [] +Aruba Macedonia [] +Aruba Mali [] +Aruba Montenegro [] +Aruba Mongolia [] +Aruba Montserrat [] +Aruba Namibia [] +Aruba Niger [] +Aruba Norfolk Island [] +Aruba Niue [] +Aruba Nepal [] +Aruba Nauru [] +Aruba Poland [] +Aruba Paraguay [] +Aruba Qatar [] +Aruba Romania [] +Aruba Rwanda [] +Aruba Western Sahara [] +Aruba Scarborough Reef [] +Aruba South Sudan [] +Aruba Senegal [] +Aruba Serranilla Bank [] +Aruba Singapore [] +Aruba San Marino [] +Aruba Somaliland [] +Aruba Somalia [] +Aruba Republic of Serbia [] +Aruba Suriname [] +Aruba Slovakia [] +Aruba Slovenia [] +Aruba Swaziland [] +Aruba Sint Maarten [] +Aruba Syria [] +Aruba Chad [] +Aruba Togo [] +Aruba Uganda [] +Aruba Uruguay [] +Aruba Vatican [] diff --git a/tests/queries/0_stateless/01720_country_intersection.sh b/tests/queries/0_stateless/01720_country_intersection.sh new file mode 100755 index 00000000000..d7e0e67d351 --- /dev/null +++ b/tests/queries/0_stateless/01720_country_intersection.sh @@ -0,0 +1,18 @@ +#!/usr/bin/env bash + +CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) +# shellcheck source=../shell_config.sh +. 
"$CURDIR"/../shell_config.sh + +${CLICKHOUSE_CLIENT} -q "drop table if exists country_polygons;" +${CLICKHOUSE_CLIENT} -q "create table country_polygons(name String, p Array(Array(Tuple(Float64, Float64)))) engine=MergeTree() order by tuple();" +cat ${CURDIR}/country_polygons.tsv | ${CLICKHOUSE_CLIENT} -q "insert into country_polygons format TSV" +${CLICKHOUSE_CLIENT} -q "SELECT c, d, polygonsIntersectionSpherical(a, b) FROM (SELECT first.p AS a, second.p AS b, first.name AS c, second.name AS d FROM country_polygons AS first CROSS JOIN country_polygons AS second LIMIT 100) format TSV" +${CLICKHOUSE_CLIENT} -q "drop table if exists country_polygons;" + + +${CLICKHOUSE_CLIENT} -q "drop table if exists country_rings;" +${CLICKHOUSE_CLIENT} -q "create table country_rings(name String, p Array(Tuple(Float64, Float64))) engine=MergeTree() order by tuple();" +cat ${CURDIR}/country_rings.tsv | ${CLICKHOUSE_CLIENT} -q "insert into country_rings format TSV" +${CLICKHOUSE_CLIENT} -q "SELECT c, d, polygonsIntersectionSpherical(a, b) FROM (SELECT first.p AS a, second.p AS b, first.name AS c, second.name AS d FROM country_rings AS first CROSS JOIN country_rings AS second LIMIT 100) format TSV" +${CLICKHOUSE_CLIENT} -q "drop table if exists country_rings;" \ No newline at end of file diff --git a/tests/queries/0_stateless/01720_country_perimeter_and_area.reference b/tests/queries/0_stateless/01720_country_perimeter_and_area.reference new file mode 100644 index 00000000000..8a9690791c6 --- /dev/null +++ b/tests/queries/0_stateless/01720_country_perimeter_and_area.reference @@ -0,0 +1,214 @@ +Dhekelia Sovereign Base Area 0.0186259930051051 +Kyrgyzstan 0.5868323961091907 +------------------------------------- +Dhekelia Sovereign Base Area 0.000003139488070896512 +Kyrgyzstan 0.004895645023822883 +------------------------------------- +Aruba 0.011249330810410983 +Afghanistan 0.8199216326776404 +Albania 0.17108622597702605 +Andorra 0.015145740647213184 +Ashmore and Cartier Islands 0.001111472909012953 +Austria 0.3258464621357028 +Burundi 0.1409500621452211 +Belgium 0.1794463601873955 +Benin 0.31426073515874664 +Burkina Faso 0.5144381682226761 +Bulgaria 0.3083164214454252 +Bahrain 0.02137170357214413 +Bosnia and Herzegovina 0.20611959113245232 +Bajo Nuevo Bank (Petrel Is.) 
0.0001254597070361587 +Saint Barthelemy 0.0032990108720812672 +Belarus 0.42899119772830474 +Bolivia 0.9279328001326348 +Barbados 0.014116142490651021 +Bhutan 0.1601735058766338 +Botswana 0.5896697538755427 +Central African Republic 0.7760222837198817 +Switzerland 0.2318851512510408 +Clipperton Island 0.0014072924221565273 +Cameroon 0.8001045813665599 +Republic of Congo 0.6904316055863188 +Coral Sea Islands 0.00011634674137689659 +Curaçao 0.02078862020307983 +Czech Republic 0.2708588915805718 +Djibouti 0.12937731543684822 +Dominica 0.020094439807419574 +Algeria 1.1549683948032776 +Ethiopia 0.8210654364815099 +Georgia 0.26823008017781313 +Ghana 0.4056578143818251 +Gibraltar 0.0014059440610631154 +Guinea 0.6350853755877334 +Gambia 0.19279774895359095 +Guatemala 0.3030953561509038 +Guam 0.020321390076536976 +Heard Island and McDonald Islands 0.017334896920453105 +Hungary 0.2617732480910806 +Isle of Man 0.01875803631141408 +Iraq 0.5469861219502402 +Israel 0.19353851895699914 +Jamaica 0.10055860979159512 +Jersey 0.008427337812134537 +Jordan 0.2642243503964102 +Baykonur Cosmodrome 0.04482995477542441 +Siachen Glacier 0.03872116827341272 +Kosovo 0.08773172991408161 +Laos 0.6899867972760174 +Lebanon 0.09676977254650951 +Liberia 0.2961649538030388 +Libya 0.9538430912224716 +Saint Lucia 0.016786201647759867 +Liechtenstein 0.009288582116863231 +Lesotho 0.12315874900320756 +Luxembourg 0.04125996057810259 +Latvia 0.24488610945731157 +Saint Martin 0.006547834154217771 +Morocco 0.8817924249630141 +Monaco 0.0026049777439637527 +Moldova 0.20765701819586885 +Macedonia 0.1128831074330059 +Mali 1.1385970015559317 +Montenegro 0.11756794062084858 +Mongolia 1.142306166871007 +Montserrat 0.006620100691409788 +Namibia 0.843464957679987 +Niger 0.8780744302377772 +Norfolk Island 0.004912027225339993 +Niue 0.009881892958363517 +Nepal 0.4076113675280835 +Nauru 0.0031205159769295255 +Poland 0.48922069488271314 +Paraguay 0.5475256537493991 +Qatar 0.09362771431858698 +Romania 0.44095021664473105 +Rwanda 0.1293663890297039 +Western Sahara 0.4691920993279596 +Scarborough Reef 0.00019842225207367386 +South Sudan 0.7584190842556537 +Senegal 0.5883247226863264 +Serranilla Bank 0.0002389083935906293 +Singapore 0.015233384733369614 +San Marino 0.004596873449598911 +Somaliland 0.3096791489207226 +Somalia 0.6879915318072617 +Republic of Serbia 0.29677234233404165 +Suriname 0.32255243342976203 +Slovakia 0.19843599488831584 +Slovenia 0.14713148471782736 +Swaziland 0.08434161089555517 +Sint Maarten 0.0037955305365309296 +Syria 0.35675522352394456 +Chad 0.9102578296637189 +Togo 0.2600585482954555 +Uganda 0.38301730108810556 +Uruguay 0.3083564407046887 +Vatican 0.00006702452496391445 +Akrotiri Sovereign Base Area 0.013376747415600219 +Zambia 0.8807923488623808 +Zimbabwe 0.4553903789902945 +------------------------------------- +Aruba 0.0000041986375296795025 +Afghanistan 0.015826481758320493 +Albania 0.0006971811189621746 +Andorra 0.00001112355564980348 +Ashmore and Cartier Islands 6.66668338977609e-8 +Austria 0.0020634744883290235 +Burundi 0.000669169243101558 +Belgium 0.0007529367590741593 +Benin 0.00287239734953164 +Burkina Faso 0.006746218025419332 +Bulgaria 0.0027733372191197786 +Bahrain 0.00001443842547561405 +Bosnia and Herzegovina 0.0012742491201009779 +Bajo Nuevo Bank (Petrel Is.) 
8.864825701897049e-10 +Saint Barthelemy 6.036607210116289e-7 +Belarus 0.005090738074359067 +Bolivia 0.026865324735758436 +Barbados 0.0000109856680212211 +Bhutan 0.0009961026696220909 +Botswana 0.01430200501713062 +Central African Republic 0.015290667187215962 +Switzerland 0.0010181463734151514 +Clipperton Island 1.2373029819547803e-7 +Cameroon 0.011488908713113137 +Republic of Congo 0.008534881807187833 +Coral Sea Islands 5.121674593493771e-10 +Curaçao 0.000011457378136273848 +Czech Republic 0.0019339153549488386 +Djibouti 0.000540370985929321 +Dominica 0.000018056168258583246 +Algeria 0.05696762706232162 +Ethiopia 0.02789047634482515 +Georgia 0.0017113229913929072 +Ghana 0.0059048504621945965 +Gibraltar 9.095456688875715e-8 +Guinea 0.006043151808047173 +Gambia 0.0002596816395280707 +Guatemala 0.0026901925526205263 +Guam 0.000013952443476670549 +Heard Island and McDonald Islands 0.000009688375334192321 +Hungary 0.0022899094702118978 +Isle of Man 0.00001410012284549863 +Iraq 0.010780689598789812 +Israel 0.0005400181032289429 +Jamaica 0.00027268062650994383 +Jersey 0.0000029236161155167853 +Jordan 0.002191215069390572 +Baykonur Cosmodrome 0.00015978303781425133 +Siachen Glacier 0.0000513879615262916 +Kosovo 0.0002684178325412152 +Laos 0.005637555524983489 +Lebanon 0.0002464436461544738 +Liberia 0.002357973807538481 +Libya 0.040072512808839354 +Saint Lucia 0.000014963842166249258 +Liechtenstein 0.0000033722024322722466 +Lesotho 0.0007426290112070925 +Luxembourg 0.00006405006804909529 +Latvia 0.00158313668683266 +Saint Martin 0.00000168759530251474 +Morocco 0.014595589778269167 +Monaco 4.6325700981005285e-7 +Moldova 0.0008158639460823913 +Macedonia 0.0006245180554490506 +Mali 0.03096381132470007 +Montenegro 0.00033762445623993013 +Mongolia 0.038446609480001344 +Montserrat 0.0000024620326175206004 +Namibia 0.020320978539029165 +Niger 0.02919849042641136 +Norfolk Island 0.0000010150641235563077 +Niue 0.000005450796200539049 +Nepal 0.003629565673884544 +Nauru 7.119067469952887e-7 +Poland 0.0076921097527402876 +Paraguay 0.009875843128670564 +Qatar 0.0002752610716836153 +Romania 0.005809479702080411 +Rwanda 0.0006262235765421803 +Western Sahara 0.0022344529652030694 +Scarborough Reef 2.4176335726807567e-9 +South Sudan 0.015509656314462458 +Senegal 0.00485201810074574 +Serranilla Bank 2.6035559945372385e-9 +Singapore 0.000012633505579848072 +San Marino 0.0000014830814619737624 +Somaliland 0.0041412916217828406 +Somalia 0.011674654119996183 +Republic of Serbia 0.001907268740192651 +Suriname 0.0035911641359236534 +Slovakia 0.0011901587428922095 +Slovenia 0.0004995546076509384 +Swaziland 0.00042234053226485263 +Sint Maarten 5.772865969377286e-7 +Syria 0.004581243750467663 +Chad 0.0313064894302088 +Togo 0.0014067991034602252 +Uganda 0.005985159048654327 +Uruguay 0.0043716082436750115 +Vatican 3.002600504657064e-10 +Akrotiri Sovereign Base Area 0.0000024314362587592923 +Zambia 0.018594119224502336 +Zimbabwe 0.009621356779606268 +------------------------------------- diff --git a/tests/queries/0_stateless/01720_country_perimeter_and_area.sh b/tests/queries/0_stateless/01720_country_perimeter_and_area.sh new file mode 100755 index 00000000000..75016ee1d1f --- /dev/null +++ b/tests/queries/0_stateless/01720_country_perimeter_and_area.sh @@ -0,0 +1,27 @@ +#!/usr/bin/env bash + +CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) +# shellcheck source=../shell_config.sh +. 
"$CURDIR"/../shell_config.sh + +${CLICKHOUSE_CLIENT} -q "drop table if exists country_polygons;" +${CLICKHOUSE_CLIENT} -q "create table country_polygons(name String, p Array(Array(Tuple(Float64, Float64)))) engine=MergeTree() order by tuple();" +cat ${CURDIR}/country_polygons.tsv | ${CLICKHOUSE_CLIENT} -q "insert into country_polygons format TSV" + +${CLICKHOUSE_CLIENT} -q "SELECT name, polygonPerimeterSpherical(p) from country_polygons" +${CLICKHOUSE_CLIENT} -q "SELECT '-------------------------------------'" +${CLICKHOUSE_CLIENT} -q "SELECT name, polygonAreaSpherical(p) from country_polygons" +${CLICKHOUSE_CLIENT} -q "SELECT '-------------------------------------'" +${CLICKHOUSE_CLIENT} -q "drop table if exists country_rings;" + + +${CLICKHOUSE_CLIENT} -q "create table country_rings(name String, p Array(Tuple(Float64, Float64))) engine=MergeTree() order by tuple();" +cat ${CURDIR}/country_rings.tsv | ${CLICKHOUSE_CLIENT} -q "insert into country_rings format TSV" + +${CLICKHOUSE_CLIENT} -q "SELECT name, polygonPerimeterSpherical(p) from country_rings" +${CLICKHOUSE_CLIENT} -q "SELECT '-------------------------------------'" +${CLICKHOUSE_CLIENT} -q "SELECT name, polygonAreaSpherical(p) from country_rings" +${CLICKHOUSE_CLIENT} -q "SELECT '-------------------------------------'" +${CLICKHOUSE_CLIENT} -q "drop table if exists country_rings;" + +${CLICKHOUSE_CLIENT} -q "drop table country_polygons" diff --git a/tests/queries/0_stateless/01720_type_map_and_casts.reference b/tests/queries/0_stateless/01720_type_map_and_casts.reference index 1ceb4d78f81..590bdedd7f2 100644 --- a/tests/queries/0_stateless/01720_type_map_and_casts.reference +++ b/tests/queries/0_stateless/01720_type_map_and_casts.reference @@ -31,10 +31,12 @@ Map(Date, Int32) Map(UUID, UInt16) {'00001192-0000-4000-8000-000000000001':1,'00001192-0000-4000-7000-000000000001':2} 0 2 1 -Map(Int128, Int32) +Map(Int128, String) {-1:'a',0:'b',1234567898765432193024000:'c',-1234567898765432193024000:'d'} a b c d a b +a b c d +b b b b diff --git a/tests/queries/0_stateless/01720_type_map_and_casts.sql b/tests/queries/0_stateless/01720_type_map_and_casts.sql index 2f333373dba..d7991999ef7 100644 --- a/tests/queries/0_stateless/01720_type_map_and_casts.sql +++ b/tests/queries/0_stateless/01720_type_map_and_casts.sql @@ -11,7 +11,7 @@ SELECT 'Map(Int8, Int8)'; SELECT m FROM table_map_with_key_integer; SELECT m[127], m[1], m[0], m[-1] FROM table_map_with_key_integer; -SELECT m[toInt8(number - 2)] FROM table_map_with_key_integer ARRAY JOIN range(5) AS number; +SELECT m[toInt8(number - 2)] FROM table_map_with_key_integer ARRAY JOIN [0, 1, 2, 3, 4] AS number; SELECT count() FROM table_map_with_key_integer WHERE m = map(); @@ -26,7 +26,7 @@ SELECT 'Map(Int32, UInt16)'; SELECT m FROM table_map_with_key_integer; SELECT m[-1], m[2147483647], m[-2147483648] FROM table_map_with_key_integer; -SELECT m[toInt32(number - 2)] FROM table_map_with_key_integer ARRAY JOIN range(5) AS number; +SELECT m[toInt32(number - 2)] FROM table_map_with_key_integer ARRAY JOIN [0, 1, 2, 3, 4] AS number; DROP TABLE IF EXISTS table_map_with_key_integer; @@ -39,7 +39,7 @@ SELECT 'Map(Date, Int32)'; SELECT m FROM table_map_with_key_integer; SELECT m[toDate('2020-01-01')], m[toDate('2020-01-02')], m[toDate('2020-01-03')] FROM table_map_with_key_integer; -SELECT m[toDate(number)] FROM table_map_with_key_integer ARRAY JOIN range(3) AS number; +SELECT m[toDate(number)] FROM table_map_with_key_integer ARRAY JOIN [0, 1, 2] AS number; DROP TABLE IF EXISTS table_map_with_key_integer; 
@@ -51,12 +51,14 @@ INSERT INTO table_map_with_key_integer VALUES ('2020-01-01', map('00001192-0000- SELECT 'Map(UUID, UInt16)'; SELECT m FROM table_map_with_key_integer; -SELECT - m[toUUID('00001192-0000-4000-6000-000000000001')], - m[toUUID('00001192-0000-4000-7000-000000000001')], +SELECT + m[toUUID('00001192-0000-4000-6000-000000000001')], + m[toUUID('00001192-0000-4000-7000-000000000001')], m[toUUID('00001192-0000-4000-8000-000000000001')] FROM table_map_with_key_integer; +SELECT m[257], m[1] FROM table_map_with_key_integer; -- { serverError 43 } + DROP TABLE IF EXISTS table_map_with_key_integer; CREATE TABLE table_map_with_key_integer (d DATE, m Map(Int128, String)) @@ -65,11 +67,14 @@ ENGINE = MergeTree() ORDER BY d; INSERT INTO table_map_with_key_integer SELECT '2020-01-01', map(-1, 'a', 0, 'b', toInt128(1234567898765432123456789), 'c', toInt128(-1234567898765432123456789), 'd'); -SELECT 'Map(Int128, Int32)'; +SELECT 'Map(Int128, String)'; SELECT m FROM table_map_with_key_integer; SELECT m[toInt128(-1)], m[toInt128(0)], m[toInt128(1234567898765432123456789)], m[toInt128(-1234567898765432123456789)] FROM table_map_with_key_integer; -SELECT m[toInt128(number - 2)] FROM table_map_with_key_integer ARRAY JOIN range(4) AS number; +SELECT m[toInt128(number - 2)] FROM table_map_with_key_integer ARRAY JOIN [0, 1, 2, 3] AS number; + +SELECT m[-1], m[0], m[toInt128(1234567898765432123456789)], m[toInt128(-1234567898765432123456789)] FROM table_map_with_key_integer; +SELECT m[toUInt64(0)], m[toInt64(0)], m[toUInt8(0)], m[toUInt16(0)] FROM table_map_with_key_integer; DROP TABLE IF EXISTS table_map_with_key_integer; diff --git a/tests/queries/0_stateless/01732_union_and_union_all.reference b/tests/queries/0_stateless/01732_union_and_union_all.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01732_union_and_union_all.sql b/tests/queries/0_stateless/01732_union_and_union_all.sql new file mode 100644 index 00000000000..2de6daa5bb9 --- /dev/null +++ b/tests/queries/0_stateless/01732_union_and_union_all.sql @@ -0,0 +1 @@ +select 1 UNION select 1 UNION ALL select 1; -- { serverError 558 } diff --git a/tests/queries/0_stateless/01736_null_as_default.sql b/tests/queries/0_stateless/01736_null_as_default.sql index f9a4bc69acf..a00011b06d4 100644 --- a/tests/queries/0_stateless/01736_null_as_default.sql +++ b/tests/queries/0_stateless/01736_null_as_default.sql @@ -1,5 +1,5 @@ -drop table if exists test_num; +drop table if exists test_enum; create table test_enum (c Nullable(Enum16('A' = 1, 'B' = 2))) engine Log; insert into test_enum values (1), (NULL); select * from test_enum; -drop table if exists test_num; +drop table test_enum; diff --git a/tests/queries/0_stateless/01737_clickhouse_server_wait_server_pool_long.config.xml b/tests/queries/0_stateless/01737_clickhouse_server_wait_server_pool_long.config.xml new file mode 100644 index 00000000000..2d0a480a375 --- /dev/null +++ b/tests/queries/0_stateless/01737_clickhouse_server_wait_server_pool_long.config.xml @@ -0,0 +1,35 @@ + + + + trace + true + + + 9000 + + ./ + + 0 + + + + + + + ::/0 + + + default + default + 1 + + + + + + + + + + + diff --git a/tests/queries/0_stateless/01737_clickhouse_server_wait_server_pool_long.reference b/tests/queries/0_stateless/01737_clickhouse_server_wait_server_pool_long.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01737_clickhouse_server_wait_server_pool_long.sh 
b/tests/queries/0_stateless/01737_clickhouse_server_wait_server_pool_long.sh new file mode 100755 index 00000000000..a4fd7529ab2 --- /dev/null +++ b/tests/queries/0_stateless/01737_clickhouse_server_wait_server_pool_long.sh
@@ -0,0 +1,88 @@
+#!/usr/bin/env bash
+
+CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
+# shellcheck source=../shell_config.sh
+. "$CUR_DIR"/../shell_config.sh
+
+server_opts=(
+    "--config-file=$CUR_DIR/$(basename "${BASH_SOURCE[0]}" .sh).config.xml"
+    "--"
+    # to avoid multiple listen sockets (it complicates port discovery)
+    "--listen_host=127.1"
+    # we will discover the real port later.
+    "--tcp_port=0"
+    "--shutdown_wait_unfinished=0"
+)
+CLICKHOUSE_WATCHDOG_ENABLE=0 $CLICKHOUSE_SERVER_BINARY "${server_opts[@]}" >& clickhouse-server.log &
+server_pid=$!
+
+trap cleanup EXIT
+function cleanup()
+{
+    kill -9 $server_pid
+    kill -9 $client_pid
+
+    echo "Test failed. Server log:"
+    cat clickhouse-server.log
+    rm -f clickhouse-server.log
+
+    exit 1
+}
+
+server_port=
+i=0 retries=300
+# wait until the server starts listening (max 30 seconds)
+while [[ -z $server_port ]] && [[ $i -lt $retries ]]; do
+    server_port=$(lsof -n -a -P -i tcp -s tcp:LISTEN -p $server_pid 2>/dev/null | awk -F'[ :]' '/LISTEN/ { print $(NF-1) }')
+    ((++i))
+    sleep 0.1
+done
+if [[ -z $server_port ]]; then
+    echo "Cannot wait for LISTEN socket" >&2
+    exit 1
+fi
+
+# wait for the server to start accepting tcp connections (max 30 seconds)
+i=0 retries=300
+while ! $CLICKHOUSE_CLIENT_BINARY --host 127.1 --port "$server_port" --format Null -q 'select 1' 2>/dev/null && [[ $i -lt $retries ]]; do
+    ((++i))
+    sleep 0.1
+done
+if ! $CLICKHOUSE_CLIENT_BINARY --host 127.1 --port "$server_port" --format Null -q 'select 1'; then
+    echo "Cannot wait until the server starts accepting connections on port $server_port" >&2
+    exit 1
+fi
+
+query_id="$CLICKHOUSE_DATABASE-$SECONDS"
+$CLICKHOUSE_CLIENT_BINARY --query_id "$query_id" --host 127.1 --port "$server_port" --format Null -q 'select sleepEachRow(1) from numbers(10)' 2>/dev/null &
+client_pid=$!
+
+# wait until the query appears in the processlist (max 10 seconds)
+# (it is enough to trigger the problem)
+i=0 retries=1000
+while [[ $($CLICKHOUSE_CLIENT_BINARY --host 127.1 --port "$server_port" -q "select count() from system.processes where query_id = '$query_id'") != "1" ]] && [[ $i -lt $retries ]]; do
+    ((++i))
+    sleep 0.01
+done
+if [[ $($CLICKHOUSE_CLIENT_BINARY --host 127.1 --port "$server_port" -q "select count() from system.processes where query_id = '$query_id'") != "1" ]]; then
+    echo "Cannot wait until the query starts" >&2
+    exit 1
+fi
+
+# send TERM and save the exit code to ensure that it is 0 (EXIT_SUCCESS)
+kill $server_pid
+wait $server_pid
+return_code=$?
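+# NOTE: 'wait' returns the exit status of the process it waited for, so a
+# non-zero $return_code below means the server did not exit cleanly on TERM
+# while the long-running query was still in flight.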
+ +wait $client_pid + +trap '' EXIT +if [ $return_code != 0 ]; then + cat clickhouse-server.log +fi +rm -f clickhouse-server.log + +exit $return_code diff --git a/tests/queries/0_stateless/01737_move_order_key_to_prewhere_select_final.reference b/tests/queries/0_stateless/01737_move_order_key_to_prewhere_select_final.reference new file mode 100644 index 00000000000..95479cf37ba --- /dev/null +++ b/tests/queries/0_stateless/01737_move_order_key_to_prewhere_select_final.reference @@ -0,0 +1,28 @@ +SELECT + x, + y, + z +FROM prewhere_move_select_final +PREWHERE y > 100 +SELECT + x, + y, + z +FROM prewhere_move_select_final +FINAL +PREWHERE y > 100 +SELECT + x, + y, + z +FROM prewhere_move_select_final +FINAL +WHERE z > 400 +SELECT + x, + y, + z +FROM prewhere_move_select_final +FINAL +PREWHERE y > 100 +WHERE (y > 100) AND (z > 400) diff --git a/tests/queries/0_stateless/01737_move_order_key_to_prewhere_select_final.sql b/tests/queries/0_stateless/01737_move_order_key_to_prewhere_select_final.sql new file mode 100644 index 00000000000..a3a882c461a --- /dev/null +++ b/tests/queries/0_stateless/01737_move_order_key_to_prewhere_select_final.sql @@ -0,0 +1,15 @@ +DROP TABLE IF EXISTS prewhere_move_select_final; +CREATE TABLE prewhere_move_select_final (x Int, y Int, z Int) ENGINE = ReplacingMergeTree() ORDER BY (x, y); +INSERT INTO prewhere_move_select_final SELECT number, number * 2, number * 3 FROM numbers(1000); + +-- order key can be pushed down with final +EXPLAIN SYNTAX SELECT * FROM prewhere_move_select_final WHERE y > 100; +EXPLAIN SYNTAX SELECT * FROM prewhere_move_select_final FINAL WHERE y > 100; + +-- can not be pushed down +EXPLAIN SYNTAX SELECT * FROM prewhere_move_select_final FINAL WHERE z > 400; + +-- only y can be pushed down +EXPLAIN SYNTAX SELECT * FROM prewhere_move_select_final FINAL WHERE y > 100 and z > 400; + +DROP TABLE prewhere_move_select_final; diff --git a/tests/queries/0_stateless/01739_index_hint.reference b/tests/queries/0_stateless/01739_index_hint.reference new file mode 100644 index 00000000000..6aa40c5d302 --- /dev/null +++ b/tests/queries/0_stateless/01739_index_hint.reference @@ -0,0 +1,35 @@ +-- { echo } + +drop table if exists tbl; +create table tbl (p Int64, t Int64, f Float64) Engine=MergeTree partition by p order by t settings index_granularity=1; +insert into tbl select number / 4, number, 0 from numbers(16); +select * from tbl WHERE indexHint(t = 1) order by t; +0 0 0 +0 1 0 +select * from tbl WHERE indexHint(t in (select toInt64(number) + 2 from numbers(3))) order by t; +0 1 0 +0 2 0 +0 3 0 +1 4 0 +select * from tbl WHERE indexHint(p = 2) order by t; +2 8 0 +2 9 0 +2 10 0 +2 11 0 +select * from tbl WHERE indexHint(p in (select toInt64(number) - 2 from numbers(3))) order by t; +0 0 0 +0 1 0 +0 2 0 +0 3 0 +drop table tbl; +drop table if exists XXXX; +create table XXXX (t Int64, f Float64) Engine=MergeTree order by t settings index_granularity=128; +insert into XXXX select number*60, 0 from numbers(100000); +SELECT count() FROM XXXX WHERE indexHint(t = 42); +128 +drop table if exists XXXX; +create table XXXX (t Int64, f Float64) Engine=MergeTree order by t settings index_granularity=8192; +insert into XXXX select number*60, 0 from numbers(100000); +SELECT count() FROM XXXX WHERE indexHint(t = toDateTime(0)); +100000 +drop table XXXX; diff --git a/tests/queries/0_stateless/01739_index_hint.sql b/tests/queries/0_stateless/01739_index_hint.sql new file mode 100644 index 00000000000..28395c2dc1d --- /dev/null +++ 
b/tests/queries/0_stateless/01739_index_hint.sql @@ -0,0 +1,35 @@ +-- { echo } + +drop table if exists tbl; + +create table tbl (p Int64, t Int64, f Float64) Engine=MergeTree partition by p order by t settings index_granularity=1; + +insert into tbl select number / 4, number, 0 from numbers(16); + +select * from tbl WHERE indexHint(t = 1) order by t; + +select * from tbl WHERE indexHint(t in (select toInt64(number) + 2 from numbers(3))) order by t; + +select * from tbl WHERE indexHint(p = 2) order by t; + +select * from tbl WHERE indexHint(p in (select toInt64(number) - 2 from numbers(3))) order by t; + +drop table tbl; + +drop table if exists XXXX; + +create table XXXX (t Int64, f Float64) Engine=MergeTree order by t settings index_granularity=128; + +insert into XXXX select number*60, 0 from numbers(100000); + +SELECT count() FROM XXXX WHERE indexHint(t = 42); + +drop table if exists XXXX; + +create table XXXX (t Int64, f Float64) Engine=MergeTree order by t settings index_granularity=8192; + +insert into XXXX select number*60, 0 from numbers(100000); + +SELECT count() FROM XXXX WHERE indexHint(t = toDateTime(0)); + +drop table XXXX; diff --git a/tests/queries/0_stateless/01744_fuse_sum_count_aggregate.reference b/tests/queries/0_stateless/01744_fuse_sum_count_aggregate.reference new file mode 100644 index 00000000000..70c19fc8ced --- /dev/null +++ b/tests/queries/0_stateless/01744_fuse_sum_count_aggregate.reference @@ -0,0 +1,12 @@ +210 230 20 +SELECT + sum(a), + sumCount(b).1, + sumCount(b).2 +FROM fuse_tbl +---------NOT trigger fuse-------- +210 11.5 +SELECT + sum(a), + avg(b) +FROM fuse_tbl diff --git a/tests/queries/0_stateless/01744_fuse_sum_count_aggregate.sql b/tests/queries/0_stateless/01744_fuse_sum_count_aggregate.sql new file mode 100644 index 00000000000..cad7b5803d4 --- /dev/null +++ b/tests/queries/0_stateless/01744_fuse_sum_count_aggregate.sql @@ -0,0 +1,11 @@ +DROP TABLE IF EXISTS fuse_tbl; +CREATE TABLE fuse_tbl(a Int8, b Int8) Engine = Log; +INSERT INTO fuse_tbl SELECT number, number + 1 FROM numbers(1, 20); + +SET optimize_fuse_sum_count_avg = 1; +SELECT sum(a), sum(b), count(b) from fuse_tbl; +EXPLAIN SYNTAX SELECT sum(a), sum(b), count(b) from fuse_tbl; +SELECT '---------NOT trigger fuse--------'; +SELECT sum(a), avg(b) from fuse_tbl; +EXPLAIN SYNTAX SELECT sum(a), avg(b) from fuse_tbl; +DROP TABLE fuse_tbl; diff --git a/tests/queries/0_stateless/01746_long_zlib_http_compression_json_format.reference b/tests/queries/0_stateless/01746_long_zlib_http_compression_json_format.reference index 7c089a2fd05..92dfd99c259 100644 --- a/tests/queries/0_stateless/01746_long_zlib_http_compression_json_format.reference +++ b/tests/queries/0_stateless/01746_long_zlib_http_compression_json_format.reference @@ -5,7 +5,7 @@ "host": "clickhouse-test-host-001.clickhouse.com", "home": "clickhouse", "detail": "clickhouse", - "row_number": "999998" + "row_number": "99998" }, { "datetime": "2020-12-12", @@ -13,11 +13,11 @@ "host": "clickhouse-test-host-001.clickhouse.com", "home": "clickhouse", "detail": "clickhouse", - "row_number": "999999" + "row_number": "99999" } ], - "rows": 1000000, + "rows": 100000, - "rows_before_limit_at_least": 1048080, + "rows_before_limit_at_least": 131010, diff --git a/tests/queries/0_stateless/01746_long_zlib_http_compression_json_format.sh b/tests/queries/0_stateless/01746_long_zlib_http_compression_json_format.sh index e663b329660..7a2343a953a 100755 --- a/tests/queries/0_stateless/01746_long_zlib_http_compression_json_format.sh +++ 
b/tests/queries/0_stateless/01746_long_zlib_http_compression_json_format.sh @@ -4,4 +4,4 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) # shellcheck source=../shell_config.sh . "$CURDIR"/../shell_config.sh -${CLICKHOUSE_CURL} -sS -H 'Accept-Encoding: gzip' "${CLICKHOUSE_URL}&enable_http_compression=1&http_zlib_compression_level=1" -d "SELECT toDate('2020-12-12') as datetime, 'test-pipeline' as pipeline, 'clickhouse-test-host-001.clickhouse.com' as host, 'clickhouse' as home, 'clickhouse' as detail, number as row_number FROM numbers(1000000) FORMAT JSON" | gzip -d | tail -n30 | head -n23 +${CLICKHOUSE_CURL} -sS -H 'Accept-Encoding: gzip' "${CLICKHOUSE_URL}&enable_http_compression=1&http_zlib_compression_level=1" -d "SELECT toDate('2020-12-12') as datetime, 'test-pipeline' as pipeline, 'clickhouse-test-host-001.clickhouse.com' as host, 'clickhouse' as home, 'clickhouse' as detail, number as row_number FROM numbers(100000) FORMAT JSON" | gzip -d | tail -n30 | head -n23 diff --git a/tests/queries/0_stateless/01748_partition_id_pruning.reference b/tests/queries/0_stateless/01748_partition_id_pruning.reference new file mode 100644 index 00000000000..c1e4e2c78ef --- /dev/null +++ b/tests/queries/0_stateless/01748_partition_id_pruning.reference @@ -0,0 +1,17 @@ +1 1 +1 2 +1 3 +1 1 +1 2 +1 3 +3 +1 +11 +21 +31 +41 +51 +61 +71 +81 +91 diff --git a/tests/queries/0_stateless/01748_partition_id_pruning.sql b/tests/queries/0_stateless/01748_partition_id_pruning.sql new file mode 100644 index 00000000000..17a405e17ad --- /dev/null +++ b/tests/queries/0_stateless/01748_partition_id_pruning.sql @@ -0,0 +1,31 @@ +drop table if exists x; + +create table x (i int, j int) engine MergeTree partition by i order by j settings index_granularity = 1; + +insert into x values (1, 1), (1, 2), (1, 3), (2, 4), (2, 5), (2, 6); + +set max_rows_to_read = 3; + +select * from x where _partition_id = partitionId(1); + +set max_rows_to_read = 4; -- one row for subquery + +select * from x where _partition_id in (select partitionId(number + 1) from numbers(1)); + +-- trivial count optimization test +set max_rows_to_read = 1; -- one row for subquery +select count() from x where _partition_id in (select partitionId(number + 1) from numbers(1)); + +drop table x; + +drop table if exists mt; + +create table mt (n UInt64) engine=MergeTree order by n partition by n % 10; + +set max_rows_to_read = 200; + +insert into mt select * from numbers(100); + +select * from mt where toUInt64(substr(_part, 1, position(_part, '_') - 1)) = 1; + +drop table mt; diff --git a/tests/queries/0_stateless/01753_fix_clickhouse_format.sh b/tests/queries/0_stateless/01753_fix_clickhouse_format.sh index 48ce8ded1ad..5cdd53b2166 100755 --- a/tests/queries/0_stateless/01753_fix_clickhouse_format.sh +++ b/tests/queries/0_stateless/01753_fix_clickhouse_format.sh @@ -8,4 +8,4 @@ echo "select 1; select 1 union all (select 1 union distinct select 1); " | $CL echo "select 1; select 1 union all (select 1 union distinct select 1); -- comment " | $CLICKHOUSE_FORMAT -n; -echo "insert into t values (1); " | $CLICKHOUSE_FORMAT -n 2>&1 \ | grep -F -q "Code: 1004" && echo 'OK' || echo 'FAIL' +echo "insert into t values (1); " | $CLICKHOUSE_FORMAT -n 2>&1 \ | grep -F -q "Code: 578" && echo 'OK' || echo 'FAIL' diff --git a/tests/queries/0_stateless/01753_max_uri_size.reference b/tests/queries/0_stateless/01753_max_uri_size.reference new file mode 100644 index 00000000000..d00491fd7e5 --- /dev/null +++ b/tests/queries/0_stateless/01753_max_uri_size.reference @@ -0,0 +1 
@@
+1
diff --git a/tests/queries/0_stateless/01753_max_uri_size.sh b/tests/queries/0_stateless/01753_max_uri_size.sh new file mode 100755 index 00000000000..62bc4f2c26f --- /dev/null +++ b/tests/queries/0_stateless/01753_max_uri_size.sh
@@ -0,0 +1,17 @@
+#!/usr/bin/env bash
+
+CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
+# shellcheck source=../shell_config.sh
+. "$CURDIR"/../shell_config.sh
+
+# NOTE: since 'max_uri_size' doesn't affect the request itself, this test relies on the default value of this setting (1 MiB).
+
+python3 -c "
+print('${CLICKHOUSE_URL}', end='')
+print('&hello=world'*100000, end='')
+print('&query=SELECT+1')
+" > "${CLICKHOUSE_TMP}/url.txt"
+
+wget --input-file "${CLICKHOUSE_TMP}/url.txt" 2>&1 | grep -Fc "400: Bad Request"
+
+rm "${CLICKHOUSE_TMP}/url.txt"
diff --git a/tests/queries/0_stateless/01753_system_zookeeper_query_param_path_long.reference b/tests/queries/0_stateless/01753_system_zookeeper_query_param_path_long.reference new file mode 100644 index 00000000000..9daeafb9864 --- /dev/null +++ b/tests/queries/0_stateless/01753_system_zookeeper_query_param_path_long.reference
@@ -0,0 +1 @@
+test
diff --git a/tests/queries/0_stateless/01753_system_zookeeper_query_param_path_long.sh b/tests/queries/0_stateless/01753_system_zookeeper_query_param_path_long.sh new file mode 100755 index 00000000000..d3046e73b93 --- /dev/null +++ b/tests/queries/0_stateless/01753_system_zookeeper_query_param_path_long.sh
@@ -0,0 +1,14 @@
+#!/usr/bin/env bash
+
+CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
+# shellcheck source=../shell_config.sh
+. "$CUR_DIR"/../shell_config.sh
+
+
+${CLICKHOUSE_CLIENT} --query="DROP TABLE IF EXISTS test_01753";
+${CLICKHOUSE_CLIENT} --query="CREATE TABLE test_01753 (n Int8) ENGINE=ReplicatedMergeTree('/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/test_01753/test', '1') ORDER BY n"
+
+${CLICKHOUSE_CLIENT} --query="SELECT name FROM system.zookeeper WHERE path = {path:String}" --param_path "$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/test_01753"
+
+
+${CLICKHOUSE_CLIENT} --query="DROP TABLE test_01753 SYNC";
diff --git a/tests/queries/0_stateless/01754_clickhouse_format_backslash.reference b/tests/queries/0_stateless/01754_clickhouse_format_backslash.reference new file mode 100644 index 00000000000..328483d9867 --- /dev/null +++ b/tests/queries/0_stateless/01754_clickhouse_format_backslash.reference
@@ -0,0 +1,16 @@
+SELECT * \
+FROM \
+( \
+    SELECT 1 AS x \
+    UNION ALL \
+    SELECT 1 \
+    UNION DISTINCT \
+    SELECT 3 \
+)
+SELECT 1 \
+UNION ALL \
+( \
+    SELECT 1 \
+    UNION DISTINCT \
+    SELECT 1 \
+)
diff --git a/tests/queries/0_stateless/01754_clickhouse_format_backslash.sh b/tests/queries/0_stateless/01754_clickhouse_format_backslash.sh new file mode 100755 index 00000000000..6a76dc9c5c8 --- /dev/null +++ b/tests/queries/0_stateless/01754_clickhouse_format_backslash.sh
@@ -0,0 +1,9 @@
+#!/usr/bin/env bash
+
+CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
+# shellcheck source=../shell_config.sh
+. 
"$CURDIR"/../shell_config.sh + +echo "select * from (select 1 as x union all select 1 union distinct select 3)" | $CLICKHOUSE_FORMAT --backslash; + +echo "select 1 union all (select 1 union distinct select 1)" | $CLICKHOUSE_FORMAT --backslash; diff --git a/tests/queries/0_stateless/01754_cluster_all_replicas_shard_num.reference b/tests/queries/0_stateless/01754_cluster_all_replicas_shard_num.reference new file mode 100644 index 00000000000..d308efd8662 --- /dev/null +++ b/tests/queries/0_stateless/01754_cluster_all_replicas_shard_num.reference @@ -0,0 +1,9 @@ +1 +1 +1 +2 +1 +2 +1 +1 +2 diff --git a/tests/queries/0_stateless/01754_cluster_all_replicas_shard_num.sql b/tests/queries/0_stateless/01754_cluster_all_replicas_shard_num.sql new file mode 100644 index 00000000000..833f86c538d --- /dev/null +++ b/tests/queries/0_stateless/01754_cluster_all_replicas_shard_num.sql @@ -0,0 +1,8 @@ +SELECT _shard_num FROM cluster('test_shard_localhost', system.one); +SELECT _shard_num FROM clusterAllReplicas('test_shard_localhost', system.one); + +SELECT _shard_num FROM cluster('test_cluster_two_shards', system.one) ORDER BY _shard_num; +SELECT _shard_num FROM clusterAllReplicas('test_cluster_two_shards', system.one) ORDER BY _shard_num; + +SELECT _shard_num FROM cluster('test_cluster_one_shard_two_replicas', system.one) ORDER BY _shard_num; +SELECT _shard_num FROM clusterAllReplicas('test_cluster_one_shard_two_replicas', system.one) ORDER BY _shard_num; diff --git a/tests/queries/0_stateless/01755_client_highlight_multi_line_comment_regression.expect b/tests/queries/0_stateless/01755_client_highlight_multi_line_comment_regression.expect new file mode 100755 index 00000000000..65b9bde235b --- /dev/null +++ b/tests/queries/0_stateless/01755_client_highlight_multi_line_comment_regression.expect @@ -0,0 +1,25 @@ +#!/usr/bin/expect -f + +log_user 0 +set timeout 5 +match_max 100000 +# A default timeout action is to do nothing, change it to fail +expect_after { + timeout { + exit 2 + } +} + +set basedir [file dirname $argv0] +spawn bash -c "source $basedir/../shell_config.sh ; \$CLICKHOUSE_CLIENT_BINARY \$CLICKHOUSE_CLIENT_OPT" +expect ":) " + +# regression for heap-buffer-overflow issue (under ASAN) +send -- "/**" +expect "/**" +# just in case few more bytes +send -- "foobar" +expect "/**foobar" + +send -- "\3\4" +expect eof diff --git a/tests/queries/0_stateless/01755_client_highlight_multi_line_comment_regression.reference b/tests/queries/0_stateless/01755_client_highlight_multi_line_comment_regression.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01755_shard_pruning_with_literal.reference b/tests/queries/0_stateless/01755_shard_pruning_with_literal.reference new file mode 100644 index 00000000000..6ed281c757a --- /dev/null +++ b/tests/queries/0_stateless/01755_shard_pruning_with_literal.reference @@ -0,0 +1,2 @@ +1 +1 diff --git a/tests/queries/0_stateless/01755_shard_pruning_with_literal.sql b/tests/queries/0_stateless/01755_shard_pruning_with_literal.sql new file mode 100644 index 00000000000..0e93d76573c --- /dev/null +++ b/tests/queries/0_stateless/01755_shard_pruning_with_literal.sql @@ -0,0 +1,14 @@ +set optimize_skip_unused_shards=1; + +drop table if exists data_01755; +drop table if exists dist_01755; + +create table data_01755 (i Int) Engine=Memory; +create table dist_01755 as data_01755 Engine=Distributed(test_cluster_two_shards, currentDatabase(), data_01755, i); + +insert into data_01755 values (1); + +select * from dist_01755 where 1 
settings enable_early_constant_folding = 0;
+
+drop table if exists data_01755;
+drop table if exists dist_01755;
diff --git a/tests/queries/0_stateless/01756_optimize_skip_unused_shards_rewrite_in.reference b/tests/queries/0_stateless/01756_optimize_skip_unused_shards_rewrite_in.reference new file mode 100644 index 00000000000..a1bfcf043da --- /dev/null +++ b/tests/queries/0_stateless/01756_optimize_skip_unused_shards_rewrite_in.reference
@@ -0,0 +1,25 @@
+(0, 2)
+0 0
+0 0
+WITH CAST(\'default\', \'String\') AS id_no SELECT one.dummy, ignore(id_no) FROM system.one WHERE dummy IN (0, 2)
+WITH CAST(\'default\', \'String\') AS id_no SELECT one.dummy, ignore(id_no) FROM system.one WHERE dummy IN (0, 2)
+optimize_skip_unused_shards_rewrite_in(0, 2)
+0 0
+WITH CAST(\'default\', \'String\') AS id_02 SELECT one.dummy, ignore(id_02) FROM system.one WHERE dummy IN tuple(0)
+WITH CAST(\'default\', \'String\') AS id_02 SELECT one.dummy, ignore(id_02) FROM system.one WHERE dummy IN tuple(2)
+optimize_skip_unused_shards_rewrite_in(2,)
+WITH CAST(\'default\', \'String\') AS id_2 SELECT one.dummy, ignore(id_2) FROM system.one WHERE dummy IN tuple(2)
+optimize_skip_unused_shards_rewrite_in(0,)
+0 0
+WITH CAST(\'default\', \'String\') AS id_0 SELECT one.dummy, ignore(id_0) FROM system.one WHERE dummy IN tuple(0)
+errors
+others
+0
+0
+0
+different types -- prohibited
+different types -- conversion
+0
+optimize_skip_unused_shards_limit
+0
+0
diff --git a/tests/queries/0_stateless/01756_optimize_skip_unused_shards_rewrite_in.sql b/tests/queries/0_stateless/01756_optimize_skip_unused_shards_rewrite_in.sql new file mode 100644 index 00000000000..dc481ccca72 --- /dev/null +++ b/tests/queries/0_stateless/01756_optimize_skip_unused_shards_rewrite_in.sql
@@ -0,0 +1,142 @@
+-- NOTE: this test cannot use 'current_database = currentDatabase()',
+-- because it is not propagated via remote queries,
+-- hence it uses 'with (select currentDatabase()) as X'
+-- (with subquery to expand it on the initiator).
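+-- With optimize_skip_unused_shards_rewrite_in=1 the initiator rewrites the IN
+-- condition per shard: e.g. 'dummy IN (0, 2)' is sent to one shard as
+-- 'dummy IN tuple(0)' and to the other as 'dummy IN tuple(2)', which the
+-- system.query_log checks below verify.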
+
+drop table if exists dist_01756;
+drop table if exists dist_01756_str;
+drop table if exists dist_01756_column;
+drop table if exists data_01756_str;
+
+-- SELECT
+--     intHash64(0) % 2,
+--     intHash64(2) % 2
+-- ┌─modulo(intHash64(0), 2)─┬─modulo(intHash64(2), 2)─┐
+-- │ 0 │ 1 │
+-- └─────────────────────────┴─────────────────────────┘
+create table dist_01756 as system.one engine=Distributed(test_cluster_two_shards, system, one, intHash64(dummy));
+
+-- separate log entry for localhost queries
+set prefer_localhost_replica=0;
+set force_optimize_skip_unused_shards=2;
+set optimize_skip_unused_shards=1;
+set optimize_skip_unused_shards_rewrite_in=0;
+set log_queries=1;
+
+--
+-- w/o optimize_skip_unused_shards_rewrite_in=1
+--
+select '(0, 2)';
+with (select currentDatabase()) as id_no select *, ignore(id_no) from dist_01756 where dummy in (0, 2);
+system flush logs;
+select query from system.query_log where
+    event_date = today() and
+    event_time > now() - interval 1 hour and
+    not is_initial_query and
+    query not like '%system.query_log%' and
+    query like concat('WITH%', currentDatabase(), '%AS id_no %') and
+    type = 'QueryFinish'
+order by query;
+
+--
+-- w/ optimize_skip_unused_shards_rewrite_in=1
+--
+
+set optimize_skip_unused_shards_rewrite_in=1;
+
+-- detailed coverage for realistic examples
+select 'optimize_skip_unused_shards_rewrite_in(0, 2)';
+with (select currentDatabase()) as id_02 select *, ignore(id_02) from dist_01756 where dummy in (0, 2);
+system flush logs;
+select query from system.query_log where
+    event_date = today() and
+    event_time > now() - interval 1 hour and
+    not is_initial_query and
+    query not like '%system.query_log%' and
+    query like concat('WITH%', currentDatabase(), '%AS id_02 %') and
+    type = 'QueryFinish'
+order by query;
+
+select 'optimize_skip_unused_shards_rewrite_in(2,)';
+with (select currentDatabase()) as id_2 select *, ignore(id_2) from dist_01756 where dummy in (2,);
+system flush logs;
+select query from system.query_log where
+    event_date = today() and
+    event_time > now() - interval 1 hour and
+    not is_initial_query and
+    query not like '%system.query_log%' and
+    query like concat('WITH%', currentDatabase(), '%AS id_2 %') and
+    type = 'QueryFinish'
+order by query;
+
+select 'optimize_skip_unused_shards_rewrite_in(0,)';
+with (select currentDatabase()) as id_0 select *, ignore(id_0) from dist_01756 where dummy in (0,);
+system flush logs;
+select query from system.query_log where
+    event_date = today() and
+    event_time > now() - interval 1 hour and
+    not is_initial_query and
+    query not like '%system.query_log%' and
+    query like concat('WITH%', currentDatabase(), '%AS id_0 %') and
+    type = 'QueryFinish'
+order by query;
+
+--
+-- errors
+--
+select 'errors';
+
+-- not tuple
+select * from dist_01756 where dummy in (0); -- { serverError 507 }
+-- optimize_skip_unused_shards does not support non-constants
+select * from dist_01756 where dummy in (select * from system.one); -- { serverError 507 }
+select * from dist_01756 where dummy in (toUInt8(0)); -- { serverError 507 }
+-- wrong type (tuple)
+select * from dist_01756 where dummy in ('0'); -- { serverError 507 }
+-- intHash64 does not accept string
+select * from dist_01756 where dummy in ('0', '2'); -- { serverError 43 }
+-- NOT IN is not supported
+select * from dist_01756 where dummy not in (0, 2); -- { serverError 507 }
+
+--
+-- others
+--
+select 'others';
+
+select * from dist_01756 where dummy not in (2, 3) and dummy in (0, 2);
+select * from dist_01756 where dummy in tuple(0, 
2);
+select * from dist_01756 where dummy in tuple(0);
+select * from dist_01756 where dummy in tuple(2);
+-- Identifier is NULL
+select (2 IN (2,)), * from dist_01756 where dummy in (0, 2) format Null;
+-- Literal is NULL
+select (dummy IN (toUInt8(2),)), * from dist_01756 where dummy in (0, 2) format Null;
+
+-- different types
+select 'different types -- prohibited';
+create table data_01756_str (key String) engine=Memory();
+create table dist_01756_str as data_01756_str engine=Distributed(test_cluster_two_shards, currentDatabase(), data_01756_str, cityHash64(key));
+select * from dist_01756_str where key in ('0', '2');
+select * from dist_01756_str where key in ('0', Null); -- { serverError 507 }
+select * from dist_01756_str where key in (0, 2); -- { serverError 53 }
+select * from dist_01756_str where key in (0, Null); -- { serverError 53 }
+
+-- different types #2
+select 'different types -- conversion';
+create table dist_01756_column as system.one engine=Distributed(test_cluster_two_shards, system, one, dummy);
+select * from dist_01756_column where dummy in (0, '255');
+select * from dist_01756_column where dummy in (0, '255foo'); -- { serverError 53 }
+
+-- optimize_skip_unused_shards_limit
+select 'optimize_skip_unused_shards_limit';
+select * from dist_01756 where dummy in (0, 2) settings optimize_skip_unused_shards_limit=1; -- { serverError 507 }
+select * from dist_01756 where dummy in (0, 2) settings optimize_skip_unused_shards_limit=1, force_optimize_skip_unused_shards=0;
+
+drop table dist_01756;
+drop table dist_01756_str;
+drop table dist_01756_column;
+drop table data_01756_str;
diff --git a/tests/queries/0_stateless/01757_optimize_skip_unused_shards_limit.reference b/tests/queries/0_stateless/01757_optimize_skip_unused_shards_limit.reference
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/tests/queries/0_stateless/01757_optimize_skip_unused_shards_limit.sql b/tests/queries/0_stateless/01757_optimize_skip_unused_shards_limit.sql
new file mode 100644
index 00000000000..68247dbfbe5
--- /dev/null
+++ b/tests/queries/0_stateless/01757_optimize_skip_unused_shards_limit.sql
@@ -0,0 +1,33 @@
+drop table if exists dist_01757;
+create table dist_01757 as system.one engine=Distributed(test_cluster_two_shards, system, one, dummy);
+
+set optimize_skip_unused_shards=1;
+set force_optimize_skip_unused_shards=2;
+
+-- in
+select * from dist_01757 where dummy in (0,) format Null;
+select * from dist_01757 where dummy in (0, 1) format Null settings optimize_skip_unused_shards_limit=2;
+
+-- in negative
+select * from dist_01757 where dummy in (0, 1) settings optimize_skip_unused_shards_limit=1; -- { serverError 507 }
+
+-- or negative
+select * from dist_01757 where dummy = 0 or dummy = 1 settings optimize_skip_unused_shards_limit=1; -- { serverError 507 }
+
+-- or
+select * from dist_01757 where dummy = 0 or dummy = 1 format Null settings optimize_skip_unused_shards_limit=2;
+
+-- and negative
+select * from dist_01757 where dummy = 0 and dummy = 1 settings optimize_skip_unused_shards_limit=1; -- { serverError 507 }
+select * from dist_01757 where dummy = 0 and dummy = 2 and dummy = 3 settings optimize_skip_unused_shards_limit=1; -- { serverError 507 }
+select * from dist_01757 where dummy = 0 and dummy = 2 and dummy = 3 settings optimize_skip_unused_shards_limit=2; -- { serverError 507 }
+
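-- A hedged reading of this sub-test: the limit caps how many candidate values of
-- the sharding key the pruning analysis may collect from the WHERE clause before
-- it gives up; IN elements, OR branches and AND conjuncts all contribute, which
-- is why limit=1 fails above while limit=2 and limit=3 pass below. For instance,
-- a single value stays within the smallest limit:
--     select * from dist_01757 where dummy = 0 settings optimize_skip_unused_shards_limit=1;
+-- and
+select * from dist_01757 where dummy = 0 and dummy = 1 settings optimize_skip_unused_shards_limit=2;
+select * from dist_01757 where dummy = 0 and dummy = 1 and dummy = 3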
settings optimize_skip_unused_shards_limit=3; + +-- ARGUMENT_OUT_OF_BOUND error +select * from dist_01757 where dummy in (0, 1) settings optimize_skip_unused_shards_limit=0; -- { serverError 69 } +select * from dist_01757 where dummy in (0, 1) settings optimize_skip_unused_shards_limit=9223372036854775808; -- { serverError 69 } + +drop table dist_01757; diff --git a/tests/queries/0_stateless/01758_optimize_skip_unused_shards_once.sh b/tests/queries/0_stateless/01758_optimize_skip_unused_shards_once.sh index b26961eda8e..d18ea8694a9 100755 --- a/tests/queries/0_stateless/01758_optimize_skip_unused_shards_once.sh +++ b/tests/queries/0_stateless/01758_optimize_skip_unused_shards_once.sh @@ -10,3 +10,5 @@ $CLICKHOUSE_CLIENT --optimize_skip_unused_shards=1 -nm -q " create table dist_01758 as system.one engine=Distributed(test_cluster_two_shards, system, one, dummy); select * from dist_01758 where dummy = 0 format Null; " |& grep -o "StorageDistributed (dist_01758).*" + +$CLICKHOUSE_CLIENT -q "drop table dist_01758" 2>/dev/null diff --git a/tests/queries/0_stateless/01759_dictionary_unique_attribute_names.reference b/tests/queries/0_stateless/01759_dictionary_unique_attribute_names.reference new file mode 100644 index 00000000000..bb08bbbe0b5 --- /dev/null +++ b/tests/queries/0_stateless/01759_dictionary_unique_attribute_names.reference @@ -0,0 +1,3 @@ +0 2 3 +1 5 6 +2 8 9 diff --git a/tests/queries/0_stateless/01759_dictionary_unique_attribute_names.sql b/tests/queries/0_stateless/01759_dictionary_unique_attribute_names.sql new file mode 100644 index 00000000000..11a52976716 --- /dev/null +++ b/tests/queries/0_stateless/01759_dictionary_unique_attribute_names.sql @@ -0,0 +1,32 @@ +DROP DATABASE IF EXISTS 01759_db; +CREATE DATABASE 01759_db; + +DROP TABLE IF EXISTS 01759_db.dictionary_source_table; +CREATE TABLE 01759_db.dictionary_source_table +( + key UInt64, + value1 UInt64, + value2 UInt64 +) +ENGINE = TinyLog; + +INSERT INTO 01759_db.dictionary_source_table VALUES (0, 2, 3), (1, 5, 6), (2, 8, 9); + +DROP DICTIONARY IF EXISTS 01759_db.test_dictionary; + +CREATE DICTIONARY 01759_db.test_dictionary(key UInt64, value1 UInt64, value1 UInt64) +PRIMARY KEY key +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() USER 'default' TABLE 'dictionary_source_table' DB '01759_db')) +LAYOUT(COMPLEX_KEY_DIRECT()); -- {serverError 36} + +CREATE DICTIONARY 01759_db.test_dictionary(key UInt64, value1 UInt64, value2 UInt64) +PRIMARY KEY key +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() USER 'default' TABLE 'dictionary_source_table' DB '01759_db')) +LAYOUT(COMPLEX_KEY_DIRECT()); + +SELECT number, dictGet('01759_db.test_dictionary', 'value1', tuple(number)) as value1, + dictGet('01759_db.test_dictionary', 'value2', tuple(number)) as value2 FROM system.numbers LIMIT 3; + +DROP TABLE 01759_db.dictionary_source_table; + +DROP DATABASE 01759_db; diff --git a/tests/queries/0_stateless/01759_optimize_skip_unused_shards_zero_shards.reference b/tests/queries/0_stateless/01759_optimize_skip_unused_shards_zero_shards.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01759_optimize_skip_unused_shards_zero_shards.sql b/tests/queries/0_stateless/01759_optimize_skip_unused_shards_zero_shards.sql new file mode 100644 index 00000000000..2ddf318313f --- /dev/null +++ b/tests/queries/0_stateless/01759_optimize_skip_unused_shards_zero_shards.sql @@ -0,0 +1,3 @@ +create table dist_01756 (dummy UInt8) ENGINE = Distributed('test_cluster_two_shards', 'system', 'one', dummy); +select 
ignore(1), * from dist_01756 where 0 settings optimize_skip_unused_shards=1, force_optimize_skip_unused_shards=1;
+drop table dist_01756;
diff --git a/tests/queries/0_stateless/01760_ddl_dictionary_use_current_database_name.reference b/tests/queries/0_stateless/01760_ddl_dictionary_use_current_database_name.reference
new file mode 100644
index 00000000000..6594f6baa3d
--- /dev/null
+++ b/tests/queries/0_stateless/01760_ddl_dictionary_use_current_database_name.reference
@@ -0,0 +1,8 @@
+dictGet
+0
+1
+0
+dictHas
+1
+1
+0
diff --git a/tests/queries/0_stateless/01760_ddl_dictionary_use_current_database_name.sql b/tests/queries/0_stateless/01760_ddl_dictionary_use_current_database_name.sql
new file mode 100644
index 00000000000..9c405640930
--- /dev/null
+++ b/tests/queries/0_stateless/01760_ddl_dictionary_use_current_database_name.sql
@@ -0,0 +1,29 @@
+DROP TABLE IF EXISTS ddl_dictionary_test_source;
+CREATE TABLE ddl_dictionary_test_source
+(
+    id UInt64,
+    value UInt64
+)
+ENGINE = TinyLog;
+
+INSERT INTO ddl_dictionary_test_source VALUES (0, 0);
+INSERT INTO ddl_dictionary_test_source VALUES (1, 1);
+
+DROP DICTIONARY IF EXISTS ddl_dictionary_test;
+CREATE DICTIONARY ddl_dictionary_test
+(
+    id UInt64,
+    value UInt64 DEFAULT 0
+)
+PRIMARY KEY id
+SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() USER 'default' TABLE 'ddl_dictionary_test_source'))
+LAYOUT(DIRECT());
+
+SELECT 'dictGet';
+SELECT dictGet('ddl_dictionary_test', 'value', number) FROM system.numbers LIMIT 3;
+
+SELECT 'dictHas';
+SELECT dictHas('ddl_dictionary_test', number) FROM system.numbers LIMIT 3;
+
+DROP TABLE ddl_dictionary_test_source;
+DROP DICTIONARY ddl_dictionary_test;
diff --git a/tests/queries/0_stateless/01760_modulo_negative.reference b/tests/queries/0_stateless/01760_modulo_negative.reference
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/tests/queries/0_stateless/01760_modulo_negative.sql b/tests/queries/0_stateless/01760_modulo_negative.sql
new file mode 100644
index 00000000000..dbea06cc100
--- /dev/null
+++ b/tests/queries/0_stateless/01760_modulo_negative.sql
@@ -0,0 +1 @@
+SELECT -number % -9223372036854775808 FROM system.numbers; -- { serverError 153 }
diff --git a/tests/queries/0_stateless/01760_polygon_dictionaries.reference b/tests/queries/0_stateless/01760_polygon_dictionaries.reference
new file mode 100644
index 00000000000..6b236e980e1
--- /dev/null
+++ b/tests/queries/0_stateless/01760_polygon_dictionaries.reference
@@ -0,0 +1,19 @@
+dictGet
+(-0.1,0) Click South 423 423
+(0,-1.1) Click West 424 \N
+(0,1.1) Click North 422 \N
+(0.1,0) Click East 421 421
+(3,3) qqq 10 20
+dictGetOrDefault
+(-0.1,0) Click South 423 423
+(0,-1.1) Click West 424 \N
+(0,1.1) Click North 422 \N
+(0.1,0) Click East 421 421
+(3,3) DefaultName 30 40
+dictHas
+(-0.1,0) 1 1 1
+(0,-1.1) 1 1 1
+(0,1.1) 1 1 1
+(0.1,0) 1 1 1
+(3,3) 0 0 0
+check NaN or infinite point input
diff --git a/tests/queries/0_stateless/01760_polygon_dictionaries.sql b/tests/queries/0_stateless/01760_polygon_dictionaries.sql
new file mode 100644
index 00000000000..406e9af27ea
--- /dev/null
+++ b/tests/queries/0_stateless/01760_polygon_dictionaries.sql
@@ -0,0 +1,69 @@
+DROP DATABASE IF EXISTS 01760_db;
+CREATE DATABASE 01760_db;
+
+DROP TABLE IF EXISTS 01760_db.polygons;
+CREATE TABLE 01760_db.polygons (key Array(Array(Array(Tuple(Float64, Float64)))), name String, value UInt64, value_nullable Nullable(UInt64)) ENGINE = Memory;
+INSERT INTO 01760_db.polygons VALUES ([[[(3, 1), (0, 1), (0, -1), (3, -1)]]], 'Click East', 421, 421);
+INSERT INTO
01760_db.polygons VALUES ([[[(-1, 1), (1, 1), (1, 3), (-1, 3)]]], 'Click North', 422, NULL); +INSERT INTO 01760_db.polygons VALUES ([[[(-3, 1), (-3, -1), (0, -1), (0, 1)]]], 'Click South', 423, 423); +INSERT INTO 01760_db.polygons VALUES ([[[(-1, -1), (1, -1), (1, -3), (-1, -3)]]], 'Click West', 424, NULL); + +DROP TABLE IF EXISTS 01760_db.points; +CREATE TABLE 01760_db.points (x Float64, y Float64, def_i UInt64, def_s String) ENGINE = Memory; +INSERT INTO 01760_db.points VALUES (0.1, 0.0, 112, 'aax'); +INSERT INTO 01760_db.points VALUES (-0.1, 0.0, 113, 'aay'); +INSERT INTO 01760_db.points VALUES (0.0, 1.1, 114, 'aaz'); +INSERT INTO 01760_db.points VALUES (0.0, -1.1, 115, 'aat'); +INSERT INTO 01760_db.points VALUES (3.0, 3.0, 22, 'bb'); + +DROP DICTIONARY IF EXISTS 01760_db.dict_array; +CREATE DICTIONARY 01760_db.dict_array +( + key Array(Array(Array(Tuple(Float64, Float64)))), + name String DEFAULT 'qqq', + value UInt64 DEFAULT 10, + value_nullable Nullable(UInt64) DEFAULT 20 +) +PRIMARY KEY key +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() USER 'default' TABLE 'polygons' DB '01760_db')) +LIFETIME(0) +LAYOUT(POLYGON()); + +SELECT 'dictGet'; + +SELECT tuple(x, y) as key, + dictGet('01760_db.dict_array', 'name', key), + dictGet('01760_db.dict_array', 'value', key), + dictGet('01760_db.dict_array', 'value_nullable', key) +FROM 01760_db.points +ORDER BY x, y; + +SELECT 'dictGetOrDefault'; + +SELECT tuple(x, y) as key, + dictGetOrDefault('01760_db.dict_array', 'name', key, 'DefaultName'), + dictGetOrDefault('01760_db.dict_array', 'value', key, 30), + dictGetOrDefault('01760_db.dict_array', 'value_nullable', key, 40) +FROM 01760_db.points +ORDER BY x, y; + +SELECT 'dictHas'; + +SELECT tuple(x, y) as key, + dictHas('01760_db.dict_array', key), + dictHas('01760_db.dict_array', key), + dictHas('01760_db.dict_array', key) +FROM 01760_db.points +ORDER BY x, y; + +SELECT 'check NaN or infinite point input'; +SELECT tuple(nan, inf) as key, dictGet('01760_db.dict_array', 'name', key); --{serverError 36} +SELECT tuple(nan, nan) as key, dictGet('01760_db.dict_array', 'name', key); --{serverError 36} +SELECT tuple(inf, nan) as key, dictGet('01760_db.dict_array', 'name', key); --{serverError 36} +SELECT tuple(inf, inf) as key, dictGet('01760_db.dict_array', 'name', key); --{serverError 36} + +DROP DICTIONARY 01760_db.dict_array; +DROP TABLE 01760_db.points; +DROP TABLE 01760_db.polygons; + +DROP DATABASE 01760_db; diff --git a/tests/queries/0_stateless/01760_system_dictionaries.reference b/tests/queries/0_stateless/01760_system_dictionaries.reference new file mode 100644 index 00000000000..aaf160dea69 --- /dev/null +++ b/tests/queries/0_stateless/01760_system_dictionaries.reference @@ -0,0 +1,14 @@ +simple key +example_simple_key_dictionary 01760_db ['id'] ['UInt64'] ['value'] ['UInt64'] NOT_LOADED +example_simple_key_dictionary 01760_db ['id'] ['UInt64'] ['value'] ['UInt64'] NOT_LOADED +0 0 +1 1 +2 2 +example_simple_key_dictionary 01760_db ['id'] ['UInt64'] ['value'] ['UInt64'] LOADED +complex key +example_complex_key_dictionary 01760_db ['id','id_key'] ['UInt64','String'] ['value'] ['UInt64'] NOT_LOADED +example_complex_key_dictionary 01760_db ['id','id_key'] ['UInt64','String'] ['value'] ['UInt64'] NOT_LOADED +0 0_key 0 +1 1_key 1 +2 2_key 2 +example_complex_key_dictionary 01760_db ['id','id_key'] ['UInt64','String'] ['value'] ['UInt64'] LOADED diff --git a/tests/queries/0_stateless/01760_system_dictionaries.sql b/tests/queries/0_stateless/01760_system_dictionaries.sql new file mode 100644 index 
00000000000..f4e0cfa0086 --- /dev/null +++ b/tests/queries/0_stateless/01760_system_dictionaries.sql @@ -0,0 +1,57 @@ +DROP DATABASE IF EXISTS 01760_db; +CREATE DATABASE 01760_db; + +DROP TABLE IF EXISTS 01760_db.example_simple_key_source; +CREATE TABLE 01760_db.example_simple_key_source (id UInt64, value UInt64) ENGINE=TinyLog; +INSERT INTO 01760_db.example_simple_key_source VALUES (0, 0), (1, 1), (2, 2); + +DROP DICTIONARY IF EXISTS 01760_db.example_simple_key_dictionary; +CREATE DICTIONARY 01760_db.example_simple_key_dictionary ( + id UInt64, + value UInt64 +) +PRIMARY KEY id +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() USER 'default' TABLE 'example_simple_key_source' DATABASE '01760_db')) +LAYOUT(DIRECT()); + +SELECT 'simple key'; + +SELECT name, database, key.names, key.types, attribute.names, attribute.types, status FROM system.dictionaries WHERE database='01760_db'; +SELECT name, database, key.names, key.types, attribute.names, attribute.types, status FROM system.dictionaries WHERE database='01760_db'; + +SELECT * FROM 01760_db.example_simple_key_dictionary; + +SELECT name, database, key.names, key.types, attribute.names, attribute.types, status FROM system.dictionaries WHERE database='01760_db'; + +DROP TABLE 01760_db.example_simple_key_source; +DROP DICTIONARY 01760_db.example_simple_key_dictionary; + +SELECT name, database, key.names, key.types, attribute.names, attribute.types, status FROM system.dictionaries WHERE database='01760_db'; + +DROP TABLE IF EXISTS 01760_db.example_complex_key_source; +CREATE TABLE 01760_db.example_complex_key_source (id UInt64, id_key String, value UInt64) ENGINE=TinyLog; +INSERT INTO 01760_db.example_complex_key_source VALUES (0, '0_key', 0), (1, '1_key', 1), (2, '2_key', 2); + +DROP DICTIONARY IF EXISTS 01760_db.example_complex_key_dictionary; +CREATE DICTIONARY 01760_db.example_complex_key_dictionary ( + id UInt64, + id_key String, + value UInt64 +) +PRIMARY KEY id, id_key +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() USER 'default' TABLE 'example_complex_key_source' DATABASE '01760_db')) +LAYOUT(COMPLEX_KEY_DIRECT()); + +SELECT 'complex key'; + +SELECT name, database, key.names, key.types, attribute.names, attribute.types, status FROM system.dictionaries WHERE database='01760_db'; +SELECT name, database, key.names, key.types, attribute.names, attribute.types, status FROM system.dictionaries WHERE database='01760_db'; + +SELECT * FROM 01760_db.example_complex_key_dictionary; + +SELECT name, database, key.names, key.types, attribute.names, attribute.types, status FROM system.dictionaries WHERE database='01760_db'; + +DROP TABLE 01760_db.example_complex_key_source; +DROP DICTIONARY 01760_db.example_complex_key_dictionary; + +DROP DATABASE 01760_db; diff --git a/tests/queries/0_stateless/01761_alter_decimal_zookeeper.reference b/tests/queries/0_stateless/01761_alter_decimal_zookeeper.reference new file mode 100644 index 00000000000..5dcc95fd7b7 --- /dev/null +++ b/tests/queries/0_stateless/01761_alter_decimal_zookeeper.reference @@ -0,0 +1,9 @@ +1 5.00000000 +2 6.00000000 +CREATE TABLE default.test_alter_decimal\n(\n `n` UInt64,\n `d` Decimal(18, 8)\n)\nENGINE = ReplicatedMergeTree(\'/clickhouse/01761_alter_decimal_zookeeper\', \'r1\')\nORDER BY tuple()\nSETTINGS index_granularity = 8192 +1 5.00000000 +2 6.00000000 +CREATE TABLE default.test_alter_decimal\n(\n `n` UInt64,\n `d` Decimal(18, 8)\n)\nENGINE = ReplicatedMergeTree(\'/clickhouse/01761_alter_decimal_zookeeper\', \'r1\')\nORDER BY tuple()\nSETTINGS index_granularity = 8192 +1 
5.00000000 +2 6.00000000 +3 7.00000000 diff --git a/tests/queries/0_stateless/01761_alter_decimal_zookeeper.sql b/tests/queries/0_stateless/01761_alter_decimal_zookeeper.sql new file mode 100644 index 00000000000..01766f0d6c2 --- /dev/null +++ b/tests/queries/0_stateless/01761_alter_decimal_zookeeper.sql @@ -0,0 +1,31 @@ +DROP TABLE IF EXISTS test_alter_decimal; + +CREATE TABLE test_alter_decimal +(n UInt64, d Decimal(15, 8)) +ENGINE = ReplicatedMergeTree('/clickhouse/01761_alter_decimal_zookeeper', 'r1') +ORDER BY tuple(); + +INSERT INTO test_alter_decimal VALUES (1, toDecimal32(5, 5)); + +INSERT INTO test_alter_decimal VALUES (2, toDecimal32(6, 6)); + +SELECT * FROM test_alter_decimal ORDER BY n; + +ALTER TABLE test_alter_decimal MODIFY COLUMN d Decimal(18, 8); + +SHOW CREATE TABLE test_alter_decimal; + +SELECT * FROM test_alter_decimal ORDER BY n; + +DETACH TABLE test_alter_decimal; +ATTACH TABLE test_alter_decimal; + +SHOW CREATE TABLE test_alter_decimal; + +INSERT INTO test_alter_decimal VALUES (3, toDecimal32(7, 7)); + +OPTIMIZE TABLE test_alter_decimal FINAL; + +SELECT * FROM test_alter_decimal ORDER BY n; + +DROP TABLE IF EXISTS test_alter_decimal; diff --git a/tests/queries/0_stateless/01761_cast_to_enum_nullable.reference b/tests/queries/0_stateless/01761_cast_to_enum_nullable.reference new file mode 100644 index 00000000000..d00491fd7e5 --- /dev/null +++ b/tests/queries/0_stateless/01761_cast_to_enum_nullable.reference @@ -0,0 +1 @@ +1 diff --git a/tests/queries/0_stateless/01761_cast_to_enum_nullable.sql b/tests/queries/0_stateless/01761_cast_to_enum_nullable.sql new file mode 100644 index 00000000000..42a51d2f7b9 --- /dev/null +++ b/tests/queries/0_stateless/01761_cast_to_enum_nullable.sql @@ -0,0 +1 @@ +SELECT toUInt8(assumeNotNull(cast(cast(NULL, 'Nullable(String)'), 'Nullable(Enum8(\'Hello\' = 1))'))); diff --git a/tests/queries/0_stateless/01761_round_year_bounds.reference b/tests/queries/0_stateless/01761_round_year_bounds.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01761_round_year_bounds.sql b/tests/queries/0_stateless/01761_round_year_bounds.sql new file mode 100644 index 00000000000..fed12c55568 --- /dev/null +++ b/tests/queries/0_stateless/01761_round_year_bounds.sql @@ -0,0 +1 @@ +SELECT toStartOfInterval(toDateTime(-9223372036854775808), toIntervalYear(100), 'Europe/Moscow') FORMAT Null; diff --git a/tests/queries/0_stateless/01762_datetime64_extended_parsing.reference b/tests/queries/0_stateless/01762_datetime64_extended_parsing.reference new file mode 100644 index 00000000000..531b6f8bf13 --- /dev/null +++ b/tests/queries/0_stateless/01762_datetime64_extended_parsing.reference @@ -0,0 +1 @@ +1925-01-02 03:04:05.678901 diff --git a/tests/queries/0_stateless/01762_datetime64_extended_parsing.sql b/tests/queries/0_stateless/01762_datetime64_extended_parsing.sql new file mode 100644 index 00000000000..a7ad447b215 --- /dev/null +++ b/tests/queries/0_stateless/01762_datetime64_extended_parsing.sql @@ -0,0 +1 @@ +SELECT toDateTime64('1925-01-02 03:04:05.678901', 6); diff --git a/tests/queries/0_stateless/01762_deltasumtimestamp.reference b/tests/queries/0_stateless/01762_deltasumtimestamp.reference new file mode 100644 index 00000000000..563ef5c412d --- /dev/null +++ b/tests/queries/0_stateless/01762_deltasumtimestamp.reference @@ -0,0 +1,5 @@ +10 +10 +8 +8 +13 diff --git a/tests/queries/0_stateless/01762_deltasumtimestamp.sql b/tests/queries/0_stateless/01762_deltasumtimestamp.sql new file mode 100644 index 
00000000000..a0098ccc2ee --- /dev/null +++ b/tests/queries/0_stateless/01762_deltasumtimestamp.sql @@ -0,0 +1,5 @@ +select deltaSumTimestampMerge(state) from (select deltaSumTimestampState(value, timestamp) as state from (select toDate(number) as timestamp, [4, 5, 5, 5][number-4] as value from numbers(5, 4)) UNION ALL select deltaSumTimestampState(value, timestamp) as state from (select toDate(number) as timestamp, [0, 4, 8, 3][number] as value from numbers(1, 4))); +select deltaSumTimestampMerge(state) from (select deltaSumTimestampState(value, timestamp) as state from (select number as timestamp, [0, 4, 8, 3][number] as value from numbers(1, 4)) UNION ALL select deltaSumTimestampState(value, timestamp) as state from (select number as timestamp, [4, 5, 5, 5][number-4] as value from numbers(5, 4))); +select deltaSumTimestamp(value, timestamp) from (select toDateTime(number) as timestamp, [0, 4, 8, 3][number] as value from numbers(1, 4)); +select deltaSumTimestamp(value, timestamp) from (select toDateTime(number) as timestamp, [0, 4.5, 8, 3][number] as value from numbers(1, 4)); +select deltaSumTimestamp(value, timestamp) from (select number as timestamp, [0, 4, 8, 3, 0, 0, 0, 1, 3, 5][number] as value from numbers(1, 10)); diff --git a/tests/queries/0_stateless/01763_filter_push_down_bugs.reference b/tests/queries/0_stateless/01763_filter_push_down_bugs.reference new file mode 100644 index 00000000000..66ea84a07c1 --- /dev/null +++ b/tests/queries/0_stateless/01763_filter_push_down_bugs.reference @@ -0,0 +1,6 @@ +1 2 +1 2 +[1] 2 +[[1]] 2 +String1_0 String2_0 String3_0 String4_0 1 +String1_0 String2_0 String3_0 String4_0 1 diff --git a/tests/queries/0_stateless/01763_filter_push_down_bugs.sql b/tests/queries/0_stateless/01763_filter_push_down_bugs.sql new file mode 100644 index 00000000000..5000eb38878 --- /dev/null +++ b/tests/queries/0_stateless/01763_filter_push_down_bugs.sql @@ -0,0 +1,37 @@ +SELECT * FROM (SELECT col1, col2 FROM (select '1' as col1, '2' as col2) GROUP by col1, col2) AS expr_qry WHERE col2 != ''; +SELECT * FROM (SELECT materialize('1') AS s1, materialize('2') AS s2 GROUP BY s1, s2) WHERE s2 = '2'; +SELECT * FROM (SELECT materialize([1]) AS s1, materialize('2') AS s2 GROUP BY s1, s2) WHERE s2 = '2'; +SELECT * FROM (SELECT materialize([[1]]) AS s1, materialize('2') AS s2 GROUP BY s1, s2) WHERE s2 = '2'; + +DROP TABLE IF EXISTS Test; + +CREATE TABLE Test +ENGINE = MergeTree() +PRIMARY KEY (String1,String2) +ORDER BY (String1,String2) +AS +SELECT + 'String1_' || toString(number) as String1, + 'String2_' || toString(number) as String2, + 'String3_' || toString(number) as String3, + 'String4_' || toString(number%4) as String4 +FROM numbers(1); + +SELECT * +FROM + ( + SELECT String1,String2,String3,String4,COUNT(*) + FROM Test + GROUP by String1,String2,String3,String4 + ) AS expr_qry; + +SELECT * +FROM + ( + SELECT String1,String2,String3,String4,COUNT(*) + FROM Test + GROUP by String1,String2,String3,String4 + ) AS expr_qry +WHERE String4 ='String4_0'; + +DROP TABLE IF EXISTS Test; diff --git a/tests/queries/0_stateless/01763_long_ttl_group_by.reference b/tests/queries/0_stateless/01763_long_ttl_group_by.reference new file mode 100644 index 00000000000..bdea806e747 --- /dev/null +++ b/tests/queries/0_stateless/01763_long_ttl_group_by.reference @@ -0,0 +1,6 @@ +206000 0 3 5 +41200 0 3 1 +20600 0 3 1 +206000 0 3 5 +41200 0 3 1 +20600 0 3 1 diff --git a/tests/queries/0_stateless/01763_long_ttl_group_by.sql b/tests/queries/0_stateless/01763_long_ttl_group_by.sql new file mode 
100644
index 00000000000..e0c6f678f15
--- /dev/null
+++ b/tests/queries/0_stateless/01763_long_ttl_group_by.sql
@@ -0,0 +1,26 @@
+DROP TABLE IF EXISTS test_ttl_group_by01763;
+CREATE TABLE test_ttl_group_by01763
+(key UInt32, ts DateTime, value UInt32, min_value UInt32 default value, max_value UInt32 default value)
+ENGINE = MergeTree() PARTITION BY toYYYYMM(ts)
+ORDER BY (key, toStartOfInterval(ts, toIntervalMinute(3)), ts)
+TTL ts + INTERVAL 5 MINUTE GROUP BY key, toStartOfInterval(ts, toIntervalMinute(3))
+SET value = sum(value), min_value = min(min_value), max_value = max(max_value), ts=min(toStartOfInterval(ts, toIntervalMinute(3)));
+
+INSERT INTO test_ttl_group_by01763(key, ts, value) SELECT number%5 as key, now() - interval 10 minute + number, 1 FROM numbers(100000);
+INSERT INTO test_ttl_group_by01763(key, ts, value) SELECT number%5 as key, now() - interval 10 minute + number, 0 FROM numbers(1000);
+INSERT INTO test_ttl_group_by01763(key, ts, value) SELECT number%5 as key, now() - interval 10 minute + number, 3 FROM numbers(1000);
+INSERT INTO test_ttl_group_by01763(key, ts, value) SELECT number%5 as key, now() - interval 2 month + number, 1 FROM numbers(100000);
+INSERT INTO test_ttl_group_by01763(key, ts, value) SELECT number%5 as key, now() - interval 2 month + number, 0 FROM numbers(1000);
+INSERT INTO test_ttl_group_by01763(key, ts, value) SELECT number%5 as key, now() - interval 2 month + number, 3 FROM numbers(1000);
+
+SELECT sum(value), min(min_value), max(max_value), uniqExact(key) FROM test_ttl_group_by01763;
+SELECT sum(value), min(min_value), max(max_value), uniqExact(key) FROM test_ttl_group_by01763 where key = 3;
+SELECT sum(value), min(min_value), max(max_value), uniqExact(key) FROM test_ttl_group_by01763 where key = 3 and ts <= today() - interval 30 day;
+
+OPTIMIZE TABLE test_ttl_group_by01763 FINAL;
+
+SELECT sum(value), min(min_value), max(max_value), uniqExact(key) FROM test_ttl_group_by01763;
+SELECT sum(value), min(min_value), max(max_value), uniqExact(key) FROM test_ttl_group_by01763 where key = 3;
+SELECT sum(value), min(min_value), max(max_value), uniqExact(key) FROM test_ttl_group_by01763 where key = 3 and ts <= today() - interval 30 day;
+
+DROP TABLE test_ttl_group_by01763;
diff --git a/tests/queries/0_stateless/01763_max_distributed_depth.reference b/tests/queries/0_stateless/01763_max_distributed_depth.reference
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/tests/queries/0_stateless/01763_max_distributed_depth.sql b/tests/queries/0_stateless/01763_max_distributed_depth.sql
new file mode 100644
index 00000000000..d1bb9e4be90
--- /dev/null
+++ b/tests/queries/0_stateless/01763_max_distributed_depth.sql
@@ -0,0 +1,26 @@
+DROP TABLE IF EXISTS tt6;
+
+CREATE TABLE tt6
+(
+    `id` UInt32,
+    `first_column` UInt32,
+    `second_column` UInt32,
+    `third_column` UInt32,
+    `status` String
+
+)
+ENGINE = Distributed('test_shard_localhost', '', 'tt6', rand());
+
+INSERT INTO tt6 VALUES (1, 1, 1, 1, 'ok'); -- { serverError 581 }
+
+SELECT * FROM tt6; -- { serverError 581 }
+
+SET max_distributed_depth = 0;
+
+-- with the depth limit disabled, only the generic recursion guard stops the self-referencing query (it would otherwise be a stack overflow)
+INSERT INTO tt6 VALUES (1, 1, 1, 1, 'ok'); -- { serverError 306 }
+
+-- same for SELECT
+SELECT * FROM tt6; -- { serverError 306 }
+
+DROP TABLE tt6;
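A hedged sketch of the same trap with the self-reference spelled out instead of the '' (current database) shorthand used above; dist_loop is a hypothetical name:

create table dist_loop (id UInt32) engine=Distributed('test_shard_localhost', currentDatabase(), 'dist_loop');
select * from dist_loop; -- { serverError 581 } every hop resolves back to dist_loop, so the depth guard fires
drop table dist_loop;

diff --git a/tests/queries/0_stateless/01764_collapsing_merge_adaptive_granularity.reference b/tests/queries/0_stateless/01764_collapsing_merge_adaptive_granularity.reference
new file mode 100644
index 00000000000..0f128a62bbb
--- /dev/null
+++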
b/tests/queries/0_stateless/01764_collapsing_merge_adaptive_granularity.reference @@ -0,0 +1,4 @@ +-8191 8193 +-8191 8193 +0 2 +0 2 diff --git a/tests/queries/0_stateless/01764_collapsing_merge_adaptive_granularity.sql b/tests/queries/0_stateless/01764_collapsing_merge_adaptive_granularity.sql new file mode 100644 index 00000000000..ca6465154ea --- /dev/null +++ b/tests/queries/0_stateless/01764_collapsing_merge_adaptive_granularity.sql @@ -0,0 +1,53 @@ +DROP TABLE IF EXISTS collapsing_table; +SET optimize_on_insert = 0; + +CREATE TABLE collapsing_table +( + key UInt64, + value UInt64, + Sign Int8 +) +ENGINE = CollapsingMergeTree(Sign) +ORDER BY key +SETTINGS + vertical_merge_algorithm_min_rows_to_activate=0, + vertical_merge_algorithm_min_columns_to_activate=0, + min_bytes_for_wide_part = 0; + +INSERT INTO collapsing_table SELECT if(number == 8192, 8191, number), 1, if(number == 8192, +1, -1) FROM numbers(8193); + +SELECT sum(Sign), count() from collapsing_table; + +OPTIMIZE TABLE collapsing_table FINAL; + +SELECT sum(Sign), count() from collapsing_table; + +DROP TABLE IF EXISTS collapsing_table; + + +DROP TABLE IF EXISTS collapsing_suspicious_granularity; + +CREATE TABLE collapsing_suspicious_granularity +( + key UInt64, + value UInt64, + Sign Int8 +) +ENGINE = CollapsingMergeTree(Sign) +ORDER BY key +SETTINGS + vertical_merge_algorithm_min_rows_to_activate=0, + vertical_merge_algorithm_min_columns_to_activate=0, + min_bytes_for_wide_part = 0, + index_granularity = 1; + +INSERT INTO collapsing_suspicious_granularity VALUES (1, 1, -1) (1, 1, 1); + +SELECT sum(Sign), count() from collapsing_suspicious_granularity; + +OPTIMIZE TABLE collapsing_suspicious_granularity FINAL; + +SELECT sum(Sign), count() from collapsing_suspicious_granularity; + + +DROP TABLE IF EXISTS collapsing_suspicious_granularity; diff --git a/tests/queries/0_stateless/01764_prefer_column_name_to_alias.reference b/tests/queries/0_stateless/01764_prefer_column_name_to_alias.reference new file mode 100644 index 00000000000..6fde46c38a4 --- /dev/null +++ b/tests/queries/0_stateless/01764_prefer_column_name_to_alias.reference @@ -0,0 +1,3 @@ +4.5 9 +3 2 +3 3 diff --git a/tests/queries/0_stateless/01764_prefer_column_name_to_alias.sql b/tests/queries/0_stateless/01764_prefer_column_name_to_alias.sql new file mode 100644 index 00000000000..5524712c3c6 --- /dev/null +++ b/tests/queries/0_stateless/01764_prefer_column_name_to_alias.sql @@ -0,0 +1,8 @@ +SELECT avg(number) AS number, max(number) FROM numbers(10); -- { serverError 184 } +SELECT sum(x) AS x, max(x) FROM (SELECT 1 AS x UNION ALL SELECT 2 AS x) t; -- { serverError 184 } +select sum(C1) as C1, count(C1) as C2 from (select number as C1 from numbers(3)) as ITBL; -- { serverError 184 } + +set prefer_column_name_to_alias = 1; +SELECT avg(number) AS number, max(number) FROM numbers(10); +SELECT sum(x) AS x, max(x) FROM (SELECT 1 AS x UNION ALL SELECT 2 AS x) t settings prefer_column_name_to_alias = 1; +select sum(C1) as C1, count(C1) as C2 from (select number as C1 from numbers(3)) as ITBL settings prefer_column_name_to_alias = 1; diff --git a/tests/queries/0_stateless/01764_table_function_dictionary.reference b/tests/queries/0_stateless/01764_table_function_dictionary.reference new file mode 100644 index 00000000000..b8e844ab3e9 --- /dev/null +++ b/tests/queries/0_stateless/01764_table_function_dictionary.reference @@ -0,0 +1,2 @@ +0 0 +1 1 diff --git a/tests/queries/0_stateless/01764_table_function_dictionary.sql 
b/tests/queries/0_stateless/01764_table_function_dictionary.sql new file mode 100644 index 00000000000..0168566077d --- /dev/null +++ b/tests/queries/0_stateless/01764_table_function_dictionary.sql @@ -0,0 +1,25 @@ +DROP TABLE IF EXISTS table_function_dictionary_source_table; +CREATE TABLE table_function_dictionary_source_table +( + id UInt64, + value UInt64 +) +ENGINE = TinyLog; + +INSERT INTO table_function_dictionary_source_table VALUES (0, 0); +INSERT INTO table_function_dictionary_source_table VALUES (1, 1); + +DROP DICTIONARY IF EXISTS table_function_dictionary_test_dictionary; +CREATE DICTIONARY table_function_dictionary_test_dictionary +( + id UInt64, + value UInt64 DEFAULT 0 +) +PRIMARY KEY id +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() USER 'default' TABLE 'table_function_dictionary_source_table')) +LAYOUT(DIRECT()); + +SELECT * FROM dictionary('table_function_dictionary_test_dictionary'); + +DROP TABLE table_function_dictionary_source_table; +DROP DICTIONARY table_function_dictionary_test_dictionary; diff --git a/tests/queries/0_stateless/01765_hashed_dictionary_simple_key.reference b/tests/queries/0_stateless/01765_hashed_dictionary_simple_key.reference new file mode 100644 index 00000000000..2cc0a8668a2 --- /dev/null +++ b/tests/queries/0_stateless/01765_hashed_dictionary_simple_key.reference @@ -0,0 +1,132 @@ +Dictionary hashed_dictionary_simple_key_simple_attributes +dictGet existing value +value_0 value_second_0 +value_1 value_second_1 +value_2 value_second_2 +dictGet with non existing value +value_0 value_second_0 +value_1 value_second_1 +value_2 value_second_2 +value_first_default value_second_default +dictGetOrDefault existing value +value_0 value_second_0 +value_1 value_second_1 +value_2 value_second_2 +dictGetOrDefault non existing value +value_0 value_second_0 +value_1 value_second_1 +value_2 value_second_2 +default default +dictHas +1 +1 +1 +0 +select all values as input stream +0 value_0 value_second_0 +1 value_1 value_second_1 +2 value_2 value_second_2 +Dictionary sparse_hashed_dictionary_simple_key_simple_attributes +dictGet existing value +value_0 value_second_0 +value_1 value_second_1 +value_2 value_second_2 +dictGet with non existing value +value_0 value_second_0 +value_1 value_second_1 +value_2 value_second_2 +value_first_default value_second_default +dictGetOrDefault existing value +value_0 value_second_0 +value_1 value_second_1 +value_2 value_second_2 +dictGetOrDefault non existing value +value_0 value_second_0 +value_1 value_second_1 +value_2 value_second_2 +default default +dictHas +1 +1 +1 +0 +select all values as input stream +0 value_0 value_second_0 +1 value_1 value_second_1 +2 value_2 value_second_2 +Dictionary hashed_dictionary_simple_key_complex_attributes +dictGet existing value +value_0 value_second_0 +value_1 \N +value_2 value_second_2 +dictGet with non existing value +value_0 value_second_0 +value_1 \N +value_2 value_second_2 +value_first_default value_second_default +dictGetOrDefault existing value +value_0 value_second_0 +value_1 \N +value_2 value_second_2 +dictGetOrDefault non existing value +value_0 value_second_0 +value_1 \N +value_2 value_second_2 +default default +dictHas +1 +1 +1 +0 +select all values as input stream +0 value_0 value_second_0 +1 value_1 \N +2 value_2 value_second_2 +Dictionary sparse_hashed_dictionary_simple_key_complex_attributes +dictGet existing value +value_0 value_second_0 +value_1 \N +value_2 value_second_2 +dictGet with non existing value +value_0 value_second_0 +value_1 \N +value_2 value_second_2 
+value_first_default value_second_default +dictGetOrDefault existing value +value_0 value_second_0 +value_1 \N +value_2 value_second_2 +dictGetOrDefault non existing value +value_0 value_second_0 +value_1 \N +value_2 value_second_2 +default default +dictHas +1 +1 +1 +0 +select all values as input stream +0 value_0 value_second_0 +1 value_1 \N +2 value_2 value_second_2 +Dictionary hashed_dictionary_simple_key_hierarchy +dictGet +0 +0 +1 +1 +2 +dictGetHierarchy +[1] +[4,2,1] +Dictionary sparse_hashed_dictionary_simple_key_hierarchy +dictGet +0 +0 +1 +1 +2 +dictGetHierarchy +[1] +[4,2,1] diff --git a/tests/queries/0_stateless/01765_hashed_dictionary_simple_key.sql b/tests/queries/0_stateless/01765_hashed_dictionary_simple_key.sql new file mode 100644 index 00000000000..7502c6a93bb --- /dev/null +++ b/tests/queries/0_stateless/01765_hashed_dictionary_simple_key.sql @@ -0,0 +1,207 @@ +DROP DATABASE IF EXISTS 01765_db; +CREATE DATABASE 01765_db; + +CREATE TABLE 01765_db.simple_key_simple_attributes_source_table +( + id UInt64, + value_first String, + value_second String +) +ENGINE = TinyLog; + +INSERT INTO 01765_db.simple_key_simple_attributes_source_table VALUES(0, 'value_0', 'value_second_0'); +INSERT INTO 01765_db.simple_key_simple_attributes_source_table VALUES(1, 'value_1', 'value_second_1'); +INSERT INTO 01765_db.simple_key_simple_attributes_source_table VALUES(2, 'value_2', 'value_second_2'); + +CREATE DICTIONARY 01765_db.hashed_dictionary_simple_key_simple_attributes +( + id UInt64, + value_first String DEFAULT 'value_first_default', + value_second String DEFAULT 'value_second_default' +) +PRIMARY KEY id +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() USER 'default' TABLE 'simple_key_simple_attributes_source_table')) +LIFETIME(MIN 1 MAX 1000) +LAYOUT(HASHED()); + +SELECT 'Dictionary hashed_dictionary_simple_key_simple_attributes'; +SELECT 'dictGet existing value'; +SELECT dictGet('01765_db.hashed_dictionary_simple_key_simple_attributes', 'value_first', number) as value_first, + dictGet('01765_db.hashed_dictionary_simple_key_simple_attributes', 'value_second', number) as value_second FROM system.numbers LIMIT 3; +SELECT 'dictGet with non existing value'; +SELECT dictGet('01765_db.hashed_dictionary_simple_key_simple_attributes', 'value_first', number) as value_first, + dictGet('01765_db.hashed_dictionary_simple_key_simple_attributes', 'value_second', number) as value_second FROM system.numbers LIMIT 4; +SELECT 'dictGetOrDefault existing value'; +SELECT dictGetOrDefault('01765_db.hashed_dictionary_simple_key_simple_attributes', 'value_first', number, toString('default')) as value_first, + dictGetOrDefault('01765_db.hashed_dictionary_simple_key_simple_attributes', 'value_second', number, toString('default')) as value_second FROM system.numbers LIMIT 3; +SELECT 'dictGetOrDefault non existing value'; +SELECT dictGetOrDefault('01765_db.hashed_dictionary_simple_key_simple_attributes', 'value_first', number, toString('default')) as value_first, + dictGetOrDefault('01765_db.hashed_dictionary_simple_key_simple_attributes', 'value_second', number, toString('default')) as value_second FROM system.numbers LIMIT 4; +SELECT 'dictHas'; +SELECT dictHas('01765_db.hashed_dictionary_simple_key_simple_attributes', number) FROM system.numbers LIMIT 4; +SELECT 'select all values as input stream'; +SELECT * FROM 01765_db.hashed_dictionary_simple_key_simple_attributes ORDER BY id; + +DROP DICTIONARY 01765_db.hashed_dictionary_simple_key_simple_attributes; + +CREATE DICTIONARY 
01765_db.sparse_hashed_dictionary_simple_key_simple_attributes +( + id UInt64, + value_first String DEFAULT 'value_first_default', + value_second String DEFAULT 'value_second_default' +) +PRIMARY KEY id +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() USER 'default' TABLE 'simple_key_simple_attributes_source_table')) +LIFETIME(MIN 1 MAX 1000) +LAYOUT(SPARSE_HASHED()); + +SELECT 'Dictionary sparse_hashed_dictionary_simple_key_simple_attributes'; +SELECT 'dictGet existing value'; +SELECT dictGet('01765_db.sparse_hashed_dictionary_simple_key_simple_attributes', 'value_first', number) as value_first, + dictGet('01765_db.sparse_hashed_dictionary_simple_key_simple_attributes', 'value_second', number) as value_second FROM system.numbers LIMIT 3; +SELECT 'dictGet with non existing value'; +SELECT dictGet('01765_db.sparse_hashed_dictionary_simple_key_simple_attributes', 'value_first', number) as value_first, + dictGet('01765_db.sparse_hashed_dictionary_simple_key_simple_attributes', 'value_second', number) as value_second FROM system.numbers LIMIT 4; +SELECT 'dictGetOrDefault existing value'; +SELECT dictGetOrDefault('01765_db.sparse_hashed_dictionary_simple_key_simple_attributes', 'value_first', number, toString('default')) as value_first, + dictGetOrDefault('01765_db.sparse_hashed_dictionary_simple_key_simple_attributes', 'value_second', number, toString('default')) as value_second FROM system.numbers LIMIT 3; +SELECT 'dictGetOrDefault non existing value'; +SELECT dictGetOrDefault('01765_db.sparse_hashed_dictionary_simple_key_simple_attributes', 'value_first', number, toString('default')) as value_first, + dictGetOrDefault('01765_db.sparse_hashed_dictionary_simple_key_simple_attributes', 'value_second', number, toString('default')) as value_second FROM system.numbers LIMIT 4; +SELECT 'dictHas'; +SELECT dictHas('01765_db.sparse_hashed_dictionary_simple_key_simple_attributes', number) FROM system.numbers LIMIT 4; +SELECT 'select all values as input stream'; +SELECT * FROM 01765_db.sparse_hashed_dictionary_simple_key_simple_attributes ORDER BY id; + +DROP DICTIONARY 01765_db.sparse_hashed_dictionary_simple_key_simple_attributes; + +DROP TABLE 01765_db.simple_key_simple_attributes_source_table; + +CREATE TABLE 01765_db.simple_key_complex_attributes_source_table +( + id UInt64, + value_first String, + value_second Nullable(String) +) +ENGINE = TinyLog; + +INSERT INTO 01765_db.simple_key_complex_attributes_source_table VALUES(0, 'value_0', 'value_second_0'); +INSERT INTO 01765_db.simple_key_complex_attributes_source_table VALUES(1, 'value_1', NULL); +INSERT INTO 01765_db.simple_key_complex_attributes_source_table VALUES(2, 'value_2', 'value_second_2'); + +CREATE DICTIONARY 01765_db.hashed_dictionary_simple_key_complex_attributes +( + id UInt64, + value_first String DEFAULT 'value_first_default', + value_second Nullable(String) DEFAULT 'value_second_default' +) +PRIMARY KEY id +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() USER 'default' TABLE 'simple_key_complex_attributes_source_table')) +LIFETIME(MIN 1 MAX 1000) +LAYOUT(HASHED()); + +SELECT 'Dictionary hashed_dictionary_simple_key_complex_attributes'; +SELECT 'dictGet existing value'; +SELECT dictGet('01765_db.hashed_dictionary_simple_key_complex_attributes', 'value_first', number) as value_first, + dictGet('01765_db.hashed_dictionary_simple_key_complex_attributes', 'value_second', number) as value_second FROM system.numbers LIMIT 3; +SELECT 'dictGet with non existing value'; +SELECT 
dictGet('01765_db.hashed_dictionary_simple_key_complex_attributes', 'value_first', number) as value_first,
+    dictGet('01765_db.hashed_dictionary_simple_key_complex_attributes', 'value_second', number) as value_second FROM system.numbers LIMIT 4;
+SELECT 'dictGetOrDefault existing value';
+SELECT dictGetOrDefault('01765_db.hashed_dictionary_simple_key_complex_attributes', 'value_first', number, toString('default')) as value_first,
+    dictGetOrDefault('01765_db.hashed_dictionary_simple_key_complex_attributes', 'value_second', number, toString('default')) as value_second FROM system.numbers LIMIT 3;
+SELECT 'dictGetOrDefault non existing value';
+SELECT dictGetOrDefault('01765_db.hashed_dictionary_simple_key_complex_attributes', 'value_first', number, toString('default')) as value_first,
+    dictGetOrDefault('01765_db.hashed_dictionary_simple_key_complex_attributes', 'value_second', number, toString('default')) as value_second FROM system.numbers LIMIT 4;
+SELECT 'dictHas';
+SELECT dictHas('01765_db.hashed_dictionary_simple_key_complex_attributes', number) FROM system.numbers LIMIT 4;
+SELECT 'select all values as input stream';
+SELECT * FROM 01765_db.hashed_dictionary_simple_key_complex_attributes ORDER BY id;
+
+DROP DICTIONARY 01765_db.hashed_dictionary_simple_key_complex_attributes;
+
+CREATE DICTIONARY 01765_db.sparse_hashed_dictionary_simple_key_complex_attributes
+(
+    id UInt64,
+    value_first String DEFAULT 'value_first_default',
+    value_second Nullable(String) DEFAULT 'value_second_default'
+)
+PRIMARY KEY id
+SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() USER 'default' TABLE 'simple_key_complex_attributes_source_table'))
+LIFETIME(MIN 1 MAX 1000)
+LAYOUT(SPARSE_HASHED());
+
+SELECT 'Dictionary sparse_hashed_dictionary_simple_key_complex_attributes';
+SELECT 'dictGet existing value';
+SELECT dictGet('01765_db.sparse_hashed_dictionary_simple_key_complex_attributes', 'value_first', number) as value_first,
+    dictGet('01765_db.sparse_hashed_dictionary_simple_key_complex_attributes', 'value_second', number) as value_second FROM system.numbers LIMIT 3;
+SELECT 'dictGet with non existing value';
+SELECT dictGet('01765_db.sparse_hashed_dictionary_simple_key_complex_attributes', 'value_first', number) as value_first,
+    dictGet('01765_db.sparse_hashed_dictionary_simple_key_complex_attributes', 'value_second', number) as value_second FROM system.numbers LIMIT 4;
+SELECT 'dictGetOrDefault existing value';
+SELECT dictGetOrDefault('01765_db.sparse_hashed_dictionary_simple_key_complex_attributes', 'value_first', number, toString('default')) as value_first,
+    dictGetOrDefault('01765_db.sparse_hashed_dictionary_simple_key_complex_attributes', 'value_second', number, toString('default')) as value_second FROM system.numbers LIMIT 3;
+SELECT 'dictGetOrDefault non existing value';
+SELECT dictGetOrDefault('01765_db.sparse_hashed_dictionary_simple_key_complex_attributes', 'value_first', number, toString('default')) as value_first,
+    dictGetOrDefault('01765_db.sparse_hashed_dictionary_simple_key_complex_attributes', 'value_second', number, toString('default')) as value_second FROM system.numbers LIMIT 4;
+SELECT 'dictHas';
+SELECT dictHas('01765_db.sparse_hashed_dictionary_simple_key_complex_attributes', number) FROM system.numbers LIMIT 4;
+SELECT 'select all values as input stream';
+SELECT * FROM 01765_db.sparse_hashed_dictionary_simple_key_complex_attributes ORDER BY id;
+
+DROP DICTIONARY 01765_db.sparse_hashed_dictionary_simple_key_complex_attributes;
+
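A hedged aside on why every block of this test runs twice: SPARSE_HASHED keeps the same data in a sparse hash table, trading some lookup speed for a smaller memory footprint than plain HASHED (the dictionary name above said sparse_hashed while the layout said HASHED, which looked like a copy-paste slip and is corrected here). While both dictionaries are loaded, the footprints can be compared via system.dictionaries, where bytes_allocated is a regular column:

select name, bytes_allocated from system.dictionaries where database = '01765_db';

+DROP TABLE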
01765_db.simple_key_complex_attributes_source_table;
+
+CREATE TABLE 01765_db.simple_key_hierarchy_table
+(
+    id UInt64,
+    parent_id UInt64
+) ENGINE = TinyLog();
+
+INSERT INTO 01765_db.simple_key_hierarchy_table VALUES (1, 0);
+INSERT INTO 01765_db.simple_key_hierarchy_table VALUES (2, 1);
+INSERT INTO 01765_db.simple_key_hierarchy_table VALUES (3, 1);
+INSERT INTO 01765_db.simple_key_hierarchy_table VALUES (4, 2);
+
+CREATE DICTIONARY 01765_db.hashed_dictionary_simple_key_hierarchy
+(
+    id UInt64,
+    parent_id UInt64 HIERARCHICAL
+)
+PRIMARY KEY id
+SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() USER 'default' TABLE 'simple_key_hierarchy_table'))
+LIFETIME(MIN 1 MAX 1000)
+LAYOUT(HASHED());
+
+SELECT 'Dictionary hashed_dictionary_simple_key_hierarchy';
+SELECT 'dictGet';
+SELECT dictGet('01765_db.hashed_dictionary_simple_key_hierarchy', 'parent_id', number) FROM system.numbers LIMIT 5;
+SELECT 'dictGetHierarchy';
+SELECT dictGetHierarchy('01765_db.hashed_dictionary_simple_key_hierarchy', toUInt64(1));
+SELECT dictGetHierarchy('01765_db.hashed_dictionary_simple_key_hierarchy', toUInt64(4));
+
+DROP DICTIONARY 01765_db.hashed_dictionary_simple_key_hierarchy;
+
+CREATE DICTIONARY 01765_db.sparse_hashed_dictionary_simple_key_hierarchy
+(
+    id UInt64,
+    parent_id UInt64 HIERARCHICAL
+)
+PRIMARY KEY id
+SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() USER 'default' TABLE 'simple_key_hierarchy_table'))
+LIFETIME(MIN 1 MAX 1000)
+LAYOUT(SPARSE_HASHED());
+
+SELECT 'Dictionary sparse_hashed_dictionary_simple_key_hierarchy';
+SELECT 'dictGet';
+SELECT dictGet('01765_db.sparse_hashed_dictionary_simple_key_hierarchy', 'parent_id', number) FROM system.numbers LIMIT 5;
+SELECT 'dictGetHierarchy';
+SELECT dictGetHierarchy('01765_db.sparse_hashed_dictionary_simple_key_hierarchy', toUInt64(1));
+SELECT dictGetHierarchy('01765_db.sparse_hashed_dictionary_simple_key_hierarchy', toUInt64(4));
+
+DROP DICTIONARY 01765_db.sparse_hashed_dictionary_simple_key_hierarchy;
+
+DROP TABLE 01765_db.simple_key_hierarchy_table;
+
+DROP DATABASE 01765_db;
diff --git a/tests/queries/0_stateless/01765_move_to_table_overlapping_block_number.reference b/tests/queries/0_stateless/01765_move_to_table_overlapping_block_number.reference
new file mode 100644
index 00000000000..a07ed155918
--- /dev/null
+++ b/tests/queries/0_stateless/01765_move_to_table_overlapping_block_number.reference
@@ -0,0 +1,4 @@
+1 1 1_1_1_0
+1 2 1_2_2_0
+1 3 1_3_3_0
+1 4 1_4_4_0
diff --git a/tests/queries/0_stateless/01765_move_to_table_overlapping_block_number.sql b/tests/queries/0_stateless/01765_move_to_table_overlapping_block_number.sql
new file mode 100644
index 00000000000..ea00c573c74
--- /dev/null
+++ b/tests/queries/0_stateless/01765_move_to_table_overlapping_block_number.sql
@@ -0,0 +1,20 @@
+DROP TABLE IF EXISTS t_src;
+DROP TABLE IF EXISTS t_dst;
+
+CREATE TABLE t_src (id UInt32, v UInt32) ENGINE = MergeTree ORDER BY id PARTITION BY id;
+CREATE TABLE t_dst (id UInt32, v UInt32) ENGINE = MergeTree ORDER BY id PARTITION BY id;
+
+SYSTEM STOP MERGES t_src;
+SYSTEM STOP MERGES t_dst;
+
+INSERT INTO t_dst VALUES (1, 1);
+INSERT INTO t_dst VALUES (1, 2);
+INSERT INTO t_dst VALUES (1, 3);
+
+INSERT INTO t_src VALUES (1, 4);
+
+ALTER TABLE t_src MOVE PARTITION 1 TO TABLE t_dst;
+SELECT *, _part FROM t_dst ORDER BY v;
+
+DROP TABLE t_src;
+DROP TABLE t_dst;
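A hedged note on what the reference above pins down: the part moved out of t_src is given a fresh block number in t_dst (1_4_4_0, one past the destination's existing 1_3_3_0) rather than keeping its source number, so it cannot collide with the destination's parts; besides the _part virtual column used in the test, this is also visible in system.parts:

select partition, name, active from system.parts where table = 't_dst' and active;

diff --git a/tests/queries/0_stateless/01765_tehran_dst.reference b/tests/queries/0_stateless/01765_tehran_dst.reference
new file mode 100644
index 00000000000..61f5403e5e5
--- /dev/null
+++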
b/tests/queries/0_stateless/01765_tehran_dst.reference @@ -0,0 +1,2 @@ +2021-03-22 23:15:11 +2020-03-21 23:00:00 diff --git a/tests/queries/0_stateless/01765_tehran_dst.sql b/tests/queries/0_stateless/01765_tehran_dst.sql new file mode 100644 index 00000000000..41b92ae2360 --- /dev/null +++ b/tests/queries/0_stateless/01765_tehran_dst.sql @@ -0,0 +1,2 @@ +SELECT toTimeZone(toDateTime('2021-03-22 18:45:11', 'UTC'), 'Asia/Tehran'); +SELECT toDateTime('2020-03-21 23:00:00', 'Asia/Tehran'); diff --git a/tests/queries/0_stateless/01766_hashed_dictionary_complex_key.reference b/tests/queries/0_stateless/01766_hashed_dictionary_complex_key.reference new file mode 100644 index 00000000000..12c210581c2 --- /dev/null +++ b/tests/queries/0_stateless/01766_hashed_dictionary_complex_key.reference @@ -0,0 +1,56 @@ +Dictionary hashed_dictionary_complex_key_simple_attributes +dictGet existing value +value_0 value_second_0 +value_1 value_second_1 +value_2 value_second_2 +dictGet with non existing value +value_0 value_second_0 +value_1 value_second_1 +value_2 value_second_2 +value_first_default value_second_default +dictGetOrDefault existing value +value_0 value_second_0 +value_1 value_second_1 +value_2 value_second_2 +dictGetOrDefault non existing value +value_0 value_second_0 +value_1 value_second_1 +value_2 value_second_2 +default default +dictHas +1 +1 +1 +0 +select all values as input stream +0 id_key_0 value_0 value_second_0 +1 id_key_1 value_1 value_second_1 +2 id_key_2 value_2 value_second_2 +Dictionary hashed_dictionary_complex_key_complex_attributes +dictGet existing value +value_0 value_second_0 +value_1 \N +value_2 value_second_2 +dictGet with non existing value +value_0 value_second_0 +value_1 \N +value_2 value_second_2 +value_first_default value_second_default +dictGetOrDefault existing value +value_0 value_second_0 +value_1 \N +value_2 value_second_2 +dictGetOrDefault non existing value +value_0 value_second_0 +value_1 \N +value_2 value_second_2 +default default +dictHas +1 +1 +1 +0 +select all values as input stream +0 id_key_0 value_0 value_second_0 +1 id_key_1 value_1 \N +2 id_key_2 value_2 value_second_2 diff --git a/tests/queries/0_stateless/01766_hashed_dictionary_complex_key.sql b/tests/queries/0_stateless/01766_hashed_dictionary_complex_key.sql new file mode 100644 index 00000000000..de7ab5b5a1a --- /dev/null +++ b/tests/queries/0_stateless/01766_hashed_dictionary_complex_key.sql @@ -0,0 +1,98 @@ +DROP DATABASE IF EXISTS 01766_db; +CREATE DATABASE 01766_db; + +CREATE TABLE 01766_db.complex_key_simple_attributes_source_table +( + id UInt64, + id_key String, + value_first String, + value_second String +) +ENGINE = TinyLog; + +INSERT INTO 01766_db.complex_key_simple_attributes_source_table VALUES(0, 'id_key_0', 'value_0', 'value_second_0'); +INSERT INTO 01766_db.complex_key_simple_attributes_source_table VALUES(1, 'id_key_1', 'value_1', 'value_second_1'); +INSERT INTO 01766_db.complex_key_simple_attributes_source_table VALUES(2, 'id_key_2', 'value_2', 'value_second_2'); + +CREATE DICTIONARY 01766_db.hashed_dictionary_complex_key_simple_attributes +( + id UInt64, + id_key String, + value_first String DEFAULT 'value_first_default', + value_second String DEFAULT 'value_second_default' +) +PRIMARY KEY id, id_key +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() USER 'default' TABLE 'complex_key_simple_attributes_source_table' DB '01766_db')) +LIFETIME(MIN 1 MAX 1000) +LAYOUT(COMPLEX_KEY_HASHED()); + +SELECT 'Dictionary hashed_dictionary_complex_key_simple_attributes'; +SELECT 'dictGet 
existing value'; +SELECT dictGet('01766_db.hashed_dictionary_complex_key_simple_attributes', 'value_first', (number, concat('id_key_', toString(number)))) as value_first, + dictGet('01766_db.hashed_dictionary_complex_key_simple_attributes', 'value_second', (number, concat('id_key_', toString(number)))) as value_second FROM system.numbers LIMIT 3; +SELECT 'dictGet with non existing value'; +SELECT dictGet('01766_db.hashed_dictionary_complex_key_simple_attributes', 'value_first', (number, concat('id_key_', toString(number)))) as value_first, + dictGet('01766_db.hashed_dictionary_complex_key_simple_attributes', 'value_second', (number, concat('id_key_', toString(number)))) as value_second FROM system.numbers LIMIT 4; +SELECT 'dictGetOrDefault existing value'; +SELECT dictGetOrDefault('01766_db.hashed_dictionary_complex_key_simple_attributes', 'value_first', (number, concat('id_key_', toString(number))), toString('default')) as value_first, + dictGetOrDefault('01766_db.hashed_dictionary_complex_key_simple_attributes', 'value_second', (number, concat('id_key_', toString(number))), toString('default')) as value_second FROM system.numbers LIMIT 3; +SELECT 'dictGetOrDefault non existing value'; +SELECT dictGetOrDefault('01766_db.hashed_dictionary_complex_key_simple_attributes', 'value_first', (number, concat('id_key_', toString(number))), toString('default')) as value_first, + dictGetOrDefault('01766_db.hashed_dictionary_complex_key_simple_attributes', 'value_second', (number, concat('id_key_', toString(number))), toString('default')) as value_second FROM system.numbers LIMIT 4; +SELECT 'dictHas'; +SELECT dictHas('01766_db.hashed_dictionary_complex_key_simple_attributes', (number, concat('id_key_', toString(number)))) FROM system.numbers LIMIT 4; +SELECT 'select all values as input stream'; +SELECT * FROM 01766_db.hashed_dictionary_complex_key_simple_attributes ORDER BY (id, id_key); + +DROP DICTIONARY 01766_db.hashed_dictionary_complex_key_simple_attributes; + +DROP TABLE 01766_db.complex_key_simple_attributes_source_table; + +CREATE TABLE 01766_db.complex_key_complex_attributes_source_table +( + id UInt64, + id_key String, + value_first String, + value_second Nullable(String) +) +ENGINE = TinyLog; + +INSERT INTO 01766_db.complex_key_complex_attributes_source_table VALUES(0, 'id_key_0', 'value_0', 'value_second_0'); +INSERT INTO 01766_db.complex_key_complex_attributes_source_table VALUES(1, 'id_key_1', 'value_1', NULL); +INSERT INTO 01766_db.complex_key_complex_attributes_source_table VALUES(2, 'id_key_2', 'value_2', 'value_second_2'); + +CREATE DICTIONARY 01766_db.hashed_dictionary_complex_key_complex_attributes +( + id UInt64, + id_key String, + + value_first String DEFAULT 'value_first_default', + value_second Nullable(String) DEFAULT 'value_second_default' +) +PRIMARY KEY id, id_key +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() USER 'default' TABLE 'complex_key_complex_attributes_source_table' DB '01766_db')) +LIFETIME(MIN 1 MAX 1000) +LAYOUT(COMPLEX_KEY_HASHED()); + +SELECT 'Dictionary hashed_dictionary_complex_key_complex_attributes'; +SELECT 'dictGet existing value'; +SELECT dictGet('01766_db.hashed_dictionary_complex_key_complex_attributes', 'value_first', (number, concat('id_key_', toString(number)))) as value_first, + dictGet('01766_db.hashed_dictionary_complex_key_complex_attributes', 'value_second', (number, concat('id_key_', toString(number)))) as value_second FROM system.numbers LIMIT 3; +SELECT 'dictGet with non existing value'; +SELECT 
dictGet('01766_db.hashed_dictionary_complex_key_complex_attributes', 'value_first', (number, concat('id_key_', toString(number)))) as value_first, + dictGet('01766_db.hashed_dictionary_complex_key_complex_attributes', 'value_second', (number, concat('id_key_', toString(number)))) as value_second FROM system.numbers LIMIT 4; +SELECT 'dictGetOrDefault existing value'; +SELECT dictGetOrDefault('01766_db.hashed_dictionary_complex_key_complex_attributes', 'value_first', (number, concat('id_key_', toString(number))), toString('default')) as value_first, + dictGetOrDefault('01766_db.hashed_dictionary_complex_key_complex_attributes', 'value_second', (number, concat('id_key_', toString(number))), toString('default')) as value_second FROM system.numbers LIMIT 3; +SELECT 'dictGetOrDefault non existing value'; +SELECT dictGetOrDefault('01766_db.hashed_dictionary_complex_key_complex_attributes', 'value_first', (number, concat('id_key_', toString(number))), toString('default')) as value_first, + dictGetOrDefault('01766_db.hashed_dictionary_complex_key_complex_attributes', 'value_second', (number, concat('id_key_', toString(number))), toString('default')) as value_second FROM system.numbers LIMIT 4; +SELECT 'dictHas'; +SELECT dictHas('01766_db.hashed_dictionary_complex_key_complex_attributes', (number, concat('id_key_', toString(number)))) FROM system.numbers LIMIT 4; +SELECT 'select all values as input stream'; +SELECT * FROM 01766_db.hashed_dictionary_complex_key_complex_attributes ORDER BY (id, id_key); + +DROP DICTIONARY 01766_db.hashed_dictionary_complex_key_complex_attributes; +DROP TABLE 01766_db.complex_key_complex_attributes_source_table; + +DROP DATABASE 01766_db; diff --git a/tests/queries/0_stateless/01766_todatetime64_no_timezone_arg.reference b/tests/queries/0_stateless/01766_todatetime64_no_timezone_arg.reference new file mode 100644 index 00000000000..52eea094ae4 --- /dev/null +++ b/tests/queries/0_stateless/01766_todatetime64_no_timezone_arg.reference @@ -0,0 +1 @@ +2021-03-22 00:00:00.000 diff --git a/tests/queries/0_stateless/01766_todatetime64_no_timezone_arg.sql b/tests/queries/0_stateless/01766_todatetime64_no_timezone_arg.sql new file mode 100644 index 00000000000..99141a694c1 --- /dev/null +++ b/tests/queries/0_stateless/01766_todatetime64_no_timezone_arg.sql @@ -0,0 +1 @@ +SELECT toDateTime64('2021-03-22', 3); diff --git a/tests/queries/0_stateless/01767_timezoneOf.reference b/tests/queries/0_stateless/01767_timezoneOf.reference new file mode 100644 index 00000000000..0a8a8c32d4e --- /dev/null +++ b/tests/queries/0_stateless/01767_timezoneOf.reference @@ -0,0 +1 @@ +Asia/Tehran Asia/Tehran Asia/Tehran Africa/Accra Pacific/Pitcairn diff --git a/tests/queries/0_stateless/01767_timezoneOf.sh b/tests/queries/0_stateless/01767_timezoneOf.sh new file mode 100755 index 00000000000..9dee051ee3f --- /dev/null +++ b/tests/queries/0_stateless/01767_timezoneOf.sh @@ -0,0 +1,7 @@ +#!/usr/bin/env bash + +CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) +# shellcheck source=../shell_config.sh +. 
"$CUR_DIR"/../shell_config.sh + +TZ=Asia/Tehran $CLICKHOUSE_LOCAL --query "SELECT timezone(), timezoneOf(now()), timeZone(), timeZoneOf(toTimezone(toNullable(now()), 'Africa/Accra')), timeZoneOf(toTimeZone(now64(3), 'Pacific/Pitcairn'))" diff --git a/tests/queries/0_stateless/01768_array_product.reference b/tests/queries/0_stateless/01768_array_product.reference new file mode 100644 index 00000000000..af2508e1aff --- /dev/null +++ b/tests/queries/0_stateless/01768_array_product.reference @@ -0,0 +1,18 @@ +Array product with constant column +720 Float64 +24 Float64 +3.5 Float64 +6 Float64 +Array product with non constant column +24 +0 +6 +24 +0 +6 +Types of aggregation result array product +Float64 Float64 Float64 Float64 +Float64 Float64 Float64 Float64 +Float64 Float64 Float64 +Float64 Float64 +Float64 Float64 Float64 diff --git a/tests/queries/0_stateless/01768_array_product.sql b/tests/queries/0_stateless/01768_array_product.sql new file mode 100644 index 00000000000..75056888ef2 --- /dev/null +++ b/tests/queries/0_stateless/01768_array_product.sql @@ -0,0 +1,26 @@ +SELECT 'Array product with constant column'; + +SELECT arrayProduct([1,2,3,4,5,6]) as a, toTypeName(a); +SELECT arrayProduct(array(1.0,2.0,3.0,4.0)) as a, toTypeName(a); +SELECT arrayProduct(array(1,3.5)) as a, toTypeName(a); +SELECT arrayProduct([toDecimal64(1,8), toDecimal64(2,8), toDecimal64(3,8)]) as a, toTypeName(a); + +SELECT 'Array product with non constant column'; + +DROP TABLE IF EXISTS test_aggregation; +CREATE TABLE test_aggregation (x Array(Int)) ENGINE=TinyLog; +INSERT INTO test_aggregation VALUES ([1,2,3,4]), ([]), ([1,2,3]); +SELECT arrayProduct(x) FROM test_aggregation; +DROP TABLE test_aggregation; + +CREATE TABLE test_aggregation (x Array(Decimal64(8))) ENGINE=TinyLog; +INSERT INTO test_aggregation VALUES ([1,2,3,4]), ([]), ([1,2,3]); +SELECT arrayProduct(x) FROM test_aggregation; +DROP TABLE test_aggregation; + +SELECT 'Types of aggregation result array product'; +SELECT toTypeName(arrayProduct([toInt8(0)])), toTypeName(arrayProduct([toInt16(0)])), toTypeName(arrayProduct([toInt32(0)])), toTypeName(arrayProduct([toInt64(0)])); +SELECT toTypeName(arrayProduct([toUInt8(0)])), toTypeName(arrayProduct([toUInt16(0)])), toTypeName(arrayProduct([toUInt32(0)])), toTypeName(arrayProduct([toUInt64(0)])); +SELECT toTypeName(arrayProduct([toInt128(0)])), toTypeName(arrayProduct([toInt256(0)])), toTypeName(arrayProduct([toUInt256(0)])); +SELECT toTypeName(arrayProduct([toFloat32(0)])), toTypeName(arrayProduct([toFloat64(0)])); +SELECT toTypeName(arrayProduct([toDecimal32(0, 8)])), toTypeName(arrayProduct([toDecimal64(0, 8)])), toTypeName(arrayProduct([toDecimal128(0, 8)])); diff --git a/tests/queries/0_stateless/01768_extended_range.reference b/tests/queries/0_stateless/01768_extended_range.reference new file mode 100644 index 00000000000..1436eeae43a --- /dev/null +++ b/tests/queries/0_stateless/01768_extended_range.reference @@ -0,0 +1,3 @@ +1968 +-473 +1990-01-01 diff --git a/tests/queries/0_stateless/01768_extended_range.sql b/tests/queries/0_stateless/01768_extended_range.sql new file mode 100644 index 00000000000..4acaccd1399 --- /dev/null +++ b/tests/queries/0_stateless/01768_extended_range.sql @@ -0,0 +1,4 @@ +SELECT toYear(toDateTime64('1968-12-12 11:22:33', 0, 'UTC')); +SELECT toInt16(toRelativeWeekNum(toDateTime64('1960-11-30 18:00:11.999', 3, 'UTC'))); +SELECT toStartOfQuarter(toDateTime64('1990-01-04 12:14:12', 0, 'UTC')); +SELECT toUnixTimestamp(toDateTime64('1900-12-12 11:22:33', 0, 'UTC')); -- { 
serverError 407 } diff --git a/tests/queries/0_stateless/01769_extended_range_2.reference b/tests/queries/0_stateless/01769_extended_range_2.reference new file mode 100644 index 00000000000..e9c4e1d8604 --- /dev/null +++ b/tests/queries/0_stateless/01769_extended_range_2.reference @@ -0,0 +1,3 @@ +1969-12-31 18:00:12 +1969-12-30 18:00:12 +1969-12-31 18:00:12 diff --git a/tests/queries/0_stateless/01769_extended_range_2.sql b/tests/queries/0_stateless/01769_extended_range_2.sql new file mode 100644 index 00000000000..a2570c9397b --- /dev/null +++ b/tests/queries/0_stateless/01769_extended_range_2.sql @@ -0,0 +1,3 @@ +SELECT toDateTime64('1969-12-31 18:00:12', 0, 'America/Phoenix'); +SELECT toDateTime64('1969-12-30 18:00:12', 0, 'America/Phoenix'); +SELECT toDateTime64('1969-12-31 18:00:12', 0, 'Europe/Moscow'); diff --git a/tests/queries/0_stateless/01770_add_months_ubsan.reference b/tests/queries/0_stateless/01770_add_months_ubsan.reference new file mode 100644 index 00000000000..573541ac970 --- /dev/null +++ b/tests/queries/0_stateless/01770_add_months_ubsan.reference @@ -0,0 +1 @@ +0 diff --git a/tests/queries/0_stateless/01770_add_months_ubsan.sql b/tests/queries/0_stateless/01770_add_months_ubsan.sql new file mode 100644 index 00000000000..039434ff9bc --- /dev/null +++ b/tests/queries/0_stateless/01770_add_months_ubsan.sql @@ -0,0 +1,2 @@ +-- The result does not make sense, but it should not trigger a UBSan report. +SELECT ignore(now() + INTERVAL 9223372036854775807 MONTH); diff --git a/tests/queries/0_stateless/01770_extended_range_3.reference b/tests/queries/0_stateless/01770_extended_range_3.reference new file mode 100644 index 00000000000..1a35ee6cc1e --- /dev/null +++ b/tests/queries/0_stateless/01770_extended_range_3.reference @@ -0,0 +1,2 @@ +1984-04-01 08:00:00 +1985-03-31 09:00:00 diff --git a/tests/queries/0_stateless/01770_extended_range_3.sql b/tests/queries/0_stateless/01770_extended_range_3.sql new file mode 100644 index 00000000000..68e0782d3d5 --- /dev/null +++ b/tests/queries/0_stateless/01770_extended_range_3.sql @@ -0,0 +1,2 @@ +SELECT addHours(toDateTime64('1984-03-31 23:00:00', 0, 'Asia/Novosibirsk'), 8); +SELECT addHours(toDateTime64('1985-03-31 00:00:00', 0, 'Asia/Novosibirsk'), 8); diff --git a/tests/queries/0_stateless/01771_bloom_filter_not_has.reference b/tests/queries/0_stateless/01771_bloom_filter_not_has.reference new file mode 100644 index 00000000000..fc08c4c0d15 --- /dev/null +++ b/tests/queries/0_stateless/01771_bloom_filter_not_has.reference @@ -0,0 +1,3 @@ +10000000 +1 +9999999 diff --git a/tests/queries/0_stateless/01771_bloom_filter_not_has.sql b/tests/queries/0_stateless/01771_bloom_filter_not_has.sql new file mode 100644 index 00000000000..ab0e3d308f9 --- /dev/null +++ b/tests/queries/0_stateless/01771_bloom_filter_not_has.sql @@ -0,0 +1,7 @@ +DROP TABLE IF EXISTS bloom_filter_null_array; +CREATE TABLE bloom_filter_null_array (v Array(Int32), INDEX idx v TYPE bloom_filter GRANULARITY 3) ENGINE = MergeTree() ORDER BY v; +INSERT INTO bloom_filter_null_array SELECT [number] FROM numbers(10000000); +SELECT COUNT() FROM bloom_filter_null_array; +SELECT COUNT() FROM bloom_filter_null_array WHERE has(v, 0); +SELECT COUNT() FROM bloom_filter_null_array WHERE not has(v, 0); +DROP TABLE bloom_filter_null_array; diff --git a/tests/queries/0_stateless/01771_datetime64_no_time_part.reference b/tests/queries/0_stateless/01771_datetime64_no_time_part.reference new file mode 100644 index 00000000000..c13116eeefe --- /dev/null +++ 
b/tests/queries/0_stateless/01771_datetime64_no_time_part.reference @@ -0,0 +1 @@ +1985-03-31 00:00:00 diff --git a/tests/queries/0_stateless/01771_datetime64_no_time_part.sql b/tests/queries/0_stateless/01771_datetime64_no_time_part.sql new file mode 100644 index 00000000000..debf4783eb8 --- /dev/null +++ b/tests/queries/0_stateless/01771_datetime64_no_time_part.sql @@ -0,0 +1 @@ +SELECT toDateTime64('1985-03-31', 0, 'Europe/Helsinki'); diff --git a/tests/queries/0_stateless/01772_intdiv_minus_one_ubsan.reference b/tests/queries/0_stateless/01772_intdiv_minus_one_ubsan.reference new file mode 100644 index 00000000000..6b764d18a4d --- /dev/null +++ b/tests/queries/0_stateless/01772_intdiv_minus_one_ubsan.reference @@ -0,0 +1,10 @@ +-9223372036854775807 +-9223372036854775808 +9223372036854775807 +9223372036854775806 +9223372036854775805 +9223372036854775804 +9223372036854775803 +9223372036854775802 +9223372036854775801 +9223372036854775800 diff --git a/tests/queries/0_stateless/01772_intdiv_minus_one_ubsan.sql b/tests/queries/0_stateless/01772_intdiv_minus_one_ubsan.sql new file mode 100644 index 00000000000..20b4f585182 --- /dev/null +++ b/tests/queries/0_stateless/01772_intdiv_minus_one_ubsan.sql @@ -0,0 +1 @@ +SELECT intDiv(toInt64(number), -1) FROM numbers(9223372036854775807, 10); diff --git a/tests/queries/0_stateless/01772_to_start_of_hour_align.reference b/tests/queries/0_stateless/01772_to_start_of_hour_align.reference new file mode 100644 index 00000000000..f130df3bef5 --- /dev/null +++ b/tests/queries/0_stateless/01772_to_start_of_hour_align.reference @@ -0,0 +1,86 @@ +2021-03-23 00:00:00 +2021-03-23 11:00:00 +2021-03-23 22:00:00 +2021-03-23 13:00:00 +2021-03-23 12:00:00 +2021-03-23 00:00:00 +2010-03-28 00:00:00 2010-03-28 00:00:00 1269723600 +2010-03-28 00:15:00 2010-03-28 00:00:00 1269724500 +2010-03-28 00:30:00 2010-03-28 00:00:00 1269725400 +2010-03-28 00:45:00 2010-03-28 00:00:00 1269726300 +2010-03-28 01:00:00 2010-03-28 00:00:00 1269727200 +2010-03-28 01:15:00 2010-03-28 00:00:00 1269728100 +2010-03-28 01:30:00 2010-03-28 00:00:00 1269729000 +2010-03-28 01:45:00 2010-03-28 00:00:00 1269729900 +2010-03-28 03:00:00 2010-03-28 03:00:00 1269730800 +2010-03-28 03:15:00 2010-03-28 03:00:00 1269731700 +2010-03-28 03:30:00 2010-03-28 03:00:00 1269732600 +2010-03-28 03:45:00 2010-03-28 03:00:00 1269733500 +2010-03-28 04:00:00 2010-03-28 04:00:00 1269734400 +2010-03-28 04:15:00 2010-03-28 04:00:00 1269735300 +2010-03-28 04:30:00 2010-03-28 04:00:00 1269736200 +2010-03-28 04:45:00 2010-03-28 04:00:00 1269737100 +2010-03-28 05:00:00 2010-03-28 04:00:00 1269738000 +2010-03-28 05:15:00 2010-03-28 04:00:00 1269738900 +2010-03-28 05:30:00 2010-03-28 04:00:00 1269739800 +2010-03-28 05:45:00 2010-03-28 04:00:00 1269740700 +2010-10-31 00:00:00 2010-10-31 00:00:00 1288468800 +2010-10-31 00:15:00 2010-10-31 00:00:00 1288469700 +2010-10-31 00:30:00 2010-10-31 00:00:00 1288470600 +2010-10-31 00:45:00 2010-10-31 00:00:00 1288471500 +2010-10-31 01:00:00 2010-10-31 00:00:00 1288472400 +2010-10-31 01:15:00 2010-10-31 00:00:00 1288473300 +2010-10-31 01:30:00 2010-10-31 00:00:00 1288474200 +2010-10-31 01:45:00 2010-10-31 00:00:00 1288475100 +2010-10-31 02:00:00 2010-10-31 02:00:00 1288476000 +2010-10-31 02:15:00 2010-10-31 02:00:00 1288476900 +2010-10-31 02:30:00 2010-10-31 02:00:00 1288477800 +2010-10-31 02:45:00 2010-10-31 02:00:00 1288478700 +2010-10-31 02:00:00 2010-10-31 02:00:00 1288479600 +2010-10-31 02:15:00 2010-10-31 02:00:00 1288480500 +2010-10-31 02:30:00 2010-10-31 02:00:00 1288481400 
+2010-10-31 02:45:00 2010-10-31 02:00:00 1288482300 +2010-10-31 03:00:00 2010-10-31 02:00:00 1288483200 +2010-10-31 03:15:00 2010-10-31 02:00:00 1288484100 +2010-10-31 03:30:00 2010-10-31 02:00:00 1288485000 +2010-10-31 03:45:00 2010-10-31 02:00:00 1288485900 +2020-04-05 00:00:00 2020-04-05 00:00:00 1586005200 +2020-04-05 00:15:00 2020-04-05 00:00:00 1586006100 +2020-04-05 00:30:00 2020-04-05 00:00:00 1586007000 +2020-04-05 00:45:00 2020-04-05 00:00:00 1586007900 +2020-04-05 01:00:00 2020-04-05 00:00:00 1586008800 +2020-04-05 01:15:00 2020-04-05 00:00:00 1586009700 +2020-04-05 01:30:00 2020-04-05 00:00:00 1586010600 +2020-04-05 01:45:00 2020-04-05 00:00:00 1586011500 +2020-04-05 01:30:00 2020-04-05 00:00:00 1586012400 +2020-04-05 01:45:00 2020-04-05 00:00:00 1586013300 +2020-04-05 02:00:00 2020-04-05 02:00:00 1586014200 +2020-04-05 02:15:00 2020-04-05 02:00:00 1586015100 +2020-04-05 02:30:00 2020-04-05 02:00:00 1586016000 +2020-04-05 02:45:00 2020-04-05 02:00:00 1586016900 +2020-04-05 03:00:00 2020-04-05 02:00:00 1586017800 +2020-04-05 03:15:00 2020-04-05 02:00:00 1586018700 +2020-04-05 03:30:00 2020-04-05 02:00:00 1586019600 +2020-04-05 03:45:00 2020-04-05 02:00:00 1586020500 +2020-04-05 04:00:00 2020-04-05 04:00:00 1586021400 +2020-04-05 04:15:00 2020-04-05 04:00:00 1586022300 +2020-10-04 00:00:00 2020-10-04 00:00:00 1601731800 +2020-10-04 00:15:00 2020-10-04 00:00:00 1601732700 +2020-10-04 00:30:00 2020-10-04 00:00:00 1601733600 +2020-10-04 00:45:00 2020-10-04 00:00:00 1601734500 +2020-10-04 01:00:00 2020-10-04 00:00:00 1601735400 +2020-10-04 01:15:00 2020-10-04 00:00:00 1601736300 +2020-10-04 01:30:00 2020-10-04 00:00:00 1601737200 +2020-10-04 01:45:00 2020-10-04 00:00:00 1601738100 +2020-10-04 02:30:00 2020-10-04 02:30:00 1601739000 +2020-10-04 02:45:00 2020-10-04 02:30:00 1601739900 +2020-10-04 03:00:00 2020-10-04 02:30:00 1601740800 +2020-10-04 03:15:00 2020-10-04 02:30:00 1601741700 +2020-10-04 03:30:00 2020-10-04 02:30:00 1601742600 +2020-10-04 03:45:00 2020-10-04 02:30:00 1601743500 +2020-10-04 04:00:00 2020-10-04 04:00:00 1601744400 +2020-10-04 04:15:00 2020-10-04 04:00:00 1601745300 +2020-10-04 04:30:00 2020-10-04 04:00:00 1601746200 +2020-10-04 04:45:00 2020-10-04 04:00:00 1601747100 +2020-10-04 05:00:00 2020-10-04 04:00:00 1601748000 +2020-10-04 05:15:00 2020-10-04 04:00:00 1601748900 diff --git a/tests/queries/0_stateless/01772_to_start_of_hour_align.sql b/tests/queries/0_stateless/01772_to_start_of_hour_align.sql new file mode 100644 index 00000000000..6d1bb460f90 --- /dev/null +++ b/tests/queries/0_stateless/01772_to_start_of_hour_align.sql @@ -0,0 +1,21 @@ +-- Rounding down to hour intervals is aligned to midnight even if the interval length does not divide the whole day. +SELECT toStartOfInterval(toDateTime('2021-03-23 03:58:00'), INTERVAL 11 HOUR); +SELECT toStartOfInterval(toDateTime('2021-03-23 13:58:00'), INTERVAL 11 HOUR); +SELECT toStartOfInterval(toDateTime('2021-03-23 23:58:00'), INTERVAL 11 HOUR); + +-- It should work correctly even in timezones with a non-whole-hour offset. India has +05:30. +SELECT toStartOfHour(toDateTime('2021-03-23 13:58:00', 'Asia/Kolkata')); +SELECT toStartOfInterval(toDateTime('2021-03-23 13:58:00', 'Asia/Kolkata'), INTERVAL 6 HOUR); + +-- Specifying an interval longer than 24 hours is not correct, but it works as expected by simply rounding down to midnight. 
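+-- (For instance, the 66-hour query below is expected to return plain 2021-03-23 00:00:00: 13:58 is less than 66 hours past midnight, so rounding down lands on that day's midnight, exactly as a 24-hour interval would.)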
+SELECT toStartOfInterval(toDateTime('2021-03-23 13:58:00', 'Asia/Kolkata'), INTERVAL 66 HOUR); + +-- In case of timezone shifts, rounding is performed to the hour number in "wall clock" time. +-- The intervals may become shorter or longer due to time shifts. For example, a three-hour interval may actually last only two hours. +-- If the same "wall clock" hour number corresponds to multiple time points due to a backward shift, an unspecified one of the candidates is selected. +SELECT toDateTime('2010-03-28 00:00:00', 'Europe/Moscow') + INTERVAL 15 * number MINUTE AS src, toStartOfInterval(src, INTERVAL 2 HOUR) AS rounded, toUnixTimestamp(src) AS t FROM numbers(20); +SELECT toDateTime('2010-10-31 00:00:00', 'Europe/Moscow') + INTERVAL 15 * number MINUTE AS src, toStartOfInterval(src, INTERVAL 2 HOUR) AS rounded, toUnixTimestamp(src) AS t FROM numbers(20); + +-- And this should work even for shifts by a non-whole number of hours. +SELECT toDateTime('2020-04-05 00:00:00', 'Australia/Lord_Howe') + INTERVAL 15 * number MINUTE AS src, toStartOfInterval(src, INTERVAL 2 HOUR) AS rounded, toUnixTimestamp(src) AS t FROM numbers(20); +SELECT toDateTime('2020-10-04 00:00:00', 'Australia/Lord_Howe') + INTERVAL 15 * number MINUTE AS src, toStartOfInterval(src, INTERVAL 2 HOUR) AS rounded, toUnixTimestamp(src) AS t FROM numbers(20); diff --git a/tests/queries/0_stateless/01773_case_sensitive_version.reference b/tests/queries/0_stateless/01773_case_sensitive_version.reference new file mode 100644 index 00000000000..72749c905a3 --- /dev/null +++ b/tests/queries/0_stateless/01773_case_sensitive_version.reference @@ -0,0 +1 @@ +1 1 1 diff --git a/tests/queries/0_stateless/01773_case_sensitive_version.sql b/tests/queries/0_stateless/01773_case_sensitive_version.sql new file mode 100644 index 00000000000..27fa1c27b2a --- /dev/null +++ b/tests/queries/0_stateless/01773_case_sensitive_version.sql @@ -0,0 +1 @@ +SELECT version()=Version(), VERSION()=Version(), vErSiOn()=VeRsIoN(); diff --git a/tests/queries/0_stateless/01773_datetime64_add_ubsan.reference b/tests/queries/0_stateless/01773_datetime64_add_ubsan.reference new file mode 100644 index 00000000000..aa47d0d46d4 --- /dev/null +++ b/tests/queries/0_stateless/01773_datetime64_add_ubsan.reference @@ -0,0 +1,2 @@ +0 +0 diff --git a/tests/queries/0_stateless/01773_datetime64_add_ubsan.sql b/tests/queries/0_stateless/01773_datetime64_add_ubsan.sql new file mode 100644 index 00000000000..f7267f2b6b4 --- /dev/null +++ b/tests/queries/0_stateless/01773_datetime64_add_ubsan.sql @@ -0,0 +1,2 @@ +-- The result is unspecified, but UBSan should not complain. 
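+-- ignore() always returns 0, so the two expected rows are plain zeros; the test only checks that converting inf into an hours delta completes without a UBSan report.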
+SELECT ignore(addHours(now64(3), inf)) FROM numbers(2); diff --git a/tests/queries/0_stateless/01773_min_max_time_system_parts_datetime64.reference b/tests/queries/0_stateless/01773_min_max_time_system_parts_datetime64.reference new file mode 100644 index 00000000000..1cea52ec1c2 --- /dev/null +++ b/tests/queries/0_stateless/01773_min_max_time_system_parts_datetime64.reference @@ -0,0 +1,2 @@ +2000-01-02 03:04:05 2001-02-03 04:05:06 +2000-01-02 03:04:05 2001-02-03 04:05:06 diff --git a/tests/queries/0_stateless/01773_min_max_time_system_parts_datetime64.sql b/tests/queries/0_stateless/01773_min_max_time_system_parts_datetime64.sql new file mode 100644 index 00000000000..5a1f809b03b --- /dev/null +++ b/tests/queries/0_stateless/01773_min_max_time_system_parts_datetime64.sql @@ -0,0 +1,9 @@ +DROP TABLE IF EXISTS test; +CREATE TABLE test (time DateTime64(3)) ENGINE = MergeTree ORDER BY tuple() PARTITION BY toStartOfInterval(time, INTERVAL 2 YEAR); + +INSERT INTO test VALUES ('2000-01-02 03:04:05.123'), ('2001-02-03 04:05:06.789'); + +SELECT min_time, max_time FROM system.parts WHERE table = 'test' AND database = currentDatabase(); +SELECT min_time, max_time FROM system.parts_columns WHERE table = 'test' AND database = currentDatabase(); + +DROP TABLE test; diff --git a/tests/queries/0_stateless/01774_bar_with_illegal_value.reference b/tests/queries/0_stateless/01774_bar_with_illegal_value.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01774_bar_with_illegal_value.sql b/tests/queries/0_stateless/01774_bar_with_illegal_value.sql new file mode 100644 index 00000000000..60c7f303c13 --- /dev/null +++ b/tests/queries/0_stateless/01774_bar_with_illegal_value.sql @@ -0,0 +1 @@ +SELECT greatCircleAngle(1048575, 257, -9223372036854775808, 1048576) - NULL, bar(7, -inf, 1024); -- { serverError 36 } diff --git a/tests/queries/0_stateless/01774_case_sensitive_connection_id.reference b/tests/queries/0_stateless/01774_case_sensitive_connection_id.reference new file mode 100644 index 00000000000..95bea1a178c --- /dev/null +++ b/tests/queries/0_stateless/01774_case_sensitive_connection_id.reference @@ -0,0 +1 @@ +0 0 0 0 0 0 diff --git a/tests/queries/0_stateless/01774_case_sensitive_connection_id.sql b/tests/queries/0_stateless/01774_case_sensitive_connection_id.sql new file mode 100644 index 00000000000..5a4f2b5853a --- /dev/null +++ b/tests/queries/0_stateless/01774_case_sensitive_connection_id.sql @@ -0,0 +1 @@ +SELECT connection_id(), CONNECTION_ID(), CoNnEcTiOn_Id(), connectionid(), CONNECTIONID(), CoNnEcTiOnId(); diff --git a/tests/queries/0_stateless/01774_ip_address_in_range.reference b/tests/queries/0_stateless/01774_ip_address_in_range.reference new file mode 100644 index 00000000000..b2f282d2183 --- /dev/null +++ b/tests/queries/0_stateless/01774_ip_address_in_range.reference @@ -0,0 +1,46 @@ +# Invocation with constants +1 +0 +1 +0 +# Invocation with non-constant addresses +192.168.99.255 192.168.100.0/22 0 +192.168.100.1 192.168.100.0/22 1 +192.168.103.255 192.168.100.0/22 1 +192.168.104.0 192.168.100.0/22 0 +::192.168.99.255 ::192.168.100.0/118 0 +::192.168.100.1 ::192.168.100.0/118 1 +::192.168.103.255 ::192.168.100.0/118 1 +::192.168.104.0 ::192.168.100.0/118 0 +# Invocation with non-constant prefixes +192.168.100.1 192.168.100.0/22 1 +192.168.100.1 192.168.100.0/24 1 +192.168.100.1 192.168.100.0/32 0 +::192.168.100.1 ::192.168.100.0/118 1 +::192.168.100.1 ::192.168.100.0/120 1 +::192.168.100.1 ::192.168.100.0/128 0 +# Invocation with 
non-constants +192.168.100.1 192.168.100.0/22 1 +192.168.100.1 192.168.100.0/24 1 +192.168.103.255 192.168.100.0/22 1 +192.168.103.255 192.168.100.0/24 0 +::192.168.100.1 ::192.168.100.0/118 1 +::192.168.100.1 ::192.168.100.0/120 1 +::192.168.103.255 ::192.168.100.0/118 1 +::192.168.103.255 ::192.168.100.0/120 0 +# Check with dense table +1 +1 +1 +1 +1 +1 +1 +1 +# Mismatching IP versions is not an error. +0 +0 +0 +0 +# Unparsable arguments +# Wrong argument types diff --git a/tests/queries/0_stateless/01774_ip_address_in_range.sql b/tests/queries/0_stateless/01774_ip_address_in_range.sql new file mode 100644 index 00000000000..29c2bcb220d --- /dev/null +++ b/tests/queries/0_stateless/01774_ip_address_in_range.sql @@ -0,0 +1,64 @@ +SELECT '# Invocation with constants'; + +SELECT isIPAddressInRange('127.0.0.1', '127.0.0.0/8'); +SELECT isIPAddressInRange('128.0.0.1', '127.0.0.0/8'); + +SELECT isIPAddressInRange('ffff::1', 'ffff::/16'); +SELECT isIPAddressInRange('fffe::1', 'ffff::/16'); + +SELECT '# Invocation with non-constant addresses'; + +WITH arrayJoin(['192.168.99.255', '192.168.100.1', '192.168.103.255', '192.168.104.0']) as addr, '192.168.100.0/22' as prefix SELECT addr, prefix, isIPAddressInRange(addr, prefix); +WITH arrayJoin(['::192.168.99.255', '::192.168.100.1', '::192.168.103.255', '::192.168.104.0']) as addr, '::192.168.100.0/118' as prefix SELECT addr, prefix, isIPAddressInRange(addr, prefix); + +SELECT '# Invocation with non-constant prefixes'; + +WITH '192.168.100.1' as addr, arrayJoin(['192.168.100.0/22', '192.168.100.0/24', '192.168.100.0/32']) as prefix SELECT addr, prefix, isIPAddressInRange(addr, prefix); +WITH '::192.168.100.1' as addr, arrayJoin(['::192.168.100.0/118', '::192.168.100.0/120', '::192.168.100.0/128']) as prefix SELECT addr, prefix, isIPAddressInRange(addr, prefix); + +SELECT '# Invocation with non-constants'; + +WITH arrayJoin(['192.168.100.1', '192.168.103.255']) as addr, arrayJoin(['192.168.100.0/22', '192.168.100.0/24']) as prefix SELECT addr, prefix, isIPAddressInRange(addr, prefix); +WITH arrayJoin(['::192.168.100.1', '::192.168.103.255']) as addr, arrayJoin(['::192.168.100.0/118', '::192.168.100.0/120']) as prefix SELECT addr, prefix, isIPAddressInRange(addr, prefix); + +SELECT '# Check with dense table'; + +DROP TABLE IF EXISTS test_data; +CREATE TABLE test_data (cidr String) ENGINE = Memory; + +INSERT INTO test_data +SELECT + IPv4NumToString(IPv4CIDRToRange(IPv4StringToNum('255.255.255.255'), toUInt8(number)).1) || '/' || toString(number) AS cidr +FROM system.numbers LIMIT 33; + +SELECT sum(isIPAddressInRange('0.0.0.0', cidr)) == 1 FROM test_data; +SELECT sum(isIPAddressInRange('127.0.0.0', cidr)) == 1 FROM test_data; +SELECT sum(isIPAddressInRange('128.0.0.0', cidr)) == 2 FROM test_data; +SELECT sum(isIPAddressInRange('255.0.0.0', cidr)) == 9 FROM test_data; +SELECT sum(isIPAddressInRange('255.0.0.1', cidr)) == 9 FROM test_data; +SELECT sum(isIPAddressInRange('255.0.0.255', cidr)) == 9 FROM test_data; +SELECT sum(isIPAddressInRange('255.255.255.255', cidr)) == 33 FROM test_data; +SELECT sum(isIPAddressInRange('255.255.255.254', cidr)) == 32 FROM test_data; + +DROP TABLE IF EXISTS test_data; + +SELECT '# Mismatching IP versions is not an error.'; + +SELECT isIPAddressInRange('127.0.0.1', 'ffff::/16'); +SELECT isIPAddressInRange('127.0.0.1', '::127.0.0.1/128'); +SELECT isIPAddressInRange('::1', '127.0.0.0/8'); +SELECT isIPAddressInRange('::127.0.0.1', '127.0.0.1/32'); + +SELECT '# Unparsable arguments'; + +SELECT isIPAddressInRange('unparsable', 
'127.0.0.0/8'); -- { serverError 6 } +SELECT isIPAddressInRange('127.0.0.1', 'unparsable'); -- { serverError 6 } + +SELECT '# Wrong argument types'; + +SELECT isIPAddressInRange(100, '127.0.0.0/8'); -- { serverError 43 } +SELECT isIPAddressInRange(NULL, '127.0.0.0/8'); -- { serverError 43 } +SELECT isIPAddressInRange(CAST(NULL, 'Nullable(String)'), '127.0.0.0/8'); -- { serverError 43 } +SELECT isIPAddressInRange('127.0.0.1', 100); -- { serverError 43 } +SELECT isIPAddressInRange(100, NULL); -- { serverError 43 } +WITH arrayJoin([NULL, NULL, NULL, NULL]) AS prefix SELECT isIPAddressInRange([NULL, NULL, 0, 255, 0], prefix); -- { serverError 43 } diff --git a/tests/queries/0_stateless/01774_tuple_null_in.reference b/tests/queries/0_stateless/01774_tuple_null_in.reference new file mode 100644 index 00000000000..aa47d0d46d4 --- /dev/null +++ b/tests/queries/0_stateless/01774_tuple_null_in.reference @@ -0,0 +1,2 @@ +0 +0 diff --git a/tests/queries/0_stateless/01774_tuple_null_in.sql b/tests/queries/0_stateless/01774_tuple_null_in.sql new file mode 100644 index 00000000000..a9cc39e8840 --- /dev/null +++ b/tests/queries/0_stateless/01774_tuple_null_in.sql @@ -0,0 +1,2 @@ +SELECT (NULL, NULL) = (8, 0) OR (NULL, NULL) = (3, 2) OR (NULL, NULL) = (0, 0) OR (NULL, NULL) = (3, 1); +SELECT (NULL, NULL) IN ((NULL, 0), (3, 1), (3, 2), (8, 0), (NULL, NULL)); diff --git a/tests/queries/0_stateless/01776_decrypt_aead_size_check.reference b/tests/queries/0_stateless/01776_decrypt_aead_size_check.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01776_decrypt_aead_size_check.sql b/tests/queries/0_stateless/01776_decrypt_aead_size_check.sql new file mode 100644 index 00000000000..8730ed0eda2 --- /dev/null +++ b/tests/queries/0_stateless/01776_decrypt_aead_size_check.sql @@ -0,0 +1 @@ +SELECT decrypt('aes-128-gcm', 'text', 'key', 'IV'); -- { serverError 36 } diff --git a/tests/queries/0_stateless/01777_map_populate_series_ubsan.reference b/tests/queries/0_stateless/01777_map_populate_series_ubsan.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01777_map_populate_series_ubsan.sql b/tests/queries/0_stateless/01777_map_populate_series_ubsan.sql new file mode 100644 index 00000000000..5a8c182425a --- /dev/null +++ b/tests/queries/0_stateless/01777_map_populate_series_ubsan.sql @@ -0,0 +1,2 @@ +-- Should correctly throw exception about overflow: +SELECT mapPopulateSeries([-9223372036854775808, toUInt32(2)], [toUInt32(1023), -1]); -- { serverError 128 } diff --git a/tests/queries/0_stateless/01778_hierarchical_dictionaries.reference b/tests/queries/0_stateless/01778_hierarchical_dictionaries.reference new file mode 100644 index 00000000000..5fe5f5f1db6 --- /dev/null +++ b/tests/queries/0_stateless/01778_hierarchical_dictionaries.reference @@ -0,0 +1,102 @@ +Flat dictionary +Get hierarchy +[] +[1] +[2,1] +[3,1] +[4,2,1] +[] +Get is in hierarchy +0 +1 +1 +1 +1 +0 +Get children +[1] +[2,3] +[4] +[] +[] +[] +Get all descendants +[1,2,3,4] +[2,3,4] +[4] +[] +[] +[] +Get descendants at first level +[1] +[2,3] +[4] +[] +[] +[] +Hashed dictionary +Get hierarchy +[] +[1] +[2,1] +[3,1] +[4,2,1] +[] +Get is in hierarchy +0 +1 +1 +1 +1 +0 +Get children +[1] +[3,2] +[4] +[] +[] +[] +Get all descendants +[1,3,2,4] +[3,2,4] +[4] +[] +[] +[] +Get descendants at first level +[1] +[3,2] +[4] +[] +[] +[] +Cache dictionary +Get hierarchy +[] +[1] +[2,1] +[3,1] +[4,2,1] +[] +Get is in hierarchy +0 +1 +1 +1 +1 +0 +Direct dictionary +Get 
hierarchy +[] +[1] +[2,1] +[3,1] +[4,2,1] +[] +Get is in hierarchy +0 +1 +1 +1 +1 +0 diff --git a/tests/queries/0_stateless/01778_hierarchical_dictionaries.sql b/tests/queries/0_stateless/01778_hierarchical_dictionaries.sql new file mode 100644 index 00000000000..f6e1a7c9375 --- /dev/null +++ b/tests/queries/0_stateless/01778_hierarchical_dictionaries.sql @@ -0,0 +1,95 @@ +DROP DATABASE IF EXISTS 01778_db; +CREATE DATABASE 01778_db; + +CREATE TABLE 01778_db.hierarchy_source_table (id UInt64, parent_id UInt64) ENGINE = TinyLog; +INSERT INTO 01778_db.hierarchy_source_table VALUES (1, 0), (2, 1), (3, 1), (4, 2); + +CREATE DICTIONARY 01778_db.hierarchy_flat_dictionary +( + id UInt64, + parent_id UInt64 HIERARCHICAL +) +PRIMARY KEY id +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() USER 'default' TABLE 'hierarchy_source_table' DB '01778_db')) +LAYOUT(FLAT()) +LIFETIME(MIN 1 MAX 1000); + +SELECT 'Flat dictionary'; + +SELECT 'Get hierarchy'; +SELECT dictGetHierarchy('01778_db.hierarchy_flat_dictionary', number) FROM system.numbers LIMIT 6; +SELECT 'Get is in hierarchy'; +SELECT dictIsIn('01778_db.hierarchy_flat_dictionary', number, number) FROM system.numbers LIMIT 6; +SELECT 'Get children'; +SELECT dictGetChildren('01778_db.hierarchy_flat_dictionary', number) FROM system.numbers LIMIT 6; +SELECT 'Get all descendants'; +SELECT dictGetDescendants('01778_db.hierarchy_flat_dictionary', number) FROM system.numbers LIMIT 6; +SELECT 'Get descendants at first level'; +SELECT dictGetDescendants('01778_db.hierarchy_flat_dictionary', number, 1) FROM system.numbers LIMIT 6; + +DROP DICTIONARY 01778_db.hierarchy_flat_dictionary; + +CREATE DICTIONARY 01778_db.hierarchy_hashed_dictionary +( + id UInt64, + parent_id UInt64 HIERARCHICAL +) +PRIMARY KEY id +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() USER 'default' TABLE 'hierarchy_source_table' DB '01778_db')) +LAYOUT(HASHED()) +LIFETIME(MIN 1 MAX 1000); + +SELECT 'Hashed dictionary'; + +SELECT 'Get hierarchy'; +SELECT dictGetHierarchy('01778_db.hierarchy_hashed_dictionary', number) FROM system.numbers LIMIT 6; +SELECT 'Get is in hierarchy'; +SELECT dictIsIn('01778_db.hierarchy_hashed_dictionary', number, number) FROM system.numbers LIMIT 6; +SELECT 'Get children'; +SELECT dictGetChildren('01778_db.hierarchy_hashed_dictionary', number) FROM system.numbers LIMIT 6; +SELECT 'Get all descendants'; +SELECT dictGetDescendants('01778_db.hierarchy_hashed_dictionary', number) FROM system.numbers LIMIT 6; +SELECT 'Get descendants at first level'; +SELECT dictGetDescendants('01778_db.hierarchy_hashed_dictionary', number, 1) FROM system.numbers LIMIT 6; + +DROP DICTIONARY 01778_db.hierarchy_hashed_dictionary; + +CREATE DICTIONARY 01778_db.hierarchy_cache_dictionary +( + id UInt64, + parent_id UInt64 HIERARCHICAL +) +PRIMARY KEY id +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() USER 'default' TABLE 'hierarchy_source_table' DB '01778_db')) +LAYOUT(CACHE(SIZE_IN_CELLS 10)) +LIFETIME(MIN 1 MAX 1000); + +SELECT 'Cache dictionary'; + +SELECT 'Get hierarchy'; +SELECT dictGetHierarchy('01778_db.hierarchy_cache_dictionary', number) FROM system.numbers LIMIT 6; +SELECT 'Get is in hierarchy'; +SELECT dictIsIn('01778_db.hierarchy_cache_dictionary', number, number) FROM system.numbers LIMIT 6; + +DROP DICTIONARY 01778_db.hierarchy_cache_dictionary; + +CREATE DICTIONARY 01778_db.hierarchy_direct_dictionary +( + id UInt64, + parent_id UInt64 HIERARCHICAL +) +PRIMARY KEY id +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() USER 'default' TABLE 'hierarchy_source_table' DB 
'01778_db')) +LAYOUT(DIRECT()); + +SELECT 'Direct dictionary'; + +SELECT 'Get hierarchy'; +SELECT dictGetHierarchy('01778_db.hierarchy_direct_dictionary', number) FROM system.numbers LIMIT 6; +SELECT 'Get is in hierarchy'; +SELECT dictIsIn('01778_db.hierarchy_direct_dictionary', number, number) FROM system.numbers LIMIT 6; + +DROP DICTIONARY 01778_db.hierarchy_direct_dictionary; + +DROP TABLE 01778_db.hierarchy_source_table; +DROP DATABASE 01778_db; diff --git a/tests/queries/0_stateless/01778_mmap_cache_infra.reference b/tests/queries/0_stateless/01778_mmap_cache_infra.reference new file mode 100644 index 00000000000..0e82b277bc1 --- /dev/null +++ b/tests/queries/0_stateless/01778_mmap_cache_infra.reference @@ -0,0 +1,6 @@ +CreatedReadBufferMMap +CreatedReadBufferMMapFailed +MMappedFileCacheHits +MMappedFileCacheMisses +MMappedFileBytes +MMappedFiles diff --git a/tests/queries/0_stateless/01778_mmap_cache_infra.sql b/tests/queries/0_stateless/01778_mmap_cache_infra.sql new file mode 100644 index 00000000000..29a84c5507b --- /dev/null +++ b/tests/queries/0_stateless/01778_mmap_cache_infra.sql @@ -0,0 +1,7 @@ +-- We check the existence of queries and metrics and don't check the results (a smoke test). + +SYSTEM DROP MMAP CACHE; + +SET system_events_show_zero_values = 1; +SELECT event FROM system.events WHERE event LIKE '%MMap%' ORDER BY event; +SELECT metric FROM system.metrics WHERE metric LIKE '%MMap%' ORDER BY metric; diff --git a/tests/queries/0_stateless/01778_test_LowCardinality_FixedString_pk.reference b/tests/queries/0_stateless/01778_test_LowCardinality_FixedString_pk.reference new file mode 100644 index 00000000000..a134ce52c11 --- /dev/null +++ b/tests/queries/0_stateless/01778_test_LowCardinality_FixedString_pk.reference @@ -0,0 +1,3 @@ +100 +100 +100 diff --git a/tests/queries/0_stateless/01778_test_LowCardinality_FixedString_pk.sql b/tests/queries/0_stateless/01778_test_LowCardinality_FixedString_pk.sql new file mode 100644 index 00000000000..1a0a1d35f76 --- /dev/null +++ b/tests/queries/0_stateless/01778_test_LowCardinality_FixedString_pk.sql @@ -0,0 +1,21 @@ +DROP TABLE IF EXISTS test_01778; + +CREATE TABLE test_01778 +( + `key` LowCardinality(FixedString(3)), + `d` date +) +ENGINE = MergeTree(d, key, 8192); + + +INSERT INTO test_01778 SELECT toString(intDiv(number,8000)), today() FROM numbers(100000); +INSERT INTO test_01778 SELECT toString('xxx'), today() FROM numbers(100); + +SELECT count() FROM test_01778 WHERE key = 'xxx'; + +SELECT count() FROM test_01778 WHERE key = toFixedString('xxx', 3); + +SELECT count() FROM test_01778 WHERE toString(key) = 'xxx'; + +DROP TABLE test_01778; + diff --git a/tests/queries/0_stateless/01778_where_with_column_name.reference b/tests/queries/0_stateless/01778_where_with_column_name.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01778_where_with_column_name.sql b/tests/queries/0_stateless/01778_where_with_column_name.sql new file mode 100644 index 00000000000..effde87f018 --- /dev/null +++ b/tests/queries/0_stateless/01778_where_with_column_name.sql @@ -0,0 +1,5 @@ +DROP TABLE IF EXISTS ttt01778; +CREATE TABLE ttt01778 (`1` String, `2` INT) ENGINE = MergeTree() ORDER BY tuple(); +INSERT INTO ttt01778 values('1',1),('2',2),('3',3); +select * from ttt01778 where 1=2; -- no server error +DROP TABLE ttt01778; diff --git a/tests/queries/0_stateless/01779_quantile_deterministic_msan.reference b/tests/queries/0_stateless/01779_quantile_deterministic_msan.reference new file mode 100644 index 
00000000000..b432de6ddb3 --- /dev/null +++ b/tests/queries/0_stateless/01779_quantile_deterministic_msan.reference @@ -0,0 +1 @@ +11447494982455782708 diff --git a/tests/queries/0_stateless/01779_quantile_deterministic_msan.sql b/tests/queries/0_stateless/01779_quantile_deterministic_msan.sql new file mode 100644 index 00000000000..ef4234da306 --- /dev/null +++ b/tests/queries/0_stateless/01779_quantile_deterministic_msan.sql @@ -0,0 +1 @@ +SELECT cityHash64(toString(quantileDeterministicState(number, sipHash64(number)))) FROM numbers(8193); diff --git a/tests/queries/0_stateless/01780_clickhouse_dictionary_source_loop.reference b/tests/queries/0_stateless/01780_clickhouse_dictionary_source_loop.reference new file mode 100644 index 00000000000..0cfb83aa2f2 --- /dev/null +++ b/tests/queries/0_stateless/01780_clickhouse_dictionary_source_loop.reference @@ -0,0 +1,3 @@ +1 1 +2 2 +3 3 diff --git a/tests/queries/0_stateless/01780_clickhouse_dictionary_source_loop.sql b/tests/queries/0_stateless/01780_clickhouse_dictionary_source_loop.sql new file mode 100644 index 00000000000..5673e646a47 --- /dev/null +++ b/tests/queries/0_stateless/01780_clickhouse_dictionary_source_loop.sql @@ -0,0 +1,53 @@ +DROP DATABASE IF EXISTS 01780_db; +CREATE DATABASE 01780_db; + +DROP DICTIONARY IF EXISTS dict1; +CREATE DICTIONARY dict1 +( + id UInt64, + value String +) +PRIMARY KEY id +SOURCE(CLICKHOUSE(HOST 'localhost' PORT 9000 TABLE 'dict1')) +LAYOUT(DIRECT()); + +SELECT * FROM dict1; --{serverError 36} + +DROP DICTIONARY dict1; + +DROP DICTIONARY IF EXISTS dict2; +CREATE DICTIONARY 01780_db.dict2 +( + id UInt64, + value String +) +PRIMARY KEY id +SOURCE(CLICKHOUSE(HOST 'localhost' PORT 9000 DATABASE '01780_db' TABLE 'dict2')) +LAYOUT(DIRECT()); + +SELECT * FROM 01780_db.dict2; --{serverError 36} +DROP DICTIONARY 01780_db.dict2; + +DROP TABLE IF EXISTS 01780_db.dict3_source; +CREATE TABLE 01780_db.dict3_source +( + id UInt64, + value String +) ENGINE = TinyLog; + +INSERT INTO 01780_db.dict3_source VALUES (1, '1'), (2, '2'), (3, '3'); + +CREATE DICTIONARY 01780_db.dict3 +( + id UInt64, + value String +) +PRIMARY KEY id +SOURCE(CLICKHOUSE(HOST 'localhost' PORT 9000 TABLE 'dict3_source' DATABASE '01780_db')) +LAYOUT(DIRECT()); + +SELECT * FROM 01780_db.dict3; + +DROP DICTIONARY 01780_db.dict3; + +DROP DATABASE 01780_db; diff --git a/tests/queries/0_stateless/01780_dict_get_or_null.reference b/tests/queries/0_stateless/01780_dict_get_or_null.reference new file mode 100644 index 00000000000..4baca9ec91b --- /dev/null +++ b/tests/queries/0_stateless/01780_dict_get_or_null.reference @@ -0,0 +1,18 @@ +Simple key dictionary dictGetOrNull +0 0 \N \N (NULL,NULL) +1 1 First First ('First','First') +2 1 Second \N ('Second',NULL) +3 1 Third Third ('Third','Third') +4 0 \N \N (NULL,NULL) +Complex key dictionary dictGetOrNull +(0,'key') 0 \N \N (NULL,NULL) +(1,'key') 1 First First ('First','First') +(2,'key') 1 Second \N ('Second',NULL) +(3,'key') 1 Third Third ('Third','Third') +(4,'key') 0 \N \N (NULL,NULL) +Range key dictionary dictGetOrNull +(0,'2019-05-20') 0 \N \N (NULL,NULL) +(1,'2019-05-20') 1 First First ('First','First') +(2,'2019-05-20') 1 Second \N ('Second',NULL) +(3,'2019-05-20') 1 Third Third ('Third','Third') +(4,'2019-05-20') 0 \N \N (NULL,NULL) diff --git a/tests/queries/0_stateless/01780_dict_get_or_null.sql b/tests/queries/0_stateless/01780_dict_get_or_null.sql new file mode 100644 index 00000000000..f13bcf57d27 --- /dev/null +++ b/tests/queries/0_stateless/01780_dict_get_or_null.sql @@ -0,0 +1,116 @@ +DROP 
TABLE IF EXISTS simple_key_dictionary_source_table; +CREATE TABLE simple_key_dictionary_source_table +( + id UInt64, + value String, + value_nullable Nullable(String) +) ENGINE = TinyLog; + +INSERT INTO simple_key_dictionary_source_table VALUES (1, 'First', 'First'); +INSERT INTO simple_key_dictionary_source_table VALUES (2, 'Second', NULL); +INSERT INTO simple_key_dictionary_source_table VALUES (3, 'Third', 'Third'); + +DROP DICTIONARY IF EXISTS simple_key_dictionary; +CREATE DICTIONARY simple_key_dictionary +( + id UInt64, + value String, + value_nullable Nullable(String) +) +PRIMARY KEY id +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() TABLE 'simple_key_dictionary_source_table')) +LAYOUT(DIRECT()); + +SELECT 'Simple key dictionary dictGetOrNull'; + +SELECT + number, + dictHas('simple_key_dictionary', number), + dictGetOrNull('simple_key_dictionary', 'value', number), + dictGetOrNull('simple_key_dictionary', 'value_nullable', number), + dictGetOrNull('simple_key_dictionary', ('value', 'value_nullable'), number) +FROM system.numbers LIMIT 5; + +DROP DICTIONARY simple_key_dictionary; +DROP TABLE simple_key_dictionary_source_table; + +DROP TABLE IF EXISTS complex_key_dictionary_source_table; +CREATE TABLE complex_key_dictionary_source_table +( + id UInt64, + id_key String, + value String, + value_nullable Nullable(String) +) ENGINE = TinyLog; + +INSERT INTO complex_key_dictionary_source_table VALUES (1, 'key', 'First', 'First'); +INSERT INTO complex_key_dictionary_source_table VALUES (2, 'key', 'Second', NULL); +INSERT INTO complex_key_dictionary_source_table VALUES (3, 'key', 'Third', 'Third'); + +DROP DICTIONARY IF EXISTS complex_key_dictionary; +CREATE DICTIONARY complex_key_dictionary +( + id UInt64, + id_key String, + value String, + value_nullable Nullable(String) +) +PRIMARY KEY id, id_key +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() TABLE 'complex_key_dictionary_source_table')) +LAYOUT(COMPLEX_KEY_DIRECT()); + +SELECT 'Complex key dictionary dictGetOrNull'; + +SELECT + (number, 'key'), + dictHas('complex_key_dictionary', (number, 'key')), + dictGetOrNull('complex_key_dictionary', 'value', (number, 'key')), + dictGetOrNull('complex_key_dictionary', 'value_nullable', (number, 'key')), + dictGetOrNull('complex_key_dictionary', ('value', 'value_nullable'), (number, 'key')) +FROM system.numbers LIMIT 5; + +DROP DICTIONARY complex_key_dictionary; +DROP TABLE complex_key_dictionary_source_table; + +DROP TABLE IF EXISTS range_key_dictionary_source_table; +CREATE TABLE range_key_dictionary_source_table +( + key UInt64, + start_date Date, + end_date Date, + value String, + value_nullable Nullable(String) +) +ENGINE = TinyLog(); + +INSERT INTO range_key_dictionary_source_table VALUES(1, toDate('2019-05-20'), toDate('2019-05-20'), 'First', 'First'); +INSERT INTO range_key_dictionary_source_table VALUES(2, toDate('2019-05-20'), toDate('2019-05-20'), 'Second', NULL); +INSERT INTO range_key_dictionary_source_table VALUES(3, toDate('2019-05-20'), toDate('2019-05-20'), 'Third', 'Third'); + +DROP DICTIONARY IF EXISTS range_key_dictionary; +CREATE DICTIONARY range_key_dictionary +( + key UInt64, + start_date Date, + end_date Date, + value String, + value_nullable Nullable(String) +) +PRIMARY KEY key +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() TABLE 'range_key_dictionary_source_table')) +LIFETIME(MIN 1 MAX 1000) +LAYOUT(RANGE_HASHED()) +RANGE(MIN start_date MAX end_date); + +SELECT 'Range key dictionary dictGetOrNull'; + +SELECT + (number, toDate('2019-05-20')), + 
dictHas('range_key_dictionary', number, toDate('2019-05-20')), + dictGetOrNull('range_key_dictionary', 'value', number, toDate('2019-05-20')), + dictGetOrNull('range_key_dictionary', 'value_nullable', number, toDate('2019-05-20')), + dictGetOrNull('range_key_dictionary', ('value', 'value_nullable'), number, toDate('2019-05-20')) +FROM system.numbers LIMIT 5; + +DROP DICTIONARY range_key_dictionary; +DROP TABLE range_key_dictionary_source_table; diff --git a/tests/queries/0_stateless/01780_range_msan.reference b/tests/queries/0_stateless/01780_range_msan.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01780_range_msan.sql b/tests/queries/0_stateless/01780_range_msan.sql new file mode 100644 index 00000000000..dd0a35c3eea --- /dev/null +++ b/tests/queries/0_stateless/01780_range_msan.sql @@ -0,0 +1 @@ +SELECT range(toUInt256(1), 1); -- { serverError 44 } diff --git a/tests/queries/0_stateless/01781_map_op_ubsan.reference b/tests/queries/0_stateless/01781_map_op_ubsan.reference new file mode 100644 index 00000000000..030c8bb5ab4 --- /dev/null +++ b/tests/queries/0_stateless/01781_map_op_ubsan.reference @@ -0,0 +1 @@ +\N (([0,10,255],[-9223372036854775808,1025,0]),[255,NULL]) \N ([0,255],3,[-2]) [NULL] diff --git a/tests/queries/0_stateless/01781_map_op_ubsan.sql b/tests/queries/0_stateless/01781_map_op_ubsan.sql new file mode 100644 index 00000000000..adbb5d5a8d7 --- /dev/null +++ b/tests/queries/0_stateless/01781_map_op_ubsan.sql @@ -0,0 +1 @@ +SELECT toInt32([toUInt8(NULL)], NULL), (mapSubtract(([toUInt8(256), 10], [toInt32(-9223372036854775808), 1025]), ([toUInt8(65535), 0], [toInt16(0.), -9223372036854775808])), [toUInt8(-1), toInt32(([toUInt8(9223372036854775807), -1], [toInt32(255), 65536]), NULL)]), toUInt8(([2, 9223372036854775807], [toFloat32('0.0000065536'), 2]), 9223372036854775807, NULL), ([toUInt8(1024), 255], toUInt8(3), [toInt16(-2)]), [NULL]; diff --git a/tests/queries/0_stateless/01781_merge_tree_deduplication.reference b/tests/queries/0_stateless/01781_merge_tree_deduplication.reference new file mode 100644 index 00000000000..cb5a3f1ff52 --- /dev/null +++ b/tests/queries/0_stateless/01781_merge_tree_deduplication.reference @@ -0,0 +1,85 @@ +1 1 +1 1 +=============== +1 1 +1 1 +2 2 +3 3 +4 4 +=============== +1 1 +1 1 +2 2 +3 3 +4 4 +5 5 +6 6 +7 7 +=============== +1 1 +1 1 +2 2 +3 3 +4 4 +5 5 +6 6 +7 7 +8 8 +9 9 +10 10 +11 11 +12 12 +=============== +10 10 +12 12 +=============== +1 1 +1 1 +2 2 +3 3 +4 4 +5 5 +6 6 +8 8 +9 9 +11 11 +12 12 +=============== +88 11 11 +77 11 11 +77 12 12 +=============== +1 1 33 +1 1 33 +2 2 33 +3 3 33 +=============== +1 1 33 +1 1 33 +1 1 33 +1 1 33 +2 2 33 +3 3 33 +=============== +1 1 33 +1 1 33 +1 1 33 +1 1 33 +1 1 33 +2 2 33 +3 3 33 +=============== +1 1 44 +2 2 44 +3 3 44 +4 4 44 +=============== +1 1 +1 1 +=============== +1 1 +1 1 +1 1 +2 2 +3 3 +4 4 diff --git a/tests/queries/0_stateless/01781_merge_tree_deduplication.sql b/tests/queries/0_stateless/01781_merge_tree_deduplication.sql new file mode 100644 index 00000000000..236f7b35b80 --- /dev/null +++ b/tests/queries/0_stateless/01781_merge_tree_deduplication.sql @@ -0,0 +1,187 @@ +DROP TABLE IF EXISTS merge_tree_deduplication; + +CREATE TABLE merge_tree_deduplication +( + key UInt64, + value String, + part UInt8 DEFAULT 77 +) +ENGINE=MergeTree() +ORDER BY key +PARTITION BY part +SETTINGS non_replicated_deduplication_window=3; + +SYSTEM STOP MERGES merge_tree_deduplication; + +INSERT INTO merge_tree_deduplication (key, value) VALUES 
(1, '1'); + +SELECT key, value FROM merge_tree_deduplication; + +INSERT INTO merge_tree_deduplication (key, value) VALUES (1, '1'); + +SELECT key, value FROM merge_tree_deduplication; + +SELECT '==============='; + +INSERT INTO merge_tree_deduplication (key, value) VALUES (2, '2'); + +INSERT INTO merge_tree_deduplication (key, value) VALUES (3, '3'); + +INSERT INTO merge_tree_deduplication (key, value) VALUES (4, '4'); + +INSERT INTO merge_tree_deduplication (key, value) VALUES (1, '1'); + +SELECT key, value FROM merge_tree_deduplication ORDER BY key; + +SELECT '==============='; + +INSERT INTO merge_tree_deduplication (key, value) VALUES (5, '5'); + +INSERT INTO merge_tree_deduplication (key, value) VALUES (6, '6'); + +INSERT INTO merge_tree_deduplication (key, value) VALUES (7, '7'); + +INSERT INTO merge_tree_deduplication (key, value) VALUES (5, '5'); + +SELECT key, value FROM merge_tree_deduplication ORDER BY key; + +SELECT '==============='; + +INSERT INTO merge_tree_deduplication (key, value) VALUES (8, '8'); + +INSERT INTO merge_tree_deduplication (key, value) VALUES (9, '9'); + +INSERT INTO merge_tree_deduplication (key, value) VALUES (10, '10'); + +INSERT INTO merge_tree_deduplication (key, value) VALUES (11, '11'); + +INSERT INTO merge_tree_deduplication (key, value) VALUES (12, '12'); + +INSERT INTO merge_tree_deduplication (key, value) VALUES (10, '10'); +INSERT INTO merge_tree_deduplication (key, value) VALUES (11, '11'); +INSERT INTO merge_tree_deduplication (key, value) VALUES (12, '12'); + +SELECT key, value FROM merge_tree_deduplication ORDER BY key; + +SELECT '==============='; + +ALTER TABLE merge_tree_deduplication DROP PART '77_9_9_0'; -- some old part + +INSERT INTO merge_tree_deduplication (key, value) VALUES (10, '10'); + +SELECT key, value FROM merge_tree_deduplication WHERE key = 10; + +ALTER TABLE merge_tree_deduplication DROP PART '77_13_13_0'; -- fresh part + +INSERT INTO merge_tree_deduplication (key, value) VALUES (12, '12'); + +SELECT key, value FROM merge_tree_deduplication WHERE key = 12; + +DETACH TABLE merge_tree_deduplication; +ATTACH TABLE merge_tree_deduplication; + +OPTIMIZE TABLE merge_tree_deduplication FINAL; + +INSERT INTO merge_tree_deduplication (key, value) VALUES (11, '11'); -- deduplicated +INSERT INTO merge_tree_deduplication (key, value) VALUES (12, '12'); -- deduplicated + +SELECT '==============='; + +SELECT key, value FROM merge_tree_deduplication ORDER BY key; + +SELECT '==============='; + +INSERT INTO merge_tree_deduplication (key, value, part) VALUES (11, '11', 88); + +ALTER TABLE merge_tree_deduplication DROP PARTITION 77; + +INSERT INTO merge_tree_deduplication (key, value, part) VALUES (11, '11', 88); --deduplicated + +INSERT INTO merge_tree_deduplication (key, value) VALUES (11, '11'); -- not deduplicated +INSERT INTO merge_tree_deduplication (key, value) VALUES (12, '12'); -- not deduplicated + +SELECT part, key, value FROM merge_tree_deduplication ORDER BY key; + +-- Alters.... 
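+-- (Roughly speaking, non_replicated_deduplication_window keeps the block hashes of the last N inserts and silently drops a new insert whose block hash is already in the window.
+-- The ALTERs below shrink the window to 2, disable deduplication entirely with 0, and restore it to 3, using DETACH/ATTACH round-trips to check that the persisted hash log follows the setting.)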
+ +ALTER TABLE merge_tree_deduplication MODIFY SETTING non_replicated_deduplication_window = 2; + +SELECT '==============='; + +INSERT INTO merge_tree_deduplication (key, value, part) VALUES (1, '1', 33); +INSERT INTO merge_tree_deduplication (key, value, part) VALUES (2, '2', 33); +INSERT INTO merge_tree_deduplication (key, value, part) VALUES (3, '3', 33); +INSERT INTO merge_tree_deduplication (key, value, part) VALUES (1, '1', 33); + +SELECT * FROM merge_tree_deduplication WHERE part = 33 ORDER BY key; + +SELECT '==============='; + +ALTER TABLE merge_tree_deduplication MODIFY SETTING non_replicated_deduplication_window = 0; + +INSERT INTO merge_tree_deduplication (key, value, part) VALUES (1, '1', 33); +INSERT INTO merge_tree_deduplication (key, value, part) VALUES (1, '1', 33); + +DETACH TABLE merge_tree_deduplication; +ATTACH TABLE merge_tree_deduplication; + +SELECT * FROM merge_tree_deduplication WHERE part = 33 ORDER BY key; + +SELECT '==============='; + +ALTER TABLE merge_tree_deduplication MODIFY SETTING non_replicated_deduplication_window = 3; + +INSERT INTO merge_tree_deduplication (key, value, part) VALUES (1, '1', 33); + +SELECT * FROM merge_tree_deduplication WHERE part = 33 ORDER BY key; + +SELECT '==============='; + +INSERT INTO merge_tree_deduplication (key, value, part) VALUES (1, '1', 44); +INSERT INTO merge_tree_deduplication (key, value, part) VALUES (2, '2', 44); +INSERT INTO merge_tree_deduplication (key, value, part) VALUES (3, '3', 44); +INSERT INTO merge_tree_deduplication (key, value, part) VALUES (1, '1', 44); + +INSERT INTO merge_tree_deduplication (key, value, part) VALUES (4, '4', 44); + +DETACH TABLE merge_tree_deduplication; +ATTACH TABLE merge_tree_deduplication; + +SELECT * FROM merge_tree_deduplication WHERE part = 44 ORDER BY key; + +DROP TABLE IF EXISTS merge_tree_deduplication; + +SELECT '==============='; + +DROP TABLE IF EXISTS merge_tree_no_deduplication; + +CREATE TABLE merge_tree_no_deduplication +( + key UInt64, + value String +) +ENGINE=MergeTree() +ORDER BY key; + +INSERT INTO merge_tree_no_deduplication (key, value) VALUES (1, '1'); +INSERT INTO merge_tree_no_deduplication (key, value) VALUES (1, '1'); + +SELECT * FROM merge_tree_no_deduplication ORDER BY key; + +SELECT '==============='; + +ALTER TABLE merge_tree_no_deduplication MODIFY SETTING non_replicated_deduplication_window = 3; + +INSERT INTO merge_tree_no_deduplication (key, value) VALUES (1, '1'); +INSERT INTO merge_tree_no_deduplication (key, value) VALUES (2, '2'); +INSERT INTO merge_tree_no_deduplication (key, value) VALUES (3, '3'); + +DETACH TABLE merge_tree_no_deduplication; +ATTACH TABLE merge_tree_no_deduplication; + +INSERT INTO merge_tree_no_deduplication (key, value) VALUES (1, '1'); +INSERT INTO merge_tree_no_deduplication (key, value) VALUES (4, '4'); + +SELECT * FROM merge_tree_no_deduplication ORDER BY key; + +DROP TABLE IF EXISTS merge_tree_no_deduplication; diff --git a/tests/queries/0_stateless/01781_token_extractor_buffer_overflow.reference b/tests/queries/0_stateless/01781_token_extractor_buffer_overflow.reference new file mode 100644 index 00000000000..aa47d0d46d4 --- /dev/null +++ b/tests/queries/0_stateless/01781_token_extractor_buffer_overflow.reference @@ -0,0 +1,2 @@ +0 +0 diff --git a/tests/queries/0_stateless/01781_token_extractor_buffer_overflow.sql b/tests/queries/0_stateless/01781_token_extractor_buffer_overflow.sql new file mode 100644 index 00000000000..4cc216955b3 --- /dev/null +++ 
b/tests/queries/0_stateless/01781_token_extractor_buffer_overflow.sql @@ -0,0 +1,10 @@ +SET max_block_size = 10, min_insert_block_size_rows = 0, min_insert_block_size_bytes = 0, max_threads = 20; + +DROP TABLE IF EXISTS bloom_filter; +CREATE TABLE bloom_filter (`id` UInt64, `s` String, INDEX tok_bf (s, lower(s)) TYPE tokenbf_v1(512, 3, 0) GRANULARITY 1) ENGINE = MergeTree ORDER BY id SETTINGS index_granularity = 8; +INSERT INTO bloom_filter SELECT number, 'yyy,uuu' FROM numbers(1024); + +SELECT max(id) FROM bloom_filter WHERE hasToken(s, 'abc'); +SELECT max(id) FROM bloom_filter WHERE hasToken(s, 'abcabcabcabcabcabcabcab\0'); + +DROP TABLE bloom_filter; diff --git a/tests/queries/0_stateless/01782_field_oom.reference b/tests/queries/0_stateless/01782_field_oom.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01782_field_oom.sql b/tests/queries/0_stateless/01782_field_oom.sql new file mode 100644 index 00000000000..2609c589d94 --- /dev/null +++ b/tests/queries/0_stateless/01782_field_oom.sql @@ -0,0 +1,2 @@ +SET max_memory_usage = '500M'; +SELECT sumMap([number], [number]) FROM system.numbers_mt; -- { serverError 241 } diff --git a/tests/queries/0_stateless/01783_http_chunk_size.reference b/tests/queries/0_stateless/01783_http_chunk_size.reference new file mode 100644 index 00000000000..e454a00607c --- /dev/null +++ b/tests/queries/0_stateless/01783_http_chunk_size.reference @@ -0,0 +1 @@ +1234567890 1234567890 1234567890 1234567890 diff --git a/tests/queries/0_stateless/01783_http_chunk_size.sh b/tests/queries/0_stateless/01783_http_chunk_size.sh new file mode 100755 index 00000000000..66ac4dfa975 --- /dev/null +++ b/tests/queries/0_stateless/01783_http_chunk_size.sh @@ -0,0 +1,17 @@ +#!/usr/bin/env bash + +CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) +# shellcheck source=../shell_config.sh +. "$CURDIR"/../shell_config.sh + +URL="${CLICKHOUSE_URL}&session_id=id_${CLICKHOUSE_DATABASE}" + +echo "DROP TABLE IF EXISTS table" | ${CLICKHOUSE_CURL} -sSg "${URL}" -d @- +echo "CREATE TABLE table (a String) ENGINE Memory()" | ${CLICKHOUSE_CURL} -sSg "${URL}" -d @- + +# NOTE: we assume that curl sends everything in a single chunk; there is no option to force the chunk size. 
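+# The query prefix "INSERT INTO TABLE table FORMAT TabSeparated " is exactly 44 bytes, so max_query_size=44 below puts the parser/data boundary right at the start of the TabSeparated payload, which is presumably the interesting case for chunked decoding.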
+echo "SET max_query_size=44" | ${CLICKHOUSE_CURL} -sSg "${URL}" -d @- +echo -ne "INSERT INTO TABLE table FORMAT TabSeparated 1234567890 1234567890 1234567890 1234567890\n" | ${CLICKHOUSE_CURL} -H "Transfer-Encoding: chunked" -sS "${URL}" --data-binary @- + +echo "SELECT * from table" | ${CLICKHOUSE_CURL} -sSg "${URL}" -d @- +echo "DROP TABLE table" | ${CLICKHOUSE_CURL} -sSg "${URL}" -d @- diff --git a/tests/queries/0_stateless/01783_merge_engine_join_key_condition.reference b/tests/queries/0_stateless/01783_merge_engine_join_key_condition.reference new file mode 100644 index 00000000000..4068a6e00dd --- /dev/null +++ b/tests/queries/0_stateless/01783_merge_engine_join_key_condition.reference @@ -0,0 +1,5 @@ +3 3 +1 4 +1 4 +1 4 +1 4 diff --git a/tests/queries/0_stateless/01783_merge_engine_join_key_condition.sql b/tests/queries/0_stateless/01783_merge_engine_join_key_condition.sql new file mode 100644 index 00000000000..372c1bd3572 --- /dev/null +++ b/tests/queries/0_stateless/01783_merge_engine_join_key_condition.sql @@ -0,0 +1,23 @@ +DROP TABLE IF EXISTS foo; +DROP TABLE IF EXISTS foo_merge; +DROP TABLE IF EXISTS t2; + +CREATE TABLE foo(Id Int32, Val Int32) Engine=MergeTree PARTITION BY Val ORDER BY Id; +INSERT INTO foo SELECT number, number%5 FROM numbers(100000); + +CREATE TABLE foo_merge as foo ENGINE=Merge(currentDatabase(), '^foo'); + +CREATE TABLE t2 (Id Int32, Val Int32, X Int32) Engine=Memory; +INSERT INTO t2 values (4, 3, 4); + +SET force_primary_key = 1, force_index_by_date=1; + +SELECT * FROM foo_merge WHERE Val = 3 AND Id = 3; +SELECT count(), X FROM foo_merge JOIN t2 USING Val WHERE Val = 3 AND Id = 3 AND t2.X == 4 GROUP BY X; +SELECT count(), X FROM foo_merge JOIN t2 USING Val WHERE Val = 3 AND (Id = 3 AND t2.X == 4) GROUP BY X; +SELECT count(), X FROM foo_merge JOIN t2 USING Val WHERE Val = 3 AND Id = 3 GROUP BY X; +SELECT count(), X FROM (SELECT * FROM foo_merge) f JOIN t2 USING Val WHERE Val = 3 AND Id = 3 GROUP BY X; + +DROP TABLE IF EXISTS foo; +DROP TABLE IF EXISTS foo_merge; +DROP TABLE IF EXISTS t2; diff --git a/tests/queries/0_stateless/01783_parallel_formatting_memory.reference b/tests/queries/0_stateless/01783_parallel_formatting_memory.reference new file mode 100644 index 00000000000..c5cdc5cf0bb --- /dev/null +++ b/tests/queries/0_stateless/01783_parallel_formatting_memory.reference @@ -0,0 +1 @@ +Code: 241 diff --git a/tests/queries/0_stateless/01783_parallel_formatting_memory.sh b/tests/queries/0_stateless/01783_parallel_formatting_memory.sh new file mode 100755 index 00000000000..0b8cb0bc6be --- /dev/null +++ b/tests/queries/0_stateless/01783_parallel_formatting_memory.sh @@ -0,0 +1,7 @@ +#!/usr/bin/env bash + +CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) +# shellcheck source=../shell_config.sh +. 
"$CUR_DIR"/../shell_config.sh + +$CLICKHOUSE_CURL -sS "$CLICKHOUSE_URL&max_memory_usage=1G" -d "SELECT range(65535) FROM system.one ARRAY JOIN range(65536) AS number" | grep -oF 'Code: 241' diff --git a/tests/queries/0_stateless/01784_parallel_formatting_memory.reference b/tests/queries/0_stateless/01784_parallel_formatting_memory.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01784_parallel_formatting_memory.sql b/tests/queries/0_stateless/01784_parallel_formatting_memory.sql new file mode 100644 index 00000000000..35dc063f895 --- /dev/null +++ b/tests/queries/0_stateless/01784_parallel_formatting_memory.sql @@ -0,0 +1,2 @@ +SET max_memory_usage = '1G'; +SELECT range(65535) FROM system.one ARRAY JOIN range(65536) AS number; -- { serverError 241 } diff --git a/tests/queries/0_stateless/01785_dictionary_element_count.reference b/tests/queries/0_stateless/01785_dictionary_element_count.reference new file mode 100644 index 00000000000..4b79788b4d4 --- /dev/null +++ b/tests/queries/0_stateless/01785_dictionary_element_count.reference @@ -0,0 +1,8 @@ +1 First +simple_key_flat_dictionary 01785_db 1 +1 First +simple_key_hashed_dictionary 01785_db 1 +1 First +simple_key_cache_dictionary 01785_db 1 +1 FirstKey First +complex_key_hashed_dictionary 01785_db 1 diff --git a/tests/queries/0_stateless/01785_dictionary_element_count.sql b/tests/queries/0_stateless/01785_dictionary_element_count.sql new file mode 100644 index 00000000000..6db65152a56 --- /dev/null +++ b/tests/queries/0_stateless/01785_dictionary_element_count.sql @@ -0,0 +1,91 @@ +DROP DATABASE IF EXISTS 01785_db; +CREATE DATABASE 01785_db; + +DROP TABLE IF EXISTS 01785_db.simple_key_source_table; +CREATE TABLE 01785_db.simple_key_source_table +( + id UInt64, + value String +) ENGINE = TinyLog(); + +INSERT INTO 01785_db.simple_key_source_table VALUES (1, 'First'); +INSERT INTO 01785_db.simple_key_source_table VALUES (1, 'First'); + +DROP DICTIONARY IF EXISTS 01785_db.simple_key_flat_dictionary; +CREATE DICTIONARY 01785_db.simple_key_flat_dictionary +( + id UInt64, + value String +) +PRIMARY KEY id +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() DB '01785_db' TABLE 'simple_key_source_table')) +LAYOUT(FLAT()) +LIFETIME(MIN 0 MAX 1000); + +SELECT * FROM 01785_db.simple_key_flat_dictionary; +SELECT name, database, element_count FROM system.dictionaries WHERE database = '01785_db' AND name = 'simple_key_flat_dictionary'; + +DROP DICTIONARY 01785_db.simple_key_flat_dictionary; + +CREATE DICTIONARY 01785_db.simple_key_hashed_dictionary +( + id UInt64, + value String +) +PRIMARY KEY id +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() DB '01785_db' TABLE 'simple_key_source_table')) +LAYOUT(HASHED()) +LIFETIME(MIN 0 MAX 1000); + +SELECT * FROM 01785_db.simple_key_hashed_dictionary; +SELECT name, database, element_count FROM system.dictionaries WHERE database = '01785_db' AND name = 'simple_key_hashed_dictionary'; + +DROP DICTIONARY 01785_db.simple_key_hashed_dictionary; + +CREATE DICTIONARY 01785_db.simple_key_cache_dictionary +( + id UInt64, + value String +) +PRIMARY KEY id +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() DB '01785_db' TABLE 'simple_key_source_table')) +LAYOUT(CACHE(SIZE_IN_CELLS 100000)) +LIFETIME(MIN 0 MAX 1000); + +SELECT toUInt64(1) as key, dictGet('01785_db.simple_key_cache_dictionary', 'value', key); +SELECT name, database, element_count FROM system.dictionaries WHERE database = '01785_db' AND name = 'simple_key_cache_dictionary'; + +DROP DICTIONARY 
01785_db.simple_key_cache_dictionary; + +DROP TABLE 01785_db.simple_key_source_table; + +DROP TABLE IF EXISTS 01785_db.complex_key_source_table; +CREATE TABLE 01785_db.complex_key_source_table +( + id UInt64, + id_key String, + value String +) ENGINE = TinyLog(); + +INSERT INTO 01785_db.complex_key_source_table VALUES (1, 'FirstKey', 'First'); +INSERT INTO 01785_db.complex_key_source_table VALUES (1, 'FirstKey', 'First'); + +CREATE DICTIONARY 01785_db.complex_key_hashed_dictionary +( + id UInt64, + id_key String, + value String +) +PRIMARY KEY id, id_key +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() DB '01785_db' TABLE 'complex_key_source_table')) +LAYOUT(COMPLEX_KEY_HASHED()) +LIFETIME(MIN 0 MAX 1000); + +SELECT * FROM 01785_db.complex_key_hashed_dictionary; +SELECT name, database, element_count FROM system.dictionaries WHERE database = '01785_db' AND name = 'complex_key_hashed_dictionary'; + +DROP DICTIONARY 01785_db.complex_key_hashed_dictionary; + +DROP TABLE 01785_db.complex_key_source_table; + +DROP DATABASE 01785_db; diff --git a/tests/queries/0_stateless/01785_parallel_formatting_memory.reference b/tests/queries/0_stateless/01785_parallel_formatting_memory.reference new file mode 100644 index 00000000000..0ec7fc54b01 --- /dev/null +++ b/tests/queries/0_stateless/01785_parallel_formatting_memory.reference @@ -0,0 +1,2 @@ +Code: 241 +Code: 241 diff --git a/tests/queries/0_stateless/01785_parallel_formatting_memory.sh b/tests/queries/0_stateless/01785_parallel_formatting_memory.sh new file mode 100755 index 00000000000..6d081c61fd3 --- /dev/null +++ b/tests/queries/0_stateless/01785_parallel_formatting_memory.sh @@ -0,0 +1,8 @@ +#!/usr/bin/env bash + +CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) +# shellcheck source=../shell_config.sh +. 
"$CUR_DIR"/../shell_config.sh + +$CLICKHOUSE_CLIENT --compress 0 --max_memory_usage 1G --query "SELECT range(65535) FROM system.one ARRAY JOIN range(65536) AS number" 2>&1 | grep -oF 'Code: 241' | head -n1 +$CLICKHOUSE_CLIENT --compress 1 --max_memory_usage 1G --query "SELECT range(65535) FROM system.one ARRAY JOIN range(65536) AS number" 2>&1 | grep -oF 'Code: 241' | head -n1 diff --git a/tests/queries/0_stateless/01785_pmj_lc_bug.reference b/tests/queries/0_stateless/01785_pmj_lc_bug.reference new file mode 100644 index 00000000000..98fb6a68656 --- /dev/null +++ b/tests/queries/0_stateless/01785_pmj_lc_bug.reference @@ -0,0 +1,4 @@ +1 +1 +1 +1 diff --git a/tests/queries/0_stateless/01785_pmj_lc_bug.sql b/tests/queries/0_stateless/01785_pmj_lc_bug.sql new file mode 100644 index 00000000000..722faa9b40d --- /dev/null +++ b/tests/queries/0_stateless/01785_pmj_lc_bug.sql @@ -0,0 +1,14 @@ +SET join_algorithm = 'partial_merge'; +SET max_bytes_in_join = '100'; + +CREATE TABLE foo_lc (n LowCardinality(String)) ENGINE = Memory; +CREATE TABLE foo (n String) ENGINE = Memory; + +INSERT INTO foo SELECT toString(number) AS n FROM system.numbers LIMIT 1025; +INSERT INTO foo_lc SELECT toString(number) AS n FROM system.numbers LIMIT 1025; + +SELECT 1025 == count(n) FROM foo_lc AS t1 ANY LEFT JOIN foo_lc AS t2 ON t1.n == t2.n; +SELECT 1025 == count(n) FROM foo AS t1 ANY LEFT JOIN foo_lc AS t2 ON t1.n == t2.n; +SELECT 1025 == count(n) FROM foo_lc AS t1 ANY LEFT JOIN foo AS t2 ON t1.n == t2.n; + +SELECT 1025 == count(n) FROM foo_lc AS t1 ALL LEFT JOIN foo_lc AS t2 ON t1.n == t2.n; diff --git a/tests/queries/0_stateless/01786_explain_merge_tree.reference b/tests/queries/0_stateless/01786_explain_merge_tree.reference new file mode 100644 index 00000000000..7a0a0af3e05 --- /dev/null +++ b/tests/queries/0_stateless/01786_explain_merge_tree.reference @@ -0,0 +1,109 @@ + ReadFromMergeTree + Indexes: + MinMax + Keys: + y + Condition: (y in [1, +inf)) + Parts: 4/5 + Granules: 11/12 + Partition + Keys: + y + bitAnd(z, 3) + Condition: and((bitAnd(z, 3) not in [1, 1]), and((y in [1, +inf)), (bitAnd(z, 3) not in [1, 1]))) + Parts: 3/4 + Granules: 10/11 + PrimaryKey + Keys: + x + y + Condition: and((x in [11, +inf)), (y in [1, +inf))) + Parts: 2/3 + Granules: 6/10 + Skip + Name: t_minmax + Description: minmax GRANULARITY 2 + Parts: 1/2 + Granules: 2/6 + Skip + Name: t_set + Description: set GRANULARITY 2 + Parts: 1/1 + Granules: 1/2 +----------------- + "Node Type": "ReadFromMergeTree", + "Indexes": [ + { + "Type": "MinMax", + "Keys": ["y"], + "Condition": "(y in [1, +inf))", + "Initial Parts": 5, + "Selected Parts": 4, + "Initial Granules": 12, + "Selected Granules": 11 + }, + { + "Type": "Partition", + "Keys": ["y", "bitAnd(z, 3)"], + "Condition": "and((bitAnd(z, 3) not in [1, 1]), and((y in [1, +inf)), (bitAnd(z, 3) not in [1, 1])))", + "Initial Parts": 4, + "Selected Parts": 3, + "Initial Granules": 11, + "Selected Granules": 10 + }, + { + "Type": "PrimaryKey", + "Keys": ["x", "y"], + "Condition": "and((x in [11, +inf)), (y in [1, +inf)))", + "Initial Parts": 3, + "Selected Parts": 2, + "Initial Granules": 10, + "Selected Granules": 6 + }, + { + "Type": "Skip", + "Name": "t_minmax", + "Description": "minmax GRANULARITY 2", + "Initial Parts": 2, + "Selected Parts": 1, + "Initial Granules": 6, + "Selected Granules": 2 + }, + { + "Type": "Skip", + "Name": "t_set", + "Description": "set GRANULARITY 2", + "Initial Parts": 1, + "Selected Parts": 1, + "Initial Granules": 2, + "Selected Granules": 1 + } + ] + } + ] + } + ] 
+ } + ] + } + } +] +----------------- + ReadFromMergeTree + ReadType: InOrder + Parts: 1 + Granules: 3 +----------------- + ReadFromMergeTree + ReadType: InReverseOrder + Parts: 1 + Granules: 3 + ReadFromMergeTree + Indexes: + PrimaryKey + Keys: + x + plus(x, y) + Condition: or((x in 2-element set), (plus(plus(x, y), 1) in (-inf, 2])) + Parts: 1/1 + Granules: 1/1 diff --git a/tests/queries/0_stateless/01786_explain_merge_tree.sh b/tests/queries/0_stateless/01786_explain_merge_tree.sh new file mode 100755 index 00000000000..6be86f9ce02 --- /dev/null +++ b/tests/queries/0_stateless/01786_explain_merge_tree.sh @@ -0,0 +1,43 @@ +#!/usr/bin/env bash + +CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) +# shellcheck source=../shell_config.sh +. "$CURDIR"/../shell_config.sh + +$CLICKHOUSE_CLIENT -q "drop table if exists test_index" +$CLICKHOUSE_CLIENT -q "drop table if exists idx" + +$CLICKHOUSE_CLIENT -q "create table test_index (x UInt32, y UInt32, z UInt32, t UInt32, index t_minmax t % 20 TYPE minmax GRANULARITY 2, index t_set t % 19 type set(4) granularity 2) engine = MergeTree order by (x, y) partition by (y, bitAnd(z, 3), intDiv(t, 15)) settings index_granularity = 2, min_bytes_for_wide_part = 0" +$CLICKHOUSE_CLIENT -q "insert into test_index select number, number > 3 ? 3 : number, number = 1 ? 1 : 0, number from numbers(20)" + +$CLICKHOUSE_CLIENT -q " + explain indexes = 1 select *, _part from test_index where t % 19 = 16 and y > 0 and bitAnd(z, 3) != 1 and x > 10 and t % 20 > 14; + " | grep -A 100 "ReadFromMergeTree" # | grep -v "Description" + +echo "-----------------" + +$CLICKHOUSE_CLIENT -q " + explain indexes = 1, json = 1 select *, _part from test_index where t % 19 = 16 and y > 0 and bitAnd(z, 3) != 1 and x > 10 and t % 20 > 14 format TSVRaw; + " | grep -A 100 "ReadFromMergeTree" # | grep -v "Description" + +echo "-----------------" + +$CLICKHOUSE_CLIENT -q " + explain actions = 1 select x from test_index where x > 15 order by x; + " | grep -A 100 "ReadFromMergeTree" + +echo "-----------------" + +$CLICKHOUSE_CLIENT -q " + explain actions = 1 select x from test_index where x > 15 order by x desc; + " | grep -A 100 "ReadFromMergeTree" + +$CLICKHOUSE_CLIENT -q "CREATE TABLE idx (x UInt32, y UInt32, z UInt32) ENGINE = MergeTree ORDER BY (x, x + y) settings min_bytes_for_wide_part = 0" +$CLICKHOUSE_CLIENT -q "insert into idx select number, number, number from numbers(10)" + +$CLICKHOUSE_CLIENT -q " + explain indexes = 1 select z from idx where not(x + y + 1 > 2 and x not in (4, 5)) + " | grep -A 100 "ReadFromMergeTree" + +$CLICKHOUSE_CLIENT -q "drop table if exists test_index" +$CLICKHOUSE_CLIENT -q "drop table if exists idx" diff --git a/tests/queries/0_stateless/01786_group_by_pk_many_streams.reference b/tests/queries/0_stateless/01786_group_by_pk_many_streams.reference new file mode 100644 index 00000000000..b8809e746a5 --- /dev/null +++ b/tests/queries/0_stateless/01786_group_by_pk_many_streams.reference @@ -0,0 +1,11 @@ +94950 +84950 +74950 +64950 +54950 +======= +94950 +84950 +74950 +64950 +54950 diff --git a/tests/queries/0_stateless/01786_group_by_pk_many_streams.sql b/tests/queries/0_stateless/01786_group_by_pk_many_streams.sql new file mode 100644 index 00000000000..e555aa4d6e6 --- /dev/null +++ b/tests/queries/0_stateless/01786_group_by_pk_many_streams.sql @@ -0,0 +1,16 @@ +DROP TABLE IF EXISTS group_by_pk; + +CREATE TABLE group_by_pk (k UInt64, v UInt64) +ENGINE = MergeTree ORDER BY k PARTITION BY v % 50; + +INSERT INTO group_by_pk SELECT number / 100, number FROM 
numbers(1000); + +SELECT sum(v) AS s FROM group_by_pk GROUP BY k ORDER BY s DESC LIMIT 5 +SETTINGS optimize_aggregation_in_order = 1, max_block_size = 1; + +SELECT '======='; + +SELECT sum(v) AS s FROM group_by_pk GROUP BY k ORDER BY s DESC LIMIT 5 +SETTINGS optimize_aggregation_in_order = 0, max_block_size = 1; + +DROP TABLE IF EXISTS group_by_pk; diff --git a/tests/queries/0_stateless/01786_nullable_string_tsv_at_eof.reference b/tests/queries/0_stateless/01786_nullable_string_tsv_at_eof.reference new file mode 100644 index 00000000000..35b388bbafb --- /dev/null +++ b/tests/queries/0_stateless/01786_nullable_string_tsv_at_eof.reference @@ -0,0 +1,6 @@ +1 +1 +1 +1 +1 +1 diff --git a/tests/queries/0_stateless/01786_nullable_string_tsv_at_eof.sh b/tests/queries/0_stateless/01786_nullable_string_tsv_at_eof.sh new file mode 100755 index 00000000000..f0a663ae409 --- /dev/null +++ b/tests/queries/0_stateless/01786_nullable_string_tsv_at_eof.sh @@ -0,0 +1,12 @@ +#!/usr/bin/env bash + +CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) +# shellcheck source=../shell_config.sh +. "$CUR_DIR"/../shell_config.sh + +printf '1\t' | $CLICKHOUSE_LOCAL --query="SELECT * FROM table" --structure='a String, b String' +printf '1\t' | $CLICKHOUSE_LOCAL --input_format_null_as_default 0 --query="SELECT * FROM table" --structure='a String, b String' +printf '1\t' | $CLICKHOUSE_LOCAL --input_format_null_as_default 1 --query="SELECT * FROM table" --structure='a String, b String' +printf '1\t' | $CLICKHOUSE_LOCAL --query="SELECT * FROM table" --structure='a String, b Nullable(String)' +printf '1\t' | $CLICKHOUSE_LOCAL --input_format_null_as_default 0 --query="SELECT * FROM table" --structure='a String, b Nullable(String)' +printf '1\t' | $CLICKHOUSE_LOCAL --input_format_null_as_default 1 --query="SELECT * FROM table" --structure='a Nullable(String), b Nullable(String)' diff --git a/tests/queries/0_stateless/01787_arena_assert_column_nothing.reference b/tests/queries/0_stateless/01787_arena_assert_column_nothing.reference new file mode 100644 index 00000000000..d00491fd7e5 --- /dev/null +++ b/tests/queries/0_stateless/01787_arena_assert_column_nothing.reference @@ -0,0 +1 @@ +1 diff --git a/tests/queries/0_stateless/01787_arena_assert_column_nothing.sql b/tests/queries/0_stateless/01787_arena_assert_column_nothing.sql new file mode 100644 index 00000000000..de6374a1bc3 --- /dev/null +++ b/tests/queries/0_stateless/01787_arena_assert_column_nothing.sql @@ -0,0 +1 @@ +SELECT 1 GROUP BY emptyArrayToSingle(arrayFilter(x -> 1, [])); diff --git a/tests/queries/0_stateless/01787_map_remote.reference b/tests/queries/0_stateless/01787_map_remote.reference new file mode 100644 index 00000000000..1c488d4418e --- /dev/null +++ b/tests/queries/0_stateless/01787_map_remote.reference @@ -0,0 +1,2 @@ +{'a':1,'b':2} +{'a':1,'b':2} diff --git a/tests/queries/0_stateless/01787_map_remote.sql b/tests/queries/0_stateless/01787_map_remote.sql new file mode 100644 index 00000000000..854eafa0a50 --- /dev/null +++ b/tests/queries/0_stateless/01787_map_remote.sql @@ -0,0 +1 @@ +SELECT map('a', 1, 'b', 2) FROM remote('127.0.0.{1,2}', system, one); \ No newline at end of file diff --git a/tests/queries/0_stateless/01788_update_nested_type_subcolumn_check.reference b/tests/queries/0_stateless/01788_update_nested_type_subcolumn_check.reference new file mode 100644 index 00000000000..c6f75cab8b7 --- /dev/null +++ b/tests/queries/0_stateless/01788_update_nested_type_subcolumn_check.reference @@ -0,0 +1,21 @@ +1 [100,200] ['aa','bb'] [1,2] +0 [0,1] 
['aa','bb'] [0,0] +1 [100,200] ['aa','bb'] [1,2] +2 [100,200,300] ['a','b','c'] [10,20,30] +3 [3,4] ['aa','bb'] [3,6] +4 [4,5] ['aa','bb'] [4,8] +0 [0,1] ['aa','bb'] [0,0] +1 [100,200] ['aa','bb'] [1,2] +2 [100,200,300] ['a','b','c'] [100,200,300] +3 [3,4] ['aa','bb'] [3,6] +4 [4,5] ['aa','bb'] [4,8] +0 [0,1] ['aa','bb'] [0,0] +1 [100,200] ['aa','bb'] [1,2] +2 [100,200,300] ['a','b','c'] [100,200,300] +3 [68,72] ['aa','bb'] [68,72] +4 [4,5] ['aa','bb'] [4,8] +0 0 aa 0 +1 1 bb 2 +2 2 aa 4 +3 3 aa 6 +4 4 aa 8 diff --git a/tests/queries/0_stateless/01788_update_nested_type_subcolumn_check.sql b/tests/queries/0_stateless/01788_update_nested_type_subcolumn_check.sql new file mode 100644 index 00000000000..8e850b70c24 --- /dev/null +++ b/tests/queries/0_stateless/01788_update_nested_type_subcolumn_check.sql @@ -0,0 +1,70 @@ +DROP TABLE IF EXISTS test_wide_nested; + +CREATE TABLE test_wide_nested +( + `id` Int, + `info.id` Array(Int), + `info.name` Array(String), + `info.age` Array(Int) +) +ENGINE = MergeTree +ORDER BY tuple() +SETTINGS min_bytes_for_wide_part = 0; + +set mutations_sync = 1; + +INSERT INTO test_wide_nested SELECT number, [number,number + 1], ['aa','bb'], [number,number * 2] FROM numbers(5); + +alter table test_wide_nested update `info.id` = [100,200] where id = 1; +select * from test_wide_nested where id = 1 order by id; + +alter table test_wide_nested update `info.id` = [100,200,300], `info.age` = [10,20,30], `info.name` = ['a','b','c'] where id = 2; +select * from test_wide_nested; + +alter table test_wide_nested update `info.id` = [100,200,300], `info.age` = `info.id`, `info.name` = ['a','b','c'] where id = 2; +select * from test_wide_nested; + +alter table test_wide_nested update `info.id` = [100,200], `info.age`=[68,72] where id = 3; +alter table test_wide_nested update `info.id` = `info.age` where id = 3; +select * from test_wide_nested; + +alter table test_wide_nested update `info.id` = [100,200], `info.age` = [10,20,30], `info.name` = ['a','b','c'] where id = 0; -- { serverError 341 } + +-- Recreate the table, because KILL MUTATION is not suitable for parallel test execution. 
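+-- (Illustrative note, not part of the original test: a hypothetical non-parallel-safe alternative would be something like +--   KILL MUTATION WHERE database = currentDatabase() AND table = 'test_wide_nested'; +-- but system.mutations is shared across concurrently running tests, so dropping and recreating the table is the safer way to discard the stuck mutation.) 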
+DROP TABLE test_wide_nested; + +CREATE TABLE test_wide_nested +( + `id` Int, + `info.id` Array(Int), + `info.name` Array(String), + `info.age` Array(Int) +) +ENGINE = MergeTree +ORDER BY tuple() +SETTINGS min_bytes_for_wide_part = 0; + +INSERT INTO test_wide_nested SELECT number, [number,number + 1], ['aa','bb'], [number,number * 2] FROM numbers(5); + +alter table test_wide_nested update `info.id` = [100,200,300], `info.age` = [10,20,30] where id = 1; -- { serverError 341 } + +DROP TABLE test_wide_nested; + +DROP TABLE IF EXISTS test_wide_not_nested; + +CREATE TABLE test_wide_not_nested +( + `id` Int, + `info.id` Int, + `info.name` String, + `info.age` Int +) +ENGINE = MergeTree +ORDER BY tuple() +SETTINGS min_bytes_for_wide_part = 0; + +INSERT INTO test_wide_not_nested SELECT number, number, 'aa', number * 2 FROM numbers(5); +ALTER TABLE test_wide_not_nested UPDATE `info.name` = 'bb' WHERE id = 1; +SELECT * FROM test_wide_not_nested ORDER BY id; + +DROP TABLE test_wide_not_nested; diff --git a/tests/queries/0_stateless/01790_dist_INSERT_block_structure_mismatch_types_and_names.reference b/tests/queries/0_stateless/01790_dist_INSERT_block_structure_mismatch_types_and_names.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01790_dist_INSERT_block_structure_mismatch_types_and_names.sql b/tests/queries/0_stateless/01790_dist_INSERT_block_structure_mismatch_types_and_names.sql new file mode 100644 index 00000000000..e921460ccfc --- /dev/null +++ b/tests/queries/0_stateless/01790_dist_INSERT_block_structure_mismatch_types_and_names.sql @@ -0,0 +1,22 @@ +DROP TABLE IF EXISTS tmp_01781; +DROP TABLE IF EXISTS dist_01781; + +SET prefer_localhost_replica=0; + +CREATE TABLE tmp_01781 (n LowCardinality(String)) ENGINE=Memory; +CREATE TABLE dist_01781 (n LowCardinality(String)) Engine=Distributed(test_cluster_two_shards, currentDatabase(), tmp_01781, cityHash64(n)); + +SET insert_distributed_sync=1; +INSERT INTO dist_01781 VALUES ('1'),('2'); +-- different LowCardinality size +INSERT INTO dist_01781 SELECT * FROM numbers(1000); + +SET insert_distributed_sync=0; +SYSTEM STOP DISTRIBUTED SENDS dist_01781; +INSERT INTO dist_01781 VALUES ('1'),('2'); +-- different LowCardinality size +INSERT INTO dist_01781 SELECT * FROM numbers(1000); +SYSTEM FLUSH DISTRIBUTED dist_01781; + +DROP TABLE tmp_01781; +DROP TABLE dist_01781; diff --git a/tests/queries/0_stateless/01791_dist_INSERT_block_structure_mismatch.reference b/tests/queries/0_stateless/01791_dist_INSERT_block_structure_mismatch.reference new file mode 100644 index 00000000000..3bba1ac23c0 --- /dev/null +++ b/tests/queries/0_stateless/01791_dist_INSERT_block_structure_mismatch.reference @@ -0,0 +1,6 @@ + DistributedBlockOutputStream: Structure does not match (remote: n Int8 Int8(size = 0), local: n UInt64 UInt64(size = 1)), implicit conversion will be done. + DistributedBlockOutputStream: Structure does not match (remote: n Int8 Int8(size = 0), local: n UInt64 UInt64(size = 1)), implicit conversion will be done. 
+1 +1 +2 +2 diff --git a/tests/queries/0_stateless/01791_dist_INSERT_block_structure_mismatch.sh b/tests/queries/0_stateless/01791_dist_INSERT_block_structure_mismatch.sh new file mode 100755 index 00000000000..e989696da03 --- /dev/null +++ b/tests/queries/0_stateless/01791_dist_INSERT_block_structure_mismatch.sh @@ -0,0 +1,30 @@ +#!/usr/bin/env bash + +# NOTE: this is a partial copy of the 01683_dist_INSERT_block_structure_mismatch, +# but this test also checks the log messages + +CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) +# shellcheck source=../shell_config.sh +. "$CUR_DIR"/../shell_config.sh + +$CLICKHOUSE_CLIENT --prefer_localhost_replica=0 -nm -q " + DROP TABLE IF EXISTS tmp_01683; + DROP TABLE IF EXISTS dist_01683; + + CREATE TABLE tmp_01683 (n Int8) ENGINE=Memory; + CREATE TABLE dist_01683 (n UInt64) Engine=Distributed(test_cluster_two_shards, currentDatabase(), tmp_01683, n); + + SET insert_distributed_sync=1; + INSERT INTO dist_01683 VALUES (1),(2); + + SET insert_distributed_sync=0; + INSERT INTO dist_01683 VALUES (1),(2); + SYSTEM FLUSH DISTRIBUTED dist_01683; + + -- TODO: cover distributed_directory_monitor_batch_inserts=1 + + SELECT * FROM tmp_01683 ORDER BY n; + + DROP TABLE tmp_01683; + DROP TABLE dist_01683; +" |& sed 's/^.*&1 \ + | grep -q "Code: 27" + +echo $?; + +$CLICKHOUSE_CLIENT --query="DROP TABLE nullable_low_cardinality_tsv_test"; diff --git a/tests/queries/0_stateless/01801_s3_cluster.reference b/tests/queries/0_stateless/01801_s3_cluster.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01801_s3_cluster.sh b/tests/queries/0_stateless/01801_s3_cluster.sh new file mode 100755 index 00000000000..215d5500be5 --- /dev/null +++ b/tests/queries/0_stateless/01801_s3_cluster.sh @@ -0,0 +1,12 @@ +#!/usr/bin/env bash + +# NOTE: this test checks that the same data can be read through both the s3() and +# the s3Cluster() table functions; only query success is verified (FORMAT Null) + +CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) +# shellcheck source=../shell_config.sh +. 
"$CUR_DIR"/../shell_config.sh + + +${CLICKHOUSE_CLIENT_BINARY} --send_logs_level="none" -q "SELECT * FROM s3('https://s3.mds.yandex.net/clickhouse-test-reports/*/*/functional_stateless_tests_(ubsan)/test_results.tsv', '$S3_ACCESS_KEY_ID', '$S3_SECRET_ACCESS', 'LineAsString', 'line String') limit 100 FORMAT Null;" +${CLICKHOUSE_CLIENT_BINARY} --send_logs_level="none" -q "SELECT * FROM s3Cluster('test_cluster_two_shards', 'https://s3.mds.yandex.net/clickhouse-test-reports/*/*/functional_stateless_tests_(ubsan)/test_results.tsv', '$S3_ACCESS_KEY_ID', '$S3_SECRET_ACCESS', 'LineAsString', 'line String') limit 100 FORMAT Null;" diff --git a/tests/queries/0_stateless/01802_formatDateTime_DateTime64_century.reference b/tests/queries/0_stateless/01802_formatDateTime_DateTime64_century.reference new file mode 100644 index 00000000000..75c114cdd74 --- /dev/null +++ b/tests/queries/0_stateless/01802_formatDateTime_DateTime64_century.reference @@ -0,0 +1,27 @@ +-- { echo } + +SELECT formatDateTime(toDateTime64('1935-12-12 12:12:12', 0, 'Europe/Moscow'), '%C'); +19 +SELECT formatDateTime(toDateTime64('1969-12-12 12:12:12', 0, 'Europe/Moscow'), '%C'); +19 +SELECT formatDateTime(toDateTime64('1989-12-12 12:12:12', 0, 'Europe/Moscow'), '%C'); +19 +SELECT formatDateTime(toDateTime64('2019-09-16 19:20:12', 0, 'Europe/Moscow'), '%C'); +20 +SELECT formatDateTime(toDateTime64('2105-12-12 12:12:12', 0, 'Europe/Moscow'), '%C'); +21 +SELECT formatDateTime(toDateTime64('2205-12-12 12:12:12', 0, 'Europe/Moscow'), '%C'); +22 +-- non-zero scale +SELECT formatDateTime(toDateTime64('1935-12-12 12:12:12', 6, 'Europe/Moscow'), '%C'); +19 +SELECT formatDateTime(toDateTime64('1969-12-12 12:12:12', 6, 'Europe/Moscow'), '%C'); +19 +SELECT formatDateTime(toDateTime64('1989-12-12 12:12:12', 6, 'Europe/Moscow'), '%C'); +19 +SELECT formatDateTime(toDateTime64('2019-09-16 19:20:12', 0, 'Europe/Moscow'), '%C'); +20 +SELECT formatDateTime(toDateTime64('2105-12-12 12:12:12', 6, 'Europe/Moscow'), '%C'); +21 +SELECT formatDateTime(toDateTime64('2205-01-12 12:12:12', 6, 'Europe/Moscow'), '%C'); +22 diff --git a/tests/queries/0_stateless/01802_formatDateTime_DateTime64_century.sql b/tests/queries/0_stateless/01802_formatDateTime_DateTime64_century.sql new file mode 100644 index 00000000000..e368f45cbda --- /dev/null +++ b/tests/queries/0_stateless/01802_formatDateTime_DateTime64_century.sql @@ -0,0 +1,16 @@ +-- { echo } + +SELECT formatDateTime(toDateTime64('1935-12-12 12:12:12', 0, 'Europe/Moscow'), '%C'); +SELECT formatDateTime(toDateTime64('1969-12-12 12:12:12', 0, 'Europe/Moscow'), '%C'); +SELECT formatDateTime(toDateTime64('1989-12-12 12:12:12', 0, 'Europe/Moscow'), '%C'); +SELECT formatDateTime(toDateTime64('2019-09-16 19:20:12', 0, 'Europe/Moscow'), '%C'); +SELECT formatDateTime(toDateTime64('2105-12-12 12:12:12', 0, 'Europe/Moscow'), '%C'); +SELECT formatDateTime(toDateTime64('2205-12-12 12:12:12', 0, 'Europe/Moscow'), '%C'); + +-- non-zero scale +SELECT formatDateTime(toDateTime64('1935-12-12 12:12:12', 6, 'Europe/Moscow'), '%C'); +SELECT formatDateTime(toDateTime64('1969-12-12 12:12:12', 6, 'Europe/Moscow'), '%C'); +SELECT formatDateTime(toDateTime64('1989-12-12 12:12:12', 6, 'Europe/Moscow'), '%C'); +SELECT formatDateTime(toDateTime64('2019-09-16 19:20:12', 0, 'Europe/Moscow'), '%C'); +SELECT formatDateTime(toDateTime64('2105-12-12 12:12:12', 6, 'Europe/Moscow'), '%C'); +SELECT formatDateTime(toDateTime64('2205-01-12 12:12:12', 6, 'Europe/Moscow'), '%C'); \ No newline at end of file diff --git 
a/tests/queries/0_stateless/01802_rank_corr_mann_whitney_over_window.reference b/tests/queries/0_stateless/01802_rank_corr_mann_whitney_over_window.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01802_rank_corr_mann_whitney_over_window.sql b/tests/queries/0_stateless/01802_rank_corr_mann_whitney_over_window.sql new file mode 100644 index 00000000000..24ee9282ac0 --- /dev/null +++ b/tests/queries/0_stateless/01802_rank_corr_mann_whitney_over_window.sql @@ -0,0 +1,22 @@ +DROP TABLE IF EXISTS 01802_empsalary; + +SET allow_experimental_window_functions=1; + +CREATE TABLE 01802_empsalary +( + `depname` LowCardinality(String), + `empno` UInt64, + `salary` Int32, + `enroll_date` Date +) +ENGINE = MergeTree +ORDER BY enroll_date +SETTINGS index_granularity = 8192; + +INSERT INTO 01802_empsalary VALUES ('sales', 1, 5000, '2006-10-01'), ('develop', 8, 6000, '2006-10-01'), ('personnel', 2, 3900, '2006-12-23'), ('develop', 10, 5200, '2007-08-01'), ('sales', 3, 4800, '2007-08-01'), ('sales', 4, 4801, '2007-08-08'), ('develop', 11, 5200, '2007-08-15'), ('personnel', 5, 3500, '2007-12-10'), ('develop', 7, 4200, '2008-01-01'), ('develop', 9, 4500, '2008-01-01'); + +SELECT mannWhitneyUTest(salary, salary) OVER (ORDER BY salary ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) AS func FROM 01802_empsalary; -- {serverError 36} + +SELECT rankCorr(salary, 0.5) OVER (ORDER BY salary ASC ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) AS func FROM 01802_empsalary; -- {serverError 36} + +DROP TABLE IF EXISTS 01802_empsalary; diff --git a/tests/queries/0_stateless/01802_test_postgresql_protocol_with_row_policy.reference b/tests/queries/0_stateless/01802_test_postgresql_protocol_with_row_policy.reference new file mode 100644 index 00000000000..729d93bf322 --- /dev/null +++ b/tests/queries/0_stateless/01802_test_postgresql_protocol_with_row_policy.reference @@ -0,0 +1,24 @@ +before row policy +0 +1 +2 +3 +4 +5 +6 +7 +8 +9 + +after row policy with no password + val +----- + 2 +(1 row) + +after row policy with plaintext_password + val +----- + 2 +(1 row) + diff --git a/tests/queries/0_stateless/01802_test_postgresql_protocol_with_row_policy.sh b/tests/queries/0_stateless/01802_test_postgresql_protocol_with_row_policy.sh new file mode 100755 index 00000000000..edd73131020 --- /dev/null +++ b/tests/queries/0_stateless/01802_test_postgresql_protocol_with_row_policy.sh @@ -0,0 +1,43 @@ +#!/usr/bin/env bash + +CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) +# shellcheck source=../shell_config.sh +. 
"$CUR_DIR"/../shell_config.sh + +echo " +CREATE DATABASE IF NOT EXISTS db01802; +DROP TABLE IF EXISTS db01802.postgresql; +DROP ROW POLICY IF EXISTS test_policy ON db01802.postgresql; + +CREATE TABLE db01802.postgresql (val UInt32) ENGINE=MergeTree ORDER BY val; +INSERT INTO db01802.postgresql SELECT number FROM numbers(10); + +SELECT 'before row policy'; +SELECT * FROM db01802.postgresql; +" | $CLICKHOUSE_CLIENT -n + + +echo " +DROP USER IF EXISTS postgresql_user; +CREATE USER postgresql_user HOST IP '127.0.0.1' IDENTIFIED WITH no_password; +GRANT SELECT(val) ON db01802.postgresql TO postgresql_user; +CREATE ROW POLICY IF NOT EXISTS test_policy ON db01802.postgresql FOR SELECT USING val = 2 TO postgresql_user; + +SELECT ''; +SELECT 'after row policy with no password'; +" | $CLICKHOUSE_CLIENT -n + +psql --host localhost --port ${CLICKHOUSE_PORT_POSTGRESQL} db01802 --user postgresql_user -c "SELECT * FROM postgresql;" + +echo " +DROP USER IF EXISTS postgresql_user; +DROP ROW POLICY IF EXISTS test_policy ON db01802.postgresql; +CREATE USER postgresql_user HOST IP '127.0.0.1' IDENTIFIED WITH plaintext_password BY 'qwerty'; +GRANT SELECT(val) ON db01802.postgresql TO postgresql_user; +CREATE ROW POLICY IF NOT EXISTS test_policy ON db01802.postgresql FOR SELECT USING val = 2 TO postgresql_user; + +SELECT 'after row policy with plaintext_password'; +" | $CLICKHOUSE_CLIENT -n + +psql "postgresql://postgresql_user:qwerty@localhost:${CLICKHOUSE_PORT_POSTGRESQL}/db01802" -c "SELECT * FROM postgresql;" + diff --git a/tests/queries/0_stateless/01802_toDateTime64_large_values.reference b/tests/queries/0_stateless/01802_toDateTime64_large_values.reference new file mode 100644 index 00000000000..c44c61ab93a --- /dev/null +++ b/tests/queries/0_stateless/01802_toDateTime64_large_values.reference @@ -0,0 +1,10 @@ +-- { echo } + +SELECT toDateTime64('2205-12-12 12:12:12', 0, 'UTC'); +2205-12-12 12:12:12 +SELECT toDateTime64('2205-12-12 12:12:12', 0, 'Europe/Moscow'); +2205-12-12 12:12:12 +SELECT toDateTime64('2205-12-12 12:12:12', 6, 'Europe/Moscow'); +2205-12-12 12:12:12.000000 +SELECT toDateTime64('2205-12-12 12:12:12', 6, 'Europe/Moscow'); +2205-12-12 12:12:12.000000 diff --git a/tests/queries/0_stateless/01802_toDateTime64_large_values.sql b/tests/queries/0_stateless/01802_toDateTime64_large_values.sql new file mode 100644 index 00000000000..299111f43bc --- /dev/null +++ b/tests/queries/0_stateless/01802_toDateTime64_large_values.sql @@ -0,0 +1,7 @@ +-- { echo } + +SELECT toDateTime64('2205-12-12 12:12:12', 0, 'UTC'); +SELECT toDateTime64('2205-12-12 12:12:12', 0, 'Europe/Moscow'); + +SELECT toDateTime64('2205-12-12 12:12:12', 6, 'Europe/Moscow'); +SELECT toDateTime64('2205-12-12 12:12:12', 6, 'Europe/Moscow'); \ No newline at end of file diff --git a/tests/queries/0_stateless/01803_const_nullable_map.reference b/tests/queries/0_stateless/01803_const_nullable_map.reference new file mode 100644 index 00000000000..573541ac970 --- /dev/null +++ b/tests/queries/0_stateless/01803_const_nullable_map.reference @@ -0,0 +1 @@ +0 diff --git a/tests/queries/0_stateless/01803_const_nullable_map.sql b/tests/queries/0_stateless/01803_const_nullable_map.sql new file mode 100644 index 00000000000..4ac9f925e24 --- /dev/null +++ b/tests/queries/0_stateless/01803_const_nullable_map.sql @@ -0,0 +1,9 @@ +DROP TABLE IF EXISTS t_map_null; + +SET allow_experimental_map_type = 1; + +CREATE TABLE t_map_null (a Map(String, String), b String) engine = MergeTree() ORDER BY a; +INSERT INTO t_map_null VALUES (map('a', 'b', 'c', 'd'), 
'foo'); +SELECT count() FROM t_map_null WHERE a = map('name', NULL, '', NULL); + +DROP TABLE t_map_null; diff --git a/tests/queries/0_stateless/01803_untuple_subquery.reference b/tests/queries/0_stateless/01803_untuple_subquery.reference new file mode 100644 index 00000000000..838ff3aa952 --- /dev/null +++ b/tests/queries/0_stateless/01803_untuple_subquery.reference @@ -0,0 +1,2 @@ +(0.5,'92233720368547758.07',NULL) 1.00 256 \N \N +\N diff --git a/tests/queries/0_stateless/01803_untuple_subquery.sql b/tests/queries/0_stateless/01803_untuple_subquery.sql new file mode 100644 index 00000000000..512b4c561af --- /dev/null +++ b/tests/queries/0_stateless/01803_untuple_subquery.sql @@ -0,0 +1,3 @@ +SELECT (0.5, '92233720368547758.07', NULL), '', '1.00', untuple(('256', NULL)), NULL FROM (SELECT untuple(((NULL, untuple((('0.0000000100', (65536, NULL, (65535, 9223372036854775807), '25.7', (0.00009999999747378752, '10.25', 1048577), 65536)), '0.0000001024', '65537', NULL))), untuple((9223372036854775807, -inf, 0.5)), NULL, -9223372036854775808)), 257, 7, ('0.0001048575', (1024, NULL, (7, 3), (untuple(tuple(-NULL)), NULL, '0.0001048577', NULL), 0)), 0, (0, 0.9998999834060669, '65537'), untuple(tuple('10.25'))); + +SELECT NULL FROM (SELECT untuple((NULL, dummy))); diff --git a/tests/queries/0_stateless/01804_dictionary_decimal256_type.reference b/tests/queries/0_stateless/01804_dictionary_decimal256_type.reference new file mode 100644 index 00000000000..1af9d45f72b --- /dev/null +++ b/tests/queries/0_stateless/01804_dictionary_decimal256_type.reference @@ -0,0 +1,14 @@ +Flat dictionary +5.00000 +Hashed dictionary +5.00000 +Cache dictionary +5.00000 +SSDCache dictionary +5.00000 +Direct dictionary +5.00000 +IPTrie dictionary +5.00000 +Polygon dictionary +5.00000 diff --git a/tests/queries/0_stateless/01804_dictionary_decimal256_type.sql b/tests/queries/0_stateless/01804_dictionary_decimal256_type.sql new file mode 100644 index 00000000000..cc0ec598b70 --- /dev/null +++ b/tests/queries/0_stateless/01804_dictionary_decimal256_type.sql @@ -0,0 +1,141 @@ +SET allow_experimental_bigint_types = 1; + +DROP TABLE IF EXISTS dictionary_decimal_source_table; +CREATE TABLE dictionary_decimal_source_table +( + id UInt64, + decimal_value Decimal256(5) +) ENGINE = TinyLog; + +INSERT INTO dictionary_decimal_source_table VALUES (1, 5.0); + +DROP DICTIONARY IF EXISTS flat_dictionary; +CREATE DICTIONARY flat_dictionary +( + id UInt64, + decimal_value Decimal256(5) +) +PRIMARY KEY id +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() TABLE 'dictionary_decimal_source_table')) +LIFETIME(MIN 1 MAX 1000) +LAYOUT(FLAT()); + +SELECT 'Flat dictionary'; +SELECT dictGet('flat_dictionary', 'decimal_value', toUInt64(1)); + +DROP DICTIONARY IF EXISTS hashed_dictionary; +CREATE DICTIONARY hashed_dictionary +( + id UInt64, + decimal_value Decimal256(5) +) +PRIMARY KEY id +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() TABLE 'dictionary_decimal_source_table')) +LIFETIME(MIN 1 MAX 1000) +LAYOUT(HASHED()); + +SELECT 'Hashed dictionary'; +SELECT dictGet('hashed_dictionary', 'decimal_value', toUInt64(1)); + +DROP DICTIONARY hashed_dictionary; + +DROP DICTIONARY IF EXISTS cache_dictionary; +CREATE DICTIONARY cache_dictionary +( + id UInt64, + decimal_value Decimal256(5) +) +PRIMARY KEY id +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() TABLE 'dictionary_decimal_source_table')) +LIFETIME(MIN 1 MAX 1000) +LAYOUT(CACHE(SIZE_IN_CELLS 10)); + +SELECT 'Cache dictionary'; +SELECT dictGet('cache_dictionary', 'decimal_value', toUInt64(1)); + 
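+-- (Note: with LAYOUT(CACHE(...)) the dictGet call above is what loads key 1 from the source table into the cache; a repeated lookup within the lifetime would be served from the cache without touching the source.) 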
+DROP DICTIONARY cache_dictionary; + +DROP DICTIONARY IF EXISTS ssd_cache_dictionary; +CREATE DICTIONARY ssd_cache_dictionary +( + id UInt64, + decimal_value Decimal256(5) +) +PRIMARY KEY id +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() TABLE 'dictionary_decimal_source_table')) +LIFETIME(MIN 1 MAX 1000) +LAYOUT(SSD_CACHE(BLOCK_SIZE 4096 FILE_SIZE 8192 PATH '/var/lib/clickhouse/clickhouse_dicts/0d')); + +SELECT 'SSDCache dictionary'; +SELECT dictGet('ssd_cache_dictionary', 'decimal_value', toUInt64(1)); + +DROP DICTIONARY ssd_cache_dictionary; + +DROP DICTIONARY IF EXISTS direct_dictionary; +CREATE DICTIONARY direct_dictionary +( + id UInt64, + decimal_value Decimal256(5) +) +PRIMARY KEY id +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() TABLE 'dictionary_decimal_source_table')) +LAYOUT(DIRECT()); + +SELECT 'Direct dictionary'; +SELECT dictGet('direct_dictionary', 'decimal_value', toUInt64(1)); + +DROP DICTIONARY direct_dictionary; + +DROP TABLE dictionary_decimal_source_table; + +DROP TABLE IF EXISTS ip_trie_dictionary_decimal_source_table; +CREATE TABLE ip_trie_dictionary_decimal_source_table +( + prefix String, + decimal_value Decimal256(5) +) ENGINE = TinyLog; + +INSERT INTO ip_trie_dictionary_decimal_source_table VALUES ('127.0.0.0', 5.0); + +DROP DICTIONARY IF EXISTS ip_trie_dictionary; +CREATE DICTIONARY ip_trie_dictionary +( + prefix String, + decimal_value Decimal256(5) +) +PRIMARY KEY prefix +SOURCE(CLICKHOUSE(HOST 'localhost' port tcpPort() TABLE 'ip_trie_dictionary_decimal_source_table')) +LIFETIME(MIN 10 MAX 1000) +LAYOUT(IP_TRIE()); + +SELECT 'IPTrie dictionary'; +SELECT dictGet('ip_trie_dictionary', 'decimal_value', tuple(IPv4StringToNum('127.0.0.0'))); + +DROP DICTIONARY ip_trie_dictionary; +DROP TABLE ip_trie_dictionary_decimal_source_table; + +DROP TABLE IF EXISTS dictionary_decimal_polygons_source_table; +CREATE TABLE dictionary_decimal_polygons_source_table +( + key Array(Array(Array(Tuple(Float64, Float64)))), + decimal_value Decimal256(5) +) ENGINE = TinyLog; + +INSERT INTO dictionary_decimal_polygons_source_table VALUES ([[[(0, 0), (0, 1), (1, 1), (1, 0)]]], 5.0); + +DROP DICTIONARY IF EXISTS polygon_dictionary; +CREATE DICTIONARY polygon_dictionary +( + key Array(Array(Array(Tuple(Float64, Float64)))), + decimal_value Decimal256(5) +) +PRIMARY KEY key +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() TABLE 'dictionary_decimal_polygons_source_table')) +LIFETIME(MIN 0 MAX 1000) +LAYOUT(POLYGON()); + +SELECT 'Polygon dictionary'; +SELECT dictGet('polygon_dictionary', 'decimal_value', tuple(0.5, 0.5)); + +DROP DICTIONARY polygon_dictionary; +DROP TABLE dictionary_decimal_polygons_source_table; diff --git a/tests/queries/0_stateless/01804_uniq_up_to_ubsan.reference b/tests/queries/0_stateless/01804_uniq_up_to_ubsan.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01804_uniq_up_to_ubsan.sql b/tests/queries/0_stateless/01804_uniq_up_to_ubsan.sql new file mode 100644 index 00000000000..fcbe585b70a --- /dev/null +++ b/tests/queries/0_stateless/01804_uniq_up_to_ubsan.sql @@ -0,0 +1,2 @@ +SELECT uniqUpTo(1e100)(number) FROM numbers(5); -- { serverError 70 } +SELECT uniqUpTo(-1e100)(number) FROM numbers(5); -- { serverError 70 } diff --git a/tests/queries/0_stateless/01809_inactive_parts_to_delay_throw_insert.reference b/tests/queries/0_stateless/01809_inactive_parts_to_delay_throw_insert.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git 
a/tests/queries/0_stateless/01809_inactive_parts_to_delay_throw_insert.sql b/tests/queries/0_stateless/01809_inactive_parts_to_delay_throw_insert.sql new file mode 100644 index 00000000000..e9bbfe69421 --- /dev/null +++ b/tests/queries/0_stateless/01809_inactive_parts_to_delay_throw_insert.sql @@ -0,0 +1,12 @@ +drop table if exists data_01809; + +create table data_01809 (i int) engine MergeTree order by i settings old_parts_lifetime = 10000000000, min_bytes_for_wide_part = 0, inactive_parts_to_throw_insert = 0, inactive_parts_to_delay_insert = 1; + +insert into data_01809 values (1); +insert into data_01809 values (2); + +optimize table data_01809 final; + +insert into data_01809 values (3); + +drop table data_01809; diff --git a/tests/queries/0_stateless/01810_max_part_removal_threads_long.reference b/tests/queries/0_stateless/01810_max_part_removal_threads_long.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01810_max_part_removal_threads_long.sh b/tests/queries/0_stateless/01810_max_part_removal_threads_long.sh new file mode 100755 index 00000000000..f2aa1f63197 --- /dev/null +++ b/tests/queries/0_stateless/01810_max_part_removal_threads_long.sh @@ -0,0 +1,36 @@ +#!/usr/bin/env bash + +# NOTE: this is done as .sh rather than .sql since we need an Ordinary database +# (to account for threads in query_log for the DROP TABLE query), +# and we can make it compatible with parallel runs only in .sh +# (via $CLICKHOUSE_DATABASE) + +CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) +# shellcheck source=../shell_config.sh +. "$CUR_DIR"/../shell_config.sh + +$CLICKHOUSE_CLIENT -nm -q "create database ordinary_$CLICKHOUSE_DATABASE engine=Ordinary" + +# MergeTree +$CLICKHOUSE_CLIENT -nm -q """ + use ordinary_$CLICKHOUSE_DATABASE; + drop table if exists data_01810; + create table data_01810 (key Int) Engine=MergeTree() order by key partition by key settings max_part_removal_threads=10, concurrent_part_removal_threshold=49; + insert into data_01810 select * from numbers(50); + drop table data_01810 settings log_queries=1; + system flush logs; + select throwIf(length(thread_ids)<50) from system.query_log where event_date = today() and current_database = currentDatabase() and query = 'drop table data_01810 settings log_queries=1;' and type = 'QueryFinish' format Null; +""" + +# ReplicatedMergeTree +$CLICKHOUSE_CLIENT -nm -q """ + use ordinary_$CLICKHOUSE_DATABASE; + drop table if exists rep_data_01810; + create table rep_data_01810 (key Int) Engine=ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/rep_data_01810', '1') order by key partition by key settings max_part_removal_threads=10, concurrent_part_removal_threshold=49; + insert into rep_data_01810 select * from numbers(50); + drop table rep_data_01810 settings log_queries=1; + system flush logs; + select throwIf(length(thread_ids)<50) from system.query_log where event_date = today() and current_database = currentDatabase() and query = 'drop table rep_data_01810 settings log_queries=1;' and type = 'QueryFinish' format Null; +""" + +$CLICKHOUSE_CLIENT -nm -q "drop database ordinary_$CLICKHOUSE_DATABASE" diff --git a/tests/queries/0_stateless/01811_filter_by_null.reference b/tests/queries/0_stateless/01811_filter_by_null.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01811_filter_by_null.sql b/tests/queries/0_stateless/01811_filter_by_null.sql new file mode 100644 index 00000000000..496faf428ab --- /dev/null +++ 
b/tests/queries/0_stateless/01811_filter_by_null.sql @@ -0,0 +1,9 @@ +DROP TABLE IF EXISTS test_01344; + +CREATE TABLE test_01344 (x String, INDEX idx (x) TYPE set(10) GRANULARITY 1) ENGINE = MergeTree ORDER BY tuple() SETTINGS min_bytes_for_wide_part = 0; +INSERT INTO test_01344 VALUES ('Hello, world'); +SELECT NULL FROM test_01344 WHERE ignore(1) = NULL; +SELECT NULL FROM test_01344 WHERE encrypt(ignore(encrypt(NULL, '0.0001048577', lcm(2, 65537), NULL, inf, NULL), lcm(-2, 1048575)), '-0.0000000001', lcm(NULL, NULL)) = NULL; +SELECT NULL FROM test_01344 WHERE ignore(x, lcm(NULL, 1048576), -2) = NULL; + +DROP TABLE test_01344; diff --git a/tests/queries/0_stateless/01811_storage_buffer_flush_parameters.reference b/tests/queries/0_stateless/01811_storage_buffer_flush_parameters.reference new file mode 100644 index 00000000000..209e3ef4b62 --- /dev/null +++ b/tests/queries/0_stateless/01811_storage_buffer_flush_parameters.reference @@ -0,0 +1 @@ +20 diff --git a/tests/queries/0_stateless/01811_storage_buffer_flush_parameters.sql b/tests/queries/0_stateless/01811_storage_buffer_flush_parameters.sql new file mode 100644 index 00000000000..dac68ad4ae8 --- /dev/null +++ b/tests/queries/0_stateless/01811_storage_buffer_flush_parameters.sql @@ -0,0 +1,22 @@ +drop table if exists data_01811; +drop table if exists buffer_01811; + +create table data_01811 (key Int) Engine=Memory(); +/* Buffer with flush_rows=10 */ +create table buffer_01811 (key Int) Engine=Buffer(currentDatabase(), data_01811, + /* num_layers= */ 1, + /* min_time= */ 1, /* max_time= */ 86400, + /* min_rows= */ 1e9, /* max_rows= */ 1e6, + /* min_bytes= */ 0, /* max_bytes= */ 4e6, + /* flush_time= */ 86400, /* flush_rows= */ 10, /* flush_bytes= */0 +); + +insert into buffer_01811 select * from numbers(10); +insert into buffer_01811 select * from numbers(10); + +-- wait for background buffer flush +select sleep(3) format Null; +select count() from data_01811; + +drop table buffer_01811; +drop table data_01811; diff --git a/tests/queries/0_stateless/01812_basic_auth_http_server.reference b/tests/queries/0_stateless/01812_basic_auth_http_server.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01812_basic_auth_http_server.sh b/tests/queries/0_stateless/01812_basic_auth_http_server.sh new file mode 100755 index 00000000000..4b993137bbd --- /dev/null +++ b/tests/queries/0_stateless/01812_basic_auth_http_server.sh @@ -0,0 +1,19 @@ +#!/usr/bin/env bash +# shellcheck disable=SC2046 + +# In very old versions of ClickHouse (e.g. 1.1.54385) there was a bug in the Poco HTTP library: +# Basic HTTP authentication headers were not parsed if the size of the URL was exactly 4077 + something bytes. +# So, the user could get an authentication error even though valid credentials were passed. +# This is a minor issue because it does not have security implications (at worst, the user will not be allowed access). + +CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) +# shellcheck source=../shell_config.sh +. "$CUR_DIR"/../shell_config.sh + +# In this test we do the opposite: we pass invalid credentials while the server accepts the default user without a password. +# If the bug exists, they will be ignored (treated as empty credentials) and the query will succeed. 
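+# (Illustrative sketch, not part of the original test: the loop below pads the URL with i extra '+' characters so its total length sweeps through the suspicious ~4077-byte region; a hypothetical single probe at one fixed length would be +#   ${CLICKHOUSE_CURL} --user default:12345 "${CLICKHOUSE_URL}&query=SELECT+1"$(perl -e "print '+'x4000") +# and must always produce an authentication error, never a result row.) 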
+ +for i in {3950..4100}; do ${CLICKHOUSE_CURL} --user default:12345 "${CLICKHOUSE_URL}&query=SELECT+1"$(perl -e "print '+'x$i") | grep -v -F 'password' ||:; done + +# You can check that the bug exists in an old version by running the old server in Docker: +# docker run --network host -it --rm yandex/clickhouse-server:1.1.54385 diff --git a/tests/queries/0_stateless/01812_has_generic.reference b/tests/queries/0_stateless/01812_has_generic.reference new file mode 100644 index 00000000000..e8183f05f5d --- /dev/null +++ b/tests/queries/0_stateless/01812_has_generic.reference @@ -0,0 +1,3 @@ +1 +1 +1 diff --git a/tests/queries/0_stateless/01812_has_generic.sql b/tests/queries/0_stateless/01812_has_generic.sql new file mode 100644 index 00000000000..9ab5b655102 --- /dev/null +++ b/tests/queries/0_stateless/01812_has_generic.sql @@ -0,0 +1,3 @@ +SELECT has([(1, 2), (3, 4)], (toUInt16(3), 4)); +SELECT hasAny([(1, 2), (3, 4)], [(toUInt16(3), 4)]); +SELECT hasAll([(1, 2), (3, 4)], [(toNullable(1), toUInt64(2)), (toUInt16(3), 4)]); diff --git a/tests/queries/0_stateless/01812_optimize_skip_unused_shards_single_node.reference b/tests/queries/0_stateless/01812_optimize_skip_unused_shards_single_node.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01812_optimize_skip_unused_shards_single_node.sql b/tests/queries/0_stateless/01812_optimize_skip_unused_shards_single_node.sql new file mode 100644 index 00000000000..c39947f2c04 --- /dev/null +++ b/tests/queries/0_stateless/01812_optimize_skip_unused_shards_single_node.sql @@ -0,0 +1,3 @@ +-- remote() does not have a sharding key, while force_optimize_skip_unused_shards=2 requires the table to have one. +-- But since there is only one node, everything works. +select * from remote('127.1', system.one) settings optimize_skip_unused_shards=1, force_optimize_skip_unused_shards=2 format Null; diff --git a/tests/queries/0_stateless/01813_distributed_scalar_subqueries_alias.reference b/tests/queries/0_stateless/01813_distributed_scalar_subqueries_alias.reference new file mode 100644 index 00000000000..5565ed6787f --- /dev/null +++ b/tests/queries/0_stateless/01813_distributed_scalar_subqueries_alias.reference @@ -0,0 +1,4 @@ +0 +1 +0 +1 diff --git a/tests/queries/0_stateless/01813_distributed_scalar_subqueries_alias.sql b/tests/queries/0_stateless/01813_distributed_scalar_subqueries_alias.sql new file mode 100644 index 00000000000..722bd4af5bb --- /dev/null +++ b/tests/queries/0_stateless/01813_distributed_scalar_subqueries_alias.sql @@ -0,0 +1,18 @@ +DROP TABLE IF EXISTS data; +CREATE TABLE data (a Int64, b Int64) ENGINE = TinyLog(); + +DROP TABLE IF EXISTS data_distributed; +CREATE TABLE data_distributed (a Int64, b Int64) ENGINE = Distributed(test_shard_localhost, currentDatabase(), 'data'); + +INSERT INTO data VALUES (0, 0); + +SET prefer_localhost_replica = 1; +SELECT a / (SELECT sum(number) FROM numbers(10)) FROM data_distributed; +SELECT a < (SELECT 1) FROM data_distributed; + +SET prefer_localhost_replica = 0; +SELECT a / (SELECT sum(number) FROM numbers(10)) FROM data_distributed; +SELECT a < (SELECT 1) FROM data_distributed; + +DROP TABLE data_distributed; +DROP TABLE data; diff --git a/tests/queries/0_stateless/01817_storage_buffer_parameters.reference b/tests/queries/0_stateless/01817_storage_buffer_parameters.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01817_storage_buffer_parameters.sql b/tests/queries/0_stateless/01817_storage_buffer_parameters.sql new 
file mode 100644 index 00000000000..84727bc5d6b --- /dev/null +++ b/tests/queries/0_stateless/01817_storage_buffer_parameters.sql @@ -0,0 +1,42 @@ +drop table if exists data_01817; +drop table if exists buffer_01817; + +create table data_01817 (key Int) Engine=Null(); + +-- w/ flush_* +create table buffer_01817 (key Int) Engine=Buffer(currentDatabase(), data_01817, + /* num_layers= */ 1, + /* min_time= */ 1, /* max_time= */ 86400, + /* min_rows= */ 1e9, /* max_rows= */ 1e6, + /* min_bytes= */ 0, /* max_bytes= */ 4e6, + /* flush_time= */ 86400, /* flush_rows= */ 10, /* flush_bytes= */0 +); +drop table buffer_01817; + +-- w/o flush_* +create table buffer_01817 (key Int) Engine=Buffer(currentDatabase(), data_01817, + /* num_layers= */ 1, + /* min_time= */ 1, /* max_time= */ 86400, + /* min_rows= */ 1e9, /* max_rows= */ 1e6, + /* min_bytes= */ 0, /* max_bytes= */ 4e6 +); +drop table buffer_01817; + +-- not enough args +create table buffer_01817 (key Int) Engine=Buffer(currentDatabase(), data_01817, + /* num_layers= */ 1, + /* min_time= */ 1, /* max_time= */ 86400, + /* min_rows= */ 1e9, /* max_rows= */ 1e6, + /* min_bytes= */ 0 /* max_bytes= 4e6 */ +); -- { serverError 42 } +-- too many args +create table buffer_01817 (key Int) Engine=Buffer(currentDatabase(), data_01817, + /* num_layers= */ 1, + /* min_time= */ 1, /* max_time= */ 86400, + /* min_rows= */ 1e9, /* max_rows= */ 1e6, + /* min_bytes= */ 0, /* max_bytes= */ 4e6, + /* flush_time= */ 86400, /* flush_rows= */ 10, /* flush_bytes= */0, + 0 +); -- { serverError 42 } + +drop table data_01817; diff --git a/tests/queries/0_stateless/01818_case_float_value_fangyc.reference b/tests/queries/0_stateless/01818_case_float_value_fangyc.reference new file mode 100644 index 00000000000..61780798228 --- /dev/null +++ b/tests/queries/0_stateless/01818_case_float_value_fangyc.reference @@ -0,0 +1 @@ +b diff --git a/tests/queries/0_stateless/01818_case_float_value_fangyc.sql b/tests/queries/0_stateless/01818_case_float_value_fangyc.sql new file mode 100644 index 00000000000..3cdb8503e64 --- /dev/null +++ b/tests/queries/0_stateless/01818_case_float_value_fangyc.sql @@ -0,0 +1 @@ +select case 1.1 when 0.1 then 'a' when 1.1 then 'b' when 2.1 then 'c' else 'default' end as f; diff --git a/tests/queries/0_stateless/01818_input_format_with_names_use_header.reference b/tests/queries/0_stateless/01818_input_format_with_names_use_header.reference new file mode 100644 index 00000000000..b7b577c4685 --- /dev/null +++ b/tests/queries/0_stateless/01818_input_format_with_names_use_header.reference @@ -0,0 +1,2 @@ +testdata1 +testdata2 diff --git a/tests/queries/0_stateless/01818_input_format_with_names_use_header.sh b/tests/queries/0_stateless/01818_input_format_with_names_use_header.sh new file mode 100755 index 00000000000..953c39a40a2 --- /dev/null +++ b/tests/queries/0_stateless/01818_input_format_with_names_use_header.sh @@ -0,0 +1,15 @@ +#!/usr/bin/env bash + +CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) +# shellcheck source=../shell_config.sh +. 
"$CUR_DIR"/../shell_config.sh + +${CLICKHOUSE_CLIENT} -q "DROP TABLE IF EXISTS \`01818_with_names\`;" + +${CLICKHOUSE_CLIENT} -q "CREATE TABLE \`01818_with_names\` (t String) ENGINE = MergeTree ORDER BY t;" + +echo -ne "t\ntestdata1\ntestdata2" | ${CLICKHOUSE_CLIENT} --input_format_with_names_use_header 0 --query "INSERT INTO \`01818_with_names\` FORMAT CSVWithNames" + +${CLICKHOUSE_CLIENT} -q "SELECT * FROM \`01818_with_names\`;" + +${CLICKHOUSE_CLIENT} -q "DROP TABLE IF EXISTS \`01818_with_names\`;" diff --git a/tests/queries/0_stateless/01818_move_partition_simple.reference b/tests/queries/0_stateless/01818_move_partition_simple.reference new file mode 100644 index 00000000000..d5aea68e9f9 --- /dev/null +++ b/tests/queries/0_stateless/01818_move_partition_simple.reference @@ -0,0 +1,12 @@ +INSERT INTO main_table_01818 +INSERT INTO tmp_table_01818 +INSERT INTO tmp_table_01818 +ALL tmp_table_01818 200 +ALL main_table_01818 100 +tmp_table_01818 100 +main_table_01818 100 +Executing ALTER TABLE MOVE PARTITION... +ALL tmp_table_01818 100 +ALL main_table_01818 200 +tmp_table_01818 0 +main_table_01818 200 diff --git a/tests/queries/0_stateless/01818_move_partition_simple.sql b/tests/queries/0_stateless/01818_move_partition_simple.sql new file mode 100644 index 00000000000..6ca3ae75efc --- /dev/null +++ b/tests/queries/0_stateless/01818_move_partition_simple.sql @@ -0,0 +1,122 @@ +DROP TABLE IF EXISTS main_table_01818; +DROP TABLE IF EXISTS tmp_table_01818; + + +CREATE TABLE main_table_01818 +( + `id` UInt32, + `advertiser_id` String, + `campaign_id` String, + `name` String, + `budget` Float64, + `budget_mode` String, + `landing_type` String, + `status` String, + `modify_time` String, + `campaign_type` String, + `campaign_create_time` DateTime, + `campaign_modify_time` DateTime, + `create_time` DateTime, + `update_time` DateTime +) +ENGINE = MergeTree +PARTITION BY advertiser_id +ORDER BY campaign_id +SETTINGS index_granularity = 8192; + +CREATE TABLE tmp_table_01818 +( + `id` UInt32, + `advertiser_id` String, + `campaign_id` String, + `name` String, + `budget` Float64, + `budget_mode` String, + `landing_type` String, + `status` String, + `modify_time` String, + `campaign_type` String, + `campaign_create_time` DateTime, + `campaign_modify_time` DateTime, + `create_time` DateTime, + `update_time` DateTime +) +ENGINE = MergeTree +PARTITION BY advertiser_id +ORDER BY campaign_id +SETTINGS index_granularity = 8192; + +SELECT 'INSERT INTO main_table_01818'; +INSERT INTO main_table_01818 SELECT 1 as `id`, 'ClickHouse' as `advertiser_id`, * EXCEPT (`id`, `advertiser_id`) +FROM generateRandom( + '`id` UInt32, + `advertiser_id` String, + `campaign_id` String, + `name` String, + `budget` Float64, + `budget_mode` String, + `landing_type` String, + `status` String, + `modify_time` String, + `campaign_type` String, + `campaign_create_time` DateTime, + `campaign_modify_time` DateTime, + `create_time` DateTime, + `update_time` DateTime', 10, 10, 10) +LIMIT 100; + +SELECT 'INSERT INTO tmp_table_01818'; +INSERT INTO tmp_table_01818 SELECT 2 as `id`, 'Database' as `advertiser_id`, * EXCEPT (`id`, `advertiser_id`) +FROM generateRandom( + '`id` UInt32, + `advertiser_id` String, + `campaign_id` String, + `name` String, + `budget` Float64, + `budget_mode` String, + `landing_type` String, + `status` String, + `modify_time` String, + `campaign_type` String, + `campaign_create_time` DateTime, + `campaign_modify_time` DateTime, + `create_time` DateTime, + `update_time` DateTime', 10, 10, 10) +LIMIT 100; + +SELECT 'INSERT 
INTO tmp_table_01818'; +INSERT INTO tmp_table_01818 SELECT 3 as `id`, 'ClickHouse' as `advertiser_id`, * EXCEPT (`id`, `advertiser_id`) +FROM generateRandom( + '`id` UInt32, + `advertiser_id` String, + `campaign_id` String, + `name` String, + `budget` Float64, + `budget_mode` String, + `landing_type` String, + `status` String, + `modify_time` String, + `campaign_type` String, + `campaign_create_time` DateTime, + `campaign_modify_time` DateTime, + `create_time` DateTime, + `update_time` DateTime', 10, 10, 10) +LIMIT 100; + +SELECT 'ALL tmp_table_01818', count() FROM tmp_table_01818; +SELECT 'ALL main_table_01818', count() FROM main_table_01818; +SELECT 'tmp_table_01818', count() FROM tmp_table_01818 WHERE `advertiser_id` = 'ClickHouse'; +SELECT 'main_table_01818', count() FROM main_table_01818 WHERE `advertiser_id` = 'ClickHouse'; + +SELECT 'Executing ALTER TABLE MOVE PARTITION...'; +ALTER TABLE tmp_table_01818 MOVE PARTITION 'ClickHouse' TO TABLE main_table_01818; + + +SELECT 'ALL tmp_table_01818', count() FROM tmp_table_01818; +SELECT 'ALL main_table_01818', count() FROM main_table_01818; +SELECT 'tmp_table_01818', count() FROM tmp_table_01818 WHERE `advertiser_id` = 'ClickHouse'; +SELECT 'main_table_01818', count() FROM main_table_01818 WHERE `advertiser_id` = 'ClickHouse'; + + +DROP TABLE IF EXISTS main_table_01818; +DROP TABLE IF EXISTS tmp_table_01818; diff --git a/tests/queries/0_stateless/01820_unhex_case_insensitive.reference b/tests/queries/0_stateless/01820_unhex_case_insensitive.reference new file mode 100644 index 00000000000..e692ee54787 --- /dev/null +++ b/tests/queries/0_stateless/01820_unhex_case_insensitive.reference @@ -0,0 +1 @@ +012 MySQL diff --git a/tests/queries/0_stateless/01820_unhex_case_insensitive.sql b/tests/queries/0_stateless/01820_unhex_case_insensitive.sql new file mode 100644 index 00000000000..99d8031eeda --- /dev/null +++ b/tests/queries/0_stateless/01820_unhex_case_insensitive.sql @@ -0,0 +1,2 @@ +-- MySQL has function `unhex`, so we will make our function `unhex` also case insensitive for compatibility. 
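+-- (For reference: '303132' is the ASCII encoding of '012' and '4D7953514C' of 'MySQL', so both spellings below must return identical results.) 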
+SELECT unhex('303132'), UNHEX('4D7953514C'); diff --git a/tests/queries/0_stateless/01821_dictionary_primary_key_wrong_order.reference b/tests/queries/0_stateless/01821_dictionary_primary_key_wrong_order.reference new file mode 100644 index 00000000000..9833cbcc9b6 --- /dev/null +++ b/tests/queries/0_stateless/01821_dictionary_primary_key_wrong_order.reference @@ -0,0 +1 @@ +1 20 diff --git a/tests/queries/0_stateless/01821_dictionary_primary_key_wrong_order.sql b/tests/queries/0_stateless/01821_dictionary_primary_key_wrong_order.sql new file mode 100644 index 00000000000..636426fcc91 --- /dev/null +++ b/tests/queries/0_stateless/01821_dictionary_primary_key_wrong_order.sql @@ -0,0 +1,24 @@ +DROP TABLE IF EXISTS dictionary_primary_key_source_table; +CREATE TABLE dictionary_primary_key_source_table +( + identifier UInt64, + v UInt64 +) ENGINE = TinyLog; + +INSERT INTO dictionary_primary_key_source_table VALUES (20, 1); + +DROP DICTIONARY IF EXISTS flat_dictionary; +CREATE DICTIONARY flat_dictionary +( + identifier UInt64, + v UInt64 +) +PRIMARY KEY v +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() TABLE 'dictionary_primary_key_source_table')) +LIFETIME(MIN 1 MAX 1000) +LAYOUT(FLAT()); + +SELECT * FROM flat_dictionary; + +DROP DICTIONARY flat_dictionary; +DROP TABLE dictionary_primary_key_source_table; diff --git a/tests/queries/0_stateless/01821_to_date_time_ubsan.reference b/tests/queries/0_stateless/01821_to_date_time_ubsan.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01821_to_date_time_ubsan.sql b/tests/queries/0_stateless/01821_to_date_time_ubsan.sql new file mode 100644 index 00000000000..74226fc221f --- /dev/null +++ b/tests/queries/0_stateless/01821_to_date_time_ubsan.sql @@ -0,0 +1,2 @@ +SELECT toDateTime('9223372036854775806', 7); -- { serverError 407 } +SELECT toDateTime('9223372036854775806', 8); -- { serverError 407 } diff --git a/tests/queries/0_stateless/01822_async_read_from_socket_crash.reference b/tests/queries/0_stateless/01822_async_read_from_socket_crash.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01822_async_read_from_socket_crash.sh b/tests/queries/0_stateless/01822_async_read_from_socket_crash.sh new file mode 100755 index 00000000000..b4bb2228a91 --- /dev/null +++ b/tests/queries/0_stateless/01822_async_read_from_socket_crash.sh @@ -0,0 +1,9 @@ +#!/usr/bin/env bash + +CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) +# shellcheck source=../shell_config.sh +. 
"$CURDIR"/../shell_config.sh + + + +for _ in {1..10}; do $CLICKHOUSE_CLIENT -q "select number from remote('127.0.0.{2,3}', numbers(20)) limit 8 settings max_block_size = 2, unknown_packet_in_send_data=4, sleep_in_send_data_ms=100, async_socket_for_remote=1 format Null" > /dev/null 2>&1 || true; done diff --git a/tests/queries/0_stateless/01822_union_and_constans_error.reference b/tests/queries/0_stateless/01822_union_and_constans_error.reference new file mode 100644 index 00000000000..d00491fd7e5 --- /dev/null +++ b/tests/queries/0_stateless/01822_union_and_constans_error.reference @@ -0,0 +1 @@ +1 diff --git a/tests/queries/0_stateless/01822_union_and_constans_error.sql b/tests/queries/0_stateless/01822_union_and_constans_error.sql new file mode 100644 index 00000000000..38b7df700cd --- /dev/null +++ b/tests/queries/0_stateless/01822_union_and_constans_error.sql @@ -0,0 +1,20 @@ +drop table if exists t0; +CREATE TABLE t0 (c0 String) ENGINE = Log(); + +SELECT isNull(t0.c0) OR COUNT('\n?pVa') +FROM t0 +GROUP BY t0.c0 +HAVING isNull(t0.c0) +UNION ALL +SELECT isNull(t0.c0) OR COUNT('\n?pVa') +FROM t0 +GROUP BY t0.c0 +HAVING NOT isNull(t0.c0) +UNION ALL +SELECT isNull(t0.c0) OR COUNT('\n?pVa') +FROM t0 +GROUP BY t0.c0 +HAVING isNull(isNull(t0.c0)) +SETTINGS aggregate_functions_null_for_empty = 1, enable_optimize_predicate_expression = 0; + +drop table if exists t0; diff --git a/tests/queries/0_stateless/01823_array_low_cardinality_KuliginStepan.reference b/tests/queries/0_stateless/01823_array_low_cardinality_KuliginStepan.reference new file mode 100644 index 00000000000..2439021d2e0 --- /dev/null +++ b/tests/queries/0_stateless/01823_array_low_cardinality_KuliginStepan.reference @@ -0,0 +1 @@ +[['a'],['b','c']] diff --git a/tests/queries/0_stateless/01823_array_low_cardinality_KuliginStepan.sql b/tests/queries/0_stateless/01823_array_low_cardinality_KuliginStepan.sql new file mode 100644 index 00000000000..528a3b464b3 --- /dev/null +++ b/tests/queries/0_stateless/01823_array_low_cardinality_KuliginStepan.sql @@ -0,0 +1,7 @@ +create temporary table test ( + arr Array(Array(LowCardinality(String))) +); + +insert into test(arr) values ([['a'], ['b', 'c']]); + +select arrayFilter(x -> 1, arr) from test; diff --git a/tests/queries/0_stateless/01823_explain_json.reference b/tests/queries/0_stateless/01823_explain_json.reference new file mode 100644 index 00000000000..5c7845a22d5 --- /dev/null +++ b/tests/queries/0_stateless/01823_explain_json.reference @@ -0,0 +1,141 @@ +[ + { + "Plan": { + "Node Type": "Union", + "Plans": [ + { + "Node Type": "Expression", + "Plans": [ + { + "Node Type": "SettingQuotaAndLimits", + "Plans": [ + { + "Node Type": "ReadFromStorage" + } + ] + } + ] + }, + { + "Node Type": "Expression", + "Plans": [ + { + "Node Type": "SettingQuotaAndLimits", + "Plans": [ + { + "Node Type": "ReadFromStorage" + } + ] + } + ] + } + ] + } + } +] +-------- + "Header": [ + { + "Name": "1", + "Type": "UInt8" + }, + { + "Name": "plus(2, dummy)", + "Type": "UInt16" + } +-------- + "Node Type": "Aggregating", + "Header": [ + { + "Name": "number", + "Type": "UInt64" + }, + { + "Name": "plus(number, 1)", + "Type": "UInt64" + }, + { + "Name": "quantile(0.2)(number)", + "Type": "Float64" + }, + { + "Name": "sumIf(number, greater(number, 0))", + "Type": "UInt64" + } + ], + "Keys": ["number", "plus(number, 1)"], + "Aggregates": [ + { + "Name": "quantile(0.2)(number)", + "Function": { + "Name": "quantile", + "Parameters": ["0.2"], + "Argument Types": ["UInt64"], + "Result Type": "Float64" + }, + 
"Arguments": ["number"], + "Argument Positions": [0] + }, + { + "Name": "sumIf(number, greater(number, 0))", + "Function": { + "Name": "sumIf", + "Argument Types": ["UInt64", "UInt8"], + "Result Type": "UInt64" + }, + "Arguments": ["number", "greater(number, 0)"], + "Argument Positions": [0, 2] + } + ], +-------- + "Node Type": "ArrayJoin", + "Left": false, + "Columns": ["x", "y"], +-------- + "Node Type": "Distinct", + "Columns": ["intDiv(number, 3)", "intDiv(number, 2)"], +-- + "Node Type": "Distinct", + "Columns": ["intDiv(number, 3)", "intDiv(number, 2)"], +-------- + "Sort Description": [ + { + "Column": "number", + "Ascending": false, + "With Fill": false + }, + { + "Column": "plus(number, 1)", + "Ascending": true, + "With Fill": false + } + ], + "Limit": 3, +-- + "Sort Description": [ + { + "Column": "number", + "Ascending": false, + "With Fill": false + }, + { + "Column": "plus(number, 1)", + "Ascending": true, + "With Fill": false + } + ], + "Limit": 3, +-- + "Sort Description": [ + { + "Column": "number", + "Ascending": false, + "With Fill": false + }, + { + "Column": "plus(number, 1)", + "Ascending": true, + "With Fill": false + } + ], + "Limit": 3, diff --git a/tests/queries/0_stateless/01823_explain_json.sh b/tests/queries/0_stateless/01823_explain_json.sh new file mode 100755 index 00000000000..004e85cd965 --- /dev/null +++ b/tests/queries/0_stateless/01823_explain_json.sh @@ -0,0 +1,29 @@ +#!/usr/bin/env bash + +CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) +# shellcheck source=../shell_config.sh +. "$CURDIR"/../shell_config.sh + +$CLICKHOUSE_CLIENT -q "EXPLAIN json = 1, description = 0 SELECT 1 UNION ALL SELECT 2 FORMAT TSVRaw" +echo "--------" +$CLICKHOUSE_CLIENT -q "explain json = 1, description = 0, header = 1 select 1, 2 + dummy FORMAT TSVRaw" | grep Header -m 1 -A 8 + +echo "--------" +$CLICKHOUSE_CLIENT -q "EXPLAIN json = 1, actions = 1, header = 1, description = 0 + SELECT quantile(0.2)(number), sumIf(number, number > 0) from numbers(2) group by number, number + 1 FORMAT TSVRaw + " | grep Aggregating -A 42 + +echo "--------" +$CLICKHOUSE_CLIENT -q "EXPLAIN json = 1, actions = 1, description = 0 + SELECT x, y from numbers(2) array join [number, 1] as x, [number + 1] as y FORMAT TSVRaw + " | grep ArrayJoin -A 2 + +echo "--------" +$CLICKHOUSE_CLIENT -q "EXPLAIN json = 1, actions = 1, description = 0 + SELECT distinct intDiv(number, 2), intDiv(number, 3) from numbers(10) FORMAT TSVRaw + " | grep Distinct -A 1 + +echo "--------" +$CLICKHOUSE_CLIENT -q "EXPLAIN json = 1, actions = 1, description = 0 + SELECT number + 1 from numbers(10) order by number desc, number + 1 limit 3 FORMAT TSVRaw + " | grep "Sort Description" -A 12 diff --git a/tests/queries/0_stateless/01824_move_to_prewhere_many_columns.reference b/tests/queries/0_stateless/01824_move_to_prewhere_many_columns.reference new file mode 100644 index 00000000000..adce19321d5 --- /dev/null +++ b/tests/queries/0_stateless/01824_move_to_prewhere_many_columns.reference @@ -0,0 +1,14 @@ +1 Wide +2 Compact +35 +SELECT count() +FROM t_move_to_prewhere +PREWHERE a AND b AND c +WHERE (a AND b AND c) AND (NOT ignore(fat_string)) +1 Compact +2 Compact +35 +SELECT count() +FROM t_move_to_prewhere +PREWHERE a +WHERE a AND (b AND c AND (NOT ignore(fat_string))) diff --git a/tests/queries/0_stateless/01824_move_to_prewhere_many_columns.sql b/tests/queries/0_stateless/01824_move_to_prewhere_many_columns.sql new file mode 100644 index 00000000000..e03972e818d --- /dev/null +++ 
b/tests/queries/0_stateless/01824_move_to_prewhere_many_columns.sql @@ -0,0 +1,37 @@ +DROP TABLE IF EXISTS t_move_to_prewhere; + +CREATE TABLE t_move_to_prewhere (id UInt32, a UInt8, b UInt8, c UInt8, fat_string String) +ENGINE = MergeTree ORDER BY id PARTITION BY id +SETTINGS min_rows_for_wide_part = 100, min_bytes_for_wide_part = 0; + +INSERT INTO t_move_to_prewhere SELECT 1, number % 2 = 0, number % 3 = 0, number % 5 = 0, repeat('a', 1000) FROM numbers(1000); +INSERT INTO t_move_to_prewhere SELECT 2, number % 2 = 0, number % 3 = 0, number % 5 = 0, repeat('a', 1000) FROM numbers(10); + +SELECT partition, part_type FROM system.parts +WHERE table = 't_move_to_prewhere' AND database = currentDatabase() +ORDER BY partition; + +SELECT count() FROM t_move_to_prewhere WHERE a AND b AND c AND NOT ignore(fat_string); +EXPLAIN SYNTAX SELECT count() FROM t_move_to_prewhere WHERE a AND b AND c AND NOT ignore(fat_string); + +DROP TABLE IF EXISTS t_move_to_prewhere; + +-- With only compact parts we cannot move all 3 conditions to PREWHERE, +-- because column sizes are unknown and only the number of columns in the conditions can be used. +-- Moving too many columns to PREWHERE may sometimes be harmful. + +CREATE TABLE t_move_to_prewhere (id UInt32, a UInt8, b UInt8, c UInt8, fat_string String) +ENGINE = MergeTree ORDER BY id PARTITION BY id +SETTINGS min_rows_for_wide_part = 10000, min_bytes_for_wide_part = 100000000; + +INSERT INTO t_move_to_prewhere SELECT 1, number % 2 = 0, number % 3 = 0, number % 5 = 0, repeat('a', 1000) FROM numbers(1000); +INSERT INTO t_move_to_prewhere SELECT 2, number % 2 = 0, number % 3 = 0, number % 5 = 0, repeat('a', 1000) FROM numbers(10); + +SELECT partition, part_type FROM system.parts +WHERE table = 't_move_to_prewhere' AND database = currentDatabase() +ORDER BY partition; + +SELECT count() FROM t_move_to_prewhere WHERE a AND b AND c AND NOT ignore(fat_string); +EXPLAIN SYNTAX SELECT count() FROM t_move_to_prewhere WHERE a AND b AND c AND NOT ignore(fat_string); + +DROP TABLE IF EXISTS t_move_to_prewhere; diff --git a/tests/queries/0_stateless/01825_replacing_vertical_merge.reference b/tests/queries/0_stateless/01825_replacing_vertical_merge.reference new file mode 100644 index 00000000000..18fcfcd8f8e --- /dev/null +++ b/tests/queries/0_stateless/01825_replacing_vertical_merge.reference @@ -0,0 +1,4 @@ +1720 32 +220 17 +33558527 8193 +33550336 8192 diff --git a/tests/queries/0_stateless/01825_replacing_vertical_merge.sql b/tests/queries/0_stateless/01825_replacing_vertical_merge.sql new file mode 100644 index 00000000000..0048f8d7b24 --- /dev/null +++ b/tests/queries/0_stateless/01825_replacing_vertical_merge.sql @@ -0,0 +1,48 @@ +SET optimize_on_insert = 0; + +DROP TABLE IF EXISTS replacing_table; + +CREATE TABLE replacing_table (a UInt32, b UInt32, c UInt32) +ENGINE = ReplacingMergeTree ORDER BY a +SETTINGS vertical_merge_algorithm_min_rows_to_activate = 1, + vertical_merge_algorithm_min_columns_to_activate = 1, + index_granularity = 16, + min_bytes_for_wide_part = 0, + merge_max_block_size = 16; + +SYSTEM STOP MERGES replacing_table; + +INSERT INTO replacing_table SELECT number, number, number from numbers(16); +INSERT INTO replacing_table SELECT 100, number, number from numbers(16); + +SELECT sum(a), count() FROM replacing_table; + +SYSTEM START MERGES replacing_table; + +OPTIMIZE TABLE replacing_table FINAL; + +SELECT sum(a), count() FROM replacing_table; + +DROP TABLE IF EXISTS replacing_table; + +CREATE TABLE replacing_table +( + key UInt64, + value UInt64 +) +ENGINE = 
ReplacingMergeTree +ORDER BY key +SETTINGS + vertical_merge_algorithm_min_rows_to_activate=0, + vertical_merge_algorithm_min_columns_to_activate=0, + min_bytes_for_wide_part = 0; + +INSERT INTO replacing_table SELECT if(number == 8192, 8191, number), 1 FROM numbers(8193); + +SELECT sum(key), count() from replacing_table; + +OPTIMIZE TABLE replacing_table FINAL; + +SELECT sum(key), count() from replacing_table; + +DROP TABLE IF EXISTS replacing_table; diff --git a/tests/queries/0_stateless/01831_max_streams.reference b/tests/queries/0_stateless/01831_max_streams.reference new file mode 100644 index 00000000000..573541ac970 --- /dev/null +++ b/tests/queries/0_stateless/01831_max_streams.reference @@ -0,0 +1 @@ +0 diff --git a/tests/queries/0_stateless/01831_max_streams.sql b/tests/queries/0_stateless/01831_max_streams.sql new file mode 100644 index 00000000000..aa835dea5ac --- /dev/null +++ b/tests/queries/0_stateless/01831_max_streams.sql @@ -0,0 +1 @@ +select * from remote('127.1', system.one) settings max_distributed_connections=0; diff --git a/tests/queries/0_stateless/01832_memory_write_suffix.reference b/tests/queries/0_stateless/01832_memory_write_suffix.reference new file mode 100644 index 00000000000..d00491fd7e5 --- /dev/null +++ b/tests/queries/0_stateless/01832_memory_write_suffix.reference @@ -0,0 +1 @@ +1 diff --git a/tests/queries/0_stateless/01832_memory_write_suffix.sql b/tests/queries/0_stateless/01832_memory_write_suffix.sql new file mode 100644 index 00000000000..718a4e4ac9d --- /dev/null +++ b/tests/queries/0_stateless/01832_memory_write_suffix.sql @@ -0,0 +1,9 @@ +drop table if exists data_01832; + +-- The Memory engine writes data from writeSuffix(), and if it were called twice, two rows +-- would be written (since the block is not reset in between).
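+-- Editorial illustration (not part of the upstream test): under the buggy behaviour the single
+-- INSERT below would come back as two identical rows, so the reference output of one '1' catches it.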
+create table data_01832 (key Int) Engine=Memory; +insert into data_01832 values (1); +select * from data_01832; diff --git a/tests/queries/0_stateless/01833_test_collation_alvarotuso.reference b/tests/queries/0_stateless/01833_test_collation_alvarotuso.reference new file mode 100644 index 00000000000..c55134e07d3 --- /dev/null +++ b/tests/queries/0_stateless/01833_test_collation_alvarotuso.reference @@ -0,0 +1,6 @@ +a a +A A +b b +B B +c c +C C diff --git a/tests/queries/0_stateless/01833_test_collation_alvarotuso.sql b/tests/queries/0_stateless/01833_test_collation_alvarotuso.sql new file mode 100644 index 00000000000..65422731711 --- /dev/null +++ b/tests/queries/0_stateless/01833_test_collation_alvarotuso.sql @@ -0,0 +1,21 @@ +DROP TABLE IF EXISTS test_collation; + +CREATE TABLE test_collation +( + `v` String, + `v2` String +) +ENGINE = MergeTree +ORDER BY v +SETTINGS index_granularity = 8192; + +insert into test_collation values ('A', 'A'); +insert into test_collation values ('B', 'B'); +insert into test_collation values ('C', 'C'); +insert into test_collation values ('a', 'a'); +insert into test_collation values ('b', 'b'); +insert into test_collation values ('c', 'c'); + +SELECT * FROM test_collation ORDER BY v ASC COLLATE 'en'; + +DROP TABLE test_collation; diff --git a/tests/queries/0_stateless/01834_alias_columns_laziness_filimonov.reference b/tests/queries/0_stateless/01834_alias_columns_laziness_filimonov.reference new file mode 100644 index 00000000000..7326d960397 --- /dev/null +++ b/tests/queries/0_stateless/01834_alias_columns_laziness_filimonov.reference @@ -0,0 +1 @@ +Ok diff --git a/tests/queries/0_stateless/01834_alias_columns_laziness_filimonov.sh b/tests/queries/0_stateless/01834_alias_columns_laziness_filimonov.sh new file mode 100755 index 00000000000..793f477b3cb --- /dev/null +++ b/tests/queries/0_stateless/01834_alias_columns_laziness_filimonov.sh @@ -0,0 +1,29 @@ +#!/usr/bin/env bash + +CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) +# shellcheck source=../shell_config.sh +. "$CUR_DIR"/../shell_config.sh + +${CLICKHOUSE_CLIENT} --multiquery --query " +drop table if exists aliases_lazyness; +create table aliases_lazyness (x UInt32, y ALIAS sleepEachRow(0.1)) Engine=MergeTree ORDER BY x; +insert into aliases_lazyness(x) select * from numbers(40); +" + +# In very old ClickHouse versions the alias column was calculated for every row. +# If it still worked that way, the query would take at least 0.1 * 40 = 4 seconds. +# If the issue does not exist, the query should take slightly more than 0.1 seconds. +# The exact time is not guaranteed, so we check in a loop that at least one run +# finishes in less than one second, which proves that the old behaviour is gone.
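+# Editorial note (not part of the upstream test): `timeout 1` kills the client if the query
+# runs longer than one second, and `&& break` leaves the retry loop on the first fast run.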
+ +while true +do + timeout 1 ${CLICKHOUSE_CLIENT} --query "SELECT x, y FROM aliases_lazyness WHERE x = 1 FORMAT Null" && break +done + +${CLICKHOUSE_CLIENT} --multiquery --query " +drop table aliases_lazyness; +SELECT 'Ok'; +" diff --git a/tests/queries/0_stateless/01835_alias_to_primary_key_cyfdecyf.reference b/tests/queries/0_stateless/01835_alias_to_primary_key_cyfdecyf.reference new file mode 100644 index 00000000000..1f49e6b362b --- /dev/null +++ b/tests/queries/0_stateless/01835_alias_to_primary_key_cyfdecyf.reference @@ -0,0 +1,2 @@ +2017-12-15 1 1 +2017-12-15 1 1 diff --git a/tests/queries/0_stateless/01835_alias_to_primary_key_cyfdecyf.sql b/tests/queries/0_stateless/01835_alias_to_primary_key_cyfdecyf.sql new file mode 100644 index 00000000000..54ffb7b4c1f --- /dev/null +++ b/tests/queries/0_stateless/01835_alias_to_primary_key_cyfdecyf.sql @@ -0,0 +1,21 @@ +DROP TABLE IF EXISTS tb; +CREATE TABLE tb +( + date Date, + `index` Int32, + value Int32, + idx Int32 ALIAS `index` +) +ENGINE = MergeTree +PARTITION BY date +ORDER BY (date, `index`); + +insert into tb values ('2017-12-15', 1, 1); + +SET force_primary_key = 1; + +select * from tb where `index` >= 0 AND `index` <= 2; +select * from tb where idx >= 0 AND idx <= 2; + +DROP TABLE tb; diff --git a/tests/queries/0_stateless/01836_date_time_keep_default_timezone_on_operations_den_crane.reference b/tests/queries/0_stateless/01836_date_time_keep_default_timezone_on_operations_den_crane.reference new file mode 100644 index 00000000000..fc624e3510f --- /dev/null +++ b/tests/queries/0_stateless/01836_date_time_keep_default_timezone_on_operations_den_crane.reference @@ -0,0 +1,6 @@ +DateTime +DateTime +DateTime(\'UTC\') +DateTime64(3) +DateTime64(3) +DateTime64(3, \'UTC\') diff --git a/tests/queries/0_stateless/01836_date_time_keep_default_timezone_on_operations_den_crane.sql b/tests/queries/0_stateless/01836_date_time_keep_default_timezone_on_operations_den_crane.sql new file mode 100644 index 00000000000..be47cfb0411 --- /dev/null +++ b/tests/queries/0_stateless/01836_date_time_keep_default_timezone_on_operations_den_crane.sql @@ -0,0 +1,26 @@ +SELECT toTypeName(now()); +SELECT toTypeName(now() - 1); +SELECT toTypeName(now('UTC') - 1); + +SELECT toTypeName(now64(3)); +SELECT toTypeName(now64(3) - 1); +SELECT toTypeName(toTimeZone(now64(3), 'UTC') - 1); + +DROP TABLE IF EXISTS tt_null; +DROP TABLE IF EXISTS tt; +DROP TABLE IF EXISTS tt_mv; + +create table tt_null(p String) engine = Null; + +create table tt(p String,tmin AggregateFunction(min, DateTime)) +engine = AggregatingMergeTree order by p; + +create materialized view tt_mv to tt as +select p, minState(now() - interval 30 minute) as tmin +from tt_null group by p; + +insert into tt_null values('x'); + +DROP TABLE tt_null; +DROP TABLE tt; +DROP TABLE tt_mv; diff --git a/tests/queries/0_stateless/01837_cast_to_array_from_empty_array.reference b/tests/queries/0_stateless/01837_cast_to_array_from_empty_array.reference new file mode 100644 index 00000000000..c71bf50e82f --- /dev/null +++ b/tests/queries/0_stateless/01837_cast_to_array_from_empty_array.reference @@ -0,0 +1,2 @@ +[] +[] diff --git a/tests/queries/0_stateless/01837_cast_to_array_from_empty_array.sql b/tests/queries/0_stateless/01837_cast_to_array_from_empty_array.sql new file mode 100644 index 00000000000..f3aa595f6d5 --- /dev/null +++ b/tests/queries/0_stateless/01837_cast_to_array_from_empty_array.sql @@ -0,0 +1,2 @@ +SELECT CAST([] AS Array(Array(String))); +SELECT CAST([] AS Array(Array(Array(String)))); diff --git 
a/tests/queries/0_stateless/01837_database_memory_ddl_dictionaries.reference b/tests/queries/0_stateless/01837_database_memory_ddl_dictionaries.reference new file mode 100644 index 00000000000..71c4a605031 --- /dev/null +++ b/tests/queries/0_stateless/01837_database_memory_ddl_dictionaries.reference @@ -0,0 +1,3 @@ +1 First +2 Second +3 Third diff --git a/tests/queries/0_stateless/01837_database_memory_ddl_dictionaries.sql b/tests/queries/0_stateless/01837_database_memory_ddl_dictionaries.sql new file mode 100644 index 00000000000..baba8b5188e --- /dev/null +++ b/tests/queries/0_stateless/01837_database_memory_ddl_dictionaries.sql @@ -0,0 +1,30 @@ +DROP DATABASE IF EXISTS 01837_db; +CREATE DATABASE 01837_db ENGINE = Memory; + +DROP TABLE IF EXISTS 01837_db.simple_key_dictionary_source; +CREATE TABLE 01837_db.simple_key_dictionary_source +( + id UInt64, + value String +) ENGINE = TinyLog; + +INSERT INTO 01837_db.simple_key_dictionary_source VALUES (1, 'First'); +INSERT INTO 01837_db.simple_key_dictionary_source VALUES (2, 'Second'); +INSERT INTO 01837_db.simple_key_dictionary_source VALUES (3, 'Third'); + +DROP DICTIONARY IF EXISTS 01837_db.simple_key_direct_dictionary; +CREATE DICTIONARY 01837_db.simple_key_direct_dictionary +( + id UInt64, + value String +) +PRIMARY KEY id +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() DB '01837_db' TABLE 'simple_key_dictionary_source')) +LAYOUT(DIRECT()); + +SELECT * FROM 01837_db.simple_key_direct_dictionary; + +DROP DICTIONARY 01837_db.simple_key_direct_dictionary; +DROP TABLE 01837_db.simple_key_dictionary_source; + +DROP DATABASE 01837_db; diff --git a/tests/queries/0_stateless/01838_system_dictionaries_virtual_key_column.reference b/tests/queries/0_stateless/01838_system_dictionaries_virtual_key_column.reference new file mode 100644 index 00000000000..f0543d9221e --- /dev/null +++ b/tests/queries/0_stateless/01838_system_dictionaries_virtual_key_column.reference @@ -0,0 +1,4 @@ +simple key +example_simple_key_dictionary UInt64 +complex key +example_complex_key_dictionary (UInt64, String) diff --git a/tests/queries/0_stateless/01838_system_dictionaries_virtual_key_column.sql b/tests/queries/0_stateless/01838_system_dictionaries_virtual_key_column.sql new file mode 100644 index 00000000000..97d96f643cf --- /dev/null +++ b/tests/queries/0_stateless/01838_system_dictionaries_virtual_key_column.sql @@ -0,0 +1,26 @@ +DROP DICTIONARY IF EXISTS example_simple_key_dictionary; +CREATE DICTIONARY example_simple_key_dictionary ( + id UInt64, + value UInt64 +) +PRIMARY KEY id +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() USER 'default' TABLE '' DATABASE currentDatabase())) +LAYOUT(DIRECT()); + +SELECT 'simple key'; + +SELECT name, key FROM system.dictionaries WHERE name='example_simple_key_dictionary' AND database=currentDatabase(); + +DROP DICTIONARY IF EXISTS example_complex_key_dictionary; +CREATE DICTIONARY example_complex_key_dictionary ( + id UInt64, + id_key String, + value UInt64 +) +PRIMARY KEY id, id_key +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() USER 'default' TABLE '' DATABASE currentDatabase())) +LAYOUT(COMPLEX_KEY_DIRECT()); + +SELECT 'complex key'; + +SELECT name, key FROM system.dictionaries WHERE name='example_complex_key_dictionary' AND database=currentDatabase(); diff --git a/tests/queries/0_stateless/01839_join_to_subqueries_rewriter_columns_matcher.reference b/tests/queries/0_stateless/01839_join_to_subqueries_rewriter_columns_matcher.reference new file mode 100644 index 00000000000..8e1a7a2271f --- /dev/null +++ 
b/tests/queries/0_stateless/01839_join_to_subqueries_rewriter_columns_matcher.reference @@ -0,0 +1 @@ +a b c diff --git a/tests/queries/0_stateless/01839_join_to_subqueries_rewriter_columns_matcher.sql b/tests/queries/0_stateless/01839_join_to_subqueries_rewriter_columns_matcher.sql new file mode 100644 index 00000000000..979debbcbb8 --- /dev/null +++ b/tests/queries/0_stateless/01839_join_to_subqueries_rewriter_columns_matcher.sql @@ -0,0 +1,4 @@ +SELECT COLUMNS('test') FROM + (SELECT 1 AS id, 'a' AS test) a + LEFT JOIN (SELECT 1 AS id, 'b' AS test) b ON b.id = a.id + LEFT JOIN (SELECT 1 AS id, 'c' AS test) c ON c.id = a.id; diff --git a/tests/queries/0_stateless/01840_tupleElement_formatting_fuzzer.reference b/tests/queries/0_stateless/01840_tupleElement_formatting_fuzzer.reference new file mode 100644 index 00000000000..3ae98fdb188 --- /dev/null +++ b/tests/queries/0_stateless/01840_tupleElement_formatting_fuzzer.reference @@ -0,0 +1,17 @@ +SelectWithUnionQuery (children 1) + ExpressionList (children 1) + SelectQuery (children 1) + ExpressionList (children 1) + Function tupleElement (children 1) + ExpressionList (children 2) + Literal UInt64_255 + Literal UInt64_100 +SelectWithUnionQuery (children 1) + ExpressionList (children 1) + SelectQuery (children 1) + ExpressionList (children 1) + Function tupleElement (children 1) + ExpressionList (children 2) + Literal Tuple_(UInt64_255, UInt64_1) + Literal UInt64_1 +255 diff --git a/tests/queries/0_stateless/01840_tupleElement_formatting_fuzzer.sql b/tests/queries/0_stateless/01840_tupleElement_formatting_fuzzer.sql new file mode 100644 index 00000000000..e7ef7b158a3 --- /dev/null +++ b/tests/queries/0_stateless/01840_tupleElement_formatting_fuzzer.sql @@ -0,0 +1,3 @@ +explain ast select tupleElement(255, 100); +explain ast select tupleElement((255, 1), 1); +select tupleElement((255, 1), 1); diff --git a/tests/queries/0_stateless/01845_add_testcase_for_arrayElement.reference b/tests/queries/0_stateless/01845_add_testcase_for_arrayElement.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01845_add_testcase_for_arrayElement.sql b/tests/queries/0_stateless/01845_add_testcase_for_arrayElement.sql new file mode 100644 index 00000000000..6aeb71b8511 --- /dev/null +++ b/tests/queries/0_stateless/01845_add_testcase_for_arrayElement.sql @@ -0,0 +1,13 @@ +DROP TABLE IF EXISTS test; +CREATE TABLE test (`key` UInt32, `arr` ALIAS [1, 2], `xx` MATERIALIZED arr[1]) ENGINE = MergeTree PARTITION BY tuple() ORDER BY tuple(); +DROP TABLE test; + +CREATE TABLE test (`key` UInt32, `arr` Array(UInt32) ALIAS [1, 2], `xx` MATERIALIZED arr[1]) ENGINE = MergeTree PARTITION BY tuple() ORDER BY tuple(); +DROP TABLE test; + +CREATE TABLE test (`key` UInt32, `arr` Array(UInt32) ALIAS [1, 2], `xx` UInt32 MATERIALIZED arr[1]) ENGINE = MergeTree PARTITION BY tuple() ORDER BY tuple(); +DROP TABLE test; + +CREATE TABLE test (`key` UInt32, `arr` ALIAS [1, 2]) ENGINE = MergeTree PARTITION BY tuple() ORDER BY tuple(); +ALTER TABLE test ADD COLUMN `xx` UInt32 MATERIALIZED arr[1]; +DROP TABLE test; diff --git a/tests/queries/0_stateless/01846_alter_column_without_type_bugfix.reference b/tests/queries/0_stateless/01846_alter_column_without_type_bugfix.reference new file mode 100644 index 00000000000..849ac80b365 --- /dev/null +++ b/tests/queries/0_stateless/01846_alter_column_without_type_bugfix.reference @@ -0,0 +1 @@ +CREATE TABLE default.alter_test\n(\n `a` Int32,\n `b` DateTime DEFAULT now() + 1\n)\nENGINE = 
ReplacingMergeTree(b)\nORDER BY a\nSETTINGS index_granularity = 8192 diff --git a/tests/queries/0_stateless/01846_alter_column_without_type_bugfix.sql b/tests/queries/0_stateless/01846_alter_column_without_type_bugfix.sql new file mode 100644 index 00000000000..5df8daedbe6 --- /dev/null +++ b/tests/queries/0_stateless/01846_alter_column_without_type_bugfix.sql @@ -0,0 +1,6 @@ +DROP TABLE IF EXISTS alter_test; +CREATE TABLE alter_test (a Int32, b DateTime) ENGINE = ReplacingMergeTree(b) ORDER BY a; +ALTER TABLE alter_test MODIFY COLUMN `b` DateTime DEFAULT now(); +ALTER TABLE alter_test MODIFY COLUMN `b` DEFAULT now() + 1; +SHOW CREATE TABLE alter_test; +DROP TABLE alter_test; diff --git a/tests/queries/0_stateless/01846_null_as_default_for_insert_select.reference b/tests/queries/0_stateless/01846_null_as_default_for_insert_select.reference new file mode 100644 index 00000000000..a50af17a63f --- /dev/null +++ b/tests/queries/0_stateless/01846_null_as_default_for_insert_select.reference @@ -0,0 +1,15 @@ +HELLO +WORLD + +HELLO +WORLD +WORLD + +HELLO PEOPLE +WORLD PEOPLE + +1 1001 +2 1002 + +1 501 1001 +2 502 1002 diff --git a/tests/queries/0_stateless/01846_null_as_default_for_insert_select.sql b/tests/queries/0_stateless/01846_null_as_default_for_insert_select.sql new file mode 100644 index 00000000000..d6f3b8a2136 --- /dev/null +++ b/tests/queries/0_stateless/01846_null_as_default_for_insert_select.sql @@ -0,0 +1,30 @@ +DROP TABLE IF EXISTS test_null_as_default; +CREATE TABLE test_null_as_default (a String DEFAULT 'WORLD') ENGINE = Memory; + +INSERT INTO test_null_as_default SELECT 'HELLO' UNION ALL SELECT NULL; +SELECT * FROM test_null_as_default ORDER BY a; +SELECT ''; + +INSERT INTO test_null_as_default SELECT NULL; +SELECT * FROM test_null_as_default ORDER BY a; +SELECT ''; + +DROP TABLE IF EXISTS test_null_as_default; +CREATE TABLE test_null_as_default (a String DEFAULT 'WORLD', b String DEFAULT 'PEOPLE') ENGINE = Memory; + +INSERT INTO test_null_as_default(a) SELECT 'HELLO' UNION ALL SELECT NULL; +SELECT * FROM test_null_as_default ORDER BY a; +SELECT ''; + +DROP TABLE IF EXISTS test_null_as_default; +CREATE TABLE test_null_as_default (a Int8, b Int64 DEFAULT a + 1000) ENGINE = Memory; + +INSERT INTO test_null_as_default SELECT 1, NULL UNION ALL SELECT 2, NULL; +SELECT * FROM test_null_as_default ORDER BY a; +SELECT ''; + +DROP TABLE IF EXISTS test_null_as_default; +CREATE TABLE test_null_as_default (a Int8, b Int64 DEFAULT c - 500, c Int32 DEFAULT a + 1000) ENGINE = Memory; + +INSERT INTO test_null_as_default(a, c) SELECT 1, NULL UNION ALL SELECT 2, NULL; +SELECT * FROM test_null_as_default ORDER BY a; diff --git a/tests/queries/0_stateless/01847_bad_like.reference b/tests/queries/0_stateless/01847_bad_like.reference new file mode 100644 index 00000000000..06f4e8a840d --- /dev/null +++ b/tests/queries/0_stateless/01847_bad_like.reference @@ -0,0 +1,25 @@ +1 +1 +1 +1 +1 +1 +1 +1 +1 +1 +1 +1 +1 +1 +1 +1 +1 +1 +1 +1 +1 +1 +1 +1 +1 diff --git a/tests/queries/0_stateless/01847_bad_like.sql b/tests/queries/0_stateless/01847_bad_like.sql new file mode 100644 index 00000000000..c7dedacc600 --- /dev/null +++ b/tests/queries/0_stateless/01847_bad_like.sql @@ -0,0 +1,30 @@ +SELECT '\w' LIKE '%\w%'; +SELECT '\w' LIKE '\w%'; +SELECT '\w' LIKE '%\w'; +SELECT '\w' LIKE '\w'; + +SELECT '\\w' LIKE '%\\w%'; +SELECT '\\w' LIKE '\\w%'; +SELECT '\\w' LIKE '%\\w'; +SELECT '\\w' LIKE '\\w'; + +SELECT '\i' LIKE '%\i%'; +SELECT '\i' LIKE '\i%'; +SELECT '\i' LIKE '%\i'; +SELECT '\i' LIKE '\i'; + +SELECT '\\i' 
LIKE '%\\i%'; +SELECT '\\i' LIKE '\\i%'; +SELECT '\\i' LIKE '%\\i'; +SELECT '\\i' LIKE '\\i'; + +SELECT '\\' LIKE '%\\\\%'; +SELECT '\\' LIKE '\\\\%'; +SELECT '\\' LIKE '%\\\\'; +SELECT '\\' LIKE '\\\\'; +SELECT '\\' LIKE '\\'; + +SELECT '\\xyz\\' LIKE '\\\\%\\\\'; +SELECT '\\xyz\\' LIKE '\\\\___\\\\'; +SELECT '\\xyz\\' LIKE '\\\\_%_\\\\'; +SELECT '\\xyz\\' LIKE '\\\\%_%\\\\'; diff --git a/tests/queries/0_stateless/01848_http_insert_segfault.reference b/tests/queries/0_stateless/01848_http_insert_segfault.reference new file mode 100644 index 00000000000..587579af915 --- /dev/null +++ b/tests/queries/0_stateless/01848_http_insert_segfault.reference @@ -0,0 +1 @@ +Ok. diff --git a/tests/queries/0_stateless/01848_http_insert_segfault.sh b/tests/queries/0_stateless/01848_http_insert_segfault.sh new file mode 100755 index 00000000000..a263ded44eb --- /dev/null +++ b/tests/queries/0_stateless/01848_http_insert_segfault.sh @@ -0,0 +1,8 @@ +#!/usr/bin/env bash + + CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) + # shellcheck source=../shell_config.sh + . "$CUR_DIR"/../shell_config.sh + + ${CLICKHOUSE_LOCAL} -q "select col1, initializeAggregation('argMaxState', col2, insertTime) as col2, now() as insertTime FROM generateRandom('col1 String, col2 Array(Float64)') LIMIT 1000000 FORMAT CSV" | curl -s 'http://localhost:8123/?query=INSERT%20INTO%20non_existing_table%20SELECT%20col1%2C%20initializeAggregation(%27argMaxState%27%2C%20col2%2C%20insertTime)%20as%20col2%2C%20now()%20as%20insertTime%20FROM%20input(%27col1%20String%2C%20col2%20Array(Float64)%27)%20FORMAT%20CSV' --data-binary @- | grep -q "Table default.non_existing_table doesn't exist" && echo 'Ok.' || echo 'FAIL' ||: + diff --git a/tests/queries/0_stateless/01850_dist_INSERT_preserve_error.reference b/tests/queries/0_stateless/01850_dist_INSERT_preserve_error.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01850_dist_INSERT_preserve_error.sql b/tests/queries/0_stateless/01850_dist_INSERT_preserve_error.sql new file mode 100644 index 00000000000..6ed4eafcc8f --- /dev/null +++ b/tests/queries/0_stateless/01850_dist_INSERT_preserve_error.sql @@ -0,0 +1,15 @@ +create database if not exists shard_0; +create database if not exists shard_1; + +drop table if exists dist_01850; +drop table if exists shard_0.data_01850; + +create table shard_0.data_01850 (key Int) engine=Memory(); +create table dist_01850 (key Int) engine=Distributed('test_cluster_two_replicas_different_databases', /* default_database= */ '', data_01850, key); + +set insert_distributed_sync=1; +set prefer_localhost_replica=0; +insert into dist_01850 values (1); -- { serverError 60 } + +drop table if exists dist_01850; +drop table shard_0.data_01850; diff --git a/tests/queries/0_stateless/01851_clear_column_referenced_by_mv.reference b/tests/queries/0_stateless/01851_clear_column_referenced_by_mv.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01851_clear_column_referenced_by_mv.sql b/tests/queries/0_stateless/01851_clear_column_referenced_by_mv.sql new file mode 100644 index 00000000000..a0239ff482c --- /dev/null +++ b/tests/queries/0_stateless/01851_clear_column_referenced_by_mv.sql @@ -0,0 +1,35 @@ +DROP TABLE IF EXISTS `01851_merge_tree`; +CREATE TABLE `01851_merge_tree` +( + `n1` Int8, + `n2` Int8, + `n3` Int8, + `n4` Int8 +) +ENGINE = MergeTree +ORDER BY n1; + +DROP TABLE IF EXISTS `01851_merge_tree_mv`; +CREATE MATERIALIZED VIEW `01851_merge_tree_mv` +ENGINE = Memory 
AS +SELECT + n2, + n3 +FROM `01851_merge_tree`; + +ALTER TABLE `01851_merge_tree` + DROP COLUMN n3; -- { serverError 524 } + +ALTER TABLE `01851_merge_tree` + DROP COLUMN n2; -- { serverError 524 } + +-- ok +ALTER TABLE `01851_merge_tree` + DROP COLUMN n4; + +-- CLEAR COLUMN is OK +ALTER TABLE `01851_merge_tree` + CLEAR COLUMN n2; + +DROP TABLE `01851_merge_tree`; +DROP TABLE `01851_merge_tree_mv`; diff --git a/tests/queries/0_stateless/01851_fix_row_policy_empty_result.reference b/tests/queries/0_stateless/01851_fix_row_policy_empty_result.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01851_fix_row_policy_empty_result.sql b/tests/queries/0_stateless/01851_fix_row_policy_empty_result.sql new file mode 100644 index 00000000000..f28426bb651 --- /dev/null +++ b/tests/queries/0_stateless/01851_fix_row_policy_empty_result.sql @@ -0,0 +1,12 @@ +drop table if exists tbl; +create table tbl (s String, i int) engine MergeTree order by i; + +insert into tbl values ('123', 123); + +drop row policy if exists filter on tbl; +create row policy filter on tbl using (s = 'non_existing_domain') to all; + +select * from tbl prewhere s = '123' where i = 123; + +drop row policy filter on tbl; +drop table tbl; diff --git a/tests/queries/0_stateless/01851_hedged_connections_external_tables.reference b/tests/queries/0_stateless/01851_hedged_connections_external_tables.reference new file mode 100644 index 00000000000..573541ac970 --- /dev/null +++ b/tests/queries/0_stateless/01851_hedged_connections_external_tables.reference @@ -0,0 +1 @@ +0 diff --git a/tests/queries/0_stateless/01851_hedged_connections_external_tables.sql b/tests/queries/0_stateless/01851_hedged_connections_external_tables.sql new file mode 100644 index 00000000000..c4625720e59 --- /dev/null +++ b/tests/queries/0_stateless/01851_hedged_connections_external_tables.sql @@ -0,0 +1 @@ +select number from remote('127.0.0.{3|2}', numbers(2)) where number global in (select number from numbers(1)) settings async_socket_for_remote=1, use_hedged_requests = 1, sleep_in_send_data_ms=10, receive_data_timeout_ms=1; diff --git a/tests/queries/0_stateless/arcadia_skip_list.txt b/tests/queries/0_stateless/arcadia_skip_list.txt index 1d58f65c055..08587395001 100644 --- a/tests/queries/0_stateless/arcadia_skip_list.txt +++ b/tests/queries/0_stateless/arcadia_skip_list.txt @@ -91,6 +91,7 @@ 01125_dict_ddl_cannot_add_column 01129_dict_get_join_lose_constness 01138_join_on_distributed_and_tmp +01153_attach_mv_uuid 01191_rename_dictionary 01200_mutations_memory_consumption 01211_optimize_skip_unused_shards_type_mismatch @@ -176,7 +177,7 @@ 01560_timeseriesgroupsum_segfault 00976_ttl_with_old_parts 01584_distributed_buffer_cannot_find_column -01018_ip_dictionary +01018_ip_dictionary_long 01582_distinct_subquery_groupby 01558_ttest 01558_ttest_scipy @@ -212,9 +213,23 @@ 01017_uniqCombined_memory_usage 01747_join_view_filter_dictionary 01748_dictionary_table_dot +01755_client_highlight_multi_line_comment_regression 00950_dict_get 01683_flat_dictionary 01681_cache_dictionary_simple_key 01682_cache_dictionary_complex_key 01684_ssd_cache_dictionary_simple_key 01685_ssd_cache_dictionary_complex_key +01304_polygons_sym_difference +01305_polygons_union +01306_polygons_intersection +01702_system_query_log +01759_optimize_skip_unused_shards_zero_shards +01780_clickhouse_dictionary_source_loop +01790_dist_INSERT_block_structure_mismatch_types_and_names +01791_dist_INSERT_block_structure_mismatch +01801_distinct_group_by_shard 
+01804_dictionary_decimal256_type +01801_s3_distributed +01833_test_collation_alvarotuso +01850_dist_INSERT_preserve_error diff --git a/tests/queries/0_stateless/country_polygons.tsv b/tests/queries/0_stateless/country_polygons.tsv new file mode 100644 index 00000000000..4fa8313e3cb --- /dev/null +++ b/tests/queries/0_stateless/country_polygons.tsv @@ -0,0 +1,2 @@ +Dhekelia Sovereign Base Area [[(33.905868,35.090882),(33.913619,35.090882),(33.921474,35.080702),(33.914446,35.073054),(33.908245,35.070729),(33.906524,35.069122),(33.906506,35.069105),(33.898116,35.061272),(33.880133,35.073054),(33.874655,35.067525),(33.867627,35.060497),(33.855122,35.053417),(33.841169,35.051092),(33.834865,35.056621),(33.827113,35.061272),(33.813781,35.055794),(33.804375,35.049541),(33.799001,35.038534),(33.822359,35.030059),(33.830214,35.023031),(33.829387,35.001176),(33.829387,35.001172),(33.840342,34.993369),(33.859049,34.991819),(33.859049,34.974662),(33.850471,34.973009),(33.838068,34.963707),(33.84582,34.959728),(33.864423,34.962983),(33.891841,34.958139),(33.8838,34.949123),(33.874522,34.94123),(33.862315,34.937893),(33.847423,34.94245),(33.819672,34.964748),(33.80421,34.972602),(33.781896,34.976212),(33.784945,34.976212),(33.788046,34.976988),(33.7928,34.977763),(33.79435,34.977763),(33.791146,34.982414),(33.786495,34.984687),(33.782568,34.984687),(33.777917,34.984687),(33.77399,34.988666),(33.766135,34.990268),(33.761484,34.990268),(33.75921,34.988666),(33.765411,34.985566),(33.769339,34.983964),(33.770889,34.980088),(33.77554,34.980088),(33.780191,34.979313),(33.780986,34.976338),(33.780935,34.976345),(33.760427,34.979682),(33.717296,34.977769),(33.70152,34.97289),(33.702935,34.987943),(33.711461,34.985566),(33.71544,34.997296),(33.699731,35.002722),(33.69663,35.008975),(33.705312,35.015228),(33.702211,35.022256),(33.685003,35.029284),(33.679444,35.033891),(33.679435,35.033899),(33.675649,35.037036),(33.674099,35.046441),(33.678853,35.055794),(33.69446,35.058171),(33.705312,35.06675),(33.714717,35.06675),(33.719368,35.06277),(33.711461,35.040963),(33.707585,35.029284),(33.718489,35.032385),(33.739677,35.047216),(33.766135,35.03161),(33.77554,35.040188),(33.786495,35.038534),(33.79435,35.040188),(33.798278,35.052642),(33.824012,35.06675),(33.834865,35.063597),(33.842719,35.056621),(33.853571,35.058171),(33.866904,35.06675),(33.871555,35.073054),(33.876929,35.076826),(33.871555,35.085456),(33.871555,35.100236),(33.876206,35.118994),(33.889435,35.118994),(33.891812,35.110468),(33.89884,35.108814),(33.903594,35.099512),(33.905868,35.09636),(33.905868,35.090882)],[(33.742792,35.001233),(33.746689,35.002711),(33.752063,35.004323),(33.752063,35.0144),(33.746151,35.015207),(33.741314,35.013729),(33.740239,35.010101),(33.738761,35.005264),(33.739702,35.002576),(33.742792,35.001233)]] +Kyrgyzstan 
[[(75.204639,42.84519),(75.271198,42.845758),(75.496611,42.823925),(75.535988,42.827025),(75.555936,42.825217),(75.620428,42.805295),(75.646163,42.806096),(75.675308,42.815372),(75.703833,42.830488),(75.727605,42.848678),(75.73825,42.861571),(75.770599,42.92565),(75.782072,42.933324),(75.797161,42.936192),(75.807059,42.936771),(75.858966,42.939809),(75.898033,42.935701),(75.976582,42.918725),(75.999113,42.91764),(76.064535,42.93397),(76.09058,42.93428),(76.163754,42.921154),(76.254498,42.921154),(76.340487,42.901672),(76.351132,42.902886),(76.370976,42.909604),(76.382965,42.910069),(76.3933,42.905263),(76.404463,42.889244),(76.414074,42.885497),(76.432058,42.88958),(76.461927,42.910999),(76.48053,42.916968),(76.506885,42.914488),(76.558768,42.89821),(76.58502,42.894799),(76.605897,42.898494),(76.644345,42.911129),(76.688786,42.910328),(76.707907,42.913661),(76.74625,42.928879),(76.750385,42.932574),(76.751625,42.937096),(76.753899,42.940765),(76.76134,42.942057),(76.77953,42.939292),(76.785214,42.939654),(76.792966,42.943065),(76.814153,42.961487),(76.83348,42.971745),(76.853427,42.974122),(76.897662,42.974536),(76.93735,42.986034),(76.956884,42.988514),(76.976417,42.98221),(77.089899,42.968154),(77.124005,42.958904),(77.135064,42.951746),(77.163693,42.921981),(77.178679,42.913196),(77.195112,42.910069),(77.21189,42.909944),(77.229632,42.909811),(77.328127,42.897564),(77.36089,42.90454),(77.403265,42.919759),(77.418561,42.922239),(77.43396,42.921412),(77.461762,42.914384),(77.501553,42.914539),(77.520984,42.906917),(77.53845,42.896272),(77.557777,42.888133),(77.574417,42.887848),(77.62692,42.906555),(77.647797,42.909501),(77.71384,42.907589),(77.781226,42.895626),(77.787531,42.889709),(77.791665,42.883043),(77.798176,42.877513),(77.809958,42.871286),(77.83559,42.879839),(77.852229,42.887952),(77.861221,42.89082),(77.883855,42.893223),(77.907006,42.891285),(77.929331,42.885006),(77.986175,42.860202),(78.030616,42.854672),(78.137897,42.861984),(78.183579,42.86015),(78.229777,42.865007),(78.249104,42.862398),(78.290136,42.851442),(78.31122,42.850512),(78.328376,42.855292),(78.364653,42.872526),(78.38553,42.878237),(78.429559,42.880665),(78.496118,42.875601),(78.594303,42.850228),(78.635437,42.832477),(78.669957,42.811161),(78.687114,42.804598),(78.807933,42.79558),(78.888135,42.771215),(78.954281,42.768424),(78.992935,42.757133),(79.030866,42.756151),(79.108794,42.785348),(79.148274,42.790981),(79.173389,42.785632),(79.180831,42.77553),(79.175146,42.737031),(79.176076,42.713931),(79.181864,42.693597),(79.192199,42.674838),(79.206359,42.656467),(79.242119,42.629776),(79.321287,42.602181),(79.353016,42.577299),(79.398388,42.496942),(79.425673,42.469605),(79.47642,42.453973),(79.571918,42.449529),(79.652533,42.461053),(79.696458,42.459838),(79.917943,42.42444),(79.960111,42.403511),(79.97396,42.39147),(80.012098,42.349509),(80.077003,42.305765),(80.110076,42.273338),(80.136534,42.23887),(80.16661,42.208717),(80.210328,42.189519),(80.21653,42.174404),(80.219837,42.141873),(80.224591,42.125569),(80.235133,42.110945),(80.247432,42.098413),(80.256527,42.084357),(80.257561,42.065263),(80.231206,42.033689),(80.181906,42.020976),(79.930862,42.023276),(79.879496,42.013199),(79.842909,42.00183),(79.826786,41.992244),(79.812833,41.97762),(79.803842,41.959998),(79.792576,41.922817),(79.783274,41.905092),(79.747514,41.879745),(79.702969,41.874939),(79.655737,41.875843),(79.610985,41.867626),(79.554658,41.837602),(79.489856,41.819257),(79.410997,41.778588),(79.390844,41.772697),(79.367382,41.772387),(79.30413
,41.787554),(79.282323,41.783497),(79.264856,41.774506),(79.217521,41.741226),(79.19592,41.729521),(79.174526,41.722622),(79.127914,41.714302),(79.088743,41.702546),(78.976192,41.6418),(78.915834,41.63317),(78.897437,41.626272),(78.807313,41.578445),(78.672128,41.538448),(78.658278,41.532453),(78.645049,41.52372),(78.637401,41.512868),(78.629133,41.48796),(78.619211,41.478089),(78.583968,41.465997),(78.51038,41.454422),(78.41757,41.400471),(78.377675,41.386622),(78.359899,41.377527),(78.343466,41.362024),(78.339332,41.344041),(78.356901,41.305542),(78.359692,41.287455),(78.34946,41.270402),(78.331477,41.258723),(78.291582,41.240791),(78.275356,41.228854),(78.250345,41.200535),(78.231335,41.172856),(78.204456,41.133718),(78.190503,41.11775),(78.176034,41.105502),(78.074955,41.039512),(78.057178,41.034344),(78.036508,41.036101),(77.997647,41.049795),(77.866699,41.064058),(77.831042,41.062973),(77.797556,41.054704),(77.665574,41.001271),(77.650278,40.997137),(77.631778,40.995793),(77.580721,40.997705),(77.503517,40.981066),(77.474888,40.982047),(77.445226,40.993675),(77.388795,41.011658),(77.332985,41.02065),(77.301152,41.019306),(77.243171,41.005664),(77.118838,41.011658),(77.088555,41.019565),(77.035018,41.040338),(77.007733,41.044214),(76.900143,41.025766),(76.860972,41.013208),(76.834927,40.99352),(76.820871,40.97781),(76.783974,40.957139),(76.766818,40.944685),(76.757413,40.925772),(76.760927,40.9066),(76.768368,40.887014),(76.770848,40.867119),(76.762477,40.847017),(76.746871,40.83446),(76.707286,40.817613),(76.674214,40.795444),(76.647859,40.764851),(76.630805,40.728885),(76.624191,40.627547),(76.62078,40.611321),(76.609411,40.59711),(76.556391,40.56569),(76.531173,40.534995),(76.498927,40.464611),(76.476499,40.436138),(76.449111,40.415519),(76.361984,40.371904),(76.330152,40.348081),(76.313512,40.343327),(76.299353,40.355677),(76.283333,40.417276),(76.273411,40.434122),(76.244162,40.441202),(76.21543,40.416552),(76.185044,40.384203),(76.151351,40.368131),(76.132541,40.371542),(76.095231,40.387355),(76.07611,40.391954),(76.051616,40.390197),(75.962422,40.357331),(75.949297,40.343068),(75.938031,40.326222),(75.921495,40.309117),(75.901858,40.298885),(75.880464,40.295268),(75.858553,40.296353),(75.79375,40.30891),(75.772046,40.31015),(75.750549,40.308342),(75.704454,40.293149),(75.681819,40.291702),(75.664869,40.305706),(75.656291,40.32307),(75.640168,40.367305),(75.638308,40.386115),(75.646369,40.405338),(75.659598,40.419084),(75.669314,40.432365),(75.666936,40.450245),(75.657221,40.461097),(75.629006,40.481303),(75.617947,40.493964),(75.610816,40.512877),(75.605648,40.569411),(75.587665,40.611941),(75.559863,40.63287),(75.523793,40.63318),(75.481832,40.614111),(75.26324,40.480166),(75.253318,40.47598),(75.241536,40.474068),(75.229237,40.470347),(75.223966,40.462441),(75.220142,40.452984),(75.212494,40.444716),(75.19389,40.441254),(75.175287,40.448127),(75.15658,40.458152),(75.13808,40.463888),(75.113172,40.460787),(75.064389,40.443837),(75.039068,40.441099),(75.002378,40.4473),(74.966101,40.459289),(74.879181,40.505074),(74.856443,40.513187),(74.835359,40.511637),(74.832312,40.508116),(74.820063,40.493964),(74.816032,40.48368),(74.815102,40.480114),(74.807041,40.461769),(74.794638,40.440737),(74.787404,40.420738),(74.794638,40.405545),(74.841664,40.372059),(74.853859,40.35883),(74.862128,40.32617),(74.830915,40.319917),(74.746683,40.336505),(74.724668,40.337539),(74.705755,40.331286),(74.654388,40.291547),(74.637645,40.281987),(74.598681,40.266174),(74.565402,40.24695),(74.476622,40.172
433),(74.369858,40.105822),(74.333788,40.09373),(74.302885,40.090061),(74.272293,40.093833),(74.237876,40.10391),(74.20315,40.109853),(74.167596,40.10639),(74.13287,40.095332),(74.101244,40.078847),(74.069515,40.067788),(74.003989,40.060812),(73.976704,40.043603),(73.966368,40.033268),(73.957894,40.021434),(73.952106,40.008257),(73.949729,39.993684),(73.944044,39.970068),(73.927301,39.953325),(73.907561,39.937873),(73.893195,39.918133),(73.885443,39.876534),(73.880689,39.86501),(73.872937,39.856845),(73.844102,39.838086),(73.83201,39.82372),(73.823018,39.805685),(73.818574,39.786048),(73.819814,39.766721),(73.829529,39.746981),(73.843068,39.740366),(73.859915,39.737162),(73.879345,39.727912),(73.893505,39.710394),(73.899602,39.689465),(73.904253,39.64647),(73.926784,39.592882),(73.921617,39.582133),(73.899292,39.571539),(73.883066,39.561204),(73.870457,39.546786),(73.859192,39.524255),(73.848029,39.489735),(73.838314,39.475679),(73.820744,39.468186),(73.632642,39.448343),(73.604323,39.459608),(73.512236,39.467411),(73.476889,39.464776),(73.367645,39.443795),(73.343151,39.430669),(73.335503,39.415166),(73.333332,39.40111),(73.326408,39.390878),(73.269357,39.382558),(73.167864,39.355377),(73.136032,39.353465),(73.100995,39.361164),(73.083425,39.367831),(73.071126,39.370725),(73.058414,39.368813),(73.009321,39.348607),(72.994335,39.347832),(72.976765,39.352224),(72.91434,39.362715),(72.893669,39.363903),(72.850468,39.35672),(72.835275,39.356204),(72.65048,39.393772),(72.633737,39.394496),(72.616373,39.390413),(72.606141,39.383489),(72.587021,39.364885),(72.575756,39.359304),(72.559426,39.359562),(72.534104,39.372327),(72.519015,39.375686),(72.50806,39.371965),(72.476744,39.346127),(72.460414,39.344111),(72.443567,39.344783),(72.410701,39.351346),(72.393234,39.350829),(72.355924,39.336101),(72.334117,39.333828),(72.31634,39.328815),(72.304248,39.31357),(72.281407,39.259724),(72.240272,39.189909),(72.228903,39.189237),(72.228668,39.189502),(72.218568,39.20089),(72.209163,39.217297),(72.206476,39.226883),(72.206373,39.234609),(72.203272,39.24081),(72.182395,39.249337),(72.167098,39.258742),(72.158623,39.262617),(72.115525,39.268147),(72.094751,39.275072),(72.08576,39.290109),(72.084106,39.311245),(72.078112,39.336618),(72.06695,39.358477),(72.049793,39.368864),(72.031499,39.366694),(71.996463,39.351863),(71.959152,39.345868),(71.942306,39.339047),(71.843501,39.285045),(71.807017,39.272849),(71.77043,39.269542),(71.751207,39.274245),(71.732913,39.285097),(71.71834,39.301065),(71.710899,39.32096),(71.712966,39.341527),(71.723198,39.353981),(71.736634,39.364575),(71.748106,39.379561),(71.751827,39.399663),(71.749243,39.425502),(71.740251,39.448343),(71.724438,39.459401),(71.704594,39.458988),(71.602792,39.442348),(71.554319,39.444157),(71.512875,39.458833),(71.492101,39.493663),(71.500576,39.509217),(71.526001,39.538156),(71.531375,39.553091),(71.52538,39.569162),(71.510911,39.584148),(71.509654,39.584998),(71.493031,39.596241),(71.459545,39.612105),(71.441251,39.610658),(71.406731,39.598153),(71.378309,39.594173),(71.368543,39.591486),(71.361618,39.587559),(71.347872,39.576242),(71.340586,39.571694),(71.305911,39.557173),(71.291958,39.548957),(71.260074,39.520638),(71.249273,39.514385),(71.236923,39.514488),(71.179975,39.523015),(71.137446,39.521155),(71.095898,39.51237),(71.062928,39.495627),(71.042309,39.467876),(71.027581,39.435217),(71.009288,39.407622),(70.977145,39.394909),(70.949808,39.400749),(70.925159,39.412944),(70.903351,39.420851),(70.884489,39.413874),(70.872242,39.402971),(70.864181,
39.400594),(70.855292,39.40173),(70.840099,39.40142),(70.828059,39.398578),(70.797621,39.385349),(70.770129,39.382093),(70.739744,39.386228),(70.713285,39.398423),(70.698092,39.41899),(70.698609,39.431393),(70.703053,39.444725),(70.705431,39.458523),(70.699746,39.472165),(70.690134,39.479193),(70.666725,39.486893),(70.655976,39.493663),(70.655976,39.493714),(70.655925,39.49387),(70.655598,39.494664),(70.635977,39.542497),(70.622128,39.563529),(70.59784,39.577792),(70.58027,39.579549),(70.54358,39.574278),(70.526113,39.575622),(70.513504,39.581513),(70.490973,39.596861),(70.477227,39.601201),(70.459967,39.599548),(70.427618,39.590091),(70.411082,39.587456),(70.40147,39.584768),(70.395372,39.579497),(70.390411,39.57402),(70.384313,39.570867),(70.374288,39.571022),(70.353462,39.575932),(70.343179,39.576862),(70.33026,39.573089),(70.241066,39.522395),(70.223289,39.519088),(70.20603,39.5241),(70.204376,39.538156),(70.215021,39.574071),(70.214814,39.591125),(70.210267,39.609728),(70.200655,39.619495),(70.185204,39.610348),(70.15735,39.563891),(70.148979,39.554279),(70.132442,39.550042),(70.116268,39.554641),(70.08273,39.569782),(70.062266,39.573451),(70.040355,39.573606),(70.019684,39.568594),(70.002734,39.556863),(69.987128,39.539603),(69.978757,39.534436),(69.948629,39.545081),(69.907753,39.548492),(69.830445,39.536244),(69.791585,39.545391),(69.74983,39.563839),(69.710349,39.574071),(69.669421,39.577792),(69.582295,39.573555),(69.564932,39.568129),(69.531445,39.545649),(69.514392,39.537588),(69.496099,39.532575),(69.477288,39.530405),(69.455274,39.530456),(69.412176,39.52503),(69.391454,39.530508),(69.367218,39.549938),(69.361637,39.552884),(69.352025,39.549628),(69.348407,39.543169),(69.345927,39.535366),(69.339726,39.528079),(69.300348,39.515625),(69.286189,39.539707),(69.29115,39.658872),(69.287533,39.677786),(69.280091,39.694271),(69.265932,39.707758),(69.248362,39.719385),(69.233324,39.732666),(69.226296,39.751011),(69.229552,39.790544),(69.240714,39.82894),(69.286447,39.935755),(69.305671,39.968621),(69.310064,39.978646),(69.310322,39.984796),(69.313991,39.986914),(69.32784,39.984744),(69.357296,39.959474),(69.405355,39.896119),(69.501576,39.922474),(69.500439,39.935703),(69.477288,39.968156),(69.478115,39.975029),(69.477288,39.981592),(69.475118,39.987741),(69.471191,39.993684),(69.463129,40.025413),(69.46964,40.050993),(69.485712,40.073731),(69.506331,40.096933),(69.509224,40.103445),(69.513462,40.120705),(69.518009,40.125097),(69.526071,40.123185),(69.536613,40.107889),(69.543021,40.103083),(69.558834,40.101739),(69.575474,40.1036),(69.969558,40.211603),(70.004595,40.208761),(70.14717,40.136983),(70.168822,40.13166),(70.218535,40.134141),(70.241531,40.132745),(70.263494,40.12427),(70.277808,40.112075),(70.290262,40.098174),(70.306282,40.08479),(70.324265,40.077503),(70.359509,40.074041),(70.397646,40.061225),(70.477227,40.052027),(70.502135,40.045981),(70.525493,40.033682),(70.535622,40.015957),(70.520946,39.993736),(70.505029,39.985106),(70.488493,39.978543),(70.473197,39.97012),(70.460226,39.956115),(70.450872,39.937408),(70.44803,39.919993),(70.455627,39.906816),(70.477227,39.900925),(70.485961,39.909606),(70.494177,39.931672),(70.500895,39.940561),(70.512574,39.94516),(70.555052,39.946245),(70.576601,39.952239),(70.5966,39.961955),(70.613963,39.975752),(70.627399,39.993891),(70.635977,40.028514),(70.634737,40.059313),(70.63944,40.084945),(70.66595,40.104168),(70.729822,40.120705),(70.743464,40.127113),(70.782532,40.152589),(70.788733,40.158997),(70.792144,40.161374),(70.797725,40.
161942),(70.802686,40.160134),(70.806045,40.157653),(70.806716,40.156155),(70.824906,40.163544),(70.831831,40.168195),(70.845164,40.171813),(70.898545,40.162614),(70.929293,40.170624),(70.962883,40.189693),(70.979522,40.214084),(70.958955,40.238372),(70.995129,40.266587),(71.053213,40.274235),(71.169382,40.26142),(71.201008,40.263848),(71.215115,40.280747),(71.223952,40.302864),(71.239558,40.321106),(71.253821,40.324413),(71.263949,40.318212),(71.272838,40.307722),(71.283225,40.29842),(71.297074,40.293821),(71.313559,40.292684),(71.344926,40.295216),(71.365132,40.294131),(71.396551,40.271548),(71.441355,40.260903),(71.450966,40.248914),(71.458925,40.234134),(71.477115,40.220802),(71.498405,40.210518),(71.521246,40.203749),(71.568272,40.196566),(71.592973,40.198426),(71.601655,40.20995),(71.604032,40.227261),(71.61013,40.246382),(71.62832,40.258887),(71.645993,40.247105),(71.660153,40.224471),(71.667181,40.204162),(71.666871,40.163131),(71.673072,40.147886),(71.693329,40.141117),(71.707178,40.144269),(71.759785,40.168092),(71.775908,40.179926),(71.78676,40.193517),(71.805467,40.225298),(71.836473,40.249172),(71.872956,40.250774),(71.912334,40.243436),(71.951814,40.240904),(71.96339,40.243746),(71.969178,40.244418),(71.977033,40.243074),(71.989021,40.239302),(72.004834,40.237287),(72.01889,40.240026),(72.025402,40.250878),(72.019821,40.258732),(72.013239,40.262072),(72.005971,40.26576),(71.977033,40.276096),(71.958222,40.286534),(71.951091,40.30152),(71.951094,40.301529),(71.956672,40.31568),(72.043178,40.349321),(72.069637,40.369423),(72.084106,40.39738),(72.090101,40.416035),(72.099299,40.426371),(72.165858,40.454431),(72.182705,40.45779),(72.228328,40.459606),(72.235931,40.459909),(72.254225,40.458307),(72.263415,40.452651),(72.26363,40.452519),(72.259599,40.44239),(72.24513,40.438721),(72.228077,40.437533),(72.216501,40.434949),(72.211768,40.425836),(72.211644,40.425596),(72.224356,40.422443),(72.269624,40.424045),(72.284301,40.41986),(72.343418,40.393401),(72.370497,40.38565),(72.394061,40.389422),(72.414422,40.410713),(72.425171,40.435983),(72.426204,40.45934),(72.426096,40.459545),(72.420028,40.471027),(72.415662,40.479287),(72.415585,40.47933),(72.372254,40.503265),(72.363469,40.512309),(72.363467,40.51241),(72.363262,40.523574),(72.36998,40.539852),(72.370083,40.557732),(72.369928,40.557931),(72.348483,40.585379),(72.348509,40.585508),(72.351893,40.601967),(72.381556,40.612148),(72.414835,40.589875),(72.447908,40.560471),(72.476744,40.549567),(72.515294,40.54564),(72.585781,40.508743),(72.625468,40.5105),(72.640868,40.519853),(72.65048,40.532152),(72.655131,40.546363),(72.656474,40.561143),(72.664226,40.577783),(72.682416,40.577679),(72.719209,40.564864),(72.748355,40.575096),(72.760034,40.641758),(72.783908,40.669663),(72.818945,40.681084),(72.890982,40.695088),(72.976765,40.736068),(73.070609,40.762474),(73.118048,40.782938),(73.148641,40.813686),(73.14337,40.833839),(73.143292,40.833853),(73.135042,40.835274),(73.112467,40.839162),(73.053556,40.83632),(73.033299,40.847224),(73.01945,40.8619),(73.00343,40.870168),(72.929429,40.844175),(72.883127,40.819628),(72.870311,40.818181),(72.872998,40.834821),(72.868658,40.864122),(72.830107,40.87208),(72.701226,40.863243),(72.658231,40.867171),(72.619474,40.88009),(72.588468,40.905825),(72.545577,40.956519),(72.52625,40.962204),(72.501445,40.963496),(72.483358,40.970575),(72.483565,40.99352),(72.485219,40.999566),(72.484702,41.004682),(72.481911,41.008816),(72.476744,41.011813),(72.423517,41.01574),(72.395198,41.022045),(72.374321,41.031967
),(72.345692,41.06597),(72.33236,41.072843),(72.314066,41.061681),(72.308795,41.054446),(72.297323,41.028143),(72.289778,41.023544),(72.252985,41.019616),(72.195486,41.006358),(72.165135,40.999359),(72.178777,41.023182),(72.185599,41.041062),(72.185599,41.060647),(72.1764,41.112892),(72.174643,41.141418),(72.169889,41.168651),(72.158417,41.187875),(72.132889,41.199295),(72.108497,41.196401),(72.085243,41.184877),(72.063642,41.170201),(72.033773,41.15661),(72.016513,41.163535),(72.001217,41.18002),(71.977033,41.195213),(71.897658,41.184929),(71.871613,41.194489),(71.866342,41.236606),(71.868822,41.279239),(71.863034,41.312208),(71.847015,41.341819),(71.75317,41.44729),(71.745212,41.45282),(71.73684,41.455404),(71.730433,41.451166),(71.729709,41.430082),(71.721648,41.424759),(71.712242,41.428015),(71.706868,41.444086),(71.696119,41.447445),(71.691469,41.441916),(71.687334,41.431012),(71.681443,41.422847),(71.671935,41.425483),(71.671108,41.437627),(71.689505,41.493799),(71.689401,41.514625),(71.68413,41.534055),(71.671418,41.547388),(71.649507,41.54992),(71.62739,41.543202),(71.615297,41.532143),(71.595454,41.493799),(71.595454,41.493747),(71.595557,41.493696),(71.595557,41.493489),(71.605582,41.476849),(71.633694,41.449616),(71.637415,41.431271),(71.633074,41.411324),(71.618811,41.377786),(71.585532,41.323525),(71.557937,41.301718),(71.52383,41.296654),(71.480629,41.310761),(71.432466,41.344816),(71.418772,41.3474),(71.412571,41.334687),(71.421459,41.162088),(71.416085,41.127362),(71.393657,41.112737),(71.325238,41.157334),(71.300226,41.133046),(71.289478,41.113874),(71.276145,41.113151),(71.263381,41.123486),(71.253821,41.13749),(71.241522,41.175162),(71.230153,41.187255),(71.206382,41.188753),(71.185711,41.180227),(71.183024,41.166377),(71.187882,41.148446),(71.189949,41.127465),(71.18013,41.108138),(71.164421,41.116199),(71.139099,41.148394),(71.123079,41.158212),(71.085769,41.162036),(71.067579,41.169736),(71.047012,41.182035),(71.024688,41.189787),(70.977145,41.19635),(70.93415,41.19144),(70.914203,41.193042),(70.895962,41.206116),(70.882164,41.220482),(70.865731,41.233195),(70.847747,41.243065),(70.828524,41.249111),(70.805993,41.247458),(70.786563,41.24043),(70.770439,41.238518),(70.758037,41.25216),(70.75628,41.269627),(70.77106,41.331018),(70.769199,41.352102),(70.759691,41.372515),(70.70357,41.445482),(70.686724,41.462483),(70.66781,41.471372),(70.633807,41.467496),(70.511334,41.414476),(70.477227,41.404657),(70.470782,41.404879),(70.453198,41.405484),(70.438057,41.416078),(70.413614,41.450701),(70.398989,41.464964),(70.382246,41.476436),(70.344936,41.493489),(70.344729,41.493644),(70.344522,41.493696),(70.344419,41.493799),(70.203446,41.505633),(70.166549,41.520206),(70.148255,41.552452),(70.169288,41.578342),(70.33119,41.649629),(70.390824,41.685054),(70.423484,41.696913),(70.453663,41.71208),(70.477227,41.738435),(70.506476,41.78559),(70.550091,41.824063),(70.648948,41.887393),(70.679696,41.901113),(70.779328,41.909665),(70.814003,41.919535),(70.825113,41.93633),(70.828059,41.993743),(70.845474,42.030356),(70.886711,42.038495),(70.935907,42.036867),(70.977145,42.044231),(71.118429,42.122908),(71.201008,42.140788),(71.238215,42.160244),(71.253201,42.197555),(71.249868,42.198385),(71.217854,42.206365),(71.077604,42.281167),(71.045772,42.29096),(71.014042,42.287704),(70.9696,42.263468),(70.947793,42.248146),(70.918648,42.253365),(70.897822,42.261608),(70.858134,42.291063),(70.852657,42.306075),(70.864801,42.321552),(70.888825,42.338575),(70.900354,42.346744),(70.932703,42.3762),(70
.939835,42.387827),(70.937354,42.394778),(70.931773,42.401496),(70.929913,42.412296),(70.936114,42.431623),(70.947586,42.451286),(70.961952,42.468107),(70.977145,42.478752),(71.024739,42.455963),(71.041017,42.4548),(71.054712,42.460381),(71.064168,42.470329),(71.066287,42.482137),(71.057864,42.493402),(71.057657,42.493402),(71.057554,42.493506),(71.057554,42.493609),(71.022724,42.51645),(71.012699,42.526036),(71.00319,42.563527),(71.040656,42.580529),(71.127007,42.590606),(71.142768,42.602543),(71.148091,42.6174),(71.146437,42.656855),(71.149021,42.677939),(71.157548,42.687938),(71.19522,42.69724),(71.211446,42.705224),(71.223849,42.717936),(71.245449,42.747133),(71.262502,42.753567),(71.284723,42.7484),(71.308495,42.740105),(71.329785,42.737031),(71.347562,42.742844),(71.363065,42.75411),(71.376294,42.768321),(71.386939,42.783074),(71.404613,42.794883),(71.428849,42.79558),(71.477115,42.78354),(71.493651,42.788914),(71.504193,42.789612),(71.514322,42.78571),(71.553803,42.760905),(71.566308,42.757236),(71.583568,42.759639),(71.693536,42.811807),(71.726402,42.819661),(71.796682,42.822297),(71.831408,42.831573),(71.847738,42.834053),(71.863241,42.829041),(71.878847,42.821341),(71.895384,42.816044),(71.956982,42.804598),(72.072737,42.757185),(72.10488,42.750337),(72.11997,42.75181),(72.148805,42.76199),(72.164101,42.765091),(72.292362,42.76106),(72.358405,42.741397),(72.476744,42.682564),(72.51178,42.677551),(72.583817,42.678275),(72.725824,42.652876),(72.756933,42.640215),(72.780601,42.620449),(72.815224,42.573036),(72.840752,42.555698),(72.874549,42.543993),(72.908345,42.536965),(72.942348,42.536035),(73.070609,42.551926),(73.115051,42.550841),(73.152671,42.539213),(73.172722,42.527845),(73.189052,42.520765),(73.206828,42.517173),(73.278348,42.513401),(73.301396,42.507071),(73.316589,42.493712),(73.313385,42.463171),(73.314419,42.441571),(73.326201,42.428677),(73.417461,42.41705),(73.476889,42.399067),(73.505105,42.402943),(73.505518,42.420926),(73.505077,42.421557),(73.432964,42.524847),(73.417565,42.556163),(73.41002,42.58965),(73.412707,42.627322),(73.423869,42.662772),(73.476889,42.75088),(73.503554,42.794082),(73.507068,42.809404),(73.504278,42.827594),(73.491979,42.861003),(73.489498,42.877823),(73.493012,42.894954),(73.501177,42.909346),(73.521744,42.936166),(73.559675,43.017298),(73.57094,43.031922),(73.58665,43.042283),(73.634089,43.062463),(73.805551,43.114734),(73.822811,43.117318),(73.864256,43.116103),(73.90167,43.13096),(73.935053,43.199716),(73.965335,43.216924),(73.985385,43.211679),(74.021456,43.186306),(74.039852,43.181861),(74.197362,43.195788),(74.213899,43.202635),(74.216586,43.216924),(74.206044,43.234106),(74.178759,43.261702),(74.207697,43.24979),(74.25865,43.215761),(74.286969,43.207209),(74.320042,43.201654),(74.348774,43.191034),(74.400554,43.159537),(74.420604,43.151837),(74.462772,43.148634),(74.498222,43.131115),(74.539253,43.122899),(74.558167,43.115044),(74.572016,43.102202),(74.597028,43.070344),(74.61129,43.058174),(74.71392,42.999883),(74.74875,42.990013),(74.862851,42.975828),(74.948531,42.944848),(74.976849,42.926477),(75.005375,42.9164),(75.066146,42.902602),(75.094258,42.890303),(75.118133,42.876531),(75.178594,42.84966),(75.204639,42.84519)],[(71.008564,40.157757),(70.977145,40.144579),(70.959885,40.113418),(70.954046,40.095797),(70.952961,40.079208),(70.958438,40.063085),(70.98314,40.021383),(70.994509,40.008877),(71.007634,40.003503),(71.034299,39.99911),(71.045823,39.99203),(71.050371,39.962885),(71.007634,39.911157),(71.009288,39.885732),(71.021897
,39.880823),(71.036056,39.887541),(71.049802,39.897979),(71.060964,39.904129),(71.079981,39.903405),(71.084787,39.894827),(71.087216,39.88382),(71.098895,39.8755),(71.113674,39.874466),(71.16101,39.884233),(71.1946,39.884802),(71.208449,39.888781),(71.219094,39.900925),(71.221058,39.931724),(71.17703,39.968156),(71.174704,39.993994),(71.190982,40.006241),(71.237284,40.031098),(71.244002,40.046756),(71.236354,40.056057),(71.223745,40.057866),(71.198217,40.052595),(71.192946,40.05027),(71.181887,40.043397),(71.176926,40.042001),(71.169382,40.042828),(71.165454,40.044637),(71.162043,40.046911),(71.105975,40.064946),(71.078844,40.07864),(71.06174,40.095176),(71.047942,40.122513),(71.031199,40.146905),(71.008564,40.157757)],[(71.741801,39.90077),(71.757304,39.903095),(71.767123,39.915446),(71.789757,39.979318),(71.78552,39.989705),(71.760612,39.98371),(71.724128,39.962678),(71.706868,39.956115),(71.681753,39.955082),(71.665527,39.940199),(71.675346,39.925781),(71.696843,39.913844),(71.71617,39.906764),(71.741801,39.90077)],[(70.498725,39.881908),(70.483739,39.882218),(70.482602,39.866767),(70.490147,39.850179),(70.503479,39.835502),(70.537379,39.817312),(70.547301,39.807649),(70.575878,39.77008),(70.581614,39.766566),(70.63298,39.79845),(70.661609,39.809819),(70.694527,39.814832),(70.70481,39.822067),(70.706412,39.839998),(70.698919,39.858447),(70.68662,39.860876),(70.654994,39.849765),(70.619906,39.850695),(70.498725,39.881908)]]
diff --git a/tests/queries/0_stateless/country_rings.tsv b/tests/queries/0_stateless/country_rings.tsv
new file mode 100644
index 00000000000..15ab66ec0c0
--- /dev/null
+++ b/tests/queries/0_stateless/country_rings.tsv
@@ -0,0 +1,103 @@
+Aruba [(-69.996938,12.577582),(-69.936391,12.531724),(-69.924672,12.519232),(-69.915761,12.497016),(-69.880198,12.453559),(-69.87682,12.427395),(-69.888092,12.41767),(-69.908803,12.417792),(-69.930531,12.425971),(-69.945139,12.440375),(-69.924672,12.440375),(-69.924672,12.447211),(-69.958567,12.463202),(-70.027659,12.522935),(-70.048085,12.531155),(-70.058095,12.537177),(-70.062408,12.54682),(-70.060374,12.556952),(-70.051096,12.574042),(-70.048736,12.583726),(-70.052642,12.600002),(-70.059641,12.614244),(-70.061106,12.625393),(-70.048736,12.632148),(-70.007151,12.585517),(-69.996938,12.577582)]
+Afghanistan 
[(71.049802,38.408664),(71.05714,38.409026),(71.064943,38.411817),(71.076984,38.412178),(71.089386,38.409853),(71.117395,38.398639),(71.155946,38.376212),(71.217699,38.325827),(71.300123,38.298697),(71.334384,38.280662),(71.358156,38.251258),(71.364512,38.206816),(71.359086,38.184105),(71.340947,38.140929),(71.334539,38.11168),(71.316039,38.083284),(71.302397,38.04233),(71.272631,37.997992),(71.265913,37.972541),(71.255785,37.949881),(71.254544,37.939287),(71.258265,37.926472),(71.263536,37.924353),(71.271184,37.926187),(71.281829,37.924999),(71.31976,37.900582),(71.341154,37.893295),(71.360998,37.902029),(71.379188,37.912932),(71.501196,37.946212),(71.537059,37.944455),(71.567342,37.928074),(71.597727,37.89836),(71.590286,37.891435),(71.593387,37.879343),(71.59504,37.857535),(71.594214,37.833764),(71.590286,37.815729),(71.57468,37.798004),(71.537783,37.779039),(71.529515,37.761133),(71.531168,37.751754),(71.540263,37.730515),(71.542537,37.719559),(71.540883,37.709715),(71.529515,37.67858),(71.52476,37.647729),(71.522073,37.637626),(71.517216,37.629229),(71.505123,37.616),(71.501506,37.610341),(71.497165,37.566545),(71.511221,37.485878),(71.501506,37.445777),(71.490241,37.423376),(71.48714,37.409087),(71.494685,37.370692),(71.496545,37.328524),(71.493858,37.307543),(71.487863,37.294986),(71.487037,37.267055),(71.45014,37.21667),(71.453757,37.192615),(71.446832,37.183597),(71.441045,37.16843),(71.43319,37.127347),(71.431123,37.066989),(71.43319,37.054742),(71.439184,37.044097),(71.456444,37.022548),(71.459958,37.010739),(71.459958,36.969786),(71.463059,36.948082),(71.47174,36.930047),(71.528481,36.856149),(71.538506,36.836099),(71.545328,36.790004),(71.552872,36.769591),(71.563931,36.750678),(71.577367,36.733263),(71.61106,36.704841),(71.653021,36.687012),(71.699943,36.678641),(71.748519,36.678641),(71.797612,36.686082),(71.836576,36.699156),(72.126687,36.872738),(72.152836,36.895708),(72.186839,36.911392),(72.195934,36.918988),(72.210403,36.93661),(72.220325,36.94555),(72.259806,36.967305),(72.360988,37.000275),(72.405947,37.007665),(72.474987,36.997484),(72.50868,37.011075),(72.657714,37.028826),(72.666603,37.038335),(72.672701,37.05761),(72.714352,37.109984),(72.760964,37.187499),(72.79073,37.220262),(72.83052,37.239769),(72.877443,37.246875),(72.902247,37.2538),(72.924262,37.274832),(72.995575,37.3093),(73.067509,37.315062),(73.087766,37.326069),(73.099238,37.339867),(73.116394,37.369064),(73.132104,37.384386),(73.170241,37.40826),(73.179026,37.410741),(73.200627,37.404178),(73.211479,37.40826),(73.260262,37.450041),(73.276075,37.459472),(73.296332,37.464949),(73.32124,37.467017),(73.345011,37.464484),(73.361858,37.456397),(73.378394,37.452547),(73.440612,37.479936),(73.485261,37.480969),(73.605047,37.445777),(73.6745,37.43105),(73.717805,37.431825),(73.753462,37.428388),(73.747054,37.403119),(73.745917,37.394902),(73.74571,37.352863),(73.739096,37.338342),(73.722249,37.322452),(73.704989,37.310954),(73.690416,37.305218),(73.651969,37.302065),(73.630988,37.295993),(73.609904,37.281395),(73.597295,37.261809),(73.601636,37.24088),(73.617656,37.233181),(73.659927,37.243748),(73.680494,37.242457),(73.688039,37.236901),(73.702405,37.221166),(73.709227,37.217006),(73.719975,37.217549),(73.736202,37.227651),(73.746123,37.230545),(73.768654,37.228892),(73.783951,37.225894),(73.798213,37.22853),(73.836247,37.256745),(73.856401,37.261577),(73.899292,37.265478),(73.95562,37.286769),(73.976704,37.290283),(74.052151,37.312246),(74.163256,37.330074),(74.187647,37.338394),(74.206044,37.355654),(74.2076
97,37.389915),(74.223924,37.403351),(74.250382,37.403687),(74.279424,37.397512),(74.303816,37.400173),(74.315598,37.426916),(74.320869,37.413893),(74.332444,37.420353),(74.343813,37.420999),(74.353632,37.415909),(74.368204,37.396375),(74.378126,37.393765),(74.389392,37.393403),(74.418331,37.389192),(74.435901,37.392344),(74.454814,37.393636),(74.476622,37.386091),(74.512692,37.377229),(74.521167,37.375601),(74.529848,37.375756),(74.631444,37.381079),(74.660383,37.393972),(74.78854,37.331159),(74.816859,37.306923),(74.862644,37.244601),(74.892307,37.231114),(74.813138,37.21543),(74.794018,37.213931),(74.783269,37.219667),(74.743995,37.288397),(74.737381,37.296123),(74.721464,37.297776),(74.709372,37.290826),(74.698934,37.28031),(74.676299,37.263721),(74.670925,37.261344),(74.650978,37.259639),(74.646327,37.254833),(74.64302,37.248374),(74.629997,37.238297),(74.627207,37.234318),(74.623279,37.230804),(74.615011,37.228375),(74.609017,37.230183),(74.596511,37.240751),(74.59,37.243361),(74.575944,37.242147),(74.532225,37.232199),(74.500083,37.23163),(74.487267,37.225946),(74.476622,37.210055),(74.46763,37.189927),(74.456674,37.177318),(74.441585,37.170549),(74.420294,37.168068),(74.38226,37.172047),(74.368308,37.167061),(74.366137,37.14776),(74.38257,37.126572),(74.414403,37.107839),(74.476622,37.083138),(74.494088,37.066472),(74.505767,37.047042),(74.519203,37.030247),(74.542354,37.021669),(74.547418,37.015675),(74.549279,37.008931),(74.548142,37.001644),(74.544214,36.994022),(74.537393,36.962241),(74.521373,36.958495),(74.501736,36.972421),(74.48365,36.99397),(74.480342,36.996916),(74.476622,36.999319),(74.456984,37.004383),(74.43559,37.003221),(74.394043,36.994022),(74.368101,36.976762),(74.284592,36.934232),(74.235706,36.902167),(74.211728,36.895139),(74.145996,36.901702),(74.129666,36.898421),(74.1153,36.889532),(74.108789,36.875631),(74.103931,36.841318),(74.094319,36.831241),(74.035305,36.815583),(74.006366,36.815738),(73.976704,36.824833),(73.946938,36.83088),(73.865083,36.872582),(73.834077,36.882866),(73.772892,36.892013),(73.711842,36.894139),(73.711335,36.894157),(73.711128,36.894164),(73.709483,36.894221),(73.640083,36.896638),(73.509342,36.87868),(73.476889,36.882866),(73.445883,36.886483),(73.379738,36.879249),(73.331782,36.882091),(73.281862,36.867983),(73.267496,36.866536),(73.252717,36.868035),(73.223468,36.874339),(73.191842,36.877027),(73.042497,36.864263),(73.026271,36.859457),(73.002293,36.846124),(72.990511,36.841577),(72.976765,36.842093),(72.945862,36.852222),(72.921058,36.847364),(72.896873,36.836977),(72.867727,36.830415),(72.779361,36.826797),(72.695852,36.836719),(72.629602,36.832947),(72.565214,36.820596),(72.516741,36.800597),(72.454109,36.757964),(72.433129,36.753417),(72.387343,36.755794),(72.346209,36.74489),(72.301974,36.74272),(72.213607,36.726442),(72.169682,36.711352),(72.153456,36.702102),(72.149632,36.689338),(72.164618,36.670063),(72.171646,36.653629),(72.154179,36.645465),(72.096095,36.638902),(72.070877,36.632287),(72.059095,36.627119),(72.050413,36.618645),(72.05217,36.610376),(72.056304,36.601901),(72.054961,36.592858),(72.038838,36.580456),(71.997806,36.572446),(71.977033,36.563041),(71.960909,36.549605),(71.913471,36.527591),(71.899725,36.518341),(71.887632,36.508005),(71.874713,36.499065),(71.85859,36.494156),(71.793788,36.490745),(71.773221,36.480048),(71.774564,36.448422),(71.791514,36.421292),(71.794408,36.407598),(71.782626,36.396539),(71.766916,36.391837),(71.751,36.391113),(71.736324,36.395919),(71.706868,36.421447),(71.649301,36.452763),(71.
629147,36.459533),(71.61044,36.457931),(71.600725,36.449301),(71.588736,36.41845),(71.580364,36.402327),(71.572303,36.391682),(71.547188,36.371631),(71.542227,36.356387),(71.552666,36.340987),(71.55835,36.327861),(71.53892,36.319283),(71.514632,36.315201),(71.495822,36.309619),(71.479078,36.300524),(71.402752,36.231381),(71.382443,36.218566),(71.31914,36.200582),(71.313972,36.194019),(71.309011,36.172729),(71.302087,36.163323),(71.292682,36.157897),(71.262296,36.146115),(71.223073,36.125393),(71.217699,36.118055),(71.212635,36.096816),(71.207622,36.087617),(71.175376,36.061159),(71.165919,36.045708),(71.169913,36.030802),(71.170932,36.027001),(71.181887,36.018836),(71.217854,36.002713),(71.23191,35.99398),(71.257438,35.971552),(71.28431,35.962457),(71.312525,35.957393),(71.341567,35.947264),(71.356399,35.933053),(71.360688,35.90029),(71.371333,35.885149),(71.416705,35.858846),(71.431019,35.843705),(71.464402,35.794715),(71.470914,35.779936),(71.47205,35.769962),(71.471017,35.761333),(71.47112,35.752289),(71.475564,35.741024),(71.481352,35.734099),(71.496132,35.724591),(71.502746,35.719061),(71.515355,35.701491),(71.519799,35.683766),(71.514012,35.665576),(71.495925,35.646611),(71.483316,35.62656),(71.492721,35.609559),(71.512771,35.596278),(71.567859,35.574212),(71.584188,35.564187),(71.59349,35.549356),(71.59287,35.530184),(71.586049,35.512097),(71.581708,35.493132),(71.587806,35.470911),(71.593422,35.464194),(71.600725,35.45546),(71.613954,35.443212),(71.622429,35.429621),(71.620879,35.410139),(71.61106,35.395463),(71.562277,35.360582),(71.531582,35.327922),(71.529825,35.300895),(71.547808,35.275626),(71.603205,35.223381),(71.633694,35.203124),(71.637932,35.189688),(71.62925,35.170102),(71.604135,35.138166),(71.534992,35.098582),(71.508947,35.072021),(71.5075,35.028199),(71.512978,35.018122),(71.511325,35.008717),(71.504297,35.000604),(71.493444,34.994196),(71.493444,34.994144),(71.493341,34.994144),(71.493341,34.994041),(71.485383,34.983499),(71.476701,34.960296),(71.46957,34.949702),(71.459958,34.942674),(71.445239,34.937757),(71.324928,34.897561),(71.289478,34.87503),(71.270822,34.844283),(71.25563,34.810228),(71.203075,34.748164),(71.189535,34.737054),(71.150933,34.720311),(71.080601,34.672923),(71.070163,34.661245),(71.068768,34.645948),(71.076261,34.623624),(71.079878,34.602902),(71.076571,34.578511),(71.065822,34.55846),(71.047529,34.551122),(71.006963,34.556341),(70.985827,34.556186),(70.983356,34.555256),(70.96867,34.549727),(70.956991,34.532002),(70.955648,34.510014),(70.961229,34.487612),(70.970841,34.468879),(70.991511,34.443067),(71.01983,34.414438),(71.050939,34.389788),(71.122046,34.356819),(71.126387,34.332221),(71.097086,34.262458),(71.09688,34.244345),(71.106543,34.209386),(71.109489,34.189206),(71.108093,34.165228),(71.101789,34.151818),(71.073884,34.125257),(71.062773,34.10531),(71.063858,34.088928),(71.067269,34.073012),(71.062618,34.054253),(71.046702,34.041877),(70.997712,34.033324),(70.977145,34.027407),(70.965725,34.013816),(70.953994,34.004825),(70.939731,34.000923),(70.921231,34.002241),(70.894308,34.009372),(70.884334,34.007047),(70.879994,33.994257),(70.862217,33.964775),(70.791317,33.953562),(70.656183,33.95475),(70.521979,33.938705),(70.49149,33.939609),(70.408963,33.954414),(70.328193,33.957282),(70.309899,33.961675),(70.275586,33.97617),(70.218845,33.980692),(70.002838,34.043789),(69.969093,34.045727),(69.916021,34.038879),(69.889253,34.031206),(69.872613,34.017227),(69.870443,34.001337),(69.873853,33.986325),(69.871993,33.971519),(69.854216,33.956352),(69
.841091,33.941805),(69.847188,33.926948),(69.876489,33.902557),(69.885429,33.889457),(69.898761,33.851501),(69.907546,33.83561),(69.931318,33.806439),(69.940103,33.791479),(69.947234,33.771971),(69.957518,33.752695),(69.972659,33.744841),(69.996478,33.742091),(70.011829,33.740319),(70.065573,33.721018),(70.108154,33.727296),(70.118128,33.716522),(70.126551,33.676964),(70.132546,33.661435),(70.136886,33.656293),(70.145981,33.649885),(70.162518,33.643245),(70.170218,33.638671),(70.174093,33.632108),(70.173473,33.607872),(70.153009,33.544749),(70.151356,33.525862),(70.154973,33.506612),(70.163758,33.488655),(70.177607,33.47341),(70.185462,33.468966),(70.213368,33.461344),(70.220447,33.456021),(70.227837,33.43995),(70.233005,33.432612),(70.285715,33.382873),(70.301579,33.35179),(70.294448,33.318949),(70.124794,33.199034),(70.105364,33.18981),(70.08366,33.191283),(70.059785,33.198052),(70.048003,33.194073),(70.014517,33.140769),(70.006869,33.131803),(69.994725,33.127333),(69.965734,33.1294),(69.955812,33.127695),(69.880933,33.089248),(69.838713,33.086664),(69.771844,33.114776),(69.732983,33.109298),(69.68601,33.080773),(69.667768,33.077),(69.658776,33.078421),(69.650198,33.081858),(69.640689,33.084235),(69.60834,33.079067),(69.587773,33.079532),(69.547517,33.075011),(69.514185,33.056691),(69.487727,33.028373),(69.478125,33.01135),(69.468503,32.994292),(69.48721,32.885875),(69.477288,32.856833),(69.471449,32.852234),(69.445559,32.835671),(69.421064,32.80681),(69.387165,32.785338),(69.376313,32.771877),(69.37807,32.752317),(69.383237,32.744462),(69.39042,32.737899),(69.399154,32.732964),(69.407732,32.729941),(69.414657,32.725575),(69.41724,32.718004),(69.419101,32.70046),(69.429126,32.667361),(69.426025,32.655088),(69.413933,32.635477),(69.361223,32.568478),(69.34355,32.55605),(69.302105,32.543777),(69.281952,32.532796),(69.251566,32.50055),(69.232859,32.462671),(69.228001,32.421304),(69.238543,32.378283),(69.264433,32.322369),(69.268102,32.301337),(69.266965,32.279581),(69.259989,32.236819),(69.259524,32.194522),(69.251876,32.15225),(69.251049,32.13065),(69.270634,32.035978),(69.302157,31.959911),(69.304793,31.94694),(69.301157,31.941282),(69.29885,31.93769),(69.250946,31.907046),(69.223764,31.881931),(69.114313,31.737754),(69.10005,31.724008),(69.084754,31.715791),(69.071628,31.69786),(69.040106,31.673107),(69.004036,31.651092),(68.977267,31.641481),(68.94099,31.643651),(68.906987,31.634298),(68.843529,31.606496),(68.803841,31.602568),(68.77728,31.618588),(68.731184,31.675535),(68.705449,31.70127),(68.697905,31.715946),(68.694907,31.756357),(68.688293,31.768605),(68.675787,31.774702),(68.61946,31.783487),(68.561065,31.811806),(68.527476,31.822968),(68.5051,31.822658),(68.481484,31.815682),(68.438179,31.796045),(68.418748,31.783022),(68.422211,31.773152),(68.439574,31.766589),(68.461433,31.763592),(68.521274,31.764781),(68.540601,31.762507),(68.5498,31.753515),(68.536829,31.741009),(68.515177,31.729951),(68.49802,31.725248),(68.460968,31.730467),(68.356065,31.762507),(68.316532,31.765194),(68.276948,31.763592),(68.255037,31.766383),(68.242635,31.776925),(68.2323,31.790154),(68.206823,31.807879),(68.184758,31.818214),(68.159126,31.825914),(68.138559,31.824674),(68.125743,31.811496),(68.104504,31.76876),(68.09834,31.759124),(68.093497,31.751551),(68.060476,31.725558),(68.055566,31.716566),(68.052259,31.695431),(68.046575,31.688403),(67.994071,31.663443),(67.977328,31.651868),(67.966063,31.638225),(67.955211,31.633212),(67.914696,31.631352),(67.884311,31.635641),(67.870668,31.635176),(67.842711,31.6
23497),(67.781165,31.564173),(67.74828,31.544392),(67.726698,31.53141),(67.696312,31.520816),(67.665461,31.518129),(67.600504,31.53048),(67.568981,31.52986),(67.556114,31.512186),(67.5634,31.497252),(67.578076,31.48211),(67.591099,31.464799),(67.59699,31.42568),(67.611459,31.410797),(67.631406,31.4001),(67.651147,31.395294),(67.734294,31.404751),(67.770364,31.394674),(67.775067,31.352764),(67.764731,31.334058),(67.749435,31.328011),(67.709024,31.329303),(67.692798,31.325221),(67.678949,31.316591),(67.665926,31.306256),(67.602157,31.271116),(67.583657,31.265225),(67.530224,31.256646),(67.493327,31.242952),(67.433899,31.236079),(67.364601,31.210706),(67.346204,31.20776),(67.282177,31.212773),(67.230294,31.210603),(67.213757,31.212256),(67.19319,31.218509),(67.156707,31.235924),(67.136759,31.241092),(67.116864,31.240316),(67.078468,31.232048),(67.058211,31.232358),(67.035267,31.235924),(67.022865,31.239386),(67.015268,31.244657),(67.013873,31.252667),(67.017904,31.258713),(67.023433,31.264863),(67.025965,31.273028),(67.023898,31.2953),(67.016818,31.309149),(67.002091,31.315867),(66.943283,31.314731),(66.905921,31.305532),(66.838483,31.277007),(66.808614,31.254734),(66.785308,31.23179),(66.759522,31.214788),(66.721281,31.210292),(66.696993,31.195823),(66.663093,31.083117),(66.643973,31.060172),(66.550025,30.976973),(66.527494,30.968343),(66.392309,30.944624),(66.375359,30.936717),(66.366264,30.922868),(66.267975,30.601389),(66.264926,30.557826),(66.281928,30.518138),(66.306113,30.491112),(66.313761,30.478296),(66.319393,30.457832),(66.321822,30.437368),(66.303012,30.305283),(66.305286,30.24477),(66.300893,30.225598),(66.236349,30.1116),(66.221725,30.073721),(66.219296,30.057856),(66.225239,30.04442),(66.260431,30.02313),(66.301565,29.98675),(66.332364,29.966079),(66.340529,29.956571),(66.336808,29.952023),(66.328437,29.949543),(66.322442,29.946545),(66.301772,29.915694),(66.27521,29.885154),(66.195628,29.835338),(66.110466,29.813634),(66.052381,29.798854),(65.994297,29.784023),(65.936109,29.769295),(65.878232,29.75449),(65.820147,29.739736),(65.762063,29.724905),(65.70403,29.710151),(65.645843,29.695346),(65.58781,29.680567),(65.529726,29.665736),(65.471745,29.650956),(65.413661,29.636177),(65.355783,29.621423),(65.297595,29.606644),(65.239511,29.591838),(65.181427,29.57711),(65.036371,29.540162),(64.986606,29.541557),(64.820312,29.567886),(64.683886,29.568816),(64.499477,29.570203),(64.477697,29.570367),(64.207739,29.499983),(64.172599,29.4843),(64.149758,29.458461),(64.113378,29.396295),(64.086093,29.386605),(63.971991,29.429574),(63.78792,29.46058),(63.568605,29.497477),(63.415953,29.484971),(63.366809,29.480941),(63.317768,29.47691),(63.268675,29.472879),(63.219686,29.468797),(63.170697,29.464766),(63.121604,29.460787),(63.072615,29.456756),(63.023626,29.452674),(62.974533,29.448643),(62.925544,29.444638),(62.876451,29.440581),(62.827462,29.436602),(62.778421,29.43252),(62.72938,29.428489),(62.680391,29.424458),(62.63135,29.420428),(62.602581,29.41807),(62.477509,29.407819),(62.374466,29.424872),(62.279433,29.451485),(62.196286,29.47492),(62.112983,29.498226),(62.029629,29.521636),(61.946534,29.545045),(61.863231,29.568351),(61.779929,29.591787),(61.696626,29.615196),(61.613427,29.638502),(61.530228,29.66186),(61.446926,29.685321),(61.363727,29.708627),(61.280424,29.732011),(61.197225,29.755343),(61.114026,29.778726),(61.030621,29.802161),(60.947422,29.825519),(60.876005,29.845518),(60.844379,29.858179),(60.902153,29.916883),(60.978117,29.994139),(60.978427,29.994398),(61.133043,30.154285),
(61.215622,30.239706),(61.272156,30.298307),(61.380367,30.41029),(61.461705,30.494522),(61.461705,30.494574),(61.57622,30.613895),(61.702621,30.745515),(61.7852,30.831401),(61.802253,30.847059),(61.799773,30.852381),(61.798429,30.853467),(61.799876,30.871502),(61.802356,30.87853),(61.808351,30.881424),(61.804527,30.94974),(61.800186,30.961419),(61.819306,30.993923),(61.826438,31.014956),(61.826334,31.034593),(61.821477,31.054488),(61.809281,31.087148),(61.791814,31.118929),(61.789334,31.129315),(61.78768,31.158616),(61.779205,31.181095),(61.752437,31.219129),(61.742722,31.239541),(61.742308,31.259437),(61.749233,31.30238),(61.742308,31.32088),(61.706548,31.359844),(61.686911,31.373177),(61.661176,31.38191),(61.606259,31.388741),(61.490851,31.403097),(61.29479,31.427644),(61.125149,31.448887),(61.123535,31.449089),(60.956,31.47007),(60.855024,31.482731),(60.821744,31.494668),(60.809962,31.588357),(60.794976,31.636675),(60.792599,31.660084),(60.805311,31.733981),(60.791462,31.826534),(60.789292,31.874955),(60.796733,31.93676),(60.794976,31.958464),(60.791049,31.969161),(60.786088,31.978979),(60.784021,31.989263),(60.792599,32.011277),(60.786708,32.01691),(60.778336,32.021044),(60.774926,32.027194),(60.782677,32.042541),(60.808825,32.07117),(60.81482,32.087862),(60.828256,32.167443),(60.830323,32.248885),(60.79632,32.355907),(60.752291,32.494581),(60.711674,32.61031),(60.67488,32.715601),(60.64005,32.815517),(60.60615,32.912359),(60.577522,32.994344),(60.562949,33.058268),(60.561502,33.137307),(60.5676,33.151285),(60.615969,33.206191),(60.670332,33.267712),(60.719012,33.323032),(60.757046,33.366337),(60.785881,33.387782),(60.818644,33.404887),(60.829082,33.416282),(60.834664,33.436255),(60.832803,33.453463),(60.829289,33.470129),(60.832183,33.484495),(60.849856,33.494365),(60.897295,33.497026),(60.912075,33.501522),(60.919788,33.512571),(60.92086,33.514105),(60.911971,33.528446),(60.895125,33.541132),(60.879622,33.548703),(60.846032,33.555782),(60.807378,33.558159),(60.733894,33.554852),(60.655553,33.559891),(60.574834,33.587796),(60.511789,33.638387),(60.486881,33.71138),(60.494943,33.744117),(60.525845,33.80215),(60.528016,33.84145),(60.499387,33.994257),(60.488461,34.080927),(60.486778,34.094277),(60.492772,34.139028),(60.520884,34.186131),(60.552407,34.220057),(60.587443,34.250262),(60.634986,34.271217),(60.650592,34.28535),(60.643667,34.306641),(60.714361,34.310129),(60.815957,34.315271),(60.890474,34.318914),(60.879105,34.337647),(60.804898,34.417487),(60.789085,34.44348),(60.77937,34.455081),(60.767277,34.464151),(60.742783,34.473452),(60.733894,34.480429),(60.725316,34.504277),(60.718908,34.51115),(60.71033,34.515129),(60.699685,34.516396),(60.714361,34.537428),(60.739269,34.548022),(60.794046,34.554119),(60.818954,34.559907),(60.838488,34.570914),(60.883859,34.613702),(60.888924,34.62166),(60.895435,34.628327),(60.908871,34.634683),(60.922824,34.636543),(60.935846,34.635665),(60.947525,34.63799),(60.957964,34.649411),(60.961994,34.67375),(60.958687,34.699589),(60.959927,34.723205),(60.978014,34.74062),(61.00964,34.76098),(61.02907,34.789712),(61.034755,34.806249),(61.06545,34.814724),(61.073409,34.847693),(61.077749,34.88304),(61.077026,34.891721),(61.072582,34.910325),(61.071548,34.920557),(61.074752,34.933114),(61.088912,34.951925),(61.092012,34.961485),(61.095319,34.981018),(61.109892,35.018535),(61.115887,35.059877),(61.123121,35.074191),(61.147306,35.102096),(61.136764,35.111191),(61.135214,35.119408),(61.137901,35.128296),(61.139865,35.139665),(61.137074,35.143902),(61.101521,35
.183228),(61.096353,35.193202),(61.102347,35.197698),(61.112373,35.202038),(61.112786,35.21165),(61.107929,35.221262),(61.102347,35.225603),(61.101624,35.235318),(61.102451,35.25666),(61.108342,35.277951),(61.122708,35.287666),(61.143999,35.28808),(61.167873,35.291387),(61.187097,35.300379),(61.195055,35.318104),(61.190404,35.369573),(61.195055,35.390089),(61.200843,35.401148),(61.221823,35.424247),(61.227301,35.434479),(61.235983,35.465847),(61.247041,35.48507),(61.270503,35.508428),(61.283835,35.52729),(61.290036,35.548064),(61.287556,35.568269),(61.277014,35.609197),(61.269676,35.618499),(61.283318,35.622633),(61.335718,35.630901),(61.344193,35.630901),(61.351531,35.627801),(61.360936,35.621031),(61.360109,35.618395),(61.351531,35.60713),(61.363934,35.598242),(61.365484,35.5985),(61.367654,35.597983),(61.383467,35.586459),(61.383881,35.582119),(61.379953,35.571318),(61.380263,35.567339),(61.386878,35.559536),(61.393182,35.553593),(61.401244,35.549821),(61.412613,35.548322),(61.419309,35.545891),(61.427702,35.542845),(61.492711,35.494165),(61.538703,35.452307),(61.604746,35.430603),(61.739621,35.413757),(61.799049,35.417478),(61.919352,35.451842),(61.977746,35.450912),(62.00746,35.438923),(62.033815,35.423989),(62.136341,35.341358),(62.163885,35.326165),(62.219023,35.30627),(62.2337,35.293764),(62.244345,35.278055),(62.251218,35.260381),(62.252613,35.242811),(62.25096,35.202349),(62.256851,35.187156),(62.275713,35.151034),(62.286203,35.140647),(62.302016,35.14752),(62.399323,35.255937),(62.431879,35.280638),(62.458905,35.281879),(62.524586,35.232166),(62.53549,35.22736),(62.545515,35.228032),(62.555437,35.230409),(62.566186,35.230771),(62.595641,35.22059),(62.60453,35.219402),(62.621066,35.222709),(62.69362,35.248392),(62.708606,35.25604),(62.721628,35.266427),(62.734238,35.280742),(62.760489,35.302446),(62.823063,35.327514),(62.827565,35.329318),(62.918206,35.388177),(62.985282,35.414584),(63.005539,35.418408),(63.047087,35.421353),(63.066827,35.425591),(63.080211,35.437063),(63.086257,35.455666),(63.090495,35.493545),(63.100727,35.524964),(63.097213,35.535455),(63.080573,35.546514),(63.077369,35.558761),(63.07494,35.609817),(63.076646,35.6247),(63.103156,35.646042),(63.179895,35.666455),(63.207697,35.683198),(63.213588,35.692293),(63.214415,35.699269),(63.209351,35.704488),(63.183926,35.712343),(63.17726,35.716891),(63.163255,35.73229),(63.112044,35.769911),(63.105688,35.782882),(63.101295,35.800555),(63.091322,35.813319),(63.084397,35.826083),(63.088634,35.843395),(63.104086,35.856469),(63.125273,35.860241),(63.342934,35.856262),(63.430216,35.871042),(63.511089,35.901841),(63.52592,35.912693),(63.549588,35.939048),(63.563438,35.950623),(63.582093,35.959047),(63.602298,35.962664),(63.725702,35.968865),(63.765906,35.977288),(63.859854,36.022092),(63.889826,36.029637),(63.906327,36.031732),(63.921969,36.033719),(63.949977,36.028861),(64.045114,35.998734),(64.045579,36.011291),(64.03638,36.058575),(64.036225,36.076404),(64.044648,36.09232),(64.057258,36.105239),(64.123352,36.146064),(64.158698,36.160275),(64.195905,36.165907),(64.237143,36.16043),(64.253731,36.154849),(64.266134,36.152471),(64.276056,36.159241),(64.291869,36.199445),(64.303651,36.211538),(64.318947,36.219754),(64.33724,36.225955),(64.410879,36.239856),(64.444572,36.249985),(64.477697,36.271637),(64.498161,36.291429),(64.576657,36.389615),(64.594382,36.424186),(64.605855,36.461083),(64.610505,36.500926),(64.607767,36.521648),(64.593711,36.559837),(64.589008,36.580249),(64.588801,36.600454),(64.592005,36.621073),(64.604821,3
6.660554),(64.621357,36.692593),(64.728549,36.85057),(64.765225,36.904622),(64.778764,36.93847),(64.77463,36.977589),(64.755613,37.053631),(64.76016,37.092621),(64.778764,37.118046),(64.807909,37.135512),(64.989087,37.213673),(65.063088,37.233232),(65.487922,37.241997),(65.501407,37.242276),(65.536857,37.257107),(65.583159,37.30806),(65.611271,37.331056),(65.624087,37.345138),(65.629461,37.365421),(65.628893,37.387538),(65.620986,37.430404),(65.625741,37.452805),(65.635766,37.474096),(65.648168,37.493837),(65.648272,37.49394),(65.648323,37.493992),(65.648323,37.494147),(65.65871,37.51027),(65.668839,37.520553),(65.681034,37.526393),(65.698294,37.529416),(65.739429,37.529054),(65.752658,37.537865),(65.758859,37.564117),(65.761443,37.578379),(65.776532,37.572902),(65.79648,37.569336),(65.804334,37.565305),(65.821078,37.535178),(65.829087,37.525618),(65.836064,37.519158),(65.855184,37.507944),(66.079666,37.440868),(66.097856,37.428414),(66.125607,37.3987),(66.142918,37.385058),(66.163072,37.376893),(66.174648,37.375498),(66.207721,37.376893),(66.218469,37.3748),(66.232525,37.365421),(66.241879,37.363302),(66.250848,37.35965),(66.257743,37.356842),(66.274383,37.343277),(66.294434,37.331211),(66.320685,37.329144),(66.389622,37.347179),(66.413496,37.349633),(66.423625,37.34568),(66.438663,37.327904),(66.451065,37.322943),(66.461452,37.324441),(66.469927,37.329764),(66.485223,37.343406),(66.502896,37.355654),(66.519588,37.36418),(66.539018,37.36909),(66.564598,37.370692),(66.588576,37.36847),(66.65467,37.346145),(66.667021,37.344311),(66.694099,37.343406),(66.704745,37.346533),(66.724433,37.36015),(66.734975,37.363302),(66.865768,37.367927),(66.957649,37.385213),(67.005915,37.384386),(67.024105,37.377487),(67.064102,37.354569),(67.085083,37.349633),(67.097382,37.34059),(67.113918,37.297182),(67.123324,37.281963),(67.143477,37.27186),(67.187816,37.258244),(67.20797,37.243464),(67.217995,37.226256),(67.225281,37.207704),(67.236753,37.192408),(67.259129,37.185147),(67.281453,37.188661),(67.319591,37.208428),(67.345222,37.213027),(67.36951,37.21468),(67.391163,37.21959),(67.411265,37.22791),(67.431108,37.239769),(67.463975,37.266279),(67.482371,37.277364),(67.503145,37.281963),(67.526141,37.272894),(67.545003,37.231527),(67.561385,37.2199),(67.572185,37.223155),(67.579937,37.232664),(67.586034,37.242612),(67.592287,37.247185),(67.602726,37.24827),(67.621794,37.252921),(67.633163,37.254058),(67.644635,37.251216),(67.666443,37.238012),(67.677502,37.233542),(67.690162,37.23225),(67.725871,37.233542),(67.746593,37.229408),(67.764266,37.220882),(67.775997,37.207549),(67.780544,37.188868),(67.771966,37.12497),(67.773051,37.109984),(67.78535,37.096548),(67.808036,37.083241),(67.832893,37.073087),(67.87811,37.064431),(67.891959,37.052029),(67.902501,37.034381),(67.918417,37.013814),(67.929269,37.006244),(67.952937,36.996244),(67.961825,36.98994),(67.99619,36.955782),(68.00203,36.947487),(68.00606,36.938677),(68.011125,36.930874),(68.02022,36.92568),(68.02482,36.925518),(68.032725,36.925241),(68.044714,36.92922),(68.054946,36.934646),(68.121092,36.980276),(68.133391,36.986529),(68.151995,36.992782),(68.167963,37.006089),(68.186411,37.01831),(68.212559,37.021256),(68.253746,37.010223),(68.277775,37.010068),(68.288317,37.024408),(68.281599,37.066472),(68.280772,37.086652),(68.288317,37.103214),(68.307282,37.114221),(68.326661,37.113033),(68.346091,37.106909),(68.365832,37.103214),(68.391877,37.105437),(68.411462,37.113369),(68.418025,37.128277),(68.403711,37.151635),(68.465774,37.155356),(68.513316,37.163986),(6
8.523858,37.164658),(68.532747,37.168999),(68.546544,37.188222),(68.551453,37.192615),(68.571401,37.194708),(68.608814,37.204087),(68.626901,37.206257),(68.630777,37.213828),(68.63641,37.229925),(68.648605,37.244369),(68.671239,37.247185),(68.665452,37.259381),(68.662661,37.263721),(68.657545,37.267675),(68.668862,37.278294),(68.686949,37.279018),(68.725913,37.274522),(68.746532,37.276227),(68.759606,37.275245),(68.771285,37.270724),(68.808492,37.251629),(68.820378,37.250906),(68.824615,37.260621),(68.822031,37.281963),(68.810197,37.312091),(68.813815,37.323408),(68.835829,37.329144),(68.856758,37.324906),(68.868023,37.312142),(68.877015,37.296097),(68.890296,37.281963),(68.904714,37.276692),(68.915824,37.279069),(68.921147,37.287958),(68.917633,37.302427),(68.912982,37.306768),(68.896755,37.316483),(68.890296,37.322943),(68.885077,37.334828),(68.887867,37.338135),(68.893655,37.337154),(68.897169,37.335965),(68.965433,37.329144),(68.985225,37.320359),(69.005483,37.305528),(69.021037,37.287958),(69.035765,37.251577),(69.055609,37.237108),(69.078398,37.225171),(69.096433,37.213027),(69.114313,37.177473),(69.123615,37.169205),(69.145732,37.156958),(69.151003,37.155046),(69.245675,37.103886),(69.265777,37.105411),(69.285621,37.11293),(69.308617,37.116882),(69.312079,37.117477),(69.323913,37.120991),(69.334765,37.129053),(69.350836,37.144142),(69.391196,37.164658),(69.407732,37.177473),(69.442148,37.223621),(69.445197,37.23641),(69.426852,37.239459),(69.409851,37.245661),(69.417912,37.267675),(69.409696,37.276744),(69.403081,37.31116),(69.394606,37.326069),(69.389439,37.332916),(69.386441,37.34152),(69.384891,37.3509),(69.384478,37.36015),(69.385821,37.363147),(69.388508,37.363767),(69.390885,37.365317),(69.391196,37.370692),(69.388302,37.37511),(69.378948,37.377125),(69.377036,37.380665),(69.376313,37.418854),(69.37838,37.437406),(69.384478,37.453271),(69.389645,37.4588),(69.404011,37.466861),(69.411711,37.47252),(69.415587,37.477507),(69.421788,37.489496),(69.425405,37.49425),(69.469744,37.52014),(69.491448,37.536986),(69.508656,37.578973),(69.528655,37.586027),(69.664977,37.576157),(69.687301,37.579594),(69.729573,37.594089),(69.754274,37.596673),(69.762232,37.595794),(69.784453,37.59042),(69.791843,37.586105),(69.80223,37.581506),(69.812668,37.585846),(69.823004,37.592952),(69.832926,37.596673),(69.853286,37.60122),(69.895247,37.61817),(69.918812,37.617137),(69.935245,37.605664),(69.944547,37.589696),(69.955399,37.575485),(69.976586,37.569336),(69.989505,37.56732),(69.994673,37.562256),(69.998497,37.55559),(70.004187,37.551371),(70.007592,37.548846),(70.016894,37.545978),(70.048003,37.541431),(70.041698,37.548846),(70.06919,37.545203),(70.099059,37.536573),(70.129135,37.532284),(70.157867,37.541431),(70.192697,37.576028),(70.199415,37.579284),(70.201947,37.587991),(70.216468,37.617137),(70.218432,37.617447),(70.237655,37.618118),(70.240963,37.617137),(70.246079,37.621271),(70.248301,37.623338),(70.253934,37.630805),(70.255225,37.637471),(70.253882,37.646592),(70.253572,37.654576),(70.257706,37.65809),(70.262564,37.660881),(70.275121,37.675479),(70.277033,37.687985),(70.280547,37.696046),(70.283079,37.704831),(70.281891,37.719559),(70.271762,37.748705),(70.269592,37.762632),(70.275121,37.774181),(70.265561,37.781209),(70.256517,37.791803),(70.249851,37.804877),(70.247164,37.819166),(70.240343,37.827563),(70.207476,37.835624),(70.196004,37.839914),(70.179158,37.860791),(70.165205,37.889911),(70.160503,37.920658),(70.17213,37.946083),(70.192232,37.93288),(70.214711,37.929262),(70.238121,37.9323
63),(70.26153,37.939287),(70.253675,37.947401),(70.250161,37.955617),(70.25042,37.964144),(70.253934,37.973394),(70.26091,37.976469),(70.272692,37.978174),(70.283751,37.981869),(70.288712,37.990809),(70.293776,37.996106),(70.298583,37.998134),(70.317961,38.006312),(70.326281,38.011273),(70.371187,38.058298),(70.415371,38.094523),(70.425964,38.100647),(70.460071,38.112248),(70.470303,38.120517),(70.482602,38.137363),(70.508543,38.192502),(70.537947,38.238055),(70.547197,38.262653),(70.559393,38.268182),(70.573036,38.27105),(70.583061,38.275081),(70.5966,38.309187),(70.596703,38.317585),(70.595566,38.327791),(70.595773,38.338178),(70.600011,38.347066),(70.609312,38.3512),(70.632102,38.350115),(70.641352,38.354197),(70.676698,38.37492),(70.68476,38.38665),(70.664865,38.398639),(70.664865,38.405409),(70.683313,38.414581),(70.741811,38.419439),(70.754213,38.436466),(70.761086,38.44352),(70.777261,38.446466),(70.808887,38.446362),(70.817982,38.444993),(70.834105,38.4406),(70.843045,38.440161),(70.849814,38.442823),(70.853587,38.447422),(70.859323,38.451659),(70.871002,38.45321),(70.91224,38.437655),(70.936217,38.433004),(70.946759,38.443236),(70.943452,38.465948),(70.950739,38.473053),(70.974045,38.473673),(70.986809,38.470909),(70.998229,38.465664),(71.008564,38.458584),(71.018641,38.449825),(71.024584,38.441918),(71.032956,38.423651),(71.039467,38.415331),(71.049802,38.408664)] +Albania [(19.747766,42.578901),(19.746009,42.579934),(19.741358,42.574405),(19.733916,42.562597),(19.732159,42.555543),(19.730299,42.540428),(19.730919,42.533658),(19.73433,42.524382),(19.75159,42.493402),(19.784456,42.474566),(19.801406,42.468132),(19.819596,42.466479),(19.829063,42.468723),(19.835512,42.470251),(19.873339,42.486839),(19.882538,42.493402),(19.882745,42.493557),(19.882745,42.493609),(19.907549,42.506399),(19.956332,42.505314),(19.981757,42.510765),(20.017723,42.546241),(20.039221,42.557765),(20.064956,42.546758),(20.085626,42.530015),(20.135546,42.509629),(20.152392,42.493712),(20.152599,42.493712),(20.152599,42.493609),(20.152702,42.493428),(20.152702,42.493402),(20.180814,42.443095),(20.186292,42.437437),(20.199728,42.427799),(20.204689,42.420099),(20.204379,42.411831),(20.194043,42.4001),(20.192803,42.393693),(20.197454,42.38744),(20.21306,42.377518),(20.218848,42.371446),(20.221225,42.36372),(20.219468,42.350207),(20.220915,42.343101),(20.229597,42.32672),(20.237762,42.319924),(20.249647,42.318607),(20.317964,42.319821),(20.333363,42.317883),(20.456973,42.250497),(20.473665,42.237123),(20.481674,42.230705),(20.500795,42.211223),(20.538622,42.150116),(20.549371,42.123476),(20.551954,42.105803),(20.552161,42.07379),(20.558259,42.055109),(20.589678,41.993639),(20.594329,41.973718),(20.59929,41.960567),(20.59929,41.94788),(20.588748,41.929586),(20.580583,41.921732),(20.573348,41.917675),(20.567251,41.912172),(20.562496,41.900105),(20.562703,41.892845),(20.567767,41.88052),(20.567147,41.873182),(20.541722,41.86158),(20.540786,41.844865),(20.540482,41.839437),(20.54844,41.814193),(20.550136,41.80006),(20.550921,41.793522),(20.54441,41.784763),(20.521155,41.767658),(20.511337,41.757943),(20.503275,41.744637),(20.500381,41.734095),(20.508339,41.661748),(20.513404,41.640405),(20.534591,41.594026),(20.53467,41.587324),(20.534694,41.585266),(20.52932,41.574879),(20.520845,41.568368),(20.507409,41.56227),(20.49294,41.557671),(20.444157,41.549661),(20.447878,41.53545),(20.444984,41.508475),(20.452012,41.493592),(20.463174,41.489768),(20.470822,41.483774),(20.481597,41.468269),(20.483535,41.46548),(20.486842,41.4
57781),(20.488909,41.441451),(20.490873,41.436025),(20.498004,41.431477),(20.514644,41.429462),(20.522189,41.42569),(20.534074,41.412977),(20.539049,41.402944),(20.540172,41.400678),(20.539965,41.387139),(20.532937,41.370447),(20.523429,41.356805),(20.510407,41.344403),(20.49604,41.337788),(20.481674,41.341199),(20.478108,41.321585),(20.477747,41.319598),(20.483121,41.289471),(20.500071,41.23552),(20.51268,41.210147),(20.549681,41.170615),(20.565494,41.147412),(20.570558,41.124829),(20.569938,41.107104),(20.576966,41.093513),(20.597419,41.086273),(20.605284,41.083488),(20.618927,41.082403),(20.631536,41.082868),(20.634129,41.082576),(20.643008,41.081576),(20.65386,41.075272),(20.664092,41.059149),(20.683419,40.99383),(20.702746,40.936314),(20.717216,40.913214),(20.730573,40.904611),(20.740883,40.89797),(20.766102,40.893732),(20.783672,40.899055),(20.816951,40.920242),(20.836544,40.923904),(20.837415,40.924066),(20.890228,40.918279),(20.939941,40.907065),(20.956684,40.894766),(20.964849,40.875956),(20.965262,40.849394),(20.967019,40.802007),(20.960922,40.780768),(20.943662,40.765265),(20.95348,40.759167),(20.957201,40.751622),(20.959165,40.743561),(20.963505,40.736068),(20.992031,40.7155),(21.018593,40.690902),(21.028825,40.676846),(21.035336,40.65938),(21.036679,40.639743),(21.033165,40.621398),(21.02252,40.586619),(21.020763,40.575096),(21.02035,40.566776),(21.018799,40.558972),(21.013528,40.548896),(21.006707,40.543211),(20.997509,40.53918),(20.98831,40.533496),(20.981489,40.523264),(20.96857,40.520577),(20.957924,40.514944),(20.95193,40.506056),(20.952653,40.493653),(20.94976,40.487866),(20.946452,40.482905),(20.93684,40.472518),(20.911932,40.459547),(20.889608,40.463733),(20.866767,40.472053),(20.839999,40.471639),(20.826666,40.464818),(20.821292,40.456395),(20.817365,40.446835),(20.8092,40.436603),(20.800311,40.432779),(20.779537,40.429006),(20.770649,40.421978),(20.768995,40.412573),(20.773336,40.38534),(20.773853,40.374901),(20.770442,40.36255),(20.765998,40.354282),(20.755043,40.338624),(20.739552,40.309166),(20.736646,40.303639),(20.728791,40.298265),(20.717836,40.293924),(20.706984,40.288085),(20.699129,40.278369),(20.694581,40.25558),(20.696648,40.235581),(20.696442,40.215066),(20.68528,40.191191),(20.679802,40.187832),(20.665022,40.18437),(20.660372,40.178892),(20.659855,40.172743),(20.663162,40.161684),(20.663472,40.15662),(20.668226,40.138068),(20.666779,40.13352),(20.651793,40.101016),(20.647556,40.09404),(20.640218,40.090112),(20.615361,40.081017),(20.577793,40.067271),(20.552885,40.065411),(20.499865,40.071457),(20.481674,40.067788),(20.465965,40.060812),(20.432789,40.063809),(20.413772,40.057194),(20.400439,40.045257),(20.38907,40.029444),(20.380699,40.011667),(20.376461,39.993736),(20.359925,39.991048),(20.310005,39.98986),(20.297913,39.986914),(20.302977,39.979111),(20.317964,39.918391),(20.323131,39.912397),(20.346902,39.894259),(20.392998,39.835451),(20.397028,39.818087),(20.388967,39.79814),(20.371604,39.784291),(20.35455,39.78579),(20.336877,39.794006),(20.299577,39.805056),(20.298326,39.805427),(20.288405,39.806615),(20.280033,39.80398),(20.273005,39.796487),(20.275485,39.792404),(20.281377,39.788167),(20.284684,39.780312),(20.283134,39.775144),(20.276829,39.763155),(20.275692,39.760055),(20.278173,39.756748),(20.285511,39.749668),(20.287268,39.746671),(20.291092,39.737937),(20.296466,39.733493),(20.299463,39.728222),(20.296466,39.71737),(20.28272,39.704399),(20.264117,39.696079),(20.249027,39.684814),(20.24603,39.662903),(20.237245,39.667037),(20.228563,39.669156),
(20.220088,39.669156),(20.212027,39.667037),(20.204172,39.660319),(20.203035,39.65262),(20.203449,39.645333),(20.199935,39.640062),(20.184122,39.637013),(20.163761,39.6443),(20.135236,39.664195),(20.089347,39.682799),(20.049556,39.69272),(20.01607,39.701402),(19.99991,39.693498),(19.999848,39.693508),(19.996837,39.692572),(19.99586,39.686713),(19.989024,39.686713),(19.983572,39.701361),(19.98699,39.717963),(20.002696,39.748236),(19.985606,39.75373),(19.980642,39.75731),(19.99586,39.782945),(20.014171,39.832953),(20.016287,39.845038),(20.009776,39.865668),(19.999278,39.871405),(19.983735,39.872382),(19.961681,39.878567),(19.960216,39.881049),(19.959972,39.885077),(19.958832,39.88935),(19.954845,39.892808),(19.941173,39.898993),(19.931163,39.901923),(19.914073,39.902411),(19.906423,39.90644),(19.906423,39.912665),(19.921235,39.926947),(19.931,39.934516),(19.941173,39.940009),(19.923839,39.952826),(19.876801,40.022366),(19.870453,40.034735),(19.863455,40.045356),(19.855235,40.049872),(19.823253,40.050605),(19.807384,40.052883),(19.79656,40.057318),(19.804535,40.063463),(19.777354,40.077786),(19.773123,40.071234),(19.771658,40.068305),(19.770518,40.063463),(19.767263,40.072455),(19.762462,40.07685),(19.756358,40.079901),(19.74936,40.084621),(19.748057,40.088609),(19.740896,40.100409),(19.732677,40.110256),(19.729015,40.108222),(19.719086,40.116034),(19.653168,40.132392),(19.59783,40.157701),(19.565766,40.179836),(19.509288,40.194485),(19.476248,40.21369),(19.381114,40.295233),(19.369965,40.309027),(19.365082,40.317369),(19.351573,40.362047),(19.344249,40.374335),(19.309255,40.400824),(19.296886,40.41356),(19.295177,40.417548),(19.295177,40.418687),(19.289561,40.420396),(19.289561,40.427232),(19.316173,40.437812),(19.33253,40.43891),(19.348155,40.43065),(19.3838,40.397406),(19.39324,40.386298),(19.406505,40.362616),(19.411957,40.34809),(19.414236,40.334418),(19.423595,40.330024),(19.444591,40.333808),(19.465587,40.341498),(19.474946,40.348375),(19.481619,40.439114),(19.47877,40.453925),(19.453136,40.465318),(19.418793,40.491889),(19.391368,40.521796),(19.386404,40.54328),(19.39324,40.537055),(19.396007,40.529608),(19.398611,40.520941),(19.406749,40.522773),(19.414236,40.522773),(19.433604,40.505927),(19.446056,40.511135),(19.452647,40.529853),(19.4546,40.55386),(19.442149,40.573065),(19.415701,40.583482),(19.391612,40.578315),(19.386404,40.550727),(19.379242,40.566148),(19.309825,40.644355),(19.304373,40.653144),(19.309337,40.665025),(19.322927,40.674872),(19.339041,40.684068),(19.351573,40.694159),(19.360525,40.709784),(19.364513,40.726508),(19.365571,40.785956),(19.368826,40.802151),(19.377778,40.812974),(19.396251,40.817043),(19.406749,40.820746),(19.404063,40.829291),(19.396739,40.83869),(19.39324,40.844916),(19.397634,40.852973),(19.404796,40.86107),(19.411388,40.867092),(19.414236,40.868842),(19.406261,40.880316),(19.391124,40.89525),(19.381196,40.908271),(19.389822,40.913804),(19.404959,40.918158),(19.424571,40.939643),(19.440929,40.948635),(19.427013,40.936835),(19.435395,40.913398),(19.427257,40.893378),(19.427257,40.87226),(19.434744,40.87226),(19.437836,40.886542),(19.437673,40.88345),(19.439138,40.880072),(19.440929,40.87226),(19.454112,40.88581),(19.506602,40.905097),(19.523448,40.92064),(19.525157,40.945258),(19.514659,40.964911),(19.501475,40.982978),(19.495453,41.002631),(19.492035,40.976955),(19.479666,40.953925),(19.453868,40.923977),(19.453461,40.928371),(19.447765,40.934312),(19.458507,40.942613),(19.469981,40.956732),(19.477224,40.975002),(19.474946,40.995795),(19.469981,41.
000718),(19.453787,41.005845),(19.447765,41.009467),(19.440603,41.020657),(19.441254,41.026028),(19.445486,41.030707),(19.456391,41.106676),(19.451182,41.122707),(19.445486,41.129462),(19.441173,41.137356),(19.441905,41.143866),(19.463715,41.151313),(19.467133,41.162665),(19.467133,41.176174),(19.468761,41.187567),(19.509532,41.237291),(19.516612,41.256415),(19.513357,41.276679),(19.50294,41.294135),(19.488048,41.306464),(19.471934,41.311103),(19.450043,41.311103),(19.431651,41.313707),(19.418956,41.322577),(19.414236,41.341498),(19.414236,41.382799),(19.410818,41.391669),(19.396007,41.402045),(19.39324,41.41352),(19.406423,41.400377),(19.428722,41.401313),(19.44516,41.415188),(19.440929,41.440863),(19.46461,41.450873),(19.489268,41.466132),(19.508637,41.486884),(19.516612,41.513414),(19.514171,41.529975),(19.507172,41.536282),(19.496349,41.538764),(19.482432,41.54385),(19.468028,41.554267),(19.451996,41.574774),(19.440929,41.585395),(19.528087,41.576565),(19.556407,41.583075),(19.565196,41.58511),(19.584809,41.619574),(19.591645,41.619574),(19.592296,41.608588),(19.596039,41.602729),(19.603038,41.601752),(19.612966,41.605292),(19.604991,41.632148),(19.600108,41.639106),(19.591645,41.632636),(19.587087,41.640692),(19.557628,41.660549),(19.578136,41.687649),(19.576345,41.71894),(19.571137,41.752387),(19.5713,41.770413),(19.589529,41.765692),(19.598969,41.778713),(19.599783,41.79975),(19.591645,41.818793),(19.585216,41.818549),(19.465505,41.855455),(19.451182,41.862535),(19.432953,41.868598),(19.365122,41.852372),(19.364223,41.862846),(19.364947,41.888995),(19.356989,41.89964),(19.347687,41.906358),(19.34531,41.910337),(19.34593,41.914471),(19.345413,41.921189),(19.346653,41.926176),(19.350477,41.931369),(19.352854,41.938501),(19.346447,41.961548),(19.351614,41.965166),(19.359779,41.965993),(19.365774,41.969713),(19.371045,41.98656),(19.363707,41.993484),(19.363396,41.993639),(19.354198,42.008703),(19.350581,42.027823),(19.351718,42.047615),(19.356575,42.064798),(19.374869,42.094667),(19.372491,42.104253),(19.369998,42.106507),(19.355025,42.12004),(19.297767,42.151666),(19.282058,42.164559),(19.281956,42.164723),(19.272033,42.180682),(19.274926,42.191276),(19.304589,42.215099),(19.400397,42.325893),(19.401327,42.331061),(19.400707,42.344755),(19.402671,42.352015),(19.412076,42.368397),(19.417243,42.374107),(19.468403,42.41811),(19.481735,42.434491),(19.501476,42.444129),(19.517806,42.45803),(19.531655,42.474773),(19.543747,42.493402),(19.543747,42.493557),(19.549328,42.508595),(19.561317,42.516243),(19.575683,42.522341),(19.588086,42.532857),(19.593667,42.54482),(19.599248,42.571046),(19.605035,42.584844),(19.621779,42.604997),(19.647927,42.627838),(19.658037,42.634613),(19.676039,42.646675),(19.699293,42.654814),(19.722134,42.64608),(19.737534,42.624402),(19.746009,42.5989),(19.746885,42.58893),(19.747766,42.578901)]
+Andorra 
[(1.707006,42.502781),(1.697498,42.494462),(1.686336,42.490612),(1.674244,42.490508),(1.662358,42.493712),(1.659774,42.496813),(1.656984,42.49764),(1.653986,42.496529),(1.650369,42.493402),(1.639517,42.466427),(1.607478,42.456428),(1.544432,42.450356),(1.538851,42.445653),(1.534511,42.439917),(1.528206,42.434233),(1.51663,42.429504),(1.508466,42.428677),(1.447901,42.434646),(1.436429,42.440951),(1.436429,42.453482),(1.407593,42.486762),(1.424543,42.492472),(1.430227,42.493557),(1.449968,42.504073),(1.446557,42.519886),(1.428987,42.531462),(1.406456,42.52924),(1.409764,42.540609),(1.4263,42.561796),(1.426403,42.565646),(1.418032,42.569832),(1.419272,42.579263),(1.424853,42.589365),(1.429297,42.595386),(1.451415,42.602052),(1.466814,42.641455),(1.49844,42.640241),(1.527793,42.648535),(1.543089,42.649362),(1.597349,42.621921),(1.608304,42.618123),(1.721993,42.609855),(1.713311,42.589546),(1.729434,42.582001),(1.752688,42.576679),(1.761107,42.567646),(1.765091,42.563372),(1.739976,42.561641),(1.721683,42.548515),(1.710624,42.527741),(1.707006,42.502781)]
+Ashmore and Cartier Islands [(123.597016,-12.428318),(123.597748,-12.438572),(123.575613,-12.436782),(123.575043,-12.426609),(123.597016,-12.428318)]
+Austria [(15.161792,48.937221),(15.238067,48.95076),(15.243234,48.95293),(15.257084,48.963886),(15.260391,48.969157),(15.262872,48.982593),(15.266902,48.986675),(15.27455,48.986727),(15.279615,48.982593),(15.283749,48.977632),(15.28871,48.975358),(15.335529,48.974996),(15.357853,48.97081),(15.405912,48.954687),(15.450354,48.944817),(15.471851,48.936704),(15.521254,48.908489),(15.544187,48.902612),(15.603729,48.887353),(15.681347,48.858466),(15.703775,48.854952),(15.726823,48.854952),(15.743669,48.858466),(15.779222,48.870868),(15.787904,48.872263),(15.818497,48.872315),(15.824284,48.869421),(15.828315,48.857122),(15.833276,48.852006),(15.840097,48.850094),(15.859218,48.849164),(15.870173,48.843893),(15.877614,48.841671),(15.885986,48.842033),(15.875754,48.833093),(15.888363,48.835056),(15.899629,48.834746),(15.90707,48.830251),(15.908103,48.819709),(15.925053,48.822602),(15.930324,48.818313),(15.930324,48.811389),(15.931771,48.806428),(16.032334,48.758059),(16.085354,48.742866),(16.177751,48.746535),(16.317588,48.73284),(16.318724,48.733512),(16.339498,48.735579),(16.352727,48.728448),(16.358309,48.727259),(16.374018,48.730308),(16.384974,48.737078),(16.43572,48.794542),(16.453083,48.80219),(16.481919,48.7994),(16.492254,48.79971),(16.510341,48.804774),(16.519539,48.805601),(16.528737,48.803534),(16.545791,48.796299),(16.605529,48.784827),(16.624029,48.78338),(16.643149,48.778264),(16.651004,48.766223),(16.654725,48.751857),(16.661959,48.740023),(16.675085,48.733926),(16.691001,48.732065),(16.729035,48.733099),(16.744332,48.729637),(16.780505,48.713772),(16.798902,48.709224),(16.818125,48.710775),(16.856573,48.719818),(16.873006,48.718991),(16.896674,48.696977),(16.910523,48.63078),(16.945043,48.604166),(16.954345,48.557399),(16.949055,48.544836),(16.946903,48.539726),(16.93047,48.528202),(16.913934,48.519339),(16.906492,48.509908),(16.901221,48.496602),(16.875073,48.471539),(16.864944,48.458077),(16.86081,48.443788),(16.851922,48.390407),(16.849338,48.384102),(16.846858,48.381131),(16.845101,48.376583),(16.84448,48.365602),(16.847581,48.359582),(16.855332,48.356455),(16.875486,48.355034),(16.881171,48.352812),(16.891713,48.347024),(16.901945,48.339402),(16.906492,48.33147),(16.904218,48.327025),(16.900291,48.321238),(16.898431,48.316173),(16.902771,48.314106),(16.905459,48.311988),(16.908869,48
.306975),(16.912073,48.301239),(16.913313,48.296691),(16.916621,48.2908),(16.924269,48.287519),(16.933364,48.284728),(16.944423,48.27801),(16.950624,48.276589),(16.954345,48.273127),(16.955275,48.268786),(16.953931,48.25734),(16.954345,48.25256),(16.969951,48.216645),(16.974705,48.198558),(16.974808,48.17688),(16.98194,48.161299),(17.007158,48.142799),(17.020904,48.137166),(17.036923,48.135513),(17.047465,48.130991),(17.062455,48.112724),(17.067206,48.106936),(17.080228,48.097608),(17.069996,48.089159),(17.064829,48.07903),(17.059144,48.060427),(17.063072,48.058773),(17.075061,48.052081),(17.069686,48.035674),(17.092631,48.02725),(17.12498,48.019525),(17.148338,48.005443),(17.085603,47.970148),(17.096145,47.961931),(17.095731,47.95573),(17.090874,47.949477),(17.088083,47.940899),(17.086119,47.93922),(17.076818,47.934956),(17.07413,47.932114),(17.075164,47.927773),(17.080125,47.925758),(17.085396,47.924621),(17.087463,47.922632),(17.083329,47.904829),(17.077541,47.891729),(17.067619,47.881626),(17.051186,47.872893),(17.016563,47.867699),(17.004057,47.863281),(17.004367,47.852377),(17.010672,47.847882),(17.031652,47.841345),(17.039507,47.837365),(17.049119,47.818684),(17.05563,47.812354),(17.040644,47.801114),(17.041988,47.783906),(17.048706,47.763649),(17.050566,47.730989),(17.055527,47.720913),(17.064002,47.71329),(17.075371,47.708484),(17.054804,47.702025),(16.98194,47.695436),(16.902668,47.682026),(16.864738,47.686729),(16.850165,47.712929),(16.836936,47.705358),(16.817299,47.684248),(16.80624,47.676884),(16.797455,47.675463),(16.741128,47.681458),(16.730379,47.685902),(16.719217,47.693731),(16.711569,47.704066),(16.707848,47.714608),(16.702474,47.7236),(16.689865,47.729568),(16.609766,47.750627),(16.567805,47.754192),(16.531115,47.742978),(16.52512,47.733315),(16.526877,47.720137),(16.521399,47.711533),(16.512924,47.706004),(16.47303,47.691767),(16.461145,47.684532),(16.45453,47.681768),(16.449879,47.682414),(16.444918,47.68554),(16.43913,47.690475),(16.431792,47.685463),(16.4166,47.668823),(16.407815,47.66133),(16.425901,47.654276),(16.481919,47.638954),(16.509617,47.64314),(16.575453,47.624949),(16.608422,47.628773),(16.63023,47.622004),(16.647697,47.606139),(16.656172,47.585934),(16.65059,47.566555),(16.66754,47.560096),(16.68139,47.550561),(16.689141,47.53803),(16.688004,47.52263),(16.677359,47.509866),(16.648213,47.501546),(16.636845,47.4932),(16.640875,47.452919),(16.626973,47.445549),(16.589406,47.425633),(16.481919,47.392302),(16.46993,47.405531),(16.456597,47.411836),(16.444091,47.40951),(16.433963,47.39685),(16.436443,47.358506),(16.434273,47.35556),(16.429312,47.353803),(16.424971,47.351116),(16.424351,47.345199),(16.427142,47.341375),(16.431689,47.339721),(16.436133,47.338559),(16.43882,47.336569),(16.469206,47.293238),(16.473237,47.276805),(16.466622,47.263421),(16.452463,47.254584),(16.421354,47.243138),(16.421354,47.243061),(16.421354,47.243035),(16.424661,47.225594),(16.409262,47.203683),(16.412776,47.187173),(16.41877,47.183659),(16.426728,47.184124),(16.4351,47.183581),(16.441818,47.177121),(16.442128,47.168336),(16.433963,47.151283),(16.433653,47.145754),(16.447089,47.139708),(16.480885,47.150767),(16.497318,47.149681),(16.509514,47.137537),(16.50476,47.125833),(16.481919,47.10524),(16.460835,47.0963),(16.454323,47.081701),(16.461765,47.068498),(16.481919,47.063898),(16.493701,47.059816),(16.497215,47.054622),(16.493184,47.049145),(16.481919,47.04421),(16.437167,47.031782),(16.424661,47.024082),(16.45329,47.021679),(16.467553,47.018423),(16.481919,47.00938),(16.486363,
46.998554),(16.467449,46.995427),(16.441404,46.99522),(16.424764,46.993102),(16.424764,46.992998),(16.420424,46.99026),(16.415566,46.989433),(16.410398,46.990415),(16.405024,46.993153),(16.387764,47.002042),(16.366577,47.003825),(16.325856,47.00044),(16.288545,47.005582),(16.274903,47.004315),(16.265394,46.993257),(16.261054,46.97809),(16.252992,46.973516),(16.24307,46.972018),(16.232942,46.966075),(16.230358,46.959977),(16.231185,46.954422),(16.230668,46.94835),(16.22395,46.941064),(16.217025,46.937395),(16.196148,46.931297),(16.170723,46.918533),(16.159148,46.910316),(16.122871,46.876365),(16.110055,46.867916),(16.094035,46.862774),(16.052936,46.84606),(16.032024,46.837556),(16.028441,46.836947),(15.987582,46.830011),(15.981691,46.827685),(15.971976,46.820632),(15.972906,46.818435),(15.97735,46.816213),(15.978177,46.809134),(15.971252,46.778076),(15.970985,46.77505),(15.969702,46.760532),(15.970529,46.743014),(15.982001,46.718545),(16.003291,46.709191),(16.014557,46.693714),(16.016723,46.670691),(16.016727,46.670641),(16.016593,46.670757),(15.997917,46.686919),(15.986962,46.69219),(15.946344,46.697151),(15.878545,46.720715),(15.850743,46.724488),(15.822941,46.722834),(15.78461,46.7122),(15.755141,46.704024),(15.728683,46.70299),(15.652098,46.710819),(15.635975,46.717563),(15.632875,46.702473),(15.632978,46.689632),(15.626984,46.680873),(15.621745,46.678175),(15.616648,46.67555),(15.603729,46.673018),(15.58864,46.675963),(15.567349,46.675757),(15.545955,46.671881),(15.530659,46.663845),(15.52084,46.649608),(15.520308,46.64772),(15.517636,46.63824),(15.513939,46.632545),(15.511228,46.628369),(15.492625,46.618293),(15.462653,46.614649),(15.440018,46.62439),(15.435381,46.627195),(15.417591,46.637955),(15.388135,46.645578),(15.332784,46.64358),(15.204891,46.638963),(15.172557,46.64136),(15.105591,46.646323),(15.085723,46.647795),(15.061954,46.649557),(15.004386,46.636844),(14.967179,46.600257),(14.947955,46.619274),(14.933589,46.621135),(14.919507,46.615017),(14.897726,46.605554),(14.894063,46.605215),(14.877055,46.603642),(14.862999,46.604831),(14.85039,46.601136),(14.833647,46.584393),(14.822278,46.567546),(14.814519,46.551337),(14.807189,46.536024),(14.796663,46.519401),(14.788585,46.506646),(14.783004,46.503261),(14.760887,46.496284),(14.735255,46.493339),(14.726367,46.497706),(14.709727,46.492512),(14.703733,46.487758),(14.698772,46.480962),(14.687093,46.471221),(14.679961,46.458871),(14.672546,46.459791),(14.666629,46.460524),(14.662081,46.459698),(14.642031,46.445228),(14.631696,46.440577),(14.621567,46.43851),(14.599863,46.437167),(14.590148,46.434428),(14.575368,46.419726),(14.566997,46.400373),(14.562108,46.391737),(14.557695,46.38394),(14.540332,46.378643),(14.527102,46.388229),(14.515837,46.40536),(14.502194,46.418356),(14.467675,46.412672),(14.450931,46.414481),(14.437145,46.418884),(14.420029,46.424351),(14.414448,46.429002),(14.411244,46.434635),(14.40649,46.439337),(14.395844,46.440991),(14.362151,46.435875),(14.242323,46.438237),(14.149865,46.440061),(14.147933,46.440425),(14.137255,46.442438),(14.081032,46.47595),(14.066355,46.48104),(14.050026,46.484399),(14.032456,46.484709),(14.014922,46.482531),(13.998763,46.480523),(13.98233,46.481918),(13.890862,46.511787),(13.860683,46.51525),(13.795778,46.507886),(13.782135,46.507782),(13.716093,46.518867),(13.700951,46.519746),(13.685345,46.517627),(13.670256,46.518712),(13.549229,46.545842),(13.506751,46.546927),(13.499103,46.550622),(13.484944,46.561733),(13.478019,46.563567),(13.417041,46.560492),(13.373323,46.565789),(13.271417,4
6.550777),(13.231109,46.552173),(13.210025,46.558012),(13.146463,46.584961),(13.064504,46.598035),(12.83041,46.609637),(12.773566,46.635191),(12.7485,46.640996),(12.739873,46.642994),(12.732018,46.64227),(12.715999,46.637955),(12.706593,46.637749),(12.697395,46.640539),(12.679205,46.650022),(12.671368,46.652466),(12.669593,46.653019),(12.62019,46.656481),(12.562003,46.65121),(12.547327,46.652192),(12.53079,46.65767),(12.499888,46.672139),(12.469632,46.675799),(12.445627,46.678702),(12.40501,46.690123),(12.370387,46.711155),(12.351473,46.743246),(12.342688,46.765131),(12.326048,46.772495),(12.305274,46.774459),(12.284397,46.779988),(12.274785,46.784639),(12.269101,46.788566),(12.266724,46.795336),(12.26693,46.808255),(12.269618,46.819831),(12.273338,46.826755),(12.276129,46.83399),(12.275819,46.846289),(12.26662,46.868148),(12.250807,46.875615),(12.208743,46.876623),(12.1951,46.880085),(12.189002,46.884943),(12.183525,46.891222),(12.172156,46.89918),(12.160994,46.90303),(12.137429,46.905924),(12.126887,46.908869),(12.141563,46.918843),(12.141667,46.927989),(12.134742,46.937446),(12.128231,46.948582),(12.122133,46.971656),(12.117792,46.983076),(12.111178,46.992998),(12.12172,47.010517),(12.182284,47.033616),(12.203575,47.053331),(12.203897,47.067012),(12.204195,47.079686),(12.180631,47.085215),(12.133146,47.078832),(12.116242,47.076559),(12.014956,47.040489),(11.943643,47.038164),(11.899511,47.027725),(11.857033,47.012015),(11.822513,46.993257),(11.777141,46.988296),(11.766082,46.983438),(11.746445,46.972457),(11.73487,46.970622),(11.716163,46.97548),(11.683814,46.991913),(11.664797,46.993257),(11.657252,46.992533),(11.649707,46.992998),(11.648467,46.993257),(11.64795,46.993257),(11.59648,47.00044),(11.572709,46.998915),(11.543667,46.993102),(11.543564,46.993102),(11.543357,46.992998),(11.543254,46.992998),(11.533849,46.989536),(11.524443,46.988347),(11.515348,46.989484),(11.506977,46.992998),(11.501706,46.997856),(11.495918,47.001602),(11.489407,47.004212),(11.471837,47.00708),(11.461812,47.005504),(11.452923,47.000879),(11.445275,46.993102),(11.445172,46.992998),(11.411169,46.970493),(11.380576,46.971553),(11.349467,46.98194),(11.31381,46.987262),(11.243737,46.979252),(11.174491,46.963853),(11.156611,46.956515),(11.091912,46.912435),(11.083643,46.900136),(11.073101,46.86497),(11.054291,46.834145),(11.053464,46.830321),(11.054601,46.819934),(11.052844,46.814921),(11.048503,46.811692),(11.037445,46.808255),(11.033311,46.805568),(11.010883,46.779213),(10.99693,46.76911),(10.982668,46.767741),(10.965718,46.771642),(10.930888,46.773994),(10.913731,46.77234),(10.880348,46.764589),(10.870426,46.764072),(10.860195,46.766991),(10.843141,46.777043),(10.83415,46.780246),(10.823918,46.78035),(10.804901,46.776629),(10.795082,46.776888),(10.76635,46.788101),(10.754775,46.790763),(10.733691,46.786189),(10.722942,46.786448),(10.716947,46.795207),(10.721598,46.800142),(10.743819,46.812518),(10.748677,46.819443),(10.738962,46.829546),(10.662481,46.860965),(10.647081,46.863756),(10.629408,46.862412),(10.527708,46.843214),(10.486264,46.846366),(10.453811,46.864427),(10.451434,46.88577),(10.463836,46.919747),(10.458462,46.936619),(10.449574,46.943906),(10.415571,46.962406),(10.394693,46.985402),(10.384358,46.992998),(10.384358,46.993153),(10.384255,46.993153),(10.378984,46.995505),(10.373403,46.996254),(10.367925,46.995505),(10.338883,46.98411),(10.313665,46.964318),(10.296198,46.941374),(10.295681,46.922693),(10.270773,46.921892),(10.251343,46.92538),(10.235116,46.923313),(10.219924,46.905769),(10.215169,46.89
3108),(10.214342,46.884685),(10.211655,46.877036),(10.201423,46.86683),(10.157808,46.851612),(10.13197,46.846573),(10.125188,46.846751),(10.1113,46.847116),(10.068098,46.856624),(10.045567,46.865564),(10.006913,46.890757),(9.899943,46.914398),(9.875138,46.927421),(9.862426,46.939772),(9.860772,46.949151),(9.863976,46.959925),(9.866457,46.983387),(9.870591,46.992947),(9.870591,46.998838),(9.866767,47.001938),(9.860566,47.001602),(9.856328,47.004083),(9.857982,47.015478),(9.669053,47.056199),(9.65231,47.05793),(9.59991,47.053486),(9.581203,47.05687),(9.608798,47.080771),(9.615723,47.106764),(9.605594,47.132266),(9.581823,47.154901),(9.552057,47.16689),(9.551954,47.175571),(9.561876,47.190609),(9.562909,47.19774),(9.554331,47.211615),(9.544823,47.220323),(9.540378,47.229108),(9.547096,47.243035),(9.53025,47.253654),(9.521155,47.262801),(9.553298,47.299853),(9.587404,47.32781),(9.591228,47.334683),(9.596396,47.352305),(9.601047,47.36127),(9.639804,47.394524),(9.649519,47.409717),(9.650346,47.452092),(9.621717,47.469197),(9.58451,47.480721),(9.554951,47.5109),(9.553059,47.516891),(9.547482,47.534547),(9.549887,47.533534),(9.554144,47.532743),(9.612622,47.521881),(9.676701,47.52263),(9.703986,47.531415),(9.718455,47.546711),(9.730858,47.564901),(9.752665,47.582058),(9.767135,47.587277),(9.782121,47.588414),(9.79628,47.584952),(9.807959,47.576322),(9.811266,47.564488),(9.809303,47.552318),(9.813127,47.54175),(9.833487,47.534619),(9.841549,47.535032),(9.853848,47.540407),(9.858912,47.541337),(9.888988,47.533792),(9.918753,47.53219),(9.933843,47.534206),(9.945935,47.540769),(9.948726,47.524232),(9.959164,47.514879),(9.97167,47.506404),(9.980765,47.492942),(9.980765,47.489919),(9.981075,47.48687),(9.982625,47.481031),(9.993581,47.476586),(10.002366,47.47917),(10.011254,47.484441),(10.022623,47.487955),(10.071819,47.439121),(10.080604,47.42739),(10.075953,47.416874),(10.053215,47.40486),(10.064171,47.396178),(10.066858,47.385791),(10.067788,47.375042),(10.073162,47.365482),(10.082878,47.359074),(10.089596,47.359281),(10.097244,47.363131),(10.11037,47.367394),(10.11533,47.366955),(10.125666,47.362692),(10.130833,47.362666),(10.135071,47.364888),(10.146956,47.373802),(10.190881,47.379228),(10.209278,47.372484),(10.208555,47.354475),(10.190468,47.317475),(10.192742,47.310912),(10.193259,47.304917),(10.190778,47.298251),(10.18592,47.294995),(10.172691,47.292308),(10.168661,47.290422),(10.159462,47.279131),(10.155431,47.272878),(10.159876,47.271121),(10.174951,47.272376),(10.239354,47.277735),(10.261161,47.283497),(10.305913,47.302178),(10.325447,47.314322),(10.34343,47.330756),(10.365651,47.361115),(10.371646,47.367394),(10.381981,47.372122),(10.403065,47.377109),(10.411643,47.381037),(10.428283,47.396049),(10.442856,47.41628),(10.451951,47.438759),(10.451744,47.460205),(10.44761,47.472581),(10.442236,47.481392),(10.433657,47.487852),(10.419601,47.493045),(10.42973,47.532914),(10.43097,47.542164),(10.424356,47.553352),(10.415674,47.564385),(10.414227,47.573015),(10.429213,47.57702),(10.445956,47.579061),(10.453604,47.581102),(10.457118,47.579242),(10.461563,47.569914),(10.460322,47.565987),(10.455465,47.561181),(10.451744,47.554825),(10.453501,47.546246),(10.458772,47.541647),(10.466937,47.53772),(10.482543,47.532862),(10.525125,47.52847),(10.536907,47.529865),(10.549619,47.536789),(10.569876,47.555936),(10.583726,47.562473),(10.60791,47.562266),(10.739892,47.52847),(10.743923,47.525317),(10.74723,47.52015),(10.752191,47.515395),(10.760769,47.513638),(10.790328,47.516067),(10.830636,47.528573),(10.844795,
47.531312),(10.858644,47.530666),(10.86929,47.526583),(10.892234,47.514879),(10.883862,47.508006),(10.851616,47.493097),(10.858438,47.485165),(10.909907,47.46899),(10.959827,47.432506),(10.962824,47.427959),(10.963134,47.421241),(10.96086,47.414316),(10.957656,47.409717),(10.955279,47.409614),(10.965511,47.396075),(10.979464,47.390545),(11.083747,47.389512),(11.10328,47.393568),(11.168599,47.424264),(11.193818,47.428941),(11.215212,47.422998),(11.213041,47.395816),(11.237019,47.393956),(11.258827,47.400725),(11.273089,47.411112),(11.286318,47.423205),(11.305645,47.4354),(11.324249,47.439431),(11.367657,47.440129),(11.383884,47.444857),(11.392462,47.454779),(11.388638,47.46222),(11.377786,47.467052),(11.365487,47.469326),(11.412616,47.506094),(11.435353,47.509711),(11.459951,47.507437),(11.482585,47.50258),(11.529508,47.507851),(11.551212,47.51369),(11.569402,47.527384),(11.574466,47.536634),(11.582321,47.559785),(11.589142,47.570379),(11.617048,47.586761),(11.620458,47.589654),(11.638958,47.589241),(11.682573,47.58335),(11.764015,47.583117),(11.820343,47.575288),(11.830678,47.577614),(11.840703,47.594615),(11.851969,47.599163),(11.935891,47.610738),(12.165645,47.603969),(12.173603,47.60508),(12.202128,47.629833),(12.205745,47.636215),(12.206779,47.645568),(12.205125,47.672285),(12.202335,47.677789),(12.197271,47.680088),(12.189106,47.685566),(12.181768,47.692129),(12.178564,47.697658),(12.1766,47.705823),(12.192413,47.710164),(12.216494,47.723703),(12.225176,47.727372),(12.239749,47.731687),(12.242229,47.732023),(12.239128,47.72745),(12.236545,47.716778),(12.233031,47.708872),(12.226623,47.702878),(12.223315,47.695901),(12.228793,47.684868),(12.238715,47.6789),(12.249567,47.680011),(12.271891,47.687891),(12.293699,47.69001),(12.351473,47.68169),(12.368423,47.683551),(12.407697,47.693731),(12.42413,47.69156),(12.428781,47.685798),(12.43922,47.664482),(12.444697,47.656291),(12.452759,47.649754),(12.482524,47.633579),(12.49658,47.628903),(12.517148,47.628541),(12.538232,47.631331),(12.553838,47.636008),(12.563375,47.641761),(12.598383,47.66288),(12.6174,47.669339),(12.653057,47.675127),(12.688713,47.675127),(12.744834,47.66536),(12.761991,47.666859),(12.751655,47.649392),(12.767365,47.63637),(12.793307,47.624794),(12.81315,47.611824),(12.785865,47.602677),(12.773669,47.5795),(12.77894,47.554825),(12.803642,47.541337),(12.819041,47.539528),(12.828446,47.537255),(12.836921,47.532655),(12.882603,47.498575),(12.931593,47.473512),(12.942548,47.470489),(12.951023,47.472478),(12.959291,47.475527),(12.96973,47.475734),(12.979342,47.472246),(12.984923,47.468267),(12.991124,47.465683),(13.001873,47.466019),(13.008797,47.470101),(13.037323,47.493045),(13.037013,47.501365),(13.034429,47.507437),(13.031225,47.512553),(13.028951,47.518031),(13.028124,47.524232),(13.028124,47.541647),(13.030605,47.544257),(13.040113,47.560457),(13.04125,47.561904),(13.03908,47.561336),(13.038253,47.584022),(13.04094,47.58335),(13.057063,47.597664),(13.057683,47.600429),(13.069569,47.620247),(13.072049,47.622417),(13.07422,47.63451),(13.07484,47.646654),(13.072049,47.659469),(13.064091,47.67399),(13.044557,47.696573),(13.032258,47.706624),(13.019856,47.712929),(13.004353,47.714608),(12.979032,47.707089),(12.964045,47.705358),(12.908338,47.71236),(12.892009,47.723548),(12.910612,47.742927),(12.910819,47.74303),(12.910819,47.743082),(12.917227,47.750161),(12.918984,47.756776),(12.919294,47.763184),(12.921257,47.76954),(12.926735,47.777395),(12.991227,47.847106),(12.986576,47.850259),(12.964976,47.871963),(12.931076,47.924879),(
12.914229,47.93599),(12.906168,47.93891),(12.886014,47.952888),(12.862036,47.962552),(12.853975,47.970484),(12.8486,47.980948),(12.848246,47.982389),(12.845603,47.993144),(12.830617,48.015468),(12.760234,48.064871),(12.750932,48.074741),(12.742043,48.086782),(12.736979,48.099959),(12.738943,48.113447),(12.745041,48.12063),(12.751449,48.122103),(12.75796,48.121896),(12.764058,48.123756),(12.77832,48.138717),(12.782351,48.142024),(12.822142,48.160679),(12.835164,48.170704),(12.853458,48.18998),(12.862553,48.196646),(12.877642,48.202046),(12.93304,48.209255),(12.931696,48.211632),(12.947199,48.220004),(12.979445,48.232018),(13.016652,48.255893),(13.032775,48.263567),(13.136438,48.291059),(13.274207,48.307104),(13.307797,48.319636),(13.405569,48.376583),(13.419522,48.392164),(13.425206,48.413377),(13.42717,48.418544),(13.431717,48.423712),(13.436161,48.430585),(13.438228,48.440998),(13.436368,48.469239),(13.438228,48.478541),(13.442983,48.487713),(13.456418,48.506653),(13.459209,48.516394),(13.457039,48.525179),(13.447737,48.534816),(13.443706,48.552851),(13.440709,48.557347),(13.440089,48.560965),(13.445566,48.567915),(13.454558,48.573445),(13.487218,48.581584),(13.520704,48.584581),(13.624367,48.565357),(13.641834,48.559104),(13.65806,48.551094),(13.673356,48.535488),(13.714336,48.523215),(13.716506,48.521691),(13.725394,48.548381),(13.733869,48.559776),(13.73728,48.559259),(13.758054,48.561481),(13.766012,48.563755),(13.77428,48.569207),(13.779034,48.573935),(13.796191,48.598585),(13.802392,48.611711),(13.805493,48.626129),(13.806216,48.642923),(13.800945,48.675221),(13.803529,48.687159),(13.816862,48.695582),(13.804356,48.699561),(13.799395,48.70261),(13.78875,48.716511),(13.783582,48.71527),(13.785856,48.724779),(13.815725,48.76643),(13.855929,48.759299),(13.874946,48.752426),(13.893136,48.743021),(13.915047,48.73098),(13.98233,48.706485),(13.991115,48.700181),(14.007754,48.683283),(14.014162,48.675066),(14.019227,48.671914),(14.026151,48.670054),(14.032146,48.667263),(14.034729,48.661165),(14.032869,48.654912),(14.028322,48.653982),(14.023464,48.654602),(14.020467,48.653155),(14.013749,48.643802),(14.008064,48.642407),(14.006204,48.639513),(14.010855,48.62587),(14.015196,48.620393),(14.032456,48.60613),(14.040827,48.601169),(14.07483,48.591712),(14.216527,48.58117),(14.315746,48.557916),(14.325358,48.558639),(14.333213,48.560706),(14.353366,48.571429),(14.405353,48.586286),(14.421166,48.597061),(14.443593,48.636516),(14.458166,48.643182),(14.482144,48.624475),(14.522038,48.610626),(14.53382,48.608714),(14.548807,48.610936),(14.578779,48.620496),(14.594179,48.621323),(14.600586,48.617576),(14.601827,48.611194),(14.60193,48.604528),(14.605134,48.600316),(14.612885,48.599748),(14.628182,48.603288),(14.634693,48.602332),(14.645442,48.592901),(14.651333,48.583496),(14.659187,48.576959),(14.675621,48.576235),(14.69114,48.586535),(14.695671,48.589542),(14.700735,48.615535),(14.700219,48.646102),(14.703836,48.673103),(14.722749,48.693411),(14.775356,48.724004),(14.78042,48.743124),(14.790446,48.754855),(14.794476,48.766999),(14.800367,48.776507),(14.815457,48.780331),(14.86765,48.775577),(14.919327,48.761573),(14.939584,48.762813),(14.951056,48.780331),(14.948886,48.78214),(14.944441,48.788599),(14.940101,48.796506),(14.938137,48.80281),(14.93979,48.809787),(14.947025,48.822396),(14.968213,48.885028),(14.968316,48.912674),(14.964182,48.943267),(14.964595,48.971741),(14.978238,48.992825),(14.977308,48.998251),(14.977308,49.002902),(14.978858,49.00626),(14.982062,49.007914),(15.003973,49.009774),(1
5.059576,48.997217),(15.137401,48.993031),(15.144739,48.978924),(15.148977,48.965333),(15.148563,48.951742),(15.141949,48.937479),(15.161792,48.937221)] +Burundi [(30.415073,-2.313088),(30.418484,-2.311847),(30.423238,-2.317325),(30.428406,-2.331381),(30.434142,-2.339236),(30.488454,-2.383781),(30.521733,-2.399387),(30.5546,-2.400628),(30.521475,-2.442279),(30.508091,-2.463466),(30.47016,-2.55576),(30.461995,-2.58749),(30.457655,-2.598032),(30.448043,-2.610537),(30.424168,-2.633171),(30.416003,-2.645574),(30.412128,-2.670172),(30.423238,-2.680921),(30.442668,-2.681127),(30.463752,-2.674203),(30.499616,-2.657873),(30.52256,-2.649398),(30.516101,-2.668311),(30.458585,-2.72867),(30.450523,-2.741795),(30.447268,-2.757298),(30.446906,-2.782723),(30.425099,-2.812179),(30.413626,-2.8344),(30.415797,-2.851659),(30.440808,-2.884009),(30.456001,-2.898272),(30.474191,-2.903233),(30.469644,-2.914188),(30.475431,-2.922146),(30.484113,-2.930724),(30.488325,-2.94316),(30.488454,-2.94354),(30.49326,-2.941266),(30.513052,-2.913981),(30.524937,-2.904163),(30.53889,-2.898995),(30.547003,-2.900339),(30.61196,-2.939199),(30.62519,-2.94478),(30.62953,-2.947674),(30.632114,-2.953772),(30.634905,-2.970102),(30.637592,-2.974443),(30.650925,-2.977337),(30.661363,-2.974753),(30.671337,-2.970825),(30.683687,-2.969998),(30.695986,-2.974546),(30.718879,-2.989532),(30.732057,-2.99346),(30.755414,-2.991393),(30.80213,-2.97837),(30.825488,-2.978577),(30.818815,-2.990495),(30.815824,-2.995837),(30.78487,-3.031493),(30.778255,-3.04741),(30.783216,-3.062086),(30.807814,-3.085237),(30.817684,-3.099086),(30.817839,-3.104357),(30.813085,-3.11769),(30.812258,-3.123477),(30.815824,-3.131022),(30.828795,-3.140531),(30.832929,-3.147662),(30.833962,-3.160271),(30.832205,-3.172777),(30.823524,-3.196651),(30.809364,-3.216805),(30.809881,-3.224143),(30.814325,-3.24192),(30.814842,-3.247708),(30.799753,-3.274579),(30.775671,-3.291012),(30.747921,-3.294113),(30.721463,-3.280987),(30.718414,-3.292873),(30.712006,-3.301865),(30.702239,-3.307342),(30.68994,-3.308893),(30.662603,-3.319228),(30.640538,-3.33287),(30.621676,-3.35044),(30.603279,-3.372558),(30.61025,-3.376359),(30.640434,-3.392815),(30.639142,-3.41948),(30.612891,-3.444905),(30.565968,-3.466712),(30.552946,-3.482215),(30.544988,-3.48914),(30.538115,-3.49162),(30.511502,-3.497718),(30.509951,-3.500199),(30.508246,-3.50888),(30.507161,-3.511051),(30.48804,-3.512601),(30.487524,-3.510741),(30.48556,-3.510534),(30.46892,-3.513014),(30.466336,-3.516115),(30.462616,-3.527897),(30.460135,-3.531928),(30.432282,-3.551875),(30.426856,-3.563864),(30.429233,-3.583914),(30.428871,-3.602208),(30.421378,-3.620811),(30.38014,-3.685614),(30.373112,-3.703597),(30.373068,-3.704339),(30.371975,-3.722924),(30.376213,-3.739771),(30.381897,-3.75517),(30.384998,-3.770776),(30.381587,-3.788346),(30.33699,-3.773774),(30.31172,-3.789897),(30.273273,-3.856249),(30.22077,-3.909993),(30.208781,-3.930457),(30.191418,-4.002494),(30.175852,-4.039992),(30.173228,-4.046315),(30.149766,-4.08683),(30.119587,-4.12352),(30.051478,-4.180157),(30.041143,-4.19504),(30.015614,-4.256018),(30.003005,-4.271935),(29.980785,-4.28444),(29.970656,-4.292605),(29.936343,-4.312035),(29.900479,-4.345625),(29.847356,-4.370533),(29.838571,-4.37353),(29.821828,-4.37074),(29.810976,-4.365262),(29.80033,-4.363919),(29.784104,-4.37384),(29.782295,-4.377354),(29.780177,-4.393684),(29.775009,-4.402262),(29.757956,-4.410221),(29.751651,-4.416112),(29.747931,-4.430271),(29.747414,-4.44319),(29.744365,-4.454456),(29.732841,-4.463344),(29.72829
3,-4.461587),(29.687469,-4.458383),(29.638273,-4.446808),(29.404179,-4.449805),(29.37276,-4.208166),(29.367224,-4.189048),(29.33948,-4.093237),(29.277727,-3.988954),(29.239796,-3.948027),(29.224552,-3.926529),(29.215043,-3.899864),(29.206568,-3.781939),(29.217123,-3.709266),(29.22326,-3.66701),(29.225172,-3.588565),(29.206672,-3.334524),(29.208739,-3.305275),(29.213079,-3.291943),(29.236117,-3.270704),(29.240881,-3.266311),(29.223828,-3.252669),(29.216904,-3.233135),(29.214113,-3.166472),(29.217524,-3.146215),(29.225017,-3.129369),(29.237781,-3.122237),(29.242328,-3.110765),(29.241539,-3.104188),(29.234629,-3.046583),(29.225378,-3.031803),(29.211529,-3.021158),(29.194476,-3.014647),(29.175976,-3.012477),(29.156752,-3.004828),(29.148174,-2.986948),(29.144246,-2.966278),(29.138407,-2.950361),(29.123369,-2.940233),(29.105076,-2.934135),(29.08978,-2.926074),(29.083268,-2.910054),(29.081718,-2.893931),(29.077584,-2.884009),(29.062701,-2.862201),(29.053503,-2.835433),(29.049058,-2.827475),(29.034072,-2.82086),(29.013143,-2.818897),(28.994902,-2.813626),(28.986892,-2.796779),(28.988804,-2.775488),(28.995522,-2.758435),(29.008234,-2.74748),(29.028595,-2.744896),(29.015365,-2.720711),(29.032574,-2.620666),(29.05526,-2.598445),(29.113447,-2.594621),(29.129725,-2.596998),(29.190445,-2.623456),(29.212769,-2.630278),(29.254989,-2.635549),(29.276125,-2.640613),(29.294418,-2.651052),(29.306614,-2.660147),(29.307337,-2.664797),(29.306045,-2.67658),(29.309481,-2.690127),(29.309714,-2.691049),(29.331418,-2.71141),(29.337516,-2.723812),(29.335346,-2.735181),(29.321497,-2.752131),(29.318086,-2.762569),(29.32036,-2.774145),(29.329661,-2.794919),(29.33085,-2.807424),(29.339687,-2.826441),(29.364285,-2.824168),(29.407435,-2.805461),(29.425366,-2.803394),(29.444675,-2.806989),(29.480867,-2.813729),(29.504121,-2.826855),(29.523138,-2.820447),(29.541845,-2.808561),(29.563756,-2.805667),(29.574091,-2.806081),(29.582049,-2.80298),(29.589491,-2.798433),(29.619256,-2.787684),(29.626388,-2.788097),(29.637087,-2.791629),(29.678684,-2.805357),(29.697546,-2.808251),(29.719922,-2.805461),(29.729637,-2.79957),(29.733978,-2.790268),(29.737285,-2.769287),(29.744727,-2.759985),(29.756199,-2.759882),(29.778523,-2.7666),(29.803379,-2.767324),(29.823585,-2.763499),(29.842395,-2.752441),(29.862859,-2.731667),(29.876191,-2.71358),(29.888387,-2.692909),(29.897896,-2.671102),(29.90327,-2.649088),(29.903726,-2.637947),(29.907921,-2.5354),(29.929573,-2.459849),(29.92239,-2.382954),(29.928901,-2.331795),(29.928798,-2.322493),(29.931692,-2.316912),(29.941924,-2.316602),(29.949934,-2.321149),(29.956781,-2.327038),(29.95852,-2.328533),(29.976134,-2.34368),(29.983885,-2.344817),(29.992308,-2.34399),(30.00192,-2.3443),(30.013341,-2.348331),(30.025536,-2.357736),(30.056542,-2.394426),(30.072975,-2.408792),(30.093853,-2.422642),(30.116797,-2.43153),(30.139121,-2.430807),(30.156174,-2.419024),(30.20134,-2.362387),(30.203407,-2.355359),(30.203923,-2.345954),(30.207489,-2.339236),(30.219116,-2.340373),(30.250329,-2.355566),(30.262731,-2.358356),(30.278854,-2.357633),(30.297974,-2.353705),(30.316785,-2.347608),(30.331151,-2.340269),(30.344018,-2.328384),(30.34861,-2.322352),(30.352752,-2.316912),(30.362518,-2.307817),(30.37859,-2.303062),(30.383447,-2.305543),(30.415073,-2.313088)] +Belgium 
[(4.815447,51.431074),(4.822682,51.413685),(4.779067,51.426423),(4.767802,51.425389),(4.762117,51.413375),(4.783304,51.407638),(4.853274,51.40614),(4.871671,51.403039),(4.910325,51.391903),(4.931926,51.395624),(4.957144,51.410997),(4.980192,51.430531),(4.995798,51.446706),(5.002723,51.459005),(5.006133,51.468307),(5.012128,51.474327),(5.027527,51.476807),(5.030318,51.47443),(5.06029,51.461976),(5.079927,51.438541),(5.076413,51.421901),(5.065251,51.405907),(5.061737,51.384255),(5.068145,51.37454),(5.102561,51.350665),(5.107832,51.341984),(5.118064,51.319918),(5.122922,51.313381),(5.140182,51.307386),(5.178836,51.309505),(5.197336,51.308342),(5.214906,51.294338),(5.215423,51.258733),(5.232993,51.255916),(5.239456,51.25693),(5.270717,51.261833),(5.389056,51.258707),(5.409933,51.263539),(5.451894,51.282116),(5.471635,51.288085),(5.493235,51.286535),(5.516696,51.277233),(5.535197,51.26204),(5.542328,51.242661),(5.545532,51.219924),(5.551407,51.216767),(5.568373,51.207651),(5.62501,51.196902),(5.638549,51.190907),(5.648161,51.183957),(5.659013,51.179074),(5.67617,51.179177),(5.718028,51.184603),(5.730844,51.18375),(5.747173,51.177601),(5.768051,51.159462),(5.781383,51.152409),(5.812079,51.157318),(5.829132,51.156465),(5.83802,51.143727),(5.840501,51.138921),(5.826445,51.130162),(5.823241,51.118741),(5.845979,51.103393),(5.83678,51.099492),(5.826652,51.096908),(5.815903,51.095875),(5.804948,51.096624),(5.807428,51.089389),(5.808978,51.08634),(5.811769,51.082309),(5.804327,51.068589),(5.79885,51.061484),(5.791305,51.054998),(5.779626,51.061303),(5.771461,51.058125),(5.766397,51.048436),(5.76402,51.035181),(5.777559,51.023553),(5.761746,50.999162),(5.722369,50.959423),(5.744693,50.963247),(5.754305,50.963092),(5.76402,50.959423),(5.752548,50.947227),(5.735288,50.923973),(5.722369,50.911674),(5.717614,50.909245),(5.706039,50.90661),(5.694463,50.903664),(5.622323,50.852659),(5.624493,50.830284),(5.64072,50.815039),(5.657432,50.807641),(5.663251,50.805065),(5.684852,50.798503),(5.700458,50.795557),(5.698701,50.783775),(5.688366,50.76083),(5.706556,50.754216),(5.717408,50.752821),(5.727846,50.754164),(5.732777,50.758851),(5.743556,50.769099),(5.752444,50.773439),(5.76247,50.771011),(5.782933,50.756696),(5.793165,50.752149),(5.879878,50.753544),(5.891288,50.751246),(5.891321,50.75124),(5.902203,50.749048),(5.974756,50.74755),(5.99491,50.749927),(5.993256,50.748066),(5.992636,50.746258),(5.99274,50.744449),(5.994187,50.742589),(6.010826,50.737059),(6.011343,50.727447),(6.007623,50.716905),(6.01155,50.708534),(6.024779,50.707552),(6.064363,50.714477),(6.081003,50.713495),(6.100227,50.701299),(6.161102,50.642181),(6.147769,50.636755),(6.159448,50.622389),(6.180015,50.615516),(6.249055,50.614431),(6.251329,50.608437),(6.244301,50.599548),(6.232932,50.590763),(6.232932,50.587146),(6.210608,50.578671),(6.179085,50.561979),(6.160688,50.543583),(6.177328,50.530198),(6.169473,50.522499),(6.170817,50.517951),(6.178878,50.515781),(6.190557,50.515212),(6.182599,50.506582),(6.19066,50.501828),(6.199239,50.490408),(6.207197,50.486325),(6.217119,50.486015),(6.250915,50.492681),(6.253086,50.494077),(6.255566,50.494593),(6.25784,50.494077),(6.260114,50.492681),(6.27541,50.48803),(6.318715,50.481674),(6.336905,50.481003),(6.325743,50.474026),(6.323262,50.466016),(6.326156,50.457025),(6.331634,50.446896),(6.340212,50.437853),(6.347343,50.436974),(6.351167,50.433564),(6.350444,50.416872),(6.343313,50.393308),(6.336388,50.379923),(6.337732,50.368089),(6.372458,50.329436),(6.3734,50.32308),(6.374525,50.315483),(6.362019,50.30
6956),(6.335458,50.303856),(6.322022,50.305406),(6.310653,50.308248),(6.299491,50.308868),(6.285952,50.303856),(6.27603,50.296104),(6.269415,50.286492),(6.267348,50.276416),(6.271792,50.267424),(6.253396,50.256778),(6.210401,50.249027),(6.189627,50.242722),(6.179808,50.233007),(6.157174,50.222724),(6.146839,50.214042),(6.160171,50.204534),(6.165752,50.193165),(6.165132,50.181899),(6.159138,50.172443),(6.151696,50.169342),(6.131749,50.168308),(6.123171,50.164898),(6.121311,50.16159),(6.118003,50.150377),(6.115833,50.145467),(6.125962,50.140196),(6.129476,50.134047),(6.126582,50.127329),(6.117487,50.120456),(6.102604,50.124745),(6.10033,50.132652),(6.101467,50.142108),(6.097126,50.151307),(6.084207,50.159317),(6.076456,50.158645),(6.070048,50.154201),(6.061366,50.150738),(6.038628,50.148413),(6.027363,50.149447),(6.022967,50.15082),(6.014134,50.153581),(6.008036,50.160919),(6.004729,50.170479),(5.998321,50.174975),(5.982921,50.167068),(5.96163,50.165621),(5.953156,50.156371),(5.949021,50.142677),(5.941063,50.128311),(5.92618,50.118337),(5.888973,50.106607),(5.872437,50.096892),(5.86882,50.090535),(5.866856,50.074309),(5.864375,50.067488),(5.858278,50.061803),(5.843705,50.053638),(5.83771,50.047437),(5.838847,50.035862),(5.838847,50.026457),(5.836367,50.01855),(5.829132,50.013589),(5.808875,50.007801),(5.802674,50.002479),(5.801537,49.988681),(5.808565,49.9831),(5.812596,49.977571),(5.80164,49.963825),(5.793372,49.958812),(5.778593,49.95659),(5.771565,49.953541),(5.76371,49.946823),(5.759989,49.941604),(5.757612,49.93654),(5.753581,49.930183),(5.718958,49.891374),(5.714927,49.881866),(5.726709,49.878197),(5.748827,49.867293),(5.758129,49.858198),(5.731877,49.859955),(5.73198,49.854167),(5.730533,49.85117),(5.728673,49.848845),(5.726916,49.845227),(5.736528,49.83944),(5.724642,49.834272),(5.720198,49.824712),(5.721645,49.812775),(5.727743,49.800217),(5.738285,49.788848),(5.748414,49.785644),(5.759162,49.784456),(5.771875,49.779392),(5.778489,49.773035),(5.802054,49.742856),(5.803604,49.738154),(5.805774,49.724201),(5.805361,49.721876),(5.811769,49.718517),(5.825721,49.715778),(5.831716,49.713607),(5.837297,49.712936),(5.843705,49.714331),(5.850216,49.714021),(5.856004,49.708078),(5.856403,49.705807),(5.857347,49.70043),(5.853627,49.696296),(5.848769,49.69273),(5.847219,49.687201),(5.846702,49.68069),(5.843705,49.677486),(5.842258,49.67392),(5.845772,49.666582),(5.852593,49.663223),(5.872644,49.661724),(5.879878,49.657745),(5.885459,49.643534),(5.879878,49.634853),(5.869957,49.628858),(5.861998,49.62276),(5.849183,49.599713),(5.841224,49.589687),(5.829959,49.582763),(5.846909,49.576717),(5.8374,49.56111),(5.814456,49.545142),(5.790685,49.537753),(5.779006,49.539613),(5.757095,49.548398),(5.746346,49.549431),(5.735598,49.545711),(5.722679,49.5346),(5.710276,49.531086),(5.688676,49.533515),(5.667178,49.54044),(5.644647,49.543799),(5.619739,49.53553),(5.607854,49.524937),(5.60558,49.517754),(5.602273,49.513723),(5.587493,49.512431),(5.578915,49.51393),(5.547599,49.523438),(5.528789,49.517961),(5.503674,49.505093),(5.477732,49.495223),(5.456132,49.498944),(5.44993,49.507832),(5.450964,49.516772),(5.454168,49.526177),(5.454375,49.536616),(5.450241,49.545762),(5.402078,49.602451),(5.391166,49.608817),(5.383475,49.613304),(5.352469,49.620693),(5.3411,49.625137),(5.334899,49.625809),(5.331178,49.623122),(5.326941,49.617644),(5.32167,49.61258),(5.314538,49.611133),(5.297898,49.613252),(5.293041,49.617179),(5.290664,49.626378),(5.293764,49.640537),(5.301826,49.643896),(5.306373,49.647513),(5.299449,49.662
499),(5.280122,49.677899),(5.259244,49.691335),(5.237644,49.690973),(5.192478,49.682808),(5.169431,49.687201),(5.151241,49.695986),(5.136358,49.705339),(5.093776,49.745337),(5.081891,49.752727),(5.046234,49.759393),(5.026287,49.766731),(4.98908,49.790554),(4.970993,49.797065),(4.968371,49.796981),(4.950013,49.796393),(4.906604,49.786471),(4.884177,49.784146),(4.858029,49.786575),(4.849037,49.794171),(4.849244,49.831585),(4.846763,49.837528),(4.836945,49.848018),(4.834878,49.85303),(4.836738,49.859128),(4.845626,49.865846),(4.848934,49.871066),(4.854308,49.883313),(4.859372,49.891684),(4.861956,49.900314),(4.860199,49.913337),(4.844593,49.931734),(4.797257,49.944291),(4.783925,49.958295),(4.788265,49.974263),(4.826713,50.036017),(4.827643,50.046714),(4.826093,50.055034),(4.827229,50.064077),(4.84604,50.09069),(4.852137,50.091621),(4.862886,50.084076),(4.86516,50.101594),(4.870534,50.122265),(4.871878,50.139783),(4.862679,50.147586),(4.831364,50.1434),(4.820098,50.146449),(4.815551,50.161332),(4.788575,50.153374),(4.681915,50.083921),(4.676128,50.076066),(4.67344,50.066196),(4.675818,50.060201),(4.679538,50.055189),(4.680572,50.048316),(4.672924,50.01638),(4.665896,50.000515),(4.656697,49.989095),(4.645845,49.984495),(4.604194,49.980206),(4.464771,49.935609),(4.435109,49.93225),(4.427651,49.933572),(4.277599,49.960156),(4.215953,49.954408),(4.201118,49.953024),(4.187682,49.955505),(4.131665,49.974987),(4.132365,49.974946),(4.139623,49.974522),(4.138383,49.989198),(4.132905,50.004494),(4.128151,50.005786),(4.132285,50.017568),(4.137349,50.025061),(4.144584,50.030797),(4.162877,50.041805),(4.186028,50.05245),(4.210006,50.06015),(4.210316,50.066713),(4.203392,50.074671),(4.197397,50.084076),(4.180551,50.127226),(4.170629,50.130481),(4.15988,50.129396),(4.139933,50.123557),(4.125774,50.128156),(4.128151,50.146243),(4.147374,50.186343),(4.152749,50.204534),(4.157296,50.212698),(4.167942,50.222465),(4.191506,50.234299),(4.201325,50.241534),(4.203702,50.251973),(4.197604,50.258484),(4.185201,50.26546),(4.171766,50.270886),(4.162774,50.272953),(4.150268,50.269543),(4.149751,50.263135),(4.150992,50.256262),(4.143654,50.251611),(4.125257,50.25714),(4.097868,50.294812),(4.077198,50.306646),(4.055907,50.314553),(4.022731,50.338117),(4.0033,50.34406),(3.982113,50.342613),(3.918551,50.325456),(3.895917,50.325301),(3.878037,50.330779),(3.86088,50.338479),(3.84021,50.345197),(3.798662,50.348452),(3.755047,50.346334),(3.741508,50.34313),(3.733446,50.337239),(3.715463,50.317653),(3.711122,50.309333),(3.708435,50.306336),(3.703887,50.304321),(3.702234,50.305251),(3.700787,50.304683),(3.696859,50.298223),(3.661719,50.319307),(3.652934,50.360183),(3.652831,50.406847),(3.644356,50.445501),(3.62813,50.464363),(3.608286,50.476507),(3.586375,50.483483),(3.563741,50.486532),(3.499145,50.48679),(3.486743,50.492681),(3.496458,50.503792),(3.498008,50.511853),(3.491497,50.517073),(3.477131,50.519863),(3.462868,50.51914),(3.452636,50.514851),(3.442714,50.508494),(3.429175,50.501828),(3.385354,50.491648),(3.361066,50.489787),(3.343599,50.49294),(3.299881,50.507461),(3.286755,50.514075),(3.271045,50.526943),(3.265878,50.538312),(3.264327,50.570196),(3.237869,50.626472),(3.229808,50.65541),(3.24376,50.672102),(3.232391,50.695925),(3.224433,50.705175),(3.210997,50.707965),(3.196941,50.709206),(3.188983,50.715097),(3.176684,50.734114),(3.162823,50.749925),(3.162811,50.749939),(3.146195,50.768892),(3.129245,50.779072),(3.103304,50.784085),(3.023817,50.768262),(2.971425,50.757833),(2.948378,50.748686),(2.930188,50.736232),(2.9001
12,50.703108),(2.886883,50.696648),(2.871276,50.698767),(2.866726,50.700091),(2.786734,50.723365),(2.768234,50.733235),(2.762963,50.739385),(2.749527,50.75959),(2.743842,50.766205),(2.706635,50.788787),(2.700021,50.79628),(2.69692,50.80305),(2.691856,50.808838),(2.67873,50.81323),(2.669222,50.813902),(2.650618,50.812248),(2.642453,50.812455),(2.627777,50.814419),(2.620439,50.816486),(2.615168,50.82217),(2.60659,50.834831),(2.602042,50.838087),(2.58995,50.842014),(2.586746,50.845425),(2.58778,50.850024),(2.595531,50.861134),(2.596668,50.867646),(2.576928,50.911519),(2.582715,50.920666),(2.605763,50.935238),(2.611654,50.941233),(2.60845,50.961128),(2.592741,50.976114),(2.55698,51.001177),(2.547059,51.020479),(2.537033,51.06461),(2.5218,51.087541),(2.521821,51.087551),(2.542003,51.096869),(2.579926,51.104804),(2.715668,51.169501),(2.923595,51.246487),(3.124848,51.329657),(3.348969,51.375149),(3.349415,51.375223),(3.358068,51.336713),(3.353934,51.311908),(3.353211,51.288602),(3.36737,51.263487),(3.391658,51.24677),(3.398798,51.244698),(3.421941,51.237985),(3.453877,51.235633),(3.501212,51.240284),(3.504106,51.249741),(3.501522,51.262402),(3.503383,51.273952),(3.514338,51.281031),(3.529014,51.283331),(3.545034,51.283925),(3.584825,51.289532),(3.609526,51.290436),(3.635054,51.287827),(3.757321,51.262324),(3.776234,51.254521),(3.781402,51.241421),(3.779748,51.230311),(3.781402,51.219872),(3.796285,51.208762),(3.808687,51.205325),(3.927026,51.206255),(3.949454,51.210777),(3.995032,51.23323),(4.118952,51.272401),(4.168562,51.29687),(4.207939,51.330589),(4.21352,51.340537),(4.220962,51.363817),(4.221491,51.368001),(4.221528,51.367987),(4.222179,51.367743),(4.241059,51.356594),(4.249034,51.346666),(4.256114,51.328437),(4.272634,51.315823),(4.290782,51.305894),(4.302989,51.296088),(4.306651,51.28974),(4.308116,51.283637),(4.305431,51.276597),(4.298676,51.269599),(4.296723,51.267564),(4.30836,51.271226),(4.316417,51.276842),(4.330821,51.296088),(4.29713,51.305162),(4.289317,51.309149),(4.285411,51.31802),(4.286143,51.339667),(4.283051,51.350084),(4.275645,51.3581),(4.261078,51.369371),(4.261068,51.369379),(4.281124,51.368947),(4.290105,51.368752),(4.326278,51.356427),(4.345605,51.352293),(4.392217,51.350407),(4.411234,51.356737),(4.416195,51.373971),(4.410097,51.38379),(4.38767,51.400533),(4.379918,51.409525),(4.380642,51.419653),(4.387153,51.427456),(4.38922,51.434588),(4.377128,51.442882),(4.429424,51.461976),(4.483064,51.474327),(4.512623,51.477376),(4.525232,51.475929),(4.533191,51.467583),(4.531847,51.450866),(4.523372,51.438412),(4.521512,51.428645),(4.540632,51.420067),(4.553861,51.419291),(4.582697,51.422599),(4.614633,51.41756),(4.630136,51.418206),(4.643571,51.42141),(4.653907,51.426216),(4.731215,51.485618),(4.764184,51.496238),(4.778654,51.495359),(4.796844,51.491018),(4.813587,51.48412),(4.823922,51.475645),(4.826093,51.460555),(4.819891,51.446215),(4.815447,51.431074)] +Benin 
[(3.341635,11.882893),(3.355071,11.889043),(3.369127,11.897311),(3.380909,11.895037),(3.391658,11.888836),(3.40282,11.88527),(3.447675,11.857985),(3.484262,11.84517),(3.495424,11.837521),(3.503486,11.829977),(3.536972,11.782899),(3.54369,11.77706),(3.549271,11.773753),(3.553715,11.76967),(3.557436,11.761815),(3.5625,11.746003),(3.566221,11.738354),(3.568185,11.729518),(3.571699,11.72187),(3.58007,11.717374),(3.589372,11.701509),(3.5964,11.695773),(3.574386,11.673035),(3.504933,11.556557),(3.493047,11.511495),(3.486743,11.496276),(3.466692,11.442481),(3.468449,11.419485),(3.483332,11.392303),(3.662546,11.143145),(3.686731,11.120485),(3.694999,11.119116),(3.704404,11.120795),(3.705583,11.120733),(3.713706,11.120304),(3.722077,11.112346),(3.711845,11.073511),(3.711845,11.051859),(3.715876,11.031886),(3.723318,11.013541),(3.733549,10.996462),(3.733549,10.996203),(3.754117,10.907139),(3.75298,10.877683),(3.731276,10.825025),(3.730035,10.807714),(3.7351,10.795776),(3.743161,10.787456),(3.751946,10.78017),(3.759491,10.771592),(3.764452,10.760869),(3.773857,10.728054),(3.783262,10.71697),(3.796078,10.71312),(3.810237,10.710484),(3.82295,10.70325),(3.832148,10.683923),(3.837109,10.654829),(3.837246,10.630609),(3.837419,10.599897),(3.829874,10.573077),(3.804346,10.523157),(3.795354,10.496492),(3.794321,10.444867),(3.787706,10.426367),(3.772513,10.407635),(3.756907,10.405257),(3.692828,10.438382),(3.671744,10.442077),(3.652107,10.435798),(3.632574,10.416652),(3.625959,10.407273),(3.623169,10.400917),(3.622135,10.381228),(3.618208,10.369342),(3.59299,10.330947),(3.572422,10.285704),(3.573352,10.266997),(3.587615,10.244182),(3.655621,10.174781),(3.664613,10.162197),(3.667507,10.146565),(3.66483,10.128654),(3.663993,10.123052),(3.654898,10.101116),(3.642392,10.091892),(3.627819,10.084993),(3.612317,10.070291),(3.605805,10.054116),(3.599604,9.968695),(3.589476,9.948825),(3.557126,9.910559),(3.530771,9.862965),(3.513098,9.846609),(3.452946,9.853095),(3.413982,9.842837),(3.342049,9.813588),(3.325926,9.801728),(3.317141,9.78173),(3.314867,9.759095),(3.318381,9.739432),(3.33068,9.708633),(3.338638,9.678222),(3.33161,9.652022),(3.299054,9.633961),(3.281381,9.634994),(3.261847,9.638612),(3.245724,9.633702),(3.238179,9.609053),(3.239419,9.602309),(3.24469,9.589571),(3.243554,9.584403),(3.182575,9.505338),(3.175134,9.498879),(3.159011,9.493478),(3.151776,9.488156),(3.132553,9.457279),(3.128625,9.429581),(3.135343,9.401675),(3.148055,9.370256),(3.151466,9.353565),(3.151363,9.288633),(3.145162,9.267859),(3.124594,9.228999),(3.091315,9.126421),(3.076019,9.095157),(3.051421,9.078207),(2.983104,9.060611),(2.971425,9.060508),(2.9614,9.065313),(2.951685,9.071902),(2.94073,9.07707),(2.929878,9.078362),(2.899182,9.07738),(2.888433,9.075364),(2.877788,9.068543),(2.869313,9.059707),(2.859701,9.052627),(2.846265,9.050767),(2.769164,9.057045),(2.772368,9.030535),(2.770197,8.967154),(2.76472,8.939766),(2.761516,8.933694),(2.75025,8.916873),(2.74777,8.914806),(2.749527,8.906615),(2.753764,8.904419),(2.758725,8.902662),(2.762549,8.895686),(2.762136,8.880415),(2.74963,8.851941),(2.745909,8.836516),(2.74715,8.827654),(2.752317,8.81228),(2.751077,8.803417),(2.745703,8.798146),(2.728856,8.790472),(2.723172,8.782928),(2.722862,8.772308),(2.722949,8.771218),(2.742439,8.527481),(2.744773,8.498294),(2.736918,8.464523),(2.72896,8.452741),(2.709012,8.430391),(2.703741,8.417885),(2.703535,8.409023),(2.706015,8.390548),(2.706015,8.381272),(2.698367,8.339854),(2.697127,8.320708),(2.700744,8.296885),(2.708496,8.274819),(2.728133,8.233194),(2.
734954,8.211903),(2.734437,8.189966),(2.726582,8.171595),(2.716144,8.154904),(2.708392,8.13798),(2.705705,8.119686),(2.703535,8.059897),(2.690926,7.996567),(2.687102,7.933522),(2.683588,7.924634),(2.673356,7.905565),(2.671082,7.897917),(2.671695,7.893434),(2.672219,7.889597),(2.68028,7.87735),(2.691546,7.848669),(2.715524,7.822056),(2.724412,7.80769),(2.727203,7.793324),(2.717641,7.658549),(2.715317,7.625789),(2.720588,7.586566),(2.740638,7.549359),(2.770197,7.514271),(2.792315,7.477633),(2.789628,7.435826),(2.787251,7.426886),(2.786527,7.421254),(2.783323,7.419238),(2.761722,7.423527),(2.751284,7.423631),(2.743636,7.419238),(2.741155,7.408179),(2.762136,7.1622),(2.761102,7.14835),(2.757795,7.138894),(2.749837,7.128558),(2.740638,7.109593),(2.736298,7.102927),(2.735264,7.096002),(2.739088,7.08298),(2.747666,7.072231),(2.773918,7.050785),(2.779706,7.041328),(2.775778,7.030528),(2.765547,7.025102),(2.752731,7.02133),(2.741052,7.0158),(2.71511,6.979833),(2.718418,6.943557),(2.731957,6.905213),(2.736918,6.863303),(2.726272,6.821368),(2.723585,6.799715),(2.72865,6.780001),(2.736298,6.769846),(2.744773,6.763774),(2.755005,6.760725),(2.767407,6.759563),(2.773298,6.755041),(2.775262,6.744887),(2.778569,6.698765),(2.772988,6.687474),(2.760689,6.679387),(2.742602,6.663341),(2.73113,6.641844),(2.73082,6.621354),(2.739398,6.579186),(2.737331,6.558025),(2.718803,6.504298),(2.71542,6.494488),(2.71192,6.474048),(2.704258,6.429299),(2.703841,6.368352),(2.703298,6.368313),(2.306977,6.331488),(2.226817,6.313666),(1.884532,6.274807),(1.619663,6.213899),(1.612439,6.234763),(1.633109,6.245692),(1.761577,6.276026),(1.782351,6.277267),(1.76354,6.350647),(1.74411,6.425733),(1.718375,6.466997),(1.709797,6.473404),(1.6883,6.484437),(1.681685,6.489838),(1.679515,6.498571),(1.681995,6.520017),(1.680755,6.530068),(1.67445,6.538594),(1.657811,6.552263),(1.655227,6.559213),(1.653883,6.570659),(1.646648,6.573192),(1.63683,6.572261),(1.627425,6.573347),(1.61957,6.577998),(1.611198,6.58469),(1.603344,6.592596),(1.597349,6.600813),(1.594145,6.611045),(1.594765,6.620269),(1.596522,6.62864),(1.596316,6.636418),(1.58536,6.656442),(1.572441,6.666907),(1.566033,6.678146),(1.574301,6.700574),(1.589182,6.72046),(1.597039,6.73096),(1.60107,6.746437),(1.595179,6.77057),(1.587531,6.791034),(1.58319,6.808268),(1.58257,6.825347),(1.590114,6.867334),(1.588047,6.881235),(1.579056,6.892035),(1.564223,6.903089),(1.562623,6.904283),(1.551357,6.920871),(1.531927,6.991926),(1.607478,6.991409),(1.625668,6.996783),(1.624944,7.143648),(1.624324,7.288807),(1.623704,7.4381),(1.623251,7.536385),(1.622855,7.62246),(1.622774,7.640052),(1.621947,7.838282),(1.621327,7.996567),(1.623807,8.155886),(1.625564,8.270478),(1.614919,8.356261),(1.609855,8.365485),(1.603964,8.367449),(1.603945,8.367476),(1.60107,8.371635),(1.606031,8.396026),(1.606031,8.413389),(1.607581,8.421864),(1.640447,8.475815),(1.644168,8.493488),(1.63993,8.503255),(1.614092,8.535785),(1.609028,8.546973),(1.607581,8.558755),(1.607204,8.588617),(1.605514,8.722492),(1.604274,8.815225),(1.602827,8.924392),(1.601173,9.049526),(1.595799,9.076811),(1.587841,9.100893),(1.567273,9.13673),(1.505158,9.201455),(1.425267,9.284499),(1.420146,9.292317),(1.401185,9.321267),(1.386716,9.361135),(1.379895,9.462473),(1.368422,9.490843),(1.351266,9.482549),(1.336486,9.497509),(1.326358,9.521952),(1.323154,9.542313),(1.327288,9.555309),(1.345065,9.582388),(1.351266,9.595074),(1.352506,9.60528),(1.352196,9.625357),(1.354573,9.635304),(1.35602,9.647474),(1.348165,9.694164),(1.346925,9.771834),(1.345065,9.881103)
,(1.343942,9.954756),(1.343825,9.962442),(1.331009,9.996471),(1.225485,10.066053),(1.082238,10.16044),(0.997106,10.216553),(0.939508,10.254517),(0.844423,10.317175),(0.768769,10.367094),(0.760708,10.382158),(0.759881,10.405154),(0.773833,10.508145),(0.78851,10.563852),(0.789233,10.602946),(0.781275,10.693044),(0.795641,10.726478),(0.850005,10.77769),(0.862924,10.796086),(0.866024,10.80637),(0.867575,10.829211),(0.869538,10.839908),(0.885144,10.869389),(0.883698,10.880216),(0.875326,10.900524),(0.872949,10.911299),(0.875843,10.931169),(0.901474,10.992741),(0.931963,10.976101),(0.945399,10.974293),(0.963279,10.974137),(0.967517,10.979486),(0.969997,10.991165),(0.969067,11.002895),(0.963279,11.00827),(0.955321,11.01199),(0.951807,11.021086),(0.95129,11.032299),(0.952427,11.042402),(0.955941,11.044727),(0.964416,11.057414),(0.970307,11.06948),(0.976922,11.078369),(0.981366,11.080126),(0.986017,11.078498),(1.007928,11.075062),(1.014646,11.070359),(1.01971,11.065682),(1.025084,11.063512),(1.027978,11.061341),(1.033249,11.051575),(1.035006,11.049275),(1.041311,11.049585),(1.052163,11.055011),(1.05609,11.05607),(1.086889,11.052737),(1.097018,11.049275),(1.099705,11.044366),(1.106836,11.034418),(1.114174,11.027674),(1.117482,11.032196),(1.118619,11.048241),(1.10725,11.059791),(1.100945,11.070411),(1.099292,11.081289),(1.099085,11.092606),(1.097018,11.104491),(1.095158,11.10444),(1.085649,11.108729),(1.082755,11.110692),(1.082135,11.115292),(1.083892,11.12772),(1.082755,11.131802),(1.07335,11.133947),(1.067046,11.12834),(1.063325,11.126971),(1.062291,11.14206),(1.067459,11.144282),(1.100429,11.15162),(1.106216,11.155651),(1.113038,11.173867),(1.117482,11.178983),(1.128127,11.178879),(1.148074,11.1695),(1.158409,11.165935),(1.153242,11.183169),(1.149314,11.217249),(1.130608,11.247583),(1.135568,11.260115),(1.145904,11.269856),(1.156342,11.28554),(1.165954,11.277995),(1.173499,11.2666),(1.172155,11.261484),(1.18094,11.261355),(1.186211,11.263887),(1.191379,11.267014),(1.200061,11.268977),(1.218974,11.268047),(1.235201,11.263474),(1.268377,11.247893),(1.281089,11.271096),(1.28264,11.278873),(1.280366,11.288692),(1.270547,11.298278),(1.268377,11.30621),(1.273338,11.318225),(1.2845,11.312721),(1.296179,11.299621),(1.302483,11.288847),(1.309305,11.297089),(1.316333,11.29988),(1.323464,11.297089),(1.330389,11.288847),(1.347855,11.303497),(1.345892,11.316339),(1.336176,11.330033),(1.330389,11.347164),(1.33907,11.366103),(1.359017,11.378893),(1.381652,11.388686),(1.398085,11.398685),(1.388576,11.420002),(1.389817,11.428399),(1.398085,11.440311),(1.411107,11.449845),(1.423215,11.453748),(1.423613,11.453876),(1.433845,11.459405),(1.439633,11.473771),(1.455962,11.464702),(1.475289,11.460774),(1.56934,11.453307),(1.567273,11.446693),(1.565,11.432637),(1.563243,11.426022),(1.580606,11.427056),(1.601587,11.388686),(1.62112,11.395275),(1.638173,11.404215),(1.684372,11.413852),(1.703492,11.422921),(1.719099,11.427779),(1.763334,11.424678),(1.782247,11.426022),(1.821521,11.441602),(1.840642,11.445478),(1.858005,11.440311),(1.867617,11.446615),(1.879812,11.444858),(1.905754,11.432843),(1.948025,11.41672),(1.983372,11.414007),(2.010761,11.426901),(2.015429,11.43159),(2.14698,11.56374),(2.22005,11.622548),(2.273587,11.652055),(2.287436,11.665284),(2.291054,11.674792),(2.290227,11.685748),(2.290847,11.703214),(2.301596,11.731533),(2.340043,11.773856),(2.354099,11.799849),(2.390169,11.896536),(2.388205,11.906768),(2.376526,11.924079),(2.375183,11.935035),(2.380867,11.947902),(2.389342,11.950073),(2.398747,11.948781),(2.407
429,11.95121),(2.414147,11.958703),(2.422002,11.973172),(2.428926,11.980355),(2.438951,11.981285),(2.44939,11.977513),(2.456728,11.979063),(2.457348,11.99622),(2.455281,12.008674),(2.44877,12.013376),(2.439262,12.015598),(2.428616,12.020353),(2.418281,12.028673),(2.41115,12.036786),(2.405568,12.046708),(2.371875,12.140449),(2.362057,12.196156),(2.36123,12.218894),(2.370015,12.236257),(2.3942,12.247212),(2.407946,12.248918),(2.43089,12.247987),(2.443706,12.249228),(2.446083,12.251811),(2.454351,12.263025),(2.459519,12.266746),(2.46696,12.265041),(2.472644,12.259253),(2.477709,12.257599),(2.483393,12.268038),(2.488044,12.280388),(2.495072,12.283231),(2.504477,12.281835),(2.516259,12.281422),(2.51905,12.279148),(2.5206,12.273671),(2.523701,12.269175),(2.531246,12.269691),(2.53693,12.27305),(2.546025,12.281629),(2.550573,12.284523),(2.586126,12.294031),(2.590053,12.293308),(2.598632,12.288347),(2.603593,12.28783),(2.60969,12.290775),(2.621679,12.299509),(2.628604,12.300801),(2.663847,12.291551),(2.672942,12.296718),(2.686895,12.318422),(2.690202,12.326174),(2.692166,12.334029),(2.69506,12.341212),(2.700434,12.347103),(2.711286,12.351237),(2.721828,12.344261),(2.732267,12.346793),(2.741465,12.354131),(2.757485,12.372941),(2.768544,12.380072),(2.778156,12.381933),(2.798413,12.381209),(2.808645,12.383225),(2.832416,12.384051),(2.844301,12.399244),(2.892257,12.357076),(2.912618,12.346328),(2.922643,12.335631),(2.930394,12.323332),(2.936802,12.300852),(2.944967,12.292481),(2.956026,12.286848),(2.967808,12.282559),(3.000984,12.27889),(3.009356,12.276306),(3.013077,12.271087),(3.022895,12.252483),(3.063409,12.193779),(3.235389,12.039525),(3.238489,12.034202),(3.252028,12.024125),(3.255129,12.018751),(3.257609,11.997408),(3.26257,11.974722),(3.269908,11.956532),(3.292749,11.918653),(3.307012,11.902375),(3.315487,11.894572),(3.327786,11.886407),(3.341635,11.882893)] +Burkina Faso 
[(-0.397671,15.002135),(-0.361472,15.017741),(-0.299176,15.054741),(-0.236699,15.065619),(-0.166988,15.049677),(-0.033404,14.995933),(-0.033197,14.995933),(0.218467,14.910977),(0.220844,14.888214),(0.211852,14.874804),(0.198003,14.863797),(0.185756,14.84819),(0.184102,14.819562),(0.213092,14.761632),(0.219707,14.731221),(0.152941,14.54671),(0.158832,14.496119),(0.18865,14.44775),(0.202382,14.435545),(0.346366,14.307577),(0.388792,14.251612),(0.391635,14.245927),(0.390601,14.237633),(0.381118,14.22567),(0.377424,14.216601),(0.372153,14.185647),(0.367398,14.173787),(0.343472,14.137536),(0.339157,14.12578),(0.343575,14.114333),(0.356262,14.095704),(0.362463,14.088857),(0.368742,14.084154),(0.37422,14.078883),(0.378457,14.070434),(0.37732,14.063949),(0.369052,14.047852),(0.369155,14.039067),(0.378354,14.028809),(0.391376,14.022582),(0.401866,14.013797),(0.403934,13.996072),(0.411866,13.978424),(0.428377,13.967495),(0.445921,13.958581),(0.456695,13.94685),(0.456256,13.938143),(0.448065,13.920108),(0.448634,13.909617),(0.454809,13.902357),(0.47566,13.888947),(0.483515,13.880343),(0.49199,13.861817),(0.499926,13.851015),(0.504082,13.845358),(0.519275,13.831637),(0.552968,13.810037),(0.5632,13.796782),(0.574466,13.785025),(0.593689,13.778101),(0.600511,13.768721),(0.594516,13.74986),(0.584801,13.729008),(0.58046,13.713789),(0.59431,13.688881),(0.620665,13.679967),(0.75337,13.684101),(0.763498,13.682034),(0.766392,13.67511),(0.766185,13.666971),(0.767012,13.660873),(0.766289,13.657178),(0.762878,13.653044),(0.763705,13.646429),(0.775694,13.635293),(0.795331,13.625965),(0.813108,13.625035),(0.850728,13.628601),(0.896927,13.614932),(0.95098,13.583203),(0.991391,13.541113),(0.996559,13.496051),(1.015266,13.465665),(1.048235,13.441945),(1.157169,13.392646),(1.175049,13.38691),(1.217424,13.381768),(1.23427,13.377583),(1.249567,13.367066),(1.268687,13.34606),(1.241505,13.335544),(1.222075,13.344613),(1.205745,13.358901),(1.187658,13.364121),(1.182387,13.358695),(1.177633,13.34761),(1.167608,13.313426),(1.160683,13.311307),(1.138462,13.320377),(1.084719,13.33358),(1.004724,13.364844),(0.983536,13.36841),(0.970411,13.328335),(0.970824,13.243534),(0.971341,13.149405),(0.971661,13.085782),(0.971754,13.067317),(0.974648,13.048326),(0.983536,13.032358),(1.012269,13.016881),(1.083789,13.01099),(1.113968,12.996236),(1.170915,12.949211),(1.230756,12.899756),(1.304806,12.838508),(1.330595,12.817177),(1.412037,12.749869),(1.467124,12.704394),(1.535854,12.647498),(1.563863,12.63215),(1.597039,12.623675),(1.699462,12.61489),(1.826069,12.604193),(1.843742,12.606105),(1.860692,12.61706),(1.872681,12.634217),(1.883223,12.653957),(1.900173,12.678452),(1.906994,12.691836),(1.911438,12.696642),(1.919603,12.698347),(1.926631,12.694989),(1.934486,12.693542),(1.945545,12.700828),(1.962495,12.719121),(1.971693,12.724237),(2.068948,12.716279),(2.109049,12.705634),(2.135197,12.67561),(2.140985,12.656128),(2.144809,12.650753),(2.155455,12.640005),(2.165997,12.631737),(2.18429,12.620626),(2.193282,12.609464),(2.2,12.595925),(2.203204,12.583316),(2.210852,12.523061),(2.215709,12.510349),(2.223874,12.494846),(2.229559,12.487146),(2.242271,12.473452),(2.246302,12.465907),(2.246302,12.461669),(2.243098,12.451179),(2.242994,12.446166),(2.244958,12.424152),(2.23793,12.413455),(2.223254,12.409735),(2.163826,12.404774),(2.158762,12.405652),(2.148013,12.410768),(2.145016,12.41175),(2.138711,12.408339),(2.131477,12.398521),(2.126929,12.39511),(2.069258,12.383328),(2.054169,12.370926),(2.051792,12.341935),(2.070912,12.306898),(2.113803,12.247
987),(2.188527,12.145461),(2.258807,12.048775),(2.338079,11.940099),(2.390169,11.896536),(2.354099,11.799849),(2.340043,11.773856),(2.301596,11.731533),(2.290847,11.703214),(2.290227,11.685748),(2.291054,11.674792),(2.287436,11.665284),(2.273587,11.652055),(2.22005,11.622548),(2.14698,11.56374),(2.015429,11.43159),(2.010761,11.426901),(1.983372,11.414007),(1.948025,11.41672),(1.905754,11.432843),(1.879812,11.444858),(1.867617,11.446615),(1.858005,11.440311),(1.840642,11.445478),(1.821521,11.441602),(1.782247,11.426022),(1.763334,11.424678),(1.719099,11.427779),(1.703492,11.422921),(1.684372,11.413852),(1.638173,11.404215),(1.62112,11.395275),(1.601587,11.388686),(1.580606,11.427056),(1.563243,11.426022),(1.565,11.432637),(1.567273,11.446693),(1.56934,11.453307),(1.475289,11.460774),(1.455962,11.464702),(1.439633,11.473771),(1.433845,11.459405),(1.423613,11.453876),(1.423215,11.453748),(1.411107,11.449845),(1.398085,11.440311),(1.389817,11.428399),(1.388576,11.420002),(1.398085,11.398685),(1.381652,11.388686),(1.359017,11.378893),(1.33907,11.366103),(1.330389,11.347164),(1.336176,11.330033),(1.345892,11.316339),(1.347855,11.303497),(1.330389,11.288847),(1.323464,11.297089),(1.316333,11.29988),(1.309305,11.297089),(1.302483,11.288847),(1.296179,11.299621),(1.2845,11.312721),(1.273338,11.318225),(1.268377,11.30621),(1.270547,11.298278),(1.280366,11.288692),(1.28264,11.278873),(1.281089,11.271096),(1.268377,11.247893),(1.235201,11.263474),(1.218974,11.268047),(1.200061,11.268977),(1.191379,11.267014),(1.186211,11.263887),(1.18094,11.261355),(1.172155,11.261484),(1.173499,11.2666),(1.165954,11.277995),(1.156342,11.28554),(1.145904,11.269856),(1.135568,11.260115),(1.130608,11.247583),(1.149314,11.217249),(1.153242,11.183169),(1.158409,11.165935),(1.148074,11.1695),(1.128127,11.178879),(1.117482,11.178983),(1.113038,11.173867),(1.106216,11.155651),(1.100429,11.15162),(1.067459,11.144282),(1.062291,11.14206),(1.063325,11.126971),(1.067046,11.12834),(1.07335,11.133947),(1.082755,11.131802),(1.083892,11.12772),(1.082135,11.115292),(1.082755,11.110692),(1.085649,11.108729),(1.095158,11.10444),(1.097018,11.104491),(1.099085,11.092606),(1.099292,11.081289),(1.100945,11.070411),(1.10725,11.059791),(1.118619,11.048241),(1.117482,11.032196),(1.114174,11.027674),(1.106836,11.034418),(1.099705,11.044366),(1.097018,11.049275),(1.086889,11.052737),(1.05609,11.05607),(1.052163,11.055011),(1.041311,11.049585),(1.035006,11.049275),(1.033249,11.051575),(1.027978,11.061341),(1.025084,11.063512),(1.01971,11.065682),(1.014646,11.070359),(1.007928,11.075062),(0.986017,11.078498),(0.981366,11.080126),(0.976922,11.078369),(0.970307,11.06948),(0.964416,11.057414),(0.955941,11.044727),(0.952427,11.042402),(0.95129,11.032299),(0.951807,11.021086),(0.955321,11.01199),(0.963279,11.00827),(0.969067,11.002895),(0.969997,10.991165),(0.967517,10.979486),(0.963279,10.974137),(0.945399,10.974293),(0.931963,10.976101),(0.901474,10.992741),(0.77156,10.990364),(0.675338,10.988529),(0.612603,10.976566),(0.487649,10.933236),(0.499742,10.975688),(0.483619,10.979693),(0.487133,10.983672),(0.48858,10.987754),(0.489303,10.991914),(0.490647,10.996358),(0.49013,10.999511),(0.488786,11.001784),(0.486616,11.003205),(0.374779,11.025808),(0.287558,11.043436),(0.059588,11.089402),(-0.063118,11.114155),(-0.115062,11.124658),(-0.166109,11.13498),(-0.301682,11.162937),(-0.304473,11.146788),(-0.300545,11.137409),(-0.298297,11.128392),(-0.306126,11.113535),(-0.318813,11.101365),(-0.368939,11.065217),(-0.386923,11.086405),(-0.405888,11.101468),(-0.4252
15,11.101468),(-0.444387,11.077439),(-0.449038,11.062892),(-0.45126,11.040206),(-0.456686,11.029612),(-0.46852,11.020129),(-0.494823,11.007805),(-0.504228,10.996358),(-0.504228,10.996203),(-0.506709,10.993671),(-0.509706,10.991449),(-0.512961,10.989744),(-0.516475,10.988633),(-0.555439,10.98995),(-0.576213,10.987754),(-0.590579,10.982432),(-0.599829,10.965766),(-0.607116,10.94122),(-0.617244,10.918637),(-0.634763,10.907966),(-0.68277,10.953596),(-0.679773,10.960572),(-0.67228,10.968505),(-0.667991,10.976515),(-0.673933,10.983517),(-0.683235,10.984344),(-0.690212,10.981811),(-0.694656,10.982742),(-0.695999,10.994033),(-0.703017,10.994159),(-0.816974,10.996203),(-0.816974,10.996462),(-0.822141,11.003619),(-0.827774,11.005841),(-0.833665,11.003412),(-0.839401,10.996462),(-0.860795,10.992922),(-0.883171,10.968634),(-0.900586,10.966231),(-0.89392,10.979899),(-0.904359,10.979486),(-0.909475,10.98269),(-0.916451,10.996203),(-0.924047,11.001423),(-0.932471,11.003102),(-1.070343,11.013902),(-1.098714,11.009562),(-1.10755,10.994343),(-1.22806,10.99548),(-1.355287,10.99672),(-1.38035,11.001268),(-1.413553,11.01405),(-1.423584,11.017911),(-1.436057,11.022713),(-1.562406,11.026589),(-1.579821,11.021215),(-1.587004,11.003671),(-1.598683,10.997133),(-1.754074,10.995635),(-1.972872,10.993516),(-2.197871,10.991294),(-2.439769,10.988943),(-2.634951,10.986979),(-2.728798,10.986057),(-2.750706,10.985842),(-2.751119,10.996462),(-2.837341,10.998012),(-2.835946,10.992224),(-2.838711,10.985119),(-2.832716,10.966024),(-2.821761,10.948299),(-2.815767,10.929153),(-2.821399,10.90626),(-2.834861,10.89161),(-2.850803,10.879544),(-2.864239,10.864273),(-2.871629,10.843448),(-2.882662,10.797068),(-2.899482,10.76999),(-2.9031,10.747149),(-2.908577,10.737666),(-2.919274,10.727202),(-2.922323,10.721207),(-2.919959,10.711088),(-2.918913,10.706609),(-2.916949,10.706609),(-2.912634,10.705808),(-2.908035,10.703637),(-2.905141,10.699193),(-2.905115,10.692733),(-2.907337,10.688315),(-2.910154,10.684491),(-2.916251,10.668084),(-2.929661,10.644209),(-2.932504,10.634313),(-2.928266,10.615219),(-2.898294,10.562612),(-2.895865,10.552768),(-2.894211,10.531425),(-2.891524,10.521659),(-2.886899,10.515793),(-2.873386,10.502487),(-2.87044,10.497784),(-2.867391,10.47825),(-2.859562,10.465616),(-2.824448,10.435023),(-2.812149,10.428822),(-2.795871,10.426109),(-2.771428,10.425489),(-2.767552,10.4212),(-2.771893,10.41182),(-2.78104,10.402415),(-2.791995,10.398152),(-2.80494,10.397118),(-2.81463,10.393708),(-2.822329,10.387584),(-2.829461,10.378282),(-2.836282,10.363658),(-2.841295,10.343272),(-2.841295,10.322291),(-2.833182,10.305987),(-2.824448,10.302215),(-2.814216,10.301078),(-2.80569,10.299011),(-2.802098,10.292319),(-2.799669,10.280924),(-2.793184,10.278547),(-2.784451,10.278805),(-2.774839,10.27524),(-2.764865,10.269116),(-2.758354,10.266119),(-2.754943,10.260796),(-2.754375,10.247954),(-2.757682,10.234648),(-2.765124,10.225837),(-2.785381,10.210101),(-2.798455,10.189534),(-2.79892,10.169613),(-2.788481,10.124448),(-2.788481,10.038768),(-2.783572,10.018511),(-2.760886,9.984766),(-2.754375,9.966783),(-2.755564,9.941151),(-2.764865,9.895754),(-2.760524,9.870536),(-2.736753,9.832579),(-2.733911,9.822787),(-2.740267,9.812606),(-2.774839,9.77434),(-2.791582,9.740931),(-2.795303,9.720002),(-2.788481,9.699254),(-2.780317,9.690237),(-2.774839,9.689306),(-2.769258,9.689926),(-2.760524,9.685586),(-2.75546,9.677498),(-2.75422,9.668352),(-2.751016,9.659179),(-2.743404,9.653325),(-2.740164,9.650833),(-2.752928,9.639904),(-2.761506,9.625253),(-2.766467,
9.608898),(-2.768018,9.592801),(-2.766261,9.576523),(-2.761093,9.566239),(-2.725281,9.533864),(-2.689211,9.488724),(-2.720475,9.459217),(-2.744918,9.408109),(-2.763677,9.391883),(-2.787603,9.393717),(-2.810418,9.411106),(-2.910154,9.542726),(-2.959892,9.638612),(-2.977824,9.685172),(-2.989994,9.708427),(-3.005677,9.724162),(-3.024643,9.728606),(-3.059059,9.718555),(-3.077094,9.719408),(-3.094561,9.733851),(-3.104379,9.757132),(-3.116911,9.805294),(-3.136936,9.82953),(-3.160112,9.834698),(-3.182566,9.835163),(-3.200007,9.845369),(-3.204399,9.865368),(-3.202229,9.888958),(-3.202565,9.908234),(-3.21476,9.915623),(-3.230237,9.90472),(-3.26406,9.857074),(-3.277857,9.843457),(-3.301732,9.841132),(-3.311809,9.854128),(-3.313566,9.874566),(-3.312739,9.894513),(-3.317183,9.901335),(-3.327518,9.897304),(-3.339249,9.890095),(-3.348344,9.886995),(-3.359015,9.892498),(-3.376559,9.908492),(-3.389168,9.913246),(-3.546471,9.926139),(-3.606261,9.918181),(-3.62334,9.921747),(-3.634011,9.92924),(-3.640548,9.937379),(-3.648171,9.944588),(-3.661658,9.948929),(-3.665043,9.946603),(-3.674319,9.93526),(-3.679641,9.932211),(-3.685946,9.933452),(-3.698994,9.94079),(-3.706462,9.941823),(-3.755037,9.93588),(-3.76961,9.931333),(-3.779739,9.924692),(-3.795758,9.907665),(-3.804259,9.901386),(-3.823483,9.895702),(-3.889706,9.894462),(-3.891625,9.8939),(-3.910093,9.888493),(-3.947739,9.86436),(-3.969185,9.858986),(-3.984042,9.851157),(-3.988253,9.834801),(-3.994609,9.820771),(-4.026675,9.820306),(-4.034969,9.817619),(-4.050317,9.810307),(-4.05135,9.807568),(-4.051712,9.802349),(-4.053675,9.79762),(-4.059877,9.796277),(-4.065716,9.798085),(-4.076155,9.803795),(-4.13305,9.820409),(-4.150259,9.818291),(-4.168294,9.804829),(-4.196974,9.764961),(-4.215423,9.754858),(-4.253095,9.750052),(-4.270329,9.743928),(-4.283636,9.732017),(-4.279191,9.731138),(-4.267513,9.727263),(-4.297795,9.695843),(-4.303299,9.686723),(-4.303454,9.677912),(-4.300482,9.654296),(-4.302988,9.641712),(-4.315933,9.616623),(-4.322477,9.608981),(-4.322548,9.608898),(-4.349678,9.620034),(-4.365026,9.624168),(-4.368023,9.608381),(-4.367817,9.591974),(-4.371175,9.582956),(-4.38487,9.589416),(-4.391588,9.600294),(-4.400166,9.633134),(-4.409106,9.647733),(-4.433368,9.653469),(-4.469206,9.651763),(-4.50171,9.655019),(-4.515921,9.67569),(-4.513647,9.694009),(-4.503829,9.719899),(-4.501607,9.737107),(-4.503958,9.742016),(-4.509953,9.745065),(-4.517782,9.746615),(-4.535946,9.746615),(-4.539098,9.744703),(-4.539899,9.740776),(-4.543051,9.734006),(-4.547961,9.720261),(-4.548426,9.706799),(-4.552198,9.696464),(-4.566719,9.692407),(-4.57832,9.696515),(-4.592712,9.717832),(-4.604029,9.727159),(-4.616949,9.713853),(-4.628731,9.707703),(-4.641107,9.706024),(-4.655809,9.706075),(-4.654724,9.703543),(-4.658264,9.6976),(-4.664827,9.690857),(-4.672914,9.685586),(-4.681079,9.682072),(-4.681803,9.682847),(-4.681544,9.686723),(-4.686608,9.692407),(-4.700096,9.69791),(-4.70547,9.702096),(-4.707641,9.70977),(-4.709088,9.720312),(-4.712808,9.726849),(-4.724384,9.737107),(-4.730017,9.744238),(-4.733324,9.749639),(-4.738337,9.753153),(-4.748646,9.754496),(-4.758594,9.753308),(-4.762134,9.749949),(-4.760454,9.745324),(-4.75477,9.740208),(-4.769963,9.737365),(-4.783192,9.758372),(-4.796447,9.754496),(-4.799315,9.762532),(-4.801227,9.766459),(-4.810115,9.77434),(-4.810115,9.781781),(-4.8004,9.78111),(-4.791899,9.78235),(-4.775905,9.788008),(-4.785414,9.804829),(-4.790323,9.827515),(-4.798462,9.842527),(-4.817557,9.836455),(-4.824274,9.856971),(-4.83846,9.86945),(-4.858329,9.875703),(-4.8817
39,9.877409),(-4.890059,9.879501),(-4.893495,9.884204),(-4.898017,9.888907),(-4.909386,9.891051),(-4.917085,9.886478),(-4.922356,9.87746),(-4.928015,9.870949),(-4.936671,9.873998),(-4.966075,9.901025),(-4.967315,9.912368),(-4.952949,9.953373),(-4.954809,9.959703),(-4.965713,9.983113),(-4.976927,9.997608),(-4.981397,10.007737),(-4.960209,10.008486),(-4.956411,10.021999),(-4.963336,10.040319),(-4.974524,10.055537),(-4.986642,10.063263),(-4.999793,10.069464),(-5.010516,10.07905),(-5.015581,10.097111),(-5.025528,10.088739),(-5.037233,10.083753),(-5.048343,10.085355),(-5.056483,10.097111),(-5.042866,10.103338),(-5.049377,10.105353),(-5.055139,10.10804),(-5.061702,10.110056),(-5.070099,10.110159),(-5.06103,10.118453),(-5.037595,10.126489),(-5.039455,10.134344),(-5.064725,10.150803),(-5.070719,10.161009),(-5.06395,10.179044),(-5.090718,10.177933),(-5.09015,10.188811),(-5.081597,10.206897),(-5.084414,10.227439),(-5.087333,10.220824),(-5.091907,10.214339),(-5.097849,10.209275),(-5.104903,10.206949),(-5.117073,10.208034),(-5.11821,10.212479),(-5.114412,10.218292),(-5.111802,10.223718),(-5.111802,10.27865),(-5.119192,10.293533),(-5.119882,10.293983),(-5.135677,10.304282),(-5.152058,10.305625),(-5.159499,10.292319),(-5.168439,10.291337),(-5.217584,10.31994),(-5.233552,10.315676),(-5.279802,10.32242),(-5.296649,10.32304),(-5.304142,10.318415),(-5.309206,10.311491),(-5.315511,10.305264),(-5.327086,10.302576),(-5.338507,10.302215),(-5.343778,10.303093),(-5.34817,10.305987),(-5.353338,10.308261),(-5.357058,10.304385),(-5.360831,10.298752),(-5.365533,10.295703),(-5.385739,10.296479),(-5.402534,10.300561),(-5.416228,10.307537),(-5.426977,10.316865),(-5.430077,10.322937),(-5.433488,10.331877),(-5.437829,10.340016),(-5.443772,10.34353),(-5.453022,10.343943),(-5.459946,10.345158),(-5.474157,10.350325),(-5.475294,10.367715),(-5.48749,10.392933),(-5.505111,10.415567),(-5.522578,10.425489),(-5.5171,10.439984),(-5.508987,10.490937),(-5.505163,10.499179),(-5.48165,10.535327),(-5.468111,10.58783),(-5.471057,10.597365),(-5.47855,10.604238),(-5.477413,10.61956),(-5.470488,10.635605),(-5.460566,10.644571),(-5.475914,10.650385),(-5.477154,10.658808),(-5.482115,10.675215),(-5.482632,10.684233),(-5.473072,10.737873),(-5.467956,10.752678),(-5.461186,10.760042),(-5.45297,10.763969),(-5.446459,10.768698),(-5.445012,10.778568),(-5.445322,10.786991),(-5.431421,10.844688),(-5.43509,10.856909),(-5.447027,10.877993),(-5.465579,10.947886),(-5.479118,10.975533),(-5.501804,10.97021),(-5.495603,10.996203),(-5.505602,11.047985),(-5.507334,11.056949),(-5.504233,11.072684),(-5.489764,11.082115),(-5.390131,11.094699),(-5.373026,11.099763),(-5.356283,11.106894),(-5.332512,11.12157),(-5.322332,11.135549),(-5.316182,11.176322),(-5.306726,11.199395),(-5.275255,11.230711),(-5.262852,11.251588),(-5.26094,11.267737),(-5.267865,11.37202),(-5.263472,11.390184),(-5.248176,11.403181),(-5.22642,11.41349),(-5.218307,11.422198),(-5.219186,11.435143),(-5.224612,11.45801),(-5.224663,11.54077),(-5.233914,11.575987),(-5.269467,11.604099),(-5.280732,11.605753),(-5.290964,11.604461),(-5.299956,11.606424),(-5.307604,11.617638),(-5.307811,11.629007),(-5.302281,11.639859),(-5.294788,11.650298),(-5.289259,11.660426),(-5.280681,11.699494),(-5.278562,11.721456),(-5.279544,11.739026),(-5.28714,11.758922),(-5.298974,11.772668),(-5.314787,11.782486),(-5.334269,11.790806),(-5.344656,11.793907),(-5.353286,11.794527),(-5.370649,11.791064),(-5.374525,11.79525),(-5.400415,11.810185),(-5.405996,11.812303),(-5.412197,11.823569),(-5.412352,11.82853),(-5.405944,11.830442),(-5
.392302,11.832354),(-5.387909,11.830804),(-5.37561,11.822225),(-5.369874,11.820158),(-5.364552,11.82176),(-5.357058,11.829615),(-5.353803,11.831734),(-5.319025,11.837418),(-5.314839,11.83933),(-5.311066,11.842896),(-5.306519,11.846048),(-5.300163,11.846358),(-5.293445,11.843206),(-5.288019,11.839123),(-5.282489,11.836436),(-5.275151,11.837418),(-5.268227,11.843051),(-5.248228,11.870439),(-5.209471,11.901497),(-5.182909,11.930901),(-5.167819,11.943665),(-5.136994,11.953122),(-5.103947,11.972759),(-5.089323,11.97927),(-5.073768,11.98051),(-5.034313,11.97927),(-5.016046,11.981285),(-5.004703,11.980665),(-4.994238,11.983249),(-4.984523,11.988572),(-4.975454,11.996271),(-4.95375,12.005108),(-4.932071,12.003144),(-4.910677,11.998132),(-4.889955,11.997925),(-4.847245,12.013066),(-4.826238,12.012705),(-4.806808,11.996271),(-4.789522,12.001801),(-4.761281,12.006607),(-4.746889,12.015082),(-4.738983,12.02521),(-4.732281,12.037263),(-4.725133,12.050118),(-4.714617,12.059007),(-4.704023,12.061487),(-4.683715,12.059058),(-4.673586,12.05973),(-4.653716,12.069135),(-4.652737,12.071627),(-4.648678,12.081951),(-4.646663,12.098022),(-4.636121,12.117556),(-4.600309,12.137555),(-4.570233,12.138898),(-4.560466,12.149492),(-4.585529,12.197241),(-4.497421,12.26871),(-4.493545,12.272844),(-4.492253,12.278838),(-4.49401,12.290104),(-4.490496,12.305762),(-4.490703,12.313823),(-4.488171,12.320696),(-4.47719,12.327311),(-4.462488,12.328189),(-4.452669,12.320283),(-4.444814,12.309431),(-4.435823,12.301731),(-4.406135,12.307467),(-4.415359,12.350307),(-4.451636,12.435314),(-4.443678,12.459447),(-4.426624,12.479653),(-4.407607,12.497946),(-4.393448,12.516446),(-4.386885,12.530192),(-4.387247,12.533965),(-4.392647,12.537427),(-4.401406,12.550398),(-4.42249,12.597217),(-4.439078,12.613288),(-4.482538,12.646258),(-4.491323,12.662794),(-4.485251,12.714677),(-4.481711,12.717416),(-4.474942,12.715762),(-4.46437,12.717251),(-4.464296,12.717261),(-4.435254,12.728423),(-4.402026,12.736562),(-4.36885,12.738991),(-4.339446,12.73279),(-4.310921,12.716021),(-4.297227,12.712145),(-4.276298,12.714677),(-4.257642,12.72093),(-4.245472,12.728936),(-4.243974,12.729922),(-4.23413,12.74235),(-4.213278,12.807178),(-4.211883,12.819193),(-4.224647,12.864513),(-4.221314,12.921254),(-4.229324,12.948952),(-4.24785,12.971948),(-4.276918,12.996236),(-4.289733,13.01347),(-4.316398,13.070857),(-4.345363,13.097057),(-4.351073,13.106075),(-4.351073,13.118322),(-4.345906,13.127133),(-4.339601,13.135298),(-4.327044,13.168551),(-4.310197,13.175553),(-4.24617,13.172479),(-4.234801,13.175605),(-4.227773,13.184442),(-4.230021,13.193149),(-4.251209,13.204751),(-4.258107,13.21413),(-4.253508,13.22909),(-4.23878,13.246789),(-4.220668,13.262835),(-4.205501,13.272989),(-4.195837,13.274695),(-4.186458,13.272834),(-4.176872,13.272369),(-4.16602,13.278157),(-4.147106,13.299603),(-4.131578,13.323839),(-4.114654,13.364534),(-4.104267,13.382698),(-4.087549,13.397995),(-4.079307,13.402129),(-4.052539,13.409002),(-4.043495,13.414454),(-4.028199,13.431533),(-4.016029,13.43577),(-4.005358,13.439517),(-3.990217,13.449129),(-3.97691,13.460627),(-3.971613,13.469954),(-3.979468,13.47538),(-3.992232,13.476233),(-3.997968,13.480522),(-3.984481,13.496051),(-3.981251,13.49897),(-3.977737,13.499926),(-3.974094,13.498945),(-3.970321,13.496154),(-3.965468,13.489217),(-3.964482,13.487808),(-3.959288,13.467939),(-3.956059,13.46166),(-3.94748,13.458198),(-3.923399,13.453754),(-3.917973,13.451066),(-3.918593,13.440318),(-3.925544,13.436313),(-3.934923,13.434892),(-3.942726,13.431946),(-3.
963009,13.413162),(-3.971613,13.40833),(-3.984042,13.39647),(-3.971613,13.386755),(-3.948411,13.380838),(-3.928825,13.38027),(-3.908878,13.381975),(-3.8926,13.378926),(-3.85808,13.365516),(-3.817101,13.35469),(-3.798626,13.347145),(-3.724419,13.288828),(-3.678711,13.26175),(-3.596882,13.199402),(-3.589802,13.197387),(-3.576599,13.199299),(-3.569958,13.196767),(-3.553913,13.180256),(-3.54505,13.174494),(-3.538487,13.173409),(-3.516111,13.1739),(-3.473922,13.167601),(-3.461593,13.165761),(-3.449475,13.16881),(-3.439475,13.184855),(-3.440586,13.202839),(-3.452265,13.237514),(-3.448544,13.265781),(-3.42653,13.27423),(-3.262406,13.283919),(-3.248634,13.292781),(-3.283955,13.542198),(-3.269176,13.579095),(-3.263905,13.602659),(-3.259719,13.658315),(-3.266282,13.682344),(-3.286952,13.69777),(-3.267057,13.717045),(-3.240418,13.707976),(-3.195821,13.674903),(-3.167941,13.672112),(-3.146031,13.67666),(-3.125102,13.67728),(-3.100039,13.662733),(-3.08221,13.646222),(-3.077301,13.636791),(-3.07451,13.622141),(-3.067508,13.606767),(-3.05601,13.61085),(-3.038621,13.629583),(-3.026968,13.635345),(-3.019113,13.637515),(-2.994774,13.634983),(-2.986402,13.632063),(-2.979322,13.627567),(-2.97232,13.624467),(-2.965008,13.625655),(-2.930473,13.638523),(-2.895245,13.651648),(-2.923383,13.746035),(-2.928059,13.799624),(-2.912169,13.837942),(-2.89447,13.866571),(-2.855377,13.996175),(-2.85703,13.997777),(-2.858606,14.000232),(-2.861552,14.00173),(-2.867288,14.000542),(-2.840855,14.042994),(-2.69319,14.123196),(-2.676265,14.141729),(-2.619758,14.203604),(-2.597382,14.222311),(-2.516147,14.267864),(-2.46168,14.280912),(-2.385612,14.264712),(-2.151879,14.15614),(-2.11984,14.148776),(-2.104337,14.151179),(-2.085008,14.159668),(-2.035918,14.181229),(-2.027288,14.188127),(-2.023412,14.198643),(-2.005015,14.444236),(-1.997057,14.470694),(-1.967033,14.483742),(-1.919129,14.485861),(-1.836963,14.479582),(-1.766683,14.483174),(-1.69692,14.496119),(-1.350275,14.714969),(-1.307745,14.734657),(-1.116697,14.780443),(-1.07856,14.796359),(-1.043317,14.817908),(-1.043138,14.818062),(-0.836404,14.995985),(-0.836249,14.996088),(-0.782816,15.048385),(-0.771272,15.056619),(-0.752895,15.069727),(-0.719719,15.078461),(-0.500343,15.079721),(-0.4679,15.079908),(-0.458649,15.075463),(-0.435602,15.015157),(-0.425732,15.0026),(-0.397671,15.002135)] +Bulgaria 
[(22.919562,43.834225),(23.052551,43.84282),(23.131849,43.847945),(23.161924,43.857324),(23.196961,43.86275),(23.234478,43.877297),(23.325151,43.886592),(23.484695,43.880604),(23.592699,43.837429),(23.620854,43.83401),(23.636107,43.832158),(23.720753,43.845826),(23.742871,43.8427),(23.799922,43.818463),(24.149606,43.75472),(24.159383,43.752938),(24.336705,43.759251),(24.358234,43.760017),(24.375494,43.763867),(24.431201,43.794176),(24.466341,43.802418),(24.500137,43.799498),(24.661763,43.755657),(24.705603,43.743765),(24.752628,43.738804),(24.963675,43.749605),(25.0816,43.718935),(25.211308,43.711881),(25.252339,43.704646),(25.285405,43.690391),(25.28872,43.688962),(25.323033,43.669713),(25.35962,43.654287),(25.403131,43.65005),(25.426075,43.654391),(25.467417,43.667749),(25.48245,43.669715),(25.488759,43.67054),(25.533821,43.668679),(25.556558,43.670359),(25.575059,43.677387),(25.593869,43.680668),(25.616503,43.687748),(25.637897,43.697282),(25.653503,43.708134),(25.671384,43.717359),(25.732948,43.718781),(25.739596,43.718935),(25.781144,43.732009),(25.804399,43.759914),(25.806259,43.763661),(25.839332,43.788439),(25.869304,43.800893),(25.916433,43.844379),(25.924495,43.858616),(25.934003,43.870321),(26.054306,43.934322),(26.061644,43.949773),(26.079317,43.969049),(26.116214,43.998866),(26.150734,44.012405),(26.231453,44.027495),(26.310518,44.052609),(26.332335,44.054926),(26.415791,44.063789),(26.614168,44.084855),(26.647758,44.093382),(26.667808,44.095087),(26.677317,44.097051),(26.697212,44.106094),(26.708685,44.10811),(26.753695,44.10811),(26.789455,44.115965),(26.884126,44.156531),(27.001535,44.165109),(27.027476,44.177046),(27.100237,44.14449),(27.205553,44.129246),(27.226741,44.120719),(27.251132,44.122373),(27.252662,44.121519),(27.269012,44.112399),(27.26431,44.089765),(27.285342,44.072453),(27.341669,44.053074),(27.353555,44.045271),(27.372882,44.020725),(27.383837,44.015092),(27.574523,44.016281),(27.633434,44.029768),(27.656275,44.023877),(27.676119,43.993543),(27.682515,43.987256),(27.721698,43.94874),(27.787327,43.960419),(27.85647,43.988634),(27.912074,43.993336),(27.912074,43.993233),(27.935845,43.964398),(27.98101,43.84934),(28.014806,43.830039),(28.221254,43.761981),(28.434574,43.735213),(28.57838,43.741278),(28.576182,43.727525),(28.57309,43.606147),(28.575531,43.593329),(28.585704,43.575832),(28.595876,43.563707),(28.602712,43.55272),(28.603526,43.538153),(28.594574,43.512519),(28.578461,43.48078),(28.560313,43.453843),(28.545177,43.442532),(28.535004,43.438625),(28.488617,43.40644),(28.479259,43.396796),(28.473888,43.384263),(28.473155,43.366848),(28.459483,43.380683),(28.414561,43.398871),(28.404959,43.40469),(28.393809,43.414293),(28.368663,43.42178),(28.321625,43.42829),(28.299164,43.425035),(28.261567,43.410956),(28.243175,43.407782),(28.177989,43.40998),(28.157237,43.407782),(28.118826,43.391547),(28.091563,43.363959),(28.087006,43.355251),(28.031098,43.248969),(28.017589,43.232733),(28.000173,43.223212),(27.931977,43.209784),(27.920584,43.204047),(27.913829,43.202338),(27.903982,43.202338),(27.903982,43.195502),(27.938731,43.1765),(27.945567,43.16885),(27.944347,43.157131),(27.925141,43.113593),(27.911388,43.065172),(27.904307,43.055854),(27.895518,43.049262),(27.887706,43.042141),(27.883556,43.030992),(27.885427,43.008612),(27.900401,42.962877),(27.903982,42.942288),(27.898448,42.874661),(27.903982,42.859687),(27.883962,42.848375),(27.883556,42.831936),(27.891612,42.81037),(27.897146,42.784003),(27.897146,42.736233),(27.894705,42.717475),(27.892263,42.710517),(2
7.841075,42.708319),(27.787364,42.715155),(27.743175,42.716051),(27.732677,42.714504),(27.725597,42.708482),(27.719005,42.694566),(27.715831,42.683743),(27.71697,42.674506),(27.724132,42.666978),(27.739513,42.661078),(27.712738,42.65766),(27.6692,42.643704),(27.646739,42.64057),(27.628429,42.628974),(27.630382,42.602729),(27.641449,42.574774),(27.650157,42.558051),(27.634044,42.563707),(27.540863,42.565497),(27.511485,42.553046),(27.498546,42.532457),(27.491466,42.507758),(27.480154,42.482896),(27.46225,42.489569),(27.452891,42.480129),(27.45338,42.465237),(27.465668,42.455634),(27.462087,42.448717),(27.461111,42.443549),(27.465668,42.42829),(27.468435,42.435777),(27.4699,42.437079),(27.469412,42.437201),(27.465668,42.441352),(27.472667,42.461859),(27.503917,42.437323),(27.514171,42.435126),(27.526378,42.443671),(27.534434,42.455268),(27.544444,42.460028),(27.562022,42.448188),(27.571056,42.458482),(27.5796,42.457221),(27.587738,42.451483),(27.595551,42.448188),(27.60963,42.451158),(27.619477,42.455878),(27.629405,42.458645),(27.643321,42.455634),(27.642833,42.450507),(27.641775,42.44953),(27.639822,42.449774),(27.636485,42.448188),(27.642426,42.430894),(27.652192,42.418524),(27.667735,42.416083),(27.691661,42.42829),(27.693533,42.418891),(27.699067,42.413886),(27.707856,42.412746),(27.719005,42.4147),(27.719005,42.407213),(27.713227,42.405951),(27.698009,42.400377),(27.705414,42.390611),(27.709646,42.388129),(27.719005,42.386705),(27.709483,42.376776),(27.708344,42.363105),(27.714366,42.349189),(27.725841,42.338935),(27.740408,42.334296),(27.752126,42.335395),(27.764496,42.338324),(27.780447,42.338935),(27.780447,42.332709),(27.774587,42.327216),(27.774425,42.321723),(27.779145,42.316311),(27.787852,42.310981),(27.76295,42.2956),(27.75115,42.277045),(27.75587,42.25849),(27.780447,42.243354),(27.776052,42.240668),(27.775645,42.239651),(27.773692,42.235907),(27.809744,42.218411),(27.815278,42.211982),(27.819998,42.204657),(27.831391,42.195014),(27.8449,42.186103),(27.856212,42.181301),(27.854177,42.177924),(27.85141,42.170111),(27.849376,42.16706),(27.87908,42.154242),(27.886729,42.147366),(27.903982,42.119818),(27.957774,42.094306),(27.964366,42.084621),(27.972667,42.075385),(27.986583,42.072089),(27.982432,42.061103),(27.987315,42.051744),(27.996918,42.044013),(28.007009,42.037909),(28.007579,42.032904),(28.006114,42.031562),(28.003429,42.03148),(28.000173,42.030463),(28.011567,42.021389),(28.019054,42.00845),(28.020356,41.994574),(28.013845,41.982652),(28.016775,41.972561),(28.016783,41.972531),(27.98101,41.978524),(27.96592,41.982141),(27.917035,41.977904),(27.903392,41.981056),(27.876934,41.99072),(27.852232,41.995448),(27.843034,41.995758),(27.824017,41.993484),(27.819935,41.994699),(27.815955,41.995138),(27.811925,41.994699),(27.807997,41.993484),(27.804948,41.983433),(27.804897,41.969894),(27.802726,41.96005),(27.818281,41.952867),(27.815335,41.946795),(27.80283,41.943074),(27.789807,41.942686),(27.776165,41.946123),(27.723971,41.967595),(27.687074,41.968602),(27.609301,41.953487),(27.606873,41.943513),(27.603359,41.93863),(27.598088,41.938604),(27.590543,41.942893),(27.582378,41.934883),(27.572198,41.929845),(27.550752,41.924212),(27.552096,41.921835),(27.554886,41.920336),(27.557935,41.91907),(27.560674,41.917675),(27.55747,41.915505),(27.551269,41.913102),(27.548892,41.91088),(27.562741,41.906435),(27.546411,41.901164),(27.533182,41.908063),(27.509463,41.933178),(27.494321,41.942841),(27.420837,41.973718),(27.39686,41.989324),(27.374845,42.008703),(27.332471,42.057434),(27.305289,
42.077588),(27.273353,42.091747),(27.238213,42.097922),(27.216405,42.095623),(27.203796,42.08813),(27.181576,42.065883),(27.178992,42.061878),(27.178165,42.058364),(27.173824,42.057072),(27.149588,42.061826),(27.127212,42.062576),(27.11605,42.061826),(27.10065,42.071102),(27.0837,42.078415),(27.06551,42.082704),(27.047903,42.08292),(27.046545,42.082936),(27.022619,42.073893),(27.000398,42.04281),(26.981071,42.032862),(26.96717,42.028495),(26.95823,42.018237),(26.950065,42.006171),(26.938903,41.996223),(26.930221,41.994518),(26.911463,41.996792),(26.901076,41.993484),(26.890327,41.985423),(26.881129,41.985526),(26.871724,41.988136),(26.860768,41.987903),(26.85002,41.982865),(26.837899,41.97522),(26.827695,41.968783),(26.819427,41.965657),(26.80842,41.967879),(26.78992,41.979945),(26.780566,41.983433),(26.768578,41.980772),(26.746977,41.964494),(26.736745,41.958965),(26.717573,41.957311),(26.623573,41.969041),(26.605693,41.967414),(26.589467,41.958758),(26.560632,41.935684),(26.553293,41.931576),(26.547092,41.926899),(26.542958,41.920362),(26.544715,41.916435),(26.556807,41.91075),(26.559805,41.90734),(26.558978,41.901733),(26.55474,41.892948),(26.553604,41.887703),(26.552895,41.881643),(26.552053,41.874448),(26.547816,41.856283),(26.539548,41.83799),(26.526112,41.824244),(26.478363,41.813289),(26.375527,41.816622),(26.334185,41.789543),(26.320026,41.765462),(26.316305,41.743758),(26.32323,41.723682),(26.333359,41.713036),(26.294911,41.710323),(26.273931,41.714897),(26.261012,41.723062),(26.23445,41.745825),(26.226182,41.749675),(26.211816,41.750476),(26.189905,41.734689),(26.135128,41.733397),(26.108359,41.727893),(26.081488,41.711512),(26.074046,41.709135),(26.067225,41.708851),(26.060507,41.7073),(26.053686,41.701513),(26.048105,41.689084),(26.047485,41.674615),(26.050275,41.660456),(26.055029,41.648751),(26.05949,41.643777),(26.066915,41.635496),(26.081178,41.630457),(26.095234,41.62839),(26.106706,41.624282),(26.115284,41.616634),(26.121072,41.608211),(26.13027,41.582683),(26.130994,41.575138),(26.129443,41.559377),(26.1312,41.552659),(26.136781,41.549196),(26.152491,41.547026),(26.158072,41.542323),(26.163136,41.529146),(26.16448,41.517725),(26.16293,41.50646),(26.159312,41.493799),(26.147634,41.484756),(26.145877,41.478038),(26.156212,41.460416),(26.160449,41.455869),(26.174299,41.445998),(26.177399,41.439952),(26.175642,41.431891),(26.170578,41.429565),(26.164273,41.42786),(26.159002,41.421814),(26.14784,41.396906),(26.132441,41.371533),(26.120865,41.357787),(26.114664,41.355203),(26.107326,41.356598),(26.021853,41.341664),(26.008934,41.336806),(25.982269,41.323267),(25.959635,41.314999),(25.948576,41.313965),(25.93514,41.315929),(25.921187,41.316032),(25.896279,41.306265),(25.882017,41.304043),(25.86269,41.310089),(25.833337,41.334636),(25.811013,41.341044),(25.800781,41.338046),(25.763471,41.319029),(25.728744,41.317066),(25.717479,41.31412),(25.70518,41.307299),(25.698049,41.301925),(25.691227,41.298411),(25.679962,41.29717),(25.670453,41.299186),(25.649576,41.308487),(25.639758,41.311071),(25.551494,41.31567),(25.537955,41.312208),(25.530203,41.302751),(25.523692,41.291693),(25.514907,41.283476),(25.505089,41.280582),(25.497337,41.281202),(25.489482,41.283063),(25.479096,41.283786),(25.453464,41.280427),(25.285722,41.239396),(25.262261,41.238104),(25.23911,41.240895),(25.21968,41.249731),(25.177098,41.293863),(25.157875,41.30611),(25.153847,41.307535),(25.116534,41.320735),(25.112606,41.324145),(25.104855,41.334067),(25.101858,41.336651),(25.096793,41.336703),(25.08036,41.334067),
(24.916959,41.386364),(24.886367,41.400627),(24.872931,41.401867),(24.863009,41.400316),(24.842132,41.394735),(24.803271,41.392668),(24.800171,41.379336),(24.802858,41.361921),(24.794279,41.3474),(24.774436,41.348072),(24.752628,41.362748),(24.718728,41.395717),(24.699195,41.408946),(24.680901,41.415509),(24.661264,41.41768),(24.638217,41.41768),(24.644314,41.427653),(24.609484,41.42724),(24.595842,41.429772),(24.580442,41.440521),(24.579719,41.44419),(24.582199,41.455249),(24.581062,41.460209),(24.577238,41.463568),(24.56773,41.468116),(24.564113,41.471062),(24.558738,41.476849),(24.553571,41.480777),(24.549436,41.485428),(24.546543,41.493644),(24.543339,41.521343),(24.530936,41.547543),(24.510162,41.56165),(24.481327,41.553227),(24.459209,41.549506),(24.438849,41.527699),(24.423966,41.525218),(24.402882,41.527854),(24.387483,41.526665),(24.353893,41.519121),(24.345005,41.518397),(24.318236,41.520774),(24.309554,41.519792),(24.303043,41.51695),(24.296119,41.515245),(24.286817,41.517622),(24.28506,41.523151),(24.287334,41.531626),(24.286403,41.54036),(24.26687,41.549765),(24.250747,41.563459),(24.23328,41.561805),(24.214366,41.555759),(24.197698,41.547711),(24.19628,41.547026),(24.181604,41.537362),(24.17747,41.531006),(24.173129,41.515297),(24.170235,41.511576),(24.162793,41.512041),(24.158453,41.51633),(24.154008,41.522118),(24.146257,41.526769),(24.116491,41.533383),(24.076597,41.536019),(24.047348,41.525735),(24.049829,41.493644),(24.052516,41.471475),(24.051999,41.463),(24.045901,41.455455),(24.034739,41.451321),(24.022854,41.453078),(24.000736,41.464137),(23.997119,41.457006),(23.992365,41.454628),(23.986887,41.453698),(23.981306,41.450856),(23.964666,41.43835),(23.949887,41.437575),(23.902861,41.463517),(23.894799,41.464344),(23.867721,41.445482),(23.851598,41.439591),(23.830927,41.435611),(23.80974,41.433803),(23.792067,41.434475),(23.777081,41.429049),(23.754446,41.400678),(23.73822,41.397474),(23.705354,41.403159),(23.672177,41.402952),(23.65285,41.397629),(23.627736,41.378509),(23.624847,41.377094),(23.612439,41.371016),(23.578953,41.371998),(23.513014,41.397733),(23.414519,41.399903),(23.395502,41.395252),(23.365116,41.378561),(23.347236,41.371223),(23.326049,41.369311),(23.31561,41.376855),(23.306205,41.388379),(23.287705,41.398198),(23.269928,41.397268),(23.24657,41.389723),(23.224246,41.379026),(23.209777,41.368587),(23.206366,41.360939),(23.204816,41.342697),(23.199545,41.332982),(23.190656,41.326057),(23.179701,41.321355),(23.15717,41.316342),(23.115209,41.312673),(22.916978,41.335773),(22.940852,41.349829),(22.94447,41.368432),(22.939406,41.389413),(22.937339,41.410755),(22.940542,41.416905),(22.952118,41.427705),(22.954598,41.432408),(22.953358,41.438195),(22.94757,41.448376),(22.946227,41.453233),(22.943599,41.523201),(22.943023,41.538551),(22.94788,41.555139),(22.948707,41.560978),(22.946434,41.567748),(22.936925,41.57891),(22.933721,41.584595),(22.932068,41.597953),(22.932998,41.612345),(22.936098,41.626168),(22.940852,41.63764),(22.945813,41.641077),(22.961523,41.644488),(22.967001,41.647046),(22.970101,41.652032),(22.976613,41.666553),(22.985435,41.677198),(22.998627,41.693115),(23.009582,41.71637),(23.008859,41.739934),(22.991185,41.760992),(22.98054,41.764739),(22.956872,41.765669),(22.945917,41.769338),(22.939716,41.776702),(22.918322,41.814348),(22.907676,41.848584),(22.901372,41.860418),(22.896721,41.864448),(22.885042,41.869151),(22.882088,41.871618),(22.880804,41.872691),(22.878221,41.880261),(22.878634,41.895015),(22.877084,41.902043),(22.866335,41.924884),(2
2.858894,41.94788),(22.857137,41.971884),(22.85476,41.982632),(22.846905,41.993484),(22.846595,41.993639),(22.845768,42.006869),(22.845621,42.007408),(22.843701,42.014465),(22.838223,42.019478),(22.826958,42.025085),(22.821273,42.025369),(22.805977,42.02139),(22.798949,42.021235),(22.791094,42.025808),(22.787684,42.032578),(22.785306,42.039141),(22.780862,42.043171),(22.77063,42.043998),(22.725052,42.042474),(22.718437,42.044463),(22.713993,42.048623),(22.710272,42.05299),(22.705725,42.055935),(22.675856,42.060612),(22.627177,42.079127),(22.617771,42.082704),(22.531058,42.129109),(22.510181,42.144793),(22.506939,42.148927),(22.494678,42.164559),(22.481449,42.193317),(22.443622,42.214427),(22.345023,42.313439),(22.364144,42.320984),(22.405795,42.321552),(22.423985,42.325893),(22.438454,42.340052),(22.454371,42.376768),(22.46977,42.391703),(22.485066,42.397155),(22.497572,42.399196),(22.508838,42.404932),(22.519483,42.420926),(22.533125,42.45759),(22.536536,42.47839),(22.532505,42.493402),(22.532505,42.493557),(22.524857,42.507665),(22.512145,42.519189),(22.481449,42.535622),(22.429669,42.571408),(22.425328,42.572855),(22.428842,42.592776),(22.441318,42.632891),(22.444552,42.64329),(22.449203,42.667965),(22.442072,42.681685),(22.468116,42.718324),(22.481449,42.727677),(22.482586,42.730675),(22.482896,42.733775),(22.482586,42.736824),(22.481449,42.739821),(22.466566,42.748529),(22.45313,42.763592),(22.429359,42.806122),(22.425845,42.809843),(22.427395,42.813615),(22.430446,42.817077),(22.436801,42.824286),(22.445482,42.830178),(22.470907,42.840125),(22.481449,42.84674),(22.497055,42.864413),(22.506047,42.870123),(22.519793,42.870356),(22.53757,42.868341),(22.544494,42.871389),(22.544785,42.871706),(22.549972,42.877358),(22.563615,42.884283),(22.5909,42.886892),(22.666244,42.871932),(22.69663,42.87741),(22.727015,42.886892),(22.738798,42.897383),(22.739579,42.898858),(22.745516,42.910069),(22.763189,42.958645),(22.76939,42.97128),(22.776418,42.979729),(22.788097,42.984897),(22.815796,42.989703),(22.828818,42.993449),(22.828921,42.993449),(22.829025,42.993501),(22.829025,42.993656),(22.842254,43.007505),(22.884215,43.036651),(22.889486,43.044376),(22.896721,43.062721),(22.901682,43.069749),(22.910157,43.075279),(22.927107,43.081144),(22.935271,43.085562),(22.955632,43.108274),(22.974029,43.141192),(22.984571,43.174627),(22.982902,43.187318),(22.981367,43.198992),(22.964727,43.204418),(22.915531,43.212247),(22.897754,43.220335),(22.883802,43.230592),(22.857343,43.256947),(22.833159,43.274647),(22.826958,43.28139),(22.823857,43.289297),(22.820756,43.307539),(22.817139,43.315497),(22.80453,43.328984),(22.73301,43.381513),(22.724343,43.38606),(22.719367,43.388671),(22.702934,43.394045),(22.693219,43.394872),(22.674202,43.394148),(22.664694,43.396732),(22.658926,43.401295),(22.656529,43.403192),(22.645367,43.420297),(22.637822,43.426369),(22.62852,43.428255),(22.606919,43.427402),(22.596274,43.429159),(22.586766,43.43443),(22.57271,43.44815),(22.565785,43.453344),(22.532609,43.464842),(22.518863,43.474247),(22.509354,43.493341),(22.490647,43.540883),(22.478452,43.559229),(22.477625,43.564164),(22.478658,43.569176),(22.481449,43.574111),(22.482689,43.576695),(22.483103,43.579279),(22.482689,43.581734),(22.481449,43.584137),(22.478142,43.587573),(22.477108,43.591294),(22.478142,43.594911),(22.481449,43.598529),(22.481759,43.598942),(22.481966,43.599459),(22.481759,43.599975),(22.481449,43.600647),(22.473801,43.612998),(22.472871,43.635942),(22.466256,43.64912),(22.455921,43.656406),(22.426465,43.668214
),(22.41396,43.676663),(22.404865,43.687179),(22.396906,43.699401),(22.390498,43.712449),(22.386054,43.725498),(22.385848,43.733817),(22.389568,43.750509),(22.388535,43.758286),(22.362593,43.780843),(22.349467,43.807921),(22.354738,43.829703),(22.367554,43.852751),(22.377063,43.883524),(22.379026,43.913496),(22.382024,43.918561),(22.391945,43.931867),(22.394529,43.936337),(22.396803,43.951944),(22.39732,43.980934),(22.399594,43.993336),(22.411789,44.006927),(22.43432,44.013955),(22.465885,44.017624),(22.481449,44.019433),(22.50367,44.019898),(22.514935,44.030285),(22.522583,44.044703),(22.534159,44.057157),(22.554623,44.062428),(22.57519,44.061394),(22.592967,44.063926),(22.604749,44.079378),(22.604646,44.088163),(22.598134,44.109298),(22.597101,44.119065),(22.599065,44.130331),(22.6094,44.159941),(22.607953,44.159993),(22.605989,44.163145),(22.604852,44.168468),(22.606196,44.174566),(22.608573,44.175858),(22.624799,44.189397),(22.639992,44.207329),(22.648777,44.213995),(22.69164,44.228435),(22.906436,44.122889),(22.942609,44.111469),(22.988085,44.107025),(23.008307,44.100446),(23.030976,44.093072),(23.040071,44.062325),(23.023018,44.031629),(22.988085,44.017676),(22.966277,44.015557),(22.926486,44.006152),(22.905816,44.003982),(22.885869,43.994525),(22.874707,43.972046),(22.850522,43.896986),(22.851039,43.874352),(22.863441,43.855412),(22.888763,43.839522),(22.919562,43.834225)] +Bahrain [(50.551606,26.194241),(50.594737,26.160305),(50.604871,26.174732),(50.619898,26.185466),(50.626339,26.170438),(50.632779,26.146823),(50.64566,26.14575),(50.635507,26.134382),(50.634926,26.115694),(50.614757,26.111151),(50.620616,26.052965),(50.618419,25.964748),(50.603526,25.85456),(50.594493,25.832709),(50.56544,25.7897),(50.565278,25.789537),(50.558604,25.81977),(50.547211,25.852932),(50.531423,25.88524),(50.511404,25.913031),(50.467784,25.957587),(50.46046,25.985256),(50.46046,25.985297),(50.479991,26.018866),(50.485444,26.033433),(50.490896,26.048),(50.490896,26.048041),(50.485199,26.085639),(50.469981,26.122626),(50.452973,26.149563),(50.448904,26.161607),(50.449555,26.177965),(50.461192,26.225246),(50.468516,26.236884),(50.481456,26.24258),(50.481619,26.24254),(50.482595,26.24254),(50.503917,26.242011),(50.516368,26.239691),(50.5324,26.2331),(50.545421,26.227729),(50.545502,26.227769),(50.586681,26.242499),(50.593272,26.238593),(50.597423,26.225531),(50.604747,26.216498),(50.606944,26.212388),(50.609141,26.208319),(50.609141,26.208279),(50.603526,26.197659),(50.594086,26.195299),(50.565603,26.198879),(50.551606,26.194241)] +Bosnia and Herzegovina 
[(16.941529,45.241219),(16.947317,45.23569),(16.967987,45.237292),(16.9653,45.244216),(16.961166,45.250934),(16.975945,45.247162),(16.988141,45.236516),(16.998683,45.2314),(17.008295,45.244165),(17.012015,45.236465),(17.013566,45.230574),(17.012532,45.224476),(17.008295,45.216156),(17.014393,45.219773),(17.030206,45.226956),(17.036303,45.23047),(17.055734,45.200446),(17.098418,45.177657),(17.124628,45.169058),(17.187095,45.148563),(17.197844,45.152956),(17.24549,45.155384),(17.249727,45.15988),(17.268951,45.189543),(17.307811,45.171404),(17.326518,45.165926),(17.359384,45.150682),(17.368583,45.145101),(17.382432,45.139623),(17.400829,45.141018),(17.434109,45.148563),(17.436176,45.151922),(17.438759,45.158123),(17.443307,45.162051),(17.451162,45.158485),(17.45695,45.151819),(17.458086,45.146134),(17.454986,45.140605),(17.447751,45.1343),(17.46036,45.131252),(17.472349,45.13275),(17.496224,45.14169),(17.487025,45.120865),(17.481858,45.114405),(17.489299,45.116265),(17.539528,45.120968),(17.561439,45.119366),(17.572395,45.120399),(17.593065,45.129598),(17.629962,45.157348),(17.651563,45.165358),(17.684739,45.163963),(17.714505,45.152025),(17.741377,45.133267),(17.797497,45.081952),(17.815584,45.07017),(17.835738,45.064434),(17.84566,45.065261),(17.875219,45.073994),(17.902607,45.077766),(17.911185,45.081074),(17.927722,45.092029),(17.988493,45.143809),(18.003479,45.149338),(18.019086,45.149442),(18.035002,45.143344),(18.090296,45.107997),(18.112413,45.100607),(18.131844,45.097869),(18.143698,45.0977),(18.143704,45.0977),(18.153651,45.097558),(18.174942,45.100762),(18.192512,45.108514),(18.199333,45.119004),(18.200057,45.131613),(18.20233,45.144067),(18.213286,45.153782),(18.237884,45.157451),(18.261965,45.150682),(18.30744,45.127531),(18.322323,45.12288),(18.392913,45.118642),(18.412033,45.112441),(18.429293,45.102209),(18.45183,45.084959),(18.451842,45.08495),(18.466087,45.074046),(18.481708,45.066921),(18.490788,45.06278),(18.51704,45.055856),(18.541121,45.055287),(18.539261,45.069343),(18.534403,45.081539),(18.534803,45.088412),(18.53492,45.090427),(18.549596,45.094768),(18.557658,45.094406),(18.573781,45.091564),(18.581842,45.091357),(18.58856,45.093218),(18.602513,45.099264),(18.609127,45.100091),(18.616672,45.097662),(18.621116,45.093114),(18.627937,45.08004),(18.633622,45.071979),(18.640547,45.065364),(18.648711,45.062677),(18.658116,45.066294),(18.661217,45.0711),(18.663284,45.077146),(18.666075,45.082986),(18.671346,45.086861),(18.678167,45.087378),(18.684058,45.084794),(18.689019,45.080247),(18.726329,45.026142),(18.738318,45.01591),(18.750101,45.012241),(18.778006,45.010329),(18.78493,45.008468),(18.788961,45.005729),(18.791752,45.001544),(18.794646,44.995446),(18.795162,44.993327),(18.795162,44.988004),(18.795783,44.985059),(18.797953,44.982217),(18.803017,44.977307),(18.804568,44.973845),(18.80157,44.963975),(18.779866,44.952244),(18.772631,44.942478),(18.78338,44.913745),(18.817797,44.887545),(18.842129,44.876239),(18.842144,44.876233),(18.858724,44.868529),(18.889627,44.86119),(18.974916,44.862757),(19.004969,44.863309),(19.015821,44.865635),(19.047757,44.872714),(19.068427,44.874833),(19.08455,44.878967),(19.174261,44.925424),(19.18687,44.927543),(19.193174,44.921549),(19.196895,44.913177),(19.201856,44.908371),(19.211881,44.908371),(19.229554,44.913745),(19.239476,44.915141),(19.283815,44.908371),(19.293013,44.909405),(19.301281,44.91235),(19.31017,44.91235),(19.33022,44.898708),(19.340142,44.896227),(19.350271,44.897002),(19.352958,44.898087),(19.353165,44.899018),(19.356982,
44.895877),(19.356993,44.895869),(19.362776,44.891111),(19.368667,44.887132),(19.372802,44.881448),(19.375695,44.873128),(19.376626,44.862999),(19.373112,44.86026),(19.367841,44.859278),(19.363707,44.854628),(19.32836,44.733963),(19.318025,44.715463),(19.308413,44.705128),(19.288259,44.693035),(19.277614,44.684871),(19.270069,44.67562),(19.255703,44.645648),(19.208677,44.588391),(19.20413,44.585135),(19.193278,44.580587),(19.188317,44.57604),(19.18625,44.569322),(19.187283,44.553302),(19.185319,44.546223),(19.179635,44.538264),(19.173124,44.531443),(19.165269,44.526689),(19.129922,44.518317),(19.127339,44.502556),(19.143358,44.458218),(19.141394,44.430829),(19.129612,44.416153),(19.115763,44.403596),(19.107185,44.382718),(19.108942,44.36365),(19.116693,44.343703),(19.138914,44.309338),(19.157001,44.293577),(19.177155,44.286962),(19.220046,44.280244),(19.240717,44.272699),(19.249502,44.270736),(19.263661,44.270167),(19.295907,44.2758),(19.307379,44.274198),(19.324329,44.263966),(19.341589,44.245828),(19.353991,44.22433),(19.356058,44.204021),(19.362363,44.191206),(19.381173,44.177098),(19.424891,44.153792),(19.435537,44.146092),(19.442565,44.143198),(19.448456,44.144438),(19.459721,44.152707),(19.465716,44.15281),(19.474191,44.144903),(19.476361,44.127023),(19.482666,44.120667),(19.498169,44.110229),(19.516189,44.091194),(19.522043,44.08501),(19.554599,44.071265),(19.570929,44.056898),(19.580231,44.051524),(19.583848,44.054315),(19.589946,44.060309),(19.598938,44.062583),(19.611443,44.054521),(19.618885,44.035711),(19.61041,44.019278),(19.593357,44.005584),(19.552222,43.983415),(19.528554,43.977213),(19.50406,43.975663),(19.447112,43.979797),(19.393989,43.977058),(19.379106,43.973854),(19.364637,43.973286),(19.351718,43.979125),(19.325879,43.996592),(19.300558,44.009511),(19.287329,44.013025),(19.272756,44.012405),(19.252085,44.007496),(19.243507,44.002018),(19.238029,43.99251),(19.238133,43.985378),(19.242887,43.972821),(19.240717,43.965741),(19.229348,43.95768),(19.241027,43.952357),(19.275443,43.933263),(19.305932,43.904737),(19.359469,43.842209),(19.461685,43.762136),(19.481735,43.729218),(19.505713,43.67364),(19.507367,43.647182),(19.481735,43.628914),(19.477498,43.616977),(19.475638,43.602559),(19.476878,43.588684),(19.481735,43.578116),(19.48897,43.573181),(19.491761,43.568608),(19.48959,43.564448),(19.481735,43.560831),(19.443908,43.571864),(19.431816,43.57114),(19.41931,43.549255),(19.410732,43.540754),(19.402877,43.549643),(19.394609,43.566463),(19.38045,43.585584),(19.363396,43.601577),(19.34655,43.608838),(19.335801,43.606513),(19.31265,43.593154),(19.300868,43.588297),(19.289189,43.587935),(19.263661,43.590958),(19.252706,43.588633),(19.238856,43.572122),(19.229451,43.549746),(19.217462,43.532796),(19.195345,43.532796),(19.147699,43.538145),(19.122791,43.535948),(19.096023,43.512901),(19.076386,43.507242),(19.054991,43.5067),(19.038145,43.511015),(19.025019,43.523029),(19.01427,43.537783),(19.000214,43.547886),(18.977477,43.546232),(18.962697,43.5383),(18.938926,43.518947),(18.920529,43.512203),(18.911021,43.507268),(18.90513,43.499051),(18.905647,43.490628),(18.915258,43.485383),(18.930658,43.482282),(18.937893,43.478949),(18.951225,43.463808),(18.950708,43.457297),(18.946368,43.44908),(18.946264,43.443938),(18.968278,43.448098),(18.975306,43.444274),(19.00993,43.410995),(19.039592,43.350714),(19.069668,43.30883),(19.062536,43.304386),(19.036285,43.303327),(19.024709,43.298728),(19.022535,43.296546),(19.017164,43.291157),(19.01086,43.28232),(19.002902,43.273949),(18.992463,43.2
67102),(18.989506,43.272016),(18.988949,43.272941),(18.968589,43.292216),(18.959493,43.303456),(18.957116,43.311828),(18.957943,43.318985),(18.957323,43.325858),(18.950295,43.333067),(18.923423,43.346838),(18.899135,43.351903),(18.839604,43.34782),(18.821207,43.341387),(18.824618,43.337408),(18.830302,43.328157),(18.833506,43.324153),(18.807151,43.318029),(18.679511,43.24948),(18.664318,43.233176),(18.688399,43.224469),(18.645301,43.180182),(18.629074,43.154886),(18.621128,43.124578),(18.620599,43.122563),(18.621116,43.09644),(18.638996,43.020243),(18.598379,43.024455),(18.538951,43.023887),(18.483037,43.014637),(18.452754,42.993398),(18.433531,42.954201),(18.434047,42.936889),(18.443969,42.916865),(18.467637,42.881699),(18.473322,42.862553),(18.465984,42.852812),(18.453478,42.845655),(18.443659,42.834467),(18.444589,42.817078),(18.453891,42.793151),(18.46743,42.769199),(18.502157,42.727445),(18.522518,42.71189),(18.539674,42.695457),(18.550009,42.668069),(18.549596,42.638923),(18.542832,42.626429),(18.538434,42.618304),(18.517247,42.602982),(18.506798,42.59849),(18.486654,42.58983),(18.493682,42.579831),(18.495852,42.570865),(18.492132,42.564767),(18.481797,42.563682),(18.470531,42.569056),(18.458956,42.569728),(18.44769,42.566214),(18.437355,42.559212),(18.421025,42.56363),(18.385265,42.566524),(18.370485,42.57363),(18.369791,42.574384),(18.369783,42.574393),(18.349608,42.59629),(18.338653,42.602517),(18.311574,42.601225),(18.257107,42.614893),(18.224138,42.628071),(18.049368,42.714758),(17.995315,42.740441),(17.971853,42.754962),(17.928859,42.7908),(17.904467,42.804546),(17.869431,42.811807),(17.858475,42.816974),(17.827159,42.853122),(17.825092,42.864542),(17.826229,42.888649),(17.823335,42.898365),(17.81176,42.909863),(17.802665,42.90963),(17.794707,42.904592),(17.786748,42.901388),(17.765458,42.904126),(17.7236,42.916245),(17.701586,42.9195),(17.679468,42.915676),(17.665929,42.90516),(17.653349,42.890928),(17.653331,42.890937),(17.652843,42.891181),(17.556488,42.93476),(17.573985,42.934272),(17.601085,42.925238),(17.617931,42.92178),(17.603038,42.932563),(17.580679,42.942075),(17.63451,42.950403),(17.662725,42.965699),(17.659314,42.993398),(17.659211,42.993449),(17.659108,42.99363),(17.659004,42.993656),(17.628412,43.046573),(17.598026,43.072902),(17.587381,43.082126),(17.450913,43.148152),(17.442067,43.152432),(17.42677,43.165713),(17.415195,43.184962),(17.406823,43.205426),(17.400209,43.216045),(17.389667,43.223099),(17.36941,43.232556),(17.328689,43.260358),(17.289931,43.30343),(17.267607,43.353324),(17.286417,43.437376),(17.270501,43.463214),(17.239495,43.478407),(17.142757,43.489336),(17.084569,43.513237),(17.062314,43.527747),(17.030516,43.54848),(16.98194,43.589692),(16.82381,43.707307),(16.712602,43.771515),(16.698236,43.788078),(16.689761,43.809317),(16.686144,43.826137),(16.678599,43.840684),(16.637361,43.868409),(16.604805,43.901068),(16.555816,43.937474),(16.527447,43.967859),(16.516542,43.979539),(16.501039,43.992716),(16.483469,44.002535),(16.439647,44.014007),(16.431482,44.025789),(16.426935,44.040517),(16.414016,44.055813),(16.403991,44.058759),(16.377946,44.057364),(16.367197,44.058139),(16.357378,44.062169),(16.346113,44.068474),(16.326889,44.082375),(16.312213,44.099997),(16.289992,44.139116),(16.275523,44.157357),(16.232115,44.190999),(16.215785,44.208155),(16.197078,44.25115),(16.18757,44.282414),(16.186639,44.297607),(16.189533,44.308511),(16.199662,44.324686),(16.203176,44.333057),(16.205863,44.349801),(16.205644,44.350531),(16.202866,44.359774),(16.192117,44.36
7422),(16.171343,44.377086),(16.153153,44.380445),(16.141991,44.380755),(16.13827,44.377602),(16.138787,44.385406),(16.145402,44.389591),(16.153877,44.39357),(16.159871,44.400702),(16.159664,44.416101),(16.152946,44.435067),(16.128348,44.484314),(16.123077,44.5039),(16.116463,44.52147),(16.104887,44.532373),(16.0837,44.534905),(16.04732,44.52333),(16.026649,44.52519),(16.006082,44.541003),(16.013937,44.557023),(16.030473,44.572216),(16.034917,44.585393),(16.043186,44.588442),(16.044736,44.589372),(16.041119,44.602808),(16.036158,44.615314),(16.028303,44.624719),(16.016004,44.628957),(15.991613,44.631437),(15.979107,44.634383),(15.968978,44.63924),(15.955646,44.653193),(15.948101,44.678101),(15.937352,44.688953),(15.929808,44.68947),(15.921539,44.686111),(15.913064,44.684509),(15.905416,44.690142),(15.902006,44.698823),(15.899318,44.70921),(15.895598,44.718564),(15.888776,44.724248),(15.876271,44.724816),(15.869243,44.718564),(15.863868,44.709985),(15.856427,44.703577),(15.847849,44.70151),(15.829969,44.700528),(15.820357,44.698255),(15.805681,44.696653),(15.796999,44.703267),(15.790488,44.713396),(15.78284,44.722594),(15.759275,44.73417),(15.753694,44.739286),(15.728476,44.769103),(15.717314,44.786466),(15.716074,44.80321),(15.730647,44.817007),(15.751214,44.821555),(15.76868,44.827342),(15.773435,44.845016),(15.76403,44.864033),(15.74801,44.882016),(15.734264,44.902118),(15.730853,44.927233),(15.734108,44.93444),(15.738811,44.944855),(15.763616,44.97555),(15.76744,44.993224),(15.755555,45.023558),(15.755968,45.042213),(15.779429,45.085363),(15.78377,45.100917),(15.780049,45.160294),(15.780979,45.167993),(15.792348,45.189801),(15.802569,45.196473),(15.824801,45.210988),(15.878855,45.217241),(15.974559,45.215381),(15.997504,45.218533),(16.008459,45.218171),(16.020448,45.213934),(16.029233,45.205045),(16.028716,45.19626),(16.024996,45.188922),(16.024375,45.184323),(16.033057,45.179724),(16.054968,45.176417),(16.067474,45.169595),(16.08277,45.151612),(16.104061,45.112545),(16.121527,45.096163),(16.212994,45.031464),(16.24121,45.018959),(16.279347,45.007176),(16.316244,45.001234),(16.340635,45.005936),(16.346019,45.02001),(16.346023,45.02002),(16.356965,45.048621),(16.377325,45.075596),(16.381976,45.083709),(16.384043,45.09544),(16.381976,45.101951),(16.38177,45.107584),(16.389211,45.116834),(16.398513,45.121433),(16.424764,45.127272),(16.435513,45.132853),(16.449156,45.148098),(16.481919,45.199516),(16.528944,45.222254),(16.587028,45.220858),(16.786293,45.178846),(16.811718,45.181223),(16.824223,45.189077),(16.825774,45.196364),(16.82505,45.205045),(16.830734,45.216983),(16.84076,45.224011),(16.851922,45.227421),(16.86174,45.232227),(16.867321,45.243389),(16.875383,45.246955),(16.885822,45.255223),(16.893986,45.263543),(16.896984,45.268142),(16.924372,45.284524),(16.93264,45.278788),(16.940702,45.257755),(16.942149,45.249849),(16.941529,45.241219)] +Bajo Nuevo Bank (Petrel Is.) 
[(-79.989288,15.794949),(-79.987823,15.79621),(-79.986399,15.79442),(-79.988149,15.794176),(-79.989288,15.794949)] +Saint Barthelemy [(-62.838857,17.881985),(-62.850942,17.890448),(-62.861318,17.905422),(-62.86734,17.920396),(-62.866119,17.929145),(-62.857411,17.925035),(-62.791656,17.915473),(-62.79894,17.904486),(-62.810129,17.892279),(-62.823842,17.883246),(-62.838857,17.881985)] +Belarus [(28.148907,56.142414),(28.169112,56.125257),(28.238462,56.082624),(28.269364,56.058207),(28.289828,56.046606),(28.310912,56.042679),(28.333443,56.050249),(28.366516,56.07911),(28.389771,56.088567),(28.537824,56.097714),(28.594719,56.092391),(28.611462,56.088464),(28.620713,56.082986),(28.637094,56.065726),(28.671924,56.037563),(28.680606,56.027382),(28.690011,56.003895),(28.695488,55.98015),(28.706392,55.959841),(28.73192,55.946793),(28.80897,55.934571),(28.830881,55.937672),(28.833051,55.960978),(28.859716,55.976455),(28.922141,55.992139),(28.922245,55.992191),(28.922296,55.992191),(28.922451,55.992242),(28.980846,56.013533),(29.030765,56.024178),(29.088643,56.023222),(29.145435,56.012034),(29.192926,55.992139),(29.224035,55.978186),(29.377411,55.954028),(29.395601,55.947723),(29.413067,55.937982),(29.431567,55.924184),(29.441076,55.914934),(29.444435,55.906925),(29.440559,55.900543),(29.433738,55.899535),(29.425573,55.9),(29.417408,55.89801),(29.407796,55.893876),(29.399631,55.891887),(29.391983,55.888373),(29.384128,55.879949),(29.380718,55.871242),(29.375654,55.846799),(29.371933,55.836386),(29.348162,55.802848),(29.343614,55.786984),(29.350745,55.766055),(29.363406,55.751482),(29.413067,55.727762),(29.461023,55.6873),(29.480867,55.681099),(29.5081,55.685491),(29.585356,55.73802),(29.684162,55.770602),(29.710827,55.773754),(29.779867,55.763807),(29.805705,55.771455),(29.844772,55.812563),(29.869474,55.830547),(29.907817,55.843208),(29.947712,55.848194),(29.988226,55.846799),(30.106255,55.821917),(30.132352,55.826878),(30.177258,55.851372),(30.200254,55.857987),(30.217773,55.855145),(30.251517,55.837626),(30.270793,55.83065),(30.468817,55.793521),(30.477395,55.787733),(30.477808,55.779129),(30.471814,55.770964),(30.469282,55.762644),(30.480702,55.753962),(30.569586,55.729519),(30.580128,55.724765),(30.587466,55.718306),(30.5916,55.705697),(30.588396,55.696601),(30.583745,55.68854),(30.583745,55.679135),(30.596354,55.665286),(30.616611,55.657431),(30.639142,55.654485),(30.679967,55.656346),(30.693713,55.65216),(30.703324,55.642341),(30.71304,55.626683),(30.742185,55.594385),(30.77071,55.591543),(30.804093,55.602344),(30.847657,55.611025),(30.886156,55.600483),(30.907122,55.57777),(30.912821,55.571596),(30.919745,55.534492),(30.899385,55.499146),(30.914371,55.493306),(30.919435,55.492325),(30.913234,55.479715),(30.887189,55.468088),(30.881298,55.451448),(30.889101,55.433413),(30.905276,55.420856),(30.918608,55.407627),(30.918298,55.387783),(30.905948,55.376001),(30.845641,55.351145),(30.811225,55.323084),(30.797169,55.304274),(30.794275,55.285515),(30.804093,55.272958),(30.821353,55.26438),(30.856803,55.253476),(30.869826,55.241539),(30.886517,55.204538),(30.900728,55.192136),(30.947237,55.171362),(30.959795,55.162577),(30.96119,55.158546),(30.956849,55.149503),(30.958813,55.144439),(30.962223,55.142888),(30.972455,55.140873),(30.975039,55.13953),(30.979483,55.134259),(30.984031,55.1309),(30.985478,55.125835),(30.98062,55.115345),(30.972662,55.108575),(30.970285,55.101444),(30.973075,55.093951),(30.98062,55.086303),(30.994211,55.067027),(31.006407,55.04243),(31.005632,55.022999),(30.98062,55.018658),(
30.933491,55.025635),(30.913234,55.024601),(30.90662,55.012561),(30.916955,54.996024),(30.931424,54.985172),(30.936075,54.973235),(30.917368,54.953959),(30.899488,54.946363),(30.838613,54.93918),(30.814842,54.928018),(30.817426,54.917631),(30.827451,54.902541),(30.826108,54.877478),(30.810708,54.86151),(30.787867,54.847713),(30.768127,54.829936),(30.762649,54.801979),(30.770607,54.786011),(30.786058,54.779345),(30.827296,54.771025),(30.98062,54.705654),(30.995503,54.689841),(30.999017,54.671341),(31.020928,54.673666),(31.10516,54.668292),(31.128621,54.640387),(31.167947,54.621577),(31.139474,54.582871),(31.089554,54.535897),(31.064646,54.492282),(31.093585,54.479415),(31.167896,54.467064),(31.179017,54.453353),(31.179213,54.453111),(31.20903,54.448047),(31.22536,54.428048),(31.248717,54.376785),(31.26174,54.364486),(31.274142,54.356632),(31.284788,54.347381),(31.292022,54.331258),(31.299154,54.272606),(31.309799,54.244339),(31.324682,54.229249),(31.480641,54.156644),(31.506376,54.143931),(31.530871,54.13742),(31.583116,54.129514),(31.666935,54.101867),(31.697837,54.097939),(31.725794,54.097474),(31.73675,54.094167),(31.748119,54.086312),(31.755767,54.074943),(31.762588,54.060319),(31.77127,54.048537),(31.783517,54.045798),(31.808477,54.055978),(31.813851,54.057063),(31.822636,54.053446),(31.823876,54.050139),(31.823566,54.045488),(31.84651,53.992313),(31.841446,53.984561),(31.839689,53.977223),(31.839276,53.969782),(31.837932,53.962134),(31.82646,53.940171),(31.81013,53.882604),(31.79256,53.857437),(31.753803,53.81961),(31.744656,53.794857),(31.787289,53.794392),(31.873175,53.777132),(32.083602,53.809637),(32.10603,53.806949),(32.164837,53.78168),(32.290411,53.760854),(32.325809,53.745454),(32.357384,53.71972),(32.371801,53.714397),(32.421566,53.715792),(32.442133,53.713932),(32.461563,53.7068),(32.48058,53.692073),(32.487815,53.684683),(32.490192,53.677138),(32.487815,53.669645),(32.48058,53.66241),(32.406115,53.639414),(32.399087,53.635384),(32.398001,53.627425),(32.401102,53.609855),(32.411489,53.582364),(32.429421,53.56159),(32.452933,53.546397),(32.48058,53.535648),(32.53081,53.521437),(32.554271,53.510792),(32.569877,53.492602),(32.569877,53.492395),(32.576802,53.486039),(32.584295,53.48423),(32.592511,53.486452),(32.600573,53.492395),(32.618143,53.494255),(32.635713,53.492653),(32.650286,53.487641),(32.647599,53.479321),(32.641656,53.468985),(32.646772,53.457978),(32.658244,53.455653),(32.688526,53.462216),(32.701239,53.462113),(32.719532,53.439478),(32.717794,53.431062),(32.704443,53.366408),(32.717465,53.334937),(32.697931,53.325893),(32.681808,53.326798),(32.666719,53.331345),(32.649872,53.333671),(32.579954,53.324369),(32.583468,53.321217),(32.591891,53.311837),(32.595922,53.308272),(32.569154,53.298841),(32.537321,53.295068),(32.505643,53.297187),(32.48058,53.305171),(32.454845,53.300443),(32.447197,53.288764),(32.455982,53.277783),(32.479133,53.274915),(32.469211,53.255458),(32.455672,53.236674),(32.423633,53.204402),(32.405546,53.192672),(32.389113,53.187633),(32.352836,53.18045),(32.32028,53.162777),(32.295165,53.140866),(32.267983,53.124769),(32.215687,53.125518),(32.210519,53.121772),(32.209072,53.116346),(32.206695,53.112392),(32.206488,53.10986),(32.207005,53.105158),(32.20592,53.100352),(32.201114,53.097561),(32.197393,53.097742),(32.188608,53.099241),(32.152332,53.096347),(32.126545,53.084229),(32.117295,53.081102),(32.082775,53.081955),(32.009911,53.099964),(31.980559,53.098026),(31.954721,53.089991),(31.930743,53.088337),(31.907179,53.091928),(31.855502,53.110248),(31
.842635,53.112186),(31.806823,53.110015),(31.796281,53.112392),(31.780468,53.127869),(31.769719,53.169262),(31.756283,53.186574),(31.738558,53.192542),(31.694478,53.192904),(31.674945,53.19523),(31.614018,53.209751),(31.593606,53.210681),(31.573659,53.207089),(31.535935,53.194868),(31.512061,53.194067),(31.464002,53.199984),(31.416356,53.199984),(31.378529,53.182026),(31.359925,53.133838),(31.361165,53.121306),(31.361573,53.120038),(31.364886,53.109731),(31.367057,53.098931),(31.363698,53.088776),(31.354137,53.08211),(31.329436,53.079345),(31.319204,53.076374),(31.269285,53.028418),(31.247064,53.014388),(31.322615,52.977129),(31.339151,52.958061),(31.36623,52.908555),(31.387934,52.887987),(31.418113,52.870237),(31.450979,52.857214),(31.480641,52.850935),(31.505136,52.848894),(31.512784,52.841039),(31.515161,52.828379),(31.523946,52.811661),(31.538364,52.799853),(31.55056,52.794737),(31.560843,52.787063),(31.569525,52.767504),(31.570248,52.725258),(31.547924,52.705699),(31.480641,52.682522),(31.480641,52.68247),(31.4917,52.667096),(31.536142,52.630484),(31.565907,52.59015),(31.579033,52.577825),(31.615207,52.558576),(31.628643,52.54806),(31.624302,52.53806),(31.610763,52.535993),(31.574382,52.541264),(31.559293,52.540541),(31.563634,52.537053),(31.575933,52.524728),(31.558156,52.519999),(31.55056,52.511886),(31.552988,52.502249),(31.565494,52.492637),(31.587922,52.482612),(31.589369,52.458039),(31.583322,52.4283),(31.582857,52.402772),(31.59035,52.391842),(31.600531,52.383393),(31.608075,52.372463),(31.608592,52.354428),(31.602753,52.340915),(31.592262,52.328616),(31.579602,52.318513),(31.567354,52.311382),(31.59712,52.284303),(31.613656,52.273012),(31.631433,52.26477),(31.648693,52.260997),(31.68342,52.257199),(31.699439,52.251463),(31.688897,52.243221),(31.681921,52.230456),(31.679492,52.215858),(31.682386,52.201931),(31.689517,52.19157),(31.698716,52.185446),(31.749049,52.163949),(31.762588,52.149841),(31.766619,52.130075),(31.764345,52.100568),(31.649933,52.096847),(31.474854,52.117776),(31.383076,52.117492),(31.304425,52.097493),(31.284271,52.081241),(31.268871,52.061242),(31.252438,52.044499),(31.228977,52.03822),(31.204896,52.04393),(31.159214,52.068115),(31.134823,52.076667),(31.096324,52.079613),(30.959329,52.074678),(30.934421,52.069717),(30.918815,52.059175),(30.924706,52.042277),(30.919745,52.037264),(30.916851,52.032355),(30.914629,52.027575),(30.91127,52.022562),(30.940519,52.020082),(30.950131,52.006775),(30.941243,51.993804),(30.915198,51.99259),(30.908377,51.998352),(30.902795,51.99998),(30.896904,51.998326),(30.888739,51.994347),(30.880885,51.988171),(30.879024,51.982022),(30.879644,51.97688),(30.879334,51.97365),(30.869102,51.96502),(30.858044,51.957631),(30.845641,51.951352),(30.831379,51.9459),(30.80306,51.939131),(30.802543,51.935539),(30.80523,51.929338),(30.810605,51.925178),(30.810605,51.918382),(30.797065,51.906729),(30.788177,51.900606),(30.779495,51.897919),(30.74973,51.899624),(30.741668,51.897919),(30.737844,51.891588),(30.737224,51.882235),(30.734537,51.873734),(30.71397,51.866964),(30.707562,51.860376),(30.702808,51.853761),(30.69733,51.850738),(30.694539,51.847276),(30.672732,51.829602),(30.666324,51.822393),(30.661983,51.819422),(30.662293,51.815598),(30.669631,51.806064),(30.670562,51.790535),(30.652785,51.77914),(30.630461,51.770019),(30.618162,51.761338),(30.638109,51.755111),(30.64617,51.754542),(30.64617,51.747075),(30.63506,51.739685),(30.632011,51.727981),(30.630564,51.715604),(30.624983,51.706096),(30.613821,51.702918),(30.582815,51.702297),(30.57030
9,51.69992),(30.575994,51.686407),(30.564315,51.665297),(30.570309,51.6515),(30.563488,51.647314),(30.555323,51.639769),(30.549949,51.637185),(30.552843,51.633335),(30.556615,51.62362),(30.543851,51.620132),(30.515119,51.603673),(30.515119,51.596257),(30.534549,51.585845),(30.543127,51.582589),(30.543127,51.576388),(30.535273,51.575018),(30.531087,51.573313),(30.527728,51.571298),(30.52256,51.568921),(30.52256,51.562719),(30.534136,51.553263),(30.567415,51.547268),(30.584004,51.542256),(30.576407,51.531119),(30.571446,51.526417),(30.562868,51.521766),(30.575064,51.516056),(30.585399,51.509389),(30.58974,51.501974),(30.584004,51.493835),(30.588913,51.48182),(30.597284,51.474275),(30.60762,51.469831),(30.618162,51.467144),(30.618162,51.459651),(30.604829,51.461227),(30.593564,51.458901),(30.586019,51.451848),(30.584004,51.439213),(30.587879,51.427353),(30.595579,51.42495),(30.606018,51.424562),(30.618162,51.418723),(30.622606,51.412108),(30.632424,51.383971),(30.637489,51.378596),(30.644517,51.372679),(30.64555,51.36746),(30.632424,51.364101),(30.638315,51.33586),(30.610462,51.317541),(30.579921,51.303665),(30.577234,51.289015),(30.559354,51.269326),(30.556615,51.267311),(30.555323,51.243462),(30.554657,51.242507),(30.550672,51.236796),(30.539613,51.235168),(30.532017,51.244625),(30.520803,51.252402),(30.508091,51.257673),(30.494655,51.259921),(30.480702,51.258862),(30.465044,51.261937),(30.442772,51.283047),(30.428096,51.291754),(30.413523,51.294054),(30.383757,51.293924),(30.368461,51.297671),(30.355129,51.305267),(30.324536,51.329969),(30.317095,51.340588),(30.319885,51.341596),(30.328257,51.362473),(30.330014,51.370354),(30.330117,51.380715),(30.329032,51.388622),(30.32619,51.395262),(30.320195,51.402083),(30.30738,51.409602),(30.256892,51.425053),(30.242784,51.434407),(30.20537,51.466498),(30.17731,51.479495),(30.14863,51.48443),(30.00869,51.482078),(29.985745,51.477867),(29.925077,51.457868),(29.912572,51.457429),(29.896552,51.464198),(29.886268,51.464793),(29.873608,51.459186),(29.856761,51.439859),(29.846633,51.432908),(29.828649,51.429963),(29.737905,51.439497),(29.725296,51.450452),(29.716511,51.465671),(29.699975,51.483706),(29.682715,51.491199),(29.660804,51.493137),(29.637756,51.490889),(29.618016,51.485644),(29.597345,51.473965),(29.583186,51.460813),(29.5682,51.449626),(29.545049,51.443657),(29.519004,51.441797),(29.505465,51.437456),(29.495543,51.425648),(29.480867,51.401256),(29.466294,51.385056),(29.446399,51.384901),(29.402732,51.39614),(29.379684,51.391541),(29.35326,51.377055),(29.340565,51.370096),(29.319946,51.365574),(29.297002,51.373713),(29.2846,51.391748),(29.276125,51.413581),(29.264859,51.432908),(29.244395,51.447894),(29.227756,51.45562),(29.221038,51.466963),(29.230443,51.492879),(29.226515,51.519053),(29.214836,51.53546),(29.181453,51.567009),(29.160421,51.603311),(29.147967,51.615636),(29.123886,51.625041),(29.083682,51.631217),(29.063218,51.630596),(29.043581,51.626178),(29.023634,51.614034),(28.999966,51.582408),(28.980846,51.569463),(28.954594,51.563262),(28.892892,51.562874),(28.863643,51.558921),(28.831914,51.548276),(28.799875,51.532618),(28.771866,51.511276),(28.752229,51.483706),(28.749439,51.46686),(28.751609,51.428748),(28.746751,51.413995),(28.728768,51.401256),(28.718226,51.411618),(28.711095,51.430247),(28.703963,51.442623),(28.691148,51.443424),(28.677918,51.438076),(28.663552,51.433993),(28.647016,51.439006),(28.637301,51.449626),(28.630893,51.46363),(28.615183,51.51957),(28.612703,51.53993),(28.603918,51.553547),(28.578079,51.56011),(28.488369,
51.572021),(28.461239,51.571659),(28.435349,51.56613),(28.385533,51.545046),(28.359643,51.529362),(28.346879,51.525151),(28.333857,51.52838),(28.328172,51.536364),(28.317424,51.56396),(28.310912,51.57445),(28.29851,51.582977),(28.269726,51.594268),(28.257789,51.601761),(28.249366,51.613078),(28.248384,51.622096),(28.248901,51.630596),(28.245128,51.640622),(28.2304,51.651138),(28.21004,51.651965),(28.187612,51.645195),(28.166218,51.633103),(28.146685,51.614396),(28.114232,51.575613),(28.090151,51.562048),(28.070824,51.557629),(27.973155,51.55781),(27.954758,51.560782),(27.936051,51.567009),(27.87559,51.60804),(27.854196,51.615326),(27.831252,51.612923),(27.812545,51.602019),(27.799522,51.585199),(27.793218,51.565019),(27.793735,51.552307),(27.796267,51.541532),(27.797145,51.530603),(27.792546,51.517296),(27.787327,51.510914),(27.74056,51.471407),(27.730121,51.465154),(27.714256,51.463707),(27.700045,51.467531),(27.683767,51.475515),(27.670331,51.484817),(27.666182,51.490194),(27.66413,51.492853),(27.697306,51.543186),(27.705161,51.568352),(27.692655,51.589229),(27.676842,51.59481),(27.620619,51.595896),(27.512426,51.623099),(27.512305,51.623129),(27.477268,51.623672),(27.458768,51.617496),(27.430966,51.598428),(27.409004,51.59171),(27.388385,51.590728),(27.32906,51.596955),(27.289269,51.588971),(27.267462,51.587498),(27.254026,51.595379),(27.25971,51.612303),(27.27449,51.6338),(27.277487,51.651138),(27.247928,51.655659),(27.22364,51.653696),(27.20359,51.655143),(27.189017,51.663773),(27.181059,51.68341),(27.181472,51.710126),(27.184159,51.731443),(27.177855,51.747075),(27.151086,51.756764),(27.109952,51.762423),(27.021585,51.764542),(26.920816,51.742528),(26.854774,51.749349),(26.665741,51.801387),(26.4456,51.805599),(26.431544,51.810224),(26.419245,51.820921),(26.416971,51.830636),(26.419968,51.839731),(26.419762,51.846862),(26.407773,51.850609),(26.175332,51.856707),(26.14536,51.864846),(26.080764,51.900657),(26.050585,51.904817),(25.981132,51.903474),(25.767915,51.928511),(25.683476,51.918021),(25.547396,51.919442),(25.351971,51.921483),(25.1833,51.94975),(25.138031,51.948949),(25.092659,51.939751),(25.002742,51.910476),(24.721829,51.882338),(24.701055,51.882907),(24.639767,51.892131),(24.39079,51.880013),(24.369602,51.875103),(24.347898,51.861151),(24.311518,51.827561),(24.296119,51.808131),(24.272347,51.742889),(24.244132,51.718214),(24.130754,51.669793),(23.981306,51.586),(23.941308,51.581917),(23.912886,51.598583),(23.884878,51.619874),(23.845293,51.629821),(23.820282,51.631268),(23.749692,51.644472),(23.726231,51.644859),(23.628976,51.629046),(23.616677,51.624783),(23.606445,51.618117),(23.594353,51.604965),(23.593319,51.596955),(23.599107,51.588971),(23.626289,51.540886),(23.628666,51.531223),(23.624118,51.515901),(23.615437,51.51311),(23.606238,51.517399),(23.602311,51.530783),(23.588668,51.535899),(23.572752,51.539672),(23.56035,51.555304),(23.567171,51.55489),(23.573992,51.555304),(23.552081,51.578817),(23.5434,51.592743),(23.539886,51.607109),(23.545157,51.63411),(23.543813,51.6438),(23.532444,51.6515),(23.532444,51.658967),(23.541436,51.659742),(23.547534,51.662067),(23.56035,51.671963),(23.555389,51.676304),(23.549291,51.683203),(23.546087,51.689637),(23.550014,51.692427),(23.559213,51.695166),(23.556939,51.701393),(23.550221,51.708421),(23.546604,51.713589),(23.552288,51.736636),(23.556319,51.747359),(23.56035,51.754542),(23.577299,51.766686),(23.59828,51.77467),(23.617504,51.786504),(23.628666,51.809759),(23.610269,51.825184),(23.594559,51.843297),(23.605721,51.851591),(23.6
09856,51.873269),(23.621845,51.877429),(23.621328,51.882028),(23.620604,51.883217),(23.618847,51.883268),(23.61492,51.88425),(23.621431,51.895567),(23.622155,51.927943),(23.628666,51.946339),(23.647579,51.97409),(23.660705,51.98688),(23.676415,51.994088),(23.664943,52.011168),(23.650783,52.048943),(23.642515,52.061397),(23.641482,52.072921),(23.637451,52.084471),(23.625255,52.089716),(23.61585,52.0923),(23.609856,52.09863),(23.604791,52.106769),(23.598177,52.11452),(23.579366,52.121548),(23.531928,52.120644),(23.512497,52.124442),(23.484695,52.158626),(23.488726,52.16214),(23.491517,52.174103),(23.487693,52.181622),(23.470949,52.171623),(23.464335,52.176351),(23.454413,52.181467),(23.450485,52.185911),(23.43674,52.175679),(23.427231,52.17679),(23.418239,52.182501),(23.405734,52.185911),(23.391368,52.182501),(23.388474,52.184154),(23.395812,52.193379),(23.395812,52.19958),(23.374831,52.200949),(23.312716,52.215289),(23.29959,52.223428),(23.284191,52.219734),(23.212257,52.231671),(23.189726,52.240533),(23.197684,52.25831),(23.19448,52.271462),(23.183525,52.281306),(23.176099,52.285136),(23.168746,52.288928),(23.165645,52.289393),(23.21205,52.347504),(23.230447,52.365074),(23.270548,52.395123),(23.392298,52.509638),(23.480458,52.554416),(23.569031,52.585887),(23.736153,52.614903),(23.868961,52.670042),(23.908752,52.699859),(23.922498,52.742596),(23.920426,52.772625),(23.912059,52.893853),(23.909476,52.901501),(23.905445,52.906539),(23.902034,52.912534),(23.901104,52.923257),(23.903894,52.931137),(23.914953,52.944625),(23.91795,52.9508),(23.917537,52.95881),(23.909062,52.986664),(23.908959,52.993046),(23.911336,53.00506),(23.909682,53.012734),(23.89821,53.02754),(23.867928,53.051027),(23.859349,53.067976),(23.860693,53.087484),(23.870305,53.101256),(23.882397,53.113297),(23.891389,53.127714),(23.893663,53.151951),(23.883121,53.163991),(23.865964,53.172001),(23.848084,53.183809),(23.836198,53.199286),(23.828447,53.213833),(23.818628,53.227967),(23.800645,53.242462),(23.782972,53.270936),(23.742974,53.365478),(23.722924,53.397104),(23.675795,53.455705),(23.590942,53.611251),(23.567068,53.681014),(23.564794,53.742405),(23.56469,53.742405),(23.56469,53.742457),(23.540402,53.763593),(23.52986,53.78416),(23.520662,53.837335),(23.51043,53.862502),(23.497201,53.885549),(23.486866,53.909992),(23.485625,53.939293),(23.520145,53.93397),(23.609959,53.901827),(23.627116,53.89821),(23.642618,53.898985),(23.691918,53.911491),(23.754446,53.916762),(23.77181,53.924307),(23.787209,53.927924),(23.794134,53.927097),(23.812117,53.920844),(23.820592,53.919294),(23.828344,53.920069),(23.904721,53.946062),(23.919811,53.947768),(23.935417,53.943944),(23.964873,53.93273),(23.981306,53.930404),(24.045385,53.930869),(24.074943,53.935727),(24.136748,53.955416),(24.169615,53.95893),(24.201137,53.95247),(24.229146,53.932368),(24.246406,53.903636),(24.257051,53.894024),(24.276378,53.891802),(24.310175,53.892681),(24.341904,53.887048),(24.377871,53.886841),(24.414044,53.897693),(24.528663,53.958361),(24.579099,53.975621),(24.642971,53.983011),(24.666845,53.993863),(24.670566,53.994276),(24.674287,53.993708),(24.677904,53.992313),(24.690823,53.972831),(24.705603,53.964201),(24.723896,53.963426),(24.788698,53.969678),(24.806372,53.975311),(24.819084,53.992313),(24.819187,53.992416),(24.819291,53.992519),(24.819394,53.992571),(24.821358,54.019908),(24.81247,54.038925),(24.799034,54.05603),(24.787148,54.077631),(24.782497,54.092875),(24.784254,54.096234),(24.790559,54.096596),(24.799447,54.10321),(24.818154,54.110135),(24.818671,5
4.113959),(24.81743,54.120625),(24.817534,54.128222),(24.821565,54.134526),(24.850984,54.144951),(24.855981,54.146722),(24.902283,54.149771),(24.948275,54.145895),(25.027133,54.12786),(25.071989,54.132201),(25.115603,54.148789),(25.156531,54.175712),(25.173068,54.19659),(25.193532,54.243667),(25.206244,54.256793),(25.224124,54.25855),(25.287583,54.245217),(25.369748,54.247904),(25.394036,54.257154),(25.434137,54.291829),(25.459355,54.298909),(25.472068,54.297152),(25.478682,54.293328),(25.483488,54.287695),(25.553975,54.231265),(25.5455,54.227492),(25.522142,54.228629),(25.501678,54.221756),(25.499301,54.211834),(25.523382,54.204238),(25.528963,54.195659),(25.521729,54.188218),(25.493203,54.18243),(25.485762,54.175402),(25.493307,54.157522),(25.516044,54.144758),(25.543019,54.136955),(25.631593,54.128325),(25.653503,54.128739),(25.663942,54.132356),(25.679755,54.145327),(25.690607,54.148221),(25.728124,54.145223),(25.74001,54.146257),(25.763057,54.156437),(25.771016,54.172147),(25.771946,54.21788),(25.775046,54.22248),(25.786829,54.231781),(25.789206,54.236225),(25.786622,54.242323),(25.781764,54.244442),(25.776287,54.245631),(25.772256,54.248938),(25.765641,54.259997),(25.757786,54.268988),(25.748898,54.276326),(25.739286,54.282579),(25.734739,54.283199),(25.723267,54.280977),(25.719236,54.281546),(25.715412,54.285215),(25.704766,54.302991),(25.701769,54.312758),(25.695981,54.320975),(25.682339,54.325626),(25.667353,54.3233),(25.607408,54.304852),(25.564,54.303146),(25.542296,54.308056),(25.528653,54.320975),(25.529997,54.346141),(25.548807,54.367742),(25.594696,54.400091),(25.612576,54.421796),(25.616296,54.441071),(25.615573,54.461896),(25.619914,54.488096),(25.630869,54.508354),(25.646165,54.520394),(25.70735,54.54153),(25.726471,54.55295),(25.740423,54.568505),(25.745591,54.5884),(25.739906,54.607727),(25.715102,54.643229),(25.708384,54.663021),(25.709521,54.67563),(25.713862,54.685552),(25.719133,54.69475),(25.72306,54.705344),(25.72461,54.715679),(25.7212,54.766736),(25.7243,54.78012),(25.734946,54.78906),(25.756753,54.800377),(25.766571,54.803271),(25.773393,54.803374),(25.778457,54.805286),(25.782901,54.81371),(25.782281,54.821978),(25.773703,54.841925),(25.772256,54.850865),(25.782695,54.869675),(25.802538,54.881612),(25.825793,54.891999),(25.84667,54.90621),(25.853078,54.916442),(25.854835,54.925486),(25.858452,54.932979),(25.870234,54.938818),(25.907855,54.948068),(25.926148,54.947758),(25.962529,54.942901),(25.981132,54.942694),(26.102572,54.957008),(26.138642,54.968946),(26.153628,54.978506),(26.170325,54.99326),(26.187838,55.008736),(26.224631,55.054884),(26.229386,55.06341),(26.231246,55.075451),(26.230316,55.100255),(26.233003,55.111624),(26.264422,55.140098),(26.309484,55.144645),(26.420278,55.128109),(26.428753,55.128212),(26.438572,55.130021),(26.444773,55.133948),(26.450664,55.13984),(26.459139,55.144749),(26.473402,55.145679),(26.578925,55.118549),(26.600939,55.120823),(26.616339,55.135395),(26.627501,55.164438),(26.633909,55.192188),(26.641143,55.202833),(26.656956,55.215339),(26.685689,55.231617),(26.701088,55.236836),(26.739901,55.242776),(26.76558,55.246706),(26.789661,55.257197),(26.80072,55.27332),(26.791574,55.290166),(26.768629,55.300243),(26.602696,55.316883),(26.542338,55.307581),(26.525388,55.308098),(26.465754,55.320862),(26.450044,55.327063),(26.445496,55.337502),(26.455212,55.356105),(26.479086,55.381375),(26.486011,55.390832),(26.49924,55.428091),(26.507818,55.439149),(26.543268,55.459613),(26.546472,55.471241),(26.527662,55.492169),(26.532106,55.516251)
,(26.55164,55.534492),(26.576134,55.550564),(26.595565,55.56803),(26.605177,55.590096),(26.607657,55.616451),(26.603833,55.643271),(26.594531,55.666991),(26.615615,55.687971),(26.64011,55.695568),(26.666878,55.693966),(26.720105,55.681874),(26.743049,55.682856),(26.822838,55.70611),(26.842785,55.719339),(26.900146,55.778715),(26.957817,55.818584),(26.979426,55.826291),(26.981071,55.826878),(27.110779,55.836283),(27.151448,55.832459),(27.173204,55.825741),(27.235526,55.795846),(27.263018,55.787216),(27.282448,55.791867),(27.329163,55.817576),(27.349627,55.831219),(27.374587,55.814837),(27.405851,55.804347),(27.438821,55.79874),(27.564601,55.792229),(27.592713,55.794244),(27.601498,55.809618),(27.610128,55.83096),(27.617156,55.878554),(27.64501,55.922841),(27.744435,55.959738),(27.776991,55.992397),(27.781229,56.016375),(27.812545,56.034514),(27.880654,56.063866),(27.892747,56.077095),(27.901532,56.089342),(27.911453,56.100246),(27.92706,56.109367),(27.939669,56.113062),(27.98101,56.118023),(28.023798,56.12965),(28.0511,56.140666),(28.06824,56.147582),(28.110976,56.156806),(28.148907,56.142414)] +Bolivia [(-65.29247,-11.504723),(-65.257562,-11.495318),(-65.233042,-11.50834),(-65.222655,-11.517435),(-65.215989,-11.530148),(-65.215989,-11.539966),(-65.223611,-11.55795),(-65.223404,-11.571076),(-65.219244,-11.584511),(-65.214542,-11.587509),(-65.207359,-11.587612),(-65.195447,-11.59216),(-65.173976,-11.606526),(-65.167439,-11.615517),(-65.190926,-11.623165),(-65.193354,-11.632364),(-65.189272,-11.657065),(-65.191339,-11.666884),(-65.200253,-11.681973),(-65.20232,-11.691482),(-65.201855,-11.727965),(-65.196197,-11.741814),(-65.182451,-11.756697),(-65.164467,-11.769823),(-65.151497,-11.77313),(-65.143538,-11.763829),(-65.138009,-11.715253),(-65.133978,-11.702644),(-65.127855,-11.694582),(-65.113411,-11.690551),(-65.110465,-11.699957),(-65.113566,-11.722591),(-65.0992,-11.736233),(-65.081062,-11.743468),(-65.065559,-11.75308),(-65.056309,-11.785636),(-65.043312,-11.80765),(-65.038454,-11.818709),(-65.037447,-11.828424),(-65.039204,-11.848681),(-65.038454,-11.8585),(-65.033674,-11.879791),(-65.028093,-11.888886),(-65.017939,-11.89364),(-65.014528,-11.893847),(-65.00458,-11.898394),(-64.998121,-11.908936),(-64.996493,-11.921235),(-65.00086,-11.931054),(-65.014632,-11.950277),(-65.016337,-11.968467),(-65.009438,-11.98428),(-64.997449,-11.996269),(-64.983445,-12.003917),(-64.967606,-12.007948),(-64.925154,-12.009912),(-64.904483,-12.014046),(-64.883503,-12.021384),(-64.862083,-12.024795),(-64.839836,-12.016733),(-64.827615,-12.022314),(-64.809295,-12.025828),(-64.7925,-12.032546),(-64.785188,-12.047429),(-64.780486,-12.067686),(-64.768574,-12.086496),(-64.736793,-12.119776),(-64.744183,-12.133625),(-64.739532,-12.144581),(-64.728396,-12.149852),(-64.716278,-12.146544),(-64.713642,-12.138483),(-64.710283,-12.111921),(-64.706382,-12.106133),(-64.689923,-12.104376),(-64.68646,-12.109957),(-64.681577,-12.126597),(-64.681577,-12.134865),(-64.688579,-12.147164),(-64.688992,-12.153882),(-64.685944,-12.159153),(-64.677184,-12.167628),(-64.67535,-12.171039),(-64.66486,-12.180961),(-64.640572,-12.19512),(-64.593417,-12.215997),(-64.58104,-12.217444),(-64.554892,-12.217754),(-64.548433,-12.219098),(-64.532103,-12.232224),(-64.511484,-12.242662),(-64.510347,-12.239562),(-64.510244,-12.237288),(-64.508848,-12.236048),(-64.49425,-12.238012),(-64.489392,-12.239459),(-64.473037,-12.253204),(-64.469058,-12.261059),(-64.468153,-12.272635),(-64.469316,-12.294856),(-64.474432,-12.316456),(-64.492441,-12.356557),(-64.48978
,-12.373611),(-64.48363,-12.377641),(-64.461642,-12.385289),(-64.452521,-12.390354),(-64.444072,-12.399759),(-64.432367,-12.420946),(-64.424874,-12.431385),(-64.411335,-12.445131),(-64.395729,-12.457326),(-64.376763,-12.465905),(-64.353199,-12.469212),(-64.312659,-12.461874),(-64.297699,-12.465698),(-64.287544,-12.497117),(-64.278242,-12.499288),(-64.268708,-12.497221),(-64.26385,-12.495257),(-64.256822,-12.490813),(-64.245712,-12.479444),(-64.236565,-12.475413),(-64.215223,-12.473759),(-64.19755,-12.47872),(-64.183106,-12.487919),(-64.155252,-12.516961),(-64.144788,-12.520372),(-64.134763,-12.510863),(-64.119828,-12.489676),(-64.104455,-12.507142),(-64.044458,-12.509106),(-64.022832,-12.537838),(-64.004176,-12.535461),(-63.972447,-12.523886),(-63.959321,-12.52802),(-63.950019,-12.536805),(-63.939064,-12.544143),(-63.921236,-12.544349),(-63.909143,-12.534324),(-63.902477,-12.525436),(-63.896896,-12.515721),(-63.863203,-12.474793),(-63.862893,-12.469212),(-63.845581,-12.466938),(-63.814989,-12.457223),(-63.801398,-12.454949),(-63.657738,-12.475413),(-63.654844,-12.478307),(-63.600428,-12.506109),(-63.558726,-12.539905),(-63.542189,-12.547967),(-63.520227,-12.551171),(-63.507514,-12.551584),(-63.496145,-12.553341),(-63.485758,-12.557475),(-63.476198,-12.564813),(-63.468757,-12.575769),(-63.467,-12.585794),(-63.46638,-12.595302),(-63.462504,-12.605121),(-63.454236,-12.612046),(-63.445347,-12.614836),(-63.438164,-12.619177),(-63.435219,-12.630339),(-63.431446,-12.636954),(-63.393309,-12.664549),(-63.371088,-12.669923),(-63.359461,-12.674677),(-63.330419,-12.697002),(-63.317913,-12.701963),(-63.247323,-12.701342),(-63.235955,-12.698552),(-63.232027,-12.691111),(-63.222674,-12.682532),(-63.201848,-12.667856),(-63.197301,-12.666823),(-63.18557,-12.668476),(-63.180712,-12.667856),(-63.176113,-12.663619),(-63.171669,-12.652353),(-63.16707,-12.647392),(-63.139475,-12.63561),(-63.136684,-12.63375),(-63.125057,-12.63592),(-63.105213,-12.645222),(-63.075034,-12.652663),(-63.064751,-12.666202),(-63.057826,-12.701963),(-63.053588,-12.70837),(-63.047077,-12.713538),(-63.04346,-12.719119),(-63.051108,-12.730798),(-63.052193,-12.736482),(-63.051108,-12.742167),(-63.047904,-12.746714),(-63.025683,-12.765731),(-63.014728,-12.77772),(-63.010025,-12.787952),(-63.00708,-12.809966),(-62.999173,-12.829087),(-62.988011,-12.843866),(-62.975299,-12.852858),(-62.964498,-12.856578),(-62.955145,-12.857509),(-62.946515,-12.855235),(-62.93804,-12.849447),(-62.928893,-12.846036),(-62.923777,-12.852858),(-62.92047,-12.862263),(-62.867915,-12.929029),(-62.865073,-12.93554),(-62.860112,-12.940294),(-62.849001,-12.942155),(-62.838925,-12.942775),(-62.831431,-12.944945),(-62.826677,-12.950009),(-62.825075,-12.958691),(-62.820166,-12.974918),(-62.807402,-12.98887),(-62.789832,-13.000652),(-62.770453,-13.010471),(-62.768645,-12.990627),(-62.757896,-12.985149),(-62.741308,-12.985356),(-62.722032,-12.982566),(-62.709165,-12.974607),(-62.698985,-12.966649),(-62.686634,-12.964996),(-62.667462,-12.976364),(-62.657902,-12.984633),(-62.651339,-12.992487),(-62.647515,-13.001583),(-62.64519,-13.025354),(-62.641676,-13.030005),(-62.554601,-13.066798),(-62.492692,-13.065041),(-62.472125,-13.068452),(-62.458483,-13.07796),(-62.430939,-13.109793),(-62.428717,-13.114547),(-62.427477,-13.124572),(-62.424118,-13.126536),(-62.410269,-13.125709),(-62.406703,-13.126536),(-62.389701,-13.137078),(-62.381846,-13.139869),(-62.335028,-13.14514),(-62.321592,-13.143693),(-62.310533,-13.133978),(-62.290328,-13.142039),(-62.273636,-13.138422),(-62.259167,-13
.130774),(-62.22165,-13.121265),(-62.211521,-13.120335),(-62.200927,-13.122712),(-62.173952,-13.140799),(-62.172454,-13.117855),(-62.162377,-13.120748),(-62.139846,-13.147),(-62.129149,-13.150927),(-62.120467,-13.148757),(-62.114679,-13.150204),(-62.112509,-13.164777),(-62.112457,-13.231801),(-62.109408,-13.248958),(-62.099228,-13.2626),(-62.077731,-13.277896),(-62.033599,-13.325542),(-62.019698,-13.33257),(-62.018665,-13.336808),(-62.003265,-13.360475),(-61.997787,-13.364093),(-61.97536,-13.374118),(-61.958358,-13.385487),(-61.909834,-13.431892),(-61.899499,-13.43923),(-61.879913,-13.449152),(-61.872317,-13.456077),(-61.868596,-13.463932),(-61.847977,-13.530801),(-61.83635,-13.540619),(-61.813974,-13.544237),(-61.794285,-13.540309),(-61.774235,-13.533385),(-61.75434,-13.530284),(-61.735116,-13.538036),(-61.715737,-13.526563),(-61.692896,-13.517985),(-61.668092,-13.512507),(-61.597114,-13.5061),(-61.59306,-13.506794),(-61.588071,-13.50765),(-61.560863,-13.532661),(-61.550812,-13.538036),(-61.503425,-13.548164),(-61.45875,-13.54372),(-61.415471,-13.528114),(-61.347724,-13.493697),(-61.332376,-13.494834),(-61.317131,-13.500932),(-61.281216,-13.507236),(-61.260029,-13.520052),(-61.248556,-13.524393),(-61.237885,-13.524393),(-61.215044,-13.517468),(-61.149105,-13.519845),(-61.139312,-13.514058),(-61.133576,-13.494007),(-61.118719,-13.484499),(-61.076009,-13.478298),(-61.072185,-13.47437),(-61.060118,-13.467136),(-61.047225,-13.464552),(-61.041256,-13.474577),(-61.041256,-13.515195),(-61.022007,-13.535245),(-60.977875,-13.542997),(-60.929299,-13.5462),(-60.896743,-13.552918),(-60.878243,-13.570385),(-60.870673,-13.575449),(-60.801969,-13.604181),(-60.725572,-13.662845),(-60.652701,-13.7188),(-60.618078,-13.736886),(-60.595082,-13.741641),(-60.588623,-13.744535),(-60.584876,-13.749082),(-60.579579,-13.761071),(-60.575135,-13.765619),(-60.566066,-13.769959),(-60.490618,-13.787736),(-60.472635,-13.797865),(-60.465271,-13.816572),(-60.463126,-13.843443),(-60.457313,-13.870935),(-60.448812,-13.896773),(-60.438554,-13.918374),(-60.430441,-13.926746),(-60.408634,-13.944522),(-60.406722,-13.950414),(-60.411605,-13.960542),(-60.40543,-13.966743),(-60.39574,-13.970981),(-60.389694,-13.974908),(-60.387188,-13.98328),(-60.387963,-13.989378),(-60.396515,-14.008394),(-60.409409,-14.055317),(-60.419434,-14.076917),(-60.437236,-14.092007),(-60.461395,-14.098622),(-60.468268,-14.103686),(-60.473513,-14.117225),(-60.477647,-14.139963),(-60.478939,-14.1627),(-60.475684,-14.176033),(-60.462403,-14.19846),(-60.462015,-14.223885),(-60.465633,-14.251067),(-60.464341,-14.278352),(-60.454316,-14.296646),(-60.399926,-14.340777),(-60.391348,-14.357107),(-60.376698,-14.419946),(-60.343806,-14.490949),(-60.338431,-14.5326),(-60.369204,-14.542832),(-60.35272,-14.575388),(-60.30404,-14.608151),(-60.291716,-14.630062),(-60.288822,-14.69073),(-60.284869,-14.772689),(-60.280889,-14.854648),(-60.276859,-14.936607),(-60.272905,-15.018565),(-60.269805,-15.083368),(-60.274972,-15.09515),(-60.389307,-15.096493),(-60.582241,-15.098871),(-60.529479,-15.143726),(-60.460181,-15.224238),(-60.352125,-15.349605),(-60.260451,-15.456058),(-60.246395,-15.478279),(-60.237972,-15.50267),(-60.224821,-15.651395),(-60.206992,-15.853036),(-60.193763,-16.001451),(-60.186968,-16.108731),(-60.17981,-16.222006),(-60.16069,-16.264794),(-60.129839,-16.273062),(-60.060412,-16.27544),(-59.87001,-16.282157),(-59.679557,-16.288772),(-59.489103,-16.295387),(-59.298675,-16.302105),(-59.108248,-16.308719),(-58.917769,-16.315437),(-58.727444,-16.322155),(-58.5369
65,-16.32877),(-58.464721,-16.33125),(-58.442242,-16.32846),(-58.421623,-16.318434),(-58.414233,-16.308926),(-58.400332,-16.284225),(-58.392943,-16.279367),(-58.349741,-16.2804),(-58.339561,-16.289289),(-58.334548,-16.386647),(-58.338837,-16.400497),(-58.362299,-16.419307),(-58.36359,-16.43667),(-58.342661,-16.473154),(-58.356097,-16.509534),(-58.455885,-16.619088),(-58.472679,-16.649577),(-58.480276,-16.683683),(-58.476658,-16.726988),(-58.456246,-16.802436),(-58.452939,-16.841296),(-58.455109,-16.848324),(-58.464049,-16.866101),(-58.466633,-16.875506),(-58.466272,-16.887289),(-58.45635,-16.937001),(-58.439193,-16.967284),(-58.432475,-16.98537),(-58.426791,-17.055237),(-58.421623,-17.070636),(-58.410978,-17.088206),(-58.406275,-17.110117),(-58.404828,-17.199621),(-58.399144,-17.237448),(-58.38116,-17.267214),(-58.343695,-17.286954),(-58.312172,-17.290881),(-58.302147,-17.293775),(-58.292277,-17.299666),(-58.274397,-17.314032),(-58.264527,-17.320027),(-58.256155,-17.322507),(-58.239929,-17.325195),(-58.231557,-17.329742),(-58.22024,-17.344315),(-58.204065,-17.377388),(-58.188407,-17.38855),(-58.174558,-17.392064),(-58.163189,-17.393407),(-58.152234,-17.396405),(-58.139366,-17.40457),(-58.11203,-17.433508),(-58.101126,-17.44126),(-58.065727,-17.455936),(-58.053687,-17.462344),(-58.010175,-17.49676),(-57.981495,-17.508749),(-57.943564,-17.517844),(-57.905686,-17.520325),(-57.877005,-17.512263),(-57.854113,-17.508543),(-57.836749,-17.511023),(-57.814425,-17.519705),(-57.800731,-17.533451),(-57.790757,-17.555775),(-57.788277,-17.58058),(-57.789569,-17.604144),(-57.787553,-17.620474),(-57.787967,-17.658921),(-57.7859,-17.677525),(-57.777941,-17.695508),(-57.767554,-17.708737),(-57.754894,-17.720623),(-57.740373,-17.730234),(-57.724456,-17.736642),(-57.732983,-17.768475),(-57.729314,-17.779327),(-57.711331,-17.800824),(-57.696809,-17.825112),(-57.698773,-17.843096),(-57.730089,-17.846093),(-57.636916,-18.001122),(-57.585498,-18.122769),(-57.551082,-18.183643),(-57.511188,-18.205554),(-57.490724,-18.205554),(-57.474549,-18.208035),(-57.465661,-18.21775),(-57.466798,-18.239557),(-57.535734,-18.240488),(-57.553872,-18.244518),(-57.56674,-18.256094),(-57.583018,-18.305703),(-57.630043,-18.448434),(-57.677017,-18.59106),(-57.723991,-18.733687),(-57.771068,-18.876314),(-57.782334,-18.910421),(-57.742233,-18.913625),(-57.731794,-18.921996),(-57.727557,-18.941943),(-57.721769,-19.001165),(-57.715826,-19.044573),(-57.771895,-19.048707),(-57.789,-19.059249),(-57.812048,-19.105034),(-57.843105,-19.166839),(-57.911628,-19.302852),(-57.980048,-19.438864),(-58.011209,-19.500669),(-58.011209,-19.500772),(-58.011209,-19.500876),(-58.011261,-19.500876),(-58.02356,-19.52568),(-58.050483,-19.580147),(-58.077458,-19.634718),(-58.089757,-19.659419),(-58.124639,-19.729906),(-58.116939,-19.758018),(-58.04113,-19.82344),(-57.960463,-19.8931),(-57.859745,-19.980123),(-57.86977,-19.992939),(-57.881139,-20.009165),(-57.895609,-20.024151),(-57.917933,-20.034177),(-57.943203,-20.028079),(-57.959171,-20.026218),(-57.966354,-20.031076),(-57.968938,-20.039758),(-57.975811,-20.047509),(-57.985681,-20.05309),(-57.997101,-20.055261),(-58.022371,-20.064976),(-58.053532,-20.10766),(-58.079577,-20.117376),(-58.086243,-20.12089),(-58.098284,-20.13815),(-58.103555,-20.144041),(-58.116319,-20.150449),(-58.158797,-20.165125),(-58.15952,-20.13908),(-58.144741,-20.086163),(-58.143139,-20.065286),(-58.144482,-20.023118),(-58.141847,-20.000897),(-58.145154,-19.969581),(-58.162207,-19.912324),(-58.164223,-19.880284),(-58.161174,-19.847211),(-
58.164429,-19.832949),(-58.175282,-19.821373),(-58.214452,-19.798325),(-58.30778,-19.743652),(-58.401004,-19.688875),(-58.49428,-19.634201),(-58.587504,-19.579424),(-58.680832,-19.52475),(-58.774108,-19.470077),(-58.867436,-19.4153),(-58.96066,-19.360523),(-59.011096,-19.330964),(-59.039673,-19.311637),(-59.069646,-19.291483),(-59.089541,-19.286729),(-59.170725,-19.287762),(-59.359085,-19.290139),(-59.547498,-19.292413),(-59.735858,-19.29479),(-59.924322,-19.297064),(-60.006384,-19.298097),(-60.156995,-19.329),(-60.495476,-19.398453),(-60.833931,-19.467906),(-61.17236,-19.537463),(-61.51084,-19.606916),(-61.532286,-19.610016),(-61.579544,-19.616838),(-61.626725,-19.623659),(-61.648222,-19.62676),(-61.737235,-19.639575),(-61.753203,-19.64588),(-61.761213,-19.657765),(-61.780074,-19.706961),(-61.808186,-19.780342),(-61.836298,-19.853826),(-61.86441,-19.927413),(-61.892522,-20.000897),(-61.892574,-20.000897),(-61.905545,-20.026735),(-61.918412,-20.052573),(-61.92474,-20.065128),(-61.931383,-20.078308),(-61.94425,-20.104146),(-61.960942,-20.128021),(-61.977582,-20.151792),(-61.99417,-20.175667),(-62.01081,-20.199541),(-62.05551,-20.260416),(-62.10021,-20.321187),(-62.144962,-20.382062),(-62.189662,-20.442834),(-62.210573,-20.471305),(-62.232398,-20.501021),(-62.26883,-20.553111),(-62.277305,-20.579776),(-62.276168,-20.670727),(-62.275031,-20.75868),(-62.273791,-20.854798),(-62.271886,-21.000424),(-62.271879,-21.000939),(-62.275703,-21.066568),(-62.307898,-21.169301),(-62.340402,-21.272861),(-62.37549,-21.384482),(-62.411974,-21.500857),(-62.446184,-21.606897),(-62.492796,-21.751798),(-62.529228,-21.865073),(-62.572843,-22.000775),(-62.599404,-22.089865),(-62.627516,-22.184536),(-62.650357,-22.234456),(-62.658987,-22.231665),(-62.661829,-22.224534),(-62.663225,-22.215542),(-62.667462,-22.20655),(-62.679296,-22.194768),(-62.722032,-22.166243),(-62.769368,-22.144745),(-62.783476,-22.130896),(-62.790917,-22.113223),(-62.791072,-22.081597),(-62.79779,-22.069401),(-62.795051,-22.061133),(-62.795,-22.051211),(-62.801356,-22.013487),(-62.804353,-22.004082),(-62.818564,-22.000878),(-62.862024,-21.99323),(-63.009922,-22.000671),(-63.010439,-22.000775),(-63.324916,-21.999121),(-63.639392,-21.997468),(-63.67784,-22.003669),(-63.693911,-22.01204),(-63.74042,-22.050591),(-63.751789,-22.045733),(-63.793026,-22.011524),(-63.813129,-22.003049),(-63.906405,-21.997157),(-63.933173,-22.001808),(-63.947591,-22.007596),(-63.95095,-22.0108),(-63.951466,-22.016691),(-63.961027,-22.039119),(-63.964644,-22.058549),(-63.968106,-22.066507),(-63.975806,-22.073845),(-63.983454,-22.077359),(-63.990379,-22.079323),(-63.99565,-22.08294),(-64.004383,-22.09927),(-64.019783,-22.155391),(-64.051021,-22.229185),(-64.058798,-22.240347),(-64.078875,-22.250579),(-64.086239,-22.257917),(-64.160549,-22.438474),(-64.184708,-22.471237),(-64.236333,-22.516712),(-64.250828,-22.54069),(-64.293978,-22.690862),(-64.294598,-22.762899),(-64.303486,-22.782329),(-64.326947,-22.859224),(-64.325294,-22.871936),(-64.343897,-22.863668),(-64.347721,-22.816952),(-64.359994,-22.803),(-64.352321,-22.779125),(-64.355731,-22.751943),(-64.368986,-22.729723),(-64.390768,-22.720524),(-64.401723,-22.712359),(-64.428285,-22.659029),(-64.438233,-22.651898),(-64.447224,-22.648694),(-64.45371,-22.642906),(-64.456242,-22.628023),(-64.455208,-22.616655),(-64.450454,-22.597121),(-64.449421,-22.587096),(-64.445545,-22.576244),(-64.437251,-22.567355),(-64.429835,-22.557123),(-64.428285,-22.542344),(-64.483734,-22.491184),(-64.498126,-22.472787),(-64.504043,-22.449946),(
-64.507815,-22.443952),(-64.525747,-22.432997),(-64.531328,-22.425658),(-64.531069,-22.41553),(-64.523292,-22.396203),(-64.524558,-22.385351),(-64.537891,-22.374292),(-64.558871,-22.36065),(-64.572023,-22.343183),(-64.562075,-22.319825),(-64.549156,-22.305356),(-64.542257,-22.291506),(-64.542774,-22.275487),(-64.551843,-22.255023),(-64.560215,-22.243137),(-64.58688,-22.212752),(-64.596207,-22.20655),(-64.616284,-22.20221),(-64.638711,-22.191668),(-64.65778,-22.178438),(-64.679045,-22.175441),(-64.687442,-22.172857),(-64.695245,-22.174201),(-64.713694,-22.181746),(-64.722892,-22.183296),(-64.762321,-22.174408),(-64.832472,-22.137511),(-65.020368,-22.096583),(-65.190478,-22.098474),(-65.457344,-22.101441),(-65.510467,-22.095859),(-65.579894,-22.086454),(-65.588524,-22.087281),(-65.593433,-22.091105),(-65.597878,-22.095653),(-65.605319,-22.099063),(-65.744613,-22.11405),(-65.775309,-22.105058),(-65.804557,-22.085834),(-65.932689,-21.944551),(-65.954497,-21.933079),(-66.046532,-21.917989),(-66.051752,-21.912615),(-66.063586,-21.864039),(-66.094488,-21.83293),(-66.137819,-21.812466),(-66.222465,-21.786938),(-66.240009,-21.792415),(-66.287551,-21.956953),(-66.297163,-22.050281),(-66.301504,-22.064957),(-66.307602,-22.077049),(-66.309824,-22.078186),(-66.314061,-22.075809),(-66.32667,-22.077359),(-66.335249,-22.08232),(-66.343,-22.090485),(-66.349175,-22.1002),(-66.353232,-22.109605),(-66.377468,-22.127072),(-66.510328,-22.162935),(-66.626755,-22.192598),(-66.630579,-22.197352),(-66.633267,-22.204897),(-66.636315,-22.211305),(-66.64138,-22.212545),(-66.657813,-22.207377),(-66.677398,-22.205103),(-66.688457,-22.201693),(-66.699309,-22.200763),(-66.712848,-22.206447),(-66.735896,-22.225051),(-66.748867,-22.244894),(-66.756101,-22.268252),(-66.767935,-22.342563),(-66.77517,-22.365197),(-66.788037,-22.38008),(-66.790518,-22.388348),(-66.784472,-22.417287),(-66.785092,-22.427622),(-66.796461,-22.434857),(-66.90865,-22.467516),(-66.935315,-22.480539),(-66.955572,-22.500693),(-66.965236,-22.514335),(-66.978155,-22.5225),(-66.993555,-22.525807),(-67.015672,-22.523844),(-67.032725,-22.524567),(-67.024354,-22.617585),(-67.026628,-22.639392),(-67.037686,-22.654585),(-67.113237,-22.710086),(-67.142693,-22.742745),(-67.193904,-22.822223),(-67.510267,-22.885785),(-67.547113,-22.892503),(-67.616772,-22.897258),(-67.77144,-22.888679),(-67.803583,-22.878654),(-67.811283,-22.871626),(-67.833038,-22.840103),(-67.837948,-22.840827),(-67.876343,-22.833592),(-67.887092,-22.818709),(-67.891484,-22.792871),(-67.891329,-22.715667),(-67.858721,-22.566838),(-67.859187,-22.547305),(-67.889676,-22.491184),(-67.898202,-22.441471),(-67.906936,-22.420077),(-67.94528,-22.354242),(-67.951894,-22.334088),(-67.952204,-22.314141),(-67.934789,-22.251716),(-67.935978,-22.232285),(-67.944969,-22.208307),(-67.97303,-22.170584),(-67.976595,-22.159835),(-67.973288,-22.15043),(-67.960627,-22.133687),(-67.957165,-22.124592),(-67.960627,-22.102887),(-67.972616,-22.078806),(-67.990186,-22.057619),(-68.010237,-22.044907),(-68.058761,-22.000775),(-68.082739,-21.980311),(-68.092041,-21.968425),(-68.096226,-21.953956),(-68.107595,-21.789625),(-68.128369,-21.712317),(-68.190588,-21.60638),(-68.198442,-21.57186),(-68.19374,-21.328464),(-68.207537,-21.284333),(-68.41631,-20.959805),(-68.432071,-20.945025),(-68.453931,-20.939754),(-68.495995,-20.941615),(-68.510206,-20.940168),(-68.531554,-20.927688),(-68.555113,-20.913916),(-68.57196,-20.872368),(-68.572786,-20.742867),(-68.565087,-20.719509),(-68.550204,-20.699046),(-68.481629,-20.643028),(-68.48095
7,-20.624735),(-68.510206,-20.601894),(-68.536303,-20.591765),(-68.583328,-20.563033),(-68.685544,-20.516834),(-68.707559,-20.500918),(-68.729676,-20.479214),(-68.750708,-20.451825),(-68.76541,-20.421233),(-68.769028,-20.390537),(-68.757685,-20.364286),(-68.735154,-20.347542),(-68.678516,-20.328629),(-68.718049,-20.261966),(-68.729469,-20.228687),(-68.728281,-20.189619),(-68.723268,-20.16099),(-68.726265,-20.150965),(-68.759028,-20.144144),(-68.789466,-20.125851),(-68.792592,-20.106627),(-68.775539,-20.089677),(-68.652885,-20.054227),(-68.622137,-20.051333),(-68.607771,-20.053607),(-68.597178,-20.056707),(-68.588341,-20.055571),(-68.579349,-20.045442),(-68.574543,-20.03521),(-68.56612,-20.000897),(-68.564932,-19.981363),(-68.544623,-19.934751),(-68.541574,-19.920385),(-68.542917,-19.914184),(-68.546948,-19.908189),(-68.551547,-19.894443),(-68.552529,-19.864885),(-68.553511,-19.857857),(-68.561108,-19.844317),(-68.570823,-19.837599),(-68.60157,-19.825404),(-68.618882,-19.813415),(-68.650353,-19.781582),(-68.698412,-19.746339),(-68.70544,-19.733833),(-68.703218,-19.71554),(-68.690299,-19.695386),(-68.648596,-19.660349),(-68.630974,-19.641952),(-68.576736,-19.564607),(-68.532117,-19.500979),(-68.532117,-19.500876),(-68.510206,-19.471627),(-68.496357,-19.458398),(-68.456101,-19.441241),(-68.447626,-19.434627),(-68.455016,-19.415403),(-68.472586,-19.395456),(-68.493205,-19.376956),(-68.600072,-19.303368),(-68.635728,-19.290553),(-68.663324,-19.273603),(-68.698464,-19.215622),(-68.721253,-19.191747),(-68.815407,-19.113303),(-68.847369,-19.092012),(-68.894782,-19.076509),(-68.908502,-19.067931),(-68.91969,-19.053771),(-68.943151,-19.001165),(-68.961703,-18.969435),(-68.973976,-18.956206),(-68.989609,-18.946491),(-68.959843,-18.907837),(-68.951781,-18.889337),(-68.951445,-18.867323),(-68.954649,-18.856987),(-69.01015,-18.745263),(-69.010537,-18.744539),(-69.010667,-18.743713),(-69.010537,-18.742989),(-69.003174,-18.729553),(-69.001107,-18.716221),(-69.003484,-18.702682),(-69.01015,-18.689762),(-69.033844,-18.649455),(-69.042835,-18.600466),(-69.041647,-18.548893),(-69.034903,-18.50104),(-69.034386,-18.478303),(-69.041698,-18.457735),(-69.07782,-18.398824),(-69.081877,-18.382494),(-69.079293,-18.328027),(-69.081024,-18.319139),(-69.095752,-18.276764),(-69.096837,-18.267979),(-69.09694,-18.257541),(-69.100351,-18.234286),(-69.110661,-18.217957),(-69.141201,-18.186847),(-69.155438,-18.140235),(-69.125931,-18.11109),(-69.089215,-18.083081),(-69.081722,-18.039983),(-69.082284,-18.039078),(-69.092057,-18.023343),(-69.104175,-18.023757),(-69.119885,-18.029854),(-69.14084,-18.030785),(-69.157712,-18.02572),(-69.173887,-18.018899),(-69.204169,-18.001122),(-69.227785,-17.995541),(-69.234813,-17.992957),(-69.241893,-17.986756),(-69.262047,-17.964019),(-69.277601,-17.967533),(-69.290029,-17.976628),(-69.302406,-17.976214),(-69.317857,-17.951616),(-69.326875,-17.915133),(-69.32765,-17.841546),(-69.334239,-17.805785),(-69.359405,-17.75938),(-69.394778,-17.721139),(-69.47578,-17.65241),(-69.497123,-17.621404),(-69.506114,-17.585127),(-69.510036,-17.506588),(-69.510094,-17.505442),(-69.508337,-17.434025),(-69.511385,-17.398678),(-69.522599,-17.36912),(-69.537094,-17.351136),(-69.556938,-17.331499),(-69.597272,-17.300493),(-69.613007,-17.295119),(-69.664416,-17.288562),(-69.666492,-17.288298),(-69.649413,-17.262769),(-69.633833,-17.207269),(-69.6228,-17.185565),(-69.618356,-17.184221),(-69.601147,-17.181534),(-69.59505,-17.179674),(-69.540996,-17.132235),(-69.510094,-17.112081),(-69.482808,-17.101746),(-69.454154,-
17.096578),(-69.427437,-17.086863),(-69.406017,-17.062988),(-69.401263,-17.047175),(-69.406017,-17.040871),(-69.413304,-17.03777),(-69.416559,-17.031983),(-69.413588,-17.022267),(-69.407154,-17.015653),(-69.364314,-16.991158),(-69.352455,-16.977722),(-69.325945,-16.922222),(-69.224452,-16.818249),(-69.210654,-16.797061),(-69.191663,-16.743008),(-69.182362,-16.728745),(-69.166368,-16.719133),(-69.130582,-16.715826),(-69.112056,-16.711485),(-69.053816,-16.68234),(-69.037073,-16.670247),(-69.020511,-16.649474),(-69.008341,-16.634177),(-69.035135,-16.598004),(-69.040122,-16.581364),(-69.037642,-16.550255),(-69.038882,-16.49186),(-69.036815,-16.4715),(-69.027875,-16.454137),(-69.001727,-16.422821),(-68.96837,-16.406388),(-68.855844,-16.363186),(-68.833494,-16.328976),(-68.843648,-16.302001),(-68.86667,-16.286912),(-68.91876,-16.266655),(-68.959507,-16.22335),(-68.982038,-16.210017),(-69.01015,-16.219112),(-69.042732,-16.209294),(-69.053558,-16.208364),(-69.064514,-16.212084),(-69.083892,-16.226554),(-69.095183,-16.231205),(-69.120815,-16.231205),(-69.144689,-16.223453),(-69.166264,-16.210431),(-69.184868,-16.195031),(-69.219646,-16.15276),(-69.242125,-16.098189),(-69.284578,-15.995043),(-69.327133,-15.892),(-69.369611,-15.788958),(-69.412218,-15.685811),(-69.424337,-15.656253),(-69.430021,-15.62628),(-69.421184,-15.596411),(-69.356408,-15.50143),(-69.351111,-15.481896),(-69.348527,-15.461846),(-69.343411,-15.442209),(-69.330363,-15.424225),(-69.32026,-15.418851),(-69.292923,-15.409859),(-69.285921,-15.405415),(-69.285404,-15.396733),(-69.296954,-15.366968),(-69.291166,-15.350845),(-69.275198,-15.330278),(-69.255897,-15.313328),(-69.222592,-15.302166),(-69.20913,-15.263615),(-69.189726,-15.262375),(-69.166135,-15.263925),(-69.150426,-15.251626),(-69.1481,-15.233333),(-69.171768,-15.210802),(-69.196495,-15.176592),(-69.21267,-15.161296),(-69.272201,-15.118198),(-69.288583,-15.101971),(-69.346098,-15.016498),(-69.362816,-15.001305),(-69.384313,-14.981772),(-69.390669,-14.964408),(-69.386225,-14.945805),(-69.369844,-14.908288),(-69.367751,-14.900536),(-69.371988,-14.818681),(-69.370541,-14.801524),(-69.36142,-14.787985),(-69.339045,-14.774756),(-69.283518,-14.759666),(-69.267628,-14.750675),(-69.25383,-14.721839),(-69.2475,-14.593682),(-69.234658,-14.574251),(-69.214944,-14.572288),(-69.192232,-14.576939),(-69.170399,-14.577559),(-69.164507,-14.566397),(-69.167427,-14.520301),(-69.164275,-14.503041),(-69.158203,-14.498494),(-69.14115,-14.494153),(-69.134018,-14.490742),(-69.094563,-14.446404),(-69.075133,-14.432451),(-69.01015,-14.397105),(-68.990254,-14.379018),(-68.994621,-14.358967),(-69.008315,-14.33902),(-69.01617,-14.32114),(-69.01723,-14.282486),(-69.015705,-14.263159),(-69.01015,-14.245899),(-68.984906,-14.228329),(-68.951316,-14.219958),(-68.883724,-14.211483),(-68.864448,-14.191226),(-68.868892,-14.156189),(-68.893723,-14.089216),(-68.899976,-14.0548),(-68.905764,-14.039297),(-68.916176,-14.024931),(-68.969713,-13.990514),(-68.982968,-13.972221),(-68.98904,-13.940078),(-68.989195,-13.903698),(-68.992993,-13.869798),(-69.01015,-13.844373),(-69.023095,-13.80603),(-69.024283,-13.79249),(-69.020821,-13.776367),(-69.015964,-13.764585),(-69.015964,-13.753113),(-69.027048,-13.73823),(-69.08105,-13.695442),(-69.101591,-13.666607),(-69.087587,-13.643869),(-69.074539,-13.642629),(-69.044024,-13.648003),(-69.032629,-13.645936),(-69.023844,-13.634981),(-69.023612,-13.623302),(-69.025524,-13.610693),(-69.023198,-13.596947),(-69.015395,-13.585475),(-68.994259,-13.563047),(-68.985655,-13.550231),(-68.976
819,-13.526253),(-68.970747,-13.501449),(-68.974441,-13.478814),(-68.962788,-13.283581),(-68.977025,-13.204929),(-68.979661,-13.16457),(-68.972142,-13.046128),(-68.973936,-13.03176),(-68.986224,-12.93337),(-68.986585,-12.890478),(-68.980953,-12.867947),(-68.94943,-12.843866),(-68.938733,-12.819681),(-68.926615,-12.800975),(-68.904627,-12.803972),(-68.891992,-12.775757),(-68.876566,-12.754983),(-68.856206,-12.741443),(-68.77373,-12.719429),(-68.766134,-12.710541),(-68.742957,-12.665789),(-68.752982,-12.65473),(-68.785332,-12.645945),(-68.794608,-12.63654),(-68.793419,-12.62021),(-68.783885,-12.605741),(-68.770113,-12.594372),(-68.74027,-12.578559),(-68.726059,-12.56595),(-68.714535,-12.551171),(-68.706267,-12.535978),(-68.70022,-12.516031),(-68.694381,-12.506832),(-68.684252,-12.502492),(-68.689463,-12.493418),(-68.714225,-12.450298),(-68.81856,-12.269224),(-68.922868,-12.08815),(-69.027281,-11.907076),(-69.13159,-11.726001),(-69.23595,-11.544927),(-69.340336,-11.36375),(-69.444568,-11.182675),(-69.548954,-11.001705),(-69.552494,-10.995607),(-69.556111,-10.989406),(-69.559729,-10.983101),(-69.563269,-10.9769),(-69.566834,-10.970699),(-69.570503,-10.964601),(-69.574017,-10.958503),(-69.577635,-10.952302),(-69.502885,-10.955299),(-69.463378,-10.951268),(-69.39594,-10.935042),(-69.363074,-10.940106),(-69.329588,-10.949408),(-69.293518,-10.954989),(-69.236803,-10.952922),(-69.158513,-10.96212),(-69.121487,-10.970595),(-69.086889,-10.967185),(-69.071309,-10.969975),(-68.996998,-11.001705),(-68.992502,-11.002428),(-68.983382,-11.002428),(-68.978937,-11.001705),(-68.955425,-11.000464),(-68.908141,-11.013383),(-68.884034,-11.016381),(-68.858919,-11.01049),(-68.831685,-11.000361),(-68.804659,-10.994677),(-68.774557,-11.003462),(-68.757633,-11.011937),(-68.773989,-11.035088),(-68.785642,-11.059272),(-68.791171,-11.08511),(-68.787011,-11.126038),(-68.783988,-11.13534),(-68.775926,-11.140611),(-68.758331,-11.141128),(-68.61573,-11.112499),(-68.588031,-11.09865),(-68.536096,-11.061546),(-68.493308,-11.047387),(-68.476565,-11.044803),(-68.459667,-11.044803),(-68.442407,-11.047697),(-68.428196,-11.043666),(-68.396415,-11.014417),(-68.378276,-11.005012),(-68.331147,-11.000464),(-68.313836,-10.993436),(-68.29332,-10.978967),(-68.274355,-10.960053),(-68.257509,-10.938763),(-68.243763,-10.916955),(-68.217563,-10.858044),(-68.203558,-10.838924),(-68.147283,-10.779599),(-68.112091,-10.714074),(-68.09788,-10.697744),(-68.074574,-10.681724),(-68.043775,-10.666945),(-68.027135,-10.661777),(-68.010237,-10.65971),(-67.981815,-10.664361),(-67.862649,-10.65878),(-67.847766,-10.66095),(-67.833348,-10.664981),(-67.81428,-10.675936),(-67.774954,-10.706529),(-67.755782,-10.714177),(-67.721727,-10.705495),(-67.705139,-10.67666),(-67.696044,-10.640486),(-67.684675,-10.610514),(-67.673823,-10.599352),(-67.643489,-10.574547),(-67.630828,-10.559561),(-67.61114,-10.528762),(-67.599978,-10.515429),(-67.58463,-10.501787),(-67.565096,-10.495896),(-67.529336,-10.478532),(-67.488201,-10.464786),(-67.468048,-10.452281),(-67.450529,-10.435951),(-67.436628,-10.417451),(-67.432236,-10.406082),(-67.431874,-10.397917),(-67.42929,-10.390372),(-67.418335,-10.381484),(-67.409188,-10.378797),(-67.365005,-10.378073),(-67.353481,-10.376213),(-67.342836,-10.372492),(-67.335291,-10.366291),(-67.333689,-10.356679),(-67.339425,-10.334872),(-67.337823,-10.326087),(-67.323509,-10.318852),(-67.300668,-10.315132),(-67.259172,-10.313685),(-67.241033,-10.316372),(-67.201759,-10.32681),(-67.184706,-10.326707),(-67.172303,-10.319576),(-67.164862,-10.308517)
,(-67.159074,-10.297148),(-67.151736,-10.288983),(-67.141763,-10.285573),(-67.12073,-10.284539),(-67.110033,-10.282575),(-67.0643,-10.256944),(-66.902656,-10.09313),(-66.770623,-9.992567),(-66.751606,-9.982645),(-66.670009,-9.951536),(-66.660087,-9.945335),(-66.654816,-9.93717),(-66.65249,-9.926111),(-66.648925,-9.915879),(-66.648904,-9.915854),(-66.64262,-9.907921),(-66.631871,-9.904304),(-66.513842,-9.883943),(-66.452864,-9.888698),(-66.43271,-9.886114),(-66.41328,-9.879189),(-66.371732,-9.855935),(-66.355273,-9.84963),(-66.257321,-9.834851),(-66.238975,-9.828133),(-66.20797,-9.808082),(-66.190658,-9.800848),(-66.135467,-9.795163),(-66.131695,-9.797024),(-66.127923,-9.801158),(-66.123582,-9.805085),(-66.118182,-9.806015),(-66.113117,-9.802398),(-66.106064,-9.790409),(-66.101981,-9.787412),(-66.086995,-9.784518),(-66.081724,-9.785655),(-66.069322,-9.790099),(-66.042786,-9.804878),(-66.029479,-9.808289),(-66.010385,-9.804052),(-65.965659,-9.776766),(-65.946952,-9.771392),(-65.92933,-9.770255),(-65.915274,-9.771702),(-65.874424,-9.781314),(-65.862461,-9.781831),(-65.85042,-9.77532),(-65.834065,-9.758266),(-65.831662,-9.765294),(-65.827398,-9.773976),(-65.822127,-9.781831),(-65.816521,-9.785965),(-65.806418,-9.784415),(-65.803705,-9.775526),(-65.803886,-9.764778),(-65.802645,-9.757336),(-65.794506,-9.737699),(-65.788564,-9.733048),(-65.782207,-9.746277),(-65.779959,-9.755993),(-65.77735,-9.763434),(-65.772441,-9.768808),(-65.76319,-9.772322),(-65.725493,-9.756819),(-65.709964,-9.755993),(-65.713736,-9.77873),(-65.717302,-9.791236),(-65.713116,-9.794233),(-65.705054,-9.793303),(-65.69707,-9.794336),(-65.694357,-9.793716),(-65.683609,-9.789892),(-65.678958,-9.789376),(-65.672628,-9.792579),(-65.663713,-9.802915),(-65.658494,-9.807359),(-65.628883,-9.826893),(-65.614388,-9.833921),(-65.596121,-9.837951),(-65.583977,-9.837228),(-65.579377,-9.832784),(-65.577052,-9.826169),(-65.571884,-9.818831),(-65.570618,-9.815834),(-65.569921,-9.806015),(-65.567983,-9.802088),(-65.562815,-9.799194),(-65.5508,-9.797024),(-65.546821,-9.795163),(-65.535323,-9.782451),(-65.525427,-9.767878),(-65.521396,-9.758576),(-65.517133,-9.743177),(-65.510777,-9.734082),(-65.465224,-9.696461),(-65.45184,-9.681372),(-65.442487,-9.679821),(-65.416235,-9.680132),(-65.397993,-9.686746),(-65.370501,-9.710621),(-65.356058,-9.741626),(-65.353732,-9.745037),(-65.340064,-9.789686),(-65.330323,-9.800744),(-65.316603,-9.812527),(-65.304381,-9.825652),(-65.29911,-9.841259),(-65.30358,-9.864513),(-65.325233,-9.905131),(-65.333294,-9.926835),(-65.336834,-9.967246),(-65.332932,-10.00838),(-65.324148,-10.046414),(-65.30557,-10.098401),(-65.301462,-10.119691),(-65.29911,-10.162996),(-65.296811,-10.173331),(-65.286837,-10.196482),(-65.284822,-10.206818),(-65.288077,-10.216223),(-65.305363,-10.242164),(-65.315621,-10.291257),(-65.327817,-10.314408),(-65.365024,-10.332185),(-65.379235,-10.351408),(-65.39019,-10.374043),(-65.394646,-10.39207),(-65.394738,-10.392439),(-65.393084,-10.400191),(-65.39143,-10.403498),(-65.38727,-10.406702),(-65.410938,-10.44918),(-65.425382,-10.463133),(-65.44998,-10.468094),(-65.44998,-10.474915),(-65.437913,-10.492588),(-65.435691,-10.499203),(-65.436725,-10.504061),(-65.441479,-10.513362),(-65.442487,-10.519357),(-65.441634,-10.527832),(-65.439308,-10.538167),(-65.435898,-10.546952),(-65.431996,-10.550673),(-65.429232,-10.561421),(-65.435639,-10.610824),(-65.435691,-10.62581),(-65.418612,-10.644517),(-65.400784,-10.655473),(-65.386753,-10.669529),(-65.381069,-10.698054),(-65.382258,-10.715624),(-65.389777,-10.75376
1),(-65.404737,-10.799236),(-65.399698,-10.812259),(-65.360295,-10.822594),(-65.342183,-10.834893),(-65.327119,-10.850499),(-65.31898,-10.865382),(-65.312779,-10.950958),(-65.310117,-10.956539),(-65.30451,-10.962017),(-65.299601,-10.969665),(-65.29911,-10.982068),(-65.302986,-10.990749),(-65.309652,-10.99664),(-65.317843,-11.002015),(-65.326447,-11.009353),(-65.341976,-11.032814),(-65.342699,-11.050591),(-65.337041,-11.067747),(-65.333294,-11.088521),(-65.341046,-11.106608),(-65.379674,-11.136373),(-65.394738,-11.153427),(-65.398148,-11.178025),(-65.385203,-11.193631),(-65.368331,-11.205413),(-65.360011,-11.218849),(-65.366057,-11.234042),(-65.378614,-11.242),(-65.388614,-11.253162),(-65.38727,-11.277553),(-65.37448,-11.295847),(-65.353913,-11.310626),(-65.334819,-11.321168),(-65.326447,-11.327783),(-65.330064,-11.340495),(-65.347505,-11.360752),(-65.353732,-11.372431),(-65.354766,-11.382353),(-65.353009,-11.390621),(-65.319548,-11.476404),(-65.29247,-11.504723)] +Barbados [(-59.42691,13.160386),(-59.430043,13.125922),(-59.455719,13.100043),(-59.475779,13.086493),(-59.487538,13.078559),(-59.50886,13.057318),(-59.516713,13.056383),(-59.523101,13.056464),(-59.529286,13.055406),(-59.536122,13.051174),(-59.545969,13.067694),(-59.564443,13.07803),(-59.586903,13.083197),(-59.60025,13.084133),(-59.608144,13.084662),(-59.621571,13.09398),(-59.633127,13.115912),(-59.646067,13.160386),(-59.654205,13.295111),(-59.652211,13.310614),(-59.645131,13.323717),(-59.63093,13.337958),(-59.612864,13.34455),(-59.594472,13.334784),(-59.580719,13.314521),(-59.575917,13.304389),(-59.568959,13.289659),(-59.549794,13.249172),(-59.53661,13.231391),(-59.518056,13.217515),(-59.491811,13.197943),(-59.476674,13.189399),(-59.457102,13.183173),(-59.438629,13.174872),(-59.42691,13.160386)] +Bhutan 
[(90.261804,28.335354),(90.261804,28.299141),(90.29147,28.261351),(90.329504,28.255796),(90.353984,28.299141),(90.386905,28.299141),(90.416535,28.28268),(90.45604,28.28268),(90.505422,28.249758),(90.54822,28.246466),(90.587726,28.233298),(90.597602,28.200376),(90.567973,28.1905),(90.54822,28.157579),(90.515299,28.154287),(90.485669,28.124657),(90.475232,28.072344),(90.492595,28.074412),(90.510061,28.074101),(90.575277,28.065833),(90.597395,28.070587),(90.621889,28.072965),(90.651035,28.086995),(90.667468,28.090328),(90.683901,28.087072),(90.726896,28.061699),(90.755278,28.055173),(90.788701,28.047488),(90.850919,28.044026),(90.90952,28.032657),(90.960577,27.994726),(90.975149,27.982092),(91.008842,27.96677),(91.051941,27.96279),(91.092558,27.971679),(91.11943,27.994726),(91.11943,27.994778),(91.119637,27.994881),(91.133796,28.014105),(91.175861,28.057487),(91.194671,28.070587),(91.220612,28.074877),(91.246141,28.071466),(91.269705,28.072861),(91.290169,28.091413),(91.309599,28.056402),(91.341432,28.030564),(91.418843,27.994726),(91.436483,27.989375),(91.446438,27.986355),(91.460908,27.984365),(91.497495,27.984081),(91.537492,27.969353),(91.578627,27.964702),(91.600641,27.959276),(91.621725,27.950724),(91.637951,27.939743),(91.652834,27.916747),(91.648907,27.897213),(91.637124,27.87724),(91.628339,27.852694),(91.627306,27.829827),(91.632887,27.759444),(91.626686,27.716423),(91.579867,27.657977),(91.573252,27.619711),(91.59506,27.546382),(91.604465,27.532145),(91.632887,27.511655),(91.641672,27.494964),(91.657382,27.479151),(91.680223,27.472846),(91.704924,27.468764),(91.727041,27.460005),(91.727092,27.459942),(91.743681,27.439153),(91.744818,27.423754),(91.750089,27.416157),(91.779648,27.418844),(91.857369,27.443081),(91.884345,27.447163),(91.920931,27.445148),(91.933747,27.449127),(91.963306,27.468919),(91.975088,27.472433),(91.997309,27.448662),(92.084849,27.304226),(92.088776,27.29234),(92.081852,27.275132),(92.050639,27.251413),(92.03679,27.236478),(92.033173,27.227435),(92.029969,27.20847),(92.027075,27.19984),(92.021494,27.19183),(92.008471,27.179479),(92.005788,27.176112),(92.002994,27.172606),(91.995862,27.158033),(91.990798,27.140773),(91.987904,27.122273),(91.987697,27.104031),(91.999893,27.071527),(92.049709,27.026827),(92.066969,26.994943),(92.066969,26.994839),(92.083919,26.937323),(92.080371,26.921571),(92.072757,26.887766),(92.03586,26.854848),(91.975088,26.846631),(91.957415,26.854279),(91.925686,26.878671),(91.908219,26.885182),(91.89313,26.881358),(91.895507,26.868284),(91.895507,26.853504),(91.874009,26.844306),(91.885895,26.83087),(91.886928,26.814437),(91.87866,26.803016),(91.863054,26.804773),(91.852408,26.818519),(91.849515,26.834746),(91.843934,26.84937),(91.825123,26.858465),(91.795254,26.853504),(91.731589,26.816194),(91.702133,26.803481),(91.653764,26.797952),(91.637951,26.798985),(91.593716,26.810768),(91.57563,26.810148),(91.539353,26.79883),(91.520646,26.79759),(91.507417,26.808029),(91.484472,26.852729),(91.475067,26.865545),(91.460598,26.869369),(91.420187,26.871539),(91.406338,26.869524),(91.388457,26.858465),(91.378122,26.842807),(91.370061,26.824669),(91.358382,26.806169),(91.345566,26.794696),(91.330063,26.785498),(91.313217,26.778677),(91.296577,26.774594),(91.27601,26.774077),(91.261954,26.779038),(91.232498,26.795161),(91.198185,26.802344),(91.127078,26.800898),(91.091938,26.804773),(91.062069,26.804567),(91.012781,26.784348),(91.007395,26.782139),(90.944493,26.77887),(90.717077,26.767049),(90.587783,26.780072),(90.475232,26.83242),(90.382627,26.891796)
,(90.348831,26.896654),(90.328574,26.890659),(90.314208,26.880376),(90.300875,26.868284),(90.284442,26.85707),(90.265994,26.851592),(90.229148,26.852884),(90.210907,26.851489),(90.17742,26.832058),(90.152202,26.771752),(90.127191,26.751392),(90.089157,26.741728),(89.975262,26.731858),(89.912113,26.716717),(89.890409,26.714856),(89.880281,26.716252),(89.85992,26.721884),(89.858701,26.722057),(89.850774,26.723176),(89.847001,26.72483),(89.839043,26.731238),(89.834961,26.731548),(89.829741,26.72762),(89.827674,26.722453),(89.827261,26.717905),(89.826744,26.715683),(89.826796,26.713513),(89.825245,26.707673),(89.822093,26.701007),(89.817287,26.696201),(89.81367,26.696149),(89.804058,26.699767),(89.800079,26.70049),(89.760288,26.70018),(89.75467,26.700989),(89.738739,26.703281),(89.685151,26.724571),(89.66324,26.725502),(89.628307,26.712531),(89.609806,26.712221),(89.597507,26.720954),(89.613475,26.748549),(89.611357,26.765964),(89.586087,26.784051),(89.546503,26.797539),(89.50542,26.803688),(89.442891,26.797022),(89.407338,26.813403),(89.368895,26.837359),(89.341657,26.854331),(89.300161,26.844409),(89.286364,26.844978),(89.273599,26.843892),(89.262954,26.836348),(89.252619,26.826787),(89.240837,26.819759),(89.212828,26.813041),(89.184613,26.810561),(89.128079,26.813455),(89.096504,26.821413),(89.082448,26.836037),(89.074335,26.856398),(89.060693,26.881461),(89.04488,26.897119),(89.021574,26.912674),(88.996614,26.922751),(88.975323,26.921717),(88.965815,26.915464),(88.954549,26.912622),(88.942767,26.913707),(88.932328,26.919237),(88.9253,26.929365),(88.925404,26.939029),(88.926954,26.949416),(88.924474,26.961663),(88.906645,26.981093),(88.88644,26.978975),(88.867113,26.964247),(88.851713,26.945437),(88.845615,26.994839),(88.845615,26.994943),(88.845615,27.049565),(88.840809,27.075248),(88.827632,27.097882),(88.805256,27.113023),(88.742572,27.142737),(88.733082,27.148972),(88.730067,27.150954),(88.738438,27.179789),(88.754355,27.212604),(88.775852,27.240871),(88.801174,27.256425),(88.861428,27.270378),(88.876116,27.280486),(88.884631,27.286346),(88.892331,27.315543),(88.91516,27.330869),(88.950808,27.328229),(88.971933,27.312385),(88.997019,27.330869),(88.981175,27.354635),(88.964011,27.392924),(88.95106,27.433144),(88.957549,27.456939),(88.966202,27.47803),(88.982966,27.492632),(88.980046,27.49798),(88.97521,27.506839),(88.972955,27.517659),(88.978366,27.530283),(88.992793,27.537497),(89.004966,27.54967),(89.002712,27.560491),(89.016238,27.57582),(89.03337,27.578976),(89.039682,27.594305),(89.050954,27.608282),(89.064029,27.615946),(89.091531,27.625865),(89.102351,27.623611),(89.105865,27.622952),(89.109565,27.622258),(89.119484,27.618201),(89.124655,27.614852),(89.127359,27.62729),(89.156021,27.665686),(89.198203,27.730582),(89.21659,27.771142),(89.22492,27.807813),(89.258717,27.827553),(89.299489,27.844245),(89.336025,27.869075),(89.370131,27.909357),(89.417364,27.988654),(89.420981,27.994726),(89.421187,27.994726),(89.444545,28.031184),(89.458808,28.047798),(89.475344,28.061337),(89.495343,28.068004),(89.515032,28.081905),(89.561489,28.13464),(89.579421,28.14464),(89.597817,28.149807),(89.71781,28.169083),(89.755844,28.184379),(89.773621,28.212775),(89.780855,28.228769),(89.796048,28.24019),(89.829741,28.259155),(89.83863,28.267914),(89.854029,28.287267),(89.862918,28.295793),(89.873615,28.300599),(89.881314,28.297525),(89.91613,28.315601),(89.952524,28.308247),(89.971593,28.31804),(89.990455,28.320624),(90.07086,28.34523),(90.123535,28.335354),(90.153164,28.322185),(90.176209,28.325478),(90
.199254,28.348523),(90.225591,28.358399),(90.261804,28.335354)] +Botswana [(25.259781,-17.794107),(25.21937,-17.879786),(25.21937,-17.908001),(25.226088,-17.931876),(25.255026,-18.001122),(25.296368,-18.068612),(25.323446,-18.09662),(25.357449,-18.115844),(25.387525,-18.138995),(25.408816,-18.175995),(25.440855,-18.2532),(25.473204,-18.303429),(25.481163,-18.323377),(25.490516,-18.365545),(25.49558,-18.378877),(25.508499,-18.399134),(25.574439,-18.465693),(25.608442,-18.487708),(25.622084,-18.501143),(25.669523,-18.566049),(25.698255,-18.590234),(25.736909,-18.608734),(25.761921,-18.630335),(25.773393,-18.665578),(25.779491,-18.738752),(25.815251,-18.813993),(25.940721,-18.921273),(25.967449,-18.999925),(25.9678,-19.000958),(25.964389,-19.021629),(25.948059,-19.058732),(25.944855,-19.079196),(25.948576,-19.103277),(25.956534,-19.122088),(25.981132,-19.161775),(26.011414,-19.199809),(26.034359,-19.243734),(26.13027,-19.501082),(26.155488,-19.537153),(26.194452,-19.5602),(26.239101,-19.571466),(26.292638,-19.572499),(26.30349,-19.577254),(26.313205,-19.584178),(26.32106,-19.592033),(26.330981,-19.604952),(26.333462,-19.613014),(26.326331,-19.633891),(26.319096,-19.646293),(26.312481,-19.649601),(26.312171,-19.651358),(26.324367,-19.659109),(26.332325,-19.662416),(26.362711,-19.667584),(26.385242,-19.679056),(26.412837,-19.71957),(26.431854,-19.73652),(26.450251,-19.743342),(26.489731,-19.75192),(26.508852,-19.759258),(26.549263,-19.784063),(26.566316,-19.800806),(26.574791,-19.819513),(26.581922,-19.842147),(26.595565,-19.855583),(26.614065,-19.863438),(26.659437,-19.875737),(26.673803,-19.883385),(26.67717,-19.886815),(26.684758,-19.894547),(26.698608,-19.91253),(26.713904,-19.927413),(26.730957,-19.935888),(26.750801,-19.939609),(26.774469,-19.939815),(26.811882,-19.94643),(26.925054,-20.000897),(26.961072,-20.007201),(26.9943,-20.006788),(27.02665,-20.010095),(27.060136,-20.027562),(27.069231,-20.03738),(27.086491,-20.060532),(27.097343,-20.068903),(27.109642,-20.073244),(27.119771,-20.073864),(27.129692,-20.072934),(27.141888,-20.073347),(27.16292,-20.076551),(27.183746,-20.082339),(27.201781,-20.092984),(27.214907,-20.110451),(27.266015,-20.234164),(27.283998,-20.35147),(27.268392,-20.49575),(27.306012,-20.477354),(27.340739,-20.473013),(27.45391,-20.473323),(27.534112,-20.483038),(27.590853,-20.473323),(27.625786,-20.488619),(27.66599,-20.489136),(27.683767,-20.49606),(27.698133,-20.509083),(27.705575,-20.526653),(27.702629,-20.566134),(27.690382,-20.60148),(27.682475,-20.637344),(27.707332,-20.716719),(27.709605,-20.756716),(27.694412,-20.837745),(27.689038,-20.849011),(27.681803,-20.857589),(27.676016,-20.866684),(27.674775,-20.879913),(27.675085,-20.891282),(27.672605,-20.913709),(27.672657,-20.923528),(27.680356,-20.979649),(27.678961,-21.000733),(27.666817,-21.053753),(27.666611,-21.071219),(27.674775,-21.090133),(27.709192,-21.134471),(27.724385,-21.149664),(27.793838,-21.197413),(27.823604,-21.231726),(27.849132,-21.269657),(27.884995,-21.310171),(27.8944,-21.32433),(27.896674,-21.332392),(27.896157,-21.347895),(27.897811,-21.35544),(27.904219,-21.364741),(27.920549,-21.381174),(27.950211,-21.438329),(27.953001,-21.448664),(27.949642,-21.456519),(27.941943,-21.468508),(27.939876,-21.478016),(27.943338,-21.479876),(27.950418,-21.482047),(27.954448,-21.487835),(27.949281,-21.500754),(27.953208,-21.510469),(27.958066,-21.511502),(27.963698,-21.510469),(27.970571,-21.514396),(27.975739,-21.522561),(27.984731,-21.542922),(27.990415,-21.551913),(28.002559,-21.564212),(28.016563,-21.572
894),(28.032893,-21.577855),(28.090771,-21.581266),(28.165702,-21.595218),(28.284867,-21.596872),(28.321919,-21.603486),(28.361762,-21.616302),(28.443101,-21.655783),(28.464598,-21.660331),(28.481393,-21.657437),(28.497309,-21.651546),(28.532501,-21.643071),(28.542939,-21.638316),(28.553998,-21.636559),(28.585934,-21.644414),(28.6157,-21.647101),(28.629704,-21.651339),(28.66841,-21.679968),(28.714195,-21.693507),(28.860853,-21.757379),(28.891032,-21.764924),(28.951907,-21.768334),(28.980846,-21.774845),(28.998726,-21.786008),(29.038723,-21.797893),(29.05526,-21.809985),(29.057637,-21.829209),(29.045441,-21.852567),(29.028905,-21.876648),(29.017949,-21.898145),(29.013815,-21.940417),(29.021567,-21.982791),(29.040532,-22.020929),(29.070763,-22.051004),(29.10797,-22.069194),(29.144867,-22.075292),(29.239331,-22.072605),(29.244395,-22.075706),(29.254111,-22.087074),(29.259588,-22.096066),(29.26734,-22.115807),(29.273644,-22.125108),(29.350074,-22.186707),(29.331522,-22.192804),(29.302893,-22.190221),(29.248219,-22.179265),(29.222071,-22.182159),(29.202796,-22.194458),(29.186621,-22.207687),(29.168844,-22.213992),(29.047818,-22.220296),(29.038723,-22.223914),(29.017846,-22.250372),(29.018053,-22.255023),(29.002756,-22.263394),(28.983739,-22.281998),(28.960227,-22.310213),(28.967926,-22.38039),(28.963379,-22.392172),(28.953457,-22.40075),(28.930409,-22.440851),(28.912529,-22.453667),(28.86938,-22.448913),(28.84659,-22.449946),(28.836772,-22.463176),(28.831191,-22.482296),(28.816825,-22.492941),(28.796774,-22.498212),(28.77476,-22.501416),(28.731352,-22.513922),(28.659005,-22.551956),(28.623865,-22.562808),(28.5972,-22.562704),(28.577563,-22.560017),(28.559838,-22.563014),(28.538495,-22.580274),(28.522476,-22.584822),(28.459379,-22.569629),(28.376851,-22.575003),(28.338559,-22.584615),(28.301817,-22.603839),(28.286831,-22.615828),(28.250554,-22.655619),(28.240684,-22.66089),(28.216344,-22.663163),(28.205492,-22.665954),(28.196191,-22.671638),(28.164668,-22.708639),(28.159707,-22.718147),(28.157743,-22.731066),(28.159707,-22.743262),(28.162704,-22.751013),(28.162498,-22.758971),(28.154953,-22.771994),(28.145858,-22.779745),(28.135626,-22.783363),(28.125601,-22.785533),(28.117436,-22.789461),(28.113198,-22.795352),(28.107721,-22.810648),(28.10369,-22.816746),(28.088807,-22.822637),(28.065759,-22.828838),(28.047156,-22.8369),(28.045399,-22.847752),(28.053977,-22.869146),(28.04974,-22.891366),(28.037441,-22.911417),(28.021834,-22.92661),(28.008295,-22.934775),(27.989382,-22.943766),(27.968401,-22.951001),(27.949177,-22.953895),(27.937498,-22.963817),(27.937912,-22.986658),(27.946077,-23.029032),(27.930057,-23.057041),(27.895537,-23.079469),(27.853783,-23.095592),(27.815749,-23.104687),(27.82226,-23.12298),(27.806447,-23.128355),(27.78495,-23.127011),(27.774201,-23.125151),(27.77172,-23.133005),(27.774614,-23.140757),(27.779058,-23.147682),(27.781022,-23.152539),(27.783038,-23.155433),(27.787378,-23.159774),(27.78929,-23.163805),(27.784588,-23.165562),(27.783296,-23.166905),(27.775751,-23.172486),(27.774201,-23.173003),(27.759421,-23.21145),(27.753634,-23.220752),(27.739164,-23.227573),(27.725728,-23.221682),(27.698443,-23.193467),(27.658756,-23.223439),(27.647129,-23.227573),(27.633434,-23.223439),(27.619172,-23.217031),(27.608009,-23.216721),(27.599224,-23.244627),(27.580414,-23.26385),(27.576073,-23.279146),(27.575247,-23.284831),(27.573076,-23.290515),(27.569614,-23.294856),(27.565893,-23.29651),(27.563309,-23.299197),(27.564911,-23.305811),(27.567754,-23.31377),(27.569356,-23.320384),(27.563671,-2
3.342812),(27.548892,-23.360795),(27.528841,-23.373094),(27.50724,-23.379089),(27.465693,-23.380432),(27.45267,-23.38529),(27.443162,-23.395108),(27.434997,-23.406581),(27.425178,-23.412678),(27.411122,-23.405754),(27.41846,-23.391284),(27.410502,-23.389321),(27.395826,-23.394075),(27.383734,-23.399553),(27.38084,-23.40534),(27.371331,-23.417019),(27.361203,-23.422497),(27.349989,-23.391491),(27.335571,-23.402446),(27.312007,-23.43707),(27.295574,-23.451746),(27.278107,-23.463631),(27.239712,-23.481408),(27.202505,-23.492053),(27.191497,-23.503836),(27.205657,-23.523059),(27.195632,-23.52709),(27.18664,-23.527297),(27.178682,-23.523576),(27.171447,-23.515618),(27.168967,-23.521612),(27.161525,-23.52988),(27.157184,-23.536702),(27.149123,-23.527503),(27.137651,-23.523369),(27.127419,-23.525023),(27.123078,-23.532981),(27.127419,-23.54528),(27.135687,-23.551895),(27.141165,-23.558303),(27.13672,-23.570188),(27.124421,-23.56316),(27.113363,-23.564297),(27.103751,-23.571738),(27.095793,-23.583831),(27.102511,-23.583831),(27.084114,-23.596026),(27.078326,-23.602021),(27.075225,-23.611219),(27.068404,-23.607292),(27.061531,-23.604398),(27.063133,-23.624242),(27.069438,-23.641708),(27.070058,-23.656074),(27.054762,-23.66641),(27.037605,-23.665686),(27.029854,-23.654731),(27.024273,-23.642535),(27.013834,-23.638504),(27.00417,-23.645842),(26.986549,-23.686977),(26.96717,-23.718706),(26.962054,-23.737413),(26.964542,-23.74714),(26.966602,-23.75519),(26.938128,-23.802887),(26.945001,-23.823248),(26.941177,-23.850223),(26.870897,-24.11584),(26.868933,-24.160385),(26.84971,-24.248131),(26.839168,-24.265598),(26.823251,-24.278827),(26.802271,-24.289679),(26.714007,-24.315931),(26.692923,-24.326576),(26.684087,-24.341459),(26.62347,-24.39944),(26.558358,-24.438197),(26.531176,-24.458661),(26.506061,-24.48884),(26.476812,-24.559947),(26.469474,-24.571419),(26.404362,-24.632811),(26.404001,-24.632758),(26.381624,-24.629503),(26.363538,-24.628056),(26.344521,-24.624026),(26.32571,-24.622372),(26.278788,-24.62847),(26.165911,-24.660846),(26.013585,-24.704537),(25.981132,-24.726345),(25.966043,-24.73358),(25.914676,-24.731823),(25.881707,-24.742158),(25.874058,-24.743398),(25.868374,-24.748152),(25.866927,-24.772647),(25.877262,-24.845717),(25.876746,-24.886025),(25.866204,-24.918168),(25.837058,-24.979146),(25.835198,-25.016353),(25.804915,-25.064826),(25.782901,-25.131385),(25.695465,-25.309979),(25.663529,-25.439996),(25.652838,-25.463319),(25.608855,-25.559266),(25.587254,-25.61952),(25.458425,-25.711194),(25.386491,-25.743544),(25.177822,-25.763284),(25.135964,-25.756566),(25.090282,-25.743647),(25.051731,-25.737859),(25.013284,-25.743027),(24.967912,-25.762871),(24.908381,-25.804109),(24.888537,-25.811653),(24.828903,-25.826019),(24.79862,-25.829223),(24.770715,-25.822609),(24.737952,-25.815891),(24.664985,-25.823332),(24.629638,-25.816098),(24.576722,-25.783335),(24.500851,-25.757848),(24.456729,-25.743027),(24.436782,-25.745301),(24.416731,-25.753052),(24.38924,-25.75946),(24.338907,-25.751812),(24.296739,-25.723493),(24.227699,-25.648976),(24.183464,-25.625928),(24.130961,-25.626238),(24.027091,-25.651766),(24.005697,-25.65466),(24.00425,-25.649906),(24.009624,-25.641328),(24.008694,-25.633163),(23.998256,-25.625308),(23.988024,-25.61952),(23.976448,-25.617557),(23.924772,-25.629236),(23.905755,-25.618487),(23.890665,-25.59885),(23.845397,-25.569084),(23.807466,-25.523609),(23.798578,-25.518545),(23.778321,-25.51255),(23.769019,-25.507383),(23.762301,-25.498494),(23.752069,-25.477617),(23.742664,-25.
469969),(23.68675,-25.454362),(23.674244,-25.44258),(23.6667,-25.432658),(23.561073,-25.357314),(23.535855,-25.343258),(23.506606,-25.330959),(23.496581,-25.324241),(23.483455,-25.311529),(23.468159,-25.290238),(23.45927,-25.282177),(23.444904,-25.278146),(23.431985,-25.280316),(23.399222,-25.291375),(23.381446,-25.292615),(23.365943,-25.289308),(23.320364,-25.273392),(23.265691,-25.263987),(23.216081,-25.26719),(23.168539,-25.280006),(23.093401,-25.312459),(23.083893,-25.319694),(23.071697,-25.325998),(23.065496,-25.320831),(23.061155,-25.310702),(23.054541,-25.302537),(23.030666,-25.299437),(23.006998,-25.310805),(22.968034,-25.346876),(22.962557,-25.355764),(22.960283,-25.364342),(22.956149,-25.372094),(22.94478,-25.378708),(22.930931,-25.384289),(22.920492,-25.39049),(22.911914,-25.399069),(22.903645,-25.411368),(22.887626,-25.450228),(22.876774,-25.462011),(22.851349,-25.471312),(22.844631,-25.481131),(22.824684,-25.543039),(22.816829,-25.558439),(22.814039,-25.566604),(22.813418,-25.575079),(22.819206,-25.584897),(22.828715,-25.595542),(22.833366,-25.606084),(22.824374,-25.6158),(22.810835,-25.625308),(22.809594,-25.633266),(22.813418,-25.641948),(22.814969,-25.6528),(22.812798,-25.667063),(22.810214,-25.677398),(22.804427,-25.687216),(22.759778,-25.734862),(22.746446,-25.755326),(22.740451,-25.77703),(22.741175,-25.790879),(22.744999,-25.794083),(22.75089,-25.796047),(22.757918,-25.805969),(22.764119,-25.820335),(22.765463,-25.826536),(22.754714,-25.843383),(22.739521,-25.858472),(22.719471,-25.874285),(22.708825,-25.891028),(22.721641,-25.909012),(22.726809,-25.921621),(22.727222,-25.943945),(22.724225,-25.967509),(22.719367,-25.984253),(22.710479,-25.996862),(22.70066,-26.00327),(22.672238,-26.012881),(22.661593,-26.020943),(22.66211,-26.029831),(22.665831,-26.039856),(22.665004,-26.051432),(22.623249,-26.10931),(22.620863,-26.111532),(22.611157,-26.120575),(22.59152,-26.133184),(22.580254,-26.146517),(22.563821,-26.18114),(22.545114,-26.207391),(22.527958,-26.213696),(22.482586,-26.204808),(22.450857,-26.210079),(22.428842,-26.226512),(22.395046,-26.271884),(22.342853,-26.317359),(22.318772,-26.328831),(22.303165,-26.333585),(22.257277,-26.340923),(22.248285,-26.346918),(22.237743,-26.366762),(22.197435,-26.404072),(22.178315,-26.439935),(22.146172,-26.519207),(22.108138,-26.571917),(22.057702,-26.617599),(21.998998,-26.650258),(21.935849,-26.663901),(21.838801,-26.664521),(21.807278,-26.669585),(21.780613,-26.678164),(21.768728,-26.688706),(21.767487,-26.704312),(21.776479,-26.749167),(21.777513,-26.768494),(21.773379,-26.786788),(21.76232,-26.804461),(21.74382,-26.820481),(21.687182,-26.855207),(21.664548,-26.863062),(21.643361,-26.862752),(21.583933,-26.848386),(21.498667,-26.846422),(21.449471,-26.826372),(21.42663,-26.822651),(21.380844,-26.823995),(21.296922,-26.844459),(21.282866,-26.844975),(21.254857,-26.841771),(21.240595,-26.841978),(21.153468,-26.864509),(21.122462,-26.865336),(20.990377,-26.838567),(20.950173,-26.816036),(20.907695,-26.800017),(20.851884,-26.806528),(20.826046,-26.818827),(20.779434,-26.848076),(20.714942,-26.875051),(20.693548,-26.889004),(20.690757,-26.891794),(20.685693,-26.881769),(20.675461,-26.85004),(20.665643,-26.839084),(20.650553,-26.829679),(20.644869,-26.824511),(20.638667,-26.816967),(20.624818,-26.789371),(20.61469,-26.753198),(20.609212,-26.716198),(20.608902,-26.686122),(20.630709,-26.618322),(20.632363,-26.595275),(20.627712,-26.580805),(20.611486,-26.559928),(20.605284,-26.547732),(20.602287,-26.522204),(20.605284,-26.493265),(20.618
204,-26.439729),(20.622544,-26.427533),(20.652103,-26.407793),(20.665953,-26.391773),(20.695512,-26.341957),(20.753286,-26.276224),(20.77623,-26.240568),(20.79473,-26.201914),(20.837932,-26.146827),(20.841446,-26.131324),(20.803929,-26.071276),(20.800518,-26.052466),(20.802792,-25.980532),(20.79411,-25.893922),(20.767135,-25.834391),(20.766928,-25.83067),(20.755766,-25.818991),(20.749978,-25.818475),(20.740057,-25.823126),(20.726517,-25.827363),(20.727137,-25.818165),(20.737783,-25.798941),(20.716699,-25.732588),(20.70688,-25.714812),(20.696545,-25.70706),(20.671017,-25.695485),(20.662852,-25.685253),(20.662749,-25.674918),(20.67143,-25.653317),(20.67081,-25.642258),(20.663059,-25.636264),(20.65138,-25.63461),(20.641045,-25.631199),(20.637944,-25.620347),(20.643318,-25.613216),(20.665229,-25.60102),(20.67143,-25.591202),(20.663369,-25.564743),(20.618514,-25.528466),(20.620374,-25.500561),(20.626885,-25.484748),(20.656547,-25.467798),(20.643525,-25.45736),(20.63443,-25.45767),(20.625645,-25.462734),(20.61655,-25.466145),(20.606628,-25.462011),(20.605905,-25.451779),(20.611382,-25.439376),(20.612002,-25.431005),(20.596913,-25.432865),(20.593606,-25.413228),(20.588025,-25.40403),(20.558776,-25.392041),(20.542239,-25.382222),(20.535831,-25.371163),(20.529217,-25.340054),(20.525083,-25.332199),(20.514644,-25.31928),(20.51051,-25.312252),(20.510407,-25.304708),(20.514851,-25.299126),(20.518571,-25.293029),(20.516401,-25.283624),(20.50989,-25.277526),(20.491803,-25.269051),(20.485498,-25.26316),(20.483535,-25.256648),(20.481674,-25.2305),(20.462451,-25.223679),(20.443847,-25.213344),(20.434442,-25.199804),(20.443227,-25.183061),(20.444777,-25.168385),(20.433925,-25.146784),(20.406847,-25.10782),(20.375531,-25.046119),(20.364886,-25.0332),(20.235695,-24.936048),(20.159317,-24.893673),(20.149085,-24.890986),(20.128414,-24.889746),(20.118906,-24.887369),(20.107847,-24.880341),(20.081079,-24.852642),(20.029299,-24.815228),(20.010489,-24.788047),(19.992919,-24.775851),(19.986408,-24.768823),(19.981447,-24.752493),(19.981343,-24.580411),(19.981137,-24.408431),(19.980826,-24.236556),(19.980723,-24.064473),(19.980516,-23.892494),(19.980483,-23.864489),(19.98031,-23.720567),(19.980103,-23.548587),(19.979896,-23.376505),(19.979855,-23.308551),(19.979793,-23.204526),(19.979586,-23.032546),(19.979276,-22.860671),(19.979173,-22.688588),(19.978966,-22.538106),(19.978863,-22.387625),(19.978656,-22.237246),(19.978449,-22.086764),(19.978346,-22.000671),(20.21244,-22.000671),(20.446534,-22.000671),(20.680629,-22.000671),(20.914723,-22.000671),(20.97198,-22.000671),(20.984796,-21.963981),(20.984693,-21.922847),(20.984486,-21.865176),(20.984279,-21.807608),(20.984073,-21.750041),(20.983866,-21.69237),(20.983659,-21.634699),(20.983453,-21.577028),(20.983143,-21.519357),(20.983039,-21.46179),(20.982832,-21.404119),(20.982626,-21.346448),(20.982419,-21.28888),(20.982212,-21.231209),(20.982109,-21.173642),(20.981902,-21.116074),(20.981592,-21.058403),(20.981489,-21.000733),(20.981486,-20.999958),(20.981075,-20.875779),(20.980662,-20.750722),(20.980249,-20.625768),(20.979835,-20.500711),(20.979422,-20.375758),(20.97936,-20.351028),(20.979112,-20.250804),(20.978698,-20.125851),(20.978285,-20.000794),(20.977975,-19.87584),(20.977561,-19.750783),(20.977045,-19.625829),(20.976735,-19.500876),(20.976321,-19.375819),(20.975804,-19.250865),(20.975609,-19.172106),(20.975494,-19.125912),(20.975081,-19.000958),(20.975081,-18.96003),(20.975081,-18.919206),(20.975081,-18.878381),(20.975081,-18.83766),(20.975081,-18.796836),(20.975081
,-18.756012),(20.975081,-18.715187),(20.975081,-18.674363),(20.975081,-18.633539),(20.975081,-18.592714),(20.975081,-18.551993),(20.975081,-18.511065),(20.975081,-18.470138),(20.975081,-18.429313),(20.975081,-18.388592),(20.975081,-18.347768),(20.975081,-18.319346),(20.993436,-18.318612),(21.024174,-18.317382),(21.080604,-18.315212),(21.136932,-18.312938),(21.193466,-18.310768),(21.249896,-18.308494),(21.296818,-18.306633),(21.343947,-18.304773),(21.39087,-18.302913),(21.437895,-18.301052),(21.456809,-18.300226),(21.475722,-18.299502),(21.492879,-18.296401),(21.509829,-18.293198),(21.526985,-18.289994),(21.544039,-18.28679),(21.628478,-18.271183),(21.71271,-18.255577),(21.79715,-18.239867),(21.881486,-18.224158),(21.966028,-18.208552),(22.050261,-18.192842),(22.134597,-18.177236),(22.219036,-18.161526),(22.303475,-18.145816),(22.387811,-18.13021),(22.472147,-18.1145),(22.556483,-18.098791),(22.640922,-18.083185),(22.725258,-18.067475),(22.809698,-18.051869),(22.894034,-18.036159),(22.981367,-18.020036),(23.099602,-18.010217),(23.185799,-18.003086),(23.254942,-17.997402),(23.292769,-17.998952),(23.305688,-18.005463),(23.311476,-18.009804),(23.312819,-18.016522),(23.312716,-18.029958),(23.31592,-18.037089),(23.323258,-18.039156),(23.330596,-18.039983),(23.3338,-18.04329),(23.33256,-18.067061),(23.3338,-18.073986),(23.336591,-18.07936),(23.348063,-18.09445),(23.359535,-18.119151),(23.389714,-18.153258),(23.395812,-18.163386),(23.396329,-18.169381),(23.394468,-18.186227),(23.395812,-18.191395),(23.40129,-18.194289),(23.410281,-18.197389),(23.418859,-18.198423),(23.429401,-18.188501),(23.444284,-18.200697),(23.458754,-18.217647),(23.464128,-18.225501),(23.477771,-18.230876),(23.49131,-18.23315),(23.501542,-18.23749),(23.505676,-18.249376),(23.511257,-18.260331),(23.521696,-18.268186),(23.527277,-18.277695),(23.518802,-18.293714),(23.551461,-18.327511),(23.560763,-18.348491),(23.546604,-18.369472),(23.555492,-18.383115),(23.571305,-18.426006),(23.57916,-18.467864),(23.592182,-18.478199),(23.609856,-18.477682),(23.645409,-18.466004),(23.649853,-18.46342),(23.656468,-18.458252),(23.680136,-18.431484),(23.700909,-18.42797),(23.715792,-18.419081),(23.809843,-18.321723),(23.825553,-18.317072),(23.837232,-18.305807),(23.855215,-18.280072),(23.867514,-18.269426),(23.89697,-18.250203),(23.912886,-18.235733),(23.915987,-18.201213),(23.950817,-18.177649),(23.956604,-18.176615),(23.961979,-18.177856),(23.966733,-18.180646),(23.971177,-18.18385),(23.974898,-18.177029),(23.979652,-18.171861),(23.991744,-18.163386),(24.02027,-18.151604),(24.028125,-18.145816),(24.05696,-18.119048),(24.065125,-18.115224),(24.101609,-18.108816),(24.135095,-18.085458),(24.183051,-18.029441),(24.218294,-18.012595),(24.238241,-18.009907),(24.259222,-18.012595),(24.270074,-18.015798),(24.287437,-18.02448),(24.296429,-18.026237),(24.305834,-18.019416),(24.334256,-17.971563),(24.350586,-17.95606),(24.365262,-17.950789),(24.399471,-17.95234),(24.421589,-17.956474),(24.433888,-17.967223),(24.451045,-17.998952),(24.458486,-18.005773),(24.465101,-18.008667),(24.469751,-18.014455),(24.471508,-18.029958),(24.505615,-18.060344),(24.518431,-18.057346),(24.564423,-18.052799),(24.574551,-18.050422),(24.577755,-18.044427),(24.591811,-18.028407),(24.595015,-18.022826),(24.598632,-18.020759),(24.649172,-17.962778),(24.664055,-17.949859),(24.698471,-17.928879),(24.725343,-17.895909),(24.730717,-17.891878),(24.738159,-17.887641),(24.74684,-17.88423),(24.756039,-17.882783),(24.764514,-17.879683),(24.770715,-17.865523),(24.776916,-17.862319),(24.79738
,-17.858082),(24.820841,-17.839272),(24.838101,-17.835034),(24.857428,-17.833587),(24.931119,-17.81054),(24.93763,-17.807129),(24.945175,-17.799998),(24.948585,-17.79328),(24.953339,-17.788422),(24.964295,-17.786665),(24.958094,-17.800928),(24.968636,-17.807646),(24.975147,-17.816327),(24.983312,-17.820462),(24.998505,-17.81395),(25.007083,-17.825733),(25.019692,-17.823769),(25.047494,-17.807129),(25.057002,-17.827696),(25.087905,-17.826766),(25.120874,-17.813537),(25.153947,-17.781808),(25.194152,-17.782324),(25.259781,-17.794107)] +Central African Republic [(22.55576,10.978969),(22.57705,10.985119),(22.600822,10.987806),(22.60909,10.985222),(22.62821,10.976127),(22.638339,10.974137),(22.673169,10.975068),(22.682677,10.974137),(22.719884,10.964655),(22.745102,10.943803),(22.805977,10.924502),(22.861064,10.919154),(22.863234,10.891817),(23.005551,10.686817),(23.109421,10.614495),(23.291219,10.439726),(23.457617,10.173747),(23.624015,9.907768),(23.644789,9.863068),(23.674038,9.69034),(23.669077,9.652022),(23.655538,9.622463),(23.618537,9.566084),(23.606135,9.537171),(23.606755,9.526241),(23.61647,9.501463),(23.618537,9.488311),(23.617194,9.457176),(23.619984,9.447719),(23.627529,9.435084),(23.63497,9.435394),(23.640861,9.43312),(23.646339,9.425033),(23.646856,9.418393),(23.621845,9.34062),(23.620501,9.322869),(23.630733,9.292173),(23.63218,9.277574),(23.623291,9.265637),(23.610062,9.255457),(23.571408,9.207294),(23.549498,9.185254),(23.537922,9.178149),(23.521799,9.175358),(23.48945,9.176702),(23.474257,9.170682),(23.462821,9.153725),(23.45772,9.146161),(23.435706,9.018934),(23.43829,8.994543),(23.451416,8.974182),(23.478081,8.958912),(23.505469,8.961082),(23.542573,8.997281),(23.560143,8.996455),(23.567481,8.974828),(23.565931,8.940024),(23.553632,8.883154),(23.536682,8.85561),(23.495444,8.809438),(23.482318,8.783393),(23.481801,8.759363),(23.49007,8.732595),(23.505262,8.710736),(23.525726,8.701485),(23.540506,8.704793),(23.56407,8.722699),(23.578023,8.730476),(23.59642,8.734145),(23.61337,8.732233),(23.629079,8.725593),(23.644065,8.715154),(23.657088,8.710012),(23.689127,8.710684),(23.72127,8.702002),(23.737496,8.70686),(23.768916,8.721252),(23.803436,8.72213),(23.866687,8.70779),(23.922291,8.713449),(23.962082,8.697015),(23.981306,8.694173),(24.009624,8.698876),(24.114011,8.681642),(24.170328,8.689327),(24.180467,8.690711),(24.217674,8.691512),(24.235761,8.682003),(24.233487,8.667767),(24.21509,8.64136),(24.211266,8.627071),(24.218397,8.613041),(24.245579,8.591802),(24.25054,8.579632),(24.243719,8.570176),(24.203204,8.543459),(24.193179,8.532426),(24.137162,8.438866),(24.125173,8.404889),(24.121556,8.372307),(24.131374,8.343058),(24.152872,8.317891),(24.17995,8.297712),(24.206615,8.283268),(24.222428,8.277196),(24.257051,8.269186),(24.263186,8.268505),(24.281029,8.266525),(24.295498,8.266525),(24.302733,8.26544),(24.310278,8.261564),(24.326814,8.248567),(24.332085,8.245725),(24.396578,8.267817),(24.431097,8.271408),(24.454869,8.248671),(24.458899,8.239757),(24.464274,8.233194),(24.471508,8.22893),(24.481327,8.226657),(24.51285,8.207149),(24.544682,8.206115),(24.614859,8.217148),(24.670876,8.206839),(24.69041,8.206632),(24.710564,8.204307),(24.742293,8.18684),(24.800171,8.180277),(24.832107,8.16573),(24.917993,8.087027),(24.927811,8.070956),(24.930085,8.035454),(24.950549,8.014396),(24.952616,7.996567),(24.958094,7.989178),(24.964812,7.982615),(24.972666,7.976982),(24.981245,7.972331),(25.029304,7.918846),(25.059173,7.896108),(25.089559,7.884895),(25.103408,7.885773),(25.134414,7.892543),(
25.148573,7.892594),(25.173171,7.888564),(25.180509,7.884016),(25.18547,7.877918),(25.191568,7.872389),(25.216889,7.864069),(25.229705,7.851615),(25.23942,7.835957),(25.249652,7.804331),(25.264535,7.777718),(25.269909,7.760768),(25.270633,7.746143),(25.266809,7.703097),(25.270633,7.688162),(25.276421,7.674261),(25.279418,7.659482),(25.274974,7.641964),(25.252546,7.62243),(25.186504,7.600106),(25.165213,7.5799),(25.164593,7.567343),(25.169037,7.551168),(25.190431,7.501197),(25.249962,7.469984),(25.263295,7.461044),(25.269599,7.454223),(25.279211,7.438358),(25.285102,7.431589),(25.292957,7.427196),(25.3097,7.422029),(25.316211,7.417223),(25.32417,7.402908),(25.330577,7.374125),(25.335952,7.359604),(25.346804,7.345031),(25.360033,7.335574),(25.416154,7.307772),(25.455118,7.278213),(25.481163,7.266173),(25.500335,7.272787),(25.516768,7.269945),(25.53165,7.260695),(25.576609,7.219819),(25.592422,7.211241),(25.65402,7.195376),(25.672727,7.187831),(25.697015,7.168607),(25.705077,7.166385),(25.724507,7.166695),(25.734429,7.162406),(25.750242,7.146542),(25.7613,7.143751),(25.786209,7.142666),(25.791583,7.134398),(25.791686,7.121117),(25.800781,7.104942),(25.815767,7.099774),(25.860726,7.095537),(25.877469,7.087734),(25.881707,7.079827),(25.884084,7.061069),(25.888735,7.05187),(25.920257,7.034145),(25.930386,7.03151),(25.950436,7.028564),(25.960461,7.024482),(25.966249,7.017454),(25.966766,7.009857),(25.969453,7.003553),(25.981132,7.000297),(26.026091,6.99668),(26.032292,6.974097),(26.035806,6.922266),(26.045418,6.899683),(26.050585,6.896686),(26.05689,6.89741),(26.063401,6.899063),(26.069189,6.899063),(26.073633,6.895136),(26.073116,6.88971),(26.071152,6.88356),(26.071566,6.877462),(26.0757,6.864647),(26.077354,6.85333),(26.080971,6.842348),(26.091203,6.830334),(26.113734,6.816458),(26.131821,6.81173),(26.147634,6.804702),(26.178743,6.765919),(26.219464,6.737523),(26.236207,6.72003),(26.270623,6.702486),(26.329741,6.680575),(26.378007,6.65329),(26.379224,6.631362),(26.379867,6.619778),(26.372323,6.609804),(26.356096,6.579496),(26.324367,6.537199),(26.30504,6.496917),(26.296462,6.489527),(26.286436,6.484179),(26.277031,6.477358),(26.27021,6.465653),(26.289124,6.459426),(26.290674,6.444672),(26.285093,6.425888),(26.282819,6.407853),(26.28964,6.387208),(26.300699,6.377829),(26.315789,6.371576),(26.335219,6.360595),(26.351755,6.344368),(26.375837,6.312587),(26.394647,6.300547),(26.436195,6.29171),(26.452008,6.28029),(26.455315,6.254451),(26.455625,6.230474),(26.464927,6.224402),(26.4795,6.225073),(26.495829,6.221146),(26.508955,6.205462),(26.509472,6.186394),(26.500067,6.167971),(26.48353,6.154457),(26.46534,6.144303),(26.4456,6.130221),(26.42927,6.113091),(26.421312,6.09366),(26.424826,6.072447),(26.440639,6.076995),(26.48105,6.104951),(26.498827,6.099836),(26.509885,6.082679),(26.51805,6.061156),(26.527972,6.043172),(26.543578,6.030847),(26.602179,6.010616),(26.634322,6.006482),(26.705016,6.009789),(26.776329,5.981884),(26.794364,5.970489),(26.805061,5.957183),(26.810849,5.912379),(26.81891,5.894628),(26.842165,5.884267),(26.865316,5.885973),(26.882059,5.891941),(26.894255,5.889667),(26.903763,5.866982),(26.916475,5.849644),(26.937456,5.848507),(26.981071,5.859204),(26.99182,5.847706),(27.003499,5.819284),(27.021999,5.805099),(27.029543,5.78988),(27.036055,5.785126),(27.045356,5.784919),(27.063547,5.790862),(27.072745,5.791224),(27.11481,5.775282),(27.123801,5.768745),(27.130933,5.752131),(27.131449,5.741459),(27.136307,5.734018),(27.156254,5.727481),(27.170413,5.72035),(27.182351,5.707456),(27.190774
,5.691824),(27.193823,5.676321),(27.194288,5.667304),(27.196407,5.661464),(27.201109,5.656968),(27.218369,5.645315),(27.219196,5.639993),(27.216716,5.633456),(27.215165,5.616144),(27.210669,5.601623),(27.211341,5.594879),(27.217336,5.585397),(27.222607,5.584363),(27.227309,5.586714),(27.231082,5.587593),(27.234596,5.588833),(27.240073,5.591985),(27.247515,5.592605),(27.257127,5.586068),(27.261054,5.577593),(27.258884,5.559972),(27.260744,5.550257),(27.220126,5.440883),(27.218059,5.42551),(27.220126,5.413082),(27.228239,5.387605),(27.233665,5.33828),(27.2378,5.32319),(27.264465,5.260145),(27.280898,5.23056),(27.30131,5.205187),(27.358516,5.166766),(27.385801,5.143847),(27.402131,5.104496),(27.431173,5.083618),(27.441301,5.070725),(27.414843,5.080725),(27.405335,5.082843),(27.391382,5.092223),(27.380323,5.094393),(27.361203,5.095478),(27.351384,5.097339),(27.342754,5.100594),(27.328647,5.1108),(27.315728,5.122402),(27.301878,5.131988),(27.284412,5.135967),(27.262811,5.138447),(27.239712,5.144881),(27.157494,5.185421),(27.11636,5.200304),(27.073675,5.203327),(27.027476,5.189969),(26.991716,5.171288),(26.961641,5.151185),(26.933684,5.126587),(26.90397,5.094393),(26.899216,5.085996),(26.889604,5.062121),(26.884126,5.053388),(26.872654,5.041786),(26.867073,5.037549),(26.849296,5.039099),(26.848366,5.041373),(26.844955,5.046282),(26.839478,5.051166),(26.832656,5.053388),(26.826145,5.050752),(26.815706,5.040184),(26.808368,5.039099),(26.801857,5.044525),(26.760619,5.088114),(26.753695,5.092274),(26.739432,5.094393),(26.691683,5.094393),(26.681244,5.089277),(26.667602,5.078347),(26.657577,5.073257),(26.644037,5.078347),(26.635666,5.077624),(26.629671,5.067056),(26.62347,5.067056),(26.615512,5.078244),(26.598355,5.080621),(26.579855,5.074343),(26.568176,5.059641),(26.561355,5.067056),(26.554224,5.059847),(26.541511,5.053129),(26.528282,5.048091),(26.519807,5.045972),(26.502547,5.047471),(26.475159,5.057263),(26.462653,5.059641),(26.460689,5.066023),(26.441879,5.104289),(26.429373,5.113798),(26.414284,5.122712),(26.400125,5.133641),(26.390616,5.149015),(26.369429,5.140566),(26.362711,5.135967),(26.353306,5.147775),(26.336356,5.148602),(26.317339,5.146431),(26.291811,5.150979),(26.272484,5.159066),(26.253984,5.172321),(26.246646,5.189969),(26.242098,5.188057),(26.230729,5.185163),(26.226182,5.183147),(26.225665,5.199038),(26.226182,5.204257),(26.213573,5.199141),(26.207165,5.207694),(26.198276,5.238389),(26.195176,5.234772),(26.18453,5.225341),(26.18174,5.229579),(26.175332,5.233687),(26.170888,5.238389),(26.157349,5.232472),(26.150837,5.239966),(26.144223,5.251799),(26.12996,5.258879),(26.125103,5.248595),(26.119625,5.241542),(26.112494,5.238389),(26.104535,5.238596),(26.098438,5.239578),(26.093373,5.241438),(26.088412,5.244591),(26.09172,5.219062),(26.095234,5.211053),(26.086035,5.215238),(26.077147,5.216324),(26.070532,5.212241),(26.067845,5.200846),(26.062677,5.200123),(26.050999,5.200226),(26.037356,5.197953),(26.026917,5.189969),(26.013895,5.199244),(25.996635,5.219243),(25.98599,5.225341),(25.971934,5.227667),(25.962839,5.224514),(25.958394,5.214825),(25.958601,5.197384),(25.943202,5.204102),(25.925632,5.201777),(25.912506,5.190382),(25.910852,5.170099),(25.892972,5.185214),(25.883567,5.203534),(25.872405,5.217099),(25.849357,5.217874),(25.84605,5.211673),(25.836231,5.20002),(25.826103,5.194697),(25.818971,5.214567),(25.814217,5.22162),(25.811427,5.230922),(25.814631,5.244591),(25.810186,5.242679),(25.799438,5.240095),(25.794683,5.238389),(25.792306,5.262832),(25.778354,5.269137),(25.769259,5.26
5158),(25.781144,5.258879),(25.771739,5.245469),(25.757993,5.244436),(25.743731,5.250611),(25.732775,5.258879),(25.705077,5.289601),(25.691124,5.299807),(25.668903,5.306137),(25.667559,5.307248),(25.663115,5.30916),(25.655777,5.317764),(25.650093,5.320297),(25.637897,5.317971),(25.631076,5.311822),(25.623428,5.309935),(25.609165,5.320297),(25.609165,5.326524),(25.614229,5.329934),(25.617227,5.333552),(25.61981,5.337221),(25.622911,5.34076),(25.600483,5.34045),(25.592319,5.349184),(25.589011,5.362284),(25.58126,5.374919),(25.575059,5.374919),(25.561829,5.373006),(25.543743,5.37528),(25.532787,5.372231),(25.540849,5.354429),(25.528963,5.351897),(25.504469,5.342285),(25.49558,5.344533),(25.484935,5.355307),(25.481989,5.356858),(25.479922,5.353189),(25.435067,5.328694),(25.413363,5.323552),(25.39569,5.333965),(25.385354,5.321485),(25.370678,5.316834),(25.36365,5.310556),(25.376466,5.293011),(25.355899,5.286448),(25.343393,5.271927),(25.337192,5.251748),(25.335538,5.228442),(25.328924,5.216013),(25.316211,5.202242),(25.307736,5.185292),(25.314454,5.163252),(25.321482,5.159169),(25.341223,5.155604),(25.348561,5.149015),(25.349284,5.141677),(25.343807,5.121885),(25.34236,5.111472),(25.339156,5.105168),(25.324996,5.088114),(25.321896,5.077004),(25.320966,5.052251),(25.316832,5.042251),(25.307633,5.032278),(25.292337,5.028273),(25.248515,5.024139),(25.239317,5.015535),(25.227535,5.009566),(25.163663,5.005639),(25.154568,5.009308),(25.147643,5.01414),(25.140201,5.017964),(25.129453,5.018635),(25.121701,5.008636),(25.11426,5.001841),(25.105268,4.998766),(25.100927,4.994554),(25.090489,4.974478),(25.084804,4.954221),(25.076123,4.951844),(25.040569,4.962101),(25.006463,4.980937),(24.969359,4.991402),(24.958094,4.99135),(24.950859,4.987681),(24.932359,4.974426),(24.927088,4.971481),(24.90435,4.966804),(24.868177,4.944376),(24.848333,4.936728),(24.836861,4.93771),(24.827662,4.941922),(24.819498,4.94448),(24.811023,4.940449),(24.800067,4.928899),(24.789422,4.919856),(24.77826,4.918538),(24.766374,4.929881),(24.749838,4.918745),(24.730614,4.917505),(24.664365,4.924403),(24.654236,4.929416),(24.65744,4.939777),(24.670773,4.957218),(24.656923,4.967269),(24.627468,4.976907),(24.615479,4.985097),(24.610001,4.996725),(24.610105,5.016723),(24.601216,5.026103),(24.588607,5.031064),(24.563389,5.034888),(24.553467,5.039099),(24.547369,5.046489),(24.538688,5.065558),(24.533003,5.073257),(24.513676,5.087236),(24.487218,5.100801),(24.480844,5.102335),(24.459623,5.107441),(24.436679,5.100594),(24.433061,5.089329),(24.433475,5.075531),(24.430684,5.067418),(24.416938,5.073257),(24.41146,5.081861),(24.401642,5.111472),(24.396371,5.122298),(24.382522,5.107855),(24.367846,5.084549),(24.361231,5.062948),(24.37198,5.053388),(24.396888,5.048194),(24.398645,5.036024),(24.386139,5.022356),(24.368466,5.012434),(24.296635,5.003262),(24.286507,4.995071),(24.284646,4.992435),(24.272968,4.964013),(24.26966,4.952464),(24.269557,4.943188),(24.267697,4.935721),(24.259222,4.929881),(24.25302,4.929881),(24.246199,4.954066),(24.229766,4.961171),(24.209612,4.956184),(24.190905,4.94417),(24.162897,4.907789),(24.152768,4.902596),(24.139229,4.905438),(24.119902,4.9189),(24.10843,4.92306),(24.08962,4.920218),(24.072463,4.91027),(24.046935,4.888928),(23.995775,4.865053),(23.977998,4.854149),(23.963116,4.868696),(23.957121,4.870815),(23.950817,4.867792),(23.946476,4.861642),(23.943169,4.851695),(23.942755,4.840507),(23.947406,4.830585),(23.948336,4.817692),(23.924358,4.817795),(23.88188,4.82689),(23.874749,4.824978),(23.864414,4.81609),(23.855215,
4.813222),(23.847877,4.813997),(23.82824,4.819345),(23.816665,4.820611),(23.798475,4.814746),(23.760027,4.787435),(23.73791,4.779063),(23.692848,4.773405),(23.672694,4.767204),(23.63404,4.745138),(23.588048,4.73395),(23.566551,4.724441),(23.506089,4.676434),(23.488416,4.669845),(23.465058,4.66721),(23.448315,4.659407),(23.437153,4.646565),(23.430022,4.62884),(23.428265,4.617342),(23.428368,4.608609),(23.427438,4.601348),(23.42258,4.594062),(23.414519,4.590832),(23.396019,4.591607),(23.38837,4.587266),(23.371214,4.597266),(23.353954,4.60357),(23.335867,4.6068),(23.316437,4.60773),(23.313026,4.610211),(23.301967,4.621709),(23.29959,4.625119),(23.295146,4.6276),(23.273132,4.632018),(23.264864,4.635067),(23.251841,4.662972),(23.224246,4.680051),(23.220835,4.683514),(23.20864,4.690542),(23.183732,4.725268),(23.168746,4.73811),(23.12389,4.715631),(23.099396,4.711652),(23.077382,4.721057),(23.066633,4.730178),(23.055161,4.7378),(23.042758,4.742967),(23.029323,4.744931),(23.017334,4.750512),(23.011132,4.763638),(23.007205,4.778443),(22.99625,4.799501),(22.985708,4.826425),(22.977439,4.834306),(22.965037,4.835287),(22.953048,4.830792),(22.941473,4.82472),(22.92969,4.820611),(22.915221,4.821154),(22.905919,4.824203),(22.898374,4.823583),(22.888763,4.813222),(22.882872,4.800742),(22.882355,4.788494),(22.886799,4.776686),(22.895584,4.765395),(22.887316,4.755499),(22.865302,4.738678),(22.853829,4.711264),(22.837293,4.714235),(22.818069,4.724596),(22.802876,4.730643),(22.78603,4.724648),(22.776418,4.710101),(22.765153,4.676021),(22.757401,4.660802),(22.74872,4.64827),(22.737867,4.637754),(22.723605,4.62884),(22.723811,4.617032),(22.729806,4.596749),(22.73115,4.587266),(22.728669,4.58029),(22.718954,4.562772),(22.716783,4.556519),(22.713683,4.551222),(22.700144,4.541068),(22.697043,4.535745),(22.697146,4.512025),(22.695079,4.501483),(22.689498,4.491665),(22.675649,4.48474),(22.654358,4.482828),(22.61033,4.484224),(22.59245,4.473707),(22.586972,4.449394),(22.589039,4.421953),(22.594,4.402316),(22.61126,4.381853),(22.613741,4.374359),(22.609297,4.363352),(22.599891,4.352552),(22.588006,4.34418),(22.576534,4.340847),(22.568575,4.331546),(22.539327,4.278164),(22.53974,4.266924),(22.545734,4.245944),(22.546148,4.237184),(22.540464,4.227262),(22.52217,4.211346),(22.518243,4.206799),(22.510491,4.19127),(22.492714,4.174036),(22.457368,4.149076),(22.451532,4.146672),(22.43184,4.13856),(22.422641,4.134787),(22.303269,4.128716),(22.207254,4.150316),(22.184723,4.164605),(22.161882,4.173183),(22.15165,4.179462),(22.133047,4.19866),(22.127879,4.203078),(22.108449,4.209848),(22.057806,4.219537),(22.036613,4.229439),(22.00892,4.242378),(22.001478,4.244652),(21.989593,4.244238),(21.984528,4.242946),(21.980291,4.240621),(21.970783,4.237184),(21.949905,4.233464),(21.891614,4.237184),(21.86774,4.241551),(21.851927,4.25168),(21.838077,4.263255),(21.819887,4.271937),(21.810999,4.273177),(21.78206,4.271937),(21.771001,4.273487),(21.76604,4.277208),(21.76325,4.281678),(21.758496,4.285605),(21.737308,4.293382),(21.722942,4.295139),(21.688939,4.292452),(21.651112,4.295656),(21.634162,4.294209),(21.538147,4.244652),(21.527812,4.249251),(21.508072,4.249406),(21.49722,4.251473),(21.489882,4.256253),(21.48244,4.263126),(21.473138,4.269301),(21.460219,4.271937),(21.416915,4.270076),(21.397691,4.271627),(21.37485,4.278164),(21.358313,4.285553),(21.310358,4.316094),(21.3055,4.323148),(21.298886,4.330228),(21.288344,4.33338),(21.275321,4.332527),(21.265916,4.330073),(21.258371,4.32599),(21.250723,4.320358),(21.244625,4.313769),(21.239148,
4.305604),(21.23243,4.297827),(21.223438,4.292452),(21.210932,4.291574),(21.202974,4.296638),(21.195739,4.302917),(21.185611,4.306069),(21.16556,4.307154),(21.158119,4.310203),(21.150057,4.326714),(21.137965,4.332682),(21.113574,4.340847),(21.103445,4.353275),(21.084222,4.388235),(21.076057,4.395469),(21.059624,4.398596),(20.963505,4.433736),(20.871521,4.453373),(20.832351,4.450686),(20.818708,4.443528),(20.807029,4.434252),(20.791836,4.426191),(20.767859,4.42278),(20.68838,4.42278),(20.603114,4.409732),(20.580273,4.415003),(20.559809,4.428361),(20.456713,4.524816),(20.452322,4.528924),(20.44209,4.550499),(20.447051,4.569671),(20.45718,4.58644),(20.463071,4.600935),(20.456456,4.621502),(20.44023,4.64411),(20.420593,4.662352),(20.403746,4.669845),(20.395478,4.677106),(20.373671,4.724441),(20.354034,4.755396),(20.338944,4.771906),(20.322511,4.779063),(20.301737,4.781647),(20.254815,4.797124),(20.236625,4.806349),(20.217194,4.822446),(20.171719,4.87836),(20.152599,4.890685),(20.08635,4.916264),(20.05462,4.937813),(20.031056,4.957993),(20.003977,4.974478),(19.915404,4.993107),(19.890909,5.002125),(19.880367,5.015535),(19.87551,5.031064),(19.858163,5.060305),(19.850188,5.073748),(19.838613,5.087546),(19.82466,5.096977),(19.779185,5.118164),(19.748799,5.124417),(19.728749,5.133848),(19.71955,5.135967),(19.605552,5.138292),(19.568759,5.15519),(19.548605,5.151366),(19.517082,5.135967),(19.496515,5.133796),(19.431506,5.135967),(19.408975,5.130386),(19.394196,5.116976),(19.373112,5.087546),(19.283815,5.032278),(19.236996,5.010781),(19.229761,5.002176),(19.214672,4.976131),(19.195655,4.950345),(19.179015,4.946418),(19.133023,4.946934),(19.116176,4.940449),(19.106741,4.931518),(19.08331,4.90934),(19.069047,4.891253),(19.057782,4.867792),(19.048997,4.836528),(19.044139,4.82689),(19.035768,4.818002),(19.02688,4.811516),(19.019748,4.8033),(19.011583,4.765033),(18.998044,4.750641),(18.980474,4.739402),(18.962284,4.724441),(18.928178,4.666435),(18.923837,4.662869),(18.90451,4.655867),(18.900169,4.652146),(18.897895,4.650647),(18.886526,4.621399),(18.85242,4.594062),(18.828235,4.559981),(18.776972,4.433451),(18.753304,4.400611),(18.721058,4.377357),(18.674343,4.361311),(18.654086,4.357229),(18.633932,4.355963),(18.614295,4.359813),(18.595795,4.371259),(18.576468,4.372757),(18.556831,4.354154),(18.541948,4.327851),(18.53709,4.306069),(18.545048,4.289016),(18.57254,4.265684),(18.578638,4.254858),(18.583909,4.235789),(18.619669,4.169566),(18.632692,4.130421),(18.646541,4.045982),(18.645301,3.982445),(18.64158,3.958261),(18.634552,3.93573),(18.613675,3.901468),(18.611711,3.879093),(18.612024,3.866611),(18.612848,3.833746),(18.593004,3.709775),(18.626387,3.476869),(18.582462,3.477386),(18.570267,3.489607),(18.55435,3.526375),(18.513422,3.592237),(18.497093,3.61146),(18.472081,3.632053),(18.459059,3.631382),(18.448,3.618023),(18.429707,3.600221),(18.420612,3.596371),(18.403662,3.597353),(18.392293,3.595854),(18.387849,3.591668),(18.383715,3.578),(18.379891,3.574382),(18.372863,3.574537),(18.361597,3.58017),(18.355603,3.581462),(18.275918,3.574382),(18.264859,3.574744),(18.255867,3.577225),(18.248529,3.574899),(18.242535,3.561127),(18.235403,3.530923),(18.231579,3.521156),(18.220004,3.503947),(18.202537,3.487333),(18.181556,3.477386),(18.159852,3.480305),(18.146727,3.494904),(18.134014,3.534824),(18.116444,3.552007),(18.082028,3.563453),(18.047094,3.563143),(18.013091,3.55397),(17.981879,3.538803),(17.969476,3.537511),(17.959865,3.541077),(17.924518,3.563453),(17.919247,3.565597),(17.913562,3.561334),(17.875322,3.5
42731),(17.856408,3.537305),(17.842146,3.540147),(17.835324,3.555831),(17.83057,3.575674),(17.821165,3.593994),(17.808349,3.608592),(17.793156,3.617506),(17.784475,3.61823),(17.765148,3.614354),(17.755949,3.614509),(17.749645,3.618075),(17.73993,3.629159),(17.733212,3.632208),(17.72608,3.631847),(17.714505,3.627506),(17.70913,3.626421),(17.627275,3.626317),(17.586761,3.632234),(17.550484,3.648151),(17.513484,3.679182),(17.508936,3.681766),(17.494983,3.683213),(17.490126,3.68714),(17.486198,3.700292),(17.481858,3.704762),(17.458913,3.708276),(17.442687,3.701274),(17.427494,3.690112),(17.414489,3.683415),(17.375714,3.663447),(17.355767,3.638435),(17.334373,3.618514),(17.298096,3.615904),(17.272258,3.62456),(17.259442,3.625904),(17.244973,3.620607),(17.231123,3.609109),(17.222959,3.598515),(17.213967,3.589291),(17.197224,3.58172),(17.182858,3.579602),(17.142757,3.580842),(17.129011,3.57707),(17.102139,3.566605),(17.088393,3.564435),(17.061418,3.56614),(17.052426,3.56397),(17.035787,3.556141),(17.028862,3.551541),(17.017596,3.5418),(17.009225,3.538131),(17.003127,3.537718),(16.989691,3.539475),(16.98194,3.537976),(16.968607,3.533481),(16.960546,3.538441),(16.953208,3.546891),(16.942252,3.553092),(16.932434,3.553919),(16.904735,3.550637),(16.895537,3.552368),(16.877243,3.559164),(16.867838,3.561541),(16.853162,3.560094),(16.851818,3.551903),(16.854609,3.541335),(16.852645,3.532809),(16.833008,3.522861),(16.811924,3.521672),(16.729552,3.542214),(16.709398,3.543712),(16.686351,3.542214),(16.639428,3.528597),(16.598501,3.501674),(16.567701,3.464389),(16.551682,3.419767),(16.542793,3.336154),(16.517782,3.273238),(16.517162,3.248692),(16.511477,3.227246),(16.477061,3.179394),(16.465485,3.156501),(16.464762,3.1318),(16.47272,3.110612),(16.483159,3.090148),(16.490807,3.067824),(16.484192,3.031909),(16.445538,2.95765),(16.448329,2.913467),(16.460525,2.894605),(16.474994,2.878378),(16.483779,2.860395),(16.479128,2.836314),(16.450603,2.775594),(16.427492,2.726493),(16.413706,2.697201),(16.371124,2.606922),(16.325649,2.510494),(16.293506,2.442074),(16.248031,2.345594),(16.196665,2.236454),(16.192841,2.239503),(16.179715,2.246996),(16.181265,2.256556),(16.18695,2.26932),(16.186536,2.277692),(16.182919,2.281154),(16.169069,2.289887),(16.165969,2.294796),(16.164729,2.314795),(16.162868,2.323839),(16.159251,2.332934),(16.152326,2.343269),(16.133413,2.364818),(16.120907,2.389571),(16.112949,2.410448),(16.111502,2.418562),(16.111502,2.455872),(16.107058,2.474165),(16.094345,2.497523),(16.090315,2.510494),(16.091245,2.532766),(16.106851,2.567028),(16.111502,2.5862),(16.108505,2.606612),(16.093105,2.641494),(16.090315,2.661906),(16.093725,2.67312),(16.101373,2.687847),(16.110468,2.701438),(16.0991,2.70397),(16.086284,2.704952),(16.072848,2.702834),(16.062616,2.703195),(16.059309,2.711825),(16.060032,2.72583),(16.056208,2.759833),(16.059929,2.786188),(16.086904,2.813163),(16.095172,2.836779),(16.092382,2.863289),(16.08246,2.885665),(16.070161,2.906594),(16.055381,2.93977),(16.034194,2.970621),(16.026236,2.978992),(16.015797,2.983488),(16.006082,2.983953),(15.995437,2.981473),(15.982001,2.976719),(15.977763,2.980956),(15.974249,2.985762),(15.971562,2.991188),(15.95823,3.041521),(15.950065,3.061003),(15.933425,3.081777),(15.913995,3.097952),(15.897768,3.103533),(15.854773,3.098985),(15.850949,3.100432),(15.840201,3.107408),(15.834206,3.108545),(15.829142,3.106633),(15.817566,3.100122),(15.813536,3.098985),(15.80103,3.100897),(15.793175,3.103274),(15.78563,3.107977),(15.7107,3.187455),(15.616855,3.286777),(15.517946,3
.391603),(15.413973,3.50188),(15.329741,3.590996),(15.262306,3.662589),(15.25264,3.672852),(15.171301,3.758971),(15.084898,3.885836),(15.025987,4.026086),(15.043143,4.026138),(15.075183,4.017973),(15.091719,4.015854),(15.111253,4.019937),(15.150424,4.038514),(15.172438,4.043811),(15.192075,4.052234),(15.189801,4.063112),(15.177709,4.071458),(15.162413,4.073938),(15.14846,4.073938),(15.142569,4.08045),(15.139262,4.094583),(15.132854,4.108536),(15.117764,4.114324),(15.102571,4.112412),(15.091202,4.121791),(15.084484,4.148611),(15.082934,4.205507),(15.072806,4.265942),(15.063814,4.293434),(15.048518,4.321649),(15.013894,4.364257),(15.007383,4.37777),(15.000562,4.403195),(14.993947,4.413298),(14.982062,4.420481),(14.938344,4.435932),(14.779697,4.545047),(14.74373,4.57998),(14.717272,4.622019),(14.704249,4.669354),(14.697325,4.745215),(14.69071,4.762139),(14.691847,4.768702),(14.696084,4.779735),(14.700632,4.804282),(14.701459,4.814384),(14.686989,4.908875),(14.681615,4.920941),(14.675827,4.93076),(14.672107,4.940811),(14.674484,4.967579),(14.669833,4.977113),(14.663425,4.985717),(14.659498,4.997035),(14.660132,5.01144),(14.660324,5.015793),(14.667456,5.052768),(14.663012,5.091447),(14.663218,5.141832),(14.658877,5.164544),(14.651539,5.187514),(14.640481,5.208236),(14.624461,5.224049),(14.603584,5.234255),(14.560485,5.248595),(14.539712,5.261799),(14.523899,5.279679),(14.519868,5.290583),(14.528033,5.293011),(14.539608,5.303579),(14.548393,5.320503),(14.553974,5.338228),(14.555938,5.351328),(14.560796,5.355462),(14.570821,5.370939),(14.579296,5.387605),(14.579606,5.395382),(14.592835,5.400033),(14.597279,5.41148),(14.596246,5.440108),(14.599553,5.456645),(14.614126,5.48486),(14.617433,5.495299),(14.61888,5.525452),(14.617226,5.53788),(14.610612,5.553667),(14.596556,5.577232),(14.591491,5.590952),(14.590148,5.608289),(14.592835,5.62387),(14.610612,5.669138),(14.624358,5.723605),(14.631075,5.738049),(14.624047,5.758306),(14.617433,5.864966),(14.602964,5.883389),(14.598003,5.903491),(14.584773,5.923412),(14.572474,5.924187),(14.545396,5.907573),(14.530616,5.90592),(14.499197,5.909537),(14.482144,5.90964),(14.465711,5.920673),(14.458166,5.936409),(14.453825,5.953875),(14.446901,5.970024),(14.432328,5.98545),(14.417548,5.997258),(14.406076,6.01103),(14.397705,6.019169),(14.389436,6.031054),(14.387266,6.039038),(14.390263,6.045782),(14.397705,6.053947),(14.402459,6.054877),(14.412898,6.047022),(14.418169,6.047126),(14.422509,6.051363),(14.426023,6.059864),(14.429124,6.064179),(14.433568,6.073868),(14.432709,6.079123),(14.431811,6.084617),(14.430571,6.088079),(14.443903,6.100249),(14.467054,6.122599),(14.482144,6.127069),(14.510979,6.155336),(14.525966,6.174069),(14.534957,6.190088),(14.576402,6.189778),(14.719029,6.257862),(14.734842,6.270445),(14.772255,6.318323),(14.782384,6.336772),(14.784658,6.347805),(14.784348,6.358683),(14.780007,6.382221),(14.782177,6.393616),(14.807292,6.4278),(14.921874,6.686414),(14.932452,6.710289),(14.946095,6.72574),(14.962321,6.736696),(15.022369,6.766539),(15.034565,6.776771),(15.042213,6.790775),(15.05906,6.836276),(15.080247,6.878083),(15.113217,6.964692),(15.128926,7.026497),(15.136161,7.045308),(15.147013,7.061844),(15.156108,7.067787),(15.179052,7.07838),(15.187527,7.088612),(15.190214,7.098793),(15.185357,7.156877),(15.186597,7.166334),(15.191248,7.177341),(15.205097,7.200233),(15.223908,7.247517),(15.246564,7.266731),(15.261321,7.279247),(15.338216,7.32219),(15.385345,7.358363),(15.389169,7.364151),(15.397437,7.380739),(15.403225,7.386372),(15.409633,7.387664),(
15.428443,7.388801),(15.432784,7.389731),(15.434954,7.402908),(15.415524,7.42084),(15.419451,7.43655),(15.473298,7.509103),(15.481049,7.523263),(15.515569,7.512204),(15.55195,7.509879),(15.624813,7.519749),(15.668532,7.516287),(15.720001,7.468641),(15.758345,7.455567),(15.793899,7.457996),(15.92526,7.488175),(15.94314,7.495306),(15.975696,7.515201),(15.990372,7.529412),(16.012283,7.560315),(16.026339,7.574888),(16.042979,7.583931),(16.0652,7.591217),(16.128762,7.599486),(16.162662,7.611113),(16.185606,7.610751),(16.207207,7.613542),(16.227257,7.625582),(16.26188,7.651885),(16.282758,7.660102),(16.370814,7.672504),(16.38301,7.680669),(16.387144,7.694674),(16.387041,7.762628),(16.392208,7.783609),(16.407401,7.796063),(16.41815,7.795856),(16.427245,7.791464),(16.437373,7.788363),(16.450913,7.792084),(16.457217,7.799628),(16.469413,7.825828),(16.475717,7.835802),(16.491737,7.849238),(16.509204,7.858281),(16.548685,7.870012),(16.550131,7.861692),(16.550752,7.8527),(16.550235,7.843502),(16.54486,7.813013),(16.549305,7.794719),(16.560157,7.779733),(16.576383,7.767796),(16.586822,7.763558),(16.596434,7.761129),(16.605219,7.757202),(16.613073,7.74821),(16.618138,7.734309),(16.616174,7.723871),(16.611833,7.713897),(16.609869,7.701598),(16.61421,7.679584),(16.624546,7.66868),(16.6632,7.657415),(16.686867,7.645788),(16.709812,7.627753),(16.746709,7.586566),(16.768619,7.550238),(16.782365,7.541143),(16.805516,7.543675),(16.812854,7.547706),(16.815071,7.549544),(16.836729,7.567498),(16.839623,7.567446),(16.850372,7.565896),(16.853886,7.567498),(16.855849,7.575404),(16.852439,7.580262),(16.848098,7.584706),(16.847168,7.591579),(16.854816,7.611475),(16.865461,7.624445),(16.880654,7.632869),(16.919618,7.644082),(16.962716,7.650697),(16.98194,7.648733),(16.988968,7.660774),(16.997546,7.666717),(17.008295,7.667647),(17.038991,7.662479),(17.042091,7.667027),(17.041058,7.675502),(17.045605,7.6847),(17.059351,7.696534),(17.079091,7.689919),(17.090254,7.683822),(17.095525,7.67824),(17.101932,7.677724),(17.116298,7.68687),(17.135522,7.705112),(17.170559,7.747539),(17.188645,7.764127),(17.196397,7.76516),(17.204562,7.763817),(17.211073,7.768312),(17.214484,7.786813),(17.220685,7.798181),(17.23379,7.810801),(17.234637,7.811617),(17.250864,7.822831),(17.385739,7.870477),(17.405997,7.883034),(17.419122,7.898227),(17.466355,7.884119),(17.478344,7.891819),(17.487232,7.914453),(17.503665,7.926236),(17.523819,7.93099),(17.559786,7.934711),(17.580766,7.940602),(17.59999,7.950369),(17.62066,7.978739),(17.639884,7.985043),(17.661795,7.986284),(17.679778,7.985198),(17.817238,7.962099),(17.858785,7.960445),(17.897853,7.967473),(17.977228,7.997187),(18.070452,8.019202),(18.508255,8.030674),(18.589283,8.047882),(18.617912,8.090127),(18.618636,8.138652),(18.638583,8.177642),(18.672586,8.20751),(18.715064,8.228672),(18.773768,8.248619),(18.791235,8.257378),(18.813042,8.276421),(18.8549,8.339802),(18.887457,8.370963),(18.901306,8.388636),(18.910401,8.412976),(18.920529,8.432122),(19.020472,8.545578),(19.061296,8.625624),(19.072261,8.631872),(19.081657,8.637226),(19.091475,8.652858),(19.124135,8.675079),(19.103877,8.698049),(19.073492,8.720528),(19.057782,8.729701),(19.048687,8.745669),(18.929108,8.796544),(18.917325,8.805148),(18.912468,8.818403),(18.909677,8.835327),(18.902959,8.84481),(18.886526,8.835844),(18.869783,8.849409),(18.86968,8.864111),(18.892727,8.897933),(18.901409,8.89398),(18.910608,8.894239),(18.917739,8.898812),(18.920736,8.907855),(18.922907,8.918216),(18.928591,8.921679),(18.936446,8.923358),(18.952052,8.931988)
,(18.971896,8.938215),(18.975927,8.941988),(18.97851,8.949455),(18.984815,8.956431),(18.999491,8.969635),(19.021919,8.985215),(19.060676,9.00418),(19.10057,9.015265),(19.126718,9.007177),(19.133643,9.007177),(19.142118,9.008676),(19.168783,9.002216),(19.174571,9.003767),(19.179635,9.015265),(19.192037,9.020174),(19.232862,9.022293),(19.253429,9.027951),(19.263248,9.028261),(19.275236,9.021983),(19.285778,9.01281),(19.295494,9.009503),(19.304795,9.020846),(19.316268,9.01312),(19.333321,9.006557),(19.352234,9.002061),(19.369391,9.000382),(19.38262,9.003198),(19.420964,9.017435),(19.430162,9.012862),(19.506644,9.013999),(19.511811,9.012293),(19.515842,9.009451),(19.52132,9.008831),(19.530828,9.013999),(19.553772,9.013327),(19.583125,9.025342),(19.613614,9.031672),(19.640072,9.013999),(19.656402,9.021259),(19.700017,9.020587),(19.709008,9.024567),(19.713969,9.02914),(19.738154,9.039372),(19.746836,9.04131),(19.758411,9.041982),(19.784663,9.048751),(19.80554,9.051128),(19.889463,9.046374),(19.902175,9.0518),(19.914474,9.069267),(19.928737,9.062575),(19.932091,9.063331),(19.943413,9.065882),(19.969044,9.082883),(19.976486,9.084175),(19.985477,9.083193),(19.993022,9.084589),(19.996329,9.092831),(19.997156,9.098619),(19.999533,9.104303),(20.002944,9.108515),(20.006871,9.11022),(20.014416,9.106345),(20.01793,9.091048),(20.024338,9.089756),(20.030332,9.093865),(20.029919,9.098981),(20.028369,9.104562),(20.031159,9.11022),(20.060512,9.134353),(20.068676,9.138126),(20.076945,9.13673),(20.090587,9.131123),(20.099372,9.13071),(20.100509,9.134146),(20.099579,9.141174),(20.101026,9.148073),(20.109604,9.151174),(20.117149,9.145748),(20.125727,9.133785),(20.137613,9.121822),(20.154563,9.116421),(20.169549,9.120891),(20.197971,9.140503),(20.212647,9.144973),(20.222982,9.14195),(20.234454,9.13456),(20.256985,9.116421),(20.276829,9.120736),(20.359408,9.116421),(20.378322,9.120065),(20.384419,9.123579),(20.40292,9.136989),(20.412428,9.142053),(20.423073,9.143241),(20.435166,9.138126),(20.455423,9.159313),(20.467618,9.184143),(20.481778,9.204659),(20.507203,9.213263),(20.51299,9.223883),(20.502655,9.247421),(20.496144,9.270882),(20.514231,9.281502),(20.526426,9.283621),(20.533661,9.289641),(20.537278,9.299201),(20.538208,9.311939),(20.541826,9.321629),(20.550921,9.320259),(20.572315,9.308218),(20.573555,9.309149),(20.592262,9.308063),(20.592882,9.308218),(20.609729,9.302792),(20.616653,9.302017),(20.667916,9.302017),(20.654894,9.342971),(20.695822,9.363461),(20.740883,9.377129),(20.750495,9.384545),(20.757937,9.377129),(20.766618,9.38656),(20.771062,9.398058),(20.77654,9.407799),(20.788322,9.411881),(20.792353,9.414),(20.798244,9.423379),(20.801965,9.42555),(20.806719,9.423328),(20.81323,9.419116),(20.813884,9.418847),(20.820258,9.416222),(20.82615,9.418083),(20.832661,9.437255),(20.834314,9.462473),(20.841446,9.484409),(20.86377,9.493789),(20.874209,9.502496),(20.912656,9.544328),(20.921234,9.558978),(20.925058,9.569598),(20.934257,9.577065),(20.955961,9.589416),(20.960508,9.598123),(20.962265,9.606934),(20.96764,9.610603),(20.983143,9.603678),(20.981799,9.616778),(20.980145,9.619931),(20.975804,9.624168),(20.975804,9.630964),(20.989964,9.635615),(20.996062,9.648301),(20.996888,9.682175),(21.000092,9.697394),(21.014252,9.724472),(21.017352,9.743592),(21.019936,9.747933),(21.025621,9.753256),(21.032339,9.75801),(21.037816,9.760671),(21.039056,9.76217),(21.04009,9.764754),(21.042674,9.767079),(21.048358,9.768165),(21.052906,9.765891),(21.062207,9.756201),(21.065825,9.754496),(21.073163,9.757648),(21.080294,9.76260
9),(21.09311,9.77434),(21.097037,9.775373),(21.101585,9.774237),(21.105202,9.77403),(21.106753,9.778061),(21.105926,9.791884),(21.106753,9.79545),(21.121739,9.831907),(21.131971,9.849426),(21.187264,9.885134),(21.195429,9.887305),(21.198427,9.902988),(21.205661,9.916838),(21.256924,9.975749),(21.27129,9.987247),(21.28514,9.978591),(21.326067,9.962752),(21.339503,9.95991),(21.374333,9.973113),(21.406889,10.005566),(21.433348,10.046055),(21.485437,10.164316),(21.513239,10.200515),(21.544969,10.219997),(21.554477,10.218654),(21.566673,10.214494),(21.579282,10.212272),(21.589721,10.216613),(21.593441,10.214546),(21.625274,10.224493),(21.630648,10.227439),(21.65628,10.233666),(21.668889,10.249195),(21.67695,10.269168),(21.688939,10.288908),(21.697414,10.292422),(21.708886,10.293533),(21.718808,10.296634),(21.723046,10.305987),(21.722116,10.314178),(21.717775,10.328983),(21.716845,10.339809),(21.720462,10.360816),(21.751054,10.41182),(21.743613,10.418978),(21.742786,10.425773),(21.74444,10.432439),(21.744233,10.439106),(21.738032,10.44802),(21.720462,10.468018),(21.716845,10.47732),(21.714674,10.493702),(21.705476,10.53202),(21.703305,10.552406),(21.705476,10.571811),(21.714674,10.601395),(21.716845,10.621007),(21.722632,10.636716),(21.736482,10.646173),(21.753431,10.650721),(21.768418,10.651987),(21.774205,10.655165),(21.778753,10.66227),(21.784851,10.669324),(21.795393,10.672476),(21.803558,10.673665),(21.822575,10.678755),(21.83291,10.679944),(21.852754,10.670151),(21.863606,10.667929),(21.86836,10.676223),(21.872081,10.677592),(21.895025,10.699193),(21.925617,10.720122),(21.943084,10.728261),(22.003855,10.743273),(22.014604,10.754564),(22.004889,10.775519),(22.0114,10.789291),(22.018325,10.809884),(22.029487,10.828746),(22.048607,10.836988),(22.132426,10.828539),(22.148239,10.830761),(22.176351,10.816214),(22.189787,10.837298),(22.200639,10.868666),(22.229371,10.891972),(22.24012,10.905899),(22.253039,10.915304),(22.268439,10.908999),(22.2765,10.908534),(22.287249,10.91595),(22.305956,10.932564),(22.318772,10.939359),(22.339442,10.947731),(22.361146,10.953648),(22.36469,10.954209),(22.391842,10.958505),(22.407035,10.963053),(22.419231,10.970727),(22.437627,10.991552),(22.447136,10.99827),(22.460468,11.000828),(22.476281,10.99796),(22.490751,10.993154),(22.502843,10.992173),(22.511938,11.000828),(22.534676,10.98052),(22.55576,10.978969)] +Switzerland 
[(8.617437,47.757319),(8.62984,47.762796),(8.635007,47.784604),(8.644102,47.791012),(8.657022,47.788118),(8.666633,47.778273),(8.674488,47.766698),(8.68193,47.75874),(8.692265,47.757164),(8.703324,47.758714),(8.713142,47.757422),(8.719757,47.747319),(8.71707,47.743547),(8.703737,47.730033),(8.70043,47.723496),(8.704667,47.715332),(8.712625,47.708691),(8.715106,47.701069),(8.71707,47.694558),(8.769883,47.695074),(8.761718,47.70125),(8.77071,47.720861),(8.797581,47.720034),(8.830241,47.707192),(8.856079,47.690682),(8.837682,47.687788),(8.837786,47.680838),(8.851935,47.671282),(8.852668,47.670786),(8.881711,47.656136),(8.906205,47.651795),(8.945376,47.654302),(8.981756,47.662156),(8.997673,47.673835),(9.016586,47.6789),(9.128104,47.670425),(9.183398,47.670425),(9.196937,47.656136),(9.234351,47.656162),(9.273211,47.65009),(9.547482,47.534547),(9.553059,47.516891),(9.554951,47.5109),(9.58451,47.480721),(9.621717,47.469197),(9.650346,47.452092),(9.649519,47.409717),(9.639804,47.394524),(9.601047,47.36127),(9.596396,47.352305),(9.591228,47.334683),(9.587404,47.32781),(9.553298,47.299853),(9.521155,47.262801),(9.504618,47.243732),(9.487358,47.210014),(9.484981,47.176346),(9.492629,47.15981),(9.503481,47.145392),(9.511853,47.129372),(9.51237,47.10803),(9.502861,47.094698),(9.487565,47.083949),(9.475886,47.073226),(9.477023,47.063898),(9.499554,47.059351),(9.560636,47.0524),(9.581203,47.05687),(9.59991,47.053486),(9.65231,47.05793),(9.669053,47.056199),(9.857982,47.015478),(9.856328,47.004083),(9.860566,47.001602),(9.866767,47.001938),(9.870591,46.998838),(9.870591,46.992947),(9.866457,46.983387),(9.863976,46.959925),(9.860772,46.949151),(9.862426,46.939772),(9.875138,46.927421),(9.899943,46.914398),(10.006913,46.890757),(10.045567,46.865564),(10.068098,46.856624),(10.1113,46.847116),(10.125188,46.846751),(10.13197,46.846573),(10.157808,46.851612),(10.201423,46.86683),(10.211655,46.877036),(10.214342,46.884685),(10.215169,46.893108),(10.219924,46.905769),(10.235116,46.923313),(10.251343,46.92538),(10.270773,46.921892),(10.295681,46.922693),(10.296198,46.941374),(10.313665,46.964318),(10.338883,46.98411),(10.367925,46.995505),(10.373403,46.996254),(10.378984,46.995505),(10.384255,46.993153),(10.384358,46.993153),(10.384358,46.992998),(10.394693,46.985402),(10.415571,46.962406),(10.449574,46.943906),(10.458462,46.936619),(10.463836,46.919747),(10.451434,46.88577),(10.453811,46.864427),(10.44854,46.832233),(10.444923,46.823241),(10.439032,46.816885),(10.417224,46.79885),(10.419085,46.783967),(10.426216,46.76942),(10.428696,46.755648),(10.416604,46.743014),(10.399654,46.735546),(10.395623,46.7264),(10.396554,46.715005),(10.394383,46.70082),(10.384771,46.689012),(10.373919,46.681906),(10.369165,46.672398),(10.377537,46.653277),(10.395623,46.638808),(10.438205,46.635656),(10.459082,46.623564),(10.466627,46.604288),(10.465903,46.578476),(10.457945,46.553697),(10.451833,46.546702),(10.443993,46.537729),(10.425906,46.535326),(10.354282,46.548323),(10.319452,46.546049),(10.306637,46.547496),(10.295371,46.551087),(10.289067,46.555687),(10.283796,46.560725),(10.275941,46.565531),(10.234703,46.575298),(10.230259,46.58615),(10.235737,46.606691),(10.233773,46.617982),(10.217856,46.626974),(10.192122,46.626819),(10.09745,46.608035),(10.087839,46.604392),(10.083498,46.597002),(10.071199,46.564394),(10.062931,46.556746),(10.04133,46.541863),(10.032752,46.532975),(10.031408,46.525792),(10.031201,46.503829),(10.02686,46.493184),(10.028101,46.483934),(10.030478,46.476673),(10.035335,46.471066),(10.044017,46.466984),(10.0
26344,46.446262),(10.042053,46.432722),(10.071405,46.424816),(10.116157,46.418822),(10.133417,46.414016),(10.140755,46.402905),(10.13321,46.381098),(10.125976,46.37438),(10.104995,46.361357),(10.09745,46.351642),(10.092386,46.338103),(10.091766,46.328956),(10.095797,46.320533),(10.104892,46.309371),(10.14613,46.280277),(10.158945,46.262449),(10.14582,46.243328),(10.117914,46.231133),(10.075746,46.220022),(10.042673,46.220487),(10.041847,46.24307),(10.031718,46.260072),(9.992237,46.284359),(9.977561,46.298105),(9.97105,46.320016),(9.970636,46.339808),(9.964022,46.356086),(9.93901,46.367455),(9.918443,46.37115),(9.899013,46.372158),(9.855398,46.366964),(9.788839,46.343296),(9.768065,46.33862),(9.755249,46.340532),(9.730858,46.350712),(9.720109,46.350893),(9.709257,46.342392),(9.707293,46.330972),(9.708844,46.319629),(9.70843,46.311748),(9.693134,46.297072),(9.674324,46.291801),(9.559705,46.292731),(9.536451,46.298622),(9.515264,46.308596),(9.502551,46.32074),(9.482604,46.35681),(9.473716,46.361874),(9.451598,46.370375),(9.444364,46.375284),(9.4424,46.380891),(9.443847,46.396136),(9.437852,46.492047),(9.434648,46.498326),(9.426794,46.497111),(9.410671,46.488895),(9.403849,46.482513),(9.400335,46.475407),(9.395478,46.469413),(9.384626,46.466416),(9.377081,46.468689),(9.351553,46.485484),(9.350829,46.497861),(9.330986,46.501504),(9.282306,46.49737),(9.263186,46.485122),(9.245823,46.461041),(9.237968,46.436547),(9.24758,46.423033),(9.260912,46.416651),(9.262876,46.406626),(9.260292,46.394017),(9.260396,46.379728),(9.273831,46.344252),(9.275175,46.331385),(9.268974,46.309371),(9.239725,46.266996),(9.224842,46.231184),(9.215747,46.221056),(9.204172,46.213563),(9.192183,46.209635),(9.181331,46.204054),(9.17575,46.194132),(9.171099,46.182609),(9.163788,46.172989),(9.163244,46.172273),(9.090587,46.138167),(9.072087,46.118892),(9.068159,46.105972),(9.07033,46.083441),(9.067126,46.071142),(9.059168,46.061789),(9.049659,46.057913),(9.027748,46.053107),(9.002117,46.03931),(8.997776,46.027941),(9.015553,45.993111),(8.982686,45.971975),(8.980516,45.969495),(8.979793,45.966911),(8.980516,45.964379),(8.982686,45.961847),(8.993435,45.95425),(9.001703,45.93606),(9.010798,45.926655),(9.020514,45.922779),(9.042321,45.919731),(9.051726,45.915545),(9.063095,45.898957),(9.059271,45.881955),(9.034363,45.848107),(9.002427,45.820718),(8.972351,45.824646),(8.939588,45.834826),(8.900004,45.826403),(8.903725,45.841802),(8.909719,45.853688),(8.91375,45.86609),(8.912096,45.883402),(8.906515,45.896476),(8.898144,45.90955),(8.88078,45.931099),(8.870962,45.947067),(8.864451,45.953424),(8.857733,45.957093),(8.800372,45.978538),(8.785076,45.982311),(8.767919,45.983086),(8.769573,45.985773),(8.773397,45.990579),(8.790967,46.018691),(8.819596,46.042927),(8.834375,46.066388),(8.80895,46.089746),(8.793861,46.093415),(8.763165,46.092898),(8.747145,46.094449),(8.739497,46.098066),(8.732159,46.107419),(8.728983,46.108233),(8.723891,46.109538),(8.71769,46.107523),(8.702187,46.097963),(8.695055,46.095172),(8.677485,46.095792),(8.630873,46.114706),(8.611546,46.119357),(8.601831,46.122819),(8.538682,46.187621),(8.51026,46.207878),(8.482665,46.217542),(8.456517,46.224828),(8.43812,46.23537),(8.427165,46.251442),(8.423237,46.275833),(8.426648,46.301568),(8.442874,46.353373),(8.446285,46.382183),(8.445768,46.412362),(8.441634,46.434945),(8.427888,46.44869),(8.399156,46.452179),(8.385907,46.450206),(8.343449,46.443885),(8.316267,46.433653),(8.294976,46.418046),(8.286605,46.40536),(8.290429,46.401122),(8.297043,46.397634),(8.297457,46.387506),(
8.291462,46.378359),(8.281541,46.370116),(8.270068,46.364044),(8.24175,46.354123),(8.192554,46.309164),(8.171883,46.299191),(8.128475,46.292473),(8.106874,46.285548),(8.087341,46.271802),(8.077315,46.262035),(8.073078,46.253612),(8.076592,46.249736),(8.09995,46.235629),(8.129509,46.196044),(8.132299,46.159354),(8.110595,46.126953),(8.066877,46.100598),(8.056025,46.098066),(8.035354,46.096516),(8.025329,46.091141),(8.018197,46.080858),(8.016027,46.069385),(8.015924,46.058172),(8.010653,46.029698),(8.008792,46.027683),(7.999077,46.0128),(7.998354,46.010629),(7.985848,45.999312),(7.978717,45.995178),(7.969105,45.993111),(7.898205,45.981949),(7.883782,45.973869),(7.872917,45.959383),(7.870201,45.94037),(7.84962,45.939712),(7.848389,45.938076),(7.845288,45.927792),(7.846115,45.922573),(7.843738,45.919214),(7.831232,45.91446),(7.825444,45.914666),(7.807564,45.91849),(7.780072,45.918129),(7.732013,45.930376),(7.722195,45.929601),(7.71465,45.92712),(7.706279,45.925725),(7.69398,45.928671),(7.692533,45.931203),(7.673722,45.950323),(7.658736,45.960038),(7.643027,45.966343),(7.541121,45.984119),(7.524377,45.978073),(7.514662,45.966704),(7.503707,45.956731),(7.482726,45.954871),(7.452961,45.945879),(7.393843,45.9157),(7.361803,45.907845),(7.286666,45.913426),(7.27354,45.910274),(7.245428,45.89813),(7.183726,45.880456),(7.153547,45.876529),(7.120888,45.876116),(7.090192,45.880508),(7.066938,45.890223),(7.022083,45.92526),(7.015158,45.933321),(7.009784,45.943398),(7.002756,45.961692),(6.991283,45.982466),(6.987666,45.993111),(6.982808,45.995385),(6.915112,46.048612),(6.892375,46.055588),(6.884003,46.053211),(6.876872,46.048095),(6.869224,46.044064),(6.859715,46.044994),(6.85093,46.049645),(6.85031,46.052746),(6.852377,46.056931),(6.851964,46.064683),(6.853411,46.065665),(6.853101,46.076103),(6.851344,46.086025),(6.848553,46.085043),(6.853101,46.090211),(6.861162,46.097032),(6.86819,46.10468),(6.869224,46.112329),(6.853927,46.122612),(6.774346,46.134808),(6.765664,46.151603),(6.774863,46.185864),(6.792226,46.221676),(6.827676,46.269477),(6.804938,46.296607),(6.769488,46.322678),(6.750368,46.345518),(6.755742,46.357068),(6.782097,46.378462),(6.789229,46.395205),(6.788107,46.405008),(6.787058,46.414171),(6.777756,46.424093),(6.777716,46.424106),(6.762667,46.42926),(6.613684,46.455899),(6.547021,46.457372),(6.482942,46.448587),(6.397676,46.408176),(6.365223,46.40244),(6.332357,46.401381),(6.301558,46.394482),(6.269105,46.375026),(6.24058,46.348955),(6.219496,46.329111),(6.214122,46.315469),(6.218256,46.305495),(6.227454,46.288494),(6.227971,46.284463),(6.237583,46.267926),(6.24182,46.263689),(6.252259,46.259916),(6.269002,46.265239),(6.27603,46.26312),(6.281198,46.240073),(6.255359,46.221107),(6.191384,46.191704),(6.140328,46.150207),(6.107875,46.138632),(6.073872,46.149174),(6.028293,46.147934),(5.982921,46.140441),(5.95884,46.130467),(5.972172,46.152171),(5.979821,46.162248),(5.982921,46.170826),(5.965248,46.186226),(5.954809,46.19992),(5.95853,46.211961),(5.982921,46.222709),(6.042866,46.24307),(6.044519,46.243432),(6.046173,46.243535),(6.048033,46.243432),(6.055888,46.241675),(6.061883,46.241158),(6.067671,46.241623),(6.089685,46.246377),(6.094026,46.253044),(6.093095,46.262294),(6.093612,46.273094),(6.100743,46.301413),(6.104051,46.309216),(6.118607,46.331771),(6.1364,46.359342),(6.135057,46.370401),(6.122861,46.385542),(6.108185,46.396497),(6.059019,46.417383),(6.054235,46.419416),(6.065707,46.427012),(6.067567,46.433601),(6.0655,46.440371),(6.0655,46.447993),(6.064777,46.451068),(6.0624,46.455202),(6.
060229,46.459904),(6.060229,46.46502),(6.064157,46.471118),(6.075525,46.479593),(6.110355,46.520831),(6.145702,46.55163),(6.121517,46.570285),(6.118417,46.583463),(6.131853,46.595607),(6.266315,46.680356),(6.337938,46.707409),(6.34786,46.71317),(6.374215,46.733609),(6.407391,46.745701),(6.417933,46.751101),(6.429199,46.760816),(6.433023,46.76911),(6.432609,46.785983),(6.425168,46.791615),(6.419897,46.796525),(6.417107,46.802157),(6.418554,46.807015),(6.434263,46.839545),(6.441188,46.848149),(6.443118,46.851455),(6.446769,46.857709),(6.448422,46.871559),(6.445219,46.882617),(6.431886,46.900032),(6.427752,46.909076),(6.442635,46.944164),(6.491107,46.963388),(6.598698,46.986539),(6.665412,47.021291),(6.688253,47.043848),(6.676264,47.0624),(6.6897,47.07829),(6.699105,47.084621),(6.72422,47.09077),(6.72794,47.097126),(6.731661,47.098883),(6.746027,47.103948),(6.744787,47.121053),(6.774759,47.128184),(6.838076,47.168132),(6.840285,47.169525),(6.859302,47.190919),(6.888344,47.211305),(6.956247,47.245231),(6.952216,47.270036),(6.958624,47.290551),(6.977434,47.303729),(6.986529,47.304504),(6.991904,47.305951),(7.006476,47.319361),(7.016915,47.323521),(7.027147,47.325433),(7.036552,47.329515),(7.044303,47.340497),(7.033865,47.350651),(7.018879,47.359901),(7.003996,47.368143),(6.985599,47.362123),(6.86664,47.354165),(6.871601,47.366955),(6.884003,47.382587),(6.898783,47.395713),(6.924517,47.405996),(6.926068,47.424858),(6.952319,47.428837),(6.968546,47.435194),(6.983429,47.443798),(6.990973,47.452221),(6.986116,47.464132),(6.97578,47.477956),(6.9733,47.489092),(6.991904,47.492942),(7.000792,47.49767),(7.009784,47.499247),(7.018879,47.49767),(7.027974,47.492942),(7.053915,47.490384),(7.103731,47.496275),(7.127296,47.492942),(7.140318,47.487852),(7.142169,47.487651),(7.153651,47.486405),(7.180833,47.488265),(7.162642,47.459895),(7.168327,47.443565),(7.190031,47.434728),(7.219383,47.428476),(7.223517,47.426228),(7.226308,47.422636),(7.230339,47.419019),(7.23809,47.416797),(7.244808,47.417727),(7.282635,47.428889),(7.309093,47.432661),(7.33693,47.431854),(7.378547,47.430646),(7.388219,47.433289),(7.406349,47.438242),(7.420563,47.450857),(7.426089,47.455761),(7.429293,47.465114),(7.427639,47.470747),(7.422782,47.475424),(7.419474,47.477852),(7.41441,47.484028),(7.414307,47.490177),(7.425986,47.492503),(7.441488,47.488834),(7.445996,47.486884),(7.454511,47.483201),(7.46743,47.481909),(7.482726,47.491263),(7.484483,47.492942),(7.485827,47.495784),(7.48593,47.498394),(7.485776,47.498768),(7.484897,47.5009),(7.477765,47.507696),(7.475595,47.511726),(7.476939,47.514879),(7.482726,47.516997),(7.493062,47.515499),(7.501123,47.517307),(7.505464,47.523018),(7.505154,47.533017),(7.501743,47.532965),(7.485103,47.541595),(7.482726,47.542267),(7.520867,47.563416),(7.526341,47.566452),(7.550319,47.575495),(7.585483,47.584479),(7.586028,47.584619),(7.637032,47.594977),(7.659666,47.596579),(7.646541,47.571542),(7.635895,47.564591),(7.612337,47.564731),(7.609747,47.564746),(7.646901,47.551445),(7.661423,47.546246),(7.683438,47.544257),(7.72732,47.550423),(7.76674,47.555961),(7.78555,47.563196),(7.801467,47.576089),(7.819657,47.595339),(7.833713,47.590481),(7.898205,47.587846),(7.904303,47.583557),(7.907507,47.574177),(7.90947,47.564798),(7.912157,47.560561),(8.042279,47.560561),(8.087237,47.567382),(8.096952,47.571852),(8.101397,47.576193),(8.105427,47.581257),(8.113902,47.587846),(8.122067,47.592135),(8.143771,47.600067),(8.162065,47.603762),(8.168886,47.608646),(8.173847,47.613529),(8.179015,47.615803),(8.232965,47.62195
2),(8.251051,47.622004),(8.276993,47.61663),(8.288569,47.615803),(8.293943,47.611462),(8.299317,47.601824),(8.306345,47.592187),(8.316164,47.587846),(8.354094,47.581024),(8.41807,47.580766),(8.4211,47.581112),(8.448869,47.58428),(8.450109,47.589034),(8.461788,47.606139),(8.49238,47.619833),(8.522353,47.621901),(8.537752,47.612134),(8.549638,47.598594),(8.551719,47.596863),(8.560697,47.589396),(8.574133,47.592445),(8.576305,47.595023),(8.580644,47.600171),(8.581781,47.607664),(8.581264,47.614769),(8.582504,47.62159),(8.582091,47.625027),(8.579714,47.628929),(8.57806,47.633476),(8.580334,47.639005),(8.583951,47.641124),(8.589222,47.642468),(8.593563,47.642571),(8.594493,47.640969),(8.595113,47.63482),(8.601624,47.632598),(8.607309,47.656291),(8.598214,47.656885),(8.593589,47.658169),(8.582194,47.66133),(8.568241,47.662932),(8.519666,47.657351),(8.504679,47.65226),(8.49083,47.645568),(8.476051,47.640401),(8.458274,47.639884),(8.437603,47.647842),(8.411972,47.661045),(8.407011,47.661562),(8.391301,47.665464),(8.397709,47.67629),(8.395229,47.684817),(8.390991,47.692129),(8.392128,47.699519),(8.401947,47.707089),(8.427268,47.716468),(8.437913,47.723186),(8.445458,47.743185),(8.450109,47.750471),(8.463648,47.763907),(8.471503,47.76706),(8.482665,47.766853),(8.536512,47.774088),(8.551602,47.779255),(8.5423,47.795017),(8.558216,47.801166),(8.583124,47.800236),(8.601624,47.794603),(8.603175,47.787317),(8.604105,47.774398),(8.607619,47.762254),(8.617437,47.757319)] +Clipperton Island [(-109.212026,10.30268),(-109.210357,10.288723),(-109.218577,10.281562),(-109.228017,10.292141),(-109.234242,10.307929),(-109.225942,10.311021),(-109.212026,10.30268)] +Cameroon [(14.560692,12.766224),(14.569374,12.769403),(14.569374,12.75961),(14.570717,12.750386),(14.575265,12.744908),(14.584877,12.746355),(14.603997,12.760798),(14.610612,12.762349),(14.618363,12.759196),(14.620327,12.754055),(14.62074,12.747853),(14.624254,12.741885),(14.641721,12.730904),(14.648129,12.725116),(14.664458,12.715969),(14.694017,12.723721),(14.709934,12.718243),(14.716755,12.70181),(14.702079,12.668995),(14.713654,12.65251),(14.728847,12.677212),(14.734118,12.680416),(14.743317,12.67716),(14.752102,12.668944),(14.758716,12.658815),(14.761403,12.649358),(14.768845,12.633235),(14.786208,12.63122),(14.819178,12.638816),(14.830753,12.618249),(14.863826,12.495466),(14.860105,12.490091),(14.852044,12.474692),(14.850184,12.468129),(14.849563,12.457019),(14.85101,12.455158),(14.855455,12.454538),(14.863826,12.447045),(14.861862,12.452006),(14.865893,12.452574),(14.872404,12.450404),(14.877469,12.447045),(14.878502,12.443066),(14.876849,12.431594),(14.877469,12.427201),(14.891215,12.402655),(14.908268,12.326897),(14.906304,12.317957),(14.904857,12.248401),(14.9031,12.236257),(14.898036,12.219462),(14.898036,12.207473),(14.90093,12.200032),(14.905891,12.195639),(14.910128,12.190368),(14.911575,12.180136),(14.907441,12.17347),(14.899896,12.171765),(14.894832,12.167372),(14.898036,12.152799),(14.9031,12.145875),(14.928732,12.128615),(14.950746,12.103345),(14.964802,12.092441),(14.976791,12.094147),(14.993844,12.106704),(15.011827,12.108616),(15.030431,12.100916),(15.049448,12.084586),(15.04459,12.078385),(15.04087,12.067585),(15.04056,12.055648),(15.045727,12.046088),(15.053065,12.038026),(15.053685,12.031877),(15.051205,12.024228),(15.049448,12.011929),(15.05968,11.997305),(15.076836,11.982836),(15.081177,11.971673),(15.052548,11.967229),(15.048311,11.962785),(15.050585,11.952863),(15.055546,11.942476),(15.05937,11.936792),(15.063401,11.927283),(15.
056579,11.918447),(15.046967,11.910179),(15.042006,11.902375),(15.044177,11.877312),(15.050998,11.861603),(15.06278,11.853489),(15.079834,11.851216),(15.086345,11.840467),(15.110943,11.782899),(15.105362,11.772719),(15.089135,11.75763),(15.083554,11.748173),(15.085311,11.744039),(15.095957,11.734117),(15.097197,11.727657),(15.093063,11.722903),(15.086345,11.72218),(15.079834,11.722593),(15.076113,11.721456),(15.068258,11.700114),(15.066501,11.680632),(15.069395,11.660788),(15.085208,11.615881),(15.09575,11.598156),(15.123862,11.563171),(15.135644,11.530822),(15.122312,11.503227),(15.076113,11.453307),(15.067018,11.435556),(15.06061,11.416126),(15.054305,11.364088),(15.049448,11.348714),(15.049448,11.337268),(15.052548,11.329801),(15.057819,11.325356),(15.062264,11.320034),(15.06309,11.309931),(15.056373,11.29342),(15.033325,11.26027),(15.028261,11.244483),(15.021233,11.182549),(15.02883,11.080218),(15.035185,10.994627),(15.06247,10.930703),(15.072702,10.915769),(15.079007,10.898147),(15.075596,10.873498),(15.06309,10.830761),(15.065881,10.793115),(15.142259,10.647207),(15.149907,10.623125),(15.14846,10.607338),(15.138435,10.589691),(15.132337,10.565403),(15.131717,10.540701),(15.138228,10.521659),(15.144429,10.51631),(15.153628,10.511969),(15.164686,10.509075),(15.176055,10.50799),(15.186597,10.505871),(15.193315,10.501195),(15.198173,10.496492),(15.218326,10.487216),(15.227215,10.470137),(15.241167,10.432904),(15.255637,10.41704),(15.268763,10.406343),(15.278374,10.394276),(15.282199,10.374588),(15.284266,10.349292),(15.29057,10.328983),(15.301422,10.311749),(15.439192,10.185245),(15.476399,10.132742),(15.490661,10.124448),(15.490144,10.120107),(15.536137,10.080523),(15.602799,10.04099),(15.637939,10.029957),(15.681244,9.991278),(15.663777,9.98611),(15.439398,9.931695),(15.382968,9.930196),(15.214916,9.984095),(15.155488,9.986523),(15.109599,9.981537),(15.100711,9.978901),(15.089445,9.972183),(15.071669,9.955957),(15.06123,9.949239),(15.033015,9.942857),(15.002422,9.945053),(14.944131,9.958825),(14.898139,9.960478),(14.772566,9.921747),(14.732465,9.923814),(14.440493,9.995308),(14.196341,9.979147),(14.181697,9.978178),(14.173532,9.975025),(14.171155,9.96761),(14.170638,9.957843),(14.168365,9.94774),(14.119789,9.85201),(14.088473,9.809635),(14.006721,9.739277),(13.945949,9.652642),(13.947603,9.637759),(14.036486,9.568771),(14.321327,9.243158),(14.331145,9.200215),(14.349542,9.168356),(14.571699,8.99099),(14.793856,8.813623),(14.809876,8.808611),(14.819178,8.811608),(14.827963,8.813158),(14.836748,8.813003),(14.846049,8.810988),(14.859899,8.803314),(14.89938,8.774323),(14.908268,8.764841),(14.913332,8.752438),(14.919947,8.74771),(14.927285,8.745075),(14.934416,8.739003),(14.940101,8.729649),(14.949092,8.704276),(14.951469,8.692106),(14.951469,8.68283),(14.955087,8.676216),(14.968523,8.672159),(15.005833,8.66663),(15.031258,8.658749),(15.051928,8.643789),(15.068568,8.623867),(15.110736,8.555138),(15.16851,8.498604),(15.183703,8.479148),(15.210706,8.421822),(15.345347,8.13599),(15.406532,7.921585),(15.440742,7.839419),(15.487871,7.804951),(15.509368,7.804021),(15.540787,7.796838),(15.562905,7.792445),(15.562181,7.690178),(15.548952,7.630801),(15.521977,7.576076),(15.481049,7.523263),(15.473298,7.509103),(15.419451,7.43655),(15.415524,7.42084),(15.434954,7.402908),(15.432784,7.389731),(15.428443,7.388801),(15.409633,7.387664),(15.403225,7.386372),(15.397437,7.380739),(15.389169,7.364151),(15.385345,7.358363),(15.338216,7.32219),(15.261321,7.279247),(15.246564,7.266731),(15.223908,7.247517),(15.
205097,7.200233),(15.191248,7.177341),(15.186597,7.166334),(15.185357,7.156877),(15.190214,7.098793),(15.187527,7.088612),(15.179052,7.07838),(15.156108,7.067787),(15.147013,7.061844),(15.136161,7.045308),(15.128926,7.026497),(15.113217,6.964692),(15.080247,6.878083),(15.05906,6.836276),(15.042213,6.790775),(15.034565,6.776771),(15.022369,6.766539),(14.962321,6.736696),(14.946095,6.72574),(14.932452,6.710289),(14.921874,6.686414),(14.807292,6.4278),(14.782177,6.393616),(14.780007,6.382221),(14.784348,6.358683),(14.784658,6.347805),(14.782384,6.336772),(14.772255,6.318323),(14.734842,6.270445),(14.719029,6.257862),(14.576402,6.189778),(14.534957,6.190088),(14.525966,6.174069),(14.510979,6.155336),(14.482144,6.127069),(14.467054,6.122599),(14.443903,6.100249),(14.430571,6.088079),(14.431811,6.084617),(14.432709,6.079123),(14.433568,6.073868),(14.429124,6.064179),(14.426023,6.059864),(14.422509,6.051363),(14.418169,6.047126),(14.412898,6.047022),(14.402459,6.054877),(14.397705,6.053947),(14.390263,6.045782),(14.387266,6.039038),(14.389436,6.031054),(14.397705,6.019169),(14.406076,6.01103),(14.417548,5.997258),(14.432328,5.98545),(14.446901,5.970024),(14.453825,5.953875),(14.458166,5.936409),(14.465711,5.920673),(14.482144,5.90964),(14.499197,5.909537),(14.530616,5.90592),(14.545396,5.907573),(14.572474,5.924187),(14.584773,5.923412),(14.598003,5.903491),(14.602964,5.883389),(14.617433,5.864966),(14.624047,5.758306),(14.631075,5.738049),(14.624358,5.723605),(14.610612,5.669138),(14.592835,5.62387),(14.590148,5.608289),(14.591491,5.590952),(14.596556,5.577232),(14.610612,5.553667),(14.617226,5.53788),(14.61888,5.525452),(14.617433,5.495299),(14.614126,5.48486),(14.599553,5.456645),(14.596246,5.440108),(14.597279,5.41148),(14.592835,5.400033),(14.579606,5.395382),(14.579296,5.387605),(14.570821,5.370939),(14.560796,5.355462),(14.555938,5.351328),(14.553974,5.338228),(14.548393,5.320503),(14.539608,5.303579),(14.528033,5.293011),(14.519868,5.290583),(14.523899,5.279679),(14.539712,5.261799),(14.560485,5.248595),(14.603584,5.234255),(14.624461,5.224049),(14.640481,5.208236),(14.651539,5.187514),(14.658877,5.164544),(14.663218,5.141832),(14.663012,5.091447),(14.667456,5.052768),(14.660324,5.015793),(14.660132,5.01144),(14.659498,4.997035),(14.663425,4.985717),(14.669833,4.977113),(14.674484,4.967579),(14.672107,4.940811),(14.675827,4.93076),(14.681615,4.920941),(14.686989,4.908875),(14.701459,4.814384),(14.700632,4.804282),(14.696084,4.779735),(14.691847,4.768702),(14.69071,4.762139),(14.697325,4.745215),(14.704249,4.669354),(14.717272,4.622019),(14.74373,4.57998),(14.779697,4.545047),(14.938344,4.435932),(14.982062,4.420481),(14.993947,4.413298),(15.000562,4.403195),(15.007383,4.37777),(15.013894,4.364257),(15.048518,4.321649),(15.063814,4.293434),(15.072806,4.265942),(15.082934,4.205507),(15.084484,4.148611),(15.091202,4.121791),(15.102571,4.112412),(15.117764,4.114324),(15.132854,4.108536),(15.139262,4.094583),(15.142569,4.08045),(15.14846,4.073938),(15.162413,4.073938),(15.177709,4.071458),(15.189801,4.063112),(15.192075,4.052234),(15.172438,4.043811),(15.150424,4.038514),(15.111253,4.019937),(15.091719,4.015854),(15.075183,4.017973),(15.043143,4.026138),(15.025987,4.026086),(15.084898,3.885836),(15.171301,3.758971),(15.25264,3.672852),(15.262306,3.662589),(15.329741,3.590996),(15.413973,3.50188),(15.517946,3.391603),(15.616855,3.286777),(15.7107,3.187455),(15.78563,3.107977),(15.793175,3.103274),(15.80103,3.100897),(15.813536,3.098985),(15.817566,3.100122),(15.829142,3.106633),(15.834206,3.1085
45),(15.840201,3.107408),(15.850949,3.100432),(15.854773,3.098985),(15.897768,3.103533),(15.913995,3.097952),(15.933425,3.081777),(15.950065,3.061003),(15.95823,3.041521),(15.971562,2.991188),(15.974249,2.985762),(15.977763,2.980956),(15.982001,2.976719),(15.995437,2.981473),(16.006082,2.983953),(16.015797,2.983488),(16.026236,2.978992),(16.034194,2.970621),(16.055381,2.93977),(16.070161,2.906594),(16.08246,2.885665),(16.092382,2.863289),(16.095172,2.836779),(16.086904,2.813163),(16.059929,2.786188),(16.056208,2.759833),(16.060032,2.72583),(16.059309,2.711825),(16.062616,2.703195),(16.072848,2.702834),(16.086284,2.704952),(16.0991,2.70397),(16.110468,2.701438),(16.101373,2.687847),(16.093725,2.67312),(16.090315,2.661906),(16.093105,2.641494),(16.108505,2.606612),(16.111502,2.5862),(16.106851,2.567028),(16.091245,2.532766),(16.090315,2.510494),(16.094345,2.497523),(16.107058,2.474165),(16.111502,2.455872),(16.111502,2.418562),(16.112949,2.410448),(16.120907,2.389571),(16.133413,2.364818),(16.152326,2.343269),(16.159251,2.332934),(16.162868,2.323839),(16.164729,2.314795),(16.165969,2.294796),(16.169069,2.289887),(16.182919,2.281154),(16.186536,2.277692),(16.18695,2.26932),(16.181265,2.256556),(16.179715,2.246996),(16.192841,2.239503),(16.196665,2.236454),(16.202349,2.23201),(16.207723,2.22307),(16.199455,2.212166),(16.159251,2.18209),(16.140647,2.199815),(16.12039,2.203329),(16.104371,2.193562),(16.097756,2.171548),(16.085767,2.149172),(16.083493,2.137752),(16.08556,2.12491),(16.089798,2.116151),(16.092485,2.106901),(16.090315,2.092742),(16.088144,2.09176),(16.084113,2.091605),(16.079669,2.090261),(16.076672,2.08592),(16.075535,2.079952),(16.075535,2.075456),(16.076155,2.07282),(16.076672,2.072252),(16.078119,2.063441),(16.08153,2.055638),(16.081736,2.046827),(16.06644,2.025485),(16.063753,2.015873),(16.06489,1.97895),(16.068714,1.973602),(16.074915,1.970604),(16.083493,1.963008),(16.101373,1.932803),(16.135066,1.845315),(16.159044,1.734805),(16.15584,1.719069),(16.145092,1.714315),(16.123594,1.721601),(16.114809,1.719069),(16.069851,1.654551),(16.030473,1.705401),(16.021998,1.72279),(16.021998,1.733048),(16.024892,1.742479),(16.026339,1.752297),(16.021998,1.763795),(16.013833,1.771314),(16.002775,1.775913),(15.990786,1.776172),(15.98045,1.770591),(15.963191,1.77736),(15.94376,1.781624),(15.928981,1.788962),(15.92588,1.804749),(15.915235,1.798961),(15.901799,1.793742),(15.889913,1.793277),(15.884952,1.801648),(15.882369,1.816712),(15.875651,1.823766),(15.865936,1.828468),(15.842061,1.844152),(15.815396,1.853428),(15.80289,1.859965),(15.76465,1.908722),(15.757415,1.912985),(15.734057,1.916706),(15.721552,1.921615),(15.711113,1.927377),(15.706772,1.931976),(15.611791,1.942544),(15.526628,1.967814),(15.48291,1.97585),(15.439192,1.969829),(15.388135,1.939547),(15.367981,1.932855),(15.349171,1.923295),(15.337389,1.92146),(15.337802,1.92376),(15.314341,1.93425),(15.309484,1.935077),(15.301629,1.943578),(15.300182,1.94766),(15.301939,1.953319),(15.303283,1.966419),(15.301112,1.973757),(15.295428,1.981534),(15.287573,1.987787),(15.25357,2.000318),(15.248092,2.003341),(15.244165,2.010059),(15.238687,2.025304),(15.233726,2.031298),(15.212642,2.039618),(15.197036,2.034967),(15.1836,2.029335),(15.151664,2.040988),(15.138538,2.011506),(15.123862,2.01701),(15.117764,2.01701),(15.107636,2.000835),(15.091616,1.984428),(15.074873,1.979364),(15.06309,1.997166),(15.04366,1.990862),(15.028261,1.995306),(14.980718,2.033107),(14.971003,2.034451),(14.963045,2.010421),(14.953846,2.005822),(14.932142,2.003341),(14.89
2455,2.00639),(14.896589,2.030937),(14.907338,2.058894),(14.887804,2.072252),(14.8846,2.082096),(14.871474,2.101656),(14.857005,2.115944),(14.850184,2.109795),(14.850184,2.097367),(14.84822,2.085197),(14.841192,2.075921),(14.826309,2.072252),(14.764814,2.065456),(14.76161,2.070185),(14.76254,2.092535),(14.761403,2.099589),(14.756339,2.099589),(14.747554,2.097186),(14.737116,2.096075),(14.727297,2.099589),(14.733498,2.102018),(14.747864,2.113205),(14.720889,2.127597),(14.713654,2.12398),(14.720476,2.099589),(14.705696,2.105532),(14.685956,2.125866),(14.672003,2.134341),(14.646062,2.133359),(14.636346,2.13832),(14.644821,2.154185),(14.620947,2.16483),(14.587874,2.201727),(14.562139,2.208807),(14.557075,2.202399),(14.547773,2.193666),(14.54798,2.184364),(14.550254,2.175166),(14.54705,2.166536),(14.538058,2.163383),(14.4592,2.146537),(14.438736,2.133979),(14.432948,2.132119),(14.418685,2.135271),(14.399358,2.153151),(14.383649,2.162091),(14.320293,2.17067),(14.276265,2.153565),(14.266963,2.152428),(14.16175,2.15341),(14.002793,2.154805),(13.823993,2.156407),(13.662266,2.157761),(13.620336,2.158112),(13.466547,2.159508),(13.294568,2.161058),(13.294258,2.185759),(13.298392,2.215835),(13.294155,2.230511),(13.284646,2.242242),(13.26966,2.255729),(13.254157,2.266891),(13.242891,2.27149),(13.235967,2.269888),(13.221601,2.263946),(13.212299,2.264049),(13.205684,2.266891),(13.191732,2.277692),(13.163827,2.283169),(13.133337,2.284513),(13.124656,2.281051),(13.11308,2.272731),(13.09303,2.253455),(13.083521,2.249528),(13.04094,2.244205),(13.012931,2.256298),(12.996912,2.260018),(12.983166,2.253455),(12.97221,2.250355),(12.954847,2.253249),(12.924771,2.264049),(12.914539,2.254541),(12.903171,2.250355),(12.890458,2.251388),(12.876402,2.257176),(12.867824,2.24498),(12.848084,2.249321),(12.825243,2.259243),(12.808189,2.264049),(12.79558,2.258778),(12.784108,2.249528),(12.770155,2.240691),(12.749898,2.236712),(12.614609,2.256918),(12.590735,2.265703),(12.574819,2.277692),(12.551461,2.272266),(12.534201,2.277692),(12.47839,2.295882),(12.439943,2.298517),(12.384856,2.285908),(12.366769,2.289577),(12.334006,2.309576),(12.321604,2.314072),(12.312096,2.312935),(12.291218,2.301773),(12.28078,2.299034),(12.261039,2.296192),(12.222592,2.283324),(12.202645,2.279552),(12.159237,2.281412),(12.118826,2.28782),(12.100739,2.288492),(12.041931,2.283996),(11.966174,2.28875),(11.889796,2.282084),(11.802153,2.284823),(11.768149,2.280172),(11.75275,2.281877),(11.701384,2.298724),(11.681023,2.315415),(11.671204,2.320996),(11.657252,2.322495),(11.644643,2.319653),(11.618598,2.310403),(11.351637,2.300584),(11.349777,2.291489),(11.354325,2.278467),(11.355978,2.265186),(11.349984,2.25113),(11.342232,2.240588),(11.335928,2.229167),(11.334171,2.212786),(11.334171,2.20028),(11.332414,2.188033),(11.328486,2.176302),(11.322078,2.16576),(11.022975,2.165709),(10.723769,2.165657),(10.424666,2.165657),(10.409686,2.165654),(10.189983,2.165616),(10.125562,2.165605),(9.990997,2.165605),(9.97074,2.168913),(9.907488,2.200487),(9.890125,2.204569),(9.872141,2.211236),(9.84682,2.228547),(9.823669,2.249011),(9.810853,2.264876),(9.808682,2.285495),(9.811886,2.306165),(9.81106,2.324872),(9.799571,2.341742),(9.812185,2.353583),(9.812185,2.359809),(9.819184,2.376939),(9.822927,2.562934),(9.853201,2.696967),(9.870453,2.738105),(9.874278,2.754543),(9.877778,2.881415),(9.88795,2.926907),(9.903331,2.968329),(9.922048,2.99726),(9.94337,3.023139),(9.95574,3.050482),(9.958018,3.082221),(9.919688,3.228095),(9.90211,3.251695),(9.90211,3.257961),(9.911632,3.257717)
,(9.917979,3.259467),(9.929454,3.264797),(9.929454,3.271552),(9.91391,3.270331),(9.903087,3.272895),(9.89503,3.278306),(9.88795,3.285224),(9.881521,3.294338),(9.855317,3.343573),(9.780935,3.436754),(9.739024,3.47309),(9.6421,3.539049),(9.664236,3.543931),(9.68865,3.556098),(9.770763,3.61872),(9.792817,3.627183),(9.819591,3.627875),(9.787608,3.637112),(9.759613,3.626166),(9.733653,3.60932),(9.706879,3.600531),(9.638438,3.594387),(9.626231,3.600002),(9.621349,3.612779),(9.625987,3.626451),(9.6421,3.63467),(9.612559,3.704983),(9.565115,3.770901),(9.551524,3.79621),(9.545909,3.819648),(9.570974,3.795478),(9.576671,3.792304),(9.581554,3.787828),(9.58961,3.766099),(9.594249,3.758205),(9.606944,3.752265),(9.624278,3.75019),(9.662608,3.750718),(9.652517,3.756049),(9.621104,3.76496),(9.611013,3.770575),(9.607107,3.774726),(9.604259,3.779975),(9.597423,3.788886),(9.589203,3.809638),(9.592133,3.83515),(9.60377,3.859036),(9.621104,3.874823),(9.659028,3.84101),(9.679698,3.836086),(9.689789,3.860582),(9.723481,3.843817),(9.736339,3.834174),(9.744477,3.819648),(9.750336,3.830268),(9.750499,3.840766),(9.74586,3.850898),(9.738292,3.860582),(9.741466,3.861029),(9.751964,3.860582),(9.704926,3.877509),(9.682872,3.890448),(9.67628,3.909003),(9.68922,3.922431),(9.715017,3.931342),(9.743337,3.93594),(9.764415,3.936347),(9.764334,3.940741),(9.76295,3.943793),(9.758149,3.950019),(9.760753,3.951361),(9.76295,3.951239),(9.764415,3.952338),(9.764415,3.957465),(9.664806,3.940375),(9.634613,3.943183),(9.618907,3.960354),(9.623709,3.985785),(9.639903,4.008775),(9.65919,4.018866),(9.675629,4.03384),(9.718923,4.099555),(9.748302,4.114447),(9.752452,4.119818),(9.756847,4.129828),(9.754568,4.135688),(9.738292,4.12873),(9.719493,4.114936),(9.710948,4.105699),(9.703461,4.09394),(9.696625,4.09394),(9.696625,4.114447),(9.678884,4.10102),(9.646332,4.051459),(9.628429,4.031928),(9.606619,4.020657),(9.579356,4.013007),(9.559337,4.016791),(9.559581,4.039374),(9.545909,4.025702),(9.526622,4.035793),(9.508556,4.054185),(9.498546,4.077216),(9.504242,4.100775),(9.490082,4.115871),(9.478526,4.111396),(9.450369,4.080308),(9.452891,4.076972),(9.457286,4.071112),(9.46518,4.068101),(9.474294,4.066148),(9.484386,4.059882),(9.485362,4.055325),(9.495616,4.03148),(9.507009,4.019477),(9.523611,4.00609),(9.532481,3.989447),(9.518565,3.970445),(9.504242,3.976142),(9.497081,3.977688),(9.490082,3.980658),(9.476899,3.990912),(9.464041,4.01203),(9.458344,4.016181),(9.453868,4.018622),(9.451182,4.022773),(9.450369,4.031928),(9.437999,4.030463),(9.42921,4.026557),(9.424083,4.019477),(9.422374,4.008368),(9.449392,3.992133),(9.456554,3.984076),(9.458832,3.972357),(9.456391,3.963528),(9.452403,3.953925),(9.450369,3.939765),(9.453298,3.932115),(9.469086,3.923407),(9.476899,3.915839),(9.464041,3.90998),(9.448253,3.905748),(9.41212,3.902167),(9.393403,3.904283),(9.359223,3.913723),(9.343516,3.915839),(9.313731,3.930854),(9.302582,3.964993),(9.307628,4.001939),(9.326182,4.025702),(9.326182,4.031928),(9.301443,4.026353),(9.290294,4.009711),(9.284353,3.992255),(9.250336,3.960842),(9.220714,3.96601),(9.210704,3.96426),(9.211111,3.982571),(9.213227,3.996568),(9.210623,4.008124),(9.196462,4.018866),(9.184744,4.020901),(9.166515,4.020901),(9.148774,4.019029),(9.138682,4.015448),(9.123383,4.01789),(9.07016,4.051418),(8.983409,4.092434),(8.971202,4.100775),(8.969412,4.128974),(8.988455,4.18891),(8.990977,4.217475),(8.960297,4.245917),(8.93629,4.279527),(8.918956,4.318061),(8.894786,4.460517),(8.900076,4.485093),(8.926036,4.529608),(8.93572,4.55329),(8.913585,4.528632),
(8.900564,4.521633),(8.894786,4.535875),(8.897227,4.548),(8.901541,4.560248),(8.904307,4.573188),(8.901622,4.587388),(8.893809,4.594631),(8.871918,4.602484),(8.864106,4.61872),(8.856293,4.62995),(8.847016,4.639716),(8.83961,4.642646),(8.835704,4.629625),(8.86671,4.582709),(8.873709,4.560045),(8.867849,4.544582),(8.858735,4.540839),(8.850597,4.547431),(8.847016,4.563218),(8.798595,4.601142),(8.798025,4.588284),(8.79949,4.577786),(8.80421,4.570136),(8.812836,4.566352),(8.799571,4.542182),(8.795177,4.539008),(8.788422,4.541449),(8.768565,4.553127),(8.764415,4.556627),(8.757009,4.56509),(8.740571,4.571234),(8.724132,4.580471),(8.716563,4.597724),(8.705903,4.63467),(8.661306,4.698798),(8.655121,4.738267),(8.638682,4.71308),(8.645356,4.684068),(8.660492,4.653998),(8.68572,4.561225),(8.694835,4.55329),(8.706309,4.547675),(8.716563,4.539008),(8.722667,4.513129),(8.701671,4.501288),(8.576671,4.491889),(8.56544,4.49726),(8.570323,4.50967),(8.582693,4.523342),(8.593761,4.532782),(8.581716,4.538479),(8.570567,4.53856),(8.562673,4.533189),(8.556407,4.511705),(8.54835,4.507799),(8.537364,4.508531),(8.525564,4.511705),(8.510265,4.527045),(8.505056,4.55093),(8.509939,4.568834),(8.525564,4.566352),(8.530772,4.57925),(8.552094,4.614732),(8.538341,4.604641),(8.525727,4.604438),(8.516124,4.612982),(8.511241,4.629055),(8.513682,4.64232),(8.521169,4.655951),(8.538585,4.676215),(8.567638,4.69599),(8.573253,4.704088),(8.573578,4.716376),(8.569672,4.728339),(8.567638,4.74022),(8.573253,4.752509),(8.580089,4.745063),(8.576508,4.784654),(8.57838,4.804674),(8.590017,4.813381),(8.594168,4.815294),(8.594179,4.815299),(8.594186,4.815299),(8.595113,4.815314),(8.612166,4.831877),(8.608446,4.846217),(8.605448,4.86004),(8.602245,4.881331),(8.61103,4.896653),(8.638315,4.916781),(8.644412,4.934868),(8.683687,4.99688),(8.683687,4.997035),(8.698569,5.017809),(8.707871,5.037549),(8.722651,5.08119),(8.728852,5.094548),(8.748282,5.12421),(8.758307,5.134881),(8.769469,5.139946),(8.781252,5.141987),(8.79169,5.146741),(8.798925,5.159635),(8.825693,5.295492),(8.82466,5.306137),(8.816392,5.330296),(8.815255,5.343499),(8.821663,5.367684),(8.84316,5.409981),(8.849775,5.432357),(8.849258,5.453467),(8.845744,5.472794),(8.844297,5.492353),(8.850498,5.514006),(8.865484,5.533462),(8.884294,5.551445),(8.899487,5.570953),(8.903932,5.595086),(8.8937,5.621751),(8.874889,5.64144),(8.852565,5.658674),(8.832411,5.678156),(8.819286,5.703451),(8.826934,5.717817),(8.844607,5.730866),(8.862177,5.752337),(8.866621,5.777452),(8.857836,5.794893),(8.847087,5.810758),(8.845227,5.831041),(8.855872,5.847499),(8.891219,5.871116),(8.904655,5.888117),(8.921708,5.904473),(8.943206,5.901424),(8.965013,5.893595),(8.982686,5.895765),(8.986304,5.908814),(8.988474,5.937287),(8.993849,5.944083),(9.009868,5.952351),(9.022477,5.965813),(9.043871,5.996689),(9.043871,5.996844),(9.136165,6.092523),(9.273315,6.202775),(9.298843,6.238432),(9.327368,6.298506),(9.341011,6.313983),(9.354757,6.321502),(9.366539,6.32269),(9.379148,6.321605),(9.394961,6.322535),(9.422866,6.339356),(9.453149,6.397673),(9.482604,6.415992),(9.48859,6.418208),(9.528596,6.433019),(9.548957,6.443174),(9.56477,6.458599),(9.588127,6.502731),(9.60301,6.515185),(9.632673,6.521153),(9.644806,6.521226),(9.680525,6.521438),(9.693031,6.531308),(9.69398,6.534305),(9.769822,6.773722),(9.788529,6.794754),(9.8123,6.793049),(9.83173,6.783566),(9.851161,6.777882),(9.874622,6.787701),(10.119465,6.994406),(10.142926,7.007687),(10.156878,7.004742),(10.165043,6.988515),(10.179409,6.914928),(10.189848,6.895446),(10.210828,6.
879168),(10.238424,6.871003),(10.495979,6.874672),(10.496599,6.897151),(10.500733,6.912809),(10.508898,6.926348),(10.521197,6.942781),(10.527502,6.947484),(10.533186,6.948828),(10.537424,6.952135),(10.543315,7.028719),(10.55179,7.06634),(10.564192,7.102668),(10.578661,7.130832),(10.597161,7.107268),(10.602536,7.058072),(10.620623,7.043757),(10.646978,7.035076),(10.716947,6.998127),(10.799113,6.967328),(10.831876,6.94521),(10.848206,6.905316),(10.857404,6.859789),(10.877454,6.816097),(10.909391,6.784264),(11.002615,6.767262),(11.036721,6.740546),(11.055945,6.700367),(11.059045,6.652722),(11.057392,6.613008),(11.060596,6.573967),(11.077442,6.496736),(11.097079,6.449117),(11.113512,6.434001),(11.140384,6.43028),(11.202276,6.436739),(11.231748,6.439815),(11.237122,6.438058),(11.248285,6.431004),(11.255519,6.42997),(11.261514,6.432554),(11.272159,6.442037),(11.27588,6.444207),(11.286835,6.443225),(11.308539,6.437903),(11.319495,6.437283),(11.344196,6.443122),(11.369828,6.455757),(11.389361,6.473921),(11.395666,6.496736),(11.401764,6.538181),(11.41675,6.572003),(11.443105,6.593268),(11.482585,6.597169),(11.509767,6.612311),(11.533435,6.645099),(11.562374,6.712485),(11.568885,6.739099),(11.571469,6.766229),(11.566611,6.784187),(11.544597,6.80894),(11.538086,6.8239),(11.543667,6.850694),(11.561754,6.876067),(11.623249,6.932291),(11.681126,6.96862),(11.684847,6.973839),(11.688464,6.986913),(11.692495,6.991616),(11.697249,6.992856),(11.707481,6.991822),(11.718643,6.994458),(11.73425,6.995078),(11.741071,6.997507),(11.746859,7.005362),(11.748616,7.014973),(11.75151,7.023965),(11.766392,7.033525),(11.80887,7.071921),(11.820343,7.078794),(11.833985,7.08298),(11.845147,7.081946),(11.854139,7.077812),(11.862614,7.075435),(11.872226,7.079362),(11.881321,7.102048),(11.868918,7.127111),(11.829954,7.169383),(11.780242,7.241058),(11.766186,7.252943),(11.742415,7.257439),(11.736627,7.263279),(11.744585,7.272167),(11.762465,7.285551),(11.782722,7.304517),(11.844734,7.396397),(11.903545,7.453557),(11.966174,7.514426),(11.992839,7.55401),(12.008548,7.564862),(12.021054,7.576231),(12.024981,7.596592),(12.010719,7.655244),(12.009065,7.676328),(12.012166,7.698343),(12.018987,7.718858),(12.054954,7.783919),(12.154276,7.921326),(12.192103,7.960652),(12.197787,7.975173),(12.1951,7.99171),(12.188175,8.016256),(12.182698,8.054652),(12.182594,8.093486),(12.188279,8.117774),(12.229103,8.175497),(12.233713,8.188455),(12.236751,8.196994),(12.237682,8.21689),(12.221765,8.29766),(12.219285,8.343161),(12.227036,8.386311),(12.249981,8.418764),(12.271788,8.427781),(12.310235,8.421348),(12.331319,8.424293),(12.343205,8.432742),(12.351163,8.44424),(12.357571,8.456358),(12.365116,8.466564),(12.395191,8.487338),(12.403873,8.496537),(12.403253,8.524235),(12.400462,8.538421),(12.377001,8.59511),(12.368836,8.609579),(12.396328,8.600768),(12.410591,8.598494),(12.42444,8.599786),(12.440253,8.606608),(12.451829,8.614695),(12.464334,8.621154),(12.482524,8.623247),(12.495547,8.619553),(12.527793,8.604851),(12.542572,8.602008),(12.557972,8.605677),(12.660395,8.659318),(12.678171,8.678696),(12.692848,8.710891),(12.701633,8.739158),(12.708867,8.750914),(12.720236,8.756469),(12.734499,8.755901),(12.747211,8.753627),(12.759613,8.75342),(12.772946,8.759053),(12.794547,8.788612),(12.805709,8.831503),(12.82824,9.019967),(12.821005,9.056425),(12.822142,9.09681),(12.842192,9.14443),(12.888391,9.226725),(12.894592,9.243959),(12.898416,9.262433),(12.899347,9.302017),(12.891285,9.33199),(12.88157,9.352221),(12.853148,9.356639),(12.848394,9.359947),(12.855
422,9.378111),(12.862036,9.382891),(12.920741,9.410486),(12.929009,9.416119),(12.940068,9.4271),(12.969006,9.464075),(12.982442,9.474591),(13.051689,9.504511),(13.13034,9.516526),(13.195453,9.542209),(13.223461,9.613135),(13.228422,9.677912),(13.241341,9.745324),(13.251056,9.76925),(13.260461,9.785709),(13.262942,9.801728),(13.251987,9.824234),(13.222944,9.854128),(13.212712,9.870148),(13.211059,9.892756),(13.218707,9.913039),(13.242271,9.949756),(13.246405,9.973475),(13.242478,9.992673),(13.230489,10.025358),(13.230179,10.045331),(13.247956,10.079412),(13.281339,10.092098),(13.359784,10.100547),(13.378387,10.108609),(13.392672,10.116957),(13.401952,10.122381),(13.423242,10.138478),(13.434921,10.153412),(13.450837,10.187157),(13.453008,10.200515),(13.451354,10.212117),(13.444533,10.232477),(13.444843,10.245474),(13.467167,10.307589),(13.501274,10.496337),(13.538791,10.62142),(13.566283,10.679013),(13.744411,10.930497),(13.749839,10.942442),(13.75702,10.958247),(13.750406,10.996203),(13.75485,11.016719),(13.765909,11.033901),(13.782032,11.047518),(13.820066,11.069274),(13.832054,11.084803),(13.849211,11.124568),(13.873189,11.166503),(13.906262,11.208025),(13.943882,11.245723),(13.98233,11.276264),(14.013129,11.276367),(14.052506,11.265153),(14.122579,11.235724),(14.14325,11.232726),(14.165471,11.238333),(14.186762,11.249237),(14.204642,11.26213),(14.232133,11.290294),(14.244329,11.295436),(14.271304,11.29864),(14.279159,11.301844),(14.368249,11.383001),(14.387059,11.393388),(14.446281,11.411682),(14.467261,11.423852),(14.503951,11.456304),(14.524932,11.468836),(14.576609,11.487336),(14.593765,11.496431),(14.605754,11.514725),(14.616193,11.538677),(14.623841,11.563481),(14.627561,11.584359),(14.627148,11.60968),(14.62136,11.629782),(14.610198,11.646835),(14.593868,11.66301),(14.588597,11.669676),(14.585083,11.676808),(14.580949,11.683267),(14.573508,11.687711),(14.556455,11.687505),(14.552424,11.689003),(14.543329,11.701096),(14.541882,11.709829),(14.546533,11.718614),(14.583223,11.766363),(14.590665,11.783158),(14.612059,11.88496),(14.623427,11.916535),(14.627148,11.950073),(14.626941,11.957204),(14.625184,11.968108),(14.606271,12.011258),(14.604307,12.03043),(14.622187,12.042005),(14.619913,12.052495),(14.637173,12.124222),(14.643168,12.13585),(14.648129,12.141844),(14.650609,12.148097),(14.669936,12.167424),(14.668283,12.178069),(14.663425,12.187578),(14.651539,12.193779),(14.643168,12.190627),(14.629318,12.183495),(14.616296,12.178948),(14.610612,12.183547),(14.6041,12.204321),(14.588287,12.217188),(14.568134,12.22773),(14.548497,12.241579),(14.558315,12.246902),(14.562139,12.248401),(14.562139,12.255842),(14.551184,12.260441),(14.521521,12.282352),(14.51439,12.28938),(14.512426,12.29863),(14.517594,12.313358),(14.51439,12.323538),(14.487105,12.337801),(14.42313,12.353149),(14.217561,12.359298),(14.202161,12.362502),(14.188932,12.370409),(14.179837,12.385602),(14.179113,12.397281),(14.185108,12.427873),(14.185418,12.447045),(14.180216,12.46353),(14.1787,12.468336),(14.177356,12.545592),(14.121132,12.81179),(14.064908,13.077988),(14.418169,13.081141),(14.435429,13.070289),(14.481041,13.000508),(14.482144,12.99882),(14.496303,12.983808),(14.504675,12.969003),(14.507672,12.95244),(14.506122,12.93208),(14.490619,12.886346),(14.490102,12.873608),(14.500437,12.859113),(14.53103,12.836246),(14.549013,12.818211),(14.549013,12.780461),(14.55108,12.771573),(14.554801,12.766896),(14.560692,12.766224)] +Republic of Congo 
[(17.627275,3.626317),(17.70913,3.626421),(17.714505,3.627506),(17.72608,3.631847),(17.733212,3.632208),(17.73993,3.629159),(17.749645,3.618075),(17.755949,3.614509),(17.765148,3.614354),(17.784475,3.61823),(17.793156,3.617506),(17.808349,3.608592),(17.821165,3.593994),(17.83057,3.575674),(17.835324,3.555831),(17.842146,3.540147),(17.856408,3.537305),(17.875322,3.542731),(17.913562,3.561334),(17.919247,3.565597),(17.924518,3.563453),(17.959865,3.541077),(17.969476,3.537511),(17.981879,3.538803),(18.013091,3.55397),(18.047094,3.563143),(18.082028,3.563453),(18.116444,3.552007),(18.134014,3.534824),(18.146727,3.494904),(18.159852,3.480305),(18.181556,3.477386),(18.202537,3.487333),(18.220004,3.503947),(18.231579,3.521156),(18.235403,3.530923),(18.242535,3.561127),(18.248529,3.574899),(18.255867,3.577225),(18.264859,3.574744),(18.275918,3.574382),(18.355603,3.581462),(18.361597,3.58017),(18.372863,3.574537),(18.379891,3.574382),(18.383715,3.578),(18.387849,3.591668),(18.392293,3.595854),(18.403662,3.597353),(18.420612,3.596371),(18.429707,3.600221),(18.448,3.618023),(18.459059,3.631382),(18.472081,3.632053),(18.497093,3.61146),(18.513422,3.592237),(18.55435,3.526375),(18.570267,3.489607),(18.582462,3.477386),(18.626387,3.476869),(18.634552,3.449222),(18.642407,3.323829),(18.631141,3.17898),(18.62184,3.156656),(18.619669,3.14477),(18.613055,3.128182),(18.597345,3.114643),(18.562102,3.096815),(18.541431,3.08281),(18.528409,3.064775),(18.495336,2.966073),(18.479626,2.941372),(18.475802,2.931192),(18.471771,2.909229),(18.467327,2.898894),(18.44924,2.873004),(18.425159,2.822154),(18.421852,2.808822),(18.418338,2.781743),(18.393223,2.713686),(18.316742,2.584339),(18.313745,2.575968),(18.303513,2.572557),(18.246152,2.518917),(18.233026,2.502587),(18.225171,2.486464),(18.214113,2.427657),(18.207808,2.407968),(18.199126,2.389106),(18.187964,2.370916),(18.174012,2.336138),(18.16378,2.321048),(18.146727,2.315932),(18.142696,2.3057),(18.118615,2.276968),(18.111173,2.265289),(18.072416,2.160024),(18.072416,2.145193),(18.078307,2.114807),(18.080787,2.084163),(18.076033,1.988484),(18.081718,1.942415),(18.081511,1.92655),(18.078307,1.90929),(18.068178,1.876217),(18.066111,1.858983),(18.073656,1.572334),(18.071382,1.551457),(18.012264,1.421956),(18.006683,1.402396),(18.002033,1.365783),(17.966996,1.253723),(17.963379,1.191453),(17.956247,1.171661),(17.914803,1.102001),(17.892478,1.077403),(17.873255,1.039628),(17.86664,1.016632),(17.878216,0.947178),(17.877906,0.867907),(17.907671,0.740473),(17.907568,0.713084),(17.892582,0.63588),(17.893202,0.607664),(17.8993,0.583273),(17.91842,0.535214),(17.933096,0.527049),(17.943638,0.510823),(17.949943,0.491496),(17.95201,0.474339),(17.945498,0.377291),(17.939917,0.361271),(17.854031,0.241072),(17.77538,0.026408),(17.757086,-0.009456),(17.716365,-0.195801),(17.713678,-0.220916),(17.716262,-0.246237),(17.740756,-0.346179),(17.749541,-0.523429),(17.741997,-0.533351),(17.728044,-0.553092),(17.664172,-0.612726),(17.658694,-0.619754),(17.654457,-0.627506),(17.652493,-0.642182),(17.650943,-0.646316),(17.647636,-0.649933),(17.635853,-0.658512),(17.623038,-0.673291),(17.60216,-0.688691),(17.545863,-0.756491),(17.411474,-0.918341),(17.385429,-0.942422),(17.364965,-0.967227),(17.357317,-0.973531),(17.330446,-0.986554),(17.327655,-0.991721),(17.308018,-1.014976),(17.297683,-1.021074),(17.245386,-1.035853),(17.194847,-1.042984),(17.175416,-1.048255),(17.10865,-1.075644),(17.060488,-1.089183),(17.028655,-1.094454),(17.022557,-1.097038),(17.012015,-1.104273),(17.005918,-1.106753),(17.
003127,-1.112851),(16.979563,-1.140239),(16.940495,-1.174759),(16.909179,-1.211139),(16.862567,-1.248863),(16.839726,-1.262506),(16.833215,-1.269017),(16.828667,-1.276252),(16.817815,-1.304157),(16.794664,-1.345498),(16.779265,-1.382395),(16.762728,-1.405856),(16.725521,-1.47996),(16.692552,-1.522955),(16.669504,-1.56936),(16.655971,-1.59029),(16.636224,-1.62083),(16.628886,-1.637263),(16.623822,-1.6538),(16.622478,-1.665169),(16.623099,-1.699172),(16.617207,-1.728731),(16.604185,-1.754052),(16.524971,-1.858535),(16.522536,-1.861746),(16.517162,-1.873735),(16.513958,-1.887377),(16.507757,-1.890271),(16.49029,-1.903604),(16.474787,-1.910115),(16.470033,-1.912699),(16.460938,-1.920864),(16.445642,-1.940501),(16.382596,-2.002202),(16.340222,-2.034138),(16.284411,-2.087572),(16.256093,-2.106795),(16.231288,-2.129016),(16.206793,-2.160952),(16.194598,-2.182036),(16.187363,-2.204567),(16.188603,-2.227718),(16.19005,-2.23547),(16.189947,-2.241878),(16.176924,-2.286836),(16.174857,-2.303062),(16.178268,-2.318462),(16.187466,-2.332311),(16.189843,-2.337479),(16.191704,-2.346884),(16.192324,-2.39515),(16.225603,-2.635239),(16.204519,-2.782723),(16.180852,-2.85414),(16.178165,-2.868093),(16.177028,-2.882459),(16.183981,-2.940449),(16.188293,-2.976406),(16.184469,-3.026119),(16.189843,-3.082343),(16.185709,-3.136397),(16.189947,-3.153036),(16.199248,-3.171847),(16.199455,-3.194791),(16.193357,-3.238406),(16.198628,-3.25949),(16.210411,-3.271065),(16.222089,-3.279127),(16.227464,-3.289876),(16.225603,-3.313233),(16.220746,-3.331733),(16.207723,-3.361913),(16.158424,-3.447282),(16.137857,-3.500715),(16.115533,-3.544227),(16.053314,-3.619985),(16.012697,-3.702254),(16.005255,-3.724784),(15.991096,-3.738944),(15.974249,-3.786486),(15.94314,-3.840746),(15.936112,-3.848498),(15.930324,-3.858626),(15.912134,-3.917434),(15.888019,-3.940523),(15.882989,-3.945339),(15.844231,-3.965597),(15.799996,-3.978826),(15.755141,-3.985647),(15.711216,-3.992158),(15.566832,-4.038047),(15.549056,-4.046212),(15.533759,-4.057994),(15.52115,-4.076804),(15.490351,-4.175196),(15.484977,-4.183878),(15.462859,-4.207132),(15.458932,-4.21385),(15.45025,-4.217261),(15.425446,-4.23018),(15.386585,-4.244339),(15.275274,-4.300357),(15.25574,-4.313069),(15.228352,-4.324851),(15.20396,-4.339011),(15.193418,-4.358751),(15.188044,-4.372187),(15.150678,-4.425151),(15.09792,-4.499931),(15.076113,-4.520085),(15.022783,-4.54706),(15.014618,-4.558015),(15.00821,-4.571038),(14.931005,-4.646899),(14.849047,-4.783738),(14.841399,-4.800171),(14.831167,-4.815054),(14.815457,-4.827973),(14.768431,-4.84823),(14.702182,-4.889571),(14.672727,-4.899907),(14.634486,-4.903627),(14.623324,-4.899493),(14.592111,-4.88058),(14.579606,-4.876342),(14.575058,-4.873655),(14.566893,-4.860529),(14.562139,-4.855878),(14.553871,-4.855155),(14.532477,-4.856188),(14.528033,-4.852157),(14.519868,-4.841305),(14.501471,-4.840685),(14.481937,-4.845853),(14.470052,-4.852157),(14.449484,-4.873965),(14.437289,-4.880166),(14.418169,-4.883163),(14.400702,-4.886264),(14.396464,-4.855878),(14.412588,-4.778157),(14.408763,-4.767822),(14.403699,-4.75883),(14.399668,-4.749321),(14.399048,-4.737539),(14.401529,-4.728134),(14.409177,-4.710564),(14.410934,-4.701056),(14.406903,-4.676044),(14.395844,-4.660128),(14.380962,-4.646175),(14.365769,-4.627262),(14.3575,-4.589124),(14.349956,-4.566904),(14.350783,-4.559049),(14.358947,-4.544579),(14.37476,-4.530213),(14.412588,-4.514504),(14.424576,-4.502308),(14.435739,-4.486702),(14.452482,-4.467995),(14.466538,-4.448048),(14.469742,-4.429341),(
14.46323,-4.422623),(14.428607,-4.400402),(14.409074,-4.377561),(14.392744,-4.352756),(14.386026,-4.338287),(14.385716,-4.332913),(14.389126,-4.327538),(14.393467,-4.313172),(14.395224,-4.300357),(14.395328,-4.289711),(14.392847,-4.281753),(14.387059,-4.277102),(14.368559,-4.278446),(14.325461,-4.301803),(14.279986,-4.312759),(14.259625,-4.326918),(14.222212,-4.359784),(14.184694,-4.377458),(14.146351,-4.38986),(14.058397,-4.400505),(14.021087,-4.415388),(13.991631,-4.455076),(13.989874,-4.45797),(13.987704,-4.46014),(13.971788,-4.471612),(13.954941,-4.491869),(13.944502,-4.498071),(13.934581,-4.498484),(13.905436,-4.4901),(13.904401,-4.489802),(13.87629,-4.491663),(13.867091,-4.489286),(13.856136,-4.479674),(13.84208,-4.453422),(13.832881,-4.441227),(13.810454,-4.426654),(13.78999,-4.424587),(13.726531,-4.446704),(13.714956,-4.453009),(13.711028,-4.464481),(13.721777,-4.561012),(13.71816,-4.580339),(13.702605,-4.62075),(13.699091,-4.642661),(13.712475,-4.685656),(13.705861,-4.694338),(13.698368,-4.693511),(13.690099,-4.691134),(13.681934,-4.695371),(13.6778,-4.706327),(13.680591,-4.724723),(13.67532,-4.731751),(13.666535,-4.732682),(13.643694,-4.726274),(13.633979,-4.725654),(13.613101,-4.739193),(13.591397,-4.779397),(13.57986,-4.783031),(13.570727,-4.785908),(13.558634,-4.780431),(13.538894,-4.759967),(13.526182,-4.754282),(13.51471,-4.755936),(13.482257,-4.77485),(13.492385,-4.783531),(13.496416,-4.792109),(13.493522,-4.799654),(13.482257,-4.805545),(13.430787,-4.843372),(13.403192,-4.876859),(13.39265,-4.885334),(13.371359,-4.841719),(13.364124,-4.806889),(13.35875,-4.794797),(13.349345,-4.785598),(13.334565,-4.777537),(13.274001,-4.755833),(13.255811,-4.743947),(13.208785,-4.703123),(13.18212,-4.686069),(13.172405,-4.676148),(13.160002,-4.643385),(13.142122,-4.61734),(13.134061,-4.602354),(13.112357,-4.576929),(13.087966,-4.579513),(13.071532,-4.60163),(13.073703,-4.635323),(13.027194,-4.612172),(12.961565,-4.533831),(12.921981,-4.502308),(12.902447,-4.480191),(12.884464,-4.432131),(12.869891,-4.411978),(12.854285,-4.403089),(12.842296,-4.40433),(12.830307,-4.409704),(12.814804,-4.412908),(12.801058,-4.410014),(12.782656,-4.400071),(12.775426,-4.396165),(12.761681,-4.391204),(12.737909,-4.40495),(12.726541,-4.427997),(12.718479,-4.451872),(12.70494,-4.467995),(12.686853,-4.47833),(12.670523,-4.491353),(12.656571,-4.507372),(12.634867,-4.54768),(12.623808,-4.559255),(12.608098,-4.565043),(12.497201,-4.5853),(12.429091,-4.607521),(12.414105,-4.608865),(12.387026,-4.605454),(12.377621,-4.619407),(12.378862,-4.662298),(12.374107,-4.683176),(12.332353,-4.765858),(12.321914,-4.778157),(12.307651,-4.783635),(12.273752,-4.787149),(12.258559,-4.790766),(12.244089,-4.796347),(12.235201,-4.803995),(12.222592,-4.780637),(12.213497,-4.769165),(12.204092,-4.763481),(12.192206,-4.763481),(12.187555,-4.768028),(12.15717,-4.870554),(12.130815,-4.912929),(12.018072,-5.0086),(12.009608,-5.019631),(11.994314,-5.005792),(11.980479,-4.989435),(11.957856,-4.951918),(11.920258,-4.906427),(11.895728,-4.882645),(11.886241,-4.855239),(11.830586,-4.797022),(11.82021,-4.7848),(11.819266,-4.77916),(11.824069,-4.77236),(11.829961,-4.773533),(11.843319,-4.788783),(11.848424,-4.786044),(11.853137,-4.773918),(11.850367,-4.752839),(11.824067,-4.718683),(11.781633,-4.680409),(11.776906,-4.667878),(11.777688,-4.657701),(11.801321,-4.657938),(11.820163,-4.637281),(11.824624,-4.614767),(11.806409,-4.577118),(11.692982,-4.463491),(11.589122,-4.374688),(11.497895,-4.294041),(11.450587,-4.26473),(11.401403,-4.221366),(11.3824
98,-4.191013),(11.377696,-4.184666),(11.364757,-4.162774),(11.362071,-4.15309),(11.363129,-4.131117),(11.362153,-4.119806),(11.358246,-4.111912),(11.270181,-4.041725),(11.172425,-3.977474),(11.139095,-3.9531),(11.114016,-3.936856),(11.129623,-3.907857),(11.196091,-3.728815),(11.212524,-3.697293),(11.238569,-3.675072),(11.272676,-3.652954),(11.302441,-3.62846),(11.334688,-3.608616),(11.375822,-3.600348),(11.413959,-3.587635),(11.436283,-3.559006),(11.45499,-3.528207),(11.482585,-3.509294),(11.532712,-3.516528),(11.569815,-3.54495),(11.64919,-3.673315),(11.665933,-3.691401),(11.687638,-3.69874),(11.742105,-3.691712),(11.762155,-3.693985),(11.819723,-3.710418),(11.838326,-3.710935),(11.840808,-3.709989),(11.858377,-3.703287),(11.879977,-3.687164),(11.897651,-3.66608),(11.905505,-3.643549),(11.898477,-3.622362),(11.879254,-3.612026),(11.855586,-3.604378),(11.834915,-3.592079),(11.824994,-3.571202),(11.827784,-3.548051),(11.884835,-3.413279),(11.898374,-3.389198),(11.941472,-3.356952),(11.954908,-3.335454),(11.944366,-3.303208),(11.905712,-3.285121),(11.858273,-3.272099),(11.821893,-3.254839),(11.781585,-3.221663),(11.760501,-3.209054),(11.703864,-3.185386),(11.696113,-3.179598),(11.687534,-3.17102),(11.685674,-3.167196),(11.687328,-3.161511),(11.688258,-3.092265),(11.693219,-3.074695),(11.706448,-3.04989),(11.716886,-3.043586),(11.730632,-3.044723),(11.75368,-3.042345),(11.778381,-3.027153),(11.778381,-3.005655),(11.763705,-2.982608),(11.687638,-2.901786),(11.653118,-2.851659),(11.636788,-2.833263),(11.616841,-2.826751),(11.588419,-2.835433),(11.547905,-2.863855),(11.530748,-2.866749),(11.52558,-2.843288),(11.535566,-2.804996),(11.537466,-2.797709),(11.61033,-2.678853),(11.619528,-2.653532),(11.624386,-2.632758),(11.624386,-2.611984),(11.619011,-2.586456),(11.609606,-2.564028),(11.584285,-2.523617),(11.576533,-2.502533),(11.57612,-2.485067),(11.583355,-2.42967),(11.579737,-2.407656),(11.55824,-2.349365),(11.566198,-2.329727),(11.568368,-2.329107),(11.605369,-2.330348),(11.655908,-2.345644),(11.669654,-2.352672),(11.675855,-2.360423),(11.679886,-2.368795),(11.687534,-2.377683),(11.721331,-2.404142),(11.739521,-2.413753),(11.756057,-2.415097),(11.772697,-2.405382),(11.806597,-2.374169),(11.828818,-2.365281),(11.864888,-2.35815),(11.907056,-2.34461),(11.938372,-2.329107),(11.948707,-2.335205),(11.966794,-2.348538),(12.000693,-2.390602),(12.031286,-2.411583),(12.062292,-2.414994),(12.45927,-2.329934),(12.447901,-2.30792),(12.455136,-2.275571),(12.468572,-2.241154),(12.475496,-2.212422),(12.468365,-2.171081),(12.470225,-2.156301),(12.477357,-2.141108),(12.49534,-2.11279),(12.499784,-2.09553),(12.46082,-2.077857),(12.440046,-2.045817),(12.431468,-2.005406),(12.423097,-1.897609),(12.426921,-1.88407),(12.438083,-1.886654),(12.450009,-1.899298),(12.468985,-1.919417),(12.486555,-1.924894),(12.499474,-1.917866),(12.503815,-1.907738),(12.504539,-1.896886),(12.506296,-1.888204),(12.512393,-1.875492),(12.517354,-1.86898),(12.523452,-1.863296),(12.572441,-1.83043),(12.58319,-1.826502),(12.632386,-1.825159),(12.674967,-1.842832),(12.741014,-1.890672),(12.745454,-1.893888),(12.787725,-1.907531),(12.804572,-1.919107),(12.816148,-1.943498),(12.838162,-2.016052),(12.847153,-2.037239),(12.852941,-2.044577),(12.868547,-2.059873),(12.872992,-2.065868),(12.876919,-2.088709),(12.875162,-2.089122),(12.875162,-2.093773),(12.874128,-2.099767),(12.876299,-2.104935),(12.895109,-2.109793),(12.894489,-2.11558),(12.890562,-2.122195),(12.888701,-2.126949),(12.884464,-2.134494),(12.882707,-2.140798),(12.885084,-2.147413),(12.898
21,-2.160229),(12.902034,-2.167257),(12.902757,-2.177592),(12.907098,-2.195369),(12.921361,-2.194852),(12.945649,-2.18245),(12.95433,-2.190925),(12.973037,-2.221517),(12.982442,-2.232989),(12.987507,-2.259964),(12.990194,-2.268026),(12.997325,-2.275674),(13.003423,-2.275467),(13.008177,-2.276501),(13.010658,-2.287766),(13.002286,-2.296655),(12.996705,-2.304819),(12.994638,-2.314948),(12.993914,-2.323526),(12.991124,-2.332311),(12.987093,-2.340683),(12.982442,-2.347711),(12.97128,-2.357116),(12.966113,-2.366418),(12.969006,-2.372412),(12.982442,-2.371689),(12.994008,-2.360123),(13.012105,-2.342026),(13.025024,-2.338202),(13.049828,-2.342647),(13.114321,-2.371895),(13.135198,-2.37696),(13.146153,-2.374686),(13.165273,-2.363524),(13.176849,-2.361974),(13.186668,-2.364144),(13.361024,-2.428843),(13.3788,-2.43091),(13.398024,-2.426156),(13.419728,-2.42967),(13.424896,-2.429773),(13.438952,-2.425432),(13.44536,-2.427086),(13.453731,-2.439385),(13.458899,-2.441039),(13.469441,-2.435044),(13.473988,-2.426983),(13.476779,-2.417681),(13.482257,-2.408069),(13.492179,-2.398457),(13.524631,-2.37603),(13.533106,-2.367658),(13.547679,-2.349365),(13.557601,-2.340476),(13.569487,-2.334482),(13.593361,-2.326834),(13.603386,-2.320012),(13.60783,-2.311124),(13.610724,-2.289937),(13.617442,-2.279601),(13.69351,-2.205911),(13.701571,-2.194749),(13.705241,-2.18431),(13.706791,-2.16457),(13.709478,-2.153511),(13.72002,-2.133564),(13.740897,-2.102455),(13.743171,-2.09708),(13.746375,-2.09832),(13.756917,-2.105865),(13.770663,-2.119094),(13.782755,-2.138008),(13.791437,-2.159092),(13.800532,-2.200433),(13.823269,-2.246838),(13.835568,-2.265235),(13.853862,-2.283839),(13.86058,-2.293657),(13.863887,-2.321459),(13.871639,-2.328797),(13.882181,-2.333862),(13.892516,-2.3412),(13.906158,-2.359803),(13.901921,-2.366314),(13.890035,-2.373652),(13.875876,-2.403831),(13.855102,-2.41427),(13.847661,-2.423055),(13.846937,-2.461606),(13.8458,-2.466567),(13.86213,-2.480933),(13.884971,-2.488477),(13.909466,-2.489718),(13.930446,-2.485377),(13.947706,-2.474938),(13.957318,-2.470597),(13.967033,-2.470597),(13.976232,-2.477315),(13.979229,-2.495609),(13.985637,-2.5015),(13.993285,-2.501603),(14.039277,-2.487237),(14.045582,-2.4861),(14.055607,-2.486927),(14.071006,-2.495816),(14.080928,-2.49995),(14.090023,-2.498606),(14.096018,-2.491061),(14.100553,-2.478065),(14.106766,-2.460262),(14.115241,-2.451374),(14.124853,-2.443829),(14.131881,-2.435147),(14.131468,-2.408896),(14.136015,-2.398767),(14.14418,-2.390189),(14.1633,-2.373963),(14.172395,-2.367865),(14.182317,-2.363317),(14.230066,-2.352155),(14.234407,-2.354946),(14.226966,-2.323113),(14.151208,-2.259034),(14.148004,-2.224824),(14.161543,-2.214282),(14.195857,-2.201777),(14.209189,-2.187204),(14.209396,-2.179866),(14.203195,-2.161986),(14.204332,-2.152167),(14.209706,-2.147413),(14.226966,-2.141315),(14.233787,-2.134494),(14.238438,-2.120231),(14.244019,-2.067521),(14.250634,-2.047781),(14.251667,-2.037859),(14.24681,-2.029384),(14.238645,-2.02339),(14.233477,-2.018222),(14.232857,-2.012021),(14.238128,-2.002616),(14.237611,-1.991143),(14.240092,-1.982255),(14.245053,-1.974814),(14.252184,-1.967579),(14.368571,-1.92212),(14.37073,-1.921277),(14.397291,-1.906808),(14.404526,-1.893062),(14.396878,-1.849447),(14.397498,-1.840972),(14.403596,-1.815237),(14.406696,-1.735552),(14.413104,-1.713951),(14.425197,-1.700515),(14.448761,-1.686253),(14.441113,-1.679638),(14.420029,-1.676124),(14.402976,-1.670853),(14.390883,-1.630339),(14.371453,-1.619487),(14.364839,-1.612045),(14.37383,-1.
604087),(14.394294,-1.596852),(14.403699,-1.592201),(14.413208,-1.585277),(14.419926,-1.575148),(14.423956,-1.563573),(14.430364,-1.554891),(14.463127,-1.54869),(14.462714,-1.5336),(14.451552,-1.515617),(14.438426,-1.502595),(14.451035,-1.473656),(14.452688,-1.449574),(14.456306,-1.440169),(14.467881,-1.429007),(14.480594,-1.420842),(14.487725,-1.40937),(14.482144,-1.388596),(14.447004,-1.365342),(14.432328,-1.350149),(14.426333,-1.331029),(14.429124,-1.31997),(14.436049,-1.312735),(14.444317,-1.306327),(14.451552,-1.298163),(14.457133,-1.287621),(14.46075,-1.278009),(14.464781,-1.255581),(14.465814,-1.224369),(14.464161,-1.215377),(14.456409,-1.202458),(14.439435,-1.185733),(14.435325,-1.181684),(14.428297,-1.169488),(14.427987,-1.16091),(14.433568,-1.14365),(14.434395,-1.135072),(14.430984,-1.123393),(14.420339,-1.101792),(14.391814,-1.023347),(14.39202,-1.008981),(14.402562,-0.987691),(14.407523,-0.944489),(14.412794,-0.921545),(14.428194,-0.896947),(14.445867,-0.877516),(14.457133,-0.856122),(14.452895,-0.825737),(14.446177,-0.800312),(14.44597,-0.775714),(14.451552,-0.751219),(14.492169,-0.663369),(14.498991,-0.630916),(14.489999,-0.600944),(14.466538,-0.552678),(14.454239,-0.539553),(14.414448,-0.512164),(14.39233,-0.482398),(14.358637,-0.461314),(14.320603,-0.444468),(14.292698,-0.436406),(14.275542,-0.439094),(14.260039,-0.446432),(14.244742,-0.451599),(14.228516,-0.447775),(14.210223,-0.440334),(14.193066,-0.440437),(14.17653,-0.446122),(14.159683,-0.455113),(14.157926,-0.417699),(14.150588,-0.376462),(14.138289,-0.336361),(14.121443,-0.302151),(14.095398,-0.276623),(14.058811,-0.258846),(14.01871,-0.252645),(13.98233,-0.262257),(13.975095,-0.252542),(13.967033,-0.245824),(13.957318,-0.242),(13.921558,-0.237865),(13.913807,-0.240553),(13.909983,-0.247167),(13.907295,-0.254402),(13.903058,-0.258846),(13.889829,-0.263704),(13.883421,-0.264944),(13.877426,-0.2605),(13.831228,-0.205413),(13.832158,-0.177094),(13.8427,-0.152289),(13.888054,-0.092088),(13.900474,-0.075602),(13.910706,-0.065886),(13.921145,-0.052967),(13.920111,-0.042425),(13.914943,-0.031263),(13.91298,-0.016587),(13.917217,-0.006665),(13.931377,0.011628),(13.932927,0.022067),(13.925072,0.034366),(13.913083,0.036536),(13.901404,0.036846),(13.894893,0.043668),(13.890656,0.115911),(13.885901,0.140303),(13.876186,0.171308),(13.871225,0.196423),(13.875773,0.220298),(13.909776,0.270941),(13.945226,0.350419),(13.966517,0.373053),(14.040414,0.413774),(14.059121,0.435272),(14.063048,0.455942),(14.056847,0.476406),(14.045582,0.497283),(14.075967,0.539451),(14.125473,0.546169),(14.181594,0.539865),(14.231823,0.542552),(14.245259,0.54772),(14.250944,0.551854),(14.254664,0.557125),(14.328768,0.621204),(14.335383,0.642598),(14.33838,0.693912),(14.367319,0.735512),(14.413414,0.775044),(14.447624,0.81623),(14.44101,0.862791),(14.437599,0.875142),(14.445144,0.883255),(14.457029,0.890334),(14.466228,0.899636),(14.468088,0.913227),(14.46044,0.921237),(14.436565,0.933277),(14.417548,0.950589),(14.400805,0.970329),(14.386749,0.99224),(14.375691,1.015598),(14.360084,1.066758),(14.346442,1.087118),(14.298486,1.109236),(14.28288,1.130113),(14.274715,1.155641),(14.275748,1.231089),(14.273061,1.257237),(14.258178,1.319662),(14.252184,1.330824),(14.241435,1.339196),(14.230066,1.34395),(14.221488,1.349479),(14.219939,1.355018),(14.218387,1.360564),(14.206502,1.362838),(14.198957,1.368316),(14.192549,1.374956),(14.183868,1.380847),(14.14449,1.391622),(14.12444,1.393585),(14.106456,1.388934),(14.070696,1.375111),(14.049509,1.376403),(14.033799,1.387
643),(14.01902,1.402138),(14.00083,1.413016),(13.98357,1.41684),(13.794744,1.434436),(13.780585,1.431025),(13.779758,1.426271),(13.770353,1.400432),(13.758157,1.376506),(13.75485,1.374336),(13.744101,1.370279),(13.741104,1.371313),(13.729425,1.37692),(13.724361,1.377953),(13.71754,1.375369),(13.705551,1.36599),(13.699091,1.362786),(13.680694,1.359401),(13.671496,1.358781),(13.662091,1.359195),(13.642247,1.357386),(13.628088,1.348136),(13.615065,1.337439),(13.579202,1.323486),(13.556361,1.29279),(13.543442,1.283489),(13.509955,1.277287),(13.479776,1.277132),(13.448874,1.283075),(13.41394,1.295529),(13.398748,1.296925),(13.37694,1.294444),(13.357406,1.28907),(13.349345,1.281577),(13.342937,1.263852),(13.325987,1.259304),(13.307797,1.25672),(13.289194,1.233983),(13.2709,1.226024),(13.25023,1.221787),(13.215813,1.224371),(13.195453,1.221115),(13.184911,1.22251),(13.184885,1.222623),(13.18181,1.236463),(13.172301,1.242871),(13.160313,1.247729),(13.150184,1.256617),(13.169201,1.258581),(13.176642,1.27088),(13.180363,1.284832),(13.188321,1.291447),(13.202377,1.293411),(13.219327,1.299198),(13.23359,1.30881),(13.242995,1.328861),(13.249093,1.332375),(13.251263,1.337646),(13.242891,1.34979),(13.239584,1.356507),(13.239171,1.364181),(13.239791,1.372346),(13.238447,1.41963),(13.239481,1.427976),(13.223461,1.438001),(13.205478,1.48637),(13.184911,1.496886),(13.190905,1.511072),(13.189665,1.52474),(13.182637,1.536548),(13.171268,1.545256),(13.164447,1.53784),(13.145946,1.56342),(13.129617,1.592462),(13.146773,1.612254),(13.149977,1.636284),(13.143983,1.681811),(13.147807,1.702869),(13.154628,1.721653),(13.157522,1.739146),(13.150184,1.756302),(13.162793,1.76364),(13.173025,1.776947),(13.180673,1.792011),(13.191008,1.828727),(13.192455,1.844746),(13.188321,1.85658),(13.183154,1.864048),(13.175092,1.882108),(13.162173,1.899833),(13.166307,1.904381),(13.178193,1.907792),(13.184911,1.92345),(13.191732,1.935077),(13.19845,1.943112),(13.205064,1.949339),(13.210232,1.956316),(13.212299,1.966419),(13.206615,1.985151),(13.206925,1.993575),(13.21602,1.997166),(13.221291,2.000835),(13.239481,2.02073),(13.245475,2.025304),(13.281132,2.044967),(13.291364,2.068945),(13.294878,2.101087),(13.294568,2.161058),(13.466547,2.159508),(13.620336,2.158112),(13.662266,2.157761),(13.823993,2.156407),(14.002793,2.154805),(14.16175,2.15341),(14.266963,2.152428),(14.276265,2.153565),(14.320293,2.17067),(14.383649,2.162091),(14.399358,2.153151),(14.418685,2.135271),(14.432948,2.132119),(14.438736,2.133979),(14.4592,2.146537),(14.538058,2.163383),(14.54705,2.166536),(14.550254,2.175166),(14.54798,2.184364),(14.547773,2.193666),(14.557075,2.202399),(14.562139,2.208807),(14.587874,2.201727),(14.620947,2.16483),(14.644821,2.154185),(14.636346,2.13832),(14.646062,2.133359),(14.672003,2.134341),(14.685956,2.125866),(14.705696,2.105532),(14.720476,2.099589),(14.713654,2.12398),(14.720889,2.127597),(14.747864,2.113205),(14.733498,2.102018),(14.727297,2.099589),(14.737116,2.096075),(14.747554,2.097186),(14.756339,2.099589),(14.761403,2.099589),(14.76254,2.092535),(14.76161,2.070185),(14.764814,2.065456),(14.826309,2.072252),(14.841192,2.075921),(14.84822,2.085197),(14.850184,2.097367),(14.850184,2.109795),(14.857005,2.115944),(14.871474,2.101656),(14.8846,2.082096),(14.887804,2.072252),(14.907338,2.058894),(14.896589,2.030937),(14.892455,2.00639),(14.932142,2.003341),(14.953846,2.005822),(14.963045,2.010421),(14.971003,2.034451),(14.980718,2.033107),(15.028261,1.995306),(15.04366,1.990862),(15.06309,1.997166),(15.074873,1.979364),(15.091616
,1.984428),(15.107636,2.000835),(15.117764,2.01701),(15.123862,2.01701),(15.138538,2.011506),(15.151664,2.040988),(15.1836,2.029335),(15.197036,2.034967),(15.212642,2.039618),(15.233726,2.031298),(15.238687,2.025304),(15.244165,2.010059),(15.248092,2.003341),(15.25357,2.000318),(15.287573,1.987787),(15.295428,1.981534),(15.301112,1.973757),(15.303283,1.966419),(15.301939,1.953319),(15.300182,1.94766),(15.301629,1.943578),(15.309484,1.935077),(15.314341,1.93425),(15.337802,1.92376),(15.337389,1.92146),(15.349171,1.923295),(15.367981,1.932855),(15.388135,1.939547),(15.439192,1.969829),(15.48291,1.97585),(15.526628,1.967814),(15.611791,1.942544),(15.706772,1.931976),(15.711113,1.927377),(15.721552,1.921615),(15.734057,1.916706),(15.757415,1.912985),(15.76465,1.908722),(15.80289,1.859965),(15.815396,1.853428),(15.842061,1.844152),(15.865936,1.828468),(15.875651,1.823766),(15.882369,1.816712),(15.884952,1.801648),(15.889913,1.793277),(15.901799,1.793742),(15.915235,1.798961),(15.92588,1.804749),(15.928981,1.788962),(15.94376,1.781624),(15.963191,1.77736),(15.98045,1.770591),(15.990786,1.776172),(16.002775,1.775913),(16.013833,1.771314),(16.021998,1.763795),(16.026339,1.752297),(16.024892,1.742479),(16.021998,1.733048),(16.021998,1.72279),(16.030473,1.705401),(16.069851,1.654551),(16.114809,1.719069),(16.123594,1.721601),(16.145092,1.714315),(16.15584,1.719069),(16.159044,1.734805),(16.135066,1.845315),(16.101373,1.932803),(16.083493,1.963008),(16.074915,1.970604),(16.068714,1.973602),(16.06489,1.97895),(16.063753,2.015873),(16.06644,2.025485),(16.081736,2.046827),(16.08153,2.055638),(16.078119,2.063441),(16.076672,2.072252),(16.076155,2.07282),(16.075535,2.075456),(16.075535,2.079952),(16.076672,2.08592),(16.079669,2.090261),(16.084113,2.091605),(16.088144,2.09176),(16.090315,2.092742),(16.092485,2.106901),(16.089798,2.116151),(16.08556,2.12491),(16.083493,2.137752),(16.085767,2.149172),(16.097756,2.171548),(16.104371,2.193562),(16.12039,2.203329),(16.140647,2.199815),(16.159251,2.18209),(16.199455,2.212166),(16.207723,2.22307),(16.202349,2.23201),(16.196665,2.236454),(16.248031,2.345594),(16.293506,2.442074),(16.325649,2.510494),(16.371124,2.606922),(16.413706,2.697201),(16.427492,2.726493),(16.450603,2.775594),(16.479128,2.836314),(16.483779,2.860395),(16.474994,2.878378),(16.460525,2.894605),(16.448329,2.913467),(16.445538,2.95765),(16.484192,3.031909),(16.490807,3.067824),(16.483159,3.090148),(16.47272,3.110612),(16.464762,3.1318),(16.465485,3.156501),(16.477061,3.179394),(16.511477,3.227246),(16.517162,3.248692),(16.517782,3.273238),(16.542793,3.336154),(16.551682,3.419767),(16.567701,3.464389),(16.598501,3.501674),(16.639428,3.528597),(16.686351,3.542214),(16.709398,3.543712),(16.729552,3.542214),(16.811924,3.521672),(16.833008,3.522861),(16.852645,3.532809),(16.854609,3.541335),(16.851818,3.551903),(16.853162,3.560094),(16.867838,3.561541),(16.877243,3.559164),(16.895537,3.552368),(16.904735,3.550637),(16.932434,3.553919),(16.942252,3.553092),(16.953208,3.546891),(16.960546,3.538441),(16.968607,3.533481),(16.98194,3.537976),(16.989691,3.539475),(17.003127,3.537718),(17.009225,3.538131),(17.017596,3.5418),(17.028862,3.551541),(17.035787,3.556141),(17.052426,3.56397),(17.061418,3.56614),(17.088393,3.564435),(17.102139,3.566605),(17.129011,3.57707),(17.142757,3.580842),(17.182858,3.579602),(17.197224,3.58172),(17.213967,3.589291),(17.222959,3.598515),(17.231123,3.609109),(17.244973,3.620607),(17.259442,3.625904),(17.272258,3.62456),(17.298096,3.615904),(17.334373,3.618514),(17.355767,3.63843
5),(17.375714,3.663447),(17.414489,3.683415),(17.427494,3.690112),(17.442687,3.701274),(17.458913,3.708276),(17.481858,3.704762),(17.486198,3.700292),(17.490126,3.68714),(17.494983,3.683213),(17.508936,3.681766),(17.513484,3.679182),(17.550484,3.648151),(17.586761,3.632234),(17.627275,3.626317)] +Coral Sea Islands [(154.391287,-21.030043),(154.38852,-21.029067),(154.391287,-21.028741),(154.391287,-21.030043)] +Curaçao [(-68.780751,12.097805),(-68.745229,12.062242),(-68.739735,12.053046),(-68.748647,12.047024),(-68.769846,12.042914),(-68.794749,12.041327),(-68.814809,12.043158),(-68.841176,12.055487),(-68.893422,12.090522),(-68.944732,12.105618),(-68.977773,12.124579),(-69.007436,12.148179),(-69.020253,12.169745),(-69.027211,12.19009),(-69.043609,12.196682),(-69.062896,12.198432),(-69.078603,12.20425),(-69.091908,12.219875),(-69.12267,12.269029),(-69.134389,12.27733),(-69.148101,12.28441),(-69.159535,12.293362),(-69.164296,12.306952),(-69.161977,12.342841),(-69.163808,12.361884),(-69.171742,12.378892),(-69.166127,12.387844),(-69.161204,12.391506),(-69.156239,12.390611),(-69.150624,12.385728),(-69.107777,12.367825),(-69.089589,12.356879),(-69.071523,12.341376),(-69.060414,12.322577),(-69.049957,12.276109),(-69.040761,12.256049),(-69.004018,12.221666),(-68.961293,12.201361),(-68.835276,12.171047),(-68.82079,12.16234),(-68.814809,12.149604),(-68.811838,12.12934),(-68.804026,12.117011),(-68.780751,12.097805)] +Czech Republic [(14.397808,51.013115),(14.425817,51.020944),(14.462094,51.019652),(14.473462,51.023372),(14.482144,51.037196),(14.487932,51.028721),(14.490102,51.022701),(14.493513,51.016706),(14.502918,51.008516),(14.51532,51.003038),(14.542295,50.998749),(14.554594,50.992651),(14.566687,50.983401),(14.574335,50.975494),(14.577435,50.965727),(14.575575,50.95069),(14.567824,50.938649),(14.555835,50.924283),(14.550357,50.912087),(14.561829,50.906351),(14.617123,50.920872),(14.629215,50.920717),(14.634589,50.9094),(14.628802,50.896119),(14.611852,50.870953),(14.608338,50.853073),(14.613195,50.84558),(14.647922,50.839379),(14.662185,50.834366),(14.700425,50.815969),(14.737529,50.810698),(14.759233,50.810181),(14.775356,50.812972),(14.785071,50.820052),(14.79458,50.831627),(14.810393,50.858447),(14.831683,50.857982),(14.867857,50.86439),(14.982062,50.859067),(14.984129,50.867697),(14.982889,50.902424),(14.981752,50.905266),(14.981132,50.90909),(14.982062,50.914671),(14.984852,50.918082),(14.994671,50.92511),(14.997771,50.930174),(14.996635,50.959216),(14.981442,50.973221),(14.965215,50.981334),(14.960978,50.992651),(14.961081,50.992651),(14.965629,50.996062),(14.970693,50.998697),(14.976377,51.000402),(14.982062,51.001177),(14.996531,51.007844),(15.001182,51.012443),(15.003973,51.020685),(15.02299,51.009652),(15.073632,51.002624),(15.082107,50.992754),(15.09389,50.985416),(15.107946,50.981024),(15.119211,50.982471),(15.122518,50.992651),(15.122518,50.992754),(15.13151,51.005828),(15.144429,51.011564),(15.156108,51.007844),(15.160759,50.992651),(15.170164,50.977975),(15.227111,50.977406),(15.253776,50.969035),(15.269796,50.952653),(15.268763,50.94299),(15.260081,50.932499),(15.253053,50.914103),(15.25605,50.899892),(15.265352,50.88227),(15.277134,50.865734),(15.28778,50.854881),(15.307727,50.844753),(15.325813,50.839689),(15.341006,50.83173),(15.352995,50.812972),(15.353305,50.803257),(15.350515,50.793232),(15.349688,50.783723),(15.356096,50.775455),(15.368579,50.772949),(15.370255,50.772613),(15.381314,50.780002),(15.390822,50.790648),(15.400124,50.797986),(15.441775,50.800208),(15.641866,50.7
55766),(15.653235,50.75096),(15.666464,50.738093),(15.673803,50.733545),(15.684345,50.731426),(15.76744,50.744191),(15.792245,50.742692),(15.794415,50.735871),(15.797619,50.729773),(15.848159,50.675151),(15.854567,50.673704),(15.882265,50.674221),(15.941486,50.68838),(15.958023,50.686881),(15.971459,50.678613),(15.984481,50.662904),(15.993163,50.649364),(15.998641,50.635567),(15.996367,50.623681),(15.982001,50.61624),(15.980037,50.612984),(15.97921,50.60978),(15.979934,50.606628),(15.982001,50.603631),(16.024479,50.608592),(16.086491,50.64678),(16.128658,50.644042),(16.160284,50.628642),(16.175477,50.624095),(16.192324,50.626575),(16.204519,50.636342),(16.211754,50.649106),(16.219092,50.659131),(16.232011,50.660888),(16.268288,50.660423),(16.300948,50.6551),(16.331644,50.644042),(16.416806,50.586629),(16.425901,50.567561),(16.394482,50.559344),(16.398306,50.551799),(16.398306,50.548492),(16.395516,50.545701),(16.391175,50.539914),(16.389624,50.534849),(16.388281,50.52715),(16.388384,50.521258),(16.391175,50.521827),(16.382596,50.514179),(16.362029,50.501208),(16.352624,50.492785),(16.335468,50.490356),(16.303428,50.49604),(16.287719,50.492785),(16.279967,50.479866),(16.264981,50.471649),(16.247411,50.464983),(16.232011,50.456766),(16.226534,50.455371),(16.217335,50.45165),(16.211237,50.451289),(16.216095,50.440178),(16.217955,50.437129),(16.198008,50.440023),(16.190153,50.424158),(16.199559,50.406278),(16.232011,50.402609),(16.24493,50.376823),(16.249685,50.371035),(16.2625,50.36442),(16.269839,50.365454),(16.27697,50.369433),(16.288649,50.371862),(16.334227,50.371862),(16.343736,50.36995),(16.350764,50.360648),(16.351487,50.351346),(16.35066,50.342251),(16.353244,50.333518),(16.371021,50.318377),(16.415566,50.307938),(16.437787,50.297965),(16.452153,50.284994),(16.477578,50.255332),(16.492047,50.242722),(16.50476,50.227995),(16.535042,50.207738),(16.545894,50.188824),(16.557263,50.154718),(16.566668,50.142212),(16.584238,50.127432),(16.618654,50.10702),(16.660616,50.093016),(16.701233,50.094928),(16.732033,50.121955),(16.754253,50.13229),(16.766449,50.1434),(16.777301,50.15694),(16.795284,50.174406),(16.817712,50.186757),(16.837556,50.188617),(16.857606,50.187739),(16.869762,50.189729),(16.880964,50.191563),(16.890369,50.196937),(16.907009,50.211768),(16.918998,50.217091),(16.927783,50.217298),(16.946283,50.21306),(16.955275,50.213474),(16.957755,50.217143),(16.957548,50.229235),(16.960339,50.231974),(16.975015,50.231819),(16.98101,50.229183),(16.98566,50.223137),(16.996306,50.212905),(17.008398,50.20996),(17.014806,50.218486),(17.012842,50.231664),(16.99982,50.242722),(16.998373,50.26701),(16.987831,50.284942),(16.969641,50.297758),(16.945249,50.306905),(16.926543,50.319255),(16.916724,50.338634),(16.909283,50.359925),(16.897914,50.378218),(16.865874,50.408449),(16.865771,50.422401),(16.892953,50.432943),(16.915794,50.431083),(16.944005,50.420267),(16.958789,50.414598),(16.98194,50.416355),(17.084672,50.397958),(17.095628,50.393669),(17.118882,50.381164),(17.131698,50.376823),(17.145444,50.376719),(17.175933,50.38075),(17.187612,50.378476),(17.185442,50.375686),(17.188852,50.364834),(17.195053,50.351915),(17.201048,50.343233),(17.210866,50.336412),(17.24611,50.322149),(17.269881,50.317757),(17.316803,50.318635),(17.337474,50.313002),(17.319077,50.308248),(17.321867,50.304476),(17.325381,50.301117),(17.329515,50.298223),(17.334063,50.295691),(17.332823,50.282668),(17.334786,50.270008),(17.341711,50.259724),(17.355044,50.253264),(17.365896,50.271558),(17.388013,50.272643),(17.409821,50.2613
26),(17.419122,50.242722),(17.424187,50.240552),(17.429561,50.239777),(17.434832,50.2405),(17.44,50.242722),(17.469352,50.265563),(17.516998,50.268354),(17.606398,50.258225),(17.629549,50.262101),(17.646705,50.271093),(17.676368,50.297396),(17.681949,50.305561),(17.684533,50.312382),(17.691044,50.315069),(17.707994,50.311039),(17.713575,50.30768),(17.731971,50.291763),(17.734452,50.28949),(17.734969,50.286802),(17.734245,50.283805),(17.731971,50.280653),(17.721119,50.261946),(17.719156,50.252799),(17.721429,50.242722),(17.731971,50.236005),(17.747578,50.217504),(17.738173,50.202363),(17.717915,50.191098),(17.675334,50.174716),(17.648669,50.167998),(17.600403,50.16681),(17.589448,50.163244),(17.58211,50.153167),(17.584177,50.145881),(17.632753,50.106348),(17.648359,50.101801),(17.658488,50.102989),(17.666963,50.106142),(17.677608,50.107744),(17.690734,50.10547),(17.717915,50.096788),(17.731971,50.094876),(17.732385,50.093894),(17.732592,50.092964),(17.732385,50.092189),(17.731971,50.091466),(17.72267,50.08635),(17.719466,50.080872),(17.722566,50.075136),(17.731971,50.06909),(17.74303,50.060563),(17.749852,50.049504),(17.755433,50.037154),(17.763287,50.025216),(17.774553,50.015191),(17.839355,49.973643),(17.875735,49.968682),(17.912116,49.975814),(17.941571,49.992764),(17.959761,49.995812),(17.999862,49.996588),(18.032418,50.00284),(18.037483,50.007233),(18.035726,50.017207),(18.028698,50.024131),(18.020119,50.024286),(18.012161,50.022684),(18.00689,50.024131),(18.002446,50.046765),(18.019074,50.044087),(18.033245,50.041805),(18.091639,50.017207),(18.085128,49.998758),(18.098151,49.989043),(18.162023,49.98124),(18.177216,49.975814),(18.208738,49.958295),(18.22052,49.954885),(18.243982,49.951732),(18.255867,49.946048),(18.260105,49.94057),(18.267029,49.925067),(18.27261,49.918298),(18.292454,49.907808),(18.306923,49.909668),(18.334932,49.925429),(18.333795,49.927135),(18.332452,49.928271),(18.330695,49.92884),(18.328524,49.92884),(18.368832,49.932044),(18.407486,49.923155),(18.481797,49.896645),(18.505051,49.896697),(18.544428,49.912975),(18.559208,49.907187),(18.565409,49.889049),(18.562929,49.873701),(18.567269,49.861454),(18.593831,49.852204),(18.566339,49.831585),(18.566753,49.804765),(18.584116,49.774379),(18.60768,49.74296),(18.617706,49.713866),(18.62401,49.706373),(18.637343,49.700327),(18.668142,49.69857),(18.682404,49.695779),(18.695427,49.688854),(18.715064,49.673817),(18.727673,49.668907),(18.742246,49.669683),(18.757852,49.674075),(18.773458,49.675832),(18.788444,49.668597),(18.792579,49.660794),(18.799297,49.626378),(18.803947,49.616766),(18.81573,49.599299),(18.820174,49.590256),(18.83454,49.547623),(18.837434,49.526952),(18.833196,49.510261),(18.792269,49.509537),(18.773562,49.504886),(18.732531,49.480288),(18.704522,49.479255),(18.675583,49.485043),(18.64282,49.495791),(18.635896,49.496721),(18.628971,49.495791),(18.600446,49.485973),(18.556211,49.490159),(18.535643,49.481684),(18.530476,49.473467),(18.527995,49.454864),(18.522931,49.446079),(18.514663,49.440601),(18.481797,49.429077),(18.439215,49.395074),(18.416788,49.385462),(18.387642,49.389751),(18.385162,49.342261),(18.361586,49.330191),(18.324597,49.311255),(18.190031,49.276942),(18.160576,49.2587),(18.136495,49.233068),(18.117788,49.202579),(18.105075,49.169765),(18.101458,49.142945),(18.10163,49.136928),(18.102905,49.09225),(18.09629,49.070288),(18.075516,49.047188),(18.046061,49.029153),(18.012885,49.019128),(17.959244,49.021867),(17.93506,49.019386),(17.914079,49.010498),(17.90085,48.993031),(17.894649,48.9782),(17.
886897,48.947246),(17.878733,48.933552),(17.860336,48.921718),(17.841215,48.920994),(17.820131,48.923371),(17.795533,48.920581),(17.77941,48.911744),(17.744891,48.872574),(17.727217,48.86291),(17.535084,48.812991),(17.498497,48.816763),(17.467905,48.838105),(17.453332,48.842756),(17.429561,48.838157),(17.411225,48.830216),(17.391734,48.821776),(17.374371,48.819605),(17.260269,48.857794),(17.21221,48.866062),(17.167458,48.859758),(17.113715,48.83392),(17.10462,48.824618),(17.098728,48.805756),(17.084362,48.793715),(17.049946,48.774027),(17.025348,48.746328),(16.974808,48.649874),(16.963336,48.635947),(16.947523,48.623157),(16.945043,48.604166),(16.910523,48.63078),(16.896674,48.696977),(16.873006,48.718991),(16.856573,48.719818),(16.818125,48.710775),(16.798902,48.709224),(16.780505,48.713772),(16.744332,48.729637),(16.729035,48.733099),(16.691001,48.732065),(16.675085,48.733926),(16.661959,48.740023),(16.654725,48.751857),(16.651004,48.766223),(16.643149,48.778264),(16.624029,48.78338),(16.605529,48.784827),(16.545791,48.796299),(16.528737,48.803534),(16.519539,48.805601),(16.510341,48.804774),(16.492254,48.79971),(16.481919,48.7994),(16.453083,48.80219),(16.43572,48.794542),(16.384974,48.737078),(16.374018,48.730308),(16.358309,48.727259),(16.352727,48.728448),(16.339498,48.735579),(16.318724,48.733512),(16.317588,48.73284),(16.177751,48.746535),(16.085354,48.742866),(16.032334,48.758059),(15.931771,48.806428),(15.930324,48.811389),(15.930324,48.818313),(15.925053,48.822602),(15.908103,48.819709),(15.90707,48.830251),(15.899629,48.834746),(15.888363,48.835056),(15.875754,48.833093),(15.885986,48.842033),(15.877614,48.841671),(15.870173,48.843893),(15.859218,48.849164),(15.840097,48.850094),(15.833276,48.852006),(15.828315,48.857122),(15.824284,48.869421),(15.818497,48.872315),(15.787904,48.872263),(15.779222,48.870868),(15.743669,48.858466),(15.726823,48.854952),(15.703775,48.854952),(15.681347,48.858466),(15.603729,48.887353),(15.544187,48.902612),(15.521254,48.908489),(15.471851,48.936704),(15.450354,48.944817),(15.405912,48.954687),(15.357853,48.97081),(15.335529,48.974996),(15.28871,48.975358),(15.283749,48.977632),(15.279615,48.982593),(15.27455,48.986727),(15.266902,48.986675),(15.262872,48.982593),(15.260391,48.969157),(15.257084,48.963886),(15.243234,48.95293),(15.238067,48.95076),(15.161792,48.937221),(15.141949,48.937479),(15.148563,48.951742),(15.148977,48.965333),(15.144739,48.978924),(15.137401,48.993031),(15.059576,48.997217),(15.003973,49.009774),(14.982062,49.007914),(14.978858,49.00626),(14.977308,49.002902),(14.977308,48.998251),(14.978238,48.992825),(14.964595,48.971741),(14.964182,48.943267),(14.968316,48.912674),(14.968213,48.885028),(14.947025,48.822396),(14.93979,48.809787),(14.938137,48.80281),(14.940101,48.796506),(14.944441,48.788599),(14.948886,48.78214),(14.951056,48.780331),(14.939584,48.762813),(14.919327,48.761573),(14.86765,48.775577),(14.815457,48.780331),(14.800367,48.776507),(14.794476,48.766999),(14.790446,48.754855),(14.78042,48.743124),(14.775356,48.724004),(14.722749,48.693411),(14.703836,48.673103),(14.700219,48.646102),(14.700735,48.615535),(14.695671,48.589542),(14.69114,48.586535),(14.675621,48.576235),(14.659187,48.576959),(14.651333,48.583496),(14.645442,48.592901),(14.634693,48.602332),(14.628182,48.603288),(14.612885,48.599748),(14.605134,48.600316),(14.60193,48.604528),(14.601827,48.611194),(14.600586,48.617576),(14.594179,48.621323),(14.578779,48.620496),(14.548807,48.610936),(14.53382,48.608714),(14.522038,48.610626),(14.482144,48.624475),(
14.458166,48.643182),(14.443593,48.636516),(14.421166,48.597061),(14.405353,48.586286),(14.353366,48.571429),(14.333213,48.560706),(14.325358,48.558639),(14.315746,48.557916),(14.216527,48.58117),(14.07483,48.591712),(14.040827,48.601169),(14.032456,48.60613),(14.015196,48.620393),(14.010855,48.62587),(14.006204,48.639513),(14.008064,48.642407),(14.013749,48.643802),(14.020467,48.653155),(14.023464,48.654602),(14.028322,48.653982),(14.032869,48.654912),(14.034729,48.661165),(14.032146,48.667263),(14.026151,48.670054),(14.019227,48.671914),(14.014162,48.675066),(14.007754,48.683283),(13.991115,48.700181),(13.98233,48.706485),(13.915047,48.73098),(13.893136,48.743021),(13.874946,48.752426),(13.855929,48.759299),(13.815725,48.76643),(13.796191,48.779814),(13.747925,48.843376),(13.724671,48.867303),(13.710615,48.87247),(13.673356,48.876346),(13.654443,48.882237),(13.637389,48.893451),(13.621783,48.909057),(13.611138,48.927247),(13.608657,48.946161),(13.590467,48.944817),(13.573414,48.951018),(13.556464,48.959958),(13.538894,48.967038),(13.522395,48.969342),(13.51595,48.970242),(13.5021,48.966056),(13.492592,48.955049),(13.482257,48.937634),(13.458899,48.945024),(13.427273,48.959958),(13.398541,48.977683),(13.384175,48.993031),(13.383555,49.021092),(13.372393,49.038507),(13.329915,49.066929),(13.299529,49.093645),(13.287333,49.10057),(13.267283,49.105634),(13.206821,49.107495),(13.178503,49.118347),(13.162586,49.135142),(13.14946,49.15483),(13.055823,49.238184),(13.04342,49.242784),(13.043214,49.242887),(13.043214,49.242939),(13.011278,49.267795),(13.00673,49.277097),(13.005387,49.28676),(12.999806,49.294925),(12.982442,49.299628),(12.95433,49.31911),(12.939861,49.326758),(12.922911,49.332701),(12.88467,49.339573),(12.870925,49.336731),(12.875369,49.323864),(12.856662,49.32221),(12.797751,49.327843),(12.777907,49.332545),(12.762094,49.343242),(12.729331,49.391147),(12.709177,49.404737),(12.663082,49.418897),(12.643548,49.42949),(12.632179,49.44396),(12.627218,49.460186),(12.624325,49.493001),(12.622981,49.509847),(12.61554,49.513775),(12.604998,49.512793),(12.593732,49.515015),(12.58288,49.522405),(12.575025,49.529898),(12.568721,49.538838),(12.562313,49.550827),(12.558282,49.562557),(12.553631,49.584726),(12.546603,49.595579),(12.536061,49.603123),(12.512187,49.61165),(12.501955,49.619711),(12.502678,49.62214),(12.507432,49.629995),(12.499474,49.632527),(12.496891,49.633871),(12.505365,49.646118),(12.504642,49.659399),(12.496374,49.669993),(12.482524,49.674695),(12.449245,49.68503),(12.433949,49.6918),(12.419583,49.700223),(12.398912,49.719499),(12.388577,49.732314),(12.383616,49.742856),(12.395398,49.757894),(12.436739,49.769263),(12.452655,49.779702),(12.455859,49.796135),(12.456376,49.817115),(12.462474,49.831223),(12.482524,49.827244),(12.489863,49.842695),(12.51291,49.870239),(12.520972,49.885535),(12.524072,49.904965),(12.517871,49.9122),(12.503608,49.915766),(12.462681,49.931269),(12.463818,49.942121),(12.470742,49.955608),(12.468158,49.970233),(12.451312,49.980568),(12.414518,49.983048),(12.399222,49.992764),(12.398189,49.992764),(12.24688,50.044957),(12.243676,50.051313),(12.243469,50.060408),(12.241402,50.07064),(12.232514,50.08051),(12.218044,50.088572),(12.202025,50.094514),(12.187865,50.102834),(12.179184,50.117976),(12.178564,50.132962),(12.185592,50.155699),(12.182698,50.170737),(12.17505,50.182623),(12.163784,50.193785),(12.13929,50.211045),(12.089887,50.230837),(12.079552,50.242722),(12.092057,50.254815),(12.102393,50.259156),(12.109111,50.263806),(12.110971,50.276622),(12.1072
5,50.290885),(12.099189,50.299773),(12.076141,50.315173),(12.101344,50.31398),(12.149315,50.311711),(12.168538,50.302977),(12.17443,50.293727),(12.17412,50.276571),(12.178047,50.268767),(12.189209,50.262463),(12.203782,50.259569),(12.232514,50.259104),(12.239645,50.256572),(12.242953,50.252954),(12.242436,50.248355),(12.237992,50.242722),(12.236441,50.242567),(12.235098,50.242257),(12.233754,50.241689),(12.232514,50.240965),(12.229827,50.239053),(12.228897,50.237038),(12.22993,50.234816),(12.232514,50.232491),(12.249981,50.221173),(12.254321,50.217401),(12.258455,50.21089),(12.260212,50.20505),(12.261246,50.199883),(12.273235,50.172339),(12.283363,50.162211),(12.300417,50.160815),(12.314369,50.167327),(12.314369,50.176215),(12.308582,50.186912),(12.305584,50.199314),(12.308685,50.217091),(12.314059,50.226496),(12.3338,50.242722),(12.336487,50.258587),(12.344652,50.26639),(12.355814,50.271403),(12.367183,50.279051),(12.398189,50.315173),(12.408937,50.321994),(12.438289,50.332536),(12.450175,50.339357),(12.453792,50.340701),(12.462784,50.340494),(12.466815,50.34189),(12.468985,50.345559),(12.468365,50.349486),(12.467022,50.352845),(12.466815,50.354912),(12.469502,50.359408),(12.471569,50.364472),(12.475083,50.369433),(12.482524,50.373567),(12.510223,50.38876),(12.550531,50.396357),(12.625358,50.399922),(12.678792,50.393721),(12.691814,50.394651),(12.702976,50.401524),(12.725094,50.423435),(12.737909,50.431548),(12.754239,50.435321),(12.770259,50.435941),(12.788139,50.434339),(12.79434,50.435889),(12.808396,50.441832),(12.817388,50.44302),(12.825759,50.441418),(12.918363,50.4054),(12.952573,50.40416),(12.960631,50.409234),(12.982442,50.42297),(12.996602,50.433667),(13.003113,50.451909),(13.009624,50.492681),(13.017892,50.49635),(13.026264,50.497642),(13.034532,50.496505),(13.042284,50.492785),(13.051069,50.491131),(13.06037,50.490614),(13.069775,50.491079),(13.07887,50.492681),(13.078974,50.492681),(13.079181,50.492785),(13.107396,50.498986),(13.160106,50.497022),(13.184704,50.508753),(13.19907,50.525341),(13.232349,50.58203),(13.25178,50.580893),(13.268006,50.573658),(13.284646,50.568956),(13.305627,50.575777),(13.316272,50.596034),(13.321853,50.60239),(13.35937,50.61934),(13.365365,50.623474),(13.365985,50.627092),(13.368775,50.628332),(13.380971,50.62549),(13.386345,50.621614),(13.400401,50.606576),(13.407016,50.601305),(13.42934,50.594329),(13.447944,50.597274),(13.464893,50.607093),(13.482257,50.620891),(13.492179,50.624456),(13.498173,50.629159),(13.499103,50.635205),(13.493315,50.643215),(13.509955,50.651225),(13.523288,50.66218),(13.527215,50.675668),(13.51564,50.691016),(13.556567,50.706725),(13.601939,50.712151),(13.691753,50.711841),(13.711028,50.71427),(13.749062,50.72321),(13.770663,50.724605),(13.816552,50.72135),(13.834535,50.723675),(13.863474,50.735147),(13.869572,50.735251),(13.875359,50.736542),(13.882181,50.742744),(13.882697,50.748015),(13.88032,50.755043),(13.87908,50.763518),(13.883318,50.773233),(13.892516,50.780416),(13.900991,50.780622),(13.910706,50.778762),(13.923315,50.779951),(13.933547,50.784808),(13.948843,50.797211),(13.959592,50.802068),(13.979126,50.804755),(14.015713,50.801758),(14.034936,50.802585),(14.190792,50.847905),(14.202988,50.85514),(14.221281,50.872607),(14.232133,50.879376),(14.2681,50.88413),(14.346545,50.880203),(14.375897,50.895964),(14.381892,50.920872),(14.355227,50.930639),(14.31833,50.937512),(14.292802,50.95348),(14.293732,50.96397),(14.302207,50.967691),(14.303757,50.969707),(14.28381,50.974822),(14.250013,50.977613),(14.238335,50.982471)
,(14.249497,50.992651),(14.263553,51.021435),(14.287531,51.036834),(14.319363,51.040012),(14.35657,51.032338),(14.364115,51.027998),(14.369386,51.022442),(14.375897,51.01699),(14.386749,51.01327),(14.397808,51.013115)] +Djibouti [(43.240733,11.48786),(43.188815,11.407764),(42.923715,10.998787),(42.908109,11.004187),(42.898084,10.995687),(42.888059,10.983568),(42.873279,10.978246),(42.860153,10.980416),(42.835452,10.987444),(42.794111,10.991604),(42.771787,10.996462),(42.754837,11.010544),(42.729619,11.06501),(42.710395,11.072116),(42.686521,11.073046),(42.615621,11.08966),(42.608145,11.089088),(42.569732,11.086146),(42.479712,11.059119),(42.413256,11.015944),(42.391758,11.005893),(42.376669,11.006668),(42.361166,11.011887),(42.338532,11.015504),(42.305872,11.004601),(42.297294,11.003774),(42.279414,11.004394),(42.270835,11.003516),(42.235179,10.987703),(42.221639,10.984344),(42.202726,10.98455),(42.151359,10.996203),(42.096272,10.990261),(42.069401,10.981501),(42.038395,10.950444),(42.0108,10.943597),(41.954162,10.941271),(41.947341,10.938532),(41.942793,10.933546),(41.937522,10.929825),(41.928841,10.931065),(41.922846,10.936026),(41.920262,10.941943),(41.916852,10.947059),(41.79872,10.970675),(41.773811,10.980054),(41.766523,10.989466),(41.761306,10.996203),(41.779289,11.024677),(41.785594,11.067388),(41.787971,11.259727),(41.775672,11.368429),(41.749834,11.483383),(41.74911,11.537953),(41.778669,11.628645),(41.792415,11.704455),(41.808228,11.736236),(41.823007,11.746106),(41.862282,11.763056),(41.872203,11.775871),(41.877784,11.790134),(41.886259,11.796025),(41.897525,11.799384),(41.911477,11.805947),(41.925327,11.816437),(41.936386,11.827445),(41.97468,11.88091),(42.057308,11.996271),(42.097719,12.070169),(42.108778,12.082106),(42.132343,12.093475),(42.140301,12.105826),(42.155494,12.142257),(42.288715,12.321006),(42.319205,12.386377),(42.379459,12.465907),(42.430929,12.520322),(42.448499,12.523371),(42.475681,12.516343),(42.521569,12.496241),(42.678252,12.360022),(42.691482,12.378832),(42.696187,12.380126),(42.727758,12.388806),(42.741401,12.408081),(42.749876,12.417073),(42.779745,12.422602),(42.792251,12.429165),(42.798038,12.456088),(42.788737,12.487456),(42.785223,12.517222),(42.828838,12.557633),(42.852092,12.611738),(42.868835,12.626155),(42.877723,12.62507),(42.890126,12.615407),(42.900358,12.616389),(43.117686,12.707913),(43.133962,12.697211),(43.143321,12.679511),(43.151134,12.659735),(43.161794,12.642727),(43.259613,12.533352),(43.283214,12.496527),(43.297374,12.482001),(43.312999,12.477688),(43.329356,12.488756),(43.329356,12.453559),(43.331228,12.436713),(43.336192,12.419908),(43.344086,12.406155),(43.353201,12.395209),(43.360525,12.383002),(43.363536,12.365302),(43.364757,12.327135),(43.367849,12.306871),(43.374034,12.292955),(43.411388,12.241767),(43.414317,12.224921),(43.411388,12.166653),(43.418712,12.091254),(43.413341,12.059272),(43.398774,12.029527),(43.377126,12.004828),(43.350434,11.987942),(43.34254,11.985663),(43.315684,11.981106),(43.288341,11.967434),(43.268403,11.963813),(43.225434,11.962877),(43.206554,11.957831),(43.174001,11.935004),(43.097179,11.84455),(43.09197,11.841295),(43.086029,11.838853),(43.076182,11.834703),(43.069184,11.830268),(43.064708,11.823432),(43.060883,11.809556),(43.055512,11.803616),(43.041515,11.798529),(42.986827,11.796088),(42.972504,11.790839),(42.95753,11.780463),(42.942638,11.77383),(42.915538,11.783596),(42.896495,11.780585),(42.803396,11.747992),(42.777192,11.732245),(42.75408,11.707953),(42.684093,11.582261),(42.677419,11.573676
),(42.667979,11.568549),(42.636974,11.559556),(42.627696,11.559068),(42.60377,11.563381),(42.577403,11.562405),(42.569021,11.563381),(42.563487,11.567206),(42.554047,11.580268),(42.548595,11.583808),(42.526866,11.582465),(42.515147,11.574123),(42.514985,11.567206),(42.528087,11.570217),(42.532481,11.548489),(42.532481,11.547471),(42.533051,11.523505),(42.538748,11.502997),(42.571625,11.490465),(42.591563,11.47248),(42.60377,11.46776),(42.617686,11.470282),(42.633556,11.484809),(42.644786,11.488227),(42.653494,11.485907),(42.662852,11.481635),(42.672537,11.479722),(42.681651,11.484524),(42.687511,11.497219),(42.680675,11.504788),(42.670421,11.509955),(42.665294,11.51557),(42.674083,11.532131),(42.694102,11.545966),(42.739757,11.563381),(42.807465,11.571275),(42.829112,11.576972),(42.839529,11.584459),(42.847504,11.592719),(42.855805,11.595771),(42.866954,11.587592),(42.875173,11.583686),(42.900727,11.584052),(43.040538,11.585924),(43.06186,11.590155),(43.081798,11.589545),(43.103363,11.576972),(43.123057,11.581488),(43.133556,11.595282),(43.141124,11.608588),(43.151866,11.611762),(43.157888,11.601304),(43.172211,11.542873),(43.173106,11.541653),(43.183849,11.526557),(43.19516,11.518297),(43.226329,11.501899),(43.240733,11.48786)] +Dominica [(-61.362864,15.201809),(-61.374094,15.20482),(-61.376129,15.212836),(-61.372792,15.237209),(-61.37328,15.266425),(-61.372792,15.270697),(-61.378285,15.28205),(-61.39094,15.296291),(-61.393219,15.308295),(-61.395009,15.333075),(-61.399566,15.351142),(-61.413726,15.380561),(-61.415679,15.387641),(-61.41808,15.405951),(-61.421213,15.41474),(-61.444203,15.436835),(-61.447865,15.445136),(-61.449778,15.454657),(-61.454254,15.46483),(-61.460032,15.472886),(-61.465566,15.476142),(-61.472076,15.481391),(-61.474273,15.493476),(-61.475209,15.517768),(-61.47936,15.525458),(-61.484853,15.530341),(-61.488922,15.535712),(-61.488881,15.545071),(-61.483998,15.551581),(-61.467275,15.562974),(-61.462148,15.572984),(-61.469065,15.574286),(-61.481191,15.578559),(-61.488881,15.57925),(-61.480621,15.59101),(-61.468984,15.633857),(-61.449208,15.633083),(-61.437245,15.632636),(-61.425933,15.628485),(-61.413726,15.619574),(-61.397572,15.596625),(-61.384633,15.588365),(-61.36974,15.595933),(-61.35733,15.598293),(-61.307688,15.57925),(-61.305898,15.572455),(-61.29955,15.557196),(-61.289622,15.540961),(-61.284047,15.531399),(-61.268625,15.509914),(-61.267161,15.507799),(-61.25768,15.472602),(-61.256012,15.43594),(-61.262929,15.407904),(-61.258372,15.406317),(-61.254221,15.402981),(-61.249257,15.401068),(-61.255523,15.364569),(-61.254058,15.332668),(-61.253733,15.324693),(-61.256663,15.285346),(-61.277211,15.25023),(-61.284413,15.244818),(-61.289296,15.243598),(-61.293202,15.243883),(-61.303375,15.24258),(-61.311879,15.244534),(-61.318186,15.243394),(-61.323598,15.238593),(-61.328725,15.225572),(-61.342681,15.219672),(-61.345937,15.212348),(-61.346384,15.211697),(-61.350738,15.205146),(-61.362864,15.201809)] +Algeria 
[(8.60251,36.939511),(8.605655,36.913045),(8.604828,36.900979),(8.608239,36.890721),(8.642552,36.848501),(8.641725,36.836357),(8.623845,36.826074),(8.554495,36.803646),(8.520079,36.79791),(8.482665,36.799977),(8.444011,36.796205),(8.413109,36.783906),(8.406701,36.767989),(8.441634,36.75321),(8.461788,36.732798),(8.45321,36.697658),(8.430369,36.662621),(8.407941,36.642622),(8.286915,36.583401),(8.222319,36.563764),(8.193484,36.548985),(8.169919,36.525834),(8.16789,36.491995),(8.167852,36.491365),(8.20878,36.477826),(8.295597,36.46868),(8.314924,36.460618),(8.333837,36.456381),(8.349237,36.448784),(8.357712,36.430387),(8.359675,36.411215),(8.358125,36.369926),(8.355423,36.356116),(8.354404,36.350909),(8.307069,36.243629),(8.302314,36.202339),(8.307069,36.182495),(8.31451,36.163944),(8.317094,36.145909),(8.307999,36.126995),(8.296733,36.110407),(8.275649,36.03682),(8.272446,35.981732),(8.268423,35.969864),(8.247227,35.907318),(8.24144,35.827737),(8.24578,35.78531),(8.257666,35.750325),(8.306552,35.684903),(8.323605,35.652192),(8.327225,35.621405),(8.337041,35.537935),(8.336731,35.508376),(8.300868,35.437476),(8.290016,35.402595),(8.287948,35.366059),(8.294356,35.32508),(8.317404,35.289475),(8.355231,35.274334),(8.396882,35.263688),(8.431299,35.241726),(8.421377,35.222089),(8.360916,35.145918),(8.313683,35.103078),(8.300454,35.06768),(8.282884,34.994041),(8.25901,34.940659),(8.248881,34.901953),(8.250741,34.864746),(8.269345,34.763874),(8.266554,34.750283),(8.258714,34.741298),(8.257356,34.739741),(8.225213,34.711681),(8.213431,34.696746),(8.210641,34.681192),(8.236479,34.647654),(8.228831,34.63644),(8.192864,34.617733),(8.186559,34.609568),(8.179531,34.592153),(8.17519,34.58528),(8.167232,34.578562),(8.143668,34.566728),(8.102483,34.536355),(8.094058,34.530142),(7.999077,34.494175),(7.957323,34.469422),(7.87061,34.437899),(7.831542,34.414386),(7.811905,34.379272),(7.774285,34.260959),(7.765293,34.244707),(7.752787,34.232149),(7.733874,34.223778),(7.670312,34.215303),(7.631451,34.199102),(7.604063,34.175486),(7.579568,34.148459),(7.539228,34.113758),(7.517453,34.095026),(7.503707,34.067973),(7.500606,33.994257),(7.479832,33.893901),(7.484897,33.855015),(7.498539,33.799979),(7.505464,33.782849),(7.534196,33.736236),(7.544428,33.68425),(7.553006,33.658748),(7.616568,33.565963),(7.693153,33.454109),(7.708242,33.414861),(7.708286,33.410704),(7.709689,33.278151),(7.724985,33.231409),(7.75072,33.207664),(7.787514,33.198311),(7.83547,33.194538),(7.871953,33.184125),(7.982747,33.116817),(8.002178,33.108368),(8.022642,33.103045),(8.043622,33.101831),(8.0645,33.105784),(8.086824,33.094286),(8.181392,32.969823),(8.282884,32.836369),(8.296423,32.804407),(8.309963,32.663873),(8.319574,32.560598),(8.331253,32.527628),(8.332627,32.526355),(8.359985,32.501015),(8.482665,32.434791),(8.642345,32.336658),(8.851532,32.208242),(9.019893,32.104863),(9.033743,32.090756),(9.045008,32.071842),(9.063304,32.000192),(9.065989,31.989676),(9.094514,31.879606),(9.12273,31.769535),(9.151152,31.659567),(9.17947,31.549497),(9.207789,31.439426),(9.236211,31.329407),(9.264426,31.219336),(9.292952,31.109317),(9.32127,30.999246),(9.349589,30.889175),(9.377908,30.779156),(9.40633,30.669085),(9.434752,30.559066),(9.463071,30.449047),(9.491389,30.338976),(9.519708,30.228905),(9.286544,30.117129),(9.310005,30.084366),(9.369487,30.022794),(9.421729,29.968714),(9.54968,29.802316),(9.667709,29.608323),(9.746929,29.368428),(9.826149,29.128533),(9.848267,28.975726),(9.851264,28.785996),(9.827276,28.618647),(9.776953,28.267578),(9.789872,28.
209442),(9.93591,27.866724),(9.934256,27.827398),(9.931274,27.818642),(9.863356,27.619246),(9.846613,27.599298),(9.818501,27.585733),(9.793903,27.569739),(9.797107,27.548914),(9.81168,27.526538),(9.821602,27.505687),(9.813643,27.486489),(9.770545,27.444217),(9.756283,27.42303),(9.743053,27.364119),(9.726517,27.32469),(9.721866,27.308463),(9.721349,27.291875),(9.806099,27.025122),(9.825529,26.92058),(9.835141,26.900995),(9.846096,26.891951),(9.890848,26.869576),(9.906661,26.857483),(9.910588,26.843117),(9.885474,26.736612),(9.88258,26.701782),(9.894052,26.67398),(9.896326,26.652793),(9.854571,26.524377),(9.835761,26.504223),(9.482604,26.352553),(9.481364,26.332606),(9.477643,26.315552),(9.468858,26.301031),(9.434752,26.271989),(9.416458,26.23225),(9.402506,26.216179),(9.377804,26.168946),(9.401162,26.113394),(9.541309,25.936351),(9.693961,25.743494),(9.816744,25.588465),(9.9695,25.395402),(10.007947,25.331426),(10.021383,25.26802),(10.025517,25.1364),(10.029961,24.995065),(10.032028,24.856339),(10.044534,24.829622),(10.193362,24.749937),(10.212172,24.722859),(10.229742,24.629945),(10.242248,24.595115),(10.260335,24.576641),(10.391799,24.47998),(10.410506,24.473288),(10.450194,24.476931),(10.566776,24.516463),(10.677363,24.553851),(10.699067,24.556125),(10.720668,24.552301),(10.911354,24.494501),(11.149996,24.422335),(11.46724,24.326339),(11.50863,24.313814),(11.541393,24.29751),(11.567128,24.26684),(11.636478,24.137649),(11.708618,24.00298),(11.71014,24.000137),(11.822306,23.790642),(11.89238,23.660056),(11.968861,23.517351),(11.900855,23.477173),(11.832745,23.437046),(11.764739,23.396945),(11.696733,23.356844),(11.628623,23.316717),(11.560514,23.276591),(11.492611,23.236516),(11.424501,23.196415),(11.356392,23.156262),(11.288489,23.116161),(11.220379,23.07606),(11.15227,23.035882),(11.084367,22.995755),(11.016257,22.955654),(10.948148,22.915527),(10.880038,22.875375),(10.812032,22.835326),(10.743923,22.795199),(10.675813,22.755046),(10.607807,22.714945),(10.539801,22.674896),(10.471691,22.634744),(10.403788,22.594643),(10.335679,22.554542),(10.267569,22.514337),(10.199563,22.474237),(10.131557,22.434136),(10.063447,22.394035),(9.995441,22.353882),(9.927435,22.313781),(9.859325,22.27368),(9.791319,22.233579),(9.723313,22.193427),(9.655203,22.153326),(9.587197,22.113225),(9.519088,22.073072),(9.451082,22.03292),(9.383075,21.992819),(9.314966,21.952718),(9.24696,21.912565),(9.178953,21.872516),(9.110844,21.832364),(9.042838,21.792263),(8.974832,21.75211),(8.906722,21.712061),(8.838613,21.671908),(8.770606,21.631807),(8.702497,21.591655),(8.634491,21.551502),(8.566381,21.51135),(8.498375,21.4713),(8.430265,21.431148),(8.362259,21.391047),(8.294253,21.350946),(8.226143,21.310845),(8.158034,21.270744),(8.090131,21.230592),(8.022022,21.190491),(7.953912,21.15039),(7.886009,21.110237),(7.8179,21.070085),(7.74979,21.029984),(7.681887,20.989831),(7.613778,20.94973),(7.482726,20.872577),(7.383611,20.791652),(7.312297,20.733464),(7.240881,20.675173),(7.169464,20.616986),(7.098047,20.558746),(7.020532,20.495443),(6.982808,20.46361),(6.925138,20.414001),(6.862092,20.35974),(6.799047,20.305532),(6.736105,20.251323),(6.672957,20.197115),(6.609963,20.142854),(6.546918,20.088646),(6.483769,20.034385),(6.420931,19.980177),(6.357782,19.925917),(6.294737,19.871682),(6.231692,19.817474),(6.168646,19.763213),(6.105601,19.709005),(6.042556,19.654745),(5.979407,19.600536),(5.916362,19.546276),(5.837607,19.478631),(5.794302,19.449796),(5.749344,19.433699),(5.620463,19.408997),(5.520831,19.389903),(5.301826,19.347993
),(5.082821,19.306006),(4.983189,19.286963),(4.794467,19.250764),(4.605848,19.214616),(4.417229,19.178417),(4.22861,19.142244),(4.057561,19.110437),(3.886512,19.07863),(3.790291,19.06077),(3.715566,19.046901),(3.54462,19.015068),(3.439821,18.995638),(3.439717,18.995638),(3.358689,18.976853),(3.333057,18.975561),(3.318381,18.977706),(3.308356,18.981685),(3.284895,18.995741),(3.225984,19.05106),(3.179785,19.07),(3.158597,19.08155),(3.138754,19.096045),(3.120874,19.112814),(3.10413,19.135526),(3.102684,19.153561),(3.111779,19.171337),(3.126558,19.193352),(3.134206,19.212859),(3.13927,19.221929),(3.152706,19.230197),(3.17441,19.251643),(3.178855,19.268722),(3.183816,19.307505),(3.192911,19.325798),(3.211514,19.340862),(3.232701,19.351843),(3.250995,19.36546),(3.260813,19.388327),(3.25823,19.410393),(3.247481,19.426464),(3.234769,19.441347),(3.225984,19.459692),(3.22619,19.4692),(3.231668,19.489044),(3.232288,19.495478),(3.228051,19.504159),(3.222883,19.508061),(3.217509,19.511162),(3.212238,19.517156),(3.199422,19.553769),(3.198285,19.592397),(3.216785,19.794064),(3.212754,19.807758),(3.198802,19.820523),(3.183195,19.827731),(3.147022,19.837938),(3.130485,19.845224),(3.072608,19.88889),(2.946001,19.941652),(2.671805,19.996222),(2.616925,19.998367),(2.525665,20.015162),(2.514812,20.015937),(2.495382,20.020097),(2.459312,20.038778),(2.439882,20.04609),(2.415697,20.051284),(2.400401,20.056555),(2.388722,20.067407),(2.348208,20.137635),(2.316478,20.180165),(2.279581,20.21794),(2.21819,20.264087),(2.200826,20.273906),(2.18243,20.278505),(2.161346,20.274939),(2.138195,20.260728),(2.097474,20.224193),(2.071222,20.213263),(2.056339,20.215046),(1.993397,20.235949),(1.983372,20.241918),(1.975621,20.248429),(1.967146,20.253416),(1.95526,20.254915),(1.941204,20.251116),(1.924254,20.23613),(1.913402,20.231092),(1.891388,20.231789),(1.883843,20.24414),(1.880329,20.263054),(1.870614,20.283543),(1.855008,20.294835),(1.838885,20.29592),(1.820901,20.293594),(1.799197,20.294886),(1.778113,20.304291),(1.659154,20.397516),(1.649232,20.412089),(1.650059,20.487019),(1.643961,20.522676),(1.623704,20.551253),(1.559729,20.597504),(1.520351,20.616986),(1.483454,20.622618),(1.465781,20.633522),(1.447487,20.638741),(1.40718,20.645046),(1.363978,20.657707),(1.346925,20.669127),(1.331526,20.687937),(1.310545,20.722716),(1.296696,20.733464),(1.273338,20.739407),(1.252254,20.738994),(1.212463,20.73088),(1.191276,20.73057),(1.168538,20.733464),(1.154585,20.738787),(1.147247,20.751448),(1.14456,20.776252),(1.145284,20.795889),(1.167505,20.886013),(1.180114,20.995309),(1.177943,21.017323),(1.15934,21.081505),(1.146524,21.101711),(1.029322,21.178347),(0.942609,21.235036),(0.855999,21.291673),(0.769389,21.34831),(0.682676,21.404896),(0.596067,21.461585),(0.509457,21.518274),(0.422795,21.574963),(0.33616,21.631601),(0.249499,21.688238),(0.162966,21.744824),(0.076253,21.801461),(0.003735,21.848899),(-0.010408,21.85815),(-0.097044,21.914839),(-0.183731,21.971477),(-0.270392,22.028114),(-0.357028,22.084751),(-0.443663,22.141389),(-0.530325,22.198078),(-0.616934,22.254767),(-0.703647,22.311404),(-0.790309,22.368041),(-0.876918,22.424627),(-0.96358,22.481265),(-1.05019,22.537954),(-1.136903,22.594643),(-1.223512,22.65128),(-1.310122,22.707969),(-1.396783,22.764555),(-1.483496,22.821166),(-1.570054,22.877881),(-1.656716,22.93457),(-1.743326,22.991182),(-1.830039,23.047819),(-1.916648,23.104456),(-2.00331,23.161042),(-2.089919,23.217757),(-2.176632,23.274446),(-2.26319,23.331109),(-2.349852,23.387747),(-2.436513,23.444358),(-2.523226,23.50
0996),(-2.609836,23.557685),(-2.696446,23.614348),(-2.783107,23.670985),(-2.869768,23.727623),(-2.956404,23.784234),(-3.043065,23.840923),(-3.129649,23.897612),(-3.209489,23.949702),(-3.289123,24.001844),(-3.368808,24.053959),(-3.448544,24.106023),(-3.528281,24.158113),(-3.608018,24.210203),(-3.687625,24.262344),(-3.767336,24.314434),(-3.846996,24.366576),(-3.926732,24.418717),(-4.006495,24.470807),(-4.086154,24.522897),(-4.165865,24.574987),(-4.245498,24.627077),(-4.325235,24.679218),(-4.404946,24.731308),(-4.516025,24.803991),(-4.592351,24.851688),(-4.668677,24.899334),(-4.744925,24.947057),(-4.821226,24.994755),(-4.821355,24.994806),(-4.821536,24.994961),(-4.821613,24.995065),(-4.995194,25.102086),(-5.16875,25.20916),(-5.342331,25.316182),(-5.515964,25.423256),(-5.63699,25.494466),(-5.661525,25.508881),(-5.804628,25.592961),(-5.972266,25.691611),(-6.139905,25.79021),(-6.307491,25.888808),(-6.505412,26.005184),(-6.703333,26.121559),(-6.901202,26.237934),(-7.099123,26.354361),(-7.297043,26.470737),(-7.494912,26.587164),(-7.692782,26.703487),(-7.890754,26.819914),(-8.088623,26.936238),(-8.28644,27.052665),(-8.484413,27.168989),(-8.682385,27.285416),(-8.682385,27.379467),(-8.682385,27.473466),(-8.682385,27.567414),(-8.682385,27.661439),(-8.682385,27.721436),(-8.682385,27.781354),(-8.682385,27.841222),(-8.682385,27.90114),(-8.682385,27.961085),(-8.682385,28.02103),(-8.682385,28.080949),(-8.682385,28.140893),(-8.682385,28.20076),(-8.682385,28.260679),(-8.682385,28.320624),(-8.682385,28.380543),(-8.682385,28.440487),(-8.682385,28.500406),(-8.682385,28.560273),(-8.682385,28.620218),(-8.682385,28.6659),(-8.678768,28.692823),(-8.667606,28.711685),(-8.648847,28.725948),(-8.520819,28.787081),(-8.475835,28.818759),(-8.430411,28.841006),(-8.417802,28.852349),(-8.383489,28.905782),(-8.368451,28.916531),(-8.333259,28.93038),(-8.316774,28.939062),(-8.250474,28.994769),(-8.182312,29.035541),(-8.069658,29.079337),(-8.036326,29.099853),(-7.945014,29.176231),(-7.839129,29.239043),(-7.777996,29.289324),(-7.729989,29.311158),(-7.714667,29.321829),(-7.653611,29.376193),(-7.619453,29.389422),(-7.572686,29.387613),(-7.528502,29.380947),(-7.506126,29.380223),(-7.48406,29.382445),(-7.463286,29.389137),(-7.34961,29.383439),(-7.258755,29.467305),(-7.146934,29.509238),(-7.070057,29.516227),(-6.958236,29.509238),(-6.783515,29.446339),(-6.699649,29.516227),(-6.559872,29.530205),(-6.413107,29.565149),(-6.27333,29.579127),(-6.126564,29.579127),(-6.000765,29.579127),(-5.881955,29.600093),(-5.756156,29.614071),(-5.721212,29.523216),(-5.637346,29.495261),(-5.539503,29.523216),(-5.43467,29.642026),(-5.343815,29.767825),(-5.273927,29.886635),(-5.176083,29.97749),(-5.071251,30.04039),(-4.95943,30.124256),(-4.875564,30.180166),(-4.770731,30.229088),(-4.623966,30.284999),(-4.484189,30.382842),(-4.372368,30.508641),(-4.274524,30.557563),(-4.155714,30.585519),(-4.001959,30.592507),(-3.834228,30.627452),(-3.645529,30.711317),(-3.652518,30.774217),(-3.659507,30.837116),(-3.610585,30.879049),(-3.554674,30.955927),(-3.608741,31.030872),(-3.610085,31.050302),(-3.614529,31.068027),(-3.624141,31.086527),(-3.635872,31.095674),(-3.671942,31.110867),(-3.689718,31.125595),(-3.717107,31.163267),(-3.731421,31.176341),(-3.748862,31.180217),(-3.763719,31.173447),(-3.792968,31.149676),(-3.810796,31.142906),(-3.827462,31.143837),(-3.839322,31.152828),(-3.842836,31.170192),(-3.836169,31.189777),(-3.814982,31.220524),(-3.812915,31.243365),(-3.81922,31.318865),(-3.815189,31.337158),(-3.802502,31.350646),(-3.747467,31.385166),(-3.673484,31.389234),(-3.
659507,31.647821),(-3.591378,31.678274),(-3.548745,31.669954),(-3.511564,31.672745),(-3.21921,31.717709),(-3.002556,31.77362),(-2.827836,31.794586),(-2.869769,31.89243),(-2.938731,32.048639),(-2.881189,32.076286),(-2.695567,32.08967),(-2.516147,32.1322),(-2.378636,32.126774),(-2.315177,32.124294),(-2.243398,32.1214),(-2.165109,32.118299),(-2.081858,32.11494),(-1.995455,32.111478),(-1.907553,32.107964),(-1.819962,32.104501),(-1.734334,32.101091),(-1.652375,32.097835),(-1.575791,32.094735),(-1.506337,32.091944),(-1.445618,32.089515),(-1.395388,32.087552),(-1.357406,32.086053),(-1.333428,32.085071),(-1.324953,32.084709),(-1.249557,32.08166),(-1.210335,32.08967),(-1.190594,32.125224),(-1.195607,32.146049),(-1.211833,32.158348),(-1.232711,32.163723),(-1.251831,32.163516),(-1.288986,32.150907),(-1.305161,32.151165),(-1.309554,32.167443),(-1.305678,32.173722),(-1.289193,32.184858),(-1.282992,32.190181),(-1.276636,32.200852),(-1.275189,32.209069),(-1.275499,32.217518),(-1.273423,32.229442),(-1.257515,32.320845),(-1.244131,32.356915),(-1.234158,32.374614),(-1.217983,32.392623),(-1.201705,32.399858),(-1.16026,32.404948),(-1.123157,32.417945),(-1.08998,32.439442),(-1.031999,32.4944),(-1.047502,32.517009),(-1.32702,32.69891),(-1.390531,32.718779),(-1.423345,32.742395),(-1.558789,32.93365),(-1.516363,32.959488),(-1.50887,32.966309),(-1.502978,32.974629),(-1.498844,32.984008),(-1.496519,32.994292),(-1.493367,33.016151),(-1.493057,33.039483),(-1.499516,33.060205),(-1.516363,33.073977),(-1.545508,33.091831),(-1.571398,33.111985),(-1.592069,33.136609),(-1.605608,33.168028),(-1.623591,33.196605),(-1.674234,33.237972),(-1.683381,33.270839),(-1.683226,33.36923),(-1.672839,33.394604),(-1.659145,33.41977),(-1.640386,33.475529),(-1.6254,33.494236),(-1.612739,33.521469),(-1.617338,33.554439),(-1.6624,33.644692),(-1.673252,33.6565),(-1.690719,33.667326),(-1.725394,33.677713),(-1.740742,33.686601),(-1.746788,33.702388),(-1.74224,33.717762),(-1.732577,33.727865),(-1.72095,33.736366),(-1.710821,33.747063),(-1.703225,33.761816),(-1.702501,33.772823),(-1.713043,33.801995),(-1.722087,33.851165),(-1.718728,33.898087),(-1.672115,34.059188),(-1.669635,34.079213),(-1.672085,34.092021),(-1.674751,34.105956),(-1.746064,34.290311),(-1.771334,34.334701),(-1.809626,34.372451),(-1.702966,34.479679),(-1.714232,34.485054),(-1.750664,34.494175),(-1.871121,34.596649),(-1.862957,34.613599),(-1.810505,34.680727),(-1.786114,34.72584),(-1.773143,34.734057),(-1.769526,34.741343),(-1.787716,34.756691),(-1.892825,34.811675),(-1.926725,34.838081),(-1.979745,34.865263),(-1.993491,34.878854),(-1.9984,34.892703),(-1.999692,34.906346),(-2.00362,34.91818),(-2.01628,34.926241),(-2.061239,34.929704),(-2.094932,34.947687),(-2.126041,34.971923),(-2.1633,34.994041),(-2.193789,35.003601),(-2.211669,35.023445),(-2.221126,35.049955),(-2.222564,35.089301),(-2.200103,35.096137),(-2.196116,35.094875),(-2.192616,35.094468),(-2.189443,35.093004),(-2.186391,35.08869),(-2.15689,35.100979),(-2.122222,35.094672),(-2.08609,35.081977),(-2.052358,35.075019),(-1.987701,35.082465),(-1.964752,35.076239),(-1.956451,35.075019),(-1.949818,35.077135),(-1.932485,35.08869),(-1.838694,35.110907),(-1.812734,35.126532),(-1.804351,35.128119),(-1.774241,35.127346),(-1.761138,35.129625),(-1.745595,35.138821),(-1.712799,35.1706),(-1.648834,35.188422),(-1.637685,35.194525),(-1.629506,35.212226),(-1.510365,35.295478),(-1.490875,35.300971),(-1.395904,35.309394),(-1.369537,35.315863),(-1.364003,35.318345),(-1.358957,35.322211),(-1.332672,35.349351),(-1.313791,35.353746),(-1.29483,35.364
569),(-1.278635,35.377997),(-1.26773,35.390326),(-1.259185,35.410305),(-1.247304,35.45775),(-1.23705,35.475735),(-1.225413,35.491278),(-1.190338,35.566148),(-1.183339,35.576483),(-1.171539,35.581041),(-1.14802,35.582099),(-1.142812,35.5869),(-1.117014,35.616889),(-1.106353,35.623236),(-1.082753,35.637356),(-1.052073,35.660956),(-1.040395,35.676418),(-1.032948,35.682603),(-0.996693,35.689602),(-0.977651,35.699774),(-0.961537,35.711249),(-0.946278,35.719306),(-0.922678,35.725165),(-0.910227,35.724311),(-0.904693,35.715888),(-0.899403,35.711819),(-0.887563,35.715969),(-0.849477,35.73432),(-0.825022,35.768541),(-0.801666,35.773912),(-0.793528,35.770087),(-0.786122,35.763129),(-0.77717,35.756415),(-0.764027,35.753404),(-0.754018,35.752265),(-0.734486,35.747219),(-0.723134,35.745998),(-0.711293,35.742987),(-0.705881,35.735663),(-0.701324,35.726874),(-0.692454,35.719306),(-0.682607,35.717353),(-0.644521,35.719306),(-0.626454,35.722968),(-0.606109,35.730943),(-0.589345,35.742499),(-0.57726,35.770819),(-0.565053,35.772447),(-0.551259,35.769029),(-0.540924,35.767768),(-0.530426,35.772895),(-0.523427,35.77969),(-0.47704,35.859361),(-0.476064,35.870103),(-0.478424,35.87287),(-0.480336,35.879096),(-0.480824,35.885728),(-0.479482,35.890041),(-0.473785,35.891262),(-0.468007,35.888739),(-0.46288,35.885199),(-0.459055,35.883775),(-0.448964,35.884345),(-0.428131,35.882799),(-0.41804,35.883775),(-0.376129,35.900336),(-0.348541,35.90705),(-0.336171,35.900824),(-0.331451,35.889553),(-0.320709,35.884101),(-0.309193,35.881171),(-0.30191,35.87759),(-0.298492,35.867174),(-0.304596,35.853502),(-0.30191,35.842841),(-0.285553,35.828274),(-0.259429,35.817532),(-0.188059,35.805243),(-0.14509,35.790351),(-0.123891,35.786933),(-0.113678,35.788804),(-0.083079,35.794379),(-0.038482,35.8133),(0.00294,35.838039),(0.034516,35.863267),(0.069957,35.92182),(0.08017,35.947008),(0.114024,36.010199),(0.122406,36.033352),(0.12379,36.044582),(0.127289,36.050767),(0.13559,36.055609),(0.145518,36.059963),(0.153982,36.065009),(0.202159,36.106513),(0.219493,36.116523),(0.283458,36.134426),(0.294444,36.140774),(0.299083,36.147406),(0.321056,36.158881),(0.329275,36.164984),(0.332774,36.173814),(0.337576,36.197496),(0.342296,36.205959),(0.360037,36.213853),(0.384451,36.217963),(0.428396,36.219631),(0.451345,36.223212),(0.468272,36.232245),(0.483165,36.243598),(0.556651,36.284084),(0.594574,36.293158),(0.611827,36.305487),(0.627208,36.319241),(0.644054,36.328843),(0.663259,36.332587),(0.740408,36.337755),(0.753266,36.338609),(0.768728,36.344713),(0.802257,36.363593),(0.864757,36.377387),(0.880544,36.387152),(0.89088,36.396389),(0.913829,36.409125),(0.925141,36.418158),(0.929535,36.42593),(0.93214,36.434556),(0.93629,36.44359),(0.945567,36.452338),(0.962576,36.458157),(1.007091,36.466864),(1.04477,36.486884),(1.116954,36.487128),(1.138845,36.493394),(1.178722,36.510891),(1.202647,36.51439),(1.245453,36.513577),(1.265473,36.515326),(1.288259,36.521226),(1.34669,36.544989),(1.367686,36.547919),(1.385997,36.545152),(1.429535,36.532213),(1.45281,36.528022),(1.476329,36.528957),(1.517426,36.538723),(1.700938,36.547187),(1.716563,36.547919),(1.726736,36.549994),(1.750173,36.559149),(1.761241,36.561591),(1.77003,36.560492),(1.791515,36.555487),(1.802908,36.555365),(1.857595,36.56745),(1.913829,36.570217),(1.96046,36.562201),(1.97283,36.561591),(2.059906,36.570136),(2.309825,36.63052),(2.3296,36.637274),(2.351573,36.638332),(2.373302,36.633857),(2.391856,36.624254),(2.398204,36.615302),(2.400157,36.606187),(2.404063,36.599189),(2.416026,36.59634),(2.4
26768,36.595282),(2.447276,36.590562),(2.457042,36.589504),(2.600759,36.59634),(2.632986,36.605129),(2.72755,36.665229),(2.792979,36.692939),(2.818696,36.710883),(2.843598,36.740383),(2.849864,36.754055),(2.875662,36.774807),(2.933116,36.80858),(2.977061,36.815863),(2.984874,36.814968),(3.014171,36.811713),(3.047862,36.794989),(3.073985,36.771389),(3.101817,36.751899),(3.140147,36.741645),(3.173269,36.742621),(3.179373,36.742825),(3.21046,36.757392),(3.221202,36.77143),(3.226085,36.7862),(3.227306,36.812323),(3.237804,36.813056),(3.302989,36.788764),(3.340505,36.780707),(3.460704,36.775092),(3.480479,36.777248),(3.519054,36.786607),(3.539806,36.788764),(3.550466,36.791449),(3.558116,36.797675),(3.564301,36.804348),(3.570567,36.80858),(3.580414,36.809963),(3.601817,36.808092),(3.612071,36.80858),(3.649669,36.823961),(3.709483,36.873196),(3.741873,36.891181),(3.827485,36.912909),(3.873871,36.917222),(3.912608,36.911607),(3.948253,36.893297),(3.964041,36.891181),(3.974864,36.892279),(3.99643,36.896918),(4.031423,36.897773),(4.03948,36.896918),(4.08546,36.892076),(4.104828,36.883734),(4.146495,36.901801),(4.159516,36.904202),(4.17156,36.902167),(4.197032,36.893256),(4.21046,36.891181),(4.22283,36.894192),(4.249278,36.907864),(4.261974,36.911607),(4.278575,36.910468),(4.306,36.900458),(4.320323,36.898017),(4.440684,36.911607),(4.452973,36.910061),(4.46811,36.905992),(4.482432,36.900458),(4.491954,36.894599),(4.50408,36.889309),(4.536388,36.888577),(4.550548,36.883734),(4.581554,36.894924),(4.591482,36.89525),(4.622895,36.896389),(4.698009,36.891181),(4.769054,36.898017),(4.78712,36.895413),(4.94988,36.839789),(4.968272,36.82567),(4.973643,36.818915),(4.985688,36.817084),(4.999522,36.817125),(5.009288,36.816067),(5.016449,36.810981),(5.025238,36.798651),(5.029145,36.794989),(5.045909,36.78852),(5.065603,36.783515),(5.086192,36.780829),(5.104828,36.781317),(5.084809,36.732611),(5.09018,36.716783),(5.118419,36.699368),(5.23585,36.650946),(5.269054,36.64411),(5.304373,36.643012),(5.372406,36.650946),(5.428071,36.663886),(5.433849,36.665229),(5.464854,36.665229),(5.469574,36.667711),(5.482188,36.680243),(5.489106,36.685126),(5.529145,36.697659),(5.549001,36.709703),(5.562755,36.745998),(5.576182,36.761379),(5.593923,36.773871),(5.61199,36.781317),(5.655935,36.791815),(5.674571,36.799628),(5.71046,36.825507),(5.733735,36.832343),(5.7588,36.833401),(5.82781,36.821763),(5.868826,36.822943),(6.03712,36.853583),(6.197032,36.90233),(6.235688,36.92121),(6.256847,36.945787),(6.258556,36.966783),(6.254731,36.983303),(6.254405,36.998969),(6.266856,37.017768),(6.274425,37.024237),(6.314626,37.048651),(6.328868,37.061916),(6.338715,37.069281),(6.371593,37.083482),(6.415782,37.092963),(6.462413,37.09394),(6.502696,37.082953),(6.538585,37.06037),(6.5442,37.058783),(6.544688,37.04857),(6.547048,37.041205),(6.552257,37.036689),(6.56129,37.035142),(6.572765,37.027777),(6.583263,36.989814),(6.599457,36.97309),(6.614106,36.977851),(6.631602,36.972398),(6.651134,36.963853),(6.671153,36.959418),(6.822032,36.952623),(6.834809,36.949123),(6.852794,36.940334),(6.86964,36.929185),(6.880056,36.918443),(6.879649,36.902086),(6.909434,36.893052),(6.948578,36.890041),(6.975597,36.891181),(7.064464,36.913316),(7.092296,36.92593),(7.108084,36.920315),(7.133311,36.915269),(7.156993,36.9147),(7.167491,36.922187),(7.176443,36.931871),(7.218761,36.960842),(7.232921,36.966864),(7.239024,36.971503),(7.247081,36.982327),(7.254242,36.994696),(7.257335,37.004136),(7.255544,37.015367),(7.250499,37.023139),(7.211111,37.057034),(7.181651,37.07
6728),(7.181651,37.082953),(7.192719,37.084296),(7.212169,37.089301),(7.222667,37.089789),(7.232758,37.087226),(7.251313,37.078599),(7.260509,37.076728),(7.262706,37.075995),(7.266449,37.074693),(7.270681,37.071031),(7.27475,37.069322),(7.281016,37.073065),(7.288259,37.080268),(7.292654,37.082506),(7.297699,37.082953),(7.307302,37.080146),(7.325857,37.071194),(7.335948,37.069281),(7.344412,37.071438),(7.369965,37.082953),(7.383474,37.082994),(7.386241,37.082099),(7.386892,37.078111),(7.394054,37.069281),(7.398204,37.05801),(7.404633,37.051581),(7.418224,37.048814),(7.438324,37.04857),(7.447113,37.046617),(7.456065,37.041978),(7.472423,37.050035),(7.490245,37.054999),(7.526134,37.033149),(7.555919,37.00788),(7.588878,36.987616),(7.633556,36.980536),(7.643403,36.982652),(7.652192,36.986233),(7.661876,36.988471),(7.67449,36.986762),(7.685802,36.982408),(7.695974,36.976996),(7.704845,36.970526),(7.712738,36.963202),(7.760753,36.966864),(7.770763,36.97012),(7.789073,36.986558),(7.798595,36.993598),(7.791026,36.973212),(7.778575,36.951117),(7.774262,36.931464),(7.791189,36.918443),(7.773448,36.88996),(7.773774,36.889757),(7.809906,36.86815),(7.866466,36.854397),(7.908458,36.849555),(8.049978,36.87759),(8.073009,36.887397),(8.093028,36.901801),(8.233165,36.958075),(8.25115,36.948676),(8.268728,36.933743),(8.288097,36.92593),(8.333263,36.927802),(8.355968,36.92593),(8.374034,36.918443),(8.388845,36.923245),(8.401622,36.918606),(8.414399,36.910224),(8.428722,36.904202),(8.463227,36.901557),(8.497569,36.904202),(8.601899,36.939358),(8.60251,36.939511)] +Ethiopia [(38.13331,14.677762),(38.190258,14.687322),(38.213925,14.685875),(38.229325,14.679725),(38.239557,14.667555),(38.245841,14.652474),(38.247825,14.647712),(38.252683,14.62022),(38.257644,14.610427),(38.267152,14.601771),(38.278831,14.595467),(38.28927,14.587741),(38.295678,14.574874),(38.305186,14.536917),(38.313351,14.519347),(38.32503,14.504594),(38.340636,14.492682),(38.394276,14.464312),(38.402338,14.456871),(38.409056,14.43527),(38.41443,14.426304),(38.426832,14.417183),(38.435204,14.416305),(38.443679,14.418656),(38.492978,14.418294),(38.569356,14.426821),(38.606563,14.436484),(38.668471,14.467723),(38.688005,14.467103),(38.706195,14.46116),(38.724385,14.459403),(38.742885,14.461056),(38.761902,14.465449),(38.821537,14.489478),(38.866082,14.493664),(38.884582,14.506247),(38.917552,14.535703),(38.959203,14.554642),(38.979667,14.567122),(38.993826,14.585157),(38.996823,14.60358),(38.995273,14.622261),(38.996823,14.638901),(39.009742,14.651226),(39.011189,14.647763),(39.04695,14.643758),(39.075475,14.637402),(39.0855,14.633475),(39.107308,14.620116),(39.127461,14.602211),(39.144721,14.581566),(39.158467,14.560146),(39.187509,14.477205),(39.20818,14.440179),(39.233191,14.440438),(39.254999,14.474415),(39.268538,14.48643),(39.293446,14.490305),(39.347603,14.481805),(39.36662,14.482916),(39.448579,14.505111),(39.489403,14.524489),(39.50718,14.547821),(39.515035,14.56782),(39.532295,14.567587),(39.552138,14.556684),(39.567331,14.544307),(39.587692,14.520278),(39.59379,14.514567),(39.599474,14.511648),(39.650944,14.496584),(39.677712,14.494233),(39.722985,14.501717),(39.735073,14.503715),(39.766906,14.504775),(39.796878,14.495111),(39.824473,14.478135),(39.87708,14.433616),(39.892273,14.426562),(39.911186,14.42341),(39.93072,14.428784),(39.961519,14.454209),(39.979812,14.458731),(39.999839,14.455238),(40.023221,14.45116),(40.037483,14.451522),(40.086886,14.465759),(40.104559,14.465966),(40.194373,14.444649),(40.237678,14.427441),(40.277055,14.40
4574),(40.353687,14.335241),(40.421646,14.273755),(40.479834,14.244403),(40.585874,14.19469),(40.614089,14.185802),(40.704419,14.173219),(40.772322,14.148362),(40.833197,14.105962),(40.93779,13.989922),(41.043623,13.872488),(41.121345,13.738181),(41.192038,13.615863),(41.222217,13.586614),(41.357093,13.502278),(41.530726,13.393783),(41.635525,13.309344),(41.7118,13.247926),(41.750764,13.208316),(41.784767,13.164572),(41.8166,13.112792),(41.894321,12.94797),(41.946617,12.876166),(42.024649,12.82697),(42.0817,12.803922),(42.137304,12.77333),(42.184949,12.733048),(42.250165,12.623985),(42.33171,12.515465),(42.379459,12.465907),(42.319205,12.386377),(42.288715,12.321006),(42.155494,12.142257),(42.140301,12.105826),(42.132343,12.093475),(42.108778,12.082106),(42.097719,12.070169),(42.057308,11.996271),(41.97468,11.88091),(41.936386,11.827445),(41.925327,11.816437),(41.911477,11.805947),(41.897525,11.799384),(41.886259,11.796025),(41.877784,11.790134),(41.872203,11.775871),(41.862282,11.763056),(41.823007,11.746106),(41.808228,11.736236),(41.792415,11.704455),(41.778669,11.628645),(41.74911,11.537953),(41.749834,11.483383),(41.775672,11.368429),(41.787971,11.259727),(41.785594,11.067388),(41.779289,11.024677),(41.761306,10.996203),(41.766523,10.989466),(41.773811,10.980054),(41.79872,10.970675),(41.916852,10.947059),(41.920262,10.941943),(41.922846,10.936026),(41.928841,10.931065),(41.937522,10.929825),(41.942793,10.933546),(41.947341,10.938532),(41.954162,10.941271),(42.0108,10.943597),(42.038395,10.950444),(42.069401,10.981501),(42.096272,10.990261),(42.151359,10.996203),(42.202726,10.98455),(42.221639,10.984344),(42.235179,10.987703),(42.270835,11.003516),(42.279414,11.004394),(42.297294,11.003774),(42.305872,11.004601),(42.338532,11.015504),(42.361166,11.011887),(42.376669,11.006668),(42.391758,11.005893),(42.413256,11.015944),(42.479712,11.059119),(42.569732,11.086146),(42.608145,11.089088),(42.615621,11.08966),(42.686521,11.073046),(42.710395,11.072116),(42.729619,11.06501),(42.754837,11.010544),(42.771787,10.996462),(42.794111,10.991604),(42.835452,10.987444),(42.860153,10.980416),(42.873279,10.978246),(42.888059,10.983568),(42.898084,10.995687),(42.908109,11.004187),(42.923715,10.998787),(42.911313,10.977367),(42.901185,10.936595),(42.893536,10.919671),(42.876793,10.905744),(42.834832,10.888406),(42.819432,10.872567),(42.811578,10.85262),(42.80765,10.83611),(42.801242,10.820788),(42.785636,10.804406),(42.749876,10.775829),(42.738094,10.759784),(42.728585,10.735496),(42.718973,10.694852),(42.711635,10.675163),(42.699646,10.65855),(42.680216,10.647672),(42.660269,10.641625),(42.647247,10.63222),(42.64859,10.611033),(42.668537,10.56623),(42.695202,10.524708),(42.750393,10.471636),(42.765792,10.451947),(42.776644,10.42461),(42.78946,10.333324),(42.808167,10.269116),(42.836279,10.208086),(42.862324,10.177158),(42.958235,10.115559),(42.981903,10.091633),(42.999887,10.061971),(43.012599,10.029286),(43.020764,9.996419),(43.039884,9.949988),(43.067686,9.922522),(43.104273,9.907923),(43.149852,9.899991),(43.187214,9.883326),(43.206334,9.851209),(43.235273,9.691787),(43.248812,9.652332),(43.270671,9.628354),(43.297698,9.621533),(43.305966,9.617554),(43.315578,9.607658),(43.325086,9.585902),(43.331753,9.575076),(43.341364,9.566394),(43.361518,9.553165),(43.370665,9.544225),(43.393351,9.49893),(43.399449,9.480947),(43.401981,9.447409),(43.406322,9.42865),(43.419034,9.413018),(43.471227,9.382012),(43.548225,9.336072),(43.566829,9.334315),(43.59122,9.343591),(43.60724,9.344625),(43.621502,9.336899),(43.
698449,9.267162),(43.78759,9.186701),(43.914921,9.071489),(43.984788,9.008314),(44.023855,8.985525),(44.104057,8.959222),(44.192217,8.930283),(44.280377,8.901396),(44.368537,8.872457),(44.456697,8.843544),(44.544753,8.814605),(44.633017,8.785718),(44.721073,8.756779),(44.809233,8.72784),(44.89729,8.698928),(44.985553,8.67004),(45.073558,8.641153),(45.16177,8.612137),(45.249878,8.583276),(45.33809,8.554414),(45.426147,8.525476),(45.51441,8.496537),(45.514462,8.496537),(45.598436,8.468425),(45.682358,8.440313),(45.766384,8.412227),(45.850203,8.384115),(45.934126,8.356003),(46.018049,8.327891),(46.101919,8.299727),(46.185894,8.271641),(46.269816,8.243555),(46.353842,8.215469),(46.437713,8.187357),(46.521687,8.159245),(46.605713,8.131133),(46.689532,8.102969),(46.773558,8.074909),(46.857377,8.046745),(46.920526,8.025609),(46.97923,7.996567),(47.068114,7.996516),(47.210586,7.996516),(47.36608,7.996516),(47.52359,7.996516),(47.653918,7.996516),(47.836697,7.996516),(47.979169,7.996567),(47.930593,7.949697),(47.881811,7.902826),(47.833183,7.855956),(47.784659,7.809137),(47.736032,7.762215),(47.687507,7.715396),(47.638932,7.668525),(47.590252,7.621655),(47.541676,7.574836),(47.532388,7.565864),(47.493101,7.527914),(47.444525,7.481043),(47.395949,7.434173),(47.347373,7.387302),(47.298797,7.34038),(47.250118,7.293509),(47.201439,7.246691),(47.152863,7.19982),(47.104287,7.153001),(47.055608,7.106079),(47.007032,7.059208),(46.95856,7.012286),(46.909984,6.965467),(46.861408,6.918597),(46.812729,6.871675),(46.764101,6.824882),(46.715577,6.777985),(46.667001,6.731115),(46.618425,6.684244),(46.598065,6.664633),(46.55321,6.621457),(46.508406,6.578308),(46.488046,6.558645),(46.466963,6.538292),(46.423915,6.496736),(46.362937,6.43426),(46.302062,6.371731),(46.240981,6.309228),(46.179899,6.246726),(46.118818,6.184223),(46.057839,6.12172),(45.99681,6.059192),(45.935986,5.996689),(45.828706,5.879513),(45.721426,5.762337),(45.614146,5.645057),(45.518477,5.540564),(45.506865,5.527881),(45.399585,5.410704),(45.292356,5.293476),(45.185024,5.176248),(45.077744,5.059072),(45.0209,4.99688),(45.020642,4.99688),(44.941525,4.911484),(44.912586,4.899392),(44.858378,4.902544),(44.806546,4.905541),(44.754766,4.908565),(44.702883,4.911562),(44.651104,4.914507),(44.59922,4.917505),(44.596808,4.917643),(44.547441,4.920476),(44.495557,4.923473),(44.443726,4.926445),(44.391998,4.929468),(44.340115,4.932439),(44.288335,4.935488),(44.236452,4.938485),(44.184672,4.941431),(44.132789,4.94448),(44.081009,4.947451),(44.029126,4.950448),(43.968975,4.953962),(43.932284,4.945203),(43.845158,4.914352),(43.815186,4.907479),(43.716484,4.884793),(43.640519,4.867353),(43.528485,4.841566),(43.459342,4.808829),(43.346997,4.755654),(43.232792,4.701497),(43.119259,4.647702),(43.03544,4.578895),(42.960406,4.517374),(42.952551,4.507581),(42.945936,4.496858),(42.9325,4.463217),(42.914724,4.393299),(42.899634,4.361104),(42.870195,4.328171),(42.868938,4.326765),(42.831628,4.302322),(42.78977,4.285605),(42.718663,4.273487),(42.568285,4.247856),(42.415013,4.221707),(42.297294,4.201734),(42.221846,4.200959),(42.13534,4.200081),(42.102887,4.19189),(42.068367,4.174553),(42.011833,4.129491),(41.941243,4.086212),(41.923466,4.070554),(41.916645,4.051098),(41.917472,4.020453),(41.912201,4.007974),(41.898455,3.996915),(41.885019,3.977226),(41.835927,3.94945),(41.791381,3.958157),(41.747353,3.981412),(41.699811,3.996915),(41.657436,3.972704),(41.642967,3.969449),(41.628704,3.972084),(41.602246,3.983065),(41.586329,3.984151),(41.506231,3.964384),(41.479773,3.962938
),(41.429543,3.948882),(41.317405,3.942422),(41.215293,3.936608),(41.162996,3.94268),(41.114524,3.962343),(41.070082,3.996915),(41.0059,4.086677),(40.979751,4.110758),(40.963628,4.12037),(40.915879,4.136648),(40.897689,4.145924),(40.887354,4.156052),(40.870611,4.186981),(40.846013,4.214989),(40.785758,4.25584),(40.763744,4.284933),(40.700699,4.243515),(40.6175,4.211605),(40.511873,4.171064),(40.382682,4.121377),(40.376998,4.116081),(40.371106,4.099932),(40.365732,4.094867),(40.286874,4.067453),(40.185588,4.032158),(40.182797,4.032003),(40.1798,4.03283),(40.176803,4.034535),(40.167191,4.035465),(40.166261,4.031486),(40.166984,4.025259),(40.162437,4.019394),(40.117168,3.996915),(40.020223,3.950122),(39.93413,3.908703),(39.848451,3.867284),(39.828297,3.842583),(39.80866,3.790287),(39.780755,3.71657),(39.764218,3.685745),(39.745098,3.660579),(39.666447,3.588723),(39.600818,3.528907),(39.574979,3.497901),(39.553172,3.431316),(39.536325,3.405013),(39.504389,3.403333),(39.488473,3.413901),(39.480928,3.42726),(39.475657,3.440799),(39.466976,3.452013),(39.436176,3.462374),(39.311326,3.466094),(39.320525,3.484543),(39.315874,3.494904),(39.302748,3.495679),(39.296338,3.49199),(39.257583,3.469686),(39.219445,3.468704),(39.180481,3.476998),(39.090441,3.518334),(39.077542,3.524256),(39.06793,3.526582),(39.010466,3.514464),(38.994963,3.514257),(38.979977,3.519812),(38.963234,3.522086),(38.921272,3.513999),(38.895227,3.513585),(38.82071,3.533739),(38.722008,3.560404),(38.703715,3.570429),(38.688108,3.586294),(38.66103,3.621951),(38.661443,3.615336),(38.6601,3.601306),(38.660306,3.594459),(38.601912,3.59867),(38.596641,3.602133),(38.595608,3.607507),(38.593747,3.61084),(38.579588,3.605311),(38.57411,3.604561),(38.561088,3.606008),(38.547135,3.609522),(38.543208,3.615388),(38.541244,3.623294),(38.533182,3.632828),(38.514269,3.648512),(38.508998,3.650812),(38.499386,3.64691),(38.497319,3.63859),(38.498146,3.629676),(38.496906,3.62394),(38.445953,3.601668),(38.389935,3.596888),(38.283068,3.608902),(38.176925,3.620917),(38.145609,3.618437),(38.114087,3.610608),(38.101891,3.612649),(38.078843,3.632673),(38.048974,3.641846),(38.036158,3.648926),(38.020759,3.667167),(37.995334,3.707914),(37.976731,3.72644),(37.945931,3.746232),(37.846196,3.810337),(37.751008,3.871522),(37.647758,3.937874),(37.556084,3.996889),(37.475056,4.048979),(37.360954,4.122204),(37.263492,4.184888),(37.166031,4.247545),(37.111874,4.282298),(37.105983,4.283848),(37.099988,4.283693),(37.094821,4.284882),(37.091617,4.290256),(37.086449,4.312477),(37.082522,4.322166),(37.075907,4.33108),(37.068466,4.333458),(37.048415,4.331959),(37.041077,4.336817),(37.025367,4.363275),(37.015652,4.370587),(36.974828,4.37994),(36.903721,4.415778),(36.844087,4.432237),(36.651333,4.430583),(36.648857,4.43118),(36.642962,4.432599),(36.628182,4.441461),(36.619501,4.443373),(36.463025,4.440169),(36.277196,4.436449),(36.264277,4.438206),(36.246087,4.446991),(36.236268,4.449652),(36.225726,4.4496),(36.19131,4.444924),(36.045325,4.443716),(36.041345,4.443683),(36.018401,4.449135),(35.99711,4.462959),(35.958352,4.496858),(35.940472,4.508046),(35.933755,4.52218),(35.934065,4.539207),(35.937062,4.559258),(35.936028,4.578533),(35.922799,4.602072),(35.920835,4.619332),(35.784513,4.764181),(35.760018,4.805728),(35.751647,4.854356),(35.755884,5.063439),(35.760949,5.076332),(35.799189,5.124624),(35.807354,5.144674),(35.807147,5.165267),(35.798259,5.189271),(35.778415,5.22715),(35.775315,5.24658),(35.782033,5.26986),(35.809111,5.309109),(35.80415,5.318023),(35.749373,5.339158),(
35.683641,5.379595),(35.658319,5.386417),(35.639819,5.382076),(35.621939,5.373782),(35.598168,5.368743),(35.57388,5.375332),(35.530782,5.410084),(35.508044,5.423417),(35.490164,5.428042),(35.469287,5.430755),(35.448203,5.430755),(35.430219,5.427215),(35.408205,5.412565),(35.367484,5.368046),(35.344953,5.352362),(35.320769,5.348719),(35.302372,5.357374),(35.287489,5.374092),(35.254623,5.425923),(35.250695,5.435147),(35.251832,5.447369),(35.258964,5.458143),(35.267645,5.468556),(35.27364,5.479873),(35.269299,5.491785),(35.261961,5.511887),(35.22341,5.542996),(35.143312,5.58966),(35.098663,5.622474),(35.088225,5.64175),(35.079543,5.699782),(35.06528,5.726447),(34.984355,5.841014),(34.974433,5.863106),(34.972676,5.876257),(34.974537,5.887239),(34.977741,5.898659),(34.979911,5.912844),(34.97681,5.924032),(34.960377,5.941706),(34.955313,5.952558),(34.955726,5.964211),(34.95924,5.975502),(34.969782,5.996689),(34.967964,6.008131),(34.959447,6.061724),(34.951489,6.08118),(34.934436,6.102368),(34.915522,6.119705),(34.898882,6.13867),(34.879969,6.187634),(34.844725,6.248689),(34.839764,6.268688),(34.839041,6.327599),(34.832633,6.353541),(34.792739,6.421676),(34.784264,6.44183),(34.776099,6.499449),(34.753878,6.556423),(34.743543,6.596808),(34.733518,6.637606),(34.726593,6.641947),(34.714501,6.655667),(34.709643,6.674012),(34.703545,6.684916),(34.635539,6.729823),(34.618486,6.736541),(34.596679,6.739202),(34.553994,6.739254),(34.53632,6.743078),(34.524745,6.752871),(34.517924,6.793824),(34.516683,6.798062),(34.512136,6.808268),(34.511102,6.814314),(34.513273,6.815296),(34.522575,6.821781),(34.524745,6.824572),(34.523298,6.844183),(34.519061,6.861649),(34.512343,6.876842),(34.503661,6.89002),(34.439169,6.934875),(34.297575,6.968568),(34.294475,6.972237),(34.287653,6.975286),(34.280729,6.980092),(34.275665,6.997403),(34.270807,7.003398),(34.229879,7.03337),(34.219957,7.046134),(34.213756,7.051974),(34.206005,7.054506),(34.198253,7.064376),(34.195773,7.086959),(34.195773,7.09774),(34.195773,7.129023),(34.190915,7.149539),(34.181407,7.167212),(34.167764,7.175377),(34.151124,7.167471),(34.132417,7.163388),(34.113194,7.177548),(34.085909,7.211499),(34.04002,7.247259),(34.033612,7.250256),(34.031132,7.255372),(34.030615,7.3487),(34.02369,7.382445),(34.006534,7.409936),(33.882407,7.536079),(33.852538,7.553545),(33.842099,7.555922),(33.821429,7.558403),(33.811507,7.560987),(33.761071,7.598142),(33.729445,7.642119),(33.71606,7.657156),(33.706552,7.661859),(33.675029,7.670851),(33.647744,7.688627),(33.637202,7.691315),(33.572297,7.685217),(33.550851,7.691315),(33.540361,7.698911),(33.526511,7.717101),(33.516796,7.726093),(33.506823,7.73095),(33.488477,7.73617),(33.479537,7.743146),(33.462329,7.749502),(33.436491,7.748572),(33.393289,7.739735),(33.384401,7.735446),(33.359803,7.719272),(33.347142,7.718961),(33.29159,7.733069),(33.273503,7.749192),(33.257794,7.767434),(33.242394,7.780715),(33.233919,7.783195),(33.201467,7.788156),(33.172941,7.799628),(33.164518,7.801127),(33.132117,7.793944),(33.126949,7.791567),(33.120851,7.783299),(33.107519,7.785211),(33.085401,7.794978),(33.058323,7.797975),(33.051295,7.801127),(33.047677,7.807277),(33.044629,7.816423),(33.040753,7.824846),(33.023493,7.835078),(33.015018,7.850633),(32.9898,7.917244),(32.993572,7.928148),(33.006801,7.942462),(33.007569,7.944058),(33.012021,7.953314),(33.016775,7.958327),(33.021633,7.965251),(33.026077,7.986284),(33.031141,7.996309),(33.043853,8.013465),(33.100594,8.071033),(33.106589,8.08049),(33.111343,8.090644),(33.11341,8.099093),(33.117544,8.
104158),(33.147516,8.109067),(33.164776,8.117051),(33.177282,8.127102),(33.185137,8.14108),(33.187824,8.160898),(33.184723,8.166247),(33.171081,8.175962),(33.16798,8.181362),(33.16829,8.195392),(33.169944,8.205495),(33.173975,8.21459),(33.181623,8.225726),(33.190511,8.232419),(33.199865,8.235519),(33.206893,8.240738),(33.208288,8.253683),(33.194852,8.259239),(33.185757,8.26451),(33.178419,8.275672),(33.171391,8.283113),(33.164363,8.292777),(33.161159,8.304585),(33.163123,8.312052),(33.171804,8.330888),(33.174078,8.343058),(33.172321,8.34652),(33.163329,8.352954),(33.161159,8.356106),(33.161882,8.361506),(33.166843,8.368276),(33.16798,8.372823),(33.167877,8.389722),(33.16953,8.397421),(33.174078,8.404475),(33.187824,8.411038),(33.202913,8.414113),(33.210355,8.420779),(33.207358,8.426593),(33.206427,8.430572),(33.23516,8.455635),(33.25273,8.458167),(33.361353,8.434293),(33.370862,8.437393),(33.384608,8.448013),(33.392101,8.451862),(33.400162,8.452638),(33.413133,8.449718),(33.480519,8.45765),(33.489924,8.462921),(33.490338,8.466513),(33.484757,8.474006),(33.505117,8.474884),(33.521964,8.465324),(33.537053,8.453206),(33.552143,8.446824),(33.57116,8.450131),(33.585371,8.458115),(33.600099,8.463955),(33.620407,8.460699),(33.651672,8.434499),(33.671309,8.400289),(33.695545,8.372875),(33.74102,8.367087),(33.751872,8.368793),(33.756213,8.370136),(33.797658,8.405612),(33.808096,8.412614),(33.824116,8.419487),(33.840859,8.423983),(33.875172,8.427936),(33.931706,8.428143),(33.95031,8.433207),(33.970774,8.445377),(34.069579,8.533615),(34.090249,8.556507),(34.103169,8.579322),(34.107251,8.601531),(34.11185,8.626555),(34.097473,8.915836),(34.083096,9.205117),(34.070698,9.454592),(34.066892,9.531176),(34.098208,9.67972),(34.221921,10.02603),(34.280832,10.080109),(34.303466,10.113544),(34.305017,10.124603),(34.301606,10.148994),(34.301813,10.160285),(34.304603,10.165634),(34.314215,10.175452),(34.317419,10.181266),(34.325894,10.223563),(34.32486,10.268651),(34.276181,10.488715),(34.271737,10.530082),(34.279489,10.565506),(34.365582,10.685783),(34.404235,10.759835),(34.420048,10.780816),(34.437308,10.795673),(34.557818,10.877787),(34.574768,10.884143),(34.587067,10.879285),(34.708093,10.774072),(34.737445,10.755391),(34.750674,10.743945),(34.756979,10.728804),(34.752948,10.698728),(34.754395,10.684129),(34.764834,10.680254),(34.772895,10.688263),(34.786228,10.718985),(34.798527,10.729656),(34.815373,10.73131),(34.827569,10.72777),(34.838524,10.72914),(34.85134,10.745753),(34.857334,10.763556),(34.857438,10.775674),(34.861572,10.787586),(34.879762,10.804613),(34.933816,10.845308),(34.950662,10.869183),(34.95831,10.903418),(34.952832,10.91843),(34.92317,10.939928),(34.913558,10.952976),(34.914282,10.963957),(34.980324,11.155263),(34.984148,11.186398),(34.971436,11.206888),(34.953969,11.226112),(34.944254,11.253242),(34.947148,11.274868),(35.073135,11.54896),(35.076856,11.568649),(35.074375,11.587356),(35.053498,11.628077),(35.046263,11.648644),(35.048124,11.68952),(35.040579,11.727399),(35.041199,11.747604),(35.048227,11.771376),(35.059079,11.79587),(35.072722,11.818866),(35.088018,11.838193),(35.126672,11.862946),(35.148556,11.871229),(35.215555,11.896588),(35.245734,11.929867),(35.247802,11.939634),(35.248379,11.957858),(35.248525,11.962475),(35.251419,11.971725),(35.260411,11.980407),(35.297824,12.001594),(35.317358,12.023712),(35.32604,12.047483),(35.335962,12.102312),(35.353738,12.14603),(35.361283,12.157347),(35.371102,12.165512),(35.392909,12.176726),(35.403348,12.185097),(35.412546,12.198636),(35.4154
4,12.211142),(35.418127,12.238996),(35.428772,12.266953),(35.616875,12.575151),(35.625866,12.585124),(35.634445,12.588587),(35.653772,12.590034),(35.662453,12.593651),(35.672375,12.604296),(35.675992,12.614528),(35.677439,12.625225),(35.684261,12.650753),(35.684054,12.656076),(35.687671,12.659228),(35.70137,12.665421),(35.702761,12.66605),(35.728392,12.67437),(36.029976,12.717158),(36.038451,12.717468),(36.050647,12.715452),(36.057261,12.712145),(36.063049,12.707908),(36.072144,12.70305),(36.099429,12.69566),(36.115035,12.70181),(36.123614,12.721447),(36.143251,12.832551),(36.143044,12.856942),(36.138703,12.871851),(36.115346,12.911719),(36.114415,12.933889),(36.125991,12.950968),(36.140874,12.96554),(36.149865,12.980242),(36.149555,12.988821),(36.146351,12.997373),(36.138393,13.013315),(36.136946,13.032436),(36.233581,13.36164),(36.382099,13.571033),(36.391298,13.599352),(36.394812,13.655938),(36.398429,13.67604),(36.439942,13.776176),(36.458167,13.820139),(36.455376,13.870007),(36.438013,13.919952),(36.432535,13.944499),(36.432122,13.969924),(36.436566,13.991214),(36.52638,14.263523),(36.545707,14.263162),(36.553045,14.282773),(36.565654,14.292927),(36.5825,14.296751),(36.603791,14.29732),(36.61764,14.300446),(36.648543,14.31458),(36.668593,14.317784),(36.713655,14.317784),(36.72213,14.320057),(36.738357,14.329798),(36.748072,14.332072),(36.816388,14.332072),(36.848531,14.327783),(36.913746,14.309024),(36.946923,14.304735),(36.952814,14.300963),(36.970901,14.27683),(36.978859,14.27112),(36.986714,14.266831),(36.995395,14.264092),(37.005627,14.263162),(37.011518,14.265074),(37.026091,14.27683),(37.034256,14.277502),(37.052446,14.275331),(37.060197,14.27683),(37.072703,14.285305),(37.083452,14.297733),(37.090997,14.314218),(37.093787,14.335173),(37.094407,14.35915),(37.097818,14.379511),(37.105879,14.399226),(37.121072,14.420826),(37.149701,14.441936),(37.184221,14.449558),(37.258221,14.448111),(37.280029,14.45545),(37.293775,14.457439),(37.318166,14.43713),(37.33708,14.426356),(37.374907,14.373026),(37.390203,14.358169),(37.394027,14.350081),(37.395474,14.335173),(37.394957,14.312383),(37.397748,14.301738),(37.412217,14.293599),(37.452628,14.260578),(37.456969,14.254609),(37.464307,14.235825),(37.469371,14.210813),(37.475366,14.199574),(37.488182,14.194897),(37.498517,14.193088),(37.508749,14.188127),(37.5165,14.180557),(37.519498,14.170997),(37.520634,14.145443),(37.525285,14.128829),(37.535724,14.11684),(37.553707,14.105497),(37.564973,14.116685),(37.601043,14.200917),(37.658507,14.335173),(37.721242,14.481753),(37.785631,14.632183),(37.85064,14.783879),(37.891464,14.879532),(37.908828,14.864727),(37.92092,14.847519),(37.94004,14.81088),(37.944381,14.80734),(37.954096,14.806255),(37.95761,14.802844),(37.958127,14.798762),(37.957817,14.788194),(37.958437,14.785093),(37.962261,14.777859),(37.964432,14.771193),(37.968772,14.766387),(37.980038,14.764552),(37.989856,14.757317),(37.992027,14.749514),(37.99275,14.740755),(37.998435,14.730911),(38.006703,14.7244),(38.115223,14.681017),(38.13331,14.677762)] +Georgia 
[(40.479834,43.508741),(40.519624,43.505175),(40.550837,43.511247),(40.615536,43.535845),(40.651916,43.538971),(40.681062,43.528068),(40.689966,43.523129),(40.740179,43.495279),(40.842706,43.469389),(40.874952,43.455643),(41.011791,43.375416),(41.042487,43.369834),(41.146666,43.37986),(41.180773,43.374692),(41.213122,43.362781),(41.243921,43.344203),(41.278751,43.333196),(41.350271,43.34335),(41.380657,43.334772),(41.389752,43.324876),(41.403808,43.301518),(41.41311,43.291286),(41.427579,43.283199),(41.479773,43.271236),(41.506128,43.25979),(41.550569,43.226277),(41.577544,43.215296),(41.614131,43.213022),(41.687408,43.220205),(41.723789,43.217234),(41.788901,43.203566),(41.82032,43.20124),(41.88998,43.207028),(41.919332,43.205943),(41.948271,43.200077),(41.97969,43.188657),(42.011316,43.181861),(42.043356,43.18057),(42.075292,43.185272),(42.154334,43.217498),(42.159834,43.21974),(42.185363,43.225476),(42.354551,43.236018),(42.36256,43.235194),(42.422867,43.22899),(42.479712,43.200904),(42.506067,43.181551),(42.534902,43.165067),(42.59588,43.140029),(42.615207,43.13698),(42.629366,43.14127),(42.656238,43.159796),(42.673602,43.165893),(42.704917,43.166436),(42.75153,43.176952),(42.79132,43.176177),(42.829768,43.168761),(42.857673,43.155455),(42.885785,43.135844),(42.914827,43.121943),(43.000093,43.098662),(43.000197,43.085356),(42.992548,43.068819),(42.989758,43.05267),(42.998853,43.042206),(43.048256,43.016962),(43.155588,42.942393),(43.175228,42.934471),(43.194345,42.926761),(43.386178,42.886882),(43.479599,42.867462),(43.517426,42.86046),(43.586259,42.835629),(43.603002,42.824803),(43.644343,42.788578),(43.662843,42.779845),(43.755964,42.763127),(43.782526,42.753213),(43.800613,42.746462),(43.811775,42.717265),(43.788727,42.69308),(43.753794,42.674657),(43.725682,42.6544),(43.722581,42.624867),(43.751107,42.598745),(43.796169,42.58983),(43.845003,42.58704),(43.885362,42.579211),(43.908161,42.566283),(43.920812,42.559109),(43.939726,42.55221),(43.958433,42.554406),(43.973212,42.563992),(43.998017,42.589365),(44.011504,42.600321),(44.055068,42.615307),(44.150824,42.618666),(44.185803,42.626508),(44.197384,42.629105),(44.211351,42.639634),(44.230147,42.653806),(44.248648,42.681995),(44.272005,42.702382),(44.360682,42.703751),(44.401403,42.713518),(44.479228,42.744291),(44.508115,42.750286),(44.558293,42.753102),(44.600977,42.749123),(44.638759,42.740149),(44.642422,42.739279),(44.679422,42.723672),(44.695029,42.714216),(44.724484,42.691039),(44.73637,42.677551),(44.761071,42.619234),(44.773473,42.609648),(44.78231,42.61665),(44.798485,42.671738),(44.827424,42.717962),(44.833676,42.734085),(44.85936,42.75951),(44.902974,42.753076),(45.012115,42.695302),(45.029788,42.688765),(45.048082,42.685897),(45.066995,42.688455),(45.076018,42.691412),(45.103892,42.700547),(45.122961,42.702278),(45.155362,42.693261),(45.184198,42.674838),(45.282693,42.586394),(45.293648,42.572906),(45.301571,42.559842),(45.312562,42.54172),(45.32269,42.530118),(45.35597,42.519964),(45.398551,42.524382),(45.47927,42.54234),(45.536838,42.541358),(45.620657,42.530377),(45.698688,42.507587),(45.739616,42.471466),(45.739506,42.470647),(45.736412,42.447539),(45.62138,42.224478),(45.631405,42.201172),(45.661585,42.186703),(45.701789,42.17275),(45.811601,42.115492),(45.881364,42.102031),(45.895162,42.091876),(45.923481,42.039631),(45.934126,42.028495),(45.947562,42.022914),(45.966372,42.022061),(45.973503,42.024154),(45.986733,42.031906),(45.995156,42.033456),(46.001615,42.030976),(46.018359,42.020046),(46.02673,42.01679),(46.044
817,42.017721),(46.05846,42.02033),(46.068278,42.015318),(46.074996,41.993484),(46.099181,41.985061),(46.129876,41.984493),(46.188581,41.993536),(46.188891,41.993639),(46.189201,41.993769),(46.197676,41.9955),(46.206151,41.996094),(46.214574,41.9955),(46.224341,41.993484),(46.250489,41.977723),(46.294156,41.93863),(46.322629,41.929018),(46.382057,41.925246),(46.406552,41.91504),(46.430892,41.890442),(46.392754,41.831685),(46.371309,41.805692),(46.328314,41.766418),(46.313741,41.756651),(46.297773,41.750812),(46.276947,41.749313),(46.230129,41.757323),(46.208321,41.755514),(46.188994,41.740141),(46.179537,41.705879),(46.180002,41.655159),(46.192767,41.610174),(46.21969,41.593302),(46.228475,41.598728),(46.233436,41.607952),(46.24005,41.615497),(46.2539,41.615911),(46.256174,41.610329),(46.278394,41.578393),(46.279871,41.571172),(46.287851,41.532143),(46.29674,41.510387),(46.313948,41.493747),(46.366244,41.466566),(46.377923,41.462483),(46.398284,41.458711),(46.445103,41.425948),(46.542461,41.394994),(46.591554,41.372928),(46.607367,41.345023),(46.62287,41.349622),(46.629841,41.346136),(46.631034,41.345539),(46.637752,41.336961),(46.657079,41.320631),(46.688499,41.284355),(46.694803,41.269782),(46.691703,41.268542),(46.683434,41.269472),(46.674649,41.261669),(46.672686,41.253969),(46.6716,41.236451),(46.669482,41.229888),(46.663591,41.223531),(46.650361,41.214178),(46.64447,41.207977),(46.634755,41.181053),(46.629484,41.14674),(46.621113,41.114597),(46.615539,41.108666),(46.601786,41.09403),(46.582148,41.091653),(46.563132,41.095581),(46.543546,41.097027),(46.522411,41.087002),(46.511145,41.070983),(46.505564,41.055273),(46.496779,41.044834),(46.47492,41.044111),(46.456988,41.052431),(46.429031,41.080956),(46.41265,41.09186),(46.353119,41.106536),(46.332345,41.115889),(46.273433,41.1563),(46.244391,41.183741),(46.228578,41.195419),(46.210595,41.200174),(46.135406,41.199915),(46.113133,41.19604),(46.087608,41.183659),(46.061353,41.170925),(46.041613,41.165447),(46.017428,41.163793),(45.969163,41.168289),(45.820748,41.208804),(45.770337,41.232102),(45.751088,41.240998),(45.709747,41.267715),(45.685872,41.29624),(45.708093,41.317117),(45.731038,41.323267),(45.7453,41.329158),(45.74406,41.335359),(45.720702,41.342387),(45.65559,41.352361),(45.511826,41.39458),(45.47927,41.41768),(45.464801,41.430185),(45.450021,41.432201),(45.416948,41.424604),(45.396794,41.423261),(45.38124,41.425586),(45.333096,41.444352),(45.314422,41.451631),(45.281711,41.452923),(45.24962,41.4445),(45.097834,41.349919),(45.0024,41.290452),(44.989998,41.284458),(44.967363,41.269007),(44.955168,41.262702),(44.917237,41.261772),(44.820706,41.273347),(44.801275,41.258516),(44.809543,41.244409),(44.847061,41.230766),(44.85382,41.223514),(44.857706,41.219346),(44.847784,41.208907),(44.821532,41.20622),(44.674926,41.208494),(44.63312,41.221051),(44.612863,41.223118),(44.591366,41.21578),(44.578446,41.202809),(44.566251,41.186944),(44.551523,41.177023),(44.531008,41.181467),(44.52553,41.189063),(44.521912,41.199295),(44.516641,41.20684),(44.505893,41.206582),(44.499382,41.202809),(44.479228,41.185808),(44.45494,41.180485),(44.432461,41.181673),(44.410601,41.18772),(44.357271,41.212163),(44.344921,41.213248),(44.333087,41.210354),(44.324922,41.204825),(44.317739,41.198572),(44.308489,41.193817),(44.284976,41.190872),(44.266476,41.196195),(44.218107,41.221568),(44.190977,41.230043),(44.180331,41.229939),(44.168446,41.223583),(44.165862,41.216142),(44.16519,41.207667),(44.160074,41.1979),(44.144364,41.186066),(44.124727,41.178056),(44.
103643,41.175162),(44.061269,41.184154),(44.041322,41.182552),(44.001117,41.168858),(43.978186,41.164617),(43.977755,41.164537),(43.955849,41.160486),(43.863245,41.158988),(43.820353,41.1455),(43.773741,41.114494),(43.754156,41.108241),(43.729713,41.106691),(43.566622,41.124054),(43.524144,41.122814),(43.470142,41.106174),(43.460324,41.104521),(43.450453,41.104676),(43.440428,41.106588),(43.451797,41.132581),(43.437586,41.156197),(43.410559,41.175162),(43.383274,41.187203),(43.351751,41.193611),(43.322606,41.19206),(43.230622,41.17263),(43.216204,41.180537),(43.192226,41.224978),(43.172021,41.242342),(43.152332,41.24415),(43.130318,41.24229),(43.103136,41.248801),(43.157241,41.26973),(43.171866,41.279342),(43.185353,41.293811),(43.18401,41.298979),(43.154761,41.301873),(43.12391,41.312932),(43.075644,41.344868),(43.002057,41.382695),(42.987588,41.394684),(42.957719,41.437007),(42.949967,41.443673),(42.941079,41.446308),(42.936221,41.450236),(42.93219,41.454783),(42.926093,41.459073),(42.896637,41.466566),(42.888989,41.469976),(42.880721,41.481242),(42.87514,41.493696),(42.868008,41.500207),(42.854572,41.493799),(42.829974,41.472508),(42.811164,41.477159),(42.792147,41.492869),(42.766516,41.504341),(42.774887,41.514263),(42.794628,41.52682),(42.802689,41.534107),(42.806823,41.542995),(42.80889,41.552917),(42.812198,41.562994),(42.819949,41.572347),(42.800932,41.57922),(42.660992,41.588264),(42.610556,41.58506),(42.590914,41.580169),(42.586648,41.579107),(42.585235,41.578755),(42.565184,41.567128),(42.554952,41.550333),(42.545547,41.509612),(42.535522,41.493489),(42.513818,41.476229),(42.483846,41.442174),(42.463072,41.431839),(42.451033,41.43137),(42.437854,41.430857),(42.262464,41.482327),(42.213268,41.480208),(42.18929,41.481707),(42.17265,41.493489),(42.158387,41.499897),(42.143195,41.500259),(42.111775,41.493489),(42.097513,41.498347),(42.083457,41.500104),(42.069401,41.498605),(42.054931,41.493489),(42.019585,41.485066),(41.947961,41.505581),(41.907757,41.493489),(41.894011,41.485531),(41.862695,41.451786),(41.822594,41.426),(41.812982,41.421814),(41.800787,41.42569),(41.760996,41.453543),(41.747767,41.45685),(41.718518,41.459589),(41.706839,41.463155),(41.702601,41.46946),(41.702808,41.477624),(41.703945,41.485117),(41.702601,41.489045),(41.694953,41.489097),(41.669839,41.480467),(41.63997,41.47871),(41.627257,41.480518),(41.520763,41.514228),(41.586274,41.602729),(41.615733,41.632636),(41.676036,41.662177),(41.692882,41.692288),(41.739757,41.749905),(41.742361,41.761664),(41.742442,41.770697),(41.743907,41.778632),(41.750011,41.787502),(41.766287,41.804185),(41.771821,41.812567),(41.773936,41.821601),(41.773936,41.90261),(41.773936,41.902615),(41.773936,41.906928),(41.761974,41.969875),(41.760265,41.99258),(41.755707,42.006415),(41.715261,42.059353),(41.684906,42.098944),(41.674327,42.105536),(41.664073,42.113959),(41.65919,42.133002),(41.657237,42.16706),(41.649181,42.212226),(41.596202,42.349433),(41.592296,42.355414),(41.585216,42.36286),(41.55836,42.382799),(41.554861,42.389838),(41.551199,42.40644),(41.551195,42.406457),(41.500173,42.64057),(41.482758,42.680243),(41.457367,42.71369),(41.424327,42.739895),(41.383474,42.75727),(41.307302,42.767646),(41.291026,42.774359),(41.274913,42.785305),(41.250743,42.795233),(41.225352,42.802314),(41.205333,42.804429),(41.193696,42.801907),(41.173595,42.792792),(41.160899,42.790839),(41.150238,42.79564),(41.104747,42.846584),(41.081798,42.908149),(41.040294,42.96133),(41.023123,42.989976),(40.998383,42.989447),(40.958995,42.976386),(40.933767,
42.982733),(40.903982,43.016343),(40.886567,43.024156),(40.886241,43.029283),(40.862804,43.058987),(40.855968,43.063056),(40.783051,43.084174),(40.682872,43.094306),(40.636241,43.092475),(40.604015,43.086493),(40.591807,43.085639),(40.579926,43.088813),(40.56072,43.102973),(40.550548,43.106106),(40.546072,43.108832),(40.528982,43.12226),(40.520193,43.127265),(40.510265,43.12995),(40.478526,43.134019),(40.398285,43.16234),(40.361176,43.165473),(40.335134,43.140855),(40.325043,43.14883),(40.310313,43.175727),(40.301036,43.188666),(40.278331,43.205512),(40.268403,43.216376),(40.272797,43.247992),(40.257172,43.277411),(40.235199,43.304429),(40.219086,43.319037),(40.188243,43.329576),(40.133474,43.34162),(40.101329,43.36518),(40.086925,43.370307),(40.040782,43.3758),(40.009776,43.385159),(39.986013,43.388983),(39.985976,43.38899),(39.991285,43.406318),(40.059187,43.535199),(40.07283,43.551064),(40.096705,43.562639),(40.164297,43.575843),(40.23313,43.575765),(40.479834,43.508741)] +Ghana [(-0.166109,11.13498),(-0.158668,11.118444),(-0.14208,11.103329),(-0.121745,11.092192),(-0.103348,11.087541),(-0.085029,11.089402),(-0.051077,11.098264),(-0.032267,11.098574),(0.001116,11.085991),(0.016205,11.062582),(0.019409,11.031628),(0.016567,10.996203),(0.01419,10.983672),(0.007188,10.976799),(-0.001571,10.971295),(-0.009581,10.963001),(-0.014439,10.953596),(-0.036091,10.853137),(-0.035316,10.843448),(-0.0302,10.823785),(-0.032061,10.813165),(-0.04069,10.805026),(-0.061103,10.791358),(-0.07521,10.773659),(-0.082807,10.756218),(-0.088129,10.714412),(-0.091564,10.700254),(-0.09769,10.675008),(-0.098336,10.654157),(-0.088129,10.633486),(-0.079344,10.626252),(-0.068363,10.621058),(-0.056452,10.618009),(-0.033352,10.615064),(-0.026221,10.610361),(-0.019968,10.604341),(-0.010925,10.598269),(0.020339,10.588037),(0.029434,10.582533),(0.037341,10.57318),(0.04902,10.550959),(0.057262,10.541115),(0.110179,10.508249),(0.126018,10.491609),(0.176971,10.397687),(0.179865,10.394173),(0.187513,10.394948),(0.19175,10.400839),(0.194437,10.407376),(0.197073,10.41027),(0.223609,10.399961),(0.23322,10.398617),(0.244563,10.401537),(0.253943,10.406084),(0.263012,10.408926),(0.273244,10.406601),(0.283114,10.396808),(0.283553,10.385775),(0.28022,10.375182),(0.278825,10.366629),(0.290039,10.348491),(0.307066,10.333272),(0.317582,10.31733),(0.308797,10.297099),(0.32275,10.297254),(0.366597,10.304488),(0.376183,10.299941),(0.396596,10.283224),(0.372049,10.265137),(0.367295,10.249815),(0.364427,10.233407),(0.357192,10.219997),(0.358846,10.18633),(0.353601,10.115508),(0.363393,10.089669),(0.37608,10.081582),(0.387087,10.078947),(0.395149,10.072358),(0.398146,10.052127),(0.39457,10.035557),(0.393908,10.03249),(0.382746,10.029596),(0.367295,10.032128),(0.349777,10.02882),(0.355719,10.018976),(0.355125,10.01262),(0.35174,10.006419),(0.349777,9.997169),(0.35143,9.990451),(0.363393,9.966783),(0.370189,9.947017),(0.369569,9.938826),(0.362592,9.932676),(0.349777,9.918956),(0.341715,9.903789),(0.339674,9.889837),(0.344196,9.849762),(0.343886,9.838393),(0.32244,9.760671),(0.323887,9.742068),(0.329132,9.731293),(0.336289,9.72349),(0.342955,9.713543),(0.347606,9.698841),(0.34988,9.680651),(0.347089,9.663158),(0.336651,9.650833),(0.318719,9.648611),(0.301924,9.657835),(0.275285,9.685586),(0.260971,9.657164),(0.261694,9.627941),(0.277636,9.606986),(0.308797,9.603678),(0.349363,9.61391),(0.369155,9.610138),(0.377734,9.589416),(0.370809,9.575644),(0.354608,9.571096),(0.33616,9.574197),(0.32244,9.583137),(0.30456,9.571562),(0.28053,9.570166),(0.230197,9
.575747),(0.227743,9.569339),(0.223066,9.541176),(0.223402,9.534794),(0.225908,9.531926),(0.23322,9.525595),(0.243116,9.518697),(0.253503,9.514252),(0.279006,9.519007),(0.293139,9.518283),(0.301924,9.507457),(0.2981,9.497096),(0.28531,9.488363),(0.26973,9.482342),(0.228053,9.473971),(0.227949,9.459734),(0.234977,9.443766),(0.226813,9.432345),(0.23136,9.418961),(0.249886,9.416946),(0.26849,9.420641),(0.314223,9.436479),(0.32182,9.442836),(0.325747,9.451336),(0.328796,9.471568),(0.333783,9.479474),(0.34895,9.486037),(0.371429,9.490016),(0.411659,9.492238),(0.430444,9.489189),(0.483515,9.470017),(0.489923,9.461129),(0.49168,9.451285),(0.489406,9.441027),(0.482792,9.429839),(0.482585,9.428857),(0.482792,9.427875),(0.483515,9.426842),(0.513281,9.416791),(0.531368,9.401107),(0.537569,9.377646),(0.528267,9.326951),(0.526613,9.297651),(0.522479,9.285067),(0.512351,9.274706),(0.501085,9.268195),(0.49292,9.258609),(0.49199,9.238895),(0.504393,9.203109),(0.504703,9.187166),(0.49261,9.166754),(0.469976,9.14673),(0.463646,9.137092),(0.463749,9.12146),(0.458478,9.08924),(0.441554,9.058182),(0.43127,9.02759),(0.44468,8.996558),(0.444887,8.996558),(0.444887,8.996455),(0.493127,8.915219),(0.505633,8.866333),(0.483629,8.834973),(0.483515,8.834811),(0.478451,8.822331),(0.477004,8.80675),(0.47349,8.793547),(0.462483,8.788302),(0.449512,8.790369),(0.439409,8.793444),(0.430134,8.794736),(0.419592,8.791247),(0.424966,8.78732),(0.4183,8.770603),(0.407809,8.759053),(0.397112,8.781532),(0.385278,8.78608),(0.372669,8.783754),(0.365874,8.774272),(0.368084,8.761481),(0.374426,8.724766),(0.403727,8.662082),(0.443518,8.606866),(0.483515,8.579787),(0.500568,8.560176),(0.524753,8.538033),(0.550798,8.519326),(0.616324,8.488759),(0.63379,8.452018),(0.646709,8.414165),(0.685053,8.382564),(0.687224,8.372203),(0.686914,8.360731),(0.689394,8.346055),(0.688567,8.33603),(0.689187,8.331301),(0.693115,8.327891),(0.705104,8.325979),(0.708825,8.322568),(0.712752,8.29797),(0.705207,8.282751),(0.689187,8.272855),(0.645056,8.253425),(0.628313,8.24247),(0.595033,8.21273),(0.586661,8.208751),(0.579013,8.207821),(0.573846,8.203945),(0.572709,8.191),(0.576843,8.180484),(0.584594,8.173068),(0.591519,8.16449),(0.593276,8.150666),(0.587798,8.137721),(0.579943,8.131701),(0.574983,8.124621),(0.580254,8.100075),(0.583974,8.05305),(0.593173,8.015739),(0.593689,7.996567),(0.592863,7.959309),(0.602688,7.893132),(0.603405,7.888305),(0.602578,7.828412),(0.611673,7.771258),(0.609502,7.718961),(0.612293,7.699428),(0.583354,7.704285),(0.570538,7.683925),(0.567438,7.652919),(0.567748,7.625737),(0.562063,7.612301),(0.548524,7.605377),(0.532505,7.601036),(0.519379,7.595196),(0.50925,7.585378),(0.503049,7.574681),(0.499432,7.562382),(0.495608,7.504298),(0.500568,7.45505),(0.516692,7.411022),(0.548214,7.383271),(0.566404,7.380791),(0.586248,7.384357),(0.623042,7.397896),(0.629656,7.389162),(0.645676,7.323585),(0.644436,7.301933),(0.631103,7.265242),(0.621801,7.218837),(0.59772,7.158892),(0.59431,7.119722),(0.598754,7.074711),(0.59803,7.0312),(0.57891,6.996783),(0.565991,6.989652),(0.523823,6.978697),(0.509147,6.977611),(0.494574,6.973116),(0.506046,6.957819),(0.534468,6.935909),(0.54625,6.914773),(0.543873,6.897926),(0.522789,6.862838),(0.516071,6.841806),(0.519275,6.832246),(0.528887,6.826535),(0.541083,6.817182),(0.570022,6.763516),(0.576636,6.754731),(0.593793,6.751114),(0.611259,6.741528),(0.625832,6.727756),(0.634307,6.711607),(0.635341,6.695381),(0.626659,6.644893),(0.63224,6.62399),(0.645366,6.612026),(0.659422,6.601717),(0.668207,6.585827),(0.677612,6.
584896),(0.705517,6.585775),(0.714612,6.583914),(0.727428,6.571435),(0.726808,6.562107),(0.71978,6.552728),(0.713475,6.53999),(0.713076,6.533936),(0.711718,6.513376),(0.71792,6.48883),(0.729908,6.465085),(0.745721,6.4409),(0.766805,6.419351),(0.816208,6.395063),(0.841323,6.379224),(0.873879,6.340131),(0.886798,6.33039),(0.907262,6.324912),(0.983536,6.324602),(1.009685,6.244452),(1.025188,6.222334),(1.033146,6.216547),(1.040587,6.213472),(1.048132,6.20939),(1.05578,6.20063),(1.068906,6.180347),(1.079655,6.167919),(1.093401,6.160917),(1.176186,6.151512),(1.187968,6.135647),(1.185396,6.100491),(1.185395,6.100491),(1.184825,6.100328),(1.178396,6.097968),(1.133067,6.073635),(1.098806,6.047024),(1.048025,5.992662),(1.036388,5.975165),(1.011729,5.926256),(1.007091,5.906684),(1.005626,5.886054),(0.995291,5.839342),(0.986583,5.820746),(0.954845,5.791083),(0.914399,5.777086),(0.723481,5.755439),(0.364919,5.78148),(0.321951,5.779202),(0.298106,5.773668),(0.285981,5.768866),(0.280935,5.763007),(0.27589,5.760159),(0.25172,5.759101),(0.243337,5.756171),(0.224376,5.742621),(0.1574,5.724555),(0.1421,5.71426),(0.114594,5.688666),(0.075694,5.677558),(0.061697,5.662828),(0.050629,5.644761),(0.034516,5.628363),(-0.045033,5.60163),(-0.049672,5.598863),(-0.062734,5.585842),(-0.068593,5.581204),(-0.077382,5.57807),(-0.093251,5.576646),(-0.307851,5.507392),(-0.353098,5.501695),(-0.359283,5.498684),(-0.371571,5.492743),(-0.445058,5.431383),(-0.45287,5.420111),(-0.458119,5.404731),(-0.471588,5.390815),(-0.507436,5.368842),(-0.577382,5.35163),(-0.589345,5.351508),(-0.596669,5.344306),(-0.612864,5.332017),(-0.629465,5.326728),(-0.637766,5.340888),(-0.689524,5.304755),(-0.702301,5.299994),(-0.760732,5.252183),(-0.784495,5.22427),(-0.79955,5.21483),(-0.873891,5.204006),(-0.89684,5.205024),(-0.911488,5.218004),(-0.918284,5.209418),(-0.927724,5.206122),(-0.952463,5.204413),(-0.976552,5.19831),(-0.988393,5.198798),(-0.993479,5.20775),(-0.99706,5.216783),(-1.005035,5.214545),(-1.017323,5.204413),(-1.033803,5.202948),(-1.049794,5.198676),(-1.154124,5.153876),(-1.164703,5.139228),(-1.173655,5.130561),(-1.194081,5.118598),(-1.233632,5.100735),(-1.25414,5.096503),(-1.319244,5.09455),(-1.329986,5.091539),(-1.353139,5.077786),(-1.364003,5.073432),(-1.374135,5.072659),(-1.394765,5.07392),(-1.404937,5.073432),(-1.423451,5.06745),(-1.459706,5.04975),(-1.529286,5.039862),(-1.551869,5.031399),(-1.568796,5.019273),(-1.569488,5.018785),(-1.583852,5.025377),(-1.601471,5.026679),(-1.618479,5.022447),(-1.630849,5.012641),(-1.632436,5.004543),(-1.625478,4.989488),(-1.627756,4.981594),(-1.634023,4.974433),(-1.638173,4.970689),(-1.706654,4.936916),(-1.718821,4.92768),(-1.74767,4.895901),(-1.749908,4.888577),(-1.748118,4.881903),(-1.748606,4.877183),(-1.757436,4.875434),(-1.766754,4.874701),(-1.835439,4.855129),(-1.912099,4.820868),(-1.92809,4.806789),(-1.95226,4.772447),(-1.967275,4.758124),(-1.981435,4.752427),(-2.050364,4.743638),(-2.063832,4.738186),(-2.090566,4.737128),(-2.096588,4.738471),(-2.103871,4.745063),(-2.106679,4.755357),(-2.105621,4.766547),(-2.107981,4.77558),(-2.120961,4.779202),(-2.129018,4.780341),(-2.144154,4.78498),(-2.155385,4.786078),(-2.154368,4.790676),(-2.163726,4.800523),(-2.206288,4.834459),(-2.227406,4.843411),(-2.23705,4.849189),(-2.241119,4.858059),(-2.242828,4.865383),(-2.247222,4.873033),(-2.252756,4.879136),(-2.356597,4.919623),(-2.525787,4.954413),(-2.690419,5.010199),(-2.820872,5.020087),(-2.964996,5.045315),(-3.103749,5.087714),(-3.119601,5.091331),(-3.115281,5.107821),(-3.108306,5.109605),(-3.098988,5.113
227),(-3.087636,5.114976),(-3.016103,5.114447),(-2.994496,5.10814),(-2.988515,5.10224),(-2.976308,5.084621),(-2.970937,5.080878),(-2.957753,5.082668),(-2.937245,5.091295),(-2.92634,5.09455),(-2.929026,5.122016),(-2.901235,5.133612),(-2.865305,5.139309),(-2.843699,5.149115),(-2.843658,5.149115),(-2.803855,5.147568),(-2.780885,5.150875),(-2.766622,5.16103),(-2.75701,5.179866),(-2.762488,5.1863),(-2.775666,5.197539),(-2.778818,5.201001),(-2.779645,5.210122),(-2.775407,5.231025),(-2.774891,5.239578),(-2.779335,5.251076),(-2.793287,5.271101),(-2.79613,5.284976),(-2.789567,5.346858),(-2.755874,5.340709),(-2.742593,5.343138),(-2.732671,5.355721),(-2.730139,5.373162),(-2.734428,5.391145),(-2.761713,5.454914),(-2.779438,5.526485),(-2.781815,5.578885),(-2.785588,5.595034),(-2.793442,5.608134),(-2.806258,5.618495),(-2.877494,5.636815),(-2.893746,5.630355),(-2.926302,5.610718),(-2.945862,5.608289),(-2.969504,5.621441),(-2.974904,5.647279),(-2.964801,5.709782),(-3.029759,5.704252),(-3.019966,5.818354),(-3.022524,5.854502),(-3.03472,5.890701),(-3.073244,5.960955),(-3.087765,5.996689),(-3.118074,6.163475),(-3.139364,6.219286),(-3.168794,6.246054),(-3.171722,6.249585),(-3.183599,6.263908),(-3.262509,6.617142),(-3.261114,6.633937),(-3.253905,6.644428),(-3.235483,6.654582),(-3.230057,6.661119),(-3.210419,6.715999),(-3.212745,6.722666),(-3.226232,6.748581),(-3.238738,6.764627),(-3.243079,6.774756),(-3.241684,6.810903),(-3.224941,6.849402),(-3.109599,7.046393),(-3.097274,7.054053),(-3.096593,7.054477),(-3.095465,7.055178),(-3.045933,7.071249),(-3.040869,7.078329),(-3.032601,7.128352),(-3.027537,7.138428),(-2.9817,7.195531),(-2.972811,7.214961),(-2.972088,7.228604),(-2.976532,7.236717),(-2.983224,7.244985),(-2.989529,7.258938),(-2.990459,7.270668),(-2.941521,7.577161),(-2.928163,7.617676),(-2.855945,7.771568),(-2.852069,7.778131),(-2.842018,7.790275),(-2.838168,7.796838),(-2.83791,7.802264),(-2.84083,7.815235),(-2.840003,7.820247),(-2.831399,7.82836),(-2.809152,7.841693),(-2.801194,7.85022),(-2.797887,7.861278),(-2.793546,7.935951),(-2.790626,7.943289),(-2.784347,7.944012),(-2.777681,7.93962),(-2.772617,7.933935),(-2.771221,7.930732),(-2.754892,7.944374),(-2.709261,7.996722),(-2.690865,8.009745),(-2.631178,8.029589),(-2.610559,8.040001),(-2.600689,8.052946),(-2.599759,8.070387),(-2.606219,8.094313),(-2.619448,8.123097),(-2.621877,8.135732),(-2.617639,8.151493),(-2.610663,8.164852),(-2.604255,8.170794),(-2.595987,8.170278),(-2.583274,8.164335),(-2.57175,8.16312),(-2.558108,8.167694),(-2.513201,8.194023),(-2.506328,8.209267),(-2.552346,8.489367),(-2.598364,8.769466),(-2.603118,8.78515),(-2.612161,8.780964),(-2.612382,8.781321),(-2.617226,8.78918),(-2.614642,8.801919),(-2.607252,8.808559),(-2.596658,8.814398),(-2.595573,8.828067),(-2.615934,8.89783),(-2.617174,8.915013),(-2.619448,8.923617),(-2.625029,8.925994),(-2.631747,8.926976),(-2.63769,8.931471),(-2.653296,8.978394),(-2.654329,8.998832),(-2.660789,9.014903),(-2.686059,9.028261),(-2.711328,9.033894),(-2.73696,9.035134),(-2.750809,9.037408),(-2.758664,9.04286),(-2.765072,9.049475),(-2.774839,9.054978),(-2.765434,9.069112),(-2.7613,9.084795),(-2.759388,9.134818),(-2.755202,9.143138),(-2.733911,9.164842),(-2.716496,9.200266),(-2.704817,9.218405),(-2.673088,9.23489),(-2.661151,9.253907),(-2.659807,9.272898),(-2.675517,9.281502),(-2.692983,9.285326),(-2.707143,9.295635),(-2.714998,9.311061),(-2.713344,9.329923),(-2.70306,9.343255),(-2.689418,9.349534),(-2.677532,9.357001),(-2.672416,9.374028),(-2.672416,9.401004),(-2.677377,9.417773),(-2.678669,9.42555),(-2.674587
,9.457486),(-2.675672,9.471723),(-2.689211,9.488724),(-2.725281,9.533864),(-2.761093,9.566239),(-2.766261,9.576523),(-2.768018,9.592801),(-2.766467,9.608898),(-2.761506,9.625253),(-2.752928,9.639904),(-2.740164,9.650833),(-2.743404,9.653325),(-2.751016,9.659179),(-2.75422,9.668352),(-2.75546,9.677498),(-2.760524,9.685586),(-2.769258,9.689926),(-2.774839,9.689306),(-2.780317,9.690237),(-2.788481,9.699254),(-2.795303,9.720002),(-2.791582,9.740931),(-2.774839,9.77434),(-2.740267,9.812606),(-2.733911,9.822787),(-2.736753,9.832579),(-2.760524,9.870536),(-2.764865,9.895754),(-2.755564,9.941151),(-2.754375,9.966783),(-2.760886,9.984766),(-2.783572,10.018511),(-2.788481,10.038768),(-2.788481,10.124448),(-2.79892,10.169613),(-2.798455,10.189534),(-2.785381,10.210101),(-2.765124,10.225837),(-2.757682,10.234648),(-2.754375,10.247954),(-2.754943,10.260796),(-2.758354,10.266119),(-2.764865,10.269116),(-2.774839,10.27524),(-2.784451,10.278805),(-2.793184,10.278547),(-2.799669,10.280924),(-2.802098,10.292319),(-2.80569,10.299011),(-2.814216,10.301078),(-2.824448,10.302215),(-2.833182,10.305987),(-2.841295,10.322291),(-2.841295,10.343272),(-2.836282,10.363658),(-2.829461,10.378282),(-2.822329,10.387584),(-2.81463,10.393708),(-2.80494,10.397118),(-2.791995,10.398152),(-2.78104,10.402415),(-2.771893,10.41182),(-2.767552,10.4212),(-2.771428,10.425489),(-2.795871,10.426109),(-2.812149,10.428822),(-2.824448,10.435023),(-2.859562,10.465616),(-2.867391,10.47825),(-2.87044,10.497784),(-2.873386,10.502487),(-2.886899,10.515793),(-2.891524,10.521659),(-2.894211,10.531425),(-2.895865,10.552768),(-2.898294,10.562612),(-2.928266,10.615219),(-2.932504,10.634313),(-2.929661,10.644209),(-2.916251,10.668084),(-2.910154,10.684491),(-2.907337,10.688315),(-2.905115,10.692733),(-2.905141,10.699193),(-2.908035,10.703637),(-2.912634,10.705808),(-2.916949,10.706609),(-2.918913,10.706609),(-2.919959,10.711088),(-2.922323,10.721207),(-2.919274,10.727202),(-2.908577,10.737666),(-2.9031,10.747149),(-2.899482,10.76999),(-2.882662,10.797068),(-2.871629,10.843448),(-2.864239,10.864273),(-2.850803,10.879544),(-2.834861,10.89161),(-2.821399,10.90626),(-2.815767,10.929153),(-2.821761,10.948299),(-2.832716,10.966024),(-2.838711,10.985119),(-2.835946,10.992224),(-2.837341,10.998012),(-2.751119,10.996462),(-2.750706,10.985842),(-2.728798,10.986057),(-2.634951,10.986979),(-2.439769,10.988943),(-2.197871,10.991294),(-1.972872,10.993516),(-1.754074,10.995635),(-1.598683,10.997133),(-1.587004,11.003671),(-1.579821,11.021215),(-1.562406,11.026589),(-1.436057,11.022713),(-1.423584,11.017911),(-1.413553,11.01405),(-1.38035,11.001268),(-1.355287,10.99672),(-1.22806,10.99548),(-1.10755,10.994343),(-1.098714,11.009562),(-1.070343,11.013902),(-0.932471,11.003102),(-0.924047,11.001423),(-0.916451,10.996203),(-0.909475,10.98269),(-0.904359,10.979486),(-0.89392,10.979899),(-0.900586,10.966231),(-0.883171,10.968634),(-0.860795,10.992922),(-0.839401,10.996462),(-0.833665,11.003412),(-0.827774,11.005841),(-0.822141,11.003619),(-0.816974,10.996462),(-0.816974,10.996203),(-0.703017,10.994159),(-0.695999,10.994033),(-0.694656,10.982742),(-0.690212,10.981811),(-0.683235,10.984344),(-0.673933,10.983517),(-0.667991,10.976515),(-0.67228,10.968505),(-0.679773,10.960572),(-0.68277,10.953596),(-0.634763,10.907966),(-0.617244,10.918637),(-0.607116,10.94122),(-0.599829,10.965766),(-0.590579,10.982432),(-0.576213,10.987754),(-0.555439,10.98995),(-0.516475,10.988633),(-0.512961,10.989744),(-0.509706,10.991449),(-0.506709,10.993671),(-0.504228,10.996203),(-0.504228,10.99635
8),(-0.494823,11.007805),(-0.46852,11.020129),(-0.456686,11.029612),(-0.45126,11.040206),(-0.449038,11.062892),(-0.444387,11.077439),(-0.425215,11.101468),(-0.405888,11.101468),(-0.386923,11.086405),(-0.368939,11.065217),(-0.318813,11.101365),(-0.306126,11.113535),(-0.298297,11.128392),(-0.300545,11.137409),(-0.304473,11.146788),(-0.301682,11.162937),(-0.166109,11.13498)] +Gibraltar [(-5.358387,36.141109),(-5.338773,36.14112),(-5.339915,36.129828),(-5.33906,36.123847),(-5.34203,36.1105),(-5.35025,36.119289),(-5.358387,36.141109)] +Guinea [(-13.338613,12.63923),(-13.332773,12.639643),(-13.330034,12.642072),(-13.327709,12.644914),(-13.322877,12.646516),(-13.292931,12.652717),(-13.284146,12.653337),(-13.275722,12.651064),(-13.263398,12.643209),(-13.25469,12.641813),(-13.246008,12.643157),(-13.229084,12.647808),(-13.22154,12.648376),(-13.21601,12.645741),(-13.20645,12.635664),(-13.201903,12.63339),(-13.197872,12.634062),(-13.185211,12.637783),(-13.176271,12.637731),(-13.169217,12.638765),(-13.162629,12.638144),(-13.154696,12.632822),(-13.145317,12.643932),(-13.127308,12.634837),(-13.108988,12.635406),(-13.091703,12.638196),(-13.076458,12.635922),(-13.063642,12.62228),(-13.059327,12.60347),(-13.060413,12.583367),(-13.064211,12.566159),(-13.079094,12.532156),(-13.082711,12.515671),(-13.078267,12.496241),(-13.066743,12.480118),(-13.051989,12.472263),(-13.034626,12.470558),(-13.015247,12.472676),(-12.997419,12.465855),(-12.981038,12.466992),(-12.969514,12.476811),(-12.966232,12.496086),(-12.949489,12.535928),(-12.913135,12.536342),(-12.874042,12.516395),(-12.849211,12.495001),(-12.84965,12.491073),(-12.851252,12.484355),(-12.849133,12.475829),(-12.838617,12.466372),(-12.829057,12.462755),(-12.799421,12.460636),(-12.783479,12.453091),(-12.779164,12.443273),(-12.773272,12.435056),(-12.752654,12.432421),(-12.688058,12.435986),(-12.667517,12.433661),(-12.648009,12.425754),(-12.63155,12.41268),(-12.603412,12.381933),(-12.598064,12.372424),(-12.597314,12.365448),(-12.593671,12.361572),(-12.579408,12.361159),(-12.57813,12.361528),(-12.570649,12.363691),(-12.541452,12.379452),(-12.521918,12.386894),(-12.505072,12.390253),(-12.48776,12.389477),(-12.466754,12.384465),(-12.423397,12.369117),(-12.405517,12.356456),(-12.377121,12.313668),(-12.36092,12.305607),(-12.192042,12.348756),(-12.155041,12.369014),(-12.121917,12.398728),(-12.103985,12.407357),(-12.07918,12.408133),(-12.018874,12.388444),(-11.997583,12.386377),(-11.984303,12.389012),(-11.946269,12.409425),(-11.93521,12.417021),(-11.930869,12.418726),(-11.923944,12.418778),(-11.921981,12.417693),(-11.920637,12.416039),(-11.860176,12.391079),(-11.839609,12.386687),(-11.757805,12.383328),(-11.720029,12.389477),(-11.643083,12.417641),(-11.601535,12.425238),(-11.515391,12.431645),(-11.481646,12.428235),(-11.416017,12.40498),(-11.388422,12.403895),(-11.388628,12.384465),(-11.407594,12.381933),(-11.431675,12.382966),(-11.447229,12.374543),(-11.451467,12.352115),(-11.444904,12.312531),(-11.444594,12.289484),(-11.463249,12.249951),(-11.491671,12.220909),(-11.507794,12.191608),(-11.489604,12.151197),(-11.470794,12.134609),(-11.409557,12.107893),(-11.391729,12.093733),(-11.351318,12.040403),(-11.334678,12.026192),(-11.315144,12.013325),(-11.294681,12.003093),(-11.274992,11.996271),(-11.25551,11.996168),(-11.223315,12.009294),(-11.206986,12.010172),(-11.191173,12.01441),(-11.176342,12.029499),(-11.165644,12.04862),(-11.159392,12.077507),(-11.152725,12.080866),(-11.14425,12.081331),(-11.136292,12.085103),(-11.122081,12.107272),(-11.117477,12.112702),(-11.116604,12.113
732),(-11.084306,12.136573),(-11.072782,12.149492),(-11.059811,12.192435),(-11.052318,12.202512),(-11.038779,12.205251),(-11.015473,12.204424),(-10.972013,12.218583),(-10.952117,12.219462),(-10.927675,12.212951),(-10.909794,12.200135),(-10.869642,12.138433),(-10.846387,12.12138),(-10.822926,12.110476),(-10.804943,12.095955),(-10.798535,12.068102),(-10.801222,12.059213),(-10.811041,12.04154),(-10.812436,12.033427),(-10.807785,12.022833),(-10.781017,11.99622),(-10.740347,11.919894),(-10.711357,11.890386),(-10.669964,11.89204),(-10.655081,11.898965),(-10.645159,11.90837),(-10.634152,11.925423),(-10.625264,11.943872),(-10.621543,11.957669),(-10.613792,11.967694),(-10.595653,11.97989),(-10.574983,11.990329),(-10.55948,11.995186),(-10.566508,12.017252),(-10.555449,12.029344),(-10.535399,12.038698),(-10.51509,12.052909),(-10.514883,12.060144),(-10.517674,12.070737),(-10.517312,12.080349),(-10.507959,12.084586),(-10.504445,12.088359),(-10.512661,12.108461),(-10.511628,12.118073),(-10.493489,12.124429),(-10.445688,12.120708),(-10.435973,12.135178),(-10.428945,12.143394),(-10.39458,12.17161),(-10.381403,12.180136),(-10.37484,12.174814),(-10.368225,12.171506),(-10.353394,12.166494),(-10.354118,12.18086),(-10.348433,12.184787),(-10.339338,12.185459),(-10.329882,12.19042),(-10.325592,12.199773),(-10.323577,12.201995),(-10.316497,12.196879),(-10.313603,12.191143),(-10.31102,12.19104),(-10.304974,12.202874),(-10.26694,12.217808),(-10.258981,12.213778),(-10.254124,12.207215),(-10.247819,12.201065),(-10.21485,12.194606),(-10.179762,12.178224),(-10.160228,12.174503),(-10.135837,12.175485),(-10.128654,12.174503),(-10.118835,12.170163),(-10.105192,12.158639),(-10.097286,12.153678),(-10.09034,12.151584),(-10.087002,12.150577),(-10.070363,12.149854),(-10.060182,12.147528),(-10.04809,12.141379),(-10.027678,12.125669),(-10.015534,12.119881),(-9.996414,12.114145),(-9.961584,12.096007),(-9.941533,12.092751),(-9.923188,12.087584),(-9.888772,12.060557),(-9.870736,12.052082),(-9.861125,12.051875),(-9.840661,12.055028),(-9.833684,12.055131),(-9.823039,12.050893),(-9.806813,12.038439),(-9.796632,12.032342),(-9.778649,12.026657),(-9.722994,12.025417),(-9.698809,12.050997),(-9.686303,12.068153),(-9.681549,12.080814),(-9.683668,12.101485),(-9.682273,12.118538),(-9.675141,12.133989),(-9.66,12.149802),(-9.628167,12.170163),(-9.515616,12.20706),(-9.495876,12.224681),(-9.479598,12.235843),(-9.438877,12.254757),(-9.421513,12.257082),(-9.360173,12.246644),(-9.336816,12.269846),(-9.331906,12.282766),(-9.335007,12.306537),(-9.321829,12.310206),(-9.312889,12.32607),(-9.308497,12.345036),(-9.309169,12.357748),(-9.314801,12.362657),(-9.331648,12.366326),(-9.339167,12.370616),(-9.343947,12.380124),(-9.344464,12.389012),(-9.347151,12.396454),(-9.374694,12.41113),(-9.392781,12.428028),(-9.407251,12.445133),(-9.412573,12.455365),(-9.398155,12.473245),(-9.350872,12.484614),(-9.33418,12.496293),(-9.327307,12.497533),(-9.321519,12.496603),(-9.308187,12.488955),(-9.294079,12.483787),(-9.286379,12.488128),(-9.278938,12.494432),(-9.265864,12.495207),(-9.189486,12.479239),(-9.147318,12.465493),(-9.027765,12.404751),(-8.993839,12.387514),(-8.975907,12.368704),(-8.964745,12.346276),(-8.963092,12.32421),(-8.972135,12.301989),(-8.985261,12.282094),(-8.995906,12.26101),(-8.99756,12.23481),(-8.994278,12.224836),(-8.984537,12.20892),(-8.981178,12.198946),(-8.97384,12.187578),(-8.961696,12.187991),(-8.942008,12.195846),(-8.927021,12.187371),(-8.913896,12.169853),(-8.906041,12.147735),(-8.906971,12.125359),(-8.921957,12.086033),(-8.926298,12.064278),(-8.
922112,12.045364),(-8.903715,12.030378),(-8.88165,12.029189),(-8.858757,12.031825),(-8.837983,12.028156),(-8.819793,12.012653),(-8.815452,11.995703),(-8.816434,11.976841),(-8.814909,11.955395),(-8.809974,11.94444),(-8.803205,11.934932),(-8.79783,11.925113),(-8.797004,11.913227),(-8.847233,11.657946),(-8.828526,11.659186),(-8.81168,11.651745),(-8.795195,11.641926),(-8.777211,11.635932),(-8.756541,11.638826),(-8.740366,11.646164),(-8.726517,11.649471),(-8.712874,11.640066),(-8.690705,11.568029),(-8.687346,11.55002),(-8.679026,11.52984),(-8.667967,11.510875),(-8.656754,11.496431),(-8.602183,11.472092),(-8.575208,11.470283),(-8.549577,11.490127),(-8.543996,11.477647),(-8.539425,11.456312),(-8.535521,11.438088),(-8.527563,11.423387),(-8.51578,11.418968),(-8.488134,11.418193),(-8.427724,11.40008),(-8.397648,11.385895),(-8.375324,11.367654),(-8.393359,11.363003),(-8.414133,11.353468),(-8.42607,11.340472),(-8.417957,11.325511),(-8.39956,11.322152),(-8.381267,11.325098),(-8.371035,11.32055),(-8.376719,11.294661),(-8.388398,11.281406),(-8.406691,11.275282),(-8.426897,11.274197),(-8.44457,11.276264),(-8.483224,11.28492),(-8.492578,11.277788),(-8.50312,11.255412),(-8.502345,11.25164),(-8.499244,11.246705),(-8.497022,11.24071),(-8.498779,11.233501),(-8.502138,11.231176),(-8.51206,11.230298),(-8.515625,11.229057),(-8.5241,11.22376),(-8.544616,11.214975),(-8.551799,11.210738),(-8.563943,11.193426),(-8.569679,11.157796),(-8.578154,11.14206),(-8.586991,11.137461),(-8.607222,11.137151),(-8.615413,11.13405),(-8.624611,11.122035),(-8.624921,11.112191),(-8.622389,11.102605),(-8.623112,11.091262),(-8.63368,11.071289),(-8.679801,11.025194),(-8.68869,11.012559),(-8.696906,10.996048),(-8.70073,10.978452),(-8.696234,10.962743),(-8.680938,10.952304),(-8.665229,10.956955),(-8.647969,10.965818),(-8.628228,10.967833),(-8.611382,10.965146),(-8.593347,10.964707),(-8.575622,10.966748),(-8.55955,10.971605),(-8.541567,10.983775),(-8.531645,10.999046),(-8.505755,11.052789),(-8.489064,11.052737),(-8.469013,11.046381),(-8.448704,11.047105),(-8.438162,11.04974),(-8.429067,11.048293),(-8.410464,11.041911),(-8.400645,11.043332),(-8.381318,11.053926),(-8.369329,11.054236),(-8.347315,11.043126),(-8.326593,11.02385),(-8.311142,11.000803),(-8.305096,10.978246),(-8.303649,10.874893),(-8.305871,10.856289),(-8.311814,10.847246),(-8.318893,10.83952),(-8.339822,10.783865),(-8.341941,10.763246),(-8.315586,10.753092),(-8.311142,10.735289),(-8.295535,10.537446),(-8.284425,10.511246),(-8.253884,10.469827),(-8.228873,10.423422),(-8.210579,10.414068),(-8.18903,10.413629),(-8.165983,10.418099),(-8.150583,10.424533),(-8.142056,10.429649),(-8.135442,10.427323),(-8.125417,10.411665),(-8.122109,10.399496),(-8.120921,10.373037),(-8.115495,10.360247),(-8.096116,10.346734),(-8.070071,10.341876),(-8.015708,10.339344),(-7.996277,10.32826),(-7.981549,10.307021),(-7.971214,10.281802),(-7.9646,10.258936),(-7.962584,10.233252),(-7.96801,10.209998),(-7.989663,10.161991),(-7.999843,10.150699),(-8.007233,10.138323),(-8.012349,10.124861),(-8.015708,10.110366),(-8.03488,10.085794),(-8.057876,10.073081),(-8.108777,10.054194),(-8.127535,10.037115),(-8.147741,10.010036),(-8.15849,9.984921),(-8.148878,9.973888),(-8.136269,9.969444),(-8.140093,9.959703),(-8.152392,9.949988),(-8.173837,9.941875),(-8.174716,9.933038),(-8.171512,9.922832),(-8.167895,9.914693),(-8.164536,9.912678),(-8.158955,9.912368),(-8.153219,9.910921),(-8.14955,9.90534),(-8.147224,9.887925),(-8.145674,9.882499),(-8.147327,9.880122),(-8.152133,9.87883),(-8.155079,9.875238),(-8.150996,9.866143),(-8.14686
2,9.864696),(-8.128104,9.866246),(-8.123505,9.844852),(-8.160293,9.584814),(-8.16159,9.575644),(-8.161022,9.553604),(-8.157611,9.530866),(-8.152857,9.513116),(-8.145364,9.497148),(-8.087176,9.415395),(-8.070433,9.397541),(-8.054051,9.389092),(-8.032296,9.384648),(-8.009868,9.383614),(-7.991575,9.38532),(-7.971059,9.392968),(-7.933025,9.414259),(-7.91189,9.418651),(-7.8706,9.418599),(-7.865381,9.409918),(-7.891736,9.330233),(-7.89494,9.307133),(-7.930648,9.220291),(-7.929563,9.183885),(-7.878713,9.163912),(-7.866311,9.166031),(-7.856234,9.170217),(-7.847914,9.168563),(-7.83422,9.142983),(-7.823368,9.131976),(-7.801044,9.114794),(-7.759392,9.091668),(-7.746783,9.07645),(-7.748203,9.075403),(-7.764198,9.063608),(-7.788073,9.060042),(-7.807813,9.060042),(-7.827037,9.05782),(-7.916747,9.012345),(-7.931785,8.996558),(-7.931837,8.996558),(-7.931837,8.996455),(-7.931888,8.996455),(-7.941604,8.972218),(-7.946461,8.948602),(-7.950699,8.899535),(-7.96894,8.812848),(-7.958915,8.781997),(-7.91313,8.768226),(-7.890961,8.765693),(-7.831016,8.747658),(-7.826572,8.748227),(-7.816056,8.753524),(-7.81019,8.753369),(-7.803679,8.749054),(-7.795695,8.73647),(-7.790915,8.730941),(-7.772053,8.714534),(-7.765129,8.706085),(-7.741202,8.662909),(-7.731823,8.652083),(-7.703944,8.633479),(-7.696812,8.624849),(-7.691593,8.606142),(-7.69397,8.567773),(-7.691696,8.546663),(-7.679242,8.505089),(-7.679294,8.487726),(-7.686477,8.462585),(-7.689578,8.438659),(-7.683221,8.417317),(-7.662447,8.374865),(-7.720738,8.368534),(-7.748334,8.372307),(-7.77102,8.386518),(-7.802439,8.449201),(-7.823833,8.477856),(-7.845744,8.474316),(-7.849413,8.459201),(-7.846777,8.44269),(-7.848586,8.428634),(-7.865588,8.420882),(-7.882331,8.421451),(-7.898583,8.426283),(-7.912871,8.434189),(-7.923827,8.443775),(-7.931578,8.45703),(-7.936953,8.471422),(-7.943877,8.484625),(-7.95628,8.494366),(-7.970542,8.49695),(-7.98589,8.494935),(-8.015708,8.487209),(-8.024906,8.488604),(-8.042476,8.487157),(-8.050667,8.487674),(-8.057204,8.490852),(-8.068107,8.500464),(-8.076634,8.503255),(-8.096891,8.501704),(-8.130119,8.488604),(-8.150531,8.486486),(-8.186343,8.493695),(-8.200296,8.491834),(-8.248872,8.453568),(-8.254401,8.446514),(-8.254298,8.434499),(-8.249078,8.423776),(-8.242102,8.412769),(-8.236883,8.400083),(-8.235281,8.379515),(-8.247631,8.322568),(-8.247631,8.303835),(-8.245306,8.285077),(-8.248252,8.273579),(-8.264581,8.2565),(-8.265977,8.245984),(-8.182932,8.191672),(-8.085522,8.162035),(-8.075962,8.163379),(-8.015708,8.183894),(-8.003305,8.184049),(-8.002427,8.174308),(-8.008266,8.160433),(-8.015708,8.14816),(-8.021702,8.117723),(-8.021289,8.102659),(-8.015708,8.088474),(-7.997052,8.077338),(-7.980413,8.058992),(-7.968217,8.03703),(-7.965638,8.026927),(-7.962636,8.015171),(-7.980361,8.016566),(-7.992505,8.024162),(-8.002168,8.032431),(-8.011987,8.035557),(-8.053845,8.028762),(-8.069089,8.029278),(-8.067229,8.020752),(-8.068883,7.996567),(-8.079115,7.957655),(-8.128879,7.882569),(-8.131204,7.842416),(-8.123763,7.83203),(-8.101439,7.812806),(-8.096323,7.801902),(-8.098261,7.791877),(-8.103919,7.786348),(-8.110482,7.78242),(-8.115133,7.776891),(-8.119009,7.760819),(-8.122626,7.727488),(-8.12738,7.713019),(-8.159523,7.676328),(-8.160195,7.672194),(-8.157404,7.662376),(-8.158024,7.658242),(-8.162107,7.654159),(-8.173424,7.647803),(-8.177868,7.643876),(-8.197867,7.618554),(-8.202621,7.607185),(-8.203035,7.599589),(-8.199779,7.583363),(-8.200244,7.576335),(-8.228976,7.544295),(-8.265356,7.560056),(-8.303235,7.589305),(-8.336773,7.597832),(-8.372275,7.58729),(
-8.390878,7.585791),(-8.414805,7.612921),(-8.440023,7.599899),(-8.485446,7.557989),(-8.515625,7.577161),(-8.54782,7.600777),(-8.566527,7.623412),(-8.57278,7.650749),(-8.567405,7.688214),(-8.686571,7.694364),(-8.694477,7.674416),(-8.720677,7.642739),(-8.727576,7.622017),(-8.730031,7.587445),(-8.731478,7.581761),(-8.737369,7.571425),(-8.739178,7.565689),(-8.738041,7.559488),(-8.73401,7.557679),(-8.729514,7.556387),(-8.726827,7.55184),(-8.725742,7.528534),(-8.729617,7.50869),(-8.739539,7.490552),(-8.756489,7.472413),(-8.784394,7.455722),(-8.790389,7.450812),(-8.793955,7.441511),(-8.795453,7.421099),(-8.799226,7.41221),(-8.807701,7.40451),(-8.82403,7.399808),(-8.83341,7.393865),(-8.851626,7.371489),(-8.860617,7.35764),(-8.865371,7.346891),(-8.863304,7.32989),(-8.855915,7.311855),(-8.851367,7.292889),(-8.857827,7.273046),(-8.868885,7.266638),(-8.91219,7.250153),(-8.925419,7.248603),(-8.944385,7.278678),(-8.9572,7.286481),(-8.962575,7.262969),(-8.973117,7.267671),(-8.977716,7.266224),(-8.981023,7.250566),(-8.985467,7.25005),(-9.007637,7.243487),(-9.015698,7.238887),(-9.032855,7.240283),(-9.046652,7.232738),(-9.059261,7.221369),(-9.073111,7.211344),(-9.086392,7.20721),(-9.100448,7.204833),(-9.106085,7.202975),(-9.113935,7.200388),(-9.125407,7.190208),(-9.116054,7.224676),(-9.11962,7.237957),(-9.139153,7.247879),(-9.146905,7.25408),(-9.147473,7.271392),(-9.15308,7.275474),(-9.17543,7.274596),(-9.181476,7.275681),(-9.206281,7.298367),(-9.219872,7.360999),(-9.234341,7.381308),(-9.249715,7.383065),(-9.283796,7.374073),(-9.299505,7.376605),(-9.3055,7.383478),(-9.312424,7.403322),(-9.319401,7.411952),(-9.327772,7.417068),(-9.335007,7.419652),(-9.349218,7.421564),(-9.365961,7.421099),(-9.375159,7.41929),(-9.379242,7.415828),(-9.381412,7.404975),(-9.386477,7.401668),(-9.392264,7.39986),(-9.407354,7.378052),(-9.418723,7.387767),(-9.438101,7.421564),(-9.449263,7.405079),(-9.478357,7.376915),(-9.486626,7.372833),(-9.487099,7.373204),(-9.490708,7.376037),(-9.479701,7.38539),(-9.4735,7.39557),(-9.470864,7.407353),(-9.470244,7.420995),(-9.466213,7.433449),(-9.456705,7.443061),(-9.436086,7.458564),(-9.4134,7.492257),(-9.404925,7.513599),(-9.408697,7.527449),(-9.390817,7.548223),(-9.38366,7.55954),(-9.378777,7.586256),(-9.372989,7.601346),(-9.364411,7.614265),(-9.353507,7.62243),(-9.374694,7.646925),(-9.376245,7.680566),(-9.366374,7.716171),(-9.355471,7.741906),(-9.380999,7.77441),(-9.389681,7.791464),(-9.415674,7.824433),(-9.43867,7.866239),(-9.448437,7.907839),(-9.446008,7.950058),(-9.433093,7.993468),(-9.431745,7.996567),(-9.424562,8.005817),(-9.42234,8.015791),(-9.424149,8.026178),(-9.429265,8.036901),(-9.440944,8.050569),(-9.451537,8.051758),(-9.462183,8.043464),(-9.474223,8.028813),(-9.472776,8.040363),(-9.461873,8.058424),(-9.459805,8.068785),(-9.478047,8.141287),(-9.478099,8.152785),(-9.481406,8.164697),(-9.491328,8.169968),(-9.515616,8.174877),(-9.530137,8.185496),(-9.531894,8.212316),(-9.525435,8.24079),(-9.515616,8.256603),(-9.51081,8.261047),(-9.509156,8.265595),(-9.510758,8.270065),(-9.515616,8.274406),(-9.521249,8.284405),(-9.525124,8.295438),(-9.530189,8.30598),(-9.539387,8.314584),(-9.528483,8.32572),(-9.522282,8.330681),(-9.515616,8.334376),(-9.511482,8.336133),(-9.499131,8.343471),(-9.504351,8.346158),(-9.511482,8.353781),(-9.529052,8.366312),(-9.579178,8.387293),(-9.592045,8.405095),(-9.600159,8.410883),(-9.607342,8.402925),(-9.612148,8.396336),(-9.619537,8.392719),(-9.628064,8.391634),(-9.636125,8.392564),(-9.653385,8.392099),(-9.668372,8.38874),(-9.67726,8.39153),(-9.675916,8.409643),(-9.6665
11,8.422381),(-9.653385,8.43145),(-9.645117,8.443413),(-9.650543,8.464807),(-9.663514,8.477856),(-9.683513,8.487054),(-9.703253,8.487261),(-9.7155,8.473334),(-9.717981,8.452689),(-9.718188,8.438272),(-9.722115,8.435636),(-9.735913,8.450493),(-9.7463,8.468063),(-9.750899,8.485866),(-9.757307,8.53271),(-9.763043,8.54997),(-9.773585,8.563018),(-9.790586,8.565396),(-9.790018,8.553071),(-9.794824,8.544544),(-9.802007,8.53718),(-9.808854,8.528473),(-9.808828,8.520463),(-9.805469,8.512789),(-9.806813,8.506252),(-9.831617,8.499017),(-9.849704,8.48969),(-9.858644,8.487054),(-9.863502,8.49354),(-9.868153,8.498087),(-9.872804,8.500361),(-9.886911,8.49664),(-9.891872,8.495917),(-9.911354,8.495658),(-9.928149,8.493436),(-9.944169,8.488346),(-9.961119,8.479561),(-10.000186,8.449305),(-10.04039,8.425663),(-10.061681,8.422174),(-10.075013,8.426644),(-10.079354,8.439951),(-10.069277,8.486873),(-10.076099,8.501498),(-10.091085,8.510102),(-10.147981,8.524287),(-10.164879,8.524649),(-10.181829,8.519584),(-10.215625,8.486589),(-10.23242,8.477055),(-10.240275,8.493746),(-10.260842,8.485917),(-10.271332,8.484109),(-10.282236,8.484625),(-10.293708,8.483075),(-10.302235,8.485866),(-10.310245,8.490361),(-10.337581,8.498552),(-10.344299,8.499069),(-10.374737,8.49478),(-10.389775,8.489095),(-10.402384,8.481008),(-10.414062,8.470182),(-10.423778,8.456565),(-10.447962,8.407627),(-10.46021,8.39215),(-10.519276,8.33479),(-10.554364,8.311845),(-10.572657,8.304585),(-10.583251,8.30691),(-10.61188,8.325255),(-10.626711,8.331508),(-10.659784,8.333265),(-10.688154,8.324067),(-10.707895,8.303138),(-10.715181,8.2696),(-10.735283,8.287299),(-10.725,8.319855),(-10.702004,8.353574),(-10.683813,8.374865),(-10.669086,8.399075),(-10.654668,8.485504),(-10.643868,8.486873),(-10.641077,8.49478),(-10.64118,8.50664),(-10.638855,8.519895),(-10.631879,8.532013),(-10.616479,8.546508),(-10.600511,8.575653),(-10.584646,8.596841),(-10.565423,8.612783),(-10.545166,8.616762),(-10.522118,8.618571),(-10.503308,8.631567),(-10.489665,8.651049),(-10.481862,8.672469),(-10.524753,8.729546),(-10.530231,8.739158),(-10.545321,8.750036),(-10.5863,8.804089),(-10.589091,8.806595),(-10.596274,8.811246),(-10.599684,8.81538),(-10.600201,8.819024),(-10.599116,8.829617),(-10.599374,8.834036),(-10.608469,8.862303),(-10.610123,8.872302),(-10.61188,8.94514),(-10.607126,8.979841),(-10.591778,9.039062),(-10.592179,9.041658),(-10.593101,9.047617),(-10.594103,9.0541),(-10.605989,9.063763),(-10.629346,9.070352),(-10.669086,9.074434),(-10.710427,9.072987),(-10.746342,9.083814),(-10.753473,9.120116),(-10.74257,9.163292),(-10.724741,9.194504),(-10.714974,9.201352),(-10.703761,9.206313),(-10.693167,9.21285),(-10.685209,9.224503),(-10.683917,9.233753),(-10.687948,9.250703),(-10.688051,9.259281),(-10.683452,9.271812),(-10.677664,9.282897),(-10.67384,9.293982),(-10.675132,9.306461),(-10.717765,9.343333),(-10.728203,9.34819),(-10.741536,9.357053),(-10.753473,9.367569),(-10.765462,9.385216),(-10.774764,9.387335),(-10.785409,9.387232),(-10.795228,9.388524),(-10.801687,9.387283),(-10.819361,9.388059),(-10.831763,9.392141),(-10.822151,9.4009),(-10.811454,9.408264),(-10.813418,9.413897),(-10.829076,9.423948),(-10.843959,9.440252),(-10.854862,9.455264),(-10.8591,9.472989),(-10.853674,9.497871),(-10.852589,9.512082),(-10.860185,9.518542),(-10.870572,9.523399),(-10.877703,9.532675),(-10.882819,9.560167),(-10.886333,9.568409),(-10.896049,9.578305),(-10.92168,9.595901),(-10.92907,9.605694),(-10.935788,9.631894),(-10.940129,9.639697),(-10.94664,9.644839),(-10.962918,9.650782),(-10.968499,9.6
54244),(-10.974597,9.663778),(-10.986327,9.697755),(-11.010098,9.74274),(-11.044928,9.78713),(-11.085959,9.826171),(-11.16027,9.872758),(-11.16766,9.884462),(-11.181251,9.950376),(-11.188537,9.972597),(-11.199596,9.982157),(-11.213445,9.983371),(-11.247293,9.993396),(-11.272666,9.996006),(-11.484875,9.994845),(-11.910095,9.992518),(-11.914953,9.931488),(-11.921774,9.922212),(-12.109807,9.881676),(-12.141347,9.874876),(-12.158349,9.8756),(-12.174834,9.879191),(-12.193644,9.886064),(-12.217389,9.90056),(-12.235605,9.91676),(-12.253588,9.928413),(-12.276739,9.929266),(-12.426989,9.897614),(-12.472335,9.881258),(-12.508327,9.860381),(-12.514322,9.851312),(-12.514994,9.845111),(-12.513728,9.838341),(-12.513624,9.827825),(-12.527267,9.753618),(-12.526647,9.75062),(-12.522874,9.739432),(-12.522461,9.734472),(-12.525561,9.732069),(-12.530729,9.732844),(-12.53569,9.732792),(-12.537654,9.727934),(-12.537473,9.715713),(-12.538765,9.711915),(-12.54202,9.71132),(-12.573078,9.692562),(-12.582251,9.683364),(-12.59858,9.654864),(-12.601552,9.651712),(-12.602766,9.609363),(-12.606823,9.600707),(-12.614858,9.600087),(-12.624599,9.601276),(-12.633849,9.597865),(-12.638552,9.589416),(-12.647285,9.556627),(-12.649921,9.552105),(-12.652065,9.549444),(-12.655269,9.547739),(-12.660954,9.546033),(-12.668628,9.542364),(-12.669274,9.53823),(-12.66731,9.533709),(-12.667517,9.528696),(-12.701597,9.420227),(-12.706196,9.413948),(-12.713121,9.408161),(-12.716454,9.404414),(-12.719038,9.398368),(-12.72294,9.392089),(-12.727849,9.393123),(-12.732939,9.396508),(-12.737228,9.397231),(-12.754075,9.3878),(-12.759759,9.382839),(-12.767795,9.372323),(-12.767562,9.364391),(-12.763557,9.355347),(-12.764152,9.347673),(-12.777923,9.343849),(-12.795261,9.33509),(-12.819135,9.29579),(-12.835827,9.281967),(-12.856885,9.278505),(-12.876109,9.279848),(-12.895642,9.278505),(-12.917527,9.267007),(-12.937009,9.287083),(-12.957912,9.272433),(-12.973751,9.241582),(-12.97835,9.21285),(-12.973932,9.202799),(-12.960212,9.186081),(-12.959153,9.177839),(-12.962796,9.176702),(-12.97848,9.175203),(-12.984164,9.172309),(-12.993802,9.158693),(-13.002044,9.14412),(-13.015247,9.112752),(-13.028864,9.095983),(-13.047003,9.084537),(-13.085579,9.065985),(-13.115913,9.043945),(-13.131054,9.041465),(-13.153844,9.04932),(-13.191774,9.072987),(-13.207535,9.076708),(-13.233632,9.072057),(-13.277376,9.058595),(-13.301096,9.04149),(-13.312367,9.048326),(-13.321604,9.056627),(-13.325795,9.067776),(-13.322174,9.083075),(-13.319447,9.083564),(-13.304514,9.083075),(-13.300649,9.084621),(-13.300933,9.088202),(-13.302154,9.092515),(-13.301096,9.096137),(-13.273793,9.12401),(-13.254872,9.154934),(-13.243072,9.169745),(-13.225942,9.179267),(-13.208241,9.18008),(-13.176137,9.173651),(-13.157704,9.179267),(-13.157704,9.185492),(-13.218495,9.183254),(-13.247304,9.186021),(-13.259511,9.196357),(-13.25768,9.200263),(-13.248891,9.208319),(-13.245839,9.213446),(-13.245839,9.226467),(-13.237904,9.246649),(-13.232818,9.267401),(-13.254872,9.244818),(-13.264556,9.220445),(-13.270375,9.194037),(-13.280629,9.165025),(-13.304921,9.175482),(-13.315297,9.182074),(-13.322174,9.192939),(-13.313588,9.197943),(-13.311635,9.207221),(-13.314687,9.233303),(-13.313588,9.23314),(-13.311187,9.237982),(-13.308949,9.244778),(-13.307932,9.250312),(-13.310048,9.253811),(-13.319651,9.259263),(-13.322174,9.261217),(-13.325063,9.281317),(-13.325347,9.311021),(-13.318674,9.329495),(-13.301096,9.315863),(-13.303822,9.337795),(-13.303375,9.347113),(-13.301096,9.356838),(-13.307932,9.356838),(-13.313954,9.
349555),(-13.322174,9.345404),(-13.332834,9.343573),(-13.345774,9.343166),(-13.352854,9.338772),(-13.357818,9.328437),(-13.367909,9.282172),(-13.374501,9.272773),(-13.397532,9.28384),(-13.407786,9.287014),(-13.411977,9.293402),(-13.404124,9.308417),(-13.414052,9.317206),(-13.423818,9.332913),(-13.430816,9.352037),(-13.431996,9.37108),(-13.42516,9.389879),(-13.40331,9.418158),(-13.396637,9.439358),(-13.404124,9.439358),(-13.428375,9.420722),(-13.465403,9.418158),(-13.499176,9.428778),(-13.513987,9.449612),(-13.520863,9.479478),(-13.520741,9.495103),(-13.510243,9.511054),(-13.50064,9.521796),(-13.493235,9.534857),(-13.488393,9.550605),(-13.486684,9.569078),(-13.49352,9.569078),(-13.515614,9.524482),(-13.529042,9.505683),(-13.547475,9.50019),(-13.561391,9.508979),(-13.575266,9.527248),(-13.58552,9.54914),(-13.58845,9.569078),(-13.619008,9.556383),(-13.643259,9.542548),(-13.677968,9.522691),(-13.7058,9.514472),(-13.707102,9.512152),(-13.714345,9.507636),(-13.722401,9.505805),(-13.726226,9.511054),(-13.72411,9.519517),(-13.718902,9.522284),(-13.71231,9.523668),(-13.7058,9.528062),(-13.673329,9.573798),(-13.652659,9.597154),(-13.640777,9.623725),(-13.625803,9.647284),(-13.623036,9.662909),(-13.624379,9.720282),(-13.623199,9.727362),(-13.61856,9.732082),(-13.604888,9.737372),(-13.602162,9.743801),(-13.595448,9.755032),(-13.580719,9.76203),(-13.566518,9.771064),(-13.561106,9.788235),(-13.581776,9.777167),(-13.594309,9.774237),(-13.604115,9.780178),(-13.647776,9.831488),(-13.651194,9.842841),(-13.655181,9.841742),(-13.658111,9.840522),(-13.664784,9.836615),(-13.653798,9.805243),(-13.650543,9.777004),(-13.661285,9.754543),(-13.692128,9.740383),(-13.728912,9.744086),(-13.747182,9.767768),(-13.749827,9.799872),(-13.739898,9.829169),(-13.698313,9.880764),(-13.681996,9.913479),(-13.685292,9.946479),(-13.693959,9.953803),(-13.697499,9.940416),(-13.699208,9.921332),(-13.702016,9.911689),(-13.712758,9.905178),(-13.732493,9.876776),(-13.746693,9.870754),(-13.745432,9.866523),(-13.749745,9.863471),(-13.784657,9.850246),(-13.788686,9.850043),(-13.809682,9.85163),(-13.827707,9.855373),(-13.842844,9.860785),(-13.849192,9.867621),(-13.855051,9.878852),(-13.86913,9.896226),(-13.886098,9.912177),(-13.900624,9.919135),(-13.927642,9.93594),(-13.933176,9.943345),(-13.945383,9.966946),(-13.954254,9.970933),(-13.971099,9.971829),(-13.979482,9.974351),(-13.98705,9.980414),(-14.006744,10.001044),(-14.021718,10.00727),(-14.032297,10.008979),(-14.042958,10.012844),(-14.058339,10.025295),(-14.061879,10.050116),(-14.046132,10.079495),(-14.020334,10.094428),(-13.993764,10.076158),(-14.001088,10.088202),(-14.014801,10.104804),(-14.028188,10.116604),(-14.034088,10.114),(-14.041615,10.101304),(-14.056752,10.106594),(-14.068349,10.118109),(-14.0655,10.124579),(-14.043813,10.133246),(-14.026967,10.153957),(-13.999908,10.199693),(-14.014516,10.192776),(-14.024892,10.183743),(-14.044667,10.162177),(-14.055491,10.155341),(-14.082834,10.14289),(-14.089345,10.138251),(-14.091176,10.126044),(-14.089345,10.079901),(-14.092275,10.06859),(-14.100575,10.058295),(-14.113433,10.051011),(-14.130279,10.048896),(-14.153188,10.05565),(-14.170481,10.070787),(-14.179799,10.090277),(-14.178782,10.110297),(-14.16275,10.126614),(-14.138173,10.144477),(-14.123647,10.162584),(-14.137766,10.179185),(-14.141672,10.164537),(-14.153147,10.156317),(-14.168609,10.152737),(-14.184926,10.151923),(-14.182485,10.129543),(-14.200185,10.112738),(-14.222401,10.106106),(-14.233388,10.114),(-14.236562,10.120917),(-14.24356,10.129381),(-14.2506,10.14053),(-14.253814,10.
155341),(-14.250478,10.162584),(-14.244049,10.168891),(-14.240061,10.175116),(-14.243886,10.182318),(-14.249257,10.184272),(-14.25475,10.182847),(-14.259023,10.179592),(-14.26065,10.175767),(-14.270131,10.159735),(-14.292104,10.17178),(-14.352366,10.226223),(-14.362701,10.231187),(-14.376698,10.234442),(-14.391835,10.233385),(-14.405507,10.229397),(-14.418568,10.227973),(-14.431956,10.234442),(-14.452545,10.214586),(-14.462636,10.225165),(-14.465891,10.249701),(-14.466135,10.27204),(-14.458241,10.304267),(-14.458608,10.316962),(-14.463043,10.330024),(-14.468617,10.336249),(-14.475331,10.340318),(-14.537262,10.403632),(-14.548004,10.425605),(-14.550893,10.446601),(-14.550526,10.472113),(-14.545522,10.495022),(-14.535593,10.507473),(-14.535024,10.508205),(-14.53543,10.508165),(-14.551422,10.506781),(-14.562408,10.499335),(-14.571604,10.489447),(-14.582834,10.480862),(-14.596425,10.475979),(-14.613515,10.473375),(-14.628977,10.476264),(-14.637441,10.487738),(-14.644276,10.487738),(-14.651194,10.475084),(-14.657786,10.47956),(-14.662709,10.494127),(-14.664703,10.511623),(-14.663808,10.529202),(-14.661122,10.542629),(-14.650461,10.570258),(-14.639394,10.566636),(-14.625966,10.577379),(-14.611399,10.592963),(-14.596425,10.603746),(-14.620595,10.604315),(-14.628651,10.617255),(-14.627431,10.635484),(-14.615224,10.691311),(-14.607289,10.70539),(-14.596425,10.699368),(-14.59024,10.699368),(-14.588287,10.714301),(-14.580719,10.722073),(-14.572418,10.726304),(-14.568512,10.731024),(-14.555491,10.775702),(-14.531321,10.823472),(-14.517445,10.829047),(-14.509674,10.842475),(-14.508372,10.858466),(-14.513905,10.871894),(-14.509999,10.876451),(-14.500234,10.891099),(-14.519928,10.885403),(-14.522572,10.877631),(-14.519154,10.866523),(-14.521352,10.850816),(-14.554921,10.827623),(-14.558909,10.823472),(-14.573476,10.812405),(-14.590403,10.787258),(-14.60436,10.760443),(-14.615956,10.728013),(-14.629709,10.706448),(-14.657867,10.672675),(-14.686106,10.648179),(-14.699818,10.643134),(-14.705719,10.65526),(-14.703725,10.670844),(-14.69811,10.68183),(-14.689443,10.688951),(-14.678334,10.693101),(-14.68578,10.694729),(-14.691762,10.694485),(-14.697865,10.69184),(-14.705719,10.686347),(-14.71935,10.699368),(-14.706899,10.739203),(-14.699086,10.756252),(-14.688385,10.771959),(-14.6822,10.777777),(-14.66808,10.787095),(-14.661285,10.792792),(-14.655995,10.800605),(-14.65331,10.807603),(-14.651357,10.809963),(-14.656565,10.811957),(-14.664703,10.816636),(-14.677968,10.799628),(-14.723134,10.765123),(-14.734771,10.748114),(-14.746449,10.710761),(-14.760325,10.699368),(-14.770009,10.718817),(-14.782704,10.731269),(-14.792836,10.745022),(-14.795074,10.768256),(-14.781606,10.767768),(-14.770009,10.776923),(-14.760813,10.790432),(-14.75414,10.802965),(-14.749257,10.816107),(-14.746083,10.831204),(-14.744985,10.844062),(-14.746693,10.850816),(-14.743886,10.856594),(-14.742421,10.860989),(-14.740468,10.871894),(-14.758901,10.858466),(-14.765289,10.850328),(-14.76773,10.840888),(-14.772572,10.836493),(-14.801259,10.823472),(-14.808705,10.823472),(-14.809682,10.834784),(-14.809071,10.845608),(-14.806508,10.855455),(-14.801259,10.864447),(-14.807281,10.873603),(-14.807973,10.882473),(-14.803822,10.890815),(-14.795074,10.898586),(-14.813873,10.900336),(-14.819732,10.912014),(-14.816762,10.927191),(-14.808705,10.939602),(-14.796132,10.947577),(-14.778635,10.954088),(-14.759918,10.958441),(-14.74356,10.960028),(-14.73176,10.966457),(-14.724965,10.980536),(-14.720611,10.994574),(-14.716298,11.000963),(-14.699452,11.00552),(-14.6
80776,11.027493),(-14.664703,11.035793),(-14.672231,11.041449),(-14.677235,11.047593),(-14.68517,11.063666),(-14.687001,11.054267),(-14.690094,11.045233),(-14.694976,11.038479),(-14.70226,11.035793),(-14.707875,11.034491),(-14.723541,11.028266),(-14.726796,11.025824),(-14.730824,11.005845),(-14.741526,10.989081),(-14.756825,10.981187),(-14.774566,10.987982),(-14.779449,10.983303),(-14.784576,10.981106),(-14.791412,10.980373),(-14.801259,10.980536),(-14.795481,10.999701),(-14.801259,11.011908),(-14.810658,11.014472),(-14.815541,11.004706),(-14.843495,10.967475),(-14.874623,10.973293),(-14.894439,11.03026),(-14.918568,11.028957),(-14.901479,11.011664),(-14.898793,10.993232),(-14.904124,10.975491),(-14.911122,10.960028),(-14.931956,10.975653),(-14.943918,10.980048),(-14.959584,10.980536),(-14.926381,10.955552),(-14.914418,10.942328),(-14.911122,10.92593),(-14.920888,10.90766),(-14.939809,10.889106),(-14.958119,10.876207),(-14.96642,10.874986),(-14.968495,10.877875),(-14.973215,10.878852),(-14.977895,10.877875),(-14.980051,10.874986),(-14.978993,10.868842),(-14.974233,10.85928),(-14.973215,10.854193),(-14.973215,10.820054),(-14.968984,10.814358),(-14.959584,10.809963),(-14.950185,10.803778),(-14.945912,10.792792),(-14.947662,10.787584),(-14.959584,10.768256),(-14.977528,10.777045),(-15.013905,10.78262),(-15.031565,10.792792),(-15.052602,10.814399),(-15.069447,10.837144),(-15.079457,10.85871),(-15.081125,10.878241),(-15.07433,10.896877),(-15.058909,10.915961),(-15.051625,10.919623),(-15.03189,10.925482),(-15.027821,10.929348),(-15.026682,10.936672),(-15.02359,10.94359),(-15.019399,10.94953),(-15.014801,10.953803),(-15.014801,10.960028),(-15.019276,10.959703),(-15.020334,10.961371),(-15.020131,10.964179),(-15.020985,10.967475),(-14.992594,10.996203),(-14.992594,10.996358),(-14.959056,11.035762),(-14.863145,11.218645),(-14.843249,11.297193),(-14.828212,11.334787),(-14.728993,11.478009),(-14.711061,11.497568),(-14.686773,11.51072),(-14.64538,11.515061),(-14.562259,11.505604),(-14.520944,11.51289),(-14.432268,11.560575),(-14.339223,11.61061),(-14.326485,11.620584),(-14.319793,11.630506),(-14.308218,11.654535),(-14.300389,11.663165),(-14.289692,11.66885),(-14.284472,11.668178),(-14.278736,11.665077),(-14.19502,11.659703),(-14.171973,11.662752),(-14.161844,11.660943),(-14.153085,11.654845),(-14.144481,11.647404),(-14.134921,11.641771),(-14.118565,11.637999),(-14.104664,11.638567),(-14.07464,11.643735),(-14.058827,11.644045),(-14.015135,11.63712),(-13.996454,11.643838),(-13.953355,11.669676),(-13.945423,11.675671),(-13.923745,11.665026),(-13.903643,11.665387),(-13.886409,11.675568),(-13.873515,11.694533),(-13.870673,11.704145),(-13.868529,11.727037),(-13.87305,11.738354),(-13.870053,11.743677),(-13.855041,11.743419),(-13.846282,11.737269),(-13.842975,11.71727),(-13.834551,11.712103),(-13.820134,11.708072),(-13.819514,11.701664),(-13.821684,11.694223),(-13.815767,11.686833),(-13.795691,11.684042),(-13.770007,11.688693),(-13.745771,11.698202),(-13.729932,11.709829),(-13.724765,11.72311),(-13.719571,11.766105),(-13.719365,11.784036),(-13.734635,11.901859),(-13.731328,11.943406),(-13.722594,11.983869),(-13.723938,12.002783),(-13.734919,12.019267),(-13.744428,12.023918),(-13.779645,12.029344),(-13.795897,12.036269),(-13.807757,12.044279),(-13.829642,12.068102),(-13.835197,12.077765),(-13.837729,12.085413),(-13.842406,12.091304),(-13.869717,12.10071),(-13.879768,12.107686),(-13.899302,12.126238),(-13.918396,12.137762),(-13.938654,12.144273),(-13.964853,12.14789),(-13.972527,12.151559),(-13.968367,12.162566),
(-13.968264,12.18396),(-13.965732,12.194244),(-13.943925,12.218635),(-13.883153,12.246437),(-13.858968,12.265971),(-13.840003,12.277133),(-13.812641,12.283592),(-13.795691,12.278735),(-13.808558,12.256101),(-13.740836,12.256927),(-13.71182,12.267521),(-13.695025,12.292946),(-13.69102,12.319301),(-13.68301,12.33868),(-13.682028,12.350307),(-13.685155,12.379194),(-13.684147,12.39201),(-13.682028,12.400329),(-13.664433,12.441516),(-13.660712,12.458672),(-13.663399,12.475777),(-13.67469,12.496086),(-13.725075,12.558149),(-13.734635,12.585331),(-13.736314,12.607604),(-13.728279,12.673388),(-13.404758,12.661761),(-13.359801,12.649737),(-13.356648,12.648893),(-13.352049,12.646981),(-13.343444,12.640987),(-13.338613,12.63923)] +Gambia [(-14.74093,13.615449),(-14.728579,13.619325),(-14.667188,13.652889),(-14.626674,13.663508),(-14.585539,13.660459),(-14.542596,13.640822),(-14.523243,13.625655),(-14.50973,13.609093),(-14.500221,13.589973),(-14.486863,13.546539),(-14.480378,13.533129),(-14.470275,13.522354),(-14.462135,13.516656),(-14.389582,13.465872),(-14.365553,13.454839),(-14.347285,13.452358),(-14.329947,13.456441),(-14.271217,13.476956),(-14.215794,13.510753),(-14.196829,13.518788),(-14.139132,13.531552),(-14.092882,13.554729),(-14.072599,13.55969),(-14.06185,13.560052),(-14.044461,13.558605),(-14.034022,13.560207),(-14.022266,13.565323),(-14.001905,13.577028),(-13.988495,13.579146),(-13.967489,13.575581),(-13.94209,13.566925),(-13.918086,13.554988),(-13.901343,13.541629),(-13.896175,13.532922),(-13.889793,13.513957),(-13.8846,13.505533),(-13.882504,13.503782),(-13.875479,13.497911),(-13.852793,13.484811),(-13.84269,13.476646),(-13.818713,13.429362),(-13.822562,13.378203),(-13.851036,13.33575),(-13.900568,13.314563),(-13.976119,13.308207),(-14.016272,13.297639),(-14.056579,13.297174),(-14.099496,13.281955),(-14.116808,13.282162),(-14.131794,13.275573),(-14.179827,13.240123),(-14.201687,13.229555),(-14.230625,13.228574),(-14.284911,13.238547),(-14.341781,13.233638),(-14.368834,13.235705),(-14.394982,13.242965),(-14.43144,13.26144),(-14.442034,13.268545),(-14.45149,13.276865),(-14.459139,13.286916),(-14.470326,13.298259),(-14.483633,13.301928),(-14.515104,13.303168),(-14.529574,13.307277),(-14.542854,13.314666),(-14.552109,13.323094),(-14.580733,13.34916),(-14.592154,13.35314),(-14.624555,13.345181),(-14.661607,13.340918),(-14.698142,13.345181),(-14.731577,13.359005),(-14.759482,13.38337),(-14.779636,13.410888),(-14.790333,13.417348),(-14.810797,13.421172),(-14.829142,13.427011),(-14.862576,13.447061),(-14.877614,13.451376),(-14.915907,13.454425),(-14.949703,13.463081),(-15.015074,13.495741),(-15.015229,13.495844),(-15.015435,13.495999),(-15.01559,13.496051),(-15.065251,13.531268),(-15.110158,13.57248),(-15.13765,13.589973),(-15.160491,13.580981),(-15.181331,13.559843),(-15.203951,13.536901),(-15.217955,13.51468),(-15.221728,13.452823),(-15.227774,13.425176),(-15.248341,13.398977),(-15.277073,13.380296),(-15.309836,13.368436),(-15.342392,13.36257),(-15.379237,13.362519),(-15.488326,13.385256),(-15.519539,13.3866),(-15.542948,13.377996),(-15.566048,13.365671),(-15.596123,13.355827),(-15.612453,13.354225),(-15.679891,13.360658),(-15.6925,13.360348),(-15.737407,13.346344),(-15.807118,13.339781),(-15.818694,13.333528),(-15.822673,13.319421),(-15.824367,13.248512),(-15.82474,13.232914),(-15.8266,13.161678),(-15.833318,13.156847),(-15.870784,13.157157),(-15.897441,13.157399),(-15.961734,13.157984),(-16.092062,13.15912),(-16.313289,13.161084),(-16.465218,13.162428),(-16.630014,13.1639),(-16.673525,13.164
314),(-16.708407,13.156692),(-16.723186,13.132249),(-16.726339,13.122637),(-16.742152,13.107392),(-16.748508,13.099227),(-16.752125,13.088039),(-16.753651,13.065009),(-16.76887,13.077826),(-16.784332,13.082994),(-16.786733,13.090074),(-16.782541,13.098334),(-16.775502,13.112738),(-16.777333,13.127834),(-16.796742,13.177232),(-16.797475,13.189114),(-16.795033,13.235907),(-16.797353,13.248928),(-16.802968,13.262193),(-16.820058,13.281317),(-16.824452,13.294582),(-16.81664,13.310614),(-16.820302,13.315823),(-16.829701,13.338528),(-16.820709,13.349107),(-16.809967,13.379869),(-16.79955,13.386298),(-16.787343,13.389146),(-16.774322,13.396186),(-16.762929,13.404975),(-16.704457,13.475409),(-16.678822,13.496161),(-16.676747,13.485907),(-16.671132,13.478664),(-16.662506,13.473131),(-16.652252,13.468248),(-16.634633,13.477362),(-16.616851,13.478176),(-16.601389,13.472235),(-16.590728,13.461371),(-16.585317,13.443345),(-16.596303,13.440253),(-16.612782,13.439154),(-16.624257,13.427232),(-16.608713,13.413723),(-16.59553,13.385077),(-16.588368,13.355292),(-16.590728,13.338528),(-16.553863,13.295803),(-16.545766,13.290107),(-16.495188,13.287909),(-16.480824,13.283881),(-16.467763,13.275865),(-16.45995,13.268866),(-16.449696,13.264065),(-16.429351,13.262193),(-16.419667,13.255561),(-16.41393,13.239976),(-16.413482,13.222073),(-16.419423,13.208197),(-16.39977,13.224311),(-16.390859,13.224799),(-16.377797,13.215033),(-16.383656,13.225165),(-16.401926,13.243354),(-16.405751,13.246039),(-16.408274,13.258734),(-16.415273,13.267035),(-16.426015,13.270738),(-16.439931,13.269599),(-16.439931,13.276435),(-16.410512,13.275865),(-16.396799,13.277899),(-16.384633,13.283881),(-16.374745,13.263861),(-16.363271,13.268297),(-16.34968,13.282172),(-16.333404,13.290107),(-16.314809,13.295559),(-16.277943,13.319485),(-16.257965,13.324897),(-16.230133,13.316392),(-16.225901,13.296576),(-16.230214,13.273627),(-16.22761,13.256008),(-16.209706,13.251166),(-16.185455,13.258287),(-16.151275,13.276435),(-16.151275,13.283881),(-16.162262,13.282701),(-16.172515,13.279283),(-16.181752,13.273749),(-16.189809,13.265937),(-16.19929,13.259101),(-16.203521,13.26496),(-16.208079,13.292385),(-16.208608,13.303534),(-16.210764,13.3133),(-16.21703,13.317369),(-16.222035,13.31977),(-16.22525,13.325832),(-16.227122,13.333686),(-16.22761,13.341946),(-16.224436,13.360297),(-16.216217,13.372748),(-16.176178,13.405422),(-16.168691,13.413316),(-16.165517,13.423814),(-16.161204,13.428697),(-16.151438,13.426947),(-16.135121,13.420478),(-15.994862,13.413031),(-15.941314,13.425442),(-15.922231,13.427232),(-15.911285,13.426703),(-15.905629,13.425767),(-15.900624,13.425523),(-15.891225,13.427232),(-15.883778,13.430406),(-15.878407,13.434801),(-15.874379,13.43887),(-15.871327,13.440904),(-15.849517,13.442206),(-15.785024,13.43476),(-15.764475,13.436916),(-15.706207,13.454576),(-15.696767,13.459866),(-15.678049,13.473049),(-15.668691,13.475735),(-15.658925,13.472398),(-15.644683,13.457913),(-15.634918,13.454576),(-15.620839,13.45482),(-15.610422,13.456448),(-15.600901,13.460435),(-15.589589,13.468248),(-15.580922,13.478176),(-15.565053,13.502509),(-15.555409,13.509223),(-15.543446,13.50967),(-15.530995,13.505845),(-15.519643,13.499661),(-15.510732,13.492743),(-15.500396,13.488186),(-15.489654,13.489936),(-15.47936,13.493964),(-15.470041,13.496161),(-15.45108,13.490912),(-15.438873,13.478583),(-15.429433,13.464748),(-15.418853,13.454576),(-15.384429,13.443508),(-15.340566,13.440823),(-15.304799,13.455308),(-15.294749,13.496161),(-15.299957,13.496161),(-15.301
91,13.496161),(-15.302154,13.496161),(-15.304067,13.481147),(-15.311391,13.467841),(-15.321848,13.458238),(-15.332875,13.454576),(-15.351674,13.456732),(-15.385976,13.466132),(-15.401479,13.468248),(-15.415761,13.473212),(-15.43224,13.484768),(-15.456451,13.50609),(-15.46524,13.510159),(-15.47411,13.508531),(-15.482818,13.505032),(-15.490834,13.502997),(-15.499867,13.505113),(-15.507558,13.509833),(-15.51358,13.514553),(-15.517893,13.516669),(-15.531728,13.519761),(-15.54304,13.525539),(-15.556142,13.528306),(-15.575836,13.522895),(-15.586008,13.515041),(-15.600697,13.49551),(-15.610666,13.489325),(-15.627756,13.488511),(-15.654652,13.500963),(-15.672109,13.502997),(-15.690175,13.498969),(-15.734202,13.475735),(-15.769887,13.465074),(-16.09024,13.440497),(-16.121897,13.451606),(-16.141591,13.454576),(-16.16039,13.450588),(-16.178456,13.440985),(-16.206532,13.420478),(-16.248443,13.378974),(-16.257965,13.372016),(-16.265614,13.36872),(-16.27302,13.362616),(-16.280588,13.359768),(-16.289133,13.365871),(-16.289703,13.373236),(-16.286733,13.383612),(-16.284901,13.395331),(-16.289133,13.406806),(-16.30602,13.391669),(-16.308095,13.383368),(-16.302113,13.372016),(-16.312123,13.366929),(-16.316396,13.365871),(-16.316396,13.359036),(-16.311635,13.357611),(-16.306996,13.35456),(-16.302113,13.352769),(-16.321889,13.342841),(-16.355946,13.337958),(-16.390533,13.336859),(-16.411977,13.338528),(-16.489491,13.357123),(-16.52184,13.359036),(-16.515777,13.367865),(-16.503529,13.392564),(-16.501332,13.403062),(-16.503529,13.416938),(-16.513417,13.436754),(-16.515696,13.444322),(-16.519765,13.451321),(-16.538197,13.46483),(-16.544504,13.475898),(-16.554433,13.481838),(-16.556549,13.485582),(-16.556549,13.50609),(-16.548004,13.548082),(-16.549143,13.56387),(-16.554799,13.577053),(-16.561391,13.5869),(-16.561399,13.586914),(-16.498756,13.586665),(-16.436641,13.586459),(-16.374577,13.5862),(-16.312462,13.585994),(-16.250295,13.585735),(-16.188232,13.585528),(-16.126117,13.58527),(-16.085383,13.585168),(-16.064054,13.585115),(-16.001938,13.584908),(-15.939772,13.58465),(-15.877708,13.584443),(-15.815593,13.584185),(-15.753426,13.583978),(-15.691363,13.58372),(-15.629196,13.583513),(-15.567133,13.583255),(-15.518195,13.5831),(-15.502486,13.588267),(-15.50047,13.627981),(-15.49803,13.641522),(-15.496801,13.648341),(-15.488946,13.670329),(-15.478611,13.691258),(-15.467501,13.708286),(-15.43634,13.741178),(-15.394069,13.771512),(-15.347766,13.788643),(-15.304462,13.781873),(-15.295108,13.773191),(-15.27573,13.748051),(-15.267255,13.741798),(-15.245964,13.746733),(-15.17062,13.793784),(-15.097704,13.819984),(-15.07631,13.818951),(-15.016779,13.796885),(-14.915803,13.792441),(-14.879268,13.780581),(-14.860314,13.765595),(-14.843973,13.752676),(-14.831312,13.735623),(-14.822372,13.7172),(-14.802942,13.65232),(-14.796017,13.644698),(-14.754934,13.620384),(-14.74093,13.615449)] +Guatemala 
[(-89.160496,17.814314),(-89.156217,17.598243),(-89.150728,17.321032),(-89.149205,17.036271),(-89.166067,16.777009),(-89.184353,16.495872),(-89.19314,16.392626),(-89.208517,16.186132),(-89.234135,15.954945),(-89.236022,15.906493),(-89.236512,15.893915),(-89.228244,15.880841),(-89.225996,15.884044),(-89.216539,15.888644),(-89.204835,15.892416),(-89.195765,15.892933),(-89.177446,15.900271),(-89.100913,15.896809),(-89.072388,15.901511),(-89.040323,15.901718),(-88.951878,15.879652),(-88.913971,15.893948),(-88.900014,15.887844),(-88.8662,15.860338),(-88.831614,15.868638),(-88.787343,15.859076),(-88.753285,15.839789),(-88.749501,15.818793),(-88.726796,15.815172),(-88.625966,15.753892),(-88.638295,15.708808),(-88.63622,15.702094),(-88.609242,15.702094),(-88.598378,15.706122),(-88.595204,15.714667),(-88.596344,15.722154),(-88.598704,15.723212),(-88.598988,15.727362),(-88.602366,15.730902),(-88.60558,15.735541),(-88.605458,15.743069),(-88.600453,15.744574),(-88.591176,15.743557),(-88.582183,15.745022),(-88.578196,15.753892),(-88.57433,15.769965),(-88.5655,15.780951),(-88.55606,15.789496),(-88.550282,15.798285),(-88.549062,15.813666),(-88.556711,15.837348),(-88.557688,15.852932),(-88.550282,15.852932),(-88.550649,15.842475),(-88.550282,15.83926),(-88.544057,15.83926),(-88.532135,15.846829),(-88.503041,15.841376),(-88.48884,15.846747),(-88.511952,15.86758),(-88.52481,15.877021),(-88.543284,15.885159),(-88.551015,15.895168),(-88.557688,15.906562),(-88.564524,15.914984),(-88.607574,15.938422),(-88.619862,15.953274),(-88.605458,15.969631),(-88.590199,15.961005),(-88.561838,15.949042),(-88.550282,15.942328),(-88.52123,15.912746),(-88.500234,15.901068),(-88.447906,15.85871),(-88.344838,15.811957),(-88.311025,15.782416),(-88.300404,15.777818),(-88.256337,15.740383),(-88.221547,15.725898),(-88.220937,15.725653),(-88.220922,15.725647),(-88.23262,15.721419),(-88.235746,15.712427),(-88.23492,15.701575),(-88.241586,15.688242),(-88.247761,15.695684),(-88.264995,15.688604),(-88.279697,15.680749),(-88.295588,15.675013),(-88.316672,15.6746),(-88.322304,15.667262),(-88.352561,15.616903),(-88.398992,15.572797),(-88.467434,15.52074),(-88.50002,15.495954),(-88.50002,15.495903),(-88.587379,15.4303),(-88.674195,15.365058),(-88.760899,15.299962),(-88.847027,15.235299),(-88.973427,15.140395),(-89.008283,15.124401),(-89.109827,15.091457),(-89.140523,15.076394),(-89.14694,15.071284),(-89.160806,15.060245),(-89.172769,15.04221),(-89.188582,14.996088),(-89.1691,14.978415),(-89.169152,14.95206),(-89.181528,14.923664),(-89.189215,14.913059),(-89.198814,14.899815),(-89.223619,14.879015),(-89.231448,14.867311),(-89.23243,14.8485),(-89.227676,14.834574),(-89.218968,14.82168),(-89.198814,14.79809),(-89.189409,14.783001),(-89.170547,14.738791),(-89.159669,14.725123),(-89.150523,14.717294),(-89.146213,14.710858),(-89.144502,14.708302),(-89.142694,14.691275),(-89.145226,14.683446),(-89.155845,14.672258),(-89.159411,14.665979),(-89.155225,14.615595),(-89.155742,14.597844),(-89.160393,14.581282),(-89.172149,14.570404),(-89.183699,14.569525),(-89.208814,14.577949),(-89.221087,14.580093),(-89.238114,14.577949),(-89.245685,14.572988),(-89.250542,14.565184),(-89.259612,14.554255),(-89.290927,14.528003),(-89.304854,14.513353),(-89.314802,14.495964),(-89.323044,14.487463),(-89.345265,14.48196),(-89.355497,14.475448),(-89.361983,14.4624),(-89.363791,14.446561),(-89.361621,14.415478),(-89.390611,14.435993),(-89.391697,14.43372),(-89.396864,14.426046),(-89.398182,14.445373),(-89.408362,14.441523),(-89.4311,14.418708),(-89.446835,14.415529),(-89.45
8127,14.419586),(-89.469418,14.425374),(-89.485283,14.427389),(-89.496148,14.422897),(-89.503033,14.420051),(-89.527761,14.391061),(-89.541351,14.381526),(-89.544245,14.400052),(-89.555175,14.410827),(-89.569877,14.412015),(-89.584062,14.401809),(-89.590573,14.385971),(-89.586284,14.374679),(-89.578972,14.364783),(-89.576078,14.35313),(-89.582408,14.343363),(-89.592434,14.336387),(-89.598351,14.32799),(-89.592175,14.31383),(-89.571272,14.311143),(-89.564373,14.308146),(-89.558586,14.302823),(-89.555692,14.299025),(-89.549878,14.288276),(-89.548586,14.2836),(-89.546778,14.268329),(-89.544917,14.261818),(-89.54055,14.257606),(-89.527864,14.251457),(-89.52466,14.247374),(-89.524505,14.231768),(-89.530939,14.225567),(-89.638322,14.200556),(-89.664626,14.188851),(-89.689844,14.170015),(-89.709868,14.148724),(-89.752605,14.07524),(-89.755447,14.067075),(-89.754465,14.059401),(-89.74801,14.044895),(-89.747256,14.043201),(-89.747644,14.037646),(-89.762268,14.029997),(-89.776944,14.035785),(-89.802963,14.055474),(-89.821102,14.060073),(-89.835855,14.059091),(-89.880116,14.042684),(-89.890788,14.035889),(-89.912078,14.015192),(-90.008171,13.949692),(-90.022976,13.936954),(-90.031477,13.922924),(-90.038091,13.908377),(-90.047367,13.894295),(-90.05964,13.884709),(-90.08708,13.870472),(-90.099044,13.858509),(-90.106976,13.844066),(-90.112299,13.828253),(-90.114779,13.812),(-90.114314,13.796265),(-90.092982,13.728903),(-90.098328,13.731414),(-90.245553,13.790726),(-90.401159,13.858588),(-90.500209,13.8947),(-90.580413,13.913267),(-90.638019,13.922358),(-90.707278,13.928889),(-90.784615,13.921538),(-90.796247,13.918161),(-90.910333,13.911135),(-91.04374,13.913585),(-91.165761,13.926703),(-91.316461,13.955679),(-91.551869,14.057278),(-91.637115,14.104438),(-91.762558,14.180174),(-91.908193,14.28384),(-92.049591,14.408026),(-92.246234,14.546283),(-92.226097,14.549346),(-92.208062,14.570507),(-92.186074,14.629573),(-92.164731,14.66815),(-92.160804,14.683498),(-92.161734,14.702334),(-92.179382,14.751581),(-92.180105,14.75879),(-92.178813,14.773389),(-92.179382,14.781244),(-92.188658,14.805893),(-92.190828,14.814239),(-92.191655,14.835297),(-92.186797,14.84726),(-92.166023,14.870825),(-92.15468,14.900797),(-92.157574,14.963041),(-92.154215,14.995933),(-92.142356,15.007483),(-92.129876,15.010945),(-92.116982,15.012392),(-92.103831,15.01787),(-92.093599,15.02942),(-92.073807,15.073706),(-92.20336,15.237159),(-92.21093,15.253359),(-92.211292,15.272066),(-92.204651,15.289507),(-92.186177,15.320384),(-92.137446,15.401826),(-92.105931,15.454469),(-92.068536,15.516935),(-91.9896,15.648968),(-91.910561,15.78095),(-91.841676,15.896085),(-91.792971,15.977579),(-91.774471,16.00843),(-91.751888,16.046308),(-91.739848,16.060985),(-91.723776,16.068788),(-91.678405,16.068891),(-91.54048,16.069149),(-91.402607,16.069305),(-91.264786,16.069511),(-91.126914,16.06977),(-90.994798,16.069869),(-90.988989,16.069873),(-90.851116,16.070131),(-90.713295,16.070338),(-90.575474,16.070596),(-90.485738,16.0707),(-90.466153,16.072922),(-90.448195,16.07926),(-90.448144,16.079278),(-90.447841,16.079687),(-90.444165,16.084652),(-90.437421,16.098863),(-90.434475,16.107183),(-90.440211,16.104858),(-90.442743,16.104186),(-90.444526,16.103256),(-90.448144,16.099742),(-90.455585,16.099742),(-90.459487,16.116795),(-90.435302,16.136277),(-90.448144,16.148214),(-90.448144,16.154416),(-90.42613,16.163149),(-90.432718,16.173226),(-90.461812,16.189142),(-90.465584,16.208107),(-90.458505,16.22175),(-90.446232,16.230845),(-90.434475,16.236323),(-90.447989
,16.24149),(-90.457833,16.249138),(-90.459435,16.25906),(-90.448144,16.271101),(-90.443725,16.266864),(-90.440625,16.265313),(-90.42768,16.263608),(-90.431426,16.268931),(-90.437524,16.280041),(-90.441297,16.285364),(-90.436749,16.284847),(-90.435509,16.286242),(-90.435509,16.288774),(-90.434475,16.291565),(-90.428558,16.287637),(-90.420238,16.285364),(-90.414063,16.287844),(-90.414011,16.298386),(-90.420187,16.296009),(-90.434734,16.301228),(-90.444888,16.30836),(-90.437886,16.31208),(-90.419463,16.317816),(-90.404684,16.331511),(-90.398793,16.347582),(-90.40719,16.36045),(-90.414011,16.36045),(-90.414011,16.353008),(-90.420807,16.353008),(-90.418223,16.367167),(-90.40874,16.372335),(-90.395124,16.371405),(-90.379853,16.367271),(-90.391377,16.385978),(-90.392695,16.391714),(-90.392902,16.40484),(-90.394917,16.419206),(-90.400601,16.420549),(-90.409464,16.416673),(-90.420807,16.415123),(-90.426698,16.417707),(-90.434114,16.42241),(-90.444371,16.426802),(-90.475842,16.430729),(-90.478426,16.436414),(-90.477082,16.444992),(-90.482302,16.456051),(-90.488219,16.46158),(-90.497753,16.46804),(-90.508915,16.471089),(-90.532893,16.459927),(-90.539068,16.468867),(-90.543719,16.490209),(-90.553409,16.490261),(-90.57935,16.482768),(-90.584724,16.480235),(-90.590616,16.483388),(-90.604672,16.486178),(-90.620743,16.486747),(-90.633094,16.483336),(-90.627874,16.492896),(-90.618986,16.500079),(-90.612061,16.507624),(-90.61263,16.518114),(-90.624154,16.524936),(-90.639502,16.525866),(-90.648183,16.529586),(-90.639967,16.544831),(-90.643532,16.561006),(-90.637021,16.577904),(-90.633765,16.592167),(-90.647356,16.600073),(-90.65025,16.593407),(-90.653609,16.5901),(-90.658828,16.588446),(-90.6672,16.586999),(-90.6672,16.5932),(-90.654023,16.61661),(-90.667717,16.652576),(-90.73531,16.746576),(-90.749779,16.757686),(-90.785591,16.774998),(-90.80037,16.786832),(-90.79696,16.798666),(-90.79696,16.805435),(-90.810602,16.812928),(-90.819956,16.805332),(-90.831738,16.805435),(-90.900623,16.826003),(-90.919071,16.836338),(-90.948165,16.864708),(-90.968887,16.874372),(-90.952764,16.890908),(-90.955193,16.898763),(-90.968835,16.901037),(-90.98625,16.901088),(-90.987956,16.896903),(-90.984287,16.887756),(-90.983253,16.878609),(-90.993072,16.874372),(-90.998343,16.877162),(-91.0109,16.890288),(-91.016688,16.894836),(-91.054928,16.908013),(-91.066866,16.918193),(-91.07131,16.938916),(-91.076943,16.949509),(-91.112238,16.990489),(-91.116113,16.999274),(-91.121229,17.018394),(-91.126552,17.024595),(-91.137146,17.026507),(-91.161227,17.023975),(-91.1712,17.028006),(-91.206754,17.065368),(-91.215952,17.079217),(-91.220241,17.089294),(-91.225357,17.107019),(-91.229595,17.113995),(-91.238793,17.119215),(-91.259205,17.124331),(-91.263133,17.130738),(-91.265148,17.146241),(-91.270832,17.164276),(-91.279514,17.180089),(-91.290418,17.189081),(-91.317755,17.190476),(-91.344523,17.186342),(-91.364884,17.189753),(-91.372945,17.213576),(-91.388345,17.221586),(-91.419247,17.227735),(-91.442502,17.237967),(-91.438264,17.253677),(-91.430771,17.254969),(-90.991986,17.25192),(-90.991444,17.526942),(-90.990901,17.801964),(-90.98284,17.810542),(-90.962221,17.81602),(-90.919123,17.81602),(-90.699239,17.815813),(-90.47946,17.815606),(-90.259654,17.815348),(-90.039771,17.815141),(-89.820017,17.814986),(-89.600159,17.814728),(-89.380302,17.814521),(-89.160496,17.814314)] +Guam 
[(144.886404,13.640204),(144.896658,13.617987),(144.914317,13.605414),(144.934418,13.59984),(144.951915,13.598578),(144.952159,13.586859),(144.937673,13.560533),(144.906993,13.516669),(144.871837,13.485175),(144.808279,13.445787),(144.783458,13.420478),(144.769379,13.379136),(144.760102,13.285631),(144.735606,13.249172),(144.718028,13.241848),(144.69752,13.241034),(144.680675,13.246772),(144.670421,13.264838),(144.656749,13.28205),(144.653575,13.287014),(144.647472,13.324897),(144.644298,13.33161),(144.639496,13.33747),(144.633637,13.342108),(144.626313,13.345364),(144.636567,13.355414),(144.643077,13.365709),(144.646495,13.378485),(144.647472,13.396226),(144.645518,13.404486),(144.640473,13.414781),(144.634776,13.423529),(144.624197,13.43122),(144.628673,13.438788),(144.636485,13.443101),(144.640636,13.437812),(144.65447,13.439154),(144.714854,13.473456),(144.72877,13.485582),(144.783458,13.516669),(144.794119,13.527086),(144.803233,13.53913),(144.811209,13.553616),(144.832205,13.620673),(144.838634,13.632758),(144.850434,13.643744),(144.86671,13.653022),(144.880626,13.65412),(144.886404,13.640204)] +Heard Island and McDonald Islands [(73.735118,-53.1124),(73.761892,-53.11891),(73.789073,-53.120538),(73.812185,-53.118504),(73.661632,-53.146661),(73.609548,-53.179864),(73.580821,-53.187595),(73.507335,-53.19256),(73.473399,-53.189223),(73.440196,-53.173272),(73.430431,-53.165297),(73.404145,-53.139337),(73.370942,-53.090509),(73.36378,-53.074802),(73.371267,-53.06463),(73.359711,-53.058852),(73.353282,-53.05169),(73.354177,-53.044203),(73.364431,-53.037286),(73.344737,-53.029229),(73.328868,-53.025079),(73.271007,-53.021742),(73.253673,-53.015069),(73.24171,-53.002211),(73.236013,-52.982029),(73.269379,-52.975274),(73.296886,-52.961602),(73.330903,-52.984796),(73.337169,-52.985772),(73.344574,-52.996515),(73.361095,-53.00921),(73.377452,-53.017022),(73.384939,-53.013116),(73.389171,-52.997817),(73.399587,-52.990981),(73.412934,-52.992852),(73.426524,-53.003188),(73.412852,-53.009373),(73.425629,-53.025323),(73.447032,-53.02809),(73.470225,-53.02158),(73.487966,-53.009373),(73.50766,-53.015313),(73.56422,-53.016778),(73.587576,-53.026788),(73.598481,-53.033868),(73.620942,-53.039646),(73.632579,-53.044122),(73.640147,-53.050714),(73.65561,-53.067966),(73.663341,-53.071466),(73.688975,-53.090265),(73.699474,-53.100844),(73.735118,-53.1124)] +Hungary 
[(20.981489,48.516859),(21.0065,48.518151),(21.035956,48.514637),(21.063861,48.506239),(21.084015,48.49301),(21.109336,48.489109),(21.186748,48.513707),(21.219924,48.518719),(21.238011,48.513448),(21.250413,48.506498),(21.261782,48.50319),(21.276561,48.50872),(21.28824,48.519934),(21.293925,48.530579),(21.302296,48.539907),(21.32183,48.54758),(21.338573,48.549854),(21.372679,48.550345),(21.424563,48.561275),(21.439135,48.558329),(21.472725,48.544997),(21.490812,48.540268),(21.49939,48.535075),(21.506005,48.526496),(21.5151,48.507118),(21.521818,48.50009),(21.537734,48.495232),(21.574631,48.495568),(21.591684,48.49301),(21.600366,48.481641),(21.613388,48.440352),(21.621553,48.429655),(21.67757,48.372346),(21.701238,48.353949),(21.727697,48.340901),(21.759012,48.333769),(21.789398,48.335526),(21.841178,48.353174),(21.884276,48.357463),(21.914765,48.36909),(21.929338,48.372914),(21.981531,48.374723),(21.999928,48.37896),(22.018015,48.379735),(22.077856,48.375808),(22.09646,48.379425),(22.113926,48.38865),(22.13284,48.404798),(22.156508,48.40206),(22.158263,48.402222),(22.158315,48.402227),(22.159298,48.402318),(22.16922,48.409527),(22.201983,48.418157),(22.236089,48.415289),(22.271849,48.403455),(22.256967,48.373224),(22.25676,48.357282),(22.284355,48.358393),(22.29128,48.357566),(22.298721,48.349143),(22.298721,48.339324),(22.296447,48.327801),(22.297378,48.314003),(22.308126,48.293694),(22.357116,48.243103),(22.363627,48.23871),(22.370861,48.237393),(22.378613,48.238865),(22.386468,48.243051),(22.390498,48.244343),(22.394529,48.244757),(22.398767,48.244343),(22.418404,48.238969),(22.434217,48.236747),(22.44972,48.237677),(22.469047,48.244136),(22.473284,48.244498),(22.477418,48.24393),(22.481449,48.242586),(22.55576,48.177164),(22.568886,48.156519),(22.583045,48.124816),(22.600098,48.101122),(22.605472,48.097039),(22.608056,48.096833),(22.621182,48.101768),(22.693219,48.101768),(22.711616,48.105824),(22.728566,48.113137),(22.745722,48.116289),(22.762052,48.109261),(22.765566,48.104533),(22.801429,48.090968),(22.830988,48.072442),(22.844321,48.061047),(22.85476,48.047301),(22.861581,48.028387),(22.857964,48.018026),(22.851349,48.008802),(22.849489,47.993144),(22.832435,47.978933),(22.840807,47.966789),(22.877601,47.946739),(22.861167,47.933819),(22.836053,47.902452),(22.819723,47.892323),(22.779622,47.882298),(22.763602,47.874753),(22.752854,47.86124),(22.753267,47.852687),(22.758331,47.846228),(22.760295,47.838916),(22.75182,47.827676),(22.745826,47.824989),(22.724225,47.82349),(22.703864,47.817186),(22.691669,47.8107),(22.666967,47.78879),(22.637719,47.771555),(22.601235,47.760936),(22.562994,47.757215),(22.528475,47.761039),(22.454371,47.787394),(22.423675,47.782563),(22.407345,47.743082),(22.395873,47.735692),(22.382644,47.732023),(22.368278,47.73117),(22.322079,47.735873),(22.309366,47.735046),(22.29159,47.730705),(22.27309,47.723755),(22.261721,47.715848),(22.23981,47.693472),(22.232162,47.688253),(22.215212,47.679933),(22.207874,47.673732),(22.204153,47.666291),(22.200743,47.647945),(22.197642,47.639315),(22.172837,47.615389),(22.16984,47.608775),(22.169117,47.601385),(22.167608,47.594862),(22.16736,47.593789),(22.162089,47.586244),(22.154177,47.582213),(22.148549,47.579345),(22.09956,47.570948),(22.037445,47.539322),(22.007886,47.517411),(21.988869,47.492942),(21.991453,47.461807),(22.000238,47.427184),(22.001582,47.393827),(21.981531,47.366102),(21.936986,47.357162),(21.919003,47.349669),(21.900813,47.335742),(21.862159,47.297424),(21.856061,47.285745),(21.844589,47.249934),(21.83931
8,47.240839),(21.827742,47.226007),(21.823608,47.214923),(21.823298,47.203321),(21.825675,47.194278),(21.825985,47.185054),(21.819887,47.172729),(21.811516,47.164693),(21.789398,47.150301),(21.77989,47.140741),(21.775446,47.131672),(21.770485,47.114025),(21.763767,47.105214),(21.74351,47.091597),(21.69421,47.069169),(21.671886,47.054726),(21.632819,47.022686),(21.63427,47.019422),(21.636643,47.014082),(21.645221,47.01124),(21.654833,47.009948),(21.661757,47.006072),(21.670749,46.994394),(21.671369,46.99336),(21.667545,46.992378),(21.648528,46.942976),(21.640053,46.935689),(21.594371,46.910006),(21.589927,46.908869),(21.588687,46.906182),(21.587137,46.8944),(21.588067,46.882204),(21.591477,46.871533),(21.591788,46.860707),(21.583416,46.847943),(21.573081,46.841767),(21.536597,46.835023),(21.51572,46.821588),(21.502904,46.80531),(21.48151,46.764976),(21.477066,46.760222),(21.473655,46.754873),(21.471381,46.749034),(21.470451,46.743014),(21.472622,46.740197),(21.475206,46.737846),(21.478203,46.735934),(21.505178,46.723247),(21.501767,46.703507),(21.483577,46.684852),(21.463527,46.677049),(21.436241,46.673741),(21.425493,46.662011),(21.423539,46.658231),(21.416811,46.645216),(21.396037,46.626354),(21.374436,46.618448),(21.337643,46.620411),(21.316249,46.616639),(21.300953,46.603926),(21.295268,46.585039),(21.291341,46.546721),(21.278938,46.528401),(21.261679,46.513338),(21.247623,46.497473),(21.245142,46.47688),(21.274391,46.438355),(21.280695,46.416393),(21.257544,46.404171),(21.21517,46.402905),(21.195636,46.398099),(21.17879,46.384457),(21.168661,46.362753),(21.164527,46.318259),(21.155949,46.298932),(21.144476,46.283739),(21.134865,46.27852),(21.105616,46.278675),(21.099208,46.27635),(21.051459,46.236094),(21.033475,46.231339),(21.013942,46.24307),(21.007017,46.248858),(20.999162,46.251597),(20.990584,46.251597),(20.981489,46.248858),(20.961748,46.248341),(20.924438,46.259555),(20.906558,46.26219),(20.899737,46.260692),(20.885061,46.255007),(20.875139,46.254387),(20.866974,46.257126),(20.848991,46.267771),(20.839689,46.271079),(20.819638,46.271699),(20.798658,46.267616),(20.787369,46.26339),(20.778504,46.260072),(20.739333,46.237489),(20.734992,46.231908),(20.735509,46.222503),(20.744811,46.200282),(20.744707,46.192272),(20.736646,46.186691),(20.727137,46.187725),(20.717836,46.189998),(20.710291,46.188086),(20.70533,46.180645),(20.704368,46.168526),(20.704193,46.16633),(20.698819,46.15646),(20.683523,46.144678),(20.663989,46.137753),(20.607558,46.129485),(20.600117,46.12964),(20.588025,46.132844),(20.578103,46.137547),(20.548957,46.15615),(20.509373,46.167674),(20.468549,46.174134),(20.444157,46.1469),(20.283237,46.1438),(20.242826,46.108091),(20.188462,46.140389),(20.170479,46.145505),(20.145364,46.137082),(20.138026,46.136461),(20.130378,46.139355),(20.120146,46.149226),(20.114772,46.152223),(20.098442,46.154962),(20.088623,46.154135),(20.063405,46.145298),(20.034983,46.142973),(19.993125,46.159406),(19.92915,46.16354),(19.888946,46.15739),(19.873649,46.152992),(19.79045,46.129072),(19.772984,46.131552),(19.711889,46.15871),(19.690095,46.168398),(19.669321,46.1731),(19.647824,46.173875),(19.589739,46.165969),(19.568449,46.166434),(19.549948,46.16416),(19.525247,46.156409),(19.501993,46.145608),(19.487523,46.134239),(19.48928,46.126023),(19.496722,46.116876),(19.499305,46.108608),(19.47295,46.098686),(19.465922,46.091968),(19.460858,46.084475),(19.453727,46.07755),(19.43688,46.06799),(19.428302,46.06551),(19.417243,46.064321),(19.404738,46.060239),(19.396573,46.051557),(19.389131,46.041532
),(19.378899,46.033677),(19.362156,46.029646),(19.325156,46.029491),(19.306966,46.026649),(19.298315,46.022123),(19.286915,46.016159),(19.279474,46.003808),(19.274823,45.991612),(19.263454,45.981432),(19.235652,45.977711),(19.148009,45.984068),(19.125788,45.993111),(19.125788,45.993266),(19.111009,46.012955),(19.088478,46.018846),(19.06543,46.012025),(19.049721,45.993111),(19.048584,45.963449),(19.03091,45.960038),(19.005589,45.96257),(18.981818,45.950788),(18.978717,45.947171),(18.977684,45.943502),(18.97882,45.939781),(18.986469,45.931254),(18.988742,45.927017),(18.987502,45.923813),(18.981818,45.921849),(18.962697,45.927947),(18.901306,45.931203),(18.886526,45.930428),(18.876604,45.922418),(18.865442,45.918077),(18.845392,45.913943),(18.828339,45.905726),(18.822451,45.905582),(18.817797,45.905468),(18.804878,45.913633),(18.794852,45.903039),(18.790408,45.893892),(18.785964,45.886916),(18.775525,45.882834),(18.76333,45.88397),(18.723849,45.89844),(18.674343,45.910377),(18.655636,45.907587),(18.633002,45.891929),(18.629281,45.886864),(18.626284,45.874824),(18.623597,45.868674),(18.62215,45.868416),(18.60737,45.856685),(18.585459,45.82692),(18.57347,45.816688),(18.530269,45.79085),(18.504224,45.784028),(18.481797,45.791418),(18.465467,45.783718),(18.43074,45.753642),(18.411723,45.743204),(18.404385,45.741809),(18.397564,45.741343),(18.390743,45.741809),(18.384128,45.7431),(18.383818,45.743204),(18.367178,45.75819),(18.351158,45.758397),(18.334932,45.751989),(18.317672,45.747286),(18.300516,45.751834),(18.283256,45.764908),(18.260518,45.765115),(18.229822,45.781238),(18.211219,45.785372),(18.12895,45.785372),(18.120165,45.783408),(18.102905,45.774623),(18.092053,45.771729),(18.080477,45.771729),(17.975367,45.792141),(17.906234,45.792141),(17.889791,45.792141),(17.88049,45.788524),(17.875735,45.780669),(17.870051,45.773486),(17.858062,45.771729),(17.857959,45.775863),(17.838218,45.799635),(17.809176,45.814414),(17.687323,45.840614),(17.664689,45.841647),(17.656524,45.84542),(17.654669,45.852113),(17.653217,45.857357),(17.65208,45.868829),(17.646912,45.883712),(17.645775,45.891825),(17.638437,45.901334),(17.591102,45.936215),(17.554308,45.947791),(17.511933,45.95394),(17.426667,45.956731),(17.418192,45.95301),(17.415402,45.949703),(17.411578,45.946602),(17.406203,45.943657),(17.399692,45.959935),(17.38915,45.963087),(17.365276,45.956731),(17.345328,45.955697),(17.342124,45.958643),(17.343675,45.967531),(17.337887,45.984636),(17.327138,45.972027),(17.316596,45.973784),(17.306613,45.979914),(17.304814,45.981019),(17.290138,45.984636),(17.290138,45.991457),(17.296959,45.992646),(17.309362,45.996625),(17.316803,45.997659),(17.307811,46.006392),(17.291482,46.007942),(17.275875,46.012025),(17.268951,46.028716),(17.266574,46.038793),(17.254068,46.070832),(17.24859,46.080238),(17.208593,46.116566),(17.197327,46.121165),(17.116195,46.123129),(17.101209,46.128193),(17.052943,46.153463),(17.041264,46.162196),(17.036303,46.172997),(17.028035,46.18049),(16.974808,46.210565),(16.9653,46.219454),(16.940702,46.251493),(16.903288,46.281931),(16.896054,46.286271),(16.889232,46.292163),(16.879312,46.312062),(16.871766,46.327199),(16.875073,46.342857),(16.865461,46.359135),(16.859411,46.364758),(16.850475,46.373062),(16.837659,46.381873),(16.832388,46.381925),(16.830218,46.377351),(16.827531,46.374018),(16.820916,46.3781),(16.816575,46.381976),(16.812338,46.385025),(16.807894,46.387351),(16.803449,46.388694),(16.776474,46.39288),(16.769343,46.39611),(16.761901,46.381873),(16.75508,46.381873),(16.742885,46.399624)
,(16.712395,46.412672),(16.693585,46.429622),(16.677462,46.44869),(16.659272,46.464193),(16.637671,46.474477),(16.61111,46.478069),(16.602118,46.482048),(16.594263,46.482978),(16.588062,46.479567),(16.583721,46.470653),(16.577623,46.470653),(16.564291,46.479929),(16.521089,46.498532),(16.515302,46.501711),(16.500832,46.544809),(16.467139,46.564704),(16.430242,46.604392),(16.394585,46.619016),(16.376395,46.629093),(16.372246,46.636341),(16.368437,46.642994),(16.377636,46.652864),(16.396652,46.659143),(16.402607,46.663109),(16.410502,46.668367),(16.405024,46.687255),(16.390038,46.694154),(16.371434,46.694929),(16.366216,46.696467),(16.365383,46.696712),(16.357585,46.699011),(16.357275,46.715832),(16.343426,46.714178),(16.334124,46.721749),(16.325546,46.733273),(16.314177,46.743324),(16.300431,46.772082),(16.298157,46.775802),(16.29949,46.77951),(16.302188,46.787016),(16.311097,46.797519),(16.3149,46.802002),(16.321825,46.813268),(16.327509,46.825463),(16.329783,46.834403),(16.325339,46.839442),(16.310663,46.84001),(16.301878,46.843214),(16.297392,46.847033),(16.282241,46.859932),(16.272009,46.863962),(16.179486,46.858468),(16.135376,46.855849),(16.130246,46.856708),(16.094035,46.862774),(16.110055,46.867916),(16.122871,46.876365),(16.159148,46.910316),(16.170723,46.918533),(16.196148,46.931297),(16.217025,46.937395),(16.22395,46.941064),(16.230668,46.94835),(16.231185,46.954422),(16.230358,46.959977),(16.232942,46.966075),(16.24307,46.972018),(16.252992,46.973516),(16.261054,46.97809),(16.265394,46.993257),(16.274903,47.004315),(16.288545,47.005582),(16.325856,47.00044),(16.366577,47.003825),(16.387764,47.002042),(16.405024,46.993153),(16.410398,46.990415),(16.415566,46.989433),(16.420424,46.99026),(16.424764,46.992998),(16.424764,46.993102),(16.441404,46.99522),(16.467449,46.995427),(16.486363,46.998554),(16.481919,47.00938),(16.467553,47.018423),(16.45329,47.021679),(16.424661,47.024082),(16.437167,47.031782),(16.481919,47.04421),(16.493184,47.049145),(16.497215,47.054622),(16.493701,47.059816),(16.481919,47.063898),(16.461765,47.068498),(16.454323,47.081701),(16.460835,47.0963),(16.481919,47.10524),(16.50476,47.125833),(16.509514,47.137537),(16.497318,47.149681),(16.480885,47.150767),(16.447089,47.139708),(16.433653,47.145754),(16.433963,47.151283),(16.442128,47.168336),(16.441818,47.177121),(16.4351,47.183581),(16.426728,47.184124),(16.41877,47.183659),(16.412776,47.187173),(16.409262,47.203683),(16.424661,47.225594),(16.421354,47.243035),(16.421354,47.243061),(16.421354,47.243138),(16.452463,47.254584),(16.466622,47.263421),(16.473237,47.276805),(16.469206,47.293238),(16.43882,47.336569),(16.436133,47.338559),(16.431689,47.339721),(16.427142,47.341375),(16.424351,47.345199),(16.424971,47.351116),(16.429312,47.353803),(16.434273,47.35556),(16.436443,47.358506),(16.433963,47.39685),(16.444091,47.40951),(16.456597,47.411836),(16.46993,47.405531),(16.481919,47.392302),(16.589406,47.425633),(16.626973,47.445549),(16.640875,47.452919),(16.636845,47.4932),(16.648213,47.501546),(16.677359,47.509866),(16.688004,47.52263),(16.689141,47.53803),(16.68139,47.550561),(16.66754,47.560096),(16.65059,47.566555),(16.656172,47.585934),(16.647697,47.606139),(16.63023,47.622004),(16.608422,47.628773),(16.575453,47.624949),(16.509617,47.64314),(16.481919,47.638954),(16.425901,47.654276),(16.407815,47.66133),(16.4166,47.668823),(16.431792,47.685463),(16.43913,47.690475),(16.444918,47.68554),(16.449879,47.682414),(16.45453,47.681768),(16.461145,47.684532),(16.47303,47.691767),(16.512924,47.706004),(16.521399,47
.711533),(16.526877,47.720137),(16.52512,47.733315),(16.531115,47.742978),(16.567805,47.754192),(16.609766,47.750627),(16.689865,47.729568),(16.702474,47.7236),(16.707848,47.714608),(16.711569,47.704066),(16.719217,47.693731),(16.730379,47.685902),(16.741128,47.681458),(16.797455,47.675463),(16.80624,47.676884),(16.817299,47.684248),(16.836936,47.705358),(16.850165,47.712929),(16.864738,47.686729),(16.902668,47.682026),(16.98194,47.695436),(17.054804,47.702025),(17.075371,47.708484),(17.064002,47.71329),(17.055527,47.720913),(17.050566,47.730989),(17.048706,47.763649),(17.041988,47.783906),(17.040644,47.801114),(17.05563,47.812354),(17.049119,47.818684),(17.039507,47.837365),(17.031652,47.841345),(17.010672,47.847882),(17.004367,47.852377),(17.004057,47.863281),(17.016563,47.867699),(17.051186,47.872893),(17.067619,47.881626),(17.077541,47.891729),(17.083329,47.904829),(17.087463,47.922632),(17.085396,47.924621),(17.080125,47.925758),(17.075164,47.927773),(17.07413,47.932114),(17.076818,47.934956),(17.086119,47.93922),(17.088083,47.940899),(17.090874,47.949477),(17.095731,47.95573),(17.096145,47.961931),(17.085603,47.970148),(17.148338,48.005443),(17.184821,48.020274),(17.220892,48.015055),(17.262316,48.007283),(17.272671,48.00534),(17.337887,47.998725),(17.36941,47.981207),(17.472039,47.888809),(17.481858,47.882711),(17.492193,47.879818),(17.517308,47.876252),(17.526609,47.872118),(17.560406,47.837986),(17.572601,47.829536),(17.582937,47.829795),(17.592962,47.833102),(17.604227,47.834239),(17.61911,47.829226),(17.639884,47.819201),(17.658384,47.807316),(17.666239,47.797032),(17.676678,47.789177),(17.719245,47.773669),(17.741997,47.76538),(17.825713,47.750006),(17.883532,47.752521),(18.11262,47.762486),(18.23592,47.753882),(18.273024,47.756259),(18.347851,47.776775),(18.552593,47.792846),(18.597448,47.79065),(18.633829,47.779824),(18.663698,47.775896),(18.692843,47.777963),(18.717234,47.788118),(18.750307,47.81362),(18.767671,47.822302),(18.790305,47.826332),(18.814916,47.832194),(18.816453,47.83256),(18.778006,47.851447),(18.748757,47.870723),(18.742246,47.889455),(18.744726,47.910513),(18.754751,47.951803),(18.751444,47.963353),(18.744519,47.967357),(18.743176,47.971052),(18.756198,47.981827),(18.765293,47.985393),(18.784724,47.987615),(18.794232,47.993144),(18.821001,48.030454),(18.838467,48.040015),(18.933595,48.054349),(18.981818,48.061615),(18.996494,48.066214),(19.019017,48.065497),(19.038662,48.064871),(19.098503,48.070736),(19.222526,48.060582),(19.233482,48.06208),(19.29322,48.087764),(19.428199,48.085852),(19.481735,48.111328),(19.483286,48.116496),(19.483803,48.121767),(19.483286,48.127193),(19.481735,48.13267),(19.481425,48.133342),(19.481219,48.133962),(19.481425,48.134427),(19.481735,48.134892),(19.493621,48.150705),(19.503026,48.189411),(19.513982,48.203958),(19.531241,48.21065),(19.623226,48.227006),(19.633871,48.226773),(19.643896,48.224809),(19.655265,48.21835),(19.676969,48.200392),(19.686994,48.196904),(19.73309,48.202899),(19.756551,48.200315),(19.774327,48.185897),(19.774844,48.176079),(19.76916,48.167449),(19.766679,48.159),(19.776084,48.149517),(19.785593,48.14869),(19.821663,48.157914),(19.846261,48.152669),(19.884295,48.129621),(19.905069,48.124299),(19.928633,48.130087),(19.973902,48.158379),(19.99695,48.167914),(20.034983,48.175872),(20.038303,48.177233),(20.078082,48.193545),(20.096995,48.198429),(20.105263,48.202795),(20.112601,48.211735),(20.118079,48.229719),(20.122006,48.236747),(20.134512,48.246669),(20.14309,48.247805),(20.153322,48.245273),(20.170892,48.
244033),(20.187636,48.248994),(20.217815,48.267598),(20.22908,48.270905),(20.249027,48.264187),(20.260293,48.255893),(20.272385,48.252456),(20.295226,48.260415),(20.324268,48.279948),(20.349279,48.305476),(20.370157,48.334338),(20.409017,48.413713),(20.420593,48.429241),(20.435579,48.442393),(20.465968,48.46379),(20.468239,48.465389),(20.481674,48.478747),(20.482501,48.482261),(20.482915,48.485827),(20.482811,48.489367),(20.482191,48.492881),(20.480227,48.510218),(20.480538,48.518538),(20.481674,48.526083),(20.51051,48.533783),(20.572522,48.536573),(20.784085,48.569052),(20.800311,48.569233),(20.815814,48.563807),(20.845477,48.545823),(20.859946,48.543317),(20.891365,48.541095),(20.945832,48.518978),(20.981489,48.516859)] +Isle of Man [(-4.612131,54.056952),(-4.620758,54.069648),(-4.631256,54.070624),(-4.643219,54.066596),(-4.656402,54.063788),(-4.67101,54.067043),(-4.689687,54.081529),(-4.704498,54.084866),(-4.725982,54.079006),(-4.745595,54.067369),(-4.766225,54.059231),(-4.790151,54.063788),(-4.790151,54.071234),(-4.78482,54.073065),(-4.781321,54.074774),(-4.777089,54.076239),(-4.769683,54.07746),(-4.770863,54.096829),(-4.737172,54.124905),(-4.728139,54.142564),(-4.727651,54.172593),(-4.723785,54.185452),(-4.714467,54.200344),(-4.71109,54.209906),(-4.71227,54.21662),(-4.710276,54.220649),(-4.68814,54.223782),(-4.663971,54.233547),(-4.643056,54.249416),(-4.612457,54.265448),(-4.598378,54.276597),(-4.554596,54.339179),(-4.530995,54.366116),(-4.495351,54.385891),(-4.442006,54.404364),(-4.404449,54.409654),(-4.385732,54.415717),(-4.367787,54.419013),(-4.351308,54.413804),(-4.357086,54.3994),(-4.377553,54.361884),(-4.3756,54.348334),(-4.370229,54.340888),(-4.363759,54.324286),(-4.358225,54.317613),(-4.350494,54.314765),(-4.332183,54.313422),(-4.324696,54.310207),(-4.319407,54.304592),(-4.313222,54.2956),(-4.31192,54.287177),(-4.320953,54.283515),(-4.326731,54.280341),(-4.34081,54.263007),(-4.348256,54.259508),(-4.35436,54.251532),(-4.359771,54.242255),(-4.365631,54.235053),(-4.371816,54.231269),(-4.399159,54.222073),(-4.390777,54.194729),(-4.412587,54.18008),(-4.446889,54.169135),(-4.475494,54.153144),(-4.474599,54.140815),(-4.529286,54.119086),(-4.543772,54.108466),(-4.549794,54.100653),(-4.563832,54.096829),(-4.579579,54.094428),(-4.590932,54.091051),(-4.598948,54.083889),(-4.612131,54.056952)] +Iraq 
[(42.89674,37.324906),(42.937048,37.320152),(42.979629,37.331831),(43.005158,37.347256),(43.043501,37.360253),(43.083706,37.368831),(43.114815,37.371131),(43.131971,37.367255),(43.263385,37.310695),(43.270309,37.30868),(43.278629,37.30775),(43.287518,37.309145),(43.296871,37.316741),(43.305656,37.319971),(43.324156,37.322219),(43.336145,37.32023),(43.362505,37.303908),(43.376039,37.295528),(43.41676,37.279173),(43.463269,37.248684),(43.479599,37.243361),(43.492415,37.244756),(43.517116,37.252301),(43.529518,37.253903),(43.542437,37.252301),(43.550602,37.248735),(43.568896,37.237702),(43.594217,37.22946),(43.618298,37.226979),(43.720824,37.232612),(43.746921,37.230648),(43.770641,37.225843),(43.780562,37.220365),(43.80206,37.20357),(43.809398,37.199694),(43.822007,37.202381),(43.83999,37.217109),(43.89363,37.224912),(43.924171,37.253076),(43.953782,37.287467),(43.990369,37.312504),(44.035586,37.31824),(44.069025,37.313727),(44.088037,37.31116),(44.184465,37.279173),(44.206893,37.26752),(44.223171,37.254058),(44.235005,37.236772),(44.243532,37.213828),(44.248544,37.191917),(44.249629,37.179437),(44.248337,37.16967),(44.240121,37.157966),(44.230199,37.154322),(44.218779,37.152617),(44.205963,37.146623),(44.18922,37.129208),(44.180641,37.108925),(44.180021,37.087608),(44.187463,37.066938),(44.22777,36.994125),(44.234281,36.983661),(44.243273,36.97777),(44.284821,36.969166),(44.297223,36.969941),(44.306628,36.977227),(44.315879,36.99397),(44.315982,36.99397),(44.316137,36.994022),(44.316137,36.994125),(44.33164,37.015468),(44.335154,37.031152),(44.343112,37.042443),(44.428068,37.064767),(44.454423,37.076343),(44.479228,37.092001),(44.503412,37.116624),(44.539379,37.143677),(44.57824,37.166415),(44.610279,37.178352),(44.628934,37.179024),(44.733786,37.167242),(44.753836,37.159154),(44.766135,37.14192),(44.752544,37.113136),(44.752699,37.103318),(44.760658,37.085644),(44.766549,37.078952),(44.773628,37.076343),(44.781121,37.074482),(44.788356,37.070245),(44.792232,37.063734),(44.797296,37.048489),(44.801895,37.043528),(44.811197,37.041926),(44.830938,37.046835),(44.840601,37.047197),(44.847577,37.044045),(44.858946,37.034123),(44.880754,37.024925),(44.887368,37.015933),(44.885715,37.005081),(44.874759,36.994022),(44.869178,36.967357),(44.874863,36.949658),(44.884061,36.933354),(44.888815,36.910926),(44.883441,36.886845),(44.869643,36.868965),(44.834141,36.834032),(44.823393,36.809331),(44.831248,36.791916),(44.851195,36.781219),(44.87724,36.776464),(44.908245,36.77786),(44.922715,36.775896),(44.935221,36.767111),(44.944419,36.758274),(44.954754,36.752331),(44.966433,36.749334),(44.979456,36.749024),(44.996199,36.741686),(45.010978,36.725563),(45.035266,36.689596),(45.045911,36.668047),(45.04431,36.64934),(45.024879,36.613167),(45.014182,36.579009),(45.013252,36.557925),(45.010255,36.556013),(44.997852,36.552085),(44.993408,36.549656),(44.991393,36.53374),(45.005914,36.518702),(45.009525,36.516196),(45.025344,36.505215),(45.03816,36.494001),(45.03909,36.479842),(45.045498,36.471625),(45.054076,36.464959),(45.062086,36.455347),(45.066892,36.442376),(45.068546,36.432506),(45.072266,36.423411),(45.083739,36.412817),(45.110817,36.402534),(45.139963,36.404394),(45.19598,36.422429),(45.221301,36.42093),(45.238871,36.402999),(45.24962,36.377936),(45.254478,36.354785),(45.25701,36.311687),(45.263934,36.294272),(45.282693,36.274893),(45.258508,36.262232),(45.264193,36.250192),(45.282796,36.238409),(45.297369,36.22673),(45.30202,36.207558),(45.299539,36.161411),(45.304604,36.140379),(45.33561,36.10772),(45.3
48322,36.087514),(45.344188,36.067464),(45.331889,36.051599),(45.31959,36.031032),(45.313699,36.010413),(45.32021,35.99429),(45.32021,35.994135),(45.32021,35.994083),(45.320313,35.994031),(45.32052,35.99398),(45.33809,35.979304),(45.359587,35.976875),(45.381808,35.982973),(45.401549,35.99398),(45.4016,35.994031),(45.401859,35.994031),(45.401962,35.994083),(45.402169,35.994238),(45.419946,35.998269),(45.453122,36.011653),(45.47927,36.012015),(45.501424,36.005435),(45.53999,35.99398),(45.576939,35.96623),(45.589858,35.959615),(45.617039,35.955429),(45.646857,35.933208),(45.69321,35.879878),(45.718739,35.828305),(45.731038,35.815128),(45.749021,35.810994),(45.786745,35.820244),(45.797907,35.818383),(45.814185,35.80934),(45.834752,35.810528),(45.854079,35.818332),(45.878315,35.83523),(45.888496,35.834868),(45.898779,35.832181),(45.909115,35.831302),(45.941257,35.840346),(46.004923,35.837969),(46.024973,35.843291),(46.0443,35.852283),(46.060423,35.857244),(46.07696,35.856986),(46.107759,35.847425),(46.115045,35.846444),(46.119851,35.842878),(46.124812,35.822466),(46.129101,35.815541),(46.135199,35.810012),(46.143364,35.804534),(46.163518,35.797506),(46.184963,35.798178),(46.227441,35.805723),(46.246407,35.811407),(46.263512,35.821122),(46.281392,35.828305),(46.302992,35.826548),(46.319632,35.816885),(46.327074,35.8035),(46.325058,35.787843),(46.313379,35.771513),(46.297515,35.759886),(46.267129,35.744124),(46.253486,35.727743),(46.23788,35.715547),(46.217003,35.713583),(46.177729,35.715392),(46.127603,35.693688),(46.107035,35.689812),(46.037065,35.691569),(46.016498,35.68573),(46.003786,35.674258),(45.995879,35.658393),(45.992365,35.640875),(45.992314,35.62439),(46.002029,35.585943),(45.999238,35.572145),(45.968698,35.579742),(45.963892,35.579742),(45.959344,35.578708),(45.968026,35.558503),(45.97924,35.539486),(45.983115,35.5353),(45.984562,35.531062),(45.983219,35.52698),(45.97924,35.523001),(45.971333,35.51804),(45.966062,35.511477),(45.963582,35.503415),(45.963995,35.494165),(45.977017,35.46533),(46.040993,35.381562),(46.096287,35.341151),(46.120161,35.318517),(46.127396,35.289062),(46.125285,35.284989),(46.120058,35.274902),(46.107655,35.262758),(46.098767,35.250046),(46.101454,35.234285),(46.115407,35.226326),(46.154164,35.226636),(46.169254,35.216249),(46.165481,35.189946),(46.143829,35.159612),(46.130393,35.131397),(46.151064,35.111605),(46.143106,35.099512),(46.132357,35.094758),(46.119954,35.092743),(46.09453,35.085818),(46.07665,35.089694),(46.066573,35.088815),(46.039753,35.075535),(46.025077,35.064269),(46.009212,35.06091),(45.97924,35.071555),(45.966372,35.070729),(45.956037,35.074243),(45.934953,35.085456),(45.920173,35.089642),(45.913197,35.087317),(45.899399,35.06861),(45.898366,35.063907),(45.901156,35.051608),(45.899089,35.046906),(45.893095,35.044374),(45.879039,35.043237),(45.872476,35.041428),(45.861882,35.032798),(45.857025,35.021274),(45.856818,35.008045),(45.859712,34.994403),(45.859712,34.994248),(45.868962,34.968358),(45.866327,34.949547),(45.857852,34.931306),(45.850203,34.906811),(45.835011,34.890171),(45.808036,34.897871),(45.796214,34.903043),(45.77827,34.910893),(45.755119,34.909963),(45.748504,34.89074),(45.750468,34.865367),(45.745404,34.840975),(45.717705,34.825111),(45.689231,34.82263),(45.676571,34.818289),(45.666752,34.80692),(45.6636,34.794932),(45.665925,34.774726),(45.662825,34.763254),(45.635023,34.736951),(45.627375,34.720828),(45.642258,34.706875),(45.650526,34.702999),(45.659517,34.697211),(45.666856,34.689977),(45.67037,34.681812),(45.67347,34.671166
),(45.678379,34.66512),(45.683495,34.660211),(45.687629,34.653235),(45.690782,34.634321),(45.691247,34.615511),(45.694451,34.598561),(45.705613,34.58559),(45.708558,34.579079),(45.7083,34.568382),(45.705613,34.557737),(45.70153,34.551277),(45.695588,34.550244),(45.673573,34.556652),(45.604844,34.561147),(45.573941,34.567297),(45.539628,34.582231),(45.500664,34.591688),(45.49653,34.564248),(45.50485,34.523785),(45.503455,34.494175),(45.4987,34.486242),(45.493016,34.479008),(45.486505,34.472729),(45.47927,34.467251),(45.426767,34.45751),(45.417052,34.444359),(45.447954,34.36209),(45.46077,34.340463),(45.47927,34.329146),(45.499217,34.342401),(45.521541,34.342504),(45.542212,34.331601),(45.55756,34.311964),(45.563038,34.289278),(45.561022,34.264602),(45.54769,34.216491),(45.542729,34.207474),(45.528673,34.18856),(45.526657,34.17931),(45.531463,34.167916),(45.538801,34.159234),(45.544537,34.15063),(45.544537,34.139494),(45.534357,34.128202),(45.493946,34.100685),(45.47927,34.087636),(45.454879,34.06986),(45.443613,34.047664),(45.43519,34.022136),(45.419946,33.99436),(45.419946,33.994257),(45.419739,33.994257),(45.411471,33.987151),(45.401755,33.981622),(45.380361,33.97356),(45.401032,33.949453),(45.423718,33.938782),(45.449711,33.937387),(45.47927,33.941185),(45.481751,33.940875),(45.483921,33.940048),(45.485781,33.938653),(45.574768,33.803933),(45.588204,33.791504),(45.603604,33.78073),(45.616729,33.768689),(45.623912,33.752075),(45.628202,33.731844),(45.635126,33.714171),(45.645151,33.697918),(45.658794,33.681692),(45.673987,33.669031),(45.721116,33.640764),(45.72463,33.633036),(45.727989,33.625649),(45.72773,33.590096),(45.734552,33.583119),(45.752018,33.586762),(45.768451,33.595909),(45.797545,33.619319),(45.815787,33.625804),(45.864363,33.626424),(45.885964,33.63092),(45.882036,33.600198),(45.89971,33.58516),(45.920897,33.572991),(45.927925,33.550873),(45.915626,33.535809),(45.868187,33.511573),(45.852581,33.494365),(45.869737,33.482118),(45.899399,33.476304),(45.929682,33.479508),(45.948182,33.494262),(45.953143,33.496587),(45.958156,33.497388),(45.963272,33.496561),(45.964487,33.495991),(45.968232,33.494236),(45.971385,33.492763),(45.974434,33.490799),(46.008023,33.455659),(46.019392,33.438399),(46.027454,33.419512),(46.030503,33.399203),(46.031071,33.382511),(46.036239,33.367939),(46.069725,33.34086),(46.109516,33.293705),(46.141349,33.272105),(46.155301,33.260219),(46.164189,33.243347),(46.164396,33.233063),(46.15835,33.213865),(46.157678,33.205675),(46.162122,33.196373),(46.173698,33.190223),(46.174318,33.18118),(46.167187,33.168338),(46.153441,33.154256),(46.126466,33.131777),(46.105382,33.118393),(46.089155,33.11568),(46.072412,33.116765),(46.050398,33.115112),(46.030296,33.105681),(46.029314,33.093511),(46.043267,33.08346),(46.087192,33.079222),(46.106002,33.072504),(46.12042,33.061678),(46.126156,33.047984),(46.119489,33.031008),(46.104658,33.018063),(46.088018,33.00672),(46.075719,32.994499),(46.075719,32.994344),(46.09701,32.95432),(46.15556,32.948352),(46.273433,32.959488),(46.379629,32.931764),(46.479312,32.891792),(46.507321,32.868021),(46.600613,32.822541),(46.604421,32.820685),(46.650361,32.789369),(46.715887,32.756012),(46.757332,32.716195),(47.058398,32.494478),(47.058605,32.494478),(47.058657,32.4944),(47.090593,32.474556),(47.12103,32.461043),(47.152553,32.455178),(47.187796,32.458382),(47.20547,32.46404),(47.251255,32.48546),(47.265311,32.484685),(47.321948,32.468226),(47.343342,32.458692),(47.356106,32.446186),(47.36732,32.430761),(47.384115,32.41257),(47.410832,32.39
5285),(47.416413,32.387973),(47.417963,32.376423),(47.414346,32.36986),(47.408765,32.364331),(47.39998,32.345779),(47.395949,32.34198),(47.395794,32.33702),(47.40246,32.323506),(47.407059,32.31777),(47.436877,32.293689),(47.441579,32.288185),(47.445352,32.28263),(47.463025,32.26183),(47.469433,32.25612),(47.479148,32.252063),(47.484522,32.239351),(47.504004,32.226742),(47.509224,32.213797),(47.507673,32.199844),(47.502351,32.190129),(47.496408,32.181344),(47.492481,32.170492),(47.490413,32.15654),(47.491344,32.148633),(47.498165,32.144137),(47.513823,32.140313),(47.52669,32.133389),(47.534752,32.123157),(47.54364,32.114578),(47.55873,32.112976),(47.578057,32.106155),(47.580232,32.103495),(47.595007,32.085433),(47.617796,32.04187),(47.633144,32.026935),(47.668077,32.012414),(47.677947,31.994689),(47.677947,31.994534),(47.68265,31.976912),(47.71841,31.922342),(47.725231,31.914332),(47.743318,31.904255),(47.750966,31.898209),(47.756547,31.889166),(47.761715,31.871647),(47.76528,31.864103),(47.781352,31.848858),(47.820781,31.823588),(47.834475,31.805708),(47.837266,31.784469),(47.831323,31.761835),(47.678929,31.407851),(47.676443,31.236517),(47.672935,30.994698),(48.001545,30.994647),(48.012242,30.989066),(48.015446,30.97625),(48.012035,30.494574),(48.012035,30.494522),(48.013586,30.463878),(48.119419,30.450804),(48.130581,30.447497),(48.140916,30.441916),(48.15766,30.426309),(48.170475,30.406672),(48.179467,30.384865),(48.187684,30.340836),(48.192076,30.331276),(48.200086,30.323938),(48.211196,30.319649),(48.222979,30.318099),(48.235071,30.318512),(48.271658,30.323835),(48.284267,30.323318),(48.296359,30.319753),(48.305713,30.312828),(48.326021,30.283476),(48.358164,30.251798),(48.397025,30.220999),(48.403588,30.212524),(48.408704,30.202344),(48.410616,30.191543),(48.408032,30.180898),(48.400745,30.172268),(48.391134,30.164775),(48.383279,30.156455),(48.381108,30.145345),(48.383072,30.13842),(48.395578,30.115217),(48.415525,30.095632),(48.421209,30.085348),(48.423948,30.083643),(48.44219,30.03393),(48.444257,30.020908),(48.453145,30.001426),(48.457796,29.994863),(48.464307,29.989127),(48.478157,29.979618),(48.49304,29.971763),(48.523994,29.964115),(48.531025,29.961351),(48.531016,29.96133),(48.530772,29.960761),(48.530772,29.956204),(48.554942,29.956488),(48.559255,29.946601),(48.546886,29.934719),(48.520518,29.9289),(48.411876,29.938422),(48.33546,29.961412),(48.299571,29.984524),(48.264659,29.993964),(48.209239,30.024482),(48.165782,30.037543),(48.119151,30.044989),(48.076345,30.04564),(48.038259,30.036851),(47.969005,30.004055),(47.961925,30.030341),(47.958507,30.060614),(47.951671,30.088324),(47.934825,30.107082),(47.947113,30.07453),(47.948497,30.062323),(47.946056,30.049384),(47.941742,30.037258),(47.940929,30.026516),(47.948497,30.017646),(47.948009,29.994045),(47.731432,30.088552),(47.674175,30.098216),(47.415793,30.098216),(47.358225,30.092118),(47.197098,30.03424),(47.144698,30.003338),(47.110488,29.960911),(47.025429,29.772137),(46.988842,29.712658),(46.983674,29.698292),(46.97985,29.668061),(46.97737,29.657984),(46.957836,29.620441),(46.883112,29.512515),(46.85345,29.44456),(46.838567,29.424975),(46.774488,29.363532),(46.711856,29.271393),(46.561468,29.124167),(46.532436,29.095745),(46.488614,29.08758),(46.444896,29.079415),(46.427322,29.076143),(46.401178,29.071276),(46.357459,29.063137),(46.252143,29.071457),(46.17592,29.077477),(46.099697,29.083575),(46.02363,29.089595),(45.947355,29.095641),(45.844932,29.103754),(45.742406,29.111868),(45.63988,29.120032),(45.537406,29.12812),(4
5.434932,29.136207),(45.332406,29.144372),(45.229983,29.152459),(45.127457,29.160624),(45.047824,29.166903),(44.968293,29.173259),(44.888712,29.179512),(44.808923,29.185894),(44.717456,29.193103),(44.710893,29.195273),(44.70464,29.197495),(44.698232,29.199718),(44.691825,29.201836),(44.614878,29.256071),(44.519742,29.323328),(44.424296,29.390533),(44.328953,29.45779),(44.233661,29.524995),(44.119146,29.605817),(44.004631,29.686613),(43.890065,29.767383),(43.824013,29.81398),(43.775498,29.848205),(43.660983,29.929027),(43.546365,30.009797),(43.431746,30.090568),(43.317231,30.171338),(43.229071,30.233505),(43.140963,30.295671),(43.052907,30.357786),(42.964747,30.420005),(42.859017,30.494522),(42.858913,30.494574),(42.85881,30.494626),(42.858707,30.494677),(42.783413,30.551045),(42.763932,30.565629),(42.669157,30.636684),(42.574486,30.707636),(42.479712,30.778639),(42.395169,30.841581),(42.31073,30.904523),(42.22629,30.967465),(42.141644,31.030407),(42.075395,31.079861),(41.986202,31.125285),(41.898558,31.169985),(41.810708,31.214685),(41.723065,31.259385),(41.635422,31.304034),(41.547572,31.348734),(41.459825,31.393434),(41.372182,31.438134),(41.284539,31.482834),(41.196689,31.527586),(41.109046,31.572286),(41.021299,31.616934),(40.933553,31.661634),(40.845909,31.706335),(40.758163,31.751086),(40.670313,31.795683),(40.58267,31.840435),(40.479834,31.892835),(40.424126,31.920533),(40.37028,31.938465),(40.029318,31.994379),(40.029112,31.994431),(40.029008,31.994482),(40.028802,31.994482),(39.9486,32.00611),(39.750989,32.034893),(39.553275,32.063729),(39.355561,32.092564),(39.157744,32.121348),(39.154953,32.120573),(39.152163,32.119746),(39.149269,32.118919),(39.146375,32.118144),(39.146168,32.125844),(39.266471,32.212867),(39.291999,32.244519),(39.271122,32.311956),(39.256342,32.342678),(39.235775,32.352858),(39.046329,32.308494),(39.036201,32.313352),(39.028759,32.328338),(38.979977,32.472102),(38.978633,32.47373),(38.97822,32.47497),(38.978633,32.475693),(38.979977,32.476055),(39.057181,32.496596),(38.990002,32.705576),(38.94277,32.852337),(38.897191,32.994344),(38.862568,33.10072),(38.82102,33.229032),(38.774511,33.371685),(38.885099,33.427108),(38.995686,33.482479),(39.106274,33.537928),(39.216862,33.593325),(39.327449,33.648722),(39.438037,33.704094),(39.548624,33.759491),(39.659212,33.81481),(39.769696,33.870233),(39.880387,33.925605),(39.990871,33.980976),(40.101459,34.036373),(40.173111,34.072283),(40.212046,34.091796),(40.322634,34.147167),(40.433221,34.202539),(40.543809,34.257988),(40.690467,34.331497),(40.936033,34.386068),(40.965282,34.401855),(40.98802,34.42852),(41.023986,34.494175),(41.023986,34.49433),(41.195656,34.768473),(41.204234,34.793123),(41.206508,34.819323),(41.198033,34.994041),(41.192326,35.158904),(41.191521,35.182143),(41.20134,35.243018),(41.243095,35.366525),(41.25188,35.46409),(41.261078,35.494165),(41.261181,35.494165),(41.261285,35.494165),(41.261285,35.49432),(41.308458,35.552248),(41.34221,35.593694),(41.358023,35.623925),(41.363501,35.655241),(41.359263,35.792752),(41.354509,35.825566),(41.343657,35.857657),(41.266349,35.994238),(41.266349,35.994342),(41.266246,35.99429),(41.240614,36.043021),(41.236687,36.060332),(41.236583,36.077024),(41.268829,36.327965),(41.276994,36.354785),(41.365258,36.493898),(41.365258,36.494001),(41.365361,36.494053),(41.365361,36.494156),(41.385411,36.516377),(41.414867,36.527384),(41.479773,36.536117),(41.789935,36.589292),(41.817323,36.599731),(41.843781,36.617869),(41.978554,36.733625),(42.178438,36.90532),(42.281584,36.99397),
(42.281894,36.99397),(42.281894,36.994022),(42.281894,36.994125),(42.34587,37.042908),(42.376806,37.062001),(42.377186,37.062235),(42.376875,37.076756),(42.371191,37.087944),(42.363646,37.09815),(42.357238,37.109984),(42.401887,37.114144),(42.459144,37.129311),(42.545237,37.140887),(42.561257,37.146623),(42.564668,37.152049),(42.576967,37.17923),(42.702334,37.325346),(42.706158,37.333226),(42.707398,37.340151),(42.709465,37.347179),(42.715666,37.355292),(42.722177,37.358909),(42.77158,37.374903),(42.780468,37.375498),(42.792457,37.374335),(42.801139,37.36909),(42.80548,37.351856),(42.814058,37.346817),(42.89674,37.324906)] +Israel [(35.803633,33.248463),(35.807664,33.201721),(35.830195,33.189991),(35.833399,33.161129),(35.822443,33.14157),(35.811488,33.126765),(35.811488,33.111908),(35.848902,33.098678),(35.845801,33.085423),(35.85903,32.99021),(35.864611,32.97773),(35.888073,32.944941),(35.874017,32.922333),(35.866885,32.920782),(35.849729,32.895823),(35.83805,32.866031),(35.841874,32.853577),(35.834226,32.827946),(35.784203,32.777949),(35.75759,32.744347),(35.757435,32.744282),(35.740175,32.740535),(35.685191,32.711234),(35.652015,32.686171),(35.635995,32.679143),(35.612334,32.681535),(35.612224,32.681546),(35.593827,32.670358),(35.578737,32.653434),(35.569849,32.646768),(35.562718,32.64421),(35.560547,32.640903),(35.560031,32.632686),(35.564061,32.625477),(35.572536,32.62124),(35.572536,32.615013),(35.565612,32.615013),(35.565612,32.607546),(35.571296,32.598554),(35.574397,32.572767),(35.579978,32.560391),(35.575844,32.554965),(35.574397,32.554396),(35.57388,32.556748),(35.572536,32.560391),(35.570159,32.556825),(35.565612,32.546102),(35.55941,32.55295),(35.562821,32.532021),(35.565612,32.525587),(35.551969,32.525587),(35.551969,32.518817),(35.55724,32.519334),(35.561374,32.519179),(35.568505,32.510265),(35.570573,32.506027),(35.579978,32.497733),(35.579978,32.49148),(35.570986,32.489207),(35.564991,32.48391),(35.561271,32.477089),(35.55941,32.470396),(35.563131,32.468174),(35.565405,32.466314),(35.568092,32.46497),(35.572536,32.464195),(35.572536,32.456728),(35.566232,32.453111),(35.55941,32.450527),(35.565612,32.443706),(35.551969,32.436212),(35.554139,32.434455),(35.556207,32.434791),(35.55786,32.43412),(35.55941,32.429443),(35.551969,32.429443),(35.551969,32.423242),(35.55817,32.417997),(35.5591,32.413966),(35.554863,32.411175),(35.545148,32.409573),(35.549592,32.39854),(35.551969,32.395285),(35.556827,32.390918),(35.560961,32.384717),(35.480139,32.402416),(35.456678,32.40761),(35.432803,32.408256),(35.406862,32.414793),(35.401384,32.440011),(35.401797,32.470965),(35.392909,32.494478),(35.363143,32.51011),(35.333378,32.513081),(35.270436,32.510471),(35.252246,32.515949),(35.22372,32.535896),(35.210359,32.541868),(35.208631,32.54264),(35.190751,32.54171),(35.176798,32.532692),(35.151683,32.507914),(35.120678,32.491325),(35.090705,32.479233),(35.06435,32.463136),(35.044713,32.43412),(35.03273,32.3822),(35.028797,32.365157),(35.021562,32.34459),(35.017841,32.342213),(35.004509,32.338053),(35.000685,32.335728),(34.996551,32.323041),(34.993037,32.29157),(34.989936,32.27847),(35.002442,32.275137),(35.009366,32.267644),(35.011123,32.257102),(35.007609,32.24439),(35.000271,32.232013),(34.961411,32.201576),(34.948078,32.186874),(34.946115,32.177262),(34.962341,32.146359),(34.967302,32.129461),(34.979911,32.035203),(34.980428,32.016548),(34.978671,31.994379),(34.978671,31.993707),(34.978981,31.993087),(34.979601,31.99257),(34.980324,31.992157),(34.981151,31.98885),(34.981461,31.985232),(34.981
151,31.981615),(34.980324,31.978204),(34.976087,31.948232),(34.995827,31.909165),(35.007713,31.875678),(34.980324,31.862604),(34.953246,31.854439),(34.947326,31.840925),(34.945391,31.836507),(34.954796,31.819764),(34.987766,31.814803),(35.003372,31.814648),(35.010503,31.815889),(35.020425,31.82116),(35.038099,31.837386),(35.04554,31.841417),(35.059699,31.839401),(35.111893,31.818059),(35.181449,31.804106),(35.198399,31.795166),(35.206254,31.782764),(35.208011,31.766124),(35.206564,31.744627),(35.202843,31.738942),(35.198296,31.737702),(35.193128,31.739873),(35.187443,31.744472),(35.125742,31.733103),(35.106725,31.724938),(35.088845,31.712587),(35.054015,31.682202),(34.973606,31.63037),(34.955313,31.61187),(34.953176,31.60832),(34.93733,31.582001),(34.932885,31.554923),(34.932472,31.526966),(34.926788,31.494409),(34.881726,31.429866),(34.867153,31.396431),(34.878832,31.362841),(34.900949,31.348475),(34.927408,31.34491),(34.9549,31.34863),(34.980324,31.356175),(35.040166,31.363203),(35.164913,31.362273),(35.223307,31.381031),(35.332344,31.458804),(35.390118,31.487071),(35.458125,31.491929),(35.458538,31.491619),(35.457128,31.433524),(35.456884,31.423509),(35.452854,31.400823),(35.435077,31.360619),(35.416473,31.331835),(35.423915,31.324601),(35.422261,31.303),(35.408205,31.282019),(35.3957,31.25768),(35.401177,31.230291),(35.410686,31.204608),(35.421331,31.184506),(35.436214,31.159546),(35.443242,31.132209),(35.438488,31.103736),(35.391565,31.023947),(35.385158,30.994647),(35.385261,30.963279),(35.374099,30.945141),(35.34711,30.92271),(35.334928,30.912585),(35.322216,30.88995),(35.319528,30.867316),(35.320045,30.84494),(35.316635,30.822823),(35.310847,30.813314),(35.293897,30.800188),(35.286145,30.792333),(35.279531,30.780241),(35.27612,30.768976),(35.271573,30.743706),(35.263882,30.719967),(35.263821,30.71978),(35.205324,30.617099),(35.162122,30.494677),(35.157368,30.470854),(35.140005,30.430185),(35.140005,30.406155),(35.144965,30.395872),(35.159332,30.375615),(35.162122,30.361404),(35.159952,30.347503),(35.154474,30.336754),(35.147756,30.32647),(35.141762,30.313965),(35.132356,30.261875),(35.125225,30.244667),(35.124812,30.21609),(35.145276,30.154905),(35.145276,30.123382),(35.129049,30.089741),(35.086261,30.034034),(35.074686,29.994604),(35.074065,29.982564),(35.070345,29.973727),(35.065384,29.965976),(35.061456,29.957346),(35.054118,29.923394),(35.053188,29.862623),(35.048951,29.842314),(35.002545,29.733096),(34.995104,29.708162),(34.989833,29.651964),(34.980324,29.627004),(34.966992,29.608116),(34.95986,29.586206),(34.95558,29.558987),(34.955577,29.558987),(34.951345,29.54564),(34.944998,29.536851),(34.927745,29.518012),(34.91977,29.507392),(34.915782,29.50023),(34.914073,29.493801),(34.910818,29.489936),(34.903005,29.489691),(34.893728,29.490546),(34.886729,29.490058),(34.878108,29.504298),(34.855267,29.545717),(34.848239,29.569643),(34.8501,29.63876),(34.824365,29.7417),(34.785194,29.835699),(34.741373,29.940241),(34.735068,29.994553),(34.735585,30.000702),(34.734965,30.006697),(34.733208,30.012588),(34.730417,30.018169),(34.69166,30.114545),(34.632645,30.26203),(34.599469,30.344506),(34.58841,30.35882),(34.53384,30.400213),(34.526915,30.409618),(34.524745,30.421142),(34.526502,30.438712),(34.53601,30.468581),(34.536217,30.482172),(34.526295,30.494522),(34.526295,30.494574),(34.510379,30.513332),(34.504384,30.530334),(34.502214,30.571675),(34.480407,30.651205),(34.418808,30.7913),(34.367855,30.907417),(34.329511,30.994492),(34.297989,31.078776),(34.258611,31.184144),(34.248351,31.2114
49),(34.264399,31.224193),(34.315869,31.256905),(34.350905,31.289254),(34.353799,31.306359),(34.345531,31.340724),(34.345841,31.357725),(34.367339,31.392814),(34.480407,31.485624),(34.495496,31.494409),(34.495703,31.494409),(34.528259,31.520144),(34.530429,31.541383),(34.511412,31.561589),(34.481204,31.583141),(34.481212,31.583157),(34.489513,31.600409),(34.513927,31.627143),(34.602712,31.757758),(34.60963,31.76553),(34.665945,31.873887),(34.693207,31.926459),(34.711925,31.951606),(34.743337,32.039862),(34.743988,32.044623),(34.742686,32.055609),(34.743337,32.06037),(34.757009,32.066596),(34.837657,32.280707),(34.87322,32.430325),(34.904145,32.560614),(34.909841,32.570258),(34.915863,32.614895),(34.920909,32.628241),(34.919119,32.642971),(34.942068,32.724514),(34.94752,32.814154),(34.955577,32.834377),(34.97047,32.841376),(34.983165,32.838772),(35.003429,32.827541),(35.018728,32.825019),(35.02768,32.826606),(35.062755,32.858344),(35.067068,32.866848),(35.075531,32.893093),(35.0796,32.905829),(35.07781,32.917873),(35.06544,32.923082),(35.071544,32.937934),(35.078461,32.998847),(35.09197,33.031195),(35.096202,33.05036),(35.092133,33.067694),(35.096446,33.071357),(35.101817,33.077623),(35.106456,33.080756),(35.09962,33.087592),(35.104259,33.088528),(35.105235,33.089016),(35.185273,33.083977),(35.189614,33.085527),(35.200673,33.092736),(35.207184,33.094803),(35.213178,33.094441),(35.227338,33.091366),(35.234469,33.090927),(35.271469,33.101185),(35.283665,33.101185),(35.288825,33.099201),(35.29462,33.096973),(35.305783,33.089377),(35.315704,33.079222),(35.322629,33.067363),(35.331931,33.057156),(35.34516,33.05558),(35.401591,33.067931),(35.449443,33.085217),(35.480139,33.087387),(35.4851,33.102606),(35.503703,33.130615),(35.512282,33.147409),(35.517656,33.172886),(35.520033,33.222185),(35.527784,33.244251),(35.53688,33.257868),(35.542771,33.271536),(35.549489,33.281019),(35.561167,33.28213),(35.566955,33.276187),(35.585042,33.252209),(35.597754,33.244354),(35.597858,33.24438),(35.598064,33.244251),(35.603852,33.240091),(35.603892,33.240323),(35.604576,33.244251),(35.604576,33.244303),(35.604576,33.244354),(35.601475,33.262725),(35.610363,33.27027),(35.624716,33.272924),(35.640542,33.275851),(35.66049,33.289261),(35.698523,33.32267),(35.716197,33.326727),(35.729426,33.327812),(35.743689,33.331171),(35.757538,33.336313),(35.769423,33.342643),(35.785753,33.357887),(35.8057,33.391348),(35.8211,33.406722),(35.822443,33.401373),(35.816966,33.395198),(35.815415,33.378868),(35.812315,33.373365),(35.809938,33.360032),(35.793505,33.349929),(35.785753,33.342875),(35.763842,33.334401),(35.802083,33.31249),(35.768597,33.272699),(35.775625,33.264896),(35.803633,33.248463)] +Jamaica 
[(-76.263743,18.012356),(-76.256785,17.996324),(-76.248769,17.977851),(-76.234039,17.953925),(-76.21524,17.934963),(-76.187978,17.915473),(-76.198394,17.907416),(-76.250071,17.894965),(-76.286488,17.875881),(-76.307362,17.870103),(-76.325185,17.874498),(-76.318227,17.879299),(-76.315419,17.882717),(-76.311513,17.894965),(-76.329579,17.888658),(-76.345611,17.865871),(-76.363026,17.860826),(-76.378529,17.865058),(-76.396555,17.872992),(-76.414906,17.877509),(-76.444936,17.866767),(-76.523549,17.860256),(-76.57901,17.868313),(-76.587066,17.867987),(-76.594594,17.866604),(-76.602773,17.867621),(-76.613149,17.874498),(-76.617665,17.883531),(-76.6197,17.895087),(-76.622711,17.905015),(-76.630523,17.909247),(-76.63858,17.911607),(-76.643951,17.917222),(-76.64859,17.923896),(-76.654775,17.929145),(-76.684804,17.936021),(-76.692291,17.940009),(-76.707672,17.94595),(-76.728424,17.94477),(-76.764638,17.936591),(-76.7801,17.934801),(-76.832265,17.936591),(-76.832265,17.943427),(-76.722401,17.949612),(-76.729319,17.955959),(-76.734364,17.960598),(-76.747467,17.963446),(-76.777659,17.963853),(-76.798164,17.96342),(-76.819056,17.975273),(-76.836829,17.986359),(-76.849967,17.992276),(-76.853069,17.981968),(-76.857761,17.971398),(-76.870173,17.95805),(-76.874005,17.950349),(-76.887156,17.951856),(-76.891037,17.944502),(-76.894944,17.921681),(-76.912828,17.861287),(-76.926096,17.843329),(-76.942128,17.833564),(-76.955312,17.833441),(-76.964019,17.837226),(-76.972076,17.842434),(-76.983062,17.846584),(-77.038319,17.846584),(-77.038319,17.85399),(-77.017201,17.860826),(-77.027984,17.869818),(-77.041493,17.887152),(-77.051991,17.894965),(-77.064198,17.899482),(-77.093007,17.902493),(-77.119109,17.893803),(-77.141652,17.881134),(-77.158079,17.849875),(-77.161977,17.840766),(-77.154408,17.813056),(-77.161692,17.810289),(-77.174387,17.802924),(-77.181711,17.799994),(-77.179107,17.794745),(-77.177358,17.784613),(-77.174916,17.778266),(-77.185618,17.782904),(-77.19636,17.783393),(-77.206776,17.779853),(-77.216461,17.772121),(-77.198883,17.761786),(-77.180328,17.75788),(-77.160878,17.759426),(-77.140736,17.765286),(-77.130605,17.727688),(-77.133656,17.710435),(-77.158111,17.703192),(-77.178944,17.704169),(-77.197581,17.708197),(-77.216054,17.716702),(-77.302887,17.790839),(-77.320872,17.812445),(-77.395619,17.853949),(-77.417714,17.861721),(-77.441029,17.864651),(-77.459462,17.857408),(-77.480621,17.843736),(-77.50182,17.841946),(-77.545481,17.85399),(-77.588368,17.860989),(-77.723866,17.853583),(-77.730702,17.854804),(-77.751536,17.868313),(-77.786733,17.884955),(-77.795562,17.891547),(-77.799916,17.899115),(-77.810292,17.92475),(-77.812367,17.932847),(-77.816151,17.939195),(-77.834055,17.945299),(-77.83967,17.949612),(-77.841135,17.970608),(-77.839345,17.995836),(-77.84557,18.017035),(-77.871002,18.025946),(-77.940338,18.026516),(-77.956288,18.032172),(-77.97997,18.084459),(-77.987864,18.095364),(-77.996449,18.10102),(-78.006418,18.105862),(-78.014394,18.107856),(-78.020579,18.113105),(-78.02538,18.12519),(-78.031484,18.148871),(-78.050282,18.185126),(-78.075185,18.198432),(-78.148793,18.196682),(-78.193756,18.203925),(-78.210194,18.204088),(-78.219594,18.195868),(-78.227284,18.19184),(-78.234039,18.193183),(-78.258656,18.202623),(-78.261138,18.204088),(-78.275136,18.206244),(-78.305898,18.215644),(-78.32315,18.217719),(-78.340728,18.221869),(-78.357737,18.232733),(-78.370269,18.248114),(-78.374664,18.26557),(-78.36913,18.273871),(-78.354319,18.28384),(-78.34732,18.292222),(-78.34439,18.305365),(-78.345815,18.33551
7),(-78.343902,18.344062),(-78.336293,18.355902),(-78.335439,18.36343),(-78.331939,18.367377),(-78.316396,18.368598),(-78.30663,18.374661),(-78.298695,18.386379),(-78.288401,18.394355),(-78.271637,18.389065),(-78.261301,18.417385),(-78.230621,18.444241),(-78.194976,18.455471),(-78.16926,18.436835),(-78.154408,18.453518),(-78.138539,18.453843),(-78.119985,18.447333),(-78.097239,18.443061),(-78.032541,18.450629),(-78.010976,18.450507),(-78.000152,18.446682),(-77.992421,18.443996),(-77.981353,18.441352),(-77.96996,18.443061),(-77.959828,18.449897),(-77.958852,18.456448),(-77.959137,18.464097),(-77.952952,18.474433),(-77.930287,18.502021),(-77.919993,18.511176),(-77.902333,18.518785),(-77.861887,18.525092),(-77.81607,18.523424),(-77.735544,18.5067),(-77.66454,18.491929),(-77.56725,18.49018),(-77.524281,18.480211),(-77.454986,18.475328),(-77.428863,18.469224),(-77.408274,18.457343),(-77.376536,18.465155),(-77.325307,18.465033),(-77.272613,18.459377),(-77.236318,18.450507),(-77.231842,18.446031),(-77.228139,18.433661),(-77.223297,18.429389),(-77.217437,18.429348),(-77.212026,18.431789),(-77.208852,18.434963),(-77.209625,18.436835),(-77.164947,18.429389),(-77.159413,18.426215),(-77.142568,18.412095),(-77.137359,18.408922),(-77.104726,18.407864),(-77.070343,18.413357),(-77.051991,18.416327),(-77.040028,18.413479),(-77.019154,18.40412),(-77.007314,18.402086),(-76.94815,18.404486),(-76.936615,18.410295),(-76.920338,18.412195),(-76.907118,18.411205),(-76.894919,18.409251),(-76.887827,18.400552),(-76.890916,18.389947),(-76.890948,18.376441),(-76.879783,18.366769),(-76.844211,18.345447),(-76.812754,18.32801),(-76.807622,18.318308),(-76.798329,18.29206),(-76.792388,18.283393),(-76.782826,18.277818),(-76.764638,18.273017),(-76.71288,18.269192),(-76.695709,18.26557),(-76.68102,18.258124),(-76.661977,18.239814),(-76.647328,18.23078),(-76.63036,18.225735),(-76.597158,18.227748),(-76.57977,18.21603),(-76.574208,18.214667),(-76.569203,18.21011),(-76.563059,18.205959),(-76.554799,18.204088),(-76.479726,18.204088),(-76.474477,18.201077),(-76.466624,18.186754),(-76.463002,18.182359),(-76.389394,18.169176),(-76.351145,18.155992),(-76.346262,18.135199),(-76.31786,18.10928),(-76.295806,18.079169),(-76.263743,18.012356)] +Jersey [(-2.082672,49.26024),(-2.06786,49.250678),(-2.02066,49.235256),(-2.021962,49.225491),(-2.017323,49.221381),(-2.011871,49.215277),(-2.00829,49.21247),(-2.016998,49.207221),(-2.020985,49.200751),(-2.022043,49.19245),(-2.021962,49.181383),(-2.024526,49.171332),(-2.031158,49.171454),(-2.040028,49.175727),(-2.049224,49.17829),(-2.079986,49.178697),(-2.096995,49.183661),(-2.120229,49.199164),(-2.128896,49.20307),(-2.137929,49.205634),(-2.144846,49.205634),(-2.152943,49.199937),(-2.160634,49.190741),(-2.168691,49.17829),(-2.177154,49.180162),(-2.189687,49.189114),(-2.200103,49.191962),(-2.202952,49.190416),(-2.223378,49.184719),(-2.233632,49.184516),(-2.226471,49.206),(-2.233225,49.228502),(-2.242014,49.247952),(-2.241119,49.26024),(-2.22936,49.263007),(-2.215199,49.26024),(-2.196278,49.253404),(-2.182729,49.254055),(-2.172353,49.256252),(-2.162709,49.260403),(-2.151682,49.267035),(-2.143788,49.261868),(-2.121978,49.258287),(-2.110666,49.253404),(-2.103871,49.258205),(-2.097564,49.25967),(-2.082672,49.26024)] +Jordan 
[(39.046329,32.308494),(39.235775,32.352858),(39.256342,32.342678),(39.271122,32.311956),(39.291999,32.244519),(39.266471,32.212867),(39.146168,32.125844),(39.146375,32.118144),(39.136246,32.115353),(39.116816,32.102899),(38.998064,32.006936),(38.96344,31.994482),(38.96344,31.994327),(38.963337,31.994379),(38.849752,31.966319),(38.624856,31.91087),(38.400167,31.855369),(38.175375,31.799921),(37.986522,31.753358),(37.950479,31.744472),(37.761412,31.69612),(37.702742,31.681116),(37.455005,31.617709),(37.207269,31.554354),(36.997912,31.500814),(36.959532,31.490999),(37.089653,31.370076),(37.219774,31.249101),(37.221722,31.247292),(37.349895,31.12823),(37.480017,31.007256),(37.483117,31.004103),(37.486321,31.0009),(37.489422,30.997696),(37.492626,30.994492),(37.602283,30.883232),(37.712044,30.772025),(37.821908,30.660869),(37.931565,30.549661),(37.981071,30.499483),(37.981381,30.498811),(37.980968,30.498398),(37.980038,30.498088),(37.97332,30.494522),(37.900146,30.459072),(37.779327,30.400419),(37.670703,30.347606),(37.647552,30.330863),(37.634529,30.312776),(37.605177,30.250713),(37.56921,30.174955),(37.536137,30.105295),(37.491696,30.011193),(37.470198,29.994553),(37.352376,29.973314),(37.218534,29.949129),(37.075804,29.923291),(36.931936,29.897246),(36.84295,29.881226),(36.756237,29.865517),(36.728745,29.853528),(36.70487,29.831152),(36.649576,29.7494),(36.603584,29.681471),(36.541263,29.589409),(36.477081,29.494609),(36.399979,29.438902),(36.283707,29.35485),(36.177977,29.278369),(36.069457,29.200028),(36.043825,29.190881),(36.016437,29.189951),(35.912464,29.205686),(35.797225,29.223075),(35.740251,29.231731),(35.622042,29.249689),(35.473628,29.272065),(35.334618,29.293148),(35.179072,29.316661),(35.060319,29.334696),(34.949385,29.351686),(34.949392,29.351711),(34.962413,29.359768),(34.969005,29.450832),(34.976085,29.477037),(34.997813,29.517971),(34.996837,29.533881),(34.976085,29.552151),(34.962413,29.552151),(34.961599,29.555406),(34.961599,29.558092),(34.960216,29.559516),(34.95558,29.558987),(34.95986,29.586206),(34.966992,29.608116),(34.980324,29.627004),(34.989833,29.651964),(34.995104,29.708162),(35.002545,29.733096),(35.048951,29.842314),(35.053188,29.862623),(35.054118,29.923394),(35.061456,29.957346),(35.065384,29.965976),(35.070345,29.973727),(35.074065,29.982564),(35.074686,29.994604),(35.086261,30.034034),(35.129049,30.089741),(35.145276,30.123382),(35.145276,30.154905),(35.124812,30.21609),(35.125225,30.244667),(35.132356,30.261875),(35.141762,30.313965),(35.147756,30.32647),(35.154474,30.336754),(35.159952,30.347503),(35.162122,30.361404),(35.159332,30.375615),(35.144965,30.395872),(35.140005,30.406155),(35.140005,30.430185),(35.157368,30.470854),(35.162122,30.494677),(35.205324,30.617099),(35.263821,30.71978),(35.263882,30.719967),(35.271573,30.743706),(35.27612,30.768976),(35.279531,30.780241),(35.286145,30.792333),(35.293897,30.800188),(35.310847,30.813314),(35.316635,30.822823),(35.320045,30.84494),(35.319528,30.867316),(35.322216,30.88995),(35.334928,30.912585),(35.34711,30.92271),(35.374099,30.945141),(35.385261,30.963279),(35.385158,30.994647),(35.391565,31.023947),(35.438488,31.103736),(35.443242,31.132209),(35.436214,31.159546),(35.421331,31.184506),(35.410686,31.204608),(35.401177,31.230291),(35.3957,31.25768),(35.408205,31.282019),(35.422261,31.303),(35.423915,31.324601),(35.416473,31.331835),(35.435077,31.360619),(35.452854,31.400823),(35.456884,31.423509),(35.457128,31.433524),(35.458538,31.491619),(35.458125,31.491929),(35.458745,31.491567),(35.459158,31.491877
),(35.459055,31.492808),(35.458745,31.494409),(35.464222,31.568565),(35.480139,31.641119),(35.502479,31.68536),(35.527578,31.735067),(35.55941,31.765349),(35.538326,31.819299),(35.538326,31.826741),(35.549075,31.839195),(35.524684,31.919241),(35.527474,31.927355),(35.533676,31.9303),(35.5406,31.932006),(35.545148,31.936605),(35.545871,31.944563),(35.53998,31.955622),(35.538326,31.963942),(35.537706,31.977584),(35.535536,31.988229),(35.524684,32.011691),(35.522824,32.057838),(35.528301,32.075098),(35.545148,32.086828),(35.534606,32.09923),(35.535226,32.110806),(35.551969,32.135197),(35.546698,32.141605),(35.546698,32.147031),(35.551246,32.151527),(35.55941,32.155093),(35.55941,32.162534),(35.555173,32.174394),(35.559562,32.190371),(35.572536,32.237594),(35.55941,32.237594),(35.561064,32.243149),(35.563751,32.246818),(35.567575,32.249506),(35.572536,32.251908),(35.564578,32.263587),(35.560857,32.28263),(35.561167,32.301699),(35.565612,32.313352),(35.556517,32.328209),(35.557343,32.358207),(35.551969,32.367948),(35.551969,32.374821),(35.55941,32.374821),(35.55941,32.367948),(35.565612,32.367948),(35.563958,32.377043),(35.560961,32.384717),(35.556827,32.390918),(35.551969,32.395285),(35.549592,32.39854),(35.545148,32.409573),(35.554863,32.411175),(35.5591,32.413966),(35.55817,32.417997),(35.551969,32.423242),(35.551969,32.429443),(35.55941,32.429443),(35.55786,32.43412),(35.556207,32.434791),(35.554139,32.434455),(35.551969,32.436212),(35.565612,32.443706),(35.55941,32.450527),(35.566232,32.453111),(35.572536,32.456728),(35.572536,32.464195),(35.568092,32.46497),(35.565405,32.466314),(35.563131,32.468174),(35.55941,32.470396),(35.561271,32.477089),(35.564991,32.48391),(35.570986,32.489207),(35.579978,32.49148),(35.579978,32.497733),(35.570573,32.506027),(35.568505,32.510265),(35.561374,32.519179),(35.55724,32.519334),(35.551969,32.518817),(35.551969,32.525587),(35.565612,32.525587),(35.562821,32.532021),(35.55941,32.55295),(35.565612,32.546102),(35.570159,32.556825),(35.572536,32.560391),(35.57388,32.556748),(35.574397,32.554396),(35.575844,32.554965),(35.579978,32.560391),(35.574397,32.572767),(35.571296,32.598554),(35.565612,32.607546),(35.565612,32.615013),(35.572536,32.615013),(35.572536,32.62124),(35.564061,32.625477),(35.560031,32.632686),(35.560547,32.640903),(35.562718,32.64421),(35.569849,32.646768),(35.578737,32.653434),(35.593827,32.670358),(35.612224,32.681546),(35.612334,32.681535),(35.635995,32.679143),(35.652015,32.686171),(35.685191,32.711234),(35.740175,32.740535),(35.757435,32.744282),(35.75759,32.744347),(35.763842,32.746969),(35.769734,32.748054),(35.774901,32.747279),(35.779139,32.744514),(35.779139,32.744462),(35.779035,32.744359),(35.779035,32.744282),(35.788234,32.734411),(35.895721,32.713276),(35.905229,32.708573),(35.922489,32.693768),(35.92745,32.692373),(35.940369,32.692502),(35.944193,32.690771),(35.945743,32.684104),(35.9444,32.677619),(35.941196,32.673536),(35.937475,32.674002),(35.94657,32.664441),(35.955355,32.657439),(35.965794,32.654365),(35.980263,32.656612),(36.003621,32.655088),(36.008272,32.643719),(36.005275,32.626692),(36.005998,32.607907),(36.015403,32.591164),(36.060465,32.533261),(36.066046,32.521608),(36.066253,32.517319),(36.06987,32.516595),(36.081906,32.516265),(36.096225,32.515872),(36.133226,32.520109),(36.13953,32.519541),(36.149865,32.51613),(36.15586,32.5152),(36.160821,32.517215),(36.17219,32.525923),(36.177357,32.527318),(36.188209,32.52228),(36.220765,32.494581),(36.285258,32.456935),(36.373108,32.386422),(36.387887,32.379317),(36.407627,3
2.374227),(36.463955,32.369395),(36.480181,32.360791),(36.516684,32.357014),(36.653504,32.342859),(36.689574,32.319656),(36.706937,32.328338),(36.728641,32.327795),(36.792513,32.313533),(36.806569,32.313042),(36.819385,32.316788),(36.980099,32.410038),(37.133165,32.494478),(37.133165,32.494529),(37.133371,32.494529),(37.133371,32.494581),(37.244062,32.554396),(37.415214,32.64713),(37.494606,32.690056),(37.586677,32.739837),(37.758036,32.832519),(37.929395,32.92533),(38.056726,32.994292),(38.056726,32.994344),(38.230875,33.086302),(38.315742,33.13118),(38.529565,33.244251),(38.774511,33.371685),(38.82102,33.229032),(38.862568,33.10072),(38.897191,32.994344),(38.94277,32.852337),(38.990002,32.705576),(39.057181,32.496596),(38.979977,32.476055),(38.978633,32.475693),(38.97822,32.47497),(38.978633,32.47373),(38.979977,32.472102),(39.028759,32.328338),(39.036201,32.313352),(39.046329,32.308494)] +Baykonur Cosmodrome [(63.383914,45.565824),(63.324279,45.567969),(63.26635,45.574222),(63.210488,45.584376),(63.156899,45.598251),(63.105946,45.615615),(63.057835,45.636285),(63.012929,45.660005),(62.971484,45.686644),(62.933864,45.715945),(62.900274,45.7477),(62.871025,45.781703),(62.846479,45.817773),(62.826893,45.855703),(62.812527,45.895184),(62.803639,45.936164),(62.800642,45.978332),(62.803639,46.0205),(62.812527,46.061427),(62.826893,46.100986),(62.846479,46.13889),(62.871025,46.17496),(62.900274,46.208964),(62.933864,46.240719),(62.971484,46.270019),(63.012929,46.296658),(63.057835,46.320378),(63.105946,46.341048),(63.156899,46.358412),(63.210488,46.372261),(63.26635,46.382441),(63.324279,46.388694),(63.383914,46.390839),(63.443548,46.388694),(63.501477,46.382441),(63.557391,46.372261),(63.610928,46.358412),(63.661881,46.341048),(63.70994,46.320378),(63.754899,46.296658),(63.796343,46.270019),(63.833964,46.240719),(63.867553,46.208964),(63.896699,46.17496),(63.921297,46.13889),(63.940934,46.100986),(63.9553,46.061427),(63.964085,46.0205),(63.967134,45.978332),(63.964085,45.936164),(63.9553,45.895184),(63.940934,45.855703),(63.921297,45.817773),(63.896699,45.781703),(63.867553,45.7477),(63.833964,45.715945),(63.796343,45.686644),(63.754899,45.660005),(63.70994,45.636285),(63.661881,45.615615),(63.610928,45.598251),(63.557391,45.584376),(63.501477,45.574222),(63.443548,45.567969),(63.383914,45.565824)] +Siachen Glacier [(76.976417,35.578656),(77.017552,35.584599),(77.039049,35.585013),(77.058169,35.580103),(77.071089,35.571008),(77.092276,35.547805),(77.105298,35.537987),(77.141265,35.525378),(77.177542,35.523156),(77.250819,35.530701),(77.281102,35.528013),(77.305389,35.517833),(77.352002,35.486052),(77.383731,35.471996),(77.412773,35.469412),(77.476438,35.477164),(77.512199,35.478456),(77.519608,35.477439),(77.660613,35.458095),(77.689655,35.462694),(77.716734,35.475407),(77.74743,35.49432),(77.760349,35.498093),(77.773578,35.498971),(77.800346,35.495406),(77.424659,35.302924),(77.048971,35.110442),(76.913161,35.378277),(76.777351,35.646112),(76.782837,35.645732),(76.824592,35.647799),(76.842989,35.641288),(76.856941,35.628576),(76.880506,35.60067),(76.896422,35.589612),(76.915026,35.583152),(76.9358,35.579638),(76.976417,35.578656)] +Kosovo 
[(20.8647,43.217337),(20.8616,43.217518),(20.851471,43.219818),(20.839999,43.212092),(20.840619,43.206976),(20.839069,43.192455),(20.836071,43.179433),(20.832041,43.178606),(20.838552,43.170467),(20.912494,43.138805),(20.993431,43.104148),(21.005157,43.099128),(21.025827,43.093366),(21.092593,43.090678),(21.10851,43.081558),(21.124012,43.058277),(21.139309,43.005826),(21.147887,42.992622),(21.165354,42.985103),(21.17941,42.990581),(21.193052,42.997893),(21.209795,42.995878),(21.225298,42.973554),(21.226745,42.942522),(21.23243,42.910896),(21.260542,42.886556),(21.2716,42.884283),(21.294958,42.884386),(21.306327,42.882448),(21.316972,42.877616),(21.336403,42.865473),(21.346738,42.860847),(21.378777,42.855292),(21.398931,42.854569),(21.40844,42.846998),(21.408419,42.841743),(21.408336,42.820747),(21.404099,42.803978),(21.39056,42.770414),(21.387149,42.754962),(21.383945,42.749976),(21.379191,42.747004),(21.378674,42.744136),(21.388699,42.739046),(21.405546,42.7353),(21.4178,42.73557),(21.441823,42.736101),(21.542488,42.725817),(21.565226,42.720184),(21.580522,42.710211),(21.612665,42.680393),(21.629408,42.672203),(21.644084,42.672306),(21.687389,42.686827),(21.708886,42.687189),(21.738652,42.68215),(21.74425,42.679425),(21.764387,42.669619),(21.772758,42.647501),(21.767074,42.638716),(21.756532,42.633626),(21.744543,42.629647),(21.734725,42.624247),(21.735551,42.621276),(21.730074,42.601018),(21.728833,42.598305),(21.726973,42.596393),(21.720152,42.593887),(21.718291,42.591019),(21.719738,42.586678),(21.726766,42.577945),(21.727697,42.574147),(21.717671,42.551151),(21.667855,42.490095),(21.627858,42.460381),(21.618969,42.449245),(21.616902,42.433923),(21.621863,42.402167),(21.617419,42.386639),(21.596645,42.372092),(21.537321,42.35863),(21.51603,42.341939),(21.514893,42.317832),(21.553857,42.273984),(21.564066,42.246289),(21.561815,42.247164),(21.519854,42.239025),(21.499287,42.238663),(21.48151,42.247784),(21.471973,42.23913),(21.467557,42.235123),(21.457222,42.237035),(21.44792,42.244141),(21.436345,42.24688),(21.419912,42.240058),(21.421462,42.231351),(21.428697,42.222566),(21.429834,42.215848),(21.419085,42.215021),(21.384255,42.224943),(21.366788,42.223858),(21.360602,42.220114),(21.353766,42.215977),(21.294958,42.148979),(21.293718,42.140168),(21.295785,42.134794),(21.298782,42.129419),(21.300332,42.120893),(21.301263,42.10061),(21.300796,42.098425),(21.299299,42.091411),(21.28886,42.089706),(21.245727,42.096167),(21.237804,42.097354),(21.229298,42.103822),(21.225402,42.106785),(21.216203,42.121151),(21.199873,42.14115),(21.164423,42.16704),(21.12639,42.188925),(21.112954,42.194402),(21.106236,42.195798),(21.098484,42.195953),(21.074403,42.184455),(21.04094,42.15997),(21.029238,42.151408),(21.003916,42.141951),(20.975145,42.134658),(20.90417,42.116668),(20.810543,42.092936),(20.784912,42.082032),(20.765378,42.064333),(20.755353,42.042784),(20.743054,41.993484),(20.741814,41.970773),(20.751439,41.940338),(20.754423,41.930904),(20.751052,41.910218),(20.750495,41.906797),(20.739953,41.888039),(20.723313,41.866619),(20.714398,41.859163),(20.702953,41.849591),(20.681456,41.84401),(20.671844,41.849307),(20.652827,41.86928),(20.643422,41.873647),(20.637014,41.870365),(20.626162,41.855198),(20.61872,41.850522),(20.602391,41.849876),(20.590298,41.854733),(20.567147,41.873182),(20.567767,41.88052),(20.562703,41.892845),(20.562496,41.900105),(20.567251,41.912172),(20.573348,41.917675),(20.580583,41.921732),(20.588748,41.929586),(20.59929,41.94788),(20.59929,41.960567),(20.594329,41.973718),(20.589
678,41.993639),(20.558259,42.055109),(20.552161,42.07379),(20.551954,42.105803),(20.549371,42.123476),(20.538622,42.150116),(20.500795,42.211223),(20.481674,42.230705),(20.473665,42.237123),(20.456973,42.250497),(20.333363,42.317883),(20.317964,42.319821),(20.249647,42.318607),(20.237762,42.319924),(20.229597,42.32672),(20.220915,42.343101),(20.219468,42.350207),(20.221225,42.36372),(20.218848,42.371446),(20.21306,42.377518),(20.197454,42.38744),(20.192803,42.393693),(20.194043,42.4001),(20.204379,42.411831),(20.204689,42.420099),(20.199728,42.427799),(20.186292,42.437437),(20.180814,42.443095),(20.152702,42.493402),(20.152702,42.493428),(20.152599,42.493609),(20.152599,42.493712),(20.152392,42.493712),(20.135546,42.509629),(20.085626,42.530015),(20.064956,42.546758),(20.077048,42.55991),(20.078185,42.572906),(20.075394,42.586962),(20.075704,42.603085),(20.079218,42.61125),(20.090587,42.627348),(20.10392,42.653108),(20.101956,42.656674),(20.094205,42.666983),(20.037006,42.707363),(20.03612,42.707989),(20.024751,42.723414),(20.026508,42.743206),(20.034673,42.751423),(20.055638,42.763866),(20.065059,42.769458),(20.076325,42.773437),(20.112085,42.766538),(20.149498,42.749872),(20.183398,42.742508),(20.208409,42.763282),(20.20996,42.772972),(20.208409,42.782093),(20.209236,42.791704),(20.217608,42.802737),(20.226083,42.806768),(20.264427,42.817258),(20.345352,42.827439),(20.428034,42.840642),(20.4763,42.855525),(20.498831,42.877875),(20.494367,42.887467),(20.488909,42.899191),(20.466688,42.909501),(20.450979,42.922032),(20.459454,42.950015),(20.476403,42.966371),(20.493457,42.970195),(20.51113,42.970195),(20.530457,42.975156),(20.538415,42.980659),(20.543893,42.987248),(20.553401,43.002596),(20.562393,43.009495),(20.572832,43.00934),(20.584304,43.006911),(20.596396,43.007092),(20.617273,43.02213),(20.643835,43.052257),(20.664919,43.085407),(20.669157,43.109799),(20.661818,43.115948),(20.65169,43.11631),(20.640941,43.115276),(20.632053,43.117266),(20.626058,43.123725),(20.620581,43.133337),(20.612106,43.154938),(20.600117,43.173826),(20.597533,43.184962),(20.604044,43.197959),(20.612312,43.202274),(20.644869,43.203307),(20.666986,43.209663),(20.745328,43.252865),(20.769512,43.260849),(20.79442,43.263071),(20.809717,43.25961),(20.819432,43.257412),(20.838449,43.245863),(20.848474,43.238137),(20.855088,43.231445),(20.8647,43.217337)] +Laos 
[(101.867921,22.378842),(101.877275,22.391864),(101.881512,22.412173),(101.891641,22.429691),(101.909366,22.435893),(101.952877,22.436874),(101.97422,22.445143),(101.994994,22.447416),(102.014579,22.446073),(102.075919,22.43243),(102.085531,22.429123),(102.095349,22.423129),(102.099948,22.41486),(102.10062,22.405765),(102.107345,22.397549),(102.118655,22.397549),(102.125425,22.383648),(102.131368,22.373726),(102.141393,22.342152),(102.144183,22.338586),(102.152865,22.332488),(102.156121,22.328096),(102.156121,22.32267),(102.152813,22.310836),(102.154364,22.304686),(102.168885,22.289648),(102.186196,22.278383),(102.200562,22.265464),(102.20697,22.245413),(102.226246,22.228412),(102.286449,22.200041),(102.358072,22.134722),(102.38944,22.117101),(102.398793,22.108884),(102.405408,22.098291),(102.415226,22.073486),(102.422151,22.06222),(102.442615,22.047441),(102.467626,22.034005),(102.482871,22.021396),(102.474086,22.00889),(102.471244,22.005996),(102.469073,22.002844),(102.467316,21.999227),(102.466386,21.995351),(102.478065,21.957576),(102.515892,21.939179),(102.56023,21.925433),(102.59103,21.901352),(102.625136,21.828643),(102.634593,21.787767),(102.629787,21.729114),(102.631906,21.706273),(102.636815,21.683639),(102.643326,21.668033),(102.650664,21.657801),(102.653455,21.656302),(102.657434,21.658421),(102.710299,21.659299),(102.721151,21.661676),(102.752157,21.6809),(102.774688,21.707875),(102.786676,21.739914),(102.789157,21.820323),(102.806727,21.836084),(102.826467,21.821305),(102.834942,21.775623),(102.831893,21.73578),(102.835149,21.716453),(102.84719,21.704464),(102.864501,21.705653),(102.937003,21.735057),(102.942636,21.738984),(102.947804,21.737124),(102.95602,21.723946),(102.959896,21.713043),(102.961756,21.701519),(102.960826,21.632841),(102.964133,21.608605),(102.974107,21.586694),(102.974934,21.583903),(102.975295,21.581009),(102.974934,21.578167),(102.974107,21.575325),(102.954108,21.55729),(102.916591,21.513572),(102.89499,21.495278),(102.84967,21.425412),(102.86476,21.432181),(102.884758,21.444067),(102.904189,21.452697),(102.918038,21.450113),(102.916798,21.438641),(102.883828,21.38836),(102.879642,21.367896),(102.878816,21.322162),(102.875198,21.305316),(102.863003,21.293792),(102.829155,21.285885),(102.812773,21.273896),(102.801301,21.254828),(102.809156,21.25059),(102.828173,21.252451),(102.850187,21.251676),(102.865121,21.246198),(102.878144,21.23762),(102.887859,21.226923),(102.892975,21.215089),(102.891786,21.205994),(102.883001,21.186098),(102.883001,21.176435),(102.889099,21.166409),(102.89592,21.164549),(102.904189,21.164187),(102.914421,21.1584),(102.925634,21.141656),(102.925885,21.14064),(102.935298,21.102382),(102.942171,21.084864),(102.946822,21.075872),(102.951576,21.068999),(102.959482,21.065795),(102.974107,21.067656),(102.99917,21.057372),(103.014518,21.040526),(103.0371,20.995464),(103.06821,20.931282),(103.08671,20.901981),(103.115287,20.868391),(103.130015,20.854904),(103.136371,20.85046),(103.146965,20.846016),(103.165051,20.84369),(103.175232,20.84121),(103.183655,20.837076),(103.197091,20.827825),(103.206909,20.82457),(103.218588,20.824363),(103.225829,20.826413),(103.238122,20.829892),(103.247113,20.830719),(103.263392,20.826378),(103.319822,20.797853),(103.347676,20.789068),(103.358321,20.787725),(103.363799,20.787621),(103.389115,20.794288),(103.390877,20.794753),(103.411238,20.805605),(103.418266,20.811806),(103.421315,20.81811),(103.426586,20.820694),(103.440435,20.815733),(103.452579,20.804313),(103.470872,20.770878),(103.481621,20.757907),(10
3.489373,20.752481),(103.497331,20.748347),(103.505961,20.74597),(103.516089,20.745505),(103.516089,20.745557),(103.516141,20.745505),(103.516141,20.74535),(103.577946,20.732896),(103.597376,20.722354),(103.63603,20.682821),(103.657941,20.664735),(103.661145,20.660756),(103.665383,20.658275),(103.676338,20.656518),(103.685433,20.657655),(103.697629,20.661531),(103.708481,20.66675),(103.714268,20.671711),(103.713752,20.679824),(103.703623,20.699254),(103.702383,20.708711),(103.70724,20.721837),(103.713442,20.725764),(103.72202,20.72685),(103.734371,20.730932),(103.754576,20.742921),(103.758607,20.749742),(103.755196,20.759406),(103.753026,20.77987),(103.756746,20.795993),(103.76517,20.81625),(103.775557,20.834905),(103.785324,20.845964),(103.796434,20.84953),(103.816691,20.846894),(103.82682,20.847669),(103.83576,20.852268),(103.850488,20.864774),(103.858601,20.86958),(103.932136,20.891801),(103.952187,20.900276),(103.962315,20.902705),(103.973994,20.902343),(104.010581,20.908079),(104.056676,20.958825),(104.092282,20.961202),(104.099155,20.957688),(104.114244,20.947043),(104.122616,20.943167),(104.132072,20.941669),(104.154758,20.941255),(104.165404,20.939757),(104.193877,20.929886),(104.201267,20.925442),(104.207107,20.917639),(104.217545,20.89702),(104.221059,20.892008),(104.228036,20.892318),(104.250566,20.89826),(104.260023,20.898932),(104.269067,20.896193),(104.278937,20.891542),(104.287877,20.885755),(104.295267,20.879812),(104.307204,20.864619),(104.323224,20.833872),(104.336349,20.81904),(104.344669,20.813666),(104.360327,20.808085),(104.367975,20.803124),(104.37335,20.795734),(104.38167,20.778371),(104.389059,20.770981),(104.409316,20.765194),(104.432157,20.765762),(104.452621,20.763592),(104.466161,20.749639),(104.475669,20.71822),(104.492309,20.702097),(104.543882,20.678739),(104.600726,20.660549),(104.615195,20.646493),(104.614071,20.642616),(104.607237,20.619053),(104.542642,20.535699),(104.520731,20.519265),(104.495823,20.511927),(104.469675,20.516165),(104.444973,20.534613),(104.423786,20.495443),(104.423682,20.495443),(104.366425,20.458494),(104.35981,20.439735),(104.39154,20.420667),(104.432364,20.411055),(104.575508,20.414052),(104.589357,20.406818),(104.612301,20.376742),(104.628425,20.365812),(104.666407,20.351782),(104.680928,20.340982),(104.684648,20.333049),(104.687697,20.321706),(104.689299,20.310002),(104.688628,20.301036),(104.681858,20.287471),(104.674106,20.2853),(104.665838,20.285714),(104.657363,20.279616),(104.653539,20.266464),(104.651989,20.246104),(104.653023,20.225795),(104.657053,20.212695),(104.69085,20.196546),(104.733586,20.204763),(104.778803,20.219025),(104.819162,20.220989),(104.907994,20.175333),(104.908976,20.167814),(104.903446,20.157091),(104.904893,20.134638),(104.918381,20.115724),(104.95688,20.091798),(104.964425,20.076088),(104.961169,20.072109),(104.947475,20.069525),(104.943961,20.067097),(104.943651,20.060947),(104.946441,20.052188),(104.948302,20.033662),(104.951454,20.021518),(104.953211,20.009271),(104.94949,19.995499),(104.945688,19.992373),(104.935693,19.984156),(104.922463,19.982244),(104.892905,19.9855),(104.873836,19.982657),(104.86903,19.977748),(104.86717,19.969144),(104.85792,19.955217),(104.835337,19.938887),(104.780663,19.908088),(104.761801,19.885506),(104.755549,19.867006),(104.760716,19.86145),(104.790275,19.859667),(104.80924,19.853673),(104.81663,19.848092),(104.816888,19.838997),(104.81508,19.822409),(104.805003,19.790886),(104.785211,19.776933),(104.760716,19.766521),(104.736738,19.745669),(104.717773,19.740192),(104.7
04182,19.738693),(104.691676,19.735463),(104.675967,19.724689),(104.663048,19.711227),(104.65478,19.697972),(104.62305,19.61808),(104.614265,19.607435),(104.604963,19.606479),(104.587807,19.615961),(104.579229,19.618028),(104.570237,19.61529),(104.554837,19.605445),(104.546156,19.601931),(104.522798,19.599373),(104.505848,19.60281),(104.449107,19.635159),(104.423786,19.645779),(104.415414,19.650817),(104.401255,19.664434),(104.391746,19.676836),(104.380068,19.685285),(104.360017,19.687456),(104.341879,19.683012),(104.295267,19.654874),(104.277025,19.68389),(104.242143,19.695181),(104.228966,19.706008),(104.200203,19.698586),(104.158169,19.68774),(104.13631,19.687456),(104.131246,19.682443),(104.124476,19.675906),(104.11719,19.660997),(104.109231,19.654874),(104.088716,19.651902),(104.069699,19.658026),(104.021743,19.681435),(104.010271,19.684562),(104.000969,19.680738),(103.995182,19.664899),(103.997662,19.652522),(104.017351,19.600407),(104.028926,19.586041),(104.06019,19.568161),(104.071559,19.559143),(104.077967,19.542891),(104.079259,19.525709),(104.082515,19.509301),(104.093987,19.495633),(104.093987,19.495581),(104.094142,19.495529),(104.094142,19.495478),(104.089956,19.48367),(104.081274,19.479303),(104.070526,19.477029),(104.060501,19.471293),(104.054041,19.46057),(104.049907,19.437471),(104.046548,19.427497),(104.038796,19.41339),(104.03828,19.412925),(104.035127,19.416671),(104.0204,19.414992),(103.99668,19.40693),(103.973994,19.395665),(103.952807,19.390962),(103.941025,19.38135),(103.921284,19.352773),(103.905885,19.341792),(103.870021,19.326883),(103.85674,19.317194),(103.84842,19.299908),(103.858549,19.295335),(103.876739,19.2958),(103.892655,19.293888),(103.905885,19.270039),(103.910949,19.263011),(103.917977,19.258154),(103.943143,19.245596),(103.972754,19.225288),(103.973994,19.225288),(103.995182,19.23384),(104.016162,19.234874),(104.036729,19.230249),(104.14773,19.182887),(104.187831,19.156584),(104.198115,19.146946),(104.201732,19.127051),(104.201836,19.109171),(104.210621,19.100747),(104.239559,19.108886),(104.259603,19.10413),(104.25992,19.104055),(104.274338,19.093461),(104.287257,19.08049),(104.302863,19.068656),(104.321983,19.060466),(104.33914,19.055195),(104.35516,19.047882),(104.372006,19.033568),(104.400738,18.992279),(104.413244,18.983106),(104.431072,18.979334),(104.46213,18.982408),(104.48094,18.977706),(104.497683,18.967991),(104.509879,18.956363),(104.544295,18.906806),(104.556078,18.89647),(104.589667,18.879107),(104.603982,18.868694),(104.631215,18.842365),(104.646201,18.832547),(104.65633,18.829704),(104.675657,18.828206),(104.685579,18.824769),(104.69085,18.819318),(104.702529,18.802471),(104.71028,18.796089),(104.721959,18.792007),(104.762887,18.786451),(104.801644,18.775134),(104.81539,18.773661),(104.872906,18.777976),(104.899364,18.770612),(104.909234,18.745679),(104.919725,18.735576),(104.937243,18.731158),(104.95688,18.731674),(104.97383,18.736067),(104.99016,18.733018),(105.040337,18.705009),(105.064574,18.696638),(105.076976,18.697516),(105.088655,18.700565),(105.100282,18.702064),(105.112684,18.698343),(105.141106,18.653074),(105.162862,18.638993),(105.169477,18.630569),(105.169477,18.618327),(105.169477,18.61486),(105.162087,18.598194),(105.152062,18.596799),(105.140331,18.600468),(105.128136,18.598918),(105.113615,18.580831),(105.100489,18.551169),(105.085244,18.495771),(105.079973,18.454353),(105.090774,18.426525),(105.113976,18.405131),(105.145447,18.383168),(105.162862,18.366012),(105.162242,18.353894),(105.155989,18.340639),(105.156661,18
.319839),(105.166169,18.306791),(105.214745,18.277852),(105.220481,18.270798),(105.229266,18.255321),(105.235933,18.249456),(105.246475,18.246381),(105.25692,18.247464),(105.283113,18.250179),(105.288023,18.253409),(105.292518,18.252737),(105.299288,18.24204),(105.300838,18.23093),(105.294379,18.206952),(105.29505,18.195609),(105.305592,18.181062),(105.321871,18.170081),(105.340112,18.162639),(105.355925,18.158738),(105.363315,18.155224),(105.365537,18.151038),(105.368328,18.149953),(105.377474,18.155689),(105.381557,18.162768),(105.386104,18.184938),(105.391995,18.194214),(105.410031,18.201888),(105.431373,18.200544),(105.451423,18.192715),(105.46672,18.180959),(105.472817,18.171993),(105.480879,18.144811),(105.481189,18.138713),(105.479122,18.133494),(105.476848,18.129644),(105.476021,18.127318),(105.480621,18.123649),(105.491989,18.121066),(105.496072,18.118146),(105.512505,18.082825),(105.52067,18.0759),(105.538136,18.066728),(105.546508,18.058951),(105.551882,18.047943),(105.555551,18.025154),(105.559324,18.015336),(105.566145,18.008256),(105.589813,17.99575),(105.589813,17.995595),(105.603431,17.969184),(105.606039,17.964124),(105.594154,17.888108),(105.605212,17.85426),(105.64397,17.820102),(105.669808,17.773076),(105.692029,17.744086),(105.701227,17.727601),(105.713836,17.689464),(105.723086,17.673496),(105.735282,17.664039),(105.787424,17.642077),(105.819876,17.621561),(105.846025,17.597997),(105.911137,17.516813),(105.932169,17.495729),(105.932324,17.495729),(106.019296,17.394495),(106.185435,17.257397),(106.207295,17.246287),(106.221816,17.244892),(106.233391,17.251971),(106.247912,17.266234),(106.252356,17.274554),(106.254785,17.283184),(106.259953,17.28923),(106.272614,17.28985),(106.284551,17.284786),(106.28667,17.276569),(106.285688,17.265976),(106.287961,17.25378),(106.293129,17.246907),(106.306978,17.232954),(106.310544,17.227115),(106.309769,17.217917),(106.301656,17.202),(106.300829,17.191562),(106.306358,17.176524),(106.336537,17.128568),(106.384648,17.085367),(106.394932,17.068003),(106.394363,17.062215),(106.387904,17.051673),(106.387129,17.047178),(106.40402,17.014642),(106.408833,17.005371),(106.411365,16.995811),(106.411365,16.995708),(106.420253,16.98868),(106.437203,16.980463),(106.45653,16.97359),(106.473738,16.970697),(106.496838,16.963927),(106.524536,16.989248),(106.533425,16.97266),(106.53355,16.950638),(106.535078,16.683531),(106.540659,16.659139),(106.549754,16.637797),(106.57244,16.613561),(106.591405,16.60245),(106.611559,16.594595),(106.624943,16.59351),(106.640188,16.586637),(106.644322,16.57532),(106.639671,16.521783),(106.64117,16.510363),(106.655174,16.47083),(106.670677,16.444424),(106.692898,16.427887),(106.722664,16.432486),(106.72256,16.421634),(106.725713,16.417139),(106.732482,16.417552),(106.742404,16.421531),(106.751511,16.428268),(106.752533,16.429024),(106.754393,16.436569),(106.7546,16.444269),(106.759147,16.452485),(106.786122,16.469125),(106.797336,16.47946),(106.801418,16.4961),(106.822864,16.530982),(106.846687,16.530052),(106.86405,16.504058),(106.866841,16.463854),(106.862603,16.430574),(106.861363,16.427887),(106.858211,16.424632),(106.855885,16.420343),(106.855885,16.414658),(106.859865,16.411816),(106.866531,16.414038),(106.872835,16.417345),(106.876298,16.418017),(106.888442,16.390887),(106.897743,16.378536),(106.915003,16.364739),(106.939498,16.351716),(106.946009,16.346135),(106.95066,16.336317),(106.954071,16.313837),(106.956551,16.307068),(106.9673,16.299885),(106.980219,16.298334),(107.014325,16.29942),(107.053393,16.295234)
,(107.06798,16.290809),(107.070601,16.290015),(107.089101,16.281075),(107.108273,16.267484),(107.116851,16.254616),(107.127497,16.200976),(107.133336,16.188212),(107.13599,16.184784),(107.142018,16.176998),(107.154369,16.166043),(107.169665,16.156896),(107.183204,16.153227),(107.197157,16.151005),(107.212815,16.146664),(107.238808,16.129818),(107.275912,16.084032),(107.298959,16.064189),(107.326864,16.056075),(107.355907,16.059279),(107.41151,16.073077),(107.431044,16.073697),(107.435643,16.068839),(107.433421,16.057367),(107.433292,16.055461),(107.432129,16.038299),(107.436729,16.026878),(107.44355,16.01892),(107.444738,16.00998),(107.432698,15.995924),(107.418022,15.983573),(107.390633,15.935824),(107.375544,15.919081),(107.361074,15.912312),(107.322937,15.905852),(107.305057,15.899134),(107.235656,15.855519),(107.227232,15.853297),(107.218086,15.854796),(107.208835,15.859136),(107.198604,15.862082),(107.186253,15.859395),(107.175246,15.848543),(107.150183,15.794283),(107.148787,15.773457),(107.16026,15.758729),(107.215295,15.727517),(107.225682,15.71682),(107.230023,15.702402),(107.23457,15.660931),(107.239118,15.645119),(107.24873,15.631088),(107.265421,15.61866),(107.303558,15.599747),(107.316426,15.58874),(107.369342,15.496058),(107.381641,15.488539),(107.388669,15.492518),(107.394974,15.501484),(107.405051,15.508589),(107.412234,15.508796),(107.432129,15.503887),(107.437969,15.505592),(107.442981,15.509209),(107.447891,15.508124),(107.453368,15.495954),(107.453368,15.495903),(107.456469,15.488332),(107.46081,15.484379),(107.466546,15.484353),(107.473781,15.488255),(107.487475,15.481408),(107.490162,15.461512),(107.489645,15.437922),(107.493934,15.420042),(107.512641,15.409422),(107.555584,15.415753),(107.573154,15.411231),(107.57636,15.403054),(107.577702,15.39963),(107.579872,15.361234),(107.58659,15.343354),(107.598476,15.330616),(107.643073,15.297853),(107.657594,15.282427),(107.664105,15.268552),(107.664363,15.253153),(107.656818,15.205688),(107.653976,15.199564),(107.647465,15.197652),(107.642607,15.202458),(107.636716,15.205068),(107.62726,15.196515),(107.623591,15.187239),(107.621317,15.154528),(107.615787,15.135511),(107.6054,15.120603),(107.599199,15.114789),(107.584988,15.104299),(107.580492,15.096754),(107.580802,15.086936),(107.590001,15.0689),(107.589071,15.058178),(107.578012,15.047248),(107.562612,15.045801),(107.535224,15.0504),(107.518997,15.04885),(107.502386,15.044037),(107.473781,15.03575),(107.455074,15.034122),(107.445565,15.025389),(107.444687,15.011824),(107.451405,14.995933),(107.449906,14.981102),(107.453937,14.967873),(107.461947,14.956814),(107.473781,14.948236),(107.48949,14.940304),(107.549642,14.895268),(107.558168,14.886715),(107.555326,14.876199),(107.540598,14.856846),(107.498844,14.81765),(107.490885,14.80424),(107.494296,14.788763),(107.504321,14.773182),(107.509282,14.758635),(107.498068,14.745949),(107.504425,14.736285),(107.514657,14.728224),(107.522253,14.718689),(107.520393,14.704582),(107.506905,14.693445),(107.473781,14.646265),(107.458226,14.638798),(107.451766,14.632183),(107.443808,14.609549),(107.440449,14.603916),(107.433266,14.59433),(107.430321,14.586243),(107.429752,14.578336),(107.430941,14.569887),(107.430941,14.561386),(107.42784,14.553712),(107.414921,14.543429),(107.403449,14.543894),(107.380143,14.554048),(107.361384,14.557924),(107.351566,14.561619),(107.332704,14.578233),(107.305677,14.597224),(107.28728,14.598516),(107.272191,14.585364),(107.254517,14.560973),(107.249453,14.556632),(107.237878,14.549811),(107.233899,14.54492
7),(107.2324,14.538313),(107.230281,14.518262),(107.219739,14.497979),(107.211213,14.493458),(107.195968,14.495964),(107.195813,14.495964),(107.195606,14.496067),(107.195503,14.496119),(107.180568,14.496171),(107.172455,14.484259),(107.166823,14.467154),(107.158709,14.451522),(107.131424,14.427544),(107.10078,14.407623),(107.078559,14.400776),(107.067862,14.406124),(107.059336,14.418036),(107.042954,14.430722),(107.030345,14.436097),(107.027245,14.43527),(107.026573,14.430231),(106.997376,14.395143),(106.977945,14.365429),(106.973656,14.363026),(106.961099,14.360391),(106.950143,14.334863),(106.94973,14.328661),(106.946113,14.32352),(106.933607,14.320161),(106.923478,14.321918),(106.904771,14.333312),(106.895676,14.33631),(106.885083,14.335534),(106.879295,14.332692),(106.874592,14.328351),(106.835835,14.307061),(106.820849,14.307577),(106.806896,14.312926),(106.792995,14.322589),(106.766692,14.348324),(106.731655,14.396719),(106.723904,14.411602),(106.719253,14.417881),(106.711502,14.425839),(106.707161,14.429301),(106.6344,14.450902),(106.622049,14.457594),(106.605358,14.466637),(106.585669,14.495964),(106.575386,14.50201),(106.552441,14.506945),(106.54376,14.512113),(106.540246,14.521673),(106.534975,14.553247),(106.530996,14.56596),(106.518438,14.585002),(106.512496,14.590351),(106.49105,14.570998),(106.483763,14.567484),(106.477149,14.56689),(106.47224,14.564073),(106.470586,14.554255),(106.470948,14.544643),(106.47317,14.537486),(106.477511,14.532602),(106.484849,14.530019),(106.484849,14.523817),(106.469449,14.524696),(106.456685,14.5279),(106.445781,14.529088),(106.43648,14.523817),(106.434309,14.519606),(106.432655,14.512423),(106.433586,14.505705),(106.445161,14.499478),(106.44082,14.492631),(106.433482,14.486636),(106.429503,14.485964),(106.41922,14.466457),(106.393795,14.456173),(106.339999,14.448988),(106.333437,14.448111),(106.320156,14.444494),(106.306462,14.445476),(106.296746,14.447879),(106.235975,14.484362),(106.230084,14.49046),(106.220782,14.473485),(106.210654,14.421395),(106.203936,14.399897),(106.189156,14.378891),(106.17324,14.369667),(106.15412,14.367548),(106.119031,14.369512),(106.095002,14.373646),(106.079447,14.372716),(106.068647,14.373439),(106.063531,14.372922),(106.060585,14.36884),(106.058105,14.353104),(106.055418,14.348428),(106.038158,14.347652),(105.994129,14.362406),(105.973975,14.365843),(105.969428,14.354861),(105.967826,14.343363),(105.96917,14.33202),(105.973975,14.321298),(105.993406,14.30595),(106.004516,14.289982),(106.007979,14.270758),(106.004051,14.245927),(106.012474,14.225774),(106.026272,14.213035),(106.060585,14.191848),(106.076191,14.174511),(106.10172,14.11516),(106.115259,14.100303),(106.131279,14.086997),(106.144559,14.072398),(106.149469,14.053562),(106.142079,14.03501),(106.126421,14.022013),(106.108489,14.01036),(106.094071,13.996175),(106.081979,13.980957),(106.08384,13.934008),(106.069163,13.916542),(106.049216,13.915457),(105.997437,13.928298),(105.973975,13.92928),(105.949171,13.919591),(105.924573,13.91711),(105.899665,13.922614),(105.895581,13.924799),(105.873051,13.936851),(105.787113,14.010154),(105.770164,14.0347),(105.758898,14.062476),(105.744015,14.086945),(105.716007,14.101543),(105.705155,14.102009),(105.683657,14.098856),(105.669343,14.100407),(105.653272,14.107047),(105.643143,14.113455),(105.632911,14.114023),(105.617253,14.10299),(105.609863,14.120741),(105.594567,14.136348),(105.576532,14.144022),(105.561391,14.138156),(105.555551,14.146089),(105.548937,14.153117),(105.541237,14.159189),(105.533175,14.16433),(10
5.50527,14.145055),(105.490491,14.136916),(105.473851,14.132523),(105.45008,14.11715),(105.422175,14.107874),(105.392461,14.10374),(105.36409,14.10405),(105.347864,14.106065),(105.337787,14.109993),(105.329467,14.117563),(105.303939,14.152393),(105.296601,14.156837),(105.282183,14.161256),(105.27531,14.164821),(105.256913,14.182443),(105.197434,14.270293),(105.187977,14.291015),(105.184256,14.314166),(105.184308,14.34574),(105.198209,14.345224),(105.250454,14.357187),(105.268592,14.36437),(105.275724,14.372095),(105.287712,14.393283),(105.29474,14.399071),(105.304662,14.398192),(105.322232,14.386384),(105.3351,14.385712),(105.340836,14.390105),(105.359646,14.408269),(105.367656,14.414548),(105.379231,14.418242),(105.40569,14.423048),(105.415973,14.428164),(105.420314,14.435218),(105.428892,14.458163),(105.433853,14.467929),(105.440365,14.475345),(105.463671,14.495964),(105.495452,14.537124),(105.507544,14.564048),(105.510128,14.593581),(105.495762,14.659881),(105.497364,14.690422),(105.505115,14.736363),(105.50155,14.745949),(105.490697,14.763803),(105.489147,14.786334),(105.49385,14.808994),(105.50155,14.827339),(105.5062,14.830414),(105.52005,14.833463),(105.524391,14.837235),(105.52408,14.842532),(105.516949,14.856536),(105.515864,14.871677),(105.517776,14.877672),(105.52191,14.878705),(105.53762,14.878034),(105.543046,14.880101),(105.550642,14.902296),(105.534157,14.934025),(105.543924,14.953094),(105.58356,14.977588),(105.593068,14.990792),(105.566455,14.995933),(105.566352,14.995933),(105.566145,14.995933),(105.566145,14.995985),(105.553536,15.005338),(105.524287,15.045956),(105.52284,15.050039),(105.52315,15.061175),(105.520877,15.066136),(105.516536,15.067143),(105.503565,15.064896),(105.498294,15.066136),(105.473851,15.09063),(105.45194,15.094765),(105.44362,15.111353),(105.443775,15.133806),(105.447703,15.155303),(105.454421,15.175535),(105.463102,15.189436),(105.503927,15.231061),(105.531574,15.252894),(105.560254,15.270929),(105.563975,15.272531),(105.564801,15.299455),(105.561494,15.320901),(105.558032,15.325138),(105.53855,15.321521),(105.519378,15.321056),(105.499017,15.324673),(105.483308,15.334517),(105.477468,15.352811),(105.486253,15.374902),(105.5062,15.387589),(105.552813,15.402033),(105.588366,15.423788),(105.592243,15.427546),(105.604282,15.439214),(105.61038,15.457171),(105.612034,15.499468),(105.616271,15.521276),(105.640869,15.583313),(105.650998,15.634602),(105.640921,15.68199),(105.612912,15.72147),(105.568522,15.749376),(105.525424,15.763483),(105.50341,15.766171),(105.462379,15.762812),(105.447961,15.764879),(105.435662,15.771648),(105.423001,15.783896),(105.412046,15.79976),(105.372565,15.882132),(105.360628,15.919391),(105.362488,15.95479),(105.386518,15.980783),(105.407395,15.988896),(105.418144,15.994735),(105.423932,16.002125),(105.423232,16.00504),(105.422071,16.009877),(105.414475,16.015199),(105.404915,16.018817),(105.288539,16.047394),(105.247405,16.049151),(105.237225,16.050804),(105.106979,16.095853),(105.079301,16.105426),(105.058114,16.121188),(105.042456,16.141445),(105.032586,16.165268),(105.028865,16.191468),(105.029744,16.236013),(105.026385,16.257665),(105.015481,16.276734),(105.007419,16.283555),(104.988028,16.293963),(104.979307,16.298645),(104.971504,16.304329),(104.952074,16.324431),(104.944064,16.330839),(104.917296,16.345567),(104.908046,16.35337),(104.901586,16.362465),(104.897039,16.372749),(104.887737,16.403909),(104.879469,16.421841),(104.868927,16.438429),(104.855491,16.454087),(104.838851,16.46773),(104.78273,16.497082),(104.76516,
16.511345),(104.754515,16.528915),(104.749502,16.549327),(104.748417,16.571909),(104.753016,16.614801),(104.763713,16.656762),(104.777563,16.694021),(104.77901,16.704873),(104.778338,16.715828),(104.756789,16.798459),(104.756324,16.809879),(104.75791,16.818895),(104.758442,16.82192),(104.765212,16.844658),(104.767072,16.85613),(104.766401,16.86817),(104.762887,16.879074),(104.753016,16.899848),(104.749657,16.910804),(104.74634,16.943532),(104.740356,17.002581),(104.745317,17.024802),(104.809912,17.171666),(104.81446,17.194042),(104.818335,17.240241),(104.816423,17.300082),(104.819989,17.348916),(104.819317,17.36106),(104.816423,17.372791),(104.792704,17.423485),(104.788932,17.428963),(104.781645,17.435268),(104.764644,17.444414),(104.756789,17.449944),(104.749657,17.45816),(104.734258,17.486892),(104.714517,17.515263),(104.702632,17.527303),(104.65509,17.556087),(104.636383,17.562443),(104.481198,17.640423),(104.452001,17.665951),(104.422287,17.699799),(104.417843,17.707602),(104.411642,17.727395),(104.408903,17.733699),(104.404872,17.739952),(104.3903,17.75716),(104.367769,17.800568),(104.352627,17.819327),(104.287567,17.856741),(104.281982,17.861488),(104.278265,17.864647),(104.270307,17.874001),(104.264312,17.884646),(104.255114,17.91498),(104.210879,18.008618),(104.203329,18.018644),(104.198115,18.025568),(104.13569,18.079673),(104.122357,18.095279),(104.110937,18.114554),(104.068407,18.215401),(104.018643,18.299117),(104.000039,18.318444),(103.975441,18.330665),(103.951257,18.332991),(103.929294,18.32759),(103.910019,18.315317),(103.893689,18.297256),(103.889503,18.291159),(103.885938,18.286999),(103.881287,18.284415),(103.873949,18.283226),(103.857774,18.286482),(103.848989,18.296352),(103.842219,18.309116),(103.832091,18.320666),(103.81726,18.330355),(103.801395,18.338262),(103.7846,18.343894),(103.767185,18.346685),(103.709101,18.349708),(103.695562,18.353222),(103.638614,18.377949),(103.629932,18.38322),(103.621354,18.390765),(103.607143,18.400403),(103.5488,18.415415),(103.523169,18.417688),(103.497744,18.423734),(103.489114,18.424432),(103.4748,18.422158),(103.469942,18.421977),(103.451545,18.425311),(103.396768,18.441408),(103.38664,18.441459),(103.306851,18.425672),(103.282564,18.41557),(103.260756,18.400196),(103.244065,18.380068),(103.233988,18.35578),(103.233678,18.349269),(103.235331,18.345264),(103.238018,18.341569),(103.240189,18.335988),(103.248664,18.332629),(103.290418,18.305137),(103.296516,18.296895),(103.295379,18.287774),(103.286336,18.280332),(103.27352,18.276069),(103.236727,18.269997),(103.214919,18.262297),(103.185205,18.25775),(103.17673,18.254856),(103.168204,18.249766),(103.1573,18.237027),(103.150892,18.221499),(103.145104,18.188012),(103.138593,18.166722),(103.126656,18.151865),(103.110068,18.141814),(103.089087,18.134786),(103.089404,18.135223),(103.090087,18.136165),(103.090586,18.136853),(103.092756,18.141194),(103.092133,18.140882),(103.087703,18.138667),(103.07994,18.134786),(103.07131,18.125561),(103.066349,18.112255),(103.064799,18.094039),(103.073067,18.035515),(103.06821,18.025774),(103.058288,18.019263),(103.049399,18.005),(103.037927,17.990686),(103.020461,17.984226),(103.003821,17.988567),(102.972918,18.007739),(102.954935,18.012132),(102.938605,18.008928),(102.852719,17.972031),(102.838043,17.963763),(102.831893,17.953117),(102.774067,17.922731),(102.737377,17.890434),(102.722184,17.881804),(102.686734,17.873535),(102.672988,17.865371),(102.667614,17.850074),(102.67335,17.831781),(102.68291,17.817363),(102.68291,17.810077),(102.660793,17.81286
7),(102.609426,17.837),(102.595267,17.840824),(102.595577,17.850074),(102.61232,17.91529),(102.599608,17.955184),(102.567517,17.970842),(102.482044,17.971152),(102.445509,17.977767),(102.412229,17.9942),(102.385151,18.016059),(102.366392,18.038797),(102.336523,18.03704),(102.306706,18.051457),(102.25658,18.094039),(102.233119,18.124166),(102.225574,18.128197),(102.221801,18.131608),(102.191571,18.152071),(102.182114,18.169228),(102.178393,18.185971),(102.171934,18.200286),(102.153537,18.210104),(102.107385,18.212377),(102.078503,18.213799),(102.071268,18.208502),(102.055527,18.192345),(101.904301,18.037117),(101.883786,18.030839),(101.845184,18.052129),(101.806064,18.062387),(101.796659,18.073575),(101.790407,18.073575),(101.773922,18.058563),(101.765757,18.040321),(101.75692,17.997817),(101.732581,17.932033),(101.729067,17.912189),(101.636204,17.894826),(101.619874,17.884646),(101.611141,17.865526),(101.600392,17.854364),(101.577551,17.868109),(101.577551,17.852917),(101.575071,17.839791),(101.570523,17.828887),(101.564012,17.820309),(101.545505,17.813672),(101.53404,17.80956),(101.556261,17.792145),(101.53652,17.779123),(101.502724,17.765377),(101.482983,17.745791),(101.479004,17.737988),(101.474198,17.730598),(101.470426,17.722589),(101.468307,17.720367),(101.459109,17.739487),(101.457352,17.745791),(101.45606,17.750184),(101.453476,17.751011),(101.449807,17.749254),(101.44526,17.745791),(101.43203,17.733596),(101.416993,17.732355),(101.401851,17.733751),(101.388002,17.729307),(101.381904,17.720728),(101.388364,17.717783),(101.398751,17.717163),(101.404022,17.715561),(101.403092,17.709463),(101.401593,17.706672),(101.383765,17.684916),(101.37746,17.682643),(101.369864,17.683159),(101.357306,17.679025),(101.247029,17.592157),(101.228632,17.568489),(101.20853,17.531282),(101.202174,17.523169),(101.190288,17.518725),(101.176956,17.5164),(101.166724,17.510612),(101.164243,17.495781),(101.164243,17.495729),(101.14719,17.472475),(101.141816,17.467152),(101.132307,17.461674),(101.128897,17.461158),(101.124659,17.463948),(101.09603,17.475265),(101.066161,17.492215),(101.049315,17.495729),(101.035466,17.512627),(101.019032,17.526632),(101.000842,17.538207),(100.981515,17.547612),(100.937738,17.555486),(100.901934,17.561927),(100.886017,17.57376),(100.885656,17.595878),(100.899453,17.616962),(100.932939,17.654531),(100.953817,17.699799),(100.955878,17.707532),(100.96069,17.725586),(100.958726,17.745791),(100.956401,17.75592),(100.958571,17.764136),(100.964876,17.770596),(100.974281,17.775505),(100.975211,17.777624),(100.975521,17.779794),(100.975211,17.781913),(100.974281,17.78398),(100.967718,17.791008),(100.965186,17.797623),(100.967046,17.803824),(100.974281,17.80925),(100.994796,17.817725),(101.000274,17.830489),(100.997122,17.864595),(101.000842,17.884801),(101.008284,17.89524),(101.039806,17.912034),(101.066161,17.933997),(101.081974,17.958336),(101.094893,17.984381),(101.112308,18.011305),(101.125434,18.02133),(101.157525,18.04045),(101.165173,18.053421),(101.162073,18.067451),(101.144968,18.104736),(101.141609,18.123959),(101.14564,18.141943),(101.160523,18.179512),(101.162796,18.195299),(101.159282,18.204058),(101.152771,18.209071),(101.145536,18.212998),(101.139645,18.218708),(101.131739,18.238862),(101.12838,18.245735),(101.127966,18.280952),(101.130137,18.291159),(101.135408,18.298393),(101.152409,18.316377),(101.155407,18.322888),(101.145536,18.336246),(101.082284,18.364203),(101.074089,18.371613),(101.05102,18.39247),(101.037016,18.410454),(101.030298,18.427791),(101.035621,18.447841
),(101.064714,18.476419),(101.071949,18.495668),(101.073809,18.505771),(101.077892,18.510422),(101.142074,18.544296),(101.155355,18.556724),(101.159127,18.567705),(101.158145,18.589978),(101.160884,18.599796),(101.167757,18.606359),(101.185327,18.612534),(101.194112,18.616798),(101.221914,18.642042),(101.23473,18.658449),(101.240104,18.673642),(101.234782,18.68948),(101.211269,18.711908),(101.205843,18.730072),(101.209202,18.746325),(101.223981,18.778545),(101.228632,18.795624),(101.223309,18.855491),(101.224911,18.874766),(101.232921,18.892724),(101.257261,18.919492),(101.267286,18.934453),(101.277001,18.96706),(101.283926,18.981478),(101.326404,19.020546),(101.317877,19.05013),(101.291161,19.078992),(101.247442,19.115785),(101.235712,19.12811),(101.229149,19.143587),(101.226772,19.167152),(101.229872,19.225804),(101.209512,19.309417),(101.209253,19.315592),(101.210907,19.320631),(101.211269,19.326367),(101.207755,19.334687),(101.201967,19.338666),(101.185017,19.341999),(101.178454,19.346624),(101.172356,19.376389),(101.177782,19.417214),(101.192872,19.452793),(101.2153,19.466358),(101.238244,19.472301),(101.251731,19.494444),(101.256951,19.522789),(101.254884,19.547154),(101.25106,19.560306),(101.246202,19.570279),(101.238244,19.578186),(101.225015,19.585214),(101.207961,19.589555),(101.197316,19.586868),(101.188324,19.580744),(101.176852,19.574749),(101.152668,19.568367),(101.124659,19.565112),(101.095824,19.567282),(101.068228,19.577127),(101.024303,19.602861),(101.002548,19.610613),(100.974281,19.613481),(100.885656,19.612447),(100.879816,19.613584),(100.874597,19.612912),(100.865037,19.607151),(100.861213,19.602603),(100.849017,19.582527),(100.808451,19.536535),(100.762614,19.495478),(100.753002,19.48243),(100.745509,19.485013),(100.738171,19.495039),(100.729231,19.50465),(100.714762,19.512454),(100.633268,19.542167),(100.607016,19.540927),(100.585984,19.525967),(100.574758,19.508347),(100.566657,19.495633),(100.549294,19.494548),(100.520613,19.502609),(100.491829,19.514469),(100.474259,19.524985),(100.46196,19.537103),(100.458963,19.551831),(100.458963,19.568057),(100.455553,19.584671),(100.447491,19.597771),(100.427079,19.621801),(100.418862,19.634358),(100.41292,19.651566),(100.410491,19.684252),(100.407235,19.699936),(100.400466,19.71164),(100.390048,19.724387),(100.383826,19.732001),(100.379175,19.745669),(100.383516,19.763859),(100.407659,19.794257),(100.420516,19.810446),(100.43142,19.838273),(100.438448,19.847213),(100.446251,19.852123),(100.46563,19.859771),(100.474259,19.865352),(100.483975,19.8807),(100.487954,19.896978),(100.492656,19.931653),(100.518546,19.995421),(100.518546,19.995499),(100.543351,20.06658),(100.550586,20.106526),(100.548674,20.158021),(100.546503,20.167917),(100.541956,20.166729),(100.536685,20.153474),(100.529243,20.149727),(100.51281,20.155282),(100.496377,20.164868),(100.488936,20.173343),(100.485628,20.178769),(100.468782,20.18572),(100.46103,20.190707),(100.45731,20.196391),(100.452142,20.21024),(100.447336,20.218044),(100.421601,20.250186),(100.364395,20.368965),(100.344345,20.389971),(100.306518,20.399686),(100.270241,20.391831),(100.240372,20.372814),(100.220683,20.348992),(100.209159,20.312999),(100.207196,20.311139),(100.201718,20.310312),(100.179497,20.302483),(100.176397,20.300622),(100.167095,20.292613),(100.167715,20.253623),(100.152522,20.239153),(100.134642,20.239102),(100.116607,20.248016),(100.102964,20.263596),(100.09728,20.283233),(100.099295,20.317805),(100.097642,20.334936),(100.097073,20.348371),(100.099812,20.362221),(100.111129,
20.37483),(100.120948,20.388782),(100.125237,20.406404),(100.127407,20.42785),(100.136657,20.459217),(100.149266,20.548101),(100.160067,20.582672),(100.172934,20.609079),(100.172934,20.622102),(100.180169,20.638173),(100.202338,20.667939),(100.207196,20.680754),(100.21133,20.697394),(100.221613,20.70959),(100.248744,20.731966),(100.27453,20.76199),(100.290188,20.775012),(100.329669,20.786019),(100.341968,20.799507),(100.351115,20.81532),(100.364809,20.828187),(100.383671,20.831701),(100.470849,20.818265),(100.513379,20.807),(100.536685,20.807672),(100.569034,20.8178),(100.598903,20.833458),(100.607776,20.839881),(100.626602,20.853509),(100.65213,20.876608),(100.65213,20.882757),(100.623088,20.888545),(100.534204,20.873921),(100.515704,20.886478),(100.517823,20.906322),(100.527073,20.94694),(100.529243,20.968437),(100.533274,20.97345),(100.551981,20.985335),(100.557252,20.992001),(100.555753,20.999856),(100.543454,21.0274),(100.561851,21.0351),(100.606758,21.043936),(100.625516,21.054117),(100.637557,21.069671),(100.689388,21.15933),(100.721066,21.280666),(100.722565,21.292707),(100.721996,21.303662),(100.725407,21.311672),(100.738378,21.314772),(100.747731,21.314256),(100.756671,21.312447),(100.764991,21.309191),(100.772588,21.30423),(100.791966,21.297357),(100.8146,21.3002),(100.851342,21.314772),(100.862453,21.323299),(100.896043,21.355752),(100.909943,21.360093),(101.003426,21.405981),(101.143159,21.513417),(101.157112,21.519669),(101.161143,21.534707),(101.159024,21.552691),(101.174527,21.551657),(101.186567,21.535327),(101.187756,21.505872),(101.168274,21.424533),(101.171426,21.403398),(101.179126,21.398178),(101.207961,21.383502),(101.226462,21.370893),(101.230802,21.371772),(101.233128,21.37172),(101.234782,21.363658),(101.222844,21.335081),(101.207445,21.314824),(101.209512,21.308675),(101.2184,21.299941),(101.220364,21.296272),(101.219072,21.282475),(101.208995,21.245526),(101.220364,21.233692),(101.234782,21.206252),(101.244342,21.192868),(101.259741,21.179484),(101.275296,21.174109),(101.292711,21.176073),(101.313898,21.184755),(101.362267,21.215812),(101.381491,21.222478),(101.495127,21.242839),(101.517607,21.242632),(101.539104,21.23886),(101.5729,21.228163),(101.57967,21.227594),(101.583959,21.22439),(101.588197,21.213332),(101.586646,21.204237),(101.580962,21.194728),(101.577241,21.185168),(101.581789,21.175659),(101.596517,21.173386),(101.635946,21.189457),(101.655118,21.189199),(101.670155,21.177055),(101.688294,21.145842),(101.704262,21.135093),(101.717905,21.134473),(101.73749,21.137574),(101.75599,21.143258),(101.766739,21.150648),(101.768453,21.16045),(101.770201,21.17044),(101.763586,21.184651),(101.760072,21.195968),(101.77356,21.207389),(101.796039,21.202945),(101.805238,21.204495),(101.813299,21.209559),(101.819294,21.215967),(101.82149,21.221071),(101.822808,21.224132),(101.823738,21.234467),(101.820224,21.247283),(101.812266,21.258135),(101.801155,21.267075),(101.788546,21.274206),(101.736508,21.292293),(101.722504,21.30423),(101.715114,21.321129),(101.715579,21.338182),(101.725553,21.375389),(101.729687,21.474246),(101.744259,21.495588),(101.748239,21.514502),(101.734751,21.554396),(101.736198,21.570881),(101.750926,21.579666),(101.772682,21.582508),(101.793559,21.588244),(101.806168,21.605969),(101.803842,21.625916),(101.789528,21.634029),(101.770821,21.638835),(101.755938,21.648654),(101.752321,21.659144),(101.755628,21.680021),(101.754801,21.689788),(101.749117,21.698057),(101.732787,21.711441),(101.72855,21.717539),(101.727981,21.73268),(101.73165,21.750508),
(101.751546,21.80637),(101.751081,21.816137),(101.726173,21.837273),(101.721263,21.844353),(101.708706,21.869571),(101.685865,21.898613),(101.683075,21.907139),(101.682454,21.914839),(101.680594,21.922642),(101.673669,21.931376),(101.638116,21.940884),(101.616309,21.953648),(101.6068,21.967601),(101.600392,22.007495),(101.592176,22.028114),(101.564942,22.069507),(101.555434,22.090332),(101.55502,22.112398),(101.561842,22.130072),(101.566958,22.149347),(101.561532,22.176115),(101.549026,22.19353),(101.518227,22.228205),(101.515746,22.245362),(101.531714,22.263397),(101.540654,22.271458),(101.55099,22.276678),(101.563702,22.276781),(101.575743,22.273112),(101.58799,22.271355),(101.601219,22.276988),(101.606128,22.284842),(101.619874,22.326442),(101.62251,22.342358),(101.623698,22.346647),(101.641113,22.363287),(101.643697,22.364683),(101.645713,22.384526),(101.644162,22.404008),(101.645092,22.424214),(101.654549,22.446125),(101.668657,22.462299),(101.689121,22.478887),(101.71315,22.491548),(101.741779,22.496044),(101.750616,22.496044),(101.75506,22.495527),(101.784877,22.472221),(101.817795,22.406334),(101.8426,22.383338),(101.867921,22.378842)] +Lebanon [(36.391194,34.622487),(36.415999,34.622901),(36.440184,34.62936),(36.436256,34.597838),(36.429125,34.593187),(36.418479,34.600318),(36.4037,34.603832),(36.388921,34.596236),(36.386853,34.584247),(36.388094,34.569519),(36.383546,34.553809),(36.364012,34.541097),(36.336624,34.52885),(36.319881,34.514122),(36.329851,34.498603),(36.332697,34.494175),(36.335384,34.493477),(36.338174,34.493296),(36.340965,34.493477),(36.343652,34.494175),(36.363082,34.499032),(36.392331,34.500143),(36.41972,34.498335),(36.433466,34.49433),(36.433466,34.494175),(36.439873,34.477457),(36.449795,34.464538),(36.463231,34.455237),(36.480181,34.449139),(36.494651,34.434256),(36.501885,34.430303),(36.509223,34.432421),(36.52514,34.422939),(36.530617,34.41418),(36.52979,34.403095),(36.526276,34.386688),(36.523589,34.379789),(36.519972,34.375061),(36.516768,34.369867),(36.515941,34.36165),(36.518628,34.353641),(36.523589,34.345889),(36.54519,34.320723),(36.555835,34.311085),(36.562967,34.308501),(36.570305,34.308966),(36.576816,34.307674),(36.581984,34.299897),(36.579917,34.286901),(36.570305,34.275868),(36.56276,34.262949),(36.567514,34.244138),(36.574026,34.229746),(36.59821,34.209851),(36.603554,34.200101),(36.604101,34.199102),(36.575886,34.173264),(36.552735,34.140992),(36.507983,34.109185),(36.488449,34.088644),(36.482662,34.07451),(36.480698,34.062754),(36.47522,34.053504),(36.45889,34.046708),(36.445144,34.04614),(36.423647,34.052806),(36.412175,34.053633),(36.391091,34.044719),(36.357811,34.009553),(36.31802,33.980511),(36.302311,33.964388),(36.288565,33.946508),(36.267171,33.910386),(36.268928,33.906769),(36.275336,33.897209),(36.28164,33.891111),(36.311096,33.87199),(36.35037,33.866409),(36.36763,33.857547),(36.369594,33.837006),(36.357501,33.823776),(36.338278,33.821193),(36.300554,33.828686),(36.249187,33.849744),(36.231824,33.852534),(36.216631,33.847754),(36.202679,33.838297),(36.186556,33.83021),(36.165678,33.829797),(36.157617,33.83406),(36.144904,33.847418),(36.13829,33.850622),(36.102426,33.832277),(36.085063,33.82605),(36.06646,33.821994),(36.05044,33.816438),(36.040518,33.805612),(36.027082,33.784296),(35.994009,33.760705),(35.980263,33.743549),(35.974165,33.732154),(35.955872,33.718124),(35.948431,33.70921),(35.946984,33.700838),(35.949878,33.6827),(35.94719,33.673088),(35.942436,33.669212),(35.929414,33.665078),(35.925383,33.661667),(35.921559,33.640
299),(35.934788,33.6334),(35.956906,33.629938),(35.980263,33.619008),(35.994526,33.615701),(36.012923,33.608027),(36.029666,33.59764),(36.038451,33.586349),(36.035247,33.567797),(36.019847,33.55263),(35.99928,33.541261),(35.980263,33.533949),(35.956802,33.534207),(35.933238,33.527825),(35.921869,33.51457),(35.935408,33.494262),(35.919492,33.462351),(35.888486,33.440415),(35.870089,33.431242),(35.845116,33.418742),(35.8211,33.406722),(35.8057,33.391348),(35.785753,33.357887),(35.769423,33.342643),(35.757538,33.336313),(35.743689,33.331171),(35.729426,33.327812),(35.716197,33.326727),(35.698523,33.32267),(35.66049,33.289261),(35.640542,33.275851),(35.624716,33.272924),(35.610363,33.27027),(35.601475,33.262725),(35.604576,33.244354),(35.604576,33.244303),(35.604576,33.244251),(35.603892,33.240323),(35.603852,33.240091),(35.598064,33.244251),(35.597858,33.24438),(35.597754,33.244354),(35.585042,33.252209),(35.566955,33.276187),(35.561167,33.28213),(35.549489,33.281019),(35.542771,33.271536),(35.53688,33.257868),(35.527784,33.244251),(35.520033,33.222185),(35.517656,33.172886),(35.512282,33.147409),(35.503703,33.130615),(35.4851,33.102606),(35.480139,33.087387),(35.449443,33.085217),(35.401591,33.067931),(35.34516,33.05558),(35.331931,33.057156),(35.322629,33.067363),(35.315704,33.079222),(35.305783,33.089377),(35.29462,33.096973),(35.288825,33.099201),(35.283665,33.101185),(35.271469,33.101185),(35.234469,33.090927),(35.227338,33.091366),(35.213178,33.094441),(35.207184,33.094803),(35.200673,33.092736),(35.189614,33.085527),(35.185273,33.083977),(35.105235,33.089016),(35.105235,33.090644),(35.106456,33.095038),(35.101899,33.094468),(35.100434,33.095689),(35.100434,33.098212),(35.09962,33.101264),(35.105805,33.112454),(35.114757,33.123725),(35.125499,33.132514),(35.147634,33.140326),(35.154796,33.150295),(35.160981,33.161689),(35.167817,33.170152),(35.161794,33.176947),(35.167817,33.177883),(35.171072,33.176581),(35.174653,33.170152),(35.204763,33.224799),(35.209158,33.254625),(35.188324,33.280015),(35.209727,33.293402),(35.224783,33.311998),(35.243663,33.355699),(35.246104,33.368883),(35.246755,33.395819),(35.250499,33.409735),(35.268728,33.432603),(35.275076,33.444973),(35.271007,33.457506),(35.323009,33.490912),(35.349132,33.513861),(35.360362,33.543443),(35.362478,33.556342),(35.400645,33.643134),(35.394379,33.64997),(35.409923,33.661282),(35.421641,33.702338),(35.438487,33.711371),(35.437836,33.723375),(35.477306,33.8039),(35.481293,33.823676),(35.483002,33.86518),(35.483246,33.872463),(35.479991,33.884263),(35.473806,33.893866),(35.47047,33.902086),(35.475841,33.90998),(35.486583,33.912991),(35.499766,33.9112),(35.511241,33.90705),(35.517345,33.902533),(35.524181,33.902533),(35.533865,33.90705),(35.548106,33.906195),(35.557465,33.905666),(35.572032,33.90998),(35.57838,33.916693),(35.587901,33.936591),(35.59311,33.94477),(35.593435,33.94892),(35.59197,33.950426),(35.585704,33.950914),(35.594249,33.959296),(35.606293,33.983222),(35.613536,33.99315),(35.620128,33.995103),(35.636729,33.995307),(35.64088,33.999416),(35.640391,34.008531),(35.635753,34.013251),(35.630382,34.016099),(35.627208,34.019843),(35.626964,34.044257),(35.643321,34.08808),(35.647716,34.119127),(35.644542,34.130845),(35.630382,34.152818),(35.627208,34.166938),(35.627208,34.201117),(35.630382,34.208441),(35.644542,34.216986),(35.647716,34.222154),(35.648285,34.253607),(35.653005,34.284857),(35.665212,34.311347),(35.688731,34.328315),(35.705821,34.316596),(35.722179,34.33747),(35.737315,34.369208),(35.750743,34.389797),(35.80
6163,34.414984),(35.823578,34.434516),(35.805431,34.458686),(35.805431,34.464911),(35.823985,34.460842),(35.903331,34.477973),(35.956798,34.520087),(35.972911,34.529771),(35.983165,34.538642),(35.98878,34.55272),(35.989594,34.599677),(35.986827,34.620795),(35.980642,34.638495),(35.9699,34.649848),(35.9699,34.649849),(35.98264,34.650238),(35.991425,34.648222),(35.998763,34.64538),(36.01623,34.633649),(36.037934,34.62843),(36.060775,34.627913),(36.087234,34.631014),(36.11214,34.629532),(36.15586,34.626931),(36.160201,34.628068),(36.17219,34.634063),(36.178287,34.635303),(36.183352,34.633494),(36.195134,34.626208),(36.201232,34.624451),(36.26159,34.627242),(36.271512,34.630962),(36.28071,34.639127),(36.284431,34.648326),(36.284844,34.667911),(36.288151,34.675559),(36.308925,34.687548),(36.323808,34.679486),(36.347373,34.64414),(36.367526,34.629205),(36.391194,34.622487)] +Liberia [(-9.722115,8.435636),(-9.718188,8.438272),(-9.717981,8.452689),(-9.7155,8.473334),(-9.703253,8.487261),(-9.683513,8.487054),(-9.663514,8.477856),(-9.650543,8.464807),(-9.645117,8.443413),(-9.653385,8.43145),(-9.666511,8.422381),(-9.675916,8.409643),(-9.67726,8.39153),(-9.668372,8.38874),(-9.653385,8.392099),(-9.636125,8.392564),(-9.628064,8.391634),(-9.619537,8.392719),(-9.612148,8.396336),(-9.607342,8.402925),(-9.600159,8.410883),(-9.592045,8.405095),(-9.579178,8.387293),(-9.529052,8.366312),(-9.511482,8.353781),(-9.504351,8.346158),(-9.499131,8.343471),(-9.511482,8.336133),(-9.515616,8.334376),(-9.522282,8.330681),(-9.528483,8.32572),(-9.539387,8.314584),(-9.530189,8.30598),(-9.525124,8.295438),(-9.521249,8.284405),(-9.515616,8.274406),(-9.510758,8.270065),(-9.509156,8.265595),(-9.51081,8.261047),(-9.515616,8.256603),(-9.525435,8.24079),(-9.531894,8.212316),(-9.530137,8.185496),(-9.515616,8.174877),(-9.491328,8.169968),(-9.481406,8.164697),(-9.478099,8.152785),(-9.478047,8.141287),(-9.459805,8.068785),(-9.461873,8.058424),(-9.472776,8.040363),(-9.474223,8.028813),(-9.462183,8.043464),(-9.451537,8.051758),(-9.440944,8.050569),(-9.429265,8.036901),(-9.424149,8.026178),(-9.42234,8.015791),(-9.424562,8.005817),(-9.431745,7.996567),(-9.433093,7.993468),(-9.446008,7.950058),(-9.448437,7.907839),(-9.43867,7.866239),(-9.415674,7.824433),(-9.389681,7.791464),(-9.380999,7.77441),(-9.355471,7.741906),(-9.366374,7.716171),(-9.376245,7.680566),(-9.374694,7.646925),(-9.353507,7.62243),(-9.364411,7.614265),(-9.372989,7.601346),(-9.378777,7.586256),(-9.38366,7.55954),(-9.390817,7.548223),(-9.408697,7.527449),(-9.404925,7.513599),(-9.4134,7.492257),(-9.436086,7.458564),(-9.456705,7.443061),(-9.466213,7.433449),(-9.470244,7.420995),(-9.470864,7.407353),(-9.4735,7.39557),(-9.479701,7.38539),(-9.490708,7.376037),(-9.487099,7.373204),(-9.486626,7.372833),(-9.478357,7.376915),(-9.449263,7.405079),(-9.438101,7.421564),(-9.418723,7.387767),(-9.407354,7.378052),(-9.392264,7.39986),(-9.386477,7.401668),(-9.381412,7.404975),(-9.379242,7.415828),(-9.375159,7.41929),(-9.365961,7.421099),(-9.349218,7.421564),(-9.335007,7.419652),(-9.327772,7.417068),(-9.319401,7.411952),(-9.312424,7.403322),(-9.3055,7.383478),(-9.299505,7.376605),(-9.283796,7.374073),(-9.249715,7.383065),(-9.234341,7.381308),(-9.219872,7.360999),(-9.206281,7.298367),(-9.181476,7.275681),(-9.17543,7.274596),(-9.15308,7.275474),(-9.147473,7.271392),(-9.146905,7.25408),(-9.139153,7.247879),(-9.11962,7.237957),(-9.116054,7.224676),(-9.125407,7.190208),(-9.113935,7.200388),(-9.106085,7.202975),(-9.100448,7.204833),(-9.086392,7.20721),(-9.073111,7.211344),(-9.059261,7.221369),(-9.046
652,7.232738),(-9.032855,7.240283),(-9.015698,7.238887),(-9.007637,7.243487),(-8.985467,7.25005),(-8.981023,7.250566),(-8.977716,7.266224),(-8.973117,7.267671),(-8.962575,7.262969),(-8.9572,7.286481),(-8.944385,7.278678),(-8.925419,7.248603),(-8.91219,7.250153),(-8.868885,7.266638),(-8.857827,7.273046),(-8.851367,7.292889),(-8.855915,7.311855),(-8.863304,7.32989),(-8.865371,7.346891),(-8.860617,7.35764),(-8.851626,7.371489),(-8.83341,7.393865),(-8.82403,7.399808),(-8.807701,7.40451),(-8.799226,7.41221),(-8.795453,7.421099),(-8.793955,7.441511),(-8.790389,7.450812),(-8.784394,7.455722),(-8.756489,7.472413),(-8.739539,7.490552),(-8.729617,7.50869),(-8.725742,7.528534),(-8.726827,7.55184),(-8.729514,7.556387),(-8.73401,7.557679),(-8.738041,7.559488),(-8.739178,7.565689),(-8.737369,7.571425),(-8.731478,7.581761),(-8.730031,7.587445),(-8.727576,7.622017),(-8.720677,7.642739),(-8.694477,7.674416),(-8.686571,7.694364),(-8.567405,7.688214),(-8.57278,7.650749),(-8.566527,7.623412),(-8.54782,7.600777),(-8.515625,7.577161),(-8.485446,7.557989),(-8.44364,7.537112),(-8.425605,7.50192),(-8.405141,7.431279),(-8.393049,7.334334),(-8.387675,7.314955),(-8.38106,7.303018),(-8.364007,7.278885),(-8.357547,7.265708),(-8.349692,7.234702),(-8.34499,7.226795),(-8.333001,7.211861),(-8.32897,7.204006),(-8.328453,7.197391),(-8.330572,7.184731),(-8.329745,7.178323),(-8.326231,7.17207),(-8.311503,7.15729),(-8.305354,7.146903),(-8.302615,7.137602),(-8.30122,7.11321),(-8.284115,7.017867),(-8.286647,6.993786),(-8.29414,6.984381),(-8.314552,6.968568),(-8.319978,6.95999),(-8.321219,6.942265),(-8.319978,6.925832),(-8.320909,6.910174),(-8.333673,6.882578),(-8.328763,6.875085),(-8.320599,6.869659),(-8.315715,6.86382),(-8.314966,6.850281),(-8.316619,6.836276),(-8.324991,6.806252),(-8.337497,6.777262),(-8.351759,6.755351),(-8.441646,6.65531),(-8.461313,6.633421),(-8.533557,6.576344),(-8.547148,6.56004),(-8.549473,6.560143),(-8.566113,6.550919),(-8.567198,6.550609),(-8.566113,6.549059),(-8.570609,6.535132),(-8.572728,6.530843),(-8.577327,6.528026),(-8.589368,6.526063),(-8.594122,6.523892),(-8.61872,6.492835),(-8.601847,6.49175),(-8.592106,6.489683),(-8.568077,6.490406),(-8.556501,6.487021),(-8.545339,6.477203),(-8.515625,6.437593),(-8.497229,6.431366),(-8.484775,6.441572),(-8.47692,6.460149),(-8.472372,6.47865),(-8.467515,6.487073),(-8.46121,6.482887),(-8.449635,6.467927),(-8.418267,6.451209),(-8.405089,6.436327),(-8.406485,6.430539),(-8.413874,6.424183),(-8.419042,6.407646),(-8.417233,6.398189),(-8.408242,6.379922),(-8.405968,6.372248),(-8.410981,6.363179),(-8.420153,6.358088),(-8.423693,6.351991),(-8.411807,6.340028),(-8.401059,6.35044),(-8.388811,6.355091),(-8.376564,6.35473),(-8.36597,6.350079),(-8.352948,6.36106),(-8.342613,6.356952),(-8.332949,6.34597),(-8.322356,6.336462),(-8.317498,6.336979),(-8.314191,6.341526),(-8.310005,6.344627),(-8.302718,6.340648),(-8.297706,6.332586),(-8.296569,6.324395),(-8.296569,6.317497),(-8.29507,6.313363),(-8.285975,6.310055),(-8.208486,6.292537),(-8.196575,6.288144),(-8.185723,6.28184),(-8.179057,6.276853),(-8.170582,6.274011),(-8.154355,6.274218),(-8.140609,6.277577),(-8.118285,6.289901),(-8.106606,6.294681),(-8.060098,6.301141),(-8.033019,6.301761),(-8.015708,6.297782),(-7.999378,6.286697),(-7.934886,6.271427),(-7.929511,6.270755),(-7.925222,6.272512),(-7.920545,6.273804),(-7.91437,6.271737),(-7.912665,6.26843),(-7.910029,6.256777),(-7.906567,6.251997),(-7.862125,6.216392),(-7.846622,6.197478),(-7.844142,6.183965),(-7.857371,6.145388),(-7.862022,6.124382),(-7.864502,6.101696),(-7.861815,6.0
8149),(-7.850911,6.067564),(-7.850498,6.07224),(-7.840886,6.077356),(-7.82838,6.080715),(-7.819389,6.080095),(-7.809674,6.071207),(-7.805695,6.062034),(-7.802439,6.039245),(-7.799235,6.035679),(-7.793861,6.034077),(-7.788745,6.031493),(-7.786471,6.025111),(-7.78859,6.019789),(-7.797736,6.01475),(-7.799958,6.010151),(-7.800682,5.981729),(-7.799648,5.974468),(-7.786626,5.958165),(-7.747145,5.937597),(-7.73495,5.925634),(-7.717173,5.900752),(-7.70322,5.907883),(-7.691541,5.927727),(-7.680483,5.94093),(-7.674023,5.935737),(-7.663119,5.923102),(-7.64834,5.911294),(-7.630615,5.908607),(-7.629116,5.904938),(-7.608497,5.893698),(-7.605758,5.89313),(-7.584158,5.883699),(-7.579093,5.877188),(-7.587207,5.866775),(-7.579507,5.860341),(-7.566278,5.843727),(-7.559146,5.839851),(-7.547571,5.845432),(-7.535194,5.855845),(-7.527779,5.858842),(-7.531086,5.842022),(-7.515583,5.839231),(-7.508813,5.826183),(-7.503904,5.812101),(-7.494396,5.806184),(-7.486644,5.813806),(-7.481683,5.830214),(-7.478583,5.859876),(-7.446543,5.845949),(-7.442047,5.811274),(-7.448559,5.769055),(-7.449489,5.732364),(-7.43104,5.653971),(-7.417963,5.627757),(-7.396676,5.585087),(-7.384118,5.568964),(-7.405099,5.5724),(-7.41192,5.560359),(-7.407114,5.541394),(-7.393627,5.523979),(-7.41347,5.522816),(-7.419982,5.523127),(-7.409852,5.513417),(-7.408148,5.511784),(-7.41347,5.503489),(-7.425149,5.494162),(-7.432384,5.479563),(-7.431712,5.441917),(-7.436208,5.431323),(-7.450212,5.438713),(-7.449954,5.422771),(-7.446027,5.404917),(-7.439102,5.387812),(-7.429697,5.374298),(-7.414866,5.365978),(-7.404375,5.367994),(-7.396934,5.366444),(-7.391456,5.347685),(-7.388821,5.328384),(-7.389751,5.317144),(-7.39559,5.306783),(-7.407683,5.289859),(-7.410111,5.283167),(-7.41378,5.266708),(-7.418793,5.260455),(-7.428508,5.25614),(-7.430834,5.25999),(-7.430885,5.267535),(-7.434089,5.274072),(-7.435175,5.279291),(-7.43011,5.284252),(-7.428508,5.288154),(-7.439929,5.290169),(-7.442409,5.287379),(-7.468247,5.273452),(-7.475689,5.264228),(-7.479926,5.255158),(-7.490468,5.222189),(-7.482252,5.17506),(-7.483027,5.158239),(-7.488918,5.140669),(-7.515841,5.091809),(-7.529923,5.09739),(-7.546847,5.0977),(-7.563229,5.090156),(-7.575786,5.072069),(-7.578163,5.054731),(-7.57036,4.99688),(-7.570205,4.99688),(-7.5716,4.975382),(-7.564624,4.961585),(-7.555322,4.948433),(-7.549586,4.929054),(-7.549534,4.913887),(-7.553462,4.914249),(-7.561782,4.920166),(-7.574959,4.921561),(-7.6055,4.897428),(-7.605965,4.887997),(-7.598834,4.857095),(-7.600229,4.845804),(-7.605862,4.825133),(-7.605997,4.817363),(-7.606017,4.816193),(-7.601004,4.804643),(-7.593408,4.797434),(-7.585553,4.791543),(-7.579558,4.784283),(-7.576871,4.775291),(-7.572479,4.723356),(-7.576871,4.509338),(-7.575424,4.500347),(-7.572737,4.491122),(-7.570929,4.48071),(-7.572841,4.455388),(-7.565709,4.435208),(-7.564314,4.425829),(-7.567415,4.415933),(-7.577853,4.397278),(-7.579197,4.387149),(-7.573771,4.375393),(-7.540666,4.352845),(-7.541005,4.352769),(-7.593251,4.347235),(-7.608225,4.349433),(-7.631663,4.35928),(-7.644439,4.361518),(-7.712717,4.361518),(-7.721018,4.364691),(-7.725413,4.371772),(-7.728627,4.378811),(-7.733225,4.382025),(-7.749501,4.383246),(-7.788726,4.398912),(-7.801015,4.406317),(-7.812327,4.416083),(-7.818267,4.427191),(-7.823801,4.440741),(-7.832875,4.452216),(-7.862904,4.461371),(-7.88683,4.48017),(-7.89802,4.484442),(-7.91454,4.486518),(-7.924428,4.491685),(-7.9322,4.498358),(-7.942006,4.504869),(-7.959869,4.51203),(-8.003163,4.523505),(-8.037506,4.528754),(-8.100331,4.55329),(-8.222035,4.568061)
,(-8.257314,4.579983),(-8.300201,4.627265),(-8.316029,4.635199),(-8.335357,4.638821),(-8.438547,4.676174),(-8.549957,4.754869),(-8.607533,4.786933),(-8.74706,4.837714),(-8.778147,4.854315),(-8.848297,4.91356),(-8.874338,4.930121),(-8.9051,4.944322),(-8.915517,4.946275),(-8.936106,4.95539),(-8.966786,4.96011),(-8.991322,4.967515),(-9.014801,4.978461),(-9.031972,4.991441),(-9.028717,4.993638),(-9.028066,4.994289),(-9.027821,4.995429),(-9.025787,4.998928),(-9.069651,5.010972),(-9.10497,5.031562),(-9.162262,5.080878),(-9.17809,5.09101),(-9.212758,5.107408),(-9.243967,5.130561),(-9.261952,5.137274),(-9.278717,5.145738),(-9.291982,5.163479),(-9.278961,5.163479),(-9.292144,5.181627),(-9.312245,5.192613),(-9.334462,5.200995),(-9.354075,5.211168),(-9.470815,5.320461),(-9.477935,5.335354),(-9.488515,5.371649),(-9.497467,5.38939),(-9.508046,5.399604),(-9.524281,5.410793),(-9.541819,5.419827),(-9.556508,5.423489),(-9.559397,5.425035),(-9.587636,5.430732),(-9.593658,5.430325),(-9.59496,5.440009),(-9.583079,5.454901),(-9.580719,5.464504),(-9.596995,5.488349),(-9.696767,5.546373),(-9.842112,5.685452),(-9.979644,5.798082),(-10.015207,5.838121),(-10.022613,5.844224),(-10.039947,5.854315),(-10.046213,5.861721),(-10.049062,5.871772),(-10.044789,5.884467),(-10.046213,5.89643),(-10.055531,5.91177),(-10.071278,5.924791),(-10.089101,5.933987),(-10.104237,5.937445),(-10.11498,5.943305),(-10.185699,5.999091),(-10.226552,6.043606),(-10.279897,6.076972),(-10.298166,6.084906),(-10.336008,6.092231),(-10.353668,6.102362),(-10.364125,6.115668),(-10.36148,6.129218),(-10.352284,6.123521),(-10.344472,6.122707),(-10.338124,6.126776),(-10.333567,6.136054),(-10.353627,6.140611),(-10.367991,6.151801),(-10.380605,6.165351),(-10.395619,6.176988),(-10.428131,6.197008),(-10.444692,6.203843),(-10.463938,6.204983),(-10.463938,6.197496),(-10.41218,6.171047),(-10.389394,6.152167),(-10.381947,6.129218),(-10.397125,6.147854),(-10.424875,6.164618),(-10.456695,6.177476),(-10.659413,6.227484),(-10.788319,6.292955),(-10.807851,6.310858),(-10.800364,6.327826),(-10.795562,6.319648),(-10.790354,6.31391),(-10.78307,6.310004),(-10.772369,6.307359),(-10.789052,6.334947),(-10.792348,6.359036),(-10.782338,6.381415),(-10.759389,6.404202),(-10.773671,6.404202),(-10.784088,6.401272),(-10.792348,6.396226),(-10.800364,6.389879),(-10.82433,6.43773),(-10.85733,6.472846),(-10.895904,6.49901),(-11.053619,6.584174),(-11.233062,6.648098),(-11.353179,6.701158),(-11.374867,6.726874),(-11.375722,6.766669),(-11.369944,6.788886),(-11.368642,6.798245),(-11.368276,6.811347),(-11.381947,6.836819),(-11.38561,6.841783),(-11.404408,6.847073),(-11.424224,6.85932),(-11.44107,6.873277),(-11.450795,6.883368),(-11.476186,6.91942),(-11.444491,6.933945),(-11.436739,6.938544),(-11.438858,6.963865),(-11.434672,6.980815),(-11.407283,6.99637),(-11.397723,7.012803),(-11.391522,7.031768),(-11.3893,7.046703),(-11.392917,7.066857),(-11.389714,7.073523),(-11.367906,7.075952),(-11.360465,7.078639),(-11.355142,7.084065),(-11.353127,7.092798),(-11.356434,7.105356),(-11.370593,7.128403),(-11.373229,7.138842),(-11.370128,7.142614),(-11.352041,7.159254),(-11.317211,7.213566),(-11.30021,7.21708),(-11.217889,7.25408),(-11.209259,7.262194),(-11.144213,7.345193),(-11.126319,7.368027),(-10.977904,7.484919),(-10.937028,7.50776),(-10.898994,7.516493),(-10.885713,7.522178),(-10.874603,7.530859),(-10.864025,7.543684),(-10.730581,7.705474),(-10.692598,7.737565),(-10.645418,7.763041),(-10.638131,7.765522),(-10.615187,7.769036),(-10.617668,8.010778),(-10.616221,8.022509),(-10.612138,8.031862),(-10.603147
,8.039226),(-10.577825,8.053153),(-10.56718,8.063204),(-10.543977,8.10607),(-10.529249,8.124286),(-10.505736,8.135835),(-10.485738,8.138858),(-10.405071,8.139272),(-10.384555,8.142114),(-10.364918,8.148212),(-10.34554,8.159348),(-10.320942,8.183119),(-10.318151,8.201645),(-10.326006,8.221541),(-10.333292,8.249756),(-10.331277,8.272287),(-10.313655,8.310347),(-10.282908,8.432949),(-10.282236,8.484625),(-10.271332,8.484109),(-10.260842,8.485917),(-10.240275,8.493746),(-10.23242,8.477055),(-10.215625,8.486589),(-10.181829,8.519584),(-10.164879,8.524649),(-10.147981,8.524287),(-10.091085,8.510102),(-10.076099,8.501498),(-10.069277,8.486873),(-10.079354,8.439951),(-10.075013,8.426644),(-10.061681,8.422174),(-10.04039,8.425663),(-10.000186,8.449305),(-9.961119,8.479561),(-9.944169,8.488346),(-9.928149,8.493436),(-9.911354,8.495658),(-9.891872,8.495917),(-9.886911,8.49664),(-9.872804,8.500361),(-9.868153,8.498087),(-9.863502,8.49354),(-9.858644,8.487054),(-9.849704,8.48969),(-9.831617,8.499017),(-9.806813,8.506252),(-9.805469,8.512789),(-9.808828,8.520463),(-9.808854,8.528473),(-9.802007,8.53718),(-9.794824,8.544544),(-9.790018,8.553071),(-9.790586,8.565396),(-9.773585,8.563018),(-9.763043,8.54997),(-9.757307,8.53271),(-9.750899,8.485866),(-9.7463,8.468063),(-9.735913,8.450493),(-9.722115,8.435636)] +Libya [(19.291677,30.287055),(19.338552,30.298814),(19.418956,30.333238),(19.530528,30.396796),(19.551036,30.404934),(19.596039,30.412014),(19.616384,30.421373),(19.748871,30.510932),(19.914317,30.682929),(19.958181,30.749701),(19.982188,30.778632),(20.018728,30.803778),(20.036469,30.819078),(20.099864,30.935004),(20.146658,31.045559),(20.152354,31.088202),(20.146658,31.216783),(20.128103,31.242987),(20.103282,31.308743),(20.082205,31.336615),(20.038341,31.375881),(20.022227,31.396796),(19.954845,31.559475),(19.948904,31.60102),(19.948009,31.645087),(19.944347,31.661444),(19.926931,31.699164),(19.920665,31.717719),(19.917247,31.759223),(19.919688,31.804674),(19.934337,31.888414),(19.948009,31.925971),(19.945811,31.935981),(19.941742,31.946926),(19.940115,31.95775),(19.944591,31.967515),(19.957205,31.982733),(20.048025,32.146064),(20.071544,32.176418),(20.261729,32.339545),(20.277192,32.360826),(20.32545,32.409817),(20.345714,32.419908),(20.385509,32.434638),(20.403331,32.447333),(20.416759,32.459459),(20.562673,32.557847),(20.79656,32.643012),(20.934093,32.722602),(21.050955,32.772284),(21.09669,32.781073),(21.286794,32.771145),(21.331879,32.774644),(21.414806,32.793402),(21.4546,32.812567),(21.608653,32.930732),(21.62322,32.936754),(21.671723,32.939887),(21.716319,32.950385),(21.743663,32.930854),(21.787446,32.915961),(21.836111,32.906562),(21.877452,32.903225),(22.096202,32.923082),(22.101329,32.926418),(22.111827,32.940904),(22.11671,32.944159),(22.138845,32.947577),(22.149587,32.947943),(22.161388,32.944159),(22.169444,32.938218),(22.171072,32.932929),(22.171072,32.926256),(22.175059,32.916246),(22.206798,32.888129),(22.247325,32.880845),(22.33961,32.882148),(22.379161,32.870347),(22.477306,32.807074),(22.496755,32.797309),(22.513845,32.790961),(22.532237,32.787543),(22.578868,32.784613),(22.812511,32.724514),(22.86671,32.694159),(22.884532,32.689765),(22.895844,32.688178),(22.92921,32.676703),(22.942638,32.675035),(22.976899,32.676703),(22.999685,32.670559),(23.041352,32.652533),(23.086436,32.64647),(23.10613,32.639838),(23.118663,32.62934),(23.120942,32.61522),(23.115245,32.604926),(23.097911,32.586656),(23.094412,32.577379),(23.103852,32.516343),(23.113292,32.5008),(23.128673,32.488471),(23.148
936,32.478095),(23.140961,32.458564),(23.128429,32.395494),(23.108653,32.40233),(23.112153,32.411933),(23.109874,32.420315),(23.103363,32.428168),(23.094412,32.436469),(23.079112,32.354438),(23.080577,32.334052),(23.095551,32.323554),(23.156505,32.303534),(23.178966,32.299954),(23.195079,32.291653),(23.214529,32.272406),(23.230805,32.250637),(23.2435,32.220608),(23.257335,32.213772),(23.273204,32.209703),(23.285411,32.203762),(23.293224,32.192328),(23.302501,32.170722),(23.312185,32.162746),(23.310395,32.179267),(23.299083,32.224189),(23.31837,32.217841),(23.36199,32.217108),(23.37086,32.214016),(23.398936,32.194037),(23.405284,32.19066),(23.531098,32.181383),(23.669688,32.183254),(23.840343,32.149115),(23.998709,32.093899),(23.975271,32.074205),(23.970714,32.066596),(23.97877,32.062242),(24.004893,32.052924),(24.067882,32.011176),(24.090668,32.005764),(24.280935,32.007554),(24.546072,31.991441),(24.636241,32.014879),(24.658539,32.028632),(24.670258,32.032416),(24.710623,32.02558),(24.718598,32.027533),(24.726329,32.031317),(24.733735,32.033149),(24.741873,32.028998),(24.74586,32.02619),(24.758067,32.019965),(24.771983,32.01675),(24.791759,32.007758),(24.824392,32.002997),(24.865408,31.989325),(24.9817,31.967922),(25.017914,31.949042),(25.033376,31.916327),(25.024425,31.876614),(25.022227,31.854682),(25.029552,31.8369),(25.063731,31.804104),(25.116873,31.737982),(25.122081,31.720852),(25.136729,31.692572),(25.156261,31.663072),(25.152517,31.659817),(25.15089,31.65648),(25.088422,31.613524),(25.060413,31.579366),(25.023723,31.494409),(24.861252,31.38036),(24.848953,31.34863),(24.848643,31.312043),(24.858565,31.227914),(24.859495,31.145955),(24.882646,31.039088),(24.889571,31.018831),(24.924607,30.956768),(24.93763,30.905763),(24.981245,30.825406),(24.994887,30.785099),(24.988583,30.750579),(24.973597,30.716266),(24.961194,30.676268),(24.950652,30.621233),(24.92037,30.552348),(24.917063,30.532453),(24.907658,30.498656),(24.888124,30.464343),(24.829729,30.387552),(24.799447,30.360835),(24.772369,30.331018),(24.76286,30.317065),(24.746117,30.282752),(24.704672,30.217898),(24.688653,30.182758),(24.688343,30.144156),(24.697334,30.122607),(24.72555,30.078372),(24.811643,29.89089),(24.81619,29.868411),(24.81495,29.848567),(24.798207,29.796425),(24.804925,29.7548),(24.856188,29.679482),(24.870657,29.638347),(24.870554,29.614007),(24.861562,29.541557),(24.863836,29.502903),(24.873758,29.468332),(24.956647,29.292683),(24.97432,29.23894),(24.981245,29.181372),(24.981245,29.057917),(24.981245,28.890176),(24.981245,28.491544),(24.981245,28.096701),(24.981245,28.018549),(24.981245,27.666503),(24.981245,27.627824),(24.981245,27.475456),(24.981245,27.138035),(24.981245,27.049239),(24.981245,26.859964),(24.981245,26.664523),(24.981245,26.372707),(24.981245,26.080838),(24.981245,25.78897),(24.981245,25.497256),(24.981245,25.205439),(24.981245,24.913571),(24.981245,24.621728),(24.981245,24.329963),(24.981245,24.038146),(24.981245,23.746278),(24.981245,23.454435),(24.981245,23.16267),(24.981245,22.870801),(24.981245,22.578985),(24.981245,22.287168),(24.981245,21.995351),(24.981245,21.870811),(24.981141,21.746167),(24.981141,21.621575),(24.981141,21.497035),(24.981141,21.372443),(24.981038,21.247851),(24.980935,21.123311),(24.980935,20.998719),(24.980935,20.874128),(24.980831,20.749536),(24.980831,20.624944),(24.980831,20.500352),(24.980728,20.405319),(24.980625,20.310234),(24.980625,20.215098),(24.980521,20.120013),(24.980521,20.008391),(24.980521,20.002062),(24.979591,20.000512),(24.978661,19.999013),(24.97773
1,19.997463),(24.976801,19.995964),(24.974527,19.995654),(24.972356,19.995421),(24.970186,19.995189),(24.968016,19.994956),(24.867247,19.994956),(24.721312,19.995034),(24.599149,19.99506),(24.474609,19.995189),(24.34087,19.995292),(24.227906,19.995318),(24.142433,19.995421),(23.981306,19.995421),(23.981306,19.870597),(23.981306,19.745824),(23.981306,19.621026),(23.981306,19.496124),(23.857799,19.557179),(23.734292,19.618132),(23.610786,19.67911),(23.487279,19.740166),(23.363772,19.80117),(23.240162,19.862096),(23.116759,19.923152),(22.993149,19.984104),(22.869642,20.045108),(22.746136,20.106061),(22.622629,20.167091),(22.499122,20.228095),(22.375616,20.289124),(22.252109,20.350077),(22.128602,20.411055),(22.005096,20.472033),(21.881486,20.533063),(21.758082,20.59399),(21.634472,20.654968),(21.510966,20.716049),(21.387459,20.777027),(21.263952,20.838006),(21.140342,20.899036),(21.016939,20.959962),(20.893329,21.02094),(20.769926,21.08197),(20.646419,21.142897),(20.522809,21.203926),(20.399406,21.265008),(20.275796,21.325935),(20.152289,21.386913),(20.028782,21.447943),(19.905276,21.508869),(19.781665,21.569899),(19.658262,21.630877),(19.534652,21.691855),(19.411145,21.752885),(19.287639,21.813915),(19.185837,21.864177),(19.164132,21.874893),(19.040729,21.935871),(18.917119,21.99685),(18.793612,22.05788),(18.670105,22.118806),(18.546599,22.179836),(18.422989,22.240866),(18.299585,22.301844),(18.175975,22.362822),(18.052469,22.423852),(17.928962,22.484779),(17.805455,22.545757),(17.681949,22.606787),(17.558442,22.667765),(17.434832,22.728795),(17.311429,22.789799),(17.187922,22.850751),(17.064312,22.911755),(16.940909,22.972733),(16.817299,23.03366),(16.693792,23.094664),(16.570285,23.15572),(16.446779,23.216724),(16.323169,23.277728),(16.199765,23.338706),(16.076155,23.399632),(15.985101,23.44472),(15.964637,23.442214),(15.920196,23.422034),(15.862525,23.395937),(15.804854,23.369789),(15.747183,23.343641),(15.689616,23.317518),(15.631945,23.291344),(15.574274,23.265196),(15.516499,23.239048),(15.458829,23.212899),(15.401158,23.186751),(15.343487,23.160577),(15.285816,23.134455),(15.228145,23.108358),(15.170371,23.082261),(15.112907,23.056113),(15.055236,23.029965),(14.997461,23.003817),(14.979909,22.995664),(14.979271,22.995368),(14.979168,22.995316),(14.808016,22.908887),(14.63707,22.822536),(14.466021,22.736184),(14.294869,22.649833),(14.23172,22.617949),(14.216217,22.616295),(14.201644,22.62322),(14.156169,22.660737),(14.031009,22.763702),(13.906055,22.866616),(13.780895,22.969633),(13.655786,23.072624),(13.599562,23.119029),(13.482257,23.179672),(13.380661,23.202383),(13.292398,23.222098),(13.204134,23.241709),(13.115871,23.261449),(13.027607,23.281138),(12.939447,23.300827),(12.851184,23.32049),(12.763024,23.340179),(12.674657,23.359893),(12.586497,23.379582),(12.498337,23.399245),(12.409971,23.418882),(12.321811,23.438622),(12.233651,23.458285),(12.145284,23.477948),(12.057021,23.497611),(11.968861,23.517351),(11.89238,23.660056),(11.822306,23.790642),(11.71014,24.000137),(11.708618,24.00298),(11.636478,24.137649),(11.567128,24.26684),(11.541393,24.29751),(11.50863,24.313814),(11.46724,24.326339),(11.149996,24.422335),(10.911354,24.494501),(10.720668,24.552301),(10.699067,24.556125),(10.677363,24.553851),(10.566776,24.516463),(10.450194,24.476931),(10.410506,24.473288),(10.391799,24.47998),(10.260335,24.576641),(10.242248,24.595115),(10.229742,24.629945),(10.212172,24.722859),(10.193362,24.749937),(10.044534,24.829622),(10.032028,24.856339),(10.029961,24.995065),(10.025517,25.1364),(10.
021383,25.26802),(10.007947,25.331426),(9.9695,25.395402),(9.816744,25.588465),(9.693961,25.743494),(9.541309,25.936351),(9.401162,26.113394),(9.377804,26.168946),(9.402506,26.216179),(9.416458,26.23225),(9.434752,26.271989),(9.468858,26.301031),(9.477643,26.315552),(9.481364,26.332606),(9.482604,26.352553),(9.835761,26.504223),(9.854571,26.524377),(9.896326,26.652793),(9.894052,26.67398),(9.88258,26.701782),(9.885474,26.736612),(9.910588,26.843117),(9.906661,26.857483),(9.890848,26.869576),(9.846096,26.891951),(9.835141,26.900995),(9.825529,26.92058),(9.806099,27.025122),(9.721349,27.291875),(9.721866,27.308463),(9.726517,27.32469),(9.743053,27.364119),(9.756283,27.42303),(9.770545,27.444217),(9.813643,27.486489),(9.821602,27.505687),(9.81168,27.526538),(9.797107,27.548914),(9.793903,27.569739),(9.818501,27.585733),(9.846613,27.599298),(9.863356,27.619246),(9.931274,27.818642),(9.934256,27.827398),(9.93591,27.866724),(9.789872,28.209442),(9.776953,28.267578),(9.827276,28.618647),(9.851264,28.785996),(9.848267,28.975726),(9.826149,29.128533),(9.746929,29.368428),(9.667709,29.608323),(9.54968,29.802316),(9.421729,29.968714),(9.369487,30.022794),(9.310005,30.084366),(9.286544,30.117129),(9.519708,30.228905),(9.743467,30.331328),(9.772922,30.338098),(9.845786,30.342283),(9.871314,30.355151),(9.995648,30.494522),(10.101171,30.641697),(10.192225,30.731252),(10.253927,30.841788),(10.26974,30.882147),(10.270153,30.915633),(10.245038,30.985707),(10.240594,31.021157),(10.246175,31.059552),(10.244832,31.078156),(10.213206,31.135362),(10.182717,31.240782),(10.108096,31.411831),(10.106235,31.429194),(10.116881,31.494409),(10.13259,31.517561),(10.196462,31.57859),(10.263952,31.680496),(10.315422,31.715843),(10.427766,31.714603),(10.482543,31.733103),(10.498873,31.744317),(10.513549,31.757029),(10.525538,31.772067),(10.542384,31.806638),(10.584552,31.84028),(10.597575,31.873508),(10.605946,31.953606),(10.628477,31.974122),(10.647288,31.971951),(10.665271,31.963218),(10.683564,31.957017),(10.703098,31.962185),(10.736585,31.985387),(10.772861,32.004508),(10.805624,32.032361),(10.845932,32.111788),(10.873217,32.136696),(11.182341,32.262223),(11.444035,32.36849),(11.513901,32.407997),(11.526085,32.417863),(11.546354,32.434275),(11.564131,32.465487),(11.560617,32.507578),(11.537363,32.543519),(11.47039,32.599277),(11.449926,32.637957),(11.449203,32.693018),(11.463775,32.798464),(11.456541,32.902101),(11.474421,32.969875),(11.474937,33.025841),(11.477418,33.041214),(11.506047,33.136376),(11.505112,33.181225),(11.505138,33.18122),(11.52589,33.176947),(11.560232,33.16592),(11.574962,33.157945),(11.587657,33.13231),(11.603038,33.121487),(11.635753,33.10871),(11.756602,33.092515),(11.793468,33.080756),(11.77296,33.099351),(11.707774,33.117174),(11.691091,33.135972),(11.739268,33.112982),(11.847016,33.082831),(12.08074,32.957831),(12.122406,32.921332),(12.160655,32.911689),(12.224864,32.875922),(12.303884,32.843085),(12.348318,32.832668),(12.729828,32.79857),(12.824392,32.804348),(13.074474,32.86636),(13.115489,32.882148),(13.147716,32.903998),(13.164724,32.912746),(13.187755,32.916246),(13.257091,32.918647),(13.351736,32.904242),(13.363129,32.899563),(13.373546,32.892035),(13.571462,32.802802),(13.616954,32.793402),(13.78712,32.799628),(13.830333,32.794094),(13.915294,32.772773),(13.991222,32.744574),(14.175548,32.717678),(14.185313,32.714504),(14.20338,32.700344),(14.209321,32.697211),(14.226329,32.690904),(14.262462,32.660305),(14.317149,32.630439),(14.404063,32.56684),(14.449392,32.525377),(14.469493,32.519029)
,(14.631602,32.495673),(14.647634,32.488267),(14.662934,32.476752),(14.706716,32.456285),(14.726817,32.450751),(14.911143,32.443915),(15.114757,32.412787),(15.160004,32.398668),(15.181977,32.395494),(15.195974,32.389797),(15.233246,32.347113),(15.269705,32.328843),(15.283051,32.317288),(15.292166,32.280911),(15.300792,32.26081),(15.322602,32.224189),(15.364513,32.174709),(15.370453,32.159084),(15.368663,32.138007),(15.35963,32.102118),(15.356863,32.061672),(15.355235,32.03852),(15.361095,31.965318),(15.370453,31.929389),(15.490408,31.664862),(15.514415,31.628363),(15.622813,31.49726),(15.687511,31.437812),(15.761485,31.388129),(16.044607,31.275539),(16.358653,31.227118),(16.728038,31.223822),(16.941905,31.183295),(17.16033,31.118801),(17.330577,31.086493),(17.365896,31.086615),(17.375011,31.083075),(17.382009,31.078762),(17.41863,31.062161),(17.447276,31.038723),(17.457774,31.033352),(17.466807,31.029975),(17.613943,30.992499),(17.691742,30.959662),(17.776052,30.935207),(17.848643,30.923651),(17.861664,30.913642),(17.889171,30.877916),(17.907481,30.863511),(17.926768,30.85517),(18.016856,30.837877),(18.175304,30.786078),(18.208344,30.769232),(18.235688,30.746812),(18.324229,30.656887),(18.56837,30.500067),(18.623302,30.445746),(18.653168,30.422309),(18.719249,30.391506),(18.760997,30.384223),(18.780528,30.375922),(18.925955,30.293362),(18.968028,30.277737),(19.056814,30.266181),(19.146007,30.264716),(19.239757,30.273993),(19.291677,30.287055)] +Saint Lucia [(-60.88679,14.010077),(-60.882965,13.980618),(-60.883046,13.954413),(-60.89509,13.877346),(-60.892201,13.833197),(-60.904612,13.784247),(-60.907297,13.77733),(-60.910308,13.776109),(-60.922841,13.772691),(-60.927113,13.769924),(-60.931752,13.760321),(-60.940785,13.722113),(-60.947011,13.716295),(-60.950103,13.714667),(-60.955068,13.714667),(-60.973378,13.739),(-61.048166,13.76732),(-61.064931,13.786933),(-61.068105,13.792182),(-61.074208,13.798733),(-61.078114,13.806301),(-61.071156,13.821967),(-61.070058,13.832099),(-61.071156,13.851793),(-61.072743,13.858222),(-61.075592,13.863267),(-61.078114,13.869615),(-61.078521,13.879788),(-61.075917,13.890367),(-61.066884,13.909003),(-61.064931,13.917304),(-61.059478,13.93122),(-61.035553,13.964911),(-61.026601,13.987372),(-61.002268,14.023139),(-60.99706,14.027167),(-60.987294,14.032538),(-60.98233,14.03734),(-60.97997,14.043524),(-60.976064,14.061184),(-60.972076,14.068101),(-60.947621,14.102281),(-60.933461,14.111884),(-60.914133,14.105699),(-60.918609,14.088446),(-60.911936,14.079495),(-60.901031,14.073391),(-60.892974,14.064683),(-60.890207,14.052232),(-60.88679,14.010077)] +Liechtenstein [(9.581203,47.05687),(9.560636,47.0524),(9.499554,47.059351),(9.477023,47.063898),(9.475886,47.073226),(9.487565,47.083949),(9.502861,47.094698),(9.51237,47.10803),(9.511853,47.129372),(9.503481,47.145392),(9.492629,47.15981),(9.484981,47.176346),(9.487358,47.210014),(9.504618,47.243732),(9.521155,47.262801),(9.53025,47.253654),(9.547096,47.243035),(9.540378,47.229108),(9.544823,47.220323),(9.554331,47.211615),(9.562909,47.19774),(9.561876,47.190609),(9.551954,47.175571),(9.552057,47.16689),(9.581823,47.154901),(9.605594,47.132266),(9.615723,47.106764),(9.608798,47.080771),(9.581203,47.05687)] +Lesotho 
[(28.980846,-28.909035),(28.995418,-28.908312),(29.018983,-28.913996),(29.040532,-28.922988),(29.049679,-28.933013),(29.049989,-28.95265),(29.055776,-28.965363),(29.069006,-28.973321),(29.091536,-28.978592),(29.125953,-28.992441),(29.208429,-29.068922),(29.241398,-29.077707),(29.281396,-29.078637),(29.311781,-29.089799),(29.316846,-29.147367),(29.32806,-29.159356),(29.343717,-29.169795),(29.357463,-29.183334),(29.365008,-29.200284),(29.375964,-29.235527),(29.403869,-29.268703),(29.413997,-29.290304),(29.414928,-29.296402),(29.413997,-29.311698),(29.415031,-29.318933),(29.419475,-29.325031),(29.432808,-29.336606),(29.435908,-29.342394),(29.432239,-29.356967),(29.413739,-29.380118),(29.411879,-29.395827),(29.41255,-29.410297),(29.4079,-29.421459),(29.399528,-29.430347),(29.388159,-29.437789),(29.361804,-29.447917),(29.332039,-29.455979),(29.303823,-29.466831),(29.282739,-29.485641),(29.275505,-29.506622),(29.275199,-29.528295),(29.275143,-29.532253),(29.282016,-29.556128),(29.296279,-29.571631),(29.279949,-29.581449),(29.278398,-29.597055),(29.278605,-29.613592),(29.26734,-29.626098),(29.190342,-29.646871),(29.1613,-29.666922),(29.145073,-29.691933),(29.132413,-29.720252),(29.113757,-29.750121),(29.113599,-29.752851),(29.11231,-29.775132),(29.106316,-29.801591),(29.105799,-29.825569),(29.120372,-29.843449),(29.132981,-29.852027),(29.135772,-29.859158),(29.135151,-29.868047),(29.137322,-29.882103),(29.148277,-29.901326),(29.150654,-29.911145),(29.144246,-29.919723),(29.132774,-29.924684),(29.106316,-29.931505),(29.09412,-29.936259),(29.015786,-29.978587),(28.976298,-29.999925),(28.860026,-30.066381),(28.795637,-30.089428),(28.729491,-30.101831),(28.639678,-30.131493),(28.605881,-30.131286),(28.557306,-30.114957),(28.540562,-30.113303),(28.524336,-30.116714),(28.480928,-30.139348),(28.456226,-30.146996),(28.407961,-30.140898),(28.383879,-30.144205),(28.364087,-30.159295),(28.355596,-30.17324),(28.338663,-30.201049),(28.323005,-30.214692),(28.313703,-30.22079),(28.3013,-30.2365),(28.294376,-30.242494),(28.284712,-30.246215),(28.255308,-30.249935),(28.237015,-30.256343),(28.216034,-30.267402),(28.200635,-30.282492),(28.198464,-30.301095),(28.20942,-30.314634),(28.224096,-30.320525),(28.235775,-30.329414),(28.238462,-30.352255),(28.231434,-30.373855),(28.218205,-30.387395),(28.133249,-30.445376),(28.125394,-30.457261),(28.127564,-30.465116),(28.139243,-30.482686),(28.142447,-30.492608),(28.141207,-30.500773),(28.124257,-30.547282),(28.120846,-30.553379),(28.112837,-30.562474),(28.104878,-30.568366),(28.096352,-30.572396),(28.088032,-30.577564),(28.081572,-30.586866),(28.075836,-30.60702),(28.076508,-30.623246),(28.078988,-30.639266),(28.078782,-30.658799),(28.054701,-30.649704),(27.936568,-30.640816),(27.915588,-30.634925),(27.895641,-30.626243),(27.886132,-30.619732),(27.878277,-30.612911),(27.869802,-30.607123),(27.858227,-30.603712),(27.849545,-30.604952),(27.830425,-30.613117),(27.8204,-30.614564),(27.778438,-30.609707),(27.743608,-30.600302),(27.713533,-30.583765),(27.659531,-30.530538),(27.598604,-30.490334),(27.5923,-30.481859),(27.590646,-30.472868),(27.590129,-30.463876),(27.586822,-30.454987),(27.562741,-30.425532),(27.536593,-30.403208),(27.533364,-30.393425),(27.529668,-30.382227),(27.493184,-30.367654),(27.477475,-30.356286),(27.466726,-30.341093),(27.459388,-30.316598),(27.452102,-30.310087),(27.43603,-30.30988),(27.414843,-30.317528),(27.407815,-30.318562),(27.397635,-30.317218),(27.377708,-30.312324),(27.377016,-30.312154),(27.366164,-30.311017),(27.367817,-30.295411),(27.345545,-3
0.241564),(27.343685,-30.212935),(27.365544,-30.165806),(27.38053,-30.152577),(27.381357,-30.142655),(27.371952,-30.135937),(27.356655,-30.132216),(27.334434,-30.132216),(27.322032,-30.1348),(27.312782,-30.131596),(27.300431,-30.114957),(27.293713,-30.100901),(27.280174,-30.053462),(27.265911,-30.033308),(27.217956,-29.998064),(27.198835,-29.978531),(27.17135,-29.921495),(27.088765,-29.750121),(27.088661,-29.750121),(27.088661,-29.750018),(27.080135,-29.735135),(27.050989,-29.696791),(27.037657,-29.686456),(27.024531,-29.680048),(27.010217,-29.669919),(27.002155,-29.666508),(27.014454,-29.641704),(27.014867,-29.625581),(27.022412,-29.616176),(27.032954,-29.616176),(27.042359,-29.628061),(27.049181,-29.620516),(27.056829,-29.604393),(27.062203,-29.600156),(27.072022,-29.599949),(27.075122,-29.605117),(27.077706,-29.611318),(27.085871,-29.614419),(27.090418,-29.611628),(27.122768,-29.581759),(27.128142,-29.578555),(27.156357,-29.568633),(27.18571,-29.565946),(27.195838,-29.562639),(27.213718,-29.554061),(27.24183,-29.548996),(27.263121,-29.541141),(27.295677,-29.524398),(27.300307,-29.52012),(27.302498,-29.518094),(27.309113,-29.509205),(27.314074,-29.498663),(27.316038,-29.487191),(27.321412,-29.483574),(27.344873,-29.484504),(27.350247,-29.480267),(27.354485,-29.462283),(27.365544,-29.446677),(27.407092,-29.406886),(27.413706,-29.402545),(27.417634,-29.396551),(27.41908,-29.383838),(27.416187,-29.375984),(27.410295,-29.369782),(27.406265,-29.363995),(27.408642,-29.35707),(27.417634,-29.350662),(27.426832,-29.347975),(27.436134,-29.343531),(27.445849,-29.331955),(27.448329,-29.322964),(27.450293,-29.299709),(27.45329,-29.291648),(27.469878,-29.282449),(27.510686,-29.270753),(27.518919,-29.268393),(27.528893,-29.260952),(27.527239,-29.249376),(27.520986,-29.226122),(27.521503,-29.21651),(27.528169,-29.207622),(27.538246,-29.202351),(27.548478,-29.19894),(27.555713,-29.195426),(27.61297,-29.133828),(27.635605,-29.096621),(27.634479,-29.091041),(27.630747,-29.07254),(27.637517,-29.065718),(27.64439,-29.07254),(27.649351,-29.070886),(27.658601,-29.066752),(27.664957,-29.065718),(27.656275,-29.055796),(27.64377,-29.044531),(27.63917,-29.035333),(27.655035,-29.031508),(27.667489,-29.026651),(27.679736,-29.015075),(27.699063,-28.990581),(27.702887,-28.989341),(27.707642,-28.990684),(27.711672,-28.990994),(27.713326,-28.98686),(27.713378,-28.979522),(27.714463,-28.976421),(27.716685,-28.973217),(27.71901,-28.967843),(27.720044,-28.959782),(27.720147,-28.945932),(27.728312,-28.937974),(27.744074,-28.93322),(27.754771,-28.925468),(27.747432,-28.908622),(27.861327,-28.915443),(27.887992,-28.904901),(27.897294,-28.889812),(27.903599,-28.873999),(27.91166,-28.86077),(27.926233,-28.853431),(27.945043,-28.855499),(27.956929,-28.86511),(27.966231,-28.875652),(27.977496,-28.880717),(28.004678,-28.880717),(28.013566,-28.878443),(28.014806,-28.872552),(28.014496,-28.8648),(28.018734,-28.856842),(28.056561,-28.812504),(28.100899,-28.770853),(28.104103,-28.765065),(28.131698,-28.729305),(28.145031,-28.714732),(28.154953,-28.70233),(28.164048,-28.70543),(28.218116,-28.700652),(28.23071,-28.699539),(28.250864,-28.707394),(28.273912,-28.709564),(28.297063,-28.706877),(28.317217,-28.700056),(28.33644,-28.684656),(28.363519,-28.643625),(28.382949,-28.626469),(28.403413,-28.618924),(28.542629,-28.610759),(28.562473,-28.606522),(28.619421,-28.572415),(28.634303,-28.570761),(28.653217,-28.58306),(28.66655,-28.59722),(28.675645,-28.613756),(28.681742,-28.633393),(28.688977,-28.674424),(28.699519,-28.68848),(28.743857,-28
.689514),(28.758637,-28.700469),(28.787679,-28.748735),(28.795947,-28.754316),(28.82716,-28.759277),(28.827937,-28.759534),(28.843696,-28.764755),(28.858786,-28.772816),(28.859014,-28.77299),(28.872842,-28.783565),(28.885658,-28.797208),(28.925862,-28.859529),(28.980846,-28.909035)] +Luxembourg [(6.038628,50.148413),(6.061366,50.150738),(6.070048,50.154201),(6.076456,50.158645),(6.084207,50.159317),(6.097126,50.151307),(6.101467,50.142108),(6.10033,50.132652),(6.102604,50.124745),(6.117487,50.120456),(6.110562,50.105987),(6.105704,50.092241),(6.0994,50.064129),(6.101157,50.063405),(6.107151,50.063767),(6.109425,50.06356),(6.105704,50.060305),(6.096403,50.048626),(6.107255,50.044595),(6.113766,50.036378),(6.119967,50.015501),(6.117177,50.004391),(6.121517,49.996278),(6.138777,49.979328),(6.15397,49.951216),(6.161928,49.942482),(6.172677,49.956228),(6.189834,49.938297),(6.2227,49.887137),(6.238306,49.876905),(6.272206,49.867138),(6.286779,49.86104),(6.291533,49.855563),(6.295568,49.848616),(6.301558,49.838303),(6.302592,49.834944),(6.310653,49.834479),(6.325639,49.84037),(6.334424,49.840215),(6.351374,49.83329),(6.380106,49.815565),(6.397056,49.808589),(6.414523,49.805488),(6.462478,49.805488),(6.493278,49.799597),(6.497205,49.799442),(6.502579,49.79567),(6.502269,49.795876),(6.500099,49.794481),(6.499996,49.786058),(6.498549,49.785386),(6.493794,49.76363),(6.493691,49.756551),(6.496378,49.752106),(6.495965,49.748231),(6.48625,49.742701),(6.485526,49.733761),(6.4878,49.725596),(6.492554,49.718413),(6.499686,49.712212),(6.491004,49.714434),(6.486353,49.712936),(6.482942,49.708078),(6.489764,49.702962),(6.491727,49.700792),(6.418243,49.669889),(6.402741,49.655782),(6.404498,49.652474),(6.410905,49.649632),(6.417003,49.645756),(6.417933,49.639142),(6.415246,49.634646),(6.407701,49.627411),(6.397573,49.613872),(6.371528,49.592995),(6.356438,49.577595),(6.350754,49.566588),(6.350754,49.53491),(6.348809,49.52529),(6.342279,49.493001),(6.34538,49.462718),(6.345307,49.455349),(6.325329,49.456724),(6.298768,49.466232),(6.251117,49.48946),(6.249365,49.490314),(6.221976,49.497445),(6.193554,49.499409),(6.13609,49.495275),(6.142601,49.486076),(6.114386,49.484371),(6.107255,49.48251),(6.101984,49.476413),(6.102087,49.470005),(6.099813,49.46463),(6.087721,49.461633),(6.078006,49.452021),(6.06333,49.448456),(5.978064,49.445097),(5.9607,49.441324),(5.946954,49.469953),(5.928144,49.482304),(5.864686,49.492174),(5.839571,49.499874),(5.822104,49.510519),(5.790685,49.537753),(5.814456,49.545142),(5.8374,49.56111),(5.846909,49.576717),(5.829959,49.582763),(5.841224,49.589687),(5.849183,49.599713),(5.861998,49.62276),(5.869957,49.628858),(5.879878,49.634853),(5.885459,49.643534),(5.879878,49.657745),(5.872644,49.661724),(5.852593,49.663223),(5.845772,49.666582),(5.842258,49.67392),(5.843705,49.677486),(5.846702,49.68069),(5.847219,49.687201),(5.848769,49.69273),(5.853627,49.696296),(5.857347,49.70043),(5.856403,49.705807),(5.856004,49.708078),(5.850216,49.714021),(5.843705,49.714331),(5.837297,49.712936),(5.831716,49.713607),(5.825721,49.715778),(5.811769,49.718517),(5.805361,49.721876),(5.805774,49.724201),(5.803604,49.738154),(5.802054,49.742856),(5.778489,49.773035),(5.771875,49.779392),(5.759162,49.784456),(5.748414,49.785644),(5.738285,49.788848),(5.727743,49.800217),(5.721645,49.812775),(5.720198,49.824712),(5.724642,49.834272),(5.736528,49.83944),(5.726916,49.845227),(5.728673,49.848845),(5.730533,49.85117),(5.73198,49.854167),(5.731877,49.859955),(5.758129,49.858198),(5.748827,49.867293),(5.726709,49.87819
7),(5.714927,49.881866),(5.718958,49.891374),(5.753581,49.930183),(5.757612,49.93654),(5.759989,49.941604),(5.76371,49.946823),(5.771565,49.953541),(5.778593,49.95659),(5.793372,49.958812),(5.80164,49.963825),(5.812596,49.977571),(5.808565,49.9831),(5.801537,49.988681),(5.802674,50.002479),(5.808875,50.007801),(5.829132,50.013589),(5.836367,50.01855),(5.838847,50.026457),(5.838847,50.035862),(5.83771,50.047437),(5.843705,50.053638),(5.858278,50.061803),(5.864375,50.067488),(5.866856,50.074309),(5.86882,50.090535),(5.872437,50.096892),(5.888973,50.106607),(5.92618,50.118337),(5.941063,50.128311),(5.949021,50.142677),(5.953156,50.156371),(5.96163,50.165621),(5.982921,50.167068),(5.998321,50.174975),(6.004729,50.170479),(6.008036,50.160919),(6.014134,50.153581),(6.022967,50.15082),(6.027363,50.149447),(6.038628,50.148413)] +Latvia [(25.333678,58.031808),(25.339879,58.032583),(25.34639,58.034443),(25.356519,58.034753),(25.39662,58.023178),(25.457185,57.984059),(25.496614,57.970597),(25.53196,57.966437),(25.541676,57.96282),(25.550254,57.955327),(25.551816,57.94986),(25.552114,57.948815),(25.552941,57.942175),(25.558315,57.934243),(25.579399,57.919463),(25.601517,57.912203),(25.625081,57.910291),(25.650093,57.911428),(25.7038,57.921327),(25.714998,57.923391),(25.730295,57.923132),(25.745384,57.90918),(25.754686,57.888897),(25.767502,57.868924),(25.792616,57.855979),(26.002733,57.845979),(26.014515,57.842827),(26.021956,57.837944),(26.025264,57.828409),(26.02175,57.822415),(26.016065,57.816472),(26.013068,57.8073),(26.016065,57.786681),(26.024747,57.774356),(26.135231,57.724772),(26.168511,57.70405),(26.263182,57.620825),(26.297185,57.599147),(26.33863,57.583153),(26.351325,57.580203),(26.430678,57.561764),(26.430924,57.561707),(26.447564,57.552922),(26.48105,57.527136),(26.49955,57.515819),(26.515053,57.517266),(26.528696,57.52388),(26.542235,57.528169),(26.551743,57.526826),(26.570243,57.519591),(26.579855,57.517834),(26.590087,57.520366),(26.594841,57.52649),(26.598355,57.533854),(26.604763,57.540313),(26.634322,57.554421),(26.666982,57.56486),(26.700468,57.570751),(26.778086,57.568477),(26.795759,57.571319),(26.816327,57.581189),(26.828315,57.595039),(26.838547,57.60987),(26.854154,57.622556),(26.872654,57.627181),(26.884953,57.622091),(26.896115,57.614986),(26.911618,57.613435),(26.929188,57.615554),(26.946551,57.615347),(26.981071,57.608991),(27.004635,57.598914),(27.041326,57.568451),(27.061531,57.555661),(27.085354,57.549434),(27.165659,57.546695),(27.319965,57.516129),(27.352935,57.527601),(27.528169,57.528479),(27.522795,57.492022),(27.525637,57.46838),(27.514837,57.44789),(27.511323,57.430397),(27.536283,57.415695),(27.640824,57.389005),(27.694409,57.356375),(27.707952,57.348129),(27.809134,57.313945),(27.827738,57.304953),(27.840295,57.290639),(27.846134,57.267307),(27.841174,57.211212),(27.833939,57.18049),(27.824327,57.15907),(27.794045,57.143387),(27.700975,57.118789),(27.682217,57.103699),(27.696066,57.085457),(27.722059,57.07799),(27.745469,57.067991),(27.75074,57.042359),(27.745779,57.031455),(27.729552,57.009855),(27.723351,56.998796),(27.720767,56.988667),(27.718545,56.96872),(27.715135,56.957403),(27.648731,56.879268),(27.62837,56.844154),(27.644379,56.841772),(27.661185,56.839271),(27.744074,56.864799),(27.786397,56.871258),(27.830838,56.864282),(27.852232,56.854309),(27.891506,56.829711),(27.913521,56.82015),(27.918895,56.805888),(27.900085,56.783047),(27.876107,56.759431),(27.865461,56.742687),(27.882618,56.725479),(27.98101,56.687006),(27.991345,56.669979),(27.992172,56.6
24994),(27.997443,56.603833),(28.010982,56.587555),(28.028656,56.576754),(28.108651,56.55518),(28.126117,56.547764),(28.132215,56.535982),(28.125239,56.527145),(28.099246,56.513399),(28.093251,56.501591),(28.096455,56.491953),(28.104517,56.483065),(28.156503,56.44622),(28.164255,56.437952),(28.167872,56.427125),(28.164255,56.392812),(28.167148,56.369868),(28.174435,56.349637),(28.214794,56.281372),(28.217275,56.270727),(28.215311,56.255999),(28.20942,56.248144),(28.201358,56.241684),(28.193142,56.231039),(28.184408,56.207526),(28.178621,56.18391),(28.169112,56.161741),(28.148907,56.142414),(28.110976,56.156806),(28.06824,56.147582),(28.0511,56.140666),(28.023798,56.12965),(27.98101,56.118023),(27.939669,56.113062),(27.92706,56.109367),(27.911453,56.100246),(27.901532,56.089342),(27.892747,56.077095),(27.880654,56.063866),(27.812545,56.034514),(27.781229,56.016375),(27.776991,55.992397),(27.744435,55.959738),(27.64501,55.922841),(27.617156,55.878554),(27.610128,55.83096),(27.601498,55.809618),(27.592713,55.794244),(27.564601,55.792229),(27.438821,55.79874),(27.405851,55.804347),(27.374587,55.814837),(27.349627,55.831219),(27.329163,55.817576),(27.282448,55.791867),(27.263018,55.787216),(27.235526,55.795846),(27.173204,55.825741),(27.151448,55.832459),(27.110779,55.836283),(26.981071,55.826878),(26.979426,55.826291),(26.957817,55.818584),(26.900146,55.778715),(26.842785,55.719339),(26.822838,55.70611),(26.743049,55.682856),(26.720105,55.681874),(26.666878,55.693966),(26.64011,55.695568),(26.615615,55.687971),(26.594531,55.666991),(26.537481,55.669523),(26.48105,55.678308),(26.34235,55.716342),(26.279718,55.743239),(26.226905,55.796931),(26.203754,55.827446),(26.178639,55.84959),(26.079421,55.89814),(26.017616,55.937362),(25.961251,55.958459),(25.871268,55.992139),(25.795496,56.04013),(25.766261,56.058647),(25.708797,56.081048),(25.696602,56.087895),(25.687817,56.098334),(25.669213,56.132027),(25.661875,56.140812),(25.649679,56.143809),(25.590045,56.141122),(25.572888,56.143034),(25.522762,56.156573),(25.454808,56.149649),(25.349086,56.159876),(25.109196,56.183083),(25.073125,56.197811),(25.04398,56.22799),(25.026203,56.260185),(25.016488,56.2699),(24.983208,56.290493),(24.973183,56.300751),(24.962124,56.319277),(24.936286,56.379919),(24.910138,56.421958),(24.892258,56.438727),(24.870967,56.442602),(24.857118,56.435368),(24.853397,56.424722),(24.852157,56.413664),(24.845852,56.40493),(24.837688,56.403948),(24.81557,56.408806),(24.805545,56.408548),(24.71504,56.386253),(24.678628,56.377283),(24.639147,56.359972),(24.557911,56.29858),(24.538998,56.287625),(24.481327,56.26884),(24.44722,56.260521),(24.414354,56.265507),(24.350792,56.291552),(24.318546,56.29858),(24.287644,56.295531),(24.166824,56.260314),(24.140422,56.2576),(24.138919,56.257446),(24.108637,56.260701),(24.07515,56.271192),(23.981306,56.312404),(23.871107,56.332072),(23.858109,56.334392),(23.825036,56.334909),(23.7561,56.325969),(23.724991,56.328811),(23.723647,56.332118),(23.724784,56.338397),(23.723544,56.345812),(23.715069,56.352737),(23.706697,56.354442),(23.679412,56.351859),(23.609856,56.353822),(23.577713,56.348603),(23.517561,56.328656),(23.481595,56.330103),(23.37804,56.35463),(23.310649,56.370591),(23.288428,56.373046),(23.17257,56.357956),(23.154173,56.351394),(23.162854,56.341007),(23.112728,56.310879),(23.062292,56.304161),(23.016714,56.32385),(22.981367,56.373175),(22.956976,56.401985),(22.924523,56.412113),(22.888969,56.408444),(22.859603,56.39707),(22.822927,56.382864),(22.681437,56.350308),(22.666347,56.349094),(22
.649087,56.353099),(22.604749,56.379092),(22.578222,56.387003),(22.57581,56.387722),(22.511731,56.395964),(22.214902,56.390383),(22.195265,56.394853),(22.158161,56.410253),(22.139144,56.415705),(22.094082,56.41741),(21.983173,56.388353),(21.965322,56.383677),(21.684495,56.310104),(21.596025,56.307882),(21.558921,56.297288),(21.419292,56.237395),(21.403685,56.234476),(21.358313,56.233235),(21.327618,56.224373),(21.290411,56.206932),(21.254857,56.185202),(21.229639,56.163188),(21.212586,56.130994),(21.205351,56.103424),(21.190365,56.084459),(21.150161,56.07818),(21.091353,56.077896),(21.053398,56.072604),(21.053396,56.072618),(21.053396,56.072943),(21.052582,56.077541),(21.042735,56.114569),(21.028819,56.147406),(20.98406,56.210435),(20.973399,56.232001),(20.969086,56.253485),(20.971039,56.25967),(20.980235,56.279283),(20.982677,56.290758),(20.982677,56.301459),(20.981456,56.310492),(20.970958,56.34984),(20.968598,56.369818),(20.970551,56.388983),(20.979015,56.404364),(20.992524,56.417914),(20.998383,56.425686),(21.003184,56.434801),(21.005382,56.445868),(21.005544,56.465522),(21.01059,56.475735),(21.003266,56.49258),(21.000499,56.510647),(21.006358,56.519965),(21.024262,56.510484),(21.030528,56.500149),(21.035818,56.473944),(21.044688,56.462104),(21.030528,56.448432),(21.037446,56.444648),(21.044688,56.442206),(21.044688,56.427965),(21.044688,56.421129),(21.030528,56.408108),(21.053396,56.38996),(21.061778,56.38703),(21.07016,56.38935),(21.077159,56.395209),(21.079763,56.403062),(21.068614,56.424058),(21.066661,56.44184),(21.068126,56.458808),(21.072113,56.46955),(21.067638,56.476386),(21.063243,56.493964),(21.05836,56.503648),(21.052908,56.511054),(21.048106,56.515611),(21.030528,56.524156),(21.019542,56.527086),(21.010509,56.527777),(21.003103,56.530015),(20.996349,56.537828),(20.992198,56.548896),(20.995128,56.553371),(21.000336,56.557685),(21.010509,56.5987),(21.046153,56.656195),(21.05836,56.688666),(21.065196,56.774319),(21.061209,56.794135),(21.05421,56.811957),(21.052745,56.828925),(21.065196,56.84634),(21.147227,56.87641),(21.156749,56.885565),(21.222911,56.907701),(21.240977,56.91828),(21.277599,56.949123),(21.287771,56.955512),(21.300141,56.95954),(21.328217,56.973619),(21.360118,56.989691),(21.382498,57.008938),(21.401622,57.037502),(21.413585,57.073188),(21.414806,57.113837),(21.411876,57.124457),(21.401052,57.151313),(21.403168,57.162055),(21.412608,57.176215),(21.414806,57.184882),(21.412608,57.250067),(21.414806,57.270901),(21.421397,57.288723),(21.433604,57.306464),(21.449474,57.320014),(21.466563,57.325507),(21.480235,57.333645),(21.517751,57.3876),(21.589122,57.438788),(21.699229,57.555325),(21.729666,57.573798),(21.771658,57.586127),(21.955903,57.592963),(21.999278,57.600328),(22.194509,57.65762),(22.483735,57.742499),(22.526134,57.749661),(22.549083,57.750637),(22.560313,57.752753),(22.572602,57.756822),(22.585785,57.759711),(22.600271,57.758043),(22.610362,57.754625),(22.610118,57.753485),(22.604991,57.751044),(22.600271,57.743842),(22.590831,57.706122),(22.587169,57.664984),(22.590994,57.644761),(22.600759,57.630113),(22.656098,57.585761),(22.878917,57.481269),(22.950369,57.432807),(23.031423,57.394477),(23.083344,57.378241),(23.117524,57.373928),(23.130219,57.370795),(23.133149,57.362982),(23.133556,57.353095),(23.138031,57.343166),(23.160167,57.31859),(23.164561,57.313666),(23.174571,57.297349),(23.182384,57.277655),(23.190196,57.242499),(23.198741,57.224189),(23.213552,57.216254),(23.222423,57.207017),(23.250499,57.114447),(23.260916,57.098944),(23.274913,57.092719)
,(23.292654,57.088935),(23.340099,57.05858),(23.422862,57.040229),(23.508556,57.031236),(23.520844,57.02558),(23.569672,56.986274),(23.578624,56.982082),(23.585134,56.979071),(23.632009,56.970933),(23.69516,56.967271),(23.866059,56.99018),(23.926931,57.006334),(23.952891,57.013251),(23.991222,57.031236),(24.009288,57.0456),(24.021983,57.059027),(24.035655,57.068996),(24.074555,57.074937),(24.113048,57.088731),(24.210785,57.123725),(24.219249,57.138007),(24.238617,57.149563),(24.261241,57.157457),(24.278575,57.161038),(24.278819,57.17178),(24.288748,57.1789),(24.312673,57.1883),(24.379161,57.230211),(24.400645,57.258124),(24.408865,57.298163),(24.405528,57.344468),(24.381602,57.469428),(24.379893,57.506537),(24.375173,57.528551),(24.364513,57.538398),(24.358572,57.55036),(24.362966,57.576606),(24.374766,57.613471),(24.363048,57.672268),(24.357677,57.685492),(24.313731,57.725043),(24.299571,57.743842),(24.299571,57.761908),(24.288829,57.808905),(24.285981,57.832587),(24.288829,57.845282),(24.296153,57.857367),(24.306159,57.868186),(24.349655,57.85864),(24.378181,57.859441),(24.401539,57.866469),(24.412701,57.876572),(24.429134,57.900343),(24.441639,57.908094),(24.481327,57.919308),(24.53383,57.945069),(24.553881,57.94742),(24.624057,57.943958),(24.684829,57.947575),(24.700435,57.95419),(24.718832,57.981837),(24.733508,57.992198),(24.764721,57.987909),(24.790765,57.978994),(24.81681,57.976669),(24.827417,57.981877),(24.848333,57.992146),(24.872518,58.00044),(24.949309,58.006435),(24.974837,58.01584),(25.034857,58.048574),(25.048838,58.056199),(25.070645,58.063615),(25.094209,58.067309),(25.141028,58.068007),(25.16697,58.058731),(25.180716,58.038164),(25.189914,58.013514),(25.20211,57.991965),(25.216786,57.985376),(25.232496,57.985376),(25.264535,57.994187),(25.284379,58.00814),(25.277557,58.024108),(25.249549,58.051186),(25.250686,58.068498),(25.263501,58.075138),(25.281692,58.073407),(25.299572,58.065604),(25.306393,58.058731),(25.318278,58.040619),(25.324583,58.034753),(25.333678,58.031808)] +Saint Martin [(-63.017569,18.033391),(-63.085886,18.058511),(-63.107004,18.062109),(-63.107004,18.06212),(-63.107004,18.066962),(-63.115305,18.065375),(-63.130523,18.060248),(-63.134348,18.059475),(-63.14387,18.064683),(-63.14684,18.070299),(-63.143625,18.074164),(-63.113759,18.074652),(-63.096099,18.078803),(-63.083811,18.087958),(-63.079091,18.104193),(-63.070872,18.110012),(-63.052113,18.116848),(-63.031402,18.121894),(-63.017649,18.122138),(-63.017934,18.100531),(-63.011098,18.070746),(-63.010732,18.040839),(-63.017568,18.033393),(-63.017569,18.033391)] +Morocco 
[(-5.40549,35.926519),(-5.399322,35.924465),(-5.398859,35.924504),(-5.389718,35.902047),(-5.378401,35.881687),(-5.362898,35.863807),(-5.340726,35.847366),(-5.340728,35.847357),(-5.34081,35.847113),(-5.345693,35.832587),(-5.342356,35.807685),(-5.327382,35.755845),(-5.322621,35.713772),(-5.318756,35.701728),(-5.312652,35.69245),(-5.304799,35.685207),(-5.298899,35.684312),(-5.28189,35.68651),(-5.276763,35.685207),(-5.27538,35.679389),(-5.277455,35.664211),(-5.276763,35.657904),(-5.259999,35.604722),(-5.24942,35.584621),(-5.228424,35.568508),(-5.186187,35.543769),(-5.150787,35.516099),(-5.066274,35.417467),(-5.050852,35.407538),(-5.026357,35.403998),(-5.008209,35.399237),(-4.988515,35.387885),(-4.954701,35.362454),(-4.926503,35.330308),(-4.909332,35.314521),(-4.86498,35.301988),(-4.810618,35.259996),(-4.770497,35.241034),(-4.628285,35.20539),(-4.495432,35.180772),(-4.494252,35.180569),(-4.376047,35.151842),(-4.347646,35.150133),(-4.327056,35.154771),(-4.269399,35.184882),(-4.130605,35.204779),(-4.046702,35.235907),(-3.961293,35.250434),(-3.919342,35.266832),(-3.909413,35.255276),(-3.91253,35.250918),(-3.924673,35.247556),(-3.929529,35.24382),(-3.925046,35.237842),(-3.918467,35.230893),(-3.903901,35.214173),(-3.889257,35.208407),(-3.86856,35.207343),(-3.840891,35.20539),(-3.821929,35.206773),(-3.787465,35.209215),(-3.764801,35.218451),(-3.755523,35.236396),(-3.747141,35.263658),(-3.726226,35.282294),(-3.699574,35.290839),(-3.673573,35.287909),(-3.654205,35.277004),(-3.612213,35.245347),(-3.590972,35.233303),(-3.516021,35.223538),(-3.489329,35.211005),(-3.477406,35.210679),(-3.464752,35.211819),(-3.453847,35.211615),(-3.44286,35.208645),(-3.423248,35.200832),(-3.412913,35.197943),(-3.36205,35.194525),(-3.319244,35.201809),(-3.195139,35.238227),(-3.17394,35.249091),(-3.128651,35.28856),(-3.118031,35.294135),(-3.109731,35.293647),(-3.100413,35.289537),(-3.08967,35.286282),(-3.077056,35.287909),(-3.0454,35.314439),(-3.006256,35.393541),(-2.980824,35.424506),(-2.982818,35.435004),(-2.977935,35.443305),(-2.968577,35.446438),(-2.956654,35.44123),(-2.952219,35.433051),(-2.95759,35.415839),(-2.952952,35.403998),(-2.963043,35.373481),(-2.949778,35.334947),(-2.94811,35.330268),(-2.947825,35.329779),(-2.947818,35.329768),(-2.96692,35.313866),(-2.963768,35.286219),(-2.943046,35.267874),(-2.912907,35.276916),(-2.899037,35.259996),(-2.907948,35.250963),(-2.912343,35.235785),(-2.913319,35.219062),(-2.90982,35.186713),(-2.905141,35.17357),(-2.896392,35.161811),(-2.881988,35.146715),(-2.856068,35.131293),(-2.820953,35.121894),(-2.748118,35.115953),(-2.791819,35.145168),(-2.868276,35.217353),(-2.883168,35.238511),(-2.878407,35.246324),(-2.864858,35.237616),(-2.816396,35.194525),(-2.808665,35.183783),(-2.761098,35.143256),(-2.718332,35.122992),(-2.666737,35.108222),(-2.563222,35.096137),(-2.517486,35.099351),(-2.493316,35.104397),(-2.477203,35.112616),(-2.456125,35.132473),(-2.437123,35.145087),(-2.416737,35.149156),(-2.321441,35.118842),(-2.299062,35.115953),(-2.275258,35.109565),(-2.248687,35.097235),(-2.222564,35.089301),(-2.221126,35.049955),(-2.211669,35.023445),(-2.193789,35.003601),(-2.1633,34.994041),(-2.126041,34.971923),(-2.094932,34.947687),(-2.061239,34.929704),(-2.01628,34.926241),(-2.00362,34.91818),(-1.999692,34.906346),(-1.9984,34.892703),(-1.993491,34.878854),(-1.979745,34.865263),(-1.926725,34.838081),(-1.892825,34.811675),(-1.787716,34.756691),(-1.769526,34.741343),(-1.773143,34.734057),(-1.786114,34.72584),(-1.810505,34.680727),(-1.862957,34.613599),(-1.871121,34.596649),(-1.750664,34.494175),(-
1.714232,34.485054),(-1.702966,34.479679),(-1.809626,34.372451),(-1.771334,34.334701),(-1.746064,34.290311),(-1.674751,34.105956),(-1.672085,34.092021),(-1.669635,34.079213),(-1.672115,34.059188),(-1.718728,33.898087),(-1.722087,33.851165),(-1.713043,33.801995),(-1.702501,33.772823),(-1.703225,33.761816),(-1.710821,33.747063),(-1.72095,33.736366),(-1.732577,33.727865),(-1.74224,33.717762),(-1.746788,33.702388),(-1.740742,33.686601),(-1.725394,33.677713),(-1.690719,33.667326),(-1.673252,33.6565),(-1.6624,33.644692),(-1.617338,33.554439),(-1.612739,33.521469),(-1.6254,33.494236),(-1.640386,33.475529),(-1.659145,33.41977),(-1.672839,33.394604),(-1.683226,33.36923),(-1.683381,33.270839),(-1.674234,33.237972),(-1.623591,33.196605),(-1.605608,33.168028),(-1.592069,33.136609),(-1.571398,33.111985),(-1.545508,33.091831),(-1.516363,33.073977),(-1.499516,33.060205),(-1.493057,33.039483),(-1.493367,33.016151),(-1.496519,32.994292),(-1.498844,32.984008),(-1.502978,32.974629),(-1.50887,32.966309),(-1.516363,32.959488),(-1.558789,32.93365),(-1.423345,32.742395),(-1.390531,32.718779),(-1.32702,32.69891),(-1.047502,32.517009),(-1.031999,32.4944),(-1.08998,32.439442),(-1.123157,32.417945),(-1.16026,32.404948),(-1.201705,32.399858),(-1.217983,32.392623),(-1.234158,32.374614),(-1.244131,32.356915),(-1.257515,32.320845),(-1.273423,32.229442),(-1.275499,32.217518),(-1.275189,32.209069),(-1.276636,32.200852),(-1.282992,32.190181),(-1.289193,32.184858),(-1.305678,32.173722),(-1.309554,32.167443),(-1.305161,32.151165),(-1.288986,32.150907),(-1.251831,32.163516),(-1.232711,32.163723),(-1.211833,32.158348),(-1.195607,32.146049),(-1.190594,32.125224),(-1.210335,32.08967),(-1.249557,32.08166),(-1.324953,32.084709),(-1.333428,32.085071),(-1.357406,32.086053),(-1.395388,32.087552),(-1.445618,32.089515),(-1.506337,32.091944),(-1.575791,32.094735),(-1.652375,32.097835),(-1.734334,32.101091),(-1.819962,32.104501),(-1.907553,32.107964),(-1.995455,32.111478),(-2.081858,32.11494),(-2.165109,32.118299),(-2.243398,32.1214),(-2.315177,32.124294),(-2.378636,32.126774),(-2.516147,32.1322),(-2.695567,32.08967),(-2.881189,32.076286),(-2.938731,32.048639),(-2.869769,31.89243),(-2.827836,31.794586),(-3.002556,31.77362),(-3.21921,31.717709),(-3.511564,31.672745),(-3.548745,31.669954),(-3.591378,31.678274),(-3.659507,31.647821),(-3.673484,31.389234),(-3.747467,31.385166),(-3.802502,31.350646),(-3.815189,31.337158),(-3.81922,31.318865),(-3.812915,31.243365),(-3.814982,31.220524),(-3.836169,31.189777),(-3.842836,31.170192),(-3.839322,31.152828),(-3.827462,31.143837),(-3.810796,31.142906),(-3.792968,31.149676),(-3.763719,31.173447),(-3.748862,31.180217),(-3.731421,31.176341),(-3.717107,31.163267),(-3.689718,31.125595),(-3.671942,31.110867),(-3.635872,31.095674),(-3.624141,31.086527),(-3.614529,31.068027),(-3.610085,31.050302),(-3.608741,31.030872),(-3.554674,30.955927),(-3.610585,30.879049),(-3.659507,30.837116),(-3.652518,30.774217),(-3.645529,30.711317),(-3.834228,30.627452),(-4.001959,30.592507),(-4.155714,30.585519),(-4.274524,30.557563),(-4.372368,30.508641),(-4.484189,30.382842),(-4.623966,30.284999),(-4.770731,30.229088),(-4.875564,30.180166),(-4.95943,30.124256),(-5.071251,30.04039),(-5.176083,29.97749),(-5.273927,29.886635),(-5.343815,29.767825),(-5.43467,29.642026),(-5.539503,29.523216),(-5.637346,29.495261),(-5.721212,29.523216),(-5.756156,29.614071),(-5.881955,29.600093),(-6.000765,29.579127),(-6.126564,29.579127),(-6.27333,29.579127),(-6.413107,29.565149),(-6.559872,29.530205),(-6.699649,29.516227),(-6.783515,29.446339),(-6.95
8236,29.509238),(-7.070057,29.516227),(-7.146934,29.509238),(-7.258755,29.467305),(-7.34961,29.383439),(-7.463286,29.389137),(-7.48406,29.382445),(-7.506126,29.380223),(-7.528502,29.380947),(-7.572686,29.387613),(-7.619453,29.389422),(-7.653611,29.376193),(-7.714667,29.321829),(-7.729989,29.311158),(-7.777996,29.289324),(-7.839129,29.239043),(-7.945014,29.176231),(-8.036326,29.099853),(-8.069658,29.079337),(-8.182312,29.035541),(-8.250474,28.994769),(-8.316774,28.939062),(-8.333259,28.93038),(-8.368451,28.916531),(-8.383489,28.905782),(-8.417802,28.852349),(-8.430411,28.841006),(-8.475835,28.818759),(-8.520819,28.787081),(-8.648847,28.725948),(-8.667606,28.711685),(-8.678768,28.692823),(-8.682385,28.6659),(-8.682385,28.620218),(-8.682385,28.560273),(-8.682385,28.500406),(-8.682385,28.440487),(-8.682385,28.380543),(-8.682385,28.320624),(-8.682385,28.260679),(-8.682385,28.20076),(-8.682385,28.140893),(-8.682385,28.080949),(-8.682385,28.02103),(-8.682385,27.961085),(-8.682385,27.90114),(-8.682385,27.841222),(-8.682385,27.781354),(-8.682385,27.721436),(-8.682385,27.661439),(-8.752562,27.661439),(-8.816537,27.661465),(-8.817034,27.661465),(-8.817035,27.661464),(-8.818449,27.659398),(-8.81292,27.613354),(-8.783619,27.530362),(-8.773387,27.46003),(-8.788012,27.416054),(-8.801706,27.360424),(-8.795841,27.307688),(-8.773387,27.250069),(-8.752872,27.190486),(-8.752872,27.150463),(-8.793903,27.12018),(-8.888109,27.103566),(-9.000919,27.089924),(-9.083446,27.089924),(-9.207469,27.099691),(-9.284622,27.097727),(-9.352008,27.097727),(-9.412056,27.08796),(-9.486315,27.049875),(-9.568843,26.990292),(-9.672351,26.910219),(-9.734362,26.860429),(-9.816864,26.84968),(-9.899365,26.84968),(-9.979929,26.889729),(-10.031709,26.910219),(-10.065867,26.908281),(-10.122039,26.879962),(-10.188443,26.860429),(-10.250455,26.860429),(-10.353963,26.900478),(-10.477986,26.960035),(-10.550282,26.990292),(-10.653273,27.000058),(-10.756781,27.019592),(-10.829076,27.009825),(-10.921835,27.009825),(-11.045859,26.969802),(-11.149366,26.940501),(-11.262641,26.910219),(-11.391574,26.882908),(-11.36031,26.793043),(-11.315868,26.744208),(-11.315868,26.683644),(-11.3369,26.632897),(-11.398912,26.583081),(-11.469709,26.519597),(-11.510688,26.469807),(-11.55221,26.400457),(-11.582983,26.360408),(-11.63621,26.294985),(-11.683546,26.212975),(-11.698222,26.162177),(-11.717239,26.103576),(-11.753877,26.086006),(-11.879864,26.070399),(-11.959911,26.049884),(-12.029752,26.03035),(-12.055823,25.99583),(-12.055823,25.99583),(-12.060009,25.990301),(-12.080059,25.919969),(-12.080059,25.870205),(-12.100058,25.830156),(-12.129849,25.730549),(-12.169873,25.639728),(-12.200155,25.51958),(-12.229946,25.42),(-12.26997,25.259803),(-12.310019,25.110432),(-12.359835,24.969795),(-12.399884,24.879955),(-12.430141,24.830165),(-12.499982,24.7696),(-12.56003,24.730533),(-12.629845,24.679761),(-12.709943,24.629971),(-12.819781,24.570388),(-12.910137,24.51959),(-12.946879,24.496749),(-12.94688,24.496748),(-12.990184,24.4698),(-13.060025,24.400476),(-13.120099,24.299862),(-13.160122,24.219815),(-13.229963,24.0899),(-13.279753,24.019594),(-13.31001,23.980553),(-13.390108,23.940504),(-13.479948,23.910221),(-13.580045,23.870198),(-13.660118,23.830123),(-13.769982,23.790125),(-13.839797,23.750076),(-13.890129,23.690493),(-13.930127,23.620187),(-13.979943,23.519599),(-14.019992,23.410252),(-14.039991,23.33992),(-14.100065,23.099676),(-14.120089,22.960047),(-14.140088,22.870181),(-14.169906,22.759852),(-14.189904,22.589914),(-14.189904,22.450259),(-14.209955,22.370186)
,(-14.220187,22.309647),(-14.270003,22.240297),(-14.310052,22.190507),(-14.379841,22.120201),(-14.439915,22.080152),(-14.459914,22.040103),(-14.519988,21.990338),(-14.580036,21.91024),(-14.629852,21.860424),(-14.620085,21.820375),(-14.609827,21.750069),(-14.640109,21.679763),(-14.669875,21.599665),(-14.749974,21.500084),(-14.839813,21.450268),(-14.970167,21.440501),(-15.149872,21.440501),(-15.289992,21.450268),(-15.45993,21.450268),(-15.609818,21.469802),(-15.749964,21.490317),(-15.919876,21.500084),(-16.040024,21.500084),(-16.189886,21.48055),(-16.580043,21.48055),(-16.729956,21.469802),(-16.950149,21.429753),(-17.013743,21.419971),(-17.013743,21.419989),(-17.012807,21.445217),(-16.971059,21.592597),(-16.966908,21.636542),(-16.970326,21.674465),(-16.968007,21.697252),(-16.95108,21.724758),(-16.947662,21.736029),(-16.946401,21.748277),(-16.947011,21.759996),(-16.950266,21.765367),(-16.963246,21.775621),(-16.966908,21.781155),(-16.961537,21.795803),(-16.956939,21.82567),(-16.937815,21.866604),(-16.929799,21.907945),(-16.92101,21.931301),(-16.81664,22.130561),(-16.78067,22.165717),(-16.775054,22.174954),(-16.773427,22.180976),(-16.762074,22.205024),(-16.756337,22.210191),(-16.739898,22.221137),(-16.73412,22.226752),(-16.717885,22.260972),(-16.706776,22.275458),(-16.682607,22.284084),(-16.677154,22.289537),(-16.670318,22.29442),(-16.659006,22.295071),(-16.650258,22.291693),(-16.642649,22.285956),(-16.631703,22.273912),(-16.52184,22.314887),(-16.502187,22.329901),(-16.482167,22.352281),(-16.466624,22.377387),(-16.431508,22.493069),(-16.429351,22.510443),(-16.419545,22.521796),(-16.369781,22.568264),(-16.357289,22.583075),(-16.34439,22.646959),(-16.341379,22.732001),(-16.336822,22.75373),(-16.324859,22.789496),(-16.316477,22.806057),(-16.296457,22.832099),(-16.296254,22.84219),(-16.299957,22.850165),(-16.302113,22.856757),(-16.302154,22.868313),(-16.300608,22.875067),(-16.291737,22.893459),(-16.289622,22.899807),(-16.284779,22.903388),(-16.262603,22.905585),(-16.238637,22.911566),(-16.206451,22.937974),(-16.172963,22.973456),(-16.163238,22.993232),(-16.156484,23.013495),(-16.152577,23.035224),(-16.151275,23.059068),(-16.159535,23.082465),(-16.179067,23.086127),(-16.220204,23.075873),(-16.213124,23.094428),(-16.196034,23.126166),(-16.186676,23.137926),(-16.139272,23.178534),(-16.135121,23.185126),(-16.060414,23.34455),(-16.049143,23.3581),(-16.038319,23.363918),(-16.025543,23.374945),(-15.981191,23.445787),(-15.967193,23.512926),(-15.956451,23.531724),(-15.941314,23.551337),(-15.905507,23.610297),(-15.857086,23.661811),(-15.853871,23.667914),(-15.837392,23.679348),(-15.830393,23.685981),(-15.825063,23.696723),(-15.820058,23.717231),(-15.81607,23.727607),(-15.800649,23.750637),(-15.78482,23.768988),(-15.772613,23.787828),(-15.76769,23.812649),(-15.765452,23.856757),(-15.769154,23.875678),(-15.781972,23.898261),(-15.778676,23.903713),(-15.775136,23.911933),(-15.786773,23.903876),(-15.787953,23.892157),(-15.786,23.878363),(-15.788197,23.864163),(-15.795318,23.856635),(-15.820872,23.840277),(-15.830393,23.836249),(-15.843577,23.835883),(-15.852203,23.838121),(-15.861724,23.836168),(-15.877553,23.823188),(-15.888173,23.810736),(-15.895334,23.798774),(-15.91039,23.765326),(-15.923329,23.743598),(-15.925933,23.73725),(-15.92809,23.725165),(-15.937489,23.705227),(-15.939605,23.696234),(-15.948232,23.684312),(-15.994862,23.645006),(-16.009755,23.655422),(-16.003163,23.672105),(-15.981191,23.696234),(-15.975819,23.710761),(-15.939605,23.774807),(-15.877553,23.853583),(-15.871246,23.869208),(-15.856801,23.8
86949),(-15.840932,23.902655),(-15.768137,23.959866),(-15.735748,23.971177),(-15.692616,23.995795),(-15.679555,24.007514),(-15.646637,24.010403),(-15.620107,24.026272),(-15.599233,24.048326),(-15.583323,24.070258),(-15.542348,24.114244),(-15.514394,24.152167),(-15.476064,24.177802),(-15.459706,24.191881),(-15.453033,24.210191),(-15.444692,24.221869),(-15.353383,24.291083),(-15.298248,24.34516),(-15.229563,24.440741),(-15.203969,24.467475),(-15.171864,24.494127),(-15.153554,24.505601),(-15.133412,24.514635),(-15.112701,24.520575),(-15.069407,24.527289),(-15.048329,24.538275),(-15.014801,24.562445),(-15.01537,24.585679),(-15.012237,24.591539),(-14.999989,24.614488),(-14.976226,24.641425),(-14.952138,24.659247),(-14.926991,24.67178),(-14.907826,24.685492),(-14.895619,24.705715),(-14.885854,24.770738),(-14.861928,24.830146),(-14.853424,24.876614),(-14.83967,24.905341),(-14.836049,24.918687),(-14.837961,24.935126),(-14.84203,24.950263),(-14.84317,24.965277),(-14.836049,24.981391),(-14.843495,24.987616),(-14.837229,25.001695),(-14.83259,25.017564),(-14.829742,25.034247),(-14.829213,25.050238),(-14.831899,25.065741),(-14.84142,25.087551),(-14.843495,25.100531),(-14.842437,25.189032),(-14.849721,25.214179),(-14.819244,25.330227),(-14.812123,25.346096),(-14.782053,25.446926),(-14.736195,25.50552),(-14.723134,25.515815),(-14.714915,25.52558),(-14.68517,25.590888),(-14.678334,25.635565),(-14.676259,25.641913),(-14.66686,25.659735),(-14.661366,25.691311),(-14.620961,25.783596),(-14.546457,25.879218),(-14.519521,25.923774),(-14.513905,25.936957),(-14.511057,25.956855),(-14.494985,26.001614),(-14.486562,26.015448),(-14.490346,26.03148),(-14.480377,26.087795),(-14.483306,26.102484),(-14.492828,26.132717),(-14.490061,26.142727),(-14.482289,26.153876),(-14.471791,26.183051),(-14.466135,26.194241),(-14.410146,26.260077),(-14.400746,26.266303),(-14.353179,26.275946),(-14.29894,26.302232),(-14.215932,26.375474),(-14.202056,26.392239),(-14.192779,26.409491),(-14.180247,26.424384),(-14.157053,26.433783),(-14.046864,26.444403),(-14.02774,26.452379),(-13.923736,26.500881),(-13.742014,26.618109),(-13.619985,26.68891),(-13.561106,26.749091),(-13.500885,26.858303),(-13.480336,26.911689),(-13.461049,26.992743),(-13.431996,27.050767),(-13.402455,27.177191),(-13.383656,27.215277),(-13.301096,27.324449),(-13.300771,27.330024),(-13.307769,27.339342),(-13.307932,27.344957),(-13.29426,27.351752),(-13.240834,27.465399),(-13.20759,27.56745),(-13.205434,27.588568),(-13.200795,27.606513),(-13.180979,27.644843),(-13.178254,27.662491),(-13.178253,27.662502),(-13.178212,27.662787),(-13.177561,27.667141),(-13.171254,27.685004),(-13.160512,27.694648),(-13.13036,27.708075),(-13.097971,27.732245),(-13.061431,27.742906),(-13.045155,27.759833),(-13.003,27.821031),(-12.979563,27.893704),(-12.968088,27.914618),(-12.952382,27.928616),(-12.933095,27.938707),(-12.838938,27.970404),(-12.657826,27.998033),(-12.513539,27.995429),(-12.335032,28.047268),(-12.061431,28.089016),(-12.050404,28.096015),(-12.027903,28.116889),(-12.023101,28.119574),(-12.005605,28.123196),(-11.782379,28.21011),(-11.485585,28.325629),(-11.452789,28.348456),(-11.428822,28.375922),(-11.389312,28.434882),(-11.374013,28.449774),(-11.355946,28.463853),(-11.340972,28.479682),(-11.334706,28.499742),(-11.325917,28.515204),(-11.266469,28.551581),(-11.219635,28.602484),(-11.211822,28.616767),(-11.170318,28.639716),(-11.162465,28.647406),(-11.154205,28.661078),(-11.149159,28.66767),(-11.11498,28.69184),(-11.111562,28.695746),(-11.091054,28.726304),(-11.0631,28.752387),(-11.047597,
28.761705),(-10.762603,28.890204),(-10.694203,28.930569),(-10.612416,28.967719),(-10.573801,28.990424),(-10.450917,29.09219),(-10.34789,29.229397),(-10.320872,29.257961),(-10.301991,29.271877),(-10.264027,29.284125),(-10.248402,29.299384),(-10.224355,29.333075),(-10.199127,29.361274),(-10.187571,29.378974),(-10.182729,29.397284),(-10.177642,29.405829),(-10.141754,29.428616),(-10.137034,29.434963),(-10.128163,29.455959),(-10.086985,29.509955),(-10.080922,29.521145),(-10.078684,29.542304),(-10.072621,29.564683),(-10.063629,29.584947),(-10.052968,29.599921),(-10.005279,29.638414),(-9.997792,29.650824),(-9.991689,29.666815),(-9.949941,29.71601),(-9.820221,29.838935),(-9.717926,29.996568),(-9.655141,30.126899),(-9.645253,30.164008),(-9.628407,30.291409),(-9.610015,30.357815),(-9.60733,30.377631),(-9.611643,30.404364),(-9.622711,30.417426),(-9.638173,30.426663),(-9.655141,30.442206),(-9.666249,30.459296),(-9.691762,30.509589),(-9.700063,30.542711),(-9.708404,30.547512),(-9.719553,30.548529),(-9.730824,30.552069),(-9.739817,30.559882),(-9.758168,30.586819),(-9.829661,30.628607),(-9.850942,30.629625),(-9.874013,30.633979),(-9.888539,30.648261),(-9.886708,30.686672),(-9.865631,30.72427),(-9.839019,30.759833),(-9.820221,30.792304),(-9.81371,30.814683),(-9.812896,30.827135),(-9.816803,30.836656),(-9.823842,30.845649),(-9.824371,30.851955),(-9.821889,30.859117),(-9.820221,30.87108),(-9.827707,31.06745),(-9.831451,31.085354),(-9.847524,31.121283),(-9.835032,31.144761),(-9.808949,31.2862),(-9.80663,31.343451),(-9.810902,31.366848),(-9.821523,31.379462),(-9.834788,31.388821),(-9.847524,31.402411),(-9.816233,31.436835),(-9.792958,31.47134),(-9.769643,31.53148),(-9.758168,31.545803),(-9.739613,31.560004),(-9.733266,31.567369),(-9.727651,31.58515),(-9.713612,31.597846),(-9.707631,31.60932),(-9.694203,31.625718),(-9.68928,31.635199),(-9.675201,31.715399),(-9.667307,31.731147),(-9.575795,31.816107),(-9.53539,31.864488),(-9.527943,31.871243),(-9.513295,31.880845),(-9.504954,31.888414),(-9.497426,31.898139),(-9.488678,31.915351),(-9.483795,31.922553),(-9.366119,32.026313),(-9.354075,32.046698),(-9.346425,32.06151),(-9.334828,32.10163),(-9.333079,32.118069),(-9.328114,32.130439),(-9.291982,32.168931),(-9.27774,32.196601),(-9.270009,32.227484),(-9.26476,32.330634),(-9.268951,32.335517),(-9.28775,32.343492),(-9.291982,32.350816),(-9.289662,32.372463),(-9.283355,32.387763),(-9.256256,32.432563),(-9.24885,32.452541),(-9.246938,32.472317),(-9.254791,32.488267),(-9.263661,32.498481),(-9.272369,32.512926),(-9.27831,32.529527),(-9.278961,32.546332),(-9.259918,32.57689),(-9.112172,32.682847),(-8.998525,32.797105),(-8.901682,32.854804),(-8.867787,32.881903),(-8.754709,33.010647),(-8.62914,33.127631),(-8.62149,33.13935),(-8.620351,33.152004),(-8.621002,33.173529),(-8.611887,33.186265),(-8.529164,33.268704),(-8.521148,33.272528),(-8.512522,33.270494),(-8.502797,33.261542),(-8.49413,33.259508),(-8.469797,33.260647),(-8.45165,33.264228),(-8.415028,33.280015),(-8.378774,33.304674),(-8.320221,33.365383),(-8.284657,33.389838),(-8.243153,33.404527),(-8.106557,33.430854),(-8.011545,33.466417),(-7.962554,33.484809),(-7.916249,33.494574),(-7.891428,33.503363),(-7.86974,33.526557),(-7.844797,33.535956),(-7.777984,33.55329),(-7.738189,33.570624),(-7.719472,33.574205),(-7.700999,33.58275),(-7.684438,33.599921),(-7.667551,33.612698),(-7.647857,33.608344),(-7.63622,33.614569),(-7.626617,33.613227),(-7.617258,33.609361),(-7.606191,33.608344),(-7.596669,33.611558),(-7.579254,33.620754),(-7.549957,33.625556),(-7.529124,33.632636),(-7.510243,
33.641547),(-7.49706,33.64997),(-7.463287,33.68651),(-7.434316,33.698228),(-7.406361,33.726996),(-7.387115,33.732489),(-7.393951,33.725043),(-7.37385,33.719306),(-7.35261,33.724677),(-7.341135,33.731106),(-7.267486,33.772406),(-7.229319,33.80093),(-7.192942,33.817694),(-7.152577,33.830064),(-7.117014,33.834866),(-7.116567,33.834906),(-7.099477,33.840277),(-7.06786,33.863674),(-7.031809,33.874254),(-6.927358,33.944159),(-6.822092,34.039618),(-6.733754,34.162584),(-6.667795,34.280504),(-6.557932,34.417711),(-6.54776,34.434719),(-6.530914,34.474026),(-6.507924,34.504788),(-6.292104,34.880805),(-6.205393,35.124579),(-6.023915,35.501166),(-6.018544,35.520209),(-6.016672,35.544623),(-6.012318,35.558824),(-5.990834,35.598538),(-5.970041,35.652533),(-5.968251,35.660956),(-5.96524,35.670233),(-5.952056,35.685492),(-5.949045,35.69538),(-5.943105,35.742377),(-5.936187,35.765611),(-5.927235,35.780748),(-5.90803,35.797349),(-5.873402,35.801988),(-5.781402,35.79092),(-5.769276,35.792792),(-5.74234,35.814887),(-5.729888,35.822659),(-5.711741,35.830634),(-5.693512,35.834296),(-5.680898,35.829169),(-5.663442,35.835883),(-5.639719,35.836371),(-5.615956,35.831529),(-5.598988,35.822333),(-5.540028,35.850979),(-5.523264,35.863267),(-5.487457,35.896959),(-5.468861,35.910346),(-5.44811,35.918524),(-5.440027,35.918106),(-5.435452,35.914417),(-5.423497,35.908956),(-5.414198,35.909398),(-5.410361,35.917073),(-5.408,35.924748),(-5.40549,35.926519)] +Monaco [(7.437454,43.743361),(7.432845,43.739853),(7.417957,43.730904),(7.404319,43.717969),(7.380723,43.719273),(7.36575,43.72273),(7.367263,43.734125),(7.372655,43.745832),(7.387538,43.757899),(7.406969,43.763506),(7.426246,43.755464),(7.437454,43.743361)] +Moldova [(27.606873,48.457819),(27.627026,48.451256),(27.751773,48.451979),(27.78526,48.441566),(27.849855,48.40963),(27.864841,48.398907),(27.885371,48.378444),(27.896674,48.367178),(27.904529,48.362475),(27.92644,48.339428),(27.967057,48.328886),(28.003177,48.325883),(28.055941,48.321496),(28.076405,48.314933),(28.092838,48.302066),(28.098315,48.284108),(28.078678,48.244912),(28.093251,48.237444),(28.115575,48.236747),(28.131078,48.239589),(28.162084,48.257185),(28.178517,48.258864),(28.185752,48.242379),(28.189369,48.223052),(28.199911,48.211735),(28.216965,48.208144),(28.240322,48.211632),(28.261716,48.219952),(28.277529,48.229124),(28.294479,48.236488),(28.340575,48.240106),(28.357731,48.238607),(28.36817,48.230623),(28.37003,48.211632),(28.363364,48.191013),(28.352253,48.181866),(28.337887,48.175484),(28.322178,48.163211),(28.315563,48.149284),(28.317424,48.135358),(28.328586,48.127012),(28.349566,48.129725),(28.364966,48.141507),(28.37556,48.156648),(28.388634,48.168586),(28.411578,48.170704),(28.427288,48.163366),(28.436899,48.149853),(28.435659,48.134686),(28.419019,48.122232),(28.447441,48.082751),(28.459379,48.074457),(28.479584,48.064923),(28.490023,48.065491),(28.493744,48.074819),(28.494157,48.091846),(28.49979,48.108914),(28.501082,48.112827),(28.501113,48.112889),(28.519323,48.149129),(28.541596,48.155951),(28.573635,48.154969),(28.665619,48.129337),(28.735072,48.128691),(28.771246,48.124454),(28.799255,48.111793),(28.805921,48.103809),(28.80835,48.096574),(28.809176,48.089779),(28.811554,48.082906),(28.827573,48.057016),(28.830777,48.030868),(28.832534,48.024822),(28.839252,48.018285),(28.855685,48.007923),(28.862085,48.000502),(28.882557,47.976763),(28.914803,47.953146),(28.936087,47.942164),(28.950356,47.934801),(28.980846,47.926378),(29.017226,47.931081),(29.061047,47.96976),(29.092725,47.980173),
(29.110037,47.979656),(29.123989,47.975987),(29.136288,47.968184),(29.147967,47.955369),(29.155822,47.939866),(29.164607,47.905604),(29.172565,47.891006),(29.186621,47.883642),(29.225999,47.87558),(29.236024,47.870723),(29.232768,47.85708),(29.21742,47.842895),(29.19892,47.829485),(29.186724,47.818219),(29.177733,47.800701),(29.177836,47.78972),(29.187448,47.783209),(29.221968,47.774811),(29.234732,47.766801),(29.238608,47.756001),(29.22667,47.743237),(29.196956,47.717502),(29.191479,47.686264),(29.192409,47.650917),(29.182074,47.613219),(29.156184,47.582781),(29.130501,47.55963),(29.117426,47.533327),(29.130397,47.4932),(29.130501,47.493097),(29.130604,47.493045),(29.130656,47.492942),(29.137189,47.484008),(29.140009,47.480152),(29.155615,47.449999),(29.163884,47.439379),(29.182074,47.429871),(29.192409,47.435814),(29.201866,47.446588),(29.21835,47.451601),(29.232613,47.44633),(29.2507,47.425375),(29.281396,47.411474),(29.288217,47.400777),(29.292713,47.388866),(29.300826,47.378194),(29.314675,47.372717),(29.330385,47.370495),(29.345836,47.365844),(29.35891,47.352718),(29.364078,47.336879),(29.364491,47.322436),(29.367592,47.308225),(29.370882,47.304434),(29.380821,47.29298),(29.394464,47.284712),(29.409553,47.279751),(29.425573,47.278666),(29.441696,47.282128),(29.459163,47.293497),(29.466087,47.307449),(29.470635,47.322746),(29.480867,47.338274),(29.501072,47.339825),(29.520968,47.33892),(29.539881,47.334218),(29.556573,47.324038),(29.57006,47.306778),(29.579259,47.283601),(29.580189,47.26045),(29.568717,47.243138),(29.544015,47.23425),(29.540398,47.212546),(29.55063,47.160172),(29.544842,47.135574),(29.530373,47.123688),(29.508669,47.119399),(29.480867,47.117487),(29.478593,47.114645),(29.47787,47.111544),(29.478696,47.108469),(29.480867,47.10524),(29.511459,47.066017),(29.520141,47.059919),(29.530993,47.066017),(29.539726,47.078885),(29.551353,47.090202),(29.570164,47.091442),(29.583393,47.085215),(29.594865,47.074595),(29.6021,47.061005),(29.60303,47.04576),(29.595433,47.032143),(29.583393,47.02279),(29.572489,47.011757),(29.564996,46.974498),(29.560626,46.961256),(29.559828,46.95884),(29.558691,46.94574),(29.567476,46.934656),(29.573161,46.933725),(29.598482,46.935379),(29.607577,46.93202),(29.623287,46.91954),(29.631865,46.914398),(29.648092,46.910419),(29.680855,46.908559),(29.696151,46.904683),(29.712481,46.893444),(29.735838,46.867192),(29.754132,46.85802),(29.785758,46.854661),(29.815213,46.856624),(29.843635,46.854144),(29.872212,46.837711),(29.876915,46.830786),(29.884873,46.812673),(29.889627,46.80717),(29.898516,46.806653),(29.907714,46.810942),(29.917429,46.813965),(29.928178,46.809754),(29.930658,46.803087),(29.930452,46.781642),(29.931795,46.772443),(29.935671,46.765157),(29.946678,46.750739),(29.951122,46.743117),(29.952621,46.724694),(29.944404,46.662476),(29.94027,46.650797),(29.935413,46.641857),(29.931899,46.632374),(29.931279,46.619171),(29.934948,46.608836),(29.947402,46.588346),(29.949675,46.578863),(29.9381,46.55734),(29.916086,46.554369),(29.898826,46.553129),(29.901926,46.53083),(29.916086,46.518815),(29.960217,46.505948),(29.962078,46.503442),(29.967039,46.493856),(29.970553,46.491737),(29.977064,46.493184),(29.989259,46.498687),(29.994944,46.498532),(30.00285,46.494217),(30.009413,46.488068),(30.021712,46.470653),(30.014064,46.462023),(30.060005,46.436805),(30.077006,46.422826),(30.086928,46.428873),(30.102017,46.430733),(30.118244,46.428692),(30.131576,46.422826),(30.107185,46.391898),(30.08114,46.374199),(30.037267,46.368928),(29.918669,46.373656),(29.90254
6,46.371176),(29.884666,46.3642),(29.847769,46.341462),(29.828132,46.339395),(29.808185,46.354691),(29.80617,46.361564),(29.806738,46.380891),(29.805085,46.389986),(29.80033,46.398358),(29.779557,46.421095),(29.72695,46.455796),(29.714134,46.47117),(29.713824,46.443058),(29.702869,46.42833),(29.682715,46.42293),(29.654396,46.422697),(29.647988,46.416393),(29.652949,46.391898),(29.645508,46.375594),(29.632072,46.36606),(29.615639,46.361822),(29.598586,46.363063),(29.582979,46.369651),(29.56944,46.382286),(29.555643,46.404197),(29.541432,46.413034),(29.527479,46.416651),(29.49606,46.420811),(29.480867,46.425152),(29.475079,46.432257),(29.473322,46.439802),(29.475286,46.447554),(29.480867,46.455512),(29.486241,46.462488),(29.488205,46.469361),(29.486448,46.475872),(29.480867,46.482048),(29.457612,46.484709),(29.456724,46.484366),(29.436838,46.476699),(29.418235,46.462488),(29.374723,46.416186),(29.362114,46.415928),(29.344234,46.434169),(29.320773,46.468741),(29.306717,46.471997),(29.289147,46.451636),(29.285943,46.439492),(29.290697,46.419261),(29.290077,46.410243),(29.283256,46.397427),(29.278192,46.395877),(29.27137,46.397427),(29.259175,46.394353),(29.222898,46.366473),(29.200677,46.35712),(29.183727,46.367171),(29.183417,46.37761),(29.190342,46.387196),(29.199282,46.396497),(29.205638,46.406109),(29.207602,46.417891),(29.206775,46.502511),(29.200212,46.523983),(29.184244,46.538039),(29.183059,46.538042),(29.162747,46.538091),(29.074948,46.503674),(29.055915,46.49874),(29.020326,46.489515),(28.945809,46.454788),(28.925448,46.432774),(28.919247,46.404662),(28.927309,46.368024),(28.944879,46.32074),(28.945809,46.304978),(28.939918,46.286995),(28.93289,46.272629),(28.93351,46.258986),(28.950615,46.243122),(28.95077,46.24307),(28.984112,46.221221),(29.009836,46.204364),(29.015365,46.182609),(29.00348,46.158941),(28.980846,46.132121),(28.946429,46.105094),(28.938781,46.089281),(28.941158,46.064683),(28.958935,46.021016),(28.956868,46.001379),(28.93196,45.993163),(28.757603,45.961175),(28.74024,45.953217),(28.729181,45.938644),(28.728406,45.921953),(28.742307,45.887329),(28.746441,45.870586),(28.745046,45.850484),(28.738276,45.837565),(28.725564,45.82878),(28.706392,45.821132),(28.677298,45.816584),(28.669753,45.812037),(28.669392,45.806301),(28.671924,45.797257),(28.673267,45.78687),(28.669753,45.77731),(28.643605,45.76651),(28.576684,45.761911),(28.560613,45.743204),(28.563525,45.735494),(28.567641,45.7246),(28.56144,45.71659),(28.532501,45.710079),(28.515241,45.702018),(28.504182,45.693698),(28.480928,45.669462),(28.474003,45.657938),(28.482891,45.650755),(28.498498,45.644295),(28.511314,45.635045),(28.516895,45.620989),(28.518755,45.607036),(28.523096,45.593239),(28.536893,45.579803),(28.510435,45.571018),(28.504596,45.567246),(28.503743,45.566102),(28.498239,45.558719),(28.497981,45.556445),(28.500048,45.554585),(28.506456,45.520582),(28.502012,45.508748),(28.480928,45.501978),(28.416849,45.503787),(28.341918,45.517636),(28.270501,45.521512),(28.270423,45.521471),(28.217275,45.493348),(28.217171,45.493193),(28.217171,45.493142),(28.20849,45.481411),(28.201565,45.468854),(28.199498,45.461774),(28.172936,45.484357),(28.165908,45.494589),(28.165391,45.528282),(28.164048,45.530607),(28.161567,45.532726),(28.157743,45.538927),(28.140949,45.560218),(28.118004,45.572723),(28.062142,45.593601),(28.074751,45.604866),(28.091184,45.615976),(28.107721,45.6244),(28.120846,45.627707),(28.153713,45.6275),(28.167975,45.632461),(28.161774,45.645432),(28.163738,45.661503),(28.154953,45.761807),(28.146478,45.
771161),(28.128908,45.795035),(28.113302,45.825369),(28.110511,45.854308),(28.114955,45.859734),(28.128184,45.8664),(28.131078,45.871361),(28.129115,45.875134),(28.120226,45.887846),(28.117436,45.895236),(28.118779,45.903142),(28.12281,45.910791),(28.124981,45.918387),(28.120846,45.926293),(28.114852,45.933218),(28.112371,45.939213),(28.110511,45.950426),(28.08612,46.000552),(28.082709,46.014712),(28.084466,46.024582),(28.096869,46.05967),(28.096558,46.065045),(28.090461,46.06799),(28.090151,46.073365),(28.092838,46.078377),(28.100796,46.081995),(28.107823,46.096103),(28.110506,46.101491),(28.127358,46.135325),(28.133249,46.159974),(28.144721,46.183229),(28.141517,46.191962),(28.135626,46.198835),(28.129321,46.204674),(28.110976,46.225603),(28.108961,46.23413),(28.120846,46.237851),(28.132112,46.239918),(28.134489,46.245344),(28.131078,46.262397),(28.135212,46.269322),(28.144928,46.272939),(28.156296,46.275368),(28.165753,46.27883),(28.177794,46.287047),(28.191333,46.307769),(28.192883,46.311231),(28.191953,46.323504),(28.187922,46.343787),(28.190351,46.351074),(28.202392,46.353916),(28.207146,46.358153),(28.212934,46.388694),(28.219548,46.393345),(28.226796,46.395548),(28.228643,46.39611),(28.233294,46.401768),(28.22606,46.415359),(28.240219,46.420268),(28.246058,46.427865),(28.247247,46.436185),(28.247144,46.467191),(28.2458,46.475511),(28.242338,46.476518),(28.2381,46.475588),(28.234121,46.478069),(28.221099,46.495768),(28.219032,46.503726),(28.221099,46.541191),(28.224768,46.548736),(28.234121,46.553129),(28.225439,46.569562),(28.230504,46.583902),(28.240529,46.596382),(28.247144,46.607156),(28.24735,46.620825),(28.245085,46.631451),(28.24425,46.635372),(28.234121,46.662424),(28.178104,46.739836),(28.178156,46.758646),(28.137796,46.80624),(28.121518,46.8343),(28.122615,46.84227),(28.124257,46.854195),(28.124257,46.861637),(28.113095,46.87143),(28.114335,46.883393),(28.11356,46.894761),(28.096869,46.902616),(28.105447,46.91768),(28.104994,46.920185),(28.102295,46.935121),(28.082709,46.971527),(28.069067,46.988503),(28.038146,47.015483),(28.037027,47.016459),(28.028036,47.03297),(28.008295,47.026304),(27.986591,47.033228),(27.96313,47.043383),(27.938739,47.046638),(27.938429,47.061056),(27.925923,47.068756),(27.897604,47.081417),(27.867012,47.104645),(27.85771,47.109322),(27.848202,47.111415),(27.843757,47.114386),(27.849855,47.121724),(27.849855,47.12852),(27.80655,47.144617),(27.794665,47.155831),(27.805879,47.158234),(27.807687,47.162471),(27.804793,47.168492),(27.802106,47.176346),(27.800349,47.176966),(27.788309,47.204277),(27.763039,47.226317),(27.752393,47.238926),(27.753634,47.251432),(27.733273,47.275617),(27.722835,47.283317),(27.697823,47.287502),(27.689865,47.290887),(27.68232,47.295254),(27.671727,47.299853),(27.6447,47.303987),(27.636948,47.306674),(27.624029,47.321428),(27.599948,47.360728),(27.588724,47.367389),(27.586409,47.368764),(27.572353,47.375197),(27.580311,47.405996),(27.562534,47.416538),(27.565531,47.428269),(27.569356,47.438553),(27.574782,47.446252),(27.582998,47.450671),(27.575867,47.460412),(27.563154,47.468344),(27.548168,47.474261),(27.534526,47.477982),(27.532975,47.477413),(27.50724,47.477982),(27.501453,47.479945),(27.497525,47.482426),(27.492564,47.484545),(27.483676,47.485423),(27.473134,47.491779),(27.466726,47.506352),(27.459491,47.533224),(27.456908,47.533896),(27.446056,47.534877),(27.442128,47.536634),(27.440061,47.541027),(27.440268,47.550096),(27.438976,47.553713),(27.435824,47.55994),(27.431342,47.572638),(27.429404,47.57813),(27.428382,47.581
024),(27.397066,47.589086),(27.369161,47.60831),(27.304049,47.666549),(27.281104,47.693007),(27.272009,47.71497),(27.295057,47.718174),(27.295057,47.724427),(27.289579,47.731687),(27.29113,47.74241),(27.288114,47.75068),(27.287512,47.752332),(27.282448,47.755691),(27.264671,47.764553),(27.260847,47.769075),(27.256196,47.77755),(27.244879,47.792588),(27.231082,47.807083),(27.219196,47.813775),(27.253406,47.828089),(27.245809,47.83659),(27.234906,47.840208),(27.222813,47.842662),(27.212375,47.847933),(27.213202,47.854134),(27.211548,47.885011),(27.212375,47.889507),(27.194185,47.904054),(27.17155,47.912632),(27.160698,47.921727),(27.178268,47.937928),(27.178268,47.942096),(27.178268,47.944129),(27.171964,47.946429),(27.150983,47.957797),(27.162869,47.965006),(27.169793,47.973791),(27.16876,47.983067),(27.157184,47.991955),(27.143748,47.986839),(27.134602,48.000456),(27.121476,48.013194),(27.095793,48.005598),(27.093364,48.00844),(27.091452,48.011851),(27.090005,48.015623),(27.088971,48.019809),(27.109332,48.026088),(27.109332,48.033503),(27.083184,48.043658),(27.059206,48.05805),(27.041739,48.077273),(27.034298,48.101768),(27.036882,48.107349),(27.047424,48.116496),(27.047955,48.121409),(27.048044,48.122232),(27.042773,48.127296),(27.033678,48.132386),(27.025099,48.135048),(27.012697,48.128123),(26.99399,48.132412),(26.966602,48.143367),(26.966602,48.149594),(26.986239,48.150421),(26.997607,48.15794),(26.997607,48.16657),(26.963088,48.175381),(26.955646,48.186),(26.950582,48.197524),(26.938128,48.204811),(26.929085,48.199049),(26.917716,48.187964),(26.908001,48.184528),(26.90397,48.201452),(26.897665,48.208971),(26.855497,48.237832),(26.844749,48.23331),(26.821908,48.252534),(26.804906,48.25827),(26.76372,48.252741),(26.744031,48.255583),(26.733127,48.27075),(26.722379,48.259769),(26.711475,48.261319),(26.688531,48.274832),(26.665741,48.274212),(26.617889,48.258968),(26.618613,48.267184),(26.625124,48.282894),(26.636079,48.294883),(26.6691,48.30881),(26.67401,48.321703),(26.679797,48.330178),(26.699434,48.325113),(26.712974,48.314907),(26.723619,48.302582),(26.735918,48.291834),(26.754573,48.286149),(26.774365,48.287183),(26.785734,48.294108),(26.790075,48.307053),(26.789351,48.32594),(26.78408,48.345164),(26.778189,48.357204),(26.777518,48.36617),(26.78806,48.375963),(26.793589,48.376273),(26.810074,48.371881),(26.816947,48.371726),(26.825628,48.377772),(26.828419,48.385136),(26.831933,48.391363),(26.842785,48.39374),(26.875961,48.383999),(26.908724,48.36524),(26.943244,48.351262),(26.981071,48.35568),(26.990579,48.362372),(26.997814,48.361545),(27.004635,48.358755),(27.015591,48.359401),(27.027683,48.357747),(27.033057,48.360202),(27.031817,48.36524),(27.02851,48.370795),(27.027786,48.374774),(27.0282,48.381389),(27.025409,48.389864),(27.02634,48.397099),(27.037398,48.399683),(27.048044,48.397667),(27.068818,48.388882),(27.175788,48.361804),(27.208551,48.360615),(27.246481,48.373741),(27.25193,48.37831),(27.306012,48.42366),(27.342186,48.436114),(27.36141,48.432807),(27.389832,48.415005),(27.403474,48.411491),(27.420217,48.417149),(27.480885,48.451411),(27.50383,48.472365),(27.503896,48.472365),(27.545171,48.472365),(27.557057,48.474355),(27.582998,48.486034),(27.604806,48.484122),(27.606873,48.457819)] +Macedonia 
[(22.345023,42.313439),(22.443622,42.214427),(22.481449,42.193317),(22.494678,42.164559),(22.506939,42.148927),(22.510181,42.144793),(22.531058,42.129109),(22.617771,42.082704),(22.627177,42.079127),(22.675856,42.060612),(22.705725,42.055935),(22.710272,42.05299),(22.713993,42.048623),(22.718437,42.044463),(22.725052,42.042474),(22.77063,42.043998),(22.780862,42.043171),(22.785306,42.039141),(22.787684,42.032578),(22.791094,42.025808),(22.798949,42.021235),(22.805977,42.02139),(22.821273,42.025369),(22.826958,42.025085),(22.838223,42.019478),(22.843701,42.014465),(22.845621,42.007408),(22.845768,42.006869),(22.846595,41.993639),(22.846905,41.993484),(22.85476,41.982632),(22.857137,41.971884),(22.858894,41.94788),(22.866335,41.924884),(22.877084,41.902043),(22.878634,41.895015),(22.878221,41.880261),(22.880804,41.872691),(22.882088,41.871618),(22.885042,41.869151),(22.896721,41.864448),(22.901372,41.860418),(22.907676,41.848584),(22.918322,41.814348),(22.939716,41.776702),(22.945917,41.769338),(22.956872,41.765669),(22.98054,41.764739),(22.991185,41.760992),(23.008859,41.739934),(23.009582,41.71637),(22.998627,41.693115),(22.985435,41.677198),(22.976613,41.666553),(22.970101,41.652032),(22.967001,41.647046),(22.961523,41.644488),(22.945813,41.641077),(22.940852,41.63764),(22.936098,41.626168),(22.932998,41.612345),(22.932068,41.597953),(22.933721,41.584595),(22.936925,41.57891),(22.946434,41.567748),(22.948707,41.560978),(22.94788,41.555139),(22.943023,41.538551),(22.943599,41.523201),(22.946227,41.453233),(22.94757,41.448376),(22.953358,41.438195),(22.954598,41.432408),(22.952118,41.427705),(22.940542,41.416905),(22.937339,41.410755),(22.939406,41.389413),(22.94447,41.368432),(22.940852,41.349829),(22.916978,41.335773),(22.826027,41.340992),(22.796818,41.337039),(22.780966,41.334894),(22.762293,41.3225),(22.751303,41.315205),(22.742003,41.28703),(22.740865,41.283579),(22.736731,41.204411),(22.727222,41.165809),(22.71575,41.145603),(22.704898,41.139661),(22.691875,41.14457),(22.674099,41.15661),(22.666347,41.164517),(22.661903,41.172268),(22.657185,41.176986),(22.656115,41.178056),(22.64454,41.180227),(22.629347,41.177074),(22.625936,41.169788),(22.626763,41.160796),(22.624489,41.152631),(22.607333,41.135992),(22.590176,41.125139),(22.586634,41.124085),(22.571263,41.119507),(22.549248,41.118525),(22.516589,41.122246),(22.500673,41.122142),(22.481449,41.11775),(22.467496,41.115011),(22.451477,41.113667),(22.421814,41.114649),(22.410446,41.117905),(22.389258,41.128343),(22.380163,41.131702),(22.367658,41.132374),(22.342336,41.128964),(22.323422,41.128498),(22.315051,41.124416),(22.308746,41.124984),(22.306679,41.128447),(22.305232,41.141624),(22.302649,41.1455),(22.29345,41.147515),(22.262961,41.150461),(22.225134,41.159608),(22.213637,41.160684),(22.205807,41.161416),(22.183173,41.159918),(22.160332,41.151908),(22.124365,41.127207),(22.103384,41.121522),(22.095943,41.123641),(22.062767,41.13718),(22.060286,41.140022),(22.058632,41.145087),(22.055532,41.149892),(22.048194,41.151856),(22.04406,41.149944),(22.033828,41.141263),(22.029074,41.138575),(21.965098,41.124364),(21.934712,41.11191),(21.925545,41.106754),(21.909081,41.097493),(21.901226,41.090775),(21.897195,41.081266),(21.896333,41.064705),(21.896162,41.061422),(21.894301,41.053826),(21.880659,41.038426),(21.845002,41.012381),(21.831463,40.993726),(21.831359,40.993675),(21.831359,40.993623),(21.831256,40.99352),(21.793636,40.973572),(21.783714,40.964167),(21.781853,40.955899),(21.78175,40.945099),(21.778133,40.93373),(21.765524,40.92391
1),(21.757091,40.922506),(21.736998,40.919157),(21.685115,40.927994),(21.6569,40.918227),(21.654833,40.9143),(21.652662,40.901381),(21.648838,40.895799),(21.643877,40.894456),(21.629511,40.895076),(21.623724,40.894404),(21.613698,40.888306),(21.590341,40.870685),(21.581556,40.866292),(21.55365,40.870426),(21.509519,40.900502),(21.505731,40.900903),(21.42942,40.908977),(21.404926,40.908615),(21.381258,40.900502),(21.344878,40.873062),(21.329271,40.866292),(21.295372,40.860866),(21.260955,40.860815),(21.245824,40.863226),(21.209072,40.869083),(21.183027,40.870168),(21.112127,40.853942),(20.965262,40.849394),(20.964849,40.875956),(20.956684,40.894766),(20.939941,40.907065),(20.890228,40.918279),(20.837415,40.924066),(20.836544,40.923904),(20.816951,40.920242),(20.783672,40.899055),(20.766102,40.893732),(20.740883,40.89797),(20.730573,40.904611),(20.717216,40.913214),(20.702746,40.936314),(20.683419,40.99383),(20.664092,41.059149),(20.65386,41.075272),(20.643008,41.081576),(20.634129,41.082576),(20.631536,41.082868),(20.618927,41.082403),(20.605284,41.083488),(20.597419,41.086273),(20.576966,41.093513),(20.569938,41.107104),(20.570558,41.124829),(20.565494,41.147412),(20.549681,41.170615),(20.51268,41.210147),(20.500071,41.23552),(20.483121,41.289471),(20.477747,41.319598),(20.478108,41.321585),(20.481674,41.341199),(20.49604,41.337788),(20.510407,41.344403),(20.523429,41.356805),(20.532937,41.370447),(20.539965,41.387139),(20.540172,41.400678),(20.539049,41.402944),(20.534074,41.412977),(20.522189,41.42569),(20.514644,41.429462),(20.498004,41.431477),(20.490873,41.436025),(20.488909,41.441451),(20.486842,41.457781),(20.483535,41.46548),(20.481597,41.468269),(20.470822,41.483774),(20.463174,41.489768),(20.452012,41.493592),(20.444984,41.508475),(20.447878,41.53545),(20.444157,41.549661),(20.49294,41.557671),(20.507409,41.56227),(20.520845,41.568368),(20.52932,41.574879),(20.534694,41.585266),(20.53467,41.587324),(20.534591,41.594026),(20.513404,41.640405),(20.508339,41.661748),(20.500381,41.734095),(20.503275,41.744637),(20.511337,41.757943),(20.521155,41.767658),(20.54441,41.784763),(20.550921,41.793522),(20.550136,41.80006),(20.54844,41.814193),(20.540482,41.839437),(20.540786,41.844865),(20.541722,41.86158),(20.567147,41.873182),(20.590298,41.854733),(20.602391,41.849876),(20.61872,41.850522),(20.626162,41.855198),(20.637014,41.870365),(20.643422,41.873647),(20.652827,41.86928),(20.671844,41.849307),(20.681456,41.84401),(20.702953,41.849591),(20.714398,41.859163),(20.723313,41.866619),(20.739953,41.888039),(20.750495,41.906797),(20.751052,41.910218),(20.754423,41.930904),(20.751439,41.940338),(20.741814,41.970773),(20.743054,41.993484),(20.755353,42.042784),(20.765378,42.064333),(20.784912,42.082032),(20.810543,42.092936),(20.90417,42.116668),(20.975145,42.134658),(21.003916,42.141951),(21.029238,42.151408),(21.04094,42.15997),(21.074403,42.184455),(21.098484,42.195953),(21.106236,42.195798),(21.112954,42.194402),(21.12639,42.188925),(21.164423,42.16704),(21.199873,42.14115),(21.216203,42.121151),(21.225402,42.106785),(21.229298,42.103822),(21.237804,42.097354),(21.245727,42.096167),(21.28886,42.089706),(21.299299,42.091411),(21.300796,42.098425),(21.301263,42.10061),(21.300332,42.120893),(21.298782,42.129419),(21.295785,42.134794),(21.293718,42.140168),(21.294958,42.148979),(21.353766,42.215977),(21.360602,42.220114),(21.366788,42.223858),(21.384255,42.224943),(21.419085,42.215021),(21.429834,42.215848),(21.428697,42.222566),(21.421462,42.231351),(21.419912,42.240058),(21.436345,42.24688),(
21.44792,42.244141),(21.457222,42.237035),(21.467557,42.235123),(21.471973,42.23913),(21.48151,42.247784),(21.499287,42.238663),(21.519854,42.239025),(21.561815,42.247164),(21.564066,42.246289),(21.575044,42.242022),(21.624654,42.242772),(21.65969,42.235563),(21.676886,42.234945),(21.67695,42.234943),(21.69204,42.242022),(21.706509,42.25507),(21.719521,42.260957),(21.8172,42.305145),(21.837354,42.308556),(21.877352,42.30822),(21.88438,42.309512),(21.918279,42.331345),(21.929028,42.335117),(21.941534,42.333102),(21.994967,42.312612),(22.027627,42.303956),(22.045407,42.302424),(22.060906,42.301088),(22.095529,42.305817),(22.233402,42.348889),(22.259757,42.369095),(22.268852,42.370335),(22.273296,42.365477),(22.27402,42.348476),(22.276914,42.341241),(22.29159,42.328399),(22.307609,42.31933),(22.325283,42.314318),(22.345023,42.313439)] +Mali [(-4.821226,24.994755),(-4.744925,24.947057),(-4.668677,24.899334),(-4.592351,24.851688),(-4.516025,24.803991),(-4.404946,24.731308),(-4.325235,24.679218),(-4.245498,24.627077),(-4.165865,24.574987),(-4.086154,24.522897),(-4.006495,24.470807),(-3.926732,24.418717),(-3.846996,24.366576),(-3.767336,24.314434),(-3.687625,24.262344),(-3.608018,24.210203),(-3.528281,24.158113),(-3.448544,24.106023),(-3.368808,24.053959),(-3.289123,24.001844),(-3.209489,23.949702),(-3.129649,23.897612),(-3.043065,23.840923),(-2.956404,23.784234),(-2.869768,23.727623),(-2.783107,23.670985),(-2.696446,23.614348),(-2.609836,23.557685),(-2.523226,23.500996),(-2.436513,23.444358),(-2.349852,23.387747),(-2.26319,23.331109),(-2.176632,23.274446),(-2.089919,23.217757),(-2.00331,23.161042),(-1.916648,23.104456),(-1.830039,23.047819),(-1.743326,22.991182),(-1.656716,22.93457),(-1.570054,22.877881),(-1.483496,22.821166),(-1.396783,22.764555),(-1.310122,22.707969),(-1.223512,22.65128),(-1.136903,22.594643),(-1.05019,22.537954),(-0.96358,22.481265),(-0.876918,22.424627),(-0.790309,22.368041),(-0.703647,22.311404),(-0.616934,22.254767),(-0.530325,22.198078),(-0.443663,22.141389),(-0.357028,22.084751),(-0.270392,22.028114),(-0.183731,21.971477),(-0.097044,21.914839),(-0.010408,21.85815),(0.003735,21.848899),(0.076253,21.801461),(0.162966,21.744824),(0.249499,21.688238),(0.33616,21.631601),(0.422795,21.574963),(0.509457,21.518274),(0.596067,21.461585),(0.682676,21.404896),(0.769389,21.34831),(0.855999,21.291673),(0.942609,21.235036),(1.029322,21.178347),(1.146524,21.101711),(1.15934,21.081505),(1.177943,21.017323),(1.180114,20.995309),(1.167505,20.886013),(1.145284,20.795889),(1.14456,20.776252),(1.147247,20.751448),(1.154585,20.738787),(1.168538,20.733464),(1.191276,20.73057),(1.212463,20.73088),(1.252254,20.738994),(1.273338,20.739407),(1.296696,20.733464),(1.310545,20.722716),(1.331526,20.687937),(1.346925,20.669127),(1.363978,20.657707),(1.40718,20.645046),(1.447487,20.638741),(1.465781,20.633522),(1.483454,20.622618),(1.520351,20.616986),(1.559729,20.597504),(1.623704,20.551253),(1.643961,20.522676),(1.650059,20.487019),(1.649232,20.412089),(1.659154,20.397516),(1.778113,20.304291),(1.799197,20.294886),(1.820901,20.293594),(1.838885,20.29592),(1.855008,20.294835),(1.870614,20.283543),(1.880329,20.263054),(1.883843,20.24414),(1.891388,20.231789),(1.913402,20.231092),(1.924254,20.23613),(1.941204,20.251116),(1.95526,20.254915),(1.967146,20.253416),(1.975621,20.248429),(1.983372,20.241918),(1.993397,20.235949),(2.056339,20.215046),(2.071222,20.213263),(2.097474,20.224193),(2.138195,20.260728),(2.161346,20.274939),(2.18243,20.278505),(2.200826,20.273906),(2.21819,20.264087),(2.279581,20.21794),
(2.316478,20.180165),(2.348208,20.137635),(2.388722,20.067407),(2.400401,20.056555),(2.415697,20.051284),(2.439882,20.04609),(2.459312,20.038778),(2.495382,20.020097),(2.514812,20.015937),(2.525665,20.015162),(2.616925,19.998367),(2.671805,19.996222),(2.946001,19.941652),(3.072608,19.88889),(3.130485,19.845224),(3.147022,19.837938),(3.183195,19.827731),(3.198802,19.820523),(3.212754,19.807758),(3.216785,19.794064),(3.198285,19.592397),(3.199422,19.553769),(3.212238,19.517156),(3.217509,19.511162),(3.222883,19.508061),(3.228051,19.504159),(3.232288,19.495478),(3.231668,19.489044),(3.22619,19.4692),(3.225984,19.459692),(3.234769,19.441347),(3.247481,19.426464),(3.25823,19.410393),(3.260813,19.388327),(3.250995,19.36546),(3.232701,19.351843),(3.211514,19.340862),(3.192911,19.325798),(3.183816,19.307505),(3.178855,19.268722),(3.17441,19.251643),(3.152706,19.230197),(3.13927,19.221929),(3.134206,19.212859),(3.126558,19.193352),(3.111779,19.171337),(3.102684,19.153561),(3.10413,19.135526),(3.120874,19.112814),(3.138754,19.096045),(3.158597,19.08155),(3.179785,19.07),(3.225984,19.05106),(3.284895,18.995741),(3.308356,18.981685),(3.318381,18.977706),(3.333057,18.975561),(3.358689,18.976853),(3.439717,18.995638),(3.439821,18.995638),(3.54462,19.015068),(3.715566,19.046901),(3.790291,19.06077),(3.886512,19.07863),(4.057561,19.110437),(4.22861,19.142244),(4.229023,19.018582),(4.229436,18.894946),(4.22985,18.771284),(4.230263,18.647648),(4.230271,18.645374),(4.230677,18.524038),(4.23109,18.400325),(4.231504,18.276741),(4.231917,18.153105),(4.232227,18.029443),(4.232408,17.975234),(4.23264,17.905782),(4.233054,17.78212),(4.233364,17.658458),(4.233777,17.534796),(4.234294,17.411186),(4.234707,17.287525),(4.235018,17.163915),(4.235328,17.100818),(4.235638,16.995863),(4.222305,16.986561),(4.21166,16.986044),(4.203702,16.982789),(4.197811,16.965219),(4.19657,16.947132),(4.202048,16.848895),(4.197604,16.838508),(4.184581,16.818509),(4.181998,16.809621),(4.182308,16.746472),(4.183031,16.612682),(4.183444,16.526538),(4.183961,16.416053),(4.1759,16.392644),(4.16174,16.379983),(4.118229,16.358331),(4.094768,16.340812),(4.075647,16.321072),(4.060558,16.298334),(3.971158,16.086099),(3.96692,16.058504),(3.970848,16.030857),(3.98325,16.00135),(3.983973,16.00011),(3.98449,15.998766),(3.984697,15.997371),(3.984594,15.995924),(3.9848,15.989826),(3.98418,15.986881),(3.98325,15.983987),(3.925062,15.927608),(3.909869,15.904767),(3.903462,15.886318),(3.89447,15.78865),(3.886512,15.750099),(3.873849,15.720882),(3.871215,15.714804),(3.846101,15.685297),(3.808377,15.665582),(3.728899,15.65088),(3.692208,15.63145),(3.614074,15.547734),(3.526534,15.495954),(3.516508,15.469186),(3.507103,15.353973),(3.48881,15.357539),(3.483332,15.359296),(3.380393,15.376324),(3.19229,15.40751),(3.073021,15.427199),(3.03354,15.42645),(3.017521,15.422832),(3.010286,15.417665),(3.007806,15.407665),(3.005842,15.389294),(3.005739,15.35232),(3.000158,15.339117),(2.950651,15.337463),(2.85412,15.334207),(2.757485,15.331081),(2.660953,15.327825),(2.564215,15.324595),(2.467684,15.321417),(2.371049,15.318239),(2.274517,15.314984),(2.177882,15.311831),(2.081351,15.308576),(1.984612,15.305398),(1.888081,15.302194),(1.791446,15.29899),(1.694914,15.295734),(1.598383,15.292608),(1.501748,15.289352),(1.405216,15.286148),(1.331526,15.283616),(1.297832,15.275735),(1.270857,15.259897),(1.203161,15.198789),(1.123063,15.126313),(1.057537,15.067118),(0.973718,14.991257),(0.949327,14.979552),(0.922145,14.973971),(0.769183,14.969062),(0.739934,14.958339),(0.711408,14.9
47461),(0.683813,14.940872),(0.670067,14.939735),(0.514831,14.993556),(0.483515,14.992109),(0.418791,14.969888),(0.387035,14.963248),(0.353187,14.963429),(0.221257,14.995933),(0.213196,14.985417),(0.212782,14.960716),(0.218467,14.910977),(-0.033197,14.995933),(-0.033404,14.995933),(-0.166988,15.049677),(-0.236699,15.065619),(-0.299176,15.054741),(-0.361472,15.017741),(-0.397671,15.002135),(-0.425732,15.0026),(-0.435602,15.015157),(-0.458649,15.075463),(-0.4679,15.079908),(-0.500343,15.079721),(-0.719719,15.078461),(-0.752895,15.069727),(-0.771272,15.056619),(-0.782816,15.048385),(-0.836249,14.996088),(-0.836404,14.995985),(-1.043138,14.818062),(-1.043317,14.817908),(-1.07856,14.796359),(-1.116697,14.780443),(-1.307745,14.734657),(-1.350275,14.714969),(-1.69692,14.496119),(-1.766683,14.483174),(-1.836963,14.479582),(-1.919129,14.485861),(-1.967033,14.483742),(-1.997057,14.470694),(-2.005015,14.444236),(-2.023412,14.198643),(-2.027288,14.188127),(-2.035918,14.181229),(-2.085008,14.159668),(-2.104337,14.151179),(-2.11984,14.148776),(-2.151879,14.15614),(-2.385612,14.264712),(-2.46168,14.280912),(-2.516147,14.267864),(-2.597382,14.222311),(-2.619758,14.203604),(-2.676265,14.141729),(-2.69319,14.123196),(-2.840855,14.042994),(-2.867288,14.000542),(-2.861552,14.00173),(-2.858606,14.000232),(-2.85703,13.997777),(-2.855377,13.996175),(-2.89447,13.866571),(-2.912169,13.837942),(-2.928059,13.799624),(-2.923383,13.746035),(-2.895245,13.651648),(-2.930473,13.638523),(-2.965008,13.625655),(-2.97232,13.624467),(-2.979322,13.627567),(-2.986402,13.632063),(-2.994774,13.634983),(-3.019113,13.637515),(-3.026968,13.635345),(-3.038621,13.629583),(-3.05601,13.61085),(-3.067508,13.606767),(-3.07451,13.622141),(-3.077301,13.636791),(-3.08221,13.646222),(-3.100039,13.662733),(-3.125102,13.67728),(-3.146031,13.67666),(-3.167941,13.672112),(-3.195821,13.674903),(-3.240418,13.707976),(-3.267057,13.717045),(-3.286952,13.69777),(-3.266282,13.682344),(-3.259719,13.658315),(-3.263905,13.602659),(-3.269176,13.579095),(-3.283955,13.542198),(-3.248634,13.292781),(-3.262406,13.283919),(-3.42653,13.27423),(-3.448544,13.265781),(-3.452265,13.237514),(-3.440586,13.202839),(-3.439475,13.184855),(-3.449475,13.16881),(-3.461593,13.165761),(-3.473922,13.167601),(-3.516111,13.1739),(-3.538487,13.173409),(-3.54505,13.174494),(-3.553913,13.180256),(-3.569958,13.196767),(-3.576599,13.199299),(-3.589802,13.197387),(-3.596882,13.199402),(-3.678711,13.26175),(-3.724419,13.288828),(-3.798626,13.347145),(-3.817101,13.35469),(-3.85808,13.365516),(-3.8926,13.378926),(-3.908878,13.381975),(-3.928825,13.38027),(-3.948411,13.380838),(-3.971613,13.386755),(-3.984042,13.39647),(-3.971613,13.40833),(-3.963009,13.413162),(-3.942726,13.431946),(-3.934923,13.434892),(-3.925544,13.436313),(-3.918593,13.440318),(-3.917973,13.451066),(-3.923399,13.453754),(-3.94748,13.458198),(-3.956059,13.46166),(-3.959288,13.467939),(-3.964482,13.487808),(-3.965468,13.489217),(-3.970321,13.496154),(-3.974094,13.498945),(-3.977737,13.499926),(-3.981251,13.49897),(-3.984481,13.496051),(-3.997968,13.480522),(-3.992232,13.476233),(-3.979468,13.47538),(-3.971613,13.469954),(-3.97691,13.460627),(-3.990217,13.449129),(-4.005358,13.439517),(-4.016029,13.43577),(-4.028199,13.431533),(-4.043495,13.414454),(-4.052539,13.409002),(-4.079307,13.402129),(-4.087549,13.397995),(-4.104267,13.382698),(-4.114654,13.364534),(-4.131578,13.323839),(-4.147106,13.299603),(-4.16602,13.278157),(-4.176872,13.272369),(-4.186458,13.272834),(-4.195837,13.274695),(-4.205501,13.272989),(-4.220668,13.2
62835),(-4.23878,13.246789),(-4.253508,13.22909),(-4.258107,13.21413),(-4.251209,13.204751),(-4.230021,13.193149),(-4.227773,13.184442),(-4.234801,13.175605),(-4.24617,13.172479),(-4.310197,13.175553),(-4.327044,13.168551),(-4.339601,13.135298),(-4.345906,13.127133),(-4.351073,13.118322),(-4.351073,13.106075),(-4.345363,13.097057),(-4.316398,13.070857),(-4.289733,13.01347),(-4.276918,12.996236),(-4.24785,12.971948),(-4.229324,12.948952),(-4.221314,12.921254),(-4.224647,12.864513),(-4.211883,12.819193),(-4.213278,12.807178),(-4.23413,12.74235),(-4.243974,12.729922),(-4.245472,12.728936),(-4.257642,12.72093),(-4.276298,12.714677),(-4.297227,12.712145),(-4.310921,12.716021),(-4.339446,12.73279),(-4.36885,12.738991),(-4.402026,12.736562),(-4.435254,12.728423),(-4.464296,12.717261),(-4.46437,12.717251),(-4.474942,12.715762),(-4.481711,12.717416),(-4.485251,12.714677),(-4.491323,12.662794),(-4.482538,12.646258),(-4.439078,12.613288),(-4.42249,12.597217),(-4.401406,12.550398),(-4.392647,12.537427),(-4.387247,12.533965),(-4.386885,12.530192),(-4.393448,12.516446),(-4.407607,12.497946),(-4.426624,12.479653),(-4.443678,12.459447),(-4.451636,12.435314),(-4.415359,12.350307),(-4.406135,12.307467),(-4.435823,12.301731),(-4.444814,12.309431),(-4.452669,12.320283),(-4.462488,12.328189),(-4.47719,12.327311),(-4.488171,12.320696),(-4.490703,12.313823),(-4.490496,12.305762),(-4.49401,12.290104),(-4.492253,12.278838),(-4.493545,12.272844),(-4.497421,12.26871),(-4.585529,12.197241),(-4.560466,12.149492),(-4.570233,12.138898),(-4.600309,12.137555),(-4.636121,12.117556),(-4.646663,12.098022),(-4.648678,12.081951),(-4.652737,12.071627),(-4.653716,12.069135),(-4.673586,12.05973),(-4.683715,12.059058),(-4.704023,12.061487),(-4.714617,12.059007),(-4.725133,12.050118),(-4.732281,12.037263),(-4.738983,12.02521),(-4.746889,12.015082),(-4.761281,12.006607),(-4.789522,12.001801),(-4.806808,11.996271),(-4.826238,12.012705),(-4.847245,12.013066),(-4.889955,11.997925),(-4.910677,11.998132),(-4.932071,12.003144),(-4.95375,12.005108),(-4.975454,11.996271),(-4.984523,11.988572),(-4.994238,11.983249),(-5.004703,11.980665),(-5.016046,11.981285),(-5.034313,11.97927),(-5.073768,11.98051),(-5.089323,11.97927),(-5.103947,11.972759),(-5.136994,11.953122),(-5.167819,11.943665),(-5.182909,11.930901),(-5.209471,11.901497),(-5.248228,11.870439),(-5.268227,11.843051),(-5.275151,11.837418),(-5.282489,11.836436),(-5.288019,11.839123),(-5.293445,11.843206),(-5.300163,11.846358),(-5.306519,11.846048),(-5.311066,11.842896),(-5.314839,11.83933),(-5.319025,11.837418),(-5.353803,11.831734),(-5.357058,11.829615),(-5.364552,11.82176),(-5.369874,11.820158),(-5.37561,11.822225),(-5.387909,11.830804),(-5.392302,11.832354),(-5.405944,11.830442),(-5.412352,11.82853),(-5.412197,11.823569),(-5.405996,11.812303),(-5.400415,11.810185),(-5.374525,11.79525),(-5.370649,11.791064),(-5.353286,11.794527),(-5.344656,11.793907),(-5.334269,11.790806),(-5.314787,11.782486),(-5.298974,11.772668),(-5.28714,11.758922),(-5.279544,11.739026),(-5.278562,11.721456),(-5.280681,11.699494),(-5.289259,11.660426),(-5.294788,11.650298),(-5.302281,11.639859),(-5.307811,11.629007),(-5.307604,11.617638),(-5.299956,11.606424),(-5.290964,11.604461),(-5.280732,11.605753),(-5.269467,11.604099),(-5.233914,11.575987),(-5.224663,11.54077),(-5.224612,11.45801),(-5.219186,11.435143),(-5.218307,11.422198),(-5.22642,11.41349),(-5.248176,11.403181),(-5.263472,11.390184),(-5.267865,11.37202),(-5.26094,11.267737),(-5.262852,11.251588),(-5.275255,11.230711),(-5.306726,11.199395),(-5.316182,11.1763
22),(-5.322332,11.135549),(-5.332512,11.12157),(-5.356283,11.106894),(-5.373026,11.099763),(-5.390131,11.094699),(-5.489764,11.082115),(-5.504233,11.072684),(-5.507334,11.056949),(-5.505602,11.047985),(-5.495603,10.996203),(-5.501804,10.97021),(-5.479118,10.975533),(-5.465579,10.947886),(-5.447027,10.877993),(-5.43509,10.856909),(-5.431421,10.844688),(-5.445322,10.786991),(-5.445012,10.778568),(-5.446459,10.768698),(-5.45297,10.763969),(-5.461186,10.760042),(-5.467956,10.752678),(-5.473072,10.737873),(-5.482632,10.684233),(-5.482115,10.675215),(-5.477154,10.658808),(-5.475914,10.650385),(-5.460566,10.644571),(-5.470488,10.635605),(-5.477413,10.61956),(-5.47855,10.604238),(-5.471057,10.597365),(-5.468111,10.58783),(-5.48165,10.535327),(-5.505163,10.499179),(-5.508987,10.490937),(-5.5171,10.439984),(-5.522578,10.425489),(-5.549708,10.435643),(-5.571722,10.44988),(-5.584383,10.454247),(-5.594047,10.453937),(-5.621797,10.446935),(-5.677556,10.441508),(-5.806282,10.413474),(-5.87806,10.37606),(-5.893305,10.365027),(-5.904829,10.348284),(-5.908963,10.330068),(-5.904725,10.275291),(-5.916404,10.266635),(-5.963016,10.282733),(-5.974747,10.273638),(-5.978313,10.248626),(-5.985237,10.227284),(-5.997123,10.208189),(-6.015933,10.18987),(-6.105333,10.189276),(-6.180884,10.215838),(-6.19432,10.224183),(-6.214009,10.247851),(-6.224809,10.255034),(-6.240002,10.251933),(-6.243309,10.289011),(-6.240674,10.302576),(-6.228323,10.31609),(-6.225894,10.314488),(-6.22114,10.310354),(-6.21437,10.307796),(-6.206154,10.310819),(-6.203984,10.316038),(-6.205379,10.322627),(-6.207808,10.328518),(-6.208324,10.33198),(-6.183933,10.358852),(-6.198764,10.370815),(-6.195664,10.389212),(-6.180574,10.420218),(-6.196904,10.439157),(-6.200366,10.448769),(-6.198196,10.457399),(-6.187499,10.475408),(-6.185793,10.481325),(-6.190289,10.491144),(-6.196129,10.494684),(-6.20419,10.494373),(-6.215662,10.492616),(-6.222432,10.494632),(-6.228168,10.500575),(-6.233491,10.507577),(-6.238658,10.512744),(-6.24765,10.51383),(-6.256022,10.511608),(-6.261293,10.514088),(-6.260931,10.529152),(-6.256228,10.537239),(-6.237935,10.551941),(-6.231372,10.560235),(-6.230028,10.572043),(-6.232871,10.58411),(-6.238658,10.597985),(-6.2307,10.606925),(-6.219797,10.61186),(-6.209771,10.617751),(-6.204449,10.629611),(-6.207808,10.637543),(-6.216903,10.645088),(-6.227755,10.65085),(-6.246617,10.657619),(-6.249459,10.665603),(-6.244446,10.706867),(-6.245841,10.720742),(-6.25597,10.726478),(-6.28,10.723274),(-6.303564,10.713171),(-6.322219,10.700485),(-6.341701,10.691442),(-6.367539,10.692268),(-6.388055,10.695524),(-6.409811,10.694387),(-6.427019,10.686093),(-6.433995,10.667929),(-6.435856,10.628706),(-6.430688,10.61496),(-6.419061,10.6077),(-6.406658,10.60186),(-6.399217,10.592481),(-6.402783,10.569123),(-6.42149,10.556592),(-6.445984,10.552251),(-6.466965,10.553543),(-6.52138,10.564214),(-6.537555,10.569692),(-6.578069,10.591138),(-6.598585,10.604909),(-6.613623,10.618474),(-6.633983,10.65302),(-6.647109,10.66134),(-6.669175,10.654157),(-6.667883,10.650901),(-6.681939,10.627389),(-6.684213,10.626562),(-6.688967,10.607855),(-6.691086,10.589432),(-6.689225,10.580931),(-6.685608,10.574007),(-6.682817,10.565971),(-6.687778,10.52047),(-6.697545,10.488301),(-6.695788,10.471197),(-6.683076,10.451689),(-6.675634,10.447865),(-6.664627,10.444557),(-6.653982,10.440036),(-6.647781,10.432698),(-6.647936,10.421871),(-6.653052,10.413061),(-6.659615,10.405361),(-6.663955,10.397997),(-6.66827,10.36584),(-6.668916,10.361022),(-6.677391,10.349034),(-6.700336,10.34167),(-6.72255
7,10.344047),(-6.760229,10.366113),(-6.782088,10.373037),(-6.803895,10.370867),(-6.862755,10.34384),(-6.88203,10.341256),(-6.938719,10.348336),(-6.96125,10.344874),(-6.973136,10.33229),(-6.997113,10.25276),(-6.994891,10.239402),(-6.984298,10.230384),(-6.971172,10.222065),(-6.961612,10.211032),(-6.95939,10.185503),(-6.971637,10.164419),(-6.992411,10.149097),(-7.015769,10.140881),(-7.040522,10.140054),(-7.058247,10.15597),(-7.073129,10.178527),(-7.089459,10.19757),(-7.106357,10.207259),(-7.204749,10.234493),(-7.284021,10.246559),(-7.346963,10.247903),(-7.363396,10.252192),(-7.371819,10.264517),(-7.372698,10.288598),(-7.366342,10.329448),(-7.372336,10.34477),(-7.395022,10.346217),(-7.422204,10.336089),(-7.431402,10.333737),(-7.444425,10.334202),(-7.445768,10.340378),(-7.443288,10.349964),(-7.444683,10.360816),(-7.461478,10.388643),(-7.465457,10.398617),(-7.46618,10.40748),(-7.465354,10.417996),(-7.46618,10.429494),(-7.471606,10.441095),(-7.480236,10.450293),(-7.489848,10.456107),(-7.501475,10.458639),(-7.515841,10.457812),(-7.539613,10.427633),(-7.547416,10.420502),(-7.556382,10.417272),(-7.574339,10.417531),(-7.584416,10.416523),(-7.607774,10.418719),(-7.643689,10.440475),(-7.664411,10.437142),(-7.670044,10.42877),(-7.673971,10.416394),(-7.679036,10.405929),(-7.687924,10.403087),(-7.69981,10.404534),(-7.708646,10.402467),(-7.722289,10.391331),(-7.722289,10.384484),(-7.726785,10.382882),(-7.736035,10.378282),(-7.723994,10.363606),(-7.735337,10.347096),(-7.757429,10.334047),(-7.777583,10.329862),(-7.764405,10.312912),(-7.776342,10.292732),(-7.778254,10.27834),(-7.783835,10.270434),(-7.791225,10.265783),(-7.797736,10.260279),(-7.80125,10.249453),(-7.805074,10.241831),(-7.829001,10.211652),(-7.838664,10.203719),(-7.847423,10.197932),(-7.857216,10.194418),(-7.882847,10.190826),(-7.892253,10.184832),(-7.910959,10.169406),(-7.929046,10.160957),(-7.949872,10.155428),(-7.970904,10.155066),(-7.989663,10.161991),(-7.96801,10.209998),(-7.962584,10.233252),(-7.9646,10.258936),(-7.971214,10.281802),(-7.981549,10.307021),(-7.996277,10.32826),(-8.015708,10.339344),(-8.070071,10.341876),(-8.096116,10.346734),(-8.115495,10.360247),(-8.120921,10.373037),(-8.122109,10.399496),(-8.125417,10.411665),(-8.135442,10.427323),(-8.142056,10.429649),(-8.150583,10.424533),(-8.165983,10.418099),(-8.18903,10.413629),(-8.210579,10.414068),(-8.228873,10.423422),(-8.253884,10.469827),(-8.284425,10.511246),(-8.295535,10.537446),(-8.311142,10.735289),(-8.315586,10.753092),(-8.341941,10.763246),(-8.339822,10.783865),(-8.318893,10.83952),(-8.311814,10.847246),(-8.305871,10.856289),(-8.303649,10.874893),(-8.305096,10.978246),(-8.311142,11.000803),(-8.326593,11.02385),(-8.347315,11.043126),(-8.369329,11.054236),(-8.381318,11.053926),(-8.400645,11.043332),(-8.410464,11.041911),(-8.429067,11.048293),(-8.438162,11.04974),(-8.448704,11.047105),(-8.469013,11.046381),(-8.489064,11.052737),(-8.505755,11.052789),(-8.531645,10.999046),(-8.541567,10.983775),(-8.55955,10.971605),(-8.575622,10.966748),(-8.593347,10.964707),(-8.611382,10.965146),(-8.628228,10.967833),(-8.647969,10.965818),(-8.665229,10.956955),(-8.680938,10.952304),(-8.696234,10.962743),(-8.70073,10.978452),(-8.696906,10.996048),(-8.68869,11.012559),(-8.679801,11.025194),(-8.63368,11.071289),(-8.623112,11.091262),(-8.622389,11.102605),(-8.624921,11.112191),(-8.624611,11.122035),(-8.615413,11.13405),(-8.607222,11.137151),(-8.586991,11.137461),(-8.578154,11.14206),(-8.569679,11.157796),(-8.563943,11.193426),(-8.551799,11.210738),(-8.544616,11.214975),(-8.5241,11.22376),(-8.515625,
11.229057),(-8.51206,11.230298),(-8.502138,11.231176),(-8.498779,11.233501),(-8.497022,11.24071),(-8.499244,11.246705),(-8.502345,11.25164),(-8.50312,11.255412),(-8.492578,11.277788),(-8.483224,11.28492),(-8.44457,11.276264),(-8.426897,11.274197),(-8.406691,11.275282),(-8.388398,11.281406),(-8.376719,11.294661),(-8.371035,11.32055),(-8.381267,11.325098),(-8.39956,11.322152),(-8.417957,11.325511),(-8.42607,11.340472),(-8.414133,11.353468),(-8.393359,11.363003),(-8.375324,11.367654),(-8.397648,11.385895),(-8.427724,11.40008),(-8.488134,11.418193),(-8.51578,11.418968),(-8.527563,11.423387),(-8.535521,11.438088),(-8.539425,11.456312),(-8.543996,11.477647),(-8.549577,11.490127),(-8.575208,11.470283),(-8.602183,11.472092),(-8.656754,11.496431),(-8.667967,11.510875),(-8.679026,11.52984),(-8.687346,11.55002),(-8.690705,11.568029),(-8.712874,11.640066),(-8.726517,11.649471),(-8.740366,11.646164),(-8.756541,11.638826),(-8.777211,11.635932),(-8.795195,11.641926),(-8.81168,11.651745),(-8.828526,11.659186),(-8.847233,11.657946),(-8.797004,11.913227),(-8.79783,11.925113),(-8.803205,11.934932),(-8.809974,11.94444),(-8.814909,11.955395),(-8.816434,11.976841),(-8.815452,11.995703),(-8.819793,12.012653),(-8.837983,12.028156),(-8.858757,12.031825),(-8.88165,12.029189),(-8.903715,12.030378),(-8.922112,12.045364),(-8.926298,12.064278),(-8.921957,12.086033),(-8.906971,12.125359),(-8.906041,12.147735),(-8.913896,12.169853),(-8.927021,12.187371),(-8.942008,12.195846),(-8.961696,12.187991),(-8.97384,12.187578),(-8.981178,12.198946),(-8.984537,12.20892),(-8.994278,12.224836),(-8.99756,12.23481),(-8.995906,12.26101),(-8.985261,12.282094),(-8.972135,12.301989),(-8.963092,12.32421),(-8.964745,12.346276),(-8.975907,12.368704),(-8.993839,12.387514),(-9.027765,12.404751),(-9.147318,12.465493),(-9.189486,12.479239),(-9.265864,12.495207),(-9.278938,12.494432),(-9.286379,12.488128),(-9.294079,12.483787),(-9.308187,12.488955),(-9.321519,12.496603),(-9.327307,12.497533),(-9.33418,12.496293),(-9.350872,12.484614),(-9.398155,12.473245),(-9.412573,12.455365),(-9.407251,12.445133),(-9.392781,12.428028),(-9.374694,12.41113),(-9.347151,12.396454),(-9.344464,12.389012),(-9.343947,12.380124),(-9.339167,12.370616),(-9.331648,12.366326),(-9.314801,12.362657),(-9.309169,12.357748),(-9.308497,12.345036),(-9.312889,12.32607),(-9.321829,12.310206),(-9.335007,12.306537),(-9.331906,12.282766),(-9.336816,12.269846),(-9.360173,12.246644),(-9.421513,12.257082),(-9.438877,12.254757),(-9.479598,12.235843),(-9.495876,12.224681),(-9.515616,12.20706),(-9.628167,12.170163),(-9.66,12.149802),(-9.675141,12.133989),(-9.682273,12.118538),(-9.683668,12.101485),(-9.681549,12.080814),(-9.686303,12.068153),(-9.698809,12.050997),(-9.722994,12.025417),(-9.778649,12.026657),(-9.796632,12.032342),(-9.806813,12.038439),(-9.823039,12.050893),(-9.833684,12.055131),(-9.840661,12.055028),(-9.861125,12.051875),(-9.870736,12.052082),(-9.888772,12.060557),(-9.923188,12.087584),(-9.941533,12.092751),(-9.961584,12.096007),(-9.996414,12.114145),(-10.015534,12.119881),(-10.027678,12.125669),(-10.04809,12.141379),(-10.060182,12.147528),(-10.070363,12.149854),(-10.087002,12.150577),(-10.09034,12.151584),(-10.097286,12.153678),(-10.105192,12.158639),(-10.118835,12.170163),(-10.128654,12.174503),(-10.135837,12.175485),(-10.160228,12.174503),(-10.179762,12.178224),(-10.21485,12.194606),(-10.247819,12.201065),(-10.254124,12.207215),(-10.258981,12.213778),(-10.26694,12.217808),(-10.304974,12.202874),(-10.31102,12.19104),(-10.313603,12.191143),(-10.316497,12.196879),(-10.323577,12.2
01995),(-10.325592,12.199773),(-10.329882,12.19042),(-10.339338,12.185459),(-10.348433,12.184787),(-10.354118,12.18086),(-10.353394,12.166494),(-10.368225,12.171506),(-10.37484,12.174814),(-10.381403,12.180136),(-10.39458,12.17161),(-10.428945,12.143394),(-10.435973,12.135178),(-10.445688,12.120708),(-10.493489,12.124429),(-10.511628,12.118073),(-10.512661,12.108461),(-10.504445,12.088359),(-10.507959,12.084586),(-10.517312,12.080349),(-10.517674,12.070737),(-10.514883,12.060144),(-10.51509,12.052909),(-10.535399,12.038698),(-10.555449,12.029344),(-10.566508,12.017252),(-10.55948,11.995186),(-10.574983,11.990329),(-10.595653,11.97989),(-10.613792,11.967694),(-10.621543,11.957669),(-10.625264,11.943872),(-10.634152,11.925423),(-10.645159,11.90837),(-10.655081,11.898965),(-10.669964,11.89204),(-10.711357,11.890386),(-10.740347,11.919894),(-10.781017,11.99622),(-10.807785,12.022833),(-10.812436,12.033427),(-10.811041,12.04154),(-10.801222,12.059213),(-10.798535,12.068102),(-10.804943,12.095955),(-10.822926,12.110476),(-10.846387,12.12138),(-10.869642,12.138433),(-10.909794,12.200135),(-10.927675,12.212951),(-10.952117,12.219462),(-10.972013,12.218583),(-11.015473,12.204424),(-11.038779,12.205251),(-11.052318,12.202512),(-11.059811,12.192435),(-11.072782,12.149492),(-11.084306,12.136573),(-11.116604,12.113732),(-11.117477,12.112702),(-11.122081,12.107272),(-11.136292,12.085103),(-11.14425,12.081331),(-11.152725,12.080866),(-11.159392,12.077507),(-11.165644,12.04862),(-11.176342,12.029499),(-11.191173,12.01441),(-11.206986,12.010172),(-11.223315,12.009294),(-11.25551,11.996168),(-11.274992,11.996271),(-11.294681,12.003093),(-11.315144,12.013325),(-11.334678,12.026192),(-11.351318,12.040403),(-11.391729,12.093733),(-11.409557,12.107893),(-11.470794,12.134609),(-11.489604,12.151197),(-11.507794,12.191608),(-11.491671,12.220909),(-11.463249,12.249951),(-11.444594,12.289484),(-11.444904,12.312531),(-11.451467,12.352115),(-11.447229,12.374543),(-11.431675,12.382966),(-11.407594,12.381933),(-11.388628,12.384465),(-11.388422,12.403895),(-11.386561,12.430457),(-11.379792,12.456864),(-11.377776,12.480221),(-11.390437,12.497533),(-11.392607,12.491848),(-11.400514,12.479549),(-11.402374,12.488541),(-11.408679,12.504923),(-11.410849,12.513346),(-11.416999,12.529314),(-11.42475,12.526368),(-11.430383,12.515723),(-11.429814,12.508333),(-11.437876,12.515361),(-11.449451,12.535205),(-11.456686,12.539753),(-11.467797,12.543525),(-11.460614,12.549829),(-11.43984,12.559286),(-11.437359,12.570758),(-11.438599,12.577993),(-11.440925,12.584866),(-11.441803,12.595511),(-11.436636,12.627086),(-11.436842,12.637473),(-11.441752,12.647756),(-11.449555,12.653182),(-11.457151,12.655973),(-11.460975,12.658402),(-11.461905,12.673129),(-11.455756,12.680622),(-11.447953,12.685945),(-11.437359,12.712248),(-11.424182,12.716383),(-11.410332,12.718811),(-11.402168,12.73173),(-11.404028,12.741213),(-11.41457,12.759713),(-11.41457,12.764209),(-11.407594,12.77147),(-11.405423,12.782348),(-11.406353,12.794285),(-11.408472,12.804491),(-11.411676,12.812888),(-11.416017,12.820071),(-11.419686,12.824671),(-11.420978,12.825575),(-11.423458,12.826402),(-11.428161,12.827022),(-11.432088,12.829347),(-11.432502,12.835523),(-11.430021,12.841052),(-11.427024,12.845238),(-11.425474,12.849553),(-11.428626,12.865805),(-11.424905,12.876166),(-11.419324,12.886501),(-11.415138,12.896656),(-11.417309,12.901953),(-11.423096,12.907069),(-11.427954,12.912495),(-11.427231,12.918618),(-11.420668,12.922029),(-11.413588,12.919703),(-11.407283,12.916293),(-11.4
02891,12.916396),(-11.394209,12.923321),(-11.387801,12.927119),(-11.384443,12.933114),(-11.386561,12.959107),(-11.388835,12.970553),(-11.393434,12.980811),(-11.401806,12.989492),(-11.404235,12.972103),(-11.411521,12.960709),(-11.423096,12.956006),(-11.438703,12.9589),(-11.435964,12.968848),(-11.433535,12.988356),(-11.433432,12.99696),(-11.430124,13.00099),(-11.424543,13.005202),(-11.422993,13.009414),(-11.431675,13.01347),(-11.435654,13.016726),(-11.43953,13.023005),(-11.445162,13.035226),(-11.45896,13.054682),(-11.456945,13.060444),(-11.451415,13.068067),(-11.450278,13.075095),(-11.46082,13.079177),(-11.477098,13.083285),(-11.489707,13.090778),(-11.501541,13.099589),(-11.515391,13.107754),(-11.533064,13.111811),(-11.538077,13.128011),(-11.538542,13.16297),(-11.547843,13.176199),(-11.559729,13.186922),(-11.569341,13.199195),(-11.572338,13.217101),(-11.568566,13.227902),(-11.555802,13.245911),(-11.554251,13.256866),(-11.55854,13.266504),(-11.581175,13.286141),(-11.605824,13.319653),(-11.613007,13.339058),(-11.613111,13.360813),(-11.620345,13.357532),(-11.632903,13.35438),(-11.643962,13.355594),(-11.646597,13.365516),(-11.645253,13.379753),(-11.649801,13.38412),(-11.674967,13.383215),(-11.694501,13.379184),(-11.70401,13.380373),(-11.714035,13.388099),(-11.719771,13.397891),(-11.726902,13.405798),(-11.741217,13.407813),(-11.756513,13.399803),(-11.76659,13.383164),(-11.772533,13.362674),(-11.775116,13.343347),(-11.817078,13.312341),(-11.822555,13.306708),(-11.828136,13.307277),(-11.840022,13.316889),(-11.844569,13.323839),(-11.850771,13.34314),(-11.854595,13.350556),(-11.862346,13.356085),(-11.888804,13.368875),(-11.897589,13.371175),(-11.899967,13.382182),(-11.885652,13.43577),(-11.883482,13.453547),(-11.890406,13.465148),(-11.898468,13.468507),(-11.907718,13.469644),(-11.918157,13.474708),(-11.923841,13.481142),(-11.932729,13.496051),(-11.939034,13.50295),(-11.996808,13.544161),(-12.015153,13.564444),(-12.037323,13.598267),(-12.049828,13.634363),(-12.070395,13.673249),(-12.083883,13.692964),(-12.097629,13.70441),(-12.012311,13.751177),(-11.978825,13.784819),(-11.962908,13.829467),(-11.956966,13.870007),(-11.956811,13.890342),(-11.962908,13.909152),(-11.980375,13.929642),(-11.99903,13.943569),(-12.015308,13.959304),(-12.025334,13.985246),(-12.02523,14.026742),(-12.002751,14.103456),(-11.995413,14.144022),(-11.997583,14.166165),(-12.03448,14.250268),(-12.044299,14.264919),(-12.058355,14.274091),(-12.097784,14.28807),(-12.10998,14.298146),(-12.119591,14.310575),(-12.127808,14.324295),(-12.113029,14.329592),(-12.108688,14.34202),(-12.115612,14.35512),(-12.208837,14.388761),(-12.22173,14.391242),(-12.21987,14.434598),(-12.222428,14.454933),(-12.234184,14.498574),(-12.236432,14.519451),(-12.230386,14.536685),(-12.211472,14.547976),(-12.204186,14.554255),(-12.201447,14.565184),(-12.200465,14.577354),(-12.198398,14.58756),(-12.191008,14.597999),(-12.171526,14.616422),(-12.165945,14.625181),(-12.165015,14.641872),(-12.172611,14.647918),(-12.182792,14.650244),(-12.18982,14.655954),(-12.192197,14.666884),(-12.190285,14.673653),(-12.185841,14.68081),(-12.180311,14.692825),(-12.191422,14.693445),(-12.207648,14.696598),(-12.222815,14.70329),(-12.230489,14.714555),(-12.236613,14.728456),(-12.256508,14.745794),(-12.26413,14.774939),(-12.246793,14.767007),(-12.224779,14.763441),(-12.203256,14.76618),(-12.160209,14.781011),(-12.138867,14.784525),(-12.123674,14.776644),(-12.081454,14.740032),(-12.070395,14.734399),(-12.062334,14.746259),(-12.057063,14.761942),(-12.049673,14.767213),(-12.03107,14.762821),(-12.01
1071,14.762743),(-11.991486,14.76804),(-11.973967,14.779848),(-11.950299,14.811552),(-11.935985,14.823618),(-11.895729,14.832067),(-11.877487,14.841524),(-11.862605,14.854082),(-11.854698,14.867311),(-11.850822,14.876483),(-11.845603,14.883227),(-11.840487,14.888498),(-11.837076,14.893097),(-11.833097,14.894699),(-11.826069,14.89656),(-11.820385,14.89997),(-11.820437,14.906404),(-11.827361,14.915576),(-11.8301,14.920331),(-11.830927,14.926429),(-11.825811,14.962473),(-11.820592,14.980275),(-11.813202,14.995933),(-11.813202,14.996088),(-11.811083,15.014304),(-11.81377,15.028541),(-11.82116,15.039755),(-11.832839,15.049005),(-11.839195,15.049005),(-11.844466,15.044328),(-11.851029,15.041951),(-11.860899,15.049005),(-11.859142,15.054069),(-11.856765,15.084248),(-11.845138,15.124556),(-11.844208,15.13484),(-11.847308,15.148741),(-11.848187,15.158146),(-11.845655,15.179075),(-11.826793,15.23256),(-11.821057,15.280102),(-11.816923,15.296096),(-11.783488,15.365885),(-11.767675,15.437198),(-11.754859,15.469083),(-11.729434,15.496058),(-11.727057,15.506987),(-11.727057,15.541171),(-11.70174,15.537425),(-11.69755,15.536805),(-11.670833,15.528407),(-11.644117,15.526444),(-11.614868,15.541171),(-11.579211,15.577965),(-11.562571,15.587241),(-11.535958,15.594915),(-11.524279,15.600289),(-11.515391,15.610082),(-11.513117,15.615043),(-11.512342,15.620185),(-11.513117,15.625275),(-11.515391,15.630262),(-11.519576,15.632742),(-11.522987,15.635248),(-11.525623,15.638349),(-11.527328,15.642561),(-11.524382,15.643465),(-11.515391,15.644421),(-11.458236,15.632949),(-11.434827,15.623389),(-11.409557,15.602201),(-11.316901,15.471667),(-11.298195,15.451048),(-11.045393,15.270206),(-11.012734,15.239588),(-10.98519,15.187498),(-10.948035,15.151996),(-10.915169,15.102904),(-10.906229,15.119776),(-10.890881,15.176749),(-10.882251,15.192846),(-10.855328,15.217651),(-10.844062,15.234549),(-10.834399,15.271395),(-10.827216,15.287698),(-10.811041,15.303124),(-10.800189,15.306845),(-10.790112,15.306845),(-10.78174,15.308421),(-10.770682,15.324673),(-10.754507,15.333639),(-10.746807,15.341132),(-10.738901,15.362164),(-10.740141,15.403583),(-10.736368,15.422626),(-10.725361,15.433064),(-10.709187,15.433865),(-10.64888,15.424874),(-10.607126,15.424434),(-10.587385,15.427251),(-10.584746,15.428127),(-10.551522,15.439162),(-10.534985,15.441694),(-10.515503,15.437741),(-10.500414,15.440351),(-10.398405,15.438697),(-10.370758,15.433504),(-10.322647,15.437095),(-10.308746,15.435209),(-10.223221,15.402033),(-10.214488,15.401981),(-10.196091,15.405288),(-10.188391,15.40472),(-10.167979,15.395185),(-10.131599,15.372629),(-10.108293,15.365885),(-10.067624,15.364748),(-9.951403,15.381207),(-9.835596,15.371104),(-9.797356,15.380303),(-9.740719,15.4126),(-9.720823,15.42092),(-9.672118,15.430661),(-9.42389,15.440532),(-9.435879,15.495903),(-9.435879,15.495954),(-9.451847,15.559206),(-9.451692,15.588326),(-9.437016,15.614888),(-9.375935,15.685917),(-9.356453,15.697854),(-9.332423,15.687571),(-9.329322,15.655247),(-9.342965,15.586931),(-9.349218,15.495644),(-9.175843,15.495593),(-9.141379,15.495581),(-9.018592,15.495541),(-9.003216,15.495526),(-8.911157,15.495438),(-8.843409,15.495438),(-8.769253,15.495438),(-8.68931,15.495438),(-8.60394,15.495386),(-8.470202,15.495283),(-8.327006,15.495283),(-8.175801,15.495205),(-8.017826,15.495128),(-7.854529,15.495076),(-7.6872,15.495024),(-7.517237,15.494973),(-7.345878,15.494921),(-7.174519,15.494818),(-7.004503,15.494818),(-6.837175,15.49474),(-6.673929,15.494663),(-6.516006,15.494611),(-6.364749,15.4
94559),(-6.320145,15.494543),(-6.221605,15.494508),(-6.087815,15.494456),(-6.010817,15.494378),(-5.938212,15.494353),(-5.870257,15.494353),(-5.807315,15.494353),(-5.673163,15.494301),(-5.515964,15.494249),(-5.515602,15.494146),(-5.515292,15.494042),(-5.514982,15.493913),(-5.514672,15.493836),(-5.514362,15.493732),(-5.514051,15.493629),(-5.513741,15.493526),(-5.51338,15.493422),(-5.513121,15.493422),(-5.512966,15.493422),(-5.51276,15.493422),(-5.512553,15.493422),(-5.512294,15.493422),(-5.512191,15.493422),(-5.511933,15.493422),(-5.511726,15.493422),(-5.511623,15.493526),(-5.511519,15.493681),(-5.511364,15.493732),(-5.511313,15.493887),(-5.511261,15.493991),(-5.511106,15.494042),(-5.510951,15.494197),(-5.510951,15.494301),(-5.510951,15.494508),(-5.510951,15.494663),(-5.510951,15.494818),(-5.510951,15.495076),(-5.510951,15.495283),(-5.510951,15.495438),(-5.510951,15.49567),(-5.510951,15.495903),(-5.509917,15.500812),(-5.508987,15.505695),(-5.508761,15.5069),(-5.508057,15.510656),(-5.507075,15.51554),(-5.488368,15.612666),(-5.469558,15.709792),(-5.4508,15.80684),(-5.431989,15.90394),(-5.413799,15.998559),(-5.395454,16.093179),(-5.377212,16.187799),(-5.358919,16.282366),(-5.353286,16.311977),(-5.354268,16.318385),(-5.355301,16.324793),(-5.360107,16.329909),(-5.364913,16.334921),(-5.404963,16.36169),(-5.445942,16.389027),(-5.485371,16.415433),(-5.529709,16.444992),(-5.566503,16.469487),(-5.60588,16.495893),(-5.614562,16.511861),(-5.623347,16.527829),(-5.627688,16.568137),(-5.63761,16.658364),(-5.647583,16.748643),(-5.657454,16.838818),(-5.667324,16.929097),(-5.677297,17.019324),(-5.687168,17.109499),(-5.697038,17.199726),(-5.70696,17.289954),(-5.716933,17.380181),(-5.726803,17.470356),(-5.736674,17.560583),(-5.746647,17.650758),(-5.756466,17.741037),(-5.766387,17.831264),(-5.776361,17.92144),(-5.78618,18.011667),(-5.79636,18.104374),(-5.80654,18.197082),(-5.816772,18.289892),(-5.826952,18.382548),(-5.837081,18.475333),(-5.847364,18.568067),(-5.857545,18.6608),(-5.867673,18.753585),(-5.877802,18.846318),(-5.888034,18.939052),(-5.898214,19.031785),(-5.908394,19.12457),(-5.914802,19.182887),(-5.92121,19.24123),(-5.927566,19.299547),(-5.934026,19.357941),(-5.949115,19.495478),(-5.949115,19.495529),(-5.949115,19.495581),(-5.949115,19.495633),(-5.963533,19.620612),(-5.977847,19.745618),(-5.992162,19.870597),(-6.00658,19.995525),(-6.020946,20.12053),(-6.035363,20.245484),(-6.049678,20.370489),(-6.064095,20.495443),(-6.076756,20.603343),(-6.083577,20.661892),(-6.098615,20.790722),(-6.113705,20.919551),(-6.120526,20.978204),(-6.134634,21.098868),(-6.148741,21.219533),(-6.162901,21.340197),(-6.177008,21.460862),(-6.191116,21.581578),(-6.205224,21.702242),(-6.219383,21.822855),(-6.233491,21.943571),(-6.247598,22.064236),(-6.261706,22.184952),(-6.275865,22.305616),(-6.289973,22.426281),(-6.304132,22.546997),(-6.318188,22.667661),(-6.332296,22.7883),(-6.346455,22.908965),(-6.360563,23.029655),(-6.371634,23.123981),(-6.374722,23.150293),(-6.388778,23.270958),(-6.402938,23.391674),(-6.417045,23.512339),(-6.431205,23.633029),(-6.445312,23.753719),(-6.45942,23.874409),(-6.466603,23.93624),(-6.482571,24.072304),(-6.498436,24.208368),(-6.505671,24.270277),(-6.506338,24.275967),(-6.515851,24.357067),(-6.525514,24.436727),(-6.535178,24.51636),(-6.54479,24.595993),(-6.554453,24.675549),(-6.564168,24.755234),(-6.57378,24.834868),(-6.583392,24.914424),(-6.593107,24.994134),(-6.371674,24.994238),(-6.150188,24.994341),(-5.928755,24.994444),(-5.707321,24.9946),(-5.485939,24.994703),(-5.264454,24.994806),(-5.043072,24.9949
1),(-4.821613,24.995065),(-4.821536,24.994961),(-4.821355,24.994806),(-4.821226,24.994755)] +Montenegro [(19.054991,43.5067),(19.076386,43.507242),(19.096023,43.512901),(19.122791,43.535948),(19.147699,43.538145),(19.195345,43.532796),(19.175191,43.509619),(19.175914,43.480887),(19.192244,43.454532),(19.218392,43.438202),(19.355645,43.393063),(19.372285,43.384226),(19.414039,43.338389),(19.473054,43.293276),(19.485146,43.280098),(19.502303,43.251935),(19.512431,43.240643),(19.547845,43.218815),(19.549948,43.217518),(19.580851,43.187804),(19.598111,43.176203),(19.618678,43.168245),(19.663016,43.158504),(19.684617,43.156721),(19.69733,43.160209),(19.705494,43.166178),(19.713556,43.165609),(19.742805,43.126516),(19.761202,43.108739),(19.781769,43.096466),(19.805127,43.089955),(19.838303,43.088353),(19.872203,43.090368),(19.883778,43.095691),(19.907136,43.113261),(19.916644,43.117059),(19.92915,43.113907),(19.936591,43.105897),(19.942276,43.095639),(19.949821,43.085924),(19.959742,43.07856),(20.019894,43.047348),(20.054388,43.022221),(20.056344,43.020796),(20.116942,42.976654),(20.129464,42.973605),(20.14092,42.970815),(20.19456,42.966836),(20.223189,42.957663),(20.275589,42.929887),(20.337084,42.906969),(20.35362,42.890975),(20.355171,42.86617),(20.345352,42.827439),(20.264427,42.817258),(20.226083,42.806768),(20.217608,42.802737),(20.209236,42.791704),(20.208409,42.782093),(20.20996,42.772972),(20.208409,42.763282),(20.183398,42.742508),(20.149498,42.749872),(20.112085,42.766538),(20.076325,42.773437),(20.065059,42.769458),(20.055638,42.763866),(20.034673,42.751423),(20.026508,42.743206),(20.024751,42.723414),(20.03612,42.707989),(20.037006,42.707363),(20.094205,42.666983),(20.101956,42.656674),(20.10392,42.653108),(20.090587,42.627348),(20.079218,42.61125),(20.075704,42.603085),(20.075394,42.586962),(20.078185,42.572906),(20.077048,42.55991),(20.064956,42.546758),(20.039221,42.557765),(20.017723,42.546241),(19.981757,42.510765),(19.956332,42.505314),(19.907549,42.506399),(19.882745,42.493609),(19.882745,42.493557),(19.882538,42.493402),(19.873339,42.486839),(19.835512,42.470251),(19.829063,42.468723),(19.819596,42.466479),(19.801406,42.468132),(19.784456,42.474566),(19.75159,42.493402),(19.73433,42.524382),(19.730919,42.533658),(19.730299,42.540428),(19.732159,42.555543),(19.733916,42.562597),(19.741358,42.574405),(19.746009,42.579934),(19.747766,42.578901),(19.746885,42.58893),(19.746009,42.5989),(19.737534,42.624402),(19.722134,42.64608),(19.699293,42.654814),(19.676039,42.646675),(19.658037,42.634613),(19.647927,42.627838),(19.621779,42.604997),(19.605035,42.584844),(19.599248,42.571046),(19.593667,42.54482),(19.588086,42.532857),(19.575683,42.522341),(19.561317,42.516243),(19.549328,42.508595),(19.543747,42.493557),(19.543747,42.493402),(19.531655,42.474773),(19.517806,42.45803),(19.501476,42.444129),(19.481735,42.434491),(19.468403,42.41811),(19.417243,42.374107),(19.412076,42.368397),(19.402671,42.352015),(19.400707,42.344755),(19.401327,42.331061),(19.400397,42.325893),(19.304589,42.215099),(19.274926,42.191276),(19.272033,42.180682),(19.281956,42.164723),(19.282058,42.164559),(19.297767,42.151666),(19.355025,42.12004),(19.369998,42.106507),(19.372491,42.104253),(19.374869,42.094667),(19.356575,42.064798),(19.351718,42.047615),(19.350581,42.027823),(19.354198,42.008703),(19.363396,41.993639),(19.363707,41.993484),(19.371045,41.98656),(19.365774,41.969713),(19.359779,41.965993),(19.351614,41.965166),(19.346447,41.961548),(19.352854,41.938501),(19.350477,41.931369),(19.346653,41.926176)
,(19.345413,41.921189),(19.34593,41.914471),(19.34531,41.910337),(19.347687,41.906358),(19.356989,41.89964),(19.364947,41.888995),(19.364223,41.862846),(19.365122,41.852372),(19.365082,41.852362),(19.346934,41.863959),(19.310232,41.894517),(19.293224,41.900784),(19.220225,41.91885),(19.202973,41.927069),(19.178477,41.933905),(19.170258,41.937974),(19.162283,41.947943),(19.161957,41.953437),(19.163259,41.95897),(19.16033,41.96898),(19.15561,41.970649),(19.145274,41.977607),(19.13795,41.985297),(19.142914,41.988837),(19.143565,41.993069),(19.139171,42.042426),(19.08253,42.081204),(19.074718,42.095649),(19.083832,42.110093),(19.076427,42.115953),(19.063975,42.120185),(19.048676,42.140204),(19.01059,42.139553),(19.002126,42.150214),(18.999848,42.161078),(18.994802,42.163642),(18.988048,42.163723),(18.981619,42.16706),(18.968028,42.188707),(18.953461,42.199856),(18.913341,42.222235),(18.901703,42.237494),(18.894705,42.270494),(18.885997,42.284369),(18.871593,42.290473),(18.854015,42.290473),(18.838878,42.285712),(18.831391,42.277533),(18.826508,42.282213),(18.820486,42.286689),(18.817638,42.291164),(18.801524,42.284369),(18.793956,42.280341),(18.788748,42.276028),(18.786388,42.274115),(18.778575,42.27147),(18.771983,42.276923),(18.762462,42.291164),(18.729503,42.311591),(18.714529,42.325263),(18.714122,42.338935),(18.708344,42.342475),(18.699718,42.34984),(18.693614,42.353217),(18.693614,42.359442),(18.695811,42.373765),(18.678477,42.385972),(18.658539,42.386623),(18.653168,42.366278),(18.646495,42.366278),(18.638438,42.370307),(18.61671,42.368069),(18.612315,42.36994),(18.607107,42.37641),(18.5949,42.383734),(18.580903,42.389838),(18.570649,42.392971),(18.573009,42.39643),(18.57309,42.397528),(18.574067,42.398261),(18.578136,42.400377),(18.552989,42.413642),(18.545177,42.42357),(18.550141,42.435126),(18.562836,42.438544),(18.644949,42.418362),(18.656505,42.415513),(18.685557,42.403998),(18.70045,42.392971),(18.708181,42.408515),(18.706554,42.421047),(18.683849,42.458808),(18.683279,42.468085),(18.687348,42.482896),(18.654145,42.451809),(18.608653,42.44538),(18.502452,42.455634),(18.502452,42.448188),(18.521251,42.437974),(18.51295,42.409003),(18.529796,42.400377),(18.510265,42.40526),(18.49643,42.416327),(18.497816,42.431158),(18.492855,42.442372),(18.475492,42.449813),(18.467741,42.453379),(18.454925,42.464722),(18.444279,42.477925),(18.437148,42.493402),(18.436425,42.510765),(18.442006,42.543012),(18.437355,42.559212),(18.44769,42.566214),(18.458956,42.569728),(18.470531,42.569056),(18.481797,42.563682),(18.492132,42.564767),(18.495852,42.570865),(18.493682,42.579831),(18.486654,42.58983),(18.506798,42.59849),(18.517247,42.602982),(18.538434,42.618304),(18.542832,42.626429),(18.549596,42.638923),(18.550009,42.668069),(18.539674,42.695457),(18.522518,42.71189),(18.502157,42.727445),(18.46743,42.769199),(18.453891,42.793151),(18.444589,42.817078),(18.443659,42.834467),(18.453478,42.845655),(18.465984,42.852812),(18.473322,42.862553),(18.467637,42.881699),(18.443969,42.916865),(18.434047,42.936889),(18.433531,42.954201),(18.452754,42.993398),(18.483037,43.014637),(18.538951,43.023887),(18.598379,43.024455),(18.638996,43.020243),(18.621116,43.09644),(18.620599,43.122563),(18.621128,43.124578),(18.629074,43.154886),(18.645301,43.180182),(18.688399,43.224469),(18.664318,43.233176),(18.679511,43.24948),(18.807151,43.318029),(18.833506,43.324153),(18.830302,43.328157),(18.824618,43.337408),(18.821207,43.341387),(18.839604,43.34782),(18.899135,43.351903),(18.923423,43.346838),(18.950295,43.333067),(18
.957323,43.325858),(18.957943,43.318985),(18.957116,43.311828),(18.959493,43.303456),(18.968589,43.292216),(18.988949,43.272941),(18.989506,43.272016),(18.992463,43.267102),(19.002902,43.273949),(19.01086,43.28232),(19.017164,43.291157),(19.022535,43.296546),(19.024709,43.298728),(19.036285,43.303327),(19.062536,43.304386),(19.069668,43.30883),(19.039592,43.350714),(19.00993,43.410995),(18.975306,43.444274),(18.968278,43.448098),(18.946264,43.443938),(18.946368,43.44908),(18.950708,43.457297),(18.951225,43.463808),(18.937893,43.478949),(18.930658,43.482282),(18.915258,43.485383),(18.905647,43.490628),(18.90513,43.499051),(18.911021,43.507268),(18.920529,43.512203),(18.938926,43.518947),(18.962697,43.5383),(18.977477,43.546232),(19.000214,43.547886),(19.01427,43.537783),(19.025019,43.523029),(19.038145,43.511015),(19.054991,43.5067)] +Mongolia [(100.005968,51.731727),(100.098262,51.738652),(100.210296,51.726224),(100.510898,51.726895),(100.571101,51.704804),(100.638901,51.69204),(101.069469,51.553444),(101.117011,51.527838),(101.158766,51.521223),(101.200727,51.520913),(101.241551,51.515177),(101.279688,51.492595),(101.315448,51.463604),(101.350175,51.450452),(101.387795,51.450556),(101.482053,51.47288),(101.526392,51.475826),(101.569852,51.470089),(101.643697,51.450091),(101.696614,51.452442),(101.723692,51.450582),(101.742192,51.444845),(101.795316,51.419395),(101.821774,51.413581),(101.901873,51.417069),(101.920683,51.413039),(101.955616,51.393789),(101.97422,51.387226),(102.051217,51.383609),(102.074679,51.374824),(102.091267,51.364644),(102.183871,51.323897),(102.193379,51.307205),(102.179272,51.286431),(102.135295,51.240284),(102.133796,51.230853),(102.139688,51.220931),(102.147646,51.202638),(102.150023,51.185404),(102.148783,51.16879),(102.142168,51.134839),(102.146509,51.097632),(102.165371,51.057556),(102.191984,51.020582),(102.219373,50.992651),(102.229656,50.979267),(102.231051,50.964901),(102.225574,50.950741),(102.215238,50.938184),(102.239475,50.912294),(102.231878,50.88475),(102.212758,50.856742),(102.203043,50.829353),(102.216789,50.798864),(102.245676,50.778762),(102.307533,50.748221),(102.329392,50.718817),(102.320245,50.696131),(102.296836,50.674944),(102.275907,50.650191),(102.271566,50.620942),(102.282418,50.590608),(102.302933,50.563892),(102.327635,50.545546),(102.372697,50.533661),(102.471864,50.524927),(102.513101,50.503688),(102.586585,50.41527),(102.617798,50.399302),(102.619529,50.399009),(102.762595,50.374807),(102.887135,50.314966),(102.927391,50.302977),(102.974107,50.295639),(103.130015,50.309127),(103.20169,50.296983),(103.240189,50.244376),(103.253108,50.214972),(103.276776,50.197557),(103.306593,50.190426),(103.407104,50.197299),(103.43873,50.192596),(103.473973,50.182003),(103.533918,50.153787),(103.601407,50.13353),(103.66807,50.131205),(103.785272,50.186137),(103.845423,50.18469),(103.973994,50.148723),(104.03952,50.141178),(104.121685,50.148258),(104.198115,50.170014),(104.247466,50.206601),(104.268963,50.228873),(104.321363,50.254298),(104.349502,50.271868),(104.377225,50.28918),(104.405802,50.300548),(104.474119,50.313261),(104.576955,50.309488),(104.610441,50.314191),(104.632455,50.323751),(104.671884,50.348297),(104.697051,50.353207),(104.789552,50.352638),(104.81849,50.358323),(104.879004,50.383902),(104.901586,50.389794),(104.934246,50.393256),(105.050104,50.383489),(105.078216,50.385143),(105.10576,50.390052),(105.13134,50.398062),(105.240739,50.458006),(105.278462,50.472579),(105.32864,50.476455),(105.639371,50.421936),(105.779259,50.428913),(1
05.806544,50.424365),(105.854706,50.410929),(105.904212,50.403798),(105.958059,50.403591),(105.984879,50.399767),(106.038933,50.371397),(106.043274,50.364782),(106.043274,50.355532),(106.044462,50.346024),(106.048183,50.338117),(106.055934,50.33357),(106.135309,50.32711),(106.160941,50.319617),(106.201197,50.299722),(106.222281,50.292384),(106.244863,50.290265),(106.439787,50.327937),(106.548204,50.335792),(106.656518,50.327007),(106.746176,50.307421),(106.788293,50.291505),(106.936449,50.209133),(106.973656,50.196369),(106.982699,50.187119),(106.997996,50.149808),(107.006057,50.140196),(107.026211,50.125727),(107.033601,50.118802),(107.039233,50.10826),(107.048432,50.084644),(107.057113,50.074102),(107.070291,50.066247),(107.100728,50.056791),(107.114629,50.050486),(107.164497,50.01886),(107.192196,50.006044),(107.224907,49.997053),(107.254879,49.998913),(107.280407,50.007595),(107.30485,50.010023),(107.364071,49.976124),(107.729321,49.971938),(107.744604,49.967353),(107.750508,49.965582),(107.78694,49.948218),(107.80792,49.943878),(107.839185,49.946565),(107.850967,49.946048),(107.898199,49.935506),(107.946517,49.933491),(107.95649,49.923362),(107.952614,49.897472),(107.944398,49.880419),(107.935509,49.866621),(107.930032,49.851429),(107.932357,49.830293),(107.950031,49.782492),(107.955767,49.745544),(107.954836,49.732779),(107.949359,49.720635),(107.926828,49.68286),(107.927448,49.670044),(107.939489,49.663223),(107.962898,49.661104),(108.002792,49.663171),(108.015815,49.65573),(108.007546,49.633612),(108.005273,49.617799),(108.015711,49.602245),(108.032661,49.589326),(108.106197,49.554341),(108.126454,49.547158),(108.209911,49.540233),(108.23637,49.532482),(108.250529,49.522405),(108.259521,49.509641),(108.267272,49.495843),(108.277142,49.48251),(108.334865,49.436363),(108.473668,49.35642),(108.538057,49.32743),(108.569973,49.325719),(108.60844,49.323657),(108.751067,49.341175),(108.858967,49.340555),(108.930488,49.348824),(109.032962,49.327895),(109.058025,49.329083),(109.160603,49.346653),(109.286693,49.33854),(109.311911,49.332545),(109.336303,49.323347),(109.390976,49.309291),(109.411233,49.303193),(109.450714,49.305932),(109.462083,49.300454),(109.4657,49.29446),(109.469834,49.276632),(109.474382,49.269138),(109.491849,49.262782),(109.51624,49.255909),(109.728527,49.266451),(109.784957,49.251362),(109.81772,49.227746),(109.83219,49.226867),(109.913115,49.21369),(110.181419,49.161962),(110.225034,49.165372),(110.268339,49.177051),(110.30937,49.195861),(110.34544,49.220614),(110.361046,49.234619),(110.371175,49.24175),(110.382234,49.240045),(110.43329,49.202631),(110.451273,49.194569),(110.508221,49.184337),(110.603615,49.145322),(110.644957,49.136743),(110.688055,49.134521),(110.731359,49.137674),(110.841017,49.162427),(110.965557,49.207385),(111.13857,49.291566),(111.273549,49.32283),(111.291739,49.330788),(111.320988,49.355335),(111.338454,49.36474),(111.362742,49.367634),(111.385066,49.362259),(111.429611,49.346395),(111.454519,49.343087),(111.479117,49.343346),(111.503509,49.347067),(111.616473,49.386444),(111.661019,49.396366),(112.030195,49.411869),(112.126405,49.43994),(112.432238,49.529174),(112.473269,49.534135),(112.58427,49.526332),(112.671396,49.495946),(112.701472,49.491295),(112.733718,49.492794),(112.777126,49.501372),(112.868077,49.531655),(112.926161,49.566536),(112.947348,49.576096),(112.969776,49.583021),(112.992824,49.587155),(113.043673,49.588602),(113.06083,49.595992),(113.071578,49.617748),(113.074162,49.653353),(113.077159,49.669011),(113.087495,49.687408),(1
13.156741,49.777376),(113.182683,49.801974),(113.212448,49.822025),(113.415537,49.92238),(113.442305,49.946048),(113.458532,49.957262),(113.510415,49.981033),(113.527881,49.992815),(113.579661,50.019945),(113.75226,50.078598),(113.773138,50.081647),(113.815822,50.076893),(113.836803,50.076944),(113.849825,50.080717),(113.973125,50.160505),(113.993279,50.16867),(114.035137,50.176938),(114.055291,50.183811),(114.118233,50.224377),(114.200088,50.25621),(114.286285,50.276881),(114.333,50.272333),(114.424984,50.241844),(114.473043,50.234041),(114.659699,50.251404),(114.75375,50.236211),(114.997352,50.144331),(115.015232,50.132858),(115.028048,50.119888),(115.049649,50.090845),(115.063395,50.077513),(115.210156,49.97168),(115.368699,49.895405),(115.387819,49.891064),(115.450141,49.891426),(115.472982,49.887085),(115.508536,49.886775),(115.578195,49.893648),(115.683409,49.87768),(115.716275,49.877784),(115.750898,49.884967),(116.053619,49.998448),(116.135371,50.014416),(116.21764,50.013848),(116.300529,49.99297),(116.575447,49.92176),(116.617202,49.897317),(116.653375,49.863831),(116.684278,49.823265),(116.472921,49.516462),(116.457728,49.492794),(116.247767,49.181469),(116.037806,48.870145),(116.033568,48.852368),(116.047624,48.817693),(116.048038,48.799761),(116.028194,48.767464),(115.800404,48.530372),(115.791206,48.513836),(115.786555,48.49301),(115.786555,48.473011),(115.809292,48.273954),(115.799991,48.242328),(115.770845,48.224086),(115.536441,48.149155),(115.51422,48.131637),(115.51453,48.122103),(115.545846,47.992989),(115.561866,47.933199),(115.575715,47.909738),(115.5999,47.887414),(115.852701,47.705565),(115.914506,47.683912),(115.9729,47.709053),(116.084314,47.806928),(116.181569,47.849587),(116.243891,47.862894),(116.428893,47.835143),(116.49969,47.836358),(116.751664,47.874133),(116.822047,47.876304),(116.85326,47.872118),(116.972839,47.83765),(117.047356,47.81946),(117.069681,47.810158),(117.317107,47.654069),(117.360826,47.650865),(117.399893,47.685669),(117.43648,47.730499),(117.47286,47.757241),(117.527224,47.78649),(117.741681,47.977589),(117.766899,47.993144),(117.815165,48.004823),(117.972778,47.997898),(117.997169,47.999991),(118.064039,48.019577),(118.122536,48.027509),(118.182481,48.028206),(118.206252,48.023297),(118.254311,48.002859),(118.280046,47.998131),(118.472799,47.989449),(118.542252,47.966246),(118.605246,47.914673),(118.713301,47.793621),(118.767406,47.756156),(118.972614,47.695126),(119.059585,47.673474),(119.08346,47.661562),(119.096947,47.646602),(119.104233,47.62774),(119.116532,47.556039),(119.122113,47.53896),(119.133896,47.527539),(119.157564,47.517669),(119.251046,47.492942),(119.291044,47.48439),(119.310112,47.476095),(119.31776,47.462117),(119.312593,47.452247),(119.288822,47.429147),(119.281897,47.414859),(119.320654,47.40703),(119.475994,47.309206),(119.49067,47.297166),(119.503899,47.279182),(119.512994,47.262672),(119.525086,47.250812),(119.587356,47.238358),(119.629421,47.216086),(119.699959,47.159526),(119.728898,47.128649),(119.753961,47.094439),(119.760524,47.074595),(119.761713,47.015478),(119.764968,47.003825),(119.767294,46.998114),(119.770136,46.992998),(119.784295,46.978322),(119.81742,46.954913),(119.831683,46.942149),(119.83685,46.93357),(119.839692,46.925664),(119.843516,46.918171),(119.852301,46.910729),(119.862947,46.909024),(119.879225,46.908146),(119.892919,46.90551),(119.896433,46.898586),(119.890439,46.88825),(119.884961,46.880912),(119.881447,46.872851),(119.88124,46.860552),(119.883101,46.855436),(119.886821,46.851457),(119.8916
27,46.848485),(119.896795,46.84647),(119.902893,46.842413),(119.902118,46.837349),(119.898914,46.831251),(119.897983,46.824378),(119.898655,46.802054),(119.885891,46.770066),(119.883101,46.753013),(119.887183,46.74428),(119.903358,46.7264),(119.907027,46.718312),(119.904185,46.708933),(119.897983,46.701647),(119.890335,46.694774),(119.872662,46.673225),(119.85442,46.65966),(119.834421,46.650952),(119.817265,46.651624),(119.796284,46.658833),(119.782228,46.655629),(119.770033,46.643562),(119.754426,46.62439),(119.740267,46.613254),(119.719648,46.602712),(119.697789,46.594883),(119.680116,46.591628),(119.658773,46.596175),(119.623478,46.616329),(119.602188,46.619688),(119.575884,46.619171),(119.501263,46.628008),(119.452946,46.627491),(119.363287,46.613047),(119.316934,46.611678),(119.062376,46.660951),(119.036951,46.668729),(119.014058,46.683301),(118.984913,46.724539),(118.967239,46.740456),(118.947034,46.737148),(118.922074,46.715496),(118.907088,46.709398),(118.890551,46.713532),(118.879648,46.72733),(118.875203,46.74397),(118.867452,46.758336),(118.846523,46.765493),(118.816241,46.758181),(118.792211,46.737768),(118.753816,46.691156),(118.720071,46.676739),(118.684621,46.683457),(118.647414,46.697719),(118.60938,46.705781),(118.595169,46.703042),(118.567729,46.689735),(118.551967,46.685756),(118.472799,46.685679),(118.435179,46.691156),(118.371203,46.713971),(118.335547,46.722162),(118.30082,46.724849),(118.268987,46.722937),(118.238291,46.715393),(118.206045,46.700923),(118.169045,46.678496),(118.155506,46.674568),(118.101969,46.67307),(118.083882,46.668264),(118.005541,46.626974),(117.972778,46.623409),(117.923789,46.615192),(117.900638,46.607596),(117.878313,46.596382),(117.859297,46.580285),(117.832735,46.542793),(117.817025,46.527471),(117.797595,46.520262),(117.696206,46.512253),(117.673365,46.516025),(117.625202,46.535662),(117.607529,46.550545),(117.582001,46.591989),(117.568151,46.603539),(117.556266,46.601601),(117.510067,46.578476),(117.501799,46.577261),(117.491774,46.579096),(117.47286,46.584677),(117.393795,46.57137),(117.410125,46.539796),(117.412502,46.524629),(117.405474,46.510806),(117.369197,46.475691),(117.357518,46.448406),(117.353074,46.379496),(117.337468,46.357275),(117.301708,46.350144),(116.957853,46.366422),(116.831659,46.386369),(116.814606,46.38673),(116.804581,46.382906),(116.78484,46.365905),(116.722002,46.328698),(116.707325,46.324228),(116.64521,46.318466),(116.603559,46.309319),(116.581959,46.298012),(116.568316,46.290871),(116.548265,46.259296),(116.539584,46.24121),(116.525114,46.226792),(116.357166,46.105559),(116.213195,45.908155),(116.207098,45.88087),(116.220947,45.849761),(116.242134,45.82692),(116.251643,45.803252),(116.230352,45.769766),(116.169891,45.709097),(116.143949,45.694628),(116.094856,45.676541),(116.078733,45.673389),(116.063334,45.673337),(116.011967,45.678557),(116.002149,45.675559),(115.982822,45.659643),(115.767331,45.530142),(115.6924,45.468957),(115.667389,45.454281),(115.63783,45.444359),(115.472982,45.412785),(115.33728,45.394698),(115.126337,45.39444),(114.938855,45.373769),(114.905265,45.377748),(114.737317,45.425032),(114.703313,45.427151),(114.66931,45.42281),(114.533711,45.3855),(114.52379,45.38028),(114.519449,45.370927),(114.511697,45.325503),(114.502292,45.303231),(114.473043,45.259202),(114.418886,45.20148),(114.211767,45.06216),(114.163605,45.040146),(114.149342,45.030069),(114.139214,45.019165),(114.120403,44.993224),(114.089294,44.961753),(114.055291,44.940565),(114.01705,44.927285),(113.973125,44.919326),(113.91380
1,44.915451),(113.889306,44.908268),(113.83763,44.867908),(113.735621,44.815922),(113.672472,44.768173),(113.635058,44.746262),(113.604776,44.739699),(113.532429,44.74528),(113.517133,44.749518),(113.488917,44.766261),(113.473208,44.770705),(113.104858,44.794373),(113.016905,44.82109),(112.973186,44.821245),(112.748497,44.865273),(112.614242,44.908836),(112.586027,44.923822),(112.564529,44.943924),(112.523912,44.993275),(112.441333,45.052393),(112.412807,45.066036),(112.379631,45.070738),(112.107606,45.067379),(112.011488,45.087482),(111.984637,45.087154),(111.973351,45.087017),(111.958261,45.084536),(111.947926,45.077715),(111.938521,45.068981),(111.926739,45.06092),(111.906275,45.05415),(111.863177,45.046295),(111.843333,45.039526),(111.786489,45.008572),(111.760651,44.989296),(111.738533,44.9663),(111.551361,44.69319),(111.541026,44.668231),(111.540096,44.652211),(111.544436,44.618105),(111.543816,44.601258),(111.538235,44.58343),(111.530174,44.570614),(111.508986,44.546584),(111.483768,44.506432),(111.473226,44.495321),(111.470849,44.493306),(111.446871,44.475891),(111.424134,44.448037),(111.406357,44.416463),(111.397055,44.387369),(111.396435,44.346597),(111.410595,44.326081),(111.473226,44.298848),(111.49059,44.282621),(111.498134,44.264431),(111.503715,44.225725),(111.519425,44.18857),(111.543816,44.156737),(111.634457,44.065942),(111.66815,44.041344),(111.780701,43.986205),(111.811397,43.964294),(111.838785,43.938792),(111.930563,43.820143),(111.944309,43.786372),(111.947926,43.756968),(111.945136,43.72387),(111.933353,43.696636),(111.910409,43.684983),(111.893252,43.682606),(111.845813,43.666922),(111.77264,43.669299),(111.753829,43.663202),(111.59725,43.523391),(111.567691,43.50254),(111.535031,43.490732),(111.428681,43.483936),(111.401396,43.475513),(111.338558,43.438177),(111.096092,43.366656),(110.973412,43.3156),(110.933724,43.287772),(110.796989,43.148918),(110.680096,43.057657),(110.66573,43.0374),(110.641029,42.987791),(110.608163,42.944667),(110.566718,42.91056),(110.473391,42.854905),(110.449413,42.837335),(110.424815,42.785245),(110.406728,42.768605),(110.139871,42.682615),(110.105661,42.666673),(110.093156,42.657862),(110.084267,42.650059),(110.074862,42.643522),(110.060186,42.638355),(110.042719,42.635952),(109.987322,42.636468),(109.912701,42.62882),(109.667342,42.54898),(109.518307,42.467331),(109.510349,42.464748),(109.504871,42.464928),(109.500013,42.463327),(109.493812,42.455213),(109.490092,42.451286),(109.485131,42.449296),(109.328551,42.440486),(109.280285,42.428316),(109.253414,42.425939),(108.994463,42.449529),(108.961287,42.447539),(108.862688,42.40886),(108.82853,42.400049),(108.796852,42.398369),(108.536455,42.435111),(108.319775,42.432553),(108.234768,42.450072),(108.177355,42.454309),(108.04968,42.43912),(107.622396,42.388286),(107.195113,42.337453),(106.767829,42.286619),(106.318218,42.140052),(105.868607,41.993484),(105.473851,41.840341),(105.402744,41.811532),(105.358199,41.785848),(105.344557,41.78112),(105.325953,41.776702),(105.274483,41.755463),(105.218259,41.748435),(105.200896,41.743474),(105.014809,41.596144),(104.97383,41.586145),(104.935434,41.622422),(104.913678,41.638338),(104.892336,41.644772),(104.601243,41.645444),(104.504918,41.656554),(104.497993,41.665778),(104.500784,41.870598),(104.110885,41.813082),(103.720986,41.755566),(103.691427,41.759235),(103.398112,41.876411),(103.104797,41.993588),(103.104435,41.993639),(103.104383,41.993639),(103.073481,42.004517),(102.727042,42.064548),(102.380603,42.124579),(102.034164,42.18461),(101.974
22,42.20789),(101.886473,42.278015),(101.735371,42.461156),(101.637599,42.515442),(101.524945,42.537456),(101.41074,42.544872),(101.21871,42.529834),(100.818132,42.578729),(100.417553,42.627623),(100.016975,42.676518),(99.971551,42.676285),(99.621903,42.597443),(99.474476,42.564199),(99.068355,42.60391),(98.662234,42.643622),(98.256113,42.683333),(97.849992,42.723045),(97.521632,42.755152),(97.193271,42.78726),(97.19068,42.787061),(96.832669,42.759604),(96.474659,42.732147),(96.379264,42.720546),(96.36634,42.722923),(96.357767,42.724499),(96.350635,42.740906),(96.337199,42.86648),(96.330688,42.889709),(96.318699,42.910069),(96.294308,42.933272),(95.908492,43.214547),(95.874385,43.247103),(95.854955,43.284543),(95.845446,43.326478),(95.841416,43.372522),(95.832837,43.408902),(95.692484,43.630542),(95.589235,43.869494),(95.535284,43.95768),(95.510273,43.979125),(95.474926,43.986619),(95.422733,43.988634),(95.327545,44.006669),(95.318967,44.017108),(95.31876,44.040724),(95.328269,44.09824),(95.321447,44.148417),(95.322688,44.158236),(95.327132,44.168313),(95.340568,44.187537),(95.347492,44.214925),(95.359688,44.231823),(95.389247,44.260762),(95.397722,44.280502),(95.379428,44.287117),(95.036814,44.254974),(94.993819,44.259418),(94.905762,44.294248),(94.720864,44.335796),(94.69823,44.343496),(94.683967,44.356622),(94.672598,44.373778),(94.657199,44.388816),(94.588055,44.4361),(94.460311,44.493306),(94.430132,44.504468),(94.366364,44.504313),(94.338458,44.512116),(94.321922,44.528033),(94.295464,44.571389),(94.27748,44.589579),(94.195005,44.65433),(94.181879,44.660841),(94.165342,44.665854),(94.149012,44.667197),(94.103744,44.651643),(94.058579,44.651281),(93.97507,44.659962),(93.927424,44.672933),(93.874507,44.714223),(93.714,44.874575),(93.688265,44.891059),(93.525278,44.951263),(93.420582,44.955087),(93.399911,44.960564),(93.36136,44.978393),(93.341,44.984852),(93.317229,44.985886),(93.268963,44.980098),(93.244158,44.981545),(93.165713,45.007642),(93.139152,45.009864),(93.075941,45.005192),(93.052439,45.003456),(93.033112,45.007642),(93.013888,45.009295),(92.994458,45.008365),(92.974924,45.004696),(92.945055,45.005833),(92.91622,45.014566),(92.861029,45.037872),(92.847593,45.039112),(92.81421,45.033893),(92.798914,45.03317),(92.752715,45.037976),(92.63851,45.0156),(92.572054,45.013016),(92.475109,44.997461),(92.458986,44.997926),(92.413821,45.011259),(92.396458,45.013119),(92.347365,45.007642),(92.331036,45.007848),(92.285354,45.019527),(92.267887,45.019527),(92.218898,45.010122),(92.19368,45.014514),(92.108414,45.058181),(92.075651,45.068878),(92.042991,45.074821),(92.009401,45.076216),(91.975088,45.073374),(91.919071,45.065571),(91.746989,45.072444),(91.724664,45.070738),(91.683633,45.061178),(91.662963,45.059525),(91.563951,45.077146),(91.552272,45.075338),(91.54328,45.071824),(91.533978,45.0711),(91.521783,45.077456),(91.45605,45.137453),(91.43848,45.146134),(91.41688,45.145669),(91.399723,45.136884),(91.3836,45.124224),(91.366547,45.11425),(91.346703,45.113733),(91.249138,45.129701),(91.184336,45.153834),(91.169763,45.163549),(91.147749,45.187217),(91.135036,45.197862),(91.101343,45.210678),(91.056798,45.217551),(91.011323,45.218946),(90.975149,45.214864),(90.90549,45.185977),(90.873243,45.186184),(90.857637,45.215071),(90.860324,45.231659),(90.865905,45.242563),(90.866629,45.25357),(90.854433,45.270623),(90.837173,45.280028),(90.796556,45.292844),(90.790975,45.304316),(90.790458,45.337699),(90.778055,45.365294),(90.762553,45.391597),(90.753251,45.42095),(90.737955,45.441724),(90.672325,
45.475675),(90.651138,45.493142),(90.696613,45.730026),(90.710773,45.752196),(90.919235,45.950065),(90.975149,45.987323),(90.986208,45.993111),(91.002435,46.021843),(91.005432,46.068094),(90.995923,46.113724),(90.975149,46.140544),(90.948794,46.168294),(90.899185,46.281672),(90.896188,46.302033),(90.906006,46.317303),(90.935772,46.343865),(90.975149,46.416548),(90.986105,46.435255),(91.001401,46.475045),(91.014113,46.493184),(91.035611,46.529073),(91.045326,46.551087),(91.047496,46.566409),(91.03251,46.579923),(91.011323,46.585323),(90.995923,46.594625),(90.998094,46.620256),(91.011736,46.662941),(91.014292,46.689844),(91.01556,46.703197),(91.004812,46.74043),(90.975149,46.774149),(90.939389,46.79885),(90.925333,46.813139),(90.919959,46.833628),(90.922749,46.841586),(90.934428,46.856831),(90.936495,46.864427),(90.932775,46.874789),(90.92585,46.880654),(90.917685,46.885408),(90.91014,46.892384),(90.900322,46.910678),(90.893707,46.92817),(90.885129,46.944733),(90.869833,46.960339),(90.849472,46.973206),(90.828182,46.982301),(90.805547,46.986849),(90.78188,46.986332),(90.742399,46.991861),(90.712116,47.014702),(90.46872,47.308922),(90.466343,47.316338),(90.468514,47.323521),(90.475232,47.33029),(90.485567,47.344915),(90.490011,47.361839),(90.487221,47.378194),(90.475232,47.391088),(90.448153,47.404033),(90.441745,47.430543),(90.443606,47.462789),(90.441228,47.493045),(90.427276,47.506559),(90.394616,47.524671),(90.381594,47.538391),(90.352035,47.588311),(90.338496,47.60278),(90.327437,47.623037),(90.337462,47.63699),(90.349554,47.64779),(90.3448,47.658642),(90.32351,47.670166),(90.302116,47.678512),(90.209718,47.695746),(90.163416,47.710009),(90.119956,47.730137),(90.082852,47.756104),(90.070657,47.76985),(90.058771,47.789513),(90.052157,47.8107),(90.055981,47.828916),(90.064352,47.85062),(90.059856,47.868811),(90.044612,47.879766),(90.002496,47.880386),(89.96317,47.888654),(89.946685,47.883177),(89.939037,47.870852),(89.937642,47.857545),(89.934438,47.844264),(89.921364,47.832094),(89.889066,47.824317),(89.783543,47.818504),(89.751296,47.824317),(89.740238,47.836151),(89.731143,47.864754),(89.723391,47.878164),(89.707268,47.889042),(89.672025,47.89408),(89.655282,47.899816),(89.643913,47.912761),(89.636161,47.929582),(89.627376,47.943948),(89.613424,47.949581),(89.596577,47.952578),(89.583245,47.962035),(89.575648,47.976039),(89.576217,47.992886),(89.571876,48.020455),(89.5418,48.031023),(89.503663,48.029989),(89.475344,48.022961),(89.425425,48.021644),(89.409198,48.023116),(89.369201,48.038154),(89.354266,48.038774),(89.334268,48.032366),(89.285692,48.005598),(89.268587,47.993144),(89.243989,47.980225),(89.218719,47.976969),(89.167146,47.983997),(89.074697,47.984256),(89.045913,47.992886),(89.04581,47.992989),(89.045551,47.992989),(88.932328,48.096962),(88.916825,48.105954),(88.889282,48.110915),(88.831456,48.105153),(88.805411,48.105954),(88.775335,48.120552),(88.709189,48.166363),(88.680664,48.17303),(88.655446,48.171169),(88.635809,48.175846),(88.596225,48.197679),(88.580774,48.211994),(88.573487,48.229719),(88.568836,48.249201),(88.56124,48.268269),(88.558759,48.288087),(88.57297,48.32532),(88.565116,48.342993),(88.50269,48.392551),(88.479126,48.400148),(88.459127,48.398132),(88.439645,48.393714),(88.416701,48.394101),(88.383421,48.408674),(88.332107,48.453994),(88.304873,48.469006),(88.235627,48.493191),(88.21785,48.495),(88.183485,48.492752),(88.165967,48.493734),(88.148293,48.499366),(88.115737,48.516497),(88.088245,48.526083),(88.071812,48.538795),(88.062511,48.542878),(88.051969,48.
543188),(88.028301,48.539622),(88.016622,48.539803),(87.983032,48.552335),(87.95182,48.575202),(87.942828,48.599489),(87.975488,48.616517),(87.989647,48.624165),(87.997812,48.635172),(88.003393,48.645998),(88.010162,48.653414),(88.055018,48.673413),(88.063854,48.682611),(88.05941,48.708087),(88.033779,48.72912),(87.975488,48.755992),(87.95244,48.761107),(87.907585,48.758834),(87.841697,48.779349),(87.821078,48.788289),(87.805885,48.800588),(87.802165,48.80958),(87.800511,48.830302),(87.796067,48.839759),(87.787489,48.84658),(87.77674,48.85118),(87.754726,48.857277),(87.737879,48.86937),(87.735709,48.886733),(87.743615,48.905181),(87.756896,48.920374),(87.835858,48.945902),(87.8726,48.96802),(87.872238,49.00409),(87.856942,49.015614),(87.837821,49.02042),(87.821905,49.02874),(87.815962,49.050599),(87.821078,49.071838),(87.829657,49.085997),(87.835548,49.100725),(87.832447,49.123876),(87.818081,49.154003),(87.816324,49.165837),(87.836995,49.159998),(87.855185,49.158964),(87.956781,49.168886),(87.975488,49.175811),(87.992954,49.191727),(88.029748,49.21555),(88.108503,49.247434),(88.143953,49.27074),(88.148397,49.296579),(88.129173,49.325931),(88.115531,49.356575),(88.135788,49.386082),(88.145606,49.397193),(88.161729,49.430111),(88.174752,49.443857),(88.19997,49.454347),(88.37877,49.475793),(88.401921,49.475844),(88.451117,49.465147),(88.475405,49.463649),(88.538761,49.466542),(88.557829,49.47109),(88.599739,49.488918),(88.614157,49.492794),(88.615759,49.492949),(88.633535,49.491244),(88.648676,49.483596),(88.67467,49.459825),(88.69565,49.446957),(88.719421,49.442926),(88.870627,49.436002),(88.875329,49.441789),(88.854865,49.473881),(88.856157,49.511501),(88.858741,49.527366),(88.868198,49.538114),(88.88892,49.54168),(88.90835,49.535944),(88.919099,49.52318),(88.926954,49.507419),(88.937599,49.492846),(88.947573,49.482821),(88.954704,49.471865),(88.962662,49.46215),(88.975323,49.455845),(89.01408,49.460548),(89.160738,49.502716),(89.195258,49.521526),(89.215619,49.545452),(89.201563,49.570619),(89.178205,49.598472),(89.187352,49.621985),(89.216445,49.634491),(89.252205,49.629427),(89.310703,49.59315),(89.339435,49.580799),(89.37194,49.581678),(89.39783,49.598989),(89.418604,49.624207),(89.4416,49.643586),(89.475344,49.643431),(89.520303,49.663171),(89.616318,49.685444),(89.682308,49.708336),(89.702411,49.719964),(89.712642,49.735157),(89.701532,49.75314),(89.683704,49.76115),(89.638952,49.775464),(89.626446,49.788022),(89.627376,49.796807),(89.641949,49.808124),(89.643551,49.818872),(89.638952,49.824815),(89.621072,49.835047),(89.615336,49.84192),(89.615284,49.854529),(89.621072,49.893338),(89.623862,49.902692),(89.633211,49.907939),(89.667684,49.92729),(89.720704,49.939744),(89.866432,49.942741),(89.920175,49.950854),(89.969423,49.96708),(89.996243,49.992764),(89.997091,50.003664),(89.998723,50.024648),(90.004924,50.050745),(90.020531,50.070743),(90.051226,50.084179),(90.182433,50.102473),(90.222947,50.11441),(90.34387,50.174665),(90.363921,50.179781),(90.404952,50.182623),(90.425002,50.186912),(90.475232,50.21461),(90.51895,50.218383),(90.604939,50.205981),(90.648451,50.206652),(90.666434,50.210373),(90.683901,50.216212),(90.699714,50.225152),(90.712633,50.237968),(90.717387,50.252024),(90.717594,50.280756),(90.726586,50.291867),(90.762449,50.305716),(90.839964,50.320806),(90.877068,50.339357),(90.948588,50.397493),(90.975149,50.413926),(91.012046,50.425295),(91.129765,50.426174),(91.163355,50.432788),(91.260093,50.466585),(91.297507,50.466378),(91.382153,50.454958),(91.415743,50.461365),(91
.432486,50.478057),(91.440031,50.498572),(91.450366,50.519088),(91.475067,50.535986),(91.575733,50.561979),(91.602708,50.579601),(91.653144,50.640063),(91.679602,50.656547),(91.749779,50.684091),(91.824193,50.701558),(91.899951,50.707242),(92.062732,50.686313),(92.15885,50.689103),(92.242772,50.720006),(92.294449,50.791268),(92.297033,50.809871),(92.297446,50.827596),(92.301683,50.843358),(92.315223,50.855863),(92.33548,50.863615),(92.360905,50.868369),(92.384572,50.865579),(92.399869,50.850592),(92.418989,50.811732),(92.443277,50.786307),(92.474903,50.770494),(92.516451,50.760624),(92.554588,50.743002),(92.611845,50.687657),(92.649776,50.674066),(92.671893,50.675616),(92.695251,50.68192),(92.717369,50.692152),(92.735249,50.705227),(92.746617,50.721453),(92.753439,50.754681),(92.76026,50.770752),(92.796123,50.788012),(92.857722,50.795505),(92.921077,50.792456),(92.961695,50.777884),(92.973994,50.742175),(92.973374,50.694374),(92.991874,50.65479),(93.002933,50.634843),(93.011408,50.623319),(93.02288,50.616291),(93.04262,50.609884),(93.104942,50.598825),(93.139152,50.599445),(93.188348,50.599807),(93.421512,50.609367),(93.536337,50.584717),(93.886961,50.574976),(94.237586,50.565235),(94.25867,50.557535),(94.273346,50.544771),(94.278204,50.529217),(94.279754,50.492681),(94.319751,50.429016),(94.327503,50.404366),(94.332257,50.301013),(94.344349,50.249182),(94.354478,50.223602),(94.368534,50.202725),(94.391788,50.185568),(94.474987,50.162262),(94.491731,50.152289),(94.498552,50.139525),(94.502273,50.125055),(94.509094,50.109862),(94.520876,50.09777),(94.563974,50.071002),(94.574413,50.058548),(94.592293,50.029816),(94.603455,50.019739),(94.624539,50.015191),(94.650997,50.017827),(94.746392,50.040358),(94.921575,50.045887),(94.953718,50.040616),(94.98059,50.024751),(95.004051,49.992764),(95.021621,49.969819),(95.028339,49.963411),(95.076398,49.945531),(95.354727,49.947702),(95.399789,49.938865),(95.445161,49.914422),(95.461697,49.90264),(95.477201,49.894083),(95.480611,49.892201),(95.500661,49.886724),(95.520608,49.889617),(95.537145,49.900728),(95.56319,49.9276),(95.583964,49.936695),(95.680495,49.945996),(95.721423,49.958502),(95.75894,49.993074),(95.787672,50.011729),(95.827876,50.020152),(95.867047,50.014984),(95.892162,49.992919),(95.892162,49.992764),(95.904461,49.964807),(95.922651,49.944601),(95.946319,49.938503),(95.974844,49.952456),(95.985696,49.964962),(95.998305,49.975917),(96.011948,49.985219),(96.026624,49.992815),(96.04099,49.997931),(96.055976,49.99881),(96.071066,49.996743),(96.085949,49.992919),(96.086052,49.992867),(96.086259,49.992815),(96.086362,49.992764),(96.235603,49.9492),(96.281182,49.926256),(96.32335,49.897886),(96.346294,49.888584),(96.372856,49.886052),(96.399108,49.891736),(96.44882,49.913957),(96.474659,49.921657),(96.503287,49.922122),(96.523855,49.915766),(96.539254,49.901865),(96.553,49.879851),(96.56809,49.861764),(96.585143,49.857526),(96.604366,49.862901),(96.625967,49.873391),(96.680227,49.90972),(96.701932,49.91375),(96.969202,49.886672),(97.005892,49.865278),(97.042273,49.823213),(97.057259,49.812723),(97.08527,49.805838),(97.10015,49.802181),(97.119891,49.793654),(97.128986,49.775826),(97.152964,49.750246),(97.205053,49.734071),(97.262311,49.726113),(97.301688,49.725545),(97.343339,49.734175),(97.390778,49.749471),(97.474598,49.788693),(97.542707,49.822748),(97.57361,49.847501),(97.581258,49.877939),(97.571749,49.896749),(97.563171,49.908944),(97.566582,49.916799),(97.642236,49.930028),(97.7336,49.958347),(97.756338,49.961499),(97.777525,49.956177),(97.
810701,49.931165),(97.829925,49.92331),(97.847391,49.93008),(97.874573,49.953955),(97.906199,49.973075),(98.047999,50.023046),(98.105567,50.06387),(98.262457,50.283805),(98.269278,50.299412),(98.272275,50.318997),(98.270105,50.335947),(98.261836,50.367469),(98.266281,50.400697),(98.298217,50.460125),(98.300594,50.492785),(98.300697,50.492836),(98.300697,50.49294),(98.293462,50.518623),(98.275376,50.537692),(98.251294,50.550559),(98.22556,50.557639),(98.163341,50.564667),(98.135953,50.570971),(98.060092,50.608902),(98.038491,50.622958),(98.023298,50.642026),(97.974515,50.723158),(97.944026,50.774473),(97.947437,50.79597),(97.990225,50.835606),(97.984644,50.846923),(97.953018,50.86284),(97.938755,50.874415),(97.92966,50.886611),(97.919428,50.897411),(97.871989,50.9171),(97.835713,50.940716),(97.808531,50.970482),(97.80636,51.001126),(97.870129,51.086805),(97.885632,51.120111),(97.894727,51.153855),(97.900308,51.168428),(97.92904,51.21504),(97.931727,51.230414),(97.92811,51.251472),(97.916328,51.285889),(97.915708,51.300617),(97.922839,51.318471),(97.933794,51.331777),(97.962216,51.355445),(97.974515,51.368597),(98.020611,51.435182),(98.039731,51.4573),(98.050893,51.46642),(98.066706,51.469263),(98.111355,51.484714),(98.137813,51.485877),(98.220392,51.475205),(98.2299,51.492879),(98.233828,51.516521),(98.224319,51.560006),(98.228247,51.583416),(98.238685,51.601477),(98.263387,51.635196),(98.273515,51.654833),(98.33222,51.718317),(98.425031,51.746222),(98.618714,51.780665),(98.651373,51.797485),(98.682689,51.819474),(98.711111,51.845105),(98.776637,51.930113),(98.791727,51.9775),(98.800201,51.992745),(98.838545,52.025275),(98.846504,52.039331),(98.848881,52.056307),(98.849604,52.090129),(98.855599,52.106691),(98.886398,52.128551),(98.927636,52.129584),(98.937652,52.124687),(98.962465,52.112557),(98.974454,52.080388),(98.994712,52.058296),(99.025097,52.045532),(99.19873,52.006594),(99.232837,51.994243),(99.255988,51.977914),(99.274694,51.959491),(99.295365,51.943549),(99.32389,51.934221),(99.556848,51.891459),(99.679321,51.888669),(99.709655,51.880581),(99.783966,51.837612),(99.848665,51.789346),(99.917911,51.749478),(100.005968,51.731727)] +Montserrat [(-62.147369,16.744289),(-62.147694,16.737047),(-62.151763,16.724514),(-62.150787,16.720364),(-62.147572,16.715888),(-62.14391,16.70897),(-62.141184,16.70185),(-62.140533,16.696479),(-62.144032,16.688218),(-62.14977,16.681789),(-62.157826,16.677314),(-62.167877,16.67536),(-62.218861,16.688707),(-62.230133,16.727607),(-62.218861,16.776313),(-62.202016,16.819322),(-62.189443,16.814439),(-62.182932,16.81273),(-62.174672,16.813137),(-62.179311,16.794501),(-62.170033,16.777004),(-62.156321,16.760321),(-62.147369,16.744289)] +Namibia 
[(13.184911,-16.964183),(13.19814,-16.957362),(13.205478,-16.963046),(13.212092,-16.972761),(13.222841,-16.977929),(13.245269,-16.98134),(13.257154,-16.98165),(13.267489,-16.977929),(13.267489,-16.98537),(13.293018,-16.973795),(13.308417,-16.970178),(13.315238,-16.974415),(13.321543,-16.974932),(13.345947,-16.968711),(13.363711,-16.964183),(13.382004,-16.970384),(13.417248,-16.993742),(13.435128,-16.999013),(13.458899,-17.001803),(13.479776,-17.010278),(13.494659,-17.024024),(13.507785,-17.063505),(13.522254,-17.076941),(13.530936,-17.093271),(13.521324,-17.1219),(13.606487,-17.167375),(13.69413,-17.236621),(13.790403,-17.288091),(13.884351,-17.338527),(13.896857,-17.349069),(13.942745,-17.408187),(13.957318,-17.419142),(13.980573,-17.423587),(13.992975,-17.421313),(14.008891,-17.411494),(14.017883,-17.40922),(14.029148,-17.410461),(14.047442,-17.416042),(14.069146,-17.418626),(14.085372,-17.423587),(14.097258,-17.423587),(14.108213,-17.420486),(14.123923,-17.411598),(14.130744,-17.40922),(14.174979,-17.416248),(14.197097,-17.412941),(14.206502,-17.393097),(14.207432,-17.388033),(14.218801,-17.388136),(14.219112,-17.388136),(14.287427,-17.388136),(14.543949,-17.38824),(14.800574,-17.388343),(15.057199,-17.388343),(15.313721,-17.388447),(15.559114,-17.388545),(15.570243,-17.38855),(15.826765,-17.388653),(16.08339,-17.388653),(16.339808,-17.388653),(16.59633,-17.388757),(16.852852,-17.388757),(17.109477,-17.38886),(17.278972,-17.388928),(17.366102,-17.388963),(17.622624,-17.388963),(17.879146,-17.389067),(18.000483,-17.389116),(18.135668,-17.38917),(18.392293,-17.389273),(18.445726,-17.389273),(18.453581,-17.389893),(18.455028,-17.395991),(18.458645,-17.405293),(18.465157,-17.40922),(18.471668,-17.414285),(18.488101,-17.452112),(18.489858,-17.462447),(18.492752,-17.464514),(18.516626,-17.471336),(18.55094,-17.535311),(18.554144,-17.54637),(18.567476,-17.553294),(18.619669,-17.594842),(18.622873,-17.60218),(18.627937,-17.619854),(18.633312,-17.628949),(18.64282,-17.638044),(18.66132,-17.646725),(18.670932,-17.653133),(18.728293,-17.710907),(18.74824,-17.736332),(18.761986,-17.747701),(18.800847,-17.756589),(18.89004,-17.799274),(19.020575,-17.822322),(19.105118,-17.818188),(19.139327,-17.805269),(19.160825,-17.800928),(19.17209,-17.801238),(19.201856,-17.807129),(19.246608,-17.804028),(19.262421,-17.81395),(19.421274,-17.859426),(19.59284,-17.857255),(19.649787,-17.844956),(19.672628,-17.842682),(19.693195,-17.847747),(19.702187,-17.86573),(19.711282,-17.872965),(19.732056,-17.88206),(19.755207,-17.889088),(19.77102,-17.890328),(19.775981,-17.885057),(19.78766,-17.866454),(19.793964,-17.862319),(19.951991,-17.862319),(19.989508,-17.882783),(20.024545,-17.894566),(20.03426,-17.895909),(20.092551,-17.890328),(20.119939,-17.890741),(20.132962,-17.888778),(20.147121,-17.882783),(20.154563,-17.890328),(20.169239,-17.878132),(20.193113,-17.872138),(20.219882,-17.873378),(20.243446,-17.882783),(20.278173,-17.854878),(20.288921,-17.861596),(20.302771,-17.860769),(20.336154,-17.854878),(20.347729,-17.858909),(20.386797,-17.890328),(20.412635,-17.884127),(20.437543,-17.894462),(20.460074,-17.913686),(20.479504,-17.934356),(20.489529,-17.942108),(20.523946,-17.958541),(20.552885,-17.979108),(20.565494,-17.985206),(20.590712,-17.978798),(20.630089,-17.976938),(20.666263,-17.980245),(20.682179,-17.988617),(20.68962,-17.996988),(20.706157,-18.00381),(20.722487,-18.006497),(20.729928,-18.002363),(20.737473,-17.993474),(20.754423,-17.997918),(20.784602,-18.012595),(20.806202,-18.031405),(20.831937,-18.029338)
,(20.8616,-18.018796),(20.894466,-18.012595),(20.908315,-17.986136),(20.986455,-17.965771),(21.000713,-17.962055),(21.09311,-17.938077),(21.10944,-17.945002),(21.12608,-17.941074),(21.143133,-17.933736),(21.161323,-17.930636),(21.175792,-17.93322),(21.216617,-17.930636),(21.23274,-17.93446),(21.278112,-17.958541),(21.335059,-17.977971),(21.364825,-17.992027),(21.381154,-18.012595),(21.386839,-18.014455),(21.405546,-18.009287),(21.477583,-17.996058),(21.728523,-17.949859),(21.979464,-17.903764),(22.230508,-17.857565),(22.481449,-17.811366),(22.685674,-17.772816),(22.889796,-17.734059),(23.093918,-17.695405),(23.29804,-17.656854),(23.381652,-17.641144),(23.422373,-17.633496),(23.45741,-17.626778),(23.47622,-17.626365),(23.505986,-17.620474),(23.548774,-17.611999),(23.591769,-17.603524),(23.634557,-17.595049),(23.677242,-17.586677),(23.72003,-17.578202),(23.762818,-17.569727),(23.805606,-17.561356),(23.848497,-17.552881),(23.891389,-17.544406),(23.934177,-17.536034),(23.976965,-17.52756),(24.01965,-17.519085),(24.062541,-17.510713),(24.105329,-17.502238),(24.148117,-17.493763),(24.190905,-17.485288),(24.220464,-17.4795),(24.238758,-17.478157),(24.257361,-17.480844),(24.310278,-17.482601),(24.321337,-17.488699),(24.329398,-17.485081),(24.371256,-17.473713),(24.388929,-17.471336),(24.407016,-17.474539),(24.449288,-17.489112),(24.47926,-17.494487),(24.498173,-17.503478),(24.523598,-17.507819),(24.53166,-17.5134),(24.537861,-17.520325),(24.546543,-17.526526),(24.562459,-17.53221),(24.571244,-17.533451),(24.580752,-17.532831),(24.591294,-17.528386),(24.606487,-17.515157),(24.617443,-17.509163),(24.622817,-17.502341),(24.629742,-17.49552),(24.639457,-17.49242),(24.684415,-17.49242),(24.769785,-17.505442),(24.775056,-17.507612),(24.780017,-17.512263),(24.786838,-17.516914),(24.79738,-17.519085),(24.829523,-17.517741),(24.898252,-17.531074),(24.924917,-17.542959),(24.93763,-17.560736),(24.95799,-17.551744),(24.969987,-17.559962),(24.971116,-17.560736),(24.982588,-17.576549),(24.998505,-17.588021),(25.008013,-17.588538),(25.02672,-17.58275),(25.036952,-17.5812),(25.040569,-17.584507),(25.033851,-17.60156),(25.033851,-17.608485),(25.040053,-17.616133),(25.045324,-17.619957),(25.052558,-17.621404),(25.064237,-17.621507),(25.067028,-17.625331),(25.085218,-17.640938),(25.088525,-17.642695),(25.096897,-17.67215),(25.098757,-17.676801),(25.107955,-17.678868),(25.115087,-17.684242),(25.120048,-17.691064),(25.122631,-17.697885),(25.131416,-17.686516),(25.138548,-17.686103),(25.153327,-17.700986),(25.156428,-17.70667),(25.155291,-17.719382),(25.156841,-17.72517),(25.161182,-17.729408),(25.172448,-17.735092),(25.177822,-17.738813),(25.190741,-17.755349),(25.198182,-17.75845),(25.215132,-17.759277),(25.228878,-17.762481),(25.242417,-17.770335),(25.253476,-17.781497),(25.259781,-17.794107),(25.194152,-17.782324),(25.153947,-17.781808),(25.120874,-17.813537),(25.087905,-17.826766),(25.057002,-17.827696),(25.047494,-17.807129),(25.019692,-17.823769),(25.007083,-17.825733),(24.998505,-17.81395),(24.983312,-17.820462),(24.975147,-17.816327),(24.968636,-17.807646),(24.958094,-17.800928),(24.964295,-17.786665),(24.953339,-17.788422),(24.948585,-17.79328),(24.945175,-17.799998),(24.93763,-17.807129),(24.931119,-17.81054),(24.857428,-17.833587),(24.838101,-17.835034),(24.820841,-17.839272),(24.79738,-17.858082),(24.776916,-17.862319),(24.770715,-17.865523),(24.764514,-17.879683),(24.756039,-17.882783),(24.74684,-17.88423),(24.738159,-17.887641),(24.730717,-17.891878),(24.725343,-17.895909),(24.698471,-17.928879),(24.66405
5,-17.949859),(24.649172,-17.962778),(24.598632,-18.020759),(24.595015,-18.022826),(24.591811,-18.028407),(24.577755,-18.044427),(24.574551,-18.050422),(24.564423,-18.052799),(24.518431,-18.057346),(24.505615,-18.060344),(24.471508,-18.029958),(24.469751,-18.014455),(24.465101,-18.008667),(24.458486,-18.005773),(24.451045,-17.998952),(24.433888,-17.967223),(24.421589,-17.956474),(24.399471,-17.95234),(24.365262,-17.950789),(24.350586,-17.95606),(24.334256,-17.971563),(24.305834,-18.019416),(24.296429,-18.026237),(24.287437,-18.02448),(24.270074,-18.015798),(24.259222,-18.012595),(24.238241,-18.009907),(24.218294,-18.012595),(24.183051,-18.029441),(24.135095,-18.085458),(24.101609,-18.108816),(24.065125,-18.115224),(24.05696,-18.119048),(24.028125,-18.145816),(24.02027,-18.151604),(23.991744,-18.163386),(23.979652,-18.171861),(23.974898,-18.177029),(23.971177,-18.18385),(23.966733,-18.180646),(23.961979,-18.177856),(23.956604,-18.176615),(23.950817,-18.177649),(23.915987,-18.201213),(23.912886,-18.235733),(23.89697,-18.250203),(23.867514,-18.269426),(23.855215,-18.280072),(23.837232,-18.305807),(23.825553,-18.317072),(23.809843,-18.321723),(23.715792,-18.419081),(23.700909,-18.42797),(23.680136,-18.431484),(23.656468,-18.458252),(23.649853,-18.46342),(23.645409,-18.466004),(23.609856,-18.477682),(23.592182,-18.478199),(23.57916,-18.467864),(23.571305,-18.426006),(23.555492,-18.383115),(23.546604,-18.369472),(23.560763,-18.348491),(23.551461,-18.327511),(23.518802,-18.293714),(23.527277,-18.277695),(23.521696,-18.268186),(23.511257,-18.260331),(23.505676,-18.249376),(23.501542,-18.23749),(23.49131,-18.23315),(23.477771,-18.230876),(23.464128,-18.225501),(23.458754,-18.217647),(23.444284,-18.200697),(23.429401,-18.188501),(23.418859,-18.198423),(23.410281,-18.197389),(23.40129,-18.194289),(23.395812,-18.191395),(23.394468,-18.186227),(23.396329,-18.169381),(23.395812,-18.163386),(23.389714,-18.153258),(23.359535,-18.119151),(23.348063,-18.09445),(23.336591,-18.07936),(23.3338,-18.073986),(23.33256,-18.067061),(23.3338,-18.04329),(23.330596,-18.039983),(23.323258,-18.039156),(23.31592,-18.037089),(23.312716,-18.029958),(23.312819,-18.016522),(23.311476,-18.009804),(23.305688,-18.005463),(23.292769,-17.998952),(23.254942,-17.997402),(23.185799,-18.003086),(23.099602,-18.010217),(22.981367,-18.020036),(22.894034,-18.036159),(22.809698,-18.051869),(22.725258,-18.067475),(22.640922,-18.083185),(22.556483,-18.098791),(22.472147,-18.1145),(22.387811,-18.13021),(22.303475,-18.145816),(22.219036,-18.161526),(22.134597,-18.177236),(22.050261,-18.192842),(21.966028,-18.208552),(21.881486,-18.224158),(21.79715,-18.239867),(21.71271,-18.255577),(21.628478,-18.271183),(21.544039,-18.28679),(21.526985,-18.289994),(21.509829,-18.293198),(21.492879,-18.296401),(21.475722,-18.299502),(21.456809,-18.300226),(21.437895,-18.301052),(21.39087,-18.302913),(21.343947,-18.304773),(21.296818,-18.306633),(21.249896,-18.308494),(21.193466,-18.310768),(21.136932,-18.312938),(21.080604,-18.315212),(21.024174,-18.317382),(20.993436,-18.318612),(20.975081,-18.319346),(20.975081,-18.347768),(20.975081,-18.388592),(20.975081,-18.429313),(20.975081,-18.470138),(20.975081,-18.511065),(20.975081,-18.551993),(20.975081,-18.592714),(20.975081,-18.633539),(20.975081,-18.674363),(20.975081,-18.715187),(20.975081,-18.756012),(20.975081,-18.796836),(20.975081,-18.83766),(20.975081,-18.878381),(20.975081,-18.919206),(20.975081,-18.96003),(20.975081,-19.000958),(20.975494,-19.125912),(20.975609,-19.172106),(20.975804,-19.250865),(20.9763
21,-19.375819),(20.976735,-19.500876),(20.977045,-19.625829),(20.977561,-19.750783),(20.977975,-19.87584),(20.978285,-20.000794),(20.978698,-20.125851),(20.979112,-20.250804),(20.97936,-20.351028),(20.979422,-20.375758),(20.979835,-20.500711),(20.980249,-20.625768),(20.980662,-20.750722),(20.981075,-20.875779),(20.981486,-20.999958),(20.981489,-21.000733),(20.981592,-21.058403),(20.981902,-21.116074),(20.982109,-21.173642),(20.982212,-21.231209),(20.982419,-21.28888),(20.982626,-21.346448),(20.982832,-21.404119),(20.983039,-21.46179),(20.983143,-21.519357),(20.983453,-21.577028),(20.983659,-21.634699),(20.983866,-21.69237),(20.984073,-21.750041),(20.984279,-21.807608),(20.984486,-21.865176),(20.984693,-21.922847),(20.984796,-21.963981),(20.97198,-22.000671),(20.914723,-22.000671),(20.680629,-22.000671),(20.446534,-22.000671),(20.21244,-22.000671),(19.978346,-22.000671),(19.978449,-22.086764),(19.978656,-22.237246),(19.978863,-22.387625),(19.978966,-22.538106),(19.979173,-22.688588),(19.979276,-22.860671),(19.979586,-23.032546),(19.979793,-23.204526),(19.979855,-23.308551),(19.979896,-23.376505),(19.980103,-23.548587),(19.98031,-23.720567),(19.980483,-23.864489),(19.980516,-23.892494),(19.980723,-24.064473),(19.980826,-24.236556),(19.981137,-24.408431),(19.981343,-24.580411),(19.981447,-24.752493),(19.981757,-24.764999),(19.981757,-24.821016),(19.981757,-24.877137),(19.981757,-24.933257),(19.98186,-24.989275),(19.981963,-25.045395),(19.981963,-25.101412),(19.981963,-25.15743),(19.981963,-25.21355),(19.981963,-25.269671),(19.981963,-25.325688),(19.981963,-25.381705),(19.982067,-25.437826),(19.982067,-25.493843),(19.982067,-25.549964),(19.982067,-25.606084),(19.982067,-25.618988),(19.982067,-25.662102),(19.982067,-25.718119),(19.982067,-25.774136),(19.982067,-25.830257),(19.98217,-25.886274),(19.982273,-25.942395),(19.982273,-25.998515),(19.982273,-26.054533),(19.982273,-26.11055),(19.982273,-26.16667),(19.982377,-26.222688),(19.982377,-26.278808),(19.982377,-26.334826),(19.982377,-26.390946),(19.982377,-26.446963),(19.982377,-26.503084),(19.98248,-26.559101),(19.982583,-26.615119),(19.982583,-26.671239),(19.982583,-26.727256),(19.982583,-26.783377),(19.982583,-26.839394),(19.982687,-26.895515),(19.982687,-26.951532),(19.982687,-27.007653),(19.982687,-27.06367),(19.982687,-27.119687),(19.982687,-27.175808),(19.98279,-27.231928),(19.982894,-27.287946),(19.982894,-27.344066),(19.982894,-27.400187),(19.982894,-27.456101),(19.982894,-27.512118),(19.982894,-27.568239),(19.982894,-27.624359),(19.982997,-27.680376),(19.982997,-27.736497),(19.982997,-27.792618),(19.982997,-27.848532),(19.982997,-27.904652),(19.982997,-27.960773),(19.982997,-28.01679),(19.982997,-28.072911),(19.9831,-28.129031),(19.983204,-28.185049),(19.983204,-28.241066),(19.983204,-28.297186),(19.983204,-28.353204),(19.983204,-28.392684),(19.981653,-28.422347),(19.950544,-28.429271),(19.940209,-28.430202),(19.907859,-28.426481),(19.896697,-28.427721),(19.870342,-28.440847),(19.82559,-28.47671),(19.796342,-28.484152),(19.748076,-28.487149),(19.725338,-28.494177),(19.705598,-28.508026),(19.688338,-28.515984),(19.587982,-28.522702),(19.579197,-28.525183),(19.568759,-28.531177),(19.562351,-28.538102),(19.55646,-28.546163),(19.547571,-28.555879),(19.542197,-28.564043),(19.533722,-28.58244),(19.527417,-28.585954),(19.517806,-28.589365),(19.511708,-28.598047),(19.482872,-28.678248),(19.472434,-28.692718),(19.45507,-28.705223),(19.434607,-28.713492),(19.353475,-28.731889),(19.339005,-28.73747),(19.32836,-28.735816),(19.304485,-28.718659),(1
9.290016,-28.719693),(19.277304,-28.729408),(19.265935,-28.742637),(19.248571,-28.771576),(19.244954,-28.792453),(19.251569,-28.814261),(19.288879,-28.871001),(19.288362,-28.883094),(19.27658,-28.889502),(19.243714,-28.891879),(19.237616,-28.895599),(19.226661,-28.911516),(19.218496,-28.918854),(19.161548,-28.945416),(19.120207,-28.957508),(19.081657,-28.959368),(19.064707,-28.939834),(19.060056,-28.9357),(19.048997,-28.932393),(19.01396,-28.928466),(19.006932,-28.926295),(18.99639,-28.915547),(18.971172,-28.88051),(18.954533,-28.866661),(18.745656,-28.839892),(18.553937,-28.864697),(18.51766,-28.882267),(18.496163,-28.888261),(18.491615,-28.886298),(18.479419,-28.876272),(18.474975,-28.873999),(18.469394,-28.874929),(18.460713,-28.87989),(18.455235,-28.88144),(18.435701,-28.884231),(18.424746,-28.887228),(18.416994,-28.891672),(18.397564,-28.898907),(18.373069,-28.895186),(18.331005,-28.88144),(18.30868,-28.879993),(18.220727,-28.891259),(18.18507,-28.902214),(18.166467,-28.901904),(18.082441,-28.875962),(18.044097,-28.858392),(17.983326,-28.813744),(17.949529,-28.794727),(17.913252,-28.781291),(17.746338,-28.748632),(17.714091,-28.751112),(17.703239,-28.755556),(17.683396,-28.767545),(17.673784,-28.771576),(17.660968,-28.772299),(17.628722,-28.764135),(17.607121,-28.75566),(17.602574,-28.735403),(17.603401,-28.710805),(17.598026,-28.689617),(17.582937,-28.680212),(17.565987,-28.683519),(17.54697,-28.691374),(17.526609,-28.695818),(17.485165,-28.700159),(17.44093,-28.709564),(17.403619,-28.704293),(17.401759,-28.674218),(17.414265,-28.63267),(17.420569,-28.593396),(17.409614,-28.571382),(17.373027,-28.559393),(17.365276,-28.542546),(17.359488,-28.519808),(17.332409,-28.488596),(17.324244,-28.470509),(17.328172,-28.456143),(17.341194,-28.442811),(17.358558,-28.432785),(17.375404,-28.428961),(17.39132,-28.418833),(17.399692,-28.395268),(17.402069,-28.367776),(17.400002,-28.346382),(17.394834,-28.335117),(17.378918,-28.316307),(17.371993,-28.306075),(17.368066,-28.293776),(17.367033,-28.283544),(17.364139,-28.273312),(17.355044,-28.261013),(17.35122,-28.251401),(17.349773,-28.238689),(17.345742,-28.22763),(17.334476,-28.222876),(17.308638,-28.224116),(17.245179,-28.237448),(17.21314,-28.232074),(17.191953,-28.20882),(17.187095,-28.162001),(17.189782,-28.139367),(17.189989,-28.116836),(17.180894,-28.099472),(17.155779,-28.092548),(17.136349,-28.084693),(17.123223,-28.066503),(17.111751,-28.046039),(17.097798,-28.031156),(17.086533,-28.027022),(17.076404,-28.026815),(17.056664,-28.031156),(17.045398,-28.036324),(17.012015,-28.058338),(16.98535,-28.05224),(16.973775,-28.055031),(16.959512,-28.06857),(16.947523,-28.072704),(16.937498,-28.07105),(16.927473,-28.060198),(16.919618,-28.058338),(16.91352,-28.062679),(16.896467,-28.079939),(16.892953,-28.082626),(16.885512,-28.162001),(16.878173,-28.172026),(16.873833,-28.171509),(16.868665,-28.167892),(16.852025,-28.168305),(16.847788,-28.165411),(16.843757,-28.163861),(16.837659,-28.168202),(16.835695,-28.174196),(16.838383,-28.179467),(16.855953,-28.199415),(16.856366,-28.206649),(16.84107,-28.209853),(16.819779,-28.20975),(16.814301,-28.213471),(16.810271,-28.222876),(16.813268,-28.229904),(16.820813,-28.240032),(16.8266,-28.251918),(16.824637,-28.264527),(16.815232,-28.270831),(16.805103,-28.267731),(16.795181,-28.26122),(16.786499,-28.257602),(16.768723,-28.265354),(16.764382,-28.283234),(16.768413,-28.303698),(16.775544,-28.319097),(16.796835,-28.347829),(16.803863,-28.366123),(16.793217,-28.374288),(16.778541,-28.382969),(16.773374,-28.402916),
(16.772133,-28.425757),(16.769343,-28.442604),(16.758387,-28.459657),(16.740404,-28.480948),(16.720457,-28.496037),(16.703817,-28.494487),(16.699683,-28.485185),(16.696789,-28.472473),(16.692345,-28.461104),(16.683457,-28.456246),(16.673121,-28.45976),(16.597364,-28.526216),(16.559123,-28.537068),(16.531528,-28.549677),(16.50538,-28.565387),(16.487071,-28.572931),(16.468516,-28.585138),(16.458507,-28.600518),(16.445811,-28.610528),(16.419444,-28.607029),(16.407888,-28.600518),(16.346853,-28.556085),(16.315929,-28.524347),(16.301036,-28.50449),(16.19337,-28.408787),(16.175792,-28.402114),(16.167979,-28.395766),(16.107677,-28.323337),(16.04363,-28.257094),(15.911143,-28.171319),(15.884288,-28.147638),(15.833181,-28.089288),(15.761485,-28.038344),(15.747813,-28.024591),(15.716807,-27.980401),(15.684255,-27.949884),(15.678884,-27.942804),(15.677989,-27.931248),(15.684581,-27.910821),(15.685802,-27.9013),(15.673025,-27.869887),(15.646007,-27.836602),(15.546235,-27.747817),(15.533539,-27.730157),(15.528656,-27.70615),(15.529063,-27.654229),(15.526866,-27.646173),(15.520681,-27.633722),(15.425059,-27.496515),(15.408946,-27.464532),(15.396821,-27.448663),(15.380382,-27.44199),(15.371755,-27.433038),(15.342459,-27.380548),(15.301606,-27.333103),(15.295258,-27.322442),(15.281749,-27.243341),(15.276215,-27.233575),(15.269542,-27.226332),(15.263682,-27.218357),(15.261241,-27.205743),(15.268077,-27.164809),(15.254161,-27.100193),(15.261241,-27.085626),(15.241222,-27.048761),(15.235362,-27.029474),(15.233246,-27.006931),(15.234223,-26.965427),(15.228526,-26.945571),(15.212738,-26.927992),(15.20574,-26.925551),(15.187511,-26.924249),(15.178559,-26.921157),(15.176606,-26.916762),(15.169688,-26.906183),(15.162364,-26.897068),(15.158865,-26.897068),(15.155935,-26.875095),(15.148936,-26.856378),(15.116873,-26.796157),(15.111664,-26.779229),(15.116222,-26.752048),(15.105724,-26.732354),(15.091563,-26.712172),(15.083018,-26.695245),(15.080821,-26.672133),(15.083507,-26.652114),(15.091807,-26.637953),(15.106619,-26.632582),(15.116954,-26.630304),(15.127126,-26.626723),(15.134613,-26.62713),(15.137706,-26.636651),(15.137706,-26.6749),(15.144542,-26.6749),(15.148448,-26.655043),(15.168142,-26.602797),(15.168712,-26.592218),(15.162283,-26.579034),(15.141938,-26.517999),(15.128754,-26.457615),(15.105642,-26.420587),(15.07252,-26.389744),(15.034679,-26.366306),(14.975271,-26.344171),(14.966319,-26.335219),(14.958832,-26.257013),(14.962657,-26.181329),(14.942556,-26.153904),(14.939138,-26.147149),(14.943126,-26.135349),(14.949067,-26.135024),(14.956879,-26.137628),(14.966319,-26.133559),(14.979828,-26.101658),(14.979015,-26.05877),(14.968272,-26.015802),(14.952647,-25.98268),(14.907725,-25.915297),(14.90211,-25.882501),(14.91863,-25.845473),(14.911143,-25.845473),(14.859874,-25.790297),(14.842784,-25.763604),(14.840343,-25.751723),(14.840343,-25.743341),(14.842784,-25.729425),(14.842784,-25.650323),(14.846934,-25.629815),(14.876964,-25.578058),(14.879731,-25.561456),(14.879568,-25.53867),(14.875336,-25.518487),(14.853282,-25.498712),(14.847016,-25.473321),(14.842784,-25.42783),(14.818044,-25.366957),(14.812511,-25.320082),(14.802419,-25.282892),(14.80128,-25.262791),(14.832286,-25.188165),(14.829356,-25.176446),(14.84962,-25.105076),(14.853282,-25.064223),(14.843272,-25.026056),(14.829438,-25.000095),(14.808767,-24.961114),(14.796153,-24.92197),(14.792979,-24.841892),(14.780772,-24.803399),(14.723481,-24.703546),(14.699474,-24.673028),(14.650157,-24.628839),(14.636078,-24.598891),(14.601899,-24.562433),(14.595876,-24.
537205),(14.59962,-24.526544),(14.61378,-24.511651),(14.616954,-24.505792),(14.616954,-24.464776),(14.612315,-24.445733),(14.590099,-24.40838),(14.568126,-24.35768),(14.513927,-24.261489),(14.493175,-24.192315),(14.473643,-24.159112),(14.466807,-24.113214),(14.459239,-24.098321),(14.464203,-24.07822),(14.464122,-24.044692),(14.459646,-24.010431),(14.451915,-23.988458),(14.459239,-23.974542),(14.465831,-23.947198),(14.472911,-23.933852),(14.479177,-23.931085),(14.488292,-23.92962),(14.496593,-23.925958),(14.500255,-23.916436),(14.513927,-23.892267),(14.506033,-23.88006),(14.507091,-23.823907),(14.489513,-23.776056),(14.486583,-23.758477),(14.500255,-23.614435),(14.495291,-23.578546),(14.48406,-23.551039),(14.446951,-23.460382),(14.434337,-23.415216),(14.435069,-23.405532),(14.443126,-23.396661),(14.467459,-23.356052),(14.479747,-23.344822),(14.479503,-23.365818),(14.484386,-23.374933),(14.490408,-23.369724),(14.493419,-23.347833),(14.491954,-23.309177),(14.473481,-23.192804),(14.462738,-23.157403),(14.445567,-23.13128),(14.431326,-23.103285),(14.420665,-23.070977),(14.413829,-23.036228),(14.410167,-22.970798),(14.411388,-22.961196),(14.416026,-22.951593),(14.429047,-22.93255),(14.431895,-22.923028),(14.431895,-22.881768),(14.435069,-22.879815),(14.440929,-22.879815),(14.443858,-22.883477),(14.438162,-22.892267),(14.442719,-22.898533),(14.44809,-22.908136),(14.451915,-22.912205),(14.445974,-22.932387),(14.444347,-22.952813),(14.451671,-22.963311),(14.472911,-22.95379),(14.471446,-22.965102),(14.459239,-22.995294),(14.474294,-22.982192),(14.496837,-22.941827),(14.510509,-22.933282),(14.521983,-22.923435),(14.529307,-22.90016),(14.534353,-22.854181),(14.527517,-22.679457),(14.508556,-22.548028),(14.485118,-22.512384),(14.474294,-22.467869),(14.438162,-22.398858),(14.418712,-22.339776),(14.386485,-22.276462),(14.380382,-22.268487),(14.315196,-22.200291),(14.294281,-22.167413),(14.293468,-22.143324),(14.291352,-22.134454),(14.137218,-21.946384),(14.062022,-21.880955),(14.05421,-21.864679),(13.97169,-21.792739),(13.952322,-21.783949),(13.952159,-21.77337),(13.956391,-21.762953),(13.958995,-21.758396),(13.96518,-21.733168),(13.965831,-21.726739),(13.962657,-21.714044),(13.958263,-21.704848),(13.87794,-21.592218),(13.862804,-21.528009),(13.831716,-21.456476),(13.729259,-21.330255),(13.66684,-21.232354),(13.628103,-21.187595),(13.602712,-21.158136),(13.513357,-21.007908),(13.471934,-20.955011),(13.459158,-20.918064),(13.444835,-20.904067),(13.417735,-20.883884),(13.397634,-20.849298),(13.387218,-20.816095),(13.37672,-20.733575),(13.349457,-20.644464),(13.262706,-20.485121),(13.239024,-20.397149),(13.233572,-20.354669),(13.213064,-20.27337),(13.177745,-20.189223),(13.154633,-20.154229),(13.114425,-20.142573),(13.07088,-20.114229),(13.037345,-20.059465),(13.005382,-19.935479),(12.927745,-19.794692),(12.918468,-19.765558),(12.875987,-19.712009),(12.802501,-19.584731),(12.793956,-19.5617),(12.783051,-19.54697),(12.757579,-19.49684),(12.749278,-19.486017),(12.7046,-19.410821),(12.691091,-19.379002),(12.686046,-19.359796),(12.679698,-19.31406),(12.668305,-19.292413),(12.636241,-19.252618),(12.615733,-19.217543),(12.567393,-19.102309),(12.546723,-19.071466),(12.516449,-19.03753),(12.480317,-19.011),(12.470388,-18.997654),(12.458669,-18.925551),(12.450043,-18.904555),(12.286876,-18.697686),(12.174731,-18.627939),(12.125743,-18.574802),(12.11964,-18.569513),(12.102061,-18.545994),(12.091645,-18.540704),(12.082042,-18.534356),(12.031889,-18.50535),(12.015961,-18.452895),(12.012462,-18.434503),(11.999522,-18.40
3497),(11.99822,-18.39186),(11.998546,-18.381768),(11.997569,-18.372003),(11.999522,-18.361912),(11.976085,-18.329767),(11.937348,-18.23211),(11.931163,-18.225356),(11.849082,-18.143183),(11.83364,-18.098045),(11.826524,-18.03922),(11.808499,-17.991542),(11.796911,-17.956089),(11.784037,-17.859645),(11.760941,-17.786424),(11.745552,-17.683948),(11.734031,-17.632721),(11.726736,-17.599867),(11.717621,-17.546319),(11.72169,-17.467869),(11.755138,-17.266697),(11.766124,-17.252699),(11.766184,-17.252751),(11.779518,-17.254915),(11.796882,-17.263803),(11.80763,-17.26587),(11.82241,-17.264423),(11.827474,-17.260082),(11.829644,-17.253468),(11.835536,-17.245406),(11.853932,-17.233417),(11.894963,-17.214607),(11.942092,-17.180501),(11.98302,-17.161897),(12.028909,-17.148978),(12.075211,-17.142983),(12.082239,-17.14133),(12.08813,-17.139159),(12.095261,-17.139676),(12.105597,-17.146084),(12.13991,-17.156006),(12.151279,-17.153629),(12.156653,-17.149081),(12.16151,-17.146291),(12.170709,-17.149185),(12.177117,-17.154766),(12.179494,-17.161587),(12.181148,-17.169235),(12.185075,-17.17709),(12.190656,-17.182568),(12.200474,-17.189286),(12.21174,-17.195177),(12.222282,-17.197657),(12.23179,-17.201895),(12.236545,-17.2113),(12.239335,-17.220705),(12.242642,-17.224839),(12.314679,-17.218121),(12.379482,-17.220395),(12.393951,-17.21471),(12.407594,-17.204065),(12.417929,-17.203445),(12.4417,-17.216984),(12.46051,-17.223082),(12.519215,-17.227836),(12.554561,-17.235588),(12.56717,-17.234554),(12.591458,-17.222565),(12.63621,-17.185151),(12.660705,-17.17709),(12.685923,-17.173576),(12.704733,-17.164274),(12.739873,-17.135542),(12.784315,-17.115078),(12.818318,-17.104846),(12.824932,-17.096371),(12.833097,-17.081799),(12.842296,-17.074047),(12.849944,-17.070843),(12.867721,-17.065572),(12.876402,-17.059784),(12.880536,-17.050793),(12.882293,-17.039527),(12.887151,-17.029812),(12.911025,-17.023508),(12.930456,-17.014206),(12.961668,-17.007385),(13.014275,-16.977929),(13.121762,-16.959842),(13.1445,-16.952401),(13.166307,-16.951057),(13.184911,-16.964183)] +Niger 
[(14.216217,22.616295),(14.23172,22.617949),(14.294869,22.649833),(14.466021,22.736184),(14.63707,22.822536),(14.808016,22.908887),(14.979168,22.995316),(14.979271,22.995368),(14.979909,22.995664),(14.991984,22.933097),(15.003973,22.87044),(15.015858,22.807782),(15.02795,22.745176),(15.039939,22.682544),(15.051928,22.619861),(15.064021,22.557177),(15.075906,22.494597),(15.087998,22.431914),(15.099987,22.369282),(15.11208,22.306598),(15.124069,22.243966),(15.136058,22.181335),(15.14815,22.118651),(15.156315,22.076173),(15.16448,22.033695),(15.172024,21.993387),(15.172438,21.973647),(15.173265,21.921557),(15.174608,21.847918),(15.176055,21.763427),(15.177502,21.678988),(15.178846,21.605297),(15.179776,21.553208),(15.180086,21.533467),(15.180396,21.507267),(15.18453,21.491196),(15.20024,21.476468),(15.247472,21.453265),(15.266592,21.440656),(15.301009,21.40133),(15.32478,21.366707),(15.34142,21.342161),(15.358266,21.317718),(15.375009,21.293223),(15.391753,21.268729),(15.408496,21.244234),(15.425342,21.219791),(15.441982,21.195193),(15.458829,21.17075),(15.475572,21.146256),(15.492212,21.121761),(15.509058,21.097266),(15.525801,21.072772),(15.542544,21.048277),(15.559288,21.023782),(15.576134,20.999339),(15.592774,20.974793),(15.60931,20.95066),(15.569003,20.928905),(15.544301,20.890302),(15.536033,20.844052),(15.544198,20.79899),(15.556084,20.773669),(15.570243,20.751913),(15.588123,20.732792),(15.669462,20.671866),(15.697264,20.642875),(15.755555,20.582001),(15.813846,20.521074),(15.872033,20.460199),(15.930324,20.399324),(15.953992,20.374571),(15.968151,20.352816),(15.970322,20.336331),(15.962984,20.319226),(15.94252,20.283957),(15.902419,20.214943),(15.862318,20.145955),(15.822321,20.076993),(15.782323,20.00803),(15.767234,19.982037),(15.736021,19.903541),(15.732404,19.861838),(15.728373,19.816259),(15.724342,19.770577),(15.720208,19.724999),(15.716177,19.679317),(15.712146,19.633764),(15.708116,19.588108),(15.704085,19.542529),(15.700054,19.496873),(15.696023,19.451243),(15.692096,19.405638),(15.687962,19.360008),(15.683931,19.314404),(15.6799,19.268747),(15.67587,19.223169),(15.671839,19.177539),(15.667808,19.131882),(15.663777,19.086304),(15.659747,19.040648),(15.655716,18.995069),(15.651582,18.949387),(15.647551,18.903834),(15.64352,18.858178),(15.639593,18.8126),(15.635562,18.766943),(15.631531,18.721313),(15.6275,18.675709),(15.62347,18.630078),(15.619439,18.584474),(15.615305,18.538818),(15.611274,18.493239),(15.607243,18.447583),(15.603213,18.402004),(15.599182,18.356348),(15.595151,18.310718),(15.59112,18.265088),(15.587193,18.219483),(15.583162,18.173879),(15.579028,18.128249),(15.574997,18.082644),(15.570966,18.03704),(15.567698,18.000039),(15.566936,17.991409),(15.562905,17.945779),(15.558874,17.900149),(15.554843,17.85457),(15.550813,17.808888),(15.546782,17.76331),(15.542648,17.717679),(15.538617,17.672049),(15.53469,17.626367),(15.530659,17.580737),(15.526628,17.535107),(15.522597,17.489528),(15.518567,17.443846),(15.514536,17.398267),(15.510505,17.352637),(15.506371,17.307058),(15.50234,17.261376),(15.498309,17.215746),(15.494279,17.170116),(15.490248,17.124537),(15.486217,17.078907),(15.48229,17.033277),(15.478259,16.987698),(15.474125,16.942068),(15.471954,16.916746),(15.468517,16.90491),(15.465547,16.89468),(15.452421,16.876232),(15.418418,16.840885),(15.3624,16.782646),(15.30628,16.7242),(15.250159,16.665909),(15.194039,16.607566),(15.137918,16.549275),(15.081797,16.490881),(15.02578,16.432538),(14.969659,16.374247),(14.913539,16.315904),(14.857418,16.257613),(14.801401
,16.199219),(14.745384,16.140876),(14.689263,16.082534),(14.633143,16.024243),(14.576919,15.965797),(14.520798,15.907557),(14.482144,15.86725),(14.423543,15.806323),(14.368973,15.749634),(14.340447,15.710722),(14.310682,15.669949),(14.280709,15.629254),(14.250944,15.588455),(14.221178,15.547734),(14.191206,15.507039),(14.16144,15.466292),(14.131571,15.425545),(14.101702,15.384876),(14.071833,15.344181),(14.042068,15.303434),(14.012199,15.262687),(13.98233,15.221992),(13.952564,15.181219),(13.922592,15.140498),(13.892826,15.0997),(13.862957,15.059056),(13.833915,15.019601),(13.808077,14.965573),(13.775417,14.897438),(13.760534,14.866355),(13.75516,14.847209),(13.75392,14.825944),(13.75795,14.805532),(13.772006,14.762873),(13.773247,14.742512),(13.764358,14.719077),(13.749165,14.711248),(13.730045,14.709646),(13.709995,14.705124),(13.700021,14.697734),(13.668085,14.662879),(13.648345,14.649417),(13.644727,14.644249),(13.646278,14.639056),(13.657026,14.629651),(13.660334,14.623734),(13.666742,14.585519),(13.665605,14.56689),(13.657853,14.548725),(13.624264,14.521311),(13.608052,14.518266),(13.584369,14.513818),(13.543855,14.510562),(13.507785,14.495964),(13.482257,14.483587),(13.449494,14.439042),(13.449184,14.380131),(13.468304,14.310678),(13.482257,14.259777),(13.492695,14.213862),(13.499723,14.181978),(13.506958,14.150119),(13.514193,14.118235),(13.521324,14.086428),(13.528455,14.054518),(13.53569,14.022737),(13.542821,13.990853),(13.549953,13.958994),(13.557188,13.927161),(13.564319,13.895225),(13.57145,13.863418),(13.578582,13.831534),(13.585816,13.799727),(13.592948,13.767843),(13.600182,13.735984),(13.607314,13.7041),(13.35999,13.714409),(13.35892,13.714232),(13.329915,13.709423),(13.320716,13.702291),(13.321233,13.685109),(13.30604,13.674644),(13.266146,13.638032),(13.259221,13.628963),(13.253847,13.618911),(13.253537,13.614881),(13.258601,13.607801),(13.260048,13.602142),(13.257258,13.594236),(13.25085,13.592763),(13.243822,13.593254),(13.239481,13.591575),(13.230179,13.577751),(13.232246,13.572894),(13.246405,13.571085),(13.228112,13.548037),(13.203927,13.539381),(13.176952,13.534472),(13.150184,13.522664),(13.153698,13.5307),(13.155868,13.540777),(13.153698,13.548761),(13.143983,13.550621),(13.138402,13.546539),(13.13096,13.529821),(13.123519,13.522664),(13.123829,13.533439),(13.123519,13.536953),(13.107086,13.531552),(13.088276,13.530596),(13.07701,13.534679),(13.082488,13.544394),(13.082488,13.550621),(13.054893,13.548373),(13.029261,13.539175),(12.939344,13.495069),(12.911749,13.489901),(12.883844,13.495999),(12.87971,13.479695),(12.870614,13.478868),(12.857799,13.484914),(12.842296,13.489126),(12.842296,13.481685),(12.850564,13.479385),(12.857592,13.475328),(12.863897,13.469282),(12.870201,13.461247),(12.857075,13.450653),(12.829273,13.420293),(12.82855,13.417451),(12.829997,13.409364),(12.829273,13.406625),(12.826276,13.406056),(12.817594,13.407296),(12.815011,13.406625),(12.81067,13.404299),(12.797337,13.399648),(12.794547,13.396057),(12.798474,13.384275),(12.804779,13.376859),(12.805192,13.372983),(12.791136,13.371846),(12.788966,13.375464),(12.783281,13.382337),(12.77739,13.386703),(12.7746,13.382698),(12.773256,13.382233),(12.766538,13.365671),(12.740803,13.33482),(12.72437,13.322289),(12.701736,13.31725),(12.690057,13.311979),(12.683132,13.288932),(12.674347,13.283712),(12.659155,13.286606),(12.649233,13.293608),(12.640654,13.302393),(12.630009,13.310377),(12.628769,13.294177),(12.626805,13.28826),(12.62019,13.281438),(12.613266,13.276503),(12.606134,13.27423),(12.602724,1
3.279992),(12.597556,13.276503),(12.561073,13.242759),(12.558179,13.226532),(12.545156,13.208213),(12.538335,13.193356),(12.554251,13.187542),(12.547947,13.165451),(12.542572,13.15478),(12.534511,13.145917),(12.526966,13.143643),(12.504332,13.139613),(12.499784,13.135969),(12.491206,13.082639),(12.486659,13.070805),(12.472706,13.064165),(12.456376,13.067421),(12.437256,13.074009),(12.414312,13.077627),(12.386742,13.07858),(12.351576,13.079797),(12.335247,13.084474),(12.325738,13.078815),(12.3184,13.083854),(12.310959,13.092639),(12.301037,13.098116),(12.291528,13.096824),(12.285327,13.092019),(12.280883,13.086851),(12.277162,13.084474),(12.269204,13.085869),(12.267447,13.089641),(12.267034,13.095171),(12.26321,13.101863),(12.251841,13.108012),(12.201405,13.125427),(12.184765,13.122094),(12.163784,13.107961),(12.149625,13.105584),(12.116449,13.102845),(12.057641,13.116772),(12.038004,13.126358),(11.991908,13.162092),(11.852795,13.245394),(11.742311,13.289448),(11.440004,13.364431),(10.953005,13.361899),(10.771311,13.382698),(10.674986,13.375102),(10.647882,13.369406),(10.161529,13.267202),(10.119775,13.250097),(9.930122,13.145969),(9.912449,13.1292),(9.908211,13.118787),(9.904387,13.094344),(9.899736,13.084319),(9.891778,13.076231),(9.887747,13.075508),(9.882683,13.076231),(9.872038,13.072562),(9.852814,13.060444),(9.819328,13.029206),(9.647657,12.830538),(9.640734,12.822526),(9.629262,12.812785),(9.61841,12.805783),(9.611485,12.802941),(9.590298,12.80139),(9.377908,12.825782),(9.359821,12.824593),(9.306491,12.809839),(9.269387,12.809788),(8.977872,12.843183),(8.942792,12.847201),(8.679346,12.923579),(8.642655,12.944353),(8.625499,12.964791),(8.596767,13.011403),(8.577233,13.032436),(8.533515,13.06463),(8.50964,13.075921),(8.482665,13.081554),(8.47078,13.081657),(8.445975,13.078712),(8.434503,13.075095),(8.423134,13.069488),(8.419,13.066697),(8.415176,13.067059),(8.405151,13.070857),(8.397813,13.075405),(8.297974,13.164624),(8.273066,13.193873),(8.259836,13.205216),(8.246728,13.211292),(8.245677,13.211779),(8.216015,13.219608),(8.201442,13.226377),(8.167646,13.252371),(8.14005,13.269114),(8.116589,13.29105),(8.105117,13.299654),(8.077522,13.310016),(7.823997,13.345285),(7.789684,13.341977),(7.755578,13.327146),(7.391982,13.112534),(7.360667,13.102638),(7.324803,13.102845),(7.22052,13.121577),(7.198196,13.117392),(7.137424,13.047241),(7.090089,13.006391),(7.068695,12.996081),(7.062921,12.99514),(7.044924,12.992206),(6.970509,12.991844),(6.937127,12.995409),(6.906637,13.006856),(6.874185,13.030498),(6.837908,13.063493),(6.778687,13.130285),(6.753879,13.1749),(6.672543,13.321178),(6.509297,13.496051),(6.368944,13.626275),(6.302178,13.663844),(6.284208,13.667656),(6.229831,13.679192),(6.210608,13.675265),(6.152833,13.643561),(6.142395,13.640822),(6.13578,13.643664),(6.129476,13.649065),(6.12038,13.654077),(5.837349,13.763748),(5.554317,13.873418),(5.521761,13.880291),(5.374069,13.855254),(5.346268,13.841559),(5.334795,13.826702),(5.318052,13.789935),(5.304203,13.773941),(5.273507,13.752521),(5.262035,13.747741),(5.227618,13.741333),(5.084371,13.747534),(4.983189,13.72968),(4.925105,13.733091),(4.902367,13.742987),(4.879836,13.764019),(4.856892,13.774096),(4.824852,13.770737),(4.625845,13.723151),(4.506732,13.694669),(4.452575,13.673766),(4.405756,13.641236),(4.248453,13.494139),(4.220548,13.480625),(4.190472,13.475018),(4.125774,13.472977),(4.123707,13.233586),(4.120325,13.210608),(4.088773,12.996236),(3.929093,12.750386),(3.864394,12.689666),(3.6457,12.528539),(3.641565,12.517997),(3.651487,12.
26902),(3.624512,12.137658),(3.623995,12.09425),(3.648903,12.021024),(3.653554,11.986815),(3.636604,11.953587),(3.619551,11.936275),(3.608492,11.921961),(3.603945,11.905579),(3.609733,11.848787),(3.612937,11.838968),(3.618621,11.828375),(3.622032,11.825016),(3.626269,11.822535),(3.64973,11.797834),(3.662029,11.779489),(3.66699,11.759697),(3.659859,11.738096),(3.647767,11.726004),(3.5964,11.695773),(3.589372,11.701509),(3.58007,11.717374),(3.571699,11.72187),(3.568185,11.729518),(3.566221,11.738354),(3.5625,11.746003),(3.557436,11.761815),(3.553715,11.76967),(3.549271,11.773753),(3.54369,11.77706),(3.536972,11.782899),(3.503486,11.829977),(3.495424,11.837521),(3.484262,11.84517),(3.447675,11.857985),(3.40282,11.88527),(3.391658,11.888836),(3.380909,11.895037),(3.369127,11.897311),(3.355071,11.889043),(3.341635,11.882893),(3.327786,11.886407),(3.315487,11.894572),(3.307012,11.902375),(3.292749,11.918653),(3.269908,11.956532),(3.26257,11.974722),(3.257609,11.997408),(3.255129,12.018751),(3.252028,12.024125),(3.238489,12.034202),(3.235389,12.039525),(3.063409,12.193779),(3.022895,12.252483),(3.013077,12.271087),(3.009356,12.276306),(3.000984,12.27889),(2.967808,12.282559),(2.956026,12.286848),(2.944967,12.292481),(2.936802,12.300852),(2.930394,12.323332),(2.922643,12.335631),(2.912618,12.346328),(2.892257,12.357076),(2.844301,12.399244),(2.832416,12.384051),(2.808645,12.383225),(2.798413,12.381209),(2.778156,12.381933),(2.768544,12.380072),(2.757485,12.372941),(2.741465,12.354131),(2.732267,12.346793),(2.721828,12.344261),(2.711286,12.351237),(2.700434,12.347103),(2.69506,12.341212),(2.692166,12.334029),(2.690202,12.326174),(2.686895,12.318422),(2.672942,12.296718),(2.663847,12.291551),(2.628604,12.300801),(2.621679,12.299509),(2.60969,12.290775),(2.603593,12.28783),(2.598632,12.288347),(2.590053,12.293308),(2.586126,12.294031),(2.550573,12.284523),(2.546025,12.281629),(2.53693,12.27305),(2.531246,12.269691),(2.523701,12.269175),(2.5206,12.273671),(2.51905,12.279148),(2.516259,12.281422),(2.504477,12.281835),(2.495072,12.283231),(2.488044,12.280388),(2.483393,12.268038),(2.477709,12.257599),(2.472644,12.259253),(2.46696,12.265041),(2.459519,12.266746),(2.454351,12.263025),(2.446083,12.251811),(2.443706,12.249228),(2.43089,12.247987),(2.407946,12.248918),(2.3942,12.247212),(2.370015,12.236257),(2.36123,12.218894),(2.362057,12.196156),(2.371875,12.140449),(2.405568,12.046708),(2.41115,12.036786),(2.418281,12.028673),(2.428616,12.020353),(2.439262,12.015598),(2.44877,12.013376),(2.455281,12.008674),(2.457348,11.99622),(2.456728,11.979063),(2.44939,11.977513),(2.438951,11.981285),(2.428926,11.980355),(2.422002,11.973172),(2.414147,11.958703),(2.407429,11.95121),(2.398747,11.948781),(2.389342,11.950073),(2.380867,11.947902),(2.375183,11.935035),(2.376526,11.924079),(2.388205,11.906768),(2.390169,11.896536),(2.338079,11.940099),(2.258807,12.048775),(2.188527,12.145461),(2.113803,12.247987),(2.070912,12.306898),(2.051792,12.341935),(2.054169,12.370926),(2.069258,12.383328),(2.126929,12.39511),(2.131477,12.398521),(2.138711,12.408339),(2.145016,12.41175),(2.148013,12.410768),(2.158762,12.405652),(2.163826,12.404774),(2.223254,12.409735),(2.23793,12.413455),(2.244958,12.424152),(2.242994,12.446166),(2.243098,12.451179),(2.246302,12.461669),(2.246302,12.465907),(2.242271,12.473452),(2.229559,12.487146),(2.223874,12.494846),(2.215709,12.510349),(2.210852,12.523061),(2.203204,12.583316),(2.2,12.595925),(2.193282,12.609464),(2.18429,12.620626),(2.165997,12.631737),(2.155455,12.640005),(2.144809,12.650753),(
2.140985,12.656128),(2.135197,12.67561),(2.109049,12.705634),(2.068948,12.716279),(1.971693,12.724237),(1.962495,12.719121),(1.945545,12.700828),(1.934486,12.693542),(1.926631,12.694989),(1.919603,12.698347),(1.911438,12.696642),(1.906994,12.691836),(1.900173,12.678452),(1.883223,12.653957),(1.872681,12.634217),(1.860692,12.61706),(1.843742,12.606105),(1.826069,12.604193),(1.699462,12.61489),(1.597039,12.623675),(1.563863,12.63215),(1.535854,12.647498),(1.467124,12.704394),(1.412037,12.749869),(1.330595,12.817177),(1.304806,12.838508),(1.230756,12.899756),(1.170915,12.949211),(1.113968,12.996236),(1.083789,13.01099),(1.012269,13.016881),(0.983536,13.032358),(0.974648,13.048326),(0.971754,13.067317),(0.971661,13.085782),(0.971341,13.149405),(0.970824,13.243534),(0.970411,13.328335),(0.983536,13.36841),(1.004724,13.364844),(1.084719,13.33358),(1.138462,13.320377),(1.160683,13.311307),(1.167608,13.313426),(1.177633,13.34761),(1.182387,13.358695),(1.187658,13.364121),(1.205745,13.358901),(1.222075,13.344613),(1.241505,13.335544),(1.268687,13.34606),(1.249567,13.367066),(1.23427,13.377583),(1.217424,13.381768),(1.175049,13.38691),(1.157169,13.392646),(1.048235,13.441945),(1.015266,13.465665),(0.996559,13.496051),(0.991391,13.541113),(0.95098,13.583203),(0.896927,13.614932),(0.850728,13.628601),(0.813108,13.625035),(0.795331,13.625965),(0.775694,13.635293),(0.763705,13.646429),(0.762878,13.653044),(0.766289,13.657178),(0.767012,13.660873),(0.766185,13.666971),(0.766392,13.67511),(0.763498,13.682034),(0.75337,13.684101),(0.620665,13.679967),(0.59431,13.688881),(0.58046,13.713789),(0.584801,13.729008),(0.594516,13.74986),(0.600511,13.768721),(0.593689,13.778101),(0.574466,13.785025),(0.5632,13.796782),(0.552968,13.810037),(0.519275,13.831637),(0.504082,13.845358),(0.499926,13.851015),(0.49199,13.861817),(0.483515,13.880343),(0.47566,13.888947),(0.454809,13.902357),(0.448634,13.909617),(0.448065,13.920108),(0.456256,13.938143),(0.456695,13.94685),(0.445921,13.958581),(0.428377,13.967495),(0.411866,13.978424),(0.403934,13.996072),(0.401866,14.013797),(0.391376,14.022582),(0.378354,14.028809),(0.369155,14.039067),(0.369052,14.047852),(0.37732,14.063949),(0.378457,14.070434),(0.37422,14.078883),(0.368742,14.084154),(0.362463,14.088857),(0.356262,14.095704),(0.343575,14.114333),(0.339157,14.12578),(0.343472,14.137536),(0.367398,14.173787),(0.372153,14.185647),(0.377424,14.216601),(0.381118,14.22567),(0.390601,14.237633),(0.391635,14.245927),(0.388792,14.251612),(0.346366,14.307577),(0.202382,14.435545),(0.18865,14.44775),(0.158832,14.496119),(0.152941,14.54671),(0.219707,14.731221),(0.213092,14.761632),(0.184102,14.819562),(0.185756,14.84819),(0.198003,14.863797),(0.211852,14.874804),(0.220844,14.888214),(0.218467,14.910977),(0.212782,14.960716),(0.213196,14.985417),(0.221257,14.995933),(0.353187,14.963429),(0.387035,14.963248),(0.418791,14.969888),(0.483515,14.992109),(0.514831,14.993556),(0.670067,14.939735),(0.683813,14.940872),(0.711408,14.947461),(0.739934,14.958339),(0.769183,14.969062),(0.922145,14.973971),(0.949327,14.979552),(0.973718,14.991257),(1.057537,15.067118),(1.123063,15.126313),(1.203161,15.198789),(1.270857,15.259897),(1.297832,15.275735),(1.331526,15.283616),(1.405216,15.286148),(1.501748,15.289352),(1.598383,15.292608),(1.694914,15.295734),(1.791446,15.29899),(1.888081,15.302194),(1.984612,15.305398),(2.081351,15.308576),(2.177882,15.311831),(2.274517,15.314984),(2.371049,15.318239),(2.467684,15.321417),(2.564215,15.324595),(2.660953,15.327825),(2.757485,15.331081),(2.85412,15.33420
7),(2.950651,15.337463),(3.000158,15.339117),(3.005739,15.35232),(3.005842,15.389294),(3.007806,15.407665),(3.010286,15.417665),(3.017521,15.422832),(3.03354,15.42645),(3.073021,15.427199),(3.19229,15.40751),(3.380393,15.376324),(3.483332,15.359296),(3.48881,15.357539),(3.507103,15.353973),(3.516508,15.469186),(3.526534,15.495954),(3.614074,15.547734),(3.692208,15.63145),(3.728899,15.65088),(3.808377,15.665582),(3.846101,15.685297),(3.871215,15.714804),(3.873849,15.720882),(3.886512,15.750099),(3.89447,15.78865),(3.903462,15.886318),(3.909869,15.904767),(3.925062,15.927608),(3.98325,15.983987),(3.98418,15.986881),(3.9848,15.989826),(3.984594,15.995924),(3.984697,15.997371),(3.98449,15.998766),(3.983973,16.00011),(3.98325,16.00135),(3.970848,16.030857),(3.96692,16.058504),(3.971158,16.086099),(4.060558,16.298334),(4.075647,16.321072),(4.094768,16.340812),(4.118229,16.358331),(4.16174,16.379983),(4.1759,16.392644),(4.183961,16.416053),(4.183444,16.526538),(4.183031,16.612682),(4.182308,16.746472),(4.181998,16.809621),(4.184581,16.818509),(4.197604,16.838508),(4.202048,16.848895),(4.19657,16.947132),(4.197811,16.965219),(4.203702,16.982789),(4.21166,16.986044),(4.222305,16.986561),(4.235638,16.995863),(4.235328,17.100818),(4.235018,17.163915),(4.234707,17.287525),(4.234294,17.411186),(4.233777,17.534796),(4.233364,17.658458),(4.233054,17.78212),(4.23264,17.905782),(4.232408,17.975234),(4.232227,18.029443),(4.231917,18.153105),(4.231504,18.276741),(4.23109,18.400325),(4.230677,18.524038),(4.230271,18.645374),(4.230263,18.647648),(4.22985,18.771284),(4.229436,18.894946),(4.229023,19.018582),(4.22861,19.142244),(4.417229,19.178417),(4.605848,19.214616),(4.794467,19.250764),(4.983189,19.286963),(5.082821,19.306006),(5.301826,19.347993),(5.520831,19.389903),(5.620463,19.408997),(5.749344,19.433699),(5.794302,19.449796),(5.837607,19.478631),(5.916362,19.546276),(5.979407,19.600536),(6.042556,19.654745),(6.105601,19.709005),(6.168646,19.763213),(6.231692,19.817474),(6.294737,19.871682),(6.357782,19.925917),(6.420931,19.980177),(6.483769,20.034385),(6.546918,20.088646),(6.609963,20.142854),(6.672957,20.197115),(6.736105,20.251323),(6.799047,20.305532),(6.862092,20.35974),(6.925138,20.414001),(6.982808,20.46361),(7.020532,20.495443),(7.098047,20.558746),(7.169464,20.616986),(7.240881,20.675173),(7.312297,20.733464),(7.383611,20.791652),(7.482726,20.872577),(7.613778,20.94973),(7.681887,20.989831),(7.74979,21.029984),(7.8179,21.070085),(7.886009,21.110237),(7.953912,21.15039),(8.022022,21.190491),(8.090131,21.230592),(8.158034,21.270744),(8.226143,21.310845),(8.294253,21.350946),(8.362259,21.391047),(8.430265,21.431148),(8.498375,21.4713),(8.566381,21.51135),(8.634491,21.551502),(8.702497,21.591655),(8.770606,21.631807),(8.838613,21.671908),(8.906722,21.712061),(8.974832,21.75211),(9.042838,21.792263),(9.110844,21.832364),(9.178953,21.872516),(9.24696,21.912565),(9.314966,21.952718),(9.383075,21.992819),(9.451082,22.03292),(9.519088,22.073072),(9.587197,22.113225),(9.655203,22.153326),(9.723313,22.193427),(9.791319,22.233579),(9.859325,22.27368),(9.927435,22.313781),(9.995441,22.353882),(10.063447,22.394035),(10.131557,22.434136),(10.199563,22.474237),(10.267569,22.514337),(10.335679,22.554542),(10.403788,22.594643),(10.471691,22.634744),(10.539801,22.674896),(10.607807,22.714945),(10.675813,22.755046),(10.743923,22.795199),(10.812032,22.835326),(10.880038,22.875375),(10.948148,22.915527),(11.016257,22.955654),(11.084367,22.995755),(11.15227,23.035882),(11.220379,23.07606),(11.288489,23.116161),(11.3563
92,23.156262),(11.424501,23.196415),(11.492611,23.236516),(11.560514,23.276591),(11.628623,23.316717),(11.696733,23.356844),(11.764739,23.396945),(11.832745,23.437046),(11.900855,23.477173),(11.968861,23.517351),(12.057021,23.497611),(12.145284,23.477948),(12.233651,23.458285),(12.321811,23.438622),(12.409971,23.418882),(12.498337,23.399245),(12.586497,23.379582),(12.674657,23.359893),(12.763024,23.340179),(12.851184,23.32049),(12.939447,23.300827),(13.027607,23.281138),(13.115871,23.261449),(13.204134,23.241709),(13.292398,23.222098),(13.380661,23.202383),(13.482257,23.179672),(13.599562,23.119029),(13.655786,23.072624),(13.780895,22.969633),(13.906055,22.866616),(14.031009,22.763702),(14.156169,22.660737),(14.201644,22.62322),(14.216217,22.616295)] +Norfolk Island [(167.984141,-29.017836),(167.996349,-29.025649),(167.99464,-29.042576),(167.985037,-29.059259),(167.973481,-29.066339),(167.970063,-29.068943),(167.967947,-29.072035),(167.966482,-29.075779),(167.966645,-29.080011),(167.960704,-29.065606),(167.95045,-29.056573),(167.939708,-29.055759),(167.932628,-29.066339),(167.926443,-29.066339),(167.924327,-29.055597),(167.920177,-29.045505),(167.918305,-29.036391),(167.922374,-29.028741),(167.929942,-29.019464),(167.930512,-29.01214),(167.924571,-29.008722),(167.91212,-29.011651),(167.926524,-28.997491),(167.945567,-29.001235),(167.965831,-29.011814),(167.984141,-29.017836)] +Niue [(-169.851145,-18.965102),(-169.825307,-18.96795),(-169.822621,-18.972589),(-169.79426,-19.047133),(-169.782908,-19.068943),(-169.796864,-19.073907),(-169.805979,-19.081964),(-169.823842,-19.102309),(-169.865427,-19.129653),(-169.872996,-19.125909),(-169.880971,-19.129083),(-169.898915,-19.142755),(-169.902048,-19.132989),(-169.909983,-19.128595),(-169.921091,-19.128025),(-169.933705,-19.129653),(-169.930531,-19.126397),(-169.927968,-19.122817),(-169.924794,-19.119317),(-169.919423,-19.115981),(-169.924062,-19.099379),(-169.926259,-19.094903),(-169.950429,-19.087335),(-169.947703,-19.069431),(-169.926259,-19.03753),(-169.927968,-19.031834),(-169.931589,-19.026544),(-169.934397,-19.020929),(-169.933705,-19.013604),(-169.930247,-19.008396),(-169.926381,-19.00742),(-169.921824,-19.007013),(-169.916005,-19.003106),(-169.895823,-18.965102),(-169.863718,-18.964044),(-169.851145,-18.965102)] +Nepal 
[(81.779018,30.358045),(81.801497,30.3613),(81.822788,30.368173),(81.844389,30.371171),(81.887177,30.354996),(81.92149,30.357373),(81.941231,30.353962),(81.950119,30.348381),(81.962108,30.333653),(81.969032,30.328279),(81.978851,30.326832),(81.988463,30.329778),(81.998178,30.334067),(82.007635,30.336806),(82.049854,30.339235),(82.073212,30.337219),(82.088767,30.330088),(82.093211,30.31474),(82.0821,30.257327),(82.084994,30.230817),(82.095898,30.213454),(82.114243,30.202189),(82.140391,30.194024),(82.155688,30.181363),(82.150985,30.162243),(82.127576,30.124364),(82.128713,30.110773),(82.135224,30.089741),(82.144526,30.069432),(82.153827,30.058166),(82.17424,30.055479),(82.192171,30.061474),(82.210671,30.064419),(82.255733,30.039666),(82.27599,30.036927),(82.296506,30.036824),(82.318882,30.031811),(82.32715,30.025714),(82.339087,30.009229),(82.347304,30.003441),(82.363737,30.001167),(82.379137,30.003234),(82.394433,30.003131),(82.409781,29.994604),(82.422855,29.97967),(82.438926,29.965769),(82.457065,29.95409),(82.475823,29.945512),(82.524244,29.932386),(82.541814,29.923239),(82.618502,29.839833),(82.635865,29.829033),(82.646407,29.829343),(82.666974,29.835028),(82.67731,29.834407),(82.688988,29.827741),(82.689195,29.819576),(82.684648,29.809706),(82.681754,29.797717),(82.690952,29.780741),(82.751517,29.739323),(82.77012,29.714156),(82.781076,29.705113),(82.793065,29.699506),(82.805157,29.695966),(82.815544,29.690127),(82.822365,29.677957),(82.836576,29.659793),(82.857454,29.666201),(82.880088,29.681109),(82.899725,29.688422),(82.90913,29.684623),(82.923703,29.669663),(82.931919,29.664185),(82.9421,29.662945),(82.964321,29.665891),(82.975896,29.66403),(83.032895,29.620467),(83.040905,29.610287),(83.054496,29.586051),(83.064935,29.57879),(83.079559,29.579643),(83.085553,29.590391),(83.088447,29.604421),(83.093357,29.615144),(83.100281,29.617676),(83.126688,29.623413),(83.150046,29.625635),(83.162086,29.62517),(83.173558,29.621268),(83.187253,29.612095),(83.19087,29.606902),(83.194487,29.593647),(83.197175,29.589099),(83.204926,29.584914),(83.221876,29.579824),(83.229317,29.575948),(83.252572,29.5552),(83.26084,29.542048),(83.26332,29.527088),(83.25991,29.518768),(83.254019,29.51329),(83.248748,29.506572),(83.247197,29.494738),(83.26394,29.473008),(83.327968,29.485023),(83.355201,29.460735),(83.357785,29.442752),(83.355201,29.427972),(83.35701,29.415363),(83.38724,29.393504),(83.389411,29.382445),(83.388274,29.370224),(83.393132,29.356245),(83.405017,29.346479),(83.417781,29.340898),(83.428168,29.333456),(83.437212,29.305241),(83.448219,29.296766),(83.475814,29.286921),(83.493384,29.276147),(83.502996,29.261032),(83.512504,29.223489),(83.514571,29.20181),(83.517052,29.191708),(83.523718,29.183594),(83.535965,29.179228),(83.56139,29.176231),(83.572035,29.168402),(83.583818,29.162149),(83.624435,29.155457),(83.639266,29.154759),(83.654992,29.160414),(83.65601,29.160779),(83.663399,29.174809),(83.66774,29.192328),(83.675078,29.208761),(83.69549,29.226176),(83.719727,29.233591),(83.745358,29.23447),(83.770628,29.232248),(83.79321,29.234754),(83.837859,29.254004),(83.86163,29.260205),(83.868865,29.263435),(83.877391,29.276767),(83.883438,29.282012),(83.914392,29.289402),(83.93315,29.291779),(83.945553,29.291831),(83.957955,29.28904),(83.967257,29.284699),(83.973251,29.277025),(83.975835,29.263745),(83.988237,29.269532),(84.000898,29.27253),(84.012525,29.270204),(84.021517,29.259766),(84.035211,29.247622),(84.05304,29.25036),(84.072057,29.256975),(84.08973,29.256613),(84.099135,29.247001),(84.105336,
29.219923),(84.112364,29.209097),(84.126007,29.206255),(84.141975,29.20827),(84.155462,29.206642),(84.161663,29.192534),(84.155204,29.175946),(84.141716,29.160263),(84.13226,29.143804),(84.137996,29.124657),(84.160113,29.105486),(84.166211,29.098458),(84.169725,29.088923),(84.175461,29.057401),(84.198199,29.045308),(84.220575,29.0389),(84.231427,29.025981),(84.206725,28.940664),(84.225291,28.914115),(84.235147,28.90002),(84.285997,28.873743),(84.354003,28.861289),(84.366302,28.856741),(84.376586,28.848835),(84.393071,28.82434),(84.41431,28.808269),(84.422733,28.799173),(84.437822,28.753336),(84.450845,28.73388),(84.475753,28.727292),(84.498594,28.72786),(84.515647,28.7214),(84.531977,28.712693),(84.551821,28.706388),(84.607218,28.698198),(84.616209,28.69365),(84.625821,28.67608),(84.632642,28.668794),(84.65228,28.662386),(84.671141,28.658949),(84.682975,28.650759),(84.677756,28.604405),(84.69083,28.596137),(84.736615,28.594173),(84.755632,28.585414),(84.781781,28.558904),(84.799144,28.546631),(84.81196,28.54198),(84.823949,28.540533),(84.879294,28.543634),(84.891438,28.541773),(84.907871,28.537019),(84.918723,28.535675),(84.92849,28.538466),(84.936448,28.54477),(84.948954,28.568283),(84.954948,28.574303),(84.984766,28.588592),(84.994067,28.590814),(85.002491,28.594431),(85.009157,28.602545),(85.018769,28.619856),(85.02528,28.628409),(85.032101,28.634067),(85.054942,28.638692),(85.075044,28.631664),(85.11189,28.608823),(85.160982,28.595),(85.16801,28.583192),(85.156952,28.533143),(85.153748,28.524901),(85.14765,28.515832),(85.140363,28.510922),(85.123155,28.504592),(85.117212,28.497331),(85.115662,28.491492),(85.115559,28.472733),(85.109461,28.459298),(85.083674,28.445526),(85.077577,28.435681),(85.08569,28.381008),(85.08569,28.361526),(85.081867,28.331702),(85.080212,28.318789),(85.087395,28.304113),(85.109409,28.292254),(85.134162,28.291969),(85.154574,28.300858),(85.173591,28.30432),(85.205734,28.278275),(85.223924,28.271686),(85.243406,28.267914),(85.259581,28.266932),(85.27312,28.269826),(85.301025,28.281066),(85.312601,28.282771),(85.325727,28.277397),(85.334822,28.268689),(85.345364,28.261248),(85.362572,28.259672),(85.377817,28.265976),(85.401381,28.288791),(85.41523,28.296439),(85.43156,28.297059),(85.44665,28.29215),(85.475692,28.279102),(85.497551,28.282616),(85.52065,28.282926),(85.543698,28.280394),(85.565299,28.275381),(85.585504,28.263186),(85.59894,28.250835),(85.611807,28.251145),(85.639093,28.287887),(85.646947,28.2937),(85.668135,28.300548),(85.675059,28.306387),(85.676196,28.31512),(85.67599,28.324732),(85.679142,28.332897),(85.692939,28.335222),(85.698004,28.313441),(85.699761,28.269102),(85.715574,28.243368),(85.734487,28.221534),(85.756605,28.203241),(85.782753,28.1881),(85.826885,28.170478),(85.848279,28.158928),(85.863368,28.143348),(85.867037,28.132935),(85.87174,28.110921),(85.889103,28.090276),(85.891067,28.080897),(85.889827,28.071259),(85.890447,28.059193),(85.898302,28.036016),(85.908947,28.022425),(85.943984,27.994726),(85.956593,27.976226),(85.956283,27.956873),(85.951838,27.936797),(85.951218,27.91623),(85.96269,27.891374),(85.98026,27.885172),(86.026459,27.889927),(86.051419,27.88755),(86.0546,27.888013),(86.061703,27.889048),(86.075655,27.894784),(86.107178,27.922845),(86.112862,27.926798),(86.112242,27.937262),(86.107178,27.944239),(86.100563,27.950233),(86.095602,27.957855),(86.095085,27.968888),(86.098031,27.977518),(86.098134,27.985631),(86.089401,27.994726),(86.089401,27.994881),(86.089298,27.994933),(86.089298,27.994881),(86.079169,28.001134),(86.077
619,28.006948),(86.079531,28.01333),(86.079531,28.02165),(86.083923,28.038548),(86.084699,28.043871),(86.081546,28.0517),(86.070177,28.066712),(86.068731,28.076763),(86.15596,28.156525),(86.176011,28.153011),(86.186036,28.133529),(86.187173,28.108699),(86.180765,28.088752),(86.172238,28.071828),(86.17446,28.062448),(86.18185,28.054878),(86.189137,28.043612),(86.19601,28.01457),(86.203038,28.002581),(86.217765,27.994985),(86.21792,27.994933),(86.218385,27.994726),(86.238539,27.989559),(86.259003,27.987647),(86.278072,27.983694),(86.29466,27.972376),(86.318224,27.945789),(86.327733,27.939691),(86.337551,27.938037),(86.345613,27.940931),(86.353674,27.944807),(86.362976,27.946099),(86.371968,27.941965),(86.405713,27.917005),(86.425298,27.909925),(86.438837,27.910856),(86.475476,27.927599),(86.496921,27.937882),(86.510461,27.952429),(86.516765,27.971369),(86.516764,27.971432),(86.516455,27.994726),(86.516197,28.012245),(86.522295,28.023717),(86.530925,28.03369),(86.538366,28.047178),(86.540743,28.062164),(86.54126,28.07821),(86.544154,28.093015),(86.553559,28.104151),(86.569527,28.106606),(86.583118,28.098028),(86.596864,28.086504),(86.612315,28.080277),(86.627146,28.083481),(86.64947,28.10118),(86.661976,28.106838),(86.680063,28.105727),(86.699648,28.098751),(86.717683,28.088261),(86.731739,28.076918),(86.735977,28.064851),(86.732204,28.034957),(86.739801,28.021495),(86.76998,28.01209),(86.840466,28.014777),(86.868165,27.994881),(86.868165,27.994726),(86.876898,27.970645),(86.893073,27.954186),(86.914725,27.945117),(86.940357,27.943102),(86.970226,27.947443),(86.982093,27.950674),(86.988726,27.952481),(87.005676,27.951499),(87.030532,27.938089),(87.062003,27.908427),(87.116522,27.844581),(87.155796,27.825796),(87.181996,27.824504),(87.232897,27.829724),(87.290516,27.816081),(87.313926,27.828587),(87.335837,27.846363),(87.363845,27.855252),(87.385343,27.849412),(87.380692,27.835821),(87.369116,27.819078),(87.369116,27.803937),(87.38679,27.804402),(87.47557,27.826701),(87.514534,27.835201),(87.531587,27.836803),(87.551327,27.831894),(87.555978,27.827088),(87.559802,27.819078),(87.565073,27.810862),(87.573342,27.805022),(87.589981,27.804609),(87.621091,27.819543),(87.636077,27.823652),(87.659951,27.819879),(87.678529,27.813351),(87.700466,27.805642),(87.726821,27.807761),(87.756379,27.82037),(87.779841,27.839439),(87.798651,27.863468),(87.814154,27.89096),(87.825884,27.906566),(87.836685,27.908323),(87.856115,27.898608),(87.93456,27.895301),(87.945205,27.892304),(87.966392,27.882744),(87.978433,27.880522),(87.991817,27.88264),(88.018121,27.892149),(88.029954,27.893337),(88.052795,27.886516),(88.095739,27.865329),(88.118218,27.860885),(88.143023,27.855717),(88.156407,27.851273),(88.164623,27.845356),(88.166587,27.833599),(88.154598,27.815151),(88.159662,27.77412),(88.149327,27.748772),(88.134858,27.723167),(88.11088,27.639503),(88.048765,27.545348),(88.02334,27.494912),(88.0221,27.484241),(88.02334,27.474655),(88.028094,27.455121),(88.030781,27.450729),(88.039773,27.442357),(88.041633,27.438068),(88.039308,27.42949),(88.034812,27.423805),(88.029593,27.418741),(88.02582,27.412075),(88.017242,27.389234),(88.014762,27.378072),(88.015795,27.364222),(88.020136,27.35337),(88.032538,27.333888),(88.035122,27.322571),(88.029024,27.298076),(88.004663,27.249163),(87.989337,27.218391),(87.985461,27.14837),(87.970837,27.119224),(87.969183,27.110801),(87.970733,27.10274),(87.975488,27.095143),(87.9913,27.081501),(88.009491,27.045637),(88.027371,27.035353),(88.042822,27.028946),(88.055948,27.018042),(88.076877,26
.99179),(88.096824,26.959337),(88.11181,26.924301),(88.12044,26.90885),(88.142816,26.878412),(88.151394,26.862806),(88.155321,26.845598),(88.159042,26.802551),(88.167827,26.762915),(88.169067,26.744002),(88.167517,26.725037),(88.163383,26.705141),(88.14664,26.661216),(88.101681,26.581944),(88.087136,26.5391),(88.079822,26.517556),(88.07915,26.507375),(88.082251,26.494921),(88.082251,26.49487),(88.074189,26.453942),(88.044321,26.405676),(88.006648,26.369864),(87.975488,26.366557),(87.961225,26.378959),(87.928979,26.396426),(87.914768,26.40826),(87.908515,26.41782),(87.902004,26.435287),(87.897301,26.443452),(87.894769,26.442935),(87.870688,26.460712),(87.869809,26.464639),(87.852084,26.46066),(87.821595,26.437509),(87.804439,26.437354),(87.785938,26.445984),(87.768988,26.451461),(87.756173,26.446914),(87.749455,26.425727),(87.74222,26.410482),(87.727544,26.403764),(87.710904,26.405676),(87.697572,26.41627),(87.68119,26.424228),(87.659641,26.41751),(87.648745,26.409993),(87.623984,26.392912),(87.586984,26.378029),(87.552051,26.386711),(87.480324,26.423401),(87.449628,26.428621),(87.416452,26.426967),(87.384206,26.418647),(87.356404,26.403712),(87.344932,26.389346),(87.326225,26.353328),(87.314029,26.343768),(87.30049,26.34599),(87.258425,26.36294),(87.2453,26.370226),(87.236308,26.3833),(87.229435,26.398752),(87.2191,26.408105),(87.188455,26.399578),(87.135022,26.394204),(87.10629,26.404746),(87.083294,26.432083),(87.066706,26.465621),(87.056681,26.49487),(87.056474,26.49487),(87.056474,26.494921),(87.045002,26.544272),(87.044433,26.561171),(87.041333,26.580187),(87.029912,26.579774),(87.015443,26.568974),(86.985212,26.540655),(86.972448,26.531973),(86.935153,26.520289),(86.907129,26.51151),(86.875968,26.494921),(86.865684,26.472442),(86.846357,26.45265),(86.821863,26.438129),(86.796438,26.431514),(86.78724,26.433065),(86.751893,26.445519),(86.738199,26.44371),(86.730706,26.433788),(86.724091,26.421954),(86.713601,26.414565),(86.695204,26.418234),(86.625337,26.456319),(86.5579,26.484018),(86.537953,26.49487),(86.537746,26.49487),(86.52431,26.509081),(86.510616,26.520139),(86.494906,26.527839),(86.475476,26.531973),(86.444832,26.543084),(86.383647,26.572849),(86.353674,26.582616),(86.344838,26.582616),(86.323185,26.580084),(86.316002,26.580963),(86.309026,26.587991),(86.308613,26.60215),(86.301378,26.609023),(86.284428,26.61202),(86.263086,26.609075),(86.225155,26.597396),(86.202883,26.58458),(86.195855,26.582668),(86.185364,26.584425),(86.179266,26.588611),(86.17446,26.593313),(86.167794,26.596621),(86.152653,26.600651),(86.146355,26.601356),(86.144798,26.60153),(86.122061,26.60029),(86.115859,26.602563),(86.110692,26.606749),(86.041704,26.645455),(86.01137,26.654447),(85.975713,26.64437),(85.952148,26.642044),(85.934682,26.632897),(85.866366,26.579929),(85.844765,26.568509),(85.828538,26.566131),(85.821614,26.571713),(85.819753,26.579516),(85.819547,26.588301),(85.817635,26.596724),(85.809728,26.603028),(85.800065,26.600703),(85.789988,26.597034),(85.780996,26.599204),(85.727046,26.6376),(85.712887,26.653206),(85.702241,26.68845),(85.709631,26.76245),(85.701828,26.796608),(85.687772,26.811853),(85.609379,26.851024),(85.598578,26.854383),(85.519462,26.826426),(85.475692,26.805238),(85.439622,26.78772),(85.421535,26.782656),(85.40319,26.787617),(85.385878,26.788444),(85.369238,26.775008),(85.355352,26.759912),(85.353219,26.757593),(85.337044,26.746792),(85.302369,26.737336),(85.28728,26.736974),(85.194934,26.758885),(85.165685,26.786273),(85.16553,26.820793),(85.162016,26.851024),(85.122845,26
.8657),(85.100211,26.863529),(85.043987,26.843737),(85.018562,26.845804),(85.016857,26.85924),(85.018045,26.874433),(85.000837,26.882236),(84.988176,26.883787),(84.975774,26.886887),(84.952726,26.897636),(84.944251,26.915723),(84.938567,26.9366),(84.924046,26.955668),(84.901567,26.967192),(84.851751,26.982127),(84.828289,26.994839),(84.828134,26.994943),(84.817489,27.0106),(84.801934,27.013753),(84.785501,27.008482),(84.771962,26.99918),(84.760697,26.999025),(84.640239,27.028377),(84.627268,27.03649),(84.62148,27.057988),(84.630369,27.080829),(84.644528,27.103721),(84.654398,27.125374),(84.659824,27.165113),(84.657654,27.203405),(84.648197,27.240716),(84.631919,27.277044),(84.606546,27.310479),(84.577039,27.329031),(84.289408,27.376108),(84.267703,27.388562),(84.248893,27.412436),(84.239023,27.431143),(84.225536,27.440393),(84.195253,27.436053),(84.185848,27.438636),(84.175099,27.462976),(84.165746,27.472174),(84.141716,27.480753),(84.130968,27.486411),(84.121563,27.494964),(84.11717,27.513283),(84.099548,27.516874),(84.079601,27.509536),(84.028907,27.453726),(84.007668,27.440807),(83.975835,27.439722),(83.935941,27.446078),(83.923022,27.449954),(83.922043,27.449696),(83.899871,27.443856),(83.853517,27.440962),(83.834087,27.434037),(83.842717,27.418069),(83.87119,27.38944),(83.873671,27.380087),(83.87765,27.369907),(83.877391,27.361742),(83.867521,27.358383),(83.861733,27.354507),(83.854602,27.34505),(83.85334,27.346194),(83.847988,27.351045),(83.801995,27.365928),(83.663399,27.43228),(83.590432,27.45662),(83.480981,27.469746),(83.387034,27.470469),(83.360989,27.462201),(83.355925,27.452486),(83.353857,27.44029),(83.355821,27.428663),(83.370962,27.410214),(83.369567,27.398174),(83.362746,27.385616),(83.341145,27.356936),(83.32435,27.341691),(83.304868,27.331925),(83.282441,27.330943),(83.259496,27.338126),(83.249419,27.348099),(83.243115,27.362155),(83.231591,27.381224),(83.21924,27.393781),(83.169631,27.431195),(83.132786,27.444217),(83.010468,27.443391),(82.947164,27.457317),(82.901275,27.480365),(82.876264,27.487548),(82.752137,27.494964),(82.72971,27.518166),(82.718857,27.556123),(82.709246,27.630924),(82.69705,27.669397),(82.679687,27.694409),(82.673891,27.696458),(82.652143,27.70415),(82.526156,27.675211),(82.440683,27.666426),(82.401926,27.677175),(82.347924,27.726009),(82.270409,27.760477),(82.151037,27.848275),(82.107422,27.863572),(82.090369,27.872331),(82.071662,27.89003),(82.051611,27.905171),(82.050212,27.905586),(82.02722,27.912406),(81.975905,27.916953),(81.946398,27.905352),(81.906091,27.863107),(81.883198,27.849051),(81.855654,27.850937),(81.827852,27.865639),(81.800154,27.884087),(81.750131,27.909667),(81.710392,27.94752),(81.688946,27.963204),(81.665433,27.9708),(81.615049,27.981239),(81.59536,27.994726),(81.59536,27.994778),(81.595102,27.994881),(81.582183,28.013278),(81.561925,28.025991),(81.473455,28.066402),(81.458211,28.07728),(81.454025,28.086607),(81.45294,28.097046),(81.448237,28.111412),(81.435267,28.129783),(81.417335,28.147017),(81.396044,28.160634),(81.373203,28.167843),(81.357907,28.165982),(81.351447,28.15681),(81.347262,28.144433),(81.338115,28.132728),(81.323697,28.126243),(81.307057,28.123814),(81.296205,28.12831),(81.299409,28.142521),(81.283079,28.146087),(81.276775,28.153787),(81.280754,28.161977),(81.295947,28.167455),(81.282356,28.191588),(81.224323,28.250783),(81.210732,28.278637),(81.190372,28.338116),(81.169701,28.361319),(81.146344,28.372249),(81.047066,28.389088),(81.000409,28.397002),(80.993175,28.407828),(80.991314,28.420256),(80.9878,28.43097
9),(80.976018,28.436922),(80.960515,28.432271),(80.941188,28.432684),(80.921344,28.436198),(80.905273,28.440849),(80.88667,28.452786),(80.881864,28.466842),(80.880365,28.48188),(80.872252,28.496866),(80.857989,28.502447),(80.81701,28.502551),(80.798406,28.505703),(80.781663,28.514746),(80.743526,28.549964),(80.727093,28.559679),(80.695777,28.567508),(80.67862,28.574717),(80.668182,28.586344),(80.648338,28.619753),(80.635625,28.627866),(80.598522,28.633783),(80.581365,28.638951),(80.563589,28.647038),(80.556871,28.655022),(80.557387,28.66435),(80.559041,28.673031),(80.555837,28.678948),(80.534444,28.679555),(80.517596,28.680033),(80.497546,28.670137),(80.487624,28.656469),(80.484937,28.639287),(80.493412,28.575647),(80.488451,28.562444),(80.468814,28.571849),(80.426543,28.616807),(80.408352,28.626342),(80.388302,28.627246),(80.369078,28.622647),(80.349751,28.620218),(80.329701,28.627479),(80.318332,28.640113),(80.291047,28.689671),(80.283399,28.695356),(80.265209,28.699283),(80.257044,28.702745),(80.249396,28.710213),(80.238957,28.726258),(80.233273,28.732718),(80.216426,28.741942),(80.181183,28.74729),(80.162063,28.753285),(80.147593,28.76331),(80.11421,28.802584),(80.081861,28.819353),(80.073054,28.820925),(80.054782,28.824185),(80.036386,28.837026),(80.030288,28.87767),(80.031115,28.897798),(80.033905,28.915936),(80.04052,28.932809),(80.052922,28.949113),(80.068425,28.959939),(80.085168,28.967484),(80.099431,28.977276),(80.107906,28.994588),(80.107906,28.994769),(80.104702,29.027609),(80.113073,29.072206),(80.132607,29.110214),(80.16351,29.123366),(80.180563,29.121324),(80.20144,29.121169),(80.220664,29.126079),(80.233066,29.139308),(80.230482,29.154733),(80.213739,29.196565),(80.218803,29.211086),(80.236063,29.213076),(80.248672,29.204394),(80.258181,29.202456),(80.278748,29.268085),(80.282779,29.291314),(80.280092,29.31015),(80.272754,29.315705),(80.263555,29.315318),(80.254977,29.316635),(80.249499,29.326997),(80.242161,29.367408),(80.220974,29.400119),(80.213739,29.416888),(80.217666,29.434639),(80.228312,29.441718),(80.257457,29.450064),(80.263452,29.459237),(80.266242,29.47213),(80.273064,29.478667),(80.282055,29.4843),(80.29115,29.494609),(80.29146,29.494738),(80.311511,29.508122),(80.320089,29.515822),(80.327221,29.52484),(80.327634,29.52993),(80.32505,29.535511),(80.323603,29.54166),(80.327221,29.548585),(80.332698,29.552461),(80.345101,29.558352),(80.350785,29.562073),(80.373316,29.58419),(80.385201,29.604835),(80.386752,29.626823),(80.377553,29.652946),(80.363911,29.679714),(80.354402,29.70488),(80.354299,29.730279),(80.368975,29.757926),(80.395227,29.776582),(80.454861,29.790586),(80.476152,29.80614),(80.527312,29.862416),(80.549533,29.89368),(80.562865,29.929699),(80.57134,29.946855),(80.586223,29.954142),(80.622293,29.958173),(80.641413,29.963444),(80.654126,29.970575),(80.680171,29.992072),(80.715827,30.013311),(80.725749,30.022768),(80.755721,30.064574),(80.769674,30.077287),(80.829825,30.117129),(80.849669,30.143381),(80.83644,30.170046),(80.850393,30.181931),(80.867911,30.200173),(80.883879,30.210405),(80.893646,30.198106),(80.903309,30.180433),(80.920828,30.17666),(80.988162,30.196556),(80.996017,30.196969),(81.006507,30.188908),(81.019426,30.172268),(81.030898,30.152734),(81.043663,30.12023),(81.066607,30.087157),(81.074203,30.071912),(81.084454,30.027876),(81.084849,30.026179),(81.097768,30.016929),(81.124588,30.022768),(81.148101,30.023233),(81.194506,30.004475),(81.21683,30.008195),(81.227269,30.022458),(81.225719,30.050467),(81.239516,30.058787),(81.257086,30.06354
1),(81.266801,30.069949),(81.267903,30.072053),(81.272021,30.079922),(81.275948,30.095373),(81.274501,30.102195),(81.266336,30.117749),(81.266956,30.125087),(81.277705,30.131857),(81.29326,30.132839),(81.309538,30.132064),(81.322405,30.133769),(81.33827,30.143536),(81.350052,30.15723),(81.357287,30.17356),(81.361834,30.204772),(81.374392,30.223118),(81.377854,30.232626),(81.376924,30.241204),(81.370516,30.257276),(81.369017,30.266061),(81.368036,30.301046),(81.37155,30.31691),(81.382143,30.331276),(81.387259,30.345126),(81.384882,30.360939),(81.387518,30.373909),(81.407413,30.37918),(81.42612,30.372514),(81.46281,30.341353),(81.483067,30.331896),(81.509371,30.328382),(81.520688,30.331948),(81.531953,30.370344),(81.536811,30.37856),(81.544252,30.382643),(81.563269,30.385588),(81.576705,30.390963),(81.581976,30.399438),(81.585077,30.408119),(81.591588,30.414269),(81.614015,30.416904),(81.635719,30.410961),(81.757366,30.362902),(81.779018,30.358045)] +Nauru [(166.938813,-0.490411),(166.955577,-0.497979),(166.958263,-0.517673),(166.951345,-0.539158),(166.938813,-0.551853),(166.916352,-0.54754),(166.906993,-0.524835),(166.913422,-0.500177),(166.938813,-0.490411)] +Poland [(19.002126,54.344916),(19.377208,54.377631),(19.459972,54.401028),(19.499278,54.406399),(19.531016,54.414944),(19.59254,54.452379),(19.609548,54.456732),(19.63077,54.446652),(19.680276,54.436678),(19.758256,54.434256),(19.964703,54.427842),(20.35207,54.415749),(20.640838,54.406784),(20.929606,54.397818),(21.287413,54.386759),(21.720772,54.373271),(21.998946,54.364667),(22.27712,54.356063),(22.510388,54.348777),(22.698387,54.342937),(22.76722,54.35627),(22.78603,54.36521),(22.811344,54.392633),(22.812695,54.394097),(22.837603,54.400918),(22.858894,54.399213),(22.918838,54.381126),(22.92969,54.380299),(22.952428,54.38309),(22.96266,54.381695),(22.973305,54.37539),(22.97899,54.367587),(22.983434,54.359784),(22.989738,54.353376),(23.001004,54.34888),(23.035317,54.346503),(23.041622,54.340974),(23.038004,54.331465),(23.032216,54.320148),(23.032113,54.309554),(23.050096,54.294827),(23.073041,54.294878),(23.096915,54.300821),(23.117069,54.303457),(23.135259,54.298289),(23.197271,54.267851),(23.235408,54.254157),(23.316023,54.236277),(23.33599,54.226294),(23.354471,54.217054),(23.364082,54.208269),(23.382273,54.187856),(23.391471,54.180105),(23.401806,54.175971),(23.423304,54.172508),(23.432502,54.169305),(23.449039,54.154938),(23.462991,54.13494),(23.473947,54.112564),(23.481595,54.091221),(23.496167,54.044558),(23.496477,54.021975),(23.481595,54.006627),(23.469502,54.000684),(23.464128,53.996757),(23.45927,53.992313),(23.45865,53.981616),(23.462888,53.972521),(23.470949,53.965286),(23.481595,53.960118),(23.487486,53.955622),(23.49038,53.95061),(23.48976,53.945132),(23.485625,53.939293),(23.486866,53.909992),(23.497201,53.885549),(23.51043,53.862502),(23.520662,53.837335),(23.52986,53.78416),(23.540402,53.763593),(23.56469,53.742457),(23.56469,53.742405),(23.564794,53.742405),(23.567068,53.681014),(23.590942,53.611251),(23.675795,53.455705),(23.722924,53.397104),(23.742974,53.365478),(23.782972,53.270936),(23.800645,53.242462),(23.818628,53.227967),(23.828447,53.213833),(23.836198,53.199286),(23.848084,53.183809),(23.865964,53.172001),(23.883121,53.163991),(23.893663,53.151951),(23.891389,53.127714),(23.882397,53.113297),(23.870305,53.101256),(23.860693,53.087484),(23.859349,53.067976),(23.867928,53.051027),(23.89821,53.02754),(23.909682,53.012734),(23.911336,53.00506),(23.908959,52.993046),(23.909062,52.986664),(23.917537,52.95881),(
23.91795,52.9508),(23.914953,52.944625),(23.903894,52.931137),(23.901104,52.923257),(23.902034,52.912534),(23.905445,52.906539),(23.909476,52.901501),(23.912059,52.893853),(23.920426,52.772625),(23.922498,52.742596),(23.908752,52.699859),(23.868961,52.670042),(23.736153,52.614903),(23.569031,52.585887),(23.480458,52.554416),(23.392298,52.509638),(23.270548,52.395123),(23.230447,52.365074),(23.21205,52.347504),(23.165645,52.289393),(23.168746,52.288928),(23.176099,52.285136),(23.183525,52.281306),(23.19448,52.271462),(23.197684,52.25831),(23.189726,52.240533),(23.212257,52.231671),(23.284191,52.219734),(23.29959,52.223428),(23.312716,52.215289),(23.374831,52.200949),(23.395812,52.19958),(23.395812,52.193379),(23.388474,52.184154),(23.391368,52.182501),(23.405734,52.185911),(23.418239,52.182501),(23.427231,52.17679),(23.43674,52.175679),(23.450485,52.185911),(23.454413,52.181467),(23.464335,52.176351),(23.470949,52.171623),(23.487693,52.181622),(23.491517,52.174103),(23.488726,52.16214),(23.484695,52.158626),(23.512497,52.124442),(23.531928,52.120644),(23.579366,52.121548),(23.598177,52.11452),(23.604791,52.106769),(23.609856,52.09863),(23.61585,52.0923),(23.625255,52.089716),(23.637451,52.084471),(23.641482,52.072921),(23.642515,52.061397),(23.650783,52.048943),(23.664943,52.011168),(23.676415,51.994088),(23.660705,51.98688),(23.647579,51.97409),(23.628666,51.946339),(23.622155,51.927943),(23.621431,51.895567),(23.61492,51.88425),(23.618847,51.883268),(23.620604,51.883217),(23.621328,51.882028),(23.621845,51.877429),(23.609856,51.873269),(23.605721,51.851591),(23.594559,51.843297),(23.610269,51.825184),(23.628666,51.809759),(23.617504,51.786504),(23.59828,51.77467),(23.577299,51.766686),(23.56035,51.754542),(23.556319,51.747359),(23.552288,51.736636),(23.546604,51.713589),(23.550221,51.708421),(23.556939,51.701393),(23.559213,51.695166),(23.550014,51.692427),(23.546087,51.689637),(23.549291,51.683203),(23.555389,51.676304),(23.56035,51.671963),(23.547534,51.662067),(23.541436,51.659742),(23.532444,51.658967),(23.532444,51.6515),(23.543813,51.6438),(23.545157,51.63411),(23.539886,51.607109),(23.5434,51.592743),(23.552081,51.578817),(23.573992,51.555304),(23.567171,51.55489),(23.56035,51.555304),(23.572752,51.539672),(23.588668,51.535899),(23.602311,51.530783),(23.606238,51.517399),(23.608099,51.511224),(23.614817,51.49722),(23.630526,51.490424),(23.648613,51.486212),(23.662772,51.480192),(23.647166,51.460193),(23.648716,51.453966),(23.689437,51.416423),(23.697602,51.404435),(23.686027,51.40105),(23.678895,51.394073),(23.677655,51.38379),(23.683856,51.370276),(23.665563,51.365806),(23.647786,51.353998),(23.63404,51.339296),(23.628666,51.325964),(23.635177,51.304699),(23.650887,51.299609),(23.670214,51.299376),(23.687267,51.2924),(23.742664,51.216255),(23.765195,51.199021),(23.816045,51.178789),(23.863587,51.148274),(23.874749,51.13613),(23.858213,51.13073),(23.854492,51.121532),(23.869271,51.10174),(23.895523,51.076108),(23.904205,51.062724),(23.915987,51.020866),(23.915987,51.014665),(23.911749,51.00681),(23.919087,51.002728),(23.9318,50.999472),(23.943995,50.994201),(23.955054,50.983556),(23.957845,50.975339),(23.958878,50.966348),(23.964356,50.953222),(23.979342,50.937512),(24.046935,50.898031),(24.130237,50.868937),(24.143156,50.856432),(24.130857,50.839069),(24.100472,50.834986),(24.067295,50.834831),(24.046935,50.829095),(24.020477,50.838552),(23.992881,50.836226),(23.969937,50.825168),(23.957638,50.808011),(23.959498,50.788891),(23.974278,50.776178),(23.997946,50.769202),(24.025851,50.7
67032),(24.025851,50.76083),(24.012828,50.743312),(24.026884,50.728016),(24.05417,50.717164),(24.081041,50.712978),(24.074943,50.690189),(24.082798,50.669156),(24.108165,50.630287),(24.10843,50.629882),(24.085279,50.604406),(24.095407,50.556864),(24.102539,50.543634),(24.107706,50.540844),(24.106466,50.538622),(24.09396,50.52715),(24.075047,50.514282),(24.010658,50.492785),(24.007867,50.480744),(24.009418,50.461469),(24.007764,50.448498),(24.003217,50.437698),(23.981306,50.40478),(23.928699,50.390827),(23.747522,50.38938),(23.713002,50.382404),(23.695742,50.376771),(23.682203,50.368245),(23.671247,50.353568),(23.658677,50.327005),(23.658018,50.325611),(23.644169,50.312692),(23.565311,50.257812),(23.536372,50.242826),(23.536268,50.242826),(23.536165,50.242774),(23.536062,50.242774),(23.481595,50.215954),(23.436429,50.193475),(23.207916,50.03395),(23.177841,50.003977),(23.141254,49.985477),(23.101463,49.957107),(22.993046,49.854426),(22.951188,49.826624),(22.945917,49.818356),(22.937752,49.798202),(22.933101,49.79107),(22.924419,49.785438),(22.906746,49.7812),(22.897961,49.777428),(22.888246,49.769728),(22.826648,49.697381),(22.809801,49.686322),(22.798846,49.683273),(22.777348,49.680483),(22.766186,49.67423),(22.758951,49.6656),(22.741692,49.633664),(22.665831,49.567363),(22.640922,49.528761),(22.660146,49.493001),(22.666864,49.478376),(22.673065,49.435847),(22.676786,49.424478),(22.679576,49.418948),(22.692599,49.405513),(22.7142,49.39063),(22.719781,49.382982),(22.724225,49.367065),(22.737764,49.275391),(22.73394,49.26087),(22.717094,49.230433),(22.702934,49.195034),(22.687121,49.173382),(22.681747,49.161238),(22.687845,49.155915),(22.692496,49.157931),(22.705725,49.168835),(22.721434,49.1616),(22.720194,49.161186),(22.71637,49.160566),(22.748823,49.14558),(22.778072,49.12031),(22.796469,49.11137),(22.841324,49.094886),(22.853416,49.084757),(22.853519,49.076282),(22.844424,49.056697),(22.843184,49.043106),(22.847525,49.033701),(22.863751,49.015356),(22.865014,49.013237),(22.866955,49.009981),(22.855276,48.994013),(22.835019,48.999749),(22.812902,49.012875),(22.795022,49.019025),(22.783963,49.02135),(22.765463,49.038403),(22.755334,49.044708),(22.744482,49.045535),(22.721951,49.041039),(22.685778,49.042796),(22.664384,49.041452),(22.642886,49.043157),(22.618288,49.054216),(22.580771,49.08145),(22.560721,49.085532),(22.539637,49.0722),(22.505013,49.083413),(22.426775,49.085635),(22.390085,49.093025),(22.339442,49.12646),(22.318151,49.131989),(22.262858,49.130594),(22.215936,49.139999),(22.208494,49.144185),(22.20622,49.150489),(22.209011,49.156949),(22.208908,49.163977),(22.197952,49.171987),(22.189684,49.17302),(22.165706,49.171108),(22.155681,49.171522),(22.144105,49.174881),(22.111549,49.188627),(22.040752,49.197463),(22.012434,49.211054),(22.005819,49.242887),(21.99321,49.278182),(21.964478,49.308568),(21.928408,49.330788),(21.874561,49.348358),(21.837871,49.370424),(21.819577,49.377246),(21.799423,49.374817),(21.782164,49.364482),(21.768004,49.353474),(21.757566,49.348927),(21.742166,49.357092),(21.708576,49.390888),(21.69173,49.402154),(21.681808,49.404066),(21.666305,49.401792),(21.65876,49.402464),(21.648632,49.407063),(21.630028,49.418897),(21.6199,49.423341),(21.601193,49.426493),(21.529569,49.421222),(21.514422,49.417213),(21.496186,49.412386),(21.48151,49.415228),(21.444406,49.409905),(21.42787,49.409802),(21.330408,49.427785),(21.274391,49.447267),(21.260542,49.449438),(21.242455,49.441273),(21.211242,49.411094),(21.194086,49.400603),(21.172588,49.398278),(21.157292,49.405151),(
21.143029,49.415538),(21.124943,49.423651),(21.10944,49.424581),(21.068822,49.419207),(21.053526,49.414453),(21.033269,49.399673),(21.045464,49.390578),(21.068512,49.381431),(21.080708,49.366342),(21.072543,49.357195),(21.054043,49.354766),(21.032959,49.354611),(21.017249,49.352441),(20.996682,49.339367),(20.964022,49.308103),(20.942422,49.295855),(20.91896,49.290326),(20.900564,49.2926),(20.884337,49.300299),(20.868007,49.31141),(20.849301,49.320505),(20.833591,49.322055),(20.816538,49.321383),(20.79411,49.323657),(20.778297,49.331202),(20.689517,49.4005),(20.673911,49.402309),(20.636084,49.39833),(20.614483,49.400448),(20.605284,49.400035),(20.595053,49.396263),(20.579136,49.383395),(20.567664,49.376367),(20.543996,49.370838),(20.522085,49.374352),(20.437543,49.402515),(20.421936,49.40019),(20.422143,49.382982),(20.370467,49.38169),(20.329539,49.391767),(20.317653,49.391612),(20.307422,49.386599),(20.303701,49.380398),(20.301634,49.371199),(20.296466,49.356937),(20.289335,49.343087),(20.284374,49.338643),(20.207169,49.334199),(20.191976,49.328618),(20.170169,49.311617),(20.16035,49.305622),(20.138336,49.307327),(20.135856,49.308878),(20.130171,49.303917),(20.111051,49.275856),(20.105367,49.263971),(20.098442,49.25286),(20.08604,49.243042),(20.080355,49.208109),(20.07002,49.183097),(20.050486,49.173227),(20.01731,49.183872),(19.965737,49.215653),(19.937935,49.22511),(19.905896,49.222785),(19.887809,49.214),(19.868292,49.200788),(19.854219,49.191262),(19.831998,49.185888),(19.785903,49.188162),(19.760685,49.194208),(19.747766,49.205887),(19.747787,49.205956),(19.751797,49.219064),(19.789624,49.259475),(19.794068,49.261852),(19.800476,49.263041),(19.806263,49.265263),(19.808641,49.270895),(19.80678,49.275236),(19.806415,49.275488),(19.798822,49.280714),(19.798563,49.281081),(19.796962,49.28335),(19.79686,49.283399),(19.783216,49.290016),(19.779702,49.293375),(19.779713,49.293453),(19.780322,49.297612),(19.78797,49.305105),(19.78952,49.309188),(19.783629,49.357557),(19.778565,49.374197),(19.769263,49.39311),(19.769056,49.393213),(19.760065,49.397658),(19.726578,49.388873),(19.706321,49.387529),(19.684514,49.389079),(19.627044,49.401868),(19.627097,49.402165),(19.62922,49.413936),(19.6286,49.429594),(19.63015,49.43502),(19.634801,49.441324),(19.594804,49.441583),(19.573926,49.445252),(19.55677,49.453882),(19.551602,49.461013),(19.535479,49.492846),(19.517289,49.543282),(19.505197,49.563384),(19.481735,49.573513),(19.474191,49.578732),(19.457344,49.598059),(19.44889,49.600313),(19.443392,49.60178),(19.437604,49.600126),(19.43378,49.595165),(19.347067,49.532998),(19.339522,49.528761),(19.325363,49.52504),(19.315027,49.523955),(19.284228,49.524162),(19.265005,49.521475),(19.248675,49.516255),(19.234102,49.507212),(19.220459,49.493001),(19.20444,49.460548),(19.206197,49.459411),(19.209401,49.456414),(19.211364,49.45166),(19.208987,49.444993),(19.204543,49.442823),(19.192244,49.442616),(19.18935,49.442203),(19.182322,49.422721),(19.179738,49.410267),(19.172504,49.402257),(19.141705,49.394195),(19.11659,49.39094),(19.106255,49.391405),(19.096643,49.394609),(19.076799,49.404066),(19.067601,49.406133),(19.045793,49.402774),(19.007139,49.388098),(18.981818,49.386806),(18.962284,49.389183),(18.956703,49.400448),(18.960837,49.433315),(18.958253,49.448146),(18.953396,49.461685),(18.952362,49.475948),(18.961147,49.492794),(18.932208,49.504318),(18.833196,49.510261),(18.837434,49.526952),(18.83454,49.547623),(18.820174,49.590256),(18.81573,49.599299),(18.803947,49.616766),(18.799297,49.626378),(18.792579,49
.660794),(18.788444,49.668597),(18.773458,49.675832),(18.757852,49.674075),(18.742246,49.669683),(18.727673,49.668907),(18.715064,49.673817),(18.695427,49.688854),(18.682404,49.695779),(18.668142,49.69857),(18.637343,49.700327),(18.62401,49.706373),(18.617706,49.713866),(18.60768,49.74296),(18.584116,49.774379),(18.566753,49.804765),(18.566339,49.831585),(18.593831,49.852204),(18.567269,49.861454),(18.562929,49.873701),(18.565409,49.889049),(18.559208,49.907187),(18.544428,49.912975),(18.505051,49.896697),(18.481797,49.896645),(18.407486,49.923155),(18.368832,49.932044),(18.328524,49.92884),(18.330695,49.92884),(18.332452,49.928271),(18.333795,49.927135),(18.334932,49.925429),(18.306923,49.909668),(18.292454,49.907808),(18.27261,49.918298),(18.267029,49.925067),(18.260105,49.94057),(18.255867,49.946048),(18.243982,49.951732),(18.22052,49.954885),(18.208738,49.958295),(18.177216,49.975814),(18.162023,49.98124),(18.098151,49.989043),(18.085128,49.998758),(18.091639,50.017207),(18.033245,50.041805),(18.019074,50.044087),(18.002446,50.046765),(18.00689,50.024131),(18.012161,50.022684),(18.020119,50.024286),(18.028698,50.024131),(18.035726,50.017207),(18.037483,50.007233),(18.032418,50.00284),(17.999862,49.996588),(17.959761,49.995812),(17.941571,49.992764),(17.912116,49.975814),(17.875735,49.968682),(17.839355,49.973643),(17.774553,50.015191),(17.763287,50.025216),(17.755433,50.037154),(17.749852,50.049504),(17.74303,50.060563),(17.731971,50.06909),(17.722566,50.075136),(17.719466,50.080872),(17.72267,50.08635),(17.731971,50.091466),(17.732385,50.092189),(17.732592,50.092964),(17.732385,50.093894),(17.731971,50.094876),(17.717915,50.096788),(17.690734,50.10547),(17.677608,50.107744),(17.666963,50.106142),(17.658488,50.102989),(17.648359,50.101801),(17.632753,50.106348),(17.584177,50.145881),(17.58211,50.153167),(17.589448,50.163244),(17.600403,50.16681),(17.648669,50.167998),(17.675334,50.174716),(17.717915,50.191098),(17.738173,50.202363),(17.747578,50.217504),(17.731971,50.236005),(17.721429,50.242722),(17.719156,50.252799),(17.721119,50.261946),(17.731971,50.280653),(17.734245,50.283805),(17.734969,50.286802),(17.734452,50.28949),(17.731971,50.291763),(17.713575,50.30768),(17.707994,50.311039),(17.691044,50.315069),(17.684533,50.312382),(17.681949,50.305561),(17.676368,50.297396),(17.646705,50.271093),(17.629549,50.262101),(17.606398,50.258225),(17.516998,50.268354),(17.469352,50.265563),(17.44,50.242722),(17.434832,50.2405),(17.429561,50.239777),(17.424187,50.240552),(17.419122,50.242722),(17.409821,50.261326),(17.388013,50.272643),(17.365896,50.271558),(17.355044,50.253264),(17.341711,50.259724),(17.334786,50.270008),(17.332823,50.282668),(17.334063,50.295691),(17.329515,50.298223),(17.325381,50.301117),(17.321867,50.304476),(17.319077,50.308248),(17.337474,50.313002),(17.316803,50.318635),(17.269881,50.317757),(17.24611,50.322149),(17.210866,50.336412),(17.201048,50.343233),(17.195053,50.351915),(17.188852,50.364834),(17.185442,50.375686),(17.187612,50.378476),(17.175933,50.38075),(17.145444,50.376719),(17.131698,50.376823),(17.118882,50.381164),(17.095628,50.393669),(17.084672,50.397958),(16.98194,50.416355),(16.958789,50.414598),(16.944005,50.420267),(16.915794,50.431083),(16.892953,50.432943),(16.865771,50.422401),(16.865874,50.408449),(16.897914,50.378218),(16.909283,50.359925),(16.916724,50.338634),(16.926543,50.319255),(16.945249,50.306905),(16.969641,50.297758),(16.987831,50.284942),(16.998373,50.26701),(16.99982,50.242722),(17.012842,50.231664),(17.014806,50.218486),(17.008398,50.
20996),(16.996306,50.212905),(16.98566,50.223137),(16.98101,50.229183),(16.975015,50.231819),(16.960339,50.231974),(16.957548,50.229235),(16.957755,50.217143),(16.955275,50.213474),(16.946283,50.21306),(16.927783,50.217298),(16.918998,50.217091),(16.907009,50.211768),(16.890369,50.196937),(16.880964,50.191563),(16.869762,50.189729),(16.857606,50.187739),(16.837556,50.188617),(16.817712,50.186757),(16.795284,50.174406),(16.777301,50.15694),(16.766449,50.1434),(16.754253,50.13229),(16.732033,50.121955),(16.701233,50.094928),(16.660616,50.093016),(16.618654,50.10702),(16.584238,50.127432),(16.566668,50.142212),(16.557263,50.154718),(16.545894,50.188824),(16.535042,50.207738),(16.50476,50.227995),(16.492047,50.242722),(16.477578,50.255332),(16.452153,50.284994),(16.437787,50.297965),(16.415566,50.307938),(16.371021,50.318377),(16.353244,50.333518),(16.35066,50.342251),(16.351487,50.351346),(16.350764,50.360648),(16.343736,50.36995),(16.334227,50.371862),(16.288649,50.371862),(16.27697,50.369433),(16.269839,50.365454),(16.2625,50.36442),(16.249685,50.371035),(16.24493,50.376823),(16.232011,50.402609),(16.199559,50.406278),(16.190153,50.424158),(16.198008,50.440023),(16.217955,50.437129),(16.216095,50.440178),(16.211237,50.451289),(16.217335,50.45165),(16.226534,50.455371),(16.232011,50.456766),(16.247411,50.464983),(16.264981,50.471649),(16.279967,50.479866),(16.287719,50.492785),(16.303428,50.49604),(16.335468,50.490356),(16.352624,50.492785),(16.362029,50.501208),(16.382596,50.514179),(16.391175,50.521827),(16.388384,50.521258),(16.388281,50.52715),(16.389624,50.534849),(16.391175,50.539914),(16.395516,50.545701),(16.398306,50.548492),(16.398306,50.551799),(16.394482,50.559344),(16.425901,50.567561),(16.416806,50.586629),(16.331644,50.644042),(16.300948,50.6551),(16.268288,50.660423),(16.232011,50.660888),(16.219092,50.659131),(16.211754,50.649106),(16.204519,50.636342),(16.192324,50.626575),(16.175477,50.624095),(16.160284,50.628642),(16.128658,50.644042),(16.086491,50.64678),(16.024479,50.608592),(15.982001,50.603631),(15.979934,50.606628),(15.97921,50.60978),(15.980037,50.612984),(15.982001,50.61624),(15.996367,50.623681),(15.998641,50.635567),(15.993163,50.649364),(15.984481,50.662904),(15.971459,50.678613),(15.958023,50.686881),(15.941486,50.68838),(15.882265,50.674221),(15.854567,50.673704),(15.848159,50.675151),(15.797619,50.729773),(15.794415,50.735871),(15.792245,50.742692),(15.76744,50.744191),(15.684345,50.731426),(15.673803,50.733545),(15.666464,50.738093),(15.653235,50.75096),(15.641866,50.755766),(15.441775,50.800208),(15.400124,50.797986),(15.390822,50.790648),(15.381314,50.780002),(15.370255,50.772613),(15.368579,50.772949),(15.356096,50.775455),(15.349688,50.783723),(15.350515,50.793232),(15.353305,50.803257),(15.352995,50.812972),(15.341006,50.83173),(15.325813,50.839689),(15.307727,50.844753),(15.28778,50.854881),(15.277134,50.865734),(15.265352,50.88227),(15.25605,50.899892),(15.253053,50.914103),(15.260081,50.932499),(15.268763,50.94299),(15.269796,50.952653),(15.253776,50.969035),(15.227111,50.977406),(15.170164,50.977975),(15.160759,50.992651),(15.156108,51.007844),(15.144429,51.011564),(15.13151,51.005828),(15.122518,50.992754),(15.122518,50.992651),(15.119211,50.982471),(15.107946,50.981024),(15.09389,50.985416),(15.082107,50.992754),(15.073632,51.002624),(15.02299,51.009652),(15.003973,51.020685),(15.001182,51.012443),(14.996531,51.007844),(14.982062,51.001177),(14.976377,51.000402),(14.970693,50.998697),(14.965629,50.996062),(14.961081,50.992651),(14.960978,50.992651
),(14.965215,50.981334),(14.981442,50.973221),(14.996635,50.959216),(14.997771,50.930174),(14.994671,50.92511),(14.984852,50.918082),(14.982062,50.914671),(14.981132,50.90909),(14.981752,50.905266),(14.982889,50.902424),(14.984129,50.867697),(14.982062,50.859067),(14.867857,50.86439),(14.831683,50.857982),(14.810393,50.858447),(14.810496,50.877051),(14.824655,50.89183),(14.860209,50.916842),(14.910645,50.992651),(14.955293,51.064042),(14.958291,51.075514),(14.958497,51.094738),(14.960254,51.104582),(14.965629,51.111507),(14.97338,51.116132),(14.979995,51.122798),(14.982062,51.13564),(14.983199,51.160909),(14.991364,51.18897),(15.004903,51.215609),(15.022059,51.23677),(15.017925,51.239561),(15.014308,51.242661),(15.019476,51.271678),(15.004903,51.291289),(14.982269,51.308006),(14.963768,51.328393),(14.960978,51.335291),(14.956637,51.353223),(14.956637,51.354153),(14.950648,51.396539),(14.946922,51.422909),(14.955397,51.435415),(14.945061,51.449186),(14.926561,51.46133),(14.910955,51.468978),(14.888321,51.475748),(14.841399,51.48443),(14.818558,51.492595),(14.818558,51.492698),(14.797474,51.502103),(14.732155,51.515823),(14.710037,51.530241),(14.70677,51.540828),(14.704559,51.547992),(14.712724,51.567422),(14.732155,51.58662),(14.743937,51.606644),(14.742283,51.632974),(14.732465,51.658347),(14.719546,51.675632),(14.67097,51.700773),(14.648439,51.717206),(14.638103,51.742631),(14.634176,51.769012),(14.620327,51.780536),(14.60224,51.788313),(14.585807,51.803893),(14.58281,51.825081),(14.595936,51.842418),(14.653296,51.878617),(14.671693,51.893888),(14.687093,51.911897),(14.695568,51.931457),(14.699082,51.992512),(14.704456,52.002227),(14.726057,52.026438),(14.730294,52.035895),(14.732155,52.045067),(14.735359,52.054033),(14.743007,52.062534),(14.761403,52.076667),(14.696394,52.106691),(14.686369,52.121032),(14.692467,52.146973),(14.703216,52.165163),(14.70704,52.179452),(14.69257,52.193379),(14.69257,52.19958),(14.712104,52.215444),(14.712414,52.235908),(14.696911,52.253478),(14.668903,52.260997),(14.640481,52.265054),(14.606581,52.275751),(14.584153,52.291228),(14.590148,52.309418),(14.569167,52.338512),(14.545396,52.382153),(14.539815,52.421866),(14.591595,52.449797),(14.615779,52.473284),(14.632316,52.496745),(14.627665,52.507416),(14.609061,52.517803),(14.616813,52.541032),(14.644821,52.576921),(14.613712,52.592527),(14.607965,52.596463),(14.549943,52.636194),(14.490102,52.650638),(14.46137,52.674564),(14.442353,52.679964),(14.431915,52.686088),(14.397705,52.727119),(14.382202,52.738152),(14.275955,52.78572),(14.24867,52.794582),(14.21601,52.817992),(14.19596,52.823366),(14.173739,52.824348),(14.152758,52.828327),(14.135395,52.836595),(14.123923,52.850651),(14.150485,52.86463),(14.16082,52.876748),(14.164954,52.895661),(14.16268,52.908167),(14.150485,52.943643),(14.14449,52.953694),(14.14449,52.959921),(14.192859,52.981651),(14.253734,53.000875),(14.312645,53.029219),(14.343341,53.048624),(14.356777,53.066684),(14.359568,53.075056),(14.38954,53.131668),(14.387473,53.146137),(14.381478,53.160296),(14.377861,53.176549),(14.380651,53.189855),(14.387783,53.202077),(14.397705,53.210991),(14.408247,53.214402),(14.415585,53.22099),(14.44163,53.251841),(14.428607,53.259076),(14.42468,53.264657),(14.409384,53.272434),(14.403286,53.27879),(14.400185,53.288609),(14.399565,53.306747),(14.398015,53.315222),(14.391261,53.334017),(14.370316,53.392298),(14.359878,53.444077),(14.338897,53.465213),(14.328458,53.48609),(14.30417,53.508518),(14.296936,53.529137),(14.296212,53.551306),(14.298899,53.587893),(14
.297039,53.597763),(14.276058,53.636624),(14.260969,53.656209),(14.265103,53.671764),(14.263863,53.699876),(14.263901,53.699976),(14.263927,53.699978),(14.273692,53.700629),(14.291677,53.705268),(14.304861,53.712551),(14.306163,53.720689),(14.288585,53.727973),(14.29835,53.731431),(14.301606,53.734849),(14.29835,53.738267),(14.288585,53.741645),(14.2942,53.749091),(14.304047,53.743883),(14.318044,53.739732),(14.330414,53.734198),(14.342052,53.714911),(14.356212,53.708441),(14.383556,53.700629),(14.403575,53.687812),(14.414561,53.682359),(14.458669,53.677883),(14.51238,53.666449),(14.531016,53.657782),(14.546235,53.644761),(14.579112,53.607245),(14.590587,53.598375),(14.595876,53.601304),(14.597016,53.617336),(14.601573,53.626654),(14.610037,53.632758),(14.62379,53.639228),(14.623057,53.643012),(14.622813,53.646959),(14.62379,53.652818),(14.613048,53.65469),(14.587413,53.656073),(14.575938,53.659654),(14.544932,53.704047),(14.551443,53.726142),(14.575938,53.769517),(14.587413,53.763495),(14.597341,53.765204),(14.608246,53.769029),(14.62379,53.769517),(14.61671,53.794745),(14.620372,53.814846),(14.627289,53.832709),(14.630626,53.851508),(14.62379,53.851508),(14.621349,53.839545),(14.613536,53.83161),(14.58961,53.817328),(14.595876,53.810492),(14.587087,53.812649),(14.580089,53.815334),(14.574474,53.819078),(14.569102,53.824164),(14.574229,53.827094),(14.583263,53.834215),(14.58961,53.837836),(14.575857,53.854804),(14.552094,53.862779),(14.441173,53.869615),(14.425548,53.879828),(14.438162,53.899319),(14.434906,53.901313),(14.434093,53.901597),(14.433767,53.902411),(14.431895,53.906155),(14.422048,53.896145),(14.414236,53.894721),(14.405772,53.897406),(14.393809,53.899319),(14.385102,53.897284),(14.378266,53.89232),(14.371593,53.885891),(14.363048,53.879381),(14.392914,53.875963),(14.40919,53.871568),(14.417654,53.865139),(14.415701,53.853013),(14.405284,53.844184),(14.392589,53.840725),(14.383556,53.844672),(14.356212,53.834459),(14.342621,53.831855),(14.328298,53.831),(14.328298,53.824164),(14.342052,53.82099),(14.341319,53.818915),(14.328298,53.817328),(14.318858,53.818183),(14.303071,53.823065),(14.291352,53.824164),(14.274669,53.830797),(14.226573,53.871975),(14.203136,53.878323),(14.200857,53.87816),(14.200819,53.878157),(14.192756,53.893766),(14.175289,53.906478),(14.184798,53.908132),(14.193066,53.911491),(14.210076,53.938457),(14.226573,53.934027),(14.237478,53.928656),(14.268077,53.92182),(14.367035,53.916978),(14.406749,53.92182),(14.445567,53.934027),(14.517345,53.965237),(14.781912,54.034613),(15.010102,54.06981),(15.288341,54.138861),(15.468516,54.167141),(15.665782,54.198432),(15.859711,54.249986),(16.020844,54.261786),(16.131521,54.283515),(16.118907,54.276028),(16.069347,54.263007),(16.091075,54.253974),(16.17921,54.263007),(16.17921,54.269192),(16.172374,54.269192),(16.172374,54.276597),(16.220063,54.276597),(16.219574,54.28148),(16.218516,54.282375),(16.216645,54.28205),(16.213227,54.283515),(16.21518,54.299628),(16.192149,54.300279),(16.137706,54.290269),(16.17213,54.300238),(16.213552,54.316107),(16.275401,54.358588),(16.262869,54.343695),(16.240489,54.330024),(16.231781,54.322577),(16.240896,54.32453),(16.259044,54.326728),(16.274913,54.330471),(16.288829,54.340237),(16.323904,54.349921),(16.336762,54.358588),(16.32488,54.373114),(16.305024,54.373481),(16.296235,54.367255),(16.281586,54.358588),(16.323985,54.401313),(16.439789,54.488267),(16.439789,54.482123),(16.434418,54.472357),(16.444509,54.469875),(16.460623,54.473456),(16.473399,54.482123),(16.479747,54.495551),(16.
473888,54.499661),(16.460948,54.496487),(16.446056,54.488267),(16.466563,54.512152),(16.501313,54.532416),(16.569591,54.557196),(16.778494,54.575141),(16.85963,54.59394),(16.902192,54.598131),(16.939952,54.606024),(17.035818,54.66706),(17.337413,54.74901),(17.715668,54.790432),(17.885427,54.824123),(18.152354,54.838324),(18.316905,54.838324),(18.339529,54.833482),(18.375824,54.815131),(18.389171,54.811103),(18.410167,54.808295),(18.75172,54.69009),(18.813975,54.643215),(18.824067,54.632229),(18.834321,54.616685),(18.835297,54.603095),(18.817638,54.598131),(18.813243,54.601549),(18.80836,54.608832),(18.802257,54.616034),(18.784028,54.623521),(18.771821,54.642279),(18.755382,54.654486),(18.707774,54.701239),(18.698497,54.703803),(18.663422,54.707994),(18.654552,54.71015),(18.64503,54.71955),(18.61671,54.725246),(18.556977,54.74901),(18.49822,54.765367),(18.48878,54.773179),(18.456309,54.787665),(18.447765,54.789984),(18.435557,54.786811),(18.42921,54.781684),(18.41505,54.753119),(18.413097,54.746527),(18.413097,54.73192),(18.418468,54.728217),(18.442393,54.721991),(18.447765,54.718248),(18.459809,54.706041),(18.474457,54.687567),(18.473888,54.679918),(18.470063,54.673285),(18.470388,54.666734),(18.481944,54.659613),(18.475922,54.640082),(18.487315,54.631781),(18.506358,54.634101),(18.52296,54.646552),(18.5171,54.62995),(18.525157,54.616685),(18.537608,54.605455),(18.562836,54.560004),(18.570649,54.55036),(18.566742,54.527411),(18.567556,54.48786),(18.574067,54.450385),(18.588064,54.433661),(18.60377,54.430609),(18.634288,54.416938),(18.649913,54.413804),(18.663341,54.412991),(18.680024,54.40998),(18.694347,54.404527),(18.70045,54.395819),(18.717296,54.382025),(18.885916,54.350165),(18.923188,54.348538),(18.954356,54.358588),(18.962901,54.353502),(18.97462,54.349758),(19.002126,54.344916)] +Paraguay 
[(-58.158797,-20.165125),(-58.158332,-20.181144),(-58.152337,-20.184658),(-58.144069,-20.183521),(-58.137041,-20.185588),(-58.12991,-20.192513),(-58.120505,-20.203882),(-58.118541,-20.214424),(-58.155283,-20.226413),(-58.162672,-20.243156),(-58.156316,-20.261656),(-58.137041,-20.274369),(-58.130943,-20.272818),(-58.126241,-20.267031),(-58.120039,-20.261346),(-58.109756,-20.260726),(-58.10309,-20.264447),(-58.09508,-20.272198),(-58.090325,-20.281707),(-58.099937,-20.309199),(-58.097715,-20.334623),(-58.090791,-20.359428),(-58.082987,-20.376171),(-58.067123,-20.393534),(-58.045625,-20.409347),(-58.023146,-20.420716),(-58.004181,-20.42516),(-57.989401,-20.433119),(-57.99214,-20.450895),(-58.007902,-20.479834),(-58.01002,-20.500091),(-58.008935,-20.525723),(-58.004749,-20.549804),(-57.993329,-20.573782),(-57.992554,-20.585047),(-57.993639,-20.606751),(-57.990952,-20.620601),(-57.977826,-20.640134),(-57.973123,-20.651193),(-57.973588,-20.661942),(-57.981082,-20.681165),(-57.980616,-20.692741),(-57.974415,-20.704006),(-57.96501,-20.711034),(-57.95638,-20.709898),(-57.947182,-20.675068),(-57.934004,-20.66742),(-57.918088,-20.669797),(-57.90429,-20.678995),(-57.900208,-20.685196),(-57.894472,-20.699976),(-57.890648,-20.706384),(-57.884343,-20.711965),(-57.868944,-20.722197),(-57.86021,-20.730258),(-57.857678,-20.73987),(-57.865275,-20.747415),(-57.877005,-20.752376),(-57.898761,-20.756613),(-57.906667,-20.762194),(-57.917933,-20.77408),(-57.933074,-20.784622),(-57.940412,-20.793613),(-57.933953,-20.799711),(-57.908011,-20.801985),(-57.887651,-20.805912),(-57.870701,-20.816454),(-57.859642,-20.831647),(-57.85711,-20.849734),(-57.867393,-20.860069),(-57.908683,-20.881567),(-57.917933,-20.894382),(-57.914832,-20.905648),(-57.908011,-20.910505),(-57.901138,-20.912366),(-57.898089,-20.914846),(-57.893025,-20.914846),(-57.869925,-20.932726),(-57.847033,-20.955981),(-57.840315,-20.955981),(-57.836026,-20.938514),(-57.817577,-20.953604),(-57.816027,-20.975411),(-57.82538,-20.998562),(-57.852459,-21.037836),(-57.851735,-21.051996),(-57.836026,-21.082485),(-57.829876,-21.12703),(-57.833959,-21.173849),(-57.84786,-21.216223),(-57.870701,-21.247643),(-57.899019,-21.267176),(-57.90429,-21.274928),(-57.90491,-21.287433),(-57.89995,-21.295081),(-57.893645,-21.300352),(-57.890648,-21.305934),(-57.882328,-21.316372),(-57.866308,-21.32092),(-57.855559,-21.330738),(-57.863311,-21.356886),(-57.929922,-21.453108),(-57.94527,-21.487214),(-57.949145,-21.508298),(-57.948112,-21.530519),(-57.939792,-21.548193),(-57.921602,-21.555427),(-57.912093,-21.564006),(-57.919328,-21.583953),(-57.932144,-21.60669),(-57.939017,-21.623744),(-57.934831,-21.640693),(-57.924341,-21.658677),(-57.911783,-21.673043),(-57.901138,-21.678934),(-57.895609,-21.688442),(-57.939017,-21.755312),(-57.938448,-21.774639),(-57.934779,-21.792932),(-57.93571,-21.812053),(-57.948939,-21.83386),(-57.955915,-21.85112),(-57.946665,-21.865486),(-57.932764,-21.881919),(-57.925374,-21.904967),(-57.929922,-21.917369),(-57.953228,-21.958193),(-57.962633,-21.966978),(-57.964855,-21.976694),(-57.986818,-22.035295),(-57.988368,-22.049041),(-57.988006,-22.06413),(-57.986249,-22.074465),(-57.973134,-22.081048),(-57.972865,-22.081183),(-57.961393,-22.090072),(-57.952453,-22.105988),(-57.939999,-22.11839),(-57.906254,-22.128932),(-57.895505,-22.128932),(-57.878194,-22.122111),(-57.871372,-22.121181),(-57.862846,-22.125832),(-57.848686,-22.140301),(-57.843881,-22.143712),(-57.831788,-22.141128),(-57.815769,-22.128312),(-57.805485,-22.128622),(-57.795305,-22.130999),(-57.7
87502,-22.129656),(-57.775409,-22.123971),(-57.777786,-22.122318),(-57.777425,-22.117047),(-57.775616,-22.111156),(-57.773652,-22.107745),(-57.769932,-22.107435),(-57.760268,-22.110742),(-57.756186,-22.110639),(-57.736342,-22.103714),(-57.723268,-22.104748),(-57.709935,-22.108469),(-57.689626,-22.109295),(-57.678516,-22.106401),(-57.670816,-22.102371),(-57.663685,-22.100614),(-57.654486,-22.104541),(-57.638932,-22.144539),(-57.627201,-22.165106),(-57.614075,-22.176991),(-57.596712,-22.181022),(-57.572734,-22.177715),(-57.564001,-22.174408),(-57.55997,-22.171617),(-57.554699,-22.170274),(-57.542297,-22.171514),(-57.534597,-22.174408),(-57.511188,-22.188774),(-57.472482,-22.180299),(-57.463955,-22.180402),(-57.443182,-22.187947),(-57.432278,-22.18867),(-57.41786,-22.193838),(-57.384994,-22.213268),(-57.378638,-22.213372),(-57.37502,-22.207274),(-57.366649,-22.209134),(-57.351249,-22.217196),(-57.339932,-22.216989),(-57.334816,-22.211718),(-57.333059,-22.205414),(-57.331767,-22.202416),(-57.316316,-22.203657),(-57.271667,-22.213785),(-57.252444,-22.210581),(-57.227484,-22.191254),(-57.21379,-22.188154),(-57.209449,-22.192081),(-57.2064,-22.207481),(-57.199527,-22.212855),(-57.189967,-22.213578),(-57.183611,-22.210891),(-57.177926,-22.207377),(-57.170382,-22.205724),(-57.154155,-22.207997),(-57.124183,-22.218539),(-57.115036,-22.22288),(-57.114623,-22.225051),(-57.113279,-22.229391),(-57.108473,-22.233525),(-57.090748,-22.236419),(-57.085064,-22.238073),(-57.079224,-22.23859),(-57.071628,-22.235696),(-57.059484,-22.232699),(-57.027651,-22.235076),(-57.011167,-22.233836),(-56.986465,-22.24138),(-56.975717,-22.24169),(-56.964554,-22.234869),(-56.965485,-22.250889),(-56.960524,-22.254816),(-56.9521,-22.253576),(-56.942592,-22.253989),(-56.924944,-22.262361),(-56.91479,-22.262878),(-56.90368,-22.236936),(-56.892001,-22.252232),(-56.881717,-22.277244),(-56.879547,-22.289956),(-56.85606,-22.29316),(-56.842856,-22.289026),(-56.84284,-22.28901),(-56.819654,-22.265462),(-56.813142,-22.25616),(-56.811902,-22.249649),(-56.808285,-22.247995),(-56.794565,-22.252542),(-56.786994,-22.251819),(-56.754361,-22.242621),(-56.74679,-22.243964),(-56.745601,-22.248408),(-56.743534,-22.250372),(-56.733199,-22.244067),(-56.728548,-22.237453),(-56.720538,-22.21947),(-56.715371,-22.215439),(-56.698602,-22.2235),(-56.664573,-22.259984),(-56.648811,-22.263498),(-56.644135,-22.255436),(-56.654703,-22.23735),(-56.648346,-22.231355),(-56.637184,-22.228254),(-56.629174,-22.22474),(-56.59574,-22.201486),(-56.584965,-22.190531),(-56.567912,-22.162315),(-56.565302,-22.159938),(-56.558016,-22.156424),(-56.554605,-22.152083),(-56.549076,-22.13627),(-56.545975,-22.130069),(-56.530679,-22.109089),(-56.521636,-22.09989),(-56.5113,-22.091415),(-56.493834,-22.086558),(-56.421797,-22.074465),(-56.407637,-22.075809),(-56.397302,-22.086248),(-56.370844,-22.147329),(-56.347331,-22.180402),(-56.316842,-22.207687),(-56.261083,-22.23859),(-56.231524,-22.265358),(-56.214264,-22.275383),(-56.196798,-22.279311),(-56.024715,-22.287372),(-56.011357,-22.283858),(-55.998567,-22.277244),(-55.985234,-22.283238),(-55.970868,-22.293884),(-55.955262,-22.301325),(-55.89294,-22.306803),(-55.874388,-22.317345),(-55.872011,-22.310213),(-55.86103,-22.289543),(-55.810775,-22.353105),(-55.770157,-22.381423),(-55.760235,-22.391965),(-55.751579,-22.416253),(-55.748298,-22.500899),(-55.74127,-22.537176),(-55.723932,-22.569629),(-55.697913,-22.595571),(-55.664685,-22.612727),(-55.644351,-22.619548),(-55.631199,-22.626783),(-55.62368,-22.638669),(-55.620476,-22.65933
9),(-55.62151,-22.667091),(-55.627013,-22.683627),(-55.627788,-22.692102),(-55.62567,-22.700784),(-55.619184,-22.716907),(-55.618332,-22.725692),(-55.628589,-22.758971),(-55.645927,-22.78791),(-55.659001,-22.818296),(-55.656572,-22.85571),(-55.6343,-22.932811),(-55.634351,-22.950071),(-55.639984,-22.983764),(-55.637814,-23.000714),(-55.611459,-23.029136),(-55.600684,-23.044535),(-55.595232,-23.064069),(-55.600142,-23.100553),(-55.599392,-23.117089),(-55.588669,-23.130112),(-55.560867,-23.145925),(-55.557147,-23.15471),(-55.535236,-23.229124),(-55.535649,-23.245867),(-55.559007,-23.291342),(-55.562159,-23.307879),(-55.555596,-23.330306),(-55.524875,-23.360485),(-55.513868,-23.379399),(-55.515547,-23.410198),(-55.542471,-23.465802),(-55.539913,-23.500528),(-55.533796,-23.530444),(-55.533066,-23.534015),(-55.533892,-23.569981),(-55.530533,-23.603468),(-55.51131,-23.629409),(-55.466971,-23.673231),(-55.445319,-23.735449),(-55.430178,-23.928564),(-55.420669,-23.954506),(-55.398035,-23.97683),(-55.367184,-23.989646),(-55.304346,-23.99409),(-55.27272,-24.000394),(-55.272668,-24.000498),(-55.272565,-24.000498),(-55.27241,-24.000601),(-55.272358,-24.000601),(-55.23683,-24.012693),(-55.200864,-24.019515),(-55.165439,-24.016827),(-55.131281,-24.000601),(-55.131204,-24.000601),(-55.131075,-24.000601),(-55.131075,-24.000394),(-55.105572,-23.988509),(-55.042294,-23.989026),(-55.011495,-23.980551),(-54.994571,-23.973109),(-54.961059,-23.971766),(-54.943644,-23.969182),(-54.926746,-23.95988),(-54.904887,-23.933008),(-54.891916,-23.920606),(-54.827424,-23.887016),(-54.696838,-23.845158),(-54.679784,-23.836683),(-54.654592,-23.811465),(-54.639218,-23.804437),(-54.612553,-23.811155),(-54.443158,-23.899935),(-54.422901,-23.913785),(-54.367607,-23.984685),(-54.273608,-24.036465),(-54.24575,-24.050393),(-54.245289,-24.050624),(-54.266218,-24.06592),(-54.301048,-24.089898),(-54.324767,-24.118217),(-54.334637,-24.148912),(-54.331847,-24.165036),(-54.318979,-24.196765),(-54.314122,-24.234179),(-54.306267,-24.246684),(-54.282651,-24.275313),(-54.274331,-24.298257),(-54.26105,-24.32947),(-54.262187,-24.358512),(-54.271696,-24.389415),(-54.284098,-24.411325),(-54.322907,-24.464345),(-54.334534,-24.496695),(-54.334792,-24.527287),(-54.320685,-24.595914),(-54.321305,-24.628366),(-54.358429,-24.731422),(-54.371121,-24.766653),(-54.39975,-24.80479),(-54.407294,-24.821223),(-54.453286,-25.002814),(-54.462278,-25.03723),(-54.46419,-25.055937),(-54.463312,-25.072784),(-54.455199,-25.092111),(-54.429722,-25.130558),(-54.426621,-25.149472),(-54.435665,-25.167042),(-54.46941,-25.195464),(-54.48145,-25.21324),(-54.503568,-25.278663),(-54.528476,-25.314319),(-54.547079,-25.336644),(-54.575656,-25.360311),(-54.597981,-25.397518),(-54.609814,-25.432762),(-54.61183,-25.444027),(-54.61245,-25.465525),(-54.608523,-25.487642),(-54.598497,-25.505832),(-54.591469,-25.524746),(-54.594156,-25.54862),(-54.599944,-25.572702),(-54.600203,-25.574945),(-54.602218,-25.592442),(-54.58625,-25.624688),(-54.583356,-25.644842),(-54.598497,-25.653834),(-54.633715,-25.652077),(-54.642887,-25.661585),(-54.643146,-25.68794),(-54.62493,-25.740237),(-54.622062,-25.759977),(-54.617152,-25.773516),(-54.594518,-25.798734),(-54.587904,-25.810827),(-54.589557,-25.832944),(-54.599221,-25.850721),(-54.610073,-25.866534),(-54.61524,-25.882863),(-54.607334,-25.927925),(-54.606404,-25.946632),(-54.61524,-25.961722),(-54.62369,-25.964306),(-54.647952,-25.965649),(-54.656194,-25.969163),(-54.662266,-25.979912),(-54.661594,-25.989937),(-54.658261,-26.000376),(-54.65332
6,-26.030761),(-54.642526,-26.062801),(-54.643146,-26.085228),(-54.663816,-26.149204),(-54.659941,-26.164397),(-54.641854,-26.184034),(-54.638288,-26.196953),(-54.644696,-26.208218),(-54.656194,-26.222998),(-54.664333,-26.236227),(-54.664974,-26.238863),(-54.666142,-26.243668),(-54.663661,-26.263305),(-54.663661,-26.308367),(-54.679061,-26.33989),(-54.68043,-26.345264),(-54.683402,-26.350845),(-54.693065,-26.411927),(-54.697664,-26.428153),(-54.706475,-26.441796),(-54.737171,-26.473215),(-54.765309,-26.494092),(-54.780347,-26.510422),(-54.789829,-26.528509),(-54.78877,-26.542565),(-54.783499,-26.556724),(-54.780347,-26.575224),(-54.781277,-26.598169),(-54.784946,-26.622767),(-54.793059,-26.644987),(-54.807012,-26.661214),(-54.817321,-26.665761),(-54.822825,-26.663281),(-54.827424,-26.658113),(-54.834969,-26.654393),(-54.879359,-26.654393),(-54.903337,-26.65987),(-54.919899,-26.674133),(-54.930725,-26.693977),(-54.943877,-26.739969),(-54.951137,-26.757539),(-54.961007,-26.772318),(-54.97527,-26.787821),(-54.991781,-26.794642),(-55.040744,-26.798466),(-55.060898,-26.805184),(-55.125778,-26.863579),(-55.132677,-26.880632),(-55.118879,-26.920009),(-55.122315,-26.941714),(-55.138361,-26.953702),(-55.161564,-26.95732),(-55.2012,-26.955459),(-55.216651,-26.951222),(-55.235177,-26.94223),(-55.256545,-26.934582),(-55.2806,-26.934272),(-55.329409,-26.958457),(-55.376228,-26.963831),(-55.39633,-26.969309),(-55.414003,-26.979851),(-55.43085,-26.996387),(-55.442761,-27.015508),(-55.448833,-27.035868),(-55.451313,-27.082067),(-55.461855,-27.097983),(-55.486402,-27.10005),(-55.513997,-27.097466),(-55.533841,-27.09943),(-55.545365,-27.115243),(-55.549524,-27.13581),(-55.555338,-27.153794),(-55.571745,-27.161545),(-55.598281,-27.16754),(-55.598979,-27.182629),(-55.574872,-27.223557),(-55.570583,-27.235339),(-55.568671,-27.245984),(-55.570712,-27.256423),(-55.58407,-27.276784),(-55.583243,-27.283501),(-55.580763,-27.291666),(-55.581641,-27.305516),(-55.591331,-27.328357),(-55.60748,-27.345513),(-55.628719,-27.356365),(-55.653627,-27.360086),(-55.681713,-27.379413),(-55.717499,-27.417343),(-55.754732,-27.443698),(-55.815167,-27.41414),(-55.842091,-27.407422),(-55.854157,-27.40122),(-55.862787,-27.391092),(-55.877851,-27.353575),(-55.892837,-27.334868),(-55.913094,-27.32784),(-55.936374,-27.328253),(-55.960662,-27.331561),(-55.966094,-27.331723),(-55.984847,-27.332284),(-56.006887,-27.328253),(-56.006894,-27.328251),(-56.073756,-27.304896),(-56.074246,-27.304811),(-56.098948,-27.300555),(-56.124554,-27.298901),(-56.149979,-27.31182),(-56.168892,-27.324739),(-56.205428,-27.362153),(-56.22801,-27.371248),(-56.256768,-27.378276),(-56.279842,-27.389645),(-56.285319,-27.411349),(-56.281469,-27.438634),(-56.285319,-27.460958),(-56.296998,-27.480905),(-56.31648,-27.500439),(-56.335988,-27.525657),(-56.349657,-27.556353),(-56.367537,-27.580641),(-56.399731,-27.586842),(-56.400299,-27.586636),(-56.431357,-27.57537),(-56.461484,-27.553873),(-56.487323,-27.527208),(-56.50621,-27.500129),(-56.54644,-27.455067),(-56.546933,-27.455003),(-56.613,-27.446386),(-56.638016,-27.452912),(-56.683125,-27.464679),(-56.734233,-27.500439),(-56.771155,-27.506744),(-56.771511,-27.506548),(-56.807458,-27.486797),(-56.869263,-27.431503),(-56.904351,-27.418687),(-56.904447,-27.418696),(-56.94099,-27.422098),(-56.977318,-27.435327),(-57.076227,-27.484006),(-57.10899,-27.490621),(-57.148161,-27.488967),(-57.156687,-27.487727),(-57.180097,-27.487313),(-57.180453,-27.487214),(-57.191207,-27.484213),(-57.200406,-27.478942),(-57.227071,-27.458685
),(-57.237096,-27.45424),(-57.237588,-27.45412),(-57.283026,-27.443015),(-57.292493,-27.440701),(-57.302983,-27.435534),(-57.310116,-27.429208),(-57.326289,-27.414863),(-57.335229,-27.409489),(-57.356466,-27.404343),(-57.360396,-27.403391),(-57.386699,-27.403494),(-57.487675,-27.416413),(-57.513358,-27.41414),(-57.519938,-27.412099),(-57.535682,-27.407215),(-57.697791,-27.333524),(-57.698183,-27.333398),(-57.709367,-27.329804),(-57.811583,-27.310683),(-57.811913,-27.310583),(-57.854784,-27.297557),(-57.855088,-27.297499),(-57.899504,-27.288961),(-57.913385,-27.286292),(-57.94403,-27.274613),(-57.944356,-27.274551),(-58.003664,-27.263303),(-58.021958,-27.259834),(-58.022046,-27.259831),(-58.053532,-27.259007),(-58.113993,-27.268929),(-58.128824,-27.269652),(-58.238223,-27.257043),(-58.510765,-27.278437),(-58.600165,-27.312957),(-58.604196,-27.316264),(-58.604135,-27.315548),(-58.599907,-27.266138),(-58.60156,-27.245674),(-58.614014,-27.226657),(-58.638716,-27.210121),(-58.652358,-27.198029),(-58.658404,-27.18511),(-58.653289,-27.156274),(-58.638819,-27.135914),(-58.616133,-27.123821),(-58.56549,-27.115966),(-58.560529,-27.105528),(-58.56239,-27.090128),(-58.562183,-27.072145),(-58.559703,-27.06336),(-58.556395,-27.055505),(-58.551693,-27.048167),(-58.54544,-27.041036),(-58.536259,-27.036294),(-58.536035,-27.036178),(-58.530505,-27.040519),(-58.528025,-27.047754),(-58.528025,-27.051681),(-58.527043,-27.054161),(-58.520032,-27.058368),(-58.519808,-27.058502),(-58.51154,-27.060156),(-58.507561,-27.055092),(-58.506734,-27.038452),(-58.504874,-27.031527),(-58.50136,-27.023672),(-58.492368,-27.00972),(-58.47454,-26.990083),(-58.466633,-26.975923),(-58.481619,-26.963831),(-58.483686,-26.950602),(-58.47547,-26.937993),(-58.459812,-26.928174),(-58.415525,-26.917529),(-58.393873,-26.909674),(-58.384674,-26.897065),(-58.374701,-26.887557),(-58.328916,-26.884146),(-58.31579,-26.874121),(-58.322146,-26.856758),(-58.340388,-26.844459),(-58.352377,-26.830506),(-58.340026,-26.808595),(-58.323335,-26.803841),(-58.305506,-26.809112),(-58.288556,-26.811489),(-58.27481,-26.798363),(-58.278479,-26.790198),(-58.286541,-26.778726),(-58.287936,-26.768598),(-58.2714,-26.764257),(-58.259256,-26.764463),(-58.251763,-26.763223),(-58.247939,-26.758159),(-58.2452,-26.680231),(-58.240652,-26.661214),(-58.235536,-26.649845),(-58.232281,-26.648605),(-58.227165,-26.652015),(-58.195694,-26.657493),(-58.18386,-26.65677),(-58.178641,-26.650672),(-58.181379,-26.641577),(-58.187167,-26.633619),(-58.192076,-26.624627),(-58.192283,-26.612845),(-58.188511,-26.609744),(-58.171509,-26.598582),(-58.164946,-26.592278),(-58.170527,-26.5868),(-58.179312,-26.57264),(-58.18541,-26.564992),(-58.207993,-26.550213),(-58.213367,-26.544529),(-58.217346,-26.527579),(-58.214194,-26.512386),(-58.208716,-26.496883),(-58.205926,-26.479416),(-58.202722,-26.471561),(-58.188666,-26.462053),(-58.18541,-26.451718),(-58.209595,-26.431254),(-58.213057,-26.418645),(-58.206184,-26.402728),(-58.172698,-26.349295),(-58.167633,-26.335032),(-58.164946,-26.318599),(-58.170011,-26.28656),(-58.169959,-26.270333),(-58.161846,-26.263305),(-58.145981,-26.260101),(-58.123295,-26.251523),(-58.105983,-26.239534),(-58.106604,-26.226098),(-58.121228,-26.216693),(-58.137144,-26.209459),(-58.148927,-26.19933),(-58.151304,-26.18145),(-58.141175,-26.184757),(-58.133217,-26.188995),(-58.127636,-26.194679),(-58.124019,-26.201914),(-58.115234,-26.190958),(-58.106293,-26.170288),(-58.096682,-26.136698),(-58.086501,-26.12719),(-58.021596,-26.105692),(-57.988213,-26.088536),(-57.8727
16,-26.010298),(-57.859694,-25.994278),(-57.85959,-25.980945),(-57.905841,-25.968646),(-57.898244,-25.953557),(-57.863311,-25.927615),(-57.85711,-25.919864),(-57.851374,-25.908392),(-57.852407,-25.897953),(-57.875352,-25.890202),(-57.878194,-25.883174),(-57.875352,-25.876146),(-57.854216,-25.868911),(-57.833235,-25.858782),(-57.819662,-25.849779),(-57.812823,-25.845243),(-57.801868,-25.831394),(-57.817061,-25.797494),(-57.820626,-25.77827),(-57.805588,-25.769899),(-57.787243,-25.764111),(-57.767916,-25.750675),(-57.75107,-25.734862),(-57.740373,-25.72215),(-57.751121,-25.719566),(-57.760578,-25.715639),(-57.768485,-25.709954),(-57.774582,-25.701686),(-57.753705,-25.672747),(-57.741251,-25.661895),(-57.719909,-25.653834),(-57.708695,-25.652903),(-57.698205,-25.653627),(-57.688335,-25.652697),(-57.678981,-25.647012),(-57.67526,-25.639261),(-57.670041,-25.616937),(-57.668077,-25.612286),(-57.638777,-25.6158),(-57.623067,-25.615386),(-57.616298,-25.609185),(-57.611647,-25.585931),(-57.600691,-25.578799),(-57.587152,-25.575699),(-57.575318,-25.564433),(-57.570047,-25.546657),(-57.556921,-25.45984),(-57.55811,-25.44351),(-57.565293,-25.430385),(-57.578729,-25.416742),(-57.640792,-25.37261),(-57.641206,-25.360518),(-57.671488,-25.290135),(-57.671976,-25.289831),(-57.702339,-25.270911),(-57.721769,-25.246106),(-57.754067,-25.180891),(-57.766004,-25.167868),(-57.79732,-25.153916),(-57.81241,-25.143684),(-57.85649,-25.096762),(-57.870701,-25.085289),(-57.890493,-25.078675),(-57.969764,-25.078468),(-57.983769,-25.074231),(-57.989866,-25.064102),(-57.993639,-25.052733),(-58.00046,-25.044362),(-58.009814,-25.042501),(-58.032138,-25.045085),(-58.04206,-25.044362),(-58.111513,-25.012322),(-58.124019,-25.012942),(-58.133889,-24.997853),(-58.224064,-24.941216),(-58.242513,-24.941732),(-58.259152,-24.952998),(-58.287264,-24.979766),(-58.299305,-24.987724),(-58.311966,-24.993615),(-58.323283,-24.995269),(-58.335995,-24.991858),(-58.341318,-24.986174),(-58.344367,-24.978319),(-58.350568,-24.968604),(-58.414802,-24.903078),(-58.438521,-24.872796),(-58.450769,-24.861737),(-58.473403,-24.851299),(-58.694216,-24.812024),(-58.699384,-24.80727),(-58.701916,-24.799312),(-58.708117,-24.795075),(-58.715972,-24.791871),(-58.723258,-24.786703),(-58.737469,-24.782776),(-58.788474,-24.781639),(-58.809196,-24.776781),(-59.000916,-24.644179),(-59.032697,-24.637048),(-59.042412,-24.630537),(-59.062463,-24.612347),(-59.072178,-24.606766),(-59.078327,-24.605629),(-59.097189,-24.605422),(-59.116981,-24.598807),(-59.154395,-24.576277),(-59.177908,-24.568732),(-59.206485,-24.550232),(-59.248084,-24.537209),(-59.257644,-24.530181),(-59.263122,-24.523463),(-59.27046,-24.518192),(-59.285601,-24.516125),(-59.295058,-24.512921),(-59.308081,-24.498762),(-59.340973,-24.4876),(-59.357406,-24.468996),(-59.377766,-24.433546),(-59.387094,-24.423831),(-59.426032,-24.392515),(-59.450268,-24.382387),(-59.453808,-24.374532),(-59.456289,-24.365437),(-59.459725,-24.358512),(-59.4659,-24.353551),(-59.487656,-24.344766),(-59.51042,-24.332674),(-59.521143,-24.325026),(-59.53202,-24.31407),(-59.539979,-24.308799),(-59.558427,-24.305389),(-59.575403,-24.29588),(-59.600518,-24.292883),(-59.610517,-24.289576),(-59.617726,-24.283478),(-59.635322,-24.261464),(-59.67364,-24.225394),(-60.033669,-24.007009),(-60.05173,-24.002875),(-60.072375,-24.004839),(-60.119297,-24.020238),(-60.143249,-24.025199),(-60.218128,-24.027369),(-60.296418,-24.016414),(-60.328018,-24.017964),(-60.337372,-24.016414),(-60.348043,-24.010833),(-60.367577,-23.99471),(-60.378351,-23.98
8509),(-60.38742,-23.987165),(-60.423051,-23.988509),(-60.484882,-23.977347),(-60.517723,-23.955746),(-60.535964,-23.947581),(-60.57175,-23.946031),(-60.577538,-23.94417),(-60.593584,-23.912234),(-60.594617,-23.90655),(-60.603841,-23.90469),(-60.621153,-23.895801),(-60.63216,-23.892287),(-60.64371,-23.891357),(-60.689676,-23.893631),(-60.699417,-23.891254),(-60.719855,-23.875027),(-60.729286,-23.872133),(-60.787776,-23.873445),(-60.816852,-23.874097),(-60.837625,-23.871823),(-60.866358,-23.855907),(-60.899947,-23.830482),(-60.936663,-23.813842),(-60.974826,-23.824074),(-61.006349,-23.805471),(-61.015806,-23.796686),(-61.030224,-23.77462),(-61.036244,-23.768832),(-61.038053,-23.755396),(-61.04999,-23.734726),(-61.066216,-23.715709),(-61.092519,-23.70186),(-61.10965,-23.675608),(-61.118849,-23.66641),(-61.109754,-23.649977),(-61.10624,-23.627239),(-61.109857,-23.606982),(-61.122233,-23.598197),(-61.139493,-23.592719),(-61.154454,-23.580317),(-61.167812,-23.566571),(-61.18024,-23.557166),(-61.187733,-23.556235),(-61.20768,-23.557579),(-61.214398,-23.557166),(-61.223571,-23.551275),(-61.242924,-23.533084),(-61.251941,-23.52926),(-61.272612,-23.523576),(-61.282947,-23.509933),(-61.288967,-23.49412),(-61.296951,-23.481408),(-61.36178,-23.454743),(-61.384853,-23.453709),(-61.398677,-23.450092),(-61.408728,-23.443581),(-61.42051,-23.433659),(-61.435703,-23.423737),(-61.451929,-23.417433),(-61.4703,-23.414125),(-61.491849,-23.413195),(-61.501073,-23.407821),(-61.510375,-23.38436),(-61.526343,-23.374748),(-61.525465,-23.364723),(-61.520788,-23.35325),(-61.51606,-23.344982),(-61.533965,-23.344569),(-61.545773,-23.342605),(-61.554636,-23.338368),(-61.60515,-23.289275),(-61.619619,-23.282867),(-61.666851,-23.282867),(-61.680313,-23.27925),(-61.688762,-23.274496),(-61.70442,-23.258372),(-61.718528,-23.249277),(-61.732997,-23.243386),(-61.744263,-23.234808),(-61.74881,-23.217341),(-61.749482,-23.198841),(-61.752273,-23.187059),(-61.758474,-23.177447),(-61.769274,-23.165562),(-61.780023,-23.15688),(-61.802657,-23.144994),(-61.814284,-23.135176),(-61.83697,-23.104687),(-61.84498,-23.097245),(-61.956446,-23.034407),(-61.992051,-22.99813),(-62.0059,-22.978906),(-62.006055,-22.974359),(-62.008949,-22.969708),(-62.003782,-22.94635),(-62.006055,-22.936842),(-62.017166,-22.921752),(-62.035821,-22.884855),(-62.050446,-22.864495),(-62.071633,-22.843928),(-62.080831,-22.832352),(-62.084604,-22.820156),(-62.089358,-22.81995),(-62.099228,-22.813748),(-62.107858,-22.806514),(-62.108788,-22.803),(-62.115868,-22.799796),(-62.119744,-22.792148),(-62.122534,-22.783156),(-62.126151,-22.775715),(-62.153488,-22.747809),(-62.159741,-22.737164),(-62.164082,-22.725899),(-62.1708,-22.71732),(-62.184184,-22.713703),(-62.188267,-22.708329),(-62.175916,-22.684867),(-62.195036,-22.673602),(-62.194623,-22.66058),(-62.187646,-22.638565),(-62.192969,-22.62823),(-62.196586,-22.626473),(-62.202529,-22.627197),(-62.214932,-22.624303),(-62.239736,-22.613761),(-62.25281,-22.603632),(-62.252965,-22.591126),(-62.238858,-22.57304),(-62.233225,-22.556296),(-62.241183,-22.538416),(-62.263352,-22.514439),(-62.26883,-22.512992),(-62.2756,-22.513508),(-62.281387,-22.512061),(-62.283816,-22.504827),(-62.28361,-22.493768),(-62.28454,-22.488704),(-62.287227,-22.483949),(-62.294513,-22.479815),(-62.305727,-22.476715),(-62.341357,-22.472261),(-62.348722,-22.47134),(-62.368256,-22.464416),(-62.43807,-22.419664),(-62.454969,-22.403851),(-62.461945,-22.388761),(-62.470575,-22.381837),(-62.510727,-22.370158),(-62.523388,-22.364887),(-62.547624,-22.335328),(-
62.564678,-22.321996),(-62.588294,-22.316414),(-62.599611,-22.315174),(-62.61315,-22.311454),(-62.625294,-22.305046),(-62.632684,-22.295951),(-62.630824,-22.287682),(-62.624209,-22.278897),(-62.620282,-22.268459),(-62.626431,-22.255023),(-62.61961,-22.255023),(-62.624829,-22.247271),(-62.631805,-22.24076),(-62.640384,-22.236213),(-62.650357,-22.234456),(-62.627516,-22.184536),(-62.599404,-22.089865),(-62.572843,-22.000775),(-62.529228,-21.865073),(-62.492796,-21.751798),(-62.446184,-21.606897),(-62.411974,-21.500857),(-62.37549,-21.384482),(-62.340402,-21.272861),(-62.307898,-21.169301),(-62.275703,-21.066568),(-62.271879,-21.000939),(-62.271886,-21.000424),(-62.273791,-20.854798),(-62.275031,-20.75868),(-62.276168,-20.670727),(-62.277305,-20.579776),(-62.26883,-20.553111),(-62.232398,-20.501021),(-62.210573,-20.471305),(-62.189662,-20.442834),(-62.144962,-20.382062),(-62.10021,-20.321187),(-62.05551,-20.260416),(-62.01081,-20.199541),(-61.99417,-20.175667),(-61.977582,-20.151792),(-61.960942,-20.128021),(-61.94425,-20.104146),(-61.931383,-20.078308),(-61.92474,-20.065128),(-61.918412,-20.052573),(-61.905545,-20.026735),(-61.892574,-20.000897),(-61.892522,-20.000897),(-61.86441,-19.927413),(-61.836298,-19.853826),(-61.808186,-19.780342),(-61.780074,-19.706961),(-61.761213,-19.657765),(-61.753203,-19.64588),(-61.737235,-19.639575),(-61.648222,-19.62676),(-61.626725,-19.623659),(-61.579544,-19.616838),(-61.532286,-19.610016),(-61.51084,-19.606916),(-61.17236,-19.537463),(-60.833931,-19.467906),(-60.495476,-19.398453),(-60.156995,-19.329),(-60.006384,-19.298097),(-59.924322,-19.297064),(-59.735858,-19.29479),(-59.547498,-19.292413),(-59.359085,-19.290139),(-59.170725,-19.287762),(-59.089541,-19.286729),(-59.069646,-19.291483),(-59.039673,-19.311637),(-59.011096,-19.330964),(-58.96066,-19.360523),(-58.867436,-19.4153),(-58.774108,-19.470077),(-58.680832,-19.52475),(-58.587504,-19.579424),(-58.49428,-19.634201),(-58.401004,-19.688875),(-58.30778,-19.743652),(-58.214452,-19.798325),(-58.175282,-19.821373),(-58.164429,-19.832949),(-58.161174,-19.847211),(-58.164223,-19.880284),(-58.162207,-19.912324),(-58.145154,-19.969581),(-58.141847,-20.000897),(-58.144482,-20.023118),(-58.143139,-20.065286),(-58.144741,-20.086163),(-58.15952,-20.13908),(-58.158797,-20.165125)] +Qatar 
[(51.215258,24.62585),(51.147451,24.576279),(51.095465,24.560182),(51.038879,24.559872),(50.978883,24.567726),(50.928602,24.587208),(50.881008,24.636404),(50.807872,24.746649),(50.826671,24.748684),(50.852794,24.770006),(50.85963,24.7956),(50.851085,24.890815),(50.842784,24.916083),(50.799978,24.988756),(50.796641,25.003607),(50.807384,25.031643),(50.809744,25.051215),(50.808604,25.069159),(50.802419,25.076972),(50.775727,25.091864),(50.768403,25.1258),(50.766938,25.162584),(50.757823,25.186225),(50.764985,25.204291),(50.763438,25.225287),(50.759044,25.247504),(50.757823,25.269355),(50.765798,25.323554),(50.765147,25.344468),(50.750987,25.419623),(50.762462,25.462714),(50.763845,25.482408),(50.750987,25.494696),(50.778005,25.527045),(50.797093,25.529896),(50.808484,25.517239),(50.811648,25.498887),(50.820508,25.491293),(50.820486,25.471829),(50.847179,25.467353),(50.831309,25.507799),(50.827891,25.525051),(50.826671,25.543118),(50.828624,25.554348),(50.832205,25.566148),(50.832856,25.578518),(50.826671,25.590888),(50.818533,25.595771),(50.79656,25.599595),(50.785004,25.60456),(50.796397,25.617255),(50.809581,25.620592),(50.823497,25.622138),(50.836925,25.628811),(50.84018,25.634508),(50.846528,25.649848),(50.850597,25.652981),(50.857188,25.650824),(50.863536,25.641425),(50.870942,25.639309),(50.87615,25.631781),(50.874278,25.614936),(50.867686,25.587836),(50.867686,25.567043),(50.868663,25.559556),(50.87672,25.540595),(50.881521,25.535142),(50.888682,25.529486),(50.898692,25.532213),(50.900727,25.534857),(50.900076,25.53974),(50.904145,25.563137),(50.904796,25.587063),(50.90919,25.597724),(50.920584,25.608344),(50.927501,25.60517),(50.93572,25.598334),(50.949555,25.597724),(50.955903,25.604682),(50.9699,25.633857),(50.97755,25.646145),(50.94988,25.636461),(50.924327,25.63231),(50.903168,25.640123),(50.888682,25.666653),(50.887543,25.680894),(50.894867,25.728095),(50.897309,25.732571),(50.906993,25.739936),(50.90919,25.745103),(50.90797,25.749254),(50.902843,25.755276),(50.901703,25.759467),(50.901703,25.789537),(50.915538,25.786444),(50.920258,25.794379),(50.923595,25.805162),(50.932872,25.810614),(50.939708,25.807196),(50.945567,25.798489),(50.949229,25.787014),(50.949555,25.775295),(50.963715,25.783881),(50.972992,25.797512),(50.975922,25.813788),(50.970714,25.830512),(50.963878,25.823676),(50.949392,25.837958),(50.951182,25.85928),(50.977306,25.916449),(50.981456,25.935858),(50.984223,25.98135),(50.992524,25.975735),(50.995616,25.972846),(50.998546,25.967027),(51.004731,25.967027),(51.027354,26.026923),(51.047374,26.053046),(51.075938,26.06391),(51.091075,26.066962),(51.133149,26.082099),(51.141856,26.087795),(51.147227,26.103095),(51.170584,26.124579),(51.176117,26.139594),(51.194347,26.133531),(51.214854,26.13935),(51.251231,26.160102),(51.33074,26.121161),(51.346853,26.104885),(51.348481,26.085639),(51.346365,26.060696),(51.347179,26.039008),(51.357677,26.029731),(51.376638,26.022895),(51.382498,26.006171),(51.383474,25.985582),(51.388438,25.967027),(51.403982,25.954088),(51.416515,25.952094),(51.424327,25.950873),(51.46754,25.954047),(51.491954,25.95189),(51.511974,25.946112),(51.52947,25.937323),(51.546072,25.926093),(51.559418,25.913764),(51.570323,25.898383),(51.577485,25.880845),(51.594737,25.771877),(51.594493,25.758246),(51.591563,25.746649),(51.583507,25.741767),(51.577647,25.745063),(51.572032,25.759508),(51.563324,25.762885),(51.555431,25.758694),(51.54835,25.748847),(51.544607,25.737372),(51.546072,25.728095),(51.555024,25.720649),(51.56365,25.721625),(51.570323,25.727484),
(51.573985,25.734931),(51.590099,25.7119),(51.576834,25.688666),(51.556814,25.681627),(51.552908,25.707587),(51.539317,25.703925),(51.505138,25.687161),(51.519705,25.681708),(51.534923,25.673245),(51.54713,25.663153),(51.552908,25.652981),(51.54949,25.631171),(51.535899,25.622382),(51.518809,25.61872),(51.505138,25.612006),(51.498302,25.615912),(51.491384,25.618232),(51.493663,25.565823),(51.491547,25.550645),(51.491384,25.549302),(51.48585,25.540513),(51.47755,25.531073),(51.471934,25.521389),(51.474294,25.512112),(51.510997,25.454291),(51.516938,25.440172),(51.518809,25.426215),(51.515717,25.382839),(51.511974,25.330227),(51.513194,25.308254),(51.518809,25.297065),(51.531261,25.292182),(51.590017,25.283759),(51.599132,25.274888),(51.607595,25.255113),(51.611095,25.237535),(51.614594,25.219794),(51.616547,25.137193),(51.605724,25.034247),(51.611013,25.022366),(51.602306,25.013821),(51.586925,24.953437),(51.574067,24.934394),(51.539399,24.898871),(51.528819,24.874416),(51.520844,24.870592),(51.511729,24.867865),(51.505138,24.864081),(51.500255,24.855373),(51.474213,24.764472),(51.47283,24.759426),(51.46754,24.748684),(51.455821,24.740668),(51.450369,24.721666),(51.443614,24.679755),(51.430431,24.664496),(51.389985,24.642483),(51.381521,24.635077),(51.37615,24.611884),(51.363536,24.59634),(51.349783,24.583808),(51.340587,24.569892),(51.328949,24.584784),(51.332693,24.598863),(51.341645,24.614691),(51.346853,24.635077),(51.339122,24.646186),(51.321137,24.648179),(51.285981,24.645006),(51.284434,24.662502),(51.278087,24.662991),(51.267914,24.656195),(51.254568,24.651801),(51.23699,24.650051),(51.220958,24.644232),(51.212576,24.633694),(51.214854,24.627183),(51.215258,24.62585)] +Romania [(26.722379,48.259769),(26.733127,48.27075),(26.744031,48.255583),(26.76372,48.252741),(26.804906,48.25827),(26.821908,48.252534),(26.844749,48.23331),(26.855497,48.237832),(26.897665,48.208971),(26.90397,48.201452),(26.908001,48.184528),(26.917716,48.187964),(26.929085,48.199049),(26.938128,48.204811),(26.950582,48.197524),(26.955646,48.186),(26.963088,48.175381),(26.997607,48.16657),(26.997607,48.15794),(26.986239,48.150421),(26.966602,48.149594),(26.966602,48.143367),(26.99399,48.132412),(27.012697,48.128123),(27.025099,48.135048),(27.033678,48.132386),(27.042773,48.127296),(27.048044,48.122232),(27.047955,48.121409),(27.047424,48.116496),(27.036882,48.107349),(27.034298,48.101768),(27.041739,48.077273),(27.059206,48.05805),(27.083184,48.043658),(27.109332,48.033503),(27.109332,48.026088),(27.088971,48.019809),(27.090005,48.015623),(27.091452,48.011851),(27.093364,48.00844),(27.095793,48.005598),(27.121476,48.013194),(27.134602,48.000456),(27.143748,47.986839),(27.157184,47.991955),(27.16876,47.983067),(27.169793,47.973791),(27.162869,47.965006),(27.150983,47.957797),(27.171964,47.946429),(27.178268,47.944129),(27.178268,47.942096),(27.178268,47.937928),(27.160698,47.921727),(27.17155,47.912632),(27.194185,47.904054),(27.212375,47.889507),(27.211548,47.885011),(27.213202,47.854134),(27.212375,47.847933),(27.222813,47.842662),(27.234906,47.840208),(27.245809,47.83659),(27.253406,47.828089),(27.219196,47.813775),(27.231082,47.807083),(27.244879,47.792588),(27.256196,47.77755),(27.260847,47.769075),(27.264671,47.764553),(27.282448,47.755691),(27.287512,47.752332),(27.288114,47.75068),(27.29113,47.74241),(27.289579,47.731687),(27.295057,47.724427),(27.295057,47.718174),(27.272009,47.71497),(27.281104,47.693007),(27.304049,47.666549),(27.369161,47.60831),(27.397066,47.589086),(27.428382,47.581024),(27.429404,47.5
7813),(27.431342,47.572638),(27.435824,47.55994),(27.438976,47.553713),(27.440268,47.550096),(27.440061,47.541027),(27.442128,47.536634),(27.446056,47.534877),(27.456908,47.533896),(27.459491,47.533224),(27.466726,47.506352),(27.473134,47.491779),(27.483676,47.485423),(27.492564,47.484545),(27.497525,47.482426),(27.501453,47.479945),(27.50724,47.477982),(27.532975,47.477413),(27.534526,47.477982),(27.548168,47.474261),(27.563154,47.468344),(27.575867,47.460412),(27.582998,47.450671),(27.574782,47.446252),(27.569356,47.438553),(27.565531,47.428269),(27.562534,47.416538),(27.580311,47.405996),(27.572353,47.375197),(27.586409,47.368764),(27.588724,47.367389),(27.599948,47.360728),(27.624029,47.321428),(27.636948,47.306674),(27.6447,47.303987),(27.671727,47.299853),(27.68232,47.295254),(27.689865,47.290887),(27.697823,47.287502),(27.722835,47.283317),(27.733273,47.275617),(27.753634,47.251432),(27.752393,47.238926),(27.763039,47.226317),(27.788309,47.204277),(27.800349,47.176966),(27.802106,47.176346),(27.804793,47.168492),(27.807687,47.162471),(27.805879,47.158234),(27.794665,47.155831),(27.80655,47.144617),(27.849855,47.12852),(27.849855,47.121724),(27.843757,47.114386),(27.848202,47.111415),(27.85771,47.109322),(27.867012,47.104645),(27.897604,47.081417),(27.925923,47.068756),(27.938429,47.061056),(27.938739,47.046638),(27.96313,47.043383),(27.986591,47.033228),(28.008295,47.026304),(28.028036,47.03297),(28.037027,47.016459),(28.038146,47.015483),(28.069067,46.988503),(28.082709,46.971527),(28.102295,46.935121),(28.104994,46.920185),(28.105447,46.91768),(28.096869,46.902616),(28.11356,46.894761),(28.114335,46.883393),(28.113095,46.87143),(28.124257,46.861637),(28.124257,46.854195),(28.122615,46.84227),(28.121518,46.8343),(28.137796,46.80624),(28.178156,46.758646),(28.178104,46.739836),(28.234121,46.662424),(28.24425,46.635372),(28.245085,46.631451),(28.24735,46.620825),(28.247144,46.607156),(28.240529,46.596382),(28.230504,46.583902),(28.225439,46.569562),(28.234121,46.553129),(28.224768,46.548736),(28.221099,46.541191),(28.219032,46.503726),(28.221099,46.495768),(28.234121,46.478069),(28.2381,46.475588),(28.242338,46.476518),(28.2458,46.475511),(28.247144,46.467191),(28.247247,46.436185),(28.246058,46.427865),(28.240219,46.420268),(28.22606,46.415359),(28.233294,46.401768),(28.228643,46.39611),(28.226796,46.395548),(28.219548,46.393345),(28.212934,46.388694),(28.207146,46.358153),(28.202392,46.353916),(28.190351,46.351074),(28.187922,46.343787),(28.191953,46.323504),(28.192883,46.311231),(28.191333,46.307769),(28.177794,46.287047),(28.165753,46.27883),(28.156296,46.275368),(28.144928,46.272939),(28.135212,46.269322),(28.131078,46.262397),(28.134489,46.245344),(28.132112,46.239918),(28.120846,46.237851),(28.108961,46.23413),(28.110976,46.225603),(28.129321,46.204674),(28.135626,46.198835),(28.141517,46.191962),(28.144721,46.183229),(28.133249,46.159974),(28.127358,46.135325),(28.110506,46.101491),(28.107823,46.096103),(28.100796,46.081995),(28.092838,46.078377),(28.090151,46.073365),(28.090461,46.06799),(28.096558,46.065045),(28.096869,46.05967),(28.084466,46.024582),(28.082709,46.014712),(28.08612,46.000552),(28.110511,45.950426),(28.112371,45.939213),(28.114852,45.933218),(28.120846,45.926293),(28.124981,45.918387),(28.12281,45.910791),(28.118779,45.903142),(28.117436,45.895236),(28.120226,45.887846),(28.129115,45.875134),(28.131078,45.871361),(28.128184,45.8664),(28.114955,45.859734),(28.110511,45.854308),(28.113302,45.825369),(28.128908,45.795035),(28.146478,45.771161),(28.154953,45.7618
07),(28.163738,45.661503),(28.161774,45.645432),(28.167975,45.632461),(28.153713,45.6275),(28.120846,45.627707),(28.107721,45.6244),(28.091184,45.615976),(28.074751,45.604866),(28.062142,45.593601),(28.118004,45.572723),(28.140949,45.560218),(28.157743,45.538927),(28.161567,45.532726),(28.164048,45.530607),(28.165391,45.528282),(28.165908,45.494589),(28.172936,45.484357),(28.199498,45.461774),(28.212934,45.450198),(28.215146,45.450365),(28.237635,45.452059),(28.266884,45.440483),(28.286418,45.421725),(28.281198,45.401829),(28.310809,45.347827),(28.330239,45.322919),(28.353287,45.312429),(28.370237,45.309225),(28.40517,45.295066),(28.494157,45.278891),(28.577046,45.248092),(28.710061,45.226956),(28.747371,45.23047),(28.778687,45.23109),(28.7914,45.234966),(28.802665,45.244165),(28.774295,45.253363),(28.767267,45.257755),(28.762564,45.265145),(28.75967,45.274189),(28.761531,45.281837),(28.79016,45.292069),(28.788609,45.307106),(28.789746,45.321421),(28.816308,45.326072),(28.831811,45.322299),(28.858114,45.30907),(28.880852,45.306176),(28.893616,45.29982),(28.910049,45.287366),(28.929893,45.279098),(28.952837,45.285092),(28.957384,45.292017),(28.960433,45.311292),(28.96648,45.319871),(28.973663,45.323695),(28.982448,45.325503),(29.004617,45.326072),(29.018156,45.330878),(29.069522,45.360798),(29.110295,45.369066),(29.127503,45.374544),(29.141869,45.385086),(29.150138,45.387877),(29.174322,45.388962),(29.179386,45.391856),(29.183107,45.398987),(29.191582,45.40617),(29.206672,45.41542),(29.228169,45.42374),(29.244602,45.424205),(29.260828,45.422035),(29.281809,45.422293),(29.290491,45.424619),(29.299276,45.428908),(29.307027,45.434127),(29.312815,45.439346),(29.322117,45.443842),(29.332969,45.442137),(29.343924,45.438106),(29.353743,45.435936),(29.430637,45.430406),(29.569699,45.394956),(29.616362,45.365346),(29.628145,45.360798),(29.650262,45.346122),(29.667109,45.311861),(29.672276,45.272948),(29.659254,45.244165),(29.66499,45.23786),(29.666488,45.230625),(29.664215,45.223184),(29.659079,45.215887),(29.659075,45.215881),(29.659028,45.215888),(29.650157,45.217231),(29.623871,45.216498),(29.623871,45.210273),(29.638845,45.195624),(29.64975,45.177802),(29.667003,45.164211),(29.699555,45.161811),(29.676524,45.144599),(29.664399,45.116929),(29.656261,45.028957),(29.650727,45.012112),(29.637543,44.98371),(29.634939,44.96955),(29.633637,44.953355),(29.63087,44.938666),(29.623871,44.929104),(29.630382,44.90766),(29.605154,44.880032),(29.617849,44.861477),(29.611176,44.849311),(29.603201,44.845404),(29.593761,44.844306),(29.583507,44.840277),(29.569347,44.824897),(29.561778,44.820258),(29.542003,44.827826),(29.531098,44.826239),(29.520518,44.822496),(29.510997,44.820502),(29.315929,44.798774),(29.192068,44.792873),(29.156098,44.78441),(29.055837,44.735175),(29.026378,44.714667),(29.000743,44.68891),(28.993907,44.696357),(28.994884,44.711127),(28.981293,44.729438),(28.974946,44.74494),(28.997325,44.751614),(29.021495,44.754706),(29.058279,44.768297),(29.098155,44.775377),(29.120616,44.786566),(29.138682,44.804023),(29.144705,44.826646),(29.138845,44.839179),(29.125987,44.852484),(29.096365,44.875067),(29.101817,44.859117),(29.103363,44.851304),(29.10377,44.840277),(29.087901,44.842719),(29.073497,44.835435),(29.060313,44.832343),(29.048595,44.847154),(29.044119,44.865546),(29.041759,44.925686),(29.052094,44.944648),(29.098806,44.956977),(29.110606,44.970038),(29.101736,44.980292),(29.058279,45.000963),(29.045177,45.004828),(29.014415,45.004828),(28.983246,45.012274),(28.97283,45.010972),(28.969249,45.00
5845),(28.966075,44.996568),(28.960216,44.987698),(28.949229,44.98371),(28.927013,44.982611),(28.91684,44.979682),(28.880056,44.954779),(28.869395,44.943345),(28.863536,44.922309),(28.865082,44.916571),(28.87436,44.911566),(28.876638,44.904853),(28.874685,44.899115),(28.870128,44.89643),(28.865489,44.89468),(28.863536,44.891547),(28.866466,44.882636),(28.873546,44.874986),(28.881521,44.869696),(28.904063,44.862738),(28.952403,44.826646),(28.939708,44.816881),(28.934093,44.814195),(28.924978,44.813056),(28.934418,44.807522),(28.938731,44.80622),(28.938731,44.798774),(28.92628,44.792955),(28.925466,44.783677),(28.932953,44.773505),(28.946137,44.765204),(28.936371,44.759955),(28.923839,44.757799),(28.879405,44.756578),(28.868826,44.753567),(28.808442,44.726874),(28.79835,44.720201),(28.788829,44.706244),(28.78712,44.693305),(28.793712,44.687893),(28.80836,44.696357),(28.80421,44.684231),(28.785411,44.679674),(28.781016,44.665269),(28.784028,44.649237),(28.792491,44.644721),(28.845551,44.651557),(28.869477,44.661811),(28.891368,44.677191),(28.911957,44.696357),(28.897716,44.716783),(28.967459,44.700995),(28.983409,44.692613),(28.987559,44.675279),(28.97934,44.655341),(28.962087,44.63939),(28.938731,44.634263),(28.956391,44.651109),(28.972667,44.670233),(28.975352,44.686835),(28.952403,44.696357),(28.952403,44.68891),(28.959972,44.681952),(28.957774,44.678778),(28.938731,44.675238),(28.924164,44.682074),(28.918224,44.682685),(28.913829,44.679918),(28.909434,44.675198),(28.904552,44.668402),(28.885427,44.658108),(28.854177,44.634345),(28.836274,44.627427),(28.787852,44.609605),(28.774181,44.600775),(28.791515,44.623847),(28.794688,44.634263),(28.779796,44.63581),(28.766938,44.633734),(28.757823,44.626776),(28.753673,44.613756),(28.766775,44.613756),(28.76059,44.603705),(28.764415,44.595893),(28.774425,44.590155),(28.787852,44.586493),(28.748057,44.577786),(28.732677,44.567857),(28.732677,44.552314),(28.737153,44.557074),(28.743907,44.561998),(28.746918,44.565985),(28.761078,44.560207),(28.774181,44.552314),(28.765636,44.538886),(28.753429,44.513495),(28.745616,44.486151),(28.750255,44.466702),(28.758474,44.465277),(28.766368,44.472602),(28.771983,44.48371),(28.778331,44.507392),(28.788097,44.515367),(28.80836,44.525051),(28.838145,44.554023),(28.855642,44.567694),(28.873546,44.573432),(28.892426,44.577053),(28.906261,44.587714),(28.914887,44.601386),(28.924978,44.620551),(28.918468,44.60102),(28.909679,44.584215),(28.844005,44.49372),(28.796723,44.467922),(28.706554,44.381049),(28.6963,44.358344),(28.693126,44.346137),(28.680186,44.353339),(28.664399,44.348375),(28.651215,44.340033),(28.64088,44.328559),(28.635102,44.316392),(28.632335,44.302436),(28.631033,44.26435),(28.635265,44.252021),(28.651215,44.23078),(28.665212,44.199286),(28.668142,44.184312),(28.664887,44.175605),(28.671723,44.168687),(28.658376,44.174465),(28.645844,44.172024),(28.636241,44.164293),(28.63087,44.154486),(28.637462,44.135891),(28.654145,44.054918),(28.671235,44.003323),(28.671723,43.997382),(28.669119,43.989163),(28.660004,43.979804),(28.654959,43.961371),(28.64088,43.932196),(28.637706,43.918647),(28.61378,43.884182),(28.609386,43.874498),(28.593516,43.819525),(28.587657,43.810248),(28.584483,43.800849),(28.589203,43.791327),(28.584809,43.782416),(28.58253,43.772366),(28.581879,43.761705),(28.583018,43.751044),(28.57838,43.741278),(28.434574,43.735213),(28.221254,43.761981),(28.014806,43.830039),(27.98101,43.84934),(27.935845,43.964398),(27.912074,43.993233),(27.912074,43.993336),(27.85647,43.988634),(27.787327,43.9604
19),(27.721698,43.94874),(27.682515,43.987256),(27.676119,43.993543),(27.656275,44.023877),(27.633434,44.029768),(27.574523,44.016281),(27.383837,44.015092),(27.372882,44.020725),(27.353555,44.045271),(27.341669,44.053074),(27.285342,44.072453),(27.26431,44.089765),(27.269012,44.112399),(27.252662,44.121519),(27.251132,44.122373),(27.226741,44.120719),(27.205553,44.129246),(27.100237,44.14449),(27.027476,44.177046),(27.001535,44.165109),(26.884126,44.156531),(26.789455,44.115965),(26.753695,44.10811),(26.708685,44.10811),(26.697212,44.106094),(26.677317,44.097051),(26.667808,44.095087),(26.647758,44.093382),(26.614168,44.084855),(26.415791,44.063789),(26.332335,44.054926),(26.310518,44.052609),(26.231453,44.027495),(26.150734,44.012405),(26.116214,43.998866),(26.079317,43.969049),(26.061644,43.949773),(26.054306,43.934322),(25.934003,43.870321),(25.924495,43.858616),(25.916433,43.844379),(25.869304,43.800893),(25.839332,43.788439),(25.806259,43.763661),(25.804399,43.759914),(25.781144,43.732009),(25.739596,43.718935),(25.732948,43.718781),(25.671384,43.717359),(25.653503,43.708134),(25.637897,43.697282),(25.616503,43.687748),(25.593869,43.680668),(25.575059,43.677387),(25.556558,43.670359),(25.533821,43.668679),(25.488759,43.67054),(25.48245,43.669715),(25.467417,43.667749),(25.426075,43.654391),(25.403131,43.65005),(25.35962,43.654287),(25.323033,43.669713),(25.28872,43.688962),(25.285405,43.690391),(25.252339,43.704646),(25.211308,43.711881),(25.0816,43.718935),(24.963675,43.749605),(24.752628,43.738804),(24.705603,43.743765),(24.661763,43.755657),(24.500137,43.799498),(24.466341,43.802418),(24.431201,43.794176),(24.375494,43.763867),(24.358234,43.760017),(24.336705,43.759251),(24.159383,43.752938),(24.149606,43.75472),(23.799922,43.818463),(23.742871,43.8427),(23.720753,43.845826),(23.636107,43.832158),(23.620854,43.83401),(23.592699,43.837429),(23.484695,43.880604),(23.325151,43.886592),(23.234478,43.877297),(23.196961,43.86275),(23.161924,43.857324),(23.131849,43.847945),(23.052551,43.84282),(22.919562,43.834225),(22.888763,43.839522),(22.863441,43.855412),(22.851039,43.874352),(22.850522,43.896986),(22.874707,43.972046),(22.885869,43.994525),(22.905816,44.003982),(22.926486,44.006152),(22.966277,44.015557),(22.988085,44.017676),(23.023018,44.031629),(23.040071,44.062325),(23.030976,44.093072),(23.008307,44.100446),(22.988085,44.107025),(22.942609,44.111469),(22.906436,44.122889),(22.69164,44.228435),(22.690739,44.228878),(22.685364,44.243657),(22.689498,44.291716),(22.68154,44.305307),(22.662523,44.311663),(22.621182,44.315901),(22.582838,44.328406),(22.549352,44.348974),(22.522687,44.37507),(22.505117,44.404061),(22.50398,44.411709),(22.506357,44.427522),(22.505427,44.435015),(22.500983,44.44194),(22.480002,44.455789),(22.477005,44.463954),(22.477212,44.476976),(22.479795,44.490412),(22.484136,44.499766),(22.491061,44.50421),(22.500363,44.50638),(22.535503,44.507517),(22.548422,44.511703),(22.55669,44.52147),(22.55979,44.538781),(22.565578,44.555421),(22.580461,44.565498),(22.600305,44.569787),(22.621182,44.569219),(22.642163,44.563121),(22.678543,44.545551),(22.700144,44.54183),(22.719367,44.544362),(22.741692,44.551855),(22.759262,44.564671),(22.765153,44.58281),(22.714716,44.623065),(22.700144,44.63061),(22.621182,44.637432),(22.587489,44.649369),(22.553279,44.669161),(22.483723,44.72399),(22.468943,44.730191),(22.450547,44.732981),(22.426052,44.733653),(22.41551,44.727814),(22.38068,44.700528),(22.361146,44.692054),(22.319702,44.685336),(22.304922,44.677377),(22.299031,44.661668)
,(22.185136,44.515113),(22.172114,44.505347),(22.148239,44.500902),(22.126639,44.502659),(22.104314,44.509636),(22.086951,44.52178),(22.076306,44.550718),(22.067107,44.55723),(22.055739,44.562139),(22.045197,44.569219),(22.039822,44.576867),(22.036174,44.589489),(22.034243,44.596168),(22.032174,44.603325),(22.004269,44.651539),(21.994347,44.658567),(21.962411,44.662288),(21.871977,44.695981),(21.855441,44.698461),(21.838284,44.695206),(21.801697,44.683889),(21.756842,44.677274),(21.705166,44.677067),(21.65628,44.687661),(21.619693,44.713913),(21.610288,44.731948),(21.604293,44.749983),(21.595612,44.765951),(21.578042,44.777681),(21.558405,44.78166),(21.49722,44.778043),(21.41216,44.784813),(21.395934,44.790239),(21.378571,44.816645),(21.360587,44.826412),(21.359841,44.826644),(21.3598,44.826657),(21.342811,44.831942),(21.346428,44.845636),(21.355523,44.856591),(21.368545,44.86486),(21.395314,44.871629),(21.453398,44.869562),(21.48151,44.872611),(21.522128,44.880776),(21.536287,44.889302),(21.539078,44.908474),(21.531016,44.924649),(21.51634,44.933899),(21.48151,44.943563),(21.456499,44.952348),(21.40844,44.958342),(21.385185,44.969504),(21.384565,44.97493),(21.387046,44.981545),(21.383945,44.986661),(21.366995,44.987281),(21.353456,44.989813),(21.351389,44.998236),(21.356143,45.008572),(21.363378,45.01653),(21.3733,45.020096),(21.398828,45.021387),(21.409266,45.023971),(21.421875,45.031413),(21.425079,45.036219),(21.425596,45.043247),(21.429524,45.057303),(21.432521,45.061488),(21.440996,45.068723),(21.443786,45.072909),(21.444406,45.07787),(21.444406,45.08526),(21.443786,45.091461),(21.442443,45.092908),(21.449987,45.101021),(21.458566,45.107325),(21.469004,45.111046),(21.48151,45.111563),(21.494119,45.119314),(21.49784,45.131872),(21.493292,45.145101),(21.484388,45.152702),(21.459393,45.17404),(21.433761,45.188819),(21.405546,45.199671),(21.299299,45.223184),(21.257131,45.224114),(21.239251,45.229385),(21.206075,45.245922),(21.189745,45.259564),(21.155638,45.295169),(21.139412,45.303696),(21.12918,45.301887),(21.113057,45.28933),(21.103342,45.286074),(21.09156,45.288089),(21.082775,45.293671),(21.0743,45.300544),(21.063964,45.30628),(21.016668,45.321482),(20.981489,45.33279),(20.966193,45.341575),(20.927642,45.37749),(20.863047,45.418728),(20.830387,45.452524),(20.816021,45.462859),(20.799484,45.468544),(20.781604,45.472574),(20.767135,45.479344),(20.760727,45.493348),(20.783155,45.506061),(20.797624,45.516499),(20.800208,45.530504),(20.787392,45.553655),(20.75804,45.58926),(20.754423,45.605589),(20.762174,45.630601),(20.773233,45.648894),(20.777264,45.657524),(20.779951,45.671684),(20.779124,45.72367),(20.781604,45.733954),(20.785532,45.743411),(20.785739,45.752557),(20.77747,45.762324),(20.765171,45.766768),(20.754113,45.763564),(20.745328,45.754986),(20.739436,45.743411),(20.726827,45.736176),(20.713392,45.733334),(20.700059,45.735401),(20.68807,45.7431),(20.678562,45.75664),(20.655617,45.77731),(20.645902,45.788731),(20.64349,45.795141),(20.642285,45.798343),(20.640114,45.818703),(20.636704,45.827023),(20.629676,45.833276),(20.612312,45.841492),(20.605284,45.846143),(20.572108,45.887691),(20.556709,45.89844),(20.538002,45.903711),(20.499865,45.906656),(20.481674,45.912703),(20.429068,45.946706),(20.410258,45.955594),(20.370984,45.96779),(20.35393,45.976678),(20.338634,45.992801),(20.317653,46.038586),(20.305665,46.053572),(20.242826,46.108091),(20.283237,46.1438),(20.444157,46.1469),(20.468549,46.174134),(20.509373,46.167674),(20.548957,46.15615),(20.578103,46.137547),(20.588025,46.13
2844),(20.600117,46.12964),(20.607558,46.129485),(20.663989,46.137753),(20.683523,46.144678),(20.698819,46.15646),(20.704193,46.16633),(20.704368,46.168526),(20.70533,46.180645),(20.710291,46.188086),(20.717836,46.189998),(20.727137,46.187725),(20.736646,46.186691),(20.744707,46.192272),(20.744811,46.200282),(20.735509,46.222503),(20.734992,46.231908),(20.739333,46.237489),(20.778504,46.260072),(20.787369,46.26339),(20.798658,46.267616),(20.819638,46.271699),(20.839689,46.271079),(20.848991,46.267771),(20.866974,46.257126),(20.875139,46.254387),(20.885061,46.255007),(20.899737,46.260692),(20.906558,46.26219),(20.924438,46.259555),(20.961748,46.248341),(20.981489,46.248858),(20.990584,46.251597),(20.999162,46.251597),(21.007017,46.248858),(21.013942,46.24307),(21.033475,46.231339),(21.051459,46.236094),(21.099208,46.27635),(21.105616,46.278675),(21.134865,46.27852),(21.144476,46.283739),(21.155949,46.298932),(21.164527,46.318259),(21.168661,46.362753),(21.17879,46.384457),(21.195636,46.398099),(21.21517,46.402905),(21.257544,46.404171),(21.280695,46.416393),(21.274391,46.438355),(21.245142,46.47688),(21.247623,46.497473),(21.261679,46.513338),(21.278938,46.528401),(21.291341,46.546721),(21.295268,46.585039),(21.300953,46.603926),(21.316249,46.616639),(21.337643,46.620411),(21.374436,46.618448),(21.396037,46.626354),(21.416811,46.645216),(21.423539,46.658231),(21.425493,46.662011),(21.436241,46.673741),(21.463527,46.677049),(21.483577,46.684852),(21.501767,46.703507),(21.505178,46.723247),(21.478203,46.735934),(21.475206,46.737846),(21.472622,46.740197),(21.470451,46.743014),(21.471381,46.749034),(21.473655,46.754873),(21.477066,46.760222),(21.48151,46.764976),(21.502904,46.80531),(21.51572,46.821588),(21.536597,46.835023),(21.573081,46.841767),(21.583416,46.847943),(21.591788,46.860707),(21.591477,46.871533),(21.588067,46.882204),(21.587137,46.8944),(21.588687,46.906182),(21.589927,46.908869),(21.594371,46.910006),(21.640053,46.935689),(21.648528,46.942976),(21.667545,46.992378),(21.671369,46.99336),(21.670749,46.994394),(21.661757,47.006072),(21.654833,47.009948),(21.645221,47.01124),(21.636643,47.014082),(21.63427,47.019422),(21.632819,47.022686),(21.671886,47.054726),(21.69421,47.069169),(21.74351,47.091597),(21.763767,47.105214),(21.770485,47.114025),(21.775446,47.131672),(21.77989,47.140741),(21.789398,47.150301),(21.811516,47.164693),(21.819887,47.172729),(21.825985,47.185054),(21.825675,47.194278),(21.823298,47.203321),(21.823608,47.214923),(21.827742,47.226007),(21.839318,47.240839),(21.844589,47.249934),(21.856061,47.285745),(21.862159,47.297424),(21.900813,47.335742),(21.919003,47.349669),(21.936986,47.357162),(21.981531,47.366102),(22.001582,47.393827),(22.000238,47.427184),(21.991453,47.461807),(21.988869,47.492942),(22.007886,47.517411),(22.037445,47.539322),(22.09956,47.570948),(22.148549,47.579345),(22.154177,47.582213),(22.162089,47.586244),(22.16736,47.593789),(22.167608,47.594862),(22.169117,47.601385),(22.16984,47.608775),(22.172837,47.615389),(22.197642,47.639315),(22.200743,47.647945),(22.204153,47.666291),(22.207874,47.673732),(22.215212,47.679933),(22.232162,47.688253),(22.23981,47.693472),(22.261721,47.715848),(22.27309,47.723755),(22.29159,47.730705),(22.309366,47.735046),(22.322079,47.735873),(22.368278,47.73117),(22.382644,47.732023),(22.395873,47.735692),(22.407345,47.743082),(22.423675,47.782563),(22.454371,47.787394),(22.528475,47.761039),(22.562994,47.757215),(22.601235,47.760936),(22.637719,47.771555),(22.666967,47.78879),(22.691669,47.8107),(22.703864,47.8171
86),(22.724225,47.82349),(22.745826,47.824989),(22.75182,47.827676),(22.760295,47.838916),(22.758331,47.846228),(22.753267,47.852687),(22.752854,47.86124),(22.763602,47.874753),(22.779622,47.882298),(22.819723,47.892323),(22.836053,47.902452),(22.861167,47.933819),(22.877601,47.946739),(22.897858,47.950976),(22.915841,47.959193),(22.924213,47.972732),(22.915738,47.992989),(22.924109,48.004823),(22.939302,48.00565),(22.957079,48.00012),(22.972478,47.992886),(22.988188,47.986374),(23.004415,47.983067),(23.020331,47.984721),(23.063119,48.007458),(23.076555,48.024512),(23.098982,48.071227),(23.118619,48.091536),(23.139083,48.098125),(23.161718,48.095903),(23.231377,48.079728),(23.248637,48.071227),(23.289772,48.038206),(23.337521,48.010869),(23.360052,47.993144),(23.367286,47.991206),(23.374728,47.99056),(23.382169,47.991129),(23.390127,47.993067),(23.391574,47.993454),(23.393021,47.993583),(23.394468,47.993454),(23.396019,47.993067),(23.460821,47.971337),(23.481595,47.972163),(23.485476,47.974234),(23.488519,47.975858),(23.494307,47.980535),(23.499371,47.986219),(23.503712,47.992886),(23.514461,47.998983),(23.525623,48.00118),(23.563243,48.005753),(23.581744,48.001567),(23.646132,47.996606),(23.687267,47.987253),(23.710521,47.985083),(23.780078,47.987511),(23.796511,47.981982),(23.848394,47.949658),(23.855215,47.934233),(23.876609,47.93444),(23.977275,47.962293),(24.008798,47.961208),(24.025231,47.95325),(24.074943,47.944181),(24.094787,47.937928),(24.131167,47.914544),(24.148737,47.912115),(24.150184,47.912916),(24.209302,47.897594),(24.231006,47.897026),(24.297876,47.919789),(24.347175,47.920875),(24.386242,47.94369),(24.409187,47.952061),(24.428514,47.952526),(24.485564,47.943199),(24.542098,47.943819),(24.561632,47.940537),(24.569808,47.937289),(24.608451,47.921934),(24.633669,47.908291),(24.649275,47.895217),(24.655993,47.879301),(24.656923,47.866201),(24.661471,47.853876),(24.679144,47.84013),(24.712837,47.825919),(24.793659,47.804473),(24.808232,47.79574),(24.820738,47.784087),(24.854431,47.743133),(24.877789,47.718742),(24.896599,47.710061),(24.928858,47.713938),(24.94238,47.715563),(25.017418,47.724582),(25.079947,47.742927),(25.08005,47.743082),(25.080257,47.743133),(25.121908,47.770315),(25.218956,47.878474),(25.261744,47.898576),(25.752516,47.934595),(25.818351,47.952578),(25.870958,47.957022),(25.90124,47.965962),(25.918087,47.968158),(25.965732,47.964903),(26.029294,47.977796),(26.10253,47.978267),(26.125723,47.978416),(26.173058,47.993144),(26.18267,48.003273),(26.204788,48.037379),(26.217087,48.048024),(26.243235,48.062907),(26.255327,48.072338),(26.270727,48.093603),(26.286953,48.124712),(26.297495,48.156338),(26.296152,48.178998),(26.30349,48.212045),(26.311758,48.20972),(26.330258,48.208506),(26.347311,48.212097),(26.380488,48.223207),(26.397437,48.226205),(26.44467,48.227729),(26.461309,48.230391),(26.5874,48.249356),(26.617889,48.258968),(26.665741,48.274212),(26.688531,48.274832),(26.711475,48.261319),(26.722379,48.259769)] +Rwanda 
[(30.471786,-1.066837),(30.463856,-1.075127),(30.456156,-1.086082),(30.453004,-1.097348),(30.456311,-1.108097),(30.470884,-1.118122),(30.474191,-1.131764),(30.472072,-1.137656),(30.4683,-1.14334),(30.465613,-1.149334),(30.467266,-1.155329),(30.476051,-1.16122),(30.48401,-1.15998),(30.490521,-1.156466),(30.494655,-1.155329),(30.506954,-1.164217),(30.511191,-1.170418),(30.515119,-1.196257),(30.521217,-1.210829),(30.539407,-1.241008),(30.545298,-1.261369),(30.5546,-1.273461),(30.556615,-1.281626),(30.555633,-1.284727),(30.553359,-1.289067),(30.550982,-1.294959),(30.549949,-1.3024),(30.555323,-1.31842),(30.568242,-1.328135),(30.597698,-1.340331),(30.60824,-1.347772),(30.623226,-1.362035),(30.632424,-1.367616),(30.698467,-1.39211),(30.718104,-1.394901),(30.737534,-1.406683),(30.743219,-1.432831),(30.741358,-1.458876),(30.738258,-1.470658),(30.732935,-1.476446),(30.738878,-1.489469),(30.755414,-1.511586),(30.767817,-1.524815),(30.772261,-1.532463),(30.781976,-1.56843),(30.791588,-1.590961),(30.807246,-1.60326),(30.831017,-1.594165),(30.838303,-1.615352),(30.837476,-1.641087),(30.824557,-1.719946),(30.824247,-1.730694),(30.826521,-1.735862),(30.835409,-1.749504),(30.83789,-1.758703),(30.826418,-1.786195),(30.829932,-1.796737),(30.83789,-1.836838),(30.832412,-1.853788),(30.822232,-1.868774),(30.816599,-1.884173),(30.824247,-1.902053),(30.808021,-1.914766),(30.801923,-1.921277),(30.796962,-1.929338),(30.826934,-1.934093),(30.829932,-1.960551),(30.816754,-2.018739),(30.835461,-2.014708),(30.853496,-2.0237),(30.868741,-2.038996),(30.879438,-2.053465),(30.887809,-2.082507),(30.853393,-2.193818),(30.844711,-2.237847),(30.848949,-2.306266),(30.844711,-2.326627),(30.834376,-2.345334),(30.821353,-2.354739),(30.804559,-2.36218),(30.789107,-2.371069),(30.775155,-2.374479),(30.76792,-2.378613),(30.758825,-2.381094),(30.75066,-2.37913),(30.698467,-2.353395),(30.687718,-2.349985),(30.674799,-2.351742),(30.663327,-2.360733),(30.649064,-2.387605),(30.63785,-2.39701),(30.616921,-2.398147),(30.595217,-2.391946),(30.573927,-2.389259),(30.5546,-2.400628),(30.521733,-2.399387),(30.488454,-2.383781),(30.434142,-2.339236),(30.428406,-2.331381),(30.423238,-2.317325),(30.418484,-2.311847),(30.415073,-2.313088),(30.383447,-2.305543),(30.37859,-2.303062),(30.362518,-2.307817),(30.352752,-2.316912),(30.34861,-2.322352),(30.344018,-2.328384),(30.331151,-2.340269),(30.316785,-2.347608),(30.297974,-2.353705),(30.278854,-2.357633),(30.262731,-2.358356),(30.250329,-2.355566),(30.219116,-2.340373),(30.207489,-2.339236),(30.203923,-2.345954),(30.203407,-2.355359),(30.20134,-2.362387),(30.156174,-2.419024),(30.139121,-2.430807),(30.116797,-2.43153),(30.093853,-2.422642),(30.072975,-2.408792),(30.056542,-2.394426),(30.025536,-2.357736),(30.013341,-2.348331),(30.00192,-2.3443),(29.992308,-2.34399),(29.983885,-2.344817),(29.976134,-2.34368),(29.95852,-2.328533),(29.956781,-2.327038),(29.949934,-2.321149),(29.941924,-2.316602),(29.931692,-2.316912),(29.928798,-2.322493),(29.928901,-2.331795),(29.92239,-2.382954),(29.929573,-2.459849),(29.907921,-2.5354),(29.903726,-2.637947),(29.90327,-2.649088),(29.897896,-2.671102),(29.888387,-2.692909),(29.876191,-2.71358),(29.862859,-2.731667),(29.842395,-2.752441),(29.823585,-2.763499),(29.803379,-2.767324),(29.778523,-2.7666),(29.756199,-2.759882),(29.744727,-2.759985),(29.737285,-2.769287),(29.733978,-2.790268),(29.729637,-2.79957),(29.719922,-2.805461),(29.697546,-2.808251),(29.678684,-2.805357),(29.637087,-2.791629),(29.626388,-2.788097),(29.619256,-2.787684),(29.589491,-2.798433),(29.582049,-2
.80298),(29.574091,-2.806081),(29.563756,-2.805667),(29.541845,-2.808561),(29.523138,-2.820447),(29.504121,-2.826855),(29.480867,-2.813729),(29.444675,-2.806989),(29.425366,-2.803394),(29.407435,-2.805461),(29.364285,-2.824168),(29.339687,-2.826441),(29.33085,-2.807424),(29.329661,-2.794919),(29.32036,-2.774145),(29.318086,-2.762569),(29.321497,-2.752131),(29.335346,-2.735181),(29.337516,-2.723812),(29.331418,-2.71141),(29.309714,-2.691049),(29.309481,-2.690127),(29.306045,-2.67658),(29.307337,-2.664797),(29.306614,-2.660147),(29.294418,-2.651052),(29.276125,-2.640613),(29.254989,-2.635549),(29.212769,-2.630278),(29.190445,-2.623456),(29.129725,-2.596998),(29.113447,-2.594621),(29.05526,-2.598445),(29.032574,-2.620666),(29.015365,-2.720711),(29.000121,-2.703658),(28.979605,-2.69353),(28.949736,-2.690326),(28.936404,-2.685055),(28.899093,-2.660663),(28.891342,-2.652705),(28.89806,-2.576327),(28.891342,-2.553073),(28.881058,-2.542634),(28.869845,-2.53726),(28.860956,-2.531162),(28.857236,-2.518346),(28.860853,-2.505221),(28.878216,-2.489718),(28.874702,-2.477522),(28.864574,-2.459435),(28.860853,-2.439592),(28.858838,-2.418198),(28.866331,-2.391946),(28.880387,-2.379854),(28.907258,-2.370552),(28.928136,-2.364454),(28.945447,-2.354222),(28.959658,-2.339753),(28.971234,-2.321666),(28.978779,-2.306783),(28.982189,-2.296034),(28.983378,-2.287353),(28.993661,-2.277741),(29.002136,-2.274744),(29.03092,-2.275674),(29.075413,-2.263892),(29.10983,-2.23671),(29.134841,-2.197746),(29.150551,-2.15041),(29.156339,-2.108656),(29.154272,-2.072689),(29.12771,-1.915696),(29.12554,-1.879109),(29.130501,-1.843349),(29.149311,-1.799321),(29.18664,-1.739618),(29.212253,-1.698655),(29.233337,-1.658657),(29.245429,-1.640984),(29.293695,-1.600986),(29.303823,-1.587861),(29.331212,-1.53143),(29.342787,-1.516547),(29.358497,-1.509933),(29.434926,-1.506832),(29.437407,-1.506212),(29.439629,-1.504765),(29.441696,-1.502595),(29.445065,-1.497287),(29.46433,-1.466938),(29.498747,-1.431074),(29.538641,-1.402342),(29.577915,-1.38839),(29.618119,-1.39056),(29.6391,-1.38901),(29.657703,-1.383945),(29.678322,-1.37237),(29.693774,-1.361208),(29.710517,-1.352526),(29.734805,-1.348185),(29.74669,-1.350872),(29.767981,-1.363792),(29.774906,-1.366272),(29.783174,-1.361414),(29.789168,-1.341674),(29.798212,-1.330925),(29.807462,-1.325138),(29.816143,-1.322554),(29.825024,-1.323881),(29.825135,-1.323897),(29.836091,-1.329478),(29.864203,-1.370303),(29.868647,-1.391283),(29.871024,-1.432418),(29.880739,-1.453605),(29.897896,-1.469625),(29.917429,-1.475206),(29.938462,-1.472932),(29.960424,-1.464767),(30.028327,-1.427147),(30.038662,-1.424977),(30.047757,-1.403169),(30.06078,-1.389733),(30.065984,-1.386945),(30.095506,-1.37113),(30.136331,-1.355213),(30.147183,-1.345085),(30.15235,-1.329892),(30.158448,-1.291135),(30.165579,-1.277492),(30.173434,-1.272841),(30.181392,-1.271497),(30.189351,-1.270877),(30.19674,-1.268707),(30.212192,-1.259509),(30.256685,-1.217237),(30.269759,-1.200494),(30.280508,-1.182407),(30.28242,-1.175793),(30.284642,-1.161427),(30.287536,-1.155432),(30.29095,-1.152621),(30.294564,-1.149644),(30.311307,-1.1421),(30.317405,-1.137035),(30.322572,-1.121843),(30.329032,-1.080501),(30.337455,-1.066239),(30.352752,-1.060761),(30.369288,-1.063241),(30.386341,-1.068202),(30.403188,-1.070373),(30.418897,-1.066445),(30.432023,-1.060554),(30.445614,-1.058694),(30.460829,-1.063428),(30.471786,-1.066837)] +Western Sahara 
[(-8.752562,27.661439),(-8.682385,27.661439),(-8.682385,27.567414),(-8.682385,27.473466),(-8.682385,27.379467),(-8.682385,27.285416),(-8.682282,27.27167),(-8.68223,27.232706),(-8.68223,27.172141),(-8.682075,27.093696),(-8.68192,27.000834),(-8.681817,26.897171),(-8.681662,26.786325),(-8.681558,26.671913),(-8.681455,26.55745),(-8.6813,26.446604),(-8.681145,26.342993),(-8.681042,26.250182),(-8.680938,26.171634),(-8.680861,26.111121),(-8.680861,26.072208),(-8.680809,26.058411),(-8.680809,26.013142),(-8.687656,26.000016),(-8.707216,25.995004),(-8.792043,25.995004),(-8.892863,25.994952),(-8.993581,25.994952),(-9.094246,25.994952),(-9.195015,25.994952),(-9.295733,25.994952),(-9.39645,25.994952),(-9.497168,25.994952),(-9.597885,25.994952),(-9.698654,25.994952),(-9.799371,25.994952),(-9.900037,25.994952),(-10.000806,25.994952),(-10.101523,25.994952),(-10.202292,25.994952),(-10.302958,25.994952),(-10.403675,25.994952),(-10.504445,25.994952),(-10.605162,25.994952),(-10.705879,25.994952),(-10.806597,25.994952),(-10.907314,25.994952),(-11.008083,25.994952),(-11.108749,25.994952),(-11.209518,25.994952),(-11.310235,25.994952),(-11.410953,25.994952),(-11.511722,25.994952),(-11.612439,25.994952),(-11.713208,25.994952),(-11.813822,25.994952),(-11.914591,25.994952),(-12.015308,25.9949),(-12.015308,25.919246),(-12.015308,25.843695),(-12.015308,25.768092),(-12.015308,25.69249),(-12.015308,25.616939),(-12.015308,25.541336),(-12.015308,25.465734),(-12.015308,25.390183),(-12.015308,25.31458),(-12.015308,25.238926),(-12.015308,25.163323),(-12.015308,25.087798),(-12.015308,25.012247),(-12.015308,24.936619),(-12.015308,24.861016),(-12.015308,24.785465),(-12.015308,24.709862),(-12.015308,24.63426),(-12.015308,24.558631),(-12.015308,24.48308),(-12.015308,24.407478),(-12.015308,24.331901),(-12.015308,24.256298),(-12.015308,24.180696),(-12.015308,24.105093),(-12.015308,24.029542),(-12.015308,23.95394),(-12.015308,23.878311),(-12.015308,23.802734),(-12.015308,23.727106),(-12.015308,23.651477),(-12.015308,23.575926),(-12.015308,23.495182),(-12.019339,23.460998),(-12.032723,23.444978),(-12.134578,23.419347),(-12.353892,23.322479),(-12.389911,23.312532),(-12.558376,23.290285),(-12.619432,23.270829),(-12.682219,23.230754),(-12.789938,23.161869),(-12.915098,23.082055),(-13.015247,23.018002),(-13.034342,22.995213),(-13.034342,22.995161),(-13.119892,22.88354),(-13.152293,22.820004),(-13.165497,22.752695),(-13.15498,22.688797),(-13.106353,22.560226),(-13.09333,22.495476),(-13.076833,22.245439),(-13.060335,21.995403),(-13.032093,21.581939),(-13.026177,21.49533),(-13.015247,21.333428),(-13.161724,21.333428),(-13.203091,21.333376),(-13.319854,21.333376),(-13.501419,21.333376),(-13.736909,21.333324),(-14.015496,21.333273),(-14.295598,21.333226),(-14.326485,21.333221),(-14.658971,21.333169),(-15.002258,21.333169),(-15.345544,21.333066),(-15.678082,21.333066),(-15.989019,21.333014),(-16.26771,21.332911),(-16.503148,21.332911),(-16.684636,21.332911),(-16.801425,21.332859),(-16.842817,21.332859),(-16.958831,21.332859),(-16.968339,21.324591),(-16.985703,21.237826),(-17.004926,21.141915),(-17.047378,21.0274),(-17.054019,20.995464),(-17.077428,20.904513),(-17.081175,20.874748),(-17.056874,20.766913),(-17.068349,20.792467),(-17.100209,20.837592),(-17.104644,20.862494),(-17.101389,20.876166),(-17.092275,20.899888),(-17.090403,20.914008),(-17.090403,20.961819),(-17.086293,20.979315),(-17.06786,21.030911),(-17.026763,21.328925),(-17.017445,21.365058),(-17.015248,21.379828),(-17.013743,21.419971),(-16.950149,21.429753),(-16.729956,21.469802),(-1
6.580043,21.48055),(-16.189886,21.48055),(-16.040024,21.500084),(-15.919876,21.500084),(-15.749964,21.490317),(-15.609818,21.469802),(-15.45993,21.450268),(-15.289992,21.450268),(-15.149872,21.440501),(-14.970167,21.440501),(-14.839813,21.450268),(-14.749974,21.500084),(-14.669875,21.599665),(-14.640109,21.679763),(-14.609827,21.750069),(-14.620085,21.820375),(-14.629852,21.860424),(-14.580036,21.91024),(-14.519988,21.990338),(-14.459914,22.040103),(-14.439915,22.080152),(-14.379841,22.120201),(-14.310052,22.190507),(-14.270003,22.240297),(-14.220187,22.309647),(-14.209955,22.370186),(-14.189904,22.450259),(-14.189904,22.589914),(-14.169906,22.759852),(-14.140088,22.870181),(-14.120089,22.960047),(-14.100065,23.099676),(-14.039991,23.33992),(-14.019992,23.410252),(-13.979943,23.519599),(-13.930127,23.620187),(-13.890129,23.690493),(-13.839797,23.750076),(-13.769982,23.790125),(-13.660118,23.830123),(-13.580045,23.870198),(-13.479948,23.910221),(-13.390108,23.940504),(-13.31001,23.980553),(-13.279753,24.019594),(-13.229963,24.0899),(-13.160122,24.219815),(-13.120099,24.299862),(-13.060025,24.400476),(-12.990184,24.4698),(-12.94688,24.496748),(-12.946879,24.496749),(-12.910137,24.51959),(-12.819781,24.570388),(-12.709943,24.629971),(-12.629845,24.679761),(-12.56003,24.730533),(-12.499982,24.7696),(-12.430141,24.830165),(-12.399884,24.879955),(-12.359835,24.969795),(-12.310019,25.110432),(-12.26997,25.259803),(-12.229946,25.42),(-12.200155,25.51958),(-12.169873,25.639728),(-12.129849,25.730549),(-12.100058,25.830156),(-12.080059,25.870205),(-12.080059,25.919969),(-12.060009,25.990301),(-12.055823,25.99583),(-12.055823,25.99583),(-12.029752,26.03035),(-11.959911,26.049884),(-11.879864,26.070399),(-11.753877,26.086006),(-11.717239,26.103576),(-11.698222,26.162177),(-11.683546,26.212975),(-11.63621,26.294985),(-11.582983,26.360408),(-11.55221,26.400457),(-11.510688,26.469807),(-11.469709,26.519597),(-11.398912,26.583081),(-11.3369,26.632897),(-11.315868,26.683644),(-11.315868,26.744208),(-11.36031,26.793043),(-11.391574,26.882908),(-11.262641,26.910219),(-11.149366,26.940501),(-11.045859,26.969802),(-10.921835,27.009825),(-10.829076,27.009825),(-10.756781,27.019592),(-10.653273,27.000058),(-10.550282,26.990292),(-10.477986,26.960035),(-10.353963,26.900478),(-10.250455,26.860429),(-10.188443,26.860429),(-10.122039,26.879962),(-10.065867,26.908281),(-10.031709,26.910219),(-9.979929,26.889729),(-9.899365,26.84968),(-9.816864,26.84968),(-9.734362,26.860429),(-9.672351,26.910219),(-9.568843,26.990292),(-9.486315,27.049875),(-9.412056,27.08796),(-9.352008,27.097727),(-9.284622,27.097727),(-9.207469,27.099691),(-9.083446,27.089924),(-9.000919,27.089924),(-8.888109,27.103566),(-8.793903,27.12018),(-8.752872,27.150463),(-8.752872,27.190486),(-8.773387,27.250069),(-8.795841,27.307688),(-8.801706,27.360424),(-8.788012,27.416054),(-8.773387,27.46003),(-8.783619,27.530362),(-8.81292,27.613354),(-8.818449,27.659398),(-8.817035,27.661464),(-8.817034,27.661465),(-8.816537,27.661465),(-8.752562,27.661439)] +Scarborough Reef [(117.753887,15.154369),(117.755692,15.151887),(117.753887,15.150082),(117.751856,15.151887),(117.753887,15.154369)] +South Sudan 
[(33.96912,9.838341),(33.904886,9.7107),(33.883905,9.619853),(33.877704,9.543424),(33.881425,9.499215),(33.888711,9.470818),(33.892019,9.464281),(33.894809,9.459475),(33.897755,9.456582),(33.900855,9.45529),(34.070698,9.454592),(34.083096,9.205117),(34.097473,8.915836),(34.11185,8.626555),(34.107251,8.601531),(34.103169,8.579322),(34.090249,8.556507),(34.069579,8.533615),(33.970774,8.445377),(33.95031,8.433207),(33.931706,8.428143),(33.875172,8.427936),(33.840859,8.423983),(33.824116,8.419487),(33.808096,8.412614),(33.797658,8.405612),(33.756213,8.370136),(33.751872,8.368793),(33.74102,8.367087),(33.695545,8.372875),(33.671309,8.400289),(33.651672,8.434499),(33.620407,8.460699),(33.600099,8.463955),(33.585371,8.458115),(33.57116,8.450131),(33.552143,8.446824),(33.537053,8.453206),(33.521964,8.465324),(33.505117,8.474884),(33.484757,8.474006),(33.490338,8.466513),(33.489924,8.462921),(33.480519,8.45765),(33.413133,8.449718),(33.400162,8.452638),(33.392101,8.451862),(33.384608,8.448013),(33.370862,8.437393),(33.361353,8.434293),(33.25273,8.458167),(33.23516,8.455635),(33.206427,8.430572),(33.207358,8.426593),(33.210355,8.420779),(33.202913,8.414113),(33.187824,8.411038),(33.174078,8.404475),(33.16953,8.397421),(33.167877,8.389722),(33.16798,8.372823),(33.166843,8.368276),(33.161882,8.361506),(33.161159,8.356106),(33.163329,8.352954),(33.172321,8.34652),(33.174078,8.343058),(33.171804,8.330888),(33.163123,8.312052),(33.161159,8.304585),(33.164363,8.292777),(33.171391,8.283113),(33.178419,8.275672),(33.185757,8.26451),(33.194852,8.259239),(33.208288,8.253683),(33.206893,8.240738),(33.199865,8.235519),(33.190511,8.232419),(33.181623,8.225726),(33.173975,8.21459),(33.169944,8.205495),(33.16829,8.195392),(33.16798,8.181362),(33.171081,8.175962),(33.184723,8.166247),(33.187824,8.160898),(33.185137,8.14108),(33.177282,8.127102),(33.164776,8.117051),(33.147516,8.109067),(33.117544,8.104158),(33.11341,8.099093),(33.111343,8.090644),(33.106589,8.08049),(33.100594,8.071033),(33.043853,8.013465),(33.031141,7.996309),(33.026077,7.986284),(33.021633,7.965251),(33.016775,7.958327),(33.012021,7.953314),(33.007569,7.944058),(33.006801,7.942462),(32.993572,7.928148),(32.9898,7.917244),(33.015018,7.850633),(33.023493,7.835078),(33.040753,7.824846),(33.044629,7.816423),(33.047677,7.807277),(33.051295,7.801127),(33.058323,7.797975),(33.085401,7.794978),(33.107519,7.785211),(33.120851,7.783299),(33.126949,7.791567),(33.132117,7.793944),(33.164518,7.801127),(33.172941,7.799628),(33.201467,7.788156),(33.233919,7.783195),(33.242394,7.780715),(33.257794,7.767434),(33.273503,7.749192),(33.29159,7.733069),(33.347142,7.718961),(33.359803,7.719272),(33.384401,7.735446),(33.393289,7.739735),(33.436491,7.748572),(33.462329,7.749502),(33.479537,7.743146),(33.488477,7.73617),(33.506823,7.73095),(33.516796,7.726093),(33.526511,7.717101),(33.540361,7.698911),(33.550851,7.691315),(33.572297,7.685217),(33.637202,7.691315),(33.647744,7.688627),(33.675029,7.670851),(33.706552,7.661859),(33.71606,7.657156),(33.729445,7.642119),(33.761071,7.598142),(33.811507,7.560987),(33.821429,7.558403),(33.842099,7.555922),(33.852538,7.553545),(33.882407,7.536079),(34.006534,7.409936),(34.02369,7.382445),(34.030615,7.3487),(34.031132,7.255372),(34.033612,7.250256),(34.04002,7.247259),(34.085909,7.211499),(34.113194,7.177548),(34.132417,7.163388),(34.151124,7.167471),(34.167764,7.175377),(34.181407,7.167212),(34.190915,7.149539),(34.195773,7.129023),(34.195773,7.09774),(34.195773,7.086959),(34.198253,7.064376),(34.206005,7.054506),(34.213756,7.051
974),(34.219957,7.046134),(34.229879,7.03337),(34.270807,7.003398),(34.275665,6.997403),(34.280729,6.980092),(34.287653,6.975286),(34.294475,6.972237),(34.297575,6.968568),(34.439169,6.934875),(34.503661,6.89002),(34.512343,6.876842),(34.519061,6.861649),(34.523298,6.844183),(34.524745,6.824572),(34.522575,6.821781),(34.513273,6.815296),(34.511102,6.814314),(34.512136,6.808268),(34.516683,6.798062),(34.517924,6.793824),(34.524745,6.752871),(34.53632,6.743078),(34.553994,6.739254),(34.596679,6.739202),(34.618486,6.736541),(34.635539,6.729823),(34.703545,6.684916),(34.709643,6.674012),(34.714501,6.655667),(34.726593,6.641947),(34.733518,6.637606),(34.743543,6.596808),(34.753878,6.556423),(34.776099,6.499449),(34.784264,6.44183),(34.792739,6.421676),(34.832633,6.353541),(34.839041,6.327599),(34.839764,6.268688),(34.844725,6.248689),(34.879969,6.187634),(34.898882,6.13867),(34.915522,6.119705),(34.934436,6.102368),(34.951489,6.08118),(34.959447,6.061724),(34.967964,6.008131),(34.969782,5.996689),(34.95924,5.975502),(34.955726,5.964211),(34.955313,5.952558),(34.960377,5.941706),(34.97681,5.924032),(34.979911,5.912844),(34.977741,5.898659),(34.974537,5.887239),(34.972676,5.876257),(34.974433,5.863106),(34.984355,5.841014),(35.06528,5.726447),(35.079543,5.699782),(35.088225,5.64175),(35.098663,5.622474),(35.143312,5.58966),(35.22341,5.542996),(35.261961,5.511887),(35.269299,5.491785),(35.27364,5.479873),(35.267645,5.468556),(35.258964,5.458143),(35.251832,5.447369),(35.250695,5.435147),(35.254623,5.425923),(35.287489,5.374092),(35.302372,5.357374),(35.320769,5.348719),(35.344953,5.352362),(35.367484,5.368046),(35.408205,5.412565),(35.430219,5.427215),(35.448203,5.430755),(35.469287,5.430755),(35.490164,5.428042),(35.508044,5.423417),(35.530782,5.410084),(35.57388,5.375332),(35.598168,5.368743),(35.621939,5.373782),(35.639819,5.382076),(35.658319,5.386417),(35.683641,5.379595),(35.749373,5.339158),(35.80415,5.318023),(35.809111,5.309109),(35.782033,5.26986),(35.775315,5.24658),(35.778415,5.22715),(35.798259,5.189271),(35.807147,5.165267),(35.807354,5.144674),(35.799189,5.124624),(35.760949,5.076332),(35.755884,5.063439),(35.751647,4.854356),(35.760018,4.805728),(35.784513,4.764181),(35.920835,4.619332),(35.856536,4.619603),(35.781222,4.619922),(35.778439,4.678338),(35.739402,4.681127),(35.711518,4.661608),(35.705939,4.619962),(35.705846,4.619447),(35.700365,4.589111),(35.61214,4.619497),(35.610641,4.620014),(35.562607,4.707137),(35.52218,4.780461),(35.570836,4.904313),(35.49564,4.926429),(35.396117,4.926429),(35.433715,5.003836),(35.411598,5.030376),(35.333563,4.987281),(35.263419,4.948545),(35.245726,4.98172),(35.176189,4.955564),(35.094298,4.924761),(35.00272,4.890314),(34.920829,4.859511),(34.844751,4.830895),(34.763589,4.800366),(34.666277,4.76498),(34.598159,4.73092),(34.529156,4.696419),(34.459674,4.660609),(34.381188,4.620158),(34.280109,4.520061),(34.179133,4.419912),(34.078157,4.319841),(33.977078,4.219692),(33.896049,4.138353),(33.813677,4.056033),(33.701901,3.944076),(33.606196,3.848087),(33.532609,3.774293),(33.527716,3.771431),(33.490648,3.749746),(33.447343,3.744372),(33.286526,3.752537),(33.19542,3.757059),(33.164466,3.763053),(33.143486,3.774086),(33.01724,3.87718),(32.997241,3.885526),(32.979568,3.879196),(32.9189,3.83416),(32.840352,3.794291),(32.756429,3.769022),(32.599281,3.756283),(32.415571,3.741297),(32.371801,3.731065),(32.187782,3.61916),(32.175948,3.605595),(32.174863,3.59265),(32.179203,3.556864),(32.178997,3.541361),(32.175929,3.527234),(32.174552,3.520897),(32.16799,3.5
12242),(32.155846,3.511776),(32.093007,3.524463),(32.076161,3.53317),(32.060864,3.548958),(32.054226,3.559572),(32.041537,3.57986),(32.030168,3.585932),(32.022236,3.58642),(31.943662,3.591255),(31.93002,3.59836),(31.923095,3.615284),(31.920046,3.661302),(31.916274,3.680267),(31.901908,3.704297),(31.830697,3.783879),(31.80703,3.803722),(31.801107,3.806472),(31.780985,3.815815),(31.777884,3.816435),(31.775714,3.810699),(31.696132,3.721273),(31.68683,3.712824),(31.668537,3.70502),(31.564771,3.689802),(31.547201,3.680991),(31.535522,3.666444),(31.53452,3.665563),(31.523739,3.656083),(31.505446,3.659855),(31.377702,3.729308),(31.29502,3.774189),(31.255745,3.786669),(31.214818,3.792354),(31.167585,3.792405),(31.141489,3.785119),(31.104408,3.756175),(31.077668,3.735303),(31.068677,3.731737),(31.04966,3.727991),(31.040358,3.724218),(31.009559,3.699775),(30.995968,3.693522),(30.985788,3.692127),(30.965996,3.692618),(30.955505,3.689001),(30.944447,3.679286),(30.939382,3.668408),(30.936282,3.656858),(30.931734,3.645102),(30.865692,3.548958),(30.843781,3.505911),(30.839543,3.490202),(30.827658,3.511105),(30.837683,3.562678),(30.823214,3.580429),(30.792105,3.592702),(30.779082,3.602598),(30.772261,3.617558),(30.76916,3.651845),(30.765129,3.661716),(30.746939,3.674867),(30.731333,3.663498),(30.714642,3.64381),(30.693351,3.63195),(30.665394,3.632053),(30.653767,3.630606),(30.636455,3.624095),(30.604932,3.60668),(30.592427,3.603528),(30.572635,3.601047),(30.562248,3.601358),(30.553463,3.60451),(30.543024,3.612907),(30.543438,3.617868),(30.549329,3.624095),(30.55491,3.636394),(30.562248,3.675332),(30.562868,3.694866),(30.53889,3.84155),(30.525247,3.867543),(30.515532,3.87085),(30.505507,3.869765),(30.495482,3.867078),(30.48587,3.865579),(30.465716,3.866096),(30.456828,3.867595),(30.418949,3.879713),(30.33792,3.927875),(30.270948,3.94945),(30.205887,3.95038),(30.189351,3.95609),(30.182116,3.965521),(30.180462,3.97565),(30.180359,3.986218),(30.178085,3.997096),(30.171574,4.00818),(30.147493,4.036654),(30.138501,4.059702),(30.133023,4.082801),(30.122688,4.102464),(30.099795,4.115099),(30.057782,4.123651),(30.0429,4.133108),(30.027913,4.15445),(30.004762,4.19928),(29.989363,4.21778),(29.966005,4.23181),(29.954171,4.233825),(29.942647,4.233774),(29.932157,4.236151),(29.924147,4.245582),(29.923217,4.257752),(29.929315,4.268371),(29.93686,4.278319),(29.940115,4.288344),(29.930348,4.308653),(29.908489,4.329194),(29.882703,4.344025),(29.860689,4.347281),(29.814852,4.346764),(29.787825,4.368804),(29.776198,4.405624),(29.776766,4.449342),(29.785654,4.479366),(29.786998,4.489675),(29.785448,4.498564),(29.778523,4.5151),(29.777696,4.523859),(29.783691,4.533833),(29.788295,4.539366),(29.793302,4.545383),(29.796713,4.556028),(29.784621,4.563288),(29.774492,4.5675),(29.751755,4.581272),(29.74514,4.583907),(29.722919,4.578197),(29.715323,4.57967),(29.700802,4.5883),(29.671863,4.613983),(29.660287,4.622122),(29.618533,4.641578),(29.606751,4.643904),(29.576675,4.644446),(29.546392,4.657365),(29.535024,4.660699),(29.528822,4.659872),(29.513268,4.652146),(29.508462,4.65535),(29.498127,4.665608),(29.494096,4.668295),(29.472599,4.672197),(29.457612,4.669535),(29.449241,4.657055),(29.44676,4.622122),(29.44366,4.604707),(29.44459,4.594475),(29.44273,4.583442),(29.423919,4.566053),(29.417305,4.555485),(29.410897,4.532283),(29.402629,4.512077),(29.390226,4.493758),(29.33886,4.449549),(29.328938,4.434924),(29.318706,4.411076),(29.303927,4.387149),(29.28739,4.383584),(29.267443,4.387873),(29.242845,4.387356),(29.23158,4.375548),(29.228
686,4.356712),(29.221296,4.340692),(29.19613,4.337075),(29.173495,4.348004),(29.10983,4.412367),(29.098564,4.420532),(29.058877,4.437689),(29.055983,4.4365),(29.053193,4.450789),(29.042341,4.455336),(28.994075,4.487453),(28.98622,4.495928),(28.925552,4.480813),(28.903641,4.478591),(28.856202,4.482467),(28.825713,4.478384),(28.815274,4.478281),(28.789953,4.48859),(28.785095,4.507995),(28.783648,4.530371),(28.769179,4.549775),(28.755433,4.553987),(28.746855,4.54931),(28.738173,4.540887),(28.72453,4.533988),(28.704067,4.533988),(28.695178,4.532593),(28.653734,4.452701),(28.638644,4.432082),(28.623658,4.422367),(28.58242,4.412626),(28.570535,4.405107),(28.566711,4.394126),(28.564282,4.382602),(28.557306,4.373533),(28.54821,4.37082),(28.538495,4.371827),(28.52878,4.373843),(28.519272,4.374411),(28.496844,4.369011),(28.482426,4.358986),(28.443927,4.308498),(28.425427,4.290824),(28.404137,4.277828),(28.381502,4.275296),(28.357421,4.282866),(28.349101,4.292349),(28.345846,4.306379),(28.336854,4.327489),(28.303161,4.352242),(28.261045,4.350382),(28.216965,4.342036),(28.177587,4.347229),(28.158725,4.360794),(28.140225,4.37932),(28.112371,4.414641),(28.108341,4.433994),(28.101726,4.442133),(28.085551,4.443322),(28.076301,4.437689),(28.059765,4.420067),(28.04974,4.419137),(28.019044,4.462752),(28.014186,4.472209),(28.011241,4.480141),(28.009587,4.488383),(28.008915,4.499081),(28.012119,4.521766),(28.01708,4.539052),(28.014031,4.550033),(27.96251,4.557449),(27.951244,4.55595),(27.944526,4.557294),(27.934294,4.568172),(27.92923,4.569981),(27.920342,4.565278),(27.916104,4.562152),(27.917861,4.560756),(27.917035,4.558431),(27.916414,4.553573),(27.914037,4.54931),(27.907939,4.54869),(27.89099,4.555847),(27.885512,4.556674),(27.856935,4.552462),(27.849132,4.553212),(27.83766,4.559981),(27.819469,4.579567),(27.809238,4.58799),(27.801383,4.590574),(27.776888,4.595741),(27.772444,4.595819),(27.765209,4.612071),(27.760765,4.635997),(27.758801,4.677157),(27.766863,4.735449),(27.763142,4.757101),(27.759215,4.766273),(27.7526,4.777668),(27.744229,4.787848),(27.734823,4.793197),(27.725832,4.792784),(27.706505,4.787435),(27.699063,4.787435),(27.685938,4.797899),(27.679736,4.814488),(27.671882,4.855906),(27.658756,4.879652),(27.640979,4.890943),(27.552716,4.900529),(27.532562,4.907583),(27.514578,4.92244),(27.509204,4.932439),(27.505018,4.953859),(27.501349,4.96329),(27.492823,4.973031),(27.461558,4.99688),(27.45453,4.999799),(27.447864,5.003468),(27.441818,5.007964),(27.43634,5.013158),(27.443058,5.05778),(27.441301,5.070725),(27.431173,5.083618),(27.402131,5.104496),(27.385801,5.143847),(27.358516,5.166766),(27.30131,5.205187),(27.280898,5.23056),(27.264465,5.260145),(27.2378,5.32319),(27.233665,5.33828),(27.228239,5.387605),(27.220126,5.413082),(27.218059,5.42551),(27.220126,5.440883),(27.260744,5.550257),(27.258884,5.559972),(27.261054,5.577593),(27.257127,5.586068),(27.247515,5.592605),(27.240073,5.591985),(27.234596,5.588833),(27.231082,5.587593),(27.227309,5.586714),(27.222607,5.584363),(27.217336,5.585397),(27.211341,5.594879),(27.210669,5.601623),(27.215165,5.616144),(27.216716,5.633456),(27.219196,5.639993),(27.218369,5.645315),(27.201109,5.656968),(27.196407,5.661464),(27.194288,5.667304),(27.193823,5.676321),(27.190774,5.691824),(27.182351,5.707456),(27.170413,5.72035),(27.156254,5.727481),(27.136307,5.734018),(27.131449,5.741459),(27.130933,5.752131),(27.123801,5.768745),(27.11481,5.775282),(27.072745,5.791224),(27.063547,5.790862),(27.045356,5.784919),(27.036055,5.785126),(27.029543,5.78988),(27.021999,5
.805099),(27.003499,5.819284),(26.99182,5.847706),(26.981071,5.859204),(26.937456,5.848507),(26.916475,5.849644),(26.903763,5.866982),(26.894255,5.889667),(26.882059,5.891941),(26.865316,5.885973),(26.842165,5.884267),(26.81891,5.894628),(26.810849,5.912379),(26.805061,5.957183),(26.794364,5.970489),(26.776329,5.981884),(26.705016,6.009789),(26.634322,6.006482),(26.602179,6.010616),(26.543578,6.030847),(26.527972,6.043172),(26.51805,6.061156),(26.509885,6.082679),(26.498827,6.099836),(26.48105,6.104951),(26.440639,6.076995),(26.424826,6.072447),(26.421312,6.09366),(26.42927,6.113091),(26.4456,6.130221),(26.46534,6.144303),(26.48353,6.154457),(26.500067,6.167971),(26.509472,6.186394),(26.508955,6.205462),(26.495829,6.221146),(26.4795,6.225073),(26.464927,6.224402),(26.455625,6.230474),(26.455315,6.254451),(26.452008,6.28029),(26.436195,6.29171),(26.394647,6.300547),(26.375837,6.312587),(26.351755,6.344368),(26.335219,6.360595),(26.315789,6.371576),(26.300699,6.377829),(26.28964,6.387208),(26.282819,6.407853),(26.285093,6.425888),(26.290674,6.444672),(26.289124,6.459426),(26.27021,6.465653),(26.277031,6.477358),(26.286436,6.484179),(26.296462,6.489527),(26.30504,6.496917),(26.324367,6.537199),(26.356096,6.579496),(26.372323,6.609804),(26.379867,6.619778),(26.379224,6.631362),(26.378007,6.65329),(26.329741,6.680575),(26.270623,6.702486),(26.236207,6.72003),(26.219464,6.737523),(26.178743,6.765919),(26.147634,6.804702),(26.131821,6.81173),(26.113734,6.816458),(26.091203,6.830334),(26.080971,6.842348),(26.077354,6.85333),(26.0757,6.864647),(26.071566,6.877462),(26.071152,6.88356),(26.073116,6.88971),(26.073633,6.895136),(26.069189,6.899063),(26.063401,6.899063),(26.05689,6.89741),(26.050585,6.896686),(26.045418,6.899683),(26.035806,6.922266),(26.032292,6.974097),(26.026091,6.99668),(25.981132,7.000297),(25.969453,7.003553),(25.966766,7.009857),(25.966249,7.017454),(25.960461,7.024482),(25.950436,7.028564),(25.930386,7.03151),(25.920257,7.034145),(25.888735,7.05187),(25.884084,7.061069),(25.881707,7.079827),(25.877469,7.087734),(25.860726,7.095537),(25.815767,7.099774),(25.800781,7.104942),(25.791686,7.121117),(25.791583,7.134398),(25.786209,7.142666),(25.7613,7.143751),(25.750242,7.146542),(25.734429,7.162406),(25.724507,7.166695),(25.705077,7.166385),(25.697015,7.168607),(25.672727,7.187831),(25.65402,7.195376),(25.592422,7.211241),(25.576609,7.219819),(25.53165,7.260695),(25.516768,7.269945),(25.500335,7.272787),(25.481163,7.266173),(25.455118,7.278213),(25.416154,7.307772),(25.360033,7.335574),(25.346804,7.345031),(25.335952,7.359604),(25.330577,7.374125),(25.32417,7.402908),(25.316211,7.417223),(25.3097,7.422029),(25.292957,7.427196),(25.285102,7.431589),(25.279211,7.438358),(25.269599,7.454223),(25.263295,7.461044),(25.249962,7.469984),(25.190431,7.501197),(25.169037,7.551168),(25.164593,7.567343),(25.165213,7.5799),(25.186504,7.600106),(25.252546,7.62243),(25.274974,7.641964),(25.279418,7.659482),(25.276421,7.674261),(25.270633,7.688162),(25.266809,7.703097),(25.270633,7.746143),(25.269909,7.760768),(25.264535,7.777718),(25.249652,7.804331),(25.23942,7.835957),(25.229705,7.851615),(25.216889,7.864069),(25.191568,7.872389),(25.18547,7.877918),(25.180509,7.884016),(25.173171,7.888564),(25.148573,7.892594),(25.134414,7.892543),(25.103408,7.885773),(25.089559,7.884895),(25.059173,7.896108),(25.029304,7.918846),(24.981245,7.972331),(24.972666,7.976982),(24.964812,7.982615),(24.958094,7.989178),(24.952616,7.996567),(24.950549,8.014396),(24.930085,8.035454),(24.927811,8.070956),(24.917993,8.08702
7),(24.832107,8.16573),(24.800171,8.180277),(24.742293,8.18684),(24.710564,8.204307),(24.69041,8.206632),(24.670876,8.206839),(24.614859,8.217148),(24.544682,8.206115),(24.51285,8.207149),(24.481327,8.226657),(24.471508,8.22893),(24.464274,8.233194),(24.458899,8.239757),(24.454869,8.248671),(24.431097,8.271408),(24.396578,8.267817),(24.332085,8.245725),(24.326814,8.248567),(24.310278,8.261564),(24.302733,8.26544),(24.295498,8.266525),(24.281029,8.266525),(24.263186,8.268505),(24.257051,8.269186),(24.222428,8.277196),(24.206615,8.283268),(24.17995,8.297712),(24.152872,8.317891),(24.131374,8.343058),(24.121556,8.372307),(24.125173,8.404889),(24.137162,8.438866),(24.193179,8.532426),(24.203204,8.543459),(24.243719,8.570176),(24.25054,8.579632),(24.245579,8.591802),(24.218397,8.613041),(24.211266,8.627071),(24.21509,8.64136),(24.233487,8.667767),(24.235761,8.682003),(24.217674,8.691512),(24.180467,8.690711),(24.170328,8.689327),(24.174782,8.695646),(24.176953,8.709857),(24.182999,8.717919),(24.240566,8.761534),(24.270849,8.775719),(24.380093,8.844216),(24.402882,8.848893),(24.493833,8.858349),(24.542874,8.874653),(24.552795,8.880079),(24.558377,8.886746),(24.558377,8.993354),(24.559307,8.999865),(24.565353,9.014851),(24.655993,9.167297),(24.659869,9.176986),(24.660954,9.182283),(24.680178,9.373357),(24.686999,9.391366),(24.694957,9.402916),(24.723948,9.429219),(24.785236,9.497509),(24.792522,9.520402),(24.794383,9.532184),(24.792677,9.798369),(24.796398,9.818859),(24.799809,9.825215),(24.803685,9.829169),(24.807302,9.830047),(24.816759,9.830745),(24.827611,9.832683),(24.836292,9.836636),(24.844819,9.841674),(24.879287,9.869915),(24.913859,9.890715),(24.926933,9.90521),(24.954373,9.942237),(24.967861,9.955957),(24.976284,9.967351),(24.984965,9.985593),(24.990702,10.00306),(25.016901,10.056752),(25.027754,10.086155),(25.028994,10.092047),(25.029304,10.098248),(25.028684,10.103906),(25.025583,10.114164),(25.017367,10.133),(25.015816,10.138039),(25.013646,10.149072),(25.014421,10.157366),(25.017057,10.168011),(25.067028,10.27493),(25.07695,10.286867),(25.084081,10.293223),(25.090282,10.295755),(25.105992,10.300354),(25.120564,10.30609),(25.131261,10.308287),(25.156893,10.306245),(25.168365,10.308132),(25.19255,10.317433),(25.198286,10.317588),(25.209965,10.316116),(25.216631,10.316968),(25.225623,10.320611),(25.232754,10.322317),(25.239575,10.323014),(25.251512,10.320999),(25.25699,10.319216),(25.267067,10.314798),(25.272803,10.313015),(25.278694,10.312007),(25.28474,10.311852),(25.290166,10.313093),(25.299778,10.316658),(25.305359,10.317123),(25.316211,10.315573),(25.321999,10.315651),(25.327425,10.316581),(25.346029,10.325262),(25.351145,10.326813),(25.362927,10.32813),(25.369335,10.328208),(25.381272,10.329448),(25.391504,10.332652),(25.400961,10.336993),(25.415844,10.342031),(25.427212,10.344124),(25.439305,10.345209),(25.479509,10.343814),(25.490826,10.345597),(25.503383,10.349163),(25.506484,10.349628),(25.58033,10.375311),(25.623324,10.385853),(25.629061,10.386706),(25.661358,10.387016),(25.684148,10.390685),(25.699961,10.395026),(25.705748,10.395956),(25.745901,10.394405),(25.753962,10.395336),(25.762799,10.397739),(25.78378,10.407893),(25.82848,10.419546),(25.839332,10.420554),(25.843053,10.417815),(25.886512,10.364304),(25.891628,10.355984),(25.893644,10.351333),(25.894884,10.34583),(25.90093,10.191059),(25.902791,10.185529),(25.905426,10.181266),(25.911937,10.174212),(25.919689,10.167856),(25.955707,10.146901),(25.9709,10.140312),(25.998082,10.132458),(26.000872,10.131062),(26.007229,10.
125998),(26.010794,10.122691),(26.016685,10.114862),(26.076423,10.01771),(26.088206,10.002827),(26.102313,9.988163),(26.102623,9.987841),(26.179466,9.95898),(26.368163,9.739717),(26.556859,9.520454),(26.600887,9.49198),(26.625744,9.480663),(26.683621,9.476012),(26.694318,9.476942),(27.080393,9.606831),(27.33924,9.612799),(27.385129,9.60479),(27.528066,9.605358),(27.609301,9.595178),(27.615089,9.595798),(27.635915,9.601379),(27.641496,9.601947),(27.769602,9.583318),(27.868355,9.588511),(27.884788,9.591379),(27.895072,9.59541),(27.997076,9.38493),(28.04511,9.331407),(28.44383,9.327981),(28.84255,9.324556),(28.843841,9.324545),(28.831489,9.349248),(28.827372,9.38493),(28.828745,9.427474),(28.84933,9.465901),(28.913833,9.529031),(28.989476,9.58709),(29.009474,9.603239),(29.132258,9.667731),(29.252354,9.711165),(29.483089,9.761679),(29.567321,9.841364),(29.613882,9.914461),(29.615432,10.058147),(29.64587,10.081711),(29.700957,10.115017),(29.967814,10.243329),(30.012979,10.270485),(30.248779,10.121257),(30.484578,9.972028),(30.749265,9.735763),(30.765284,9.724291),(30.779082,9.719873),(30.793035,9.728012),(30.804817,9.738942),(30.824092,9.746176),(30.83696,9.749354),(30.94991,9.752446),(30.950234,9.752455),(31.164382,9.764005),(31.234817,9.792323),(31.449816,10.003279),(31.664816,10.214236),(31.774215,10.348775),(31.801965,10.376241),(31.864184,10.472127),(31.929865,10.636923),(31.942887,10.655552),(32.17848,10.853202),(32.414073,11.050851),(32.430713,11.082193),(32.435363,11.107023),(32.364153,11.240013),(32.348754,11.30758),(32.345705,11.41163),(32.359812,11.573481),(32.354852,11.675774),(32.352991,11.687376),(32.34834,11.703163),(32.34524,11.709106),(32.082207,11.999811),(32.414434,12.001271),(32.746662,12.002731),(32.748213,12.026812),(32.747592,12.039525),(32.745577,12.051126),(32.73302,12.085982),(32.725733,12.132336),(32.725268,12.145255),(32.726198,12.157941),(32.730074,12.181893),(32.730384,12.194864),(32.729712,12.201065),(32.728524,12.206853),(32.728834,12.211917),(32.731779,12.216155),(33.209218,12.210367),(33.203017,12.128098),(33.144984,11.934673),(33.146018,11.81866),(33.132427,11.686213),(33.129016,11.67549),(33.115994,11.646939),(33.104573,11.630583),(33.091447,11.614796),(33.087727,11.608853),(33.083231,11.599164),(33.082921,11.584565),(33.132504,11.213903),(33.182088,10.843241),(33.178367,10.824534),(33.174646,10.812287),(33.14824,10.766036),(33.141418,10.750818),(33.140023,10.739036),(33.15072,10.730974),(33.370603,10.650901),(33.3813,10.645785),(33.389827,10.639248),(33.468995,10.543905),(33.685545,10.367973),(33.902096,10.192041),(33.916978,10.174522),(33.961782,10.064038),(33.966794,10.047269),(33.968345,10.020811),(33.96726,10.000269),(33.956097,9.933994),(33.959818,9.904203),(33.96726,9.884514),(33.973254,9.861776),(33.973254,9.854852),(33.972376,9.848883),(33.96912,9.838341)] +Senegal 
[(-14.912496,16.640639),(-14.882007,16.647667),(-14.711216,16.635782),(-14.699692,16.637125),(-14.665896,16.645652),(-14.655974,16.64591),(-14.647189,16.643533),(-14.619077,16.630666),(-14.60807,16.628805),(-14.597011,16.629167),(-14.555128,16.640587),(-14.544456,16.640846),(-14.503916,16.632681),(-14.49309,16.632474),(-14.482393,16.633663),(-14.471954,16.636402),(-14.462058,16.640794),(-14.435832,16.655987),(-14.426247,16.659604),(-14.406635,16.660793),(-14.343254,16.63666),(-14.335529,16.631544),(-14.331343,16.624103),(-14.331756,16.613147),(-14.337027,16.59413),(-14.335322,16.586482),(-14.325968,16.5824),(-14.28765,16.585707),(-14.277961,16.581263),(-14.274576,16.573511),(-14.273258,16.555631),(-14.268065,16.54788),(-14.257652,16.543177),(-14.233674,16.541575),(-14.22228,16.539612),(-14.214166,16.534961),(-14.197191,16.513463),(-14.190085,16.507107),(-14.166392,16.490881),(-14.13244,16.45264),(-14.073038,16.400654),(-14.039836,16.382102),(-14.033066,16.376056),(-14.0156,16.35585),(-14.006711,16.350269),(-13.987049,16.34288),(-13.97785,16.33797),(-13.971856,16.331201),(-13.971158,16.324224),(-13.977463,16.308825),(-13.979323,16.30035),(-13.977282,16.293839),(-13.967437,16.281385),(-13.963045,16.272393),(-13.962683,16.26397),(-13.967799,16.246038),(-13.964078,16.23007),(-13.947051,16.22113),(-13.907829,16.210019),(-13.899586,16.205317),(-13.883825,16.193741),(-13.877236,16.186868),(-13.872275,16.178755),(-13.863826,16.151832),(-13.859072,16.142272),(-13.852767,16.133487),(-13.844835,16.126045),(-13.835766,16.120568),(-13.827317,16.1185),(-13.819203,16.119276),(-13.811323,16.122066),(-13.794941,16.131213),(-13.778818,16.143202),(-13.771997,16.150437),(-13.754246,16.175345),(-13.746701,16.182373),(-13.738226,16.187127),(-13.729416,16.189504),(-13.72014,16.189969),(-13.71058,16.188884),(-13.712621,16.176068),(-13.718306,16.159288),(-13.722775,16.146096),(-13.719571,16.135605),(-13.714843,16.132763),(-13.704482,16.128836),(-13.69996,16.125632),(-13.698849,16.123926),(-13.697299,16.120361),(-13.696007,16.118914),(-13.693578,16.117519),(-13.687454,16.115245),(-13.68469,16.113798),(-13.678127,16.108217),(-13.675931,16.106925),(-13.668954,16.105736),(-13.662417,16.107442),(-13.635029,16.121601),(-13.629267,16.12279),(-13.610482,16.122893),(-13.606968,16.123255),(-13.599398,16.125477),(-13.569064,16.138189),(-13.516405,16.149041),(-13.502789,16.154416),(-13.497259,16.156069),(-13.491549,16.156276),(-13.484934,16.154984),(-13.4831,16.141031),(-13.499869,16.110646),(-13.501703,16.095143),(-13.491549,16.088425),(-13.455091,16.102222),(-13.440028,16.101861),(-13.435067,16.096538),(-13.427031,16.08305),(-13.422096,16.077211),(-13.394578,16.059279),(-13.388558,16.052096),(-13.386052,16.044396),(-13.385044,16.026878),(-13.383674,16.019178),(-13.369774,15.985485),(-13.36334,15.975098),(-13.355795,15.965538),(-13.34701,15.956753),(-13.321766,15.93815),(-13.315927,15.93045),(-13.31373,15.92213),(-13.314351,15.913603),(-13.318175,15.896705),(-13.318175,15.888489),(-13.31603,15.881771),(-13.311767,15.875776),(-13.293654,15.855312),(-13.284766,15.839913),(-13.27996,15.82286),(-13.279676,15.803533),(-13.281898,15.795109),(-13.285231,15.786634),(-13.287453,15.778159),(-13.286135,15.769633),(-13.280968,15.762812),(-13.273965,15.757489),(-13.258669,15.748807),(-13.250659,15.742761),(-13.243941,15.735785),(-13.232547,15.71961),(-13.228258,15.71129),(-13.227121,15.703952),(-13.228878,15.696614),(-13.243993,15.673721),(-13.247662,15.665608),(-13.248747,15.65672),(-13.2473,15.648297),(-13.237844,15.62344),(-13.222806,1
5.623854),(-13.192808,15.62897),(-13.178235,15.628194),(-13.171052,15.625352),(-13.152552,15.612407),(-13.145239,15.609539),(-13.122218,15.603493),(-13.109712,15.596413),(-13.099015,15.586879),(-13.091806,15.575123),(-13.089455,15.561041),(-13.095087,15.523421),(-13.094958,15.514713),(-13.093098,15.506212),(-13.089584,15.49704),(-13.083845,15.49275),(-13.05323,15.485102),(-13.038889,15.486885),(-13.023774,15.495954),(-13.010545,15.50629),(-12.995843,15.514222),(-12.980185,15.516289),(-12.96432,15.509158),(-12.958016,15.501742),(-12.954941,15.493061),(-12.95091,15.465414),(-12.947603,15.458618),(-12.937552,15.445777),(-12.933056,15.438284),(-12.93117,15.430532),(-12.931015,15.397046),(-12.927217,15.380742),(-12.918974,15.365368),(-12.90683,15.352682),(-12.847273,15.313278),(-12.834535,15.30023),(-12.828049,15.283874),(-12.83345,15.269999),(-12.850038,15.26739),(-12.884196,15.270826),(-12.894376,15.263359),(-12.888356,15.250879),(-12.865489,15.228684),(-12.856782,15.221992),(-12.847893,15.218943),(-12.838333,15.218762),(-12.816965,15.223077),(-12.806397,15.223077),(-12.79632,15.220287),(-12.787225,15.214163),(-12.78198,15.20729),(-12.779474,15.199616),(-12.779267,15.191399),(-12.784667,15.166026),(-12.783995,15.158146),(-12.780921,15.149076),(-12.779758,15.146389),(-12.754566,15.13856),(-12.746639,15.131136),(-12.717049,15.10342),(-12.690306,15.092387),(-12.689585,15.092479),(-12.679273,15.093783),(-12.647647,15.103162),(-12.632273,15.105436),(-12.622946,15.102258),(-12.615065,15.094609),(-12.602146,15.078771),(-12.547059,15.047171),(-12.529592,15.034355),(-12.518973,15.028283),(-12.495563,15.023012),(-12.484892,15.017276),(-12.476262,15.007741),(-12.471508,14.998),(-12.448202,14.918212),(-12.447633,14.907437),(-12.433836,14.899428),(-12.400091,14.860386),(-12.382495,14.84602),(-12.319579,14.817185),(-12.26413,14.774939),(-12.256508,14.745794),(-12.236613,14.728456),(-12.230489,14.714555),(-12.222815,14.70329),(-12.207648,14.696598),(-12.191422,14.693445),(-12.180311,14.692825),(-12.185841,14.68081),(-12.190285,14.673653),(-12.192197,14.666884),(-12.18982,14.655954),(-12.182792,14.650244),(-12.172611,14.647918),(-12.165015,14.641872),(-12.165945,14.625181),(-12.171526,14.616422),(-12.191008,14.597999),(-12.198398,14.58756),(-12.200465,14.577354),(-12.201447,14.565184),(-12.204186,14.554255),(-12.211472,14.547976),(-12.230386,14.536685),(-12.236432,14.519451),(-12.234184,14.498574),(-12.222428,14.454933),(-12.21987,14.434598),(-12.22173,14.391242),(-12.208837,14.388761),(-12.115612,14.35512),(-12.108688,14.34202),(-12.113029,14.329592),(-12.127808,14.324295),(-12.119591,14.310575),(-12.10998,14.298146),(-12.097784,14.28807),(-12.058355,14.274091),(-12.044299,14.264919),(-12.03448,14.250268),(-11.997583,14.166165),(-11.995413,14.144022),(-12.002751,14.103456),(-12.02523,14.026742),(-12.025334,13.985246),(-12.015308,13.959304),(-11.99903,13.943569),(-11.980375,13.929642),(-11.962908,13.909152),(-11.956811,13.890342),(-11.956966,13.870007),(-11.962908,13.829467),(-11.978825,13.784819),(-12.012311,13.751177),(-12.097629,13.70441),(-12.083883,13.692964),(-12.070395,13.673249),(-12.049828,13.634363),(-12.037323,13.598267),(-12.015153,13.564444),(-11.996808,13.544161),(-11.939034,13.50295),(-11.932729,13.496051),(-11.923841,13.481142),(-11.918157,13.474708),(-11.907718,13.469644),(-11.898468,13.468507),(-11.890406,13.465148),(-11.883482,13.453547),(-11.885652,13.43577),(-11.899967,13.382182),(-11.897589,13.371175),(-11.888804,13.368875),(-11.862346,13.356085),(-11.854595,13.350556),(-11.850771,13.343
14),(-11.844569,13.323839),(-11.840022,13.316889),(-11.828136,13.307277),(-11.822555,13.306708),(-11.817078,13.312341),(-11.775116,13.343347),(-11.772533,13.362674),(-11.76659,13.383164),(-11.756513,13.399803),(-11.741217,13.407813),(-11.726902,13.405798),(-11.719771,13.397891),(-11.714035,13.388099),(-11.70401,13.380373),(-11.694501,13.379184),(-11.674967,13.383215),(-11.649801,13.38412),(-11.645253,13.379753),(-11.646597,13.365516),(-11.643962,13.355594),(-11.632903,13.35438),(-11.620345,13.357532),(-11.613111,13.360813),(-11.613007,13.339058),(-11.605824,13.319653),(-11.581175,13.286141),(-11.55854,13.266504),(-11.554251,13.256866),(-11.555802,13.245911),(-11.568566,13.227902),(-11.572338,13.217101),(-11.569341,13.199195),(-11.559729,13.186922),(-11.547843,13.176199),(-11.538542,13.16297),(-11.538077,13.128011),(-11.533064,13.111811),(-11.515391,13.107754),(-11.501541,13.099589),(-11.489707,13.090778),(-11.477098,13.083285),(-11.46082,13.079177),(-11.450278,13.075095),(-11.451415,13.068067),(-11.456945,13.060444),(-11.45896,13.054682),(-11.445162,13.035226),(-11.43953,13.023005),(-11.435654,13.016726),(-11.431675,13.01347),(-11.422993,13.009414),(-11.424543,13.005202),(-11.430124,13.00099),(-11.433432,12.99696),(-11.433535,12.988356),(-11.435964,12.968848),(-11.438703,12.9589),(-11.423096,12.956006),(-11.411521,12.960709),(-11.404235,12.972103),(-11.401806,12.989492),(-11.393434,12.980811),(-11.388835,12.970553),(-11.386561,12.959107),(-11.384443,12.933114),(-11.387801,12.927119),(-11.394209,12.923321),(-11.402891,12.916396),(-11.407283,12.916293),(-11.413588,12.919703),(-11.420668,12.922029),(-11.427231,12.918618),(-11.427954,12.912495),(-11.423096,12.907069),(-11.417309,12.901953),(-11.415138,12.896656),(-11.419324,12.886501),(-11.424905,12.876166),(-11.428626,12.865805),(-11.425474,12.849553),(-11.427024,12.845238),(-11.430021,12.841052),(-11.432502,12.835523),(-11.432088,12.829347),(-11.428161,12.827022),(-11.423458,12.826402),(-11.420978,12.825575),(-11.419686,12.824671),(-11.416017,12.820071),(-11.411676,12.812888),(-11.408472,12.804491),(-11.406353,12.794285),(-11.405423,12.782348),(-11.407594,12.77147),(-11.41457,12.764209),(-11.41457,12.759713),(-11.404028,12.741213),(-11.402168,12.73173),(-11.410332,12.718811),(-11.424182,12.716383),(-11.437359,12.712248),(-11.447953,12.685945),(-11.455756,12.680622),(-11.461905,12.673129),(-11.460975,12.658402),(-11.457151,12.655973),(-11.449555,12.653182),(-11.441752,12.647756),(-11.436842,12.637473),(-11.436636,12.627086),(-11.441803,12.595511),(-11.440925,12.584866),(-11.438599,12.577993),(-11.437359,12.570758),(-11.43984,12.559286),(-11.460614,12.549829),(-11.467797,12.543525),(-11.456686,12.539753),(-11.449451,12.535205),(-11.437876,12.515361),(-11.429814,12.508333),(-11.430383,12.515723),(-11.42475,12.526368),(-11.416999,12.529314),(-11.410849,12.513346),(-11.408679,12.504923),(-11.402374,12.488541),(-11.400514,12.479549),(-11.392607,12.491848),(-11.390437,12.497533),(-11.377776,12.480221),(-11.379792,12.456864),(-11.386561,12.430457),(-11.388422,12.403895),(-11.416017,12.40498),(-11.481646,12.428235),(-11.515391,12.431645),(-11.601535,12.425238),(-11.643083,12.417641),(-11.720029,12.389477),(-11.757805,12.383328),(-11.839609,12.386687),(-11.860176,12.391079),(-11.920637,12.416039),(-11.921981,12.417693),(-11.923944,12.418778),(-11.930869,12.418726),(-11.93521,12.417021),(-11.946269,12.409425),(-11.984303,12.389012),(-11.997583,12.386377),(-12.018874,12.388444),(-12.07918,12.408133),(-12.103985,12.407357),(-12.121917,12.398728),(-12.1550
41,12.369014),(-12.192042,12.348756),(-12.36092,12.305607),(-12.377121,12.313668),(-12.405517,12.356456),(-12.423397,12.369117),(-12.466754,12.384465),(-12.48776,12.389477),(-12.505072,12.390253),(-12.521918,12.386894),(-12.541452,12.379452),(-12.570649,12.363691),(-12.57813,12.361528),(-12.579408,12.361159),(-12.593671,12.361572),(-12.597314,12.365448),(-12.598064,12.372424),(-12.603412,12.381933),(-12.63155,12.41268),(-12.648009,12.425754),(-12.667517,12.433661),(-12.688058,12.435986),(-12.752654,12.432421),(-12.773272,12.435056),(-12.779164,12.443273),(-12.783479,12.453091),(-12.799421,12.460636),(-12.829057,12.462755),(-12.838617,12.466372),(-12.849133,12.475829),(-12.851252,12.484355),(-12.84965,12.491073),(-12.849211,12.495001),(-12.874042,12.516395),(-12.913135,12.536342),(-12.949489,12.535928),(-12.966232,12.496086),(-12.969514,12.476811),(-12.981038,12.466992),(-12.997419,12.465855),(-13.015247,12.472676),(-13.034626,12.470558),(-13.051989,12.472263),(-13.066743,12.480118),(-13.078267,12.496241),(-13.082711,12.515671),(-13.079094,12.532156),(-13.064211,12.566159),(-13.060413,12.583367),(-13.059327,12.60347),(-13.063642,12.62228),(-13.076458,12.635922),(-13.091703,12.638196),(-13.108988,12.635406),(-13.127308,12.634837),(-13.145317,12.643932),(-13.154696,12.632822),(-13.162629,12.638144),(-13.169217,12.638765),(-13.176271,12.637731),(-13.185211,12.637783),(-13.197872,12.634062),(-13.201903,12.63339),(-13.20645,12.635664),(-13.21601,12.645741),(-13.22154,12.648376),(-13.229084,12.647808),(-13.246008,12.643157),(-13.25469,12.641813),(-13.263398,12.643209),(-13.275722,12.651064),(-13.284146,12.653337),(-13.292931,12.652717),(-13.322877,12.646516),(-13.327709,12.644914),(-13.330034,12.642072),(-13.332773,12.639643),(-13.338613,12.63923),(-13.343444,12.640987),(-13.352049,12.646981),(-13.356648,12.648893),(-13.359801,12.649737),(-13.404758,12.661761),(-13.728279,12.673388),(-14.082753,12.674835),(-14.343108,12.675897),(-14.437305,12.676282),(-14.791728,12.67778),(-14.87216,12.678109),(-15.146228,12.679227),(-15.195114,12.679434),(-15.223795,12.675196),(-15.249736,12.663621),(-15.307821,12.622693),(-15.368644,12.595356),(-15.421731,12.557122),(-15.424609,12.555049),(-15.677049,12.439294),(-15.711517,12.432214),(-15.883964,12.442182),(-15.893883,12.442756),(-15.973258,12.437226),(-15.992482,12.440689),(-16.05501,12.461204),(-16.090512,12.464305),(-16.166063,12.451437),(-16.172832,12.448544),(-16.177173,12.445236),(-16.181876,12.443376),(-16.190092,12.444771),(-16.211693,12.453143),(-16.22239,12.455055),(-16.23288,12.452884),(-16.30192,12.416194),(-16.382019,12.373819),(-16.412301,12.361727),(-16.462686,12.361314),(-16.514879,12.34855),(-16.533327,12.347826),(-16.668874,12.356715),(-16.705616,12.348033),(-16.728437,12.332531),(-16.735015,12.34516),(-16.739735,12.358385),(-16.74413,12.378852),(-16.750478,12.385647),(-16.765492,12.395982),(-16.788482,12.417955),(-16.798736,12.430894),(-16.802968,12.443793),(-16.802968,12.485419),(-16.800771,12.494452),(-16.790924,12.503241),(-16.788686,12.512397),(-16.781361,12.522935),(-16.731679,12.552069),(-16.709869,12.552802),(-16.678822,12.550238),(-16.645375,12.570746),(-16.634674,12.579495),(-16.628977,12.587633),(-16.624623,12.596015),(-16.618031,12.605455),(-16.579742,12.632799),(-16.56135,12.611802),(-16.54719,12.573879),(-16.533315,12.565334),(-16.532704,12.580878),(-16.52245,12.593085),(-16.50475,12.598049),(-16.485097,12.593736),(-16.451568,12.575019),(-16.421457,12.566881),(-16.388336,12.551581),(-16.370961,12.550238),(-16.351308,12.559516),(-1
6.32726,12.58869),(-16.30956,12.598049),(-16.286285,12.597073),(-16.266469,12.590033),(-16.246693,12.587958),(-16.223866,12.601752),(-16.212229,12.60635),(-16.204457,12.597602),(-16.198598,12.584906),(-16.19286,12.577582),(-16.11498,12.606106),(-16.097076,12.610256),(-16.07311,12.611721),(-16.057607,12.616197),(-16.044667,12.625474),(-16.029775,12.632961),(-16.007883,12.632148),(-15.990346,12.62287),(-15.963857,12.60224),(-15.947987,12.58983),(-15.939605,12.580634),(-15.927968,12.578599),(-15.863881,12.577582),(-15.857737,12.573188),(-15.844797,12.559882),(-15.839996,12.557074),(-15.831166,12.558254),(-15.805531,12.564521),(-15.794057,12.571479),(-15.782948,12.584662),(-15.768788,12.592597),(-15.747874,12.583726),(-15.742421,12.588324),(-15.736073,12.591376),(-15.729604,12.591498),(-15.715932,12.581773),(-15.710113,12.58275),(-15.704986,12.585191),(-15.699452,12.583726),(-15.69164,12.574897),(-15.684926,12.557929),(-15.679555,12.550238),(-15.673085,12.54442),(-15.665924,12.540188),(-15.65689,12.537584),(-15.644765,12.537258),(-15.633412,12.539537),(-15.587717,12.55801),(-15.569407,12.576321),(-15.538686,12.615424),(-15.527252,12.623725),(-15.517079,12.628852),(-15.509836,12.63581),(-15.507069,12.649563),(-15.512807,12.655666),(-15.523752,12.660346),(-15.529856,12.665758),(-15.521311,12.673774),(-15.531077,12.696438),(-15.540191,12.726874),(-15.543446,12.758043),(-15.534901,12.783026),(-15.523346,12.789862),(-15.509023,12.790351),(-15.494008,12.789049),(-15.480336,12.790473),(-15.46642,12.796454),(-15.456939,12.803656),(-15.450673,12.812812),(-15.446156,12.824612),(-15.425608,12.804999),(-15.406809,12.799709),(-15.394032,12.810248),(-15.391591,12.838202),(-15.407053,12.822333),(-15.421783,12.822821),(-15.436879,12.829291),(-15.453033,12.831448),(-15.468129,12.824897),(-15.492787,12.807115),(-15.510732,12.803453),(-15.520863,12.803412),(-15.535878,12.801337),(-15.549469,12.794338),(-15.555409,12.779608),(-15.555409,12.721584),(-15.544097,12.684068),(-15.542348,12.673774),(-15.541737,12.657457),(-15.54247,12.6494),(-15.545481,12.642727),(-15.569692,12.62226),(-15.577382,12.612494),(-15.627756,12.567613),(-15.641021,12.559475),(-15.652903,12.558905),(-15.663319,12.563463),(-15.672109,12.570746),(-15.666412,12.600653),(-15.696767,12.614163),(-15.76769,12.625393),(-15.815826,12.595364),(-15.833485,12.591254),(-15.836903,12.594428),(-15.841298,12.601467),(-15.8485,12.608547),(-15.860463,12.611721),(-15.87267,12.609565),(-15.889556,12.600165),(-15.901723,12.598049),(-15.913319,12.601223),(-15.928375,12.608873),(-15.953277,12.625393),(-15.954457,12.630316),(-15.953196,12.637397),(-15.952707,12.643744),(-15.956451,12.64643),(-15.969716,12.648179),(-15.976674,12.650214),(-15.981191,12.652655),(-15.987457,12.660956),(-16.001088,12.687445),(-16.00475,12.690823),(-16.009348,12.692694),(-16.013173,12.696194),(-16.016957,12.717515),(-16.022125,12.72899),(-16.02831,12.72899),(-16.02831,12.721584),(-16.029775,12.717475),(-16.040517,12.696967),(-16.045155,12.690497),(-16.048899,12.656399),(-16.076812,12.638983),(-16.117747,12.632148),(-16.128326,12.627672),(-16.148915,12.614447),(-16.155019,12.611721),(-16.165598,12.61286),(-16.189809,12.619127),(-16.22761,12.619127),(-16.23884,12.616523),(-16.257924,12.608222),(-16.268625,12.605455),(-16.28067,12.605455),(-16.312978,12.611721),(-16.323842,12.609565),(-16.337758,12.600165),(-16.346791,12.598049),(-16.350209,12.593329),(-16.355133,12.582587),(-16.361969,12.571234),(-16.370961,12.564521),(-16.411977,12.577582),(-16.44518,12.596137),(-16.460683,12.608873),(-16.4
67193,12.62226),(-16.471181,12.624579),(-16.480336,12.627387),(-16.489573,12.631822),(-16.494496,12.638983),(-16.492828,12.648912),(-16.48648,12.657782),(-16.477284,12.66413),(-16.467193,12.666327),(-16.467193,12.673774),(-16.482574,12.67414),(-16.492299,12.681708),(-16.497914,12.693834),(-16.501332,12.707913),(-16.508168,12.625393),(-16.515696,12.625393),(-16.52892,12.642768),(-16.565663,12.673896),(-16.570221,12.69359),(-16.581939,12.684394),(-16.58967,12.673285),(-16.598785,12.664008),(-16.614613,12.660102),(-16.626576,12.662543),(-16.634674,12.669379),(-16.638661,12.679999),(-16.637929,12.69359),(-16.623769,12.719713),(-16.603831,12.739325),(-16.590728,12.76203),(-16.596913,12.797309),(-16.601389,12.770209),(-16.603749,12.762519),(-16.60912,12.75373),(-16.613759,12.749416),(-16.61852,12.746527),(-16.638417,12.728502),(-16.646881,12.718207),(-16.651031,12.705268),(-16.652252,12.684027),(-16.64509,12.648586),(-16.647206,12.632229),(-16.677317,12.619574),(-16.721059,12.583726),(-16.754628,12.572211),(-16.773101,12.582953),(-16.780873,12.608832),(-16.782541,12.642727),(-16.777089,12.678046),(-16.76122,12.713202),(-16.735463,12.732896),(-16.700551,12.721584),(-16.698964,12.736396),(-16.713246,12.747301),(-16.733957,12.75137),(-16.751536,12.745429),(-16.772125,12.720649),(-16.783559,12.712592),(-16.796213,12.715318),(-16.790273,12.723456),(-16.788238,12.732082),(-16.790273,12.740546),(-16.796213,12.748847),(-16.774485,12.778551),(-16.770131,12.793362),(-16.775054,12.81094),(-16.782541,12.81094),(-16.787221,12.799872),(-16.794057,12.794989),(-16.800282,12.798814),(-16.802968,12.814032),(-16.800526,12.825995),(-16.794545,12.838772),(-16.75886,12.890774),(-16.755238,12.903469),(-16.756256,12.916246),(-16.760976,12.939683),(-16.762074,12.951158),(-16.759836,12.972073),(-16.749989,13.013617),(-16.747792,13.040269),(-16.749338,13.056627),(-16.753651,13.065009),(-16.752125,13.088039),(-16.748508,13.099227),(-16.742152,13.107392),(-16.726339,13.122637),(-16.723186,13.132249),(-16.708407,13.156692),(-16.673525,13.164314),(-16.630014,13.1639),(-16.465218,13.162428),(-16.313289,13.161084),(-16.092062,13.15912),(-15.961734,13.157984),(-15.897441,13.157399),(-15.870784,13.157157),(-15.833318,13.156847),(-15.8266,13.161678),(-15.82474,13.232914),(-15.824367,13.248512),(-15.822673,13.319421),(-15.818694,13.333528),(-15.807118,13.339781),(-15.737407,13.346344),(-15.6925,13.360348),(-15.679891,13.360658),(-15.612453,13.354225),(-15.596123,13.355827),(-15.566048,13.365671),(-15.542948,13.377996),(-15.519539,13.3866),(-15.488326,13.385256),(-15.379237,13.362519),(-15.342392,13.36257),(-15.309836,13.368436),(-15.277073,13.380296),(-15.248341,13.398977),(-15.227774,13.425176),(-15.221728,13.452823),(-15.217955,13.51468),(-15.203951,13.536901),(-15.181331,13.559843),(-15.160491,13.580981),(-15.13765,13.589973),(-15.110158,13.57248),(-15.065251,13.531268),(-15.01559,13.496051),(-15.015435,13.495999),(-15.015229,13.495844),(-15.015074,13.495741),(-14.949703,13.463081),(-14.915907,13.454425),(-14.877614,13.451376),(-14.862576,13.447061),(-14.829142,13.427011),(-14.810797,13.421172),(-14.790333,13.417348),(-14.779636,13.410888),(-14.759482,13.38337),(-14.731577,13.359005),(-14.698142,13.345181),(-14.661607,13.340918),(-14.624555,13.345181),(-14.592154,13.35314),(-14.580733,13.34916),(-14.552109,13.323094),(-14.542854,13.314666),(-14.529574,13.307277),(-14.515104,13.303168),(-14.483633,13.301928),(-14.470326,13.298259),(-14.459139,13.286916),(-14.45149,13.276865),(-14.442034,13.268545),(-14.43144,13.26144),(-14.394982,
13.242965),(-14.368834,13.235705),(-14.341781,13.233638),(-14.284911,13.238547),(-14.230625,13.228574),(-14.201687,13.229555),(-14.179827,13.240123),(-14.131794,13.275573),(-14.116808,13.282162),(-14.099496,13.281955),(-14.056579,13.297174),(-14.016272,13.297639),(-13.976119,13.308207),(-13.900568,13.314563),(-13.851036,13.33575),(-13.822562,13.378203),(-13.818713,13.429362),(-13.84269,13.476646),(-13.852793,13.484811),(-13.875479,13.497911),(-13.882504,13.503782),(-13.8846,13.505533),(-13.889793,13.513957),(-13.896175,13.532922),(-13.901343,13.541629),(-13.918086,13.554988),(-13.94209,13.566925),(-13.967489,13.575581),(-13.988495,13.579146),(-14.001905,13.577028),(-14.022266,13.565323),(-14.034022,13.560207),(-14.044461,13.558605),(-14.06185,13.560052),(-14.072599,13.55969),(-14.092882,13.554729),(-14.139132,13.531552),(-14.196829,13.518788),(-14.215794,13.510753),(-14.271217,13.476956),(-14.329947,13.456441),(-14.347285,13.452358),(-14.365553,13.454839),(-14.389582,13.465872),(-14.462135,13.516656),(-14.470275,13.522354),(-14.480378,13.533129),(-14.486863,13.546539),(-14.500221,13.589973),(-14.50973,13.609093),(-14.523243,13.625655),(-14.542596,13.640822),(-14.585539,13.660459),(-14.626674,13.663508),(-14.667188,13.652889),(-14.728579,13.619325),(-14.74093,13.615449),(-14.754934,13.620384),(-14.796017,13.644698),(-14.802942,13.65232),(-14.822372,13.7172),(-14.831312,13.735623),(-14.843973,13.752676),(-14.860314,13.765595),(-14.879268,13.780581),(-14.915803,13.792441),(-15.016779,13.796885),(-15.07631,13.818951),(-15.097704,13.819984),(-15.17062,13.793784),(-15.245964,13.746733),(-15.267255,13.741798),(-15.27573,13.748051),(-15.295108,13.773191),(-15.304462,13.781873),(-15.347766,13.788643),(-15.394069,13.771512),(-15.43634,13.741178),(-15.467501,13.708286),(-15.478611,13.691258),(-15.488946,13.670329),(-15.496801,13.648341),(-15.49803,13.641522),(-15.50047,13.627981),(-15.502486,13.588267),(-15.518195,13.5831),(-15.567133,13.583255),(-15.629196,13.583513),(-15.691363,13.58372),(-15.753426,13.583978),(-15.815593,13.584185),(-15.877708,13.584443),(-15.939772,13.58465),(-16.001938,13.584908),(-16.064054,13.585115),(-16.085383,13.585168),(-16.126117,13.58527),(-16.188232,13.585528),(-16.250295,13.585735),(-16.312462,13.585994),(-16.374577,13.5862),(-16.436641,13.586459),(-16.498756,13.586665),(-16.561399,13.586914),(-16.561513,13.587104),(-16.57136,13.601793),(-16.576405,13.61225),(-16.575754,13.640326),(-16.558665,13.655911),(-16.533518,13.668606),(-16.508168,13.687974),(-16.513499,13.690863),(-16.523793,13.698391),(-16.529286,13.700995),(-16.516591,13.712836),(-16.507232,13.728095),(-16.508901,13.74022),(-16.529286,13.742621),(-16.526601,13.72663),(-16.531484,13.707261),(-16.542104,13.693671),(-16.556549,13.69477),(-16.56371,13.673774),(-16.579254,13.659573),(-16.598866,13.654202),(-16.618031,13.660061),(-16.635569,13.678046),(-16.640004,13.696479),(-16.631703,13.735785),(-16.62975,13.760484),(-16.627105,13.770453),(-16.618031,13.783596),(-16.582509,13.824693),(-16.576405,13.838813),(-16.570546,13.83511),(-16.554921,13.827948),(-16.549143,13.82453),(-16.551381,13.829088),(-16.556549,13.845649),(-16.52184,13.851793),(-16.520131,13.832017),(-16.512929,13.8133),(-16.501576,13.801988),(-16.48705,13.804633),(-16.494618,13.82746),(-16.496327,13.852729),(-16.5006,13.872707),(-16.515696,13.879788),(-16.510365,13.893297),(-16.488678,13.986396),(-16.48705,14.002631),(-16.508901,13.971381),(-16.518056,13.95185),(-16.52595,13.910468),(-16.536204,13.885728),(-16.549428,13.869289),(-16.562815,13.872952),
(-16.570221,13.872952),(-16.58023,13.859524),(-16.59789,13.854315),(-16.618642,13.85163),(-16.637929,13.845649),(-16.648671,13.836859),(-16.652252,13.828559),(-16.65392,13.820054),(-16.659006,13.810858),(-16.664459,13.805976),(-16.675933,13.799547),(-16.682525,13.794094),(-16.692494,13.775702),(-16.700063,13.769029),(-16.710194,13.773627),(-16.713368,13.783352),(-16.71345,13.795722),(-16.715321,13.806342),(-16.724477,13.810858),(-16.737213,13.819648),(-16.742421,13.839423),(-16.741078,13.860419),(-16.73412,13.872952),(-16.740224,13.886623),(-16.744456,13.902167),(-16.747792,13.934394),(-16.746816,13.951361),(-16.743398,13.961086),(-16.737213,13.967678),(-16.714223,13.985582),(-16.702504,13.989976),(-16.689687,13.990546),(-16.660268,13.98725),(-16.646311,13.982978),(-16.63679,13.974921),(-16.637929,13.961656),(-16.623931,13.973944),(-16.612904,13.988715),(-16.598988,13.996812),(-16.576405,13.988959),(-16.576772,14.010972),(-16.568918,14.020819),(-16.556264,14.026923),(-16.542348,14.03734),(-16.554799,14.033108),(-16.568349,14.030341),(-16.580963,14.032782),(-16.590728,14.043606),(-16.545522,14.07099),(-16.529286,14.085191),(-16.503326,14.116278),(-16.489125,14.128241),(-16.457021,14.137356),(-16.441477,14.156562),(-16.429351,14.160956),(-16.42101,14.157131),(-16.412994,14.149807),(-16.404693,14.145413),(-16.394887,14.150377),(-16.388824,14.154853),(-16.372629,14.162421),(-16.364735,14.167792),(-16.427724,14.170111),(-16.454742,14.180162),(-16.452952,14.20185),(-16.468739,14.193834),(-16.474599,14.179389),(-16.471506,14.164252),(-16.460357,14.15412),(-16.472768,14.147366),(-16.483062,14.148139),(-16.493072,14.151842),(-16.50475,14.15412),(-16.513173,14.151028),(-16.516754,14.143459),(-16.518788,14.134263),(-16.52184,14.126166),(-16.547515,14.081855),(-16.556549,14.071519),(-16.566803,14.066962),(-16.58845,14.061957),(-16.600331,14.054185),(-16.60912,14.043199),(-16.621571,14.019965),(-16.631703,14.010077),(-16.654164,14.002753),(-16.675404,14.006903),(-16.697011,14.014228),(-16.721059,14.016303),(-16.697581,14.05744),(-16.696848,14.064683),(-16.680002,14.069403),(-16.652252,14.099514),(-16.665639,14.095649),(-16.678578,14.076117),(-16.689687,14.071519),(-16.711781,14.070868),(-16.72053,14.068915),(-16.727895,14.064683),(-16.716135,14.051337),(-16.717112,14.042385),(-16.73412,14.023139),(-16.73705,14.015448),(-16.739613,14.001125),(-16.7447,13.992743),(-16.764556,13.970608),(-16.76887,13.961656),(-16.773427,13.93358),(-16.76887,13.838813),(-16.775054,13.838813),(-16.780995,14.030748),(-16.786,14.059719),(-16.802561,14.107856),(-16.82079,14.143459),(-16.829701,14.15412),(-16.842763,14.160061),(-16.868804,14.165758),(-16.878733,14.174547),(-16.883901,14.188951),(-16.892405,14.242865),(-16.898915,14.259955),(-16.906972,14.272528),(-16.925893,14.291246),(-16.93456,14.295885),(-16.941151,14.297268),(-16.945465,14.301418),(-16.947011,14.314521),(-16.94579,14.324774),(-16.940745,14.344428),(-16.939565,14.355862),(-16.944447,14.372382),(-16.956532,14.390123),(-16.987945,14.421047),(-17.059478,14.453925),(-17.070465,14.465399),(-17.076161,14.478583),(-17.11148,14.530219),(-17.137847,14.599026),(-17.178212,14.653144),(-17.206939,14.6789),(-17.22411,14.690416),(-17.243764,14.699286),(-17.265452,14.702826),(-17.281646,14.707261),(-17.324045,14.728909),(-17.330556,14.736314),(-17.363596,14.735907),(-17.392323,14.739325),(-17.417388,14.737698),(-17.439809,14.722073),(-17.431386,14.71247),(-17.419342,14.70897),(-17.426829,14.694403),(-17.439809,14.688463),(-17.436391,14.685289),(-17.430247,14.678046),(-17.42
6829,14.674872),(-17.433013,14.660549),(-17.436391,14.659573),(-17.439809,14.653754),(-17.452952,14.662055),(-17.520172,14.743069),(-17.536041,14.757392),(-17.370961,14.800279),(-17.322133,14.839016),(-17.172963,14.900824),(-17.138661,14.92772),(-16.979726,15.111884),(-16.947906,15.157864),(-16.878245,15.2331),(-16.739166,15.472398),(-16.669667,15.561835),(-16.53893,15.774359),(-16.534291,15.784369),(-16.536,15.794664),(-16.542348,15.808824),(-16.542346,15.808848),(-16.519736,15.846372),(-16.509918,15.901511),(-16.507541,16.010393),(-16.501598,16.021866),(-16.4878,16.066049),(-16.470437,16.106357),(-16.465993,16.174053),(-16.458758,16.194155),(-16.444185,16.206299),(-16.407288,16.214825),(-16.391682,16.224747),(-16.385378,16.234927),(-16.381088,16.246658),(-16.340161,16.415898),(-16.334941,16.455793),(-16.327397,16.474551),(-16.313547,16.49331),(-16.295099,16.510621),(-16.273808,16.523179),(-16.251019,16.527623),(-16.217946,16.521525),(-16.206732,16.52075),(-16.195363,16.522714),(-16.185183,16.527313),(-16.155779,16.547518),(-16.144307,16.552686),(-16.132266,16.554753),(-16.119657,16.552376),(-16.110459,16.54695),(-16.103069,16.53956),(-16.090977,16.522197),(-16.074027,16.504885),(-16.05377,16.495428),(-16.031187,16.492896),(-16.007313,16.496152),(-16.000181,16.501061),(-15.993102,16.504627),(-15.985609,16.505867),(-15.977289,16.503955),(-15.96437,16.498994),(-15.958065,16.498426),(-15.950675,16.499821),(-15.91104,16.516874),(-15.896932,16.518424),(-15.861999,16.51026),(-15.853575,16.51026),(-15.828926,16.514549),(-15.812958,16.513773),(-15.767741,16.499459),(-15.735288,16.497702),(-15.726813,16.495273),(-15.703197,16.484421),(-15.687539,16.480546),(-15.672191,16.481372),(-15.657412,16.486178),(-15.643614,16.494653),(-15.621497,16.514239),(-15.609663,16.521318),(-15.594418,16.523644),(-15.551475,16.515117),(-15.536696,16.516512),(-15.52481,16.522197),(-15.515198,16.53093),(-15.49091,16.56452),(-15.478508,16.575165),(-15.463987,16.58147),(-15.448846,16.580178),(-15.43944,16.572685),(-15.433704,16.562504),(-15.426883,16.553151),(-15.414532,16.548035),(-15.399805,16.548345),(-15.385749,16.551756),(-15.343632,16.568706),(-15.328026,16.571961),(-15.295935,16.573046),(-15.248599,16.565812),(-15.232476,16.56638),(-15.125093,16.58271),(-15.104009,16.590358),(-15.097498,16.594492),(-15.093208,16.600176),(-15.092847,16.607618),(-15.096154,16.613974),(-15.112329,16.629219),(-15.118271,16.642603),(-15.116256,16.655522),(-15.107574,16.665392),(-15.093312,16.669165),(-15.078894,16.664979),(-15.069851,16.655264),(-15.061996,16.643791),(-15.051557,16.63418),(-15.044374,16.631906),(-15.036933,16.632578),(-15.029853,16.635471),(-15.023652,16.639967),(-15.018743,16.646117),(-15.016521,16.652576),(-15.013368,16.666581),(-15.006909,16.675831),(-14.996625,16.682445),(-14.974301,16.691385),(-14.962932,16.685804),(-14.949858,16.672317),(-14.948934,16.670553),(-14.941331,16.656039),(-14.930944,16.643378),(-14.912496,16.640639)] +Serranilla Bank [(-78.637074,15.862087),(-78.640411,15.864),(-78.636871,15.867296),(-78.637074,15.862087)] +Singapore 
[(103.960785,1.391099),(103.985688,1.385443),(103.999522,1.380316),(104.003429,1.374172),(103.991873,1.354926),(103.974864,1.334459),(103.954356,1.318101),(103.931895,1.311469),(103.907237,1.308743),(103.887706,1.301256),(103.852712,1.277289),(103.846934,1.271918),(103.844086,1.2685),(103.838878,1.266262),(103.82602,1.264309),(103.801606,1.264797),(103.789561,1.26789),(103.784434,1.273871),(103.77589,1.287584),(103.755138,1.297105),(103.730154,1.302924),(103.708751,1.305243),(103.665294,1.304104),(103.647634,1.308417),(103.640391,1.322252),(103.644705,1.338039),(103.674571,1.380316),(103.678884,1.399237),(103.683849,1.409898),(103.695079,1.421332),(103.708344,1.429389),(103.717947,1.430976),(103.739757,1.428127),(103.762218,1.430976),(103.79005,1.444281),(103.804942,1.448635),(103.831554,1.447089),(103.857188,1.438707),(103.932465,1.401109),(103.960785,1.391099)] +San Marino [(12.42945,43.892056),(12.399581,43.903218),(12.385629,43.924534),(12.395654,43.948409),(12.411049,43.959661),(12.421389,43.967219),(12.453325,43.979053),(12.48216,43.982567),(12.489188,43.97311),(12.492392,43.956419),(12.490325,43.939159),(12.483094,43.929205),(12.48216,43.927919),(12.479576,43.9258),(12.478026,43.923216),(12.477494,43.920051),(12.478287,43.917038),(12.460456,43.895259),(12.42945,43.892056)] +Somaliland [(48.939112,11.24913),(48.939111,11.136737),(48.939111,11.024367),(48.939111,10.912022),(48.939111,10.7996),(48.939111,10.687333),(48.939111,10.574937),(48.939111,10.462567),(48.939111,10.350119),(48.939111,10.2378),(48.939111,10.12543),(48.939111,10.013033),(48.939111,9.900715),(48.939111,9.788422),(48.939111,9.676),(48.939111,9.563603),(48.939111,9.562701),(48.939111,9.451233),(48.879114,9.360308),(48.819325,9.269384),(48.759225,9.178511),(48.699229,9.087689),(48.639336,8.996661),(48.579184,8.90584),(48.519188,8.814915),(48.459243,8.723887),(48.399299,8.633118),(48.339147,8.54209),(48.279099,8.451165),(48.219155,8.360318),(48.159003,8.269419),(48.099058,8.17852),(48.039114,8.087492),(47.979169,7.996567),(47.836697,7.996516),(47.653918,7.996516),(47.52359,7.996516),(47.36608,7.996516),(47.210586,7.996516),(47.068114,7.996516),(46.97923,7.996567),(46.920526,8.025609),(46.857377,8.046745),(46.773558,8.074909),(46.689532,8.102969),(46.605713,8.131133),(46.521687,8.159245),(46.437713,8.187357),(46.353842,8.215469),(46.269816,8.243555),(46.185894,8.271641),(46.101919,8.299727),(46.018049,8.327891),(45.934126,8.356003),(45.850203,8.384115),(45.766384,8.412227),(45.682358,8.440313),(45.598436,8.468425),(45.514462,8.496537),(45.51441,8.496537),(45.426147,8.525476),(45.33809,8.554414),(45.249878,8.583276),(45.16177,8.612137),(45.073558,8.641153),(44.985553,8.67004),(44.89729,8.698928),(44.809233,8.72784),(44.721073,8.756779),(44.633017,8.785718),(44.544753,8.814605),(44.456697,8.843544),(44.368537,8.872457),(44.280377,8.901396),(44.192217,8.930283),(44.104057,8.959222),(44.023855,8.985525),(43.984788,9.008314),(43.914921,9.071489),(43.78759,9.186701),(43.698449,9.267162),(43.621502,9.336899),(43.60724,9.344625),(43.59122,9.343591),(43.566829,9.334315),(43.548225,9.336072),(43.471227,9.382012),(43.419034,9.413018),(43.406322,9.42865),(43.401981,9.447409),(43.399449,9.480947),(43.393351,9.49893),(43.370665,9.544225),(43.361518,9.553165),(43.341364,9.566394),(43.331753,9.575076),(43.325086,9.585902),(43.315578,9.607658),(43.305966,9.617554),(43.297698,9.621533),(43.270671,9.628354),(43.248812,9.652332),(43.235273,9.691787),(43.206334,9.851209),(43.187214,9.883326),(43.149852,9.899991),(43.104273,9.907923),(43.0676
86,9.922522),(43.039884,9.949988),(43.020764,9.996419),(43.012599,10.029286),(42.999887,10.061971),(42.981903,10.091633),(42.958235,10.115559),(42.862324,10.177158),(42.836279,10.208086),(42.808167,10.269116),(42.78946,10.333324),(42.776644,10.42461),(42.765792,10.451947),(42.750393,10.471636),(42.695202,10.524708),(42.668537,10.56623),(42.64859,10.611033),(42.647247,10.63222),(42.660269,10.641625),(42.680216,10.647672),(42.699646,10.65855),(42.711635,10.675163),(42.718973,10.694852),(42.728585,10.735496),(42.738094,10.759784),(42.749876,10.775829),(42.785636,10.804406),(42.801242,10.820788),(42.80765,10.83611),(42.811578,10.85262),(42.819432,10.872567),(42.834832,10.888406),(42.876793,10.905744),(42.893536,10.919671),(42.901185,10.936595),(42.911313,10.977367),(42.923715,10.998787),(43.188815,11.407764),(43.240733,11.48786),(43.252696,11.47309),(43.266938,11.462348),(43.288341,11.460354),(43.282725,11.498928),(43.29713,11.482245),(43.320079,11.446112),(43.33961,11.426174),(43.352387,11.419745),(43.375987,11.391588),(43.394216,11.385199),(43.413829,11.380927),(43.44337,11.359565),(43.459727,11.35106),(43.466319,11.367906),(43.478201,11.37995),(43.489431,11.380927),(43.494395,11.364732),(43.479015,11.271796),(43.488129,11.238918),(43.506358,11.20897),(43.529307,11.186591),(43.535655,11.183743),(43.551931,11.179836),(43.559093,11.176011),(43.562999,11.170315),(43.568207,11.155422),(43.583181,11.135159),(43.591645,11.107489),(43.617442,11.070787),(43.658946,10.987982),(43.755626,10.894232),(43.8296,10.802965),(43.92213,10.724189),(44.028168,10.652167),(44.097504,10.58926),(44.149587,10.559149),(44.274262,10.456732),(44.303722,10.439846),(44.347554,10.414511),(44.3892,10.397207),(44.465258,10.401481),(44.541423,10.392743),(44.586149,10.383585),(44.627329,10.390613),(44.661857,10.402411),(44.721063,10.419384),(44.750662,10.427865),(44.790324,10.423484),(44.854784,10.414529),(44.899366,10.411814),(44.962009,10.415799),(44.994151,10.442694),(44.987966,10.460395),(45.001475,10.460395),(45.111339,10.52558),(45.12672,10.531968),(45.157237,10.541083),(45.252289,10.600531),(45.320567,10.662095),(45.33725,10.670844),(45.354747,10.66885),(45.37322,10.662421),(45.392426,10.658393),(45.40919,10.659491),(45.445323,10.667792),(45.487071,10.684719),(45.516612,10.701972),(45.542979,10.721666),(45.573009,10.758124),(45.586111,10.765855),(45.662283,10.790595),(45.673106,10.79621),(45.684906,10.808905),(45.694591,10.822252),(45.706391,10.832831),(45.72462,10.837144),(45.737966,10.843573),(45.753591,10.871731),(45.769298,10.878119),(45.787364,10.877753),(45.802745,10.875312),(45.816905,10.869208),(45.831309,10.857611),(45.84018,10.844672),(45.84669,10.832261),(45.855642,10.823961),(45.872325,10.823472),(45.868175,10.841376),(45.884451,10.841213),(45.97755,10.79914),(46.012543,10.794745),(46.047618,10.785305),(46.063731,10.778795),(46.102224,10.770657),(46.137218,10.776313),(46.207367,10.79621),(46.237559,10.792426),(46.263194,10.776842),(46.30714,10.731024),(46.335134,10.709703),(46.370128,10.696845),(46.408702,10.691596),(46.447765,10.693101),(46.482432,10.701361),(46.499848,10.708482),(46.52768,10.726223),(46.56072,10.731676),(46.588715,10.743313),(46.629161,10.745063),(46.646251,10.747748),(46.659679,10.75434),(46.694102,10.781928),(46.782237,10.821723),(46.821951,10.852607),(46.865977,10.871894),(46.871837,10.878079),(46.881358,10.893134),(46.886485,10.898586),(46.893565,10.901557),(46.9074,10.903144),(46.913829,10.905463),(47.008067,10.938951),(47.082774,10.995266),(47.132335,11.035793),(47.167654,11.072089),
(47.180675,11.080471),(47.226736,11.093817),(47.23878,11.101264),(47.263357,11.120795),(47.366873,11.172797),(47.386241,11.180121),(47.405935,11.184638),(47.437022,11.179836),(47.489513,11.192816),(47.522716,11.186957),(47.677745,11.107978),(47.71046,11.101264),(47.767345,11.130194),(47.892833,11.132554),(47.948253,11.125149),(48.130382,11.136664),(48.160818,11.145657),(48.173513,11.157131),(48.195811,11.186835),(48.215587,11.196234),(48.217296,11.203681),(48.2199,11.211127),(48.22934,11.214504),(48.246593,11.21426),(48.255138,11.215074),(48.273611,11.220404),(48.2942,11.222602),(48.30421,11.227525),(48.310557,11.234768),(48.313324,11.242011),(48.314952,11.24901),(48.31837,11.25552),(48.331309,11.268704),(48.342784,11.275458),(48.373057,11.282172),(48.444347,11.289252),(48.478526,11.296332),(48.506847,11.313544),(48.521983,11.319281),(48.585948,11.316962),(48.636892,11.329413),(48.654796,11.330634),(48.723888,11.315253),(48.762462,11.301011),(48.800629,11.279039),(48.888845,11.250556),(48.904063,11.247748),(48.921153,11.248033),(48.939112,11.24913)] +Somalia [(50.797864,11.989119),(50.867055,11.942891),(50.970083,11.932323),(51.029307,11.885077),(51.048839,11.878648),(51.127696,11.878648),(51.147227,11.873603),(51.181163,11.855373),(51.199718,11.851304),(51.222911,11.850775),(51.247081,11.847561),(51.267345,11.839423),(51.292698,11.833108),(51.277354,11.800605),(51.266381,11.759286),(51.249359,11.728775),(51.255708,11.681655),(51.247801,11.652391),(51.22954,11.635897),(51.212529,11.607947),(51.181245,11.578829),(51.148123,11.536038),(51.124766,11.511786),(51.12086,11.505316),(51.121918,11.494127),(51.126638,11.478827),(51.127696,11.470893),(51.125011,11.447984),(51.118419,11.426256),(51.086192,11.357164),(51.081309,11.341498),(51.079356,11.275946),(51.07252,11.233791),(51.085948,11.187974),(51.130945,11.160752),(51.17444,11.158764),(51.189504,11.138306),(51.164399,11.116767),(51.142263,11.068427),(51.131358,11.052883),(51.12379,11.038764),(51.118337,11.016913),(51.115082,10.993354),(51.124522,10.85456),(51.138194,10.676744),(51.160655,10.598456),(51.154307,10.587714),(51.131358,10.58393),(51.107432,10.576565),(51.101899,10.55858),(51.107188,10.515041),(51.102224,10.494696),(51.092296,10.482164),(51.076915,10.475816),(51.055675,10.474066),(51.035818,10.467515),(51.018321,10.452094),(51.010102,10.434027),(51.018403,10.41942),(51.038097,10.416083),(51.19044,10.446112),(51.200938,10.44477),(51.211925,10.44184),(51.220388,10.442043),(51.223888,10.449856),(51.219574,10.461005),(51.210216,10.467108),(51.200857,10.471503),(51.196544,10.477444),(51.195811,10.500312),(51.193044,10.521186),(51.186778,10.541571),(51.176117,10.562812),(51.191742,10.548977),(51.20734,10.538096),(51.220647,10.524928),(51.237966,10.515024),(51.24662,10.50909),(51.257942,10.503801),(51.280556,10.48733),(51.292532,10.480075),(51.3112,10.476055),(51.325879,10.47268),(51.335871,10.474614),(51.345875,10.480494),(51.35721,10.48374),(51.377856,10.48169),(51.395173,10.479023),(51.403077,10.456707),(51.417038,10.447485),(51.410965,10.429814),(51.397509,10.395793),(51.38502,10.391506),(51.380154,10.387339),(51.372569,10.381415),(51.37608,10.367693),(51.367419,10.367726),(51.349458,10.375002),(51.336131,10.374395),(51.320079,10.378485),(51.300831,10.372566),(51.272873,10.377905),(51.265391,10.396877),(51.26295,10.408596),(51.256602,10.420315),(51.240977,10.425605),(51.090099,10.407313),(51.017125,10.381916),(50.927389,10.327844),(50.90988,10.295635),(50.901703,10.234442),(50.912814,10.176853),(50.8838,10.101264),(50.888682,10.05565),
(50.900401,10.024075),(50.901703,10.011664),(50.89975,10.000718),(50.890798,9.986029),(50.877696,9.916653),(50.881196,9.905463),(50.86964,9.884467),(50.863129,9.866034),(50.854015,9.816107),(50.840587,9.779771),(50.833263,9.768785),(50.838634,9.740912),(50.836925,9.73078),(50.831716,9.712063),(50.824474,9.651068),(50.806163,9.624457),(50.805512,9.59394),(50.812999,9.541164),(50.836762,9.477037),(50.840343,9.456122),(50.837413,9.437201),(50.830333,9.419623),(50.768321,9.319281),(50.748302,9.303534),(50.706309,9.285712),(50.689464,9.274848),(50.676768,9.259182),(50.646983,9.211371),(50.641124,9.196357),(50.649181,9.132514),(50.648448,9.110419),(50.633637,9.073188),(50.604503,9.037909),(50.494884,8.951972),(50.464122,8.920559),(50.439708,8.88817),(50.422048,8.857123),(50.40797,8.816352),(50.389822,8.733547),(50.374278,8.698879),(50.326508,8.641099),(50.319591,8.620347),(50.322602,8.55976),(50.319591,8.541815),(50.303477,8.504584),(50.290212,8.486762),(50.274913,8.479153),(50.256196,8.472561),(50.246104,8.456448),(50.239106,8.436347),(50.230154,8.41767),(50.166026,8.33336),(50.156423,8.320624),(50.141368,8.287991),(50.113048,8.19953),(50.091075,8.161038),(50.058279,8.130316),(49.929373,8.051744),(49.842052,7.962714),(49.824229,7.933905),(49.812511,7.897528),(49.803233,7.82689),(49.808604,7.815009),(49.823416,7.805325),(49.827647,7.782904),(49.825043,7.757758),(49.819347,7.739936),(49.790782,7.69953),(49.760916,7.667304),(49.752208,7.650295),(49.748302,7.606635),(49.744151,7.589057),(49.640798,7.409125),(49.586599,7.314765),(49.520518,7.239651),(49.367035,7.025702),(49.293224,6.880927),(49.247325,6.810614),(49.197927,6.67357),(49.074718,6.415269),(49.072032,6.393297),(49.074067,6.370795),(49.083507,6.329047),(49.085623,6.307359),(49.0713,6.219916),(49.036143,6.144232),(48.951491,5.99872),(48.850108,5.824449),(48.695811,5.587348),(48.646739,5.480048),(48.620128,5.443996),(48.548106,5.375067),(48.448009,5.208726),(48.328868,5.079657),(48.202403,4.908922),(48.055675,4.612942),(47.948497,4.457099),(47.840587,4.341376),(47.577973,4.059882),(47.489268,3.936347),(47.335623,3.778754),(47.206716,3.644599),(47.039806,3.469265),(47.01295,3.441067),(46.834239,3.232489),(46.700043,3.113267),(46.625743,3.023912),(46.487804,2.90884),(46.414317,2.833441),(46.351573,2.788642),(46.34962,2.780178),(46.3449,2.770087),(46.339203,2.761705),(46.334483,2.758246),(46.317638,2.751899),(46.307628,2.736029),(46.299815,2.715766),(46.289399,2.696234),(46.275401,2.685858),(46.236501,2.667629),(46.142345,2.568915),(46.13087,2.554023),(46.114513,2.523261),(46.104991,2.510565),(46.062999,2.484036),(46.047618,2.455064),(46.027029,2.438137),(45.826345,2.308824),(45.711925,2.24433),(45.605642,2.183051),(45.266205,1.987291),(45.232188,1.967678),(45.210704,1.963121),(45.199718,1.959621),(45.001475,1.86636),(44.831554,1.749701),(44.823985,1.740302),(44.812185,1.735582),(44.67628,1.6376),(44.550059,1.559068),(44.332774,1.38996),(44.221365,1.275621),(44.198416,1.265692),(44.157888,1.24079),(44.145518,1.22956),(44.1421,1.221381),(44.139659,1.211168),(44.138031,1.191596),(44.133962,1.187567),(44.115001,1.182034),(44.11085,1.17829),(44.10377,1.168362),(44.032888,1.09748),(44.012543,1.084215),(43.990977,1.078599),(43.97877,1.068671),(43.932628,1.009833),(43.819835,0.949449),(43.781912,0.921617),(43.734223,0.854967),(43.717296,0.845282),(43.703461,0.840522),(43.658946,0.804918),(43.467621,0.620551),(43.285492,0.414984),(43.184906,0.31627),(43.09018,0.223375),(42.891124,0.003079),(42.770274,-0.129327),(42.712087,-0.17669),(42.647797,-0.228936
),(42.580333,-0.29754),(42.555919,-0.332127),(42.553559,-0.34238),(42.553966,-0.351251),(42.550792,-0.357599),(42.526622,-0.362726),(42.519298,-0.369317),(42.479747,-0.422052),(42.474457,-0.438653),(42.486501,-0.449395),(42.47641,-0.465997),(42.47283,-0.469903),(42.472911,-0.4638),(42.472423,-0.458673),(42.47047,-0.454034),(42.465994,-0.449395),(42.412608,-0.490981),(42.404552,-0.500909),(42.39796,-0.517348),(42.38266,-0.535252),(42.349864,-0.565606),(42.250255,-0.685642),(42.246755,-0.694268),(42.246755,-0.701918),(42.245291,-0.707452),(42.236339,-0.709568),(42.232595,-0.713067),(42.192231,-0.776137),(42.166515,-0.802667),(42.151622,-0.813897),(42.135509,-0.82171),(42.116466,-0.826267),(42.086599,-0.825616),(42.071544,-0.828709),(42.062511,-0.839288),(42.081716,-0.843438),(42.080903,-0.862888),(42.071951,-0.884454),(42.065684,-0.894464),(42.051443,-0.902032),(42.036957,-0.920017),(42.025564,-0.941583),(42.020844,-0.960056),(42.016368,-0.968032),(41.983165,-0.994073),(41.971934,-0.99977),(41.967947,-0.992852),(41.963634,-0.95086),(41.964692,-0.911554),(41.958832,-0.894464),(41.95281,-0.894464),(41.955414,-0.909601),(41.952403,-0.920831),(41.94752,-0.930841),(41.94516,-0.942966),(41.946462,-0.954767),(41.951427,-0.969415),(41.95281,-0.980401),(41.955577,-0.992934),(41.96225,-0.998956),(41.969249,-1.003188),(41.973155,-1.011163),(41.97169,-1.023614),(41.961436,-1.044122),(41.958832,-1.055841),(41.953461,-1.068617),(41.930024,-1.089939),(41.924653,-1.097101),(41.923025,-1.110121),(41.918305,-1.121026),(41.911632,-1.130955),(41.873871,-1.176039),(41.869477,-1.186212),(41.872895,-1.195489),(41.880707,-1.19492),(41.890147,-1.189386),(41.897472,-1.183201),(41.885427,-1.209649),(41.862804,-1.212335),(41.84197,-1.196954),(41.835297,-1.168878),(41.829112,-1.168878),(41.830251,-1.189223),(41.84197,-1.224867),(41.842784,-1.24391),(41.836599,-1.262302),(41.73227,-1.431085),(41.708751,-1.460056),(41.689708,-1.489516),(41.65211,-1.565118),(41.633311,-1.580336),(41.610606,-1.593194),(41.575043,-1.652602),(41.554861,-1.669692),(41.561046,-1.669692),(41.535105,-1.696283),(41.535103,-1.696269),(41.535944,-1.676312),(41.53858,-1.613699),(41.534963,-1.594785),(41.522871,-1.572771),(41.484527,-1.523162),(41.434194,-1.458049),(41.383758,-1.39304),(41.332185,-1.326378),(41.282989,-1.262816),(41.237307,-1.203905),(41.18222,-1.132695),(41.13106,-1.066445),(41.081554,-1.002573),(41.056129,-0.969604),(41.030601,-0.936738),(41.00528,-0.903768),(40.979751,-0.870798),(40.978821,-0.794834),(40.977891,-0.724347),(40.976961,-0.653861),(40.975514,-0.545237),(40.974067,-0.436716),(40.97262,-0.328299),(40.971173,-0.219675),(40.97045,-0.152186),(40.969623,-0.084697),(40.968486,-0.002738),(40.968486,0.122319),(40.968486,0.247273),(40.968486,0.37233),(40.968486,0.497283),(40.968486,0.62234),(40.968486,0.747242),(40.968486,0.872248),(40.968486,0.997201),(40.968486,1.122207),(40.968486,1.230295),(40.968486,1.247212),(40.968486,1.372191),(40.968486,1.497145),(40.968486,1.62215),(40.968486,1.747181),(40.968486,1.872135),(40.968486,1.997166),(40.968486,2.103619),(40.968486,2.210047),(40.968486,2.316501),(40.968486,2.422954),(40.968486,2.49711),(40.967142,2.631727),(40.965902,2.760608),(40.965385,2.814145),(40.979751,2.841895),(41.013961,2.875795),(41.096127,2.957082),(41.178396,3.03842),(41.260665,3.119811),(41.342727,3.201149),(41.394506,3.27515),(41.446183,3.349047),(41.497963,3.423074),(41.549639,3.497023),(41.633562,3.617093),(41.717381,3.737163),(41.801303,3.857182),(41.885019,3.977226),(41.898455,3.996915),(41.912201,4.007974),(
41.917472,4.020453),(41.916645,4.051098),(41.923466,4.070554),(41.941243,4.086212),(42.011833,4.129491),(42.068367,4.174553),(42.102887,4.19189),(42.13534,4.200081),(42.221846,4.200959),(42.297294,4.201734),(42.415013,4.221707),(42.568285,4.247856),(42.718663,4.273487),(42.78977,4.285605),(42.831628,4.302322),(42.868938,4.326765),(42.870195,4.328171),(42.899634,4.361104),(42.914724,4.393299),(42.9325,4.463217),(42.945936,4.496858),(42.952551,4.507581),(42.960406,4.517374),(43.03544,4.578895),(43.119259,4.647702),(43.232792,4.701497),(43.346997,4.755654),(43.459342,4.808829),(43.528485,4.841566),(43.640519,4.867353),(43.716484,4.884793),(43.815186,4.907479),(43.845158,4.914352),(43.932284,4.945203),(43.968975,4.953962),(44.029126,4.950448),(44.081009,4.947451),(44.132789,4.94448),(44.184672,4.941431),(44.236452,4.938485),(44.288335,4.935488),(44.340115,4.932439),(44.391998,4.929468),(44.443726,4.926445),(44.495557,4.923473),(44.547441,4.920476),(44.596808,4.917643),(44.59922,4.917505),(44.651104,4.914507),(44.702883,4.911562),(44.754766,4.908565),(44.806546,4.905541),(44.858378,4.902544),(44.912586,4.899392),(44.941525,4.911484),(45.020642,4.99688),(45.0209,4.99688),(45.077744,5.059072),(45.185024,5.176248),(45.292356,5.293476),(45.399585,5.410704),(45.506865,5.527881),(45.518477,5.540564),(45.614146,5.645057),(45.721426,5.762337),(45.828706,5.879513),(45.935986,5.996689),(45.99681,6.059192),(46.057839,6.12172),(46.118818,6.184223),(46.179899,6.246726),(46.240981,6.309228),(46.302062,6.371731),(46.362937,6.43426),(46.423915,6.496736),(46.466963,6.538292),(46.488046,6.558645),(46.508406,6.578308),(46.55321,6.621457),(46.598065,6.664633),(46.618425,6.684244),(46.667001,6.731115),(46.715577,6.777985),(46.764101,6.824882),(46.812729,6.871675),(46.861408,6.918597),(46.909984,6.965467),(46.95856,7.012286),(47.007032,7.059208),(47.055608,7.106079),(47.104287,7.153001),(47.152863,7.19982),(47.201439,7.246691),(47.250118,7.293509),(47.298797,7.34038),(47.347373,7.387302),(47.395949,7.434173),(47.444525,7.481043),(47.493101,7.527914),(47.532388,7.565864),(47.541676,7.574836),(47.590252,7.621655),(47.638932,7.668525),(47.687507,7.715396),(47.736032,7.762215),(47.784659,7.809137),(47.833183,7.855956),(47.881811,7.902826),(47.930593,7.949697),(47.979169,7.996567),(48.039114,8.087492),(48.099058,8.17852),(48.159003,8.269419),(48.219155,8.360318),(48.279099,8.451165),(48.339147,8.54209),(48.399299,8.633118),(48.459243,8.723887),(48.519188,8.814915),(48.579184,8.90584),(48.639336,8.996661),(48.699229,9.087689),(48.759225,9.178511),(48.819325,9.269384),(48.879114,9.360308),(48.939111,9.451233),(48.939111,9.562701),(48.939111,9.563603),(48.939111,9.676),(48.939111,9.788422),(48.939111,9.900715),(48.939111,10.013033),(48.939111,10.12543),(48.939111,10.2378),(48.939111,10.350119),(48.939111,10.462567),(48.939111,10.574937),(48.939111,10.687333),(48.939111,10.7996),(48.939111,10.912022),(48.939111,11.024367),(48.939111,11.136737),(48.939112,11.24913),(48.939138,11.249132),(48.976817,11.25141),(49.108409,11.276597),(49.239594,11.300279),(49.278087,11.316962),(49.293305,11.337958),(49.303477,11.345445),(49.315929,11.340522),(49.32545,11.33511),(49.335704,11.334621),(49.400076,11.340522),(49.4192,11.344794),(49.435069,11.35106),(49.488617,11.381741),(49.507335,11.385199),(49.521739,11.393785),(49.544444,11.436713),(49.558604,11.453518),(49.576834,11.460028),(49.59962,11.461737),(49.644542,11.460354),(49.650076,11.462144),(49.666026,11.471137),(49.675304,11.473944),(49.686778,11.474351),(49.716319,11.46776),(49.8048
61,11.458401),(49.851248,11.464911),(49.880707,11.488227),(49.88852,11.487291),(49.906423,11.498725),(49.917979,11.501899),(49.921886,11.503852),(49.934418,11.512885),(49.942149,11.51557),(49.949474,11.51496),(49.956309,11.512519),(49.961111,11.509955),(49.962657,11.508734),(50.053884,11.511379),(50.07252,11.508734),(50.131521,11.539944),(50.151134,11.557929),(50.161876,11.563381),(50.197602,11.562649),(50.206554,11.566799),(50.214366,11.572659),(50.225434,11.578111),(50.268321,11.589301),(50.28891,11.602444),(50.372325,11.668931),(50.418142,11.678046),(50.437022,11.690619),(50.470388,11.727851),(50.500824,11.754828),(50.512706,11.771226),(50.517589,11.79267),(50.518891,11.815253),(50.52296,11.837958),(50.52947,11.859524),(50.538585,11.878648),(50.561209,11.907904),(50.575938,11.920966),(50.589854,11.926459),(50.602061,11.927965),(50.627452,11.940131),(50.632091,11.946763),(50.636485,11.951606),(50.641124,11.954413),(50.647146,11.954047),(50.655935,11.948432),(50.661469,11.947577),(50.682953,11.953355),(50.724864,11.971137),(50.761567,11.978746),(50.771983,11.985785),(50.797864,11.989119)] +Republic of Serbia [(19.690095,46.168398),(19.711889,46.15871),(19.772984,46.131552),(19.79045,46.129072),(19.873649,46.152992),(19.888946,46.15739),(19.92915,46.16354),(19.993125,46.159406),(20.034983,46.142973),(20.063405,46.145298),(20.088623,46.154135),(20.098442,46.154962),(20.114772,46.152223),(20.120146,46.149226),(20.130378,46.139355),(20.138026,46.136461),(20.145364,46.137082),(20.170479,46.145505),(20.188462,46.140389),(20.242826,46.108091),(20.305665,46.053572),(20.317653,46.038586),(20.338634,45.992801),(20.35393,45.976678),(20.370984,45.96779),(20.410258,45.955594),(20.429068,45.946706),(20.481674,45.912703),(20.499865,45.906656),(20.538002,45.903711),(20.556709,45.89844),(20.572108,45.887691),(20.605284,45.846143),(20.612312,45.841492),(20.629676,45.833276),(20.636704,45.827023),(20.640114,45.818703),(20.642285,45.798343),(20.64349,45.795141),(20.645902,45.788731),(20.655617,45.77731),(20.678562,45.75664),(20.68807,45.7431),(20.700059,45.735401),(20.713392,45.733334),(20.726827,45.736176),(20.739436,45.743411),(20.745328,45.754986),(20.754113,45.763564),(20.765171,45.766768),(20.77747,45.762324),(20.785739,45.752557),(20.785532,45.743411),(20.781604,45.733954),(20.779124,45.72367),(20.779951,45.671684),(20.777264,45.657524),(20.773233,45.648894),(20.762174,45.630601),(20.754423,45.605589),(20.75804,45.58926),(20.787392,45.553655),(20.800208,45.530504),(20.797624,45.516499),(20.783155,45.506061),(20.760727,45.493348),(20.767135,45.479344),(20.781604,45.472574),(20.799484,45.468544),(20.816021,45.462859),(20.830387,45.452524),(20.863047,45.418728),(20.927642,45.37749),(20.966193,45.341575),(20.981489,45.33279),(21.016668,45.321482),(21.063964,45.30628),(21.0743,45.300544),(21.082775,45.293671),(21.09156,45.288089),(21.103342,45.286074),(21.113057,45.28933),(21.12918,45.301887),(21.139412,45.303696),(21.155638,45.295169),(21.189745,45.259564),(21.206075,45.245922),(21.239251,45.229385),(21.257131,45.224114),(21.299299,45.223184),(21.405546,45.199671),(21.433761,45.188819),(21.459393,45.17404),(21.484388,45.152702),(21.493292,45.145101),(21.49784,45.131872),(21.494119,45.119314),(21.48151,45.111563),(21.469004,45.111046),(21.458566,45.107325),(21.449987,45.101021),(21.442443,45.092908),(21.443786,45.091461),(21.444406,45.08526),(21.444406,45.07787),(21.443786,45.072909),(21.440996,45.068723),(21.432521,45.061488),(21.429524,45.057303),(21.425596,45.043247),(21.425079,45.036219),(21.421875,45.03
1413),(21.409266,45.023971),(21.398828,45.021387),(21.3733,45.020096),(21.363378,45.01653),(21.356143,45.008572),(21.351389,44.998236),(21.353456,44.989813),(21.366995,44.987281),(21.383945,44.986661),(21.387046,44.981545),(21.384565,44.97493),(21.385185,44.969504),(21.40844,44.958342),(21.456499,44.952348),(21.48151,44.943563),(21.51634,44.933899),(21.531016,44.924649),(21.539078,44.908474),(21.536287,44.889302),(21.522128,44.880776),(21.48151,44.872611),(21.453398,44.869562),(21.395314,44.871629),(21.368545,44.86486),(21.355523,44.856591),(21.346428,44.845636),(21.342811,44.831942),(21.3598,44.826657),(21.359841,44.826644),(21.360587,44.826412),(21.378571,44.816645),(21.395934,44.790239),(21.41216,44.784813),(21.49722,44.778043),(21.558405,44.78166),(21.578042,44.777681),(21.595612,44.765951),(21.604293,44.749983),(21.610288,44.731948),(21.619693,44.713913),(21.65628,44.687661),(21.705166,44.677067),(21.756842,44.677274),(21.801697,44.683889),(21.838284,44.695206),(21.855441,44.698461),(21.871977,44.695981),(21.962411,44.662288),(21.994347,44.658567),(22.004269,44.651539),(22.032174,44.603325),(22.034243,44.596168),(22.036174,44.589489),(22.039822,44.576867),(22.045197,44.569219),(22.055739,44.562139),(22.067107,44.55723),(22.076306,44.550718),(22.086951,44.52178),(22.104314,44.509636),(22.126639,44.502659),(22.148239,44.500902),(22.172114,44.505347),(22.185136,44.515113),(22.299031,44.661668),(22.304922,44.677377),(22.319702,44.685336),(22.361146,44.692054),(22.38068,44.700528),(22.41551,44.727814),(22.426052,44.733653),(22.450547,44.732981),(22.468943,44.730191),(22.483723,44.72399),(22.553279,44.669161),(22.587489,44.649369),(22.621182,44.637432),(22.700144,44.63061),(22.714716,44.623065),(22.765153,44.58281),(22.759262,44.564671),(22.741692,44.551855),(22.719367,44.544362),(22.700144,44.54183),(22.678543,44.545551),(22.642163,44.563121),(22.621182,44.569219),(22.600305,44.569787),(22.580461,44.565498),(22.565578,44.555421),(22.55979,44.538781),(22.55669,44.52147),(22.548422,44.511703),(22.535503,44.507517),(22.500363,44.50638),(22.491061,44.50421),(22.484136,44.499766),(22.479795,44.490412),(22.477212,44.476976),(22.477005,44.463954),(22.480002,44.455789),(22.500983,44.44194),(22.505427,44.435015),(22.506357,44.427522),(22.50398,44.411709),(22.505117,44.404061),(22.522687,44.37507),(22.549352,44.348974),(22.582838,44.328406),(22.621182,44.315901),(22.662523,44.311663),(22.68154,44.305307),(22.689498,44.291716),(22.685364,44.243657),(22.690739,44.228878),(22.69164,44.228435),(22.648777,44.213995),(22.639992,44.207329),(22.624799,44.189397),(22.608573,44.175858),(22.606196,44.174566),(22.604852,44.168468),(22.605989,44.163145),(22.607953,44.159993),(22.6094,44.159941),(22.599065,44.130331),(22.597101,44.119065),(22.598134,44.109298),(22.604646,44.088163),(22.604749,44.079378),(22.592967,44.063926),(22.57519,44.061394),(22.554623,44.062428),(22.534159,44.057157),(22.522583,44.044703),(22.514935,44.030285),(22.50367,44.019898),(22.481449,44.019433),(22.465885,44.017624),(22.43432,44.013955),(22.411789,44.006927),(22.399594,43.993336),(22.39732,43.980934),(22.396803,43.951944),(22.394529,43.936337),(22.391945,43.931867),(22.382024,43.918561),(22.379026,43.913496),(22.377063,43.883524),(22.367554,43.852751),(22.354738,43.829703),(22.349467,43.807921),(22.362593,43.780843),(22.388535,43.758286),(22.389568,43.750509),(22.385848,43.733817),(22.386054,43.725498),(22.390498,43.712449),(22.396906,43.699401),(22.404865,43.687179),(22.41396,43.676663),(22.426465,43.668214),(22.455921,43.656406),(22
.466256,43.64912),(22.472871,43.635942),(22.473801,43.612998),(22.481449,43.600647),(22.481759,43.599975),(22.481966,43.599459),(22.481759,43.598942),(22.481449,43.598529),(22.478142,43.594911),(22.477108,43.591294),(22.478142,43.587573),(22.481449,43.584137),(22.482689,43.581734),(22.483103,43.579279),(22.482689,43.576695),(22.481449,43.574111),(22.478658,43.569176),(22.477625,43.564164),(22.478452,43.559229),(22.490647,43.540883),(22.509354,43.493341),(22.518863,43.474247),(22.532609,43.464842),(22.565785,43.453344),(22.57271,43.44815),(22.586766,43.43443),(22.596274,43.429159),(22.606919,43.427402),(22.62852,43.428255),(22.637822,43.426369),(22.645367,43.420297),(22.656529,43.403192),(22.658926,43.401295),(22.664694,43.396732),(22.674202,43.394148),(22.693219,43.394872),(22.702934,43.394045),(22.719367,43.388671),(22.724343,43.38606),(22.73301,43.381513),(22.80453,43.328984),(22.817139,43.315497),(22.820756,43.307539),(22.823857,43.289297),(22.826958,43.28139),(22.833159,43.274647),(22.857343,43.256947),(22.883802,43.230592),(22.897754,43.220335),(22.915531,43.212247),(22.964727,43.204418),(22.981367,43.198992),(22.982902,43.187318),(22.984571,43.174627),(22.974029,43.141192),(22.955632,43.108274),(22.935271,43.085562),(22.927107,43.081144),(22.910157,43.075279),(22.901682,43.069749),(22.896721,43.062721),(22.889486,43.044376),(22.884215,43.036651),(22.842254,43.007505),(22.829025,42.993656),(22.829025,42.993501),(22.828921,42.993449),(22.828818,42.993449),(22.815796,42.989703),(22.788097,42.984897),(22.776418,42.979729),(22.76939,42.97128),(22.763189,42.958645),(22.745516,42.910069),(22.739579,42.898858),(22.738798,42.897383),(22.727015,42.886892),(22.69663,42.87741),(22.666244,42.871932),(22.5909,42.886892),(22.563615,42.884283),(22.549972,42.877358),(22.544785,42.871706),(22.544494,42.871389),(22.53757,42.868341),(22.519793,42.870356),(22.506047,42.870123),(22.497055,42.864413),(22.481449,42.84674),(22.470907,42.840125),(22.445482,42.830178),(22.436801,42.824286),(22.430446,42.817077),(22.427395,42.813615),(22.425845,42.809843),(22.429359,42.806122),(22.45313,42.763592),(22.466566,42.748529),(22.481449,42.739821),(22.482586,42.736824),(22.482896,42.733775),(22.482586,42.730675),(22.481449,42.727677),(22.468116,42.718324),(22.442072,42.681685),(22.449203,42.667965),(22.444552,42.64329),(22.441318,42.632891),(22.428842,42.592776),(22.425328,42.572855),(22.429669,42.571408),(22.481449,42.535622),(22.512145,42.519189),(22.524857,42.507665),(22.532505,42.493557),(22.532505,42.493402),(22.536536,42.47839),(22.533125,42.45759),(22.519483,42.420926),(22.508838,42.404932),(22.497572,42.399196),(22.485066,42.397155),(22.46977,42.391703),(22.454371,42.376768),(22.438454,42.340052),(22.423985,42.325893),(22.405795,42.321552),(22.364144,42.320984),(22.345023,42.313439),(22.325283,42.314318),(22.307609,42.31933),(22.29159,42.328399),(22.276914,42.341241),(22.27402,42.348476),(22.273296,42.365477),(22.268852,42.370335),(22.259757,42.369095),(22.233402,42.348889),(22.095529,42.305817),(22.060906,42.301088),(22.045407,42.302424),(22.027627,42.303956),(21.994967,42.312612),(21.941534,42.333102),(21.929028,42.335117),(21.918279,42.331345),(21.88438,42.309512),(21.877352,42.30822),(21.837354,42.308556),(21.8172,42.305145),(21.719521,42.260957),(21.706509,42.25507),(21.69204,42.242022),(21.67695,42.234943),(21.676886,42.234945),(21.65969,42.235563),(21.624654,42.242772),(21.575044,42.242022),(21.564066,42.246289),(21.553857,42.273984),(21.514893,42.317832),(21.51603,42.341939),(21.537321,42.35863),(21.596
645,42.372092),(21.617419,42.386639),(21.621863,42.402167),(21.616902,42.433923),(21.618969,42.449245),(21.627858,42.460381),(21.667855,42.490095),(21.717671,42.551151),(21.727697,42.574147),(21.726766,42.577945),(21.719738,42.586678),(21.718291,42.591019),(21.720152,42.593887),(21.726973,42.596393),(21.728833,42.598305),(21.730074,42.601018),(21.735551,42.621276),(21.734725,42.624247),(21.744543,42.629647),(21.756532,42.633626),(21.767074,42.638716),(21.772758,42.647501),(21.764387,42.669619),(21.74425,42.679425),(21.738652,42.68215),(21.708886,42.687189),(21.687389,42.686827),(21.644084,42.672306),(21.629408,42.672203),(21.612665,42.680393),(21.580522,42.710211),(21.565226,42.720184),(21.542488,42.725817),(21.441823,42.736101),(21.4178,42.73557),(21.405546,42.7353),(21.388699,42.739046),(21.378674,42.744136),(21.379191,42.747004),(21.383945,42.749976),(21.387149,42.754962),(21.39056,42.770414),(21.404099,42.803978),(21.408336,42.820747),(21.408419,42.841743),(21.40844,42.846998),(21.398931,42.854569),(21.378777,42.855292),(21.346738,42.860847),(21.336403,42.865473),(21.316972,42.877616),(21.306327,42.882448),(21.294958,42.884386),(21.2716,42.884283),(21.260542,42.886556),(21.23243,42.910896),(21.226745,42.942522),(21.225298,42.973554),(21.209795,42.995878),(21.193052,42.997893),(21.17941,42.990581),(21.165354,42.985103),(21.147887,42.992622),(21.139309,43.005826),(21.124012,43.058277),(21.10851,43.081558),(21.092593,43.090678),(21.025827,43.093366),(21.005157,43.099128),(20.993431,43.104148),(20.912494,43.138805),(20.838552,43.170467),(20.832041,43.178606),(20.836071,43.179433),(20.839069,43.192455),(20.840619,43.206976),(20.839999,43.212092),(20.851471,43.219818),(20.8616,43.217518),(20.8647,43.217337),(20.855088,43.231445),(20.848474,43.238137),(20.838449,43.245863),(20.819432,43.257412),(20.809717,43.25961),(20.79442,43.263071),(20.769512,43.260849),(20.745328,43.252865),(20.666986,43.209663),(20.644869,43.203307),(20.612312,43.202274),(20.604044,43.197959),(20.597533,43.184962),(20.600117,43.173826),(20.612106,43.154938),(20.620581,43.133337),(20.626058,43.123725),(20.632053,43.117266),(20.640941,43.115276),(20.65169,43.11631),(20.661818,43.115948),(20.669157,43.109799),(20.664919,43.085407),(20.643835,43.052257),(20.617273,43.02213),(20.596396,43.007092),(20.584304,43.006911),(20.572832,43.00934),(20.562393,43.009495),(20.553401,43.002596),(20.543893,42.987248),(20.538415,42.980659),(20.530457,42.975156),(20.51113,42.970195),(20.493457,42.970195),(20.476403,42.966371),(20.459454,42.950015),(20.450979,42.922032),(20.466688,42.909501),(20.488909,42.899191),(20.494367,42.887467),(20.498831,42.877875),(20.4763,42.855525),(20.428034,42.840642),(20.345352,42.827439),(20.355171,42.86617),(20.35362,42.890975),(20.337084,42.906969),(20.275589,42.929887),(20.223189,42.957663),(20.19456,42.966836),(20.14092,42.970815),(20.129464,42.973605),(20.116942,42.976654),(20.056344,43.020796),(20.054388,43.022221),(20.019894,43.047348),(19.959742,43.07856),(19.949821,43.085924),(19.942276,43.095639),(19.936591,43.105897),(19.92915,43.113907),(19.916644,43.117059),(19.907136,43.113261),(19.883778,43.095691),(19.872203,43.090368),(19.838303,43.088353),(19.805127,43.089955),(19.781769,43.096466),(19.761202,43.108739),(19.742805,43.126516),(19.713556,43.165609),(19.705494,43.166178),(19.69733,43.160209),(19.684617,43.156721),(19.663016,43.158504),(19.618678,43.168245),(19.598111,43.176203),(19.580851,43.187804),(19.549948,43.217518),(19.547845,43.218815),(19.512431,43.240643),(19.502303,43.251935),(19.485146
,43.280098),(19.473054,43.293276),(19.414039,43.338389),(19.372285,43.384226),(19.355645,43.393063),(19.218392,43.438202),(19.192244,43.454532),(19.175914,43.480887),(19.175191,43.509619),(19.195345,43.532796),(19.217462,43.532796),(19.229451,43.549746),(19.238856,43.572122),(19.252706,43.588633),(19.263661,43.590958),(19.289189,43.587935),(19.300868,43.588297),(19.31265,43.593154),(19.335801,43.606513),(19.34655,43.608838),(19.363396,43.601577),(19.38045,43.585584),(19.394609,43.566463),(19.402877,43.549643),(19.410732,43.540754),(19.41931,43.549255),(19.431816,43.57114),(19.443908,43.571864),(19.481735,43.560831),(19.48959,43.564448),(19.491761,43.568608),(19.48897,43.573181),(19.481735,43.578116),(19.476878,43.588684),(19.475638,43.602559),(19.477498,43.616977),(19.481735,43.628914),(19.507367,43.647182),(19.505713,43.67364),(19.481735,43.729218),(19.461685,43.762136),(19.359469,43.842209),(19.305932,43.904737),(19.275443,43.933263),(19.241027,43.952357),(19.229348,43.95768),(19.240717,43.965741),(19.242887,43.972821),(19.238133,43.985378),(19.238029,43.99251),(19.243507,44.002018),(19.252085,44.007496),(19.272756,44.012405),(19.287329,44.013025),(19.300558,44.009511),(19.325879,43.996592),(19.351718,43.979125),(19.364637,43.973286),(19.379106,43.973854),(19.393989,43.977058),(19.447112,43.979797),(19.50406,43.975663),(19.528554,43.977213),(19.552222,43.983415),(19.593357,44.005584),(19.61041,44.019278),(19.618885,44.035711),(19.611443,44.054521),(19.598938,44.062583),(19.589946,44.060309),(19.583848,44.054315),(19.580231,44.051524),(19.570929,44.056898),(19.554599,44.071265),(19.522043,44.08501),(19.516189,44.091194),(19.498169,44.110229),(19.482666,44.120667),(19.476361,44.127023),(19.474191,44.144903),(19.465716,44.15281),(19.459721,44.152707),(19.448456,44.144438),(19.442565,44.143198),(19.435537,44.146092),(19.424891,44.153792),(19.381173,44.177098),(19.362363,44.191206),(19.356058,44.204021),(19.353991,44.22433),(19.341589,44.245828),(19.324329,44.263966),(19.307379,44.274198),(19.295907,44.2758),(19.263661,44.270167),(19.249502,44.270736),(19.240717,44.272699),(19.220046,44.280244),(19.177155,44.286962),(19.157001,44.293577),(19.138914,44.309338),(19.116693,44.343703),(19.108942,44.36365),(19.107185,44.382718),(19.115763,44.403596),(19.129612,44.416153),(19.141394,44.430829),(19.143358,44.458218),(19.127339,44.502556),(19.129922,44.518317),(19.165269,44.526689),(19.173124,44.531443),(19.179635,44.538264),(19.185319,44.546223),(19.187283,44.553302),(19.18625,44.569322),(19.188317,44.57604),(19.193278,44.580587),(19.20413,44.585135),(19.208677,44.588391),(19.255703,44.645648),(19.270069,44.67562),(19.277614,44.684871),(19.288259,44.693035),(19.308413,44.705128),(19.318025,44.715463),(19.32836,44.733963),(19.363707,44.854628),(19.367841,44.859278),(19.373112,44.86026),(19.376626,44.862999),(19.375695,44.873128),(19.372802,44.881448),(19.368667,44.887132),(19.362776,44.891111),(19.356993,44.895869),(19.356982,44.895877),(19.353165,44.899018),(19.352958,44.898087),(19.350271,44.897002),(19.340142,44.896227),(19.33022,44.898708),(19.31017,44.91235),(19.301281,44.91235),(19.293013,44.909405),(19.283815,44.908371),(19.239476,44.915141),(19.229554,44.913745),(19.211881,44.908371),(19.201856,44.908371),(19.196895,44.913177),(19.193174,44.921549),(19.18687,44.927543),(19.174261,44.925424),(19.08455,44.878967),(19.068427,44.874833),(19.047757,44.872714),(19.015821,44.865635),(18.994323,44.894832),(18.991429,44.914934),(19.018405,44.925683),(19.031944,44.922634),(19.052098,44.906976),(19.06667
,44.905684),(19.078659,44.910852),(19.087858,44.919326),(19.103567,44.93824),(19.113076,44.942426),(19.124548,44.94594),(19.131679,44.953175),(19.127649,44.968419),(19.118554,44.97555),(19.09809,44.970021),(19.087238,44.977101),(19.085171,44.987281),(19.088478,44.999477),(19.093232,45.011517),(19.095402,45.020922),(19.094266,45.031413),(19.085997,45.06092),(19.082277,45.084949),(19.079176,45.094613),(19.071528,45.107067),(19.064087,45.113578),(19.054165,45.120606),(19.046207,45.128358),(19.044863,45.137246),(19.060056,45.146858),(19.116073,45.142879),(19.137674,45.146031),(19.141705,45.162154),(19.128785,45.181068),(19.121861,45.195795),(19.143875,45.199516),(19.155761,45.194969),(19.174571,45.175797),(19.185319,45.168045),(19.204543,45.162878),(19.225937,45.161947),(19.267795,45.165823),(19.272756,45.168355),(19.275753,45.172851),(19.279474,45.177244),(19.286709,45.179517),(19.291359,45.177709),(19.299421,45.168975),(19.304279,45.166547),(19.390682,45.169337),(19.393929,45.171624),(19.405358,45.179672),(19.407838,45.203133),(19.397296,45.223287),(19.378073,45.229592),(19.363086,45.248247),(19.162065,45.285041),(19.124755,45.298115),(19.098813,45.319871),(19.096229,45.329121),(19.095402,45.340489),(19.091992,45.349998),(19.08207,45.354029),(19.040832,45.354029),(19.020988,45.357233),(19.003625,45.366121),(18.988742,45.379247),(18.975927,45.394956),(19.028637,45.412165),(19.037422,45.422293),(19.031634,45.434024),(19.012258,45.448468),(19.003005,45.455366),(18.99639,45.473815),(19.00962,45.498671),(19.017926,45.497807),(19.077212,45.491643),(19.106255,45.511642),(19.095092,45.526059),(19.082277,45.531175),(19.067807,45.533398),(19.050961,45.538927),(19.040936,45.545025),(19.036491,45.549469),(19.033391,45.554637),(19.02719,45.562595),(19.018095,45.567401),(19.009206,45.565489),(19.000628,45.561199),(18.983161,45.55474),(18.973136,45.546265),(18.960217,45.539134),(18.941717,45.538927),(18.932002,45.545541),(18.912054,45.568227),(18.90358,45.573085),(18.90699,45.581663),(18.90823,45.600422),(18.912365,45.619232),(18.924457,45.627707),(18.935412,45.633391),(18.962904,45.660108),(18.968485,45.668738),(18.962697,45.683052),(18.948125,45.692044),(18.931071,45.698917),(18.917325,45.706565),(18.908954,45.719433),(18.90482,45.749767),(18.900169,45.764908),(18.871127,45.789971),(18.863995,45.798549),(18.848699,45.809556),(18.844978,45.815706),(18.846735,45.819995),(18.850353,45.824232),(18.85242,45.830072),(18.855314,45.857357),(18.887146,45.859476),(18.899755,45.861388),(18.90699,45.867951),(18.903993,45.875702),(18.893451,45.883867),(18.872884,45.895236),(18.881979,45.902626),(18.889834,45.907897),(18.90699,45.9157),(18.901306,45.931203),(18.962697,45.927947),(18.981818,45.921849),(18.987502,45.923813),(18.988742,45.927017),(18.986469,45.931254),(18.97882,45.939781),(18.977684,45.943502),(18.978717,45.947171),(18.981818,45.950788),(19.005589,45.96257),(19.03091,45.960038),(19.048584,45.963449),(19.049721,45.993111),(19.06543,46.012025),(19.088478,46.018846),(19.111009,46.012955),(19.125788,45.993266),(19.125788,45.993111),(19.148009,45.984068),(19.235652,45.977711),(19.263454,45.981432),(19.274823,45.991612),(19.279474,46.003808),(19.286915,46.016159),(19.298315,46.022123),(19.306966,46.026649),(19.325156,46.029491),(19.362156,46.029646),(19.378899,46.033677),(19.389131,46.041532),(19.396573,46.051557),(19.404738,46.060239),(19.417243,46.064321),(19.428302,46.06551),(19.43688,46.06799),(19.453727,46.07755),(19.460858,46.084475),(19.465922,46.091968),(19.47295,46.098686),(19.499305,46.108608),(19.496
722,46.116876),(19.48928,46.126023),(19.487523,46.134239),(19.501993,46.145608),(19.525247,46.156409),(19.549948,46.16416),(19.568449,46.166434),(19.589739,46.165969),(19.647824,46.173875),(19.669321,46.1731),(19.690095,46.168398)] +Suriname [(-54.170965,5.348376),(-54.19046,5.325748),(-54.269163,5.269085),(-54.311331,5.225806),(-54.320652,5.212454),(-54.332002,5.196196),(-54.343577,5.156379),(-54.350037,5.149583),(-54.365437,5.138757),(-54.369881,5.132375),(-54.37231,5.121058),(-54.375617,5.114883),(-54.430239,5.053388),(-54.448636,5.024294),(-54.454785,5.008533),(-54.457576,4.99135),(-54.448842,4.951482),(-54.450806,4.936728),(-54.455509,4.931431),(-54.482122,4.912802),(-54.486721,4.902958),(-54.484964,4.892752),(-54.47866,4.87867),(-54.47866,4.755189),(-54.475404,4.741727),(-54.467291,4.736198),(-54.457007,4.73333),(-54.447344,4.727852),(-54.435458,4.709404),(-54.435148,4.692376),(-54.438559,4.673282),(-54.437732,4.648735),(-54.425278,4.61605),(-54.423469,4.604345),(-54.423469,4.563392),(-54.428172,4.546158),(-54.447964,4.509545),(-54.450806,4.484224),(-54.433443,4.376375),(-54.402954,4.312942),(-54.394117,4.269818),(-54.393187,4.244393),(-54.399595,4.227262),(-54.41055,4.208581),(-54.406571,4.191528),(-54.394892,4.177911),(-54.38249,4.169566),(-54.3489,4.160522),(-54.338565,4.152332),(-54.334689,4.131713),(-54.337583,4.116649),(-54.351019,4.082078),(-54.355153,4.066523),(-54.353861,4.041718),(-54.344353,4.024949),(-54.310194,3.994202),(-54.30389,3.984461),(-54.293141,3.949812),(-54.286475,3.940717),(-54.272522,3.929529),(-54.265804,3.922527),(-54.241051,3.876354),(-54.222241,3.866199),(-54.218004,3.857647),(-54.214541,3.858215),(-54.207048,3.846485),(-54.197488,3.826899),(-54.18736,3.814678),(-54.181623,3.811009),(-54.170203,3.805815),(-54.136097,3.795738),(-54.125503,3.788736),(-54.104781,3.75804),(-54.094445,3.747059),(-54.074033,3.676107),(-54.050675,3.640554),(-54.050107,3.634534),(-54.03941,3.636446),(-54.028403,3.639831),(-54.017861,3.641458),(-54.008818,3.637919),(-53.994813,3.623604),(-53.988819,3.610995),(-53.990266,3.595699),(-53.998276,3.573039),(-54.006285,3.531026),(-54.00303,3.455397),(-54.019359,3.415374),(-54.055688,3.377676),(-54.060339,3.364214),(-54.062716,3.346696),(-54.069486,3.327085),(-54.080234,3.309722),(-54.080805,3.309314),(-54.114289,3.285382),(-54.140024,3.245023),(-54.176559,3.200581),(-54.19046,3.178102),(-54.211182,3.127407),(-54.18798,3.130973),(-54.177076,3.116555),(-54.174699,3.093921),(-54.177024,3.072785),(-54.179453,3.070305),(-54.184001,3.067979),(-54.188496,3.06462),(-54.190667,3.059143),(-54.188548,3.058316),(-54.179195,3.051598),(-54.177024,3.048911),(-54.170875,3.024829),(-54.169169,3.010722),(-54.173614,3.004521),(-54.185137,2.997803),(-54.179763,2.983747),(-54.168756,2.971396),(-54.163433,2.969742),(-54.162968,2.955996),(-54.170203,2.925404),(-54.173252,2.919513),(-54.18705,2.899307),(-54.190667,2.887835),(-54.191339,2.877345),(-54.189633,2.870627),(-54.18338,2.863857),(-54.170203,2.853057),(-54.182192,2.846546),(-54.18891,2.838277),(-54.204723,2.791769),(-54.212526,2.776421),(-54.285338,2.677977),(-54.320426,2.606664),(-54.359442,2.508013),(-54.375617,2.483777),(-54.423469,2.435925),(-54.435458,2.431119),(-54.450548,2.428948),(-54.472613,2.42869),(-54.483311,2.422851),(-54.493387,2.417063),(-54.520311,2.348902),(-54.531576,2.340168),(-54.551162,2.338101),(-54.584751,2.348333),(-54.599221,2.345801),(-54.615292,2.326267),(-54.634128,2.320118),(-54.653378,2.316501),(-54.67286,2.315984),(-54.692497,2.318981),(-54.704124,2.32482),(-54.706553,2.33
2882),(-54.705442,2.342545),(-54.706734,2.353191),(-54.715699,2.375928),(-54.709963,2.39479),(-54.707276,2.395876),(-54.702031,2.395721),(-54.697251,2.397322),(-54.695597,2.403524),(-54.696838,2.409105),(-54.701101,2.419905),(-54.702548,2.425745),(-54.703194,2.445847),(-54.707664,2.450343),(-54.721177,2.457991),(-54.743036,2.466517),(-54.759805,2.465794),(-54.775494,2.457335),(-54.792491,2.448172),(-54.841919,2.433496),(-54.880289,2.447345),(-54.978577,2.54305),(-54.982815,2.552352),(-54.979559,2.566098),(-54.960232,2.585683),(-54.953669,2.599067),(-54.959095,2.608989),(-54.976872,2.606922),(-55.017748,2.590592),(-55.037747,2.577932),(-55.076298,2.54522),(-55.112471,2.527805),(-55.120894,2.524808),(-55.128852,2.525687),(-55.136604,2.5338),(-55.13309,2.552869),(-55.137793,2.56217),(-55.171951,2.559328),(-55.25192,2.497885),(-55.275252,2.499332),(-55.286982,2.513905),(-55.302795,2.519692),(-55.32052,2.518297),(-55.337754,2.511269),(-55.354523,2.494888),(-55.360079,2.476232),(-55.362766,2.457939),(-55.37106,2.442539),(-55.399585,2.430189),(-55.435552,2.43086),(-55.474309,2.435821),(-55.51131,2.43639),(-55.568206,2.431274),(-55.587326,2.433858),(-55.608306,2.433961),(-55.645384,2.416649),(-55.724294,2.396909),(-55.744112,2.401043),(-55.754602,2.409415),(-55.766126,2.431274),(-55.773568,2.440059),(-55.783128,2.445175),(-55.853769,2.462642),(-55.870668,2.47091),(-55.925031,2.515662),(-55.9472,2.528167),(-55.971023,2.530338),(-55.983012,2.526462),(-55.989239,2.520829),(-55.996913,2.503259),(-56.007507,2.460678),(-56.008179,2.416288),(-56.013346,2.398821),(-56.042569,2.355103),(-56.050243,2.347041),(-56.062155,2.341564),(-56.07342,2.34239),(-56.082283,2.347713),(-56.091455,2.351899),(-56.10378,2.349057),(-56.116803,2.333089),(-56.131995,2.304357),(-56.143881,2.274746),(-56.146775,2.256194),(-56.135251,2.248546),(-56.091455,2.245911),(-56.072929,2.241518),(-56.054016,2.224051),(-56.044843,2.184726),(-56.03133,2.163487),(-56.004406,2.144986),(-55.995776,2.13739),(-55.959965,2.090985),(-55.925806,2.061684),(-55.918262,2.050393),(-55.915135,2.037655),(-55.916763,2.028508),(-55.92038,2.019387),(-55.923584,2.006597),(-55.922344,1.962155),(-55.916763,1.922235),(-55.922034,1.886217),(-55.953815,1.853273),(-56.019651,1.833507),(-56.082644,1.846555),(-56.145276,1.871773),(-56.209768,1.888542),(-56.257931,1.885648),(-56.273796,1.887509),(-56.292709,1.895467),(-56.328624,1.918463),(-56.347951,1.926602),(-56.367175,1.928876),(-56.396527,1.921667),(-56.415182,1.919703),(-56.429807,1.9227),(-56.481819,1.941614),(-56.4851,1.953215),(-56.491198,1.962905),(-56.499983,1.969984),(-56.520964,1.976728),(-56.529387,1.981792),(-56.536493,1.988639),(-56.542074,1.997166),(-56.579772,2.016803),(-56.677802,2.018405),(-56.70519,2.029645),(-56.801464,2.165967),(-56.80875,2.195784),(-56.816346,2.217592),(-56.826191,2.261672),(-56.838877,2.281102),(-56.847792,2.285856),(-56.870865,2.290146),(-56.880141,2.294796),(-56.88443,2.303736),(-56.883887,2.339755),(-56.89505,2.362079),(-56.930836,2.393292),(-56.938458,2.41143),(-56.932024,2.425796),(-56.930241,2.436183),(-56.935073,2.445898),(-56.951635,2.461246),(-56.956364,2.470031),(-56.959593,2.483777),(-56.957526,2.493492),(-56.952617,2.504551),(-56.951274,2.515455),(-56.959593,2.524705),(-56.980006,2.509719),(-56.994113,2.504861),(-57.000573,2.514163),(-56.995302,2.547856),(-56.997111,2.555142),(-57.02238,2.584546),(-57.027238,2.59359),(-57.02791,2.610591),(-57.023155,2.623665),(-57.020468,2.635447),(-57.027238,2.648212),(-57.048012,2.636481),(-57.05597,2.642372),(-57.061964,2.68175)
,(-57.071886,2.700095),(-57.097725,2.72738),(-57.102944,2.740454),(-57.099016,2.750376),(-57.092144,2.763037),(-57.08987,2.773889),(-57.099533,2.778591),(-57.133743,2.773424),(-57.13705,2.77115),(-57.143665,2.790012),(-57.126405,2.82603),(-57.133381,2.832593),(-57.145215,2.830423),(-57.165266,2.821069),(-57.174619,2.818951),(-57.186608,2.82324),(-57.191052,2.832593),(-57.194256,2.841998),(-57.202266,2.846236),(-57.206348,2.856571),(-57.219629,2.908299),(-57.210328,2.918221),(-57.194721,2.930623),(-57.183818,2.944679),(-57.188623,2.95951),(-57.221851,2.963024),(-57.234615,2.970362),(-57.226451,2.983385),(-57.22118,2.992583),(-57.2218,3.000438),(-57.224694,3.009171),(-57.226451,3.021264),(-57.223247,3.024778),(-57.209242,3.024054),(-57.205987,3.028137),(-57.20702,3.035681),(-57.211774,3.048342),(-57.21286,3.056042),(-57.222213,3.078263),(-57.239473,3.094128),(-57.248568,3.112369),(-57.233324,3.14167),(-57.236734,3.143478),(-57.237406,3.143995),(-57.237613,3.144874),(-57.239473,3.147871),(-57.246243,3.141618),(-57.254976,3.13583),(-57.264898,3.132626),(-57.274923,3.13428),(-57.286033,3.142393),(-57.284173,3.148956),(-57.277714,3.156914),(-57.274923,3.169007),(-57.285672,3.192106),(-57.287946,3.203113),(-57.287119,3.21536),(-57.281744,3.235928),(-57.280452,3.247813),(-57.284018,3.25975),(-57.290271,3.267967),(-57.292493,3.276287),(-57.284225,3.288793),(-57.282778,3.295045),(-57.280452,3.329772),(-57.283553,3.343802),(-57.308409,3.39491),(-57.340035,3.370157),(-57.366752,3.36561),(-57.393469,3.373826),(-57.425043,3.387495),(-57.423958,3.369382),(-57.430934,3.360623),(-57.459201,3.353336),(-57.461682,3.349978),(-57.468761,3.343182),(-57.476254,3.338945),(-57.479717,3.343389),(-57.482042,3.348556),(-57.487985,3.351528),(-57.49584,3.352975),(-57.503901,3.353336),(-57.509637,3.357677),(-57.522401,3.365248),(-57.535269,3.36778),(-57.545036,3.348143),(-57.554234,3.346748),(-57.565138,3.349822),(-57.596193,3.367121),(-57.596402,3.367237),(-57.613249,3.376643),(-57.641464,3.38243),(-57.649939,3.385944),(-57.654848,3.39013),(-57.658207,3.39522),(-57.662548,3.40708),(-57.664615,3.417544),(-57.663995,3.425839),(-57.658931,3.443899),(-57.658362,3.450617),(-57.660533,3.462994),(-57.660688,3.469505),(-57.645495,3.498987),(-57.644048,3.51666),(-57.660429,3.535186),(-57.681255,3.547304),(-57.685131,3.549164),(-57.687249,3.552162),(-57.697998,3.553609),(-57.702856,3.555676),(-57.707351,3.560662),(-57.708643,3.564125),(-57.709522,3.56769),(-57.71748,3.582444),(-57.722493,3.599962),(-57.72611,3.607197),(-57.764247,3.631588),(-57.813133,3.651845),(-57.829773,3.662439),(-57.840987,3.680991),(-57.846929,3.702101),(-57.849617,3.747473),(-57.853854,3.76667),(-57.875145,3.812197),(-57.925839,3.886405),(-57.942117,3.905447),(-58.007902,3.957279),(-58.032138,3.987716),(-58.042008,4.023037),(-58.055082,4.108071),(-58.067691,4.151143),(-58.065934,4.171839),(-58.052033,4.193104),(-57.964958,4.282039),(-57.953848,4.299015),(-57.947388,4.318756),(-57.94527,4.343922),(-57.94806,4.360484),(-57.953486,4.375703),(-57.956639,4.391438),(-57.952659,4.409732),(-57.942789,4.425106),(-57.931317,4.437844),(-57.92186,4.450944),(-57.914471,4.484069),(-57.896074,4.52851),(-57.879124,4.556829),(-57.863311,4.60773),(-57.845431,4.633517),(-57.837421,4.65088),(-57.836026,4.669845),(-57.844346,4.687519),(-57.871166,4.721728),(-57.88455,4.762863),(-57.901138,4.773922),(-57.91783,4.782371),(-57.925374,4.796142),(-57.918501,4.829758),(-57.90243,4.852728),(-57.884188,4.872262),(-57.870701,4.888928),(-57.85034,4.922801),(-57.842795,4.929881),(-57.830
238,4.933059),(-57.820575,4.930294),(-57.810963,4.92554),(-57.798457,4.92306),(-57.773962,4.926445),(-57.762335,4.93554),(-57.747297,4.964013),(-57.720477,4.9898),(-57.686939,5.006259),(-57.64937,5.008429),(-57.610096,4.99135),(-57.566223,5.007086),(-57.543279,5.011091),(-57.517596,5.012434),(-57.512221,5.009153),(-57.496873,4.994657),(-57.490259,4.99135),(-57.47734,4.992487),(-57.460493,4.997629),(-57.381377,5.005742),(-57.35621,5.012434),(-57.345513,5.01662),(-57.338072,5.020754),(-57.329442,5.024087),(-57.315231,5.026103),(-57.306497,5.024449),(-57.298694,5.021116),(-57.290478,5.020289),(-57.280452,5.026103),(-57.316936,5.058969),(-57.321432,5.070157),(-57.320192,5.083412),(-57.317091,5.094755),(-57.302208,5.128474),(-57.295542,5.157206),(-57.288979,5.171029),(-57.277714,5.17692),(-57.259937,5.1763),(-57.247276,5.172373),(-57.23875,5.162012),(-57.233324,5.142116),(-57.217769,5.14855),(-57.202008,5.157619),(-57.189812,5.16922),(-57.184903,5.183147),(-57.192396,5.203172),(-57.225727,5.243609),(-57.232522,5.260572),(-57.233324,5.262574),(-57.237613,5.268723),(-57.247638,5.271772),(-57.259007,5.271204),(-57.26743,5.266295),(-57.269187,5.256709),(-57.264019,5.247045),(-57.253787,5.234927),(-57.256423,5.228287),(-57.263037,5.223997),(-57.271667,5.222861),(-57.280452,5.225341),(-57.285465,5.230819),(-57.287274,5.238699),(-57.288824,5.30854),(-57.292028,5.311331),(-57.298332,5.311021),(-57.308409,5.313475),(-57.322827,5.30761),(-57.331715,5.30761),(-57.335695,5.316886),(-57.332284,5.327867),(-57.324119,5.33859),(-57.275491,5.387398),(-57.265208,5.403366),(-57.261229,5.42675),(-57.260092,5.448196),(-57.25694,5.464086),(-57.24767,5.484931),(-57.233998,5.498969),(-57.225087,5.507514),(-57.193023,5.519721),(-57.179514,5.530585),(-57.170155,5.542792),(-57.163686,5.557074),(-57.139638,5.665107),(-57.134877,5.759914),(-57.128,5.793158),(-57.12577,5.818145),(-57.113847,5.858844),(-57.102283,5.886416),(-57.075795,5.929999),(-57.071645,5.94123),(-57.066029,5.9501),(-57.057688,5.955878),(-57.03482,5.959906),(-57.027903,5.964423),(-57.022694,5.96894),(-57.017445,5.970933),(-57.012807,5.973863),(-57.000111,5.987982),(-56.994496,5.992662),(-56.98469,5.996283),(-56.974599,5.998114),(-56.963872,6.009152),(-56.956537,6.011574),(-56.928844,6.004262),(-56.866972,5.989637),(-56.830356,5.985557),(-56.780724,5.985508),(-56.702631,5.98219),(-56.659512,5.979716),(-56.642432,5.973223),(-56.611422,5.953647),(-56.593104,5.945753),(-56.57395,5.940756),(-56.526859,5.936549),(-56.418314,5.911349),(-56.355024,5.89467),(-56.303807,5.892508),(-56.256766,5.880406),(-56.222834,5.870753),(-56.19164,5.862616),(-56.124257,5.848049),(-56.083036,5.834366),(-56.058595,5.827737),(-56.037994,5.824164),(-55.999182,5.810713),(-55.936006,5.804661),(-55.917877,5.784735),(-55.903554,5.776353),(-55.897694,5.763007),(-55.897694,5.676703),(-55.897247,5.676703),(-55.890248,5.676703),(-55.884999,5.69245),(-55.88329,5.708075),(-55.884023,5.741929),(-55.887278,5.759467),(-55.90453,5.797187),(-55.915362,5.821771),(-55.899963,5.841964),(-55.910003,5.850918),(-55.922245,5.850353),(-55.933959,5.849827),(-55.947872,5.861521),(-55.951169,5.881511),(-55.946669,5.896493),(-55.930433,5.91758),(-55.923613,5.927845),(-55.910395,5.941616),(-55.900654,5.951945),(-55.886052,5.962954),(-55.86799,5.969813),(-55.844382,5.975962),(-55.820095,5.977275),(-55.729089,5.981216),(-55.678578,5.985256),(-55.637318,5.985907),(-55.394988,5.973876),(-55.335967,5.962708),(-55.307871,5.94335),(-55.292045,5.936727),(-55.270823,5.929999),(-55.263783,5.926418),(-55.253204,5.917792),
(-55.242909,5.909369),(-55.2329,5.903266),(-55.229766,5.902411),(-55.221425,5.900214),(-55.21288,5.900214),(-55.202433,5.898761),(-55.188361,5.897395),(-55.176908,5.90129),(-55.164052,5.906684),(-55.14784,5.908177),(-55.130727,5.895855),(-55.110829,5.881171),(-55.103107,5.866412),(-55.115101,5.837128),(-55.129872,5.820746),(-55.128611,5.821357),(-55.101039,5.830499),(-55.091712,5.852791),(-55.090749,5.875117),(-55.08552,5.882799),(-55.063423,5.882883),(-55.030385,5.859768),(-55.010121,5.855536),(-54.99706,5.858466),(-54.988922,5.863674),(-54.981191,5.865465),(-54.958811,5.851793),(-54.950307,5.850287),(-54.941803,5.852118),(-54.931956,5.855536),(-54.924143,5.860419),(-54.918691,5.867092),(-54.911977,5.87287),(-54.900258,5.875393),(-54.890777,5.873358),(-54.880727,5.868354),(-54.863026,5.855536),(-54.867543,5.863674),(-54.874338,5.871568),(-54.882192,5.878363),(-54.890289,5.882799),(-54.903472,5.885403),(-54.91258,5.884025),(-54.921851,5.876625),(-54.933319,5.869672),(-54.956207,5.867575),(-54.971566,5.877267),(-54.985634,5.880387),(-55.001517,5.871259),(-55.016591,5.871975),(-55.031728,5.884467),(-55.047271,5.892279),(-55.105295,5.908922),(-55.12564,5.918647),(-55.142974,5.930121),(-55.15038,5.940863),(-55.154124,5.951361),(-55.159966,5.958114),(-55.153323,5.968592),(-55.1445,5.974248),(-55.116851,5.985256),(-55.085301,5.990649),(-55.044799,5.993105),(-55.013905,5.993394),(-54.77359,5.985256),(-54.75536,5.980943),(-54.712555,5.976184),(-54.673556,5.969192),(-54.471781,5.94097),(-54.34863,5.913755),(-54.252699,5.894808),(-54.169342,5.867906),(-54.097396,5.847339),(-54.074456,5.844887),(-54.062032,5.840559),(-54.036731,5.842351),(-54.030944,5.834184),(-54.020462,5.83025),(-54.014687,5.820148),(-54.016307,5.809704),(-53.992085,5.763497),(-53.986357,5.745665),(-53.99592,5.734524),(-54.019753,5.688979),(-54.031148,5.66931),(-54.036439,5.629502),(-54.050364,5.552069),(-54.061106,5.512885),(-54.073598,5.489814),(-54.091786,5.467922),(-54.10912,5.452826),(-54.118479,5.442776),(-54.122548,5.433417),(-54.128529,5.414293),(-54.170969,5.348375),(-54.170965,5.348376)] +Slovakia 
[(19.706321,49.387529),(19.726578,49.388873),(19.760065,49.397658),(19.769056,49.393213),(19.769263,49.39311),(19.778565,49.374197),(19.783629,49.357557),(19.78952,49.309188),(19.78797,49.305105),(19.780322,49.297612),(19.779713,49.293453),(19.779702,49.293375),(19.783216,49.290016),(19.79686,49.283399),(19.796962,49.28335),(19.798563,49.281081),(19.798822,49.280714),(19.806415,49.275488),(19.80678,49.275236),(19.808641,49.270895),(19.806263,49.265263),(19.800476,49.263041),(19.794068,49.261852),(19.789624,49.259475),(19.751797,49.219064),(19.747787,49.205956),(19.747766,49.205887),(19.760685,49.194208),(19.785903,49.188162),(19.831998,49.185888),(19.854219,49.191262),(19.868292,49.200788),(19.887809,49.214),(19.905896,49.222785),(19.937935,49.22511),(19.965737,49.215653),(20.01731,49.183872),(20.050486,49.173227),(20.07002,49.183097),(20.080355,49.208109),(20.08604,49.243042),(20.098442,49.25286),(20.105367,49.263971),(20.111051,49.275856),(20.130171,49.303917),(20.135856,49.308878),(20.138336,49.307327),(20.16035,49.305622),(20.170169,49.311617),(20.191976,49.328618),(20.207169,49.334199),(20.284374,49.338643),(20.289335,49.343087),(20.296466,49.356937),(20.301634,49.371199),(20.303701,49.380398),(20.307422,49.386599),(20.317653,49.391612),(20.329539,49.391767),(20.370467,49.38169),(20.422143,49.382982),(20.421936,49.40019),(20.437543,49.402515),(20.522085,49.374352),(20.543996,49.370838),(20.567664,49.376367),(20.579136,49.383395),(20.595053,49.396263),(20.605284,49.400035),(20.614483,49.400448),(20.636084,49.39833),(20.673911,49.402309),(20.689517,49.4005),(20.778297,49.331202),(20.79411,49.323657),(20.816538,49.321383),(20.833591,49.322055),(20.849301,49.320505),(20.868007,49.31141),(20.884337,49.300299),(20.900564,49.2926),(20.91896,49.290326),(20.942422,49.295855),(20.964022,49.308103),(20.996682,49.339367),(21.017249,49.352441),(21.032959,49.354611),(21.054043,49.354766),(21.072543,49.357195),(21.080708,49.366342),(21.068512,49.381431),(21.045464,49.390578),(21.033269,49.399673),(21.053526,49.414453),(21.068822,49.419207),(21.10944,49.424581),(21.124943,49.423651),(21.143029,49.415538),(21.157292,49.405151),(21.172588,49.398278),(21.194086,49.400603),(21.211242,49.411094),(21.242455,49.441273),(21.260542,49.449438),(21.274391,49.447267),(21.330408,49.427785),(21.42787,49.409802),(21.444406,49.409905),(21.48151,49.415228),(21.496186,49.412386),(21.514422,49.417213),(21.529569,49.421222),(21.601193,49.426493),(21.6199,49.423341),(21.630028,49.418897),(21.648632,49.407063),(21.65876,49.402464),(21.666305,49.401792),(21.681808,49.404066),(21.69173,49.402154),(21.708576,49.390888),(21.742166,49.357092),(21.757566,49.348927),(21.768004,49.353474),(21.782164,49.364482),(21.799423,49.374817),(21.819577,49.377246),(21.837871,49.370424),(21.874561,49.348358),(21.928408,49.330788),(21.964478,49.308568),(21.99321,49.278182),(22.005819,49.242887),(22.012434,49.211054),(22.040752,49.197463),(22.111549,49.188627),(22.144105,49.174881),(22.155681,49.171522),(22.165706,49.171108),(22.189684,49.17302),(22.197952,49.171987),(22.208908,49.163977),(22.209011,49.156949),(22.20622,49.150489),(22.208494,49.144185),(22.215936,49.139999),(22.262858,49.130594),(22.318151,49.131989),(22.339442,49.12646),(22.390085,49.093025),(22.426775,49.085635),(22.505013,49.083413),(22.539637,49.0722),(22.531989,49.055715),(22.524754,49.032874),(22.520103,49.009826),(22.520516,48.992928),(22.505013,48.984246),(22.466566,48.980526),(22.448996,48.971431),(22.427292,48.929469),(22.41489,48.911693),(22.41365,48.906783),(22.4137
53,48.893864),(22.411789,48.887766),(22.402384,48.878826),(22.378406,48.865442),(22.370799,48.858248),(22.368898,48.856451),(22.362283,48.844255),(22.36156,48.836452),(22.363523,48.828287),(22.365694,48.794387),(22.36342,48.787617),(22.356495,48.776145),(22.347814,48.767877),(22.338409,48.762968),(22.330761,48.756405),(22.32766,48.743021),(22.3289,48.721575),(22.322492,48.700336),(22.310297,48.681681),(22.294277,48.667625),(22.282185,48.662405),(22.25552,48.656773),(22.243221,48.651192),(22.235676,48.644164),(22.225237,48.628222),(22.219036,48.620935),(22.153717,48.585873),(22.138731,48.569595),(22.136664,48.549337),(22.148343,48.508823),(22.144829,48.493114),(22.13377,48.476835),(22.13284,48.404798),(22.113926,48.38865),(22.09646,48.379425),(22.077856,48.375808),(22.018015,48.379735),(21.999928,48.37896),(21.981531,48.374723),(21.929338,48.372914),(21.914765,48.36909),(21.884276,48.357463),(21.841178,48.353174),(21.789398,48.335526),(21.759012,48.333769),(21.727697,48.340901),(21.701238,48.353949),(21.67757,48.372346),(21.621553,48.429655),(21.613388,48.440352),(21.600366,48.481641),(21.591684,48.49301),(21.574631,48.495568),(21.537734,48.495232),(21.521818,48.50009),(21.5151,48.507118),(21.506005,48.526496),(21.49939,48.535075),(21.490812,48.540268),(21.472725,48.544997),(21.439135,48.558329),(21.424563,48.561275),(21.372679,48.550345),(21.338573,48.549854),(21.32183,48.54758),(21.302296,48.539907),(21.293925,48.530579),(21.28824,48.519934),(21.276561,48.50872),(21.261782,48.50319),(21.250413,48.506498),(21.238011,48.513448),(21.219924,48.518719),(21.186748,48.513707),(21.109336,48.489109),(21.084015,48.49301),(21.063861,48.506239),(21.035956,48.514637),(21.0065,48.518151),(20.981489,48.516859),(20.945832,48.518978),(20.891365,48.541095),(20.859946,48.543317),(20.845477,48.545823),(20.815814,48.563807),(20.800311,48.569233),(20.784085,48.569052),(20.572522,48.536573),(20.51051,48.533783),(20.481674,48.526083),(20.480538,48.518538),(20.480227,48.510218),(20.482191,48.492881),(20.482811,48.489367),(20.482915,48.485827),(20.482501,48.482261),(20.481674,48.478747),(20.468239,48.465389),(20.465968,48.46379),(20.435579,48.442393),(20.420593,48.429241),(20.409017,48.413713),(20.370157,48.334338),(20.349279,48.305476),(20.324268,48.279948),(20.295226,48.260415),(20.272385,48.252456),(20.260293,48.255893),(20.249027,48.264187),(20.22908,48.270905),(20.217815,48.267598),(20.187636,48.248994),(20.170892,48.244033),(20.153322,48.245273),(20.14309,48.247805),(20.134512,48.246669),(20.122006,48.236747),(20.118079,48.229719),(20.112601,48.211735),(20.105263,48.202795),(20.096995,48.198429),(20.078082,48.193545),(20.038303,48.177233),(20.034983,48.175872),(19.99695,48.167914),(19.973902,48.158379),(19.928633,48.130087),(19.905069,48.124299),(19.884295,48.129621),(19.846261,48.152669),(19.821663,48.157914),(19.785593,48.14869),(19.776084,48.149517),(19.766679,48.159),(19.76916,48.167449),(19.774844,48.176079),(19.774327,48.185897),(19.756551,48.200315),(19.73309,48.202899),(19.686994,48.196904),(19.676969,48.200392),(19.655265,48.21835),(19.643896,48.224809),(19.633871,48.226773),(19.623226,48.227006),(19.531241,48.21065),(19.513982,48.203958),(19.503026,48.189411),(19.493621,48.150705),(19.481735,48.134892),(19.481425,48.134427),(19.481219,48.133962),(19.481425,48.133342),(19.481735,48.13267),(19.483286,48.127193),(19.483803,48.121767),(19.483286,48.116496),(19.481735,48.111328),(19.428199,48.085852),(19.29322,48.087764),(19.233482,48.06208),(19.222526,48.060582),(19.098503,48.070736),(19.038662,48.06487
1),(19.019017,48.065497),(18.996494,48.066214),(18.981818,48.061615),(18.933595,48.054349),(18.838467,48.040015),(18.821001,48.030454),(18.794232,47.993144),(18.784724,47.987615),(18.765293,47.985393),(18.756198,47.981827),(18.743176,47.971052),(18.744519,47.967357),(18.751444,47.963353),(18.754751,47.951803),(18.744726,47.910513),(18.742246,47.889455),(18.748757,47.870723),(18.778006,47.851447),(18.816453,47.83256),(18.814916,47.832194),(18.790305,47.826332),(18.767671,47.822302),(18.750307,47.81362),(18.717234,47.788118),(18.692843,47.777963),(18.663698,47.775896),(18.633829,47.779824),(18.597448,47.79065),(18.552593,47.792846),(18.347851,47.776775),(18.273024,47.756259),(18.23592,47.753882),(18.11262,47.762486),(17.883532,47.752521),(17.825713,47.750006),(17.741997,47.76538),(17.719245,47.773669),(17.676678,47.789177),(17.666239,47.797032),(17.658384,47.807316),(17.639884,47.819201),(17.61911,47.829226),(17.604227,47.834239),(17.592962,47.833102),(17.582937,47.829795),(17.572601,47.829536),(17.560406,47.837986),(17.526609,47.872118),(17.517308,47.876252),(17.492193,47.879818),(17.481858,47.882711),(17.472039,47.888809),(17.36941,47.981207),(17.337887,47.998725),(17.272671,48.00534),(17.262316,48.007283),(17.220892,48.015055),(17.184821,48.020274),(17.148338,48.005443),(17.12498,48.019525),(17.092631,48.02725),(17.069686,48.035674),(17.075061,48.052081),(17.063072,48.058773),(17.059144,48.060427),(17.064829,48.07903),(17.069996,48.089159),(17.080228,48.097608),(17.067206,48.106936),(17.062455,48.112724),(17.047465,48.130991),(17.036923,48.135513),(17.020904,48.137166),(17.007158,48.142799),(16.98194,48.161299),(16.974808,48.17688),(16.974705,48.198558),(16.969951,48.216645),(16.954345,48.25256),(16.953931,48.25734),(16.955275,48.268786),(16.954345,48.273127),(16.950624,48.276589),(16.944423,48.27801),(16.933364,48.284728),(16.924269,48.287519),(16.916621,48.2908),(16.913313,48.296691),(16.912073,48.301239),(16.908869,48.306975),(16.905459,48.311988),(16.902771,48.314106),(16.898431,48.316173),(16.900291,48.321238),(16.904218,48.327025),(16.906492,48.33147),(16.901945,48.339402),(16.891713,48.347024),(16.881171,48.352812),(16.875486,48.355034),(16.855332,48.356455),(16.847581,48.359582),(16.84448,48.365602),(16.845101,48.376583),(16.846858,48.381131),(16.849338,48.384102),(16.851922,48.390407),(16.86081,48.443788),(16.864944,48.458077),(16.875073,48.471539),(16.901221,48.496602),(16.906492,48.509908),(16.913934,48.519339),(16.93047,48.528202),(16.946903,48.539726),(16.949055,48.544836),(16.954345,48.557399),(16.945043,48.604166),(16.947523,48.623157),(16.963336,48.635947),(16.974808,48.649874),(17.025348,48.746328),(17.049946,48.774027),(17.084362,48.793715),(17.098728,48.805756),(17.10462,48.824618),(17.113715,48.83392),(17.167458,48.859758),(17.21221,48.866062),(17.260269,48.857794),(17.374371,48.819605),(17.391734,48.821776),(17.411225,48.830216),(17.429561,48.838157),(17.453332,48.842756),(17.467905,48.838105),(17.498497,48.816763),(17.535084,48.812991),(17.727217,48.86291),(17.744891,48.872574),(17.77941,48.911744),(17.795533,48.920581),(17.820131,48.923371),(17.841215,48.920994),(17.860336,48.921718),(17.878733,48.933552),(17.886897,48.947246),(17.894649,48.9782),(17.90085,48.993031),(17.914079,49.010498),(17.93506,49.019386),(17.959244,49.021867),(18.012885,49.019128),(18.046061,49.029153),(18.075516,49.047188),(18.09629,49.070288),(18.102905,49.09225),(18.10163,49.136928),(18.101458,49.142945),(18.105075,49.169765),(18.117788,49.202579),(18.136495,49.233068),(18.160576,49.2587),(18
.190031,49.276942),(18.324597,49.311255),(18.361586,49.330191),(18.385162,49.342261),(18.387642,49.389751),(18.416788,49.385462),(18.439215,49.395074),(18.481797,49.429077),(18.514663,49.440601),(18.522931,49.446079),(18.527995,49.454864),(18.530476,49.473467),(18.535643,49.481684),(18.556211,49.490159),(18.600446,49.485973),(18.628971,49.495791),(18.635896,49.496721),(18.64282,49.495791),(18.675583,49.485043),(18.704522,49.479255),(18.732531,49.480288),(18.773562,49.504886),(18.792269,49.509537),(18.833196,49.510261),(18.932208,49.504318),(18.961147,49.492794),(18.952362,49.475948),(18.953396,49.461685),(18.958253,49.448146),(18.960837,49.433315),(18.956703,49.400448),(18.962284,49.389183),(18.981818,49.386806),(19.007139,49.388098),(19.045793,49.402774),(19.067601,49.406133),(19.076799,49.404066),(19.096643,49.394609),(19.106255,49.391405),(19.11659,49.39094),(19.141705,49.394195),(19.172504,49.402257),(19.179738,49.410267),(19.182322,49.422721),(19.18935,49.442203),(19.192244,49.442616),(19.204543,49.442823),(19.208987,49.444993),(19.211364,49.45166),(19.209401,49.456414),(19.206197,49.459411),(19.20444,49.460548),(19.220459,49.493001),(19.234102,49.507212),(19.248675,49.516255),(19.265005,49.521475),(19.284228,49.524162),(19.315027,49.523955),(19.325363,49.52504),(19.339522,49.528761),(19.347067,49.532998),(19.43378,49.595165),(19.437604,49.600126),(19.443392,49.60178),(19.44889,49.600313),(19.457344,49.598059),(19.474191,49.578732),(19.481735,49.573513),(19.505197,49.563384),(19.517289,49.543282),(19.535479,49.492846),(19.551602,49.461013),(19.55677,49.453882),(19.573926,49.445252),(19.594804,49.441583),(19.634801,49.441324),(19.63015,49.43502),(19.6286,49.429594),(19.62922,49.413936),(19.627097,49.402165),(19.627044,49.401868),(19.684514,49.389079),(19.706321,49.387529)] +Slovenia 
[(16.343426,46.714178),(16.357275,46.715832),(16.357585,46.699011),(16.365383,46.696712),(16.366216,46.696467),(16.371434,46.694929),(16.390038,46.694154),(16.405024,46.687255),(16.410502,46.668367),(16.402607,46.663109),(16.396652,46.659143),(16.377636,46.652864),(16.368437,46.642994),(16.372246,46.636341),(16.376395,46.629093),(16.394585,46.619016),(16.430242,46.604392),(16.467139,46.564704),(16.500832,46.544809),(16.515302,46.501711),(16.491117,46.515146),(16.481298,46.519022),(16.470756,46.520262),(16.449052,46.51835),(16.440371,46.519022),(16.431999,46.523208),(16.415153,46.535559),(16.406264,46.539486),(16.394895,46.540106),(16.382183,46.539383),(16.369471,46.540571),(16.357895,46.546979),(16.351694,46.539486),(16.351254,46.539922),(16.344149,46.546979),(16.340276,46.543827),(16.32999,46.535455),(16.310663,46.530985),(16.295367,46.524448),(16.263947,46.515922),(16.260931,46.513576),(16.234905,46.493339),(16.234188,46.484892),(16.233665,46.47874),(16.237489,46.465072),(16.249375,46.437528),(16.250098,46.429441),(16.248238,46.413344),(16.250925,46.404998),(16.257023,46.399908),(16.274076,46.392105),(16.278727,46.387351),(16.279936,46.378661),(16.280277,46.376214),(16.275626,46.373165),(16.252785,46.373424),(16.217129,46.367352),(16.208654,46.367093),(16.191807,46.369781),(16.177544,46.375594),(16.153567,46.391433),(16.143851,46.394714),(16.131552,46.393061),(16.123181,46.386885),(16.115533,46.379289),(16.106024,46.373734),(16.094345,46.372261),(16.088654,46.373086),(16.057965,46.377532),(16.057448,46.359652),(16.059722,46.345312),(16.059619,46.332315),(16.052384,46.318595),(16.049922,46.316735),(16.038948,46.308441),(16.019208,46.298829),(15.998434,46.291542),(15.993668,46.290613),(15.982001,46.288339),(15.948721,46.284204),(15.918903,46.272827),(15.883712,46.2594),(15.880895,46.259341),(15.879323,46.259308),(15.834206,46.258366),(15.81829,46.255524),(15.803097,46.250511),(15.799526,46.248607),(15.789144,46.24307),(15.768887,46.21935),(15.749767,46.210772),(15.677975,46.214462),(15.661297,46.21532),(15.639799,46.207672),(15.626571,46.19521),(15.62285,46.191704),(15.604349,46.167002),(15.589983,46.138684),(15.58988,46.113517),(15.603833,46.090986),(15.623605,46.076415),(15.631531,46.070574),(15.643727,46.06551),(15.671735,46.057397),(15.683931,46.051144),(15.69288,46.041636),(15.697987,46.036209),(15.693646,46.025771),(15.681967,46.013523),(15.674319,45.993318),(15.674216,45.993163),(15.675381,45.972486),(15.67711,45.941796),(15.67556,45.925157),(15.670392,45.912651),(15.663364,45.900817),(15.659436,45.888828),(15.663674,45.876064),(15.675456,45.855962),(15.676076,45.841699),(15.666051,45.831674),(15.645587,45.824129),(15.626054,45.820202),(15.587296,45.81922),(15.549748,45.823693),(15.523527,45.826816),(15.513916,45.823406),(15.495312,45.812657),(15.48539,45.810176),(15.473918,45.811572),(15.462549,45.814207),(15.451284,45.815137),(15.440328,45.811468),(15.435574,45.802115),(15.439502,45.791676),(15.441052,45.782168),(15.429755,45.775691),(15.429063,45.775295),(15.331031,45.752486),(15.303799,45.746149),(15.263492,45.730388),(15.255017,45.723463),(15.249913,45.713732),(15.248919,45.711836),(15.250883,45.70765),(15.258221,45.705118),(15.267832,45.698452),(15.277754,45.685016),(15.283025,45.680107),(15.291914,45.675559),(15.304936,45.672149),(15.30907,45.674164),(15.310414,45.678195),(15.314961,45.680882),(15.327054,45.683156),(15.330774,45.684551),(15.333875,45.682639),(15.350034,45.669619),(15.351962,45.668066),(15.368291,45.649049),(15.373769,45.640213),(15.353099,45.640316),(15.339146
,45.636905),(15.326744,45.632255),(15.297391,45.625692),(15.291604,45.61825),(15.287883,45.610344),(15.281062,45.606158),(15.268556,45.601662),(15.269589,45.593446),(15.276721,45.582697),(15.282179,45.571494),(15.282612,45.570605),(15.288606,45.54456),(15.296668,45.522959),(15.311034,45.505906),(15.361367,45.482031),(15.351755,45.47614),(15.333875,45.458518),(15.325193,45.452834),(15.314031,45.450509),(15.281578,45.450974),(15.225849,45.43645),(15.18422,45.4256),(15.139262,45.430045),(15.066128,45.473934),(15.056166,45.479912),(15.007383,45.480843),(14.997461,45.487199),(14.986299,45.490868),(14.977341,45.491728),(14.962631,45.493142),(14.945372,45.50451),(14.922634,45.514949),(14.904444,45.514432),(14.900826,45.493142),(14.881603,45.469784),(14.838815,45.458983),(14.797164,45.465185),(14.781454,45.493142),(14.781351,45.493193),(14.781247,45.493348),(14.781144,45.493348),(14.688333,45.522029),(14.668489,45.533966),(14.668179,45.539134),(14.671693,45.556807),(14.669833,45.564558),(14.664045,45.570036),(14.663675,45.570187),(14.657327,45.572775),(14.650299,45.574842),(14.615986,45.594066),(14.603067,45.603574),(14.593868,45.616183),(14.592938,45.629671),(14.594075,45.648016),(14.591905,45.663364),(14.580949,45.667808),(14.569095,45.664252),(14.563896,45.662692),(14.556145,45.656697),(14.543639,45.63644),(14.533097,45.625692),(14.507879,45.605848),(14.498577,45.596184),(14.492273,45.58342),(14.491502,45.579165),(14.487415,45.5566),(14.482144,45.542441),(14.468915,45.525594),(14.429744,45.505389),(14.411244,45.493193),(14.372797,45.477845),(14.326805,45.4749),(14.280399,45.481101),(14.240608,45.493348),(14.218698,45.497172),(14.193066,45.491901),(14.145524,45.476243),(14.119686,45.472884),(14.117321,45.472976),(14.116747,45.472998),(14.092917,45.473918),(14.066665,45.480274),(14.041551,45.493142),(14.041551,45.493193),(14.028322,45.50265),(14.013956,45.507973),(13.987476,45.511909),(13.971891,45.514226),(13.964966,45.51097),(13.961452,45.50389),(13.961452,45.493142),(13.98264,45.475313),(13.969824,45.462963),(13.923005,45.448958),(13.908432,45.438881),(13.899647,45.429218),(13.889002,45.423637),(13.835568,45.429114),(13.819859,45.432628),(13.806526,45.442137),(13.759294,45.463169),(13.65997,45.459978),(13.629018,45.458983),(13.589529,45.488837),(13.591645,45.493109),(13.589122,45.501614),(13.595958,45.511908),(13.595958,45.518134),(13.586192,45.519477),(13.578624,45.523383),(13.572765,45.529975),(13.568614,45.539252),(13.58961,45.535793),(13.675141,45.544745),(13.75294,45.552883),(13.75294,45.559068),(13.742361,45.56977),(13.727387,45.581204),(13.711762,45.593207),(13.761051,45.596236),(13.800532,45.58125),(13.847764,45.584661),(13.867926,45.602169),(13.887038,45.618767),(13.894686,45.631841),(13.893641,45.633758),(13.893136,45.634683),(13.884144,45.635148),(13.869158,45.641143),(13.858409,45.649359),(13.778724,45.743411),(13.709478,45.765321),(13.699908,45.770524),(13.660334,45.792038),(13.643591,45.795655),(13.609174,45.798601),(13.581269,45.809246),(13.574172,45.819028),(13.565973,45.83033),(13.56928,45.86454),(13.599306,45.912327),(13.608244,45.926552),(13.615325,45.945912),(13.622817,45.966394),(13.605867,45.985411),(13.600735,45.984574),(13.571657,45.97983),(13.539411,45.96903),(13.509439,45.967428),(13.482257,45.989235),(13.481843,45.990372),(13.481223,45.991354),(13.480396,45.992284),(13.479466,45.993111),(13.474815,45.995747),(13.461793,46.006392),(13.477089,46.016055),(13.482257,46.018433),(13.490008,46.025564),(13.492799,46.032489),(13.490318,46.038948),(13.482257,46.044839),(13.5050
98,46.066027),(13.522585,46.075298),(13.616409,46.125041),(13.645037,46.161731),(13.641071,46.171405),(13.637389,46.180386),(13.627366,46.181733),(13.613928,46.183539),(13.584473,46.181317),(13.559048,46.184107),(13.528972,46.204829),(13.510162,46.213976),(13.482257,46.217904),(13.468304,46.223433),(13.438332,46.22488),(13.422829,46.228601),(13.437402,46.210927),(13.410116,46.207982),(13.401641,46.216663),(13.398127,46.230513),(13.384898,46.243122),(13.384795,46.243225),(13.384692,46.243328),(13.378594,46.268391),(13.373013,46.280277),(13.365261,46.290302),(13.391306,46.301568),(13.395638,46.306728),(13.409806,46.323608),(13.423242,46.344847),(13.434094,46.353864),(13.447323,46.354846),(13.459726,46.359032),(13.483704,46.37115),(13.530006,46.388332),(13.554397,46.405954),(13.575688,46.426676),(13.600182,46.442644),(13.634082,46.445719),(13.658783,46.445125),(13.67749,46.452075),(13.685853,46.464047),(13.688446,46.467759),(13.689893,46.493339),(13.695474,46.498636),(13.699194,46.504837),(13.701055,46.511891),(13.700951,46.519746),(13.716093,46.518867),(13.782135,46.507782),(13.795778,46.507886),(13.860683,46.51525),(13.890862,46.511787),(13.98233,46.481918),(13.998763,46.480523),(14.014922,46.482531),(14.032456,46.484709),(14.050026,46.484399),(14.066355,46.48104),(14.081032,46.47595),(14.137255,46.442438),(14.147933,46.440425),(14.149865,46.440061),(14.242323,46.438237),(14.362151,46.435875),(14.395844,46.440991),(14.40649,46.439337),(14.411244,46.434635),(14.414448,46.429002),(14.420029,46.424351),(14.437145,46.418884),(14.450931,46.414481),(14.467675,46.412672),(14.502194,46.418356),(14.515837,46.40536),(14.527102,46.388229),(14.540332,46.378643),(14.557695,46.38394),(14.562108,46.391737),(14.566997,46.400373),(14.575368,46.419726),(14.590148,46.434428),(14.599863,46.437167),(14.621567,46.43851),(14.631696,46.440577),(14.642031,46.445228),(14.662081,46.459698),(14.666629,46.460524),(14.672546,46.459791),(14.679961,46.458871),(14.687093,46.471221),(14.698772,46.480962),(14.703733,46.487758),(14.709727,46.492512),(14.726367,46.497706),(14.735255,46.493339),(14.760887,46.496284),(14.783004,46.503261),(14.788585,46.506646),(14.796663,46.519401),(14.807189,46.536024),(14.814519,46.551337),(14.822278,46.567546),(14.833647,46.584393),(14.85039,46.601136),(14.862999,46.604831),(14.877055,46.603642),(14.894063,46.605215),(14.897726,46.605554),(14.919507,46.615017),(14.933589,46.621135),(14.947955,46.619274),(14.967179,46.600257),(15.004386,46.636844),(15.061954,46.649557),(15.085723,46.647795),(15.105591,46.646323),(15.172557,46.64136),(15.204891,46.638963),(15.332784,46.64358),(15.388135,46.645578),(15.417591,46.637955),(15.435381,46.627195),(15.440018,46.62439),(15.462653,46.614649),(15.492625,46.618293),(15.511228,46.628369),(15.513939,46.632545),(15.517636,46.63824),(15.520308,46.64772),(15.52084,46.649608),(15.530659,46.663845),(15.545955,46.671881),(15.567349,46.675757),(15.58864,46.675963),(15.603729,46.673018),(15.616648,46.67555),(15.621745,46.678175),(15.626984,46.680873),(15.632978,46.689632),(15.632875,46.702473),(15.635975,46.717563),(15.652098,46.710819),(15.728683,46.70299),(15.755141,46.704024),(15.78461,46.7122),(15.822941,46.722834),(15.850743,46.724488),(15.878545,46.720715),(15.946344,46.697151),(15.986962,46.69219),(15.997917,46.686919),(16.016593,46.670757),(16.016727,46.670641),(16.016723,46.670691),(16.014557,46.693714),(16.003291,46.709191),(15.982001,46.718545),(15.970529,46.743014),(15.969702,46.760532),(15.970985,46.77505),(15.971252,46.778076),(15.978177,46.809134),(15
.97735,46.816213),(15.972906,46.818435),(15.971976,46.820632),(15.981691,46.827685),(15.987582,46.830011),(16.028441,46.836947),(16.032024,46.837556),(16.052936,46.84606),(16.094035,46.862774),(16.130246,46.856708),(16.135376,46.855849),(16.179486,46.858468),(16.272009,46.863962),(16.282241,46.859932),(16.297392,46.847033),(16.301878,46.843214),(16.310663,46.84001),(16.325339,46.839442),(16.329783,46.834403),(16.327509,46.825463),(16.321825,46.813268),(16.3149,46.802002),(16.311097,46.797519),(16.302188,46.787016),(16.29949,46.77951),(16.298157,46.775802),(16.300431,46.772082),(16.314177,46.743324),(16.325546,46.733273),(16.334124,46.721749),(16.343426,46.714178)] +Swaziland [(31.863357,-25.989937),(31.891882,-25.983839),(31.949243,-25.958104),(31.975288,-25.980429),(32.00402,-25.994381),(32.070579,-26.009781),(32.057454,-26.0412),(32.062725,-26.077167),(32.074559,-26.114167),(32.081432,-26.14879),(32.077814,-26.175455),(32.044741,-26.25452),(32.039884,-26.283666),(32.052338,-26.386812),(32.059831,-26.414821),(32.107063,-26.500293),(32.117398,-26.582252),(32.113884,-26.840014),(32.097761,-26.833503),(32.073163,-26.811386),(32.057144,-26.808595),(31.990068,-26.808285),(31.992238,-26.838257),(31.975288,-26.926624),(31.959682,-27.00879),(31.942112,-27.10067),(31.944204,-27.162338),(31.944851,-27.181389),(31.947383,-27.255493),(31.952964,-27.270169),(31.959785,-27.281021),(31.964849,-27.291666),(31.967743,-27.303035),(31.96826,-27.316264),(31.878602,-27.315231),(31.782535,-27.313991),(31.636911,-27.312234),(31.52653,-27.31089),(31.459454,-27.298694),(31.35667,-27.267068),(31.280137,-27.243504),(31.244707,-27.232586),(31.157043,-27.205573),(31.141954,-27.196478),(31.126503,-27.182629),(31.078754,-27.11824),(30.976073,-27.035041),(30.953645,-27.000315),(30.949408,-26.975407),(30.959743,-26.936133),(30.960983,-26.911224),(30.955092,-26.891381),(30.944447,-26.876601),(30.915818,-26.849419),(30.902589,-26.831333),(30.885294,-26.785641),(30.880213,-26.772215),(30.868741,-26.784927),(30.853393,-26.796606),(30.836443,-26.805081),(30.819803,-26.807975),(30.80244,-26.808905),(30.79567,-26.785547),(30.785697,-26.716921),(30.784146,-26.578842),(30.782906,-26.472388),(30.804559,-26.397457),(30.897318,-26.291727),(30.897334,-26.291709),(30.969871,-26.209148),(31.037774,-26.100111),(31.091621,-25.983633),(31.106866,-25.930923),(31.119836,-25.910045),(31.20841,-25.838628),(31.309282,-25.757393),(31.337394,-25.744681),(31.372017,-25.736412),(31.401473,-25.735999),(31.426898,-25.743647),(31.532835,-25.805556),(31.638771,-25.867464),(31.7306,-25.921207),(31.746966,-25.93079),(31.834573,-25.982082),(31.863357,-25.989937)] +Sint Maarten [(-63.017569,18.033391),(-63.030629,18.01911),(-63.097646,18.035956),(-63.112172,18.043036),(-63.118886,18.0515),(-63.107004,18.059475),(-63.107004,18.062109),(-63.085886,18.058511),(-63.017569,18.033391)] +Syria 
[(42.236832,37.286304),(42.267218,37.274522),(42.272902,37.276744),(42.279207,37.282273),(42.285822,37.286562),(42.292023,37.285064),(42.296364,37.279689),(42.301428,37.269923),(42.305769,37.264548),(42.339875,37.242637),(42.346696,37.239769),(42.353208,37.227031),(42.335328,37.171169),(42.346696,37.158431),(42.354551,37.15241),(42.356515,37.138277),(42.357238,37.109984),(42.363646,37.09815),(42.371191,37.087944),(42.376875,37.076756),(42.377186,37.062235),(42.376806,37.062001),(42.34587,37.042908),(42.281894,36.994125),(42.281894,36.994022),(42.281894,36.99397),(42.281584,36.99397),(42.178438,36.90532),(41.978554,36.733625),(41.843781,36.617869),(41.817323,36.599731),(41.789935,36.589292),(41.479773,36.536117),(41.414867,36.527384),(41.385411,36.516377),(41.365361,36.494156),(41.365361,36.494053),(41.365258,36.494001),(41.365258,36.493898),(41.276994,36.354785),(41.268829,36.327965),(41.236583,36.077024),(41.236687,36.060332),(41.240614,36.043021),(41.266246,35.99429),(41.266349,35.994342),(41.266349,35.994238),(41.343657,35.857657),(41.354509,35.825566),(41.359263,35.792752),(41.363501,35.655241),(41.358023,35.623925),(41.34221,35.593694),(41.308458,35.552248),(41.261285,35.49432),(41.261285,35.494165),(41.261181,35.494165),(41.261078,35.494165),(41.25188,35.46409),(41.243095,35.366525),(41.20134,35.243018),(41.191521,35.182143),(41.192326,35.158904),(41.198033,34.994041),(41.206508,34.819323),(41.204234,34.793123),(41.195656,34.768473),(41.023986,34.49433),(41.023986,34.494175),(40.98802,34.42852),(40.965282,34.401855),(40.936033,34.386068),(40.690467,34.331497),(40.543809,34.257988),(40.433221,34.202539),(40.322634,34.147167),(40.212046,34.091796),(40.173111,34.072283),(40.101459,34.036373),(39.990871,33.980976),(39.880387,33.925605),(39.769696,33.870233),(39.659212,33.81481),(39.548624,33.759491),(39.438037,33.704094),(39.327449,33.648722),(39.216862,33.593325),(39.106274,33.537928),(38.995686,33.482479),(38.885099,33.427108),(38.774511,33.371685),(38.529565,33.244251),(38.315742,33.13118),(38.230875,33.086302),(38.056726,32.994344),(38.056726,32.994292),(37.929395,32.92533),(37.758036,32.832519),(37.586677,32.739837),(37.494606,32.690056),(37.415214,32.64713),(37.244062,32.554396),(37.133371,32.494581),(37.133371,32.494529),(37.133165,32.494529),(37.133165,32.494478),(36.980099,32.410038),(36.819385,32.316788),(36.806569,32.313042),(36.792513,32.313533),(36.728641,32.327795),(36.706937,32.328338),(36.689574,32.319656),(36.653504,32.342859),(36.516684,32.357014),(36.480181,32.360791),(36.463955,32.369395),(36.407627,32.374227),(36.387887,32.379317),(36.373108,32.386422),(36.285258,32.456935),(36.220765,32.494581),(36.188209,32.52228),(36.177357,32.527318),(36.17219,32.525923),(36.160821,32.517215),(36.15586,32.5152),(36.149865,32.51613),(36.13953,32.519541),(36.133226,32.520109),(36.096225,32.515872),(36.081906,32.516265),(36.06987,32.516595),(36.066253,32.517319),(36.066046,32.521608),(36.060465,32.533261),(36.015403,32.591164),(36.005998,32.607907),(36.005275,32.626692),(36.008272,32.643719),(36.003621,32.655088),(35.980263,32.656612),(35.965794,32.654365),(35.955355,32.657439),(35.94657,32.664441),(35.937475,32.674002),(35.941196,32.673536),(35.9444,32.677619),(35.945743,32.684104),(35.944193,32.690771),(35.940369,32.692502),(35.92745,32.692373),(35.922489,32.693768),(35.905229,32.708573),(35.895721,32.713276),(35.788234,32.734411),(35.779035,32.744282),(35.779035,32.744359),(35.779139,32.744462),(35.779139,32.744514),(35.774901,32.747279),(35.769734,32.748054),(35.763842,32.746969)
,(35.75759,32.744347),(35.784203,32.777949),(35.834226,32.827946),(35.841874,32.853577),(35.83805,32.866031),(35.849729,32.895823),(35.866885,32.920782),(35.874017,32.922333),(35.888073,32.944941),(35.864611,32.97773),(35.85903,32.99021),(35.845801,33.085423),(35.848902,33.098678),(35.811488,33.111908),(35.811488,33.126765),(35.822443,33.14157),(35.833399,33.161129),(35.830195,33.189991),(35.807664,33.201721),(35.803633,33.248463),(35.775625,33.264896),(35.768597,33.272699),(35.802083,33.31249),(35.763842,33.334401),(35.785753,33.342875),(35.793505,33.349929),(35.809938,33.360032),(35.812315,33.373365),(35.815415,33.378868),(35.816966,33.395198),(35.822443,33.401373),(35.8211,33.406722),(35.845116,33.418742),(35.870089,33.431242),(35.888486,33.440415),(35.919492,33.462351),(35.935408,33.494262),(35.921869,33.51457),(35.933238,33.527825),(35.956802,33.534207),(35.980263,33.533949),(35.99928,33.541261),(36.019847,33.55263),(36.035247,33.567797),(36.038451,33.586349),(36.029666,33.59764),(36.012923,33.608027),(35.994526,33.615701),(35.980263,33.619008),(35.956906,33.629938),(35.934788,33.6334),(35.921559,33.640299),(35.925383,33.661667),(35.929414,33.665078),(35.942436,33.669212),(35.94719,33.673088),(35.949878,33.6827),(35.946984,33.700838),(35.948431,33.70921),(35.955872,33.718124),(35.974165,33.732154),(35.980263,33.743549),(35.994009,33.760705),(36.027082,33.784296),(36.040518,33.805612),(36.05044,33.816438),(36.06646,33.821994),(36.085063,33.82605),(36.102426,33.832277),(36.13829,33.850622),(36.144904,33.847418),(36.157617,33.83406),(36.165678,33.829797),(36.186556,33.83021),(36.202679,33.838297),(36.216631,33.847754),(36.231824,33.852534),(36.249187,33.849744),(36.300554,33.828686),(36.338278,33.821193),(36.357501,33.823776),(36.369594,33.837006),(36.36763,33.857547),(36.35037,33.866409),(36.311096,33.87199),(36.28164,33.891111),(36.275336,33.897209),(36.268928,33.906769),(36.267171,33.910386),(36.288565,33.946508),(36.302311,33.964388),(36.31802,33.980511),(36.357811,34.009553),(36.391091,34.044719),(36.412175,34.053633),(36.423647,34.052806),(36.445144,34.04614),(36.45889,34.046708),(36.47522,34.053504),(36.480698,34.062754),(36.482662,34.07451),(36.488449,34.088644),(36.507983,34.109185),(36.552735,34.140992),(36.575886,34.173264),(36.604101,34.199102),(36.603554,34.200101),(36.59821,34.209851),(36.574026,34.229746),(36.567514,34.244138),(36.56276,34.262949),(36.570305,34.275868),(36.579917,34.286901),(36.581984,34.299897),(36.576816,34.307674),(36.570305,34.308966),(36.562967,34.308501),(36.555835,34.311085),(36.54519,34.320723),(36.523589,34.345889),(36.518628,34.353641),(36.515941,34.36165),(36.516768,34.369867),(36.519972,34.375061),(36.523589,34.379789),(36.526276,34.386688),(36.52979,34.403095),(36.530617,34.41418),(36.52514,34.422939),(36.509223,34.432421),(36.501885,34.430303),(36.494651,34.434256),(36.480181,34.449139),(36.463231,34.455237),(36.449795,34.464538),(36.439873,34.477457),(36.433466,34.494175),(36.433466,34.49433),(36.41972,34.498335),(36.392331,34.500143),(36.363082,34.499032),(36.343652,34.494175),(36.340965,34.493477),(36.338174,34.493296),(36.335384,34.493477),(36.332697,34.494175),(36.329851,34.498603),(36.319881,34.514122),(36.336624,34.52885),(36.364012,34.541097),(36.383546,34.553809),(36.388094,34.569519),(36.386853,34.584247),(36.388921,34.596236),(36.4037,34.603832),(36.418479,34.600318),(36.429125,34.593187),(36.436256,34.597838),(36.440184,34.62936),(36.415999,34.622901),(36.391194,34.622487),(36.367526,34.629205),(36.347373,34.64414),(36.323808,34.67
9486),(36.308925,34.687548),(36.288151,34.675559),(36.284844,34.667911),(36.284431,34.648326),(36.28071,34.639127),(36.271512,34.630962),(36.26159,34.627242),(36.201232,34.624451),(36.195134,34.626208),(36.183352,34.633494),(36.178287,34.635303),(36.17219,34.634063),(36.160201,34.628068),(36.15586,34.626931),(36.11214,34.629532),(36.087234,34.631014),(36.060775,34.627913),(36.037934,34.62843),(36.01623,34.633649),(35.998763,34.64538),(35.991425,34.648222),(35.98264,34.650238),(35.9699,34.649849),(35.965668,34.668443),(35.947439,34.69953),(35.940929,34.726223),(35.931163,34.740953),(35.928966,34.749823),(35.928966,34.790717),(35.926036,34.796942),(35.908458,34.821763),(35.904552,34.831773),(35.899099,34.852769),(35.866873,34.924221),(35.872895,34.926907),(35.873546,34.931627),(35.879568,34.938056),(35.881358,34.944973),(35.879568,34.95189),(35.873546,34.958319),(35.873546,34.965155),(35.878591,34.96955),(35.878917,34.97016),(35.876638,34.971666),(35.873546,34.978827),(35.881602,34.992865),(35.88738,35.00019),(35.895356,35.006781),(35.900564,35.021918),(35.897309,35.036566),(35.891449,35.049954),(35.887869,35.061347),(35.887869,35.109198),(35.894216,35.12108),(35.941254,35.180569),(35.962413,35.197943),(35.953298,35.224799),(35.915294,35.287909),(35.928966,35.321479),(35.924083,35.338528),(35.916677,35.403306),(35.918956,35.41767),(35.908865,35.425727),(35.860606,35.479071),(35.840099,35.487372),(35.830089,35.493476),(35.825857,35.503323),(35.81837,35.508938),(35.802257,35.506903),(35.786794,35.502183),(35.781261,35.499579),(35.770518,35.504218),(35.771983,35.515448),(35.77711,35.52912),(35.778087,35.541164),(35.771983,35.546332),(35.737071,35.561672),(35.73878,35.566596),(35.740896,35.576483),(35.743337,35.582099),(35.737071,35.582099),(35.723399,35.575914),(35.723399,35.582099),(35.743419,35.587958),(35.763357,35.598293),(35.778494,35.612738),(35.784353,35.630561),(35.782074,35.64175),(35.773204,35.655463),(35.771169,35.667792),(35.776622,35.675116),(35.800059,35.681464),(35.838552,35.742092),(35.840343,35.747463),(35.840099,35.770819),(35.835948,35.789984),(35.825369,35.813463),(35.81186,35.835395),(35.798595,35.849677),(35.846853,35.847235),(35.87086,35.851793),(35.881033,35.866685),(35.885265,35.885972),(35.911305,35.91775),(35.920835,35.915122),(35.940576,35.918377),(35.96073,35.924372),(35.980263,35.926852),(35.992769,35.915277),(35.993079,35.902357),(35.990082,35.889283),(35.992356,35.877346),(36.003931,35.869181),(36.019434,35.866184),(36.05075,35.865409),(36.080309,35.854195),(36.138497,35.819779),(36.155308,35.822232),(36.157617,35.822569),(36.168779,35.852076),(36.174877,35.922666),(36.196891,35.951708),(36.229034,35.961165),(36.253115,35.955636),(36.269031,35.958736),(36.277403,35.99398),(36.277403,35.994031),(36.277506,35.994187),(36.277506,35.994238),(36.29735,36.000749),(36.336727,35.988244),(36.358018,35.99398),(36.355538,36.022402),(36.358638,36.166992),(36.360498,36.181669),(36.366286,36.189213),(36.374451,36.19588),(36.375795,36.20451),(36.361429,36.218152),(36.373004,36.227609),(36.385613,36.215465),(36.398119,36.20823),(36.411865,36.204458),(36.439253,36.201409),(36.445248,36.19991),(36.450726,36.19991),(36.4592,36.203114),(36.463955,36.208799),(36.465298,36.21593),(36.468916,36.22239),(36.480181,36.225542),(36.498578,36.227867),(36.574439,36.218359),(36.593456,36.2181),(36.664253,36.229004),(36.67004,36.237324),(36.66911,36.259855),(36.665493,36.274014),(36.658465,36.290499),(36.648761,36.305719),(36.648646,36.305899),(36.637071,36.316802),(36.620741,36.322022),(36.5877
71,36.324812),(36.577643,36.332874),(36.58219,36.333081),(36.587771,36.342124),(36.594283,36.35711),(36.594489,36.365947),(36.593352,36.373492),(36.590562,36.380364),(36.585704,36.387031),(36.580847,36.389873),(36.565034,36.395609),(36.558006,36.400622),(36.552735,36.408166),(36.533821,36.462478),(36.531237,36.479015),(36.535475,36.494001),(36.565344,36.5325),(36.569995,36.575701),(36.569271,36.620143),(36.582811,36.662414),(36.594696,36.685824),(36.598107,36.705461),(36.598934,36.724788),(36.603274,36.746957),(36.615677,36.766594),(36.632006,36.784319),(36.643272,36.803646),(36.639861,36.828089),(36.649163,36.828554),(36.658568,36.827521),(36.659563,36.827243),(36.843673,36.775844),(36.906822,36.772589),(36.925942,36.768971),(36.945889,36.762667),(36.964699,36.75383),(36.980099,36.742255),(37.010588,36.719517),(37.018443,36.706649),(37.019476,36.685824),(37.010795,36.653733),(37.014515,36.642261),(37.033326,36.628721),(37.053066,36.619936),(37.062368,36.622107),(37.070119,36.631512),(37.085622,36.644173),(37.104432,36.650994),(37.120969,36.649909),(37.137402,36.645878),(37.156109,36.643914),(37.219671,36.657143),(37.241892,36.65859),(37.404466,36.634458),(37.446117,36.634199),(37.593395,36.710525),(37.654063,36.732126),(37.819738,36.760548),(37.980038,36.819666),(38.007943,36.825764),(38.026005,36.835006),(38.122975,36.884623),(38.190051,36.905526),(38.224261,36.908394),(38.28989,36.901702),(38.479956,36.855736),(38.529358,36.833722),(38.635192,36.744039),(38.66413,36.719517),(38.725005,36.693937),(38.795389,36.686651),(38.957963,36.692904),(38.979977,36.698175),(39.03217,36.701172),(39.185856,36.659521),(39.239599,36.661278),(39.441708,36.692373),(39.765252,36.742151),(39.979812,36.806953),(40.079238,36.854857),(40.115205,36.864728),(40.152722,36.87036),(40.190136,36.884416),(40.261409,36.922785),(40.393741,36.994022),(40.413894,37.004022),(40.435185,37.01061),(40.456993,37.014124),(40.479834,37.015158),(40.525722,37.025881),(40.659461,37.085257),(40.708967,37.100476),(40.896449,37.122696),(41.135091,37.084249),(41.179739,37.069625),(41.20134,37.064974),(41.479773,37.075568),(41.896365,37.154283),(42.009249,37.175613),(42.052968,37.196645),(42.128312,37.253283),(42.178541,37.306303),(42.193011,37.313227),(42.211201,37.324906),(42.22257,37.313951),(42.22381,37.302375),(42.222156,37.292402),(42.22319,37.288139),(42.236832,37.286304)] +Chad 
[(23.981306,19.496124),(23.981512,19.263838),(23.981719,19.031553),(23.981926,18.799293),(23.982133,18.567033),(23.982236,18.334799),(23.982443,18.102514),(23.982753,17.870228),(23.982856,17.637943),(23.983063,17.405657),(23.983269,17.173371),(23.983476,16.941138),(23.983683,16.7088),(23.983786,16.476515),(23.983993,16.244229),(23.984303,16.011944),(23.984406,15.779658),(23.984406,15.72194),(23.984406,15.72116),(23.972624,15.691085),(23.945546,15.692222),(23.829067,15.731031),(23.707524,15.748859),(23.592699,15.749014),(23.52707,15.735165),(23.395812,15.688346),(23.320571,15.681318),(23.166782,15.712944),(23.119249,15.707224),(23.094642,15.704262),(23.068493,15.686692),(23.004518,15.611425),(22.92566,15.563935),(22.906746,15.541404),(22.899511,15.510191),(22.906849,15.481408),(22.923593,15.455569),(22.944987,15.433116),(22.966484,15.404823),(22.976406,15.373636),(22.97837,15.340202),(22.965657,15.223904),(22.958009,15.201631),(22.913257,15.121688),(22.905506,15.112567),(22.871606,15.0997),(22.848765,15.087504),(22.829025,15.071639),(22.738901,14.97981),(22.727222,14.959036),(22.721951,14.934671),(22.721021,14.915938),(22.715337,14.899428),(22.695493,14.88147),(22.658906,14.857415),(22.650844,14.841938),(22.650844,14.816151),(22.659009,14.761271),(22.66459,14.743752),(22.67968,14.711816),(22.681437,14.702928),(22.676579,14.688975),(22.666347,14.681741),(22.465429,14.629289),(22.422745,14.609084),(22.382334,14.579111),(22.363523,14.543532),(22.386364,14.506532),(22.417164,14.484104),(22.424915,14.470436),(22.439074,14.362923),(22.442382,14.356928),(22.447343,14.351632),(22.45096,14.345792),(22.449926,14.338377),(22.444862,14.333364),(22.431323,14.327214),(22.426982,14.323003),(22.418094,14.30595),(22.41706,14.298095),(22.419954,14.286235),(22.429049,14.264608),(22.441038,14.249855),(22.457678,14.241147),(22.514109,14.231458),(22.531885,14.220632),(22.540877,14.201176),(22.547698,14.169033),(22.546355,14.139707),(22.531472,14.122627),(22.508217,14.113636),(22.481449,14.108546),(22.458711,14.096117),(22.421608,14.06214),(22.402281,14.049299),(22.243531,13.975427),(22.214695,13.956617),(22.190614,13.932794),(22.180762,13.920555),(22.099353,13.819416),(22.074962,13.780323),(22.073722,13.771357),(22.09708,13.749756),(22.112893,13.729835),(22.116303,13.715185),(22.11434,13.699708),(22.11434,13.677383),(22.132116,13.638652),(22.195885,13.580464),(22.210561,13.541733),(22.211181,13.484191),(22.215625,13.464993),(22.228648,13.441119),(22.263995,13.399183),(22.275984,13.376316),(22.267612,13.334562),(22.232265,13.289035),(22.139971,13.193511),(22.123435,13.182168),(22.016051,13.140233),(21.998378,13.130595),(21.964065,13.098349),(21.935126,13.059152),(21.85296,12.905725),(21.827846,12.831104),(21.811826,12.799969),(21.809449,12.793665),(21.813686,12.782296),(21.840868,12.748939),(21.880142,12.676282),(21.900813,12.656438),(21.935953,12.63954),(21.97719,12.63184),(22.019668,12.631426),(22.058736,12.636697),(22.105865,12.650392),(22.144105,12.671269),(22.176248,12.7015),(22.204877,12.743358),(22.33014,12.661451),(22.43246,12.62383),(22.445689,12.611066),(22.395873,12.496086),(22.372825,12.463116),(22.374892,12.450869),(22.407965,12.399761),(22.481449,12.176674),(22.484033,12.164737),(22.484756,12.152541),(22.483826,12.140345),(22.458091,12.03043),(22.464499,12.032962),(22.470494,12.036114),(22.481449,12.044279),(22.546458,12.064433),(22.585525,12.071357),(22.598858,12.063244),(22.607333,12.077714),(22.612397,12.072804),(22.613534,12.05725),(22.610537,12.039628),(22.592863,11.988933),(22.537053,11.68089),
(22.541704,11.632986),(22.561754,11.586012),(22.592347,11.543663),(22.628107,11.509841),(22.662523,11.493072),(22.743035,11.466097),(22.76877,11.442326),(22.771561,11.433308),(22.771871,11.403181),(22.781792,11.398892),(22.87698,11.40884),(22.900442,11.408245),(22.914808,11.396101),(22.928347,11.32515),(22.953668,11.250942),(22.954536,11.237843),(22.956459,11.2088),(22.952015,11.191049),(22.935168,11.155883),(22.928967,11.137926),(22.92566,11.118134),(22.925143,11.097954),(22.922352,11.087076),(22.906539,11.069894),(22.900235,11.059895),(22.861064,10.919154),(22.805977,10.924502),(22.745102,10.943803),(22.719884,10.964655),(22.682677,10.974137),(22.673169,10.975068),(22.638339,10.974137),(22.62821,10.976127),(22.60909,10.985222),(22.600822,10.987806),(22.57705,10.985119),(22.55576,10.978969),(22.534676,10.98052),(22.511938,11.000828),(22.502843,10.992173),(22.490751,10.993154),(22.476281,10.99796),(22.460468,11.000828),(22.447136,10.99827),(22.437627,10.991552),(22.419231,10.970727),(22.407035,10.963053),(22.391842,10.958505),(22.36469,10.954209),(22.361146,10.953648),(22.339442,10.947731),(22.318772,10.939359),(22.305956,10.932564),(22.287249,10.91595),(22.2765,10.908534),(22.268439,10.908999),(22.253039,10.915304),(22.24012,10.905899),(22.229371,10.891972),(22.200639,10.868666),(22.189787,10.837298),(22.176351,10.816214),(22.148239,10.830761),(22.132426,10.828539),(22.048607,10.836988),(22.029487,10.828746),(22.018325,10.809884),(22.0114,10.789291),(22.004889,10.775519),(22.014604,10.754564),(22.003855,10.743273),(21.943084,10.728261),(21.925617,10.720122),(21.895025,10.699193),(21.872081,10.677592),(21.86836,10.676223),(21.863606,10.667929),(21.852754,10.670151),(21.83291,10.679944),(21.822575,10.678755),(21.803558,10.673665),(21.795393,10.672476),(21.784851,10.669324),(21.778753,10.66227),(21.774205,10.655165),(21.768418,10.651987),(21.753431,10.650721),(21.736482,10.646173),(21.722632,10.636716),(21.716845,10.621007),(21.714674,10.601395),(21.705476,10.571811),(21.703305,10.552406),(21.705476,10.53202),(21.714674,10.493702),(21.716845,10.47732),(21.720462,10.468018),(21.738032,10.44802),(21.744233,10.439106),(21.74444,10.432439),(21.742786,10.425773),(21.743613,10.418978),(21.751054,10.41182),(21.720462,10.360816),(21.716845,10.339809),(21.717775,10.328983),(21.722116,10.314178),(21.723046,10.305987),(21.718808,10.296634),(21.708886,10.293533),(21.697414,10.292422),(21.688939,10.288908),(21.67695,10.269168),(21.668889,10.249195),(21.65628,10.233666),(21.630648,10.227439),(21.625274,10.224493),(21.593441,10.214546),(21.589721,10.216613),(21.579282,10.212272),(21.566673,10.214494),(21.554477,10.218654),(21.544969,10.219997),(21.513239,10.200515),(21.485437,10.164316),(21.433348,10.046055),(21.406889,10.005566),(21.374333,9.973113),(21.339503,9.95991),(21.326067,9.962752),(21.28514,9.978591),(21.27129,9.987247),(21.256924,9.975749),(21.205661,9.916838),(21.198427,9.902988),(21.195429,9.887305),(21.187264,9.885134),(21.131971,9.849426),(21.121739,9.831907),(21.106753,9.79545),(21.105926,9.791884),(21.106753,9.778061),(21.105202,9.77403),(21.101585,9.774237),(21.097037,9.775373),(21.09311,9.77434),(21.080294,9.762609),(21.073163,9.757648),(21.065825,9.754496),(21.062207,9.756201),(21.052906,9.765891),(21.048358,9.768165),(21.042674,9.767079),(21.04009,9.764754),(21.039056,9.76217),(21.037816,9.760671),(21.032339,9.75801),(21.025621,9.753256),(21.019936,9.747933),(21.017352,9.743592),(21.014252,9.724472),(21.000092,9.697394),(20.996888,9.682175),(20.996062,9.648301),(20.989964,9.635615),(20.
975804,9.630964),(20.975804,9.624168),(20.980145,9.619931),(20.981799,9.616778),(20.983143,9.603678),(20.96764,9.610603),(20.962265,9.606934),(20.960508,9.598123),(20.955961,9.589416),(20.934257,9.577065),(20.925058,9.569598),(20.921234,9.558978),(20.912656,9.544328),(20.874209,9.502496),(20.86377,9.493789),(20.841446,9.484409),(20.834314,9.462473),(20.832661,9.437255),(20.82615,9.418083),(20.820258,9.416222),(20.813884,9.418847),(20.81323,9.419116),(20.806719,9.423328),(20.801965,9.42555),(20.798244,9.423379),(20.792353,9.414),(20.788322,9.411881),(20.77654,9.407799),(20.771062,9.398058),(20.766618,9.38656),(20.757937,9.377129),(20.750495,9.384545),(20.740883,9.377129),(20.695822,9.363461),(20.654894,9.342971),(20.667916,9.302017),(20.616653,9.302017),(20.609729,9.302792),(20.592882,9.308218),(20.592262,9.308063),(20.573555,9.309149),(20.572315,9.308218),(20.550921,9.320259),(20.541826,9.321629),(20.538208,9.311939),(20.537278,9.299201),(20.533661,9.289641),(20.526426,9.283621),(20.514231,9.281502),(20.496144,9.270882),(20.502655,9.247421),(20.51299,9.223883),(20.507203,9.213263),(20.481778,9.204659),(20.467618,9.184143),(20.455423,9.159313),(20.435166,9.138126),(20.423073,9.143241),(20.412428,9.142053),(20.40292,9.136989),(20.384419,9.123579),(20.378322,9.120065),(20.359408,9.116421),(20.276829,9.120736),(20.256985,9.116421),(20.234454,9.13456),(20.222982,9.14195),(20.212647,9.144973),(20.197971,9.140503),(20.169549,9.120891),(20.154563,9.116421),(20.137613,9.121822),(20.125727,9.133785),(20.117149,9.145748),(20.109604,9.151174),(20.101026,9.148073),(20.099579,9.141174),(20.100509,9.134146),(20.099372,9.13071),(20.090587,9.131123),(20.076945,9.13673),(20.068676,9.138126),(20.060512,9.134353),(20.031159,9.11022),(20.028369,9.104562),(20.029919,9.098981),(20.030332,9.093865),(20.024338,9.089756),(20.01793,9.091048),(20.014416,9.106345),(20.006871,9.11022),(20.002944,9.108515),(19.999533,9.104303),(19.997156,9.098619),(19.996329,9.092831),(19.993022,9.084589),(19.985477,9.083193),(19.976486,9.084175),(19.969044,9.082883),(19.943413,9.065882),(19.932091,9.063331),(19.928737,9.062575),(19.914474,9.069267),(19.902175,9.0518),(19.889463,9.046374),(19.80554,9.051128),(19.784663,9.048751),(19.758411,9.041982),(19.746836,9.04131),(19.738154,9.039372),(19.713969,9.02914),(19.709008,9.024567),(19.700017,9.020587),(19.656402,9.021259),(19.640072,9.013999),(19.613614,9.031672),(19.583125,9.025342),(19.553772,9.013327),(19.530828,9.013999),(19.52132,9.008831),(19.515842,9.009451),(19.511811,9.012293),(19.506644,9.013999),(19.430162,9.012862),(19.420964,9.017435),(19.38262,9.003198),(19.369391,9.000382),(19.352234,9.002061),(19.333321,9.006557),(19.316268,9.01312),(19.304795,9.020846),(19.295494,9.009503),(19.285778,9.01281),(19.275236,9.021983),(19.263248,9.028261),(19.253429,9.027951),(19.232862,9.022293),(19.192037,9.020174),(19.179635,9.015265),(19.174571,9.003767),(19.168783,9.002216),(19.142118,9.008676),(19.133643,9.007177),(19.126718,9.007177),(19.10057,9.015265),(19.060676,9.00418),(19.021919,8.985215),(18.999491,8.969635),(18.984815,8.956431),(18.97851,8.949455),(18.975927,8.941988),(18.971896,8.938215),(18.952052,8.931988),(18.936446,8.923358),(18.928591,8.921679),(18.922907,8.918216),(18.920736,8.907855),(18.917739,8.898812),(18.910608,8.894239),(18.901409,8.89398),(18.892727,8.897933),(18.86968,8.864111),(18.869783,8.849409),(18.886526,8.835844),(18.902959,8.84481),(18.909677,8.835327),(18.912468,8.818403),(18.917325,8.805148),(18.929108,8.796544),(19.048687,8.745669),(19.057782,8.729701),(1
9.073492,8.720528),(19.103877,8.698049),(19.124135,8.675079),(19.091475,8.652858),(19.081657,8.637226),(19.072261,8.631872),(19.061296,8.625624),(19.020472,8.545578),(18.920529,8.432122),(18.910401,8.412976),(18.901306,8.388636),(18.887457,8.370963),(18.8549,8.339802),(18.813042,8.276421),(18.791235,8.257378),(18.773768,8.248619),(18.715064,8.228672),(18.672586,8.20751),(18.638583,8.177642),(18.618636,8.138652),(18.617912,8.090127),(18.589283,8.047882),(18.508255,8.030674),(18.070452,8.019202),(17.977228,7.997187),(17.897853,7.967473),(17.858785,7.960445),(17.817238,7.962099),(17.679778,7.985198),(17.661795,7.986284),(17.639884,7.985043),(17.62066,7.978739),(17.59999,7.950369),(17.580766,7.940602),(17.559786,7.934711),(17.523819,7.93099),(17.503665,7.926236),(17.487232,7.914453),(17.478344,7.891819),(17.466355,7.884119),(17.419122,7.898227),(17.405997,7.883034),(17.385739,7.870477),(17.250864,7.822831),(17.234637,7.811617),(17.23379,7.810801),(17.220685,7.798181),(17.214484,7.786813),(17.211073,7.768312),(17.204562,7.763817),(17.196397,7.76516),(17.188645,7.764127),(17.170559,7.747539),(17.135522,7.705112),(17.116298,7.68687),(17.101932,7.677724),(17.095525,7.67824),(17.090254,7.683822),(17.079091,7.689919),(17.059351,7.696534),(17.045605,7.6847),(17.041058,7.675502),(17.042091,7.667027),(17.038991,7.662479),(17.008295,7.667647),(16.997546,7.666717),(16.988968,7.660774),(16.98194,7.648733),(16.962716,7.650697),(16.919618,7.644082),(16.880654,7.632869),(16.865461,7.624445),(16.854816,7.611475),(16.847168,7.591579),(16.848098,7.584706),(16.852439,7.580262),(16.855849,7.575404),(16.853886,7.567498),(16.850372,7.565896),(16.839623,7.567446),(16.836729,7.567498),(16.815071,7.549544),(16.812854,7.547706),(16.805516,7.543675),(16.782365,7.541143),(16.768619,7.550238),(16.746709,7.586566),(16.709812,7.627753),(16.686867,7.645788),(16.6632,7.657415),(16.624546,7.66868),(16.61421,7.679584),(16.609869,7.701598),(16.611833,7.713897),(16.616174,7.723871),(16.618138,7.734309),(16.613073,7.74821),(16.605219,7.757202),(16.596434,7.761129),(16.586822,7.763558),(16.576383,7.767796),(16.560157,7.779733),(16.549305,7.794719),(16.54486,7.813013),(16.550235,7.843502),(16.550752,7.8527),(16.550131,7.861692),(16.548685,7.870012),(16.509204,7.858281),(16.491737,7.849238),(16.475717,7.835802),(16.469413,7.825828),(16.457217,7.799628),(16.450913,7.792084),(16.437373,7.788363),(16.427245,7.791464),(16.41815,7.795856),(16.407401,7.796063),(16.392208,7.783609),(16.387041,7.762628),(16.387144,7.694674),(16.38301,7.680669),(16.370814,7.672504),(16.282758,7.660102),(16.26188,7.651885),(16.227257,7.625582),(16.207207,7.613542),(16.185606,7.610751),(16.162662,7.611113),(16.128762,7.599486),(16.0652,7.591217),(16.042979,7.583931),(16.026339,7.574888),(16.012283,7.560315),(15.990372,7.529412),(15.975696,7.515201),(15.94314,7.495306),(15.92526,7.488175),(15.793899,7.457996),(15.758345,7.455567),(15.720001,7.468641),(15.668532,7.516287),(15.624813,7.519749),(15.55195,7.509879),(15.515569,7.512204),(15.481049,7.523263),(15.521977,7.576076),(15.548952,7.630801),(15.562181,7.690178),(15.562905,7.792445),(15.540787,7.796838),(15.509368,7.804021),(15.487871,7.804951),(15.440742,7.839419),(15.406532,7.921585),(15.345347,8.13599),(15.210706,8.421822),(15.183703,8.479148),(15.16851,8.498604),(15.110736,8.555138),(15.068568,8.623867),(15.051928,8.643789),(15.031258,8.658749),(15.005833,8.66663),(14.968523,8.672159),(14.955087,8.676216),(14.951469,8.68283),(14.951469,8.692106),(14.949092,8.704276),(14.940101,8.729649),(14.934416,8.739003)
,(14.927285,8.745075),(14.919947,8.74771),(14.913332,8.752438),(14.908268,8.764841),(14.89938,8.774323),(14.859899,8.803314),(14.846049,8.810988),(14.836748,8.813003),(14.827963,8.813158),(14.819178,8.811608),(14.809876,8.808611),(14.793856,8.813623),(14.571699,8.99099),(14.349542,9.168356),(14.331145,9.200215),(14.321327,9.243158),(14.036486,9.568771),(13.947603,9.637759),(13.945949,9.652642),(14.006721,9.739277),(14.088473,9.809635),(14.119789,9.85201),(14.168365,9.94774),(14.170638,9.957843),(14.171155,9.96761),(14.173532,9.975025),(14.181697,9.978178),(14.196341,9.979147),(14.440493,9.995308),(14.732465,9.923814),(14.772566,9.921747),(14.898139,9.960478),(14.944131,9.958825),(15.002422,9.945053),(15.033015,9.942857),(15.06123,9.949239),(15.071669,9.955957),(15.089445,9.972183),(15.100711,9.978901),(15.109599,9.981537),(15.155488,9.986523),(15.214916,9.984095),(15.382968,9.930196),(15.439398,9.931695),(15.663777,9.98611),(15.681244,9.991278),(15.637939,10.029957),(15.602799,10.04099),(15.536137,10.080523),(15.490144,10.120107),(15.490661,10.124448),(15.476399,10.132742),(15.439192,10.185245),(15.301422,10.311749),(15.29057,10.328983),(15.284266,10.349292),(15.282199,10.374588),(15.278374,10.394276),(15.268763,10.406343),(15.255637,10.41704),(15.241167,10.432904),(15.227215,10.470137),(15.218326,10.487216),(15.198173,10.496492),(15.193315,10.501195),(15.186597,10.505871),(15.176055,10.50799),(15.164686,10.509075),(15.153628,10.511969),(15.144429,10.51631),(15.138228,10.521659),(15.131717,10.540701),(15.132337,10.565403),(15.138435,10.589691),(15.14846,10.607338),(15.149907,10.623125),(15.142259,10.647207),(15.065881,10.793115),(15.06309,10.830761),(15.075596,10.873498),(15.079007,10.898147),(15.072702,10.915769),(15.06247,10.930703),(15.035185,10.994627),(15.02883,11.080218),(15.021233,11.182549),(15.028261,11.244483),(15.033325,11.26027),(15.056373,11.29342),(15.06309,11.309931),(15.062264,11.320034),(15.057819,11.325356),(15.052548,11.329801),(15.049448,11.337268),(15.049448,11.348714),(15.054305,11.364088),(15.06061,11.416126),(15.067018,11.435556),(15.076113,11.453307),(15.122312,11.503227),(15.135644,11.530822),(15.123862,11.563171),(15.09575,11.598156),(15.085208,11.615881),(15.069395,11.660788),(15.066501,11.680632),(15.068258,11.700114),(15.076113,11.721456),(15.079834,11.722593),(15.086345,11.72218),(15.093063,11.722903),(15.097197,11.727657),(15.095957,11.734117),(15.085311,11.744039),(15.083554,11.748173),(15.089135,11.75763),(15.105362,11.772719),(15.110943,11.782899),(15.086345,11.840467),(15.079834,11.851216),(15.06278,11.853489),(15.050998,11.861603),(15.044177,11.877312),(15.042006,11.902375),(15.046967,11.910179),(15.056579,11.918447),(15.063401,11.927283),(15.05937,11.936792),(15.055546,11.942476),(15.050585,11.952863),(15.048311,11.962785),(15.052548,11.967229),(15.081177,11.971673),(15.076836,11.982836),(15.05968,11.997305),(15.049448,12.011929),(15.051205,12.024228),(15.053685,12.031877),(15.053065,12.038026),(15.045727,12.046088),(15.04056,12.055648),(15.04087,12.067585),(15.04459,12.078385),(15.049448,12.084586),(15.030431,12.100916),(15.011827,12.108616),(14.993844,12.106704),(14.976791,12.094147),(14.964802,12.092441),(14.950746,12.103345),(14.928732,12.128615),(14.9031,12.145875),(14.898036,12.152799),(14.894832,12.167372),(14.899896,12.171765),(14.907441,12.17347),(14.911575,12.180136),(14.910128,12.190368),(14.905891,12.195639),(14.90093,12.200032),(14.898036,12.207473),(14.898036,12.219462),(14.9031,12.236257),(14.904857,12.248401),(14.906304,12.317957),(14.90
8268,12.326897),(14.891215,12.402655),(14.877469,12.427201),(14.876849,12.431594),(14.878502,12.443066),(14.877469,12.447045),(14.872404,12.450404),(14.865893,12.452574),(14.861862,12.452006),(14.863826,12.447045),(14.855455,12.454538),(14.85101,12.455158),(14.849563,12.457019),(14.850184,12.468129),(14.852044,12.474692),(14.860105,12.490091),(14.863826,12.495466),(14.830753,12.618249),(14.819178,12.638816),(14.786208,12.63122),(14.768845,12.633235),(14.761403,12.649358),(14.758716,12.658815),(14.752102,12.668944),(14.743317,12.67716),(14.734118,12.680416),(14.728847,12.677212),(14.713654,12.65251),(14.702079,12.668995),(14.716755,12.70181),(14.709934,12.718243),(14.694017,12.723721),(14.664458,12.715969),(14.648129,12.725116),(14.641721,12.730904),(14.624254,12.741885),(14.62074,12.747853),(14.620327,12.754055),(14.618363,12.759196),(14.610612,12.762349),(14.603997,12.760798),(14.584877,12.746355),(14.575265,12.744908),(14.570717,12.750386),(14.569374,12.75961),(14.569374,12.769403),(14.560692,12.766224),(14.554801,12.766896),(14.55108,12.771573),(14.549013,12.780461),(14.549013,12.818211),(14.53103,12.836246),(14.500437,12.859113),(14.490102,12.873608),(14.490619,12.886346),(14.506122,12.93208),(14.507672,12.95244),(14.504675,12.969003),(14.496303,12.983808),(14.482144,12.99882),(14.481041,13.000508),(14.435429,13.070289),(14.418169,13.081141),(14.064908,13.077988),(13.836111,13.391044),(13.607314,13.7041),(13.600182,13.735984),(13.592948,13.767843),(13.585816,13.799727),(13.578582,13.831534),(13.57145,13.863418),(13.564319,13.895225),(13.557188,13.927161),(13.549953,13.958994),(13.542821,13.990853),(13.53569,14.022737),(13.528455,14.054518),(13.521324,14.086428),(13.514193,14.118235),(13.506958,14.150119),(13.499723,14.181978),(13.492695,14.213862),(13.482257,14.259777),(13.468304,14.310678),(13.449184,14.380131),(13.449494,14.439042),(13.482257,14.483587),(13.507785,14.495964),(13.543855,14.510562),(13.584369,14.513818),(13.608052,14.518266),(13.624264,14.521311),(13.657853,14.548725),(13.665605,14.56689),(13.666742,14.585519),(13.660334,14.623734),(13.657026,14.629651),(13.646278,14.639056),(13.644727,14.644249),(13.648345,14.649417),(13.668085,14.662879),(13.700021,14.697734),(13.709995,14.705124),(13.730045,14.709646),(13.749165,14.711248),(13.764358,14.719077),(13.773247,14.742512),(13.772006,14.762873),(13.75795,14.805532),(13.75392,14.825944),(13.75516,14.847209),(13.760534,14.866355),(13.775417,14.897438),(13.808077,14.965573),(13.833915,15.019601),(13.862957,15.059056),(13.892826,15.0997),(13.922592,15.140498),(13.952564,15.181219),(13.98233,15.221992),(14.012199,15.262687),(14.042068,15.303434),(14.071833,15.344181),(14.101702,15.384876),(14.131571,15.425545),(14.16144,15.466292),(14.191206,15.507039),(14.221178,15.547734),(14.250944,15.588455),(14.280709,15.629254),(14.310682,15.669949),(14.340447,15.710722),(14.368973,15.749634),(14.423543,15.806323),(14.482144,15.86725),(14.520798,15.907557),(14.576919,15.965797),(14.633143,16.024243),(14.689263,16.082534),(14.745384,16.140876),(14.801401,16.199219),(14.857418,16.257613),(14.913539,16.315904),(14.969659,16.374247),(15.02578,16.432538),(15.081797,16.490881),(15.137918,16.549275),(15.194039,16.607566),(15.250159,16.665909),(15.30628,16.7242),(15.3624,16.782646),(15.418418,16.840885),(15.452421,16.876232),(15.465547,16.89468),(15.468517,16.90491),(15.471954,16.916746),(15.474125,16.942068),(15.478259,16.987698),(15.48229,17.033277),(15.486217,17.078907),(15.490248,17.124537),(15.494279,17.170116),(15.498309,17.215746),(15.50234
,17.261376),(15.506371,17.307058),(15.510505,17.352637),(15.514536,17.398267),(15.518567,17.443846),(15.522597,17.489528),(15.526628,17.535107),(15.530659,17.580737),(15.53469,17.626367),(15.538617,17.672049),(15.542648,17.717679),(15.546782,17.76331),(15.550813,17.808888),(15.554843,17.85457),(15.558874,17.900149),(15.562905,17.945779),(15.566936,17.991409),(15.567698,18.000039),(15.570966,18.03704),(15.574997,18.082644),(15.579028,18.128249),(15.583162,18.173879),(15.587193,18.219483),(15.59112,18.265088),(15.595151,18.310718),(15.599182,18.356348),(15.603213,18.402004),(15.607243,18.447583),(15.611274,18.493239),(15.615305,18.538818),(15.619439,18.584474),(15.62347,18.630078),(15.6275,18.675709),(15.631531,18.721313),(15.635562,18.766943),(15.639593,18.8126),(15.64352,18.858178),(15.647551,18.903834),(15.651582,18.949387),(15.655716,18.995069),(15.659747,19.040648),(15.663777,19.086304),(15.667808,19.131882),(15.671839,19.177539),(15.67587,19.223169),(15.6799,19.268747),(15.683931,19.314404),(15.687962,19.360008),(15.692096,19.405638),(15.696023,19.451243),(15.700054,19.496873),(15.704085,19.542529),(15.708116,19.588108),(15.712146,19.633764),(15.716177,19.679317),(15.720208,19.724999),(15.724342,19.770577),(15.728373,19.816259),(15.732404,19.861838),(15.736021,19.903541),(15.767234,19.982037),(15.782323,20.00803),(15.822321,20.076993),(15.862318,20.145955),(15.902419,20.214943),(15.94252,20.283957),(15.962984,20.319226),(15.970322,20.336331),(15.968151,20.352816),(15.953992,20.374571),(15.930324,20.399324),(15.872033,20.460199),(15.813846,20.521074),(15.755555,20.582001),(15.697264,20.642875),(15.669462,20.671866),(15.588123,20.732792),(15.570243,20.751913),(15.556084,20.773669),(15.544198,20.79899),(15.536033,20.844052),(15.544301,20.890302),(15.569003,20.928905),(15.60931,20.95066),(15.592774,20.974793),(15.576134,20.999339),(15.559288,21.023782),(15.542544,21.048277),(15.525801,21.072772),(15.509058,21.097266),(15.492212,21.121761),(15.475572,21.146256),(15.458829,21.17075),(15.441982,21.195193),(15.425342,21.219791),(15.408496,21.244234),(15.391753,21.268729),(15.375009,21.293223),(15.358266,21.317718),(15.34142,21.342161),(15.32478,21.366707),(15.301009,21.40133),(15.266592,21.440656),(15.247472,21.453265),(15.20024,21.476468),(15.18453,21.491196),(15.180396,21.507267),(15.180086,21.533467),(15.179776,21.553208),(15.178846,21.605297),(15.177502,21.678988),(15.176055,21.763427),(15.174608,21.847918),(15.173265,21.921557),(15.172438,21.973647),(15.172024,21.993387),(15.16448,22.033695),(15.156315,22.076173),(15.14815,22.118651),(15.136058,22.181335),(15.124069,22.243966),(15.11208,22.306598),(15.099987,22.369282),(15.087998,22.431914),(15.075906,22.494597),(15.064021,22.557177),(15.051928,22.619861),(15.039939,22.682544),(15.02795,22.745176),(15.015858,22.807782),(15.003973,22.87044),(14.991984,22.933097),(14.979909,22.995664),(14.997461,23.003817),(15.055236,23.029965),(15.112907,23.056113),(15.170371,23.082261),(15.228145,23.108358),(15.285816,23.134455),(15.343487,23.160577),(15.401158,23.186751),(15.458829,23.212899),(15.516499,23.239048),(15.574274,23.265196),(15.631945,23.291344),(15.689616,23.317518),(15.747183,23.343641),(15.804854,23.369789),(15.862525,23.395937),(15.920196,23.422034),(15.964637,23.442214),(15.985101,23.44472),(16.076155,23.399632),(16.199765,23.338706),(16.323169,23.277728),(16.446779,23.216724),(16.570285,23.15572),(16.693792,23.094664),(16.817299,23.03366),(16.940909,22.972733),(17.064312,22.911755),(17.187922,22.850751),(17.311429,22.789799),(17.434832,2
2.728795),(17.558442,22.667765),(17.681949,22.606787),(17.805455,22.545757),(17.928962,22.484779),(18.052469,22.423852),(18.175975,22.362822),(18.299585,22.301844),(18.422989,22.240866),(18.546599,22.179836),(18.670105,22.118806),(18.793612,22.05788),(18.917119,21.99685),(19.040729,21.935871),(19.164132,21.874893),(19.185837,21.864177),(19.287639,21.813915),(19.411145,21.752885),(19.534652,21.691855),(19.658262,21.630877),(19.781665,21.569899),(19.905276,21.508869),(20.028782,21.447943),(20.152289,21.386913),(20.275796,21.325935),(20.399406,21.265008),(20.522809,21.203926),(20.646419,21.142897),(20.769926,21.08197),(20.893329,21.02094),(21.016939,20.959962),(21.140342,20.899036),(21.263952,20.838006),(21.387459,20.777027),(21.510966,20.716049),(21.634472,20.654968),(21.758082,20.59399),(21.881486,20.533063),(22.005096,20.472033),(22.128602,20.411055),(22.252109,20.350077),(22.375616,20.289124),(22.499122,20.228095),(22.622629,20.167091),(22.746136,20.106061),(22.869642,20.045108),(22.993149,19.984104),(23.116759,19.923152),(23.240162,19.862096),(23.363772,19.80117),(23.487279,19.740166),(23.610786,19.67911),(23.734292,19.618132),(23.857799,19.557179),(23.981306,19.496124)]
+Togo [(0.487649,10.933236),(0.612603,10.976566),(0.675338,10.988529),(0.77156,10.990364),(0.901474,10.992741),(0.875843,10.931169),(0.872949,10.911299),(0.875326,10.900524),(0.883698,10.880216),(0.885144,10.869389),(0.869538,10.839908),(0.867575,10.829211),(0.866024,10.80637),(0.862924,10.796086),(0.850005,10.77769),(0.795641,10.726478),(0.781275,10.693044),(0.789233,10.602946),(0.78851,10.563852),(0.773833,10.508145),(0.759881,10.405154),(0.760708,10.382158),(0.768769,10.367094),(0.844423,10.317175),(0.939508,10.254517),(0.997106,10.216553),(1.082238,10.16044),(1.225485,10.066053),(1.331009,9.996471),(1.343825,9.962442),(1.343942,9.954756),(1.345065,9.881103),(1.346925,9.771834),(1.348165,9.694164),(1.35602,9.647474),(1.354573,9.635304),(1.352196,9.625357),(1.352506,9.60528),(1.351266,9.595074),(1.345065,9.582388),(1.327288,9.555309),(1.323154,9.542313),(1.326358,9.521952),(1.336486,9.497509),(1.351266,9.482549),(1.368422,9.490843),(1.379895,9.462473),(1.386716,9.361135),(1.401185,9.321267),(1.420146,9.292317),(1.425267,9.284499),(1.505158,9.201455),(1.567273,9.13673),(1.587841,9.100893),(1.595799,9.076811),(1.601173,9.049526),(1.602827,8.924392),(1.604274,8.815225),(1.605514,8.722492),(1.607204,8.588617),(1.607581,8.558755),(1.609028,8.546973),(1.614092,8.535785),(1.63993,8.503255),(1.644168,8.493488),(1.640447,8.475815),(1.607581,8.421864),(1.606031,8.413389),(1.606031,8.396026),(1.60107,8.371635),(1.603945,8.367476),(1.603964,8.367449),(1.609855,8.365485),(1.614919,8.356261),(1.625564,8.270478),(1.623807,8.155886),(1.621327,7.996567),(1.621947,7.838282),(1.622774,7.640052),(1.622855,7.62246),(1.623251,7.536385),(1.623704,7.4381),(1.624324,7.288807),(1.624944,7.143648),(1.625668,6.996783),(1.607478,6.991409),(1.531927,6.991926),(1.551357,6.920871),(1.562623,6.904283),(1.564223,6.903089),(1.579056,6.892035),(1.588047,6.881235),(1.590114,6.867334),(1.58257,6.825347),(1.58319,6.808268),(1.587531,6.791034),(1.595179,6.77057),(1.60107,6.746437),(1.597039,6.73096),(1.589182,6.72046),(1.574301,6.700574),(1.566033,6.678146),(1.572441,6.666907),(1.58536,6.656442),(1.596316,6.636418),(1.596522,6.62864),(1.594765,6.620269),(1.594145,6.611045),(1.597349,6.600813),(1.603344,6.592596),(1.611198,6.58469),(1.61957,6.577998),(1.627425,6.573347),(1.63683,6.572261),(1.646648,6.573192),(1.653883,6.570659),(1.655227,6.559213),(1.657811,6.
552263),(1.67445,6.538594),(1.680755,6.530068),(1.681995,6.520017),(1.679515,6.498571),(1.681685,6.489838),(1.6883,6.484437),(1.709797,6.473404),(1.718375,6.466997),(1.74411,6.425733),(1.76354,6.350647),(1.782351,6.277267),(1.761577,6.276026),(1.633109,6.245692),(1.612439,6.234763),(1.619663,6.213899),(1.61964,6.213894),(1.619314,6.213813),(1.429535,6.181952),(1.271007,6.133734),(1.199555,6.102525),(1.19158,6.101874),(1.185396,6.100491),(1.187968,6.135647),(1.176186,6.151512),(1.093401,6.160917),(1.079655,6.167919),(1.068906,6.180347),(1.05578,6.20063),(1.048132,6.20939),(1.040587,6.213472),(1.033146,6.216547),(1.025188,6.222334),(1.009685,6.244452),(0.983536,6.324602),(0.907262,6.324912),(0.886798,6.33039),(0.873879,6.340131),(0.841323,6.379224),(0.816208,6.395063),(0.766805,6.419351),(0.745721,6.4409),(0.729908,6.465085),(0.71792,6.48883),(0.711718,6.513376),(0.713076,6.533936),(0.713475,6.53999),(0.71978,6.552728),(0.726808,6.562107),(0.727428,6.571435),(0.714612,6.583914),(0.705517,6.585775),(0.677612,6.584896),(0.668207,6.585827),(0.659422,6.601717),(0.645366,6.612026),(0.63224,6.62399),(0.626659,6.644893),(0.635341,6.695381),(0.634307,6.711607),(0.625832,6.727756),(0.611259,6.741528),(0.593793,6.751114),(0.576636,6.754731),(0.570022,6.763516),(0.541083,6.817182),(0.528887,6.826535),(0.519275,6.832246),(0.516071,6.841806),(0.522789,6.862838),(0.543873,6.897926),(0.54625,6.914773),(0.534468,6.935909),(0.506046,6.957819),(0.494574,6.973116),(0.509147,6.977611),(0.523823,6.978697),(0.565991,6.989652),(0.57891,6.996783),(0.59803,7.0312),(0.598754,7.074711),(0.59431,7.119722),(0.59772,7.158892),(0.621801,7.218837),(0.631103,7.265242),(0.644436,7.301933),(0.645676,7.323585),(0.629656,7.389162),(0.623042,7.397896),(0.586248,7.384357),(0.566404,7.380791),(0.548214,7.383271),(0.516692,7.411022),(0.500568,7.45505),(0.495608,7.504298),(0.499432,7.562382),(0.503049,7.574681),(0.50925,7.585378),(0.519379,7.595196),(0.532505,7.601036),(0.548524,7.605377),(0.562063,7.612301),(0.567748,7.625737),(0.567438,7.652919),(0.570538,7.683925),(0.583354,7.704285),(0.612293,7.699428),(0.609502,7.718961),(0.611673,7.771258),(0.602578,7.828412),(0.603405,7.888305),(0.602688,7.893132),(0.592863,7.959309),(0.593689,7.996567),(0.593173,8.015739),(0.583974,8.05305),(0.580254,8.100075),(0.574983,8.124621),(0.579943,8.131701),(0.587798,8.137721),(0.593276,8.150666),(0.591519,8.16449),(0.584594,8.173068),(0.576843,8.180484),(0.572709,8.191),(0.573846,8.203945),(0.579013,8.207821),(0.586661,8.208751),(0.595033,8.21273),(0.628313,8.24247),(0.645056,8.253425),(0.689187,8.272855),(0.705207,8.282751),(0.712752,8.29797),(0.708825,8.322568),(0.705104,8.325979),(0.693115,8.327891),(0.689187,8.331301),(0.688567,8.33603),(0.689394,8.346055),(0.686914,8.360731),(0.687224,8.372203),(0.685053,8.382564),(0.646709,8.414165),(0.63379,8.452018),(0.616324,8.488759),(0.550798,8.519326),(0.524753,8.538033),(0.500568,8.560176),(0.483515,8.579787),(0.443518,8.606866),(0.403727,8.662082),(0.374426,8.724766),(0.368084,8.761481),(0.365874,8.774272),(0.372669,8.783754),(0.385278,8.78608),(0.397112,8.781532),(0.407809,8.759053),(0.4183,8.770603),(0.424966,8.78732),(0.419592,8.791247),(0.430134,8.794736),(0.439409,8.793444),(0.449512,8.790369),(0.462483,8.788302),(0.47349,8.793547),(0.477004,8.80675),(0.478451,8.822331),(0.483515,8.834811),(0.483629,8.834973),(0.505633,8.866333),(0.493127,8.915219),(0.444887,8.996455),(0.444887,8.996558),(0.44468,8.996558),(0.43127,9.02759),(0.441554,9.058182),(0.458478,9.08924),(0.463749,9.12146),(0.463646,9.1370
92),(0.469976,9.14673),(0.49261,9.166754),(0.504703,9.187166),(0.504393,9.203109),(0.49199,9.238895),(0.49292,9.258609),(0.501085,9.268195),(0.512351,9.274706),(0.522479,9.285067),(0.526613,9.297651),(0.528267,9.326951),(0.537569,9.377646),(0.531368,9.401107),(0.513281,9.416791),(0.483515,9.426842),(0.482792,9.427875),(0.482585,9.428857),(0.482792,9.429839),(0.489406,9.441027),(0.49168,9.451285),(0.489923,9.461129),(0.483515,9.470017),(0.430444,9.489189),(0.411659,9.492238),(0.371429,9.490016),(0.34895,9.486037),(0.333783,9.479474),(0.328796,9.471568),(0.325747,9.451336),(0.32182,9.442836),(0.314223,9.436479),(0.26849,9.420641),(0.249886,9.416946),(0.23136,9.418961),(0.226813,9.432345),(0.234977,9.443766),(0.227949,9.459734),(0.228053,9.473971),(0.26973,9.482342),(0.28531,9.488363),(0.2981,9.497096),(0.301924,9.507457),(0.293139,9.518283),(0.279006,9.519007),(0.253503,9.514252),(0.243116,9.518697),(0.23322,9.525595),(0.225908,9.531926),(0.223402,9.534794),(0.223066,9.541176),(0.227743,9.569339),(0.230197,9.575747),(0.28053,9.570166),(0.30456,9.571562),(0.32244,9.583137),(0.33616,9.574197),(0.354608,9.571096),(0.370809,9.575644),(0.377734,9.589416),(0.369155,9.610138),(0.349363,9.61391),(0.308797,9.603678),(0.277636,9.606986),(0.261694,9.627941),(0.260971,9.657164),(0.275285,9.685586),(0.301924,9.657835),(0.318719,9.648611),(0.336651,9.650833),(0.347089,9.663158),(0.34988,9.680651),(0.347606,9.698841),(0.342955,9.713543),(0.336289,9.72349),(0.329132,9.731293),(0.323887,9.742068),(0.32244,9.760671),(0.343886,9.838393),(0.344196,9.849762),(0.339674,9.889837),(0.341715,9.903789),(0.349777,9.918956),(0.362592,9.932676),(0.369569,9.938826),(0.370189,9.947017),(0.363393,9.966783),(0.35143,9.990451),(0.349777,9.997169),(0.35174,10.006419),(0.355125,10.01262),(0.355719,10.018976),(0.349777,10.02882),(0.367295,10.032128),(0.382746,10.029596),(0.393908,10.03249),(0.39457,10.035557),(0.398146,10.052127),(0.395149,10.072358),(0.387087,10.078947),(0.37608,10.081582),(0.363393,10.089669),(0.353601,10.115508),(0.358846,10.18633),(0.357192,10.219997),(0.364427,10.233407),(0.367295,10.249815),(0.372049,10.265137),(0.396596,10.283224),(0.376183,10.299941),(0.366597,10.304488),(0.32275,10.297254),(0.308797,10.297099),(0.317582,10.31733),(0.307066,10.333272),(0.290039,10.348491),(0.278825,10.366629),(0.28022,10.375182),(0.283553,10.385775),(0.283114,10.396808),(0.273244,10.406601),(0.263012,10.408926),(0.253943,10.406084),(0.244563,10.401537),(0.23322,10.398617),(0.223609,10.399961),(0.197073,10.41027),(0.194437,10.407376),(0.19175,10.400839),(0.187513,10.394948),(0.179865,10.394173),(0.176971,10.397687),(0.126018,10.491609),(0.110179,10.508249),(0.057262,10.541115),(0.04902,10.550959),(0.037341,10.57318),(0.029434,10.582533),(0.020339,10.588037),(-0.010925,10.598269),(-0.019968,10.604341),(-0.026221,10.610361),(-0.033352,10.615064),(-0.056452,10.618009),(-0.068363,10.621058),(-0.079344,10.626252),(-0.088129,10.633486),(-0.098336,10.654157),(-0.09769,10.675008),(-0.091564,10.700254),(-0.088129,10.714412),(-0.082807,10.756218),(-0.07521,10.773659),(-0.061103,10.791358),(-0.04069,10.805026),(-0.032061,10.813165),(-0.0302,10.823785),(-0.035316,10.843448),(-0.036091,10.853137),(-0.014439,10.953596),(-0.009581,10.963001),(-0.001571,10.971295),(0.007188,10.976799),(0.01419,10.983672),(0.016567,10.996203),(0.019409,11.031628),(0.016205,11.062582),(0.001116,11.085991),(-0.032267,11.098574),(-0.051077,11.098264),(-0.085029,11.089402),(-0.103348,11.087541),(-0.121745,11.092192),(-0.14208,11.103329),(-0.158668,11.118444),
(-0.166109,11.13498),(-0.115062,11.124658),(-0.063118,11.114155),(0.059588,11.089402),(0.287558,11.043436),(0.374779,11.025808),(0.486616,11.003205),(0.488786,11.001784),(0.49013,10.999511),(0.490647,10.996358),(0.489303,10.991914),(0.48858,10.987754),(0.487133,10.983672),(0.483619,10.979693),(0.499742,10.975688),(0.487649,10.933236)]
+Uganda [(34.108026,3.868938),(34.123736,3.872039),(34.163423,3.886198),(34.182854,3.886095),(34.204971,3.874545),(34.207555,3.860877),(34.196703,3.847389),(34.148954,3.822688),(34.150814,3.81721),(34.166524,3.811319),(34.17903,3.7961),(34.159082,3.783362),(34.152468,3.775636),(34.16518,3.77083),(34.173655,3.771399),(34.190915,3.775843),(34.210552,3.777135),(34.230396,3.782742),(34.240938,3.78362),(34.241558,3.778737),(34.263779,3.750056),(34.278868,3.709723),(34.290444,3.70316),(34.295095,3.707346),(34.299436,3.716932),(34.309874,3.726699),(34.337056,3.734579),(34.35504,3.727371),(34.387492,3.692747),(34.406303,3.682929),(34.425009,3.677193),(34.439686,3.667736),(34.446403,3.646704),(34.443923,3.566295),(34.434518,3.52622),(34.415501,3.497023),(34.406199,3.492294),(34.395967,3.489607),(34.386872,3.485628),(34.381291,3.476869),(34.382118,3.466043),(34.394417,3.444623),(34.398551,3.433642),(34.397001,3.424159),(34.386976,3.399664),(34.383978,3.388347),(34.386562,3.376384),(34.394417,3.365661),(34.403822,3.355404),(34.41116,3.344707),(34.424079,3.304812),(34.434001,3.182029),(34.444853,3.159136),(34.46573,3.145856),(34.512859,3.132368),(34.53384,3.118467),(34.545556,3.097501),(34.545622,3.097383),(34.574664,2.946126),(34.58469,2.928711),(34.616419,2.893416),(34.631405,2.869542),(34.6405,2.860137),(34.654143,2.856519),(34.660344,2.858638),(34.671506,2.867785),(34.67564,2.8698),(34.678121,2.871557),(34.682565,2.87936),(34.685045,2.880962),(34.688869,2.879154),(34.69104,2.875691),(34.692693,2.872229),(34.694657,2.87042),(34.696517,2.867733),(34.724836,2.854142),(34.730107,2.85285),(34.736928,2.844375),(34.740959,2.835539),(34.76132,2.772493),(34.776512,2.685625),(34.81868,2.597982),(34.828086,2.588784),(34.841832,2.58775),(34.849893,2.59514),(34.856508,2.60346),(34.865603,2.604855),(34.875628,2.591212),(34.881312,2.541448),(34.8871,2.522379),(34.914385,2.494371),(34.923584,2.477318),(34.920896,2.454632),(34.906117,2.436907),(34.885963,2.425434),(34.867876,2.411482),(34.859091,2.38678),(34.865603,2.347506),(34.904284,2.254255),(34.922343,2.210719),(34.967509,2.101914),(34.967612,2.082561),(34.958207,2.037965),(34.956657,2.018508),(34.957897,1.997838),(34.962031,1.977813),(34.969162,1.960295),(34.977947,1.949908),(35.001408,1.927997),(35.006473,1.916861),(35.002752,1.906138),(34.984045,1.88247),(34.979704,1.8703),(34.978671,1.675945),(34.972676,1.654241),(34.940947,1.587036),(34.933712,1.575693),(34.92348,1.566107),(34.913558,1.56156),(34.892578,1.55727),(34.882553,1.552232),(34.859505,1.517919),(34.838421,1.437174),(34.778993,1.388547),(34.780747,1.371789),(34.782714,1.352993),(34.800077,1.312324),(34.810516,1.272533),(34.797907,1.231916),(34.766074,1.217498),(34.683805,1.209075),(34.663031,1.196362),(34.628925,1.163599),(34.580452,1.152541),(34.573631,1.134247),(34.571977,1.111716),(34.566883,1.103022),(34.561228,1.093371),(34.560065,1.093149),(34.540351,1.089392),(34.524021,1.098642),(34.507485,1.102518),(34.487021,1.082467),(34.477203,1.064587),(34.468831,1.044227),(34.463663,1.022936),(34.463043,0.978081),(34.457152,0.957617),(34.431211,0.893538),(34.402375,0.856073),(34.388009,0.815817),(34.370852,0.800211),(34.30667,0.768068),(34.298402,0.759283),(34.296252,0.
746746),(34.292304,0.72373),(34.276181,0.680838),(34.2521,0.655),(34.219751,0.638567),(34.179133,0.623891),(34.149574,0.603634),(34.132417,0.572834),(34.131588,0.569433),(34.107509,0.470722),(34.104409,0.462247),(34.097381,0.451808),(34.080121,0.431344),(34.075677,0.422249),(34.07671,0.403852),(34.087976,0.367369),(34.085185,0.347008),(34.076607,0.333779),(34.04064,0.305874),(33.960266,0.198677),(33.951757,0.187328),(33.893569,0.109814),(33.890468,0.090073),(33.921578,-0.01297),(33.95248,-0.115702),(33.953514,-0.154356),(33.935117,-0.313106),(33.911346,-0.519295),(33.894809,-0.662749),(33.89853,-0.799072),(33.904214,-1.002573),(33.822255,-1.002573),(33.639269,-1.002573),(33.631777,-1.002573),(33.456128,-1.002573),(33.426415,-1.002573),(33.27309,-1.002573),(33.089949,-1.002573),(32.906963,-1.002573),(32.847225,-1.002573),(32.723873,-1.002573),(32.722268,-1.002573),(32.540835,-1.002573),(32.357642,-1.002573),(32.174552,-1.002573),(31.991411,-1.002573),(31.986193,-1.002573),(31.808373,-1.002573),(31.625335,-1.002573),(31.442246,-1.002573),(31.280493,-1.002573),(31.259156,-1.002573),(31.076118,-1.002573),(30.892977,-1.002573),(30.828381,-1.002573),(30.812465,-0.994719),(30.777118,-0.98583),(30.739498,-1.005467),(30.701567,-1.017456),(30.687098,-1.025001),(30.651338,-1.062828),(30.638626,-1.07337),(30.614544,-1.065825),(30.592737,-1.063448),(30.542301,-1.068099),(30.523697,-1.07244),(30.511657,-1.07337),(30.506024,-1.070786),(30.500856,-1.065205),(30.493415,-1.060244),(30.481012,-1.059107),(30.472951,-1.065619),(30.471786,-1.066837),(30.460829,-1.063428),(30.445614,-1.058694),(30.432023,-1.060554),(30.418897,-1.066445),(30.403188,-1.070373),(30.386341,-1.068202),(30.369288,-1.063241),(30.352752,-1.060761),(30.337455,-1.066239),(30.329032,-1.080501),(30.322572,-1.121843),(30.317405,-1.137035),(30.311307,-1.1421),(30.294564,-1.149644),(30.29095,-1.152621),(30.287536,-1.155432),(30.284642,-1.161427),(30.28242,-1.175793),(30.280508,-1.182407),(30.269759,-1.200494),(30.256685,-1.217237),(30.212192,-1.259509),(30.19674,-1.268707),(30.189351,-1.270877),(30.181392,-1.271497),(30.173434,-1.272841),(30.165579,-1.277492),(30.158448,-1.291135),(30.15235,-1.329892),(30.147183,-1.345085),(30.136331,-1.355213),(30.095506,-1.37113),(30.065984,-1.386945),(30.06078,-1.389733),(30.047757,-1.403169),(30.038662,-1.424977),(30.028327,-1.427147),(29.960424,-1.464767),(29.938462,-1.472932),(29.917429,-1.475206),(29.897896,-1.469625),(29.880739,-1.453605),(29.871024,-1.432418),(29.868647,-1.391283),(29.864203,-1.370303),(29.836091,-1.329478),(29.825135,-1.323897),(29.825024,-1.323881),(29.816143,-1.322554),(29.807462,-1.325138),(29.798212,-1.330925),(29.789168,-1.341674),(29.783174,-1.361414),(29.774906,-1.366272),(29.767981,-1.363792),(29.74669,-1.350872),(29.734805,-1.348185),(29.710517,-1.352526),(29.693774,-1.361208),(29.678322,-1.37237),(29.657703,-1.383945),(29.6391,-1.38901),(29.618119,-1.39056),(29.577915,-1.38839),(29.58732,-1.329685),(29.587113,-1.310565),(29.583186,-1.299299),(29.571507,-1.279249),(29.57099,-1.268604),(29.580447,-1.243282),(29.581429,-1.234807),(29.575538,-1.213206),(29.565254,-1.19791),(29.557038,-1.181787),(29.556934,-1.157706),(29.56913,-1.095901),(29.57006,-1.077918),(29.565926,-1.058591),(29.551198,-1.020143),(29.54846,-1.002573),(29.551353,-0.990584),(29.551518,-0.990305),(29.556521,-0.981799),(29.560345,-0.972084),(29.559157,-0.957305),(29.555281,-0.938495),(29.554661,-0.928573),(29.556004,-0.919478),(29.567166,-0.901908),(29.596725,-0.891882),(29.607991,-0.878447),(29.610781,-0.863977
),(29.613365,-0.803826),(29.611091,-0.782742),(29.602203,-0.743778),(29.60303,-0.7229),(29.615536,-0.644146),(29.618843,-0.638978),(29.624424,-0.634844),(29.629281,-0.62978),(29.630573,-0.622028),(29.628248,-0.616034),(29.620496,-0.605388),(29.618843,-0.599394),(29.622874,-0.588438),(29.632175,-0.585751),(29.642821,-0.585028),(29.650882,-0.57955),(29.653053,-0.565597),(29.649745,-0.504309),(29.645094,-0.48891),(29.634615,-0.466968),(29.631865,-0.461211),(29.629385,-0.442401),(29.650981,-0.316455),(29.653983,-0.298947),(29.670569,-0.200867),(29.676617,-0.165105),(29.694032,-0.063096),(29.709018,-0.026302),(29.714341,-0.007492),(29.713462,0.011628),(29.701628,0.055243),(29.703075,0.072503),(29.711809,0.099582),(29.755785,0.16087),(29.761263,0.172135),(29.773045,0.167484),(29.780383,0.16118),(29.787205,0.158493),(29.797127,0.164797),(29.800641,0.172445),(29.832628,0.336983),(29.839605,0.358481),(29.851077,0.377187),(29.903909,0.438229),(29.922907,0.46018),(29.926083,0.467073),(29.940477,0.498317),(29.937996,0.537281),(29.919496,0.618103),(29.920013,0.63867),(29.932312,0.723161),(29.926834,0.774889),(29.928281,0.785018),(29.947298,0.824602),(29.960217,0.832095),(29.982194,0.849017),(29.996391,0.859949),(30.038352,0.878914),(30.145116,0.90315),(30.154831,0.90868),(30.165683,0.921444),(30.18377,0.955137),(30.18687,0.958754),(30.191521,0.974877),(30.214052,0.998545),(30.220977,1.017097),(30.215602,1.057766),(30.215292,1.077093),(30.228005,1.08903),(30.231519,1.097764),(30.234102,1.108099),(30.236169,1.129493),(30.238857,1.136004),(30.269346,1.167268),(30.277976,1.171609),(30.286502,1.174296),(30.295701,1.172643),(30.306553,1.163909),(30.323813,1.155848),(30.33606,1.16887),(30.348204,1.189024),(30.364947,1.202047),(30.376626,1.20308),(30.39926,1.2006),(30.412696,1.202047),(30.43161,1.207008),(30.445562,1.212795),(30.458275,1.221684),(30.478274,1.238634),(30.553704,1.335632),(30.597284,1.391673),(30.681724,1.500349),(30.817013,1.609515),(30.95442,1.720671),(31.025837,1.778239),(31.096018,1.866363),(31.119113,1.895363),(31.183295,1.976211),(31.242826,2.051168),(31.271455,2.102999),(31.280096,2.151441),(31.280447,2.15341),(31.27874,2.156021),(31.267476,2.173253),(31.21089,2.205345),(31.190116,2.221519),(31.182468,2.238728),(31.179058,2.25976),(31.177559,2.30291),(31.129242,2.284668),(31.112602,2.282084),(31.099063,2.282704),(31.055241,2.290249),(31.040668,2.297897),(31.035501,2.30694),(31.038808,2.310971),(31.044389,2.313917),(31.045939,2.319653),(31.043562,2.327043),(31.041288,2.331228),(30.984858,2.394635),(30.968011,2.405436),(30.930907,2.405591),(30.914474,2.378151),(30.900573,2.345956),(30.87179,2.332055),(30.854943,2.339703),(30.83634,2.356343),(30.819803,2.37598),(30.809675,2.392362),(30.806987,2.406986),(30.807142,2.422179),(30.80492,2.434374),(30.794998,2.440111),(30.724925,2.440782),(30.710559,2.445123),(30.707665,2.46228),(30.71676,2.483105),(30.729369,2.503466),(30.737017,2.519537),(30.737844,2.537469),(30.73402,2.574469),(30.73526,2.593073),(30.739395,2.603305),(30.758928,2.633794),(30.761254,2.641597),(30.762013,2.645781),(30.764303,2.658392),(30.797686,2.74836),(30.798926,2.753528),(30.799133,2.76345),(30.801613,2.769083),(30.80585,2.771873),(30.817943,2.774199),(30.82187,2.776162),(30.828691,2.786084),(30.853393,2.853367),(30.85484,2.89321),(30.843988,2.932794),(30.82094,2.973153),(30.803628,2.989069),(30.757068,3.02147),(30.745079,3.036302),(30.743839,3.055474),(30.74787,3.076713),(30.763416,3.123437),(30.804197,3.246005),(30.822129,3.281403),(30.825746,3.283677),(30.837786,3.286416),
(30.842437,3.288741),(30.845641,3.293909),(30.846882,3.304399),(30.84869,3.309256),(30.868482,3.343337),(30.897318,3.375015),(30.904933,3.386035),(30.91003,3.393412),(30.916335,3.414806),(30.914474,3.426484),(30.904449,3.447465),(30.902899,3.458911),(30.909617,3.487178),(30.909307,3.496144),(30.896388,3.519967),(30.880161,3.514412),(30.861248,3.498211),(30.839543,3.490202),(30.843781,3.505911),(30.865692,3.548958),(30.931734,3.645102),(30.936282,3.656858),(30.939382,3.668408),(30.944447,3.679286),(30.955505,3.689001),(30.965996,3.692618),(30.985788,3.692127),(30.995968,3.693522),(31.009559,3.699775),(31.040358,3.724218),(31.04966,3.727991),(31.068677,3.731737),(31.077668,3.735303),(31.104408,3.756175),(31.141489,3.785119),(31.167585,3.792405),(31.214818,3.792354),(31.255745,3.786669),(31.29502,3.774189),(31.377702,3.729308),(31.505446,3.659855),(31.523739,3.656083),(31.53452,3.665563),(31.535522,3.666444),(31.547201,3.680991),(31.564771,3.689802),(31.668537,3.70502),(31.68683,3.712824),(31.696132,3.721273),(31.775714,3.810699),(31.777884,3.816435),(31.780985,3.815815),(31.801107,3.806472),(31.80703,3.803722),(31.830697,3.783879),(31.901908,3.704297),(31.916274,3.680267),(31.920046,3.661302),(31.923095,3.615284),(31.93002,3.59836),(31.943662,3.591255),(32.022236,3.58642),(32.030168,3.585932),(32.041537,3.57986),(32.054226,3.559572),(32.060864,3.548958),(32.076161,3.53317),(32.093007,3.524463),(32.155846,3.511776),(32.16799,3.512242),(32.174552,3.520897),(32.175929,3.527234),(32.178997,3.541361),(32.179203,3.556864),(32.174863,3.59265),(32.175948,3.605595),(32.187782,3.61916),(32.371801,3.731065),(32.415571,3.741297),(32.599281,3.756283),(32.756429,3.769022),(32.840352,3.794291),(32.9189,3.83416),(32.979568,3.879196),(32.997241,3.885526),(33.01724,3.87718),(33.143486,3.774086),(33.164466,3.763053),(33.19542,3.757059),(33.286526,3.752537),(33.447343,3.744372),(33.490648,3.749746),(33.527716,3.771431),(33.532609,3.774293),(33.606196,3.848087),(33.701901,3.944076),(33.813677,4.056033),(33.896049,4.138353),(33.977078,4.219692),(34.006017,4.205713),(34.028548,4.188014),(34.041054,4.164812),(34.03971,4.134038),(34.04095,4.120421),(34.049735,4.109466),(34.060897,4.09926),(34.069269,4.08802),(34.072369,4.076419),(34.072679,4.064663),(34.069786,4.041357),(34.065858,4.02743),(34.061517,4.017921),(34.061104,4.007767),(34.068959,3.991876),(34.080844,3.980792),(34.09552,3.970999),(34.107096,3.959501),(34.109576,3.943352),(34.098931,3.917721),(34.086219,3.894673),(34.084772,3.877336),(34.108026,3.868938)]
+Uruguay 
[(-56.906367,-30.108549),(-56.869211,-30.102968),(-56.831281,-30.102037),(-56.795159,-30.123845),(-56.776607,-30.150923),(-56.766995,-30.161362),(-56.704777,-30.198362),(-56.673745,-30.211281),(-56.656821,-30.221307),(-56.641783,-30.233916),(-56.632585,-30.247455),(-56.629278,-30.267712),(-56.63243,-30.278151),(-56.631345,-30.284662),(-56.615403,-30.293344),(-56.605196,-30.295617),(-56.595611,-30.295411),(-56.585818,-30.296548),(-56.575017,-30.302439),(-56.569617,-30.30802),(-56.560781,-30.320525),(-56.523754,-30.357319),(-56.5113,-30.366001),(-56.494557,-30.379333),(-56.441279,-30.408789),(-56.424639,-30.423568),(-56.400454,-30.458915),(-56.385985,-30.475761),(-56.372601,-30.48589),(-56.326531,-30.508318),(-56.267956,-30.551106),(-56.23509,-30.565368),(-56.215091,-30.581905),(-56.183517,-30.614564),(-56.177832,-30.62552),(-56.175171,-30.636062),(-56.170804,-30.645777),(-56.159642,-30.653942),(-56.140315,-30.664794),(-56.131685,-30.671925),(-56.125045,-30.680297),(-56.091894,-30.736107),(-56.076857,-30.752334),(-56.011357,-30.798222),(-55.995311,-30.825818),(-55.988955,-30.85579),(-55.993864,-30.885452),(-56.011357,-30.912634),(-56.015517,-30.934338),(-56.016344,-30.999864),(-56.02239,-31.045546),(-56.022183,-31.067147),(-56.017018,-31.074296),(-56.011357,-31.082133),(-56.009874,-31.081945),(-55.985157,-31.078825),(-55.919347,-31.082443),(-55.888186,-31.076965),(-55.86426,-31.076758),(-55.854596,-31.074588),(-55.846586,-31.069317),(-55.842142,-31.063943),(-55.838396,-31.058155),(-55.832479,-31.051747),(-55.778141,-31.020431),(-55.763594,-31.008442),(-55.755507,-30.992939),(-55.744887,-30.958626),(-55.731891,-30.945397),(-55.712745,-30.943433),(-55.688302,-30.947774),(-55.665822,-30.949324),(-55.652748,-30.938989),(-55.651069,-30.919972),(-55.654583,-30.878734),(-55.649673,-30.860958),(-55.634041,-30.847625),(-55.612906,-30.843388),(-55.591873,-30.848348),(-55.563865,-30.87646),(-55.527484,-30.894547),(-55.51131,-30.906123),(-55.489089,-30.929687),(-55.44258,-30.965551),(-55.368579,-31.037381),(-55.353852,-31.056294),(-55.348426,-31.072728),(-55.344963,-31.108488),(-55.338039,-31.125644),(-55.326928,-31.134946),(-55.2937,-31.153653),(-55.282228,-31.169052),(-55.268741,-31.2106),(-55.259981,-31.228687),(-55.244349,-31.244603),(-55.227554,-31.253492),(-55.18828,-31.266514),(-55.169625,-31.276436),(-55.122238,-31.31354),(-55.102446,-31.324288),(-55.087253,-31.326769),(-55.074256,-31.320568),(-55.061105,-31.304858),(-55.042346,-31.272612),(-55.029375,-31.268788),(-55.011495,-31.287805),(-54.995346,-31.311783),(-54.969637,-31.340825),(-54.940802,-31.36563),(-54.900288,-31.382063),(-54.887808,-31.393225),(-54.876155,-31.406351),(-54.863365,-31.417616),(-54.849903,-31.425057),(-54.819776,-31.435289),(-54.654127,-31.455753),(-54.619685,-31.45565),(-54.604543,-31.459784),(-54.591521,-31.471049),(-54.573848,-31.502469),(-54.562375,-31.515905),(-54.509665,-31.552285),(-54.495041,-31.565617),(-54.483155,-31.583911),(-54.47897,-31.601481),(-54.477368,-31.642098),(-54.467221,-31.664237),(-54.463725,-31.671864),(-54.436492,-31.694292),(-54.403677,-31.714652),(-54.373498,-31.73894),(-54.342957,-31.769326),(-54.274228,-31.823379),(-54.218779,-31.856659),(-54.169583,-31.895933),(-54.146277,-31.909886),(-54.135063,-31.908956),(-54.125503,-31.898517),(-54.107313,-31.883944),(-54.088451,-31.878156),(-54.068814,-31.879913),(-54.049745,-31.887148),(-54.03264,-31.898104),(-54.026646,-31.904821),(-54.017706,-31.919601),(-54.008714,-31.926526),(-53.998896,-31.92973),(-53.965978,-31.93252),(-53.948769,-31.936964),(-53
.934352,-31.942545),(-53.905826,-31.959288),(-53.894148,-31.97014),(-53.889755,-31.980372),(-53.887275,-31.990501),(-53.881435,-32.00125),(-53.8711,-32.011482),(-53.858077,-32.021197),(-53.83043,-32.036441),(-53.771829,-32.047087),(-53.757308,-32.055045),(-53.751366,-32.068791),(-53.750022,-32.104034),(-53.721445,-32.162428),(-53.670595,-32.226404),(-53.658555,-32.254309),(-53.651837,-32.288312),(-53.648684,-32.338645),(-53.643879,-32.355595),(-53.634783,-32.368721),(-53.609307,-32.387841),(-53.59892,-32.399933),(-53.58197,-32.425772),(-53.5613,-32.449543),(-53.537632,-32.470213),(-53.474638,-32.508247),(-53.415624,-32.564161),(-53.35909,-32.580284),(-53.316043,-32.602712),(-53.299093,-32.607156),(-53.266899,-32.607363),(-53.233723,-32.624622),(-53.201477,-32.637232),(-53.186129,-32.64674),(-53.119776,-32.707408),(-53.110836,-32.722394),(-53.11404,-32.740584),(-53.126856,-32.754847),(-53.163081,-32.778722),(-53.182836,-32.799904),(-53.203854,-32.82244),(-53.270826,-32.863781),(-53.298835,-32.889102),(-53.307,-32.907189),(-53.309584,-32.944396),(-53.316302,-32.96269),(-53.327309,-32.973645),(-53.44911,-33.042065),(-53.48332,-33.067283),(-53.511535,-33.099219),(-53.511587,-33.099322),(-53.511639,-33.099426),(-53.520662,-33.124979),(-53.536857,-33.170842),(-53.536133,-33.244636),(-53.514016,-33.394911),(-53.536805,-33.559863),(-53.539647,-33.649263),(-53.511535,-33.690294),(-53.491071,-33.690294),(-53.473191,-33.6874),(-53.456707,-33.687503),(-53.440428,-33.697115),(-53.430817,-33.713445),(-53.423634,-33.731015),(-53.411283,-33.74228),(-53.379095,-33.740676),(-53.396352,-33.750584),(-53.414784,-33.764418),(-53.450917,-33.801528),(-53.473053,-33.834568),(-53.485585,-33.869317),(-53.505523,-33.952325),(-53.524648,-34.002374),(-53.529937,-34.04811),(-53.539459,-34.062433),(-53.574452,-34.082696),(-53.591217,-34.095473),(-53.639963,-34.147638),(-53.693593,-34.181736),(-53.711008,-34.198826),(-53.746653,-34.25742),(-53.755971,-34.267673),(-53.75829,-34.278497),(-53.779897,-34.336033),(-53.763661,-34.372491),(-53.764882,-34.390395),(-53.805776,-34.400974),(-53.898305,-34.443943),(-53.972239,-34.487237),(-54.124623,-34.610447),(-54.13679,-34.623956),(-54.14094,-34.641046),(-54.140492,-34.656427),(-54.145904,-34.667576),(-54.236806,-34.684503),(-54.259674,-34.685968),(-54.255605,-34.677504),(-54.249501,-34.670994),(-54.23233,-34.658136),(-54.239329,-34.65309),(-54.245676,-34.646905),(-54.250478,-34.639337),(-54.252838,-34.630792),(-54.25121,-34.618341),(-54.241078,-34.594008),(-54.238596,-34.589776),(-54.253285,-34.577569),(-54.280344,-34.567966),(-54.308949,-34.562188),(-54.328603,-34.561782),(-54.317372,-34.568617),(-54.280141,-34.582289),(-54.288157,-34.594822),(-54.304433,-34.608575),(-54.32311,-34.619399),(-54.33849,-34.623956),(-54.344635,-34.630629),(-54.344797,-34.64422),(-54.337392,-34.654718),(-54.321156,-34.65252),(-54.322581,-34.644708),(-54.307688,-34.647393),(-54.289215,-34.660252),(-54.280141,-34.682306),(-54.288075,-34.695001),(-54.306467,-34.707696),(-54.530792,-34.806899),(-54.542063,-34.811782),(-54.604156,-34.82643),(-54.636464,-34.84881),(-54.65689,-34.856622),(-54.699452,-34.867446),(-54.872711,-34.936212),(-54.893707,-34.939223),(-54.913726,-34.945245),(-54.93932,-34.969903),(-54.958608,-34.973403),(-54.953969,-34.963311),(-54.951812,-34.956964),(-54.951772,-34.952895),(-54.962229,-34.93963),(-54.975738,-34.928481),(-54.99356,-34.920994),(-55.03307,-34.916762),(-55.035471,-34.912856),(-55.034535,-34.906508),(-55.041127,-34.897638),(-55.051259,-34.891371),(-55.060374,-34.887791),
(-55.082143,-34.883966),(-55.129791,-34.885919),(-55.22175,-34.903741),(-55.260243,-34.897638),(-55.286488,-34.876235),(-55.338287,-34.81878),(-55.376943,-34.801446),(-55.401926,-34.799981),(-55.468204,-34.795402),(-55.509636,-34.799065),(-55.561173,-34.787834),(-55.601794,-34.772415),(-55.646731,-34.7811),(-55.693448,-34.764741),(-55.756503,-34.782666),(-55.792051,-34.774134),(-55.858786,-34.793061),(-55.889101,-34.803475),(-55.954432,-34.839402),(-56.004052,-34.869102),(-56.069634,-34.901003),(-56.105198,-34.898416),(-56.135633,-34.912829),(-56.149892,-34.920343),(-56.156088,-34.941188),(-56.17729,-34.916325),(-56.19451,-34.913509),(-56.213002,-34.907693),(-56.204661,-34.898696),(-56.203455,-34.887503),(-56.214589,-34.879588),(-56.230551,-34.87775),(-56.236508,-34.892884),(-56.256337,-34.906183),(-56.265824,-34.906266),(-56.287976,-34.903489),(-56.311269,-34.906183),(-56.342763,-34.883966),(-56.402943,-34.85475),(-56.418446,-34.843032),(-56.420766,-34.82822),(-56.403751,-34.816051),(-56.386848,-34.80187),(-56.361282,-34.796636),(-56.37612,-34.782729),(-56.395738,-34.77588),(-56.448432,-34.758272),(-56.485088,-34.75355),(-56.520444,-34.754821),(-56.5419,-34.767022),(-56.558013,-34.765069),(-56.573598,-34.758966),(-56.610951,-34.739435),(-56.628529,-34.732599),(-56.646362,-34.722642),(-56.713544,-34.707037),(-56.795289,-34.698492),(-56.873891,-34.67018),(-56.890696,-34.660089),(-56.904164,-34.635512),(-56.919749,-34.623224),(-56.95287,-34.603448),(-56.976577,-34.581046),(-56.997417,-34.564069),(-57.015788,-34.552091),(-57.042717,-34.538125),(-57.057425,-34.527127),(-57.055898,-34.513442),(-57.062408,-34.506524),(-57.077016,-34.498142),(-57.08316,-34.493585),(-57.101959,-34.475844),(-57.120025,-34.462986),(-57.151964,-34.450932),(-57.203198,-34.442073),(-57.292158,-34.440333),(-57.346942,-34.443459),(-57.3652,-34.43044),(-57.411428,-34.431506),(-57.436974,-34.444576),(-57.44913,-34.431538),(-57.474663,-34.429541),(-57.502635,-34.440582),(-57.537914,-34.453614),(-57.569475,-34.426504),(-57.594997,-34.425473),(-57.624177,-34.430452),(-57.664324,-34.445438),(-57.7203,-34.463373),(-57.738535,-34.461318),(-57.764115,-34.474295),(-57.793302,-34.472197),(-57.82251,-34.475109),(-57.855354,-34.470967),(-57.845585,-34.462979),(-57.854048,-34.447882),(-57.882019,-34.440724),(-57.90019,-34.416525),(-57.8976,-34.376339),(-57.92276,-34.353692),(-57.942738,-34.326267),(-58.016916,-34.253106),(-58.049794,-34.240004),(-58.061391,-34.224705),(-58.076975,-34.19199),(-58.094594,-34.17783),(-58.119374,-34.169203),(-58.145904,-34.165134),(-58.169097,-34.163995),(-58.196197,-34.159438),(-58.20873,-34.147638),(-58.220326,-34.11004),(-58.231842,-34.0888),(-58.316518,-33.986505),(-58.334828,-33.973321),(-58.372426,-33.95436),(-58.388824,-33.942071),(-58.402699,-33.922784),(-58.410024,-33.902276),(-58.434641,-33.784926),(-58.439361,-33.6985),(-58.425771,-33.612888),(-58.429026,-33.592055),(-58.435048,-33.572361),(-58.438344,-33.551446),(-58.433217,-33.527765),(-58.419586,-33.508233),(-58.385365,-33.47031),(-58.37857,-33.448663),(-58.384877,-33.429132),(-58.412587,-33.409601),(-58.418935,-33.389907),(-58.412587,-33.371677),(-58.397206,-33.355645),(-58.361195,-33.325779),(-58.350738,-33.305759),(-58.349233,-33.28281),(-58.366038,-33.196059),(-58.366933,-33.153741),(-58.351389,-33.121677),(-58.310292,-33.108819),(-58.217356,-33.113539),(-58.174428,-33.110447),(-58.137766,-33.095147),(-58.100168,-33.065606),(-58.08373,-33.04754),(-58.076975,-33.030694),(-58.075307,-33.007501),(-58.070668,-32.989516),(-58.055816,-32.951837
),(-58.048492,-32.911716),(-58.057118,-32.881768),(-58.077219,-32.857843),(-58.120473,-32.819268),(-58.130767,-32.802667),(-58.136138,-32.784356),(-58.137766,-32.763442),(-58.135854,-32.718357),(-58.137685,-32.695571),(-58.145172,-32.678155),(-58.137766,-32.670668),(-58.14623,-32.644952),(-58.157216,-32.575128),(-58.169097,-32.560805),(-58.177113,-32.545017),(-58.197418,-32.469822),(-58.200103,-32.447198),(-58.200112,-32.44713),(-58.199724,-32.444789),(-58.189958,-32.433936),(-58.164946,-32.389495),(-58.101591,-32.31136),(-58.096527,-32.280974),(-58.106914,-32.251829),(-58.17523,-32.174107),(-58.186547,-32.15292),(-58.180863,-32.132456),(-58.158797,-32.101554),(-58.148306,-32.0629),(-58.145309,-32.01789),(-58.14903,-31.975205),(-58.158797,-31.943889),(-58.190319,-31.913296),(-58.202618,-31.893143),(-58.196004,-31.872575),(-58.168615,-31.846014),(-58.152854,-31.835988),(-58.13084,-31.827824),(-58.083453,-31.819762),(-58.059268,-31.811494),(-58.048881,-31.797128),(-57.988626,-31.642822),(-57.979583,-31.598794),(-57.986818,-31.554145),(-58.013328,-31.52562),(-58.038978,-31.508637),(-58.050793,-31.500815),(-58.075236,-31.475184),(-58.062575,-31.444281),(-58.043558,-31.430639),(-58.005938,-31.411001),(-57.990228,-31.399323),(-57.980513,-31.380512),(-57.975552,-31.33514),(-57.966354,-31.314573),(-57.959481,-31.307959),(-57.942996,-31.296797),(-57.935606,-31.290285),(-57.914832,-31.252458),(-57.905117,-31.240986),(-57.905014,-31.215044),(-57.911732,-31.170603),(-57.899019,-31.126471),(-57.87375,-31.093088),(-57.855249,-31.058982),(-57.863311,-31.012266),(-57.905479,-30.958833),(-57.911732,-30.947361),(-57.90398,-30.929687),(-57.885325,-30.918835),(-57.842795,-30.909223),(-57.819644,-30.9116),(-57.807242,-30.90757),(-57.801868,-30.89248),(-57.796132,-30.868399),(-57.794995,-30.85796),(-57.796028,-30.816206),(-57.799397,-30.791461),(-57.801868,-30.773314),(-57.808659,-30.747331),(-57.817887,-30.712026),(-57.826207,-30.689702),(-57.848531,-30.647947),(-57.863311,-30.62862),(-57.88579,-30.589863),(-57.889769,-30.550796),(-57.877109,-30.514829),(-57.849617,-30.48527),(-57.652729,-30.329104),(-57.631077,-30.297064),(-57.623739,-30.2581),(-57.63795,-30.215105),(-57.642446,-30.193091),(-57.633971,-30.183583),(-57.611698,-30.182963),(-57.586842,-30.204047),(-57.567463,-30.256343),(-57.535992,-30.27443),(-57.510981,-30.276704),(-57.467211,-30.269883),(-57.44313,-30.269469),(-57.429797,-30.275464),(-57.414811,-30.295721),(-57.399257,-30.299131),(-57.389386,-30.294894),(-57.366597,-30.277221),(-57.353833,-30.27195),(-57.334299,-30.272673),(-57.321742,-30.279804),(-57.310993,-30.288279),(-57.296472,-30.293344),(-57.288617,-30.29076),(-57.276215,-30.277737),(-57.270376,-30.275154),(-57.262469,-30.278668),(-57.250067,-30.29045),(-57.245054,-30.293344),(-57.212601,-30.287349),(-57.183714,-30.267505),(-57.163819,-30.23929),(-57.158341,-30.207871),(-57.150848,-30.182343),(-57.129557,-30.149993),(-57.102272,-30.121158),(-57.077106,-30.106068),(-57.067235,-30.106792),(-57.048529,-30.114543),(-57.037883,-30.114647),(-57.018298,-30.105758),(-57.01127,-30.103484),(-56.97458,-30.09749),(-56.954529,-30.09687),(-56.906367,-30.108549)]
+Vatican [(12.453137,41.902752),(12.452714,41.903016),(12.452767,41.903439),(12.453031,41.903915),(12.453983,41.903862),(12.454035,41.902752),(12.453137,41.902752)]
+Akrotiri Sovereign Base Area 
[(32.840809,34.699467),(32.845992,34.699467),(32.853766,34.699467),(32.85969,34.700578),(32.86043,34.687621),(32.865613,34.678735),(32.884504,34.666607),(32.913667,34.66028),(32.917246,34.667136),(32.929752,34.662485),(32.936056,34.657731),(32.942258,34.667136),(32.954712,34.677316),(32.987526,34.673389),(32.9898,34.646103),(32.970266,34.646103),(32.965615,34.651581),(32.95404,34.642951),(32.95559,34.628895),(32.962515,34.625019),(32.969543,34.632047),(32.99073,34.634425),(33.015635,34.634425),(33.009939,34.624905),(33.009125,34.597398),(33.030284,34.574774),(33.030284,34.568549),(32.940929,34.568549),(32.940929,34.574774),(32.944672,34.593695),(32.915863,34.63939),(32.913585,34.656684),(32.857432,34.669013),(32.834321,34.670966),(32.812836,34.668647),(32.760671,34.653225),(32.760102,34.66911),(32.766766,34.676144),(32.778613,34.673182),(32.798604,34.670961),(32.806009,34.68577),(32.819336,34.687621),(32.822668,34.691323),(32.827481,34.696506),(32.832294,34.700948),(32.835626,34.699838),(32.840809,34.699467)]
+Zambia [(31.119836,-8.616631),(31.141024,-8.606192),(31.161384,-8.591723),(31.182675,-8.580767),(31.206756,-8.580767),(31.218435,-8.588725),(31.237349,-8.613943),(31.248046,-8.621902),(31.260913,-8.623969),(31.269388,-8.621075),(31.277088,-8.61601),(31.328816,-8.597717),(31.340391,-8.595237),(31.347523,-8.592446),(31.350107,-8.588519),(31.353311,-8.587175),(31.362199,-8.592343),(31.36592,-8.598647),(31.369227,-8.616837),(31.372638,-8.623865),(31.38597,-8.632547),(31.398838,-8.633787),(31.412428,-8.63234),(31.427518,-8.633477),(31.443228,-8.641539),(31.464725,-8.666137),(31.480641,-8.676162),(31.519089,-8.687014),(31.539036,-8.70355),(31.54596,-8.728975),(31.545857,-8.766286),(31.55335,-8.809074),(31.576449,-8.839666),(31.672671,-8.913047),(31.689621,-8.919558),(31.709775,-8.919661),(31.730238,-8.912426),(31.764345,-8.894133),(31.787599,-8.892169),(31.936634,-8.93258),(31.917824,-8.973095),(31.917721,-9.022497),(31.938081,-9.061771),(31.980559,-9.0719),(32.001385,-9.063322),(32.015802,-9.052676),(32.031719,-9.045958),(32.057247,-9.049576),(32.086031,-9.066112),(32.096418,-9.069109),(32.105823,-9.068283),(32.125046,-9.063735),(32.134141,-9.064148),(32.145574,-9.070265),(32.154812,-9.075207),(32.191606,-9.112208),(32.211294,-9.12678),(32.231448,-9.133808),(32.252739,-9.136495),(32.383015,-9.133912),(32.423426,-9.143834),(32.459703,-9.168018),(32.470658,-9.181867),(32.490089,-9.227343),(32.504455,-9.249253),(32.518046,-9.258245),(32.555614,-9.261242),(32.641604,-9.279846),(32.713228,-9.28584),(32.725837,-9.292558),(32.739686,-9.307338),(32.743407,-9.314986),(32.746507,-9.329972),(32.752295,-9.337413),(32.759633,-9.339997),(32.776996,-9.33824),(32.784334,-9.339687),(32.83074,-9.370176),(32.905464,-9.398185),(32.920863,-9.4079),(32.923034,-9.466294),(32.927788,-9.479834),(32.937503,-9.487792),(32.943601,-9.485725),(32.950267,-9.480867),(32.961895,-9.48035),(32.968613,-9.484381),(32.983082,-9.497714),(32.99197,-9.501848),(32.992384,-9.529133),(32.97471,-9.601273),(32.986906,-9.628455),(33.000135,-9.632796),(33.011194,-9.627835),(33.022253,-9.62029),(33.03574,-9.616983),(33.046747,-9.622047),(33.050778,-9.632693),(33.053052,-9.644785),(33.058323,-9.654293),(33.08044,-9.662251),(33.089225,-9.643958),(33.092533,-9.614399),(33.09832,-9.588458),(33.121471,-9.599516),(33.175422,-9.602307),(33.19542,-9.616259),(33.209838,-9.641374),(33.214282,-9.654707),(33.215781,-9.6699),(33.21299,-9.680648),(33.207823,-9.68871),(33.204567,-9.698115),(33.207254,-9.712894),(33.216711,-9.727054),(33.241774,-9.74235
),(33.25273,-9.754029),(33.267199,-9.781211),(33.275777,-9.793303),(33.287766,-9.803845),(33.299962,-9.810253),(33.336962,-9.823689),(33.343473,-9.83144),(33.359183,-9.870921),(33.365591,-9.898826),(33.35846,-9.91991),(33.328797,-9.965489),(33.309987,-10.012721),(33.304716,-10.037939),(33.304096,-10.063777),(33.317532,-10.082071),(33.399181,-10.117934),(33.420161,-10.135608),(33.436491,-10.153281),(33.454578,-10.16806),(33.480519,-10.177052),(33.499123,-10.201237),(33.518967,-10.214983),(33.533281,-10.231209),(33.535606,-10.262732),(33.529302,-10.318646),(33.532919,-10.342623),(33.559171,-10.404325),(33.567646,-10.416004),(33.595551,-10.440602),(33.603199,-10.44918),(33.622733,-10.494035),(33.638184,-10.511605),(33.665418,-10.515119),(33.665573,-10.541371),(33.672652,-10.559561),(33.673523,-10.569367),(33.673849,-10.573048),(33.674203,-10.577028),(33.657769,-10.601316),(33.642266,-10.615268),(33.603096,-10.643277),(33.59028,-10.648961),(33.576947,-10.657643),(33.528268,-10.711593),(33.51199,-10.751797),(33.500983,-10.769161),(33.480519,-10.781563),(33.465326,-10.784354),(33.447136,-10.802234),(33.43401,-10.808125),(33.418663,-10.807298),(33.385538,-10.80089),(33.37174,-10.803267),(33.357839,-10.808745),(33.332621,-10.813602),(33.319082,-10.81815),(33.306886,-10.828589),(33.288128,-10.856701),(33.2734,-10.865589),(33.261204,-10.865279),(33.250766,-10.862488),(33.241412,-10.865692),(33.232576,-10.883262),(33.231697,-10.897628),(33.239604,-10.903106),(33.252109,-10.903106),(33.265855,-10.901142),(33.280325,-10.913338),(33.28787,-10.944861),(33.29096,-10.975086),(33.293967,-11.004495),(33.303682,-11.034571),(33.3181,-11.062269),(33.3366,-11.087384),(33.358873,-11.109398),(33.390912,-11.164795),(33.382231,-11.194664),(33.368485,-11.222673),(33.298308,-11.32923),(33.29681,-11.335741),(33.300272,-11.347007),(33.298928,-11.353104),(33.293347,-11.357755),(33.278051,-11.364266),(33.273503,-11.369641),(33.27185,-11.383697),(33.273814,-11.396616),(33.273607,-11.409432),(33.265184,-11.423074),(33.239707,-11.402404),(33.230302,-11.416563),(33.234953,-11.501726),(33.233816,-11.514025),(33.227305,-11.532732),(33.212629,-11.563841),(33.215936,-11.571592),(33.232679,-11.57769),(33.249319,-11.577793),(33.266889,-11.575623),(33.283219,-11.578414),(33.296758,-11.593606),(33.302649,-11.60973),(33.314018,-11.7722),(33.310607,-11.808787),(33.297378,-11.846511),(33.297481,-11.862531),(33.308747,-11.884441),(33.312674,-11.898704),(33.309315,-11.91431),(33.303579,-11.930227),(33.294226,-11.981283),(33.2549,-12.072647),(33.251283,-12.105823),(33.25273,-12.139516),(33.260274,-12.155743),(33.288386,-12.182201),(33.299962,-12.195947),(33.312467,-12.228606),(33.330141,-12.298266),(33.349468,-12.327929),(33.384608,-12.340021),(33.458298,-12.31811),(33.497366,-12.331546),(33.514109,-12.331029),(33.525581,-12.335267),(33.527131,-12.355627),(33.51814,-12.37175),(33.513887,-12.374281),(33.487754,-12.389837),(33.480519,-12.406373),(33.465326,-12.409061),(33.460779,-12.416502),(33.459952,-12.4411),(33.451115,-12.452986),(33.420781,-12.464974),(33.409412,-12.475206),(33.393755,-12.498254),(33.373239,-12.518511),(33.350398,-12.532671),(33.32828,-12.537632),(33.320322,-12.535048),(33.311951,-12.52988),(33.302236,-12.525126),(33.28973,-12.524196),(33.263168,-12.528536),(33.255107,-12.531947),(33.243634,-12.544763),(33.226995,-12.579076),(33.21299,-12.588584),(33.202913,-12.591788),(33.196816,-12.595406),(33.192371,-12.599023),(33.187307,-12.60233),(33.181313,-12.608222),(33.178005,-12.615043),(33.173251,-12.620004),(33.163019,-12.62
0417),(33.158782,-12.615353),(33.146896,-12.593235),(33.13992,-12.586311),(33.13036,-12.583727),(33.122815,-12.584554),(33.107002,-12.588688),(33.099044,-12.593235),(33.089949,-12.60078),(33.081267,-12.604811),(33.074239,-12.5983),(33.070002,-12.590445),(33.064627,-12.584967),(33.057703,-12.583417),(33.049538,-12.586827),(33.035378,-12.599126),(33.024836,-12.612666),(32.994141,-12.672507),(32.958071,-12.725217),(32.944635,-12.752399),(32.941121,-12.769865),(32.947012,-12.843246),(32.953523,-12.854098),(32.980498,-12.870738),(33.005044,-12.892959),(33.013571,-12.917247),(33.00892,-12.945049),(32.981532,-13.00613),(32.965925,-13.116821),(32.967114,-13.134184),(32.9713,-13.150927),(32.979051,-13.167877),(32.995071,-13.190873),(32.998171,-13.202449),(32.992797,-13.216401),(32.962721,-13.22498),(32.937865,-13.255882),(32.919623,-13.294846),(32.905487,-13.351391),(32.892131,-13.404814),(32.874251,-13.443158),(32.845829,-13.45804),(32.827432,-13.460831),(32.819061,-13.471683),(32.816891,-13.487393),(32.817511,-13.505583),(32.813066,-13.528217),(32.800044,-13.537105),(32.763199,-13.54248),(32.749814,-13.550851),(32.722839,-13.573382),(32.71023,-13.575346),(32.691627,-13.569145),(32.677261,-13.567801),(32.667649,-13.575553),(32.663308,-13.596327),(32.67509,-13.628056),(32.706716,-13.635084),(32.743407,-13.635394),(32.769865,-13.646866),(32.807795,-13.699163),(32.813377,-13.711152),(32.80485,-13.722004),(32.768108,-13.739057),(32.757359,-13.750219),(32.763044,-13.777194),(32.791879,-13.789803),(32.828776,-13.797038),(32.85942,-13.808097),(32.874148,-13.820602),(32.923912,-13.880444),(32.927891,-13.895636),(32.930062,-13.910519),(32.93616,-13.926229),(32.944428,-13.932947),(32.962515,-13.932017),(32.970576,-13.936047),(32.976261,-13.944522),(32.976674,-13.949793),(32.974814,-13.955478),(32.9713,-14.005914),(32.976467,-14.021107),(32.999825,-14.050459),(33.01817,-14.046222),(33.049434,-13.996612),(33.062974,-13.981729),(33.076926,-13.974495),(33.111601,-13.964573),(33.120645,-13.955891),(33.130256,-13.93057),(33.140282,-13.924989),(33.154544,-13.936564),(33.185654,-13.993822),(33.202707,-14.013872),(33.13005,-14.038057),(33.042923,-14.067099),(32.917143,-14.10906),(32.744957,-14.166421),(32.576802,-14.222542),(32.425493,-14.272978),(32.320383,-14.308014),(32.227366,-14.33902),(32.147061,-14.353386),(32.108768,-14.364859),(32.033269,-14.398758),(31.932913,-14.431108),(31.801345,-14.473586),(31.658408,-14.519578),(31.589679,-14.554821),(31.496661,-14.60257),(31.410878,-14.633576),(31.306802,-14.658174),(31.178024,-14.68856),(31.074465,-14.713054),(31.056585,-14.714398),(30.915094,-14.746024),(30.815772,-14.768245),(30.676969,-14.817647),(30.591807,-14.848136),(30.48556,-14.885964),(30.392491,-14.931025),(30.318438,-14.966992),(30.230898,-14.977224),(30.214465,-14.981462),(30.214421,-14.981966),(30.213845,-14.98849),(30.220977,-15.018359),(30.220977,-15.063007),(30.225834,-15.106312),(30.263145,-15.231266),(30.287949,-15.278911),(30.320505,-15.299478),(30.334871,-15.304749),(30.347481,-15.317875),(30.357919,-15.335652),(30.364947,-15.354152),(30.369185,-15.374719),(30.371768,-15.439832),(30.374921,-15.451304),(30.389648,-15.471148),(30.392852,-15.480759),(30.390165,-15.490785),(30.371768,-15.525408),(30.365257,-15.547422),(30.367428,-15.561271),(30.395953,-15.590623),(30.401224,-15.598478),(30.412696,-15.627934),(30.410371,-15.630414),(30.396263,-15.635995),(30.356679,-15.651498),(30.32805,-15.652428),(30.296196,-15.639042),(30.280094,-15.632275),(30.254876,-15.628864),(30.246091,-15.632068),(30.231002,-1
5.644677),(30.223147,-15.649741),(30.207231,-15.653152),(30.195552,-15.649121),(30.16992,-15.632171),(30.130129,-15.623696),(30.090029,-15.629381),(30.050238,-15.640129),(30.010654,-15.646227),(29.967504,-15.641473),(29.881773,-15.618839),(29.837331,-15.614808),(29.814283,-15.619666),(29.773252,-15.638062),(29.73005,-15.644677),(29.672793,-15.663281),(29.648505,-15.666588),(29.62799,-15.663591),(29.608559,-15.658423),(29.587217,-15.655736),(29.563446,-15.662144),(29.526239,-15.692839),(29.508462,-15.703588),(29.422059,-15.71103),(29.406969,-15.714233),(29.186311,-15.812832),(29.150964,-15.848799),(29.141869,-15.854483),(29.121716,-15.859341),(29.102182,-15.870916),(29.086162,-15.884559),(29.076344,-15.895411),(29.055053,-15.934375),(29.042341,-15.946261),(29.018053,-15.950602),(28.972784,-15.951428),(28.951287,-15.955252),(28.946862,-15.957235),(28.932373,-15.963727),(28.898887,-15.995457),(28.877183,-16.022018),(28.874082,-16.028943),(28.860336,-16.049407),(28.857236,-16.060466),(28.859303,-16.069561),(28.86385,-16.076589),(28.868501,-16.08217),(28.870981,-16.087234),(28.8654,-16.121237),(28.852481,-16.162785),(28.847107,-16.202679),(28.86416,-16.231205),(28.840286,-16.284741),(28.836772,-16.306342),(28.840492,-16.323602),(28.857029,-16.36546),(28.857236,-16.388198),(28.833051,-16.426438),(28.829124,-16.434603),(28.822509,-16.470776),(28.808866,-16.486279),(28.769282,-16.515218),(28.761117,-16.532271),(28.741377,-16.550668),(28.73285,-16.55811),(28.718794,-16.56028),(28.690734,-16.56028),(28.643295,-16.568755),(28.280113,-16.706524),(28.21252,-16.748589),(28.113922,-16.827551),(28.022993,-16.865393),(27.868562,-16.929663),(27.816886,-16.959636),(27.777301,-17.001183),(27.641186,-17.198484),(27.624856,-17.233314),(27.604495,-17.312792),(27.577314,-17.363125),(27.524294,-17.415112),(27.422078,-17.504822),(27.157081,-17.769302),(27.146952,-17.783875),(27.145299,-17.794107),(27.146539,-17.818911),(27.149019,-17.842476),(27.11543,-17.882163),(27.078171,-17.916993),(27.048457,-17.944278),(27.021275,-17.958541),(27.006289,-17.962675),(26.95916,-17.964742),(26.94867,-17.968876),(26.912031,-17.992027),(26.88826,-17.984586),(26.794002,-18.026237),(26.769714,-18.029028),(26.753591,-18.032955),(26.740569,-18.0405),(26.71194,-18.065821),(26.700003,-18.069232),(26.685689,-18.066751),(26.628844,-18.049181),(26.612721,-18.041223),(26.598872,-18.029958),(26.583369,-18.013215),(26.570243,-18.002879),(26.553604,-17.996471),(26.527145,-17.992027),(26.485494,-17.979315),(26.408599,-17.939007),(26.362711,-17.930636),(26.325504,-17.93601),(26.318269,-17.934356),(26.311965,-17.928362),(26.3038,-17.922781),(26.294291,-17.918543),(26.248299,-17.913376),(26.239204,-17.910172),(26.233933,-17.903971),(26.228249,-17.894669),(26.221117,-17.886297),(26.211919,-17.882783),(26.203031,-17.887227),(26.167477,-17.913582),(26.158589,-17.918337),(26.135438,-17.922574),(26.118591,-17.931566),(26.101228,-17.935803),(26.095234,-17.938077),(26.0942,-17.941901),(26.096164,-17.954614),(26.095234,-17.958541),(26.081178,-17.962365),(26.062471,-17.962882),(26.046554,-17.966292),(26.04056,-17.978488),(26.033739,-17.971563),(25.978548,-17.998952),(25.966973,-18.000502),(25.924495,-17.998952),(25.86362,-17.971563),(25.853491,-17.959988),(25.846153,-17.943658),(25.847497,-17.929395),(25.86362,-17.923814),(25.849667,-17.906658),(25.804399,-17.888158),(25.794683,-17.872655),(25.786002,-17.862216),(25.765951,-17.849814),(25.743834,-17.839375),(25.70642,-17.829867),(25.694224,-17.819428),(25.681409,-17.81147),(25.657017,-17.81395),(25.603997,-1
7.836171),(25.536818,-17.848677),(25.530927,-17.850951),(25.522142,-17.860149),(25.516458,-17.862319),(25.510153,-17.861183),(25.500748,-17.856015),(25.49558,-17.854878),(25.420288,-17.854878),(25.409539,-17.853018),(25.376466,-17.841235),(25.345254,-17.842579),(25.335538,-17.841235),(25.315901,-17.83214),(25.285412,-17.809299),(25.266705,-17.800928),(25.259781,-17.794107),(25.253476,-17.781497),(25.242417,-17.770335),(25.228878,-17.762481),(25.215132,-17.759277),(25.198182,-17.75845),(25.190741,-17.755349),(25.177822,-17.738813),(25.172448,-17.735092),(25.161182,-17.729408),(25.156841,-17.72517),(25.155291,-17.719382),(25.156428,-17.70667),(25.153327,-17.700986),(25.138548,-17.686103),(25.131416,-17.686516),(25.122631,-17.697885),(25.120048,-17.691064),(25.115087,-17.684242),(25.107955,-17.678868),(25.098757,-17.676801),(25.096897,-17.67215),(25.088525,-17.642695),(25.085218,-17.640938),(25.067028,-17.625331),(25.064237,-17.621507),(25.052558,-17.621404),(25.045324,-17.619957),(25.040053,-17.616133),(25.033851,-17.608485),(25.033851,-17.60156),(25.040569,-17.584507),(25.036952,-17.5812),(25.02672,-17.58275),(25.008013,-17.588538),(24.998505,-17.588021),(24.982588,-17.576549),(24.971116,-17.560736),(24.969987,-17.559962),(24.95799,-17.551744),(24.93763,-17.560736),(24.924917,-17.542959),(24.898252,-17.531074),(24.829523,-17.517741),(24.79738,-17.519085),(24.786838,-17.516914),(24.780017,-17.512263),(24.775056,-17.507612),(24.769785,-17.505442),(24.684415,-17.49242),(24.639457,-17.49242),(24.629742,-17.49552),(24.622817,-17.502341),(24.617443,-17.509163),(24.606487,-17.515157),(24.591294,-17.528386),(24.580752,-17.532831),(24.571244,-17.533451),(24.562459,-17.53221),(24.546543,-17.526526),(24.537861,-17.520325),(24.53166,-17.5134),(24.523598,-17.507819),(24.498173,-17.503478),(24.47926,-17.494487),(24.449288,-17.489112),(24.407016,-17.474539),(24.388929,-17.471336),(24.371256,-17.473713),(24.329398,-17.485081),(24.321337,-17.488699),(24.310278,-17.482601),(24.257361,-17.480844),(24.238758,-17.478157),(24.220464,-17.4795),(24.190905,-17.485288),(24.148117,-17.493763),(24.105329,-17.502238),(24.062541,-17.510713),(24.01965,-17.519085),(23.976965,-17.52756),(23.934177,-17.536034),(23.891389,-17.544406),(23.848497,-17.552881),(23.805606,-17.561356),(23.762818,-17.569727),(23.72003,-17.578202),(23.677242,-17.586677),(23.634557,-17.595049),(23.591769,-17.603524),(23.548774,-17.611999),(23.505986,-17.620474),(23.47622,-17.626365),(23.45741,-17.626778),(23.422373,-17.633496),(23.381652,-17.641144),(23.375968,-17.628225),(23.375141,-17.615409),(23.382273,-17.601043),(23.359432,-17.582853),(23.340621,-17.560736),(23.320158,-17.54637),(23.305378,-17.539548),(23.290805,-17.535414),(23.258352,-17.532831),(23.241299,-17.535104),(23.224246,-17.539342),(23.206986,-17.541202),(23.189726,-17.536551),(23.177531,-17.524046),(23.17691,-17.509783),(23.179081,-17.494177),(23.176187,-17.478157),(23.165748,-17.467408),(23.121513,-17.450872),(23.097639,-17.432165),(23.073041,-17.405086),(23.054127,-17.375321),(23.046376,-17.348449),(23.040381,-17.33708),(22.998627,-17.293775),(22.984157,-17.285817),(22.936512,-17.273311),(22.876154,-17.248093),(22.849179,-17.23104),(22.809388,-17.196417),(22.778278,-17.180397),(22.765153,-17.169649),(22.755851,-17.154869),(22.744792,-17.108257),(22.730736,-17.081592),(22.710479,-17.055444),(22.665624,-17.008625),(22.651671,-16.998703),(22.592037,-16.975449),(22.579324,-16.975345),(22.573433,-16.971004),(22.569196,-16.962426),(22.569299,-16.944959),(22.567232,-16.936898),(22.554519,-1
6.924185),(22.522894,-16.914677),(22.508527,-16.906202),(22.499742,-16.89349),(22.489097,-16.865378),(22.416647,-16.75448),(22.408895,-16.745798),(22.399594,-16.740424),(22.383574,-16.73753),(22.377993,-16.734843),(22.372618,-16.728849),(22.365797,-16.71686),(22.350088,-16.696292),(22.343473,-16.683167),(22.333654,-16.673658),(22.306989,-16.669214),(22.303062,-16.66694),(22.295621,-16.659189),(22.290763,-16.656398),(22.287352,-16.658259),(22.283735,-16.661359),(22.274123,-16.663633),(22.259034,-16.669214),(22.251282,-16.670041),(22.237743,-16.665493),(22.15165,-16.597694),(22.145139,-16.584051),(22.142452,-16.567101),(22.138111,-16.552632),(22.127259,-16.546431),(22.108138,-16.54395),(22.104004,-16.536716),(22.106588,-16.525967),(22.107312,-16.512324),(22.101731,-16.497958),(22.084574,-16.470156),(22.079923,-16.457754),(22.08075,-16.441011),(22.086021,-16.422304),(22.094289,-16.404734),(22.103591,-16.391918),(22.105555,-16.379413),(22.089122,-16.371971),(22.055842,-16.364633),(22.054188,-16.358639),(22.053672,-16.336521),(22.052018,-16.326806),(22.048607,-16.322362),(22.036618,-16.312233),(22.032174,-16.306342),(22.02773,-16.291666),(22.025973,-16.278954),(22.022356,-16.266345),(22.01171,-16.252289),(22.019875,-16.253115),(22.045197,-16.252289),(22.010367,-16.198132),(21.983805,-16.165886),(21.981531,-16.144285),(21.981531,-16.128162),(21.981531,-16.067494),(21.981531,-16.004035),(21.980601,-16.001244),(21.980808,-15.956079),(21.980911,-15.853967),(21.981221,-15.751957),(21.981428,-15.649948),(21.981531,-15.547835),(21.981531,-15.503807),(21.981531,-15.473525),(21.981531,-15.452337),(21.981531,-15.412546),(21.981428,-15.261858),(21.981325,-15.11117),(21.981221,-14.960481),(21.981118,-14.809793),(21.981014,-14.659104),(21.980911,-14.508416),(21.980808,-14.357624),(21.980704,-14.206935),(21.980601,-14.05635),(21.980498,-13.905558),(21.980394,-13.754973),(21.980353,-13.694822),(21.980291,-13.604181),(21.980188,-13.453493),(21.980084,-13.302701),(21.979981,-13.152168),(21.979878,-13.001479),(22.104211,-13.001479),(22.228545,-13.001479),(22.352775,-13.001479),(22.477005,-13.001479),(22.601235,-13.001479),(22.725465,-13.001479),(22.849799,-13.001479),(22.974029,-13.001479),(23.098362,-13.001479),(23.222696,-13.001479),(23.346926,-13.001479),(23.471259,-13.001479),(23.595593,-13.001479),(23.719823,-13.001479),(23.844157,-13.001479),(23.968283,-13.001479),(24.000633,-13.001479),(23.988851,-12.965099),(23.971797,-12.93337),(23.949576,-12.904638),(23.895109,-12.849757),(23.874749,-12.821749),(23.865654,-12.789709),(23.872269,-12.750125),(23.891699,-12.705063),(23.910716,-12.631269),(23.928699,-12.561609),(23.940791,-12.532774),(23.98606,-12.467662),(24.019856,-12.419189),(24.028021,-12.402136),(24.030812,-12.385083),(24.020993,-12.340021),(24.016859,-12.278939),(24.006627,-12.253721),(23.981306,-12.227676),(23.959602,-12.19667),(23.954331,-12.151919),(23.961049,-12.011669),(23.967043,-11.882891),(23.98854,-11.834212),(23.990194,-11.824083),(23.985543,-11.799485),(23.981306,-11.724761),(23.972727,-11.700577),(23.962289,-11.68156),(23.954641,-11.662233),(23.954641,-11.636911),(23.959912,-11.617171),(23.977688,-11.577277),(24.005387,-11.535212),(24.009728,-11.52343),(24.009831,-11.507203),(24.007247,-11.483122),(24.007557,-11.470513),(24.009935,-11.459454),(24.020787,-11.444572),(24.052102,-11.42049),(24.061714,-11.406951),(24.061094,-11.394962),(24.016963,-11.298431),(24.011175,-11.272903),(24.015309,-11.130482),(23.995775,-11.127382),(23.990091,-11.113532),(23.994535,-11.074982),(23.993708,-11.019585)
,(23.997325,-11.001705),(24.003733,-10.982481),(24.000013,-10.967805),(23.980582,-10.938349),(23.974278,-10.921399),(23.967457,-10.872307),(24.00146,-10.873444),(24.048382,-10.882332),(24.09148,-10.897835),(24.114218,-10.919229),(24.114321,-10.927911),(24.10874,-10.945791),(24.108016,-10.954989),(24.110497,-10.965945),(24.117525,-10.986098),(24.119385,-10.99757),(24.117732,-11.03085),(24.124243,-11.043252),(24.144397,-11.040359),(24.16455,-11.033951),(24.181294,-11.034467),(24.218294,-11.045216),(24.238034,-11.048007),(24.275138,-11.047903),(24.293845,-11.05028),(24.316376,-11.060099),(24.345315,-11.078289),(24.370533,-11.1002),(24.381798,-11.12056),(24.378284,-11.132136),(24.369913,-11.140921),(24.362471,-11.150533),(24.361748,-11.164692),(24.367432,-11.176268),(24.384382,-11.191254),(24.391307,-11.199832),(24.396578,-11.217402),(24.397818,-11.238279),(24.396061,-11.259673),(24.392237,-11.279104),(24.384485,-11.295227),(24.352446,-11.345043),(24.33994,-11.358065),(24.298289,-11.370571),(24.285473,-11.381216),(24.293948,-11.3992),(24.310071,-11.406641),(24.351722,-11.407055),(24.370223,-11.410155),(24.392547,-11.423384),(24.426447,-11.45408),(24.446497,-11.463589),(24.469648,-11.465552),(24.490939,-11.461522),(24.532693,-11.446122),(24.542512,-11.445398),(24.554191,-11.446329),(24.564733,-11.444675),(24.571244,-11.436407),(24.574448,-11.425141),(24.578582,-11.415633),(24.584576,-11.407365),(24.593258,-11.399613),(24.60225,-11.395686),(24.624677,-11.390828),(24.635219,-11.385867),(24.645038,-11.376049),(24.663331,-11.351657),(24.67408,-11.341632),(24.712527,-11.326026),(24.787975,-11.325819),(24.826732,-11.317551),(24.901043,-11.281274),(24.942901,-11.268562),(24.981245,-11.271869),(25.124285,-11.259157),(25.184023,-11.246651),(25.278798,-11.199935),(25.31032,-11.194768),(25.322516,-11.205413),(25.324583,-11.227841),(25.323343,-11.253472),(25.32541,-11.273936),(25.324893,-11.283651),(25.317142,-11.290473),(25.296264,-11.298431),(25.284689,-11.307319),(25.290166,-11.315897),(25.301329,-11.324476),(25.306496,-11.333054),(25.298021,-11.351451),(25.286342,-11.360649),(25.277971,-11.371811),(25.279004,-11.396306),(25.286652,-11.416563),(25.309287,-11.455424),(25.316728,-11.472994),(25.315385,-11.482502),(25.301742,-11.498418),(25.297711,-11.507927),(25.297505,-11.517229),(25.300398,-11.535005),(25.327373,-11.616654),(25.336779,-11.633501),(25.344943,-11.642182),(25.351971,-11.64611),(25.360446,-11.647247),(25.373159,-11.647143),(25.386388,-11.64983),(25.392796,-11.656445),(25.39755,-11.66492),(25.406335,-11.673085),(25.463593,-11.70006),(25.481163,-11.715253),(25.482816,-11.72042),(25.483281,-11.725795),(25.482816,-11.731169),(25.474135,-11.756181),(25.480439,-11.769823),(25.49558,-11.776541),(25.515217,-11.775818),(25.524519,-11.77096),(25.530617,-11.764242),(25.537645,-11.758351),(25.550254,-11.756594),(25.561003,-11.753287),(25.56493,-11.743985),(25.567824,-11.733753),(25.575679,-11.727345),(25.621464,-11.728895),(25.655777,-11.750496),(25.687403,-11.778711),(25.725644,-11.800519),(25.744041,-11.803826),(25.801608,-11.80424),(25.812667,-11.802999),(25.832511,-11.793801),(25.839849,-11.791941),(25.850701,-11.793801),(25.855765,-11.797212),(25.859589,-11.801966),(25.914159,-11.845581),(25.935553,-11.853642),(25.949816,-11.863978),(25.963562,-11.895087),(25.981132,-11.903045),(25.994051,-11.904802),(26.16665,-11.902425),(26.189078,-11.9109),(26.209335,-11.923612),(26.229592,-11.933121),(26.265559,-11.930847),(26.28623,-11.940562),(26.298012,-11.941182),(26.309174,-11.936531),(26.328811,-11.923302
),(26.339146,-11.918961),(26.376353,-11.912863),(26.414594,-11.91183),(26.438365,-11.919065),(26.455418,-11.931984),(26.470715,-11.94697),(26.489111,-11.960819),(26.524871,-11.972291),(26.6059,-11.977769),(26.643521,-11.985521),(26.662744,-11.996579),(26.679797,-12.009602),(26.697057,-12.017457),(26.717418,-12.013012),(26.732559,-11.998336),(26.744186,-11.98118),(26.759379,-11.968674),(26.785011,-11.968364),(26.831726,-11.973325),(26.874204,-11.964333),(26.912755,-11.942526),(26.947688,-11.909143),(26.967325,-11.870385),(26.971356,-11.78853),(26.981071,-11.749049),(26.996367,-11.727035),(27.011147,-11.713186),(27.021482,-11.69789),(27.023239,-11.671121),(27.010837,-11.62885),(27.010268,-11.609833),(27.023756,-11.59557),(27.032437,-11.59433),(27.054452,-11.597534),(27.064373,-11.59712),(27.102201,-11.584925),(27.144989,-11.581307),(27.173617,-11.570559),(27.180129,-11.569629),(27.197182,-11.59371),(27.20762,-11.639392),(27.211961,-11.688278),(27.209998,-11.721971),(27.234182,-11.809097),(27.420734,-11.921959),(27.467553,-12.001644),(27.47179,-12.041641),(27.52047,-12.179617),(27.536076,-12.200081),(27.589836,-12.242958),(27.595297,-12.247313),(27.638189,-12.293615),(27.666197,-12.30271),(27.710742,-12.306328),(27.756734,-12.304261),(27.788309,-12.296509),(27.807274,-12.28142),(27.818281,-12.267777),(27.831407,-12.259922),(27.856676,-12.262403),(27.877037,-12.269948),(27.933881,-12.303331),(27.948454,-12.321417),(27.952226,-12.346532),(27.958686,-12.367823),(27.98101,-12.374541),(27.998993,-12.37144),(28.014031,-12.37051),(28.08767,-12.377538),(28.096869,-12.383016),(28.100434,-12.391491),(28.10245,-12.401206),(28.106997,-12.410197),(28.131285,-12.429214),(28.145961,-12.423117),(28.160741,-12.408751),(28.185752,-12.402446),(28.204976,-12.410818),(28.220065,-12.42539),(28.237532,-12.437069),(28.263887,-12.436966),(28.283834,-12.434072),(28.303678,-12.435416),(28.322488,-12.4411),(28.339748,-12.450505),(28.42274,-12.521302),(28.436434,-12.542799),(28.455916,-12.592098),(28.468267,-12.614629),(28.502012,-12.649563),(28.512295,-12.668373),(28.511624,-12.694418),(28.500151,-12.710954),(28.483977,-12.72222),(28.474003,-12.736482),(28.480928,-12.762114),(28.501598,-12.788469),(28.523613,-12.810173),(28.539425,-12.833841),(28.542836,-12.885621),(28.553378,-12.894922),(28.568468,-12.895543),(28.583971,-12.888721),(28.593686,-12.876732),(28.608155,-12.847897),(28.621178,-12.839422),(28.667893,-12.849344),(28.707994,-12.889755),(28.77569,-12.976571),(28.793363,-12.991557),(28.800598,-13.002409),(28.809435,-13.038376),(28.813621,-13.04282),(28.827057,-13.047575),(28.830881,-13.051812),(28.83026,-13.056876),(28.825506,-13.067832),(28.8253,-13.072379),(28.826023,-13.076823),(28.824628,-13.087262),(28.825816,-13.092843),(28.829744,-13.097184),(28.835428,-13.098941),(28.840906,-13.100078),(28.844213,-13.102352),(28.85367,-13.137078),(28.86323,-13.142659),(28.882712,-13.143589),(28.891859,-13.14731),(28.903434,-13.162296),(28.93413,-13.309626),(28.950356,-13.350967),(28.980846,-13.380216),(28.993971,-13.395925),(29.010301,-13.399439),(29.028491,-13.395512),(29.046681,-13.388897),(29.065386,-13.388676),(29.072933,-13.388587),(29.098048,-13.38466),(29.122129,-13.38435),(29.144453,-13.394478),(29.153393,-13.406984),(29.160163,-13.422074),(29.168948,-13.433856),(29.184141,-13.436853),(29.194786,-13.430962),(29.21401,-13.408328),(29.227446,-13.400783),(29.250183,-13.391585),(29.308267,-13.353654),(29.449551,-13.300324),(29.470015,-13.287818),(29.527996,-13.234075),(29.551043,-13.224153),(29.574401,-13.225393),(29.
592798,-13.235625),(29.609851,-13.249474),(29.629075,-13.261463),(29.641064,-13.262807),(29.649745,-13.259913),(29.657497,-13.26012),(29.666592,-13.271178),(29.669176,-13.281204),(29.667522,-13.291952),(29.663595,-13.301874),(29.658892,-13.309522),(29.651502,-13.314587),(29.633622,-13.318928),(29.626594,-13.326369),(29.625044,-13.334844),(29.627008,-13.354894),(29.625147,-13.364196),(29.618946,-13.367297),(29.609438,-13.369054),(29.601686,-13.374428),(29.601686,-13.388381),(29.60551,-13.399956),(29.610678,-13.411015),(29.617396,-13.421247),(29.626078,-13.429825),(29.641477,-13.438507),(29.663078,-13.446878),(29.702869,-13.457627),(29.767154,-13.458351),(29.782244,-13.455663),(29.797643,-13.424244),(29.797747,-13.31376),(29.797953,-13.164363),(29.798212,-13.02277),(29.79847,-12.873012),(29.798677,-12.698345),(29.798987,-12.54869),(29.799142,-12.382292),(29.7994,-12.265813),(29.799607,-12.159463),(29.799582,-12.159036),(29.799397,-12.15582),(29.799297,-12.154089),(29.756095,-12.157603),(29.734908,-12.164838),(29.701267,-12.189022),(29.648918,-12.212793),(29.631865,-12.215997),(29.621117,-12.21021),(29.603185,-12.192846),(29.590317,-12.188092),(29.576727,-12.188092),(29.56758,-12.191709),(29.553059,-12.205145),(29.533473,-12.216721),(29.489548,-12.2315),(29.474252,-12.242662),(29.467948,-12.255272),(29.455907,-12.294339),(29.451825,-12.3178),(29.448156,-12.326792),(29.446554,-12.33506),(29.450378,-12.342915),(29.480453,-12.380535),(29.49606,-12.386323),(29.511459,-12.387356),(29.523448,-12.392524),(29.528202,-12.410508),(29.524688,-12.429214),(29.514663,-12.44389),(29.499677,-12.453089),(29.480453,-12.454949),(29.473012,-12.452469),(29.468774,-12.448438),(29.46526,-12.444201),(29.459886,-12.441307),(29.453323,-12.44079),(29.439216,-12.44203),(29.432601,-12.441307),(29.405936,-12.431695),(29.364388,-12.409267),(29.356533,-12.407614),(29.316432,-12.406683),(29.304857,-12.404823),(29.295503,-12.400999),(29.283049,-12.390457),(29.265066,-12.370923),(29.254989,-12.366169),(29.173392,-12.367616),(29.148949,-12.373714),(29.128123,-12.382809),(29.11107,-12.393454),(29.100218,-12.374024),(29.084405,-12.376401),(29.067249,-12.384773),(29.052779,-12.383842),(29.042651,-12.378778),(29.035106,-12.378882),(29.030352,-12.376194),(29.028595,-12.363069),(29.026114,-12.353043),(28.944672,-12.206076),(28.918731,-12.174346),(28.859509,-12.14179),(28.850414,-12.130008),(28.84597,-12.111301),(28.835015,-12.092181),(28.821475,-12.075748),(28.808866,-12.064585),(28.773623,-12.045879),(28.759877,-12.035233),(28.754193,-12.020144),(28.751092,-11.992238),(28.741997,-11.98428),(28.707064,-11.989448),(28.686032,-11.984074),(28.636887,-11.957615),(28.623865,-11.947797),(28.606191,-11.91028),(28.592652,-11.900461),(28.569191,-11.91369),(28.566091,-11.904492),(28.554205,-11.885062),(28.549451,-11.879584),(28.540562,-11.872039),(28.531571,-11.866458),(28.521132,-11.865735),(28.507851,-11.872763),(28.497154,-11.857363),(28.488886,-11.808684),(28.476794,-11.798245),(28.460671,-11.793388),(28.452506,-11.780779),(28.442687,-11.711635),(28.439587,-11.701507),(28.425117,-11.688071),(28.418399,-11.678873),(28.42212,-11.674738),(28.42274,-11.664713),(28.40548,-11.605182),(28.399589,-11.596087),(28.384913,-11.581307),(28.37742,-11.571076),(28.365586,-11.54162),(28.360315,-11.533558),(28.355664,-11.518159),(28.354424,-11.464725),(28.357008,-11.446949),(28.408581,-11.372431),(28.411991,-11.368401),(28.434212,-11.351244),(28.439587,-11.348247),(28.43199,-11.31104),(28.436176,-11.289129),(28.445581,-11.274556),(28.45509,-11.263704),(28.47
3693,-11.22567),(28.472453,-11.191977),(28.473693,-11.181229),(28.476225,-11.175441),(28.484752,-11.162832),(28.487336,-11.153427),(28.486922,-11.140921),(28.476174,-11.087694),(28.476794,-11.077979),(28.490953,-11.058652),(28.504079,-11.034364),(28.507851,-11.023615),(28.508626,-11.013487),(28.506663,-10.992093),(28.507851,-10.982068),(28.513794,-10.972869),(28.531364,-10.955092),(28.535085,-10.944447),(28.537358,-10.934112),(28.547177,-10.913751),(28.549451,-10.902899),(28.549451,-10.858561),(28.554412,-10.834996),(28.563507,-10.811949),(28.576529,-10.790658),(28.593479,-10.772881),(28.60092,-10.763166),(28.608052,-10.741359),(28.623245,-10.716864),(28.627585,-10.711696),(28.633477,-10.708906),(28.646189,-10.709836),(28.651873,-10.708389),(28.665878,-10.697434),(28.681536,-10.681931),(28.694351,-10.663327),(28.699622,-10.643174),(28.694248,-10.636766),(28.67089,-10.619092),(28.665516,-10.608964),(28.66717,-10.597388),(28.672699,-10.576098),(28.672337,-10.564315),(28.648049,-10.53765),(28.634924,-10.51946),(28.634717,-10.505714),(28.638748,-10.496206),(28.630479,-10.481013),(28.634717,-10.471608),(28.640608,-10.462616),(28.644535,-10.451041),(28.646086,-10.438431),(28.644949,-10.426546),(28.637714,-10.407012),(28.628619,-10.391096),(28.620868,-10.373319),(28.617767,-10.348308),(28.620713,-10.327327),(28.626449,-10.311411),(28.629239,-10.295908),(28.623865,-10.276374),(28.622108,-10.275857),(28.613323,-10.27069),(28.610842,-10.268829),(28.608052,-10.263558),(28.605985,-10.252913),(28.604021,-10.248366),(28.569915,-10.219633),(28.57932,-10.201133),(28.604021,-10.172711),(28.619627,-10.141912),(28.628516,-10.102741),(28.631255,-10.023056),(28.623865,-9.951019),(28.627947,-9.929419),(28.658695,-9.864823),(28.668462,-9.821622),(28.678073,-9.803845),(28.696522,-9.796507),(28.698486,-9.791959),(28.672337,-9.765501),(28.66996,-9.750928),(28.663656,-9.745037),(28.654974,-9.741937),(28.644949,-9.735115),(28.631255,-9.715582),(28.625157,-9.695634),(28.623865,-9.649126),(28.621798,-9.638274),(28.612909,-9.6175),(28.610842,-9.607888),(28.606502,-9.594349),(28.587588,-9.573161),(28.583557,-9.563756),(28.588621,-9.558279),(28.611101,-9.551044),(28.617767,-9.543292),(28.586296,-9.544016),(28.5741,-9.539158),(28.569191,-9.525929),(28.568726,-9.508979),(28.566866,-9.495957),(28.562783,-9.484898),(28.5556,-9.474459),(28.531571,-9.456889),(28.519685,-9.445624),(28.514724,-9.430018),(28.516274,-9.415548),(28.518135,-9.397461),(28.515964,-9.378341),(28.495397,-9.353226),(28.460464,-9.335036),(28.411371,-9.324908),(28.368532,-9.308578),(28.35246,-9.271991),(28.372304,-9.235094),(28.415092,-9.207085),(28.506198,-9.164401),(28.59906,-9.096395),(28.765407,-8.934027),(28.893099,-8.765769),(28.930461,-8.679883),(28.935474,-8.590792),(28.892789,-8.501909),(28.889172,-8.483099),(28.915268,-8.472867),(29.035106,-8.45478),(29.208119,-8.428528),(29.365938,-8.40455),(29.575844,-8.372729),(29.592281,-8.370237),(29.858105,-8.329826),(30.123618,-8.289519),(30.32495,-8.258926),(30.581781,-8.219962),(30.752107,-8.194124),(30.778255,-8.289105),(30.828278,-8.388117),(30.891892,-8.479171),(30.959536,-8.550485),(30.992351,-8.57591),(31.03364,-8.600301),(31.077824,-8.61632),(31.119836,-8.616631)] +Zimbabwe 
[(30.010654,-15.646227),(30.050238,-15.640129),(30.090029,-15.629381),(30.130129,-15.623696),(30.16992,-15.632171),(30.195552,-15.649121),(30.207231,-15.653152),(30.223147,-15.649741),(30.231002,-15.644677),(30.246091,-15.632068),(30.254876,-15.628864),(30.280094,-15.632275),(30.296196,-15.639042),(30.32805,-15.652428),(30.356679,-15.651498),(30.396263,-15.635995),(30.39771,-15.716817),(30.39926,-15.812005),(30.401327,-15.931688),(30.402568,-16.001244),(30.514809,-16.000418),(30.586587,-16.000004),(30.74973,-15.998867),(30.857424,-15.998144),(30.901865,-16.007136),(30.942173,-16.034524),(30.958296,-16.05106),(30.973075,-16.062016),(30.989767,-16.06429),(31.012039,-16.054885),(31.023718,-16.045169),(31.042218,-16.024912),(31.056895,-16.017574),(31.065421,-16.019641),(31.073328,-16.025532),(31.080872,-16.025946),(31.089037,-16.01189),(31.1141,-15.996904),(31.15849,-16.000211),(31.259983,-16.023465),(31.278897,-16.030287),(31.29533,-16.041655),(31.309592,-16.059019),(31.328351,-16.092815),(31.340908,-16.106664),(31.360339,-16.116896),(31.37026,-16.123718),(31.374601,-16.132916),(31.377754,-16.142218),(31.384006,-16.148832),(31.387727,-16.149556),(31.395582,-16.147695),(31.399613,-16.147282),(31.404315,-16.149866),(31.404057,-16.154517),(31.402713,-16.159374),(31.404574,-16.162268),(31.424107,-16.164749),(31.445708,-16.164955),(31.465655,-16.167746),(31.480641,-16.177978),(31.519192,-16.196478),(31.686107,-16.207227),(31.710705,-16.217872),(31.738197,-16.239783),(31.798761,-16.303655),(31.818088,-16.319571),(31.86005,-16.340759),(31.871935,-16.35037),(31.88072,-16.368044),(31.88563,-16.406284),(31.894363,-16.421477),(31.910279,-16.428919),(32.014149,-16.444938),(32.211759,-16.440184),(32.290463,-16.45176),(32.393661,-16.491757),(32.5521,-16.553355),(32.671783,-16.599761),(32.6831,-16.609889),(32.687906,-16.624255),(32.68863,-16.647303),(32.698655,-16.686784),(32.725217,-16.706421),(32.73095,-16.708656),(32.731314,-16.708798),(32.739893,-16.703217),(32.753845,-16.697946),(32.769348,-16.695466),(32.800664,-16.697326),(32.862004,-16.710452),(32.893372,-16.712415),(32.909598,-16.708075),(32.93957,-16.689781),(32.95621,-16.683063),(32.968509,-16.681616),(32.961585,-16.710348),(32.933369,-16.815768),(32.916213,-16.847911),(32.900503,-16.867755),(32.828776,-16.935141),(32.83012,-16.941549),(32.886757,-17.038184),(32.928512,-17.109497),(32.954143,-17.167168),(32.967786,-17.22887),(32.96909,-17.266115),(32.969439,-17.276102),(32.973212,-17.297909),(32.983599,-17.317753),(32.992384,-17.324678),(33.014656,-17.336667),(33.021633,-17.345555),(33.022459,-17.361471),(33.016258,-17.377181),(33.011651,-17.383991),(32.997448,-17.404983),(32.958174,-17.478467),(32.951663,-17.486218),(32.942981,-17.491593),(32.936573,-17.498311),(32.936676,-17.509369),(32.947218,-17.543166),(32.951663,-17.551434),(32.969129,-17.56456),(33.006646,-17.580993),(33.020392,-17.598563),(33.024526,-17.619233),(33.020599,-17.638457),(33.004063,-17.675561),(33.000238,-17.713905),(33.003184,-17.757726),(32.999102,-17.794313),(32.973573,-17.810643),(32.957037,-17.817981),(32.946082,-17.834724),(32.939674,-17.855498),(32.936883,-17.875032),(32.938433,-17.894566),(32.950267,-17.922574),(32.952128,-17.940247),(32.948149,-17.95327),(32.940397,-17.959988),(32.932439,-17.964949),(32.927375,-17.972907),(32.928977,-17.982312),(32.941224,-17.996265),(32.940294,-18.004843),(32.934919,-18.024583),(32.93709,-18.047114),(32.972282,-18.150261),(32.975537,-18.183333),(32.974865,-18.190775),(32.965925,-18.212169),(32.958174,-18.225398),(32.952283,-18.233046
),(32.950526,-18.241314),(32.95497,-18.256301),(32.970163,-18.277488),(33.016878,-18.313661),(33.034965,-18.332885),(33.042768,-18.352005),(33.038066,-18.363064),(33.00923,-18.383941),(32.988198,-18.41319),(32.985356,-18.412467),(32.986803,-18.422285),(32.999515,-18.436651),(33.003029,-18.446883),(32.996414,-18.46714),(32.978586,-18.48006),(32.956624,-18.489878),(32.937142,-18.50104),(32.919313,-18.510032),(32.900296,-18.515303),(32.88314,-18.522124),(32.870737,-18.535767),(32.868257,-18.552613),(32.871668,-18.57318),(32.884483,-18.609044),(32.914559,-18.665888),(32.92231,-18.693173),(32.920243,-18.726246),(32.913267,-18.753014),(32.902518,-18.774512),(32.885207,-18.787844),(32.858852,-18.790015),(32.817924,-18.787018),(32.787642,-18.791255),(32.69142,-18.83425),(32.68987,-18.843241),(32.696794,-18.897192),(32.703202,-18.911868),(32.71576,-18.919826),(32.705063,-18.927474),(32.692247,-18.934295),(32.682532,-18.942667),(32.681085,-18.954966),(32.68863,-18.97729),(32.690283,-18.988246),(32.68863,-19.000958),(32.691058,-19.01429),(32.698965,-19.022249),(32.710282,-19.025969),(32.723873,-19.026589),(32.785988,-19.017701),(32.803351,-19.019561),(32.814203,-19.023799),(32.819991,-19.028346),(32.822988,-19.035168),(32.825262,-19.046847),(32.830223,-19.059146),(32.83813,-19.066897),(32.847483,-19.073925),(32.855906,-19.083744),(32.862262,-19.118057),(32.83322,-19.241977),(32.832187,-19.266678),(32.828673,-19.284558),(32.820715,-19.301301),(32.806142,-19.323419),(32.768831,-19.363623),(32.766454,-19.373442),(32.768521,-19.402794),(32.762217,-19.443412),(32.763354,-19.463979),(32.773947,-19.475864),(32.793119,-19.476691),(32.811309,-19.474521),(32.825365,-19.479172),(32.832187,-19.500876),(32.832497,-19.519273),(32.825365,-19.59162),(32.825675,-19.600818),(32.828156,-19.610636),(32.829603,-19.623659),(32.825365,-19.633271),(32.819474,-19.641952),(32.81627,-19.652081),(32.819629,-19.674302),(32.83105,-19.685154),(32.849137,-19.689081),(32.872184,-19.690218),(32.894715,-19.684327),(32.924584,-19.655285),(32.943188,-19.64929),(32.960964,-19.658799),(32.962411,-19.679056),(32.954143,-19.717813),(32.962411,-19.735383),(32.979051,-19.751403),(33.0006,-19.764322),(33.022769,-19.773107),(33.032795,-19.784166),(33.029642,-19.80339),(33.022873,-19.826851),(33.021322,-19.868088),(33.001995,-19.927),(32.998378,-20.000897),(33.004373,-20.024255),(33.007266,-20.032006),(32.95373,-20.030249),(32.940087,-20.041515),(32.934299,-20.072107),(32.926548,-20.086473),(32.910683,-20.091124),(32.894405,-20.094018),(32.88531,-20.10301),(32.877869,-20.151689),(32.872908,-20.167192),(32.859265,-20.190859),(32.857095,-20.200575),(32.858335,-20.207499),(32.865053,-20.220935),(32.86557,-20.228893),(32.858438,-20.259486),(32.852961,-20.273852),(32.845209,-20.286668),(32.800767,-20.338551),(32.735862,-20.414205),(32.704443,-20.471773),(32.671783,-20.531821),(32.646462,-20.557969),(32.603674,-20.56479),(32.556545,-20.559312),(32.513136,-20.564583),(32.481614,-20.603031),(32.471072,-20.645509),(32.469108,-20.68685),(32.483474,-20.794233),(32.49722,-20.898103),(32.491019,-20.936344),(32.467661,-20.980165),(32.417122,-21.040937),(32.339814,-21.134058),(32.345343,-21.142843),(32.359864,-21.151421),(32.368856,-21.162997),(32.373352,-21.163617),(32.377744,-21.16341),(32.380638,-21.165477),(32.380535,-21.172195),(32.376866,-21.178499),(32.37299,-21.183977),(32.37175,-21.187905),(32.444613,-21.304693),(32.445849,-21.308994),(32.447197,-21.313685),(32.408543,-21.290327),(32.37299,-21.327948),(32.324517,-21.378177),(32.272221,-21.432541),(32.2
19718,-21.486904),(32.167318,-21.541268),(32.114814,-21.595632),(32.062415,-21.649995),(32.010015,-21.704462),(31.957615,-21.758826),(31.905215,-21.813189),(31.852712,-21.867553),(31.800312,-21.92202),(31.747808,-21.976384),(31.695512,-22.030747),(31.643112,-22.085214),(31.590712,-22.139578),(31.538209,-22.193941),(31.485809,-22.248305),(31.433822,-22.302048),(31.36871,-22.345043),(31.288922,-22.39734),(31.265616,-22.365507),(31.255642,-22.357962),(31.24572,-22.357549),(31.229597,-22.363957),(31.221536,-22.364887),(31.213474,-22.36189),(31.197868,-22.352588),(31.190685,-22.350624),(31.183657,-22.34556),(31.163348,-22.322616),(31.152599,-22.316414),(31.137717,-22.318482),(31.10454,-22.333364),(31.097048,-22.334922),(31.087642,-22.336878),(31.07033,-22.333674),(31.036121,-22.319618),(30.927187,-22.295744),(30.867087,-22.289646),(30.83789,-22.282308),(30.805282,-22.294504),(30.693919,-22.302772),(30.674282,-22.30856),(30.647411,-22.32644),(30.632424,-22.330677),(30.625551,-22.32861),(30.610307,-22.318688),(30.601108,-22.316414),(30.57217,-22.316621),(30.507367,-22.309593),(30.488454,-22.310213),(30.46923,-22.315071),(30.431713,-22.331194),(30.412696,-22.336878),(30.372078,-22.343493),(30.334975,-22.344733),(30.300765,-22.336982),(30.269346,-22.316414),(30.25529,-22.304736),(30.240407,-22.296157),(30.2217,-22.290886),(30.196999,-22.289129),(30.15266,-22.294814),(30.13509,-22.293574),(30.111113,-22.282308),(30.082587,-22.262878),(30.067911,-22.25709),(30.038145,-22.253783),(30.035872,-22.250579),(30.034528,-22.246135),(30.015511,-22.227014),(30.005279,-22.22226),(29.983782,-22.217713),(29.973963,-22.213992),(29.946678,-22.198282),(29.932105,-22.194355),(29.896035,-22.191358),(29.871489,-22.179265),(29.837331,-22.172444),(29.779246,-22.136374),(29.758886,-22.130896),(29.691448,-22.1341),(29.679614,-22.138338),(29.661424,-22.126452),(29.641064,-22.129242),(29.60396,-22.145055),(29.570164,-22.141955),(29.551043,-22.145986),(29.542517,-22.162522),(29.53182,-22.172444),(29.506912,-22.170067),(29.456889,-22.158801),(29.436115,-22.163142),(29.399528,-22.182159),(29.378031,-22.192908),(29.363251,-22.192288),(29.356947,-22.190944),(29.350074,-22.186707),(29.273644,-22.125108),(29.26734,-22.115807),(29.259588,-22.096066),(29.254111,-22.087074),(29.244395,-22.075706),(29.239331,-22.072605),(29.144867,-22.075292),(29.10797,-22.069194),(29.070763,-22.051004),(29.040532,-22.020929),(29.021567,-21.982791),(29.013815,-21.940417),(29.017949,-21.898145),(29.028905,-21.876648),(29.045441,-21.852567),(29.057637,-21.829209),(29.05526,-21.809985),(29.038723,-21.797893),(28.998726,-21.786008),(28.980846,-21.774845),(28.951907,-21.768334),(28.891032,-21.764924),(28.860853,-21.757379),(28.714195,-21.693507),(28.66841,-21.679968),(28.629704,-21.651339),(28.6157,-21.647101),(28.585934,-21.644414),(28.553998,-21.636559),(28.542939,-21.638316),(28.532501,-21.643071),(28.497309,-21.651546),(28.481393,-21.657437),(28.464598,-21.660331),(28.443101,-21.655783),(28.361762,-21.616302),(28.321919,-21.603486),(28.284867,-21.596872),(28.165702,-21.595218),(28.090771,-21.581266),(28.032893,-21.577855),(28.016563,-21.572894),(28.002559,-21.564212),(27.990415,-21.551913),(27.984731,-21.542922),(27.975739,-21.522561),(27.970571,-21.514396),(27.963698,-21.510469),(27.958066,-21.511502),(27.953208,-21.510469),(27.949281,-21.500754),(27.954448,-21.487835),(27.950418,-21.482047),(27.943338,-21.479876),(27.939876,-21.478016),(27.941943,-21.468508),(27.949642,-21.456519),(27.953001,-21.448664),(27.950211,-21.438329),(27.920549,-21.381174),(27
.904219,-21.364741),(27.897811,-21.35544),(27.896157,-21.347895),(27.896674,-21.332392),(27.8944,-21.32433),(27.884995,-21.310171),(27.849132,-21.269657),(27.823604,-21.231726),(27.793838,-21.197413),(27.724385,-21.149664),(27.709192,-21.134471),(27.674775,-21.090133),(27.666611,-21.071219),(27.666817,-21.053753),(27.678961,-21.000733),(27.680356,-20.979649),(27.672657,-20.923528),(27.672605,-20.913709),(27.675085,-20.891282),(27.674775,-20.879913),(27.676016,-20.866684),(27.681803,-20.857589),(27.689038,-20.849011),(27.694412,-20.837745),(27.709605,-20.756716),(27.707332,-20.716719),(27.682475,-20.637344),(27.690382,-20.60148),(27.702629,-20.566134),(27.705575,-20.526653),(27.698133,-20.509083),(27.683767,-20.49606),(27.66599,-20.489136),(27.625786,-20.488619),(27.590853,-20.473323),(27.534112,-20.483038),(27.45391,-20.473323),(27.340739,-20.473013),(27.306012,-20.477354),(27.268392,-20.49575),(27.283998,-20.35147),(27.266015,-20.234164),(27.214907,-20.110451),(27.201781,-20.092984),(27.183746,-20.082339),(27.16292,-20.076551),(27.141888,-20.073347),(27.129692,-20.072934),(27.119771,-20.073864),(27.109642,-20.073244),(27.097343,-20.068903),(27.086491,-20.060532),(27.069231,-20.03738),(27.060136,-20.027562),(27.02665,-20.010095),(26.9943,-20.006788),(26.961072,-20.007201),(26.925054,-20.000897),(26.811882,-19.94643),(26.774469,-19.939815),(26.750801,-19.939609),(26.730957,-19.935888),(26.713904,-19.927413),(26.698608,-19.91253),(26.684758,-19.894547),(26.67717,-19.886815),(26.673803,-19.883385),(26.659437,-19.875737),(26.614065,-19.863438),(26.595565,-19.855583),(26.581922,-19.842147),(26.574791,-19.819513),(26.566316,-19.800806),(26.549263,-19.784063),(26.508852,-19.759258),(26.489731,-19.75192),(26.450251,-19.743342),(26.431854,-19.73652),(26.412837,-19.71957),(26.385242,-19.679056),(26.362711,-19.667584),(26.332325,-19.662416),(26.324367,-19.659109),(26.312171,-19.651358),(26.312481,-19.649601),(26.319096,-19.646293),(26.326331,-19.633891),(26.333462,-19.613014),(26.330981,-19.604952),(26.32106,-19.592033),(26.313205,-19.584178),(26.30349,-19.577254),(26.292638,-19.572499),(26.239101,-19.571466),(26.194452,-19.5602),(26.155488,-19.537153),(26.13027,-19.501082),(26.034359,-19.243734),(26.011414,-19.199809),(25.981132,-19.161775),(25.956534,-19.122088),(25.948576,-19.103277),(25.944855,-19.079196),(25.948059,-19.058732),(25.964389,-19.021629),(25.9678,-19.000958),(25.967449,-18.999925),(25.940721,-18.921273),(25.815251,-18.813993),(25.779491,-18.738752),(25.773393,-18.665578),(25.761921,-18.630335),(25.736909,-18.608734),(25.698255,-18.590234),(25.669523,-18.566049),(25.622084,-18.501143),(25.608442,-18.487708),(25.574439,-18.465693),(25.508499,-18.399134),(25.49558,-18.378877),(25.490516,-18.365545),(25.481163,-18.323377),(25.473204,-18.303429),(25.440855,-18.2532),(25.408816,-18.175995),(25.387525,-18.138995),(25.357449,-18.115844),(25.323446,-18.09662),(25.296368,-18.068612),(25.255026,-18.001122),(25.226088,-17.931876),(25.21937,-17.908001),(25.21937,-17.879786),(25.259781,-17.794107),(25.266705,-17.800928),(25.285412,-17.809299),(25.315901,-17.83214),(25.335538,-17.841235),(25.345254,-17.842579),(25.376466,-17.841235),(25.409539,-17.853018),(25.420288,-17.854878),(25.49558,-17.854878),(25.500748,-17.856015),(25.510153,-17.861183),(25.516458,-17.862319),(25.522142,-17.860149),(25.530927,-17.850951),(25.536818,-17.848677),(25.603997,-17.836171),(25.657017,-17.81395),(25.681409,-17.81147),(25.694224,-17.819428),(25.70642,-17.829867),(25.743834,-17.839375),(25.765951,-17.849814),(25.78600
2,-17.862216),(25.794683,-17.872655),(25.804399,-17.888158),(25.849667,-17.906658),(25.86362,-17.923814),(25.847497,-17.929395),(25.846153,-17.943658),(25.853491,-17.959988),(25.86362,-17.971563),(25.924495,-17.998952),(25.966973,-18.000502),(25.978548,-17.998952),(26.033739,-17.971563),(26.04056,-17.978488),(26.046554,-17.966292),(26.062471,-17.962882),(26.081178,-17.962365),(26.095234,-17.958541),(26.096164,-17.954614),(26.0942,-17.941901),(26.095234,-17.938077),(26.101228,-17.935803),(26.118591,-17.931566),(26.135438,-17.922574),(26.158589,-17.918337),(26.167477,-17.913582),(26.203031,-17.887227),(26.211919,-17.882783),(26.221117,-17.886297),(26.228249,-17.894669),(26.233933,-17.903971),(26.239204,-17.910172),(26.248299,-17.913376),(26.294291,-17.918543),(26.3038,-17.922781),(26.311965,-17.928362),(26.318269,-17.934356),(26.325504,-17.93601),(26.362711,-17.930636),(26.408599,-17.939007),(26.485494,-17.979315),(26.527145,-17.992027),(26.553604,-17.996471),(26.570243,-18.002879),(26.583369,-18.013215),(26.598872,-18.029958),(26.612721,-18.041223),(26.628844,-18.049181),(26.685689,-18.066751),(26.700003,-18.069232),(26.71194,-18.065821),(26.740569,-18.0405),(26.753591,-18.032955),(26.769714,-18.029028),(26.794002,-18.026237),(26.88826,-17.984586),(26.912031,-17.992027),(26.94867,-17.968876),(26.95916,-17.964742),(27.006289,-17.962675),(27.021275,-17.958541),(27.048457,-17.944278),(27.078171,-17.916993),(27.11543,-17.882163),(27.149019,-17.842476),(27.146539,-17.818911),(27.145299,-17.794107),(27.146952,-17.783875),(27.157081,-17.769302),(27.422078,-17.504822),(27.524294,-17.415112),(27.577314,-17.363125),(27.604495,-17.312792),(27.624856,-17.233314),(27.641186,-17.198484),(27.777301,-17.001183),(27.816886,-16.959636),(27.868562,-16.929663),(28.022993,-16.865393),(28.113922,-16.827551),(28.21252,-16.748589),(28.280113,-16.706524),(28.643295,-16.568755),(28.690734,-16.56028),(28.718794,-16.56028),(28.73285,-16.55811),(28.741377,-16.550668),(28.761117,-16.532271),(28.769282,-16.515218),(28.808866,-16.486279),(28.822509,-16.470776),(28.829124,-16.434603),(28.833051,-16.426438),(28.857236,-16.388198),(28.857029,-16.36546),(28.840492,-16.323602),(28.836772,-16.306342),(28.840286,-16.284741),(28.86416,-16.231205),(28.847107,-16.202679),(28.852481,-16.162785),(28.8654,-16.121237),(28.870981,-16.087234),(28.868501,-16.08217),(28.86385,-16.076589),(28.859303,-16.069561),(28.857236,-16.060466),(28.860336,-16.049407),(28.874082,-16.028943),(28.877183,-16.022018),(28.898887,-15.995457),(28.932373,-15.963727),(28.946862,-15.957235),(28.951287,-15.955252),(28.972784,-15.951428),(29.018053,-15.950602),(29.042341,-15.946261),(29.055053,-15.934375),(29.076344,-15.895411),(29.086162,-15.884559),(29.102182,-15.870916),(29.121716,-15.859341),(29.141869,-15.854483),(29.150964,-15.848799),(29.186311,-15.812832),(29.406969,-15.714233),(29.422059,-15.71103),(29.508462,-15.703588),(29.526239,-15.692839),(29.563446,-15.662144),(29.587217,-15.655736),(29.608559,-15.658423),(29.62799,-15.663591),(29.648505,-15.666588),(29.672793,-15.663281),(29.73005,-15.644677),(29.773252,-15.638062),(29.814283,-15.619666),(29.837331,-15.614808),(29.881773,-15.618839),(29.967504,-15.641473),(30.010654,-15.646227)] diff --git a/tests/queries/0_stateless/helpers/httpechoserver.py b/tests/queries/0_stateless/helpers/httpechoserver.py deleted file mode 100644 index a1176c5e72d..00000000000 --- a/tests/queries/0_stateless/helpers/httpechoserver.py +++ /dev/null @@ -1,99 +0,0 @@ -#!/usr/bin/env python3 - -import sys -import os -import time 
-import subprocess
-import threading
-from io import StringIO, SEEK_END
-from http.server import BaseHTTPRequestHandler, HTTPServer
-
-CLICKHOUSE_HOST = os.environ.get('CLICKHOUSE_HOST', '127.0.0.1')
-CLICKHOUSE_PORT_HTTP = os.environ.get('CLICKHOUSE_PORT_HTTP', '8123')
-
-# IP-address of this host accessible from outside world.
-HTTP_SERVER_HOST = os.environ.get('HTTP_SERVER_HOST', subprocess.check_output(['hostname', '-i']).decode('utf-8').strip())
-HTTP_SERVER_PORT = int(os.environ.get('CLICKHOUSE_TEST_HOST_EXPOSED_PORT', 51234))
-
-# IP address and port of the HTTP server started from this script.
-HTTP_SERVER_ADDRESS = (HTTP_SERVER_HOST, HTTP_SERVER_PORT)
-HTTP_SERVER_URL_STR = 'http://' + ':'.join(str(s) for s in HTTP_SERVER_ADDRESS) + "/"
-
-ostream = StringIO()
-istream = sys.stdout
-
-class EchoCSVHTTPServer(BaseHTTPRequestHandler):
-    def _set_headers(self):
-        self.send_response(200)
-        self.send_header('Content-type', 'text/plain')
-        self.end_headers()
-
-    def do_GET(self):
-        self._set_headers()
-        with open(CSV_DATA, 'r') as fl:
-            ostream.seek(0)
-            for row in ostream:
-                self.wfile.write(row + '\n')
-        return
-
-    def read_chunk(self):
-        msg = ''
-        while True:
-            sym = self.rfile.read(1)
-            if sym == '':
-                break
-            msg += sym.decode('utf-8')
-            if msg.endswith('\r\n'):
-                break
-        length = int(msg[:-2], 16)
-        if length == 0:
-            return ''
-        content = self.rfile.read(length)
-        self.rfile.read(2) # read sep \r\n
-        return content.decode('utf-8')
-
-    def do_POST(self):
-        while True:
-            chunk = self.read_chunk()
-            if not chunk:
-                break
-            istream.write(chunk)
-            istream.flush()
-        text = ""
-        self._set_headers()
-        self.wfile.write("ok")
-
-    def log_message(self, format, *args):
-        return
-
-def start_server(requests_amount, test_data="Hello,2,-2,7.7\nWorld,2,-5,8.8"):
-    ostream = StringIO(test_data.decode("utf-8"))
-
-    httpd = HTTPServer(HTTP_SERVER_ADDRESS, EchoCSVHTTPServer)
-
-    def real_func():
-        for i in range(requests_amount):
-            httpd.handle_request()
-
-    t = threading.Thread(target=real_func)
-    return t
-
-def run(requests_amount=1):
-    t = start_server(requests_amount)
-    t.start()
-    t.join()
-
-if __name__ == "__main__":
-    exception_text = ''
-    for i in range(1, 5):
-        try:
-            run(int(sys.argv[1]) if len(sys.argv) > 1 else 1)
-            break
-        except Exception as ex:
-            exception_text = str(ex)
-            time.sleep(1)
-
-    if exception_text:
-        print("Exception: {}".format(exception_text), file=sys.stderr)
-        os._exit(1)
-
diff --git a/tests/queries/1_stateful/00159_parallel_formatting_http.reference b/tests/queries/1_stateful/00159_parallel_formatting_http.reference
index 499a0b8a7c7..8eabf5d4f03 100644
--- a/tests/queries/1_stateful/00159_parallel_formatting_http.reference
+++ b/tests/queries/1_stateful/00159_parallel_formatting_http.reference
@@ -1,12 +1,12 @@
 TSV, false
-8a984bbbfb127c430f67173f5371c6cb -
+6e4ce4996dd0e036d27cb0d2166c8e59 -
 TSV, true
-8a984bbbfb127c430f67173f5371c6cb -
+6e4ce4996dd0e036d27cb0d2166c8e59 -
 CSV, false
-ea1c740f03f5dcc43a3044528ad0a98f -
+ab6b3616f31e8a952c802ca92562e418 -
 CSV, true
-ea1c740f03f5dcc43a3044528ad0a98f -
+ab6b3616f31e8a952c802ca92562e418 -
 JSONCompactEachRow, false
-ba1081a754a06ef6563840b2d8d4d327 -
+1651b540b43bd6c62446f4c340bf13c7 -
 JSONCompactEachRow, true
-ba1081a754a06ef6563840b2d8d4d327 -
+1651b540b43bd6c62446f4c340bf13c7 -
diff --git a/tests/queries/1_stateful/00159_parallel_formatting_http.sh b/tests/queries/1_stateful/00159_parallel_formatting_http.sh
index 8fd8c15b7c7..a4e68de6a3f 100755
--- a/tests/queries/1_stateful/00159_parallel_formatting_http.sh
+++ b/tests/queries/1_stateful/00159_parallel_formatting_http.sh
@@ -10,8 +10,8 @@ FORMATS=('TSV' 'CSV' 'JSONCompactEachRow')
 for format in "${FORMATS[@]}"
 do
     echo "$format, false";
-    ${CLICKHOUSE_CURL} -sS "${CLICKHOUSE_URL}&query=SELECT+ClientEventTime+as+a,MobilePhoneModel+as+b,ClientIP6+as+c+FROM+test.hits+ORDER+BY+a,b,c+Format+$format&output_format_parallel_formatting=false" -d' ' | md5sum
+    ${CLICKHOUSE_CURL} -sS "${CLICKHOUSE_URL}&query=SELECT+ClientEventTime+as+a,MobilePhoneModel+as+b,ClientIP6+as+c+FROM+test.hits+ORDER+BY+a,b,c+LIMIT+1000000+Format+$format&output_format_parallel_formatting=false" -d' ' | md5sum
     echo "$format, true";
-    ${CLICKHOUSE_CURL} -sS "${CLICKHOUSE_URL}&query=SELECT+ClientEventTime+as+a,MobilePhoneModel+as+b,ClientIP6+as+c+FROM+test.hits+ORDER+BY+a,b,c+Format+$format&output_format_parallel_formatting=true" -d' ' | md5sum
+    ${CLICKHOUSE_CURL} -sS "${CLICKHOUSE_URL}&query=SELECT+ClientEventTime+as+a,MobilePhoneModel+as+b,ClientIP6+as+c+FROM+test.hits+ORDER+BY+a,b,c+LIMIT+1000000+Format+$format&output_format_parallel_formatting=true" -d' ' | md5sum
 done
diff --git a/tests/queries/1_stateful/00161_parallel_parsing_with_names.reference b/tests/queries/1_stateful/00161_parallel_parsing_with_names.reference
new file mode 100644
index 00000000000..fb0ba75c148
--- /dev/null
+++ b/tests/queries/1_stateful/00161_parallel_parsing_with_names.reference
@@ -0,0 +1,8 @@
+TSVWithNames, false
+29caf86494f169d6339f6c5610b20731 -
+TSVWithNames, true
+29caf86494f169d6339f6c5610b20731 -
+CSVWithNames, false
+29caf86494f169d6339f6c5610b20731 -
+CSVWithNames, true
+29caf86494f169d6339f6c5610b20731 -
diff --git a/tests/queries/1_stateful/00161_parallel_parsing_with_names.sh b/tests/queries/1_stateful/00161_parallel_parsing_with_names.sh
new file mode 100755
index 00000000000..ca9984900e1
--- /dev/null
+++ b/tests/queries/1_stateful/00161_parallel_parsing_with_names.sh
@@ -0,0 +1,32 @@
+#!/usr/bin/env bash
+
+CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
+# shellcheck source=../shell_config.sh
+. "$CURDIR"/../shell_config.sh
+
+FORMATS=('TSVWithNames' 'CSVWithNames')
+$CLICKHOUSE_CLIENT -q "DROP TABLE IF EXISTS parsing_with_names"
+
+for format in "${FORMATS[@]}"
+do
+    # Columns are permuted
+    $CLICKHOUSE_CLIENT -q "CREATE TABLE parsing_with_names(c FixedString(16), a DateTime, b String) ENGINE=Memory()"
+
+    echo "$format, false";
+    $CLICKHOUSE_CLIENT --output_format_parallel_formatting=false -q \
+    "SELECT URLRegions as d, ClientEventTime as a, MobilePhoneModel as b, ParamPrice as e, ClientIP6 as c FROM test.hits LIMIT 50000 Format $format" | \
+    $CLICKHOUSE_CLIENT --input_format_skip_unknown_fields=1 --input_format_parallel_parsing=false -q "INSERT INTO parsing_with_names FORMAT $format"
+
+    $CLICKHOUSE_CLIENT -q "SELECT * FROM parsing_with_names;" | md5sum
+    $CLICKHOUSE_CLIENT -q "DROP TABLE IF EXISTS parsing_with_names"
+
+
+    $CLICKHOUSE_CLIENT -q "CREATE TABLE parsing_with_names(c FixedString(16), a DateTime, b String) ENGINE=Memory()"
+    echo "$format, true";
+    $CLICKHOUSE_CLIENT --output_format_parallel_formatting=false -q \
+    "SELECT URLRegions as d, ClientEventTime as a, MobilePhoneModel as b, ParamPrice as e, ClientIP6 as c FROM test.hits LIMIT 50000 Format $format" | \
+    $CLICKHOUSE_CLIENT --input_format_skip_unknown_fields=1 --input_format_parallel_parsing=true -q "INSERT INTO parsing_with_names FORMAT $format"
+
+    $CLICKHOUSE_CLIENT -q "SELECT * FROM parsing_with_names;" | md5sum
+    $CLICKHOUSE_CLIENT -q "DROP TABLE IF EXISTS parsing_with_names"
+done
\ No newline at end of file
diff --git a/tests/queries/1_stateful/00162_mmap_compression_none.reference b/tests/queries/1_stateful/00162_mmap_compression_none.reference
new file mode 100644
index 00000000000..3495cc537c1
--- /dev/null
+++ b/tests/queries/1_stateful/00162_mmap_compression_none.reference
@@ -0,0 +1 @@
+687074654
diff --git a/tests/queries/1_stateful/00162_mmap_compression_none.sql b/tests/queries/1_stateful/00162_mmap_compression_none.sql
new file mode 100644
index 00000000000..2178644214a
--- /dev/null
+++ b/tests/queries/1_stateful/00162_mmap_compression_none.sql
@@ -0,0 +1,8 @@
+DROP TABLE IF EXISTS hits_none;
+CREATE TABLE hits_none (Title String CODEC(NONE)) ENGINE = MergeTree ORDER BY tuple();
+INSERT INTO hits_none SELECT Title FROM test.hits;
+
+SET min_bytes_to_use_mmap_io = 1;
+SELECT sum(length(Title)) FROM hits_none;
+
+DROP TABLE hits_none;
diff --git a/tests/queries/1_stateful/00163_column_oriented_formats.reference b/tests/queries/1_stateful/00163_column_oriented_formats.reference
new file mode 100644
index 00000000000..cb20aca4392
--- /dev/null
+++ b/tests/queries/1_stateful/00163_column_oriented_formats.reference
@@ -0,0 +1,12 @@
+Parquet
+6b397d4643bc1f920f3eb8aa87ee180c -
+7fe6d8c57ddc5fe37bbdcb7f73c5fa78 -
+d8746733270cbeff7ab3550c9b944fb6 -
+Arrow
+6b397d4643bc1f920f3eb8aa87ee180c -
+7fe6d8c57ddc5fe37bbdcb7f73c5fa78 -
+d8746733270cbeff7ab3550c9b944fb6 -
+ORC
+6b397d4643bc1f920f3eb8aa87ee180c -
+7fe6d8c57ddc5fe37bbdcb7f73c5fa78 -
+d8746733270cbeff7ab3550c9b944fb6 -
diff --git a/tests/queries/1_stateful/00163_column_oriented_formats.sh b/tests/queries/1_stateful/00163_column_oriented_formats.sh
new file mode 100755
index 00000000000..1363ccf3c00
--- /dev/null
+++ b/tests/queries/1_stateful/00163_column_oriented_formats.sh
@@ -0,0 +1,20 @@
+#!/usr/bin/env bash
+
+CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
+# shellcheck source=../shell_config.sh
+. "$CURDIR"/../shell_config.sh
+
+
+FORMATS=('Parquet' 'Arrow' 'ORC')
+
+for format in "${FORMATS[@]}"
+do
+    echo $format
+    $CLICKHOUSE_CLIENT -q "DROP TABLE IF EXISTS 00163_column_oriented SYNC"
+    $CLICKHOUSE_CLIENT -q "CREATE TABLE 00163_column_oriented(ClientEventTime DateTime, MobilePhoneModel String, ClientIP6 FixedString(16)) ENGINE=File($format)"
+    $CLICKHOUSE_CLIENT -q "INSERT INTO 00163_column_oriented SELECT ClientEventTime, MobilePhoneModel, ClientIP6 FROM test.hits ORDER BY ClientEventTime, MobilePhoneModel, ClientIP6 LIMIT 100"
+    $CLICKHOUSE_CLIENT -q "SELECT ClientEventTime from 00163_column_oriented" | md5sum
+    $CLICKHOUSE_CLIENT -q "SELECT MobilePhoneModel from 00163_column_oriented" | md5sum
+    $CLICKHOUSE_CLIENT -q "SELECT ClientIP6 from 00163_column_oriented" | md5sum
+    $CLICKHOUSE_CLIENT -q "DROP TABLE IF EXISTS 00163_column_oriented SYNC"
+done
diff --git a/tests/queries/query_test.py b/tests/queries/query_test.py
index 417a51fe523..6ebeccbeeac 100644
--- a/tests/queries/query_test.py
+++ b/tests/queries/query_test.py
@@ -1,5 +1,3 @@
-import pytest
-
 import difflib
 import os
 import random
@@ -7,6 +5,8 @@ import string
 import subprocess
 import sys
+import pytest
+
 
 SKIP_LIST = [
     # these couple of tests hangs everything
@@ -14,44 +14,63 @@ SKIP_LIST = [
     "00987_distributed_stack_overflow",
 
     # just fail
+    "00133_long_shard_memory_tracker_and_exception_safety",
     "00505_secure",
     "00505_shard_secure",
     "00646_url_engine",
     "00725_memory_tracking", # BROKEN
+    "00738_lock_for_inner_table",
+    "00821_distributed_storage_with_join_on",
+    "00825_protobuf_format_array_3dim",
+    "00825_protobuf_format_array_of_arrays",
+    "00825_protobuf_format_enum_mapping",
+    "00825_protobuf_format_nested_in_nested",
+    "00825_protobuf_format_nested_optional",
+    "00825_protobuf_format_no_length_delimiter",
+    "00825_protobuf_format_persons",
+    "00825_protobuf_format_squares",
+    "00825_protobuf_format_table_default",
     "00834_cancel_http_readonly_queries_on_client_close",
+    "00877_memory_limit_for_new_delete",
+    "00900_parquet_load",
     "00933_test_fix_extra_seek_on_compressed_cache",
     "00965_logs_level_bugfix",
     "00965_send_logs_level_concurrent_queries",
+    "00974_query_profiler",
     "00990_hasToken",
    "00990_metric_log_table_not_empty",
     "01014_lazy_database_concurrent_recreate_reattach_and_show_tables",
+    "01017_uniqCombined_memory_usage",
     "01018_Distributed__shard_num",
-    "01018_ip_dictionary",
+    "01018_ip_dictionary_long",
+    "01035_lc_empty_part_bug", # FLAKY
     "01050_clickhouse_dict_source_with_subquery",
     "01053_ssd_dictionary",
     "01054_cache_dictionary_overflow_cell",
     "01057_http_compression_prefer_brotli",
     "01080_check_for_error_incorrect_size_of_nested_column",
     "01083_expressions_in_engine_arguments",
-    # "01086_odbc_roundtrip",
+    "01086_odbc_roundtrip",
     "01088_benchmark_query_id",
+    "01092_memory_profiler",
     "01098_temporary_and_external_tables",
     "01099_parallel_distributed_insert_select",
     "01103_check_cpu_instructions_at_startup",
     "01114_database_atomic",
     "01148_zookeeper_path_macros_unfolding",
+    "01175_distributed_ddl_output_mode_long",
     "01181_db_atomic_drop_on_cluster", # tcp port in reference
     "01280_ssd_complex_key_dictionary",
     "01293_client_interactive_vertical_multiline", # expect-test
     "01293_client_interactive_vertical_singleline", # expect-test
-    "01293_system_distribution_queue", # FLAKY
     "01293_show_clusters",
-    "01294_lazy_database_concurrent_recreate_reattach_and_show_tables",
+    "01293_show_settings",
+    "01293_system_distribution_queue", # FLAKY
+    "01294_lazy_database_concurrent_recreate_reattach_and_show_tables_long",
     "01294_system_distributed_on_cluster",
     "01300_client_save_history_when_terminated", # expect-test
     "01304_direct_io",
     "01306_benchmark_json",
-    "01035_lc_empty_part_bug", # FLAKY
     "01320_create_sync_race_condition_zookeeper",
     "01355_CSV_input_format_allow_errors",
     "01370_client_autocomplete_word_break_characters", # expect-test
@@ -66,18 +85,33 @@
     "01514_distributed_cancel_query_on_error",
     "01520_client_print_query_id", # expect-test
     "01526_client_start_and_exit", # expect-test
+    "01526_max_untracked_memory",
     "01527_dist_sharding_key_dictGet_reload",
+    "01528_play",
     "01545_url_file_format_settings",
     "01553_datetime64_comparison",
     "01555_system_distribution_queue_mask",
     "01558_ttest_scipy",
     "01561_mann_whitney_scipy",
     "01582_distinct_optimization",
+    "01591_window_functions",
     "01599_multiline_input_and_singleline_comments", # expect-test
     "01601_custom_tld",
+    "01606_git_import",
     "01610_client_spawn_editor", # expect-test
+    "01658_read_file_to_stringcolumn",
+    "01666_merge_tree_max_query_limit",
+    "01674_unicode_asan",
     "01676_clickhouse_client_autocomplete", # expect-test (partially)
     "01683_text_log_deadlock", # secure tcp
+    "01684_ssd_cache_dictionary_simple_key",
+    "01685_ssd_cache_dictionary_complex_key",
+    "01746_executable_pool_dictionary",
+    "01747_executable_pool_dictionary_implicit_key",
+    "01747_join_view_filter_dictionary",
+    "01748_dictionary_table_dot",
+    "01754_cluster_all_replicas_shard_num",
+    "01759_optimize_skip_unused_shards_zero_shards",
 ]
diff --git a/tests/queries/shell_config.sh b/tests/queries/shell_config.sh
index d20b5669cc5..ea7fa2e7921 100644
--- a/tests/queries/shell_config.sh
+++ b/tests/queries/shell_config.sh
@@ -5,6 +5,13 @@ export ASAN_OPTIONS=detect_odr_violation=0
 export CLICKHOUSE_DATABASE=${CLICKHOUSE_DATABASE:="test"}
 export CLICKHOUSE_CLIENT_SERVER_LOGS_LEVEL=${CLICKHOUSE_CLIENT_SERVER_LOGS_LEVEL:="warning"}
+
+# Unique zookeeper path (based on test name and current database) to avoid overlaps
+export CLICKHOUSE_TEST_PATH="${BASH_SOURCE[1]}"
+CLICKHOUSE_TEST_NAME="$(basename "$CLICKHOUSE_TEST_PATH" .sh)"
+export CLICKHOUSE_TEST_NAME
+export CLICKHOUSE_TEST_ZOOKEEPER_PREFIX="${CLICKHOUSE_TEST_NAME}_${CLICKHOUSE_DATABASE}"
+
 [ -v CLICKHOUSE_CONFIG_CLIENT ] && CLICKHOUSE_CLIENT_OPT0+=" --config-file=${CLICKHOUSE_CONFIG_CLIENT} "
 [ -v CLICKHOUSE_HOST ] && CLICKHOUSE_CLIENT_OPT0+=" --host=${CLICKHOUSE_HOST} "
 [ -v CLICKHOUSE_PORT_TCP ] && CLICKHOUSE_CLIENT_OPT0+=" --port=${CLICKHOUSE_PORT_TCP} "
@@ -16,14 +23,21 @@ export CLICKHOUSE_CLIENT_SERVER_LOGS_LEVEL=${CLICKHOUSE_CLIENT_SERVER_LOGS_LEVEL
 [ -v CLICKHOUSE_LOG_COMMENT ] && CLICKHOUSE_BENCHMARK_OPT0+=" --log_comment='${CLICKHOUSE_LOG_COMMENT}' "
 export CLICKHOUSE_BINARY=${CLICKHOUSE_BINARY:="clickhouse"}
+# client
 [ -x "$CLICKHOUSE_BINARY-client" ] && CLICKHOUSE_CLIENT_BINARY=${CLICKHOUSE_CLIENT_BINARY:=$CLICKHOUSE_BINARY-client}
 [ -x "$CLICKHOUSE_BINARY" ] && CLICKHOUSE_CLIENT_BINARY=${CLICKHOUSE_CLIENT_BINARY:=$CLICKHOUSE_BINARY client}
 export CLICKHOUSE_CLIENT_BINARY=${CLICKHOUSE_CLIENT_BINARY:=$CLICKHOUSE_BINARY-client}
 export CLICKHOUSE_CLIENT_OPT="${CLICKHOUSE_CLIENT_OPT0:-} ${CLICKHOUSE_CLIENT_OPT:-}"
 export CLICKHOUSE_CLIENT=${CLICKHOUSE_CLIENT:="$CLICKHOUSE_CLIENT_BINARY ${CLICKHOUSE_CLIENT_OPT:-}"}
+# local
 [ -x "${CLICKHOUSE_BINARY}-local" ] && CLICKHOUSE_LOCAL=${CLICKHOUSE_LOCAL:="${CLICKHOUSE_BINARY}-local"}
 [ -x "${CLICKHOUSE_BINARY}" ] && CLICKHOUSE_LOCAL=${CLICKHOUSE_LOCAL:="${CLICKHOUSE_BINARY} local"}
export CLICKHOUSE_LOCAL=${CLICKHOUSE_LOCAL:="${CLICKHOUSE_BINARY}-local"} +# server +[ -x "${CLICKHOUSE_BINARY}-server" ] && CLICKHOUSE_SERVER_BINARY=${CLICKHOUSE_SERVER_BINARY:="${CLICKHOUSE_BINARY}-server"} +[ -x "${CLICKHOUSE_BINARY}" ] && CLICKHOUSE_SERVER_BINARY=${CLICKHOUSE_SERVER_BINARY:="${CLICKHOUSE_BINARY} server"} +export CLICKHOUSE_SERVER_BINARY=${CLICKHOUSE_SERVER_BINARY:="${CLICKHOUSE_BINARY}-server"} +# others export CLICKHOUSE_OBFUSCATOR=${CLICKHOUSE_OBFUSCATOR:="${CLICKHOUSE_BINARY}-obfuscator"} export CLICKHOUSE_COMPRESSOR=${CLICKHOUSE_COMPRESSOR:="${CLICKHOUSE_BINARY}-compressor"} export CLICKHOUSE_BENCHMARK=${CLICKHOUSE_BENCHMARK:="${CLICKHOUSE_BINARY}-benchmark ${CLICKHOUSE_BENCHMARK_OPT0:-}"} @@ -56,6 +70,8 @@ export CLICKHOUSE_PORT_HTTPS=${CLICKHOUSE_PORT_HTTPS:="8443"} export CLICKHOUSE_PORT_HTTP_PROTO=${CLICKHOUSE_PORT_HTTP_PROTO:="http"} export CLICKHOUSE_PORT_MYSQL=${CLICKHOUSE_PORT_MYSQL:=$(${CLICKHOUSE_EXTRACT_CONFIG} --try --key=mysql_port 2>/dev/null)} 2>/dev/null export CLICKHOUSE_PORT_MYSQL=${CLICKHOUSE_PORT_MYSQL:="9004"} +export CLICKHOUSE_PORT_POSTGRESQL=${CLICKHOUSE_PORT_POSTGRESQL:=$(${CLICKHOUSE_EXTRACT_CONFIG} --try --key=postgresql_port 2>/dev/null)} 2>/dev/null +export CLICKHOUSE_PORT_POSTGRESQL=${CLICKHOUSE_PORT_POSTGRESQL:="9005"} # Add database and log comment to url params if [ -v CLICKHOUSE_URL_PARAMS ] diff --git a/tests/queries/skip_list.json b/tests/queries/skip_list.json index 9bb3dd21b70..5cc2c817fdd 100644 --- a/tests/queries/skip_list.json +++ b/tests/queries/skip_list.json @@ -45,6 +45,7 @@ "capnproto", "query_profiler", "memory_profiler", + "01175_distributed_ddl_output_mode_long", /// issue 21600 "01103_check_cpu_instructions_at_startup", "01086_odbc_roundtrip", /// can't pass because odbc libraries are not instrumented "00877_memory_limit_for_new_delete", /// memory limits don't work correctly under msan because it replaces malloc/free @@ -95,7 +96,8 @@ "01370_client_autocomplete_word_break_characters", "01676_clickhouse_client_autocomplete", "01193_metadata_loading", - "01455_time_zones" + "01455_time_zones", + "01755_client_highlight_multi_line_comment_regression" ], "release-build": [ ], @@ -103,167 +105,69 @@ "00604_show_create_database", "00609_mv_index_in_in", "00510_materizlized_view_and_deduplication_zookeeper", - "00738_lock_for_inner_table" + "00738_lock_for_inner_table", + "01153_attach_mv_uuid" ], "database-replicated": [ - /// Tests with DETACH TABLE (it's not allowed) - /// and tests with SET (session and query settings are not supported) "memory_tracking", "memory_usage", "live_view", - "01720_type_map_and_casts", - "01413_alter_update_supertype", - "01149_zookeeper_mutation_stuck_after_replace_partition", - "00836_indices_alter_replicated_zookeeper", - "00652_mutations_alter_update", - "01715_tuple_insert_null_as_default", - "00825_protobuf_format_map", - "00152_insert_different_granularity", - "01715_background_checker_blather_zookeeper", - "01714_alter_drop_version", - "01114_materialize_clear_index_compact_parts", - "00814_replicated_minimalistic_part_header_zookeeper", - "01188_attach_table_from_pat", - "01415_sticking_mutations", - "01130_in_memory_parts", - "01110_dictionary_layout_without_arguments", - "01018_ddl_dictionaries_create", - "01018_ddl_dictionaries_select", - "01414_freeze_does_not_prevent_alters", - "01018_ddl_dictionaries_bad_queries", - "01686_rocksdb", - "01550_mutation_subquery", - "01070_mutations_with_dependencies", - "01070_materialize_ttl", - "01055_compact_parts", - 
"01017_mutations_with_nondeterministic_functions_zookeeper", - "00926_adaptive_index_granularity_pk", - "00910_zookeeper_test_alter_compression_codecs", - "00908_bloom_filter_index", - "00616_final_single_part", - "00446_clear_column_in_partition_zookeeper", - "01533_multiple_nested", - "01213_alter_rename_column_zookeeper", - "01575_disable_detach_table_of_dictionary", - "01457_create_as_table_function_structure", - "01415_inconsistent_merge_tree_settings", - "01413_allow_non_metadata_alters", - "01378_alter_rename_with_ttl_zookeeper", - "01349_mutation_datetime_key", - "01325_freeze_mutation_stuck", - "01272_suspicious_codecs", "01181_db_atomic_drop_on_cluster", - "00957_delta_diff_bug", - "00910_zookeeper_custom_compression_codecs_replicated", - "00899_long_attach_memory_limit", - "00804_test_custom_compression_codes_log_storages", - "00804_test_alter_compression_codecs", - "00804_test_delta_codec_no_type_alter", - "00804_test_custom_compression_codecs", - "00753_alter_attach", - "00715_fetch_merged_or_mutated_part_zookeeper", - "00688_low_cardinality_serialization", - "01575_disable_detach_table_of_dictionary", - "00738_lock_for_inner_table", - "01666_blns", - "01652_ignore_and_low_cardinality", - "01651_map_functions", - "01650_fetch_patition_with_macro_in_zk_path", - "01648_mutations_and_escaping", - "01640_marks_corruption_regression", - "01622_byte_size", - "01611_string_to_low_cardinality_key_alter", - "01602_show_create_view", - "01600_log_queries_with_extensive_info", - "01560_ttl_remove_empty_parts", - "01554_bloom_filter_index_big_integer_uuid", - "01550_type_map_formats_input", - "01550_type_map_formats", - "01550_create_map_type", - "01532_primary_key_without_order_by_zookeeper", - "01511_alter_version_versioned_collapsing_merge_tree_zookeeper", - "01509_parallel_quorum_insert_no_replicas", - "01504_compression_multiple_streams", - "01494_storage_join_persistency", - "01493_storage_set_persistency", - "01493_alter_remove_properties_zookeeper", - "01475_read_subcolumns_storages", - "01475_read_subcolumns", - "01451_replicated_detach_drop_part", - "01451_detach_drop_part", - "01440_big_int_exotic_casts", - "01430_modify_sample_by_zookeeper", - "01417_freeze_partition_verbose_zookeeper", - "01417_freeze_partition_verbose", - "01396_inactive_replica_cleanup_nodes_zookeeper", - "01375_compact_parts_codecs", - "01357_version_collapsing_attach_detach_zookeeper", - "01355_alter_column_with_order", - "01291_geo_types", - "01270_optimize_skip_unused_shards_low_cardinality", - "01182_materialized_view_different_structure", - "01150_ddl_guard_rwr", - "01148_zookeeper_path_macros_unfolding", - "01135_default_and_alter_zookeeper", - "01130_in_memory_parts_partitons", - "01127_month_partitioning_consistency_select", - "01114_database_atomic", - "01083_expressions_in_engine_arguments", - "01073_attach_if_not_exists", - "01072_optimize_skip_unused_shards_const_expr_eval", - "01071_prohibition_secondary_index_with_old_format_merge_tree", - "01062_alter_on_mutataion_zookeeper", - "01060_shutdown_table_after_detach", - "01056_create_table_as", - "01035_avg", - "01021_only_tuple_columns", - "01019_alter_materialized_view_query", - "01019_alter_materialized_view_consistent", - "01019_alter_materialized_view_atomic", - "01015_attach_part", - "00989_parallel_parts_loading", + "01175_distributed_ddl_output_mode", + "01415_sticking_mutations", "00980_zookeeper_merge_tree_alter_settings", - "00980_merge_alter_settings", + "01148_zookeeper_path_macros_unfolding", + 
"01294_system_distributed_on_cluster", + "01269_create_with_null", + "01451_replicated_detach_drop_and_quorum", + "01188_attach_table_from_path", + "01149_zookeeper_mutation_stuck_after_replace_partition", + /// user_files + "01721_engine_file_truncate_on_insert", + /// Fails due to additional replicas or shards + "quorum", + "01650_drop_part_and_deduplication_zookeeper", + "01532_execute_merges_on_single_replica", + "00652_replicated_mutations_default_database_zookeeper", + "00620_optimize_on_nonleader_replica_zookeeper", + /// grep -c + "01018_ddl_dictionaries_bad_queries", + "00908_bloom_filter_index", + /// Unsupported type of ALTER query + "01650_fetch_patition_with_macro_in_zk_path", + "01451_detach_drop_part", + "01451_replicated_detach_drop_part", + "01417_freeze_partition_verbose", + "01417_freeze_partition_verbose_zookeeper", + "01130_in_memory_parts_partitons", + "01060_shutdown_table_after_detach", + "01021_only_tuple_columns", + "01015_attach_part", "00955_test_final_mark", - "00933_reserved_word", - "00926_zookeeper_adaptive_index_granularity_replicated_merge_tree", - "00926_adaptive_index_granularity_replacing_merge_tree", - "00926_adaptive_index_granularity_merge_tree", - "00925_zookeeper_empty_replicated_merge_tree_optimize_final", - "00800_low_cardinality_distinct_numeric", - "00754_alter_modify_order_by_replicated_zookeeper", - "00751_low_cardinality_nullable_group_by", - "00751_default_databasename_for_view", - "00719_parallel_ddl_table", - "00718_low_cardinaliry_alter", - "00717_low_cardinaliry_distributed_group_by", - "00688_low_cardinality_syntax", - "00688_low_cardinality_nullable_cast", - "00688_low_cardinality_in", - "00652_replicated_mutations_zookeeper", - "00634_rename_view", + "00753_alter_attach", + "00626_replace_partition_from_table_zookeeper", "00626_replace_partition_from_table", - "00625_arrays_in_nested", + "00152_insert_different_granularity", + "00054_merge_tree_partitions", + "01781_merge_tree_deduplication", + /// Old syntax is not allowed + "01062_alter_on_mutataion_zookeeper", + "00925_zookeeper_empty_replicated_merge_tree_optimize_final", + "00754_alter_modify_order_by_replicated_zookeeper", + "00652_replicated_mutations_zookeeper", "00623_replicated_truncate_table_zookeeper", - "00619_union_highlite", - "00599_create_view_with_subquery", - "00571_non_exist_database_when_create_materializ_view", - "00553_buff_exists_materlized_column", "00516_deduplication_after_drop_partition_zookeeper", - "00508_materialized_view_to", "00446_clear_column_in_partition_concurrent_zookeeper", - "00423_storage_log_single_thread", - "00311_array_primary_key", "00236_replicated_drop_on_non_leader_zookeeper", "00226_zookeeper_deduplication_and_unexpected_parts", "00215_primary_key_order_zookeeper", - "00180_attach_materialized_view", "00121_drop_column_zookeeper", - "00116_storage_set", "00083_create_merge_tree_zookeeper", "00062_replicated_merge_tree_alter_zookeeper", - "01720_constraints_complex_types", - "01747_alter_partition_key_enum_zookeeper" + /// Does not support renaming of multiple tables in single query + "00634_rename_view", + "00140_rename", + "01783_http_chunk_size" ], "polymorphic-parts": [ "01508_partition_pruning_long", /// bug, shoud be fixed @@ -278,6 +182,7 @@ "00534_functions_bad_arguments4", "00534_functions_bad_arguments9", "00564_temporary_table_management", + "00600_replace_running_query", "00626_replace_partition_from_table_zookeeper", "00652_replicated_mutations_zookeeper", "00687_top_and_offset", @@ -306,7 +211,7 @@ 
"00954_client_prepared_statements", "00956_sensitive_data_masking", "00969_columns_clause", - "00975_indices_mutation_replicated_zookeeper", + "00975_indices_mutation_replicated_zookeeper_long", "00975_values_list", "00976_system_stop_ttl_merges", "00977_int_div", @@ -395,7 +300,7 @@ "01293_pretty_max_value_width", "01293_show_settings", "01294_create_settings_profile", - "01294_lazy_database_concurrent_recreate_reattach_and_show_tables", + "01294_lazy_database_concurrent_recreate_reattach_and_show_tables_long", "01295_create_row_policy", "01296_create_row_policy_in_current_database", "01297_create_quota", @@ -441,8 +346,8 @@ "01504_compression_multiple_streams", "01508_explain_header", "01508_partition_pruning_long", - "01509_check_parallel_quorum_inserts", - "01509_parallel_quorum_and_merge", + "01509_check_parallel_quorum_inserts_long", + "01509_parallel_quorum_and_merge_long", "01515_mv_and_array_join_optimisation_bag", "01516_create_table_primary_key", "01517_drop_mv_with_inner_table", @@ -486,7 +391,11 @@ "01655_plan_optimizations", "01475_read_subcolumns_storages", "01674_clickhouse_client_query_param_cte", - "01666_merge_tree_max_query_limit" + "01666_merge_tree_max_query_limit", + "01786_explain_merge_tree", + "01666_merge_tree_max_query_limit", + "01802_test_postgresql_protocol_with_row_policy", /// It cannot parse DROP ROW POLICY + "01823_explain_json" ], "parallel": [ @@ -521,6 +430,7 @@ "00571_non_exist_database_when_create_materializ_view", "00575_illegal_column_exception_when_drop_depen_column", "00599_create_view_with_subquery", + "00600_replace_running_query", "00604_show_create_database", "00612_http_max_query_size", "00619_union_highlite", @@ -572,12 +482,14 @@ "00933_test_fix_extra_seek_on_compressed_cache", "00933_ttl_replicated_zookeeper", "00933_ttl_with_default", + "00950_dict_get", "00955_test_final_mark", "00976_ttl_with_old_parts", "00980_merge_alter_settings", "00980_zookeeper_merge_tree_alter_settings", "00988_constraints_replication_zookeeper", "00989_parallel_parts_loading", + "00992_system_parts_race_condition_zookeeper_long", "00993_system_parts_race_condition_drop_zookeeper", "01012_show_tables_limit", "01013_sync_replica_timeout_zookeeper", @@ -590,7 +502,7 @@ "01018_ddl_dictionaries_select", "01018_ddl_dictionaries_special", "01018_dictionaries_from_dictionaries", - "01018_ip_dictionary", + "01018_ip_dictionary_long", "01021_only_tuple_columns", "01023_materialized_view_query_context", "01031_mutations_interpreter_and_context", @@ -650,6 +562,8 @@ "01135_default_and_alter_zookeeper", "01148_zookeeper_path_macros_unfolding", "01150_ddl_guard_rwr", + "01153_attach_mv_uuid", + "01152_cross_replication", "01185_create_or_replace_table", "01190_full_attach_syntax", "01191_rename_dictionary", @@ -679,7 +593,7 @@ "01281_unsucceeded_insert_select_queries_counter", "01293_system_distribution_queue", "01294_lazy_database_concurrent", - "01294_lazy_database_concurrent_recreate_reattach_and_show_tables", + "01294_lazy_database_concurrent_recreate_reattach_and_show_tables_long", "01294_system_distributed_on_cluster", "01296_create_row_policy_in_current_database", "01297_create_quota", @@ -734,6 +648,9 @@ "01530_drop_database_atomic_sync", "01541_max_memory_usage_for_user_long", "01542_dictionary_load_exception_race", + "01545_system_errors", // looks at the difference of values in system.errors + "01560_optimize_on_insert_zookeeper", + "01563_distributed_query_finish", // looks at system.errors which is global "01575_disable_detach_table_of_dictionary", 
"01593_concurrent_alter_mutations_kill", "01593_concurrent_alter_mutations_kill_many_replicas", @@ -744,13 +661,30 @@ "01601_detach_permanently", "01602_show_create_view", "01603_rename_overwrite_bug", + "01666_blns", "01646_system_restart_replicas_smoke", // system restart replicas is a global query "01656_test_query_log_factories_info", + "01658_read_file_to_stringcolumn", "01669_columns_declaration_serde", "01676_dictget_in_default_expression", + "01681_cache_dictionary_simple_key", + "01682_cache_dictionary_complex_key", + "01683_flat_dictionary", + "01684_ssd_cache_dictionary_simple_key", + "01685_ssd_cache_dictionary_complex_key", "01700_system_zookeeper_path_in", + "01702_system_query_log", // It's ok to execute in parallel with oter tests but not several instances of the same test. + "01702_system_query_log", // Runs many global system queries "01715_background_checker_blather_zookeeper", + "01721_engine_file_truncate_on_insert", // It's ok to execute in parallel but not several instances of the same test. + "01722_long_brotli_http_compression_json_format", // It is broken in some unimaginable way with the genius error 'cannot write to ofstream'. Not sure how to debug this "01747_alter_partition_key_enum_zookeeper", + "01748_dictionary_table_dot", // creates database + "01760_polygon_dictionaries", + "01760_system_dictionaries", + "01761_alter_decimal_zookeeper", + "01360_materialized_view_with_join_on_query_log", // creates and drops MVs on query_log, which may interrupt flushes. + "01509_parallel_quorum_insert_no_replicas", // It's ok to execute in parallel with oter tests but not several instances of the same test. "attach", "ddl_dictionaries", "dictionary", @@ -761,12 +695,22 @@ "polygon_dicts", // they use an explicitly specified database "01658_read_file_to_stringcolumn", "01721_engine_file_truncate_on_insert", // It's ok to execute in parallel but not several instances of the same test. + "01702_system_query_log", // It's ok to execute in parallel with oter tests but not several instances of the same test. "01748_dictionary_table_dot", // creates database "00950_dict_get", "01683_flat_dictionary", "01681_cache_dictionary_simple_key", "01682_cache_dictionary_complex_key", "01684_ssd_cache_dictionary_simple_key", - "01685_ssd_cache_dictionary_complex_key" + "01685_ssd_cache_dictionary_complex_key", + "01737_clickhouse_server_wait_server_pool_long", // This test is fully compatible to run in parallel, however under ASAN processes are pretty heavy and may fail under flaky adress check. + "01760_system_dictionaries", + "01760_polygon_dictionaries", + "01778_hierarchical_dictionaries", + "01780_clickhouse_dictionary_source_loop", + "01785_dictionary_element_count", + "01802_test_postgresql_protocol_with_row_policy", /// Creates database and users + "01804_dictionary_decimal256_type", + "01850_dist_INSERT_preserve_error" // uses cluster with different static databases shard_0/shard_1 ] } diff --git a/tests/server-test.xml b/tests/server-test.xml index 0b5e8f760a8..dd21d55c78c 100644 --- a/tests/server-test.xml +++ b/tests/server-test.xml @@ -140,4 +140,4 @@ [hidden]
-</yandex>
\ No newline at end of file
+</yandex>
diff --git a/tests/testflows/example/configs/clickhouse/config.xml b/tests/testflows/example/configs/clickhouse/config.xml
index d34d2c35253..beeeafa5704 100644
--- a/tests/testflows/example/configs/clickhouse/config.xml
+++ b/tests/testflows/example/configs/clickhouse/config.xml
@@ -406,7 +406,7 @@
     <max_session_timeout>86400</max_session_timeout>
-    <default_session_timeout>60</default_session_timeout>
+    <default_session_timeout>7200</default_session_timeout>
diff --git a/tests/testflows/ldap/external_user_directory/configs/clickhouse/config.xml b/tests/testflows/ldap/external_user_directory/configs/clickhouse/config.xml
index e28a0c8e255..3db8338b865 100644
--- a/tests/testflows/ldap/external_user_directory/configs/clickhouse/config.xml
+++ b/tests/testflows/ldap/external_user_directory/configs/clickhouse/config.xml
@@ -412,7 +412,7 @@
     <max_session_timeout>86400</max_session_timeout>
-    <default_session_timeout>60</default_session_timeout>
+    <default_session_timeout>7200</default_session_timeout>
diff --git a/tests/testflows/ldap/role_mapping/configs/clickhouse/config.xml b/tests/testflows/ldap/role_mapping/configs/clickhouse/config.xml
index e28a0c8e255..3db8338b865 100644
--- a/tests/testflows/ldap/role_mapping/configs/clickhouse/config.xml
+++ b/tests/testflows/ldap/role_mapping/configs/clickhouse/config.xml
@@ -412,7 +412,7 @@
     <max_session_timeout>86400</max_session_timeout>
-    <default_session_timeout>60</default_session_timeout>
+    <default_session_timeout>7200</default_session_timeout>
diff --git a/tests/testflows/map_type/__init__.py b/tests/testflows/map_type/__init__.py
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/tests/testflows/map_type/configs/clickhouse/config.d/logs.xml b/tests/testflows/map_type/configs/clickhouse/config.d/logs.xml
new file mode 100644
index 00000000000..e5077af3f49
--- /dev/null
+++ b/tests/testflows/map_type/configs/clickhouse/config.d/logs.xml
@@ -0,0 +1,16 @@
+<yandex>
+    <logger>
+        <level>trace</level>
+        <log>/var/log/clickhouse-server/log.log</log>
+        <errorlog>/var/log/clickhouse-server/log.err.log</errorlog>
+        <size>1000M</size>
+        <count>10</count>
+        <stderr>/var/log/clickhouse-server/stderr.log</stderr>
+        <stdout>/var/log/clickhouse-server/stdout.log</stdout>
+    </logger>
+    <part_log>
+        <database>system</database>
+        <table>part_log</table>
+        <flush_interval_milliseconds>500</flush_interval_milliseconds>
+    </part_log>
+</yandex>
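For orientation, the part_log section above enables the system.part_log table: the server records part creations, merges, and removals there and flushes them roughly every 500 ms, per the interval configured above. A minimal sketch of how a test could inspect it (the column set follows the standard system.part_log schema):

```sql
-- Recent part events; with the 500 ms flush interval above, rows should
-- appear in system.part_log shortly after an INSERT into any MergeTree table.
SELECT event_type, table, part_name, rows
FROM system.part_log
WHERE event_date = today()
ORDER BY event_time DESC
LIMIT 10;
```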
diff --git a/tests/testflows/map_type/configs/clickhouse/config.d/macros.xml b/tests/testflows/map_type/configs/clickhouse/config.d/macros.xml new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/testflows/map_type/configs/clickhouse/config.d/remote.xml b/tests/testflows/map_type/configs/clickhouse/config.d/remote.xml new file mode 100644 index 00000000000..b7d02ceeec1 --- /dev/null +++ b/tests/testflows/map_type/configs/clickhouse/config.d/remote.xml @@ -0,0 +1,42 @@ + + + + + + true + + clickhouse1 + 9000 + + + clickhouse2 + 9000 + + + clickhouse3 + 9000 + + + + + + + clickhouse1 + 9000 + + + + + clickhouse2 + 9000 + + + + + clickhouse3 + 9000 + + + + + diff --git a/tests/testflows/map_type/configs/clickhouse/config.d/zookeeper.xml b/tests/testflows/map_type/configs/clickhouse/config.d/zookeeper.xml new file mode 100644 index 00000000000..96270e7b645 --- /dev/null +++ b/tests/testflows/map_type/configs/clickhouse/config.d/zookeeper.xml @@ -0,0 +1,10 @@ + + + + + zookeeper + 2181 + + 15000 + + diff --git a/tests/testflows/map_type/configs/clickhouse/config.xml b/tests/testflows/map_type/configs/clickhouse/config.xml new file mode 100644 index 00000000000..4ec12232539 --- /dev/null +++ b/tests/testflows/map_type/configs/clickhouse/config.xml @@ -0,0 +1,448 @@ + + + + + + trace + /var/log/clickhouse-server/clickhouse-server.log + /var/log/clickhouse-server/clickhouse-server.err.log + 1000M + 10 + + + + 8123 + 9000 + + + + + + + + + /etc/clickhouse-server/server.crt + /etc/clickhouse-server/server.key + + /etc/clickhouse-server/dhparam.pem + none + true + true + sslv2,sslv3 + true + + + + true + true + sslv2,sslv3 + true + + + + RejectCertificateHandler + + + + + + + + + 9009 + + + + + + + + 0.0.0.0 + + + + + + + + + + + + 4096 + 3 + + + 100 + + + + + + 8589934592 + + + 5368709120 + + + + /var/lib/clickhouse/ + + + /var/lib/clickhouse/tmp/ + + + /var/lib/clickhouse/user_files/ + + + /var/lib/clickhouse/access/ + + + + + + users.xml + + + + /var/lib/clickhouse/access/ + + + + + users.xml + + + default + + + + + + default + + + + + + + + + false + + + + + + + + localhost + 9000 + + + + + + + localhost + 9000 + + + + + localhost + 9000 + + + + + + + localhost + 9440 + 1 + + + + + + + localhost + 9000 + + + + + localhost + 1 + + + + + + + + + + + + + + + + + 3600 + + + + 3600 + + + 60 + + + + + + + + + + system + query_log
+ + toYYYYMM(event_date) + + 7500 +
+ + + + system + trace_log
+ + toYYYYMM(event_date) + 7500 +
+ + + + system + query_thread_log
+ toYYYYMM(event_date) + 7500 +
+ + + + + + + + + + + + + + + + *_dictionary.xml + + + + + + + + + + /clickhouse/task_queue/ddl + + + + + + + + + + + + + + + + click_cost + any + + 0 + 3600 + + + 86400 + 60 + + + + max + + 0 + 60 + + + 3600 + 300 + + + 86400 + 3600 + + + + + + /var/lib/clickhouse/format_schemas/ + + + +
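Read together with remote.xml above, this server config gives every node an identical cluster view plus per-node macros from each clickhouse{1,2,3} macros.xml. As a minimal sketch of how a test could verify that a started node picked these up, using the standard system.clusters and system.macros tables (the exact cluster names come from remote.xml and are not shown in this listing):

```sql
-- One row per (cluster, shard, replica) defined in remote.xml is expected,
-- plus this node's shard/replica macros from its macros.xml.
SELECT cluster, shard_num, replica_num, host_name FROM system.clusters;
SELECT macro, substitution FROM system.macros;
```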
diff --git a/tests/testflows/map_type/configs/clickhouse/users.xml b/tests/testflows/map_type/configs/clickhouse/users.xml new file mode 100644 index 00000000000..86b2cd9e1e3 --- /dev/null +++ b/tests/testflows/map_type/configs/clickhouse/users.xml @@ -0,0 +1,133 @@ + + + + + + + + 10000000000 + + + 0 + + + random + + + + + 1 + + + + + + + + + + + + + ::/0 + + + + default + + + default + + + 1 + + + + + + + + + + + + + + + + + 3600 + + + 0 + 0 + 0 + 0 + 0 + + + + diff --git a/tests/testflows/map_type/configs/clickhouse1/config.d/macros.xml b/tests/testflows/map_type/configs/clickhouse1/config.d/macros.xml new file mode 100644 index 00000000000..6cdcc1b440c --- /dev/null +++ b/tests/testflows/map_type/configs/clickhouse1/config.d/macros.xml @@ -0,0 +1,8 @@ + + + + clickhouse1 + 01 + 01 + + diff --git a/tests/testflows/map_type/configs/clickhouse2/config.d/macros.xml b/tests/testflows/map_type/configs/clickhouse2/config.d/macros.xml new file mode 100644 index 00000000000..a114a9ce4ab --- /dev/null +++ b/tests/testflows/map_type/configs/clickhouse2/config.d/macros.xml @@ -0,0 +1,8 @@ + + + + clickhouse2 + 01 + 02 + + diff --git a/tests/testflows/map_type/configs/clickhouse3/config.d/macros.xml b/tests/testflows/map_type/configs/clickhouse3/config.d/macros.xml new file mode 100644 index 00000000000..904a27b0172 --- /dev/null +++ b/tests/testflows/map_type/configs/clickhouse3/config.d/macros.xml @@ -0,0 +1,8 @@ + + + + clickhouse3 + 01 + 03 + + diff --git a/tests/testflows/map_type/docker-compose/clickhouse-service.yml b/tests/testflows/map_type/docker-compose/clickhouse-service.yml new file mode 100755 index 00000000000..fdd4a8057a9 --- /dev/null +++ b/tests/testflows/map_type/docker-compose/clickhouse-service.yml @@ -0,0 +1,27 @@ +version: '2.3' + +services: + clickhouse: + image: yandex/clickhouse-integration-test + expose: + - "9000" + - "9009" + - "8123" + volumes: + - "${CLICKHOUSE_TESTS_DIR}/configs/clickhouse/config.d:/etc/clickhouse-server/config.d" + - "${CLICKHOUSE_TESTS_DIR}/configs/clickhouse/users.d:/etc/clickhouse-server/users.d" + - "${CLICKHOUSE_TESTS_DIR}/configs/clickhouse/config.xml:/etc/clickhouse-server/config.xml" + - "${CLICKHOUSE_TESTS_DIR}/configs/clickhouse/users.xml:/etc/clickhouse-server/users.xml" + - "${CLICKHOUSE_TESTS_SERVER_BIN_PATH:-/usr/bin/clickhouse}:/usr/bin/clickhouse" + - "${CLICKHOUSE_TESTS_ODBC_BRIDGE_BIN_PATH:-/usr/bin/clickhouse-odbc-bridge}:/usr/bin/clickhouse-odbc-bridge" + entrypoint: bash -c "clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log" + healthcheck: + test: clickhouse client --query='select 1' + interval: 10s + timeout: 10s + retries: 3 + start_period: 300s + cap_add: + - SYS_PTRACE + security_opt: + - label:disable diff --git a/tests/testflows/map_type/docker-compose/docker-compose.yml b/tests/testflows/map_type/docker-compose/docker-compose.yml new file mode 100755 index 00000000000..29f2ef52470 --- /dev/null +++ b/tests/testflows/map_type/docker-compose/docker-compose.yml @@ -0,0 +1,60 @@ +version: '2.3' + +services: + zookeeper: + extends: + file: zookeeper-service.yml + service: zookeeper + + clickhouse1: + extends: + file: clickhouse-service.yml + service: clickhouse + hostname: clickhouse1 + volumes: + - "${CLICKHOUSE_TESTS_DIR}/_instances/clickhouse1/database/:/var/lib/clickhouse/" + - "${CLICKHOUSE_TESTS_DIR}/_instances/clickhouse1/logs/:/var/log/clickhouse-server/" + - 
"${CLICKHOUSE_TESTS_DIR}/configs/clickhouse1/config.d/macros.xml:/etc/clickhouse-server/config.d/macros.xml" + depends_on: + zookeeper: + condition: service_healthy + + clickhouse2: + extends: + file: clickhouse-service.yml + service: clickhouse + hostname: clickhouse2 + volumes: + - "${CLICKHOUSE_TESTS_DIR}/_instances/clickhouse2/database/:/var/lib/clickhouse/" + - "${CLICKHOUSE_TESTS_DIR}/_instances/clickhouse2/logs/:/var/log/clickhouse-server/" + - "${CLICKHOUSE_TESTS_DIR}/configs/clickhouse2/config.d/macros.xml:/etc/clickhouse-server/config.d/macros.xml" + depends_on: + zookeeper: + condition: service_healthy + + clickhouse3: + extends: + file: clickhouse-service.yml + service: clickhouse + hostname: clickhouse3 + volumes: + - "${CLICKHOUSE_TESTS_DIR}/_instances/clickhouse3/database/:/var/lib/clickhouse/" + - "${CLICKHOUSE_TESTS_DIR}/_instances/clickhouse3/logs/:/var/log/clickhouse-server/" + - "${CLICKHOUSE_TESTS_DIR}/configs/clickhouse3/config.d/macros.xml:/etc/clickhouse-server/config.d/macros.xml" + depends_on: + zookeeper: + condition: service_healthy + + # dummy service which does nothing, but allows to postpone + # 'docker-compose up -d' till all dependecies will go healthy + all_services_ready: + image: hello-world + depends_on: + clickhouse1: + condition: service_healthy + clickhouse2: + condition: service_healthy + clickhouse3: + condition: service_healthy + zookeeper: + condition: service_healthy diff --git a/tests/testflows/map_type/docker-compose/zookeeper-service.yml b/tests/testflows/map_type/docker-compose/zookeeper-service.yml new file mode 100755 index 00000000000..f3df33358be --- /dev/null +++ b/tests/testflows/map_type/docker-compose/zookeeper-service.yml @@ -0,0 +1,18 @@ +version: '2.3' + +services: + zookeeper: + image: zookeeper:3.4.12 + expose: + - "2181" + environment: + ZOO_TICK_TIME: 500 + ZOO_MY_ID: 1 + healthcheck: + test: echo stat | nc localhost 2181 + interval: 3s + timeout: 2s + retries: 5 + start_period: 2s + security_opt: + - label:disable diff --git a/tests/testflows/map_type/regression.py b/tests/testflows/map_type/regression.py new file mode 100755 index 00000000000..54d713347c6 --- /dev/null +++ b/tests/testflows/map_type/regression.py @@ -0,0 +1,121 @@ +#!/usr/bin/env python3 +import sys + +from testflows.core import * + +append_path(sys.path, "..") + +from helpers.cluster import Cluster +from helpers.argparser import argparser +from map_type.requirements import SRS018_ClickHouse_Map_Data_Type + +xfails = { + "tests/table map with key integer/Int:": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/21032")], + "tests/table map with value integer/Int:": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/21032")], + "tests/table map with key integer/UInt256": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/21031")], + "tests/table map with value integer/UInt256": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/21031")], + "tests/select map with key integer/Int64": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/21030")], + "tests/select map with value integer/Int64": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/21030")], + "tests/cast tuple of two arrays to map/string -> int": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/21029")], + "tests/mapcontains/null key in map": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/21028")], + "tests/mapcontains/null key not in map": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/21028")], + "tests/mapkeys/null key 
not in map": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/21028")], + "tests/mapkeys/null key in map": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/21028")], + "tests/mapcontains/select nullable key": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/21026")], + "tests/mapkeys/select keys from column": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/21026")], + "tests/table map select key with value string/LowCardinality:": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/21406")], + "tests/table map select key with key string/FixedString": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/21406")], + "tests/table map select key with key string/Nullable": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/21406")], + "tests/table map select key with key string/Nullable(NULL)": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/21026")], + "tests/table map select key with key string/LowCardinality:": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/21406")], + "tests/table map select key with key integer/Int:": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/21032")], + "tests/table map select key with key integer/UInt256": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/21031")], + "tests/table map select key with key integer/toNullable": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/21406")], + "tests/table map select key with key integer/toNullable(NULL)": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/21026")], + "tests/select map with key integer/Int128": + [(Fail, "large Int128 as key not supported")], + "tests/select map with key integer/Int256": + [(Fail, "large Int256 as key not supported")], + "tests/select map with key integer/UInt256": + [(Fail, "large UInt256 as key not supported")], + "tests/select map with key integer/toNullable": + [(Fail, "Nullable type as key not supported")], + "tests/select map with key integer/toNullable(NULL)": + [(Fail, "Nullable type as key not supported")], + "tests/select map with key string/Nullable": + [(Fail, "Nullable type as key not supported")], + "tests/select map with key string/Nullable(NULL)": + [(Fail, "Nullable type as key not supported")], + "tests/table map queries/select map with nullable value": + [(Fail, "Nullable value not supported")], + "tests/table map with key integer/toNullable": + [(Fail, "Nullable type as key not supported")], + "tests/table map with key integer/toNullable(NULL)": + [(Fail, "Nullable type as key not supported")], + "tests/table map with key string/Nullable": + [(Fail, "Nullable type as key not supported")], + "tests/table map with key string/Nullable(NULL)": + [(Fail, "Nullable type as key not supported")], + "tests/table map with key string/LowCardinality(String)": + [(Fail, "LowCardinality(String) as key not supported")], + "tests/table map with key string/LowCardinality(String) cast from String": + [(Fail, "LowCardinality(String) as key not supported")], + "tests/table map with key string/LowCardinality(String) for key and value": + [(Fail, "LowCardinality(String) as key not supported")], + "tests/table map with key string/LowCardinality(FixedString)": + [(Fail, "LowCardinality(FixedString) as key not supported")], + "tests/table map with value string/LowCardinality(String) for key and value": + [(Fail, "LowCardinality(String) as key not supported")], +} + +xflags = { +} + +@TestModule +@ArgumentParser(argparser) +@XFails(xfails) +@XFlags(xflags) +@Name("map type") 
+@Specifications( + SRS018_ClickHouse_Map_Data_Type +) +def regression(self, local, clickhouse_binary_path, stress=None, parallel=None): + """Map type regression. + """ + nodes = { + "clickhouse": + ("clickhouse1", "clickhouse2", "clickhouse3") + } + with Cluster(local, clickhouse_binary_path, nodes=nodes) as cluster: + self.context.cluster = cluster + self.context.stress = stress + + if parallel is not None: + self.context.parallel = parallel + + Feature(run=load("map_type.tests.feature", "feature")) + +if main(): + regression() diff --git a/tests/testflows/map_type/requirements/__init__.py b/tests/testflows/map_type/requirements/__init__.py new file mode 100644 index 00000000000..02f7d430154 --- /dev/null +++ b/tests/testflows/map_type/requirements/__init__.py @@ -0,0 +1 @@ +from .requirements import * diff --git a/tests/testflows/map_type/requirements/requirements.md b/tests/testflows/map_type/requirements/requirements.md new file mode 100644 index 00000000000..f19f5a7f7bd --- /dev/null +++ b/tests/testflows/map_type/requirements/requirements.md @@ -0,0 +1,512 @@ +# SRS018 ClickHouse Map Data Type +# Software Requirements Specification + +## Table of Contents + +* 1 [Revision History](#revision-history) +* 2 [Introduction](#introduction) +* 3 [Requirements](#requirements) + * 3.1 [General](#general) + * 3.1.1 [RQ.SRS-018.ClickHouse.Map.DataType](#rqsrs-018clickhousemapdatatype) + * 3.2 [Performance](#performance) + * 3.2.1 [RQ.SRS-018.ClickHouse.Map.DataType.Performance.Vs.ArrayOfTuples](#rqsrs-018clickhousemapdatatypeperformancevsarrayoftuples) + * 3.2.2 [RQ.SRS-018.ClickHouse.Map.DataType.Performance.Vs.TupleOfArrays](#rqsrs-018clickhousemapdatatypeperformancevstupleofarrays) + * 3.3 [Key Types](#key-types) + * 3.3.1 [RQ.SRS-018.ClickHouse.Map.DataType.Key.String](#rqsrs-018clickhousemapdatatypekeystring) + * 3.3.2 [RQ.SRS-018.ClickHouse.Map.DataType.Key.Integer](#rqsrs-018clickhousemapdatatypekeyinteger) + * 3.4 [Value Types](#value-types) + * 3.4.1 [RQ.SRS-018.ClickHouse.Map.DataType.Value.String](#rqsrs-018clickhousemapdatatypevaluestring) + * 3.4.2 [RQ.SRS-018.ClickHouse.Map.DataType.Value.Integer](#rqsrs-018clickhousemapdatatypevalueinteger) + * 3.4.3 [RQ.SRS-018.ClickHouse.Map.DataType.Value.Array](#rqsrs-018clickhousemapdatatypevaluearray) + * 3.5 [Invalid Types](#invalid-types) + * 3.5.1 [RQ.SRS-018.ClickHouse.Map.DataType.Invalid.Nullable](#rqsrs-018clickhousemapdatatypeinvalidnullable) + * 3.5.2 [RQ.SRS-018.ClickHouse.Map.DataType.Invalid.NothingNothing](#rqsrs-018clickhousemapdatatypeinvalidnothingnothing) + * 3.6 [Duplicated Keys](#duplicated-keys) + * 3.6.1 [RQ.SRS-018.ClickHouse.Map.DataType.DuplicatedKeys](#rqsrs-018clickhousemapdatatypeduplicatedkeys) + * 3.7 [Array of Maps](#array-of-maps) + * 3.7.1 [RQ.SRS-018.ClickHouse.Map.DataType.ArrayOfMaps](#rqsrs-018clickhousemapdatatypearrayofmaps) + * 3.8 [Nested With Maps](#nested-with-maps) + * 3.8.1 [RQ.SRS-018.ClickHouse.Map.DataType.NestedWithMaps](#rqsrs-018clickhousemapdatatypenestedwithmaps) + * 3.9 [Value Retrieval](#value-retrieval) + * 3.9.1 [RQ.SRS-018.ClickHouse.Map.DataType.Value.Retrieval](#rqsrs-018clickhousemapdatatypevalueretrieval) + * 3.9.2 [RQ.SRS-018.ClickHouse.Map.DataType.Value.Retrieval.KeyInvalid](#rqsrs-018clickhousemapdatatypevalueretrievalkeyinvalid) + * 3.9.3 [RQ.SRS-018.ClickHouse.Map.DataType.Value.Retrieval.KeyNotFound](#rqsrs-018clickhousemapdatatypevalueretrievalkeynotfound) + * 3.10 [Converting Tuple(Array, Array) to Map](#converting-tuplearray-array-to-map) + * 3.10.1 
[RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.TupleOfArraysToMap](#rqsrs-018clickhousemapdatatypeconversionfromtupleofarraystomap) + * 3.10.2 [RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.TupleOfArraysMap.Invalid](#rqsrs-018clickhousemapdatatypeconversionfromtupleofarraysmapinvalid) + * 3.11 [Converting Array(Tuple(K,V)) to Map](#converting-arraytuplekv-to-map) + * 3.11.1 [RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.ArrayOfTuplesToMap](#rqsrs-018clickhousemapdatatypeconversionfromarrayoftuplestomap) + * 3.11.2 [RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.ArrayOfTuplesToMap.Invalid](#rqsrs-018clickhousemapdatatypeconversionfromarrayoftuplestomapinvalid) + * 3.12 [Keys and Values Subcolumns](#keys-and-values-subcolumns) + * 3.12.1 [RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Keys](#rqsrs-018clickhousemapdatatypesubcolumnskeys) + * 3.12.2 [RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Keys.ArrayFunctions](#rqsrs-018clickhousemapdatatypesubcolumnskeysarrayfunctions) + * 3.12.3 [RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Keys.InlineDefinedMap](#rqsrs-018clickhousemapdatatypesubcolumnskeysinlinedefinedmap) + * 3.12.4 [RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Values](#rqsrs-018clickhousemapdatatypesubcolumnsvalues) + * 3.12.5 [RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Values.ArrayFunctions](#rqsrs-018clickhousemapdatatypesubcolumnsvaluesarrayfunctions) + * 3.12.6 [RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Values.InlineDefinedMap](#rqsrs-018clickhousemapdatatypesubcolumnsvaluesinlinedefinedmap) + * 3.13 [Functions](#functions) + * 3.13.1 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.InlineDefinedMap](#rqsrs-018clickhousemapdatatypefunctionsinlinedefinedmap) + * 3.13.2 [`length`](#length) + * 3.13.2.1 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.Length](#rqsrs-018clickhousemapdatatypefunctionslength) + * 3.13.3 [`empty`](#empty) + * 3.13.3.1 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.Empty](#rqsrs-018clickhousemapdatatypefunctionsempty) + * 3.13.4 [`notEmpty`](#notempty) + * 3.13.4.1 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.NotEmpty](#rqsrs-018clickhousemapdatatypefunctionsnotempty) + * 3.13.5 [`map`](#map) + * 3.13.5.1 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map](#rqsrs-018clickhousemapdatatypefunctionsmap) + * 3.13.5.2 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.InvalidNumberOfArguments](#rqsrs-018clickhousemapdatatypefunctionsmapinvalidnumberofarguments) + * 3.13.5.3 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MixedKeyOrValueTypes](#rqsrs-018clickhousemapdatatypefunctionsmapmixedkeyorvaluetypes) + * 3.13.5.4 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MapAdd](#rqsrs-018clickhousemapdatatypefunctionsmapmapadd) + * 3.13.5.5 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MapSubstract](#rqsrs-018clickhousemapdatatypefunctionsmapmapsubstract) + * 3.13.5.6 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MapPopulateSeries](#rqsrs-018clickhousemapdatatypefunctionsmapmappopulateseries) + * 3.13.6 [`mapContains`](#mapcontains) + * 3.13.6.1 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.MapContains](#rqsrs-018clickhousemapdatatypefunctionsmapcontains) + * 3.13.7 [`mapKeys`](#mapkeys) + * 3.13.7.1 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.MapKeys](#rqsrs-018clickhousemapdatatypefunctionsmapkeys) + * 3.13.8 [`mapValues`](#mapvalues) + * 3.13.8.1 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.MapValues](#rqsrs-018clickhousemapdatatypefunctionsmapvalues) + +## Revision History + +This document is stored in an electronic form 
using [Git] source control management software
+hosted in a [GitHub Repository].
+All the updates are tracked using the [Revision History].
+
+## Introduction
+
+This software requirements specification covers requirements for `Map(key, value)` data type in [ClickHouse].
+
+## Requirements
+
+### General
+
+#### RQ.SRS-018.ClickHouse.Map.DataType
+version: 1.0
+
+[ClickHouse] SHALL support `Map(key, value)` data type that stores `key:value` pairs.
+
+### Performance
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.Performance.Vs.ArrayOfTuples
+version: 1.0
+
+[ClickHouse] SHALL provide comparable performance for `Map(key, value)` data type as
+compared to `Array(Tuple(K,V))` data type.
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.Performance.Vs.TupleOfArrays
+version: 1.0
+
+[ClickHouse] SHALL provide comparable performance for `Map(key, value)` data type as
+compared to `Tuple(Array(String), Array(String))` data type where the first
+array defines an array of keys and the second array defines an array of values.
+
+### Key Types
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.Key.String
+version: 1.0
+
+[ClickHouse] SHALL support `Map(key, value)` data type where key is of a [String] type.
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.Key.Integer
+version: 1.0
+
+[ClickHouse] SHALL support `Map(key, value)` data type where key is of an [Integer] type.
+
+### Value Types
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.Value.String
+version: 1.0
+
+[ClickHouse] SHALL support `Map(key, value)` data type where value is of a [String] type.
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.Value.Integer
+version: 1.0
+
+[ClickHouse] SHALL support `Map(key, value)` data type where value is of an [Integer] type.
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.Value.Array
+version: 1.0
+
+[ClickHouse] SHALL support `Map(key, value)` data type where value is of an [Array] type.
+
+### Invalid Types
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.Invalid.Nullable
+version: 1.0
+
+[ClickHouse] SHALL not support creating table columns that have `Nullable(Map(key, value))` data type.
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.Invalid.NothingNothing
+version: 1.0
+
+[ClickHouse] SHALL not support creating table columns that have `Map(Nothing, Nothing)` data type.
+
+### Duplicated Keys
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.DuplicatedKeys
+version: 1.0
+
+[ClickHouse] MAY support `Map(key, value)` data type with duplicated keys.
+
+### Array of Maps
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.ArrayOfMaps
+version: 1.0
+
+[ClickHouse] SHALL support `Array(Map(key, value))` data type.
+
+### Nested With Maps
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.NestedWithMaps
+version: 1.0
+
+[ClickHouse] SHALL support defining `Map(key, value)` data type inside the [Nested] data type.
+
+### Value Retrieval
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.Value.Retrieval
+version: 1.0
+
+[ClickHouse] SHALL support getting the value from a `Map(key, value)` data type using `map[key]` syntax.
+If `key` has duplicates then the first `key:value` pair MAY be returned.
+
+For example,
+
+```sql
+SELECT a['key2'] FROM table_map;
+```
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.Value.Retrieval.KeyInvalid
+version: 1.0
+
+[ClickHouse] SHALL return an error when key does not match the key type.
+
+For example,
+
+```sql
+SELECT map(1,2) AS m, m[1024]
+```
+
+Exceptions:
+
+* when key is `NULL` the return value MAY be `NULL`
+* when key value is not valid for the key type, for example it is out of range for [Integer] type,
+  when reading from a table column it MAY return the default value for key data type
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.Value.Retrieval.KeyNotFound
+version: 1.0
+
+[ClickHouse] SHALL return default value for the data type of the value
+when there's no corresponding `key` defined in the `Map(key, value)` data type.
+
+### Converting Tuple(Array, Array) to Map
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.TupleOfArraysToMap
+version: 1.0
+
+[ClickHouse] SHALL support converting [Tuple(Array, Array)] to `Map(key, value)` using the [CAST] function.
+
+``` sql
+SELECT CAST(([1, 2, 3], ['Ready', 'Steady', 'Go']), 'Map(UInt8, String)') AS map;
+```
+
+``` text
+┌─map───────────────────────────┐
+│ {1:'Ready',2:'Steady',3:'Go'} │
+└───────────────────────────────┘
+```
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.TupleOfArraysMap.Invalid
+version: 1.0
+
+[ClickHouse] MAY return an error when casting [Tuple(Array, Array)] to `Map(key, value)`
+
+* when arrays are not of equal size
+
+  For example,
+
+  ```sql
+  SELECT CAST(([2, 1, 1023], ['', '']), 'Map(UInt8, String)') AS map, map[10]
+  ```
+
+### Converting Array(Tuple(K,V)) to Map
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.ArrayOfTuplesToMap
+version: 1.0
+
+[ClickHouse] SHALL support converting [Array(Tuple(K,V))] to `Map(key, value)` using the [CAST] function.
+
+For example,
+
+```sql
+SELECT CAST(([(1,2),(3,4)]), 'Map(UInt8, UInt8)') AS map
+```
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.ArrayOfTuplesToMap.Invalid
+version: 1.0
+
+[ClickHouse] MAY return an error when casting [Array(Tuple(K,V))] to `Map(key, value)`
+
+* when element is not a [Tuple]
+
+  ```sql
+  SELECT CAST(([(1,2),(3)]), 'Map(UInt8, UInt8)') AS map
+  ```
+
+* when [Tuple] does not contain two elements
+
+  ```sql
+  SELECT CAST(([(1,2),(3,)]), 'Map(UInt8, UInt8)') AS map
+  ```
+
+### Keys and Values Subcolumns
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Keys
+version: 1.0
+
+[ClickHouse] SHALL support `keys` subcolumn in the `Map(key, value)` type that can be used
+to retrieve an [Array] of map keys.
+
+```sql
+SELECT m.keys FROM t_map;
+```
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Keys.ArrayFunctions
+version: 1.0
+
+[ClickHouse] SHALL support applying [Array] functions to the `keys` subcolumn in the `Map(key, value)` type.
+
+For example,
+
+```sql
+SELECT * FROM t_map WHERE has(m.keys, 'a');
+```
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Keys.InlineDefinedMap
+version: 1.0
+
+[ClickHouse] MAY not support using an inline defined map to get the `keys` subcolumn.
+
+For example,
+
+```sql
+SELECT map( 'aa', 4, '44' , 5) as c, c.keys
+```
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Values
+version: 1.0
+
+[ClickHouse] SHALL support `values` subcolumn in the `Map(key, value)` type that can be used
+to retrieve an [Array] of map values.
+
+```sql
+SELECT m.values FROM t_map;
+```
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Values.ArrayFunctions
+version: 1.0
+
+[ClickHouse] SHALL support applying [Array] functions to the `values` subcolumn in the `Map(key, value)` type.
+
+For example,
+
+```sql
+SELECT * FROM t_map WHERE has(m.values, 'a');
+```
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Values.InlineDefinedMap
+version: 1.0
+
+[ClickHouse] MAY not support using an inline defined map to get the `values` subcolumn.
+
+For example,
+
+```sql
+SELECT map( 'aa', 4, '44' , 5) as c, c.values
+```
+
+### Functions
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.Functions.InlineDefinedMap
+version: 1.0
+
+[ClickHouse] SHALL support using inline defined maps as an argument to map functions.
+
+For example,
+
+```sql
+SELECT map( 'aa', 4, '44' , 5) as c, mapKeys(c)
+SELECT map( 'aa', 4, '44' , 5) as c, mapValues(c)
+```
+
+#### `length`
+
+##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.Length
+version: 1.0
+
+[ClickHouse] SHALL support `Map(key, value)` data type as an argument to the [length] function
+that SHALL return the number of keys in the map.
+
+For example,
+
+```sql
+SELECT length(map(1,2,3,4))
+SELECT length(map())
+```
+
+#### `empty`
+
+##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.Empty
+version: 1.0
+
+[ClickHouse] SHALL support `Map(key, value)` data type as an argument to the [empty] function
+that SHALL return 1 if the number of keys in the map is 0; otherwise, if the number of keys is
+greater than or equal to 1, it SHALL return 0.
+
+For example,
+
+```sql
+SELECT empty(map(1,2,3,4))
+SELECT empty(map())
+```
+
+#### `notEmpty`
+
+##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.NotEmpty
+version: 1.0
+
+[ClickHouse] SHALL support `Map(key, value)` data type as an argument to the [notEmpty] function
+that SHALL return 0 if the number of keys in the map is 0; otherwise, if the number of keys is
+greater than or equal to 1, it SHALL return 1.
+
+For example,
+
+```sql
+SELECT notEmpty(map(1,2,3,4))
+SELECT notEmpty(map())
+```
+
+#### `map`
+
+##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map
+version: 1.0
+
+[ClickHouse] SHALL support arranging `key, value` pairs into `Map(key, value)` data type
+using the `map` function.
+
+**Syntax**
+
+``` sql
+map(key1, value1[, key2, value2, ...])
+```
+
+For example,
+
+``` sql
+SELECT map('key1', number, 'key2', number * 2) FROM numbers(3);
+
+┌─map('key1', number, 'key2', multiply(number, 2))─┐
+│ {'key1':0,'key2':0}                              │
+│ {'key1':1,'key2':2}                              │
+│ {'key1':2,'key2':4}                              │
+└──────────────────────────────────────────────────┘
+```
+
+##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.InvalidNumberOfArguments
+version: 1.0
+
+[ClickHouse] SHALL return an error when the `map` function is called with a non-even number of arguments.
+
+##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MixedKeyOrValueTypes
+version: 1.0
+
+[ClickHouse] SHALL return an error when the `map` function is called with mixed key or value types.
+
+##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MapAdd
+version: 1.0
+
+[ClickHouse] SHALL support converting the results of the `mapAdd` function to a `Map(key, value)` data type.
+
+For example,
+
+``` sql
+SELECT CAST(mapAdd(([toUInt8(1), 2], [1, 1]), ([toUInt8(1), 2], [1, 1])), 'Map(Int8,Int8)')
+```
+
+##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MapSubstract
+version: 1.0
+
+[ClickHouse] SHALL support converting the results of the `mapSubtract` function to a `Map(key, value)` data type.
+
+For example,
+
+```sql
+SELECT CAST(mapSubtract(([toUInt8(1), 2], [toInt32(1), 1]), ([toUInt8(1), 2], [toInt32(2), 1])), 'Map(Int8,Int8)')
+```
+
+##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MapPopulateSeries
+version: 1.0
+
+[ClickHouse] SHALL support converting the results of the `mapPopulateSeries` function to a `Map(key, value)` data type.
+
+For example,
+
+```sql
+SELECT CAST(mapPopulateSeries([1,2,4], [11,22,44], 5), 'Map(Int8,Int8)')
+```
+
+#### `mapContains`
+
+##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.MapContains
+version: 1.0
+
+[ClickHouse] SHALL support `mapContains(map, key)` function to check whether `map.keys` contains the `key`.
+
+For example,
+
+```sql
+SELECT mapContains(a, 'abc') from table_map;
+```
+
+#### `mapKeys`
+
+##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.MapKeys
+version: 1.0
+
+[ClickHouse] SHALL support `mapKeys(map)` function to return all the map keys in the [Array] format.
+
+For example,
+
+```sql
+SELECT mapKeys(a) from table_map;
+```
+
+#### `mapValues`
+
+##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.MapValues
+version: 1.0
+
+[ClickHouse] SHALL support `mapValues(map)` function to return all the map values in the [Array] format.
+
+For example,
+
+```sql
+SELECT mapValues(a) from table_map;
+```
+
+[Nested]: https://clickhouse.tech/docs/en/sql-reference/data-types/nested-data-structures/nested/
+[length]: https://clickhouse.tech/docs/en/sql-reference/functions/array-functions/#array_functions-length
+[empty]: https://clickhouse.tech/docs/en/sql-reference/functions/array-functions/#function-empty
+[notEmpty]: https://clickhouse.tech/docs/en/sql-reference/functions/array-functions/#function-notempty
+[CAST]: https://clickhouse.tech/docs/en/sql-reference/functions/type-conversion-functions/#type_conversion_function-cast
+[Tuple]: https://clickhouse.tech/docs/en/sql-reference/data-types/tuple/
+[Tuple(Array, Array)]: https://clickhouse.tech/docs/en/sql-reference/data-types/tuple/
+[Array(Tuple(K,V))]: https://clickhouse.tech/docs/en/sql-reference/data-types/array/
+[Array]: https://clickhouse.tech/docs/en/sql-reference/data-types/array/
+[String]: https://clickhouse.tech/docs/en/sql-reference/data-types/string/
+[Integer]: https://clickhouse.tech/docs/en/sql-reference/data-types/int-uint/
+[ClickHouse]: https://clickhouse.tech
+[GitHub Repository]: https://github.com/ClickHouse/ClickHouse/blob/master/tests/testflows/map_type/requirements/requirements.md
+[Revision History]: https://github.com/ClickHouse/ClickHouse/commits/master/tests/testflows/map_type/requirements/requirements.md
+[Git]: https://git-scm.com/
+[GitHub]: https://github.com
diff --git a/tests/testflows/map_type/requirements/requirements.py b/tests/testflows/map_type/requirements/requirements.py
new file mode 100644
index 00000000000..24e8abdf15f
--- /dev/null
+++ b/tests/testflows/map_type/requirements/requirements.py
@@ -0,0 +1,1427 @@
+# These requirements were auto generated
+# from software requirements specification (SRS)
+# document by TestFlows v1.6.210226.1200017.
+# Do not edit by hand but re-generate instead
+# using 'tfs requirements generate' command.
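+# An illustrative note (an assumption for this listing, not emitted by the
+# generator): test scenarios consume these Requirement objects by passing
+# them to the testflows Requirements decorator, tying each test to the SRS
+# entry it verifies, e.g.:
+#
+#     @TestScenario
+#     @Requirements(RQ_SRS_018_ClickHouse_Map_DataType("1.0"))
+#     def table_map(self):
+#         ...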
+from testflows.core import Specification
+from testflows.core import Requirement
+
+Heading = Specification.Heading
+
+RQ_SRS_018_ClickHouse_Map_DataType = Requirement(
+    name='RQ.SRS-018.ClickHouse.Map.DataType',
+    version='1.0',
+    priority=None,
+    group=None,
+    type=None,
+    uid=None,
+    description=(
+        '[ClickHouse] SHALL support `Map(key, value)` data type that stores `key:value` pairs.\n'
+        '\n'
+        ),
+    link=None,
+    level=3,
+    num='3.1.1')
+
+RQ_SRS_018_ClickHouse_Map_DataType_Performance_Vs_ArrayOfTuples = Requirement(
+    name='RQ.SRS-018.ClickHouse.Map.DataType.Performance.Vs.ArrayOfTuples',
+    version='1.0',
+    priority=None,
+    group=None,
+    type=None,
+    uid=None,
+    description=(
+        '[ClickHouse] SHALL provide comparable performance for `Map(key, value)` data type as\n'
+        'compared to `Array(Tuple(K,V))` data type.\n'
+        '\n'
+        ),
+    link=None,
+    level=3,
+    num='3.2.1')
+
+RQ_SRS_018_ClickHouse_Map_DataType_Performance_Vs_TupleOfArrays = Requirement(
+    name='RQ.SRS-018.ClickHouse.Map.DataType.Performance.Vs.TupleOfArrays',
+    version='1.0',
+    priority=None,
+    group=None,
+    type=None,
+    uid=None,
+    description=(
+        '[ClickHouse] SHALL provide comparable performance for `Map(key, value)` data type as\n'
+        'compared to `Tuple(Array(String), Array(String))` data type where the first\n'
+        'array defines an array of keys and the second array defines an array of values.\n'
+        '\n'
+        ),
+    link=None,
+    level=3,
+    num='3.2.2')
+
+RQ_SRS_018_ClickHouse_Map_DataType_Key_String = Requirement(
+    name='RQ.SRS-018.ClickHouse.Map.DataType.Key.String',
+    version='1.0',
+    priority=None,
+    group=None,
+    type=None,
+    uid=None,
+    description=(
+        '[ClickHouse] SHALL support `Map(key, value)` data type where key is of a [String] type.\n'
+        '\n'
+        ),
+    link=None,
+    level=3,
+    num='3.3.1')
+
+RQ_SRS_018_ClickHouse_Map_DataType_Key_Integer = Requirement(
+    name='RQ.SRS-018.ClickHouse.Map.DataType.Key.Integer',
+    version='1.0',
+    priority=None,
+    group=None,
+    type=None,
+    uid=None,
+    description=(
+        '[ClickHouse] SHALL support `Map(key, value)` data type where key is of an [Integer] type.\n'
+        '\n'
+        ),
+    link=None,
+    level=3,
+    num='3.3.2')
+
+RQ_SRS_018_ClickHouse_Map_DataType_Value_String = Requirement(
+    name='RQ.SRS-018.ClickHouse.Map.DataType.Value.String',
+    version='1.0',
+    priority=None,
+    group=None,
+    type=None,
+    uid=None,
+    description=(
+        '[ClickHouse] SHALL support `Map(key, value)` data type where value is of a [String] type.\n'
+        '\n'
+        ),
+    link=None,
+    level=3,
+    num='3.4.1')
+
+RQ_SRS_018_ClickHouse_Map_DataType_Value_Integer = Requirement(
+    name='RQ.SRS-018.ClickHouse.Map.DataType.Value.Integer',
+    version='1.0',
+    priority=None,
+    group=None,
+    type=None,
+    uid=None,
+    description=(
+        '[ClickHouse] SHALL support `Map(key, value)` data type where value is of an [Integer] type.\n'
+        '\n'
+        ),
+    link=None,
+    level=3,
+    num='3.4.2')
+
+RQ_SRS_018_ClickHouse_Map_DataType_Value_Array = Requirement(
+    name='RQ.SRS-018.ClickHouse.Map.DataType.Value.Array',
+    version='1.0',
+    priority=None,
+    group=None,
+    type=None,
+    uid=None,
+    description=(
+        '[ClickHouse] SHALL support `Map(key, value)` data type where value is of an [Array] type.\n'
+        '\n'
+        ),
+    link=None,
+    level=3,
+    num='3.4.3')
+
+RQ_SRS_018_ClickHouse_Map_DataType_Invalid_Nullable = Requirement(
+    name='RQ.SRS-018.ClickHouse.Map.DataType.Invalid.Nullable',
+    version='1.0',
+    priority=None,
+    group=None,
+    type=None,
+    uid=None,
+    description=(
+        '[ClickHouse] SHALL not support creating table columns that have `Nullable(Map(key, value))` data 
type.\n'
+        '\n'
+        ),
+    link=None,
+    level=3,
+    num='3.5.1')
+
+RQ_SRS_018_ClickHouse_Map_DataType_Invalid_NothingNothing = Requirement(
+    name='RQ.SRS-018.ClickHouse.Map.DataType.Invalid.NothingNothing',
+    version='1.0',
+    priority=None,
+    group=None,
+    type=None,
+    uid=None,
+    description=(
+        '[ClickHouse] SHALL not support creating table columns that have `Map(Nothing, Nothing)` data type.\n'
+        '\n'
+        ),
+    link=None,
+    level=3,
+    num='3.5.2')
+
+RQ_SRS_018_ClickHouse_Map_DataType_DuplicatedKeys = Requirement(
+    name='RQ.SRS-018.ClickHouse.Map.DataType.DuplicatedKeys',
+    version='1.0',
+    priority=None,
+    group=None,
+    type=None,
+    uid=None,
+    description=(
+        '[ClickHouse] MAY support `Map(key, value)` data type with duplicated keys.\n'
+        '\n'
+        ),
+    link=None,
+    level=3,
+    num='3.6.1')
+
+RQ_SRS_018_ClickHouse_Map_DataType_ArrayOfMaps = Requirement(
+    name='RQ.SRS-018.ClickHouse.Map.DataType.ArrayOfMaps',
+    version='1.0',
+    priority=None,
+    group=None,
+    type=None,
+    uid=None,
+    description=(
+        '[ClickHouse] SHALL support `Array(Map(key, value))` data type.\n'
+        '\n'
+        ),
+    link=None,
+    level=3,
+    num='3.7.1')
+
+RQ_SRS_018_ClickHouse_Map_DataType_NestedWithMaps = Requirement(
+    name='RQ.SRS-018.ClickHouse.Map.DataType.NestedWithMaps',
+    version='1.0',
+    priority=None,
+    group=None,
+    type=None,
+    uid=None,
+    description=(
+        '[ClickHouse] SHALL support defining `Map(key, value)` data type inside the [Nested] data type.\n'
+        '\n'
+        ),
+    link=None,
+    level=3,
+    num='3.8.1')
+
+RQ_SRS_018_ClickHouse_Map_DataType_Value_Retrieval = Requirement(
+    name='RQ.SRS-018.ClickHouse.Map.DataType.Value.Retrieval',
+    version='1.0',
+    priority=None,
+    group=None,
+    type=None,
+    uid=None,
+    description=(
+        '[ClickHouse] SHALL support getting the value from a `Map(key, value)` data type using `map[key]` syntax.\n'
+        'If `key` has duplicates then the first `key:value` pair MAY be returned. \n'
+        '\n'
+        'For example,\n'
+        '\n'
+        '```sql\n'
+        "SELECT a['key2'] FROM table_map;\n"
+        '```\n'
+        '\n'
+        ),
+    link=None,
+    level=3,
+    num='3.9.1')
+
+RQ_SRS_018_ClickHouse_Map_DataType_Value_Retrieval_KeyInvalid = Requirement(
+    name='RQ.SRS-018.ClickHouse.Map.DataType.Value.Retrieval.KeyInvalid',
+    version='1.0',
+    priority=None,
+    group=None,
+    type=None,
+    uid=None,
+    description=(
+        '[ClickHouse] SHALL return an error when the key does not match the key type.\n'
+        '\n'
+        'For example,\n'
+        '\n'
+        '```sql\n'
+        'SELECT map(1,2) AS m, m[1024]\n'
+        '```\n'
+        '\n'
+        'Exceptions:\n'
+        '\n'
+        '* when the key is `NULL` the return value MAY be `NULL`\n'
+        '* when the key value is not valid for the key type, for example it is out of range for [Integer] type, \n'
+        '  when reading from a table column it MAY return the default value for the key data type\n'
+        '\n'
+        ),
+    link=None,
+    level=3,
+    num='3.9.2')
+
+RQ_SRS_018_ClickHouse_Map_DataType_Value_Retrieval_KeyNotFound = Requirement(
+    name='RQ.SRS-018.ClickHouse.Map.DataType.Value.Retrieval.KeyNotFound',
+    version='1.0',
+    priority=None,
+    group=None,
+    type=None,
+    uid=None,
+    description=(
+        '[ClickHouse] SHALL return the default value for the data type of the value\n'
+        "when there's no corresponding `key` defined in the `Map(key, value)` data type. 
\n" + '\n' + '\n' + ), + link=None, + level=3, + num='3.9.3') + +RQ_SRS_018_ClickHouse_Map_DataType_Conversion_From_TupleOfArraysToMap = Requirement( + name='RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.TupleOfArraysToMap', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support converting [Tuple(Array, Array)] to `Map(key, value)` using the [CAST] function.\n' + '\n' + '``` sql\n' + "SELECT CAST(([1, 2, 3], ['Ready', 'Steady', 'Go']), 'Map(UInt8, String)') AS map;\n" + '```\n' + '\n' + '``` text\n' + '┌─map───────────────────────────┐\n' + "│ {1:'Ready',2:'Steady',3:'Go'} │\n" + '└───────────────────────────────┘\n' + '```\n' + '\n' + ), + link=None, + level=3, + num='3.10.1') + +RQ_SRS_018_ClickHouse_Map_DataType_Conversion_From_TupleOfArraysMap_Invalid = Requirement( + name='RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.TupleOfArraysMap.Invalid', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] MAY return an error when casting [Tuple(Array, Array)] to `Map(key, value)`\n' + '\n' + '* when arrays are not of equal size\n' + '\n' + ' For example,\n' + '\n' + ' ```sql\n' + " SELECT CAST(([2, 1, 1023], ['', '']), 'Map(UInt8, String)') AS map, map[10]\n" + ' ```\n' + '\n' + ), + link=None, + level=3, + num='3.10.2') + +RQ_SRS_018_ClickHouse_Map_DataType_Conversion_From_ArrayOfTuplesToMap = Requirement( + name='RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.ArrayOfTuplesToMap', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support converting [Array(Tuple(K,V))] to `Map(key, value)` using the [CAST] function.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + "SELECT CAST(([(1,2),(3)]), 'Map(UInt8, UInt8)') AS map\n" + '```\n' + '\n' + ), + link=None, + level=3, + num='3.11.1') + +RQ_SRS_018_ClickHouse_Map_DataType_Conversion_From_ArrayOfTuplesToMap_Invalid = Requirement( + name='RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.ArrayOfTuplesToMap.Invalid', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] MAY return an error when casting [Array(Tuple(K, V))] to `Map(key, value)`\n' + '\n' + '* when element is not a [Tuple]\n' + '\n' + ' ```sql\n' + " SELECT CAST(([(1,2),(3)]), 'Map(UInt8, UInt8)') AS map\n" + ' ```\n' + '\n' + '* when [Tuple] does not contain two elements\n' + '\n' + ' ```sql\n' + " SELECT CAST(([(1,2),(3,)]), 'Map(UInt8, UInt8)') AS map\n" + ' ```\n' + '\n' + ), + link=None, + level=3, + num='3.11.2') + +RQ_SRS_018_ClickHouse_Map_DataType_SubColumns_Keys = Requirement( + name='RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Keys', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support `keys` subcolumn in the `Map(key, value)` type that can be used \n' + 'to retrieve an [Array] of map keys.\n' + '\n' + '```sql\n' + 'SELECT m.keys FROM t_map;\n' + '```\n' + '\n' + ), + link=None, + level=3, + num='3.12.1') + +RQ_SRS_018_ClickHouse_Map_DataType_SubColumns_Keys_ArrayFunctions = Requirement( + name='RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Keys.ArrayFunctions', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support applying [Array] functions to the `keys` subcolumn in the `Map(key, value)` type.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + "SELECT * FROM t_map WHERE has(m.keys, 'a');\n" + '```\n' 
+        '\n'
+        ),
+    link=None,
+    level=3,
+    num='3.12.2')
+
+RQ_SRS_018_ClickHouse_Map_DataType_SubColumns_Keys_InlineDefinedMap = Requirement(
+    name='RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Keys.InlineDefinedMap',
+    version='1.0',
+    priority=None,
+    group=None,
+    type=None,
+    uid=None,
+    description=(
+        '[ClickHouse] MAY not support using inline defined map to get `keys` subcolumn.\n'
+        '\n'
+        'For example,\n'
+        '\n'
+        '```sql\n'
+        "SELECT map( 'aa', 4, '44' , 5) as c, c.keys\n"
+        '```\n'
+        '\n'
+        ),
+    link=None,
+    level=3,
+    num='3.12.3')
+
+RQ_SRS_018_ClickHouse_Map_DataType_SubColumns_Values = Requirement(
+    name='RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Values',
+    version='1.0',
+    priority=None,
+    group=None,
+    type=None,
+    uid=None,
+    description=(
+        '[ClickHouse] SHALL support `values` subcolumn in the `Map(key, value)` type that can be used \n'
+        'to retrieve an [Array] of map values.\n'
+        '\n'
+        '```sql\n'
+        'SELECT m.values FROM t_map;\n'
+        '```\n'
+        '\n'
+        ),
+    link=None,
+    level=3,
+    num='3.12.4')
+
+RQ_SRS_018_ClickHouse_Map_DataType_SubColumns_Values_ArrayFunctions = Requirement(
+    name='RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Values.ArrayFunctions',
+    version='1.0',
+    priority=None,
+    group=None,
+    type=None,
+    uid=None,
+    description=(
+        '[ClickHouse] SHALL support applying [Array] functions to the `values` subcolumn in the `Map(key, value)` type.\n'
+        '\n'
+        'For example,\n'
+        '\n'
+        '```sql\n'
+        "SELECT * FROM t_map WHERE has(m.values, 'a');\n"
+        '```\n'
+        '\n'
+        ),
+    link=None,
+    level=3,
+    num='3.12.5')
+
+RQ_SRS_018_ClickHouse_Map_DataType_SubColumns_Values_InlineDefinedMap = Requirement(
+    name='RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Values.InlineDefinedMap',
+    version='1.0',
+    priority=None,
+    group=None,
+    type=None,
+    uid=None,
+    description=(
+        '[ClickHouse] MAY not support using inline defined map to get `values` subcolumn.\n'
+        '\n'
+        'For example,\n'
+        '\n'
+        '```sql\n'
+        "SELECT map( 'aa', 4, '44' , 5) as c, c.values\n"
+        '```\n'
+        '\n'
+        ),
+    link=None,
+    level=3,
+    num='3.12.6')
+
+RQ_SRS_018_ClickHouse_Map_DataType_Functions_InlineDefinedMap = Requirement(
+    name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.InlineDefinedMap',
+    version='1.0',
+    priority=None,
+    group=None,
+    type=None,
+    uid=None,
+    description=(
+        '[ClickHouse] SHALL support using inline defined maps as an argument to map functions.\n'
+        '\n'
+        'For example,\n'
+        '\n'
+        '```sql\n'
+        "SELECT map( 'aa', 4, '44' , 5) as c, mapKeys(c)\n"
+        "SELECT map( 'aa', 4, '44' , 5) as c, mapValues(c)\n"
+        '```\n'
+        '\n'
+        ),
+    link=None,
+    level=3,
+    num='3.13.1')
+
+RQ_SRS_018_ClickHouse_Map_DataType_Functions_Length = Requirement(
+    name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.Length',
+    version='1.0',
+    priority=None,
+    group=None,
+    type=None,
+    uid=None,
+    description=(
+        '[ClickHouse] SHALL support `Map(key, value)` data type as an argument to the [length] function\n'
+        'that SHALL return the number of keys in the map.\n'
+        '\n'
+        'For example,\n'
+        '\n'
+        '```sql\n'
+        'SELECT length(map(1,2,3,4))\n'
+        'SELECT length(map())\n'
+        '```\n'
+        '\n'
+        ),
+    link=None,
+    level=4,
+    num='3.13.2.1')
+
+RQ_SRS_018_ClickHouse_Map_DataType_Functions_Empty = Requirement(
+    name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.Empty',
+    version='1.0',
+    priority=None,
+    group=None,
+    type=None,
+    uid=None,
+    description=(
+        '[ClickHouse] SHALL support `Map(key, value)` data type as an argument to the [empty] function\n'
+        'that SHALL return 1 if the number of keys in the map is 0, otherwise if the 
number of keys is \n'
+        'greater than or equal to 1 it SHALL return 0.\n'
+        '\n'
+        'For example,\n'
+        '\n'
+        '```sql\n'
+        'SELECT empty(map(1,2,3,4))\n'
+        'SELECT empty(map())\n'
+        '```\n'
+        '\n'
+        ),
+    link=None,
+    level=4,
+    num='3.13.3.1')
+
+RQ_SRS_018_ClickHouse_Map_DataType_Functions_NotEmpty = Requirement(
+    name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.NotEmpty',
+    version='1.0',
+    priority=None,
+    group=None,
+    type=None,
+    uid=None,
+    description=(
+        '[ClickHouse] SHALL support `Map(key, value)` data type as an argument to the [notEmpty] function\n'
+        'that SHALL return 0 if the number of keys in the map is 0, otherwise if the number of keys is\n'
+        'greater than or equal to 1 it SHALL return 1.\n'
+        '\n'
+        'For example,\n'
+        '\n'
+        '```sql\n'
+        'SELECT notEmpty(map(1,2,3,4))\n'
+        'SELECT notEmpty(map())\n'
+        '```\n'
+        '\n'
+        ),
+    link=None,
+    level=4,
+    num='3.13.4.1')
+
+RQ_SRS_018_ClickHouse_Map_DataType_Functions_Map = Requirement(
+    name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map',
+    version='1.0',
+    priority=None,
+    group=None,
+    type=None,
+    uid=None,
+    description=(
+        '[ClickHouse] SHALL support arranging `key, value` pairs into `Map(key, value)` data type\n'
+        'using the `map` function.\n'
+        '\n'
+        '**Syntax** \n'
+        '\n'
+        '``` sql\n'
+        'map(key1, value1[, key2, value2, ...])\n'
+        '```\n'
+        '\n'
+        'For example,\n'
+        '\n'
+        '``` sql\n'
+        "SELECT map('key1', number, 'key2', number * 2) FROM numbers(3);\n"
+        '\n'
+        "┌─map('key1', number, 'key2', multiply(number, 2))─┐\n"
+        "│ {'key1':0,'key2':0}                              │\n"
+        "│ {'key1':1,'key2':2}                              │\n"
+        "│ {'key1':2,'key2':4}                              │\n"
+        '└──────────────────────────────────────────────────┘\n'
+        '```\n'
+        '\n'
+        ),
+    link=None,
+    level=4,
+    num='3.13.5.1')
+
+RQ_SRS_018_ClickHouse_Map_DataType_Functions_Map_InvalidNumberOfArguments = Requirement(
+    name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.InvalidNumberOfArguments',
+    version='1.0',
+    priority=None,
+    group=None,
+    type=None,
+    uid=None,
+    description=(
+        '[ClickHouse] SHALL return an error when the `map` function is called with an odd number of arguments.\n'
+        '\n'
+        ),
+    link=None,
+    level=4,
+    num='3.13.5.2')
+
+RQ_SRS_018_ClickHouse_Map_DataType_Functions_Map_MixedKeyOrValueTypes = Requirement(
+    name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MixedKeyOrValueTypes',
+    version='1.0',
+    priority=None,
+    group=None,
+    type=None,
+    uid=None,
+    description=(
+        '[ClickHouse] SHALL return an error when the `map` function is called with mixed key or value types.\n'
+        '\n'
+        '\n'
+        ),
+    link=None,
+    level=4,
+    num='3.13.5.3')
+
+RQ_SRS_018_ClickHouse_Map_DataType_Functions_Map_MapAdd = Requirement(
+    name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MapAdd',
+    version='1.0',
+    priority=None,
+    group=None,
+    type=None,
+    uid=None,
+    description=(
+        '[ClickHouse] SHALL support converting the results of the `mapAdd` function to a `Map(key, value)` data type.\n'
+        '\n'
+        'For example,\n'
+        '\n'
+        '``` sql\n'
+        'SELECT CAST(mapAdd(([toUInt8(1), 2], [1, 1]), ([toUInt8(1), 2], [1, 1])), "Map(Int8,Int8)")\n'
+        '```\n'
+        '\n'
+        ),
+    link=None,
+    level=4,
+    num='3.13.5.4')
+
+RQ_SRS_018_ClickHouse_Map_DataType_Functions_Map_MapSubstract = Requirement(
+    name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MapSubstract',
+    version='1.0',
+    priority=None,
+    group=None,
+    type=None,
+    uid=None,
+    description=(
+        '[ClickHouse] SHALL support converting the results of the `mapSubtract` function to a `Map(key, value)` data type.\n'
+        '\n'
+        'For example,\n'
+        '\n'
+        '```sql\n'
+        'SELECT 
CAST(mapSubtract(([toUInt8(1), 2], [toInt32(1), 1]), ([toUInt8(1), 2], [toInt32(2), 1])), "Map(Int8,Int8)")\n'
+        '```\n'
+        ),
+    link=None,
+    level=4,
+    num='3.13.5.5')
+
+RQ_SRS_018_ClickHouse_Map_DataType_Functions_Map_MapPopulateSeries = Requirement(
+    name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MapPopulateSeries',
+    version='1.0',
+    priority=None,
+    group=None,
+    type=None,
+    uid=None,
+    description=(
+        '[ClickHouse] SHALL support converting the results of the `mapPopulateSeries` function to a `Map(key, value)` data type.\n'
+        '\n'
+        'For example,\n'
+        '\n'
+        '```sql\n'
+        'SELECT CAST(mapPopulateSeries([1,2,4], [11,22,44], 5), "Map(Int8,Int8)")\n'
+        '```\n'
+        '\n'
+        ),
+    link=None,
+    level=4,
+    num='3.13.5.6')
+
+RQ_SRS_018_ClickHouse_Map_DataType_Functions_MapContains = Requirement(
+    name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.MapContains',
+    version='1.0',
+    priority=None,
+    group=None,
+    type=None,
+    uid=None,
+    description=(
+        '[ClickHouse] SHALL support the `mapContains(map, key)` function to check whether `map.keys` contains the `key`.\n'
+        '\n'
+        'For example,\n'
+        '\n'
+        '```sql\n'
+        "SELECT mapContains(a, 'abc') from table_map;\n"
+        '```\n'
+        '\n'
+        ),
+    link=None,
+    level=4,
+    num='3.13.6.1')
+
+RQ_SRS_018_ClickHouse_Map_DataType_Functions_MapKeys = Requirement(
+    name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.MapKeys',
+    version='1.0',
+    priority=None,
+    group=None,
+    type=None,
+    uid=None,
+    description=(
+        '[ClickHouse] SHALL support the `mapKeys(map)` function to return all the map keys in the [Array] format.\n'
+        '\n'
+        'For example,\n'
+        '\n'
+        '```sql\n'
+        'SELECT mapKeys(a) from table_map;\n'
+        '```\n'
+        '\n'
+        ),
+    link=None,
+    level=4,
+    num='3.13.7.1')
+
+RQ_SRS_018_ClickHouse_Map_DataType_Functions_MapValues = Requirement(
+    name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.MapValues',
+    version='1.0',
+    priority=None,
+    group=None,
+    type=None,
+    uid=None,
+    description=(
+        '[ClickHouse] SHALL support the `mapValues(map)` function to return all the map values in the [Array] format.\n'
+        '\n'
+        'For example,\n'
+        '\n'
+        '```sql\n'
+        'SELECT mapValues(a) from table_map;\n'
+        '```\n'
+        '\n'
+        '[Nested]: https://clickhouse.tech/docs/en/sql-reference/data-types/nested-data-structures/nested/\n'
+        '[length]: https://clickhouse.tech/docs/en/sql-reference/functions/array-functions/#array_functions-length\n'
+        '[empty]: https://clickhouse.tech/docs/en/sql-reference/functions/array-functions/#function-empty\n'
+        '[notEmpty]: https://clickhouse.tech/docs/en/sql-reference/functions/array-functions/#function-notempty\n'
+        '[CAST]: https://clickhouse.tech/docs/en/sql-reference/functions/type-conversion-functions/#type_conversion_function-cast\n'
+        '[Tuple]: https://clickhouse.tech/docs/en/sql-reference/data-types/tuple/\n'
+        '[Tuple(Array,Array)]: https://clickhouse.tech/docs/en/sql-reference/data-types/tuple/\n'
+        '[Array]: https://clickhouse.tech/docs/en/sql-reference/data-types/array/ \n'
+        '[String]: https://clickhouse.tech/docs/en/sql-reference/data-types/string/\n'
+        '[Integer]: https://clickhouse.tech/docs/en/sql-reference/data-types/int-uint/\n'
+        '[ClickHouse]: https://clickhouse.tech\n'
+        '[GitHub Repository]: https://github.com/ClickHouse/ClickHouse/blob/master/tests/testflows/map_type/requirements/requirements.md \n'
+        '[Revision History]: https://github.com/ClickHouse/ClickHouse/commits/master/tests/testflows/map_type/requirements/requirements.md\n'
+        '[Git]: https://git-scm.com/\n'
+        '[GitHub]: https://github.com\n'
+        ),
+    link=None,
+    level=4,
+    
num='3.13.8.1') + +SRS018_ClickHouse_Map_Data_Type = Specification( + name='SRS018 ClickHouse Map Data Type', + description=None, + author=None, + date=None, + status=None, + approved_by=None, + approved_date=None, + approved_version=None, + version=None, + group=None, + type=None, + link=None, + uid=None, + parent=None, + children=None, + headings=( + Heading(name='Revision History', level=1, num='1'), + Heading(name='Introduction', level=1, num='2'), + Heading(name='Requirements', level=1, num='3'), + Heading(name='General', level=2, num='3.1'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType', level=3, num='3.1.1'), + Heading(name='Performance', level=2, num='3.2'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Performance.Vs.ArrayOfTuples', level=3, num='3.2.1'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Performance.Vs.TupleOfArrays', level=3, num='3.2.2'), + Heading(name='Key Types', level=2, num='3.3'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Key.String', level=3, num='3.3.1'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Key.Integer', level=3, num='3.3.2'), + Heading(name='Value Types', level=2, num='3.4'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Value.String', level=3, num='3.4.1'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Value.Integer', level=3, num='3.4.2'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Value.Array', level=3, num='3.4.3'), + Heading(name='Invalid Types', level=2, num='3.5'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Invalid.Nullable', level=3, num='3.5.1'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Invalid.NothingNothing', level=3, num='3.5.2'), + Heading(name='Duplicated Keys', level=2, num='3.6'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.DuplicatedKeys', level=3, num='3.6.1'), + Heading(name='Array of Maps', level=2, num='3.7'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.ArrayOfMaps', level=3, num='3.7.1'), + Heading(name='Nested With Maps', level=2, num='3.8'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.NestedWithMaps', level=3, num='3.8.1'), + Heading(name='Value Retrieval', level=2, num='3.9'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Value.Retrieval', level=3, num='3.9.1'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Value.Retrieval.KeyInvalid', level=3, num='3.9.2'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Value.Retrieval.KeyNotFound', level=3, num='3.9.3'), + Heading(name='Converting Tuple(Array, Array) to Map', level=2, num='3.10'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.TupleOfArraysToMap', level=3, num='3.10.1'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.TupleOfArraysMap.Invalid', level=3, num='3.10.2'), + Heading(name='Converting Array(Tuple(K,V)) to Map', level=2, num='3.11'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.ArrayOfTuplesToMap', level=3, num='3.11.1'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.ArrayOfTuplesToMap.Invalid', level=3, num='3.11.2'), + Heading(name='Keys and Values Subcolumns', level=2, num='3.12'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Keys', level=3, num='3.12.1'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Keys.ArrayFunctions', level=3, num='3.12.2'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Keys.InlineDefinedMap', level=3, num='3.12.3'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Values', level=3, num='3.12.4'), 
+ Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Values.ArrayFunctions', level=3, num='3.12.5'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Values.InlineDefinedMap', level=3, num='3.12.6'), + Heading(name='Functions', level=2, num='3.13'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.InlineDefinedMap', level=3, num='3.13.1'), + Heading(name='`length`', level=3, num='3.13.2'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.Length', level=4, num='3.13.2.1'), + Heading(name='`empty`', level=3, num='3.13.3'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.Empty', level=4, num='3.13.3.1'), + Heading(name='`notEmpty`', level=3, num='3.13.4'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.NotEmpty', level=4, num='3.13.4.1'), + Heading(name='`map`', level=3, num='3.13.5'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map', level=4, num='3.13.5.1'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.InvalidNumberOfArguments', level=4, num='3.13.5.2'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MixedKeyOrValueTypes', level=4, num='3.13.5.3'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MapAdd', level=4, num='3.13.5.4'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MapSubstract', level=4, num='3.13.5.5'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MapPopulateSeries', level=4, num='3.13.5.6'), + Heading(name='`mapContains`', level=3, num='3.13.6'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.MapContains', level=4, num='3.13.6.1'), + Heading(name='`mapKeys`', level=3, num='3.13.7'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.MapKeys', level=4, num='3.13.7.1'), + Heading(name='`mapValues`', level=3, num='3.13.8'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.MapValues', level=4, num='3.13.8.1'), + ), + requirements=( + RQ_SRS_018_ClickHouse_Map_DataType, + RQ_SRS_018_ClickHouse_Map_DataType_Performance_Vs_ArrayOfTuples, + RQ_SRS_018_ClickHouse_Map_DataType_Performance_Vs_TupleOfArrays, + RQ_SRS_018_ClickHouse_Map_DataType_Key_String, + RQ_SRS_018_ClickHouse_Map_DataType_Key_Integer, + RQ_SRS_018_ClickHouse_Map_DataType_Value_String, + RQ_SRS_018_ClickHouse_Map_DataType_Value_Integer, + RQ_SRS_018_ClickHouse_Map_DataType_Value_Array, + RQ_SRS_018_ClickHouse_Map_DataType_Invalid_Nullable, + RQ_SRS_018_ClickHouse_Map_DataType_Invalid_NothingNothing, + RQ_SRS_018_ClickHouse_Map_DataType_DuplicatedKeys, + RQ_SRS_018_ClickHouse_Map_DataType_ArrayOfMaps, + RQ_SRS_018_ClickHouse_Map_DataType_NestedWithMaps, + RQ_SRS_018_ClickHouse_Map_DataType_Value_Retrieval, + RQ_SRS_018_ClickHouse_Map_DataType_Value_Retrieval_KeyInvalid, + RQ_SRS_018_ClickHouse_Map_DataType_Value_Retrieval_KeyNotFound, + RQ_SRS_018_ClickHouse_Map_DataType_Conversion_From_TupleOfArraysToMap, + RQ_SRS_018_ClickHouse_Map_DataType_Conversion_From_TupleOfArraysMap_Invalid, + RQ_SRS_018_ClickHouse_Map_DataType_Conversion_From_ArrayOfTuplesToMap, + RQ_SRS_018_ClickHouse_Map_DataType_Conversion_From_ArrayOfTuplesToMap_Invalid, + RQ_SRS_018_ClickHouse_Map_DataType_SubColumns_Keys, + RQ_SRS_018_ClickHouse_Map_DataType_SubColumns_Keys_ArrayFunctions, + RQ_SRS_018_ClickHouse_Map_DataType_SubColumns_Keys_InlineDefinedMap, + RQ_SRS_018_ClickHouse_Map_DataType_SubColumns_Values, + RQ_SRS_018_ClickHouse_Map_DataType_SubColumns_Values_ArrayFunctions, + 
RQ_SRS_018_ClickHouse_Map_DataType_SubColumns_Values_InlineDefinedMap, + RQ_SRS_018_ClickHouse_Map_DataType_Functions_InlineDefinedMap, + RQ_SRS_018_ClickHouse_Map_DataType_Functions_Length, + RQ_SRS_018_ClickHouse_Map_DataType_Functions_Empty, + RQ_SRS_018_ClickHouse_Map_DataType_Functions_NotEmpty, + RQ_SRS_018_ClickHouse_Map_DataType_Functions_Map, + RQ_SRS_018_ClickHouse_Map_DataType_Functions_Map_InvalidNumberOfArguments, + RQ_SRS_018_ClickHouse_Map_DataType_Functions_Map_MixedKeyOrValueTypes, + RQ_SRS_018_ClickHouse_Map_DataType_Functions_Map_MapAdd, + RQ_SRS_018_ClickHouse_Map_DataType_Functions_Map_MapSubstract, + RQ_SRS_018_ClickHouse_Map_DataType_Functions_Map_MapPopulateSeries, + RQ_SRS_018_ClickHouse_Map_DataType_Functions_MapContains, + RQ_SRS_018_ClickHouse_Map_DataType_Functions_MapKeys, + RQ_SRS_018_ClickHouse_Map_DataType_Functions_MapValues, + ), + content=''' +# SRS018 ClickHouse Map Data Type +# Software Requirements Specification + +## Table of Contents + +* 1 [Revision History](#revision-history) +* 2 [Introduction](#introduction) +* 3 [Requirements](#requirements) + * 3.1 [General](#general) + * 3.1.1 [RQ.SRS-018.ClickHouse.Map.DataType](#rqsrs-018clickhousemapdatatype) + * 3.2 [Performance](#performance) + * 3.2.1 [RQ.SRS-018.ClickHouse.Map.DataType.Performance.Vs.ArrayOfTuples](#rqsrs-018clickhousemapdatatypeperformancevsarrayoftuples) + * 3.2.2 [RQ.SRS-018.ClickHouse.Map.DataType.Performance.Vs.TupleOfArrays](#rqsrs-018clickhousemapdatatypeperformancevstupleofarrays) + * 3.3 [Key Types](#key-types) + * 3.3.1 [RQ.SRS-018.ClickHouse.Map.DataType.Key.String](#rqsrs-018clickhousemapdatatypekeystring) + * 3.3.2 [RQ.SRS-018.ClickHouse.Map.DataType.Key.Integer](#rqsrs-018clickhousemapdatatypekeyinteger) + * 3.4 [Value Types](#value-types) + * 3.4.1 [RQ.SRS-018.ClickHouse.Map.DataType.Value.String](#rqsrs-018clickhousemapdatatypevaluestring) + * 3.4.2 [RQ.SRS-018.ClickHouse.Map.DataType.Value.Integer](#rqsrs-018clickhousemapdatatypevalueinteger) + * 3.4.3 [RQ.SRS-018.ClickHouse.Map.DataType.Value.Array](#rqsrs-018clickhousemapdatatypevaluearray) + * 3.5 [Invalid Types](#invalid-types) + * 3.5.1 [RQ.SRS-018.ClickHouse.Map.DataType.Invalid.Nullable](#rqsrs-018clickhousemapdatatypeinvalidnullable) + * 3.5.2 [RQ.SRS-018.ClickHouse.Map.DataType.Invalid.NothingNothing](#rqsrs-018clickhousemapdatatypeinvalidnothingnothing) + * 3.6 [Duplicated Keys](#duplicated-keys) + * 3.6.1 [RQ.SRS-018.ClickHouse.Map.DataType.DuplicatedKeys](#rqsrs-018clickhousemapdatatypeduplicatedkeys) + * 3.7 [Array of Maps](#array-of-maps) + * 3.7.1 [RQ.SRS-018.ClickHouse.Map.DataType.ArrayOfMaps](#rqsrs-018clickhousemapdatatypearrayofmaps) + * 3.8 [Nested With Maps](#nested-with-maps) + * 3.8.1 [RQ.SRS-018.ClickHouse.Map.DataType.NestedWithMaps](#rqsrs-018clickhousemapdatatypenestedwithmaps) + * 3.9 [Value Retrieval](#value-retrieval) + * 3.9.1 [RQ.SRS-018.ClickHouse.Map.DataType.Value.Retrieval](#rqsrs-018clickhousemapdatatypevalueretrieval) + * 3.9.2 [RQ.SRS-018.ClickHouse.Map.DataType.Value.Retrieval.KeyInvalid](#rqsrs-018clickhousemapdatatypevalueretrievalkeyinvalid) + * 3.9.3 [RQ.SRS-018.ClickHouse.Map.DataType.Value.Retrieval.KeyNotFound](#rqsrs-018clickhousemapdatatypevalueretrievalkeynotfound) + * 3.10 [Converting Tuple(Array, Array) to Map](#converting-tuplearray-array-to-map) + * 3.10.1 [RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.TupleOfArraysToMap](#rqsrs-018clickhousemapdatatypeconversionfromtupleofarraystomap) + * 3.10.2 
[RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.TupleOfArraysMap.Invalid](#rqsrs-018clickhousemapdatatypeconversionfromtupleofarraysmapinvalid) + * 3.11 [Converting Array(Tuple(K,V)) to Map](#converting-arraytuplekv-to-map) + * 3.11.1 [RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.ArrayOfTuplesToMap](#rqsrs-018clickhousemapdatatypeconversionfromarrayoftuplestomap) + * 3.11.2 [RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.ArrayOfTuplesToMap.Invalid](#rqsrs-018clickhousemapdatatypeconversionfromarrayoftuplestomapinvalid) + * 3.12 [Keys and Values Subcolumns](#keys-and-values-subcolumns) + * 3.12.1 [RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Keys](#rqsrs-018clickhousemapdatatypesubcolumnskeys) + * 3.12.2 [RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Keys.ArrayFunctions](#rqsrs-018clickhousemapdatatypesubcolumnskeysarrayfunctions) + * 3.12.3 [RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Keys.InlineDefinedMap](#rqsrs-018clickhousemapdatatypesubcolumnskeysinlinedefinedmap) + * 3.12.4 [RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Values](#rqsrs-018clickhousemapdatatypesubcolumnsvalues) + * 3.12.5 [RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Values.ArrayFunctions](#rqsrs-018clickhousemapdatatypesubcolumnsvaluesarrayfunctions) + * 3.12.6 [RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Values.InlineDefinedMap](#rqsrs-018clickhousemapdatatypesubcolumnsvaluesinlinedefinedmap) + * 3.13 [Functions](#functions) + * 3.13.1 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.InlineDefinedMap](#rqsrs-018clickhousemapdatatypefunctionsinlinedefinedmap) + * 3.13.2 [`length`](#length) + * 3.13.2.1 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.Length](#rqsrs-018clickhousemapdatatypefunctionslength) + * 3.13.3 [`empty`](#empty) + * 3.13.3.1 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.Empty](#rqsrs-018clickhousemapdatatypefunctionsempty) + * 3.13.4 [`notEmpty`](#notempty) + * 3.13.4.1 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.NotEmpty](#rqsrs-018clickhousemapdatatypefunctionsnotempty) + * 3.13.5 [`map`](#map) + * 3.13.5.1 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map](#rqsrs-018clickhousemapdatatypefunctionsmap) + * 3.13.5.2 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.InvalidNumberOfArguments](#rqsrs-018clickhousemapdatatypefunctionsmapinvalidnumberofarguments) + * 3.13.5.3 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MixedKeyOrValueTypes](#rqsrs-018clickhousemapdatatypefunctionsmapmixedkeyorvaluetypes) + * 3.13.5.4 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MapAdd](#rqsrs-018clickhousemapdatatypefunctionsmapmapadd) + * 3.13.5.5 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MapSubstract](#rqsrs-018clickhousemapdatatypefunctionsmapmapsubstract) + * 3.13.5.6 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MapPopulateSeries](#rqsrs-018clickhousemapdatatypefunctionsmapmappopulateseries) + * 3.13.6 [`mapContains`](#mapcontains) + * 3.13.6.1 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.MapContains](#rqsrs-018clickhousemapdatatypefunctionsmapcontains) + * 3.13.7 [`mapKeys`](#mapkeys) + * 3.13.7.1 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.MapKeys](#rqsrs-018clickhousemapdatatypefunctionsmapkeys) + * 3.13.8 [`mapValues`](#mapvalues) + * 3.13.8.1 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.MapValues](#rqsrs-018clickhousemapdatatypefunctionsmapvalues) + +## Revision History + +This document is stored in an electronic form using [Git] source control management software +hosted in a [GitHub Repository]. +All the updates are tracked using the [Revision History]. 
+
+## Introduction
+
+This software requirements specification covers requirements for `Map(key, value)` data type in [ClickHouse].
+
+## Requirements
+
+### General
+
+#### RQ.SRS-018.ClickHouse.Map.DataType
+version: 1.0
+
+[ClickHouse] SHALL support `Map(key, value)` data type that stores `key:value` pairs.
+
+### Performance
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.Performance.Vs.ArrayOfTuples
+version: 1.0
+
+[ClickHouse] SHALL provide comparable performance for `Map(key, value)` data type as
+compared to `Array(Tuple(K,V))` data type.
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.Performance.Vs.TupleOfArrays
+version: 1.0
+
+[ClickHouse] SHALL provide comparable performance for `Map(key, value)` data type as
+compared to `Tuple(Array(String), Array(String))` data type where the first
+array defines an array of keys and the second array defines an array of values.
+
+### Key Types
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.Key.String
+version: 1.0
+
+[ClickHouse] SHALL support `Map(key, value)` data type where key is of a [String] type.
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.Key.Integer
+version: 1.0
+
+[ClickHouse] SHALL support `Map(key, value)` data type where key is of an [Integer] type.
+
+### Value Types
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.Value.String
+version: 1.0
+
+[ClickHouse] SHALL support `Map(key, value)` data type where value is of a [String] type.
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.Value.Integer
+version: 1.0
+
+[ClickHouse] SHALL support `Map(key, value)` data type where value is of an [Integer] type.
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.Value.Array
+version: 1.0
+
+[ClickHouse] SHALL support `Map(key, value)` data type where value is of an [Array] type.
+
+### Invalid Types
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.Invalid.Nullable
+version: 1.0
+
+[ClickHouse] SHALL not support creating table columns that have `Nullable(Map(key, value))` data type.
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.Invalid.NothingNothing
+version: 1.0
+
+[ClickHouse] SHALL not support creating table columns that have `Map(Nothing, Nothing)` data type.
+
+### Duplicated Keys
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.DuplicatedKeys
+version: 1.0
+
+[ClickHouse] MAY support `Map(key, value)` data type with duplicated keys.
+
+### Array of Maps
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.ArrayOfMaps
+version: 1.0
+
+[ClickHouse] SHALL support `Array(Map(key, value))` data type.
+
+### Nested With Maps
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.NestedWithMaps
+version: 1.0
+
+[ClickHouse] SHALL support defining `Map(key, value)` data type inside the [Nested] data type.
+
+### Value Retrieval
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.Value.Retrieval
+version: 1.0
+
+[ClickHouse] SHALL support getting the value from a `Map(key, value)` data type using `map[key]` syntax.
+If `key` has duplicates then the first `key:value` pair MAY be returned.
+
+For example,
+
+```sql
+SELECT a['key2'] FROM table_map;
+```
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.Value.Retrieval.KeyInvalid
+version: 1.0
+
+[ClickHouse] SHALL return an error when the key does not match the key type.
+
+For example,
+
+```sql
+SELECT map(1,2) AS m, m[1024]
+```
+
+Exceptions:
+
+* when the key is `NULL` the return value MAY be `NULL`
+* when the key value is not valid for the key type, for example it is out of range for [Integer] type,
+  when reading from a table column it MAY return the default value for the key data type
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.Value.Retrieval.KeyNotFound
+version: 1.0
+
+[ClickHouse] SHALL return the default value for the data type of the value
+when there's no corresponding `key` defined in the `Map(key, value)` data type.
+
+
+### Converting Tuple(Array, Array) to Map
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.TupleOfArraysToMap
+version: 1.0
+
+[ClickHouse] SHALL support converting [Tuple(Array, Array)] to `Map(key, value)` using the [CAST] function.
+
+``` sql
+SELECT CAST(([1, 2, 3], ['Ready', 'Steady', 'Go']), 'Map(UInt8, String)') AS map;
+```
+
+``` text
+┌─map───────────────────────────┐
+│ {1:'Ready',2:'Steady',3:'Go'} │
+└───────────────────────────────┘
+```
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.TupleOfArraysMap.Invalid
+version: 1.0
+
+[ClickHouse] MAY return an error when casting [Tuple(Array, Array)] to `Map(key, value)`
+
+* when arrays are not of equal size
+
+  For example,
+
+  ```sql
+  SELECT CAST(([2, 1, 1023], ['', '']), 'Map(UInt8, String)') AS map, map[10]
+  ```
+
+### Converting Array(Tuple(K,V)) to Map
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.ArrayOfTuplesToMap
+version: 1.0
+
+[ClickHouse] SHALL support converting [Array(Tuple(K,V))] to `Map(key, value)` using the [CAST] function.
+
+For example,
+
+```sql
+SELECT CAST(([(1,2),(3,4)]), 'Map(UInt8, UInt8)') AS map
+```
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.ArrayOfTuplesToMap.Invalid
+version: 1.0
+
+[ClickHouse] MAY return an error when casting [Array(Tuple(K, V))] to `Map(key, value)`
+
+* when element is not a [Tuple]
+
+  ```sql
+  SELECT CAST(([(1,2),(3)]), 'Map(UInt8, UInt8)') AS map
+  ```
+
+* when [Tuple] does not contain two elements
+
+  ```sql
+  SELECT CAST(([(1,2),(3,)]), 'Map(UInt8, UInt8)') AS map
+  ```
+
+### Keys and Values Subcolumns
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Keys
+version: 1.0
+
+[ClickHouse] SHALL support `keys` subcolumn in the `Map(key, value)` type that can be used
+to retrieve an [Array] of map keys.
+
+```sql
+SELECT m.keys FROM t_map;
+```
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Keys.ArrayFunctions
+version: 1.0
+
+[ClickHouse] SHALL support applying [Array] functions to the `keys` subcolumn in the `Map(key, value)` type.
+
+For example,
+
+```sql
+SELECT * FROM t_map WHERE has(m.keys, 'a');
+```
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Keys.InlineDefinedMap
+version: 1.0
+
+[ClickHouse] MAY not support using inline defined map to get `keys` subcolumn.
+
+For example,
+
+```sql
+SELECT map( 'aa', 4, '44' , 5) as c, c.keys
+```
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Values
+version: 1.0
+
+[ClickHouse] SHALL support `values` subcolumn in the `Map(key, value)` type that can be used
+to retrieve an [Array] of map values.
+
+```sql
+SELECT m.values FROM t_map;
+```
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Values.ArrayFunctions
+version: 1.0
+
+[ClickHouse] SHALL support applying [Array] functions to the `values` subcolumn in the `Map(key, value)` type.
+
+For example,
+
+```sql
+SELECT * FROM t_map WHERE has(m.values, 'a');
+```
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Values.InlineDefinedMap
+version: 1.0
+
+[ClickHouse] MAY not support using inline defined map to get `values` subcolumn.
+
+For example,
+
+```sql
+SELECT map( 'aa', 4, '44' , 5) as c, c.values
+```
+
+### Functions
+
+#### RQ.SRS-018.ClickHouse.Map.DataType.Functions.InlineDefinedMap
+version: 1.0
+
+[ClickHouse] SHALL support using inline defined maps as an argument to map functions.
+
+For example,
+
+```sql
+SELECT map( 'aa', 4, '44' , 5) as c, mapKeys(c)
+SELECT map( 'aa', 4, '44' , 5) as c, mapValues(c)
+```
+
+#### `length`
+
+##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.Length
+version: 1.0
+
+[ClickHouse] SHALL support `Map(key, value)` data type as an argument to the [length] function
+that SHALL return the number of keys in the map.
+
+For example,
+
+```sql
+SELECT length(map(1,2,3,4))
+SELECT length(map())
+```
+
+#### `empty`
+
+##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.Empty
+version: 1.0
+
+[ClickHouse] SHALL support `Map(key, value)` data type as an argument to the [empty] function
+that SHALL return 1 if the number of keys in the map is 0, otherwise if the number of keys is
+greater than or equal to 1 it SHALL return 0.
+
+For example,
+
+```sql
+SELECT empty(map(1,2,3,4))
+SELECT empty(map())
+```
+
+#### `notEmpty`
+
+##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.NotEmpty
+version: 1.0
+
+[ClickHouse] SHALL support `Map(key, value)` data type as an argument to the [notEmpty] function
+that SHALL return 0 if the number of keys in the map is 0, otherwise if the number of keys is
+greater than or equal to 1 it SHALL return 1.
+
+For example,
+
+```sql
+SELECT notEmpty(map(1,2,3,4))
+SELECT notEmpty(map())
+```
+
+#### `map`
+
+##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map
+version: 1.0
+
+[ClickHouse] SHALL support arranging `key, value` pairs into `Map(key, value)` data type
+using the `map` function.
+
+**Syntax**
+
+``` sql
+map(key1, value1[, key2, value2, ...])
+```
+
+For example,
+
+``` sql
+SELECT map('key1', number, 'key2', number * 2) FROM numbers(3);
+
+┌─map('key1', number, 'key2', multiply(number, 2))─┐
+│ {'key1':0,'key2':0}                              │
+│ {'key1':1,'key2':2}                              │
+│ {'key1':2,'key2':4}                              │
+└──────────────────────────────────────────────────┘
+```
+
+##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.InvalidNumberOfArguments
+version: 1.0
+
+[ClickHouse] SHALL return an error when the `map` function is called with an odd number of arguments.
+
+##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MixedKeyOrValueTypes
+version: 1.0
+
+[ClickHouse] SHALL return an error when the `map` function is called with mixed key or value types.
+
+
+##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MapAdd
+version: 1.0
+
+[ClickHouse] SHALL support converting the results of the `mapAdd` function to a `Map(key, value)` data type.
+
+For example,
+
+``` sql
+SELECT CAST(mapAdd(([toUInt8(1), 2], [1, 1]), ([toUInt8(1), 2], [1, 1])), "Map(Int8,Int8)")
+```
+
+##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MapSubstract
+version: 1.0
+
+[ClickHouse] SHALL support converting the results of the `mapSubtract` function to a `Map(key, value)` data type.
+
+For example,
+
+```sql
+SELECT CAST(mapSubtract(([toUInt8(1), 2], [toInt32(1), 1]), ([toUInt8(1), 2], [toInt32(2), 1])), "Map(Int8,Int8)")
+```
+##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MapPopulateSeries
+version: 1.0
+
+[ClickHouse] SHALL support converting the results of the `mapPopulateSeries` function to a `Map(key, value)` data type.
+
+For example,
+
+```sql
+SELECT CAST(mapPopulateSeries([1,2,4], [11,22,44], 5), "Map(Int8,Int8)")
+```
+
+#### `mapContains`
+
+##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.MapContains
+version: 1.0
+
+[ClickHouse] SHALL support the `mapContains(map, key)` function to check whether `map.keys` contains the `key`.
+
+For example,
+
+```sql
+SELECT mapContains(a, 'abc') from table_map;
+```
+
+#### `mapKeys`
+
+##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.MapKeys
+version: 1.0
+
+[ClickHouse] SHALL support the `mapKeys(map)` function to return all the map keys in the [Array] format.
+
+For example,
+
+```sql
+SELECT mapKeys(a) from table_map;
+```
+
+#### `mapValues`
+
+##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.MapValues
+version: 1.0
+
+[ClickHouse] SHALL support the `mapValues(map)` function to return all the map values in the [Array] format.
+
+For example,
+
+```sql
+SELECT mapValues(a) from table_map;
+```
+
+[Nested]: https://clickhouse.tech/docs/en/sql-reference/data-types/nested-data-structures/nested/
+[length]: https://clickhouse.tech/docs/en/sql-reference/functions/array-functions/#array_functions-length
+[empty]: https://clickhouse.tech/docs/en/sql-reference/functions/array-functions/#function-empty
+[notEmpty]: https://clickhouse.tech/docs/en/sql-reference/functions/array-functions/#function-notempty
+[CAST]: https://clickhouse.tech/docs/en/sql-reference/functions/type-conversion-functions/#type_conversion_function-cast
+[Tuple]: https://clickhouse.tech/docs/en/sql-reference/data-types/tuple/
+[Tuple(Array,Array)]: https://clickhouse.tech/docs/en/sql-reference/data-types/tuple/
+[Array]: https://clickhouse.tech/docs/en/sql-reference/data-types/array/
+[String]: https://clickhouse.tech/docs/en/sql-reference/data-types/string/
+[Integer]: https://clickhouse.tech/docs/en/sql-reference/data-types/int-uint/
+[ClickHouse]: https://clickhouse.tech
+[GitHub Repository]: https://github.com/ClickHouse/ClickHouse/blob/master/tests/testflows/map_type/requirements/requirements.md
+[Revision History]: https://github.com/ClickHouse/ClickHouse/commits/master/tests/testflows/map_type/requirements/requirements.md
+[Git]: https://git-scm.com/
+[GitHub]: https://github.com
+''')
diff --git a/tests/testflows/map_type/tests/__init__.py b/tests/testflows/map_type/tests/__init__.py
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/tests/testflows/map_type/tests/common.py b/tests/testflows/map_type/tests/common.py
new file mode 100644
index 00000000000..a3a0d0ef0b1
--- /dev/null
+++ b/tests/testflows/map_type/tests/common.py
@@ -0,0 +1,49 @@
+import uuid
+from collections import namedtuple
+
+from testflows.core import *
+from testflows.core.name import basename, parentname
+from testflows._core.testtype import TestSubType
+
+def getuid():
+    if current().subtype == TestSubType.Example:
+        testname = f"{basename(parentname(current().name)).replace(' ', '_').replace(',','')}"
+    else:
+        testname = f"{basename(current().name).replace(' ', '_').replace(',','')}"
+    return testname + "_" + str(uuid.uuid1()).replace('-', '_')
+
+@TestStep(Given)
+def allow_experimental_map_type(self):
+    """Set allow_experimental_map_type = 1
+    """
+    setting 
= ("allow_experimental_map_type", 1) + default_query_settings = None + + try: + with By("adding allow_experimental_map_type to the default query settings"): + default_query_settings = getsattr(current().context, "default_query_settings", []) + default_query_settings.append(setting) + yield + finally: + with Finally("I remove allow_experimental_map_type from the default query settings"): + if default_query_settings: + try: + default_query_settings.pop(default_query_settings.index(setting)) + except ValueError: + pass + +@TestStep(Given) +def create_table(self, name, statement, on_cluster=False): + """Create table. + """ + node = current().context.node + try: + with Given(f"I have a {name} table"): + node.query(statement.format(name=name)) + yield name + finally: + with Finally("I drop the table"): + if on_cluster: + node.query(f"DROP TABLE IF EXISTS {name} ON CLUSTER {on_cluster}") + else: + node.query(f"DROP TABLE IF EXISTS {name}") diff --git a/tests/testflows/map_type/tests/feature.py b/tests/testflows/map_type/tests/feature.py new file mode 100755 index 00000000000..5fd48844825 --- /dev/null +++ b/tests/testflows/map_type/tests/feature.py @@ -0,0 +1,1195 @@ +# -*- coding: utf-8 -*- +import time + +from testflows.core import * +from testflows.asserts import error + +from map_type.requirements import * +from map_type.tests.common import * + +@TestOutline +def select_map(self, map, output, exitcode=0, message=None): + """Create a map using select statement. + """ + node = self.context.node + + with When("I create a map using select", description=map): + r = node.query(f"SELECT {map}", exitcode=exitcode, message=message) + + with Then("I expect output to match", description=output): + assert r.output == output, error() + +@TestOutline +def table_map(self, type, data, select, filter, exitcode, message, check_insert=False, order_by=None): + """Check using a map column in a table. 
+ """ + uid = getuid() + node = self.context.node + + if order_by is None: + order_by = "m" + + with Given(f"table definition with {type}"): + sql = "CREATE TABLE {name} (m " + type + ") ENGINE = MergeTree() ORDER BY " + order_by + + with And(f"I create a table", description=sql): + table = create_table(name=uid, statement=sql) + + with When("I insert data into the map column"): + if check_insert: + node.query(f"INSERT INTO {table} VALUES {data}", exitcode=exitcode, message=message) + else: + node.query(f"INSERT INTO {table} VALUES {data}") + + if not check_insert: + with And("I try to read from the table"): + node.query(f"SELECT {select} FROM {table} WHERE {filter} FORMAT JSONEachRow", exitcode=exitcode, message=message) + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Key_String("1.0") +) +@Examples("map output", [ + ("map('',1)", "{'':1}", Name("empty string")), + ("map('hello',1)", "{'hello':1}", Name("non-empty string")), + ("map('Gãńdåłf_Thê_Gręât',1)", "{'Gãńdåłf_Thê_Gręât':1}", Name("utf-8 string")), + ("map('hello there',1)", "{'hello there':1}", Name("multi word string")), + ("map('hello',1,'there',2)", "{'hello':1,'there':2}", Name("multiple keys")), + ("map(toString(1),1)", "{'1':1}", Name("toString")), + ("map(toFixedString('1',1),1)", "{'1':1}", Name("toFixedString")), + ("map(toNullable('1'),1)", "{'1':1}", Name("Nullable")), + ("map(toNullable(NULL),1)", "{NULL:1}", Name("Nullable(NULL)")), + ("map(toLowCardinality('1'),1)", "{'1':1}", Name("LowCardinality(String)")), + ("map(toLowCardinality(toFixedString('1',1)),1)", "{'1':1}", Name("LowCardinality(FixedString)")), +], row_format="%20s,%20s") +def select_map_with_key_string(self, map, output): + """Create a map using select that has key string type. + """ + select_map(map=map, output=output) + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Value_String("1.0") +) +@Examples("map output", [ + ("map('key','')", "{'key':''}", Name("empty string")), + ("map('key','hello')", "{'key':'hello'}", Name("non-empty string")), + ("map('key','Gãńdåłf_Thê_Gręât')", "{'key':'Gãńdåłf_Thê_Gręât'}", Name("utf-8 string")), + ("map('key','hello there')", "{'key':'hello there'}", Name("multi word string")), + ("map('key','hello','key2','there')", "{'key':'hello','key2':'there'}", Name("multiple keys")), + ("map('key',toString(1))", "{'key':'1'}", Name("toString")), + ("map('key',toFixedString('1',1))", "{'key':'1'}", Name("toFixedString")), + ("map('key',toNullable('1'))", "{'key':'1'}", Name("Nullable")), + ("map('key',toNullable(NULL))", "{'key':NULL}", Name("Nullable(NULL)")), + ("map('key',toLowCardinality('1'))", "{'key':'1'}", Name("LowCardinality(String)")), + ("map('key',toLowCardinality(toFixedString('1',1)))", "{'key':'1'}", Name("LowCardinality(FixedString)")), +], row_format="%20s,%20s") +def select_map_with_value_string(self, map, output): + """Create a map using select that has value string type. 
+ """ + select_map(map=map, output=output) + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Value_Array("1.0") +) +@Examples("map output", [ + ("map('key',[])", "{'key':[]}", Name("empty Array")), + ("map('key',[1,2,3])", "{'key':[1,2,3]}", Name("non-empty array of ints")), + ("map('key',['1','2','3'])", "{'key':['1','2','3']}", Name("non-empty array of strings")), + ("map('key',[map(1,2),map(2,3)])", "{'key':[{1:2},{2:3}]}", Name("non-empty array of maps")), + ("map('key',[map(1,[map(1,[1])]),map(2,[map(2,[3])])])", "{'key':[{1:[{1:[1]}]},{2:[{2:[3]}]}]}", Name("non-empty array of maps of array of maps")), +]) +def select_map_with_value_array(self, map, output): + """Create a map using select that has value array type. + """ + select_map(map=map, output=output) + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Value_Integer("1.0") +) +@Examples("map output", [ + ("(map(1,127,2,0,3,-128))", '{1:127,2:0,3:-128}', Name("Int8")), + ("(map(1,0,2,255))", '{1:0,2:255}', Name("UInt8")), + ("(map(1,32767,2,0,3,-32768))", '{1:32767,2:0,3:-32768}', Name("Int16")), + ("(map(1,0,2,65535))", '{1:0,2:65535}', Name("UInt16")), + ("(map(1,2147483647,2,0,3,-2147483648))", '{1:2147483647,2:0,3:-2147483648}', Name("Int32")), + ("(map(1,0,2,4294967295))", '{1:0,2:4294967295}', Name("UInt32")), + ("(map(1,9223372036854775807,2,0,3,-9223372036854775808))", '{1:"9223372036854775807",2:"0",3:"-9223372036854775808"}', Name("Int64")), + ("(map(1,0,2,18446744073709551615))", '{1:0,2:18446744073709551615}', Name("UInt64")), + ("(map(1,170141183460469231731687303715884105727,2,0,3,-170141183460469231731687303715884105728))", '{1:1.7014118346046923e38,2:0,3:-1.7014118346046923e38}', Name("Int128")), + ("(map(1,57896044618658097711785492504343953926634992332820282019728792003956564819967,2,0,3,-57896044618658097711785492504343953926634992332820282019728792003956564819968))", '{1:5.78960446186581e76,2:0,3:-5.78960446186581e76}', Name("Int256")), + ("(map(1,0,2,115792089237316195423570985008687907853269984665640564039457584007913129639935))", '{1:0,2:1.157920892373162e77}', Name("UInt256")), + ("(map(1,toNullable(1)))", '{1:1}', Name("toNullable")), + ("(map(1,toNullable(NULL)))", '{1:NULL}', Name("toNullable(NULL)")), +]) +def select_map_with_value_integer(self, map, output): + """Create a map using select that has value integer type. 
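+
+    Int64 values are expected to appear quoted in the output, while
+    Int128/Int256 values are expected in scientific notation
+    (see the examples above).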
+ """ + select_map(map=map, output=output) + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Key_Integer("1.0") +) +@Examples("map output", [ + ("(map(127,1,0,1,-128,1))", '{127:1,0:1,-128:1}', Name("Int8")), + ("(map(0,1,255,1))", '{0:1,255:1}', Name("UInt8")), + ("(map(32767,1,0,1,-32768,1))", '{32767:1,0:1,-32768:1}', Name("Int16")), + ("(map(0,1,65535,1))", '{0:1,65535:1}', Name("UInt16")), + ("(map(2147483647,1,0,1,-2147483648,1))", '{2147483647:1,0:1,-2147483648:1}', Name("Int32")), + ("(map(0,1,4294967295,1))", '{0:1,4294967295:1}', Name("UInt32")), + ("(map(9223372036854775807,1,0,1,-9223372036854775808,1))", '{"9223372036854775807":1,"0":1,"-9223372036854775808":1}', Name("Int64")), + ("(map(0,1,18446744073709551615,1))", '{0:1,18446744073709551615:1}', Name("UInt64")), + ("(map(170141183460469231731687303715884105727,1,0,1,-170141183460469231731687303715884105728,1))", '{1.7014118346046923e38:1,0:1,-1.7014118346046923e38:1}', Name("Int128")), + ("(map(57896044618658097711785492504343953926634992332820282019728792003956564819967,1,0,1,-57896044618658097711785492504343953926634992332820282019728792003956564819968,1))", '{5.78960446186581e76:1,0:1,-5.78960446186581e76:1}', Name("Int256")), + ("(map(0,1,115792089237316195423570985008687907853269984665640564039457584007913129639935,1))", '{0:1,1.157920892373162e77:1}', Name("UInt256")), + ("(map(toNullable(1),1))", '{1:1}', Name("toNullable")), + ("(map(toNullable(NULL),1))", '{NULL:1}', Name("toNullable(NULL)")), +]) +def select_map_with_key_integer(self, map, output): + """Create a map using select that has key integer type. + """ + select_map(map=map, output=output) + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Key_String("1.0") +) +@Examples("type data output", [ + ("Map(String, Int8)", "('2020-01-01', map('',1))", '{"d":"2020-01-01","m":{"":1}}', Name("empty string")), + ("Map(String, Int8)", "('2020-01-01', map('hello',1))", '{"d":"2020-01-01","m":{"hello":1}}', Name("non-empty string")), + ("Map(String, Int8)", "('2020-01-01', map('Gãńdåłf_Thê_Gręât',1))", '{"d":"2020-01-01","m":{"Gãńdåłf_Thê_Gręât":1}}', Name("utf-8 string")), + ("Map(String, Int8)", "('2020-01-01', map('hello there',1))", '{"d":"2020-01-01","m":{"hello there":1}}', Name("multi word string")), + ("Map(String, Int8)", "('2020-01-01', map('hello',1,'there',2))", '{"d":"2020-01-01","m":{"hello":1,"there":2}}', Name("multiple keys")), + ("Map(String, Int8)", "('2020-01-01', map(toString(1),1))", '{"d":"2020-01-01","m":{"1":1}}', Name("toString")), + ("Map(FixedString(1), Int8)", "('2020-01-01', map(toFixedString('1',1),1))", '{"d":"2020-01-01","m":{"1":1}}', Name("FixedString")), + ("Map(Nullable(String), Int8)", "('2020-01-01', map(toNullable('1'),1))", '{"d":"2020-01-01","m":{"1":1}}', Name("Nullable")), + ("Map(Nullable(String), Int8)", "('2020-01-01', map(toNullable(NULL),1))", '{"d":"2020-01-01","m":{null:1}}', Name("Nullable(NULL)")), + ("Map(LowCardinality(String), Int8)", "('2020-01-01', map(toLowCardinality('1'),1))", '{"d":"2020-01-01","m":{"1":1}}', Name("LowCardinality(String)")), + ("Map(LowCardinality(String), Int8)", "('2020-01-01', map('1',1))", '{"d":"2020-01-01","m":{"1":1}}', Name("LowCardinality(String) cast from String")), + ("Map(LowCardinality(String), LowCardinality(String))", "('2020-01-01', map('1','1'))", '{"d":"2020-01-01","m":{"1":"1"}}', Name("LowCardinality(String) for key and value")), + ("Map(LowCardinality(FixedString(1)), Int8)", "('2020-01-01', 
map(toLowCardinality(toFixedString('1',1)),1))", '{"d":"2020-01-01","m":{"1":1}}', Name("LowCardinality(FixedString)")),
+])
+def table_map_with_key_string(self, type, data, output):
+    """Check what values we can insert into map type column with key string.
+    """
+    insert_into_table(type=type, data=data, output=output)
+
+@TestOutline(Scenario)
+@Requirements(
+    RQ_SRS_018_ClickHouse_Map_DataType_Key_String("1.0")
+)
+@Examples("type data output select", [
+    ("Map(String, Int8)", "('2020-01-01', map('',1))", '{"m":1}', "m[''] AS m", Name("empty string")),
+    ("Map(String, Int8)", "('2020-01-01', map('hello',1))", '{"m":1}', "m['hello'] AS m", Name("non-empty string")),
+    ("Map(String, Int8)", "('2020-01-01', map('Gãńdåłf_Thê_Gręât',1))", '{"m":1}', "m['Gãńdåłf_Thê_Gręât'] AS m", Name("utf-8 string")),
+    ("Map(String, Int8)", "('2020-01-01', map('hello there',1))", '{"m":1}', "m['hello there'] AS m", Name("multi word string")),
+    ("Map(String, Int8)", "('2020-01-01', map('hello',1,'there',2))", '{"m":1}', "m['hello'] AS m", Name("multiple keys")),
+    ("Map(String, Int8)", "('2020-01-01', map(toString(1),1))", '{"m":1}', "m['1'] AS m", Name("toString")),
+    ("Map(FixedString(1), Int8)", "('2020-01-01', map(toFixedString('1',1),1))", '{"m":1}', "m['1'] AS m", Name("FixedString")),
+    ("Map(Nullable(String), Int8)", "('2020-01-01', map(toNullable('1'),1))", '{"m":1}', "m['1'] AS m", Name("Nullable")),
+    ("Map(Nullable(String), Int8)", "('2020-01-01', map(toNullable(NULL),1))", '{"m":1}', "m[null] AS m", Name("Nullable(NULL)")),
+    ("Map(LowCardinality(String), Int8)", "('2020-01-01', map(toLowCardinality('1'),1))", '{"m":1}', "m['1'] AS m", Name("LowCardinality(String)")),
+    ("Map(LowCardinality(String), Int8)", "('2020-01-01', map('1',1))", '{"m":1}', "m['1'] AS m", Name("LowCardinality(String) cast from String")),
+    ("Map(LowCardinality(String), LowCardinality(String))", "('2020-01-01', map('1','1'))", '{"m":"1"}', "m['1'] AS m", Name("LowCardinality(String) for key and value")),
+    ("Map(LowCardinality(FixedString(1)), Int8)", "('2020-01-01', map(toLowCardinality(toFixedString('1',1)),1))", '{"m":1}', "m['1'] AS m", Name("LowCardinality(FixedString)")),
+])
+def table_map_select_key_with_key_string(self, type, data, output, select):
+    """Check what values we can insert into map type column with key string and if the value can be selected by key.
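+
+    For example, the first example row above is assumed to expand to
+    (sketch, with a hypothetical table name t):
+
+        CREATE TABLE t (d DATE, m Map(String, Int8)) ENGINE = MergeTree() PARTITION BY m ORDER BY d
+        INSERT INTO t VALUES ('2020-01-01', map('',1))
+        SELECT m[''] AS m FROM t FORMAT JSONEachRow
+        -- {"m":1}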
+ """ + insert_into_table(type=type, data=data, output=output, select=select) + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Value_String("1.0") +) +@Examples("type data output", [ + ("Map(String, String)", "('2020-01-01', map('key',''))", '{"d":"2020-01-01","m":{"key":""}}', Name("empty string")), + ("Map(String, String)", "('2020-01-01', map('key','hello'))", '{"d":"2020-01-01","m":{"key":"hello"}}', Name("non-empty string")), + ("Map(String, String)", "('2020-01-01', map('key','Gãńdåłf_Thê_Gręât'))", '{"d":"2020-01-01","m":{"key":"Gãńdåłf_Thê_Gręât"}}', Name("utf-8 string")), + ("Map(String, String)", "('2020-01-01', map('key', 'hello there'))", '{"d":"2020-01-01","m":{"key":"hello there"}}', Name("multi word string")), + ("Map(String, String)", "('2020-01-01', map('key','hello','key2','there'))", '{"d":"2020-01-01","m":{"key":"hello","key2":"there"}}', Name("multiple keys")), + ("Map(String, String)", "('2020-01-01', map('key', toString(1)))", '{"d":"2020-01-01","m":{"key":"1"}}', Name("toString")), + ("Map(String, FixedString(1))", "('2020-01-01', map('key',toFixedString('1',1)))", '{"d":"2020-01-01","m":{"key":"1"}}', Name("FixedString")), + ("Map(String, Nullable(String))", "('2020-01-01', map('key',toNullable('1')))", '{"d":"2020-01-01","m":{"key":"1"}}', Name("Nullable")), + ("Map(String, Nullable(String))", "('2020-01-01', map('key',toNullable(NULL)))", '{"d":"2020-01-01","m":{"key":null}}', Name("Nullable(NULL)")), + ("Map(String, LowCardinality(String))", "('2020-01-01', map('key',toLowCardinality('1')))", '{"d":"2020-01-01","m":{"key":"1"}}', Name("LowCardinality(String)")), + ("Map(String, LowCardinality(String))", "('2020-01-01', map('key','1'))", '{"d":"2020-01-01","m":{"key":"1"}}', Name("LowCardinality(String) cast from String")), + ("Map(LowCardinality(String), LowCardinality(String))", "('2020-01-01', map('1','1'))", '{"d":"2020-01-01","m":{"1":"1"}}', Name("LowCardinality(String) for key and value")), + ("Map(String, LowCardinality(FixedString(1)))", "('2020-01-01', map('key',toLowCardinality(toFixedString('1',1))))", '{"d":"2020-01-01","m":{"key":"1"}}', Name("LowCardinality(FixedString)")) +]) +def table_map_with_value_string(self, type, data, output): + """Check what values we can insert into map type column with value string. 
+ """ + insert_into_table(type=type, data=data, output=output) + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Value_String("1.0") +) +@Examples("type data output", [ + ("Map(String, String)", "('2020-01-01', map('key',''))", '{"m":""}', Name("empty string")), + ("Map(String, String)", "('2020-01-01', map('key','hello'))", '{"m":"hello"}', Name("non-empty string")), + ("Map(String, String)", "('2020-01-01', map('key','Gãńdåłf_Thê_Gręât'))", '{"m":"Gãńdåłf_Thê_Gręât"}', Name("utf-8 string")), + ("Map(String, String)", "('2020-01-01', map('key', 'hello there'))", '{"m":"hello there"}', Name("multi word string")), + ("Map(String, String)", "('2020-01-01', map('key','hello','key2','there'))", '{"m":"hello"}', Name("multiple keys")), + ("Map(String, String)", "('2020-01-01', map('key', toString(1)))", '{"m":"1"}', Name("toString")), + ("Map(String, FixedString(1))", "('2020-01-01', map('key',toFixedString('1',1)))", '{"m":"1"}', Name("FixedString")), + ("Map(String, Nullable(String))", "('2020-01-01', map('key',toNullable('1')))", '{"m":"1"}', Name("Nullable")), + ("Map(String, Nullable(String))", "('2020-01-01', map('key',toNullable(NULL)))", '{"m":null}', Name("Nullable(NULL)")), + ("Map(String, LowCardinality(String))", "('2020-01-01', map('key',toLowCardinality('1')))", '{"m":"1"}', Name("LowCardinality(String)")), + ("Map(String, LowCardinality(String))", "('2020-01-01', map('key','1'))", '{"m":"1"}', Name("LowCardinality(String) cast from String")), + ("Map(LowCardinality(String), LowCardinality(String))", "('2020-01-01', map('key','1'))", '{"m":"1"}', Name("LowCardinality(String) for key and value")), + ("Map(String, LowCardinality(FixedString(1)))", "('2020-01-01', map('key',toLowCardinality(toFixedString('1',1))))", '{"m":"1"}', Name("LowCardinality(FixedString)")) +]) +def table_map_select_key_with_value_string(self, type, data, output): + """Check what values we can insert into map type column with value string and if it can be selected by key. 
+ """ + insert_into_table(type=type, data=data, output=output, select="m['key'] AS m") + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Value_Integer("1.0") +) +@Examples("type data output", [ + ("Map(Int8, Int8)", "('2020-01-01', map(1,127,2,0,3,-128))", '{"d":"2020-01-01","m":{1:127,2:0,3:-128}}', Name("Int8")), + ("Map(Int8, UInt8)", "('2020-01-01', map(1,0,2,255))", '{"d":"2020-01-01","m":{1:0,2:255}}', Name("UInt8")), + ("Map(Int8, Int16)", "('2020-01-01', map(1,127,2,0,3,-128))", '{"d":"2020-01-01","m":{1:32767,2:0,3:-32768}}', Name("Int16")), + ("Map(Int8, UInt16)", "('2020-01-01', map(1,0,2,65535))", '{"d":"2020-01-01","m":{1:0,2:65535}}', Name("UInt16")), + ("Map(Int8, Int32)", "('2020-01-01', map(1,127,2,0,3,-128))", '{"d":"2020-01-01","m":{1:2147483647,2:0,3:-2147483648}}', Name("Int32")), + ("Map(Int8, UInt32)", "('2020-01-01', map(1,0,2,4294967295))", '{"d":"2020-01-01","m":{1:0,2:4294967295}}', Name("UInt32")), + ("Map(Int8, Int64)", "('2020-01-01', map(1,9223372036854775807,2,0,3,-9223372036854775808))", '{"d":"2020-01-01","m":{1:"9223372036854775807",2:"0",3:"-9223372036854775808"}}', Name("Int64")), + ("Map(Int8, UInt64)", "('2020-01-01', map(1,0,2,18446744073709551615))", '{"d":"2020-01-01","m":{1:"0",2:"18446744073709551615"}}', Name("UInt64")), + ("Map(Int8, Int128)", "('2020-01-01', map(1,170141183460469231731687303715884105727,2,0,3,-170141183460469231731687303715884105728))", '{"d":"2020-01-01","m":{1:"170141183460469231731687303715884105727",2:"0",3:"-170141183460469231731687303715884105728"}}', Name("Int128")), + ("Map(Int8, Int256)", "('2020-01-01', map(1,57896044618658097711785492504343953926634992332820282019728792003956564819967,2,0,3,-57896044618658097711785492504343953926634992332820282019728792003956564819968))", '{"d":"2020-01-01","m":{1:"57896044618658097711785492504343953926634992332820282019728792003956564819967",2:"0",3:"-57896044618658097711785492504343953926634992332820282019728792003956564819968"}}', Name("Int256")), + ("Map(Int8, UInt256)", "('2020-01-01', map(1,0,2,115792089237316195423570985008687907853269984665640564039457584007913129639935))", '{"d":"2020-01-01","m":{1:"0",2:"115792089237316195423570985008687907853269984665640564039457584007913129639935"}}', Name("UInt256")), + ("Map(Int8, Nullable(Int8))", "('2020-01-01', map(1,toNullable(1)))", '{"d":"2020-01-01","m":{1:1}}', Name("toNullable")), + ("Map(Int8, Nullable(Int8))", "('2020-01-01', map(1,toNullable(NULL)))", '{"d":"2020-01-01","m":{1:null}}', Name("toNullable(NULL)")), +]) +def table_map_with_value_integer(self, type, data, output): + """Check what values we can insert into map type column with value integer. 
+ """ + insert_into_table(type=type, data=data, output=output) + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Value_Array("1.0") +) +@Examples("type data output", [ + ("Map(String, Array(Int8))", "('2020-01-01', map('key',[]))", '{"d":"2020-01-01","m":{"key":[]}}', Name("empty array")), + ("Map(String, Array(Int8))", "('2020-01-01', map('key',[1,2,3]))", '{"d":"2020-01-01","m":{"key":[1,2,3]}}', Name("non-empty array of ints")), + ("Map(String, Array(String))", "('2020-01-01', map('key',['1','2','3']))", '{"d":"2020-01-01","m":{"key":["1","2","3"]}}', Name("non-empty array of strings")), + ("Map(String, Array(Map(Int8, Int8)))", "('2020-01-01', map('key',[map(1,2),map(2,3)]))", '{"d":"2020-01-01","m":{"key":[{1:2},{2:3}]}}', Name("non-empty array of maps")), + ("Map(String, Array(Map(Int8, Array(Map(Int8, Array(Int8))))))", "('2020-01-01', map('key',[map(1,[map(1,[1])]),map(2,[map(2,[3])])]))", '{"d":"2020-01-01","m":{"key":[{1:[{1:[1]}]},{2:[{2:[3]}]}]}}', Name("non-empty array of maps of array of maps")), +]) +def table_map_with_value_array(self, type, data, output): + """Check what values we can insert into map type column with value Array. + """ + insert_into_table(type=type, data=data, output=output) + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Key_Integer("1.0") +) +@Examples("type data output", [ + ("Map(Int8, Int8)", "('2020-01-01', map(127,1,0,1,-128,1))", '{"d":"2020-01-01","m":{127:1,0:1,-128:1}}', Name("Int8")), + ("Map(UInt8, Int8)", "('2020-01-01', map(0,1,255,1))", '{"d":"2020-01-01","m":{0:1,255:1}}', Name("UInt8")), + ("Map(Int16, Int8)", "('2020-01-01', map(127,1,0,1,-128,1))", '{"d":"2020-01-01","m":{32767:1,0:1,-32768:1}}', Name("Int16")), + ("Map(UInt16, Int8)", "('2020-01-01', map(0,1,65535,1))", '{"d":"2020-01-01","m":{0:1,65535:1}}', Name("UInt16")), + ("Map(Int32, Int8)", "('2020-01-01', map(2147483647,1,0,1,-2147483648,1))", '{"d":"2020-01-01","m":{2147483647:1,0:1,-2147483648:1}}', Name("Int32")), + ("Map(UInt32, Int8)", "('2020-01-01', map(0,1,4294967295,1))", '{"d":"2020-01-01","m":{0:1,4294967295:1}}', Name("UInt32")), + ("Map(Int64, Int8)", "('2020-01-01', map(9223372036854775807,1,0,1,-9223372036854775808,1))", '{"d":"2020-01-01","m":{"9223372036854775807":1,"0":1,"-9223372036854775808":1}}', Name("Int64")), + ("Map(UInt64, Int8)", "('2020-01-01', map(0,1,18446744073709551615,1))", '{"d":"2020-01-01","m":{"0":1,"18446744073709551615":1}}', Name("UInt64")), + ("Map(Int128, Int8)", "('2020-01-01', map(170141183460469231731687303715884105727,1,0,1,-170141183460469231731687303715884105728,1))", '{"d":"2020-01-01","m":{170141183460469231731687303715884105727:1,0:1,"-170141183460469231731687303715884105728":1}}', Name("Int128")), + ("Map(Int256, Int8)", "('2020-01-01', map(57896044618658097711785492504343953926634992332820282019728792003956564819967,1,0,1,-57896044618658097711785492504343953926634992332820282019728792003956564819968,1))", '{"d":"2020-01-01","m":{"57896044618658097711785492504343953926634992332820282019728792003956564819967":1,"0":1,"-57896044618658097711785492504343953926634992332820282019728792003956564819968":1}}', Name("Int256")), + ("Map(UInt256, Int8)", "('2020-01-01', map(0,1,115792089237316195423570985008687907853269984665640564039457584007913129639935,1))", '{"d":"2020-01-01","m":{"0":1,"115792089237316195423570985008687907853269984665640564039457584007913129639935":1}}', Name("UInt256")), + ("Map(Nullable(Int8), Int8)", "('2020-01-01', map(toNullable(1),1))", 
'{"d":"2020-01-01","m":{1:1}}', Name("toNullable")), + ("Map(Nullable(Int8), Int8)", "('2020-01-01', map(toNullable(NULL),1))", '{"d":"2020-01-01","m":{null:1}}', Name("toNullable(NULL)")), +]) +def table_map_with_key_integer(self, type, data, output): + """Check what values we can insert into map type column with key integer. + """ + insert_into_table(type=type, data=data, output=output) + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Key_Integer("1.0") +) +@Examples("type data output select", [ + ("Map(Int8, Int8)", "('2020-01-01', map(127,1,0,1,-128,1))", '{"m":1}', "m[127] AS m", Name("Int8")), + ("Map(UInt8, Int8)", "('2020-01-01', map(0,1,255,1))", '{"m":2}', "(m[255] + m[0]) AS m", Name("UInt8")), + ("Map(Int16, Int8)", "('2020-01-01', map(127,1,0,1,-128,1))", '{"m":3}', "(m[-128] + m[0] + m[-128]) AS m", Name("Int16")), + ("Map(UInt16, Int8)", "('2020-01-01', map(0,1,65535,1))", '{"m":2}', "(m[0] + m[65535]) AS m", Name("UInt16")), + ("Map(Int32, Int8)", "('2020-01-01', map(2147483647,1,0,1,-2147483648,1))", '{"m":3}', "(m[2147483647] + m[0] + m[-2147483648]) AS m", Name("Int32")), + ("Map(UInt32, Int8)", "('2020-01-01', map(0,1,4294967295,1))", '{"m":2}', "(m[0] + m[4294967295]) AS m", Name("UInt32")), + ("Map(Int64, Int8)", "('2020-01-01', map(9223372036854775807,1,0,1,-9223372036854775808,1))", '{"m":3}', "(m[9223372036854775807] + m[0] + m[-9223372036854775808]) AS m", Name("Int64")), + ("Map(UInt64, Int8)", "('2020-01-01', map(0,1,18446744073709551615,1))", '{"m":2}', "(m[0] + m[18446744073709551615]) AS m", Name("UInt64")), + ("Map(Int128, Int8)", "('2020-01-01', map(170141183460469231731687303715884105727,1,0,1,-170141183460469231731687303715884105728,1))", '{"m":3}', "(m[170141183460469231731687303715884105727] + m[0] + m[-170141183460469231731687303715884105728]) AS m", Name("Int128")), + ("Map(Int256, Int8)", "('2020-01-01', map(57896044618658097711785492504343953926634992332820282019728792003956564819967,1,0,1,-57896044618658097711785492504343953926634992332820282019728792003956564819968,1))", '{"m":3}', "(m[57896044618658097711785492504343953926634992332820282019728792003956564819967] + m[0] + m[-57896044618658097711785492504343953926634992332820282019728792003956564819968]) AS m", Name("Int256")), + ("Map(UInt256, Int8)", "('2020-01-01', map(0,1,115792089237316195423570985008687907853269984665640564039457584007913129639935,1))", '{"m":2}', "(m[0] + m[115792089237316195423570985008687907853269984665640564039457584007913129639935]) AS m", Name("UInt256")), + ("Map(Nullable(Int8), Int8)", "('2020-01-01', map(toNullable(1),1))", '{"m":1}', "m[1] AS m", Name("toNullable")), + ("Map(Nullable(Int8), Int8)", "('2020-01-01', map(toNullable(NULL),1))", '{"m":1}', "m[null] AS m", Name("toNullable(NULL)")), +]) +def table_map_select_key_with_key_integer(self, type, data, output, select): + """Check what values we can insert into map type column with key integer and if we can use the key to select the value. 
+ """ + insert_into_table(type=type, data=data, output=output, select=select) + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_ArrayOfMaps("1.0"), + RQ_SRS_018_ClickHouse_Map_DataType_NestedWithMaps("1.0") +) +@Examples("type data output partition_by", [ + ("Array(Map(String, Int8))", + "('2020-01-01', [map('hello',1),map('hello',1,'there',2)])", + '{"d":"2020-01-01","m":[{"hello":1},{"hello":1,"there":2}]}', + "m", + Name("Array(Map(String, Int8))")), + ("Nested(x Map(String, Int8))", + "('2020-01-01', [map('hello',1)])", + '{"d":"2020-01-01","m.x":[{"hello":1}]}', + "m.x", + Name("Nested(x Map(String, Int8)")) +]) +def table_with_map_inside_another_type(self, type, data, output, partition_by): + """Check what values we can insert into a type that has map type. + """ + insert_into_table(type=type, data=data, output=output, partition_by=partition_by) + +@TestOutline +def insert_into_table(self, type, data, output, partition_by="m", select="*"): + """Check we can insert data into a table. + """ + uid = getuid() + node = self.context.node + + with Given(f"table definition with {type}"): + sql = "CREATE TABLE {name} (d DATE, m " + type + ") ENGINE = MergeTree() PARTITION BY " + partition_by + " ORDER BY d" + + with Given(f"I create a table", description=sql): + table = create_table(name=uid, statement=sql) + + with When("I insert data", description=data): + sql = f"INSERT INTO {table} VALUES {data}" + node.query(sql) + + with And("I select rows from the table"): + r = node.query(f"SELECT {select} FROM {table} FORMAT JSONEachRow") + + with Then("I expect output to match", description=output): + assert r.output == output, error() + +@TestScenario +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Functions_Map_MixedKeyOrValueTypes("1.0") +) +def select_map_with_invalid_mixed_key_and_value_types(self): + """Check that creating a map with mixed key types fails. + """ + node = self.context.node + exitcode = 130 + message = "DB::Exception: There is no supertype for types String, UInt8 because some of them are String/FixedString and some of them are not" + + with Check("attempt to create a map using SELECT with mixed key types then it fails"): + node.query("SELECT map('hello',1,2,3)", exitcode=exitcode, message=message) + + with Check("attempt to create a map using SELECT with mixed value types then it fails"): + node.query("SELECT map(1,'hello',2,2)", exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Functions_Map_InvalidNumberOfArguments("1.0") +) +def select_map_with_invalid_number_of_arguments(self): + """Check that creating a map with invalid number of arguments fails. + """ + node = self.context.node + exitcode = 42 + message = "DB::Exception: Function map requires even number of arguments" + + with When("I create a map using SELECT with invalid number of arguments"): + node.query("SELECT map(1,2,3)", exitcode=exitcode, message=message) + +@TestScenario +def select_map_empty(self): + """Check that we can can create a empty map by not passing any arguments. + """ + node = self.context.node + + with When("I create a map using SELECT with no arguments"): + r = node.query("SELECT map()") + + with Then("it should create an empty map"): + assert r.output == "{}", error() + +@TestScenario +def insert_invalid_mixed_key_and_value_types(self): + """Check that inserting a map with mixed key or value types fails. 
+ """ + uid = getuid() + node = self.context.node + exitcode = 130 + message = "DB::Exception: There is no supertype for types String, UInt8 because some of them are String/FixedString and some of them are not" + + with Given(f"table definition with {type}"): + sql = "CREATE TABLE {name} (d DATE, m Map(String, Int8)) ENGINE = MergeTree() PARTITION BY m ORDER BY d" + + with And(f"I create a table", description=sql): + table = create_table(name=uid, statement=sql) + + with When("I insert a map with mixed key types then it should fail"): + sql = f"INSERT INTO {table} VALUES ('2020-01-01', map('hello',1,2,3))" + node.query(sql, exitcode=exitcode, message=message) + + with When("I insert a map with mixed value types then it should fail"): + sql = f"INSERT INTO {table} VALUES ('2020-01-01', map(1,'hello',2,2))" + node.query(sql, exitcode=exitcode, message=message) + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_DuplicatedKeys("1.0") +) +@Examples("type data output", [ + ("Map(String, String)", + "('2020-01-01', map('hello','there','hello','over there'))", + '{"d":"2020-01-01","m":{"hello":"there","hello":"over there"}}', + Name("Map(String, String))")), + ("Map(Int64, String)", + "('2020-01-01', map(12345,'there',12345,'over there'))", + '{"d":"2020-01-01","m":{"12345":"there","12345":"over there"}}', + Name("Map(Int64, String))")), +]) +def table_map_with_duplicated_keys(self, type, data, output): + """Check that map supports duplicated keys. + """ + insert_into_table(type=type, data=data, output=output) + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_DuplicatedKeys("1.0") +) +@Examples("map output", [ + ("map('hello','there','hello','over there')", "{'hello':'there','hello':'over there'}", Name("String")), + ("map(12345,'there',12345,'over there')", "{12345:'there',12345:'over there'}", Name("Integer")) +]) +def select_map_with_duplicated_keys(self, map, output): + """Check creating a map with duplicated keys. 
+ """ + select_map(map=map, output=output) + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Value_Retrieval_KeyNotFound("1.0") +) +def select_map_key_not_found(self): + node = self.context.node + + with When("map is empty"): + node.query("SELECT map() AS m, m[1]", exitcode=43, message="DB::Exception: Illegal types of arguments") + + with When("map has integer values"): + r = node.query("SELECT map(1,2) AS m, m[2] FORMAT Values") + with Then("zero should be returned for key that is not found"): + assert r.output == "({1:2},0)", error() + + with When("map has string values"): + r = node.query("SELECT map(1,'2') AS m, m[2] FORMAT Values") + with Then("empty string should be returned for key that is not found"): + assert r.output == "({1:'2'},'')", error() + + with When("map has array values"): + r = node.query("SELECT map(1,[2]) AS m, m[2] FORMAT Values") + with Then("empty array be returned for key that is not found"): + assert r.output == "({1:[2]},[])", error() + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Value_Retrieval_KeyNotFound("1.0") +) +@Examples("type data select exitcode message", [ + ("Map(UInt8, UInt8), y Int8", "(y) VALUES (1)", "m[1] AS v", 0, '{"v":0}', Name("empty map")), + ("Map(UInt8, UInt8)", "VALUES (map(1,2))", "m[2] AS v", 0, '{"v":0}', Name("map has integer values")), + ("Map(UInt8, String)", "VALUES (map(1,'2'))", "m[2] AS v", 0, '{"v":""}', Name("map has string values")), + ("Map(UInt8, Array(Int8))", "VALUES (map(1,[2]))", "m[2] AS v", 0, '{"v":[]}', Name("map has array values")), +]) +def table_map_key_not_found(self, type, data, select, exitcode, message, order_by=None): + """Check values returned from a map column when key is not found. + """ + uid = getuid() + node = self.context.node + + if order_by is None: + order_by = "m" + + with Given(f"table definition with {type}"): + sql = "CREATE TABLE {name} (m " + type + ") ENGINE = MergeTree() ORDER BY " + order_by + + with And(f"I create a table", description=sql): + table = create_table(name=uid, statement=sql) + + with When("I insert data into the map column"): + node.query(f"INSERT INTO {table} {data}") + + with And("I try to read from the table"): + node.query(f"SELECT {select} FROM {table} FORMAT JSONEachRow", exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Value_Retrieval_KeyInvalid("1.0") +) +def invalid_key(self): + """Check when key is not valid. 
+ """ + node = self.context.node + + with When("I try to use an integer key that is too large"): + node.query("SELECT map(1,2) AS m, m[256]", exitcode=43, message="DB::Exception: Illegal types of arguments") + + with When("I try to use an integer key that is negative when key is unsigned"): + node.query("SELECT map(1,2) AS m, m[-1]", exitcode=43, message="DB::Exception: Illegal types of arguments") + + with When("I try to use a string key when key is an integer"): + node.query("SELECT map(1,2) AS m, m['1']", exitcode=43, message="DB::Exception: Illegal types of arguments") + + with When("I try to use an integer key when key is a string"): + r = node.query("SELECT map('1',2) AS m, m[1]", exitcode=43, message="DB::Exception: Illegal types of arguments") + + with When("I try to use an empty key when key is a string"): + r = node.query("SELECT map('1',2) AS m, m[]", exitcode=62, message="DB::Exception: Syntax error: failed at position") + + with When("I try to use wrong type conversion in key"): + r = node.query("SELECT map(1,2) AS m, m[toInt8('1')]", exitcode=43, message="DB::Exception: Illegal types of arguments") + + with When("in array of maps I try to use an integer key that is negative when key is unsigned"): + node.query("SELECT [map(1,2)] AS m, m[1][-1]", exitcode=43, message="DB::Exception: Illegal types of arguments") + + with When("I try to use a NULL key when key is not nullable"): + r = node.query("SELECT map(1,2) AS m, m[NULL] FORMAT Values") + with Then("it should return NULL"): + assert r.output == "({1:2},NULL)", error() + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Value_Retrieval_KeyInvalid("1.0") +) +@Examples("type data select exitcode message order_by", [ + ("Map(UInt8, UInt8)", "(map(1,2))", "m[256] AS v", 0, '{"v":0}', "m", Name("key too large)")), + ("Map(UInt8, UInt8)", "(map(1,2))", "m[-1] AS v", 0, '{"v":0}', "m", Name("key is negative")), + ("Map(UInt8, UInt8)", "(map(1,2))", "m['1'] AS v", 43, "DB::Exception: Illegal types of arguments", "m", Name("string when key is integer")), + ("Map(String, UInt8)", "(map('1',2))", "m[1] AS v", 43, "DB::Exception: Illegal types of arguments", "m", Name("integer when key is string")), + ("Map(String, UInt8)", "(map('1',2))", "m[] AS v", 62, "DB::Exception: Syntax error: failed at position", "m", Name("empty when key is string")), + ("Map(UInt8, UInt8)", "(map(1,2))", "m[toInt8('1')] AS v", 0, '{"v":2}', "m", Name("wrong type conversion when key is integer")), + ("Map(String, UInt8)", "(map('1',2))", "m[toFixedString('1',1)] AS v", 0, '{"v":2}', "m", Name("wrong type conversion when key is string")), + ("Map(UInt8, UInt8)", "(map(1,2))", "m[NULL] AS v", 0, '{"v":null}', "m", Name("NULL key when key is not nullable")), + ("Array(Map(UInt8, UInt8))", "([map(1,2)])", "m[1]['1'] AS v", 43, "DB::Exception: Illegal types of arguments", "m", Name("string when key is integer in array of maps")), + ("Nested(x Map(UInt8, UInt8))", "([map(1,2)])", "m.x[1]['1'] AS v", 43, "DB::Exception: Illegal types of arguments", "m.x", Name("string when key is integer in nested map")), +]) +def table_map_invalid_key(self, type, data, select, exitcode, message, order_by="m"): + """Check selecting values from a map column using an invalid key. 
+ """ + uid = getuid() + node = self.context.node + + with Given(f"table definition with {type}"): + sql = "CREATE TABLE {name} (m " + type + ") ENGINE = MergeTree() ORDER BY " + order_by + + with And(f"I create a table", description=sql): + table = create_table(name=uid, statement=sql) + + with When("I insert data into the map column"): + node.query(f"INSERT INTO {table} VALUES {data}") + + with And("I try to read from the table"): + node.query(f"SELECT {select} FROM {table} FORMAT JSONEachRow", exitcode=exitcode, message=message) + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Value_Retrieval("1.0") +) +@Examples("type data select filter exitcode message order_by", [ + ("Map(UInt8, UInt8)", "(map(1,1)),(map(1,2)),(map(2,3))", "m[1] AS v", "1=1 ORDER BY m[1]", 0, '{"v":0}\n{"v":1}\n{"v":2}', None, + Name("select the same key from all the rows")), + ("Map(String, String)", "(map('a','b')),(map('c','d','e','f')),(map('e','f'))", "m", "m = map('e','f','c','d')", 0, '', None, + Name("filter rows by map having different pair order")), + ("Map(String, String)", "(map('a','b')),(map('c','d','e','f')),(map('e','f'))", "m", "m = map('c','d','e','f')", 0, '{"m":{"c":"d","e":"f"}}', None, + Name("filter rows by map having the same pair order")), + ("Map(String, String)", "(map('a','b')),(map('e','f'))", "m", "m = map()", 0, '', None, + Name("filter rows by empty map")), + ("Map(String, Int8)", "(map('a',1,'b',2)),(map('a',2)),(map('b',3))", "m", "m['a'] = 1", 0, '{"m":{"a":1,"b":2}}', None, + Name("filter rows by map key value")), + ("Map(String, Int8)", "(map('a',1,'b',2)),(map('a',2)),(map('b',3))", "m", "m['a'] = 1 AND m['b'] = 2", 0, '{"m":{"a":1,"b":2}}', None, + Name("filter rows by map multiple key value combined with AND")), + ("Map(String, Int8)", "(map('a',1,'b',2)),(map('a',2)),(map('b',3))", "m", "m['a'] = 1 OR m['b'] = 3", 0, '{"m":{"a":1,"b":2}}\n{"m":{"b":3}}', None, + Name("filter rows by map multiple key value combined with OR")), + ("Map(String, Array(Int8))", "(map('a',[])),(map('b',[1])),(map('c',[2]))", "m['b'] AS v", "m['b'] IN ([1],[2])", 0, '{"v":[1]}', None, + Name("filter rows by map array value using IN")), + ("Map(String, Nullable(String))", "(map('a',NULL)),(map('a',1))", "m", "isNull(m['a']) = 1", 0, '{"m":{"a":null}}', None, + Name("select map with nullable value")) +]) +def table_map_queries(self, type, data, select, filter, exitcode, message, order_by=None): + """Check retrieving map values and using maps in queries. 
+ """ + uid = getuid() + node = self.context.node + + if order_by is None: + order_by = "m" + + with Given(f"table definition with {type}"): + sql = "CREATE TABLE {name} (m " + type + ") ENGINE = MergeTree() ORDER BY " + order_by + + with And(f"I create a table", description=sql): + table = create_table(name=uid, statement=sql) + + with When("I insert data into the map column"): + node.query(f"INSERT INTO {table} VALUES {data}") + + with And("I try to read from the table"): + node.query(f"SELECT {select} FROM {table} WHERE {filter} FORMAT JSONEachRow", exitcode=exitcode, message=message) + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Invalid_Nullable("1.0"), + RQ_SRS_018_ClickHouse_Map_DataType_Invalid_NothingNothing("1.0") +) +@Examples("type exitcode message", [ + ("Nullable(Map(String, String))", + 43, "DB::Exception: Nested type Map(String,String) cannot be inside Nullable type", + Name("nullable map")), + ("Map(Nothing, Nothing)", + 37, "DB::Exception: Column `m` with type Map(Nothing,Nothing) is not allowed in key expression, it's not comparable", + Name("map with nothing type for key and value")) +]) +def table_map_unsupported_types(self, type, exitcode, message): + """Check creating a table with unsupported map column types. + """ + uid = getuid() + node = self.context.node + + try: + with When(f"I create a table definition with {type}"): + sql = f"CREATE TABLE {uid} (m " + type + ") ENGINE = MergeTree() ORDER BY m" + node.query(sql, exitcode=exitcode, message=message) + finally: + with Finally("drop table if any"): + node.query(f"DROP TABLE IF EXISTS {uid}") + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Conversion_From_TupleOfArraysToMap("1.0"), + RQ_SRS_018_ClickHouse_Map_DataType_Conversion_From_TupleOfArraysMap_Invalid("1.0") +) +@Examples("tuple type exitcode message", [ + ("([1, 2, 3], ['Ready', 'Steady', 'Go'])", "Map(UInt8, String)", + 0, "{1:'Ready',2:'Steady',3:'Go'}", Name("int -> int")), + ("([1, 2, 3], ['Ready', 'Steady', 'Go'])", "Map(String, String)", + 0, "{'1':'Ready','2':'Steady','3':'Go'}", Name("int -> string")), + ("(['1', '2', '3'], ['Ready', 'Steady', 'Go'])", "Map(UInt8, String)", + 0, "{1:'Ready',187:'Steady',143:'Go'}", Name("string -> int")), + ("([],[])", "Map(String, String)", + 0, "{}", Name("empty arrays to map str:str")), + ("([],[])", "Map(UInt8, Array(Int8))", + 0, "{}", Name("empty arrays to map uint8:array")), + ("([[1]],['hello'])", "Map(String, String)", + 0, "{'[1]':'hello'}", Name("array -> string")), + ("([(1,2),(3,4)])", "Map(UInt8, UInt8)", + 0, "{1:2,3:4}", Name("array of two tuples")), + ("([1, 2], ['Ready', 'Steady', 'Go'])", "Map(UInt8, String)", + 53, "DB::Exception: CAST AS Map can only be performed from tuple of arrays with equal sizes", + Name("unequal array sizes")), +]) +def cast_tuple_of_two_arrays_to_map(self, tuple, type, exitcode, message): + """Check casting Tuple(Array, Array) to a map type. 
+ """ + node = self.context.node + + with When("I try to cast tuple", description=tuple): + node.query(f"SELECT CAST({tuple}, '{type}') AS map", exitcode=exitcode, message=message) + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Conversion_From_TupleOfArraysToMap("1.0"), + RQ_SRS_018_ClickHouse_Map_DataType_Conversion_From_TupleOfArraysMap_Invalid("1.0") +) +@Examples("tuple type exitcode message check_insert", [ + ("(([1, 2, 3], ['Ready', 'Steady', 'Go']))", "Map(UInt8, String)", + 0, '{"m":{1:"Ready",2:"Steady",3:"Go"}}', False, Name("int -> int")), + ("(([1, 2, 3], ['Ready', 'Steady', 'Go']))", "Map(String, String)", + 0, '{"m":{"1":"Ready","2":"Steady","3":"Go"}}', False, Name("int -> string")), + ("((['1', '2', '3'], ['Ready', 'Steady', 'Go']))", "Map(UInt8, String)", + 0, '', True, Name("string -> int")), + ("(([],[]))", "Map(String, String)", + 0, '{"m":{}}', False, Name("empty arrays to map str:str")), + ("(([],[]))", "Map(UInt8, Array(Int8))", + 0, '{"m":{}}', False, Name("empty arrays to map uint8:array")), + ("(([[1]],['hello']))", "Map(String, String)", + 53, 'DB::Exception: Type mismatch in IN or VALUES section', True, Name("array -> string")), + ("(([(1,2),(3,4)]))", "Map(UInt8, UInt8)", + 0, '{"m":{1:2,3:4}}', False, Name("array of two tuples")), + ("(([1, 2], ['Ready', 'Steady', 'Go']))", "Map(UInt8, String)", + 53, "DB::Exception: CAST AS Map can only be performed from tuple of arrays with equal sizes", True, + Name("unequal array sizes")), +]) +def table_map_cast_tuple_of_arrays_to_map(self, tuple, type, exitcode, message, check_insert): + """Check converting Tuple(Array, Array) into map on insert into a map type column. + """ + table_map(type=type, data=tuple, select="*", filter="1=1", exitcode=exitcode, message=message, check_insert=check_insert) + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Conversion_From_ArrayOfTuplesToMap("1.0"), + RQ_SRS_018_ClickHouse_Map_DataType_Conversion_From_ArrayOfTuplesToMap_Invalid("1.0") +) +@Examples("tuple type exitcode message", [ + ("([(1,2),(3,4)])", "Map(UInt8, UInt8)", 0, "{1:2,3:4}", + Name("array of two tuples")), + ("([(1,2),(3)])", "Map(UInt8, UInt8)", 130, + "DB::Exception: There is no supertype for types Tuple(UInt8, UInt8), UInt8 because some of them are Tuple and some of them are not", + Name("not a tuple")), + ("([(1,2),(3,)])", "Map(UInt8, UInt8)", 130, + "DB::Exception: There is no supertype for types Tuple(UInt8, UInt8), Tuple(UInt8) because Tuples have different sizes", + Name("invalid tuple")), +]) +def cast_array_of_two_tuples_to_map(self, tuple, type, exitcode, message): + """Check casting Array(Tuple(K,V)) to a map type. 
+ """ + node = self.context.node + + with When("I try to cast tuple", description=tuple): + node.query(f"SELECT CAST({tuple}, '{type}') AS map", exitcode=exitcode, message=message) + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Conversion_From_ArrayOfTuplesToMap("1.0"), + RQ_SRS_018_ClickHouse_Map_DataType_Conversion_From_ArrayOfTuplesToMap_Invalid("1.0") +) +@Examples("tuple type exitcode message check_insert", [ + ("(([(1,2),(3,4)]))", "Map(UInt8, UInt8)", 0, '{"m":{1:2,3:4}}', False, + Name("array of two tuples")), + ("(([(1,2),(3)]))", "Map(UInt8, UInt8)", 130, + "DB::Exception: There is no supertype for types Tuple(UInt8, UInt8), UInt8 because some of them are Tuple and some of them are not", True, + Name("not a tuple")), + ("(([(1,2),(3,)]))", "Map(UInt8, UInt8)", 130, + "DB::Exception: There is no supertype for types Tuple(UInt8, UInt8), Tuple(UInt8) because Tuples have different sizes", True, + Name("invalid tuple")), +]) +def table_map_cast_array_of_two_tuples_to_map(self, tuple, type, exitcode, message, check_insert): + """Check converting Array(Tuple(K,V),...) into map on insert into a map type column. + """ + table_map(type=type, data=tuple, select="*", filter="1=1", exitcode=exitcode, message=message, check_insert=check_insert) + +@TestScenario +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_SubColumns_Keys_InlineDefinedMap("1.0") +) +def subcolumns_keys_using_inline_defined_map(self): + node = self.context.node + exitcode = 47 + message = "DB::Exception: Missing columns: 'c.keys'" + + with When("I try to access keys sub-column using an inline defined map"): + node.query("SELECT map( 'aa', 4, '44' , 5) as c, c.keys", exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_SubColumns_Values_InlineDefinedMap("1.0") +) +def subcolumns_values_using_inline_defined_map(self): + node = self.context.node + exitcode = 47 + message = "DB::Exception: Missing columns: 'c.values'" + + with When("I try to access values sub-column using an inline defined map"): + node.query("SELECT map( 'aa', 4, '44' , 5) as c, c.values", exitcode=exitcode, message=message) + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_SubColumns_Keys("1.0"), + RQ_SRS_018_ClickHouse_Map_DataType_SubColumns_Keys_ArrayFunctions("1.0"), + RQ_SRS_018_ClickHouse_Map_DataType_SubColumns_Values("1.0"), + RQ_SRS_018_ClickHouse_Map_DataType_SubColumns_Values_ArrayFunctions("1.0") +) +@Examples("type data select filter exitcode message", [ + # keys + ("Map(String, String)", "(map('a','b','c','d')),(map('e','f'))", "m.keys AS keys", "1=1", + 0, '{"keys":["a","c"]}\n{"keys":["e"]}', Name("select keys")), + ("Map(String, String)", "(map('a','b','c','d')),(map('e','f'))", "m.keys AS keys", "has(m.keys, 'e')", + 0, '{"keys":["e"]}', Name("filter by using keys in an array function")), + ("Map(String, String)", "(map('a','b','c','d')),(map('e','f'))", "has(m.keys, 'e') AS r", "1=1", + 0, '{"r":0}\n{"r":1}', Name("column that uses keys in an array function")), + # values + ("Map(String, String)", "(map('a','b','c','d')),(map('e','f'))", "m.values AS values", "1=1", + 0, '{"values":["b","d"]}\n{"values":["f"]}', Name("select values")), + ("Map(String, String)", "(map('a','b','c','d')),(map('e','f'))", "m.values AS values", "has(m.values, 'f')", + 0, '{"values":["f"]}', Name("filter by using values in an array function")), + ("Map(String, String)", "(map('a','b','c','d')),(map('e','f'))", "has(m.values, 'f') AS r", "1=1", + 0, 
'{"r":0}\n{"r":1}', Name("column that uses values in an array function")) +]) +def subcolumns(self, type, data, select, filter, exitcode, message, order_by=None): + """Check usage of sub-columns in queries. + """ + table_map(type=type, data=data, select=select, filter=filter, exitcode=exitcode, message=message, order_by=order_by) + +@TestScenario +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Functions_Length("1.0") +) +def length(self): + """Check usage of length function with map data type. + """ + table_map(type="Map(String, String)", + data="(map('a','b','c','d')),(map('e','f'))", + select="length(m) AS len, m", + filter="length(m) = 1", + exitcode=0, message='{"len":"1","m":{"e":"f"}}') + +@TestScenario +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Functions_Empty("1.0") +) +def empty(self): + """Check usage of empty function with map data type. + """ + table_map(type="Map(String, String)", + data="(map('e','f'))", + select="empty(m) AS em, m", + filter="empty(m) <> 1", + exitcode=0, message='{"em":0,"m":{"e":"f"}}') + +@TestScenario +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Functions_NotEmpty("1.0") +) +def notempty(self): + """Check usage of notEmpty function with map data type. + """ + table_map(type="Map(String, String)", + data="(map('e','f'))", + select="notEmpty(m) AS em, m", + filter="notEmpty(m) = 1", + exitcode=0, message='{"em":1,"m":{"e":"f"}}') + +@TestScenario +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Functions_Map_MapAdd("1.0") +) +def cast_from_mapadd(self): + """Check converting the result of mapAdd function to a map data type. + """ + select_map(map="CAST(mapAdd(([toUInt8(1), 2], [1, 1]), ([toUInt8(1), 2], [1, 1])), 'Map(Int8, Int8)')", output="{1:2,2:2}") + +@TestScenario +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Functions_Map_MapSubstract("1.0") +) +def cast_from_mapsubstract(self): + """Check converting the result of mapSubstract function to a map data type. + """ + select_map(map="CAST(mapSubtract(([toUInt8(1), 2], [toInt32(1), 1]), ([toUInt8(1), 2], [toInt32(2), 1])), 'Map(Int8, Int8)')", output="{1:-1,2:0}") + +@TestScenario +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Functions_Map_MapPopulateSeries("1.0") +) +def cast_from_mappopulateseries(self): + """Check converting the result of mapPopulateSeries function to a map data type. + """ + select_map(map="CAST(mapPopulateSeries([1,2,4], [11,22,44], 5), 'Map(Int8, Int8)')", output="{1:11,2:22,3:0,4:44,5:0}") + +@TestScenario +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Functions_MapContains("1.0") +) +def mapcontains(self): + """Check usages of mapContains function with map data type. 
+ """ + node = self.context.node + + with Example("key in map"): + table_map(type="Map(String, String)", + data="(map('e','f')),(map('a','b'))", + select="m", + filter="mapContains(m, 'a')", + exitcode=0, message='{"m":{"a":"b"}}') + + with Example("key not in map"): + table_map(type="Map(String, String)", + data="(map('e','f')),(map('a','b'))", + select="m", + filter="NOT mapContains(m, 'a')", + exitcode=0, message='{"m":{"e":"f"}}') + + with Example("null key not in map"): + table_map(type="Map(Nullable(String), String)", + data="(map('e','f')),(map('a','b'))", + select="m", + filter="mapContains(m, NULL)", + exitcode=0, message='') + + with Example("null key in map"): + table_map(type="Map(Nullable(String), String)", + data="(map('e','f')),(map('a','b')),(map(NULL,'c'))", + select="m", + filter="mapContains(m, NULL)", + exitcode=0, message='{null:"c"}') + + with Example("select nullable key"): + node.query("SELECT map(NULL, 1, 2, 3) AS m, mapContains(m, toNullable(toUInt8(2)))", exitcode=0, message="{2:3}") + +@TestScenario +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Functions_MapKeys("1.0") +) +def mapkeys(self): + """Check usages of mapKeys function with map data type. + """ + with Example("key in map"): + table_map(type="Map(String, String)", + data="(map('e','f')),(map('a','b'))", + select="m", + filter="has(mapKeys(m), 'a')", + exitcode=0, message='{"m":{"a":"b"}}') + + with Example("key not in map"): + table_map(type="Map(String, String)", + data="(map('e','f')),(map('a','b'))", + select="m", + filter="NOT has(mapKeys(m), 'a')", + exitcode=0, message='{"m":{"e":"f"}}') + + with Example("null key not in map"): + table_map(type="Map(Nullable(String), String)", + data="(map('e','f')),(map('a','b'))", + select="m", + filter="has(mapKeys(m), NULL)", + exitcode=0, message='') + + with Example("null key in map"): + table_map(type="Map(Nullable(String), String)", + data="(map('e','f')),(map('a','b')),(map(NULL,'c'))", + select="m", + filter="has(mapKeys(m), NULL)", + exitcode=0, message='{"m":{null:"c"}}') + + with Example("select keys from column"): + table_map(type="Map(Nullable(String), String)", + data="(map('e','f')),(map('a','b')),(map(NULL,'c'))", + select="mapKeys(m) AS keys", + filter="1 = 1", + exitcode=0, message='{"keys":["a"]}\n{"keys":["e"]}\n{"keys":[null]}') + +@TestScenario +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Functions_MapValues("1.0") +) +def mapvalues(self): + """Check usages of mapValues function with map data type. 
+ """ + with Example("value in map"): + table_map(type="Map(String, String)", + data="(map('e','f')),(map('a','b'))", + select="m", + filter="has(mapValues(m), 'b')", + exitcode=0, message='{"m":{"a":"b"}}') + + with Example("value not in map"): + table_map(type="Map(String, String)", + data="(map('e','f')),(map('a','b'))", + select="m", + filter="NOT has(mapValues(m), 'b')", + exitcode=0, message='{"m":{"e":"f"}}') + + with Example("null value not in map"): + table_map(type="Map(String, Nullable(String))", + data="(map('e','f')),(map('a','b'))", + select="m", + filter="has(mapValues(m), NULL)", + exitcode=0, message='') + + with Example("null value in map"): + table_map(type="Map(String, Nullable(String))", + data="(map('e','f')),(map('a','b')),(map('c',NULL))", + select="m", + filter="has(mapValues(m), NULL)", + exitcode=0, message='{"m":{"c":null}}') + + with Example("select values from column"): + table_map(type="Map(String, Nullable(String))", + data="(map('e','f')),(map('a','b')),(map('c',NULL))", + select="mapValues(m) AS values", + filter="1 = 1", + exitcode=0, message='{"values":["b"]}\n{"values":[null]}\n{"values":["f"]}') + +@TestScenario +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Functions_InlineDefinedMap("1.0") +) +def functions_with_inline_defined_map(self): + """Check that a map defined inline inside the select statement + can be used with functions that work with maps. + """ + with Example("mapKeys"): + select_map(map="map(1,2,3,4) as map, mapKeys(map) AS keys", output="{1:2,3:4}\t[1,3]") + + with Example("mapValyes"): + select_map(map="map(1,2,3,4) as map, mapValues(map) AS values", output="{1:2,3:4}\t[2,4]") + + with Example("mapContains"): + select_map(map="map(1,2,3,4) as map, mapContains(map, 1) AS contains", output="{1:2,3:4}\t1") + +@TestScenario +def empty_map(self): + """Check creating of an empty map `{}` using the map() function + when inserting data into a map type table column. + """ + table_map(type="Map(String, String)", + data="(map('e','f')),(map())", + select="m", + filter="1=1", + exitcode=0, message='{"m":{}}\n{"m":{"e":"f"}}') + +@TestScenario +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Performance_Vs_TupleOfArrays("1.0") +) +def performance_vs_two_tuple_of_arrays(self, len=10, rows=6000000): + """Check performance of using map data type vs Tuple(Array, Array). 
+ """ + uid = getuid() + node = self.context.node + + with Given(f"table with Tuple(Array(Int8),Array(Int8))"): + sql = "CREATE TABLE {name} (pairs Tuple(Array(Int8),Array(Int8))) ENGINE = MergeTree() ORDER BY pairs" + tuple_table = create_table(name=f"tuple_{uid}", statement=sql) + + with And(f"table with Map(Int8,Int8)"): + sql = "CREATE TABLE {name} (pairs Map(Int8,Int8)) ENGINE = MergeTree() ORDER BY pairs" + map_table = create_table(name=f"map_{uid}", statement=sql) + + with When("I insert data into table with tuples"): + keys = range(len) + values = range(len) + start_time = time.time() + node.query(f"INSERT INTO {tuple_table} SELECT ({keys},{values}) FROM numbers({rows})") + tuple_insert_time = time.time() - start_time + metric("tuple insert time", tuple_insert_time, "sec") + + with When("I insert data into table with a map"): + keys = range(len) + values = range(len) + start_time = time.time() + node.query(f"INSERT INTO {map_table} SELECT ({keys},{values}) FROM numbers({rows})") + map_insert_time = time.time() - start_time + metric("map insert time", map_insert_time, "sec") + + with And("I retrieve particular key value from table with tuples"): + start_time = time.time() + node.query(f"SELECT sum(arrayFirst((v, k) -> k = {len-1}, tupleElement(pairs, 2), tupleElement(pairs, 1))) AS sum FROM {tuple_table}", + exitcode=0, message=f"{rows*(len-1)}") + tuple_select_time = time.time() - start_time + metric("tuple(array, array) select time", tuple_select_time, "sec") + + with And("I retrieve particular key value from table with map"): + start_time = time.time() + node.query(f"SELECT sum(pairs[{len-1}]) AS sum FROM {map_table}", + exitcode=0, message=f"{rows*(len-1)}") + map_select_time = time.time() - start_time + metric("map select time", map_select_time, "sec") + + metric("insert difference", (1 - map_insert_time/tuple_insert_time) * 100, "%") + metric("select difference", (1 - map_select_time/tuple_select_time) * 100, "%") + +@TestScenario +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Performance_Vs_ArrayOfTuples("1.0") +) +def performance_vs_array_of_tuples(self, len=10, rows=6000000): + """Check performance of using map data type vs Array(Tuple(K,V)). 
+ """ + uid = getuid() + node = self.context.node + + with Given(f"table with Array(Tuple(K,V))"): + sql = "CREATE TABLE {name} (pairs Array(Tuple(Int8, Int8))) ENGINE = MergeTree() ORDER BY pairs" + array_table = create_table(name=f"tuple_{uid}", statement=sql) + + with And(f"table with Map(Int8,Int8)"): + sql = "CREATE TABLE {name} (pairs Map(Int8,Int8)) ENGINE = MergeTree() ORDER BY pairs" + map_table = create_table(name=f"map_{uid}", statement=sql) + + with When("I insert data into table with an array of tuples"): + pairs = list(zip(range(len),range(len))) + start_time = time.time() + node.query(f"INSERT INTO {array_table} SELECT ({pairs}) FROM numbers({rows})") + array_insert_time = time.time() - start_time + metric("array insert time", array_insert_time, "sec") + + with When("I insert data into table with a map"): + keys = range(len) + values = range(len) + start_time = time.time() + node.query(f"INSERT INTO {map_table} SELECT ({keys},{values}) FROM numbers({rows})") + map_insert_time = time.time() - start_time + metric("map insert time", map_insert_time, "sec") + + with And("I retrieve particular key value from table with an array of tuples"): + start_time = time.time() + node.query(f"SELECT sum(arrayFirst((v) -> v.1 = {len-1}, pairs).2) AS sum FROM {array_table}", + exitcode=0, message=f"{rows*(len-1)}") + array_select_time = time.time() - start_time + metric("array(tuple(k,v)) select time", array_select_time, "sec") + + with And("I retrieve particular key value from table with map"): + start_time = time.time() + node.query(f"SELECT sum(pairs[{len-1}]) AS sum FROM {map_table}", + exitcode=0, message=f"{rows*(len-1)}") + map_select_time = time.time() - start_time + metric("map select time", map_select_time, "sec") + + metric("insert difference", (1 - map_insert_time/array_insert_time) * 100, "%") + metric("select difference", (1 - map_select_time/array_select_time) * 100, "%") + +@TestScenario +def performance(self, len=10, rows=6000000): + """Check insert and select performance of using map data type. 
+ """ + uid = getuid() + node = self.context.node + + with Given("table with Map(Int8,Int8)"): + sql = "CREATE TABLE {name} (pairs Map(Int8,Int8)) ENGINE = MergeTree() ORDER BY pairs" + map_table = create_table(name=f"map_{uid}", statement=sql) + + with When("I insert data into table with a map"): + values = [x for pair in zip(range(len),range(len)) for x in pair] + start_time = time.time() + node.query(f"INSERT INTO {map_table} SELECT (map({','.join([str(v) for v in values])})) FROM numbers({rows})") + map_insert_time = time.time() - start_time + metric("map insert time", map_insert_time, "sec") + + with And("I retrieve particular key value from table with map"): + start_time = time.time() + node.query(f"SELECT sum(pairs[{len-1}]) AS sum FROM {map_table}", + exitcode=0, message=f"{rows*(len-1)}") + map_select_time = time.time() - start_time + metric("map select time", map_select_time, "sec") + +# FIXME: add tests for different table engines + +@TestFeature +@Name("tests") +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType("1.0"), + RQ_SRS_018_ClickHouse_Map_DataType_Functions_Map("1.0") +) +def feature(self, node="clickhouse1"): + self.context.node = self.context.cluster.node(node) + + with Given("I allow experimental map type"): + allow_experimental_map_type() + + for scenario in loads(current_module(), Scenario): + scenario() diff --git a/tests/testflows/rbac/helper/common.py b/tests/testflows/rbac/helper/common.py index c140e01f34f..a234a62eabd 100755 --- a/tests/testflows/rbac/helper/common.py +++ b/tests/testflows/rbac/helper/common.py @@ -28,6 +28,10 @@ def instrument_clickhouse_server_log(self, node=None, clickhouse_server_log="/va try: with And("adding test name start message to the clickhouse-server.log"): node.command(f"echo -e \"\\n-- start: {current().name} --\\n\" >> {clickhouse_server_log}") + with And("dump memory info"): + node.command(f"echo -e \"\\n-- {current().name} -- top --\\n\" && top -bn1") + node.command(f"echo -e \"\\n-- {current().name} -- df --\\n\" && df -h") + node.command(f"echo -e \"\\n-- {current().name} -- free --\\n\" && free -mh") yield finally: diff --git a/tests/testflows/rbac/tests/privileges/grant_option.py b/tests/testflows/rbac/tests/privileges/grant_option.py index f337aec2619..bc8b73eb32f 100644 --- a/tests/testflows/rbac/tests/privileges/grant_option.py +++ b/tests/testflows/rbac/tests/privileges/grant_option.py @@ -89,7 +89,7 @@ def grant_option_check(grant_option_target, grant_target, user_name, table_type, @Examples("privilege", [ ("ALTER MOVE PARTITION",), ("ALTER MOVE PART",), ("MOVE PARTITION",), ("MOVE PART",), ("ALTER DELETE",), ("DELETE",), - ("ALTER FETCH PARTITION",), ("FETCH PARTITION",), + ("ALTER FETCH PARTITION",), ("ALTER FETCH PART",), ("FETCH PARTITION",), ("ALTER FREEZE PARTITION",), ("FREEZE PARTITION",), ("ALTER UPDATE",), ("UPDATE",), ("ALTER ADD COLUMN",), ("ADD COLUMN",), diff --git a/tests/testflows/rbac/tests/syntax/revoke_role.py b/tests/testflows/rbac/tests/syntax/revoke_role.py index 4acdf127cec..ea8b874ff51 100755 --- a/tests/testflows/rbac/tests/syntax/revoke_role.py +++ b/tests/testflows/rbac/tests/syntax/revoke_role.py @@ -166,9 +166,10 @@ def feature(self, node="clickhouse1"): with Scenario("I revoke a role on fake cluster, throws exception", requirements=[ RQ_SRS_006_RBAC_Revoke_Role_Cluster("1.0")]): - with When("I revoke a role from user on a cluster"): - exitcode, message = errors.cluster_not_found("fake_cluster") - node.query("REVOKE ON CLUSTER fake_cluster role0 FROM user0", exitcode=exitcode, 
message=message) + with setup(): + with When("I revoke a role from user on a cluster"): + exitcode, message = errors.cluster_not_found("fake_cluster") + node.query("REVOKE ON CLUSTER fake_cluster role0 FROM user0", exitcode=exitcode, message=message) with Scenario("I revoke multiple roles from multiple users on cluster", requirements=[ RQ_SRS_006_RBAC_Revoke_Role("1.0"), diff --git a/tests/testflows/regression.py b/tests/testflows/regression.py index 05fec3ea985..632ea3b8d62 100755 --- a/tests/testflows/regression.py +++ b/tests/testflows/regression.py @@ -14,10 +14,12 @@ def regression(self, local, clickhouse_binary_path, stress=None, parallel=None): """ args = {"local": local, "clickhouse_binary_path": clickhouse_binary_path, "stress": stress, "parallel": parallel} - Feature(test=load("example.regression", "regression"))(**args) - Feature(test=load("ldap.regression", "regression"))(**args) - Feature(test=load("rbac.regression", "regression"))(**args) - Feature(test=load("aes_encryption.regression", "regression"))(**args) + # Feature(test=load("example.regression", "regression"))(**args) + # Feature(test=load("ldap.regression", "regression"))(**args) + # Feature(test=load("rbac.regression", "regression"))(**args) + # Feature(test=load("aes_encryption.regression", "regression"))(**args) + Feature(test=load("map_type.regression", "regression"))(**args) + Feature(test=load("window_functions.regression", "regression"))(**args) # Feature(test=load("kerberos.regression", "regression"))(**args) if main(): diff --git a/tests/testflows/runner b/tests/testflows/runner index 0acc3a25945..213ff6e50d8 100755 --- a/tests/testflows/runner +++ b/tests/testflows/runner @@ -120,3 +120,11 @@ if __name__ == "__main__": print(("Running testflows container as: '" + cmd + "'.")) # testflows return non zero error code on failed tests subprocess.call(cmd, shell=True) + + result_path = os.environ.get("CLICKHOUSE_TESTS_RESULT_PATH", None) + if result_path is not None: + move_from = os.path.join(args.clickhouse_root, 'tests/testflows') + status = os.path.join(move_from, 'check_status.tsv') + results = os.path.join(move_from, 'test_results.tsv') + subprocess.call("mv {} {}".format(status, result_path), shell=True) + subprocess.call("mv {} {}".format(results, result_path), shell=True) diff --git a/tests/testflows/window_functions/configs/clickhouse/config.d/logs.xml b/tests/testflows/window_functions/configs/clickhouse/config.d/logs.xml new file mode 100644 index 00000000000..e5077af3f49 --- /dev/null +++ b/tests/testflows/window_functions/configs/clickhouse/config.d/logs.xml @@ -0,0 +1,16 @@ + + + trace + /var/log/clickhouse-server/log.log + /var/log/clickhouse-server/log.err.log + 1000M + 10 + /var/log/clickhouse-server/stderr.log + /var/log/clickhouse-server/stdout.log + + + system + part_log
+        <flush_interval_milliseconds>500</flush_interval_milliseconds>
+    </part_log>
+</yandex>
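The `part_log` section above makes the server record merges and other part operations into the `system.part_log` table, flushed every 500 ms so tests can observe them promptly. A minimal query to inspect the collected entries (illustrative; it assumes the standard `system.part_log` columns):

```sql
SELECT event_type, database, table, part_name, rows
FROM system.part_log
ORDER BY event_time DESC
LIMIT 5
```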
diff --git a/tests/testflows/window_functions/configs/clickhouse/config.d/macros.xml b/tests/testflows/window_functions/configs/clickhouse/config.d/macros.xml new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/testflows/window_functions/configs/clickhouse/config.d/remote.xml b/tests/testflows/window_functions/configs/clickhouse/config.d/remote.xml new file mode 100644 index 00000000000..b7d02ceeec1 --- /dev/null +++ b/tests/testflows/window_functions/configs/clickhouse/config.d/remote.xml @@ -0,0 +1,42 @@ + + + + + + true + + clickhouse1 + 9000 + + + clickhouse2 + 9000 + + + clickhouse3 + 9000 + + + + + + + clickhouse1 + 9000 + + + + + clickhouse2 + 9000 + + + + + clickhouse3 + 9000 + + + + + diff --git a/tests/testflows/window_functions/configs/clickhouse/config.d/zookeeper.xml b/tests/testflows/window_functions/configs/clickhouse/config.d/zookeeper.xml new file mode 100644 index 00000000000..96270e7b645 --- /dev/null +++ b/tests/testflows/window_functions/configs/clickhouse/config.d/zookeeper.xml @@ -0,0 +1,10 @@ + + + + + zookeeper + 2181 + + 15000 + + diff --git a/tests/testflows/window_functions/configs/clickhouse/config.xml b/tests/testflows/window_functions/configs/clickhouse/config.xml new file mode 100644 index 00000000000..4ec12232539 --- /dev/null +++ b/tests/testflows/window_functions/configs/clickhouse/config.xml @@ -0,0 +1,448 @@ + + + + + + trace + /var/log/clickhouse-server/clickhouse-server.log + /var/log/clickhouse-server/clickhouse-server.err.log + 1000M + 10 + + + + 8123 + 9000 + + + + + + + + + /etc/clickhouse-server/server.crt + /etc/clickhouse-server/server.key + + /etc/clickhouse-server/dhparam.pem + none + true + true + sslv2,sslv3 + true + + + + true + true + sslv2,sslv3 + true + + + + RejectCertificateHandler + + + + + + + + + 9009 + + + + + + + + 0.0.0.0 + + + + + + + + + + + + 4096 + 3 + + + 100 + + + + + + 8589934592 + + + 5368709120 + + + + /var/lib/clickhouse/ + + + /var/lib/clickhouse/tmp/ + + + /var/lib/clickhouse/user_files/ + + + /var/lib/clickhouse/access/ + + + + + + users.xml + + + + /var/lib/clickhouse/access/ + + + + + users.xml + + + default + + + + + + default + + + + + + + + + false + + + + + + + + localhost + 9000 + + + + + + + localhost + 9000 + + + + + localhost + 9000 + + + + + + + localhost + 9440 + 1 + + + + + + + localhost + 9000 + + + + + localhost + 1 + + + + + + + + + + + + + + + + + 3600 + + + + 3600 + + + 60 + + + + + + + + + + system + query_log
+        <partition_by>toYYYYMM(event_date)</partition_by>
+        <flush_interval_milliseconds>7500</flush_interval_milliseconds>
+    </query_log>
+
+    <trace_log>
+        <database>system</database>
+        <table>trace_log</table>
+        <partition_by>toYYYYMM(event_date)</partition_by>
+        <flush_interval_milliseconds>7500</flush_interval_milliseconds>
+    </trace_log>
+
+    <query_thread_log>
+        <database>system</database>
+        <table>query_thread_log</table>
+        <partition_by>toYYYYMM(event_date)</partition_by>
+        <flush_interval_milliseconds>7500</flush_interval_milliseconds>
+    </query_thread_log>
+ + + + + + + + + + + + + + + + *_dictionary.xml + + + + + + + + + + /clickhouse/task_queue/ddl + + + + + + + + + + + + + + + + click_cost + any + + 0 + 3600 + + + 86400 + 60 + + + + max + + 0 + 60 + + + 3600 + 300 + + + 86400 + 3600 + + + + + + /var/lib/clickhouse/format_schemas/ + + + +
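The rollup patterns near the end of `config.xml` (one matching `click_cost` with the `any` function, one matching `max`) pair each retention age with a precision, so older Graphite data is kept at coarser resolution. Such a section is consumed by a `GraphiteMergeTree` table that references it by the name of its config section; a minimal sketch, assuming the section is named `graphite_rollup`:

```sql
CREATE TABLE graphite
(
    Path String,
    Time DateTime,
    Value Float64,
    Version UInt64
)
ENGINE = GraphiteMergeTree('graphite_rollup')
ORDER BY (Path, Time)
```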
diff --git a/tests/testflows/window_functions/configs/clickhouse/users.xml b/tests/testflows/window_functions/configs/clickhouse/users.xml new file mode 100644 index 00000000000..86b2cd9e1e3 --- /dev/null +++ b/tests/testflows/window_functions/configs/clickhouse/users.xml @@ -0,0 +1,133 @@ + + + + + + + + 10000000000 + + + 0 + + + random + + + + + 1 + + + + + + + + + + + + + ::/0 + + + + default + + + default + + + 1 + + + + + + + + + + + + + + + + + 3600 + + + 0 + 0 + 0 + 0 + 0 + + + + diff --git a/tests/testflows/window_functions/configs/clickhouse1/config.d/macros.xml b/tests/testflows/window_functions/configs/clickhouse1/config.d/macros.xml new file mode 100644 index 00000000000..42256946311 --- /dev/null +++ b/tests/testflows/window_functions/configs/clickhouse1/config.d/macros.xml @@ -0,0 +1,7 @@ + + + + clickhouse1 + 01 + + diff --git a/tests/testflows/window_functions/configs/clickhouse2/config.d/macros.xml b/tests/testflows/window_functions/configs/clickhouse2/config.d/macros.xml new file mode 100644 index 00000000000..a0c7042f04e --- /dev/null +++ b/tests/testflows/window_functions/configs/clickhouse2/config.d/macros.xml @@ -0,0 +1,7 @@ + + + + clickhouse2 + 02 + + diff --git a/tests/testflows/window_functions/configs/clickhouse3/config.d/macros.xml b/tests/testflows/window_functions/configs/clickhouse3/config.d/macros.xml new file mode 100644 index 00000000000..f0afc7d307d --- /dev/null +++ b/tests/testflows/window_functions/configs/clickhouse3/config.d/macros.xml @@ -0,0 +1,7 @@ + + + + clickhouse3 + 03 + + diff --git a/tests/testflows/window_functions/docker-compose/clickhouse-service.yml b/tests/testflows/window_functions/docker-compose/clickhouse-service.yml new file mode 100755 index 00000000000..fdd4a8057a9 --- /dev/null +++ b/tests/testflows/window_functions/docker-compose/clickhouse-service.yml @@ -0,0 +1,27 @@ +version: '2.3' + +services: + clickhouse: + image: yandex/clickhouse-integration-test + expose: + - "9000" + - "9009" + - "8123" + volumes: + - "${CLICKHOUSE_TESTS_DIR}/configs/clickhouse/config.d:/etc/clickhouse-server/config.d" + - "${CLICKHOUSE_TESTS_DIR}/configs/clickhouse/users.d:/etc/clickhouse-server/users.d" + - "${CLICKHOUSE_TESTS_DIR}/configs/clickhouse/config.xml:/etc/clickhouse-server/config.xml" + - "${CLICKHOUSE_TESTS_DIR}/configs/clickhouse/users.xml:/etc/clickhouse-server/users.xml" + - "${CLICKHOUSE_TESTS_SERVER_BIN_PATH:-/usr/bin/clickhouse}:/usr/bin/clickhouse" + - "${CLICKHOUSE_TESTS_ODBC_BRIDGE_BIN_PATH:-/usr/bin/clickhouse-odbc-bridge}:/usr/bin/clickhouse-odbc-bridge" + entrypoint: bash -c "clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log" + healthcheck: + test: clickhouse client --query='select 1' + interval: 10s + timeout: 10s + retries: 3 + start_period: 300s + cap_add: + - SYS_PTRACE + security_opt: + - label:disable diff --git a/tests/testflows/window_functions/docker-compose/docker-compose.yml b/tests/testflows/window_functions/docker-compose/docker-compose.yml new file mode 100755 index 00000000000..29f2ef52470 --- /dev/null +++ b/tests/testflows/window_functions/docker-compose/docker-compose.yml @@ -0,0 +1,60 @@ +version: '2.3' + +services: + zookeeper: + extends: + file: zookeeper-service.yml + service: zookeeper + + clickhouse1: + extends: + file: clickhouse-service.yml + service: clickhouse + hostname: clickhouse1 + volumes: + - 
"${CLICKHOUSE_TESTS_DIR}/_instances/clickhouse1/database/:/var/lib/clickhouse/" + - "${CLICKHOUSE_TESTS_DIR}/_instances/clickhouse1/logs/:/var/log/clickhouse-server/" + - "${CLICKHOUSE_TESTS_DIR}/configs/clickhouse1/config.d/macros.xml:/etc/clickhouse-server/config.d/macros.xml" + depends_on: + zookeeper: + condition: service_healthy + + clickhouse2: + extends: + file: clickhouse-service.yml + service: clickhouse + hostname: clickhouse2 + volumes: + - "${CLICKHOUSE_TESTS_DIR}/_instances/clickhouse2/database/:/var/lib/clickhouse/" + - "${CLICKHOUSE_TESTS_DIR}/_instances/clickhouse2/logs/:/var/log/clickhouse-server/" + - "${CLICKHOUSE_TESTS_DIR}/configs/clickhouse2/config.d/macros.xml:/etc/clickhouse-server/config.d/macros.xml" + depends_on: + zookeeper: + condition: service_healthy + + clickhouse3: + extends: + file: clickhouse-service.yml + service: clickhouse + hostname: clickhouse3 + volumes: + - "${CLICKHOUSE_TESTS_DIR}/_instances/clickhouse3/database/:/var/lib/clickhouse/" + - "${CLICKHOUSE_TESTS_DIR}/_instances/clickhouse3/logs/:/var/log/clickhouse-server/" + - "${CLICKHOUSE_TESTS_DIR}/configs/clickhouse3/config.d/macros.xml:/etc/clickhouse-server/config.d/macros.xml" + depends_on: + zookeeper: + condition: service_healthy + + # dummy service which does nothing, but allows to postpone + # 'docker-compose up -d' till all dependecies will go healthy + all_services_ready: + image: hello-world + depends_on: + clickhouse1: + condition: service_healthy + clickhouse2: + condition: service_healthy + clickhouse3: + condition: service_healthy + zookeeper: + condition: service_healthy diff --git a/tests/testflows/window_functions/docker-compose/zookeeper-service.yml b/tests/testflows/window_functions/docker-compose/zookeeper-service.yml new file mode 100755 index 00000000000..f3df33358be --- /dev/null +++ b/tests/testflows/window_functions/docker-compose/zookeeper-service.yml @@ -0,0 +1,18 @@ +version: '2.3' + +services: + zookeeper: + image: zookeeper:3.4.12 + expose: + - "2181" + environment: + ZOO_TICK_TIME: 500 + ZOO_MY_ID: 1 + healthcheck: + test: echo stat | nc localhost 2181 + interval: 3s + timeout: 2s + retries: 5 + start_period: 2s + security_opt: + - label:disable diff --git a/tests/testflows/window_functions/regression.py b/tests/testflows/window_functions/regression.py new file mode 100755 index 00000000000..54ceeb874f1 --- /dev/null +++ b/tests/testflows/window_functions/regression.py @@ -0,0 +1,96 @@ +#!/usr/bin/env python3 +import sys + +from testflows.core import * + +append_path(sys.path, "..") + +from helpers.cluster import Cluster +from helpers.argparser import argparser +from window_functions.requirements import SRS019_ClickHouse_Window_Functions, RQ_SRS_019_ClickHouse_WindowFunctions + +xfails = { + "tests/:/frame clause/range frame/between expr following and expr following without order by error": + [(Fail, "invalid error message")], + "tests/:/frame clause/range frame/between expr following and expr preceding without order by error": + [(Fail, "invalid error message")], + "tests/:/frame clause/range frame/between expr following and current row without order by error": + [(Fail, "invalid error message")], + "tests/:/frame clause/range frame/between expr following and current row zero special case": + [(Fail, "known bug")], + "tests/:/frame clause/range frame/between expr following and expr preceding with order by zero special case": + [(Fail, "known bug")], + "tests/:/funcs/lag/anyOrNull with column value as offset": + [(Fail, "column values are not supported as offset")], 
+ "tests/:/funcs/lead/subquery as offset": + [(Fail, "subquery is not supported as offset")], + "tests/:/frame clause/range frame/between current row and unbounded following modifying named window": + [(Fail, "range with named window is not supported")], + "tests/:/frame clause/range overflow/negative overflow with Int16": + [(Fail, "exception on conversion")], + "tests/:/frame clause/range overflow/positive overflow with Int16": + [(Fail, "exception on conversion")], + "tests/:/misc/subquery expr preceding": + [(Fail, "subquery is not supported as offset")], + "tests/:/frame clause/range errors/error negative preceding offset": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/22442")], + "tests/:/frame clause/range errors/error negative following offset": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/22442")], + "tests/:/misc/window functions in select expression": + [(Fail, "not supported, https://github.com/ClickHouse/ClickHouse/issues/19857")], + "tests/:/misc/window functions in subquery": + [(Fail, "not supported, https://github.com/ClickHouse/ClickHouse/issues/19857")], + "tests/:/frame clause/range frame/order by decimal": + [(Fail, "Exception: The RANGE OFFSET frame for 'DB::ColumnDecimal >' ORDER BY column is not implemented")], + "tests/:/frame clause/range frame/with nulls": + [(Fail, "DB::Exception: The RANGE OFFSET frame for 'DB::ColumnNullable' ORDER BY column is not implemented")], + "tests/:/aggregate funcs/aggregate funcs over rows frame/func='mannWhitneyUTest(salary, 1)'": + [(Fail, "need to investigate")], + "tests/:/aggregate funcs/aggregate funcs over rows frame/func='rankCorr(salary, 0.5)'": + [(Fail, "need to investigate")], + "tests/distributed/misc/query with order by and one window": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/23902")], + "tests/distributed/over clause/empty named window": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/23902")], + "tests/distributed/over clause/empty": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/23902")], + "tests/distributed/over clause/adhoc window": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/23902")], + "tests/distributed/frame clause/range datetime/:": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/23902")], + "tests/distributed/frame clause/range frame/between expr preceding and expr following with partition by same column twice": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/23902")] +} + +xflags = { +} + +@TestModule +@ArgumentParser(argparser) +@XFails(xfails) +@XFlags(xflags) +@Name("window functions") +@Specifications( + SRS019_ClickHouse_Window_Functions +) +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions("1.0") +) +def regression(self, local, clickhouse_binary_path, stress=None, parallel=None): + """Window functions regression. 
+ """ + nodes = { + "clickhouse": + ("clickhouse1", "clickhouse2", "clickhouse3") + } + with Cluster(local, clickhouse_binary_path, nodes=nodes) as cluster: + self.context.cluster = cluster + self.context.stress = stress + + if parallel is not None: + self.context.parallel = parallel + + Feature(run=load("window_functions.tests.feature", "feature"), flags=TE) + +if main(): + regression() diff --git a/tests/testflows/window_functions/requirements/__init__.py b/tests/testflows/window_functions/requirements/__init__.py new file mode 100644 index 00000000000..02f7d430154 --- /dev/null +++ b/tests/testflows/window_functions/requirements/__init__.py @@ -0,0 +1 @@ +from .requirements import * diff --git a/tests/testflows/window_functions/requirements/requirements.md b/tests/testflows/window_functions/requirements/requirements.md new file mode 100644 index 00000000000..6136c1b3d92 --- /dev/null +++ b/tests/testflows/window_functions/requirements/requirements.md @@ -0,0 +1,2292 @@ +# SRS019 ClickHouse Window Functions +# Software Requirements Specification + +## Table of Contents + +* 1 [Revision History](#revision-history) +* 2 [Introduction](#introduction) +* 3 [Requirements](#requirements) + * 3.1 [General](#general) + * 3.1.1 [RQ.SRS-019.ClickHouse.WindowFunctions](#rqsrs-019clickhousewindowfunctions) + * 3.1.2 [RQ.SRS-019.ClickHouse.WindowFunctions.NonDistributedTables](#rqsrs-019clickhousewindowfunctionsnondistributedtables) + * 3.1.3 [RQ.SRS-019.ClickHouse.WindowFunctions.DistributedTables](#rqsrs-019clickhousewindowfunctionsdistributedtables) + * 3.2 [Window Specification](#window-specification) + * 3.2.1 [RQ.SRS-019.ClickHouse.WindowFunctions.WindowSpec](#rqsrs-019clickhousewindowfunctionswindowspec) + * 3.3 [PARTITION Clause](#partition-clause) + * 3.3.1 [RQ.SRS-019.ClickHouse.WindowFunctions.PartitionClause](#rqsrs-019clickhousewindowfunctionspartitionclause) + * 3.3.2 [RQ.SRS-019.ClickHouse.WindowFunctions.PartitionClause.MultipleExpr](#rqsrs-019clickhousewindowfunctionspartitionclausemultipleexpr) + * 3.3.3 [RQ.SRS-019.ClickHouse.WindowFunctions.PartitionClause.MissingExpr.Error](#rqsrs-019clickhousewindowfunctionspartitionclausemissingexprerror) + * 3.3.4 [RQ.SRS-019.ClickHouse.WindowFunctions.PartitionClause.InvalidExpr.Error](#rqsrs-019clickhousewindowfunctionspartitionclauseinvalidexprerror) + * 3.4 [ORDER Clause](#order-clause) + * 3.4.1 [RQ.SRS-019.ClickHouse.WindowFunctions.OrderClause](#rqsrs-019clickhousewindowfunctionsorderclause) + * 3.4.1.1 [order_clause](#order_clause) + * 3.4.2 [RQ.SRS-019.ClickHouse.WindowFunctions.OrderClause.MultipleExprs](#rqsrs-019clickhousewindowfunctionsorderclausemultipleexprs) + * 3.4.3 [RQ.SRS-019.ClickHouse.WindowFunctions.OrderClause.MissingExpr.Error](#rqsrs-019clickhousewindowfunctionsorderclausemissingexprerror) + * 3.4.4 [RQ.SRS-019.ClickHouse.WindowFunctions.OrderClause.InvalidExpr.Error](#rqsrs-019clickhousewindowfunctionsorderclauseinvalidexprerror) + * 3.5 [FRAME Clause](#frame-clause) + * 3.5.1 [RQ.SRS-019.ClickHouse.WindowFunctions.FrameClause](#rqsrs-019clickhousewindowfunctionsframeclause) + * 3.5.2 [ROWS](#rows) + * 3.5.2.1 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame](#rqsrs-019clickhousewindowfunctionsrowsframe) + * 3.5.2.2 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.MissingFrameExtent.Error](#rqsrs-019clickhousewindowfunctionsrowsframemissingframeextenterror) + * 3.5.2.3 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.InvalidFrameExtent.Error](#rqsrs-019clickhousewindowfunctionsrowsframeinvalidframeextenterror) + * 
3.5.2.4 [ROWS CURRENT ROW](#rows-current-row) + * 3.5.2.4.1 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Start.CurrentRow](#rqsrs-019clickhousewindowfunctionsrowsframestartcurrentrow) + * 3.5.2.5 [ROWS UNBOUNDED PRECEDING](#rows-unbounded-preceding) + * 3.5.2.5.1 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Start.UnboundedPreceding](#rqsrs-019clickhousewindowfunctionsrowsframestartunboundedpreceding) + * 3.5.2.6 [ROWS `expr` PRECEDING](#rows-expr-preceding) + * 3.5.2.6.1 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Start.ExprPreceding](#rqsrs-019clickhousewindowfunctionsrowsframestartexprpreceding) + * 3.5.2.7 [ROWS UNBOUNDED FOLLOWING](#rows-unbounded-following) + * 3.5.2.7.1 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Start.UnboundedFollowing.Error](#rqsrs-019clickhousewindowfunctionsrowsframestartunboundedfollowingerror) + * 3.5.2.8 [ROWS `expr` FOLLOWING](#rows-expr-following) + * 3.5.2.8.1 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Start.ExprFollowing.Error](#rqsrs-019clickhousewindowfunctionsrowsframestartexprfollowingerror) + * 3.5.2.9 [ROWS BETWEEN CURRENT ROW](#rows-between-current-row) + * 3.5.2.9.1 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.CurrentRow.CurrentRow](#rqsrs-019clickhousewindowfunctionsrowsframebetweencurrentrowcurrentrow) + * 3.5.2.9.2 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.CurrentRow.UnboundedPreceding.Error](#rqsrs-019clickhousewindowfunctionsrowsframebetweencurrentrowunboundedprecedingerror) + * 3.5.2.9.3 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.CurrentRow.ExprPreceding.Error](#rqsrs-019clickhousewindowfunctionsrowsframebetweencurrentrowexprprecedingerror) + * 3.5.2.9.4 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.CurrentRow.UnboundedFollowing](#rqsrs-019clickhousewindowfunctionsrowsframebetweencurrentrowunboundedfollowing) + * 3.5.2.9.5 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.CurrentRow.ExprFollowing](#rqsrs-019clickhousewindowfunctionsrowsframebetweencurrentrowexprfollowing) + * 3.5.2.10 [ROWS BETWEEN UNBOUNDED PRECEDING](#rows-between-unbounded-preceding) + * 3.5.2.10.1 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.UnboundedPreceding.CurrentRow](#rqsrs-019clickhousewindowfunctionsrowsframebetweenunboundedprecedingcurrentrow) + * 3.5.2.10.2 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.UnboundedPreceding.UnboundedPreceding.Error](#rqsrs-019clickhousewindowfunctionsrowsframebetweenunboundedprecedingunboundedprecedingerror) + * 3.5.2.10.3 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.UnboundedPreceding.ExprPreceding](#rqsrs-019clickhousewindowfunctionsrowsframebetweenunboundedprecedingexprpreceding) + * 3.5.2.10.4 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.UnboundedPreceding.UnboundedFollowing](#rqsrs-019clickhousewindowfunctionsrowsframebetweenunboundedprecedingunboundedfollowing) + * 3.5.2.10.5 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.UnboundedPreceding.ExprFollowing](#rqsrs-019clickhousewindowfunctionsrowsframebetweenunboundedprecedingexprfollowing) + * 3.5.2.11 [ROWS BETWEEN UNBOUNDED FOLLOWING](#rows-between-unbounded-following) + * 3.5.2.11.1 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.UnboundedFollowing.Error](#rqsrs-019clickhousewindowfunctionsrowsframebetweenunboundedfollowingerror) + * 3.5.2.12 [ROWS BETWEEN `expr` FOLLOWING](#rows-between-expr-following) + * 3.5.2.12.1 
[RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprFollowing.Error](#rqsrs-019clickhousewindowfunctionsrowsframebetweenexprfollowingerror) + * 3.5.2.12.2 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprFollowing.ExprFollowing.Error](#rqsrs-019clickhousewindowfunctionsrowsframebetweenexprfollowingexprfollowingerror) + * 3.5.2.12.3 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprFollowing.UnboundedFollowing](#rqsrs-019clickhousewindowfunctionsrowsframebetweenexprfollowingunboundedfollowing) + * 3.5.2.12.4 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprFollowing.ExprFollowing](#rqsrs-019clickhousewindowfunctionsrowsframebetweenexprfollowingexprfollowing) + * 3.5.2.13 [ROWS BETWEEN `expr` PRECEDING](#rows-between-expr-preceding) + * 3.5.2.13.1 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprPreceding.CurrentRow](#rqsrs-019clickhousewindowfunctionsrowsframebetweenexprprecedingcurrentrow) + * 3.5.2.13.2 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprPreceding.UnboundedPreceding.Error](#rqsrs-019clickhousewindowfunctionsrowsframebetweenexprprecedingunboundedprecedingerror) + * 3.5.2.13.3 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprPreceding.UnboundedFollowing](#rqsrs-019clickhousewindowfunctionsrowsframebetweenexprprecedingunboundedfollowing) + * 3.5.2.13.4 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprPreceding.ExprPreceding.Error](#rqsrs-019clickhousewindowfunctionsrowsframebetweenexprprecedingexprprecedingerror) + * 3.5.2.13.5 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprPreceding.ExprPreceding](#rqsrs-019clickhousewindowfunctionsrowsframebetweenexprprecedingexprpreceding) + * 3.5.2.13.6 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprPreceding.ExprFollowing](#rqsrs-019clickhousewindowfunctionsrowsframebetweenexprprecedingexprfollowing) + * 3.5.3 [RANGE](#range) + * 3.5.3.1 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame](#rqsrs-019clickhousewindowfunctionsrangeframe) + * 3.5.3.2 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.DataTypes.DateAndDateTime](#rqsrs-019clickhousewindowfunctionsrangeframedatatypesdateanddatetime) + * 3.5.3.3 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.DataTypes.IntAndUInt](#rqsrs-019clickhousewindowfunctionsrangeframedatatypesintanduint) + * 3.5.3.4 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.MultipleColumnsInOrderBy.Error](#rqsrs-019clickhousewindowfunctionsrangeframemultiplecolumnsinorderbyerror) + * 3.5.3.5 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.MissingFrameExtent.Error](#rqsrs-019clickhousewindowfunctionsrangeframemissingframeextenterror) + * 3.5.3.6 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.InvalidFrameExtent.Error](#rqsrs-019clickhousewindowfunctionsrangeframeinvalidframeextenterror) + * 3.5.3.7 [`CURRENT ROW` Peers](#current-row-peers) + * 3.5.3.8 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.CurrentRow.Peers](#rqsrs-019clickhousewindowfunctionsrangeframecurrentrowpeers) + * 3.5.3.9 [RANGE CURRENT ROW](#range-current-row) + * 3.5.3.9.1 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.CurrentRow.WithoutOrderBy](#rqsrs-019clickhousewindowfunctionsrangeframestartcurrentrowwithoutorderby) + * 3.5.3.9.2 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.CurrentRow.WithOrderBy](#rqsrs-019clickhousewindowfunctionsrangeframestartcurrentrowwithorderby) + * 3.5.3.10 [RANGE UNBOUNDED FOLLOWING](#range-unbounded-following) + * 3.5.3.10.1 
[RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.UnboundedFollowing.Error](#rqsrs-019clickhousewindowfunctionsrangeframestartunboundedfollowingerror) + * 3.5.3.11 [RANGE UNBOUNDED PRECEDING](#range-unbounded-preceding) + * 3.5.3.11.1 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.UnboundedPreceding.WithoutOrderBy](#rqsrs-019clickhousewindowfunctionsrangeframestartunboundedprecedingwithoutorderby) + * 3.5.3.11.2 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.UnboundedPreceding.WithOrderBy](#rqsrs-019clickhousewindowfunctionsrangeframestartunboundedprecedingwithorderby) + * 3.5.3.12 [RANGE `expr` PRECEDING](#range-expr-preceding) + * 3.5.3.12.1 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.ExprPreceding.WithoutOrderBy.Error](#rqsrs-019clickhousewindowfunctionsrangeframestartexprprecedingwithoutorderbyerror) + * 3.5.3.12.2 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.ExprPreceding.OrderByNonNumericalColumn.Error](#rqsrs-019clickhousewindowfunctionsrangeframestartexprprecedingorderbynonnumericalcolumnerror) + * 3.5.3.12.3 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.ExprPreceding.WithOrderBy](#rqsrs-019clickhousewindowfunctionsrangeframestartexprprecedingwithorderby) + * 3.5.3.13 [RANGE `expr` FOLLOWING](#range-expr-following) + * 3.5.3.13.1 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.ExprFollowing.WithoutOrderBy.Error](#rqsrs-019clickhousewindowfunctionsrangeframestartexprfollowingwithoutorderbyerror) + * 3.5.3.13.2 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.ExprFollowing.WithOrderBy.Error](#rqsrs-019clickhousewindowfunctionsrangeframestartexprfollowingwithorderbyerror) + * 3.5.3.14 [RANGE BETWEEN CURRENT ROW](#range-between-current-row) + * 3.5.3.14.1 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.CurrentRow.CurrentRow](#rqsrs-019clickhousewindowfunctionsrangeframebetweencurrentrowcurrentrow) + * 3.5.3.14.2 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.CurrentRow.UnboundedPreceding.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweencurrentrowunboundedprecedingerror) + * 3.5.3.14.3 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.CurrentRow.UnboundedFollowing](#rqsrs-019clickhousewindowfunctionsrangeframebetweencurrentrowunboundedfollowing) + * 3.5.3.14.4 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.CurrentRow.ExprFollowing.WithoutOrderBy.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweencurrentrowexprfollowingwithoutorderbyerror) + * 3.5.3.14.5 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.CurrentRow.ExprFollowing.WithOrderBy](#rqsrs-019clickhousewindowfunctionsrangeframebetweencurrentrowexprfollowingwithorderby) + * 3.5.3.14.6 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.CurrentRow.ExprPreceding.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweencurrentrowexprprecedingerror) + * 3.5.3.15 [RANGE BETWEEN UNBOUNDED PRECEDING](#range-between-unbounded-preceding) + * 3.5.3.15.1 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedPreceding.CurrentRow](#rqsrs-019clickhousewindowfunctionsrangeframebetweenunboundedprecedingcurrentrow) + * 3.5.3.15.2 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedPreceding.UnboundedPreceding.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweenunboundedprecedingunboundedprecedingerror) + * 3.5.3.15.3 
[RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedPreceding.UnboundedFollowing](#rqsrs-019clickhousewindowfunctionsrangeframebetweenunboundedprecedingunboundedfollowing) + * 3.5.3.15.4 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedPreceding.ExprPreceding.WithoutOrderBy.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweenunboundedprecedingexprprecedingwithoutorderbyerror) + * 3.5.3.15.5 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedPreceding.ExprPreceding.WithOrderBy](#rqsrs-019clickhousewindowfunctionsrangeframebetweenunboundedprecedingexprprecedingwithorderby) + * 3.5.3.15.6 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedPreceding.ExprFollowing.WithoutOrderBy.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweenunboundedprecedingexprfollowingwithoutorderbyerror) + * 3.5.3.15.7 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedPreceding.ExprFollowing.WithOrderBy](#rqsrs-019clickhousewindowfunctionsrangeframebetweenunboundedprecedingexprfollowingwithorderby) + * 3.5.3.16 [RANGE BETWEEN UNBOUNDED FOLLOWING](#range-between-unbounded-following) + * 3.5.3.16.1 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedFollowing.CurrentRow.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweenunboundedfollowingcurrentrowerror) + * 3.5.3.16.2 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedFollowing.UnboundedFollowing.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweenunboundedfollowingunboundedfollowingerror) + * 3.5.3.16.3 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedFollowing.UnboundedPreceding.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweenunboundedfollowingunboundedprecedingerror) + * 3.5.3.16.4 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedFollowing.ExprPreceding.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweenunboundedfollowingexprprecedingerror) + * 3.5.3.16.5 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedFollowing.ExprFollowing.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweenunboundedfollowingexprfollowingerror) + * 3.5.3.17 [RANGE BETWEEN expr PRECEDING](#range-between-expr-preceding) + * 3.5.3.17.1 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.CurrentRow.WithOrderBy](#rqsrs-019clickhousewindowfunctionsrangeframebetweenexprprecedingcurrentrowwithorderby) + * 3.5.3.17.2 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.CurrentRow.WithoutOrderBy.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweenexprprecedingcurrentrowwithoutorderbyerror) + * 3.5.3.17.3 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.UnboundedPreceding.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweenexprprecedingunboundedprecedingerror) + * 3.5.3.17.4 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.UnboundedFollowing.WithoutOrderBy.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweenexprprecedingunboundedfollowingwithoutorderbyerror) + * 3.5.3.17.5 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.UnboundedFollowing.WithOrderBy](#rqsrs-019clickhousewindowfunctionsrangeframebetweenexprprecedingunboundedfollowingwithorderby) + * 3.5.3.17.6 
[RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.ExprFollowing.WithoutOrderBy.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweenexprprecedingexprfollowingwithoutorderbyerror) + * 3.5.3.17.7 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.ExprFollowing.WithOrderBy](#rqsrs-019clickhousewindowfunctionsrangeframebetweenexprprecedingexprfollowingwithorderby) + * 3.5.3.17.8 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.ExprPreceding.WithoutOrderBy.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweenexprprecedingexprprecedingwithoutorderbyerror) + * 3.5.3.17.9 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.ExprPreceding.WithOrderBy.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweenexprprecedingexprprecedingwithorderbyerror) + * 3.5.3.17.10 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.ExprPreceding.WithOrderBy](#rqsrs-019clickhousewindowfunctionsrangeframebetweenexprprecedingexprprecedingwithorderby) + * 3.5.3.18 [RANGE BETWEEN expr FOLLOWING](#range-between-expr-following) + * 3.5.3.18.1 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.CurrentRow.WithoutOrderBy.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweenexprfollowingcurrentrowwithoutorderbyerror) + * 3.5.3.18.2 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.CurrentRow.WithOrderBy.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweenexprfollowingcurrentrowwithorderbyerror) + * 3.5.3.18.3 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.CurrentRow.ZeroSpecialCase](#rqsrs-019clickhousewindowfunctionsrangeframebetweenexprfollowingcurrentrowzerospecialcase) + * 3.5.3.18.4 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.UnboundedFollowing.WithoutOrderBy.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweenexprfollowingunboundedfollowingwithoutorderbyerror) + * 3.5.3.18.5 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.UnboundedFollowing.WithOrderBy](#rqsrs-019clickhousewindowfunctionsrangeframebetweenexprfollowingunboundedfollowingwithorderby) + * 3.5.3.18.6 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.UnboundedPreceding.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweenexprfollowingunboundedprecedingerror) + * 3.5.3.18.7 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.ExprPreceding.WithoutOrderBy.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweenexprfollowingexprprecedingwithoutorderbyerror) + * 3.5.3.18.8 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.ExprPreceding.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweenexprfollowingexprprecedingerror) + * 3.5.3.18.9 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.ExprPreceding.WithOrderBy.ZeroSpecialCase](#rqsrs-019clickhousewindowfunctionsrangeframebetweenexprfollowingexprprecedingwithorderbyzerospecialcase) + * 3.5.3.18.10 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.ExprFollowing.WithoutOrderBy.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweenexprfollowingexprfollowingwithoutorderbyerror) + * 3.5.3.18.11 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.ExprFollowing.WithOrderBy.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweenexprfollowingexprfollowingwithorderbyerror) + * 3.5.3.18.12 
[RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.ExprFollowing.WithOrderBy](#rqsrs-019clickhousewindowfunctionsrangeframebetweenexprfollowingexprfollowingwithorderby) + * 3.5.4 [Frame Extent](#frame-extent) + * 3.5.4.1 [RQ.SRS-019.ClickHouse.WindowFunctions.Frame.Extent](#rqsrs-019clickhousewindowfunctionsframeextent) + * 3.5.5 [Frame Start](#frame-start) + * 3.5.5.1 [RQ.SRS-019.ClickHouse.WindowFunctions.Frame.Start](#rqsrs-019clickhousewindowfunctionsframestart) + * 3.5.6 [Frame Between](#frame-between) + * 3.5.6.1 [RQ.SRS-019.ClickHouse.WindowFunctions.Frame.Between](#rqsrs-019clickhousewindowfunctionsframebetween) + * 3.5.7 [Frame End](#frame-end) + * 3.5.7.1 [RQ.SRS-019.ClickHouse.WindowFunctions.Frame.End](#rqsrs-019clickhousewindowfunctionsframeend) + * 3.5.8 [`CURRENT ROW`](#current-row) + * 3.5.8.1 [RQ.SRS-019.ClickHouse.WindowFunctions.CurrentRow](#rqsrs-019clickhousewindowfunctionscurrentrow) + * 3.5.9 [`UNBOUNDED PRECEDING`](#unbounded-preceding) + * 3.5.9.1 [RQ.SRS-019.ClickHouse.WindowFunctions.UnboundedPreceding](#rqsrs-019clickhousewindowfunctionsunboundedpreceding) + * 3.5.10 [`UNBOUNDED FOLLOWING`](#unbounded-following) + * 3.5.10.1 [RQ.SRS-019.ClickHouse.WindowFunctions.UnboundedFollowing](#rqsrs-019clickhousewindowfunctionsunboundedfollowing) + * 3.5.11 [`expr PRECEDING`](#expr-preceding) + * 3.5.11.1 [RQ.SRS-019.ClickHouse.WindowFunctions.ExprPreceding](#rqsrs-019clickhousewindowfunctionsexprpreceding) + * 3.5.11.2 [RQ.SRS-019.ClickHouse.WindowFunctions.ExprPreceding.ExprValue](#rqsrs-019clickhousewindowfunctionsexprprecedingexprvalue) + * 3.5.12 [`expr FOLLOWING`](#expr-following) + * 3.5.12.1 [RQ.SRS-019.ClickHouse.WindowFunctions.ExprFollowing](#rqsrs-019clickhousewindowfunctionsexprfollowing) + * 3.5.12.2 [RQ.SRS-019.ClickHouse.WindowFunctions.ExprFollowing.ExprValue](#rqsrs-019clickhousewindowfunctionsexprfollowingexprvalue) + * 3.6 [WINDOW Clause](#window-clause) + * 3.6.1 [RQ.SRS-019.ClickHouse.WindowFunctions.WindowClause](#rqsrs-019clickhousewindowfunctionswindowclause) + * 3.6.2 [RQ.SRS-019.ClickHouse.WindowFunctions.WindowClause.MultipleWindows](#rqsrs-019clickhousewindowfunctionswindowclausemultiplewindows) + * 3.6.3 [RQ.SRS-019.ClickHouse.WindowFunctions.WindowClause.MissingWindowSpec.Error](#rqsrs-019clickhousewindowfunctionswindowclausemissingwindowspecerror) + * 3.7 [`OVER` Clause](#over-clause) + * 3.7.1 [RQ.SRS-019.ClickHouse.WindowFunctions.OverClause](#rqsrs-019clickhousewindowfunctionsoverclause) + * 3.7.2 [Empty Clause](#empty-clause) + * 3.7.2.1 [RQ.SRS-019.ClickHouse.WindowFunctions.OverClause.EmptyOverClause](#rqsrs-019clickhousewindowfunctionsoverclauseemptyoverclause) + * 3.7.3 [Ad-Hoc Window](#ad-hoc-window) + * 3.7.3.1 [RQ.SRS-019.ClickHouse.WindowFunctions.OverClause.AdHocWindow](#rqsrs-019clickhousewindowfunctionsoverclauseadhocwindow) + * 3.7.3.2 [RQ.SRS-019.ClickHouse.WindowFunctions.OverClause.AdHocWindow.MissingWindowSpec.Error](#rqsrs-019clickhousewindowfunctionsoverclauseadhocwindowmissingwindowspecerror) + * 3.7.4 [Named Window](#named-window) + * 3.7.4.1 [RQ.SRS-019.ClickHouse.WindowFunctions.OverClause.NamedWindow](#rqsrs-019clickhousewindowfunctionsoverclausenamedwindow) + * 3.7.4.2 [RQ.SRS-019.ClickHouse.WindowFunctions.OverClause.NamedWindow.InvalidName.Error](#rqsrs-019clickhousewindowfunctionsoverclausenamedwindowinvalidnameerror) + * 3.7.4.3 [RQ.SRS-019.ClickHouse.WindowFunctions.OverClause.NamedWindow.MultipleWindows.Error](#rqsrs-019clickhousewindowfunctionsoverclausenamedwindowmultiplewindowserror) 
+  * 3.8 [Window Functions](#window-functions)
+    * 3.8.1 [Nonaggregate Functions](#nonaggregate-functions)
+      * 3.8.1.1 [The `first_value(expr)` Function](#the-first_valueexpr-function)
+        * 3.8.1.1.1 [RQ.SRS-019.ClickHouse.WindowFunctions.FirstValue](#rqsrs-019clickhousewindowfunctionsfirstvalue)
+      * 3.8.1.2 [The `last_value(expr)` Function](#the-last_valueexpr-function)
+        * 3.8.1.2.1 [RQ.SRS-019.ClickHouse.WindowFunctions.LastValue](#rqsrs-019clickhousewindowfunctionslastvalue)
+      * 3.8.1.3 [The `lag(value, offset)` Function Workaround](#the-lagvalue-offset-function-workaround)
+        * 3.8.1.3.1 [RQ.SRS-019.ClickHouse.WindowFunctions.Lag.Workaround](#rqsrs-019clickhousewindowfunctionslagworkaround)
+      * 3.8.1.4 [The `lead(value, offset)` Function Workaround](#the-leadvalue-offset-function-workaround)
+        * 3.8.1.4.1 [RQ.SRS-019.ClickHouse.WindowFunctions.Lead.Workaround](#rqsrs-019clickhousewindowfunctionsleadworkaround)
+      * 3.8.1.5 [The `rank()` Function](#the-rank-function)
+        * 3.8.1.5.1 [RQ.SRS-019.ClickHouse.WindowFunctions.Rank](#rqsrs-019clickhousewindowfunctionsrank)
+      * 3.8.1.6 [The `dense_rank()` Function](#the-dense_rank-function)
+        * 3.8.1.6.1 [RQ.SRS-019.ClickHouse.WindowFunctions.DenseRank](#rqsrs-019clickhousewindowfunctionsdenserank)
+      * 3.8.1.7 [The `row_number()` Function](#the-row_number-function)
+        * 3.8.1.7.1 [RQ.SRS-019.ClickHouse.WindowFunctions.RowNumber](#rqsrs-019clickhousewindowfunctionsrownumber)
+    * 3.8.2 [Aggregate Functions](#aggregate-functions)
+      * 3.8.2.1 [RQ.SRS-019.ClickHouse.WindowFunctions.AggregateFunctions](#rqsrs-019clickhousewindowfunctionsaggregatefunctions)
+      * 3.8.2.2 [Combinators](#combinators)
+        * 3.8.2.2.1 [RQ.SRS-019.ClickHouse.WindowFunctions.AggregateFunctions.Combinators](#rqsrs-019clickhousewindowfunctionsaggregatefunctionscombinators)
+      * 3.8.2.3 [Parametric](#parametric)
+        * 3.8.2.3.1 [RQ.SRS-019.ClickHouse.WindowFunctions.AggregateFunctions.Parametric](#rqsrs-019clickhousewindowfunctionsaggregatefunctionsparametric)
+* 4 [References](#references)
+
+
+## Revision History
+
+This document is stored in an electronic form using [Git] source control management software
+hosted in a [GitHub Repository].
+All the updates are tracked using the [Revision History].
+
+## Introduction
+
+This software requirements specification covers requirements for [window functions] in [ClickHouse].
+
+## Requirements
+
+### General
+
+#### RQ.SRS-019.ClickHouse.WindowFunctions
+version: 1.0
+
+[ClickHouse] SHALL support [window functions] that produce a result for each row inside the window.
+
+#### RQ.SRS-019.ClickHouse.WindowFunctions.NonDistributedTables
+version: 1.0
+
+[ClickHouse] SHALL support correct operation of [window functions] on non-distributed
+table engines such as `MergeTree`.
+
+#### RQ.SRS-019.ClickHouse.WindowFunctions.DistributedTables
+version: 1.0
+
+[ClickHouse] SHALL support correct operation of [window functions] on the
+[Distributed](https://clickhouse.tech/docs/en/engines/table-engines/special/distributed/) table engine.
+
+### Window Specification
+
+#### RQ.SRS-019.ClickHouse.WindowFunctions.WindowSpec
+version: 1.0
+
+[ClickHouse] SHALL support defining a window using a window specification clause.
+The [window_spec] SHALL be defined as
+
+```
+window_spec:
+    [partition_clause] [order_clause] [frame_clause]
+```
+
+that SHALL specify how to partition query rows into groups for processing by the window function.
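+
+For example, a single window specification may combine all three clauses (an illustrative query, not part of the original requirement set):
+
+```sql
+SELECT number, sum(number) OVER (PARTITION BY number % 2 ORDER BY number ROWS BETWEEN 1 PRECEDING AND CURRENT ROW) FROM numbers(1,5)
+```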
+
+### PARTITION Clause
+
+#### RQ.SRS-019.ClickHouse.WindowFunctions.PartitionClause
+version: 1.0
+
+[ClickHouse] SHALL support [partition_clause] that indicates how to divide the query rows into groups.
+The [partition_clause] SHALL be defined as
+
+```
+partition_clause:
+    PARTITION BY expr [, expr] ...
+```
+
+#### RQ.SRS-019.ClickHouse.WindowFunctions.PartitionClause.MultipleExpr
+version: 1.0
+
+[ClickHouse] SHALL support partitioning by more than one `expr` in the [partition_clause] definition.
+
+For example,
+
+```sql
+SELECT x,s, sum(x) OVER (PARTITION BY x,s) FROM values('x Int8, s String', (1,'a'),(1,'b'),(2,'b'))
+```
+
+```bash
+┌─x─┬─s─┬─sum(x) OVER (PARTITION BY x, s)─┐
+│ 1 │ a │                               1 │
+│ 1 │ b │                               1 │
+│ 2 │ b │                               2 │
+└───┴───┴─────────────────────────────────┘
+```
+
+#### RQ.SRS-019.ClickHouse.WindowFunctions.PartitionClause.MissingExpr.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error if `expr` is missing in the [partition_clause] definition.
+
+```sql
+SELECT sum(number) OVER (PARTITION BY) FROM numbers(1,3)
+```
+
+#### RQ.SRS-019.ClickHouse.WindowFunctions.PartitionClause.InvalidExpr.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error if `expr` is invalid in the [partition_clause] definition.
+
+### ORDER Clause
+
+#### RQ.SRS-019.ClickHouse.WindowFunctions.OrderClause
+version: 1.0
+
+[ClickHouse] SHALL support [order_clause] that indicates how to sort rows in each window.
+
+##### order_clause
+
+The `order_clause` SHALL be defined as
+
+```
+order_clause:
+    ORDER BY expr [ASC|DESC] [, expr [ASC|DESC]] ...
+```
+
+#### RQ.SRS-019.ClickHouse.WindowFunctions.OrderClause.MultipleExprs
+version: 1.0
+
+[ClickHouse] SHALL support using more than one `expr` in the [order_clause] definition.
+
+For example,
+
+```sql
+SELECT x,s, sum(x) OVER (ORDER BY x DESC, s DESC) FROM values('x Int8, s String', (1,'a'),(1,'b'),(2,'b'))
+```
+
+```bash
+┌─x─┬─s─┬─sum(x) OVER (ORDER BY x DESC, s DESC)─┐
+│ 2 │ b │                                     2 │
+│ 1 │ b │                                     3 │
+│ 1 │ a │                                     4 │
+└───┴───┴───────────────────────────────────────┘
+```
+
+#### RQ.SRS-019.ClickHouse.WindowFunctions.OrderClause.MissingExpr.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error if `expr` is missing in the [order_clause] definition.
+
+#### RQ.SRS-019.ClickHouse.WindowFunctions.OrderClause.InvalidExpr.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error if `expr` is invalid in the [order_clause] definition.
+
+### FRAME Clause
+
+#### RQ.SRS-019.ClickHouse.WindowFunctions.FrameClause
+version: 1.0
+
+[ClickHouse] SHALL support [frame_clause] that SHALL specify a subset of the current window.
+
+The `frame_clause` SHALL be defined as
+
+```
+frame_clause:
+    {ROWS | RANGE} frame_extent
+```
+
+#### ROWS
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame
+version: 1.0
+
+[ClickHouse] SHALL support the `ROWS` frame to define beginning and ending row positions.
+Offsets SHALL be differences in row numbers from the current row number.
+
+```sql
+ROWS frame_extent
+```
+
+See the [frame_extent] definition.
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.MissingFrameExtent.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error if the `ROWS` frame clause is defined without [frame_extent].
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (ORDER BY number ROWS) FROM numbers(1,3)
+```
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.InvalidFrameExtent.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error if the `ROWS` frame clause has an invalid [frame_extent].
+ +For example, + +```sql +SELECT number,sum(number) OVER (ORDER BY number ROWS '1') FROM numbers(1,3) +``` + +##### ROWS CURRENT ROW + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Start.CurrentRow +version: 1.0 + +[ClickHouse] SHALL include only the current row in the window partition +when `ROWS CURRENT ROW` frame is specified. + +For example, + +```sql +SELECT number,sum(number) OVER (ROWS CURRENT ROW) FROM numbers(1,2) +``` + +```bash +┌─number─┬─sum(number) OVER (ROWS BETWEEN CURRENT ROW AND CURRENT ROW)─┐ +│ 1 │ 1 │ +│ 2 │ 2 │ +└────────┴─────────────────────────────────────────────────────────────┘ +``` + +##### ROWS UNBOUNDED PRECEDING + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Start.UnboundedPreceding +version: 1.0 + +[ClickHouse] SHALL include all rows before and including the current row in the window partition +when `ROWS UNBOUNDED PRECEDING` frame is specified. + +For example, + +```sql +SELECT number,sum(number) OVER (ROWS UNBOUNDED PRECEDING) FROM numbers(1,3) +``` + +```bash +┌─number─┬─sum(number) OVER (ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)─┐ +│ 1 │ 1 │ +│ 2 │ 3 │ +│ 3 │ 6 │ +└────────┴─────────────────────────────────────────────────────────────────────┘ +``` + +##### ROWS `expr` PRECEDING + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Start.ExprPreceding +version: 1.0 + +[ClickHouse] SHALL include `expr` rows before and including the current row in the window partition +when `ROWS expr PRECEDING` frame is specified. + +For example, + +```sql +SELECT number,sum(number) OVER (ROWS 1 PRECEDING) FROM numbers(1,3) +``` + +```bash +┌─number─┬─sum(number) OVER (ROWS BETWEEN 1 PRECEDING AND CURRENT ROW)─┐ +│ 1 │ 1 │ +│ 2 │ 3 │ +│ 3 │ 5 │ +└────────┴─────────────────────────────────────────────────────────────┘ +``` + +##### ROWS UNBOUNDED FOLLOWING + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Start.UnboundedFollowing.Error +version: 1.0 + +[ClickHouse] SHALL return an error when `ROWS UNBOUNDED FOLLOWING` frame is specified. + +For example, + +```sql +SELECT number,sum(number) OVER (ROWS UNBOUNDED FOLLOWING) FROM numbers(1,3) +``` + +##### ROWS `expr` FOLLOWING + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Start.ExprFollowing.Error +version: 1.0 + +[ClickHouse] SHALL return an error when `ROWS expr FOLLOWING` frame is specified. + +For example, + +```sql +SELECT number,sum(number) OVER (ROWS 1 FOLLOWING) FROM numbers(1,3) +``` + +##### ROWS BETWEEN CURRENT ROW + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.CurrentRow.CurrentRow +version: 1.0 + +[ClickHouse] SHALL include only the current row in the window partition +when `ROWS BETWEEN CURRENT ROW AND CURRENT ROW` frame is specified. + +For example, + +```sql +SELECT number,sum(number) OVER (ROWS BETWEEN CURRENT ROW AND CURRENT ROW) FROM numbers(1,2) +``` + +```bash +┌─number─┬─sum(number) OVER (ROWS BETWEEN CURRENT ROW AND CURRENT ROW)─┐ +│ 1 │ 1 │ +│ 2 │ 2 │ +└────────┴─────────────────────────────────────────────────────────────┘ +``` + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.CurrentRow.UnboundedPreceding.Error +version: 1.0 + +[ClickHouse] SHALL return an error when `ROWS BETWEEN CURRENT ROW AND UNBOUNDED PRECEDING` frame is specified. + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.CurrentRow.ExprPreceding.Error +version: 1.0 + +[ClickHouse] SHALL return an error when `ROWS BETWEEN CURRENT ROW AND expr PRECEDING` frame is specified. 
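+
+For example, the following query uses such a frame and is expected to be rejected (an illustrative query, not from the original requirement set):
+
+```sql
+SELECT number,sum(number) OVER (ROWS BETWEEN CURRENT ROW AND 1 PRECEDING) FROM numbers(1,3)
+```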
+ +###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.CurrentRow.UnboundedFollowing +version: 1.0 + +[ClickHouse] SHALL include the current row and all the following rows in the window partition +when `ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING` frame is specified. + +For example, + +```sql +SELECT number,sum(number) OVER (ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) FROM numbers(1,3) +``` + +```bash +┌─number─┬─sum(number) OVER (ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING)─┐ +│ 1 │ 6 │ +│ 2 │ 5 │ +│ 3 │ 3 │ +└────────┴─────────────────────────────────────────────────────────────────────┘ +``` + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.CurrentRow.ExprFollowing +version: 1.0 + +[ClickHouse] SHALL include the current row and the `expr` rows that are following the current row in the window partition +when `ROWS BETWEEN CURRENT ROW AND expr FOLLOWING` frame is specified. + +For example, + +```sql +SELECT number,sum(number) OVER (ROWS BETWEEN CURRENT ROW AND 1 FOLLOWING) FROM numbers(1,3) +``` + +```bash +┌─number─┬─sum(number) OVER (ROWS BETWEEN CURRENT ROW AND 1 FOLLOWING)─┐ +│ 1 │ 3 │ +│ 2 │ 5 │ +│ 3 │ 3 │ +└────────┴─────────────────────────────────────────────────────────────┘ +``` + +##### ROWS BETWEEN UNBOUNDED PRECEDING + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.UnboundedPreceding.CurrentRow +version: 1.0 + +[ClickHouse] SHALL include all the rows before and including the current row in the window partition +when `ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW` frame is specified. + +For example, + +```sql +SELECT number,sum(number) OVER (ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) FROM numbers(1,3) +``` + +```bash +┌─number─┬─sum(number) OVER (ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)─┐ +│ 1 │ 1 │ +│ 2 │ 3 │ +│ 3 │ 6 │ +└────────┴─────────────────────────────────────────────────────────────────────┘ +``` + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.UnboundedPreceding.UnboundedPreceding.Error +version: 1.0 + +[ClickHouse] SHALL return an error when `ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED PRECEDING` frame is specified. + +For example, + +```sql +SELECT number,sum(number) OVER (ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED PRECEDING) FROM numbers(1,3) +``` + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.UnboundedPreceding.ExprPreceding +version: 1.0 + +[ClickHouse] SHALL include all the rows until and including the current row minus `expr` rows preceding it +when `ROWS BETWEEN UNBOUNDED PRECEDING AND expr PRECEDING` frame is specified. + +For example, + +```sql +SELECT number,sum(number) OVER (ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING) FROM numbers(1,3) +``` + +```bash +┌─number─┬─sum(number) OVER (ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING)─┐ +│ 1 │ 0 │ +│ 2 │ 1 │ +│ 3 │ 3 │ +└────────┴─────────────────────────────────────────────────────────────────────┘ +``` + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.UnboundedPreceding.UnboundedFollowing +version: 1.0 + +[ClickHouse] SHALL include all rows in the window partition +when `ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING` frame is specified. 
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) FROM numbers(1,3)
+```
+
+```bash
+┌─number─┬─sum(number) OVER (ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)─┐
+│      1 │                                                                           6 │
+│      2 │                                                                           6 │
+│      3 │                                                                           6 │
+└────────┴──────────────────────────────────────────────────────────────────────────────┘
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.UnboundedPreceding.ExprFollowing
+version: 1.0
+
+[ClickHouse] SHALL include all the rows until and including the current row plus `expr` rows following it
+when `ROWS BETWEEN UNBOUNDED PRECEDING AND expr FOLLOWING` frame is specified.
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (ROWS BETWEEN UNBOUNDED PRECEDING AND 1 FOLLOWING) FROM numbers(1,3)
+```
+
+```bash
+┌─number─┬─sum(number) OVER (ROWS BETWEEN UNBOUNDED PRECEDING AND 1 FOLLOWING)─┐
+│      1 │                                                                   3 │
+│      2 │                                                                   6 │
+│      3 │                                                                   6 │
+└────────┴──────────────────────────────────────────────────────────────────────┘
+```
+
+##### ROWS BETWEEN UNBOUNDED FOLLOWING
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.UnboundedFollowing.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error when `UNBOUNDED FOLLOWING` is specified as the start of the frame, including
+
+* `ROWS BETWEEN UNBOUNDED FOLLOWING AND CURRENT ROW`
+* `ROWS BETWEEN UNBOUNDED FOLLOWING AND UNBOUNDED PRECEDING`
+* `ROWS BETWEEN UNBOUNDED FOLLOWING AND UNBOUNDED FOLLOWING`
+* `ROWS BETWEEN UNBOUNDED FOLLOWING AND expr PRECEDING`
+* `ROWS BETWEEN UNBOUNDED FOLLOWING AND expr FOLLOWING`
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (ROWS BETWEEN UNBOUNDED FOLLOWING AND CURRENT ROW) FROM numbers(1,3)
+```
+
+##### ROWS BETWEEN `expr` FOLLOWING
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprFollowing.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error when `expr FOLLOWING` is specified as the start of the frame
+and it points to a row that is after the row specified by the frame end inside the window partition,
+as in the following cases
+
+* `ROWS BETWEEN expr FOLLOWING AND CURRENT ROW`
+* `ROWS BETWEEN expr FOLLOWING AND UNBOUNDED PRECEDING`
+* `ROWS BETWEEN expr FOLLOWING AND expr PRECEDING`
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (ROWS BETWEEN 1 FOLLOWING AND CURRENT ROW) FROM numbers(1,3)
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprFollowing.ExprFollowing.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error when `ROWS BETWEEN expr FOLLOWING AND expr FOLLOWING`
+is specified and the end of the frame specified by the `expr FOLLOWING` is a row that is before the row
+specified by the frame start.
+
+```sql
+SELECT number,sum(number) OVER (ROWS BETWEEN 1 FOLLOWING AND 0 FOLLOWING) FROM numbers(1,3)
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprFollowing.UnboundedFollowing
+version: 1.0
+
+[ClickHouse] SHALL include all the rows from and including the current row plus `expr` rows following it
+until and including the last row in the window partition
+when `ROWS BETWEEN expr FOLLOWING AND UNBOUNDED FOLLOWING` frame is specified.
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (ROWS BETWEEN 1 FOLLOWING AND UNBOUNDED FOLLOWING) FROM numbers(1,3)
+```
+
+```bash
+┌─number─┬─sum(number) OVER (ROWS BETWEEN 1 FOLLOWING AND UNBOUNDED FOLLOWING)─┐
+│      1 │                                                                   5 │
+│      2 │                                                                   3 │
+│      3 │                                                                   0 │
+└────────┴──────────────────────────────────────────────────────────────────────┘
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprFollowing.ExprFollowing
+version: 1.0
+
+[ClickHouse] SHALL include the rows from and including the current row plus `expr` rows following it,
+until and including the row specified by the frame end,
+when `ROWS BETWEEN expr FOLLOWING AND expr FOLLOWING` frame is specified
+and the frame end is at or after the frame start.
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (ROWS BETWEEN 1 FOLLOWING AND 2 FOLLOWING) FROM numbers(1,3)
+```
+
+```bash
+┌─number─┬─sum(number) OVER (ROWS BETWEEN 1 FOLLOWING AND 2 FOLLOWING)─┐
+│      1 │                                                           5 │
+│      2 │                                                           3 │
+│      3 │                                                           0 │
+└────────┴──────────────────────────────────────────────────────────────┘
+```
+
+##### ROWS BETWEEN `expr` PRECEDING
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprPreceding.CurrentRow
+version: 1.0
+
+[ClickHouse] SHALL include the rows from and including the current row minus `expr` rows
+preceding it until and including the current row in the window frame
+when `ROWS BETWEEN expr PRECEDING AND CURRENT ROW` frame is specified.
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (ROWS BETWEEN 1 PRECEDING AND CURRENT ROW) FROM numbers(1,3)
+```
+
+```bash
+┌─number─┬─sum(number) OVER (ROWS BETWEEN 1 PRECEDING AND CURRENT ROW)─┐
+│      1 │                                                           1 │
+│      2 │                                                           3 │
+│      3 │                                                           5 │
+└────────┴──────────────────────────────────────────────────────────────┘
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprPreceding.UnboundedPreceding.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error
+when `ROWS BETWEEN expr PRECEDING AND UNBOUNDED PRECEDING` frame is specified.
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (ROWS BETWEEN 1 PRECEDING AND UNBOUNDED PRECEDING) FROM numbers(1,3)
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprPreceding.UnboundedFollowing
+version: 1.0
+
+[ClickHouse] SHALL include the rows from and including the current row minus `expr` rows
+preceding it until and including the last row in the window partition
+when `ROWS BETWEEN expr PRECEDING AND UNBOUNDED FOLLOWING` frame is specified.
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (ROWS BETWEEN 1 PRECEDING AND UNBOUNDED FOLLOWING) FROM numbers(1,3)
+```
+
+```bash
+┌─number─┬─sum(number) OVER (ROWS BETWEEN 1 PRECEDING AND UNBOUNDED FOLLOWING)─┐
+│      1 │                                                                   6 │
+│      2 │                                                                   6 │
+│      3 │                                                                   5 │
+└────────┴──────────────────────────────────────────────────────────────────────┘
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprPreceding.ExprPreceding.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error when the frame end specified by `expr PRECEDING`
+evaluates to a row that is before the row specified by the frame start in the window partition
+when `ROWS BETWEEN expr PRECEDING AND expr PRECEDING` frame is specified.
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (ROWS BETWEEN 1 PRECEDING AND 2 PRECEDING) FROM numbers(1,3)
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprPreceding.ExprPreceding
+version: 1.0
+
+[ClickHouse] SHALL include the rows from and including the current row minus `expr` rows preceding it for the [frame_start]
+until and including the current row minus `expr` rows preceding it for the [frame_end], if the end
+of the frame is at or after the frame start in the window partition,
+when `ROWS BETWEEN expr PRECEDING AND expr PRECEDING` frame is specified.
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (ROWS BETWEEN 1 PRECEDING AND 0 PRECEDING) FROM numbers(1,3)
+```
+
+```bash
+┌─number─┬─sum(number) OVER (ROWS BETWEEN 1 PRECEDING AND 0 PRECEDING)─┐
+│ 1 │ 1 │
+│ 2 │ 3 │
+│ 3 │ 5 │
+└────────┴─────────────────────────────────────────────────────────────┘
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprPreceding.ExprFollowing
+version: 1.0
+
+[ClickHouse] SHALL include the rows from and including the current row minus `expr` rows preceding it
+until and including the current row plus `expr` rows following it in the window partition
+when `ROWS BETWEEN expr PRECEDING AND expr FOLLOWING` frame is specified.
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING) FROM numbers(1,3)
+```
+
+```bash
+┌─number─┬─sum(number) OVER (ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING)─┐
+│ 1 │ 3 │
+│ 2 │ 6 │
+│ 3 │ 5 │
+└────────┴─────────────────────────────────────────────────────────────┘
+```
+
+#### RANGE
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame
+version: 1.0
+
+[ClickHouse] SHALL support `RANGE` frame to define rows within a value range.
+Offsets SHALL be differences in row values from the current row value.
+
+```sql
+RANGE frame_extent
+```
+
+See [frame_extent] definition.
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.DataTypes.DateAndDateTime
+version: 1.0
+
+[ClickHouse] SHALL support `RANGE` frame over columns with `Date` and `DateTime`
+data types.
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.DataTypes.IntAndUInt
+version: 1.0
+
+[ClickHouse] SHALL support `RANGE` frame over columns with numerical data types
+such as `IntX` and `UIntX`.
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.MultipleColumnsInOrderBy.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error if the `RANGE` frame definition is used with `ORDER BY`
+that uses multiple columns.
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.MissingFrameExtent.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error if the `RANGE` frame definition is missing [frame_extent].
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.InvalidFrameExtent.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error if the `RANGE` frame definition has invalid [frame_extent].
+
+##### `CURRENT ROW` Peers
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.CurrentRow.Peers
+version: 1.0
+
+[ClickHouse] for the `RANGE` frame SHALL define the `peers` of the `CURRENT ROW` to be all
+the rows that are inside the same order bucket.
+
+##### RANGE CURRENT ROW
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.CurrentRow.WithoutOrderBy
+version: 1.0
+
+[ClickHouse] SHALL include all rows in the window partition
+when `RANGE CURRENT ROW` frame is specified without the `ORDER BY` clause.
+ +For example, + +```sql +SELECT number,sum(number) OVER (RANGE CURRENT ROW) FROM numbers(1,3) +``` + +```bash +┌─number─┬─sum(number) OVER (RANGE BETWEEN CURRENT ROW AND CURRENT ROW)─┐ +│ 1 │ 6 │ +│ 2 │ 6 │ +│ 3 │ 6 │ +└────────┴──────────────────────────────────────────────────────────────┘ +``` + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.CurrentRow.WithOrderBy +version: 1.0 + +[ClickHouse] SHALL include all rows that are [current row peers] in the window partition +when `RANGE CURRENT ROW` frame is specified with the `ORDER BY` clause. + +For example, + +```sql +SELECT number,sum(number) OVER (ORDER BY number RANGE CURRENT ROW) FROM values('number Int8', (1),(1),(2),(3)) +``` + +```bash +┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN CURRENT ROW AND CURRENT ROW)─┐ +│ 1 │ 2 │ +│ 1 │ 2 │ +│ 2 │ 2 │ +│ 3 │ 3 │ +└────────┴──────────────────────────────────────────────────────────────────────────────────┘ +``` + +##### RANGE UNBOUNDED FOLLOWING + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.UnboundedFollowing.Error +version: 1.0 + +[ClickHouse] SHALL return an error when `RANGE UNBOUNDED FOLLOWING` frame is specified with or without order by +as `UNBOUNDED FOLLOWING` SHALL not be supported as [frame_start]. + +For example, + +```sql +SELECT number,sum(number) OVER (RANGE UNBOUNDED FOLLOWING) FROM numbers(1,3) +``` + +##### RANGE UNBOUNDED PRECEDING + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.UnboundedPreceding.WithoutOrderBy +version: 1.0 + +[ClickHouse] SHALL include all rows in the window partition +when `RANGE UNBOUNDED PRECEDING` frame is specified without the `ORDER BY` clause. + +For example, + +```sql +SELECT number,sum(number) OVER (RANGE UNBOUNDED PRECEDING) FROM numbers(1,3) +``` + +```bash +┌─number─┬─sum(number) OVER (RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)─┐ +│ 1 │ 6 │ +│ 2 │ 6 │ +│ 3 │ 6 │ +└────────┴──────────────────────────────────────────────────────────────────────┘ +``` + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.UnboundedPreceding.WithOrderBy +version: 1.0 + +[ClickHouse] SHALL include rows with values from and including the first row +until and including all [current row peers] in the window partition +when `RANGE UNBOUNDED PRECEDING` frame is specified with the `ORDER BY` clause. + +For example, + +```sql +SELECT number,sum(number) OVER (ORDER BY number RANGE UNBOUNDED PRECEDING) FROM numbers(1,3) +``` + +```bash +┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)─┐ +│ 1 │ 1 │ +│ 2 │ 3 │ +│ 3 │ 6 │ +└────────┴──────────────────────────────────────────────────────────────────────────────────────────┘ +``` + +##### RANGE `expr` PRECEDING + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.ExprPreceding.WithoutOrderBy.Error +version: 1.0 + +[ClickHouse] SHALL return an error when `RANGE expr PRECEDING` frame is specified without the `ORDER BY` clause. + +For example, + +```sql +SELECT number,sum(number) OVER (RANGE 1 PRECEDING) FROM numbers(1,3) +``` + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.ExprPreceding.OrderByNonNumericalColumn.Error +version: 1.0 + +[ClickHouse] SHALL return an error when `RANGE expr PRECEDING` is used with `ORDER BY` clause +over a non-numerical column. 
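+
+For example, ordering by a `String` column (an illustrative query; any other non-numerical
+column is assumed to behave the same way),
+
+```sql
+SELECT number,sum(number) OVER (ORDER BY toString(number) RANGE 1 PRECEDING) FROM numbers(1,3)
+```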
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.ExprPreceding.WithOrderBy
+version: 1.0
+
+[ClickHouse] SHALL include rows with values from and including current row value minus `expr`
+until and including the value for the current row
+when `RANGE expr PRECEDING` frame is specified with the `ORDER BY` clause.
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (ORDER BY number RANGE 1 PRECEDING) FROM numbers(1,3)
+```
+
+```bash
+┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN 1 PRECEDING AND CURRENT ROW)─┐
+│ 1 │ 1 │
+│ 2 │ 3 │
+│ 3 │ 5 │
+└────────┴──────────────────────────────────────────────────────────────────────────────────┘
+```
+
+##### RANGE `expr` FOLLOWING
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.ExprFollowing.WithoutOrderBy.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error when `RANGE expr FOLLOWING` frame is specified without the `ORDER BY` clause.
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (RANGE 1 FOLLOWING) FROM numbers(1,3)
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.ExprFollowing.WithOrderBy.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error when `RANGE expr FOLLOWING` frame is specified with the `ORDER BY` clause,
+as the value for the frame start cannot be larger than the value for the frame end.
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (ORDER BY number RANGE 1 FOLLOWING) FROM numbers(1,3)
+```
+
+##### RANGE BETWEEN CURRENT ROW
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.CurrentRow.CurrentRow
+version: 1.0
+
+[ClickHouse] SHALL include all [current row peers] in the window partition
+when `RANGE BETWEEN CURRENT ROW AND CURRENT ROW` frame is specified with or without the `ORDER BY` clause.
+
+For example,
+
+**Without `ORDER BY`**
+
+```sql
+SELECT number,sum(number) OVER (RANGE BETWEEN CURRENT ROW AND CURRENT ROW) FROM numbers(1,3)
+```
+
+```bash
+┌─number─┬─sum(number) OVER (RANGE BETWEEN CURRENT ROW AND CURRENT ROW)─┐
+│ 1 │ 6 │
+│ 2 │ 6 │
+│ 3 │ 6 │
+└────────┴──────────────────────────────────────────────────────────────┘
+```
+
+**With `ORDER BY`**
+
+```sql
+SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN CURRENT ROW AND CURRENT ROW) FROM numbers(1,3)
+```
+
+```bash
+┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN CURRENT ROW AND CURRENT ROW)─┐
+│ 1 │ 1 │
+│ 2 │ 2 │
+│ 3 │ 3 │
+└────────┴──────────────────────────────────────────────────────────────────────────────────┘
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.CurrentRow.UnboundedPreceding.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error when `RANGE BETWEEN CURRENT ROW AND UNBOUNDED PRECEDING` frame is specified
+with or without the `ORDER BY` clause.
+
+For example,
+
+**Without `ORDER BY`**
+
+```sql
+SELECT number,sum(number) OVER (RANGE BETWEEN CURRENT ROW AND UNBOUNDED PRECEDING) FROM numbers(1,3)
+```
+
+**With `ORDER BY`**
+
+```sql
+SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN CURRENT ROW AND UNBOUNDED PRECEDING) FROM numbers(1,3)
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.CurrentRow.UnboundedFollowing
+version: 1.0
+
+[ClickHouse] SHALL include all rows with values from and including [current row peers] until and including
+the last row in the window partition when `RANGE BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING` frame is specified
+with or without the `ORDER BY` clause.
+ +For example, + +**Without `ORDER BY`** + +```sql +SELECT number,sum(number) OVER (RANGE BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) FROM numbers(1,3) +``` + +```bash +┌─number─┬─sum(number) OVER (RANGE BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING)─┐ +│ 1 │ 6 │ +│ 2 │ 6 │ +│ 3 │ 6 │ +└────────┴──────────────────────────────────────────────────────────────────────┘ +``` + +**With `ORDER BY`** + +```sql +SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) FROM numbers(1,3) +``` + +```bash +┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING)─┐ +│ 1 │ 6 │ +│ 2 │ 5 │ +│ 3 │ 3 │ +└────────┴──────────────────────────────────────────────────────────────────────────────────────────┘ +``` + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.CurrentRow.ExprFollowing.WithoutOrderBy.Error +version: 1.0 + +[ClickHouse] SHALL return an error when `RANGE BETWEEN CURRENT ROW AND expr FOLLOWING` frame is specified +without the `ORDER BY` clause. + +For example, + +```sql +SELECT number,sum(number) OVER (RANGE BETWEEN CURRENT ROW AND 1 FOLLOWING) FROM numbers(1,3) +``` + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.CurrentRow.ExprFollowing.WithOrderBy +version: 1.0 + +[ClickHouse] SHALL include all rows with values from and including [current row peers] until and including +current row value plus `expr` when `RANGE BETWEEN CURRENT ROW AND expr FOLLOWING` frame is specified +with the `ORDER BY` clause. + +For example, + +```sql +SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN CURRENT ROW AND 1 FOLLOWING) FROM values('number Int8', (1),(1),(2),(3)) +``` + +```bash +┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN CURRENT ROW AND 1 FOLLOWING)─┐ +│ 1 │ 4 │ +│ 1 │ 4 │ +│ 2 │ 5 │ +│ 3 │ 3 │ +└────────┴──────────────────────────────────────────────────────────────────────────────────┘ +``` + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.CurrentRow.ExprPreceding.Error +version: 1.0 + +[ClickHouse] SHALL return an error when `RANGE BETWEEN CURRENT ROW AND expr PRECEDING` frame is specified +with or without the `ORDER BY` clause. + +For example, + +**Without `ORDER BY`** + +```sql +SELECT number,sum(number) OVER (RANGE BETWEEN CURRENT ROW AND 1 PRECEDING) FROM numbers(1,3) +``` + +**With `ORDER BY`** + +```sql +SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN CURRENT ROW AND 1 PRECEDING) FROM numbers(1,3) +``` + +##### RANGE BETWEEN UNBOUNDED PRECEDING + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedPreceding.CurrentRow +version: 1.0 + +[ClickHouse] SHALL include all rows with values from and including the first row until and including +[current row peers] in the window partition when `RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW` frame is specified +with and without the `ORDER BY` clause. 
+
+For example,
+
+**Without `ORDER BY`**
+
+```sql
+SELECT number,sum(number) OVER (RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) FROM values('number Int8', (1),(1),(2),(3))
+```
+
+```bash
+┌─number─┬─sum(number) OVER (RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)─┐
+│ 1 │ 7 │
+│ 1 │ 7 │
+│ 2 │ 7 │
+│ 3 │ 7 │
+└────────┴──────────────────────────────────────────────────────────────────────┘
+```
+
+**With `ORDER BY`**
+
+```sql
+SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) FROM values('number Int8', (1),(1),(2),(3))
+```
+
+```bash
+┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)─┐
+│ 1 │ 2 │
+│ 1 │ 2 │
+│ 2 │ 4 │
+│ 3 │ 7 │
+└────────┴──────────────────────────────────────────────────────────────────────────────────────────┘
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedPreceding.UnboundedPreceding.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error when `RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED PRECEDING` frame is specified
+with and without the `ORDER BY` clause.
+
+For example,
+
+**Without `ORDER BY`**
+
+```sql
+SELECT number,sum(number) OVER (RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED PRECEDING) FROM values('number Int8', (1),(1),(2),(3))
+```
+
+**With `ORDER BY`**
+
+```sql
+SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED PRECEDING) FROM values('number Int8', (1),(1),(2),(3))
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedPreceding.UnboundedFollowing
+version: 1.0
+
+[ClickHouse] SHALL include all rows in the window partition when `RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING` frame is specified
+with and without the `ORDER BY` clause.
+
+For example,
+
+**Without `ORDER BY`**
+
+```sql
+SELECT number,sum(number) OVER (RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) FROM values('number Int8', (1),(1),(2),(3))
+```
+
+```bash
+┌─number─┬─sum(number) OVER (RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)─┐
+│ 1 │ 7 │
+│ 1 │ 7 │
+│ 2 │ 7 │
+│ 3 │ 7 │
+└────────┴──────────────────────────────────────────────────────────────────────────────┘
+```
+
+**With `ORDER BY`**
+
+```sql
+SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) FROM values('number Int8', (1),(1),(2),(3))
+```
+
+```bash
+┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)─┐
+│ 1 │ 7 │
+│ 1 │ 7 │
+│ 2 │ 7 │
+│ 3 │ 7 │
+└────────┴──────────────────────────────────────────────────────────────────────────────────────────────────┘
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedPreceding.ExprPreceding.WithoutOrderBy.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error when `RANGE BETWEEN UNBOUNDED PRECEDING AND expr PRECEDING` frame is specified
+without the `ORDER BY` clause.
+ +For example, + +```sql +SELECT number,sum(number) OVER (RANGE BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING) FROM values('number Int8', (1),(1),(2),(3)) +``` + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedPreceding.ExprPreceding.WithOrderBy +version: 1.0 + +[ClickHouse] SHALL include all rows with values from and including the first row until and including +the value of the current row minus `expr` in the window partition +when `RANGE BETWEEN UNBOUNDED PRECEDING AND expr PRECEDING` frame is specified with the `ORDER BY` clause. + +For example, + +```sql +SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING) FROM values('number Int8', (1),(1),(2),(3)) +``` + +```bash +┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING)─┐ +│ 1 │ 0 │ +│ 1 │ 0 │ +│ 2 │ 2 │ +│ 3 │ 4 │ +└────────┴──────────────────────────────────────────────────────────────────────────────────────────┘ +``` + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedPreceding.ExprFollowing.WithoutOrderBy.Error +version: 1.0 + +[ClickHouse] SHALL return an error when `RANGE BETWEEN UNBOUNDED PRECEDING AND expr FOLLOWING` frame is specified +without the `ORDER BY` clause. + +For example, + +```sql +SELECT number,sum(number) OVER (RANGE BETWEEN UNBOUNDED PRECEDING AND 1 FOLLOWING) FROM values('number Int8', (1),(1),(2),(3)) +``` + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedPreceding.ExprFollowing.WithOrderBy +version: 1.0 + +[ClickHouse] SHALL include all rows with values from and including the first row until and including +the value of the current row plus `expr` in the window partition +when `RANGE BETWEEN UNBOUNDED PRECEDING AND expr FOLLOWING` frame is specified with the `ORDER BY` clause. + +For example, + +```sql +SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN UNBOUNDED PRECEDING AND 1 FOLLOWING) FROM values('number Int8', (1),(1),(2),(3)) +``` + +```bash +┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN UNBOUNDED PRECEDING AND 1 FOLLOWING)─┐ +│ 1 │ 4 │ +│ 1 │ 4 │ +│ 2 │ 7 │ +│ 3 │ 7 │ +└────────┴──────────────────────────────────────────────────────────────────────────────────────────┘ +``` + +##### RANGE BETWEEN UNBOUNDED FOLLOWING + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedFollowing.CurrentRow.Error +version: 1.0 + +[ClickHouse] SHALL return an error when `RANGE BETWEEN UNBOUNDED FOLLOWING AND CURRENT ROW` frame is specified +with or without the `ORDER BY` clause. + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedFollowing.UnboundedFollowing.Error +version: 1.0 + +[ClickHouse] SHALL return an error when `RANGE BETWEEN UNBOUNDED FOLLOWING AND UNBOUNDED FOLLOWING` frame is specified +with or without the `ORDER BY` clause. + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedFollowing.UnboundedPreceding.Error +version: 1.0 + +[ClickHouse] SHALL return an error when `RANGE BETWEEN UNBOUNDED FOLLOWING AND UNBOUNDED PRECEDING` frame is specified +with or without the `ORDER BY` clause. + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedFollowing.ExprPreceding.Error +version: 1.0 + +[ClickHouse] SHALL return an error when `RANGE BETWEEN UNBOUNDED FOLLOWING AND expr PRECEDING` frame is specified +with or without the `ORDER BY` clause. 
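+
+For example (an illustrative query, following the pattern of the other error examples),
+
+```sql
+SELECT number,sum(number) OVER (RANGE BETWEEN UNBOUNDED FOLLOWING AND 1 PRECEDING) FROM numbers(1,3)
+```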
+ +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedFollowing.ExprFollowing.Error +version: 1.0 + +[ClickHouse] SHALL return an error when `RANGE BETWEEN UNBOUNDED FOLLOWING AND expr FOLLOWING` frame is specified +with or without the `ORDER BY` clause. + +##### RANGE BETWEEN expr PRECEDING + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.CurrentRow.WithOrderBy +version: 1.0 + +[ClickHouse] SHALL include all rows with values from and including current row minus `expr` +until and including [current row peers] in the window partition +when `RANGE BETWEEN expr PRECEDING AND CURRENT ROW` frame is specified with the `ORDER BY` clause. + +For example, + +```sql +SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 1 PRECEDING AND CURRENT ROW) FROM values('number Int8', (1),(1),(2),(3)) +``` + +```bash +┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN 1 PRECEDING AND CURRENT ROW)─┐ +│ 1 │ 2 │ +│ 1 │ 2 │ +│ 2 │ 4 │ +│ 3 │ 5 │ +└────────┴──────────────────────────────────────────────────────────────────────────────────┘ +``` + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.CurrentRow.WithoutOrderBy.Error +version: 1.0 + +[ClickHouse] SHALL return an error when `RANGE BETWEEN expr PRECEDING AND CURRENT ROW` frame is specified +without the `ORDER BY` clause. + +For example, + +```sql +SELECT number,sum(number) OVER (RANGE BETWEEN 1 PRECEDING AND CURRENT ROW) FROM values('number Int8', (1),(1),(2),(3)) +``` + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.UnboundedPreceding.Error +version: 1.0 + +[ClickHouse] SHALL return an error when `RANGE BETWEEN expr PRECEDING AND UNBOUNDED PRECEDING` frame is specified +with or without the `ORDER BY` clause. + +For example, + +**Without `ORDER BY`** + +```sql +SELECT number,sum(number) OVER (RANGE BETWEEN 1 PRECEDING AND UNBOUNDED PRECEDING) FROM values('number Int8', (1),(1),(2),(3)) +``` + +**With `ORDER BY`** + +```sql +SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 1 PRECEDING AND UNBOUNDED PRECEDING) FROM values('number Int8', (1),(1),(2),(3)) +``` + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.UnboundedFollowing.WithoutOrderBy.Error +version: 1.0 + +[ClickHouse] SHALL return an error when `RANGE BETWEEN expr PRECEDING AND UNBOUNDED FOLLOWING` frame is specified +without the `ORDER BY` clause. + +For example, + +```sql +SELECT number,sum(number) OVER (RANGE BETWEEN 1 PRECEDING AND UNBOUNDED FOLLOWING) FROM numbers(1,3) +``` + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.UnboundedFollowing.WithOrderBy +version: 1.0 + +[ClickHouse] SHALL include all rows with values from and including current row minus `expr` +until and including the last row in the window partition when `RANGE BETWEEN expr PRECEDING AND UNBOUNDED FOLLOWING` frame +is specified with the `ORDER BY` clause. 
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 1 PRECEDING AND UNBOUNDED FOLLOWING) FROM values('number Int8', (1),(1),(2),(3))
+```
+
+```bash
+┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN 1 PRECEDING AND UNBOUNDED FOLLOWING)─┐
+│ 1 │ 7 │
+│ 1 │ 7 │
+│ 2 │ 7 │
+│ 3 │ 5 │
+└────────┴──────────────────────────────────────────────────────────────────────────────────────────┘
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.ExprFollowing.WithoutOrderBy.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error when `RANGE BETWEEN expr PRECEDING AND expr FOLLOWING` frame is specified
+without the `ORDER BY` clause.
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (RANGE BETWEEN 1 PRECEDING AND 1 FOLLOWING) FROM numbers(1,3)
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.ExprFollowing.WithOrderBy
+version: 1.0
+
+[ClickHouse] SHALL include all rows with values from and including current row minus preceding `expr`
+until and including current row plus following `expr` in the window partition
+when `RANGE BETWEEN expr PRECEDING AND expr FOLLOWING` frame is specified with the `ORDER BY` clause.
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 1 PRECEDING AND 1 FOLLOWING) FROM values('number Int8', (1),(1),(2),(3))
+```
+
+```bash
+┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN 1 PRECEDING AND 1 FOLLOWING)─┐
+│ 1 │ 4 │
+│ 1 │ 4 │
+│ 2 │ 7 │
+│ 3 │ 5 │
+└────────┴──────────────────────────────────────────────────────────────────────────────────┘
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.ExprPreceding.WithoutOrderBy.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error when `RANGE BETWEEN expr PRECEDING AND expr PRECEDING` frame is specified
+without the `ORDER BY` clause.
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (RANGE BETWEEN 1 PRECEDING AND 0 PRECEDING) FROM numbers(1,3)
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.ExprPreceding.WithOrderBy.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error when the value of the [frame_end] specified by the
+current row minus preceding `expr` is less than the value of the [frame_start] in the window partition
+when `RANGE BETWEEN expr PRECEDING AND expr PRECEDING` frame is specified with the `ORDER BY` clause.
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 1 PRECEDING AND 2 PRECEDING) FROM values('number Int8', (1),(1),(2),(3))
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.ExprPreceding.WithOrderBy
+version: 1.0
+
+[ClickHouse] SHALL include all rows with values from and including current row minus preceding `expr` for the [frame_start]
+until and including current row minus preceding `expr` for the [frame_end] in the window partition
+when `RANGE BETWEEN expr PRECEDING AND expr PRECEDING` frame is specified with the `ORDER BY` clause
+if and only if the [frame_end] value is equal to or greater than the [frame_start] value.
+
+For example,
+
+**Greater Than**
+
+```sql
+SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 1 PRECEDING AND 0 PRECEDING) FROM values('number Int8', (1),(1),(2),(3))
+```
+
+```bash
+┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN 1 PRECEDING AND 0 PRECEDING)─┐
+│ 1 │ 2 │
+│ 1 │ 2 │
+│ 2 │ 4 │
+│ 3 │ 5 │
+└────────┴──────────────────────────────────────────────────────────────────────────────────┘
+```
+
+or **Equal**
+
+```sql
+SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 1 PRECEDING AND 1 PRECEDING) FROM values('number Int8', (1),(1),(2),(3))
+```
+
+```bash
+┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN 1 PRECEDING AND 1 PRECEDING)─┐
+│ 1 │ 0 │
+│ 1 │ 0 │
+│ 2 │ 2 │
+│ 3 │ 2 │
+└────────┴──────────────────────────────────────────────────────────────────────────────────┘
+```
+
+##### RANGE BETWEEN expr FOLLOWING
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.CurrentRow.WithoutOrderBy.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error when `RANGE BETWEEN expr FOLLOWING AND CURRENT ROW` frame is specified
+without the `ORDER BY` clause.
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (RANGE BETWEEN 1 FOLLOWING AND CURRENT ROW) FROM values('number Int8', (1),(1),(2),(3))
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.CurrentRow.WithOrderBy.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error when `RANGE BETWEEN expr FOLLOWING AND CURRENT ROW` frame is specified
+with the `ORDER BY` clause and `expr` is greater than `0`.
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 1 FOLLOWING AND CURRENT ROW) FROM values('number Int8', (1),(1),(2),(3))
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.CurrentRow.ZeroSpecialCase
+version: 1.0
+
+[ClickHouse] SHALL include all [current row peers] in the window partition
+when `RANGE BETWEEN expr FOLLOWING AND CURRENT ROW` frame is specified
+with the `ORDER BY` clause if and only if the `expr` equals `0`.
+
+For example,
+
+**Without `ORDER BY`**
+
+```sql
+SELECT number,sum(number) OVER (RANGE BETWEEN 0 FOLLOWING AND CURRENT ROW) FROM values('number Int8', (1),(1),(2),(3))
+```
+
+```bash
+┌─number─┬─sum(number) OVER (RANGE BETWEEN 0 FOLLOWING AND CURRENT ROW)─┐
+│ 1 │ 7 │
+│ 1 │ 7 │
+│ 2 │ 7 │
+│ 3 │ 7 │
+└────────┴──────────────────────────────────────────────────────────────┘
+```
+
+**With `ORDER BY`**
+
+```sql
+SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 0 FOLLOWING AND CURRENT ROW) FROM values('number Int8', (1),(1),(2),(3))
+```
+
+```bash
+┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN 0 FOLLOWING AND CURRENT ROW)─┐
+│ 1 │ 2 │
+│ 1 │ 2 │
+│ 2 │ 2 │
+│ 3 │ 3 │
+└────────┴──────────────────────────────────────────────────────────────────────────────────┘
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.UnboundedFollowing.WithoutOrderBy.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error when `RANGE BETWEEN expr FOLLOWING AND UNBOUNDED FOLLOWING` frame is specified
+without the `ORDER BY` clause.
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (RANGE BETWEEN 1 FOLLOWING AND UNBOUNDED FOLLOWING) FROM values('number Int8', (1),(1),(2),(3))
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.UnboundedFollowing.WithOrderBy
+version: 1.0
+
+[ClickHouse] SHALL include all rows with values from and including current row plus `expr`
+until and including the last row in the window partition
+when `RANGE BETWEEN expr FOLLOWING AND UNBOUNDED FOLLOWING` frame is specified with the `ORDER BY` clause.
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 1 FOLLOWING AND UNBOUNDED FOLLOWING) FROM values('number Int8', (1),(1),(2),(3))
+```
+
+```bash
+┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN 1 FOLLOWING AND UNBOUNDED FOLLOWING)─┐
+│ 1 │ 5 │
+│ 1 │ 5 │
+│ 2 │ 3 │
+│ 3 │ 0 │
+└────────┴──────────────────────────────────────────────────────────────────────────────────────────┘
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.UnboundedPreceding.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error when `RANGE BETWEEN expr FOLLOWING AND UNBOUNDED PRECEDING` frame is specified
+with or without the `ORDER BY` clause.
+
+For example,
+
+**Without `ORDER BY`**
+
+```sql
+SELECT number,sum(number) OVER (RANGE BETWEEN 1 FOLLOWING AND UNBOUNDED PRECEDING) FROM values('number Int8', (1),(1),(2),(3))
+```
+
+**With `ORDER BY`**
+
+```sql
+SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 1 FOLLOWING AND UNBOUNDED PRECEDING) FROM values('number Int8', (1),(1),(2),(3))
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.ExprPreceding.WithoutOrderBy.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error when `RANGE BETWEEN expr FOLLOWING AND expr PRECEDING` frame is specified
+without the `ORDER BY` clause.
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.ExprPreceding.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error when `RANGE BETWEEN expr FOLLOWING AND expr PRECEDING` frame is specified
+with the `ORDER BY` clause if the value of either `expr` is not `0`.
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.ExprPreceding.WithOrderBy.ZeroSpecialCase
+version: 1.0
+
+[ClickHouse] SHALL include all rows that are [current row peers] in the window partition
+when `RANGE BETWEEN expr FOLLOWING AND expr PRECEDING` frame is specified
+with the `ORDER BY` clause if and only if both `expr`'s are `0`.
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 0 FOLLOWING AND 0 PRECEDING) FROM values('number Int8', (1),(1),(2),(3))
+```
+
+```bash
+┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN 0 FOLLOWING AND 0 PRECEDING)─┐
+│ 1 │ 2 │
+│ 1 │ 2 │
+│ 2 │ 2 │
+│ 3 │ 3 │
+└────────┴──────────────────────────────────────────────────────────────────────────────────┘
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.ExprFollowing.WithoutOrderBy.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error when `RANGE BETWEEN expr FOLLOWING AND expr FOLLOWING` frame is specified
+without the `ORDER BY` clause.
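+
+For example (an illustrative query; the same frame is valid when `ORDER BY` is present),
+
+```sql
+SELECT number,sum(number) OVER (RANGE BETWEEN 1 FOLLOWING AND 2 FOLLOWING) FROM values('number Int8', (1),(1),(2),(3))
+```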
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.ExprFollowing.WithOrderBy.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error when `RANGE BETWEEN expr FOLLOWING AND expr FOLLOWING` frame is specified
+with the `ORDER BY` clause but the `expr` for the [frame_end] is less than the `expr` for the [frame_start].
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 1 FOLLOWING AND 0 FOLLOWING) FROM values('number Int8', (1),(1),(2),(3))
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.ExprFollowing.WithOrderBy
+version: 1.0
+
+[ClickHouse] SHALL include all rows with values from and including current row plus `expr` for the [frame_start]
+until and including current row plus `expr` for the [frame_end] in the window partition
+when `RANGE BETWEEN expr FOLLOWING AND expr FOLLOWING` frame is specified
+with the `ORDER BY` clause if and only if the `expr` for the [frame_end] is greater than or equal to the
+`expr` for the [frame_start].
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 1 FOLLOWING AND 2 FOLLOWING) FROM values('number Int8', (1),(1),(2),(3))
+```
+
+```bash
+┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN 1 FOLLOWING AND 2 FOLLOWING)─┐
+│ 1 │ 5 │
+│ 1 │ 5 │
+│ 2 │ 3 │
+│ 3 │ 0 │
+└────────┴──────────────────────────────────────────────────────────────────────────────────┘
+```
+
+#### Frame Extent
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.Frame.Extent
+version: 1.0
+
+[ClickHouse] SHALL support [frame_extent] defined as
+
+```
+frame_extent:
+ {frame_start | frame_between}
+```
+
+#### Frame Start
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.Frame.Start
+version: 1.0
+
+[ClickHouse] SHALL support [frame_start] defined as
+
+```
+frame_start: {
+ CURRENT ROW
+ | UNBOUNDED PRECEDING
+ | UNBOUNDED FOLLOWING
+ | expr PRECEDING
+ | expr FOLLOWING
+}
+```
+
+#### Frame Between
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.Frame.Between
+version: 1.0
+
+[ClickHouse] SHALL support [frame_between] defined as
+
+```
+frame_between:
+ BETWEEN frame_start AND frame_end
+```
+
+#### Frame End
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.Frame.End
+version: 1.0
+
+[ClickHouse] SHALL support [frame_end] defined as
+
+```
+frame_end: {
+ CURRENT ROW
+ | UNBOUNDED PRECEDING
+ | UNBOUNDED FOLLOWING
+ | expr PRECEDING
+ | expr FOLLOWING
+}
+```
+
+#### `CURRENT ROW`
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.CurrentRow
+version: 1.0
+
+[ClickHouse] SHALL support `CURRENT ROW` as `frame_start` or `frame_end` value.
+
+* For `ROWS` it SHALL define the bound to be the current row
+* For `RANGE` it SHALL define the bound to be the peers of the current row
+
+#### `UNBOUNDED PRECEDING`
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.UnboundedPreceding
+version: 1.0
+
+[ClickHouse] SHALL support `UNBOUNDED PRECEDING` as `frame_start` or `frame_end` value
+and it SHALL define that the bound is the first partition row.
+
+#### `UNBOUNDED FOLLOWING`
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.UnboundedFollowing
+version: 1.0
+
+[ClickHouse] SHALL support `UNBOUNDED FOLLOWING` as `frame_start` or `frame_end` value
+and it SHALL define that the bound is the last partition row.
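+
+For example, using `UNBOUNDED FOLLOWING` as the [frame_end] (an illustrative query in the
+style of the examples above; the frame extends from the current row to the end of the partition),
+
+```sql
+SELECT number,sum(number) OVER (ORDER BY number ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) FROM numbers(1,3)
+```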
+
+#### `expr PRECEDING`
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.ExprPreceding
+version: 1.0
+
+[ClickHouse] SHALL support `expr PRECEDING` as `frame_start` or `frame_end` value
+
+* For `ROWS` it SHALL define the bound to be the `expr` rows before the current row
+* For `RANGE` it SHALL define the bound to be the rows with values equal to the current row value minus the `expr`.
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.ExprPreceding.ExprValue
+version: 1.0
+
+[ClickHouse] SHALL support only non-negative numeric literal as the value for the `expr` in the `expr PRECEDING` frame boundary.
+
+For example,
+
+```
+5 PRECEDING
+```
+
+#### `expr FOLLOWING`
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.ExprFollowing
+version: 1.0
+
+[ClickHouse] SHALL support `expr FOLLOWING` as `frame_start` or `frame_end` value
+
+* For `ROWS` it SHALL define the bound to be the `expr` rows after the current row
+* For `RANGE` it SHALL define the bound to be the rows with values equal to the current row value plus `expr`
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.ExprFollowing.ExprValue
+version: 1.0
+
+[ClickHouse] SHALL support only non-negative numeric literal as the value for the `expr` in the `expr FOLLOWING` frame boundary.
+
+For example,
+
+```
+5 FOLLOWING
+```
+
+### WINDOW Clause
+
+#### RQ.SRS-019.ClickHouse.WindowFunctions.WindowClause
+version: 1.0
+
+[ClickHouse] SHALL support `WINDOW` clause to define one or more windows.
+
+```sql
+WINDOW window_name AS (window_spec)
+ [, window_name AS (window_spec)] ...
+```
+
+The `window_name` SHALL be the name of a window defined by a `WINDOW` clause.
+
+The [window_spec] SHALL specify the window.
+
+For example,
+
+```sql
+SELECT ... FROM table WINDOW w AS (partition by id)
+```
+
+#### RQ.SRS-019.ClickHouse.WindowFunctions.WindowClause.MultipleWindows
+version: 1.0
+
+[ClickHouse] SHALL support `WINDOW` clause that defines multiple windows.
+
+For example,
+
+```sql
+SELECT ... FROM table WINDOW w1 AS (partition by id), w2 AS (partition by customer)
+```
+
+#### RQ.SRS-019.ClickHouse.WindowFunctions.WindowClause.MissingWindowSpec.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error if the `WINDOW` clause definition is missing [window_spec].
+
+### `OVER` Clause
+
+#### RQ.SRS-019.ClickHouse.WindowFunctions.OverClause
+version: 1.0
+
+[ClickHouse] SHALL support `OVER` clause to either use a named window defined using the `WINDOW` clause
+or an ad-hoc window defined in place.
+
+```
+OVER ()|(window_spec)|named_window
+```
+
+#### Empty Clause
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.OverClause.EmptyOverClause
+version: 1.0
+
+[ClickHouse] SHALL treat the entire set of query rows as a single partition when `OVER` clause is empty.
+For example,
+
+```sql
+SELECT sum(x) OVER () FROM table
+```
+
+#### Ad-Hoc Window
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.OverClause.AdHocWindow
+version: 1.0
+
+[ClickHouse] SHALL support ad hoc window specification in the `OVER` clause.
+
+```
+OVER [window_spec]
+```
+
+See [window_spec] definition.
+
+For example,
+
+```sql
+(count(*) OVER (partition by id order by time desc))
+```
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.OverClause.AdHocWindow.MissingWindowSpec.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error if the `OVER` clause has missing [window_spec].
+
+#### Named Window
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.OverClause.NamedWindow
+version: 1.0
+
+[ClickHouse] SHALL support using a previously defined named window in the `OVER` clause.
+
+```
+OVER [window_name]
+```
+
+See [window_name] definition.
+
+For example,
+
+```sql
+SELECT count(*) OVER w FROM table WINDOW w AS (partition by id)
+```
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.OverClause.NamedWindow.InvalidName.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error if the `OVER` clause references an invalid window name.
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.OverClause.NamedWindow.MultipleWindows.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error if the `OVER` clause references more than one window name.
+
+### Window Functions
+
+#### Nonaggregate Functions
+
+##### The `first_value(expr)` Function
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.FirstValue
+version: 1.0
+
+[ClickHouse] SHALL support `first_value` window function that
+SHALL be a synonym for the `any(value)` function
+that SHALL return the value of `expr` from the first row in the window frame.
+
+```
+first_value(expr) OVER ...
+```
+
+##### The `last_value(expr)` Function
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.LastValue
+version: 1.0
+
+[ClickHouse] SHALL support `last_value` window function that
+SHALL be a synonym for the `anyLast(value)` function
+that SHALL return the value of `expr` from the last row in the window frame.
+
+```
+last_value(expr) OVER ...
+```
+
+##### The `lag(value, offset)` Function Workaround
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.Lag.Workaround
+version: 1.0
+
+[ClickHouse] SHALL support a workaround for the `lag(value, offset)` function as
+
+```
+any(value) OVER (.... ROWS BETWEEN N PRECEDING AND N PRECEDING)
+```
+
+The function SHALL return the value from the row that lags (precedes) the current row
+by `N` rows within its partition, where `N` is the offset used in the frame definition.
+
+If there is no such row, the return value SHALL be the default.
+
+For example, if `N` is 3, the return value is the default for the first three rows.
+If `N` or default are missing, the defaults are 1 and NULL, respectively.
+
+`N` SHALL be a literal non-negative integer. If `N` is 0, the value SHALL be
+returned for the current row.
+
+##### The `lead(value, offset)` Function Workaround
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.Lead.Workaround
+version: 1.0
+
+[ClickHouse] SHALL support a workaround for the `lead(value, offset)` function as
+
+```
+any(value) OVER (.... ROWS BETWEEN N FOLLOWING AND N FOLLOWING)
+```
+
+The function SHALL return the value from the row that leads (follows) the current row by
+`N` rows within its partition, where `N` is the offset used in the frame definition.
+
+If there is no such row, the return value SHALL be the default.
+
+For example, if `N` is 3, the return value is the default for the last three rows.
+If `N` or default are missing, the defaults are 1 and NULL, respectively.
+
+`N` SHALL be a literal non-negative integer. If `N` is 0, the value SHALL be
+returned for the current row.
+
+##### The `rank()` Function
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.Rank
+version: 1.0
+
+[ClickHouse] SHALL support `rank` window function that SHALL
+return the rank of the current row within its partition with gaps.
+
+Peers SHALL be considered ties and receive the same rank.
+The function SHALL not assign consecutive ranks to peer groups if groups of size greater than one exist;
+the result is noncontiguous rank numbers.
+
+If the function is used without `ORDER BY` to sort partition rows into the desired order
+then all rows SHALL be peers.
+
+```
+rank() OVER ...
+```
+
+##### The `dense_rank()` Function
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.DenseRank
+version: 1.0
+
+[ClickHouse] SHALL support `dense_rank` function over a window that SHALL
+return the rank of the current row within its partition without gaps.
+
+Peers SHALL be considered ties and receive the same rank.
+The function SHALL assign consecutive ranks to peer groups and
+the result is that groups of size greater than one do not produce noncontiguous rank numbers.
+
+If the function is used without `ORDER BY` to sort partition rows into the desired order
+then all rows SHALL be peers.
+
+```
+dense_rank() OVER ...
+```
+
+##### The `row_number()` Function
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RowNumber
+version: 1.0
+
+[ClickHouse] SHALL support `row_number` function over a window that SHALL
+return the number of the current row within its partition.
+
+Row numbers SHALL range from 1 to the number of partition rows.
+
+The `ORDER BY` affects the order in which rows are numbered.
+Without `ORDER BY`, row numbering MAY be nondeterministic.
+
+```
+row_number() OVER ...
+```
+
+#### Aggregate Functions
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.AggregateFunctions
+version: 1.0
+
+[ClickHouse] SHALL support using aggregate functions over windows.
+
+* [count](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/count/)
+* [min](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/min/)
+* [max](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/max/)
+* [sum](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/sum/)
+* [avg](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/avg/)
+* [any](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/any/)
+* [stddevPop](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/stddevpop/)
+* [stddevSamp](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/stddevsamp/)
+* [varPop(x)](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/varpop/)
+* [varSamp](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/varsamp/)
+* [covarPop](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/covarpop/)
+* [covarSamp](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/covarsamp/)
+* [anyHeavy](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/anyheavy/)
+* [anyLast](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/anylast/)
+* [argMin](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/argmin/)
+* [argMax](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/argmax/)
+* [avgWeighted](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/avgweighted/)
+* [corr](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/corr/)
+* [topK](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/topk/)
+* [topKWeighted](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/topkweighted/)
+* [groupArray](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/grouparray/)
+* [groupUniqArray](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/groupuniqarray/)
+* 
[groupArrayInsertAt](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/grouparrayinsertat/) +* [groupArrayMovingSum](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/grouparraymovingsum/) +* [groupArrayMovingAvg](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/grouparraymovingavg/) +* [groupArraySample](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/grouparraysample/) +* [groupBitAnd](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/groupbitand/) +* [groupBitOr](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/groupbitor/) +* [groupBitXor](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/groupbitxor/) +* [groupBitmap](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/groupbitmap/) +* [groupBitmapAnd](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/groupbitmapand/) +* [groupBitmapOr](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/groupbitmapor/) +* [groupBitmapXor](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/groupbitmapxor/) +* [sumWithOverflow](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/sumwithoverflow/) +* [deltaSum](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/deltasum/) +* [sumMap](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/summap/) +* [minMap](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/minmap/) +* [maxMap](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/maxmap/) +* [initializeAggregation](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/initializeAggregation/) +* [skewPop](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/skewpop/) +* [skewSamp](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/skewsamp/) +* [kurtPop](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/kurtpop/) +* [kurtSamp](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/kurtsamp/) +* [uniq](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/uniq/) +* [uniqExact](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/uniqexact/) +* [uniqCombined](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/uniqcombined/) +* [uniqCombined64](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/uniqcombined64/) +* [uniqHLL12](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/uniqhll12/) +* [quantile](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/quantile/) +* [quantiles](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/quantiles/) +* [quantileExact](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/quantileexact/) +* [quantileExactWeighted](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/quantileexactweighted/) +* [quantileTiming](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/quantiletiming/) +* [quantileTimingWeighted](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/quantiletimingweighted/) +* 
[quantileDeterministic](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/quantiledeterministic/) +* [quantileTDigest](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/quantiletdigest/) +* [quantileTDigestWeighted](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/quantiletdigestweighted/) +* [simpleLinearRegression](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/simplelinearregression/) +* [stochasticLinearRegression](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/stochasticlinearregression/) +* [stochasticLogisticRegression](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/stochasticlogisticregression/) +* [categoricalInformationValue](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/stochasticlogisticregression/) +* [studentTTest](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/studentttest/) +* [welchTTest](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/welchttest/) +* [mannWhitneyUTest](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/mannwhitneyutest/) +* [median](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/median/) +* [rankCorr](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/rankCorr/) + +##### Combinators + +###### RQ.SRS-019.ClickHouse.WindowFunctions.AggregateFunctions.Combinators +version: 1.0 + +[ClickHouse] SHALL support aggregate functions with combinator prefixes over windows. + +* [-If](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/combinators/#agg-functions-combinator-if) +* [-Array](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/combinators/#agg-functions-combinator-array) +* [-SimpleState](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/combinators/#agg-functions-combinator-simplestate) +* [-State](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/combinators/#agg-functions-combinator-state) +* [-Merge](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/combinators/#aggregate_functions_combinators-merge) +* [-MergeState](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/combinators/#aggregate_functions_combinators-mergestate) +* [-ForEach](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/combinators/#agg-functions-combinator-foreach) +* [-Distinct](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/combinators/#agg-functions-combinator-distinct) +* [-OrDefault](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/combinators/#agg-functions-combinator-ordefault) +* [-OrNull](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/combinators/#agg-functions-combinator-ornull) +* [-Resample](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/combinators/#agg-functions-combinator-resample) + +##### Parametric + +###### RQ.SRS-019.ClickHouse.WindowFunctions.AggregateFunctions.Parametric +version: 1.0 + +[ClickHouse] SHALL support parametric aggregate functions over windows. 
+
+* [histogram](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/parametric-functions/#histogram)
+* [sequenceMatch(pattern)(timestamp, cond1, cond2, ...)](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/parametric-functions/#function-sequencematch)
+* [sequenceCount(pattern)(time, cond1, cond2, ...)](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/parametric-functions/#function-sequencecount)
+* [windowFunnel](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/parametric-functions/#windowfunnel)
+* [retention](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/parametric-functions/#retention)
+* [uniqUpTo(N)(x)](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/parametric-functions/#uniquptonx)
+* [sumMapFiltered(keys_to_keep)(keys, values)](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/parametric-functions/#summapfilteredkeys-to-keepkeys-values)
+
+## References
+
+* [ClickHouse]
+* [GitHub Repository]
+* [Revision History]
+* [Git]
+
+[current row peers]: #current-row-peers
+[frame_extent]: #frame-extent
+[frame_between]: #frame-between
+[frame_start]: #frame-start
+[frame_end]: #frame-end
+[window_name]: #window-clause
+[window_spec]: #window-specification
+[partition_clause]: #partition-clause
+[order_clause]: #order-clause
+[frame_clause]: #frame-clause
+[window functions]: https://clickhouse.tech/docs/en/sql-reference/window-functions/
+[ClickHouse]: https://clickhouse.tech
+[GitHub Repository]: https://github.com/ClickHouse/ClickHouse/blob/master/tests/testflows/window_functions/requirements/requirements.md
+[Revision History]: https://github.com/ClickHouse/ClickHouse/commits/master/tests/testflows/window_functions/requirements/requirements.md
+[Git]: https://git-scm.com/
+[GitHub]: https://github.com
diff --git a/tests/testflows/window_functions/requirements/requirements.py b/tests/testflows/window_functions/requirements/requirements.py
new file mode 100644
index 00000000000..bfc3656ba6d
--- /dev/null
+++ b/tests/testflows/window_functions/requirements/requirements.py
@@ -0,0 +1,5887 @@
+# These requirements were auto generated
+# from software requirements specification (SRS)
+# document by TestFlows v1.6.210312.1172513.
+# Do not edit by hand but re-generate instead
+# using 'tfs requirements generate' command.
+from testflows.core import Specification +from testflows.core import Requirement + +Heading = Specification.Heading + +RQ_SRS_019_ClickHouse_WindowFunctions = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support [window functions] that produce a result for each row inside the window.\n' + '\n' + ), + link=None, + level=3, + num='3.1.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_NonDistributedTables = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.NonDistributedTables', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support correct operation of [window functions] on non-distributed \n' + 'table engines such as `MergeTree`.\n' + '\n' + '\n' + ), + link=None, + level=3, + num='3.1.2') + +RQ_SRS_019_ClickHouse_WindowFunctions_DistributedTables = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.DistributedTables', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support correct operation of [window functions] on\n' + '[Distributed](https://clickhouse.tech/docs/en/engines/table-engines/special/distributed/) table engine.\n' + '\n' + ), + link=None, + level=3, + num='3.1.3') + +RQ_SRS_019_ClickHouse_WindowFunctions_WindowSpec = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.WindowSpec', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support defining a window using window specification clause.\n' + 'The [window_spec] SHALL be defined as\n' + '\n' + '```\n' + 'window_spec:\n' + ' [partition_clause] [order_clause] [frame_clause]\n' + '```\n' + '\n' + 'that SHALL specify how to partition query rows into groups for processing by the window function.\n' + '\n' + ), + link=None, + level=3, + num='3.2.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_PartitionClause = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.PartitionClause', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support [partition_clause] that indicates how to divide the query rows into groups.\n' + 'The [partition_clause] SHALL be defined as\n' + '\n' + '```\n' + 'partition_clause:\n' + ' PARTITION BY expr [, expr] ...\n' + '```\n' + '\n' + ), + link=None, + level=3, + num='3.3.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_PartitionClause_MultipleExpr = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.PartitionClause.MultipleExpr', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support partitioning by more than one `expr` in the [partition_clause] definition.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + "SELECT x,s, sum(x) OVER (PARTITION BY x,s) FROM values('x Int8, s String', (1,'a'),(1,'b'),(2,'b'))\n" + '```\n' + '\n' + '```bash\n' + '┌─x─┬─s─┬─sum(x) OVER (PARTITION BY x, s)─┐\n' + '│ 1 │ a │ 1 │\n' + '│ 1 │ b │ 1 │\n' + '│ 2 │ b │ 2 │\n' + '└───┴───┴─────────────────────────────────┘\n' + '```\n' + '\n' + ), + link=None, + level=3, + num='3.3.2') + +RQ_SRS_019_ClickHouse_WindowFunctions_PartitionClause_MissingExpr_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.PartitionClause.MissingExpr.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error 
if `expr` is missing in the [partition_clause] definition.\n' + '\n' + '```sql\n' + 'SELECT sum(number) OVER (PARTITION BY) FROM numbers(1,3)\n' + '```\n' + '\n' + ), + link=None, + level=3, + num='3.3.3') + +RQ_SRS_019_ClickHouse_WindowFunctions_PartitionClause_InvalidExpr_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.PartitionClause.InvalidExpr.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error if `expr` is invalid in the [partition_clause] definition.\n' + '\n' + ), + link=None, + level=3, + num='3.3.4') + +RQ_SRS_019_ClickHouse_WindowFunctions_OrderClause = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.OrderClause', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support [order_clause] that indicates how to sort rows in each window.\n' + '\n' + ), + link=None, + level=3, + num='3.4.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_OrderClause_MultipleExprs = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.OrderClause.MultipleExprs', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support using more than one `expr` in the [order_clause] definition.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + "SELECT x,s, sum(x) OVER (ORDER BY x DESC, s DESC) FROM values('x Int8, s String', (1,'a'),(1,'b'),(2,'b'))\n" + '```\n' + '\n' + '```bash\n' + '┌─x─┬─s─┬─sum(x) OVER (ORDER BY x DESC, s DESC)─┐\n' + '│ 2 │ b │ 2 │\n' + '│ 1 │ b │ 3 │\n' + '│ 1 │ a │ 4 │\n' + '└───┴───┴───────────────────────────────────────┘\n' + '```\n' + '\n' + ), + link=None, + level=3, + num='3.4.2') + +RQ_SRS_019_ClickHouse_WindowFunctions_OrderClause_MissingExpr_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.OrderClause.MissingExpr.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error if `expr` is missing in the [order_clause] definition.\n' + '\n' + ), + link=None, + level=3, + num='3.4.3') + +RQ_SRS_019_ClickHouse_WindowFunctions_OrderClause_InvalidExpr_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.OrderClause.InvalidExpr.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error if `expr` is invalid in the [order_clause] definition.\n' + '\n' + ), + link=None, + level=3, + num='3.4.4') + +RQ_SRS_019_ClickHouse_WindowFunctions_FrameClause = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.FrameClause', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support [frame_clause] that SHALL specify a subset of the current window.\n' + '\n' + 'The `frame_clause` SHALL be defined as\n' + '\n' + '```\n' + 'frame_clause:\n' + ' {ROWS | RANGE } frame_extent\n' + '```\n' + '\n' + ), + link=None, + level=3, + num='3.5.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support `ROWS` frame to define beginning and ending row positions.\n' + 'Offsets SHALL be differences in row numbers from the current row number.\n' + '\n' + '```sql\n' + 'ROWS frame_extent\n' + '```\n' + '\n' + 'See [frame_extent] definition.\n' + '\n' + ), + 
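# Editorial note (assumption, not generated from the SRS source): the + # MissingExpr/InvalidExpr error requirements above carry no inline example; + # an illustrative query that is expected to raise such an error is: + #   SELECT number, sum(number) OVER (ORDER BY) FROM numbers(1,3) + 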
link=None, + level=4, + num='3.5.2.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_MissingFrameExtent_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.MissingFrameExtent.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error if the `ROWS` frame clause is defined without [frame_extent].\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT number,sum(number) OVER (ORDER BY number ROWS) FROM numbers(1,3)\n' + '```\n' + '\n' + ), + link=None, + level=4, + num='3.5.2.2') + +RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_InvalidFrameExtent_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.InvalidFrameExtent.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error if the `ROWS` frame clause has invalid [frame_extent].\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + "SELECT number,sum(number) OVER (ORDER BY number ROWS '1') FROM numbers(1,3)\n" + '```\n' + '\n' + ), + link=None, + level=4, + num='3.5.2.3') + +RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Start_CurrentRow = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Start.CurrentRow', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL include only the current row in the window partition\n' + 'when `ROWS CURRENT ROW` frame is specified.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT number,sum(number) OVER (ROWS CURRENT ROW) FROM numbers(1,2)\n' + '```\n' + '\n' + '```bash\n' + '┌─number─┬─sum(number) OVER (ROWS BETWEEN CURRENT ROW AND CURRENT ROW)─┐\n' + '│ 1 │ 1 │\n' + '│ 2 │ 2 │\n' + '└────────┴─────────────────────────────────────────────────────────────┘\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.2.4.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Start_UnboundedPreceding = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Start.UnboundedPreceding', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL include all rows before and including the current row in the window partition\n' + 'when `ROWS UNBOUNDED PRECEDING` frame is specified.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT number,sum(number) OVER (ROWS UNBOUNDED PRECEDING) FROM numbers(1,3)\n' + '```\n' + '\n' + '```bash\n' + '┌─number─┬─sum(number) OVER (ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)─┐\n' + '│ 1 │ 1 │\n' + '│ 2 │ 3 │\n' + '│ 3 │ 6 │\n' + '└────────┴─────────────────────────────────────────────────────────────────────┘\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.2.5.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Start_ExprPreceding = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Start.ExprPreceding', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL include `expr` rows before and including the current row in the window partition \n' + 'when `ROWS expr PRECEDING` frame is specified.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT number,sum(number) OVER (ROWS 1 PRECEDING) FROM numbers(1,3)\n' + '```\n' + '\n' + '```bash\n' + '┌─number─┬─sum(number) OVER (ROWS BETWEEN 1 PRECEDING AND CURRENT ROW)─┐\n' + '│ 1 │ 1 │\n' + '│ 2 │ 3 │\n' + '│ 3 │ 5 │\n' + 
'└────────┴─────────────────────────────────────────────────────────────┘\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.2.6.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Start_UnboundedFollowing_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Start.UnboundedFollowing.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error when `ROWS UNBOUNDED FOLLOWING` frame is specified.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT number,sum(number) OVER (ROWS UNBOUNDED FOLLOWING) FROM numbers(1,3)\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.2.7.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Start_ExprFollowing_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Start.ExprFollowing.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error when `ROWS expr FOLLOWING` frame is specified.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT number,sum(number) OVER (ROWS 1 FOLLOWING) FROM numbers(1,3)\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.2.8.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_CurrentRow_CurrentRow = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.CurrentRow.CurrentRow', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL include only the current row in the window partition\n' + 'when `ROWS BETWEEN CURRENT ROW AND CURRENT ROW` frame is specified.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT number,sum(number) OVER (ROWS BETWEEN CURRENT ROW AND CURRENT ROW) FROM numbers(1,2)\n' + '```\n' + '\n' + '```bash\n' + '┌─number─┬─sum(number) OVER (ROWS BETWEEN CURRENT ROW AND CURRENT ROW)─┐\n' + '│ 1 │ 1 │\n' + '│ 2 │ 2 │\n' + '└────────┴─────────────────────────────────────────────────────────────┘\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.2.9.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_CurrentRow_UnboundedPreceding_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.CurrentRow.UnboundedPreceding.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error when `ROWS BETWEEN CURRENT ROW AND UNBOUNDED PRECEDING` frame is specified.\n' + '\n' + ), + link=None, + level=5, + num='3.5.2.9.2') + +RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_CurrentRow_ExprPreceding_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.CurrentRow.ExprPreceding.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error when `ROWS BETWEEN CURRENT ROW AND expr PRECEDING` frame is specified.\n' + '\n' + ), + link=None, + level=5, + num='3.5.2.9.3') + +RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_CurrentRow_UnboundedFollowing = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.CurrentRow.UnboundedFollowing', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL include the current row and all the following rows in the window partition\n' + 'when `ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING` frame is specified.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT 
number,sum(number) OVER (ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) FROM numbers(1,3)\n' + '```\n' + '\n' + '```bash\n' + '┌─number─┬─sum(number) OVER (ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING)─┐\n' + '│ 1 │ 6 │\n' + '│ 2 │ 5 │\n' + '│ 3 │ 3 │\n' + '└────────┴─────────────────────────────────────────────────────────────────────┘\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.2.9.4') + +RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_CurrentRow_ExprFollowing = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.CurrentRow.ExprFollowing', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL include the current row and the `expr` rows that are following the current row in the window partition\n' + 'when `ROWS BETWEEN CURRENT ROW AND expr FOLLOWING` frame is specified.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT number,sum(number) OVER (ROWS BETWEEN CURRENT ROW AND 1 FOLLOWING) FROM numbers(1,3)\n' + '```\n' + '\n' + '```bash\n' + '┌─number─┬─sum(number) OVER (ROWS BETWEEN CURRENT ROW AND 1 FOLLOWING)─┐\n' + '│ 1 │ 3 │\n' + '│ 2 │ 5 │\n' + '│ 3 │ 3 │\n' + '└────────┴─────────────────────────────────────────────────────────────┘\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.2.9.5') + +RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_UnboundedPreceding_CurrentRow = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.UnboundedPreceding.CurrentRow', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL include all the rows before and including the current row in the window partition\n' + 'when `ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW` frame is specified.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT number,sum(number) OVER (ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) FROM numbers(1,3)\n' + '```\n' + '\n' + '```bash\n' + '┌─number─┬─sum(number) OVER (ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)─┐\n' + '│ 1 │ 1 │\n' + '│ 2 │ 3 │\n' + '│ 3 │ 6 │\n' + '└────────┴─────────────────────────────────────────────────────────────────────┘\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.2.10.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_UnboundedPreceding_UnboundedPreceding_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.UnboundedPreceding.UnboundedPreceding.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error when `ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED PRECEDING` frame is specified.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT number,sum(number) OVER (ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED PRECEDING) FROM numbers(1,3)\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.2.10.2') + +RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_UnboundedPreceding_ExprPreceding = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.UnboundedPreceding.ExprPreceding', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL include all the rows until and including the current row minus `expr` rows preceding it\n' + 'when `ROWS BETWEEN UNBOUNDED PRECEDING AND expr PRECEDING` frame is specified.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT number,sum(number) OVER (ROWS BETWEEN 
UNBOUNDED PRECEDING AND 1 PRECEDING) FROM numbers(1,3)\n' + '```\n' + '\n' + '```bash\n' + '┌─number─┬─sum(number) OVER (ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING)─┐\n' + '│ 1 │ 0 │\n' + '│ 2 │ 1 │\n' + '│ 3 │ 3 │\n' + '└────────┴─────────────────────────────────────────────────────────────────────┘\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.2.10.3') + +RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_UnboundedPreceding_UnboundedFollowing = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.UnboundedPreceding.UnboundedFollowing', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL include all rows in the window partition \n' + 'when `ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING` frame is specified.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT number,sum(number) OVER (ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) FROM numbers(1,3)\n' + '```\n' + '\n' + '```bash\n' + '┌─number─┬─sum(number) OVER (ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)─┐\n' + '│ 1 │ 6 │\n' + '│ 2 │ 6 │\n' + '│ 3 │ 6 │\n' + '└────────┴─────────────────────────────────────────────────────────────────────────────┘\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.2.10.4') + +RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_UnboundedPreceding_ExprFollowing = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.UnboundedPreceding.ExprFollowing', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL include all the rows until and including the current row plus `expr` rows following it\n' + 'when `ROWS BETWEEN UNBOUNDED PRECEDING AND expr FOLLOWING` frame is specified.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT number,sum(number) OVER (ROWS BETWEEN UNBOUNDED PRECEDING AND 1 FOLLOWING) FROM numbers(1,3)\n' + '```\n' + '\n' + '```bash\n' + '┌─number─┬─sum(number) OVER (ROWS BETWEEN UNBOUNDED PRECEDING AND 1 FOLLOWING)─┐\n' + '│ 1 │ 3 │\n' + '│ 2 │ 6 │\n' + '│ 3 │ 6 │\n' + '└────────┴─────────────────────────────────────────────────────────────────────┘\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.2.10.5') + +RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_UnboundedFollowing_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.UnboundedFollowing.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error when `UNBOUNDED FOLLOWING` is specified as the start of the frame, including\n' + '\n' + '* `ROWS BETWEEN UNBOUNDED FOLLOWING AND CURRENT ROW`\n' + '* `ROWS BETWEEN UNBOUNDED FOLLOWING AND UNBOUNDED PRECEDING`\n' + '* `ROWS BETWEEN UNBOUNDED FOLLOWING AND UNBOUNDED FOLLOWING`\n' + '* `ROWS BETWEEN UNBOUNDED FOLLOWING AND expr PRECEDING`\n' + '* `ROWS BETWEEN UNBOUNDED FOLLOWING AND expr FOLLOWING`\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT number,sum(number) OVER (ROWS BETWEEN UNBOUNDED FOLLOWING AND CURRENT ROW) FROM numbers(1,3)\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.2.11.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_ExprFollowing_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprFollowing.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error when `expr 
FOLLOWING` is specified as the start of the frame\n' + 'and it points to a row that is after the end of the frame inside the window partition, such\n' + 'as in the following cases:\n' + '\n' + '* `ROWS BETWEEN expr FOLLOWING AND CURRENT ROW`\n' + '* `ROWS BETWEEN expr FOLLOWING AND UNBOUNDED PRECEDING`\n' + '* `ROWS BETWEEN expr FOLLOWING AND expr PRECEDING`\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT number,sum(number) OVER (ROWS BETWEEN 1 FOLLOWING AND CURRENT ROW) FROM numbers(1,3)\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.2.12.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_ExprFollowing_ExprFollowing_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprFollowing.ExprFollowing.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error when `ROWS BETWEEN expr FOLLOWING AND expr FOLLOWING`\n' + 'is specified and the end of the frame specified by the `expr FOLLOWING` is a row that is before the row \n' + 'specified by the frame start.\n' + '\n' + '```sql\n' + 'SELECT number,sum(number) OVER (ROWS BETWEEN 1 FOLLOWING AND 0 FOLLOWING) FROM numbers(1,3)\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.2.12.2') + +RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_ExprFollowing_UnboundedFollowing = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprFollowing.UnboundedFollowing', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL include all the rows from and including the current row plus `expr` rows following it \n' + 'until and including the last row in the window partition\n' + 'when `ROWS BETWEEN expr FOLLOWING AND UNBOUNDED FOLLOWING` frame is specified.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT number,sum(number) OVER (ROWS BETWEEN 1 FOLLOWING AND UNBOUNDED FOLLOWING) FROM numbers(1,3)\n' + '```\n' + '\n' + '```bash\n' + '┌─number─┬─sum(number) OVER (ROWS BETWEEN 1 FOLLOWING AND UNBOUNDED FOLLOWING)─┐\n' + '│ 1 │ 5 │\n' + '│ 2 │ 3 │\n' + '│ 3 │ 0 │\n' + '└────────┴─────────────────────────────────────────────────────────────────────┘\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.2.12.3') + +RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_ExprFollowing_ExprFollowing = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprFollowing.ExprFollowing', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL include the rows from and including the current row plus `expr` rows following it \n' + 'until and including the row specified by the frame end, when the frame end, \n' + 'defined as the current row plus `expr` rows following it, is at or after the start of the frame\n' + 'when `ROWS BETWEEN expr FOLLOWING AND expr FOLLOWING` frame is specified.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT number,sum(number) OVER (ROWS BETWEEN 1 FOLLOWING AND 2 FOLLOWING) FROM numbers(1,3)\n' + '```\n' + '\n' + '```bash\n' + '┌─number─┬─sum(number) OVER (ROWS BETWEEN 1 FOLLOWING AND 2 FOLLOWING)─┐\n' + '│ 1 │ 5 │\n' + '│ 2 │ 3 │\n' + '│ 3 │ 0 │\n' + '└────────┴─────────────────────────────────────────────────────────────┘\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.2.12.4') + +
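# Editorial note (assumption, not generated from the SRS source): a ROWS + # frame start must be at or before the frame end; the CurrentRow.ExprPreceding + # error requirement (3.5.2.9.3 above) has no inline example, and an + # illustrative query expected to fail is: + #   SELECT number, sum(number) OVER (ROWS BETWEEN CURRENT ROW AND 1 PRECEDING) FROM numbers(1,3) + +RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_ExprPreceding_CurrentRow = Requirement( + 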
name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprPreceding.CurrentRow', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL include the rows from and including current row minus `expr` rows\n' + 'preceding it until and including the current row in the window frame\n' + 'when `ROWS BETWEEN expr PRECEDING AND CURRENT ROW` frame is specified.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT number,sum(number) OVER (ROWS BETWEEN 1 PRECEDING AND CURRENT ROW) FROM numbers(1,3)\n' + '```\n' + '\n' + '```bash\n' + '┌─number─┬─sum(number) OVER (ROWS BETWEEN 1 PRECEDING AND CURRENT ROW)─┐\n' + '│ 1 │ 1 │\n' + '│ 2 │ 3 │\n' + '│ 3 │ 5 │\n' + '└────────┴─────────────────────────────────────────────────────────────┘\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.2.13.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_ExprPreceding_UnboundedPreceding_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprPreceding.UnboundedPreceding.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error\n' + 'when `ROWS BETWEEN expr PRECEDING AND UNBOUNDED PRECEDING` frame is specified.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT number,sum(number) OVER (ROWS BETWEEN 1 PRECEDING AND UNBOUNDED PRECEDING) FROM numbers(1,3)\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.2.13.2') + +RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_ExprPreceding_UnboundedFollowing = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprPreceding.UnboundedFollowing', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL include the rows from and including current row minus `expr` rows\n' + 'preceding it until and including the last row in the window partition\n' + 'when `ROWS BETWEEN expr PRECEDING AND UNBOUNDED FOLLOWING` frame is specified.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT number,sum(number) OVER (ROWS BETWEEN 1 PRECEDING AND UNBOUNDED FOLLOWING) FROM numbers(1,3)\n' + '```\n' + '\n' + '```bash\n' + '┌─number─┬─sum(number) OVER (ROWS BETWEEN 1 PRECEDING AND UNBOUNDED FOLLOWING)─┐\n' + '│ 1 │ 6 │\n' + '│ 2 │ 6 │\n' + '│ 3 │ 5 │\n' + '└────────┴─────────────────────────────────────────────────────────────────────┘\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.2.13.3') + +RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_ExprPreceding_ExprPreceding_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprPreceding.ExprPreceding.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error when the frame end specified by the `expr PRECEDING`\n' + 'evaluates to a row that is before the row specified by the frame start in the window partition\n' + 'when `ROWS BETWEEN expr PRECEDING AND expr PRECEDING` frame is specified.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT number,sum(number) OVER (ROWS BETWEEN 1 PRECEDING AND 2 PRECEDING) FROM numbers(1,3)\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.2.13.4') + +RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_ExprPreceding_ExprPreceding = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprPreceding.ExprPreceding', + version='1.0', + priority=None, + 
group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL include the rows from and including the current row minus the [frame_start] `expr` rows\n' + 'until and including the current row minus the [frame_end] `expr` rows, if the end\n' + 'of the frame is at or after the frame start in the window partition\n' + 'when `ROWS BETWEEN expr PRECEDING AND expr PRECEDING` frame is specified.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT number,sum(number) OVER (ROWS BETWEEN 1 PRECEDING AND 0 PRECEDING) FROM numbers(1,3)\n' + '```\n' + '\n' + '```bash\n' + '┌─number─┬─sum(number) OVER (ROWS BETWEEN 1 PRECEDING AND 0 PRECEDING)─┐\n' + '│ 1 │ 1 │\n' + '│ 2 │ 3 │\n' + '│ 3 │ 5 │\n' + '└────────┴─────────────────────────────────────────────────────────────┘\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.2.13.5') + +RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_ExprPreceding_ExprFollowing = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprPreceding.ExprFollowing', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL include the rows from and including the current row minus `expr` rows preceding it\n' + 'until and including the current row plus `expr` rows following it in the window partition\n' + 'when `ROWS BETWEEN expr PRECEDING AND expr FOLLOWING` frame is specified.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT number,sum(number) OVER (ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING) FROM numbers(1,3)\n' + '```\n' + '\n' + '```bash\n' + '┌─number─┬─sum(number) OVER (ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING)─┐\n' + '│ 1 │ 3 │\n' + '│ 2 │ 6 │\n' + '│ 3 │ 5 │\n' + '└────────┴─────────────────────────────────────────────────────────────┘\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.2.13.6') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support `RANGE` frame to define rows within a value range.\n' + 'Offsets SHALL be differences in row values from the current row value.\n' + '\n' + '```sql\n' + 'RANGE frame_extent\n' + '```\n' + '\n' + 'See [frame_extent] definition.\n' + '\n' + ), + link=None, + level=4, + num='3.5.3.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_DataTypes_DateAndDateTime = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.DataTypes.DateAndDateTime', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support `RANGE` frame over columns with `Date` and `DateTime`\n' + 'data types.\n' + '\n' + ), + link=None, + level=4, + num='3.5.3.2') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_DataTypes_IntAndUInt = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.DataTypes.IntAndUInt', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support `RANGE` frame over columns with numerical data types\n' + 'such as `IntX` and `UIntX`.\n' + '\n' + ), + link=None, + level=4, + num='3.5.3.3') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_MultipleColumnsInOrderBy_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.MultipleColumnsInOrderBy.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error if the 
`RANGE` frame definition is used with `ORDER BY`\n' + 'that uses multiple columns.\n' + '\n' + ), + link=None, + level=4, + num='3.5.3.4') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_MissingFrameExtent_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.MissingFrameExtent.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error if the `RANGE` frame definition is missing [frame_extent].\n' + '\n' + ), + link=None, + level=4, + num='3.5.3.5') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_InvalidFrameExtent_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.InvalidFrameExtent.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error if the `RANGE` frame definition has invalid [frame_extent].\n' + '\n' + ), + link=None, + level=4, + num='3.5.3.6') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_CurrentRow_Peers = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.CurrentRow.Peers', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] for the `RANGE` frame SHALL define the `peers` of the `CURRENT ROW` to be all\n' + 'the rows that are inside the same order bucket.\n' + '\n' + ), + link=None, + level=4, + num='3.5.3.8') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Start_CurrentRow_WithoutOrderBy = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.CurrentRow.WithoutOrderBy', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL include all rows in the window partition\n' + 'when `RANGE CURRENT ROW` frame is specified without the `ORDER BY` clause.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT number,sum(number) OVER (RANGE CURRENT ROW) FROM numbers(1,3)\n' + '```\n' + '\n' + '```bash\n' + '┌─number─┬─sum(number) OVER (RANGE BETWEEN CURRENT ROW AND CURRENT ROW)─┐\n' + '│ 1 │ 6 │\n' + '│ 2 │ 6 │\n' + '│ 3 │ 6 │\n' + '└────────┴──────────────────────────────────────────────────────────────┘\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.9.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Start_CurrentRow_WithOrderBy = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.CurrentRow.WithOrderBy', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL include all rows that are [current row peers] in the window partition\n' + 'when `RANGE CURRENT ROW` frame is specified with the `ORDER BY` clause.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + "SELECT number,sum(number) OVER (ORDER BY number RANGE CURRENT ROW) FROM values('number Int8', (1),(1),(2),(3))\n" + '```\n' + '\n' + '```bash\n' + '┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN CURRENT ROW AND CURRENT ROW)─┐\n' + '│ 1 │ 2 │\n' + '│ 1 │ 2 │\n' + '│ 2 │ 2 │\n' + '│ 3 │ 3 │\n' + '└────────┴──────────────────────────────────────────────────────────────────────────────────┘\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.9.2') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Start_UnboundedFollowing_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.UnboundedFollowing.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return 
an error when `RANGE UNBOUNDED FOLLOWING` frame is specified with or without the `ORDER BY` clause\n' + 'as `UNBOUNDED FOLLOWING` SHALL NOT be supported as [frame_start].\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT number,sum(number) OVER (RANGE UNBOUNDED FOLLOWING) FROM numbers(1,3)\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.10.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Start_UnboundedPreceding_WithoutOrderBy = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.UnboundedPreceding.WithoutOrderBy', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL include all rows in the window partition\n' + 'when `RANGE UNBOUNDED PRECEDING` frame is specified without the `ORDER BY` clause.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT number,sum(number) OVER (RANGE UNBOUNDED PRECEDING) FROM numbers(1,3)\n' + '```\n' + '\n' + '```bash\n' + '┌─number─┬─sum(number) OVER (RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)─┐\n' + '│ 1 │ 6 │\n' + '│ 2 │ 6 │\n' + '│ 3 │ 6 │\n' + '└────────┴──────────────────────────────────────────────────────────────────────┘\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.11.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Start_UnboundedPreceding_WithOrderBy = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.UnboundedPreceding.WithOrderBy', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL include rows with values from and including the first row \n' + 'until and including all [current row peers] in the window partition\n' + 'when `RANGE UNBOUNDED PRECEDING` frame is specified with the `ORDER BY` clause.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT number,sum(number) OVER (ORDER BY number RANGE UNBOUNDED PRECEDING) FROM numbers(1,3)\n' + '```\n' + '\n' + '```bash\n' + '┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)─┐\n' + '│ 1 │ 1 │\n' + '│ 2 │ 3 │\n' + '│ 3 │ 6 │\n' + '└────────┴──────────────────────────────────────────────────────────────────────────────────────────┘\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.11.2') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Start_ExprPreceding_WithoutOrderBy_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.ExprPreceding.WithoutOrderBy.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error when `RANGE expr PRECEDING` frame is specified without the `ORDER BY` clause.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT number,sum(number) OVER (RANGE 1 PRECEDING) FROM numbers(1,3)\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.12.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Start_ExprPreceding_OrderByNonNumericalColumn_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.ExprPreceding.OrderByNonNumericalColumn.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error when `RANGE expr PRECEDING` is used with `ORDER BY` clause\n' + 'over a non-numerical column.\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.12.2') + +
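# Editorial note (assumption, not generated from the SRS source): the + # OrderByNonNumericalColumn error requirement above has no inline example; + # an illustrative query expected to fail is: + #   SELECT number, sum(number) OVER (ORDER BY toString(number) RANGE 1 PRECEDING) FROM numbers(1,3) + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Start_ExprPreceding_WithOrderBy = Requirement( + 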
name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.ExprPreceding.WithOrderBy', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL include rows with values from and including the current row value minus `expr`\n' + 'until and including the value for the current row \n' + 'when `RANGE expr PRECEDING` frame is specified with the `ORDER BY` clause.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT number,sum(number) OVER (ORDER BY number RANGE 1 PRECEDING) FROM numbers(1,3)\n' + '```\n' + '\n' + '```bash\n' + '┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN 1 PRECEDING AND CURRENT ROW)─┐\n' + '│ 1 │ 1 │\n' + '│ 2 │ 3 │\n' + '│ 3 │ 5 │\n' + '└────────┴──────────────────────────────────────────────────────────────────────────────────┘\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.12.3') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Start_ExprFollowing_WithoutOrderBy_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.ExprFollowing.WithoutOrderBy.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error when `RANGE expr FOLLOWING` frame is specified without the `ORDER BY` clause.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT number,sum(number) OVER (RANGE 1 FOLLOWING) FROM numbers(1,3)\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.13.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Start_ExprFollowing_WithOrderBy_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.ExprFollowing.WithOrderBy.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error when `RANGE expr FOLLOWING` frame is specified with the `ORDER BY` clause \n' + 'as the value for the frame start cannot be larger than the value for the frame end.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT number,sum(number) OVER (ORDER BY number RANGE 1 FOLLOWING) FROM numbers(1,3)\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.13.2') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_CurrentRow_CurrentRow = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.CurrentRow.CurrentRow', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL include all [current row peers] in the window partition \n' + 'when `RANGE BETWEEN CURRENT ROW AND CURRENT ROW` frame is specified with or without the `ORDER BY` clause.\n' + '\n' + 'For example,\n' + '\n' + '**Without `ORDER BY`** \n' + '\n' + '```sql\n' + 'SELECT number,sum(number) OVER (RANGE BETWEEN CURRENT ROW AND CURRENT ROW) FROM numbers(1,3)\n' + '```\n' + '\n' + '```bash\n' + '┌─number─┬─sum(number) OVER (RANGE BETWEEN CURRENT ROW AND CURRENT ROW)─┐\n' + '│ 1 │ 6 │\n' + '│ 2 │ 6 │\n' + '│ 3 │ 6 │\n' + '└────────┴──────────────────────────────────────────────────────────────┘\n' + '```\n' + '\n' + '**With `ORDER BY`** \n' + '\n' + '```sql\n' + 'SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN CURRENT ROW AND CURRENT ROW) FROM numbers(1,3)\n' + '```\n' + '\n' + '```bash\n' + '┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN CURRENT ROW AND CURRENT ROW)─┐\n' + '│ 1 │ 1 │\n' + '│ 2 │ 2 │\n' + '│ 3 │ 3 │\n' + '└────────┴──────────────────────────────────────────────────────────────────────────────────┘\n' + 
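# Editorial note (assumption): in the two examples above, without ORDER BY + # every row is a peer of every other row, so the whole partition is summed, + # while with ORDER BY each row's frame is only its [current row peers]. + 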
'```\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.14.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_CurrentRow_UnboundedPreceding_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.CurrentRow.UnboundedPreceding.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error when `RANGE BETWEEN CURRENT ROW AND UNBOUNDED PRECEDING` frame is specified\n' + 'with or without the `ORDER BY` clause.\n' + '\n' + 'For example,\n' + '\n' + '**Without `ORDER BY`**\n' + '\n' + '```sql\n' + 'SELECT number,sum(number) OVER (RANGE BETWEEN CURRENT ROW AND UNBOUNDED PRECEDING) FROM numbers(1,3)\n' + '```\n' + '\n' + '**With `ORDER BY`**\n' + '\n' + '```sql\n' + 'SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN CURRENT ROW AND UNBOUNDED PRECEDING) FROM numbers(1,3)\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.14.2') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_CurrentRow_UnboundedFollowing = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.CurrentRow.UnboundedFollowing', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL include all rows with values from and including [current row peers] until and including\n' + 'the last row in the window partition when `RANGE BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING` frame is specified\n' + 'with or without the `ORDER BY` clause.\n' + '\n' + 'For example,\n' + '\n' + '**Without `ORDER BY`**\n' + '\n' + '```sql\n' + 'SELECT number,sum(number) OVER (RANGE BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) FROM numbers(1,3)\n' + '```\n' + '\n' + '```bash\n' + '┌─number─┬─sum(number) OVER (RANGE BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING)─┐\n' + '│ 1 │ 6 │\n' + '│ 2 │ 6 │\n' + '│ 3 │ 6 │\n' + '└────────┴──────────────────────────────────────────────────────────────────────┘\n' + '```\n' + '\n' + '**With `ORDER BY`**\n' + '\n' + '```sql\n' + 'SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) FROM numbers(1,3)\n' + '```\n' + '\n' + '```bash\n' + '┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING)─┐\n' + '│ 1 │ 6 │\n' + '│ 2 │ 5 │\n' + '│ 3 │ 3 │\n' + '└────────┴──────────────────────────────────────────────────────────────────────────────────────────┘\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.14.3') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_CurrentRow_ExprFollowing_WithoutOrderBy_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.CurrentRow.ExprFollowing.WithoutOrderBy.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error when `RANGE BETWEEN CURRENT ROW AND expr FOLLOWING` frame is specified\n' + 'without the `ORDER BY` clause.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT number,sum(number) OVER (RANGE BETWEEN CURRENT ROW AND 1 FOLLOWING) FROM numbers(1,3)\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.14.4') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_CurrentRow_ExprFollowing_WithOrderBy = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.CurrentRow.ExprFollowing.WithOrderBy', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL include all 
rows with values from and including [current row peers] until and including\n' + 'current row value plus `expr` when `RANGE BETWEEN CURRENT ROW AND expr FOLLOWING` frame is specified\n' + 'with the `ORDER BY` clause.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + "SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN CURRENT ROW AND 1 FOLLOWING) FROM values('number Int8', (1),(1),(2),(3))\n" + '```\n' + '\n' + '```bash\n' + '┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN CURRENT ROW AND 1 FOLLOWING)─┐\n' + '│ 1 │ 4 │\n' + '│ 1 │ 4 │\n' + '│ 2 │ 5 │\n' + '│ 3 │ 3 │\n' + '└────────┴──────────────────────────────────────────────────────────────────────────────────┘\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.14.5') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_CurrentRow_ExprPreceding_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.CurrentRow.ExprPreceding.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error when `RANGE BETWEEN CURRENT ROW AND expr PRECEDING` frame is specified\n' + 'with or without the `ORDER BY` clause.\n' + '\n' + 'For example,\n' + '\n' + '**Without `ORDER BY`**\n' + '\n' + '```sql\n' + 'SELECT number,sum(number) OVER (RANGE BETWEEN CURRENT ROW AND 1 PRECEDING) FROM numbers(1,3)\n' + '```\n' + '\n' + '**With `ORDER BY`**\n' + '\n' + '```sql\n' + 'SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN CURRENT ROW AND 1 PRECEDING) FROM numbers(1,3)\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.14.6') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedPreceding_CurrentRow = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedPreceding.CurrentRow', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL include all rows with values from and including the first row until and including\n' + '[current row peers] in the window partition when `RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW` frame is specified\n' + 'with and without the `ORDER BY` clause.\n' + '\n' + 'For example,\n' + '\n' + '**Without `ORDER BY`**\n' + '\n' + '```sql\n' + "SELECT number,sum(number) OVER (RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) FROM values('number Int8', (1),(1),(2),(3))\n" + '```\n' + '\n' + '```bash\n' + '┌─number─┬─sum(number) OVER (RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)─┐\n' + '│ 1 │ 7 │\n' + '│ 1 │ 7 │\n' + '│ 2 │ 7 │\n' + '│ 3 │ 7 │\n' + '└────────┴──────────────────────────────────────────────────────────────────────┘\n' + '```\n' + '\n' + '**With `ORDER BY`**\n' + '\n' + '```sql\n' + "SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) FROM values('number Int8', (1),(1),(2),(3))\n" + '```\n' + '\n' + '```bash\n' + '┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)─┐\n' + '│ 1 │ 2 │\n' + '│ 1 │ 2 │\n' + '│ 2 │ 4 │\n' + '│ 3 │ 7 │\n' + '└────────┴──────────────────────────────────────────────────────────────────────────────────────────┘\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.15.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedPreceding_UnboundedPreceding_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedPreceding.UnboundedPreceding.Error', + version='1.0', + priority=None, + 
group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error when `RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED PRECEDING` frame is specified\n' + 'with and without the `ORDER BY` clause.\n' + '\n' + 'For example,\n' + '\n' + '**Without `ORDER BY`**\n' + '\n' + '```sql\n' + "SELECT number,sum(number) OVER (RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED PRECEDING) FROM values('number Int8', (1),(1),(2),(3))\n" + '```\n' + '\n' + '**With `ORDER BY`**\n' + '\n' + '```sql\n' + "SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED PRECEDING) FROM values('number Int8', (1),(1),(2),(3))\n" + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.15.2') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedPreceding_UnboundedFollowing = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedPreceding.UnboundedFollowing', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL include all rows in the window partition when `RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING` frame is specified\n' + 'with and without the `ORDER BY` clause.\n' + '\n' + 'For example,\n' + '\n' + '**Without `ORDER BY`**\n' + '\n' + '```sql\n' + "SELECT number,sum(number) OVER (RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) FROM values('number Int8', (1),(1),(2),(3))\n" + '```\n' + '\n' + '```bash\n' + '┌─number─┬─sum(number) OVER (RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)─┐\n' + '│ 1 │ 7 │\n' + '│ 1 │ 7 │\n' + '│ 2 │ 7 │\n' + '│ 3 │ 7 │\n' + '└────────┴──────────────────────────────────────────────────────────────────────────────┘\n' + '```\n' + '\n' + '**With `ORDER BY`**\n' + '\n' + '```sql\n' + "SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) FROM values('number Int8', (1),(1),(2),(3))\n" + '```\n' + '\n' + '```bash\n' + '┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)─┐\n' + '│ 1 │ 7 │\n' + '│ 1 │ 7 │\n' + '│ 2 │ 7 │\n' + '│ 3 │ 7 │\n' + '└────────┴──────────────────────────────────────────────────────────────────────────────────────────────────┘\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.15.3') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedPreceding_ExprPreceding_WithoutOrderBy_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedPreceding.ExprPreceding.WithoutOrderBy.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error when `RANGE BETWEEN UNBOUNDED PRECEDING AND expr PRECEDING` frame is specified\n' + 'without the `ORDER BY` clause.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + "SELECT number,sum(number) OVER (RANGE BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING) FROM values('number Int8', (1),(1),(2),(3))\n" + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.15.4') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedPreceding_ExprPreceding_WithOrderBy = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedPreceding.ExprPreceding.WithOrderBy', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL include all rows with values from and including the first row until and including\n' + 'the value of the 
current row minus `expr` in the window partition\n' + 'when `RANGE BETWEEN UNBOUNDED PRECEDING AND expr PRECEDING` frame is specified with the `ORDER BY` clause.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + "SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING) FROM values('number Int8', (1),(1),(2),(3))\n" + '```\n' + '\n' + '```bash\n' + '┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING)─┐\n' + '│ 1 │ 0 │\n' + '│ 1 │ 0 │\n' + '│ 2 │ 2 │\n' + '│ 3 │ 4 │\n' + '└────────┴──────────────────────────────────────────────────────────────────────────────────────────┘\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.15.5') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedPreceding_ExprFollowing_WithoutOrderBy_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedPreceding.ExprFollowing.WithoutOrderBy.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error when `RANGE BETWEEN UNBOUNDED PRECEDING AND expr FOLLOWING` frame is specified\n' + 'without the `ORDER BY` clause.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + "SELECT number,sum(number) OVER (RANGE BETWEEN UNBOUNDED PRECEDING AND 1 FOLLOWING) FROM values('number Int8', (1),(1),(2),(3))\n" + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.15.6') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedPreceding_ExprFollowing_WithOrderBy = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedPreceding.ExprFollowing.WithOrderBy', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL include all rows with values from and including the first row until and including\n' + 'the value of the current row plus `expr` in the window partition\n' + 'when `RANGE BETWEEN UNBOUNDED PRECEDING AND expr FOLLOWING` frame is specified with the `ORDER BY` clause.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + "SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN UNBOUNDED PRECEDING AND 1 FOLLOWING) FROM values('number Int8', (1),(1),(2),(3))\n" + '```\n' + '\n' + '```bash\n' + '┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN UNBOUNDED PRECEDING AND 1 FOLLOWING)─┐\n' + '│ 1 │ 4 │\n' + '│ 1 │ 4 │\n' + '│ 2 │ 7 │\n' + '│ 3 │ 7 │\n' + '└────────┴──────────────────────────────────────────────────────────────────────────────────────────┘\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.15.7') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedFollowing_CurrentRow_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedFollowing.CurrentRow.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error when `RANGE BETWEEN UNBOUNDED FOLLOWING AND CURRENT ROW` frame is specified \n' + 'with or without the `ORDER BY` clause.\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.16.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedFollowing_UnboundedFollowing_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedFollowing.UnboundedFollowing.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error when `RANGE 
BETWEEN UNBOUNDED FOLLOWING AND UNBOUNDED FOLLOWING` frame is specified \n' + 'with or without the `ORDER BY` clause.\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.16.2') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedFollowing_UnboundedPreceding_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedFollowing.UnboundedPreceding.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error when `RANGE BETWEEN UNBOUNDED FOLLOWING AND UNBOUNDED PRECEDING` frame is specified \n' + 'with or without the `ORDER BY` clause.\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.16.3') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedFollowing_ExprPreceding_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedFollowing.ExprPreceding.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error when `RANGE BETWEEN UNBOUNDED FOLLOWING AND expr PRECEDING` frame is specified \n' + 'with or without the `ORDER BY` clause.\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.16.4') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedFollowing_ExprFollowing_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedFollowing.ExprFollowing.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error when `RANGE BETWEEN UNBOUNDED FOLLOWING AND expr FOLLOWING` frame is specified \n' + 'with or without the `ORDER BY` clause.\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.16.5') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprPreceding_CurrentRow_WithOrderBy = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.CurrentRow.WithOrderBy', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL include all rows with values from and including current row minus `expr` \n' + 'until and including [current row peers] in the window partition\n' + 'when `RANGE BETWEEN expr PRECEDING AND CURRENT ROW` frame is specified with the `ORDER BY` clause.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + "SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 1 PRECEDING AND CURRENT ROW) FROM values('number Int8', (1),(1),(2),(3))\n" + '```\n' + '\n' + '```bash\n' + '┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN 1 PRECEDING AND CURRENT ROW)─┐\n' + '│ 1 │ 2 │\n' + '│ 1 │ 2 │\n' + '│ 2 │ 4 │\n' + '│ 3 │ 5 │\n' + '└────────┴──────────────────────────────────────────────────────────────────────────────────┘\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.17.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprPreceding_CurrentRow_WithoutOrderBy_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.CurrentRow.WithoutOrderBy.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error when `RANGE BETWEEN expr PRECEDING AND CURRENT ROW` frame is specified\n' + 'without the `ORDER BY` clause.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + "SELECT number,sum(number) OVER (RANGE BETWEEN 1 PRECEDING AND CURRENT ROW) FROM values('number Int8', (1),(1),(2),(3))\n" 
+ '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.17.2') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprPreceding_UnboundedPreceding_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.UnboundedPreceding.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error when `RANGE BETWEEN expr PRECEDING AND UNBOUNDED PRECEDING` frame is specified \n' + 'with or without the `ORDER BY` clause.\n' + '\n' + 'For example,\n' + '\n' + '**Without `ORDER BY`**\n' + '\n' + '```sql\n' + "SELECT number,sum(number) OVER (RANGE BETWEEN 1 PRECEDING AND UNBOUNDED PRECEDING) FROM values('number Int8', (1),(1),(2),(3))\n" + '```\n' + '\n' + '**With `ORDER BY`**\n' + '\n' + '```sql\n' + "SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 1 PRECEDING AND UNBOUNDED PRECEDING) FROM values('number Int8', (1),(1),(2),(3))\n" + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.17.3') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprPreceding_UnboundedFollowing_WithoutOrderBy_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.UnboundedFollowing.WithoutOrderBy.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error when `RANGE BETWEEN expr PRECEDING AND UNBOUNDED FOLLOWING` frame is specified \n' + 'without the `ORDER BY` clause.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT number,sum(number) OVER (RANGE BETWEEN 1 PRECEDING AND UNBOUNDED FOLLOWING) FROM numbers(1,3)\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.17.4') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprPreceding_UnboundedFollowing_WithOrderBy = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.UnboundedFollowing.WithOrderBy', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL include all rows with values from and including current row minus `expr` \n' + 'until and including the last row in the window partition when `RANGE BETWEEN expr PRECEDING AND UNBOUNDED FOLLOWING` frame\n' + 'is specified with the `ORDER BY` clause.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + "SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 1 PRECEDING AND UNBOUNDED FOLLOWING) FROM values('number Int8', (1),(1),(2),(3))\n" + '```\n' + '\n' + '```bash\n' + '┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN 1 PRECEDING AND UNBOUNDED FOLLOWING)─┐\n' + '│ 1 │ 7 │\n' + '│ 1 │ 7 │\n' + '│ 2 │ 7 │\n' + '│ 3 │ 5 │\n' + '└────────┴──────────────────────────────────────────────────────────────────────────────────────────┘\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.17.5') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprPreceding_ExprFollowing_WithoutOrderBy_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.ExprFollowing.WithoutOrderBy.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error when `RANGE BETWEEN expr PRECEDING AND expr FOLLOWING` frame is specified \n' + 'without the `ORDER BY` clause.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT number,sum(number) OVER (RANGE BETWEEN 1 PRECEDING AND 1 FOLLOWING) FROM numbers(1,3)\n' + '```\n' 
+ '\n' + ), + link=None, + level=5, + num='3.5.3.17.6') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprPreceding_ExprFollowing_WithOrderBy = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.ExprFollowing.WithOrderBy', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL include all rows with values from and including current row minus preceding `expr` \n' + 'until and including current row plus following `expr` in the window partition \n' + 'when `RANGE BETWEEN expr PRECEDING AND expr FOLLOWING` frame is specified with the `ORDER BY` clause.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + "SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 1 PRECEDING AND 1 FOLLOWING) FROM values('number Int8', (1),(1),(2),(3))\n" + '```\n' + '\n' + '```bash\n' + '┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN 1 PRECEDING AND 1 FOLLOWING)─┐\n' + '│ 1 │ 4 │\n' + '│ 1 │ 4 │\n' + '│ 2 │ 7 │\n' + '│ 3 │ 5 │\n' + '└────────┴──────────────────────────────────────────────────────────────────────────────────┘\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.17.7') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprPreceding_ExprPreceding_WithoutOrderBy_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.ExprPreceding.WithoutOrderBy.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error when `RANGE BETWEEN expr PRECEDING AND expr PRECEDING` frame is specified \n' + 'without the `ORDER BY` clause.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT number,sum(number) OVER (RANGE BETWEEN 1 PRECEDING AND 0 PRECEDING) FROM numbers(1,3)\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.17.8') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprPreceding_ExprPreceding_WithOrderBy_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.ExprPreceding.WithOrderBy.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error when the value of the [frame_end], specified as the \n' + 'current row minus `expr`, is less than the value of the [frame_start] in the window partition\n' + 'when `RANGE BETWEEN expr PRECEDING AND expr PRECEDING` frame is specified with the `ORDER BY` clause.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + "SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 1 PRECEDING AND 2 PRECEDING) FROM values('number Int8', (1),(1),(2),(3))\n" + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.17.9') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprPreceding_ExprPreceding_WithOrderBy = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.ExprPreceding.WithOrderBy', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL include all rows with values from and including current row minus `expr` for the [frame_start]\n' + 'until and including current row minus `expr` for the [frame_end] in the window partition \n' + 'when `RANGE BETWEEN expr PRECEDING AND expr PRECEDING` frame is specified with the `ORDER BY` clause\n' + 'if and only if the [frame_end] value is equal to or greater than the [frame_start] value.\n' + '\n' + 'For example,\n' +
'\n' + '**Greater Than**\n' + '\n' + '```sql\n' + "SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 1 PRECEDING AND 0 PRECEDING) FROM values('number Int8', (1),(1),(2),(3))\n" + '```\n' + '\n' + '```bash\n' + '┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN 1 PRECEDING AND 0 PRECEDING)─┐\n' + '│ 1 │ 2 │\n' + '│ 1 │ 2 │\n' + '│ 2 │ 4 │\n' + '│ 3 │ 5 │\n' + '└────────┴──────────────────────────────────────────────────────────────────────────────────┘\n' + '```\n' + '\n' + 'or **Equal**\n' + '\n' + '```sql\n' + "SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 1 PRECEDING AND 1 PRECEDING) FROM values('number Int8', (1),(1),(2),(3))\n" + '```\n' + '\n' + '```bash\n' + '┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN 1 PRECEDING AND 1 PRECEDING)─┐\n' + '│ 1 │ 0 │\n' + '│ 1 │ 0 │\n' + '│ 2 │ 2 │\n' + '│ 3 │ 2 │\n' + '└────────┴──────────────────────────────────────────────────────────────────────────────────┘\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.17.10') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_CurrentRow_WithoutOrderBy_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.CurrentRow.WithoutOrderBy.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error when `RANGE BETWEEN expr FOLLOWING AND CURRENT ROW` frame is specified \n' + 'without the `ORDER BY` clause and `expr` is greater than `0`.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + "SELECT number,sum(number) OVER (RANGE BETWEEN 1 FOLLOWING AND CURRENT ROW) FROM values('number Int8', (1),(1),(2),(3))\n" + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.18.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_CurrentRow_WithOrderBy_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.CurrentRow.WithOrderBy.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error when `RANGE BETWEEN expr FOLLOWING AND CURRENT ROW` frame is specified \n' + 'with the `ORDER BY` clause and `expr` is greater than `0`.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + "SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 1 FOLLOWING AND CURRENT ROW) FROM values('number Int8', (1),(1),(2),(3))\n" + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.18.2') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_CurrentRow_ZeroSpecialCase = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.CurrentRow.ZeroSpecialCase', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL include all [current row peers] in the window partition\n' + 'when `RANGE BETWEEN expr FOLLOWING AND CURRENT ROW` frame is specified \n' + 'with or without the `ORDER BY` clause if and only if the `expr` equals `0`.\n' + '\n' + 'For example,\n' + '\n' + '**Without `ORDER BY`**\n' + '\n' + '```sql\n' + "SELECT number,sum(number) OVER (RANGE BETWEEN 0 FOLLOWING AND CURRENT ROW) FROM values('number Int8', (1),(1),(2),(3))\n" + '```\n' + '\n' + '```bash\n' + '┌─number─┬─sum(number) OVER (RANGE BETWEEN 0 FOLLOWING AND CURRENT ROW)─┐\n' + '│ 1 │ 7 │\n' + '│ 1 │ 7 │\n' + '│ 2 │ 7 │\n' + '│ 3 │ 7 │\n' + '└────────┴──────────────────────────────────────────────────────────────┘\n' + '```\n' + '\n' + '**With `ORDER BY`**\n' + '\n' +
'```sql\n' + "SELECT number,sum(number) OVER (RANGE BETWEEN 0 FOLLOWING AND CURRENT ROW) FROM values('number Int8', (1),(1),(2),(3))\n" + '```\n' + '\n' + '```bash\n' + '┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN 0 FOLLOWING AND CURRENT ROW)─┐\n' + '│ 1 │ 2 │\n' + '│ 1 │ 2 │\n' + '│ 2 │ 2 │\n' + '│ 3 │ 3 │\n' + '└────────┴──────────────────────────────────────────────────────────────────────────────────┘\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.18.3') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_UnboundedFollowing_WithoutOrderBy_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.UnboundedFollowing.WithoutOrderBy.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error when `RANGE BETWEEN expr FOLLOWING AND UNBOUNDED FOLLOWING` frame is specified \n' + 'without the `ORDER BY` clause.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + "SELECT number,sum(number) OVER (RANGE BETWEEN 1 FOLLOWING AND UNBOUNDED FOLLOWING) FROM values('number Int8', (1),(1),(2),(3))\n" + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.18.4') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_UnboundedFollowing_WithOrderBy = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.UnboundedFollowing.WithOrderBy', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL include all rows with values from and including current row plus `expr`\n' + 'until and including the last row in the window partition \n' + 'when `RANGE BETWEEN expr FOLLOWING AND UNBOUNDED FOLLOWING` frame is specified with the `ORDER BY` clause.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + "SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 1 FOLLOWING AND UNBOUNDED FOLLOWING) FROM values('number Int8', (1),(1),(2),(3))\n" + '```\n' + '\n' + '```bash\n' + '┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN 1 FOLLOWING AND UNBOUNDED FOLLOWING)─┐\n' + '│ 1 │ 5 │\n' + '│ 1 │ 5 │\n' + '│ 2 │ 3 │\n' + '│ 3 │ 0 │\n' + '└────────┴──────────────────────────────────────────────────────────────────────────────────────────┘\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.18.5') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_UnboundedPreceding_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.UnboundedPreceding.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error when `RANGE BETWEEN expr FOLLOWING AND UNBOUNDED PRECEDING` frame is specified \n' + 'with or without the `ORDER BY` clause.\n' + '\n' + 'For example,\n' + '\n' + '**Without `ORDER BY`**\n' + '\n' + '```sql\n' + "SELECT number,sum(number) OVER (RANGE BETWEEN 1 FOLLOWING AND UNBOUNDED PRECEDING) FROM values('number Int8', (1),(1),(2),(3))\n" + '```\n' + '\n' + '**With `ORDER BY`**\n' + '\n' + '```sql\n' + "SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 1 FOLLOWING AND UNBOUNDED PRECEDING) FROM values('number Int8', (1),(1),(2),(3))\n" + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.18.6') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_ExprPreceding_WithoutOrderBy_Error = Requirement( + 
name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.ExprPreceding.WithoutOrderBy.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error when `RANGE BETWEEN expr FOLLOWING AND expr PRECEDING` frame is specified \n' + 'without the `ORDER BY` clause.\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.18.7') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_ExprPreceding_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.ExprPreceding.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error when `RANGE BETWEEN expr FOLLOWING AND expr PRECEDING` frame is specified \n' + 'with the `ORDER BY` clause if either `expr` is not `0`.\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.18.8') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_ExprPreceding_WithOrderBy_ZeroSpecialCase = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.ExprPreceding.WithOrderBy.ZeroSpecialCase', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL include all [current row peers] in the window partition\n' + 'when `RANGE BETWEEN expr FOLLOWING AND expr PRECEDING` frame is specified \n' + "with the `ORDER BY` clause if and only if both `expr`'s are `0`.\n" + '\n' + 'For example,\n' + '\n' + '```sql\n' + "SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 0 FOLLOWING AND 0 PRECEDING) FROM values('number Int8', (1),(1),(2),(3))\n" + '```\n' + '\n' + '```bash\n' + '┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN 0 FOLLOWING AND 0 PRECEDING)─┐\n' + '│ 1 │ 2 │\n' + '│ 1 │ 2 │\n' + '│ 2 │ 2 │\n' + '│ 3 │ 3 │\n' + '└────────┴──────────────────────────────────────────────────────────────────────────────────┘\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.18.9') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_ExprFollowing_WithoutOrderBy_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.ExprFollowing.WithoutOrderBy.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error when `RANGE BETWEEN expr FOLLOWING AND expr FOLLOWING` frame is specified \n' + 'without the `ORDER BY` clause.\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.18.10') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_ExprFollowing_WithOrderBy_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.ExprFollowing.WithOrderBy.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error when `RANGE BETWEEN expr FOLLOWING AND expr FOLLOWING` frame is specified \n' + 'with the `ORDER BY` clause but the `expr` for the [frame_end] is less than the `expr` for the [frame_start].\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + "SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 1 FOLLOWING AND 0 FOLLOWING) FROM values('number Int8', (1),(1),(2),(3))\n" + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.18.11') + +RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_ExprFollowing_WithOrderBy =
Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.ExprFollowing.WithOrderBy', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL include all rows with values from and including current row plus `expr` for the [frame_start]\n' + 'until and including current row plus `expr` for the [frame_end] in the window partition\n' + 'when `RANGE BETWEEN expr FOLLOWING AND expr FOLLOWING` frame is specified \n' + 'with the `ORDER BY` clause if and only if the `expr` for the [frame_end] is greater than or equal to the \n' + '`expr` for the [frame_start].\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + "SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 1 FOLLOWING AND 2 FOLLOWING) FROM values('number Int8', (1),(1),(2),(3))\n" + '```\n' + '\n' + '```bash\n' + '┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN 1 FOLLOWING AND 2 FOLLOWING)─┐\n' + '│ 1 │ 5 │\n' + '│ 1 │ 5 │\n' + '│ 2 │ 3 │\n' + '│ 3 │ 0 │\n' + '└────────┴──────────────────────────────────────────────────────────────────────────────────┘\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.5.3.18.12') + +RQ_SRS_019_ClickHouse_WindowFunctions_Frame_Extent = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.Frame.Extent', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support [frame_extent] defined as\n' + '\n' + '```\n' + 'frame_extent:\n' + ' {frame_start | frame_between}\n' + '```\n' + '\n' + ), + link=None, + level=4, + num='3.5.4.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_Frame_Start = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.Frame.Start', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support [frame_start] defined as\n' + '\n' + '```\n' + 'frame_start: {\n' + ' CURRENT ROW\n' + ' | UNBOUNDED PRECEDING\n' + ' | UNBOUNDED FOLLOWING\n' + ' | expr PRECEDING\n' + ' | expr FOLLOWING\n' + '}\n' + '```\n' + '\n' + ), + link=None, + level=4, + num='3.5.5.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_Frame_Between = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.Frame.Between', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support [frame_between] defined as\n' + '\n' + '```\n' + 'frame_between:\n' + ' BETWEEN frame_start AND frame_end\n' + '```\n' + '\n' + ), + link=None, + level=4, + num='3.5.6.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_Frame_End = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.Frame.End', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support [frame_end] defined as\n' + '\n' + '```\n' + 'frame_end: {\n' + ' CURRENT ROW\n' + ' | UNBOUNDED PRECEDING\n' + ' | UNBOUNDED FOLLOWING\n' + ' | expr PRECEDING\n' + ' | expr FOLLOWING\n' + '}\n' + '```\n' + '\n' + ), + link=None, + level=4, + num='3.5.7.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_CurrentRow = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.CurrentRow', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support `CURRENT ROW` as `frame_start` or `frame_end` value.\n' + '\n' + '* For `ROWS` it SHALL define the bound to be the current row\n' + '* For `RANGE` it SHALL define the bound to be the peers of the current row\n' + '\n' + ), + link=None, + level=4, + num='3.5.8.1') +
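+# A minimal sketch (illustrative only, not part of the requirement text) contrasting
+# the `CURRENT ROW` bound for `ROWS` and `RANGE` frames, using the same `values`
+# source as the examples above. With `RANGE`, the bound covers the peers of the
+# current row, so the two rows with `number = 1` aggregate together:
+#
+#   SELECT number,
+#       sum(number) OVER (ORDER BY number ROWS BETWEEN CURRENT ROW AND CURRENT ROW) AS rows_frame,
+#       sum(number) OVER (ORDER BY number RANGE BETWEEN CURRENT ROW AND CURRENT ROW) AS range_frame
+#   FROM values('number Int8', (1),(1),(2),(3))
+#
+#   -- rows_frame:  1, 1, 2, 3
+#   -- range_frame: 2, 2, 2, 3
+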
+RQ_SRS_019_ClickHouse_WindowFunctions_UnboundedPreceding = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.UnboundedPreceding', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support `UNBOUNDED PRECEDING` as `frame_start` or `frame_end` value\n' + 'and it SHALL define that the bound is the first partition row.\n' + '\n' + ), + link=None, + level=4, + num='3.5.9.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_UnboundedFollowing = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.UnboundedFollowing', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support `UNBOUNDED FOLLOWING` as `frame_start` or `frame_end` value\n' + 'and it SHALL define that the bound is the last partition row.\n' + '\n' + ), + link=None, + level=4, + num='3.5.10.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_ExprPreceding = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.ExprPreceding', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support `expr PRECEDING` as `frame_start` or `frame_end` value\n' + '\n' + '* For `ROWS` it SHALL define the bound to be the `expr` rows before the current row\n' + '* For `RANGE` it SHALL define the bound to be the rows with values equal to the current row value minus the `expr`.\n' + '\n' + ), + link=None, + level=4, + num='3.5.11.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_ExprPreceding_ExprValue = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.ExprPreceding.ExprValue', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support only a non-negative numeric literal as the value for the `expr` in the `expr PRECEDING` frame boundary.\n' + '\n' + 'For example,\n' + '\n' + '```\n' + '5 PRECEDING\n' + '```\n' + '\n' + ), + link=None, + level=4, + num='3.5.11.2') + +RQ_SRS_019_ClickHouse_WindowFunctions_ExprFollowing = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.ExprFollowing', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support `expr FOLLOWING` as `frame_start` or `frame_end` value\n' + '\n' + '* For `ROWS` it SHALL define the bound to be the `expr` rows after the current row\n' + '* For `RANGE` it SHALL define the bound to be the rows with values equal to the current row value plus `expr`\n' + '\n' + ), + link=None, + level=4, + num='3.5.12.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_ExprFollowing_ExprValue = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.ExprFollowing.ExprValue', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support only a non-negative numeric literal as the value for the `expr` in the `expr FOLLOWING` frame boundary.\n' + '\n' + 'For example,\n' + '\n' + '```\n' + '5 FOLLOWING\n' + '```\n' + '\n' + ), + link=None, + level=4, + num='3.5.12.2') + +RQ_SRS_019_ClickHouse_WindowFunctions_WindowClause = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.WindowClause', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support the `WINDOW` clause to define one or more windows.\n' + '\n' + '```sql\n' + 'WINDOW window_name AS (window_spec)\n' + ' [, window_name AS (window_spec)] ...\n' + '```\n' + '\n' + 'The `window_name` SHALL be the name of a window defined by a `WINDOW` clause.\n'
+ '\n' + 'The [window_spec] SHALL specify the window.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT ... FROM table WINDOW w AS (partition by id)\n' + '```\n' + '\n' + ), + link=None, + level=3, + num='3.6.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_WindowClause_MultipleWindows = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.WindowClause.MultipleWindows', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support a `WINDOW` clause that defines multiple windows.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT ... FROM table WINDOW w1 AS (partition by id), w2 AS (partition by customer)\n' + '```\n' + '\n' + ), + link=None, + level=3, + num='3.6.2') + +RQ_SRS_019_ClickHouse_WindowFunctions_WindowClause_MissingWindowSpec_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.WindowClause.MissingWindowSpec.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error if the `WINDOW` clause definition is missing the [window_spec].\n' + '\n' + ), + link=None, + level=3, + num='3.6.3') + +RQ_SRS_019_ClickHouse_WindowFunctions_OverClause = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.OverClause', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support the `OVER` clause to use either a named window defined using the `WINDOW` clause\n' + 'or an ad hoc window defined in place.\n' + '\n' + '```\n' + 'OVER ()|(window_spec)|named_window\n' + '```\n' + '\n' + ), + link=None, + level=3, + num='3.7.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_OverClause_EmptyOverClause = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.OverClause.EmptyOverClause', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL treat the entire set of query rows as a single partition when the `OVER` clause is empty.\n' + '\n' + 'For example,\n' + '\n' + '```\n' + 'SELECT sum(x) OVER () FROM table\n' + '```\n' + '\n' + ), + link=None, + level=4, + num='3.7.2.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_OverClause_AdHocWindow = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.OverClause.AdHocWindow', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support ad hoc window specification in the `OVER` clause.\n' + '\n' + '```\n' + 'OVER [window_spec]\n' + '```\n' + '\n' + 'See [window_spec] definition.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + '(count(*) OVER (partition by id order by time desc))\n' + '```\n' + '\n' + ), + link=None, + level=4, + num='3.7.3.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_OverClause_AdHocWindow_MissingWindowSpec_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.OverClause.AdHocWindow.MissingWindowSpec.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error if the `OVER` clause is missing the [window_spec].\n' + '\n' + ), + link=None, + level=4, + num='3.7.3.2') + +RQ_SRS_019_ClickHouse_WindowFunctions_OverClause_NamedWindow = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.OverClause.NamedWindow', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support using a previously defined named window in the `OVER` clause.\n' + '\n' + '```\n' + 'OVER [window_name]\n' + '```\n' +
'\n' + 'See [window_name] definition.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT count(*) OVER w FROM table WINDOW w AS (partition by id)\n' + '```\n' + '\n' + ), + link=None, + level=4, + num='3.7.4.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_OverClause_NamedWindow_InvalidName_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.OverClause.NamedWindow.InvalidName.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error if the `OVER` clause references an invalid window name.\n' + '\n' + ), + link=None, + level=4, + num='3.7.4.2') + +RQ_SRS_019_ClickHouse_WindowFunctions_OverClause_NamedWindow_MultipleWindows_Error = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.OverClause.NamedWindow.MultipleWindows.Error', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error if the `OVER` clause references more than one window name.\n' + '\n' + ), + link=None, + level=4, + num='3.7.4.3') + +RQ_SRS_019_ClickHouse_WindowFunctions_FirstValue = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.FirstValue', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support `first_value` window function that\n' + 'SHALL be a synonym for the `any(value)` function\n' + 'that SHALL return the value of `expr` from the first row in the window frame.\n' + '\n' + '```\n' + 'first_value(expr) OVER ...\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.8.1.1.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_LastValue = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.LastValue', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support `last_value` window function that\n' + 'SHALL be a synonym for the `anyLast(value)` function\n' + 'that SHALL return the value of `expr` from the last row in the window frame.\n' + '\n' + '```\n' + 'last_value(expr) OVER ...\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.8.1.2.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_Lag_Workaround = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.Lag.Workaround', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support a workaround for the `lag(value, offset)` function as\n' + '\n' + '```\n' + 'any(value) OVER (.... ROWS BETWEEN N PRECEDING AND N PRECEDING)\n' + '```\n' + '\n' + 'The function SHALL return the value from the row that lags (precedes) the current row\n' + 'by `N` rows within its partition, where `N` is the `offset` used in the frame boundaries.\n' + '\n' + 'If there is no such row, the return value SHALL be the default.\n' + '\n' + 'For example, if `N` is 3, the return value is the default for the first three rows.\n' + 'If `N` or the default are missing, the defaults are 1 and NULL, respectively.\n' + '\n' + '`N` SHALL be a literal non-negative integer. If `N` is 0, the value SHALL be\n' + 'returned for the current row.\n' + '\n' + ), + link=None, + level=5, + num='3.8.1.3.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_Lead_Workaround = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.Lead.Workaround', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support a workaround for the `lead(value, offset)` function as\n' + '\n' + '```\n' + 'any(value) OVER (....
ROWS BETWEEN N FOLLOWING AND N FOLLOWING)\n' + '```\n' + '\n' + 'The function SHALL return the value from the row that leads (follows) the current row by\n' + '`N` rows within its partition, where `N` is the `offset` used in the frame boundaries.\n' + '\n' + 'If there is no such row, the return value SHALL be the default.\n' + '\n' + 'For example, if `N` is 3, the return value is the default for the last three rows.\n' + 'If `N` or the default are missing, the defaults are 1 and NULL, respectively.\n' + '\n' + '`N` SHALL be a literal non-negative integer. If `N` is 0, the value SHALL be\n' + 'returned for the current row.\n' + '\n' + ), + link=None, + level=5, + num='3.8.1.4.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_Rank = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.Rank', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support `rank` window function that SHALL\n' + 'return the rank of the current row within its partition with gaps.\n' + '\n' + 'Peers SHALL be considered ties and receive the same rank.\n' + 'The function SHALL not assign consecutive ranks to peer groups if groups of size greater than one exist;\n' + 'the result is noncontiguous rank numbers.\n' + '\n' + 'If the function is used without `ORDER BY` to sort partition rows into the desired order,\n' + 'then all rows SHALL be peers.\n' + '\n' + '```\n' + 'rank() OVER ...\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.8.1.5.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_DenseRank = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.DenseRank', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support `dense_rank` function over a window that SHALL\n' + 'return the rank of the current row within its partition without gaps.\n' + '\n' + 'Peers SHALL be considered ties and receive the same rank.\n' + 'The function SHALL assign consecutive ranks to peer groups and\n' + 'the result is that groups of size greater than one do not produce noncontiguous rank numbers.\n' + '\n' + 'If the function is used without `ORDER BY` to sort partition rows into the desired order,\n' + 'then all rows SHALL be peers.\n' + '\n' + '```\n' + 'dense_rank() OVER ...\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.8.1.6.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_RowNumber = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.RowNumber', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support `row_number` function over a window that SHALL\n' + 'return the number of the current row within its partition.\n' + '\n' + 'Row numbers SHALL range from 1 to the number of partition rows.\n' + '\n' + 'The `ORDER BY` clause affects the order in which rows are numbered.\n' + 'Without `ORDER BY`, row numbering MAY be nondeterministic.\n' + '\n' + '```\n' + 'row_number() OVER ...\n' + '```\n' + '\n' + ), + link=None, + level=5, + num='3.8.1.7.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_AggregateFunctions = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.AggregateFunctions', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support using aggregate functions over windows.\n' + '\n' + '* [count](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/count/)\n' + '* [min](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/min/)\n' + '*
[max](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/max/)\n' + '* [sum](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/sum/)\n' + '* [avg](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/avg/)\n' + '* [any](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/any/)\n' + '* [stddevPop](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/stddevpop/)\n' + '* [stddevSamp](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/stddevsamp/)\n' + '* [varPop(x)](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/varpop/)\n' + '* [varSamp](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/varsamp/)\n' + '* [covarPop](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/covarpop/)\n' + '* [covarSamp](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/covarsamp/)\n' + '* [anyHeavy](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/anyheavy/)\n' + '* [anyLast](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/anylast/)\n' + '* [argMin](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/argmin/)\n' + '* [argMax](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/argmax/)\n' + '* [avgWeighted](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/avgweighted/)\n' + '* [corr](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/corr/)\n' + '* [topK](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/topk/)\n' + '* [topKWeighted](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/topkweighted/)\n' + '* [groupArray](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/grouparray/)\n' + '* [groupUniqArray](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/groupuniqarray/)\n' + '* [groupArrayInsertAt](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/grouparrayinsertat/)\n' + '* [groupArrayMovingSum](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/grouparraymovingsum/)\n' + '* [groupArrayMovingAvg](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/grouparraymovingavg/)\n' + '* [groupArraySample](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/grouparraysample/)\n' + '* [groupBitAnd](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/groupbitand/)\n' + '* [groupBitOr](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/groupbitor/)\n' + '* [groupBitXor](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/groupbitxor/)\n' + '* [groupBitmap](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/groupbitmap/)\n' + '* [groupBitmapAnd](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/groupbitmapand/)\n' + '* [groupBitmapOr](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/groupbitmapor/)\n' + '* [groupBitmapXor](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/groupbitmapxor/)\n' + '* [sumWithOverflow](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/sumwithoverflow/)\n' + '* 
[deltaSum](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/deltasum/)\n' + '* [sumMap](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/summap/)\n' + '* [minMap](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/minmap/)\n' + '* [maxMap](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/maxmap/)\n' + '* [initializeAggregation](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/initializeAggregation/)\n' + '* [skewPop](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/skewpop/)\n' + '* [skewSamp](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/skewsamp/)\n' + '* [kurtPop](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/kurtpop/)\n' + '* [kurtSamp](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/kurtsamp/)\n' + '* [uniq](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/uniq/)\n' + '* [uniqExact](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/uniqexact/)\n' + '* [uniqCombined](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/uniqcombined/)\n' + '* [uniqCombined64](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/uniqcombined64/)\n' + '* [uniqHLL12](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/uniqhll12/)\n' + '* [quantile](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/quantile/)\n' + '* [quantiles](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/quantiles/)\n' + '* [quantileExact](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/quantileexact/)\n' + '* [quantileExactWeighted](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/quantileexactweighted/)\n' + '* [quantileTiming](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/quantiletiming/)\n' + '* [quantileTimingWeighted](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/quantiletimingweighted/)\n' + '* [quantileDeterministic](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/quantiledeterministic/)\n' + '* [quantileTDigest](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/quantiletdigest/)\n' + '* [quantileTDigestWeighted](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/quantiletdigestweighted/)\n' + '* [simpleLinearRegression](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/simplelinearregression/)\n' + '* [stochasticLinearRegression](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/stochasticlinearregression/)\n' + '* [stochasticLogisticRegression](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/stochasticlogisticregression/)\n' + '* [categoricalInformationValue](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/stochasticlogisticregression/)\n' + '* [studentTTest](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/studentttest/)\n' + '* [welchTTest](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/welchttest/)\n' + '* [mannWhitneyUTest](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/mannwhitneyutest/)\n' + '* 
[median](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/median/)\n' + '* [rankCorr](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/rankCorr/)\n' + '\n' + ), + link=None, + level=4, + num='3.8.2.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_AggregateFunctions_Combinators = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.AggregateFunctions.Combinators', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support aggregate functions with combinator suffixes over windows.\n' + '\n' + '* [-If](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/combinators/#agg-functions-combinator-if)\n' + '* [-Array](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/combinators/#agg-functions-combinator-array)\n' + '* [-SimpleState](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/combinators/#agg-functions-combinator-simplestate)\n' + '* [-State](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/combinators/#agg-functions-combinator-state)\n' + '* [-Merge](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/combinators/#aggregate_functions_combinators-merge)\n' + '* [-MergeState](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/combinators/#aggregate_functions_combinators-mergestate)\n' + '* [-ForEach](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/combinators/#agg-functions-combinator-foreach)\n' + '* [-Distinct](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/combinators/#agg-functions-combinator-distinct)\n' + '* [-OrDefault](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/combinators/#agg-functions-combinator-ordefault)\n' + '* [-OrNull](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/combinators/#agg-functions-combinator-ornull)\n' + '* [-Resample](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/combinators/#agg-functions-combinator-resample)\n' + '\n' + ), + link=None, + level=5, + num='3.8.2.2.1') + +RQ_SRS_019_ClickHouse_WindowFunctions_AggregateFunctions_Parametric = Requirement( + name='RQ.SRS-019.ClickHouse.WindowFunctions.AggregateFunctions.Parametric', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support parametric aggregate functions over windows.\n' + '\n' + '* [histogram](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/parametric-functions/#histogram)\n' + '* [sequenceMatch(pattern)(timestamp, cond1, cond2, ...)](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/parametric-functions/#function-sequencematch)\n' + '* [sequenceCount(pattern)(time, cond1, cond2, ...)](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/parametric-functions/#function-sequencecount)\n' + '* [windowFunnel](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/parametric-functions/#windowfunnel)\n' + '* [retention](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/parametric-functions/#retention)\n' + '* [uniqUpTo(N)(x)](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/parametric-functions/#uniquptonx)\n' + '* [sumMapFiltered(keys_to_keep)(keys, values)](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/parametric-functions/#summapfilteredkeys-to-keepkeys-values)\n' + '\n' + ), + link=None, + level=5, + num='3.8.2.3.1') +
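+# A minimal sketch (illustrative only, not part of the requirement text) of the two
+# requirements above: an aggregate function with a combinator suffix (`sumIf`) and a
+# parametric aggregate (`uniqUpTo(N)(x)`) applied over the same named window, using
+# the `values` source from the examples in this specification:
+#
+#   SELECT number,
+#       sumIf(number, number > 1) OVER w,
+#       uniqUpTo(2)(number) OVER w
+#   FROM values('number Int8', (1),(1),(2),(3))
+#   WINDOW w AS (ORDER BY number ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
+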
+SRS019_ClickHouse_Window_Functions = Specification( + name='SRS019 ClickHouse Window Functions', + description=None, + author=None, + date=None, + status=None, + approved_by=None, + approved_date=None, + approved_version=None, + version=None, + group=None, + type=None, + link=None, + uid=None, + parent=None, + children=None, + headings=( + Heading(name='Revision History', level=1, num='1'), + Heading(name='Introduction', level=1, num='2'), + Heading(name='Requirements', level=1, num='3'), + Heading(name='General', level=2, num='3.1'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions', level=3, num='3.1.1'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.NonDistributedTables', level=3, num='3.1.2'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.DistributedTables', level=3, num='3.1.3'), + Heading(name='Window Specification', level=2, num='3.2'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.WindowSpec', level=3, num='3.2.1'), + Heading(name='PARTITION Clause', level=2, num='3.3'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.PartitionClause', level=3, num='3.3.1'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.PartitionClause.MultipleExpr', level=3, num='3.3.2'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.PartitionClause.MissingExpr.Error', level=3, num='3.3.3'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.PartitionClause.InvalidExpr.Error', level=3, num='3.3.4'), + Heading(name='ORDER Clause', level=2, num='3.4'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.OrderClause', level=3, num='3.4.1'), + Heading(name='order_clause', level=4, num='3.4.1.1'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.OrderClause.MultipleExprs', level=3, num='3.4.2'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.OrderClause.MissingExpr.Error', level=3, num='3.4.3'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.OrderClause.InvalidExpr.Error', level=3, num='3.4.4'), + Heading(name='FRAME Clause', level=2, num='3.5'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.FrameClause', level=3, num='3.5.1'), + Heading(name='ROWS', level=3, num='3.5.2'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame', level=4, num='3.5.2.1'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.MissingFrameExtent.Error', level=4, num='3.5.2.2'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.InvalidFrameExtent.Error', level=4, num='3.5.2.3'), + Heading(name='ROWS CURRENT ROW', level=4, num='3.5.2.4'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Start.CurrentRow', level=5, num='3.5.2.4.1'), + Heading(name='ROWS UNBOUNDED PRECEDING', level=4, num='3.5.2.5'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Start.UnboundedPreceding', level=5, num='3.5.2.5.1'), + Heading(name='ROWS `expr` PRECEDING', level=4, num='3.5.2.6'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Start.ExprPreceding', level=5, num='3.5.2.6.1'), + Heading(name='ROWS UNBOUNDED FOLLOWING', level=4, num='3.5.2.7'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Start.UnboundedFollowing.Error', level=5, num='3.5.2.7.1'), + Heading(name='ROWS `expr` FOLLOWING', level=4, num='3.5.2.8'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Start.ExprFollowing.Error', level=5, num='3.5.2.8.1'), + Heading(name='ROWS BETWEEN CURRENT ROW', level=4, num='3.5.2.9'), + 
Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.CurrentRow.CurrentRow', level=5, num='3.5.2.9.1'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.CurrentRow.UnboundedPreceding.Error', level=5, num='3.5.2.9.2'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.CurrentRow.ExprPreceding.Error', level=5, num='3.5.2.9.3'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.CurrentRow.UnboundedFollowing', level=5, num='3.5.2.9.4'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.CurrentRow.ExprFollowing', level=5, num='3.5.2.9.5'), + Heading(name='ROWS BETWEEN UNBOUNDED PRECEDING', level=4, num='3.5.2.10'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.UnboundedPreceding.CurrentRow', level=5, num='3.5.2.10.1'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.UnboundedPreceding.UnboundedPreceding.Error', level=5, num='3.5.2.10.2'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.UnboundedPreceding.ExprPreceding', level=5, num='3.5.2.10.3'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.UnboundedPreceding.UnboundedFollowing', level=5, num='3.5.2.10.4'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.UnboundedPreceding.ExprFollowing', level=5, num='3.5.2.10.5'), + Heading(name='ROWS BETWEEN UNBOUNDED FOLLOWING', level=4, num='3.5.2.11'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.UnboundedFollowing.Error', level=5, num='3.5.2.11.1'), + Heading(name='ROWS BETWEEN `expr` FOLLOWING', level=4, num='3.5.2.12'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprFollowing.Error', level=5, num='3.5.2.12.1'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprFollowing.ExprFollowing.Error', level=5, num='3.5.2.12.2'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprFollowing.UnboundedFollowing', level=5, num='3.5.2.12.3'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprFollowing.ExprFollowing', level=5, num='3.5.2.12.4'), + Heading(name='ROWS BETWEEN `expr` PRECEDING', level=4, num='3.5.2.13'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprPreceding.CurrentRow', level=5, num='3.5.2.13.1'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprPreceding.UnboundedPreceding.Error', level=5, num='3.5.2.13.2'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprPreceding.UnboundedFollowing', level=5, num='3.5.2.13.3'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprPreceding.ExprPreceding.Error', level=5, num='3.5.2.13.4'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprPreceding.ExprPreceding', level=5, num='3.5.2.13.5'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprPreceding.ExprFollowing', level=5, num='3.5.2.13.6'), + Heading(name='RANGE', level=3, num='3.5.3'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame', level=4, num='3.5.3.1'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.DataTypes.DateAndDateTime', level=4, num='3.5.3.2'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.DataTypes.IntAndUInt', level=4, num='3.5.3.3'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.MultipleColumnsInOrderBy.Error', 
level=4, num='3.5.3.4'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.MissingFrameExtent.Error', level=4, num='3.5.3.5'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.InvalidFrameExtent.Error', level=4, num='3.5.3.6'), + Heading(name='`CURRENT ROW` Peers', level=4, num='3.5.3.7'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.CurrentRow.Peers', level=4, num='3.5.3.8'), + Heading(name='RANGE CURRENT ROW', level=4, num='3.5.3.9'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.CurrentRow.WithoutOrderBy', level=5, num='3.5.3.9.1'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.CurrentRow.WithOrderBy', level=5, num='3.5.3.9.2'), + Heading(name='RANGE UNBOUNDED FOLLOWING', level=4, num='3.5.3.10'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.UnboundedFollowing.Error', level=5, num='3.5.3.10.1'), + Heading(name='RANGE UNBOUNDED PRECEDING', level=4, num='3.5.3.11'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.UnboundedPreceding.WithoutOrderBy', level=5, num='3.5.3.11.1'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.UnboundedPreceding.WithOrderBy', level=5, num='3.5.3.11.2'), + Heading(name='RANGE `expr` PRECEDING', level=4, num='3.5.3.12'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.ExprPreceding.WithoutOrderBy.Error', level=5, num='3.5.3.12.1'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.ExprPreceding.OrderByNonNumericalColumn.Error', level=5, num='3.5.3.12.2'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.ExprPreceding.WithOrderBy', level=5, num='3.5.3.12.3'), + Heading(name='RANGE `expr` FOLLOWING', level=4, num='3.5.3.13'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.ExprFollowing.WithoutOrderBy.Error', level=5, num='3.5.3.13.1'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.ExprFollowing.WithOrderBy.Error', level=5, num='3.5.3.13.2'), + Heading(name='RANGE BETWEEN CURRENT ROW', level=4, num='3.5.3.14'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.CurrentRow.CurrentRow', level=5, num='3.5.3.14.1'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.CurrentRow.UnboundedPreceding.Error', level=5, num='3.5.3.14.2'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.CurrentRow.UnboundedFollowing', level=5, num='3.5.3.14.3'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.CurrentRow.ExprFollowing.WithoutOrderBy.Error', level=5, num='3.5.3.14.4'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.CurrentRow.ExprFollowing.WithOrderBy', level=5, num='3.5.3.14.5'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.CurrentRow.ExprPreceding.Error', level=5, num='3.5.3.14.6'), + Heading(name='RANGE BETWEEN UNBOUNDED PRECEDING', level=4, num='3.5.3.15'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedPreceding.CurrentRow', level=5, num='3.5.3.15.1'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedPreceding.UnboundedPreceding.Error', level=5, num='3.5.3.15.2'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedPreceding.UnboundedFollowing', level=5, num='3.5.3.15.3'), + 
Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedPreceding.ExprPreceding.WithoutOrderBy.Error', level=5, num='3.5.3.15.4'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedPreceding.ExprPreceding.WithOrderBy', level=5, num='3.5.3.15.5'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedPreceding.ExprFollowing.WithoutOrderBy.Error', level=5, num='3.5.3.15.6'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedPreceding.ExprFollowing.WithOrderBy', level=5, num='3.5.3.15.7'), + Heading(name='RANGE BETWEEN UNBOUNDED FOLLOWING', level=4, num='3.5.3.16'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedFollowing.CurrentRow.Error', level=5, num='3.5.3.16.1'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedFollowing.UnboundedFollowing.Error', level=5, num='3.5.3.16.2'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedFollowing.UnboundedPreceding.Error', level=5, num='3.5.3.16.3'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedFollowing.ExprPreceding.Error', level=5, num='3.5.3.16.4'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedFollowing.ExprFollowing.Error', level=5, num='3.5.3.16.5'), + Heading(name='RANGE BETWEEN expr PRECEDING', level=4, num='3.5.3.17'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.CurrentRow.WithOrderBy', level=5, num='3.5.3.17.1'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.CurrentRow.WithoutOrderBy.Error', level=5, num='3.5.3.17.2'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.UnboundedPreceding.Error', level=5, num='3.5.3.17.3'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.UnboundedFollowing.WithoutOrderBy.Error', level=5, num='3.5.3.17.4'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.UnboundedFollowing.WithOrderBy', level=5, num='3.5.3.17.5'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.ExprFollowing.WithoutOrderBy.Error', level=5, num='3.5.3.17.6'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.ExprFollowing.WithOrderBy', level=5, num='3.5.3.17.7'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.ExprPreceding.WithoutOrderBy.Error', level=5, num='3.5.3.17.8'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.ExprPreceding.WithOrderBy.Error', level=5, num='3.5.3.17.9'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.ExprPreceding.WithOrderBy', level=5, num='3.5.3.17.10'), + Heading(name='RANGE BETWEEN expr FOLLOWING', level=4, num='3.5.3.18'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.CurrentRow.WithoutOrderBy.Error', level=5, num='3.5.3.18.1'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.CurrentRow.WithOrderBy.Error', level=5, num='3.5.3.18.2'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.CurrentRow.ZeroSpecialCase', level=5, num='3.5.3.18.3'), + 
Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.UnboundedFollowing.WithoutOrderBy.Error', level=5, num='3.5.3.18.4'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.UnboundedFollowing.WithOrderBy', level=5, num='3.5.3.18.5'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.UnboundedPreceding.Error', level=5, num='3.5.3.18.6'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.ExprPreceding.WithoutOrderBy.Error', level=5, num='3.5.3.18.7'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.ExprPreceding.Error', level=5, num='3.5.3.18.8'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.ExprPreceding.WithOrderBy.ZeroSpecialCase', level=5, num='3.5.3.18.9'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.ExprFollowing.WithoutOrderBy.Error', level=5, num='3.5.3.18.10'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.ExprFollowing.WithOrderBy.Error', level=5, num='3.5.3.18.11'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.ExprFollowing.WithOrderBy', level=5, num='3.5.3.18.12'), + Heading(name='Frame Extent', level=3, num='3.5.4'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.Frame.Extent', level=4, num='3.5.4.1'), + Heading(name='Frame Start', level=3, num='3.5.5'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.Frame.Start', level=4, num='3.5.5.1'), + Heading(name='Frame Between', level=3, num='3.5.6'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.Frame.Between', level=4, num='3.5.6.1'), + Heading(name='Frame End', level=3, num='3.5.7'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.Frame.End', level=4, num='3.5.7.1'), + Heading(name='`CURRENT ROW`', level=3, num='3.5.8'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.CurrentRow', level=4, num='3.5.8.1'), + Heading(name='`UNBOUNDED PRECEDING`', level=3, num='3.5.9'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.UnboundedPreceding', level=4, num='3.5.9.1'), + Heading(name='`UNBOUNDED FOLLOWING`', level=3, num='3.5.10'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.UnboundedFollowing', level=4, num='3.5.10.1'), + Heading(name='`expr PRECEDING`', level=3, num='3.5.11'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.ExprPreceding', level=4, num='3.5.11.1'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.ExprPreceding.ExprValue', level=4, num='3.5.11.2'), + Heading(name='`expr FOLLOWING`', level=3, num='3.5.12'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.ExprFollowing', level=4, num='3.5.12.1'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.ExprFollowing.ExprValue', level=4, num='3.5.12.2'), + Heading(name='WINDOW Clause', level=2, num='3.6'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.WindowClause', level=3, num='3.6.1'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.WindowClause.MultipleWindows', level=3, num='3.6.2'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.WindowClause.MissingWindowSpec.Error', level=3, num='3.6.3'), + Heading(name='`OVER` Clause', level=2, num='3.7'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.OverClause', level=3, num='3.7.1'), + Heading(name='Empty Clause', level=3, num='3.7.2'), + 
Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.OverClause.EmptyOverClause', level=4, num='3.7.2.1'), + Heading(name='Ad-Hoc Window', level=3, num='3.7.3'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.OverClause.AdHocWindow', level=4, num='3.7.3.1'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.OverClause.AdHocWindow.MissingWindowSpec.Error', level=4, num='3.7.3.2'), + Heading(name='Named Window', level=3, num='3.7.4'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.OverClause.NamedWindow', level=4, num='3.7.4.1'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.OverClause.NamedWindow.InvalidName.Error', level=4, num='3.7.4.2'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.OverClause.NamedWindow.MultipleWindows.Error', level=4, num='3.7.4.3'), + Heading(name='Window Functions', level=2, num='3.8'), + Heading(name='Nonaggregate Functions', level=3, num='3.8.1'), + Heading(name='The `first_value(expr)` Function', level=4, num='3.8.1.1'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.FirstValue', level=5, num='3.8.1.1.1'), + Heading(name='The `last_value(expr)` Function', level=4, num='3.8.1.2'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.LastValue', level=5, num='3.8.1.2.1'), + Heading(name='The `lag(value, offset)` Function Workaround', level=4, num='3.8.1.3'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.Lag.Workaround', level=5, num='3.8.1.3.1'), + Heading(name='The `lead(value, offset)` Function Workaround', level=4, num='3.8.1.4'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.Lead.Workaround', level=5, num='3.8.1.4.1'), + Heading(name='The `rank()` Function', level=4, num='3.8.1.5'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.Rank', level=5, num='3.8.1.5.1'), + Heading(name='The `dense_rank()` Function', level=4, num='3.8.1.6'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.DenseRank', level=5, num='3.8.1.6.1'), + Heading(name='The `row_number()` Function', level=4, num='3.8.1.7'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.RowNumber', level=5, num='3.8.1.7.1'), + Heading(name='Aggregate Functions', level=3, num='3.8.2'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.AggregateFunctions', level=4, num='3.8.2.1'), + Heading(name='Combinators', level=4, num='3.8.2.2'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.AggregateFunctions.Combinators', level=5, num='3.8.2.2.1'), + Heading(name='Parametric', level=4, num='3.8.2.3'), + Heading(name='RQ.SRS-019.ClickHouse.WindowFunctions.AggregateFunctions.Parametric', level=5, num='3.8.2.3.1'), + Heading(name='References', level=1, num='4'), + ), + requirements=( + RQ_SRS_019_ClickHouse_WindowFunctions, + RQ_SRS_019_ClickHouse_WindowFunctions_NonDistributedTables, + RQ_SRS_019_ClickHouse_WindowFunctions_DistributedTables, + RQ_SRS_019_ClickHouse_WindowFunctions_WindowSpec, + RQ_SRS_019_ClickHouse_WindowFunctions_PartitionClause, + RQ_SRS_019_ClickHouse_WindowFunctions_PartitionClause_MultipleExpr, + RQ_SRS_019_ClickHouse_WindowFunctions_PartitionClause_MissingExpr_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_PartitionClause_InvalidExpr_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_OrderClause, + RQ_SRS_019_ClickHouse_WindowFunctions_OrderClause_MultipleExprs, + RQ_SRS_019_ClickHouse_WindowFunctions_OrderClause_MissingExpr_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_OrderClause_InvalidExpr_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_FrameClause, + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame, + 
RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_MissingFrameExtent_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_InvalidFrameExtent_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Start_CurrentRow, + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Start_UnboundedPreceding, + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Start_ExprPreceding, + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Start_UnboundedFollowing_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Start_ExprFollowing_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_CurrentRow_CurrentRow, + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_CurrentRow_UnboundedPreceding_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_CurrentRow_ExprPreceding_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_CurrentRow_UnboundedFollowing, + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_CurrentRow_ExprFollowing, + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_UnboundedPreceding_CurrentRow, + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_UnboundedPreceding_UnboundedPreceding_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_UnboundedPreceding_ExprPreceding, + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_UnboundedPreceding_UnboundedFollowing, + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_UnboundedPreceding_ExprFollowing, + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_UnboundedFollowing_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_ExprFollowing_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_ExprFollowing_ExprFollowing_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_ExprFollowing_UnboundedFollowing, + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_ExprFollowing_ExprFollowing, + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_ExprPreceding_CurrentRow, + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_ExprPreceding_UnboundedPreceding_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_ExprPreceding_UnboundedFollowing, + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_ExprPreceding_ExprPreceding_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_ExprPreceding_ExprPreceding, + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_ExprPreceding_ExprFollowing, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_DataTypes_DateAndDateTime, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_DataTypes_IntAndUInt, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_MultipleColumnsInOrderBy_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_MissingFrameExtent_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_InvalidFrameExtent_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_CurrentRow_Peers, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Start_CurrentRow_WithoutOrderBy, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Start_CurrentRow_WithOrderBy, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Start_UnboundedFollowing_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Start_UnboundedPreceding_WithoutOrderBy, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Start_UnboundedPreceding_WithOrderBy, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Start_ExprPreceding_WithoutOrderBy_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Start_ExprPreceding_OrderByNonNumericalColumn_Error, + 
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Start_ExprPreceding_WithOrderBy, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Start_ExprFollowing_WithoutOrderBy_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Start_ExprFollowing_WithOrderBy_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_CurrentRow_CurrentRow, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_CurrentRow_UnboundedPreceding_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_CurrentRow_UnboundedFollowing, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_CurrentRow_ExprFollowing_WithoutOrderBy_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_CurrentRow_ExprFollowing_WithOrderBy, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_CurrentRow_ExprPreceding_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedPreceding_CurrentRow, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedPreceding_UnboundedPreceding_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedPreceding_UnboundedFollowing, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedPreceding_ExprPreceding_WithoutOrderBy_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedPreceding_ExprPreceding_WithOrderBy, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedPreceding_ExprFollowing_WithoutOrderBy_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedPreceding_ExprFollowing_WithOrderBy, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedFollowing_CurrentRow_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedFollowing_UnboundedFollowing_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedFollowing_UnboundedPreceding_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedFollowing_ExprPreceding_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedFollowing_ExprFollowing_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprPreceding_CurrentRow_WithOrderBy, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprPreceding_CurrentRow_WithoutOrderBy_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprPreceding_UnboundedPreceding_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprPreceding_UnboundedFollowing_WithoutOrderBy_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprPreceding_UnboundedFollowing_WithOrderBy, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprPreceding_ExprFollowing_WithoutOrderBy_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprPreceding_ExprFollowing_WithOrderBy, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprPreceding_ExprPreceding_WithoutOrderBy_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprPreceding_ExprPreceding_WithOrderBy_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprPreceding_ExprPreceding_WithOrderBy, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_CurrentRow_WithoutOrderBy_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_CurrentRow_WithOrderBy_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_CurrentRow_ZeroSpecialCase, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_UnboundedFollowing_WithoutOrderBy_Error, + 
RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_UnboundedFollowing_WithOrderBy, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_UnboundedPreceding_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_ExprPreceding_WithoutOrderBy_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_ExprPreceding_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_ExprPreceding_WithOrderBy_ZeroSpecialCase, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_ExprFollowing_WithoutOrderBy_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_ExprFollowing_WithOrderBy_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_ExprFollowing_WithOrderBy, + RQ_SRS_019_ClickHouse_WindowFunctions_Frame_Extent, + RQ_SRS_019_ClickHouse_WindowFunctions_Frame_Start, + RQ_SRS_019_ClickHouse_WindowFunctions_Frame_Between, + RQ_SRS_019_ClickHouse_WindowFunctions_Frame_End, + RQ_SRS_019_ClickHouse_WindowFunctions_CurrentRow, + RQ_SRS_019_ClickHouse_WindowFunctions_UnboundedPreceding, + RQ_SRS_019_ClickHouse_WindowFunctions_UnboundedFollowing, + RQ_SRS_019_ClickHouse_WindowFunctions_ExprPreceding, + RQ_SRS_019_ClickHouse_WindowFunctions_ExprPreceding_ExprValue, + RQ_SRS_019_ClickHouse_WindowFunctions_ExprFollowing, + RQ_SRS_019_ClickHouse_WindowFunctions_ExprFollowing_ExprValue, + RQ_SRS_019_ClickHouse_WindowFunctions_WindowClause, + RQ_SRS_019_ClickHouse_WindowFunctions_WindowClause_MultipleWindows, + RQ_SRS_019_ClickHouse_WindowFunctions_WindowClause_MissingWindowSpec_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_OverClause, + RQ_SRS_019_ClickHouse_WindowFunctions_OverClause_EmptyOverClause, + RQ_SRS_019_ClickHouse_WindowFunctions_OverClause_AdHocWindow, + RQ_SRS_019_ClickHouse_WindowFunctions_OverClause_AdHocWindow_MissingWindowSpec_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_OverClause_NamedWindow, + RQ_SRS_019_ClickHouse_WindowFunctions_OverClause_NamedWindow_InvalidName_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_OverClause_NamedWindow_MultipleWindows_Error, + RQ_SRS_019_ClickHouse_WindowFunctions_FirstValue, + RQ_SRS_019_ClickHouse_WindowFunctions_LastValue, + RQ_SRS_019_ClickHouse_WindowFunctions_Lag_Workaround, + RQ_SRS_019_ClickHouse_WindowFunctions_Lead_Workaround, + RQ_SRS_019_ClickHouse_WindowFunctions_Rank, + RQ_SRS_019_ClickHouse_WindowFunctions_DenseRank, + RQ_SRS_019_ClickHouse_WindowFunctions_RowNumber, + RQ_SRS_019_ClickHouse_WindowFunctions_AggregateFunctions, + RQ_SRS_019_ClickHouse_WindowFunctions_AggregateFunctions_Combinators, + RQ_SRS_019_ClickHouse_WindowFunctions_AggregateFunctions_Parametric, + ), + content=''' +# SRS019 ClickHouse Window Functions +# Software Requirements Specification + +## Table of Contents + +* 1 [Revision History](#revision-history) +* 2 [Introduction](#introduction) +* 3 [Requirements](#requirements) + * 3.1 [General](#general) + * 3.1.1 [RQ.SRS-019.ClickHouse.WindowFunctions](#rqsrs-019clickhousewindowfunctions) + * 3.1.2 [RQ.SRS-019.ClickHouse.WindowFunctions.NonDistributedTables](#rqsrs-019clickhousewindowfunctionsnondistributedtables) + * 3.1.3 [RQ.SRS-019.ClickHouse.WindowFunctions.DistributedTables](#rqsrs-019clickhousewindowfunctionsdistributedtables) + * 3.2 [Window Specification](#window-specification) + * 3.2.1 [RQ.SRS-019.ClickHouse.WindowFunctions.WindowSpec](#rqsrs-019clickhousewindowfunctionswindowspec) + * 3.3 [PARTITION Clause](#partition-clause) + * 3.3.1 
[RQ.SRS-019.ClickHouse.WindowFunctions.PartitionClause](#rqsrs-019clickhousewindowfunctionspartitionclause) + * 3.3.2 [RQ.SRS-019.ClickHouse.WindowFunctions.PartitionClause.MultipleExpr](#rqsrs-019clickhousewindowfunctionspartitionclausemultipleexpr) + * 3.3.3 [RQ.SRS-019.ClickHouse.WindowFunctions.PartitionClause.MissingExpr.Error](#rqsrs-019clickhousewindowfunctionspartitionclausemissingexprerror) + * 3.3.4 [RQ.SRS-019.ClickHouse.WindowFunctions.PartitionClause.InvalidExpr.Error](#rqsrs-019clickhousewindowfunctionspartitionclauseinvalidexprerror) + * 3.4 [ORDER Clause](#order-clause) + * 3.4.1 [RQ.SRS-019.ClickHouse.WindowFunctions.OrderClause](#rqsrs-019clickhousewindowfunctionsorderclause) + * 3.4.1.1 [order_clause](#order_clause) + * 3.4.2 [RQ.SRS-019.ClickHouse.WindowFunctions.OrderClause.MultipleExprs](#rqsrs-019clickhousewindowfunctionsorderclausemultipleexprs) + * 3.4.3 [RQ.SRS-019.ClickHouse.WindowFunctions.OrderClause.MissingExpr.Error](#rqsrs-019clickhousewindowfunctionsorderclausemissingexprerror) + * 3.4.4 [RQ.SRS-019.ClickHouse.WindowFunctions.OrderClause.InvalidExpr.Error](#rqsrs-019clickhousewindowfunctionsorderclauseinvalidexprerror) + * 3.5 [FRAME Clause](#frame-clause) + * 3.5.1 [RQ.SRS-019.ClickHouse.WindowFunctions.FrameClause](#rqsrs-019clickhousewindowfunctionsframeclause) + * 3.5.2 [ROWS](#rows) + * 3.5.2.1 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame](#rqsrs-019clickhousewindowfunctionsrowsframe) + * 3.5.2.2 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.MissingFrameExtent.Error](#rqsrs-019clickhousewindowfunctionsrowsframemissingframeextenterror) + * 3.5.2.3 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.InvalidFrameExtent.Error](#rqsrs-019clickhousewindowfunctionsrowsframeinvalidframeextenterror) + * 3.5.2.4 [ROWS CURRENT ROW](#rows-current-row) + * 3.5.2.4.1 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Start.CurrentRow](#rqsrs-019clickhousewindowfunctionsrowsframestartcurrentrow) + * 3.5.2.5 [ROWS UNBOUNDED PRECEDING](#rows-unbounded-preceding) + * 3.5.2.5.1 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Start.UnboundedPreceding](#rqsrs-019clickhousewindowfunctionsrowsframestartunboundedpreceding) + * 3.5.2.6 [ROWS `expr` PRECEDING](#rows-expr-preceding) + * 3.5.2.6.1 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Start.ExprPreceding](#rqsrs-019clickhousewindowfunctionsrowsframestartexprpreceding) + * 3.5.2.7 [ROWS UNBOUNDED FOLLOWING](#rows-unbounded-following) + * 3.5.2.7.1 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Start.UnboundedFollowing.Error](#rqsrs-019clickhousewindowfunctionsrowsframestartunboundedfollowingerror) + * 3.5.2.8 [ROWS `expr` FOLLOWING](#rows-expr-following) + * 3.5.2.8.1 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Start.ExprFollowing.Error](#rqsrs-019clickhousewindowfunctionsrowsframestartexprfollowingerror) + * 3.5.2.9 [ROWS BETWEEN CURRENT ROW](#rows-between-current-row) + * 3.5.2.9.1 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.CurrentRow.CurrentRow](#rqsrs-019clickhousewindowfunctionsrowsframebetweencurrentrowcurrentrow) + * 3.5.2.9.2 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.CurrentRow.UnboundedPreceding.Error](#rqsrs-019clickhousewindowfunctionsrowsframebetweencurrentrowunboundedprecedingerror) + * 3.5.2.9.3 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.CurrentRow.ExprPreceding.Error](#rqsrs-019clickhousewindowfunctionsrowsframebetweencurrentrowexprprecedingerror) + * 3.5.2.9.4 
[RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.CurrentRow.UnboundedFollowing](#rqsrs-019clickhousewindowfunctionsrowsframebetweencurrentrowunboundedfollowing) + * 3.5.2.9.5 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.CurrentRow.ExprFollowing](#rqsrs-019clickhousewindowfunctionsrowsframebetweencurrentrowexprfollowing) + * 3.5.2.10 [ROWS BETWEEN UNBOUNDED PRECEDING](#rows-between-unbounded-preceding) + * 3.5.2.10.1 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.UnboundedPreceding.CurrentRow](#rqsrs-019clickhousewindowfunctionsrowsframebetweenunboundedprecedingcurrentrow) + * 3.5.2.10.2 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.UnboundedPreceding.UnboundedPreceding.Error](#rqsrs-019clickhousewindowfunctionsrowsframebetweenunboundedprecedingunboundedprecedingerror) + * 3.5.2.10.3 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.UnboundedPreceding.ExprPreceding](#rqsrs-019clickhousewindowfunctionsrowsframebetweenunboundedprecedingexprpreceding) + * 3.5.2.10.4 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.UnboundedPreceding.UnboundedFollowing](#rqsrs-019clickhousewindowfunctionsrowsframebetweenunboundedprecedingunboundedfollowing) + * 3.5.2.10.5 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.UnboundedPreceding.ExprFollowing](#rqsrs-019clickhousewindowfunctionsrowsframebetweenunboundedprecedingexprfollowing) + * 3.5.2.11 [ROWS BETWEEN UNBOUNDED FOLLOWING](#rows-between-unbounded-following) + * 3.5.2.11.1 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.UnboundedFollowing.Error](#rqsrs-019clickhousewindowfunctionsrowsframebetweenunboundedfollowingerror) + * 3.5.2.12 [ROWS BETWEEN `expr` FOLLOWING](#rows-between-expr-following) + * 3.5.2.12.1 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprFollowing.Error](#rqsrs-019clickhousewindowfunctionsrowsframebetweenexprfollowingerror) + * 3.5.2.12.2 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprFollowing.ExprFollowing.Error](#rqsrs-019clickhousewindowfunctionsrowsframebetweenexprfollowingexprfollowingerror) + * 3.5.2.12.3 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprFollowing.UnboundedFollowing](#rqsrs-019clickhousewindowfunctionsrowsframebetweenexprfollowingunboundedfollowing) + * 3.5.2.12.4 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprFollowing.ExprFollowing](#rqsrs-019clickhousewindowfunctionsrowsframebetweenexprfollowingexprfollowing) + * 3.5.2.13 [ROWS BETWEEN `expr` PRECEDING](#rows-between-expr-preceding) + * 3.5.2.13.1 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprPreceding.CurrentRow](#rqsrs-019clickhousewindowfunctionsrowsframebetweenexprprecedingcurrentrow) + * 3.5.2.13.2 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprPreceding.UnboundedPreceding.Error](#rqsrs-019clickhousewindowfunctionsrowsframebetweenexprprecedingunboundedprecedingerror) + * 3.5.2.13.3 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprPreceding.UnboundedFollowing](#rqsrs-019clickhousewindowfunctionsrowsframebetweenexprprecedingunboundedfollowing) + * 3.5.2.13.4 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprPreceding.ExprPreceding.Error](#rqsrs-019clickhousewindowfunctionsrowsframebetweenexprprecedingexprprecedingerror) + * 3.5.2.13.5 [RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprPreceding.ExprPreceding](#rqsrs-019clickhousewindowfunctionsrowsframebetweenexprprecedingexprpreceding) + * 3.5.2.13.6 
[RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprPreceding.ExprFollowing](#rqsrs-019clickhousewindowfunctionsrowsframebetweenexprprecedingexprfollowing) + * 3.5.3 [RANGE](#range) + * 3.5.3.1 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame](#rqsrs-019clickhousewindowfunctionsrangeframe) + * 3.5.3.2 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.DataTypes.DateAndDateTime](#rqsrs-019clickhousewindowfunctionsrangeframedatatypesdateanddatetime) + * 3.5.3.3 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.DataTypes.IntAndUInt](#rqsrs-019clickhousewindowfunctionsrangeframedatatypesintanduint) + * 3.5.3.4 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.MultipleColumnsInOrderBy.Error](#rqsrs-019clickhousewindowfunctionsrangeframemultiplecolumnsinorderbyerror) + * 3.5.3.5 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.MissingFrameExtent.Error](#rqsrs-019clickhousewindowfunctionsrangeframemissingframeextenterror) + * 3.5.3.6 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.InvalidFrameExtent.Error](#rqsrs-019clickhousewindowfunctionsrangeframeinvalidframeextenterror) + * 3.5.3.7 [`CURRENT ROW` Peers](#current-row-peers) + * 3.5.3.8 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.CurrentRow.Peers](#rqsrs-019clickhousewindowfunctionsrangeframecurrentrowpeers) + * 3.5.3.9 [RANGE CURRENT ROW](#range-current-row) + * 3.5.3.9.1 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.CurrentRow.WithoutOrderBy](#rqsrs-019clickhousewindowfunctionsrangeframestartcurrentrowwithoutorderby) + * 3.5.3.9.2 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.CurrentRow.WithOrderBy](#rqsrs-019clickhousewindowfunctionsrangeframestartcurrentrowwithorderby) + * 3.5.3.10 [RANGE UNBOUNDED FOLLOWING](#range-unbounded-following) + * 3.5.3.10.1 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.UnboundedFollowing.Error](#rqsrs-019clickhousewindowfunctionsrangeframestartunboundedfollowingerror) + * 3.5.3.11 [RANGE UNBOUNDED PRECEDING](#range-unbounded-preceding) + * 3.5.3.11.1 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.UnboundedPreceding.WithoutOrderBy](#rqsrs-019clickhousewindowfunctionsrangeframestartunboundedprecedingwithoutorderby) + * 3.5.3.11.2 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.UnboundedPreceding.WithOrderBy](#rqsrs-019clickhousewindowfunctionsrangeframestartunboundedprecedingwithorderby) + * 3.5.3.12 [RANGE `expr` PRECEDING](#range-expr-preceding) + * 3.5.3.12.1 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.ExprPreceding.WithoutOrderBy.Error](#rqsrs-019clickhousewindowfunctionsrangeframestartexprprecedingwithoutorderbyerror) + * 3.5.3.12.2 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.ExprPreceding.OrderByNonNumericalColumn.Error](#rqsrs-019clickhousewindowfunctionsrangeframestartexprprecedingorderbynonnumericalcolumnerror) + * 3.5.3.12.3 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.ExprPreceding.WithOrderBy](#rqsrs-019clickhousewindowfunctionsrangeframestartexprprecedingwithorderby) + * 3.5.3.13 [RANGE `expr` FOLLOWING](#range-expr-following) + * 3.5.3.13.1 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.ExprFollowing.WithoutOrderBy.Error](#rqsrs-019clickhousewindowfunctionsrangeframestartexprfollowingwithoutorderbyerror) + * 3.5.3.13.2 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.ExprFollowing.WithOrderBy.Error](#rqsrs-019clickhousewindowfunctionsrangeframestartexprfollowingwithorderbyerror) + * 3.5.3.14 [RANGE BETWEEN CURRENT ROW](#range-between-current-row) + * 3.5.3.14.1 
[RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.CurrentRow.CurrentRow](#rqsrs-019clickhousewindowfunctionsrangeframebetweencurrentrowcurrentrow) + * 3.5.3.14.2 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.CurrentRow.UnboundedPreceding.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweencurrentrowunboundedprecedingerror) + * 3.5.3.14.3 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.CurrentRow.UnboundedFollowing](#rqsrs-019clickhousewindowfunctionsrangeframebetweencurrentrowunboundedfollowing) + * 3.5.3.14.4 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.CurrentRow.ExprFollowing.WithoutOrderBy.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweencurrentrowexprfollowingwithoutorderbyerror) + * 3.5.3.14.5 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.CurrentRow.ExprFollowing.WithOrderBy](#rqsrs-019clickhousewindowfunctionsrangeframebetweencurrentrowexprfollowingwithorderby) + * 3.5.3.14.6 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.CurrentRow.ExprPreceding.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweencurrentrowexprprecedingerror) + * 3.5.3.15 [RANGE BETWEEN UNBOUNDED PRECEDING](#range-between-unbounded-preceding) + * 3.5.3.15.1 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedPreceding.CurrentRow](#rqsrs-019clickhousewindowfunctionsrangeframebetweenunboundedprecedingcurrentrow) + * 3.5.3.15.2 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedPreceding.UnboundedPreceding.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweenunboundedprecedingunboundedprecedingerror) + * 3.5.3.15.3 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedPreceding.UnboundedFollowing](#rqsrs-019clickhousewindowfunctionsrangeframebetweenunboundedprecedingunboundedfollowing) + * 3.5.3.15.4 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedPreceding.ExprPreceding.WithoutOrderBy.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweenunboundedprecedingexprprecedingwithoutorderbyerror) + * 3.5.3.15.5 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedPreceding.ExprPreceding.WithOrderBy](#rqsrs-019clickhousewindowfunctionsrangeframebetweenunboundedprecedingexprprecedingwithorderby) + * 3.5.3.15.6 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedPreceding.ExprFollowing.WithoutOrderBy.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweenunboundedprecedingexprfollowingwithoutorderbyerror) + * 3.5.3.15.7 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedPreceding.ExprFollowing.WithOrderBy](#rqsrs-019clickhousewindowfunctionsrangeframebetweenunboundedprecedingexprfollowingwithorderby) + * 3.5.3.16 [RANGE BETWEEN UNBOUNDED FOLLOWING](#range-between-unbounded-following) + * 3.5.3.16.1 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedFollowing.CurrentRow.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweenunboundedfollowingcurrentrowerror) + * 3.5.3.16.2 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedFollowing.UnboundedFollowing.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweenunboundedfollowingunboundedfollowingerror) + * 3.5.3.16.3 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedFollowing.UnboundedPreceding.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweenunboundedfollowingunboundedprecedingerror) + * 3.5.3.16.4 
[RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedFollowing.ExprPreceding.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweenunboundedfollowingexprprecedingerror) + * 3.5.3.16.5 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedFollowing.ExprFollowing.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweenunboundedfollowingexprfollowingerror) + * 3.5.3.17 [RANGE BETWEEN expr PRECEDING](#range-between-expr-preceding) + * 3.5.3.17.1 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.CurrentRow.WithOrderBy](#rqsrs-019clickhousewindowfunctionsrangeframebetweenexprprecedingcurrentrowwithorderby) + * 3.5.3.17.2 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.CurrentRow.WithoutOrderBy.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweenexprprecedingcurrentrowwithoutorderbyerror) + * 3.5.3.17.3 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.UnboundedPreceding.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweenexprprecedingunboundedprecedingerror) + * 3.5.3.17.4 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.UnboundedFollowing.WithoutOrderBy.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweenexprprecedingunboundedfollowingwithoutorderbyerror) + * 3.5.3.17.5 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.UnboundedFollowing.WithOrderBy](#rqsrs-019clickhousewindowfunctionsrangeframebetweenexprprecedingunboundedfollowingwithorderby) + * 3.5.3.17.6 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.ExprFollowing.WithoutOrderBy.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweenexprprecedingexprfollowingwithoutorderbyerror) + * 3.5.3.17.7 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.ExprFollowing.WithOrderBy](#rqsrs-019clickhousewindowfunctionsrangeframebetweenexprprecedingexprfollowingwithorderby) + * 3.5.3.17.8 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.ExprPreceding.WithoutOrderBy.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweenexprprecedingexprprecedingwithoutorderbyerror) + * 3.5.3.17.9 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.ExprPreceding.WithOrderBy.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweenexprprecedingexprprecedingwithorderbyerror) + * 3.5.3.17.10 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.ExprPreceding.WithOrderBy](#rqsrs-019clickhousewindowfunctionsrangeframebetweenexprprecedingexprprecedingwithorderby) + * 3.5.3.18 [RANGE BETWEEN expr FOLLOWING](#range-between-expr-following) + * 3.5.3.18.1 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.CurrentRow.WithoutOrderBy.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweenexprfollowingcurrentrowwithoutorderbyerror) + * 3.5.3.18.2 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.CurrentRow.WithOrderBy.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweenexprfollowingcurrentrowwithorderbyerror) + * 3.5.3.18.3 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.CurrentRow.ZeroSpecialCase](#rqsrs-019clickhousewindowfunctionsrangeframebetweenexprfollowingcurrentrowzerospecialcase) + * 3.5.3.18.4 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.UnboundedFollowing.WithoutOrderBy.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweenexprfollowingunboundedfollowingwithoutorderbyerror) + * 
3.5.3.18.5 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.UnboundedFollowing.WithOrderBy](#rqsrs-019clickhousewindowfunctionsrangeframebetweenexprfollowingunboundedfollowingwithorderby) + * 3.5.3.18.6 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.UnboundedPreceding.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweenexprfollowingunboundedprecedingerror) + * 3.5.3.18.7 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.ExprPreceding.WithoutOrderBy.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweenexprfollowingexprprecedingwithoutorderbyerror) + * 3.5.3.18.8 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.ExprPreceding.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweenexprfollowingexprprecedingerror) + * 3.5.3.18.9 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.ExprPreceding.WithOrderBy.ZeroSpecialCase](#rqsrs-019clickhousewindowfunctionsrangeframebetweenexprfollowingexprprecedingwithorderbyzerospecialcase) + * 3.5.3.18.10 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.ExprFollowing.WithoutOrderBy.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweenexprfollowingexprfollowingwithoutorderbyerror) + * 3.5.3.18.11 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.ExprFollowing.WithOrderBy.Error](#rqsrs-019clickhousewindowfunctionsrangeframebetweenexprfollowingexprfollowingwithorderbyerror) + * 3.5.3.18.12 [RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.ExprFollowing.WithOrderBy](#rqsrs-019clickhousewindowfunctionsrangeframebetweenexprfollowingexprfollowingwithorderby) + * 3.5.4 [Frame Extent](#frame-extent) + * 3.5.4.1 [RQ.SRS-019.ClickHouse.WindowFunctions.Frame.Extent](#rqsrs-019clickhousewindowfunctionsframeextent) + * 3.5.5 [Frame Start](#frame-start) + * 3.5.5.1 [RQ.SRS-019.ClickHouse.WindowFunctions.Frame.Start](#rqsrs-019clickhousewindowfunctionsframestart) + * 3.5.6 [Frame Between](#frame-between) + * 3.5.6.1 [RQ.SRS-019.ClickHouse.WindowFunctions.Frame.Between](#rqsrs-019clickhousewindowfunctionsframebetween) + * 3.5.7 [Frame End](#frame-end) + * 3.5.7.1 [RQ.SRS-019.ClickHouse.WindowFunctions.Frame.End](#rqsrs-019clickhousewindowfunctionsframeend) + * 3.5.8 [`CURRENT ROW`](#current-row) + * 3.5.8.1 [RQ.SRS-019.ClickHouse.WindowFunctions.CurrentRow](#rqsrs-019clickhousewindowfunctionscurrentrow) + * 3.5.9 [`UNBOUNDED PRECEDING`](#unbounded-preceding) + * 3.5.9.1 [RQ.SRS-019.ClickHouse.WindowFunctions.UnboundedPreceding](#rqsrs-019clickhousewindowfunctionsunboundedpreceding) + * 3.5.10 [`UNBOUNDED FOLLOWING`](#unbounded-following) + * 3.5.10.1 [RQ.SRS-019.ClickHouse.WindowFunctions.UnboundedFollowing](#rqsrs-019clickhousewindowfunctionsunboundedfollowing) + * 3.5.11 [`expr PRECEDING`](#expr-preceding) + * 3.5.11.1 [RQ.SRS-019.ClickHouse.WindowFunctions.ExprPreceding](#rqsrs-019clickhousewindowfunctionsexprpreceding) + * 3.5.11.2 [RQ.SRS-019.ClickHouse.WindowFunctions.ExprPreceding.ExprValue](#rqsrs-019clickhousewindowfunctionsexprprecedingexprvalue) + * 3.5.12 [`expr FOLLOWING`](#expr-following) + * 3.5.12.1 [RQ.SRS-019.ClickHouse.WindowFunctions.ExprFollowing](#rqsrs-019clickhousewindowfunctionsexprfollowing) + * 3.5.12.2 [RQ.SRS-019.ClickHouse.WindowFunctions.ExprFollowing.ExprValue](#rqsrs-019clickhousewindowfunctionsexprfollowingexprvalue) + * 3.6 [WINDOW Clause](#window-clause) + * 3.6.1 
[RQ.SRS-019.ClickHouse.WindowFunctions.WindowClause](#rqsrs-019clickhousewindowfunctionswindowclause) + * 3.6.2 [RQ.SRS-019.ClickHouse.WindowFunctions.WindowClause.MultipleWindows](#rqsrs-019clickhousewindowfunctionswindowclausemultiplewindows) + * 3.6.3 [RQ.SRS-019.ClickHouse.WindowFunctions.WindowClause.MissingWindowSpec.Error](#rqsrs-019clickhousewindowfunctionswindowclausemissingwindowspecerror) + * 3.7 [`OVER` Clause](#over-clause) + * 3.7.1 [RQ.SRS-019.ClickHouse.WindowFunctions.OverClause](#rqsrs-019clickhousewindowfunctionsoverclause) + * 3.7.2 [Empty Clause](#empty-clause) + * 3.7.2.1 [RQ.SRS-019.ClickHouse.WindowFunctions.OverClause.EmptyOverClause](#rqsrs-019clickhousewindowfunctionsoverclauseemptyoverclause) + * 3.7.3 [Ad-Hoc Window](#ad-hoc-window) + * 3.7.3.1 [RQ.SRS-019.ClickHouse.WindowFunctions.OverClause.AdHocWindow](#rqsrs-019clickhousewindowfunctionsoverclauseadhocwindow) + * 3.7.3.2 [RQ.SRS-019.ClickHouse.WindowFunctions.OverClause.AdHocWindow.MissingWindowSpec.Error](#rqsrs-019clickhousewindowfunctionsoverclauseadhocwindowmissingwindowspecerror) + * 3.7.4 [Named Window](#named-window) + * 3.7.4.1 [RQ.SRS-019.ClickHouse.WindowFunctions.OverClause.NamedWindow](#rqsrs-019clickhousewindowfunctionsoverclausenamedwindow) + * 3.7.4.2 [RQ.SRS-019.ClickHouse.WindowFunctions.OverClause.NamedWindow.InvalidName.Error](#rqsrs-019clickhousewindowfunctionsoverclausenamedwindowinvalidnameerror) + * 3.7.4.3 [RQ.SRS-019.ClickHouse.WindowFunctions.OverClause.NamedWindow.MultipleWindows.Error](#rqsrs-019clickhousewindowfunctionsoverclausenamedwindowmultiplewindowserror) + * 3.8 [Window Functions](#window-functions) + * 3.8.1 [Nonaggregate Functions](#nonaggregate-functions) + * 3.8.1.1 [The `first_value(expr)` Function](#the-first_valueexpr-function) + * 3.8.1.1.1 [RQ.SRS-019.ClickHouse.WindowFunctions.FirstValue](#rqsrs-019clickhousewindowfunctionsfirstvalue) + * 3.8.1.2 [The `last_value(expr)` Function](#the-last_valueexpr-function) + * 3.8.1.2.1 [RQ.SRS-019.ClickHouse.WindowFunctions.LastValue](#rqsrs-019clickhousewindowfunctionslastvalue) + * 3.8.1.3 [The `lag(value, offset)` Function Workaround](#the-lagvalue-offset-function-workaround) + * 3.8.1.3.1 [RQ.SRS-019.ClickHouse.WindowFunctions.Lag.Workaround](#rqsrs-019clickhousewindowfunctionslagworkaround) + * 3.8.1.4 [The `lead(value, offset)` Function Workaround](#the-leadvalue-offset-function-workaround) + * 3.8.1.4.1 [RQ.SRS-019.ClickHouse.WindowFunctions.Lead.Workaround](#rqsrs-019clickhousewindowfunctionsleadworkaround) + * 3.8.1.5 [The `rank()` Function](#the-rank-function) + * 3.8.1.5.1 [RQ.SRS-019.ClickHouse.WindowFunctions.Rank](#rqsrs-019clickhousewindowfunctionsrank) + * 3.8.1.6 [The `dense_rank()` Function](#the-dense_rank-function) + * 3.8.1.6.1 [RQ.SRS-019.ClickHouse.WindowFunctions.DenseRank](#rqsrs-019clickhousewindowfunctionsdenserank) + * 3.8.1.7 [The `row_number()` Function](#the-row_number-function) + * 3.8.1.7.1 [RQ.SRS-019.ClickHouse.WindowFunctions.RowNumber](#rqsrs-019clickhousewindowfunctionsrownumber) + * 3.8.2 [Aggregate Functions](#aggregate-functions) + * 3.8.2.1 [RQ.SRS-019.ClickHouse.WindowFunctions.AggregateFunctions](#rqsrs-019clickhousewindowfunctionsaggregatefunctions) + * 3.8.2.2 [Combinators](#combinators) + * 3.8.2.2.1 [RQ.SRS-019.ClickHouse.WindowFunctions.AggregateFunctions.Combinators](#rqsrs-019clickhousewindowfunctionsaggregatefunctionscombinators) + * 3.8.2.3 [Parametric](#parametric) + * 3.8.2.3.1 
[RQ.SRS-019.ClickHouse.WindowFunctions.AggregateFunctions.Parametric](#rqsrs-019clickhousewindowfunctionsaggregatefunctionsparametric) +* 4 [References](#references) + + +## Revision History + +This document is stored in an electronic form using [Git] source control management software +hosted in a [GitHub Repository]. +All the updates are tracked using the [Revision History]. + +## Introduction + +This software requirements specification covers requirements for [window functions] in [ClickHouse]. + +## Requirements + +### General + +#### RQ.SRS-019.ClickHouse.WindowFunctions +version: 1.0 + +[ClickHouse] SHALL support [window functions] that produce a result for each row inside the window. + +#### RQ.SRS-019.ClickHouse.WindowFunctions.NonDistributedTables +version: 1.0 + +[ClickHouse] SHALL support correct operation of [window functions] on non-distributed +table engines such as `MergeTree`. + + +#### RQ.SRS-019.ClickHouse.WindowFunctions.DistributedTables +version: 1.0 + +[ClickHouse] SHALL support correct operation of [window functions] on the +[Distributed](https://clickhouse.tech/docs/en/engines/table-engines/special/distributed/) table engine. + +### Window Specification + +#### RQ.SRS-019.ClickHouse.WindowFunctions.WindowSpec +version: 1.0 + +[ClickHouse] SHALL support defining a window using a window specification clause. +The [window_spec] SHALL be defined as + +``` +window_spec: + [partition_clause] [order_clause] [frame_clause] +``` + +that SHALL specify how to partition query rows into groups for processing by the window function. + +### PARTITION Clause + +#### RQ.SRS-019.ClickHouse.WindowFunctions.PartitionClause +version: 1.0 + +[ClickHouse] SHALL support [partition_clause] that indicates how to divide the query rows into groups. +The [partition_clause] SHALL be defined as + +``` +partition_clause: + PARTITION BY expr [, expr] ... +``` + +#### RQ.SRS-019.ClickHouse.WindowFunctions.PartitionClause.MultipleExpr +version: 1.0 + +[ClickHouse] SHALL support partitioning by more than one `expr` in the [partition_clause] definition. + +For example, + +```sql +SELECT x,s, sum(x) OVER (PARTITION BY x,s) FROM values('x Int8, s String', (1,'a'),(1,'b'),(2,'b')) +``` + +```bash +┌─x─┬─s─┬─sum(x) OVER (PARTITION BY x, s)─┐ +│ 1 │ a │ 1 │ +│ 1 │ b │ 1 │ +│ 2 │ b │ 2 │ +└───┴───┴─────────────────────────────────┘ +``` + +#### RQ.SRS-019.ClickHouse.WindowFunctions.PartitionClause.MissingExpr.Error +version: 1.0 + +[ClickHouse] SHALL return an error if `expr` is missing in the [partition_clause] definition. + +For example, + +```sql +SELECT sum(number) OVER (PARTITION BY) FROM numbers(1,3) +``` + +#### RQ.SRS-019.ClickHouse.WindowFunctions.PartitionClause.InvalidExpr.Error +version: 1.0 + +[ClickHouse] SHALL return an error if `expr` is invalid in the [partition_clause] definition. + +### ORDER Clause + +#### RQ.SRS-019.ClickHouse.WindowFunctions.OrderClause +version: 1.0 + +[ClickHouse] SHALL support [order_clause] that indicates how to sort rows in each window. + +##### order_clause + +The `order_clause` SHALL be defined as + +``` +order_clause: + ORDER BY expr [ASC|DESC] [, expr [ASC|DESC]] ... +``` + +#### RQ.SRS-019.ClickHouse.WindowFunctions.OrderClause.MultipleExprs +version: 1.0 + +[ClickHouse] SHALL support using more than one `expr` in the [order_clause] definition.
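+ + Each `expr` MAY also carry its own `ASC`/`DESC` direction, per the [order_clause] grammar above. As an added sketch beyond the example below, mixing directions over the same data would be expected to order the rows as `(1,'b')`, `(1,'a')`, `(2,'b')` with running sums `1`, `2`, `4`: + + ```sql + SELECT x,s, sum(x) OVER (ORDER BY x ASC, s DESC) FROM values('x Int8, s String', (1,'a'),(1,'b'),(2,'b')) + ```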
+ +For example, + +```sql +SELECT x,s, sum(x) OVER (ORDER BY x DESC, s DESC) FROM values('x Int8, s String', (1,'a'),(1,'b'),(2,'b')) +``` + +```bash +┌─x─┬─s─┬─sum(x) OVER (ORDER BY x DESC, s DESC)─┐ +│ 2 │ b │ 2 │ +│ 1 │ b │ 3 │ +│ 1 │ a │ 4 │ +└───┴───┴───────────────────────────────────────┘ +``` + +#### RQ.SRS-019.ClickHouse.WindowFunctions.OrderClause.MissingExpr.Error +version: 1.0 + +[ClickHouse] SHALL return an error if `expr` is missing in the [order_clause] definition. + +#### RQ.SRS-019.ClickHouse.WindowFunctions.OrderClause.InvalidExpr.Error +version: 1.0 + +[ClickHouse] SHALL return an error if `expr` is invalid in the [order_clause] definition. + +### FRAME Clause + +#### RQ.SRS-019.ClickHouse.WindowFunctions.FrameClause +version: 1.0 + +[ClickHouse] SHALL support [frame_clause] that SHALL specify a subset of the current window. + +The `frame_clause` SHALL be defined as + +``` +frame_clause: + {ROWS | RANGE } frame_extent +``` + +#### ROWS + +##### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame +version: 1.0 + +[ClickHouse] SHALL support `ROWS` frame to define beginning and ending row positions. +Offsets SHALL be differences in row numbers from the current row number. + +```sql +ROWS frame_extent +``` + +See [frame_extent] definition. + +##### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.MissingFrameExtent.Error +version: 1.0 + +[ClickHouse] SHALL return an error if the `ROWS` frame clause is defined without [frame_extent]. + +For example, + +```sql +SELECT number,sum(number) OVER (ORDER BY number ROWS) FROM numbers(1,3) +``` + +##### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.InvalidFrameExtent.Error +version: 1.0 + +[ClickHouse] SHALL return an error if the `ROWS` frame clause has invalid [frame_extent]. + +For example, + +```sql +SELECT number,sum(number) OVER (ORDER BY number ROWS '1') FROM numbers(1,3) +``` + +##### ROWS CURRENT ROW + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Start.CurrentRow +version: 1.0 + +[ClickHouse] SHALL include only the current row in the window partition +when `ROWS CURRENT ROW` frame is specified. + +For example, + +```sql +SELECT number,sum(number) OVER (ROWS CURRENT ROW) FROM numbers(1,2) +``` + +```bash +┌─number─┬─sum(number) OVER (ROWS BETWEEN CURRENT ROW AND CURRENT ROW)─┐ +│ 1 │ 1 │ +│ 2 │ 2 │ +└────────┴─────────────────────────────────────────────────────────────┘ +``` + +##### ROWS UNBOUNDED PRECEDING + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Start.UnboundedPreceding +version: 1.0 + +[ClickHouse] SHALL include all rows before and including the current row in the window partition +when `ROWS UNBOUNDED PRECEDING` frame is specified. + +For example, + +```sql +SELECT number,sum(number) OVER (ROWS UNBOUNDED PRECEDING) FROM numbers(1,3) +``` + +```bash +┌─number─┬─sum(number) OVER (ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)─┐ +│ 1 │ 1 │ +│ 2 │ 3 │ +│ 3 │ 6 │ +└────────┴─────────────────────────────────────────────────────────────────────┘ +``` + +##### ROWS `expr` PRECEDING + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Start.ExprPreceding +version: 1.0 + +[ClickHouse] SHALL include `expr` rows before and including the current row in the window partition +when `ROWS expr PRECEDING` frame is specified. 
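+ + As the output heading in the example below shows, the single-bound `ROWS expr PRECEDING` form is treated as `ROWS BETWEEN expr PRECEDING AND CURRENT ROW`. An added sketch with a larger offset, where the first frames are clipped at the partition start, would be expected to return `1`, `3`, `6`: + + ```sql + SELECT number,sum(number) OVER (ROWS 2 PRECEDING) FROM numbers(1,3) + ```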
+ +For example, + +```sql +SELECT number,sum(number) OVER (ROWS 1 PRECEDING) FROM numbers(1,3) +``` + +```bash +┌─number─┬─sum(number) OVER (ROWS BETWEEN 1 PRECEDING AND CURRENT ROW)─┐ +│ 1 │ 1 │ +│ 2 │ 3 │ +│ 3 │ 5 │ +└────────┴─────────────────────────────────────────────────────────────┘ +``` + +##### ROWS UNBOUNDED FOLLOWING + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Start.UnboundedFollowing.Error +version: 1.0 + +[ClickHouse] SHALL return an error when `ROWS UNBOUNDED FOLLOWING` frame is specified. + +For example, + +```sql +SELECT number,sum(number) OVER (ROWS UNBOUNDED FOLLOWING) FROM numbers(1,3) +``` + +##### ROWS `expr` FOLLOWING + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Start.ExprFollowing.Error +version: 1.0 + +[ClickHouse] SHALL return an error when `ROWS expr FOLLOWING` frame is specified. + +For example, + +```sql +SELECT number,sum(number) OVER (ROWS 1 FOLLOWING) FROM numbers(1,3) +``` + +##### ROWS BETWEEN CURRENT ROW + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.CurrentRow.CurrentRow +version: 1.0 + +[ClickHouse] SHALL include only the current row in the window partition +when `ROWS BETWEEN CURRENT ROW AND CURRENT ROW` frame is specified. + +For example, + +```sql +SELECT number,sum(number) OVER (ROWS BETWEEN CURRENT ROW AND CURRENT ROW) FROM numbers(1,2) +``` + +```bash +┌─number─┬─sum(number) OVER (ROWS BETWEEN CURRENT ROW AND CURRENT ROW)─┐ +│ 1 │ 1 │ +│ 2 │ 2 │ +└────────┴─────────────────────────────────────────────────────────────┘ +``` + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.CurrentRow.UnboundedPreceding.Error +version: 1.0 + +[ClickHouse] SHALL return an error when `ROWS BETWEEN CURRENT ROW AND UNBOUNDED PRECEDING` frame is specified. + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.CurrentRow.ExprPreceding.Error +version: 1.0 + +[ClickHouse] SHALL return an error when `ROWS BETWEEN CURRENT ROW AND expr PRECEDING` frame is specified. + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.CurrentRow.UnboundedFollowing +version: 1.0 + +[ClickHouse] SHALL include the current row and all the following rows in the window partition +when `ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING` frame is specified. + +For example, + +```sql +SELECT number,sum(number) OVER (ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) FROM numbers(1,3) +``` + +```bash +┌─number─┬─sum(number) OVER (ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING)─┐ +│ 1 │ 6 │ +│ 2 │ 5 │ +│ 3 │ 3 │ +└────────┴─────────────────────────────────────────────────────────────────────┘ +``` + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.CurrentRow.ExprFollowing +version: 1.0 + +[ClickHouse] SHALL include the current row and the `expr` rows that are following the current row in the window partition +when `ROWS BETWEEN CURRENT ROW AND expr FOLLOWING` frame is specified. 
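+ + When fewer than `expr` rows remain, the frame is truncated at the partition end. An added sketch with `2 FOLLOWING` over the same data would be expected to return `6`, `5`, `3`: + + ```sql + SELECT number,sum(number) OVER (ROWS BETWEEN CURRENT ROW AND 2 FOLLOWING) FROM numbers(1,3) + ```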
+ +For example, + +```sql +SELECT number,sum(number) OVER (ROWS BETWEEN CURRENT ROW AND 1 FOLLOWING) FROM numbers(1,3) +``` + +```bash +┌─number─┬─sum(number) OVER (ROWS BETWEEN CURRENT ROW AND 1 FOLLOWING)─┐ +│ 1 │ 3 │ +│ 2 │ 5 │ +│ 3 │ 3 │ +└────────┴─────────────────────────────────────────────────────────────┘ +``` + +##### ROWS BETWEEN UNBOUNDED PRECEDING + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.UnboundedPreceding.CurrentRow +version: 1.0 + +[ClickHouse] SHALL include all the rows before and including the current row in the window partition +when `ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW` frame is specified. + +For example, + +```sql +SELECT number,sum(number) OVER (ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) FROM numbers(1,3) +``` + +```bash +┌─number─┬─sum(number) OVER (ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)─┐ +│ 1 │ 1 │ +│ 2 │ 3 │ +│ 3 │ 6 │ +└────────┴─────────────────────────────────────────────────────────────────────┘ +``` + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.UnboundedPreceding.UnboundedPreceding.Error +version: 1.0 + +[ClickHouse] SHALL return an error when `ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED PRECEDING` frame is specified. + +For example, + +```sql +SELECT number,sum(number) OVER (ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED PRECEDING) FROM numbers(1,3) +``` + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.UnboundedPreceding.ExprPreceding +version: 1.0 + +[ClickHouse] SHALL include all the rows until and including the current row minus `expr` rows preceding it +when `ROWS BETWEEN UNBOUNDED PRECEDING AND expr PRECEDING` frame is specified. + +For example, + +```sql +SELECT number,sum(number) OVER (ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING) FROM numbers(1,3) +``` + +```bash +┌─number─┬─sum(number) OVER (ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING)─┐ +│ 1 │ 0 │ +│ 2 │ 1 │ +│ 3 │ 3 │ +└────────┴─────────────────────────────────────────────────────────────────────┘ +``` + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.UnboundedPreceding.UnboundedFollowing +version: 1.0 + +[ClickHouse] SHALL include all rows in the window partition +when `ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING` frame is specified. + +For example, + +```sql +SELECT number,sum(number) OVER (ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) FROM numbers(1,3) +``` + +```bash +┌─number─┬─sum(number) OVER (ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)─┐ +│ 1 │ 6 │ +│ 2 │ 6 │ +│ 3 │ 6 │ +└────────┴─────────────────────────────────────────────────────────────────────────────┘ +``` + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.UnboundedPreceding.ExprFollowing +version: 1.0 + +[ClickHouse] SHALL include all the rows until and including the current row plus `expr` rows following it +when `ROWS BETWEEN UNBOUNDED PRECEDING AND expr FOLLOWING` frame is specified. 
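+ + A quick way to see the frame boundaries is to count the rows in each frame rather than sum them; an added sketch that would be expected to count `2`, `3`, and `3` rows in the respective frames: + + ```sql + SELECT number,count(*) OVER (ROWS BETWEEN UNBOUNDED PRECEDING AND 1 FOLLOWING) FROM numbers(1,3) + ```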
+ + For example, + + ```sql + SELECT number,sum(number) OVER (ROWS BETWEEN UNBOUNDED PRECEDING AND 1 FOLLOWING) FROM numbers(1,3) + ``` + + ```bash + ┌─number─┬─sum(number) OVER (ROWS BETWEEN UNBOUNDED PRECEDING AND 1 FOLLOWING)─┐ + │ 1 │ 3 │ + │ 2 │ 6 │ + │ 3 │ 6 │ + └────────┴─────────────────────────────────────────────────────────────────────┘ + ``` + + ##### ROWS BETWEEN UNBOUNDED FOLLOWING + + ###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.UnboundedFollowing.Error + version: 1.0 + + [ClickHouse] SHALL return an error when `UNBOUNDED FOLLOWING` is specified as the start of the frame, including + + * `ROWS BETWEEN UNBOUNDED FOLLOWING AND CURRENT ROW` + * `ROWS BETWEEN UNBOUNDED FOLLOWING AND UNBOUNDED PRECEDING` + * `ROWS BETWEEN UNBOUNDED FOLLOWING AND UNBOUNDED FOLLOWING` + * `ROWS BETWEEN UNBOUNDED FOLLOWING AND expr PRECEDING` + * `ROWS BETWEEN UNBOUNDED FOLLOWING AND expr FOLLOWING` + + For example, + + ```sql + SELECT number,sum(number) OVER (ROWS BETWEEN UNBOUNDED FOLLOWING AND CURRENT ROW) FROM numbers(1,3) + ``` + + ##### ROWS BETWEEN `expr` FOLLOWING + + ###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprFollowing.Error + version: 1.0 + + [ClickHouse] SHALL return an error when `expr FOLLOWING` is specified as the start of the frame + and it points to a row that is after the end of the frame inside the window partition, such + as in the following cases + + * `ROWS BETWEEN expr FOLLOWING AND CURRENT ROW` + * `ROWS BETWEEN expr FOLLOWING AND UNBOUNDED PRECEDING` + * `ROWS BETWEEN expr FOLLOWING AND expr PRECEDING` + + For example, + + ```sql + SELECT number,sum(number) OVER (ROWS BETWEEN 1 FOLLOWING AND CURRENT ROW) FROM numbers(1,3) + ``` + + ###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprFollowing.ExprFollowing.Error + version: 1.0 + + [ClickHouse] SHALL return an error when `ROWS BETWEEN expr FOLLOWING AND expr FOLLOWING` + is specified and the end of the frame specified by the `expr FOLLOWING` is a row that is before the row + specified by the frame start. + + For example, + + ```sql + SELECT number,sum(number) OVER (ROWS BETWEEN 1 FOLLOWING AND 0 FOLLOWING) FROM numbers(1,3) + ``` + + ###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprFollowing.UnboundedFollowing + version: 1.0 + + [ClickHouse] SHALL include all the rows from and including the current row plus `expr` rows following it + until and including the last row in the window partition + when `ROWS BETWEEN expr FOLLOWING AND UNBOUNDED FOLLOWING` frame is specified. + + For example, + + ```sql + SELECT number,sum(number) OVER (ROWS BETWEEN 1 FOLLOWING AND UNBOUNDED FOLLOWING) FROM numbers(1,3) + ``` + + ```bash + ┌─number─┬─sum(number) OVER (ROWS BETWEEN 1 FOLLOWING AND UNBOUNDED FOLLOWING)─┐ + │ 1 │ 5 │ + │ 2 │ 3 │ + │ 3 │ 0 │ + └────────┴─────────────────────────────────────────────────────────────────────┘ + ``` + + ###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprFollowing.ExprFollowing + version: 1.0 + + [ClickHouse] SHALL include the rows from and including the current row plus the first `expr` rows following it + until and including the current row plus the second `expr` rows following it, + provided that the frame end is at or after the start of the frame, + when `ROWS BETWEEN expr FOLLOWING AND expr FOLLOWING` frame is specified.
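+ + The degenerate case where both offsets are equal is therefore allowed and yields single-row frames, empty past the partition end; an added sketch that would be expected to return `2`, `3`, `0`: + + ```sql + SELECT number,sum(number) OVER (ROWS BETWEEN 1 FOLLOWING AND 1 FOLLOWING) FROM numbers(1,3) + ```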
+ + For example, + + ```sql + SELECT number,sum(number) OVER (ROWS BETWEEN 1 FOLLOWING AND 2 FOLLOWING) FROM numbers(1,3) + ``` + + ```bash + ┌─number─┬─sum(number) OVER (ROWS BETWEEN 1 FOLLOWING AND 2 FOLLOWING)─┐ + │ 1 │ 5 │ + │ 2 │ 3 │ + │ 3 │ 0 │ + └────────┴─────────────────────────────────────────────────────────────┘ + ``` + + ##### ROWS BETWEEN `expr` PRECEDING + + ###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprPreceding.CurrentRow + version: 1.0 + + [ClickHouse] SHALL include the rows from and including the current row minus `expr` rows + preceding it until and including the current row in the window frame + when `ROWS BETWEEN expr PRECEDING AND CURRENT ROW` frame is specified. + + For example, + + ```sql + SELECT number,sum(number) OVER (ROWS BETWEEN 1 PRECEDING AND CURRENT ROW) FROM numbers(1,3) + ``` + + ```bash + ┌─number─┬─sum(number) OVER (ROWS BETWEEN 1 PRECEDING AND CURRENT ROW)─┐ + │ 1 │ 1 │ + │ 2 │ 3 │ + │ 3 │ 5 │ + └────────┴─────────────────────────────────────────────────────────────┘ + ``` + + ###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprPreceding.UnboundedPreceding.Error + version: 1.0 + + [ClickHouse] SHALL return an error + when `ROWS BETWEEN expr PRECEDING AND UNBOUNDED PRECEDING` frame is specified. + + For example, + + ```sql + SELECT number,sum(number) OVER (ROWS BETWEEN 1 PRECEDING AND UNBOUNDED PRECEDING) FROM numbers(1,3) + ``` + + ###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprPreceding.UnboundedFollowing + version: 1.0 + + [ClickHouse] SHALL include the rows from and including the current row minus `expr` rows + preceding it until and including the last row in the window partition + when `ROWS BETWEEN expr PRECEDING AND UNBOUNDED FOLLOWING` frame is specified. + + For example, + + ```sql + SELECT number,sum(number) OVER (ROWS BETWEEN 1 PRECEDING AND UNBOUNDED FOLLOWING) FROM numbers(1,3) + ``` + + ```bash + ┌─number─┬─sum(number) OVER (ROWS BETWEEN 1 PRECEDING AND UNBOUNDED FOLLOWING)─┐ + │ 1 │ 6 │ + │ 2 │ 6 │ + │ 3 │ 5 │ + └────────┴─────────────────────────────────────────────────────────────────────┘ + ``` + + ###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprPreceding.ExprPreceding.Error + version: 1.0 + + [ClickHouse] SHALL return an error when the frame end specified by the `expr PRECEDING` + evaluates to a row that is before the row specified by the frame start in the window partition + when `ROWS BETWEEN expr PRECEDING AND expr PRECEDING` frame is specified. + + For example, + + ```sql + SELECT number,sum(number) OVER (ROWS BETWEEN 1 PRECEDING AND 2 PRECEDING) FROM numbers(1,3) + ``` + + ###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprPreceding.ExprPreceding + version: 1.0 + + [ClickHouse] SHALL include the rows from and including the current row minus the first `expr` rows preceding it + until and including the current row minus the second `expr` rows preceding it, if the end + of the frame is at or after the frame start, in the window partition + when `ROWS BETWEEN expr PRECEDING AND expr PRECEDING` frame is specified.
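+ + Equal offsets are likewise allowed here, giving single-row frames shifted back by `expr`, empty before the partition start; an added sketch that would be expected to return `0`, `1`, `2`: + + ```sql + SELECT number,sum(number) OVER (ROWS BETWEEN 1 PRECEDING AND 1 PRECEDING) FROM numbers(1,3) + ```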
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (ROWS BETWEEN 1 PRECEDING AND 0 PRECEDING) FROM numbers(1,3)
+```
+
+```bash
+┌─number─┬─sum(number) OVER (ROWS BETWEEN 1 PRECEDING AND 0 PRECEDING)─┐
+│ 1 │ 1 │
+│ 2 │ 3 │
+│ 3 │ 5 │
+└────────┴──────────────────────────────────────────────────────────────┘
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RowsFrame.Between.ExprPreceding.ExprFollowing
+version: 1.0
+
+[ClickHouse] SHALL include the rows from and including the current row minus `expr` rows preceding it
+until and including the current row plus `expr` rows following it in the window partition
+when `ROWS BETWEEN expr PRECEDING AND expr FOLLOWING` frame is specified.
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING) FROM numbers(1,3)
+```
+
+```bash
+┌─number─┬─sum(number) OVER (ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING)─┐
+│ 1 │ 3 │
+│ 2 │ 6 │
+│ 3 │ 5 │
+└────────┴──────────────────────────────────────────────────────────────┘
+```
+
+#### RANGE
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame
+version: 1.0
+
+[ClickHouse] SHALL support `RANGE` frame to define rows within a value range.
+Offsets SHALL be differences in row values from the current row value.
+
+```sql
+RANGE frame_extent
+```
+
+See [frame_extent] definition.
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.DataTypes.DateAndDateTime
+version: 1.0
+
+[ClickHouse] SHALL support `RANGE` frame over columns with `Date` and `DateTime`
+data types.
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.DataTypes.IntAndUInt
+version: 1.0
+
+[ClickHouse] SHALL support `RANGE` frame over columns with numerical data types
+such as `IntX` and `UIntX`.
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.MultipleColumnsInOrderBy.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error if the `RANGE` frame definition is used with `ORDER BY`
+that uses multiple columns.
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.MissingFrameExtent.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error if the `RANGE` frame definition is missing [frame_extent].
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.InvalidFrameExtent.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error if the `RANGE` frame definition has an invalid [frame_extent].
+
+##### `CURRENT ROW` Peers
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.CurrentRow.Peers
+version: 1.0
+
+[ClickHouse] for the `RANGE` frame SHALL define the `peers` of the `CURRENT ROW` to be all
+the rows that are inside the same order bucket (rows with equal values of the `ORDER BY` expression).
+
+##### RANGE CURRENT ROW
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.CurrentRow.WithoutOrderBy
+version: 1.0
+
+[ClickHouse] SHALL include all rows in the window partition
+when `RANGE CURRENT ROW` frame is specified without the `ORDER BY` clause.
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (RANGE CURRENT ROW) FROM numbers(1,3)
+```
+
+```bash
+┌─number─┬─sum(number) OVER (RANGE BETWEEN CURRENT ROW AND CURRENT ROW)─┐
+│ 1 │ 6 │
+│ 2 │ 6 │
+│ 3 │ 6 │
+└────────┴──────────────────────────────────────────────────────────────┘
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.CurrentRow.WithOrderBy
+version: 1.0
+
+[ClickHouse] SHALL include all rows that are [current row peers] in the window partition
+when `RANGE CURRENT ROW` frame is specified with the `ORDER BY` clause.
+ +For example, + +```sql +SELECT number,sum(number) OVER (ORDER BY number RANGE CURRENT ROW) FROM values('number Int8', (1),(1),(2),(3)) +``` + +```bash +┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN CURRENT ROW AND CURRENT ROW)─┐ +│ 1 │ 2 │ +│ 1 │ 2 │ +│ 2 │ 2 │ +│ 3 │ 3 │ +└────────┴──────────────────────────────────────────────────────────────────────────────────┘ +``` + +##### RANGE UNBOUNDED FOLLOWING + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.UnboundedFollowing.Error +version: 1.0 + +[ClickHouse] SHALL return an error when `RANGE UNBOUNDED FOLLOWING` frame is specified with or without order by +as `UNBOUNDED FOLLOWING` SHALL not be supported as [frame_start]. + +For example, + +```sql +SELECT number,sum(number) OVER (RANGE UNBOUNDED FOLLOWING) FROM numbers(1,3) +``` + +##### RANGE UNBOUNDED PRECEDING + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.UnboundedPreceding.WithoutOrderBy +version: 1.0 + +[ClickHouse] SHALL include all rows in the window partition +when `RANGE UNBOUNDED PRECEDING` frame is specified without the `ORDER BY` clause. + +For example, + +```sql +SELECT number,sum(number) OVER (RANGE UNBOUNDED PRECEDING) FROM numbers(1,3) +``` + +```bash +┌─number─┬─sum(number) OVER (RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)─┐ +│ 1 │ 6 │ +│ 2 │ 6 │ +│ 3 │ 6 │ +└────────┴──────────────────────────────────────────────────────────────────────┘ +``` + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.UnboundedPreceding.WithOrderBy +version: 1.0 + +[ClickHouse] SHALL include rows with values from and including the first row +until and including all [current row peers] in the window partition +when `RANGE UNBOUNDED PRECEDING` frame is specified with the `ORDER BY` clause. + +For example, + +```sql +SELECT number,sum(number) OVER (ORDER BY number RANGE UNBOUNDED PRECEDING) FROM numbers(1,3) +``` + +```bash +┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)─┐ +│ 1 │ 1 │ +│ 2 │ 3 │ +│ 3 │ 6 │ +└────────┴──────────────────────────────────────────────────────────────────────────────────────────┘ +``` + +##### RANGE `expr` PRECEDING + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.ExprPreceding.WithoutOrderBy.Error +version: 1.0 + +[ClickHouse] SHALL return an error when `RANGE expr PRECEDING` frame is specified without the `ORDER BY` clause. + +For example, + +```sql +SELECT number,sum(number) OVER (RANGE 1 PRECEDING) FROM numbers(1,3) +``` + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.ExprPreceding.OrderByNonNumericalColumn.Error +version: 1.0 + +[ClickHouse] SHALL return an error when `RANGE expr PRECEDING` is used with `ORDER BY` clause +over a non-numerical column. + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.ExprPreceding.WithOrderBy +version: 1.0 + +[ClickHouse] SHALL include rows with values from and including current row value minus `expr` +until and including the value for the current row +when `RANGE expr PRECEDING` frame is specified with the `ORDER BY` clause. 
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (ORDER BY number RANGE 1 PRECEDING) FROM numbers(1,3)
+```
+
+```bash
+┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN 1 PRECEDING AND CURRENT ROW)─┐
+│ 1 │ 1 │
+│ 2 │ 3 │
+│ 3 │ 5 │
+└────────┴──────────────────────────────────────────────────────────────────────────────────┘
+```
+
+##### RANGE `expr` FOLLOWING
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.ExprFollowing.WithoutOrderBy.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error when `RANGE expr FOLLOWING` frame is specified without the `ORDER BY` clause.
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (RANGE 1 FOLLOWING) FROM numbers(1,3)
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Start.ExprFollowing.WithOrderBy.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error when `RANGE expr FOLLOWING` frame is specified with the `ORDER BY` clause
+as the value for the frame start cannot be larger than the value for the frame end.
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (ORDER BY number RANGE 1 FOLLOWING) FROM numbers(1,3)
+```
+
+##### RANGE BETWEEN CURRENT ROW
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.CurrentRow.CurrentRow
+version: 1.0
+
+[ClickHouse] SHALL include all [current row peers] in the window partition
+when `RANGE BETWEEN CURRENT ROW AND CURRENT ROW` frame is specified with or without the `ORDER BY` clause.
+
+For example,
+
+**Without `ORDER BY`**
+
+```sql
+SELECT number,sum(number) OVER (RANGE BETWEEN CURRENT ROW AND CURRENT ROW) FROM numbers(1,3)
+```
+
+```bash
+┌─number─┬─sum(number) OVER (RANGE BETWEEN CURRENT ROW AND CURRENT ROW)─┐
+│ 1 │ 6 │
+│ 2 │ 6 │
+│ 3 │ 6 │
+└────────┴──────────────────────────────────────────────────────────────┘
+```
+
+**With `ORDER BY`**
+
+```sql
+SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN CURRENT ROW AND CURRENT ROW) FROM numbers(1,3)
+```
+
+```bash
+┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN CURRENT ROW AND CURRENT ROW)─┐
+│ 1 │ 1 │
+│ 2 │ 2 │
+│ 3 │ 3 │
+└────────┴──────────────────────────────────────────────────────────────────────────────────┘
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.CurrentRow.UnboundedPreceding.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error when `RANGE BETWEEN CURRENT ROW AND UNBOUNDED PRECEDING` frame is specified
+with or without the `ORDER BY` clause.
+
+For example,
+
+**Without `ORDER BY`**
+
+```sql
+SELECT number,sum(number) OVER (RANGE BETWEEN CURRENT ROW AND UNBOUNDED PRECEDING) FROM numbers(1,3)
+```
+
+**With `ORDER BY`**
+
+```sql
+SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN CURRENT ROW AND UNBOUNDED PRECEDING) FROM numbers(1,3)
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.CurrentRow.UnboundedFollowing
+version: 1.0
+
+[ClickHouse] SHALL include all rows with values from and including [current row peers] until and including
+the last row in the window partition when `RANGE BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING` frame is specified
+with or without the `ORDER BY` clause.
+ +For example, + +**Without `ORDER BY`** + +```sql +SELECT number,sum(number) OVER (RANGE BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) FROM numbers(1,3) +``` + +```bash +┌─number─┬─sum(number) OVER (RANGE BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING)─┐ +│ 1 │ 6 │ +│ 2 │ 6 │ +│ 3 │ 6 │ +└────────┴──────────────────────────────────────────────────────────────────────┘ +``` + +**With `ORDER BY`** + +```sql +SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) FROM numbers(1,3) +``` + +```bash +┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING)─┐ +│ 1 │ 6 │ +│ 2 │ 5 │ +│ 3 │ 3 │ +└────────┴──────────────────────────────────────────────────────────────────────────────────────────┘ +``` + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.CurrentRow.ExprFollowing.WithoutOrderBy.Error +version: 1.0 + +[ClickHouse] SHALL return an error when `RANGE BETWEEN CURRENT ROW AND expr FOLLOWING` frame is specified +without the `ORDER BY` clause. + +For example, + +```sql +SELECT number,sum(number) OVER (RANGE BETWEEN CURRENT ROW AND 1 FOLLOWING) FROM numbers(1,3) +``` + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.CurrentRow.ExprFollowing.WithOrderBy +version: 1.0 + +[ClickHouse] SHALL include all rows with values from and including [current row peers] until and including +current row value plus `expr` when `RANGE BETWEEN CURRENT ROW AND expr FOLLOWING` frame is specified +with the `ORDER BY` clause. + +For example, + +```sql +SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN CURRENT ROW AND 1 FOLLOWING) FROM values('number Int8', (1),(1),(2),(3)) +``` + +```bash +┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN CURRENT ROW AND 1 FOLLOWING)─┐ +│ 1 │ 4 │ +│ 1 │ 4 │ +│ 2 │ 5 │ +│ 3 │ 3 │ +└────────┴──────────────────────────────────────────────────────────────────────────────────┘ +``` + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.CurrentRow.ExprPreceding.Error +version: 1.0 + +[ClickHouse] SHALL return an error when `RANGE BETWEEN CURRENT ROW AND expr PRECEDING` frame is specified +with or without the `ORDER BY` clause. + +For example, + +**Without `ORDER BY`** + +```sql +SELECT number,sum(number) OVER (RANGE BETWEEN CURRENT ROW AND 1 PRECEDING) FROM numbers(1,3) +``` + +**With `ORDER BY`** + +```sql +SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN CURRENT ROW AND 1 PRECEDING) FROM numbers(1,3) +``` + +##### RANGE BETWEEN UNBOUNDED PRECEDING + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedPreceding.CurrentRow +version: 1.0 + +[ClickHouse] SHALL include all rows with values from and including the first row until and including +[current row peers] in the window partition when `RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW` frame is specified +with and without the `ORDER BY` clause. 
+
+For example,
+
+**Without `ORDER BY`**
+
+```sql
+SELECT number,sum(number) OVER (RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) FROM values('number Int8', (1),(1),(2),(3))
+```
+
+```bash
+┌─number─┬─sum(number) OVER (RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)─┐
+│ 1 │ 7 │
+│ 1 │ 7 │
+│ 2 │ 7 │
+│ 3 │ 7 │
+└────────┴──────────────────────────────────────────────────────────────────────┘
+```
+
+**With `ORDER BY`**
+
+```sql
+SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) FROM values('number Int8', (1),(1),(2),(3))
+```
+
+```bash
+┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)─┐
+│ 1 │ 2 │
+│ 1 │ 2 │
+│ 2 │ 4 │
+│ 3 │ 7 │
+└────────┴──────────────────────────────────────────────────────────────────────────────────────────┘
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedPreceding.UnboundedPreceding.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error when `RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED PRECEDING` frame is specified
+with and without the `ORDER BY` clause.
+
+For example,
+
+**Without `ORDER BY`**
+
+```sql
+SELECT number,sum(number) OVER (RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED PRECEDING) FROM values('number Int8', (1),(1),(2),(3))
+```
+
+**With `ORDER BY`**
+
+```sql
+SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED PRECEDING) FROM values('number Int8', (1),(1),(2),(3))
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedPreceding.UnboundedFollowing
+version: 1.0
+
+[ClickHouse] SHALL include all rows in the window partition when `RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING` frame is specified
+with and without the `ORDER BY` clause.
+
+For example,
+
+**Without `ORDER BY`**
+
+```sql
+SELECT number,sum(number) OVER (RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) FROM values('number Int8', (1),(1),(2),(3))
+```
+
+```bash
+┌─number─┬─sum(number) OVER (RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)─┐
+│ 1 │ 7 │
+│ 1 │ 7 │
+│ 2 │ 7 │
+│ 3 │ 7 │
+└────────┴──────────────────────────────────────────────────────────────────────────────┘
+```
+
+**With `ORDER BY`**
+
+```sql
+SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) FROM values('number Int8', (1),(1),(2),(3))
+```
+
+```bash
+┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)─┐
+│ 1 │ 7 │
+│ 1 │ 7 │
+│ 2 │ 7 │
+│ 3 │ 7 │
+└────────┴──────────────────────────────────────────────────────────────────────────────────────────────────┘
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedPreceding.ExprPreceding.WithoutOrderBy.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error when `RANGE BETWEEN UNBOUNDED PRECEDING AND expr PRECEDING` frame is specified
+without the `ORDER BY` clause.
+ +For example, + +```sql +SELECT number,sum(number) OVER (RANGE BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING) FROM values('number Int8', (1),(1),(2),(3)) +``` + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedPreceding.ExprPreceding.WithOrderBy +version: 1.0 + +[ClickHouse] SHALL include all rows with values from and including the first row until and including +the value of the current row minus `expr` in the window partition +when `RANGE BETWEEN UNBOUNDED PRECEDING AND expr PRECEDING` frame is specified with the `ORDER BY` clause. + +For example, + +```sql +SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING) FROM values('number Int8', (1),(1),(2),(3)) +``` + +```bash +┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING)─┐ +│ 1 │ 0 │ +│ 1 │ 0 │ +│ 2 │ 2 │ +│ 3 │ 4 │ +└────────┴──────────────────────────────────────────────────────────────────────────────────────────┘ +``` + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedPreceding.ExprFollowing.WithoutOrderBy.Error +version: 1.0 + +[ClickHouse] SHALL return an error when `RANGE BETWEEN UNBOUNDED PRECEDING AND expr FOLLOWING` frame is specified +without the `ORDER BY` clause. + +For example, + +```sql +SELECT number,sum(number) OVER (RANGE BETWEEN UNBOUNDED PRECEDING AND 1 FOLLOWING) FROM values('number Int8', (1),(1),(2),(3)) +``` + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedPreceding.ExprFollowing.WithOrderBy +version: 1.0 + +[ClickHouse] SHALL include all rows with values from and including the first row until and including +the value of the current row plus `expr` in the window partition +when `RANGE BETWEEN UNBOUNDED PRECEDING AND expr FOLLOWING` frame is specified with the `ORDER BY` clause. + +For example, + +```sql +SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN UNBOUNDED PRECEDING AND 1 FOLLOWING) FROM values('number Int8', (1),(1),(2),(3)) +``` + +```bash +┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN UNBOUNDED PRECEDING AND 1 FOLLOWING)─┐ +│ 1 │ 4 │ +│ 1 │ 4 │ +│ 2 │ 7 │ +│ 3 │ 7 │ +└────────┴──────────────────────────────────────────────────────────────────────────────────────────┘ +``` + +##### RANGE BETWEEN UNBOUNDED FOLLOWING + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedFollowing.CurrentRow.Error +version: 1.0 + +[ClickHouse] SHALL return an error when `RANGE BETWEEN UNBOUNDED FOLLOWING AND CURRENT ROW` frame is specified +with or without the `ORDER BY` clause. + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedFollowing.UnboundedFollowing.Error +version: 1.0 + +[ClickHouse] SHALL return an error when `RANGE BETWEEN UNBOUNDED FOLLOWING AND UNBOUNDED FOLLOWING` frame is specified +with or without the `ORDER BY` clause. + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedFollowing.UnboundedPreceding.Error +version: 1.0 + +[ClickHouse] SHALL return an error when `RANGE BETWEEN UNBOUNDED FOLLOWING AND UNBOUNDED PRECEDING` frame is specified +with or without the `ORDER BY` clause. + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedFollowing.ExprPreceding.Error +version: 1.0 + +[ClickHouse] SHALL return an error when `RANGE BETWEEN UNBOUNDED FOLLOWING AND expr PRECEDING` frame is specified +with or without the `ORDER BY` clause. 
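+
+For example, a query such as
+
+```sql
+SELECT number,sum(number) OVER (RANGE BETWEEN UNBOUNDED FOLLOWING AND 1 PRECEDING) FROM numbers(1,3)
+```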
+ +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.UnboundedFollowing.ExprFollowing.Error +version: 1.0 + +[ClickHouse] SHALL return an error when `RANGE BETWEEN UNBOUNDED FOLLOWING AND expr FOLLOWING` frame is specified +with or without the `ORDER BY` clause. + +##### RANGE BETWEEN expr PRECEDING + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.CurrentRow.WithOrderBy +version: 1.0 + +[ClickHouse] SHALL include all rows with values from and including current row minus `expr` +until and including [current row peers] in the window partition +when `RANGE BETWEEN expr PRECEDING AND CURRENT ROW` frame is specified with the `ORDER BY` clause. + +For example, + +```sql +SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 1 PRECEDING AND CURRENT ROW) FROM values('number Int8', (1),(1),(2),(3)) +``` + +```bash +┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN 1 PRECEDING AND CURRENT ROW)─┐ +│ 1 │ 2 │ +│ 1 │ 2 │ +│ 2 │ 4 │ +│ 3 │ 5 │ +└────────┴──────────────────────────────────────────────────────────────────────────────────┘ +``` + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.CurrentRow.WithoutOrderBy.Error +version: 1.0 + +[ClickHouse] SHALL return an error when `RANGE BETWEEN expr PRECEDING AND CURRENT ROW` frame is specified +without the `ORDER BY` clause. + +For example, + +```sql +SELECT number,sum(number) OVER (RANGE BETWEEN 1 PRECEDING AND CURRENT ROW) FROM values('number Int8', (1),(1),(2),(3)) +``` + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.UnboundedPreceding.Error +version: 1.0 + +[ClickHouse] SHALL return an error when `RANGE BETWEEN expr PRECEDING AND UNBOUNDED PRECEDING` frame is specified +with or without the `ORDER BY` clause. + +For example, + +**Without `ORDER BY`** + +```sql +SELECT number,sum(number) OVER (RANGE BETWEEN 1 PRECEDING AND UNBOUNDED PRECEDING) FROM values('number Int8', (1),(1),(2),(3)) +``` + +**With `ORDER BY`** + +```sql +SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 1 PRECEDING AND UNBOUNDED PRECEDING) FROM values('number Int8', (1),(1),(2),(3)) +``` + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.UnboundedFollowing.WithoutOrderBy.Error +version: 1.0 + +[ClickHouse] SHALL return an error when `RANGE BETWEEN expr PRECEDING AND UNBOUNDED FOLLOWING` frame is specified +without the `ORDER BY` clause. + +For example, + +```sql +SELECT number,sum(number) OVER (RANGE BETWEEN 1 PRECEDING AND UNBOUNDED FOLLOWING) FROM numbers(1,3) +``` + +###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.UnboundedFollowing.WithOrderBy +version: 1.0 + +[ClickHouse] SHALL include all rows with values from and including current row minus `expr` +until and including the last row in the window partition when `RANGE BETWEEN expr PRECEDING AND UNBOUNDED FOLLOWING` frame +is specified with the `ORDER BY` clause. 
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 1 PRECEDING AND UNBOUNDED FOLLOWING) FROM values('number Int8', (1),(1),(2),(3))
+```
+
+```bash
+┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN 1 PRECEDING AND UNBOUNDED FOLLOWING)─┐
+│ 1 │ 7 │
+│ 1 │ 7 │
+│ 2 │ 7 │
+│ 3 │ 5 │
+└────────┴──────────────────────────────────────────────────────────────────────────────────────────┘
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.ExprFollowing.WithoutOrderBy.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error when `RANGE BETWEEN expr PRECEDING AND expr FOLLOWING` frame is specified
+without the `ORDER BY` clause.
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (RANGE BETWEEN 1 PRECEDING AND 1 FOLLOWING) FROM numbers(1,3)
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.ExprFollowing.WithOrderBy
+version: 1.0
+
+[ClickHouse] SHALL include all rows with values from and including the current row value minus the preceding `expr`
+until and including the current row value plus the following `expr` in the window partition
+when `RANGE BETWEEN expr PRECEDING AND expr FOLLOWING` frame is specified with the `ORDER BY` clause.
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 1 PRECEDING AND 1 FOLLOWING) FROM values('number Int8', (1),(1),(2),(3))
+```
+
+```bash
+┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN 1 PRECEDING AND 1 FOLLOWING)─┐
+│ 1 │ 4 │
+│ 1 │ 4 │
+│ 2 │ 7 │
+│ 3 │ 5 │
+└────────┴──────────────────────────────────────────────────────────────────────────────────┘
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.ExprPreceding.WithoutOrderBy.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error when `RANGE BETWEEN expr PRECEDING AND expr PRECEDING` frame is specified
+without the `ORDER BY` clause.
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (RANGE BETWEEN 1 PRECEDING AND 0 PRECEDING) FROM numbers(1,3)
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.ExprPreceding.WithOrderBy.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error when the value of the [frame_end], computed as the
+current row value minus its preceding `expr`, is less than the value of the [frame_start] in the window partition
+when `RANGE BETWEEN expr PRECEDING AND expr PRECEDING` frame is specified with the `ORDER BY` clause.
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 1 PRECEDING AND 2 PRECEDING) FROM values('number Int8', (1),(1),(2),(3))
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprPreceding.ExprPreceding.WithOrderBy
+version: 1.0
+
+[ClickHouse] SHALL include all rows with values from and including the current row value minus the `expr` for the [frame_start]
+until and including the current row value minus the `expr` for the [frame_end] in the window partition
+when `RANGE BETWEEN expr PRECEDING AND expr PRECEDING` frame is specified with the `ORDER BY` clause
+if and only if the [frame_end] value is equal to or greater than the [frame_start] value.
+
+For example,
+
+**Greater Than**
+
+```sql
+SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 1 PRECEDING AND 0 PRECEDING) FROM values('number Int8', (1),(1),(2),(3))
+```
+
+```bash
+┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN 1 PRECEDING AND 0 PRECEDING)─┐
+│ 1 │ 2 │
+│ 1 │ 2 │
+│ 2 │ 4 │
+│ 3 │ 5 │
+└────────┴──────────────────────────────────────────────────────────────────────────────────┘
+```
+
+or **Equal**
+
+```sql
+SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 1 PRECEDING AND 1 PRECEDING) FROM values('number Int8', (1),(1),(2),(3))
+```
+
+```bash
+┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN 1 PRECEDING AND 1 PRECEDING)─┐
+│ 1 │ 0 │
+│ 1 │ 0 │
+│ 2 │ 2 │
+│ 3 │ 2 │
+└────────┴──────────────────────────────────────────────────────────────────────────────────┘
+```
+
+##### RANGE BETWEEN expr FOLLOWING
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.CurrentRow.WithoutOrderBy.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error when `RANGE BETWEEN expr FOLLOWING AND CURRENT ROW` frame is specified
+without the `ORDER BY` clause.
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (RANGE BETWEEN 0 FOLLOWING AND CURRENT ROW) FROM values('number Int8', (1),(1),(2),(3))
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.CurrentRow.WithOrderBy.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error when `RANGE BETWEEN expr FOLLOWING AND CURRENT ROW` frame is specified
+with the `ORDER BY` clause and `expr` is greater than `0`.
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 1 FOLLOWING AND CURRENT ROW) FROM values('number Int8', (1),(1),(2),(3))
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.CurrentRow.ZeroSpecialCase
+version: 1.0
+
+[ClickHouse] SHALL include all [current row peers] in the window partition
+when `RANGE BETWEEN expr FOLLOWING AND CURRENT ROW` frame is specified
+with or without the `ORDER BY` clause if and only if the `expr` equals `0`.
+
+For example,
+
+**Without `ORDER BY`**
+
+```sql
+SELECT number,sum(number) OVER (RANGE BETWEEN 0 FOLLOWING AND CURRENT ROW) FROM values('number Int8', (1),(1),(2),(3))
+```
+
+```bash
+┌─number─┬─sum(number) OVER (RANGE BETWEEN 0 FOLLOWING AND CURRENT ROW)─┐
+│ 1 │ 7 │
+│ 1 │ 7 │
+│ 2 │ 7 │
+│ 3 │ 7 │
+└────────┴──────────────────────────────────────────────────────────────┘
+```
+
+**With `ORDER BY`**
+
+```sql
+SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 0 FOLLOWING AND CURRENT ROW) FROM values('number Int8', (1),(1),(2),(3))
+```
+
+```bash
+┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN 0 FOLLOWING AND CURRENT ROW)─┐
+│ 1 │ 2 │
+│ 1 │ 2 │
+│ 2 │ 2 │
+│ 3 │ 3 │
+└────────┴──────────────────────────────────────────────────────────────────────────────────┘
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.UnboundedFollowing.WithoutOrderBy.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error when `RANGE BETWEEN expr FOLLOWING AND UNBOUNDED FOLLOWING` frame is specified
+without the `ORDER BY` clause.
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (RANGE BETWEEN 1 FOLLOWING AND UNBOUNDED FOLLOWING) FROM values('number Int8', (1),(1),(2),(3))
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.UnboundedFollowing.WithOrderBy
+version: 1.0
+
+[ClickHouse] SHALL include all rows with values from and including the current row plus `expr`
+until and including the last row in the window partition
+when `RANGE BETWEEN expr FOLLOWING AND UNBOUNDED FOLLOWING` frame is specified with the `ORDER BY` clause.
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 1 FOLLOWING AND UNBOUNDED FOLLOWING) FROM values('number Int8', (1),(1),(2),(3))
+```
+
+```bash
+┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN 1 FOLLOWING AND UNBOUNDED FOLLOWING)─┐
+│ 1 │ 5 │
+│ 1 │ 5 │
+│ 2 │ 3 │
+│ 3 │ 0 │
+└────────┴──────────────────────────────────────────────────────────────────────────────────────────┘
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.UnboundedPreceding.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error when `RANGE BETWEEN expr FOLLOWING AND UNBOUNDED PRECEDING` frame is specified
+with or without the `ORDER BY` clause.
+
+For example,
+
+**Without `ORDER BY`**
+
+```sql
+SELECT number,sum(number) OVER (RANGE BETWEEN 1 FOLLOWING AND UNBOUNDED PRECEDING) FROM values('number Int8', (1),(1),(2),(3))
+```
+
+**With `ORDER BY`**
+
+```sql
+SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 1 FOLLOWING AND UNBOUNDED PRECEDING) FROM values('number Int8', (1),(1),(2),(3))
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.ExprPreceding.WithoutOrderBy.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error when `RANGE BETWEEN expr FOLLOWING AND expr PRECEDING` frame is specified
+without the `ORDER BY` clause.
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.ExprPreceding.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error when `RANGE BETWEEN expr FOLLOWING AND expr PRECEDING` frame is specified
+with the `ORDER BY` clause if the value of either `expr` is not `0`.
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.ExprPreceding.WithOrderBy.ZeroSpecialCase
+version: 1.0
+
+[ClickHouse] SHALL include all rows that are [current row peers] in the window partition
+when `RANGE BETWEEN expr FOLLOWING AND expr PRECEDING` frame is specified
+with the `ORDER BY` clause if and only if both `expr` values are `0`.
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 0 FOLLOWING AND 0 PRECEDING) FROM values('number Int8', (1),(1),(2),(3))
+```
+
+```bash
+┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN 0 FOLLOWING AND 0 PRECEDING)─┐
+│ 1 │ 2 │
+│ 1 │ 2 │
+│ 2 │ 2 │
+│ 3 │ 3 │
+└────────┴──────────────────────────────────────────────────────────────────────────────────┘
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.ExprFollowing.WithoutOrderBy.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error when `RANGE BETWEEN expr FOLLOWING AND expr FOLLOWING` frame is specified
+without the `ORDER BY` clause.
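+
+For example, a query such as
+
+```sql
+SELECT number,sum(number) OVER (RANGE BETWEEN 1 FOLLOWING AND 2 FOLLOWING) FROM values('number Int8', (1),(1),(2),(3))
+```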
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.ExprFollowing.WithOrderBy.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error when `RANGE BETWEEN expr FOLLOWING AND expr FOLLOWING` frame is specified
+with the `ORDER BY` clause but the `expr` for the [frame_end] is less than the `expr` for the [frame_start].
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 1 FOLLOWING AND 0 FOLLOWING) FROM values('number Int8', (1),(1),(2),(3))
+```
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RangeFrame.Between.ExprFollowing.ExprFollowing.WithOrderBy
+version: 1.0
+
+[ClickHouse] SHALL include all rows with values from and including the current row plus `expr` for the [frame_start]
+until and including the current row plus `expr` for the [frame_end] in the window partition
+when `RANGE BETWEEN expr FOLLOWING AND expr FOLLOWING` frame is specified
+with the `ORDER BY` clause if and only if the `expr` for the [frame_end] is greater than or equal to the
+`expr` for the [frame_start].
+
+For example,
+
+```sql
+SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 1 FOLLOWING AND 2 FOLLOWING) FROM values('number Int8', (1),(1),(2),(3))
+```
+
+```bash
+┌─number─┬─sum(number) OVER (ORDER BY number ASC RANGE BETWEEN 1 FOLLOWING AND 2 FOLLOWING)─┐
+│ 1 │ 5 │
+│ 1 │ 5 │
+│ 2 │ 3 │
+│ 3 │ 0 │
+└────────┴──────────────────────────────────────────────────────────────────────────────────┘
+```
+
+#### Frame Extent
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.Frame.Extent
+version: 1.0
+
+[ClickHouse] SHALL support [frame_extent] defined as
+
+```
+frame_extent:
+    {frame_start | frame_between}
+```
+
+#### Frame Start
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.Frame.Start
+version: 1.0
+
+[ClickHouse] SHALL support [frame_start] defined as
+
+```
+frame_start: {
+    CURRENT ROW
+    | UNBOUNDED PRECEDING
+    | UNBOUNDED FOLLOWING
+    | expr PRECEDING
+    | expr FOLLOWING
+}
+```
+
+#### Frame Between
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.Frame.Between
+version: 1.0
+
+[ClickHouse] SHALL support [frame_between] defined as
+
+```
+frame_between:
+    BETWEEN frame_start AND frame_end
+```
+
+#### Frame End
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.Frame.End
+version: 1.0
+
+[ClickHouse] SHALL support [frame_end] defined as
+
+```
+frame_end: {
+    CURRENT ROW
+    | UNBOUNDED PRECEDING
+    | UNBOUNDED FOLLOWING
+    | expr PRECEDING
+    | expr FOLLOWING
+}
+```
+
+#### `CURRENT ROW`
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.CurrentRow
+version: 1.0
+
+[ClickHouse] SHALL support `CURRENT ROW` as `frame_start` or `frame_end` value.
+
+* For `ROWS` it SHALL define the bound to be the current row
+* For `RANGE` it SHALL define the bound to be the peers of the current row
+
+#### `UNBOUNDED PRECEDING`
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.UnboundedPreceding
+version: 1.0
+
+[ClickHouse] SHALL support `UNBOUNDED PRECEDING` as `frame_start` or `frame_end` value
+and it SHALL define that the bound is the first partition row.
+
+#### `UNBOUNDED FOLLOWING`
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.UnboundedFollowing
+version: 1.0
+
+[ClickHouse] SHALL support `UNBOUNDED FOLLOWING` as `frame_start` or `frame_end` value
+and it SHALL define that the bound is the last partition row.
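+
+For example, a query such as
+
+```sql
+SELECT number,sum(number) OVER (ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) FROM numbers(1,3)
+```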
+
+#### `expr PRECEDING`
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.ExprPreceding
+version: 1.0
+
+[ClickHouse] SHALL support `expr PRECEDING` as `frame_start` or `frame_end` value
+
+* For `ROWS` it SHALL define the bound to be `expr` rows before the current row
+* For `RANGE` it SHALL define the bound to be the rows with values equal to the current row value minus the `expr`.
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.ExprPreceding.ExprValue
+version: 1.0
+
+[ClickHouse] SHALL support only a non-negative numeric literal as the value for the `expr` in the `expr PRECEDING` frame boundary.
+
+For example,
+
+```
+5 PRECEDING
+```
+
+#### `expr FOLLOWING`
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.ExprFollowing
+version: 1.0
+
+[ClickHouse] SHALL support `expr FOLLOWING` as `frame_start` or `frame_end` value
+
+* For `ROWS` it SHALL define the bound to be `expr` rows after the current row
+* For `RANGE` it SHALL define the bound to be the rows with values equal to the current row value plus `expr`
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.ExprFollowing.ExprValue
+version: 1.0
+
+[ClickHouse] SHALL support only a non-negative numeric literal as the value for the `expr` in the `expr FOLLOWING` frame boundary.
+
+For example,
+
+```
+5 FOLLOWING
+```
+
+### WINDOW Clause
+
+#### RQ.SRS-019.ClickHouse.WindowFunctions.WindowClause
+version: 1.0
+
+[ClickHouse] SHALL support `WINDOW` clause to define one or more windows.
+
+```sql
+WINDOW window_name AS (window_spec)
+    [, window_name AS (window_spec)] ..
+```
+
+The `window_name` SHALL be the name of a window defined by a `WINDOW` clause.
+
+The [window_spec] SHALL specify the window.
+
+For example,
+
+```sql
+SELECT ... FROM table WINDOW w AS (partition by id)
+```
+
+#### RQ.SRS-019.ClickHouse.WindowFunctions.WindowClause.MultipleWindows
+version: 1.0
+
+[ClickHouse] SHALL support `WINDOW` clause that defines multiple windows.
+
+For example,
+
+```sql
+SELECT ... FROM table WINDOW w1 AS (partition by id), w2 AS (partition by customer)
+```
+
+#### RQ.SRS-019.ClickHouse.WindowFunctions.WindowClause.MissingWindowSpec.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error if the `WINDOW` clause definition is missing [window_spec].
+
+### `OVER` Clause
+
+#### RQ.SRS-019.ClickHouse.WindowFunctions.OverClause
+version: 1.0
+
+[ClickHouse] SHALL support `OVER` clause to either use a named window defined using the `WINDOW` clause
+or an ad-hoc window defined in place.
+
+```
+OVER ()|(window_spec)|named_window
+```
+
+#### Empty Clause
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.OverClause.EmptyOverClause
+version: 1.0
+
+[ClickHouse] SHALL treat the entire set of query rows as a single partition when the `OVER` clause is empty.
+For example,
+
+```
+SELECT sum(x) OVER () FROM table
+```
+
+#### Ad-Hoc Window
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.OverClause.AdHocWindow
+version: 1.0
+
+[ClickHouse] SHALL support ad-hoc window specification in the `OVER` clause.
+
+```
+OVER [window_spec]
+```
+
+See [window_spec] definition.
+
+For example,
+
+```sql
+(count(*) OVER (partition by id order by time desc))
+```
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.OverClause.AdHocWindow.MissingWindowSpec.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error if the `OVER` clause is missing the [window_spec].
+
+#### Named Window
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.OverClause.NamedWindow
+version: 1.0
+
+[ClickHouse] SHALL support using a previously defined named window in the `OVER` clause.
+
+```
+OVER [window_name]
+```
+
+See [window_name] definition.
+
+For example,
+
+```sql
+SELECT count(*) OVER w FROM table WINDOW w AS (partition by id)
+```
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.OverClause.NamedWindow.InvalidName.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error if the `OVER` clause references an invalid window name.
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.OverClause.NamedWindow.MultipleWindows.Error
+version: 1.0
+
+[ClickHouse] SHALL return an error if the `OVER` clause references more than one window name.
+
+### Window Functions
+
+#### Nonaggregate Functions
+
+##### The `first_value(expr)` Function
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.FirstValue
+version: 1.0
+
+[ClickHouse] SHALL support `first_value` window function that
+SHALL be a synonym for the `any(value)` function
+that SHALL return the value of `expr` from the first row in the window frame.
+
+```
+first_value(expr) OVER ...
+```
+
+##### The `last_value(expr)` Function
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.LastValue
+version: 1.0
+
+[ClickHouse] SHALL support `last_value` window function that
+SHALL be a synonym for the `anyLast(value)` function
+that SHALL return the value of `expr` from the last row in the window frame.
+
+```
+last_value(expr) OVER ...
+```
+
+##### The `lag(value, offset)` Function Workaround
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.Lag.Workaround
+version: 1.0
+
+[ClickHouse] SHALL support a workaround for the `lag(value, offset)` function as
+
+```
+any(value) OVER (.... ROWS BETWEEN N PRECEDING AND N PRECEDING)
+```
+
+The function SHALL return the value from the row that lags (precedes) the current row
+by `N` rows within its partition, where `N` is the offset specified in the frame boundaries
+and `value` is the expression passed to the `any` function.
+
+If there is no such row, the return value SHALL be the default.
+
+For example, if `N` is 3, the return value is the default for the first three rows.
+If `N` or default are missing, the defaults are 1 and NULL, respectively.
+
+`N` SHALL be a literal non-negative integer. If `N` is 0, the value SHALL be
+returned for the current row.
+
+##### The `lead(value, offset)` Function Workaround
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.Lead.Workaround
+version: 1.0
+
+[ClickHouse] SHALL support a workaround for the `lead(value, offset)` function as
+
+```
+any(value) OVER (.... ROWS BETWEEN N FOLLOWING AND N FOLLOWING)
+```
+
+The function SHALL return the value from the row that leads (follows) the current row by
+`N` rows within its partition, where `N` is the offset specified in the frame boundaries
+and `value` is the expression passed to the `any` function.
+
+If there is no such row, the return value SHALL be the default.
+
+For example, if `N` is 3, the return value is the default for the last three rows.
+If `N` or default are missing, the defaults are 1 and NULL, respectively.
+
+`N` SHALL be a literal non-negative integer. If `N` is 0, the value SHALL be
+returned for the current row.
+
+##### The `rank()` Function
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.Rank
+version: 1.0
+
+[ClickHouse] SHALL support `rank` window function that SHALL
+return the rank of the current row within its partition with gaps.
+
+Peers SHALL be considered ties and receive the same rank.
+The function SHALL not assign consecutive ranks to peer groups if groups of size greater than one exist,
+and the result is noncontiguous rank numbers.
+
+If the function is used without `ORDER BY` to sort partition rows into the desired order
+then all rows SHALL be peers.
+
+```
+rank() OVER ...
+```
+
+##### The `dense_rank()` Function
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.DenseRank
+version: 1.0
+
+[ClickHouse] SHALL support `dense_rank` function over a window that SHALL
+return the rank of the current row within its partition without gaps.
+
+Peers SHALL be considered ties and receive the same rank.
+The function SHALL assign consecutive ranks to peer groups and
+the result is that groups of size greater than one do not produce noncontiguous rank numbers.
+
+If the function is used without `ORDER BY` to sort partition rows into the desired order
+then all rows SHALL be peers.
+
+```
+dense_rank() OVER ...
+```
+
+##### The `row_number()` Function
+
+###### RQ.SRS-019.ClickHouse.WindowFunctions.RowNumber
+version: 1.0
+
+[ClickHouse] SHALL support `row_number` function over a window that SHALL
+return the number of the current row within its partition.
+
+Row numbers SHALL range from 1 to the number of partition rows.
+
+The `ORDER BY` affects the order in which rows are numbered.
+Without `ORDER BY`, row numbering MAY be nondeterministic.
+
+```
+row_number() OVER ...
+```
+
+#### Aggregate Functions
+
+##### RQ.SRS-019.ClickHouse.WindowFunctions.AggregateFunctions
+version: 1.0
+
+[ClickHouse] SHALL support using aggregate functions over windows.
+
+* [count](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/count/)
+* [min](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/min/)
+* [max](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/max/)
+* [sum](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/sum/)
+* [avg](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/avg/)
+* [any](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/any/)
+* [stddevPop](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/stddevpop/)
+* [stddevSamp](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/stddevsamp/)
+* [varPop(x)](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/varpop/)
+* [varSamp](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/varsamp/)
+* [covarPop](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/covarpop/)
+* [covarSamp](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/covarsamp/)
+* [anyHeavy](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/anyheavy/)
+* [anyLast](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/anylast/)
+* [argMin](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/argmin/)
+* [argMax](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/argmax/)
+* [avgWeighted](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/avgweighted/)
+* [corr](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/corr/)
+* [topK](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/topk/)
+* [topKWeighted](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/topkweighted/)
+* [groupArray](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/grouparray/)
+* [groupUniqArray](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/groupuniqarray/)
+* 
[groupArrayInsertAt](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/grouparrayinsertat/) +* [groupArrayMovingSum](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/grouparraymovingsum/) +* [groupArrayMovingAvg](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/grouparraymovingavg/) +* [groupArraySample](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/grouparraysample/) +* [groupBitAnd](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/groupbitand/) +* [groupBitOr](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/groupbitor/) +* [groupBitXor](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/groupbitxor/) +* [groupBitmap](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/groupbitmap/) +* [groupBitmapAnd](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/groupbitmapand/) +* [groupBitmapOr](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/groupbitmapor/) +* [groupBitmapXor](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/groupbitmapxor/) +* [sumWithOverflow](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/sumwithoverflow/) +* [deltaSum](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/deltasum/) +* [sumMap](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/summap/) +* [minMap](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/minmap/) +* [maxMap](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/maxmap/) +* [initializeAggregation](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/initializeAggregation/) +* [skewPop](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/skewpop/) +* [skewSamp](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/skewsamp/) +* [kurtPop](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/kurtpop/) +* [kurtSamp](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/kurtsamp/) +* [uniq](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/uniq/) +* [uniqExact](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/uniqexact/) +* [uniqCombined](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/uniqcombined/) +* [uniqCombined64](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/uniqcombined64/) +* [uniqHLL12](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/uniqhll12/) +* [quantile](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/quantile/) +* [quantiles](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/quantiles/) +* [quantileExact](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/quantileexact/) +* [quantileExactWeighted](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/quantileexactweighted/) +* [quantileTiming](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/quantiletiming/) +* [quantileTimingWeighted](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/quantiletimingweighted/) +* 
[quantileDeterministic](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/quantiledeterministic/) +* [quantileTDigest](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/quantiletdigest/) +* [quantileTDigestWeighted](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/quantiletdigestweighted/) +* [simpleLinearRegression](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/simplelinearregression/) +* [stochasticLinearRegression](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/stochasticlinearregression/) +* [stochasticLogisticRegression](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/stochasticlogisticregression/) +* [categoricalInformationValue](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/stochasticlogisticregression/) +* [studentTTest](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/studentttest/) +* [welchTTest](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/welchttest/) +* [mannWhitneyUTest](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/mannwhitneyutest/) +* [median](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/median/) +* [rankCorr](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/rankCorr/) + +##### Combinators + +###### RQ.SRS-019.ClickHouse.WindowFunctions.AggregateFunctions.Combinators +version: 1.0 + +[ClickHouse] SHALL support aggregate functions with combinator prefixes over windows. + +* [-If](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/combinators/#agg-functions-combinator-if) +* [-Array](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/combinators/#agg-functions-combinator-array) +* [-SimpleState](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/combinators/#agg-functions-combinator-simplestate) +* [-State](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/combinators/#agg-functions-combinator-state) +* [-Merge](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/combinators/#aggregate_functions_combinators-merge) +* [-MergeState](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/combinators/#aggregate_functions_combinators-mergestate) +* [-ForEach](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/combinators/#agg-functions-combinator-foreach) +* [-Distinct](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/combinators/#agg-functions-combinator-distinct) +* [-OrDefault](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/combinators/#agg-functions-combinator-ordefault) +* [-OrNull](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/combinators/#agg-functions-combinator-ornull) +* [-Resample](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/combinators/#agg-functions-combinator-resample) + +##### Parametric + +###### RQ.SRS-019.ClickHouse.WindowFunctions.AggregateFunctions.Parametric +version: 1.0 + +[ClickHouse] SHALL support parametric aggregate functions over windows. 
+
+* [histogram](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/parametric-functions/#histogram)
+* [sequenceMatch(pattern)(timestamp, cond1, cond2, ...)](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/parametric-functions/#function-sequencematch)
+* [sequenceCount(pattern)(time, cond1, cond2, ...)](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/parametric-functions/#function-sequencecount)
+* [windowFunnel](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/parametric-functions/#windowfunnel)
+* [retention](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/parametric-functions/#retention)
+* [uniqUpTo(N)(x)](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/parametric-functions/#uniquptonx)
+* [sumMapFiltered(keys_to_keep)(keys, values)](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/parametric-functions/#summapfilteredkeys-to-keepkeys-values)
+
+## References
+
+* [ClickHouse]
+* [GitHub Repository]
+* [Revision History]
+* [Git]
+
+[current row peers]: #current-row-peers
+[frame_extent]: #frame-extent
+[frame_between]: #frame-between
+[frame_start]: #frame-start
+[frame_end]: #frame-end
+[window_name]: #window-clause
+[window_spec]: #window-specification
+[partition_clause]: #partition-clause
+[order_clause]: #order-clause
+[frame_clause]: #frame-clause
+[window functions]: https://clickhouse.tech/docs/en/sql-reference/window-functions/
+[ClickHouse]: https://clickhouse.tech
+[GitHub Repository]: https://github.com/ClickHouse/ClickHouse/blob/master/tests/testflows/window_functions/requirements/requirements.md
+[Revision History]: https://github.com/ClickHouse/ClickHouse/commits/master/tests/testflows/window_functions/requirements/requirements.md
+[Git]: https://git-scm.com/
+[GitHub]: https://github.com
''')
diff --git a/tests/testflows/window_functions/tests/__init__.py b/tests/testflows/window_functions/tests/__init__.py
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/tests/testflows/window_functions/tests/aggregate_funcs.py b/tests/testflows/window_functions/tests/aggregate_funcs.py
new file mode 100644
index 00000000000..67a5f2cfb4f
--- /dev/null
+++ b/tests/testflows/window_functions/tests/aggregate_funcs.py
@@ -0,0 +1,331 @@
+from testflows.core import *
+from testflows.asserts import values, error, snapshot
+
+from window_functions.requirements import *
+from window_functions.tests.common import *
+
+@TestOutline(Scenario)
+@Examples("func", [
+    ("count(salary)",),
+    ("min(salary)",),
+    ("max(salary)",),
+    ("sum(salary)",),
+    ("avg(salary)",),
+    ("any(salary)",),
+    ("stddevPop(salary)",),
+    ("stddevSamp(salary)",),
+    ("varPop(salary)",),
+    ("varSamp(salary)",),
+    ("covarPop(salary, 2000)",),
+    ("covarSamp(salary, 2000)",),
+    ("anyHeavy(salary)",),
+    ("anyLast(salary)",),
+    ("argMin(salary, 5000)",),
+    ("argMax(salary, 5000)",),
+    ("avgWeighted(salary, 1)",),
+    ("corr(salary, 0.5)",),
+    ("topK(salary)",),
+    ("topKWeighted(salary, 1)",),
+    ("groupArray(salary)",),
+    ("groupUniqArray(salary)",),
+    ("groupArrayInsertAt(salary, 0)",),
+    ("groupArrayMovingSum(salary)",),
+    ("groupArrayMovingAvg(salary)",),
+    ("groupArraySample(3, 1234)(salary)",),
+    ("groupBitAnd(toUInt8(salary))",),
+    ("groupBitOr(toUInt8(salary))",),
+    ("groupBitXor(toUInt8(salary))",),
+    ("groupBitmap(toUInt8(salary))",),
+    # #("groupBitmapAnd",),
+    # #("groupBitmapOr",),
+    # #("groupBitmapXor",),
+    ("sumWithOverflow(salary)",),
+    ("deltaSum(salary)",),
+    ("sumMap([5000], 
[salary])",),
+    ("minMap([5000], [salary])",),
+    ("maxMap([5000], [salary])",),
+    # #("initializeAggregation",),
+    ("skewPop(salary)",),
+    ("skewSamp(salary)",),
+    ("kurtPop(salary)",),
+    ("kurtSamp(salary)",),
+    ("uniq(salary)",),
+    ("uniqExact(salary)",),
+    ("uniqCombined(salary)",),
+    ("uniqCombined64(salary)",),
+    ("uniqHLL12(salary)",),
+    ("quantile(salary)",),
+    ("quantiles(0.5)(salary)",),
+    ("quantileExact(salary)",),
+    ("quantileExactWeighted(salary, 1)",),
+    ("quantileTiming(salary)",),
+    ("quantileTimingWeighted(salary, 1)",),
+    ("quantileDeterministic(salary, 1234)",),
+    ("quantileTDigest(salary)",),
+    ("quantileTDigestWeighted(salary, 1)",),
+    ("simpleLinearRegression(salary, empno)",),
+    ("stochasticLinearRegression(salary, 1)",),
+    ("stochasticLogisticRegression(salary, 1)",),
+    #("categoricalInformationValue(salary, 0)",),
+    ("studentTTest(salary, 1)",),
+    ("welchTTest(salary, 1)",),
+    ("mannWhitneyUTest(salary, 1)",),
+    ("median(salary)",),
+    ("rankCorr(salary, 0.5)",),
+])
+def aggregate_funcs_over_rows_frame(self, func):
+    """Checking aggregate funcs over rows frame.
+    """
+    execute_query(f"""
+    SELECT {func} OVER (ORDER BY salary, empno ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) AS func
+    FROM empsalary
+    """
+    )
+
+@TestScenario
+def avg_with_nulls(self):
+    """Check `avg` aggregate function using a window that contains NULLs.
+    """
+    expected = convert_output("""
+    i | avg
+    ---+--------------------
+    1 | 1.5
+    2 | 2
+    3 | \\N
+    4 | \\N
+    """)
+
+    execute_query("""
+    SELECT i, avg(v) OVER (ORDER BY i ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) AS avg
+    FROM values('i Int32, v Nullable(Int32)', (1,1),(2,2),(3,NULL),(4,NULL))
+    """,
+    expected=expected
+    )
+
+@TestScenario
+def var_pop(self):
+    """Check `var_pop` aggregate function over a window.
+    """
+    expected = convert_output("""
+    var_pop
+    -----------------------
+    21704
+    13868.75
+    11266.666666666666
+    4225
+    0
+    """)
+
+    execute_query("""
+    SELECT VAR_POP(n) OVER (ORDER BY i ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) AS var_pop
+    FROM values('i Int8, n Int32', (1,600),(2,470),(3,170),(4,430),(5,300))
+    """,
+    expected=expected
+    )
+
+@TestScenario
+def var_samp(self):
+    """Check `var_samp` aggregate function over a window.
+    """
+    expected = convert_output("""
+    var_samp
+    -----------------------
+    27130
+    18491.666666666668
+    16900
+    8450
+    nan
+    """)
+
+    execute_query("""
+    SELECT VAR_SAMP(n) OVER (ORDER BY i ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) AS var_samp
+    FROM VALUES('i Int8, n Int16',(1,600),(2,470),(3,170),(4,430),(5,300))
+    """,
+    expected=expected
+    )
+
+@TestScenario
+def stddevpop(self):
+    """Check `stddevPop` aggregate function over a window.
+    """
+    expected = convert_output("""
+    stddev_pop
+    ---------------------
+    147.32277488562318
+    147.32277488562318
+    117.76565713313877
+    106.14455552060438
+    65
+    0
+    """)
+
+    execute_query("""
+    SELECT stddevPop(n) OVER (ORDER BY i ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) AS stddev_pop
+    FROM VALUES('i Int8, n Nullable(Int16)',(1,NULL),(2,600),(3,470),(4,170),(5,430),(6,300))
+    """,
+    expected=expected
+    )
+
+@TestScenario
+def stddevsamp(self):
+    """Check `stddevSamp` aggregate function over a window.
+ """ + expected = convert_output(""" + stddev_samp + --------------------- + 164.7118696390761 + 164.7118696390761 + 135.9840676942217 + 130 + 91.92388155425118 + nan + """) + + execute_query(""" + SELECT stddevSamp(n) OVER (ORDER BY i ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) AS stddev_samp + FROM VALUES('i Int8, n Nullable(Int16)',(1,NULL),(2,600),(3,470),(4,170),(5,430),(6,300)) + """, + expected=expected + ) + +@TestScenario +def aggregate_function_recovers_from_nan(self): + """Check that aggregate function can recover from `nan` value inside a window. + """ + expected = convert_output(""" + a | b | sum + ---+-----+----- + 1 | 1 | 1 + 2 | 2 | 3 + 3 | nan | nan + 4 | 3 | nan + 5 | 4 | 7 + """) + + execute_query(""" + SELECT a, b, + SUM(b) OVER(ORDER BY a ROWS BETWEEN 1 PRECEDING AND CURRENT ROW) AS sum + FROM VALUES('a Int8, b Float64',(1,1),(2,2),(3,nan),(4,3),(5,4)) + """, + expected=expected + ) + +@TestScenario +def bit_functions(self): + """Check trying to use bitwise functions over a window. + """ + expected = convert_output(""" + i | b | bool_and | bool_or + ---+---+----------+--------- + 1 | 1 | 1 | 1 + 2 | 1 | 0 | 1 + 3 | 0 | 0 | 0 + 4 | 0 | 0 | 1 + 5 | 1 | 1 | 1 + """) + + execute_query(""" + SELECT i, b, groupBitAnd(b) OVER w AS bool_and, groupBitOr(b) OVER w AS bool_or + FROM VALUES('i Int8, b UInt8', (1,1), (2,1), (3,0), (4,0), (5,1)) + WINDOW w AS (ORDER BY i ROWS BETWEEN CURRENT ROW AND 1 FOLLOWING) + """, + expected=expected + ) + +@TestScenario +def sum(self): + """Check calculation of sum over a window. + """ + expected = convert_output(""" + sum_1 | ten | four + -------+-----+------ + 0 | 0 | 0 + 0 | 0 | 0 + 2 | 0 | 2 + 3 | 1 | 3 + 4 | 1 | 1 + 5 | 1 | 1 + 3 | 3 | 3 + 0 | 4 | 0 + 1 | 7 | 1 + 1 | 9 | 1 + """) + + execute_query( + "SELECT sum(four) OVER (PARTITION BY ten ORDER BY unique2) AS sum_1, ten, four FROM tenk1 WHERE unique2 < 10", + expected=expected + ) + +@TestScenario +def nested_aggregates(self): + """Check using nested aggregates over a window. + """ + expected = convert_output(""" + ten | two | gsum | wsum + -----+-----+-------+-------- + 0 | 0 | 45000 | 45000 + 2 | 0 | 47000 | 92000 + 4 | 0 | 49000 | 141000 + 6 | 0 | 51000 | 192000 + 8 | 0 | 53000 | 245000 + 1 | 1 | 46000 | 46000 + 3 | 1 | 48000 | 94000 + 5 | 1 | 50000 | 144000 + 7 | 1 | 52000 | 196000 + 9 | 1 | 54000 | 250000 + """) + + execute_query( + "SELECT ten, two, sum(hundred) AS gsum, sum(sum(hundred)) OVER (PARTITION BY two ORDER BY ten) AS wsum FROM tenk1 GROUP BY ten, two", + expected=expected + ) + +@TestScenario +def aggregate_and_window_function_in_the_same_window(self): + """Check using aggregate and window function in the same window. + """ + expected = convert_output(""" + sum | rank + -------+------ + 6000 | 1 + 16400 | 2 + 16400 | 2 + 20900 | 4 + 25100 | 5 + 3900 | 1 + 7400 | 2 + 5000 | 1 + 14600 | 2 + 14600 | 2 + """) + + execute_query( + "SELECT sum(salary) OVER w AS sum, rank() OVER w AS rank FROM empsalary WINDOW w AS (PARTITION BY depname ORDER BY salary DESC)", + expected=expected + ) + +@TestScenario +def ungrouped_aggregate_over_empty_row_set(self): + """Check using window function with ungrouped aggregate over an empty row set. 
+ """ + expected = convert_output(""" + sum + ----- + 0 + """) + + execute_query( + "SELECT SUM(COUNT(number)) OVER () AS sum FROM numbers(10) WHERE number=42", + expected=expected + ) + +@TestFeature +@Name("aggregate funcs") +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_AggregateFunctions("1.0") +) +def feature(self): + """Check using aggregate functions over windows. + """ + for scenario in loads(current_module(), Scenario): + Scenario(run=scenario, flags=TE) diff --git a/tests/testflows/window_functions/tests/common.py b/tests/testflows/window_functions/tests/common.py new file mode 100644 index 00000000000..075f76cf68b --- /dev/null +++ b/tests/testflows/window_functions/tests/common.py @@ -0,0 +1,404 @@ +import os +import re +import uuid +import tempfile + +from testflows.core import * +from testflows.core.name import basename, parentname +from testflows._core.testtype import TestSubType +from testflows.asserts import values, error, snapshot + +def window_frame_error(): + return (36, "Exception: Window frame") + +def frame_start_error(): + return (36, "Exception: Frame start") + +def frame_end_error(): + return (36, "Exception: Frame end") + +def frame_offset_nonnegative_error(): + return syntax_error() + +def frame_end_unbounded_preceding_error(): + return (36, "Exception: Frame end cannot be UNBOUNDED PRECEDING") + +def frame_range_offset_error(): + return (48, "Exception: The RANGE OFFSET frame") + +def frame_requires_order_by_error(): + return (36, "Exception: The RANGE OFFSET window frame requires exactly one ORDER BY column, 0 given") + +def syntax_error(): + return (62, "Exception: Syntax error") + +def groups_frame_error(): + return (48, "Exception: Window frame 'GROUPS' is not implemented") + +def getuid(): + if current().subtype == TestSubType.Example: + testname = f"{basename(parentname(current().name)).replace(' ', '_').replace(',','')}" + else: + testname = f"{basename(current().name).replace(' ', '_').replace(',','')}" + return testname + "_" + str(uuid.uuid1()).replace('-', '_') + +def convert_output(s): + """Convert expected output to TSV format. + """ + return '\n'.join([l.strip() for i, l in enumerate(re.sub('\s+\|\s+', '\t', s).strip().splitlines()) if i != 1]) + +def execute_query(sql, expected=None, exitcode=None, message=None, format="TabSeparatedWithNames"): + """Execute SQL query and compare the output to the snapshot. + """ + name = basename(current().name) + + with When("I execute query", description=sql): + r = current().context.node.query(sql + " FORMAT " + format, exitcode=exitcode, message=message) + + if message is None: + if expected is not None: + with Then("I check output against expected"): + assert r.output.strip() == expected, error() + else: + with Then("I check output against snapshot"): + with values() as that: + assert that(snapshot("\n" + r.output.strip() + "\n", "tests", name=name, encoder=str)), error() + +@TestStep(Given) +def t1_table(self, name="t1", distributed=False): + """Create t1 table. 
+ """ + table = None + data = [ + "(1, 1)", + "(1, 2)", + "(2, 2)" + ] + + if not distributed: + with By("creating table"): + sql = """ + CREATE TABLE {name} ( + f1 Int8, + f2 Int8 + ) ENGINE = MergeTree ORDER BY tuple() + """ + table = create_table(name=name, statement=sql) + + with And("populating table with data"): + sql = f"INSERT INTO {name} VALUES {','.join(data)}" + self.context.node.query(sql) + + else: + with By("creating table"): + sql = """ + CREATE TABLE {name} ON CLUSTER sharded_cluster ( + f1 Int8, + f2 Int8 + ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/{{shard}}/{name}', '{{replica}}') ORDER BY tuple() + """ + create_table(name=name + "_source", statement=sql, on_cluster="sharded_cluster") + + with And("a distributed table"): + sql = "CREATE TABLE {name} AS " + name + '_source' + " ENGINE = Distributed(sharded_cluster, default, " + f"{name + '_source'}, rand())" + table = create_table(name=name, statement=sql) + + with And("populating table with data"): + for row in data: + sql = f"INSERT INTO {name} VALUES {row}" + self.context.node.query(sql) + + return table + +@TestStep(Given) +def datetimes_table(self, name="datetimes", distributed=False): + """Create datetimes table. + """ + table = None + data = [ + "(1, '2000-10-19 10:23:54', '2000-10-19 10:23:54')", + "(2, '2001-10-19 10:23:54', '2001-10-19 10:23:54')", + "(3, '2001-10-19 10:23:54', '2001-10-19 10:23:54')", + "(4, '2002-10-19 10:23:54', '2002-10-19 10:23:54')", + "(5, '2003-10-19 10:23:54', '2003-10-19 10:23:54')", + "(6, '2004-10-19 10:23:54', '2004-10-19 10:23:54')", + "(7, '2005-10-19 10:23:54', '2005-10-19 10:23:54')", + "(8, '2006-10-19 10:23:54', '2006-10-19 10:23:54')", + "(9, '2007-10-19 10:23:54', '2007-10-19 10:23:54')", + "(10, '2008-10-19 10:23:54', '2008-10-19 10:23:54')" + ] + + if not distributed: + with By("creating table"): + sql = """ + CREATE TABLE {name} ( + id UInt32, + f_timestamptz DateTime('CET'), + f_timestamp DateTime + ) ENGINE = MergeTree() ORDER BY tuple() + """ + table = create_table(name=name, statement=sql) + + with And("populating table with data"): + sql = f"INSERT INTO {name} VALUES {','.join(data)}" + self.context.node.query(sql) + + else: + with By("creating table"): + sql = """ + CREATE TABLE {name} ON CLUSTER sharded_cluster ( + id UInt32, + f_timestamptz DateTime('CET'), + f_timestamp DateTime + ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/{{shard}}/{name}', '{{replica}}') ORDER BY tuple() + """ + create_table(name=name + "_source", statement=sql, on_cluster="sharded_cluster") + + with And("a distributed table"): + sql = "CREATE TABLE {name} AS " + name + '_source' + " ENGINE = Distributed(sharded_cluster, default, " + f"{name + '_source'}, rand())" + table = create_table(name=name, statement=sql) + + with And("populating table with data"): + for row in data: + sql = f"INSERT INTO {name} VALUES {row}" + self.context.node.query(sql) + + return table + +@TestStep(Given) +def numerics_table(self, name="numerics", distributed=False): + """Create numerics tables. 
+ """ + table = None + + data = [ + "(0, '-infinity', '-infinity', toDecimal64(-1000,15))", + "(1, -3, -3, -3)", + "(2, -1, -1, -1)", + "(3, 0, 0, 0)", + "(4, 1.1, 1.1, 1.1)", + "(5, 1.12, 1.12, 1.12)", + "(6, 2, 2, 2)", + "(7, 100, 100, 100)", + "(8, 'infinity', 'infinity', toDecimal64(1000,15))", + "(9, 'NaN', 'NaN', 0)" + ] + + if not distributed: + with By("creating a table"): + sql = """ + CREATE TABLE {name} ( + id Int32, + f_float4 Float32, + f_float8 Float64, + f_numeric Decimal64(15) + ) ENGINE = MergeTree() ORDER BY tuple(); + """ + create_table(name=name, statement=sql) + + with And("populating table with data"): + sql = f"INSERT INTO {name} VALUES {','.join(data)}" + self.context.node.query(sql) + + else: + with By("creating a table"): + sql = """ + CREATE TABLE {name} ON CLUSTER sharded_cluster ( + id Int32, + f_float4 Float32, + f_float8 Float64, + f_numeric Decimal64(15) + ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/{{shard}}/{name}', '{{replica}}') ORDER BY tuple(); + """ + create_table(name=name + "_source", statement=sql, on_cluster="sharded_cluster") + + with And("a distributed table"): + sql = "CREATE TABLE {name} AS " + name + '_source' + " ENGINE = Distributed(sharded_cluster, default, " + f"{name + '_source'}, rand())" + table = create_table(name=name, statement=sql) + + with And("populating table with data"): + for row in data: + sql = f"INSERT INTO {name} VALUES {row}" + self.context.node.query(sql) + + return table + +@TestStep(Given) +def tenk1_table(self, name="tenk1", distributed=False): + """Create tenk1 table. + """ + table = None + + if not distributed: + with By("creating a table"): + sql = """ + CREATE TABLE {name} ( + unique1 Int32, + unique2 Int32, + two Int32, + four Int32, + ten Int32, + twenty Int32, + hundred Int32, + thousand Int32, + twothousand Int32, + fivethous Int32, + tenthous Int32, + odd Int32, + even Int32, + stringu1 String, + stringu2 String, + string4 String + ) ENGINE = MergeTree() ORDER BY tuple() + """ + table = create_table(name=name, statement=sql) + + with And("populating table with data"): + datafile = os.path.join(current_dir(), "tenk.data") + debug(datafile) + self.context.cluster.command(None, f"cat \"{datafile}\" | {self.context.node.cluster.docker_compose} exec -T {self.context.node.name} clickhouse client -q \"INSERT INTO {name} FORMAT TSV\"", exitcode=0) + else: + with By("creating a table"): + sql = """ + CREATE TABLE {name} ON CLUSTER sharded_cluster ( + unique1 Int32, + unique2 Int32, + two Int32, + four Int32, + ten Int32, + twenty Int32, + hundred Int32, + thousand Int32, + twothousand Int32, + fivethous Int32, + tenthous Int32, + odd Int32, + even Int32, + stringu1 String, + stringu2 String, + string4 String + ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/{{shard}}/{name}', '{{replica}}') ORDER BY tuple() + """ + create_table(name=name + '_source', statement=sql, on_cluster="sharded_cluster") + + with And("a distributed table"): + sql = "CREATE TABLE {name} AS " + name + '_source' + " ENGINE = Distributed(sharded_cluster, default, " + f"{name + '_source'}, rand())" + table = create_table(name=name, statement=sql) + + with And("populating table with data"): + datafile = os.path.join(current_dir(), "tenk.data") + + with open(datafile, "r") as file: + lines = file.readlines() + + chunks = [lines[i:i + 1000] for i in range(0, len(lines), 1000)] + + for chunk in chunks: + with tempfile.NamedTemporaryFile() as file: + file.write(''.join(chunk).encode("utf-8")) + file.flush() + self.context.cluster.command(None, 
+ f"cat \"{file.name}\" | {self.context.node.cluster.docker_compose} exec -T {self.context.node.name} clickhouse client -q \"INSERT INTO {table} FORMAT TSV\"", + exitcode=0) + + return table + +@TestStep(Given) +def empsalary_table(self, name="empsalary", distributed=False): + """Create employee salary reference table. + """ + table = None + + data = [ + "('develop', 10, 5200, '2007-08-01')", + "('sales', 1, 5000, '2006-10-01')", + "('personnel', 5, 3500, '2007-12-10')", + "('sales', 4, 4800, '2007-08-08')", + "('personnel', 2, 3900, '2006-12-23')", + "('develop', 7, 4200, '2008-01-01')", + "('develop', 9, 4500, '2008-01-01')", + "('sales', 3, 4800, '2007-08-01')", + "('develop', 8, 6000, '2006-10-01')", + "('develop', 11, 5200, '2007-08-15')" + ] + + if not distributed: + with By("creating a table"): + sql = """ + CREATE TABLE {name} ( + depname LowCardinality(String), + empno UInt64, + salary Int32, + enroll_date Date + ) + ENGINE = MergeTree() ORDER BY enroll_date + """ + table = create_table(name=name, statement=sql) + + with And("populating table with data"): + sql = f"INSERT INTO {name} VALUES {','.join(data)}" + self.context.node.query(sql) + + else: + with By("creating replicated source tables"): + sql = """ + CREATE TABLE {name} ON CLUSTER sharded_cluster ( + depname LowCardinality(String), + empno UInt64, + salary Int32, + enroll_date Date + ) + ENGINE = ReplicatedMergeTree('/clickhouse/tables/{{shard}}/{name}', '{{replica}}') ORDER BY enroll_date + """ + create_table(name=name + "_source", statement=sql, on_cluster="sharded_cluster") + + with And("a distributed table"): + sql = "CREATE TABLE {name} AS " + name + '_source' + " ENGINE = Distributed(sharded_cluster, default, " + f"{name + '_source'}, rand())" + table = create_table(name=name, statement=sql) + + with And("populating distributed table with data"): + with By("inserting one data row at a time", description="so that data is sharded between nodes"): + for row in data: + self.context.node.query(f"INSERT INTO {table} VALUES {row}", + settings=[("insert_distributed_sync", "1")]) + + with And("dumping all the data in the table"): + self.context.node.query(f"SELECT * FROM {table}") + + return table + +@TestStep(Given) +def create_table(self, name, statement, on_cluster=False): + """Create table. 
+ """ + node = current().context.node + try: + with Given(f"I have a {name} table"): + node.query(statement.format(name=name)) + yield name + finally: + with Finally("I drop the table"): + if on_cluster: + node.query(f"DROP TABLE IF EXISTS {name} ON CLUSTER {on_cluster}") + else: + node.query(f"DROP TABLE IF EXISTS {name}") + +@TestStep(Given) +def allow_experimental_window_functions(self): + """Set allow_experimental_window_functions = 1 + """ + setting = ("allow_experimental_window_functions", 1) + default_query_settings = None + + try: + with By("adding allow_experimental_window_functions to the default query settings"): + default_query_settings = getsattr(current().context, "default_query_settings", []) + default_query_settings.append(setting) + yield + finally: + with Finally("I remove allow_experimental_window_functions from the default query settings"): + if default_query_settings: + try: + default_query_settings.pop(default_query_settings.index(setting)) + except ValueError: + pass diff --git a/tests/testflows/window_functions/tests/errors.py b/tests/testflows/window_functions/tests/errors.py new file mode 100644 index 00000000000..0935c00885d --- /dev/null +++ b/tests/testflows/window_functions/tests/errors.py @@ -0,0 +1,137 @@ +from testflows.core import * + +from window_functions.requirements import * +from window_functions.tests.common import * + +@TestScenario +def error_using_non_window_function(self): + """Check that trying to use non window or aggregate function over a window + returns an error. + """ + exitcode = 63 + message = "DB::Exception: Unknown aggregate function numbers" + + sql = ("SELECT numbers(1, 100) OVER () FROM empsalary") + + with When("I execute query", description=sql): + r = current().context.node.query(sql, exitcode=exitcode, message=message) + +@TestScenario +def error_order_by_another_window_function(self): + """Check that trying to order by another window function returns an error. + """ + exitcode = 184 + message = "DB::Exception: Window function rank() OVER (ORDER BY random() ASC) is found inside window definition in query" + + sql = ("SELECT rank() OVER (ORDER BY rank() OVER (ORDER BY random()))") + + with When("I execute query", description=sql): + r = current().context.node.query(sql, exitcode=exitcode, message=message) + +@TestScenario +def error_window_function_in_where(self): + """Check that trying to use window function in `WHERE` returns an error. + """ + exitcode = 184 + message = "DB::Exception: Window function row_number() OVER (ORDER BY salary ASC) is found in WHERE in query" + + sql = ("SELECT * FROM empsalary WHERE row_number() OVER (ORDER BY salary) < 10") + + with When("I execute query", description=sql): + r = current().context.node.query(sql, exitcode=exitcode, message=message) + +@TestScenario +def error_window_function_in_join(self): + """Check that trying to use window function in `JOIN` returns an error. + """ + exitcode = 48 + message = "DB::Exception: JOIN ON inequalities are not supported. Unexpected 'row_number() OVER (ORDER BY salary ASC) < 10" + + sql = ("SELECT * FROM empsalary INNER JOIN tenk1 ON row_number() OVER (ORDER BY salary) < 10") + + with When("I execute query", description=sql): + r = current().context.node.query(sql, exitcode=exitcode, message=message) + +@TestScenario +def error_window_function_in_group_by(self): + """Check that trying to use window function in `GROUP BY` returns an error. 
+    """
+    exitcode = 47
+    message = "DB::Exception: Unknown identifier: row_number() OVER (ORDER BY salary ASC); there are columns"
+
+    sql = ("SELECT rank() OVER (ORDER BY 1), count(*) FROM empsalary GROUP BY row_number() OVER (ORDER BY salary) < 10")
+
+    with When("I execute query", description=sql):
+        r = current().context.node.query(sql, exitcode=exitcode, message=message)
+
+@TestScenario
+def error_window_function_in_having(self):
+    """Check that trying to use window function in `HAVING` returns an error.
+    """
+    exitcode = 184
+    message = "DB::Exception: Window function row_number() OVER (ORDER BY salary ASC) is found in HAVING in query"
+
+    sql = ("SELECT rank() OVER (ORDER BY 1), count(*) FROM empsalary GROUP BY salary HAVING row_number() OVER (ORDER BY salary) < 10")
+
+    with When("I execute query", description=sql):
+        r = current().context.node.query(sql, exitcode=exitcode, message=message)
+
+@TestScenario
+def error_select_from_window(self):
+    """Check that trying to use window function in `FROM` returns an error.
+    """
+    exitcode = 46
+    message = "DB::Exception: Unknown table function rank"
+
+    sql = ("SELECT * FROM rank() OVER (ORDER BY random())")
+
+    with When("I execute query", description=sql):
+        r = current().context.node.query(sql, exitcode=exitcode, message=message)
+
+@TestScenario
+def error_window_function_in_alter_delete_where(self):
+    """Check that trying to use window function in `ALTER DELETE`'s `WHERE` clause returns an error.
+    """
+    if self.context.distributed:
+        exitcode = 48
+        message = "Exception: Table engine Distributed doesn't support mutations"
+    else:
+        exitcode = 184
+        message = "DB::Exception: Window function rank() OVER (ORDER BY random() ASC) is found in WHERE in query"
+
+    sql = ("ALTER TABLE empsalary DELETE WHERE (rank() OVER (ORDER BY random())) > 10")
+
+    with When("I execute query", description=sql):
+        r = current().context.node.query(sql, exitcode=exitcode, message=message)
+
+@TestScenario
+def error_named_window_defined_twice(self):
+    """Check that trying to define a named window twice returns an error.
+    """
+    exitcode = 36
+    message = "DB::Exception: Window 'w' is defined twice in the WINDOW clause"
+
+    sql = ("SELECT count(*) OVER w FROM tenk1 WINDOW w AS (ORDER BY unique1), w AS (ORDER BY unique1)")
+
+    with When("I execute query", description=sql):
+        r = current().context.node.query(sql, exitcode=exitcode, message=message)
+
+@TestScenario
+def error_coma_between_partition_by_and_order_by_clause(self):
+    """Check that using a comma between the partition by and order by clauses returns an error.
+    """
+    exitcode = 62
+    message = "DB::Exception: Syntax error"
+
+    sql = ("SELECT rank() OVER (PARTITION BY four, ORDER BY ten) FROM tenk1")
+
+    with When("I execute query", description=sql):
+        r = current().context.node.query(sql, exitcode=exitcode, message=message)
+
+@TestFeature
+@Name("errors")
+def feature(self):
+    """Check different error conditions.
+ """ + for scenario in loads(current_module(), Scenario): + Scenario(run=scenario, flags=TE) diff --git a/tests/testflows/window_functions/tests/feature.py b/tests/testflows/window_functions/tests/feature.py new file mode 100755 index 00000000000..124660e8802 --- /dev/null +++ b/tests/testflows/window_functions/tests/feature.py @@ -0,0 +1,47 @@ +from testflows.core import * + +from window_functions.tests.common import * +from window_functions.requirements import * + + +@TestOutline(Feature) +@Name("tests") +@Examples("distributed", [ + (False, Name("non distributed"),Requirements(RQ_SRS_019_ClickHouse_WindowFunctions_NonDistributedTables("1.0"))), + (True, Name("distributed"), Requirements(RQ_SRS_019_ClickHouse_WindowFunctions_DistributedTables("1.0"))) +]) +def feature(self, distributed, node="clickhouse1"): + """Check window functions behavior using non-distributed or + distributed tables. + """ + self.context.distributed = distributed + self.context.node = self.context.cluster.node(node) + + with Given("I allow experimental window functions"): + allow_experimental_window_functions() + + with And("employee salary table"): + empsalary_table(distributed=distributed) + + with And("tenk1 table"): + tenk1_table(distributed=distributed) + + with And("numerics table"): + numerics_table(distributed=distributed) + + with And("datetimes table"): + datetimes_table(distributed=distributed) + + with And("t1 table"): + t1_table(distributed=distributed) + + Feature(run=load("window_functions.tests.window_spec", "feature"), flags=TE) + Feature(run=load("window_functions.tests.partition_clause", "feature"), flags=TE) + Feature(run=load("window_functions.tests.order_clause", "feature"), flags=TE) + Feature(run=load("window_functions.tests.frame_clause", "feature"), flags=TE) + Feature(run=load("window_functions.tests.window_clause", "feature"), flags=TE) + Feature(run=load("window_functions.tests.over_clause", "feature"), flags=TE) + Feature(run=load("window_functions.tests.funcs", "feature"), flags=TE) + Feature(run=load("window_functions.tests.aggregate_funcs", "feature"), flags=TE) + Feature(run=load("window_functions.tests.errors", "feature"), flags=TE) + Feature(run=load("window_functions.tests.misc", "feature"), flags=TE) diff --git a/tests/testflows/window_functions/tests/frame_clause.py b/tests/testflows/window_functions/tests/frame_clause.py new file mode 100644 index 00000000000..ac3ca90abec --- /dev/null +++ b/tests/testflows/window_functions/tests/frame_clause.py @@ -0,0 +1,29 @@ +from testflows.core import * + +from window_functions.requirements import * +from window_functions.tests.common import * + +@TestFeature +@Name("frame clause") +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_FrameClause("1.0"), + RQ_SRS_019_ClickHouse_WindowFunctions_Frame_Extent("1.0"), + RQ_SRS_019_ClickHouse_WindowFunctions_Frame_Start("1.0"), + RQ_SRS_019_ClickHouse_WindowFunctions_Frame_End("1.0"), + RQ_SRS_019_ClickHouse_WindowFunctions_Frame_Between("1.0"), + RQ_SRS_019_ClickHouse_WindowFunctions_CurrentRow("1.0"), + RQ_SRS_019_ClickHouse_WindowFunctions_UnboundedPreceding("1.0"), + RQ_SRS_019_ClickHouse_WindowFunctions_UnboundedFollowing("1.0"), + RQ_SRS_019_ClickHouse_WindowFunctions_ExprPreceding("1.0"), + RQ_SRS_019_ClickHouse_WindowFunctions_ExprFollowing("1.0"), + RQ_SRS_019_ClickHouse_WindowFunctions_ExprPreceding_ExprValue("1.0"), + RQ_SRS_019_ClickHouse_WindowFunctions_ExprFollowing_ExprValue("1.0") +) +def feature(self): + """Check defining frame clause. 
+    """
+    Feature(run=load("window_functions.tests.rows_frame", "feature"), flags=TE)
+    Feature(run=load("window_functions.tests.range_frame", "feature"), flags=TE)
+    Feature(run=load("window_functions.tests.range_overflow", "feature"), flags=TE)
+    Feature(run=load("window_functions.tests.range_datetime", "feature"), flags=TE)
+    Feature(run=load("window_functions.tests.range_errors", "feature"), flags=TE)
\ No newline at end of file
diff --git a/tests/testflows/window_functions/tests/funcs.py b/tests/testflows/window_functions/tests/funcs.py
new file mode 100644
index 00000000000..c4b9f6d73c0
--- /dev/null
+++ b/tests/testflows/window_functions/tests/funcs.py
@@ -0,0 +1,456 @@
+from testflows.core import *
+
+from window_functions.requirements import *
+from window_functions.tests.common import *
+
+@TestScenario
+@Requirements(
+    RQ_SRS_019_ClickHouse_WindowFunctions_FirstValue("1.0")
+)
+def first_value(self):
+    """Check `first_value` function.
+    """
+    expected = convert_output("""
+      first_value | ten | four
+    -------------+-----+------
+      0 | 0 | 0
+      0 | 0 | 0
+      0 | 4 | 0
+      1 | 1 | 1
+      1 | 1 | 1
+      1 | 7 | 1
+      1 | 9 | 1
+      0 | 0 | 2
+      1 | 1 | 3
+      1 | 3 | 3
+    """)
+
+    with Example("using first_value"):
+        execute_query(
+            "SELECT first_value(ten) OVER (PARTITION BY four ORDER BY ten) AS first_value, ten, four FROM tenk1 WHERE unique2 < 10",
+            expected=expected
+        )
+
+    with Example("using any equivalent"):
+        execute_query(
+            "SELECT any(ten) OVER (PARTITION BY four ORDER BY ten) AS first_value, ten, four FROM tenk1 WHERE unique2 < 10",
+            expected=expected
+        )
+
+@TestScenario
+@Requirements(
+    RQ_SRS_019_ClickHouse_WindowFunctions_LastValue("1.0")
+)
+def last_value(self):
+    """Check `last_value` function.
+    """
+    with Example("order by window", description="""
+        Check that last_value returns the last row of the frame, which is the CURRENT ROW, in an ORDER BY window
+        """):
+        expected = convert_output("""
+          last_value | ten | four
+        ------------+-----+------
+          0 | 0 | 0
+          0 | 0 | 0
+          2 | 0 | 2
+          1 | 1 | 1
+          1 | 1 | 1
+          3 | 1 | 3
+          3 | 3 | 3
+          0 | 4 | 0
+          1 | 7 | 1
+          1 | 9 | 1
+        """)
+
+        with Check("using last_value"):
+            execute_query(
+                "SELECT last_value(four) OVER (ORDER BY ten, four) AS last_value, ten, four FROM tenk1 WHERE unique2 < 10",
+                expected=expected
+            )
+
+        with Check("using anyLast() equivalent"):
+            execute_query(
+                "SELECT anyLast(four) OVER (ORDER BY ten, four) AS last_value, ten, four FROM tenk1 WHERE unique2 < 10",
+                expected=expected
+            )
+
+    with Example("partition by window", description="""
+        Check that last_value returns the last row of the frame, which is the whole partition, in a PARTITION BY window
+        """):
+        expected = convert_output("""
+          last_value | ten | four
+        ------------+-----+------
+          4 | 0 | 0
+          4 | 0 | 0
+          4 | 4 | 0
+          9 | 1 | 1
+          9 | 1 | 1
+          9 | 7 | 1
+          9 | 9 | 1
+          0 | 0 | 2
+          3 | 1 | 3
+          3 | 3 | 3
+        """)
+
+        with Check("using last_value"):
+            execute_query(
+                """SELECT last_value(ten) OVER (PARTITION BY four) AS last_value, ten, four FROM
+                (SELECT * FROM tenk1 WHERE unique2 < 10 ORDER BY four, ten)
+                ORDER BY four, ten""",
+                expected=expected
+            )
+
+        with Check("using anyLast() equivalent"):
+            execute_query(
+                """SELECT anyLast(ten) OVER (PARTITION BY four) AS last_value, ten, four FROM
+                (SELECT * FROM tenk1 WHERE unique2 < 10 ORDER BY four, ten)
+                ORDER BY four, ten""",
+                expected=expected
+            )
+
+@TestScenario
+@Requirements(
+    RQ_SRS_019_ClickHouse_WindowFunctions_Lag_Workaround("1.0")
+)
+def lag(self):
+    """Check `lag` function workaround.
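+
+    The workaround emulates lag(x, n) with a one-row frame:
+
+        any(x) OVER (... ORDER BY ... ROWS BETWEEN n PRECEDING AND n PRECEDING)
+
+    anyOrNull() returns NULL when that frame is empty (matching standard lag),
+    while any() falls back to the column type's default value.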
+ """ + with Example("anyOrNull"): + expected = convert_output(""" + lag | ten | four + -----+-----+------ + \\N | 0 | 0 + 0 | 0 | 0 + 0 | 4 | 0 + \\N | 1 | 1 + 1 | 1 | 1 + 1 | 7 | 1 + 7 | 9 | 1 + \\N | 0 | 2 + \\N | 1 | 3 + 1 | 3 | 3 + """) + + execute_query( + "SELECT anyOrNull(ten) OVER (PARTITION BY four ORDER BY ten ROWS BETWEEN 1 PRECEDING AND 1 PRECEDING) AS lag , ten, four FROM tenk1 WHERE unique2 < 10", + expected=expected + ) + + with Example("any"): + expected = convert_output(""" + lag | ten | four + -----+-----+------ + 0 | 0 | 0 + 0 | 0 | 0 + 0 | 4 | 0 + 0 | 1 | 1 + 1 | 1 | 1 + 1 | 7 | 1 + 7 | 9 | 1 + 0 | 0 | 2 + 0 | 1 | 3 + 1 | 3 | 3 + """) + + execute_query( + "SELECT any(ten) OVER (PARTITION BY four ORDER BY ten ROWS BETWEEN 1 PRECEDING AND 1 PRECEDING) AS lag , ten, four FROM tenk1 WHERE unique2 < 10", + expected=expected + ) + + with Example("anyOrNull with column value as offset"): + expected = convert_output(""" + lag | ten | four + -----+-----+------ + 0 | 0 | 0 + 0 | 0 | 0 + 4 | 4 | 0 + \\N | 1 | 1 + 1 | 1 | 1 + 1 | 7 | 1 + 7 | 9 | 1 + \\N | 0 | 2 + \\N | 1 | 3 + \\N | 3 | 3 + """) + + execute_query( + "SELECT any(ten) OVER (PARTITION BY four ORDER BY ten ROWS BETWEEN four PRECEDING AND four PRECEDING) AS lag , ten, four FROM tenk1 WHERE unique2 < 10", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_Lead_Workaround("1.0") +) +def lead(self): + """Check `lead` function workaround. + """ + with Example("anyOrNull"): + expected = convert_output(""" + lead | ten | four + ------+-----+------ + 0 | 0 | 0 + 4 | 0 | 0 + \\N | 4 | 0 + 1 | 1 | 1 + 7 | 1 | 1 + 9 | 7 | 1 + \\N | 9 | 1 + \\N | 0 | 2 + 3 | 1 | 3 + \\N | 3 | 3 + """) + + execute_query( + "SELECT anyOrNull(ten) OVER (PARTITION BY four ORDER BY ten ROWS BETWEEN 1 FOLLOWING AND 1 FOLLOWING) AS lead, ten, four FROM tenk1 WHERE unique2 < 10", + expected=expected + ) + + with Example("any"): + expected = convert_output(""" + lead | ten | four + ------+-----+------ + 0 | 0 | 0 + 4 | 0 | 0 + 0 | 4 | 0 + 1 | 1 | 1 + 7 | 1 | 1 + 9 | 7 | 1 + 0 | 9 | 1 + 0 | 0 | 2 + 3 | 1 | 3 + 0 | 3 | 3 + """) + + execute_query( + "SELECT any(ten) OVER (PARTITION BY four ORDER BY ten ROWS BETWEEN 1 FOLLOWING AND 1 FOLLOWING) AS lead, ten, four FROM tenk1 WHERE unique2 < 10", + expected=expected + ) + + with Example("any with arithmetic expr"): + expected = convert_output(""" + lead | ten | four + ------+-----+------ + 0 | 0 | 0 + 8 | 0 | 0 + 0 | 4 | 0 + 2 | 1 | 1 + 14 | 1 | 1 + 18 | 7 | 1 + 0 | 9 | 1 + 0 | 0 | 2 + 6 | 1 | 3 + 0 | 3 | 3 + """) + + execute_query( + "SELECT any(ten * 2) OVER (PARTITION BY four ORDER BY ten ROWS BETWEEN 1 FOLLOWING AND 1 FOLLOWING) AS lead, ten, four FROM tenk1 WHERE unique2 < 10", + expected=expected + ) + + with Example("subquery as offset"): + expected = convert_output(""" + lead + ------ + 0 + 0 + 4 + 1 + 7 + 9 + \\N + 0 + 3 + \\N + """) + + execute_query( + "SELECT anyNull(ten) OVER (PARTITION BY four ORDER BY ten ROWS BETWEEN (SELECT two FROM tenk1 WHERE unique2 = unique2) FOLLOWING AND (SELECT two FROM tenk1 WHERE unique2 = unique2) FOLLOWING) AS lead " + "FROM tenk1 WHERE unique2 < 10", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RowNumber("1.0") +) +def row_number(self): + """Check `row_number` function. 
+ """ + expected = convert_output(""" + row_number + ------------ + 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 + 10 + """) + + execute_query( + "SELECT row_number() OVER (ORDER BY unique2) AS row_number FROM tenk1 WHERE unique2 < 10", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_Rank("1.0") +) +def rank(self): + """Check `rank` function. + """ + expected = convert_output(""" + rank_1 | ten | four + --------+-----+------ + 1 | 0 | 0 + 1 | 0 | 0 + 3 | 4 | 0 + 1 | 1 | 1 + 1 | 1 | 1 + 3 | 7 | 1 + 4 | 9 | 1 + 1 | 0 | 2 + 1 | 1 | 3 + 2 | 3 | 3 + """) + + execute_query( + "SELECT rank() OVER (PARTITION BY four ORDER BY ten) AS rank_1, ten, four FROM tenk1 WHERE unique2 < 10", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_DenseRank("1.0") +) +def dense_rank(self): + """Check `dense_rank` function. + """ + expected = convert_output(""" + dense_rank | ten | four + ------------+-----+------ + 1 | 0 | 0 + 1 | 0 | 0 + 2 | 4 | 0 + 1 | 1 | 1 + 1 | 1 | 1 + 2 | 7 | 1 + 3 | 9 | 1 + 1 | 0 | 2 + 1 | 1 | 3 + 2 | 3 | 3 + """) + + execute_query( + "SELECT dense_rank() OVER (PARTITION BY four ORDER BY ten) AS dense_rank, ten, four FROM tenk1 WHERE unique2 < 10", + expected=expected + ) + +@TestScenario +def last_value_with_no_frame(self): + """Check last_value function with no frame. + """ + expected = convert_output(""" + four | ten | sum | last_value + ------+-----+-----+------------ + 0 | 0 | 0 | 0 + 0 | 2 | 2 | 2 + 0 | 4 | 6 | 4 + 0 | 6 | 12 | 6 + 0 | 8 | 20 | 8 + 1 | 1 | 1 | 1 + 1 | 3 | 4 | 3 + 1 | 5 | 9 | 5 + 1 | 7 | 16 | 7 + 1 | 9 | 25 | 9 + 2 | 0 | 0 | 0 + 2 | 2 | 2 | 2 + 2 | 4 | 6 | 4 + 2 | 6 | 12 | 6 + 2 | 8 | 20 | 8 + 3 | 1 | 1 | 1 + 3 | 3 | 4 | 3 + 3 | 5 | 9 | 5 + 3 | 7 | 16 | 7 + 3 | 9 | 25 | 9 + """) + + execute_query( + "SELECT four, ten, sum(ten) over (partition by four order by ten) AS sum, " + "last_value(ten) over (partition by four order by ten) AS last_value " + "FROM (select distinct ten, four from tenk1)", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_LastValue("1.0"), + RQ_SRS_019_ClickHouse_WindowFunctions_Lag_Workaround("1.0"), +) +def last_value_with_lag_workaround(self): + """Check last value with lag workaround. + """ + expected = convert_output(""" + last_value | lag | salary + ------------+------+-------- + 4500 | 0 | 3500 + 4800 | 3500 | 3900 + 5200 | 3900 | 4200 + 5200 | 4200 | 4500 + 5200 | 4500 | 4800 + 5200 | 4800 | 4800 + 6000 | 4800 | 5000 + 6000 | 5000 | 5200 + 6000 | 5200 | 5200 + 6000 | 5200 | 6000 + """) + + execute_query( + "select last_value(salary) over(order by salary range between 1000 preceding and 1000 following) AS last_value, " + "any(salary) over(order by salary rows between 1 preceding and 1 preceding) AS lag, " + "salary from empsalary", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_FirstValue("1.0"), + RQ_SRS_019_ClickHouse_WindowFunctions_Lead_Workaround("1.0") +) +def first_value_with_lead_workaround(self): + """Check first value with lead workaround. 
+ """ + expected = convert_output(""" + first_value | lead | salary + -------------+------+-------- + 3500 | 3900 | 3500 + 3500 | 4200 | 3900 + 3500 | 4500 | 4200 + 3500 | 4800 | 4500 + 3900 | 4800 | 4800 + 3900 | 5000 | 4800 + 4200 | 5200 | 5000 + 4200 | 5200 | 5200 + 4200 | 6000 | 5200 + 5000 | 0 | 6000 + """) + + execute_query( + "select first_value(salary) over(order by salary range between 1000 preceding and 1000 following) AS first_value, " + "any(salary) over(order by salary rows between 1 following and 1 following) AS lead," + "salary from empsalary", + expected=expected + ) + +@TestFeature +@Name("funcs") +def feature(self): + """Check true window functions. + """ + for scenario in loads(current_module(), Scenario): + Scenario(run=scenario, flags=TE) diff --git a/tests/testflows/window_functions/tests/misc.py b/tests/testflows/window_functions/tests/misc.py new file mode 100644 index 00000000000..8251e751ed9 --- /dev/null +++ b/tests/testflows/window_functions/tests/misc.py @@ -0,0 +1,396 @@ +from testflows.core import * + +from window_functions.requirements import * +from window_functions.tests.common import * + +@TestScenario +def subquery_expr_preceding(self): + """Check using subquery expr in preceding. + """ + expected = convert_output(""" + sum | unique1 + -----+--------- + 0 | 0 + 1 | 1 + 3 | 2 + 5 | 3 + 7 | 4 + 9 | 5 + 11 | 6 + 13 | 7 + 15 | 8 + 17 | 9 + """) + + execute_query( + "SELECT sum(unique1) over " + "(order by unique1 rows (SELECT unique1 FROM tenk1 ORDER BY unique1 LIMIT 1) + 1 PRECEDING) AS sum, " + "unique1 " + "FROM tenk1 WHERE unique1 < 10", + expected=expected + ) + +@TestScenario +def window_functions_in_select_expression(self): + """Check using multiple window functions in an expression. + """ + expected = convert_output(""" + cntsum + -------- + 22 + 22 + 87 + 24 + 24 + 82 + 92 + 51 + 92 + 136 + """) + + execute_query( + "SELECT (count(*) OVER (PARTITION BY four ORDER BY ten) + " + "sum(hundred) OVER (PARTITION BY four ORDER BY ten)) AS cntsum " + "FROM tenk1 WHERE unique2 < 10", + expected=expected + ) + +@TestScenario +def window_functions_in_subquery(self): + """Check using window functions in a subquery. + """ + expected = convert_output(""" + total | fourcount | twosum + -------+-----------+-------- + """) + + execute_query( + "SELECT * FROM (" + " SELECT count(*) OVER (PARTITION BY four ORDER BY ten) + " + " sum(hundred) OVER (PARTITION BY two ORDER BY ten) AS total, " + " count(*) OVER (PARTITION BY four ORDER BY ten) AS fourcount, " + " sum(hundred) OVER (PARTITION BY two ORDER BY ten) AS twosum " + " FROM tenk1 " + ") WHERE total <> fourcount + twosum", + expected=expected + ) + +@TestScenario +def group_by_and_one_window(self): + """Check running window function with group by and one window. 
+ """ + expected = convert_output(""" + four | ten | sum | avg + ------+-----+------+------------------------ + 0 | 0 | 0 | 0 + 0 | 2 | 0 | 2 + 0 | 4 | 0 | 4 + 0 | 6 | 0 | 6 + 0 | 8 | 0 | 8 + 1 | 1 | 2500 | 1 + 1 | 3 | 2500 | 3 + 1 | 5 | 2500 | 5 + 1 | 7 | 2500 | 7 + 1 | 9 | 2500 | 9 + 2 | 0 | 5000 | 0 + 2 | 2 | 5000 | 2 + 2 | 4 | 5000 | 4 + 2 | 6 | 5000 | 6 + 2 | 8 | 5000 | 8 + 3 | 1 | 7500 | 1 + 3 | 3 | 7500 | 3 + 3 | 5 | 7500 | 5 + 3 | 7 | 7500 | 7 + 3 | 9 | 7500 | 9 + """) + + execute_query( + "SELECT four, ten, SUM(SUM(four)) OVER (PARTITION BY four) AS sum, AVG(ten) AS avg FROM tenk1 GROUP BY four, ten ORDER BY four, ten", + expected=expected, + ) + +@TestScenario +def group_by_and_multiple_windows(self): + """Check running window function with group by and multiple windows. + """ + expected = convert_output(""" + sum1 | row_number | sum2 + -------+------------+------- + 25100 | 1 | 47100 + 7400 | 2 | 22000 + 14600 | 3 | 14600 + """) + + execute_query( + "SELECT sum(salary) AS sum1, row_number() OVER (ORDER BY depname) AS row_number, " + "sum(sum(salary)) OVER (ORDER BY depname DESC) AS sum2 " + "FROM empsalary GROUP BY depname", + expected=expected, + ) + +@TestScenario +def query_with_order_by_and_one_window(self): + """Check using a window function in the query that has `ORDER BY` clause. + """ + expected = convert_output(""" + depname | empno | salary | rank + ----------+----------+--------+--------- + sales | 3 | 4800 | 1 + personnel | 5 | 3500 | 1 + develop | 7 | 4200 | 1 + personnel | 2 | 3900 | 2 + sales | 4 | 4800 | 2 + develop | 9 | 4500 | 2 + sales | 1 | 5000 | 3 + develop | 10 | 5200 | 3 + develop | 11 | 5200 | 4 + develop | 8 | 6000 | 5 + """) + execute_query( + "SELECT depname, empno, salary, rank() OVER w AS rank FROM empsalary WINDOW w AS (PARTITION BY depname ORDER BY salary, empno) ORDER BY rank() OVER w, empno", + expected=expected + ) + +@TestScenario +def with_union_all(self): + """Check using window over rows obtained with `UNION ALL`. + """ + expected = convert_output(""" + count + ------- + """) + + execute_query( + "SELECT count(*) OVER (PARTITION BY four) AS count FROM (SELECT * FROM tenk1 UNION ALL SELECT * FROM tenk1) LIMIT 0", + expected=expected + ) + +@TestScenario +def empty_table(self): + """Check using an empty table with a window function. + """ + expected = convert_output(""" + count + ------- + """) + + execute_query( + "SELECT count(*) OVER (PARTITION BY four) AS count FROM (SELECT * FROM tenk1 WHERE 0)", + expected=expected + ) + +@TestScenario +def from_subquery(self): + """Check using a window function over data from subquery. + """ + expected = convert_output(""" + count | four + -------+------ + 4 | 1 + 4 | 1 + 4 | 1 + 4 | 1 + 2 | 3 + 2 | 3 + """) + + execute_query( + "SELECT count(*) OVER (PARTITION BY four) AS count, four FROM (SELECT * FROM tenk1 WHERE two = 1) WHERE unique2 < 10", + expected=expected + ) + +@TestScenario +def groups_frame(self): + """Check using `GROUPS` frame. 
+    """
+    exitcode, message = groups_frame_error()
+
+    expected = convert_output("""
+      sum | unique1 | four
+    -----+---------+------
+      12 | 0 | 0
+      12 | 8 | 0
+      12 | 4 | 0
+      27 | 5 | 1
+      27 | 9 | 1
+      27 | 1 | 1
+      35 | 6 | 2
+      35 | 2 | 2
+      45 | 3 | 3
+      45 | 7 | 3
+    """)
+
+    execute_query("""
+        SELECT sum(unique1) over (order by four groups between unbounded preceding and current row),
+        unique1, four
+        FROM tenk1 WHERE unique1 < 10
+        """,
+        exitcode=exitcode, message=message
+    )
+
+@TestScenario
+def count_with_empty_over_clause_without_start(self):
+    """Check that we can use `count()` window function without passing
+    the `*` argument when using an empty over clause.
+    """
+    exitcode = 0
+    message = "1"
+
+    sql = ("SELECT count() OVER () FROM tenk1 LIMIT 1")
+
+    with When("I execute query", description=sql):
+        r = current().context.node.query(sql, exitcode=exitcode, message=message)
+
+
+@TestScenario
+def subquery_multiple_window_functions(self):
+    """Check using multiple window functions in a subquery.
+    """
+    expected = convert_output("""
+      depname | depsalary | depminsalary
+    --------+-------------+--------------
+      sales | 5000 | 5000
+      sales | 9800 | 4800
+      sales | 14600 | 4800
+    """)
+
+    execute_query("""
+        SELECT * FROM
+          (SELECT depname,
+            sum(salary) OVER (PARTITION BY depname order by empno) AS depsalary,
+            min(salary) OVER (PARTITION BY depname, empno order by enroll_date) AS depminsalary
+           FROM empsalary)
+        WHERE depname = 'sales'
+        """,
+        expected=expected
+    )
+
+@TestScenario
+def windows_with_same_partitioning_but_different_ordering(self):
+    """Check using two windows that use the same partitioning
+    but different ordering.
+    """
+    expected = convert_output("""
+      first | last
+    ------+-----
+      7 | 7
+      7 | 9
+      7 | 10
+      7 | 11
+      7 | 8
+      5 | 5
+      5 | 2
+      3 | 3
+      3 | 4
+      3 | 1
+    """)
+
+    execute_query("""
+        SELECT
+          any(empno) OVER (PARTITION BY depname ORDER BY salary, enroll_date) AS first,
+          anyLast(empno) OVER (PARTITION BY depname ORDER BY salary,enroll_date,empno) AS last
+        FROM empsalary
+        """,
+        expected=expected
+    )
+
+@TestScenario
+def subquery_with_multiple_windows_filtering(self):
+    """Check filtering rows from a subquery that uses multiple window functions.
+    """
+    expected = convert_output("""
+      depname | empno | salary | enroll_date | first_emp | last_emp
+    ----------+-------+----------+--------------+-------------+----------
+      develop | 8 | 6000 | 2006-10-01 | 1 | 5
+      develop | 7 | 4200 | 2008-01-01 | 4 | 1
+      personnel | 2 | 3900 | 2006-12-23 | 1 | 2
+      personnel | 5 | 3500 | 2007-12-10 | 2 | 1
+      sales | 1 | 5000 | 2006-10-01 | 1 | 3
+      sales | 4 | 4800 | 2007-08-08 | 3 | 1
+    """)
+
+    execute_query("""
+        SELECT * FROM
+          (SELECT depname,
+            empno,
+            salary,
+            enroll_date,
+            row_number() OVER (PARTITION BY depname ORDER BY enroll_date, empno) AS first_emp,
+            row_number() OVER (PARTITION BY depname ORDER BY enroll_date DESC, empno) AS last_emp
+           FROM empsalary) emp
+        WHERE first_emp = 1 OR last_emp = 1
+        """,
+        expected=expected
+    )
+
+@TestScenario
+def exclude_clause(self):
+    """Check if exclude clause is supported.
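+
+    The standard frame exclusion, e.g.:
+
+        ROWS BETWEEN 2 PRECEDING AND 2 FOLLOWING EXCLUDE NO OTHERS
+
+    is expected to be rejected by ClickHouse as a syntax error (exitcode 62),
+    which is what syntax_error() encodes; the expected output defined below
+    only documents what the query would return if the clause were supported.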
+ """ + exitcode, message = syntax_error() + + expected = convert_output(""" + sum | unique1 | four + -----+---------+------ + 7 | 4 | 0 + 13 | 2 | 2 + 22 | 1 | 1 + 26 | 6 | 2 + 29 | 9 | 1 + 31 | 8 | 0 + 32 | 5 | 1 + 23 | 3 | 3 + 15 | 7 | 3 + 10 | 0 | 0 + """) + + execute_query( + "SELECT sum(unique1) over (rows between 2 preceding and 2 following exclude no others) AS sum," + "unique1, four " + "FROM tenk1 WHERE unique1 < 10", + exitcode=exitcode, message=message + ) + +@TestScenario +def in_view(self): + """Check using a window function in a view. + """ + with Given("I create a view"): + sql = """ + CREATE VIEW v_window AS + SELECT number, sum(number) over (order by number rows between 1 preceding and 1 following) as sum_rows + FROM numbers(1, 10) + """ + create_table(name="v_window", statement=sql) + + expected = convert_output(""" + number | sum_rows + ---------+---------- + 1 | 3 + 2 | 6 + 3 | 9 + 4 | 12 + 5 | 15 + 6 | 18 + 7 | 21 + 8 | 24 + 9 | 27 + 10 | 19 + """) + + execute_query( + "SELECT * FROM v_window", + expected=expected + ) + +@TestFeature +@Name("misc") +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_FrameClause("1.0") +) +def feature(self): + """Check misc cases for frame clause. + """ + for scenario in loads(current_module(), Scenario): + Scenario(run=scenario, flags=TE) diff --git a/tests/testflows/window_functions/tests/order_clause.py b/tests/testflows/window_functions/tests/order_clause.py new file mode 100644 index 00000000000..2dafe5dafc9 --- /dev/null +++ b/tests/testflows/window_functions/tests/order_clause.py @@ -0,0 +1,218 @@ +from testflows.core import * +from window_functions.requirements import * +from window_functions.tests.common import * + +@TestScenario +def single_expr_asc(self): + """Check defining of order clause with single expr ASC. + """ + expected = convert_output(""" + x | s | sum + ----+---+----- + 1 | a | 2 + 1 | b | 2 + 2 | b | 4 + """) + + execute_query( + "SELECT x,s, sum(x) OVER (ORDER BY x ASC) AS sum FROM values('x Int8, s String', (1,'a'),(1,'b'),(2,'b'))", + expected=expected + ) + +@TestScenario +def single_expr_desc(self): + """Check defining of order clause with single expr DESC. + """ + expected = convert_output(""" + x | s | sum + ----+---+----- + 2 | b | 2 + 1 | a | 4 + 1 | b | 4 + """) + + execute_query( + "SELECT x,s, sum(x) OVER (ORDER BY x DESC) AS sum FROM values('x Int8, s String', (1,'a'),(1,'b'),(2,'b'))", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_OrderClause_MultipleExprs("1.0") +) +def multiple_expr_desc_desc(self): + """Check defining of order clause with multiple exprs. + """ + expected = convert_output(""" + x | s | sum + --+---+---- + 2 | b | 2 + 1 | b | 3 + 1 | a | 4 + """) + + execute_query( + "SELECT x,s, sum(x) OVER (ORDER BY x DESC, s DESC) AS sum FROM values('x Int8, s String', (1,'a'),(1,'b'),(2,'b'))", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_OrderClause_MultipleExprs("1.0") +) +def multiple_expr_asc_asc(self): + """Check defining of order clause with multiple exprs. 
+ """ + expected = convert_output(""" + x | s | sum + ----+---+------ + 1 | a | 1 + 1 | b | 2 + 2 | b | 4 + """) + + execute_query( + "SELECT x,s, sum(x) OVER (ORDER BY x ASC, s ASC) AS sum FROM values('x Int8, s String', (1,'a'),(1,'b'),(2,'b'))", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_OrderClause_MultipleExprs("1.0") +) +def multiple_expr_asc_desc(self): + """Check defining of order clause with multiple exprs. + """ + expected = convert_output(""" + x | s | sum + ----+---+------ + 1 | b | 1 + 1 | a | 2 + 2 | b | 4 + """) + + execute_query( + "SELECT x,s, sum(x) OVER (ORDER BY x ASC, s DESC) AS sum FROM values('x Int8, s String', (1,'a'),(1,'b'),(2,'b'))", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_OrderClause_MissingExpr_Error("1.0") +) +def missing_expr_error(self): + """Check that defining of order clause with missing expr returns an error. + """ + exitcode = 62 + message = "Exception: Syntax error: failed at position" + + self.context.node.query("SELECT sum(number) OVER (ORDER BY) FROM numbers(1,3)", exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_OrderClause_InvalidExpr_Error("1.0") +) +def invalid_expr_error(self): + """Check that defining of order clause with invalid expr returns an error. + """ + exitcode = 47 + message = "Exception: Missing columns: 'foo'" + + self.context.node.query("SELECT sum(number) OVER (ORDER BY foo) FROM numbers(1,3)", exitcode=exitcode, message=message) + +@TestScenario +def by_column(self): + """Check order by using a single column. + """ + expected = convert_output(""" + depname | empno | salary | rank + -----------+-------+--------+------ + develop | 7 | 4200 | 1 + develop | 8 | 6000 | 1 + develop | 9 | 4500 | 1 + develop | 10 | 5200 | 1 + develop | 11 | 5200 | 1 + personnel | 2 | 3900 | 1 + personnel | 5 | 3500 | 1 + sales | 1 | 5000 | 1 + sales | 3 | 4800 | 1 + sales | 4 | 4800 | 1 + """) + + execute_query( + "SELECT depname, empno, salary, rank() OVER (PARTITION BY depname, empno ORDER BY salary) AS rank FROM empsalary", + expected=expected, + ) + +@TestScenario +def by_expr(self): + """Check order by with expression. + """ + expected = convert_output(""" + avg + ------------------------ + 0 + 0 + 0 + 1 + 1 + 1 + 1 + 2 + 3 + 3 + """) + + execute_query( + "SELECT avg(four) OVER (PARTITION BY four ORDER BY thousand / 100) AS avg FROM tenk1 WHERE unique2 < 10", + expected=expected, + ) + +@TestScenario +def by_expr_with_aggregates(self): + expected = convert_output(""" + ten | res | rank + -----+----------+------ + 0 | 9976146 | 4 + 1 | 10114187 | 9 + 2 | 10059554 | 8 + 3 | 9878541 | 1 + 4 | 9881005 | 2 + 5 | 9981670 | 5 + 6 | 9947099 | 3 + 7 | 10120309 | 10 + 8 | 9991305 | 6 + 9 | 10040184 | 7 + """) + + execute_query( + "select ten, sum(unique1) + sum(unique2) as res, rank() over (order by sum(unique1) + sum(unique2)) as rank " + "from tenk1 group by ten order by ten", + expected=expected, + ) + +@TestScenario +def by_a_non_integer_constant(self): + """Check if it is allowed to use a window with ordering by a non integer constant. + """ + expected = convert_output(""" + rank + ------ + 1 + """) + + execute_query( + "SELECT rank() OVER (ORDER BY length('abc')) AS rank", + expected=expected + ) + +@TestFeature +@Name("order clause") +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_OrderClause("1.0") +) +def feature(self): + """Check defining order clause. 
+ """ + for scenario in loads(current_module(), Scenario): + Scenario(run=scenario, flags=TE) diff --git a/tests/testflows/window_functions/tests/over_clause.py b/tests/testflows/window_functions/tests/over_clause.py new file mode 100644 index 00000000000..d02ddcee656 --- /dev/null +++ b/tests/testflows/window_functions/tests/over_clause.py @@ -0,0 +1,137 @@ +from testflows.core import * + +from window_functions.requirements import * +from window_functions.tests.common import * + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_OverClause_EmptyOverClause("1.0") +) +def empty(self): + """Check using empty over clause. + """ + expected = convert_output(""" + count + ------- + 10 + 10 + 10 + 10 + 10 + 10 + 10 + 10 + 10 + 10 + """) + + execute_query( + "SELECT COUNT(*) OVER () AS count FROM tenk1 WHERE unique2 < 10", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_OverClause_EmptyOverClause("1.0"), + RQ_SRS_019_ClickHouse_WindowFunctions_OverClause_NamedWindow("1.0") +) +def empty_named_window(self): + """Check using over clause with empty window. + """ + expected = convert_output(""" + count + ------- + 10 + 10 + 10 + 10 + 10 + 10 + 10 + 10 + 10 + 10 + """) + + execute_query( + "SELECT COUNT(*) OVER w AS count FROM tenk1 WHERE unique2 < 10 WINDOW w AS ()", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_OverClause_AdHocWindow("1.0"), +) +def adhoc_window(self): + """Check running aggregating `sum` function over an adhoc window. + """ + expected = convert_output(""" + depname | empno | salary | sum + -----------+-------+--------+------- + develop | 7 | 4200 | 25100 + develop | 9 | 4500 | 25100 + develop | 10 | 5200 | 25100 + develop | 11 | 5200 | 25100 + develop | 8 | 6000 | 25100 + personnel | 5 | 3500 | 7400 + personnel | 2 | 3900 | 7400 + sales | 3 | 4800 | 14600 + sales | 4 | 4800 | 14600 + sales | 1 | 5000 | 14600 + """) + + execute_query( + "SELECT depname, empno, salary, sum(salary) OVER (PARTITION BY depname) AS sum FROM empsalary ORDER BY depname, salary, empno", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_OverClause_AdHocWindow_MissingWindowSpec_Error("1.0") +) +def missing_window_spec(self): + """Check missing window spec in over clause. + """ + exitcode = 62 + message = "Exception: Syntax error" + + self.context.node.query("SELECT number,sum(number) OVER FROM values('number Int8', (1),(1),(2),(3))", + exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_OverClause_NamedWindow_InvalidName_Error("1.0") +) +def invalid_window_name(self): + """Check invalid window name. + """ + exitcode = 47 + message = "Exception: Window 'w3' is not defined" + + self.context.node.query("SELECT number,sum(number) OVER w3 FROM values('number Int8', (1),(1),(2),(3)) WINDOW w1 AS ()", + exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_OverClause_NamedWindow_MultipleWindows_Error("1.0") +) +def invalid_multiple_windows(self): + """Check invalid multiple window names. 
+    """
+    exitcode = 47
+    message = "Exception: Missing columns"
+
+    self.context.node.query("SELECT number,sum(number) OVER w1, w2 FROM values('number Int8', (1),(1),(2),(3)) WINDOW w1 AS (), w2 AS (PARTITION BY number)",
+        exitcode=exitcode, message=message)
+
+
+@TestFeature
+@Name("over clause")
+@Requirements(
+    RQ_SRS_019_ClickHouse_WindowFunctions_OverClause("1.0")
+)
+def feature(self):
+    """Check defining over clause.
+    """
+    for scenario in loads(current_module(), Scenario):
+        Scenario(run=scenario, flags=TE)
diff --git a/tests/testflows/window_functions/tests/partition_clause.py b/tests/testflows/window_functions/tests/partition_clause.py
new file mode 100644
index 00000000000..3e9ebefe2ba
--- /dev/null
+++ b/tests/testflows/window_functions/tests/partition_clause.py
@@ -0,0 +1,77 @@
+from testflows.core import *
+
+from window_functions.requirements import *
+from window_functions.tests.common import *
+
+@TestScenario
+def single_expr(self):
+    """Check defining of partition clause with single expr.
+    """
+    expected = convert_output("""
+      x | s | sum
+    ----+---+------
+      1 | a | 2
+      1 | b | 2
+      2 | b | 2
+    """)
+
+    execute_query(
+        "SELECT x,s, sum(x) OVER (PARTITION BY x) AS sum FROM values('x Int8, s String', (1,'a'),(1,'b'),(2,'b'))",
+        expected=expected
+    )
+
+@TestScenario
+@Requirements(
+    RQ_SRS_019_ClickHouse_WindowFunctions_PartitionClause_MultipleExpr("1.0")
+)
+def multiple_expr(self):
+    """Check defining of partition clause with multiple exprs.
+    """
+    expected = convert_output("""
+      x | s | sum
+    --+---+----
+      1 | a | 1
+      1 | b | 1
+      2 | b | 2
+    """)
+
+    execute_query(
+        "SELECT x,s, sum(x) OVER (PARTITION BY x,s) AS sum FROM values('x Int8, s String', (1,'a'),(1,'b'),(2,'b'))",
+        expected=expected
+    )
+
+@TestScenario
+@Requirements(
+    RQ_SRS_019_ClickHouse_WindowFunctions_PartitionClause_MissingExpr_Error("1.0")
+)
+def missing_expr_error(self):
+    """Check that defining of partition clause with missing expr returns an error.
+    """
+    exitcode = 62
+    message = "Exception: Syntax error: failed at position"
+
+    self.context.node.query("SELECT sum(number) OVER (PARTITION BY) FROM numbers(1,3)", exitcode=exitcode, message=message)
+
+@TestScenario
+@Requirements(
+    RQ_SRS_019_ClickHouse_WindowFunctions_PartitionClause_InvalidExpr_Error("1.0")
+)
+def invalid_expr_error(self):
+    """Check that defining of partition clause with invalid expr returns an error.
+    """
+    exitcode = 47
+    message = "Exception: Missing columns: 'foo'"
+
+    self.context.node.query("SELECT sum(number) OVER (PARTITION BY foo) FROM numbers(1,3)", exitcode=exitcode, message=message)
+
+
+@TestFeature
+@Name("partition clause")
+@Requirements(
+    RQ_SRS_019_ClickHouse_WindowFunctions_PartitionClause("1.0")
+)
+def feature(self):
+    """Check defining partition clause.
+    """
+    for scenario in loads(current_module(), Scenario):
+        Scenario(run=scenario, flags=TE)
diff --git a/tests/testflows/window_functions/tests/range_datetime.py b/tests/testflows/window_functions/tests/range_datetime.py
new file mode 100644
index 00000000000..0b34fdf43d4
--- /dev/null
+++ b/tests/testflows/window_functions/tests/range_datetime.py
@@ -0,0 +1,238 @@
+from testflows.core import *
+
+from window_functions.requirements import *
+from window_functions.tests.common import *
+
+@TestScenario
+def order_by_asc_range_between_days_preceding_and_days_following(self):
+    """Check range between days preceding and days following
+    with ascending order by.
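+
+    For RANGE frames the offset is interpreted in the units of the ORDER BY
+    column's underlying numeric representation: days for Date (the 365 used
+    below over enroll_date) and seconds for DateTime (the 31622400 = 366 * 86400
+    used in the DateTime scenarios that follow).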
+ """ + expected = convert_output(""" + sum | salary | enroll_date + -------+--------+------------- + 34900 | 5000 | 2006-10-01 + 38400 | 3900 | 2006-12-23 + 47100 | 4800 | 2007-08-01 + 47100 | 4800 | 2007-08-08 + 36100 | 3500 | 2007-12-10 + 32200 | 4200 | 2008-01-01 + 34900 | 6000 | 2006-10-01 + 32200 | 4500 | 2008-01-01 + 47100 | 5200 | 2007-08-01 + 47100 | 5200 | 2007-08-15 + """) + + execute_query( + "select sum(salary) over (order by enroll_date range between 365 preceding and 365 following) AS sum, " + "salary, enroll_date from empsalary order by empno", + expected=expected + ) + +@TestScenario +def order_by_desc_range_between_days_preceding_and_days_following(self): + """Check range between days preceding and days following + with descending order by.""" + expected = convert_output(""" + sum | salary | enroll_date + -------+--------+------------- + 34900 | 5000 | 2006-10-01 + 38400 | 3900 | 2006-12-23 + 47100 | 4800 | 2007-08-01 + 47100 | 4800 | 2007-08-08 + 36100 | 3500 | 2007-12-10 + 32200 | 4200 | 2008-01-01 + 34900 | 6000 | 2006-10-01 + 32200 | 4500 | 2008-01-01 + 47100 | 5200 | 2007-08-01 + 47100 | 5200 | 2007-08-15 + """) + + execute_query( + "select sum(salary) over (order by enroll_date desc range between 365 preceding and 365 following) AS sum, " + "salary, enroll_date from empsalary order by empno", + expected=expected + ) + +@TestScenario +def order_by_desc_range_between_days_following_and_days_following(self): + """Check range between days following and days following with + descending order by. + """ + expected = convert_output(""" + sum | salary | enroll_date + -------+--------+------------- + 0 | 5000 | 2006-10-01 + 0 | 3900 | 2006-12-23 + 0 | 4800 | 2007-08-01 + 0 | 4800 | 2007-08-08 + 0 | 3500 | 2007-12-10 + 0 | 4200 | 2008-01-01 + 0 | 6000 | 2006-10-01 + 0 | 4500 | 2008-01-01 + 0 | 5200 | 2007-08-01 + 0 | 5200 | 2007-08-15 + """) + + execute_query( + "select sum(salary) over (order by enroll_date desc range between 365 following and 365 following) AS sum, " + "salary, enroll_date from empsalary order by empno", + expected=expected + ) + +@TestScenario +def order_by_desc_range_between_days_preceding_and_days_preceding(self): + """Check range between days preceding and days preceding with + descending order by. + """ + expected = convert_output(""" + sum | salary | enroll_date + -------+--------+------------- + 0 | 5000 | 2006-10-01 + 0 | 3900 | 2006-12-23 + 0 | 4800 | 2007-08-01 + 0 | 4800 | 2007-08-08 + 0 | 3500 | 2007-12-10 + 0 | 4200 | 2008-01-01 + 0 | 6000 | 2006-10-01 + 0 | 4500 | 2008-01-01 + 0 | 5200 | 2007-08-01 + 0 | 5200 | 2007-08-15 + """) + + execute_query( + "select sum(salary) over (order by enroll_date desc range between 365 preceding and 365 preceding) AS sum, " + "salary, enroll_date from empsalary order by empno", + expected=expected + ) + +@TestScenario +def datetime_with_timezone_order_by_asc_range_between_n_preceding_and_n_following(self): + """Check range between preceding and following with + DateTime column that has timezone using ascending order by. 
+ """ + expected = convert_output(""" + id | f_timestamptz | first_value | last_value + ----+------------------------------+-------------+------------ + 1 | 2000-10-19 10:23:54 | 1 | 3 + 2 | 2001-10-19 10:23:54 | 1 | 4 + 3 | 2001-10-19 10:23:54 | 1 | 4 + 4 | 2002-10-19 10:23:54 | 2 | 5 + 5 | 2003-10-19 10:23:54 | 4 | 6 + 6 | 2004-10-19 10:23:54 | 5 | 7 + 7 | 2005-10-19 10:23:54 | 6 | 8 + 8 | 2006-10-19 10:23:54 | 7 | 9 + 9 | 2007-10-19 10:23:54 | 8 | 10 + 10 | 2008-10-19 10:23:54 | 9 | 10 + """) + + execute_query( + """ + select id, f_timestamptz, first_value(id) over w AS first_value, last_value(id) over w AS last_value + from datetimes + window w as (order by f_timestamptz range between + 31622400 preceding and 31622400 following) order by id + """, + expected=expected + ) + +@TestScenario +def datetime_with_timezone_order_by_desc_range_between_n_preceding_and_n_following(self): + """Check range between preceding and following with + DateTime column that has timezone using descending order by. + """ + expected = convert_output(""" + id | f_timestamptz | first_value | last_value + ----+------------------------------+-------------+------------ + 10 | 2008-10-19 10:23:54 | 10 | 9 + 9 | 2007-10-19 10:23:54 | 10 | 8 + 8 | 2006-10-19 10:23:54 | 9 | 7 + 7 | 2005-10-19 10:23:54 | 8 | 6 + 6 | 2004-10-19 10:23:54 | 7 | 5 + 5 | 2003-10-19 10:23:54 | 6 | 4 + 4 | 2002-10-19 10:23:54 | 5 | 3 + 3 | 2001-10-19 10:23:54 | 4 | 1 + 2 | 2001-10-19 10:23:54 | 4 | 1 + 1 | 2000-10-19 10:23:54 | 2 | 1 + """) + + execute_query( + """ + select id, f_timestamptz, first_value(id) over w AS first_value, last_value(id) over w AS last_value + from datetimes + window w as (order by f_timestamptz desc range between + 31622400 preceding and 31622400 following) order by id desc + """, + expected=expected + ) + +@TestScenario +def datetime_order_by_asc_range_between_n_preceding_and_n_following(self): + """Check range between preceding and following with + DateTime column and ascending order by. + """ + expected = convert_output(""" + id | f_timestamp | first_value | last_value + ----+------------------------------+-------------+------------ + 1 | 2000-10-19 10:23:54 | 1 | 3 + 2 | 2001-10-19 10:23:54 | 1 | 4 + 3 | 2001-10-19 10:23:54 | 1 | 4 + 4 | 2002-10-19 10:23:54 | 2 | 5 + 5 | 2003-10-19 10:23:54 | 4 | 6 + 6 | 2004-10-19 10:23:54 | 5 | 7 + 7 | 2005-10-19 10:23:54 | 6 | 8 + 8 | 2006-10-19 10:23:54 | 7 | 9 + 9 | 2007-10-19 10:23:54 | 8 | 10 + 10 | 2008-10-19 10:23:54 | 9 | 10 + """) + + execute_query( + """ + select id, f_timestamp, first_value(id) over w AS first_value, last_value(id) over w AS last_value + from datetimes + window w as (order by f_timestamp range between + 31622400 preceding and 31622400 following) ORDER BY id + """, + expected=expected + ) + +@TestScenario +def datetime_order_by_desc_range_between_n_preceding_and_n_following(self): + """Check range between preceding and following with + DateTime column and descending order by. 
+ """ + expected = convert_output(""" + id | f_timestamp | first_value | last_value + ----+------------------------------+-------------+------------ + 10 | 2008-10-19 10:23:54 | 10 | 9 + 9 | 2007-10-19 10:23:54 | 10 | 8 + 8 | 2006-10-19 10:23:54 | 9 | 7 + 7 | 2005-10-19 10:23:54 | 8 | 6 + 6 | 2004-10-19 10:23:54 | 7 | 5 + 5 | 2003-10-19 10:23:54 | 6 | 4 + 4 | 2002-10-19 10:23:54 | 5 | 3 + 2 | 2001-10-19 10:23:54 | 4 | 1 + 3 | 2001-10-19 10:23:54 | 4 | 1 + 1 | 2000-10-19 10:23:54 | 2 | 1 + """) + + execute_query( + """ + select id, f_timestamp, first_value(id) over w AS first_value, last_value(id) over w AS last_value + from datetimes + window w as (order by f_timestamp desc range between + 31622400 preceding and 31622400 following) + """, + expected=expected + ) + +@TestFeature +@Name("range datetime") +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_DataTypes_DateAndDateTime("1.0") +) +def feature(self): + """Check `Date` and `DateTime` data time with range frames. + """ + for scenario in loads(current_module(), Scenario): + Scenario(run=scenario, flags=TE) diff --git a/tests/testflows/window_functions/tests/range_errors.py b/tests/testflows/window_functions/tests/range_errors.py new file mode 100644 index 00000000000..67a9cfb14c9 --- /dev/null +++ b/tests/testflows/window_functions/tests/range_errors.py @@ -0,0 +1,104 @@ +from testflows.core import * + +from window_functions.requirements import * +from window_functions.tests.common import * + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_MultipleColumnsInOrderBy_Error("1.0") +) +def error_more_than_one_order_by_column(self): + """Check that using more than one column in order by with range frame + returns an error. + """ + exitcode = 36 + message = "DB::Exception: Received from localhost:9000. DB::Exception: The RANGE OFFSET window frame requires exactly one ORDER BY column, 2 given" + + sql = ("select sum(salary) over (order by enroll_date, salary range between 1 preceding and 2 following) AS sum, " + "salary, enroll_date from empsalary") + + with When("I execute query", description=sql): + r = current().context.node.query(sql, exitcode=exitcode, message=message) + +@TestScenario +def error_missing_order_by(self): + """Check that using range frame with offsets without order by returns an error. + """ + exitcode = 36 + message = "DB::Exception: The RANGE OFFSET window frame requires exactly one ORDER BY column, 0 given" + + sql = ("select sum(salary) over (range between 1 preceding and 2 following) AS sum, " + "salary, enroll_date from empsalary") + + with When("I execute query", description=sql): + r = current().context.node.query(sql, exitcode=exitcode, message=message) + +@TestScenario +def error_missing_order_by_with_partition_by_clause(self): + """Check that range frame with offsets used with partition by but + without order by returns an error. + """ + exitcode = 36 + message = "DB::Exception: The RANGE OFFSET window frame requires exactly one ORDER BY column, 0 given" + + sql = ("select f1, sum(f1) over (partition by f1 range between 1 preceding and 1 following) AS sum " + "from t1 where f1 = f2") + + with When("I execute query", description=sql): + r = current().context.node.query(sql, exitcode=exitcode, message=message) + +@TestScenario +def error_range_over_non_numerical_column(self): + """Check that range over non numerical column returns an error. 
+ """ + exitcode = 48 + message = "DB::Exception: The RANGE OFFSET frame for 'DB::ColumnLowCardinality' ORDER BY column is not implemented" + + sql = ("select sum(salary) over (order by depname range between 1 preceding and 2 following) as sum, " + "salary, enroll_date from empsalary") + + with When("I execute query", description=sql): + r = current().context.node.query(sql, exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_ExprPreceding_ExprValue("1.0") +) +def error_negative_preceding_offset(self): + """Check that non-positive value of preceding offset returns an error. + """ + exitcode = 36 + message = "DB::Exception: Frame start offset must be greater than zero, -1 given" + + sql = ("select max(enroll_date) over (order by salary range between -1 preceding and 2 following) AS max, " + "salary, enroll_date from empsalary") + + with When("I execute query", description=sql): + r = current().context.node.query(sql, exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_ExprFollowing_ExprValue("1.0") +) +def error_negative_following_offset(self): + """Check that non-positive value of following offset returns an error. + """ + exitcode = 36 + message = "DB::Exception: Frame end offset must be greater than zero, -2 given" + + sql = ("select max(enroll_date) over (order by salary range between 1 preceding and -2 following) AS max, " + "salary, enroll_date from empsalary") + + with When("I execute query", description=sql): + r = current().context.node.query(sql, exitcode=exitcode, message=message) + +@TestFeature +@Name("range errors") +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame("1.0") +) +def feature(self): + """Check different error conditions when usign range frame. + """ + for scenario in loads(current_module(), Scenario): + Scenario(run=scenario, flags=TE) diff --git a/tests/testflows/window_functions/tests/range_frame.py b/tests/testflows/window_functions/tests/range_frame.py new file mode 100644 index 00000000000..71f00965547 --- /dev/null +++ b/tests/testflows/window_functions/tests/range_frame.py @@ -0,0 +1,1410 @@ +from testflows.core import * + +from window_functions.requirements import * +from window_functions.tests.common import * + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_MissingFrameExtent_Error("1.0") +) +def missing_frame_extent(self): + """Check that when range frame has missing frame extent then an error is returned. + """ + exitcode, message = syntax_error() + + self.context.node.query("SELECT number,sum(number) OVER (ORDER BY number RANGE) FROM numbers(1,3)", + exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_InvalidFrameExtent_Error("1.0") +) +def invalid_frame_extent(self): + """Check that when range frame has invalid frame extent then an error is returned. + """ + exitcode, message = syntax_error() + + self.context.node.query("SELECT number,sum(number) OVER (ORDER BY number RANGE '1') FROM numbers(1,3)", + exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_CurrentRow_Peers("1.0"), + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Start_CurrentRow_WithoutOrderBy("1.0") +) +def start_current_row_without_order_by(self): + """Check range current row frame without order by and + that the peers of the current row are rows that have values in the same order bucket. 
+ In this case without order by clause all rows are the peers of the current row. + """ + expected = convert_output(""" + empno | salary | sum + --------+--------+-------- + 1 | 5000 | 47100 + 2 | 3900 | 47100 + 3 | 4800 | 47100 + 4 | 4800 | 47100 + 5 | 3500 | 47100 + 7 | 4200 | 47100 + 8 | 6000 | 47100 + 9 | 4500 | 47100 + 10 | 5200 | 47100 + 11 | 5200 | 47100 + """) + + execute_query( + "SELECT * FROM (SELECT empno, salary, sum(salary) OVER (RANGE CURRENT ROW) AS sum FROM empsalary) ORDER BY empno", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_CurrentRow_Peers("1.0"), + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Start_CurrentRow_WithOrderBy("1.0") +) +def start_current_row_with_order_by(self): + """Check range current row frame with order by and that the peers of the current row + are rows that have values in the same order bucket. + """ + expected = convert_output(""" + empno | depname | salary | sum + --------+-----------+--------+--------- + 1 | sales | 5000 | 14600 + 2 | personnel | 3900 | 7400 + 3 | sales | 4800 | 14600 + 4 | sales | 4800 | 14600 + 5 | personnel | 3500 | 7400 + 7 | develop | 4200 | 25100 + 8 | develop | 6000 | 25100 + 9 | develop | 4500 | 25100 + 10 | develop | 5200 | 25100 + 11 | develop | 5200 | 25100 + """) + + execute_query( + "SELECT * FROM (SELECT empno, depname, salary, sum(salary) OVER (ORDER BY depname RANGE CURRENT ROW) AS sum FROM empsalary) ORDER BY empno", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Start_UnboundedFollowing_Error("1.0") +) +def start_unbounded_following_error(self): + """Check that range unbounded following frame start with or without order by returns an error. + """ + exitcode, message = frame_start_error() + + with Example("without order by"): + self.context.node.query("SELECT empno, depname, salary, sum(salary) OVER (RANGE UNBOUNDED FOLLOWING) AS sum FROM empsalary", + exitcode=exitcode, message=message) + + with Example("with order by"): + self.context.node.query("SELECT empno, depname, salary, sum(salary) OVER (ORDER BY salary RANGE UNBOUNDED FOLLOWING) AS sum FROM empsalary", + exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Start_UnboundedPreceding_WithoutOrderBy("1.0") +) +def start_unbounded_preceding_without_order_by(self): + """Check range unbounded preceding frame without order by. + """ + expected = convert_output(""" + empno | depname | salary | sum + --------+-----------+--------+--------- + 7 | develop | 4200 | 25100 + 8 | develop | 6000 | 25100 + 9 | develop | 4500 | 25100 + 10 | develop | 5200 | 25100 + 11 | develop | 5200 | 25100 + """) + + execute_query( + "SELECT * FROM (SELECT empno, depname, salary, sum(salary) OVER (RANGE UNBOUNDED PRECEDING) AS sum FROM empsalary WHERE depname = 'develop') ORDER BY empno", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Start_UnboundedPreceding_WithOrderBy("1.0") +) +def start_unbounded_preceding_with_order_by(self): + """Check range unbounded preceding frame with order by.
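+ With ORDER BY depname the frame ends at the last peer of the current row, so each department gets a prefix total over the alphabetical order: + develop 25100, develop plus personnel 25100 + 7400 = 32500, and all departments 47100.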
+ """ + expected = convert_output(""" + empno | depname | salary | sum + --------+-----------+--------+--------- + 1 | sales | 5000 | 47100 + 2 | personnel | 3900 | 32500 + 3 | sales | 4800 | 47100 + 4 | sales | 4800 | 47100 + 5 | personnel | 3500 | 32500 + 7 | develop | 4200 | 25100 + 8 | develop | 6000 | 25100 + 9 | develop | 4500 | 25100 + 10 | develop | 5200 | 25100 + 11 | develop | 5200 | 25100 + """) + + execute_query( + "SELECT * FROM (SELECT empno, depname, salary, sum(salary) OVER (ORDER BY depname RANGE UNBOUNDED PRECEDING) AS sum FROM empsalary) ORDER BY empno", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Start_ExprFollowing_WithoutOrderBy_Error("1.0") +) +def start_expr_following_without_order_by_error(self): + """Check range expr following frame without order by returns an error. + """ + exitcode, message = window_frame_error() + + self.context.node.query("SELECT empno, depname, salary, sum(salary) OVER (RANGE 1 FOLLOWING) AS sum FROM empsalary", + exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Start_ExprFollowing_WithOrderBy_Error("1.0") +) +def start_expr_following_with_order_by_error(self): + """Check range expr following frame with order by returns an error. + """ + exitcode, message = window_frame_error() + + self.context.node.query("SELECT empno, depname, salary, sum(salary) OVER (ORDER BY salary RANGE 1 FOLLOWING) AS sum FROM empsalary", + exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Start_ExprPreceding_WithOrderBy("1.0") +) +def start_expr_preceding_with_order_by(self): + """Check range expr preceding frame with order by. + """ + expected = convert_output(""" + empno | depname | salary | sum + --------+-----------+--------+--------- + 1 | sales | 5000 | 5000 + 2 | personnel | 3900 | 3900 + 3 | sales | 4800 | 9600 + 4 | sales | 4800 | 9600 + 5 | personnel | 3500 | 3500 + 7 | develop | 4200 | 4200 + 8 | develop | 6000 | 6000 + 9 | develop | 4500 | 4500 + 10 | develop | 5200 | 10400 + 11 | develop | 5200 | 10400 + """) + + execute_query( + "SELECT * FROM (SELECT empno, depname, salary, sum(salary) OVER (ORDER BY salary RANGE 1 PRECEDING) AS sum FROM empsalary) ORDER BY empno", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Start_ExprPreceding_OrderByNonNumericalColumn_Error("1.0") +) +def start_expr_preceding_order_by_non_numerical_column_error(self): + """Check range expr preceding frame with order by non-numerical column returns an error. + """ + exitcode, message = frame_range_offset_error() + + self.context.node.query("SELECT empno, depname, salary, sum(salary) OVER (ORDER BY depname RANGE 1 PRECEDING) AS sum FROM empsalary", + exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Start_ExprPreceding_WithoutOrderBy_Error("1.0") +) +def start_expr_preceding_without_order_by_error(self): + """Check range expr preceding frame without order by returns an error. 
+ """ + exitcode, message = frame_requires_order_by_error() + + self.context.node.query("SELECT empno, depname, salary, sum(salary) OVER (RANGE 1 PRECEDING) AS sum FROM empsalary", + exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_CurrentRow_CurrentRow("1.0") +) +def between_current_row_and_current_row(self): + """Check range between current row and current row frame with or without order by. + """ + with Example("without order by"): + expected = convert_output(""" + empno | depname | salary | sum + --------+-----------+--------+--------- + 7 | develop | 4200 | 25100 + 8 | develop | 6000 | 25100 + 9 | develop | 4500 | 25100 + 10 | develop | 5200 | 25100 + 11 | develop | 5200 | 25100 + """) + + execute_query( + "SELECT * FROM (SELECT empno, depname, salary, sum(salary) OVER (RANGE BETWEEN CURRENT ROW AND CURRENT ROW) AS sum FROM empsalary WHERE depname = 'develop') ORDER BY empno", + expected=expected + ) + + with Example("with order by"): + expected = convert_output(""" + empno | depname | salary | sum + --------+-----------+--------+------ + 7 | develop | 4200 | 4200 + 8 | develop | 6000 | 6000 + 9 | develop | 4500 | 4500 + 10 | develop | 5200 | 5200 + 11 | develop | 5200 | 5200 + """) + + execute_query( + "SELECT empno, depname, salary, sum(salary) OVER (ORDER BY empno RANGE BETWEEN CURRENT ROW AND CURRENT ROW) AS sum FROM empsalary WHERE depname = 'develop'", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_CurrentRow_UnboundedPreceding_Error("1.0") +) +def between_current_row_and_unbounded_preceding_error(self): + """Check range between current row and unbounded preceding frame with or without order by returns an error. + """ + exitcode, message = frame_end_error() + + with Example("without order by"): + self.context.node.query("SELECT empno, depname, salary, sum(salary) OVER (RANGE BETWEEN CURRENT ROW AND UNBOUNDED PRECEDING) AS sum FROM empsalary", + exitcode=exitcode, message=message) + + with Example("with order by"): + self.context.node.query("SELECT empno, depname, salary, sum(salary) OVER (ORDER BY salary RANGE BETWEEN CURRENT ROW AND UNBOUNDED PRECEDING) AS sum FROM empsalary", + exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_CurrentRow_UnboundedFollowing("1.0") +) +def between_current_row_and_unbounded_following(self): + """Check range between current row and unbounded following frame with or without order by. 
+ """ + with Example("without order by"): + expected = convert_output(""" + empno | depname | salary | sum + --------+-----------+--------+--------- + 7 | develop | 4200 | 25100 + 8 | develop | 6000 | 25100 + 9 | develop | 4500 | 25100 + 10 | develop | 5200 | 25100 + 11 | develop | 5200 | 25100 + """) + + execute_query( + "SELECT * FROM (SELECT empno, depname, salary, sum(salary) OVER (RANGE BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) AS sum FROM empsalary WHERE depname = 'develop') ORDER BY empno", + expected=expected + ) + + with Example("with order by"): + expected = convert_output(""" + empno | depname | salary | sum + --------+-----------+--------+--------- + 7 | develop | 4200 | 25100 + 8 | develop | 6000 | 20900 + 9 | develop | 4500 | 14900 + 10 | develop | 5200 | 10400 + 11 | develop | 5200 | 5200 + """) + + execute_query( + "SELECT empno, depname, salary, sum(salary) OVER (ORDER BY empno RANGE BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) AS sum FROM empsalary WHERE depname = 'develop'", + expected=expected + ) + + with Example("with order by from tenk1"): + expected = convert_output(""" + sum | unique1 | four + -----+---------+------ + 45 | 0 | 0 + 33 | 1 | 1 + 18 | 2 | 2 + 10 | 3 | 3 + 45 | 4 | 0 + 33 | 5 | 1 + 18 | 6 | 2 + 10 | 7 | 3 + 45 | 8 | 0 + 33 | 9 | 1 + """) + + execute_query( + "SELECT * FROM (SELECT sum(unique1) over (order by four range between current row and unbounded following) AS sum," + "unique1, four " + "FROM tenk1 WHERE unique1 < 10) ORDER BY unique1", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_CurrentRow_ExprFollowing_WithoutOrderBy_Error("1.0") +) +def between_current_row_and_expr_following_without_order_by_error(self): + """Check range between current row and expr following frame without order by returns an error. + """ + exitcode, message = frame_requires_order_by_error() + + self.context.node.query("SELECT number,sum(number) OVER (RANGE BETWEEN CURRENT ROW AND 1 FOLLOWING) FROM numbers(1,3)", + exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_CurrentRow_ExprFollowing_WithOrderBy("1.0") +) +def between_current_row_and_expr_following_with_order_by(self): + """Check range between current row and expr following frame with order by. + """ + expected = convert_output(""" + empno | depname | salary | sum + --------+-----------+--------+--------- + 1 | sales | 5000 | 8900 + 2 | personnel | 3900 | 8700 + 3 | sales | 4800 | 9600 + 4 | sales | 4800 | 8300 + 5 | personnel | 3500 | 3500 + 7 | develop | 4200 | 10200 + 8 | develop | 6000 | 10500 + 9 | develop | 4500 | 9700 + 10 | develop | 5200 | 10400 + 11 | develop | 5200 | 5200 + """) + + execute_query( + "SELECT empno, depname, salary, sum(salary) OVER (ORDER BY empno RANGE BETWEEN CURRENT ROW AND 1 FOLLOWING) AS sum FROM empsalary", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_CurrentRow_ExprPreceding_Error("1.0") +) +def between_current_row_and_expr_preceding_error(self): + """Check range between current row and expr preceding frame with or without order by returns an error. 
+ """ + exitcode, message = window_frame_error() + + with Example("without order by"): + self.context.node.query("SELECT empno, depname, salary, sum(salary) OVER (RANGE BETWEEN CURRENT ROW AND 1 PRECEDING) AS sum FROM empsalary", + exitcode=exitcode, message=message) + + with Example("with order by"): + self.context.node.query("SELECT empno, depname, salary, sum(salary) OVER (ORDER BY salary RANGE BETWEEN CURRENT ROW AND 1 PRECEDING) AS sum FROM empsalary", + exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedPreceding_CurrentRow("1.0") +) +def between_unbounded_preceding_and_current_row(self): + """Check range between unbounded preceding and current row frame with and without order by. + """ + with Example("with order by"): + expected = convert_output(""" + four | ten | sum | last_value + ------+-----+-----+------------ + 0 | 0 | 0 | 0 + 0 | 2 | 2 | 2 + 0 | 4 | 6 | 4 + 0 | 6 | 12 | 6 + 0 | 8 | 20 | 8 + 1 | 1 | 1 | 1 + 1 | 3 | 4 | 3 + 1 | 5 | 9 | 5 + 1 | 7 | 16 | 7 + 1 | 9 | 25 | 9 + 2 | 0 | 0 | 0 + 2 | 2 | 2 | 2 + 2 | 4 | 6 | 4 + 2 | 6 | 12 | 6 + 2 | 8 | 20 | 8 + 3 | 1 | 1 | 1 + 3 | 3 | 4 | 3 + 3 | 5 | 9 | 5 + 3 | 7 | 16 | 7 + 3 | 9 | 25 | 9 + """) + + execute_query( + "SELECT four, ten," + "sum(ten) over (partition by four order by ten range between unbounded preceding and current row) AS sum," + "last_value(ten) over (partition by four order by ten range between unbounded preceding and current row) AS last_value " + "FROM (select distinct ten, four from tenk1)", + expected=expected + ) + + with Example("without order by"): + expected = convert_output(""" + empno | depname | salary | sum + --------+-----------+--------+--------- + 7 | develop | 4200 | 25100 + 8 | develop | 6000 | 25100 + 9 | develop | 4500 | 25100 + 10 | develop | 5200 | 25100 + 11 | develop | 5200 | 25100 + """) + + execute_query( + "SELECT * FROM (SELECT empno, depname, salary, sum(salary) OVER (RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS sum FROM empsalary WHERE depname = 'develop') ORDER BY empno", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedPreceding_UnboundedPreceding_Error("1.0") +) +def between_unbounded_preceding_and_unbounded_preceding_error(self): + """Check range between unbounded preceding and unbounded preceding frame with or without order by returns an error. + """ + exitcode, message = frame_end_error() + + with Example("without order by"): + self.context.node.query("SELECT empno, depname, salary, sum(salary) OVER (RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED PRECEDING) AS sum FROM empsalary", + exitcode=exitcode, message=message) + + with Example("with order by"): + self.context.node.query("SELECT empno, depname, salary, sum(salary) OVER (ORDER BY salary RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED PRECEDING) AS sum FROM empsalary", + exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedPreceding_UnboundedFollowing("1.0") +) +def between_unbounded_preceding_and_unbounded_following(self): + """Check range between unbounded preceding and unbounded following range with and without order by. 
+ """ + with Example("with order by"): + expected = convert_output(""" + four | ten | sum | last_value + ------+-----+-----+------------ + 0 | 0 | 20 | 8 + 0 | 2 | 20 | 8 + 0 | 4 | 20 | 8 + 0 | 6 | 20 | 8 + 0 | 8 | 20 | 8 + 1 | 1 | 25 | 9 + 1 | 3 | 25 | 9 + 1 | 5 | 25 | 9 + 1 | 7 | 25 | 9 + 1 | 9 | 25 | 9 + 2 | 0 | 20 | 8 + 2 | 2 | 20 | 8 + 2 | 4 | 20 | 8 + 2 | 6 | 20 | 8 + 2 | 8 | 20 | 8 + 3 | 1 | 25 | 9 + 3 | 3 | 25 | 9 + 3 | 5 | 25 | 9 + 3 | 7 | 25 | 9 + 3 | 9 | 25 | 9 + """) + + execute_query( + "SELECT four, ten, " + "sum(ten) over (partition by four order by ten range between unbounded preceding and unbounded following) AS sum, " + "last_value(ten) over (partition by four order by ten range between unbounded preceding and unbounded following) AS last_value " + "FROM (select distinct ten, four from tenk1)", + expected=expected + ) + + with Example("without order by"): + expected = convert_output(""" + empno | depname | salary | sum + --------+-----------+--------+--------- + 1 | sales | 5000 | 47100 + 2 | personnel | 3900 | 47100 + 3 | sales | 4800 | 47100 + 4 | sales | 4800 | 47100 + 5 | personnel | 3500 | 47100 + 7 | develop | 4200 | 47100 + 8 | develop | 6000 | 47100 + 9 | develop | 4500 | 47100 + 10 | develop | 5200 | 47100 + 11 | develop | 5200 | 47100 + """) + + execute_query( + "SELECT * FROM (SELECT empno, depname, salary, sum(salary) OVER (RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS sum FROM empsalary) ORDER BY empno", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedPreceding_ExprFollowing_WithoutOrderBy_Error("1.0") +) +def between_unbounded_preceding_and_expr_following_without_order_by_error(self): + """Check range between unbounded preceding and expr following frame without order by returns an error. + """ + exitcode, message = frame_requires_order_by_error() + + self.context.node.query("SELECT number,sum(number) OVER (RANGE BETWEEN UNBOUNDED PRECEDING AND 1 FOLLOWING) FROM values('number Int8', (1),(1),(2),(3))", + exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedPreceding_ExprPreceding_WithoutOrderBy_Error("1.0") +) +def between_unbounded_preceding_and_expr_preceding_without_order_by_error(self): + """Check range between unbounded preceding and expr preceding frame without order by returns an error. + """ + exitcode, message = frame_requires_order_by_error() + + self.context.node.query("SELECT number,sum(number) OVER (RANGE BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING) FROM values('number Int8', (1),(1),(2),(3))", + exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedPreceding_ExprFollowing_WithOrderBy("1.0") +) +def between_unbounded_preceding_and_expr_following_with_order_by(self): + """Check range between unbounded preceding and expr following frame with order by. 
+ """ + expected = convert_output(""" + empno | depname | salary | sum + --------+-----------+--------+--------- + 1 | sales | 5000 | 41100 + 2 | personnel | 3900 | 11600 + 3 | sales | 4800 | 41100 + 4 | sales | 4800 | 41100 + 5 | personnel | 3500 | 7400 + 7 | develop | 4200 | 16100 + 8 | develop | 6000 | 47100 + 9 | develop | 4500 | 30700 + 10 | develop | 5200 | 41100 + 11 | develop | 5200 | 41100 + """) + + execute_query( + "SELECT * FROM (SELECT empno, depname, salary, sum(salary) OVER (ORDER BY salary RANGE BETWEEN UNBOUNDED PRECEDING AND 500 FOLLOWING) AS sum FROM empsalary) ORDER BY empno", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedPreceding_ExprPreceding_WithOrderBy("1.0") +) +def between_unbounded_preceding_and_expr_preceding_with_order_by(self): + """Check range between unbounded preceding and expr preceding frame with order by. + """ + expected = convert_output(""" + empno | depname | salary | sum + --------+-----------+--------+--------- + 1 | sales | 5000 | 16100 + 2 | personnel | 3900 | 0 + 3 | sales | 4800 | 11600 + 4 | sales | 4800 | 11600 + 5 | personnel | 3500 | 0 + 7 | develop | 4200 | 3500 + 8 | develop | 6000 | 41100 + 9 | develop | 4500 | 7400 + 10 | develop | 5200 | 16100 + 11 | develop | 5200 | 16100 + """) + + execute_query( + "SELECT * FROM (SELECT empno, depname, salary, sum(salary) OVER (ORDER BY salary RANGE BETWEEN UNBOUNDED PRECEDING AND 500 PRECEDING) AS sum FROM empsalary) ORDER BY empno", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedFollowing_CurrentRow_Error("1.0") +) +def between_unbounded_following_and_current_row_error(self): + """Check range between unbounded following and current row frame with or without order by returns an error. + """ + exitcode, message = frame_start_error() + + with Example("without order by"): + self.context.node.query("SELECT empno, depname, salary, sum(salary) OVER (RANGE BETWEEN UNBOUNDED FOLLOWING AND CURRENT ROW) AS sum FROM empsalary", + exitcode=exitcode, message=message) + + with Example("with order by"): + self.context.node.query("SELECT empno, depname, salary, sum(salary) OVER (ORDER BY salary RANGE BETWEEN UNBOUNDED FOLLOWING AND CURRENT ROW) AS sum FROM empsalary", + exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedFollowing_UnboundedFollowing_Error("1.0") +) +def between_unbounded_following_and_unbounded_following_error(self): + """Check range between unbounded following and unbounded following frame with or without order by returns an error. 
+ """ + exitcode, message = frame_start_error() + + with Example("without order by"): + self.context.node.query("SELECT empno, depname, salary, sum(salary) OVER (RANGE BETWEEN UNBOUNDED FOLLOWING AND UNBOUNDED FOLLOWING) AS sum FROM empsalary", + exitcode=exitcode, message=message) + + with Example("with order by"): + self.context.node.query("SELECT empno, depname, salary, sum(salary) OVER (ORDER BY salary RANGE BETWEEN UNBOUNDED FOLLOWING AND UNBOUNDED FOLLOWING) AS sum FROM empsalary", + exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedFollowing_UnboundedPreceding_Error("1.0") +) +def between_unbounded_following_and_unbounded_preceding_error(self): + """Check range between unbounded following and unbounded preceding frame with or without order by returns an error. + """ + exitcode, message = frame_start_error() + + with Example("without order by"): + self.context.node.query("SELECT empno, depname, salary, sum(salary) OVER (RANGE BETWEEN UNBOUNDED FOLLOWING AND UNBOUNDED PRECEDING) AS sum FROM empsalary", + exitcode=exitcode, message=message) + + with Example("with order by"): + self.context.node.query("SELECT empno, depname, salary, sum(salary) OVER (ORDER BY salary RANGE BETWEEN UNBOUNDED FOLLOWING AND UNBOUNDED PRECEDING) AS sum FROM empsalary", + exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedFollowing_ExprPreceding_Error("1.0") +) +def between_unbounded_following_and_expr_preceding_error(self): + """Check range between unbounded following and expr preceding frame with or without order by returns an error. + """ + exitcode, message = frame_start_error() + + with Example("without order by"): + self.context.node.query("SELECT empno, depname, salary, sum(salary) OVER (RANGE BETWEEN UNBOUNDED FOLLOWING AND 1 PRECEDING) AS sum FROM empsalary", + exitcode=exitcode, message=message) + + with Example("with order by"): + self.context.node.query("SELECT empno, depname, salary, sum(salary) OVER (ORDER BY salary RANGE BETWEEN UNBOUNDED FOLLOWING AND 1 PRECEDING) AS sum FROM empsalary", + exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_UnboundedFollowing_ExprFollowing_Error("1.0") +) +def between_unbounded_following_and_expr_following_error(self): + """Check range between unbounded following and expr following frame with or without order by returns an error. + """ + exitcode, message = frame_start_error() + + with Example("without order by"): + self.context.node.query("SELECT empno, depname, salary, sum(salary) OVER (RANGE BETWEEN UNBOUNDED FOLLOWING AND 1 FOLLOWING) AS sum FROM empsalary", + exitcode=exitcode, message=message) + + with Example("with order by"): + self.context.node.query("SELECT empno, depname, salary, sum(salary) OVER (ORDER BY salary RANGE BETWEEN UNBOUNDED FOLLOWING AND 1 FOLLOWING) AS sum FROM empsalary", + exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprPreceding_CurrentRow_WithoutOrderBy_Error("1.0") +) +def between_expr_preceding_and_current_row_without_order_by_error(self): + """Check range between expr preceding and current row frame without order by returns an error. 
+ """ + exitcode, message = frame_requires_order_by_error() + + self.context.node.query("SELECT number,sum(number) OVER (RANGE BETWEEN 1 PRECEDING AND CURRENT ROW) FROM values('number Int8', (1),(1),(2),(3))", + exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprPreceding_UnboundedFollowing_WithoutOrderBy_Error("1.0") +) +def between_expr_preceding_and_unbounded_following_without_order_by_error(self): + """Check range between expr preceding and unbounded following frame without order by returns an error. + """ + exitcode, message = frame_requires_order_by_error() + + self.context.node.query("SELECT number,sum(number) OVER (RANGE BETWEEN 1 PRECEDING AND UNBOUNDED FOLLOWING) FROM values('number Int8', (1),(1),(2),(3))", + exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprPreceding_ExprFollowing_WithoutOrderBy_Error("1.0") +) +def between_expr_preceding_and_expr_following_without_order_by_error(self): + """Check range between expr preceding and expr following frame without order by returns an error. + """ + exitcode, message = frame_requires_order_by_error() + + self.context.node.query("SELECT number,sum(number) OVER (RANGE BETWEEN 1 PRECEDING AND 1 FOLLOWING) FROM values('number Int8', (1),(1),(2),(3))", + exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprPreceding_ExprPreceding_WithoutOrderBy_Error("1.0") +) +def between_expr_preceding_and_expr_preceding_without_order_by_error(self): + """Check range between expr preceding and expr preceding frame without order by returns an error. + """ + exitcode, message = frame_requires_order_by_error() + + self.context.node.query("SELECT number,sum(number) OVER (RANGE BETWEEN 1 PRECEDING AND 0 PRECEDING) FROM values('number Int8', (1),(1),(2),(3))", + exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprPreceding_UnboundedPreceding_Error("1.0") +) +def between_expr_preceding_and_unbounded_preceding_error(self): + """Check range between expr preceding and unbounded preceding frame with or without order by returns an error. + """ + exitcode, message = frame_end_unbounded_preceding_error() + + with Example("without order by"): + self.context.node.query("SELECT number,sum(number) OVER (RANGE BETWEEN 1 PRECEDING AND UNBOUNDED PRECEDING) FROM values('number Int8', (1),(1),(2),(3))", + exitcode=exitcode, message=message) + + with Example("with order by"): + self.context.node.query("SELECT number,sum(number) OVER (ORDER BY salary RANGE BETWEEN 1 PRECEDING AND UNBOUNDED PRECEDING) FROM values('number Int8', (1),(1),(2),(3))", + exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprPreceding_CurrentRow_WithOrderBy("1.0") +) +def between_expr_preceding_and_current_row_with_order_by(self): + """Check range between expr preceding and current row frame with order by. 
+ """ + expected = convert_output(""" + empno | depname | salary | sum + --------+-----------+--------+--------- + 1 | sales | 5000 | 5000 + 2 | personnel | 3900 | 8900 + 3 | sales | 4800 | 13700 + 4 | sales | 4800 | 18500 + 5 | personnel | 3500 | 22000 + 7 | develop | 4200 | 26200 + 8 | develop | 6000 | 32200 + 9 | develop | 4500 | 36700 + 10 | develop | 5200 | 41900 + 11 | develop | 5200 | 47100 + """) + + execute_query( + "SELECT empno, depname, salary, sum(salary) OVER (ORDER BY empno RANGE BETWEEN 500 PRECEDING AND CURRENT ROW) AS sum FROM empsalary", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprPreceding_UnboundedFollowing_WithOrderBy("1.0") +) +def between_expr_preceding_and_unbounded_following_with_order_by(self): + """Check range between expr preceding and unbounded following frame with order by. + """ + expected = convert_output(""" + empno | depname | salary | sum + --------+-----------+--------+--------- + 1 | sales | 5000 | 35500 + 2 | personnel | 3900 | 47100 + 3 | sales | 4800 | 35500 + 4 | sales | 4800 | 35500 + 5 | personnel | 3500 | 47100 + 7 | develop | 4200 | 43600 + 8 | develop | 6000 | 6000 + 9 | develop | 4500 | 39700 + 10 | develop | 5200 | 31000 + 11 | develop | 5200 | 31000 + """) + + execute_query( + "SELECT * FROM (SELECT empno, depname, salary, sum(salary) OVER (ORDER BY salary RANGE BETWEEN 500 PRECEDING AND UNBOUNDED FOLLOWING) AS sum FROM empsalary) ORDER BY empno", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprPreceding_ExprFollowing_WithOrderBy("1.0") +) +def between_expr_preceding_and_expr_following_with_order_by(self): + """Check range between expr preceding and expr following frame with order by. + """ + with Example("empsalary"): + expected = convert_output(""" + empno | depname | salary | sum + --------+-----------+--------+--------- + 1 | sales | 5000 | 29500 + 2 | personnel | 3900 | 11600 + 3 | sales | 4800 | 29500 + 4 | sales | 4800 | 29500 + 5 | personnel | 3500 | 7400 + 7 | develop | 4200 | 12600 + 8 | develop | 6000 | 6000 + 9 | develop | 4500 | 23300 + 10 | develop | 5200 | 25000 + 11 | develop | 5200 | 25000 + """) + + execute_query( + "SELECT * FROM (SELECT empno, depname, salary, sum(salary) OVER (ORDER BY salary RANGE BETWEEN 500 PRECEDING AND 500 FOLLOWING) AS sum FROM empsalary) ORDER BY empno", + expected=expected + ) + + with Example("tenk1"): + expected = convert_output(""" + sum | unique1 | four + -----+---------+------ + 4 | 0 | 0 + 12 | 4 | 0 + 12 | 8 | 0 + 6 | 1 | 1 + 15 | 5 | 1 + 14 | 9 | 1 + 8 | 2 | 2 + 8 | 6 | 2 + 10 | 3 | 3 + 10 | 7 | 3 + """) + + execute_query( + "SELECT sum(unique1) over (partition by four order by unique1 range between 5 preceding and 6 following) AS sum, " + "unique1, four " + "FROM tenk1 WHERE unique1 < 10", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprPreceding_ExprPreceding_WithOrderBy("1.0") +) +def between_expr_preceding_and_expr_preceding_with_order_by(self): + """Check range between expr preceding and expr preceding range with order by. 
+ """ + with Example("order by asc"): + expected = convert_output(""" + sum | unique1 | four + -----+---------+------ + 0 | 0 | 0 + 0 | 4 | 0 + 0 | 8 | 0 + 12 | 1 | 1 + 12 | 5 | 1 + 12 | 9 | 1 + 27 | 2 | 2 + 27 | 6 | 2 + 23 | 3 | 3 + 23 | 7 | 3 + """) + + execute_query( + "SELECT * FROM (SELECT sum(unique1) over (order by four range between 2 preceding and 1 preceding) AS sum, " + "unique1, four " + "FROM tenk1 WHERE unique1 < 10) ORDER BY four, unique1", + expected=expected + ) + + with Example("order by desc"): + expected = convert_output(""" + sum | unique1 | four + -----+---------+------ + 23 | 0 | 0 + 23 | 4 | 0 + 23 | 8 | 0 + 18 | 1 | 1 + 18 | 5 | 1 + 18 | 9 | 1 + 10 | 2 | 2 + 10 | 6 | 2 + 0 | 3 | 3 + 0 | 7 | 3 + """) + + execute_query( + "SELECT * FROM (SELECT sum(unique1) over (order by four desc range between 2 preceding and 1 preceding) AS sum, " + "unique1, four " + "FROM tenk1 WHERE unique1 < 10) ORDER BY four, unique1", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprPreceding_ExprPreceding_WithOrderBy_Error("1.0") +) +def between_expr_preceding_and_expr_preceding_with_order_by_error(self): + """Check range between expr preceding and expr preceding range with order by returns error + when end frame is before of start frame. + """ + exitcode, message = frame_start_error() + + self.context.node.query("SELECT number,sum(number) OVER (RANGE BETWEEN 1 PRECEDING AND 2 PRECEDING) FROM values('number Int8', (1),(1),(2),(3))", + exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_CurrentRow_WithoutOrderBy_Error("1.0") +) +def between_expr_following_and_current_row_without_order_by_error(self): + """Check range between expr following and current row frame without order by returns an error. + """ + exitcode, message = window_frame_error() + + self.context.node.query("SELECT number,sum(number) OVER (RANGE BETWEEN 0 FOLLOWING AND CURRENT ROW) FROM values('number Int8', (1),(1),(2),(3))", + exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_UnboundedFollowing_WithoutOrderBy_Error("1.0") +) +def between_expr_following_and_unbounded_following_without_order_by_error(self): + """Check range between expr following and unbounded following frame without order by returns an error. + """ + exitcode, message = frame_requires_order_by_error() + + self.context.node.query("SELECT number,sum(number) OVER (RANGE BETWEEN 1 FOLLOWING AND UNBOUNDED FOLLOWING) FROM values('number Int8', (1),(1),(2),(3))", + exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_ExprFollowing_WithoutOrderBy_Error("1.0") +) +def between_expr_following_and_expr_following_without_order_by_error(self): + """Check range between expr following and expr following frame without order by returns an error. 
+ """ + exitcode, message = window_frame_error() + + self.context.node.query("SELECT number,sum(number) OVER (RANGE BETWEEN 1 FOLLOWING AND 1 FOLLOWING) FROM values('number Int8', (1),(1),(2),(3))", + exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_ExprPreceding_WithoutOrderBy_Error("1.0") +) +def between_expr_following_and_expr_preceding_without_order_by_error(self): + """Check range between expr following and expr preceding frame without order by returns an error. + """ + exitcode, message = window_frame_error() + + self.context.node.query("SELECT number,sum(number) OVER (RANGE BETWEEN 0 FOLLOWING AND 0 PRECEDING) FROM values('number Int8', (1),(1),(2),(3))", + exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_UnboundedPreceding_Error("1.0") +) +def between_expr_following_and_unbounded_preceding_error(self): + """Check range between expr following and unbounded preceding frame with or without order by returns an error. + """ + exitcode, message = frame_end_unbounded_preceding_error() + + with Example("without order by"): + self.context.node.query("SELECT number,sum(number) OVER (RANGE BETWEEN 1 FOLLOWING AND UNBOUNDED PRECEDING) FROM values('number Int8', (1),(1),(2),(3))", + exitcode=exitcode, message=message) + + with Example("with order by"): + self.context.node.query("SELECT number,sum(number) OVER (ORDER BY salary RANGE BETWEEN 1 FOLLOWING AND UNBOUNDED PRECEDING) FROM values('number Int8', (1),(1),(2),(3))", + exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_CurrentRow_WithOrderBy_Error("1.0") +) +def between_expr_following_and_current_row_with_order_by_error(self): + """Check range between expr following and current row frame with order by returns an error + when expr if greater than 0. + """ + exitcode, message = window_frame_error() + + self.context.node.query("SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 1 FOLLOWING AND CURRENT ROW) FROM values('number Int8', (1),(1),(2),(3))", + exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_ExprPreceding_Error("1.0") +) +def between_expr_following_and_expr_preceding_error(self): + """Check range between expr following and expr preceding frame with order by returns an error + when either expr is not 0. + """ + exitcode, message = frame_start_error() + + with Example("1 following 0 preceding"): + self.context.node.query("SELECT number,sum(number) OVER (RANGE BETWEEN 1 FOLLOWING AND 0 PRECEDING) FROM values('number Int8', (1),(1),(2),(3))", + exitcode=exitcode, message=message) + + with Example("1 following 0 preceding"): + self.context.node.query("SELECT number,sum(number) OVER (RANGE BETWEEN 0 FOLLOWING AND 1 PRECEDING) FROM values('number Int8', (1),(1),(2),(3))", + exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_ExprFollowing_WithOrderBy_Error("1.0") +) +def between_expr_following_and_expr_following_with_order_by_error(self): + """Check range between expr following and expr following frame with order by returns an error + when the expr for the frame end is less than the expr for the framevstart. 
+ """ + exitcode, message = frame_start_error() + + self.context.node.query("SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 1 FOLLOWING AND 0 FOLLOWING) FROM values('number Int8', (1),(1),(2),(3))", + exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_CurrentRow_ZeroSpecialCase("1.0") +) +def between_expr_following_and_current_row_zero_special_case(self): + """Check range between expr following and current row frame for special case when exp is 0. + It is expected to work. + """ + with When("I use it with order by"): + expected = convert_output(""" + number | sum + ---------+------ + 1 | 2 + 1 | 2 + 2 | 2 + 3 | 3 + """) + + execute_query("SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 0 FOLLOWING AND CURRENT ROW) AS sum FROM values('number Int8', (1),(1),(2),(3))", + expected=expected + ) + + with And("I use it without order by"): + expected = convert_output(""" + number | sum + ---------+------ + 1 | 7 + 1 | 7 + 2 | 7 + 3 | 7 + """) + + execute_query( + "SELECT number,sum(number) OVER (RANGE BETWEEN 0 FOLLOWING AND CURRENT ROW) AS sum FROM values('number Int8', (1),(1),(2),(3))", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_UnboundedFollowing_WithOrderBy("1.0") +) +def between_expr_following_and_unbounded_following_with_order_by(self): + """Check range between expr following and unbounded following range with order by. + """ + expected = convert_output(""" + number | sum + ---------+------ + 1 | 5 + 1 | 5 + 2 | 3 + 3 | 0 + """) + + + execute_query( + "SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 1 FOLLOWING AND UNBOUNDED FOLLOWING) AS sum FROM values('number Int8', (1),(1),(2),(3))", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_ExprPreceding_WithOrderBy_ZeroSpecialCase("1.0") +) +def between_expr_following_and_expr_preceding_with_order_by_zero_special_case(self): + """Check range between expr following and expr preceding frame for special case when exp is 0. + It is expected to work. + """ + expected = convert_output(""" + number | sum + ---------+------ + 1 | 2 + 1 | 2 + 2 | 2 + 3 | 3 + """) + + execute_query("SELECT number,sum(number) OVER (ORDER BY number RANGE BETWEEN 0 FOLLOWING AND 0 PRECEDING) AS sum FROM values('number Int8', (1),(1),(2),(3))", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_Between_ExprFollowing_ExprFollowing_WithOrderBy("1.0") +) +def between_expr_following_and_expr_following_with_order_by(self): + """Check range between expr following and expr following frame with order by when frame start + is before frame end. 
+ """ + expected = convert_output(""" + empno | depname | salary | sum + --------+-----------+--------+--------- + 1 | sales | 5000 | 6000 + 2 | personnel | 3900 | 14100 + 3 | sales | 4800 | 0 + 4 | sales | 4800 | 0 + 5 | personnel | 3500 | 8700 + 7 | develop | 4200 | 25000 + 8 | develop | 6000 | 0 + 9 | develop | 4500 | 15400 + 10 | develop | 5200 | 6000 + 11 | develop | 5200 | 6000 + """) + + execute_query( + "SELECT * FROM (SELECT empno, depname, salary, sum(salary) OVER (ORDER BY salary RANGE BETWEEN 500 FOLLOWING AND 1000 FOLLOWING) AS sum FROM empsalary) ORDER BY empno", + expected=expected + ) + +@TestScenario +def between_unbounded_preceding_and_current_row_with_expressions_in_order_by_and_aggregate(self): + """Check range between unbounded prceding and current row with + expression used in the order by clause and aggregate functions. + """ + expected = convert_output(""" + four | two | sum | last_value + ------+-----+-----+------------ + 0 | 0 | 0 | 0 + 0 | 0 | 0 | 0 + 0 | 1 | 2 | 1 + 0 | 1 | 2 | 1 + 0 | 2 | 4 | 2 + 1 | 0 | 0 | 0 + 1 | 0 | 0 | 0 + 1 | 1 | 2 | 1 + 1 | 1 | 2 | 1 + 1 | 2 | 4 | 2 + 2 | 0 | 0 | 0 + 2 | 0 | 0 | 0 + 2 | 1 | 2 | 1 + 2 | 1 | 2 | 1 + 2 | 2 | 4 | 2 + 3 | 0 | 0 | 0 + 3 | 0 | 0 | 0 + 3 | 1 | 2 | 1 + 3 | 1 | 2 | 1 + 3 | 2 | 4 | 2 + """) + + execute_query( + "SELECT four, toInt8(ten/4) as two, " + "sum(toInt8(ten/4)) over (partition by four order by toInt8(ten/4) range between unbounded preceding and current row) AS sum, " + "last_value(toInt8(ten/4)) over (partition by four order by toInt8(ten/4) range between unbounded preceding and current row) AS last_value " + "FROM (select distinct ten, four from tenk1)", + expected=expected + ) + +@TestScenario +def between_current_row_and_unbounded_following_modifying_named_window(self): + """Check range between current row and unbounded following when + modifying named window. + """ + expected = convert_output(""" + sum | unique1 | four + -----+---------+------ + 45 | 0 | 0 + 45 | 8 | 0 + 45 | 4 | 0 + 33 | 5 | 1 + 33 | 9 | 1 + 33 | 1 | 1 + 18 | 6 | 2 + 18 | 2 | 2 + 10 | 3 | 3 + 10 | 7 | 3 + """) + + execute_query( + "SELECT * FROM (SELECT sum(unique1) over (w range between current row and unbounded following) AS sum," + "unique1, four " + "FROM tenk1 WHERE unique1 < 10 WINDOW w AS (order by four)) ORDER BY unique1", + expected=expected + ) + +@TestScenario +def between_current_row_and_unbounded_following_in_named_window(self): + """Check range between current row and unbounded following in named window. + """ + expected = convert_output(""" + first_value | last_value | unique1 | four + -------------+------------+---------+------ + 0 | 9 | 0 | 0 + 1 | 9 | 1 | 1 + 2 | 9 | 2 | 2 + 3 | 9 | 3 | 3 + 4 | 9 | 4 | 0 + 5 | 9 | 5 | 1 + 6 | 9 | 6 | 2 + 7 | 9 | 7 | 3 + 8 | 9 | 8 | 0 + 9 | 9 | 9 | 1 + """) + + execute_query( + "SELECT first_value(unique1) over w AS first_value, " + "last_value(unique1) over w AS last_value, unique1, four " + "FROM tenk1 WHERE unique1 < 10 " + "WINDOW w AS (order by unique1 range between current row and unbounded following)", + expected=expected + ) + +@TestScenario +def between_expr_preceding_and_expr_following_with_partition_by_two_columns(self): + """Check range between n preceding and n following frame with partition + by two int value columns. 
+ """ + expected = convert_output(""" + f1 | sum + ----+----- + 1 | 0 + 2 | 0 + """) + + execute_query( + """ + select f1, sum(f1) over (partition by f1, f2 order by f2 + range between 1 following and 2 following) AS sum + from t1 where f1 = f2 + """, + expected=expected + ) + +@TestScenario +def between_expr_preceding_and_expr_following_with_partition_by_same_column_twice(self): + """Check range between n preceding and n folowing with partition + by the same column twice. + """ + expected = convert_output(""" + f1 | sum + ----+----- + 1 | 0 + 2 | 0 + """) + + execute_query( + """ + select * from (select f1, sum(f1) over (partition by f1, f1 order by f2 + range between 2 preceding and 1 preceding) AS sum + from t1 where f1 = f2) order by f1, sum + """, + expected=expected + ) + +@TestScenario +def between_expr_preceding_and_expr_following_with_partition_and_order_by(self): + """Check range between expr preceding and expr following frame used + with partition by and order by clauses. + """ + expected = convert_output(""" + f1 | sum + ----+----- + 1 | 1 + 2 | 2 + """) + + execute_query( + """ + select f1, sum(f1) over (partition by f1 order by f2 + range between 1 preceding and 1 following) AS sum + from t1 where f1 = f2 + """, + expected=expected + ) + +@TestScenario +def order_by_decimal(self): + """Check using range with order by decimal column. + """ + expected = convert_output(""" + id | f_numeric | first_value | last_value + ----+-----------+-------------+------------ + 0 | -1000 | 0 | 0 + 1 | -3 | 1 | 1 + 2 | -1 | 2 | 3 + 3 | 0 | 2 | 4 + 4 | 1.1 | 4 | 6 + 5 | 1.12 | 4 | 6 + 6 | 2 | 4 | 6 + 7 | 100 | 7 | 7 + 8 | 1000 | 8 | 8 + 9 | 0 | 9 | 9 + """) + + execute_query( + """ + select id, f_numeric, first_value(id) over w AS first_value, last_value(id) over w AS last_value + from numerics + window w as (order by f_numeric range between + 1 preceding and 1 following) + """, + expected=expected + ) + +@TestScenario +def order_by_float(self): + """Check using range with order by float column. + """ + expected = convert_output(""" + id | f_float4 | first_value | last_value + ----+-----------+-------------+------------ + 0 | -inf | 0 | 0 + 1 | -3 | 1 | 1 + 2 | -1 | 2 | 3 + 3 | 0 | 2 | 3 + 4 | 1.1 | 4 | 6 + 5 | 1.12 | 4 | 6 + 6 | 2 | 4 | 6 + 7 | 100 | 7 | 7 + 8 | inf | 8 | 8 + 9 | nan | 8 | 8 + """) + + execute_query( + """ + select id, f_float4, first_value(id) over w AS first_value, last_value(id) over w AS last_value + from numerics + window w as (order by f_float4 range between + 1 preceding and 1 following) + """, + expected=expected + ) + +@TestScenario +def with_nulls(self): + """Check using range frame over window with nulls. + """ + expected = convert_output(""" + x | y | first_value | last_value + ---+----+-------------+------------ + \\N | 42 | 42 | 43 + \\N | 43 | 42 | 43 + 1 | 1 | 1 | 3 + 2 | 2 | 1 | 4 + 3 | 3 | 1 | 5 + 4 | 4 | 2 | 5 + 5 | 5 | 3 | 5 + """) + + execute_query( + """ + select x, y, + first_value(y) over w AS first_value, + last_value(y) over w AS last_value + from + (select number as x, x as y from numbers(1,5) + union all select null, 42 + union all select null, 43) + window w as + (order by x asc nulls first range between 2 preceding and 2 following) + """, + expected=expected + ) + +@TestFeature +@Name("range frame") +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame("1.0"), + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame_DataTypes_IntAndUInt("1.0") +) +def feature(self): + """Check defining range frame. 
+ """ + for scenario in loads(current_module(), Scenario): + Scenario(run=scenario, flags=TE) diff --git a/tests/testflows/window_functions/tests/range_overflow.py b/tests/testflows/window_functions/tests/range_overflow.py new file mode 100644 index 00000000000..0c66e54c8ee --- /dev/null +++ b/tests/testflows/window_functions/tests/range_overflow.py @@ -0,0 +1,135 @@ +from testflows.core import * + +from window_functions.requirements import * +from window_functions.tests.common import * + +@TestScenario +def positive_overflow_with_Int16(self): + """Check positive overflow with Int16. + """ + expected = convert_output(""" + x | last_value + -------+------------ + 32764 | 0 + 32765 | 0 + 32766 | 0 + """) + + execute_query( + """ + select number as x, last_value(x) over (order by toInt16(x) range between current row and 2147450884 following) AS last_value + from numbers(32764, 3) + """, + expected=expected + ) + +@TestScenario +def negative_overflow_with_Int16(self): + """Check negative overflow with Int16. + """ + expected = convert_output(""" + x | last_value + --------+------------ + -32764 | 0 + -32765 | 0 + -32766 | 0 + """) + + execute_query( + """ + select number as x, last_value(x) over (order by toInt16(x) desc range between current row and 2147450885 following) as last_value + from (SELECT -number - 32763 AS number FROM numbers(1, 3)) + """, + expected=expected + ) + +@TestScenario +def positive_overflow_for_Int32(self): + """Check positive overflow for Int32. + """ + expected = convert_output(""" + x | last_value + ------------+------------ + 2147483644 | 2147483646 + 2147483645 | 2147483646 + 2147483646 | 2147483646 + """) + + execute_query( + """ + select number as x, last_value(x) over (order by x range between current row and 4 following) as last_value + from numbers(2147483644, 3) + """, + expected=expected + ) + +@TestScenario +def negative_overflow_for_Int32(self): + """Check negative overflow for Int32. + """ + expected = convert_output(""" + x | last_value + -------------+------------- + -2147483644 | -2147483646 + -2147483645 | -2147483646 + -2147483646 | -2147483646 + """) + + execute_query( + """ + select number as x, last_value(x) over (order by x desc range between current row and 5 following) as last_value + from (select -number-2147483643 AS number FROM numbers(1,3)) + """, + expected=expected + ) + +@TestScenario +def positive_overflow_for_Int64(self): + """Check positive overflow for Int64. + """ + expected = convert_output(""" + x | last_value + ---------------------+--------------------- + 9223372036854775804 | 9223372036854775806 + 9223372036854775805 | 9223372036854775806 + 9223372036854775806 | 9223372036854775806 + """) + + execute_query( + """ + select number as x, last_value(x) over (order by x range between current row and 4 following) as last_value + from numbers(9223372036854775804, 3) + """, + expected=expected + ) + +@TestScenario +def negative_overflow_for_Int64(self): + """Check negative overflow for Int64. 
+ """ + expected = convert_output(""" + x | last_value + ----------------------+---------------------- + -9223372036854775804 | -9223372036854775806 + -9223372036854775805 | -9223372036854775806 + -9223372036854775806 | -9223372036854775806 + """) + + execute_query( + """ + select number as x, last_value(x) over (order by x desc range between current row and 5 following) as last_value + from (select -number-9223372036854775803 AS number from numbers(1,3)) + """, + expected=expected + ) + +@TestFeature +@Name("range overflow") +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RangeFrame("1.0") +) +def feature(self): + """Check using range frame with overflows. + """ + for scenario in loads(current_module(), Scenario): + Scenario(run=scenario, flags=TE) diff --git a/tests/testflows/window_functions/tests/rows_frame.py b/tests/testflows/window_functions/tests/rows_frame.py new file mode 100644 index 00000000000..07533e8d1ab --- /dev/null +++ b/tests/testflows/window_functions/tests/rows_frame.py @@ -0,0 +1,688 @@ +from testflows.core import * + +from window_functions.requirements import * +from window_functions.tests.common import * + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_MissingFrameExtent_Error("1.0") +) +def missing_frame_extent(self): + """Check that when rows frame has missing frame extent then an error is returned. + """ + exitcode, message = syntax_error() + + self.context.node.query("SELECT number,sum(number) OVER (ORDER BY number ROWS) FROM numbers(1,3)", + exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_InvalidFrameExtent_Error("1.0") +) +def invalid_frame_extent(self): + """Check that when rows frame has invalid frame extent then an error is returned. + """ + exitcode, message = frame_offset_nonnegative_error() + + self.context.node.query("SELECT number,sum(number) OVER (ORDER BY number ROWS -1) FROM numbers(1,3)", + exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Start_CurrentRow("1.0") +) +def start_current_row(self): + """Check rows current row frame. + """ + expected = convert_output(""" + empno | salary | sum + --------+--------+------- + 1 | 5000 | 5000 + 2 | 3900 | 3900 + 3 | 4800 | 4800 + 4 | 4800 | 4800 + 5 | 3500 | 3500 + 7 | 4200 | 4200 + 8 | 6000 | 6000 + 9 | 4500 | 4500 + 10 | 5200 | 5200 + 11 | 5200 | 5200 + """) + + execute_query( + "SELECT empno, salary, sum(salary) OVER (ORDER BY empno ROWS CURRENT ROW) AS sum FROM empsalary ORDER BY empno", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Start_UnboundedPreceding("1.0") +) +def start_unbounded_preceding(self): + """Check rows unbounded preceding frame. + """ + expected = convert_output(""" + empno | salary | sum + --------+--------+------- + 1 | 5000 | 5000 + 2 | 3900 | 8900 + 3 | 4800 | 13700 + 4 | 4800 | 18500 + 5 | 3500 | 22000 + 7 | 4200 | 26200 + 8 | 6000 | 32200 + 9 | 4500 | 36700 + 10 | 5200 | 41900 + 11 | 5200 | 47100 + """) + + execute_query( + "SELECT empno, salary, sum(salary) OVER (ORDER BY empno ROWS UNBOUNDED PRECEDING) AS sum FROM empsalary ORDER BY empno", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Start_ExprPreceding("1.0") +) +def start_expr_preceding(self): + """Check rows expr preceding frame. 
+ """ + expected = convert_output(""" + empno | salary | sum + --------+--------+-------- + 1 | 5000 | 5000 + 2 | 3900 | 8900 + 3 | 4800 | 8700 + 4 | 4800 | 9600 + 5 | 3500 | 8300 + 7 | 4200 | 7700 + 8 | 6000 | 10200 + 9 | 4500 | 10500 + 10 | 5200 | 9700 + 11 | 5200 | 10400 + """) + + execute_query( + "SELECT empno, salary, sum(salary) OVER (ORDER BY empno ROWS 1 PRECEDING) AS sum FROM empsalary ORDER BY empno", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Start_UnboundedFollowing_Error("1.0") +) +def start_unbounded_following_error(self): + """Check rows unbounded following frame returns an error. + """ + exitcode, message = frame_start_error() + + self.context.node.query( + "SELECT empno, salary, sum(salary) OVER (ROWS UNBOUNDED FOLLOWING) AS sum FROM empsalary ORDER BY empno", + exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Start_ExprFollowing_Error("1.0") +) +def start_expr_following_error(self): + """Check rows expr following frame returns an error. + """ + exitcode, message = window_frame_error() + + self.context.node.query( + "SELECT empno, salary, sum(salary) OVER (ROWS 1 FOLLOWING) AS sum FROM empsalary ORDER BY empno", + exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_CurrentRow_CurrentRow("1.0") +) +def between_current_row_and_current_row(self): + """Check rows between current row and current row frame. + """ + expected = convert_output(""" + empno | salary | sum + --------+--------+-------- + 1 | 5000 | 5000 + 2 | 3900 | 3900 + 3 | 4800 | 4800 + 4 | 4800 | 4800 + 5 | 3500 | 3500 + 7 | 4200 | 4200 + 8 | 6000 | 6000 + 9 | 4500 | 4500 + 10 | 5200 | 5200 + 11 | 5200 | 5200 + """) + + execute_query( + "SELECT empno, salary, sum(salary) OVER (ORDER BY empno ROWS BETWEEN CURRENT ROW AND CURRENT ROW) AS sum FROM empsalary", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_CurrentRow_ExprPreceding_Error("1.0") +) +def between_current_row_and_expr_preceding_error(self): + """Check rows between current row and expr preceding returns an error. + """ + exitcode, message = window_frame_error() + + self.context.node.query("SELECT number,sum(number) OVER (ORDER BY number ROWS BETWEEN CURRENT ROW AND 1 PRECEDING) FROM numbers(1,3)", + exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_CurrentRow_UnboundedPreceding_Error("1.0") +) +def between_current_row_and_unbounded_preceding_error(self): + """Check rows between current row and unbounded preceding returns an error. + """ + exitcode, message = frame_end_unbounded_preceding_error() + + self.context.node.query("SELECT number,sum(number) OVER (ORDER BY number ROWS BETWEEN CURRENT ROW AND UNBOUNDED PRECEDING) FROM numbers(1,3)", + exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_CurrentRow_UnboundedFollowing("1.0") +) +def between_current_row_and_unbounded_following(self): + """Check rows between current row and unbounded following. 
+ """ + expected = convert_output(""" + sum | unique1 | four + -----+---------+------ + 45 | 0 | 0 + 45 | 1 | 1 + 44 | 2 | 2 + 42 | 3 | 3 + 39 | 4 | 0 + 35 | 5 | 1 + 30 | 6 | 2 + 24 | 7 | 3 + 17 | 8 | 0 + 9 | 9 | 1 + """) + + execute_query( + "SELECT sum(unique1) over (order by unique1 rows between current row and unbounded following) AS sum," + "unique1, four " + "FROM tenk1 WHERE unique1 < 10", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_CurrentRow_ExprFollowing("1.0") +) +def between_current_row_and_expr_following(self): + """Check rows between current row and expr following. + """ + expected = convert_output(""" + i | b | bool_and | bool_or + ---+---+----------+--------- + 1 | 1 | 1 | 1 + 2 | 1 | 0 | 1 + 3 | 0 | 0 | 0 + 4 | 0 | 0 | 1 + 5 | 1 | 1 | 1 + """) + + execute_query(""" + SELECT i, b, groupBitAnd(b) OVER w AS bool_and, groupBitOr(b) OVER w AS bool_or + FROM VALUES('i Int8, b UInt8', (1,1), (2,1), (3,0), (4,0), (5,1)) + WINDOW w AS (ORDER BY i ROWS BETWEEN CURRENT ROW AND 1 FOLLOWING) + """, + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_UnboundedPreceding_CurrentRow("1.0") +) +def between_unbounded_preceding_and_current_row(self): + """Check rows between unbounded preceding and current row. + """ + expected = convert_output(""" + four | two | sum | last_value + ------+-----+-----+------------ + 0 | 0 | 0 | 0 + 0 | 0 | 0 | 0 + 0 | 1 | 1 | 1 + 0 | 1 | 2 | 1 + 0 | 2 | 4 | 2 + 1 | 0 | 0 | 0 + 1 | 0 | 0 | 0 + 1 | 1 | 1 | 1 + 1 | 1 | 2 | 1 + 1 | 2 | 4 | 2 + 2 | 0 | 0 | 0 + 2 | 0 | 0 | 0 + 2 | 1 | 1 | 1 + 2 | 1 | 2 | 1 + 2 | 2 | 4 | 2 + 3 | 0 | 0 | 0 + 3 | 0 | 0 | 0 + 3 | 1 | 1 | 1 + 3 | 1 | 2 | 1 + 3 | 2 | 4 | 2 + """) + + execute_query( + "SELECT four, toInt8(ten/4) as two," + "sum(toInt8(ten/4)) over (partition by four order by toInt8(ten/4) rows between unbounded preceding and current row) AS sum," + "last_value(toInt8(ten/4)) over (partition by four order by toInt8(ten/4) rows between unbounded preceding and current row) AS last_value " + "FROM (select distinct ten, four from tenk1)", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_UnboundedPreceding_UnboundedPreceding_Error("1.0") +) +def between_unbounded_preceding_and_unbounded_preceding_error(self): + """Check rows between unbounded preceding and unbounded preceding returns an error. + """ + exitcode, message = frame_end_unbounded_preceding_error() + + self.context.node.query("SELECT number,sum(number) OVER (ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED PRECEDING) FROM numbers(1,3)", + exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_UnboundedPreceding_ExprPreceding("1.0") +) +def between_unbounded_preceding_and_expr_preceding(self): + """Check rows between unbounded preceding and expr preceding frame. 
+ """ + expected = convert_output(""" + empno | salary | sum + --------+--------+-------- + 1 | 5000 | 0 + 2 | 3900 | 5000 + 3 | 4800 | 8900 + 4 | 4800 | 13700 + 5 | 3500 | 18500 + 7 | 4200 | 22000 + 8 | 6000 | 26200 + 9 | 4500 | 32200 + 10 | 5200 | 36700 + 11 | 5200 | 41900 + """) + + execute_query( + "SELECT empno, salary, sum(salary) OVER (ORDER BY empno ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING) AS sum FROM empsalary", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_UnboundedPreceding_UnboundedFollowing("1.0") +) +def between_unbounded_preceding_and_unbounded_following(self): + """Check rows between unbounded preceding and unbounded following frame. + """ + expected = convert_output(""" + empno | salary | sum + --------+--------+-------- + 1 | 5000 | 47100 + 2 | 3900 | 47100 + 3 | 4800 | 47100 + 4 | 4800 | 47100 + 5 | 3500 | 47100 + 7 | 4200 | 47100 + 8 | 6000 | 47100 + 9 | 4500 | 47100 + 10 | 5200 | 47100 + 11 | 5200 | 47100 + """) + + execute_query( + "SELECT empno, salary, sum(salary) OVER (ORDER BY empno ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS sum FROM empsalary", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_UnboundedPreceding_ExprFollowing("1.0") +) +def between_unbounded_preceding_and_expr_following(self): + """Check rows between unbounded preceding and expr following. + """ + expected = convert_output(""" + sum | unique1 | four + -----+---------+------ + 1 | 0 | 0 + 3 | 1 | 1 + 6 | 2 | 2 + 10 | 3 | 3 + 15 | 4 | 0 + 21 | 5 | 1 + 28 | 6 | 2 + 36 | 7 | 3 + 45 | 8 | 0 + 45 | 9 | 1 + """) + + execute_query( + "SELECT sum(unique1) over (order by unique1 rows between unbounded preceding and 1 following) AS sum," + "unique1, four " + "FROM tenk1 WHERE unique1 < 10", + expected=expected + ) + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_UnboundedFollowing_Error("1.0") +) +@Examples("range", [ + ("UNBOUNDED FOLLOWING AND CURRENT ROW",), + ("UNBOUNDED FOLLOWING AND UNBOUNDED PRECEDING",), + ("UNBOUNDED FOLLOWING AND UNBOUNDED FOLLOWING",), + ("UNBOUNDED FOLLOWING AND 1 PRECEDING",), + ("UNBOUNDED FOLLOWING AND 1 FOLLOWING",), +]) +def between_unbounded_following_error(self, range): + """Check rows between unbounded following and any end frame returns an error. + """ + exitcode, message = frame_start_error() + + self.context.node.query(f"SELECT number,sum(number) OVER (ROWS BETWEEN {range}) FROM numbers(1,3)", + exitcode=exitcode, message=message) + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_ExprFollowing_Error("1.0") +) +@Examples("range exitcode message", [ + ("1 FOLLOWING AND CURRENT ROW", *window_frame_error()), + ("1 FOLLOWING AND UNBOUNDED PRECEDING", *frame_end_unbounded_preceding_error()), + ("1 FOLLOWING AND 1 PRECEDING", *frame_start_error()) +]) +def between_expr_following_error(self, range, exitcode, message): + """Check cases when rows between expr following returns an error. 
+ """ + self.context.node.query(f"SELECT number,sum(number) OVER (ROWS BETWEEN {range}) FROM numbers(1,3)", + exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_ExprFollowing_ExprFollowing_Error("1.0") +) +def between_expr_following_and_expr_following_error(self): + """Check rows between expr following and expr following returns an error when frame end index is less + than frame start. + """ + exitcode, message = frame_start_error() + + self.context.node.query("SELECT number,sum(number) OVER (ROWS BETWEEN 1 FOLLOWING AND 0 FOLLOWING) FROM numbers(1,3)", + exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_ExprFollowing_UnboundedFollowing("1.0") +) +def between_expr_following_and_unbounded_following(self): + """Check rows between exp following and unbounded following frame. + """ + expected = convert_output(""" + empno | salary | sum + --------+--------+-------- + 1 | 5000 | 28600 + 2 | 3900 | 25100 + 3 | 4800 | 20900 + 4 | 4800 | 14900 + 5 | 3500 | 10400 + 7 | 4200 | 5200 + 8 | 6000 | 0 + 9 | 4500 | 0 + 10 | 5200 | 0 + 11 | 5200 | 0 + """) + + execute_query( + "SELECT empno, salary, sum(salary) OVER (ORDER BY empno ROWS BETWEEN 4 FOLLOWING AND UNBOUNDED FOLLOWING) AS sum FROM empsalary", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_ExprFollowing_ExprFollowing("1.0") +) +def between_expr_following_and_expr_following(self): + """Check rows between exp following and expr following frame when end of the frame is greater than + the start of the frame. + """ + expected = convert_output(""" + empno | salary | sum + --------+--------+-------- + 1 | 5000 | 17000 + 2 | 3900 | 17300 + 3 | 4800 | 18500 + 4 | 4800 | 18200 + 5 | 3500 | 19900 + 7 | 4200 | 20900 + 8 | 6000 | 14900 + 9 | 4500 | 10400 + 10 | 5200 | 5200 + 11 | 5200 | 0 + """) + + execute_query( + "SELECT empno, salary, sum(salary) OVER (ORDER BY empno ROWS BETWEEN 1 FOLLOWING AND 4 FOLLOWING) AS sum FROM empsalary", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_ExprPreceding_CurrentRow("1.0") +) +def between_expr_preceding_and_current_row(self): + """Check rows between exp preceding and current row frame. + """ + expected = convert_output(""" + empno | salary | sum + --------+--------+-------- + 8 | 6000 | 6000 + 10 | 5200 | 11200 + 11 | 5200 | 10400 + """) + + execute_query( + "SELECT empno, salary, sum(salary) OVER (ORDER BY empno ROWS BETWEEN 1 PRECEDING AND CURRENT ROW) AS sum FROM empsalary WHERE salary > 5000", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_ExprPreceding_UnboundedPreceding_Error("1.0") +) +def between_expr_preceding_and_unbounded_preceding_error(self): + """Check rows between expr preceding and unbounded preceding returns an error. + """ + exitcode, message = frame_end_error() + + self.context.node.query("SELECT number,sum(number) OVER (ROWS BETWEEN 1 PRECEDING AND UNBOUNDED PRECEDING) FROM numbers(1,3)", + exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_ExprPreceding_UnboundedFollowing("1.0") +) +def between_expr_preceding_and_unbounded_following(self): + """Check rows between exp preceding and unbounded following frame. 
+ """ + expected = convert_output(""" + empno | salary | sum + --------+--------+-------- + 8 | 6000 | 16400 + 10 | 5200 | 16400 + 11 | 5200 | 10400 + """) + + execute_query( + "SELECT empno, salary, sum(salary) OVER (ORDER BY empno ROWS BETWEEN 1 PRECEDING AND UNBOUNDED FOLLOWING) AS sum FROM empsalary WHERE salary > 5000", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_ExprPreceding_ExprPreceding_Error("1.0") +) +def between_expr_preceding_and_expr_preceding_error(self): + """Check rows between expr preceding and expr preceding returns an error when frame end is + before frame start. + """ + exitcode, message = frame_start_error() + + self.context.node.query("SELECT number,sum(number) OVER (ROWS BETWEEN 1 PRECEDING AND 2 PRECEDING) FROM numbers(1,3)", + exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_ExprPreceding_ExprPreceding("1.0") +) +def between_expr_preceding_and_expr_preceding(self): + """Check rows between expr preceding and expr preceding frame when frame end is after or at frame start. + """ + expected = convert_output(""" + empno | salary | sum + --------+--------+-------- + 1 | 5000 | 5000 + 2 | 3900 | 8900 + 3 | 4800 | 8700 + 4 | 4800 | 9600 + 5 | 3500 | 8300 + 7 | 4200 | 7700 + 8 | 6000 | 10200 + 9 | 4500 | 10500 + 10 | 5200 | 9700 + 11 | 5200 | 10400 + """) + + execute_query( + "SELECT empno, salary, sum(salary) OVER (ORDER BY empno ROWS BETWEEN 1 PRECEDING AND 0 PRECEDING) AS sum FROM empsalary", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_ExprPreceding_ExprFollowing("1.0") +) +def between_expr_preceding_and_expr_following(self): + """Check rows between expr preceding and expr following frame. + """ + expected = convert_output(""" + empno | salary | sum + --------+--------+-------- + 8 | 6000 | 11200 + 10 | 5200 | 16400 + 11 | 5200 | 10400 + """) + + execute_query( + "SELECT empno, salary, sum(salary) OVER (ORDER BY empno ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING) AS sum FROM empsalary WHERE salary > 5000", + expected=expected + ) + + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_ExprFollowing_ExprFollowing("1.0") +) +def between_expr_following_and_expr_following_ref(self): + """Check reference result for rows between expr following and expr following range. + """ + expected = convert_output(""" + sum | unique1 | four + -----+---------+------ + 6 | 0 | 0 + 9 | 1 | 1 + 12 | 2 | 2 + 15 | 3 | 3 + 18 | 4 | 0 + 21 | 5 | 1 + 24 | 6 | 2 + 17 | 7 | 3 + 9 | 8 | 0 + 0 | 9 | 1 + """) + + execute_query( + "SELECT sum(unique1) over (order by unique1 rows between 1 following and 3 following) AS sum," + "unique1, four " + "FROM tenk1 WHERE unique1 < 10", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_ExprPreceding_ExprPreceding("1.0") +) +def between_expr_preceding_and_expr_preceding_ref(self): + """Check reference result for rows between expr preceding and expr preceding frame. 
+ """ + expected = convert_output(""" + sum | unique1 | four + -----+---------+------ + 0 | 0 | 0 + 0 | 1 | 1 + 1 | 2 | 2 + 3 | 3 | 3 + 5 | 4 | 0 + 7 | 5 | 1 + 9 | 6 | 2 + 11 | 7 | 3 + 13 | 8 | 0 + 15 | 9 | 1 + """) + + execute_query( + "SELECT sum(unique1) over (order by unique1 rows between 2 preceding and 1 preceding) AS sum," + "unique1, four " + "FROM tenk1 WHERE unique1 < 10", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame_Between_ExprPreceding_ExprFollowing("1.0") +) +def between_expr_preceding_and_expr_following_ref(self): + """Check reference result for rows between expr preceding and expr following frame. + """ + expected = convert_output(""" + sum | unique1 | four + -----+---------+------ + 3 | 0 | 0 + 6 | 1 | 1 + 10 | 2 | 2 + 15 | 3 | 3 + 20 | 4 | 0 + 25 | 5 | 1 + 30 | 6 | 2 + 35 | 7 | 3 + 30 | 8 | 0 + 24 | 9 | 1 + """) + + execute_query( + "SELECT sum(unique1) over (order by unique1 rows between 2 preceding and 2 following) AS sum, " + "unique1, four " + "FROM tenk1 WHERE unique1 < 10", + expected=expected + ) + +@TestFeature +@Name("rows frame") +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_RowsFrame("1.0") +) +def feature(self): + """Check defining rows frame. + """ + for scenario in loads(current_module(), Scenario): + Scenario(run=scenario, flags=TE) diff --git a/tests/testflows/window_functions/tests/snapshots/common.py.tests.snapshot b/tests/testflows/window_functions/tests/snapshots/common.py.tests.snapshot new file mode 100644 index 00000000000..68607f86d92 --- /dev/null +++ b/tests/testflows/window_functions/tests/snapshots/common.py.tests.snapshot @@ -0,0 +1,826 @@ +func__count_salary__ = r""" +func +10 +9 +8 +7 +6 +5 +4 +3 +2 +1 +""" + +func__min_salary__ = r""" +func +3500 +3900 +4200 +4500 +4800 +4800 +5000 +5200 +5200 +6000 +""" + +func__max_salary__ = r""" +func +6000 +6000 +6000 +6000 +6000 +6000 +6000 +6000 +6000 +6000 +""" + +func__sum_salary__ = r""" +func +47100 +43600 +39700 +35500 +31000 +26200 +21400 +16400 +11200 +6000 +""" + +func__avg_salary__ = r""" +func +4710 +4844.444444444444 +4962.5 +5071.428571428572 +5166.666666666667 +5240 +5350 +5466.666666666667 +5600 +6000 +""" + +func__any_salary__ = r""" +func +3500 +3900 +4200 +4500 +4800 +4800 +5000 +5200 +5200 +6000 +""" + +func__stddevPop_salary__ = r""" +func +683.3008122342604 +581.3989089756045 +504.8205126577168 +443.08749769345 +406.88518719112545 +407.9215610874228 +384.0572873934304 +377.1236166328275 +400 +0 +""" + +func__stddevSamp_salary__ = r""" +func +720.2622979011034 +616.6666666666654 +539.6758286230726 +478.5891965429401 +445.720390678583 +456.0701700396552 +443.471156521669 +461.8802153517033 +565.685424949238 +nan +""" + +func__varPop_salary__ = r""" +func +466900 +338024.6913580232 +254843.75 +196326.53061224308 +165555.55555555722 +166400 +147500 +142222.22222222388 +160000 +0 +""" + +func__varSamp_salary__ = r""" +func +518777.77777777775 +380277.7777777761 +291250 +229047.61904761693 +198666.66666666867 +208000 +196666.66666666666 +213333.33333333582 +320000 +nan +""" + +func__covarPop_salary__2000__ = r""" +func +0 +0 +0 +0 +0 +0 +0 +0 +0 +0 +""" + +func__covarSamp_salary__2000__ = r""" +func +0 +0 +0 +0 +0 +0 +0 +0 +0 +nan +""" + +func__anyHeavy_salary__ = r""" +func +5200 +5200 +5200 +5200 +5200 +5200 +5200 +5200 +5200 +6000 +""" + +func__anyLast_salary__ = r""" +func +6000 +6000 +6000 +6000 +6000 +6000 +6000 +6000 +6000 +6000 +""" + +func__argMin_salary__5000__ = r""" +func +3500 +3900 +4200 +4500 +4800 
+4800 +5000 +5200 +5200 +6000 +""" + +func__argMax_salary__5000__ = r""" +func +3500 +3900 +4200 +4500 +4800 +4800 +5000 +5200 +5200 +6000 +""" + +func__avgWeighted_salary__1__ = r""" +func +4710 +4844.444444444444 +4962.5 +5071.428571428572 +5166.666666666667 +5240 +5350 +5466.666666666667 +5600 +6000 +""" + +func__corr_salary__0_5__ = r""" +func +nan +nan +nan +nan +nan +nan +nan +nan +nan +nan +""" + +func__topK_salary__ = r""" +func +[4800,5200,3500,3900,4200,4500,5000,6000] +[4800,5200,3900,4200,4500,5000,6000] +[4800,5200,4200,4500,5000,6000] +[4800,5200,4500,5000,6000] +[4800,5200,5000,6000] +[5200,4800,5000,6000] +[5200,5000,6000] +[5200,6000] +[5200,6000] +[6000] +""" + +func__topKWeighted_salary__1__ = r""" +func +[4800,5200,3500,3900,4200,4500,5000,6000] +[4800,5200,3900,4200,4500,5000,6000] +[4800,5200,4200,4500,5000,6000] +[4800,5200,4500,5000,6000] +[4800,5200,5000,6000] +[5200,4800,5000,6000] +[5200,5000,6000] +[5200,6000] +[5200,6000] +[6000] +""" + +func__groupArray_salary__ = r""" +func +[3500,3900,4200,4500,4800,4800,5000,5200,5200,6000] +[3900,4200,4500,4800,4800,5000,5200,5200,6000] +[4200,4500,4800,4800,5000,5200,5200,6000] +[4500,4800,4800,5000,5200,5200,6000] +[4800,4800,5000,5200,5200,6000] +[4800,5000,5200,5200,6000] +[5000,5200,5200,6000] +[5200,5200,6000] +[5200,6000] +[6000] +""" + +func__groupUniqArray_salary__ = r""" +func +[3500,5000,6000,3900,4800,5200,4200,4500] +[5000,6000,3900,4800,5200,4200,4500] +[5000,6000,4800,5200,4200,4500] +[5000,6000,4800,5200,4500] +[5000,6000,4800,5200] +[5000,6000,4800,5200] +[5000,6000,5200] +[6000,5200] +[6000,5200] +[6000] +""" + +func__groupArrayInsertAt_salary__0__ = r""" +func +[3500] +[3900] +[4200] +[4500] +[4800] +[4800] +[5000] +[5200] +[5200] +[6000] +""" + +func__groupArrayMovingSum_salary__ = r""" +func +[3500,7400,11600,16100,20900,25700,30700,35900,41100,47100] +[3900,8100,12600,17400,22200,27200,32400,37600,43600] +[4200,8700,13500,18300,23300,28500,33700,39700] +[4500,9300,14100,19100,24300,29500,35500] +[4800,9600,14600,19800,25000,31000] +[4800,9800,15000,20200,26200] +[5000,10200,15400,21400] +[5200,10400,16400] +[5200,11200] +[6000] +""" + +func__groupArrayMovingAvg_salary__ = r""" +func +[350,740,1160,1610,2090,2570,3070,3590,4110,4710] +[433.3333333333333,900,1400,1933.3333333333333,2466.6666666666665,3022.222222222222,3600,4177.777777777777,4844.444444444444] +[525,1087.5,1687.5,2287.5,2912.5,3562.5,4212.5,4962.5] +[642.8571428571429,1328.5714285714287,2014.2857142857142,2728.5714285714284,3471.4285714285716,4214.285714285715,5071.428571428572] +[800,1600,2433.3333333333335,3300,4166.666666666667,5166.666666666667] +[960,1960,3000,4040,5240] +[1250,2550,3850,5350] +[1733.3333333333333,3466.6666666666665,5466.666666666667] +[2600,5600] +[6000] +""" + +func__groupArraySample_3__1234__salary__ = r""" +func +[6000,4800,4200] +[4800,5000,4500] +[4800,5200,4800] +[5000,5200,4800] +[5200,6000,5000] +[5200,6000,5200] +[6000,5200,5200] +[5200,5200,6000] +[5200,6000] +[6000] +""" + +func__groupBitAnd_toUInt8_salary___ = r""" +func +0 +0 +0 +0 +0 +0 +0 +80 +80 +112 +""" + +func__groupBitOr_toUInt8_salary___ = r""" +func +252 +252 +252 +252 +248 +248 +248 +112 +112 +112 +""" + +func__groupBitXor_toUInt8_salary___ = r""" +func +148 +56 +4 +108 +248 +56 +248 +112 +32 +112 +""" + +func__groupBitmap_toUInt8_salary___ = r""" +func +8 +7 +6 +5 +4 +4 +3 +2 +2 +1 +""" + +func__sumWithOverflow_salary__ = r""" +func +47100 +43600 +39700 +35500 +31000 +26200 +21400 +16400 +11200 +6000 +""" + +func__deltaSum_salary__ = r""" 
+func +2500 +2100 +1800 +1500 +1200 +1200 +1000 +800 +800 +0 +""" + +func__sumMap__5000____salary___ = r""" +func +([5000],[47100]) +([5000],[43600]) +([5000],[39700]) +([5000],[35500]) +([5000],[31000]) +([5000],[26200]) +([5000],[21400]) +([5000],[16400]) +([5000],[11200]) +([5000],[6000]) +""" + +func__minMap__5000____salary___ = r""" +func +([5000],[3500]) +([5000],[3900]) +([5000],[4200]) +([5000],[4500]) +([5000],[4800]) +([5000],[4800]) +([5000],[5000]) +([5000],[5200]) +([5000],[5200]) +([5000],[6000]) +""" + +func__maxMap__5000____salary___ = r""" +func +([5000],[6000]) +([5000],[6000]) +([5000],[6000]) +([5000],[6000]) +([5000],[6000]) +([5000],[6000]) +([5000],[6000]) +([5000],[6000]) +([5000],[6000]) +([5000],[6000]) +""" + +func__skewPop_salary__ = r""" +func +-0.01162261667454972 +0.2745338273905704 +0.5759615484689373 +0.9491407966637483 +1.1766149730944095 +1.0013237284459204 +0.9929662701927243 +0.7071067811864931 +0 +nan +""" + +func__skewSamp_salary__ = r""" +func +-0.009923564086909852 +0.2300737552746305 +0.4714173586339343 +0.7532002517703689 +0.8950813364751975 +0.7164889357723577 +0.6449505113159867 +0.3849001794597209 +0 +nan +""" + +func__kurtPop_salary__ = r""" +func +2.539217051205756 +2.7206630060048402 +2.983661891140213 +3.193064086003685 +3.1570199540482906 +2.7045118343195265 +2.235277219189888 +1.499999999988063 +1 +nan +""" + +func__kurtSamp_salary__ = r""" +func +2.0567658114766627 +2.1496596590655526 +2.2843661354042255 +2.3459246346149527 +2.1923749680890907 +1.730887573964497 +1.2573434357943123 +0.6666666666613614 +0.25 +nan +""" + +func__uniq_salary__ = r""" +func +8 +7 +6 +5 +4 +4 +3 +2 +2 +1 +""" + +func__uniqExact_salary__ = r""" +func +8 +7 +6 +5 +4 +4 +3 +2 +2 +1 +""" + +func__uniqCombined_salary__ = r""" +func +8 +7 +6 +5 +4 +4 +3 +2 +2 +1 +""" + +func__uniqCombined64_salary__ = r""" +func +8 +7 +6 +5 +4 +4 +3 +2 +2 +1 +""" + +func__uniqHLL12_salary__ = r""" +func +8 +7 +6 +5 +4 +4 +3 +2 +2 +1 +""" + +func__quantile_salary__ = r""" +func +4800 +4800 +4900 +5000 +5100 +5200 +5200 +5200 +5600 +6000 +""" + +func__quantiles_0_5__salary__ = r""" +func +[4800] +[4800] +[4900] +[5000] +[5100] +[5200] +[5200] +[5200] +[5600] +[6000] +""" + +func__quantileExact_salary__ = r""" +func +4800 +4800 +5000 +5000 +5200 +5200 +5200 +5200 +6000 +6000 +""" + +func__quantileExactWeighted_salary__1__ = r""" +func +4800 +4800 +4800 +5000 +5000 +5200 +5200 +5200 +5200 +6000 +""" + +func__quantileTiming_salary__ = r""" +func +4800 +4800 +5000 +5000 +5200 +5200 +5200 +5200 +6000 +6000 +""" + +func__quantileTimingWeighted_salary__1__ = r""" +func +4800 +4800 +5000 +5000 +5200 +5200 +5200 +5200 +6000 +6000 +""" + +func__quantileDeterministic_salary__1234__ = r""" +func +4800 +4800 +4900 +5000 +5100 +5200 +5200 +5200 +5600 +6000 +""" + +func__quantileTDigest_salary__ = r""" +func +4800 +4800 +4800 +5000 +5000 +5200 +5200 +5200 +5200 +6000 +""" + +func__quantileTDigestWeighted_salary__1__ = r""" +func +4800 +4800 +4800 +5000 +5000 +5200 +5200 +5200 +5200 +6000 +""" + +func__simpleLinearRegression_salary__empno__ = r""" +func +(0.0017991004497751124,-2.473763118440779) +(0.0023192111029948868,-5.12417823228634) +(0.0013182096873083997,0.08338442673206625) +(0.0021933471933471933,-4.551975051975051) +(0.004664429530201342,-17.93288590604027) +(0.003894230769230769,-13.60576923076923) +(0.00288135593220339,-7.915254237288137) +(-0.003125,26.75) +(-0.00375,30.5) +(nan,nan) +""" + +func__stochasticLinearRegression_salary__1__ = r""" +func +[0,0] +[0,0] +[0,0] +[0,0] +[0,0] 
+[0,0] +[0,0] +[0,0] +[0,0] +[0,0] +""" + +func__stochasticLogisticRegression_salary__1__ = r""" +func +[0,0] +[0,0] +[0,0] +[0,0] +[0,0] +[0,0] +[0,0] +[0,0] +[0,0] +[0,0] +""" + +func__studentTTest_salary__1__ = r""" +func +(nan,0) +(nan,0) +(nan,0) +(nan,0) +(nan,0) +(nan,0) +(nan,0) +(nan,0) +(nan,0) +(nan,0) +""" + +func__welchTTest_salary__1__ = r""" +func +(nan,0) +(nan,0) +(nan,0) +(nan,0) +(nan,0) +(nan,0) +(nan,0) +(nan,0) +(nan,0) +(nan,0) +""" + +func__median_salary__ = r""" +func +4800 +4800 +4900 +5000 +5100 +5200 +5200 +5200 +5600 +6000 +""" + diff --git a/tests/testflows/window_functions/tests/tenk.data b/tests/testflows/window_functions/tests/tenk.data new file mode 100644 index 00000000000..c9064c9c032 --- /dev/null +++ b/tests/testflows/window_functions/tests/tenk.data @@ -0,0 +1,10000 @@ +8800 0 0 0 0 0 0 800 800 3800 8800 0 1 MAAAAA AAAAAA AAAAxx +1891 1 1 3 1 11 91 891 1891 1891 1891 182 183 TUAAAA BAAAAA HHHHxx +3420 2 0 0 0 0 20 420 1420 3420 3420 40 41 OBAAAA CAAAAA OOOOxx +9850 3 0 2 0 10 50 850 1850 4850 9850 100 101 WOAAAA DAAAAA VVVVxx +7164 4 0 0 4 4 64 164 1164 2164 7164 128 129 OPAAAA EAAAAA AAAAxx +8009 5 1 1 9 9 9 9 9 3009 8009 18 19 BWAAAA FAAAAA HHHHxx +5057 6 1 1 7 17 57 57 1057 57 5057 114 115 NMAAAA GAAAAA OOOOxx +6701 7 1 1 1 1 1 701 701 1701 6701 2 3 TXAAAA HAAAAA VVVVxx +4321 8 1 1 1 1 21 321 321 4321 4321 42 43 FKAAAA IAAAAA AAAAxx +3043 9 1 3 3 3 43 43 1043 3043 3043 86 87 BNAAAA JAAAAA HHHHxx +1314 10 0 2 4 14 14 314 1314 1314 1314 28 29 OYAAAA KAAAAA OOOOxx +1504 11 0 0 4 4 4 504 1504 1504 1504 8 9 WFAAAA LAAAAA VVVVxx +5222 12 0 2 2 2 22 222 1222 222 5222 44 45 WSAAAA MAAAAA AAAAxx +6243 13 1 3 3 3 43 243 243 1243 6243 86 87 DGAAAA NAAAAA HHHHxx +5471 14 1 3 1 11 71 471 1471 471 5471 142 143 LCAAAA OAAAAA OOOOxx +5006 15 0 2 6 6 6 6 1006 6 5006 12 13 OKAAAA PAAAAA VVVVxx +5387 16 1 3 7 7 87 387 1387 387 5387 174 175 FZAAAA QAAAAA AAAAxx +5785 17 1 1 5 5 85 785 1785 785 5785 170 171 NOAAAA RAAAAA HHHHxx +6621 18 1 1 1 1 21 621 621 1621 6621 42 43 RUAAAA SAAAAA OOOOxx +6969 19 1 1 9 9 69 969 969 1969 6969 138 139 BIAAAA TAAAAA VVVVxx +9460 20 0 0 0 0 60 460 1460 4460 9460 120 121 WZAAAA UAAAAA AAAAxx +59 21 1 3 9 19 59 59 59 59 59 118 119 HCAAAA VAAAAA HHHHxx +8020 22 0 0 0 0 20 20 20 3020 8020 40 41 MWAAAA WAAAAA OOOOxx +7695 23 1 3 5 15 95 695 1695 2695 7695 190 191 ZJAAAA XAAAAA VVVVxx +3442 24 0 2 2 2 42 442 1442 3442 3442 84 85 KCAAAA YAAAAA AAAAxx +5119 25 1 3 9 19 19 119 1119 119 5119 38 39 XOAAAA ZAAAAA HHHHxx +646 26 0 2 6 6 46 646 646 646 646 92 93 WYAAAA ABAAAA OOOOxx +9605 27 1 1 5 5 5 605 1605 4605 9605 10 11 LFAAAA BBAAAA VVVVxx +263 28 1 3 3 3 63 263 263 263 263 126 127 DKAAAA CBAAAA AAAAxx +3269 29 1 1 9 9 69 269 1269 3269 3269 138 139 TVAAAA DBAAAA HHHHxx +1839 30 1 3 9 19 39 839 1839 1839 1839 78 79 TSAAAA EBAAAA OOOOxx +9144 31 0 0 4 4 44 144 1144 4144 9144 88 89 SNAAAA FBAAAA VVVVxx +2513 32 1 1 3 13 13 513 513 2513 2513 26 27 RSAAAA GBAAAA AAAAxx +8850 33 0 2 0 10 50 850 850 3850 8850 100 101 KCAAAA HBAAAA HHHHxx +236 34 0 0 6 16 36 236 236 236 236 72 73 CJAAAA IBAAAA OOOOxx +3162 35 0 2 2 2 62 162 1162 3162 3162 124 125 QRAAAA JBAAAA VVVVxx +4380 36 0 0 0 0 80 380 380 4380 4380 160 161 MMAAAA KBAAAA AAAAxx +8095 37 1 3 5 15 95 95 95 3095 8095 190 191 JZAAAA LBAAAA HHHHxx +209 38 1 1 9 9 9 209 209 209 209 18 19 BIAAAA MBAAAA OOOOxx +3055 39 1 3 5 15 55 55 1055 3055 3055 110 111 NNAAAA NBAAAA VVVVxx +6921 40 1 1 1 1 21 921 921 1921 6921 42 43 FGAAAA OBAAAA AAAAxx +7046 41 0 2 6 6 46 46 1046 2046 7046 92 93 ALAAAA PBAAAA 
HHHHxx +7912 42 0 0 2 12 12 912 1912 2912 7912 24 25 ISAAAA QBAAAA OOOOxx +7267 43 1 3 7 7 67 267 1267 2267 7267 134 135 NTAAAA RBAAAA VVVVxx +3599 44 1 3 9 19 99 599 1599 3599 3599 198 199 LIAAAA SBAAAA AAAAxx +923 45 1 3 3 3 23 923 923 923 923 46 47 NJAAAA TBAAAA HHHHxx +1437 46 1 1 7 17 37 437 1437 1437 1437 74 75 HDAAAA UBAAAA OOOOxx +6439 47 1 3 9 19 39 439 439 1439 6439 78 79 RNAAAA VBAAAA VVVVxx +6989 48 1 1 9 9 89 989 989 1989 6989 178 179 VIAAAA WBAAAA AAAAxx +8798 49 0 2 8 18 98 798 798 3798 8798 196 197 KAAAAA XBAAAA HHHHxx +5960 50 0 0 0 0 60 960 1960 960 5960 120 121 GVAAAA YBAAAA OOOOxx +5832 51 0 0 2 12 32 832 1832 832 5832 64 65 IQAAAA ZBAAAA VVVVxx +6066 52 0 2 6 6 66 66 66 1066 6066 132 133 IZAAAA ACAAAA AAAAxx +322 53 0 2 2 2 22 322 322 322 322 44 45 KMAAAA BCAAAA HHHHxx +8321 54 1 1 1 1 21 321 321 3321 8321 42 43 BIAAAA CCAAAA OOOOxx +734 55 0 2 4 14 34 734 734 734 734 68 69 GCAAAA DCAAAA VVVVxx +688 56 0 0 8 8 88 688 688 688 688 176 177 MAAAAA ECAAAA AAAAxx +4212 57 0 0 2 12 12 212 212 4212 4212 24 25 AGAAAA FCAAAA HHHHxx +9653 58 1 1 3 13 53 653 1653 4653 9653 106 107 HHAAAA GCAAAA OOOOxx +2677 59 1 1 7 17 77 677 677 2677 2677 154 155 ZYAAAA HCAAAA VVVVxx +5423 60 1 3 3 3 23 423 1423 423 5423 46 47 PAAAAA ICAAAA AAAAxx +2592 61 0 0 2 12 92 592 592 2592 2592 184 185 SVAAAA JCAAAA HHHHxx +3233 62 1 1 3 13 33 233 1233 3233 3233 66 67 JUAAAA KCAAAA OOOOxx +5032 63 0 0 2 12 32 32 1032 32 5032 64 65 OLAAAA LCAAAA VVVVxx +2525 64 1 1 5 5 25 525 525 2525 2525 50 51 DTAAAA MCAAAA AAAAxx +4450 65 0 2 0 10 50 450 450 4450 4450 100 101 EPAAAA NCAAAA HHHHxx +5778 66 0 2 8 18 78 778 1778 778 5778 156 157 GOAAAA OCAAAA OOOOxx +5852 67 0 0 2 12 52 852 1852 852 5852 104 105 CRAAAA PCAAAA VVVVxx +5404 68 0 0 4 4 4 404 1404 404 5404 8 9 WZAAAA QCAAAA AAAAxx +6223 69 1 3 3 3 23 223 223 1223 6223 46 47 JFAAAA RCAAAA HHHHxx +6133 70 1 1 3 13 33 133 133 1133 6133 66 67 XBAAAA SCAAAA OOOOxx +9112 71 0 0 2 12 12 112 1112 4112 9112 24 25 MMAAAA TCAAAA VVVVxx +7575 72 1 3 5 15 75 575 1575 2575 7575 150 151 JFAAAA UCAAAA AAAAxx +7414 73 0 2 4 14 14 414 1414 2414 7414 28 29 EZAAAA VCAAAA HHHHxx +9741 74 1 1 1 1 41 741 1741 4741 9741 82 83 RKAAAA WCAAAA OOOOxx +3767 75 1 3 7 7 67 767 1767 3767 3767 134 135 XOAAAA XCAAAA VVVVxx +9372 76 0 0 2 12 72 372 1372 4372 9372 144 145 MWAAAA YCAAAA AAAAxx +8976 77 0 0 6 16 76 976 976 3976 8976 152 153 GHAAAA ZCAAAA HHHHxx +4071 78 1 3 1 11 71 71 71 4071 4071 142 143 PAAAAA ADAAAA OOOOxx +1311 79 1 3 1 11 11 311 1311 1311 1311 22 23 LYAAAA BDAAAA VVVVxx +2604 80 0 0 4 4 4 604 604 2604 2604 8 9 EWAAAA CDAAAA AAAAxx +8840 81 0 0 0 0 40 840 840 3840 8840 80 81 ACAAAA DDAAAA HHHHxx +567 82 1 3 7 7 67 567 567 567 567 134 135 VVAAAA EDAAAA OOOOxx +5215 83 1 3 5 15 15 215 1215 215 5215 30 31 PSAAAA FDAAAA VVVVxx +5474 84 0 2 4 14 74 474 1474 474 5474 148 149 OCAAAA GDAAAA AAAAxx +3906 85 0 2 6 6 6 906 1906 3906 3906 12 13 GUAAAA HDAAAA HHHHxx +1769 86 1 1 9 9 69 769 1769 1769 1769 138 139 BQAAAA IDAAAA OOOOxx +1454 87 0 2 4 14 54 454 1454 1454 1454 108 109 YDAAAA JDAAAA VVVVxx +6877 88 1 1 7 17 77 877 877 1877 6877 154 155 NEAAAA KDAAAA AAAAxx +6501 89 1 1 1 1 1 501 501 1501 6501 2 3 BQAAAA LDAAAA HHHHxx +934 90 0 2 4 14 34 934 934 934 934 68 69 YJAAAA MDAAAA OOOOxx +4075 91 1 3 5 15 75 75 75 4075 4075 150 151 TAAAAA NDAAAA VVVVxx +3180 92 0 0 0 0 80 180 1180 3180 3180 160 161 ISAAAA ODAAAA AAAAxx +7787 93 1 3 7 7 87 787 1787 2787 7787 174 175 NNAAAA PDAAAA HHHHxx +6401 94 1 1 1 1 1 401 401 1401 6401 2 3 FMAAAA QDAAAA OOOOxx +4244 95 0 0 4 4 44 244 244 4244 4244 88 
89 GHAAAA RDAAAA VVVVxx +4591 96 1 3 1 11 91 591 591 4591 4591 182 183 PUAAAA SDAAAA AAAAxx +4113 97 1 1 3 13 13 113 113 4113 4113 26 27 FCAAAA TDAAAA HHHHxx +5925 98 1 1 5 5 25 925 1925 925 5925 50 51 XTAAAA UDAAAA OOOOxx +1987 99 1 3 7 7 87 987 1987 1987 1987 174 175 LYAAAA VDAAAA VVVVxx +8248 100 0 0 8 8 48 248 248 3248 8248 96 97 GFAAAA WDAAAA AAAAxx +4151 101 1 3 1 11 51 151 151 4151 4151 102 103 RDAAAA XDAAAA HHHHxx +8670 102 0 2 0 10 70 670 670 3670 8670 140 141 MVAAAA YDAAAA OOOOxx +6194 103 0 2 4 14 94 194 194 1194 6194 188 189 GEAAAA ZDAAAA VVVVxx +88 104 0 0 8 8 88 88 88 88 88 176 177 KDAAAA AEAAAA AAAAxx +4058 105 0 2 8 18 58 58 58 4058 4058 116 117 CAAAAA BEAAAA HHHHxx +2742 106 0 2 2 2 42 742 742 2742 2742 84 85 MBAAAA CEAAAA OOOOxx +8275 107 1 3 5 15 75 275 275 3275 8275 150 151 HGAAAA DEAAAA VVVVxx +4258 108 0 2 8 18 58 258 258 4258 4258 116 117 UHAAAA EEAAAA AAAAxx +6129 109 1 1 9 9 29 129 129 1129 6129 58 59 TBAAAA FEAAAA HHHHxx +7243 110 1 3 3 3 43 243 1243 2243 7243 86 87 PSAAAA GEAAAA OOOOxx +2392 111 0 0 2 12 92 392 392 2392 2392 184 185 AOAAAA HEAAAA VVVVxx +9853 112 1 1 3 13 53 853 1853 4853 9853 106 107 ZOAAAA IEAAAA AAAAxx +6064 113 0 0 4 4 64 64 64 1064 6064 128 129 GZAAAA JEAAAA HHHHxx +4391 114 1 3 1 11 91 391 391 4391 4391 182 183 XMAAAA KEAAAA OOOOxx +726 115 0 2 6 6 26 726 726 726 726 52 53 YBAAAA LEAAAA VVVVxx +6957 116 1 1 7 17 57 957 957 1957 6957 114 115 PHAAAA MEAAAA AAAAxx +3853 117 1 1 3 13 53 853 1853 3853 3853 106 107 FSAAAA NEAAAA HHHHxx +4524 118 0 0 4 4 24 524 524 4524 4524 48 49 ASAAAA OEAAAA OOOOxx +5330 119 0 2 0 10 30 330 1330 330 5330 60 61 AXAAAA PEAAAA VVVVxx +6671 120 1 3 1 11 71 671 671 1671 6671 142 143 PWAAAA QEAAAA AAAAxx +5314 121 0 2 4 14 14 314 1314 314 5314 28 29 KWAAAA REAAAA HHHHxx +9202 122 0 2 2 2 2 202 1202 4202 9202 4 5 YPAAAA SEAAAA OOOOxx +4596 123 0 0 6 16 96 596 596 4596 4596 192 193 UUAAAA TEAAAA VVVVxx +8951 124 1 3 1 11 51 951 951 3951 8951 102 103 HGAAAA UEAAAA AAAAxx +9902 125 0 2 2 2 2 902 1902 4902 9902 4 5 WQAAAA VEAAAA HHHHxx +1440 126 0 0 0 0 40 440 1440 1440 1440 80 81 KDAAAA WEAAAA OOOOxx +5339 127 1 3 9 19 39 339 1339 339 5339 78 79 JXAAAA XEAAAA VVVVxx +3371 128 1 3 1 11 71 371 1371 3371 3371 142 143 RZAAAA YEAAAA AAAAxx +4467 129 1 3 7 7 67 467 467 4467 4467 134 135 VPAAAA ZEAAAA HHHHxx +6216 130 0 0 6 16 16 216 216 1216 6216 32 33 CFAAAA AFAAAA OOOOxx +5364 131 0 0 4 4 64 364 1364 364 5364 128 129 IYAAAA BFAAAA VVVVxx +7547 132 1 3 7 7 47 547 1547 2547 7547 94 95 HEAAAA CFAAAA AAAAxx +4338 133 0 2 8 18 38 338 338 4338 4338 76 77 WKAAAA DFAAAA HHHHxx +3481 134 1 1 1 1 81 481 1481 3481 3481 162 163 XDAAAA EFAAAA OOOOxx +826 135 0 2 6 6 26 826 826 826 826 52 53 UFAAAA FFAAAA VVVVxx +3647 136 1 3 7 7 47 647 1647 3647 3647 94 95 HKAAAA GFAAAA AAAAxx +3337 137 1 1 7 17 37 337 1337 3337 3337 74 75 JYAAAA HFAAAA HHHHxx +3591 138 1 3 1 11 91 591 1591 3591 3591 182 183 DIAAAA IFAAAA OOOOxx +7192 139 0 0 2 12 92 192 1192 2192 7192 184 185 QQAAAA JFAAAA VVVVxx +1078 140 0 2 8 18 78 78 1078 1078 1078 156 157 MPAAAA KFAAAA AAAAxx +1310 141 0 2 0 10 10 310 1310 1310 1310 20 21 KYAAAA LFAAAA HHHHxx +9642 142 0 2 2 2 42 642 1642 4642 9642 84 85 WGAAAA MFAAAA OOOOxx +39 143 1 3 9 19 39 39 39 39 39 78 79 NBAAAA NFAAAA VVVVxx +8682 144 0 2 2 2 82 682 682 3682 8682 164 165 YVAAAA OFAAAA AAAAxx +1794 145 0 2 4 14 94 794 1794 1794 1794 188 189 ARAAAA PFAAAA HHHHxx +5630 146 0 2 0 10 30 630 1630 630 5630 60 61 OIAAAA QFAAAA OOOOxx +6748 147 0 0 8 8 48 748 748 1748 6748 96 97 OZAAAA RFAAAA VVVVxx +3766 148 0 2 6 6 66 766 1766 
3766 3766 132 133 WOAAAA SFAAAA AAAAxx +6403 149 1 3 3 3 3 403 403 1403 6403 6 7 HMAAAA TFAAAA HHHHxx +175 150 1 3 5 15 75 175 175 175 175 150 151 TGAAAA UFAAAA OOOOxx +2179 151 1 3 9 19 79 179 179 2179 2179 158 159 VFAAAA VFAAAA VVVVxx +7897 152 1 1 7 17 97 897 1897 2897 7897 194 195 TRAAAA WFAAAA AAAAxx +2760 153 0 0 0 0 60 760 760 2760 2760 120 121 ECAAAA XFAAAA HHHHxx +1675 154 1 3 5 15 75 675 1675 1675 1675 150 151 LMAAAA YFAAAA OOOOxx +2564 155 0 0 4 4 64 564 564 2564 2564 128 129 QUAAAA ZFAAAA VVVVxx +157 156 1 1 7 17 57 157 157 157 157 114 115 BGAAAA AGAAAA AAAAxx +8779 157 1 3 9 19 79 779 779 3779 8779 158 159 RZAAAA BGAAAA HHHHxx +9591 158 1 3 1 11 91 591 1591 4591 9591 182 183 XEAAAA CGAAAA OOOOxx +8732 159 0 0 2 12 32 732 732 3732 8732 64 65 WXAAAA DGAAAA VVVVxx +139 160 1 3 9 19 39 139 139 139 139 78 79 JFAAAA EGAAAA AAAAxx +5372 161 0 0 2 12 72 372 1372 372 5372 144 145 QYAAAA FGAAAA HHHHxx +1278 162 0 2 8 18 78 278 1278 1278 1278 156 157 EXAAAA GGAAAA OOOOxx +4697 163 1 1 7 17 97 697 697 4697 4697 194 195 RYAAAA HGAAAA VVVVxx +8610 164 0 2 0 10 10 610 610 3610 8610 20 21 ETAAAA IGAAAA AAAAxx +8180 165 0 0 0 0 80 180 180 3180 8180 160 161 QCAAAA JGAAAA HHHHxx +2399 166 1 3 9 19 99 399 399 2399 2399 198 199 HOAAAA KGAAAA OOOOxx +615 167 1 3 5 15 15 615 615 615 615 30 31 RXAAAA LGAAAA VVVVxx +7629 168 1 1 9 9 29 629 1629 2629 7629 58 59 LHAAAA MGAAAA AAAAxx +7628 169 0 0 8 8 28 628 1628 2628 7628 56 57 KHAAAA NGAAAA HHHHxx +4659 170 1 3 9 19 59 659 659 4659 4659 118 119 FXAAAA OGAAAA OOOOxx +5865 171 1 1 5 5 65 865 1865 865 5865 130 131 PRAAAA PGAAAA VVVVxx +3973 172 1 1 3 13 73 973 1973 3973 3973 146 147 VWAAAA QGAAAA AAAAxx +552 173 0 0 2 12 52 552 552 552 552 104 105 GVAAAA RGAAAA HHHHxx +708 174 0 0 8 8 8 708 708 708 708 16 17 GBAAAA SGAAAA OOOOxx +3550 175 0 2 0 10 50 550 1550 3550 3550 100 101 OGAAAA TGAAAA VVVVxx +5547 176 1 3 7 7 47 547 1547 547 5547 94 95 JFAAAA UGAAAA AAAAxx +489 177 1 1 9 9 89 489 489 489 489 178 179 VSAAAA VGAAAA HHHHxx +3794 178 0 2 4 14 94 794 1794 3794 3794 188 189 YPAAAA WGAAAA OOOOxx +9479 179 1 3 9 19 79 479 1479 4479 9479 158 159 PAAAAA XGAAAA VVVVxx +6435 180 1 3 5 15 35 435 435 1435 6435 70 71 NNAAAA YGAAAA AAAAxx +5120 181 0 0 0 0 20 120 1120 120 5120 40 41 YOAAAA ZGAAAA HHHHxx +3615 182 1 3 5 15 15 615 1615 3615 3615 30 31 BJAAAA AHAAAA OOOOxx +8399 183 1 3 9 19 99 399 399 3399 8399 198 199 BLAAAA BHAAAA VVVVxx +2155 184 1 3 5 15 55 155 155 2155 2155 110 111 XEAAAA CHAAAA AAAAxx +6690 185 0 2 0 10 90 690 690 1690 6690 180 181 IXAAAA DHAAAA HHHHxx +1683 186 1 3 3 3 83 683 1683 1683 1683 166 167 TMAAAA EHAAAA OOOOxx +6302 187 0 2 2 2 2 302 302 1302 6302 4 5 KIAAAA FHAAAA VVVVxx +516 188 0 0 6 16 16 516 516 516 516 32 33 WTAAAA GHAAAA AAAAxx +3901 189 1 1 1 1 1 901 1901 3901 3901 2 3 BUAAAA HHAAAA HHHHxx +6938 190 0 2 8 18 38 938 938 1938 6938 76 77 WGAAAA IHAAAA OOOOxx +7484 191 0 0 4 4 84 484 1484 2484 7484 168 169 WBAAAA JHAAAA VVVVxx +7424 192 0 0 4 4 24 424 1424 2424 7424 48 49 OZAAAA KHAAAA AAAAxx +9410 193 0 2 0 10 10 410 1410 4410 9410 20 21 YXAAAA LHAAAA HHHHxx +1714 194 0 2 4 14 14 714 1714 1714 1714 28 29 YNAAAA MHAAAA OOOOxx +8278 195 0 2 8 18 78 278 278 3278 8278 156 157 KGAAAA NHAAAA VVVVxx +3158 196 0 2 8 18 58 158 1158 3158 3158 116 117 MRAAAA OHAAAA AAAAxx +2511 197 1 3 1 11 11 511 511 2511 2511 22 23 PSAAAA PHAAAA HHHHxx +2912 198 0 0 2 12 12 912 912 2912 2912 24 25 AIAAAA QHAAAA OOOOxx +2648 199 0 0 8 8 48 648 648 2648 2648 96 97 WXAAAA RHAAAA VVVVxx +9385 200 1 1 5 5 85 385 1385 4385 9385 170 171 ZWAAAA SHAAAA AAAAxx 
+7545 201 1 1 5 5 45 545 1545 2545 7545 90 91 FEAAAA THAAAA HHHHxx +8407 202 1 3 7 7 7 407 407 3407 8407 14 15 JLAAAA UHAAAA OOOOxx +5893 203 1 1 3 13 93 893 1893 893 5893 186 187 RSAAAA VHAAAA VVVVxx +7049 204 1 1 9 9 49 49 1049 2049 7049 98 99 DLAAAA WHAAAA AAAAxx +6812 205 0 0 2 12 12 812 812 1812 6812 24 25 ACAAAA XHAAAA HHHHxx +3649 206 1 1 9 9 49 649 1649 3649 3649 98 99 JKAAAA YHAAAA OOOOxx +9275 207 1 3 5 15 75 275 1275 4275 9275 150 151 TSAAAA ZHAAAA VVVVxx +1179 208 1 3 9 19 79 179 1179 1179 1179 158 159 JTAAAA AIAAAA AAAAxx +969 209 1 1 9 9 69 969 969 969 969 138 139 HLAAAA BIAAAA HHHHxx +7920 210 0 0 0 0 20 920 1920 2920 7920 40 41 QSAAAA CIAAAA OOOOxx +998 211 0 2 8 18 98 998 998 998 998 196 197 KMAAAA DIAAAA VVVVxx +3958 212 0 2 8 18 58 958 1958 3958 3958 116 117 GWAAAA EIAAAA AAAAxx +6052 213 0 0 2 12 52 52 52 1052 6052 104 105 UYAAAA FIAAAA HHHHxx +8791 214 1 3 1 11 91 791 791 3791 8791 182 183 DAAAAA GIAAAA OOOOxx +5191 215 1 3 1 11 91 191 1191 191 5191 182 183 RRAAAA HIAAAA VVVVxx +4267 216 1 3 7 7 67 267 267 4267 4267 134 135 DIAAAA IIAAAA AAAAxx +2829 217 1 1 9 9 29 829 829 2829 2829 58 59 VEAAAA JIAAAA HHHHxx +6396 218 0 0 6 16 96 396 396 1396 6396 192 193 AMAAAA KIAAAA OOOOxx +9413 219 1 1 3 13 13 413 1413 4413 9413 26 27 BYAAAA LIAAAA VVVVxx +614 220 0 2 4 14 14 614 614 614 614 28 29 QXAAAA MIAAAA AAAAxx +4660 221 0 0 0 0 60 660 660 4660 4660 120 121 GXAAAA NIAAAA HHHHxx +8834 222 0 2 4 14 34 834 834 3834 8834 68 69 UBAAAA OIAAAA OOOOxx +2767 223 1 3 7 7 67 767 767 2767 2767 134 135 LCAAAA PIAAAA VVVVxx +2444 224 0 0 4 4 44 444 444 2444 2444 88 89 AQAAAA QIAAAA AAAAxx +4129 225 1 1 9 9 29 129 129 4129 4129 58 59 VCAAAA RIAAAA HHHHxx +3394 226 0 2 4 14 94 394 1394 3394 3394 188 189 OAAAAA SIAAAA OOOOxx +2705 227 1 1 5 5 5 705 705 2705 2705 10 11 BAAAAA TIAAAA VVVVxx +8499 228 1 3 9 19 99 499 499 3499 8499 198 199 XOAAAA UIAAAA AAAAxx +8852 229 0 0 2 12 52 852 852 3852 8852 104 105 MCAAAA VIAAAA HHHHxx +6174 230 0 2 4 14 74 174 174 1174 6174 148 149 MDAAAA WIAAAA OOOOxx +750 231 0 2 0 10 50 750 750 750 750 100 101 WCAAAA XIAAAA VVVVxx +8164 232 0 0 4 4 64 164 164 3164 8164 128 129 ACAAAA YIAAAA AAAAxx +4930 233 0 2 0 10 30 930 930 4930 4930 60 61 QHAAAA ZIAAAA HHHHxx +9904 234 0 0 4 4 4 904 1904 4904 9904 8 9 YQAAAA AJAAAA OOOOxx +7378 235 0 2 8 18 78 378 1378 2378 7378 156 157 UXAAAA BJAAAA VVVVxx +2927 236 1 3 7 7 27 927 927 2927 2927 54 55 PIAAAA CJAAAA AAAAxx +7155 237 1 3 5 15 55 155 1155 2155 7155 110 111 FPAAAA DJAAAA HHHHxx +1302 238 0 2 2 2 2 302 1302 1302 1302 4 5 CYAAAA EJAAAA OOOOxx +5904 239 0 0 4 4 4 904 1904 904 5904 8 9 CTAAAA FJAAAA VVVVxx +9687 240 1 3 7 7 87 687 1687 4687 9687 174 175 PIAAAA GJAAAA AAAAxx +3553 241 1 1 3 13 53 553 1553 3553 3553 106 107 RGAAAA HJAAAA HHHHxx +4447 242 1 3 7 7 47 447 447 4447 4447 94 95 BPAAAA IJAAAA OOOOxx +6878 243 0 2 8 18 78 878 878 1878 6878 156 157 OEAAAA JJAAAA VVVVxx +9470 244 0 2 0 10 70 470 1470 4470 9470 140 141 GAAAAA KJAAAA AAAAxx +9735 245 1 3 5 15 35 735 1735 4735 9735 70 71 LKAAAA LJAAAA HHHHxx +5967 246 1 3 7 7 67 967 1967 967 5967 134 135 NVAAAA MJAAAA OOOOxx +6601 247 1 1 1 1 1 601 601 1601 6601 2 3 XTAAAA NJAAAA VVVVxx +7631 248 1 3 1 11 31 631 1631 2631 7631 62 63 NHAAAA OJAAAA AAAAxx +3559 249 1 3 9 19 59 559 1559 3559 3559 118 119 XGAAAA PJAAAA HHHHxx +2247 250 1 3 7 7 47 247 247 2247 2247 94 95 LIAAAA QJAAAA OOOOxx +9649 251 1 1 9 9 49 649 1649 4649 9649 98 99 DHAAAA RJAAAA VVVVxx +808 252 0 0 8 8 8 808 808 808 808 16 17 CFAAAA SJAAAA AAAAxx +240 253 0 0 0 0 40 240 240 240 240 80 81 GJAAAA 
TJAAAA HHHHxx +5031 254 1 3 1 11 31 31 1031 31 5031 62 63 NLAAAA UJAAAA OOOOxx +9563 255 1 3 3 3 63 563 1563 4563 9563 126 127 VDAAAA VJAAAA VVVVxx +5656 256 0 0 6 16 56 656 1656 656 5656 112 113 OJAAAA WJAAAA AAAAxx +3886 257 0 2 6 6 86 886 1886 3886 3886 172 173 MTAAAA XJAAAA HHHHxx +2431 258 1 3 1 11 31 431 431 2431 2431 62 63 NPAAAA YJAAAA OOOOxx +5560 259 0 0 0 0 60 560 1560 560 5560 120 121 WFAAAA ZJAAAA VVVVxx +9065 260 1 1 5 5 65 65 1065 4065 9065 130 131 RKAAAA AKAAAA AAAAxx +8130 261 0 2 0 10 30 130 130 3130 8130 60 61 SAAAAA BKAAAA HHHHxx +4054 262 0 2 4 14 54 54 54 4054 4054 108 109 YZAAAA CKAAAA OOOOxx +873 263 1 1 3 13 73 873 873 873 873 146 147 PHAAAA DKAAAA VVVVxx +3092 264 0 0 2 12 92 92 1092 3092 3092 184 185 YOAAAA EKAAAA AAAAxx +6697 265 1 1 7 17 97 697 697 1697 6697 194 195 PXAAAA FKAAAA HHHHxx +2452 266 0 0 2 12 52 452 452 2452 2452 104 105 IQAAAA GKAAAA OOOOxx +7867 267 1 3 7 7 67 867 1867 2867 7867 134 135 PQAAAA HKAAAA VVVVxx +3753 268 1 1 3 13 53 753 1753 3753 3753 106 107 JOAAAA IKAAAA AAAAxx +7834 269 0 2 4 14 34 834 1834 2834 7834 68 69 IPAAAA JKAAAA HHHHxx +5846 270 0 2 6 6 46 846 1846 846 5846 92 93 WQAAAA KKAAAA OOOOxx +7604 271 0 0 4 4 4 604 1604 2604 7604 8 9 MGAAAA LKAAAA VVVVxx +3452 272 0 0 2 12 52 452 1452 3452 3452 104 105 UCAAAA MKAAAA AAAAxx +4788 273 0 0 8 8 88 788 788 4788 4788 176 177 ECAAAA NKAAAA HHHHxx +8600 274 0 0 0 0 0 600 600 3600 8600 0 1 USAAAA OKAAAA OOOOxx +8511 275 1 3 1 11 11 511 511 3511 8511 22 23 JPAAAA PKAAAA VVVVxx +4452 276 0 0 2 12 52 452 452 4452 4452 104 105 GPAAAA QKAAAA AAAAxx +1709 277 1 1 9 9 9 709 1709 1709 1709 18 19 TNAAAA RKAAAA HHHHxx +3440 278 0 0 0 0 40 440 1440 3440 3440 80 81 ICAAAA SKAAAA OOOOxx +9188 279 0 0 8 8 88 188 1188 4188 9188 176 177 KPAAAA TKAAAA VVVVxx +3058 280 0 2 8 18 58 58 1058 3058 3058 116 117 QNAAAA UKAAAA AAAAxx +5821 281 1 1 1 1 21 821 1821 821 5821 42 43 XPAAAA VKAAAA HHHHxx +3428 282 0 0 8 8 28 428 1428 3428 3428 56 57 WBAAAA WKAAAA OOOOxx +3581 283 1 1 1 1 81 581 1581 3581 3581 162 163 THAAAA XKAAAA VVVVxx +7523 284 1 3 3 3 23 523 1523 2523 7523 46 47 JDAAAA YKAAAA AAAAxx +3131 285 1 3 1 11 31 131 1131 3131 3131 62 63 LQAAAA ZKAAAA HHHHxx +2404 286 0 0 4 4 4 404 404 2404 2404 8 9 MOAAAA ALAAAA OOOOxx +5453 287 1 1 3 13 53 453 1453 453 5453 106 107 TBAAAA BLAAAA VVVVxx +1599 288 1 3 9 19 99 599 1599 1599 1599 198 199 NJAAAA CLAAAA AAAAxx +7081 289 1 1 1 1 81 81 1081 2081 7081 162 163 JMAAAA DLAAAA HHHHxx +1750 290 0 2 0 10 50 750 1750 1750 1750 100 101 IPAAAA ELAAAA OOOOxx +5085 291 1 1 5 5 85 85 1085 85 5085 170 171 PNAAAA FLAAAA VVVVxx +9777 292 1 1 7 17 77 777 1777 4777 9777 154 155 BMAAAA GLAAAA AAAAxx +574 293 0 2 4 14 74 574 574 574 574 148 149 CWAAAA HLAAAA HHHHxx +5984 294 0 0 4 4 84 984 1984 984 5984 168 169 EWAAAA ILAAAA OOOOxx +7039 295 1 3 9 19 39 39 1039 2039 7039 78 79 TKAAAA JLAAAA VVVVxx +7143 296 1 3 3 3 43 143 1143 2143 7143 86 87 TOAAAA KLAAAA AAAAxx +5702 297 0 2 2 2 2 702 1702 702 5702 4 5 ILAAAA LLAAAA HHHHxx +362 298 0 2 2 2 62 362 362 362 362 124 125 YNAAAA MLAAAA OOOOxx +6997 299 1 1 7 17 97 997 997 1997 6997 194 195 DJAAAA NLAAAA VVVVxx +2529 300 1 1 9 9 29 529 529 2529 2529 58 59 HTAAAA OLAAAA AAAAxx +6319 301 1 3 9 19 19 319 319 1319 6319 38 39 BJAAAA PLAAAA HHHHxx +954 302 0 2 4 14 54 954 954 954 954 108 109 SKAAAA QLAAAA OOOOxx +3413 303 1 1 3 13 13 413 1413 3413 3413 26 27 HBAAAA RLAAAA VVVVxx +9081 304 1 1 1 1 81 81 1081 4081 9081 162 163 HLAAAA SLAAAA AAAAxx +5599 305 1 3 9 19 99 599 1599 599 5599 198 199 JHAAAA TLAAAA HHHHxx +4772 306 0 0 2 12 72 772 772 
4772 4772 144 145 OBAAAA ULAAAA OOOOxx +1124 307 0 0 4 4 24 124 1124 1124 1124 48 49 GRAAAA VLAAAA VVVVxx +7793 308 1 1 3 13 93 793 1793 2793 7793 186 187 TNAAAA WLAAAA AAAAxx +4201 309 1 1 1 1 1 201 201 4201 4201 2 3 PFAAAA XLAAAA HHHHxx +7015 310 1 3 5 15 15 15 1015 2015 7015 30 31 VJAAAA YLAAAA OOOOxx +5936 311 0 0 6 16 36 936 1936 936 5936 72 73 IUAAAA ZLAAAA VVVVxx +4625 312 1 1 5 5 25 625 625 4625 4625 50 51 XVAAAA AMAAAA AAAAxx +4989 313 1 1 9 9 89 989 989 4989 4989 178 179 XJAAAA BMAAAA HHHHxx +4949 314 1 1 9 9 49 949 949 4949 4949 98 99 JIAAAA CMAAAA OOOOxx +6273 315 1 1 3 13 73 273 273 1273 6273 146 147 HHAAAA DMAAAA VVVVxx +4478 316 0 2 8 18 78 478 478 4478 4478 156 157 GQAAAA EMAAAA AAAAxx +8854 317 0 2 4 14 54 854 854 3854 8854 108 109 OCAAAA FMAAAA HHHHxx +2105 318 1 1 5 5 5 105 105 2105 2105 10 11 ZCAAAA GMAAAA OOOOxx +8345 319 1 1 5 5 45 345 345 3345 8345 90 91 ZIAAAA HMAAAA VVVVxx +1941 320 1 1 1 1 41 941 1941 1941 1941 82 83 RWAAAA IMAAAA AAAAxx +1765 321 1 1 5 5 65 765 1765 1765 1765 130 131 XPAAAA JMAAAA HHHHxx +9592 322 0 0 2 12 92 592 1592 4592 9592 184 185 YEAAAA KMAAAA OOOOxx +1694 323 0 2 4 14 94 694 1694 1694 1694 188 189 ENAAAA LMAAAA VVVVxx +8940 324 0 0 0 0 40 940 940 3940 8940 80 81 WFAAAA MMAAAA AAAAxx +7264 325 0 0 4 4 64 264 1264 2264 7264 128 129 KTAAAA NMAAAA HHHHxx +4699 326 1 3 9 19 99 699 699 4699 4699 198 199 TYAAAA OMAAAA OOOOxx +4541 327 1 1 1 1 41 541 541 4541 4541 82 83 RSAAAA PMAAAA VVVVxx +5768 328 0 0 8 8 68 768 1768 768 5768 136 137 WNAAAA QMAAAA AAAAxx +6183 329 1 3 3 3 83 183 183 1183 6183 166 167 VDAAAA RMAAAA HHHHxx +7457 330 1 1 7 17 57 457 1457 2457 7457 114 115 VAAAAA SMAAAA OOOOxx +7317 331 1 1 7 17 17 317 1317 2317 7317 34 35 LVAAAA TMAAAA VVVVxx +1944 332 0 0 4 4 44 944 1944 1944 1944 88 89 UWAAAA UMAAAA AAAAxx +665 333 1 1 5 5 65 665 665 665 665 130 131 PZAAAA VMAAAA HHHHxx +5974 334 0 2 4 14 74 974 1974 974 5974 148 149 UVAAAA WMAAAA OOOOxx +7370 335 0 2 0 10 70 370 1370 2370 7370 140 141 MXAAAA XMAAAA VVVVxx +9196 336 0 0 6 16 96 196 1196 4196 9196 192 193 SPAAAA YMAAAA AAAAxx +6796 337 0 0 6 16 96 796 796 1796 6796 192 193 KBAAAA ZMAAAA HHHHxx +6180 338 0 0 0 0 80 180 180 1180 6180 160 161 SDAAAA ANAAAA OOOOxx +8557 339 1 1 7 17 57 557 557 3557 8557 114 115 DRAAAA BNAAAA VVVVxx +928 340 0 0 8 8 28 928 928 928 928 56 57 SJAAAA CNAAAA AAAAxx +6275 341 1 3 5 15 75 275 275 1275 6275 150 151 JHAAAA DNAAAA HHHHxx +409 342 1 1 9 9 9 409 409 409 409 18 19 TPAAAA ENAAAA OOOOxx +6442 343 0 2 2 2 42 442 442 1442 6442 84 85 UNAAAA FNAAAA VVVVxx +5889 344 1 1 9 9 89 889 1889 889 5889 178 179 NSAAAA GNAAAA AAAAxx +5180 345 0 0 0 0 80 180 1180 180 5180 160 161 GRAAAA HNAAAA HHHHxx +1629 346 1 1 9 9 29 629 1629 1629 1629 58 59 RKAAAA INAAAA OOOOxx +6088 347 0 0 8 8 88 88 88 1088 6088 176 177 EAAAAA JNAAAA VVVVxx +5598 348 0 2 8 18 98 598 1598 598 5598 196 197 IHAAAA KNAAAA AAAAxx +1803 349 1 3 3 3 3 803 1803 1803 1803 6 7 JRAAAA LNAAAA HHHHxx +2330 350 0 2 0 10 30 330 330 2330 2330 60 61 QLAAAA MNAAAA OOOOxx +5901 351 1 1 1 1 1 901 1901 901 5901 2 3 ZSAAAA NNAAAA VVVVxx +780 352 0 0 0 0 80 780 780 780 780 160 161 AEAAAA ONAAAA AAAAxx +7171 353 1 3 1 11 71 171 1171 2171 7171 142 143 VPAAAA PNAAAA HHHHxx +8778 354 0 2 8 18 78 778 778 3778 8778 156 157 QZAAAA QNAAAA OOOOxx +6622 355 0 2 2 2 22 622 622 1622 6622 44 45 SUAAAA RNAAAA VVVVxx +9938 356 0 2 8 18 38 938 1938 4938 9938 76 77 GSAAAA SNAAAA AAAAxx +8254 357 0 2 4 14 54 254 254 3254 8254 108 109 MFAAAA TNAAAA HHHHxx +1951 358 1 3 1 11 51 951 1951 1951 1951 102 103 BXAAAA UNAAAA OOOOxx +1434 
359 0 2 4 14 34 434 1434 1434 1434 68 69 EDAAAA VNAAAA VVVVxx +7539 360 1 3 9 19 39 539 1539 2539 7539 78 79 ZDAAAA WNAAAA AAAAxx +600 361 0 0 0 0 0 600 600 600 600 0 1 CXAAAA XNAAAA HHHHxx +3122 362 0 2 2 2 22 122 1122 3122 3122 44 45 CQAAAA YNAAAA OOOOxx +5704 363 0 0 4 4 4 704 1704 704 5704 8 9 KLAAAA ZNAAAA VVVVxx +6300 364 0 0 0 0 0 300 300 1300 6300 0 1 IIAAAA AOAAAA AAAAxx +4585 365 1 1 5 5 85 585 585 4585 4585 170 171 JUAAAA BOAAAA HHHHxx +6313 366 1 1 3 13 13 313 313 1313 6313 26 27 VIAAAA COAAAA OOOOxx +3154 367 0 2 4 14 54 154 1154 3154 3154 108 109 IRAAAA DOAAAA VVVVxx +642 368 0 2 2 2 42 642 642 642 642 84 85 SYAAAA EOAAAA AAAAxx +7736 369 0 0 6 16 36 736 1736 2736 7736 72 73 OLAAAA FOAAAA HHHHxx +5087 370 1 3 7 7 87 87 1087 87 5087 174 175 RNAAAA GOAAAA OOOOxx +5708 371 0 0 8 8 8 708 1708 708 5708 16 17 OLAAAA HOAAAA VVVVxx +8169 372 1 1 9 9 69 169 169 3169 8169 138 139 FCAAAA IOAAAA AAAAxx +9768 373 0 0 8 8 68 768 1768 4768 9768 136 137 SLAAAA JOAAAA HHHHxx +3874 374 0 2 4 14 74 874 1874 3874 3874 148 149 ATAAAA KOAAAA OOOOxx +6831 375 1 3 1 11 31 831 831 1831 6831 62 63 TCAAAA LOAAAA VVVVxx +18 376 0 2 8 18 18 18 18 18 18 36 37 SAAAAA MOAAAA AAAAxx +6375 377 1 3 5 15 75 375 375 1375 6375 150 151 FLAAAA NOAAAA HHHHxx +7106 378 0 2 6 6 6 106 1106 2106 7106 12 13 INAAAA OOAAAA OOOOxx +5926 379 0 2 6 6 26 926 1926 926 5926 52 53 YTAAAA POAAAA VVVVxx +4956 380 0 0 6 16 56 956 956 4956 4956 112 113 QIAAAA QOAAAA AAAAxx +7042 381 0 2 2 2 42 42 1042 2042 7042 84 85 WKAAAA ROAAAA HHHHxx +6043 382 1 3 3 3 43 43 43 1043 6043 86 87 LYAAAA SOAAAA OOOOxx +2084 383 0 0 4 4 84 84 84 2084 2084 168 169 ECAAAA TOAAAA VVVVxx +6038 384 0 2 8 18 38 38 38 1038 6038 76 77 GYAAAA UOAAAA AAAAxx +7253 385 1 1 3 13 53 253 1253 2253 7253 106 107 ZSAAAA VOAAAA HHHHxx +2061 386 1 1 1 1 61 61 61 2061 2061 122 123 HBAAAA WOAAAA OOOOxx +7800 387 0 0 0 0 0 800 1800 2800 7800 0 1 AOAAAA XOAAAA VVVVxx +4970 388 0 2 0 10 70 970 970 4970 4970 140 141 EJAAAA YOAAAA AAAAxx +8580 389 0 0 0 0 80 580 580 3580 8580 160 161 ASAAAA ZOAAAA HHHHxx +9173 390 1 1 3 13 73 173 1173 4173 9173 146 147 VOAAAA APAAAA OOOOxx +8558 391 0 2 8 18 58 558 558 3558 8558 116 117 ERAAAA BPAAAA VVVVxx +3897 392 1 1 7 17 97 897 1897 3897 3897 194 195 XTAAAA CPAAAA AAAAxx +5069 393 1 1 9 9 69 69 1069 69 5069 138 139 ZMAAAA DPAAAA HHHHxx +2301 394 1 1 1 1 1 301 301 2301 2301 2 3 NKAAAA EPAAAA OOOOxx +9863 395 1 3 3 3 63 863 1863 4863 9863 126 127 JPAAAA FPAAAA VVVVxx +5733 396 1 1 3 13 33 733 1733 733 5733 66 67 NMAAAA GPAAAA AAAAxx +2338 397 0 2 8 18 38 338 338 2338 2338 76 77 YLAAAA HPAAAA HHHHxx +9639 398 1 3 9 19 39 639 1639 4639 9639 78 79 TGAAAA IPAAAA OOOOxx +1139 399 1 3 9 19 39 139 1139 1139 1139 78 79 VRAAAA JPAAAA VVVVxx +2293 400 1 1 3 13 93 293 293 2293 2293 186 187 FKAAAA KPAAAA AAAAxx +6125 401 1 1 5 5 25 125 125 1125 6125 50 51 PBAAAA LPAAAA HHHHxx +5374 402 0 2 4 14 74 374 1374 374 5374 148 149 SYAAAA MPAAAA OOOOxx +7216 403 0 0 6 16 16 216 1216 2216 7216 32 33 ORAAAA NPAAAA VVVVxx +2285 404 1 1 5 5 85 285 285 2285 2285 170 171 XJAAAA OPAAAA AAAAxx +2387 405 1 3 7 7 87 387 387 2387 2387 174 175 VNAAAA PPAAAA HHHHxx +5015 406 1 3 5 15 15 15 1015 15 5015 30 31 XKAAAA QPAAAA OOOOxx +2087 407 1 3 7 7 87 87 87 2087 2087 174 175 HCAAAA RPAAAA VVVVxx +4938 408 0 2 8 18 38 938 938 4938 4938 76 77 YHAAAA SPAAAA AAAAxx +3635 409 1 3 5 15 35 635 1635 3635 3635 70 71 VJAAAA TPAAAA HHHHxx +7737 410 1 1 7 17 37 737 1737 2737 7737 74 75 PLAAAA UPAAAA OOOOxx +8056 411 0 0 6 16 56 56 56 3056 8056 112 113 WXAAAA VPAAAA VVVVxx +4502 412 0 2 2 2 
2 502 502 4502 4502 4 5 ERAAAA WPAAAA AAAAxx +54 413 0 2 4 14 54 54 54 54 54 108 109 CCAAAA XPAAAA HHHHxx +3182 414 0 2 2 2 82 182 1182 3182 3182 164 165 KSAAAA YPAAAA OOOOxx +3718 415 0 2 8 18 18 718 1718 3718 3718 36 37 ANAAAA ZPAAAA VVVVxx +3989 416 1 1 9 9 89 989 1989 3989 3989 178 179 LXAAAA AQAAAA AAAAxx +8028 417 0 0 8 8 28 28 28 3028 8028 56 57 UWAAAA BQAAAA HHHHxx +1426 418 0 2 6 6 26 426 1426 1426 1426 52 53 WCAAAA CQAAAA OOOOxx +3801 419 1 1 1 1 1 801 1801 3801 3801 2 3 FQAAAA DQAAAA VVVVxx +241 420 1 1 1 1 41 241 241 241 241 82 83 HJAAAA EQAAAA AAAAxx +8000 421 0 0 0 0 0 0 0 3000 8000 0 1 SVAAAA FQAAAA HHHHxx +8357 422 1 1 7 17 57 357 357 3357 8357 114 115 LJAAAA GQAAAA OOOOxx +7548 423 0 0 8 8 48 548 1548 2548 7548 96 97 IEAAAA HQAAAA VVVVxx +7307 424 1 3 7 7 7 307 1307 2307 7307 14 15 BVAAAA IQAAAA AAAAxx +2275 425 1 3 5 15 75 275 275 2275 2275 150 151 NJAAAA JQAAAA HHHHxx +2718 426 0 2 8 18 18 718 718 2718 2718 36 37 OAAAAA KQAAAA OOOOxx +7068 427 0 0 8 8 68 68 1068 2068 7068 136 137 WLAAAA LQAAAA VVVVxx +3181 428 1 1 1 1 81 181 1181 3181 3181 162 163 JSAAAA MQAAAA AAAAxx +749 429 1 1 9 9 49 749 749 749 749 98 99 VCAAAA NQAAAA HHHHxx +5195 430 1 3 5 15 95 195 1195 195 5195 190 191 VRAAAA OQAAAA OOOOxx +6136 431 0 0 6 16 36 136 136 1136 6136 72 73 ACAAAA PQAAAA VVVVxx +8012 432 0 0 2 12 12 12 12 3012 8012 24 25 EWAAAA QQAAAA AAAAxx +3957 433 1 1 7 17 57 957 1957 3957 3957 114 115 FWAAAA RQAAAA HHHHxx +3083 434 1 3 3 3 83 83 1083 3083 3083 166 167 POAAAA SQAAAA OOOOxx +9997 435 1 1 7 17 97 997 1997 4997 9997 194 195 NUAAAA TQAAAA VVVVxx +3299 436 1 3 9 19 99 299 1299 3299 3299 198 199 XWAAAA UQAAAA AAAAxx +846 437 0 2 6 6 46 846 846 846 846 92 93 OGAAAA VQAAAA HHHHxx +2985 438 1 1 5 5 85 985 985 2985 2985 170 171 VKAAAA WQAAAA OOOOxx +9238 439 0 2 8 18 38 238 1238 4238 9238 76 77 IRAAAA XQAAAA VVVVxx +1403 440 1 3 3 3 3 403 1403 1403 1403 6 7 ZBAAAA YQAAAA AAAAxx +5563 441 1 3 3 3 63 563 1563 563 5563 126 127 ZFAAAA ZQAAAA HHHHxx +7965 442 1 1 5 5 65 965 1965 2965 7965 130 131 JUAAAA ARAAAA OOOOxx +4512 443 0 0 2 12 12 512 512 4512 4512 24 25 ORAAAA BRAAAA VVVVxx +9730 444 0 2 0 10 30 730 1730 4730 9730 60 61 GKAAAA CRAAAA AAAAxx +1129 445 1 1 9 9 29 129 1129 1129 1129 58 59 LRAAAA DRAAAA HHHHxx +2624 446 0 0 4 4 24 624 624 2624 2624 48 49 YWAAAA ERAAAA OOOOxx +8178 447 0 2 8 18 78 178 178 3178 8178 156 157 OCAAAA FRAAAA VVVVxx +6468 448 0 0 8 8 68 468 468 1468 6468 136 137 UOAAAA GRAAAA AAAAxx +3027 449 1 3 7 7 27 27 1027 3027 3027 54 55 LMAAAA HRAAAA HHHHxx +3845 450 1 1 5 5 45 845 1845 3845 3845 90 91 XRAAAA IRAAAA OOOOxx +786 451 0 2 6 6 86 786 786 786 786 172 173 GEAAAA JRAAAA VVVVxx +4971 452 1 3 1 11 71 971 971 4971 4971 142 143 FJAAAA KRAAAA AAAAxx +1542 453 0 2 2 2 42 542 1542 1542 1542 84 85 IHAAAA LRAAAA HHHHxx +7967 454 1 3 7 7 67 967 1967 2967 7967 134 135 LUAAAA MRAAAA OOOOxx +443 455 1 3 3 3 43 443 443 443 443 86 87 BRAAAA NRAAAA VVVVxx +7318 456 0 2 8 18 18 318 1318 2318 7318 36 37 MVAAAA ORAAAA AAAAxx +4913 457 1 1 3 13 13 913 913 4913 4913 26 27 ZGAAAA PRAAAA HHHHxx +9466 458 0 2 6 6 66 466 1466 4466 9466 132 133 CAAAAA QRAAAA OOOOxx +7866 459 0 2 6 6 66 866 1866 2866 7866 132 133 OQAAAA RRAAAA VVVVxx +784 460 0 0 4 4 84 784 784 784 784 168 169 EEAAAA SRAAAA AAAAxx +9040 461 0 0 0 0 40 40 1040 4040 9040 80 81 SJAAAA TRAAAA HHHHxx +3954 462 0 2 4 14 54 954 1954 3954 3954 108 109 CWAAAA URAAAA OOOOxx +4183 463 1 3 3 3 83 183 183 4183 4183 166 167 XEAAAA VRAAAA VVVVxx +3608 464 0 0 8 8 8 608 1608 3608 3608 16 17 UIAAAA WRAAAA AAAAxx +7630 465 0 2 0 10 30 630 
1630 2630 7630 60 61 MHAAAA XRAAAA HHHHxx +590 466 0 2 0 10 90 590 590 590 590 180 181 SWAAAA YRAAAA OOOOxx +3453 467 1 1 3 13 53 453 1453 3453 3453 106 107 VCAAAA ZRAAAA VVVVxx +7757 468 1 1 7 17 57 757 1757 2757 7757 114 115 JMAAAA ASAAAA AAAAxx +7394 469 0 2 4 14 94 394 1394 2394 7394 188 189 KYAAAA BSAAAA HHHHxx +396 470 0 0 6 16 96 396 396 396 396 192 193 GPAAAA CSAAAA OOOOxx +7873 471 1 1 3 13 73 873 1873 2873 7873 146 147 VQAAAA DSAAAA VVVVxx +1553 472 1 1 3 13 53 553 1553 1553 1553 106 107 THAAAA ESAAAA AAAAxx +598 473 0 2 8 18 98 598 598 598 598 196 197 AXAAAA FSAAAA HHHHxx +7191 474 1 3 1 11 91 191 1191 2191 7191 182 183 PQAAAA GSAAAA OOOOxx +8116 475 0 0 6 16 16 116 116 3116 8116 32 33 EAAAAA HSAAAA VVVVxx +2516 476 0 0 6 16 16 516 516 2516 2516 32 33 USAAAA ISAAAA AAAAxx +7750 477 0 2 0 10 50 750 1750 2750 7750 100 101 CMAAAA JSAAAA HHHHxx +6625 478 1 1 5 5 25 625 625 1625 6625 50 51 VUAAAA KSAAAA OOOOxx +8838 479 0 2 8 18 38 838 838 3838 8838 76 77 YBAAAA LSAAAA VVVVxx +4636 480 0 0 6 16 36 636 636 4636 4636 72 73 IWAAAA MSAAAA AAAAxx +7627 481 1 3 7 7 27 627 1627 2627 7627 54 55 JHAAAA NSAAAA HHHHxx +1690 482 0 2 0 10 90 690 1690 1690 1690 180 181 ANAAAA OSAAAA OOOOxx +7071 483 1 3 1 11 71 71 1071 2071 7071 142 143 ZLAAAA PSAAAA VVVVxx +2081 484 1 1 1 1 81 81 81 2081 2081 162 163 BCAAAA QSAAAA AAAAxx +7138 485 0 2 8 18 38 138 1138 2138 7138 76 77 OOAAAA RSAAAA HHHHxx +864 486 0 0 4 4 64 864 864 864 864 128 129 GHAAAA SSAAAA OOOOxx +6392 487 0 0 2 12 92 392 392 1392 6392 184 185 WLAAAA TSAAAA VVVVxx +7544 488 0 0 4 4 44 544 1544 2544 7544 88 89 EEAAAA USAAAA AAAAxx +5438 489 0 2 8 18 38 438 1438 438 5438 76 77 EBAAAA VSAAAA HHHHxx +7099 490 1 3 9 19 99 99 1099 2099 7099 198 199 BNAAAA WSAAAA OOOOxx +5157 491 1 1 7 17 57 157 1157 157 5157 114 115 JQAAAA XSAAAA VVVVxx +3391 492 1 3 1 11 91 391 1391 3391 3391 182 183 LAAAAA YSAAAA AAAAxx +3805 493 1 1 5 5 5 805 1805 3805 3805 10 11 JQAAAA ZSAAAA HHHHxx +2110 494 0 2 0 10 10 110 110 2110 2110 20 21 EDAAAA ATAAAA OOOOxx +3176 495 0 0 6 16 76 176 1176 3176 3176 152 153 ESAAAA BTAAAA VVVVxx +5918 496 0 2 8 18 18 918 1918 918 5918 36 37 QTAAAA CTAAAA AAAAxx +1218 497 0 2 8 18 18 218 1218 1218 1218 36 37 WUAAAA DTAAAA HHHHxx +6683 498 1 3 3 3 83 683 683 1683 6683 166 167 BXAAAA ETAAAA OOOOxx +914 499 0 2 4 14 14 914 914 914 914 28 29 EJAAAA FTAAAA VVVVxx +4737 500 1 1 7 17 37 737 737 4737 4737 74 75 FAAAAA GTAAAA AAAAxx +7286 501 0 2 6 6 86 286 1286 2286 7286 172 173 GUAAAA HTAAAA HHHHxx +9975 502 1 3 5 15 75 975 1975 4975 9975 150 151 RTAAAA ITAAAA OOOOxx +8030 503 0 2 0 10 30 30 30 3030 8030 60 61 WWAAAA JTAAAA VVVVxx +7364 504 0 0 4 4 64 364 1364 2364 7364 128 129 GXAAAA KTAAAA AAAAxx +1389 505 1 1 9 9 89 389 1389 1389 1389 178 179 LBAAAA LTAAAA HHHHxx +4025 506 1 1 5 5 25 25 25 4025 4025 50 51 VYAAAA MTAAAA OOOOxx +4835 507 1 3 5 15 35 835 835 4835 4835 70 71 ZDAAAA NTAAAA VVVVxx +8045 508 1 1 5 5 45 45 45 3045 8045 90 91 LXAAAA OTAAAA AAAAxx +1864 509 0 0 4 4 64 864 1864 1864 1864 128 129 STAAAA PTAAAA HHHHxx +3313 510 1 1 3 13 13 313 1313 3313 3313 26 27 LXAAAA QTAAAA OOOOxx +2384 511 0 0 4 4 84 384 384 2384 2384 168 169 SNAAAA RTAAAA VVVVxx +6115 512 1 3 5 15 15 115 115 1115 6115 30 31 FBAAAA STAAAA AAAAxx +5705 513 1 1 5 5 5 705 1705 705 5705 10 11 LLAAAA TTAAAA HHHHxx +9269 514 1 1 9 9 69 269 1269 4269 9269 138 139 NSAAAA UTAAAA OOOOxx +3379 515 1 3 9 19 79 379 1379 3379 3379 158 159 ZZAAAA VTAAAA VVVVxx +8205 516 1 1 5 5 5 205 205 3205 8205 10 11 PDAAAA WTAAAA AAAAxx +6575 517 1 3 5 15 75 575 575 1575 6575 150 151 XSAAAA 
XTAAAA HHHHxx +486 518 0 2 6 6 86 486 486 486 486 172 173 SSAAAA YTAAAA OOOOxx +4894 519 0 2 4 14 94 894 894 4894 4894 188 189 GGAAAA ZTAAAA VVVVxx +3090 520 0 2 0 10 90 90 1090 3090 3090 180 181 WOAAAA AUAAAA AAAAxx +759 521 1 3 9 19 59 759 759 759 759 118 119 FDAAAA BUAAAA HHHHxx +4864 522 0 0 4 4 64 864 864 4864 4864 128 129 CFAAAA CUAAAA OOOOxx +4083 523 1 3 3 3 83 83 83 4083 4083 166 167 BBAAAA DUAAAA VVVVxx +6918 524 0 2 8 18 18 918 918 1918 6918 36 37 CGAAAA EUAAAA AAAAxx +8146 525 0 2 6 6 46 146 146 3146 8146 92 93 IBAAAA FUAAAA HHHHxx +1523 526 1 3 3 3 23 523 1523 1523 1523 46 47 PGAAAA GUAAAA OOOOxx +1591 527 1 3 1 11 91 591 1591 1591 1591 182 183 FJAAAA HUAAAA VVVVxx +3343 528 1 3 3 3 43 343 1343 3343 3343 86 87 PYAAAA IUAAAA AAAAxx +1391 529 1 3 1 11 91 391 1391 1391 1391 182 183 NBAAAA JUAAAA HHHHxx +9963 530 1 3 3 3 63 963 1963 4963 9963 126 127 FTAAAA KUAAAA OOOOxx +2423 531 1 3 3 3 23 423 423 2423 2423 46 47 FPAAAA LUAAAA VVVVxx +1822 532 0 2 2 2 22 822 1822 1822 1822 44 45 CSAAAA MUAAAA AAAAxx +8706 533 0 2 6 6 6 706 706 3706 8706 12 13 WWAAAA NUAAAA HHHHxx +3001 534 1 1 1 1 1 1 1001 3001 3001 2 3 LLAAAA OUAAAA OOOOxx +6707 535 1 3 7 7 7 707 707 1707 6707 14 15 ZXAAAA PUAAAA VVVVxx +2121 536 1 1 1 1 21 121 121 2121 2121 42 43 PDAAAA QUAAAA AAAAxx +5814 537 0 2 4 14 14 814 1814 814 5814 28 29 QPAAAA RUAAAA HHHHxx +2659 538 1 3 9 19 59 659 659 2659 2659 118 119 HYAAAA SUAAAA OOOOxx +2016 539 0 0 6 16 16 16 16 2016 2016 32 33 OZAAAA TUAAAA VVVVxx +4286 540 0 2 6 6 86 286 286 4286 4286 172 173 WIAAAA UUAAAA AAAAxx +9205 541 1 1 5 5 5 205 1205 4205 9205 10 11 BQAAAA VUAAAA HHHHxx +3496 542 0 0 6 16 96 496 1496 3496 3496 192 193 MEAAAA WUAAAA OOOOxx +5333 543 1 1 3 13 33 333 1333 333 5333 66 67 DXAAAA XUAAAA VVVVxx +5571 544 1 3 1 11 71 571 1571 571 5571 142 143 HGAAAA YUAAAA AAAAxx +1696 545 0 0 6 16 96 696 1696 1696 1696 192 193 GNAAAA ZUAAAA HHHHxx +4871 546 1 3 1 11 71 871 871 4871 4871 142 143 JFAAAA AVAAAA OOOOxx +4852 547 0 0 2 12 52 852 852 4852 4852 104 105 QEAAAA BVAAAA VVVVxx +8483 548 1 3 3 3 83 483 483 3483 8483 166 167 HOAAAA CVAAAA AAAAxx +1376 549 0 0 6 16 76 376 1376 1376 1376 152 153 YAAAAA DVAAAA HHHHxx +5456 550 0 0 6 16 56 456 1456 456 5456 112 113 WBAAAA EVAAAA OOOOxx +499 551 1 3 9 19 99 499 499 499 499 198 199 FTAAAA FVAAAA VVVVxx +3463 552 1 3 3 3 63 463 1463 3463 3463 126 127 FDAAAA GVAAAA AAAAxx +7426 553 0 2 6 6 26 426 1426 2426 7426 52 53 QZAAAA HVAAAA HHHHxx +5341 554 1 1 1 1 41 341 1341 341 5341 82 83 LXAAAA IVAAAA OOOOxx +9309 555 1 1 9 9 9 309 1309 4309 9309 18 19 BUAAAA JVAAAA VVVVxx +2055 556 1 3 5 15 55 55 55 2055 2055 110 111 BBAAAA KVAAAA AAAAxx +2199 557 1 3 9 19 99 199 199 2199 2199 198 199 PGAAAA LVAAAA HHHHxx +7235 558 1 3 5 15 35 235 1235 2235 7235 70 71 HSAAAA MVAAAA OOOOxx +8661 559 1 1 1 1 61 661 661 3661 8661 122 123 DVAAAA NVAAAA VVVVxx +9494 560 0 2 4 14 94 494 1494 4494 9494 188 189 EBAAAA OVAAAA AAAAxx +935 561 1 3 5 15 35 935 935 935 935 70 71 ZJAAAA PVAAAA HHHHxx +7044 562 0 0 4 4 44 44 1044 2044 7044 88 89 YKAAAA QVAAAA OOOOxx +1974 563 0 2 4 14 74 974 1974 1974 1974 148 149 YXAAAA RVAAAA VVVVxx +9679 564 1 3 9 19 79 679 1679 4679 9679 158 159 HIAAAA SVAAAA AAAAxx +9822 565 0 2 2 2 22 822 1822 4822 9822 44 45 UNAAAA TVAAAA HHHHxx +4088 566 0 0 8 8 88 88 88 4088 4088 176 177 GBAAAA UVAAAA OOOOxx +1749 567 1 1 9 9 49 749 1749 1749 1749 98 99 HPAAAA VVAAAA VVVVxx +2116 568 0 0 6 16 16 116 116 2116 2116 32 33 KDAAAA WVAAAA AAAAxx +976 569 0 0 6 16 76 976 976 976 976 152 153 OLAAAA XVAAAA HHHHxx +8689 570 1 1 9 9 89 689 689 3689 
8689 178 179 FWAAAA YVAAAA OOOOxx +2563 571 1 3 3 3 63 563 563 2563 2563 126 127 PUAAAA ZVAAAA VVVVxx +7195 572 1 3 5 15 95 195 1195 2195 7195 190 191 TQAAAA AWAAAA AAAAxx +9985 573 1 1 5 5 85 985 1985 4985 9985 170 171 BUAAAA BWAAAA HHHHxx +7699 574 1 3 9 19 99 699 1699 2699 7699 198 199 DKAAAA CWAAAA OOOOxx +5311 575 1 3 1 11 11 311 1311 311 5311 22 23 HWAAAA DWAAAA VVVVxx +295 576 1 3 5 15 95 295 295 295 295 190 191 JLAAAA EWAAAA AAAAxx +8214 577 0 2 4 14 14 214 214 3214 8214 28 29 YDAAAA FWAAAA HHHHxx +3275 578 1 3 5 15 75 275 1275 3275 3275 150 151 ZVAAAA GWAAAA OOOOxx +9646 579 0 2 6 6 46 646 1646 4646 9646 92 93 AHAAAA HWAAAA VVVVxx +1908 580 0 0 8 8 8 908 1908 1908 1908 16 17 KVAAAA IWAAAA AAAAxx +3858 581 0 2 8 18 58 858 1858 3858 3858 116 117 KSAAAA JWAAAA HHHHxx +9362 582 0 2 2 2 62 362 1362 4362 9362 124 125 CWAAAA KWAAAA OOOOxx +9307 583 1 3 7 7 7 307 1307 4307 9307 14 15 ZTAAAA LWAAAA VVVVxx +6124 584 0 0 4 4 24 124 124 1124 6124 48 49 OBAAAA MWAAAA AAAAxx +2405 585 1 1 5 5 5 405 405 2405 2405 10 11 NOAAAA NWAAAA HHHHxx +8422 586 0 2 2 2 22 422 422 3422 8422 44 45 YLAAAA OWAAAA OOOOxx +393 587 1 1 3 13 93 393 393 393 393 186 187 DPAAAA PWAAAA VVVVxx +8973 588 1 1 3 13 73 973 973 3973 8973 146 147 DHAAAA QWAAAA AAAAxx +5171 589 1 3 1 11 71 171 1171 171 5171 142 143 XQAAAA RWAAAA HHHHxx +4929 590 1 1 9 9 29 929 929 4929 4929 58 59 PHAAAA SWAAAA OOOOxx +6935 591 1 3 5 15 35 935 935 1935 6935 70 71 TGAAAA TWAAAA VVVVxx +8584 592 0 0 4 4 84 584 584 3584 8584 168 169 ESAAAA UWAAAA AAAAxx +1035 593 1 3 5 15 35 35 1035 1035 1035 70 71 VNAAAA VWAAAA HHHHxx +3734 594 0 2 4 14 34 734 1734 3734 3734 68 69 QNAAAA WWAAAA OOOOxx +1458 595 0 2 8 18 58 458 1458 1458 1458 116 117 CEAAAA XWAAAA VVVVxx +8746 596 0 2 6 6 46 746 746 3746 8746 92 93 KYAAAA YWAAAA AAAAxx +1677 597 1 1 7 17 77 677 1677 1677 1677 154 155 NMAAAA ZWAAAA HHHHxx +8502 598 0 2 2 2 2 502 502 3502 8502 4 5 APAAAA AXAAAA OOOOxx +7752 599 0 0 2 12 52 752 1752 2752 7752 104 105 EMAAAA BXAAAA VVVVxx +2556 600 0 0 6 16 56 556 556 2556 2556 112 113 IUAAAA CXAAAA AAAAxx +6426 601 0 2 6 6 26 426 426 1426 6426 52 53 ENAAAA DXAAAA HHHHxx +8420 602 0 0 0 0 20 420 420 3420 8420 40 41 WLAAAA EXAAAA OOOOxx +4462 603 0 2 2 2 62 462 462 4462 4462 124 125 QPAAAA FXAAAA VVVVxx +1378 604 0 2 8 18 78 378 1378 1378 1378 156 157 ABAAAA GXAAAA AAAAxx +1387 605 1 3 7 7 87 387 1387 1387 1387 174 175 JBAAAA HXAAAA HHHHxx +8094 606 0 2 4 14 94 94 94 3094 8094 188 189 IZAAAA IXAAAA OOOOxx +7247 607 1 3 7 7 47 247 1247 2247 7247 94 95 TSAAAA JXAAAA VVVVxx +4261 608 1 1 1 1 61 261 261 4261 4261 122 123 XHAAAA KXAAAA AAAAxx +5029 609 1 1 9 9 29 29 1029 29 5029 58 59 LLAAAA LXAAAA HHHHxx +3625 610 1 1 5 5 25 625 1625 3625 3625 50 51 LJAAAA MXAAAA OOOOxx +8068 611 0 0 8 8 68 68 68 3068 8068 136 137 IYAAAA NXAAAA VVVVxx +102 612 0 2 2 2 2 102 102 102 102 4 5 YDAAAA OXAAAA AAAAxx +5596 613 0 0 6 16 96 596 1596 596 5596 192 193 GHAAAA PXAAAA HHHHxx +5872 614 0 0 2 12 72 872 1872 872 5872 144 145 WRAAAA QXAAAA OOOOxx +4742 615 0 2 2 2 42 742 742 4742 4742 84 85 KAAAAA RXAAAA VVVVxx +2117 616 1 1 7 17 17 117 117 2117 2117 34 35 LDAAAA SXAAAA AAAAxx +3945 617 1 1 5 5 45 945 1945 3945 3945 90 91 TVAAAA TXAAAA HHHHxx +7483 618 1 3 3 3 83 483 1483 2483 7483 166 167 VBAAAA UXAAAA OOOOxx +4455 619 1 3 5 15 55 455 455 4455 4455 110 111 JPAAAA VXAAAA VVVVxx +609 620 1 1 9 9 9 609 609 609 609 18 19 LXAAAA WXAAAA AAAAxx +9829 621 1 1 9 9 29 829 1829 4829 9829 58 59 BOAAAA XXAAAA HHHHxx +4857 622 1 1 7 17 57 857 857 4857 4857 114 115 VEAAAA YXAAAA OOOOxx +3314 623 0 2 4 14 
14 314 1314 3314 3314 28 29 MXAAAA ZXAAAA VVVVxx +5353 624 1 1 3 13 53 353 1353 353 5353 106 107 XXAAAA AYAAAA AAAAxx +4909 625 1 1 9 9 9 909 909 4909 4909 18 19 VGAAAA BYAAAA HHHHxx +7597 626 1 1 7 17 97 597 1597 2597 7597 194 195 FGAAAA CYAAAA OOOOxx +2683 627 1 3 3 3 83 683 683 2683 2683 166 167 FZAAAA DYAAAA VVVVxx +3223 628 1 3 3 3 23 223 1223 3223 3223 46 47 ZTAAAA EYAAAA AAAAxx +5363 629 1 3 3 3 63 363 1363 363 5363 126 127 HYAAAA FYAAAA HHHHxx +4578 630 0 2 8 18 78 578 578 4578 4578 156 157 CUAAAA GYAAAA OOOOxx +5544 631 0 0 4 4 44 544 1544 544 5544 88 89 GFAAAA HYAAAA VVVVxx +1589 632 1 1 9 9 89 589 1589 1589 1589 178 179 DJAAAA IYAAAA AAAAxx +7412 633 0 0 2 12 12 412 1412 2412 7412 24 25 CZAAAA JYAAAA HHHHxx +3803 634 1 3 3 3 3 803 1803 3803 3803 6 7 HQAAAA KYAAAA OOOOxx +6179 635 1 3 9 19 79 179 179 1179 6179 158 159 RDAAAA LYAAAA VVVVxx +5588 636 0 0 8 8 88 588 1588 588 5588 176 177 YGAAAA MYAAAA AAAAxx +2134 637 0 2 4 14 34 134 134 2134 2134 68 69 CEAAAA NYAAAA HHHHxx +4383 638 1 3 3 3 83 383 383 4383 4383 166 167 PMAAAA OYAAAA OOOOxx +6995 639 1 3 5 15 95 995 995 1995 6995 190 191 BJAAAA PYAAAA VVVVxx +6598 640 0 2 8 18 98 598 598 1598 6598 196 197 UTAAAA QYAAAA AAAAxx +8731 641 1 3 1 11 31 731 731 3731 8731 62 63 VXAAAA RYAAAA HHHHxx +7177 642 1 1 7 17 77 177 1177 2177 7177 154 155 BQAAAA SYAAAA OOOOxx +6578 643 0 2 8 18 78 578 578 1578 6578 156 157 ATAAAA TYAAAA VVVVxx +9393 644 1 1 3 13 93 393 1393 4393 9393 186 187 HXAAAA UYAAAA AAAAxx +1276 645 0 0 6 16 76 276 1276 1276 1276 152 153 CXAAAA VYAAAA HHHHxx +8766 646 0 2 6 6 66 766 766 3766 8766 132 133 EZAAAA WYAAAA OOOOxx +1015 647 1 3 5 15 15 15 1015 1015 1015 30 31 BNAAAA XYAAAA VVVVxx +4396 648 0 0 6 16 96 396 396 4396 4396 192 193 CNAAAA YYAAAA AAAAxx +5564 649 0 0 4 4 64 564 1564 564 5564 128 129 AGAAAA ZYAAAA HHHHxx +927 650 1 3 7 7 27 927 927 927 927 54 55 RJAAAA AZAAAA OOOOxx +3306 651 0 2 6 6 6 306 1306 3306 3306 12 13 EXAAAA BZAAAA VVVVxx +1615 652 1 3 5 15 15 615 1615 1615 1615 30 31 DKAAAA CZAAAA AAAAxx +4550 653 0 2 0 10 50 550 550 4550 4550 100 101 ATAAAA DZAAAA HHHHxx +2468 654 0 0 8 8 68 468 468 2468 2468 136 137 YQAAAA EZAAAA OOOOxx +5336 655 0 0 6 16 36 336 1336 336 5336 72 73 GXAAAA FZAAAA VVVVxx +4471 656 1 3 1 11 71 471 471 4471 4471 142 143 ZPAAAA GZAAAA AAAAxx +8085 657 1 1 5 5 85 85 85 3085 8085 170 171 ZYAAAA HZAAAA HHHHxx +540 658 0 0 0 0 40 540 540 540 540 80 81 UUAAAA IZAAAA OOOOxx +5108 659 0 0 8 8 8 108 1108 108 5108 16 17 MOAAAA JZAAAA VVVVxx +8015 660 1 3 5 15 15 15 15 3015 8015 30 31 HWAAAA KZAAAA AAAAxx +2857 661 1 1 7 17 57 857 857 2857 2857 114 115 XFAAAA LZAAAA HHHHxx +9472 662 0 0 2 12 72 472 1472 4472 9472 144 145 IAAAAA MZAAAA OOOOxx +5666 663 0 2 6 6 66 666 1666 666 5666 132 133 YJAAAA NZAAAA VVVVxx +3555 664 1 3 5 15 55 555 1555 3555 3555 110 111 TGAAAA OZAAAA AAAAxx +378 665 0 2 8 18 78 378 378 378 378 156 157 OOAAAA PZAAAA HHHHxx +4466 666 0 2 6 6 66 466 466 4466 4466 132 133 UPAAAA QZAAAA OOOOxx +3247 667 1 3 7 7 47 247 1247 3247 3247 94 95 XUAAAA RZAAAA VVVVxx +6570 668 0 2 0 10 70 570 570 1570 6570 140 141 SSAAAA SZAAAA AAAAxx +5655 669 1 3 5 15 55 655 1655 655 5655 110 111 NJAAAA TZAAAA HHHHxx +917 670 1 1 7 17 17 917 917 917 917 34 35 HJAAAA UZAAAA OOOOxx +3637 671 1 1 7 17 37 637 1637 3637 3637 74 75 XJAAAA VZAAAA VVVVxx +3668 672 0 0 8 8 68 668 1668 3668 3668 136 137 CLAAAA WZAAAA AAAAxx +5644 673 0 0 4 4 44 644 1644 644 5644 88 89 CJAAAA XZAAAA HHHHxx +8286 674 0 2 6 6 86 286 286 3286 8286 172 173 SGAAAA YZAAAA OOOOxx +6896 675 0 0 6 16 96 896 896 1896 6896 192 193 GFAAAA 
ZZAAAA VVVVxx +2870 676 0 2 0 10 70 870 870 2870 2870 140 141 KGAAAA AABAAA AAAAxx +8041 677 1 1 1 1 41 41 41 3041 8041 82 83 HXAAAA BABAAA HHHHxx +8137 678 1 1 7 17 37 137 137 3137 8137 74 75 ZAAAAA CABAAA OOOOxx +4823 679 1 3 3 3 23 823 823 4823 4823 46 47 NDAAAA DABAAA VVVVxx +2438 680 0 2 8 18 38 438 438 2438 2438 76 77 UPAAAA EABAAA AAAAxx +6329 681 1 1 9 9 29 329 329 1329 6329 58 59 LJAAAA FABAAA HHHHxx +623 682 1 3 3 3 23 623 623 623 623 46 47 ZXAAAA GABAAA OOOOxx +1360 683 0 0 0 0 60 360 1360 1360 1360 120 121 IAAAAA HABAAA VVVVxx +7987 684 1 3 7 7 87 987 1987 2987 7987 174 175 FVAAAA IABAAA AAAAxx +9788 685 0 0 8 8 88 788 1788 4788 9788 176 177 MMAAAA JABAAA HHHHxx +3212 686 0 0 2 12 12 212 1212 3212 3212 24 25 OTAAAA KABAAA OOOOxx +2725 687 1 1 5 5 25 725 725 2725 2725 50 51 VAAAAA LABAAA VVVVxx +7837 688 1 1 7 17 37 837 1837 2837 7837 74 75 LPAAAA MABAAA AAAAxx +4746 689 0 2 6 6 46 746 746 4746 4746 92 93 OAAAAA NABAAA HHHHxx +3986 690 0 2 6 6 86 986 1986 3986 3986 172 173 IXAAAA OABAAA OOOOxx +9128 691 0 0 8 8 28 128 1128 4128 9128 56 57 CNAAAA PABAAA VVVVxx +5044 692 0 0 4 4 44 44 1044 44 5044 88 89 AMAAAA QABAAA AAAAxx +8132 693 0 0 2 12 32 132 132 3132 8132 64 65 UAAAAA RABAAA HHHHxx +9992 694 0 0 2 12 92 992 1992 4992 9992 184 185 IUAAAA SABAAA OOOOxx +8468 695 0 0 8 8 68 468 468 3468 8468 136 137 SNAAAA TABAAA VVVVxx +6876 696 0 0 6 16 76 876 876 1876 6876 152 153 MEAAAA UABAAA AAAAxx +3532 697 0 0 2 12 32 532 1532 3532 3532 64 65 WFAAAA VABAAA HHHHxx +2140 698 0 0 0 0 40 140 140 2140 2140 80 81 IEAAAA WABAAA OOOOxx +2183 699 1 3 3 3 83 183 183 2183 2183 166 167 ZFAAAA XABAAA VVVVxx +9766 700 0 2 6 6 66 766 1766 4766 9766 132 133 QLAAAA YABAAA AAAAxx +7943 701 1 3 3 3 43 943 1943 2943 7943 86 87 NTAAAA ZABAAA HHHHxx +9243 702 1 3 3 3 43 243 1243 4243 9243 86 87 NRAAAA ABBAAA OOOOxx +6241 703 1 1 1 1 41 241 241 1241 6241 82 83 BGAAAA BBBAAA VVVVxx +9540 704 0 0 0 0 40 540 1540 4540 9540 80 81 YCAAAA CBBAAA AAAAxx +7418 705 0 2 8 18 18 418 1418 2418 7418 36 37 IZAAAA DBBAAA HHHHxx +1603 706 1 3 3 3 3 603 1603 1603 1603 6 7 RJAAAA EBBAAA OOOOxx +8950 707 0 2 0 10 50 950 950 3950 8950 100 101 GGAAAA FBBAAA VVVVxx +6933 708 1 1 3 13 33 933 933 1933 6933 66 67 RGAAAA GBBAAA AAAAxx +2646 709 0 2 6 6 46 646 646 2646 2646 92 93 UXAAAA HBBAAA HHHHxx +3447 710 1 3 7 7 47 447 1447 3447 3447 94 95 PCAAAA IBBAAA OOOOxx +9957 711 1 1 7 17 57 957 1957 4957 9957 114 115 ZSAAAA JBBAAA VVVVxx +4623 712 1 3 3 3 23 623 623 4623 4623 46 47 VVAAAA KBBAAA AAAAxx +9058 713 0 2 8 18 58 58 1058 4058 9058 116 117 KKAAAA LBBAAA HHHHxx +7361 714 1 1 1 1 61 361 1361 2361 7361 122 123 DXAAAA MBBAAA OOOOxx +2489 715 1 1 9 9 89 489 489 2489 2489 178 179 TRAAAA NBBAAA VVVVxx +7643 716 1 3 3 3 43 643 1643 2643 7643 86 87 ZHAAAA OBBAAA AAAAxx +9166 717 0 2 6 6 66 166 1166 4166 9166 132 133 OOAAAA PBBAAA HHHHxx +7789 718 1 1 9 9 89 789 1789 2789 7789 178 179 PNAAAA QBBAAA OOOOxx +2332 719 0 0 2 12 32 332 332 2332 2332 64 65 SLAAAA RBBAAA VVVVxx +1832 720 0 0 2 12 32 832 1832 1832 1832 64 65 MSAAAA SBBAAA AAAAxx +8375 721 1 3 5 15 75 375 375 3375 8375 150 151 DKAAAA TBBAAA HHHHxx +948 722 0 0 8 8 48 948 948 948 948 96 97 MKAAAA UBBAAA OOOOxx +5613 723 1 1 3 13 13 613 1613 613 5613 26 27 XHAAAA VBBAAA VVVVxx +6310 724 0 2 0 10 10 310 310 1310 6310 20 21 SIAAAA WBBAAA AAAAxx +4254 725 0 2 4 14 54 254 254 4254 4254 108 109 QHAAAA XBBAAA HHHHxx +4260 726 0 0 0 0 60 260 260 4260 4260 120 121 WHAAAA YBBAAA OOOOxx +2060 727 0 0 0 0 60 60 60 2060 2060 120 121 GBAAAA ZBBAAA VVVVxx +4831 728 1 3 1 11 31 831 831 4831 4831 
62 63 VDAAAA ACBAAA AAAAxx +6176 729 0 0 6 16 76 176 176 1176 6176 152 153 ODAAAA BCBAAA HHHHxx +6688 730 0 0 8 8 88 688 688 1688 6688 176 177 GXAAAA CCBAAA OOOOxx +5752 731 0 0 2 12 52 752 1752 752 5752 104 105 GNAAAA DCBAAA VVVVxx +8714 732 0 2 4 14 14 714 714 3714 8714 28 29 EXAAAA ECBAAA AAAAxx +6739 733 1 3 9 19 39 739 739 1739 6739 78 79 FZAAAA FCBAAA HHHHxx +7066 734 0 2 6 6 66 66 1066 2066 7066 132 133 ULAAAA GCBAAA OOOOxx +7250 735 0 2 0 10 50 250 1250 2250 7250 100 101 WSAAAA HCBAAA VVVVxx +3161 736 1 1 1 1 61 161 1161 3161 3161 122 123 PRAAAA ICBAAA AAAAxx +1411 737 1 3 1 11 11 411 1411 1411 1411 22 23 HCAAAA JCBAAA HHHHxx +9301 738 1 1 1 1 1 301 1301 4301 9301 2 3 TTAAAA KCBAAA OOOOxx +8324 739 0 0 4 4 24 324 324 3324 8324 48 49 EIAAAA LCBAAA VVVVxx +9641 740 1 1 1 1 41 641 1641 4641 9641 82 83 VGAAAA MCBAAA AAAAxx +7077 741 1 1 7 17 77 77 1077 2077 7077 154 155 FMAAAA NCBAAA HHHHxx +9888 742 0 0 8 8 88 888 1888 4888 9888 176 177 IQAAAA OCBAAA OOOOxx +9909 743 1 1 9 9 9 909 1909 4909 9909 18 19 DRAAAA PCBAAA VVVVxx +2209 744 1 1 9 9 9 209 209 2209 2209 18 19 ZGAAAA QCBAAA AAAAxx +6904 745 0 0 4 4 4 904 904 1904 6904 8 9 OFAAAA RCBAAA HHHHxx +6608 746 0 0 8 8 8 608 608 1608 6608 16 17 EUAAAA SCBAAA OOOOxx +8400 747 0 0 0 0 0 400 400 3400 8400 0 1 CLAAAA TCBAAA VVVVxx +5124 748 0 0 4 4 24 124 1124 124 5124 48 49 CPAAAA UCBAAA AAAAxx +5484 749 0 0 4 4 84 484 1484 484 5484 168 169 YCAAAA VCBAAA HHHHxx +3575 750 1 3 5 15 75 575 1575 3575 3575 150 151 NHAAAA WCBAAA OOOOxx +9723 751 1 3 3 3 23 723 1723 4723 9723 46 47 ZJAAAA XCBAAA VVVVxx +360 752 0 0 0 0 60 360 360 360 360 120 121 WNAAAA YCBAAA AAAAxx +1059 753 1 3 9 19 59 59 1059 1059 1059 118 119 TOAAAA ZCBAAA HHHHxx +4941 754 1 1 1 1 41 941 941 4941 4941 82 83 BIAAAA ADBAAA OOOOxx +2535 755 1 3 5 15 35 535 535 2535 2535 70 71 NTAAAA BDBAAA VVVVxx +4119 756 1 3 9 19 19 119 119 4119 4119 38 39 LCAAAA CDBAAA AAAAxx +3725 757 1 1 5 5 25 725 1725 3725 3725 50 51 HNAAAA DDBAAA HHHHxx +4758 758 0 2 8 18 58 758 758 4758 4758 116 117 ABAAAA EDBAAA OOOOxx +9593 759 1 1 3 13 93 593 1593 4593 9593 186 187 ZEAAAA FDBAAA VVVVxx +4663 760 1 3 3 3 63 663 663 4663 4663 126 127 JXAAAA GDBAAA AAAAxx +7734 761 0 2 4 14 34 734 1734 2734 7734 68 69 MLAAAA HDBAAA HHHHxx +9156 762 0 0 6 16 56 156 1156 4156 9156 112 113 EOAAAA IDBAAA OOOOxx +8120 763 0 0 0 0 20 120 120 3120 8120 40 41 IAAAAA JDBAAA VVVVxx +4385 764 1 1 5 5 85 385 385 4385 4385 170 171 RMAAAA KDBAAA AAAAxx +2926 765 0 2 6 6 26 926 926 2926 2926 52 53 OIAAAA LDBAAA HHHHxx +4186 766 0 2 6 6 86 186 186 4186 4186 172 173 AFAAAA MDBAAA OOOOxx +2508 767 0 0 8 8 8 508 508 2508 2508 16 17 MSAAAA NDBAAA VVVVxx +4012 768 0 0 2 12 12 12 12 4012 4012 24 25 IYAAAA ODBAAA AAAAxx +6266 769 0 2 6 6 66 266 266 1266 6266 132 133 AHAAAA PDBAAA HHHHxx +3709 770 1 1 9 9 9 709 1709 3709 3709 18 19 RMAAAA QDBAAA OOOOxx +7289 771 1 1 9 9 89 289 1289 2289 7289 178 179 JUAAAA RDBAAA VVVVxx +8875 772 1 3 5 15 75 875 875 3875 8875 150 151 JDAAAA SDBAAA AAAAxx +4412 773 0 0 2 12 12 412 412 4412 4412 24 25 SNAAAA TDBAAA HHHHxx +3033 774 1 1 3 13 33 33 1033 3033 3033 66 67 RMAAAA UDBAAA OOOOxx +1645 775 1 1 5 5 45 645 1645 1645 1645 90 91 HLAAAA VDBAAA VVVVxx +3557 776 1 1 7 17 57 557 1557 3557 3557 114 115 VGAAAA WDBAAA AAAAxx +6316 777 0 0 6 16 16 316 316 1316 6316 32 33 YIAAAA XDBAAA HHHHxx +2054 778 0 2 4 14 54 54 54 2054 2054 108 109 ABAAAA YDBAAA OOOOxx +7031 779 1 3 1 11 31 31 1031 2031 7031 62 63 LKAAAA ZDBAAA VVVVxx +3405 780 1 1 5 5 5 405 1405 3405 3405 10 11 ZAAAAA AEBAAA AAAAxx +5343 781 1 3 3 3 43 343 1343 
343 5343 86 87 NXAAAA BEBAAA HHHHxx +5240 782 0 0 0 0 40 240 1240 240 5240 80 81 OTAAAA CEBAAA OOOOxx +9650 783 0 2 0 10 50 650 1650 4650 9650 100 101 EHAAAA DEBAAA VVVVxx +3777 784 1 1 7 17 77 777 1777 3777 3777 154 155 HPAAAA EEBAAA AAAAxx +9041 785 1 1 1 1 41 41 1041 4041 9041 82 83 TJAAAA FEBAAA HHHHxx +6923 786 1 3 3 3 23 923 923 1923 6923 46 47 HGAAAA GEBAAA OOOOxx +2977 787 1 1 7 17 77 977 977 2977 2977 154 155 NKAAAA HEBAAA VVVVxx +5500 788 0 0 0 0 0 500 1500 500 5500 0 1 ODAAAA IEBAAA AAAAxx +1044 789 0 0 4 4 44 44 1044 1044 1044 88 89 EOAAAA JEBAAA HHHHxx +434 790 0 2 4 14 34 434 434 434 434 68 69 SQAAAA KEBAAA OOOOxx +611 791 1 3 1 11 11 611 611 611 611 22 23 NXAAAA LEBAAA VVVVxx +5760 792 0 0 0 0 60 760 1760 760 5760 120 121 ONAAAA MEBAAA AAAAxx +2445 793 1 1 5 5 45 445 445 2445 2445 90 91 BQAAAA NEBAAA HHHHxx +7098 794 0 2 8 18 98 98 1098 2098 7098 196 197 ANAAAA OEBAAA OOOOxx +2188 795 0 0 8 8 88 188 188 2188 2188 176 177 EGAAAA PEBAAA VVVVxx +4597 796 1 1 7 17 97 597 597 4597 4597 194 195 VUAAAA QEBAAA AAAAxx +1913 797 1 1 3 13 13 913 1913 1913 1913 26 27 PVAAAA REBAAA HHHHxx +8696 798 0 0 6 16 96 696 696 3696 8696 192 193 MWAAAA SEBAAA OOOOxx +3332 799 0 0 2 12 32 332 1332 3332 3332 64 65 EYAAAA TEBAAA VVVVxx +8760 800 0 0 0 0 60 760 760 3760 8760 120 121 YYAAAA UEBAAA AAAAxx +3215 801 1 3 5 15 15 215 1215 3215 3215 30 31 RTAAAA VEBAAA HHHHxx +1625 802 1 1 5 5 25 625 1625 1625 1625 50 51 NKAAAA WEBAAA OOOOxx +4219 803 1 3 9 19 19 219 219 4219 4219 38 39 HGAAAA XEBAAA VVVVxx +415 804 1 3 5 15 15 415 415 415 415 30 31 ZPAAAA YEBAAA AAAAxx +4242 805 0 2 2 2 42 242 242 4242 4242 84 85 EHAAAA ZEBAAA HHHHxx +8660 806 0 0 0 0 60 660 660 3660 8660 120 121 CVAAAA AFBAAA OOOOxx +6525 807 1 1 5 5 25 525 525 1525 6525 50 51 ZQAAAA BFBAAA VVVVxx +2141 808 1 1 1 1 41 141 141 2141 2141 82 83 JEAAAA CFBAAA AAAAxx +5152 809 0 0 2 12 52 152 1152 152 5152 104 105 EQAAAA DFBAAA HHHHxx +8560 810 0 0 0 0 60 560 560 3560 8560 120 121 GRAAAA EFBAAA OOOOxx +9835 811 1 3 5 15 35 835 1835 4835 9835 70 71 HOAAAA FFBAAA VVVVxx +2657 812 1 1 7 17 57 657 657 2657 2657 114 115 FYAAAA GFBAAA AAAAxx +6085 813 1 1 5 5 85 85 85 1085 6085 170 171 BAAAAA HFBAAA HHHHxx +6698 814 0 2 8 18 98 698 698 1698 6698 196 197 QXAAAA IFBAAA OOOOxx +5421 815 1 1 1 1 21 421 1421 421 5421 42 43 NAAAAA JFBAAA VVVVxx +6661 816 1 1 1 1 61 661 661 1661 6661 122 123 FWAAAA KFBAAA AAAAxx +5645 817 1 1 5 5 45 645 1645 645 5645 90 91 DJAAAA LFBAAA HHHHxx +1248 818 0 0 8 8 48 248 1248 1248 1248 96 97 AWAAAA MFBAAA OOOOxx +5690 819 0 2 0 10 90 690 1690 690 5690 180 181 WKAAAA NFBAAA VVVVxx +4762 820 0 2 2 2 62 762 762 4762 4762 124 125 EBAAAA OFBAAA AAAAxx +1455 821 1 3 5 15 55 455 1455 1455 1455 110 111 ZDAAAA PFBAAA HHHHxx +9846 822 0 2 6 6 46 846 1846 4846 9846 92 93 SOAAAA QFBAAA OOOOxx +5295 823 1 3 5 15 95 295 1295 295 5295 190 191 RVAAAA RFBAAA VVVVxx +2826 824 0 2 6 6 26 826 826 2826 2826 52 53 SEAAAA SFBAAA AAAAxx +7496 825 0 0 6 16 96 496 1496 2496 7496 192 193 ICAAAA TFBAAA HHHHxx +3024 826 0 0 4 4 24 24 1024 3024 3024 48 49 IMAAAA UFBAAA OOOOxx +4945 827 1 1 5 5 45 945 945 4945 4945 90 91 FIAAAA VFBAAA VVVVxx +4404 828 0 0 4 4 4 404 404 4404 4404 8 9 KNAAAA WFBAAA AAAAxx +9302 829 0 2 2 2 2 302 1302 4302 9302 4 5 UTAAAA XFBAAA HHHHxx +1286 830 0 2 6 6 86 286 1286 1286 1286 172 173 MXAAAA YFBAAA OOOOxx +8435 831 1 3 5 15 35 435 435 3435 8435 70 71 LMAAAA ZFBAAA VVVVxx +8969 832 1 1 9 9 69 969 969 3969 8969 138 139 ZGAAAA AGBAAA AAAAxx +3302 833 0 2 2 2 2 302 1302 3302 3302 4 5 AXAAAA BGBAAA HHHHxx +9753 834 1 1 3 13 53 753 
1753 4753 9753 106 107 DLAAAA CGBAAA OOOOxx +9374 835 0 2 4 14 74 374 1374 4374 9374 148 149 OWAAAA DGBAAA VVVVxx +4907 836 1 3 7 7 7 907 907 4907 4907 14 15 TGAAAA EGBAAA AAAAxx +1659 837 1 3 9 19 59 659 1659 1659 1659 118 119 VLAAAA FGBAAA HHHHxx +5095 838 1 3 5 15 95 95 1095 95 5095 190 191 ZNAAAA GGBAAA OOOOxx +9446 839 0 2 6 6 46 446 1446 4446 9446 92 93 IZAAAA HGBAAA VVVVxx +8528 840 0 0 8 8 28 528 528 3528 8528 56 57 AQAAAA IGBAAA AAAAxx +4890 841 0 2 0 10 90 890 890 4890 4890 180 181 CGAAAA JGBAAA HHHHxx +1221 842 1 1 1 1 21 221 1221 1221 1221 42 43 ZUAAAA KGBAAA OOOOxx +5583 843 1 3 3 3 83 583 1583 583 5583 166 167 TGAAAA LGBAAA VVVVxx +7303 844 1 3 3 3 3 303 1303 2303 7303 6 7 XUAAAA MGBAAA AAAAxx +406 845 0 2 6 6 6 406 406 406 406 12 13 QPAAAA NGBAAA HHHHxx +7542 846 0 2 2 2 42 542 1542 2542 7542 84 85 CEAAAA OGBAAA OOOOxx +9507 847 1 3 7 7 7 507 1507 4507 9507 14 15 RBAAAA PGBAAA VVVVxx +9511 848 1 3 1 11 11 511 1511 4511 9511 22 23 VBAAAA QGBAAA AAAAxx +1373 849 1 1 3 13 73 373 1373 1373 1373 146 147 VAAAAA RGBAAA HHHHxx +6556 850 0 0 6 16 56 556 556 1556 6556 112 113 ESAAAA SGBAAA OOOOxx +4117 851 1 1 7 17 17 117 117 4117 4117 34 35 JCAAAA TGBAAA VVVVxx +7794 852 0 2 4 14 94 794 1794 2794 7794 188 189 UNAAAA UGBAAA AAAAxx +7170 853 0 2 0 10 70 170 1170 2170 7170 140 141 UPAAAA VGBAAA HHHHxx +5809 854 1 1 9 9 9 809 1809 809 5809 18 19 LPAAAA WGBAAA OOOOxx +7828 855 0 0 8 8 28 828 1828 2828 7828 56 57 CPAAAA XGBAAA VVVVxx +8046 856 0 2 6 6 46 46 46 3046 8046 92 93 MXAAAA YGBAAA AAAAxx +4833 857 1 1 3 13 33 833 833 4833 4833 66 67 XDAAAA ZGBAAA HHHHxx +2107 858 1 3 7 7 7 107 107 2107 2107 14 15 BDAAAA AHBAAA OOOOxx +4276 859 0 0 6 16 76 276 276 4276 4276 152 153 MIAAAA BHBAAA VVVVxx +9536 860 0 0 6 16 36 536 1536 4536 9536 72 73 UCAAAA CHBAAA AAAAxx +5549 861 1 1 9 9 49 549 1549 549 5549 98 99 LFAAAA DHBAAA HHHHxx +6427 862 1 3 7 7 27 427 427 1427 6427 54 55 FNAAAA EHBAAA OOOOxx +1382 863 0 2 2 2 82 382 1382 1382 1382 164 165 EBAAAA FHBAAA VVVVxx +3256 864 0 0 6 16 56 256 1256 3256 3256 112 113 GVAAAA GHBAAA AAAAxx +3270 865 0 2 0 10 70 270 1270 3270 3270 140 141 UVAAAA HHBAAA HHHHxx +4808 866 0 0 8 8 8 808 808 4808 4808 16 17 YCAAAA IHBAAA OOOOxx +7938 867 0 2 8 18 38 938 1938 2938 7938 76 77 ITAAAA JHBAAA VVVVxx +4405 868 1 1 5 5 5 405 405 4405 4405 10 11 LNAAAA KHBAAA AAAAxx +2264 869 0 0 4 4 64 264 264 2264 2264 128 129 CJAAAA LHBAAA HHHHxx +80 870 0 0 0 0 80 80 80 80 80 160 161 CDAAAA MHBAAA OOOOxx +320 871 0 0 0 0 20 320 320 320 320 40 41 IMAAAA NHBAAA VVVVxx +2383 872 1 3 3 3 83 383 383 2383 2383 166 167 RNAAAA OHBAAA AAAAxx +3146 873 0 2 6 6 46 146 1146 3146 3146 92 93 ARAAAA PHBAAA HHHHxx +6911 874 1 3 1 11 11 911 911 1911 6911 22 23 VFAAAA QHBAAA OOOOxx +7377 875 1 1 7 17 77 377 1377 2377 7377 154 155 TXAAAA RHBAAA VVVVxx +9965 876 1 1 5 5 65 965 1965 4965 9965 130 131 HTAAAA SHBAAA AAAAxx +8361 877 1 1 1 1 61 361 361 3361 8361 122 123 PJAAAA THBAAA HHHHxx +9417 878 1 1 7 17 17 417 1417 4417 9417 34 35 FYAAAA UHBAAA OOOOxx +2483 879 1 3 3 3 83 483 483 2483 2483 166 167 NRAAAA VHBAAA VVVVxx +9843 880 1 3 3 3 43 843 1843 4843 9843 86 87 POAAAA WHBAAA AAAAxx +6395 881 1 3 5 15 95 395 395 1395 6395 190 191 ZLAAAA XHBAAA HHHHxx +6444 882 0 0 4 4 44 444 444 1444 6444 88 89 WNAAAA YHBAAA OOOOxx +1820 883 0 0 0 0 20 820 1820 1820 1820 40 41 ASAAAA ZHBAAA VVVVxx +2768 884 0 0 8 8 68 768 768 2768 2768 136 137 MCAAAA AIBAAA AAAAxx +5413 885 1 1 3 13 13 413 1413 413 5413 26 27 FAAAAA BIBAAA HHHHxx +2923 886 1 3 3 3 23 923 923 2923 2923 46 47 LIAAAA CIBAAA OOOOxx +5286 887 0 2 6 6 
86 286 1286 286 5286 172 173 IVAAAA DIBAAA VVVVxx +6126 888 0 2 6 6 26 126 126 1126 6126 52 53 QBAAAA EIBAAA AAAAxx +8343 889 1 3 3 3 43 343 343 3343 8343 86 87 XIAAAA FIBAAA HHHHxx +6010 890 0 2 0 10 10 10 10 1010 6010 20 21 EXAAAA GIBAAA OOOOxx +4177 891 1 1 7 17 77 177 177 4177 4177 154 155 REAAAA HIBAAA VVVVxx +5808 892 0 0 8 8 8 808 1808 808 5808 16 17 KPAAAA IIBAAA AAAAxx +4859 893 1 3 9 19 59 859 859 4859 4859 118 119 XEAAAA JIBAAA HHHHxx +9252 894 0 0 2 12 52 252 1252 4252 9252 104 105 WRAAAA KIBAAA OOOOxx +2941 895 1 1 1 1 41 941 941 2941 2941 82 83 DJAAAA LIBAAA VVVVxx +8693 896 1 1 3 13 93 693 693 3693 8693 186 187 JWAAAA MIBAAA AAAAxx +4432 897 0 0 2 12 32 432 432 4432 4432 64 65 MOAAAA NIBAAA HHHHxx +2371 898 1 3 1 11 71 371 371 2371 2371 142 143 FNAAAA OIBAAA OOOOxx +7546 899 0 2 6 6 46 546 1546 2546 7546 92 93 GEAAAA PIBAAA VVVVxx +1369 900 1 1 9 9 69 369 1369 1369 1369 138 139 RAAAAA QIBAAA AAAAxx +4687 901 1 3 7 7 87 687 687 4687 4687 174 175 HYAAAA RIBAAA HHHHxx +8941 902 1 1 1 1 41 941 941 3941 8941 82 83 XFAAAA SIBAAA OOOOxx +226 903 0 2 6 6 26 226 226 226 226 52 53 SIAAAA TIBAAA VVVVxx +3493 904 1 1 3 13 93 493 1493 3493 3493 186 187 JEAAAA UIBAAA AAAAxx +6433 905 1 1 3 13 33 433 433 1433 6433 66 67 LNAAAA VIBAAA HHHHxx +9189 906 1 1 9 9 89 189 1189 4189 9189 178 179 LPAAAA WIBAAA OOOOxx +6027 907 1 3 7 7 27 27 27 1027 6027 54 55 VXAAAA XIBAAA VVVVxx +4615 908 1 3 5 15 15 615 615 4615 4615 30 31 NVAAAA YIBAAA AAAAxx +5320 909 0 0 0 0 20 320 1320 320 5320 40 41 QWAAAA ZIBAAA HHHHxx +7002 910 0 2 2 2 2 2 1002 2002 7002 4 5 IJAAAA AJBAAA OOOOxx +7367 911 1 3 7 7 67 367 1367 2367 7367 134 135 JXAAAA BJBAAA VVVVxx +289 912 1 1 9 9 89 289 289 289 289 178 179 DLAAAA CJBAAA AAAAxx +407 913 1 3 7 7 7 407 407 407 407 14 15 RPAAAA DJBAAA HHHHxx +504 914 0 0 4 4 4 504 504 504 504 8 9 KTAAAA EJBAAA OOOOxx +8301 915 1 1 1 1 1 301 301 3301 8301 2 3 HHAAAA FJBAAA VVVVxx +1396 916 0 0 6 16 96 396 1396 1396 1396 192 193 SBAAAA GJBAAA AAAAxx +4794 917 0 2 4 14 94 794 794 4794 4794 188 189 KCAAAA HJBAAA HHHHxx +6400 918 0 0 0 0 0 400 400 1400 6400 0 1 EMAAAA IJBAAA OOOOxx +1275 919 1 3 5 15 75 275 1275 1275 1275 150 151 BXAAAA JJBAAA VVVVxx +5797 920 1 1 7 17 97 797 1797 797 5797 194 195 ZOAAAA KJBAAA AAAAxx +2221 921 1 1 1 1 21 221 221 2221 2221 42 43 LHAAAA LJBAAA HHHHxx +2504 922 0 0 4 4 4 504 504 2504 2504 8 9 ISAAAA MJBAAA OOOOxx +2143 923 1 3 3 3 43 143 143 2143 2143 86 87 LEAAAA NJBAAA VVVVxx +1083 924 1 3 3 3 83 83 1083 1083 1083 166 167 RPAAAA OJBAAA AAAAxx +6148 925 0 0 8 8 48 148 148 1148 6148 96 97 MCAAAA PJBAAA HHHHxx +3612 926 0 0 2 12 12 612 1612 3612 3612 24 25 YIAAAA QJBAAA OOOOxx +9499 927 1 3 9 19 99 499 1499 4499 9499 198 199 JBAAAA RJBAAA VVVVxx +5773 928 1 1 3 13 73 773 1773 773 5773 146 147 BOAAAA SJBAAA AAAAxx +1014 929 0 2 4 14 14 14 1014 1014 1014 28 29 ANAAAA TJBAAA HHHHxx +1427 930 1 3 7 7 27 427 1427 1427 1427 54 55 XCAAAA UJBAAA OOOOxx +6770 931 0 2 0 10 70 770 770 1770 6770 140 141 KAAAAA VJBAAA VVVVxx +9042 932 0 2 2 2 42 42 1042 4042 9042 84 85 UJAAAA WJBAAA AAAAxx +9892 933 0 0 2 12 92 892 1892 4892 9892 184 185 MQAAAA XJBAAA HHHHxx +1771 934 1 3 1 11 71 771 1771 1771 1771 142 143 DQAAAA YJBAAA OOOOxx +7392 935 0 0 2 12 92 392 1392 2392 7392 184 185 IYAAAA ZJBAAA VVVVxx +4465 936 1 1 5 5 65 465 465 4465 4465 130 131 TPAAAA AKBAAA AAAAxx +278 937 0 2 8 18 78 278 278 278 278 156 157 SKAAAA BKBAAA HHHHxx +7776 938 0 0 6 16 76 776 1776 2776 7776 152 153 CNAAAA CKBAAA OOOOxx +3763 939 1 3 3 3 63 763 1763 3763 3763 126 127 TOAAAA DKBAAA VVVVxx +7503 940 1 3 3 3 
3 503 1503 2503 7503 6 7 PCAAAA EKBAAA AAAAxx +3793 941 1 1 3 13 93 793 1793 3793 3793 186 187 XPAAAA FKBAAA HHHHxx +6510 942 0 2 0 10 10 510 510 1510 6510 20 21 KQAAAA GKBAAA OOOOxx +7641 943 1 1 1 1 41 641 1641 2641 7641 82 83 XHAAAA HKBAAA VVVVxx +3228 944 0 0 8 8 28 228 1228 3228 3228 56 57 EUAAAA IKBAAA AAAAxx +194 945 0 2 4 14 94 194 194 194 194 188 189 MHAAAA JKBAAA HHHHxx +8555 946 1 3 5 15 55 555 555 3555 8555 110 111 BRAAAA KKBAAA OOOOxx +4997 947 1 1 7 17 97 997 997 4997 4997 194 195 FKAAAA LKBAAA VVVVxx +8687 948 1 3 7 7 87 687 687 3687 8687 174 175 DWAAAA MKBAAA AAAAxx +6632 949 0 0 2 12 32 632 632 1632 6632 64 65 CVAAAA NKBAAA HHHHxx +9607 950 1 3 7 7 7 607 1607 4607 9607 14 15 NFAAAA OKBAAA OOOOxx +6201 951 1 1 1 1 1 201 201 1201 6201 2 3 NEAAAA PKBAAA VVVVxx +857 952 1 1 7 17 57 857 857 857 857 114 115 ZGAAAA QKBAAA AAAAxx +5623 953 1 3 3 3 23 623 1623 623 5623 46 47 HIAAAA RKBAAA HHHHxx +5979 954 1 3 9 19 79 979 1979 979 5979 158 159 ZVAAAA SKBAAA OOOOxx +2201 955 1 1 1 1 1 201 201 2201 2201 2 3 RGAAAA TKBAAA VVVVxx +3166 956 0 2 6 6 66 166 1166 3166 3166 132 133 URAAAA UKBAAA AAAAxx +6249 957 1 1 9 9 49 249 249 1249 6249 98 99 JGAAAA VKBAAA HHHHxx +3271 958 1 3 1 11 71 271 1271 3271 3271 142 143 VVAAAA WKBAAA OOOOxx +7777 959 1 1 7 17 77 777 1777 2777 7777 154 155 DNAAAA XKBAAA VVVVxx +6732 960 0 0 2 12 32 732 732 1732 6732 64 65 YYAAAA YKBAAA AAAAxx +6297 961 1 1 7 17 97 297 297 1297 6297 194 195 FIAAAA ZKBAAA HHHHxx +5685 962 1 1 5 5 85 685 1685 685 5685 170 171 RKAAAA ALBAAA OOOOxx +9931 963 1 3 1 11 31 931 1931 4931 9931 62 63 ZRAAAA BLBAAA VVVVxx +7485 964 1 1 5 5 85 485 1485 2485 7485 170 171 XBAAAA CLBAAA AAAAxx +386 965 0 2 6 6 86 386 386 386 386 172 173 WOAAAA DLBAAA HHHHxx +8204 966 0 0 4 4 4 204 204 3204 8204 8 9 ODAAAA ELBAAA OOOOxx +3606 967 0 2 6 6 6 606 1606 3606 3606 12 13 SIAAAA FLBAAA VVVVxx +1692 968 0 0 2 12 92 692 1692 1692 1692 184 185 CNAAAA GLBAAA AAAAxx +3002 969 0 2 2 2 2 2 1002 3002 3002 4 5 MLAAAA HLBAAA HHHHxx +9676 970 0 0 6 16 76 676 1676 4676 9676 152 153 EIAAAA ILBAAA OOOOxx +915 971 1 3 5 15 15 915 915 915 915 30 31 FJAAAA JLBAAA VVVVxx +7706 972 0 2 6 6 6 706 1706 2706 7706 12 13 KKAAAA KLBAAA AAAAxx +6080 973 0 0 0 0 80 80 80 1080 6080 160 161 WZAAAA LLBAAA HHHHxx +1860 974 0 0 0 0 60 860 1860 1860 1860 120 121 OTAAAA MLBAAA OOOOxx +1444 975 0 0 4 4 44 444 1444 1444 1444 88 89 ODAAAA NLBAAA VVVVxx +7208 976 0 0 8 8 8 208 1208 2208 7208 16 17 GRAAAA OLBAAA AAAAxx +8554 977 0 2 4 14 54 554 554 3554 8554 108 109 ARAAAA PLBAAA HHHHxx +2028 978 0 0 8 8 28 28 28 2028 2028 56 57 AAAAAA QLBAAA OOOOxx +9893 979 1 1 3 13 93 893 1893 4893 9893 186 187 NQAAAA RLBAAA VVVVxx +4740 980 0 0 0 0 40 740 740 4740 4740 80 81 IAAAAA SLBAAA AAAAxx +6186 981 0 2 6 6 86 186 186 1186 6186 172 173 YDAAAA TLBAAA HHHHxx +6357 982 1 1 7 17 57 357 357 1357 6357 114 115 NKAAAA ULBAAA OOOOxx +3699 983 1 3 9 19 99 699 1699 3699 3699 198 199 HMAAAA VLBAAA VVVVxx +7620 984 0 0 0 0 20 620 1620 2620 7620 40 41 CHAAAA WLBAAA AAAAxx +921 985 1 1 1 1 21 921 921 921 921 42 43 LJAAAA XLBAAA HHHHxx +5506 986 0 2 6 6 6 506 1506 506 5506 12 13 UDAAAA YLBAAA OOOOxx +8851 987 1 3 1 11 51 851 851 3851 8851 102 103 LCAAAA ZLBAAA VVVVxx +3205 988 1 1 5 5 5 205 1205 3205 3205 10 11 HTAAAA AMBAAA AAAAxx +1956 989 0 0 6 16 56 956 1956 1956 1956 112 113 GXAAAA BMBAAA HHHHxx +6272 990 0 0 2 12 72 272 272 1272 6272 144 145 GHAAAA CMBAAA OOOOxx +1509 991 1 1 9 9 9 509 1509 1509 1509 18 19 BGAAAA DMBAAA VVVVxx +53 992 1 1 3 13 53 53 53 53 53 106 107 BCAAAA EMBAAA AAAAxx +213 993 1 1 3 13 13 
213 213 213 213 26 27 FIAAAA FMBAAA HHHHxx +4924 994 0 0 4 4 24 924 924 4924 4924 48 49 KHAAAA GMBAAA OOOOxx +2097 995 1 1 7 17 97 97 97 2097 2097 194 195 RCAAAA HMBAAA VVVVxx +4607 996 1 3 7 7 7 607 607 4607 4607 14 15 FVAAAA IMBAAA AAAAxx +1582 997 0 2 2 2 82 582 1582 1582 1582 164 165 WIAAAA JMBAAA HHHHxx +6643 998 1 3 3 3 43 643 643 1643 6643 86 87 NVAAAA KMBAAA OOOOxx +2238 999 0 2 8 18 38 238 238 2238 2238 76 77 CIAAAA LMBAAA VVVVxx +2942 1000 0 2 2 2 42 942 942 2942 2942 84 85 EJAAAA MMBAAA AAAAxx +1655 1001 1 3 5 15 55 655 1655 1655 1655 110 111 RLAAAA NMBAAA HHHHxx +3226 1002 0 2 6 6 26 226 1226 3226 3226 52 53 CUAAAA OMBAAA OOOOxx +4263 1003 1 3 3 3 63 263 263 4263 4263 126 127 ZHAAAA PMBAAA VVVVxx +960 1004 0 0 0 0 60 960 960 960 960 120 121 YKAAAA QMBAAA AAAAxx +1213 1005 1 1 3 13 13 213 1213 1213 1213 26 27 RUAAAA RMBAAA HHHHxx +1845 1006 1 1 5 5 45 845 1845 1845 1845 90 91 ZSAAAA SMBAAA OOOOxx +6944 1007 0 0 4 4 44 944 944 1944 6944 88 89 CHAAAA TMBAAA VVVVxx +5284 1008 0 0 4 4 84 284 1284 284 5284 168 169 GVAAAA UMBAAA AAAAxx +188 1009 0 0 8 8 88 188 188 188 188 176 177 GHAAAA VMBAAA HHHHxx +748 1010 0 0 8 8 48 748 748 748 748 96 97 UCAAAA WMBAAA OOOOxx +2226 1011 0 2 6 6 26 226 226 2226 2226 52 53 QHAAAA XMBAAA VVVVxx +7342 1012 0 2 2 2 42 342 1342 2342 7342 84 85 KWAAAA YMBAAA AAAAxx +6120 1013 0 0 0 0 20 120 120 1120 6120 40 41 KBAAAA ZMBAAA HHHHxx +536 1014 0 0 6 16 36 536 536 536 536 72 73 QUAAAA ANBAAA OOOOxx +3239 1015 1 3 9 19 39 239 1239 3239 3239 78 79 PUAAAA BNBAAA VVVVxx +2832 1016 0 0 2 12 32 832 832 2832 2832 64 65 YEAAAA CNBAAA AAAAxx +5296 1017 0 0 6 16 96 296 1296 296 5296 192 193 SVAAAA DNBAAA HHHHxx +5795 1018 1 3 5 15 95 795 1795 795 5795 190 191 XOAAAA ENBAAA OOOOxx +6290 1019 0 2 0 10 90 290 290 1290 6290 180 181 YHAAAA FNBAAA VVVVxx +4916 1020 0 0 6 16 16 916 916 4916 4916 32 33 CHAAAA GNBAAA AAAAxx +8366 1021 0 2 6 6 66 366 366 3366 8366 132 133 UJAAAA HNBAAA HHHHxx +4248 1022 0 0 8 8 48 248 248 4248 4248 96 97 KHAAAA INBAAA OOOOxx +6460 1023 0 0 0 0 60 460 460 1460 6460 120 121 MOAAAA JNBAAA VVVVxx +9296 1024 0 0 6 16 96 296 1296 4296 9296 192 193 OTAAAA KNBAAA AAAAxx +3486 1025 0 2 6 6 86 486 1486 3486 3486 172 173 CEAAAA LNBAAA HHHHxx +5664 1026 0 0 4 4 64 664 1664 664 5664 128 129 WJAAAA MNBAAA OOOOxx +7624 1027 0 0 4 4 24 624 1624 2624 7624 48 49 GHAAAA NNBAAA VVVVxx +2790 1028 0 2 0 10 90 790 790 2790 2790 180 181 IDAAAA ONBAAA AAAAxx +682 1029 0 2 2 2 82 682 682 682 682 164 165 GAAAAA PNBAAA HHHHxx +6412 1030 0 0 2 12 12 412 412 1412 6412 24 25 QMAAAA QNBAAA OOOOxx +6882 1031 0 2 2 2 82 882 882 1882 6882 164 165 SEAAAA RNBAAA VVVVxx +1332 1032 0 0 2 12 32 332 1332 1332 1332 64 65 GZAAAA SNBAAA AAAAxx +4911 1033 1 3 1 11 11 911 911 4911 4911 22 23 XGAAAA TNBAAA HHHHxx +3528 1034 0 0 8 8 28 528 1528 3528 3528 56 57 SFAAAA UNBAAA OOOOxx +271 1035 1 3 1 11 71 271 271 271 271 142 143 LKAAAA VNBAAA VVVVxx +7007 1036 1 3 7 7 7 7 1007 2007 7007 14 15 NJAAAA WNBAAA AAAAxx +2198 1037 0 2 8 18 98 198 198 2198 2198 196 197 OGAAAA XNBAAA HHHHxx +4266 1038 0 2 6 6 66 266 266 4266 4266 132 133 CIAAAA YNBAAA OOOOxx +9867 1039 1 3 7 7 67 867 1867 4867 9867 134 135 NPAAAA ZNBAAA VVVVxx +7602 1040 0 2 2 2 2 602 1602 2602 7602 4 5 KGAAAA AOBAAA AAAAxx +7521 1041 1 1 1 1 21 521 1521 2521 7521 42 43 HDAAAA BOBAAA HHHHxx +7200 1042 0 0 0 0 0 200 1200 2200 7200 0 1 YQAAAA COBAAA OOOOxx +4816 1043 0 0 6 16 16 816 816 4816 4816 32 33 GDAAAA DOBAAA VVVVxx +1669 1044 1 1 9 9 69 669 1669 1669 1669 138 139 FMAAAA EOBAAA AAAAxx +4764 1045 0 0 4 4 64 764 764 4764 4764 128 129 
GBAAAA FOBAAA HHHHxx +7393 1046 1 1 3 13 93 393 1393 2393 7393 186 187 JYAAAA GOBAAA OOOOxx +7434 1047 0 2 4 14 34 434 1434 2434 7434 68 69 YZAAAA HOBAAA VVVVxx +9079 1048 1 3 9 19 79 79 1079 4079 9079 158 159 FLAAAA IOBAAA AAAAxx +9668 1049 0 0 8 8 68 668 1668 4668 9668 136 137 WHAAAA JOBAAA HHHHxx +7184 1050 0 0 4 4 84 184 1184 2184 7184 168 169 IQAAAA KOBAAA OOOOxx +7347 1051 1 3 7 7 47 347 1347 2347 7347 94 95 PWAAAA LOBAAA VVVVxx +951 1052 1 3 1 11 51 951 951 951 951 102 103 PKAAAA MOBAAA AAAAxx +4513 1053 1 1 3 13 13 513 513 4513 4513 26 27 PRAAAA NOBAAA HHHHxx +2692 1054 0 0 2 12 92 692 692 2692 2692 184 185 OZAAAA OOBAAA OOOOxx +9930 1055 0 2 0 10 30 930 1930 4930 9930 60 61 YRAAAA POBAAA VVVVxx +4516 1056 0 0 6 16 16 516 516 4516 4516 32 33 SRAAAA QOBAAA AAAAxx +1592 1057 0 0 2 12 92 592 1592 1592 1592 184 185 GJAAAA ROBAAA HHHHxx +6312 1058 0 0 2 12 12 312 312 1312 6312 24 25 UIAAAA SOBAAA OOOOxx +185 1059 1 1 5 5 85 185 185 185 185 170 171 DHAAAA TOBAAA VVVVxx +1848 1060 0 0 8 8 48 848 1848 1848 1848 96 97 CTAAAA UOBAAA AAAAxx +5844 1061 0 0 4 4 44 844 1844 844 5844 88 89 UQAAAA VOBAAA HHHHxx +1666 1062 0 2 6 6 66 666 1666 1666 1666 132 133 CMAAAA WOBAAA OOOOxx +5864 1063 0 0 4 4 64 864 1864 864 5864 128 129 ORAAAA XOBAAA VVVVxx +1004 1064 0 0 4 4 4 4 1004 1004 1004 8 9 QMAAAA YOBAAA AAAAxx +1758 1065 0 2 8 18 58 758 1758 1758 1758 116 117 QPAAAA ZOBAAA HHHHxx +8823 1066 1 3 3 3 23 823 823 3823 8823 46 47 JBAAAA APBAAA OOOOxx +129 1067 1 1 9 9 29 129 129 129 129 58 59 ZEAAAA BPBAAA VVVVxx +5703 1068 1 3 3 3 3 703 1703 703 5703 6 7 JLAAAA CPBAAA AAAAxx +3331 1069 1 3 1 11 31 331 1331 3331 3331 62 63 DYAAAA DPBAAA HHHHxx +5791 1070 1 3 1 11 91 791 1791 791 5791 182 183 TOAAAA EPBAAA OOOOxx +4421 1071 1 1 1 1 21 421 421 4421 4421 42 43 BOAAAA FPBAAA VVVVxx +9740 1072 0 0 0 0 40 740 1740 4740 9740 80 81 QKAAAA GPBAAA AAAAxx +798 1073 0 2 8 18 98 798 798 798 798 196 197 SEAAAA HPBAAA HHHHxx +571 1074 1 3 1 11 71 571 571 571 571 142 143 ZVAAAA IPBAAA OOOOxx +7084 1075 0 0 4 4 84 84 1084 2084 7084 168 169 MMAAAA JPBAAA VVVVxx +650 1076 0 2 0 10 50 650 650 650 650 100 101 AZAAAA KPBAAA AAAAxx +1467 1077 1 3 7 7 67 467 1467 1467 1467 134 135 LEAAAA LPBAAA HHHHxx +5446 1078 0 2 6 6 46 446 1446 446 5446 92 93 MBAAAA MPBAAA OOOOxx +830 1079 0 2 0 10 30 830 830 830 830 60 61 YFAAAA NPBAAA VVVVxx +5516 1080 0 0 6 16 16 516 1516 516 5516 32 33 EEAAAA OPBAAA AAAAxx +8520 1081 0 0 0 0 20 520 520 3520 8520 40 41 SPAAAA PPBAAA HHHHxx +1152 1082 0 0 2 12 52 152 1152 1152 1152 104 105 ISAAAA QPBAAA OOOOxx +862 1083 0 2 2 2 62 862 862 862 862 124 125 EHAAAA RPBAAA VVVVxx +454 1084 0 2 4 14 54 454 454 454 454 108 109 MRAAAA SPBAAA AAAAxx +9956 1085 0 0 6 16 56 956 1956 4956 9956 112 113 YSAAAA TPBAAA HHHHxx +1654 1086 0 2 4 14 54 654 1654 1654 1654 108 109 QLAAAA UPBAAA OOOOxx +257 1087 1 1 7 17 57 257 257 257 257 114 115 XJAAAA VPBAAA VVVVxx +5469 1088 1 1 9 9 69 469 1469 469 5469 138 139 JCAAAA WPBAAA AAAAxx +9075 1089 1 3 5 15 75 75 1075 4075 9075 150 151 BLAAAA XPBAAA HHHHxx +7799 1090 1 3 9 19 99 799 1799 2799 7799 198 199 ZNAAAA YPBAAA OOOOxx +2001 1091 1 1 1 1 1 1 1 2001 2001 2 3 ZYAAAA ZPBAAA VVVVxx +9786 1092 0 2 6 6 86 786 1786 4786 9786 172 173 KMAAAA AQBAAA AAAAxx +7281 1093 1 1 1 1 81 281 1281 2281 7281 162 163 BUAAAA BQBAAA HHHHxx +5137 1094 1 1 7 17 37 137 1137 137 5137 74 75 PPAAAA CQBAAA OOOOxx +4053 1095 1 1 3 13 53 53 53 4053 4053 106 107 XZAAAA DQBAAA VVVVxx +7911 1096 1 3 1 11 11 911 1911 2911 7911 22 23 HSAAAA EQBAAA AAAAxx +4298 1097 0 2 8 18 98 298 298 4298 4298 196 197 IJAAAA 
FQBAAA HHHHxx +4805 1098 1 1 5 5 5 805 805 4805 4805 10 11 VCAAAA GQBAAA OOOOxx +9038 1099 0 2 8 18 38 38 1038 4038 9038 76 77 QJAAAA HQBAAA VVVVxx +8023 1100 1 3 3 3 23 23 23 3023 8023 46 47 PWAAAA IQBAAA AAAAxx +6595 1101 1 3 5 15 95 595 595 1595 6595 190 191 RTAAAA JQBAAA HHHHxx +9831 1102 1 3 1 11 31 831 1831 4831 9831 62 63 DOAAAA KQBAAA OOOOxx +788 1103 0 0 8 8 88 788 788 788 788 176 177 IEAAAA LQBAAA VVVVxx +902 1104 0 2 2 2 2 902 902 902 902 4 5 SIAAAA MQBAAA AAAAxx +9137 1105 1 1 7 17 37 137 1137 4137 9137 74 75 LNAAAA NQBAAA HHHHxx +1744 1106 0 0 4 4 44 744 1744 1744 1744 88 89 CPAAAA OQBAAA OOOOxx +7285 1107 1 1 5 5 85 285 1285 2285 7285 170 171 FUAAAA PQBAAA VVVVxx +7006 1108 0 2 6 6 6 6 1006 2006 7006 12 13 MJAAAA QQBAAA AAAAxx +9236 1109 0 0 6 16 36 236 1236 4236 9236 72 73 GRAAAA RQBAAA HHHHxx +5472 1110 0 0 2 12 72 472 1472 472 5472 144 145 MCAAAA SQBAAA OOOOxx +7975 1111 1 3 5 15 75 975 1975 2975 7975 150 151 TUAAAA TQBAAA VVVVxx +4181 1112 1 1 1 1 81 181 181 4181 4181 162 163 VEAAAA UQBAAA AAAAxx +7677 1113 1 1 7 17 77 677 1677 2677 7677 154 155 HJAAAA VQBAAA HHHHxx +35 1114 1 3 5 15 35 35 35 35 35 70 71 JBAAAA WQBAAA OOOOxx +6813 1115 1 1 3 13 13 813 813 1813 6813 26 27 BCAAAA XQBAAA VVVVxx +6618 1116 0 2 8 18 18 618 618 1618 6618 36 37 OUAAAA YQBAAA AAAAxx +8069 1117 1 1 9 9 69 69 69 3069 8069 138 139 JYAAAA ZQBAAA HHHHxx +3071 1118 1 3 1 11 71 71 1071 3071 3071 142 143 DOAAAA ARBAAA OOOOxx +4390 1119 0 2 0 10 90 390 390 4390 4390 180 181 WMAAAA BRBAAA VVVVxx +7764 1120 0 0 4 4 64 764 1764 2764 7764 128 129 QMAAAA CRBAAA AAAAxx +8163 1121 1 3 3 3 63 163 163 3163 8163 126 127 ZBAAAA DRBAAA HHHHxx +1961 1122 1 1 1 1 61 961 1961 1961 1961 122 123 LXAAAA ERBAAA OOOOxx +1103 1123 1 3 3 3 3 103 1103 1103 1103 6 7 LQAAAA FRBAAA VVVVxx +5486 1124 0 2 6 6 86 486 1486 486 5486 172 173 ADAAAA GRBAAA AAAAxx +9513 1125 1 1 3 13 13 513 1513 4513 9513 26 27 XBAAAA HRBAAA HHHHxx +7311 1126 1 3 1 11 11 311 1311 2311 7311 22 23 FVAAAA IRBAAA OOOOxx +4144 1127 0 0 4 4 44 144 144 4144 4144 88 89 KDAAAA JRBAAA VVVVxx +7901 1128 1 1 1 1 1 901 1901 2901 7901 2 3 XRAAAA KRBAAA AAAAxx +4629 1129 1 1 9 9 29 629 629 4629 4629 58 59 BWAAAA LRBAAA HHHHxx +6858 1130 0 2 8 18 58 858 858 1858 6858 116 117 UDAAAA MRBAAA OOOOxx +125 1131 1 1 5 5 25 125 125 125 125 50 51 VEAAAA NRBAAA VVVVxx +3834 1132 0 2 4 14 34 834 1834 3834 3834 68 69 MRAAAA ORBAAA AAAAxx +8155 1133 1 3 5 15 55 155 155 3155 8155 110 111 RBAAAA PRBAAA HHHHxx +8230 1134 0 2 0 10 30 230 230 3230 8230 60 61 OEAAAA QRBAAA OOOOxx +744 1135 0 0 4 4 44 744 744 744 744 88 89 QCAAAA RRBAAA VVVVxx +357 1136 1 1 7 17 57 357 357 357 357 114 115 TNAAAA SRBAAA AAAAxx +2159 1137 1 3 9 19 59 159 159 2159 2159 118 119 BFAAAA TRBAAA HHHHxx +8559 1138 1 3 9 19 59 559 559 3559 8559 118 119 FRAAAA URBAAA OOOOxx +6866 1139 0 2 6 6 66 866 866 1866 6866 132 133 CEAAAA VRBAAA VVVVxx +3863 1140 1 3 3 3 63 863 1863 3863 3863 126 127 PSAAAA WRBAAA AAAAxx +4193 1141 1 1 3 13 93 193 193 4193 4193 186 187 HFAAAA XRBAAA HHHHxx +3277 1142 1 1 7 17 77 277 1277 3277 3277 154 155 BWAAAA YRBAAA OOOOxx +5577 1143 1 1 7 17 77 577 1577 577 5577 154 155 NGAAAA ZRBAAA VVVVxx +9503 1144 1 3 3 3 3 503 1503 4503 9503 6 7 NBAAAA ASBAAA AAAAxx +7642 1145 0 2 2 2 42 642 1642 2642 7642 84 85 YHAAAA BSBAAA HHHHxx +6197 1146 1 1 7 17 97 197 197 1197 6197 194 195 JEAAAA CSBAAA OOOOxx +8995 1147 1 3 5 15 95 995 995 3995 8995 190 191 ZHAAAA DSBAAA VVVVxx +440 1148 0 0 0 0 40 440 440 440 440 80 81 YQAAAA ESBAAA AAAAxx +8418 1149 0 2 8 18 18 418 418 3418 8418 36 37 ULAAAA FSBAAA HHHHxx 
+8531 1150 1 3 1 11 31 531 531 3531 8531 62 63 DQAAAA GSBAAA OOOOxx +3790 1151 0 2 0 10 90 790 1790 3790 3790 180 181 UPAAAA HSBAAA VVVVxx +7610 1152 0 2 0 10 10 610 1610 2610 7610 20 21 SGAAAA ISBAAA AAAAxx +1252 1153 0 0 2 12 52 252 1252 1252 1252 104 105 EWAAAA JSBAAA HHHHxx +7559 1154 1 3 9 19 59 559 1559 2559 7559 118 119 TEAAAA KSBAAA OOOOxx +9945 1155 1 1 5 5 45 945 1945 4945 9945 90 91 NSAAAA LSBAAA VVVVxx +9023 1156 1 3 3 3 23 23 1023 4023 9023 46 47 BJAAAA MSBAAA AAAAxx +3516 1157 0 0 6 16 16 516 1516 3516 3516 32 33 GFAAAA NSBAAA HHHHxx +4671 1158 1 3 1 11 71 671 671 4671 4671 142 143 RXAAAA OSBAAA OOOOxx +1465 1159 1 1 5 5 65 465 1465 1465 1465 130 131 JEAAAA PSBAAA VVVVxx +9515 1160 1 3 5 15 15 515 1515 4515 9515 30 31 ZBAAAA QSBAAA AAAAxx +3242 1161 0 2 2 2 42 242 1242 3242 3242 84 85 SUAAAA RSBAAA HHHHxx +1732 1162 0 0 2 12 32 732 1732 1732 1732 64 65 QOAAAA SSBAAA OOOOxx +1678 1163 0 2 8 18 78 678 1678 1678 1678 156 157 OMAAAA TSBAAA VVVVxx +1464 1164 0 0 4 4 64 464 1464 1464 1464 128 129 IEAAAA USBAAA AAAAxx +6546 1165 0 2 6 6 46 546 546 1546 6546 92 93 URAAAA VSBAAA HHHHxx +4448 1166 0 0 8 8 48 448 448 4448 4448 96 97 CPAAAA WSBAAA OOOOxx +9847 1167 1 3 7 7 47 847 1847 4847 9847 94 95 TOAAAA XSBAAA VVVVxx +8264 1168 0 0 4 4 64 264 264 3264 8264 128 129 WFAAAA YSBAAA AAAAxx +1620 1169 0 0 0 0 20 620 1620 1620 1620 40 41 IKAAAA ZSBAAA HHHHxx +9388 1170 0 0 8 8 88 388 1388 4388 9388 176 177 CXAAAA ATBAAA OOOOxx +6445 1171 1 1 5 5 45 445 445 1445 6445 90 91 XNAAAA BTBAAA VVVVxx +4789 1172 1 1 9 9 89 789 789 4789 4789 178 179 FCAAAA CTBAAA AAAAxx +1562 1173 0 2 2 2 62 562 1562 1562 1562 124 125 CIAAAA DTBAAA HHHHxx +7305 1174 1 1 5 5 5 305 1305 2305 7305 10 11 ZUAAAA ETBAAA OOOOxx +6344 1175 0 0 4 4 44 344 344 1344 6344 88 89 AKAAAA FTBAAA VVVVxx +5130 1176 0 2 0 10 30 130 1130 130 5130 60 61 IPAAAA GTBAAA AAAAxx +3284 1177 0 0 4 4 84 284 1284 3284 3284 168 169 IWAAAA HTBAAA HHHHxx +6346 1178 0 2 6 6 46 346 346 1346 6346 92 93 CKAAAA ITBAAA OOOOxx +1061 1179 1 1 1 1 61 61 1061 1061 1061 122 123 VOAAAA JTBAAA VVVVxx +872 1180 0 0 2 12 72 872 872 872 872 144 145 OHAAAA KTBAAA AAAAxx +123 1181 1 3 3 3 23 123 123 123 123 46 47 TEAAAA LTBAAA HHHHxx +7903 1182 1 3 3 3 3 903 1903 2903 7903 6 7 ZRAAAA MTBAAA OOOOxx +560 1183 0 0 0 0 60 560 560 560 560 120 121 OVAAAA NTBAAA VVVVxx +4446 1184 0 2 6 6 46 446 446 4446 4446 92 93 APAAAA OTBAAA AAAAxx +3909 1185 1 1 9 9 9 909 1909 3909 3909 18 19 JUAAAA PTBAAA HHHHxx +669 1186 1 1 9 9 69 669 669 669 669 138 139 TZAAAA QTBAAA OOOOxx +7843 1187 1 3 3 3 43 843 1843 2843 7843 86 87 RPAAAA RTBAAA VVVVxx +2546 1188 0 2 6 6 46 546 546 2546 2546 92 93 YTAAAA STBAAA AAAAxx +6757 1189 1 1 7 17 57 757 757 1757 6757 114 115 XZAAAA TTBAAA HHHHxx +466 1190 0 2 6 6 66 466 466 466 466 132 133 YRAAAA UTBAAA OOOOxx +5556 1191 0 0 6 16 56 556 1556 556 5556 112 113 SFAAAA VTBAAA VVVVxx +7196 1192 0 0 6 16 96 196 1196 2196 7196 192 193 UQAAAA WTBAAA AAAAxx +2947 1193 1 3 7 7 47 947 947 2947 2947 94 95 JJAAAA XTBAAA HHHHxx +6493 1194 1 1 3 13 93 493 493 1493 6493 186 187 TPAAAA YTBAAA OOOOxx +7203 1195 1 3 3 3 3 203 1203 2203 7203 6 7 BRAAAA ZTBAAA VVVVxx +3716 1196 0 0 6 16 16 716 1716 3716 3716 32 33 YMAAAA AUBAAA AAAAxx +8058 1197 0 2 8 18 58 58 58 3058 8058 116 117 YXAAAA BUBAAA HHHHxx +433 1198 1 1 3 13 33 433 433 433 433 66 67 RQAAAA CUBAAA OOOOxx +7649 1199 1 1 9 9 49 649 1649 2649 7649 98 99 FIAAAA DUBAAA VVVVxx +6966 1200 0 2 6 6 66 966 966 1966 6966 132 133 YHAAAA EUBAAA AAAAxx +553 1201 1 1 3 13 53 553 553 553 553 106 107 HVAAAA FUBAAA HHHHxx +3677 
1202 1 1 7 17 77 677 1677 3677 3677 154 155 LLAAAA GUBAAA OOOOxx +2344 1203 0 0 4 4 44 344 344 2344 2344 88 89 EMAAAA HUBAAA VVVVxx +7439 1204 1 3 9 19 39 439 1439 2439 7439 78 79 DAAAAA IUBAAA AAAAxx +3910 1205 0 2 0 10 10 910 1910 3910 3910 20 21 KUAAAA JUBAAA HHHHxx +3638 1206 0 2 8 18 38 638 1638 3638 3638 76 77 YJAAAA KUBAAA OOOOxx +6637 1207 1 1 7 17 37 637 637 1637 6637 74 75 HVAAAA LUBAAA VVVVxx +4438 1208 0 2 8 18 38 438 438 4438 4438 76 77 SOAAAA MUBAAA AAAAxx +171 1209 1 3 1 11 71 171 171 171 171 142 143 PGAAAA NUBAAA HHHHxx +310 1210 0 2 0 10 10 310 310 310 310 20 21 YLAAAA OUBAAA OOOOxx +2714 1211 0 2 4 14 14 714 714 2714 2714 28 29 KAAAAA PUBAAA VVVVxx +5199 1212 1 3 9 19 99 199 1199 199 5199 198 199 ZRAAAA QUBAAA AAAAxx +8005 1213 1 1 5 5 5 5 5 3005 8005 10 11 XVAAAA RUBAAA HHHHxx +3188 1214 0 0 8 8 88 188 1188 3188 3188 176 177 QSAAAA SUBAAA OOOOxx +1518 1215 0 2 8 18 18 518 1518 1518 1518 36 37 KGAAAA TUBAAA VVVVxx +6760 1216 0 0 0 0 60 760 760 1760 6760 120 121 AAAAAA UUBAAA AAAAxx +9373 1217 1 1 3 13 73 373 1373 4373 9373 146 147 NWAAAA VUBAAA HHHHxx +1938 1218 0 2 8 18 38 938 1938 1938 1938 76 77 OWAAAA WUBAAA OOOOxx +2865 1219 1 1 5 5 65 865 865 2865 2865 130 131 FGAAAA XUBAAA VVVVxx +3203 1220 1 3 3 3 3 203 1203 3203 3203 6 7 FTAAAA YUBAAA AAAAxx +6025 1221 1 1 5 5 25 25 25 1025 6025 50 51 TXAAAA ZUBAAA HHHHxx +8684 1222 0 0 4 4 84 684 684 3684 8684 168 169 AWAAAA AVBAAA OOOOxx +7732 1223 0 0 2 12 32 732 1732 2732 7732 64 65 KLAAAA BVBAAA VVVVxx +3218 1224 0 2 8 18 18 218 1218 3218 3218 36 37 UTAAAA CVBAAA AAAAxx +525 1225 1 1 5 5 25 525 525 525 525 50 51 FUAAAA DVBAAA HHHHxx +601 1226 1 1 1 1 1 601 601 601 601 2 3 DXAAAA EVBAAA OOOOxx +6091 1227 1 3 1 11 91 91 91 1091 6091 182 183 HAAAAA FVBAAA VVVVxx +4498 1228 0 2 8 18 98 498 498 4498 4498 196 197 ARAAAA GVBAAA AAAAxx +8192 1229 0 0 2 12 92 192 192 3192 8192 184 185 CDAAAA HVBAAA HHHHxx +8006 1230 0 2 6 6 6 6 6 3006 8006 12 13 YVAAAA IVBAAA OOOOxx +6157 1231 1 1 7 17 57 157 157 1157 6157 114 115 VCAAAA JVBAAA VVVVxx +312 1232 0 0 2 12 12 312 312 312 312 24 25 AMAAAA KVBAAA AAAAxx +8652 1233 0 0 2 12 52 652 652 3652 8652 104 105 UUAAAA LVBAAA HHHHxx +2787 1234 1 3 7 7 87 787 787 2787 2787 174 175 FDAAAA MVBAAA OOOOxx +1782 1235 0 2 2 2 82 782 1782 1782 1782 164 165 OQAAAA NVBAAA VVVVxx +23 1236 1 3 3 3 23 23 23 23 23 46 47 XAAAAA OVBAAA AAAAxx +1206 1237 0 2 6 6 6 206 1206 1206 1206 12 13 KUAAAA PVBAAA HHHHxx +1076 1238 0 0 6 16 76 76 1076 1076 1076 152 153 KPAAAA QVBAAA OOOOxx +5379 1239 1 3 9 19 79 379 1379 379 5379 158 159 XYAAAA RVBAAA VVVVxx +2047 1240 1 3 7 7 47 47 47 2047 2047 94 95 TAAAAA SVBAAA AAAAxx +6262 1241 0 2 2 2 62 262 262 1262 6262 124 125 WGAAAA TVBAAA HHHHxx +1840 1242 0 0 0 0 40 840 1840 1840 1840 80 81 USAAAA UVBAAA OOOOxx +2106 1243 0 2 6 6 6 106 106 2106 2106 12 13 ADAAAA VVBAAA VVVVxx +1307 1244 1 3 7 7 7 307 1307 1307 1307 14 15 HYAAAA WVBAAA AAAAxx +735 1245 1 3 5 15 35 735 735 735 735 70 71 HCAAAA XVBAAA HHHHxx +3657 1246 1 1 7 17 57 657 1657 3657 3657 114 115 RKAAAA YVBAAA OOOOxx +3006 1247 0 2 6 6 6 6 1006 3006 3006 12 13 QLAAAA ZVBAAA VVVVxx +1538 1248 0 2 8 18 38 538 1538 1538 1538 76 77 EHAAAA AWBAAA AAAAxx +6098 1249 0 2 8 18 98 98 98 1098 6098 196 197 OAAAAA BWBAAA HHHHxx +5267 1250 1 3 7 7 67 267 1267 267 5267 134 135 PUAAAA CWBAAA OOOOxx +9757 1251 1 1 7 17 57 757 1757 4757 9757 114 115 HLAAAA DWBAAA VVVVxx +1236 1252 0 0 6 16 36 236 1236 1236 1236 72 73 OVAAAA EWBAAA AAAAxx +83 1253 1 3 3 3 83 83 83 83 83 166 167 FDAAAA FWBAAA HHHHxx +9227 1254 1 3 7 7 27 227 1227 4227 9227 54 
55 XQAAAA GWBAAA OOOOxx +8772 1255 0 0 2 12 72 772 772 3772 8772 144 145 KZAAAA HWBAAA VVVVxx +8822 1256 0 2 2 2 22 822 822 3822 8822 44 45 IBAAAA IWBAAA AAAAxx +7167 1257 1 3 7 7 67 167 1167 2167 7167 134 135 RPAAAA JWBAAA HHHHxx +6909 1258 1 1 9 9 9 909 909 1909 6909 18 19 TFAAAA KWBAAA OOOOxx +1439 1259 1 3 9 19 39 439 1439 1439 1439 78 79 JDAAAA LWBAAA VVVVxx +2370 1260 0 2 0 10 70 370 370 2370 2370 140 141 ENAAAA MWBAAA AAAAxx +4577 1261 1 1 7 17 77 577 577 4577 4577 154 155 BUAAAA NWBAAA HHHHxx +2575 1262 1 3 5 15 75 575 575 2575 2575 150 151 BVAAAA OWBAAA OOOOxx +2795 1263 1 3 5 15 95 795 795 2795 2795 190 191 NDAAAA PWBAAA VVVVxx +5520 1264 0 0 0 0 20 520 1520 520 5520 40 41 IEAAAA QWBAAA AAAAxx +382 1265 0 2 2 2 82 382 382 382 382 164 165 SOAAAA RWBAAA HHHHxx +6335 1266 1 3 5 15 35 335 335 1335 6335 70 71 RJAAAA SWBAAA OOOOxx +8430 1267 0 2 0 10 30 430 430 3430 8430 60 61 GMAAAA TWBAAA VVVVxx +4131 1268 1 3 1 11 31 131 131 4131 4131 62 63 XCAAAA UWBAAA AAAAxx +9332 1269 0 0 2 12 32 332 1332 4332 9332 64 65 YUAAAA VWBAAA HHHHxx +293 1270 1 1 3 13 93 293 293 293 293 186 187 HLAAAA WWBAAA OOOOxx +2276 1271 0 0 6 16 76 276 276 2276 2276 152 153 OJAAAA XWBAAA VVVVxx +5687 1272 1 3 7 7 87 687 1687 687 5687 174 175 TKAAAA YWBAAA AAAAxx +5862 1273 0 2 2 2 62 862 1862 862 5862 124 125 MRAAAA ZWBAAA HHHHxx +5073 1274 1 1 3 13 73 73 1073 73 5073 146 147 DNAAAA AXBAAA OOOOxx +4170 1275 0 2 0 10 70 170 170 4170 4170 140 141 KEAAAA BXBAAA VVVVxx +5039 1276 1 3 9 19 39 39 1039 39 5039 78 79 VLAAAA CXBAAA AAAAxx +3294 1277 0 2 4 14 94 294 1294 3294 3294 188 189 SWAAAA DXBAAA HHHHxx +6015 1278 1 3 5 15 15 15 15 1015 6015 30 31 JXAAAA EXBAAA OOOOxx +9015 1279 1 3 5 15 15 15 1015 4015 9015 30 31 TIAAAA FXBAAA VVVVxx +9785 1280 1 1 5 5 85 785 1785 4785 9785 170 171 JMAAAA GXBAAA AAAAxx +4312 1281 0 0 2 12 12 312 312 4312 4312 24 25 WJAAAA HXBAAA HHHHxx +6343 1282 1 3 3 3 43 343 343 1343 6343 86 87 ZJAAAA IXBAAA OOOOxx +2161 1283 1 1 1 1 61 161 161 2161 2161 122 123 DFAAAA JXBAAA VVVVxx +4490 1284 0 2 0 10 90 490 490 4490 4490 180 181 SQAAAA KXBAAA AAAAxx +4454 1285 0 2 4 14 54 454 454 4454 4454 108 109 IPAAAA LXBAAA HHHHxx +7647 1286 1 3 7 7 47 647 1647 2647 7647 94 95 DIAAAA MXBAAA OOOOxx +1028 1287 0 0 8 8 28 28 1028 1028 1028 56 57 ONAAAA NXBAAA VVVVxx +2965 1288 1 1 5 5 65 965 965 2965 2965 130 131 BKAAAA OXBAAA AAAAxx +9900 1289 0 0 0 0 0 900 1900 4900 9900 0 1 UQAAAA PXBAAA HHHHxx +5509 1290 1 1 9 9 9 509 1509 509 5509 18 19 XDAAAA QXBAAA OOOOxx +7751 1291 1 3 1 11 51 751 1751 2751 7751 102 103 DMAAAA RXBAAA VVVVxx +9594 1292 0 2 4 14 94 594 1594 4594 9594 188 189 AFAAAA SXBAAA AAAAxx +7632 1293 0 0 2 12 32 632 1632 2632 7632 64 65 OHAAAA TXBAAA HHHHxx +6528 1294 0 0 8 8 28 528 528 1528 6528 56 57 CRAAAA UXBAAA OOOOxx +1041 1295 1 1 1 1 41 41 1041 1041 1041 82 83 BOAAAA VXBAAA VVVVxx +1534 1296 0 2 4 14 34 534 1534 1534 1534 68 69 AHAAAA WXBAAA AAAAxx +4229 1297 1 1 9 9 29 229 229 4229 4229 58 59 RGAAAA XXBAAA HHHHxx +84 1298 0 0 4 4 84 84 84 84 84 168 169 GDAAAA YXBAAA OOOOxx +2189 1299 1 1 9 9 89 189 189 2189 2189 178 179 FGAAAA ZXBAAA VVVVxx +7566 1300 0 2 6 6 66 566 1566 2566 7566 132 133 AFAAAA AYBAAA AAAAxx +707 1301 1 3 7 7 7 707 707 707 707 14 15 FBAAAA BYBAAA HHHHxx +581 1302 1 1 1 1 81 581 581 581 581 162 163 JWAAAA CYBAAA OOOOxx +6753 1303 1 1 3 13 53 753 753 1753 6753 106 107 TZAAAA DYBAAA VVVVxx +8604 1304 0 0 4 4 4 604 604 3604 8604 8 9 YSAAAA EYBAAA AAAAxx +373 1305 1 1 3 13 73 373 373 373 373 146 147 JOAAAA FYBAAA HHHHxx +9635 1306 1 3 5 15 35 635 1635 4635 9635 70 71 PGAAAA 
GYBAAA OOOOxx +9277 1307 1 1 7 17 77 277 1277 4277 9277 154 155 VSAAAA HYBAAA VVVVxx +7117 1308 1 1 7 17 17 117 1117 2117 7117 34 35 TNAAAA IYBAAA AAAAxx +8564 1309 0 0 4 4 64 564 564 3564 8564 128 129 KRAAAA JYBAAA HHHHxx +1697 1310 1 1 7 17 97 697 1697 1697 1697 194 195 HNAAAA KYBAAA OOOOxx +7840 1311 0 0 0 0 40 840 1840 2840 7840 80 81 OPAAAA LYBAAA VVVVxx +3646 1312 0 2 6 6 46 646 1646 3646 3646 92 93 GKAAAA MYBAAA AAAAxx +368 1313 0 0 8 8 68 368 368 368 368 136 137 EOAAAA NYBAAA HHHHxx +4797 1314 1 1 7 17 97 797 797 4797 4797 194 195 NCAAAA OYBAAA OOOOxx +5300 1315 0 0 0 0 0 300 1300 300 5300 0 1 WVAAAA PYBAAA VVVVxx +7664 1316 0 0 4 4 64 664 1664 2664 7664 128 129 UIAAAA QYBAAA AAAAxx +1466 1317 0 2 6 6 66 466 1466 1466 1466 132 133 KEAAAA RYBAAA HHHHxx +2477 1318 1 1 7 17 77 477 477 2477 2477 154 155 HRAAAA SYBAAA OOOOxx +2036 1319 0 0 6 16 36 36 36 2036 2036 72 73 IAAAAA TYBAAA VVVVxx +3624 1320 0 0 4 4 24 624 1624 3624 3624 48 49 KJAAAA UYBAAA AAAAxx +5099 1321 1 3 9 19 99 99 1099 99 5099 198 199 DOAAAA VYBAAA HHHHxx +1308 1322 0 0 8 8 8 308 1308 1308 1308 16 17 IYAAAA WYBAAA OOOOxx +3704 1323 0 0 4 4 4 704 1704 3704 3704 8 9 MMAAAA XYBAAA VVVVxx +2451 1324 1 3 1 11 51 451 451 2451 2451 102 103 HQAAAA YYBAAA AAAAxx +4898 1325 0 2 8 18 98 898 898 4898 4898 196 197 KGAAAA ZYBAAA HHHHxx +4959 1326 1 3 9 19 59 959 959 4959 4959 118 119 TIAAAA AZBAAA OOOOxx +5942 1327 0 2 2 2 42 942 1942 942 5942 84 85 OUAAAA BZBAAA VVVVxx +2425 1328 1 1 5 5 25 425 425 2425 2425 50 51 HPAAAA CZBAAA AAAAxx +7760 1329 0 0 0 0 60 760 1760 2760 7760 120 121 MMAAAA DZBAAA HHHHxx +6294 1330 0 2 4 14 94 294 294 1294 6294 188 189 CIAAAA EZBAAA OOOOxx +6785 1331 1 1 5 5 85 785 785 1785 6785 170 171 ZAAAAA FZBAAA VVVVxx +3542 1332 0 2 2 2 42 542 1542 3542 3542 84 85 GGAAAA GZBAAA AAAAxx +1809 1333 1 1 9 9 9 809 1809 1809 1809 18 19 PRAAAA HZBAAA HHHHxx +130 1334 0 2 0 10 30 130 130 130 130 60 61 AFAAAA IZBAAA OOOOxx +8672 1335 0 0 2 12 72 672 672 3672 8672 144 145 OVAAAA JZBAAA VVVVxx +2125 1336 1 1 5 5 25 125 125 2125 2125 50 51 TDAAAA KZBAAA AAAAxx +7683 1337 1 3 3 3 83 683 1683 2683 7683 166 167 NJAAAA LZBAAA HHHHxx +7842 1338 0 2 2 2 42 842 1842 2842 7842 84 85 QPAAAA MZBAAA OOOOxx +9584 1339 0 0 4 4 84 584 1584 4584 9584 168 169 QEAAAA NZBAAA VVVVxx +7963 1340 1 3 3 3 63 963 1963 2963 7963 126 127 HUAAAA OZBAAA AAAAxx +8581 1341 1 1 1 1 81 581 581 3581 8581 162 163 BSAAAA PZBAAA HHHHxx +2135 1342 1 3 5 15 35 135 135 2135 2135 70 71 DEAAAA QZBAAA OOOOxx +7352 1343 0 0 2 12 52 352 1352 2352 7352 104 105 UWAAAA RZBAAA VVVVxx +5789 1344 1 1 9 9 89 789 1789 789 5789 178 179 ROAAAA SZBAAA AAAAxx +8490 1345 0 2 0 10 90 490 490 3490 8490 180 181 OOAAAA TZBAAA HHHHxx +2145 1346 1 1 5 5 45 145 145 2145 2145 90 91 NEAAAA UZBAAA OOOOxx +7021 1347 1 1 1 1 21 21 1021 2021 7021 42 43 BKAAAA VZBAAA VVVVxx +3736 1348 0 0 6 16 36 736 1736 3736 3736 72 73 SNAAAA WZBAAA AAAAxx +7396 1349 0 0 6 16 96 396 1396 2396 7396 192 193 MYAAAA XZBAAA HHHHxx +6334 1350 0 2 4 14 34 334 334 1334 6334 68 69 QJAAAA YZBAAA OOOOxx +5461 1351 1 1 1 1 61 461 1461 461 5461 122 123 BCAAAA ZZBAAA VVVVxx +5337 1352 1 1 7 17 37 337 1337 337 5337 74 75 HXAAAA AACAAA AAAAxx +7440 1353 0 0 0 0 40 440 1440 2440 7440 80 81 EAAAAA BACAAA HHHHxx +6879 1354 1 3 9 19 79 879 879 1879 6879 158 159 PEAAAA CACAAA OOOOxx +2432 1355 0 0 2 12 32 432 432 2432 2432 64 65 OPAAAA DACAAA VVVVxx +8529 1356 1 1 9 9 29 529 529 3529 8529 58 59 BQAAAA EACAAA AAAAxx +7859 1357 1 3 9 19 59 859 1859 2859 7859 118 119 HQAAAA FACAAA HHHHxx +15 1358 1 3 5 15 15 15 15 15 15 30 31 
PAAAAA GACAAA OOOOxx +7475 1359 1 3 5 15 75 475 1475 2475 7475 150 151 NBAAAA HACAAA VVVVxx +717 1360 1 1 7 17 17 717 717 717 717 34 35 PBAAAA IACAAA AAAAxx +250 1361 0 2 0 10 50 250 250 250 250 100 101 QJAAAA JACAAA HHHHxx +4700 1362 0 0 0 0 0 700 700 4700 4700 0 1 UYAAAA KACAAA OOOOxx +7510 1363 0 2 0 10 10 510 1510 2510 7510 20 21 WCAAAA LACAAA VVVVxx +4562 1364 0 2 2 2 62 562 562 4562 4562 124 125 MTAAAA MACAAA AAAAxx +8075 1365 1 3 5 15 75 75 75 3075 8075 150 151 PYAAAA NACAAA HHHHxx +871 1366 1 3 1 11 71 871 871 871 871 142 143 NHAAAA OACAAA OOOOxx +7161 1367 1 1 1 1 61 161 1161 2161 7161 122 123 LPAAAA PACAAA VVVVxx +9109 1368 1 1 9 9 9 109 1109 4109 9109 18 19 JMAAAA QACAAA AAAAxx +8675 1369 1 3 5 15 75 675 675 3675 8675 150 151 RVAAAA RACAAA HHHHxx +1025 1370 1 1 5 5 25 25 1025 1025 1025 50 51 LNAAAA SACAAA OOOOxx +4065 1371 1 1 5 5 65 65 65 4065 4065 130 131 JAAAAA TACAAA VVVVxx +3511 1372 1 3 1 11 11 511 1511 3511 3511 22 23 BFAAAA UACAAA AAAAxx +9840 1373 0 0 0 0 40 840 1840 4840 9840 80 81 MOAAAA VACAAA HHHHxx +7495 1374 1 3 5 15 95 495 1495 2495 7495 190 191 HCAAAA WACAAA OOOOxx +55 1375 1 3 5 15 55 55 55 55 55 110 111 DCAAAA XACAAA VVVVxx +6151 1376 1 3 1 11 51 151 151 1151 6151 102 103 PCAAAA YACAAA AAAAxx +2512 1377 0 0 2 12 12 512 512 2512 2512 24 25 QSAAAA ZACAAA HHHHxx +5881 1378 1 1 1 1 81 881 1881 881 5881 162 163 FSAAAA ABCAAA OOOOxx +1442 1379 0 2 2 2 42 442 1442 1442 1442 84 85 MDAAAA BBCAAA VVVVxx +1270 1380 0 2 0 10 70 270 1270 1270 1270 140 141 WWAAAA CBCAAA AAAAxx +959 1381 1 3 9 19 59 959 959 959 959 118 119 XKAAAA DBCAAA HHHHxx +8251 1382 1 3 1 11 51 251 251 3251 8251 102 103 JFAAAA EBCAAA OOOOxx +3051 1383 1 3 1 11 51 51 1051 3051 3051 102 103 JNAAAA FBCAAA VVVVxx +5052 1384 0 0 2 12 52 52 1052 52 5052 104 105 IMAAAA GBCAAA AAAAxx +1863 1385 1 3 3 3 63 863 1863 1863 1863 126 127 RTAAAA HBCAAA HHHHxx +344 1386 0 0 4 4 44 344 344 344 344 88 89 GNAAAA IBCAAA OOOOxx +3590 1387 0 2 0 10 90 590 1590 3590 3590 180 181 CIAAAA JBCAAA VVVVxx +4223 1388 1 3 3 3 23 223 223 4223 4223 46 47 LGAAAA KBCAAA AAAAxx +2284 1389 0 0 4 4 84 284 284 2284 2284 168 169 WJAAAA LBCAAA HHHHxx +9425 1390 1 1 5 5 25 425 1425 4425 9425 50 51 NYAAAA MBCAAA OOOOxx +6221 1391 1 1 1 1 21 221 221 1221 6221 42 43 HFAAAA NBCAAA VVVVxx +195 1392 1 3 5 15 95 195 195 195 195 190 191 NHAAAA OBCAAA AAAAxx +1517 1393 1 1 7 17 17 517 1517 1517 1517 34 35 JGAAAA PBCAAA HHHHxx +3791 1394 1 3 1 11 91 791 1791 3791 3791 182 183 VPAAAA QBCAAA OOOOxx +572 1395 0 0 2 12 72 572 572 572 572 144 145 AWAAAA RBCAAA VVVVxx +46 1396 0 2 6 6 46 46 46 46 46 92 93 UBAAAA SBCAAA AAAAxx +9451 1397 1 3 1 11 51 451 1451 4451 9451 102 103 NZAAAA TBCAAA HHHHxx +3359 1398 1 3 9 19 59 359 1359 3359 3359 118 119 FZAAAA UBCAAA OOOOxx +8867 1399 1 3 7 7 67 867 867 3867 8867 134 135 BDAAAA VBCAAA VVVVxx +674 1400 0 2 4 14 74 674 674 674 674 148 149 YZAAAA WBCAAA AAAAxx +2674 1401 0 2 4 14 74 674 674 2674 2674 148 149 WYAAAA XBCAAA HHHHxx +6523 1402 1 3 3 3 23 523 523 1523 6523 46 47 XQAAAA YBCAAA OOOOxx +6210 1403 0 2 0 10 10 210 210 1210 6210 20 21 WEAAAA ZBCAAA VVVVxx +7564 1404 0 0 4 4 64 564 1564 2564 7564 128 129 YEAAAA ACCAAA AAAAxx +4776 1405 0 0 6 16 76 776 776 4776 4776 152 153 SBAAAA BCCAAA HHHHxx +2993 1406 1 1 3 13 93 993 993 2993 2993 186 187 DLAAAA CCCAAA OOOOxx +2969 1407 1 1 9 9 69 969 969 2969 2969 138 139 FKAAAA DCCAAA VVVVxx +1762 1408 0 2 2 2 62 762 1762 1762 1762 124 125 UPAAAA ECCAAA AAAAxx +685 1409 1 1 5 5 85 685 685 685 685 170 171 JAAAAA FCCAAA HHHHxx +5312 1410 0 0 2 12 12 312 1312 312 5312 24 25 IWAAAA 
GCCAAA OOOOxx +3264 1411 0 0 4 4 64 264 1264 3264 3264 128 129 OVAAAA HCCAAA VVVVxx +7008 1412 0 0 8 8 8 8 1008 2008 7008 16 17 OJAAAA ICCAAA AAAAxx +5167 1413 1 3 7 7 67 167 1167 167 5167 134 135 TQAAAA JCCAAA HHHHxx +3060 1414 0 0 0 0 60 60 1060 3060 3060 120 121 SNAAAA KCCAAA OOOOxx +1752 1415 0 0 2 12 52 752 1752 1752 1752 104 105 KPAAAA LCCAAA VVVVxx +1016 1416 0 0 6 16 16 16 1016 1016 1016 32 33 CNAAAA MCCAAA AAAAxx +7365 1417 1 1 5 5 65 365 1365 2365 7365 130 131 HXAAAA NCCAAA HHHHxx +4358 1418 0 2 8 18 58 358 358 4358 4358 116 117 QLAAAA OCCAAA OOOOxx +2819 1419 1 3 9 19 19 819 819 2819 2819 38 39 LEAAAA PCCAAA VVVVxx +6727 1420 1 3 7 7 27 727 727 1727 6727 54 55 TYAAAA QCCAAA AAAAxx +1459 1421 1 3 9 19 59 459 1459 1459 1459 118 119 DEAAAA RCCAAA HHHHxx +1708 1422 0 0 8 8 8 708 1708 1708 1708 16 17 SNAAAA SCCAAA OOOOxx +471 1423 1 3 1 11 71 471 471 471 471 142 143 DSAAAA TCCAAA VVVVxx +387 1424 1 3 7 7 87 387 387 387 387 174 175 XOAAAA UCCAAA AAAAxx +1166 1425 0 2 6 6 66 166 1166 1166 1166 132 133 WSAAAA VCCAAA HHHHxx +2400 1426 0 0 0 0 0 400 400 2400 2400 0 1 IOAAAA WCCAAA OOOOxx +3584 1427 0 0 4 4 84 584 1584 3584 3584 168 169 WHAAAA XCCAAA VVVVxx +6423 1428 1 3 3 3 23 423 423 1423 6423 46 47 BNAAAA YCCAAA AAAAxx +9520 1429 0 0 0 0 20 520 1520 4520 9520 40 41 ECAAAA ZCCAAA HHHHxx +8080 1430 0 0 0 0 80 80 80 3080 8080 160 161 UYAAAA ADCAAA OOOOxx +5709 1431 1 1 9 9 9 709 1709 709 5709 18 19 PLAAAA BDCAAA VVVVxx +1131 1432 1 3 1 11 31 131 1131 1131 1131 62 63 NRAAAA CDCAAA AAAAxx +8562 1433 0 2 2 2 62 562 562 3562 8562 124 125 IRAAAA DDCAAA HHHHxx +5766 1434 0 2 6 6 66 766 1766 766 5766 132 133 UNAAAA EDCAAA OOOOxx +245 1435 1 1 5 5 45 245 245 245 245 90 91 LJAAAA FDCAAA VVVVxx +9869 1436 1 1 9 9 69 869 1869 4869 9869 138 139 PPAAAA GDCAAA AAAAxx +3533 1437 1 1 3 13 33 533 1533 3533 3533 66 67 XFAAAA HDCAAA HHHHxx +5109 1438 1 1 9 9 9 109 1109 109 5109 18 19 NOAAAA IDCAAA OOOOxx +977 1439 1 1 7 17 77 977 977 977 977 154 155 PLAAAA JDCAAA VVVVxx +1651 1440 1 3 1 11 51 651 1651 1651 1651 102 103 NLAAAA KDCAAA AAAAxx +1357 1441 1 1 7 17 57 357 1357 1357 1357 114 115 FAAAAA LDCAAA HHHHxx +9087 1442 1 3 7 7 87 87 1087 4087 9087 174 175 NLAAAA MDCAAA OOOOxx +3399 1443 1 3 9 19 99 399 1399 3399 3399 198 199 TAAAAA NDCAAA VVVVxx +7543 1444 1 3 3 3 43 543 1543 2543 7543 86 87 DEAAAA ODCAAA AAAAxx +2469 1445 1 1 9 9 69 469 469 2469 2469 138 139 ZQAAAA PDCAAA HHHHxx +8305 1446 1 1 5 5 5 305 305 3305 8305 10 11 LHAAAA QDCAAA OOOOxx +3265 1447 1 1 5 5 65 265 1265 3265 3265 130 131 PVAAAA RDCAAA VVVVxx +9977 1448 1 1 7 17 77 977 1977 4977 9977 154 155 TTAAAA SDCAAA AAAAxx +3961 1449 1 1 1 1 61 961 1961 3961 3961 122 123 JWAAAA TDCAAA HHHHxx +4952 1450 0 0 2 12 52 952 952 4952 4952 104 105 MIAAAA UDCAAA OOOOxx +5173 1451 1 1 3 13 73 173 1173 173 5173 146 147 ZQAAAA VDCAAA VVVVxx +860 1452 0 0 0 0 60 860 860 860 860 120 121 CHAAAA WDCAAA AAAAxx +4523 1453 1 3 3 3 23 523 523 4523 4523 46 47 ZRAAAA XDCAAA HHHHxx +2361 1454 1 1 1 1 61 361 361 2361 2361 122 123 VMAAAA YDCAAA OOOOxx +7877 1455 1 1 7 17 77 877 1877 2877 7877 154 155 ZQAAAA ZDCAAA VVVVxx +3422 1456 0 2 2 2 22 422 1422 3422 3422 44 45 QBAAAA AECAAA AAAAxx +5781 1457 1 1 1 1 81 781 1781 781 5781 162 163 JOAAAA BECAAA HHHHxx +4752 1458 0 0 2 12 52 752 752 4752 4752 104 105 UAAAAA CECAAA OOOOxx +1786 1459 0 2 6 6 86 786 1786 1786 1786 172 173 SQAAAA DECAAA VVVVxx +1892 1460 0 0 2 12 92 892 1892 1892 1892 184 185 UUAAAA EECAAA AAAAxx +6389 1461 1 1 9 9 89 389 389 1389 6389 178 179 TLAAAA FECAAA HHHHxx +8644 1462 0 0 4 4 44 644 644 3644 8644 
88 89 MUAAAA GECAAA OOOOxx +9056 1463 0 0 6 16 56 56 1056 4056 9056 112 113 IKAAAA HECAAA VVVVxx +1423 1464 1 3 3 3 23 423 1423 1423 1423 46 47 TCAAAA IECAAA AAAAxx +4901 1465 1 1 1 1 1 901 901 4901 4901 2 3 NGAAAA JECAAA HHHHxx +3859 1466 1 3 9 19 59 859 1859 3859 3859 118 119 LSAAAA KECAAA OOOOxx +2324 1467 0 0 4 4 24 324 324 2324 2324 48 49 KLAAAA LECAAA VVVVxx +8101 1468 1 1 1 1 1 101 101 3101 8101 2 3 PZAAAA MECAAA AAAAxx +8016 1469 0 0 6 16 16 16 16 3016 8016 32 33 IWAAAA NECAAA HHHHxx +5826 1470 0 2 6 6 26 826 1826 826 5826 52 53 CQAAAA OECAAA OOOOxx +8266 1471 0 2 6 6 66 266 266 3266 8266 132 133 YFAAAA PECAAA VVVVxx +7558 1472 0 2 8 18 58 558 1558 2558 7558 116 117 SEAAAA QECAAA AAAAxx +6976 1473 0 0 6 16 76 976 976 1976 6976 152 153 IIAAAA RECAAA HHHHxx +222 1474 0 2 2 2 22 222 222 222 222 44 45 OIAAAA SECAAA OOOOxx +1624 1475 0 0 4 4 24 624 1624 1624 1624 48 49 MKAAAA TECAAA VVVVxx +1250 1476 0 2 0 10 50 250 1250 1250 1250 100 101 CWAAAA UECAAA AAAAxx +1621 1477 1 1 1 1 21 621 1621 1621 1621 42 43 JKAAAA VECAAA HHHHxx +2350 1478 0 2 0 10 50 350 350 2350 2350 100 101 KMAAAA WECAAA OOOOxx +5239 1479 1 3 9 19 39 239 1239 239 5239 78 79 NTAAAA XECAAA VVVVxx +6681 1480 1 1 1 1 81 681 681 1681 6681 162 163 ZWAAAA YECAAA AAAAxx +4983 1481 1 3 3 3 83 983 983 4983 4983 166 167 RJAAAA ZECAAA HHHHxx +7149 1482 1 1 9 9 49 149 1149 2149 7149 98 99 ZOAAAA AFCAAA OOOOxx +3502 1483 0 2 2 2 2 502 1502 3502 3502 4 5 SEAAAA BFCAAA VVVVxx +3133 1484 1 1 3 13 33 133 1133 3133 3133 66 67 NQAAAA CFCAAA AAAAxx +8342 1485 0 2 2 2 42 342 342 3342 8342 84 85 WIAAAA DFCAAA HHHHxx +3041 1486 1 1 1 1 41 41 1041 3041 3041 82 83 ZMAAAA EFCAAA OOOOxx +5383 1487 1 3 3 3 83 383 1383 383 5383 166 167 BZAAAA FFCAAA VVVVxx +3916 1488 0 0 6 16 16 916 1916 3916 3916 32 33 QUAAAA GFCAAA AAAAxx +1438 1489 0 2 8 18 38 438 1438 1438 1438 76 77 IDAAAA HFCAAA HHHHxx +9408 1490 0 0 8 8 8 408 1408 4408 9408 16 17 WXAAAA IFCAAA OOOOxx +5783 1491 1 3 3 3 83 783 1783 783 5783 166 167 LOAAAA JFCAAA VVVVxx +683 1492 1 3 3 3 83 683 683 683 683 166 167 HAAAAA KFCAAA AAAAxx +9381 1493 1 1 1 1 81 381 1381 4381 9381 162 163 VWAAAA LFCAAA HHHHxx +5676 1494 0 0 6 16 76 676 1676 676 5676 152 153 IKAAAA MFCAAA OOOOxx +3224 1495 0 0 4 4 24 224 1224 3224 3224 48 49 AUAAAA NFCAAA VVVVxx +8332 1496 0 0 2 12 32 332 332 3332 8332 64 65 MIAAAA OFCAAA AAAAxx +3372 1497 0 0 2 12 72 372 1372 3372 3372 144 145 SZAAAA PFCAAA HHHHxx +7436 1498 0 0 6 16 36 436 1436 2436 7436 72 73 AAAAAA QFCAAA OOOOxx +5010 1499 0 2 0 10 10 10 1010 10 5010 20 21 SKAAAA RFCAAA VVVVxx +7256 1500 0 0 6 16 56 256 1256 2256 7256 112 113 CTAAAA SFCAAA AAAAxx +961 1501 1 1 1 1 61 961 961 961 961 122 123 ZKAAAA TFCAAA HHHHxx +4182 1502 0 2 2 2 82 182 182 4182 4182 164 165 WEAAAA UFCAAA OOOOxx +639 1503 1 3 9 19 39 639 639 639 639 78 79 PYAAAA VFCAAA VVVVxx +8836 1504 0 0 6 16 36 836 836 3836 8836 72 73 WBAAAA WFCAAA AAAAxx +8705 1505 1 1 5 5 5 705 705 3705 8705 10 11 VWAAAA XFCAAA HHHHxx +32 1506 0 0 2 12 32 32 32 32 32 64 65 GBAAAA YFCAAA OOOOxx +7913 1507 1 1 3 13 13 913 1913 2913 7913 26 27 JSAAAA ZFCAAA VVVVxx +229 1508 1 1 9 9 29 229 229 229 229 58 59 VIAAAA AGCAAA AAAAxx +2393 1509 1 1 3 13 93 393 393 2393 2393 186 187 BOAAAA BGCAAA HHHHxx +2815 1510 1 3 5 15 15 815 815 2815 2815 30 31 HEAAAA CGCAAA OOOOxx +4858 1511 0 2 8 18 58 858 858 4858 4858 116 117 WEAAAA DGCAAA VVVVxx +6283 1512 1 3 3 3 83 283 283 1283 6283 166 167 RHAAAA EGCAAA AAAAxx +4147 1513 1 3 7 7 47 147 147 4147 4147 94 95 NDAAAA FGCAAA HHHHxx +6801 1514 1 1 1 1 1 801 801 1801 6801 2 3 PBAAAA GGCAAA OOOOxx 
+1011 1515 1 3 1 11 11 11 1011 1011 1011 22 23 XMAAAA HGCAAA VVVVxx +2527 1516 1 3 7 7 27 527 527 2527 2527 54 55 FTAAAA IGCAAA AAAAxx +381 1517 1 1 1 1 81 381 381 381 381 162 163 ROAAAA JGCAAA HHHHxx +3366 1518 0 2 6 6 66 366 1366 3366 3366 132 133 MZAAAA KGCAAA OOOOxx +9636 1519 0 0 6 16 36 636 1636 4636 9636 72 73 QGAAAA LGCAAA VVVVxx +2239 1520 1 3 9 19 39 239 239 2239 2239 78 79 DIAAAA MGCAAA AAAAxx +5911 1521 1 3 1 11 11 911 1911 911 5911 22 23 JTAAAA NGCAAA HHHHxx +449 1522 1 1 9 9 49 449 449 449 449 98 99 HRAAAA OGCAAA OOOOxx +5118 1523 0 2 8 18 18 118 1118 118 5118 36 37 WOAAAA PGCAAA VVVVxx +7684 1524 0 0 4 4 84 684 1684 2684 7684 168 169 OJAAAA QGCAAA AAAAxx +804 1525 0 0 4 4 4 804 804 804 804 8 9 YEAAAA RGCAAA HHHHxx +8378 1526 0 2 8 18 78 378 378 3378 8378 156 157 GKAAAA SGCAAA OOOOxx +9855 1527 1 3 5 15 55 855 1855 4855 9855 110 111 BPAAAA TGCAAA VVVVxx +1995 1528 1 3 5 15 95 995 1995 1995 1995 190 191 TYAAAA UGCAAA AAAAxx +1979 1529 1 3 9 19 79 979 1979 1979 1979 158 159 DYAAAA VGCAAA HHHHxx +4510 1530 0 2 0 10 10 510 510 4510 4510 20 21 MRAAAA WGCAAA OOOOxx +3792 1531 0 0 2 12 92 792 1792 3792 3792 184 185 WPAAAA XGCAAA VVVVxx +3541 1532 1 1 1 1 41 541 1541 3541 3541 82 83 FGAAAA YGCAAA AAAAxx +8847 1533 1 3 7 7 47 847 847 3847 8847 94 95 HCAAAA ZGCAAA HHHHxx +1336 1534 0 0 6 16 36 336 1336 1336 1336 72 73 KZAAAA AHCAAA OOOOxx +6780 1535 0 0 0 0 80 780 780 1780 6780 160 161 UAAAAA BHCAAA VVVVxx +8711 1536 1 3 1 11 11 711 711 3711 8711 22 23 BXAAAA CHCAAA AAAAxx +7839 1537 1 3 9 19 39 839 1839 2839 7839 78 79 NPAAAA DHCAAA HHHHxx +677 1538 1 1 7 17 77 677 677 677 677 154 155 BAAAAA EHCAAA OOOOxx +1574 1539 0 2 4 14 74 574 1574 1574 1574 148 149 OIAAAA FHCAAA VVVVxx +2905 1540 1 1 5 5 5 905 905 2905 2905 10 11 THAAAA GHCAAA AAAAxx +1879 1541 1 3 9 19 79 879 1879 1879 1879 158 159 HUAAAA HHCAAA HHHHxx +7820 1542 0 0 0 0 20 820 1820 2820 7820 40 41 UOAAAA IHCAAA OOOOxx +4308 1543 0 0 8 8 8 308 308 4308 4308 16 17 SJAAAA JHCAAA VVVVxx +4474 1544 0 2 4 14 74 474 474 4474 4474 148 149 CQAAAA KHCAAA AAAAxx +6985 1545 1 1 5 5 85 985 985 1985 6985 170 171 RIAAAA LHCAAA HHHHxx +6929 1546 1 1 9 9 29 929 929 1929 6929 58 59 NGAAAA MHCAAA OOOOxx +777 1547 1 1 7 17 77 777 777 777 777 154 155 XDAAAA NHCAAA VVVVxx +8271 1548 1 3 1 11 71 271 271 3271 8271 142 143 DGAAAA OHCAAA AAAAxx +2389 1549 1 1 9 9 89 389 389 2389 2389 178 179 XNAAAA PHCAAA HHHHxx +946 1550 0 2 6 6 46 946 946 946 946 92 93 KKAAAA QHCAAA OOOOxx +9682 1551 0 2 2 2 82 682 1682 4682 9682 164 165 KIAAAA RHCAAA VVVVxx +8722 1552 0 2 2 2 22 722 722 3722 8722 44 45 MXAAAA SHCAAA AAAAxx +470 1553 0 2 0 10 70 470 470 470 470 140 141 CSAAAA THCAAA HHHHxx +7425 1554 1 1 5 5 25 425 1425 2425 7425 50 51 PZAAAA UHCAAA OOOOxx +2372 1555 0 0 2 12 72 372 372 2372 2372 144 145 GNAAAA VHCAAA VVVVxx +508 1556 0 0 8 8 8 508 508 508 508 16 17 OTAAAA WHCAAA AAAAxx +163 1557 1 3 3 3 63 163 163 163 163 126 127 HGAAAA XHCAAA HHHHxx +6579 1558 1 3 9 19 79 579 579 1579 6579 158 159 BTAAAA YHCAAA OOOOxx +2355 1559 1 3 5 15 55 355 355 2355 2355 110 111 PMAAAA ZHCAAA VVVVxx +70 1560 0 2 0 10 70 70 70 70 70 140 141 SCAAAA AICAAA AAAAxx +651 1561 1 3 1 11 51 651 651 651 651 102 103 BZAAAA BICAAA HHHHxx +4436 1562 0 0 6 16 36 436 436 4436 4436 72 73 QOAAAA CICAAA OOOOxx +4240 1563 0 0 0 0 40 240 240 4240 4240 80 81 CHAAAA DICAAA VVVVxx +2722 1564 0 2 2 2 22 722 722 2722 2722 44 45 SAAAAA EICAAA AAAAxx +8937 1565 1 1 7 17 37 937 937 3937 8937 74 75 TFAAAA FICAAA HHHHxx +8364 1566 0 0 4 4 64 364 364 3364 8364 128 129 SJAAAA GICAAA OOOOxx +8317 1567 1 1 7 17 
17 317 317 3317 8317 34 35 XHAAAA HICAAA VVVVxx +8872 1568 0 0 2 12 72 872 872 3872 8872 144 145 GDAAAA IICAAA AAAAxx +5512 1569 0 0 2 12 12 512 1512 512 5512 24 25 AEAAAA JICAAA HHHHxx +6651 1570 1 3 1 11 51 651 651 1651 6651 102 103 VVAAAA KICAAA OOOOxx +5976 1571 0 0 6 16 76 976 1976 976 5976 152 153 WVAAAA LICAAA VVVVxx +3301 1572 1 1 1 1 1 301 1301 3301 3301 2 3 ZWAAAA MICAAA AAAAxx +6784 1573 0 0 4 4 84 784 784 1784 6784 168 169 YAAAAA NICAAA HHHHxx +573 1574 1 1 3 13 73 573 573 573 573 146 147 BWAAAA OICAAA OOOOxx +3015 1575 1 3 5 15 15 15 1015 3015 3015 30 31 ZLAAAA PICAAA VVVVxx +8245 1576 1 1 5 5 45 245 245 3245 8245 90 91 DFAAAA QICAAA AAAAxx +5251 1577 1 3 1 11 51 251 1251 251 5251 102 103 ZTAAAA RICAAA HHHHxx +2281 1578 1 1 1 1 81 281 281 2281 2281 162 163 TJAAAA SICAAA OOOOxx +518 1579 0 2 8 18 18 518 518 518 518 36 37 YTAAAA TICAAA VVVVxx +9839 1580 1 3 9 19 39 839 1839 4839 9839 78 79 LOAAAA UICAAA AAAAxx +4526 1581 0 2 6 6 26 526 526 4526 4526 52 53 CSAAAA VICAAA HHHHxx +1261 1582 1 1 1 1 61 261 1261 1261 1261 122 123 NWAAAA WICAAA OOOOxx +4259 1583 1 3 9 19 59 259 259 4259 4259 118 119 VHAAAA XICAAA VVVVxx +9098 1584 0 2 8 18 98 98 1098 4098 9098 196 197 YLAAAA YICAAA AAAAxx +6037 1585 1 1 7 17 37 37 37 1037 6037 74 75 FYAAAA ZICAAA HHHHxx +4284 1586 0 0 4 4 84 284 284 4284 4284 168 169 UIAAAA AJCAAA OOOOxx +3267 1587 1 3 7 7 67 267 1267 3267 3267 134 135 RVAAAA BJCAAA VVVVxx +5908 1588 0 0 8 8 8 908 1908 908 5908 16 17 GTAAAA CJCAAA AAAAxx +1549 1589 1 1 9 9 49 549 1549 1549 1549 98 99 PHAAAA DJCAAA HHHHxx +8736 1590 0 0 6 16 36 736 736 3736 8736 72 73 AYAAAA EJCAAA OOOOxx +2008 1591 0 0 8 8 8 8 8 2008 2008 16 17 GZAAAA FJCAAA VVVVxx +548 1592 0 0 8 8 48 548 548 548 548 96 97 CVAAAA GJCAAA AAAAxx +8846 1593 0 2 6 6 46 846 846 3846 8846 92 93 GCAAAA HJCAAA HHHHxx +8374 1594 0 2 4 14 74 374 374 3374 8374 148 149 CKAAAA IJCAAA OOOOxx +7986 1595 0 2 6 6 86 986 1986 2986 7986 172 173 EVAAAA JJCAAA VVVVxx +6819 1596 1 3 9 19 19 819 819 1819 6819 38 39 HCAAAA KJCAAA AAAAxx +4418 1597 0 2 8 18 18 418 418 4418 4418 36 37 YNAAAA LJCAAA HHHHxx +833 1598 1 1 3 13 33 833 833 833 833 66 67 BGAAAA MJCAAA OOOOxx +4416 1599 0 0 6 16 16 416 416 4416 4416 32 33 WNAAAA NJCAAA VVVVxx +4902 1600 0 2 2 2 2 902 902 4902 4902 4 5 OGAAAA OJCAAA AAAAxx +6828 1601 0 0 8 8 28 828 828 1828 6828 56 57 QCAAAA PJCAAA HHHHxx +1118 1602 0 2 8 18 18 118 1118 1118 1118 36 37 ARAAAA QJCAAA OOOOxx +9993 1603 1 1 3 13 93 993 1993 4993 9993 186 187 JUAAAA RJCAAA VVVVxx +1430 1604 0 2 0 10 30 430 1430 1430 1430 60 61 ADAAAA SJCAAA AAAAxx +5670 1605 0 2 0 10 70 670 1670 670 5670 140 141 CKAAAA TJCAAA HHHHxx +5424 1606 0 0 4 4 24 424 1424 424 5424 48 49 QAAAAA UJCAAA OOOOxx +5561 1607 1 1 1 1 61 561 1561 561 5561 122 123 XFAAAA VJCAAA VVVVxx +2027 1608 1 3 7 7 27 27 27 2027 2027 54 55 ZZAAAA WJCAAA AAAAxx +6924 1609 0 0 4 4 24 924 924 1924 6924 48 49 IGAAAA XJCAAA HHHHxx +5946 1610 0 2 6 6 46 946 1946 946 5946 92 93 SUAAAA YJCAAA OOOOxx +4294 1611 0 2 4 14 94 294 294 4294 4294 188 189 EJAAAA ZJCAAA VVVVxx +2936 1612 0 0 6 16 36 936 936 2936 2936 72 73 YIAAAA AKCAAA AAAAxx +3855 1613 1 3 5 15 55 855 1855 3855 3855 110 111 HSAAAA BKCAAA HHHHxx +455 1614 1 3 5 15 55 455 455 455 455 110 111 NRAAAA CKCAAA OOOOxx +2918 1615 0 2 8 18 18 918 918 2918 2918 36 37 GIAAAA DKCAAA VVVVxx +448 1616 0 0 8 8 48 448 448 448 448 96 97 GRAAAA EKCAAA AAAAxx +2149 1617 1 1 9 9 49 149 149 2149 2149 98 99 REAAAA FKCAAA HHHHxx +8890 1618 0 2 0 10 90 890 890 3890 8890 180 181 YDAAAA GKCAAA OOOOxx +8919 1619 1 3 9 19 19 919 919 3919 8919 38 
39 BFAAAA HKCAAA VVVVxx +4957 1620 1 1 7 17 57 957 957 4957 4957 114 115 RIAAAA IKCAAA AAAAxx +4 1621 0 0 4 4 4 4 4 4 4 8 9 EAAAAA JKCAAA HHHHxx +4837 1622 1 1 7 17 37 837 837 4837 4837 74 75 BEAAAA KKCAAA OOOOxx +3976 1623 0 0 6 16 76 976 1976 3976 3976 152 153 YWAAAA LKCAAA VVVVxx +9459 1624 1 3 9 19 59 459 1459 4459 9459 118 119 VZAAAA MKCAAA AAAAxx +7097 1625 1 1 7 17 97 97 1097 2097 7097 194 195 ZMAAAA NKCAAA HHHHxx +9226 1626 0 2 6 6 26 226 1226 4226 9226 52 53 WQAAAA OKCAAA OOOOxx +5803 1627 1 3 3 3 3 803 1803 803 5803 6 7 FPAAAA PKCAAA VVVVxx +21 1628 1 1 1 1 21 21 21 21 21 42 43 VAAAAA QKCAAA AAAAxx +5275 1629 1 3 5 15 75 275 1275 275 5275 150 151 XUAAAA RKCAAA HHHHxx +3488 1630 0 0 8 8 88 488 1488 3488 3488 176 177 EEAAAA SKCAAA OOOOxx +1595 1631 1 3 5 15 95 595 1595 1595 1595 190 191 JJAAAA TKCAAA VVVVxx +5212 1632 0 0 2 12 12 212 1212 212 5212 24 25 MSAAAA UKCAAA AAAAxx +6574 1633 0 2 4 14 74 574 574 1574 6574 148 149 WSAAAA VKCAAA HHHHxx +7524 1634 0 0 4 4 24 524 1524 2524 7524 48 49 KDAAAA WKCAAA OOOOxx +6100 1635 0 0 0 0 0 100 100 1100 6100 0 1 QAAAAA XKCAAA VVVVxx +1198 1636 0 2 8 18 98 198 1198 1198 1198 196 197 CUAAAA YKCAAA AAAAxx +7345 1637 1 1 5 5 45 345 1345 2345 7345 90 91 NWAAAA ZKCAAA HHHHxx +5020 1638 0 0 0 0 20 20 1020 20 5020 40 41 CLAAAA ALCAAA OOOOxx +6925 1639 1 1 5 5 25 925 925 1925 6925 50 51 JGAAAA BLCAAA VVVVxx +8915 1640 1 3 5 15 15 915 915 3915 8915 30 31 XEAAAA CLCAAA AAAAxx +3088 1641 0 0 8 8 88 88 1088 3088 3088 176 177 UOAAAA DLCAAA HHHHxx +4828 1642 0 0 8 8 28 828 828 4828 4828 56 57 SDAAAA ELCAAA OOOOxx +7276 1643 0 0 6 16 76 276 1276 2276 7276 152 153 WTAAAA FLCAAA VVVVxx +299 1644 1 3 9 19 99 299 299 299 299 198 199 NLAAAA GLCAAA AAAAxx +76 1645 0 0 6 16 76 76 76 76 76 152 153 YCAAAA HLCAAA HHHHxx +8458 1646 0 2 8 18 58 458 458 3458 8458 116 117 INAAAA ILCAAA OOOOxx +7207 1647 1 3 7 7 7 207 1207 2207 7207 14 15 FRAAAA JLCAAA VVVVxx +5585 1648 1 1 5 5 85 585 1585 585 5585 170 171 VGAAAA KLCAAA AAAAxx +3234 1649 0 2 4 14 34 234 1234 3234 3234 68 69 KUAAAA LLCAAA HHHHxx +8001 1650 1 1 1 1 1 1 1 3001 8001 2 3 TVAAAA MLCAAA OOOOxx +1319 1651 1 3 9 19 19 319 1319 1319 1319 38 39 TYAAAA NLCAAA VVVVxx +6342 1652 0 2 2 2 42 342 342 1342 6342 84 85 YJAAAA OLCAAA AAAAxx +9199 1653 1 3 9 19 99 199 1199 4199 9199 198 199 VPAAAA PLCAAA HHHHxx +5696 1654 0 0 6 16 96 696 1696 696 5696 192 193 CLAAAA QLCAAA OOOOxx +2562 1655 0 2 2 2 62 562 562 2562 2562 124 125 OUAAAA RLCAAA VVVVxx +4226 1656 0 2 6 6 26 226 226 4226 4226 52 53 OGAAAA SLCAAA AAAAxx +1184 1657 0 0 4 4 84 184 1184 1184 1184 168 169 OTAAAA TLCAAA HHHHxx +5807 1658 1 3 7 7 7 807 1807 807 5807 14 15 JPAAAA ULCAAA OOOOxx +1890 1659 0 2 0 10 90 890 1890 1890 1890 180 181 SUAAAA VLCAAA VVVVxx +451 1660 1 3 1 11 51 451 451 451 451 102 103 JRAAAA WLCAAA AAAAxx +1049 1661 1 1 9 9 49 49 1049 1049 1049 98 99 JOAAAA XLCAAA HHHHxx +5272 1662 0 0 2 12 72 272 1272 272 5272 144 145 UUAAAA YLCAAA OOOOxx +4588 1663 0 0 8 8 88 588 588 4588 4588 176 177 MUAAAA ZLCAAA VVVVxx +5213 1664 1 1 3 13 13 213 1213 213 5213 26 27 NSAAAA AMCAAA AAAAxx +9543 1665 1 3 3 3 43 543 1543 4543 9543 86 87 BDAAAA BMCAAA HHHHxx +6318 1666 0 2 8 18 18 318 318 1318 6318 36 37 AJAAAA CMCAAA OOOOxx +7992 1667 0 0 2 12 92 992 1992 2992 7992 184 185 KVAAAA DMCAAA VVVVxx +4619 1668 1 3 9 19 19 619 619 4619 4619 38 39 RVAAAA EMCAAA AAAAxx +7189 1669 1 1 9 9 89 189 1189 2189 7189 178 179 NQAAAA FMCAAA HHHHxx +2178 1670 0 2 8 18 78 178 178 2178 2178 156 157 UFAAAA GMCAAA OOOOxx +4928 1671 0 0 8 8 28 928 928 4928 4928 56 57 OHAAAA HMCAAA VVVVxx 
+3966 1672 0 2 6 6 66 966 1966 3966 3966 132 133 OWAAAA IMCAAA AAAAxx +9790 1673 0 2 0 10 90 790 1790 4790 9790 180 181 OMAAAA JMCAAA HHHHxx +9150 1674 0 2 0 10 50 150 1150 4150 9150 100 101 YNAAAA KMCAAA OOOOxx +313 1675 1 1 3 13 13 313 313 313 313 26 27 BMAAAA LMCAAA VVVVxx +1614 1676 0 2 4 14 14 614 1614 1614 1614 28 29 CKAAAA MMCAAA AAAAxx +1581 1677 1 1 1 1 81 581 1581 1581 1581 162 163 VIAAAA NMCAAA HHHHxx +3674 1678 0 2 4 14 74 674 1674 3674 3674 148 149 ILAAAA OMCAAA OOOOxx +3444 1679 0 0 4 4 44 444 1444 3444 3444 88 89 MCAAAA PMCAAA VVVVxx +1050 1680 0 2 0 10 50 50 1050 1050 1050 100 101 KOAAAA QMCAAA AAAAxx +8241 1681 1 1 1 1 41 241 241 3241 8241 82 83 ZEAAAA RMCAAA HHHHxx +3382 1682 0 2 2 2 82 382 1382 3382 3382 164 165 CAAAAA SMCAAA OOOOxx +7105 1683 1 1 5 5 5 105 1105 2105 7105 10 11 HNAAAA TMCAAA VVVVxx +2957 1684 1 1 7 17 57 957 957 2957 2957 114 115 TJAAAA UMCAAA AAAAxx +6162 1685 0 2 2 2 62 162 162 1162 6162 124 125 ADAAAA VMCAAA HHHHxx +5150 1686 0 2 0 10 50 150 1150 150 5150 100 101 CQAAAA WMCAAA OOOOxx +2622 1687 0 2 2 2 22 622 622 2622 2622 44 45 WWAAAA XMCAAA VVVVxx +2240 1688 0 0 0 0 40 240 240 2240 2240 80 81 EIAAAA YMCAAA AAAAxx +8880 1689 0 0 0 0 80 880 880 3880 8880 160 161 ODAAAA ZMCAAA HHHHxx +9250 1690 0 2 0 10 50 250 1250 4250 9250 100 101 URAAAA ANCAAA OOOOxx +7010 1691 0 2 0 10 10 10 1010 2010 7010 20 21 QJAAAA BNCAAA VVVVxx +1098 1692 0 2 8 18 98 98 1098 1098 1098 196 197 GQAAAA CNCAAA AAAAxx +648 1693 0 0 8 8 48 648 648 648 648 96 97 YYAAAA DNCAAA HHHHxx +5536 1694 0 0 6 16 36 536 1536 536 5536 72 73 YEAAAA ENCAAA OOOOxx +7858 1695 0 2 8 18 58 858 1858 2858 7858 116 117 GQAAAA FNCAAA VVVVxx +7053 1696 1 1 3 13 53 53 1053 2053 7053 106 107 HLAAAA GNCAAA AAAAxx +8681 1697 1 1 1 1 81 681 681 3681 8681 162 163 XVAAAA HNCAAA HHHHxx +8832 1698 0 0 2 12 32 832 832 3832 8832 64 65 SBAAAA INCAAA OOOOxx +6836 1699 0 0 6 16 36 836 836 1836 6836 72 73 YCAAAA JNCAAA VVVVxx +4856 1700 0 0 6 16 56 856 856 4856 4856 112 113 UEAAAA KNCAAA AAAAxx +345 1701 1 1 5 5 45 345 345 345 345 90 91 HNAAAA LNCAAA HHHHxx +6559 1702 1 3 9 19 59 559 559 1559 6559 118 119 HSAAAA MNCAAA OOOOxx +3017 1703 1 1 7 17 17 17 1017 3017 3017 34 35 BMAAAA NNCAAA VVVVxx +4176 1704 0 0 6 16 76 176 176 4176 4176 152 153 QEAAAA ONCAAA AAAAxx +2839 1705 1 3 9 19 39 839 839 2839 2839 78 79 FFAAAA PNCAAA HHHHxx +6065 1706 1 1 5 5 65 65 65 1065 6065 130 131 HZAAAA QNCAAA OOOOxx +7360 1707 0 0 0 0 60 360 1360 2360 7360 120 121 CXAAAA RNCAAA VVVVxx +9527 1708 1 3 7 7 27 527 1527 4527 9527 54 55 LCAAAA SNCAAA AAAAxx +8849 1709 1 1 9 9 49 849 849 3849 8849 98 99 JCAAAA TNCAAA HHHHxx +7274 1710 0 2 4 14 74 274 1274 2274 7274 148 149 UTAAAA UNCAAA OOOOxx +4368 1711 0 0 8 8 68 368 368 4368 4368 136 137 AMAAAA VNCAAA VVVVxx +2488 1712 0 0 8 8 88 488 488 2488 2488 176 177 SRAAAA WNCAAA AAAAxx +4674 1713 0 2 4 14 74 674 674 4674 4674 148 149 UXAAAA XNCAAA HHHHxx +365 1714 1 1 5 5 65 365 365 365 365 130 131 BOAAAA YNCAAA OOOOxx +5897 1715 1 1 7 17 97 897 1897 897 5897 194 195 VSAAAA ZNCAAA VVVVxx +8918 1716 0 2 8 18 18 918 918 3918 8918 36 37 AFAAAA AOCAAA AAAAxx +1988 1717 0 0 8 8 88 988 1988 1988 1988 176 177 MYAAAA BOCAAA HHHHxx +1210 1718 0 2 0 10 10 210 1210 1210 1210 20 21 OUAAAA COCAAA OOOOxx +2945 1719 1 1 5 5 45 945 945 2945 2945 90 91 HJAAAA DOCAAA VVVVxx +555 1720 1 3 5 15 55 555 555 555 555 110 111 JVAAAA EOCAAA AAAAxx +9615 1721 1 3 5 15 15 615 1615 4615 9615 30 31 VFAAAA FOCAAA HHHHxx +9939 1722 1 3 9 19 39 939 1939 4939 9939 78 79 HSAAAA GOCAAA OOOOxx +1216 1723 0 0 6 16 16 216 1216 1216 1216 32 33 
UUAAAA HOCAAA VVVVxx +745 1724 1 1 5 5 45 745 745 745 745 90 91 RCAAAA IOCAAA AAAAxx +3326 1725 0 2 6 6 26 326 1326 3326 3326 52 53 YXAAAA JOCAAA HHHHxx +953 1726 1 1 3 13 53 953 953 953 953 106 107 RKAAAA KOCAAA OOOOxx +444 1727 0 0 4 4 44 444 444 444 444 88 89 CRAAAA LOCAAA VVVVxx +280 1728 0 0 0 0 80 280 280 280 280 160 161 UKAAAA MOCAAA AAAAxx +3707 1729 1 3 7 7 7 707 1707 3707 3707 14 15 PMAAAA NOCAAA HHHHxx +1351 1730 1 3 1 11 51 351 1351 1351 1351 102 103 ZZAAAA OOCAAA OOOOxx +1280 1731 0 0 0 0 80 280 1280 1280 1280 160 161 GXAAAA POCAAA VVVVxx +628 1732 0 0 8 8 28 628 628 628 628 56 57 EYAAAA QOCAAA AAAAxx +6198 1733 0 2 8 18 98 198 198 1198 6198 196 197 KEAAAA ROCAAA HHHHxx +1957 1734 1 1 7 17 57 957 1957 1957 1957 114 115 HXAAAA SOCAAA OOOOxx +9241 1735 1 1 1 1 41 241 1241 4241 9241 82 83 LRAAAA TOCAAA VVVVxx +303 1736 1 3 3 3 3 303 303 303 303 6 7 RLAAAA UOCAAA AAAAxx +1945 1737 1 1 5 5 45 945 1945 1945 1945 90 91 VWAAAA VOCAAA HHHHxx +3634 1738 0 2 4 14 34 634 1634 3634 3634 68 69 UJAAAA WOCAAA OOOOxx +4768 1739 0 0 8 8 68 768 768 4768 4768 136 137 KBAAAA XOCAAA VVVVxx +9262 1740 0 2 2 2 62 262 1262 4262 9262 124 125 GSAAAA YOCAAA AAAAxx +2610 1741 0 2 0 10 10 610 610 2610 2610 20 21 KWAAAA ZOCAAA HHHHxx +6640 1742 0 0 0 0 40 640 640 1640 6640 80 81 KVAAAA APCAAA OOOOxx +3338 1743 0 2 8 18 38 338 1338 3338 3338 76 77 KYAAAA BPCAAA VVVVxx +6560 1744 0 0 0 0 60 560 560 1560 6560 120 121 ISAAAA CPCAAA AAAAxx +5986 1745 0 2 6 6 86 986 1986 986 5986 172 173 GWAAAA DPCAAA HHHHxx +2970 1746 0 2 0 10 70 970 970 2970 2970 140 141 GKAAAA EPCAAA OOOOxx +4731 1747 1 3 1 11 31 731 731 4731 4731 62 63 ZZAAAA FPCAAA VVVVxx +9486 1748 0 2 6 6 86 486 1486 4486 9486 172 173 WAAAAA GPCAAA AAAAxx +7204 1749 0 0 4 4 4 204 1204 2204 7204 8 9 CRAAAA HPCAAA HHHHxx +6685 1750 1 1 5 5 85 685 685 1685 6685 170 171 DXAAAA IPCAAA OOOOxx +6852 1751 0 0 2 12 52 852 852 1852 6852 104 105 ODAAAA JPCAAA VVVVxx +2325 1752 1 1 5 5 25 325 325 2325 2325 50 51 LLAAAA KPCAAA AAAAxx +1063 1753 1 3 3 3 63 63 1063 1063 1063 126 127 XOAAAA LPCAAA HHHHxx +6810 1754 0 2 0 10 10 810 810 1810 6810 20 21 YBAAAA MPCAAA OOOOxx +7718 1755 0 2 8 18 18 718 1718 2718 7718 36 37 WKAAAA NPCAAA VVVVxx +1680 1756 0 0 0 0 80 680 1680 1680 1680 160 161 QMAAAA OPCAAA AAAAxx +7402 1757 0 2 2 2 2 402 1402 2402 7402 4 5 SYAAAA PPCAAA HHHHxx +4134 1758 0 2 4 14 34 134 134 4134 4134 68 69 ADAAAA QPCAAA OOOOxx +8232 1759 0 0 2 12 32 232 232 3232 8232 64 65 QEAAAA RPCAAA VVVVxx +6682 1760 0 2 2 2 82 682 682 1682 6682 164 165 AXAAAA SPCAAA AAAAxx +7952 1761 0 0 2 12 52 952 1952 2952 7952 104 105 WTAAAA TPCAAA HHHHxx +5943 1762 1 3 3 3 43 943 1943 943 5943 86 87 PUAAAA UPCAAA OOOOxx +5394 1763 0 2 4 14 94 394 1394 394 5394 188 189 MZAAAA VPCAAA VVVVxx +6554 1764 0 2 4 14 54 554 554 1554 6554 108 109 CSAAAA WPCAAA AAAAxx +8186 1765 0 2 6 6 86 186 186 3186 8186 172 173 WCAAAA XPCAAA HHHHxx +199 1766 1 3 9 19 99 199 199 199 199 198 199 RHAAAA YPCAAA OOOOxx +3386 1767 0 2 6 6 86 386 1386 3386 3386 172 173 GAAAAA ZPCAAA VVVVxx +8974 1768 0 2 4 14 74 974 974 3974 8974 148 149 EHAAAA AQCAAA AAAAxx +8140 1769 0 0 0 0 40 140 140 3140 8140 80 81 CBAAAA BQCAAA HHHHxx +3723 1770 1 3 3 3 23 723 1723 3723 3723 46 47 FNAAAA CQCAAA OOOOxx +8827 1771 1 3 7 7 27 827 827 3827 8827 54 55 NBAAAA DQCAAA VVVVxx +1998 1772 0 2 8 18 98 998 1998 1998 1998 196 197 WYAAAA EQCAAA AAAAxx +879 1773 1 3 9 19 79 879 879 879 879 158 159 VHAAAA FQCAAA HHHHxx +892 1774 0 0 2 12 92 892 892 892 892 184 185 IIAAAA GQCAAA OOOOxx +9468 1775 0 0 8 8 68 468 1468 4468 9468 136 137 EAAAAA 
HQCAAA VVVVxx +3797 1776 1 1 7 17 97 797 1797 3797 3797 194 195 BQAAAA IQCAAA AAAAxx +8379 1777 1 3 9 19 79 379 379 3379 8379 158 159 HKAAAA JQCAAA HHHHxx +2817 1778 1 1 7 17 17 817 817 2817 2817 34 35 JEAAAA KQCAAA OOOOxx +789 1779 1 1 9 9 89 789 789 789 789 178 179 JEAAAA LQCAAA VVVVxx +3871 1780 1 3 1 11 71 871 1871 3871 3871 142 143 XSAAAA MQCAAA AAAAxx +7931 1781 1 3 1 11 31 931 1931 2931 7931 62 63 BTAAAA NQCAAA HHHHxx +3636 1782 0 0 6 16 36 636 1636 3636 3636 72 73 WJAAAA OQCAAA OOOOxx +699 1783 1 3 9 19 99 699 699 699 699 198 199 XAAAAA PQCAAA VVVVxx +6850 1784 0 2 0 10 50 850 850 1850 6850 100 101 MDAAAA QQCAAA AAAAxx +6394 1785 0 2 4 14 94 394 394 1394 6394 188 189 YLAAAA RQCAAA HHHHxx +3475 1786 1 3 5 15 75 475 1475 3475 3475 150 151 RDAAAA SQCAAA OOOOxx +3026 1787 0 2 6 6 26 26 1026 3026 3026 52 53 KMAAAA TQCAAA VVVVxx +876 1788 0 0 6 16 76 876 876 876 876 152 153 SHAAAA UQCAAA AAAAxx +1992 1789 0 0 2 12 92 992 1992 1992 1992 184 185 QYAAAA VQCAAA HHHHxx +3079 1790 1 3 9 19 79 79 1079 3079 3079 158 159 LOAAAA WQCAAA OOOOxx +8128 1791 0 0 8 8 28 128 128 3128 8128 56 57 QAAAAA XQCAAA VVVVxx +8123 1792 1 3 3 3 23 123 123 3123 8123 46 47 LAAAAA YQCAAA AAAAxx +3285 1793 1 1 5 5 85 285 1285 3285 3285 170 171 JWAAAA ZQCAAA HHHHxx +9315 1794 1 3 5 15 15 315 1315 4315 9315 30 31 HUAAAA ARCAAA OOOOxx +9862 1795 0 2 2 2 62 862 1862 4862 9862 124 125 IPAAAA BRCAAA VVVVxx +2764 1796 0 0 4 4 64 764 764 2764 2764 128 129 ICAAAA CRCAAA AAAAxx +3544 1797 0 0 4 4 44 544 1544 3544 3544 88 89 IGAAAA DRCAAA HHHHxx +7747 1798 1 3 7 7 47 747 1747 2747 7747 94 95 ZLAAAA ERCAAA OOOOxx +7725 1799 1 1 5 5 25 725 1725 2725 7725 50 51 DLAAAA FRCAAA VVVVxx +2449 1800 1 1 9 9 49 449 449 2449 2449 98 99 FQAAAA GRCAAA AAAAxx +8967 1801 1 3 7 7 67 967 967 3967 8967 134 135 XGAAAA HRCAAA HHHHxx +7371 1802 1 3 1 11 71 371 1371 2371 7371 142 143 NXAAAA IRCAAA OOOOxx +2158 1803 0 2 8 18 58 158 158 2158 2158 116 117 AFAAAA JRCAAA VVVVxx +5590 1804 0 2 0 10 90 590 1590 590 5590 180 181 AHAAAA KRCAAA AAAAxx +8072 1805 0 0 2 12 72 72 72 3072 8072 144 145 MYAAAA LRCAAA HHHHxx +1971 1806 1 3 1 11 71 971 1971 1971 1971 142 143 VXAAAA MRCAAA OOOOxx +772 1807 0 0 2 12 72 772 772 772 772 144 145 SDAAAA NRCAAA VVVVxx +3433 1808 1 1 3 13 33 433 1433 3433 3433 66 67 BCAAAA ORCAAA AAAAxx +8419 1809 1 3 9 19 19 419 419 3419 8419 38 39 VLAAAA PRCAAA HHHHxx +1493 1810 1 1 3 13 93 493 1493 1493 1493 186 187 LFAAAA QRCAAA OOOOxx +2584 1811 0 0 4 4 84 584 584 2584 2584 168 169 KVAAAA RRCAAA VVVVxx +9502 1812 0 2 2 2 2 502 1502 4502 9502 4 5 MBAAAA SRCAAA AAAAxx +4673 1813 1 1 3 13 73 673 673 4673 4673 146 147 TXAAAA TRCAAA HHHHxx +7403 1814 1 3 3 3 3 403 1403 2403 7403 6 7 TYAAAA URCAAA OOOOxx +7103 1815 1 3 3 3 3 103 1103 2103 7103 6 7 FNAAAA VRCAAA VVVVxx +7026 1816 0 2 6 6 26 26 1026 2026 7026 52 53 GKAAAA WRCAAA AAAAxx +8574 1817 0 2 4 14 74 574 574 3574 8574 148 149 URAAAA XRCAAA HHHHxx +1366 1818 0 2 6 6 66 366 1366 1366 1366 132 133 OAAAAA YRCAAA OOOOxx +5787 1819 1 3 7 7 87 787 1787 787 5787 174 175 POAAAA ZRCAAA VVVVxx +2552 1820 0 0 2 12 52 552 552 2552 2552 104 105 EUAAAA ASCAAA AAAAxx +4557 1821 1 1 7 17 57 557 557 4557 4557 114 115 HTAAAA BSCAAA HHHHxx +3237 1822 1 1 7 17 37 237 1237 3237 3237 74 75 NUAAAA CSCAAA OOOOxx +6901 1823 1 1 1 1 1 901 901 1901 6901 2 3 LFAAAA DSCAAA VVVVxx +7708 1824 0 0 8 8 8 708 1708 2708 7708 16 17 MKAAAA ESCAAA AAAAxx +2011 1825 1 3 1 11 11 11 11 2011 2011 22 23 JZAAAA FSCAAA HHHHxx +9455 1826 1 3 5 15 55 455 1455 4455 9455 110 111 RZAAAA GSCAAA OOOOxx +5228 1827 0 0 8 8 28 228 1228 228 5228 
56 57 CTAAAA HSCAAA VVVVxx +4043 1828 1 3 3 3 43 43 43 4043 4043 86 87 NZAAAA ISCAAA AAAAxx +8242 1829 0 2 2 2 42 242 242 3242 8242 84 85 AFAAAA JSCAAA HHHHxx +6351 1830 1 3 1 11 51 351 351 1351 6351 102 103 HKAAAA KSCAAA OOOOxx +5899 1831 1 3 9 19 99 899 1899 899 5899 198 199 XSAAAA LSCAAA VVVVxx +4849 1832 1 1 9 9 49 849 849 4849 4849 98 99 NEAAAA MSCAAA AAAAxx +9583 1833 1 3 3 3 83 583 1583 4583 9583 166 167 PEAAAA NSCAAA HHHHxx +4994 1834 0 2 4 14 94 994 994 4994 4994 188 189 CKAAAA OSCAAA OOOOxx +9787 1835 1 3 7 7 87 787 1787 4787 9787 174 175 LMAAAA PSCAAA VVVVxx +243 1836 1 3 3 3 43 243 243 243 243 86 87 JJAAAA QSCAAA AAAAxx +3931 1837 1 3 1 11 31 931 1931 3931 3931 62 63 FVAAAA RSCAAA HHHHxx +5945 1838 1 1 5 5 45 945 1945 945 5945 90 91 RUAAAA SSCAAA OOOOxx +1325 1839 1 1 5 5 25 325 1325 1325 1325 50 51 ZYAAAA TSCAAA VVVVxx +4142 1840 0 2 2 2 42 142 142 4142 4142 84 85 IDAAAA USCAAA AAAAxx +1963 1841 1 3 3 3 63 963 1963 1963 1963 126 127 NXAAAA VSCAAA HHHHxx +7041 1842 1 1 1 1 41 41 1041 2041 7041 82 83 VKAAAA WSCAAA OOOOxx +3074 1843 0 2 4 14 74 74 1074 3074 3074 148 149 GOAAAA XSCAAA VVVVxx +3290 1844 0 2 0 10 90 290 1290 3290 3290 180 181 OWAAAA YSCAAA AAAAxx +4146 1845 0 2 6 6 46 146 146 4146 4146 92 93 MDAAAA ZSCAAA HHHHxx +3832 1846 0 0 2 12 32 832 1832 3832 3832 64 65 KRAAAA ATCAAA OOOOxx +2217 1847 1 1 7 17 17 217 217 2217 2217 34 35 HHAAAA BTCAAA VVVVxx +635 1848 1 3 5 15 35 635 635 635 635 70 71 LYAAAA CTCAAA AAAAxx +6967 1849 1 3 7 7 67 967 967 1967 6967 134 135 ZHAAAA DTCAAA HHHHxx +3522 1850 0 2 2 2 22 522 1522 3522 3522 44 45 MFAAAA ETCAAA OOOOxx +2471 1851 1 3 1 11 71 471 471 2471 2471 142 143 BRAAAA FTCAAA VVVVxx +4236 1852 0 0 6 16 36 236 236 4236 4236 72 73 YGAAAA GTCAAA AAAAxx +853 1853 1 1 3 13 53 853 853 853 853 106 107 VGAAAA HTCAAA HHHHxx +3754 1854 0 2 4 14 54 754 1754 3754 3754 108 109 KOAAAA ITCAAA OOOOxx +796 1855 0 0 6 16 96 796 796 796 796 192 193 QEAAAA JTCAAA VVVVxx +4640 1856 0 0 0 0 40 640 640 4640 4640 80 81 MWAAAA KTCAAA AAAAxx +9496 1857 0 0 6 16 96 496 1496 4496 9496 192 193 GBAAAA LTCAAA HHHHxx +6873 1858 1 1 3 13 73 873 873 1873 6873 146 147 JEAAAA MTCAAA OOOOxx +4632 1859 0 0 2 12 32 632 632 4632 4632 64 65 EWAAAA NTCAAA VVVVxx +5758 1860 0 2 8 18 58 758 1758 758 5758 116 117 MNAAAA OTCAAA AAAAxx +6514 1861 0 2 4 14 14 514 514 1514 6514 28 29 OQAAAA PTCAAA HHHHxx +9510 1862 0 2 0 10 10 510 1510 4510 9510 20 21 UBAAAA QTCAAA OOOOxx +8411 1863 1 3 1 11 11 411 411 3411 8411 22 23 NLAAAA RTCAAA VVVVxx +7762 1864 0 2 2 2 62 762 1762 2762 7762 124 125 OMAAAA STCAAA AAAAxx +2225 1865 1 1 5 5 25 225 225 2225 2225 50 51 PHAAAA TTCAAA HHHHxx +4373 1866 1 1 3 13 73 373 373 4373 4373 146 147 FMAAAA UTCAAA OOOOxx +7326 1867 0 2 6 6 26 326 1326 2326 7326 52 53 UVAAAA VTCAAA VVVVxx +8651 1868 1 3 1 11 51 651 651 3651 8651 102 103 TUAAAA WTCAAA AAAAxx +9825 1869 1 1 5 5 25 825 1825 4825 9825 50 51 XNAAAA XTCAAA HHHHxx +2988 1870 0 0 8 8 88 988 988 2988 2988 176 177 YKAAAA YTCAAA OOOOxx +8138 1871 0 2 8 18 38 138 138 3138 8138 76 77 ABAAAA ZTCAAA VVVVxx +7792 1872 0 0 2 12 92 792 1792 2792 7792 184 185 SNAAAA AUCAAA AAAAxx +1232 1873 0 0 2 12 32 232 1232 1232 1232 64 65 KVAAAA BUCAAA HHHHxx +8221 1874 1 1 1 1 21 221 221 3221 8221 42 43 FEAAAA CUCAAA OOOOxx +4044 1875 0 0 4 4 44 44 44 4044 4044 88 89 OZAAAA DUCAAA VVVVxx +1204 1876 0 0 4 4 4 204 1204 1204 1204 8 9 IUAAAA EUCAAA AAAAxx +5145 1877 1 1 5 5 45 145 1145 145 5145 90 91 XPAAAA FUCAAA HHHHxx +7791 1878 1 3 1 11 91 791 1791 2791 7791 182 183 RNAAAA GUCAAA OOOOxx +8270 1879 0 2 0 10 70 270 270 3270 8270 
140 141 CGAAAA HUCAAA VVVVxx +9427 1880 1 3 7 7 27 427 1427 4427 9427 54 55 PYAAAA IUCAAA AAAAxx +2152 1881 0 0 2 12 52 152 152 2152 2152 104 105 UEAAAA JUCAAA HHHHxx +7790 1882 0 2 0 10 90 790 1790 2790 7790 180 181 QNAAAA KUCAAA OOOOxx +5301 1883 1 1 1 1 1 301 1301 301 5301 2 3 XVAAAA LUCAAA VVVVxx +626 1884 0 2 6 6 26 626 626 626 626 52 53 CYAAAA MUCAAA AAAAxx +260 1885 0 0 0 0 60 260 260 260 260 120 121 AKAAAA NUCAAA HHHHxx +4369 1886 1 1 9 9 69 369 369 4369 4369 138 139 BMAAAA OUCAAA OOOOxx +5457 1887 1 1 7 17 57 457 1457 457 5457 114 115 XBAAAA PUCAAA VVVVxx +3468 1888 0 0 8 8 68 468 1468 3468 3468 136 137 KDAAAA QUCAAA AAAAxx +2257 1889 1 1 7 17 57 257 257 2257 2257 114 115 VIAAAA RUCAAA HHHHxx +9318 1890 0 2 8 18 18 318 1318 4318 9318 36 37 KUAAAA SUCAAA OOOOxx +8762 1891 0 2 2 2 62 762 762 3762 8762 124 125 AZAAAA TUCAAA VVVVxx +9153 1892 1 1 3 13 53 153 1153 4153 9153 106 107 BOAAAA UUCAAA AAAAxx +9220 1893 0 0 0 0 20 220 1220 4220 9220 40 41 QQAAAA VUCAAA HHHHxx +8003 1894 1 3 3 3 3 3 3 3003 8003 6 7 VVAAAA WUCAAA OOOOxx +7257 1895 1 1 7 17 57 257 1257 2257 7257 114 115 DTAAAA XUCAAA VVVVxx +3930 1896 0 2 0 10 30 930 1930 3930 3930 60 61 EVAAAA YUCAAA AAAAxx +2976 1897 0 0 6 16 76 976 976 2976 2976 152 153 MKAAAA ZUCAAA HHHHxx +2531 1898 1 3 1 11 31 531 531 2531 2531 62 63 JTAAAA AVCAAA OOOOxx +2250 1899 0 2 0 10 50 250 250 2250 2250 100 101 OIAAAA BVCAAA VVVVxx +8549 1900 1 1 9 9 49 549 549 3549 8549 98 99 VQAAAA CVCAAA AAAAxx +7197 1901 1 1 7 17 97 197 1197 2197 7197 194 195 VQAAAA DVCAAA HHHHxx +5916 1902 0 0 6 16 16 916 1916 916 5916 32 33 OTAAAA EVCAAA OOOOxx +5287 1903 1 3 7 7 87 287 1287 287 5287 174 175 JVAAAA FVCAAA VVVVxx +9095 1904 1 3 5 15 95 95 1095 4095 9095 190 191 VLAAAA GVCAAA AAAAxx +7137 1905 1 1 7 17 37 137 1137 2137 7137 74 75 NOAAAA HVCAAA HHHHxx +7902 1906 0 2 2 2 2 902 1902 2902 7902 4 5 YRAAAA IVCAAA OOOOxx +7598 1907 0 2 8 18 98 598 1598 2598 7598 196 197 GGAAAA JVCAAA VVVVxx +5652 1908 0 0 2 12 52 652 1652 652 5652 104 105 KJAAAA KVCAAA AAAAxx +2017 1909 1 1 7 17 17 17 17 2017 2017 34 35 PZAAAA LVCAAA HHHHxx +7255 1910 1 3 5 15 55 255 1255 2255 7255 110 111 BTAAAA MVCAAA OOOOxx +7999 1911 1 3 9 19 99 999 1999 2999 7999 198 199 RVAAAA NVCAAA VVVVxx +5388 1912 0 0 8 8 88 388 1388 388 5388 176 177 GZAAAA OVCAAA AAAAxx +8754 1913 0 2 4 14 54 754 754 3754 8754 108 109 SYAAAA PVCAAA HHHHxx +5415 1914 1 3 5 15 15 415 1415 415 5415 30 31 HAAAAA QVCAAA OOOOxx +8861 1915 1 1 1 1 61 861 861 3861 8861 122 123 VCAAAA RVCAAA VVVVxx +2874 1916 0 2 4 14 74 874 874 2874 2874 148 149 OGAAAA SVCAAA AAAAxx +9910 1917 0 2 0 10 10 910 1910 4910 9910 20 21 ERAAAA TVCAAA HHHHxx +5178 1918 0 2 8 18 78 178 1178 178 5178 156 157 ERAAAA UVCAAA OOOOxx +5698 1919 0 2 8 18 98 698 1698 698 5698 196 197 ELAAAA VVCAAA VVVVxx +8500 1920 0 0 0 0 0 500 500 3500 8500 0 1 YOAAAA WVCAAA AAAAxx +1814 1921 0 2 4 14 14 814 1814 1814 1814 28 29 URAAAA XVCAAA HHHHxx +4968 1922 0 0 8 8 68 968 968 4968 4968 136 137 CJAAAA YVCAAA OOOOxx +2642 1923 0 2 2 2 42 642 642 2642 2642 84 85 QXAAAA ZVCAAA VVVVxx +1578 1924 0 2 8 18 78 578 1578 1578 1578 156 157 SIAAAA AWCAAA AAAAxx +4774 1925 0 2 4 14 74 774 774 4774 4774 148 149 QBAAAA BWCAAA HHHHxx +7062 1926 0 2 2 2 62 62 1062 2062 7062 124 125 QLAAAA CWCAAA OOOOxx +5381 1927 1 1 1 1 81 381 1381 381 5381 162 163 ZYAAAA DWCAAA VVVVxx +7985 1928 1 1 5 5 85 985 1985 2985 7985 170 171 DVAAAA EWCAAA AAAAxx +3850 1929 0 2 0 10 50 850 1850 3850 3850 100 101 CSAAAA FWCAAA HHHHxx +5624 1930 0 0 4 4 24 624 1624 624 5624 48 49 IIAAAA GWCAAA OOOOxx +8948 1931 0 0 8 8 
48 948 948 3948 8948 96 97 EGAAAA HWCAAA VVVVxx +995 1932 1 3 5 15 95 995 995 995 995 190 191 HMAAAA IWCAAA AAAAxx +5058 1933 0 2 8 18 58 58 1058 58 5058 116 117 OMAAAA JWCAAA HHHHxx +9670 1934 0 2 0 10 70 670 1670 4670 9670 140 141 YHAAAA KWCAAA OOOOxx +3115 1935 1 3 5 15 15 115 1115 3115 3115 30 31 VPAAAA LWCAAA VVVVxx +4935 1936 1 3 5 15 35 935 935 4935 4935 70 71 VHAAAA MWCAAA AAAAxx +4735 1937 1 3 5 15 35 735 735 4735 4735 70 71 DAAAAA NWCAAA HHHHxx +1348 1938 0 0 8 8 48 348 1348 1348 1348 96 97 WZAAAA OWCAAA OOOOxx +2380 1939 0 0 0 0 80 380 380 2380 2380 160 161 ONAAAA PWCAAA VVVVxx +4246 1940 0 2 6 6 46 246 246 4246 4246 92 93 IHAAAA QWCAAA AAAAxx +522 1941 0 2 2 2 22 522 522 522 522 44 45 CUAAAA RWCAAA HHHHxx +1701 1942 1 1 1 1 1 701 1701 1701 1701 2 3 LNAAAA SWCAAA OOOOxx +9709 1943 1 1 9 9 9 709 1709 4709 9709 18 19 LJAAAA TWCAAA VVVVxx +8829 1944 1 1 9 9 29 829 829 3829 8829 58 59 PBAAAA UWCAAA AAAAxx +7936 1945 0 0 6 16 36 936 1936 2936 7936 72 73 GTAAAA VWCAAA HHHHxx +8474 1946 0 2 4 14 74 474 474 3474 8474 148 149 YNAAAA WWCAAA OOOOxx +4676 1947 0 0 6 16 76 676 676 4676 4676 152 153 WXAAAA XWCAAA VVVVxx +6303 1948 1 3 3 3 3 303 303 1303 6303 6 7 LIAAAA YWCAAA AAAAxx +3485 1949 1 1 5 5 85 485 1485 3485 3485 170 171 BEAAAA ZWCAAA HHHHxx +2695 1950 1 3 5 15 95 695 695 2695 2695 190 191 RZAAAA AXCAAA OOOOxx +8830 1951 0 2 0 10 30 830 830 3830 8830 60 61 QBAAAA BXCAAA VVVVxx +898 1952 0 2 8 18 98 898 898 898 898 196 197 OIAAAA CXCAAA AAAAxx +7268 1953 0 0 8 8 68 268 1268 2268 7268 136 137 OTAAAA DXCAAA HHHHxx +6568 1954 0 0 8 8 68 568 568 1568 6568 136 137 QSAAAA EXCAAA OOOOxx +9724 1955 0 0 4 4 24 724 1724 4724 9724 48 49 AKAAAA FXCAAA VVVVxx +3329 1956 1 1 9 9 29 329 1329 3329 3329 58 59 BYAAAA GXCAAA AAAAxx +9860 1957 0 0 0 0 60 860 1860 4860 9860 120 121 GPAAAA HXCAAA HHHHxx +6833 1958 1 1 3 13 33 833 833 1833 6833 66 67 VCAAAA IXCAAA OOOOxx +5956 1959 0 0 6 16 56 956 1956 956 5956 112 113 CVAAAA JXCAAA VVVVxx +3963 1960 1 3 3 3 63 963 1963 3963 3963 126 127 LWAAAA KXCAAA AAAAxx +883 1961 1 3 3 3 83 883 883 883 883 166 167 ZHAAAA LXCAAA HHHHxx +2761 1962 1 1 1 1 61 761 761 2761 2761 122 123 FCAAAA MXCAAA OOOOxx +4644 1963 0 0 4 4 44 644 644 4644 4644 88 89 QWAAAA NXCAAA VVVVxx +1358 1964 0 2 8 18 58 358 1358 1358 1358 116 117 GAAAAA OXCAAA AAAAxx +2049 1965 1 1 9 9 49 49 49 2049 2049 98 99 VAAAAA PXCAAA HHHHxx +2193 1966 1 1 3 13 93 193 193 2193 2193 186 187 JGAAAA QXCAAA OOOOxx +9435 1967 1 3 5 15 35 435 1435 4435 9435 70 71 XYAAAA RXCAAA VVVVxx +5890 1968 0 2 0 10 90 890 1890 890 5890 180 181 OSAAAA SXCAAA AAAAxx +8149 1969 1 1 9 9 49 149 149 3149 8149 98 99 LBAAAA TXCAAA HHHHxx +423 1970 1 3 3 3 23 423 423 423 423 46 47 HQAAAA UXCAAA OOOOxx +7980 1971 0 0 0 0 80 980 1980 2980 7980 160 161 YUAAAA VXCAAA VVVVxx +9019 1972 1 3 9 19 19 19 1019 4019 9019 38 39 XIAAAA WXCAAA AAAAxx +1647 1973 1 3 7 7 47 647 1647 1647 1647 94 95 JLAAAA XXCAAA HHHHxx +9495 1974 1 3 5 15 95 495 1495 4495 9495 190 191 FBAAAA YXCAAA OOOOxx +3904 1975 0 0 4 4 4 904 1904 3904 3904 8 9 EUAAAA ZXCAAA VVVVxx +5838 1976 0 2 8 18 38 838 1838 838 5838 76 77 OQAAAA AYCAAA AAAAxx +3866 1977 0 2 6 6 66 866 1866 3866 3866 132 133 SSAAAA BYCAAA HHHHxx +3093 1978 1 1 3 13 93 93 1093 3093 3093 186 187 ZOAAAA CYCAAA OOOOxx +9666 1979 0 2 6 6 66 666 1666 4666 9666 132 133 UHAAAA DYCAAA VVVVxx +1246 1980 0 2 6 6 46 246 1246 1246 1246 92 93 YVAAAA EYCAAA AAAAxx +9759 1981 1 3 9 19 59 759 1759 4759 9759 118 119 JLAAAA FYCAAA HHHHxx +7174 1982 0 2 4 14 74 174 1174 2174 7174 148 149 YPAAAA GYCAAA OOOOxx +7678 1983 0 2 8 18 
78 678 1678 2678 7678 156 157 IJAAAA HYCAAA VVVVxx +3004 1984 0 0 4 4 4 4 1004 3004 3004 8 9 OLAAAA IYCAAA AAAAxx +5607 1985 1 3 7 7 7 607 1607 607 5607 14 15 RHAAAA JYCAAA HHHHxx +8510 1986 0 2 0 10 10 510 510 3510 8510 20 21 IPAAAA KYCAAA OOOOxx +1483 1987 1 3 3 3 83 483 1483 1483 1483 166 167 BFAAAA LYCAAA VVVVxx +2915 1988 1 3 5 15 15 915 915 2915 2915 30 31 DIAAAA MYCAAA AAAAxx +1548 1989 0 0 8 8 48 548 1548 1548 1548 96 97 OHAAAA NYCAAA HHHHxx +5767 1990 1 3 7 7 67 767 1767 767 5767 134 135 VNAAAA OYCAAA OOOOxx +3214 1991 0 2 4 14 14 214 1214 3214 3214 28 29 QTAAAA PYCAAA VVVVxx +8663 1992 1 3 3 3 63 663 663 3663 8663 126 127 FVAAAA QYCAAA AAAAxx +5425 1993 1 1 5 5 25 425 1425 425 5425 50 51 RAAAAA RYCAAA HHHHxx +8530 1994 0 2 0 10 30 530 530 3530 8530 60 61 CQAAAA SYCAAA OOOOxx +821 1995 1 1 1 1 21 821 821 821 821 42 43 PFAAAA TYCAAA VVVVxx +8816 1996 0 0 6 16 16 816 816 3816 8816 32 33 CBAAAA UYCAAA AAAAxx +9367 1997 1 3 7 7 67 367 1367 4367 9367 134 135 HWAAAA VYCAAA HHHHxx +4138 1998 0 2 8 18 38 138 138 4138 4138 76 77 EDAAAA WYCAAA OOOOxx +94 1999 0 2 4 14 94 94 94 94 94 188 189 QDAAAA XYCAAA VVVVxx +1858 2000 0 2 8 18 58 858 1858 1858 1858 116 117 MTAAAA YYCAAA AAAAxx +5513 2001 1 1 3 13 13 513 1513 513 5513 26 27 BEAAAA ZYCAAA HHHHxx +9620 2002 0 0 0 0 20 620 1620 4620 9620 40 41 AGAAAA AZCAAA OOOOxx +4770 2003 0 2 0 10 70 770 770 4770 4770 140 141 MBAAAA BZCAAA VVVVxx +5193 2004 1 1 3 13 93 193 1193 193 5193 186 187 TRAAAA CZCAAA AAAAxx +198 2005 0 2 8 18 98 198 198 198 198 196 197 QHAAAA DZCAAA HHHHxx +417 2006 1 1 7 17 17 417 417 417 417 34 35 BQAAAA EZCAAA OOOOxx +173 2007 1 1 3 13 73 173 173 173 173 146 147 RGAAAA FZCAAA VVVVxx +6248 2008 0 0 8 8 48 248 248 1248 6248 96 97 IGAAAA GZCAAA AAAAxx +302 2009 0 2 2 2 2 302 302 302 302 4 5 QLAAAA HZCAAA HHHHxx +8983 2010 1 3 3 3 83 983 983 3983 8983 166 167 NHAAAA IZCAAA OOOOxx +4840 2011 0 0 0 0 40 840 840 4840 4840 80 81 EEAAAA JZCAAA VVVVxx +2876 2012 0 0 6 16 76 876 876 2876 2876 152 153 QGAAAA KZCAAA AAAAxx +5841 2013 1 1 1 1 41 841 1841 841 5841 82 83 RQAAAA LZCAAA HHHHxx +2766 2014 0 2 6 6 66 766 766 2766 2766 132 133 KCAAAA MZCAAA OOOOxx +9482 2015 0 2 2 2 82 482 1482 4482 9482 164 165 SAAAAA NZCAAA VVVVxx +5335 2016 1 3 5 15 35 335 1335 335 5335 70 71 FXAAAA OZCAAA AAAAxx +1502 2017 0 2 2 2 2 502 1502 1502 1502 4 5 UFAAAA PZCAAA HHHHxx +9291 2018 1 3 1 11 91 291 1291 4291 9291 182 183 JTAAAA QZCAAA OOOOxx +8655 2019 1 3 5 15 55 655 655 3655 8655 110 111 XUAAAA RZCAAA VVVVxx +1687 2020 1 3 7 7 87 687 1687 1687 1687 174 175 XMAAAA SZCAAA AAAAxx +8171 2021 1 3 1 11 71 171 171 3171 8171 142 143 HCAAAA TZCAAA HHHHxx +5699 2022 1 3 9 19 99 699 1699 699 5699 198 199 FLAAAA UZCAAA OOOOxx +1462 2023 0 2 2 2 62 462 1462 1462 1462 124 125 GEAAAA VZCAAA VVVVxx +608 2024 0 0 8 8 8 608 608 608 608 16 17 KXAAAA WZCAAA AAAAxx +6860 2025 0 0 0 0 60 860 860 1860 6860 120 121 WDAAAA XZCAAA HHHHxx +6063 2026 1 3 3 3 63 63 63 1063 6063 126 127 FZAAAA YZCAAA OOOOxx +1422 2027 0 2 2 2 22 422 1422 1422 1422 44 45 SCAAAA ZZCAAA VVVVxx +1932 2028 0 0 2 12 32 932 1932 1932 1932 64 65 IWAAAA AADAAA AAAAxx +5065 2029 1 1 5 5 65 65 1065 65 5065 130 131 VMAAAA BADAAA HHHHxx +432 2030 0 0 2 12 32 432 432 432 432 64 65 QQAAAA CADAAA OOOOxx +4680 2031 0 0 0 0 80 680 680 4680 4680 160 161 AYAAAA DADAAA VVVVxx +8172 2032 0 0 2 12 72 172 172 3172 8172 144 145 ICAAAA EADAAA AAAAxx +8668 2033 0 0 8 8 68 668 668 3668 8668 136 137 KVAAAA FADAAA HHHHxx +256 2034 0 0 6 16 56 256 256 256 256 112 113 WJAAAA GADAAA OOOOxx +2500 2035 0 0 0 0 0 500 500 2500 2500 0 1 
ESAAAA HADAAA VVVVxx +274 2036 0 2 4 14 74 274 274 274 274 148 149 OKAAAA IADAAA AAAAxx +5907 2037 1 3 7 7 7 907 1907 907 5907 14 15 FTAAAA JADAAA HHHHxx +8587 2038 1 3 7 7 87 587 587 3587 8587 174 175 HSAAAA KADAAA OOOOxx +9942 2039 0 2 2 2 42 942 1942 4942 9942 84 85 KSAAAA LADAAA VVVVxx +116 2040 0 0 6 16 16 116 116 116 116 32 33 MEAAAA MADAAA AAAAxx +7134 2041 0 2 4 14 34 134 1134 2134 7134 68 69 KOAAAA NADAAA HHHHxx +9002 2042 0 2 2 2 2 2 1002 4002 9002 4 5 GIAAAA OADAAA OOOOxx +1209 2043 1 1 9 9 9 209 1209 1209 1209 18 19 NUAAAA PADAAA VVVVxx +9983 2044 1 3 3 3 83 983 1983 4983 9983 166 167 ZTAAAA QADAAA AAAAxx +1761 2045 1 1 1 1 61 761 1761 1761 1761 122 123 TPAAAA RADAAA HHHHxx +7723 2046 1 3 3 3 23 723 1723 2723 7723 46 47 BLAAAA SADAAA OOOOxx +6518 2047 0 2 8 18 18 518 518 1518 6518 36 37 SQAAAA TADAAA VVVVxx +1372 2048 0 0 2 12 72 372 1372 1372 1372 144 145 UAAAAA UADAAA AAAAxx +3587 2049 1 3 7 7 87 587 1587 3587 3587 174 175 ZHAAAA VADAAA HHHHxx +5323 2050 1 3 3 3 23 323 1323 323 5323 46 47 TWAAAA WADAAA OOOOxx +5902 2051 0 2 2 2 2 902 1902 902 5902 4 5 ATAAAA XADAAA VVVVxx +3749 2052 1 1 9 9 49 749 1749 3749 3749 98 99 FOAAAA YADAAA AAAAxx +5965 2053 1 1 5 5 65 965 1965 965 5965 130 131 LVAAAA ZADAAA HHHHxx +663 2054 1 3 3 3 63 663 663 663 663 126 127 NZAAAA ABDAAA OOOOxx +36 2055 0 0 6 16 36 36 36 36 36 72 73 KBAAAA BBDAAA VVVVxx +9782 2056 0 2 2 2 82 782 1782 4782 9782 164 165 GMAAAA CBDAAA AAAAxx +5412 2057 0 0 2 12 12 412 1412 412 5412 24 25 EAAAAA DBDAAA HHHHxx +9961 2058 1 1 1 1 61 961 1961 4961 9961 122 123 DTAAAA EBDAAA OOOOxx +6492 2059 0 0 2 12 92 492 492 1492 6492 184 185 SPAAAA FBDAAA VVVVxx +4234 2060 0 2 4 14 34 234 234 4234 4234 68 69 WGAAAA GBDAAA AAAAxx +4922 2061 0 2 2 2 22 922 922 4922 4922 44 45 IHAAAA HBDAAA HHHHxx +6166 2062 0 2 6 6 66 166 166 1166 6166 132 133 EDAAAA IBDAAA OOOOxx +7019 2063 1 3 9 19 19 19 1019 2019 7019 38 39 ZJAAAA JBDAAA VVVVxx +7805 2064 1 1 5 5 5 805 1805 2805 7805 10 11 FOAAAA KBDAAA AAAAxx +9808 2065 0 0 8 8 8 808 1808 4808 9808 16 17 GNAAAA LBDAAA HHHHxx +2550 2066 0 2 0 10 50 550 550 2550 2550 100 101 CUAAAA MBDAAA OOOOxx +8626 2067 0 2 6 6 26 626 626 3626 8626 52 53 UTAAAA NBDAAA VVVVxx +5649 2068 1 1 9 9 49 649 1649 649 5649 98 99 HJAAAA OBDAAA AAAAxx +3117 2069 1 1 7 17 17 117 1117 3117 3117 34 35 XPAAAA PBDAAA HHHHxx +866 2070 0 2 6 6 66 866 866 866 866 132 133 IHAAAA QBDAAA OOOOxx +2323 2071 1 3 3 3 23 323 323 2323 2323 46 47 JLAAAA RBDAAA VVVVxx +5132 2072 0 0 2 12 32 132 1132 132 5132 64 65 KPAAAA SBDAAA AAAAxx +9222 2073 0 2 2 2 22 222 1222 4222 9222 44 45 SQAAAA TBDAAA HHHHxx +3934 2074 0 2 4 14 34 934 1934 3934 3934 68 69 IVAAAA UBDAAA OOOOxx +4845 2075 1 1 5 5 45 845 845 4845 4845 90 91 JEAAAA VBDAAA VVVVxx +7714 2076 0 2 4 14 14 714 1714 2714 7714 28 29 SKAAAA WBDAAA AAAAxx +9818 2077 0 2 8 18 18 818 1818 4818 9818 36 37 QNAAAA XBDAAA HHHHxx +2219 2078 1 3 9 19 19 219 219 2219 2219 38 39 JHAAAA YBDAAA OOOOxx +6573 2079 1 1 3 13 73 573 573 1573 6573 146 147 VSAAAA ZBDAAA VVVVxx +4555 2080 1 3 5 15 55 555 555 4555 4555 110 111 FTAAAA ACDAAA AAAAxx +7306 2081 0 2 6 6 6 306 1306 2306 7306 12 13 AVAAAA BCDAAA HHHHxx +9313 2082 1 1 3 13 13 313 1313 4313 9313 26 27 FUAAAA CCDAAA OOOOxx +3924 2083 0 0 4 4 24 924 1924 3924 3924 48 49 YUAAAA DCDAAA VVVVxx +5176 2084 0 0 6 16 76 176 1176 176 5176 152 153 CRAAAA ECDAAA AAAAxx +9767 2085 1 3 7 7 67 767 1767 4767 9767 134 135 RLAAAA FCDAAA HHHHxx +905 2086 1 1 5 5 5 905 905 905 905 10 11 VIAAAA GCDAAA OOOOxx +8037 2087 1 1 7 17 37 37 37 3037 8037 74 75 DXAAAA HCDAAA VVVVxx +8133 
2088 1 1 3 13 33 133 133 3133 8133 66 67 VAAAAA ICDAAA AAAAxx +2954 2089 0 2 4 14 54 954 954 2954 2954 108 109 QJAAAA JCDAAA HHHHxx +7262 2090 0 2 2 2 62 262 1262 2262 7262 124 125 ITAAAA KCDAAA OOOOxx +8768 2091 0 0 8 8 68 768 768 3768 8768 136 137 GZAAAA LCDAAA VVVVxx +6953 2092 1 1 3 13 53 953 953 1953 6953 106 107 LHAAAA MCDAAA AAAAxx +1984 2093 0 0 4 4 84 984 1984 1984 1984 168 169 IYAAAA NCDAAA HHHHxx +9348 2094 0 0 8 8 48 348 1348 4348 9348 96 97 OVAAAA OCDAAA OOOOxx +7769 2095 1 1 9 9 69 769 1769 2769 7769 138 139 VMAAAA PCDAAA VVVVxx +2994 2096 0 2 4 14 94 994 994 2994 2994 188 189 ELAAAA QCDAAA AAAAxx +5938 2097 0 2 8 18 38 938 1938 938 5938 76 77 KUAAAA RCDAAA HHHHxx +556 2098 0 0 6 16 56 556 556 556 556 112 113 KVAAAA SCDAAA OOOOxx +2577 2099 1 1 7 17 77 577 577 2577 2577 154 155 DVAAAA TCDAAA VVVVxx +8733 2100 1 1 3 13 33 733 733 3733 8733 66 67 XXAAAA UCDAAA AAAAxx +3108 2101 0 0 8 8 8 108 1108 3108 3108 16 17 OPAAAA VCDAAA HHHHxx +4166 2102 0 2 6 6 66 166 166 4166 4166 132 133 GEAAAA WCDAAA OOOOxx +3170 2103 0 2 0 10 70 170 1170 3170 3170 140 141 YRAAAA XCDAAA VVVVxx +8118 2104 0 2 8 18 18 118 118 3118 8118 36 37 GAAAAA YCDAAA AAAAxx +8454 2105 0 2 4 14 54 454 454 3454 8454 108 109 ENAAAA ZCDAAA HHHHxx +5338 2106 0 2 8 18 38 338 1338 338 5338 76 77 IXAAAA ADDAAA OOOOxx +402 2107 0 2 2 2 2 402 402 402 402 4 5 MPAAAA BDDAAA VVVVxx +5673 2108 1 1 3 13 73 673 1673 673 5673 146 147 FKAAAA CDDAAA AAAAxx +4324 2109 0 0 4 4 24 324 324 4324 4324 48 49 IKAAAA DDDAAA HHHHxx +1943 2110 1 3 3 3 43 943 1943 1943 1943 86 87 TWAAAA EDDAAA OOOOxx +7703 2111 1 3 3 3 3 703 1703 2703 7703 6 7 HKAAAA FDDAAA VVVVxx +7180 2112 0 0 0 0 80 180 1180 2180 7180 160 161 EQAAAA GDDAAA AAAAxx +5478 2113 0 2 8 18 78 478 1478 478 5478 156 157 SCAAAA HDDAAA HHHHxx +5775 2114 1 3 5 15 75 775 1775 775 5775 150 151 DOAAAA IDDAAA OOOOxx +6952 2115 0 0 2 12 52 952 952 1952 6952 104 105 KHAAAA JDDAAA VVVVxx +9022 2116 0 2 2 2 22 22 1022 4022 9022 44 45 AJAAAA KDDAAA AAAAxx +547 2117 1 3 7 7 47 547 547 547 547 94 95 BVAAAA LDDAAA HHHHxx +5877 2118 1 1 7 17 77 877 1877 877 5877 154 155 BSAAAA MDDAAA OOOOxx +9580 2119 0 0 0 0 80 580 1580 4580 9580 160 161 MEAAAA NDDAAA VVVVxx +6094 2120 0 2 4 14 94 94 94 1094 6094 188 189 KAAAAA ODDAAA AAAAxx +3398 2121 0 2 8 18 98 398 1398 3398 3398 196 197 SAAAAA PDDAAA HHHHxx +4574 2122 0 2 4 14 74 574 574 4574 4574 148 149 YTAAAA QDDAAA OOOOxx +3675 2123 1 3 5 15 75 675 1675 3675 3675 150 151 JLAAAA RDDAAA VVVVxx +6413 2124 1 1 3 13 13 413 413 1413 6413 26 27 RMAAAA SDDAAA AAAAxx +9851 2125 1 3 1 11 51 851 1851 4851 9851 102 103 XOAAAA TDDAAA HHHHxx +126 2126 0 2 6 6 26 126 126 126 126 52 53 WEAAAA UDDAAA OOOOxx +6803 2127 1 3 3 3 3 803 803 1803 6803 6 7 RBAAAA VDDAAA VVVVxx +6949 2128 1 1 9 9 49 949 949 1949 6949 98 99 HHAAAA WDDAAA AAAAxx +115 2129 1 3 5 15 15 115 115 115 115 30 31 LEAAAA XDDAAA HHHHxx +4165 2130 1 1 5 5 65 165 165 4165 4165 130 131 FEAAAA YDDAAA OOOOxx +201 2131 1 1 1 1 1 201 201 201 201 2 3 THAAAA ZDDAAA VVVVxx +9324 2132 0 0 4 4 24 324 1324 4324 9324 48 49 QUAAAA AEDAAA AAAAxx +6562 2133 0 2 2 2 62 562 562 1562 6562 124 125 KSAAAA BEDAAA HHHHxx +1917 2134 1 1 7 17 17 917 1917 1917 1917 34 35 TVAAAA CEDAAA OOOOxx +558 2135 0 2 8 18 58 558 558 558 558 116 117 MVAAAA DEDAAA VVVVxx +8515 2136 1 3 5 15 15 515 515 3515 8515 30 31 NPAAAA EEDAAA AAAAxx +6321 2137 1 1 1 1 21 321 321 1321 6321 42 43 DJAAAA FEDAAA HHHHxx +6892 2138 0 0 2 12 92 892 892 1892 6892 184 185 CFAAAA GEDAAA OOOOxx +1001 2139 1 1 1 1 1 1 1001 1001 1001 2 3 NMAAAA HEDAAA VVVVxx +2858 2140 0 2 8 
18 58 858 858 2858 2858 116 117 YFAAAA IEDAAA AAAAxx +2434 2141 0 2 4 14 34 434 434 2434 2434 68 69 QPAAAA JEDAAA HHHHxx +4460 2142 0 0 0 0 60 460 460 4460 4460 120 121 OPAAAA KEDAAA OOOOxx +5447 2143 1 3 7 7 47 447 1447 447 5447 94 95 NBAAAA LEDAAA VVVVxx +3799 2144 1 3 9 19 99 799 1799 3799 3799 198 199 DQAAAA MEDAAA AAAAxx +4310 2145 0 2 0 10 10 310 310 4310 4310 20 21 UJAAAA NEDAAA HHHHxx +405 2146 1 1 5 5 5 405 405 405 405 10 11 PPAAAA OEDAAA OOOOxx +4573 2147 1 1 3 13 73 573 573 4573 4573 146 147 XTAAAA PEDAAA VVVVxx +706 2148 0 2 6 6 6 706 706 706 706 12 13 EBAAAA QEDAAA AAAAxx +7619 2149 1 3 9 19 19 619 1619 2619 7619 38 39 BHAAAA REDAAA HHHHxx +7959 2150 1 3 9 19 59 959 1959 2959 7959 118 119 DUAAAA SEDAAA OOOOxx +6712 2151 0 0 2 12 12 712 712 1712 6712 24 25 EYAAAA TEDAAA VVVVxx +6959 2152 1 3 9 19 59 959 959 1959 6959 118 119 RHAAAA UEDAAA AAAAxx +9791 2153 1 3 1 11 91 791 1791 4791 9791 182 183 PMAAAA VEDAAA HHHHxx +2112 2154 0 0 2 12 12 112 112 2112 2112 24 25 GDAAAA WEDAAA OOOOxx +9114 2155 0 2 4 14 14 114 1114 4114 9114 28 29 OMAAAA XEDAAA VVVVxx +3506 2156 0 2 6 6 6 506 1506 3506 3506 12 13 WEAAAA YEDAAA AAAAxx +5002 2157 0 2 2 2 2 2 1002 2 5002 4 5 KKAAAA ZEDAAA HHHHxx +3518 2158 0 2 8 18 18 518 1518 3518 3518 36 37 IFAAAA AFDAAA OOOOxx +602 2159 0 2 2 2 2 602 602 602 602 4 5 EXAAAA BFDAAA VVVVxx +9060 2160 0 0 0 0 60 60 1060 4060 9060 120 121 MKAAAA CFDAAA AAAAxx +3292 2161 0 0 2 12 92 292 1292 3292 3292 184 185 QWAAAA DFDAAA HHHHxx +77 2162 1 1 7 17 77 77 77 77 77 154 155 ZCAAAA EFDAAA OOOOxx +1420 2163 0 0 0 0 20 420 1420 1420 1420 40 41 QCAAAA FFDAAA VVVVxx +6001 2164 1 1 1 1 1 1 1 1001 6001 2 3 VWAAAA GFDAAA AAAAxx +7477 2165 1 1 7 17 77 477 1477 2477 7477 154 155 PBAAAA HFDAAA HHHHxx +6655 2166 1 3 5 15 55 655 655 1655 6655 110 111 ZVAAAA IFDAAA OOOOxx +7845 2167 1 1 5 5 45 845 1845 2845 7845 90 91 TPAAAA JFDAAA VVVVxx +8484 2168 0 0 4 4 84 484 484 3484 8484 168 169 IOAAAA KFDAAA AAAAxx +4345 2169 1 1 5 5 45 345 345 4345 4345 90 91 DLAAAA LFDAAA HHHHxx +4250 2170 0 2 0 10 50 250 250 4250 4250 100 101 MHAAAA MFDAAA OOOOxx +2391 2171 1 3 1 11 91 391 391 2391 2391 182 183 ZNAAAA NFDAAA VVVVxx +6884 2172 0 0 4 4 84 884 884 1884 6884 168 169 UEAAAA OFDAAA AAAAxx +7270 2173 0 2 0 10 70 270 1270 2270 7270 140 141 QTAAAA PFDAAA HHHHxx +2499 2174 1 3 9 19 99 499 499 2499 2499 198 199 DSAAAA QFDAAA OOOOxx +7312 2175 0 0 2 12 12 312 1312 2312 7312 24 25 GVAAAA RFDAAA VVVVxx +7113 2176 1 1 3 13 13 113 1113 2113 7113 26 27 PNAAAA SFDAAA AAAAxx +6695 2177 1 3 5 15 95 695 695 1695 6695 190 191 NXAAAA TFDAAA HHHHxx +6521 2178 1 1 1 1 21 521 521 1521 6521 42 43 VQAAAA UFDAAA OOOOxx +272 2179 0 0 2 12 72 272 272 272 272 144 145 MKAAAA VFDAAA VVVVxx +9976 2180 0 0 6 16 76 976 1976 4976 9976 152 153 STAAAA WFDAAA AAAAxx +992 2181 0 0 2 12 92 992 992 992 992 184 185 EMAAAA XFDAAA HHHHxx +6158 2182 0 2 8 18 58 158 158 1158 6158 116 117 WCAAAA YFDAAA OOOOxx +3281 2183 1 1 1 1 81 281 1281 3281 3281 162 163 FWAAAA ZFDAAA VVVVxx +7446 2184 0 2 6 6 46 446 1446 2446 7446 92 93 KAAAAA AGDAAA AAAAxx +4679 2185 1 3 9 19 79 679 679 4679 4679 158 159 ZXAAAA BGDAAA HHHHxx +5203 2186 1 3 3 3 3 203 1203 203 5203 6 7 DSAAAA CGDAAA OOOOxx +9874 2187 0 2 4 14 74 874 1874 4874 9874 148 149 UPAAAA DGDAAA VVVVxx +8371 2188 1 3 1 11 71 371 371 3371 8371 142 143 ZJAAAA EGDAAA AAAAxx +9086 2189 0 2 6 6 86 86 1086 4086 9086 172 173 MLAAAA FGDAAA HHHHxx +430 2190 0 2 0 10 30 430 430 430 430 60 61 OQAAAA GGDAAA OOOOxx +8749 2191 1 1 9 9 49 749 749 3749 8749 98 99 NYAAAA HGDAAA VVVVxx +577 2192 1 1 7 17 77 577 577 
577 577 154 155 FWAAAA IGDAAA AAAAxx +4884 2193 0 0 4 4 84 884 884 4884 4884 168 169 WFAAAA JGDAAA HHHHxx +3421 2194 1 1 1 1 21 421 1421 3421 3421 42 43 PBAAAA KGDAAA OOOOxx +2812 2195 0 0 2 12 12 812 812 2812 2812 24 25 EEAAAA LGDAAA VVVVxx +5958 2196 0 2 8 18 58 958 1958 958 5958 116 117 EVAAAA MGDAAA AAAAxx +9901 2197 1 1 1 1 1 901 1901 4901 9901 2 3 VQAAAA NGDAAA HHHHxx +8478 2198 0 2 8 18 78 478 478 3478 8478 156 157 COAAAA OGDAAA OOOOxx +6545 2199 1 1 5 5 45 545 545 1545 6545 90 91 TRAAAA PGDAAA VVVVxx +1479 2200 1 3 9 19 79 479 1479 1479 1479 158 159 XEAAAA QGDAAA AAAAxx +1046 2201 0 2 6 6 46 46 1046 1046 1046 92 93 GOAAAA RGDAAA HHHHxx +6372 2202 0 0 2 12 72 372 372 1372 6372 144 145 CLAAAA SGDAAA OOOOxx +8206 2203 0 2 6 6 6 206 206 3206 8206 12 13 QDAAAA TGDAAA VVVVxx +9544 2204 0 0 4 4 44 544 1544 4544 9544 88 89 CDAAAA UGDAAA AAAAxx +9287 2205 1 3 7 7 87 287 1287 4287 9287 174 175 FTAAAA VGDAAA HHHHxx +6786 2206 0 2 6 6 86 786 786 1786 6786 172 173 ABAAAA WGDAAA OOOOxx +6511 2207 1 3 1 11 11 511 511 1511 6511 22 23 LQAAAA XGDAAA VVVVxx +603 2208 1 3 3 3 3 603 603 603 603 6 7 FXAAAA YGDAAA AAAAxx +2022 2209 0 2 2 2 22 22 22 2022 2022 44 45 UZAAAA ZGDAAA HHHHxx +2086 2210 0 2 6 6 86 86 86 2086 2086 172 173 GCAAAA AHDAAA OOOOxx +1969 2211 1 1 9 9 69 969 1969 1969 1969 138 139 TXAAAA BHDAAA VVVVxx +4841 2212 1 1 1 1 41 841 841 4841 4841 82 83 FEAAAA CHDAAA AAAAxx +5845 2213 1 1 5 5 45 845 1845 845 5845 90 91 VQAAAA DHDAAA HHHHxx +4635 2214 1 3 5 15 35 635 635 4635 4635 70 71 HWAAAA EHDAAA OOOOxx +4658 2215 0 2 8 18 58 658 658 4658 4658 116 117 EXAAAA FHDAAA VVVVxx +2896 2216 0 0 6 16 96 896 896 2896 2896 192 193 KHAAAA GHDAAA AAAAxx +5179 2217 1 3 9 19 79 179 1179 179 5179 158 159 FRAAAA HHDAAA HHHHxx +8667 2218 1 3 7 7 67 667 667 3667 8667 134 135 JVAAAA IHDAAA OOOOxx +7294 2219 0 2 4 14 94 294 1294 2294 7294 188 189 OUAAAA JHDAAA VVVVxx +3706 2220 0 2 6 6 6 706 1706 3706 3706 12 13 OMAAAA KHDAAA AAAAxx +8389 2221 1 1 9 9 89 389 389 3389 8389 178 179 RKAAAA LHDAAA HHHHxx +2486 2222 0 2 6 6 86 486 486 2486 2486 172 173 QRAAAA MHDAAA OOOOxx +8743 2223 1 3 3 3 43 743 743 3743 8743 86 87 HYAAAA NHDAAA VVVVxx +2777 2224 1 1 7 17 77 777 777 2777 2777 154 155 VCAAAA OHDAAA AAAAxx +2113 2225 1 1 3 13 13 113 113 2113 2113 26 27 HDAAAA PHDAAA HHHHxx +2076 2226 0 0 6 16 76 76 76 2076 2076 152 153 WBAAAA QHDAAA OOOOxx +2300 2227 0 0 0 0 0 300 300 2300 2300 0 1 MKAAAA RHDAAA VVVVxx +6894 2228 0 2 4 14 94 894 894 1894 6894 188 189 EFAAAA SHDAAA AAAAxx +6939 2229 1 3 9 19 39 939 939 1939 6939 78 79 XGAAAA THDAAA HHHHxx +446 2230 0 2 6 6 46 446 446 446 446 92 93 ERAAAA UHDAAA OOOOxx +6218 2231 0 2 8 18 18 218 218 1218 6218 36 37 EFAAAA VHDAAA VVVVxx +1295 2232 1 3 5 15 95 295 1295 1295 1295 190 191 VXAAAA WHDAAA AAAAxx +5135 2233 1 3 5 15 35 135 1135 135 5135 70 71 NPAAAA XHDAAA HHHHxx +8122 2234 0 2 2 2 22 122 122 3122 8122 44 45 KAAAAA YHDAAA OOOOxx +316 2235 0 0 6 16 16 316 316 316 316 32 33 EMAAAA ZHDAAA VVVVxx +514 2236 0 2 4 14 14 514 514 514 514 28 29 UTAAAA AIDAAA AAAAxx +7970 2237 0 2 0 10 70 970 1970 2970 7970 140 141 OUAAAA BIDAAA HHHHxx +9350 2238 0 2 0 10 50 350 1350 4350 9350 100 101 QVAAAA CIDAAA OOOOxx +3700 2239 0 0 0 0 0 700 1700 3700 3700 0 1 IMAAAA DIDAAA VVVVxx +582 2240 0 2 2 2 82 582 582 582 582 164 165 KWAAAA EIDAAA AAAAxx +9722 2241 0 2 2 2 22 722 1722 4722 9722 44 45 YJAAAA FIDAAA HHHHxx +7398 2242 0 2 8 18 98 398 1398 2398 7398 196 197 OYAAAA GIDAAA OOOOxx +2265 2243 1 1 5 5 65 265 265 2265 2265 130 131 DJAAAA HIDAAA VVVVxx +3049 2244 1 1 9 9 49 49 1049 3049 3049 98 99 
HNAAAA IIDAAA AAAAxx +9121 2245 1 1 1 1 21 121 1121 4121 9121 42 43 VMAAAA JIDAAA HHHHxx +4275 2246 1 3 5 15 75 275 275 4275 4275 150 151 LIAAAA KIDAAA OOOOxx +6567 2247 1 3 7 7 67 567 567 1567 6567 134 135 PSAAAA LIDAAA VVVVxx +6755 2248 1 3 5 15 55 755 755 1755 6755 110 111 VZAAAA MIDAAA AAAAxx +4535 2249 1 3 5 15 35 535 535 4535 4535 70 71 LSAAAA NIDAAA HHHHxx +7968 2250 0 0 8 8 68 968 1968 2968 7968 136 137 MUAAAA OIDAAA OOOOxx +3412 2251 0 0 2 12 12 412 1412 3412 3412 24 25 GBAAAA PIDAAA VVVVxx +6112 2252 0 0 2 12 12 112 112 1112 6112 24 25 CBAAAA QIDAAA AAAAxx +6805 2253 1 1 5 5 5 805 805 1805 6805 10 11 TBAAAA RIDAAA HHHHxx +2880 2254 0 0 0 0 80 880 880 2880 2880 160 161 UGAAAA SIDAAA OOOOxx +7710 2255 0 2 0 10 10 710 1710 2710 7710 20 21 OKAAAA TIDAAA VVVVxx +7949 2256 1 1 9 9 49 949 1949 2949 7949 98 99 TTAAAA UIDAAA AAAAxx +7043 2257 1 3 3 3 43 43 1043 2043 7043 86 87 XKAAAA VIDAAA HHHHxx +9012 2258 0 0 2 12 12 12 1012 4012 9012 24 25 QIAAAA WIDAAA OOOOxx +878 2259 0 2 8 18 78 878 878 878 878 156 157 UHAAAA XIDAAA VVVVxx +7930 2260 0 2 0 10 30 930 1930 2930 7930 60 61 ATAAAA YIDAAA AAAAxx +667 2261 1 3 7 7 67 667 667 667 667 134 135 RZAAAA ZIDAAA HHHHxx +1905 2262 1 1 5 5 5 905 1905 1905 1905 10 11 HVAAAA AJDAAA OOOOxx +4958 2263 0 2 8 18 58 958 958 4958 4958 116 117 SIAAAA BJDAAA VVVVxx +2973 2264 1 1 3 13 73 973 973 2973 2973 146 147 JKAAAA CJDAAA AAAAxx +3631 2265 1 3 1 11 31 631 1631 3631 3631 62 63 RJAAAA DJDAAA HHHHxx +5868 2266 0 0 8 8 68 868 1868 868 5868 136 137 SRAAAA EJDAAA OOOOxx +2873 2267 1 1 3 13 73 873 873 2873 2873 146 147 NGAAAA FJDAAA VVVVxx +6941 2268 1 1 1 1 41 941 941 1941 6941 82 83 ZGAAAA GJDAAA AAAAxx +6384 2269 0 0 4 4 84 384 384 1384 6384 168 169 OLAAAA HJDAAA HHHHxx +3806 2270 0 2 6 6 6 806 1806 3806 3806 12 13 KQAAAA IJDAAA OOOOxx +5079 2271 1 3 9 19 79 79 1079 79 5079 158 159 JNAAAA JJDAAA VVVVxx +1970 2272 0 2 0 10 70 970 1970 1970 1970 140 141 UXAAAA KJDAAA AAAAxx +7810 2273 0 2 0 10 10 810 1810 2810 7810 20 21 KOAAAA LJDAAA HHHHxx +4639 2274 1 3 9 19 39 639 639 4639 4639 78 79 LWAAAA MJDAAA OOOOxx +6527 2275 1 3 7 7 27 527 527 1527 6527 54 55 BRAAAA NJDAAA VVVVxx +8079 2276 1 3 9 19 79 79 79 3079 8079 158 159 TYAAAA OJDAAA AAAAxx +2740 2277 0 0 0 0 40 740 740 2740 2740 80 81 KBAAAA PJDAAA HHHHxx +2337 2278 1 1 7 17 37 337 337 2337 2337 74 75 XLAAAA QJDAAA OOOOxx +6670 2279 0 2 0 10 70 670 670 1670 6670 140 141 OWAAAA RJDAAA VVVVxx +2345 2280 1 1 5 5 45 345 345 2345 2345 90 91 FMAAAA SJDAAA AAAAxx +401 2281 1 1 1 1 1 401 401 401 401 2 3 LPAAAA TJDAAA HHHHxx +2704 2282 0 0 4 4 4 704 704 2704 2704 8 9 AAAAAA UJDAAA OOOOxx +5530 2283 0 2 0 10 30 530 1530 530 5530 60 61 SEAAAA VJDAAA VVVVxx +51 2284 1 3 1 11 51 51 51 51 51 102 103 ZBAAAA WJDAAA AAAAxx +4282 2285 0 2 2 2 82 282 282 4282 4282 164 165 SIAAAA XJDAAA HHHHxx +7336 2286 0 0 6 16 36 336 1336 2336 7336 72 73 EWAAAA YJDAAA OOOOxx +8320 2287 0 0 0 0 20 320 320 3320 8320 40 41 AIAAAA ZJDAAA VVVVxx +7772 2288 0 0 2 12 72 772 1772 2772 7772 144 145 YMAAAA AKDAAA AAAAxx +1894 2289 0 2 4 14 94 894 1894 1894 1894 188 189 WUAAAA BKDAAA HHHHxx +2320 2290 0 0 0 0 20 320 320 2320 2320 40 41 GLAAAA CKDAAA OOOOxx +6232 2291 0 0 2 12 32 232 232 1232 6232 64 65 SFAAAA DKDAAA VVVVxx +2833 2292 1 1 3 13 33 833 833 2833 2833 66 67 ZEAAAA EKDAAA AAAAxx +8265 2293 1 1 5 5 65 265 265 3265 8265 130 131 XFAAAA FKDAAA HHHHxx +4589 2294 1 1 9 9 89 589 589 4589 4589 178 179 NUAAAA GKDAAA OOOOxx +8182 2295 0 2 2 2 82 182 182 3182 8182 164 165 SCAAAA HKDAAA VVVVxx +8337 2296 1 1 7 17 37 337 337 3337 8337 74 75 RIAAAA 
IKDAAA AAAAxx +8210 2297 0 2 0 10 10 210 210 3210 8210 20 21 UDAAAA JKDAAA HHHHxx +1406 2298 0 2 6 6 6 406 1406 1406 1406 12 13 CCAAAA KKDAAA OOOOxx +4463 2299 1 3 3 3 63 463 463 4463 4463 126 127 RPAAAA LKDAAA VVVVxx +4347 2300 1 3 7 7 47 347 347 4347 4347 94 95 FLAAAA MKDAAA AAAAxx +181 2301 1 1 1 1 81 181 181 181 181 162 163 ZGAAAA NKDAAA HHHHxx +9986 2302 0 2 6 6 86 986 1986 4986 9986 172 173 CUAAAA OKDAAA OOOOxx +661 2303 1 1 1 1 61 661 661 661 661 122 123 LZAAAA PKDAAA VVVVxx +4105 2304 1 1 5 5 5 105 105 4105 4105 10 11 XBAAAA QKDAAA AAAAxx +2187 2305 1 3 7 7 87 187 187 2187 2187 174 175 DGAAAA RKDAAA HHHHxx +1628 2306 0 0 8 8 28 628 1628 1628 1628 56 57 QKAAAA SKDAAA OOOOxx +3119 2307 1 3 9 19 19 119 1119 3119 3119 38 39 ZPAAAA TKDAAA VVVVxx +6804 2308 0 0 4 4 4 804 804 1804 6804 8 9 SBAAAA UKDAAA AAAAxx +9918 2309 0 2 8 18 18 918 1918 4918 9918 36 37 MRAAAA VKDAAA HHHHxx +8916 2310 0 0 6 16 16 916 916 3916 8916 32 33 YEAAAA WKDAAA OOOOxx +6057 2311 1 1 7 17 57 57 57 1057 6057 114 115 ZYAAAA XKDAAA VVVVxx +3622 2312 0 2 2 2 22 622 1622 3622 3622 44 45 IJAAAA YKDAAA AAAAxx +9168 2313 0 0 8 8 68 168 1168 4168 9168 136 137 QOAAAA ZKDAAA HHHHxx +3720 2314 0 0 0 0 20 720 1720 3720 3720 40 41 CNAAAA ALDAAA OOOOxx +9927 2315 1 3 7 7 27 927 1927 4927 9927 54 55 VRAAAA BLDAAA VVVVxx +5616 2316 0 0 6 16 16 616 1616 616 5616 32 33 AIAAAA CLDAAA AAAAxx +5210 2317 0 2 0 10 10 210 1210 210 5210 20 21 KSAAAA DLDAAA HHHHxx +636 2318 0 0 6 16 36 636 636 636 636 72 73 MYAAAA ELDAAA OOOOxx +9936 2319 0 0 6 16 36 936 1936 4936 9936 72 73 ESAAAA FLDAAA VVVVxx +2316 2320 0 0 6 16 16 316 316 2316 2316 32 33 CLAAAA GLDAAA AAAAxx +4363 2321 1 3 3 3 63 363 363 4363 4363 126 127 VLAAAA HLDAAA HHHHxx +7657 2322 1 1 7 17 57 657 1657 2657 7657 114 115 NIAAAA ILDAAA OOOOxx +697 2323 1 1 7 17 97 697 697 697 697 194 195 VAAAAA JLDAAA VVVVxx +912 2324 0 0 2 12 12 912 912 912 912 24 25 CJAAAA KLDAAA AAAAxx +8806 2325 0 2 6 6 6 806 806 3806 8806 12 13 SAAAAA LLDAAA HHHHxx +9698 2326 0 2 8 18 98 698 1698 4698 9698 196 197 AJAAAA MLDAAA OOOOxx +6191 2327 1 3 1 11 91 191 191 1191 6191 182 183 DEAAAA NLDAAA VVVVxx +1188 2328 0 0 8 8 88 188 1188 1188 1188 176 177 STAAAA OLDAAA AAAAxx +7676 2329 0 0 6 16 76 676 1676 2676 7676 152 153 GJAAAA PLDAAA HHHHxx +7073 2330 1 1 3 13 73 73 1073 2073 7073 146 147 BMAAAA QLDAAA OOOOxx +8019 2331 1 3 9 19 19 19 19 3019 8019 38 39 LWAAAA RLDAAA VVVVxx +4726 2332 0 2 6 6 26 726 726 4726 4726 52 53 UZAAAA SLDAAA AAAAxx +4648 2333 0 0 8 8 48 648 648 4648 4648 96 97 UWAAAA TLDAAA HHHHxx +3227 2334 1 3 7 7 27 227 1227 3227 3227 54 55 DUAAAA ULDAAA OOOOxx +7232 2335 0 0 2 12 32 232 1232 2232 7232 64 65 ESAAAA VLDAAA VVVVxx +9761 2336 1 1 1 1 61 761 1761 4761 9761 122 123 LLAAAA WLDAAA AAAAxx +3105 2337 1 1 5 5 5 105 1105 3105 3105 10 11 LPAAAA XLDAAA HHHHxx +5266 2338 0 2 6 6 66 266 1266 266 5266 132 133 OUAAAA YLDAAA OOOOxx +6788 2339 0 0 8 8 88 788 788 1788 6788 176 177 CBAAAA ZLDAAA VVVVxx +2442 2340 0 2 2 2 42 442 442 2442 2442 84 85 YPAAAA AMDAAA AAAAxx +8198 2341 0 2 8 18 98 198 198 3198 8198 196 197 IDAAAA BMDAAA HHHHxx +5806 2342 0 2 6 6 6 806 1806 806 5806 12 13 IPAAAA CMDAAA OOOOxx +8928 2343 0 0 8 8 28 928 928 3928 8928 56 57 KFAAAA DMDAAA VVVVxx +1657 2344 1 1 7 17 57 657 1657 1657 1657 114 115 TLAAAA EMDAAA AAAAxx +9164 2345 0 0 4 4 64 164 1164 4164 9164 128 129 MOAAAA FMDAAA HHHHxx +1851 2346 1 3 1 11 51 851 1851 1851 1851 102 103 FTAAAA GMDAAA OOOOxx +4744 2347 0 0 4 4 44 744 744 4744 4744 88 89 MAAAAA HMDAAA VVVVxx +8055 2348 1 3 5 15 55 55 55 3055 8055 110 111 VXAAAA IMDAAA 
AAAAxx +1533 2349 1 1 3 13 33 533 1533 1533 1533 66 67 ZGAAAA JMDAAA HHHHxx +1260 2350 0 0 0 0 60 260 1260 1260 1260 120 121 MWAAAA KMDAAA OOOOxx +1290 2351 0 2 0 10 90 290 1290 1290 1290 180 181 QXAAAA LMDAAA VVVVxx +297 2352 1 1 7 17 97 297 297 297 297 194 195 LLAAAA MMDAAA AAAAxx +4145 2353 1 1 5 5 45 145 145 4145 4145 90 91 LDAAAA NMDAAA HHHHxx +863 2354 1 3 3 3 63 863 863 863 863 126 127 FHAAAA OMDAAA OOOOxx +3423 2355 1 3 3 3 23 423 1423 3423 3423 46 47 RBAAAA PMDAAA VVVVxx +8750 2356 0 2 0 10 50 750 750 3750 8750 100 101 OYAAAA QMDAAA AAAAxx +3546 2357 0 2 6 6 46 546 1546 3546 3546 92 93 KGAAAA RMDAAA HHHHxx +3678 2358 0 2 8 18 78 678 1678 3678 3678 156 157 MLAAAA SMDAAA OOOOxx +5313 2359 1 1 3 13 13 313 1313 313 5313 26 27 JWAAAA TMDAAA VVVVxx +6233 2360 1 1 3 13 33 233 233 1233 6233 66 67 TFAAAA UMDAAA AAAAxx +5802 2361 0 2 2 2 2 802 1802 802 5802 4 5 EPAAAA VMDAAA HHHHxx +7059 2362 1 3 9 19 59 59 1059 2059 7059 118 119 NLAAAA WMDAAA OOOOxx +6481 2363 1 1 1 1 81 481 481 1481 6481 162 163 HPAAAA XMDAAA VVVVxx +1596 2364 0 0 6 16 96 596 1596 1596 1596 192 193 KJAAAA YMDAAA AAAAxx +8181 2365 1 1 1 1 81 181 181 3181 8181 162 163 RCAAAA ZMDAAA HHHHxx +5368 2366 0 0 8 8 68 368 1368 368 5368 136 137 MYAAAA ANDAAA OOOOxx +9416 2367 0 0 6 16 16 416 1416 4416 9416 32 33 EYAAAA BNDAAA VVVVxx +9521 2368 1 1 1 1 21 521 1521 4521 9521 42 43 FCAAAA CNDAAA AAAAxx +1042 2369 0 2 2 2 42 42 1042 1042 1042 84 85 COAAAA DNDAAA HHHHxx +4503 2370 1 3 3 3 3 503 503 4503 4503 6 7 FRAAAA ENDAAA OOOOxx +3023 2371 1 3 3 3 23 23 1023 3023 3023 46 47 HMAAAA FNDAAA VVVVxx +1976 2372 0 0 6 16 76 976 1976 1976 1976 152 153 AYAAAA GNDAAA AAAAxx +5610 2373 0 2 0 10 10 610 1610 610 5610 20 21 UHAAAA HNDAAA HHHHxx +7410 2374 0 2 0 10 10 410 1410 2410 7410 20 21 AZAAAA INDAAA OOOOxx +7872 2375 0 0 2 12 72 872 1872 2872 7872 144 145 UQAAAA JNDAAA VVVVxx +8591 2376 1 3 1 11 91 591 591 3591 8591 182 183 LSAAAA KNDAAA AAAAxx +1804 2377 0 0 4 4 4 804 1804 1804 1804 8 9 KRAAAA LNDAAA HHHHxx +5299 2378 1 3 9 19 99 299 1299 299 5299 198 199 VVAAAA MNDAAA OOOOxx +4695 2379 1 3 5 15 95 695 695 4695 4695 190 191 PYAAAA NNDAAA VVVVxx +2672 2380 0 0 2 12 72 672 672 2672 2672 144 145 UYAAAA ONDAAA AAAAxx +585 2381 1 1 5 5 85 585 585 585 585 170 171 NWAAAA PNDAAA HHHHxx +8622 2382 0 2 2 2 22 622 622 3622 8622 44 45 QTAAAA QNDAAA OOOOxx +3780 2383 0 0 0 0 80 780 1780 3780 3780 160 161 KPAAAA RNDAAA VVVVxx +7941 2384 1 1 1 1 41 941 1941 2941 7941 82 83 LTAAAA SNDAAA AAAAxx +3305 2385 1 1 5 5 5 305 1305 3305 3305 10 11 DXAAAA TNDAAA HHHHxx +8653 2386 1 1 3 13 53 653 653 3653 8653 106 107 VUAAAA UNDAAA OOOOxx +5756 2387 0 0 6 16 56 756 1756 756 5756 112 113 KNAAAA VNDAAA VVVVxx +576 2388 0 0 6 16 76 576 576 576 576 152 153 EWAAAA WNDAAA AAAAxx +1915 2389 1 3 5 15 15 915 1915 1915 1915 30 31 RVAAAA XNDAAA HHHHxx +4627 2390 1 3 7 7 27 627 627 4627 4627 54 55 ZVAAAA YNDAAA OOOOxx +920 2391 0 0 0 0 20 920 920 920 920 40 41 KJAAAA ZNDAAA VVVVxx +2537 2392 1 1 7 17 37 537 537 2537 2537 74 75 PTAAAA AODAAA AAAAxx +50 2393 0 2 0 10 50 50 50 50 50 100 101 YBAAAA BODAAA HHHHxx +1313 2394 1 1 3 13 13 313 1313 1313 1313 26 27 NYAAAA CODAAA OOOOxx +8542 2395 0 2 2 2 42 542 542 3542 8542 84 85 OQAAAA DODAAA VVVVxx +6428 2396 0 0 8 8 28 428 428 1428 6428 56 57 GNAAAA EODAAA AAAAxx +4351 2397 1 3 1 11 51 351 351 4351 4351 102 103 JLAAAA FODAAA HHHHxx +2050 2398 0 2 0 10 50 50 50 2050 2050 100 101 WAAAAA GODAAA OOOOxx +5162 2399 0 2 2 2 62 162 1162 162 5162 124 125 OQAAAA HODAAA VVVVxx +8229 2400 1 1 9 9 29 229 229 3229 8229 58 59 NEAAAA IODAAA AAAAxx 
+7782 2401 0 2 2 2 82 782 1782 2782 7782 164 165 INAAAA JODAAA HHHHxx +1563 2402 1 3 3 3 63 563 1563 1563 1563 126 127 DIAAAA KODAAA OOOOxx +267 2403 1 3 7 7 67 267 267 267 267 134 135 HKAAAA LODAAA VVVVxx +5138 2404 0 2 8 18 38 138 1138 138 5138 76 77 QPAAAA MODAAA AAAAxx +7022 2405 0 2 2 2 22 22 1022 2022 7022 44 45 CKAAAA NODAAA HHHHxx +6705 2406 1 1 5 5 5 705 705 1705 6705 10 11 XXAAAA OODAAA OOOOxx +6190 2407 0 2 0 10 90 190 190 1190 6190 180 181 CEAAAA PODAAA VVVVxx +8226 2408 0 2 6 6 26 226 226 3226 8226 52 53 KEAAAA QODAAA AAAAxx +8882 2409 0 2 2 2 82 882 882 3882 8882 164 165 QDAAAA RODAAA HHHHxx +5181 2410 1 1 1 1 81 181 1181 181 5181 162 163 HRAAAA SODAAA OOOOxx +4598 2411 0 2 8 18 98 598 598 4598 4598 196 197 WUAAAA TODAAA VVVVxx +4882 2412 0 2 2 2 82 882 882 4882 4882 164 165 UFAAAA UODAAA AAAAxx +7490 2413 0 2 0 10 90 490 1490 2490 7490 180 181 CCAAAA VODAAA HHHHxx +5224 2414 0 0 4 4 24 224 1224 224 5224 48 49 YSAAAA WODAAA OOOOxx +2174 2415 0 2 4 14 74 174 174 2174 2174 148 149 QFAAAA XODAAA VVVVxx +3059 2416 1 3 9 19 59 59 1059 3059 3059 118 119 RNAAAA YODAAA AAAAxx +8790 2417 0 2 0 10 90 790 790 3790 8790 180 181 CAAAAA ZODAAA HHHHxx +2222 2418 0 2 2 2 22 222 222 2222 2222 44 45 MHAAAA APDAAA OOOOxx +5473 2419 1 1 3 13 73 473 1473 473 5473 146 147 NCAAAA BPDAAA VVVVxx +937 2420 1 1 7 17 37 937 937 937 937 74 75 BKAAAA CPDAAA AAAAxx +2975 2421 1 3 5 15 75 975 975 2975 2975 150 151 LKAAAA DPDAAA HHHHxx +9569 2422 1 1 9 9 69 569 1569 4569 9569 138 139 BEAAAA EPDAAA OOOOxx +3456 2423 0 0 6 16 56 456 1456 3456 3456 112 113 YCAAAA FPDAAA VVVVxx +6657 2424 1 1 7 17 57 657 657 1657 6657 114 115 BWAAAA GPDAAA AAAAxx +3776 2425 0 0 6 16 76 776 1776 3776 3776 152 153 GPAAAA HPDAAA HHHHxx +6072 2426 0 0 2 12 72 72 72 1072 6072 144 145 OZAAAA IPDAAA OOOOxx +8129 2427 1 1 9 9 29 129 129 3129 8129 58 59 RAAAAA JPDAAA VVVVxx +1085 2428 1 1 5 5 85 85 1085 1085 1085 170 171 TPAAAA KPDAAA AAAAxx +2079 2429 1 3 9 19 79 79 79 2079 2079 158 159 ZBAAAA LPDAAA HHHHxx +1200 2430 0 0 0 0 0 200 1200 1200 1200 0 1 EUAAAA MPDAAA OOOOxx +3276 2431 0 0 6 16 76 276 1276 3276 3276 152 153 AWAAAA NPDAAA VVVVxx +2608 2432 0 0 8 8 8 608 608 2608 2608 16 17 IWAAAA OPDAAA AAAAxx +702 2433 0 2 2 2 2 702 702 702 702 4 5 ABAAAA PPDAAA HHHHxx +5750 2434 0 2 0 10 50 750 1750 750 5750 100 101 ENAAAA QPDAAA OOOOxx +2776 2435 0 0 6 16 76 776 776 2776 2776 152 153 UCAAAA RPDAAA VVVVxx +9151 2436 1 3 1 11 51 151 1151 4151 9151 102 103 ZNAAAA SPDAAA AAAAxx +3282 2437 0 2 2 2 82 282 1282 3282 3282 164 165 GWAAAA TPDAAA HHHHxx +408 2438 0 0 8 8 8 408 408 408 408 16 17 SPAAAA UPDAAA OOOOxx +3473 2439 1 1 3 13 73 473 1473 3473 3473 146 147 PDAAAA VPDAAA VVVVxx +7095 2440 1 3 5 15 95 95 1095 2095 7095 190 191 XMAAAA WPDAAA AAAAxx +3288 2441 0 0 8 8 88 288 1288 3288 3288 176 177 MWAAAA XPDAAA HHHHxx +8215 2442 1 3 5 15 15 215 215 3215 8215 30 31 ZDAAAA YPDAAA OOOOxx +6244 2443 0 0 4 4 44 244 244 1244 6244 88 89 EGAAAA ZPDAAA VVVVxx +8440 2444 0 0 0 0 40 440 440 3440 8440 80 81 QMAAAA AQDAAA AAAAxx +3800 2445 0 0 0 0 0 800 1800 3800 3800 0 1 EQAAAA BQDAAA HHHHxx +7279 2446 1 3 9 19 79 279 1279 2279 7279 158 159 ZTAAAA CQDAAA OOOOxx +9206 2447 0 2 6 6 6 206 1206 4206 9206 12 13 CQAAAA DQDAAA VVVVxx +6465 2448 1 1 5 5 65 465 465 1465 6465 130 131 ROAAAA EQDAAA AAAAxx +4127 2449 1 3 7 7 27 127 127 4127 4127 54 55 TCAAAA FQDAAA HHHHxx +7463 2450 1 3 3 3 63 463 1463 2463 7463 126 127 BBAAAA GQDAAA OOOOxx +5117 2451 1 1 7 17 17 117 1117 117 5117 34 35 VOAAAA HQDAAA VVVVxx +4715 2452 1 3 5 15 15 715 715 4715 4715 30 31 JZAAAA IQDAAA 
AAAAxx +2010 2453 0 2 0 10 10 10 10 2010 2010 20 21 IZAAAA JQDAAA HHHHxx +6486 2454 0 2 6 6 86 486 486 1486 6486 172 173 MPAAAA KQDAAA OOOOxx +6434 2455 0 2 4 14 34 434 434 1434 6434 68 69 MNAAAA LQDAAA VVVVxx +2151 2456 1 3 1 11 51 151 151 2151 2151 102 103 TEAAAA MQDAAA AAAAxx +4821 2457 1 1 1 1 21 821 821 4821 4821 42 43 LDAAAA NQDAAA HHHHxx +6507 2458 1 3 7 7 7 507 507 1507 6507 14 15 HQAAAA OQDAAA OOOOxx +8741 2459 1 1 1 1 41 741 741 3741 8741 82 83 FYAAAA PQDAAA VVVVxx +6846 2460 0 2 6 6 46 846 846 1846 6846 92 93 IDAAAA QQDAAA AAAAxx +4525 2461 1 1 5 5 25 525 525 4525 4525 50 51 BSAAAA RQDAAA HHHHxx +8299 2462 1 3 9 19 99 299 299 3299 8299 198 199 FHAAAA SQDAAA OOOOxx +5465 2463 1 1 5 5 65 465 1465 465 5465 130 131 FCAAAA TQDAAA VVVVxx +7206 2464 0 2 6 6 6 206 1206 2206 7206 12 13 ERAAAA UQDAAA AAAAxx +2616 2465 0 0 6 16 16 616 616 2616 2616 32 33 QWAAAA VQDAAA HHHHxx +4440 2466 0 0 0 0 40 440 440 4440 4440 80 81 UOAAAA WQDAAA OOOOxx +6109 2467 1 1 9 9 9 109 109 1109 6109 18 19 ZAAAAA XQDAAA VVVVxx +7905 2468 1 1 5 5 5 905 1905 2905 7905 10 11 BSAAAA YQDAAA AAAAxx +6498 2469 0 2 8 18 98 498 498 1498 6498 196 197 YPAAAA ZQDAAA HHHHxx +2034 2470 0 2 4 14 34 34 34 2034 2034 68 69 GAAAAA ARDAAA OOOOxx +7693 2471 1 1 3 13 93 693 1693 2693 7693 186 187 XJAAAA BRDAAA VVVVxx +7511 2472 1 3 1 11 11 511 1511 2511 7511 22 23 XCAAAA CRDAAA AAAAxx +7531 2473 1 3 1 11 31 531 1531 2531 7531 62 63 RDAAAA DRDAAA HHHHxx +6869 2474 1 1 9 9 69 869 869 1869 6869 138 139 FEAAAA ERDAAA OOOOxx +2763 2475 1 3 3 3 63 763 763 2763 2763 126 127 HCAAAA FRDAAA VVVVxx +575 2476 1 3 5 15 75 575 575 575 575 150 151 DWAAAA GRDAAA AAAAxx +8953 2477 1 1 3 13 53 953 953 3953 8953 106 107 JGAAAA HRDAAA HHHHxx +5833 2478 1 1 3 13 33 833 1833 833 5833 66 67 JQAAAA IRDAAA OOOOxx +9035 2479 1 3 5 15 35 35 1035 4035 9035 70 71 NJAAAA JRDAAA VVVVxx +9123 2480 1 3 3 3 23 123 1123 4123 9123 46 47 XMAAAA KRDAAA AAAAxx +206 2481 0 2 6 6 6 206 206 206 206 12 13 YHAAAA LRDAAA HHHHxx +4155 2482 1 3 5 15 55 155 155 4155 4155 110 111 VDAAAA MRDAAA OOOOxx +532 2483 0 0 2 12 32 532 532 532 532 64 65 MUAAAA NRDAAA VVVVxx +1370 2484 0 2 0 10 70 370 1370 1370 1370 140 141 SAAAAA ORDAAA AAAAxx +7656 2485 0 0 6 16 56 656 1656 2656 7656 112 113 MIAAAA PRDAAA HHHHxx +7735 2486 1 3 5 15 35 735 1735 2735 7735 70 71 NLAAAA QRDAAA OOOOxx +2118 2487 0 2 8 18 18 118 118 2118 2118 36 37 MDAAAA RRDAAA VVVVxx +6914 2488 0 2 4 14 14 914 914 1914 6914 28 29 YFAAAA SRDAAA AAAAxx +6277 2489 1 1 7 17 77 277 277 1277 6277 154 155 LHAAAA TRDAAA HHHHxx +6347 2490 1 3 7 7 47 347 347 1347 6347 94 95 DKAAAA URDAAA OOOOxx +4030 2491 0 2 0 10 30 30 30 4030 4030 60 61 AZAAAA VRDAAA VVVVxx +9673 2492 1 1 3 13 73 673 1673 4673 9673 146 147 BIAAAA WRDAAA AAAAxx +2015 2493 1 3 5 15 15 15 15 2015 2015 30 31 NZAAAA XRDAAA HHHHxx +1317 2494 1 1 7 17 17 317 1317 1317 1317 34 35 RYAAAA YRDAAA OOOOxx +404 2495 0 0 4 4 4 404 404 404 404 8 9 OPAAAA ZRDAAA VVVVxx +1604 2496 0 0 4 4 4 604 1604 1604 1604 8 9 SJAAAA ASDAAA AAAAxx +1912 2497 0 0 2 12 12 912 1912 1912 1912 24 25 OVAAAA BSDAAA HHHHxx +5727 2498 1 3 7 7 27 727 1727 727 5727 54 55 HMAAAA CSDAAA OOOOxx +4538 2499 0 2 8 18 38 538 538 4538 4538 76 77 OSAAAA DSDAAA VVVVxx +6868 2500 0 0 8 8 68 868 868 1868 6868 136 137 EEAAAA ESDAAA AAAAxx +9801 2501 1 1 1 1 1 801 1801 4801 9801 2 3 ZMAAAA FSDAAA HHHHxx +1781 2502 1 1 1 1 81 781 1781 1781 1781 162 163 NQAAAA GSDAAA OOOOxx +7061 2503 1 1 1 1 61 61 1061 2061 7061 122 123 PLAAAA HSDAAA VVVVxx +2412 2504 0 0 2 12 12 412 412 2412 2412 24 25 UOAAAA ISDAAA AAAAxx +9191 2505 1 3 1 
11 91 191 1191 4191 9191 182 183 NPAAAA JSDAAA HHHHxx +1958 2506 0 2 8 18 58 958 1958 1958 1958 116 117 IXAAAA KSDAAA OOOOxx +2203 2507 1 3 3 3 3 203 203 2203 2203 6 7 TGAAAA LSDAAA VVVVxx +9104 2508 0 0 4 4 4 104 1104 4104 9104 8 9 EMAAAA MSDAAA AAAAxx +3837 2509 1 1 7 17 37 837 1837 3837 3837 74 75 PRAAAA NSDAAA HHHHxx +7055 2510 1 3 5 15 55 55 1055 2055 7055 110 111 JLAAAA OSDAAA OOOOxx +4612 2511 0 0 2 12 12 612 612 4612 4612 24 25 KVAAAA PSDAAA VVVVxx +6420 2512 0 0 0 0 20 420 420 1420 6420 40 41 YMAAAA QSDAAA AAAAxx +613 2513 1 1 3 13 13 613 613 613 613 26 27 PXAAAA RSDAAA HHHHxx +1691 2514 1 3 1 11 91 691 1691 1691 1691 182 183 BNAAAA SSDAAA OOOOxx +33 2515 1 1 3 13 33 33 33 33 33 66 67 HBAAAA TSDAAA VVVVxx +875 2516 1 3 5 15 75 875 875 875 875 150 151 RHAAAA USDAAA AAAAxx +9030 2517 0 2 0 10 30 30 1030 4030 9030 60 61 IJAAAA VSDAAA HHHHxx +4285 2518 1 1 5 5 85 285 285 4285 4285 170 171 VIAAAA WSDAAA OOOOxx +6236 2519 0 0 6 16 36 236 236 1236 6236 72 73 WFAAAA XSDAAA VVVVxx +4702 2520 0 2 2 2 2 702 702 4702 4702 4 5 WYAAAA YSDAAA AAAAxx +3441 2521 1 1 1 1 41 441 1441 3441 3441 82 83 JCAAAA ZSDAAA HHHHxx +2150 2522 0 2 0 10 50 150 150 2150 2150 100 101 SEAAAA ATDAAA OOOOxx +1852 2523 0 0 2 12 52 852 1852 1852 1852 104 105 GTAAAA BTDAAA VVVVxx +7713 2524 1 1 3 13 13 713 1713 2713 7713 26 27 RKAAAA CTDAAA AAAAxx +6849 2525 1 1 9 9 49 849 849 1849 6849 98 99 LDAAAA DTDAAA HHHHxx +3425 2526 1 1 5 5 25 425 1425 3425 3425 50 51 TBAAAA ETDAAA OOOOxx +4681 2527 1 1 1 1 81 681 681 4681 4681 162 163 BYAAAA FTDAAA VVVVxx +1134 2528 0 2 4 14 34 134 1134 1134 1134 68 69 QRAAAA GTDAAA AAAAxx +7462 2529 0 2 2 2 62 462 1462 2462 7462 124 125 ABAAAA HTDAAA HHHHxx +2148 2530 0 0 8 8 48 148 148 2148 2148 96 97 QEAAAA ITDAAA OOOOxx +5921 2531 1 1 1 1 21 921 1921 921 5921 42 43 TTAAAA JTDAAA VVVVxx +118 2532 0 2 8 18 18 118 118 118 118 36 37 OEAAAA KTDAAA AAAAxx +3065 2533 1 1 5 5 65 65 1065 3065 3065 130 131 XNAAAA LTDAAA HHHHxx +6590 2534 0 2 0 10 90 590 590 1590 6590 180 181 MTAAAA MTDAAA OOOOxx +4993 2535 1 1 3 13 93 993 993 4993 4993 186 187 BKAAAA NTDAAA VVVVxx +6818 2536 0 2 8 18 18 818 818 1818 6818 36 37 GCAAAA OTDAAA AAAAxx +1449 2537 1 1 9 9 49 449 1449 1449 1449 98 99 TDAAAA PTDAAA HHHHxx +2039 2538 1 3 9 19 39 39 39 2039 2039 78 79 LAAAAA QTDAAA OOOOxx +2524 2539 0 0 4 4 24 524 524 2524 2524 48 49 CTAAAA RTDAAA VVVVxx +1481 2540 1 1 1 1 81 481 1481 1481 1481 162 163 ZEAAAA STDAAA AAAAxx +6984 2541 0 0 4 4 84 984 984 1984 6984 168 169 QIAAAA TTDAAA HHHHxx +3960 2542 0 0 0 0 60 960 1960 3960 3960 120 121 IWAAAA UTDAAA OOOOxx +1983 2543 1 3 3 3 83 983 1983 1983 1983 166 167 HYAAAA VTDAAA VVVVxx +6379 2544 1 3 9 19 79 379 379 1379 6379 158 159 JLAAAA WTDAAA AAAAxx +8975 2545 1 3 5 15 75 975 975 3975 8975 150 151 FHAAAA XTDAAA HHHHxx +1102 2546 0 2 2 2 2 102 1102 1102 1102 4 5 KQAAAA YTDAAA OOOOxx +2517 2547 1 1 7 17 17 517 517 2517 2517 34 35 VSAAAA ZTDAAA VVVVxx +712 2548 0 0 2 12 12 712 712 712 712 24 25 KBAAAA AUDAAA AAAAxx +5419 2549 1 3 9 19 19 419 1419 419 5419 38 39 LAAAAA BUDAAA HHHHxx +723 2550 1 3 3 3 23 723 723 723 723 46 47 VBAAAA CUDAAA OOOOxx +8057 2551 1 1 7 17 57 57 57 3057 8057 114 115 XXAAAA DUDAAA VVVVxx +7471 2552 1 3 1 11 71 471 1471 2471 7471 142 143 JBAAAA EUDAAA AAAAxx +8855 2553 1 3 5 15 55 855 855 3855 8855 110 111 PCAAAA FUDAAA HHHHxx +5074 2554 0 2 4 14 74 74 1074 74 5074 148 149 ENAAAA GUDAAA OOOOxx +7139 2555 1 3 9 19 39 139 1139 2139 7139 78 79 POAAAA HUDAAA VVVVxx +3833 2556 1 1 3 13 33 833 1833 3833 3833 66 67 LRAAAA IUDAAA AAAAxx +5186 2557 0 2 6 6 86 186 1186 
186 5186 172 173 MRAAAA JUDAAA HHHHxx +9436 2558 0 0 6 16 36 436 1436 4436 9436 72 73 YYAAAA KUDAAA OOOOxx +8859 2559 1 3 9 19 59 859 859 3859 8859 118 119 TCAAAA LUDAAA VVVVxx +6943 2560 1 3 3 3 43 943 943 1943 6943 86 87 BHAAAA MUDAAA AAAAxx +2315 2561 1 3 5 15 15 315 315 2315 2315 30 31 BLAAAA NUDAAA HHHHxx +1394 2562 0 2 4 14 94 394 1394 1394 1394 188 189 QBAAAA OUDAAA OOOOxx +8863 2563 1 3 3 3 63 863 863 3863 8863 126 127 XCAAAA PUDAAA VVVVxx +8812 2564 0 0 2 12 12 812 812 3812 8812 24 25 YAAAAA QUDAAA AAAAxx +7498 2565 0 2 8 18 98 498 1498 2498 7498 196 197 KCAAAA RUDAAA HHHHxx +8962 2566 0 2 2 2 62 962 962 3962 8962 124 125 SGAAAA SUDAAA OOOOxx +2533 2567 1 1 3 13 33 533 533 2533 2533 66 67 LTAAAA TUDAAA VVVVxx +8188 2568 0 0 8 8 88 188 188 3188 8188 176 177 YCAAAA UUDAAA AAAAxx +6137 2569 1 1 7 17 37 137 137 1137 6137 74 75 BCAAAA VUDAAA HHHHxx +974 2570 0 2 4 14 74 974 974 974 974 148 149 MLAAAA WUDAAA OOOOxx +2751 2571 1 3 1 11 51 751 751 2751 2751 102 103 VBAAAA XUDAAA VVVVxx +4975 2572 1 3 5 15 75 975 975 4975 4975 150 151 JJAAAA YUDAAA AAAAxx +3411 2573 1 3 1 11 11 411 1411 3411 3411 22 23 FBAAAA ZUDAAA HHHHxx +3143 2574 1 3 3 3 43 143 1143 3143 3143 86 87 XQAAAA AVDAAA OOOOxx +8011 2575 1 3 1 11 11 11 11 3011 8011 22 23 DWAAAA BVDAAA VVVVxx +988 2576 0 0 8 8 88 988 988 988 988 176 177 AMAAAA CVDAAA AAAAxx +4289 2577 1 1 9 9 89 289 289 4289 4289 178 179 ZIAAAA DVDAAA HHHHxx +8105 2578 1 1 5 5 5 105 105 3105 8105 10 11 TZAAAA EVDAAA OOOOxx +9885 2579 1 1 5 5 85 885 1885 4885 9885 170 171 FQAAAA FVDAAA VVVVxx +1002 2580 0 2 2 2 2 2 1002 1002 1002 4 5 OMAAAA GVDAAA AAAAxx +5827 2581 1 3 7 7 27 827 1827 827 5827 54 55 DQAAAA HVDAAA HHHHxx +1228 2582 0 0 8 8 28 228 1228 1228 1228 56 57 GVAAAA IVDAAA OOOOxx +6352 2583 0 0 2 12 52 352 352 1352 6352 104 105 IKAAAA JVDAAA VVVVxx +8868 2584 0 0 8 8 68 868 868 3868 8868 136 137 CDAAAA KVDAAA AAAAxx +3643 2585 1 3 3 3 43 643 1643 3643 3643 86 87 DKAAAA LVDAAA HHHHxx +1468 2586 0 0 8 8 68 468 1468 1468 1468 136 137 MEAAAA MVDAAA OOOOxx +8415 2587 1 3 5 15 15 415 415 3415 8415 30 31 RLAAAA NVDAAA VVVVxx +9631 2588 1 3 1 11 31 631 1631 4631 9631 62 63 LGAAAA OVDAAA AAAAxx +7408 2589 0 0 8 8 8 408 1408 2408 7408 16 17 YYAAAA PVDAAA HHHHxx +1934 2590 0 2 4 14 34 934 1934 1934 1934 68 69 KWAAAA QVDAAA OOOOxx +996 2591 0 0 6 16 96 996 996 996 996 192 193 IMAAAA RVDAAA VVVVxx +8027 2592 1 3 7 7 27 27 27 3027 8027 54 55 TWAAAA SVDAAA AAAAxx +8464 2593 0 0 4 4 64 464 464 3464 8464 128 129 ONAAAA TVDAAA HHHHxx +5007 2594 1 3 7 7 7 7 1007 7 5007 14 15 PKAAAA UVDAAA OOOOxx +8356 2595 0 0 6 16 56 356 356 3356 8356 112 113 KJAAAA VVDAAA VVVVxx +4579 2596 1 3 9 19 79 579 579 4579 4579 158 159 DUAAAA WVDAAA AAAAxx +8513 2597 1 1 3 13 13 513 513 3513 8513 26 27 LPAAAA XVDAAA HHHHxx +383 2598 1 3 3 3 83 383 383 383 383 166 167 TOAAAA YVDAAA OOOOxx +9304 2599 0 0 4 4 4 304 1304 4304 9304 8 9 WTAAAA ZVDAAA VVVVxx +7224 2600 0 0 4 4 24 224 1224 2224 7224 48 49 WRAAAA AWDAAA AAAAxx +6023 2601 1 3 3 3 23 23 23 1023 6023 46 47 RXAAAA BWDAAA HHHHxx +2746 2602 0 2 6 6 46 746 746 2746 2746 92 93 QBAAAA CWDAAA OOOOxx +137 2603 1 1 7 17 37 137 137 137 137 74 75 HFAAAA DWDAAA VVVVxx +9441 2604 1 1 1 1 41 441 1441 4441 9441 82 83 DZAAAA EWDAAA AAAAxx +3690 2605 0 2 0 10 90 690 1690 3690 3690 180 181 YLAAAA FWDAAA HHHHxx +913 2606 1 1 3 13 13 913 913 913 913 26 27 DJAAAA GWDAAA OOOOxx +1768 2607 0 0 8 8 68 768 1768 1768 1768 136 137 AQAAAA HWDAAA VVVVxx +8492 2608 0 0 2 12 92 492 492 3492 8492 184 185 QOAAAA IWDAAA AAAAxx +8083 2609 1 3 3 3 83 83 83 3083 8083 166 167 
XYAAAA JWDAAA HHHHxx +4609 2610 1 1 9 9 9 609 609 4609 4609 18 19 HVAAAA KWDAAA OOOOxx +7520 2611 0 0 0 0 20 520 1520 2520 7520 40 41 GDAAAA LWDAAA VVVVxx +4231 2612 1 3 1 11 31 231 231 4231 4231 62 63 TGAAAA MWDAAA AAAAxx +6022 2613 0 2 2 2 22 22 22 1022 6022 44 45 QXAAAA NWDAAA HHHHxx +9784 2614 0 0 4 4 84 784 1784 4784 9784 168 169 IMAAAA OWDAAA OOOOxx +1343 2615 1 3 3 3 43 343 1343 1343 1343 86 87 RZAAAA PWDAAA VVVVxx +7549 2616 1 1 9 9 49 549 1549 2549 7549 98 99 JEAAAA QWDAAA AAAAxx +269 2617 1 1 9 9 69 269 269 269 269 138 139 JKAAAA RWDAAA HHHHxx +1069 2618 1 1 9 9 69 69 1069 1069 1069 138 139 DPAAAA SWDAAA OOOOxx +4610 2619 0 2 0 10 10 610 610 4610 4610 20 21 IVAAAA TWDAAA VVVVxx +482 2620 0 2 2 2 82 482 482 482 482 164 165 OSAAAA UWDAAA AAAAxx +3025 2621 1 1 5 5 25 25 1025 3025 3025 50 51 JMAAAA VWDAAA HHHHxx +7914 2622 0 2 4 14 14 914 1914 2914 7914 28 29 KSAAAA WWDAAA OOOOxx +3198 2623 0 2 8 18 98 198 1198 3198 3198 196 197 ATAAAA XWDAAA VVVVxx +1187 2624 1 3 7 7 87 187 1187 1187 1187 174 175 RTAAAA YWDAAA AAAAxx +4707 2625 1 3 7 7 7 707 707 4707 4707 14 15 BZAAAA ZWDAAA HHHHxx +8279 2626 1 3 9 19 79 279 279 3279 8279 158 159 LGAAAA AXDAAA OOOOxx +6127 2627 1 3 7 7 27 127 127 1127 6127 54 55 RBAAAA BXDAAA VVVVxx +1305 2628 1 1 5 5 5 305 1305 1305 1305 10 11 FYAAAA CXDAAA AAAAxx +4804 2629 0 0 4 4 4 804 804 4804 4804 8 9 UCAAAA DXDAAA HHHHxx +6069 2630 1 1 9 9 69 69 69 1069 6069 138 139 LZAAAA EXDAAA OOOOxx +9229 2631 1 1 9 9 29 229 1229 4229 9229 58 59 ZQAAAA FXDAAA VVVVxx +4703 2632 1 3 3 3 3 703 703 4703 4703 6 7 XYAAAA GXDAAA AAAAxx +6410 2633 0 2 0 10 10 410 410 1410 6410 20 21 OMAAAA HXDAAA HHHHxx +944 2634 0 0 4 4 44 944 944 944 944 88 89 IKAAAA IXDAAA OOOOxx +3744 2635 0 0 4 4 44 744 1744 3744 3744 88 89 AOAAAA JXDAAA VVVVxx +1127 2636 1 3 7 7 27 127 1127 1127 1127 54 55 JRAAAA KXDAAA AAAAxx +6693 2637 1 1 3 13 93 693 693 1693 6693 186 187 LXAAAA LXDAAA HHHHxx +583 2638 1 3 3 3 83 583 583 583 583 166 167 LWAAAA MXDAAA OOOOxx +2684 2639 0 0 4 4 84 684 684 2684 2684 168 169 GZAAAA NXDAAA VVVVxx +6192 2640 0 0 2 12 92 192 192 1192 6192 184 185 EEAAAA OXDAAA AAAAxx +4157 2641 1 1 7 17 57 157 157 4157 4157 114 115 XDAAAA PXDAAA HHHHxx +6470 2642 0 2 0 10 70 470 470 1470 6470 140 141 WOAAAA QXDAAA OOOOxx +8965 2643 1 1 5 5 65 965 965 3965 8965 130 131 VGAAAA RXDAAA VVVVxx +1433 2644 1 1 3 13 33 433 1433 1433 1433 66 67 DDAAAA SXDAAA AAAAxx +4570 2645 0 2 0 10 70 570 570 4570 4570 140 141 UTAAAA TXDAAA HHHHxx +1806 2646 0 2 6 6 6 806 1806 1806 1806 12 13 MRAAAA UXDAAA OOOOxx +1230 2647 0 2 0 10 30 230 1230 1230 1230 60 61 IVAAAA VXDAAA VVVVxx +2283 2648 1 3 3 3 83 283 283 2283 2283 166 167 VJAAAA WXDAAA AAAAxx +6456 2649 0 0 6 16 56 456 456 1456 6456 112 113 IOAAAA XXDAAA HHHHxx +7427 2650 1 3 7 7 27 427 1427 2427 7427 54 55 RZAAAA YXDAAA OOOOxx +8310 2651 0 2 0 10 10 310 310 3310 8310 20 21 QHAAAA ZXDAAA VVVVxx +8103 2652 1 3 3 3 3 103 103 3103 8103 6 7 RZAAAA AYDAAA AAAAxx +3947 2653 1 3 7 7 47 947 1947 3947 3947 94 95 VVAAAA BYDAAA HHHHxx +3414 2654 0 2 4 14 14 414 1414 3414 3414 28 29 IBAAAA CYDAAA OOOOxx +2043 2655 1 3 3 3 43 43 43 2043 2043 86 87 PAAAAA DYDAAA VVVVxx +4393 2656 1 1 3 13 93 393 393 4393 4393 186 187 ZMAAAA EYDAAA AAAAxx +6664 2657 0 0 4 4 64 664 664 1664 6664 128 129 IWAAAA FYDAAA HHHHxx +4545 2658 1 1 5 5 45 545 545 4545 4545 90 91 VSAAAA GYDAAA OOOOxx +7637 2659 1 1 7 17 37 637 1637 2637 7637 74 75 THAAAA HYDAAA VVVVxx +1359 2660 1 3 9 19 59 359 1359 1359 1359 118 119 HAAAAA IYDAAA AAAAxx +5018 2661 0 2 8 18 18 18 1018 18 5018 36 37 ALAAAA JYDAAA HHHHxx 
+987 2662 1 3 7 7 87 987 987 987 987 174 175 ZLAAAA KYDAAA OOOOxx +1320 2663 0 0 0 0 20 320 1320 1320 1320 40 41 UYAAAA LYDAAA VVVVxx +9311 2664 1 3 1 11 11 311 1311 4311 9311 22 23 DUAAAA MYDAAA AAAAxx +7993 2665 1 1 3 13 93 993 1993 2993 7993 186 187 LVAAAA NYDAAA HHHHxx +7588 2666 0 0 8 8 88 588 1588 2588 7588 176 177 WFAAAA OYDAAA OOOOxx +5983 2667 1 3 3 3 83 983 1983 983 5983 166 167 DWAAAA PYDAAA VVVVxx +4070 2668 0 2 0 10 70 70 70 4070 4070 140 141 OAAAAA QYDAAA AAAAxx +8349 2669 1 1 9 9 49 349 349 3349 8349 98 99 DJAAAA RYDAAA HHHHxx +3810 2670 0 2 0 10 10 810 1810 3810 3810 20 21 OQAAAA SYDAAA OOOOxx +6948 2671 0 0 8 8 48 948 948 1948 6948 96 97 GHAAAA TYDAAA VVVVxx +7153 2672 1 1 3 13 53 153 1153 2153 7153 106 107 DPAAAA UYDAAA AAAAxx +5371 2673 1 3 1 11 71 371 1371 371 5371 142 143 PYAAAA VYDAAA HHHHxx +8316 2674 0 0 6 16 16 316 316 3316 8316 32 33 WHAAAA WYDAAA OOOOxx +5903 2675 1 3 3 3 3 903 1903 903 5903 6 7 BTAAAA XYDAAA VVVVxx +6718 2676 0 2 8 18 18 718 718 1718 6718 36 37 KYAAAA YYDAAA AAAAxx +4759 2677 1 3 9 19 59 759 759 4759 4759 118 119 BBAAAA ZYDAAA HHHHxx +2555 2678 1 3 5 15 55 555 555 2555 2555 110 111 HUAAAA AZDAAA OOOOxx +3457 2679 1 1 7 17 57 457 1457 3457 3457 114 115 ZCAAAA BZDAAA VVVVxx +9626 2680 0 2 6 6 26 626 1626 4626 9626 52 53 GGAAAA CZDAAA AAAAxx +2570 2681 0 2 0 10 70 570 570 2570 2570 140 141 WUAAAA DZDAAA HHHHxx +7964 2682 0 0 4 4 64 964 1964 2964 7964 128 129 IUAAAA EZDAAA OOOOxx +1543 2683 1 3 3 3 43 543 1543 1543 1543 86 87 JHAAAA FZDAAA VVVVxx +929 2684 1 1 9 9 29 929 929 929 929 58 59 TJAAAA GZDAAA AAAAxx +9244 2685 0 0 4 4 44 244 1244 4244 9244 88 89 ORAAAA HZDAAA HHHHxx +9210 2686 0 2 0 10 10 210 1210 4210 9210 20 21 GQAAAA IZDAAA OOOOxx +8334 2687 0 2 4 14 34 334 334 3334 8334 68 69 OIAAAA JZDAAA VVVVxx +9310 2688 0 2 0 10 10 310 1310 4310 9310 20 21 CUAAAA KZDAAA AAAAxx +5024 2689 0 0 4 4 24 24 1024 24 5024 48 49 GLAAAA LZDAAA HHHHxx +8794 2690 0 2 4 14 94 794 794 3794 8794 188 189 GAAAAA MZDAAA OOOOxx +4091 2691 1 3 1 11 91 91 91 4091 4091 182 183 JBAAAA NZDAAA VVVVxx +649 2692 1 1 9 9 49 649 649 649 649 98 99 ZYAAAA OZDAAA AAAAxx +8505 2693 1 1 5 5 5 505 505 3505 8505 10 11 DPAAAA PZDAAA HHHHxx +6652 2694 0 0 2 12 52 652 652 1652 6652 104 105 WVAAAA QZDAAA OOOOxx +8945 2695 1 1 5 5 45 945 945 3945 8945 90 91 BGAAAA RZDAAA VVVVxx +2095 2696 1 3 5 15 95 95 95 2095 2095 190 191 PCAAAA SZDAAA AAAAxx +8676 2697 0 0 6 16 76 676 676 3676 8676 152 153 SVAAAA TZDAAA HHHHxx +3994 2698 0 2 4 14 94 994 1994 3994 3994 188 189 QXAAAA UZDAAA OOOOxx +2859 2699 1 3 9 19 59 859 859 2859 2859 118 119 ZFAAAA VZDAAA VVVVxx +5403 2700 1 3 3 3 3 403 1403 403 5403 6 7 VZAAAA WZDAAA AAAAxx +3254 2701 0 2 4 14 54 254 1254 3254 3254 108 109 EVAAAA XZDAAA HHHHxx +7339 2702 1 3 9 19 39 339 1339 2339 7339 78 79 HWAAAA YZDAAA OOOOxx +7220 2703 0 0 0 0 20 220 1220 2220 7220 40 41 SRAAAA ZZDAAA VVVVxx +4154 2704 0 2 4 14 54 154 154 4154 4154 108 109 UDAAAA AAEAAA AAAAxx +7570 2705 0 2 0 10 70 570 1570 2570 7570 140 141 EFAAAA BAEAAA HHHHxx +2576 2706 0 0 6 16 76 576 576 2576 2576 152 153 CVAAAA CAEAAA OOOOxx +5764 2707 0 0 4 4 64 764 1764 764 5764 128 129 SNAAAA DAEAAA VVVVxx +4314 2708 0 2 4 14 14 314 314 4314 4314 28 29 YJAAAA EAEAAA AAAAxx +2274 2709 0 2 4 14 74 274 274 2274 2274 148 149 MJAAAA FAEAAA HHHHxx +9756 2710 0 0 6 16 56 756 1756 4756 9756 112 113 GLAAAA GAEAAA OOOOxx +8274 2711 0 2 4 14 74 274 274 3274 8274 148 149 GGAAAA HAEAAA VVVVxx +1289 2712 1 1 9 9 89 289 1289 1289 1289 178 179 PXAAAA IAEAAA AAAAxx +7335 2713 1 3 5 15 35 335 1335 2335 7335 70 71 DWAAAA 
JAEAAA HHHHxx +5351 2714 1 3 1 11 51 351 1351 351 5351 102 103 VXAAAA KAEAAA OOOOxx +8978 2715 0 2 8 18 78 978 978 3978 8978 156 157 IHAAAA LAEAAA VVVVxx +2 2716 0 2 2 2 2 2 2 2 2 4 5 CAAAAA MAEAAA AAAAxx +8906 2717 0 2 6 6 6 906 906 3906 8906 12 13 OEAAAA NAEAAA HHHHxx +6388 2718 0 0 8 8 88 388 388 1388 6388 176 177 SLAAAA OAEAAA OOOOxx +5675 2719 1 3 5 15 75 675 1675 675 5675 150 151 HKAAAA PAEAAA VVVVxx +255 2720 1 3 5 15 55 255 255 255 255 110 111 VJAAAA QAEAAA AAAAxx +9538 2721 0 2 8 18 38 538 1538 4538 9538 76 77 WCAAAA RAEAAA HHHHxx +1480 2722 0 0 0 0 80 480 1480 1480 1480 160 161 YEAAAA SAEAAA OOOOxx +4015 2723 1 3 5 15 15 15 15 4015 4015 30 31 LYAAAA TAEAAA VVVVxx +5166 2724 0 2 6 6 66 166 1166 166 5166 132 133 SQAAAA UAEAAA AAAAxx +91 2725 1 3 1 11 91 91 91 91 91 182 183 NDAAAA VAEAAA HHHHxx +2958 2726 0 2 8 18 58 958 958 2958 2958 116 117 UJAAAA WAEAAA OOOOxx +9131 2727 1 3 1 11 31 131 1131 4131 9131 62 63 FNAAAA XAEAAA VVVVxx +3944 2728 0 0 4 4 44 944 1944 3944 3944 88 89 SVAAAA YAEAAA AAAAxx +4514 2729 0 2 4 14 14 514 514 4514 4514 28 29 QRAAAA ZAEAAA HHHHxx +5661 2730 1 1 1 1 61 661 1661 661 5661 122 123 TJAAAA ABEAAA OOOOxx +8724 2731 0 0 4 4 24 724 724 3724 8724 48 49 OXAAAA BBEAAA VVVVxx +6408 2732 0 0 8 8 8 408 408 1408 6408 16 17 MMAAAA CBEAAA AAAAxx +5013 2733 1 1 3 13 13 13 1013 13 5013 26 27 VKAAAA DBEAAA HHHHxx +6156 2734 0 0 6 16 56 156 156 1156 6156 112 113 UCAAAA EBEAAA OOOOxx +7350 2735 0 2 0 10 50 350 1350 2350 7350 100 101 SWAAAA FBEAAA VVVVxx +9858 2736 0 2 8 18 58 858 1858 4858 9858 116 117 EPAAAA GBEAAA AAAAxx +895 2737 1 3 5 15 95 895 895 895 895 190 191 LIAAAA HBEAAA HHHHxx +8368 2738 0 0 8 8 68 368 368 3368 8368 136 137 WJAAAA IBEAAA OOOOxx +179 2739 1 3 9 19 79 179 179 179 179 158 159 XGAAAA JBEAAA VVVVxx +4048 2740 0 0 8 8 48 48 48 4048 4048 96 97 SZAAAA KBEAAA AAAAxx +3073 2741 1 1 3 13 73 73 1073 3073 3073 146 147 FOAAAA LBEAAA HHHHxx +321 2742 1 1 1 1 21 321 321 321 321 42 43 JMAAAA MBEAAA OOOOxx +5352 2743 0 0 2 12 52 352 1352 352 5352 104 105 WXAAAA NBEAAA VVVVxx +1940 2744 0 0 0 0 40 940 1940 1940 1940 80 81 QWAAAA OBEAAA AAAAxx +8803 2745 1 3 3 3 3 803 803 3803 8803 6 7 PAAAAA PBEAAA HHHHxx +791 2746 1 3 1 11 91 791 791 791 791 182 183 LEAAAA QBEAAA OOOOxx +9809 2747 1 1 9 9 9 809 1809 4809 9809 18 19 HNAAAA RBEAAA VVVVxx +5519 2748 1 3 9 19 19 519 1519 519 5519 38 39 HEAAAA SBEAAA AAAAxx +7420 2749 0 0 0 0 20 420 1420 2420 7420 40 41 KZAAAA TBEAAA HHHHxx +7541 2750 1 1 1 1 41 541 1541 2541 7541 82 83 BEAAAA UBEAAA OOOOxx +6538 2751 0 2 8 18 38 538 538 1538 6538 76 77 MRAAAA VBEAAA VVVVxx +710 2752 0 2 0 10 10 710 710 710 710 20 21 IBAAAA WBEAAA AAAAxx +9488 2753 0 0 8 8 88 488 1488 4488 9488 176 177 YAAAAA XBEAAA HHHHxx +3135 2754 1 3 5 15 35 135 1135 3135 3135 70 71 PQAAAA YBEAAA OOOOxx +4273 2755 1 1 3 13 73 273 273 4273 4273 146 147 JIAAAA ZBEAAA VVVVxx +629 2756 1 1 9 9 29 629 629 629 629 58 59 FYAAAA ACEAAA AAAAxx +9167 2757 1 3 7 7 67 167 1167 4167 9167 134 135 POAAAA BCEAAA HHHHxx +751 2758 1 3 1 11 51 751 751 751 751 102 103 XCAAAA CCEAAA OOOOxx +1126 2759 0 2 6 6 26 126 1126 1126 1126 52 53 IRAAAA DCEAAA VVVVxx +3724 2760 0 0 4 4 24 724 1724 3724 3724 48 49 GNAAAA ECEAAA AAAAxx +1789 2761 1 1 9 9 89 789 1789 1789 1789 178 179 VQAAAA FCEAAA HHHHxx +792 2762 0 0 2 12 92 792 792 792 792 184 185 MEAAAA GCEAAA OOOOxx +2771 2763 1 3 1 11 71 771 771 2771 2771 142 143 PCAAAA HCEAAA VVVVxx +4313 2764 1 1 3 13 13 313 313 4313 4313 26 27 XJAAAA ICEAAA AAAAxx +9312 2765 0 0 2 12 12 312 1312 4312 9312 24 25 EUAAAA JCEAAA HHHHxx +955 2766 1 3 5 15 55 
955 955 955 955 110 111 TKAAAA KCEAAA OOOOxx +6382 2767 0 2 2 2 82 382 382 1382 6382 164 165 MLAAAA LCEAAA VVVVxx +7875 2768 1 3 5 15 75 875 1875 2875 7875 150 151 XQAAAA MCEAAA AAAAxx +7491 2769 1 3 1 11 91 491 1491 2491 7491 182 183 DCAAAA NCEAAA HHHHxx +8193 2770 1 1 3 13 93 193 193 3193 8193 186 187 DDAAAA OCEAAA OOOOxx +968 2771 0 0 8 8 68 968 968 968 968 136 137 GLAAAA PCEAAA VVVVxx +4951 2772 1 3 1 11 51 951 951 4951 4951 102 103 LIAAAA QCEAAA AAAAxx +2204 2773 0 0 4 4 4 204 204 2204 2204 8 9 UGAAAA RCEAAA HHHHxx +2066 2774 0 2 6 6 66 66 66 2066 2066 132 133 MBAAAA SCEAAA OOOOxx +2631 2775 1 3 1 11 31 631 631 2631 2631 62 63 FXAAAA TCEAAA VVVVxx +8947 2776 1 3 7 7 47 947 947 3947 8947 94 95 DGAAAA UCEAAA AAAAxx +8033 2777 1 1 3 13 33 33 33 3033 8033 66 67 ZWAAAA VCEAAA HHHHxx +6264 2778 0 0 4 4 64 264 264 1264 6264 128 129 YGAAAA WCEAAA OOOOxx +7778 2779 0 2 8 18 78 778 1778 2778 7778 156 157 ENAAAA XCEAAA VVVVxx +9701 2780 1 1 1 1 1 701 1701 4701 9701 2 3 DJAAAA YCEAAA AAAAxx +5091 2781 1 3 1 11 91 91 1091 91 5091 182 183 VNAAAA ZCEAAA HHHHxx +7577 2782 1 1 7 17 77 577 1577 2577 7577 154 155 LFAAAA ADEAAA OOOOxx +3345 2783 1 1 5 5 45 345 1345 3345 3345 90 91 RYAAAA BDEAAA VVVVxx +7329 2784 1 1 9 9 29 329 1329 2329 7329 58 59 XVAAAA CDEAAA AAAAxx +7551 2785 1 3 1 11 51 551 1551 2551 7551 102 103 LEAAAA DDEAAA HHHHxx +6207 2786 1 3 7 7 7 207 207 1207 6207 14 15 TEAAAA EDEAAA OOOOxx +8664 2787 0 0 4 4 64 664 664 3664 8664 128 129 GVAAAA FDEAAA VVVVxx +8394 2788 0 2 4 14 94 394 394 3394 8394 188 189 WKAAAA GDEAAA AAAAxx +7324 2789 0 0 4 4 24 324 1324 2324 7324 48 49 SVAAAA HDEAAA HHHHxx +2713 2790 1 1 3 13 13 713 713 2713 2713 26 27 JAAAAA IDEAAA OOOOxx +2230 2791 0 2 0 10 30 230 230 2230 2230 60 61 UHAAAA JDEAAA VVVVxx +9211 2792 1 3 1 11 11 211 1211 4211 9211 22 23 HQAAAA KDEAAA AAAAxx +1296 2793 0 0 6 16 96 296 1296 1296 1296 192 193 WXAAAA LDEAAA HHHHxx +8104 2794 0 0 4 4 4 104 104 3104 8104 8 9 SZAAAA MDEAAA OOOOxx +6916 2795 0 0 6 16 16 916 916 1916 6916 32 33 AGAAAA NDEAAA VVVVxx +2208 2796 0 0 8 8 8 208 208 2208 2208 16 17 YGAAAA ODEAAA AAAAxx +3935 2797 1 3 5 15 35 935 1935 3935 3935 70 71 JVAAAA PDEAAA HHHHxx +7814 2798 0 2 4 14 14 814 1814 2814 7814 28 29 OOAAAA QDEAAA OOOOxx +6508 2799 0 0 8 8 8 508 508 1508 6508 16 17 IQAAAA RDEAAA VVVVxx +1703 2800 1 3 3 3 3 703 1703 1703 1703 6 7 NNAAAA SDEAAA AAAAxx +5640 2801 0 0 0 0 40 640 1640 640 5640 80 81 YIAAAA TDEAAA HHHHxx +6417 2802 1 1 7 17 17 417 417 1417 6417 34 35 VMAAAA UDEAAA OOOOxx +1713 2803 1 1 3 13 13 713 1713 1713 1713 26 27 XNAAAA VDEAAA VVVVxx +5309 2804 1 1 9 9 9 309 1309 309 5309 18 19 FWAAAA WDEAAA AAAAxx +4364 2805 0 0 4 4 64 364 364 4364 4364 128 129 WLAAAA XDEAAA HHHHxx +619 2806 1 3 9 19 19 619 619 619 619 38 39 VXAAAA YDEAAA OOOOxx +9498 2807 0 2 8 18 98 498 1498 4498 9498 196 197 IBAAAA ZDEAAA VVVVxx +2804 2808 0 0 4 4 4 804 804 2804 2804 8 9 WDAAAA AEEAAA AAAAxx +2220 2809 0 0 0 0 20 220 220 2220 2220 40 41 KHAAAA BEEAAA HHHHxx +9542 2810 0 2 2 2 42 542 1542 4542 9542 84 85 ADAAAA CEEAAA OOOOxx +3349 2811 1 1 9 9 49 349 1349 3349 3349 98 99 VYAAAA DEEAAA VVVVxx +9198 2812 0 2 8 18 98 198 1198 4198 9198 196 197 UPAAAA EEEAAA AAAAxx +2727 2813 1 3 7 7 27 727 727 2727 2727 54 55 XAAAAA FEEAAA HHHHxx +3768 2814 0 0 8 8 68 768 1768 3768 3768 136 137 YOAAAA GEEAAA OOOOxx +2334 2815 0 2 4 14 34 334 334 2334 2334 68 69 ULAAAA HEEAAA VVVVxx +7770 2816 0 2 0 10 70 770 1770 2770 7770 140 141 WMAAAA IEEAAA AAAAxx +5963 2817 1 3 3 3 63 963 1963 963 5963 126 127 JVAAAA JEEAAA HHHHxx +4732 2818 0 0 2 12 32 732 732 4732 
4732 64 65 AAAAAA KEEAAA OOOOxx +2448 2819 0 0 8 8 48 448 448 2448 2448 96 97 EQAAAA LEEAAA VVVVxx +5998 2820 0 2 8 18 98 998 1998 998 5998 196 197 SWAAAA MEEAAA AAAAxx +8577 2821 1 1 7 17 77 577 577 3577 8577 154 155 XRAAAA NEEAAA HHHHxx +266 2822 0 2 6 6 66 266 266 266 266 132 133 GKAAAA OEEAAA OOOOxx +2169 2823 1 1 9 9 69 169 169 2169 2169 138 139 LFAAAA PEEAAA VVVVxx +8228 2824 0 0 8 8 28 228 228 3228 8228 56 57 MEAAAA QEEAAA AAAAxx +4813 2825 1 1 3 13 13 813 813 4813 4813 26 27 DDAAAA REEAAA HHHHxx +2769 2826 1 1 9 9 69 769 769 2769 2769 138 139 NCAAAA SEEAAA OOOOxx +8382 2827 0 2 2 2 82 382 382 3382 8382 164 165 KKAAAA TEEAAA VVVVxx +1717 2828 1 1 7 17 17 717 1717 1717 1717 34 35 BOAAAA UEEAAA AAAAxx +7178 2829 0 2 8 18 78 178 1178 2178 7178 156 157 CQAAAA VEEAAA HHHHxx +9547 2830 1 3 7 7 47 547 1547 4547 9547 94 95 FDAAAA WEEAAA OOOOxx +8187 2831 1 3 7 7 87 187 187 3187 8187 174 175 XCAAAA XEEAAA VVVVxx +3168 2832 0 0 8 8 68 168 1168 3168 3168 136 137 WRAAAA YEEAAA AAAAxx +2180 2833 0 0 0 0 80 180 180 2180 2180 160 161 WFAAAA ZEEAAA HHHHxx +859 2834 1 3 9 19 59 859 859 859 859 118 119 BHAAAA AFEAAA OOOOxx +1554 2835 0 2 4 14 54 554 1554 1554 1554 108 109 UHAAAA BFEAAA VVVVxx +3567 2836 1 3 7 7 67 567 1567 3567 3567 134 135 FHAAAA CFEAAA AAAAxx +5985 2837 1 1 5 5 85 985 1985 985 5985 170 171 FWAAAA DFEAAA HHHHxx +1 2838 1 1 1 1 1 1 1 1 1 2 3 BAAAAA EFEAAA OOOOxx +5937 2839 1 1 7 17 37 937 1937 937 5937 74 75 JUAAAA FFEAAA VVVVxx +7594 2840 0 2 4 14 94 594 1594 2594 7594 188 189 CGAAAA GFEAAA AAAAxx +3783 2841 1 3 3 3 83 783 1783 3783 3783 166 167 NPAAAA HFEAAA HHHHxx +6841 2842 1 1 1 1 41 841 841 1841 6841 82 83 DDAAAA IFEAAA OOOOxx +9694 2843 0 2 4 14 94 694 1694 4694 9694 188 189 WIAAAA JFEAAA VVVVxx +4322 2844 0 2 2 2 22 322 322 4322 4322 44 45 GKAAAA KFEAAA AAAAxx +6012 2845 0 0 2 12 12 12 12 1012 6012 24 25 GXAAAA LFEAAA HHHHxx +108 2846 0 0 8 8 8 108 108 108 108 16 17 EEAAAA MFEAAA OOOOxx +3396 2847 0 0 6 16 96 396 1396 3396 3396 192 193 QAAAAA NFEAAA VVVVxx +8643 2848 1 3 3 3 43 643 643 3643 8643 86 87 LUAAAA OFEAAA AAAAxx +6087 2849 1 3 7 7 87 87 87 1087 6087 174 175 DAAAAA PFEAAA HHHHxx +2629 2850 1 1 9 9 29 629 629 2629 2629 58 59 DXAAAA QFEAAA OOOOxx +3009 2851 1 1 9 9 9 9 1009 3009 3009 18 19 TLAAAA RFEAAA VVVVxx +438 2852 0 2 8 18 38 438 438 438 438 76 77 WQAAAA SFEAAA AAAAxx +2480 2853 0 0 0 0 80 480 480 2480 2480 160 161 KRAAAA TFEAAA HHHHxx +936 2854 0 0 6 16 36 936 936 936 936 72 73 AKAAAA UFEAAA OOOOxx +6 2855 0 2 6 6 6 6 6 6 6 12 13 GAAAAA VFEAAA VVVVxx +768 2856 0 0 8 8 68 768 768 768 768 136 137 ODAAAA WFEAAA AAAAxx +1564 2857 0 0 4 4 64 564 1564 1564 1564 128 129 EIAAAA XFEAAA HHHHxx +3236 2858 0 0 6 16 36 236 1236 3236 3236 72 73 MUAAAA YFEAAA OOOOxx +3932 2859 0 0 2 12 32 932 1932 3932 3932 64 65 GVAAAA ZFEAAA VVVVxx +8914 2860 0 2 4 14 14 914 914 3914 8914 28 29 WEAAAA AGEAAA AAAAxx +119 2861 1 3 9 19 19 119 119 119 119 38 39 PEAAAA BGEAAA HHHHxx +6034 2862 0 2 4 14 34 34 34 1034 6034 68 69 CYAAAA CGEAAA OOOOxx +5384 2863 0 0 4 4 84 384 1384 384 5384 168 169 CZAAAA DGEAAA VVVVxx +6885 2864 1 1 5 5 85 885 885 1885 6885 170 171 VEAAAA EGEAAA AAAAxx +232 2865 0 0 2 12 32 232 232 232 232 64 65 YIAAAA FGEAAA HHHHxx +1293 2866 1 1 3 13 93 293 1293 1293 1293 186 187 TXAAAA GGEAAA OOOOxx +9204 2867 0 0 4 4 4 204 1204 4204 9204 8 9 AQAAAA HGEAAA VVVVxx +527 2868 1 3 7 7 27 527 527 527 527 54 55 HUAAAA IGEAAA AAAAxx +6539 2869 1 3 9 19 39 539 539 1539 6539 78 79 NRAAAA JGEAAA HHHHxx +3679 2870 1 3 9 19 79 679 1679 3679 3679 158 159 NLAAAA KGEAAA OOOOxx +8282 2871 0 2 2 
2 82 282 282 3282 8282 164 165 OGAAAA LGEAAA VVVVxx +5027 2872 1 3 7 7 27 27 1027 27 5027 54 55 JLAAAA MGEAAA AAAAxx +7694 2873 0 2 4 14 94 694 1694 2694 7694 188 189 YJAAAA NGEAAA HHHHxx +473 2874 1 1 3 13 73 473 473 473 473 146 147 FSAAAA OGEAAA OOOOxx +6325 2875 1 1 5 5 25 325 325 1325 6325 50 51 HJAAAA PGEAAA VVVVxx +8761 2876 1 1 1 1 61 761 761 3761 8761 122 123 ZYAAAA QGEAAA AAAAxx +6184 2877 0 0 4 4 84 184 184 1184 6184 168 169 WDAAAA RGEAAA HHHHxx +419 2878 1 3 9 19 19 419 419 419 419 38 39 DQAAAA SGEAAA OOOOxx +6111 2879 1 3 1 11 11 111 111 1111 6111 22 23 BBAAAA TGEAAA VVVVxx +3836 2880 0 0 6 16 36 836 1836 3836 3836 72 73 ORAAAA UGEAAA AAAAxx +4086 2881 0 2 6 6 86 86 86 4086 4086 172 173 EBAAAA VGEAAA HHHHxx +5818 2882 0 2 8 18 18 818 1818 818 5818 36 37 UPAAAA WGEAAA OOOOxx +4528 2883 0 0 8 8 28 528 528 4528 4528 56 57 ESAAAA XGEAAA VVVVxx +7199 2884 1 3 9 19 99 199 1199 2199 7199 198 199 XQAAAA YGEAAA AAAAxx +1847 2885 1 3 7 7 47 847 1847 1847 1847 94 95 BTAAAA ZGEAAA HHHHxx +2875 2886 1 3 5 15 75 875 875 2875 2875 150 151 PGAAAA AHEAAA OOOOxx +2872 2887 0 0 2 12 72 872 872 2872 2872 144 145 MGAAAA BHEAAA VVVVxx +3972 2888 0 0 2 12 72 972 1972 3972 3972 144 145 UWAAAA CHEAAA AAAAxx +7590 2889 0 2 0 10 90 590 1590 2590 7590 180 181 YFAAAA DHEAAA HHHHxx +1914 2890 0 2 4 14 14 914 1914 1914 1914 28 29 QVAAAA EHEAAA OOOOxx +1658 2891 0 2 8 18 58 658 1658 1658 1658 116 117 ULAAAA FHEAAA VVVVxx +2126 2892 0 2 6 6 26 126 126 2126 2126 52 53 UDAAAA GHEAAA AAAAxx +645 2893 1 1 5 5 45 645 645 645 645 90 91 VYAAAA HHEAAA HHHHxx +6636 2894 0 0 6 16 36 636 636 1636 6636 72 73 GVAAAA IHEAAA OOOOxx +1469 2895 1 1 9 9 69 469 1469 1469 1469 138 139 NEAAAA JHEAAA VVVVxx +1377 2896 1 1 7 17 77 377 1377 1377 1377 154 155 ZAAAAA KHEAAA AAAAxx +8425 2897 1 1 5 5 25 425 425 3425 8425 50 51 BMAAAA LHEAAA HHHHxx +9300 2898 0 0 0 0 0 300 1300 4300 9300 0 1 STAAAA MHEAAA OOOOxx +5355 2899 1 3 5 15 55 355 1355 355 5355 110 111 ZXAAAA NHEAAA VVVVxx +840 2900 0 0 0 0 40 840 840 840 840 80 81 IGAAAA OHEAAA AAAAxx +5185 2901 1 1 5 5 85 185 1185 185 5185 170 171 LRAAAA PHEAAA HHHHxx +6467 2902 1 3 7 7 67 467 467 1467 6467 134 135 TOAAAA QHEAAA OOOOxx +58 2903 0 2 8 18 58 58 58 58 58 116 117 GCAAAA RHEAAA VVVVxx +5051 2904 1 3 1 11 51 51 1051 51 5051 102 103 HMAAAA SHEAAA AAAAxx +8901 2905 1 1 1 1 1 901 901 3901 8901 2 3 JEAAAA THEAAA HHHHxx +1550 2906 0 2 0 10 50 550 1550 1550 1550 100 101 QHAAAA UHEAAA OOOOxx +1698 2907 0 2 8 18 98 698 1698 1698 1698 196 197 INAAAA VHEAAA VVVVxx +802 2908 0 2 2 2 2 802 802 802 802 4 5 WEAAAA WHEAAA AAAAxx +2440 2909 0 0 0 0 40 440 440 2440 2440 80 81 WPAAAA XHEAAA HHHHxx +2260 2910 0 0 0 0 60 260 260 2260 2260 120 121 YIAAAA YHEAAA OOOOxx +8218 2911 0 2 8 18 18 218 218 3218 8218 36 37 CEAAAA ZHEAAA VVVVxx +5144 2912 0 0 4 4 44 144 1144 144 5144 88 89 WPAAAA AIEAAA AAAAxx +4822 2913 0 2 2 2 22 822 822 4822 4822 44 45 MDAAAA BIEAAA HHHHxx +9476 2914 0 0 6 16 76 476 1476 4476 9476 152 153 MAAAAA CIEAAA OOOOxx +7535 2915 1 3 5 15 35 535 1535 2535 7535 70 71 VDAAAA DIEAAA VVVVxx +8738 2916 0 2 8 18 38 738 738 3738 8738 76 77 CYAAAA EIEAAA AAAAxx +7946 2917 0 2 6 6 46 946 1946 2946 7946 92 93 QTAAAA FIEAAA HHHHxx +8143 2918 1 3 3 3 43 143 143 3143 8143 86 87 FBAAAA GIEAAA OOOOxx +2623 2919 1 3 3 3 23 623 623 2623 2623 46 47 XWAAAA HIEAAA VVVVxx +5209 2920 1 1 9 9 9 209 1209 209 5209 18 19 JSAAAA IIEAAA AAAAxx +7674 2921 0 2 4 14 74 674 1674 2674 7674 148 149 EJAAAA JIEAAA HHHHxx +1135 2922 1 3 5 15 35 135 1135 1135 1135 70 71 RRAAAA KIEAAA OOOOxx +424 2923 0 0 4 4 24 424 424 424 
424 48 49 IQAAAA LIEAAA VVVVxx +942 2924 0 2 2 2 42 942 942 942 942 84 85 GKAAAA MIEAAA AAAAxx +7813 2925 1 1 3 13 13 813 1813 2813 7813 26 27 NOAAAA NIEAAA HHHHxx +3539 2926 1 3 9 19 39 539 1539 3539 3539 78 79 DGAAAA OIEAAA OOOOxx +2909 2927 1 1 9 9 9 909 909 2909 2909 18 19 XHAAAA PIEAAA VVVVxx +3748 2928 0 0 8 8 48 748 1748 3748 3748 96 97 EOAAAA QIEAAA AAAAxx +2996 2929 0 0 6 16 96 996 996 2996 2996 192 193 GLAAAA RIEAAA HHHHxx +1869 2930 1 1 9 9 69 869 1869 1869 1869 138 139 XTAAAA SIEAAA OOOOxx +8151 2931 1 3 1 11 51 151 151 3151 8151 102 103 NBAAAA TIEAAA VVVVxx +6361 2932 1 1 1 1 61 361 361 1361 6361 122 123 RKAAAA UIEAAA AAAAxx +5568 2933 0 0 8 8 68 568 1568 568 5568 136 137 EGAAAA VIEAAA HHHHxx +2796 2934 0 0 6 16 96 796 796 2796 2796 192 193 ODAAAA WIEAAA OOOOxx +8489 2935 1 1 9 9 89 489 489 3489 8489 178 179 NOAAAA XIEAAA VVVVxx +9183 2936 1 3 3 3 83 183 1183 4183 9183 166 167 FPAAAA YIEAAA AAAAxx +8227 2937 1 3 7 7 27 227 227 3227 8227 54 55 LEAAAA ZIEAAA HHHHxx +1844 2938 0 0 4 4 44 844 1844 1844 1844 88 89 YSAAAA AJEAAA OOOOxx +3975 2939 1 3 5 15 75 975 1975 3975 3975 150 151 XWAAAA BJEAAA VVVVxx +6490 2940 0 2 0 10 90 490 490 1490 6490 180 181 QPAAAA CJEAAA AAAAxx +8303 2941 1 3 3 3 3 303 303 3303 8303 6 7 JHAAAA DJEAAA HHHHxx +7334 2942 0 2 4 14 34 334 1334 2334 7334 68 69 CWAAAA EJEAAA OOOOxx +2382 2943 0 2 2 2 82 382 382 2382 2382 164 165 QNAAAA FJEAAA VVVVxx +177 2944 1 1 7 17 77 177 177 177 177 154 155 VGAAAA GJEAAA AAAAxx +8117 2945 1 1 7 17 17 117 117 3117 8117 34 35 FAAAAA HJEAAA HHHHxx +5485 2946 1 1 5 5 85 485 1485 485 5485 170 171 ZCAAAA IJEAAA OOOOxx +6544 2947 0 0 4 4 44 544 544 1544 6544 88 89 SRAAAA JJEAAA VVVVxx +8517 2948 1 1 7 17 17 517 517 3517 8517 34 35 PPAAAA KJEAAA AAAAxx +2252 2949 0 0 2 12 52 252 252 2252 2252 104 105 QIAAAA LJEAAA HHHHxx +4480 2950 0 0 0 0 80 480 480 4480 4480 160 161 IQAAAA MJEAAA OOOOxx +4785 2951 1 1 5 5 85 785 785 4785 4785 170 171 BCAAAA NJEAAA VVVVxx +9700 2952 0 0 0 0 0 700 1700 4700 9700 0 1 CJAAAA OJEAAA AAAAxx +2122 2953 0 2 2 2 22 122 122 2122 2122 44 45 QDAAAA PJEAAA HHHHxx +8783 2954 1 3 3 3 83 783 783 3783 8783 166 167 VZAAAA QJEAAA OOOOxx +1453 2955 1 1 3 13 53 453 1453 1453 1453 106 107 XDAAAA RJEAAA VVVVxx +3908 2956 0 0 8 8 8 908 1908 3908 3908 16 17 IUAAAA SJEAAA AAAAxx +7707 2957 1 3 7 7 7 707 1707 2707 7707 14 15 LKAAAA TJEAAA HHHHxx +9049 2958 1 1 9 9 49 49 1049 4049 9049 98 99 BKAAAA UJEAAA OOOOxx +654 2959 0 2 4 14 54 654 654 654 654 108 109 EZAAAA VJEAAA VVVVxx +3336 2960 0 0 6 16 36 336 1336 3336 3336 72 73 IYAAAA WJEAAA AAAAxx +622 2961 0 2 2 2 22 622 622 622 622 44 45 YXAAAA XJEAAA HHHHxx +8398 2962 0 2 8 18 98 398 398 3398 8398 196 197 ALAAAA YJEAAA OOOOxx +9193 2963 1 1 3 13 93 193 1193 4193 9193 186 187 PPAAAA ZJEAAA VVVVxx +7896 2964 0 0 6 16 96 896 1896 2896 7896 192 193 SRAAAA AKEAAA AAAAxx +9798 2965 0 2 8 18 98 798 1798 4798 9798 196 197 WMAAAA BKEAAA HHHHxx +2881 2966 1 1 1 1 81 881 881 2881 2881 162 163 VGAAAA CKEAAA OOOOxx +672 2967 0 0 2 12 72 672 672 672 672 144 145 WZAAAA DKEAAA VVVVxx +6743 2968 1 3 3 3 43 743 743 1743 6743 86 87 JZAAAA EKEAAA AAAAxx +8935 2969 1 3 5 15 35 935 935 3935 8935 70 71 RFAAAA FKEAAA HHHHxx +2426 2970 0 2 6 6 26 426 426 2426 2426 52 53 IPAAAA GKEAAA OOOOxx +722 2971 0 2 2 2 22 722 722 722 722 44 45 UBAAAA HKEAAA VVVVxx +5088 2972 0 0 8 8 88 88 1088 88 5088 176 177 SNAAAA IKEAAA AAAAxx +8677 2973 1 1 7 17 77 677 677 3677 8677 154 155 TVAAAA JKEAAA HHHHxx +6963 2974 1 3 3 3 63 963 963 1963 6963 126 127 VHAAAA KKEAAA OOOOxx +1653 2975 1 1 3 13 53 653 1653 1653 1653 
106 107 PLAAAA LKEAAA VVVVxx +7295 2976 1 3 5 15 95 295 1295 2295 7295 190 191 PUAAAA MKEAAA AAAAxx +6675 2977 1 3 5 15 75 675 675 1675 6675 150 151 TWAAAA NKEAAA HHHHxx +7183 2978 1 3 3 3 83 183 1183 2183 7183 166 167 HQAAAA OKEAAA OOOOxx +4378 2979 0 2 8 18 78 378 378 4378 4378 156 157 KMAAAA PKEAAA VVVVxx +2157 2980 1 1 7 17 57 157 157 2157 2157 114 115 ZEAAAA QKEAAA AAAAxx +2621 2981 1 1 1 1 21 621 621 2621 2621 42 43 VWAAAA RKEAAA HHHHxx +9278 2982 0 2 8 18 78 278 1278 4278 9278 156 157 WSAAAA SKEAAA OOOOxx +79 2983 1 3 9 19 79 79 79 79 79 158 159 BDAAAA TKEAAA VVVVxx +7358 2984 0 2 8 18 58 358 1358 2358 7358 116 117 AXAAAA UKEAAA AAAAxx +3589 2985 1 1 9 9 89 589 1589 3589 3589 178 179 BIAAAA VKEAAA HHHHxx +1254 2986 0 2 4 14 54 254 1254 1254 1254 108 109 GWAAAA WKEAAA OOOOxx +3490 2987 0 2 0 10 90 490 1490 3490 3490 180 181 GEAAAA XKEAAA VVVVxx +7533 2988 1 1 3 13 33 533 1533 2533 7533 66 67 TDAAAA YKEAAA AAAAxx +2800 2989 0 0 0 0 0 800 800 2800 2800 0 1 SDAAAA ZKEAAA HHHHxx +351 2990 1 3 1 11 51 351 351 351 351 102 103 NNAAAA ALEAAA OOOOxx +4359 2991 1 3 9 19 59 359 359 4359 4359 118 119 RLAAAA BLEAAA VVVVxx +5788 2992 0 0 8 8 88 788 1788 788 5788 176 177 QOAAAA CLEAAA AAAAxx +5521 2993 1 1 1 1 21 521 1521 521 5521 42 43 JEAAAA DLEAAA HHHHxx +3351 2994 1 3 1 11 51 351 1351 3351 3351 102 103 XYAAAA ELEAAA OOOOxx +5129 2995 1 1 9 9 29 129 1129 129 5129 58 59 HPAAAA FLEAAA VVVVxx +315 2996 1 3 5 15 15 315 315 315 315 30 31 DMAAAA GLEAAA AAAAxx +7552 2997 0 0 2 12 52 552 1552 2552 7552 104 105 MEAAAA HLEAAA HHHHxx +9176 2998 0 0 6 16 76 176 1176 4176 9176 152 153 YOAAAA ILEAAA OOOOxx +7458 2999 0 2 8 18 58 458 1458 2458 7458 116 117 WAAAAA JLEAAA VVVVxx +279 3000 1 3 9 19 79 279 279 279 279 158 159 TKAAAA KLEAAA AAAAxx +738 3001 0 2 8 18 38 738 738 738 738 76 77 KCAAAA LLEAAA HHHHxx +2557 3002 1 1 7 17 57 557 557 2557 2557 114 115 JUAAAA MLEAAA OOOOxx +9395 3003 1 3 5 15 95 395 1395 4395 9395 190 191 JXAAAA NLEAAA VVVVxx +7214 3004 0 2 4 14 14 214 1214 2214 7214 28 29 MRAAAA OLEAAA AAAAxx +6354 3005 0 2 4 14 54 354 354 1354 6354 108 109 KKAAAA PLEAAA HHHHxx +4799 3006 1 3 9 19 99 799 799 4799 4799 198 199 PCAAAA QLEAAA OOOOxx +1231 3007 1 3 1 11 31 231 1231 1231 1231 62 63 JVAAAA RLEAAA VVVVxx +5252 3008 0 0 2 12 52 252 1252 252 5252 104 105 AUAAAA SLEAAA AAAAxx +5250 3009 0 2 0 10 50 250 1250 250 5250 100 101 YTAAAA TLEAAA HHHHxx +9319 3010 1 3 9 19 19 319 1319 4319 9319 38 39 LUAAAA ULEAAA OOOOxx +1724 3011 0 0 4 4 24 724 1724 1724 1724 48 49 IOAAAA VLEAAA VVVVxx +7947 3012 1 3 7 7 47 947 1947 2947 7947 94 95 RTAAAA WLEAAA AAAAxx +1105 3013 1 1 5 5 5 105 1105 1105 1105 10 11 NQAAAA XLEAAA HHHHxx +1417 3014 1 1 7 17 17 417 1417 1417 1417 34 35 NCAAAA YLEAAA OOOOxx +7101 3015 1 1 1 1 1 101 1101 2101 7101 2 3 DNAAAA ZLEAAA VVVVxx +1088 3016 0 0 8 8 88 88 1088 1088 1088 176 177 WPAAAA AMEAAA AAAAxx +979 3017 1 3 9 19 79 979 979 979 979 158 159 RLAAAA BMEAAA HHHHxx +7589 3018 1 1 9 9 89 589 1589 2589 7589 178 179 XFAAAA CMEAAA OOOOxx +8952 3019 0 0 2 12 52 952 952 3952 8952 104 105 IGAAAA DMEAAA VVVVxx +2864 3020 0 0 4 4 64 864 864 2864 2864 128 129 EGAAAA EMEAAA AAAAxx +234 3021 0 2 4 14 34 234 234 234 234 68 69 AJAAAA FMEAAA HHHHxx +7231 3022 1 3 1 11 31 231 1231 2231 7231 62 63 DSAAAA GMEAAA OOOOxx +6792 3023 0 0 2 12 92 792 792 1792 6792 184 185 GBAAAA HMEAAA VVVVxx +4311 3024 1 3 1 11 11 311 311 4311 4311 22 23 VJAAAA IMEAAA AAAAxx +3374 3025 0 2 4 14 74 374 1374 3374 3374 148 149 UZAAAA JMEAAA HHHHxx +3367 3026 1 3 7 7 67 367 1367 3367 3367 134 135 NZAAAA KMEAAA OOOOxx +2598 3027 0 2 
8 18 98 598 598 2598 2598 196 197 YVAAAA LMEAAA VVVVxx +1033 3028 1 1 3 13 33 33 1033 1033 1033 66 67 TNAAAA MMEAAA AAAAxx +7803 3029 1 3 3 3 3 803 1803 2803 7803 6 7 DOAAAA NMEAAA HHHHxx +3870 3030 0 2 0 10 70 870 1870 3870 3870 140 141 WSAAAA OMEAAA OOOOxx +4962 3031 0 2 2 2 62 962 962 4962 4962 124 125 WIAAAA PMEAAA VVVVxx +4842 3032 0 2 2 2 42 842 842 4842 4842 84 85 GEAAAA QMEAAA AAAAxx +8814 3033 0 2 4 14 14 814 814 3814 8814 28 29 ABAAAA RMEAAA HHHHxx +3429 3034 1 1 9 9 29 429 1429 3429 3429 58 59 XBAAAA SMEAAA OOOOxx +6550 3035 0 2 0 10 50 550 550 1550 6550 100 101 YRAAAA TMEAAA VVVVxx +6317 3036 1 1 7 17 17 317 317 1317 6317 34 35 ZIAAAA UMEAAA AAAAxx +5023 3037 1 3 3 3 23 23 1023 23 5023 46 47 FLAAAA VMEAAA HHHHxx +5825 3038 1 1 5 5 25 825 1825 825 5825 50 51 BQAAAA WMEAAA OOOOxx +5297 3039 1 1 7 17 97 297 1297 297 5297 194 195 TVAAAA XMEAAA VVVVxx +8764 3040 0 0 4 4 64 764 764 3764 8764 128 129 CZAAAA YMEAAA AAAAxx +5084 3041 0 0 4 4 84 84 1084 84 5084 168 169 ONAAAA ZMEAAA HHHHxx +6808 3042 0 0 8 8 8 808 808 1808 6808 16 17 WBAAAA ANEAAA OOOOxx +1780 3043 0 0 0 0 80 780 1780 1780 1780 160 161 MQAAAA BNEAAA VVVVxx +4092 3044 0 0 2 12 92 92 92 4092 4092 184 185 KBAAAA CNEAAA AAAAxx +3618 3045 0 2 8 18 18 618 1618 3618 3618 36 37 EJAAAA DNEAAA HHHHxx +7299 3046 1 3 9 19 99 299 1299 2299 7299 198 199 TUAAAA ENEAAA OOOOxx +8544 3047 0 0 4 4 44 544 544 3544 8544 88 89 QQAAAA FNEAAA VVVVxx +2359 3048 1 3 9 19 59 359 359 2359 2359 118 119 TMAAAA GNEAAA AAAAxx +1939 3049 1 3 9 19 39 939 1939 1939 1939 78 79 PWAAAA HNEAAA HHHHxx +5834 3050 0 2 4 14 34 834 1834 834 5834 68 69 KQAAAA INEAAA OOOOxx +1997 3051 1 1 7 17 97 997 1997 1997 1997 194 195 VYAAAA JNEAAA VVVVxx +7917 3052 1 1 7 17 17 917 1917 2917 7917 34 35 NSAAAA KNEAAA AAAAxx +2098 3053 0 2 8 18 98 98 98 2098 2098 196 197 SCAAAA LNEAAA HHHHxx +7576 3054 0 0 6 16 76 576 1576 2576 7576 152 153 KFAAAA MNEAAA OOOOxx +376 3055 0 0 6 16 76 376 376 376 376 152 153 MOAAAA NNEAAA VVVVxx +8535 3056 1 3 5 15 35 535 535 3535 8535 70 71 HQAAAA ONEAAA AAAAxx +5659 3057 1 3 9 19 59 659 1659 659 5659 118 119 RJAAAA PNEAAA HHHHxx +2786 3058 0 2 6 6 86 786 786 2786 2786 172 173 EDAAAA QNEAAA OOOOxx +8820 3059 0 0 0 0 20 820 820 3820 8820 40 41 GBAAAA RNEAAA VVVVxx +1229 3060 1 1 9 9 29 229 1229 1229 1229 58 59 HVAAAA SNEAAA AAAAxx +9321 3061 1 1 1 1 21 321 1321 4321 9321 42 43 NUAAAA TNEAAA HHHHxx +7662 3062 0 2 2 2 62 662 1662 2662 7662 124 125 SIAAAA UNEAAA OOOOxx +5535 3063 1 3 5 15 35 535 1535 535 5535 70 71 XEAAAA VNEAAA VVVVxx +4889 3064 1 1 9 9 89 889 889 4889 4889 178 179 BGAAAA WNEAAA AAAAxx +8259 3065 1 3 9 19 59 259 259 3259 8259 118 119 RFAAAA XNEAAA HHHHxx +6789 3066 1 1 9 9 89 789 789 1789 6789 178 179 DBAAAA YNEAAA OOOOxx +5411 3067 1 3 1 11 11 411 1411 411 5411 22 23 DAAAAA ZNEAAA VVVVxx +6992 3068 0 0 2 12 92 992 992 1992 6992 184 185 YIAAAA AOEAAA AAAAxx +7698 3069 0 2 8 18 98 698 1698 2698 7698 196 197 CKAAAA BOEAAA HHHHxx +2342 3070 0 2 2 2 42 342 342 2342 2342 84 85 CMAAAA COEAAA OOOOxx +1501 3071 1 1 1 1 1 501 1501 1501 1501 2 3 TFAAAA DOEAAA VVVVxx +6322 3072 0 2 2 2 22 322 322 1322 6322 44 45 EJAAAA EOEAAA AAAAxx +9861 3073 1 1 1 1 61 861 1861 4861 9861 122 123 HPAAAA FOEAAA HHHHxx +9802 3074 0 2 2 2 2 802 1802 4802 9802 4 5 ANAAAA GOEAAA OOOOxx +4750 3075 0 2 0 10 50 750 750 4750 4750 100 101 SAAAAA HOEAAA VVVVxx +5855 3076 1 3 5 15 55 855 1855 855 5855 110 111 FRAAAA IOEAAA AAAAxx +4304 3077 0 0 4 4 4 304 304 4304 4304 8 9 OJAAAA JOEAAA HHHHxx +2605 3078 1 1 5 5 5 605 605 2605 2605 10 11 FWAAAA KOEAAA OOOOxx +1802 3079 0 2 
2 2 2 802 1802 1802 1802 4 5 IRAAAA LOEAAA VVVVxx +9368 3080 0 0 8 8 68 368 1368 4368 9368 136 137 IWAAAA MOEAAA AAAAxx +7107 3081 1 3 7 7 7 107 1107 2107 7107 14 15 JNAAAA NOEAAA HHHHxx +8895 3082 1 3 5 15 95 895 895 3895 8895 190 191 DEAAAA OOEAAA OOOOxx +3750 3083 0 2 0 10 50 750 1750 3750 3750 100 101 GOAAAA POEAAA VVVVxx +8934 3084 0 2 4 14 34 934 934 3934 8934 68 69 QFAAAA QOEAAA AAAAxx +9464 3085 0 0 4 4 64 464 1464 4464 9464 128 129 AAAAAA ROEAAA HHHHxx +1928 3086 0 0 8 8 28 928 1928 1928 1928 56 57 EWAAAA SOEAAA OOOOxx +3196 3087 0 0 6 16 96 196 1196 3196 3196 192 193 YSAAAA TOEAAA VVVVxx +5256 3088 0 0 6 16 56 256 1256 256 5256 112 113 EUAAAA UOEAAA AAAAxx +7119 3089 1 3 9 19 19 119 1119 2119 7119 38 39 VNAAAA VOEAAA HHHHxx +4495 3090 1 3 5 15 95 495 495 4495 4495 190 191 XQAAAA WOEAAA OOOOxx +9292 3091 0 0 2 12 92 292 1292 4292 9292 184 185 KTAAAA XOEAAA VVVVxx +1617 3092 1 1 7 17 17 617 1617 1617 1617 34 35 FKAAAA YOEAAA AAAAxx +481 3093 1 1 1 1 81 481 481 481 481 162 163 NSAAAA ZOEAAA HHHHxx +56 3094 0 0 6 16 56 56 56 56 56 112 113 ECAAAA APEAAA OOOOxx +9120 3095 0 0 0 0 20 120 1120 4120 9120 40 41 UMAAAA BPEAAA VVVVxx +1306 3096 0 2 6 6 6 306 1306 1306 1306 12 13 GYAAAA CPEAAA AAAAxx +7773 3097 1 1 3 13 73 773 1773 2773 7773 146 147 ZMAAAA DPEAAA HHHHxx +4863 3098 1 3 3 3 63 863 863 4863 4863 126 127 BFAAAA EPEAAA OOOOxx +1114 3099 0 2 4 14 14 114 1114 1114 1114 28 29 WQAAAA FPEAAA VVVVxx +8124 3100 0 0 4 4 24 124 124 3124 8124 48 49 MAAAAA GPEAAA AAAAxx +6254 3101 0 2 4 14 54 254 254 1254 6254 108 109 OGAAAA HPEAAA HHHHxx +8109 3102 1 1 9 9 9 109 109 3109 8109 18 19 XZAAAA IPEAAA OOOOxx +1747 3103 1 3 7 7 47 747 1747 1747 1747 94 95 FPAAAA JPEAAA VVVVxx +6185 3104 1 1 5 5 85 185 185 1185 6185 170 171 XDAAAA KPEAAA AAAAxx +3388 3105 0 0 8 8 88 388 1388 3388 3388 176 177 IAAAAA LPEAAA HHHHxx +4905 3106 1 1 5 5 5 905 905 4905 4905 10 11 RGAAAA MPEAAA OOOOxx +5728 3107 0 0 8 8 28 728 1728 728 5728 56 57 IMAAAA NPEAAA VVVVxx +7507 3108 1 3 7 7 7 507 1507 2507 7507 14 15 TCAAAA OPEAAA AAAAxx +5662 3109 0 2 2 2 62 662 1662 662 5662 124 125 UJAAAA PPEAAA HHHHxx +1686 3110 0 2 6 6 86 686 1686 1686 1686 172 173 WMAAAA QPEAAA OOOOxx +5202 3111 0 2 2 2 2 202 1202 202 5202 4 5 CSAAAA RPEAAA VVVVxx +6905 3112 1 1 5 5 5 905 905 1905 6905 10 11 PFAAAA SPEAAA AAAAxx +9577 3113 1 1 7 17 77 577 1577 4577 9577 154 155 JEAAAA TPEAAA HHHHxx +7194 3114 0 2 4 14 94 194 1194 2194 7194 188 189 SQAAAA UPEAAA OOOOxx +7016 3115 0 0 6 16 16 16 1016 2016 7016 32 33 WJAAAA VPEAAA VVVVxx +8905 3116 1 1 5 5 5 905 905 3905 8905 10 11 NEAAAA WPEAAA AAAAxx +3419 3117 1 3 9 19 19 419 1419 3419 3419 38 39 NBAAAA XPEAAA HHHHxx +6881 3118 1 1 1 1 81 881 881 1881 6881 162 163 REAAAA YPEAAA OOOOxx +8370 3119 0 2 0 10 70 370 370 3370 8370 140 141 YJAAAA ZPEAAA VVVVxx +6117 3120 1 1 7 17 17 117 117 1117 6117 34 35 HBAAAA AQEAAA AAAAxx +1636 3121 0 0 6 16 36 636 1636 1636 1636 72 73 YKAAAA BQEAAA HHHHxx +6857 3122 1 1 7 17 57 857 857 1857 6857 114 115 TDAAAA CQEAAA OOOOxx +7163 3123 1 3 3 3 63 163 1163 2163 7163 126 127 NPAAAA DQEAAA VVVVxx +5040 3124 0 0 0 0 40 40 1040 40 5040 80 81 WLAAAA EQEAAA AAAAxx +6263 3125 1 3 3 3 63 263 263 1263 6263 126 127 XGAAAA FQEAAA HHHHxx +4809 3126 1 1 9 9 9 809 809 4809 4809 18 19 ZCAAAA GQEAAA OOOOxx +900 3127 0 0 0 0 0 900 900 900 900 0 1 QIAAAA HQEAAA VVVVxx +3199 3128 1 3 9 19 99 199 1199 3199 3199 198 199 BTAAAA IQEAAA AAAAxx +4156 3129 0 0 6 16 56 156 156 4156 4156 112 113 WDAAAA JQEAAA HHHHxx +3501 3130 1 1 1 1 1 501 1501 3501 3501 2 3 REAAAA KQEAAA OOOOxx +164 3131 0 0 4 4 64 
164 164 164 164 128 129 IGAAAA LQEAAA VVVVxx +9548 3132 0 0 8 8 48 548 1548 4548 9548 96 97 GDAAAA MQEAAA AAAAxx +1149 3133 1 1 9 9 49 149 1149 1149 1149 98 99 FSAAAA NQEAAA HHHHxx +1962 3134 0 2 2 2 62 962 1962 1962 1962 124 125 MXAAAA OQEAAA OOOOxx +4072 3135 0 0 2 12 72 72 72 4072 4072 144 145 QAAAAA PQEAAA VVVVxx +4280 3136 0 0 0 0 80 280 280 4280 4280 160 161 QIAAAA QQEAAA AAAAxx +1398 3137 0 2 8 18 98 398 1398 1398 1398 196 197 UBAAAA RQEAAA HHHHxx +725 3138 1 1 5 5 25 725 725 725 725 50 51 XBAAAA SQEAAA OOOOxx +3988 3139 0 0 8 8 88 988 1988 3988 3988 176 177 KXAAAA TQEAAA VVVVxx +5059 3140 1 3 9 19 59 59 1059 59 5059 118 119 PMAAAA UQEAAA AAAAxx +2632 3141 0 0 2 12 32 632 632 2632 2632 64 65 GXAAAA VQEAAA HHHHxx +1909 3142 1 1 9 9 9 909 1909 1909 1909 18 19 LVAAAA WQEAAA OOOOxx +6827 3143 1 3 7 7 27 827 827 1827 6827 54 55 PCAAAA XQEAAA VVVVxx +8156 3144 0 0 6 16 56 156 156 3156 8156 112 113 SBAAAA YQEAAA AAAAxx +1192 3145 0 0 2 12 92 192 1192 1192 1192 184 185 WTAAAA ZQEAAA HHHHxx +9545 3146 1 1 5 5 45 545 1545 4545 9545 90 91 DDAAAA AREAAA OOOOxx +2249 3147 1 1 9 9 49 249 249 2249 2249 98 99 NIAAAA BREAAA VVVVxx +5580 3148 0 0 0 0 80 580 1580 580 5580 160 161 QGAAAA CREAAA AAAAxx +8403 3149 1 3 3 3 3 403 403 3403 8403 6 7 FLAAAA DREAAA HHHHxx +4024 3150 0 0 4 4 24 24 24 4024 4024 48 49 UYAAAA EREAAA OOOOxx +1866 3151 0 2 6 6 66 866 1866 1866 1866 132 133 UTAAAA FREAAA VVVVxx +9251 3152 1 3 1 11 51 251 1251 4251 9251 102 103 VRAAAA GREAAA AAAAxx +9979 3153 1 3 9 19 79 979 1979 4979 9979 158 159 VTAAAA HREAAA HHHHxx +9899 3154 1 3 9 19 99 899 1899 4899 9899 198 199 TQAAAA IREAAA OOOOxx +2540 3155 0 0 0 0 40 540 540 2540 2540 80 81 STAAAA JREAAA VVVVxx +8957 3156 1 1 7 17 57 957 957 3957 8957 114 115 NGAAAA KREAAA AAAAxx +7702 3157 0 2 2 2 2 702 1702 2702 7702 4 5 GKAAAA LREAAA HHHHxx +4211 3158 1 3 1 11 11 211 211 4211 4211 22 23 ZFAAAA MREAAA OOOOxx +6684 3159 0 0 4 4 84 684 684 1684 6684 168 169 CXAAAA NREAAA VVVVxx +3883 3160 1 3 3 3 83 883 1883 3883 3883 166 167 JTAAAA OREAAA AAAAxx +3531 3161 1 3 1 11 31 531 1531 3531 3531 62 63 VFAAAA PREAAA HHHHxx +9178 3162 0 2 8 18 78 178 1178 4178 9178 156 157 APAAAA QREAAA OOOOxx +3389 3163 1 1 9 9 89 389 1389 3389 3389 178 179 JAAAAA RREAAA VVVVxx +7874 3164 0 2 4 14 74 874 1874 2874 7874 148 149 WQAAAA SREAAA AAAAxx +4522 3165 0 2 2 2 22 522 522 4522 4522 44 45 YRAAAA TREAAA HHHHxx +9399 3166 1 3 9 19 99 399 1399 4399 9399 198 199 NXAAAA UREAAA OOOOxx +9083 3167 1 3 3 3 83 83 1083 4083 9083 166 167 JLAAAA VREAAA VVVVxx +1530 3168 0 2 0 10 30 530 1530 1530 1530 60 61 WGAAAA WREAAA AAAAxx +2360 3169 0 0 0 0 60 360 360 2360 2360 120 121 UMAAAA XREAAA HHHHxx +4908 3170 0 0 8 8 8 908 908 4908 4908 16 17 UGAAAA YREAAA OOOOxx +4628 3171 0 0 8 8 28 628 628 4628 4628 56 57 AWAAAA ZREAAA VVVVxx +3889 3172 1 1 9 9 89 889 1889 3889 3889 178 179 PTAAAA ASEAAA AAAAxx +1331 3173 1 3 1 11 31 331 1331 1331 1331 62 63 FZAAAA BSEAAA HHHHxx +1942 3174 0 2 2 2 42 942 1942 1942 1942 84 85 SWAAAA CSEAAA OOOOxx +4734 3175 0 2 4 14 34 734 734 4734 4734 68 69 CAAAAA DSEAAA VVVVxx +8386 3176 0 2 6 6 86 386 386 3386 8386 172 173 OKAAAA ESEAAA AAAAxx +3586 3177 0 2 6 6 86 586 1586 3586 3586 172 173 YHAAAA FSEAAA HHHHxx +2354 3178 0 2 4 14 54 354 354 2354 2354 108 109 OMAAAA GSEAAA OOOOxx +7108 3179 0 0 8 8 8 108 1108 2108 7108 16 17 KNAAAA HSEAAA VVVVxx +1857 3180 1 1 7 17 57 857 1857 1857 1857 114 115 LTAAAA ISEAAA AAAAxx +2544 3181 0 0 4 4 44 544 544 2544 2544 88 89 WTAAAA JSEAAA HHHHxx +819 3182 1 3 9 19 19 819 819 819 819 38 39 NFAAAA KSEAAA OOOOxx +2878 3183 0 
2 8 18 78 878 878 2878 2878 156 157 SGAAAA LSEAAA VVVVxx +1772 3184 0 0 2 12 72 772 1772 1772 1772 144 145 EQAAAA MSEAAA AAAAxx +354 3185 0 2 4 14 54 354 354 354 354 108 109 QNAAAA NSEAAA HHHHxx +3259 3186 1 3 9 19 59 259 1259 3259 3259 118 119 JVAAAA OSEAAA OOOOxx +2170 3187 0 2 0 10 70 170 170 2170 2170 140 141 MFAAAA PSEAAA VVVVxx +1190 3188 0 2 0 10 90 190 1190 1190 1190 180 181 UTAAAA QSEAAA AAAAxx +3607 3189 1 3 7 7 7 607 1607 3607 3607 14 15 TIAAAA RSEAAA HHHHxx +4661 3190 1 1 1 1 61 661 661 4661 4661 122 123 HXAAAA SSEAAA OOOOxx +1796 3191 0 0 6 16 96 796 1796 1796 1796 192 193 CRAAAA TSEAAA VVVVxx +1561 3192 1 1 1 1 61 561 1561 1561 1561 122 123 BIAAAA USEAAA AAAAxx +4336 3193 0 0 6 16 36 336 336 4336 4336 72 73 UKAAAA VSEAAA HHHHxx +7550 3194 0 2 0 10 50 550 1550 2550 7550 100 101 KEAAAA WSEAAA OOOOxx +3238 3195 0 2 8 18 38 238 1238 3238 3238 76 77 OUAAAA XSEAAA VVVVxx +9870 3196 0 2 0 10 70 870 1870 4870 9870 140 141 QPAAAA YSEAAA AAAAxx +6502 3197 0 2 2 2 2 502 502 1502 6502 4 5 CQAAAA ZSEAAA HHHHxx +3903 3198 1 3 3 3 3 903 1903 3903 3903 6 7 DUAAAA ATEAAA OOOOxx +2869 3199 1 1 9 9 69 869 869 2869 2869 138 139 JGAAAA BTEAAA VVVVxx +5072 3200 0 0 2 12 72 72 1072 72 5072 144 145 CNAAAA CTEAAA AAAAxx +1201 3201 1 1 1 1 1 201 1201 1201 1201 2 3 FUAAAA DTEAAA HHHHxx +6245 3202 1 1 5 5 45 245 245 1245 6245 90 91 FGAAAA ETEAAA OOOOxx +1402 3203 0 2 2 2 2 402 1402 1402 1402 4 5 YBAAAA FTEAAA VVVVxx +2594 3204 0 2 4 14 94 594 594 2594 2594 188 189 UVAAAA GTEAAA AAAAxx +9171 3205 1 3 1 11 71 171 1171 4171 9171 142 143 TOAAAA HTEAAA HHHHxx +2620 3206 0 0 0 0 20 620 620 2620 2620 40 41 UWAAAA ITEAAA OOOOxx +6309 3207 1 1 9 9 9 309 309 1309 6309 18 19 RIAAAA JTEAAA VVVVxx +1285 3208 1 1 5 5 85 285 1285 1285 1285 170 171 LXAAAA KTEAAA AAAAxx +5466 3209 0 2 6 6 66 466 1466 466 5466 132 133 GCAAAA LTEAAA HHHHxx +168 3210 0 0 8 8 68 168 168 168 168 136 137 MGAAAA MTEAAA OOOOxx +1410 3211 0 2 0 10 10 410 1410 1410 1410 20 21 GCAAAA NTEAAA VVVVxx +6332 3212 0 0 2 12 32 332 332 1332 6332 64 65 OJAAAA OTEAAA AAAAxx +9530 3213 0 2 0 10 30 530 1530 4530 9530 60 61 OCAAAA PTEAAA HHHHxx +7749 3214 1 1 9 9 49 749 1749 2749 7749 98 99 BMAAAA QTEAAA OOOOxx +3656 3215 0 0 6 16 56 656 1656 3656 3656 112 113 QKAAAA RTEAAA VVVVxx +37 3216 1 1 7 17 37 37 37 37 37 74 75 LBAAAA STEAAA AAAAxx +2744 3217 0 0 4 4 44 744 744 2744 2744 88 89 OBAAAA TTEAAA HHHHxx +4206 3218 0 2 6 6 6 206 206 4206 4206 12 13 UFAAAA UTEAAA OOOOxx +1846 3219 0 2 6 6 46 846 1846 1846 1846 92 93 ATAAAA VTEAAA VVVVxx +9913 3220 1 1 3 13 13 913 1913 4913 9913 26 27 HRAAAA WTEAAA AAAAxx +4078 3221 0 2 8 18 78 78 78 4078 4078 156 157 WAAAAA XTEAAA HHHHxx +2080 3222 0 0 0 0 80 80 80 2080 2080 160 161 ACAAAA YTEAAA OOOOxx +4169 3223 1 1 9 9 69 169 169 4169 4169 138 139 JEAAAA ZTEAAA VVVVxx +2070 3224 0 2 0 10 70 70 70 2070 2070 140 141 QBAAAA AUEAAA AAAAxx +4500 3225 0 0 0 0 0 500 500 4500 4500 0 1 CRAAAA BUEAAA HHHHxx +4123 3226 1 3 3 3 23 123 123 4123 4123 46 47 PCAAAA CUEAAA OOOOxx +5594 3227 0 2 4 14 94 594 1594 594 5594 188 189 EHAAAA DUEAAA VVVVxx +9941 3228 1 1 1 1 41 941 1941 4941 9941 82 83 JSAAAA EUEAAA AAAAxx +7154 3229 0 2 4 14 54 154 1154 2154 7154 108 109 EPAAAA FUEAAA HHHHxx +8340 3230 0 0 0 0 40 340 340 3340 8340 80 81 UIAAAA GUEAAA OOOOxx +7110 3231 0 2 0 10 10 110 1110 2110 7110 20 21 MNAAAA HUEAAA VVVVxx +7795 3232 1 3 5 15 95 795 1795 2795 7795 190 191 VNAAAA IUEAAA AAAAxx +132 3233 0 0 2 12 32 132 132 132 132 64 65 CFAAAA JUEAAA HHHHxx +4603 3234 1 3 3 3 3 603 603 4603 4603 6 7 BVAAAA KUEAAA OOOOxx +9720 3235 0 0 0 0 20 720 
1720 4720 9720 40 41 WJAAAA LUEAAA VVVVxx +1460 3236 0 0 0 0 60 460 1460 1460 1460 120 121 EEAAAA MUEAAA AAAAxx +4677 3237 1 1 7 17 77 677 677 4677 4677 154 155 XXAAAA NUEAAA HHHHxx +9272 3238 0 0 2 12 72 272 1272 4272 9272 144 145 QSAAAA OUEAAA OOOOxx +2279 3239 1 3 9 19 79 279 279 2279 2279 158 159 RJAAAA PUEAAA VVVVxx +4587 3240 1 3 7 7 87 587 587 4587 4587 174 175 LUAAAA QUEAAA AAAAxx +2244 3241 0 0 4 4 44 244 244 2244 2244 88 89 IIAAAA RUEAAA HHHHxx +742 3242 0 2 2 2 42 742 742 742 742 84 85 OCAAAA SUEAAA OOOOxx +4426 3243 0 2 6 6 26 426 426 4426 4426 52 53 GOAAAA TUEAAA VVVVxx +4571 3244 1 3 1 11 71 571 571 4571 4571 142 143 VTAAAA UUEAAA AAAAxx +4775 3245 1 3 5 15 75 775 775 4775 4775 150 151 RBAAAA VUEAAA HHHHxx +24 3246 0 0 4 4 24 24 24 24 24 48 49 YAAAAA WUEAAA OOOOxx +4175 3247 1 3 5 15 75 175 175 4175 4175 150 151 PEAAAA XUEAAA VVVVxx +9877 3248 1 1 7 17 77 877 1877 4877 9877 154 155 XPAAAA YUEAAA AAAAxx +7271 3249 1 3 1 11 71 271 1271 2271 7271 142 143 RTAAAA ZUEAAA HHHHxx +5468 3250 0 0 8 8 68 468 1468 468 5468 136 137 ICAAAA AVEAAA OOOOxx +6106 3251 0 2 6 6 6 106 106 1106 6106 12 13 WAAAAA BVEAAA VVVVxx +9005 3252 1 1 5 5 5 5 1005 4005 9005 10 11 JIAAAA CVEAAA AAAAxx +109 3253 1 1 9 9 9 109 109 109 109 18 19 FEAAAA DVEAAA HHHHxx +6365 3254 1 1 5 5 65 365 365 1365 6365 130 131 VKAAAA EVEAAA OOOOxx +7437 3255 1 1 7 17 37 437 1437 2437 7437 74 75 BAAAAA FVEAAA VVVVxx +7979 3256 1 3 9 19 79 979 1979 2979 7979 158 159 XUAAAA GVEAAA AAAAxx +6050 3257 0 2 0 10 50 50 50 1050 6050 100 101 SYAAAA HVEAAA HHHHxx +2853 3258 1 1 3 13 53 853 853 2853 2853 106 107 TFAAAA IVEAAA OOOOxx +7603 3259 1 3 3 3 3 603 1603 2603 7603 6 7 LGAAAA JVEAAA VVVVxx +483 3260 1 3 3 3 83 483 483 483 483 166 167 PSAAAA KVEAAA AAAAxx +5994 3261 0 2 4 14 94 994 1994 994 5994 188 189 OWAAAA LVEAAA HHHHxx +6708 3262 0 0 8 8 8 708 708 1708 6708 16 17 AYAAAA MVEAAA OOOOxx +5090 3263 0 2 0 10 90 90 1090 90 5090 180 181 UNAAAA NVEAAA VVVVxx +4608 3264 0 0 8 8 8 608 608 4608 4608 16 17 GVAAAA OVEAAA AAAAxx +4551 3265 1 3 1 11 51 551 551 4551 4551 102 103 BTAAAA PVEAAA HHHHxx +5437 3266 1 1 7 17 37 437 1437 437 5437 74 75 DBAAAA QVEAAA OOOOxx +4130 3267 0 2 0 10 30 130 130 4130 4130 60 61 WCAAAA RVEAAA VVVVxx +6363 3268 1 3 3 3 63 363 363 1363 6363 126 127 TKAAAA SVEAAA AAAAxx +1499 3269 1 3 9 19 99 499 1499 1499 1499 198 199 RFAAAA TVEAAA HHHHxx +384 3270 0 0 4 4 84 384 384 384 384 168 169 UOAAAA UVEAAA OOOOxx +2266 3271 0 2 6 6 66 266 266 2266 2266 132 133 EJAAAA VVEAAA VVVVxx +6018 3272 0 2 8 18 18 18 18 1018 6018 36 37 MXAAAA WVEAAA AAAAxx +7915 3273 1 3 5 15 15 915 1915 2915 7915 30 31 LSAAAA XVEAAA HHHHxx +6167 3274 1 3 7 7 67 167 167 1167 6167 134 135 FDAAAA YVEAAA OOOOxx +9988 3275 0 0 8 8 88 988 1988 4988 9988 176 177 EUAAAA ZVEAAA VVVVxx +6599 3276 1 3 9 19 99 599 599 1599 6599 198 199 VTAAAA AWEAAA AAAAxx +1693 3277 1 1 3 13 93 693 1693 1693 1693 186 187 DNAAAA BWEAAA HHHHxx +5971 3278 1 3 1 11 71 971 1971 971 5971 142 143 RVAAAA CWEAAA OOOOxx +8470 3279 0 2 0 10 70 470 470 3470 8470 140 141 UNAAAA DWEAAA VVVVxx +2807 3280 1 3 7 7 7 807 807 2807 2807 14 15 ZDAAAA EWEAAA AAAAxx +1120 3281 0 0 0 0 20 120 1120 1120 1120 40 41 CRAAAA FWEAAA HHHHxx +5924 3282 0 0 4 4 24 924 1924 924 5924 48 49 WTAAAA GWEAAA OOOOxx +9025 3283 1 1 5 5 25 25 1025 4025 9025 50 51 DJAAAA HWEAAA VVVVxx +9454 3284 0 2 4 14 54 454 1454 4454 9454 108 109 QZAAAA IWEAAA AAAAxx +2259 3285 1 3 9 19 59 259 259 2259 2259 118 119 XIAAAA JWEAAA HHHHxx +5249 3286 1 1 9 9 49 249 1249 249 5249 98 99 XTAAAA KWEAAA OOOOxx +6350 3287 0 2 0 10 50 350 350 
1350 6350 100 101 GKAAAA LWEAAA VVVVxx +2930 3288 0 2 0 10 30 930 930 2930 2930 60 61 SIAAAA MWEAAA AAAAxx +6055 3289 1 3 5 15 55 55 55 1055 6055 110 111 XYAAAA NWEAAA HHHHxx +7691 3290 1 3 1 11 91 691 1691 2691 7691 182 183 VJAAAA OWEAAA OOOOxx +1573 3291 1 1 3 13 73 573 1573 1573 1573 146 147 NIAAAA PWEAAA VVVVxx +9943 3292 1 3 3 3 43 943 1943 4943 9943 86 87 LSAAAA QWEAAA AAAAxx +3085 3293 1 1 5 5 85 85 1085 3085 3085 170 171 ROAAAA RWEAAA HHHHxx +5928 3294 0 0 8 8 28 928 1928 928 5928 56 57 AUAAAA SWEAAA OOOOxx +887 3295 1 3 7 7 87 887 887 887 887 174 175 DIAAAA TWEAAA VVVVxx +4630 3296 0 2 0 10 30 630 630 4630 4630 60 61 CWAAAA UWEAAA AAAAxx +9827 3297 1 3 7 7 27 827 1827 4827 9827 54 55 ZNAAAA VWEAAA HHHHxx +8926 3298 0 2 6 6 26 926 926 3926 8926 52 53 IFAAAA WWEAAA OOOOxx +5726 3299 0 2 6 6 26 726 1726 726 5726 52 53 GMAAAA XWEAAA VVVVxx +1569 3300 1 1 9 9 69 569 1569 1569 1569 138 139 JIAAAA YWEAAA AAAAxx +8074 3301 0 2 4 14 74 74 74 3074 8074 148 149 OYAAAA ZWEAAA HHHHxx +7909 3302 1 1 9 9 9 909 1909 2909 7909 18 19 FSAAAA AXEAAA OOOOxx +8367 3303 1 3 7 7 67 367 367 3367 8367 134 135 VJAAAA BXEAAA VVVVxx +7217 3304 1 1 7 17 17 217 1217 2217 7217 34 35 PRAAAA CXEAAA AAAAxx +5254 3305 0 2 4 14 54 254 1254 254 5254 108 109 CUAAAA DXEAAA HHHHxx +1181 3306 1 1 1 1 81 181 1181 1181 1181 162 163 LTAAAA EXEAAA OOOOxx +6907 3307 1 3 7 7 7 907 907 1907 6907 14 15 RFAAAA FXEAAA VVVVxx +5508 3308 0 0 8 8 8 508 1508 508 5508 16 17 WDAAAA GXEAAA AAAAxx +4782 3309 0 2 2 2 82 782 782 4782 4782 164 165 YBAAAA HXEAAA HHHHxx +793 3310 1 1 3 13 93 793 793 793 793 186 187 NEAAAA IXEAAA OOOOxx +5740 3311 0 0 0 0 40 740 1740 740 5740 80 81 UMAAAA JXEAAA VVVVxx +3107 3312 1 3 7 7 7 107 1107 3107 3107 14 15 NPAAAA KXEAAA AAAAxx +1197 3313 1 1 7 17 97 197 1197 1197 1197 194 195 BUAAAA LXEAAA HHHHxx +4376 3314 0 0 6 16 76 376 376 4376 4376 152 153 IMAAAA MXEAAA OOOOxx +6226 3315 0 2 6 6 26 226 226 1226 6226 52 53 MFAAAA NXEAAA VVVVxx +5033 3316 1 1 3 13 33 33 1033 33 5033 66 67 PLAAAA OXEAAA AAAAxx +5494 3317 0 2 4 14 94 494 1494 494 5494 188 189 IDAAAA PXEAAA HHHHxx +3244 3318 0 0 4 4 44 244 1244 3244 3244 88 89 UUAAAA QXEAAA OOOOxx +7670 3319 0 2 0 10 70 670 1670 2670 7670 140 141 AJAAAA RXEAAA VVVVxx +9273 3320 1 1 3 13 73 273 1273 4273 9273 146 147 RSAAAA SXEAAA AAAAxx +5248 3321 0 0 8 8 48 248 1248 248 5248 96 97 WTAAAA TXEAAA HHHHxx +3381 3322 1 1 1 1 81 381 1381 3381 3381 162 163 BAAAAA UXEAAA OOOOxx +4136 3323 0 0 6 16 36 136 136 4136 4136 72 73 CDAAAA VXEAAA VVVVxx +4163 3324 1 3 3 3 63 163 163 4163 4163 126 127 DEAAAA WXEAAA AAAAxx +4270 3325 0 2 0 10 70 270 270 4270 4270 140 141 GIAAAA XXEAAA HHHHxx +1729 3326 1 1 9 9 29 729 1729 1729 1729 58 59 NOAAAA YXEAAA OOOOxx +2778 3327 0 2 8 18 78 778 778 2778 2778 156 157 WCAAAA ZXEAAA VVVVxx +5082 3328 0 2 2 2 82 82 1082 82 5082 164 165 MNAAAA AYEAAA AAAAxx +870 3329 0 2 0 10 70 870 870 870 870 140 141 MHAAAA BYEAAA HHHHxx +4192 3330 0 0 2 12 92 192 192 4192 4192 184 185 GFAAAA CYEAAA OOOOxx +308 3331 0 0 8 8 8 308 308 308 308 16 17 WLAAAA DYEAAA VVVVxx +6783 3332 1 3 3 3 83 783 783 1783 6783 166 167 XAAAAA EYEAAA AAAAxx +7611 3333 1 3 1 11 11 611 1611 2611 7611 22 23 TGAAAA FYEAAA HHHHxx +4221 3334 1 1 1 1 21 221 221 4221 4221 42 43 JGAAAA GYEAAA OOOOxx +6353 3335 1 1 3 13 53 353 353 1353 6353 106 107 JKAAAA HYEAAA VVVVxx +1830 3336 0 2 0 10 30 830 1830 1830 1830 60 61 KSAAAA IYEAAA AAAAxx +2437 3337 1 1 7 17 37 437 437 2437 2437 74 75 TPAAAA JYEAAA HHHHxx +3360 3338 0 0 0 0 60 360 1360 3360 3360 120 121 GZAAAA KYEAAA OOOOxx +1829 3339 1 1 9 9 29 829 
1829 1829 1829 58 59 JSAAAA LYEAAA VVVVxx +9475 3340 1 3 5 15 75 475 1475 4475 9475 150 151 LAAAAA MYEAAA AAAAxx +4566 3341 0 2 6 6 66 566 566 4566 4566 132 133 QTAAAA NYEAAA HHHHxx +9944 3342 0 0 4 4 44 944 1944 4944 9944 88 89 MSAAAA OYEAAA OOOOxx +6054 3343 0 2 4 14 54 54 54 1054 6054 108 109 WYAAAA PYEAAA VVVVxx +4722 3344 0 2 2 2 22 722 722 4722 4722 44 45 QZAAAA QYEAAA AAAAxx +2779 3345 1 3 9 19 79 779 779 2779 2779 158 159 XCAAAA RYEAAA HHHHxx +8051 3346 1 3 1 11 51 51 51 3051 8051 102 103 RXAAAA SYEAAA OOOOxx +9671 3347 1 3 1 11 71 671 1671 4671 9671 142 143 ZHAAAA TYEAAA VVVVxx +6084 3348 0 0 4 4 84 84 84 1084 6084 168 169 AAAAAA UYEAAA AAAAxx +3729 3349 1 1 9 9 29 729 1729 3729 3729 58 59 LNAAAA VYEAAA HHHHxx +6627 3350 1 3 7 7 27 627 627 1627 6627 54 55 XUAAAA WYEAAA OOOOxx +4769 3351 1 1 9 9 69 769 769 4769 4769 138 139 LBAAAA XYEAAA VVVVxx +2224 3352 0 0 4 4 24 224 224 2224 2224 48 49 OHAAAA YYEAAA AAAAxx +1404 3353 0 0 4 4 4 404 1404 1404 1404 8 9 ACAAAA ZYEAAA HHHHxx +8532 3354 0 0 2 12 32 532 532 3532 8532 64 65 EQAAAA AZEAAA OOOOxx +6759 3355 1 3 9 19 59 759 759 1759 6759 118 119 ZZAAAA BZEAAA VVVVxx +6404 3356 0 0 4 4 4 404 404 1404 6404 8 9 IMAAAA CZEAAA AAAAxx +3144 3357 0 0 4 4 44 144 1144 3144 3144 88 89 YQAAAA DZEAAA HHHHxx +973 3358 1 1 3 13 73 973 973 973 973 146 147 LLAAAA EZEAAA OOOOxx +9789 3359 1 1 9 9 89 789 1789 4789 9789 178 179 NMAAAA FZEAAA VVVVxx +6181 3360 1 1 1 1 81 181 181 1181 6181 162 163 TDAAAA GZEAAA AAAAxx +1519 3361 1 3 9 19 19 519 1519 1519 1519 38 39 LGAAAA HZEAAA HHHHxx +9729 3362 1 1 9 9 29 729 1729 4729 9729 58 59 FKAAAA IZEAAA OOOOxx +8167 3363 1 3 7 7 67 167 167 3167 8167 134 135 DCAAAA JZEAAA VVVVxx +3830 3364 0 2 0 10 30 830 1830 3830 3830 60 61 IRAAAA KZEAAA AAAAxx +6286 3365 0 2 6 6 86 286 286 1286 6286 172 173 UHAAAA LZEAAA HHHHxx +3047 3366 1 3 7 7 47 47 1047 3047 3047 94 95 FNAAAA MZEAAA OOOOxx +3183 3367 1 3 3 3 83 183 1183 3183 3183 166 167 LSAAAA NZEAAA VVVVxx +6687 3368 1 3 7 7 87 687 687 1687 6687 174 175 FXAAAA OZEAAA AAAAxx +2783 3369 1 3 3 3 83 783 783 2783 2783 166 167 BDAAAA PZEAAA HHHHxx +9920 3370 0 0 0 0 20 920 1920 4920 9920 40 41 ORAAAA QZEAAA OOOOxx +4847 3371 1 3 7 7 47 847 847 4847 4847 94 95 LEAAAA RZEAAA VVVVxx +3645 3372 1 1 5 5 45 645 1645 3645 3645 90 91 FKAAAA SZEAAA AAAAxx +7406 3373 0 2 6 6 6 406 1406 2406 7406 12 13 WYAAAA TZEAAA HHHHxx +6003 3374 1 3 3 3 3 3 3 1003 6003 6 7 XWAAAA UZEAAA OOOOxx +3408 3375 0 0 8 8 8 408 1408 3408 3408 16 17 CBAAAA VZEAAA VVVVxx +4243 3376 1 3 3 3 43 243 243 4243 4243 86 87 FHAAAA WZEAAA AAAAxx +1622 3377 0 2 2 2 22 622 1622 1622 1622 44 45 KKAAAA XZEAAA HHHHxx +5319 3378 1 3 9 19 19 319 1319 319 5319 38 39 PWAAAA YZEAAA OOOOxx +4033 3379 1 1 3 13 33 33 33 4033 4033 66 67 DZAAAA ZZEAAA VVVVxx +8573 3380 1 1 3 13 73 573 573 3573 8573 146 147 TRAAAA AAFAAA AAAAxx +8404 3381 0 0 4 4 4 404 404 3404 8404 8 9 GLAAAA BAFAAA HHHHxx +6993 3382 1 1 3 13 93 993 993 1993 6993 186 187 ZIAAAA CAFAAA OOOOxx +660 3383 0 0 0 0 60 660 660 660 660 120 121 KZAAAA DAFAAA VVVVxx +1136 3384 0 0 6 16 36 136 1136 1136 1136 72 73 SRAAAA EAFAAA AAAAxx +3393 3385 1 1 3 13 93 393 1393 3393 3393 186 187 NAAAAA FAFAAA HHHHxx +9743 3386 1 3 3 3 43 743 1743 4743 9743 86 87 TKAAAA GAFAAA OOOOxx +9705 3387 1 1 5 5 5 705 1705 4705 9705 10 11 HJAAAA HAFAAA VVVVxx +6960 3388 0 0 0 0 60 960 960 1960 6960 120 121 SHAAAA IAFAAA AAAAxx +2753 3389 1 1 3 13 53 753 753 2753 2753 106 107 XBAAAA JAFAAA HHHHxx +906 3390 0 2 6 6 6 906 906 906 906 12 13 WIAAAA KAFAAA OOOOxx +999 3391 1 3 9 19 99 999 999 999 999 198 199 
LMAAAA LAFAAA VVVVxx +6927 3392 1 3 7 7 27 927 927 1927 6927 54 55 LGAAAA MAFAAA AAAAxx +4846 3393 0 2 6 6 46 846 846 4846 4846 92 93 KEAAAA NAFAAA HHHHxx +676 3394 0 0 6 16 76 676 676 676 676 152 153 AAAAAA OAFAAA OOOOxx +8612 3395 0 0 2 12 12 612 612 3612 8612 24 25 GTAAAA PAFAAA VVVVxx +4111 3396 1 3 1 11 11 111 111 4111 4111 22 23 DCAAAA QAFAAA AAAAxx +9994 3397 0 2 4 14 94 994 1994 4994 9994 188 189 KUAAAA RAFAAA HHHHxx +4399 3398 1 3 9 19 99 399 399 4399 4399 198 199 FNAAAA SAFAAA OOOOxx +4464 3399 0 0 4 4 64 464 464 4464 4464 128 129 SPAAAA TAFAAA VVVVxx +7316 3400 0 0 6 16 16 316 1316 2316 7316 32 33 KVAAAA UAFAAA AAAAxx +8982 3401 0 2 2 2 82 982 982 3982 8982 164 165 MHAAAA VAFAAA HHHHxx +1871 3402 1 3 1 11 71 871 1871 1871 1871 142 143 ZTAAAA WAFAAA OOOOxx +4082 3403 0 2 2 2 82 82 82 4082 4082 164 165 ABAAAA XAFAAA VVVVxx +3949 3404 1 1 9 9 49 949 1949 3949 3949 98 99 XVAAAA YAFAAA AAAAxx +9352 3405 0 0 2 12 52 352 1352 4352 9352 104 105 SVAAAA ZAFAAA HHHHxx +9638 3406 0 2 8 18 38 638 1638 4638 9638 76 77 SGAAAA ABFAAA OOOOxx +8177 3407 1 1 7 17 77 177 177 3177 8177 154 155 NCAAAA BBFAAA VVVVxx +3499 3408 1 3 9 19 99 499 1499 3499 3499 198 199 PEAAAA CBFAAA AAAAxx +4233 3409 1 1 3 13 33 233 233 4233 4233 66 67 VGAAAA DBFAAA HHHHxx +1953 3410 1 1 3 13 53 953 1953 1953 1953 106 107 DXAAAA EBFAAA OOOOxx +7372 3411 0 0 2 12 72 372 1372 2372 7372 144 145 OXAAAA FBFAAA VVVVxx +5127 3412 1 3 7 7 27 127 1127 127 5127 54 55 FPAAAA GBFAAA AAAAxx +4384 3413 0 0 4 4 84 384 384 4384 4384 168 169 QMAAAA HBFAAA HHHHxx +9964 3414 0 0 4 4 64 964 1964 4964 9964 128 129 GTAAAA IBFAAA OOOOxx +5392 3415 0 0 2 12 92 392 1392 392 5392 184 185 KZAAAA JBFAAA VVVVxx +616 3416 0 0 6 16 16 616 616 616 616 32 33 SXAAAA KBFAAA AAAAxx +591 3417 1 3 1 11 91 591 591 591 591 182 183 TWAAAA LBFAAA HHHHxx +6422 3418 0 2 2 2 22 422 422 1422 6422 44 45 ANAAAA MBFAAA OOOOxx +6551 3419 1 3 1 11 51 551 551 1551 6551 102 103 ZRAAAA NBFAAA VVVVxx +9286 3420 0 2 6 6 86 286 1286 4286 9286 172 173 ETAAAA OBFAAA AAAAxx +3817 3421 1 1 7 17 17 817 1817 3817 3817 34 35 VQAAAA PBFAAA HHHHxx +7717 3422 1 1 7 17 17 717 1717 2717 7717 34 35 VKAAAA QBFAAA OOOOxx +8718 3423 0 2 8 18 18 718 718 3718 8718 36 37 IXAAAA RBFAAA VVVVxx +8608 3424 0 0 8 8 8 608 608 3608 8608 16 17 CTAAAA SBFAAA AAAAxx +2242 3425 0 2 2 2 42 242 242 2242 2242 84 85 GIAAAA TBFAAA HHHHxx +4811 3426 1 3 1 11 11 811 811 4811 4811 22 23 BDAAAA UBFAAA OOOOxx +6838 3427 0 2 8 18 38 838 838 1838 6838 76 77 ADAAAA VBFAAA VVVVxx +787 3428 1 3 7 7 87 787 787 787 787 174 175 HEAAAA WBFAAA AAAAxx +7940 3429 0 0 0 0 40 940 1940 2940 7940 80 81 KTAAAA XBFAAA HHHHxx +336 3430 0 0 6 16 36 336 336 336 336 72 73 YMAAAA YBFAAA OOOOxx +9859 3431 1 3 9 19 59 859 1859 4859 9859 118 119 FPAAAA ZBFAAA VVVVxx +3864 3432 0 0 4 4 64 864 1864 3864 3864 128 129 QSAAAA ACFAAA AAAAxx +7162 3433 0 2 2 2 62 162 1162 2162 7162 124 125 MPAAAA BCFAAA HHHHxx +2071 3434 1 3 1 11 71 71 71 2071 2071 142 143 RBAAAA CCFAAA OOOOxx +7469 3435 1 1 9 9 69 469 1469 2469 7469 138 139 HBAAAA DCFAAA VVVVxx +2917 3436 1 1 7 17 17 917 917 2917 2917 34 35 FIAAAA ECFAAA AAAAxx +7486 3437 0 2 6 6 86 486 1486 2486 7486 172 173 YBAAAA FCFAAA HHHHxx +3355 3438 1 3 5 15 55 355 1355 3355 3355 110 111 BZAAAA GCFAAA OOOOxx +6998 3439 0 2 8 18 98 998 998 1998 6998 196 197 EJAAAA HCFAAA VVVVxx +5498 3440 0 2 8 18 98 498 1498 498 5498 196 197 MDAAAA ICFAAA AAAAxx +5113 3441 1 1 3 13 13 113 1113 113 5113 26 27 ROAAAA JCFAAA HHHHxx +2846 3442 0 2 6 6 46 846 846 2846 2846 92 93 MFAAAA KCFAAA OOOOxx +6834 3443 0 2 4 14 34 834 
834 1834 6834 68 69 WCAAAA LCFAAA VVVVxx +8925 3444 1 1 5 5 25 925 925 3925 8925 50 51 HFAAAA MCFAAA AAAAxx +2757 3445 1 1 7 17 57 757 757 2757 2757 114 115 BCAAAA NCFAAA HHHHxx +2775 3446 1 3 5 15 75 775 775 2775 2775 150 151 TCAAAA OCFAAA OOOOxx +6182 3447 0 2 2 2 82 182 182 1182 6182 164 165 UDAAAA PCFAAA VVVVxx +4488 3448 0 0 8 8 88 488 488 4488 4488 176 177 QQAAAA QCFAAA AAAAxx +8523 3449 1 3 3 3 23 523 523 3523 8523 46 47 VPAAAA RCFAAA HHHHxx +52 3450 0 0 2 12 52 52 52 52 52 104 105 ACAAAA SCFAAA OOOOxx +7251 3451 1 3 1 11 51 251 1251 2251 7251 102 103 XSAAAA TCFAAA VVVVxx +6130 3452 0 2 0 10 30 130 130 1130 6130 60 61 UBAAAA UCFAAA AAAAxx +205 3453 1 1 5 5 5 205 205 205 205 10 11 XHAAAA VCFAAA HHHHxx +1186 3454 0 2 6 6 86 186 1186 1186 1186 172 173 QTAAAA WCFAAA OOOOxx +1738 3455 0 2 8 18 38 738 1738 1738 1738 76 77 WOAAAA XCFAAA VVVVxx +9485 3456 1 1 5 5 85 485 1485 4485 9485 170 171 VAAAAA YCFAAA AAAAxx +4235 3457 1 3 5 15 35 235 235 4235 4235 70 71 XGAAAA ZCFAAA HHHHxx +7891 3458 1 3 1 11 91 891 1891 2891 7891 182 183 NRAAAA ADFAAA OOOOxx +4960 3459 0 0 0 0 60 960 960 4960 4960 120 121 UIAAAA BDFAAA VVVVxx +8911 3460 1 3 1 11 11 911 911 3911 8911 22 23 TEAAAA CDFAAA AAAAxx +1219 3461 1 3 9 19 19 219 1219 1219 1219 38 39 XUAAAA DDFAAA HHHHxx +9652 3462 0 0 2 12 52 652 1652 4652 9652 104 105 GHAAAA EDFAAA OOOOxx +9715 3463 1 3 5 15 15 715 1715 4715 9715 30 31 RJAAAA FDFAAA VVVVxx +6629 3464 1 1 9 9 29 629 629 1629 6629 58 59 ZUAAAA GDFAAA AAAAxx +700 3465 0 0 0 0 0 700 700 700 700 0 1 YAAAAA HDFAAA HHHHxx +9819 3466 1 3 9 19 19 819 1819 4819 9819 38 39 RNAAAA IDFAAA OOOOxx +5188 3467 0 0 8 8 88 188 1188 188 5188 176 177 ORAAAA JDFAAA VVVVxx +5367 3468 1 3 7 7 67 367 1367 367 5367 134 135 LYAAAA KDFAAA AAAAxx +6447 3469 1 3 7 7 47 447 447 1447 6447 94 95 ZNAAAA LDFAAA HHHHxx +720 3470 0 0 0 0 20 720 720 720 720 40 41 SBAAAA MDFAAA OOOOxx +9157 3471 1 1 7 17 57 157 1157 4157 9157 114 115 FOAAAA NDFAAA VVVVxx +1082 3472 0 2 2 2 82 82 1082 1082 1082 164 165 QPAAAA ODFAAA AAAAxx +3179 3473 1 3 9 19 79 179 1179 3179 3179 158 159 HSAAAA PDFAAA HHHHxx +4818 3474 0 2 8 18 18 818 818 4818 4818 36 37 IDAAAA QDFAAA OOOOxx +7607 3475 1 3 7 7 7 607 1607 2607 7607 14 15 PGAAAA RDFAAA VVVVxx +2352 3476 0 0 2 12 52 352 352 2352 2352 104 105 MMAAAA SDFAAA AAAAxx +1170 3477 0 2 0 10 70 170 1170 1170 1170 140 141 ATAAAA TDFAAA HHHHxx +4269 3478 1 1 9 9 69 269 269 4269 4269 138 139 FIAAAA UDFAAA OOOOxx +8767 3479 1 3 7 7 67 767 767 3767 8767 134 135 FZAAAA VDFAAA VVVVxx +3984 3480 0 0 4 4 84 984 1984 3984 3984 168 169 GXAAAA WDFAAA AAAAxx +3190 3481 0 2 0 10 90 190 1190 3190 3190 180 181 SSAAAA XDFAAA HHHHxx +7456 3482 0 0 6 16 56 456 1456 2456 7456 112 113 UAAAAA YDFAAA OOOOxx +4348 3483 0 0 8 8 48 348 348 4348 4348 96 97 GLAAAA ZDFAAA VVVVxx +3150 3484 0 2 0 10 50 150 1150 3150 3150 100 101 ERAAAA AEFAAA AAAAxx +8780 3485 0 0 0 0 80 780 780 3780 8780 160 161 SZAAAA BEFAAA HHHHxx +2553 3486 1 1 3 13 53 553 553 2553 2553 106 107 FUAAAA CEFAAA OOOOxx +7526 3487 0 2 6 6 26 526 1526 2526 7526 52 53 MDAAAA DEFAAA VVVVxx +2031 3488 1 3 1 11 31 31 31 2031 2031 62 63 DAAAAA EEFAAA AAAAxx +8793 3489 1 1 3 13 93 793 793 3793 8793 186 187 FAAAAA FEFAAA HHHHxx +1122 3490 0 2 2 2 22 122 1122 1122 1122 44 45 ERAAAA GEFAAA OOOOxx +1855 3491 1 3 5 15 55 855 1855 1855 1855 110 111 JTAAAA HEFAAA VVVVxx +6613 3492 1 1 3 13 13 613 613 1613 6613 26 27 JUAAAA IEFAAA AAAAxx +3231 3493 1 3 1 11 31 231 1231 3231 3231 62 63 HUAAAA JEFAAA HHHHxx +9101 3494 1 1 1 1 1 101 1101 4101 9101 2 3 BMAAAA KEFAAA OOOOxx +4937 3495 1 1 7 
17 37 937 937 4937 4937 74 75 XHAAAA LEFAAA VVVVxx +666 3496 0 2 6 6 66 666 666 666 666 132 133 QZAAAA MEFAAA AAAAxx +8943 3497 1 3 3 3 43 943 943 3943 8943 86 87 ZFAAAA NEFAAA HHHHxx +6164 3498 0 0 4 4 64 164 164 1164 6164 128 129 CDAAAA OEFAAA OOOOxx +1081 3499 1 1 1 1 81 81 1081 1081 1081 162 163 PPAAAA PEFAAA VVVVxx +210 3500 0 2 0 10 10 210 210 210 210 20 21 CIAAAA QEFAAA AAAAxx +6024 3501 0 0 4 4 24 24 24 1024 6024 48 49 SXAAAA REFAAA HHHHxx +5715 3502 1 3 5 15 15 715 1715 715 5715 30 31 VLAAAA SEFAAA OOOOxx +8938 3503 0 2 8 18 38 938 938 3938 8938 76 77 UFAAAA TEFAAA VVVVxx +1326 3504 0 2 6 6 26 326 1326 1326 1326 52 53 AZAAAA UEFAAA AAAAxx +7111 3505 1 3 1 11 11 111 1111 2111 7111 22 23 NNAAAA VEFAAA HHHHxx +757 3506 1 1 7 17 57 757 757 757 757 114 115 DDAAAA WEFAAA OOOOxx +8933 3507 1 1 3 13 33 933 933 3933 8933 66 67 PFAAAA XEFAAA VVVVxx +6495 3508 1 3 5 15 95 495 495 1495 6495 190 191 VPAAAA YEFAAA AAAAxx +3134 3509 0 2 4 14 34 134 1134 3134 3134 68 69 OQAAAA ZEFAAA HHHHxx +1304 3510 0 0 4 4 4 304 1304 1304 1304 8 9 EYAAAA AFFAAA OOOOxx +1835 3511 1 3 5 15 35 835 1835 1835 1835 70 71 PSAAAA BFFAAA VVVVxx +7275 3512 1 3 5 15 75 275 1275 2275 7275 150 151 VTAAAA CFFAAA AAAAxx +7337 3513 1 1 7 17 37 337 1337 2337 7337 74 75 FWAAAA DFFAAA HHHHxx +1282 3514 0 2 2 2 82 282 1282 1282 1282 164 165 IXAAAA EFFAAA OOOOxx +6566 3515 0 2 6 6 66 566 566 1566 6566 132 133 OSAAAA FFFAAA VVVVxx +3786 3516 0 2 6 6 86 786 1786 3786 3786 172 173 QPAAAA GFFAAA AAAAxx +5741 3517 1 1 1 1 41 741 1741 741 5741 82 83 VMAAAA HFFAAA HHHHxx +6076 3518 0 0 6 16 76 76 76 1076 6076 152 153 SZAAAA IFFAAA OOOOxx +9998 3519 0 2 8 18 98 998 1998 4998 9998 196 197 OUAAAA JFFAAA VVVVxx +6268 3520 0 0 8 8 68 268 268 1268 6268 136 137 CHAAAA KFFAAA AAAAxx +9647 3521 1 3 7 7 47 647 1647 4647 9647 94 95 BHAAAA LFFAAA HHHHxx +4877 3522 1 1 7 17 77 877 877 4877 4877 154 155 PFAAAA MFFAAA OOOOxx +2652 3523 0 0 2 12 52 652 652 2652 2652 104 105 AYAAAA NFFAAA VVVVxx +1247 3524 1 3 7 7 47 247 1247 1247 1247 94 95 ZVAAAA OFFAAA AAAAxx +2721 3525 1 1 1 1 21 721 721 2721 2721 42 43 RAAAAA PFFAAA HHHHxx +5968 3526 0 0 8 8 68 968 1968 968 5968 136 137 OVAAAA QFFAAA OOOOxx +9570 3527 0 2 0 10 70 570 1570 4570 9570 140 141 CEAAAA RFFAAA VVVVxx +6425 3528 1 1 5 5 25 425 425 1425 6425 50 51 DNAAAA SFFAAA AAAAxx +5451 3529 1 3 1 11 51 451 1451 451 5451 102 103 RBAAAA TFFAAA HHHHxx +5668 3530 0 0 8 8 68 668 1668 668 5668 136 137 AKAAAA UFFAAA OOOOxx +9493 3531 1 1 3 13 93 493 1493 4493 9493 186 187 DBAAAA VFFAAA VVVVxx +7973 3532 1 1 3 13 73 973 1973 2973 7973 146 147 RUAAAA WFFAAA AAAAxx +8250 3533 0 2 0 10 50 250 250 3250 8250 100 101 IFAAAA XFFAAA HHHHxx +82 3534 0 2 2 2 82 82 82 82 82 164 165 EDAAAA YFFAAA OOOOxx +6258 3535 0 2 8 18 58 258 258 1258 6258 116 117 SGAAAA ZFFAAA VVVVxx +9978 3536 0 2 8 18 78 978 1978 4978 9978 156 157 UTAAAA AGFAAA AAAAxx +6930 3537 0 2 0 10 30 930 930 1930 6930 60 61 OGAAAA BGFAAA HHHHxx +3746 3538 0 2 6 6 46 746 1746 3746 3746 92 93 COAAAA CGFAAA OOOOxx +7065 3539 1 1 5 5 65 65 1065 2065 7065 130 131 TLAAAA DGFAAA VVVVxx +4281 3540 1 1 1 1 81 281 281 4281 4281 162 163 RIAAAA EGFAAA AAAAxx +4367 3541 1 3 7 7 67 367 367 4367 4367 134 135 ZLAAAA FGFAAA HHHHxx +9526 3542 0 2 6 6 26 526 1526 4526 9526 52 53 KCAAAA GGFAAA OOOOxx +5880 3543 0 0 0 0 80 880 1880 880 5880 160 161 ESAAAA HGFAAA VVVVxx +8480 3544 0 0 0 0 80 480 480 3480 8480 160 161 EOAAAA IGFAAA AAAAxx +2476 3545 0 0 6 16 76 476 476 2476 2476 152 153 GRAAAA JGFAAA HHHHxx +9074 3546 0 2 4 14 74 74 1074 4074 9074 148 149 ALAAAA KGFAAA OOOOxx +4830 
3547 0 2 0 10 30 830 830 4830 4830 60 61 UDAAAA LGFAAA VVVVxx +3207 3548 1 3 7 7 7 207 1207 3207 3207 14 15 JTAAAA MGFAAA AAAAxx +7894 3549 0 2 4 14 94 894 1894 2894 7894 188 189 QRAAAA NGFAAA HHHHxx +3860 3550 0 0 0 0 60 860 1860 3860 3860 120 121 MSAAAA OGFAAA OOOOxx +5293 3551 1 1 3 13 93 293 1293 293 5293 186 187 PVAAAA PGFAAA VVVVxx +6895 3552 1 3 5 15 95 895 895 1895 6895 190 191 FFAAAA QGFAAA AAAAxx +9908 3553 0 0 8 8 8 908 1908 4908 9908 16 17 CRAAAA RGFAAA HHHHxx +9247 3554 1 3 7 7 47 247 1247 4247 9247 94 95 RRAAAA SGFAAA OOOOxx +8110 3555 0 2 0 10 10 110 110 3110 8110 20 21 YZAAAA TGFAAA VVVVxx +4716 3556 0 0 6 16 16 716 716 4716 4716 32 33 KZAAAA UGFAAA AAAAxx +4979 3557 1 3 9 19 79 979 979 4979 4979 158 159 NJAAAA VGFAAA HHHHxx +5280 3558 0 0 0 0 80 280 1280 280 5280 160 161 CVAAAA WGFAAA OOOOxx +8326 3559 0 2 6 6 26 326 326 3326 8326 52 53 GIAAAA XGFAAA VVVVxx +5572 3560 0 0 2 12 72 572 1572 572 5572 144 145 IGAAAA YGFAAA AAAAxx +4665 3561 1 1 5 5 65 665 665 4665 4665 130 131 LXAAAA ZGFAAA HHHHxx +3665 3562 1 1 5 5 65 665 1665 3665 3665 130 131 ZKAAAA AHFAAA OOOOxx +6744 3563 0 0 4 4 44 744 744 1744 6744 88 89 KZAAAA BHFAAA VVVVxx +1897 3564 1 1 7 17 97 897 1897 1897 1897 194 195 ZUAAAA CHFAAA AAAAxx +1220 3565 0 0 0 0 20 220 1220 1220 1220 40 41 YUAAAA DHFAAA HHHHxx +2614 3566 0 2 4 14 14 614 614 2614 2614 28 29 OWAAAA EHFAAA OOOOxx +8509 3567 1 1 9 9 9 509 509 3509 8509 18 19 HPAAAA FHFAAA VVVVxx +8521 3568 1 1 1 1 21 521 521 3521 8521 42 43 TPAAAA GHFAAA AAAAxx +4121 3569 1 1 1 1 21 121 121 4121 4121 42 43 NCAAAA HHFAAA HHHHxx +9663 3570 1 3 3 3 63 663 1663 4663 9663 126 127 RHAAAA IHFAAA OOOOxx +2346 3571 0 2 6 6 46 346 346 2346 2346 92 93 GMAAAA JHFAAA VVVVxx +3370 3572 0 2 0 10 70 370 1370 3370 3370 140 141 QZAAAA KHFAAA AAAAxx +1498 3573 0 2 8 18 98 498 1498 1498 1498 196 197 QFAAAA LHFAAA HHHHxx +7422 3574 0 2 2 2 22 422 1422 2422 7422 44 45 MZAAAA MHFAAA OOOOxx +3472 3575 0 0 2 12 72 472 1472 3472 3472 144 145 ODAAAA NHFAAA VVVVxx +4126 3576 0 2 6 6 26 126 126 4126 4126 52 53 SCAAAA OHFAAA AAAAxx +4494 3577 0 2 4 14 94 494 494 4494 4494 188 189 WQAAAA PHFAAA HHHHxx +6323 3578 1 3 3 3 23 323 323 1323 6323 46 47 FJAAAA QHFAAA OOOOxx +2823 3579 1 3 3 3 23 823 823 2823 2823 46 47 PEAAAA RHFAAA VVVVxx +8596 3580 0 0 6 16 96 596 596 3596 8596 192 193 QSAAAA SHFAAA AAAAxx +6642 3581 0 2 2 2 42 642 642 1642 6642 84 85 MVAAAA THFAAA HHHHxx +9276 3582 0 0 6 16 76 276 1276 4276 9276 152 153 USAAAA UHFAAA OOOOxx +4148 3583 0 0 8 8 48 148 148 4148 4148 96 97 ODAAAA VHFAAA VVVVxx +9770 3584 0 2 0 10 70 770 1770 4770 9770 140 141 ULAAAA WHFAAA AAAAxx +9812 3585 0 0 2 12 12 812 1812 4812 9812 24 25 KNAAAA XHFAAA HHHHxx +4419 3586 1 3 9 19 19 419 419 4419 4419 38 39 ZNAAAA YHFAAA OOOOxx +3802 3587 0 2 2 2 2 802 1802 3802 3802 4 5 GQAAAA ZHFAAA VVVVxx +3210 3588 0 2 0 10 10 210 1210 3210 3210 20 21 MTAAAA AIFAAA AAAAxx +6794 3589 0 2 4 14 94 794 794 1794 6794 188 189 IBAAAA BIFAAA HHHHxx +242 3590 0 2 2 2 42 242 242 242 242 84 85 IJAAAA CIFAAA OOOOxx +962 3591 0 2 2 2 62 962 962 962 962 124 125 ALAAAA DIFAAA VVVVxx +7151 3592 1 3 1 11 51 151 1151 2151 7151 102 103 BPAAAA EIFAAA AAAAxx +9440 3593 0 0 0 0 40 440 1440 4440 9440 80 81 CZAAAA FIFAAA HHHHxx +721 3594 1 1 1 1 21 721 721 721 721 42 43 TBAAAA GIFAAA OOOOxx +2119 3595 1 3 9 19 19 119 119 2119 2119 38 39 NDAAAA HIFAAA VVVVxx +9883 3596 1 3 3 3 83 883 1883 4883 9883 166 167 DQAAAA IIFAAA AAAAxx +5071 3597 1 3 1 11 71 71 1071 71 5071 142 143 BNAAAA JIFAAA HHHHxx +8239 3598 1 3 9 19 39 239 239 3239 8239 78 79 XEAAAA KIFAAA OOOOxx 
+7451 3599 1 3 1 11 51 451 1451 2451 7451 102 103 PAAAAA LIFAAA VVVVxx +9517 3600 1 1 7 17 17 517 1517 4517 9517 34 35 BCAAAA MIFAAA AAAAxx +9180 3601 0 0 0 0 80 180 1180 4180 9180 160 161 CPAAAA NIFAAA HHHHxx +9327 3602 1 3 7 7 27 327 1327 4327 9327 54 55 TUAAAA OIFAAA OOOOxx +5462 3603 0 2 2 2 62 462 1462 462 5462 124 125 CCAAAA PIFAAA VVVVxx +8306 3604 0 2 6 6 6 306 306 3306 8306 12 13 MHAAAA QIFAAA AAAAxx +6234 3605 0 2 4 14 34 234 234 1234 6234 68 69 UFAAAA RIFAAA HHHHxx +8771 3606 1 3 1 11 71 771 771 3771 8771 142 143 JZAAAA SIFAAA OOOOxx +5853 3607 1 1 3 13 53 853 1853 853 5853 106 107 DRAAAA TIFAAA VVVVxx +8373 3608 1 1 3 13 73 373 373 3373 8373 146 147 BKAAAA UIFAAA AAAAxx +5017 3609 1 1 7 17 17 17 1017 17 5017 34 35 ZKAAAA VIFAAA HHHHxx +8025 3610 1 1 5 5 25 25 25 3025 8025 50 51 RWAAAA WIFAAA OOOOxx +2526 3611 0 2 6 6 26 526 526 2526 2526 52 53 ETAAAA XIFAAA VVVVxx +7419 3612 1 3 9 19 19 419 1419 2419 7419 38 39 JZAAAA YIFAAA AAAAxx +4572 3613 0 0 2 12 72 572 572 4572 4572 144 145 WTAAAA ZIFAAA HHHHxx +7744 3614 0 0 4 4 44 744 1744 2744 7744 88 89 WLAAAA AJFAAA OOOOxx +8825 3615 1 1 5 5 25 825 825 3825 8825 50 51 LBAAAA BJFAAA VVVVxx +6067 3616 1 3 7 7 67 67 67 1067 6067 134 135 JZAAAA CJFAAA AAAAxx +3291 3617 1 3 1 11 91 291 1291 3291 3291 182 183 PWAAAA DJFAAA HHHHxx +7115 3618 1 3 5 15 15 115 1115 2115 7115 30 31 RNAAAA EJFAAA OOOOxx +2626 3619 0 2 6 6 26 626 626 2626 2626 52 53 AXAAAA FJFAAA VVVVxx +4109 3620 1 1 9 9 9 109 109 4109 4109 18 19 BCAAAA GJFAAA AAAAxx +4056 3621 0 0 6 16 56 56 56 4056 4056 112 113 AAAAAA HJFAAA HHHHxx +6811 3622 1 3 1 11 11 811 811 1811 6811 22 23 ZBAAAA IJFAAA OOOOxx +680 3623 0 0 0 0 80 680 680 680 680 160 161 EAAAAA JJFAAA VVVVxx +474 3624 0 2 4 14 74 474 474 474 474 148 149 GSAAAA KJFAAA AAAAxx +9294 3625 0 2 4 14 94 294 1294 4294 9294 188 189 MTAAAA LJFAAA HHHHxx +7555 3626 1 3 5 15 55 555 1555 2555 7555 110 111 PEAAAA MJFAAA OOOOxx +8076 3627 0 0 6 16 76 76 76 3076 8076 152 153 QYAAAA NJFAAA VVVVxx +3840 3628 0 0 0 0 40 840 1840 3840 3840 80 81 SRAAAA OJFAAA AAAAxx +5955 3629 1 3 5 15 55 955 1955 955 5955 110 111 BVAAAA PJFAAA HHHHxx +994 3630 0 2 4 14 94 994 994 994 994 188 189 GMAAAA QJFAAA OOOOxx +2089 3631 1 1 9 9 89 89 89 2089 2089 178 179 JCAAAA RJFAAA VVVVxx +869 3632 1 1 9 9 69 869 869 869 869 138 139 LHAAAA SJFAAA AAAAxx +1223 3633 1 3 3 3 23 223 1223 1223 1223 46 47 BVAAAA TJFAAA HHHHxx +1514 3634 0 2 4 14 14 514 1514 1514 1514 28 29 GGAAAA UJFAAA OOOOxx +4891 3635 1 3 1 11 91 891 891 4891 4891 182 183 DGAAAA VJFAAA VVVVxx +4190 3636 0 2 0 10 90 190 190 4190 4190 180 181 EFAAAA WJFAAA AAAAxx +4377 3637 1 1 7 17 77 377 377 4377 4377 154 155 JMAAAA XJFAAA HHHHxx +9195 3638 1 3 5 15 95 195 1195 4195 9195 190 191 RPAAAA YJFAAA OOOOxx +3827 3639 1 3 7 7 27 827 1827 3827 3827 54 55 FRAAAA ZJFAAA VVVVxx +7386 3640 0 2 6 6 86 386 1386 2386 7386 172 173 CYAAAA AKFAAA AAAAxx +6665 3641 1 1 5 5 65 665 665 1665 6665 130 131 JWAAAA BKFAAA HHHHxx +7514 3642 0 2 4 14 14 514 1514 2514 7514 28 29 ADAAAA CKFAAA OOOOxx +6431 3643 1 3 1 11 31 431 431 1431 6431 62 63 JNAAAA DKFAAA VVVVxx +3251 3644 1 3 1 11 51 251 1251 3251 3251 102 103 BVAAAA EKFAAA AAAAxx +8439 3645 1 3 9 19 39 439 439 3439 8439 78 79 PMAAAA FKFAAA HHHHxx +831 3646 1 3 1 11 31 831 831 831 831 62 63 ZFAAAA GKFAAA OOOOxx +8485 3647 1 1 5 5 85 485 485 3485 8485 170 171 JOAAAA HKFAAA VVVVxx +7314 3648 0 2 4 14 14 314 1314 2314 7314 28 29 IVAAAA IKFAAA AAAAxx +3044 3649 0 0 4 4 44 44 1044 3044 3044 88 89 CNAAAA JKFAAA HHHHxx +4283 3650 1 3 3 3 83 283 283 4283 4283 166 167 TIAAAA KKFAAA 
OOOOxx +298 3651 0 2 8 18 98 298 298 298 298 196 197 MLAAAA LKFAAA VVVVxx +7114 3652 0 2 4 14 14 114 1114 2114 7114 28 29 QNAAAA MKFAAA AAAAxx +9664 3653 0 0 4 4 64 664 1664 4664 9664 128 129 SHAAAA NKFAAA HHHHxx +5315 3654 1 3 5 15 15 315 1315 315 5315 30 31 LWAAAA OKFAAA OOOOxx +2164 3655 0 0 4 4 64 164 164 2164 2164 128 129 GFAAAA PKFAAA VVVVxx +3390 3656 0 2 0 10 90 390 1390 3390 3390 180 181 KAAAAA QKFAAA AAAAxx +836 3657 0 0 6 16 36 836 836 836 836 72 73 EGAAAA RKFAAA HHHHxx +3316 3658 0 0 6 16 16 316 1316 3316 3316 32 33 OXAAAA SKFAAA OOOOxx +1284 3659 0 0 4 4 84 284 1284 1284 1284 168 169 KXAAAA TKFAAA VVVVxx +2497 3660 1 1 7 17 97 497 497 2497 2497 194 195 BSAAAA UKFAAA AAAAxx +1374 3661 0 2 4 14 74 374 1374 1374 1374 148 149 WAAAAA VKFAAA HHHHxx +9525 3662 1 1 5 5 25 525 1525 4525 9525 50 51 JCAAAA WKFAAA OOOOxx +2911 3663 1 3 1 11 11 911 911 2911 2911 22 23 ZHAAAA XKFAAA VVVVxx +9686 3664 0 2 6 6 86 686 1686 4686 9686 172 173 OIAAAA YKFAAA AAAAxx +584 3665 0 0 4 4 84 584 584 584 584 168 169 MWAAAA ZKFAAA HHHHxx +5653 3666 1 1 3 13 53 653 1653 653 5653 106 107 LJAAAA ALFAAA OOOOxx +4986 3667 0 2 6 6 86 986 986 4986 4986 172 173 UJAAAA BLFAAA VVVVxx +6049 3668 1 1 9 9 49 49 49 1049 6049 98 99 RYAAAA CLFAAA AAAAxx +9891 3669 1 3 1 11 91 891 1891 4891 9891 182 183 LQAAAA DLFAAA HHHHxx +8809 3670 1 1 9 9 9 809 809 3809 8809 18 19 VAAAAA ELFAAA OOOOxx +8598 3671 0 2 8 18 98 598 598 3598 8598 196 197 SSAAAA FLFAAA VVVVxx +2573 3672 1 1 3 13 73 573 573 2573 2573 146 147 ZUAAAA GLFAAA AAAAxx +6864 3673 0 0 4 4 64 864 864 1864 6864 128 129 AEAAAA HLFAAA HHHHxx +7932 3674 0 0 2 12 32 932 1932 2932 7932 64 65 CTAAAA ILFAAA OOOOxx +6605 3675 1 1 5 5 5 605 605 1605 6605 10 11 BUAAAA JLFAAA VVVVxx +9500 3676 0 0 0 0 0 500 1500 4500 9500 0 1 KBAAAA KLFAAA AAAAxx +8742 3677 0 2 2 2 42 742 742 3742 8742 84 85 GYAAAA LLFAAA HHHHxx +9815 3678 1 3 5 15 15 815 1815 4815 9815 30 31 NNAAAA MLFAAA OOOOxx +3319 3679 1 3 9 19 19 319 1319 3319 3319 38 39 RXAAAA NLFAAA VVVVxx +184 3680 0 0 4 4 84 184 184 184 184 168 169 CHAAAA OLFAAA AAAAxx +8886 3681 0 2 6 6 86 886 886 3886 8886 172 173 UDAAAA PLFAAA HHHHxx +7050 3682 0 2 0 10 50 50 1050 2050 7050 100 101 ELAAAA QLFAAA OOOOxx +9781 3683 1 1 1 1 81 781 1781 4781 9781 162 163 FMAAAA RLFAAA VVVVxx +2443 3684 1 3 3 3 43 443 443 2443 2443 86 87 ZPAAAA SLFAAA AAAAxx +1160 3685 0 0 0 0 60 160 1160 1160 1160 120 121 QSAAAA TLFAAA HHHHxx +4600 3686 0 0 0 0 0 600 600 4600 4600 0 1 YUAAAA ULFAAA OOOOxx +813 3687 1 1 3 13 13 813 813 813 813 26 27 HFAAAA VLFAAA VVVVxx +5078 3688 0 2 8 18 78 78 1078 78 5078 156 157 INAAAA WLFAAA AAAAxx +9008 3689 0 0 8 8 8 8 1008 4008 9008 16 17 MIAAAA XLFAAA HHHHxx +9016 3690 0 0 6 16 16 16 1016 4016 9016 32 33 UIAAAA YLFAAA OOOOxx +2747 3691 1 3 7 7 47 747 747 2747 2747 94 95 RBAAAA ZLFAAA VVVVxx +3106 3692 0 2 6 6 6 106 1106 3106 3106 12 13 MPAAAA AMFAAA AAAAxx +8235 3693 1 3 5 15 35 235 235 3235 8235 70 71 TEAAAA BMFAAA HHHHxx +5582 3694 0 2 2 2 82 582 1582 582 5582 164 165 SGAAAA CMFAAA OOOOxx +4334 3695 0 2 4 14 34 334 334 4334 4334 68 69 SKAAAA DMFAAA VVVVxx +1612 3696 0 0 2 12 12 612 1612 1612 1612 24 25 AKAAAA EMFAAA AAAAxx +5650 3697 0 2 0 10 50 650 1650 650 5650 100 101 IJAAAA FMFAAA HHHHxx +6086 3698 0 2 6 6 86 86 86 1086 6086 172 173 CAAAAA GMFAAA OOOOxx +9667 3699 1 3 7 7 67 667 1667 4667 9667 134 135 VHAAAA HMFAAA VVVVxx +4215 3700 1 3 5 15 15 215 215 4215 4215 30 31 DGAAAA IMFAAA AAAAxx +8553 3701 1 1 3 13 53 553 553 3553 8553 106 107 ZQAAAA JMFAAA HHHHxx +9066 3702 0 2 6 6 66 66 1066 4066 9066 132 133 SKAAAA KMFAAA 
OOOOxx +1092 3703 0 0 2 12 92 92 1092 1092 1092 184 185 AQAAAA LMFAAA VVVVxx +2848 3704 0 0 8 8 48 848 848 2848 2848 96 97 OFAAAA MMFAAA AAAAxx +2765 3705 1 1 5 5 65 765 765 2765 2765 130 131 JCAAAA NMFAAA HHHHxx +6513 3706 1 1 3 13 13 513 513 1513 6513 26 27 NQAAAA OMFAAA OOOOxx +6541 3707 1 1 1 1 41 541 541 1541 6541 82 83 PRAAAA PMFAAA VVVVxx +9617 3708 1 1 7 17 17 617 1617 4617 9617 34 35 XFAAAA QMFAAA AAAAxx +5870 3709 0 2 0 10 70 870 1870 870 5870 140 141 URAAAA RMFAAA HHHHxx +8811 3710 1 3 1 11 11 811 811 3811 8811 22 23 XAAAAA SMFAAA OOOOxx +4529 3711 1 1 9 9 29 529 529 4529 4529 58 59 FSAAAA TMFAAA VVVVxx +161 3712 1 1 1 1 61 161 161 161 161 122 123 FGAAAA UMFAAA AAAAxx +641 3713 1 1 1 1 41 641 641 641 641 82 83 RYAAAA VMFAAA HHHHxx +4767 3714 1 3 7 7 67 767 767 4767 4767 134 135 JBAAAA WMFAAA OOOOxx +6293 3715 1 1 3 13 93 293 293 1293 6293 186 187 BIAAAA XMFAAA VVVVxx +3816 3716 0 0 6 16 16 816 1816 3816 3816 32 33 UQAAAA YMFAAA AAAAxx +4748 3717 0 0 8 8 48 748 748 4748 4748 96 97 QAAAAA ZMFAAA HHHHxx +9924 3718 0 0 4 4 24 924 1924 4924 9924 48 49 SRAAAA ANFAAA OOOOxx +6716 3719 0 0 6 16 16 716 716 1716 6716 32 33 IYAAAA BNFAAA VVVVxx +8828 3720 0 0 8 8 28 828 828 3828 8828 56 57 OBAAAA CNFAAA AAAAxx +4967 3721 1 3 7 7 67 967 967 4967 4967 134 135 BJAAAA DNFAAA HHHHxx +9680 3722 0 0 0 0 80 680 1680 4680 9680 160 161 IIAAAA ENFAAA OOOOxx +2784 3723 0 0 4 4 84 784 784 2784 2784 168 169 CDAAAA FNFAAA VVVVxx +2882 3724 0 2 2 2 82 882 882 2882 2882 164 165 WGAAAA GNFAAA AAAAxx +3641 3725 1 1 1 1 41 641 1641 3641 3641 82 83 BKAAAA HNFAAA HHHHxx +5537 3726 1 1 7 17 37 537 1537 537 5537 74 75 ZEAAAA INFAAA OOOOxx +820 3727 0 0 0 0 20 820 820 820 820 40 41 OFAAAA JNFAAA VVVVxx +5847 3728 1 3 7 7 47 847 1847 847 5847 94 95 XQAAAA KNFAAA AAAAxx +566 3729 0 2 6 6 66 566 566 566 566 132 133 UVAAAA LNFAAA HHHHxx +2246 3730 0 2 6 6 46 246 246 2246 2246 92 93 KIAAAA MNFAAA OOOOxx +6680 3731 0 0 0 0 80 680 680 1680 6680 160 161 YWAAAA NNFAAA VVVVxx +2014 3732 0 2 4 14 14 14 14 2014 2014 28 29 MZAAAA ONFAAA AAAAxx +8355 3733 1 3 5 15 55 355 355 3355 8355 110 111 JJAAAA PNFAAA HHHHxx +1610 3734 0 2 0 10 10 610 1610 1610 1610 20 21 YJAAAA QNFAAA OOOOxx +9719 3735 1 3 9 19 19 719 1719 4719 9719 38 39 VJAAAA RNFAAA VVVVxx +8498 3736 0 2 8 18 98 498 498 3498 8498 196 197 WOAAAA SNFAAA AAAAxx +5883 3737 1 3 3 3 83 883 1883 883 5883 166 167 HSAAAA TNFAAA HHHHxx +7380 3738 0 0 0 0 80 380 1380 2380 7380 160 161 WXAAAA UNFAAA OOOOxx +8865 3739 1 1 5 5 65 865 865 3865 8865 130 131 ZCAAAA VNFAAA VVVVxx +4743 3740 1 3 3 3 43 743 743 4743 4743 86 87 LAAAAA WNFAAA AAAAxx +5086 3741 0 2 6 6 86 86 1086 86 5086 172 173 QNAAAA XNFAAA HHHHxx +2739 3742 1 3 9 19 39 739 739 2739 2739 78 79 JBAAAA YNFAAA OOOOxx +9375 3743 1 3 5 15 75 375 1375 4375 9375 150 151 PWAAAA ZNFAAA VVVVxx +7876 3744 0 0 6 16 76 876 1876 2876 7876 152 153 YQAAAA AOFAAA AAAAxx +453 3745 1 1 3 13 53 453 453 453 453 106 107 LRAAAA BOFAAA HHHHxx +6987 3746 1 3 7 7 87 987 987 1987 6987 174 175 TIAAAA COFAAA OOOOxx +2860 3747 0 0 0 0 60 860 860 2860 2860 120 121 AGAAAA DOFAAA VVVVxx +8372 3748 0 0 2 12 72 372 372 3372 8372 144 145 AKAAAA EOFAAA AAAAxx +2048 3749 0 0 8 8 48 48 48 2048 2048 96 97 UAAAAA FOFAAA HHHHxx +9231 3750 1 3 1 11 31 231 1231 4231 9231 62 63 BRAAAA GOFAAA OOOOxx +634 3751 0 2 4 14 34 634 634 634 634 68 69 KYAAAA HOFAAA VVVVxx +3998 3752 0 2 8 18 98 998 1998 3998 3998 196 197 UXAAAA IOFAAA AAAAxx +4728 3753 0 0 8 8 28 728 728 4728 4728 56 57 WZAAAA JOFAAA HHHHxx +579 3754 1 3 9 19 79 579 579 579 579 158 159 HWAAAA KOFAAA OOOOxx 
+815 3755 1 3 5 15 15 815 815 815 815 30 31 JFAAAA LOFAAA VVVVxx +1009 3756 1 1 9 9 9 9 1009 1009 1009 18 19 VMAAAA MOFAAA AAAAxx +6596 3757 0 0 6 16 96 596 596 1596 6596 192 193 STAAAA NOFAAA HHHHxx +2793 3758 1 1 3 13 93 793 793 2793 2793 186 187 LDAAAA OOFAAA OOOOxx +9589 3759 1 1 9 9 89 589 1589 4589 9589 178 179 VEAAAA POFAAA VVVVxx +2794 3760 0 2 4 14 94 794 794 2794 2794 188 189 MDAAAA QOFAAA AAAAxx +2551 3761 1 3 1 11 51 551 551 2551 2551 102 103 DUAAAA ROFAAA HHHHxx +1588 3762 0 0 8 8 88 588 1588 1588 1588 176 177 CJAAAA SOFAAA OOOOxx +4443 3763 1 3 3 3 43 443 443 4443 4443 86 87 XOAAAA TOFAAA VVVVxx +5009 3764 1 1 9 9 9 9 1009 9 5009 18 19 RKAAAA UOFAAA AAAAxx +4287 3765 1 3 7 7 87 287 287 4287 4287 174 175 XIAAAA VOFAAA HHHHxx +2167 3766 1 3 7 7 67 167 167 2167 2167 134 135 JFAAAA WOFAAA OOOOxx +2290 3767 0 2 0 10 90 290 290 2290 2290 180 181 CKAAAA XOFAAA VVVVxx +7225 3768 1 1 5 5 25 225 1225 2225 7225 50 51 XRAAAA YOFAAA AAAAxx +8992 3769 0 0 2 12 92 992 992 3992 8992 184 185 WHAAAA ZOFAAA HHHHxx +1540 3770 0 0 0 0 40 540 1540 1540 1540 80 81 GHAAAA APFAAA OOOOxx +2029 3771 1 1 9 9 29 29 29 2029 2029 58 59 BAAAAA BPFAAA VVVVxx +2855 3772 1 3 5 15 55 855 855 2855 2855 110 111 VFAAAA CPFAAA AAAAxx +3534 3773 0 2 4 14 34 534 1534 3534 3534 68 69 YFAAAA DPFAAA HHHHxx +8078 3774 0 2 8 18 78 78 78 3078 8078 156 157 SYAAAA EPFAAA OOOOxx +9778 3775 0 2 8 18 78 778 1778 4778 9778 156 157 CMAAAA FPFAAA VVVVxx +3543 3776 1 3 3 3 43 543 1543 3543 3543 86 87 HGAAAA GPFAAA AAAAxx +4778 3777 0 2 8 18 78 778 778 4778 4778 156 157 UBAAAA HPFAAA HHHHxx +8931 3778 1 3 1 11 31 931 931 3931 8931 62 63 NFAAAA IPFAAA OOOOxx +557 3779 1 1 7 17 57 557 557 557 557 114 115 LVAAAA JPFAAA VVVVxx +5546 3780 0 2 6 6 46 546 1546 546 5546 92 93 IFAAAA KPFAAA AAAAxx +7527 3781 1 3 7 7 27 527 1527 2527 7527 54 55 NDAAAA LPFAAA HHHHxx +5000 3782 0 0 0 0 0 0 1000 0 5000 0 1 IKAAAA MPFAAA OOOOxx +7587 3783 1 3 7 7 87 587 1587 2587 7587 174 175 VFAAAA NPFAAA VVVVxx +3014 3784 0 2 4 14 14 14 1014 3014 3014 28 29 YLAAAA OPFAAA AAAAxx +5276 3785 0 0 6 16 76 276 1276 276 5276 152 153 YUAAAA PPFAAA HHHHxx +6457 3786 1 1 7 17 57 457 457 1457 6457 114 115 JOAAAA QPFAAA OOOOxx +389 3787 1 1 9 9 89 389 389 389 389 178 179 ZOAAAA RPFAAA VVVVxx +7104 3788 0 0 4 4 4 104 1104 2104 7104 8 9 GNAAAA SPFAAA AAAAxx +9995 3789 1 3 5 15 95 995 1995 4995 9995 190 191 LUAAAA TPFAAA HHHHxx +7368 3790 0 0 8 8 68 368 1368 2368 7368 136 137 KXAAAA UPFAAA OOOOxx +3258 3791 0 2 8 18 58 258 1258 3258 3258 116 117 IVAAAA VPFAAA VVVVxx +9208 3792 0 0 8 8 8 208 1208 4208 9208 16 17 EQAAAA WPFAAA AAAAxx +2396 3793 0 0 6 16 96 396 396 2396 2396 192 193 EOAAAA XPFAAA HHHHxx +1715 3794 1 3 5 15 15 715 1715 1715 1715 30 31 ZNAAAA YPFAAA OOOOxx +1240 3795 0 0 0 0 40 240 1240 1240 1240 80 81 SVAAAA ZPFAAA VVVVxx +1952 3796 0 0 2 12 52 952 1952 1952 1952 104 105 CXAAAA AQFAAA AAAAxx +4403 3797 1 3 3 3 3 403 403 4403 4403 6 7 JNAAAA BQFAAA HHHHxx +6333 3798 1 1 3 13 33 333 333 1333 6333 66 67 PJAAAA CQFAAA OOOOxx +2492 3799 0 0 2 12 92 492 492 2492 2492 184 185 WRAAAA DQFAAA VVVVxx +6543 3800 1 3 3 3 43 543 543 1543 6543 86 87 RRAAAA EQFAAA AAAAxx +5548 3801 0 0 8 8 48 548 1548 548 5548 96 97 KFAAAA FQFAAA HHHHxx +3458 3802 0 2 8 18 58 458 1458 3458 3458 116 117 ADAAAA GQFAAA OOOOxx +2588 3803 0 0 8 8 88 588 588 2588 2588 176 177 OVAAAA HQFAAA VVVVxx +1364 3804 0 0 4 4 64 364 1364 1364 1364 128 129 MAAAAA IQFAAA AAAAxx +9856 3805 0 0 6 16 56 856 1856 4856 9856 112 113 CPAAAA JQFAAA HHHHxx +4964 3806 0 0 4 4 64 964 964 4964 4964 128 129 YIAAAA KQFAAA 
OOOOxx
+773 3807 1 1 3 13 73 773 773 773 773 146 147 TDAAAA LQFAAA VVVVxx
+6402 3808 0 2 2 2 2 402 402 1402 6402 4 5 GMAAAA MQFAAA AAAAxx
+7213 3809 1 1 3 13 13 213 1213 2213 7213 26 27 LRAAAA NQFAAA HHHHxx
+3385 3810 1 1 5 5 85 385 1385 3385 3385 170 171 FAAAAA OQFAAA OOOOxx
+6005 3811 1 1 5 5 5 5 5 1005 6005 10 11 ZWAAAA PQFAAA VVVVxx
+9346 3812 0 2 6 6 46 346 1346 4346 9346 92 93 MVAAAA QQFAAA AAAAxx
+1831 3813 1 3 1 11 31 831 1831 1831 1831 62 63 LSAAAA RQFAAA HHHHxx
+5406 3814 0 2 6 6 6 406 1406 406 5406 12 13 YZAAAA SQFAAA OOOOxx
+2154 3815 0 2 4 14 54 154 154 2154 2154 108 109 WEAAAA TQFAAA VVVVxx
+3721 3816 1 1 1 1 21 721 1721 3721 3721 42 43 DNAAAA UQFAAA AAAAxx
+2889 3817 1 1 9 9 89 889 889 2889 2889 178 179 DHAAAA VQFAAA HHHHxx
+4410 3818 0 2 0 10 10 410 410 4410 4410 20 21 QNAAAA WQFAAA OOOOxx
+7102 3819 0 2 2 2 2 102 1102 2102 7102 4 5 ENAAAA XQFAAA VVVVxx
+4057 3820 1 1 7 17 57 57 57 4057 4057 114 115 BAAAAA YQFAAA AAAAxx
+9780 3821 0 0 0 0 80 780 1780 4780 9780 160 161 EMAAAA ZQFAAA HHHHxx
+9481 3822 1 1 1 1 81 481 1481 4481 9481 162 163 RAAAAA ARFAAA OOOOxx
+2366 3823 0 2 6 6 66 366 366 2366 2366 132 133 ANAAAA BRFAAA VVVVxx
+2708 3824 0 0 8 8 8 708 708 2708 2708 16 17 EAAAAA CRFAAA AAAAxx
+7399 3825 1 3 9 19 99 399 1399 2399 7399 198 199 PYAAAA DRFAAA HHHHxx
+5234 3826 0 2 4 14 34 234 1234 234 5234 68 69 ITAAAA ERFAAA OOOOxx
+1843 3827 1 3 3 3 43 843 1843 1843 1843 86 87 XSAAAA FRFAAA VVVVxx
+1006 3828 0 2 6 6 6 6 1006 1006 1006 12 13 SMAAAA GRFAAA AAAAxx
+7696 3829 0 0 6 16 96 696 1696 2696 7696 192 193 AKAAAA HRFAAA HHHHxx
+6411 3830 1 3 1 11 11 411 411 1411 6411 22 23 PMAAAA IRFAAA OOOOxx
+3913 3831 1 1 3 13 13 913 1913 3913 3913 26 27 NUAAAA JRFAAA VVVVxx
+2538 3832 0 2 8 18 38 538 538 2538 2538 76 77 QTAAAA KRFAAA AAAAxx
+3019 3833 1 3 9 19 19 19 1019 3019 3019 38 39 DMAAAA LRFAAA HHHHxx
+107 3834 1 3 7 7 7 107 107 107 107 14 15 DEAAAA MRFAAA OOOOxx
+427 3835 1 3 7 7 27 427 427 427 427 54 55 LQAAAA NRFAAA VVVVxx
+9849 3836 1 1 9 9 49 849 1849 4849 9849 98 99 VOAAAA ORFAAA AAAAxx
+4195 3837 1 3 5 15 95 195 195 4195 4195 190 191 JFAAAA PRFAAA HHHHxx
+9215 3838 1 3 5 15 15 215 1215 4215 9215 30 31 LQAAAA QRFAAA OOOOxx
+3165 3839 1 1 5 5 65 165 1165 3165 3165 130 131 TRAAAA RRFAAA VVVVxx
+3280 3840 0 0 0 0 80 280 1280 3280 3280 160 161 EWAAAA SRFAAA AAAAxx
+4477 3841 1 1 7 17 77 477 477 4477 4477 154 155 FQAAAA TRFAAA HHHHxx
+5885 3842 1 1 5 5 85 885 1885 885 5885 170 171 JSAAAA URFAAA OOOOxx
+3311 3843 1 3 1 11 11 311 1311 3311 3311 22 23 JXAAAA VRFAAA VVVVxx
+6453 3844 1 1 3 13 53 453 453 1453 6453 106 107 FOAAAA WRFAAA AAAAxx
+8527 3845 1 3 7 7 27 527 527 3527 8527 54 55 ZPAAAA XRFAAA HHHHxx
+1921 3846 1 1 1 1 21 921 1921 1921 1921 42 43 XVAAAA YRFAAA OOOOxx
+2427 3847 1 3 7 7 27 427 427 2427 2427 54 55 JPAAAA ZRFAAA VVVVxx
+3691 3848 1 3 1 11 91 691 1691 3691 3691 182 183 ZLAAAA ASFAAA AAAAxx
+3882 3849 0 2 2 2 82 882 1882 3882 3882 164 165 ITAAAA BSFAAA HHHHxx
+562 3850 0 2 2 2 62 562 562 562 562 124 125 QVAAAA CSFAAA OOOOxx
+377 3851 1 1 7 17 77 377 377 377 377 154 155 NOAAAA DSFAAA VVVVxx
+1497 3852 1 1 7 17 97 497 1497 1497 1497 194 195 PFAAAA ESFAAA AAAAxx
+4453 3853 1 1 3 13 53 453 453 4453 4453 106 107 HPAAAA FSFAAA HHHHxx
+4678 3854 0 2 8 18 78 678 678 4678 4678 156 157 YXAAAA GSFAAA OOOOxx
+2234 3855 0 2 4 14 34 234 234 2234 2234 68 69 YHAAAA HSFAAA VVVVxx
+1073 3856 1 1 3 13 73 73 1073 1073 1073 146 147 HPAAAA ISFAAA AAAAxx
+6479 3857 1 3 9 19 79 479 479 1479 6479 158 159 FPAAAA JSFAAA HHHHxx
+5665 3858 1 1 5 5 65 665 1665 665 5665 130 131 XJAAAA KSFAAA OOOOxx
+586 3859 0 2 6 6 86 586 586 586 586 172 173 OWAAAA LSFAAA VVVVxx
+1584 3860 0 0 4 4 84 584 1584 1584 1584 168 169 YIAAAA MSFAAA AAAAxx
+2574 3861 0 2 4 14 74 574 574 2574 2574 148 149 AVAAAA NSFAAA HHHHxx
+9833 3862 1 1 3 13 33 833 1833 4833 9833 66 67 FOAAAA OSFAAA OOOOxx
+6726 3863 0 2 6 6 26 726 726 1726 6726 52 53 SYAAAA PSFAAA VVVVxx
+8497 3864 1 1 7 17 97 497 497 3497 8497 194 195 VOAAAA QSFAAA AAAAxx
+2914 3865 0 2 4 14 14 914 914 2914 2914 28 29 CIAAAA RSFAAA HHHHxx
+8586 3866 0 2 6 6 86 586 586 3586 8586 172 173 GSAAAA SSFAAA OOOOxx
+6973 3867 1 1 3 13 73 973 973 1973 6973 146 147 FIAAAA TSFAAA VVVVxx
+1322 3868 0 2 2 2 22 322 1322 1322 1322 44 45 WYAAAA USFAAA AAAAxx
+5242 3869 0 2 2 2 42 242 1242 242 5242 84 85 QTAAAA VSFAAA HHHHxx
+5581 3870 1 1 1 1 81 581 1581 581 5581 162 163 RGAAAA WSFAAA OOOOxx
+1365 3871 1 1 5 5 65 365 1365 1365 1365 130 131 NAAAAA XSFAAA VVVVxx
+2818 3872 0 2 8 18 18 818 818 2818 2818 36 37 KEAAAA YSFAAA AAAAxx
+3758 3873 0 2 8 18 58 758 1758 3758 3758 116 117 OOAAAA ZSFAAA HHHHxx
+2665 3874 1 1 5 5 65 665 665 2665 2665 130 131 NYAAAA ATFAAA OOOOxx
+9823 3875 1 3 3 3 23 823 1823 4823 9823 46 47 VNAAAA BTFAAA VVVVxx
+7057 3876 1 1 7 17 57 57 1057 2057 7057 114 115 LLAAAA CTFAAA AAAAxx
+543 3877 1 3 3 3 43 543 543 543 543 86 87 XUAAAA DTFAAA HHHHxx
+4008 3878 0 0 8 8 8 8 8 4008 4008 16 17 EYAAAA ETFAAA OOOOxx
+4397 3879 1 1 7 17 97 397 397 4397 4397 194 195 DNAAAA FTFAAA VVVVxx
+8533 3880 1 1 3 13 33 533 533 3533 8533 66 67 FQAAAA GTFAAA AAAAxx
+9728 3881 0 0 8 8 28 728 1728 4728 9728 56 57 EKAAAA HTFAAA HHHHxx
+5198 3882 0 2 8 18 98 198 1198 198 5198 196 197 YRAAAA ITFAAA OOOOxx
+5036 3883 0 0 6 16 36 36 1036 36 5036 72 73 SLAAAA JTFAAA VVVVxx
+4394 3884 0 2 4 14 94 394 394 4394 4394 188 189 ANAAAA KTFAAA AAAAxx
+9633 3885 1 1 3 13 33 633 1633 4633 9633 66 67 NGAAAA LTFAAA HHHHxx
+3339 3886 1 3 9 19 39 339 1339 3339 3339 78 79 LYAAAA MTFAAA OOOOxx
+9529 3887 1 1 9 9 29 529 1529 4529 9529 58 59 NCAAAA NTFAAA VVVVxx
+4780 3888 0 0 0 0 80 780 780 4780 4780 160 161 WBAAAA OTFAAA AAAAxx
+4862 3889 0 2 2 2 62 862 862 4862 4862 124 125 AFAAAA PTFAAA HHHHxx
+8152 3890 0 0 2 12 52 152 152 3152 8152 104 105 OBAAAA QTFAAA OOOOxx
+9330 3891 0 2 0 10 30 330 1330 4330 9330 60 61 WUAAAA RTFAAA VVVVxx
+4362 3892 0 2 2 2 62 362 362 4362 4362 124 125 ULAAAA STFAAA AAAAxx
+4688 3893 0 0 8 8 88 688 688 4688 4688 176 177 IYAAAA TTFAAA HHHHxx
+1903 3894 1 3 3 3 3 903 1903 1903 1903 6 7 FVAAAA UTFAAA OOOOxx
+9027 3895 1 3 7 7 27 27 1027 4027 9027 54 55 FJAAAA VTFAAA VVVVxx
+5385 3896 1 1 5 5 85 385 1385 385 5385 170 171 DZAAAA WTFAAA AAAAxx
+9854 3897 0 2 4 14 54 854 1854 4854 9854 108 109 APAAAA XTFAAA HHHHxx
+9033 3898 1 1 3 13 33 33 1033 4033 9033 66 67 LJAAAA YTFAAA OOOOxx
+3185 3899 1 1 5 5 85 185 1185 3185 3185 170 171 NSAAAA ZTFAAA VVVVxx
+2618 3900 0 2 8 18 18 618 618 2618 2618 36 37 SWAAAA AUFAAA AAAAxx
+371 3901 1 3 1 11 71 371 371 371 371 142 143 HOAAAA BUFAAA HHHHxx
+3697 3902 1 1 7 17 97 697 1697 3697 3697 194 195 FMAAAA CUFAAA OOOOxx
+1682 3903 0 2 2 2 82 682 1682 1682 1682 164 165 SMAAAA DUFAAA VVVVxx
+3333 3904 1 1 3 13 33 333 1333 3333 3333 66 67 FYAAAA EUFAAA AAAAxx
+1722 3905 0 2 2 2 22 722 1722 1722 1722 44 45 GOAAAA FUFAAA HHHHxx
+2009 3906 1 1 9 9 9 9 9 2009 2009 18 19 HZAAAA GUFAAA OOOOxx
+3517 3907 1 1 7 17 17 517 1517 3517 3517 34 35 HFAAAA HUFAAA VVVVxx
+7640 3908 0 0 0 0 40 640 1640 2640 7640 80 81 WHAAAA IUFAAA AAAAxx
+259 3909 1 3 9 19 59 259 259 259 259 118 119 ZJAAAA JUFAAA HHHHxx
+1400 3910 0 0 0 0 0 400 1400 1400 1400 0 1 WBAAAA KUFAAA OOOOxx
+6663 3911 1 3 3 3 63 663 663 1663 6663 126 127 HWAAAA LUFAAA VVVVxx
+1576 3912 0 0 6 16 76 576 1576 1576 1576 152 153 QIAAAA MUFAAA AAAAxx
+8843 3913 1 3 3 3 43 843 843 3843 8843 86 87 DCAAAA NUFAAA HHHHxx
+9474 3914 0 2 4 14 74 474 1474 4474 9474 148 149 KAAAAA OUFAAA OOOOxx
+1597 3915 1 1 7 17 97 597 1597 1597 1597 194 195 LJAAAA PUFAAA VVVVxx
+1143 3916 1 3 3 3 43 143 1143 1143 1143 86 87 ZRAAAA QUFAAA AAAAxx
+4162 3917 0 2 2 2 62 162 162 4162 4162 124 125 CEAAAA RUFAAA HHHHxx
+1301 3918 1 1 1 1 1 301 1301 1301 1301 2 3 BYAAAA SUFAAA OOOOxx
+2935 3919 1 3 5 15 35 935 935 2935 2935 70 71 XIAAAA TUFAAA VVVVxx
+886 3920 0 2 6 6 86 886 886 886 886 172 173 CIAAAA UUFAAA AAAAxx
+1661 3921 1 1 1 1 61 661 1661 1661 1661 122 123 XLAAAA VUFAAA HHHHxx
+1026 3922 0 2 6 6 26 26 1026 1026 1026 52 53 MNAAAA WUFAAA OOOOxx
+7034 3923 0 2 4 14 34 34 1034 2034 7034 68 69 OKAAAA XUFAAA VVVVxx
+2305 3924 1 1 5 5 5 305 305 2305 2305 10 11 RKAAAA YUFAAA AAAAxx
+1725 3925 1 1 5 5 25 725 1725 1725 1725 50 51 JOAAAA ZUFAAA HHHHxx
+909 3926 1 1 9 9 9 909 909 909 909 18 19 ZIAAAA AVFAAA OOOOxx
+9906 3927 0 2 6 6 6 906 1906 4906 9906 12 13 ARAAAA BVFAAA VVVVxx
+3309 3928 1 1 9 9 9 309 1309 3309 3309 18 19 HXAAAA CVFAAA AAAAxx
+515 3929 1 3 5 15 15 515 515 515 515 30 31 VTAAAA DVFAAA HHHHxx
+932 3930 0 0 2 12 32 932 932 932 932 64 65 WJAAAA EVFAAA OOOOxx
+8144 3931 0 0 4 4 44 144 144 3144 8144 88 89 GBAAAA FVFAAA VVVVxx
+5592 3932 0 0 2 12 92 592 1592 592 5592 184 185 CHAAAA GVFAAA AAAAxx
+4003 3933 1 3 3 3 3 3 3 4003 4003 6 7 ZXAAAA HVFAAA HHHHxx
+9566 3934 0 2 6 6 66 566 1566 4566 9566 132 133 YDAAAA IVFAAA OOOOxx
+4556 3935 0 0 6 16 56 556 556 4556 4556 112 113 GTAAAA JVFAAA VVVVxx
+268 3936 0 0 8 8 68 268 268 268 268 136 137 IKAAAA KVFAAA AAAAxx
+8107 3937 1 3 7 7 7 107 107 3107 8107 14 15 VZAAAA LVFAAA HHHHxx
+5816 3938 0 0 6 16 16 816 1816 816 5816 32 33 SPAAAA MVFAAA OOOOxx
+8597 3939 1 1 7 17 97 597 597 3597 8597 194 195 RSAAAA NVFAAA VVVVxx
+9611 3940 1 3 1 11 11 611 1611 4611 9611 22 23 RFAAAA OVFAAA AAAAxx
+8070 3941 0 2 0 10 70 70 70 3070 8070 140 141 KYAAAA PVFAAA HHHHxx
+6040 3942 0 0 0 0 40 40 40 1040 6040 80 81 IYAAAA QVFAAA OOOOxx
+3184 3943 0 0 4 4 84 184 1184 3184 3184 168 169 MSAAAA RVFAAA VVVVxx
+9656 3944 0 0 6 16 56 656 1656 4656 9656 112 113 KHAAAA SVFAAA AAAAxx
+1577 3945 1 1 7 17 77 577 1577 1577 1577 154 155 RIAAAA TVFAAA HHHHxx
+1805 3946 1 1 5 5 5 805 1805 1805 1805 10 11 LRAAAA UVFAAA OOOOxx
+8268 3947 0 0 8 8 68 268 268 3268 8268 136 137 AGAAAA VVFAAA VVVVxx
+3489 3948 1 1 9 9 89 489 1489 3489 3489 178 179 FEAAAA WVFAAA AAAAxx
+4564 3949 0 0 4 4 64 564 564 4564 4564 128 129 OTAAAA XVFAAA HHHHxx
+4006 3950 0 2 6 6 6 6 6 4006 4006 12 13 CYAAAA YVFAAA OOOOxx
+8466 3951 0 2 6 6 66 466 466 3466 8466 132 133 QNAAAA ZVFAAA VVVVxx
+938 3952 0 2 8 18 38 938 938 938 938 76 77 CKAAAA AWFAAA AAAAxx
+5944 3953 0 0 4 4 44 944 1944 944 5944 88 89 QUAAAA BWFAAA HHHHxx
+8363 3954 1 3 3 3 63 363 363 3363 8363 126 127 RJAAAA CWFAAA OOOOxx
+5348 3955 0 0 8 8 48 348 1348 348 5348 96 97 SXAAAA DWFAAA VVVVxx
+71 3956 1 3 1 11 71 71 71 71 71 142 143 TCAAAA EWFAAA AAAAxx
+3620 3957 0 0 0 0 20 620 1620 3620 3620 40 41 GJAAAA FWFAAA HHHHxx
+3230 3958 0 2 0 10 30 230 1230 3230 3230 60 61 GUAAAA GWFAAA OOOOxx
+6132 3959 0 0 2 12 32 132 132 1132 6132 64 65 WBAAAA HWFAAA VVVVxx
+6143 3960 1 3 3 3 43 143 143 1143 6143 86 87 HCAAAA IWFAAA AAAAxx
+8781 3961 1 1 1 1 81 781 781 3781 8781 162 163 TZAAAA JWFAAA HHHHxx
+5522 3962 0 2 2 2 22 522 1522 522 5522 44 45 KEAAAA KWFAAA OOOOxx
+6320 3963 0 0 0 0 20 320 320 1320 6320 40 41 CJAAAA LWFAAA VVVVxx
+3923 3964 1 3 3 3 23 923 1923 3923 3923 46 47 XUAAAA MWFAAA AAAAxx
+2207 3965 1 3 7 7 7 207 207 2207 2207 14 15 XGAAAA NWFAAA HHHHxx
+966 3966 0 2 6 6 66 966 966 966 966 132 133 ELAAAA OWFAAA OOOOxx
+9020 3967 0 0 0 0 20 20 1020 4020 9020 40 41 YIAAAA PWFAAA VVVVxx
+4616 3968 0 0 6 16 16 616 616 4616 4616 32 33 OVAAAA QWFAAA AAAAxx
+8289 3969 1 1 9 9 89 289 289 3289 8289 178 179 VGAAAA RWFAAA HHHHxx
+5796 3970 0 0 6 16 96 796 1796 796 5796 192 193 YOAAAA SWFAAA OOOOxx
+9259 3971 1 3 9 19 59 259 1259 4259 9259 118 119 DSAAAA TWFAAA VVVVxx
+3710 3972 0 2 0 10 10 710 1710 3710 3710 20 21 SMAAAA UWFAAA AAAAxx
+251 3973 1 3 1 11 51 251 251 251 251 102 103 RJAAAA VWFAAA HHHHxx
+7669 3974 1 1 9 9 69 669 1669 2669 7669 138 139 ZIAAAA WWFAAA OOOOxx
+6304 3975 0 0 4 4 4 304 304 1304 6304 8 9 MIAAAA XWFAAA VVVVxx
+6454 3976 0 2 4 14 54 454 454 1454 6454 108 109 GOAAAA YWFAAA AAAAxx
+1489 3977 1 1 9 9 89 489 1489 1489 1489 178 179 HFAAAA ZWFAAA HHHHxx
+715 3978 1 3 5 15 15 715 715 715 715 30 31 NBAAAA AXFAAA OOOOxx
+4319 3979 1 3 9 19 19 319 319 4319 4319 38 39 DKAAAA BXFAAA VVVVxx
+7112 3980 0 0 2 12 12 112 1112 2112 7112 24 25 ONAAAA CXFAAA AAAAxx
+3726 3981 0 2 6 6 26 726 1726 3726 3726 52 53 INAAAA DXFAAA HHHHxx
+7727 3982 1 3 7 7 27 727 1727 2727 7727 54 55 FLAAAA EXFAAA OOOOxx
+8387 3983 1 3 7 7 87 387 387 3387 8387 174 175 PKAAAA FXFAAA VVVVxx
+6555 3984 1 3 5 15 55 555 555 1555 6555 110 111 DSAAAA GXFAAA AAAAxx
+1148 3985 0 0 8 8 48 148 1148 1148 1148 96 97 ESAAAA HXFAAA HHHHxx
+9000 3986 0 0 0 0 0 0 1000 4000 9000 0 1 EIAAAA IXFAAA OOOOxx
+5278 3987 0 2 8 18 78 278 1278 278 5278 156 157 AVAAAA JXFAAA VVVVxx
+2388 3988 0 0 8 8 88 388 388 2388 2388 176 177 WNAAAA KXFAAA AAAAxx
+7984 3989 0 0 4 4 84 984 1984 2984 7984 168 169 CVAAAA LXFAAA HHHHxx
+881 3990 1 1 1 1 81 881 881 881 881 162 163 XHAAAA MXFAAA OOOOxx
+6830 3991 0 2 0 10 30 830 830 1830 6830 60 61 SCAAAA NXFAAA VVVVxx
+7056 3992 0 0 6 16 56 56 1056 2056 7056 112 113 KLAAAA OXFAAA AAAAxx
+7581 3993 1 1 1 1 81 581 1581 2581 7581 162 163 PFAAAA PXFAAA HHHHxx
+5214 3994 0 2 4 14 14 214 1214 214 5214 28 29 OSAAAA QXFAAA OOOOxx
+2505 3995 1 1 5 5 5 505 505 2505 2505 10 11 JSAAAA RXFAAA VVVVxx
+5112 3996 0 0 2 12 12 112 1112 112 5112 24 25 QOAAAA SXFAAA AAAAxx
+9884 3997 0 0 4 4 84 884 1884 4884 9884 168 169 EQAAAA TXFAAA HHHHxx
+8040 3998 0 0 0 0 40 40 40 3040 8040 80 81 GXAAAA UXFAAA OOOOxx
+7033 3999 1 1 3 13 33 33 1033 2033 7033 66 67 NKAAAA VXFAAA VVVVxx
+9343 4000 1 3 3 3 43 343 1343 4343 9343 86 87 JVAAAA WXFAAA AAAAxx
+2931 4001 1 3 1 11 31 931 931 2931 2931 62 63 TIAAAA XXFAAA HHHHxx
+9024 4002 0 0 4 4 24 24 1024 4024 9024 48 49 CJAAAA YXFAAA OOOOxx
+6485 4003 1 1 5 5 85 485 485 1485 6485 170 171 LPAAAA ZXFAAA VVVVxx
+3465 4004 1 1 5 5 65 465 1465 3465 3465 130 131 HDAAAA AYFAAA AAAAxx
+3357 4005 1 1 7 17 57 357 1357 3357 3357 114 115 DZAAAA BYFAAA HHHHxx
+2929 4006 1 1 9 9 29 929 929 2929 2929 58 59 RIAAAA CYFAAA OOOOxx
+3086 4007 0 2 6 6 86 86 1086 3086 3086 172 173 SOAAAA DYFAAA VVVVxx
+8897 4008 1 1 7 17 97 897 897 3897 8897 194 195 FEAAAA EYFAAA AAAAxx
+9688 4009 0 0 8 8 88 688 1688 4688 9688 176 177 QIAAAA FYFAAA HHHHxx
+6522 4010 0 2 2 2 22 522 522 1522 6522 44 45 WQAAAA GYFAAA OOOOxx
+3241 4011 1 1 1 1 41 241 1241 3241 3241 82 83 RUAAAA HYFAAA VVVVxx
+8770 4012 0 2 0 10 70 770 770 3770 8770 140 141 IZAAAA IYFAAA AAAAxx
+2884 4013 0 0 4 4 84 884 884 2884 2884 168 169 YGAAAA JYFAAA HHHHxx
+9579 4014 1 3 9 19 79 579 1579 4579 9579 158 159 LEAAAA KYFAAA OOOOxx
+3125 4015 1 1 5 5 25 125 1125 3125 3125 50 51 FQAAAA LYFAAA VVVVxx
+4604 4016 0 0 4 4 4 604 604 4604 4604 8 9 CVAAAA MYFAAA AAAAxx
+2682 4017 0 2 2 2 82 682 682 2682 2682 164 165 EZAAAA NYFAAA HHHHxx
+254 4018 0 2 4 14 54 254 254 254 254 108 109 UJAAAA OYFAAA OOOOxx
+6569 4019 1 1 9 9 69 569 569 1569 6569 138 139 RSAAAA PYFAAA VVVVxx
+2686 4020 0 2 6 6 86 686 686 2686 2686 172 173 IZAAAA QYFAAA AAAAxx
+2123 4021 1 3 3 3 23 123 123 2123 2123 46 47 RDAAAA RYFAAA HHHHxx
+1745 4022 1 1 5 5 45 745 1745 1745 1745 90 91 DPAAAA SYFAAA OOOOxx
+247 4023 1 3 7 7 47 247 247 247 247 94 95 NJAAAA TYFAAA VVVVxx
+5800 4024 0 0 0 0 0 800 1800 800 5800 0 1 CPAAAA UYFAAA AAAAxx
+1121 4025 1 1 1 1 21 121 1121 1121 1121 42 43 DRAAAA VYFAAA HHHHxx
+8893 4026 1 1 3 13 93 893 893 3893 8893 186 187 BEAAAA WYFAAA OOOOxx
+7819 4027 1 3 9 19 19 819 1819 2819 7819 38 39 TOAAAA XYFAAA VVVVxx
+1339 4028 1 3 9 19 39 339 1339 1339 1339 78 79 NZAAAA YYFAAA AAAAxx
+5680 4029 0 0 0 0 80 680 1680 680 5680 160 161 MKAAAA ZYFAAA HHHHxx
+5093 4030 1 1 3 13 93 93 1093 93 5093 186 187 XNAAAA AZFAAA OOOOxx
+3508 4031 0 0 8 8 8 508 1508 3508 3508 16 17 YEAAAA BZFAAA VVVVxx
+933 4032 1 1 3 13 33 933 933 933 933 66 67 XJAAAA CZFAAA AAAAxx
+1106 4033 0 2 6 6 6 106 1106 1106 1106 12 13 OQAAAA DZFAAA HHHHxx
+4386 4034 0 2 6 6 86 386 386 4386 4386 172 173 SMAAAA EZFAAA OOOOxx
+5895 4035 1 3 5 15 95 895 1895 895 5895 190 191 TSAAAA FZFAAA VVVVxx
+2980 4036 0 0 0 0 80 980 980 2980 2980 160 161 QKAAAA GZFAAA AAAAxx
+4400 4037 0 0 0 0 0 400 400 4400 4400 0 1 GNAAAA HZFAAA HHHHxx
+7433 4038 1 1 3 13 33 433 1433 2433 7433 66 67 XZAAAA IZFAAA OOOOxx
+6110 4039 0 2 0 10 10 110 110 1110 6110 20 21 ABAAAA JZFAAA VVVVxx
+867 4040 1 3 7 7 67 867 867 867 867 134 135 JHAAAA KZFAAA AAAAxx
+5292 4041 0 0 2 12 92 292 1292 292 5292 184 185 OVAAAA LZFAAA HHHHxx
+3926 4042 0 2 6 6 26 926 1926 3926 3926 52 53 AVAAAA MZFAAA OOOOxx
+1107 4043 1 3 7 7 7 107 1107 1107 1107 14 15 PQAAAA NZFAAA VVVVxx
+7355 4044 1 3 5 15 55 355 1355 2355 7355 110 111 XWAAAA OZFAAA AAAAxx
+4689 4045 1 1 9 9 89 689 689 4689 4689 178 179 JYAAAA PZFAAA HHHHxx
+4872 4046 0 0 2 12 72 872 872 4872 4872 144 145 KFAAAA QZFAAA OOOOxx
+7821 4047 1 1 1 1 21 821 1821 2821 7821 42 43 VOAAAA RZFAAA VVVVxx
+7277 4048 1 1 7 17 77 277 1277 2277 7277 154 155 XTAAAA SZFAAA AAAAxx
+3268 4049 0 0 8 8 68 268 1268 3268 3268 136 137 SVAAAA TZFAAA HHHHxx
+8877 4050 1 1 7 17 77 877 877 3877 8877 154 155 LDAAAA UZFAAA OOOOxx
+343 4051 1 3 3 3 43 343 343 343 343 86 87 FNAAAA VZFAAA VVVVxx
+621 4052 1 1 1 1 21 621 621 621 621 42 43 XXAAAA WZFAAA AAAAxx
+5429 4053 1 1 9 9 29 429 1429 429 5429 58 59 VAAAAA XZFAAA HHHHxx
+392 4054 0 0 2 12 92 392 392 392 392 184 185 CPAAAA YZFAAA OOOOxx
+6004 4055 0 0 4 4 4 4 4 1004 6004 8 9 YWAAAA ZZFAAA VVVVxx
+6377 4056 1 1 7 17 77 377 377 1377 6377 154 155 HLAAAA AAGAAA AAAAxx
+3037 4057 1 1 7 17 37 37 1037 3037 3037 74 75 VMAAAA BAGAAA HHHHxx
+3514 4058 0 2 4 14 14 514 1514 3514 3514 28 29 EFAAAA CAGAAA OOOOxx
+8740 4059 0 0 0 0 40 740 740 3740 8740 80 81 EYAAAA DAGAAA VVVVxx
+3877 4060 1 1 7 17 77 877 1877 3877 3877 154 155 DTAAAA EAGAAA AAAAxx
+5731 4061 1 3 1 11 31 731 1731 731 5731 62 63 LMAAAA FAGAAA HHHHxx
+6407 4062 1 3 7 7 7 407 407 1407 6407 14 15 LMAAAA GAGAAA OOOOxx
+2044 4063 0 0 4 4 44 44 44 2044 2044 88 89 QAAAAA HAGAAA VVVVxx
+7362 4064 0 2 2 2 62 362 1362 2362 7362 124 125 EXAAAA IAGAAA AAAAxx
+5458 4065 0 2 8 18 58 458 1458 458 5458 116 117 YBAAAA JAGAAA HHHHxx
+6437 4066 1 1 7 17 37 437 437 1437 6437 74 75 PNAAAA KAGAAA OOOOxx
+1051 4067 1 3 1 11 51 51 1051 1051 1051 102 103 LOAAAA LAGAAA VVVVxx
+1203 4068 1 3 3 3 3 203 1203 1203 1203 6 7 HUAAAA MAGAAA AAAAxx
+2176 4069 0 0 6 16 76 176 176 2176 2176 152 153 SFAAAA NAGAAA HHHHxx
+8997 4070 1 1 7 17 97 997 997 3997 8997 194 195 BIAAAA OAGAAA OOOOxx
+6378 4071 0 2 8 18 78 378 378 1378 6378 156 157 ILAAAA PAGAAA VVVVxx
+6006 4072 0 2 6 6 6 6 6 1006 6006 12 13 AXAAAA QAGAAA AAAAxx
+2308 4073 0 0 8 8 8 308 308 2308 2308 16 17 UKAAAA RAGAAA HHHHxx
+625 4074 1 1 5 5 25 625 625 625 625 50 51 BYAAAA SAGAAA OOOOxx
+7298 4075 0 2 8 18 98 298 1298 2298 7298 196 197 SUAAAA TAGAAA VVVVxx
+5575 4076 1 3 5 15 75 575 1575 575 5575 150 151 LGAAAA UAGAAA AAAAxx
+3565 4077 1 1 5 5 65 565 1565 3565 3565 130 131 DHAAAA VAGAAA HHHHxx
+47 4078 1 3 7 7 47 47 47 47 47 94 95 VBAAAA WAGAAA OOOOxx
+2413 4079 1 1 3 13 13 413 413 2413 2413 26 27 VOAAAA XAGAAA VVVVxx
+2153 4080 1 1 3 13 53 153 153 2153 2153 106 107 VEAAAA YAGAAA AAAAxx
+752 4081 0 0 2 12 52 752 752 752 752 104 105 YCAAAA ZAGAAA HHHHxx
+4095 4082 1 3 5 15 95 95 95 4095 4095 190 191 NBAAAA ABGAAA OOOOxx
+2518 4083 0 2 8 18 18 518 518 2518 2518 36 37 WSAAAA BBGAAA VVVVxx
+3681 4084 1 1 1 1 81 681 1681 3681 3681 162 163 PLAAAA CBGAAA AAAAxx
+4213 4085 1 1 3 13 13 213 213 4213 4213 26 27 BGAAAA DBGAAA HHHHxx
+2615 4086 1 3 5 15 15 615 615 2615 2615 30 31 PWAAAA EBGAAA OOOOxx
+1471 4087 1 3 1 11 71 471 1471 1471 1471 142 143 PEAAAA FBGAAA VVVVxx
+7315 4088 1 3 5 15 15 315 1315 2315 7315 30 31 JVAAAA GBGAAA AAAAxx
+6013 4089 1 1 3 13 13 13 13 1013 6013 26 27 HXAAAA HBGAAA HHHHxx
+3077 4090 1 1 7 17 77 77 1077 3077 3077 154 155 JOAAAA IBGAAA OOOOxx
+2190 4091 0 2 0 10 90 190 190 2190 2190 180 181 GGAAAA JBGAAA VVVVxx
+528 4092 0 0 8 8 28 528 528 528 528 56 57 IUAAAA KBGAAA AAAAxx
+9508 4093 0 0 8 8 8 508 1508 4508 9508 16 17 SBAAAA LBGAAA HHHHxx
+2473 4094 1 1 3 13 73 473 473 2473 2473 146 147 DRAAAA MBGAAA OOOOxx
+167 4095 1 3 7 7 67 167 167 167 167 134 135 LGAAAA NBGAAA VVVVxx
+8448 4096 0 0 8 8 48 448 448 3448 8448 96 97 YMAAAA OBGAAA AAAAxx
+7538 4097 0 2 8 18 38 538 1538 2538 7538 76 77 YDAAAA PBGAAA HHHHxx
+7638 4098 0 2 8 18 38 638 1638 2638 7638 76 77 UHAAAA QBGAAA OOOOxx
+4328 4099 0 0 8 8 28 328 328 4328 4328 56 57 MKAAAA RBGAAA VVVVxx
+3812 4100 0 0 2 12 12 812 1812 3812 3812 24 25 QQAAAA SBGAAA AAAAxx
+2879 4101 1 3 9 19 79 879 879 2879 2879 158 159 TGAAAA TBGAAA HHHHxx
+4741 4102 1 1 1 1 41 741 741 4741 4741 82 83 JAAAAA UBGAAA OOOOxx
+9155 4103 1 3 5 15 55 155 1155 4155 9155 110 111 DOAAAA VBGAAA VVVVxx
+5151 4104 1 3 1 11 51 151 1151 151 5151 102 103 DQAAAA WBGAAA AAAAxx
+5591 4105 1 3 1 11 91 591 1591 591 5591 182 183 BHAAAA XBGAAA HHHHxx
+1034 4106 0 2 4 14 34 34 1034 1034 1034 68 69 UNAAAA YBGAAA OOOOxx
+765 4107 1 1 5 5 65 765 765 765 765 130 131 LDAAAA ZBGAAA VVVVxx
+2664 4108 0 0 4 4 64 664 664 2664 2664 128 129 MYAAAA ACGAAA AAAAxx
+6854 4109 0 2 4 14 54 854 854 1854 6854 108 109 QDAAAA BCGAAA HHHHxx
+8263 4110 1 3 3 3 63 263 263 3263 8263 126 127 VFAAAA CCGAAA OOOOxx
+8658 4111 0 2 8 18 58 658 658 3658 8658 116 117 AVAAAA DCGAAA VVVVxx
+587 4112 1 3 7 7 87 587 587 587 587 174 175 PWAAAA ECGAAA AAAAxx
+4553 4113 1 1 3 13 53 553 553 4553 4553 106 107 DTAAAA FCGAAA HHHHxx
+1368 4114 0 0 8 8 68 368 1368 1368 1368 136 137 QAAAAA GCGAAA OOOOxx
+1718 4115 0 2 8 18 18 718 1718 1718 1718 36 37 COAAAA HCGAAA VVVVxx
+140 4116 0 0 0 0 40 140 140 140 140 80 81 KFAAAA ICGAAA AAAAxx
+8341 4117 1 1 1 1 41 341 341 3341 8341 82 83 VIAAAA JCGAAA HHHHxx
+72 4118 0 0 2 12 72 72 72 72 72 144 145 UCAAAA KCGAAA OOOOxx
+6589 4119 1 1 9 9 89 589 589 1589 6589 178 179 LTAAAA LCGAAA VVVVxx
+2024 4120 0 0 4 4 24 24 24 2024 2024 48 49 WZAAAA MCGAAA AAAAxx
+8024 4121 0 0 4 4 24 24 24 3024 8024 48 49 QWAAAA NCGAAA HHHHxx
+9564 4122 0 0 4 4 64 564 1564 4564 9564 128 129 WDAAAA OCGAAA OOOOxx
+8625 4123 1 1 5 5 25 625 625 3625 8625 50 51 TTAAAA PCGAAA VVVVxx
+2680 4124 0 0 0 0 80 680 680 2680 2680 160 161 CZAAAA QCGAAA AAAAxx
+4323 4125 1 3 3 3 23 323 323 4323 4323 46 47 HKAAAA RCGAAA HHHHxx
+8981 4126 1 1 1 1 81 981 981 3981 8981 162 163 LHAAAA SCGAAA OOOOxx
+8909 4127 1 1 9 9 9 909 909 3909 8909 18 19 REAAAA TCGAAA VVVVxx
+5288 4128 0 0 8 8 88 288 1288 288 5288 176 177 KVAAAA UCGAAA AAAAxx
+2057 4129 1 1 7 17 57 57 57 2057 2057 114 115 DBAAAA VCGAAA HHHHxx
+5931 4130 1 3 1 11 31 931 1931 931 5931 62 63 DUAAAA WCGAAA OOOOxx
+9794 4131 0 2 4 14 94 794 1794 4794 9794 188 189 SMAAAA XCGAAA VVVVxx
+1012 4132 0 0 2 12 12 12 1012 1012 1012 24 25 YMAAAA YCGAAA AAAAxx
+5496 4133 0 0 6 16 96 496 1496 496 5496 192 193 KDAAAA ZCGAAA HHHHxx
+9182 4134 0 2 2 2 82 182 1182 4182 9182 164 165 EPAAAA ADGAAA OOOOxx
+5258 4135 0 2 8 18 58 258 1258 258 5258 116 117 GUAAAA BDGAAA VVVVxx
+3050 4136 0 2 0 10 50 50 1050 3050 3050 100 101 INAAAA CDGAAA AAAAxx
+2083 4137 1 3 3 3 83 83 83 2083 2083 166 167 DCAAAA DDGAAA HHHHxx
+3069 4138 1 1 9 9 69 69 1069 3069 3069 138 139 BOAAAA EDGAAA OOOOxx
+8459 4139 1 3 9 19 59 459 459 3459 8459 118 119 JNAAAA FDGAAA VVVVxx
+169 4140 1 1 9 9 69 169 169 169 169 138 139 NGAAAA GDGAAA AAAAxx
+4379 4141 1 3 9 19 79 379 379 4379 4379 158 159 LMAAAA HDGAAA HHHHxx
+5126 4142 0 2 6 6 26 126 1126 126 5126 52 53 EPAAAA IDGAAA OOOOxx
+1415 4143 1 3 5 15 15 415 1415 1415 1415 30 31 LCAAAA JDGAAA VVVVxx
+1163 4144 1 3 3 3 63 163 1163 1163 1163 126 127 TSAAAA KDGAAA AAAAxx
+3500 4145 0 0 0 0 0 500 1500 3500 3500 0 1 QEAAAA LDGAAA HHHHxx
+7202 4146 0 2 2 2 2 202 1202 2202 7202 4 5 ARAAAA MDGAAA OOOOxx
+747 4147 1 3 7 7 47 747 747 747 747 94 95 TCAAAA NDGAAA VVVVxx
+9264 4148 0 0 4 4 64 264 1264 4264 9264 128 129 ISAAAA ODGAAA AAAAxx
+8548 4149 0 0 8 8 48 548 548 3548 8548 96 97 UQAAAA PDGAAA HHHHxx
+4228 4150 0 0 8 8 28 228 228 4228 4228 56 57 QGAAAA QDGAAA OOOOxx
+7122 4151 0 2 2 2 22 122 1122 2122 7122 44 45 YNAAAA RDGAAA VVVVxx
+3395 4152 1 3 5 15 95 395 1395 3395 3395 190 191 PAAAAA SDGAAA AAAAxx
+5674 4153 0 2 4 14 74 674 1674 674 5674 148 149 GKAAAA TDGAAA HHHHxx
+7293 4154 1 1 3 13 93 293 1293 2293 7293 186 187 NUAAAA UDGAAA OOOOxx
+737 4155 1 1 7 17 37 737 737 737 737 74 75 JCAAAA VDGAAA VVVVxx
+9595 4156 1 3 5 15 95 595 1595 4595 9595 190 191 BFAAAA WDGAAA AAAAxx
+594 4157 0 2 4 14 94 594 594 594 594 188 189 WWAAAA XDGAAA HHHHxx
+5322 4158 0 2 2 2 22 322 1322 322 5322 44 45 SWAAAA YDGAAA OOOOxx
+2933 4159 1 1 3 13 33 933 933 2933 2933 66 67 VIAAAA ZDGAAA VVVVxx
+4955 4160 1 3 5 15 55 955 955 4955 4955 110 111 PIAAAA AEGAAA AAAAxx
+4073 4161 1 1 3 13 73 73 73 4073 4073 146 147 RAAAAA BEGAAA HHHHxx
+7249 4162 1 1 9 9 49 249 1249 2249 7249 98 99 VSAAAA CEGAAA OOOOxx
+192 4163 0 0 2 12 92 192 192 192 192 184 185 KHAAAA DEGAAA VVVVxx
+2617 4164 1 1 7 17 17 617 617 2617 2617 34 35 RWAAAA EEGAAA AAAAxx
+7409 4165 1 1 9 9 9 409 1409 2409 7409 18 19 ZYAAAA FEGAAA HHHHxx
+4903 4166 1 3 3 3 3 903 903 4903 4903 6 7 PGAAAA GEGAAA OOOOxx
+9797 4167 1 1 7 17 97 797 1797 4797 9797 194 195 VMAAAA HEGAAA VVVVxx
+9919 4168 1 3 9 19 19 919 1919 4919 9919 38 39 NRAAAA IEGAAA AAAAxx
+1878 4169 0 2 8 18 78 878 1878 1878 1878 156 157 GUAAAA JEGAAA HHHHxx
+4851 4170 1 3 1 11 51 851 851 4851 4851 102 103 PEAAAA KEGAAA OOOOxx
+5514 4171 0 2 4 14 14 514 1514 514 5514 28 29 CEAAAA LEGAAA VVVVxx
+2582 4172 0 2 2 2 82 582 582 2582 2582 164 165 IVAAAA MEGAAA AAAAxx
+3564 4173 0 0 4 4 64 564 1564 3564 3564 128 129 CHAAAA NEGAAA HHHHxx
+7085 4174 1 1 5 5 85 85 1085 2085 7085 170 171 NMAAAA OEGAAA OOOOxx
+3619 4175 1 3 9 19 19 619 1619 3619 3619 38 39 FJAAAA PEGAAA VVVVxx
+261 4176 1 1 1 1 61 261 261 261 261 122 123 BKAAAA QEGAAA AAAAxx
+7338 4177 0 2 8 18 38 338 1338 2338 7338 76 77 GWAAAA REGAAA HHHHxx
+4251 4178 1 3 1 11 51 251 251 4251 4251 102 103 NHAAAA SEGAAA OOOOxx
+5360 4179 0 0 0 0 60 360 1360 360 5360 120 121 EYAAAA TEGAAA VVVVxx
+5678 4180 0 2 8 18 78 678 1678 678 5678 156 157 KKAAAA UEGAAA AAAAxx
+9162 4181 0 2 2 2 62 162 1162 4162 9162 124 125 KOAAAA VEGAAA HHHHxx
+5920 4182 0 0 0 0 20 920 1920 920 5920 40 41 STAAAA WEGAAA OOOOxx
+7156 4183 0 0 6 16 56 156 1156 2156 7156 112 113 GPAAAA XEGAAA VVVVxx
+4271 4184 1 3 1 11 71 271 271 4271 4271 142 143 HIAAAA YEGAAA AAAAxx
+4698 4185 0 2 8 18 98 698 698 4698 4698 196 197 SYAAAA ZEGAAA HHHHxx
+1572 4186 0 0 2 12 72 572 1572 1572 1572 144 145 MIAAAA AFGAAA OOOOxx
+6974 4187 0 2 4 14 74 974 974 1974 6974 148 149 GIAAAA BFGAAA VVVVxx
+4291 4188 1 3 1 11 91 291 291 4291 4291 182 183 BJAAAA CFGAAA AAAAxx
+4036 4189 0 0 6 16 36 36 36 4036 4036 72 73 GZAAAA DFGAAA HHHHxx
+7473 4190 1 1 3 13 73 473 1473 2473 7473 146 147 LBAAAA EFGAAA OOOOxx
+4786 4191 0 2 6 6 86 786 786 4786 4786 172 173 CCAAAA FFGAAA VVVVxx
+2662 4192 0 2 2 2 62 662 662 2662 2662 124 125 KYAAAA GFGAAA AAAAxx
+916 4193 0 0 6 16 16 916 916 916 916 32 33 GJAAAA HFGAAA HHHHxx
+668 4194 0 0 8 8 68 668 668 668 668 136 137 SZAAAA IFGAAA OOOOxx
+4874 4195 0 2 4 14 74 874 874 4874 4874 148 149 MFAAAA JFGAAA VVVVxx
+3752 4196 0 0 2 12 52 752 1752 3752 3752 104 105 IOAAAA KFGAAA AAAAxx
+4865 4197 1 1 5 5 65 865 865 4865 4865 130 131 DFAAAA LFGAAA HHHHxx
+7052 4198 0 0 2 12 52 52 1052 2052 7052 104 105 GLAAAA MFGAAA OOOOxx
+5712 4199 0 0 2 12 12 712 1712 712 5712 24 25 SLAAAA NFGAAA VVVVxx
+31 4200 1 3 1 11 31 31 31 31 31 62 63 FBAAAA OFGAAA AAAAxx
+4944 4201 0 0 4 4 44 944 944 4944 4944 88 89 EIAAAA PFGAAA HHHHxx
+1435 4202 1 3 5 15 35 435 1435 1435 1435 70 71 FDAAAA QFGAAA OOOOxx
+501 4203 1 1 1 1 1 501 501 501 501 2 3 HTAAAA RFGAAA VVVVxx
+9401 4204 1 1 1 1 1 401 1401 4401 9401 2 3 PXAAAA SFGAAA AAAAxx
+5014 4205 0 2 4 14 14 14 1014 14 5014 28 29 WKAAAA TFGAAA HHHHxx
+9125 4206 1 1 5 5 25 125 1125 4125 9125 50 51 ZMAAAA UFGAAA OOOOxx
+6144 4207 0 0 4 4 44 144 144 1144 6144 88 89 ICAAAA VFGAAA VVVVxx
+1743 4208 1 3 3 3 43 743 1743 1743 1743 86 87 BPAAAA WFGAAA AAAAxx
+4316 4209 0 0 6 16 16 316 316 4316 4316 32 33 AKAAAA XFGAAA HHHHxx
+8212 4210 0 0 2 12 12 212 212 3212 8212 24 25 WDAAAA YFGAAA OOOOxx
+7344 4211 0 0 4 4 44 344 1344 2344 7344 88 89 MWAAAA ZFGAAA VVVVxx
+2051 4212 1 3 1 11 51 51 51 2051 2051 102 103 XAAAAA AGGAAA AAAAxx
+8131 4213 1 3 1 11 31 131 131 3131 8131 62 63 TAAAAA BGGAAA HHHHxx
+7023 4214 1 3 3 3 23 23 1023 2023 7023 46 47 DKAAAA CGGAAA OOOOxx
+9674 4215 0 2 4 14 74 674 1674 4674 9674 148 149 CIAAAA DGGAAA VVVVxx
+4984 4216 0 0 4 4 84 984 984 4984 4984 168 169 SJAAAA EGGAAA AAAAxx
+111 4217 1 3 1 11 11 111 111 111 111 22 23 HEAAAA FGGAAA HHHHxx
+2296 4218 0 0 6 16 96 296 296 2296 2296 192 193 IKAAAA GGGAAA OOOOxx
+5025 4219 1 1 5 5 25 25 1025 25 5025 50 51 HLAAAA HGGAAA VVVVxx
+1756 4220 0 0 6 16 56 756 1756 1756 1756 112 113 OPAAAA IGGAAA AAAAxx
+2885 4221 1 1 5 5 85 885 885 2885 2885 170 171 ZGAAAA JGGAAA HHHHxx
+2541 4222 1 1 1 1 41 541 541 2541 2541 82 83 TTAAAA KGGAAA OOOOxx
+1919 4223 1 3 9 19 19 919 1919 1919 1919 38 39 VVAAAA LGGAAA VVVVxx
+6496 4224 0 0 6 16 96 496 496 1496 6496 192 193 WPAAAA MGGAAA AAAAxx
+6103 4225 1 3 3 3 3 103 103 1103 6103 6 7 TAAAAA NGGAAA HHHHxx
+98 4226 0 2 8 18 98 98 98 98 98 196 197 UDAAAA OGGAAA OOOOxx
+3727 4227 1 3 7 7 27 727 1727 3727 3727 54 55 JNAAAA PGGAAA VVVVxx
+689 4228 1 1 9 9 89 689 689 689 689 178 179 NAAAAA QGGAAA AAAAxx
+7181 4229 1 1 1 1 81 181 1181 2181 7181 162 163 FQAAAA RGGAAA HHHHxx
+8447 4230 1 3 7 7 47 447 447 3447 8447 94 95 XMAAAA SGGAAA OOOOxx
+4569 4231 1 1 9 9 69 569 569 4569 4569 138 139 TTAAAA TGGAAA VVVVxx
+8844 4232 0 0 4 4 44 844 844 3844 8844 88 89 ECAAAA UGGAAA AAAAxx
+2436 4233 0 0 6 16 36 436 436 2436 2436 72 73 SPAAAA VGGAAA HHHHxx
+391 4234 1 3 1 11 91 391 391 391 391 182 183 BPAAAA WGGAAA OOOOxx
+3035 4235 1 3 5 15 35 35 1035 3035 3035 70 71 TMAAAA XGGAAA VVVVxx
+7583 4236 1 3 3 3 83 583 1583 2583 7583 166 167 RFAAAA YGGAAA AAAAxx
+1145 4237 1 1 5 5 45 145 1145 1145 1145 90 91 BSAAAA ZGGAAA HHHHxx
+93 4238 1 1 3 13 93 93 93 93 93 186 187 PDAAAA AHGAAA OOOOxx
+8896 4239 0 0 6 16 96 896 896 3896 8896 192 193 EEAAAA BHGAAA VVVVxx
+6719 4240 1 3 9 19 19 719 719 1719 6719 38 39 LYAAAA CHGAAA AAAAxx
+7728 4241 0 0 8 8 28 728 1728 2728 7728 56 57 GLAAAA DHGAAA HHHHxx
+1349 4242 1 1 9 9 49 349 1349 1349 1349 98 99 XZAAAA EHGAAA OOOOxx
+5349 4243 1 1 9 9 49 349 1349 349 5349 98 99 TXAAAA FHGAAA VVVVxx
+3040 4244 0 0 0 0 40 40 1040 3040 3040 80 81 YMAAAA GHGAAA AAAAxx
+2414 4245 0 2 4 14 14 414 414 2414 2414 28 29 WOAAAA HHGAAA HHHHxx
+5122 4246 0 2 2 2 22 122 1122 122 5122 44 45 APAAAA IHGAAA OOOOxx
+9553 4247 1 1 3 13 53 553 1553 4553 9553 106 107 LDAAAA JHGAAA VVVVxx
+5987 4248 1 3 7 7 87 987 1987 987 5987 174 175 HWAAAA KHGAAA AAAAxx
+5939 4249 1 3 9 19 39 939 1939 939 5939 78 79 LUAAAA LHGAAA HHHHxx
+3525 4250 1 1 5 5 25 525 1525 3525 3525 50 51 PFAAAA MHGAAA OOOOxx
+1371 4251 1 3 1 11 71 371 1371 1371 1371 142 143 TAAAAA NHGAAA VVVVxx
+618 4252 0 2 8 18 18 618 618 618 618 36 37 UXAAAA OHGAAA AAAAxx
+6529 4253 1 1 9 9 29 529 529 1529 6529 58 59 DRAAAA PHGAAA HHHHxx
+4010 4254 0 2 0 10 10 10 10 4010 4010 20 21 GYAAAA QHGAAA OOOOxx
+328 4255 0 0 8 8 28 328 328 328 328 56 57 QMAAAA RHGAAA VVVVxx
+6121 4256 1 1 1 1 21 121 121 1121 6121 42 43 LBAAAA SHGAAA AAAAxx
+3505 4257 1 1 5 5 5 505 1505 3505 3505 10 11 VEAAAA THGAAA HHHHxx
+2033 4258 1 1 3 13 33 33 33 2033 2033 66 67 FAAAAA UHGAAA OOOOxx
+4724 4259 0 0 4 4 24 724 724 4724 4724 48 49 SZAAAA VHGAAA VVVVxx
+8717 4260 1 1 7 17 17 717 717 3717 8717 34 35 HXAAAA WHGAAA AAAAxx
+5639 4261 1 3 9 19 39 639 1639 639 5639 78 79 XIAAAA XHGAAA HHHHxx
+3448 4262 0 0 8 8 48 448 1448 3448 3448 96 97 QCAAAA YHGAAA OOOOxx
+2919 4263 1 3 9 19 19 919 919 2919 2919 38 39 HIAAAA ZHGAAA VVVVxx
+3417 4264 1 1 7 17 17 417 1417 3417 3417 34 35 LBAAAA AIGAAA AAAAxx
+943 4265 1 3 3 3 43 943 943 943 943 86 87 HKAAAA BIGAAA HHHHxx
+775 4266 1 3 5 15 75 775 775 775 775 150 151 VDAAAA CIGAAA OOOOxx
+2333 4267 1 1 3 13 33 333 333 2333 2333 66 67 TLAAAA DIGAAA VVVVxx
+4801 4268 1 1 1 1 1 801 801 4801 4801 2 3 RCAAAA EIGAAA AAAAxx
+7169 4269 1 1 9 9 69 169 1169 2169 7169 138 139 TPAAAA FIGAAA HHHHxx
+2840 4270 0 0 0 0 40 840 840 2840 2840 80 81 GFAAAA GIGAAA OOOOxx
+9034 4271 0 2 4 14 34 34 1034 4034 9034 68 69 MJAAAA HIGAAA VVVVxx
+6154 4272 0 2 4 14 54 154 154 1154 6154 108 109 SCAAAA IIGAAA AAAAxx
+1412 4273 0 0 2 12 12 412 1412 1412 1412 24 25 ICAAAA JIGAAA HHHHxx
+2263 4274 1 3 3 3 63 263 263 2263 2263 126 127 BJAAAA KIGAAA OOOOxx
+7118 4275 0 2 8 18 18 118 1118 2118 7118 36 37 UNAAAA LIGAAA VVVVxx
+1526 4276 0 2 6 6 26 526 1526 1526 1526 52 53 SGAAAA MIGAAA AAAAxx
+491 4277 1 3 1 11 91 491 491 491 491 182 183 XSAAAA NIGAAA HHHHxx
+9732 4278 0 0 2 12 32 732 1732 4732 9732 64 65 IKAAAA OIGAAA OOOOxx
+7067 4279 1 3 7 7 67 67 1067 2067 7067 134 135 VLAAAA PIGAAA VVVVxx
+212 4280 0 0 2 12 12 212 212 212 212 24 25 EIAAAA QIGAAA AAAAxx
+1955 4281 1 3 5 15 55 955 1955 1955 1955 110 111 FXAAAA RIGAAA HHHHxx
+3303 4282 1 3 3 3 3 303 1303 3303 3303 6 7 BXAAAA SIGAAA OOOOxx
+2715 4283 1 3 5 15 15 715 715 2715 2715 30 31 LAAAAA TIGAAA VVVVxx
+8168 4284 0 0 8 8 68 168 168 3168 8168 136 137 ECAAAA UIGAAA AAAAxx
+6799 4285 1 3 9 19 99 799 799 1799 6799 198 199 NBAAAA VIGAAA HHHHxx
+5080 4286 0 0 0 0 80 80 1080 80 5080 160 161 KNAAAA WIGAAA OOOOxx
+4939 4287 1 3 9 19 39 939 939 4939 4939 78 79 ZHAAAA XIGAAA VVVVxx
+6604 4288 0 0 4 4 4 604 604 1604 6604 8 9 AUAAAA YIGAAA AAAAxx
+6531 4289 1 3 1 11 31 531 531 1531 6531 62 63 FRAAAA ZIGAAA HHHHxx
+9948 4290 0 0 8 8 48 948 1948 4948 9948 96 97 QSAAAA AJGAAA OOOOxx
+7923 4291 1 3 3 3 23 923 1923 2923 7923 46 47 TSAAAA BJGAAA VVVVxx
+9905 4292 1 1 5 5 5 905 1905 4905 9905 10 11 ZQAAAA CJGAAA AAAAxx
+340 4293 0 0 0 0 40 340 340 340 340 80 81 CNAAAA DJGAAA HHHHxx
+1721 4294 1 1 1 1 21 721 1721 1721 1721 42 43 FOAAAA EJGAAA OOOOxx
+9047 4295 1 3 7 7 47 47 1047 4047 9047 94 95 ZJAAAA FJGAAA VVVVxx
+4723 4296 1 3 3 3 23 723 723 4723 4723 46 47 RZAAAA GJGAAA AAAAxx
+5748 4297 0 0 8 8 48 748 1748 748 5748 96 97 CNAAAA HJGAAA HHHHxx
+6845 4298 1 1 5 5 45 845 845 1845 6845 90 91 HDAAAA IJGAAA OOOOxx
+1556 4299 0 0 6 16 56 556 1556 1556 1556 112 113 WHAAAA JJGAAA VVVVxx
+9505 4300 1 1 5 5 5 505 1505 4505 9505 10 11 PBAAAA KJGAAA AAAAxx
+3573 4301 1 1 3 13 73 573 1573 3573 3573 146 147 LHAAAA LJGAAA HHHHxx
+3785 4302 1 1 5 5 85 785 1785 3785 3785 170 171 PPAAAA MJGAAA OOOOxx
+2772 4303 0 0 2 12 72 772 772 2772 2772 144 145 QCAAAA NJGAAA VVVVxx
+7282 4304 0 2 2 2 82 282 1282 2282 7282 164 165 CUAAAA OJGAAA AAAAxx
+8106 4305 0 2 6 6 6 106 106 3106 8106 12 13 UZAAAA PJGAAA HHHHxx
+2847 4306 1 3 7 7 47 847 847 2847 2847 94 95 NFAAAA QJGAAA OOOOxx
+9803 4307 1 3 3 3 3 803 1803 4803 9803 6 7 BNAAAA RJGAAA VVVVxx
+7719 4308 1 3 9 19 19 719 1719 2719 7719 38 39 XKAAAA SJGAAA AAAAxx
+4649 4309 1 1 9 9 49 649 649 4649 4649 98 99 VWAAAA TJGAAA HHHHxx
+6196 4310 0 0 6 16 96 196 196 1196 6196 192 193 IEAAAA UJGAAA OOOOxx
+6026 4311 0 2 6 6 26 26 26 1026 6026 52 53 UXAAAA VJGAAA VVVVxx
+1646 4312 0 2 6 6 46 646 1646 1646 1646 92 93 ILAAAA WJGAAA AAAAxx
+6526 4313 0 2 6 6 26 526 526 1526 6526 52 53 ARAAAA XJGAAA HHHHxx
+5110 4314 0 2 0 10 10 110 1110 110 5110 20 21 OOAAAA YJGAAA OOOOxx
+3946 4315 0 2 6 6 46 946 1946 3946 3946 92 93 UVAAAA ZJGAAA VVVVxx
+445 4316 1 1 5 5 45 445 445 445 445 90 91 DRAAAA AKGAAA AAAAxx
+3249 4317 1 1 9 9 49 249 1249 3249 3249 98 99 ZUAAAA BKGAAA HHHHxx
+2501 4318 1 1 1 1 1 501 501 2501 2501 2 3 FSAAAA CKGAAA OOOOxx
+3243 4319 1 3 3 3 43 243 1243 3243 3243 86 87 TUAAAA DKGAAA VVVVxx
+4701 4320 1 1 1 1 1 701 701 4701 4701 2 3 VYAAAA EKGAAA AAAAxx
+472 4321 0 0 2 12 72 472 472 472 472 144 145 ESAAAA FKGAAA HHHHxx
+3356 4322 0 0 6 16 56 356 1356 3356 3356 112 113 CZAAAA GKGAAA OOOOxx
+9967 4323 1 3 7 7 67 967 1967 4967 9967 134 135 JTAAAA HKGAAA VVVVxx
+4292 4324 0 0 2 12 92 292 292 4292 4292 184 185 CJAAAA IKGAAA AAAAxx
+7005 4325 1 1 5 5 5 5 1005 2005 7005 10 11 LJAAAA JKGAAA HHHHxx
+6267 4326 1 3 7 7 67 267 267 1267 6267 134 135 BHAAAA KKGAAA OOOOxx
+6678 4327 0 2 8 18 78 678 678 1678 6678 156 157 WWAAAA LKGAAA VVVVxx
+6083 4328 1 3 3 3 83 83 83 1083 6083 166 167 ZZAAAA MKGAAA AAAAxx
+760 4329 0 0 0 0 60 760 760 760 760 120 121 GDAAAA NKGAAA HHHHxx
+7833 4330 1 1 3 13 33 833 1833 2833 7833 66 67 HPAAAA OKGAAA OOOOxx
+2877 4331 1 1 7 17 77 877 877 2877 2877 154 155 RGAAAA PKGAAA VVVVxx
+8810 4332 0 2 0 10 10 810 810 3810 8810 20 21 WAAAAA QKGAAA AAAAxx
+1560 4333 0 0 0 0 60 560 1560 1560 1560 120 121 AIAAAA RKGAAA HHHHxx
+1367 4334 1 3 7 7 67 367 1367 1367 1367 134 135 PAAAAA SKGAAA OOOOxx
+8756 4335 0 0 6 16 56 756 756 3756 8756 112 113 UYAAAA TKGAAA VVVVxx
+1346 4336 0 2 6 6 46 346 1346 1346 1346 92 93 UZAAAA UKGAAA AAAAxx
+6449 4337 1 1 9 9 49 449 449 1449 6449 98 99 BOAAAA VKGAAA HHHHxx
+6658 4338 0 2 8 18 58 658 658 1658 6658 116 117 CWAAAA WKGAAA OOOOxx
+6745 4339 1 1 5 5 45 745 745 1745 6745 90 91 LZAAAA XKGAAA VVVVxx
+4866 4340 0 2 6 6 66 866 866 4866 4866 132 133 EFAAAA YKGAAA AAAAxx
+14 4341 0 2 4 14 14 14 14 14 14 28 29 OAAAAA ZKGAAA HHHHxx
+4506 4342 0 2 6 6 6 506 506 4506 4506 12 13 IRAAAA ALGAAA OOOOxx
+1923 4343 1 3 3 3 23 923 1923 1923 1923 46 47 ZVAAAA BLGAAA VVVVxx
+8365 4344 1 1 5 5 65 365 365 3365 8365 130 131 TJAAAA CLGAAA AAAAxx
+1279 4345 1 3 9 19 79 279 1279 1279 1279 158 159 FXAAAA DLGAAA HHHHxx
+7666 4346 0 2 6 6 66 666 1666 2666 7666 132 133 WIAAAA ELGAAA OOOOxx
+7404 4347 0 0 4 4 4 404 1404 2404 7404 8 9 UYAAAA FLGAAA VVVVxx
+65 4348 1 1 5 5 65 65 65 65 65 130 131 NCAAAA GLGAAA AAAAxx
+5820 4349 0 0 0 0 20 820 1820 820 5820 40 41 WPAAAA HLGAAA HHHHxx
+459 4350 1 3 9 19 59 459 459 459 459 118 119 RRAAAA ILGAAA OOOOxx
+4787 4351 1 3 7 7 87 787 787 4787 4787 174 175 DCAAAA JLGAAA VVVVxx
+5631 4352 1 3 1 11 31 631 1631 631 5631 62 63 PIAAAA KLGAAA AAAAxx
+9717 4353 1 1 7 17 17 717 1717 4717 9717 34 35 TJAAAA LLGAAA HHHHxx
+2560 4354 0 0 0 0 60 560 560 2560 2560 120 121 MUAAAA MLGAAA OOOOxx
+8295 4355 1 3 5 15 95 295 295 3295 8295 190 191 BHAAAA NLGAAA VVVVxx
+3596 4356 0 0 6 16 96 596 1596 3596 3596 192 193 IIAAAA OLGAAA AAAAxx
+2023 4357 1 3 3 3 23 23 23 2023 2023 46 47 VZAAAA PLGAAA HHHHxx
+5055 4358 1 3 5 15 55 55 1055 55 5055 110 111 LMAAAA QLGAAA OOOOxx
+763 4359 1 3 3 3 63 763 763 763 763 126 127 JDAAAA RLGAAA VVVVxx
+6733 4360 1 1 3 13 33 733 733 1733 6733 66 67 ZYAAAA SLGAAA AAAAxx
+9266 4361 0 2 6 6 66 266 1266 4266 9266 132 133 KSAAAA TLGAAA HHHHxx
+4479 4362 1 3 9 19 79 479 479 4479 4479 158 159 HQAAAA ULGAAA OOOOxx
+1816 4363 0 0 6 16 16 816 1816 1816 1816 32 33 WRAAAA VLGAAA VVVVxx
+899 4364 1 3 9 19 99 899 899 899 899 198 199 PIAAAA WLGAAA AAAAxx
+230 4365 0 2 0 10 30 230 230 230 230 60 61 WIAAAA XLGAAA HHHHxx
+5362 4366 0 2 2 2 62 362 1362 362 5362 124 125 GYAAAA YLGAAA OOOOxx
+1609 4367 1 1 9 9 9 609 1609 1609 1609 18 19 XJAAAA ZLGAAA VVVVxx
+6750 4368 0 2 0 10 50 750 750 1750 6750 100 101 QZAAAA AMGAAA AAAAxx
+9704 4369 0 0 4 4 4 704 1704 4704 9704 8 9 GJAAAA BMGAAA HHHHxx
+3991 4370 1 3 1 11 91 991 1991 3991 3991 182 183 NXAAAA CMGAAA OOOOxx
+3959 4371 1 3 9 19 59 959 1959 3959 3959 118 119 HWAAAA DMGAAA VVVVxx
+9021 4372 1 1 1 1 21 21 1021 4021 9021 42 43 ZIAAAA EMGAAA AAAAxx
+7585 4373 1 1 5 5 85 585 1585 2585 7585 170 171 TFAAAA FMGAAA HHHHxx
+7083 4374 1 3 3 3 83 83 1083 2083 7083 166 167 LMAAAA GMGAAA OOOOxx
+7688 4375 0 0 8 8 88 688 1688 2688 7688 176 177 SJAAAA HMGAAA VVVVxx
+2673 4376 1 1 3 13 73 673 673 2673 2673 146 147 VYAAAA IMGAAA AAAAxx
+3554 4377 0 2 4 14 54 554 1554 3554 3554 108 109 SGAAAA JMGAAA HHHHxx
+7416 4378 0 0 6 16 16 416 1416 2416 7416 32 33 GZAAAA KMGAAA OOOOxx
+5672 4379 0 0 2 12 72 672 1672 672 5672 144 145 EKAAAA LMGAAA VVVVxx
+1355 4380 1 3 5 15 55 355 1355 1355 1355 110 111 DAAAAA MMGAAA AAAAxx
+3149 4381 1 1 9 9 49 149 1149 3149 3149 98 99 DRAAAA NMGAAA HHHHxx
+5811 4382 1 3 1 11 11 811 1811 811 5811 22 23 NPAAAA OMGAAA OOOOxx
+3759 4383 1 3 9 19 59 759 1759 3759 3759 118 119 POAAAA PMGAAA VVVVxx
+5634 4384 0 2 4 14 34 634 1634 634 5634 68 69 SIAAAA QMGAAA AAAAxx
+8617 4385 1 1 7 17 17 617 617 3617 8617 34 35 LTAAAA RMGAAA HHHHxx
+8949 4386 1 1 9 9 49 949 949 3949 8949 98 99 FGAAAA SMGAAA OOOOxx
+3964 4387 0 0 4 4 64 964 1964 3964 3964 128 129 MWAAAA TMGAAA VVVVxx
+3852 4388 0 0 2 12 52 852 1852 3852 3852 104 105 ESAAAA UMGAAA AAAAxx
+1555 4389 1 3 5 15 55 555 1555 1555 1555 110 111 VHAAAA VMGAAA HHHHxx
+6536 4390 0 0 6 16 36 536 536 1536 6536 72 73 KRAAAA WMGAAA OOOOxx
+4779 4391 1 3 9 19 79 779 779 4779 4779 158 159 VBAAAA XMGAAA VVVVxx
+1893 4392 1 1 3 13 93 893 1893 1893 1893 186 187 VUAAAA YMGAAA AAAAxx
+9358 4393 0 2 8 18 58 358 1358 4358 9358 116 117 YVAAAA ZMGAAA HHHHxx
+7438 4394 0 2 8 18 38 438 1438 2438 7438 76 77 CAAAAA ANGAAA OOOOxx
+941 4395 1 1 1 1 41 941 941 941 941 82 83 FKAAAA BNGAAA VVVVxx
+4844 4396 0 0 4 4 44 844 844 4844 4844 88 89 IEAAAA CNGAAA AAAAxx
+4745 4397 1 1 5 5 45 745 745 4745 4745 90 91 NAAAAA DNGAAA HHHHxx
+1017 4398 1 1 7 17 17 17 1017 1017 1017 34 35 DNAAAA ENGAAA OOOOxx
+327 4399 1 3 7 7 27 327 327 327 327 54 55 PMAAAA FNGAAA VVVVxx
+3152 4400 0 0 2 12 52 152 1152 3152 3152 104 105 GRAAAA GNGAAA AAAAxx
+4711 4401 1 3 1 11 11 711 711 4711 4711 22 23 FZAAAA HNGAAA HHHHxx
+141 4402 1 1 1 1 41 141 141 141 141 82 83 LFAAAA INGAAA OOOOxx
+1303 4403 1 3 3 3 3 303 1303 1303 1303 6 7 DYAAAA JNGAAA VVVVxx
+8873 4404 1 1 3 13 73 873 873 3873 8873 146 147 HDAAAA KNGAAA AAAAxx
+8481 4405 1 1 1 1 81 481 481 3481 8481 162 163 FOAAAA LNGAAA HHHHxx
+5445 4406 1 1 5 5 45 445 1445 445 5445 90 91 LBAAAA MNGAAA OOOOxx
+7868 4407 0 0 8 8 68 868 1868 2868 7868 136 137 QQAAAA NNGAAA VVVVxx
+6722 4408 0 2 2 2 22 722 722 1722 6722 44 45 OYAAAA ONGAAA AAAAxx
+6628 4409 0 0 8 8 28 628 628 1628 6628 56 57 YUAAAA PNGAAA HHHHxx
+7738 4410 0 2 8 18 38 738 1738 2738 7738 76 77 QLAAAA QNGAAA OOOOxx
+1018 4411 0 2 8 18 18 18 1018 1018 1018 36 37 ENAAAA RNGAAA VVVVxx
+3296 4412 0 0 6 16 96 296 1296 3296 3296 192 193 UWAAAA SNGAAA AAAAxx
+1946 4413 0 2 6 6 46 946 1946 1946 1946 92 93 WWAAAA TNGAAA HHHHxx
+6603 4414 1 3 3 3 3 603 603 1603 6603 6 7 ZTAAAA UNGAAA OOOOxx
+3562 4415 0 2 2 2 62 562 1562 3562 3562 124 125 AHAAAA VNGAAA VVVVxx
+1147 4416 1 3 7 7 47 147 1147 1147 1147 94 95 DSAAAA WNGAAA AAAAxx
+6031 4417 1 3 1 11 31 31 31 1031 6031 62 63 ZXAAAA XNGAAA HHHHxx
+6484 4418 0 0 4 4 84 484 484 1484 6484 168 169 KPAAAA YNGAAA OOOOxx
+496 4419 0 0 6 16 96 496 496 496 496 192 193 CTAAAA ZNGAAA VVVVxx
+4563 4420 1 3 3 3 63 563 563 4563 4563 126 127 NTAAAA AOGAAA AAAAxx
+1037 4421 1 1 7 17 37 37 1037 1037 1037 74 75 XNAAAA BOGAAA HHHHxx
+9672 4422 0 0 2 12 72 672 1672 4672 9672 144 145 AIAAAA COGAAA OOOOxx
+9053 4423 1 1 3 13 53 53 1053 4053 9053 106 107 FKAAAA DOGAAA VVVVxx
+2523 4424 1 3 3 3 23 523 523 2523 2523 46 47 BTAAAA EOGAAA AAAAxx
+8519 4425 1 3 9 19 19 519 519 3519 8519 38 39 RPAAAA FOGAAA HHHHxx
+8190 4426 0 2 0 10 90 190 190 3190 8190 180 181 ADAAAA GOGAAA OOOOxx
+2068 4427 0 0 8 8 68 68 68 2068 2068 136 137 OBAAAA HOGAAA VVVVxx
+8569 4428 1 1 9 9 69 569 569 3569 8569 138 139 PRAAAA IOGAAA AAAAxx
+6535 4429 1 3 5 15 35 535 535 1535 6535 70 71 JRAAAA JOGAAA HHHHxx
+1810 4430 0 2 0 10 10 810 1810 1810 1810 20 21 QRAAAA KOGAAA OOOOxx
+3099 4431 1 3 9 19 99 99 1099 3099 3099 198 199 FPAAAA LOGAAA VVVVxx
+7466 4432 0 2 6 6 66 466 1466 2466 7466 132 133 EBAAAA MOGAAA AAAAxx
+4017 4433 1 1 7 17 17 17 17 4017 4017 34 35 NYAAAA NOGAAA HHHHxx
+1097 4434 1 1 7 17 97 97 1097 1097 1097 194 195 FQAAAA OOGAAA OOOOxx
+7686 4435 0 2 6 6 86 686 1686 2686 7686 172 173 QJAAAA POGAAA VVVVxx
+6742 4436 0 2 2 2 42 742 742 1742 6742 84 85 IZAAAA QOGAAA AAAAxx
+5966 4437 0 2 6 6 66 966 1966 966 5966 132 133 MVAAAA ROGAAA HHHHxx
+3632 4438 0 0 2 12 32 632 1632 3632 3632 64 65 SJAAAA SOGAAA OOOOxx
+8837 4439 1 1 7 17 37 837 837 3837 8837 74 75 XBAAAA TOGAAA VVVVxx
+1667 4440 1 3 7 7 67 667 1667 1667 1667 134 135 DMAAAA UOGAAA AAAAxx
+8833 4441 1 1 3 13 33 833 833 3833 8833 66 67 TBAAAA VOGAAA HHHHxx
+9805 4442 1 1 5 5 5 805 1805 4805 9805 10 11 DNAAAA WOGAAA OOOOxx
+3650 4443 0 2 0 10 50 650 1650 3650 3650 100 101 KKAAAA XOGAAA VVVVxx
+2237 4444 1 1 7 17 37 237 237 2237 2237 74 75 BIAAAA YOGAAA AAAAxx
+9980 4445 0 0 0 0 80 980 1980 4980 9980 160 161 WTAAAA ZOGAAA HHHHxx
+2861 4446 1 1 1 1 61 861 861 2861 2861 122 123 BGAAAA APGAAA OOOOxx
+1334 4447 0 2 4 14 34 334 1334 1334 1334 68 69 IZAAAA BPGAAA VVVVxx
+842 4448 0 2 2 2 42 842 842 842 842 84 85 KGAAAA CPGAAA AAAAxx
+1116 4449 0 0 6 16 16 116 1116 1116 1116 32 33 YQAAAA DPGAAA HHHHxx
+4055 4450 1 3 5 15 55 55 55 4055 4055 110 111 ZZAAAA EPGAAA OOOOxx
+3842 4451 0 2 2 2 42 842 1842 3842 3842 84 85 URAAAA FPGAAA VVVVxx
+1886 4452 0 2 6 6 86 886 1886 1886 1886 172 173 OUAAAA GPGAAA AAAAxx
+8589 4453 1 1 9 9 89 589 589 3589 8589 178 179 JSAAAA HPGAAA HHHHxx
+5873 4454 1 1 3 13 73 873 1873 873 5873 146 147 XRAAAA IPGAAA OOOOxx
+7711 4455 1 3 1 11 11 711 1711 2711 7711 22 23 PKAAAA JPGAAA VVVVxx
+911 4456 1 3 1 11 11 911 911 911 911 22 23 BJAAAA KPGAAA AAAAxx
+5837 4457 1 1 7 17 37 837 1837 837 5837 74 75 NQAAAA LPGAAA HHHHxx
+897 4458 1 1 7 17 97 897 897 897 897 194 195 NIAAAA MPGAAA OOOOxx
+4299 4459 1 3 9 19 99 299 299 4299 4299 198 199 JJAAAA NPGAAA VVVVxx
+7774 4460 0 2 4 14 74 774 1774 2774 7774 148 149 ANAAAA OPGAAA AAAAxx
+7832 4461 0 0 2 12 32 832 1832 2832 7832 64 65 GPAAAA PPGAAA HHHHxx
+9915 4462 1 3 5 15 15 915 1915 4915 9915 30 31 JRAAAA QPGAAA OOOOxx
+9 4463 1 1 9 9 9 9 9 9 9 18 19 JAAAAA RPGAAA VVVVxx
+9675 4464 1 3 5 15 75 675 1675 4675 9675 150 151 DIAAAA SPGAAA AAAAxx
+7953 4465 1 1 3 13 53 953 1953 2953 7953 106 107 XTAAAA TPGAAA HHHHxx
+8912 4466 0 0 2 12 12 912 912 3912 8912 24 25 UEAAAA UPGAAA OOOOxx
+4188 4467 0 0 8 8 88 188 188 4188 4188 176 177 CFAAAA VPGAAA VVVVxx
+8446 4468 0 2 6 6 46 446 446 3446 8446 92 93 WMAAAA WPGAAA AAAAxx
+1600 4469 0 0 0 0 0 600 1600 1600 1600 0 1 OJAAAA XPGAAA HHHHxx
+43 4470 1 3 3 3 43 43 43 43 43 86 87 RBAAAA YPGAAA OOOOxx
+544 4471 0 0 4 4 44 544 544 544 544 88 89 YUAAAA ZPGAAA VVVVxx
+6977 4472 1 1 7 17 77 977 977 1977 6977 154 155 JIAAAA AQGAAA AAAAxx
+3191 4473 1 3 1 11 91 191 1191 3191 3191 182 183 TSAAAA BQGAAA HHHHxx
+418 4474 0 2 8 18 18 418 418 418 418 36 37 CQAAAA CQGAAA OOOOxx
+3142 4475 0 2 2 2 42 142 1142 3142 3142 84 85 WQAAAA DQGAAA VVVVxx
+5042 4476 0 2 2 2 42 42 1042 42 5042 84 85 YLAAAA EQGAAA AAAAxx
+2194 4477 0 2 4 14 94 194 194 2194 2194 188 189 KGAAAA FQGAAA HHHHxx
+2397 4478 1 1 7 17 97 397 397 2397 2397 194 195 FOAAAA GQGAAA OOOOxx
+4684 4479 0 0 4 4 84 684 684 4684 4684 168 169 EYAAAA HQGAAA VVVVxx
+34 4480 0 2 4 14 34 34 34 34 34 68 69 IBAAAA IQGAAA AAAAxx
+3844 4481 0 0 4 4 44 844 1844 3844 3844 88 89 WRAAAA JQGAAA HHHHxx
+7824 4482 0 0 4 4 24 824 1824 2824 7824 48 49 YOAAAA KQGAAA OOOOxx
+6177 4483 1 1 7 17 77 177 177 1177 6177 154 155 PDAAAA LQGAAA VVVVxx
+9657 4484 1 1 7 17 57 657 1657 4657 9657 114 115 LHAAAA MQGAAA AAAAxx
+4546 4485 0 2 6 6 46 546 546 4546 4546 92 93 WSAAAA NQGAAA HHHHxx
+599 4486 1 3 9 19 99 599 599 599 599 198 199 BXAAAA OQGAAA OOOOxx
+153 4487 1 1 3 13 53 153 153 153 153 106 107 XFAAAA PQGAAA VVVVxx
+6910 4488 0 2 0 10 10 910 910 1910 6910 20 21 UFAAAA QQGAAA AAAAxx
+4408 4489 0 0 8 8 8 408 408 4408 4408 16 17 ONAAAA RQGAAA HHHHxx
+1164 4490 0 0 4 4 64 164 1164 1164 1164 128 129 USAAAA SQGAAA OOOOxx
+6469 4491 1 1 9 9 69 469 469 1469 6469 138 139 VOAAAA TQGAAA VVVVxx
+5996 4492 0 0 6 16 96 996 1996 996 5996 192 193 QWAAAA UQGAAA AAAAxx
+2639 4493 1 3 9 19 39 639 639 2639 2639 78 79 NXAAAA VQGAAA HHHHxx
+2678 4494 0 2 8 18 78 678 678 2678 2678 156 157 AZAAAA WQGAAA OOOOxx
+8392 4495 0 0 2 12 92 392 392 3392 8392 184 185 UKAAAA XQGAAA VVVVxx
+1386 4496 0 2 6 6 86 386 1386 1386 1386 172 173 IBAAAA YQGAAA AAAAxx
+5125 4497 1 1 5 5 25 125 1125 125 5125 50 51 DPAAAA ZQGAAA HHHHxx
+8453 4498 1 1 3 13 53 453 453 3453 8453 106 107 DNAAAA ARGAAA OOOOxx
+2369 4499 1 1 9 9 69 369 369 2369 2369 138 139 DNAAAA BRGAAA VVVVxx
+1608 4500 0 0 8 8 8 608 1608 1608 1608 16 17 WJAAAA CRGAAA AAAAxx
+3781 4501 1 1 1 1 81 781 1781 3781 3781 162 163 LPAAAA DRGAAA HHHHxx
+903 4502 1 3 3 3 3 903 903 903 903 6 7 TIAAAA ERGAAA OOOOxx
+2099 4503 1 3 9 19 99 99 99 2099 2099 198 199 TCAAAA FRGAAA VVVVxx
+538 4504 0 2 8 18 38 538 538 538 538 76 77 SUAAAA GRGAAA AAAAxx
+9177 4505 1 1 7 17 77 177 1177 4177 9177 154 155 ZOAAAA HRGAAA HHHHxx
+420 4506 0 0 0 0 20 420 420 420 420 40 41 EQAAAA IRGAAA OOOOxx
+9080 4507 0 0 0 0 80 80 1080 4080 9080 160 161 GLAAAA JRGAAA VVVVxx
+2630 4508 0 2 0 10 30 630 630 2630 2630 60 61 EXAAAA KRGAAA AAAAxx
+5978 4509 0 2 8 18 78 978 1978 978 5978 156 157 YVAAAA LRGAAA HHHHxx
+9239 4510 1 3 9 19 39 239 1239 4239 9239 78 79 JRAAAA MRGAAA OOOOxx
+4372 4511 0 0 2 12 72 372 372 4372 4372 144 145 EMAAAA NRGAAA VVVVxx
+4357 4512 1 1 7 17 57 357 357 4357 4357 114 115 PLAAAA ORGAAA AAAAxx
+9857 4513 1 1 7 17 57 857 1857 4857 9857 114 115 DPAAAA PRGAAA HHHHxx
+7933 4514 1 1 3 13 33 933 1933 2933 7933 66 67 DTAAAA QRGAAA OOOOxx
+9574 4515 0 2 4 14 74 574 1574 4574 9574 148 149 GEAAAA RRGAAA VVVVxx
+8294 4516 0 2 4 14 94 294 294 3294 8294 188 189 AHAAAA SRGAAA AAAAxx
+627 4517 1 3 7 7 27 627 627 627 627 54 55 DYAAAA TRGAAA HHHHxx
+3229 4518 1 1 9 9 29 229 1229 3229 3229 58 59 FUAAAA URGAAA OOOOxx
+3163 4519 1 3 3 3 63 163 1163 3163 3163 126 127 RRAAAA VRGAAA VVVVxx
+7349 4520 1 1 9 9 49 349 1349 2349 7349 98 99 RWAAAA WRGAAA AAAAxx
+6889 4521 1 1 9 9 89 889 889 1889 6889 178 179 ZEAAAA XRGAAA HHHHxx
+2101 4522 1 1 1 1 1 101 101 2101 2101 2 3 VCAAAA YRGAAA OOOOxx
+6476 4523 0 0 6 16 76 476 476 1476 6476 152 153 CPAAAA ZRGAAA VVVVxx
+6765 4524 1 1 5 5 65 765 765 1765 6765 130 131 FAAAAA ASGAAA AAAAxx
+4204 4525 0 0 4 4 4 204 204 4204 4204 8 9 SFAAAA BSGAAA HHHHxx
+5915 4526 1 3 5 15 15 915 1915 915 5915 30 31 NTAAAA CSGAAA OOOOxx
+2318 4527 0 2 8 18 18 318 318 2318 2318 36 37 ELAAAA DSGAAA VVVVxx
+294 4528 0 2 4 14 94 294 294 294 294 188 189 ILAAAA ESGAAA AAAAxx
+5245 4529 1 1 5 5 45 245 1245 245 5245 90 91 TTAAAA FSGAAA HHHHxx
+4481 4530 1 1 1 1 81 481 481 4481 4481 162 163 JQAAAA GSGAAA OOOOxx
+7754 4531 0 2 4 14 54 754 1754 2754 7754 108 109 GMAAAA HSGAAA VVVVxx
+8494 4532 0 2 4 14 94 494 494 3494 8494 188 189 SOAAAA ISGAAA AAAAxx
+4014 4533 0 2 4 14 14 14 14 4014 4014 28 29 KYAAAA JSGAAA HHHHxx
+2197 4534 1 1 7 17 97 197 197 2197 2197 194 195 NGAAAA KSGAAA OOOOxx
+1297 4535 1 1 7 17 97 297 1297 1297 1297 194 195 XXAAAA LSGAAA VVVVxx
+1066 4536 0 2 6 6 66 66 1066 1066 1066 132 133 APAAAA MSGAAA AAAAxx
+5710 4537 0 2 0 10 10 710 1710 710 5710 20 21 QLAAAA NSGAAA HHHHxx
+4100 4538 0 0 0 0 0 100 100 4100 4100 0 1 SBAAAA OSGAAA OOOOxx
+7356 4539 0 0 6 16 56 356 1356 2356 7356 112 113 YWAAAA PSGAAA VVVVxx
+7658 4540 0 2 8 18 58 658 1658 2658 7658 116 117 OIAAAA QSGAAA AAAAxx
+3666 4541 0 2 6 6 66 666 1666 3666 3666 132 133 ALAAAA RSGAAA HHHHxx
+9713 4542 1 1 3 13 13 713 1713 4713 9713 26 27 PJAAAA SSGAAA OOOOxx
+691 4543 1 3 1 11 91 691 691 691 691 182 183 PAAAAA TSGAAA VVVVxx
+3112 4544 0 0 2 12 12 112 1112 3112 3112 24 25 SPAAAA USGAAA AAAAxx
+6035 4545 1 3 5 15 35 35 35 1035 6035 70 71 DYAAAA VSGAAA HHHHxx
+8353 4546 1 1 3 13 53 353 353 3353 8353 106 107 HJAAAA WSGAAA OOOOxx
+5679 4547 1 3 9 19 79 679 1679 679 5679 158 159 LKAAAA XSGAAA VVVVxx
+2124 4548 0 0 4 4 24 124 124 2124 2124 48 49 SDAAAA YSGAAA AAAAxx
+4714 4549 0 2 4 14 14 714 714 4714 4714 28 29 IZAAAA ZSGAAA HHHHxx
+9048 4550 0 0 8 8 48 48 1048 4048 9048 96 97 AKAAAA ATGAAA OOOOxx
+7692 4551 0 0 2 12 92 692 1692 2692 7692 184 185 WJAAAA BTGAAA VVVVxx
+4542 4552 0 2 2 2 42 542 542 4542 4542 84 85 SSAAAA CTGAAA AAAAxx
+8737 4553 1 1 7 17 37 737 737 3737 8737 74 75 BYAAAA DTGAAA HHHHxx
+4977 4554 1 1 7 17 77 977 977 4977 4977 154 155 LJAAAA ETGAAA OOOOxx
+9349 4555 1 1 9 9 49 349 1349 4349 9349 98 99 PVAAAA FTGAAA VVVVxx
+731 4556 1 3 1 11 31 731 731 731 731 62 63 DCAAAA GTGAAA AAAAxx
+1788 4557 0 0 8 8 88 788 1788 1788 1788 176 177 UQAAAA HTGAAA HHHHxx
+7830 4558 0 2 0 10 30 830 1830 2830 7830 60 61 EPAAAA ITGAAA OOOOxx
+3977 4559 1 1 7 17 77 977 1977 3977 3977 154 155 ZWAAAA JTGAAA VVVVxx
+2421 4560 1 1 1 1 21 421 421 2421 2421 42 43 DPAAAA KTGAAA AAAAxx
+5891 4561 1 3 1 11 91 891 1891 891 5891 182 183 PSAAAA LTGAAA HHHHxx
+1111 4562 1 3 1 11 11 111 1111 1111 1111 22 23 TQAAAA MTGAAA OOOOxx
+9224 4563 0 0 4 4 24 224 1224 4224 9224 48 49 UQAAAA NTGAAA VVVVxx
+9872 4564 0 0 2 12 72 872 1872 4872 9872 144 145 SPAAAA OTGAAA AAAAxx
+2433 4565 1 1 3 13 33 433 433 2433 2433 66 67 PPAAAA PTGAAA HHHHxx
+1491 4566 1 3 1 11 91 491 1491 1491 1491 182 183 JFAAAA QTGAAA OOOOxx
+6653 4567 1 1 3 13 53 653 653 1653 6653 106 107 XVAAAA RTGAAA VVVVxx
+1907 4568 1 3 7 7 7 907 1907 1907 1907 14 15 JVAAAA STGAAA AAAAxx
+889 4569 1 1 9 9 89 889 889 889 889 178 179 FIAAAA TTGAAA HHHHxx
+561 4570 1 1 1 1 61 561 561 561 561 122 123 PVAAAA UTGAAA OOOOxx
+7415 4571 1 3 5 15 15 415 1415 2415 7415 30 31 FZAAAA VTGAAA VVVVxx
+2703 4572 1 3 3 3 3 703 703 2703 2703 6 7 ZZAAAA WTGAAA AAAAxx
+2561 4573 1 1 1 1 61 561 561 2561 2561 122 123 NUAAAA XTGAAA HHHHxx
+1257 4574 1 1 7 17 57 257 1257 1257 1257 114 115 JWAAAA YTGAAA OOOOxx
+2390 4575 0 2 0 10 90 390 390 2390 2390 180 181 YNAAAA ZTGAAA VVVVxx
+3915 4576 1 3 5 15 15 915 1915 3915 3915 30 31 PUAAAA AUGAAA AAAAxx
+8476 4577 0 0 6 16 76 476 476 3476 8476 152 153 AOAAAA BUGAAA HHHHxx
+607 4578 1 3 7 7 7 607 607 607 607 14 15 JXAAAA CUGAAA OOOOxx
+3891 4579 1 3 1 11 91 891 1891 3891 3891 182 183 RTAAAA DUGAAA VVVVxx
+7269 4580 1 1 9 9 69 269 1269 2269 7269 138 139 PTAAAA EUGAAA AAAAxx
+9537 4581 1 1 7 17 37 537 1537 4537 9537 74 75 VCAAAA FUGAAA HHHHxx
+8518 4582 0 2 8 18 18 518 518 3518 8518 36 37 QPAAAA GUGAAA OOOOxx
+5221 4583 1 1 1 1 21 221 1221 221 5221 42 43 VSAAAA HUGAAA VVVVxx
+3274 4584 0 2 4 14 74 274 1274 3274 3274 148 149 YVAAAA IUGAAA AAAAxx
+6677 4585 1 1 7 17 77 677 677 1677 6677 154 155 VWAAAA JUGAAA HHHHxx
+3114 4586 0 2 4 14 14 114 1114 3114 3114 28 29 UPAAAA KUGAAA OOOOxx
+1966 4587 0 2 6 6 66 966 1966 1966 1966 132 133 QXAAAA LUGAAA VVVVxx
+5941 4588 1 1 1 1 41 941 1941 941 5941 82 83 NUAAAA MUGAAA AAAAxx
+9463 4589 1 3 3 3 63 463 1463 4463 9463 126 127 ZZAAAA NUGAAA HHHHxx
+8966 4590 0 2 6 6 66 966 966 3966 8966 132 133 WGAAAA OUGAAA OOOOxx
+4402 4591 0 2 2 2 2 402 402 4402 4402 4 5 INAAAA PUGAAA VVVVxx
+3364 4592 0 0 4 4 64 364 1364 3364 3364 128 129 KZAAAA QUGAAA AAAAxx
+3698 4593 0 2 8 18 98 698 1698 3698 3698 196 197 GMAAAA RUGAAA HHHHxx
+4651 4594 1 3 1 11 51 651 651 4651 4651 102 103 XWAAAA SUGAAA OOOOxx
+2127 4595 1 3 7 7 27 127 127 2127 2127 54 55 VDAAAA TUGAAA VVVVxx
+3614 4596 0 2 4 14 14 614 1614 3614 3614 28 29 AJAAAA UUGAAA AAAAxx
+5430 4597 0 2 0 10 30 430 1430 430 5430 60 61 WAAAAA VUGAAA HHHHxx
+3361 4598 1 1 1 1 61 361 1361 3361 3361 122 123 HZAAAA WUGAAA OOOOxx
+4798 4599 0 2 8 18 98 798 798 4798 4798 196 197 OCAAAA XUGAAA VVVVxx
+8269 4600 1 1 9 9 69 269 269 3269 8269 138 139 BGAAAA YUGAAA AAAAxx
+6458 4601 0 2 8 18 58 458 458 1458 6458 116 117 KOAAAA ZUGAAA HHHHxx
+3358 4602 0 2 8 18 58 358 1358 3358 3358 116 117 EZAAAA AVGAAA OOOOxx
+5898 4603 0 2 8 18 98 898 1898 898 5898 196 197 WSAAAA BVGAAA VVVVxx
+1880 4604 0 0 0 0 80 880 1880 1880 1880 160 161 IUAAAA CVGAAA AAAAxx
+782 4605 0 2 2 2 82 782 782 782 782 164 165 CEAAAA DVGAAA HHHHxx
+3102 4606 0 2 2 2 2 102 1102 3102 3102 4 5 IPAAAA EVGAAA OOOOxx
+6366 4607 0 2 6 6 66 366 366 1366 6366 132 133 WKAAAA FVGAAA VVVVxx
+399 4608 1 3 9 19 99 399 399 399 399 198 199 JPAAAA GVGAAA AAAAxx
+6773 4609 1 1 3 13 73 773 773 1773 6773 146 147 NAAAAA HVGAAA HHHHxx
+7942 4610 0 2 2 2 42 942 1942 2942 7942 84 85 MTAAAA IVGAAA OOOOxx
+6274 4611 0 2 4 14 74 274 274 1274 6274 148 149 IHAAAA JVGAAA VVVVxx
+7447 4612 1 3 7 7 47 447 1447 2447 7447 94 95 LAAAAA KVGAAA AAAAxx
+7648 4613 0 0 8 8 48 648 1648 2648 7648 96 97 EIAAAA LVGAAA HHHHxx
+3997 4614 1 1 7 17 97 997 1997 3997 3997 194 195 TXAAAA MVGAAA OOOOxx
+1759 4615 1 3 9 19 59 759 1759 1759 1759 118 119 RPAAAA NVGAAA VVVVxx
+1785 4616 1 1 5 5 85 785 1785 1785 1785 170 171 RQAAAA OVGAAA AAAAxx
+8930 4617 0 2 0 10 30 930 930 3930 8930 60 61 MFAAAA PVGAAA HHHHxx
+7595 4618 1 3 5 15 95 595 1595 2595 7595 190 191 DGAAAA QVGAAA OOOOxx
+6752 4619 0 0 2 12 52 752 752 1752 6752 104 105 SZAAAA RVGAAA VVVVxx
+5635 4620 1 3 5 15 35 635 1635 635 5635 70 71 TIAAAA SVGAAA AAAAxx
+1579 4621 1 3 9 19 79 579 1579 1579 1579 158 159 TIAAAA TVGAAA HHHHxx
+7743 4622 1 3 3 3 43 743 1743 2743 7743 86 87 VLAAAA UVGAAA OOOOxx
+5856 4623 0 0 6 16 56 856 1856 856 5856 112 113 GRAAAA VVGAAA VVVVxx
+7273 4624 1 1 3 13 73 273 1273 2273 7273 146 147 TTAAAA WVGAAA AAAAxx
+1399 4625 1 3 9 19 99 399 1399 1399 1399 198 199 VBAAAA XVGAAA HHHHxx
+3694 4626 0 2 4 14 94 694 1694 3694 3694 188 189 CMAAAA YVGAAA OOOOxx
+2782 4627 0 2 2 2 82 782 782 2782 2782 164 165 ADAAAA ZVGAAA VVVVxx
+6951 4628 1 3 1 11 51 951 951 1951 6951 102 103 JHAAAA AWGAAA AAAAxx
+6053 4629 1 1 3 13 53 53 53 1053 6053 106 107 VYAAAA BWGAAA HHHHxx
+1753 4630 1 1 3 13 53 753 1753 1753 1753 106 107 LPAAAA CWGAAA OOOOxx
+3985 4631 1 1 5 5 85 985 1985 3985 3985 170 171 HXAAAA DWGAAA VVVVxx
+6159 4632 1 3 9 19 59 159 159 1159 6159 118 119 XCAAAA EWGAAA AAAAxx
+6250 4633 0 2 0 10 50 250 250 1250 6250 100 101 KGAAAA FWGAAA HHHHxx
+6240 4634 0 0 0 0 40 240 240 1240 6240 80 81 AGAAAA GWGAAA OOOOxx
+6571 4635 1 3 1 11 71 571 571 1571 6571 142 143 TSAAAA HWGAAA VVVVxx
+8624 4636 0 0 4 4 24 624 624 3624 8624 48 49 STAAAA IWGAAA AAAAxx
+9718 4637 0 2 8 18 18 718 1718 4718 9718 36 37 UJAAAA JWGAAA HHHHxx
+5529 4638 1 1 9 9 29 529 1529 529 5529 58 59 REAAAA KWGAAA OOOOxx
+7089 4639 1 1 9 9 89 89 1089 2089 7089 178 179 RMAAAA LWGAAA VVVVxx
+5488 4640 0 0 8 8 88 488 1488 488 5488 176 177 CDAAAA MWGAAA AAAAxx
+5444 4641 0 0 4 4 44 444 1444 444 5444 88 89 KBAAAA NWGAAA HHHHxx
+4899 4642 1 3 9 19 99 899 899 4899 4899 198 199 LGAAAA OWGAAA OOOOxx
+7928 4643 0 0 8 8 28 928 1928 2928 7928 56 57 YSAAAA PWGAAA VVVVxx
+4736 4644 0 0 6 16 36 736 736 4736 4736 72 73 EAAAAA QWGAAA AAAAxx
+4317 4645 1 1 7 17 17 317 317 4317 4317 34 35 BKAAAA RWGAAA HHHHxx
+1174 4646 0 2 4 14 74 174 1174 1174 1174 148 149 ETAAAA SWGAAA OOOOxx
+6138 4647 0 2 8 18 38 138 138 1138 6138 76 77 CCAAAA TWGAAA VVVVxx
+3943 4648 1 3 3 3 43 943 1943 3943 3943 86 87 RVAAAA UWGAAA AAAAxx
+1545 4649 1 1 5 5 45 545 1545 1545 1545 90 91 LHAAAA VWGAAA HHHHxx
+6867 4650 1 3 7 7 67 867 867 1867 6867 134 135 DEAAAA WWGAAA OOOOxx
+6832 4651 0 0 2 12 32 832 832 1832 6832 64 65 UCAAAA XWGAAA VVVVxx
+2987 4652 1 3 7 7 87 987 987 2987 2987 174 175 XKAAAA YWGAAA AAAAxx
+5169 4653 1 1 9 9 69 169 1169 169 5169 138 139 VQAAAA ZWGAAA HHHHxx
+8998 4654 0 2 8 18 98 998 998 3998 8998 196 197 CIAAAA AXGAAA OOOOxx
+9347 4655 1 3 7 7 47 347 1347 4347 9347 94 95 NVAAAA BXGAAA VVVVxx
+4800 4656 0 0 0 0 0 800 800 4800 4800 0 1 QCAAAA CXGAAA AAAAxx
+4200 4657 0 0 0 0 0 200 200 4200 4200 0 1 OFAAAA DXGAAA HHHHxx
+4046 4658 0 2 6 6 46 46 46 4046 4046 92 93 QZAAAA EXGAAA OOOOxx
+7142 4659 0 2 2 2 42 142 1142 2142 7142 84 85 SOAAAA FXGAAA VVVVxx
+2733 4660 1 1 3 13 33 733 733 2733 2733 66 67 DBAAAA GXGAAA AAAAxx
+1568 4661 0 0 8 8 68 568 1568 1568 1568 136 137 IIAAAA HXGAAA HHHHxx
+5105 4662 1 1 5 5 5 105 1105 105 5105 10 11 JOAAAA IXGAAA OOOOxx
+9115 4663 1 3 5 15 15 115 1115 4115 9115 30 31 PMAAAA JXGAAA VVVVxx
+6475 4664 1 3 5 15 75 475 475 1475 6475 150 151 BPAAAA KXGAAA AAAAxx
+3796 4665 0 0 6 16 96 796 1796 3796 3796 192 193 AQAAAA LXGAAA HHHHxx
+5410 4666 0 2 0 10 10 410 1410 410 5410 20 21 CAAAAA MXGAAA OOOOxx
+4023 4667 1 3 3 3 23 23 23 4023 4023 46 47 TYAAAA NXGAAA VVVVxx
+8904 4668 0 0 4 4 4 904 904 3904 8904 8 9 MEAAAA OXGAAA AAAAxx
+450 4669 0 2 0 10 50 450 450 450 450 100 101 IRAAAA PXGAAA HHHHxx
+8087 4670 1 3 7 7 87 87 87 3087 8087 174 175 BZAAAA QXGAAA OOOOxx
+6478 4671 0 2 8 18 78 478 478 1478 6478 156 157 EPAAAA RXGAAA VVVVxx
+2696 4672 0 0 6 16 96 696 696 2696 2696 192 193 SZAAAA SXGAAA AAAAxx
+1792 4673 0 0 2 12 92 792 1792 1792 1792 184 185 YQAAAA TXGAAA HHHHxx
+9699 4674 1 3 9 19 99 699 1699 4699 9699 198 199 BJAAAA UXGAAA OOOOxx
+9160 4675 0 0 0 0 60 160 1160 4160 9160 120 121 IOAAAA VXGAAA VVVVxx
+9989 4676 1 1 9 9 89 989 1989 4989 9989 178 179 FUAAAA WXGAAA AAAAxx
+9568 4677 0 0 8 8 68 568 1568 4568 9568 136 137 AEAAAA XXGAAA HHHHxx
+487 4678 1 3 7 7 87 487 487 487 487 174 175 TSAAAA YXGAAA OOOOxx
+7863 4679 1 3 3 3 63 863 1863 2863 7863 126 127 LQAAAA ZXGAAA VVVVxx
+1884 4680 0 0 4 4 84 884 1884 1884 1884 168 169 MUAAAA AYGAAA AAAAxx
+2651 4681 1 3 1 11 51 651 651 2651 2651 102 103 ZXAAAA BYGAAA HHHHxx
+8285 4682 1 1 5 5 85 285 285 3285 8285 170 171 RGAAAA CYGAAA OOOOxx
+3927 4683 1 3 7 7 27 927 1927 3927 3927 54 55 BVAAAA DYGAAA VVVVxx
+4076 4684 0 0 6 16 76 76 76 4076 4076 152 153 UAAAAA EYGAAA AAAAxx
+6149 4685 1 1 9 9 49 149 149 1149 6149 98 99 NCAAAA FYGAAA HHHHxx
+6581 4686 1 1 1 1 81 581 581 1581 6581 162 163 DTAAAA GYGAAA OOOOxx
+8293 4687 1 1 3 13 93 293 293 3293 8293 186 187 ZGAAAA HYGAAA VVVVxx
+7665 4688 1 1 5 5 65 665 1665 2665 7665 130 131 VIAAAA IYGAAA AAAAxx
+4435 4689 1 3 5 15 35 435 435 4435 4435 70 71 POAAAA JYGAAA HHHHxx
+1271 4690 1 3 1 11 71 271 1271 1271 1271 142 143 XWAAAA KYGAAA OOOOxx
+3928 4691 0 0 8 8 28 928 1928 3928 3928 56 57 CVAAAA LYGAAA VVVVxx
+7045 4692 1 1 5 5 45 45 1045 2045 7045 90 91 ZKAAAA MYGAAA AAAAxx
+4943 4693 1 3 3 3 43 943 943 4943 4943 86 87 DIAAAA NYGAAA HHHHxx
+8473 4694 1 1 3 13 73 473 473 3473 8473 146 147 XNAAAA OYGAAA OOOOxx
+1707 4695 1 3 7 7 7 707 1707 1707 1707 14 15 RNAAAA PYGAAA VVVVxx
+7509 4696 1 1 9 9 9 509 1509 2509 7509 18 19 VCAAAA QYGAAA AAAAxx
+1593 4697 1 1 3 13 93 593 1593 1593 1593 186 187 HJAAAA RYGAAA HHHHxx
+9281 4698 1 1 1 1 81 281 1281 4281 9281 162 163 ZSAAAA SYGAAA OOOOxx
+8986 4699 0 2 6 6 86 986 986 3986 8986 172 173 QHAAAA TYGAAA VVVVxx
+3740 4700 0 0 0 0 40 740 1740 3740 3740 80 81 WNAAAA UYGAAA AAAAxx
+9265 4701 1 1 5 5 65 265 1265 4265 9265 130 131 JSAAAA VYGAAA HHHHxx
+1510 4702 0 2 0 10 10 510 1510 1510 1510 20 21 CGAAAA WYGAAA OOOOxx
+3022 4703 0 2 2 2 22 22 1022 3022 3022 44 45 GMAAAA XYGAAA VVVVxx
+9014 4704 0 2 4 14 14 14 1014 4014 9014 28 29 SIAAAA YYGAAA AAAAxx
+6816 4705 0 0 6 16 16 816 816 1816 6816 32 33 ECAAAA ZYGAAA HHHHxx
+5518 4706 0 2 8 18 18 518 1518 518 5518 36 37 GEAAAA AZGAAA OOOOxx
+4451 4707 1 3 1 11 51 451 451 4451 4451 102 103 FPAAAA BZGAAA VVVVxx
+8747 4708 1 3 7 7 47 747 747 3747 8747 94 95 LYAAAA CZGAAA AAAAxx
+4646 4709 0 2 6 6 46 646 646 4646 4646 92 93 SWAAAA DZGAAA HHHHxx
+7296 4710 0 0 6 16 96 296 1296 2296 7296 192 193 QUAAAA EZGAAA OOOOxx
+9644 4711 0 0 4 4 44 644 1644 4644 9644 88 89 YGAAAA FZGAAA VVVVxx
+5977 4712 1 1 7 17 77 977 1977 977 5977 154 155 XVAAAA GZGAAA AAAAxx
+6270 4713 0 2 0 10 70 270 270 1270 6270 140 141 EHAAAA HZGAAA HHHHxx
+5578 4714 0 2 8 18 78 578 1578 578 5578 156 157 OGAAAA IZGAAA OOOOxx
+2465 4715 1 1 5 5 65 465 465 2465 2465 130 131 VQAAAA JZGAAA VVVVxx
+6436 4716 0 0 6 16 36 436 436 1436 6436 72 73 ONAAAA KZGAAA AAAAxx
+8089 4717 1 1 9 9 89 89 89 3089 8089 178 179 DZAAAA LZGAAA HHHHxx
+2409 4718 1 1 9 9 9 409 409 2409 2409 18 19 ROAAAA MZGAAA OOOOxx
+284 4719 0 0 4 4 84 284 284 284 284 168 169 YKAAAA NZGAAA VVVVxx
+5576 4720 0 0 6 16 76 576 1576 576 5576 152 153 MGAAAA OZGAAA AAAAxx
+6534 4721 0 2 4 14 34 534 534 1534 6534 68 69 IRAAAA PZGAAA HHHHxx
+8848 4722 0 0 8 8 48 848 848 3848 8848 96 97 ICAAAA QZGAAA OOOOxx
+4305 4723 1 1 5 5 5 305 305 4305 4305 10 11 PJAAAA RZGAAA VVVVxx
+5574 4724 0 2 4 14 74 574 1574 574 5574 148 149 KGAAAA SZGAAA AAAAxx
+596 4725 0 0 6 16 96 596 596 596 596 192 193 YWAAAA TZGAAA HHHHxx
+1253 4726 1 1 3 13 53 253 1253 1253 1253 106 107 FWAAAA UZGAAA OOOOxx
+521 4727 1 1 1 1 21 521 521 521 521 42 43 BUAAAA VZGAAA VVVVxx
+8739 4728 1 3 9 19 39 739 739 3739 8739 78 79 DYAAAA WZGAAA AAAAxx
+908 4729 0 0 8 8 8 908 908 908 908 16 17 YIAAAA XZGAAA HHHHxx
+6937 4730 1 1 7 17 37 937 937 1937 6937 74 75 VGAAAA YZGAAA OOOOxx
+4515 4731 1 3 5 15 15 515 515 4515 4515 30 31 RRAAAA ZZGAAA VVVVxx
+8630 4732 0 2 0 10 30 630 630 3630 8630 60 61 YTAAAA AAHAAA AAAAxx
+7518 4733 0 2 8 18 18 518 1518 2518 7518 36 37 EDAAAA BAHAAA HHHHxx
+8300 4734 0 0 0 0 0 300 300 3300 8300 0 1 GHAAAA CAHAAA OOOOxx
+8434 4735 0 2 4 14 34 434 434 3434 8434 68 69 KMAAAA DAHAAA VVVVxx
+6000 4736 0 0 0 0 0 0 0 1000 6000 0 1 UWAAAA EAHAAA AAAAxx
+4508 4737 0 0 8 8 8 508 508 4508 4508 16 17 KRAAAA FAHAAA HHHHxx
+7861 4738 1 1 1 1 61 861 1861 2861 7861 122 123 JQAAAA GAHAAA OOOOxx
+5953 4739 1 1 3 13 53 953 1953 953 5953 106 107 ZUAAAA HAHAAA VVVVxx
+5063 4740 1 3 3 3 63 63 1063 63 5063 126 127 TMAAAA IAHAAA AAAAxx
+4501 4741 1 1 1 1 1 501 501 4501 4501 2 3 DRAAAA JAHAAA HHHHxx
+7092 4742 0 0 2 12 92 92 1092 2092 7092 184 185 UMAAAA KAHAAA OOOOxx
+4388 4743 0 0 8 8 88 388 388 4388 4388 176 177 UMAAAA LAHAAA VVVVxx
+1826 4744 0 2 6 6 26 826 1826 1826 1826 52 53 GSAAAA MAHAAA AAAAxx
+568 4745 0 0 8 8 68 568 568 568 568 136 137 WVAAAA NAHAAA HHHHxx
+8184 4746 0 0 4 4 84 184 184 3184 8184 168 169 UCAAAA OAHAAA OOOOxx
+4268 4747 0 0 8 8 68 268 268 4268 4268 136 137 EIAAAA PAHAAA VVVVxx
+5798 4748 0 2 8 18 98 798 1798 798 5798 196 197 APAAAA QAHAAA AAAAxx
+5190 4749 0 2 0 10 90 190 1190 190 5190 180 181 QRAAAA RAHAAA HHHHxx
+1298 4750 0 2 8 18 98 298 1298 1298 1298 196 197 YXAAAA SAHAAA OOOOxx
+4035 4751 1 3 5 15 35 35 35 4035 4035 70 71 FZAAAA TAHAAA VVVVxx
+4504 4752 0 0 4 4 4 504 504 4504 4504 8 9 GRAAAA UAHAAA AAAAxx
+5992 4753 0 0 2 12 92 992 1992 992 5992 184 185 MWAAAA VAHAAA HHHHxx
+770 4754 0 2 0 10 70 770 770 770 770 140 141 QDAAAA WAHAAA OOOOxx
+7502 4755 0 2 2 2 2 502 1502 2502 7502 4 5 OCAAAA XAHAAA VVVVxx
+824 4756 0 0 4 4 24 824 824 824 824 48 49 SFAAAA YAHAAA AAAAxx
+7716 4757 0 0 6 16 16 716 1716 2716 7716 32 33 UKAAAA ZAHAAA HHHHxx
+5749 4758 1 1 9 9 49 749 1749 749 5749 98 99 DNAAAA ABHAAA OOOOxx
+9814 4759 0 2 4 14 14 814 1814 4814 9814 28 29 MNAAAA BBHAAA VVVVxx
+350 4760 0 2 0 10 50 350 350 350 350 100 101 MNAAAA CBHAAA AAAAxx
+1390 4761 0 2 0 10 90 390 1390 1390 1390 180 181 MBAAAA DBHAAA HHHHxx
+6994 4762 0 2 4 14 94 994 994 1994 6994 188 189 AJAAAA EBHAAA OOOOxx
+3629 4763 1 1 9 9 29 629 1629 3629 3629 58 59 PJAAAA FBHAAA VVVVxx
+9937 4764 1 1 7 17 37 937 1937 4937 9937 74 75 FSAAAA GBHAAA AAAAxx
+5285 4765 1 1 5 5 85 285 1285 285 5285 170 171 HVAAAA HBHAAA HHHHxx
+3157 4766 1 1 7 17 57 157 1157 3157 3157 114 115 LRAAAA IBHAAA OOOOxx
+9549 4767 1 1 9 9 49 549 1549 4549 9549 98 99 HDAAAA JBHAAA VVVVxx
+4118 4768 0 2 8 18 18 118 118 4118 4118 36 37 KCAAAA KBHAAA AAAAxx
+756 4769 0 0 6 16 56 756 756 756 756 112 113 CDAAAA LBHAAA HHHHxx
+5964 4770 0 0 4 4 64 964 1964 964 5964 128 129 KVAAAA MBHAAA OOOOxx
+7701 4771 1 1 1 1 1 701 1701 2701 7701 2 3 FKAAAA NBHAAA VVVVxx
+1242 4772 0 2 2 2 42 242 1242 1242 1242 84 85 UVAAAA OBHAAA AAAAxx
+7890 4773 0 2 0 10 90 890 1890 2890 7890 180 181 MRAAAA PBHAAA HHHHxx
+1991 4774 1 3 1 11 91 991 1991 1991 1991 182 183 PYAAAA QBHAAA OOOOxx
+110 4775 0 2 0 10 10 110 110 110 110 20 21 GEAAAA RBHAAA VVVVxx
+9334 4776 0 2 4 14 34 334 1334 4334 9334 68 69 AVAAAA SBHAAA AAAAxx
+6231 4777 1 3 1 11 31 231 231 1231 6231 62 63 RFAAAA TBHAAA HHHHxx
+9871 4778 1 3 1 11 71 871 1871 4871 9871 142 143 RPAAAA UBHAAA OOOOxx
+9471 4779 1 3 1 11 71 471 1471 4471 9471 142 143 HAAAAA VBHAAA VVVVxx
+2697 4780 1 1 7 17 97 697 697 2697 2697 194 195 TZAAAA WBHAAA AAAAxx
+4761 4781 1 1 1 1 61 761 761 4761 4761 122 123 DBAAAA XBHAAA HHHHxx
+8493 4782 1 1 3 13 93 493 493 3493 8493 186 187 ROAAAA YBHAAA OOOOxx
+1045 4783 1 1 5 5 45 45 1045 1045 1045 90 91 FOAAAA ZBHAAA VVVVxx
+3403 4784 1 3 3 3 3 403 1403 3403 3403 6 7 XAAAAA ACHAAA AAAAxx
+9412 4785 0 0 2 12 12 412 1412 4412 9412 24 25 AYAAAA BCHAAA HHHHxx
+7652 4786 0 0 2 12 52 652 1652 2652 7652 104 105 IIAAAA CCHAAA OOOOxx
+5866 4787 0 2 6 6 66 866 1866 866 5866 132 133 QRAAAA DCHAAA VVVVxx
+6942 4788 0 2 2 2 42 942 942 1942 6942 84 85 AHAAAA ECHAAA AAAAxx
+9353 4789 1 1 3 13 53 353 1353 4353 9353 106 107 TVAAAA FCHAAA HHHHxx
+2600 4790 0 0 0 0 0 600 600 2600 2600 0 1 AWAAAA GCHAAA OOOOxx
+6971 4791 1 3 1 11 71 971 971 1971 6971 142 143 DIAAAA HCHAAA VVVVxx
+5391 4792 1 3 1 11 91 391 1391 391 5391 182 183 JZAAAA ICHAAA AAAAxx
+7654 4793 0 2 4 14 54 654 1654 2654 7654 108 109 KIAAAA JCHAAA HHHHxx
+1797 4794 1 1 7 17 97 797 1797 1797 1797 194 195 DRAAAA KCHAAA OOOOxx
+4530 4795 0 2 0 10 30 530 530 4530 4530 60 61 GSAAAA LCHAAA VVVVxx
+3130 4796 0 2 0 10 30 130 1130 3130 3130 60 61 KQAAAA MCHAAA AAAAxx
+9442 4797 0 2 2 2 42 442 1442 4442 9442 84 85 EZAAAA NCHAAA HHHHxx
+6659 4798 1 3 9 19 59 659 659 1659 6659 118 119 DWAAAA OCHAAA OOOOxx
+9714 4799 0 2 4 14 14 714 1714 4714 9714 28 29 QJAAAA PCHAAA VVVVxx
+3660 4800 0 0 0 0 60 660 1660 3660 3660 120 121 UKAAAA QCHAAA AAAAxx
+1906 4801 0 2 6 6 6 906 1906 1906 1906 12 13 IVAAAA RCHAAA HHHHxx
+7927 4802 1 3 7 7 27 927 1927 2927 7927 54 55 XSAAAA SCHAAA OOOOxx
+1767 4803 1 3 7 7 67 767 1767 1767 1767 134 135 ZPAAAA TCHAAA VVVVxx
+5523 4804 1 3 3 3 23 523 1523 523 5523 46 47 LEAAAA UCHAAA AAAAxx
+9289 4805 1 1 9 9 89 289 1289 4289 9289 178 179 HTAAAA VCHAAA HHHHxx
+2717 4806 1 1 7 17 17 717 717 2717 2717 34 35 NAAAAA WCHAAA OOOOxx
+4099 4807 1 3 9 19 99 99 99 4099 4099 198 199 RBAAAA XCHAAA VVVVxx
+4387 4808 1 3 7 7 87 387 387 4387 4387 174 175 TMAAAA YCHAAA AAAAxx
+8864 4809 0 0 4 4 64 864 864 3864 8864 128 129 YCAAAA ZCHAAA HHHHxx
+1774 4810 0 2 4 14 74 774 1774 1774 1774 148 149 GQAAAA ADHAAA OOOOxx
+6292 4811 0 0 2 12 92 292 292 1292 6292 184 185 AIAAAA BDHAAA VVVVxx
+847 4812 1 3 7 7 47 847 847 847 847 94 95 PGAAAA CDHAAA AAAAxx
+5954 4813 0 2 4 14 54 954 1954 954 5954 108 109 AVAAAA DDHAAA HHHHxx
+8032 4814 0 0 2 12 32 32 32 3032 8032 64 65 YWAAAA EDHAAA OOOOxx
+3295 4815 1 3 5 15 95 295 1295 3295 3295 190 191 TWAAAA FDHAAA VVVVxx
+8984 4816 0 0 4 4 84 984 984 3984 8984 168 169 OHAAAA GDHAAA AAAAxx
+7809 4817 1 1 9 9 9 809 1809 2809 7809 18 19 JOAAAA HDHAAA HHHHxx
+1670 4818 0 2 0 10 70 670 1670 1670 1670 140 141 GMAAAA IDHAAA OOOOxx
+7733 4819 1 1 3 13 33 733 1733 2733 7733 66 67 LLAAAA JDHAAA VVVVxx
+6187 4820 1 3 7 7 87 187 187 1187 6187 174 175 ZDAAAA KDHAAA AAAAxx
+9326 4821 0 2 6 6 26 326 1326 4326 9326 52 53 SUAAAA LDHAAA HHHHxx
+2493 4822 1 1 3 13 93 493 493 2493 2493 186 187 XRAAAA MDHAAA OOOOxx
+9512 4823 0 0 2 12 12 512 1512 4512 9512 24 25 WBAAAA NDHAAA VVVVxx
+4342 4824 0 2 2 2 42 342 342 4342 4342 84 85 ALAAAA ODHAAA AAAAxx
+5350 4825 0 2 0 10 50 350 1350 350 5350 100 101 UXAAAA PDHAAA HHHHxx
+6009 4826 1 1 9 9 9 9 9 1009 6009 18 19 DXAAAA QDHAAA OOOOxx
+1208 4827 0 0 8 8 8 208 1208 1208 1208 16 17 MUAAAA RDHAAA VVVVxx
+7014 4828 0 2 4 14 14 14 1014 2014 7014 28 29 UJAAAA SDHAAA AAAAxx
+2967 4829 1 3 7 7 67 967 967 2967 2967 134 135 DKAAAA TDHAAA HHHHxx
+5831 4830 1 3 1 11 31 831 1831 831 5831 62 63 HQAAAA UDHAAA OOOOxx
+3097 4831 1 1 7 17 97 97 1097 3097 3097 194 195 DPAAAA VDHAAA VVVVxx
+1528 4832 0 0 8 8 28 528 1528 1528 1528 56 57 UGAAAA WDHAAA AAAAxx
+6429 4833 1 1 9 9 29 429 429 1429 6429 58 59 HNAAAA XDHAAA HHHHxx
+7320 4834 0 0 0 0 20 320 1320 2320 7320 40 41 OVAAAA YDHAAA OOOOxx
+844 4835 0 0 4 4 44 844 844 844 844 88 89 MGAAAA ZDHAAA VVVVxx
+7054 4836 0 2 4 14 54 54 1054 2054 7054 108 109 ILAAAA AEHAAA AAAAxx
+1643 4837 1 3 3 3 43 643 1643 1643 1643 86 87 FLAAAA BEHAAA HHHHxx
+7626 4838 0 2 6 6 26 626 1626 2626 7626 52 53 IHAAAA CEHAAA OOOOxx
+8728 4839 0 0 8 8 28 728 728 3728 8728 56 57 SXAAAA DEHAAA VVVVxx
+8277 4840 1 1 7 17 77 277 277 3277 8277 154 155 JGAAAA EEHAAA AAAAxx
+189 4841 1 1 9 9 89 189 189 189 189 178 179 HHAAAA FEHAAA HHHHxx
+3717 4842 1 1 7 17 17 717 1717 3717 3717 34 35 ZMAAAA GEHAAA OOOOxx
+1020 4843 0 0 0 0 20 20 1020 1020 1020 40 41 GNAAAA HEHAAA VVVVxx
+9234 4844 0 2 4 14 34 234 1234 4234 9234 68 69 ERAAAA IEHAAA AAAAxx
+9541 4845 1 1 1 1 41 541 1541 4541 9541 82 83 ZCAAAA JEHAAA HHHHxx
+380 4846 0 0 0 0 80 380 380 380 380 160 161 QOAAAA KEHAAA OOOOxx
+397 4847 1 1 7 17 97 397 397 397 397 194 195 HPAAAA LEHAAA VVVVxx
+835 4848 1 3 5 15 35 835 835 835 835 70 71 DGAAAA MEHAAA AAAAxx
+347 4849 1 3 7
7 47 347 347 347 347 94 95 JNAAAA NEHAAA HHHHxx +2490 4850 0 2 0 10 90 490 490 2490 2490 180 181 URAAAA OEHAAA OOOOxx +605 4851 1 1 5 5 5 605 605 605 605 10 11 HXAAAA PEHAAA VVVVxx +7960 4852 0 0 0 0 60 960 1960 2960 7960 120 121 EUAAAA QEHAAA AAAAxx +9681 4853 1 1 1 1 81 681 1681 4681 9681 162 163 JIAAAA REHAAA HHHHxx +5753 4854 1 1 3 13 53 753 1753 753 5753 106 107 HNAAAA SEHAAA OOOOxx +1676 4855 0 0 6 16 76 676 1676 1676 1676 152 153 MMAAAA TEHAAA VVVVxx +5533 4856 1 1 3 13 33 533 1533 533 5533 66 67 VEAAAA UEHAAA AAAAxx +8958 4857 0 2 8 18 58 958 958 3958 8958 116 117 OGAAAA VEHAAA HHHHxx +664 4858 0 0 4 4 64 664 664 664 664 128 129 OZAAAA WEHAAA OOOOxx +3005 4859 1 1 5 5 5 5 1005 3005 3005 10 11 PLAAAA XEHAAA VVVVxx +8576 4860 0 0 6 16 76 576 576 3576 8576 152 153 WRAAAA YEHAAA AAAAxx +7304 4861 0 0 4 4 4 304 1304 2304 7304 8 9 YUAAAA ZEHAAA HHHHxx +3375 4862 1 3 5 15 75 375 1375 3375 3375 150 151 VZAAAA AFHAAA OOOOxx +6336 4863 0 0 6 16 36 336 336 1336 6336 72 73 SJAAAA BFHAAA VVVVxx +1392 4864 0 0 2 12 92 392 1392 1392 1392 184 185 OBAAAA CFHAAA AAAAxx +2925 4865 1 1 5 5 25 925 925 2925 2925 50 51 NIAAAA DFHAAA HHHHxx +1217 4866 1 1 7 17 17 217 1217 1217 1217 34 35 VUAAAA EFHAAA OOOOxx +3714 4867 0 2 4 14 14 714 1714 3714 3714 28 29 WMAAAA FFHAAA VVVVxx +2120 4868 0 0 0 0 20 120 120 2120 2120 40 41 ODAAAA GFHAAA AAAAxx +2845 4869 1 1 5 5 45 845 845 2845 2845 90 91 LFAAAA HFHAAA HHHHxx +3865 4870 1 1 5 5 65 865 1865 3865 3865 130 131 RSAAAA IFHAAA OOOOxx +124 4871 0 0 4 4 24 124 124 124 124 48 49 UEAAAA JFHAAA VVVVxx +865 4872 1 1 5 5 65 865 865 865 865 130 131 HHAAAA KFHAAA AAAAxx +9361 4873 1 1 1 1 61 361 1361 4361 9361 122 123 BWAAAA LFHAAA HHHHxx +6338 4874 0 2 8 18 38 338 338 1338 6338 76 77 UJAAAA MFHAAA OOOOxx +7330 4875 0 2 0 10 30 330 1330 2330 7330 60 61 YVAAAA NFHAAA VVVVxx +513 4876 1 1 3 13 13 513 513 513 513 26 27 TTAAAA OFHAAA AAAAxx +5001 4877 1 1 1 1 1 1 1001 1 5001 2 3 JKAAAA PFHAAA HHHHxx +549 4878 1 1 9 9 49 549 549 549 549 98 99 DVAAAA QFHAAA OOOOxx +1808 4879 0 0 8 8 8 808 1808 1808 1808 16 17 ORAAAA RFHAAA VVVVxx +7168 4880 0 0 8 8 68 168 1168 2168 7168 136 137 SPAAAA SFHAAA AAAAxx +9878 4881 0 2 8 18 78 878 1878 4878 9878 156 157 YPAAAA TFHAAA HHHHxx +233 4882 1 1 3 13 33 233 233 233 233 66 67 ZIAAAA UFHAAA OOOOxx +4262 4883 0 2 2 2 62 262 262 4262 4262 124 125 YHAAAA VFHAAA VVVVxx +7998 4884 0 2 8 18 98 998 1998 2998 7998 196 197 QVAAAA WFHAAA AAAAxx +2419 4885 1 3 9 19 19 419 419 2419 2419 38 39 BPAAAA XFHAAA HHHHxx +9960 4886 0 0 0 0 60 960 1960 4960 9960 120 121 CTAAAA YFHAAA OOOOxx +3523 4887 1 3 3 3 23 523 1523 3523 3523 46 47 NFAAAA ZFHAAA VVVVxx +5440 4888 0 0 0 0 40 440 1440 440 5440 80 81 GBAAAA AGHAAA AAAAxx +3030 4889 0 2 0 10 30 30 1030 3030 3030 60 61 OMAAAA BGHAAA HHHHxx +2745 4890 1 1 5 5 45 745 745 2745 2745 90 91 PBAAAA CGHAAA OOOOxx +7175 4891 1 3 5 15 75 175 1175 2175 7175 150 151 ZPAAAA DGHAAA VVVVxx +640 4892 0 0 0 0 40 640 640 640 640 80 81 QYAAAA EGHAAA AAAAxx +1798 4893 0 2 8 18 98 798 1798 1798 1798 196 197 ERAAAA FGHAAA HHHHxx +7499 4894 1 3 9 19 99 499 1499 2499 7499 198 199 LCAAAA GGHAAA OOOOxx +1924 4895 0 0 4 4 24 924 1924 1924 1924 48 49 AWAAAA HGHAAA VVVVxx +1327 4896 1 3 7 7 27 327 1327 1327 1327 54 55 BZAAAA IGHAAA AAAAxx +73 4897 1 1 3 13 73 73 73 73 73 146 147 VCAAAA JGHAAA HHHHxx +9558 4898 0 2 8 18 58 558 1558 4558 9558 116 117 QDAAAA KGHAAA OOOOxx +818 4899 0 2 8 18 18 818 818 818 818 36 37 MFAAAA LGHAAA VVVVxx +9916 4900 0 0 6 16 16 916 1916 4916 9916 32 33 KRAAAA MGHAAA AAAAxx +2978 4901 0 2 8 18 78 978 978 2978 2978 
156 157 OKAAAA NGHAAA HHHHxx +8469 4902 1 1 9 9 69 469 469 3469 8469 138 139 TNAAAA OGHAAA OOOOxx +9845 4903 1 1 5 5 45 845 1845 4845 9845 90 91 ROAAAA PGHAAA VVVVxx +2326 4904 0 2 6 6 26 326 326 2326 2326 52 53 MLAAAA QGHAAA AAAAxx +4032 4905 0 0 2 12 32 32 32 4032 4032 64 65 CZAAAA RGHAAA HHHHxx +5604 4906 0 0 4 4 4 604 1604 604 5604 8 9 OHAAAA SGHAAA OOOOxx +9610 4907 0 2 0 10 10 610 1610 4610 9610 20 21 QFAAAA TGHAAA VVVVxx +5101 4908 1 1 1 1 1 101 1101 101 5101 2 3 FOAAAA UGHAAA AAAAxx +7246 4909 0 2 6 6 46 246 1246 2246 7246 92 93 SSAAAA VGHAAA HHHHxx +1292 4910 0 0 2 12 92 292 1292 1292 1292 184 185 SXAAAA WGHAAA OOOOxx +6235 4911 1 3 5 15 35 235 235 1235 6235 70 71 VFAAAA XGHAAA VVVVxx +1733 4912 1 1 3 13 33 733 1733 1733 1733 66 67 ROAAAA YGHAAA AAAAxx +4647 4913 1 3 7 7 47 647 647 4647 4647 94 95 TWAAAA ZGHAAA HHHHxx +258 4914 0 2 8 18 58 258 258 258 258 116 117 YJAAAA AHHAAA OOOOxx +8438 4915 0 2 8 18 38 438 438 3438 8438 76 77 OMAAAA BHHAAA VVVVxx +7869 4916 1 1 9 9 69 869 1869 2869 7869 138 139 RQAAAA CHHAAA AAAAxx +9691 4917 1 3 1 11 91 691 1691 4691 9691 182 183 TIAAAA DHHAAA HHHHxx +5422 4918 0 2 2 2 22 422 1422 422 5422 44 45 OAAAAA EHHAAA OOOOxx +9630 4919 0 2 0 10 30 630 1630 4630 9630 60 61 KGAAAA FHHAAA VVVVxx +4439 4920 1 3 9 19 39 439 439 4439 4439 78 79 TOAAAA GHHAAA AAAAxx +3140 4921 0 0 0 0 40 140 1140 3140 3140 80 81 UQAAAA HHHAAA HHHHxx +9111 4922 1 3 1 11 11 111 1111 4111 9111 22 23 LMAAAA IHHAAA OOOOxx +4606 4923 0 2 6 6 6 606 606 4606 4606 12 13 EVAAAA JHHAAA VVVVxx +8620 4924 0 0 0 0 20 620 620 3620 8620 40 41 OTAAAA KHHAAA AAAAxx +7849 4925 1 1 9 9 49 849 1849 2849 7849 98 99 XPAAAA LHHAAA HHHHxx +346 4926 0 2 6 6 46 346 346 346 346 92 93 INAAAA MHHAAA OOOOxx +9528 4927 0 0 8 8 28 528 1528 4528 9528 56 57 MCAAAA NHHAAA VVVVxx +1811 4928 1 3 1 11 11 811 1811 1811 1811 22 23 RRAAAA OHHAAA AAAAxx +6068 4929 0 0 8 8 68 68 68 1068 6068 136 137 KZAAAA PHHAAA HHHHxx +6260 4930 0 0 0 0 60 260 260 1260 6260 120 121 UGAAAA QHHAAA OOOOxx +5909 4931 1 1 9 9 9 909 1909 909 5909 18 19 HTAAAA RHHAAA VVVVxx +4518 4932 0 2 8 18 18 518 518 4518 4518 36 37 URAAAA SHHAAA AAAAxx +7530 4933 0 2 0 10 30 530 1530 2530 7530 60 61 QDAAAA THHAAA HHHHxx +3900 4934 0 0 0 0 0 900 1900 3900 3900 0 1 AUAAAA UHHAAA OOOOxx +3969 4935 1 1 9 9 69 969 1969 3969 3969 138 139 RWAAAA VHHAAA VVVVxx +8690 4936 0 2 0 10 90 690 690 3690 8690 180 181 GWAAAA WHHAAA AAAAxx +5532 4937 0 0 2 12 32 532 1532 532 5532 64 65 UEAAAA XHHAAA HHHHxx +5989 4938 1 1 9 9 89 989 1989 989 5989 178 179 JWAAAA YHHAAA OOOOxx +1870 4939 0 2 0 10 70 870 1870 1870 1870 140 141 YTAAAA ZHHAAA VVVVxx +1113 4940 1 1 3 13 13 113 1113 1113 1113 26 27 VQAAAA AIHAAA AAAAxx +5155 4941 1 3 5 15 55 155 1155 155 5155 110 111 HQAAAA BIHAAA HHHHxx +7460 4942 0 0 0 0 60 460 1460 2460 7460 120 121 YAAAAA CIHAAA OOOOxx +6217 4943 1 1 7 17 17 217 217 1217 6217 34 35 DFAAAA DIHAAA VVVVxx +8333 4944 1 1 3 13 33 333 333 3333 8333 66 67 NIAAAA EIHAAA AAAAxx +6341 4945 1 1 1 1 41 341 341 1341 6341 82 83 XJAAAA FIHAAA HHHHxx +6230 4946 0 2 0 10 30 230 230 1230 6230 60 61 QFAAAA GIHAAA OOOOxx +6902 4947 0 2 2 2 2 902 902 1902 6902 4 5 MFAAAA HIHAAA VVVVxx +670 4948 0 2 0 10 70 670 670 670 670 140 141 UZAAAA IIHAAA AAAAxx +805 4949 1 1 5 5 5 805 805 805 805 10 11 ZEAAAA JIHAAA HHHHxx +1340 4950 0 0 0 0 40 340 1340 1340 1340 80 81 OZAAAA KIHAAA OOOOxx +8649 4951 1 1 9 9 49 649 649 3649 8649 98 99 RUAAAA LIHAAA VVVVxx +3887 4952 1 3 7 7 87 887 1887 3887 3887 174 175 NTAAAA MIHAAA AAAAxx +5400 4953 0 0 0 0 0 400 1400 400 5400 0 1 SZAAAA NIHAAA HHHHxx 
+4354 4954 0 2 4 14 54 354 354 4354 4354 108 109 MLAAAA OIHAAA OOOOxx +950 4955 0 2 0 10 50 950 950 950 950 100 101 OKAAAA PIHAAA VVVVxx +1544 4956 0 0 4 4 44 544 1544 1544 1544 88 89 KHAAAA QIHAAA AAAAxx +3898 4957 0 2 8 18 98 898 1898 3898 3898 196 197 YTAAAA RIHAAA HHHHxx +8038 4958 0 2 8 18 38 38 38 3038 8038 76 77 EXAAAA SIHAAA OOOOxx +1095 4959 1 3 5 15 95 95 1095 1095 1095 190 191 DQAAAA TIHAAA VVVVxx +1748 4960 0 0 8 8 48 748 1748 1748 1748 96 97 GPAAAA UIHAAA AAAAxx +9154 4961 0 2 4 14 54 154 1154 4154 9154 108 109 COAAAA VIHAAA HHHHxx +2182 4962 0 2 2 2 82 182 182 2182 2182 164 165 YFAAAA WIHAAA OOOOxx +6797 4963 1 1 7 17 97 797 797 1797 6797 194 195 LBAAAA XIHAAA VVVVxx +9149 4964 1 1 9 9 49 149 1149 4149 9149 98 99 XNAAAA YIHAAA AAAAxx +7351 4965 1 3 1 11 51 351 1351 2351 7351 102 103 TWAAAA ZIHAAA HHHHxx +2820 4966 0 0 0 0 20 820 820 2820 2820 40 41 MEAAAA AJHAAA OOOOxx +9696 4967 0 0 6 16 96 696 1696 4696 9696 192 193 YIAAAA BJHAAA VVVVxx +253 4968 1 1 3 13 53 253 253 253 253 106 107 TJAAAA CJHAAA AAAAxx +3600 4969 0 0 0 0 0 600 1600 3600 3600 0 1 MIAAAA DJHAAA HHHHxx +3892 4970 0 0 2 12 92 892 1892 3892 3892 184 185 STAAAA EJHAAA OOOOxx +231 4971 1 3 1 11 31 231 231 231 231 62 63 XIAAAA FJHAAA VVVVxx +8331 4972 1 3 1 11 31 331 331 3331 8331 62 63 LIAAAA GJHAAA AAAAxx +403 4973 1 3 3 3 3 403 403 403 403 6 7 NPAAAA HJHAAA HHHHxx +8642 4974 0 2 2 2 42 642 642 3642 8642 84 85 KUAAAA IJHAAA OOOOxx +3118 4975 0 2 8 18 18 118 1118 3118 3118 36 37 YPAAAA JJHAAA VVVVxx +3835 4976 1 3 5 15 35 835 1835 3835 3835 70 71 NRAAAA KJHAAA AAAAxx +1117 4977 1 1 7 17 17 117 1117 1117 1117 34 35 ZQAAAA LJHAAA HHHHxx +7024 4978 0 0 4 4 24 24 1024 2024 7024 48 49 EKAAAA MJHAAA OOOOxx +2636 4979 0 0 6 16 36 636 636 2636 2636 72 73 KXAAAA NJHAAA VVVVxx +3778 4980 0 2 8 18 78 778 1778 3778 3778 156 157 IPAAAA OJHAAA AAAAxx +2003 4981 1 3 3 3 3 3 3 2003 2003 6 7 BZAAAA PJHAAA HHHHxx +5717 4982 1 1 7 17 17 717 1717 717 5717 34 35 XLAAAA QJHAAA OOOOxx +4869 4983 1 1 9 9 69 869 869 4869 4869 138 139 HFAAAA RJHAAA VVVVxx +8921 4984 1 1 1 1 21 921 921 3921 8921 42 43 DFAAAA SJHAAA AAAAxx +888 4985 0 0 8 8 88 888 888 888 888 176 177 EIAAAA TJHAAA HHHHxx +7599 4986 1 3 9 19 99 599 1599 2599 7599 198 199 HGAAAA UJHAAA OOOOxx +8621 4987 1 1 1 1 21 621 621 3621 8621 42 43 PTAAAA VJHAAA VVVVxx +811 4988 1 3 1 11 11 811 811 811 811 22 23 FFAAAA WJHAAA AAAAxx +9147 4989 1 3 7 7 47 147 1147 4147 9147 94 95 VNAAAA XJHAAA HHHHxx +1413 4990 1 1 3 13 13 413 1413 1413 1413 26 27 JCAAAA YJHAAA OOOOxx +5232 4991 0 0 2 12 32 232 1232 232 5232 64 65 GTAAAA ZJHAAA VVVVxx +5912 4992 0 0 2 12 12 912 1912 912 5912 24 25 KTAAAA AKHAAA AAAAxx +3418 4993 0 2 8 18 18 418 1418 3418 3418 36 37 MBAAAA BKHAAA HHHHxx +3912 4994 0 0 2 12 12 912 1912 3912 3912 24 25 MUAAAA CKHAAA OOOOxx +9576 4995 0 0 6 16 76 576 1576 4576 9576 152 153 IEAAAA DKHAAA VVVVxx +4225 4996 1 1 5 5 25 225 225 4225 4225 50 51 NGAAAA EKHAAA AAAAxx +8222 4997 0 2 2 2 22 222 222 3222 8222 44 45 GEAAAA FKHAAA HHHHxx +7013 4998 1 1 3 13 13 13 1013 2013 7013 26 27 TJAAAA GKHAAA OOOOxx +7037 4999 1 1 7 17 37 37 1037 2037 7037 74 75 RKAAAA HKHAAA VVVVxx +1205 5000 1 1 5 5 5 205 1205 1205 1205 10 11 JUAAAA IKHAAA AAAAxx +8114 5001 0 2 4 14 14 114 114 3114 8114 28 29 CAAAAA JKHAAA HHHHxx +6585 5002 1 1 5 5 85 585 585 1585 6585 170 171 HTAAAA KKHAAA OOOOxx +155 5003 1 3 5 15 55 155 155 155 155 110 111 ZFAAAA LKHAAA VVVVxx +2841 5004 1 1 1 1 41 841 841 2841 2841 82 83 HFAAAA MKHAAA AAAAxx +1996 5005 0 0 6 16 96 996 1996 1996 1996 192 193 UYAAAA NKHAAA HHHHxx +4948 5006 0 0 8 
8 48 948 948 4948 4948 96 97 IIAAAA OKHAAA OOOOxx +3304 5007 0 0 4 4 4 304 1304 3304 3304 8 9 CXAAAA PKHAAA VVVVxx +5684 5008 0 0 4 4 84 684 1684 684 5684 168 169 QKAAAA QKHAAA AAAAxx +6962 5009 0 2 2 2 62 962 962 1962 6962 124 125 UHAAAA RKHAAA HHHHxx +8691 5010 1 3 1 11 91 691 691 3691 8691 182 183 HWAAAA SKHAAA OOOOxx +8501 5011 1 1 1 1 1 501 501 3501 8501 2 3 ZOAAAA TKHAAA VVVVxx +4783 5012 1 3 3 3 83 783 783 4783 4783 166 167 ZBAAAA UKHAAA AAAAxx +3762 5013 0 2 2 2 62 762 1762 3762 3762 124 125 SOAAAA VKHAAA HHHHxx +4534 5014 0 2 4 14 34 534 534 4534 4534 68 69 KSAAAA WKHAAA OOOOxx +4999 5015 1 3 9 19 99 999 999 4999 4999 198 199 HKAAAA XKHAAA VVVVxx +4618 5016 0 2 8 18 18 618 618 4618 4618 36 37 QVAAAA YKHAAA AAAAxx +4220 5017 0 0 0 0 20 220 220 4220 4220 40 41 IGAAAA ZKHAAA HHHHxx +3384 5018 0 0 4 4 84 384 1384 3384 3384 168 169 EAAAAA ALHAAA OOOOxx +3036 5019 0 0 6 16 36 36 1036 3036 3036 72 73 UMAAAA BLHAAA VVVVxx +545 5020 1 1 5 5 45 545 545 545 545 90 91 ZUAAAA CLHAAA AAAAxx +9946 5021 0 2 6 6 46 946 1946 4946 9946 92 93 OSAAAA DLHAAA HHHHxx +1985 5022 1 1 5 5 85 985 1985 1985 1985 170 171 JYAAAA ELHAAA OOOOxx +2310 5023 0 2 0 10 10 310 310 2310 2310 20 21 WKAAAA FLHAAA VVVVxx +6563 5024 1 3 3 3 63 563 563 1563 6563 126 127 LSAAAA GLHAAA AAAAxx +4886 5025 0 2 6 6 86 886 886 4886 4886 172 173 YFAAAA HLHAAA HHHHxx +9359 5026 1 3 9 19 59 359 1359 4359 9359 118 119 ZVAAAA ILHAAA OOOOxx +400 5027 0 0 0 0 0 400 400 400 400 0 1 KPAAAA JLHAAA VVVVxx +9742 5028 0 2 2 2 42 742 1742 4742 9742 84 85 SKAAAA KLHAAA AAAAxx +6736 5029 0 0 6 16 36 736 736 1736 6736 72 73 CZAAAA LLHAAA HHHHxx +8166 5030 0 2 6 6 66 166 166 3166 8166 132 133 CCAAAA MLHAAA OOOOxx +861 5031 1 1 1 1 61 861 861 861 861 122 123 DHAAAA NLHAAA VVVVxx +7492 5032 0 0 2 12 92 492 1492 2492 7492 184 185 ECAAAA OLHAAA AAAAxx +1155 5033 1 3 5 15 55 155 1155 1155 1155 110 111 LSAAAA PLHAAA HHHHxx +9769 5034 1 1 9 9 69 769 1769 4769 9769 138 139 TLAAAA QLHAAA OOOOxx +6843 5035 1 3 3 3 43 843 843 1843 6843 86 87 FDAAAA RLHAAA VVVVxx +5625 5036 1 1 5 5 25 625 1625 625 5625 50 51 JIAAAA SLHAAA AAAAxx +1910 5037 0 2 0 10 10 910 1910 1910 1910 20 21 MVAAAA TLHAAA HHHHxx +9796 5038 0 0 6 16 96 796 1796 4796 9796 192 193 UMAAAA ULHAAA OOOOxx +6950 5039 0 2 0 10 50 950 950 1950 6950 100 101 IHAAAA VLHAAA VVVVxx +3084 5040 0 0 4 4 84 84 1084 3084 3084 168 169 QOAAAA WLHAAA AAAAxx +2959 5041 1 3 9 19 59 959 959 2959 2959 118 119 VJAAAA XLHAAA HHHHxx +2093 5042 1 1 3 13 93 93 93 2093 2093 186 187 NCAAAA YLHAAA OOOOxx +2738 5043 0 2 8 18 38 738 738 2738 2738 76 77 IBAAAA ZLHAAA VVVVxx +6406 5044 0 2 6 6 6 406 406 1406 6406 12 13 KMAAAA AMHAAA AAAAxx +9082 5045 0 2 2 2 82 82 1082 4082 9082 164 165 ILAAAA BMHAAA HHHHxx +8568 5046 0 0 8 8 68 568 568 3568 8568 136 137 ORAAAA CMHAAA OOOOxx +3566 5047 0 2 6 6 66 566 1566 3566 3566 132 133 EHAAAA DMHAAA VVVVxx +3016 5048 0 0 6 16 16 16 1016 3016 3016 32 33 AMAAAA EMHAAA AAAAxx +1207 5049 1 3 7 7 7 207 1207 1207 1207 14 15 LUAAAA FMHAAA HHHHxx +4045 5050 1 1 5 5 45 45 45 4045 4045 90 91 PZAAAA GMHAAA OOOOxx +4173 5051 1 1 3 13 73 173 173 4173 4173 146 147 NEAAAA HMHAAA VVVVxx +3939 5052 1 3 9 19 39 939 1939 3939 3939 78 79 NVAAAA IMHAAA AAAAxx +9683 5053 1 3 3 3 83 683 1683 4683 9683 166 167 LIAAAA JMHAAA HHHHxx +1684 5054 0 0 4 4 84 684 1684 1684 1684 168 169 UMAAAA KMHAAA OOOOxx +9271 5055 1 3 1 11 71 271 1271 4271 9271 142 143 PSAAAA LMHAAA VVVVxx +9317 5056 1 1 7 17 17 317 1317 4317 9317 34 35 JUAAAA MMHAAA AAAAxx +5793 5057 1 1 3 13 93 793 1793 793 5793 186 187 VOAAAA NMHAAA HHHHxx +352 5058 0 
0 2 12 52 352 352 352 352 104 105 ONAAAA OMHAAA OOOOxx +7328 5059 0 0 8 8 28 328 1328 2328 7328 56 57 WVAAAA PMHAAA VVVVxx +4582 5060 0 2 2 2 82 582 582 4582 4582 164 165 GUAAAA QMHAAA AAAAxx +7413 5061 1 1 3 13 13 413 1413 2413 7413 26 27 DZAAAA RMHAAA HHHHxx +6772 5062 0 0 2 12 72 772 772 1772 6772 144 145 MAAAAA SMHAAA OOOOxx +4973 5063 1 1 3 13 73 973 973 4973 4973 146 147 HJAAAA TMHAAA VVVVxx +7480 5064 0 0 0 0 80 480 1480 2480 7480 160 161 SBAAAA UMHAAA AAAAxx +5555 5065 1 3 5 15 55 555 1555 555 5555 110 111 RFAAAA VMHAAA HHHHxx +4227 5066 1 3 7 7 27 227 227 4227 4227 54 55 PGAAAA WMHAAA OOOOxx +4153 5067 1 1 3 13 53 153 153 4153 4153 106 107 TDAAAA XMHAAA VVVVxx +4601 5068 1 1 1 1 1 601 601 4601 4601 2 3 ZUAAAA YMHAAA AAAAxx +3782 5069 0 2 2 2 82 782 1782 3782 3782 164 165 MPAAAA ZMHAAA HHHHxx +3872 5070 0 0 2 12 72 872 1872 3872 3872 144 145 YSAAAA ANHAAA OOOOxx +893 5071 1 1 3 13 93 893 893 893 893 186 187 JIAAAA BNHAAA VVVVxx +2430 5072 0 2 0 10 30 430 430 2430 2430 60 61 MPAAAA CNHAAA AAAAxx +2591 5073 1 3 1 11 91 591 591 2591 2591 182 183 RVAAAA DNHAAA HHHHxx +264 5074 0 0 4 4 64 264 264 264 264 128 129 EKAAAA ENHAAA OOOOxx +6238 5075 0 2 8 18 38 238 238 1238 6238 76 77 YFAAAA FNHAAA VVVVxx +633 5076 1 1 3 13 33 633 633 633 633 66 67 JYAAAA GNHAAA AAAAxx +1029 5077 1 1 9 9 29 29 1029 1029 1029 58 59 PNAAAA HNHAAA HHHHxx +5934 5078 0 2 4 14 34 934 1934 934 5934 68 69 GUAAAA INHAAA OOOOxx +8694 5079 0 2 4 14 94 694 694 3694 8694 188 189 KWAAAA JNHAAA VVVVxx +7401 5080 1 1 1 1 1 401 1401 2401 7401 2 3 RYAAAA KNHAAA AAAAxx +1165 5081 1 1 5 5 65 165 1165 1165 1165 130 131 VSAAAA LNHAAA HHHHxx +9438 5082 0 2 8 18 38 438 1438 4438 9438 76 77 AZAAAA MNHAAA OOOOxx +4790 5083 0 2 0 10 90 790 790 4790 4790 180 181 GCAAAA NNHAAA VVVVxx +4531 5084 1 3 1 11 31 531 531 4531 4531 62 63 HSAAAA ONHAAA AAAAxx +6099 5085 1 3 9 19 99 99 99 1099 6099 198 199 PAAAAA PNHAAA HHHHxx +8236 5086 0 0 6 16 36 236 236 3236 8236 72 73 UEAAAA QNHAAA OOOOxx +8551 5087 1 3 1 11 51 551 551 3551 8551 102 103 XQAAAA RNHAAA VVVVxx +3128 5088 0 0 8 8 28 128 1128 3128 3128 56 57 IQAAAA SNHAAA AAAAxx +3504 5089 0 0 4 4 4 504 1504 3504 3504 8 9 UEAAAA TNHAAA HHHHxx +9071 5090 1 3 1 11 71 71 1071 4071 9071 142 143 XKAAAA UNHAAA OOOOxx +5930 5091 0 2 0 10 30 930 1930 930 5930 60 61 CUAAAA VNHAAA VVVVxx +6825 5092 1 1 5 5 25 825 825 1825 6825 50 51 NCAAAA WNHAAA AAAAxx +2218 5093 0 2 8 18 18 218 218 2218 2218 36 37 IHAAAA XNHAAA HHHHxx +3604 5094 0 0 4 4 4 604 1604 3604 3604 8 9 QIAAAA YNHAAA OOOOxx +5761 5095 1 1 1 1 61 761 1761 761 5761 122 123 PNAAAA ZNHAAA VVVVxx +5414 5096 0 2 4 14 14 414 1414 414 5414 28 29 GAAAAA AOHAAA AAAAxx +5892 5097 0 0 2 12 92 892 1892 892 5892 184 185 QSAAAA BOHAAA HHHHxx +4080 5098 0 0 0 0 80 80 80 4080 4080 160 161 YAAAAA COHAAA OOOOxx +8018 5099 0 2 8 18 18 18 18 3018 8018 36 37 KWAAAA DOHAAA VVVVxx +1757 5100 1 1 7 17 57 757 1757 1757 1757 114 115 PPAAAA EOHAAA AAAAxx +5854 5101 0 2 4 14 54 854 1854 854 5854 108 109 ERAAAA FOHAAA HHHHxx +1335 5102 1 3 5 15 35 335 1335 1335 1335 70 71 JZAAAA GOHAAA OOOOxx +3811 5103 1 3 1 11 11 811 1811 3811 3811 22 23 PQAAAA HOHAAA VVVVxx +9917 5104 1 1 7 17 17 917 1917 4917 9917 34 35 LRAAAA IOHAAA AAAAxx +5947 5105 1 3 7 7 47 947 1947 947 5947 94 95 TUAAAA JOHAAA HHHHxx +7263 5106 1 3 3 3 63 263 1263 2263 7263 126 127 JTAAAA KOHAAA OOOOxx +1730 5107 0 2 0 10 30 730 1730 1730 1730 60 61 OOAAAA LOHAAA VVVVxx +5747 5108 1 3 7 7 47 747 1747 747 5747 94 95 BNAAAA MOHAAA AAAAxx +3876 5109 0 0 6 16 76 876 1876 3876 3876 152 153 CTAAAA NOHAAA HHHHxx +2762 5110 
0 2 2 2 62 762 762 2762 2762 124 125 GCAAAA OOHAAA OOOOxx +7613 5111 1 1 3 13 13 613 1613 2613 7613 26 27 VGAAAA POHAAA VVVVxx +152 5112 0 0 2 12 52 152 152 152 152 104 105 WFAAAA QOHAAA AAAAxx +3941 5113 1 1 1 1 41 941 1941 3941 3941 82 83 PVAAAA ROHAAA HHHHxx +5614 5114 0 2 4 14 14 614 1614 614 5614 28 29 YHAAAA SOHAAA OOOOxx +9279 5115 1 3 9 19 79 279 1279 4279 9279 158 159 XSAAAA TOHAAA VVVVxx +3048 5116 0 0 8 8 48 48 1048 3048 3048 96 97 GNAAAA UOHAAA AAAAxx +6152 5117 0 0 2 12 52 152 152 1152 6152 104 105 QCAAAA VOHAAA HHHHxx +5481 5118 1 1 1 1 81 481 1481 481 5481 162 163 VCAAAA WOHAAA OOOOxx +4675 5119 1 3 5 15 75 675 675 4675 4675 150 151 VXAAAA XOHAAA VVVVxx +3334 5120 0 2 4 14 34 334 1334 3334 3334 68 69 GYAAAA YOHAAA AAAAxx +4691 5121 1 3 1 11 91 691 691 4691 4691 182 183 LYAAAA ZOHAAA HHHHxx +803 5122 1 3 3 3 3 803 803 803 803 6 7 XEAAAA APHAAA OOOOxx +5409 5123 1 1 9 9 9 409 1409 409 5409 18 19 BAAAAA BPHAAA VVVVxx +1054 5124 0 2 4 14 54 54 1054 1054 1054 108 109 OOAAAA CPHAAA AAAAxx +103 5125 1 3 3 3 3 103 103 103 103 6 7 ZDAAAA DPHAAA HHHHxx +8565 5126 1 1 5 5 65 565 565 3565 8565 130 131 LRAAAA EPHAAA OOOOxx +4666 5127 0 2 6 6 66 666 666 4666 4666 132 133 MXAAAA FPHAAA VVVVxx +6634 5128 0 2 4 14 34 634 634 1634 6634 68 69 EVAAAA GPHAAA AAAAxx +5538 5129 0 2 8 18 38 538 1538 538 5538 76 77 AFAAAA HPHAAA HHHHxx +3789 5130 1 1 9 9 89 789 1789 3789 3789 178 179 TPAAAA IPHAAA OOOOxx +4641 5131 1 1 1 1 41 641 641 4641 4641 82 83 NWAAAA JPHAAA VVVVxx +2458 5132 0 2 8 18 58 458 458 2458 2458 116 117 OQAAAA KPHAAA AAAAxx +5667 5133 1 3 7 7 67 667 1667 667 5667 134 135 ZJAAAA LPHAAA HHHHxx +6524 5134 0 0 4 4 24 524 524 1524 6524 48 49 YQAAAA MPHAAA OOOOxx +9179 5135 1 3 9 19 79 179 1179 4179 9179 158 159 BPAAAA NPHAAA VVVVxx +6358 5136 0 2 8 18 58 358 358 1358 6358 116 117 OKAAAA OPHAAA AAAAxx +6668 5137 0 0 8 8 68 668 668 1668 6668 136 137 MWAAAA PPHAAA HHHHxx +6414 5138 0 2 4 14 14 414 414 1414 6414 28 29 SMAAAA QPHAAA OOOOxx +2813 5139 1 1 3 13 13 813 813 2813 2813 26 27 FEAAAA RPHAAA VVVVxx +8927 5140 1 3 7 7 27 927 927 3927 8927 54 55 JFAAAA SPHAAA AAAAxx +8695 5141 1 3 5 15 95 695 695 3695 8695 190 191 LWAAAA TPHAAA HHHHxx +363 5142 1 3 3 3 63 363 363 363 363 126 127 ZNAAAA UPHAAA OOOOxx +9966 5143 0 2 6 6 66 966 1966 4966 9966 132 133 ITAAAA VPHAAA VVVVxx +1323 5144 1 3 3 3 23 323 1323 1323 1323 46 47 XYAAAA WPHAAA AAAAxx +8211 5145 1 3 1 11 11 211 211 3211 8211 22 23 VDAAAA XPHAAA HHHHxx +4375 5146 1 3 5 15 75 375 375 4375 4375 150 151 HMAAAA YPHAAA OOOOxx +3257 5147 1 1 7 17 57 257 1257 3257 3257 114 115 HVAAAA ZPHAAA VVVVxx +6239 5148 1 3 9 19 39 239 239 1239 6239 78 79 ZFAAAA AQHAAA AAAAxx +3602 5149 0 2 2 2 2 602 1602 3602 3602 4 5 OIAAAA BQHAAA HHHHxx +9830 5150 0 2 0 10 30 830 1830 4830 9830 60 61 COAAAA CQHAAA OOOOxx +7826 5151 0 2 6 6 26 826 1826 2826 7826 52 53 APAAAA DQHAAA VVVVxx +2108 5152 0 0 8 8 8 108 108 2108 2108 16 17 CDAAAA EQHAAA AAAAxx +7245 5153 1 1 5 5 45 245 1245 2245 7245 90 91 RSAAAA FQHAAA HHHHxx +8330 5154 0 2 0 10 30 330 330 3330 8330 60 61 KIAAAA GQHAAA OOOOxx +7441 5155 1 1 1 1 41 441 1441 2441 7441 82 83 FAAAAA HQHAAA VVVVxx +9848 5156 0 0 8 8 48 848 1848 4848 9848 96 97 UOAAAA IQHAAA AAAAxx +1226 5157 0 2 6 6 26 226 1226 1226 1226 52 53 EVAAAA JQHAAA HHHHxx +414 5158 0 2 4 14 14 414 414 414 414 28 29 YPAAAA KQHAAA OOOOxx +1273 5159 1 1 3 13 73 273 1273 1273 1273 146 147 ZWAAAA LQHAAA VVVVxx +9866 5160 0 2 6 6 66 866 1866 4866 9866 132 133 MPAAAA MQHAAA AAAAxx +4633 5161 1 1 3 13 33 633 633 4633 4633 66 67 FWAAAA NQHAAA HHHHxx +8727 5162 1 3 7 
7 27 727 727 3727 8727 54 55 RXAAAA OQHAAA OOOOxx +5308 5163 0 0 8 8 8 308 1308 308 5308 16 17 EWAAAA PQHAAA VVVVxx +1395 5164 1 3 5 15 95 395 1395 1395 1395 190 191 RBAAAA QQHAAA AAAAxx +1825 5165 1 1 5 5 25 825 1825 1825 1825 50 51 FSAAAA RQHAAA HHHHxx +7606 5166 0 2 6 6 6 606 1606 2606 7606 12 13 OGAAAA SQHAAA OOOOxx +9390 5167 0 2 0 10 90 390 1390 4390 9390 180 181 EXAAAA TQHAAA VVVVxx +2376 5168 0 0 6 16 76 376 376 2376 2376 152 153 KNAAAA UQHAAA AAAAxx +2377 5169 1 1 7 17 77 377 377 2377 2377 154 155 LNAAAA VQHAAA HHHHxx +5346 5170 0 2 6 6 46 346 1346 346 5346 92 93 QXAAAA WQHAAA OOOOxx +4140 5171 0 0 0 0 40 140 140 4140 4140 80 81 GDAAAA XQHAAA VVVVxx +6032 5172 0 0 2 12 32 32 32 1032 6032 64 65 AYAAAA YQHAAA AAAAxx +9453 5173 1 1 3 13 53 453 1453 4453 9453 106 107 PZAAAA ZQHAAA HHHHxx +9297 5174 1 1 7 17 97 297 1297 4297 9297 194 195 PTAAAA ARHAAA OOOOxx +6455 5175 1 3 5 15 55 455 455 1455 6455 110 111 HOAAAA BRHAAA VVVVxx +4458 5176 0 2 8 18 58 458 458 4458 4458 116 117 MPAAAA CRHAAA AAAAxx +9516 5177 0 0 6 16 16 516 1516 4516 9516 32 33 ACAAAA DRHAAA HHHHxx +6211 5178 1 3 1 11 11 211 211 1211 6211 22 23 XEAAAA ERHAAA OOOOxx +526 5179 0 2 6 6 26 526 526 526 526 52 53 GUAAAA FRHAAA VVVVxx +3570 5180 0 2 0 10 70 570 1570 3570 3570 140 141 IHAAAA GRHAAA AAAAxx +4885 5181 1 1 5 5 85 885 885 4885 4885 170 171 XFAAAA HRHAAA HHHHxx +6390 5182 0 2 0 10 90 390 390 1390 6390 180 181 ULAAAA IRHAAA OOOOxx +1606 5183 0 2 6 6 6 606 1606 1606 1606 12 13 UJAAAA JRHAAA VVVVxx +7850 5184 0 2 0 10 50 850 1850 2850 7850 100 101 YPAAAA KRHAAA AAAAxx +3315 5185 1 3 5 15 15 315 1315 3315 3315 30 31 NXAAAA LRHAAA HHHHxx +8322 5186 0 2 2 2 22 322 322 3322 8322 44 45 CIAAAA MRHAAA OOOOxx +3703 5187 1 3 3 3 3 703 1703 3703 3703 6 7 LMAAAA NRHAAA VVVVxx +9489 5188 1 1 9 9 89 489 1489 4489 9489 178 179 ZAAAAA ORHAAA AAAAxx +6104 5189 0 0 4 4 4 104 104 1104 6104 8 9 UAAAAA PRHAAA HHHHxx +3067 5190 1 3 7 7 67 67 1067 3067 3067 134 135 ZNAAAA QRHAAA OOOOxx +2521 5191 1 1 1 1 21 521 521 2521 2521 42 43 ZSAAAA RRHAAA VVVVxx +2581 5192 1 1 1 1 81 581 581 2581 2581 162 163 HVAAAA SRHAAA AAAAxx +595 5193 1 3 5 15 95 595 595 595 595 190 191 XWAAAA TRHAAA HHHHxx +8291 5194 1 3 1 11 91 291 291 3291 8291 182 183 XGAAAA URHAAA OOOOxx +1727 5195 1 3 7 7 27 727 1727 1727 1727 54 55 LOAAAA VRHAAA VVVVxx +6847 5196 1 3 7 7 47 847 847 1847 6847 94 95 JDAAAA WRHAAA AAAAxx +7494 5197 0 2 4 14 94 494 1494 2494 7494 188 189 GCAAAA XRHAAA HHHHxx +7093 5198 1 1 3 13 93 93 1093 2093 7093 186 187 VMAAAA YRHAAA OOOOxx +7357 5199 1 1 7 17 57 357 1357 2357 7357 114 115 ZWAAAA ZRHAAA VVVVxx +620 5200 0 0 0 0 20 620 620 620 620 40 41 WXAAAA ASHAAA AAAAxx +2460 5201 0 0 0 0 60 460 460 2460 2460 120 121 QQAAAA BSHAAA HHHHxx +1598 5202 0 2 8 18 98 598 1598 1598 1598 196 197 MJAAAA CSHAAA OOOOxx +4112 5203 0 0 2 12 12 112 112 4112 4112 24 25 ECAAAA DSHAAA VVVVxx +2956 5204 0 0 6 16 56 956 956 2956 2956 112 113 SJAAAA ESHAAA AAAAxx +3193 5205 1 1 3 13 93 193 1193 3193 3193 186 187 VSAAAA FSHAAA HHHHxx +6356 5206 0 0 6 16 56 356 356 1356 6356 112 113 MKAAAA GSHAAA OOOOxx +730 5207 0 2 0 10 30 730 730 730 730 60 61 CCAAAA HSHAAA VVVVxx +8826 5208 0 2 6 6 26 826 826 3826 8826 52 53 MBAAAA ISHAAA AAAAxx +9036 5209 0 0 6 16 36 36 1036 4036 9036 72 73 OJAAAA JSHAAA HHHHxx +2085 5210 1 1 5 5 85 85 85 2085 2085 170 171 FCAAAA KSHAAA OOOOxx +9007 5211 1 3 7 7 7 7 1007 4007 9007 14 15 LIAAAA LSHAAA VVVVxx +6047 5212 1 3 7 7 47 47 47 1047 6047 94 95 PYAAAA MSHAAA AAAAxx +3953 5213 1 1 3 13 53 953 1953 3953 3953 106 107 BWAAAA NSHAAA HHHHxx +1214 5214 0 2 
4 14 14 214 1214 1214 1214 28 29 SUAAAA OSHAAA OOOOxx +4814 5215 0 2 4 14 14 814 814 4814 4814 28 29 EDAAAA PSHAAA VVVVxx +5738 5216 0 2 8 18 38 738 1738 738 5738 76 77 SMAAAA QSHAAA AAAAxx +7176 5217 0 0 6 16 76 176 1176 2176 7176 152 153 AQAAAA RSHAAA HHHHxx +3609 5218 1 1 9 9 9 609 1609 3609 3609 18 19 VIAAAA SSHAAA OOOOxx +592 5219 0 0 2 12 92 592 592 592 592 184 185 UWAAAA TSHAAA VVVVxx +9391 5220 1 3 1 11 91 391 1391 4391 9391 182 183 FXAAAA USHAAA AAAAxx +5345 5221 1 1 5 5 45 345 1345 345 5345 90 91 PXAAAA VSHAAA HHHHxx +1171 5222 1 3 1 11 71 171 1171 1171 1171 142 143 BTAAAA WSHAAA OOOOxx +7238 5223 0 2 8 18 38 238 1238 2238 7238 76 77 KSAAAA XSHAAA VVVVxx +7561 5224 1 1 1 1 61 561 1561 2561 7561 122 123 VEAAAA YSHAAA AAAAxx +5876 5225 0 0 6 16 76 876 1876 876 5876 152 153 ASAAAA ZSHAAA HHHHxx +6611 5226 1 3 1 11 11 611 611 1611 6611 22 23 HUAAAA ATHAAA OOOOxx +7300 5227 0 0 0 0 0 300 1300 2300 7300 0 1 UUAAAA BTHAAA VVVVxx +1506 5228 0 2 6 6 6 506 1506 1506 1506 12 13 YFAAAA CTHAAA AAAAxx +1153 5229 1 1 3 13 53 153 1153 1153 1153 106 107 JSAAAA DTHAAA HHHHxx +3831 5230 1 3 1 11 31 831 1831 3831 3831 62 63 JRAAAA ETHAAA OOOOxx +9255 5231 1 3 5 15 55 255 1255 4255 9255 110 111 ZRAAAA FTHAAA VVVVxx +1841 5232 1 1 1 1 41 841 1841 1841 1841 82 83 VSAAAA GTHAAA AAAAxx +5075 5233 1 3 5 15 75 75 1075 75 5075 150 151 FNAAAA HTHAAA HHHHxx +101 5234 1 1 1 1 1 101 101 101 101 2 3 XDAAAA ITHAAA OOOOxx +2627 5235 1 3 7 7 27 627 627 2627 2627 54 55 BXAAAA JTHAAA VVVVxx +7078 5236 0 2 8 18 78 78 1078 2078 7078 156 157 GMAAAA KTHAAA AAAAxx +2850 5237 0 2 0 10 50 850 850 2850 2850 100 101 QFAAAA LTHAAA HHHHxx +8703 5238 1 3 3 3 3 703 703 3703 8703 6 7 TWAAAA MTHAAA OOOOxx +4101 5239 1 1 1 1 1 101 101 4101 4101 2 3 TBAAAA NTHAAA VVVVxx +318 5240 0 2 8 18 18 318 318 318 318 36 37 GMAAAA OTHAAA AAAAxx +6452 5241 0 0 2 12 52 452 452 1452 6452 104 105 EOAAAA PTHAAA HHHHxx +5558 5242 0 2 8 18 58 558 1558 558 5558 116 117 UFAAAA QTHAAA OOOOxx +3127 5243 1 3 7 7 27 127 1127 3127 3127 54 55 HQAAAA RTHAAA VVVVxx +535 5244 1 3 5 15 35 535 535 535 535 70 71 PUAAAA STHAAA AAAAxx +270 5245 0 2 0 10 70 270 270 270 270 140 141 KKAAAA TTHAAA HHHHxx +4038 5246 0 2 8 18 38 38 38 4038 4038 76 77 IZAAAA UTHAAA OOOOxx +3404 5247 0 0 4 4 4 404 1404 3404 3404 8 9 YAAAAA VTHAAA VVVVxx +2374 5248 0 2 4 14 74 374 374 2374 2374 148 149 INAAAA WTHAAA AAAAxx +6446 5249 0 2 6 6 46 446 446 1446 6446 92 93 YNAAAA XTHAAA HHHHxx +7758 5250 0 2 8 18 58 758 1758 2758 7758 116 117 KMAAAA YTHAAA OOOOxx +356 5251 0 0 6 16 56 356 356 356 356 112 113 SNAAAA ZTHAAA VVVVxx +9197 5252 1 1 7 17 97 197 1197 4197 9197 194 195 TPAAAA AUHAAA AAAAxx +9765 5253 1 1 5 5 65 765 1765 4765 9765 130 131 PLAAAA BUHAAA HHHHxx +4974 5254 0 2 4 14 74 974 974 4974 4974 148 149 IJAAAA CUHAAA OOOOxx +442 5255 0 2 2 2 42 442 442 442 442 84 85 ARAAAA DUHAAA VVVVxx +4349 5256 1 1 9 9 49 349 349 4349 4349 98 99 HLAAAA EUHAAA AAAAxx +6119 5257 1 3 9 19 19 119 119 1119 6119 38 39 JBAAAA FUHAAA HHHHxx +7574 5258 0 2 4 14 74 574 1574 2574 7574 148 149 IFAAAA GUHAAA OOOOxx +4445 5259 1 1 5 5 45 445 445 4445 4445 90 91 ZOAAAA HUHAAA VVVVxx +940 5260 0 0 0 0 40 940 940 940 940 80 81 EKAAAA IUHAAA AAAAxx +1875 5261 1 3 5 15 75 875 1875 1875 1875 150 151 DUAAAA JUHAAA HHHHxx +5951 5262 1 3 1 11 51 951 1951 951 5951 102 103 XUAAAA KUHAAA OOOOxx +9132 5263 0 0 2 12 32 132 1132 4132 9132 64 65 GNAAAA LUHAAA VVVVxx +6913 5264 1 1 3 13 13 913 913 1913 6913 26 27 XFAAAA MUHAAA AAAAxx +3308 5265 0 0 8 8 8 308 1308 3308 3308 16 17 GXAAAA NUHAAA HHHHxx +7553 5266 1 1 3 13 53 553 
1553 2553 7553 106 107 NEAAAA OUHAAA OOOOxx +2138 5267 0 2 8 18 38 138 138 2138 2138 76 77 GEAAAA PUHAAA VVVVxx +6252 5268 0 0 2 12 52 252 252 1252 6252 104 105 MGAAAA QUHAAA AAAAxx +2171 5269 1 3 1 11 71 171 171 2171 2171 142 143 NFAAAA RUHAAA HHHHxx +4159 5270 1 3 9 19 59 159 159 4159 4159 118 119 ZDAAAA SUHAAA OOOOxx +2401 5271 1 1 1 1 1 401 401 2401 2401 2 3 JOAAAA TUHAAA VVVVxx +6553 5272 1 1 3 13 53 553 553 1553 6553 106 107 BSAAAA UUHAAA AAAAxx +5217 5273 1 1 7 17 17 217 1217 217 5217 34 35 RSAAAA VUHAAA HHHHxx +1405 5274 1 1 5 5 5 405 1405 1405 1405 10 11 BCAAAA WUHAAA OOOOxx +1494 5275 0 2 4 14 94 494 1494 1494 1494 188 189 MFAAAA XUHAAA VVVVxx +5553 5276 1 1 3 13 53 553 1553 553 5553 106 107 PFAAAA YUHAAA AAAAxx +8296 5277 0 0 6 16 96 296 296 3296 8296 192 193 CHAAAA ZUHAAA HHHHxx +6565 5278 1 1 5 5 65 565 565 1565 6565 130 131 NSAAAA AVHAAA OOOOxx +817 5279 1 1 7 17 17 817 817 817 817 34 35 LFAAAA BVHAAA VVVVxx +6947 5280 1 3 7 7 47 947 947 1947 6947 94 95 FHAAAA CVHAAA AAAAxx +4184 5281 0 0 4 4 84 184 184 4184 4184 168 169 YEAAAA DVHAAA HHHHxx +6577 5282 1 1 7 17 77 577 577 1577 6577 154 155 ZSAAAA EVHAAA OOOOxx +6424 5283 0 0 4 4 24 424 424 1424 6424 48 49 CNAAAA FVHAAA VVVVxx +2482 5284 0 2 2 2 82 482 482 2482 2482 164 165 MRAAAA GVHAAA AAAAxx +6874 5285 0 2 4 14 74 874 874 1874 6874 148 149 KEAAAA HVHAAA HHHHxx +7601 5286 1 1 1 1 1 601 1601 2601 7601 2 3 JGAAAA IVHAAA OOOOxx +4552 5287 0 0 2 12 52 552 552 4552 4552 104 105 CTAAAA JVHAAA VVVVxx +8406 5288 0 2 6 6 6 406 406 3406 8406 12 13 ILAAAA KVHAAA AAAAxx +2924 5289 0 0 4 4 24 924 924 2924 2924 48 49 MIAAAA LVHAAA HHHHxx +8255 5290 1 3 5 15 55 255 255 3255 8255 110 111 NFAAAA MVHAAA OOOOxx +4920 5291 0 0 0 0 20 920 920 4920 4920 40 41 GHAAAA NVHAAA VVVVxx +228 5292 0 0 8 8 28 228 228 228 228 56 57 UIAAAA OVHAAA AAAAxx +9431 5293 1 3 1 11 31 431 1431 4431 9431 62 63 TYAAAA PVHAAA HHHHxx +4021 5294 1 1 1 1 21 21 21 4021 4021 42 43 RYAAAA QVHAAA OOOOxx +2966 5295 0 2 6 6 66 966 966 2966 2966 132 133 CKAAAA RVHAAA VVVVxx +2862 5296 0 2 2 2 62 862 862 2862 2862 124 125 CGAAAA SVHAAA AAAAxx +4303 5297 1 3 3 3 3 303 303 4303 4303 6 7 NJAAAA TVHAAA HHHHxx +9643 5298 1 3 3 3 43 643 1643 4643 9643 86 87 XGAAAA UVHAAA OOOOxx +3008 5299 0 0 8 8 8 8 1008 3008 3008 16 17 SLAAAA VVHAAA VVVVxx +7476 5300 0 0 6 16 76 476 1476 2476 7476 152 153 OBAAAA WVHAAA AAAAxx +3686 5301 0 2 6 6 86 686 1686 3686 3686 172 173 ULAAAA XVHAAA HHHHxx +9051 5302 1 3 1 11 51 51 1051 4051 9051 102 103 DKAAAA YVHAAA OOOOxx +6592 5303 0 0 2 12 92 592 592 1592 6592 184 185 OTAAAA ZVHAAA VVVVxx +924 5304 0 0 4 4 24 924 924 924 924 48 49 OJAAAA AWHAAA AAAAxx +4406 5305 0 2 6 6 6 406 406 4406 4406 12 13 MNAAAA BWHAAA HHHHxx +5233 5306 1 1 3 13 33 233 1233 233 5233 66 67 HTAAAA CWHAAA OOOOxx +8881 5307 1 1 1 1 81 881 881 3881 8881 162 163 PDAAAA DWHAAA VVVVxx +2212 5308 0 0 2 12 12 212 212 2212 2212 24 25 CHAAAA EWHAAA AAAAxx +5804 5309 0 0 4 4 4 804 1804 804 5804 8 9 GPAAAA FWHAAA HHHHxx +2990 5310 0 2 0 10 90 990 990 2990 2990 180 181 ALAAAA GWHAAA OOOOxx +4069 5311 1 1 9 9 69 69 69 4069 4069 138 139 NAAAAA HWHAAA VVVVxx +5380 5312 0 0 0 0 80 380 1380 380 5380 160 161 YYAAAA IWHAAA AAAAxx +5016 5313 0 0 6 16 16 16 1016 16 5016 32 33 YKAAAA JWHAAA HHHHxx +5056 5314 0 0 6 16 56 56 1056 56 5056 112 113 MMAAAA KWHAAA OOOOxx +3732 5315 0 0 2 12 32 732 1732 3732 3732 64 65 ONAAAA LWHAAA VVVVxx +5527 5316 1 3 7 7 27 527 1527 527 5527 54 55 PEAAAA MWHAAA AAAAxx +1151 5317 1 3 1 11 51 151 1151 1151 1151 102 103 HSAAAA NWHAAA HHHHxx +7900 5318 0 0 0 0 0 900 1900 2900 7900 0 
1 WRAAAA OWHAAA OOOOxx +1660 5319 0 0 0 0 60 660 1660 1660 1660 120 121 WLAAAA PWHAAA VVVVxx +8064 5320 0 0 4 4 64 64 64 3064 8064 128 129 EYAAAA QWHAAA AAAAxx +8240 5321 0 0 0 0 40 240 240 3240 8240 80 81 YEAAAA RWHAAA HHHHxx +413 5322 1 1 3 13 13 413 413 413 413 26 27 XPAAAA SWHAAA OOOOxx +8311 5323 1 3 1 11 11 311 311 3311 8311 22 23 RHAAAA TWHAAA VVVVxx +1065 5324 1 1 5 5 65 65 1065 1065 1065 130 131 ZOAAAA UWHAAA AAAAxx +2741 5325 1 1 1 1 41 741 741 2741 2741 82 83 LBAAAA VWHAAA HHHHxx +5306 5326 0 2 6 6 6 306 1306 306 5306 12 13 CWAAAA WWHAAA OOOOxx +5464 5327 0 0 4 4 64 464 1464 464 5464 128 129 ECAAAA XWHAAA VVVVxx +4237 5328 1 1 7 17 37 237 237 4237 4237 74 75 ZGAAAA YWHAAA AAAAxx +3822 5329 0 2 2 2 22 822 1822 3822 3822 44 45 ARAAAA ZWHAAA HHHHxx +2548 5330 0 0 8 8 48 548 548 2548 2548 96 97 AUAAAA AXHAAA OOOOxx +2688 5331 0 0 8 8 88 688 688 2688 2688 176 177 KZAAAA BXHAAA VVVVxx +8061 5332 1 1 1 1 61 61 61 3061 8061 122 123 BYAAAA CXHAAA AAAAxx +9340 5333 0 0 0 0 40 340 1340 4340 9340 80 81 GVAAAA DXHAAA HHHHxx +4031 5334 1 3 1 11 31 31 31 4031 4031 62 63 BZAAAA EXHAAA OOOOxx +2635 5335 1 3 5 15 35 635 635 2635 2635 70 71 JXAAAA FXHAAA VVVVxx +809 5336 1 1 9 9 9 809 809 809 809 18 19 DFAAAA GXHAAA AAAAxx +3209 5337 1 1 9 9 9 209 1209 3209 3209 18 19 LTAAAA HXHAAA HHHHxx +3825 5338 1 1 5 5 25 825 1825 3825 3825 50 51 DRAAAA IXHAAA OOOOxx +1448 5339 0 0 8 8 48 448 1448 1448 1448 96 97 SDAAAA JXHAAA VVVVxx +9077 5340 1 1 7 17 77 77 1077 4077 9077 154 155 DLAAAA KXHAAA AAAAxx +3730 5341 0 2 0 10 30 730 1730 3730 3730 60 61 MNAAAA LXHAAA HHHHxx +9596 5342 0 0 6 16 96 596 1596 4596 9596 192 193 CFAAAA MXHAAA OOOOxx +3563 5343 1 3 3 3 63 563 1563 3563 3563 126 127 BHAAAA NXHAAA VVVVxx +4116 5344 0 0 6 16 16 116 116 4116 4116 32 33 ICAAAA OXHAAA AAAAxx +4825 5345 1 1 5 5 25 825 825 4825 4825 50 51 PDAAAA PXHAAA HHHHxx +8376 5346 0 0 6 16 76 376 376 3376 8376 152 153 EKAAAA QXHAAA OOOOxx +3917 5347 1 1 7 17 17 917 1917 3917 3917 34 35 RUAAAA RXHAAA VVVVxx +4407 5348 1 3 7 7 7 407 407 4407 4407 14 15 NNAAAA SXHAAA AAAAxx +8202 5349 0 2 2 2 2 202 202 3202 8202 4 5 MDAAAA TXHAAA HHHHxx +7675 5350 1 3 5 15 75 675 1675 2675 7675 150 151 FJAAAA UXHAAA OOOOxx +4104 5351 0 0 4 4 4 104 104 4104 4104 8 9 WBAAAA VXHAAA VVVVxx +9225 5352 1 1 5 5 25 225 1225 4225 9225 50 51 VQAAAA WXHAAA AAAAxx +2834 5353 0 2 4 14 34 834 834 2834 2834 68 69 AFAAAA XXHAAA HHHHxx +1227 5354 1 3 7 7 27 227 1227 1227 1227 54 55 FVAAAA YXHAAA OOOOxx +3383 5355 1 3 3 3 83 383 1383 3383 3383 166 167 DAAAAA ZXHAAA VVVVxx +67 5356 1 3 7 7 67 67 67 67 67 134 135 PCAAAA AYHAAA AAAAxx +1751 5357 1 3 1 11 51 751 1751 1751 1751 102 103 JPAAAA BYHAAA HHHHxx +8054 5358 0 2 4 14 54 54 54 3054 8054 108 109 UXAAAA CYHAAA OOOOxx +8571 5359 1 3 1 11 71 571 571 3571 8571 142 143 RRAAAA DYHAAA VVVVxx +2466 5360 0 2 6 6 66 466 466 2466 2466 132 133 WQAAAA EYHAAA AAAAxx +9405 5361 1 1 5 5 5 405 1405 4405 9405 10 11 TXAAAA FYHAAA HHHHxx +6883 5362 1 3 3 3 83 883 883 1883 6883 166 167 TEAAAA GYHAAA OOOOxx +4301 5363 1 1 1 1 1 301 301 4301 4301 2 3 LJAAAA HYHAAA VVVVxx +3705 5364 1 1 5 5 5 705 1705 3705 3705 10 11 NMAAAA IYHAAA AAAAxx +5420 5365 0 0 0 0 20 420 1420 420 5420 40 41 MAAAAA JYHAAA HHHHxx +3692 5366 0 0 2 12 92 692 1692 3692 3692 184 185 AMAAAA KYHAAA OOOOxx +6851 5367 1 3 1 11 51 851 851 1851 6851 102 103 NDAAAA LYHAAA VVVVxx +9363 5368 1 3 3 3 63 363 1363 4363 9363 126 127 DWAAAA MYHAAA AAAAxx +2269 5369 1 1 9 9 69 269 269 2269 2269 138 139 HJAAAA NYHAAA HHHHxx +4918 5370 0 2 8 18 18 918 918 4918 4918 36 37 EHAAAA OYHAAA OOOOxx 
+4297 5371 1 1 7 17 97 297 297 4297 4297 194 195 HJAAAA PYHAAA VVVVxx +1836 5372 0 0 6 16 36 836 1836 1836 1836 72 73 QSAAAA QYHAAA AAAAxx +237 5373 1 1 7 17 37 237 237 237 237 74 75 DJAAAA RYHAAA HHHHxx +6131 5374 1 3 1 11 31 131 131 1131 6131 62 63 VBAAAA SYHAAA OOOOxx +3174 5375 0 2 4 14 74 174 1174 3174 3174 148 149 CSAAAA TYHAAA VVVVxx +9987 5376 1 3 7 7 87 987 1987 4987 9987 174 175 DUAAAA UYHAAA AAAAxx +3630 5377 0 2 0 10 30 630 1630 3630 3630 60 61 QJAAAA VYHAAA HHHHxx +2899 5378 1 3 9 19 99 899 899 2899 2899 198 199 NHAAAA WYHAAA OOOOxx +4079 5379 1 3 9 19 79 79 79 4079 4079 158 159 XAAAAA XYHAAA VVVVxx +5049 5380 1 1 9 9 49 49 1049 49 5049 98 99 FMAAAA YYHAAA AAAAxx +2963 5381 1 3 3 3 63 963 963 2963 2963 126 127 ZJAAAA ZYHAAA HHHHxx +3962 5382 0 2 2 2 62 962 1962 3962 3962 124 125 KWAAAA AZHAAA OOOOxx +7921 5383 1 1 1 1 21 921 1921 2921 7921 42 43 RSAAAA BZHAAA VVVVxx +3967 5384 1 3 7 7 67 967 1967 3967 3967 134 135 PWAAAA CZHAAA AAAAxx +2752 5385 0 0 2 12 52 752 752 2752 2752 104 105 WBAAAA DZHAAA HHHHxx +7944 5386 0 0 4 4 44 944 1944 2944 7944 88 89 OTAAAA EZHAAA OOOOxx +2205 5387 1 1 5 5 5 205 205 2205 2205 10 11 VGAAAA FZHAAA VVVVxx +5035 5388 1 3 5 15 35 35 1035 35 5035 70 71 RLAAAA GZHAAA AAAAxx +1425 5389 1 1 5 5 25 425 1425 1425 1425 50 51 VCAAAA HZHAAA HHHHxx +832 5390 0 0 2 12 32 832 832 832 832 64 65 AGAAAA IZHAAA OOOOxx +1447 5391 1 3 7 7 47 447 1447 1447 1447 94 95 RDAAAA JZHAAA VVVVxx +6108 5392 0 0 8 8 8 108 108 1108 6108 16 17 YAAAAA KZHAAA AAAAxx +4936 5393 0 0 6 16 36 936 936 4936 4936 72 73 WHAAAA LZHAAA HHHHxx +7704 5394 0 0 4 4 4 704 1704 2704 7704 8 9 IKAAAA MZHAAA OOOOxx +142 5395 0 2 2 2 42 142 142 142 142 84 85 MFAAAA NZHAAA VVVVxx +4272 5396 0 0 2 12 72 272 272 4272 4272 144 145 IIAAAA OZHAAA AAAAxx +7667 5397 1 3 7 7 67 667 1667 2667 7667 134 135 XIAAAA PZHAAA HHHHxx +366 5398 0 2 6 6 66 366 366 366 366 132 133 COAAAA QZHAAA OOOOxx +8866 5399 0 2 6 6 66 866 866 3866 8866 132 133 ADAAAA RZHAAA VVVVxx +7712 5400 0 0 2 12 12 712 1712 2712 7712 24 25 QKAAAA SZHAAA AAAAxx +3880 5401 0 0 0 0 80 880 1880 3880 3880 160 161 GTAAAA TZHAAA HHHHxx +4631 5402 1 3 1 11 31 631 631 4631 4631 62 63 DWAAAA UZHAAA OOOOxx +2789 5403 1 1 9 9 89 789 789 2789 2789 178 179 HDAAAA VZHAAA VVVVxx +7720 5404 0 0 0 0 20 720 1720 2720 7720 40 41 YKAAAA WZHAAA AAAAxx +7618 5405 0 2 8 18 18 618 1618 2618 7618 36 37 AHAAAA XZHAAA HHHHxx +4990 5406 0 2 0 10 90 990 990 4990 4990 180 181 YJAAAA YZHAAA OOOOxx +7918 5407 0 2 8 18 18 918 1918 2918 7918 36 37 OSAAAA ZZHAAA VVVVxx +5067 5408 1 3 7 7 67 67 1067 67 5067 134 135 XMAAAA AAIAAA AAAAxx +6370 5409 0 2 0 10 70 370 370 1370 6370 140 141 ALAAAA BAIAAA HHHHxx +2268 5410 0 0 8 8 68 268 268 2268 2268 136 137 GJAAAA CAIAAA OOOOxx +1949 5411 1 1 9 9 49 949 1949 1949 1949 98 99 ZWAAAA DAIAAA VVVVxx +5503 5412 1 3 3 3 3 503 1503 503 5503 6 7 RDAAAA EAIAAA AAAAxx +9951 5413 1 3 1 11 51 951 1951 4951 9951 102 103 TSAAAA FAIAAA HHHHxx +6823 5414 1 3 3 3 23 823 823 1823 6823 46 47 LCAAAA GAIAAA OOOOxx +6287 5415 1 3 7 7 87 287 287 1287 6287 174 175 VHAAAA HAIAAA VVVVxx +6016 5416 0 0 6 16 16 16 16 1016 6016 32 33 KXAAAA IAIAAA AAAAxx +1977 5417 1 1 7 17 77 977 1977 1977 1977 154 155 BYAAAA JAIAAA HHHHxx +8579 5418 1 3 9 19 79 579 579 3579 8579 158 159 ZRAAAA KAIAAA OOOOxx +6204 5419 0 0 4 4 4 204 204 1204 6204 8 9 QEAAAA LAIAAA VVVVxx +9764 5420 0 0 4 4 64 764 1764 4764 9764 128 129 OLAAAA MAIAAA AAAAxx +2005 5421 1 1 5 5 5 5 5 2005 2005 10 11 DZAAAA NAIAAA HHHHxx +1648 5422 0 0 8 8 48 648 1648 1648 1648 96 97 KLAAAA OAIAAA OOOOxx +2457 5423 1 1 
7 17 57 457 457 2457 2457 114 115 NQAAAA PAIAAA VVVVxx +2698 5424 0 2 8 18 98 698 698 2698 2698 196 197 UZAAAA QAIAAA AAAAxx +7730 5425 0 2 0 10 30 730 1730 2730 7730 60 61 ILAAAA RAIAAA HHHHxx +7287 5426 1 3 7 7 87 287 1287 2287 7287 174 175 HUAAAA SAIAAA OOOOxx +2937 5427 1 1 7 17 37 937 937 2937 2937 74 75 ZIAAAA TAIAAA VVVVxx +6824 5428 0 0 4 4 24 824 824 1824 6824 48 49 MCAAAA UAIAAA AAAAxx +9256 5429 0 0 6 16 56 256 1256 4256 9256 112 113 ASAAAA VAIAAA HHHHxx +4810 5430 0 2 0 10 10 810 810 4810 4810 20 21 ADAAAA WAIAAA OOOOxx +3869 5431 1 1 9 9 69 869 1869 3869 3869 138 139 VSAAAA XAIAAA VVVVxx +1993 5432 1 1 3 13 93 993 1993 1993 1993 186 187 RYAAAA YAIAAA AAAAxx +6048 5433 0 0 8 8 48 48 48 1048 6048 96 97 QYAAAA ZAIAAA HHHHxx +6922 5434 0 2 2 2 22 922 922 1922 6922 44 45 GGAAAA ABIAAA OOOOxx +8 5435 0 0 8 8 8 8 8 8 8 16 17 IAAAAA BBIAAA VVVVxx +6706 5436 0 2 6 6 6 706 706 1706 6706 12 13 YXAAAA CBIAAA AAAAxx +9159 5437 1 3 9 19 59 159 1159 4159 9159 118 119 HOAAAA DBIAAA HHHHxx +7020 5438 0 0 0 0 20 20 1020 2020 7020 40 41 AKAAAA EBIAAA OOOOxx +767 5439 1 3 7 7 67 767 767 767 767 134 135 NDAAAA FBIAAA VVVVxx +8602 5440 0 2 2 2 2 602 602 3602 8602 4 5 WSAAAA GBIAAA AAAAxx +4442 5441 0 2 2 2 42 442 442 4442 4442 84 85 WOAAAA HBIAAA HHHHxx +2040 5442 0 0 0 0 40 40 40 2040 2040 80 81 MAAAAA IBIAAA OOOOxx +5493 5443 1 1 3 13 93 493 1493 493 5493 186 187 HDAAAA JBIAAA VVVVxx +275 5444 1 3 5 15 75 275 275 275 275 150 151 PKAAAA KBIAAA AAAAxx +8876 5445 0 0 6 16 76 876 876 3876 8876 152 153 KDAAAA LBIAAA HHHHxx +7381 5446 1 1 1 1 81 381 1381 2381 7381 162 163 XXAAAA MBIAAA OOOOxx +1827 5447 1 3 7 7 27 827 1827 1827 1827 54 55 HSAAAA NBIAAA VVVVxx +3537 5448 1 1 7 17 37 537 1537 3537 3537 74 75 BGAAAA OBIAAA AAAAxx +6978 5449 0 2 8 18 78 978 978 1978 6978 156 157 KIAAAA PBIAAA HHHHxx +6160 5450 0 0 0 0 60 160 160 1160 6160 120 121 YCAAAA QBIAAA OOOOxx +9219 5451 1 3 9 19 19 219 1219 4219 9219 38 39 PQAAAA RBIAAA VVVVxx +5034 5452 0 2 4 14 34 34 1034 34 5034 68 69 QLAAAA SBIAAA AAAAxx +8463 5453 1 3 3 3 63 463 463 3463 8463 126 127 NNAAAA TBIAAA HHHHxx +2038 5454 0 2 8 18 38 38 38 2038 2038 76 77 KAAAAA UBIAAA OOOOxx +9562 5455 0 2 2 2 62 562 1562 4562 9562 124 125 UDAAAA VBIAAA VVVVxx +2687 5456 1 3 7 7 87 687 687 2687 2687 174 175 JZAAAA WBIAAA AAAAxx +5092 5457 0 0 2 12 92 92 1092 92 5092 184 185 WNAAAA XBIAAA HHHHxx +539 5458 1 3 9 19 39 539 539 539 539 78 79 TUAAAA YBIAAA OOOOxx +2139 5459 1 3 9 19 39 139 139 2139 2139 78 79 HEAAAA ZBIAAA VVVVxx +9221 5460 1 1 1 1 21 221 1221 4221 9221 42 43 RQAAAA ACIAAA AAAAxx +965 5461 1 1 5 5 65 965 965 965 965 130 131 DLAAAA BCIAAA HHHHxx +6051 5462 1 3 1 11 51 51 51 1051 6051 102 103 TYAAAA CCIAAA OOOOxx +5822 5463 0 2 2 2 22 822 1822 822 5822 44 45 YPAAAA DCIAAA VVVVxx +6397 5464 1 1 7 17 97 397 397 1397 6397 194 195 BMAAAA ECIAAA AAAAxx +2375 5465 1 3 5 15 75 375 375 2375 2375 150 151 JNAAAA FCIAAA HHHHxx +9415 5466 1 3 5 15 15 415 1415 4415 9415 30 31 DYAAAA GCIAAA OOOOxx +6552 5467 0 0 2 12 52 552 552 1552 6552 104 105 ASAAAA HCIAAA VVVVxx +2248 5468 0 0 8 8 48 248 248 2248 2248 96 97 MIAAAA ICIAAA AAAAxx +2611 5469 1 3 1 11 11 611 611 2611 2611 22 23 LWAAAA JCIAAA HHHHxx +9609 5470 1 1 9 9 9 609 1609 4609 9609 18 19 PFAAAA KCIAAA OOOOxx +2132 5471 0 0 2 12 32 132 132 2132 2132 64 65 AEAAAA LCIAAA VVVVxx +8452 5472 0 0 2 12 52 452 452 3452 8452 104 105 CNAAAA MCIAAA AAAAxx +9407 5473 1 3 7 7 7 407 1407 4407 9407 14 15 VXAAAA NCIAAA HHHHxx +2814 5474 0 2 4 14 14 814 814 2814 2814 28 29 GEAAAA OCIAAA OOOOxx +1889 5475 1 1 9 9 89 889 1889 1889 
1889 178 179 RUAAAA PCIAAA VVVVxx +7489 5476 1 1 9 9 89 489 1489 2489 7489 178 179 BCAAAA QCIAAA AAAAxx +2255 5477 1 3 5 15 55 255 255 2255 2255 110 111 TIAAAA RCIAAA HHHHxx +3380 5478 0 0 0 0 80 380 1380 3380 3380 160 161 AAAAAA SCIAAA OOOOxx +1167 5479 1 3 7 7 67 167 1167 1167 1167 134 135 XSAAAA TCIAAA VVVVxx +5369 5480 1 1 9 9 69 369 1369 369 5369 138 139 NYAAAA UCIAAA AAAAxx +2378 5481 0 2 8 18 78 378 378 2378 2378 156 157 MNAAAA VCIAAA HHHHxx +8315 5482 1 3 5 15 15 315 315 3315 8315 30 31 VHAAAA WCIAAA OOOOxx +2934 5483 0 2 4 14 34 934 934 2934 2934 68 69 WIAAAA XCIAAA VVVVxx +7924 5484 0 0 4 4 24 924 1924 2924 7924 48 49 USAAAA YCIAAA AAAAxx +2867 5485 1 3 7 7 67 867 867 2867 2867 134 135 HGAAAA ZCIAAA HHHHxx +9141 5486 1 1 1 1 41 141 1141 4141 9141 82 83 PNAAAA ADIAAA OOOOxx +3613 5487 1 1 3 13 13 613 1613 3613 3613 26 27 ZIAAAA BDIAAA VVVVxx +2461 5488 1 1 1 1 61 461 461 2461 2461 122 123 RQAAAA CDIAAA AAAAxx +4567 5489 1 3 7 7 67 567 567 4567 4567 134 135 RTAAAA DDIAAA HHHHxx +2906 5490 0 2 6 6 6 906 906 2906 2906 12 13 UHAAAA EDIAAA OOOOxx +4848 5491 0 0 8 8 48 848 848 4848 4848 96 97 MEAAAA FDIAAA VVVVxx +6614 5492 0 2 4 14 14 614 614 1614 6614 28 29 KUAAAA GDIAAA AAAAxx +6200 5493 0 0 0 0 0 200 200 1200 6200 0 1 MEAAAA HDIAAA HHHHxx +7895 5494 1 3 5 15 95 895 1895 2895 7895 190 191 RRAAAA IDIAAA OOOOxx +6829 5495 1 1 9 9 29 829 829 1829 6829 58 59 RCAAAA JDIAAA VVVVxx +4087 5496 1 3 7 7 87 87 87 4087 4087 174 175 FBAAAA KDIAAA AAAAxx +8787 5497 1 3 7 7 87 787 787 3787 8787 174 175 ZZAAAA LDIAAA HHHHxx +3322 5498 0 2 2 2 22 322 1322 3322 3322 44 45 UXAAAA MDIAAA OOOOxx +9091 5499 1 3 1 11 91 91 1091 4091 9091 182 183 RLAAAA NDIAAA VVVVxx +5268 5500 0 0 8 8 68 268 1268 268 5268 136 137 QUAAAA ODIAAA AAAAxx +2719 5501 1 3 9 19 19 719 719 2719 2719 38 39 PAAAAA PDIAAA HHHHxx +30 5502 0 2 0 10 30 30 30 30 30 60 61 EBAAAA QDIAAA OOOOxx +1975 5503 1 3 5 15 75 975 1975 1975 1975 150 151 ZXAAAA RDIAAA VVVVxx +2641 5504 1 1 1 1 41 641 641 2641 2641 82 83 PXAAAA SDIAAA AAAAxx +8616 5505 0 0 6 16 16 616 616 3616 8616 32 33 KTAAAA TDIAAA HHHHxx +5980 5506 0 0 0 0 80 980 1980 980 5980 160 161 AWAAAA UDIAAA OOOOxx +5170 5507 0 2 0 10 70 170 1170 170 5170 140 141 WQAAAA VDIAAA VVVVxx +1960 5508 0 0 0 0 60 960 1960 1960 1960 120 121 KXAAAA WDIAAA AAAAxx +8141 5509 1 1 1 1 41 141 141 3141 8141 82 83 DBAAAA XDIAAA HHHHxx +6692 5510 0 0 2 12 92 692 692 1692 6692 184 185 KXAAAA YDIAAA OOOOxx +7621 5511 1 1 1 1 21 621 1621 2621 7621 42 43 DHAAAA ZDIAAA VVVVxx +3890 5512 0 2 0 10 90 890 1890 3890 3890 180 181 QTAAAA AEIAAA AAAAxx +4300 5513 0 0 0 0 0 300 300 4300 4300 0 1 KJAAAA BEIAAA HHHHxx +736 5514 0 0 6 16 36 736 736 736 736 72 73 ICAAAA CEIAAA OOOOxx +6626 5515 0 2 6 6 26 626 626 1626 6626 52 53 WUAAAA DEIAAA VVVVxx +1800 5516 0 0 0 0 0 800 1800 1800 1800 0 1 GRAAAA EEIAAA AAAAxx +3430 5517 0 2 0 10 30 430 1430 3430 3430 60 61 YBAAAA FEIAAA HHHHxx +9519 5518 1 3 9 19 19 519 1519 4519 9519 38 39 DCAAAA GEIAAA OOOOxx +5111 5519 1 3 1 11 11 111 1111 111 5111 22 23 POAAAA HEIAAA VVVVxx +6915 5520 1 3 5 15 15 915 915 1915 6915 30 31 ZFAAAA IEIAAA AAAAxx +9246 5521 0 2 6 6 46 246 1246 4246 9246 92 93 QRAAAA JEIAAA HHHHxx +5141 5522 1 1 1 1 41 141 1141 141 5141 82 83 TPAAAA KEIAAA OOOOxx +5922 5523 0 2 2 2 22 922 1922 922 5922 44 45 UTAAAA LEIAAA VVVVxx +3087 5524 1 3 7 7 87 87 1087 3087 3087 174 175 TOAAAA MEIAAA AAAAxx +1859 5525 1 3 9 19 59 859 1859 1859 1859 118 119 NTAAAA NEIAAA HHHHxx +8482 5526 0 2 2 2 82 482 482 3482 8482 164 165 GOAAAA OEIAAA OOOOxx +8414 5527 0 2 4 14 14 414 414 3414 8414 
28 29 QLAAAA PEIAAA VVVVxx +6662 5528 0 2 2 2 62 662 662 1662 6662 124 125 GWAAAA QEIAAA AAAAxx +8614 5529 0 2 4 14 14 614 614 3614 8614 28 29 ITAAAA REIAAA HHHHxx +42 5530 0 2 2 2 42 42 42 42 42 84 85 QBAAAA SEIAAA OOOOxx +7582 5531 0 2 2 2 82 582 1582 2582 7582 164 165 QFAAAA TEIAAA VVVVxx +8183 5532 1 3 3 3 83 183 183 3183 8183 166 167 TCAAAA UEIAAA AAAAxx +1299 5533 1 3 9 19 99 299 1299 1299 1299 198 199 ZXAAAA VEIAAA HHHHxx +7004 5534 0 0 4 4 4 4 1004 2004 7004 8 9 KJAAAA WEIAAA OOOOxx +3298 5535 0 2 8 18 98 298 1298 3298 3298 196 197 WWAAAA XEIAAA VVVVxx +7884 5536 0 0 4 4 84 884 1884 2884 7884 168 169 GRAAAA YEIAAA AAAAxx +4191 5537 1 3 1 11 91 191 191 4191 4191 182 183 FFAAAA ZEIAAA HHHHxx +7346 5538 0 2 6 6 46 346 1346 2346 7346 92 93 OWAAAA AFIAAA OOOOxx +7989 5539 1 1 9 9 89 989 1989 2989 7989 178 179 HVAAAA BFIAAA VVVVxx +5719 5540 1 3 9 19 19 719 1719 719 5719 38 39 ZLAAAA CFIAAA AAAAxx +800 5541 0 0 0 0 0 800 800 800 800 0 1 UEAAAA DFIAAA HHHHxx +6509 5542 1 1 9 9 9 509 509 1509 6509 18 19 JQAAAA EFIAAA OOOOxx +4672 5543 0 0 2 12 72 672 672 4672 4672 144 145 SXAAAA FFIAAA VVVVxx +4434 5544 0 2 4 14 34 434 434 4434 4434 68 69 OOAAAA GFIAAA AAAAxx +8309 5545 1 1 9 9 9 309 309 3309 8309 18 19 PHAAAA HFIAAA HHHHxx +5134 5546 0 2 4 14 34 134 1134 134 5134 68 69 MPAAAA IFIAAA OOOOxx +5153 5547 1 1 3 13 53 153 1153 153 5153 106 107 FQAAAA JFIAAA VVVVxx +1522 5548 0 2 2 2 22 522 1522 1522 1522 44 45 OGAAAA KFIAAA AAAAxx +8629 5549 1 1 9 9 29 629 629 3629 8629 58 59 XTAAAA LFIAAA HHHHxx +4549 5550 1 1 9 9 49 549 549 4549 4549 98 99 ZSAAAA MFIAAA OOOOxx +9506 5551 0 2 6 6 6 506 1506 4506 9506 12 13 QBAAAA NFIAAA VVVVxx +6542 5552 0 2 2 2 42 542 542 1542 6542 84 85 QRAAAA OFIAAA AAAAxx +2579 5553 1 3 9 19 79 579 579 2579 2579 158 159 FVAAAA PFIAAA HHHHxx +4664 5554 0 0 4 4 64 664 664 4664 4664 128 129 KXAAAA QFIAAA OOOOxx +696 5555 0 0 6 16 96 696 696 696 696 192 193 UAAAAA RFIAAA VVVVxx +7950 5556 0 2 0 10 50 950 1950 2950 7950 100 101 UTAAAA SFIAAA AAAAxx +5 5557 1 1 5 5 5 5 5 5 5 10 11 FAAAAA TFIAAA HHHHxx +7806 5558 0 2 6 6 6 806 1806 2806 7806 12 13 GOAAAA UFIAAA OOOOxx +2770 5559 0 2 0 10 70 770 770 2770 2770 140 141 OCAAAA VFIAAA VVVVxx +1344 5560 0 0 4 4 44 344 1344 1344 1344 88 89 SZAAAA WFIAAA AAAAxx +511 5561 1 3 1 11 11 511 511 511 511 22 23 RTAAAA XFIAAA HHHHxx +9070 5562 0 2 0 10 70 70 1070 4070 9070 140 141 WKAAAA YFIAAA OOOOxx +2961 5563 1 1 1 1 61 961 961 2961 2961 122 123 XJAAAA ZFIAAA VVVVxx +8031 5564 1 3 1 11 31 31 31 3031 8031 62 63 XWAAAA AGIAAA AAAAxx +326 5565 0 2 6 6 26 326 326 326 326 52 53 OMAAAA BGIAAA HHHHxx +183 5566 1 3 3 3 83 183 183 183 183 166 167 BHAAAA CGIAAA OOOOxx +5917 5567 1 1 7 17 17 917 1917 917 5917 34 35 PTAAAA DGIAAA VVVVxx +8256 5568 0 0 6 16 56 256 256 3256 8256 112 113 OFAAAA EGIAAA AAAAxx +7889 5569 1 1 9 9 89 889 1889 2889 7889 178 179 LRAAAA FGIAAA HHHHxx +9029 5570 1 1 9 9 29 29 1029 4029 9029 58 59 HJAAAA GGIAAA OOOOxx +1316 5571 0 0 6 16 16 316 1316 1316 1316 32 33 QYAAAA HGIAAA VVVVxx +7442 5572 0 2 2 2 42 442 1442 2442 7442 84 85 GAAAAA IGIAAA AAAAxx +2810 5573 0 2 0 10 10 810 810 2810 2810 20 21 CEAAAA JGIAAA HHHHxx +20 5574 0 0 0 0 20 20 20 20 20 40 41 UAAAAA KGIAAA OOOOxx +2306 5575 0 2 6 6 6 306 306 2306 2306 12 13 SKAAAA LGIAAA VVVVxx +4694 5576 0 2 4 14 94 694 694 4694 4694 188 189 OYAAAA MGIAAA AAAAxx +9710 5577 0 2 0 10 10 710 1710 4710 9710 20 21 MJAAAA NGIAAA HHHHxx +1791 5578 1 3 1 11 91 791 1791 1791 1791 182 183 XQAAAA OGIAAA OOOOxx +6730 5579 0 2 0 10 30 730 730 1730 6730 60 61 WYAAAA PGIAAA VVVVxx +359 5580 1 3 9 19 
59 359 359 359 359 118 119 VNAAAA QGIAAA AAAAxx +8097 5581 1 1 7 17 97 97 97 3097 8097 194 195 LZAAAA RGIAAA HHHHxx +6147 5582 1 3 7 7 47 147 147 1147 6147 94 95 LCAAAA SGIAAA OOOOxx +643 5583 1 3 3 3 43 643 643 643 643 86 87 TYAAAA TGIAAA VVVVxx +698 5584 0 2 8 18 98 698 698 698 698 196 197 WAAAAA UGIAAA AAAAxx +3881 5585 1 1 1 1 81 881 1881 3881 3881 162 163 HTAAAA VGIAAA HHHHxx +7600 5586 0 0 0 0 0 600 1600 2600 7600 0 1 IGAAAA WGIAAA OOOOxx +1583 5587 1 3 3 3 83 583 1583 1583 1583 166 167 XIAAAA XGIAAA VVVVxx +9612 5588 0 0 2 12 12 612 1612 4612 9612 24 25 SFAAAA YGIAAA AAAAxx +1032 5589 0 0 2 12 32 32 1032 1032 1032 64 65 SNAAAA ZGIAAA HHHHxx +4834 5590 0 2 4 14 34 834 834 4834 4834 68 69 YDAAAA AHIAAA OOOOxx +5076 5591 0 0 6 16 76 76 1076 76 5076 152 153 GNAAAA BHIAAA VVVVxx +3070 5592 0 2 0 10 70 70 1070 3070 3070 140 141 COAAAA CHIAAA AAAAxx +1421 5593 1 1 1 1 21 421 1421 1421 1421 42 43 RCAAAA DHIAAA HHHHxx +8970 5594 0 2 0 10 70 970 970 3970 8970 140 141 AHAAAA EHIAAA OOOOxx +6271 5595 1 3 1 11 71 271 271 1271 6271 142 143 FHAAAA FHIAAA VVVVxx +8547 5596 1 3 7 7 47 547 547 3547 8547 94 95 TQAAAA GHIAAA AAAAxx +1259 5597 1 3 9 19 59 259 1259 1259 1259 118 119 LWAAAA HHIAAA HHHHxx +8328 5598 0 0 8 8 28 328 328 3328 8328 56 57 IIAAAA IHIAAA OOOOxx +1503 5599 1 3 3 3 3 503 1503 1503 1503 6 7 VFAAAA JHIAAA VVVVxx +2253 5600 1 1 3 13 53 253 253 2253 2253 106 107 RIAAAA KHIAAA AAAAxx +7449 5601 1 1 9 9 49 449 1449 2449 7449 98 99 NAAAAA LHIAAA HHHHxx +3579 5602 1 3 9 19 79 579 1579 3579 3579 158 159 RHAAAA MHIAAA OOOOxx +1585 5603 1 1 5 5 85 585 1585 1585 1585 170 171 ZIAAAA NHIAAA VVVVxx +5543 5604 1 3 3 3 43 543 1543 543 5543 86 87 FFAAAA OHIAAA AAAAxx +8627 5605 1 3 7 7 27 627 627 3627 8627 54 55 VTAAAA PHIAAA HHHHxx +8618 5606 0 2 8 18 18 618 618 3618 8618 36 37 MTAAAA QHIAAA OOOOxx +1911 5607 1 3 1 11 11 911 1911 1911 1911 22 23 NVAAAA RHIAAA VVVVxx +2758 5608 0 2 8 18 58 758 758 2758 2758 116 117 CCAAAA SHIAAA AAAAxx +5744 5609 0 0 4 4 44 744 1744 744 5744 88 89 YMAAAA THIAAA HHHHxx +4976 5610 0 0 6 16 76 976 976 4976 4976 152 153 KJAAAA UHIAAA OOOOxx +6380 5611 0 0 0 0 80 380 380 1380 6380 160 161 KLAAAA VHIAAA VVVVxx +1937 5612 1 1 7 17 37 937 1937 1937 1937 74 75 NWAAAA WHIAAA AAAAxx +9903 5613 1 3 3 3 3 903 1903 4903 9903 6 7 XQAAAA XHIAAA HHHHxx +4409 5614 1 1 9 9 9 409 409 4409 4409 18 19 PNAAAA YHIAAA OOOOxx +4133 5615 1 1 3 13 33 133 133 4133 4133 66 67 ZCAAAA ZHIAAA VVVVxx +5263 5616 1 3 3 3 63 263 1263 263 5263 126 127 LUAAAA AIIAAA AAAAxx +7888 5617 0 0 8 8 88 888 1888 2888 7888 176 177 KRAAAA BIIAAA HHHHxx +6060 5618 0 0 0 0 60 60 60 1060 6060 120 121 CZAAAA CIIAAA OOOOxx +2522 5619 0 2 2 2 22 522 522 2522 2522 44 45 ATAAAA DIIAAA VVVVxx +5550 5620 0 2 0 10 50 550 1550 550 5550 100 101 MFAAAA EIIAAA AAAAxx +9396 5621 0 0 6 16 96 396 1396 4396 9396 192 193 KXAAAA FIIAAA HHHHxx +176 5622 0 0 6 16 76 176 176 176 176 152 153 UGAAAA GIIAAA OOOOxx +5148 5623 0 0 8 8 48 148 1148 148 5148 96 97 AQAAAA HIIAAA VVVVxx +6691 5624 1 3 1 11 91 691 691 1691 6691 182 183 JXAAAA IIIAAA AAAAxx +4652 5625 0 0 2 12 52 652 652 4652 4652 104 105 YWAAAA JIIAAA HHHHxx +5096 5626 0 0 6 16 96 96 1096 96 5096 192 193 AOAAAA KIIAAA OOOOxx +2408 5627 0 0 8 8 8 408 408 2408 2408 16 17 QOAAAA LIIAAA VVVVxx +7322 5628 0 2 2 2 22 322 1322 2322 7322 44 45 QVAAAA MIIAAA AAAAxx +6782 5629 0 2 2 2 82 782 782 1782 6782 164 165 WAAAAA NIIAAA HHHHxx +4642 5630 0 2 2 2 42 642 642 4642 4642 84 85 OWAAAA OIIAAA OOOOxx +5427 5631 1 3 7 7 27 427 1427 427 5427 54 55 TAAAAA PIIAAA VVVVxx +4461 5632 1 1 1 1 61 461 
461 4461 4461 122 123 PPAAAA QIIAAA AAAAxx +8416 5633 0 0 6 16 16 416 416 3416 8416 32 33 SLAAAA RIIAAA HHHHxx +2593 5634 1 1 3 13 93 593 593 2593 2593 186 187 TVAAAA SIIAAA OOOOxx +6202 5635 0 2 2 2 2 202 202 1202 6202 4 5 OEAAAA TIIAAA VVVVxx +3826 5636 0 2 6 6 26 826 1826 3826 3826 52 53 ERAAAA UIIAAA AAAAxx +4417 5637 1 1 7 17 17 417 417 4417 4417 34 35 XNAAAA VIIAAA HHHHxx +7871 5638 1 3 1 11 71 871 1871 2871 7871 142 143 TQAAAA WIIAAA OOOOxx +5622 5639 0 2 2 2 22 622 1622 622 5622 44 45 GIAAAA XIIAAA VVVVxx +3010 5640 0 2 0 10 10 10 1010 3010 3010 20 21 ULAAAA YIIAAA AAAAxx +3407 5641 1 3 7 7 7 407 1407 3407 3407 14 15 BBAAAA ZIIAAA HHHHxx +1274 5642 0 2 4 14 74 274 1274 1274 1274 148 149 AXAAAA AJIAAA OOOOxx +2828 5643 0 0 8 8 28 828 828 2828 2828 56 57 UEAAAA BJIAAA VVVVxx +3427 5644 1 3 7 7 27 427 1427 3427 3427 54 55 VBAAAA CJIAAA AAAAxx +612 5645 0 0 2 12 12 612 612 612 612 24 25 OXAAAA DJIAAA HHHHxx +8729 5646 1 1 9 9 29 729 729 3729 8729 58 59 TXAAAA EJIAAA OOOOxx +1239 5647 1 3 9 19 39 239 1239 1239 1239 78 79 RVAAAA FJIAAA VVVVxx +8990 5648 0 2 0 10 90 990 990 3990 8990 180 181 UHAAAA GJIAAA AAAAxx +5609 5649 1 1 9 9 9 609 1609 609 5609 18 19 THAAAA HJIAAA HHHHxx +4441 5650 1 1 1 1 41 441 441 4441 4441 82 83 VOAAAA IJIAAA OOOOxx +9078 5651 0 2 8 18 78 78 1078 4078 9078 156 157 ELAAAA JJIAAA VVVVxx +6699 5652 1 3 9 19 99 699 699 1699 6699 198 199 RXAAAA KJIAAA AAAAxx +8390 5653 0 2 0 10 90 390 390 3390 8390 180 181 SKAAAA LJIAAA HHHHxx +5455 5654 1 3 5 15 55 455 1455 455 5455 110 111 VBAAAA MJIAAA OOOOxx +7537 5655 1 1 7 17 37 537 1537 2537 7537 74 75 XDAAAA NJIAAA VVVVxx +4669 5656 1 1 9 9 69 669 669 4669 4669 138 139 PXAAAA OJIAAA AAAAxx +5534 5657 0 2 4 14 34 534 1534 534 5534 68 69 WEAAAA PJIAAA HHHHxx +1920 5658 0 0 0 0 20 920 1920 1920 1920 40 41 WVAAAA QJIAAA OOOOxx +9465 5659 1 1 5 5 65 465 1465 4465 9465 130 131 BAAAAA RJIAAA VVVVxx +4897 5660 1 1 7 17 97 897 897 4897 4897 194 195 JGAAAA SJIAAA AAAAxx +1990 5661 0 2 0 10 90 990 1990 1990 1990 180 181 OYAAAA TJIAAA HHHHxx +7148 5662 0 0 8 8 48 148 1148 2148 7148 96 97 YOAAAA UJIAAA OOOOxx +533 5663 1 1 3 13 33 533 533 533 533 66 67 NUAAAA VJIAAA VVVVxx +4339 5664 1 3 9 19 39 339 339 4339 4339 78 79 XKAAAA WJIAAA AAAAxx +6450 5665 0 2 0 10 50 450 450 1450 6450 100 101 COAAAA XJIAAA HHHHxx +9627 5666 1 3 7 7 27 627 1627 4627 9627 54 55 HGAAAA YJIAAA OOOOxx +5539 5667 1 3 9 19 39 539 1539 539 5539 78 79 BFAAAA ZJIAAA VVVVxx +6758 5668 0 2 8 18 58 758 758 1758 6758 116 117 YZAAAA AKIAAA AAAAxx +3435 5669 1 3 5 15 35 435 1435 3435 3435 70 71 DCAAAA BKIAAA HHHHxx +4350 5670 0 2 0 10 50 350 350 4350 4350 100 101 ILAAAA CKIAAA OOOOxx +9088 5671 0 0 8 8 88 88 1088 4088 9088 176 177 OLAAAA DKIAAA VVVVxx +6368 5672 0 0 8 8 68 368 368 1368 6368 136 137 YKAAAA EKIAAA AAAAxx +6337 5673 1 1 7 17 37 337 337 1337 6337 74 75 TJAAAA FKIAAA HHHHxx +4361 5674 1 1 1 1 61 361 361 4361 4361 122 123 TLAAAA GKIAAA OOOOxx +1719 5675 1 3 9 19 19 719 1719 1719 1719 38 39 DOAAAA HKIAAA VVVVxx +3109 5676 1 1 9 9 9 109 1109 3109 3109 18 19 PPAAAA IKIAAA AAAAxx +7135 5677 1 3 5 15 35 135 1135 2135 7135 70 71 LOAAAA JKIAAA HHHHxx +1964 5678 0 0 4 4 64 964 1964 1964 1964 128 129 OXAAAA KKIAAA OOOOxx +3 5679 1 3 3 3 3 3 3 3 3 6 7 DAAAAA LKIAAA VVVVxx +1868 5680 0 0 8 8 68 868 1868 1868 1868 136 137 WTAAAA MKIAAA AAAAxx +5182 5681 0 2 2 2 82 182 1182 182 5182 164 165 IRAAAA NKIAAA HHHHxx +7567 5682 1 3 7 7 67 567 1567 2567 7567 134 135 BFAAAA OKIAAA OOOOxx +3676 5683 0 0 6 16 76 676 1676 3676 3676 152 153 KLAAAA PKIAAA VVVVxx +9382 5684 0 2 2 2 82 382 
1382 4382 9382 164 165 WWAAAA QKIAAA AAAAxx +8645 5685 1 1 5 5 45 645 645 3645 8645 90 91 NUAAAA RKIAAA HHHHxx +2018 5686 0 2 8 18 18 18 18 2018 2018 36 37 QZAAAA SKIAAA OOOOxx +217 5687 1 1 7 17 17 217 217 217 217 34 35 JIAAAA TKIAAA VVVVxx +6793 5688 1 1 3 13 93 793 793 1793 6793 186 187 HBAAAA UKIAAA AAAAxx +7280 5689 0 0 0 0 80 280 1280 2280 7280 160 161 AUAAAA VKIAAA HHHHxx +2168 5690 0 0 8 8 68 168 168 2168 2168 136 137 KFAAAA WKIAAA OOOOxx +5259 5691 1 3 9 19 59 259 1259 259 5259 118 119 HUAAAA XKIAAA VVVVxx +6019 5692 1 3 9 19 19 19 19 1019 6019 38 39 NXAAAA YKIAAA AAAAxx +877 5693 1 1 7 17 77 877 877 877 877 154 155 THAAAA ZKIAAA HHHHxx +4961 5694 1 1 1 1 61 961 961 4961 4961 122 123 VIAAAA ALIAAA OOOOxx +1873 5695 1 1 3 13 73 873 1873 1873 1873 146 147 BUAAAA BLIAAA VVVVxx +13 5696 1 1 3 13 13 13 13 13 13 26 27 NAAAAA CLIAAA AAAAxx +1537 5697 1 1 7 17 37 537 1537 1537 1537 74 75 DHAAAA DLIAAA HHHHxx +3129 5698 1 1 9 9 29 129 1129 3129 3129 58 59 JQAAAA ELIAAA OOOOxx +6473 5699 1 1 3 13 73 473 473 1473 6473 146 147 ZOAAAA FLIAAA VVVVxx +7865 5700 1 1 5 5 65 865 1865 2865 7865 130 131 NQAAAA GLIAAA AAAAxx +7822 5701 0 2 2 2 22 822 1822 2822 7822 44 45 WOAAAA HLIAAA HHHHxx +239 5702 1 3 9 19 39 239 239 239 239 78 79 FJAAAA ILIAAA OOOOxx +2062 5703 0 2 2 2 62 62 62 2062 2062 124 125 IBAAAA JLIAAA VVVVxx +762 5704 0 2 2 2 62 762 762 762 762 124 125 IDAAAA KLIAAA AAAAxx +3764 5705 0 0 4 4 64 764 1764 3764 3764 128 129 UOAAAA LLIAAA HHHHxx +465 5706 1 1 5 5 65 465 465 465 465 130 131 XRAAAA MLIAAA OOOOxx +2587 5707 1 3 7 7 87 587 587 2587 2587 174 175 NVAAAA NLIAAA VVVVxx +8402 5708 0 2 2 2 2 402 402 3402 8402 4 5 ELAAAA OLIAAA AAAAxx +1055 5709 1 3 5 15 55 55 1055 1055 1055 110 111 POAAAA PLIAAA HHHHxx +3072 5710 0 0 2 12 72 72 1072 3072 3072 144 145 EOAAAA QLIAAA OOOOxx +7359 5711 1 3 9 19 59 359 1359 2359 7359 118 119 BXAAAA RLIAAA VVVVxx +6558 5712 0 2 8 18 58 558 558 1558 6558 116 117 GSAAAA SLIAAA AAAAxx +48 5713 0 0 8 8 48 48 48 48 48 96 97 WBAAAA TLIAAA HHHHxx +5382 5714 0 2 2 2 82 382 1382 382 5382 164 165 AZAAAA ULIAAA OOOOxx +947 5715 1 3 7 7 47 947 947 947 947 94 95 LKAAAA VLIAAA VVVVxx +2644 5716 0 0 4 4 44 644 644 2644 2644 88 89 SXAAAA WLIAAA AAAAxx +7516 5717 0 0 6 16 16 516 1516 2516 7516 32 33 CDAAAA XLIAAA HHHHxx +2362 5718 0 2 2 2 62 362 362 2362 2362 124 125 WMAAAA YLIAAA OOOOxx +839 5719 1 3 9 19 39 839 839 839 839 78 79 HGAAAA ZLIAAA VVVVxx +2216 5720 0 0 6 16 16 216 216 2216 2216 32 33 GHAAAA AMIAAA AAAAxx +7673 5721 1 1 3 13 73 673 1673 2673 7673 146 147 DJAAAA BMIAAA HHHHxx +8173 5722 1 1 3 13 73 173 173 3173 8173 146 147 JCAAAA CMIAAA OOOOxx +1630 5723 0 2 0 10 30 630 1630 1630 1630 60 61 SKAAAA DMIAAA VVVVxx +9057 5724 1 1 7 17 57 57 1057 4057 9057 114 115 JKAAAA EMIAAA AAAAxx +4392 5725 0 0 2 12 92 392 392 4392 4392 184 185 YMAAAA FMIAAA HHHHxx +3695 5726 1 3 5 15 95 695 1695 3695 3695 190 191 DMAAAA GMIAAA OOOOxx +5751 5727 1 3 1 11 51 751 1751 751 5751 102 103 FNAAAA HMIAAA VVVVxx +5745 5728 1 1 5 5 45 745 1745 745 5745 90 91 ZMAAAA IMIAAA AAAAxx +7945 5729 1 1 5 5 45 945 1945 2945 7945 90 91 PTAAAA JMIAAA HHHHxx +5174 5730 0 2 4 14 74 174 1174 174 5174 148 149 ARAAAA KMIAAA OOOOxx +3829 5731 1 1 9 9 29 829 1829 3829 3829 58 59 HRAAAA LMIAAA VVVVxx +3317 5732 1 1 7 17 17 317 1317 3317 3317 34 35 PXAAAA MMIAAA AAAAxx +4253 5733 1 1 3 13 53 253 253 4253 4253 106 107 PHAAAA NMIAAA HHHHxx +1291 5734 1 3 1 11 91 291 1291 1291 1291 182 183 RXAAAA OMIAAA OOOOxx +3266 5735 0 2 6 6 66 266 1266 3266 3266 132 133 QVAAAA PMIAAA VVVVxx +2939 5736 1 3 9 19 39 939 939 
2939 2939 78 79 BJAAAA QMIAAA AAAAxx +2755 5737 1 3 5 15 55 755 755 2755 2755 110 111 ZBAAAA RMIAAA HHHHxx +6844 5738 0 0 4 4 44 844 844 1844 6844 88 89 GDAAAA SMIAAA OOOOxx +8594 5739 0 2 4 14 94 594 594 3594 8594 188 189 OSAAAA TMIAAA VVVVxx +704 5740 0 0 4 4 4 704 704 704 704 8 9 CBAAAA UMIAAA AAAAxx +1681 5741 1 1 1 1 81 681 1681 1681 1681 162 163 RMAAAA VMIAAA HHHHxx +364 5742 0 0 4 4 64 364 364 364 364 128 129 AOAAAA WMIAAA OOOOxx +2928 5743 0 0 8 8 28 928 928 2928 2928 56 57 QIAAAA XMIAAA VVVVxx +117 5744 1 1 7 17 17 117 117 117 117 34 35 NEAAAA YMIAAA AAAAxx +96 5745 0 0 6 16 96 96 96 96 96 192 193 SDAAAA ZMIAAA HHHHxx +7796 5746 0 0 6 16 96 796 1796 2796 7796 192 193 WNAAAA ANIAAA OOOOxx +3101 5747 1 1 1 1 1 101 1101 3101 3101 2 3 HPAAAA BNIAAA VVVVxx +3397 5748 1 1 7 17 97 397 1397 3397 3397 194 195 RAAAAA CNIAAA AAAAxx +1605 5749 1 1 5 5 5 605 1605 1605 1605 10 11 TJAAAA DNIAAA HHHHxx +4881 5750 1 1 1 1 81 881 881 4881 4881 162 163 TFAAAA ENIAAA OOOOxx +4521 5751 1 1 1 1 21 521 521 4521 4521 42 43 XRAAAA FNIAAA VVVVxx +6430 5752 0 2 0 10 30 430 430 1430 6430 60 61 INAAAA GNIAAA AAAAxx +282 5753 0 2 2 2 82 282 282 282 282 164 165 WKAAAA HNIAAA HHHHxx +9645 5754 1 1 5 5 45 645 1645 4645 9645 90 91 ZGAAAA INIAAA OOOOxx +8946 5755 0 2 6 6 46 946 946 3946 8946 92 93 CGAAAA JNIAAA VVVVxx +5064 5756 0 0 4 4 64 64 1064 64 5064 128 129 UMAAAA KNIAAA AAAAxx +7470 5757 0 2 0 10 70 470 1470 2470 7470 140 141 IBAAAA LNIAAA HHHHxx +5886 5758 0 2 6 6 86 886 1886 886 5886 172 173 KSAAAA MNIAAA OOOOxx +6280 5759 0 0 0 0 80 280 280 1280 6280 160 161 OHAAAA NNIAAA VVVVxx +5247 5760 1 3 7 7 47 247 1247 247 5247 94 95 VTAAAA ONIAAA AAAAxx +412 5761 0 0 2 12 12 412 412 412 412 24 25 WPAAAA PNIAAA HHHHxx +5342 5762 0 2 2 2 42 342 1342 342 5342 84 85 MXAAAA QNIAAA OOOOxx +2271 5763 1 3 1 11 71 271 271 2271 2271 142 143 JJAAAA RNIAAA VVVVxx +849 5764 1 1 9 9 49 849 849 849 849 98 99 RGAAAA SNIAAA AAAAxx +1885 5765 1 1 5 5 85 885 1885 1885 1885 170 171 NUAAAA TNIAAA HHHHxx +5620 5766 0 0 0 0 20 620 1620 620 5620 40 41 EIAAAA UNIAAA OOOOxx +7079 5767 1 3 9 19 79 79 1079 2079 7079 158 159 HMAAAA VNIAAA VVVVxx +5819 5768 1 3 9 19 19 819 1819 819 5819 38 39 VPAAAA WNIAAA AAAAxx +7497 5769 1 1 7 17 97 497 1497 2497 7497 194 195 JCAAAA XNIAAA HHHHxx +5993 5770 1 1 3 13 93 993 1993 993 5993 186 187 NWAAAA YNIAAA OOOOxx +3739 5771 1 3 9 19 39 739 1739 3739 3739 78 79 VNAAAA ZNIAAA VVVVxx +6296 5772 0 0 6 16 96 296 296 1296 6296 192 193 EIAAAA AOIAAA AAAAxx +2716 5773 0 0 6 16 16 716 716 2716 2716 32 33 MAAAAA BOIAAA HHHHxx +1130 5774 0 2 0 10 30 130 1130 1130 1130 60 61 MRAAAA COIAAA OOOOxx +5593 5775 1 1 3 13 93 593 1593 593 5593 186 187 DHAAAA DOIAAA VVVVxx +6972 5776 0 0 2 12 72 972 972 1972 6972 144 145 EIAAAA EOIAAA AAAAxx +8360 5777 0 0 0 0 60 360 360 3360 8360 120 121 OJAAAA FOIAAA HHHHxx +6448 5778 0 0 8 8 48 448 448 1448 6448 96 97 AOAAAA GOIAAA OOOOxx +3689 5779 1 1 9 9 89 689 1689 3689 3689 178 179 XLAAAA HOIAAA VVVVxx +7951 5780 1 3 1 11 51 951 1951 2951 7951 102 103 VTAAAA IOIAAA AAAAxx +2974 5781 0 2 4 14 74 974 974 2974 2974 148 149 KKAAAA JOIAAA HHHHxx +6600 5782 0 0 0 0 0 600 600 1600 6600 0 1 WTAAAA KOIAAA OOOOxx +4662 5783 0 2 2 2 62 662 662 4662 4662 124 125 IXAAAA LOIAAA VVVVxx +4765 5784 1 1 5 5 65 765 765 4765 4765 130 131 HBAAAA MOIAAA AAAAxx +355 5785 1 3 5 15 55 355 355 355 355 110 111 RNAAAA NOIAAA HHHHxx +6228 5786 0 0 8 8 28 228 228 1228 6228 56 57 OFAAAA OOIAAA OOOOxx +964 5787 0 0 4 4 64 964 964 964 964 128 129 CLAAAA POIAAA VVVVxx +3082 5788 0 2 2 2 82 82 1082 3082 3082 164 165 
OOAAAA QOIAAA AAAAxx +7028 5789 0 0 8 8 28 28 1028 2028 7028 56 57 IKAAAA ROIAAA HHHHxx +4505 5790 1 1 5 5 5 505 505 4505 4505 10 11 HRAAAA SOIAAA OOOOxx +8961 5791 1 1 1 1 61 961 961 3961 8961 122 123 RGAAAA TOIAAA VVVVxx +9571 5792 1 3 1 11 71 571 1571 4571 9571 142 143 DEAAAA UOIAAA AAAAxx +9394 5793 0 2 4 14 94 394 1394 4394 9394 188 189 IXAAAA VOIAAA HHHHxx +4245 5794 1 1 5 5 45 245 245 4245 4245 90 91 HHAAAA WOIAAA OOOOxx +7560 5795 0 0 0 0 60 560 1560 2560 7560 120 121 UEAAAA XOIAAA VVVVxx +2907 5796 1 3 7 7 7 907 907 2907 2907 14 15 VHAAAA YOIAAA AAAAxx +7817 5797 1 1 7 17 17 817 1817 2817 7817 34 35 ROAAAA ZOIAAA HHHHxx +5408 5798 0 0 8 8 8 408 1408 408 5408 16 17 AAAAAA APIAAA OOOOxx +8092 5799 0 0 2 12 92 92 92 3092 8092 184 185 GZAAAA BPIAAA VVVVxx +1309 5800 1 1 9 9 9 309 1309 1309 1309 18 19 JYAAAA CPIAAA AAAAxx +6673 5801 1 1 3 13 73 673 673 1673 6673 146 147 RWAAAA DPIAAA HHHHxx +1245 5802 1 1 5 5 45 245 1245 1245 1245 90 91 XVAAAA EPIAAA OOOOxx +6790 5803 0 2 0 10 90 790 790 1790 6790 180 181 EBAAAA FPIAAA VVVVxx +8380 5804 0 0 0 0 80 380 380 3380 8380 160 161 IKAAAA GPIAAA AAAAxx +5786 5805 0 2 6 6 86 786 1786 786 5786 172 173 OOAAAA HPIAAA HHHHxx +9590 5806 0 2 0 10 90 590 1590 4590 9590 180 181 WEAAAA IPIAAA OOOOxx +5763 5807 1 3 3 3 63 763 1763 763 5763 126 127 RNAAAA JPIAAA VVVVxx +1345 5808 1 1 5 5 45 345 1345 1345 1345 90 91 TZAAAA KPIAAA AAAAxx +3480 5809 0 0 0 0 80 480 1480 3480 3480 160 161 WDAAAA LPIAAA HHHHxx +7864 5810 0 0 4 4 64 864 1864 2864 7864 128 129 MQAAAA MPIAAA OOOOxx +4853 5811 1 1 3 13 53 853 853 4853 4853 106 107 REAAAA NPIAAA VVVVxx +1445 5812 1 1 5 5 45 445 1445 1445 1445 90 91 PDAAAA OPIAAA AAAAxx +170 5813 0 2 0 10 70 170 170 170 170 140 141 OGAAAA PPIAAA HHHHxx +7348 5814 0 0 8 8 48 348 1348 2348 7348 96 97 QWAAAA QPIAAA OOOOxx +3920 5815 0 0 0 0 20 920 1920 3920 3920 40 41 UUAAAA RPIAAA VVVVxx +3307 5816 1 3 7 7 7 307 1307 3307 3307 14 15 FXAAAA SPIAAA AAAAxx +4584 5817 0 0 4 4 84 584 584 4584 4584 168 169 IUAAAA TPIAAA HHHHxx +3344 5818 0 0 4 4 44 344 1344 3344 3344 88 89 QYAAAA UPIAAA OOOOxx +4360 5819 0 0 0 0 60 360 360 4360 4360 120 121 SLAAAA VPIAAA VVVVxx +8757 5820 1 1 7 17 57 757 757 3757 8757 114 115 VYAAAA WPIAAA AAAAxx +4315 5821 1 3 5 15 15 315 315 4315 4315 30 31 ZJAAAA XPIAAA HHHHxx +5243 5822 1 3 3 3 43 243 1243 243 5243 86 87 RTAAAA YPIAAA OOOOxx +8550 5823 0 2 0 10 50 550 550 3550 8550 100 101 WQAAAA ZPIAAA VVVVxx +159 5824 1 3 9 19 59 159 159 159 159 118 119 DGAAAA AQIAAA AAAAxx +4710 5825 0 2 0 10 10 710 710 4710 4710 20 21 EZAAAA BQIAAA HHHHxx +7179 5826 1 3 9 19 79 179 1179 2179 7179 158 159 DQAAAA CQIAAA OOOOxx +2509 5827 1 1 9 9 9 509 509 2509 2509 18 19 NSAAAA DQIAAA VVVVxx +6981 5828 1 1 1 1 81 981 981 1981 6981 162 163 NIAAAA EQIAAA AAAAxx +5060 5829 0 0 0 0 60 60 1060 60 5060 120 121 QMAAAA FQIAAA HHHHxx +5601 5830 1 1 1 1 1 601 1601 601 5601 2 3 LHAAAA GQIAAA OOOOxx +703 5831 1 3 3 3 3 703 703 703 703 6 7 BBAAAA HQIAAA VVVVxx +8719 5832 1 3 9 19 19 719 719 3719 8719 38 39 JXAAAA IQIAAA AAAAxx +1570 5833 0 2 0 10 70 570 1570 1570 1570 140 141 KIAAAA JQIAAA HHHHxx +1036 5834 0 0 6 16 36 36 1036 1036 1036 72 73 WNAAAA KQIAAA OOOOxx +6703 5835 1 3 3 3 3 703 703 1703 6703 6 7 VXAAAA LQIAAA VVVVxx +252 5836 0 0 2 12 52 252 252 252 252 104 105 SJAAAA MQIAAA AAAAxx +631 5837 1 3 1 11 31 631 631 631 631 62 63 HYAAAA NQIAAA HHHHxx +5098 5838 0 2 8 18 98 98 1098 98 5098 196 197 COAAAA OQIAAA OOOOxx +8346 5839 0 2 6 6 46 346 346 3346 8346 92 93 AJAAAA PQIAAA VVVVxx +4910 5840 0 2 0 10 10 910 910 4910 4910 20 21 WGAAAA QQIAAA 
AAAAxx +559 5841 1 3 9 19 59 559 559 559 559 118 119 NVAAAA RQIAAA HHHHxx +1477 5842 1 1 7 17 77 477 1477 1477 1477 154 155 VEAAAA SQIAAA OOOOxx +5115 5843 1 3 5 15 15 115 1115 115 5115 30 31 TOAAAA TQIAAA VVVVxx +8784 5844 0 0 4 4 84 784 784 3784 8784 168 169 WZAAAA UQIAAA AAAAxx +4422 5845 0 2 2 2 22 422 422 4422 4422 44 45 COAAAA VQIAAA HHHHxx +2702 5846 0 2 2 2 2 702 702 2702 2702 4 5 YZAAAA WQIAAA OOOOxx +9599 5847 1 3 9 19 99 599 1599 4599 9599 198 199 FFAAAA XQIAAA VVVVxx +2463 5848 1 3 3 3 63 463 463 2463 2463 126 127 TQAAAA YQIAAA AAAAxx +498 5849 0 2 8 18 98 498 498 498 498 196 197 ETAAAA ZQIAAA HHHHxx +494 5850 0 2 4 14 94 494 494 494 494 188 189 ATAAAA ARIAAA OOOOxx +8632 5851 0 0 2 12 32 632 632 3632 8632 64 65 AUAAAA BRIAAA VVVVxx +3449 5852 1 1 9 9 49 449 1449 3449 3449 98 99 RCAAAA CRIAAA AAAAxx +5888 5853 0 0 8 8 88 888 1888 888 5888 176 177 MSAAAA DRIAAA HHHHxx +2211 5854 1 3 1 11 11 211 211 2211 2211 22 23 BHAAAA ERIAAA OOOOxx +2835 5855 1 3 5 15 35 835 835 2835 2835 70 71 BFAAAA FRIAAA VVVVxx +4196 5856 0 0 6 16 96 196 196 4196 4196 192 193 KFAAAA GRIAAA AAAAxx +2177 5857 1 1 7 17 77 177 177 2177 2177 154 155 TFAAAA HRIAAA HHHHxx +1959 5858 1 3 9 19 59 959 1959 1959 1959 118 119 JXAAAA IRIAAA OOOOxx +5172 5859 0 0 2 12 72 172 1172 172 5172 144 145 YQAAAA JRIAAA VVVVxx +7898 5860 0 2 8 18 98 898 1898 2898 7898 196 197 URAAAA KRIAAA AAAAxx +5729 5861 1 1 9 9 29 729 1729 729 5729 58 59 JMAAAA LRIAAA HHHHxx +469 5862 1 1 9 9 69 469 469 469 469 138 139 BSAAAA MRIAAA OOOOxx +4456 5863 0 0 6 16 56 456 456 4456 4456 112 113 KPAAAA NRIAAA VVVVxx +3578 5864 0 2 8 18 78 578 1578 3578 3578 156 157 QHAAAA ORIAAA AAAAxx +8623 5865 1 3 3 3 23 623 623 3623 8623 46 47 RTAAAA PRIAAA HHHHxx +6749 5866 1 1 9 9 49 749 749 1749 6749 98 99 PZAAAA QRIAAA OOOOxx +6735 5867 1 3 5 15 35 735 735 1735 6735 70 71 BZAAAA RRIAAA VVVVxx +5197 5868 1 1 7 17 97 197 1197 197 5197 194 195 XRAAAA SRIAAA AAAAxx +2067 5869 1 3 7 7 67 67 67 2067 2067 134 135 NBAAAA TRIAAA HHHHxx +5600 5870 0 0 0 0 0 600 1600 600 5600 0 1 KHAAAA URIAAA OOOOxx +7741 5871 1 1 1 1 41 741 1741 2741 7741 82 83 TLAAAA VRIAAA VVVVxx +9925 5872 1 1 5 5 25 925 1925 4925 9925 50 51 TRAAAA WRIAAA AAAAxx +9685 5873 1 1 5 5 85 685 1685 4685 9685 170 171 NIAAAA XRIAAA HHHHxx +7622 5874 0 2 2 2 22 622 1622 2622 7622 44 45 EHAAAA YRIAAA OOOOxx +6859 5875 1 3 9 19 59 859 859 1859 6859 118 119 VDAAAA ZRIAAA VVVVxx +3094 5876 0 2 4 14 94 94 1094 3094 3094 188 189 APAAAA ASIAAA AAAAxx +2628 5877 0 0 8 8 28 628 628 2628 2628 56 57 CXAAAA BSIAAA HHHHxx +40 5878 0 0 0 0 40 40 40 40 40 80 81 OBAAAA CSIAAA OOOOxx +1644 5879 0 0 4 4 44 644 1644 1644 1644 88 89 GLAAAA DSIAAA VVVVxx +588 5880 0 0 8 8 88 588 588 588 588 176 177 QWAAAA ESIAAA AAAAxx +7522 5881 0 2 2 2 22 522 1522 2522 7522 44 45 IDAAAA FSIAAA HHHHxx +162 5882 0 2 2 2 62 162 162 162 162 124 125 GGAAAA GSIAAA OOOOxx +3610 5883 0 2 0 10 10 610 1610 3610 3610 20 21 WIAAAA HSIAAA VVVVxx +3561 5884 1 1 1 1 61 561 1561 3561 3561 122 123 ZGAAAA ISIAAA AAAAxx +8185 5885 1 1 5 5 85 185 185 3185 8185 170 171 VCAAAA JSIAAA HHHHxx +7237 5886 1 1 7 17 37 237 1237 2237 7237 74 75 JSAAAA KSIAAA OOOOxx +4592 5887 0 0 2 12 92 592 592 4592 4592 184 185 QUAAAA LSIAAA VVVVxx +7082 5888 0 2 2 2 82 82 1082 2082 7082 164 165 KMAAAA MSIAAA AAAAxx +4719 5889 1 3 9 19 19 719 719 4719 4719 38 39 NZAAAA NSIAAA HHHHxx +3879 5890 1 3 9 19 79 879 1879 3879 3879 158 159 FTAAAA OSIAAA OOOOxx +1662 5891 0 2 2 2 62 662 1662 1662 1662 124 125 YLAAAA PSIAAA VVVVxx +3995 5892 1 3 5 15 95 995 1995 3995 3995 190 191 RXAAAA QSIAAA 
AAAAxx +5828 5893 0 0 8 8 28 828 1828 828 5828 56 57 EQAAAA RSIAAA HHHHxx +4197 5894 1 1 7 17 97 197 197 4197 4197 194 195 LFAAAA SSIAAA OOOOxx +5146 5895 0 2 6 6 46 146 1146 146 5146 92 93 YPAAAA TSIAAA VVVVxx +753 5896 1 1 3 13 53 753 753 753 753 106 107 ZCAAAA USIAAA AAAAxx +7064 5897 0 0 4 4 64 64 1064 2064 7064 128 129 SLAAAA VSIAAA HHHHxx +1312 5898 0 0 2 12 12 312 1312 1312 1312 24 25 MYAAAA WSIAAA OOOOxx +5573 5899 1 1 3 13 73 573 1573 573 5573 146 147 JGAAAA XSIAAA VVVVxx +7634 5900 0 2 4 14 34 634 1634 2634 7634 68 69 QHAAAA YSIAAA AAAAxx +2459 5901 1 3 9 19 59 459 459 2459 2459 118 119 PQAAAA ZSIAAA HHHHxx +8636 5902 0 0 6 16 36 636 636 3636 8636 72 73 EUAAAA ATIAAA OOOOxx +5318 5903 0 2 8 18 18 318 1318 318 5318 36 37 OWAAAA BTIAAA VVVVxx +1064 5904 0 0 4 4 64 64 1064 1064 1064 128 129 YOAAAA CTIAAA AAAAxx +9779 5905 1 3 9 19 79 779 1779 4779 9779 158 159 DMAAAA DTIAAA HHHHxx +6512 5906 0 0 2 12 12 512 512 1512 6512 24 25 MQAAAA ETIAAA OOOOxx +3572 5907 0 0 2 12 72 572 1572 3572 3572 144 145 KHAAAA FTIAAA VVVVxx +816 5908 0 0 6 16 16 816 816 816 816 32 33 KFAAAA GTIAAA AAAAxx +3978 5909 0 2 8 18 78 978 1978 3978 3978 156 157 AXAAAA HTIAAA HHHHxx +5390 5910 0 2 0 10 90 390 1390 390 5390 180 181 IZAAAA ITIAAA OOOOxx +4685 5911 1 1 5 5 85 685 685 4685 4685 170 171 FYAAAA JTIAAA VVVVxx +3003 5912 1 3 3 3 3 3 1003 3003 3003 6 7 NLAAAA KTIAAA AAAAxx +2638 5913 0 2 8 18 38 638 638 2638 2638 76 77 MXAAAA LTIAAA HHHHxx +9716 5914 0 0 6 16 16 716 1716 4716 9716 32 33 SJAAAA MTIAAA OOOOxx +9598 5915 0 2 8 18 98 598 1598 4598 9598 196 197 EFAAAA NTIAAA VVVVxx +9501 5916 1 1 1 1 1 501 1501 4501 9501 2 3 LBAAAA OTIAAA AAAAxx +1704 5917 0 0 4 4 4 704 1704 1704 1704 8 9 ONAAAA PTIAAA HHHHxx +8609 5918 1 1 9 9 9 609 609 3609 8609 18 19 DTAAAA QTIAAA OOOOxx +5211 5919 1 3 1 11 11 211 1211 211 5211 22 23 LSAAAA RTIAAA VVVVxx +3605 5920 1 1 5 5 5 605 1605 3605 3605 10 11 RIAAAA STIAAA AAAAxx +8730 5921 0 2 0 10 30 730 730 3730 8730 60 61 UXAAAA TTIAAA HHHHxx +4208 5922 0 0 8 8 8 208 208 4208 4208 16 17 WFAAAA UTIAAA OOOOxx +7784 5923 0 0 4 4 84 784 1784 2784 7784 168 169 KNAAAA VTIAAA VVVVxx +7501 5924 1 1 1 1 1 501 1501 2501 7501 2 3 NCAAAA WTIAAA AAAAxx +7862 5925 0 2 2 2 62 862 1862 2862 7862 124 125 KQAAAA XTIAAA HHHHxx +8922 5926 0 2 2 2 22 922 922 3922 8922 44 45 EFAAAA YTIAAA OOOOxx +3857 5927 1 1 7 17 57 857 1857 3857 3857 114 115 JSAAAA ZTIAAA VVVVxx +6393 5928 1 1 3 13 93 393 393 1393 6393 186 187 XLAAAA AUIAAA AAAAxx +506 5929 0 2 6 6 6 506 506 506 506 12 13 MTAAAA BUIAAA HHHHxx +4232 5930 0 0 2 12 32 232 232 4232 4232 64 65 UGAAAA CUIAAA OOOOxx +8991 5931 1 3 1 11 91 991 991 3991 8991 182 183 VHAAAA DUIAAA VVVVxx +8578 5932 0 2 8 18 78 578 578 3578 8578 156 157 YRAAAA EUIAAA AAAAxx +3235 5933 1 3 5 15 35 235 1235 3235 3235 70 71 LUAAAA FUIAAA HHHHxx +963 5934 1 3 3 3 63 963 963 963 963 126 127 BLAAAA GUIAAA OOOOxx +113 5935 1 1 3 13 13 113 113 113 113 26 27 JEAAAA HUIAAA VVVVxx +8234 5936 0 2 4 14 34 234 234 3234 8234 68 69 SEAAAA IUIAAA AAAAxx +2613 5937 1 1 3 13 13 613 613 2613 2613 26 27 NWAAAA JUIAAA HHHHxx +5540 5938 0 0 0 0 40 540 1540 540 5540 80 81 CFAAAA KUIAAA OOOOxx +9727 5939 1 3 7 7 27 727 1727 4727 9727 54 55 DKAAAA LUIAAA VVVVxx +2229 5940 1 1 9 9 29 229 229 2229 2229 58 59 THAAAA MUIAAA AAAAxx +6242 5941 0 2 2 2 42 242 242 1242 6242 84 85 CGAAAA NUIAAA HHHHxx +2502 5942 0 2 2 2 2 502 502 2502 2502 4 5 GSAAAA OUIAAA OOOOxx +6212 5943 0 0 2 12 12 212 212 1212 6212 24 25 YEAAAA PUIAAA VVVVxx +3495 5944 1 3 5 15 95 495 1495 3495 3495 190 191 LEAAAA QUIAAA AAAAxx +2364 5945 
0 0 4 4 64 364 364 2364 2364 128 129 YMAAAA RUIAAA HHHHxx +6777 5946 1 1 7 17 77 777 777 1777 6777 154 155 RAAAAA SUIAAA OOOOxx +9811 5947 1 3 1 11 11 811 1811 4811 9811 22 23 JNAAAA TUIAAA VVVVxx +1450 5948 0 2 0 10 50 450 1450 1450 1450 100 101 UDAAAA UUIAAA AAAAxx +5008 5949 0 0 8 8 8 8 1008 8 5008 16 17 QKAAAA VUIAAA HHHHxx +1318 5950 0 2 8 18 18 318 1318 1318 1318 36 37 SYAAAA WUIAAA OOOOxx +3373 5951 1 1 3 13 73 373 1373 3373 3373 146 147 TZAAAA XUIAAA VVVVxx +398 5952 0 2 8 18 98 398 398 398 398 196 197 IPAAAA YUIAAA AAAAxx +3804 5953 0 0 4 4 4 804 1804 3804 3804 8 9 IQAAAA ZUIAAA HHHHxx +9148 5954 0 0 8 8 48 148 1148 4148 9148 96 97 WNAAAA AVIAAA OOOOxx +4382 5955 0 2 2 2 82 382 382 4382 4382 164 165 OMAAAA BVIAAA VVVVxx +4026 5956 0 2 6 6 26 26 26 4026 4026 52 53 WYAAAA CVIAAA AAAAxx +7804 5957 0 0 4 4 4 804 1804 2804 7804 8 9 EOAAAA DVIAAA HHHHxx +6839 5958 1 3 9 19 39 839 839 1839 6839 78 79 BDAAAA EVIAAA OOOOxx +3756 5959 0 0 6 16 56 756 1756 3756 3756 112 113 MOAAAA FVIAAA VVVVxx +6734 5960 0 2 4 14 34 734 734 1734 6734 68 69 AZAAAA GVIAAA AAAAxx +2228 5961 0 0 8 8 28 228 228 2228 2228 56 57 SHAAAA HVIAAA HHHHxx +3273 5962 1 1 3 13 73 273 1273 3273 3273 146 147 XVAAAA IVIAAA OOOOxx +3708 5963 0 0 8 8 8 708 1708 3708 3708 16 17 QMAAAA JVIAAA VVVVxx +4320 5964 0 0 0 0 20 320 320 4320 4320 40 41 EKAAAA KVIAAA AAAAxx +74 5965 0 2 4 14 74 74 74 74 74 148 149 WCAAAA LVIAAA HHHHxx +2520 5966 0 0 0 0 20 520 520 2520 2520 40 41 YSAAAA MVIAAA OOOOxx +9619 5967 1 3 9 19 19 619 1619 4619 9619 38 39 ZFAAAA NVIAAA VVVVxx +1801 5968 1 1 1 1 1 801 1801 1801 1801 2 3 HRAAAA OVIAAA AAAAxx +6399 5969 1 3 9 19 99 399 399 1399 6399 198 199 DMAAAA PVIAAA HHHHxx +8313 5970 1 1 3 13 13 313 313 3313 8313 26 27 THAAAA QVIAAA OOOOxx +7003 5971 1 3 3 3 3 3 1003 2003 7003 6 7 JJAAAA RVIAAA VVVVxx +329 5972 1 1 9 9 29 329 329 329 329 58 59 RMAAAA SVIAAA AAAAxx +9090 5973 0 2 0 10 90 90 1090 4090 9090 180 181 QLAAAA TVIAAA HHHHxx +2299 5974 1 3 9 19 99 299 299 2299 2299 198 199 LKAAAA UVIAAA OOOOxx +3925 5975 1 1 5 5 25 925 1925 3925 3925 50 51 ZUAAAA VVIAAA VVVVxx +8145 5976 1 1 5 5 45 145 145 3145 8145 90 91 HBAAAA WVIAAA AAAAxx +8561 5977 1 1 1 1 61 561 561 3561 8561 122 123 HRAAAA XVIAAA HHHHxx +2797 5978 1 1 7 17 97 797 797 2797 2797 194 195 PDAAAA YVIAAA OOOOxx +1451 5979 1 3 1 11 51 451 1451 1451 1451 102 103 VDAAAA ZVIAAA VVVVxx +7977 5980 1 1 7 17 77 977 1977 2977 7977 154 155 VUAAAA AWIAAA AAAAxx +112 5981 0 0 2 12 12 112 112 112 112 24 25 IEAAAA BWIAAA HHHHxx +5265 5982 1 1 5 5 65 265 1265 265 5265 130 131 NUAAAA CWIAAA OOOOxx +3819 5983 1 3 9 19 19 819 1819 3819 3819 38 39 XQAAAA DWIAAA VVVVxx +3648 5984 0 0 8 8 48 648 1648 3648 3648 96 97 IKAAAA EWIAAA AAAAxx +6306 5985 0 2 6 6 6 306 306 1306 6306 12 13 OIAAAA FWIAAA HHHHxx +2385 5986 1 1 5 5 85 385 385 2385 2385 170 171 TNAAAA GWIAAA OOOOxx +9084 5987 0 0 4 4 84 84 1084 4084 9084 168 169 KLAAAA HWIAAA VVVVxx +4499 5988 1 3 9 19 99 499 499 4499 4499 198 199 BRAAAA IWIAAA AAAAxx +1154 5989 0 2 4 14 54 154 1154 1154 1154 108 109 KSAAAA JWIAAA HHHHxx +6800 5990 0 0 0 0 0 800 800 1800 6800 0 1 OBAAAA KWIAAA OOOOxx +8049 5991 1 1 9 9 49 49 49 3049 8049 98 99 PXAAAA LWIAAA VVVVxx +3733 5992 1 1 3 13 33 733 1733 3733 3733 66 67 PNAAAA MWIAAA AAAAxx +8496 5993 0 0 6 16 96 496 496 3496 8496 192 193 UOAAAA NWIAAA HHHHxx +9952 5994 0 0 2 12 52 952 1952 4952 9952 104 105 USAAAA OWIAAA OOOOxx +9792 5995 0 0 2 12 92 792 1792 4792 9792 184 185 QMAAAA PWIAAA VVVVxx +5081 5996 1 1 1 1 81 81 1081 81 5081 162 163 LNAAAA QWIAAA AAAAxx +7908 5997 0 0 8 8 8 908 1908 
2908 7908 16 17 ESAAAA RWIAAA HHHHxx +5398 5998 0 2 8 18 98 398 1398 398 5398 196 197 QZAAAA SWIAAA OOOOxx +8423 5999 1 3 3 3 23 423 423 3423 8423 46 47 ZLAAAA TWIAAA VVVVxx +3362 6000 0 2 2 2 62 362 1362 3362 3362 124 125 IZAAAA UWIAAA AAAAxx +7767 6001 1 3 7 7 67 767 1767 2767 7767 134 135 TMAAAA VWIAAA HHHHxx +7063 6002 1 3 3 3 63 63 1063 2063 7063 126 127 RLAAAA WWIAAA OOOOxx +8350 6003 0 2 0 10 50 350 350 3350 8350 100 101 EJAAAA XWIAAA VVVVxx +6779 6004 1 3 9 19 79 779 779 1779 6779 158 159 TAAAAA YWIAAA AAAAxx +5742 6005 0 2 2 2 42 742 1742 742 5742 84 85 WMAAAA ZWIAAA HHHHxx +9045 6006 1 1 5 5 45 45 1045 4045 9045 90 91 XJAAAA AXIAAA OOOOxx +8792 6007 0 0 2 12 92 792 792 3792 8792 184 185 EAAAAA BXIAAA VVVVxx +8160 6008 0 0 0 0 60 160 160 3160 8160 120 121 WBAAAA CXIAAA AAAAxx +3061 6009 1 1 1 1 61 61 1061 3061 3061 122 123 TNAAAA DXIAAA HHHHxx +4721 6010 1 1 1 1 21 721 721 4721 4721 42 43 PZAAAA EXIAAA OOOOxx +9817 6011 1 1 7 17 17 817 1817 4817 9817 34 35 PNAAAA FXIAAA VVVVxx +9257 6012 1 1 7 17 57 257 1257 4257 9257 114 115 BSAAAA GXIAAA AAAAxx +7779 6013 1 3 9 19 79 779 1779 2779 7779 158 159 FNAAAA HXIAAA HHHHxx +2663 6014 1 3 3 3 63 663 663 2663 2663 126 127 LYAAAA IXIAAA OOOOxx +3885 6015 1 1 5 5 85 885 1885 3885 3885 170 171 LTAAAA JXIAAA VVVVxx +9469 6016 1 1 9 9 69 469 1469 4469 9469 138 139 FAAAAA KXIAAA AAAAxx +6766 6017 0 2 6 6 66 766 766 1766 6766 132 133 GAAAAA LXIAAA HHHHxx +7173 6018 1 1 3 13 73 173 1173 2173 7173 146 147 XPAAAA MXIAAA OOOOxx +4709 6019 1 1 9 9 9 709 709 4709 4709 18 19 DZAAAA NXIAAA VVVVxx +4210 6020 0 2 0 10 10 210 210 4210 4210 20 21 YFAAAA OXIAAA AAAAxx +3715 6021 1 3 5 15 15 715 1715 3715 3715 30 31 XMAAAA PXIAAA HHHHxx +5089 6022 1 1 9 9 89 89 1089 89 5089 178 179 TNAAAA QXIAAA OOOOxx +1639 6023 1 3 9 19 39 639 1639 1639 1639 78 79 BLAAAA RXIAAA VVVVxx +5757 6024 1 1 7 17 57 757 1757 757 5757 114 115 LNAAAA SXIAAA AAAAxx +3545 6025 1 1 5 5 45 545 1545 3545 3545 90 91 JGAAAA TXIAAA HHHHxx +709 6026 1 1 9 9 9 709 709 709 709 18 19 HBAAAA UXIAAA OOOOxx +6519 6027 1 3 9 19 19 519 519 1519 6519 38 39 TQAAAA VXIAAA VVVVxx +4341 6028 1 1 1 1 41 341 341 4341 4341 82 83 ZKAAAA WXIAAA AAAAxx +2381 6029 1 1 1 1 81 381 381 2381 2381 162 163 PNAAAA XXIAAA HHHHxx +7215 6030 1 3 5 15 15 215 1215 2215 7215 30 31 NRAAAA YXIAAA OOOOxx +9323 6031 1 3 3 3 23 323 1323 4323 9323 46 47 PUAAAA ZXIAAA VVVVxx +3593 6032 1 1 3 13 93 593 1593 3593 3593 186 187 FIAAAA AYIAAA AAAAxx +3123 6033 1 3 3 3 23 123 1123 3123 3123 46 47 DQAAAA BYIAAA HHHHxx +8673 6034 1 1 3 13 73 673 673 3673 8673 146 147 PVAAAA CYIAAA OOOOxx +5094 6035 0 2 4 14 94 94 1094 94 5094 188 189 YNAAAA DYIAAA VVVVxx +6477 6036 1 1 7 17 77 477 477 1477 6477 154 155 DPAAAA EYIAAA AAAAxx +9734 6037 0 2 4 14 34 734 1734 4734 9734 68 69 KKAAAA FYIAAA HHHHxx +2998 6038 0 2 8 18 98 998 998 2998 2998 196 197 ILAAAA GYIAAA OOOOxx +7807 6039 1 3 7 7 7 807 1807 2807 7807 14 15 HOAAAA HYIAAA VVVVxx +5739 6040 1 3 9 19 39 739 1739 739 5739 78 79 TMAAAA IYIAAA AAAAxx +138 6041 0 2 8 18 38 138 138 138 138 76 77 IFAAAA JYIAAA HHHHxx +2403 6042 1 3 3 3 3 403 403 2403 2403 6 7 LOAAAA KYIAAA OOOOxx +2484 6043 0 0 4 4 84 484 484 2484 2484 168 169 ORAAAA LYIAAA VVVVxx +2805 6044 1 1 5 5 5 805 805 2805 2805 10 11 XDAAAA MYIAAA AAAAxx +5189 6045 1 1 9 9 89 189 1189 189 5189 178 179 PRAAAA NYIAAA HHHHxx +8336 6046 0 0 6 16 36 336 336 3336 8336 72 73 QIAAAA OYIAAA OOOOxx +5241 6047 1 1 1 1 41 241 1241 241 5241 82 83 PTAAAA PYIAAA VVVVxx +2612 6048 0 0 2 12 12 612 612 2612 2612 24 25 MWAAAA QYIAAA AAAAxx +2571 6049 1 3 1 11 71 
571 571 2571 2571 142 143 XUAAAA RYIAAA HHHHxx +926 6050 0 2 6 6 26 926 926 926 926 52 53 QJAAAA SYIAAA OOOOxx +337 6051 1 1 7 17 37 337 337 337 337 74 75 ZMAAAA TYIAAA VVVVxx +2821 6052 1 1 1 1 21 821 821 2821 2821 42 43 NEAAAA UYIAAA AAAAxx +2658 6053 0 2 8 18 58 658 658 2658 2658 116 117 GYAAAA VYIAAA HHHHxx +9054 6054 0 2 4 14 54 54 1054 4054 9054 108 109 GKAAAA WYIAAA OOOOxx +5492 6055 0 0 2 12 92 492 1492 492 5492 184 185 GDAAAA XYIAAA VVVVxx +7313 6056 1 1 3 13 13 313 1313 2313 7313 26 27 HVAAAA YYIAAA AAAAxx +75 6057 1 3 5 15 75 75 75 75 75 150 151 XCAAAA ZYIAAA HHHHxx +5489 6058 1 1 9 9 89 489 1489 489 5489 178 179 DDAAAA AZIAAA OOOOxx +8413 6059 1 1 3 13 13 413 413 3413 8413 26 27 PLAAAA BZIAAA VVVVxx +3693 6060 1 1 3 13 93 693 1693 3693 3693 186 187 BMAAAA CZIAAA AAAAxx +9820 6061 0 0 0 0 20 820 1820 4820 9820 40 41 SNAAAA DZIAAA HHHHxx +8157 6062 1 1 7 17 57 157 157 3157 8157 114 115 TBAAAA EZIAAA OOOOxx +4161 6063 1 1 1 1 61 161 161 4161 4161 122 123 BEAAAA FZIAAA VVVVxx +8339 6064 1 3 9 19 39 339 339 3339 8339 78 79 TIAAAA GZIAAA AAAAxx +4141 6065 1 1 1 1 41 141 141 4141 4141 82 83 HDAAAA HZIAAA HHHHxx +9001 6066 1 1 1 1 1 1 1001 4001 9001 2 3 FIAAAA IZIAAA OOOOxx +8247 6067 1 3 7 7 47 247 247 3247 8247 94 95 FFAAAA JZIAAA VVVVxx +1182 6068 0 2 2 2 82 182 1182 1182 1182 164 165 MTAAAA KZIAAA AAAAxx +9876 6069 0 0 6 16 76 876 1876 4876 9876 152 153 WPAAAA LZIAAA HHHHxx +4302 6070 0 2 2 2 2 302 302 4302 4302 4 5 MJAAAA MZIAAA OOOOxx +6674 6071 0 2 4 14 74 674 674 1674 6674 148 149 SWAAAA NZIAAA VVVVxx +4214 6072 0 2 4 14 14 214 214 4214 4214 28 29 CGAAAA OZIAAA AAAAxx +5584 6073 0 0 4 4 84 584 1584 584 5584 168 169 UGAAAA PZIAAA HHHHxx +265 6074 1 1 5 5 65 265 265 265 265 130 131 FKAAAA QZIAAA OOOOxx +9207 6075 1 3 7 7 7 207 1207 4207 9207 14 15 DQAAAA RZIAAA VVVVxx +9434 6076 0 2 4 14 34 434 1434 4434 9434 68 69 WYAAAA SZIAAA AAAAxx +2921 6077 1 1 1 1 21 921 921 2921 2921 42 43 JIAAAA TZIAAA HHHHxx +9355 6078 1 3 5 15 55 355 1355 4355 9355 110 111 VVAAAA UZIAAA OOOOxx +8538 6079 0 2 8 18 38 538 538 3538 8538 76 77 KQAAAA VZIAAA VVVVxx +4559 6080 1 3 9 19 59 559 559 4559 4559 118 119 JTAAAA WZIAAA AAAAxx +9175 6081 1 3 5 15 75 175 1175 4175 9175 150 151 XOAAAA XZIAAA HHHHxx +4489 6082 1 1 9 9 89 489 489 4489 4489 178 179 RQAAAA YZIAAA OOOOxx +1485 6083 1 1 5 5 85 485 1485 1485 1485 170 171 DFAAAA ZZIAAA VVVVxx +8853 6084 1 1 3 13 53 853 853 3853 8853 106 107 NCAAAA AAJAAA AAAAxx +9143 6085 1 3 3 3 43 143 1143 4143 9143 86 87 RNAAAA BAJAAA HHHHxx +9551 6086 1 3 1 11 51 551 1551 4551 9551 102 103 JDAAAA CAJAAA OOOOxx +49 6087 1 1 9 9 49 49 49 49 49 98 99 XBAAAA DAJAAA VVVVxx +8351 6088 1 3 1 11 51 351 351 3351 8351 102 103 FJAAAA EAJAAA AAAAxx +9748 6089 0 0 8 8 48 748 1748 4748 9748 96 97 YKAAAA FAJAAA HHHHxx +4536 6090 0 0 6 16 36 536 536 4536 4536 72 73 MSAAAA GAJAAA OOOOxx +930 6091 0 2 0 10 30 930 930 930 930 60 61 UJAAAA HAJAAA VVVVxx +2206 6092 0 2 6 6 6 206 206 2206 2206 12 13 WGAAAA IAJAAA AAAAxx +8004 6093 0 0 4 4 4 4 4 3004 8004 8 9 WVAAAA JAJAAA HHHHxx +219 6094 1 3 9 19 19 219 219 219 219 38 39 LIAAAA KAJAAA OOOOxx +2724 6095 0 0 4 4 24 724 724 2724 2724 48 49 UAAAAA LAJAAA VVVVxx +4868 6096 0 0 8 8 68 868 868 4868 4868 136 137 GFAAAA MAJAAA AAAAxx +5952 6097 0 0 2 12 52 952 1952 952 5952 104 105 YUAAAA NAJAAA HHHHxx +2094 6098 0 2 4 14 94 94 94 2094 2094 188 189 OCAAAA OAJAAA OOOOxx +5707 6099 1 3 7 7 7 707 1707 707 5707 14 15 NLAAAA PAJAAA VVVVxx +5200 6100 0 0 0 0 0 200 1200 200 5200 0 1 ASAAAA QAJAAA AAAAxx +967 6101 1 3 7 7 67 967 967 967 967 134 135 FLAAAA 
RAJAAA HHHHxx +1982 6102 0 2 2 2 82 982 1982 1982 1982 164 165 GYAAAA SAJAAA OOOOxx +3410 6103 0 2 0 10 10 410 1410 3410 3410 20 21 EBAAAA TAJAAA VVVVxx +174 6104 0 2 4 14 74 174 174 174 174 148 149 SGAAAA UAJAAA AAAAxx +9217 6105 1 1 7 17 17 217 1217 4217 9217 34 35 NQAAAA VAJAAA HHHHxx +9103 6106 1 3 3 3 3 103 1103 4103 9103 6 7 DMAAAA WAJAAA OOOOxx +868 6107 0 0 8 8 68 868 868 868 868 136 137 KHAAAA XAJAAA VVVVxx +8261 6108 1 1 1 1 61 261 261 3261 8261 122 123 TFAAAA YAJAAA AAAAxx +2720 6109 0 0 0 0 20 720 720 2720 2720 40 41 QAAAAA ZAJAAA HHHHxx +2999 6110 1 3 9 19 99 999 999 2999 2999 198 199 JLAAAA ABJAAA OOOOxx +769 6111 1 1 9 9 69 769 769 769 769 138 139 PDAAAA BBJAAA VVVVxx +4533 6112 1 1 3 13 33 533 533 4533 4533 66 67 JSAAAA CBJAAA AAAAxx +2030 6113 0 2 0 10 30 30 30 2030 2030 60 61 CAAAAA DBJAAA HHHHxx +5824 6114 0 0 4 4 24 824 1824 824 5824 48 49 AQAAAA EBJAAA OOOOxx +2328 6115 0 0 8 8 28 328 328 2328 2328 56 57 OLAAAA FBJAAA VVVVxx +9970 6116 0 2 0 10 70 970 1970 4970 9970 140 141 MTAAAA GBJAAA AAAAxx +3192 6117 0 0 2 12 92 192 1192 3192 3192 184 185 USAAAA HBJAAA HHHHxx +3387 6118 1 3 7 7 87 387 1387 3387 3387 174 175 HAAAAA IBJAAA OOOOxx +1936 6119 0 0 6 16 36 936 1936 1936 1936 72 73 MWAAAA JBJAAA VVVVxx +6934 6120 0 2 4 14 34 934 934 1934 6934 68 69 SGAAAA KBJAAA AAAAxx +5615 6121 1 3 5 15 15 615 1615 615 5615 30 31 ZHAAAA LBJAAA HHHHxx +2241 6122 1 1 1 1 41 241 241 2241 2241 82 83 FIAAAA MBJAAA OOOOxx +1842 6123 0 2 2 2 42 842 1842 1842 1842 84 85 WSAAAA NBJAAA VVVVxx +8044 6124 0 0 4 4 44 44 44 3044 8044 88 89 KXAAAA OBJAAA AAAAxx +8902 6125 0 2 2 2 2 902 902 3902 8902 4 5 KEAAAA PBJAAA HHHHxx +4519 6126 1 3 9 19 19 519 519 4519 4519 38 39 VRAAAA QBJAAA OOOOxx +492 6127 0 0 2 12 92 492 492 492 492 184 185 YSAAAA RBJAAA VVVVxx +2694 6128 0 2 4 14 94 694 694 2694 2694 188 189 QZAAAA SBJAAA AAAAxx +5861 6129 1 1 1 1 61 861 1861 861 5861 122 123 LRAAAA TBJAAA HHHHxx +2104 6130 0 0 4 4 4 104 104 2104 2104 8 9 YCAAAA UBJAAA OOOOxx +5376 6131 0 0 6 16 76 376 1376 376 5376 152 153 UYAAAA VBJAAA VVVVxx +3147 6132 1 3 7 7 47 147 1147 3147 3147 94 95 BRAAAA WBJAAA AAAAxx +9880 6133 0 0 0 0 80 880 1880 4880 9880 160 161 AQAAAA XBJAAA HHHHxx +6171 6134 1 3 1 11 71 171 171 1171 6171 142 143 JDAAAA YBJAAA OOOOxx +1850 6135 0 2 0 10 50 850 1850 1850 1850 100 101 ETAAAA ZBJAAA VVVVxx +1775 6136 1 3 5 15 75 775 1775 1775 1775 150 151 HQAAAA ACJAAA AAAAxx +9261 6137 1 1 1 1 61 261 1261 4261 9261 122 123 FSAAAA BCJAAA HHHHxx +9648 6138 0 0 8 8 48 648 1648 4648 9648 96 97 CHAAAA CCJAAA OOOOxx +7846 6139 0 2 6 6 46 846 1846 2846 7846 92 93 UPAAAA DCJAAA VVVVxx +1446 6140 0 2 6 6 46 446 1446 1446 1446 92 93 QDAAAA ECJAAA AAAAxx +3139 6141 1 3 9 19 39 139 1139 3139 3139 78 79 TQAAAA FCJAAA HHHHxx +6142 6142 0 2 2 2 42 142 142 1142 6142 84 85 GCAAAA GCJAAA OOOOxx +5812 6143 0 0 2 12 12 812 1812 812 5812 24 25 OPAAAA HCJAAA VVVVxx +6728 6144 0 0 8 8 28 728 728 1728 6728 56 57 UYAAAA ICJAAA AAAAxx +4428 6145 0 0 8 8 28 428 428 4428 4428 56 57 IOAAAA JCJAAA HHHHxx +502 6146 0 2 2 2 2 502 502 502 502 4 5 ITAAAA KCJAAA OOOOxx +2363 6147 1 3 3 3 63 363 363 2363 2363 126 127 XMAAAA LCJAAA VVVVxx +3808 6148 0 0 8 8 8 808 1808 3808 3808 16 17 MQAAAA MCJAAA AAAAxx +1010 6149 0 2 0 10 10 10 1010 1010 1010 20 21 WMAAAA NCJAAA HHHHxx +9565 6150 1 1 5 5 65 565 1565 4565 9565 130 131 XDAAAA OCJAAA OOOOxx +1587 6151 1 3 7 7 87 587 1587 1587 1587 174 175 BJAAAA PCJAAA VVVVxx +1474 6152 0 2 4 14 74 474 1474 1474 1474 148 149 SEAAAA QCJAAA AAAAxx +6215 6153 1 3 5 15 15 215 215 1215 6215 30 31 BFAAAA RCJAAA HHHHxx 
+2395 6154 1 3 5 15 95 395 395 2395 2395 190 191 DOAAAA SCJAAA OOOOxx +8753 6155 1 1 3 13 53 753 753 3753 8753 106 107 RYAAAA TCJAAA VVVVxx +2446 6156 0 2 6 6 46 446 446 2446 2446 92 93 CQAAAA UCJAAA AAAAxx +60 6157 0 0 0 0 60 60 60 60 60 120 121 ICAAAA VCJAAA HHHHxx +982 6158 0 2 2 2 82 982 982 982 982 164 165 ULAAAA WCJAAA OOOOxx +6489 6159 1 1 9 9 89 489 489 1489 6489 178 179 PPAAAA XCJAAA VVVVxx +5334 6160 0 2 4 14 34 334 1334 334 5334 68 69 EXAAAA YCJAAA AAAAxx +8540 6161 0 0 0 0 40 540 540 3540 8540 80 81 MQAAAA ZCJAAA HHHHxx +490 6162 0 2 0 10 90 490 490 490 490 180 181 WSAAAA ADJAAA OOOOxx +6763 6163 1 3 3 3 63 763 763 1763 6763 126 127 DAAAAA BDJAAA VVVVxx +8273 6164 1 1 3 13 73 273 273 3273 8273 146 147 FGAAAA CDJAAA AAAAxx +8327 6165 1 3 7 7 27 327 327 3327 8327 54 55 HIAAAA DDJAAA HHHHxx +8541 6166 1 1 1 1 41 541 541 3541 8541 82 83 NQAAAA EDJAAA OOOOxx +3459 6167 1 3 9 19 59 459 1459 3459 3459 118 119 BDAAAA FDJAAA VVVVxx +5557 6168 1 1 7 17 57 557 1557 557 5557 114 115 TFAAAA GDJAAA AAAAxx +158 6169 0 2 8 18 58 158 158 158 158 116 117 CGAAAA HDJAAA HHHHxx +1741 6170 1 1 1 1 41 741 1741 1741 1741 82 83 ZOAAAA IDJAAA OOOOxx +8385 6171 1 1 5 5 85 385 385 3385 8385 170 171 NKAAAA JDJAAA VVVVxx +617 6172 1 1 7 17 17 617 617 617 617 34 35 TXAAAA KDJAAA AAAAxx +3560 6173 0 0 0 0 60 560 1560 3560 3560 120 121 YGAAAA LDJAAA HHHHxx +5216 6174 0 0 6 16 16 216 1216 216 5216 32 33 QSAAAA MDJAAA OOOOxx +8443 6175 1 3 3 3 43 443 443 3443 8443 86 87 TMAAAA NDJAAA VVVVxx +2700 6176 0 0 0 0 0 700 700 2700 2700 0 1 WZAAAA ODJAAA AAAAxx +3661 6177 1 1 1 1 61 661 1661 3661 3661 122 123 VKAAAA PDJAAA HHHHxx +4875 6178 1 3 5 15 75 875 875 4875 4875 150 151 NFAAAA QDJAAA OOOOxx +6721 6179 1 1 1 1 21 721 721 1721 6721 42 43 NYAAAA RDJAAA VVVVxx +3659 6180 1 3 9 19 59 659 1659 3659 3659 118 119 TKAAAA SDJAAA AAAAxx +8944 6181 0 0 4 4 44 944 944 3944 8944 88 89 AGAAAA TDJAAA HHHHxx +9133 6182 1 1 3 13 33 133 1133 4133 9133 66 67 HNAAAA UDJAAA OOOOxx +9882 6183 0 2 2 2 82 882 1882 4882 9882 164 165 CQAAAA VDJAAA VVVVxx +2102 6184 0 2 2 2 2 102 102 2102 2102 4 5 WCAAAA WDJAAA AAAAxx +9445 6185 1 1 5 5 45 445 1445 4445 9445 90 91 HZAAAA XDJAAA HHHHxx +5559 6186 1 3 9 19 59 559 1559 559 5559 118 119 VFAAAA YDJAAA OOOOxx +6096 6187 0 0 6 16 96 96 96 1096 6096 192 193 MAAAAA ZDJAAA VVVVxx +9336 6188 0 0 6 16 36 336 1336 4336 9336 72 73 CVAAAA AEJAAA AAAAxx +2162 6189 0 2 2 2 62 162 162 2162 2162 124 125 EFAAAA BEJAAA HHHHxx +7459 6190 1 3 9 19 59 459 1459 2459 7459 118 119 XAAAAA CEJAAA OOOOxx +3248 6191 0 0 8 8 48 248 1248 3248 3248 96 97 YUAAAA DEJAAA VVVVxx +9539 6192 1 3 9 19 39 539 1539 4539 9539 78 79 XCAAAA EEJAAA AAAAxx +4449 6193 1 1 9 9 49 449 449 4449 4449 98 99 DPAAAA FEJAAA HHHHxx +2809 6194 1 1 9 9 9 809 809 2809 2809 18 19 BEAAAA GEJAAA OOOOxx +7058 6195 0 2 8 18 58 58 1058 2058 7058 116 117 MLAAAA HEJAAA VVVVxx +3512 6196 0 0 2 12 12 512 1512 3512 3512 24 25 CFAAAA IEJAAA AAAAxx +2802 6197 0 2 2 2 2 802 802 2802 2802 4 5 UDAAAA JEJAAA HHHHxx +6289 6198 1 1 9 9 89 289 289 1289 6289 178 179 XHAAAA KEJAAA OOOOxx +1947 6199 1 3 7 7 47 947 1947 1947 1947 94 95 XWAAAA LEJAAA VVVVxx +9572 6200 0 0 2 12 72 572 1572 4572 9572 144 145 EEAAAA MEJAAA AAAAxx +2356 6201 0 0 6 16 56 356 356 2356 2356 112 113 QMAAAA NEJAAA HHHHxx +3039 6202 1 3 9 19 39 39 1039 3039 3039 78 79 XMAAAA OEJAAA OOOOxx +9452 6203 0 0 2 12 52 452 1452 4452 9452 104 105 OZAAAA PEJAAA VVVVxx +6328 6204 0 0 8 8 28 328 328 1328 6328 56 57 KJAAAA QEJAAA AAAAxx +7661 6205 1 1 1 1 61 661 1661 2661 7661 122 123 RIAAAA REJAAA HHHHxx +2566 
6206 0 2 6 6 66 566 566 2566 2566 132 133 SUAAAA SEJAAA OOOOxx +6095 6207 1 3 5 15 95 95 95 1095 6095 190 191 LAAAAA TEJAAA VVVVxx +6367 6208 1 3 7 7 67 367 367 1367 6367 134 135 XKAAAA UEJAAA AAAAxx +3368 6209 0 0 8 8 68 368 1368 3368 3368 136 137 OZAAAA VEJAAA HHHHxx +5567 6210 1 3 7 7 67 567 1567 567 5567 134 135 DGAAAA WEJAAA OOOOxx +9834 6211 0 2 4 14 34 834 1834 4834 9834 68 69 GOAAAA XEJAAA VVVVxx +9695 6212 1 3 5 15 95 695 1695 4695 9695 190 191 XIAAAA YEJAAA AAAAxx +7291 6213 1 3 1 11 91 291 1291 2291 7291 182 183 LUAAAA ZEJAAA HHHHxx +4806 6214 0 2 6 6 6 806 806 4806 4806 12 13 WCAAAA AFJAAA OOOOxx +2000 6215 0 0 0 0 0 0 0 2000 2000 0 1 YYAAAA BFJAAA VVVVxx +6817 6216 1 1 7 17 17 817 817 1817 6817 34 35 FCAAAA CFJAAA AAAAxx +8487 6217 1 3 7 7 87 487 487 3487 8487 174 175 LOAAAA DFJAAA HHHHxx +3245 6218 1 1 5 5 45 245 1245 3245 3245 90 91 VUAAAA EFJAAA OOOOxx +632 6219 0 0 2 12 32 632 632 632 632 64 65 IYAAAA FFJAAA VVVVxx +8067 6220 1 3 7 7 67 67 67 3067 8067 134 135 HYAAAA GFJAAA AAAAxx +7140 6221 0 0 0 0 40 140 1140 2140 7140 80 81 QOAAAA HFJAAA HHHHxx +6802 6222 0 2 2 2 2 802 802 1802 6802 4 5 QBAAAA IFJAAA OOOOxx +3980 6223 0 0 0 0 80 980 1980 3980 3980 160 161 CXAAAA JFJAAA VVVVxx +1321 6224 1 1 1 1 21 321 1321 1321 1321 42 43 VYAAAA KFJAAA AAAAxx +2273 6225 1 1 3 13 73 273 273 2273 2273 146 147 LJAAAA LFJAAA HHHHxx +6787 6226 1 3 7 7 87 787 787 1787 6787 174 175 BBAAAA MFJAAA OOOOxx +9480 6227 0 0 0 0 80 480 1480 4480 9480 160 161 QAAAAA NFJAAA VVVVxx +9404 6228 0 0 4 4 4 404 1404 4404 9404 8 9 SXAAAA OFJAAA AAAAxx +3914 6229 0 2 4 14 14 914 1914 3914 3914 28 29 OUAAAA PFJAAA HHHHxx +5507 6230 1 3 7 7 7 507 1507 507 5507 14 15 VDAAAA QFJAAA OOOOxx +1813 6231 1 1 3 13 13 813 1813 1813 1813 26 27 TRAAAA RFJAAA VVVVxx +1999 6232 1 3 9 19 99 999 1999 1999 1999 198 199 XYAAAA SFJAAA AAAAxx +3848 6233 0 0 8 8 48 848 1848 3848 3848 96 97 ASAAAA TFJAAA HHHHxx +9693 6234 1 1 3 13 93 693 1693 4693 9693 186 187 VIAAAA UFJAAA OOOOxx +1353 6235 1 1 3 13 53 353 1353 1353 1353 106 107 BAAAAA VFJAAA VVVVxx +7218 6236 0 2 8 18 18 218 1218 2218 7218 36 37 QRAAAA WFJAAA AAAAxx +8223 6237 1 3 3 3 23 223 223 3223 8223 46 47 HEAAAA XFJAAA HHHHxx +9982 6238 0 2 2 2 82 982 1982 4982 9982 164 165 YTAAAA YFJAAA OOOOxx +8799 6239 1 3 9 19 99 799 799 3799 8799 198 199 LAAAAA ZFJAAA VVVVxx +8929 6240 1 1 9 9 29 929 929 3929 8929 58 59 LFAAAA AGJAAA AAAAxx +4626 6241 0 2 6 6 26 626 626 4626 4626 52 53 YVAAAA BGJAAA HHHHxx +7958 6242 0 2 8 18 58 958 1958 2958 7958 116 117 CUAAAA CGJAAA OOOOxx +3743 6243 1 3 3 3 43 743 1743 3743 3743 86 87 ZNAAAA DGJAAA VVVVxx +8165 6244 1 1 5 5 65 165 165 3165 8165 130 131 BCAAAA EGJAAA AAAAxx +7899 6245 1 3 9 19 99 899 1899 2899 7899 198 199 VRAAAA FGJAAA HHHHxx +8698 6246 0 2 8 18 98 698 698 3698 8698 196 197 OWAAAA GGJAAA OOOOxx +9270 6247 0 2 0 10 70 270 1270 4270 9270 140 141 OSAAAA HGJAAA VVVVxx +6348 6248 0 0 8 8 48 348 348 1348 6348 96 97 EKAAAA IGJAAA AAAAxx +6999 6249 1 3 9 19 99 999 999 1999 6999 198 199 FJAAAA JGJAAA HHHHxx +8467 6250 1 3 7 7 67 467 467 3467 8467 134 135 RNAAAA KGJAAA OOOOxx +3907 6251 1 3 7 7 7 907 1907 3907 3907 14 15 HUAAAA LGJAAA VVVVxx +4738 6252 0 2 8 18 38 738 738 4738 4738 76 77 GAAAAA MGJAAA AAAAxx +248 6253 0 0 8 8 48 248 248 248 248 96 97 OJAAAA NGJAAA HHHHxx +8769 6254 1 1 9 9 69 769 769 3769 8769 138 139 HZAAAA OGJAAA OOOOxx +9922 6255 0 2 2 2 22 922 1922 4922 9922 44 45 QRAAAA PGJAAA VVVVxx +778 6256 0 2 8 18 78 778 778 778 778 156 157 YDAAAA QGJAAA AAAAxx +1233 6257 1 1 3 13 33 233 1233 1233 1233 66 67 LVAAAA RGJAAA HHHHxx 
+1183 6258 1 3 3 3 83 183 1183 1183 1183 166 167 NTAAAA SGJAAA OOOOxx +2838 6259 0 2 8 18 38 838 838 2838 2838 76 77 EFAAAA TGJAAA VVVVxx +3096 6260 0 0 6 16 96 96 1096 3096 3096 192 193 CPAAAA UGJAAA AAAAxx +8566 6261 0 2 6 6 66 566 566 3566 8566 132 133 MRAAAA VGJAAA HHHHxx +7635 6262 1 3 5 15 35 635 1635 2635 7635 70 71 RHAAAA WGJAAA OOOOxx +5428 6263 0 0 8 8 28 428 1428 428 5428 56 57 UAAAAA XGJAAA VVVVxx +7430 6264 0 2 0 10 30 430 1430 2430 7430 60 61 UZAAAA YGJAAA AAAAxx +7210 6265 0 2 0 10 10 210 1210 2210 7210 20 21 IRAAAA ZGJAAA HHHHxx +4485 6266 1 1 5 5 85 485 485 4485 4485 170 171 NQAAAA AHJAAA OOOOxx +9623 6267 1 3 3 3 23 623 1623 4623 9623 46 47 DGAAAA BHJAAA VVVVxx +3670 6268 0 2 0 10 70 670 1670 3670 3670 140 141 ELAAAA CHJAAA AAAAxx +1575 6269 1 3 5 15 75 575 1575 1575 1575 150 151 PIAAAA DHJAAA HHHHxx +5874 6270 0 2 4 14 74 874 1874 874 5874 148 149 YRAAAA EHJAAA OOOOxx +673 6271 1 1 3 13 73 673 673 673 673 146 147 XZAAAA FHJAAA VVVVxx +9712 6272 0 0 2 12 12 712 1712 4712 9712 24 25 OJAAAA GHJAAA AAAAxx +7729 6273 1 1 9 9 29 729 1729 2729 7729 58 59 HLAAAA HHJAAA HHHHxx +4318 6274 0 2 8 18 18 318 318 4318 4318 36 37 CKAAAA IHJAAA OOOOxx +4143 6275 1 3 3 3 43 143 143 4143 4143 86 87 JDAAAA JHJAAA VVVVxx +4932 6276 0 0 2 12 32 932 932 4932 4932 64 65 SHAAAA KHJAAA AAAAxx +5835 6277 1 3 5 15 35 835 1835 835 5835 70 71 LQAAAA LHJAAA HHHHxx +4966 6278 0 2 6 6 66 966 966 4966 4966 132 133 AJAAAA MHJAAA OOOOxx +6711 6279 1 3 1 11 11 711 711 1711 6711 22 23 DYAAAA NHJAAA VVVVxx +3990 6280 0 2 0 10 90 990 1990 3990 3990 180 181 MXAAAA OHJAAA AAAAxx +990 6281 0 2 0 10 90 990 990 990 990 180 181 CMAAAA PHJAAA HHHHxx +220 6282 0 0 0 0 20 220 220 220 220 40 41 MIAAAA QHJAAA OOOOxx +5693 6283 1 1 3 13 93 693 1693 693 5693 186 187 ZKAAAA RHJAAA VVVVxx +3662 6284 0 2 2 2 62 662 1662 3662 3662 124 125 WKAAAA SHJAAA AAAAxx +7844 6285 0 0 4 4 44 844 1844 2844 7844 88 89 SPAAAA THJAAA HHHHxx +5515 6286 1 3 5 15 15 515 1515 515 5515 30 31 DEAAAA UHJAAA OOOOxx +5551 6287 1 3 1 11 51 551 1551 551 5551 102 103 NFAAAA VHJAAA VVVVxx +2358 6288 0 2 8 18 58 358 358 2358 2358 116 117 SMAAAA WHJAAA AAAAxx +8977 6289 1 1 7 17 77 977 977 3977 8977 154 155 HHAAAA XHJAAA HHHHxx +7040 6290 0 0 0 0 40 40 1040 2040 7040 80 81 UKAAAA YHJAAA OOOOxx +105 6291 1 1 5 5 5 105 105 105 105 10 11 BEAAAA ZHJAAA VVVVxx +4496 6292 0 0 6 16 96 496 496 4496 4496 192 193 YQAAAA AIJAAA AAAAxx +2254 6293 0 2 4 14 54 254 254 2254 2254 108 109 SIAAAA BIJAAA HHHHxx +411 6294 1 3 1 11 11 411 411 411 411 22 23 VPAAAA CIJAAA OOOOxx +2373 6295 1 1 3 13 73 373 373 2373 2373 146 147 HNAAAA DIJAAA VVVVxx +3477 6296 1 1 7 17 77 477 1477 3477 3477 154 155 TDAAAA EIJAAA AAAAxx +8964 6297 0 0 4 4 64 964 964 3964 8964 128 129 UGAAAA FIJAAA HHHHxx +8471 6298 1 3 1 11 71 471 471 3471 8471 142 143 VNAAAA GIJAAA OOOOxx +5776 6299 0 0 6 16 76 776 1776 776 5776 152 153 EOAAAA HIJAAA VVVVxx +9921 6300 1 1 1 1 21 921 1921 4921 9921 42 43 PRAAAA IIJAAA AAAAxx +7816 6301 0 0 6 16 16 816 1816 2816 7816 32 33 QOAAAA JIJAAA HHHHxx +2439 6302 1 3 9 19 39 439 439 2439 2439 78 79 VPAAAA KIJAAA OOOOxx +9298 6303 0 2 8 18 98 298 1298 4298 9298 196 197 QTAAAA LIJAAA VVVVxx +9424 6304 0 0 4 4 24 424 1424 4424 9424 48 49 MYAAAA MIJAAA AAAAxx +3252 6305 0 0 2 12 52 252 1252 3252 3252 104 105 CVAAAA NIJAAA HHHHxx +1401 6306 1 1 1 1 1 401 1401 1401 1401 2 3 XBAAAA OIJAAA OOOOxx +9632 6307 0 0 2 12 32 632 1632 4632 9632 64 65 MGAAAA PIJAAA VVVVxx +370 6308 0 2 0 10 70 370 370 370 370 140 141 GOAAAA QIJAAA AAAAxx +728 6309 0 0 8 8 28 728 728 728 728 56 57 ACAAAA 
RIJAAA HHHHxx +2888 6310 0 0 8 8 88 888 888 2888 2888 176 177 CHAAAA SIJAAA OOOOxx +1441 6311 1 1 1 1 41 441 1441 1441 1441 82 83 LDAAAA TIJAAA VVVVxx +8308 6312 0 0 8 8 8 308 308 3308 8308 16 17 OHAAAA UIJAAA AAAAxx +2165 6313 1 1 5 5 65 165 165 2165 2165 130 131 HFAAAA VIJAAA HHHHxx +6359 6314 1 3 9 19 59 359 359 1359 6359 118 119 PKAAAA WIJAAA OOOOxx +9637 6315 1 1 7 17 37 637 1637 4637 9637 74 75 RGAAAA XIJAAA VVVVxx +5208 6316 0 0 8 8 8 208 1208 208 5208 16 17 ISAAAA YIJAAA AAAAxx +4705 6317 1 1 5 5 5 705 705 4705 4705 10 11 ZYAAAA ZIJAAA HHHHxx +2341 6318 1 1 1 1 41 341 341 2341 2341 82 83 BMAAAA AJJAAA OOOOxx +8539 6319 1 3 9 19 39 539 539 3539 8539 78 79 LQAAAA BJJAAA VVVVxx +7528 6320 0 0 8 8 28 528 1528 2528 7528 56 57 ODAAAA CJJAAA AAAAxx +7969 6321 1 1 9 9 69 969 1969 2969 7969 138 139 NUAAAA DJJAAA HHHHxx +6381 6322 1 1 1 1 81 381 381 1381 6381 162 163 LLAAAA EJJAAA OOOOxx +4906 6323 0 2 6 6 6 906 906 4906 4906 12 13 SGAAAA FJJAAA VVVVxx +8697 6324 1 1 7 17 97 697 697 3697 8697 194 195 NWAAAA GJJAAA AAAAxx +6301 6325 1 1 1 1 1 301 301 1301 6301 2 3 JIAAAA HJJAAA HHHHxx +7554 6326 0 2 4 14 54 554 1554 2554 7554 108 109 OEAAAA IJJAAA OOOOxx +5107 6327 1 3 7 7 7 107 1107 107 5107 14 15 LOAAAA JJJAAA VVVVxx +5046 6328 0 2 6 6 46 46 1046 46 5046 92 93 CMAAAA KJJAAA AAAAxx +4063 6329 1 3 3 3 63 63 63 4063 4063 126 127 HAAAAA LJJAAA HHHHxx +7580 6330 0 0 0 0 80 580 1580 2580 7580 160 161 OFAAAA MJJAAA OOOOxx +2245 6331 1 1 5 5 45 245 245 2245 2245 90 91 JIAAAA NJJAAA VVVVxx +3711 6332 1 3 1 11 11 711 1711 3711 3711 22 23 TMAAAA OJJAAA AAAAxx +3220 6333 0 0 0 0 20 220 1220 3220 3220 40 41 WTAAAA PJJAAA HHHHxx +6463 6334 1 3 3 3 63 463 463 1463 6463 126 127 POAAAA QJJAAA OOOOxx +8196 6335 0 0 6 16 96 196 196 3196 8196 192 193 GDAAAA RJJAAA VVVVxx +9875 6336 1 3 5 15 75 875 1875 4875 9875 150 151 VPAAAA SJJAAA AAAAxx +1333 6337 1 1 3 13 33 333 1333 1333 1333 66 67 HZAAAA TJJAAA HHHHxx +7880 6338 0 0 0 0 80 880 1880 2880 7880 160 161 CRAAAA UJJAAA OOOOxx +2322 6339 0 2 2 2 22 322 322 2322 2322 44 45 ILAAAA VJJAAA VVVVxx +2163 6340 1 3 3 3 63 163 163 2163 2163 126 127 FFAAAA WJJAAA AAAAxx +421 6341 1 1 1 1 21 421 421 421 421 42 43 FQAAAA XJJAAA HHHHxx +2042 6342 0 2 2 2 42 42 42 2042 2042 84 85 OAAAAA YJJAAA OOOOxx +1424 6343 0 0 4 4 24 424 1424 1424 1424 48 49 UCAAAA ZJJAAA VVVVxx +7870 6344 0 2 0 10 70 870 1870 2870 7870 140 141 SQAAAA AKJAAA AAAAxx +2653 6345 1 1 3 13 53 653 653 2653 2653 106 107 BYAAAA BKJAAA HHHHxx +4216 6346 0 0 6 16 16 216 216 4216 4216 32 33 EGAAAA CKJAAA OOOOxx +1515 6347 1 3 5 15 15 515 1515 1515 1515 30 31 HGAAAA DKJAAA VVVVxx +7860 6348 0 0 0 0 60 860 1860 2860 7860 120 121 IQAAAA EKJAAA AAAAxx +2984 6349 0 0 4 4 84 984 984 2984 2984 168 169 UKAAAA FKJAAA HHHHxx +6269 6350 1 1 9 9 69 269 269 1269 6269 138 139 DHAAAA GKJAAA OOOOxx +2609 6351 1 1 9 9 9 609 609 2609 2609 18 19 JWAAAA HKJAAA VVVVxx +3671 6352 1 3 1 11 71 671 1671 3671 3671 142 143 FLAAAA IKJAAA AAAAxx +4544 6353 0 0 4 4 44 544 544 4544 4544 88 89 USAAAA JKJAAA HHHHxx +4668 6354 0 0 8 8 68 668 668 4668 4668 136 137 OXAAAA KKJAAA OOOOxx +2565 6355 1 1 5 5 65 565 565 2565 2565 130 131 RUAAAA LKJAAA VVVVxx +3126 6356 0 2 6 6 26 126 1126 3126 3126 52 53 GQAAAA MKJAAA AAAAxx +7573 6357 1 1 3 13 73 573 1573 2573 7573 146 147 HFAAAA NKJAAA HHHHxx +1476 6358 0 0 6 16 76 476 1476 1476 1476 152 153 UEAAAA OKJAAA OOOOxx +2146 6359 0 2 6 6 46 146 146 2146 2146 92 93 OEAAAA PKJAAA VVVVxx +9990 6360 0 2 0 10 90 990 1990 4990 9990 180 181 GUAAAA QKJAAA AAAAxx +2530 6361 0 2 0 10 30 530 530 2530 2530 60 61 ITAAAA 
RKJAAA HHHHxx +9288 6362 0 0 8 8 88 288 1288 4288 9288 176 177 GTAAAA SKJAAA OOOOxx +9755 6363 1 3 5 15 55 755 1755 4755 9755 110 111 FLAAAA TKJAAA VVVVxx +5305 6364 1 1 5 5 5 305 1305 305 5305 10 11 BWAAAA UKJAAA AAAAxx +2495 6365 1 3 5 15 95 495 495 2495 2495 190 191 ZRAAAA VKJAAA HHHHxx +5443 6366 1 3 3 3 43 443 1443 443 5443 86 87 JBAAAA WKJAAA OOOOxx +1930 6367 0 2 0 10 30 930 1930 1930 1930 60 61 GWAAAA XKJAAA VVVVxx +9134 6368 0 2 4 14 34 134 1134 4134 9134 68 69 INAAAA YKJAAA AAAAxx +2844 6369 0 0 4 4 44 844 844 2844 2844 88 89 KFAAAA ZKJAAA HHHHxx +896 6370 0 0 6 16 96 896 896 896 896 192 193 MIAAAA ALJAAA OOOOxx +1330 6371 0 2 0 10 30 330 1330 1330 1330 60 61 EZAAAA BLJAAA VVVVxx +8980 6372 0 0 0 0 80 980 980 3980 8980 160 161 KHAAAA CLJAAA AAAAxx +5940 6373 0 0 0 0 40 940 1940 940 5940 80 81 MUAAAA DLJAAA HHHHxx +6494 6374 0 2 4 14 94 494 494 1494 6494 188 189 UPAAAA ELJAAA OOOOxx +165 6375 1 1 5 5 65 165 165 165 165 130 131 JGAAAA FLJAAA VVVVxx +2510 6376 0 2 0 10 10 510 510 2510 2510 20 21 OSAAAA GLJAAA AAAAxx +9950 6377 0 2 0 10 50 950 1950 4950 9950 100 101 SSAAAA HLJAAA HHHHxx +3854 6378 0 2 4 14 54 854 1854 3854 3854 108 109 GSAAAA ILJAAA OOOOxx +7493 6379 1 1 3 13 93 493 1493 2493 7493 186 187 FCAAAA JLJAAA VVVVxx +4124 6380 0 0 4 4 24 124 124 4124 4124 48 49 QCAAAA KLJAAA AAAAxx +8563 6381 1 3 3 3 63 563 563 3563 8563 126 127 JRAAAA LLJAAA HHHHxx +8735 6382 1 3 5 15 35 735 735 3735 8735 70 71 ZXAAAA MLJAAA OOOOxx +9046 6383 0 2 6 6 46 46 1046 4046 9046 92 93 YJAAAA NLJAAA VVVVxx +1754 6384 0 2 4 14 54 754 1754 1754 1754 108 109 MPAAAA OLJAAA AAAAxx +6954 6385 0 2 4 14 54 954 954 1954 6954 108 109 MHAAAA PLJAAA HHHHxx +4953 6386 1 1 3 13 53 953 953 4953 4953 106 107 NIAAAA QLJAAA OOOOxx +8142 6387 0 2 2 2 42 142 142 3142 8142 84 85 EBAAAA RLJAAA VVVVxx +9661 6388 1 1 1 1 61 661 1661 4661 9661 122 123 PHAAAA SLJAAA AAAAxx +6415 6389 1 3 5 15 15 415 415 1415 6415 30 31 TMAAAA TLJAAA HHHHxx +5782 6390 0 2 2 2 82 782 1782 782 5782 164 165 KOAAAA ULJAAA OOOOxx +7721 6391 1 1 1 1 21 721 1721 2721 7721 42 43 ZKAAAA VLJAAA VVVVxx +580 6392 0 0 0 0 80 580 580 580 580 160 161 IWAAAA WLJAAA AAAAxx +3784 6393 0 0 4 4 84 784 1784 3784 3784 168 169 OPAAAA XLJAAA HHHHxx +9810 6394 0 2 0 10 10 810 1810 4810 9810 20 21 INAAAA YLJAAA OOOOxx +8488 6395 0 0 8 8 88 488 488 3488 8488 176 177 MOAAAA ZLJAAA VVVVxx +6214 6396 0 2 4 14 14 214 214 1214 6214 28 29 AFAAAA AMJAAA AAAAxx +9433 6397 1 1 3 13 33 433 1433 4433 9433 66 67 VYAAAA BMJAAA HHHHxx +9959 6398 1 3 9 19 59 959 1959 4959 9959 118 119 BTAAAA CMJAAA OOOOxx +554 6399 0 2 4 14 54 554 554 554 554 108 109 IVAAAA DMJAAA VVVVxx +6646 6400 0 2 6 6 46 646 646 1646 6646 92 93 QVAAAA EMJAAA AAAAxx +1138 6401 0 2 8 18 38 138 1138 1138 1138 76 77 URAAAA FMJAAA HHHHxx +9331 6402 1 3 1 11 31 331 1331 4331 9331 62 63 XUAAAA GMJAAA OOOOxx +7331 6403 1 3 1 11 31 331 1331 2331 7331 62 63 ZVAAAA HMJAAA VVVVxx +3482 6404 0 2 2 2 82 482 1482 3482 3482 164 165 YDAAAA IMJAAA AAAAxx +3795 6405 1 3 5 15 95 795 1795 3795 3795 190 191 ZPAAAA JMJAAA HHHHxx +2441 6406 1 1 1 1 41 441 441 2441 2441 82 83 XPAAAA KMJAAA OOOOxx +5229 6407 1 1 9 9 29 229 1229 229 5229 58 59 DTAAAA LMJAAA VVVVxx +7012 6408 0 0 2 12 12 12 1012 2012 7012 24 25 SJAAAA MMJAAA AAAAxx +7036 6409 0 0 6 16 36 36 1036 2036 7036 72 73 QKAAAA NMJAAA HHHHxx +8243 6410 1 3 3 3 43 243 243 3243 8243 86 87 BFAAAA OMJAAA OOOOxx +9320 6411 0 0 0 0 20 320 1320 4320 9320 40 41 MUAAAA PMJAAA VVVVxx +4693 6412 1 1 3 13 93 693 693 4693 4693 186 187 NYAAAA QMJAAA AAAAxx +6741 6413 1 1 1 1 41 741 741 1741 6741 
82 83 HZAAAA RMJAAA HHHHxx +2997 6414 1 1 7 17 97 997 997 2997 2997 194 195 HLAAAA SMJAAA OOOOxx +4838 6415 0 2 8 18 38 838 838 4838 4838 76 77 CEAAAA TMJAAA VVVVxx +6945 6416 1 1 5 5 45 945 945 1945 6945 90 91 DHAAAA UMJAAA AAAAxx +8253 6417 1 1 3 13 53 253 253 3253 8253 106 107 LFAAAA VMJAAA HHHHxx +8989 6418 1 1 9 9 89 989 989 3989 8989 178 179 THAAAA WMJAAA OOOOxx +2640 6419 0 0 0 0 40 640 640 2640 2640 80 81 OXAAAA XMJAAA VVVVxx +5647 6420 1 3 7 7 47 647 1647 647 5647 94 95 FJAAAA YMJAAA AAAAxx +7186 6421 0 2 6 6 86 186 1186 2186 7186 172 173 KQAAAA ZMJAAA HHHHxx +3278 6422 0 2 8 18 78 278 1278 3278 3278 156 157 CWAAAA ANJAAA OOOOxx +8546 6423 0 2 6 6 46 546 546 3546 8546 92 93 SQAAAA BNJAAA VVVVxx +8297 6424 1 1 7 17 97 297 297 3297 8297 194 195 DHAAAA CNJAAA AAAAxx +9534 6425 0 2 4 14 34 534 1534 4534 9534 68 69 SCAAAA DNJAAA HHHHxx +9618 6426 0 2 8 18 18 618 1618 4618 9618 36 37 YFAAAA ENJAAA OOOOxx +8839 6427 1 3 9 19 39 839 839 3839 8839 78 79 ZBAAAA FNJAAA VVVVxx +7605 6428 1 1 5 5 5 605 1605 2605 7605 10 11 NGAAAA GNJAAA AAAAxx +6421 6429 1 1 1 1 21 421 421 1421 6421 42 43 ZMAAAA HNJAAA HHHHxx +3582 6430 0 2 2 2 82 582 1582 3582 3582 164 165 UHAAAA INJAAA OOOOxx +485 6431 1 1 5 5 85 485 485 485 485 170 171 RSAAAA JNJAAA VVVVxx +1925 6432 1 1 5 5 25 925 1925 1925 1925 50 51 BWAAAA KNJAAA AAAAxx +4296 6433 0 0 6 16 96 296 296 4296 4296 192 193 GJAAAA LNJAAA HHHHxx +8874 6434 0 2 4 14 74 874 874 3874 8874 148 149 IDAAAA MNJAAA OOOOxx +1443 6435 1 3 3 3 43 443 1443 1443 1443 86 87 NDAAAA NNJAAA VVVVxx +4239 6436 1 3 9 19 39 239 239 4239 4239 78 79 BHAAAA ONJAAA AAAAxx +9760 6437 0 0 0 0 60 760 1760 4760 9760 120 121 KLAAAA PNJAAA HHHHxx +136 6438 0 0 6 16 36 136 136 136 136 72 73 GFAAAA QNJAAA OOOOxx +6472 6439 0 0 2 12 72 472 472 1472 6472 144 145 YOAAAA RNJAAA VVVVxx +4896 6440 0 0 6 16 96 896 896 4896 4896 192 193 IGAAAA SNJAAA AAAAxx +9028 6441 0 0 8 8 28 28 1028 4028 9028 56 57 GJAAAA TNJAAA HHHHxx +8354 6442 0 2 4 14 54 354 354 3354 8354 108 109 IJAAAA UNJAAA OOOOxx +8648 6443 0 0 8 8 48 648 648 3648 8648 96 97 QUAAAA VNJAAA VVVVxx +918 6444 0 2 8 18 18 918 918 918 918 36 37 IJAAAA WNJAAA AAAAxx +6606 6445 0 2 6 6 6 606 606 1606 6606 12 13 CUAAAA XNJAAA HHHHxx +2462 6446 0 2 2 2 62 462 462 2462 2462 124 125 SQAAAA YNJAAA OOOOxx +7536 6447 0 0 6 16 36 536 1536 2536 7536 72 73 WDAAAA ZNJAAA VVVVxx +1700 6448 0 0 0 0 0 700 1700 1700 1700 0 1 KNAAAA AOJAAA AAAAxx +6740 6449 0 0 0 0 40 740 740 1740 6740 80 81 GZAAAA BOJAAA HHHHxx +28 6450 0 0 8 8 28 28 28 28 28 56 57 CBAAAA COJAAA OOOOxx +6044 6451 0 0 4 4 44 44 44 1044 6044 88 89 MYAAAA DOJAAA VVVVxx +5053 6452 1 1 3 13 53 53 1053 53 5053 106 107 JMAAAA EOJAAA AAAAxx +4832 6453 0 0 2 12 32 832 832 4832 4832 64 65 WDAAAA FOJAAA HHHHxx +9145 6454 1 1 5 5 45 145 1145 4145 9145 90 91 TNAAAA GOJAAA OOOOxx +5482 6455 0 2 2 2 82 482 1482 482 5482 164 165 WCAAAA HOJAAA VVVVxx +7644 6456 0 0 4 4 44 644 1644 2644 7644 88 89 AIAAAA IOJAAA AAAAxx +2128 6457 0 0 8 8 28 128 128 2128 2128 56 57 WDAAAA JOJAAA HHHHxx +6583 6458 1 3 3 3 83 583 583 1583 6583 166 167 FTAAAA KOJAAA OOOOxx +4224 6459 0 0 4 4 24 224 224 4224 4224 48 49 MGAAAA LOJAAA VVVVxx +5253 6460 1 1 3 13 53 253 1253 253 5253 106 107 BUAAAA MOJAAA AAAAxx +8219 6461 1 3 9 19 19 219 219 3219 8219 38 39 DEAAAA NOJAAA HHHHxx +8113 6462 1 1 3 13 13 113 113 3113 8113 26 27 BAAAAA OOJAAA OOOOxx +3616 6463 0 0 6 16 16 616 1616 3616 3616 32 33 CJAAAA POJAAA VVVVxx +1361 6464 1 1 1 1 61 361 1361 1361 1361 122 123 JAAAAA QOJAAA AAAAxx +949 6465 1 1 9 9 49 949 949 949 949 98 99 NKAAAA ROJAAA 
HHHHxx
+8582 6466 0 2 2 2 82 582 582 3582 8582 164 165 CSAAAA SOJAAA OOOOxx
[... ~1,040 further data rows (unique2 values 6467 through 7507) elided: each is one 16-column whitespace-separated record in the style of PostgreSQL's tenk.data regression fixture, one record per `+` diff line ...]
+7802 7508 0 2 2 2 2 802 1802
2802 7802 4 5 COAAAA UCLAAA AAAAxx +901 7509 1 1 1 1 1 901 901 901 901 2 3 RIAAAA VCLAAA HHHHxx +6168 7510 0 0 8 8 68 168 168 1168 6168 136 137 GDAAAA WCLAAA OOOOxx +2950 7511 0 2 0 10 50 950 950 2950 2950 100 101 MJAAAA XCLAAA VVVVxx +5393 7512 1 1 3 13 93 393 1393 393 5393 186 187 LZAAAA YCLAAA AAAAxx +3585 7513 1 1 5 5 85 585 1585 3585 3585 170 171 XHAAAA ZCLAAA HHHHxx +9392 7514 0 0 2 12 92 392 1392 4392 9392 184 185 GXAAAA ADLAAA OOOOxx +8314 7515 0 2 4 14 14 314 314 3314 8314 28 29 UHAAAA BDLAAA VVVVxx +9972 7516 0 0 2 12 72 972 1972 4972 9972 144 145 OTAAAA CDLAAA AAAAxx +9130 7517 0 2 0 10 30 130 1130 4130 9130 60 61 ENAAAA DDLAAA HHHHxx +975 7518 1 3 5 15 75 975 975 975 975 150 151 NLAAAA EDLAAA OOOOxx +5720 7519 0 0 0 0 20 720 1720 720 5720 40 41 AMAAAA FDLAAA VVVVxx +3769 7520 1 1 9 9 69 769 1769 3769 3769 138 139 ZOAAAA GDLAAA AAAAxx +5303 7521 1 3 3 3 3 303 1303 303 5303 6 7 ZVAAAA HDLAAA HHHHxx +6564 7522 0 0 4 4 64 564 564 1564 6564 128 129 MSAAAA IDLAAA OOOOxx +7855 7523 1 3 5 15 55 855 1855 2855 7855 110 111 DQAAAA JDLAAA VVVVxx +8153 7524 1 1 3 13 53 153 153 3153 8153 106 107 PBAAAA KDLAAA AAAAxx +2292 7525 0 0 2 12 92 292 292 2292 2292 184 185 EKAAAA LDLAAA HHHHxx +3156 7526 0 0 6 16 56 156 1156 3156 3156 112 113 KRAAAA MDLAAA OOOOxx +6580 7527 0 0 0 0 80 580 580 1580 6580 160 161 CTAAAA NDLAAA VVVVxx +5324 7528 0 0 4 4 24 324 1324 324 5324 48 49 UWAAAA ODLAAA AAAAxx +8871 7529 1 3 1 11 71 871 871 3871 8871 142 143 FDAAAA PDLAAA HHHHxx +2543 7530 1 3 3 3 43 543 543 2543 2543 86 87 VTAAAA QDLAAA OOOOxx +7857 7531 1 1 7 17 57 857 1857 2857 7857 114 115 FQAAAA RDLAAA VVVVxx +4084 7532 0 0 4 4 84 84 84 4084 4084 168 169 CBAAAA SDLAAA AAAAxx +9887 7533 1 3 7 7 87 887 1887 4887 9887 174 175 HQAAAA TDLAAA HHHHxx +6940 7534 0 0 0 0 40 940 940 1940 6940 80 81 YGAAAA UDLAAA OOOOxx +3415 7535 1 3 5 15 15 415 1415 3415 3415 30 31 JBAAAA VDLAAA VVVVxx +5012 7536 0 0 2 12 12 12 1012 12 5012 24 25 UKAAAA WDLAAA AAAAxx +3187 7537 1 3 7 7 87 187 1187 3187 3187 174 175 PSAAAA XDLAAA HHHHxx +8556 7538 0 0 6 16 56 556 556 3556 8556 112 113 CRAAAA YDLAAA OOOOxx +7966 7539 0 2 6 6 66 966 1966 2966 7966 132 133 KUAAAA ZDLAAA VVVVxx +7481 7540 1 1 1 1 81 481 1481 2481 7481 162 163 TBAAAA AELAAA AAAAxx +8524 7541 0 0 4 4 24 524 524 3524 8524 48 49 WPAAAA BELAAA HHHHxx +3021 7542 1 1 1 1 21 21 1021 3021 3021 42 43 FMAAAA CELAAA OOOOxx +6045 7543 1 1 5 5 45 45 45 1045 6045 90 91 NYAAAA DELAAA VVVVxx +8022 7544 0 2 2 2 22 22 22 3022 8022 44 45 OWAAAA EELAAA AAAAxx +3626 7545 0 2 6 6 26 626 1626 3626 3626 52 53 MJAAAA FELAAA HHHHxx +1030 7546 0 2 0 10 30 30 1030 1030 1030 60 61 QNAAAA GELAAA OOOOxx +8903 7547 1 3 3 3 3 903 903 3903 8903 6 7 LEAAAA HELAAA VVVVxx +7488 7548 0 0 8 8 88 488 1488 2488 7488 176 177 ACAAAA IELAAA AAAAxx +9293 7549 1 1 3 13 93 293 1293 4293 9293 186 187 LTAAAA JELAAA HHHHxx +4586 7550 0 2 6 6 86 586 586 4586 4586 172 173 KUAAAA KELAAA OOOOxx +9282 7551 0 2 2 2 82 282 1282 4282 9282 164 165 ATAAAA LELAAA VVVVxx +1948 7552 0 0 8 8 48 948 1948 1948 1948 96 97 YWAAAA MELAAA AAAAxx +2534 7553 0 2 4 14 34 534 534 2534 2534 68 69 MTAAAA NELAAA HHHHxx +1150 7554 0 2 0 10 50 150 1150 1150 1150 100 101 GSAAAA OELAAA OOOOxx +4931 7555 1 3 1 11 31 931 931 4931 4931 62 63 RHAAAA PELAAA VVVVxx +2866 7556 0 2 6 6 66 866 866 2866 2866 132 133 GGAAAA QELAAA AAAAxx +6172 7557 0 0 2 12 72 172 172 1172 6172 144 145 KDAAAA RELAAA HHHHxx +4819 7558 1 3 9 19 19 819 819 4819 4819 38 39 JDAAAA SELAAA OOOOxx +569 7559 1 1 9 9 69 569 569 569 569 138 139 XVAAAA TELAAA VVVVxx +1146 7560 0 2 6 6 46 146 
1146 1146 1146 92 93 CSAAAA UELAAA AAAAxx +3062 7561 0 2 2 2 62 62 1062 3062 3062 124 125 UNAAAA VELAAA HHHHxx +7690 7562 0 2 0 10 90 690 1690 2690 7690 180 181 UJAAAA WELAAA OOOOxx +8611 7563 1 3 1 11 11 611 611 3611 8611 22 23 FTAAAA XELAAA VVVVxx +1142 7564 0 2 2 2 42 142 1142 1142 1142 84 85 YRAAAA YELAAA AAAAxx +1193 7565 1 1 3 13 93 193 1193 1193 1193 186 187 XTAAAA ZELAAA HHHHxx +2507 7566 1 3 7 7 7 507 507 2507 2507 14 15 LSAAAA AFLAAA OOOOxx +1043 7567 1 3 3 3 43 43 1043 1043 1043 86 87 DOAAAA BFLAAA VVVVxx +7472 7568 0 0 2 12 72 472 1472 2472 7472 144 145 KBAAAA CFLAAA AAAAxx +1817 7569 1 1 7 17 17 817 1817 1817 1817 34 35 XRAAAA DFLAAA HHHHxx +3868 7570 0 0 8 8 68 868 1868 3868 3868 136 137 USAAAA EFLAAA OOOOxx +9031 7571 1 3 1 11 31 31 1031 4031 9031 62 63 JJAAAA FFLAAA VVVVxx +7254 7572 0 2 4 14 54 254 1254 2254 7254 108 109 ATAAAA GFLAAA AAAAxx +5030 7573 0 2 0 10 30 30 1030 30 5030 60 61 MLAAAA HFLAAA HHHHxx +6594 7574 0 2 4 14 94 594 594 1594 6594 188 189 QTAAAA IFLAAA OOOOxx +6862 7575 0 2 2 2 62 862 862 1862 6862 124 125 YDAAAA JFLAAA VVVVxx +1994 7576 0 2 4 14 94 994 1994 1994 1994 188 189 SYAAAA KFLAAA AAAAxx +9017 7577 1 1 7 17 17 17 1017 4017 9017 34 35 VIAAAA LFLAAA HHHHxx +5716 7578 0 0 6 16 16 716 1716 716 5716 32 33 WLAAAA MFLAAA OOOOxx +1900 7579 0 0 0 0 0 900 1900 1900 1900 0 1 CVAAAA NFLAAA VVVVxx +120 7580 0 0 0 0 20 120 120 120 120 40 41 QEAAAA OFLAAA AAAAxx +9003 7581 1 3 3 3 3 3 1003 4003 9003 6 7 HIAAAA PFLAAA HHHHxx +4178 7582 0 2 8 18 78 178 178 4178 4178 156 157 SEAAAA QFLAAA OOOOxx +8777 7583 1 1 7 17 77 777 777 3777 8777 154 155 PZAAAA RFLAAA VVVVxx +3653 7584 1 1 3 13 53 653 1653 3653 3653 106 107 NKAAAA SFLAAA AAAAxx +1137 7585 1 1 7 17 37 137 1137 1137 1137 74 75 TRAAAA TFLAAA HHHHxx +6362 7586 0 2 2 2 62 362 362 1362 6362 124 125 SKAAAA UFLAAA OOOOxx +8537 7587 1 1 7 17 37 537 537 3537 8537 74 75 JQAAAA VFLAAA VVVVxx +1590 7588 0 2 0 10 90 590 1590 1590 1590 180 181 EJAAAA WFLAAA AAAAxx +374 7589 0 2 4 14 74 374 374 374 374 148 149 KOAAAA XFLAAA HHHHxx +2597 7590 1 1 7 17 97 597 597 2597 2597 194 195 XVAAAA YFLAAA OOOOxx +8071 7591 1 3 1 11 71 71 71 3071 8071 142 143 LYAAAA ZFLAAA VVVVxx +9009 7592 1 1 9 9 9 9 1009 4009 9009 18 19 NIAAAA AGLAAA AAAAxx +1978 7593 0 2 8 18 78 978 1978 1978 1978 156 157 CYAAAA BGLAAA HHHHxx +1541 7594 1 1 1 1 41 541 1541 1541 1541 82 83 HHAAAA CGLAAA OOOOxx +4998 7595 0 2 8 18 98 998 998 4998 4998 196 197 GKAAAA DGLAAA VVVVxx +1649 7596 1 1 9 9 49 649 1649 1649 1649 98 99 LLAAAA EGLAAA AAAAxx +5426 7597 0 2 6 6 26 426 1426 426 5426 52 53 SAAAAA FGLAAA HHHHxx +1492 7598 0 0 2 12 92 492 1492 1492 1492 184 185 KFAAAA GGLAAA OOOOxx +9622 7599 0 2 2 2 22 622 1622 4622 9622 44 45 CGAAAA HGLAAA VVVVxx +701 7600 1 1 1 1 1 701 701 701 701 2 3 ZAAAAA IGLAAA AAAAxx +2781 7601 1 1 1 1 81 781 781 2781 2781 162 163 ZCAAAA JGLAAA HHHHxx +3982 7602 0 2 2 2 82 982 1982 3982 3982 164 165 EXAAAA KGLAAA OOOOxx +7259 7603 1 3 9 19 59 259 1259 2259 7259 118 119 FTAAAA LGLAAA VVVVxx +9868 7604 0 0 8 8 68 868 1868 4868 9868 136 137 OPAAAA MGLAAA AAAAxx +564 7605 0 0 4 4 64 564 564 564 564 128 129 SVAAAA NGLAAA HHHHxx +6315 7606 1 3 5 15 15 315 315 1315 6315 30 31 XIAAAA OGLAAA OOOOxx +9092 7607 0 0 2 12 92 92 1092 4092 9092 184 185 SLAAAA PGLAAA VVVVxx +8237 7608 1 1 7 17 37 237 237 3237 8237 74 75 VEAAAA QGLAAA AAAAxx +1513 7609 1 1 3 13 13 513 1513 1513 1513 26 27 FGAAAA RGLAAA HHHHxx +1922 7610 0 2 2 2 22 922 1922 1922 1922 44 45 YVAAAA SGLAAA OOOOxx +5396 7611 0 0 6 16 96 396 1396 396 5396 192 193 OZAAAA TGLAAA VVVVxx +2485 7612 1 1 5 5 
85 485 485 2485 2485 170 171 PRAAAA UGLAAA AAAAxx +5774 7613 0 2 4 14 74 774 1774 774 5774 148 149 COAAAA VGLAAA HHHHxx +3983 7614 1 3 3 3 83 983 1983 3983 3983 166 167 FXAAAA WGLAAA OOOOxx +221 7615 1 1 1 1 21 221 221 221 221 42 43 NIAAAA XGLAAA VVVVxx +8662 7616 0 2 2 2 62 662 662 3662 8662 124 125 EVAAAA YGLAAA AAAAxx +2456 7617 0 0 6 16 56 456 456 2456 2456 112 113 MQAAAA ZGLAAA HHHHxx +9736 7618 0 0 6 16 36 736 1736 4736 9736 72 73 MKAAAA AHLAAA OOOOxx +8936 7619 0 0 6 16 36 936 936 3936 8936 72 73 SFAAAA BHLAAA VVVVxx +5395 7620 1 3 5 15 95 395 1395 395 5395 190 191 NZAAAA CHLAAA AAAAxx +9523 7621 1 3 3 3 23 523 1523 4523 9523 46 47 HCAAAA DHLAAA HHHHxx +6980 7622 0 0 0 0 80 980 980 1980 6980 160 161 MIAAAA EHLAAA OOOOxx +2091 7623 1 3 1 11 91 91 91 2091 2091 182 183 LCAAAA FHLAAA VVVVxx +6807 7624 1 3 7 7 7 807 807 1807 6807 14 15 VBAAAA GHLAAA AAAAxx +8818 7625 0 2 8 18 18 818 818 3818 8818 36 37 EBAAAA HHLAAA HHHHxx +5298 7626 0 2 8 18 98 298 1298 298 5298 196 197 UVAAAA IHLAAA OOOOxx +1726 7627 0 2 6 6 26 726 1726 1726 1726 52 53 KOAAAA JHLAAA VVVVxx +3878 7628 0 2 8 18 78 878 1878 3878 3878 156 157 ETAAAA KHLAAA AAAAxx +8700 7629 0 0 0 0 0 700 700 3700 8700 0 1 QWAAAA LHLAAA HHHHxx +5201 7630 1 1 1 1 1 201 1201 201 5201 2 3 BSAAAA MHLAAA OOOOxx +3936 7631 0 0 6 16 36 936 1936 3936 3936 72 73 KVAAAA NHLAAA VVVVxx +776 7632 0 0 6 16 76 776 776 776 776 152 153 WDAAAA OHLAAA AAAAxx +5302 7633 0 2 2 2 2 302 1302 302 5302 4 5 YVAAAA PHLAAA HHHHxx +3595 7634 1 3 5 15 95 595 1595 3595 3595 190 191 HIAAAA QHLAAA OOOOxx +9061 7635 1 1 1 1 61 61 1061 4061 9061 122 123 NKAAAA RHLAAA VVVVxx +6261 7636 1 1 1 1 61 261 261 1261 6261 122 123 VGAAAA SHLAAA AAAAxx +8878 7637 0 2 8 18 78 878 878 3878 8878 156 157 MDAAAA THLAAA HHHHxx +3312 7638 0 0 2 12 12 312 1312 3312 3312 24 25 KXAAAA UHLAAA OOOOxx +9422 7639 0 2 2 2 22 422 1422 4422 9422 44 45 KYAAAA VHLAAA VVVVxx +7321 7640 1 1 1 1 21 321 1321 2321 7321 42 43 PVAAAA WHLAAA AAAAxx +3813 7641 1 1 3 13 13 813 1813 3813 3813 26 27 RQAAAA XHLAAA HHHHxx +5848 7642 0 0 8 8 48 848 1848 848 5848 96 97 YQAAAA YHLAAA OOOOxx +3535 7643 1 3 5 15 35 535 1535 3535 3535 70 71 ZFAAAA ZHLAAA VVVVxx +1040 7644 0 0 0 0 40 40 1040 1040 1040 80 81 AOAAAA AILAAA AAAAxx +8572 7645 0 0 2 12 72 572 572 3572 8572 144 145 SRAAAA BILAAA HHHHxx +5435 7646 1 3 5 15 35 435 1435 435 5435 70 71 BBAAAA CILAAA OOOOxx +8199 7647 1 3 9 19 99 199 199 3199 8199 198 199 JDAAAA DILAAA VVVVxx +8775 7648 1 3 5 15 75 775 775 3775 8775 150 151 NZAAAA EILAAA AAAAxx +7722 7649 0 2 2 2 22 722 1722 2722 7722 44 45 ALAAAA FILAAA HHHHxx +3549 7650 1 1 9 9 49 549 1549 3549 3549 98 99 NGAAAA GILAAA OOOOxx +2578 7651 0 2 8 18 78 578 578 2578 2578 156 157 EVAAAA HILAAA VVVVxx +1695 7652 1 3 5 15 95 695 1695 1695 1695 190 191 FNAAAA IILAAA AAAAxx +1902 7653 0 2 2 2 2 902 1902 1902 1902 4 5 EVAAAA JILAAA HHHHxx +6058 7654 0 2 8 18 58 58 58 1058 6058 116 117 AZAAAA KILAAA OOOOxx +6591 7655 1 3 1 11 91 591 591 1591 6591 182 183 NTAAAA LILAAA VVVVxx +7962 7656 0 2 2 2 62 962 1962 2962 7962 124 125 GUAAAA MILAAA AAAAxx +5612 7657 0 0 2 12 12 612 1612 612 5612 24 25 WHAAAA NILAAA HHHHxx +3341 7658 1 1 1 1 41 341 1341 3341 3341 82 83 NYAAAA OILAAA OOOOxx +5460 7659 0 0 0 0 60 460 1460 460 5460 120 121 ACAAAA PILAAA VVVVxx +2368 7660 0 0 8 8 68 368 368 2368 2368 136 137 CNAAAA QILAAA AAAAxx +8646 7661 0 2 6 6 46 646 646 3646 8646 92 93 OUAAAA RILAAA HHHHxx +4987 7662 1 3 7 7 87 987 987 4987 4987 174 175 VJAAAA SILAAA OOOOxx +9018 7663 0 2 8 18 18 18 1018 4018 9018 36 37 WIAAAA TILAAA VVVVxx +8685 7664 1 1 5 
5 85 685 685 3685 8685 170 171 BWAAAA UILAAA AAAAxx +694 7665 0 2 4 14 94 694 694 694 694 188 189 SAAAAA VILAAA HHHHxx +2012 7666 0 0 2 12 12 12 12 2012 2012 24 25 KZAAAA WILAAA OOOOxx +2417 7667 1 1 7 17 17 417 417 2417 2417 34 35 ZOAAAA XILAAA VVVVxx +4022 7668 0 2 2 2 22 22 22 4022 4022 44 45 SYAAAA YILAAA AAAAxx +5935 7669 1 3 5 15 35 935 1935 935 5935 70 71 HUAAAA ZILAAA HHHHxx +1656 7670 0 0 6 16 56 656 1656 1656 1656 112 113 SLAAAA AJLAAA OOOOxx +6195 7671 1 3 5 15 95 195 195 1195 6195 190 191 HEAAAA BJLAAA VVVVxx +3057 7672 1 1 7 17 57 57 1057 3057 3057 114 115 PNAAAA CJLAAA AAAAxx +2852 7673 0 0 2 12 52 852 852 2852 2852 104 105 SFAAAA DJLAAA HHHHxx +4634 7674 0 2 4 14 34 634 634 4634 4634 68 69 GWAAAA EJLAAA OOOOxx +1689 7675 1 1 9 9 89 689 1689 1689 1689 178 179 ZMAAAA FJLAAA VVVVxx +4102 7676 0 2 2 2 2 102 102 4102 4102 4 5 UBAAAA GJLAAA AAAAxx +3287 7677 1 3 7 7 87 287 1287 3287 3287 174 175 LWAAAA HJLAAA HHHHxx +5246 7678 0 2 6 6 46 246 1246 246 5246 92 93 UTAAAA IJLAAA OOOOxx +7450 7679 0 2 0 10 50 450 1450 2450 7450 100 101 OAAAAA JJLAAA VVVVxx +6548 7680 0 0 8 8 48 548 548 1548 6548 96 97 WRAAAA KJLAAA AAAAxx +379 7681 1 3 9 19 79 379 379 379 379 158 159 POAAAA LJLAAA HHHHxx +7435 7682 1 3 5 15 35 435 1435 2435 7435 70 71 ZZAAAA MJLAAA OOOOxx +2041 7683 1 1 1 1 41 41 41 2041 2041 82 83 NAAAAA NJLAAA VVVVxx +8462 7684 0 2 2 2 62 462 462 3462 8462 124 125 MNAAAA OJLAAA AAAAxx +9076 7685 0 0 6 16 76 76 1076 4076 9076 152 153 CLAAAA PJLAAA HHHHxx +761 7686 1 1 1 1 61 761 761 761 761 122 123 HDAAAA QJLAAA OOOOxx +795 7687 1 3 5 15 95 795 795 795 795 190 191 PEAAAA RJLAAA VVVVxx +1671 7688 1 3 1 11 71 671 1671 1671 1671 142 143 HMAAAA SJLAAA AAAAxx +695 7689 1 3 5 15 95 695 695 695 695 190 191 TAAAAA TJLAAA HHHHxx +4981 7690 1 1 1 1 81 981 981 4981 4981 162 163 PJAAAA UJLAAA OOOOxx +1211 7691 1 3 1 11 11 211 1211 1211 1211 22 23 PUAAAA VJLAAA VVVVxx +5914 7692 0 2 4 14 14 914 1914 914 5914 28 29 MTAAAA WJLAAA AAAAxx +9356 7693 0 0 6 16 56 356 1356 4356 9356 112 113 WVAAAA XJLAAA HHHHxx +1500 7694 0 0 0 0 0 500 1500 1500 1500 0 1 SFAAAA YJLAAA OOOOxx +3353 7695 1 1 3 13 53 353 1353 3353 3353 106 107 ZYAAAA ZJLAAA VVVVxx +1060 7696 0 0 0 0 60 60 1060 1060 1060 120 121 UOAAAA AKLAAA AAAAxx +7910 7697 0 2 0 10 10 910 1910 2910 7910 20 21 GSAAAA BKLAAA HHHHxx +1329 7698 1 1 9 9 29 329 1329 1329 1329 58 59 DZAAAA CKLAAA OOOOxx +6011 7699 1 3 1 11 11 11 11 1011 6011 22 23 FXAAAA DKLAAA VVVVxx +7146 7700 0 2 6 6 46 146 1146 2146 7146 92 93 WOAAAA EKLAAA AAAAxx +4602 7701 0 2 2 2 2 602 602 4602 4602 4 5 AVAAAA FKLAAA HHHHxx +6751 7702 1 3 1 11 51 751 751 1751 6751 102 103 RZAAAA GKLAAA OOOOxx +2666 7703 0 2 6 6 66 666 666 2666 2666 132 133 OYAAAA HKLAAA VVVVxx +2785 7704 1 1 5 5 85 785 785 2785 2785 170 171 DDAAAA IKLAAA AAAAxx +5851 7705 1 3 1 11 51 851 1851 851 5851 102 103 BRAAAA JKLAAA HHHHxx +2435 7706 1 3 5 15 35 435 435 2435 2435 70 71 RPAAAA KKLAAA OOOOxx +7429 7707 1 1 9 9 29 429 1429 2429 7429 58 59 TZAAAA LKLAAA VVVVxx +4241 7708 1 1 1 1 41 241 241 4241 4241 82 83 DHAAAA MKLAAA AAAAxx +5691 7709 1 3 1 11 91 691 1691 691 5691 182 183 XKAAAA NKLAAA HHHHxx +7731 7710 1 3 1 11 31 731 1731 2731 7731 62 63 JLAAAA OKLAAA OOOOxx +249 7711 1 1 9 9 49 249 249 249 249 98 99 PJAAAA PKLAAA VVVVxx +1731 7712 1 3 1 11 31 731 1731 1731 1731 62 63 POAAAA QKLAAA AAAAxx +8716 7713 0 0 6 16 16 716 716 3716 8716 32 33 GXAAAA RKLAAA HHHHxx +2670 7714 0 2 0 10 70 670 670 2670 2670 140 141 SYAAAA SKLAAA OOOOxx +4654 7715 0 2 4 14 54 654 654 4654 4654 108 109 AXAAAA TKLAAA VVVVxx +1027 7716 1 3 7 7 27 
27 1027 1027 1027 54 55 NNAAAA UKLAAA AAAAxx +1099 7717 1 3 9 19 99 99 1099 1099 1099 198 199 HQAAAA VKLAAA HHHHxx +3617 7718 1 1 7 17 17 617 1617 3617 3617 34 35 DJAAAA WKLAAA OOOOxx +4330 7719 0 2 0 10 30 330 330 4330 4330 60 61 OKAAAA XKLAAA VVVVxx +9750 7720 0 2 0 10 50 750 1750 4750 9750 100 101 ALAAAA YKLAAA AAAAxx +467 7721 1 3 7 7 67 467 467 467 467 134 135 ZRAAAA ZKLAAA HHHHxx +8525 7722 1 1 5 5 25 525 525 3525 8525 50 51 XPAAAA ALLAAA OOOOxx +5990 7723 0 2 0 10 90 990 1990 990 5990 180 181 KWAAAA BLLAAA VVVVxx +4839 7724 1 3 9 19 39 839 839 4839 4839 78 79 DEAAAA CLLAAA AAAAxx +9914 7725 0 2 4 14 14 914 1914 4914 9914 28 29 IRAAAA DLLAAA HHHHxx +7047 7726 1 3 7 7 47 47 1047 2047 7047 94 95 BLAAAA ELLAAA OOOOxx +874 7727 0 2 4 14 74 874 874 874 874 148 149 QHAAAA FLLAAA VVVVxx +6061 7728 1 1 1 1 61 61 61 1061 6061 122 123 DZAAAA GLLAAA AAAAxx +5491 7729 1 3 1 11 91 491 1491 491 5491 182 183 FDAAAA HLLAAA HHHHxx +4344 7730 0 0 4 4 44 344 344 4344 4344 88 89 CLAAAA ILLAAA OOOOxx +1281 7731 1 1 1 1 81 281 1281 1281 1281 162 163 HXAAAA JLLAAA VVVVxx +3597 7732 1 1 7 17 97 597 1597 3597 3597 194 195 JIAAAA KLLAAA AAAAxx +4992 7733 0 0 2 12 92 992 992 4992 4992 184 185 AKAAAA LLLAAA HHHHxx +3849 7734 1 1 9 9 49 849 1849 3849 3849 98 99 BSAAAA MLLAAA OOOOxx +2655 7735 1 3 5 15 55 655 655 2655 2655 110 111 DYAAAA NLLAAA VVVVxx +147 7736 1 3 7 7 47 147 147 147 147 94 95 RFAAAA OLLAAA AAAAxx +9110 7737 0 2 0 10 10 110 1110 4110 9110 20 21 KMAAAA PLLAAA HHHHxx +1637 7738 1 1 7 17 37 637 1637 1637 1637 74 75 ZKAAAA QLLAAA OOOOxx +9826 7739 0 2 6 6 26 826 1826 4826 9826 52 53 YNAAAA RLLAAA VVVVxx +5957 7740 1 1 7 17 57 957 1957 957 5957 114 115 DVAAAA SLLAAA AAAAxx +6932 7741 0 0 2 12 32 932 932 1932 6932 64 65 QGAAAA TLLAAA HHHHxx +9684 7742 0 0 4 4 84 684 1684 4684 9684 168 169 MIAAAA ULLAAA OOOOxx +4653 7743 1 1 3 13 53 653 653 4653 4653 106 107 ZWAAAA VLLAAA VVVVxx +8065 7744 1 1 5 5 65 65 65 3065 8065 130 131 FYAAAA WLLAAA AAAAxx +1202 7745 0 2 2 2 2 202 1202 1202 1202 4 5 GUAAAA XLLAAA HHHHxx +9214 7746 0 2 4 14 14 214 1214 4214 9214 28 29 KQAAAA YLLAAA OOOOxx +196 7747 0 0 6 16 96 196 196 196 196 192 193 OHAAAA ZLLAAA VVVVxx +4486 7748 0 2 6 6 86 486 486 4486 4486 172 173 OQAAAA AMLAAA AAAAxx +2585 7749 1 1 5 5 85 585 585 2585 2585 170 171 LVAAAA BMLAAA HHHHxx +2464 7750 0 0 4 4 64 464 464 2464 2464 128 129 UQAAAA CMLAAA OOOOxx +3467 7751 1 3 7 7 67 467 1467 3467 3467 134 135 JDAAAA DMLAAA VVVVxx +9295 7752 1 3 5 15 95 295 1295 4295 9295 190 191 NTAAAA EMLAAA AAAAxx +517 7753 1 1 7 17 17 517 517 517 517 34 35 XTAAAA FMLAAA HHHHxx +6870 7754 0 2 0 10 70 870 870 1870 6870 140 141 GEAAAA GMLAAA OOOOxx +5732 7755 0 0 2 12 32 732 1732 732 5732 64 65 MMAAAA HMLAAA VVVVxx +9376 7756 0 0 6 16 76 376 1376 4376 9376 152 153 QWAAAA IMLAAA AAAAxx +838 7757 0 2 8 18 38 838 838 838 838 76 77 GGAAAA JMLAAA HHHHxx +9254 7758 0 2 4 14 54 254 1254 4254 9254 108 109 YRAAAA KMLAAA OOOOxx +8879 7759 1 3 9 19 79 879 879 3879 8879 158 159 NDAAAA LMLAAA VVVVxx +6281 7760 1 1 1 1 81 281 281 1281 6281 162 163 PHAAAA MMLAAA AAAAxx +8216 7761 0 0 6 16 16 216 216 3216 8216 32 33 AEAAAA NMLAAA HHHHxx +9213 7762 1 1 3 13 13 213 1213 4213 9213 26 27 JQAAAA OMLAAA OOOOxx +7234 7763 0 2 4 14 34 234 1234 2234 7234 68 69 GSAAAA PMLAAA VVVVxx +5692 7764 0 0 2 12 92 692 1692 692 5692 184 185 YKAAAA QMLAAA AAAAxx +693 7765 1 1 3 13 93 693 693 693 693 186 187 RAAAAA RMLAAA HHHHxx +9050 7766 0 2 0 10 50 50 1050 4050 9050 100 101 CKAAAA SMLAAA OOOOxx +3623 7767 1 3 3 3 23 623 1623 3623 3623 46 47 JJAAAA TMLAAA VVVVxx +2130 7768 
0 2 0 10 30 130 130 2130 2130 60 61 YDAAAA UMLAAA AAAAxx +2514 7769 0 2 4 14 14 514 514 2514 2514 28 29 SSAAAA VMLAAA HHHHxx +1812 7770 0 0 2 12 12 812 1812 1812 1812 24 25 SRAAAA WMLAAA OOOOxx +9037 7771 1 1 7 17 37 37 1037 4037 9037 74 75 PJAAAA XMLAAA VVVVxx +5054 7772 0 2 4 14 54 54 1054 54 5054 108 109 KMAAAA YMLAAA AAAAxx +7801 7773 1 1 1 1 1 801 1801 2801 7801 2 3 BOAAAA ZMLAAA HHHHxx +7939 7774 1 3 9 19 39 939 1939 2939 7939 78 79 JTAAAA ANLAAA OOOOxx +7374 7775 0 2 4 14 74 374 1374 2374 7374 148 149 QXAAAA BNLAAA VVVVxx +1058 7776 0 2 8 18 58 58 1058 1058 1058 116 117 SOAAAA CNLAAA AAAAxx +1972 7777 0 0 2 12 72 972 1972 1972 1972 144 145 WXAAAA DNLAAA HHHHxx +3741 7778 1 1 1 1 41 741 1741 3741 3741 82 83 XNAAAA ENLAAA OOOOxx +2227 7779 1 3 7 7 27 227 227 2227 2227 54 55 RHAAAA FNLAAA VVVVxx +304 7780 0 0 4 4 4 304 304 304 304 8 9 SLAAAA GNLAAA AAAAxx +4914 7781 0 2 4 14 14 914 914 4914 4914 28 29 AHAAAA HNLAAA HHHHxx +2428 7782 0 0 8 8 28 428 428 2428 2428 56 57 KPAAAA INLAAA OOOOxx +6660 7783 0 0 0 0 60 660 660 1660 6660 120 121 EWAAAA JNLAAA VVVVxx +2676 7784 0 0 6 16 76 676 676 2676 2676 152 153 YYAAAA KNLAAA AAAAxx +2454 7785 0 2 4 14 54 454 454 2454 2454 108 109 KQAAAA LNLAAA HHHHxx +3798 7786 0 2 8 18 98 798 1798 3798 3798 196 197 CQAAAA MNLAAA OOOOxx +1341 7787 1 1 1 1 41 341 1341 1341 1341 82 83 PZAAAA NNLAAA VVVVxx +1611 7788 1 3 1 11 11 611 1611 1611 1611 22 23 ZJAAAA ONLAAA AAAAxx +2681 7789 1 1 1 1 81 681 681 2681 2681 162 163 DZAAAA PNLAAA HHHHxx +7292 7790 0 0 2 12 92 292 1292 2292 7292 184 185 MUAAAA QNLAAA OOOOxx +7775 7791 1 3 5 15 75 775 1775 2775 7775 150 151 BNAAAA RNLAAA VVVVxx +794 7792 0 2 4 14 94 794 794 794 794 188 189 OEAAAA SNLAAA AAAAxx +8709 7793 1 1 9 9 9 709 709 3709 8709 18 19 ZWAAAA TNLAAA HHHHxx +1901 7794 1 1 1 1 1 901 1901 1901 1901 2 3 DVAAAA UNLAAA OOOOxx +3089 7795 1 1 9 9 89 89 1089 3089 3089 178 179 VOAAAA VNLAAA VVVVxx +7797 7796 1 1 7 17 97 797 1797 2797 7797 194 195 XNAAAA WNLAAA AAAAxx +6070 7797 0 2 0 10 70 70 70 1070 6070 140 141 MZAAAA XNLAAA HHHHxx +2191 7798 1 3 1 11 91 191 191 2191 2191 182 183 HGAAAA YNLAAA OOOOxx +3497 7799 1 1 7 17 97 497 1497 3497 3497 194 195 NEAAAA ZNLAAA VVVVxx +8302 7800 0 2 2 2 2 302 302 3302 8302 4 5 IHAAAA AOLAAA AAAAxx +4365 7801 1 1 5 5 65 365 365 4365 4365 130 131 XLAAAA BOLAAA HHHHxx +3588 7802 0 0 8 8 88 588 1588 3588 3588 176 177 AIAAAA COLAAA OOOOxx +8292 7803 0 0 2 12 92 292 292 3292 8292 184 185 YGAAAA DOLAAA VVVVxx +4696 7804 0 0 6 16 96 696 696 4696 4696 192 193 QYAAAA EOLAAA AAAAxx +5641 7805 1 1 1 1 41 641 1641 641 5641 82 83 ZIAAAA FOLAAA HHHHxx +9386 7806 0 2 6 6 86 386 1386 4386 9386 172 173 AXAAAA GOLAAA OOOOxx +507 7807 1 3 7 7 7 507 507 507 507 14 15 NTAAAA HOLAAA VVVVxx +7201 7808 1 1 1 1 1 201 1201 2201 7201 2 3 ZQAAAA IOLAAA AAAAxx +7785 7809 1 1 5 5 85 785 1785 2785 7785 170 171 LNAAAA JOLAAA HHHHxx +463 7810 1 3 3 3 63 463 463 463 463 126 127 VRAAAA KOLAAA OOOOxx +6656 7811 0 0 6 16 56 656 656 1656 6656 112 113 AWAAAA LOLAAA VVVVxx +807 7812 1 3 7 7 7 807 807 807 807 14 15 BFAAAA MOLAAA AAAAxx +7278 7813 0 2 8 18 78 278 1278 2278 7278 156 157 YTAAAA NOLAAA HHHHxx +6237 7814 1 1 7 17 37 237 237 1237 6237 74 75 XFAAAA OOLAAA OOOOxx +7671 7815 1 3 1 11 71 671 1671 2671 7671 142 143 BJAAAA POLAAA VVVVxx +2235 7816 1 3 5 15 35 235 235 2235 2235 70 71 ZHAAAA QOLAAA AAAAxx +4042 7817 0 2 2 2 42 42 42 4042 4042 84 85 MZAAAA ROLAAA HHHHxx +5273 7818 1 1 3 13 73 273 1273 273 5273 146 147 VUAAAA SOLAAA OOOOxx +7557 7819 1 1 7 17 57 557 1557 2557 7557 114 115 REAAAA TOLAAA VVVVxx +4007 7820 
1 3 7 7 7 7 7 4007 4007 14 15 DYAAAA UOLAAA AAAAxx +1428 7821 0 0 8 8 28 428 1428 1428 1428 56 57 YCAAAA VOLAAA HHHHxx +9739 7822 1 3 9 19 39 739 1739 4739 9739 78 79 PKAAAA WOLAAA OOOOxx +7836 7823 0 0 6 16 36 836 1836 2836 7836 72 73 KPAAAA XOLAAA VVVVxx +1777 7824 1 1 7 17 77 777 1777 1777 1777 154 155 JQAAAA YOLAAA AAAAxx +5192 7825 0 0 2 12 92 192 1192 192 5192 184 185 SRAAAA ZOLAAA HHHHxx +7236 7826 0 0 6 16 36 236 1236 2236 7236 72 73 ISAAAA APLAAA OOOOxx +1623 7827 1 3 3 3 23 623 1623 1623 1623 46 47 LKAAAA BPLAAA VVVVxx +8288 7828 0 0 8 8 88 288 288 3288 8288 176 177 UGAAAA CPLAAA AAAAxx +2827 7829 1 3 7 7 27 827 827 2827 2827 54 55 TEAAAA DPLAAA HHHHxx +458 7830 0 2 8 18 58 458 458 458 458 116 117 QRAAAA EPLAAA OOOOxx +1818 7831 0 2 8 18 18 818 1818 1818 1818 36 37 YRAAAA FPLAAA VVVVxx +6837 7832 1 1 7 17 37 837 837 1837 6837 74 75 ZCAAAA GPLAAA AAAAxx +7825 7833 1 1 5 5 25 825 1825 2825 7825 50 51 ZOAAAA HPLAAA HHHHxx +9146 7834 0 2 6 6 46 146 1146 4146 9146 92 93 UNAAAA IPLAAA OOOOxx +8451 7835 1 3 1 11 51 451 451 3451 8451 102 103 BNAAAA JPLAAA VVVVxx +6438 7836 0 2 8 18 38 438 438 1438 6438 76 77 QNAAAA KPLAAA AAAAxx +4020 7837 0 0 0 0 20 20 20 4020 4020 40 41 QYAAAA LPLAAA HHHHxx +4068 7838 0 0 8 8 68 68 68 4068 4068 136 137 MAAAAA MPLAAA OOOOxx +2411 7839 1 3 1 11 11 411 411 2411 2411 22 23 TOAAAA NPLAAA VVVVxx +6222 7840 0 2 2 2 22 222 222 1222 6222 44 45 IFAAAA OPLAAA AAAAxx +3164 7841 0 0 4 4 64 164 1164 3164 3164 128 129 SRAAAA PPLAAA HHHHxx +311 7842 1 3 1 11 11 311 311 311 311 22 23 ZLAAAA QPLAAA OOOOxx +5683 7843 1 3 3 3 83 683 1683 683 5683 166 167 PKAAAA RPLAAA VVVVxx +3993 7844 1 1 3 13 93 993 1993 3993 3993 186 187 PXAAAA SPLAAA AAAAxx +9897 7845 1 1 7 17 97 897 1897 4897 9897 194 195 RQAAAA TPLAAA HHHHxx +6609 7846 1 1 9 9 9 609 609 1609 6609 18 19 FUAAAA UPLAAA OOOOxx +1362 7847 0 2 2 2 62 362 1362 1362 1362 124 125 KAAAAA VPLAAA VVVVxx +3918 7848 0 2 8 18 18 918 1918 3918 3918 36 37 SUAAAA WPLAAA AAAAxx +7376 7849 0 0 6 16 76 376 1376 2376 7376 152 153 SXAAAA XPLAAA HHHHxx +6996 7850 0 0 6 16 96 996 996 1996 6996 192 193 CJAAAA YPLAAA OOOOxx +9567 7851 1 3 7 7 67 567 1567 4567 9567 134 135 ZDAAAA ZPLAAA VVVVxx +7525 7852 1 1 5 5 25 525 1525 2525 7525 50 51 LDAAAA AQLAAA AAAAxx +9069 7853 1 1 9 9 69 69 1069 4069 9069 138 139 VKAAAA BQLAAA HHHHxx +9999 7854 1 3 9 19 99 999 1999 4999 9999 198 199 PUAAAA CQLAAA OOOOxx +9237 7855 1 1 7 17 37 237 1237 4237 9237 74 75 HRAAAA DQLAAA VVVVxx +8441 7856 1 1 1 1 41 441 441 3441 8441 82 83 RMAAAA EQLAAA AAAAxx +6769 7857 1 1 9 9 69 769 769 1769 6769 138 139 JAAAAA FQLAAA HHHHxx +6073 7858 1 1 3 13 73 73 73 1073 6073 146 147 PZAAAA GQLAAA OOOOxx +1091 7859 1 3 1 11 91 91 1091 1091 1091 182 183 ZPAAAA HQLAAA VVVVxx +9886 7860 0 2 6 6 86 886 1886 4886 9886 172 173 GQAAAA IQLAAA AAAAxx +3971 7861 1 3 1 11 71 971 1971 3971 3971 142 143 TWAAAA JQLAAA HHHHxx +4621 7862 1 1 1 1 21 621 621 4621 4621 42 43 TVAAAA KQLAAA OOOOxx +3120 7863 0 0 0 0 20 120 1120 3120 3120 40 41 AQAAAA LQLAAA VVVVxx +9773 7864 1 1 3 13 73 773 1773 4773 9773 146 147 XLAAAA MQLAAA AAAAxx +8712 7865 0 0 2 12 12 712 712 3712 8712 24 25 CXAAAA NQLAAA HHHHxx +801 7866 1 1 1 1 1 801 801 801 801 2 3 VEAAAA OQLAAA OOOOxx +9478 7867 0 2 8 18 78 478 1478 4478 9478 156 157 OAAAAA PQLAAA VVVVxx +3466 7868 0 2 6 6 66 466 1466 3466 3466 132 133 IDAAAA QQLAAA AAAAxx +6326 7869 0 2 6 6 26 326 326 1326 6326 52 53 IJAAAA RQLAAA HHHHxx +1723 7870 1 3 3 3 23 723 1723 1723 1723 46 47 HOAAAA SQLAAA OOOOxx +4978 7871 0 2 8 18 78 978 978 4978 4978 156 157 MJAAAA TQLAAA VVVVxx 
+2311 7872 1 3 1 11 11 311 311 2311 2311 22 23 XKAAAA UQLAAA AAAAxx +9532 7873 0 0 2 12 32 532 1532 4532 9532 64 65 QCAAAA VQLAAA HHHHxx +3680 7874 0 0 0 0 80 680 1680 3680 3680 160 161 OLAAAA WQLAAA OOOOxx +1244 7875 0 0 4 4 44 244 1244 1244 1244 88 89 WVAAAA XQLAAA VVVVxx +3821 7876 1 1 1 1 21 821 1821 3821 3821 42 43 ZQAAAA YQLAAA AAAAxx +9586 7877 0 2 6 6 86 586 1586 4586 9586 172 173 SEAAAA ZQLAAA HHHHxx +3894 7878 0 2 4 14 94 894 1894 3894 3894 188 189 UTAAAA ARLAAA OOOOxx +6169 7879 1 1 9 9 69 169 169 1169 6169 138 139 HDAAAA BRLAAA VVVVxx +5919 7880 1 3 9 19 19 919 1919 919 5919 38 39 RTAAAA CRLAAA AAAAxx +4187 7881 1 3 7 7 87 187 187 4187 4187 174 175 BFAAAA DRLAAA HHHHxx +5477 7882 1 1 7 17 77 477 1477 477 5477 154 155 RCAAAA ERLAAA OOOOxx +2806 7883 0 2 6 6 6 806 806 2806 2806 12 13 YDAAAA FRLAAA VVVVxx +8158 7884 0 2 8 18 58 158 158 3158 8158 116 117 UBAAAA GRLAAA AAAAxx +7130 7885 0 2 0 10 30 130 1130 2130 7130 60 61 GOAAAA HRLAAA HHHHxx +7133 7886 1 1 3 13 33 133 1133 2133 7133 66 67 JOAAAA IRLAAA OOOOxx +6033 7887 1 1 3 13 33 33 33 1033 6033 66 67 BYAAAA JRLAAA VVVVxx +2415 7888 1 3 5 15 15 415 415 2415 2415 30 31 XOAAAA KRLAAA AAAAxx +8091 7889 1 3 1 11 91 91 91 3091 8091 182 183 FZAAAA LRLAAA HHHHxx +8347 7890 1 3 7 7 47 347 347 3347 8347 94 95 BJAAAA MRLAAA OOOOxx +7879 7891 1 3 9 19 79 879 1879 2879 7879 158 159 BRAAAA NRLAAA VVVVxx +9360 7892 0 0 0 0 60 360 1360 4360 9360 120 121 AWAAAA ORLAAA AAAAxx +3369 7893 1 1 9 9 69 369 1369 3369 3369 138 139 PZAAAA PRLAAA HHHHxx +8536 7894 0 0 6 16 36 536 536 3536 8536 72 73 IQAAAA QRLAAA OOOOxx +8628 7895 0 0 8 8 28 628 628 3628 8628 56 57 WTAAAA RRLAAA VVVVxx +1580 7896 0 0 0 0 80 580 1580 1580 1580 160 161 UIAAAA SRLAAA AAAAxx +705 7897 1 1 5 5 5 705 705 705 705 10 11 DBAAAA TRLAAA HHHHxx +4650 7898 0 2 0 10 50 650 650 4650 4650 100 101 WWAAAA URLAAA OOOOxx +9165 7899 1 1 5 5 65 165 1165 4165 9165 130 131 NOAAAA VRLAAA VVVVxx +4820 7900 0 0 0 0 20 820 820 4820 4820 40 41 KDAAAA WRLAAA AAAAxx +3538 7901 0 2 8 18 38 538 1538 3538 3538 76 77 CGAAAA XRLAAA HHHHxx +9947 7902 1 3 7 7 47 947 1947 4947 9947 94 95 PSAAAA YRLAAA OOOOxx +4954 7903 0 2 4 14 54 954 954 4954 4954 108 109 OIAAAA ZRLAAA VVVVxx +1104 7904 0 0 4 4 4 104 1104 1104 1104 8 9 MQAAAA ASLAAA AAAAxx +8455 7905 1 3 5 15 55 455 455 3455 8455 110 111 FNAAAA BSLAAA HHHHxx +8307 7906 1 3 7 7 7 307 307 3307 8307 14 15 NHAAAA CSLAAA OOOOxx +9203 7907 1 3 3 3 3 203 1203 4203 9203 6 7 ZPAAAA DSLAAA VVVVxx +7565 7908 1 1 5 5 65 565 1565 2565 7565 130 131 ZEAAAA ESLAAA AAAAxx +7745 7909 1 1 5 5 45 745 1745 2745 7745 90 91 XLAAAA FSLAAA HHHHxx +1787 7910 1 3 7 7 87 787 1787 1787 1787 174 175 TQAAAA GSLAAA OOOOxx +4861 7911 1 1 1 1 61 861 861 4861 4861 122 123 ZEAAAA HSLAAA VVVVxx +5183 7912 1 3 3 3 83 183 1183 183 5183 166 167 JRAAAA ISLAAA AAAAxx +529 7913 1 1 9 9 29 529 529 529 529 58 59 JUAAAA JSLAAA HHHHxx +2470 7914 0 2 0 10 70 470 470 2470 2470 140 141 ARAAAA KSLAAA OOOOxx +1267 7915 1 3 7 7 67 267 1267 1267 1267 134 135 TWAAAA LSLAAA VVVVxx +2059 7916 1 3 9 19 59 59 59 2059 2059 118 119 FBAAAA MSLAAA AAAAxx +1862 7917 0 2 2 2 62 862 1862 1862 1862 124 125 QTAAAA NSLAAA HHHHxx +7382 7918 0 2 2 2 82 382 1382 2382 7382 164 165 YXAAAA OSLAAA OOOOxx +4796 7919 0 0 6 16 96 796 796 4796 4796 192 193 MCAAAA PSLAAA VVVVxx +2331 7920 1 3 1 11 31 331 331 2331 2331 62 63 RLAAAA QSLAAA AAAAxx +8870 7921 0 2 0 10 70 870 870 3870 8870 140 141 EDAAAA RSLAAA HHHHxx +9581 7922 1 1 1 1 81 581 1581 4581 9581 162 163 NEAAAA SSLAAA OOOOxx +9063 7923 1 3 3 3 63 63 1063 4063 9063 126 127 PKAAAA 
TSLAAA VVVVxx +2192 7924 0 0 2 12 92 192 192 2192 2192 184 185 IGAAAA USLAAA AAAAxx +6466 7925 0 2 6 6 66 466 466 1466 6466 132 133 SOAAAA VSLAAA HHHHxx +7096 7926 0 0 6 16 96 96 1096 2096 7096 192 193 YMAAAA WSLAAA OOOOxx +6257 7927 1 1 7 17 57 257 257 1257 6257 114 115 RGAAAA XSLAAA VVVVxx +7009 7928 1 1 9 9 9 9 1009 2009 7009 18 19 PJAAAA YSLAAA AAAAxx +8136 7929 0 0 6 16 36 136 136 3136 8136 72 73 YAAAAA ZSLAAA HHHHxx +1854 7930 0 2 4 14 54 854 1854 1854 1854 108 109 ITAAAA ATLAAA OOOOxx +3644 7931 0 0 4 4 44 644 1644 3644 3644 88 89 EKAAAA BTLAAA VVVVxx +4437 7932 1 1 7 17 37 437 437 4437 4437 74 75 ROAAAA CTLAAA AAAAxx +7209 7933 1 1 9 9 9 209 1209 2209 7209 18 19 HRAAAA DTLAAA HHHHxx +1516 7934 0 0 6 16 16 516 1516 1516 1516 32 33 IGAAAA ETLAAA OOOOxx +822 7935 0 2 2 2 22 822 822 822 822 44 45 QFAAAA FTLAAA VVVVxx +1778 7936 0 2 8 18 78 778 1778 1778 1778 156 157 KQAAAA GTLAAA AAAAxx +8161 7937 1 1 1 1 61 161 161 3161 8161 122 123 XBAAAA HTLAAA HHHHxx +6030 7938 0 2 0 10 30 30 30 1030 6030 60 61 YXAAAA ITLAAA OOOOxx +3515 7939 1 3 5 15 15 515 1515 3515 3515 30 31 FFAAAA JTLAAA VVVVxx +1702 7940 0 2 2 2 2 702 1702 1702 1702 4 5 MNAAAA KTLAAA AAAAxx +2671 7941 1 3 1 11 71 671 671 2671 2671 142 143 TYAAAA LTLAAA HHHHxx +7623 7942 1 3 3 3 23 623 1623 2623 7623 46 47 FHAAAA MTLAAA OOOOxx +9828 7943 0 0 8 8 28 828 1828 4828 9828 56 57 AOAAAA NTLAAA VVVVxx +1888 7944 0 0 8 8 88 888 1888 1888 1888 176 177 QUAAAA OTLAAA AAAAxx +4520 7945 0 0 0 0 20 520 520 4520 4520 40 41 WRAAAA PTLAAA HHHHxx +3461 7946 1 1 1 1 61 461 1461 3461 3461 122 123 DDAAAA QTLAAA OOOOxx +1488 7947 0 0 8 8 88 488 1488 1488 1488 176 177 GFAAAA RTLAAA VVVVxx +7753 7948 1 1 3 13 53 753 1753 2753 7753 106 107 FMAAAA STLAAA AAAAxx +5525 7949 1 1 5 5 25 525 1525 525 5525 50 51 NEAAAA TTLAAA HHHHxx +5220 7950 0 0 0 0 20 220 1220 220 5220 40 41 USAAAA UTLAAA OOOOxx +305 7951 1 1 5 5 5 305 305 305 305 10 11 TLAAAA VTLAAA VVVVxx +7883 7952 1 3 3 3 83 883 1883 2883 7883 166 167 FRAAAA WTLAAA AAAAxx +1222 7953 0 2 2 2 22 222 1222 1222 1222 44 45 AVAAAA XTLAAA HHHHxx +8552 7954 0 0 2 12 52 552 552 3552 8552 104 105 YQAAAA YTLAAA OOOOxx +6097 7955 1 1 7 17 97 97 97 1097 6097 194 195 NAAAAA ZTLAAA VVVVxx +2298 7956 0 2 8 18 98 298 298 2298 2298 196 197 KKAAAA AULAAA AAAAxx +956 7957 0 0 6 16 56 956 956 956 956 112 113 UKAAAA BULAAA HHHHxx +9351 7958 1 3 1 11 51 351 1351 4351 9351 102 103 RVAAAA CULAAA OOOOxx +6669 7959 1 1 9 9 69 669 669 1669 6669 138 139 NWAAAA DULAAA VVVVxx +9383 7960 1 3 3 3 83 383 1383 4383 9383 166 167 XWAAAA EULAAA AAAAxx +1607 7961 1 3 7 7 7 607 1607 1607 1607 14 15 VJAAAA FULAAA HHHHxx +812 7962 0 0 2 12 12 812 812 812 812 24 25 GFAAAA GULAAA OOOOxx +2109 7963 1 1 9 9 9 109 109 2109 2109 18 19 DDAAAA HULAAA VVVVxx +207 7964 1 3 7 7 7 207 207 207 207 14 15 ZHAAAA IULAAA AAAAxx +7124 7965 0 0 4 4 24 124 1124 2124 7124 48 49 AOAAAA JULAAA HHHHxx +9333 7966 1 1 3 13 33 333 1333 4333 9333 66 67 ZUAAAA KULAAA OOOOxx +3262 7967 0 2 2 2 62 262 1262 3262 3262 124 125 MVAAAA LULAAA VVVVxx +1070 7968 0 2 0 10 70 70 1070 1070 1070 140 141 EPAAAA MULAAA AAAAxx +7579 7969 1 3 9 19 79 579 1579 2579 7579 158 159 NFAAAA NULAAA HHHHxx +9283 7970 1 3 3 3 83 283 1283 4283 9283 166 167 BTAAAA OULAAA OOOOxx +4917 7971 1 1 7 17 17 917 917 4917 4917 34 35 DHAAAA PULAAA VVVVxx +1328 7972 0 0 8 8 28 328 1328 1328 1328 56 57 CZAAAA QULAAA AAAAxx +3042 7973 0 2 2 2 42 42 1042 3042 3042 84 85 ANAAAA RULAAA HHHHxx +8352 7974 0 0 2 12 52 352 352 3352 8352 104 105 GJAAAA SULAAA OOOOxx +2710 7975 0 2 0 10 10 710 710 2710 2710 20 21 GAAAAA 
TULAAA VVVVxx +3330 7976 0 2 0 10 30 330 1330 3330 3330 60 61 CYAAAA UULAAA AAAAxx +2822 7977 0 2 2 2 22 822 822 2822 2822 44 45 OEAAAA VULAAA HHHHxx +5627 7978 1 3 7 7 27 627 1627 627 5627 54 55 LIAAAA WULAAA OOOOxx +7848 7979 0 0 8 8 48 848 1848 2848 7848 96 97 WPAAAA XULAAA VVVVxx +7384 7980 0 0 4 4 84 384 1384 2384 7384 168 169 AYAAAA YULAAA AAAAxx +727 7981 1 3 7 7 27 727 727 727 727 54 55 ZBAAAA ZULAAA HHHHxx +9926 7982 0 2 6 6 26 926 1926 4926 9926 52 53 URAAAA AVLAAA OOOOxx +2647 7983 1 3 7 7 47 647 647 2647 2647 94 95 VXAAAA BVLAAA VVVVxx +6416 7984 0 0 6 16 16 416 416 1416 6416 32 33 UMAAAA CVLAAA AAAAxx +8751 7985 1 3 1 11 51 751 751 3751 8751 102 103 PYAAAA DVLAAA HHHHxx +6515 7986 1 3 5 15 15 515 515 1515 6515 30 31 PQAAAA EVLAAA OOOOxx +2472 7987 0 0 2 12 72 472 472 2472 2472 144 145 CRAAAA FVLAAA VVVVxx +7205 7988 1 1 5 5 5 205 1205 2205 7205 10 11 DRAAAA GVLAAA AAAAxx +9654 7989 0 2 4 14 54 654 1654 4654 9654 108 109 IHAAAA HVLAAA HHHHxx +5646 7990 0 2 6 6 46 646 1646 646 5646 92 93 EJAAAA IVLAAA OOOOxx +4217 7991 1 1 7 17 17 217 217 4217 4217 34 35 FGAAAA JVLAAA VVVVxx +4484 7992 0 0 4 4 84 484 484 4484 4484 168 169 MQAAAA KVLAAA AAAAxx +6654 7993 0 2 4 14 54 654 654 1654 6654 108 109 YVAAAA LVLAAA HHHHxx +4876 7994 0 0 6 16 76 876 876 4876 4876 152 153 OFAAAA MVLAAA OOOOxx +9690 7995 0 2 0 10 90 690 1690 4690 9690 180 181 SIAAAA NVLAAA VVVVxx +2453 7996 1 1 3 13 53 453 453 2453 2453 106 107 JQAAAA OVLAAA AAAAxx +829 7997 1 1 9 9 29 829 829 829 829 58 59 XFAAAA PVLAAA HHHHxx +2547 7998 1 3 7 7 47 547 547 2547 2547 94 95 ZTAAAA QVLAAA OOOOxx +9726 7999 0 2 6 6 26 726 1726 4726 9726 52 53 CKAAAA RVLAAA VVVVxx +9267 8000 1 3 7 7 67 267 1267 4267 9267 134 135 LSAAAA SVLAAA AAAAxx +7448 8001 0 0 8 8 48 448 1448 2448 7448 96 97 MAAAAA TVLAAA HHHHxx +610 8002 0 2 0 10 10 610 610 610 610 20 21 MXAAAA UVLAAA OOOOxx +2791 8003 1 3 1 11 91 791 791 2791 2791 182 183 JDAAAA VVLAAA VVVVxx +3651 8004 1 3 1 11 51 651 1651 3651 3651 102 103 LKAAAA WVLAAA AAAAxx +5206 8005 0 2 6 6 6 206 1206 206 5206 12 13 GSAAAA XVLAAA HHHHxx +8774 8006 0 2 4 14 74 774 774 3774 8774 148 149 MZAAAA YVLAAA OOOOxx +4753 8007 1 1 3 13 53 753 753 4753 4753 106 107 VAAAAA ZVLAAA VVVVxx +4755 8008 1 3 5 15 55 755 755 4755 4755 110 111 XAAAAA AWLAAA AAAAxx +686 8009 0 2 6 6 86 686 686 686 686 172 173 KAAAAA BWLAAA HHHHxx +8281 8010 1 1 1 1 81 281 281 3281 8281 162 163 NGAAAA CWLAAA OOOOxx +2058 8011 0 2 8 18 58 58 58 2058 2058 116 117 EBAAAA DWLAAA VVVVxx +8900 8012 0 0 0 0 0 900 900 3900 8900 0 1 IEAAAA EWLAAA AAAAxx +8588 8013 0 0 8 8 88 588 588 3588 8588 176 177 ISAAAA FWLAAA HHHHxx +2904 8014 0 0 4 4 4 904 904 2904 2904 8 9 SHAAAA GWLAAA OOOOxx +8917 8015 1 1 7 17 17 917 917 3917 8917 34 35 ZEAAAA HWLAAA VVVVxx +9026 8016 0 2 6 6 26 26 1026 4026 9026 52 53 EJAAAA IWLAAA AAAAxx +2416 8017 0 0 6 16 16 416 416 2416 2416 32 33 YOAAAA JWLAAA HHHHxx +1053 8018 1 1 3 13 53 53 1053 1053 1053 106 107 NOAAAA KWLAAA OOOOxx +7141 8019 1 1 1 1 41 141 1141 2141 7141 82 83 ROAAAA LWLAAA VVVVxx +9771 8020 1 3 1 11 71 771 1771 4771 9771 142 143 VLAAAA MWLAAA AAAAxx +2774 8021 0 2 4 14 74 774 774 2774 2774 148 149 SCAAAA NWLAAA HHHHxx +3213 8022 1 1 3 13 13 213 1213 3213 3213 26 27 PTAAAA OWLAAA OOOOxx +5694 8023 0 2 4 14 94 694 1694 694 5694 188 189 ALAAAA PWLAAA VVVVxx +6631 8024 1 3 1 11 31 631 631 1631 6631 62 63 BVAAAA QWLAAA AAAAxx +6638 8025 0 2 8 18 38 638 638 1638 6638 76 77 IVAAAA RWLAAA HHHHxx +7407 8026 1 3 7 7 7 407 1407 2407 7407 14 15 XYAAAA SWLAAA OOOOxx +8972 8027 0 0 2 12 72 972 972 3972 8972 144 145 CHAAAA 
TWLAAA VVVVxx +2202 8028 0 2 2 2 2 202 202 2202 2202 4 5 SGAAAA UWLAAA AAAAxx +6135 8029 1 3 5 15 35 135 135 1135 6135 70 71 ZBAAAA VWLAAA HHHHxx +5043 8030 1 3 3 3 43 43 1043 43 5043 86 87 ZLAAAA WWLAAA OOOOxx +5163 8031 1 3 3 3 63 163 1163 163 5163 126 127 PQAAAA XWLAAA VVVVxx +1191 8032 1 3 1 11 91 191 1191 1191 1191 182 183 VTAAAA YWLAAA AAAAxx +6576 8033 0 0 6 16 76 576 576 1576 6576 152 153 YSAAAA ZWLAAA HHHHxx +3455 8034 1 3 5 15 55 455 1455 3455 3455 110 111 XCAAAA AXLAAA OOOOxx +3688 8035 0 0 8 8 88 688 1688 3688 3688 176 177 WLAAAA BXLAAA VVVVxx +4982 8036 0 2 2 2 82 982 982 4982 4982 164 165 QJAAAA CXLAAA AAAAxx +4180 8037 0 0 0 0 80 180 180 4180 4180 160 161 UEAAAA DXLAAA HHHHxx +4708 8038 0 0 8 8 8 708 708 4708 4708 16 17 CZAAAA EXLAAA OOOOxx +1241 8039 1 1 1 1 41 241 1241 1241 1241 82 83 TVAAAA FXLAAA VVVVxx +4921 8040 1 1 1 1 21 921 921 4921 4921 42 43 HHAAAA GXLAAA AAAAxx +3197 8041 1 1 7 17 97 197 1197 3197 3197 194 195 ZSAAAA HXLAAA HHHHxx +8225 8042 1 1 5 5 25 225 225 3225 8225 50 51 JEAAAA IXLAAA OOOOxx +5913 8043 1 1 3 13 13 913 1913 913 5913 26 27 LTAAAA JXLAAA VVVVxx +6387 8044 1 3 7 7 87 387 387 1387 6387 174 175 RLAAAA KXLAAA AAAAxx +2706 8045 0 2 6 6 6 706 706 2706 2706 12 13 CAAAAA LXLAAA HHHHxx +1461 8046 1 1 1 1 61 461 1461 1461 1461 122 123 FEAAAA MXLAAA OOOOxx +7646 8047 0 2 6 6 46 646 1646 2646 7646 92 93 CIAAAA NXLAAA VVVVxx +8066 8048 0 2 6 6 66 66 66 3066 8066 132 133 GYAAAA OXLAAA AAAAxx +4171 8049 1 3 1 11 71 171 171 4171 4171 142 143 LEAAAA PXLAAA HHHHxx +8008 8050 0 0 8 8 8 8 8 3008 8008 16 17 AWAAAA QXLAAA OOOOxx +2088 8051 0 0 8 8 88 88 88 2088 2088 176 177 ICAAAA RXLAAA VVVVxx +7907 8052 1 3 7 7 7 907 1907 2907 7907 14 15 DSAAAA SXLAAA AAAAxx +2429 8053 1 1 9 9 29 429 429 2429 2429 58 59 LPAAAA TXLAAA HHHHxx +9629 8054 1 1 9 9 29 629 1629 4629 9629 58 59 JGAAAA UXLAAA OOOOxx +1470 8055 0 2 0 10 70 470 1470 1470 1470 140 141 OEAAAA VXLAAA VVVVxx +4346 8056 0 2 6 6 46 346 346 4346 4346 92 93 ELAAAA WXLAAA AAAAxx +7219 8057 1 3 9 19 19 219 1219 2219 7219 38 39 RRAAAA XXLAAA HHHHxx +1185 8058 1 1 5 5 85 185 1185 1185 1185 170 171 PTAAAA YXLAAA OOOOxx +8776 8059 0 0 6 16 76 776 776 3776 8776 152 153 OZAAAA ZXLAAA VVVVxx +684 8060 0 0 4 4 84 684 684 684 684 168 169 IAAAAA AYLAAA AAAAxx +2343 8061 1 3 3 3 43 343 343 2343 2343 86 87 DMAAAA BYLAAA HHHHxx +4470 8062 0 2 0 10 70 470 470 4470 4470 140 141 YPAAAA CYLAAA OOOOxx +5116 8063 0 0 6 16 16 116 1116 116 5116 32 33 UOAAAA DYLAAA VVVVxx +1746 8064 0 2 6 6 46 746 1746 1746 1746 92 93 EPAAAA EYLAAA AAAAxx +3216 8065 0 0 6 16 16 216 1216 3216 3216 32 33 STAAAA FYLAAA HHHHxx +4594 8066 0 2 4 14 94 594 594 4594 4594 188 189 SUAAAA GYLAAA OOOOxx +3013 8067 1 1 3 13 13 13 1013 3013 3013 26 27 XLAAAA HYLAAA VVVVxx +2307 8068 1 3 7 7 7 307 307 2307 2307 14 15 TKAAAA IYLAAA AAAAxx +7663 8069 1 3 3 3 63 663 1663 2663 7663 126 127 TIAAAA JYLAAA HHHHxx +8504 8070 0 0 4 4 4 504 504 3504 8504 8 9 CPAAAA KYLAAA OOOOxx +3683 8071 1 3 3 3 83 683 1683 3683 3683 166 167 RLAAAA LYLAAA VVVVxx +144 8072 0 0 4 4 44 144 144 144 144 88 89 OFAAAA MYLAAA AAAAxx +203 8073 1 3 3 3 3 203 203 203 203 6 7 VHAAAA NYLAAA HHHHxx +5255 8074 1 3 5 15 55 255 1255 255 5255 110 111 DUAAAA OYLAAA OOOOxx +4150 8075 0 2 0 10 50 150 150 4150 4150 100 101 QDAAAA PYLAAA VVVVxx +5701 8076 1 1 1 1 1 701 1701 701 5701 2 3 HLAAAA QYLAAA AAAAxx +7400 8077 0 0 0 0 0 400 1400 2400 7400 0 1 QYAAAA RYLAAA HHHHxx +8203 8078 1 3 3 3 3 203 203 3203 8203 6 7 NDAAAA SYLAAA OOOOxx +637 8079 1 1 7 17 37 637 637 637 637 74 75 NYAAAA TYLAAA VVVVxx +2898 8080 0 2 8 18 
98 898 898 2898 2898 196 197 MHAAAA UYLAAA AAAAxx +1110 8081 0 2 0 10 10 110 1110 1110 1110 20 21 SQAAAA VYLAAA HHHHxx +6255 8082 1 3 5 15 55 255 255 1255 6255 110 111 PGAAAA WYLAAA OOOOxx +1071 8083 1 3 1 11 71 71 1071 1071 1071 142 143 FPAAAA XYLAAA VVVVxx +541 8084 1 1 1 1 41 541 541 541 541 82 83 VUAAAA YYLAAA AAAAxx +8077 8085 1 1 7 17 77 77 77 3077 8077 154 155 RYAAAA ZYLAAA HHHHxx +6809 8086 1 1 9 9 9 809 809 1809 6809 18 19 XBAAAA AZLAAA OOOOxx +4749 8087 1 1 9 9 49 749 749 4749 4749 98 99 RAAAAA BZLAAA VVVVxx +2886 8088 0 2 6 6 86 886 886 2886 2886 172 173 AHAAAA CZLAAA AAAAxx +5510 8089 0 2 0 10 10 510 1510 510 5510 20 21 YDAAAA DZLAAA HHHHxx +713 8090 1 1 3 13 13 713 713 713 713 26 27 LBAAAA EZLAAA OOOOxx +8388 8091 0 0 8 8 88 388 388 3388 8388 176 177 QKAAAA FZLAAA VVVVxx +9524 8092 0 0 4 4 24 524 1524 4524 9524 48 49 ICAAAA GZLAAA AAAAxx +9949 8093 1 1 9 9 49 949 1949 4949 9949 98 99 RSAAAA HZLAAA HHHHxx +885 8094 1 1 5 5 85 885 885 885 885 170 171 BIAAAA IZLAAA OOOOxx +8699 8095 1 3 9 19 99 699 699 3699 8699 198 199 PWAAAA JZLAAA VVVVxx +2232 8096 0 0 2 12 32 232 232 2232 2232 64 65 WHAAAA KZLAAA AAAAxx +5142 8097 0 2 2 2 42 142 1142 142 5142 84 85 UPAAAA LZLAAA HHHHxx +8891 8098 1 3 1 11 91 891 891 3891 8891 182 183 ZDAAAA MZLAAA OOOOxx +1881 8099 1 1 1 1 81 881 1881 1881 1881 162 163 JUAAAA NZLAAA VVVVxx +3751 8100 1 3 1 11 51 751 1751 3751 3751 102 103 HOAAAA OZLAAA AAAAxx +1896 8101 0 0 6 16 96 896 1896 1896 1896 192 193 YUAAAA PZLAAA HHHHxx +8258 8102 0 2 8 18 58 258 258 3258 8258 116 117 QFAAAA QZLAAA OOOOxx +3820 8103 0 0 0 0 20 820 1820 3820 3820 40 41 YQAAAA RZLAAA VVVVxx +6617 8104 1 1 7 17 17 617 617 1617 6617 34 35 NUAAAA SZLAAA AAAAxx +5100 8105 0 0 0 0 0 100 1100 100 5100 0 1 EOAAAA TZLAAA HHHHxx +4277 8106 1 1 7 17 77 277 277 4277 4277 154 155 NIAAAA UZLAAA OOOOxx +2498 8107 0 2 8 18 98 498 498 2498 2498 196 197 CSAAAA VZLAAA VVVVxx +4343 8108 1 3 3 3 43 343 343 4343 4343 86 87 BLAAAA WZLAAA AAAAxx +8319 8109 1 3 9 19 19 319 319 3319 8319 38 39 ZHAAAA XZLAAA HHHHxx +4803 8110 1 3 3 3 3 803 803 4803 4803 6 7 TCAAAA YZLAAA OOOOxx +3100 8111 0 0 0 0 0 100 1100 3100 3100 0 1 GPAAAA ZZLAAA VVVVxx +428 8112 0 0 8 8 28 428 428 428 428 56 57 MQAAAA AAMAAA AAAAxx +2811 8113 1 3 1 11 11 811 811 2811 2811 22 23 DEAAAA BAMAAA HHHHxx +2989 8114 1 1 9 9 89 989 989 2989 2989 178 179 ZKAAAA CAMAAA OOOOxx +1100 8115 0 0 0 0 0 100 1100 1100 1100 0 1 IQAAAA DAMAAA VVVVxx +6586 8116 0 2 6 6 86 586 586 1586 6586 172 173 ITAAAA EAMAAA AAAAxx +3124 8117 0 0 4 4 24 124 1124 3124 3124 48 49 EQAAAA FAMAAA HHHHxx +1635 8118 1 3 5 15 35 635 1635 1635 1635 70 71 XKAAAA GAMAAA OOOOxx +3888 8119 0 0 8 8 88 888 1888 3888 3888 176 177 OTAAAA HAMAAA VVVVxx +8369 8120 1 1 9 9 69 369 369 3369 8369 138 139 XJAAAA IAMAAA AAAAxx +3148 8121 0 0 8 8 48 148 1148 3148 3148 96 97 CRAAAA JAMAAA HHHHxx +2842 8122 0 2 2 2 42 842 842 2842 2842 84 85 IFAAAA KAMAAA OOOOxx +4965 8123 1 1 5 5 65 965 965 4965 4965 130 131 ZIAAAA LAMAAA VVVVxx +3742 8124 0 2 2 2 42 742 1742 3742 3742 84 85 YNAAAA MAMAAA AAAAxx +5196 8125 0 0 6 16 96 196 1196 196 5196 192 193 WRAAAA NAMAAA HHHHxx +9105 8126 1 1 5 5 5 105 1105 4105 9105 10 11 FMAAAA OAMAAA OOOOxx +6806 8127 0 2 6 6 6 806 806 1806 6806 12 13 UBAAAA PAMAAA VVVVxx +5849 8128 1 1 9 9 49 849 1849 849 5849 98 99 ZQAAAA QAMAAA AAAAxx +6504 8129 0 0 4 4 4 504 504 1504 6504 8 9 EQAAAA RAMAAA HHHHxx +9841 8130 1 1 1 1 41 841 1841 4841 9841 82 83 NOAAAA SAMAAA OOOOxx +457 8131 1 1 7 17 57 457 457 457 457 114 115 PRAAAA TAMAAA VVVVxx +8856 8132 0 0 6 16 56 856 856 3856 8856 112 
113 QCAAAA UAMAAA AAAAxx +8043 8133 1 3 3 3 43 43 43 3043 8043 86 87 JXAAAA VAMAAA HHHHxx +5933 8134 1 1 3 13 33 933 1933 933 5933 66 67 FUAAAA WAMAAA OOOOxx +5725 8135 1 1 5 5 25 725 1725 725 5725 50 51 FMAAAA XAMAAA VVVVxx +8607 8136 1 3 7 7 7 607 607 3607 8607 14 15 BTAAAA YAMAAA AAAAxx +9280 8137 0 0 0 0 80 280 1280 4280 9280 160 161 YSAAAA ZAMAAA HHHHxx +6017 8138 1 1 7 17 17 17 17 1017 6017 34 35 LXAAAA ABMAAA OOOOxx +4946 8139 0 2 6 6 46 946 946 4946 4946 92 93 GIAAAA BBMAAA VVVVxx +7373 8140 1 1 3 13 73 373 1373 2373 7373 146 147 PXAAAA CBMAAA AAAAxx +8096 8141 0 0 6 16 96 96 96 3096 8096 192 193 KZAAAA DBMAAA HHHHxx +3178 8142 0 2 8 18 78 178 1178 3178 3178 156 157 GSAAAA EBMAAA OOOOxx +1849 8143 1 1 9 9 49 849 1849 1849 1849 98 99 DTAAAA FBMAAA VVVVxx +8813 8144 1 1 3 13 13 813 813 3813 8813 26 27 ZAAAAA GBMAAA AAAAxx +460 8145 0 0 0 0 60 460 460 460 460 120 121 SRAAAA HBMAAA HHHHxx +7756 8146 0 0 6 16 56 756 1756 2756 7756 112 113 IMAAAA IBMAAA OOOOxx +4425 8147 1 1 5 5 25 425 425 4425 4425 50 51 FOAAAA JBMAAA VVVVxx +1602 8148 0 2 2 2 2 602 1602 1602 1602 4 5 QJAAAA KBMAAA AAAAxx +5981 8149 1 1 1 1 81 981 1981 981 5981 162 163 BWAAAA LBMAAA HHHHxx +8139 8150 1 3 9 19 39 139 139 3139 8139 78 79 BBAAAA MBMAAA OOOOxx +754 8151 0 2 4 14 54 754 754 754 754 108 109 ADAAAA NBMAAA VVVVxx +26 8152 0 2 6 6 26 26 26 26 26 52 53 ABAAAA OBMAAA AAAAxx +106 8153 0 2 6 6 6 106 106 106 106 12 13 CEAAAA PBMAAA HHHHxx +7465 8154 1 1 5 5 65 465 1465 2465 7465 130 131 DBAAAA QBMAAA OOOOxx +1048 8155 0 0 8 8 48 48 1048 1048 1048 96 97 IOAAAA RBMAAA VVVVxx +2303 8156 1 3 3 3 3 303 303 2303 2303 6 7 PKAAAA SBMAAA AAAAxx +5794 8157 0 2 4 14 94 794 1794 794 5794 188 189 WOAAAA TBMAAA HHHHxx +3321 8158 1 1 1 1 21 321 1321 3321 3321 42 43 TXAAAA UBMAAA OOOOxx +6122 8159 0 2 2 2 22 122 122 1122 6122 44 45 MBAAAA VBMAAA VVVVxx +6474 8160 0 2 4 14 74 474 474 1474 6474 148 149 APAAAA WBMAAA AAAAxx +827 8161 1 3 7 7 27 827 827 827 827 54 55 VFAAAA XBMAAA HHHHxx +6616 8162 0 0 6 16 16 616 616 1616 6616 32 33 MUAAAA YBMAAA OOOOxx +2131 8163 1 3 1 11 31 131 131 2131 2131 62 63 ZDAAAA ZBMAAA VVVVxx +5483 8164 1 3 3 3 83 483 1483 483 5483 166 167 XCAAAA ACMAAA AAAAxx +606 8165 0 2 6 6 6 606 606 606 606 12 13 IXAAAA BCMAAA HHHHxx +922 8166 0 2 2 2 22 922 922 922 922 44 45 MJAAAA CCMAAA OOOOxx +8475 8167 1 3 5 15 75 475 475 3475 8475 150 151 ZNAAAA DCMAAA VVVVxx +7645 8168 1 1 5 5 45 645 1645 2645 7645 90 91 BIAAAA ECMAAA AAAAxx +5097 8169 1 1 7 17 97 97 1097 97 5097 194 195 BOAAAA FCMAAA HHHHxx +5377 8170 1 1 7 17 77 377 1377 377 5377 154 155 VYAAAA GCMAAA OOOOxx +6116 8171 0 0 6 16 16 116 116 1116 6116 32 33 GBAAAA HCMAAA VVVVxx +8674 8172 0 2 4 14 74 674 674 3674 8674 148 149 QVAAAA ICMAAA AAAAxx +8063 8173 1 3 3 3 63 63 63 3063 8063 126 127 DYAAAA JCMAAA HHHHxx +5271 8174 1 3 1 11 71 271 1271 271 5271 142 143 TUAAAA KCMAAA OOOOxx +1619 8175 1 3 9 19 19 619 1619 1619 1619 38 39 HKAAAA LCMAAA VVVVxx +6419 8176 1 3 9 19 19 419 419 1419 6419 38 39 XMAAAA MCMAAA AAAAxx +7651 8177 1 3 1 11 51 651 1651 2651 7651 102 103 HIAAAA NCMAAA HHHHxx +2897 8178 1 1 7 17 97 897 897 2897 2897 194 195 LHAAAA OCMAAA OOOOxx +8148 8179 0 0 8 8 48 148 148 3148 8148 96 97 KBAAAA PCMAAA VVVVxx +7461 8180 1 1 1 1 61 461 1461 2461 7461 122 123 ZAAAAA QCMAAA AAAAxx +9186 8181 0 2 6 6 86 186 1186 4186 9186 172 173 IPAAAA RCMAAA HHHHxx +7127 8182 1 3 7 7 27 127 1127 2127 7127 54 55 DOAAAA SCMAAA OOOOxx +8233 8183 1 1 3 13 33 233 233 3233 8233 66 67 REAAAA TCMAAA VVVVxx +9651 8184 1 3 1 11 51 651 1651 4651 9651 102 103 FHAAAA UCMAAA AAAAxx 
+6746 8185 0 2 6 6 46 746 746 1746 6746 92 93 MZAAAA VCMAAA HHHHxx +7835 8186 1 3 5 15 35 835 1835 2835 7835 70 71 JPAAAA WCMAAA OOOOxx +8815 8187 1 3 5 15 15 815 815 3815 8815 30 31 BBAAAA XCMAAA VVVVxx +6398 8188 0 2 8 18 98 398 398 1398 6398 196 197 CMAAAA YCMAAA AAAAxx +5344 8189 0 0 4 4 44 344 1344 344 5344 88 89 OXAAAA ZCMAAA HHHHxx +8209 8190 1 1 9 9 9 209 209 3209 8209 18 19 TDAAAA ADMAAA OOOOxx +8444 8191 0 0 4 4 44 444 444 3444 8444 88 89 UMAAAA BDMAAA VVVVxx +5669 8192 1 1 9 9 69 669 1669 669 5669 138 139 BKAAAA CDMAAA AAAAxx +2455 8193 1 3 5 15 55 455 455 2455 2455 110 111 LQAAAA DDMAAA HHHHxx +6767 8194 1 3 7 7 67 767 767 1767 6767 134 135 HAAAAA EDMAAA OOOOxx +135 8195 1 3 5 15 35 135 135 135 135 70 71 FFAAAA FDMAAA VVVVxx +3503 8196 1 3 3 3 3 503 1503 3503 3503 6 7 TEAAAA GDMAAA AAAAxx +6102 8197 0 2 2 2 2 102 102 1102 6102 4 5 SAAAAA HDMAAA HHHHxx +7136 8198 0 0 6 16 36 136 1136 2136 7136 72 73 MOAAAA IDMAAA OOOOxx +4933 8199 1 1 3 13 33 933 933 4933 4933 66 67 THAAAA JDMAAA VVVVxx +8804 8200 0 0 4 4 4 804 804 3804 8804 8 9 QAAAAA KDMAAA AAAAxx +3760 8201 0 0 0 0 60 760 1760 3760 3760 120 121 QOAAAA LDMAAA HHHHxx +8603 8202 1 3 3 3 3 603 603 3603 8603 6 7 XSAAAA MDMAAA OOOOxx +7411 8203 1 3 1 11 11 411 1411 2411 7411 22 23 BZAAAA NDMAAA VVVVxx +834 8204 0 2 4 14 34 834 834 834 834 68 69 CGAAAA ODMAAA AAAAxx +7385 8205 1 1 5 5 85 385 1385 2385 7385 170 171 BYAAAA PDMAAA HHHHxx +3696 8206 0 0 6 16 96 696 1696 3696 3696 192 193 EMAAAA QDMAAA OOOOxx +8720 8207 0 0 0 0 20 720 720 3720 8720 40 41 KXAAAA RDMAAA VVVVxx +4539 8208 1 3 9 19 39 539 539 4539 4539 78 79 PSAAAA SDMAAA AAAAxx +9837 8209 1 1 7 17 37 837 1837 4837 9837 74 75 JOAAAA TDMAAA HHHHxx +8595 8210 1 3 5 15 95 595 595 3595 8595 190 191 PSAAAA UDMAAA OOOOxx +3673 8211 1 1 3 13 73 673 1673 3673 3673 146 147 HLAAAA VDMAAA VVVVxx +475 8212 1 3 5 15 75 475 475 475 475 150 151 HSAAAA WDMAAA AAAAxx +2256 8213 0 0 6 16 56 256 256 2256 2256 112 113 UIAAAA XDMAAA HHHHxx +6349 8214 1 1 9 9 49 349 349 1349 6349 98 99 FKAAAA YDMAAA OOOOxx +9968 8215 0 0 8 8 68 968 1968 4968 9968 136 137 KTAAAA ZDMAAA VVVVxx +7261 8216 1 1 1 1 61 261 1261 2261 7261 122 123 HTAAAA AEMAAA AAAAxx +5799 8217 1 3 9 19 99 799 1799 799 5799 198 199 BPAAAA BEMAAA HHHHxx +8159 8218 1 3 9 19 59 159 159 3159 8159 118 119 VBAAAA CEMAAA OOOOxx +92 8219 0 0 2 12 92 92 92 92 92 184 185 ODAAAA DEMAAA VVVVxx +5927 8220 1 3 7 7 27 927 1927 927 5927 54 55 ZTAAAA EEMAAA AAAAxx +7925 8221 1 1 5 5 25 925 1925 2925 7925 50 51 VSAAAA FEMAAA HHHHxx +5836 8222 0 0 6 16 36 836 1836 836 5836 72 73 MQAAAA GEMAAA OOOOxx +7935 8223 1 3 5 15 35 935 1935 2935 7935 70 71 FTAAAA HEMAAA VVVVxx +5505 8224 1 1 5 5 5 505 1505 505 5505 10 11 TDAAAA IEMAAA AAAAxx +5882 8225 0 2 2 2 82 882 1882 882 5882 164 165 GSAAAA JEMAAA HHHHxx +4411 8226 1 3 1 11 11 411 411 4411 4411 22 23 RNAAAA KEMAAA OOOOxx +64 8227 0 0 4 4 64 64 64 64 64 128 129 MCAAAA LEMAAA VVVVxx +2851 8228 1 3 1 11 51 851 851 2851 2851 102 103 RFAAAA MEMAAA AAAAxx +1665 8229 1 1 5 5 65 665 1665 1665 1665 130 131 BMAAAA NEMAAA HHHHxx +2895 8230 1 3 5 15 95 895 895 2895 2895 190 191 JHAAAA OEMAAA OOOOxx +2210 8231 0 2 0 10 10 210 210 2210 2210 20 21 AHAAAA PEMAAA VVVVxx +9873 8232 1 1 3 13 73 873 1873 4873 9873 146 147 TPAAAA QEMAAA AAAAxx +5402 8233 0 2 2 2 2 402 1402 402 5402 4 5 UZAAAA REMAAA HHHHxx +285 8234 1 1 5 5 85 285 285 285 285 170 171 ZKAAAA SEMAAA OOOOxx +8545 8235 1 1 5 5 45 545 545 3545 8545 90 91 RQAAAA TEMAAA VVVVxx +5328 8236 0 0 8 8 28 328 1328 328 5328 56 57 YWAAAA UEMAAA AAAAxx +733 8237 1 1 3 13 33 733 
733 733 733 66 67 FCAAAA VEMAAA HHHHxx +7726 8238 0 2 6 6 26 726 1726 2726 7726 52 53 ELAAAA WEMAAA OOOOxx +5418 8239 0 2 8 18 18 418 1418 418 5418 36 37 KAAAAA XEMAAA VVVVxx +7761 8240 1 1 1 1 61 761 1761 2761 7761 122 123 NMAAAA YEMAAA AAAAxx +9263 8241 1 3 3 3 63 263 1263 4263 9263 126 127 HSAAAA ZEMAAA HHHHxx +5579 8242 1 3 9 19 79 579 1579 579 5579 158 159 PGAAAA AFMAAA OOOOxx +5434 8243 0 2 4 14 34 434 1434 434 5434 68 69 ABAAAA BFMAAA VVVVxx +5230 8244 0 2 0 10 30 230 1230 230 5230 60 61 ETAAAA CFMAAA AAAAxx +9981 8245 1 1 1 1 81 981 1981 4981 9981 162 163 XTAAAA DFMAAA HHHHxx +5830 8246 0 2 0 10 30 830 1830 830 5830 60 61 GQAAAA EFMAAA OOOOxx +128 8247 0 0 8 8 28 128 128 128 128 56 57 YEAAAA FFMAAA VVVVxx +2734 8248 0 2 4 14 34 734 734 2734 2734 68 69 EBAAAA GFMAAA AAAAxx +4537 8249 1 1 7 17 37 537 537 4537 4537 74 75 NSAAAA HFMAAA HHHHxx +3899 8250 1 3 9 19 99 899 1899 3899 3899 198 199 ZTAAAA IFMAAA OOOOxx +1000 8251 0 0 0 0 0 0 1000 1000 1000 0 1 MMAAAA JFMAAA VVVVxx +9896 8252 0 0 6 16 96 896 1896 4896 9896 192 193 QQAAAA KFMAAA AAAAxx +3640 8253 0 0 0 0 40 640 1640 3640 3640 80 81 AKAAAA LFMAAA HHHHxx +2568 8254 0 0 8 8 68 568 568 2568 2568 136 137 UUAAAA MFMAAA OOOOxx +2026 8255 0 2 6 6 26 26 26 2026 2026 52 53 YZAAAA NFMAAA VVVVxx +3955 8256 1 3 5 15 55 955 1955 3955 3955 110 111 DWAAAA OFMAAA AAAAxx +7152 8257 0 0 2 12 52 152 1152 2152 7152 104 105 CPAAAA PFMAAA HHHHxx +2402 8258 0 2 2 2 2 402 402 2402 2402 4 5 KOAAAA QFMAAA OOOOxx +9522 8259 0 2 2 2 22 522 1522 4522 9522 44 45 GCAAAA RFMAAA VVVVxx +4011 8260 1 3 1 11 11 11 11 4011 4011 22 23 HYAAAA SFMAAA AAAAxx +3297 8261 1 1 7 17 97 297 1297 3297 3297 194 195 VWAAAA TFMAAA HHHHxx +4915 8262 1 3 5 15 15 915 915 4915 4915 30 31 BHAAAA UFMAAA OOOOxx +5397 8263 1 1 7 17 97 397 1397 397 5397 194 195 PZAAAA VFMAAA VVVVxx +5454 8264 0 2 4 14 54 454 1454 454 5454 108 109 UBAAAA WFMAAA AAAAxx +4568 8265 0 0 8 8 68 568 568 4568 4568 136 137 STAAAA XFMAAA HHHHxx +5875 8266 1 3 5 15 75 875 1875 875 5875 150 151 ZRAAAA YFMAAA OOOOxx +3642 8267 0 2 2 2 42 642 1642 3642 3642 84 85 CKAAAA ZFMAAA VVVVxx +8506 8268 0 2 6 6 6 506 506 3506 8506 12 13 EPAAAA AGMAAA AAAAxx +9621 8269 1 1 1 1 21 621 1621 4621 9621 42 43 BGAAAA BGMAAA HHHHxx +7739 8270 1 3 9 19 39 739 1739 2739 7739 78 79 RLAAAA CGMAAA OOOOxx +3987 8271 1 3 7 7 87 987 1987 3987 3987 174 175 JXAAAA DGMAAA VVVVxx +2090 8272 0 2 0 10 90 90 90 2090 2090 180 181 KCAAAA EGMAAA AAAAxx +3838 8273 0 2 8 18 38 838 1838 3838 3838 76 77 QRAAAA FGMAAA HHHHxx +17 8274 1 1 7 17 17 17 17 17 17 34 35 RAAAAA GGMAAA OOOOxx +3406 8275 0 2 6 6 6 406 1406 3406 3406 12 13 ABAAAA HGMAAA VVVVxx +8312 8276 0 0 2 12 12 312 312 3312 8312 24 25 SHAAAA IGMAAA AAAAxx +4034 8277 0 2 4 14 34 34 34 4034 4034 68 69 EZAAAA JGMAAA HHHHxx +1535 8278 1 3 5 15 35 535 1535 1535 1535 70 71 BHAAAA KGMAAA OOOOxx +7198 8279 0 2 8 18 98 198 1198 2198 7198 196 197 WQAAAA LGMAAA VVVVxx +8885 8280 1 1 5 5 85 885 885 3885 8885 170 171 TDAAAA MGMAAA AAAAxx +4081 8281 1 1 1 1 81 81 81 4081 4081 162 163 ZAAAAA NGMAAA HHHHxx +980 8282 0 0 0 0 80 980 980 980 980 160 161 SLAAAA OGMAAA OOOOxx +551 8283 1 3 1 11 51 551 551 551 551 102 103 FVAAAA PGMAAA VVVVxx +7746 8284 0 2 6 6 46 746 1746 2746 7746 92 93 YLAAAA QGMAAA AAAAxx +4756 8285 0 0 6 16 56 756 756 4756 4756 112 113 YAAAAA RGMAAA HHHHxx +3655 8286 1 3 5 15 55 655 1655 3655 3655 110 111 PKAAAA SGMAAA OOOOxx +7075 8287 1 3 5 15 75 75 1075 2075 7075 150 151 DMAAAA TGMAAA VVVVxx +3950 8288 0 2 0 10 50 950 1950 3950 3950 100 101 YVAAAA UGMAAA AAAAxx +2314 8289 0 2 4 14 14 314 314 
2314 2314 28 29 ALAAAA VGMAAA HHHHxx +8432 8290 0 0 2 12 32 432 432 3432 8432 64 65 IMAAAA WGMAAA OOOOxx +62 8291 0 2 2 2 62 62 62 62 62 124 125 KCAAAA XGMAAA VVVVxx +6920 8292 0 0 0 0 20 920 920 1920 6920 40 41 EGAAAA YGMAAA AAAAxx +4077 8293 1 1 7 17 77 77 77 4077 4077 154 155 VAAAAA ZGMAAA HHHHxx +9118 8294 0 2 8 18 18 118 1118 4118 9118 36 37 SMAAAA AHMAAA OOOOxx +5375 8295 1 3 5 15 75 375 1375 375 5375 150 151 TYAAAA BHMAAA VVVVxx +178 8296 0 2 8 18 78 178 178 178 178 156 157 WGAAAA CHMAAA AAAAxx +1079 8297 1 3 9 19 79 79 1079 1079 1079 158 159 NPAAAA DHMAAA HHHHxx +4279 8298 1 3 9 19 79 279 279 4279 4279 158 159 PIAAAA EHMAAA OOOOxx +8436 8299 0 0 6 16 36 436 436 3436 8436 72 73 MMAAAA FHMAAA VVVVxx +1931 8300 1 3 1 11 31 931 1931 1931 1931 62 63 HWAAAA GHMAAA AAAAxx +2096 8301 0 0 6 16 96 96 96 2096 2096 192 193 QCAAAA HHMAAA HHHHxx +1638 8302 0 2 8 18 38 638 1638 1638 1638 76 77 ALAAAA IHMAAA OOOOxx +2788 8303 0 0 8 8 88 788 788 2788 2788 176 177 GDAAAA JHMAAA VVVVxx +4751 8304 1 3 1 11 51 751 751 4751 4751 102 103 TAAAAA KHMAAA AAAAxx +8824 8305 0 0 4 4 24 824 824 3824 8824 48 49 KBAAAA LHMAAA HHHHxx +3098 8306 0 2 8 18 98 98 1098 3098 3098 196 197 EPAAAA MHMAAA OOOOxx +4497 8307 1 1 7 17 97 497 497 4497 4497 194 195 ZQAAAA NHMAAA VVVVxx +5223 8308 1 3 3 3 23 223 1223 223 5223 46 47 XSAAAA OHMAAA AAAAxx +9212 8309 0 0 2 12 12 212 1212 4212 9212 24 25 IQAAAA PHMAAA HHHHxx +4265 8310 1 1 5 5 65 265 265 4265 4265 130 131 BIAAAA QHMAAA OOOOxx +6898 8311 0 2 8 18 98 898 898 1898 6898 196 197 IFAAAA RHMAAA VVVVxx +8808 8312 0 0 8 8 8 808 808 3808 8808 16 17 UAAAAA SHMAAA AAAAxx +5629 8313 1 1 9 9 29 629 1629 629 5629 58 59 NIAAAA THMAAA HHHHxx +3779 8314 1 3 9 19 79 779 1779 3779 3779 158 159 JPAAAA UHMAAA OOOOxx +4972 8315 0 0 2 12 72 972 972 4972 4972 144 145 GJAAAA VHMAAA VVVVxx +4511 8316 1 3 1 11 11 511 511 4511 4511 22 23 NRAAAA WHMAAA AAAAxx +6761 8317 1 1 1 1 61 761 761 1761 6761 122 123 BAAAAA XHMAAA HHHHxx +2335 8318 1 3 5 15 35 335 335 2335 2335 70 71 VLAAAA YHMAAA OOOOxx +732 8319 0 0 2 12 32 732 732 732 732 64 65 ECAAAA ZHMAAA VVVVxx +4757 8320 1 1 7 17 57 757 757 4757 4757 114 115 ZAAAAA AIMAAA AAAAxx +6624 8321 0 0 4 4 24 624 624 1624 6624 48 49 UUAAAA BIMAAA HHHHxx +5869 8322 1 1 9 9 69 869 1869 869 5869 138 139 TRAAAA CIMAAA OOOOxx +5842 8323 0 2 2 2 42 842 1842 842 5842 84 85 SQAAAA DIMAAA VVVVxx +5735 8324 1 3 5 15 35 735 1735 735 5735 70 71 PMAAAA EIMAAA AAAAxx +8276 8325 0 0 6 16 76 276 276 3276 8276 152 153 IGAAAA FIMAAA HHHHxx +7227 8326 1 3 7 7 27 227 1227 2227 7227 54 55 ZRAAAA GIMAAA OOOOxx +4923 8327 1 3 3 3 23 923 923 4923 4923 46 47 JHAAAA HIMAAA VVVVxx +9135 8328 1 3 5 15 35 135 1135 4135 9135 70 71 JNAAAA IIMAAA AAAAxx +5813 8329 1 1 3 13 13 813 1813 813 5813 26 27 PPAAAA JIMAAA HHHHxx +9697 8330 1 1 7 17 97 697 1697 4697 9697 194 195 ZIAAAA KIMAAA OOOOxx +3222 8331 0 2 2 2 22 222 1222 3222 3222 44 45 YTAAAA LIMAAA VVVVxx +2394 8332 0 2 4 14 94 394 394 2394 2394 188 189 COAAAA MIMAAA AAAAxx +5784 8333 0 0 4 4 84 784 1784 784 5784 168 169 MOAAAA NIMAAA HHHHxx +3652 8334 0 0 2 12 52 652 1652 3652 3652 104 105 MKAAAA OIMAAA OOOOxx +8175 8335 1 3 5 15 75 175 175 3175 8175 150 151 LCAAAA PIMAAA VVVVxx +7568 8336 0 0 8 8 68 568 1568 2568 7568 136 137 CFAAAA QIMAAA AAAAxx +6645 8337 1 1 5 5 45 645 645 1645 6645 90 91 PVAAAA RIMAAA HHHHxx +8176 8338 0 0 6 16 76 176 176 3176 8176 152 153 MCAAAA SIMAAA OOOOxx +530 8339 0 2 0 10 30 530 530 530 530 60 61 KUAAAA TIMAAA VVVVxx +5439 8340 1 3 9 19 39 439 1439 439 5439 78 79 FBAAAA UIMAAA AAAAxx +61 8341 1 1 1 1 61 61 61 
61 61 122 123 JCAAAA VIMAAA HHHHxx +3951 8342 1 3 1 11 51 951 1951 3951 3951 102 103 ZVAAAA WIMAAA OOOOxx +5283 8343 1 3 3 3 83 283 1283 283 5283 166 167 FVAAAA XIMAAA VVVVxx +7226 8344 0 2 6 6 26 226 1226 2226 7226 52 53 YRAAAA YIMAAA AAAAxx +1954 8345 0 2 4 14 54 954 1954 1954 1954 108 109 EXAAAA ZIMAAA HHHHxx +334 8346 0 2 4 14 34 334 334 334 334 68 69 WMAAAA AJMAAA OOOOxx +3921 8347 1 1 1 1 21 921 1921 3921 3921 42 43 VUAAAA BJMAAA VVVVxx +6276 8348 0 0 6 16 76 276 276 1276 6276 152 153 KHAAAA CJMAAA AAAAxx +3378 8349 0 2 8 18 78 378 1378 3378 3378 156 157 YZAAAA DJMAAA HHHHxx +5236 8350 0 0 6 16 36 236 1236 236 5236 72 73 KTAAAA EJMAAA OOOOxx +7781 8351 1 1 1 1 81 781 1781 2781 7781 162 163 HNAAAA FJMAAA VVVVxx +8601 8352 1 1 1 1 1 601 601 3601 8601 2 3 VSAAAA GJMAAA AAAAxx +1473 8353 1 1 3 13 73 473 1473 1473 1473 146 147 REAAAA HJMAAA HHHHxx +3246 8354 0 2 6 6 46 246 1246 3246 3246 92 93 WUAAAA IJMAAA OOOOxx +3601 8355 1 1 1 1 1 601 1601 3601 3601 2 3 NIAAAA JJMAAA VVVVxx +6861 8356 1 1 1 1 61 861 861 1861 6861 122 123 XDAAAA KJMAAA AAAAxx +9032 8357 0 0 2 12 32 32 1032 4032 9032 64 65 KJAAAA LJMAAA HHHHxx +216 8358 0 0 6 16 16 216 216 216 216 32 33 IIAAAA MJMAAA OOOOxx +3824 8359 0 0 4 4 24 824 1824 3824 3824 48 49 CRAAAA NJMAAA VVVVxx +8486 8360 0 2 6 6 86 486 486 3486 8486 172 173 KOAAAA OJMAAA AAAAxx +276 8361 0 0 6 16 76 276 276 276 276 152 153 QKAAAA PJMAAA HHHHxx +1838 8362 0 2 8 18 38 838 1838 1838 1838 76 77 SSAAAA QJMAAA OOOOxx +6175 8363 1 3 5 15 75 175 175 1175 6175 150 151 NDAAAA RJMAAA VVVVxx +3719 8364 1 3 9 19 19 719 1719 3719 3719 38 39 BNAAAA SJMAAA AAAAxx +6958 8365 0 2 8 18 58 958 958 1958 6958 116 117 QHAAAA TJMAAA HHHHxx +6822 8366 0 2 2 2 22 822 822 1822 6822 44 45 KCAAAA UJMAAA OOOOxx +3318 8367 0 2 8 18 18 318 1318 3318 3318 36 37 QXAAAA VJMAAA VVVVxx +7222 8368 0 2 2 2 22 222 1222 2222 7222 44 45 URAAAA WJMAAA AAAAxx +85 8369 1 1 5 5 85 85 85 85 85 170 171 HDAAAA XJMAAA HHHHxx +5158 8370 0 2 8 18 58 158 1158 158 5158 116 117 KQAAAA YJMAAA OOOOxx +6360 8371 0 0 0 0 60 360 360 1360 6360 120 121 QKAAAA ZJMAAA VVVVxx +2599 8372 1 3 9 19 99 599 599 2599 2599 198 199 ZVAAAA AKMAAA AAAAxx +4002 8373 0 2 2 2 2 2 2 4002 4002 4 5 YXAAAA BKMAAA HHHHxx +6597 8374 1 1 7 17 97 597 597 1597 6597 194 195 TTAAAA CKMAAA OOOOxx +5762 8375 0 2 2 2 62 762 1762 762 5762 124 125 QNAAAA DKMAAA VVVVxx +8383 8376 1 3 3 3 83 383 383 3383 8383 166 167 LKAAAA EKMAAA AAAAxx +4686 8377 0 2 6 6 86 686 686 4686 4686 172 173 GYAAAA FKMAAA HHHHxx +5972 8378 0 0 2 12 72 972 1972 972 5972 144 145 SVAAAA GKMAAA OOOOxx +1432 8379 0 0 2 12 32 432 1432 1432 1432 64 65 CDAAAA HKMAAA VVVVxx +1601 8380 1 1 1 1 1 601 1601 1601 1601 2 3 PJAAAA IKMAAA AAAAxx +3012 8381 0 0 2 12 12 12 1012 3012 3012 24 25 WLAAAA JKMAAA HHHHxx +9345 8382 1 1 5 5 45 345 1345 4345 9345 90 91 LVAAAA KKMAAA OOOOxx +8869 8383 1 1 9 9 69 869 869 3869 8869 138 139 DDAAAA LKMAAA VVVVxx +6612 8384 0 0 2 12 12 612 612 1612 6612 24 25 IUAAAA MKMAAA AAAAxx +262 8385 0 2 2 2 62 262 262 262 262 124 125 CKAAAA NKMAAA HHHHxx +300 8386 0 0 0 0 0 300 300 300 300 0 1 OLAAAA OKMAAA OOOOxx +3045 8387 1 1 5 5 45 45 1045 3045 3045 90 91 DNAAAA PKMAAA VVVVxx +7252 8388 0 0 2 12 52 252 1252 2252 7252 104 105 YSAAAA QKMAAA AAAAxx +9099 8389 1 3 9 19 99 99 1099 4099 9099 198 199 ZLAAAA RKMAAA HHHHxx +9006 8390 0 2 6 6 6 6 1006 4006 9006 12 13 KIAAAA SKMAAA OOOOxx +3078 8391 0 2 8 18 78 78 1078 3078 3078 156 157 KOAAAA TKMAAA VVVVxx +5159 8392 1 3 9 19 59 159 1159 159 5159 118 119 LQAAAA UKMAAA AAAAxx +9329 8393 1 1 9 9 29 329 1329 4329 9329 58 59 
VUAAAA VKMAAA HHHHxx +1393 8394 1 1 3 13 93 393 1393 1393 1393 186 187 PBAAAA WKMAAA OOOOxx +5894 8395 0 2 4 14 94 894 1894 894 5894 188 189 SSAAAA XKMAAA VVVVxx +11 8396 1 3 1 11 11 11 11 11 11 22 23 LAAAAA YKMAAA AAAAxx +5606 8397 0 2 6 6 6 606 1606 606 5606 12 13 QHAAAA ZKMAAA HHHHxx +5541 8398 1 1 1 1 41 541 1541 541 5541 82 83 DFAAAA ALMAAA OOOOxx +2689 8399 1 1 9 9 89 689 689 2689 2689 178 179 LZAAAA BLMAAA VVVVxx +1023 8400 1 3 3 3 23 23 1023 1023 1023 46 47 JNAAAA CLMAAA AAAAxx +8134 8401 0 2 4 14 34 134 134 3134 8134 68 69 WAAAAA DLMAAA HHHHxx +5923 8402 1 3 3 3 23 923 1923 923 5923 46 47 VTAAAA ELMAAA OOOOxx +6056 8403 0 0 6 16 56 56 56 1056 6056 112 113 YYAAAA FLMAAA VVVVxx +653 8404 1 1 3 13 53 653 653 653 653 106 107 DZAAAA GLMAAA AAAAxx +367 8405 1 3 7 7 67 367 367 367 367 134 135 DOAAAA HLMAAA HHHHxx +1828 8406 0 0 8 8 28 828 1828 1828 1828 56 57 ISAAAA ILMAAA OOOOxx +6506 8407 0 2 6 6 6 506 506 1506 6506 12 13 GQAAAA JLMAAA VVVVxx +5772 8408 0 0 2 12 72 772 1772 772 5772 144 145 AOAAAA KLMAAA AAAAxx +8052 8409 0 0 2 12 52 52 52 3052 8052 104 105 SXAAAA LLMAAA HHHHxx +2633 8410 1 1 3 13 33 633 633 2633 2633 66 67 HXAAAA MLMAAA OOOOxx +4878 8411 0 2 8 18 78 878 878 4878 4878 156 157 QFAAAA NLMAAA VVVVxx +5621 8412 1 1 1 1 21 621 1621 621 5621 42 43 FIAAAA OLMAAA AAAAxx +41 8413 1 1 1 1 41 41 41 41 41 82 83 PBAAAA PLMAAA HHHHxx +4613 8414 1 1 3 13 13 613 613 4613 4613 26 27 LVAAAA QLMAAA OOOOxx +9389 8415 1 1 9 9 89 389 1389 4389 9389 178 179 DXAAAA RLMAAA VVVVxx +9414 8416 0 2 4 14 14 414 1414 4414 9414 28 29 CYAAAA SLMAAA AAAAxx +3583 8417 1 3 3 3 83 583 1583 3583 3583 166 167 VHAAAA TLMAAA HHHHxx +3454 8418 0 2 4 14 54 454 1454 3454 3454 108 109 WCAAAA ULMAAA OOOOxx +719 8419 1 3 9 19 19 719 719 719 719 38 39 RBAAAA VLMAAA VVVVxx +6188 8420 0 0 8 8 88 188 188 1188 6188 176 177 AEAAAA WLMAAA AAAAxx +2288 8421 0 0 8 8 88 288 288 2288 2288 176 177 AKAAAA XLMAAA HHHHxx +1287 8422 1 3 7 7 87 287 1287 1287 1287 174 175 NXAAAA YLMAAA OOOOxx +1397 8423 1 1 7 17 97 397 1397 1397 1397 194 195 TBAAAA ZLMAAA VVVVxx +7763 8424 1 3 3 3 63 763 1763 2763 7763 126 127 PMAAAA AMMAAA AAAAxx +5194 8425 0 2 4 14 94 194 1194 194 5194 188 189 URAAAA BMMAAA HHHHxx +3167 8426 1 3 7 7 67 167 1167 3167 3167 134 135 VRAAAA CMMAAA OOOOxx +9218 8427 0 2 8 18 18 218 1218 4218 9218 36 37 OQAAAA DMMAAA VVVVxx +2065 8428 1 1 5 5 65 65 65 2065 2065 130 131 LBAAAA EMMAAA AAAAxx +9669 8429 1 1 9 9 69 669 1669 4669 9669 138 139 XHAAAA FMMAAA HHHHxx +146 8430 0 2 6 6 46 146 146 146 146 92 93 QFAAAA GMMAAA OOOOxx +6141 8431 1 1 1 1 41 141 141 1141 6141 82 83 FCAAAA HMMAAA VVVVxx +2843 8432 1 3 3 3 43 843 843 2843 2843 86 87 JFAAAA IMMAAA AAAAxx +7934 8433 0 2 4 14 34 934 1934 2934 7934 68 69 ETAAAA JMMAAA HHHHxx +2536 8434 0 0 6 16 36 536 536 2536 2536 72 73 OTAAAA KMMAAA OOOOxx +7088 8435 0 0 8 8 88 88 1088 2088 7088 176 177 QMAAAA LMMAAA VVVVxx +2519 8436 1 3 9 19 19 519 519 2519 2519 38 39 XSAAAA MMMAAA AAAAxx +6650 8437 0 2 0 10 50 650 650 1650 6650 100 101 UVAAAA NMMAAA HHHHxx +3007 8438 1 3 7 7 7 7 1007 3007 3007 14 15 RLAAAA OMMAAA OOOOxx +4507 8439 1 3 7 7 7 507 507 4507 4507 14 15 JRAAAA PMMAAA VVVVxx +4892 8440 0 0 2 12 92 892 892 4892 4892 184 185 EGAAAA QMMAAA AAAAxx +7159 8441 1 3 9 19 59 159 1159 2159 7159 118 119 JPAAAA RMMAAA HHHHxx +3171 8442 1 3 1 11 71 171 1171 3171 3171 142 143 ZRAAAA SMMAAA OOOOxx +1080 8443 0 0 0 0 80 80 1080 1080 1080 160 161 OPAAAA TMMAAA VVVVxx +7248 8444 0 0 8 8 48 248 1248 2248 7248 96 97 USAAAA UMMAAA AAAAxx +7230 8445 0 2 0 10 30 230 1230 2230 7230 60 61 CSAAAA VMMAAA 
HHHHxx +3823 8446 1 3 3 3 23 823 1823 3823 3823 46 47 BRAAAA WMMAAA OOOOxx +5517 8447 1 1 7 17 17 517 1517 517 5517 34 35 FEAAAA XMMAAA VVVVxx +1482 8448 0 2 2 2 82 482 1482 1482 1482 164 165 AFAAAA YMMAAA AAAAxx +9953 8449 1 1 3 13 53 953 1953 4953 9953 106 107 VSAAAA ZMMAAA HHHHxx +2754 8450 0 2 4 14 54 754 754 2754 2754 108 109 YBAAAA ANMAAA OOOOxx +3875 8451 1 3 5 15 75 875 1875 3875 3875 150 151 BTAAAA BNMAAA VVVVxx +9800 8452 0 0 0 0 0 800 1800 4800 9800 0 1 YMAAAA CNMAAA AAAAxx +8819 8453 1 3 9 19 19 819 819 3819 8819 38 39 FBAAAA DNMAAA HHHHxx +8267 8454 1 3 7 7 67 267 267 3267 8267 134 135 ZFAAAA ENMAAA OOOOxx +520 8455 0 0 0 0 20 520 520 520 520 40 41 AUAAAA FNMAAA VVVVxx +5770 8456 0 2 0 10 70 770 1770 770 5770 140 141 YNAAAA GNMAAA AAAAxx +2114 8457 0 2 4 14 14 114 114 2114 2114 28 29 IDAAAA HNMAAA HHHHxx +5045 8458 1 1 5 5 45 45 1045 45 5045 90 91 BMAAAA INMAAA OOOOxx +1094 8459 0 2 4 14 94 94 1094 1094 1094 188 189 CQAAAA JNMAAA VVVVxx +8786 8460 0 2 6 6 86 786 786 3786 8786 172 173 YZAAAA KNMAAA AAAAxx +353 8461 1 1 3 13 53 353 353 353 353 106 107 PNAAAA LNMAAA HHHHxx +290 8462 0 2 0 10 90 290 290 290 290 180 181 ELAAAA MNMAAA OOOOxx +3376 8463 0 0 6 16 76 376 1376 3376 3376 152 153 WZAAAA NNMAAA VVVVxx +9305 8464 1 1 5 5 5 305 1305 4305 9305 10 11 XTAAAA ONMAAA AAAAxx +186 8465 0 2 6 6 86 186 186 186 186 172 173 EHAAAA PNMAAA HHHHxx +4817 8466 1 1 7 17 17 817 817 4817 4817 34 35 HDAAAA QNMAAA OOOOxx +4638 8467 0 2 8 18 38 638 638 4638 4638 76 77 KWAAAA RNMAAA VVVVxx +3558 8468 0 2 8 18 58 558 1558 3558 3558 116 117 WGAAAA SNMAAA AAAAxx +9285 8469 1 1 5 5 85 285 1285 4285 9285 170 171 DTAAAA TNMAAA HHHHxx +848 8470 0 0 8 8 48 848 848 848 848 96 97 QGAAAA UNMAAA OOOOxx +8923 8471 1 3 3 3 23 923 923 3923 8923 46 47 FFAAAA VNMAAA VVVVxx +6826 8472 0 2 6 6 26 826 826 1826 6826 52 53 OCAAAA WNMAAA AAAAxx +5187 8473 1 3 7 7 87 187 1187 187 5187 174 175 NRAAAA XNMAAA HHHHxx +2398 8474 0 2 8 18 98 398 398 2398 2398 196 197 GOAAAA YNMAAA OOOOxx +7653 8475 1 1 3 13 53 653 1653 2653 7653 106 107 JIAAAA ZNMAAA VVVVxx +8835 8476 1 3 5 15 35 835 835 3835 8835 70 71 VBAAAA AOMAAA AAAAxx +5736 8477 0 0 6 16 36 736 1736 736 5736 72 73 QMAAAA BOMAAA HHHHxx +1238 8478 0 2 8 18 38 238 1238 1238 1238 76 77 QVAAAA COMAAA OOOOxx +6021 8479 1 1 1 1 21 21 21 1021 6021 42 43 PXAAAA DOMAAA VVVVxx +6815 8480 1 3 5 15 15 815 815 1815 6815 30 31 DCAAAA EOMAAA AAAAxx +2549 8481 1 1 9 9 49 549 549 2549 2549 98 99 BUAAAA FOMAAA HHHHxx +5657 8482 1 1 7 17 57 657 1657 657 5657 114 115 PJAAAA GOMAAA OOOOxx +6855 8483 1 3 5 15 55 855 855 1855 6855 110 111 RDAAAA HOMAAA VVVVxx +1225 8484 1 1 5 5 25 225 1225 1225 1225 50 51 DVAAAA IOMAAA AAAAxx +7452 8485 0 0 2 12 52 452 1452 2452 7452 104 105 QAAAAA JOMAAA HHHHxx +2479 8486 1 3 9 19 79 479 479 2479 2479 158 159 JRAAAA KOMAAA OOOOxx +7974 8487 0 2 4 14 74 974 1974 2974 7974 148 149 SUAAAA LOMAAA VVVVxx +1212 8488 0 0 2 12 12 212 1212 1212 1212 24 25 QUAAAA MOMAAA AAAAxx +8883 8489 1 3 3 3 83 883 883 3883 8883 166 167 RDAAAA NOMAAA HHHHxx +8150 8490 0 2 0 10 50 150 150 3150 8150 100 101 MBAAAA OOMAAA OOOOxx +3392 8491 0 0 2 12 92 392 1392 3392 3392 184 185 MAAAAA POMAAA VVVVxx +6774 8492 0 2 4 14 74 774 774 1774 6774 148 149 OAAAAA QOMAAA AAAAxx +904 8493 0 0 4 4 4 904 904 904 904 8 9 UIAAAA ROMAAA HHHHxx +5068 8494 0 0 8 8 68 68 1068 68 5068 136 137 YMAAAA SOMAAA OOOOxx +9339 8495 1 3 9 19 39 339 1339 4339 9339 78 79 FVAAAA TOMAAA VVVVxx +1062 8496 0 2 2 2 62 62 1062 1062 1062 124 125 WOAAAA UOMAAA AAAAxx +3841 8497 1 1 1 1 41 841 1841 3841 3841 82 83 TRAAAA 
VOMAAA HHHHxx +8924 8498 0 0 4 4 24 924 924 3924 8924 48 49 GFAAAA WOMAAA OOOOxx +9795 8499 1 3 5 15 95 795 1795 4795 9795 190 191 TMAAAA XOMAAA VVVVxx +3981 8500 1 1 1 1 81 981 1981 3981 3981 162 163 DXAAAA YOMAAA AAAAxx +4290 8501 0 2 0 10 90 290 290 4290 4290 180 181 AJAAAA ZOMAAA HHHHxx +1067 8502 1 3 7 7 67 67 1067 1067 1067 134 135 BPAAAA APMAAA OOOOxx +8679 8503 1 3 9 19 79 679 679 3679 8679 158 159 VVAAAA BPMAAA VVVVxx +2894 8504 0 2 4 14 94 894 894 2894 2894 188 189 IHAAAA CPMAAA AAAAxx +9248 8505 0 0 8 8 48 248 1248 4248 9248 96 97 SRAAAA DPMAAA HHHHxx +1072 8506 0 0 2 12 72 72 1072 1072 1072 144 145 GPAAAA EPMAAA OOOOxx +3510 8507 0 2 0 10 10 510 1510 3510 3510 20 21 AFAAAA FPMAAA VVVVxx +6871 8508 1 3 1 11 71 871 871 1871 6871 142 143 HEAAAA GPMAAA AAAAxx +8701 8509 1 1 1 1 1 701 701 3701 8701 2 3 RWAAAA HPMAAA HHHHxx +8170 8510 0 2 0 10 70 170 170 3170 8170 140 141 GCAAAA IPMAAA OOOOxx +2730 8511 0 2 0 10 30 730 730 2730 2730 60 61 ABAAAA JPMAAA VVVVxx +2668 8512 0 0 8 8 68 668 668 2668 2668 136 137 QYAAAA KPMAAA AAAAxx +8723 8513 1 3 3 3 23 723 723 3723 8723 46 47 NXAAAA LPMAAA HHHHxx +3439 8514 1 3 9 19 39 439 1439 3439 3439 78 79 HCAAAA MPMAAA OOOOxx +6219 8515 1 3 9 19 19 219 219 1219 6219 38 39 FFAAAA NPMAAA VVVVxx +4264 8516 0 0 4 4 64 264 264 4264 4264 128 129 AIAAAA OPMAAA AAAAxx +3929 8517 1 1 9 9 29 929 1929 3929 3929 58 59 DVAAAA PPMAAA HHHHxx +7 8518 1 3 7 7 7 7 7 7 7 14 15 HAAAAA QPMAAA OOOOxx +3737 8519 1 1 7 17 37 737 1737 3737 3737 74 75 TNAAAA RPMAAA VVVVxx +358 8520 0 2 8 18 58 358 358 358 358 116 117 UNAAAA SPMAAA AAAAxx +5128 8521 0 0 8 8 28 128 1128 128 5128 56 57 GPAAAA TPMAAA HHHHxx +7353 8522 1 1 3 13 53 353 1353 2353 7353 106 107 VWAAAA UPMAAA OOOOxx +8758 8523 0 2 8 18 58 758 758 3758 8758 116 117 WYAAAA VPMAAA VVVVxx +7284 8524 0 0 4 4 84 284 1284 2284 7284 168 169 EUAAAA WPMAAA AAAAxx +4037 8525 1 1 7 17 37 37 37 4037 4037 74 75 HZAAAA XPMAAA HHHHxx +435 8526 1 3 5 15 35 435 435 435 435 70 71 TQAAAA YPMAAA OOOOxx +3580 8527 0 0 0 0 80 580 1580 3580 3580 160 161 SHAAAA ZPMAAA VVVVxx +4554 8528 0 2 4 14 54 554 554 4554 4554 108 109 ETAAAA AQMAAA AAAAxx +4337 8529 1 1 7 17 37 337 337 4337 4337 74 75 VKAAAA BQMAAA HHHHxx +512 8530 0 0 2 12 12 512 512 512 512 24 25 STAAAA CQMAAA OOOOxx +2032 8531 0 0 2 12 32 32 32 2032 2032 64 65 EAAAAA DQMAAA VVVVxx +1755 8532 1 3 5 15 55 755 1755 1755 1755 110 111 NPAAAA EQMAAA AAAAxx +9923 8533 1 3 3 3 23 923 1923 4923 9923 46 47 RRAAAA FQMAAA HHHHxx +3747 8534 1 3 7 7 47 747 1747 3747 3747 94 95 DOAAAA GQMAAA OOOOxx +27 8535 1 3 7 7 27 27 27 27 27 54 55 BBAAAA HQMAAA VVVVxx +3075 8536 1 3 5 15 75 75 1075 3075 3075 150 151 HOAAAA IQMAAA AAAAxx +6259 8537 1 3 9 19 59 259 259 1259 6259 118 119 TGAAAA JQMAAA HHHHxx +2940 8538 0 0 0 0 40 940 940 2940 2940 80 81 CJAAAA KQMAAA OOOOxx +5724 8539 0 0 4 4 24 724 1724 724 5724 48 49 EMAAAA LQMAAA VVVVxx +5638 8540 0 2 8 18 38 638 1638 638 5638 76 77 WIAAAA MQMAAA AAAAxx +479 8541 1 3 9 19 79 479 479 479 479 158 159 LSAAAA NQMAAA HHHHxx +4125 8542 1 1 5 5 25 125 125 4125 4125 50 51 RCAAAA OQMAAA OOOOxx +1525 8543 1 1 5 5 25 525 1525 1525 1525 50 51 RGAAAA PQMAAA VVVVxx +7529 8544 1 1 9 9 29 529 1529 2529 7529 58 59 PDAAAA QQMAAA AAAAxx +931 8545 1 3 1 11 31 931 931 931 931 62 63 VJAAAA RQMAAA HHHHxx +5175 8546 1 3 5 15 75 175 1175 175 5175 150 151 BRAAAA SQMAAA OOOOxx +6798 8547 0 2 8 18 98 798 798 1798 6798 196 197 MBAAAA TQMAAA VVVVxx +2111 8548 1 3 1 11 11 111 111 2111 2111 22 23 FDAAAA UQMAAA AAAAxx +6145 8549 1 1 5 5 45 145 145 1145 6145 90 91 JCAAAA VQMAAA HHHHxx +4712 
8550 0 0 2 12 12 712 712 4712 4712 24 25 GZAAAA WQMAAA OOOOxx +3110 8551 0 2 0 10 10 110 1110 3110 3110 20 21 QPAAAA XQMAAA VVVVxx +97 8552 1 1 7 17 97 97 97 97 97 194 195 TDAAAA YQMAAA AAAAxx +758 8553 0 2 8 18 58 758 758 758 758 116 117 EDAAAA ZQMAAA HHHHxx +1895 8554 1 3 5 15 95 895 1895 1895 1895 190 191 XUAAAA ARMAAA OOOOxx +5289 8555 1 1 9 9 89 289 1289 289 5289 178 179 LVAAAA BRMAAA VVVVxx +5026 8556 0 2 6 6 26 26 1026 26 5026 52 53 ILAAAA CRMAAA AAAAxx +4725 8557 1 1 5 5 25 725 725 4725 4725 50 51 TZAAAA DRMAAA HHHHxx +1679 8558 1 3 9 19 79 679 1679 1679 1679 158 159 PMAAAA ERMAAA OOOOxx +4433 8559 1 1 3 13 33 433 433 4433 4433 66 67 NOAAAA FRMAAA VVVVxx +5340 8560 0 0 0 0 40 340 1340 340 5340 80 81 KXAAAA GRMAAA AAAAxx +6340 8561 0 0 0 0 40 340 340 1340 6340 80 81 WJAAAA HRMAAA HHHHxx +3261 8562 1 1 1 1 61 261 1261 3261 3261 122 123 LVAAAA IRMAAA OOOOxx +8108 8563 0 0 8 8 8 108 108 3108 8108 16 17 WZAAAA JRMAAA VVVVxx +8785 8564 1 1 5 5 85 785 785 3785 8785 170 171 XZAAAA KRMAAA AAAAxx +7391 8565 1 3 1 11 91 391 1391 2391 7391 182 183 HYAAAA LRMAAA HHHHxx +1496 8566 0 0 6 16 96 496 1496 1496 1496 192 193 OFAAAA MRMAAA OOOOxx +1484 8567 0 0 4 4 84 484 1484 1484 1484 168 169 CFAAAA NRMAAA VVVVxx +5884 8568 0 0 4 4 84 884 1884 884 5884 168 169 ISAAAA ORMAAA AAAAxx +342 8569 0 2 2 2 42 342 342 342 342 84 85 ENAAAA PRMAAA HHHHxx +7659 8570 1 3 9 19 59 659 1659 2659 7659 118 119 PIAAAA QRMAAA OOOOxx +6635 8571 1 3 5 15 35 635 635 1635 6635 70 71 FVAAAA RRMAAA VVVVxx +8507 8572 1 3 7 7 7 507 507 3507 8507 14 15 FPAAAA SRMAAA AAAAxx +2583 8573 1 3 3 3 83 583 583 2583 2583 166 167 JVAAAA TRMAAA HHHHxx +6533 8574 1 1 3 13 33 533 533 1533 6533 66 67 HRAAAA URMAAA OOOOxx +5879 8575 1 3 9 19 79 879 1879 879 5879 158 159 DSAAAA VRMAAA VVVVxx +5511 8576 1 3 1 11 11 511 1511 511 5511 22 23 ZDAAAA WRMAAA AAAAxx +3682 8577 0 2 2 2 82 682 1682 3682 3682 164 165 QLAAAA XRMAAA HHHHxx +7182 8578 0 2 2 2 82 182 1182 2182 7182 164 165 GQAAAA YRMAAA OOOOxx +1409 8579 1 1 9 9 9 409 1409 1409 1409 18 19 FCAAAA ZRMAAA VVVVxx +3363 8580 1 3 3 3 63 363 1363 3363 3363 126 127 JZAAAA ASMAAA AAAAxx +729 8581 1 1 9 9 29 729 729 729 729 58 59 BCAAAA BSMAAA HHHHxx +5857 8582 1 1 7 17 57 857 1857 857 5857 114 115 HRAAAA CSMAAA OOOOxx +235 8583 1 3 5 15 35 235 235 235 235 70 71 BJAAAA DSMAAA VVVVxx +193 8584 1 1 3 13 93 193 193 193 193 186 187 LHAAAA ESMAAA AAAAxx +5586 8585 0 2 6 6 86 586 1586 586 5586 172 173 WGAAAA FSMAAA HHHHxx +6203 8586 1 3 3 3 3 203 203 1203 6203 6 7 PEAAAA GSMAAA OOOOxx +6795 8587 1 3 5 15 95 795 795 1795 6795 190 191 JBAAAA HSMAAA VVVVxx +3211 8588 1 3 1 11 11 211 1211 3211 3211 22 23 NTAAAA ISMAAA AAAAxx +9763 8589 1 3 3 3 63 763 1763 4763 9763 126 127 NLAAAA JSMAAA HHHHxx +9043 8590 1 3 3 3 43 43 1043 4043 9043 86 87 VJAAAA KSMAAA OOOOxx +2854 8591 0 2 4 14 54 854 854 2854 2854 108 109 UFAAAA LSMAAA VVVVxx +565 8592 1 1 5 5 65 565 565 565 565 130 131 TVAAAA MSMAAA AAAAxx +9284 8593 0 0 4 4 84 284 1284 4284 9284 168 169 CTAAAA NSMAAA HHHHxx +7886 8594 0 2 6 6 86 886 1886 2886 7886 172 173 IRAAAA OSMAAA OOOOxx +122 8595 0 2 2 2 22 122 122 122 122 44 45 SEAAAA PSMAAA VVVVxx +4934 8596 0 2 4 14 34 934 934 4934 4934 68 69 UHAAAA QSMAAA AAAAxx +1766 8597 0 2 6 6 66 766 1766 1766 1766 132 133 YPAAAA RSMAAA HHHHxx +2554 8598 0 2 4 14 54 554 554 2554 2554 108 109 GUAAAA SSMAAA OOOOxx +488 8599 0 0 8 8 88 488 488 488 488 176 177 USAAAA TSMAAA VVVVxx +825 8600 1 1 5 5 25 825 825 825 825 50 51 TFAAAA USMAAA AAAAxx +678 8601 0 2 8 18 78 678 678 678 678 156 157 CAAAAA VSMAAA HHHHxx +4543 8602 1 3 3 3 43 
543 543 4543 4543 86 87 TSAAAA WSMAAA OOOOxx +1699 8603 1 3 9 19 99 699 1699 1699 1699 198 199 JNAAAA XSMAAA VVVVxx +3771 8604 1 3 1 11 71 771 1771 3771 3771 142 143 BPAAAA YSMAAA AAAAxx +1234 8605 0 2 4 14 34 234 1234 1234 1234 68 69 MVAAAA ZSMAAA HHHHxx +4152 8606 0 0 2 12 52 152 152 4152 4152 104 105 SDAAAA ATMAAA OOOOxx +1632 8607 0 0 2 12 32 632 1632 1632 1632 64 65 UKAAAA BTMAAA VVVVxx +4988 8608 0 0 8 8 88 988 988 4988 4988 176 177 WJAAAA CTMAAA AAAAxx +1980 8609 0 0 0 0 80 980 1980 1980 1980 160 161 EYAAAA DTMAAA HHHHxx +7479 8610 1 3 9 19 79 479 1479 2479 7479 158 159 RBAAAA ETMAAA OOOOxx +2586 8611 0 2 6 6 86 586 586 2586 2586 172 173 MVAAAA FTMAAA VVVVxx +5433 8612 1 1 3 13 33 433 1433 433 5433 66 67 ZAAAAA GTMAAA AAAAxx +2261 8613 1 1 1 1 61 261 261 2261 2261 122 123 ZIAAAA HTMAAA HHHHxx +1180 8614 0 0 0 0 80 180 1180 1180 1180 160 161 KTAAAA ITMAAA OOOOxx +3938 8615 0 2 8 18 38 938 1938 3938 3938 76 77 MVAAAA JTMAAA VVVVxx +6714 8616 0 2 4 14 14 714 714 1714 6714 28 29 GYAAAA KTMAAA AAAAxx +2890 8617 0 2 0 10 90 890 890 2890 2890 180 181 EHAAAA LTMAAA HHHHxx +7379 8618 1 3 9 19 79 379 1379 2379 7379 158 159 VXAAAA MTMAAA OOOOxx +5896 8619 0 0 6 16 96 896 1896 896 5896 192 193 USAAAA NTMAAA VVVVxx +5949 8620 1 1 9 9 49 949 1949 949 5949 98 99 VUAAAA OTMAAA AAAAxx +3194 8621 0 2 4 14 94 194 1194 3194 3194 188 189 WSAAAA PTMAAA HHHHxx +9325 8622 1 1 5 5 25 325 1325 4325 9325 50 51 RUAAAA QTMAAA OOOOxx +9531 8623 1 3 1 11 31 531 1531 4531 9531 62 63 PCAAAA RTMAAA VVVVxx +711 8624 1 3 1 11 11 711 711 711 711 22 23 JBAAAA STMAAA AAAAxx +2450 8625 0 2 0 10 50 450 450 2450 2450 100 101 GQAAAA TTMAAA HHHHxx +1929 8626 1 1 9 9 29 929 1929 1929 1929 58 59 FWAAAA UTMAAA OOOOxx +6165 8627 1 1 5 5 65 165 165 1165 6165 130 131 DDAAAA VTMAAA VVVVxx +4050 8628 0 2 0 10 50 50 50 4050 4050 100 101 UZAAAA WTMAAA AAAAxx +9011 8629 1 3 1 11 11 11 1011 4011 9011 22 23 PIAAAA XTMAAA HHHHxx +7916 8630 0 0 6 16 16 916 1916 2916 7916 32 33 MSAAAA YTMAAA OOOOxx +9136 8631 0 0 6 16 36 136 1136 4136 9136 72 73 KNAAAA ZTMAAA VVVVxx +8782 8632 0 2 2 2 82 782 782 3782 8782 164 165 UZAAAA AUMAAA AAAAxx +8491 8633 1 3 1 11 91 491 491 3491 8491 182 183 POAAAA BUMAAA HHHHxx +5114 8634 0 2 4 14 14 114 1114 114 5114 28 29 SOAAAA CUMAAA OOOOxx +5815 8635 1 3 5 15 15 815 1815 815 5815 30 31 RPAAAA DUMAAA VVVVxx +5628 8636 0 0 8 8 28 628 1628 628 5628 56 57 MIAAAA EUMAAA AAAAxx +810 8637 0 2 0 10 10 810 810 810 810 20 21 EFAAAA FUMAAA HHHHxx +6178 8638 0 2 8 18 78 178 178 1178 6178 156 157 QDAAAA GUMAAA OOOOxx +2619 8639 1 3 9 19 19 619 619 2619 2619 38 39 TWAAAA HUMAAA VVVVxx +3340 8640 0 0 0 0 40 340 1340 3340 3340 80 81 MYAAAA IUMAAA AAAAxx +2491 8641 1 3 1 11 91 491 491 2491 2491 182 183 VRAAAA JUMAAA HHHHxx +3574 8642 0 2 4 14 74 574 1574 3574 3574 148 149 MHAAAA KUMAAA OOOOxx +6754 8643 0 2 4 14 54 754 754 1754 6754 108 109 UZAAAA LUMAAA VVVVxx +1566 8644 0 2 6 6 66 566 1566 1566 1566 132 133 GIAAAA MUMAAA AAAAxx +9174 8645 0 2 4 14 74 174 1174 4174 9174 148 149 WOAAAA NUMAAA HHHHxx +1520 8646 0 0 0 0 20 520 1520 1520 1520 40 41 MGAAAA OUMAAA OOOOxx +2691 8647 1 3 1 11 91 691 691 2691 2691 182 183 NZAAAA PUMAAA VVVVxx +6961 8648 1 1 1 1 61 961 961 1961 6961 122 123 THAAAA QUMAAA AAAAxx +5722 8649 0 2 2 2 22 722 1722 722 5722 44 45 CMAAAA RUMAAA HHHHxx +9707 8650 1 3 7 7 7 707 1707 4707 9707 14 15 JJAAAA SUMAAA OOOOxx +2891 8651 1 3 1 11 91 891 891 2891 2891 182 183 FHAAAA TUMAAA VVVVxx +341 8652 1 1 1 1 41 341 341 341 341 82 83 DNAAAA UUMAAA AAAAxx +4690 8653 0 2 0 10 90 690 690 4690 4690 180 181 KYAAAA VUMAAA 
HHHHxx +7841 8654 1 1 1 1 41 841 1841 2841 7841 82 83 PPAAAA WUMAAA OOOOxx +6615 8655 1 3 5 15 15 615 615 1615 6615 30 31 LUAAAA XUMAAA VVVVxx +9169 8656 1 1 9 9 69 169 1169 4169 9169 138 139 ROAAAA YUMAAA AAAAxx +6689 8657 1 1 9 9 89 689 689 1689 6689 178 179 HXAAAA ZUMAAA HHHHxx +8721 8658 1 1 1 1 21 721 721 3721 8721 42 43 LXAAAA AVMAAA OOOOxx +7508 8659 0 0 8 8 8 508 1508 2508 7508 16 17 UCAAAA BVMAAA VVVVxx +8631 8660 1 3 1 11 31 631 631 3631 8631 62 63 ZTAAAA CVMAAA AAAAxx +480 8661 0 0 0 0 80 480 480 480 480 160 161 MSAAAA DVMAAA HHHHxx +7094 8662 0 2 4 14 94 94 1094 2094 7094 188 189 WMAAAA EVMAAA OOOOxx +319 8663 1 3 9 19 19 319 319 319 319 38 39 HMAAAA FVMAAA VVVVxx +9421 8664 1 1 1 1 21 421 1421 4421 9421 42 43 JYAAAA GVMAAA AAAAxx +4352 8665 0 0 2 12 52 352 352 4352 4352 104 105 KLAAAA HVMAAA HHHHxx +5019 8666 1 3 9 19 19 19 1019 19 5019 38 39 BLAAAA IVMAAA OOOOxx +3956 8667 0 0 6 16 56 956 1956 3956 3956 112 113 EWAAAA JVMAAA VVVVxx +114 8668 0 2 4 14 14 114 114 114 114 28 29 KEAAAA KVMAAA AAAAxx +1196 8669 0 0 6 16 96 196 1196 1196 1196 192 193 AUAAAA LVMAAA HHHHxx +1407 8670 1 3 7 7 7 407 1407 1407 1407 14 15 DCAAAA MVMAAA OOOOxx +7432 8671 0 0 2 12 32 432 1432 2432 7432 64 65 WZAAAA NVMAAA VVVVxx +3141 8672 1 1 1 1 41 141 1141 3141 3141 82 83 VQAAAA OVMAAA AAAAxx +2073 8673 1 1 3 13 73 73 73 2073 2073 146 147 TBAAAA PVMAAA HHHHxx +3400 8674 0 0 0 0 0 400 1400 3400 3400 0 1 UAAAAA QVMAAA OOOOxx +505 8675 1 1 5 5 5 505 505 505 505 10 11 LTAAAA RVMAAA VVVVxx +1263 8676 1 3 3 3 63 263 1263 1263 1263 126 127 PWAAAA SVMAAA AAAAxx +190 8677 0 2 0 10 90 190 190 190 190 180 181 IHAAAA TVMAAA HHHHxx +6686 8678 0 2 6 6 86 686 686 1686 6686 172 173 EXAAAA UVMAAA OOOOxx +9821 8679 1 1 1 1 21 821 1821 4821 9821 42 43 TNAAAA VVMAAA VVVVxx +1119 8680 1 3 9 19 19 119 1119 1119 1119 38 39 BRAAAA WVMAAA AAAAxx +2955 8681 1 3 5 15 55 955 955 2955 2955 110 111 RJAAAA XVMAAA HHHHxx +224 8682 0 0 4 4 24 224 224 224 224 48 49 QIAAAA YVMAAA OOOOxx +7562 8683 0 2 2 2 62 562 1562 2562 7562 124 125 WEAAAA ZVMAAA VVVVxx +8845 8684 1 1 5 5 45 845 845 3845 8845 90 91 FCAAAA AWMAAA AAAAxx +5405 8685 1 1 5 5 5 405 1405 405 5405 10 11 XZAAAA BWMAAA HHHHxx +9192 8686 0 0 2 12 92 192 1192 4192 9192 184 185 OPAAAA CWMAAA OOOOxx +4927 8687 1 3 7 7 27 927 927 4927 4927 54 55 NHAAAA DWMAAA VVVVxx +997 8688 1 1 7 17 97 997 997 997 997 194 195 JMAAAA EWMAAA AAAAxx +989 8689 1 1 9 9 89 989 989 989 989 178 179 BMAAAA FWMAAA HHHHxx +7258 8690 0 2 8 18 58 258 1258 2258 7258 116 117 ETAAAA GWMAAA OOOOxx +6899 8691 1 3 9 19 99 899 899 1899 6899 198 199 JFAAAA HWMAAA VVVVxx +1770 8692 0 2 0 10 70 770 1770 1770 1770 140 141 CQAAAA IWMAAA AAAAxx +4423 8693 1 3 3 3 23 423 423 4423 4423 46 47 DOAAAA JWMAAA HHHHxx +5671 8694 1 3 1 11 71 671 1671 671 5671 142 143 DKAAAA KWMAAA OOOOxx +8393 8695 1 1 3 13 93 393 393 3393 8393 186 187 VKAAAA LWMAAA VVVVxx +4355 8696 1 3 5 15 55 355 355 4355 4355 110 111 NLAAAA MWMAAA AAAAxx +3919 8697 1 3 9 19 19 919 1919 3919 3919 38 39 TUAAAA NWMAAA HHHHxx +338 8698 0 2 8 18 38 338 338 338 338 76 77 ANAAAA OWMAAA OOOOxx +5790 8699 0 2 0 10 90 790 1790 790 5790 180 181 SOAAAA PWMAAA VVVVxx +1452 8700 0 0 2 12 52 452 1452 1452 1452 104 105 WDAAAA QWMAAA AAAAxx +939 8701 1 3 9 19 39 939 939 939 939 78 79 DKAAAA RWMAAA HHHHxx +8913 8702 1 1 3 13 13 913 913 3913 8913 26 27 VEAAAA SWMAAA OOOOxx +7157 8703 1 1 7 17 57 157 1157 2157 7157 114 115 HPAAAA TWMAAA VVVVxx +7240 8704 0 0 0 0 40 240 1240 2240 7240 80 81 MSAAAA UWMAAA AAAAxx +3492 8705 0 0 2 12 92 492 1492 3492 3492 184 185 IEAAAA VWMAAA HHHHxx 
+3464 8706 0 0 4 4 64 464 1464 3464 3464 128 129 GDAAAA WWMAAA OOOOxx +388 8707 0 0 8 8 88 388 388 388 388 176 177 YOAAAA XWMAAA VVVVxx +4135 8708 1 3 5 15 35 135 135 4135 4135 70 71 BDAAAA YWMAAA AAAAxx +1194 8709 0 2 4 14 94 194 1194 1194 1194 188 189 YTAAAA ZWMAAA HHHHxx +5476 8710 0 0 6 16 76 476 1476 476 5476 152 153 QCAAAA AXMAAA OOOOxx +9844 8711 0 0 4 4 44 844 1844 4844 9844 88 89 QOAAAA BXMAAA VVVVxx +9364 8712 0 0 4 4 64 364 1364 4364 9364 128 129 EWAAAA CXMAAA AAAAxx +5238 8713 0 2 8 18 38 238 1238 238 5238 76 77 MTAAAA DXMAAA HHHHxx +3712 8714 0 0 2 12 12 712 1712 3712 3712 24 25 UMAAAA EXMAAA OOOOxx +6189 8715 1 1 9 9 89 189 189 1189 6189 178 179 BEAAAA FXMAAA VVVVxx +5257 8716 1 1 7 17 57 257 1257 257 5257 114 115 FUAAAA GXMAAA AAAAxx +81 8717 1 1 1 1 81 81 81 81 81 162 163 DDAAAA HXMAAA HHHHxx +3289 8718 1 1 9 9 89 289 1289 3289 3289 178 179 NWAAAA IXMAAA OOOOxx +1177 8719 1 1 7 17 77 177 1177 1177 1177 154 155 HTAAAA JXMAAA VVVVxx +5038 8720 0 2 8 18 38 38 1038 38 5038 76 77 ULAAAA KXMAAA AAAAxx +325 8721 1 1 5 5 25 325 325 325 325 50 51 NMAAAA LXMAAA HHHHxx +7221 8722 1 1 1 1 21 221 1221 2221 7221 42 43 TRAAAA MXMAAA OOOOxx +7123 8723 1 3 3 3 23 123 1123 2123 7123 46 47 ZNAAAA NXMAAA VVVVxx +6364 8724 0 0 4 4 64 364 364 1364 6364 128 129 UKAAAA OXMAAA AAAAxx +4468 8725 0 0 8 8 68 468 468 4468 4468 136 137 WPAAAA PXMAAA HHHHxx +9185 8726 1 1 5 5 85 185 1185 4185 9185 170 171 HPAAAA QXMAAA OOOOxx +4158 8727 0 2 8 18 58 158 158 4158 4158 116 117 YDAAAA RXMAAA VVVVxx +9439 8728 1 3 9 19 39 439 1439 4439 9439 78 79 BZAAAA SXMAAA AAAAxx +7759 8729 1 3 9 19 59 759 1759 2759 7759 118 119 LMAAAA TXMAAA HHHHxx +3325 8730 1 1 5 5 25 325 1325 3325 3325 50 51 XXAAAA UXMAAA OOOOxx +7991 8731 1 3 1 11 91 991 1991 2991 7991 182 183 JVAAAA VXMAAA VVVVxx +1650 8732 0 2 0 10 50 650 1650 1650 1650 100 101 MLAAAA WXMAAA AAAAxx +8395 8733 1 3 5 15 95 395 395 3395 8395 190 191 XKAAAA XXMAAA HHHHxx +286 8734 0 2 6 6 86 286 286 286 286 172 173 ALAAAA YXMAAA OOOOxx +1507 8735 1 3 7 7 7 507 1507 1507 1507 14 15 ZFAAAA ZXMAAA VVVVxx +4122 8736 0 2 2 2 22 122 122 4122 4122 44 45 OCAAAA AYMAAA AAAAxx +2625 8737 1 1 5 5 25 625 625 2625 2625 50 51 ZWAAAA BYMAAA HHHHxx +1140 8738 0 0 0 0 40 140 1140 1140 1140 80 81 WRAAAA CYMAAA OOOOxx +5262 8739 0 2 2 2 62 262 1262 262 5262 124 125 KUAAAA DYMAAA VVVVxx +4919 8740 1 3 9 19 19 919 919 4919 4919 38 39 FHAAAA EYMAAA AAAAxx +7266 8741 0 2 6 6 66 266 1266 2266 7266 132 133 MTAAAA FYMAAA HHHHxx +630 8742 0 2 0 10 30 630 630 630 630 60 61 GYAAAA GYMAAA OOOOxx +2129 8743 1 1 9 9 29 129 129 2129 2129 58 59 XDAAAA HYMAAA VVVVxx +9552 8744 0 0 2 12 52 552 1552 4552 9552 104 105 KDAAAA IYMAAA AAAAxx +3018 8745 0 2 8 18 18 18 1018 3018 3018 36 37 CMAAAA JYMAAA HHHHxx +7145 8746 1 1 5 5 45 145 1145 2145 7145 90 91 VOAAAA KYMAAA OOOOxx +1633 8747 1 1 3 13 33 633 1633 1633 1633 66 67 VKAAAA LYMAAA VVVVxx +7957 8748 1 1 7 17 57 957 1957 2957 7957 114 115 BUAAAA MYMAAA AAAAxx +774 8749 0 2 4 14 74 774 774 774 774 148 149 UDAAAA NYMAAA HHHHxx +9371 8750 1 3 1 11 71 371 1371 4371 9371 142 143 LWAAAA OYMAAA OOOOxx +6007 8751 1 3 7 7 7 7 7 1007 6007 14 15 BXAAAA PYMAAA VVVVxx +5277 8752 1 1 7 17 77 277 1277 277 5277 154 155 ZUAAAA QYMAAA AAAAxx +9426 8753 0 2 6 6 26 426 1426 4426 9426 52 53 OYAAAA RYMAAA HHHHxx +9190 8754 0 2 0 10 90 190 1190 4190 9190 180 181 MPAAAA SYMAAA OOOOxx +8996 8755 0 0 6 16 96 996 996 3996 8996 192 193 AIAAAA TYMAAA VVVVxx +3409 8756 1 1 9 9 9 409 1409 3409 3409 18 19 DBAAAA UYMAAA AAAAxx +7212 8757 0 0 2 12 12 212 1212 2212 7212 24 25 KRAAAA VYMAAA 
HHHHxx +416 8758 0 0 6 16 16 416 416 416 416 32 33 AQAAAA WYMAAA OOOOxx +7211 8759 1 3 1 11 11 211 1211 2211 7211 22 23 JRAAAA XYMAAA VVVVxx +7454 8760 0 2 4 14 54 454 1454 2454 7454 108 109 SAAAAA YYMAAA AAAAxx +8417 8761 1 1 7 17 17 417 417 3417 8417 34 35 TLAAAA ZYMAAA HHHHxx +5562 8762 0 2 2 2 62 562 1562 562 5562 124 125 YFAAAA AZMAAA OOOOxx +4996 8763 0 0 6 16 96 996 996 4996 4996 192 193 EKAAAA BZMAAA VVVVxx +5718 8764 0 2 8 18 18 718 1718 718 5718 36 37 YLAAAA CZMAAA AAAAxx +7838 8765 0 2 8 18 38 838 1838 2838 7838 76 77 MPAAAA DZMAAA HHHHxx +7715 8766 1 3 5 15 15 715 1715 2715 7715 30 31 TKAAAA EZMAAA OOOOxx +2780 8767 0 0 0 0 80 780 780 2780 2780 160 161 YCAAAA FZMAAA VVVVxx +1013 8768 1 1 3 13 13 13 1013 1013 1013 26 27 ZMAAAA GZMAAA AAAAxx +8465 8769 1 1 5 5 65 465 465 3465 8465 130 131 PNAAAA HZMAAA HHHHxx +7976 8770 0 0 6 16 76 976 1976 2976 7976 152 153 UUAAAA IZMAAA OOOOxx +7150 8771 0 2 0 10 50 150 1150 2150 7150 100 101 APAAAA JZMAAA VVVVxx +6471 8772 1 3 1 11 71 471 471 1471 6471 142 143 XOAAAA KZMAAA AAAAxx +1927 8773 1 3 7 7 27 927 1927 1927 1927 54 55 DWAAAA LZMAAA HHHHxx +227 8774 1 3 7 7 27 227 227 227 227 54 55 TIAAAA MZMAAA OOOOxx +6462 8775 0 2 2 2 62 462 462 1462 6462 124 125 OOAAAA NZMAAA VVVVxx +5227 8776 1 3 7 7 27 227 1227 227 5227 54 55 BTAAAA OZMAAA AAAAxx +1074 8777 0 2 4 14 74 74 1074 1074 1074 148 149 IPAAAA PZMAAA HHHHxx +9448 8778 0 0 8 8 48 448 1448 4448 9448 96 97 KZAAAA QZMAAA OOOOxx +4459 8779 1 3 9 19 59 459 459 4459 4459 118 119 NPAAAA RZMAAA VVVVxx +2478 8780 0 2 8 18 78 478 478 2478 2478 156 157 IRAAAA SZMAAA AAAAxx +5005 8781 1 1 5 5 5 5 1005 5 5005 10 11 NKAAAA TZMAAA HHHHxx +2418 8782 0 2 8 18 18 418 418 2418 2418 36 37 APAAAA UZMAAA OOOOxx +6991 8783 1 3 1 11 91 991 991 1991 6991 182 183 XIAAAA VZMAAA VVVVxx +4729 8784 1 1 9 9 29 729 729 4729 4729 58 59 XZAAAA WZMAAA AAAAxx +3548 8785 0 0 8 8 48 548 1548 3548 3548 96 97 MGAAAA XZMAAA HHHHxx +9616 8786 0 0 6 16 16 616 1616 4616 9616 32 33 WFAAAA YZMAAA OOOOxx +2901 8787 1 1 1 1 1 901 901 2901 2901 2 3 PHAAAA ZZMAAA VVVVxx +10 8788 0 2 0 10 10 10 10 10 10 20 21 KAAAAA AANAAA AAAAxx +2637 8789 1 1 7 17 37 637 637 2637 2637 74 75 LXAAAA BANAAA HHHHxx +6747 8790 1 3 7 7 47 747 747 1747 6747 94 95 NZAAAA CANAAA OOOOxx +797 8791 1 1 7 17 97 797 797 797 797 194 195 REAAAA DANAAA VVVVxx +7609 8792 1 1 9 9 9 609 1609 2609 7609 18 19 RGAAAA EANAAA AAAAxx +8290 8793 0 2 0 10 90 290 290 3290 8290 180 181 WGAAAA FANAAA HHHHxx +8765 8794 1 1 5 5 65 765 765 3765 8765 130 131 DZAAAA GANAAA OOOOxx +8053 8795 1 1 3 13 53 53 53 3053 8053 106 107 TXAAAA HANAAA VVVVxx +5602 8796 0 2 2 2 2 602 1602 602 5602 4 5 MHAAAA IANAAA AAAAxx +3672 8797 0 0 2 12 72 672 1672 3672 3672 144 145 GLAAAA JANAAA HHHHxx +7513 8798 1 1 3 13 13 513 1513 2513 7513 26 27 ZCAAAA KANAAA OOOOxx +3462 8799 0 2 2 2 62 462 1462 3462 3462 124 125 EDAAAA LANAAA VVVVxx +4457 8800 1 1 7 17 57 457 457 4457 4457 114 115 LPAAAA MANAAA AAAAxx +6547 8801 1 3 7 7 47 547 547 1547 6547 94 95 VRAAAA NANAAA HHHHxx +7417 8802 1 1 7 17 17 417 1417 2417 7417 34 35 HZAAAA OANAAA OOOOxx +8641 8803 1 1 1 1 41 641 641 3641 8641 82 83 JUAAAA PANAAA VVVVxx +149 8804 1 1 9 9 49 149 149 149 149 98 99 TFAAAA QANAAA AAAAxx +5041 8805 1 1 1 1 41 41 1041 41 5041 82 83 XLAAAA RANAAA HHHHxx +9232 8806 0 0 2 12 32 232 1232 4232 9232 64 65 CRAAAA SANAAA OOOOxx +3603 8807 1 3 3 3 3 603 1603 3603 3603 6 7 PIAAAA TANAAA VVVVxx +2792 8808 0 0 2 12 92 792 792 2792 2792 184 185 KDAAAA UANAAA AAAAxx +6620 8809 0 0 0 0 20 620 620 1620 6620 40 41 QUAAAA VANAAA HHHHxx +4000 8810 0 0 
0 0 0 0 0 4000 4000 0 1 WXAAAA WANAAA OOOOxx +659 8811 1 3 9 19 59 659 659 659 659 118 119 JZAAAA XANAAA VVVVxx +8174 8812 0 2 4 14 74 174 174 3174 8174 148 149 KCAAAA YANAAA AAAAxx +4599 8813 1 3 9 19 99 599 599 4599 4599 198 199 XUAAAA ZANAAA HHHHxx +7851 8814 1 3 1 11 51 851 1851 2851 7851 102 103 ZPAAAA ABNAAA OOOOxx +6284 8815 0 0 4 4 84 284 284 1284 6284 168 169 SHAAAA BBNAAA VVVVxx +7116 8816 0 0 6 16 16 116 1116 2116 7116 32 33 SNAAAA CBNAAA AAAAxx +5595 8817 1 3 5 15 95 595 1595 595 5595 190 191 FHAAAA DBNAAA HHHHxx +2903 8818 1 3 3 3 3 903 903 2903 2903 6 7 RHAAAA EBNAAA OOOOxx +5948 8819 0 0 8 8 48 948 1948 948 5948 96 97 UUAAAA FBNAAA VVVVxx +225 8820 1 1 5 5 25 225 225 225 225 50 51 RIAAAA GBNAAA AAAAxx +524 8821 0 0 4 4 24 524 524 524 524 48 49 EUAAAA HBNAAA HHHHxx +7639 8822 1 3 9 19 39 639 1639 2639 7639 78 79 VHAAAA IBNAAA OOOOxx +7297 8823 1 1 7 17 97 297 1297 2297 7297 194 195 RUAAAA JBNAAA VVVVxx +2606 8824 0 2 6 6 6 606 606 2606 2606 12 13 GWAAAA KBNAAA AAAAxx +4771 8825 1 3 1 11 71 771 771 4771 4771 142 143 NBAAAA LBNAAA HHHHxx +8162 8826 0 2 2 2 62 162 162 3162 8162 124 125 YBAAAA MBNAAA OOOOxx +8999 8827 1 3 9 19 99 999 999 3999 8999 198 199 DIAAAA NBNAAA VVVVxx +2309 8828 1 1 9 9 9 309 309 2309 2309 18 19 VKAAAA OBNAAA AAAAxx +3594 8829 0 2 4 14 94 594 1594 3594 3594 188 189 GIAAAA PBNAAA HHHHxx +6092 8830 0 0 2 12 92 92 92 1092 6092 184 185 IAAAAA QBNAAA OOOOxx +7467 8831 1 3 7 7 67 467 1467 2467 7467 134 135 FBAAAA RBNAAA VVVVxx +6986 8832 0 2 6 6 86 986 986 1986 6986 172 173 SIAAAA SBNAAA AAAAxx +9898 8833 0 2 8 18 98 898 1898 4898 9898 196 197 SQAAAA TBNAAA HHHHxx +9578 8834 0 2 8 18 78 578 1578 4578 9578 156 157 KEAAAA UBNAAA OOOOxx +156 8835 0 0 6 16 56 156 156 156 156 112 113 AGAAAA VBNAAA VVVVxx +5810 8836 0 2 0 10 10 810 1810 810 5810 20 21 MPAAAA WBNAAA AAAAxx +790 8837 0 2 0 10 90 790 790 790 790 180 181 KEAAAA XBNAAA HHHHxx +6840 8838 0 0 0 0 40 840 840 1840 6840 80 81 CDAAAA YBNAAA OOOOxx +6725 8839 1 1 5 5 25 725 725 1725 6725 50 51 RYAAAA ZBNAAA VVVVxx +5528 8840 0 0 8 8 28 528 1528 528 5528 56 57 QEAAAA ACNAAA AAAAxx +4120 8841 0 0 0 0 20 120 120 4120 4120 40 41 MCAAAA BCNAAA HHHHxx +6694 8842 0 2 4 14 94 694 694 1694 6694 188 189 MXAAAA CCNAAA OOOOxx +3552 8843 0 0 2 12 52 552 1552 3552 3552 104 105 QGAAAA DCNAAA VVVVxx +1478 8844 0 2 8 18 78 478 1478 1478 1478 156 157 WEAAAA ECNAAA AAAAxx +8084 8845 0 0 4 4 84 84 84 3084 8084 168 169 YYAAAA FCNAAA HHHHxx +7578 8846 0 2 8 18 78 578 1578 2578 7578 156 157 MFAAAA GCNAAA OOOOxx +6314 8847 0 2 4 14 14 314 314 1314 6314 28 29 WIAAAA HCNAAA VVVVxx +6123 8848 1 3 3 3 23 123 123 1123 6123 46 47 NBAAAA ICNAAA AAAAxx +9443 8849 1 3 3 3 43 443 1443 4443 9443 86 87 FZAAAA JCNAAA HHHHxx +9628 8850 0 0 8 8 28 628 1628 4628 9628 56 57 IGAAAA KCNAAA OOOOxx +8508 8851 0 0 8 8 8 508 508 3508 8508 16 17 GPAAAA LCNAAA VVVVxx +5552 8852 0 0 2 12 52 552 1552 552 5552 104 105 OFAAAA MCNAAA AAAAxx +5327 8853 1 3 7 7 27 327 1327 327 5327 54 55 XWAAAA NCNAAA HHHHxx +7771 8854 1 3 1 11 71 771 1771 2771 7771 142 143 XMAAAA OCNAAA OOOOxx +8932 8855 0 0 2 12 32 932 932 3932 8932 64 65 OFAAAA PCNAAA VVVVxx +3526 8856 0 2 6 6 26 526 1526 3526 3526 52 53 QFAAAA QCNAAA AAAAxx +4340 8857 0 0 0 0 40 340 340 4340 4340 80 81 YKAAAA RCNAAA HHHHxx +9419 8858 1 3 9 19 19 419 1419 4419 9419 38 39 HYAAAA SCNAAA OOOOxx +8421 8859 1 1 1 1 21 421 421 3421 8421 42 43 XLAAAA TCNAAA VVVVxx +7431 8860 1 3 1 11 31 431 1431 2431 7431 62 63 VZAAAA UCNAAA AAAAxx +172 8861 0 0 2 12 72 172 172 172 172 144 145 QGAAAA VCNAAA HHHHxx +3279 8862 1 3 9 19 79 
279 1279 3279 3279 158 159 DWAAAA WCNAAA OOOOxx +1508 8863 0 0 8 8 8 508 1508 1508 1508 16 17 AGAAAA XCNAAA VVVVxx +7091 8864 1 3 1 11 91 91 1091 2091 7091 182 183 TMAAAA YCNAAA AAAAxx +1419 8865 1 3 9 19 19 419 1419 1419 1419 38 39 PCAAAA ZCNAAA HHHHxx +3032 8866 0 0 2 12 32 32 1032 3032 3032 64 65 QMAAAA ADNAAA OOOOxx +8683 8867 1 3 3 3 83 683 683 3683 8683 166 167 ZVAAAA BDNAAA VVVVxx +4763 8868 1 3 3 3 63 763 763 4763 4763 126 127 FBAAAA CDNAAA AAAAxx +4424 8869 0 0 4 4 24 424 424 4424 4424 48 49 EOAAAA DDNAAA HHHHxx +8640 8870 0 0 0 0 40 640 640 3640 8640 80 81 IUAAAA EDNAAA OOOOxx +7187 8871 1 3 7 7 87 187 1187 2187 7187 174 175 LQAAAA FDNAAA VVVVxx +6247 8872 1 3 7 7 47 247 247 1247 6247 94 95 HGAAAA GDNAAA AAAAxx +7340 8873 0 0 0 0 40 340 1340 2340 7340 80 81 IWAAAA HDNAAA HHHHxx +182 8874 0 2 2 2 82 182 182 182 182 164 165 AHAAAA IDNAAA OOOOxx +2948 8875 0 0 8 8 48 948 948 2948 2948 96 97 KJAAAA JDNAAA VVVVxx +9462 8876 0 2 2 2 62 462 1462 4462 9462 124 125 YZAAAA KDNAAA AAAAxx +5997 8877 1 1 7 17 97 997 1997 997 5997 194 195 RWAAAA LDNAAA HHHHxx +5608 8878 0 0 8 8 8 608 1608 608 5608 16 17 SHAAAA MDNAAA OOOOxx +1472 8879 0 0 2 12 72 472 1472 1472 1472 144 145 QEAAAA NDNAAA VVVVxx +277 8880 1 1 7 17 77 277 277 277 277 154 155 RKAAAA ODNAAA AAAAxx +4807 8881 1 3 7 7 7 807 807 4807 4807 14 15 XCAAAA PDNAAA HHHHxx +4969 8882 1 1 9 9 69 969 969 4969 4969 138 139 DJAAAA QDNAAA OOOOxx +5611 8883 1 3 1 11 11 611 1611 611 5611 22 23 VHAAAA RDNAAA VVVVxx +372 8884 0 0 2 12 72 372 372 372 372 144 145 IOAAAA SDNAAA AAAAxx +6666 8885 0 2 6 6 66 666 666 1666 6666 132 133 KWAAAA TDNAAA HHHHxx +476 8886 0 0 6 16 76 476 476 476 476 152 153 ISAAAA UDNAAA OOOOxx +5225 8887 1 1 5 5 25 225 1225 225 5225 50 51 ZSAAAA VDNAAA VVVVxx +5143 8888 1 3 3 3 43 143 1143 143 5143 86 87 VPAAAA WDNAAA AAAAxx +1853 8889 1 1 3 13 53 853 1853 1853 1853 106 107 HTAAAA XDNAAA HHHHxx +675 8890 1 3 5 15 75 675 675 675 675 150 151 ZZAAAA YDNAAA OOOOxx +5643 8891 1 3 3 3 43 643 1643 643 5643 86 87 BJAAAA ZDNAAA VVVVxx +5317 8892 1 1 7 17 17 317 1317 317 5317 34 35 NWAAAA AENAAA AAAAxx +8102 8893 0 2 2 2 2 102 102 3102 8102 4 5 QZAAAA BENAAA HHHHxx +978 8894 0 2 8 18 78 978 978 978 978 156 157 QLAAAA CENAAA OOOOxx +4620 8895 0 0 0 0 20 620 620 4620 4620 40 41 SVAAAA DENAAA VVVVxx +151 8896 1 3 1 11 51 151 151 151 151 102 103 VFAAAA EENAAA AAAAxx +972 8897 0 0 2 12 72 972 972 972 972 144 145 KLAAAA FENAAA HHHHxx +6820 8898 0 0 0 0 20 820 820 1820 6820 40 41 ICAAAA GENAAA OOOOxx +7387 8899 1 3 7 7 87 387 1387 2387 7387 174 175 DYAAAA HENAAA VVVVxx +9634 8900 0 2 4 14 34 634 1634 4634 9634 68 69 OGAAAA IENAAA AAAAxx +6308 8901 0 0 8 8 8 308 308 1308 6308 16 17 QIAAAA JENAAA HHHHxx +8323 8902 1 3 3 3 23 323 323 3323 8323 46 47 DIAAAA KENAAA OOOOxx +6672 8903 0 0 2 12 72 672 672 1672 6672 144 145 QWAAAA LENAAA VVVVxx +8283 8904 1 3 3 3 83 283 283 3283 8283 166 167 PGAAAA MENAAA AAAAxx +7996 8905 0 0 6 16 96 996 1996 2996 7996 192 193 OVAAAA NENAAA HHHHxx +6488 8906 0 0 8 8 88 488 488 1488 6488 176 177 OPAAAA OENAAA OOOOxx +2365 8907 1 1 5 5 65 365 365 2365 2365 130 131 ZMAAAA PENAAA VVVVxx +9746 8908 0 2 6 6 46 746 1746 4746 9746 92 93 WKAAAA QENAAA AAAAxx +8605 8909 1 1 5 5 5 605 605 3605 8605 10 11 ZSAAAA RENAAA HHHHxx +3342 8910 0 2 2 2 42 342 1342 3342 3342 84 85 OYAAAA SENAAA OOOOxx +8429 8911 1 1 9 9 29 429 429 3429 8429 58 59 FMAAAA TENAAA VVVVxx +1162 8912 0 2 2 2 62 162 1162 1162 1162 124 125 SSAAAA UENAAA AAAAxx +531 8913 1 3 1 11 31 531 531 531 531 62 63 LUAAAA VENAAA HHHHxx +8408 8914 0 0 8 8 8 408 408 3408 8408 16 
17 KLAAAA WENAAA OOOOxx +8862 8915 0 2 2 2 62 862 862 3862 8862 124 125 WCAAAA XENAAA VVVVxx +5843 8916 1 3 3 3 43 843 1843 843 5843 86 87 TQAAAA YENAAA AAAAxx +8704 8917 0 0 4 4 4 704 704 3704 8704 8 9 UWAAAA ZENAAA HHHHxx +7070 8918 0 2 0 10 70 70 1070 2070 7070 140 141 YLAAAA AFNAAA OOOOxx +9119 8919 1 3 9 19 19 119 1119 4119 9119 38 39 TMAAAA BFNAAA VVVVxx +8344 8920 0 0 4 4 44 344 344 3344 8344 88 89 YIAAAA CFNAAA AAAAxx +8979 8921 1 3 9 19 79 979 979 3979 8979 158 159 JHAAAA DFNAAA HHHHxx +2971 8922 1 3 1 11 71 971 971 2971 2971 142 143 HKAAAA EFNAAA OOOOxx +7700 8923 0 0 0 0 0 700 1700 2700 7700 0 1 EKAAAA FFNAAA VVVVxx +8280 8924 0 0 0 0 80 280 280 3280 8280 160 161 MGAAAA GFNAAA AAAAxx +9096 8925 0 0 6 16 96 96 1096 4096 9096 192 193 WLAAAA HFNAAA HHHHxx +99 8926 1 3 9 19 99 99 99 99 99 198 199 VDAAAA IFNAAA OOOOxx +6696 8927 0 0 6 16 96 696 696 1696 6696 192 193 OXAAAA JFNAAA VVVVxx +9490 8928 0 2 0 10 90 490 1490 4490 9490 180 181 ABAAAA KFNAAA AAAAxx +9073 8929 1 1 3 13 73 73 1073 4073 9073 146 147 ZKAAAA LFNAAA HHHHxx +1861 8930 1 1 1 1 61 861 1861 1861 1861 122 123 PTAAAA MFNAAA OOOOxx +4413 8931 1 1 3 13 13 413 413 4413 4413 26 27 TNAAAA NFNAAA VVVVxx +6002 8932 0 2 2 2 2 2 2 1002 6002 4 5 WWAAAA OFNAAA AAAAxx +439 8933 1 3 9 19 39 439 439 439 439 78 79 XQAAAA PFNAAA HHHHxx +5449 8934 1 1 9 9 49 449 1449 449 5449 98 99 PBAAAA QFNAAA OOOOxx +9737 8935 1 1 7 17 37 737 1737 4737 9737 74 75 NKAAAA RFNAAA VVVVxx +1898 8936 0 2 8 18 98 898 1898 1898 1898 196 197 AVAAAA SFNAAA AAAAxx +4189 8937 1 1 9 9 89 189 189 4189 4189 178 179 DFAAAA TFNAAA HHHHxx +1408 8938 0 0 8 8 8 408 1408 1408 1408 16 17 ECAAAA UFNAAA OOOOxx +394 8939 0 2 4 14 94 394 394 394 394 188 189 EPAAAA VFNAAA VVVVxx +1935 8940 1 3 5 15 35 935 1935 1935 1935 70 71 LWAAAA WFNAAA AAAAxx +3965 8941 1 1 5 5 65 965 1965 3965 3965 130 131 NWAAAA XFNAAA HHHHxx +6821 8942 1 1 1 1 21 821 821 1821 6821 42 43 JCAAAA YFNAAA OOOOxx +349 8943 1 1 9 9 49 349 349 349 349 98 99 LNAAAA ZFNAAA VVVVxx +8428 8944 0 0 8 8 28 428 428 3428 8428 56 57 EMAAAA AGNAAA AAAAxx +8200 8945 0 0 0 0 0 200 200 3200 8200 0 1 KDAAAA BGNAAA HHHHxx +1737 8946 1 1 7 17 37 737 1737 1737 1737 74 75 VOAAAA CGNAAA OOOOxx +6516 8947 0 0 6 16 16 516 516 1516 6516 32 33 QQAAAA DGNAAA VVVVxx +5441 8948 1 1 1 1 41 441 1441 441 5441 82 83 HBAAAA EGNAAA AAAAxx +5999 8949 1 3 9 19 99 999 1999 999 5999 198 199 TWAAAA FGNAAA HHHHxx +1539 8950 1 3 9 19 39 539 1539 1539 1539 78 79 FHAAAA GGNAAA OOOOxx +9067 8951 1 3 7 7 67 67 1067 4067 9067 134 135 TKAAAA HGNAAA VVVVxx +4061 8952 1 1 1 1 61 61 61 4061 4061 122 123 FAAAAA IGNAAA AAAAxx +1642 8953 0 2 2 2 42 642 1642 1642 1642 84 85 ELAAAA JGNAAA HHHHxx +4657 8954 1 1 7 17 57 657 657 4657 4657 114 115 DXAAAA KGNAAA OOOOxx +9934 8955 0 2 4 14 34 934 1934 4934 9934 68 69 CSAAAA LGNAAA VVVVxx +6385 8956 1 1 5 5 85 385 385 1385 6385 170 171 PLAAAA MGNAAA AAAAxx +6775 8957 1 3 5 15 75 775 775 1775 6775 150 151 PAAAAA NGNAAA HHHHxx +3873 8958 1 1 3 13 73 873 1873 3873 3873 146 147 ZSAAAA OGNAAA OOOOxx +3862 8959 0 2 2 2 62 862 1862 3862 3862 124 125 OSAAAA PGNAAA VVVVxx +1224 8960 0 0 4 4 24 224 1224 1224 1224 48 49 CVAAAA QGNAAA AAAAxx +4483 8961 1 3 3 3 83 483 483 4483 4483 166 167 LQAAAA RGNAAA HHHHxx +3685 8962 1 1 5 5 85 685 1685 3685 3685 170 171 TLAAAA SGNAAA OOOOxx +6082 8963 0 2 2 2 82 82 82 1082 6082 164 165 YZAAAA TGNAAA VVVVxx +7798 8964 0 2 8 18 98 798 1798 2798 7798 196 197 YNAAAA UGNAAA AAAAxx +9039 8965 1 3 9 19 39 39 1039 4039 9039 78 79 RJAAAA VGNAAA HHHHxx +985 8966 1 1 5 5 85 985 985 985 985 170 171 XLAAAA 
WGNAAA OOOOxx +5389 8967 1 1 9 9 89 389 1389 389 5389 178 179 HZAAAA XGNAAA VVVVxx +1716 8968 0 0 6 16 16 716 1716 1716 1716 32 33 AOAAAA YGNAAA AAAAxx +4209 8969 1 1 9 9 9 209 209 4209 4209 18 19 XFAAAA ZGNAAA HHHHxx +746 8970 0 2 6 6 46 746 746 746 746 92 93 SCAAAA AHNAAA OOOOxx +6295 8971 1 3 5 15 95 295 295 1295 6295 190 191 DIAAAA BHNAAA VVVVxx +9754 8972 0 2 4 14 54 754 1754 4754 9754 108 109 ELAAAA CHNAAA AAAAxx +2336 8973 0 0 6 16 36 336 336 2336 2336 72 73 WLAAAA DHNAAA HHHHxx +3701 8974 1 1 1 1 1 701 1701 3701 3701 2 3 JMAAAA EHNAAA OOOOxx +3551 8975 1 3 1 11 51 551 1551 3551 3551 102 103 PGAAAA FHNAAA VVVVxx +8516 8976 0 0 6 16 16 516 516 3516 8516 32 33 OPAAAA GHNAAA AAAAxx +9290 8977 0 2 0 10 90 290 1290 4290 9290 180 181 ITAAAA HHNAAA HHHHxx +5686 8978 0 2 6 6 86 686 1686 686 5686 172 173 SKAAAA IHNAAA OOOOxx +2893 8979 1 1 3 13 93 893 893 2893 2893 186 187 HHAAAA JHNAAA VVVVxx +6279 8980 1 3 9 19 79 279 279 1279 6279 158 159 NHAAAA KHNAAA AAAAxx +2278 8981 0 2 8 18 78 278 278 2278 2278 156 157 QJAAAA LHNAAA HHHHxx +1618 8982 0 2 8 18 18 618 1618 1618 1618 36 37 GKAAAA MHNAAA OOOOxx +3450 8983 0 2 0 10 50 450 1450 3450 3450 100 101 SCAAAA NHNAAA VVVVxx +8857 8984 1 1 7 17 57 857 857 3857 8857 114 115 RCAAAA OHNAAA AAAAxx +1005 8985 1 1 5 5 5 5 1005 1005 1005 10 11 RMAAAA PHNAAA HHHHxx +4727 8986 1 3 7 7 27 727 727 4727 4727 54 55 VZAAAA QHNAAA OOOOxx +7617 8987 1 1 7 17 17 617 1617 2617 7617 34 35 ZGAAAA RHNAAA VVVVxx +2021 8988 1 1 1 1 21 21 21 2021 2021 42 43 TZAAAA SHNAAA AAAAxx +9124 8989 0 0 4 4 24 124 1124 4124 9124 48 49 YMAAAA THNAAA HHHHxx +3175 8990 1 3 5 15 75 175 1175 3175 3175 150 151 DSAAAA UHNAAA OOOOxx +2949 8991 1 1 9 9 49 949 949 2949 2949 98 99 LJAAAA VHNAAA VVVVxx +2424 8992 0 0 4 4 24 424 424 2424 2424 48 49 GPAAAA WHNAAA AAAAxx +4791 8993 1 3 1 11 91 791 791 4791 4791 182 183 HCAAAA XHNAAA HHHHxx +7500 8994 0 0 0 0 0 500 1500 2500 7500 0 1 MCAAAA YHNAAA OOOOxx +4893 8995 1 1 3 13 93 893 893 4893 4893 186 187 FGAAAA ZHNAAA VVVVxx +121 8996 1 1 1 1 21 121 121 121 121 42 43 REAAAA AINAAA AAAAxx +1965 8997 1 1 5 5 65 965 1965 1965 1965 130 131 PXAAAA BINAAA HHHHxx +2972 8998 0 0 2 12 72 972 972 2972 2972 144 145 IKAAAA CINAAA OOOOxx +662 8999 0 2 2 2 62 662 662 662 662 124 125 MZAAAA DINAAA VVVVxx +7074 9000 0 2 4 14 74 74 1074 2074 7074 148 149 CMAAAA EINAAA AAAAxx +981 9001 1 1 1 1 81 981 981 981 981 162 163 TLAAAA FINAAA HHHHxx +3520 9002 0 0 0 0 20 520 1520 3520 3520 40 41 KFAAAA GINAAA OOOOxx +6540 9003 0 0 0 0 40 540 540 1540 6540 80 81 ORAAAA HINAAA VVVVxx +6648 9004 0 0 8 8 48 648 648 1648 6648 96 97 SVAAAA IINAAA AAAAxx +7076 9005 0 0 6 16 76 76 1076 2076 7076 152 153 EMAAAA JINAAA HHHHxx +6919 9006 1 3 9 19 19 919 919 1919 6919 38 39 DGAAAA KINAAA OOOOxx +1108 9007 0 0 8 8 8 108 1108 1108 1108 16 17 QQAAAA LINAAA VVVVxx +317 9008 1 1 7 17 17 317 317 317 317 34 35 FMAAAA MINAAA AAAAxx +3483 9009 1 3 3 3 83 483 1483 3483 3483 166 167 ZDAAAA NINAAA HHHHxx +6764 9010 0 0 4 4 64 764 764 1764 6764 128 129 EAAAAA OINAAA OOOOxx +1235 9011 1 3 5 15 35 235 1235 1235 1235 70 71 NVAAAA PINAAA VVVVxx +7121 9012 1 1 1 1 21 121 1121 2121 7121 42 43 XNAAAA QINAAA AAAAxx +426 9013 0 2 6 6 26 426 426 426 426 52 53 KQAAAA RINAAA HHHHxx +6880 9014 0 0 0 0 80 880 880 1880 6880 160 161 QEAAAA SINAAA OOOOxx +5401 9015 1 1 1 1 1 401 1401 401 5401 2 3 TZAAAA TINAAA VVVVxx +7323 9016 1 3 3 3 23 323 1323 2323 7323 46 47 RVAAAA UINAAA AAAAxx +9751 9017 1 3 1 11 51 751 1751 4751 9751 102 103 BLAAAA VINAAA HHHHxx +3436 9018 0 0 6 16 36 436 1436 3436 3436 72 73 ECAAAA WINAAA 
OOOOxx +7319 9019 1 3 9 19 19 319 1319 2319 7319 38 39 NVAAAA XINAAA VVVVxx +7882 9020 0 2 2 2 82 882 1882 2882 7882 164 165 ERAAAA YINAAA AAAAxx +8260 9021 0 0 0 0 60 260 260 3260 8260 120 121 SFAAAA ZINAAA HHHHxx +9758 9022 0 2 8 18 58 758 1758 4758 9758 116 117 ILAAAA AJNAAA OOOOxx +4205 9023 1 1 5 5 5 205 205 4205 4205 10 11 TFAAAA BJNAAA VVVVxx +8884 9024 0 0 4 4 84 884 884 3884 8884 168 169 SDAAAA CJNAAA AAAAxx +1112 9025 0 0 2 12 12 112 1112 1112 1112 24 25 UQAAAA DJNAAA HHHHxx +2186 9026 0 2 6 6 86 186 186 2186 2186 172 173 CGAAAA EJNAAA OOOOxx +8666 9027 0 2 6 6 66 666 666 3666 8666 132 133 IVAAAA FJNAAA VVVVxx +4325 9028 1 1 5 5 25 325 325 4325 4325 50 51 JKAAAA GJNAAA AAAAxx +4912 9029 0 0 2 12 12 912 912 4912 4912 24 25 YGAAAA HJNAAA HHHHxx +6497 9030 1 1 7 17 97 497 497 1497 6497 194 195 XPAAAA IJNAAA OOOOxx +9072 9031 0 0 2 12 72 72 1072 4072 9072 144 145 YKAAAA JJNAAA VVVVxx +8899 9032 1 3 9 19 99 899 899 3899 8899 198 199 HEAAAA KJNAAA AAAAxx +5619 9033 1 3 9 19 19 619 1619 619 5619 38 39 DIAAAA LJNAAA HHHHxx +4110 9034 0 2 0 10 10 110 110 4110 4110 20 21 CCAAAA MJNAAA OOOOxx +7025 9035 1 1 5 5 25 25 1025 2025 7025 50 51 FKAAAA NJNAAA VVVVxx +5605 9036 1 1 5 5 5 605 1605 605 5605 10 11 PHAAAA OJNAAA AAAAxx +2572 9037 0 0 2 12 72 572 572 2572 2572 144 145 YUAAAA PJNAAA HHHHxx +3895 9038 1 3 5 15 95 895 1895 3895 3895 190 191 VTAAAA QJNAAA OOOOxx +9138 9039 0 2 8 18 38 138 1138 4138 9138 76 77 MNAAAA RJNAAA VVVVxx +4713 9040 1 1 3 13 13 713 713 4713 4713 26 27 HZAAAA SJNAAA AAAAxx +6079 9041 1 3 9 19 79 79 79 1079 6079 158 159 VZAAAA TJNAAA HHHHxx +8898 9042 0 2 8 18 98 898 898 3898 8898 196 197 GEAAAA UJNAAA OOOOxx +2650 9043 0 2 0 10 50 650 650 2650 2650 100 101 YXAAAA VJNAAA VVVVxx +5316 9044 0 0 6 16 16 316 1316 316 5316 32 33 MWAAAA WJNAAA AAAAxx +5133 9045 1 1 3 13 33 133 1133 133 5133 66 67 LPAAAA XJNAAA HHHHxx +2184 9046 0 0 4 4 84 184 184 2184 2184 168 169 AGAAAA YJNAAA OOOOxx +2728 9047 0 0 8 8 28 728 728 2728 2728 56 57 YAAAAA ZJNAAA VVVVxx +6737 9048 1 1 7 17 37 737 737 1737 6737 74 75 DZAAAA AKNAAA AAAAxx +1128 9049 0 0 8 8 28 128 1128 1128 1128 56 57 KRAAAA BKNAAA HHHHxx +9662 9050 0 2 2 2 62 662 1662 4662 9662 124 125 QHAAAA CKNAAA OOOOxx +9384 9051 0 0 4 4 84 384 1384 4384 9384 168 169 YWAAAA DKNAAA VVVVxx +4576 9052 0 0 6 16 76 576 576 4576 4576 152 153 AUAAAA EKNAAA AAAAxx +9613 9053 1 1 3 13 13 613 1613 4613 9613 26 27 TFAAAA FKNAAA HHHHxx +4001 9054 1 1 1 1 1 1 1 4001 4001 2 3 XXAAAA GKNAAA OOOOxx +3628 9055 0 0 8 8 28 628 1628 3628 3628 56 57 OJAAAA HKNAAA VVVVxx +6968 9056 0 0 8 8 68 968 968 1968 6968 136 137 AIAAAA IKNAAA AAAAxx +6491 9057 1 3 1 11 91 491 491 1491 6491 182 183 RPAAAA JKNAAA HHHHxx +1265 9058 1 1 5 5 65 265 1265 1265 1265 130 131 RWAAAA KKNAAA OOOOxx +6128 9059 0 0 8 8 28 128 128 1128 6128 56 57 SBAAAA LKNAAA VVVVxx +4274 9060 0 2 4 14 74 274 274 4274 4274 148 149 KIAAAA MKNAAA AAAAxx +3598 9061 0 2 8 18 98 598 1598 3598 3598 196 197 KIAAAA NKNAAA HHHHxx +7961 9062 1 1 1 1 61 961 1961 2961 7961 122 123 FUAAAA OKNAAA OOOOxx +2643 9063 1 3 3 3 43 643 643 2643 2643 86 87 RXAAAA PKNAAA VVVVxx +4547 9064 1 3 7 7 47 547 547 4547 4547 94 95 XSAAAA QKNAAA AAAAxx +3568 9065 0 0 8 8 68 568 1568 3568 3568 136 137 GHAAAA RKNAAA HHHHxx +8954 9066 0 2 4 14 54 954 954 3954 8954 108 109 KGAAAA SKNAAA OOOOxx +8802 9067 0 2 2 2 2 802 802 3802 8802 4 5 OAAAAA TKNAAA VVVVxx +7829 9068 1 1 9 9 29 829 1829 2829 7829 58 59 DPAAAA UKNAAA AAAAxx +1008 9069 0 0 8 8 8 8 1008 1008 1008 16 17 UMAAAA VKNAAA HHHHxx +3627 9070 1 3 7 7 27 627 1627 3627 3627 54 55 NJAAAA 
WKNAAA OOOOxx +3999 9071 1 3 9 19 99 999 1999 3999 3999 198 199 VXAAAA XKNAAA VVVVxx +7697 9072 1 1 7 17 97 697 1697 2697 7697 194 195 BKAAAA YKNAAA AAAAxx +9380 9073 0 0 0 0 80 380 1380 4380 9380 160 161 UWAAAA ZKNAAA HHHHxx +2707 9074 1 3 7 7 7 707 707 2707 2707 14 15 DAAAAA ALNAAA OOOOxx +4430 9075 0 2 0 10 30 430 430 4430 4430 60 61 KOAAAA BLNAAA VVVVxx +6440 9076 0 0 0 0 40 440 440 1440 6440 80 81 SNAAAA CLNAAA AAAAxx +9958 9077 0 2 8 18 58 958 1958 4958 9958 116 117 ATAAAA DLNAAA HHHHxx +7592 9078 0 0 2 12 92 592 1592 2592 7592 184 185 AGAAAA ELNAAA OOOOxx +7852 9079 0 0 2 12 52 852 1852 2852 7852 104 105 AQAAAA FLNAAA VVVVxx +9253 9080 1 1 3 13 53 253 1253 4253 9253 106 107 XRAAAA GLNAAA AAAAxx +5910 9081 0 2 0 10 10 910 1910 910 5910 20 21 ITAAAA HLNAAA HHHHxx +7487 9082 1 3 7 7 87 487 1487 2487 7487 174 175 ZBAAAA ILNAAA OOOOxx +6324 9083 0 0 4 4 24 324 324 1324 6324 48 49 GJAAAA JLNAAA VVVVxx +5792 9084 0 0 2 12 92 792 1792 792 5792 184 185 UOAAAA KLNAAA AAAAxx +7390 9085 0 2 0 10 90 390 1390 2390 7390 180 181 GYAAAA LLNAAA HHHHxx +8534 9086 0 2 4 14 34 534 534 3534 8534 68 69 GQAAAA MLNAAA OOOOxx +2690 9087 0 2 0 10 90 690 690 2690 2690 180 181 MZAAAA NLNAAA VVVVxx +3992 9088 0 0 2 12 92 992 1992 3992 3992 184 185 OXAAAA OLNAAA AAAAxx +6928 9089 0 0 8 8 28 928 928 1928 6928 56 57 MGAAAA PLNAAA HHHHxx +7815 9090 1 3 5 15 15 815 1815 2815 7815 30 31 POAAAA QLNAAA OOOOxx +9477 9091 1 1 7 17 77 477 1477 4477 9477 154 155 NAAAAA RLNAAA VVVVxx +497 9092 1 1 7 17 97 497 497 497 497 194 195 DTAAAA SLNAAA AAAAxx +7532 9093 0 0 2 12 32 532 1532 2532 7532 64 65 SDAAAA TLNAAA HHHHxx +9838 9094 0 2 8 18 38 838 1838 4838 9838 76 77 KOAAAA ULNAAA OOOOxx +1557 9095 1 1 7 17 57 557 1557 1557 1557 114 115 XHAAAA VLNAAA VVVVxx +2467 9096 1 3 7 7 67 467 467 2467 2467 134 135 XQAAAA WLNAAA AAAAxx +2367 9097 1 3 7 7 67 367 367 2367 2367 134 135 BNAAAA XLNAAA HHHHxx +5677 9098 1 1 7 17 77 677 1677 677 5677 154 155 JKAAAA YLNAAA OOOOxx +6193 9099 1 1 3 13 93 193 193 1193 6193 186 187 FEAAAA ZLNAAA VVVVxx +7126 9100 0 2 6 6 26 126 1126 2126 7126 52 53 COAAAA AMNAAA AAAAxx +5264 9101 0 0 4 4 64 264 1264 264 5264 128 129 MUAAAA BMNAAA HHHHxx +850 9102 0 2 0 10 50 850 850 850 850 100 101 SGAAAA CMNAAA OOOOxx +4854 9103 0 2 4 14 54 854 854 4854 4854 108 109 SEAAAA DMNAAA VVVVxx +4414 9104 0 2 4 14 14 414 414 4414 4414 28 29 UNAAAA EMNAAA AAAAxx +8971 9105 1 3 1 11 71 971 971 3971 8971 142 143 BHAAAA FMNAAA HHHHxx +9240 9106 0 0 0 0 40 240 1240 4240 9240 80 81 KRAAAA GMNAAA OOOOxx +7341 9107 1 1 1 1 41 341 1341 2341 7341 82 83 JWAAAA HMNAAA VVVVxx +3151 9108 1 3 1 11 51 151 1151 3151 3151 102 103 FRAAAA IMNAAA AAAAxx +1742 9109 0 2 2 2 42 742 1742 1742 1742 84 85 APAAAA JMNAAA HHHHxx +1347 9110 1 3 7 7 47 347 1347 1347 1347 94 95 VZAAAA KMNAAA OOOOxx +9418 9111 0 2 8 18 18 418 1418 4418 9418 36 37 GYAAAA LMNAAA VVVVxx +5452 9112 0 0 2 12 52 452 1452 452 5452 104 105 SBAAAA MMNAAA AAAAxx +8637 9113 1 1 7 17 37 637 637 3637 8637 74 75 FUAAAA NMNAAA HHHHxx +8287 9114 1 3 7 7 87 287 287 3287 8287 174 175 TGAAAA OMNAAA OOOOxx +9865 9115 1 1 5 5 65 865 1865 4865 9865 130 131 LPAAAA PMNAAA VVVVxx +1664 9116 0 0 4 4 64 664 1664 1664 1664 128 129 AMAAAA QMNAAA AAAAxx +9933 9117 1 1 3 13 33 933 1933 4933 9933 66 67 BSAAAA RMNAAA HHHHxx +3416 9118 0 0 6 16 16 416 1416 3416 3416 32 33 KBAAAA SMNAAA OOOOxx +7981 9119 1 1 1 1 81 981 1981 2981 7981 162 163 ZUAAAA TMNAAA VVVVxx +1981 9120 1 1 1 1 81 981 1981 1981 1981 162 163 FYAAAA UMNAAA AAAAxx +441 9121 1 1 1 1 41 441 441 441 441 82 83 ZQAAAA VMNAAA HHHHxx +1380 9122 0 0 
0 0 80 380 1380 1380 1380 160 161 CBAAAA WMNAAA OOOOxx +7325 9123 1 1 5 5 25 325 1325 2325 7325 50 51 TVAAAA XMNAAA VVVVxx +5682 9124 0 2 2 2 82 682 1682 682 5682 164 165 OKAAAA YMNAAA AAAAxx +1024 9125 0 0 4 4 24 24 1024 1024 1024 48 49 KNAAAA ZMNAAA HHHHxx +1096 9126 0 0 6 16 96 96 1096 1096 1096 192 193 EQAAAA ANNAAA OOOOxx +4717 9127 1 1 7 17 17 717 717 4717 4717 34 35 LZAAAA BNNAAA VVVVxx +7948 9128 0 0 8 8 48 948 1948 2948 7948 96 97 STAAAA CNNAAA AAAAxx +4074 9129 0 2 4 14 74 74 74 4074 4074 148 149 SAAAAA DNNAAA HHHHxx +211 9130 1 3 1 11 11 211 211 211 211 22 23 DIAAAA ENNAAA OOOOxx +8993 9131 1 1 3 13 93 993 993 3993 8993 186 187 XHAAAA FNNAAA VVVVxx +4509 9132 1 1 9 9 9 509 509 4509 4509 18 19 LRAAAA GNNAAA AAAAxx +823 9133 1 3 3 3 23 823 823 823 823 46 47 RFAAAA HNNAAA HHHHxx +4747 9134 1 3 7 7 47 747 747 4747 4747 94 95 PAAAAA INNAAA OOOOxx +6955 9135 1 3 5 15 55 955 955 1955 6955 110 111 NHAAAA JNNAAA VVVVxx +7922 9136 0 2 2 2 22 922 1922 2922 7922 44 45 SSAAAA KNNAAA AAAAxx +6936 9137 0 0 6 16 36 936 936 1936 6936 72 73 UGAAAA LNNAAA HHHHxx +1546 9138 0 2 6 6 46 546 1546 1546 1546 92 93 MHAAAA MNNAAA OOOOxx +9836 9139 0 0 6 16 36 836 1836 4836 9836 72 73 IOAAAA NNNAAA VVVVxx +5626 9140 0 2 6 6 26 626 1626 626 5626 52 53 KIAAAA ONNAAA AAAAxx +4879 9141 1 3 9 19 79 879 879 4879 4879 158 159 RFAAAA PNNAAA HHHHxx +8590 9142 0 2 0 10 90 590 590 3590 8590 180 181 KSAAAA QNNAAA OOOOxx +8842 9143 0 2 2 2 42 842 842 3842 8842 84 85 CCAAAA RNNAAA VVVVxx +6505 9144 1 1 5 5 5 505 505 1505 6505 10 11 FQAAAA SNNAAA AAAAxx +2803 9145 1 3 3 3 3 803 803 2803 2803 6 7 VDAAAA TNNAAA HHHHxx +9258 9146 0 2 8 18 58 258 1258 4258 9258 116 117 CSAAAA UNNAAA OOOOxx +741 9147 1 1 1 1 41 741 741 741 741 82 83 NCAAAA VNNAAA VVVVxx +1457 9148 1 1 7 17 57 457 1457 1457 1457 114 115 BEAAAA WNNAAA AAAAxx +5777 9149 1 1 7 17 77 777 1777 777 5777 154 155 FOAAAA XNNAAA HHHHxx +2883 9150 1 3 3 3 83 883 883 2883 2883 166 167 XGAAAA YNNAAA OOOOxx +6610 9151 0 2 0 10 10 610 610 1610 6610 20 21 GUAAAA ZNNAAA VVVVxx +4331 9152 1 3 1 11 31 331 331 4331 4331 62 63 PKAAAA AONAAA AAAAxx +2712 9153 0 0 2 12 12 712 712 2712 2712 24 25 IAAAAA BONAAA HHHHxx +9268 9154 0 0 8 8 68 268 1268 4268 9268 136 137 MSAAAA CONAAA OOOOxx +410 9155 0 2 0 10 10 410 410 410 410 20 21 UPAAAA DONAAA VVVVxx +9411 9156 1 3 1 11 11 411 1411 4411 9411 22 23 ZXAAAA EONAAA AAAAxx +4683 9157 1 3 3 3 83 683 683 4683 4683 166 167 DYAAAA FONAAA HHHHxx +7072 9158 0 0 2 12 72 72 1072 2072 7072 144 145 AMAAAA GONAAA OOOOxx +5050 9159 0 2 0 10 50 50 1050 50 5050 100 101 GMAAAA HONAAA VVVVxx +5932 9160 0 0 2 12 32 932 1932 932 5932 64 65 EUAAAA IONAAA AAAAxx +2756 9161 0 0 6 16 56 756 756 2756 2756 112 113 ACAAAA JONAAA HHHHxx +9813 9162 1 1 3 13 13 813 1813 4813 9813 26 27 LNAAAA KONAAA OOOOxx +7388 9163 0 0 8 8 88 388 1388 2388 7388 176 177 EYAAAA LONAAA VVVVxx +2596 9164 0 0 6 16 96 596 596 2596 2596 192 193 WVAAAA MONAAA AAAAxx +5102 9165 0 2 2 2 2 102 1102 102 5102 4 5 GOAAAA NONAAA HHHHxx +208 9166 0 0 8 8 8 208 208 208 208 16 17 AIAAAA OONAAA OOOOxx +86 9167 0 2 6 6 86 86 86 86 86 172 173 IDAAAA PONAAA VVVVxx +8127 9168 1 3 7 7 27 127 127 3127 8127 54 55 PAAAAA QONAAA AAAAxx +5154 9169 0 2 4 14 54 154 1154 154 5154 108 109 GQAAAA RONAAA HHHHxx +4491 9170 1 3 1 11 91 491 491 4491 4491 182 183 TQAAAA SONAAA OOOOxx +7423 9171 1 3 3 3 23 423 1423 2423 7423 46 47 NZAAAA TONAAA VVVVxx +6441 9172 1 1 1 1 41 441 441 1441 6441 82 83 TNAAAA UONAAA AAAAxx +2920 9173 0 0 0 0 20 920 920 2920 2920 40 41 IIAAAA VONAAA HHHHxx +6386 9174 0 2 6 6 86 386 386 1386 
6386 172 173 QLAAAA WONAAA OOOOxx +9744 9175 0 0 4 4 44 744 1744 4744 9744 88 89 UKAAAA XONAAA VVVVxx +2667 9176 1 3 7 7 67 667 667 2667 2667 134 135 PYAAAA YONAAA AAAAxx +5754 9177 0 2 4 14 54 754 1754 754 5754 108 109 INAAAA ZONAAA HHHHxx +4645 9178 1 1 5 5 45 645 645 4645 4645 90 91 RWAAAA APNAAA OOOOxx +4327 9179 1 3 7 7 27 327 327 4327 4327 54 55 LKAAAA BPNAAA VVVVxx +843 9180 1 3 3 3 43 843 843 843 843 86 87 LGAAAA CPNAAA AAAAxx +4085 9181 1 1 5 5 85 85 85 4085 4085 170 171 DBAAAA DPNAAA HHHHxx +2849 9182 1 1 9 9 49 849 849 2849 2849 98 99 PFAAAA EPNAAA OOOOxx +5734 9183 0 2 4 14 34 734 1734 734 5734 68 69 OMAAAA FPNAAA VVVVxx +5307 9184 1 3 7 7 7 307 1307 307 5307 14 15 DWAAAA GPNAAA AAAAxx +8433 9185 1 1 3 13 33 433 433 3433 8433 66 67 JMAAAA HPNAAA HHHHxx +3031 9186 1 3 1 11 31 31 1031 3031 3031 62 63 PMAAAA IPNAAA OOOOxx +5714 9187 0 2 4 14 14 714 1714 714 5714 28 29 ULAAAA JPNAAA VVVVxx +5969 9188 1 1 9 9 69 969 1969 969 5969 138 139 PVAAAA KPNAAA AAAAxx +2532 9189 0 0 2 12 32 532 532 2532 2532 64 65 KTAAAA LPNAAA HHHHxx +5219 9190 1 3 9 19 19 219 1219 219 5219 38 39 TSAAAA MPNAAA OOOOxx +7343 9191 1 3 3 3 43 343 1343 2343 7343 86 87 LWAAAA NPNAAA VVVVxx +9089 9192 1 1 9 9 89 89 1089 4089 9089 178 179 PLAAAA OPNAAA AAAAxx +9337 9193 1 1 7 17 37 337 1337 4337 9337 74 75 DVAAAA PPNAAA HHHHxx +5131 9194 1 3 1 11 31 131 1131 131 5131 62 63 JPAAAA QPNAAA OOOOxx +6253 9195 1 1 3 13 53 253 253 1253 6253 106 107 NGAAAA RPNAAA VVVVxx +5140 9196 0 0 0 0 40 140 1140 140 5140 80 81 SPAAAA SPNAAA AAAAxx +2953 9197 1 1 3 13 53 953 953 2953 2953 106 107 PJAAAA TPNAAA HHHHxx +4293 9198 1 1 3 13 93 293 293 4293 4293 186 187 DJAAAA UPNAAA OOOOxx +9974 9199 0 2 4 14 74 974 1974 4974 9974 148 149 QTAAAA VPNAAA VVVVxx +5061 9200 1 1 1 1 61 61 1061 61 5061 122 123 RMAAAA WPNAAA AAAAxx +8570 9201 0 2 0 10 70 570 570 3570 8570 140 141 QRAAAA XPNAAA HHHHxx +9504 9202 0 0 4 4 4 504 1504 4504 9504 8 9 OBAAAA YPNAAA OOOOxx +604 9203 0 0 4 4 4 604 604 604 604 8 9 GXAAAA ZPNAAA VVVVxx +4991 9204 1 3 1 11 91 991 991 4991 4991 182 183 ZJAAAA AQNAAA AAAAxx +880 9205 0 0 0 0 80 880 880 880 880 160 161 WHAAAA BQNAAA HHHHxx +3861 9206 1 1 1 1 61 861 1861 3861 3861 122 123 NSAAAA CQNAAA OOOOxx +8262 9207 0 2 2 2 62 262 262 3262 8262 124 125 UFAAAA DQNAAA VVVVxx +5689 9208 1 1 9 9 89 689 1689 689 5689 178 179 VKAAAA EQNAAA AAAAxx +1793 9209 1 1 3 13 93 793 1793 1793 1793 186 187 ZQAAAA FQNAAA HHHHxx +2661 9210 1 1 1 1 61 661 661 2661 2661 122 123 JYAAAA GQNAAA OOOOxx +7954 9211 0 2 4 14 54 954 1954 2954 7954 108 109 YTAAAA HQNAAA VVVVxx +1874 9212 0 2 4 14 74 874 1874 1874 1874 148 149 CUAAAA IQNAAA AAAAxx +2982 9213 0 2 2 2 82 982 982 2982 2982 164 165 SKAAAA JQNAAA HHHHxx +331 9214 1 3 1 11 31 331 331 331 331 62 63 TMAAAA KQNAAA OOOOxx +5021 9215 1 1 1 1 21 21 1021 21 5021 42 43 DLAAAA LQNAAA VVVVxx +9894 9216 0 2 4 14 94 894 1894 4894 9894 188 189 OQAAAA MQNAAA AAAAxx +7709 9217 1 1 9 9 9 709 1709 2709 7709 18 19 NKAAAA NQNAAA HHHHxx +4980 9218 0 0 0 0 80 980 980 4980 4980 160 161 OJAAAA OQNAAA OOOOxx +8249 9219 1 1 9 9 49 249 249 3249 8249 98 99 HFAAAA PQNAAA VVVVxx +7120 9220 0 0 0 0 20 120 1120 2120 7120 40 41 WNAAAA QQNAAA AAAAxx +7464 9221 0 0 4 4 64 464 1464 2464 7464 128 129 CBAAAA RQNAAA HHHHxx +8086 9222 0 2 6 6 86 86 86 3086 8086 172 173 AZAAAA SQNAAA OOOOxx +3509 9223 1 1 9 9 9 509 1509 3509 3509 18 19 ZEAAAA TQNAAA VVVVxx +3902 9224 0 2 2 2 2 902 1902 3902 3902 4 5 CUAAAA UQNAAA AAAAxx +9907 9225 1 3 7 7 7 907 1907 4907 9907 14 15 BRAAAA VQNAAA HHHHxx +6278 9226 0 2 8 18 78 278 278 1278 6278 156 157 
MHAAAA WQNAAA OOOOxx +9316 9227 0 0 6 16 16 316 1316 4316 9316 32 33 IUAAAA XQNAAA VVVVxx +2824 9228 0 0 4 4 24 824 824 2824 2824 48 49 QEAAAA YQNAAA AAAAxx +1558 9229 0 2 8 18 58 558 1558 1558 1558 116 117 YHAAAA ZQNAAA HHHHxx +5436 9230 0 0 6 16 36 436 1436 436 5436 72 73 CBAAAA ARNAAA OOOOxx +1161 9231 1 1 1 1 61 161 1161 1161 1161 122 123 RSAAAA BRNAAA VVVVxx +7569 9232 1 1 9 9 69 569 1569 2569 7569 138 139 DFAAAA CRNAAA AAAAxx +9614 9233 0 2 4 14 14 614 1614 4614 9614 28 29 UFAAAA DRNAAA HHHHxx +6970 9234 0 2 0 10 70 970 970 1970 6970 140 141 CIAAAA ERNAAA OOOOxx +2422 9235 0 2 2 2 22 422 422 2422 2422 44 45 EPAAAA FRNAAA VVVVxx +8860 9236 0 0 0 0 60 860 860 3860 8860 120 121 UCAAAA GRNAAA AAAAxx +9912 9237 0 0 2 12 12 912 1912 4912 9912 24 25 GRAAAA HRNAAA HHHHxx +1109 9238 1 1 9 9 9 109 1109 1109 1109 18 19 RQAAAA IRNAAA OOOOxx +3286 9239 0 2 6 6 86 286 1286 3286 3286 172 173 KWAAAA JRNAAA VVVVxx +2277 9240 1 1 7 17 77 277 277 2277 2277 154 155 PJAAAA KRNAAA AAAAxx +8656 9241 0 0 6 16 56 656 656 3656 8656 112 113 YUAAAA LRNAAA HHHHxx +4656 9242 0 0 6 16 56 656 656 4656 4656 112 113 CXAAAA MRNAAA OOOOxx +6965 9243 1 1 5 5 65 965 965 1965 6965 130 131 XHAAAA NRNAAA VVVVxx +7591 9244 1 3 1 11 91 591 1591 2591 7591 182 183 ZFAAAA ORNAAA AAAAxx +4883 9245 1 3 3 3 83 883 883 4883 4883 166 167 VFAAAA PRNAAA HHHHxx +452 9246 0 0 2 12 52 452 452 452 452 104 105 KRAAAA QRNAAA OOOOxx +4018 9247 0 2 8 18 18 18 18 4018 4018 36 37 OYAAAA RRNAAA VVVVxx +4066 9248 0 2 6 6 66 66 66 4066 4066 132 133 KAAAAA SRNAAA AAAAxx +6480 9249 0 0 0 0 80 480 480 1480 6480 160 161 GPAAAA TRNAAA HHHHxx +8634 9250 0 2 4 14 34 634 634 3634 8634 68 69 CUAAAA URNAAA OOOOxx +9387 9251 1 3 7 7 87 387 1387 4387 9387 174 175 BXAAAA VRNAAA VVVVxx +3476 9252 0 0 6 16 76 476 1476 3476 3476 152 153 SDAAAA WRNAAA AAAAxx +5995 9253 1 3 5 15 95 995 1995 995 5995 190 191 PWAAAA XRNAAA HHHHxx +9677 9254 1 1 7 17 77 677 1677 4677 9677 154 155 FIAAAA YRNAAA OOOOxx +3884 9255 0 0 4 4 84 884 1884 3884 3884 168 169 KTAAAA ZRNAAA VVVVxx +6500 9256 0 0 0 0 0 500 500 1500 6500 0 1 AQAAAA ASNAAA AAAAxx +7972 9257 0 0 2 12 72 972 1972 2972 7972 144 145 QUAAAA BSNAAA HHHHxx +5281 9258 1 1 1 1 81 281 1281 281 5281 162 163 DVAAAA CSNAAA OOOOxx +1288 9259 0 0 8 8 88 288 1288 1288 1288 176 177 OXAAAA DSNAAA VVVVxx +4366 9260 0 2 6 6 66 366 366 4366 4366 132 133 YLAAAA ESNAAA AAAAxx +6557 9261 1 1 7 17 57 557 557 1557 6557 114 115 FSAAAA FSNAAA HHHHxx +7086 9262 0 2 6 6 86 86 1086 2086 7086 172 173 OMAAAA GSNAAA OOOOxx +6588 9263 0 0 8 8 88 588 588 1588 6588 176 177 KTAAAA HSNAAA VVVVxx +9062 9264 0 2 2 2 62 62 1062 4062 9062 124 125 OKAAAA ISNAAA AAAAxx +9230 9265 0 2 0 10 30 230 1230 4230 9230 60 61 ARAAAA JSNAAA HHHHxx +7672 9266 0 0 2 12 72 672 1672 2672 7672 144 145 CJAAAA KSNAAA OOOOxx +5204 9267 0 0 4 4 4 204 1204 204 5204 8 9 ESAAAA LSNAAA VVVVxx +2836 9268 0 0 6 16 36 836 836 2836 2836 72 73 CFAAAA MSNAAA AAAAxx +7165 9269 1 1 5 5 65 165 1165 2165 7165 130 131 PPAAAA NSNAAA HHHHxx +971 9270 1 3 1 11 71 971 971 971 971 142 143 JLAAAA OSNAAA OOOOxx +3851 9271 1 3 1 11 51 851 1851 3851 3851 102 103 DSAAAA PSNAAA VVVVxx +8593 9272 1 1 3 13 93 593 593 3593 8593 186 187 NSAAAA QSNAAA AAAAxx +7742 9273 0 2 2 2 42 742 1742 2742 7742 84 85 ULAAAA RSNAAA HHHHxx +2887 9274 1 3 7 7 87 887 887 2887 2887 174 175 BHAAAA SSNAAA OOOOxx +8479 9275 1 3 9 19 79 479 479 3479 8479 158 159 DOAAAA TSNAAA VVVVxx +9514 9276 0 2 4 14 14 514 1514 4514 9514 28 29 YBAAAA USNAAA AAAAxx +273 9277 1 1 3 13 73 273 273 273 273 146 147 NKAAAA VSNAAA HHHHxx +2938 9278 0 2 8 
18 38 938 938 2938 2938 76 77 AJAAAA WSNAAA OOOOxx +9793 9279 1 1 3 13 93 793 1793 4793 9793 186 187 RMAAAA XSNAAA VVVVxx +8050 9280 0 2 0 10 50 50 50 3050 8050 100 101 QXAAAA YSNAAA AAAAxx +6702 9281 0 2 2 2 2 702 702 1702 6702 4 5 UXAAAA ZSNAAA HHHHxx +7290 9282 0 2 0 10 90 290 1290 2290 7290 180 181 KUAAAA ATNAAA OOOOxx +1837 9283 1 1 7 17 37 837 1837 1837 1837 74 75 RSAAAA BTNAAA VVVVxx +3206 9284 0 2 6 6 6 206 1206 3206 3206 12 13 ITAAAA CTNAAA AAAAxx +4925 9285 1 1 5 5 25 925 925 4925 4925 50 51 LHAAAA DTNAAA HHHHxx +5066 9286 0 2 6 6 66 66 1066 66 5066 132 133 WMAAAA ETNAAA OOOOxx +3401 9287 1 1 1 1 1 401 1401 3401 3401 2 3 VAAAAA FTNAAA VVVVxx +3474 9288 0 2 4 14 74 474 1474 3474 3474 148 149 QDAAAA GTNAAA AAAAxx +57 9289 1 1 7 17 57 57 57 57 57 114 115 FCAAAA HTNAAA HHHHxx +2082 9290 0 2 2 2 82 82 82 2082 2082 164 165 CCAAAA ITNAAA OOOOxx +100 9291 0 0 0 0 0 100 100 100 100 0 1 WDAAAA JTNAAA VVVVxx +9665 9292 1 1 5 5 65 665 1665 4665 9665 130 131 THAAAA KTNAAA AAAAxx +8284 9293 0 0 4 4 84 284 284 3284 8284 168 169 QGAAAA LTNAAA HHHHxx +958 9294 0 2 8 18 58 958 958 958 958 116 117 WKAAAA MTNAAA OOOOxx +5282 9295 0 2 2 2 82 282 1282 282 5282 164 165 EVAAAA NTNAAA VVVVxx +4257 9296 1 1 7 17 57 257 257 4257 4257 114 115 THAAAA OTNAAA AAAAxx +3160 9297 0 0 0 0 60 160 1160 3160 3160 120 121 ORAAAA PTNAAA HHHHxx +8449 9298 1 1 9 9 49 449 449 3449 8449 98 99 ZMAAAA QTNAAA OOOOxx +500 9299 0 0 0 0 0 500 500 500 500 0 1 GTAAAA RTNAAA VVVVxx +6432 9300 0 0 2 12 32 432 432 1432 6432 64 65 KNAAAA STNAAA AAAAxx +6220 9301 0 0 0 0 20 220 220 1220 6220 40 41 GFAAAA TTNAAA HHHHxx +7233 9302 1 1 3 13 33 233 1233 2233 7233 66 67 FSAAAA UTNAAA OOOOxx +2723 9303 1 3 3 3 23 723 723 2723 2723 46 47 TAAAAA VTNAAA VVVVxx +1899 9304 1 3 9 19 99 899 1899 1899 1899 198 199 BVAAAA WTNAAA AAAAxx +7158 9305 0 2 8 18 58 158 1158 2158 7158 116 117 IPAAAA XTNAAA HHHHxx +202 9306 0 2 2 2 2 202 202 202 202 4 5 UHAAAA YTNAAA OOOOxx +2286 9307 0 2 6 6 86 286 286 2286 2286 172 173 YJAAAA ZTNAAA VVVVxx +5356 9308 0 0 6 16 56 356 1356 356 5356 112 113 AYAAAA AUNAAA AAAAxx +3809 9309 1 1 9 9 9 809 1809 3809 3809 18 19 NQAAAA BUNAAA HHHHxx +3979 9310 1 3 9 19 79 979 1979 3979 3979 158 159 BXAAAA CUNAAA OOOOxx +8359 9311 1 3 9 19 59 359 359 3359 8359 118 119 NJAAAA DUNAAA VVVVxx +3479 9312 1 3 9 19 79 479 1479 3479 3479 158 159 VDAAAA EUNAAA AAAAxx +4895 9313 1 3 5 15 95 895 895 4895 4895 190 191 HGAAAA FUNAAA HHHHxx +6059 9314 1 3 9 19 59 59 59 1059 6059 118 119 BZAAAA GUNAAA OOOOxx +9560 9315 0 0 0 0 60 560 1560 4560 9560 120 121 SDAAAA HUNAAA VVVVxx +6756 9316 0 0 6 16 56 756 756 1756 6756 112 113 WZAAAA IUNAAA AAAAxx +7504 9317 0 0 4 4 4 504 1504 2504 7504 8 9 QCAAAA JUNAAA HHHHxx +6762 9318 0 2 2 2 62 762 762 1762 6762 124 125 CAAAAA KUNAAA OOOOxx +5304 9319 0 0 4 4 4 304 1304 304 5304 8 9 AWAAAA LUNAAA VVVVxx +9533 9320 1 1 3 13 33 533 1533 4533 9533 66 67 RCAAAA MUNAAA AAAAxx +6649 9321 1 1 9 9 49 649 649 1649 6649 98 99 TVAAAA NUNAAA HHHHxx +38 9322 0 2 8 18 38 38 38 38 38 76 77 MBAAAA OUNAAA OOOOxx +5713 9323 1 1 3 13 13 713 1713 713 5713 26 27 TLAAAA PUNAAA VVVVxx +3000 9324 0 0 0 0 0 0 1000 3000 3000 0 1 KLAAAA QUNAAA AAAAxx +3738 9325 0 2 8 18 38 738 1738 3738 3738 76 77 UNAAAA RUNAAA HHHHxx +3327 9326 1 3 7 7 27 327 1327 3327 3327 54 55 ZXAAAA SUNAAA OOOOxx +3922 9327 0 2 2 2 22 922 1922 3922 3922 44 45 WUAAAA TUNAAA VVVVxx +9245 9328 1 1 5 5 45 245 1245 4245 9245 90 91 PRAAAA UUNAAA AAAAxx +2172 9329 0 0 2 12 72 172 172 2172 2172 144 145 OFAAAA VUNAAA HHHHxx +7128 9330 0 0 8 8 28 128 1128 2128 7128 56 57 
EOAAAA WUNAAA OOOOxx +1195 9331 1 3 5 15 95 195 1195 1195 1195 190 191 ZTAAAA XUNAAA VVVVxx +8445 9332 1 1 5 5 45 445 445 3445 8445 90 91 VMAAAA YUNAAA AAAAxx +8638 9333 0 2 8 18 38 638 638 3638 8638 76 77 GUAAAA ZUNAAA HHHHxx +1249 9334 1 1 9 9 49 249 1249 1249 1249 98 99 BWAAAA AVNAAA OOOOxx +8659 9335 1 3 9 19 59 659 659 3659 8659 118 119 BVAAAA BVNAAA VVVVxx +3556 9336 0 0 6 16 56 556 1556 3556 3556 112 113 UGAAAA CVNAAA AAAAxx +3347 9337 1 3 7 7 47 347 1347 3347 3347 94 95 TYAAAA DVNAAA HHHHxx +3260 9338 0 0 0 0 60 260 1260 3260 3260 120 121 KVAAAA EVNAAA OOOOxx +5139 9339 1 3 9 19 39 139 1139 139 5139 78 79 RPAAAA FVNAAA VVVVxx +9991 9340 1 3 1 11 91 991 1991 4991 9991 182 183 HUAAAA GVNAAA AAAAxx +5499 9341 1 3 9 19 99 499 1499 499 5499 198 199 NDAAAA HVNAAA HHHHxx +8082 9342 0 2 2 2 82 82 82 3082 8082 164 165 WYAAAA IVNAAA OOOOxx +1640 9343 0 0 0 0 40 640 1640 1640 1640 80 81 CLAAAA JVNAAA VVVVxx +8726 9344 0 2 6 6 26 726 726 3726 8726 52 53 QXAAAA KVNAAA AAAAxx +2339 9345 1 3 9 19 39 339 339 2339 2339 78 79 ZLAAAA LVNAAA HHHHxx +2601 9346 1 1 1 1 1 601 601 2601 2601 2 3 BWAAAA MVNAAA OOOOxx +9940 9347 0 0 0 0 40 940 1940 4940 9940 80 81 ISAAAA NVNAAA VVVVxx +4185 9348 1 1 5 5 85 185 185 4185 4185 170 171 ZEAAAA OVNAAA AAAAxx +9546 9349 0 2 6 6 46 546 1546 4546 9546 92 93 EDAAAA PVNAAA HHHHxx +5218 9350 0 2 8 18 18 218 1218 218 5218 36 37 SSAAAA QVNAAA OOOOxx +4374 9351 0 2 4 14 74 374 374 4374 4374 148 149 GMAAAA RVNAAA VVVVxx +288 9352 0 0 8 8 88 288 288 288 288 176 177 CLAAAA SVNAAA AAAAxx +7445 9353 1 1 5 5 45 445 1445 2445 7445 90 91 JAAAAA TVNAAA HHHHxx +1710 9354 0 2 0 10 10 710 1710 1710 1710 20 21 UNAAAA UVNAAA OOOOxx +6409 9355 1 1 9 9 9 409 409 1409 6409 18 19 NMAAAA VVNAAA VVVVxx +7982 9356 0 2 2 2 82 982 1982 2982 7982 164 165 AVAAAA WVNAAA AAAAxx +4950 9357 0 2 0 10 50 950 950 4950 4950 100 101 KIAAAA XVNAAA HHHHxx +9242 9358 0 2 2 2 42 242 1242 4242 9242 84 85 MRAAAA YVNAAA OOOOxx +3272 9359 0 0 2 12 72 272 1272 3272 3272 144 145 WVAAAA ZVNAAA VVVVxx +739 9360 1 3 9 19 39 739 739 739 739 78 79 LCAAAA AWNAAA AAAAxx +5526 9361 0 2 6 6 26 526 1526 526 5526 52 53 OEAAAA BWNAAA HHHHxx +8189 9362 1 1 9 9 89 189 189 3189 8189 178 179 ZCAAAA CWNAAA OOOOxx +9106 9363 0 2 6 6 6 106 1106 4106 9106 12 13 GMAAAA DWNAAA VVVVxx +9775 9364 1 3 5 15 75 775 1775 4775 9775 150 151 ZLAAAA EWNAAA AAAAxx +4643 9365 1 3 3 3 43 643 643 4643 4643 86 87 PWAAAA FWNAAA HHHHxx +8396 9366 0 0 6 16 96 396 396 3396 8396 192 193 YKAAAA GWNAAA OOOOxx +3255 9367 1 3 5 15 55 255 1255 3255 3255 110 111 FVAAAA HWNAAA VVVVxx +301 9368 1 1 1 1 1 301 301 301 301 2 3 PLAAAA IWNAAA AAAAxx +6014 9369 0 2 4 14 14 14 14 1014 6014 28 29 IXAAAA JWNAAA HHHHxx +6046 9370 0 2 6 6 46 46 46 1046 6046 92 93 OYAAAA KWNAAA OOOOxx +984 9371 0 0 4 4 84 984 984 984 984 168 169 WLAAAA LWNAAA VVVVxx +2420 9372 0 0 0 0 20 420 420 2420 2420 40 41 CPAAAA MWNAAA AAAAxx +2922 9373 0 2 2 2 22 922 922 2922 2922 44 45 KIAAAA NWNAAA HHHHxx +2317 9374 1 1 7 17 17 317 317 2317 2317 34 35 DLAAAA OWNAAA OOOOxx +7332 9375 0 0 2 12 32 332 1332 2332 7332 64 65 AWAAAA PWNAAA VVVVxx +6451 9376 1 3 1 11 51 451 451 1451 6451 102 103 DOAAAA QWNAAA AAAAxx +2589 9377 1 1 9 9 89 589 589 2589 2589 178 179 PVAAAA RWNAAA HHHHxx +4333 9378 1 1 3 13 33 333 333 4333 4333 66 67 RKAAAA SWNAAA OOOOxx +8650 9379 0 2 0 10 50 650 650 3650 8650 100 101 SUAAAA TWNAAA VVVVxx +6856 9380 0 0 6 16 56 856 856 1856 6856 112 113 SDAAAA UWNAAA AAAAxx +4194 9381 0 2 4 14 94 194 194 4194 4194 188 189 IFAAAA VWNAAA HHHHxx +6246 9382 0 2 6 6 46 246 246 1246 6246 92 93 GGAAAA 
WWNAAA OOOOxx +4371 9383 1 3 1 11 71 371 371 4371 4371 142 143 DMAAAA XWNAAA VVVVxx +1388 9384 0 0 8 8 88 388 1388 1388 1388 176 177 KBAAAA YWNAAA AAAAxx +1056 9385 0 0 6 16 56 56 1056 1056 1056 112 113 QOAAAA ZWNAAA HHHHxx +6041 9386 1 1 1 1 41 41 41 1041 6041 82 83 JYAAAA AXNAAA OOOOxx +6153 9387 1 1 3 13 53 153 153 1153 6153 106 107 RCAAAA BXNAAA VVVVxx +8450 9388 0 2 0 10 50 450 450 3450 8450 100 101 ANAAAA CXNAAA AAAAxx +3469 9389 1 1 9 9 69 469 1469 3469 3469 138 139 LDAAAA DXNAAA HHHHxx +5226 9390 0 2 6 6 26 226 1226 226 5226 52 53 ATAAAA EXNAAA OOOOxx +8112 9391 0 0 2 12 12 112 112 3112 8112 24 25 AAAAAA FXNAAA VVVVxx +647 9392 1 3 7 7 47 647 647 647 647 94 95 XYAAAA GXNAAA AAAAxx +2567 9393 1 3 7 7 67 567 567 2567 2567 134 135 TUAAAA HXNAAA HHHHxx +9064 9394 0 0 4 4 64 64 1064 4064 9064 128 129 QKAAAA IXNAAA OOOOxx +5161 9395 1 1 1 1 61 161 1161 161 5161 122 123 NQAAAA JXNAAA VVVVxx +5260 9396 0 0 0 0 60 260 1260 260 5260 120 121 IUAAAA KXNAAA AAAAxx +8988 9397 0 0 8 8 88 988 988 3988 8988 176 177 SHAAAA LXNAAA HHHHxx +9678 9398 0 2 8 18 78 678 1678 4678 9678 156 157 GIAAAA MXNAAA OOOOxx +6853 9399 1 1 3 13 53 853 853 1853 6853 106 107 PDAAAA NXNAAA VVVVxx +5294 9400 0 2 4 14 94 294 1294 294 5294 188 189 QVAAAA OXNAAA AAAAxx +9864 9401 0 0 4 4 64 864 1864 4864 9864 128 129 KPAAAA PXNAAA HHHHxx +8702 9402 0 2 2 2 2 702 702 3702 8702 4 5 SWAAAA QXNAAA OOOOxx +1132 9403 0 0 2 12 32 132 1132 1132 1132 64 65 ORAAAA RXNAAA VVVVxx +1524 9404 0 0 4 4 24 524 1524 1524 1524 48 49 QGAAAA SXNAAA AAAAxx +4560 9405 0 0 0 0 60 560 560 4560 4560 120 121 KTAAAA TXNAAA HHHHxx +2137 9406 1 1 7 17 37 137 137 2137 2137 74 75 FEAAAA UXNAAA OOOOxx +3283 9407 1 3 3 3 83 283 1283 3283 3283 166 167 HWAAAA VXNAAA VVVVxx +3377 9408 1 1 7 17 77 377 1377 3377 3377 154 155 XZAAAA WXNAAA AAAAxx +2267 9409 1 3 7 7 67 267 267 2267 2267 134 135 FJAAAA XXNAAA HHHHxx +8987 9410 1 3 7 7 87 987 987 3987 8987 174 175 RHAAAA YXNAAA OOOOxx +6709 9411 1 1 9 9 9 709 709 1709 6709 18 19 BYAAAA ZXNAAA VVVVxx +8059 9412 1 3 9 19 59 59 59 3059 8059 118 119 ZXAAAA AYNAAA AAAAxx +3402 9413 0 2 2 2 2 402 1402 3402 3402 4 5 WAAAAA BYNAAA HHHHxx +6443 9414 1 3 3 3 43 443 443 1443 6443 86 87 VNAAAA CYNAAA OOOOxx +8858 9415 0 2 8 18 58 858 858 3858 8858 116 117 SCAAAA DYNAAA VVVVxx +3974 9416 0 2 4 14 74 974 1974 3974 3974 148 149 WWAAAA EYNAAA AAAAxx +3521 9417 1 1 1 1 21 521 1521 3521 3521 42 43 LFAAAA FYNAAA HHHHxx +9509 9418 1 1 9 9 9 509 1509 4509 9509 18 19 TBAAAA GYNAAA OOOOxx +5442 9419 0 2 2 2 42 442 1442 442 5442 84 85 IBAAAA HYNAAA VVVVxx +8968 9420 0 0 8 8 68 968 968 3968 8968 136 137 YGAAAA IYNAAA AAAAxx +333 9421 1 1 3 13 33 333 333 333 333 66 67 VMAAAA JYNAAA HHHHxx +952 9422 0 0 2 12 52 952 952 952 952 104 105 QKAAAA KYNAAA OOOOxx +7482 9423 0 2 2 2 82 482 1482 2482 7482 164 165 UBAAAA LYNAAA VVVVxx +1486 9424 0 2 6 6 86 486 1486 1486 1486 172 173 EFAAAA MYNAAA AAAAxx +1815 9425 1 3 5 15 15 815 1815 1815 1815 30 31 VRAAAA NYNAAA HHHHxx +7937 9426 1 1 7 17 37 937 1937 2937 7937 74 75 HTAAAA OYNAAA OOOOxx +1436 9427 0 0 6 16 36 436 1436 1436 1436 72 73 GDAAAA PYNAAA VVVVxx +3470 9428 0 2 0 10 70 470 1470 3470 3470 140 141 MDAAAA QYNAAA AAAAxx +8195 9429 1 3 5 15 95 195 195 3195 8195 190 191 FDAAAA RYNAAA HHHHxx +6906 9430 0 2 6 6 6 906 906 1906 6906 12 13 QFAAAA SYNAAA OOOOxx +2539 9431 1 3 9 19 39 539 539 2539 2539 78 79 RTAAAA TYNAAA VVVVxx +5988 9432 0 0 8 8 88 988 1988 988 5988 176 177 IWAAAA UYNAAA AAAAxx +8908 9433 0 0 8 8 8 908 908 3908 8908 16 17 QEAAAA VYNAAA HHHHxx +2319 9434 1 3 9 19 19 319 319 2319 2319 38 39 
FLAAAA WYNAAA OOOOxx +3263 9435 1 3 3 3 63 263 1263 3263 3263 126 127 NVAAAA XYNAAA VVVVxx +4039 9436 1 3 9 19 39 39 39 4039 4039 78 79 JZAAAA YYNAAA AAAAxx +6373 9437 1 1 3 13 73 373 373 1373 6373 146 147 DLAAAA ZYNAAA HHHHxx +1168 9438 0 0 8 8 68 168 1168 1168 1168 136 137 YSAAAA AZNAAA OOOOxx +8338 9439 0 2 8 18 38 338 338 3338 8338 76 77 SIAAAA BZNAAA VVVVxx +1172 9440 0 0 2 12 72 172 1172 1172 1172 144 145 CTAAAA CZNAAA AAAAxx +200 9441 0 0 0 0 0 200 200 200 200 0 1 SHAAAA DZNAAA HHHHxx +6355 9442 1 3 5 15 55 355 355 1355 6355 110 111 LKAAAA EZNAAA OOOOxx +7768 9443 0 0 8 8 68 768 1768 2768 7768 136 137 UMAAAA FZNAAA VVVVxx +25 9444 1 1 5 5 25 25 25 25 25 50 51 ZAAAAA GZNAAA AAAAxx +7144 9445 0 0 4 4 44 144 1144 2144 7144 88 89 UOAAAA HZNAAA HHHHxx +8671 9446 1 3 1 11 71 671 671 3671 8671 142 143 NVAAAA IZNAAA OOOOxx +9163 9447 1 3 3 3 63 163 1163 4163 9163 126 127 LOAAAA JZNAAA VVVVxx +8889 9448 1 1 9 9 89 889 889 3889 8889 178 179 XDAAAA KZNAAA AAAAxx +5950 9449 0 2 0 10 50 950 1950 950 5950 100 101 WUAAAA LZNAAA HHHHxx +6163 9450 1 3 3 3 63 163 163 1163 6163 126 127 BDAAAA MZNAAA OOOOxx +8119 9451 1 3 9 19 19 119 119 3119 8119 38 39 HAAAAA NZNAAA VVVVxx +1416 9452 0 0 6 16 16 416 1416 1416 1416 32 33 MCAAAA OZNAAA AAAAxx +4132 9453 0 0 2 12 32 132 132 4132 4132 64 65 YCAAAA PZNAAA HHHHxx +2294 9454 0 2 4 14 94 294 294 2294 2294 188 189 GKAAAA QZNAAA OOOOxx +9094 9455 0 2 4 14 94 94 1094 4094 9094 188 189 ULAAAA RZNAAA VVVVxx +4168 9456 0 0 8 8 68 168 168 4168 4168 136 137 IEAAAA SZNAAA AAAAxx +9108 9457 0 0 8 8 8 108 1108 4108 9108 16 17 IMAAAA TZNAAA HHHHxx +5706 9458 0 2 6 6 6 706 1706 706 5706 12 13 MLAAAA UZNAAA OOOOxx +2231 9459 1 3 1 11 31 231 231 2231 2231 62 63 VHAAAA VZNAAA VVVVxx +2173 9460 1 1 3 13 73 173 173 2173 2173 146 147 PFAAAA WZNAAA AAAAxx +90 9461 0 2 0 10 90 90 90 90 90 180 181 MDAAAA XZNAAA HHHHxx +9996 9462 0 0 6 16 96 996 1996 4996 9996 192 193 MUAAAA YZNAAA OOOOxx +330 9463 0 2 0 10 30 330 330 330 330 60 61 SMAAAA ZZNAAA VVVVxx +2052 9464 0 0 2 12 52 52 52 2052 2052 104 105 YAAAAA AAOAAA AAAAxx +1093 9465 1 1 3 13 93 93 1093 1093 1093 186 187 BQAAAA BAOAAA HHHHxx +5817 9466 1 1 7 17 17 817 1817 817 5817 34 35 TPAAAA CAOAAA OOOOxx +1559 9467 1 3 9 19 59 559 1559 1559 1559 118 119 ZHAAAA DAOAAA VVVVxx +8405 9468 1 1 5 5 5 405 405 3405 8405 10 11 HLAAAA EAOAAA AAAAxx +9962 9469 0 2 2 2 62 962 1962 4962 9962 124 125 ETAAAA FAOAAA HHHHxx +9461 9470 1 1 1 1 61 461 1461 4461 9461 122 123 XZAAAA GAOAAA OOOOxx +3028 9471 0 0 8 8 28 28 1028 3028 3028 56 57 MMAAAA HAOAAA VVVVxx +6814 9472 0 2 4 14 14 814 814 1814 6814 28 29 CCAAAA IAOAAA AAAAxx +9587 9473 1 3 7 7 87 587 1587 4587 9587 174 175 TEAAAA JAOAAA HHHHxx +6863 9474 1 3 3 3 63 863 863 1863 6863 126 127 ZDAAAA KAOAAA OOOOxx +4963 9475 1 3 3 3 63 963 963 4963 4963 126 127 XIAAAA LAOAAA VVVVxx +7811 9476 1 3 1 11 11 811 1811 2811 7811 22 23 LOAAAA MAOAAA AAAAxx +7608 9477 0 0 8 8 8 608 1608 2608 7608 16 17 QGAAAA NAOAAA HHHHxx +5321 9478 1 1 1 1 21 321 1321 321 5321 42 43 RWAAAA OAOAAA OOOOxx +9971 9479 1 3 1 11 71 971 1971 4971 9971 142 143 NTAAAA PAOAAA VVVVxx +6161 9480 1 1 1 1 61 161 161 1161 6161 122 123 ZCAAAA QAOAAA AAAAxx +2181 9481 1 1 1 1 81 181 181 2181 2181 162 163 XFAAAA RAOAAA HHHHxx +3828 9482 0 0 8 8 28 828 1828 3828 3828 56 57 GRAAAA SAOAAA OOOOxx +348 9483 0 0 8 8 48 348 348 348 348 96 97 KNAAAA TAOAAA VVVVxx +5459 9484 1 3 9 19 59 459 1459 459 5459 118 119 ZBAAAA UAOAAA AAAAxx +9406 9485 0 2 6 6 6 406 1406 4406 9406 12 13 UXAAAA VAOAAA HHHHxx +9852 9486 0 0 2 12 52 852 1852 4852 9852 104 105 
YOAAAA WAOAAA OOOOxx +3095 9487 1 3 5 15 95 95 1095 3095 3095 190 191 BPAAAA XAOAAA VVVVxx +5597 9488 1 1 7 17 97 597 1597 597 5597 194 195 HHAAAA YAOAAA AAAAxx +8841 9489 1 1 1 1 41 841 841 3841 8841 82 83 BCAAAA ZAOAAA HHHHxx +3536 9490 0 0 6 16 36 536 1536 3536 3536 72 73 AGAAAA ABOAAA OOOOxx +4009 9491 1 1 9 9 9 9 9 4009 4009 18 19 FYAAAA BBOAAA VVVVxx +7366 9492 0 2 6 6 66 366 1366 2366 7366 132 133 IXAAAA CBOAAA AAAAxx +7327 9493 1 3 7 7 27 327 1327 2327 7327 54 55 VVAAAA DBOAAA HHHHxx +1613 9494 1 1 3 13 13 613 1613 1613 1613 26 27 BKAAAA EBOAAA OOOOxx +8619 9495 1 3 9 19 19 619 619 3619 8619 38 39 NTAAAA FBOAAA VVVVxx +4880 9496 0 0 0 0 80 880 880 4880 4880 160 161 SFAAAA GBOAAA AAAAxx +1552 9497 0 0 2 12 52 552 1552 1552 1552 104 105 SHAAAA HBOAAA HHHHxx +7636 9498 0 0 6 16 36 636 1636 2636 7636 72 73 SHAAAA IBOAAA OOOOxx +8397 9499 1 1 7 17 97 397 397 3397 8397 194 195 ZKAAAA JBOAAA VVVVxx +6224 9500 0 0 4 4 24 224 224 1224 6224 48 49 KFAAAA KBOAAA AAAAxx +9102 9501 0 2 2 2 2 102 1102 4102 9102 4 5 CMAAAA LBOAAA HHHHxx +7906 9502 0 2 6 6 6 906 1906 2906 7906 12 13 CSAAAA MBOAAA OOOOxx +9467 9503 1 3 7 7 67 467 1467 4467 9467 134 135 DAAAAA NBOAAA VVVVxx +828 9504 0 0 8 8 28 828 828 828 828 56 57 WFAAAA OBOAAA AAAAxx +9585 9505 1 1 5 5 85 585 1585 4585 9585 170 171 REAAAA PBOAAA HHHHxx +925 9506 1 1 5 5 25 925 925 925 925 50 51 PJAAAA QBOAAA OOOOxx +7375 9507 1 3 5 15 75 375 1375 2375 7375 150 151 RXAAAA RBOAAA VVVVxx +4027 9508 1 3 7 7 27 27 27 4027 4027 54 55 XYAAAA SBOAAA AAAAxx +766 9509 0 2 6 6 66 766 766 766 766 132 133 MDAAAA TBOAAA HHHHxx +5633 9510 1 1 3 13 33 633 1633 633 5633 66 67 RIAAAA UBOAAA OOOOxx +5648 9511 0 0 8 8 48 648 1648 648 5648 96 97 GJAAAA VBOAAA VVVVxx +148 9512 0 0 8 8 48 148 148 148 148 96 97 SFAAAA WBOAAA AAAAxx +2072 9513 0 0 2 12 72 72 72 2072 2072 144 145 SBAAAA XBOAAA HHHHxx +431 9514 1 3 1 11 31 431 431 431 431 62 63 PQAAAA YBOAAA OOOOxx +1711 9515 1 3 1 11 11 711 1711 1711 1711 22 23 VNAAAA ZBOAAA VVVVxx +9378 9516 0 2 8 18 78 378 1378 4378 9378 156 157 SWAAAA ACOAAA AAAAxx +6776 9517 0 0 6 16 76 776 776 1776 6776 152 153 QAAAAA BCOAAA HHHHxx +6842 9518 0 2 2 2 42 842 842 1842 6842 84 85 EDAAAA CCOAAA OOOOxx +2656 9519 0 0 6 16 56 656 656 2656 2656 112 113 EYAAAA DCOAAA VVVVxx +3116 9520 0 0 6 16 16 116 1116 3116 3116 32 33 WPAAAA ECOAAA AAAAxx +7904 9521 0 0 4 4 4 904 1904 2904 7904 8 9 ASAAAA FCOAAA HHHHxx +3529 9522 1 1 9 9 29 529 1529 3529 3529 58 59 TFAAAA GCOAAA OOOOxx +3240 9523 0 0 0 0 40 240 1240 3240 3240 80 81 QUAAAA HCOAAA VVVVxx +5801 9524 1 1 1 1 1 801 1801 801 5801 2 3 DPAAAA ICOAAA AAAAxx +4090 9525 0 2 0 10 90 90 90 4090 4090 180 181 IBAAAA JCOAAA HHHHxx +7687 9526 1 3 7 7 87 687 1687 2687 7687 174 175 RJAAAA KCOAAA OOOOxx +9711 9527 1 3 1 11 11 711 1711 4711 9711 22 23 NJAAAA LCOAAA VVVVxx +4760 9528 0 0 0 0 60 760 760 4760 4760 120 121 CBAAAA MCOAAA AAAAxx +5524 9529 0 0 4 4 24 524 1524 524 5524 48 49 MEAAAA NCOAAA HHHHxx +2251 9530 1 3 1 11 51 251 251 2251 2251 102 103 PIAAAA OCOAAA OOOOxx +1511 9531 1 3 1 11 11 511 1511 1511 1511 22 23 DGAAAA PCOAAA VVVVxx +5991 9532 1 3 1 11 91 991 1991 991 5991 182 183 LWAAAA QCOAAA AAAAxx +7808 9533 0 0 8 8 8 808 1808 2808 7808 16 17 IOAAAA RCOAAA HHHHxx +8708 9534 0 0 8 8 8 708 708 3708 8708 16 17 YWAAAA SCOAAA OOOOxx +8939 9535 1 3 9 19 39 939 939 3939 8939 78 79 VFAAAA TCOAAA VVVVxx +4295 9536 1 3 5 15 95 295 295 4295 4295 190 191 FJAAAA UCOAAA AAAAxx +5905 9537 1 1 5 5 5 905 1905 905 5905 10 11 DTAAAA VCOAAA HHHHxx +2649 9538 1 1 9 9 49 649 649 2649 2649 98 99 XXAAAA WCOAAA OOOOxx 
+2347 9539 1 3 7 7 47 347 347 2347 2347 94 95 HMAAAA XCOAAA VVVVxx +6339 9540 1 3 9 19 39 339 339 1339 6339 78 79 VJAAAA YCOAAA AAAAxx +292 9541 0 0 2 12 92 292 292 292 292 184 185 GLAAAA ZCOAAA HHHHxx +9314 9542 0 2 4 14 14 314 1314 4314 9314 28 29 GUAAAA ADOAAA OOOOxx +6893 9543 1 1 3 13 93 893 893 1893 6893 186 187 DFAAAA BDOAAA VVVVxx +3970 9544 0 2 0 10 70 970 1970 3970 3970 140 141 SWAAAA CDOAAA AAAAxx +1652 9545 0 0 2 12 52 652 1652 1652 1652 104 105 OLAAAA DDOAAA HHHHxx +4326 9546 0 2 6 6 26 326 326 4326 4326 52 53 KKAAAA EDOAAA OOOOxx +7881 9547 1 1 1 1 81 881 1881 2881 7881 162 163 DRAAAA FDOAAA VVVVxx +5291 9548 1 3 1 11 91 291 1291 291 5291 182 183 NVAAAA GDOAAA AAAAxx +957 9549 1 1 7 17 57 957 957 957 957 114 115 VKAAAA HDOAAA HHHHxx +2313 9550 1 1 3 13 13 313 313 2313 2313 26 27 ZKAAAA IDOAAA OOOOxx +5463 9551 1 3 3 3 63 463 1463 463 5463 126 127 DCAAAA JDOAAA VVVVxx +1268 9552 0 0 8 8 68 268 1268 1268 1268 136 137 UWAAAA KDOAAA AAAAxx +5028 9553 0 0 8 8 28 28 1028 28 5028 56 57 KLAAAA LDOAAA HHHHxx +656 9554 0 0 6 16 56 656 656 656 656 112 113 GZAAAA MDOAAA OOOOxx +9274 9555 0 2 4 14 74 274 1274 4274 9274 148 149 SSAAAA NDOAAA VVVVxx +8217 9556 1 1 7 17 17 217 217 3217 8217 34 35 BEAAAA ODOAAA AAAAxx +2175 9557 1 3 5 15 75 175 175 2175 2175 150 151 RFAAAA PDOAAA HHHHxx +6028 9558 0 0 8 8 28 28 28 1028 6028 56 57 WXAAAA QDOAAA OOOOxx +7584 9559 0 0 4 4 84 584 1584 2584 7584 168 169 SFAAAA RDOAAA VVVVxx +4114 9560 0 2 4 14 14 114 114 4114 4114 28 29 GCAAAA SDOAAA AAAAxx +8894 9561 0 2 4 14 94 894 894 3894 8894 188 189 CEAAAA TDOAAA HHHHxx +781 9562 1 1 1 1 81 781 781 781 781 162 163 BEAAAA UDOAAA OOOOxx +133 9563 1 1 3 13 33 133 133 133 133 66 67 DFAAAA VDOAAA VVVVxx +7572 9564 0 0 2 12 72 572 1572 2572 7572 144 145 GFAAAA WDOAAA AAAAxx +8514 9565 0 2 4 14 14 514 514 3514 8514 28 29 MPAAAA XDOAAA HHHHxx +3352 9566 0 0 2 12 52 352 1352 3352 3352 104 105 YYAAAA YDOAAA OOOOxx +8098 9567 0 2 8 18 98 98 98 3098 8098 196 197 MZAAAA ZDOAAA VVVVxx +9116 9568 0 0 6 16 16 116 1116 4116 9116 32 33 QMAAAA AEOAAA AAAAxx +9444 9569 0 0 4 4 44 444 1444 4444 9444 88 89 GZAAAA BEOAAA HHHHxx +2590 9570 0 2 0 10 90 590 590 2590 2590 180 181 QVAAAA CEOAAA OOOOxx +7302 9571 0 2 2 2 2 302 1302 2302 7302 4 5 WUAAAA DEOAAA VVVVxx +7444 9572 0 0 4 4 44 444 1444 2444 7444 88 89 IAAAAA EEOAAA AAAAxx +8748 9573 0 0 8 8 48 748 748 3748 8748 96 97 MYAAAA FEOAAA HHHHxx +7615 9574 1 3 5 15 15 615 1615 2615 7615 30 31 XGAAAA GEOAAA OOOOxx +6090 9575 0 2 0 10 90 90 90 1090 6090 180 181 GAAAAA HEOAAA VVVVxx +1529 9576 1 1 9 9 29 529 1529 1529 1529 58 59 VGAAAA IEOAAA AAAAxx +9398 9577 0 2 8 18 98 398 1398 4398 9398 196 197 MXAAAA JEOAAA HHHHxx +6114 9578 0 2 4 14 14 114 114 1114 6114 28 29 EBAAAA KEOAAA OOOOxx +2736 9579 0 0 6 16 36 736 736 2736 2736 72 73 GBAAAA LEOAAA VVVVxx +468 9580 0 0 8 8 68 468 468 468 468 136 137 ASAAAA MEOAAA AAAAxx +1487 9581 1 3 7 7 87 487 1487 1487 1487 174 175 FFAAAA NEOAAA HHHHxx +4784 9582 0 0 4 4 84 784 784 4784 4784 168 169 ACAAAA OEOAAA OOOOxx +6731 9583 1 3 1 11 31 731 731 1731 6731 62 63 XYAAAA PEOAAA VVVVxx +3328 9584 0 0 8 8 28 328 1328 3328 3328 56 57 AYAAAA QEOAAA AAAAxx +6891 9585 1 3 1 11 91 891 891 1891 6891 182 183 BFAAAA REOAAA HHHHxx +8039 9586 1 3 9 19 39 39 39 3039 8039 78 79 FXAAAA SEOAAA OOOOxx +4064 9587 0 0 4 4 64 64 64 4064 4064 128 129 IAAAAA TEOAAA VVVVxx +542 9588 0 2 2 2 42 542 542 542 542 84 85 WUAAAA UEOAAA AAAAxx +1039 9589 1 3 9 19 39 39 1039 1039 1039 78 79 ZNAAAA VEOAAA HHHHxx +5603 9590 1 3 3 3 3 603 1603 603 5603 6 7 NHAAAA WEOAAA OOOOxx +6641 
9591 1 1 1 1 41 641 641 1641 6641 82 83 LVAAAA XEOAAA VVVVxx +6307 9592 1 3 7 7 7 307 307 1307 6307 14 15 PIAAAA YEOAAA AAAAxx +5354 9593 0 2 4 14 54 354 1354 354 5354 108 109 YXAAAA ZEOAAA HHHHxx +7878 9594 0 2 8 18 78 878 1878 2878 7878 156 157 ARAAAA AFOAAA OOOOxx +6391 9595 1 3 1 11 91 391 391 1391 6391 182 183 VLAAAA BFOAAA VVVVxx +4575 9596 1 3 5 15 75 575 575 4575 4575 150 151 ZTAAAA CFOAAA AAAAxx +6644 9597 0 0 4 4 44 644 644 1644 6644 88 89 OVAAAA DFOAAA HHHHxx +5207 9598 1 3 7 7 7 207 1207 207 5207 14 15 HSAAAA EFOAAA OOOOxx +1736 9599 0 0 6 16 36 736 1736 1736 1736 72 73 UOAAAA FFOAAA VVVVxx +3547 9600 1 3 7 7 47 547 1547 3547 3547 94 95 LGAAAA GFOAAA AAAAxx +6647 9601 1 3 7 7 47 647 647 1647 6647 94 95 RVAAAA HFOAAA HHHHxx +4107 9602 1 3 7 7 7 107 107 4107 4107 14 15 ZBAAAA IFOAAA OOOOxx +8125 9603 1 1 5 5 25 125 125 3125 8125 50 51 NAAAAA JFOAAA VVVVxx +9223 9604 1 3 3 3 23 223 1223 4223 9223 46 47 TQAAAA KFOAAA AAAAxx +6903 9605 1 3 3 3 3 903 903 1903 6903 6 7 NFAAAA LFOAAA HHHHxx +3639 9606 1 3 9 19 39 639 1639 3639 3639 78 79 ZJAAAA MFOAAA OOOOxx +9606 9607 0 2 6 6 6 606 1606 4606 9606 12 13 MFAAAA NFOAAA VVVVxx +3232 9608 0 0 2 12 32 232 1232 3232 3232 64 65 IUAAAA OFOAAA AAAAxx +2063 9609 1 3 3 3 63 63 63 2063 2063 126 127 JBAAAA PFOAAA HHHHxx +3731 9610 1 3 1 11 31 731 1731 3731 3731 62 63 NNAAAA QFOAAA OOOOxx +2558 9611 0 2 8 18 58 558 558 2558 2558 116 117 KUAAAA RFOAAA VVVVxx +2357 9612 1 1 7 17 57 357 357 2357 2357 114 115 RMAAAA SFOAAA AAAAxx +6008 9613 0 0 8 8 8 8 8 1008 6008 16 17 CXAAAA TFOAAA HHHHxx +8246 9614 0 2 6 6 46 246 246 3246 8246 92 93 EFAAAA UFOAAA OOOOxx +8220 9615 0 0 0 0 20 220 220 3220 8220 40 41 EEAAAA VFOAAA VVVVxx +1075 9616 1 3 5 15 75 75 1075 1075 1075 150 151 JPAAAA WFOAAA AAAAxx +2410 9617 0 2 0 10 10 410 410 2410 2410 20 21 SOAAAA XFOAAA HHHHxx +3253 9618 1 1 3 13 53 253 1253 3253 3253 106 107 DVAAAA YFOAAA OOOOxx +4370 9619 0 2 0 10 70 370 370 4370 4370 140 141 CMAAAA ZFOAAA VVVVxx +8426 9620 0 2 6 6 26 426 426 3426 8426 52 53 CMAAAA AGOAAA AAAAxx +2262 9621 0 2 2 2 62 262 262 2262 2262 124 125 AJAAAA BGOAAA HHHHxx +4149 9622 1 1 9 9 49 149 149 4149 4149 98 99 PDAAAA CGOAAA OOOOxx +2732 9623 0 0 2 12 32 732 732 2732 2732 64 65 CBAAAA DGOAAA VVVVxx +8606 9624 0 2 6 6 6 606 606 3606 8606 12 13 ATAAAA EGOAAA AAAAxx +6311 9625 1 3 1 11 11 311 311 1311 6311 22 23 TIAAAA FGOAAA HHHHxx +7223 9626 1 3 3 3 23 223 1223 2223 7223 46 47 VRAAAA GGOAAA OOOOxx +3054 9627 0 2 4 14 54 54 1054 3054 3054 108 109 MNAAAA HGOAAA VVVVxx +3952 9628 0 0 2 12 52 952 1952 3952 3952 104 105 AWAAAA IGOAAA AAAAxx +8252 9629 0 0 2 12 52 252 252 3252 8252 104 105 KFAAAA JGOAAA HHHHxx +6020 9630 0 0 0 0 20 20 20 1020 6020 40 41 OXAAAA KGOAAA OOOOxx +3846 9631 0 2 6 6 46 846 1846 3846 3846 92 93 YRAAAA LGOAAA VVVVxx +3755 9632 1 3 5 15 55 755 1755 3755 3755 110 111 LOAAAA MGOAAA AAAAxx +3765 9633 1 1 5 5 65 765 1765 3765 3765 130 131 VOAAAA NGOAAA HHHHxx +3434 9634 0 2 4 14 34 434 1434 3434 3434 68 69 CCAAAA OGOAAA OOOOxx +1381 9635 1 1 1 1 81 381 1381 1381 1381 162 163 DBAAAA PGOAAA VVVVxx +287 9636 1 3 7 7 87 287 287 287 287 174 175 BLAAAA QGOAAA AAAAxx +4476 9637 0 0 6 16 76 476 476 4476 4476 152 153 EQAAAA RGOAAA HHHHxx +2916 9638 0 0 6 16 16 916 916 2916 2916 32 33 EIAAAA SGOAAA OOOOxx +4517 9639 1 1 7 17 17 517 517 4517 4517 34 35 TRAAAA TGOAAA VVVVxx +4561 9640 1 1 1 1 61 561 561 4561 4561 122 123 LTAAAA UGOAAA AAAAxx +5106 9641 0 2 6 6 6 106 1106 106 5106 12 13 KOAAAA VGOAAA HHHHxx +2077 9642 1 1 7 17 77 77 77 2077 2077 154 155 XBAAAA WGOAAA OOOOxx +5269 9643 1 1 9 
9 69 269 1269 269 5269 138 139 RUAAAA XGOAAA VVVVxx +5688 9644 0 0 8 8 88 688 1688 688 5688 176 177 UKAAAA YGOAAA AAAAxx +8831 9645 1 3 1 11 31 831 831 3831 8831 62 63 RBAAAA ZGOAAA HHHHxx +3867 9646 1 3 7 7 67 867 1867 3867 3867 134 135 TSAAAA AHOAAA OOOOxx +6062 9647 0 2 2 2 62 62 62 1062 6062 124 125 EZAAAA BHOAAA VVVVxx +8460 9648 0 0 0 0 60 460 460 3460 8460 120 121 KNAAAA CHOAAA AAAAxx +3138 9649 0 2 8 18 38 138 1138 3138 3138 76 77 SQAAAA DHOAAA HHHHxx +3173 9650 1 1 3 13 73 173 1173 3173 3173 146 147 BSAAAA EHOAAA OOOOxx +7018 9651 0 2 8 18 18 18 1018 2018 7018 36 37 YJAAAA FHOAAA VVVVxx +4836 9652 0 0 6 16 36 836 836 4836 4836 72 73 AEAAAA GHOAAA AAAAxx +1007 9653 1 3 7 7 7 7 1007 1007 1007 14 15 TMAAAA HHOAAA HHHHxx +658 9654 0 2 8 18 58 658 658 658 658 116 117 IZAAAA IHOAAA OOOOxx +5205 9655 1 1 5 5 5 205 1205 205 5205 10 11 FSAAAA JHOAAA VVVVxx +5805 9656 1 1 5 5 5 805 1805 805 5805 10 11 HPAAAA KHOAAA AAAAxx +5959 9657 1 3 9 19 59 959 1959 959 5959 118 119 FVAAAA LHOAAA HHHHxx +2863 9658 1 3 3 3 63 863 863 2863 2863 126 127 DGAAAA MHOAAA OOOOxx +7272 9659 0 0 2 12 72 272 1272 2272 7272 144 145 STAAAA NHOAAA VVVVxx +8437 9660 1 1 7 17 37 437 437 3437 8437 74 75 NMAAAA OHOAAA AAAAxx +4900 9661 0 0 0 0 0 900 900 4900 4900 0 1 MGAAAA PHOAAA HHHHxx +890 9662 0 2 0 10 90 890 890 890 890 180 181 GIAAAA QHOAAA OOOOxx +3530 9663 0 2 0 10 30 530 1530 3530 3530 60 61 UFAAAA RHOAAA VVVVxx +6209 9664 1 1 9 9 9 209 209 1209 6209 18 19 VEAAAA SHOAAA AAAAxx +4595 9665 1 3 5 15 95 595 595 4595 4595 190 191 TUAAAA THOAAA HHHHxx +5982 9666 0 2 2 2 82 982 1982 982 5982 164 165 CWAAAA UHOAAA OOOOxx +1101 9667 1 1 1 1 1 101 1101 1101 1101 2 3 JQAAAA VHOAAA VVVVxx +9555 9668 1 3 5 15 55 555 1555 4555 9555 110 111 NDAAAA WHOAAA AAAAxx +1918 9669 0 2 8 18 18 918 1918 1918 1918 36 37 UVAAAA XHOAAA HHHHxx +3527 9670 1 3 7 7 27 527 1527 3527 3527 54 55 RFAAAA YHOAAA OOOOxx +7309 9671 1 1 9 9 9 309 1309 2309 7309 18 19 DVAAAA ZHOAAA VVVVxx +8213 9672 1 1 3 13 13 213 213 3213 8213 26 27 XDAAAA AIOAAA AAAAxx +306 9673 0 2 6 6 6 306 306 306 306 12 13 ULAAAA BIOAAA HHHHxx +845 9674 1 1 5 5 45 845 845 845 845 90 91 NGAAAA CIOAAA OOOOxx +16 9675 0 0 6 16 16 16 16 16 16 32 33 QAAAAA DIOAAA VVVVxx +437 9676 1 1 7 17 37 437 437 437 437 74 75 VQAAAA EIOAAA AAAAxx +9518 9677 0 2 8 18 18 518 1518 4518 9518 36 37 CCAAAA FIOAAA HHHHxx +2142 9678 0 2 2 2 42 142 142 2142 2142 84 85 KEAAAA GIOAAA OOOOxx +8121 9679 1 1 1 1 21 121 121 3121 8121 42 43 JAAAAA HIOAAA VVVVxx +7354 9680 0 2 4 14 54 354 1354 2354 7354 108 109 WWAAAA IIOAAA AAAAxx +1720 9681 0 0 0 0 20 720 1720 1720 1720 40 41 EOAAAA JIOAAA HHHHxx +6078 9682 0 2 8 18 78 78 78 1078 6078 156 157 UZAAAA KIOAAA OOOOxx +5929 9683 1 1 9 9 29 929 1929 929 5929 58 59 BUAAAA LIOAAA VVVVxx +3856 9684 0 0 6 16 56 856 1856 3856 3856 112 113 ISAAAA MIOAAA AAAAxx +3424 9685 0 0 4 4 24 424 1424 3424 3424 48 49 SBAAAA NIOAAA HHHHxx +1712 9686 0 0 2 12 12 712 1712 1712 1712 24 25 WNAAAA OIOAAA OOOOxx +2340 9687 0 0 0 0 40 340 340 2340 2340 80 81 AMAAAA PIOAAA VVVVxx +5570 9688 0 2 0 10 70 570 1570 570 5570 140 141 GGAAAA QIOAAA AAAAxx +8734 9689 0 2 4 14 34 734 734 3734 8734 68 69 YXAAAA RIOAAA HHHHxx +6077 9690 1 1 7 17 77 77 77 1077 6077 154 155 TZAAAA SIOAAA OOOOxx +2960 9691 0 0 0 0 60 960 960 2960 2960 120 121 WJAAAA TIOAAA VVVVxx +5062 9692 0 2 2 2 62 62 1062 62 5062 124 125 SMAAAA UIOAAA AAAAxx +1532 9693 0 0 2 12 32 532 1532 1532 1532 64 65 YGAAAA VIOAAA HHHHxx +8298 9694 0 2 8 18 98 298 298 3298 8298 196 197 EHAAAA WIOAAA OOOOxx +2496 9695 0 0 6 16 96 496 496 2496 2496 
192 193 ASAAAA XIOAAA VVVVxx +8412 9696 0 0 2 12 12 412 412 3412 8412 24 25 OLAAAA YIOAAA AAAAxx +724 9697 0 0 4 4 24 724 724 724 724 48 49 WBAAAA ZIOAAA HHHHxx +1019 9698 1 3 9 19 19 19 1019 1019 1019 38 39 FNAAAA AJOAAA OOOOxx +6265 9699 1 1 5 5 65 265 265 1265 6265 130 131 ZGAAAA BJOAAA VVVVxx +740 9700 0 0 0 0 40 740 740 740 740 80 81 MCAAAA CJOAAA AAAAxx +8495 9701 1 3 5 15 95 495 495 3495 8495 190 191 TOAAAA DJOAAA HHHHxx +6983 9702 1 3 3 3 83 983 983 1983 6983 166 167 PIAAAA EJOAAA OOOOxx +991 9703 1 3 1 11 91 991 991 991 991 182 183 DMAAAA FJOAAA VVVVxx +3189 9704 1 1 9 9 89 189 1189 3189 3189 178 179 RSAAAA GJOAAA AAAAxx +4487 9705 1 3 7 7 87 487 487 4487 4487 174 175 PQAAAA HJOAAA HHHHxx +5554 9706 0 2 4 14 54 554 1554 554 5554 108 109 QFAAAA IJOAAA OOOOxx +1258 9707 0 2 8 18 58 258 1258 1258 1258 116 117 KWAAAA JJOAAA VVVVxx +5359 9708 1 3 9 19 59 359 1359 359 5359 118 119 DYAAAA KJOAAA AAAAxx +2709 9709 1 1 9 9 9 709 709 2709 2709 18 19 FAAAAA LJOAAA HHHHxx +361 9710 1 1 1 1 61 361 361 361 361 122 123 XNAAAA MJOAAA OOOOxx +4028 9711 0 0 8 8 28 28 28 4028 4028 56 57 YYAAAA NJOAAA VVVVxx +3735 9712 1 3 5 15 35 735 1735 3735 3735 70 71 RNAAAA OJOAAA AAAAxx +4427 9713 1 3 7 7 27 427 427 4427 4427 54 55 HOAAAA PJOAAA HHHHxx +7540 9714 0 0 0 0 40 540 1540 2540 7540 80 81 AEAAAA QJOAAA OOOOxx +3569 9715 1 1 9 9 69 569 1569 3569 3569 138 139 HHAAAA RJOAAA VVVVxx +1916 9716 0 0 6 16 16 916 1916 1916 1916 32 33 SVAAAA SJOAAA AAAAxx +7596 9717 0 0 6 16 96 596 1596 2596 7596 192 193 EGAAAA TJOAAA HHHHxx +9721 9718 1 1 1 1 21 721 1721 4721 9721 42 43 XJAAAA UJOAAA OOOOxx +4429 9719 1 1 9 9 29 429 429 4429 4429 58 59 JOAAAA VJOAAA VVVVxx +3471 9720 1 3 1 11 71 471 1471 3471 3471 142 143 NDAAAA WJOAAA AAAAxx +1157 9721 1 1 7 17 57 157 1157 1157 1157 114 115 NSAAAA XJOAAA HHHHxx +5700 9722 0 0 0 0 0 700 1700 700 5700 0 1 GLAAAA YJOAAA OOOOxx +4431 9723 1 3 1 11 31 431 431 4431 4431 62 63 LOAAAA ZJOAAA VVVVxx +9409 9724 1 1 9 9 9 409 1409 4409 9409 18 19 XXAAAA AKOAAA AAAAxx +8752 9725 0 0 2 12 52 752 752 3752 8752 104 105 QYAAAA BKOAAA HHHHxx +9484 9726 0 0 4 4 84 484 1484 4484 9484 168 169 UAAAAA CKOAAA OOOOxx +1266 9727 0 2 6 6 66 266 1266 1266 1266 132 133 SWAAAA DKOAAA VVVVxx +9097 9728 1 1 7 17 97 97 1097 4097 9097 194 195 XLAAAA EKOAAA AAAAxx +3068 9729 0 0 8 8 68 68 1068 3068 3068 136 137 AOAAAA FKOAAA HHHHxx +5490 9730 0 2 0 10 90 490 1490 490 5490 180 181 EDAAAA GKOAAA OOOOxx +1375 9731 1 3 5 15 75 375 1375 1375 1375 150 151 XAAAAA HKOAAA VVVVxx +2487 9732 1 3 7 7 87 487 487 2487 2487 174 175 RRAAAA IKOAAA AAAAxx +1705 9733 1 1 5 5 5 705 1705 1705 1705 10 11 PNAAAA JKOAAA HHHHxx +1571 9734 1 3 1 11 71 571 1571 1571 1571 142 143 LIAAAA KKOAAA OOOOxx +4005 9735 1 1 5 5 5 5 5 4005 4005 10 11 BYAAAA LKOAAA VVVVxx +5497 9736 1 1 7 17 97 497 1497 497 5497 194 195 LDAAAA MKOAAA AAAAxx +2144 9737 0 0 4 4 44 144 144 2144 2144 88 89 MEAAAA NKOAAA HHHHxx +4052 9738 0 0 2 12 52 52 52 4052 4052 104 105 WZAAAA OKOAAA OOOOxx +4942 9739 0 2 2 2 42 942 942 4942 4942 84 85 CIAAAA PKOAAA VVVVxx +5504 9740 0 0 4 4 4 504 1504 504 5504 8 9 SDAAAA QKOAAA AAAAxx +2913 9741 1 1 3 13 13 913 913 2913 2913 26 27 BIAAAA RKOAAA HHHHxx +5617 9742 1 1 7 17 17 617 1617 617 5617 34 35 BIAAAA SKOAAA OOOOxx +8179 9743 1 3 9 19 79 179 179 3179 8179 158 159 PCAAAA TKOAAA VVVVxx +9437 9744 1 1 7 17 37 437 1437 4437 9437 74 75 ZYAAAA UKOAAA AAAAxx +1821 9745 1 1 1 1 21 821 1821 1821 1821 42 43 BSAAAA VKOAAA HHHHxx +5737 9746 1 1 7 17 37 737 1737 737 5737 74 75 RMAAAA WKOAAA OOOOxx +4207 9747 1 3 7 7 7 207 207 4207 4207 14 
15 VFAAAA XKOAAA VVVVxx +4815 9748 1 3 5 15 15 815 815 4815 4815 30 31 FDAAAA YKOAAA AAAAxx +8707 9749 1 3 7 7 7 707 707 3707 8707 14 15 XWAAAA ZKOAAA HHHHxx +5970 9750 0 2 0 10 70 970 1970 970 5970 140 141 QVAAAA ALOAAA OOOOxx +5501 9751 1 1 1 1 1 501 1501 501 5501 2 3 PDAAAA BLOAAA VVVVxx +4013 9752 1 1 3 13 13 13 13 4013 4013 26 27 JYAAAA CLOAAA AAAAxx +9235 9753 1 3 5 15 35 235 1235 4235 9235 70 71 FRAAAA DLOAAA HHHHxx +2503 9754 1 3 3 3 3 503 503 2503 2503 6 7 HSAAAA ELOAAA OOOOxx +9181 9755 1 1 1 1 81 181 1181 4181 9181 162 163 DPAAAA FLOAAA VVVVxx +2289 9756 1 1 9 9 89 289 289 2289 2289 178 179 BKAAAA GLOAAA AAAAxx +4256 9757 0 0 6 16 56 256 256 4256 4256 112 113 SHAAAA HLOAAA HHHHxx +191 9758 1 3 1 11 91 191 191 191 191 182 183 JHAAAA ILOAAA OOOOxx +9655 9759 1 3 5 15 55 655 1655 4655 9655 110 111 JHAAAA JLOAAA VVVVxx +8615 9760 1 3 5 15 15 615 615 3615 8615 30 31 JTAAAA KLOAAA AAAAxx +3011 9761 1 3 1 11 11 11 1011 3011 3011 22 23 VLAAAA LLOAAA HHHHxx +6376 9762 0 0 6 16 76 376 376 1376 6376 152 153 GLAAAA MLOAAA OOOOxx +68 9763 0 0 8 8 68 68 68 68 68 136 137 QCAAAA NLOAAA VVVVxx +4720 9764 0 0 0 0 20 720 720 4720 4720 40 41 OZAAAA OLOAAA AAAAxx +6848 9765 0 0 8 8 48 848 848 1848 6848 96 97 KDAAAA PLOAAA HHHHxx +456 9766 0 0 6 16 56 456 456 456 456 112 113 ORAAAA QLOAAA OOOOxx +5887 9767 1 3 7 7 87 887 1887 887 5887 174 175 LSAAAA RLOAAA VVVVxx +9249 9768 1 1 9 9 49 249 1249 4249 9249 98 99 TRAAAA SLOAAA AAAAxx +4041 9769 1 1 1 1 41 41 41 4041 4041 82 83 LZAAAA TLOAAA HHHHxx +2304 9770 0 0 4 4 4 304 304 2304 2304 8 9 QKAAAA ULOAAA OOOOxx +8763 9771 1 3 3 3 63 763 763 3763 8763 126 127 BZAAAA VLOAAA VVVVxx +2115 9772 1 3 5 15 15 115 115 2115 2115 30 31 JDAAAA WLOAAA AAAAxx +8014 9773 0 2 4 14 14 14 14 3014 8014 28 29 GWAAAA XLOAAA HHHHxx +9895 9774 1 3 5 15 95 895 1895 4895 9895 190 191 PQAAAA YLOAAA OOOOxx +671 9775 1 3 1 11 71 671 671 671 671 142 143 VZAAAA ZLOAAA VVVVxx +3774 9776 0 2 4 14 74 774 1774 3774 3774 148 149 EPAAAA AMOAAA AAAAxx +134 9777 0 2 4 14 34 134 134 134 134 68 69 EFAAAA BMOAAA HHHHxx +534 9778 0 2 4 14 34 534 534 534 534 68 69 OUAAAA CMOAAA OOOOxx +7308 9779 0 0 8 8 8 308 1308 2308 7308 16 17 CVAAAA DMOAAA VVVVxx +5244 9780 0 0 4 4 44 244 1244 244 5244 88 89 STAAAA EMOAAA AAAAxx +1512 9781 0 0 2 12 12 512 1512 1512 1512 24 25 EGAAAA FMOAAA HHHHxx +8960 9782 0 0 0 0 60 960 960 3960 8960 120 121 QGAAAA GMOAAA OOOOxx +6602 9783 0 2 2 2 2 602 602 1602 6602 4 5 YTAAAA HMOAAA VVVVxx +593 9784 1 1 3 13 93 593 593 593 593 186 187 VWAAAA IMOAAA AAAAxx +2353 9785 1 1 3 13 53 353 353 2353 2353 106 107 NMAAAA JMOAAA HHHHxx +4139 9786 1 3 9 19 39 139 139 4139 4139 78 79 FDAAAA KMOAAA OOOOxx +3063 9787 1 3 3 3 63 63 1063 3063 3063 126 127 VNAAAA LMOAAA VVVVxx +652 9788 0 0 2 12 52 652 652 652 652 104 105 CZAAAA MMOAAA AAAAxx +7405 9789 1 1 5 5 5 405 1405 2405 7405 10 11 VYAAAA NMOAAA HHHHxx +3034 9790 0 2 4 14 34 34 1034 3034 3034 68 69 SMAAAA OMOAAA OOOOxx +4614 9791 0 2 4 14 14 614 614 4614 4614 28 29 MVAAAA PMOAAA VVVVxx +2351 9792 1 3 1 11 51 351 351 2351 2351 102 103 LMAAAA QMOAAA AAAAxx +8208 9793 0 0 8 8 8 208 208 3208 8208 16 17 SDAAAA RMOAAA HHHHxx +5475 9794 1 3 5 15 75 475 1475 475 5475 150 151 PCAAAA SMOAAA OOOOxx +6875 9795 1 3 5 15 75 875 875 1875 6875 150 151 LEAAAA TMOAAA VVVVxx +563 9796 1 3 3 3 63 563 563 563 563 126 127 RVAAAA UMOAAA AAAAxx +3346 9797 0 2 6 6 46 346 1346 3346 3346 92 93 SYAAAA VMOAAA HHHHxx +291 9798 1 3 1 11 91 291 291 291 291 182 183 FLAAAA WMOAAA OOOOxx +6345 9799 1 1 5 5 45 345 345 1345 6345 90 91 BKAAAA XMOAAA VVVVxx +8099 9800 1 3 9 
19 99 99 99 3099 8099 198 199 NZAAAA YMOAAA AAAAxx +2078 9801 0 2 8 18 78 78 78 2078 2078 156 157 YBAAAA ZMOAAA HHHHxx +8238 9802 0 2 8 18 38 238 238 3238 8238 76 77 WEAAAA ANOAAA OOOOxx +4482 9803 0 2 2 2 82 482 482 4482 4482 164 165 KQAAAA BNOAAA VVVVxx +716 9804 0 0 6 16 16 716 716 716 716 32 33 OBAAAA CNOAAA AAAAxx +7288 9805 0 0 8 8 88 288 1288 2288 7288 176 177 IUAAAA DNOAAA HHHHxx +5906 9806 0 2 6 6 6 906 1906 906 5906 12 13 ETAAAA ENOAAA OOOOxx +5618 9807 0 2 8 18 18 618 1618 618 5618 36 37 CIAAAA FNOAAA VVVVxx +1141 9808 1 1 1 1 41 141 1141 1141 1141 82 83 XRAAAA GNOAAA AAAAxx +8231 9809 1 3 1 11 31 231 231 3231 8231 62 63 PEAAAA HNOAAA HHHHxx +3713 9810 1 1 3 13 13 713 1713 3713 3713 26 27 VMAAAA INOAAA OOOOxx +9158 9811 0 2 8 18 58 158 1158 4158 9158 116 117 GOAAAA JNOAAA VVVVxx +4051 9812 1 3 1 11 51 51 51 4051 4051 102 103 VZAAAA KNOAAA AAAAxx +1973 9813 1 1 3 13 73 973 1973 1973 1973 146 147 XXAAAA LNOAAA HHHHxx +6710 9814 0 2 0 10 10 710 710 1710 6710 20 21 CYAAAA MNOAAA OOOOxx +1021 9815 1 1 1 1 21 21 1021 1021 1021 42 43 HNAAAA NNOAAA VVVVxx +2196 9816 0 0 6 16 96 196 196 2196 2196 192 193 MGAAAA ONOAAA AAAAxx +8335 9817 1 3 5 15 35 335 335 3335 8335 70 71 PIAAAA PNOAAA HHHHxx +2272 9818 0 0 2 12 72 272 272 2272 2272 144 145 KJAAAA QNOAAA OOOOxx +3818 9819 0 2 8 18 18 818 1818 3818 3818 36 37 WQAAAA RNOAAA VVVVxx +679 9820 1 3 9 19 79 679 679 679 679 158 159 DAAAAA SNOAAA AAAAxx +7512 9821 0 0 2 12 12 512 1512 2512 7512 24 25 YCAAAA TNOAAA HHHHxx +493 9822 1 1 3 13 93 493 493 493 493 186 187 ZSAAAA UNOAAA OOOOxx +5663 9823 1 3 3 3 63 663 1663 663 5663 126 127 VJAAAA VNOAAA VVVVxx +4655 9824 1 3 5 15 55 655 655 4655 4655 110 111 BXAAAA WNOAAA AAAAxx +3996 9825 0 0 6 16 96 996 1996 3996 3996 192 193 SXAAAA XNOAAA HHHHxx +8797 9826 1 1 7 17 97 797 797 3797 8797 194 195 JAAAAA YNOAAA OOOOxx +2991 9827 1 3 1 11 91 991 991 2991 2991 182 183 BLAAAA ZNOAAA VVVVxx +7038 9828 0 2 8 18 38 38 1038 2038 7038 76 77 SKAAAA AOOAAA AAAAxx +4174 9829 0 2 4 14 74 174 174 4174 4174 148 149 OEAAAA BOOAAA HHHHxx +6908 9830 0 0 8 8 8 908 908 1908 6908 16 17 SFAAAA COOAAA OOOOxx +8477 9831 1 1 7 17 77 477 477 3477 8477 154 155 BOAAAA DOOAAA VVVVxx +3576 9832 0 0 6 16 76 576 1576 3576 3576 152 153 OHAAAA EOOAAA AAAAxx +2685 9833 1 1 5 5 85 685 685 2685 2685 170 171 HZAAAA FOOAAA HHHHxx +9161 9834 1 1 1 1 61 161 1161 4161 9161 122 123 JOAAAA GOOAAA OOOOxx +2951 9835 1 3 1 11 51 951 951 2951 2951 102 103 NJAAAA HOOAAA VVVVxx +8362 9836 0 2 2 2 62 362 362 3362 8362 124 125 QJAAAA IOOAAA AAAAxx +2379 9837 1 3 9 19 79 379 379 2379 2379 158 159 NNAAAA JOOAAA HHHHxx +1277 9838 1 1 7 17 77 277 1277 1277 1277 154 155 DXAAAA KOOAAA OOOOxx +1728 9839 0 0 8 8 28 728 1728 1728 1728 56 57 MOAAAA LOOAAA VVVVxx +9816 9840 0 0 6 16 16 816 1816 4816 9816 32 33 ONAAAA MOOAAA AAAAxx +6288 9841 0 0 8 8 88 288 288 1288 6288 176 177 WHAAAA NOOAAA HHHHxx +8985 9842 1 1 5 5 85 985 985 3985 8985 170 171 PHAAAA OOOAAA OOOOxx +771 9843 1 3 1 11 71 771 771 771 771 142 143 RDAAAA POOAAA VVVVxx +464 9844 0 0 4 4 64 464 464 464 464 128 129 WRAAAA QOOAAA AAAAxx +9625 9845 1 1 5 5 25 625 1625 4625 9625 50 51 FGAAAA ROOAAA HHHHxx +9608 9846 0 0 8 8 8 608 1608 4608 9608 16 17 OFAAAA SOOAAA OOOOxx +9170 9847 0 2 0 10 70 170 1170 4170 9170 140 141 SOAAAA TOOAAA VVVVxx +9658 9848 0 2 8 18 58 658 1658 4658 9658 116 117 MHAAAA UOOAAA AAAAxx +7515 9849 1 3 5 15 15 515 1515 2515 7515 30 31 BDAAAA VOOAAA HHHHxx +9400 9850 0 0 0 0 0 400 1400 4400 9400 0 1 OXAAAA WOOAAA OOOOxx +2045 9851 1 1 5 5 45 45 45 2045 2045 90 91 RAAAAA XOOAAA VVVVxx 
+324 9852 0 0 4 4 24 324 324 324 324 48 49 MMAAAA YOOAAA AAAAxx +4252 9853 0 0 2 12 52 252 252 4252 4252 104 105 OHAAAA ZOOAAA HHHHxx +8329 9854 1 1 9 9 29 329 329 3329 8329 58 59 JIAAAA APOAAA OOOOxx +4472 9855 0 0 2 12 72 472 472 4472 4472 144 145 AQAAAA BPOAAA VVVVxx +1047 9856 1 3 7 7 47 47 1047 1047 1047 94 95 HOAAAA CPOAAA AAAAxx +9341 9857 1 1 1 1 41 341 1341 4341 9341 82 83 HVAAAA DPOAAA HHHHxx +7000 9858 0 0 0 0 0 0 1000 2000 7000 0 1 GJAAAA EPOAAA OOOOxx +1429 9859 1 1 9 9 29 429 1429 1429 1429 58 59 ZCAAAA FPOAAA VVVVxx +2701 9860 1 1 1 1 1 701 701 2701 2701 2 3 XZAAAA GPOAAA AAAAxx +6630 9861 0 2 0 10 30 630 630 1630 6630 60 61 AVAAAA HPOAAA HHHHxx +3669 9862 1 1 9 9 69 669 1669 3669 3669 138 139 DLAAAA IPOAAA OOOOxx +8613 9863 1 1 3 13 13 613 613 3613 8613 26 27 HTAAAA JPOAAA VVVVxx +7080 9864 0 0 0 0 80 80 1080 2080 7080 160 161 IMAAAA KPOAAA AAAAxx +8788 9865 0 0 8 8 88 788 788 3788 8788 176 177 AAAAAA LPOAAA HHHHxx +6291 9866 1 3 1 11 91 291 291 1291 6291 182 183 ZHAAAA MPOAAA OOOOxx +7885 9867 1 1 5 5 85 885 1885 2885 7885 170 171 HRAAAA NPOAAA VVVVxx +7160 9868 0 0 0 0 60 160 1160 2160 7160 120 121 KPAAAA OPOAAA AAAAxx +6140 9869 0 0 0 0 40 140 140 1140 6140 80 81 ECAAAA PPOAAA HHHHxx +9881 9870 1 1 1 1 81 881 1881 4881 9881 162 163 BQAAAA QPOAAA OOOOxx +9140 9871 0 0 0 0 40 140 1140 4140 9140 80 81 ONAAAA RPOAAA VVVVxx +644 9872 0 0 4 4 44 644 644 644 644 88 89 UYAAAA SPOAAA AAAAxx +3667 9873 1 3 7 7 67 667 1667 3667 3667 134 135 BLAAAA TPOAAA HHHHxx +2675 9874 1 3 5 15 75 675 675 2675 2675 150 151 XYAAAA UPOAAA OOOOxx +9492 9875 0 0 2 12 92 492 1492 4492 9492 184 185 CBAAAA VPOAAA VVVVxx +5004 9876 0 0 4 4 4 4 1004 4 5004 8 9 MKAAAA WPOAAA AAAAxx +9456 9877 0 0 6 16 56 456 1456 4456 9456 112 113 SZAAAA XPOAAA HHHHxx +8197 9878 1 1 7 17 97 197 197 3197 8197 194 195 HDAAAA YPOAAA OOOOxx +2837 9879 1 1 7 17 37 837 837 2837 2837 74 75 DFAAAA ZPOAAA VVVVxx +127 9880 1 3 7 7 27 127 127 127 127 54 55 XEAAAA AQOAAA AAAAxx +9772 9881 0 0 2 12 72 772 1772 4772 9772 144 145 WLAAAA BQOAAA HHHHxx +5743 9882 1 3 3 3 43 743 1743 743 5743 86 87 XMAAAA CQOAAA OOOOxx +2007 9883 1 3 7 7 7 7 7 2007 2007 14 15 FZAAAA DQOAAA VVVVxx +7586 9884 0 2 6 6 86 586 1586 2586 7586 172 173 UFAAAA EQOAAA AAAAxx +45 9885 1 1 5 5 45 45 45 45 45 90 91 TBAAAA FQOAAA HHHHxx +6482 9886 0 2 2 2 82 482 482 1482 6482 164 165 IPAAAA GQOAAA OOOOxx +4565 9887 1 1 5 5 65 565 565 4565 4565 130 131 PTAAAA HQOAAA VVVVxx +6975 9888 1 3 5 15 75 975 975 1975 6975 150 151 HIAAAA IQOAAA AAAAxx +7260 9889 0 0 0 0 60 260 1260 2260 7260 120 121 GTAAAA JQOAAA HHHHxx +2830 9890 0 2 0 10 30 830 830 2830 2830 60 61 WEAAAA KQOAAA OOOOxx +9365 9891 1 1 5 5 65 365 1365 4365 9365 130 131 FWAAAA LQOAAA VVVVxx +8207 9892 1 3 7 7 7 207 207 3207 8207 14 15 RDAAAA MQOAAA AAAAxx +2506 9893 0 2 6 6 6 506 506 2506 2506 12 13 KSAAAA NQOAAA HHHHxx +8081 9894 1 1 1 1 81 81 81 3081 8081 162 163 VYAAAA OQOAAA OOOOxx +8678 9895 0 2 8 18 78 678 678 3678 8678 156 157 UVAAAA PQOAAA VVVVxx +9932 9896 0 0 2 12 32 932 1932 4932 9932 64 65 ASAAAA QQOAAA AAAAxx +447 9897 1 3 7 7 47 447 447 447 447 94 95 FRAAAA RQOAAA HHHHxx +9187 9898 1 3 7 7 87 187 1187 4187 9187 174 175 JPAAAA SQOAAA OOOOxx +89 9899 1 1 9 9 89 89 89 89 89 178 179 LDAAAA TQOAAA VVVVxx +7027 9900 1 3 7 7 27 27 1027 2027 7027 54 55 HKAAAA UQOAAA AAAAxx +1536 9901 0 0 6 16 36 536 1536 1536 1536 72 73 CHAAAA VQOAAA HHHHxx +160 9902 0 0 0 0 60 160 160 160 160 120 121 EGAAAA WQOAAA OOOOxx +7679 9903 1 3 9 19 79 679 1679 2679 7679 158 159 JJAAAA XQOAAA VVVVxx +5973 9904 1 1 3 13 73 973 1973 973 
5973 146 147 TVAAAA YQOAAA AAAAxx +4401 9905 1 1 1 1 1 401 401 4401 4401 2 3 HNAAAA ZQOAAA HHHHxx +395 9906 1 3 5 15 95 395 395 395 395 190 191 FPAAAA AROAAA OOOOxx +4904 9907 0 0 4 4 4 904 904 4904 4904 8 9 QGAAAA BROAAA VVVVxx +2759 9908 1 3 9 19 59 759 759 2759 2759 118 119 DCAAAA CROAAA AAAAxx +8713 9909 1 1 3 13 13 713 713 3713 8713 26 27 DXAAAA DROAAA HHHHxx +3770 9910 0 2 0 10 70 770 1770 3770 3770 140 141 APAAAA EROAAA OOOOxx +8272 9911 0 0 2 12 72 272 272 3272 8272 144 145 EGAAAA FROAAA VVVVxx +5358 9912 0 2 8 18 58 358 1358 358 5358 116 117 CYAAAA GROAAA AAAAxx +9747 9913 1 3 7 7 47 747 1747 4747 9747 94 95 XKAAAA HROAAA HHHHxx +1567 9914 1 3 7 7 67 567 1567 1567 1567 134 135 HIAAAA IROAAA OOOOxx +2136 9915 0 0 6 16 36 136 136 2136 2136 72 73 EEAAAA JROAAA VVVVxx +314 9916 0 2 4 14 14 314 314 314 314 28 29 CMAAAA KROAAA AAAAxx +4583 9917 1 3 3 3 83 583 583 4583 4583 166 167 HUAAAA LROAAA HHHHxx +375 9918 1 3 5 15 75 375 375 375 375 150 151 LOAAAA MROAAA OOOOxx +5566 9919 0 2 6 6 66 566 1566 566 5566 132 133 CGAAAA NROAAA VVVVxx +6865 9920 1 1 5 5 65 865 865 1865 6865 130 131 BEAAAA OROAAA AAAAxx +894 9921 0 2 4 14 94 894 894 894 894 188 189 KIAAAA PROAAA HHHHxx +5399 9922 1 3 9 19 99 399 1399 399 5399 198 199 RZAAAA QROAAA OOOOxx +1385 9923 1 1 5 5 85 385 1385 1385 1385 170 171 HBAAAA RROAAA VVVVxx +2156 9924 0 0 6 16 56 156 156 2156 2156 112 113 YEAAAA SROAAA AAAAxx +9659 9925 1 3 9 19 59 659 1659 4659 9659 118 119 NHAAAA TROAAA HHHHxx +477 9926 1 1 7 17 77 477 477 477 477 154 155 JSAAAA UROAAA OOOOxx +8194 9927 0 2 4 14 94 194 194 3194 8194 188 189 EDAAAA VROAAA VVVVxx +3937 9928 1 1 7 17 37 937 1937 3937 3937 74 75 LVAAAA WROAAA AAAAxx +3745 9929 1 1 5 5 45 745 1745 3745 3745 90 91 BOAAAA XROAAA HHHHxx +4096 9930 0 0 6 16 96 96 96 4096 4096 192 193 OBAAAA YROAAA OOOOxx +5487 9931 1 3 7 7 87 487 1487 487 5487 174 175 BDAAAA ZROAAA VVVVxx +2475 9932 1 3 5 15 75 475 475 2475 2475 150 151 FRAAAA ASOAAA AAAAxx +6105 9933 1 1 5 5 5 105 105 1105 6105 10 11 VAAAAA BSOAAA HHHHxx +6036 9934 0 0 6 16 36 36 36 1036 6036 72 73 EYAAAA CSOAAA OOOOxx +1315 9935 1 3 5 15 15 315 1315 1315 1315 30 31 PYAAAA DSOAAA VVVVxx +4473 9936 1 1 3 13 73 473 473 4473 4473 146 147 BQAAAA ESOAAA AAAAxx +4016 9937 0 0 6 16 16 16 16 4016 4016 32 33 MYAAAA FSOAAA HHHHxx +8135 9938 1 3 5 15 35 135 135 3135 8135 70 71 XAAAAA GSOAAA OOOOxx +8892 9939 0 0 2 12 92 892 892 3892 8892 184 185 AEAAAA HSOAAA VVVVxx +4850 9940 0 2 0 10 50 850 850 4850 4850 100 101 OEAAAA ISOAAA AAAAxx +2545 9941 1 1 5 5 45 545 545 2545 2545 90 91 XTAAAA JSOAAA HHHHxx +3788 9942 0 0 8 8 88 788 1788 3788 3788 176 177 SPAAAA KSOAAA OOOOxx +1672 9943 0 0 2 12 72 672 1672 1672 1672 144 145 IMAAAA LSOAAA VVVVxx +3664 9944 0 0 4 4 64 664 1664 3664 3664 128 129 YKAAAA MSOAAA AAAAxx +3775 9945 1 3 5 15 75 775 1775 3775 3775 150 151 FPAAAA NSOAAA HHHHxx +3103 9946 1 3 3 3 3 103 1103 3103 3103 6 7 JPAAAA OSOAAA OOOOxx +9335 9947 1 3 5 15 35 335 1335 4335 9335 70 71 BVAAAA PSOAAA VVVVxx +9200 9948 0 0 0 0 0 200 1200 4200 9200 0 1 WPAAAA QSOAAA AAAAxx +8665 9949 1 1 5 5 65 665 665 3665 8665 130 131 HVAAAA RSOAAA HHHHxx +1356 9950 0 0 6 16 56 356 1356 1356 1356 112 113 EAAAAA SSOAAA OOOOxx +6118 9951 0 2 8 18 18 118 118 1118 6118 36 37 IBAAAA TSOAAA VVVVxx +4605 9952 1 1 5 5 5 605 605 4605 4605 10 11 DVAAAA USOAAA AAAAxx +5651 9953 1 3 1 11 51 651 1651 651 5651 102 103 JJAAAA VSOAAA HHHHxx +9055 9954 1 3 5 15 55 55 1055 4055 9055 110 111 HKAAAA WSOAAA OOOOxx +8461 9955 1 1 1 1 61 461 461 3461 8461 122 123 LNAAAA XSOAAA VVVVxx +6107 9956 1 3 7 7 7 107 
107 1107 6107 14 15 XAAAAA YSOAAA AAAAxx +1967 9957 1 3 7 7 67 967 1967 1967 1967 134 135 RXAAAA ZSOAAA HHHHxx +8910 9958 0 2 0 10 10 910 910 3910 8910 20 21 SEAAAA ATOAAA OOOOxx +8257 9959 1 1 7 17 57 257 257 3257 8257 114 115 PFAAAA BTOAAA VVVVxx +851 9960 1 3 1 11 51 851 851 851 851 102 103 TGAAAA CTOAAA AAAAxx +7823 9961 1 3 3 3 23 823 1823 2823 7823 46 47 XOAAAA DTOAAA HHHHxx +3208 9962 0 0 8 8 8 208 1208 3208 3208 16 17 KTAAAA ETOAAA OOOOxx +856 9963 0 0 6 16 56 856 856 856 856 112 113 YGAAAA FTOAAA VVVVxx +2654 9964 0 2 4 14 54 654 654 2654 2654 108 109 CYAAAA GTOAAA AAAAxx +7185 9965 1 1 5 5 85 185 1185 2185 7185 170 171 JQAAAA HTOAAA HHHHxx +309 9966 1 1 9 9 9 309 309 309 309 18 19 XLAAAA ITOAAA OOOOxx +9752 9967 0 0 2 12 52 752 1752 4752 9752 104 105 CLAAAA JTOAAA VVVVxx +6405 9968 1 1 5 5 5 405 405 1405 6405 10 11 JMAAAA KTOAAA AAAAxx +6113 9969 1 1 3 13 13 113 113 1113 6113 26 27 DBAAAA LTOAAA HHHHxx +9774 9970 0 2 4 14 74 774 1774 4774 9774 148 149 YLAAAA MTOAAA OOOOxx +1674 9971 0 2 4 14 74 674 1674 1674 1674 148 149 KMAAAA NTOAAA VVVVxx +9602 9972 0 2 2 2 2 602 1602 4602 9602 4 5 IFAAAA OTOAAA AAAAxx +1363 9973 1 3 3 3 63 363 1363 1363 1363 126 127 LAAAAA PTOAAA HHHHxx +6887 9974 1 3 7 7 87 887 887 1887 6887 174 175 XEAAAA QTOAAA OOOOxx +6170 9975 0 2 0 10 70 170 170 1170 6170 140 141 IDAAAA RTOAAA VVVVxx +8888 9976 0 0 8 8 88 888 888 3888 8888 176 177 WDAAAA STOAAA AAAAxx +2981 9977 1 1 1 1 81 981 981 2981 2981 162 163 RKAAAA TTOAAA HHHHxx +7369 9978 1 1 9 9 69 369 1369 2369 7369 138 139 LXAAAA UTOAAA OOOOxx +6227 9979 1 3 7 7 27 227 227 1227 6227 54 55 NFAAAA VTOAAA VVVVxx +8002 9980 0 2 2 2 2 2 2 3002 8002 4 5 UVAAAA WTOAAA AAAAxx +4288 9981 0 0 8 8 88 288 288 4288 4288 176 177 YIAAAA XTOAAA HHHHxx +5136 9982 0 0 6 16 36 136 1136 136 5136 72 73 OPAAAA YTOAAA OOOOxx +1084 9983 0 0 4 4 84 84 1084 1084 1084 168 169 SPAAAA ZTOAAA VVVVxx +9117 9984 1 1 7 17 17 117 1117 4117 9117 34 35 RMAAAA AUOAAA AAAAxx +2406 9985 0 2 6 6 6 406 406 2406 2406 12 13 OOAAAA BUOAAA HHHHxx +1384 9986 0 0 4 4 84 384 1384 1384 1384 168 169 GBAAAA CUOAAA OOOOxx +9194 9987 0 2 4 14 94 194 1194 4194 9194 188 189 QPAAAA DUOAAA VVVVxx +858 9988 0 2 8 18 58 858 858 858 858 116 117 AHAAAA EUOAAA AAAAxx +8592 9989 0 0 2 12 92 592 592 3592 8592 184 185 MSAAAA FUOAAA HHHHxx +4773 9990 1 1 3 13 73 773 773 4773 4773 146 147 PBAAAA GUOAAA OOOOxx +4093 9991 1 1 3 13 93 93 93 4093 4093 186 187 LBAAAA HUOAAA VVVVxx +6587 9992 1 3 7 7 87 587 587 1587 6587 174 175 JTAAAA IUOAAA AAAAxx +6093 9993 1 1 3 13 93 93 93 1093 6093 186 187 JAAAAA JUOAAA HHHHxx +429 9994 1 1 9 9 29 429 429 429 429 58 59 NQAAAA KUOAAA OOOOxx +5780 9995 0 0 0 0 80 780 1780 780 5780 160 161 IOAAAA LUOAAA VVVVxx +1783 9996 1 3 3 3 83 783 1783 1783 1783 166 167 PQAAAA MUOAAA AAAAxx +2992 9997 0 0 2 12 92 992 992 2992 2992 184 185 CLAAAA NUOAAA HHHHxx +0 9998 0 0 0 0 0 0 0 0 0 0 1 AAAAAA OUOAAA OOOOxx +2968 9999 0 0 8 8 68 968 968 2968 2968 136 137 EKAAAA PUOAAA VVVVxx diff --git a/tests/testflows/window_functions/tests/window_clause.py b/tests/testflows/window_functions/tests/window_clause.py new file mode 100644 index 00000000000..714fce89895 --- /dev/null +++ b/tests/testflows/window_functions/tests/window_clause.py @@ -0,0 +1,121 @@ +from testflows.core import * + +from window_functions.requirements import * +from window_functions.tests.common import * + +@TestScenario +def single_window(self): + """Check defining a single named window using window clause. 
+ """ + expected = convert_output(""" + depname | empno | salary | sum + -----------+-------+--------+------- + develop | 7 | 4200 | 4200 + develop | 8 | 6000 | 10200 + develop | 9 | 4500 | 14700 + develop | 10 | 5200 | 19900 + develop | 11 | 5200 | 25100 + personnel | 2 | 3900 | 3900 + personnel | 5 | 3500 | 7400 + sales | 1 | 5000 | 5000 + sales | 3 | 4800 | 9800 + sales | 4 | 4800 | 14600 + """) + + execute_query( + "SELECT depname, empno, salary, sum(salary) OVER w AS sum FROM empsalary WINDOW w AS (PARTITION BY depname ORDER BY empno)", + expected=expected + ) + +@TestScenario +def unused_window(self): + """Check unused window. + """ + expected = convert_output(""" + four + ------- + """) + + execute_query( + "SELECT four FROM tenk1 WHERE 0 WINDOW w AS (PARTITION BY ten)", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_WindowClause_MultipleWindows("1.0") +) +def multiple_identical_windows(self): + """Check defining multiple windows using window clause. + """ + expected = convert_output(""" + sum | count + -------+------- + 3500 | 1 + 7400 | 2 + 11600 | 3 + 16100 | 4 + 25700 | 6 + 25700 | 6 + 30700 | 7 + 41100 | 9 + 41100 | 9 + 47100 | 10 + """) + + execute_query( + "SELECT sum(salary) OVER w1 AS sum, count(*) OVER w2 AS count " + "FROM empsalary WINDOW w1 AS (ORDER BY salary), w2 AS (ORDER BY salary)", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_WindowClause_MultipleWindows("1.0") +) +def multiple_windows(self): + """Check defining multiple windows using window clause. + """ + expected = convert_output(""" + empno | depname | salary | sum1 | sum2 + --------+-----------+--------+-------+-------- + 1 | sales | 5000 | 5000 | 5000 + 2 | personnel | 3900 | 3900 | 8900 + 3 | sales | 4800 | 9800 | 8700 + 4 | sales | 4800 | 14600 | 9600 + 5 | personnel | 3500 | 7400 | 8300 + 7 | develop | 4200 | 4200 | 7700 + 8 | develop | 6000 | 10200 | 10200 + 9 | develop | 4500 | 14700 | 10500 + 10 | develop | 5200 | 19900 | 9700 + 11 | develop | 5200 | 25100 | 10400 + """) + + execute_query("SELECT empno, depname, salary, sum(salary) OVER w1 AS sum1, sum(salary) OVER w2 AS sum2 " + "FROM empsalary WINDOW w1 AS (PARTITION BY depname ORDER BY empno), w2 AS (ORDER BY empno ROWS 1 PRECEDING)", + expected=expected + ) + +@TestScenario +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_WindowClause_MissingWindowSpec_Error("1.0") +) +def missing_window_spec(self): + """Check missing window spec in window clause. + """ + exitcode = 62 + message = "Exception: Syntax error" + + self.context.node.query("SELECT number,sum(number) OVER w1 FROM values('number Int8', (1),(1),(2),(3)) WINDOW w1", + exitcode=exitcode, message=message) + +@TestFeature +@Name("window clause") +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_WindowClause("1.0") +) +def feature(self): + """Check defining frame clause. + """ + for scenario in loads(current_module(), Scenario): + Scenario(run=scenario, flags=TE) diff --git a/tests/testflows/window_functions/tests/window_spec.py b/tests/testflows/window_functions/tests/window_spec.py new file mode 100644 index 00000000000..aacbc192200 --- /dev/null +++ b/tests/testflows/window_functions/tests/window_spec.py @@ -0,0 +1,206 @@ +from testflows.core import * +from window_functions.requirements import * +from window_functions.tests.common import * + +@TestScenario +def partition_clause(self): + """Check window specification that only contains partition clause. 
+ """ + expected = convert_output(""" + sum + ------- + 25100 + 25100 + 25100 + 25100 + 25100 + 7400 + 7400 + 14600 + 14600 + 14600 + """) + + execute_query( + "SELECT sum(salary) OVER w AS sum FROM empsalary WINDOW w AS (PARTITION BY depname)", + expected=expected + ) + +@TestScenario +def orderby_clause(self): + """Check window specification that only contains order by clause. + """ + expected = convert_output(""" + sum + ------- + 25100 + 25100 + 25100 + 25100 + 25100 + 32500 + 32500 + 47100 + 47100 + 47100 + """) + + execute_query( + "SELECT sum(salary) OVER w AS sum FROM empsalary WINDOW w AS (ORDER BY depname)", + expected=expected + ) + +@TestScenario +def frame_clause(self): + """Check window specification that only contains frame clause. + """ + expected = convert_output(""" + sum + ------- + 5000 + 3900 + 4800 + 4800 + 3500 + 4200 + 6000 + 4500 + 5200 + 5200 + """) + + execute_query( + "SELECT sum(salary) OVER w AS sum FROM empsalary WINDOW w AS (ORDER BY empno ROWS CURRENT ROW)", + expected=expected + ) + +@TestScenario +def partition_with_order_by(self): + """Check window specification that contains partition and order by clauses. + """ + expected = convert_output(""" + sum + ------- + 4200 + 8700 + 19100 + 19100 + 25100 + 3500 + 7400 + 9600 + 9600 + 14600 + """) + + execute_query( + "SELECT sum(salary) OVER w AS sum FROM empsalary WINDOW w AS (PARTITION BY depname ORDER BY salary)", + expected=expected + ) + +@TestScenario +def partition_with_frame(self): + """Check window specification that contains partition and frame clauses. + """ + expected = convert_output(""" + sum + ------- + 4200 + 6000 + 4500 + 5200 + 5200 + 3900 + 3500 + 5000 + 4800 + 4800 + """) + + execute_query( + "SELECT sum(salary) OVER w AS sum FROM empsalary WINDOW w AS (PARTITION BY depname, empno ROWS 1 PRECEDING)", + expected=expected + ) + +@TestScenario +def order_by_with_frame(self): + """Check window specification that contains order by and frame clauses. + """ + expected = convert_output(""" + sum + ------- + 4200 + 10200 + 10500 + 9700 + 10400 + 9100 + 7400 + 8500 + 9800 + 9600 + """) + + execute_query( + "SELECT sum(salary) OVER w AS sum FROM empsalary WINDOW w AS (ORDER BY depname, empno ROWS 1 PRECEDING)", + expected=expected + ) + +@TestScenario +def partition_with_order_by_and_frame(self): + """Check window specification that contains all clauses. + """ + expected = convert_output(""" + sum + ------- + 4200 + 8700 + 9700 + 10400 + 11200 + 3500 + 7400 + 4800 + 9600 + 9800 + """) + + execute_query( + "SELECT sum(salary) OVER w AS sum FROM empsalary WINDOW w AS (PARTITION BY depname ORDER BY salary ROWS 1 PRECEDING)", + expected=expected + ) + +@TestScenario +def empty(self): + """Check defining an empty window specification. + """ + expected = convert_output(""" + sum + ------- + 47100 + 47100 + 47100 + 47100 + 47100 + 47100 + 47100 + 47100 + 47100 + 47100 + """) + + execute_query( + "SELECT sum(salary) OVER w AS sum FROM empsalary WINDOW w AS ()", + expected=expected + ) + +@TestFeature +@Name("window spec") +@Requirements( + RQ_SRS_019_ClickHouse_WindowFunctions_WindowSpec("1.0") +) +def feature(self): + """Check defining window specifications. + """ + for scenario in loads(current_module(), Scenario): + Scenario(run=scenario, flags=TE) diff --git a/tests/ubsan_suppressions.txt b/tests/ubsan_suppressions.txt index 6a55155e330..8d10b4f73dd 100644 --- a/tests/ubsan_suppressions.txt +++ b/tests/ubsan_suppressions.txt @@ -1 +1,5 @@ -# We have no suppressions! 
+# https://github.com/llvm-mirror/compiler-rt/blob/master/lib/ubsan/ubsan_checks.inc + +# Some value is outside the range of representable values of type 'long' on user-provided data inside boost::geometry - ignore. +src:*/Functions/pointInPolygon.cpp +src:*/contrib/boost/boost/geometry/* diff --git a/utils/CMakeLists.txt b/utils/CMakeLists.txt index 8a39d591612..3da8612e6c1 100644 --- a/utils/CMakeLists.txt +++ b/utils/CMakeLists.txt @@ -32,10 +32,16 @@ if (NOT DEFINED ENABLE_UTILS OR ENABLE_UTILS) add_subdirectory (db-generator) add_subdirectory (wal-dump) add_subdirectory (check-mysql-binlog) -endif () + add_subdirectory (keeper-bench) -if (ENABLE_CODE_QUALITY) - add_subdirectory (check-style) + if (USE_NURAFT) + add_subdirectory (keeper-data-dumper) + endif () + + # memcpy_jart.S contains position dependent code + if (NOT CMAKE_POSITION_INDEPENDENT_CODE AND NOT OS_DARWIN AND NOT OS_SUNOS) + add_subdirectory (memcpy-bench) + endif () endif () add_subdirectory (package) diff --git a/utils/build/build_debian_unbundled.sh b/utils/build/build_debian_unbundled.sh deleted file mode 100755 index 5b2129fc5bf..00000000000 --- a/utils/build/build_debian_unbundled.sh +++ /dev/null @@ -1,26 +0,0 @@ -#!/bin/bash - -ROOT_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && cd ../.. && pwd) - -# also possible: DIST=bionic DIST=testing -export DIST=${DIST=unstable} - -cd $ROOT_DIR -. $ROOT_DIR/debian/.pbuilderrc -if [[ -n "$FORCE_PBUILDER_CREATE" || ! -e "$BASETGZ" ]] ; then - sudo --preserve-env pbuilder create --configfile $ROOT_DIR/debian/.pbuilderrc $PBUILDER_OPT -fi - -env TEST_RUN=1 \ - `# Skip tests:` \ - `# 00281 requires internal compiler` \ - `# 00416 requires patched poco from contrib/` \ - TEST_OPT="--skip long compile 00416 $TEST_OPT" \ - TEST_TRUE=false \ - DH_VERBOSE=1 \ - CMAKE_FLAGS="-DUNBUNDLED=1 -DUSE_STATIC_LIBRARIES=0 $CMAKE_FLAGS" \ - `# Use all possible contrib libs from system` \ - `# psmisc - killall` \ - `# gdb - symbol test in pbuilder` \ - EXTRAPACKAGES="psmisc libboost-program-options-dev libboost-system-dev libboost-filesystem-dev libboost-thread-dev libboost-regex-dev libboost-iostreams-dev zlib1g-dev liblz4-dev libdouble-conversion-dev libsparsehash-dev librdkafka-dev libpoco-dev unixodbc-dev libsparsehash-dev libgoogle-perftools-dev libzstd-dev libre2-dev libunwind-dev googletest libcctz-dev libcapnp-dev libjemalloc-dev libssl-dev libcurl4-openssl-dev libunwind-dev libgsasl7-dev libxml2-dev libbrotli-dev libhyperscan-dev rapidjson-dev $EXTRAPACKAGES" \ - pdebuild --configfile $ROOT_DIR/debian/.pbuilderrc $PDEBUILD_OPT diff --git a/utils/build/build_debian_unbundled_split.sh b/utils/build/build_debian_unbundled_split.sh deleted file mode 100755 index 5242b0f4a6f..00000000000 --- a/utils/build/build_debian_unbundled_split.sh +++ /dev/null @@ -1,6 +0,0 @@ -#!/bin/bash - -CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) - -CMAKE_FLAGS+=" -DCLICKHOUSE_SPLIT_BINARY=1 " -. 
$CUR_DIR/build_debian_unbundled.sh
diff --git a/utils/check-marks/main.cpp b/utils/check-marks/main.cpp
index 2b244dcf0b6..e9e0bbf1134 100644
--- a/utils/check-marks/main.cpp
+++ b/utils/check-marks/main.cpp
@@ -19,7 +19,7 @@ static void checkByCompressedReadBuffer(const std::string & mrk_path, const std::string & bin_path)
{
 DB::ReadBufferFromFile mrk_in(mrk_path);
- DB::CompressedReadBufferFromFile bin_in(bin_path, 0, 0, 0);
+ DB::CompressedReadBufferFromFile bin_in(bin_path, 0, 0, 0, nullptr);
 DB::WriteBufferFromFileDescriptor out(STDOUT_FILENO);
 bool mrk2_format = boost::algorithm::ends_with(mrk_path, ".mrk2");
diff --git a/utils/check-style/CMakeLists.txt b/utils/check-style/CMakeLists.txt
deleted file mode 100644
index 24b315019fe..00000000000
--- a/utils/check-style/CMakeLists.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-add_test(NAME check-style COMMAND bash -c "${CMAKE_CURRENT_SOURCE_DIR}/check-style")
-add_test(NAME check-include COMMAND sh -c "env RUN_DIR=${CMAKE_CURRENT_SOURCE_DIR} ROOT_DIR=${ClickHouse_SOURCE_DIR} BUILD_DIR=${ClickHouse_BINARY_DIR} CXX=${CMAKE_CXX_COMPILER} ${CMAKE_CURRENT_SOURCE_DIR}/check-include-stat")
diff --git a/utils/check-style/check-style b/utils/check-style/check-style
index f8926a9af2f..db6b33a569b 100755
--- a/utils/check-style/check-style
+++ b/utils/check-style/check-style
@@ -97,6 +97,36 @@ for test_case in "${tests_with_query_log[@]}"; do
 grep -qE current_database.*currentDatabase "$test_case" || echo "Queries to system.query_log/system.query_thread_log does not have current_database = currentDatabase() condition in $test_case"
done
+# Queries with ReplicatedMergeTree
+# NOTE: it is not that accurate, but at least something.
+tests_with_replicated_merge_tree=( $(
+ find $ROOT_PATH/tests/queries -iname '*.sql' -or -iname '*.sh' -or -iname '*.py' |
+ grep -vP $EXCLUDE_DIRS |
+ xargs grep --with-filename -e ReplicatedMergeTree | cut -d: -f1 | sort -u
+) )
+for test_case in "${tests_with_replicated_merge_tree[@]}"; do
+ case "$test_case" in
+ *.sh)
+ test_case_zk_prefix="\$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX"
+ grep -q -e "ReplicatedMergeTree.*$test_case_zk_prefix" "$test_case" || echo "ReplicatedMergeTree should contain '$test_case_zk_prefix' in zookeeper path to avoid overlaps ($test_case)"
+ ;;
+ *.sql)
+ # NOTE: *.sql is not supported, because it is not possible right now:
+ # - ReplicatedMergeTree supports only ASTLiteral for the zookeeper path
+ # (and adding support for other nodes, with evaluating them, is not that easy, since zk_prefix is "optional")
+ # - Hence expressions like concat(currentDatabase(), 'foo') cannot be used
+ # - Also params cannot be used, because they are wrapped with CAST()
+ #
+ # But hopefully they will not be a problem
+ # (since they do not do any "stressing" and overlap probability should be lower).
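+ #
+ # For reference, an illustrative (hypothetical) table definition that would
+ # satisfy the *.sh check above embeds the prefix into the zookeeper path:
+ # ENGINE = ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/table', 'replica1')
+ # so that concurrently running tests never share a coordination path.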
+ ;;
+ *.py)
+ # Right now there are no such tests anyway
+ echo "No ReplicatedMergeTree style check for *.py ($test_case)"
+ ;;
+ esac
+done
+
# All the submodules should be from https://github.com/
find $ROOT_PATH -name '.gitmodules' | while read i; do grep -F 'url = ' $i | grep -v -F 'https://github.com/' && echo 'All the submodules should be from https://github.com/'; done
diff --git a/utils/config-processor/CMakeLists.txt b/utils/config-processor/CMakeLists.txt
index e7e15d0be53..a378d66a3d3 100644
--- a/utils/config-processor/CMakeLists.txt
+++ b/utils/config-processor/CMakeLists.txt
@@ -1,4 +1,2 @@
add_executable (config-processor config-processor.cpp)
target_link_libraries(config-processor PRIVATE clickhouse_common_config)
-
-INSTALL(TARGETS config-processor RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT config-processor)
diff --git a/utils/convert-month-partitioned-parts/main.cpp b/utils/convert-month-partitioned-parts/main.cpp
index 0a697937eb6..a6829d79726 100644
--- a/utils/convert-month-partitioned-parts/main.cpp
+++ b/utils/convert-month-partitioned-parts/main.cpp
@@ -47,8 +47,9 @@ void run(String part_path, String date_column, String dest_path)
 DayNum max_date;
 MergeTreePartInfo::parseMinMaxDatesFromPartName(old_part_name, min_date, max_date);
- UInt32 yyyymm = DateLUT::instance().toNumYYYYMM(min_date);
- if (yyyymm != DateLUT::instance().toNumYYYYMM(max_date))
+ const auto & time_zone = DateLUT::instance();
+ UInt32 yyyymm = time_zone.toNumYYYYMM(min_date);
+ if (yyyymm != time_zone.toNumYYYYMM(max_date))
 throw Exception("Part " + old_part_name + " spans different months", ErrorCodes::BAD_DATA_PART_NAME);
diff --git a/utils/corrector_utf8/CMakeLists.txt b/utils/corrector_utf8/CMakeLists.txt
index 9114f3f58a0..4784fd43e2d 100644
--- a/utils/corrector_utf8/CMakeLists.txt
+++ b/utils/corrector_utf8/CMakeLists.txt
@@ -1,6 +1,2 @@
add_executable(corrector_utf8 corrector_utf8.cpp)
-
-# Link the executable to the library.
target_link_libraries(corrector_utf8 PRIVATE clickhouse_common_io)
-
-install(TARGETS corrector_utf8 RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT corrector_utf8)
diff --git a/utils/github/backport.py b/utils/github/backport.py
index 7fddbbee241..589124324b1 100644
--- a/utils/github/backport.py
+++ b/utils/github/backport.py
@@ -25,24 +25,23 @@ class Backport:
 def getPullRequests(self, from_commit):
 return self._gh.get_pull_requests(from_commit)
- def getBranchesWithLTS(self):
- branches = []
- for pull_request in self._gh.find_pull_requests("release-lts"):
+ def getBranchesWithRelease(self):
+ branches = set()
+ for pull_request in self._gh.find_pull_requests("release"):
 if not pull_request['merged'] and not pull_request['closed']:
- branches.append(pull_request['headRefName'])
+ branches.add(pull_request['headRefName'])
 return branches
- def execute(self, repo, upstream, until_commit, number, run_cherrypick, find_lts=False):
+ def execute(self, repo, upstream, until_commit, run_cherrypick):
 repo = LocalRepo(repo, upstream, self.default_branch_name)
 all_branches = repo.get_release_branches() # [(branch_name, base_commit)]
- last_branches = set([branch[0] for branch in all_branches[-number:]])
- lts_branches = set(self.getBranchesWithLTS() if find_lts else [])
+ release_branches = self.getBranchesWithRelease()
 branches = []
 # iterate over all branches to preserve their precedence.
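 # Illustrative walk-through (hypothetical values, not from this patch): with
 # all_branches = [('20.8', c1), ('21.1', c2), ('21.2', c3)] and
 # release_branches = {'21.1', '21.2'}, the loop below keeps
 # [('21.1', c2), ('21.2', c3)], i.e. only the open release branches,
 # in their original (oldest-first) order.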
for branch in all_branches: - if branch[0] in last_branches or branch[0] in lts_branches: + if branch[0] in release_branches: branches.append(branch) if not branches: @@ -76,7 +75,7 @@ class Backport: # First pass. Find all must-backports for label in pr['labels']['nodes']: - if label['name'] == 'pr-bugfix': + if label['name'] == 'pr-bugfix' or label['name'] == 'pr-must-backport': backport_map[pr['number']] = branch_set.copy() continue matched = RE_MUST_BACKPORT.match(label['name']) @@ -115,8 +114,6 @@ if __name__ == "__main__": parser.add_argument('--token', type=str, required=True, help='token for Github access') parser.add_argument('--repo', type=str, required=True, help='path to full repository', metavar='PATH') parser.add_argument('--til', type=str, help='check PRs from HEAD til this commit', metavar='COMMIT') - parser.add_argument('-n', type=int, dest='number', help='number of last release branches to consider') - parser.add_argument('--lts', action='store_true', help='consider branches with LTS') parser.add_argument('--dry-run', action='store_true', help='do not create or merge any PRs', default=False) parser.add_argument('--verbose', '-v', action='store_true', help='more verbose output', default=False) parser.add_argument('--upstream', '-u', type=str, help='remote name of upstream in repository', default='origin') @@ -129,4 +126,4 @@ if __name__ == "__main__": cherrypick_run = lambda token, pr, branch: CherryPick(token, 'ClickHouse', 'ClickHouse', 'core', pr, branch).execute(args.repo, args.dry_run) bp = Backport(args.token, 'ClickHouse', 'ClickHouse', 'core') - bp.execute(args.repo, args.upstream, args.til, args.number, cherrypick_run, args.lts) + bp.execute(args.repo, args.upstream, args.til, cherrypick_run) diff --git a/utils/keeper-bench/CMakeLists.txt b/utils/keeper-bench/CMakeLists.txt new file mode 100644 index 00000000000..2f12194d1b7 --- /dev/null +++ b/utils/keeper-bench/CMakeLists.txt @@ -0,0 +1,2 @@ +add_executable(keeper-bench Generator.cpp Runner.cpp Stats.cpp main.cpp) +target_link_libraries(keeper-bench PRIVATE clickhouse_common_zookeeper) diff --git a/utils/keeper-bench/Generator.cpp b/utils/keeper-bench/Generator.cpp new file mode 100644 index 00000000000..852de07f2e1 --- /dev/null +++ b/utils/keeper-bench/Generator.cpp @@ -0,0 +1,238 @@ +#include "Generator.h" +#include +#include + +using namespace Coordination; +using namespace zkutil; + +namespace DB +{ +namespace ErrorCodes +{ + extern const int LOGICAL_ERROR; +} +} + +namespace +{ +std::string generateRandomString(size_t length) +{ + if (length == 0) + return ""; + + static const auto & chars = "0123456789" + "abcdefghijklmnopqrstuvwxyz" + "ABCDEFGHIJKLMNOPQRSTUVWXYZ"; + + static pcg64 rng(randomSeed()); + static std::uniform_int_distribution pick(0, sizeof(chars) - 2); + + std::string s; + + s.reserve(length); + + while (length--) + s += chars[pick(rng)]; + + return s; +} +} + +std::string generateRandomPath(const std::string & prefix, size_t length) +{ + return std::filesystem::path(prefix) / generateRandomString(length); +} + +std::string generateRandomData(size_t size) +{ + return generateRandomString(size); +} + +void CreateRequestGenerator::startup(Coordination::ZooKeeper & zookeeper) +{ + auto promise = std::make_shared>(); + auto future = promise->get_future(); + auto create_callback = [promise] (const CreateResponse & response) + { + if (response.error != Coordination::Error::ZOK) + promise->set_exception(std::make_exception_ptr(zkutil::KeeperException(response.error))); + else + 
promise->set_value(); + }; + zookeeper.create(path_prefix, "", false, false, default_acls, create_callback); + future.get(); +} + +ZooKeeperRequestPtr CreateRequestGenerator::generate() +{ + auto request = std::make_shared(); + request->acls = default_acls; + size_t plength = 5; + if (path_length) + plength = *path_length; + auto path_candidate = generateRandomPath(path_prefix, plength); + + while (paths_created.count(path_candidate)) + path_candidate = generateRandomPath(path_prefix, plength); + + paths_created.insert(path_candidate); + + request->path = path_candidate; + if (data_size) + request->data = generateRandomData(*data_size); + + return request; +} + + +void GetRequestGenerator::startup(Coordination::ZooKeeper & zookeeper) +{ + auto promise = std::make_shared>(); + auto future = promise->get_future(); + auto create_callback = [promise] (const CreateResponse & response) + { + if (response.error != Coordination::Error::ZOK) + promise->set_exception(std::make_exception_ptr(zkutil::KeeperException(response.error))); + else + promise->set_value(); + }; + zookeeper.create(path_prefix, "", false, false, default_acls, create_callback); + future.get(); + size_t total_nodes = 1; + if (num_nodes) + total_nodes = *num_nodes; + + for (size_t i = 0; i < total_nodes; ++i) + { + auto path = generateRandomPath(path_prefix, 5); + while (std::find(paths_to_get.begin(), paths_to_get.end(), path) != paths_to_get.end()) + path = generateRandomPath(path_prefix, 5); + + auto create_promise = std::make_shared>(); + auto create_future = create_promise->get_future(); + auto callback = [create_promise] (const CreateResponse & response) + { + if (response.error != Coordination::Error::ZOK) + create_promise->set_exception(std::make_exception_ptr(zkutil::KeeperException(response.error))); + else + create_promise->set_value(); + }; + std::string data; + if (nodes_data_size) + data = generateRandomString(*nodes_data_size); + + zookeeper.create(path, data, false, false, default_acls, callback); + create_future.get(); + paths_to_get.push_back(path); + } +} + +Coordination::ZooKeeperRequestPtr GetRequestGenerator::generate() +{ + auto request = std::make_shared(); + + size_t path_index = distribution(rng); + request->path = paths_to_get[path_index]; + return request; +} + +void ListRequestGenerator::startup(Coordination::ZooKeeper & zookeeper) +{ + auto promise = std::make_shared>(); + auto future = promise->get_future(); + auto create_callback = [promise] (const CreateResponse & response) + { + if (response.error != Coordination::Error::ZOK) + promise->set_exception(std::make_exception_ptr(zkutil::KeeperException(response.error))); + else + promise->set_value(); + }; + zookeeper.create(path_prefix, "", false, false, default_acls, create_callback); + future.get(); + + size_t total_nodes = 1; + if (num_nodes) + total_nodes = *num_nodes; + + size_t path_length = 5; + if (paths_length) + path_length = *paths_length; + + for (size_t i = 0; i < total_nodes; ++i) + { + auto path = generateRandomPath(path_prefix, path_length); + + auto create_promise = std::make_shared>(); + auto create_future = create_promise->get_future(); + auto callback = [create_promise] (const CreateResponse & response) + { + if (response.error != Coordination::Error::ZOK) + create_promise->set_exception(std::make_exception_ptr(zkutil::KeeperException(response.error))); + else + create_promise->set_value(); + }; + zookeeper.create(path, "", false, false, default_acls, callback); + create_future.get(); + } +} + +Coordination::ZooKeeperRequestPtr 
ListRequestGenerator::generate() +{ + auto request = std::make_shared(); + request->path = path_prefix; + return request; +} + +std::unique_ptr getGenerator(const std::string & name) +{ + if (name == "create_no_data") + { + return std::make_unique(); + } + else if (name == "create_small_data") + { + return std::make_unique("/create_generator", 5, 32); + } + else if (name == "create_medium_data") + { + return std::make_unique("/create_generator", 5, 1024); + } + else if (name == "create_big_data") + { + return std::make_unique("/create_generator", 5, 512 * 1024); + } + else if (name == "get_no_data") + { + return std::make_unique("/get_generator", 10, 0); + } + else if (name == "get_small_data") + { + return std::make_unique("/get_generator", 10, 32); + } + else if (name == "get_medium_data") + { + return std::make_unique("/get_generator", 10, 1024); + } + else if (name == "get_big_data") + { + return std::make_unique("/get_generator", 10, 512 * 1024); + } + else if (name == "list_no_nodes") + { + return std::make_unique("/list_generator", 0, 1); + } + else if (name == "list_few_nodes") + { + return std::make_unique("/list_generator", 10, 5); + } + else if (name == "list_medium_nodes") + { + return std::make_unique("/list_generator", 1000, 5); + } + else if (name == "list_a_lot_nodes") + { + return std::make_unique("/list_generator", 100000, 5); + } + + throw DB::Exception(DB::ErrorCodes::LOGICAL_ERROR, "Unknown generator {}", name); +} diff --git a/utils/keeper-bench/Generator.h b/utils/keeper-bench/Generator.h new file mode 100644 index 00000000000..d6cc0eec335 --- /dev/null +++ b/utils/keeper-bench/Generator.h @@ -0,0 +1,107 @@ +#pragma once +#include +#include +#include +#include +#include +#include +#include +#include + + +std::string generateRandomPath(const std::string & prefix, size_t length = 5); + +std::string generateRandomData(size_t size); + +class IGenerator +{ +public: + IGenerator() + { + Coordination::ACL acl; + acl.permissions = Coordination::ACL::All; + acl.scheme = "world"; + acl.id = "anyone"; + default_acls.emplace_back(std::move(acl)); + } + virtual void startup(Coordination::ZooKeeper & /*zookeeper*/) {} + virtual Coordination::ZooKeeperRequestPtr generate() = 0; + + virtual ~IGenerator() = default; + + Coordination::ACLs default_acls; + +}; + +class CreateRequestGenerator final : public IGenerator +{ +public: + explicit CreateRequestGenerator( + std::string path_prefix_ = "/create_generator", + std::optional path_length_ = std::nullopt, + std::optional data_size_ = std::nullopt) + : path_prefix(path_prefix_) + , path_length(path_length_) + , data_size(data_size_) + {} + + void startup(Coordination::ZooKeeper & zookeeper) override; + Coordination::ZooKeeperRequestPtr generate() override; + +private: + std::string path_prefix; + std::optional path_length; + std::optional data_size; + std::unordered_set paths_created; +}; + + +class GetRequestGenerator final : public IGenerator +{ +public: + explicit GetRequestGenerator( + std::string path_prefix_ = "/get_generator", + std::optional num_nodes_ = std::nullopt, + std::optional nodes_data_size_ = std::nullopt) + : path_prefix(path_prefix_) + , num_nodes(num_nodes_) + , nodes_data_size(nodes_data_size_) + , rng(randomSeed()) + , distribution(0, num_nodes ? 
*num_nodes - 1 : 0) + {} + + void startup(Coordination::ZooKeeper & zookeeper) override; + Coordination::ZooKeeperRequestPtr generate() override; + +private: + std::string path_prefix; + std::optional num_nodes; + std::optional nodes_data_size; + std::vector paths_to_get; + + pcg64 rng; + std::uniform_int_distribution distribution; +}; + +class ListRequestGenerator final : public IGenerator +{ +public: + explicit ListRequestGenerator( + std::string path_prefix_ = "/list_generator", + std::optional num_nodes_ = std::nullopt, + std::optional paths_length_ = std::nullopt) + : path_prefix(path_prefix_) + , num_nodes(num_nodes_) + , paths_length(paths_length_) + {} + + void startup(Coordination::ZooKeeper & zookeeper) override; + Coordination::ZooKeeperRequestPtr generate() override; + +private: + std::string path_prefix; + std::optional num_nodes; + std::optional paths_length; +}; + +std::unique_ptr getGenerator(const std::string & name); diff --git a/utils/keeper-bench/Runner.cpp b/utils/keeper-bench/Runner.cpp new file mode 100644 index 00000000000..d3f51fb2356 --- /dev/null +++ b/utils/keeper-bench/Runner.cpp @@ -0,0 +1,188 @@ +#include "Runner.h" + +namespace DB +{ +namespace ErrorCodes +{ + extern const int CANNOT_BLOCK_SIGNAL; +} +} + +void Runner::thread(std::vector> & zookeepers) +{ + Coordination::ZooKeeperRequestPtr request; + /// Randomly choosing connection index + pcg64 rng(randomSeed()); + std::uniform_int_distribution distribution(0, zookeepers.size() - 1); + + /// In these threads we do not accept INT signal. + sigset_t sig_set; + if (sigemptyset(&sig_set) + || sigaddset(&sig_set, SIGINT) + || pthread_sigmask(SIG_BLOCK, &sig_set, nullptr)) + { + DB::throwFromErrno("Cannot block signal.", DB::ErrorCodes::CANNOT_BLOCK_SIGNAL); + } + + while (true) + { + bool extracted = false; + + while (!extracted) + { + extracted = queue.tryPop(request, 100); + + if (shutdown + || (max_iterations && requests_executed >= max_iterations)) + { + return; + } + } + + const auto connection_index = distribution(rng); + auto & zk = zookeepers[connection_index]; + + auto promise = std::make_shared>(); + auto future = promise->get_future(); + Coordination::ResponseCallback callback = [promise](const Coordination::Response & response) + { + if (response.error != Coordination::Error::ZOK) + promise->set_exception(std::make_exception_ptr(zkutil::KeeperException(response.error))); + else + promise->set_value(response.bytesSize()); + }; + + Stopwatch watch; + + zk->executeGenericRequest(request, callback); + + try + { + auto response_size = future.get(); + double seconds = watch.elapsedSeconds(); + + std::lock_guard lock(mutex); + + if (request->isReadRequest()) + info->addRead(seconds, 1, request->bytesSize() + response_size); + else + info->addWrite(seconds, 1, request->bytesSize() + response_size); + } + catch (...) + { + if (!continue_on_error) + { + shutdown = true; + throw; + } + std::cerr << DB::getCurrentExceptionMessage(true, true /*check embedded stack trace*/) << std::endl; + } + + ++requests_executed; + } +} + +bool Runner::tryPushRequestInteractively(const Coordination::ZooKeeperRequestPtr & request, DB::InterruptListener & interrupt_listener) +{ + bool inserted = false; + + while (!inserted) + { + inserted = queue.tryPush(request, 100); + + if (shutdown) + { + /// An exception occurred in a worker + return false; + } + + if (max_time > 0 && total_watch.elapsedSeconds() >= max_time) + { + std::cout << "Stopping launch of queries. 
Requested time limit is exhausted.\n"; + return false; + } + + if (interrupt_listener.check()) + { + std::cout << "Stopping launch of queries. SIGINT received." << std::endl; + return false; + } + + if (delay > 0 && delay_watch.elapsedSeconds() > delay) + { + printNumberOfRequestsExecuted(requests_executed); + + std::lock_guard lock(mutex); + report(info, concurrency); + delay_watch.restart(); + } + } + + return true; +} + + +void Runner::runBenchmark() +{ + auto aux_connections = getConnections(); + + std::cerr << "Preparing to run\n"; + generator->startup(*aux_connections[0]); + std::cerr << "Prepared\n"; + try + { + for (size_t i = 0; i < concurrency; ++i) + { + auto connections = getConnections(); + pool.scheduleOrThrowOnError([this, connections]() mutable { thread(connections); }); + } + } + catch (...) + { + pool.wait(); + throw; + } + + DB::InterruptListener interrupt_listener; + delay_watch.restart(); + + /// Push queries into queue + for (size_t i = 0; !max_iterations || i < max_iterations; ++i) + { + if (!tryPushRequestInteractively(generator->generate(), interrupt_listener)) + { + shutdown = true; + break; + } + } + + pool.wait(); + total_watch.stop(); + + printNumberOfRequestsExecuted(requests_executed); + + std::lock_guard lock(mutex); + report(info, concurrency); +} + + +std::vector> Runner::getConnections() +{ + std::vector> zookeepers; + for (const auto & host_string : hosts_strings) + { + Coordination::ZooKeeper::Node node{Poco::Net::SocketAddress{host_string}, false}; + std::vector nodes; + nodes.push_back(node); + zookeepers.emplace_back(std::make_shared( + nodes, + "", /*chroot*/ + "", /*identity type*/ + "", /*identity*/ + Poco::Timespan(0, 30000 * 1000), + Poco::Timespan(0, 1000 * 1000), + Poco::Timespan(0, 10000 * 1000))); + } + + return zookeepers; +} diff --git a/utils/keeper-bench/Runner.h b/utils/keeper-bench/Runner.h new file mode 100644 index 00000000000..bb83e790214 --- /dev/null +++ b/utils/keeper-bench/Runner.h @@ -0,0 +1,79 @@ +#pragma once +#include +#include "Generator.h" +#include +#include +#include +#include +#include +#include +#include + +#include +#include "Stats.h" + +using Ports = std::vector; +using Strings = std::vector; + +class Runner +{ +public: + Runner( + size_t concurrency_, + const std::string & generator_name, + const Strings & hosts_strings_, + double max_time_, + double delay_, + bool continue_on_error_, + size_t max_iterations_) + : concurrency(concurrency_) + , pool(concurrency) + , hosts_strings(hosts_strings_) + , generator(getGenerator(generator_name)) + , max_time(max_time_) + , delay(delay_) + , continue_on_error(continue_on_error_) + , max_iterations(max_iterations_) + , info(std::make_shared()) + , queue(concurrency) + { + } + + void thread(std::vector> & zookeepers); + + void printNumberOfRequestsExecuted(size_t num) + { + std::cerr << "Requests executed: " << num << ".\n"; + } + + bool tryPushRequestInteractively(const Coordination::ZooKeeperRequestPtr & request, DB::InterruptListener & interrupt_listener); + + void runBenchmark(); + + +private: + + size_t concurrency = 1; + + ThreadPool pool; + Strings hosts_strings; + std::unique_ptr generator; + double max_time = 0; + double delay = 1; + bool continue_on_error = false; + std::atomic max_iterations = 0; + std::atomic requests_executed = 0; + std::atomic shutdown = false; + + std::shared_ptr info; + + Stopwatch total_watch; + Stopwatch delay_watch; + + std::mutex mutex; + + using Queue = ConcurrentBoundedQueue; + Queue queue; + + std::vector> getConnections(); +}; diff 
--git a/utils/keeper-bench/Stats.cpp b/utils/keeper-bench/Stats.cpp new file mode 100644 index 00000000000..1f8b02ed09d --- /dev/null +++ b/utils/keeper-bench/Stats.cpp @@ -0,0 +1,67 @@ +#include "Stats.h" +#include + +void report(std::shared_ptr & info, size_t concurrency) +{ + std::cerr << "\n"; + + /// Avoid zeros, nans or exceptions + if (0 == info->read_requests && 0 == info->write_requests) + return; + + double read_seconds = info->read_work_time / concurrency; + double write_seconds = info->write_work_time / concurrency; + + std::cerr << "read requests " << info->read_requests << ", write requests " << info->write_requests << ", "; + if (info->errors) + { + std::cerr << "errors " << info->errors << ", "; + } + if (0 != info->read_requests) + { + std::cerr + << "Read RPS: " << (info->read_requests / read_seconds) << ", " + << "Read MiB/s: " << (info->requests_read_bytes / read_seconds / 1048576); + if (0 != info->write_requests) + std::cerr << ", "; + } + if (0 != info->write_requests) + { + std::cerr + << "Write RPS: " << (info->write_requests / write_seconds) << ", " + << "Write MiB/s: " << (info->requests_write_bytes / write_seconds / 1048576) << ". " + << "\n"; + } + std::cerr << "\n"; + + auto print_percentile = [&](double percent, Stats::Sampler & sampler) + { + std::cerr << percent << "%\t\t"; + std::cerr << sampler.quantileNearest(percent / 100.0) << " sec.\t"; + std::cerr << "\n"; + }; + + if (0 != info->read_requests) + { + std::cerr << "Read sampler:\n"; + for (int percent = 0; percent <= 90; percent += 10) + print_percentile(percent, info->read_sampler); + + print_percentile(95, info->read_sampler); + print_percentile(99, info->read_sampler); + print_percentile(99.9, info->read_sampler); + print_percentile(99.99, info->read_sampler); + } + + if (0 != info->write_requests) + { + std::cerr << "Write sampler:\n"; + for (int percent = 0; percent <= 90; percent += 10) + print_percentile(percent, info->write_sampler); + + print_percentile(95, info->write_sampler); + print_percentile(99, info->write_sampler); + print_percentile(99.9, info->write_sampler); + print_percentile(99.99, info->write_sampler); + } +} diff --git a/utils/keeper-bench/Stats.h b/utils/keeper-bench/Stats.h new file mode 100644 index 00000000000..1b9a31bb734 --- /dev/null +++ b/utils/keeper-bench/Stats.h @@ -0,0 +1,52 @@ +#pragma once + +#include +#include + +#include + +struct Stats +{ + std::atomic read_requests{0}; + std::atomic write_requests{0}; + size_t errors = 0; + size_t requests_write_bytes = 0; + size_t requests_read_bytes = 0; + double read_work_time = 0; + double write_work_time = 0; + + using Sampler = ReservoirSampler; + Sampler read_sampler {1 << 16}; + Sampler write_sampler {1 << 16}; + + void addRead(double seconds, size_t requests_inc, size_t bytes_inc) + { + read_work_time += seconds; + read_requests += requests_inc; + requests_read_bytes += bytes_inc; + read_sampler.insert(seconds); + } + + void addWrite(double seconds, size_t requests_inc, size_t bytes_inc) + { + write_work_time += seconds; + write_requests += requests_inc; + requests_write_bytes += bytes_inc; + write_sampler.insert(seconds); + } + + void clear() + { + read_requests = 0; + write_requests = 0; + read_work_time = 0; + write_work_time = 0; + requests_read_bytes = 0; + requests_write_bytes = 0; + read_sampler.clear(); + write_sampler.clear(); + } +}; + + +void report(std::shared_ptr & info, size_t concurrency); diff --git a/utils/keeper-bench/main.cpp b/utils/keeper-bench/main.cpp new file mode 100644 index 
00000000000..378d7c2f6e4
--- /dev/null
+++ b/utils/keeper-bench/main.cpp
@@ -0,0 +1,61 @@
+#include
+#include
+#include "Runner.h"
+#include "Stats.h"
+#include "Generator.h"
+#include
+#include
+
+using namespace std;
+
+int main(int argc, char *argv[])
+{
+
+ bool print_stacktrace = true;
+
+ try
+ {
+ using boost::program_options::value;
+
+ boost::program_options::options_description desc = createOptionsDescription("Allowed options", getTerminalWidth());
+ desc.add_options()
+ ("help", "produce help message")
+ ("generator", value()->default_value("create_small_data"), "name of the request generator to run")
+ ("concurrency,c", value()->default_value(1), "number of parallel queries")
+ ("delay,d", value()->default_value(1), "delay between intermediate reports in seconds (set 0 to disable reports)")
+ ("iterations,i", value()->default_value(0), "amount of queries to be executed")
+ ("timelimit,t", value()->default_value(0.), "stop launch of queries after specified time limit")
+ ("hosts,h", value()->multitoken(), "")
+ ("continue_on_errors", "continue testing even if a query fails")
+ ("reconnect", "establish new connection for every query")
+ ;
+
+ boost::program_options::variables_map options;
+ boost::program_options::store(boost::program_options::parse_command_line(argc, argv, desc), options);
+ boost::program_options::notify(options);
+
+ if (options.count("help"))
+ {
+ std::cout << "Usage: " << argv[0] << " [options]\n";
+ std::cout << desc << "\n";
+ return 1;
+ }
+
+ Runner runner(options["concurrency"].as(),
+ options["generator"].as(),
+ options["hosts"].as(),
+ options["timelimit"].as(),
+ options["delay"].as(),
+ options.count("continue_on_errors"),
+ options["iterations"].as());
+
+ runner.runBenchmark();
+
+ return 0;
+ }
+ catch (...)
+ {
+ std::cerr << DB::getCurrentExceptionMessage(print_stacktrace, true) << std::endl;
+ return DB::getCurrentExceptionCode();
+ }
+}
diff --git a/utils/keeper-data-dumper/CMakeLists.txt b/utils/keeper-data-dumper/CMakeLists.txt
new file mode 100644
index 00000000000..af16924547f
--- /dev/null
+++ b/utils/keeper-data-dumper/CMakeLists.txt
@@ -0,0 +1,2 @@
+add_executable(keeper-data-dumper main.cpp)
+target_link_libraries(keeper-data-dumper PRIVATE dbms)
diff --git a/utils/keeper-data-dumper/main.cpp b/utils/keeper-data-dumper/main.cpp
new file mode 100644
index 00000000000..11db6fc61bc
--- /dev/null
+++ b/utils/keeper-data-dumper/main.cpp
@@ -0,0 +1,87 @@
+#include
+#include
+#include
+#include
+#include
+#include
+#include // Y_IGNORE
+#include
+#include
+#include
+
+using namespace Coordination;
+using namespace DB;
+
+void dumpMachine(std::shared_ptr machine)
+{
+ auto & storage = machine->getStorage();
+ std::queue keys;
+ keys.push("/");
+
+ while (!keys.empty())
+ {
+ auto key = keys.front();
+ keys.pop();
+ std::cout << key << "\n";
+ auto value = storage.container.getValue(key);
+ std::cout << "\tStat: {version: " << value.stat.version <<
+ ", mtime: " << value.stat.mtime <<
+ ", ephemeralOwner: " << value.stat.ephemeralOwner <<
+ ", czxid: " << value.stat.czxid <<
+ ", mzxid: " << value.stat.mzxid <<
+ ", numChildren: " << value.stat.numChildren <<
+ ", dataLength: " << value.stat.dataLength <<
+ "}" << std::endl;
+ std::cout << "\tData: " << storage.container.getValue(key).data << std::endl;
+
+ for (const auto & child : value.children)
+ {
+ if (key == "/")
+ keys.push(key + child);
+ else
+ keys.push(key + "/" + child);
+ }
+ }
+ std::cout << std::flush;
+}
+
+int main(int argc, char *argv[])
+{
+ if (argc != 3)
+ {
+ std::cerr <<
"usage: " << argv[0] << " snapshotpath logpath" << std::endl; + return 3; + } + else + { + Poco::AutoPtr channel(new Poco::ConsoleChannel(std::cerr)); + Poco::Logger::root().setChannel(channel); + Poco::Logger::root().setLevel("trace"); + } + auto * logger = &Poco::Logger::get("keeper-dumper"); + ResponsesQueue queue; + SnapshotsQueue snapshots_queue{1}; + CoordinationSettingsPtr settings = std::make_shared(); + auto state_machine = std::make_shared(queue, snapshots_queue, argv[1], settings); + state_machine->init(); + size_t last_commited_index = state_machine->last_commit_index(); + + LOG_INFO(logger, "Last committed index: {}", last_commited_index); + + DB::KeeperLogStore changelog(argv[2], 10000000, true); + changelog.init(last_commited_index, 10000000000UL); /// collect all logs + if (changelog.size() == 0) + LOG_INFO(logger, "Changelog empty"); + else + LOG_INFO(logger, "Last changelog entry {}", changelog.next_slot() - 1); + + for (size_t i = last_commited_index + 1; i < changelog.next_slot(); ++i) + { + if (changelog.entry_at(i)->get_val_type() == nuraft::log_val_type::app_log) + state_machine->commit(i, changelog.entry_at(i)->get_buf()); + } + + dumpMachine(state_machine); + + return 0; +} diff --git a/utils/list-licenses/list-licenses.sh b/utils/list-licenses/list-licenses.sh index 8eee3f97253..cdb3e3983ff 100755 --- a/utils/list-licenses/list-licenses.sh +++ b/utils/list-licenses/list-licenses.sh @@ -2,8 +2,9 @@ ROOT_PATH="$(git rev-parse --show-toplevel)" LIBS_PATH="${ROOT_PATH}/contrib" +LC_ALL=C -ls -1 -d ${LIBS_PATH}/*/ | grep -F -v -- '-cmake' | while read LIB; do +ls -1 -d ${LIBS_PATH}/*/ | grep -F -v -- '-cmake' | sort | while read LIB; do LIB_NAME=$(basename $LIB) LIB_LICENSE=$( diff --git a/utils/list-versions/version_date.tsv b/utils/list-versions/version_date.tsv index 3e63f8898c0..0bd63c9bc46 100644 --- a/utils/list-versions/version_date.tsv +++ b/utils/list-versions/version_date.tsv @@ -1,7 +1,27 @@ +v21.4.6.55-stable 2021-04-30 +v21.4.5.46-stable 2021-04-24 +v21.4.4.30-stable 2021-04-16 +v21.4.3.21-stable 2021-04-12 +v21.3.9.83-lts 2021-04-28 +v21.3.8.76-lts 2021-04-24 +v21.3.7.62-stable 2021-04-16 +v21.3.6.55-lts 2021-04-12 +v21.3.5.42-lts 2021-04-07 +v21.3.4.25-lts 2021-03-28 +v21.3.3.14-lts 2021-03-19 +v21.3.2.5-lts 2021-03-12 +v21.2.10.48-stable 2021-04-16 +v21.2.9.41-stable 2021-04-12 +v21.2.8.31-stable 2021-04-07 +v21.2.7.11-stable 2021-03-28 +v21.2.6.1-stable 2021-03-15 v21.2.5.5-stable 2021-03-02 v21.2.4.6-stable 2021-02-20 v21.2.3.15-stable 2021-02-14 v21.2.2.8-stable 2021-02-07 +v21.1.9.41-stable 2021-04-13 +v21.1.8.30-stable 2021-04-07 +v21.1.7.1-stable 2021-03-15 v21.1.6.13-stable 2021-03-02 v21.1.5.4-stable 2021-02-20 v21.1.4.46-stable 2021-02-14 @@ -33,6 +53,10 @@ v20.9.5.5-stable 2020-11-13 v20.9.4.76-stable 2020-10-29 v20.9.3.45-stable 2020-10-09 v20.9.2.20-stable 2020-09-22 +v20.8.18.32-lts 2021-04-16 +v20.8.17.25-lts 2021-04-08 +v20.8.16.20-lts 2021-04-06 +v20.8.15.11-lts 2021-04-01 v20.8.14.4-lts 2021-03-03 v20.8.13.15-lts 2021-02-20 v20.8.12.2-lts 2021-01-16 diff --git a/utils/memcpy-bench/CMakeLists.txt b/utils/memcpy-bench/CMakeLists.txt new file mode 100644 index 00000000000..5353b6fb68e --- /dev/null +++ b/utils/memcpy-bench/CMakeLists.txt @@ -0,0 +1,26 @@ +enable_language(ASM) + +add_executable (memcpy-bench + memcpy-bench.cpp + FastMemcpy.cpp + FastMemcpy_Avx.cpp + memcpy_jart.S + glibc/memcpy-ssse3.S + glibc/memcpy-ssse3-back.S + glibc/memmove-sse2-unaligned-erms.S + glibc/memmove-avx-unaligned-erms.S + 
glibc/memmove-avx512-unaligned-erms.S
+ glibc/memmove-avx512-no-vzeroupper.S
+ )
+
+target_compile_options(memcpy-bench PRIVATE -fno-tree-loop-distribute-patterns)
+
+if (OS_SUNOS)
+ target_compile_options(memcpy-bench PRIVATE "-Wa,--divide")
+endif()
+
+set_source_files_properties(FastMemcpy.cpp PROPERTIES COMPILE_FLAGS "-Wno-old-style-cast")
+set_source_files_properties(FastMemcpy_Avx.cpp PROPERTIES COMPILE_FLAGS "-mavx -Wno-old-style-cast -Wno-cast-qual -Wno-cast-align")
+
+target_link_libraries(memcpy-bench PRIVATE dbms boost::program_options)
+
diff --git a/utils/memcpy-bench/FastMemcpy.cpp b/utils/memcpy-bench/FastMemcpy.cpp
new file mode 100644
index 00000000000..9a50caba2b1
--- /dev/null
+++ b/utils/memcpy-bench/FastMemcpy.cpp
@@ -0,0 +1 @@
+#include "FastMemcpy.h"
diff --git a/utils/memcpy-bench/FastMemcpy.h b/utils/memcpy-bench/FastMemcpy.h
new file mode 100644
index 00000000000..85d09c5f53e
--- /dev/null
+++ b/utils/memcpy-bench/FastMemcpy.h
@@ -0,0 +1,770 @@
+#pragma once
+
+//=====================================================================
+//
+// FastMemcpy.c - skywind3000@163.com, 2015
+//
+// feature:
+// 50% speed up in avg. vs standard memcpy (tested in vc2012/gcc5.1)
+//
+//=====================================================================
+
+#include
+#include
+#include
+
+
+//---------------------------------------------------------------------
+// force inline for compilers
+//---------------------------------------------------------------------
+#ifndef INLINE
+#ifdef __GNUC__
+#if (__GNUC__ > 3) || ((__GNUC__ == 3) && (__GNUC_MINOR__ >= 1))
+ #define INLINE __inline__ __attribute__((always_inline))
+#else
+ #define INLINE __inline__
+#endif
+#elif defined(_MSC_VER)
+ #define INLINE __forceinline
+#elif (defined(__BORLANDC__) || defined(__WATCOMC__))
+ #define INLINE __inline
+#else
+ #define INLINE
+#endif
+#endif
+
+typedef __attribute__((__aligned__(1))) uint16_t uint16_unaligned_t;
+typedef __attribute__((__aligned__(1))) uint32_t uint32_unaligned_t;
+typedef __attribute__((__aligned__(1))) uint64_t uint64_unaligned_t;
+
+//---------------------------------------------------------------------
+// fast copy for different sizes
+//---------------------------------------------------------------------
+static INLINE void memcpy_sse2_16(void * __restrict dst, const void * __restrict src)
+{
+ __m128i m0 = _mm_loadu_si128((reinterpret_cast(src)) + 0);
+ _mm_storeu_si128((reinterpret_cast<__m128i*>(dst)) + 0, m0);
+}
+
+static INLINE void memcpy_sse2_32(void * __restrict dst, const void * __restrict src)
+{
+ __m128i m0 = _mm_loadu_si128((reinterpret_cast(src)) + 0);
+ __m128i m1 = _mm_loadu_si128((reinterpret_cast(src)) + 1);
+ _mm_storeu_si128((reinterpret_cast<__m128i*>(dst)) + 0, m0);
+ _mm_storeu_si128((reinterpret_cast<__m128i*>(dst)) + 1, m1);
+}
+
+static INLINE void memcpy_sse2_64(void * __restrict dst, const void * __restrict src)
+{
+ __m128i m0 = _mm_loadu_si128((reinterpret_cast(src)) + 0);
+ __m128i m1 = _mm_loadu_si128((reinterpret_cast(src)) + 1);
+ __m128i m2 = _mm_loadu_si128((reinterpret_cast(src)) + 2);
+ __m128i m3 = _mm_loadu_si128((reinterpret_cast(src)) + 3);
+ _mm_storeu_si128((reinterpret_cast<__m128i*>(dst)) + 0, m0);
+ _mm_storeu_si128((reinterpret_cast<__m128i*>(dst)) + 1, m1);
+ _mm_storeu_si128((reinterpret_cast<__m128i*>(dst)) + 2, m2);
+ _mm_storeu_si128((reinterpret_cast<__m128i*>(dst)) + 3, m3);
+}
+
+static INLINE void memcpy_sse2_128(void * __restrict dst, const void * __restrict src)
+{
+ __m128i m0 =
_mm_loadu_si128((reinterpret_cast(src)) + 0); + __m128i m1 = _mm_loadu_si128((reinterpret_cast(src)) + 1); + __m128i m2 = _mm_loadu_si128((reinterpret_cast(src)) + 2); + __m128i m3 = _mm_loadu_si128((reinterpret_cast(src)) + 3); + __m128i m4 = _mm_loadu_si128((reinterpret_cast(src)) + 4); + __m128i m5 = _mm_loadu_si128((reinterpret_cast(src)) + 5); + __m128i m6 = _mm_loadu_si128((reinterpret_cast(src)) + 6); + __m128i m7 = _mm_loadu_si128((reinterpret_cast(src)) + 7); + _mm_storeu_si128((reinterpret_cast<__m128i*>(dst)) + 0, m0); + _mm_storeu_si128((reinterpret_cast<__m128i*>(dst)) + 1, m1); + _mm_storeu_si128((reinterpret_cast<__m128i*>(dst)) + 2, m2); + _mm_storeu_si128((reinterpret_cast<__m128i*>(dst)) + 3, m3); + _mm_storeu_si128((reinterpret_cast<__m128i*>(dst)) + 4, m4); + _mm_storeu_si128((reinterpret_cast<__m128i*>(dst)) + 5, m5); + _mm_storeu_si128((reinterpret_cast<__m128i*>(dst)) + 6, m6); + _mm_storeu_si128((reinterpret_cast<__m128i*>(dst)) + 7, m7); +} + + +//--------------------------------------------------------------------- +// tiny memory copy with jump table optimized +//--------------------------------------------------------------------- +/// Attribute is used to avoid an error with undefined behaviour sanitizer +/// ../contrib/FastMemcpy/FastMemcpy.h:91:56: runtime error: applying zero offset to null pointer +/// Found by 01307_orc_output_format.sh, cause - ORCBlockInputFormat and external ORC library. +__attribute__((__no_sanitize__("undefined"))) inline void *memcpy_tiny(void * __restrict dst, const void * __restrict src, size_t size) +{ + unsigned char *dd = ((unsigned char*)dst) + size; + const unsigned char *ss = ((const unsigned char*)src) + size; + + switch (size) + { + case 64: + memcpy_sse2_64(dd - 64, ss - 64); + [[fallthrough]]; + case 0: + break; + + case 65: + memcpy_sse2_64(dd - 65, ss - 65); + [[fallthrough]]; + case 1: + dd[-1] = ss[-1]; + break; + + case 66: + memcpy_sse2_64(dd - 66, ss - 66); + [[fallthrough]]; + case 2: + *((uint16_unaligned_t*)(dd - 2)) = *((const uint16_unaligned_t*)(ss - 2)); + break; + + case 67: + memcpy_sse2_64(dd - 67, ss - 67); + [[fallthrough]]; + case 3: + *((uint16_unaligned_t*)(dd - 3)) = *((const uint16_unaligned_t*)(ss - 3)); + dd[-1] = ss[-1]; + break; + + case 68: + memcpy_sse2_64(dd - 68, ss - 68); + [[fallthrough]]; + case 4: + *((uint32_unaligned_t*)(dd - 4)) = *((const uint32_unaligned_t*)(ss - 4)); + break; + + case 69: + memcpy_sse2_64(dd - 69, ss - 69); + [[fallthrough]]; + case 5: + *((uint32_unaligned_t*)(dd - 5)) = *((const uint32_unaligned_t*)(ss - 5)); + dd[-1] = ss[-1]; + break; + + case 70: + memcpy_sse2_64(dd - 70, ss - 70); + [[fallthrough]]; + case 6: + *((uint32_unaligned_t*)(dd - 6)) = *((const uint32_unaligned_t*)(ss - 6)); + *((uint16_unaligned_t*)(dd - 2)) = *((const uint16_unaligned_t*)(ss - 2)); + break; + + case 71: + memcpy_sse2_64(dd - 71, ss - 71); + [[fallthrough]]; + case 7: + *((uint32_unaligned_t*)(dd - 7)) = *((const uint32_unaligned_t*)(ss - 7)); + *((uint32_unaligned_t*)(dd - 4)) = *((const uint32_unaligned_t*)(ss - 4)); + break; + + case 72: + memcpy_sse2_64(dd - 72, ss - 72); + [[fallthrough]]; + case 8: + *((uint64_unaligned_t*)(dd - 8)) = *((const uint64_unaligned_t*)(ss - 8)); + break; + + case 73: + memcpy_sse2_64(dd - 73, ss - 73); + [[fallthrough]]; + case 9: + *((uint64_unaligned_t*)(dd - 9)) = *((const uint64_unaligned_t*)(ss - 9)); + dd[-1] = ss[-1]; + break; + + case 74: + memcpy_sse2_64(dd - 74, ss - 74); + [[fallthrough]]; + case 10: + *((uint64_unaligned_t*)(dd - 10)) 
= *((const uint64_unaligned_t*)(ss - 10)); + *((uint16_unaligned_t*)(dd - 2)) = *((const uint16_unaligned_t*)(ss - 2)); + break; + + case 75: + memcpy_sse2_64(dd - 75, ss - 75); + [[fallthrough]]; + case 11: + *((uint64_unaligned_t*)(dd - 11)) = *((const uint64_unaligned_t*)(ss - 11)); + *((uint32_unaligned_t*)(dd - 4)) = *((const uint32_unaligned_t*)(ss - 4)); + break; + + case 76: + memcpy_sse2_64(dd - 76, ss - 76); + [[fallthrough]]; + case 12: + *((uint64_unaligned_t*)(dd - 12)) = *((const uint64_unaligned_t*)(ss - 12)); + *((uint32_unaligned_t*)(dd - 4)) = *((const uint32_unaligned_t*)(ss - 4)); + break; + + case 77: + memcpy_sse2_64(dd - 77, ss - 77); + [[fallthrough]]; + case 13: + *((uint64_unaligned_t*)(dd - 13)) = *((const uint64_unaligned_t*)(ss - 13)); + *((uint32_unaligned_t*)(dd - 5)) = *((const uint32_unaligned_t*)(ss - 5)); + dd[-1] = ss[-1]; + break; + + case 78: + memcpy_sse2_64(dd - 78, ss - 78); + [[fallthrough]]; + case 14: + *((uint64_unaligned_t*)(dd - 14)) = *((const uint64_unaligned_t*)(ss - 14)); + *((uint64_unaligned_t*)(dd - 8)) = *((const uint64_unaligned_t*)(ss - 8)); + break; + + case 79: + memcpy_sse2_64(dd - 79, ss - 79); + [[fallthrough]]; + case 15: + *((uint64_unaligned_t*)(dd - 15)) = *((const uint64_unaligned_t*)(ss - 15)); + *((uint64_unaligned_t*)(dd - 8)) = *((const uint64_unaligned_t*)(ss - 8)); + break; + + case 80: + memcpy_sse2_64(dd - 80, ss - 80); + [[fallthrough]]; + case 16: + memcpy_sse2_16(dd - 16, ss - 16); + break; + + case 81: + memcpy_sse2_64(dd - 81, ss - 81); + [[fallthrough]]; + case 17: + memcpy_sse2_16(dd - 17, ss - 17); + dd[-1] = ss[-1]; + break; + + case 82: + memcpy_sse2_64(dd - 82, ss - 82); + [[fallthrough]]; + case 18: + memcpy_sse2_16(dd - 18, ss - 18); + *((uint16_unaligned_t*)(dd - 2)) = *((const uint16_unaligned_t*)(ss - 2)); + break; + + case 83: + memcpy_sse2_64(dd - 83, ss - 83); + [[fallthrough]]; + case 19: + memcpy_sse2_16(dd - 19, ss - 19); + *((uint16_unaligned_t*)(dd - 3)) = *((const uint16_unaligned_t*)(ss - 3)); + dd[-1] = ss[-1]; + break; + + case 84: + memcpy_sse2_64(dd - 84, ss - 84); + [[fallthrough]]; + case 20: + memcpy_sse2_16(dd - 20, ss - 20); + *((uint32_unaligned_t*)(dd - 4)) = *((const uint32_unaligned_t*)(ss - 4)); + break; + + case 85: + memcpy_sse2_64(dd - 85, ss - 85); + [[fallthrough]]; + case 21: + memcpy_sse2_16(dd - 21, ss - 21); + *((uint32_unaligned_t*)(dd - 5)) = *((const uint32_unaligned_t*)(ss - 5)); + dd[-1] = ss[-1]; + break; + + case 86: + memcpy_sse2_64(dd - 86, ss - 86); + [[fallthrough]]; + case 22: + memcpy_sse2_16(dd - 22, ss - 22); + *((uint32_unaligned_t*)(dd - 6)) = *((const uint32_unaligned_t*)(ss - 6)); + *((uint16_unaligned_t*)(dd - 2)) = *((const uint16_unaligned_t*)(ss - 2)); + break; + + case 87: + memcpy_sse2_64(dd - 87, ss - 87); + [[fallthrough]]; + case 23: + memcpy_sse2_16(dd - 23, ss - 23); + *((uint32_unaligned_t*)(dd - 7)) = *((const uint32_unaligned_t*)(ss - 7)); + *((uint32_unaligned_t*)(dd - 4)) = *((const uint32_unaligned_t*)(ss - 4)); + break; + + case 88: + memcpy_sse2_64(dd - 88, ss - 88); + [[fallthrough]]; + case 24: + memcpy_sse2_16(dd - 24, ss - 24); + memcpy_sse2_16(dd - 16, ss - 16); + break; + + case 89: + memcpy_sse2_64(dd - 89, ss - 89); + [[fallthrough]]; + case 25: + memcpy_sse2_16(dd - 25, ss - 25); + memcpy_sse2_16(dd - 16, ss - 16); + break; + + case 90: + memcpy_sse2_64(dd - 90, ss - 90); + [[fallthrough]]; + case 26: + memcpy_sse2_16(dd - 26, ss - 26); + memcpy_sse2_16(dd - 16, ss - 16); + break; + + case 91: + memcpy_sse2_64(dd - 91, ss 
- 91); + [[fallthrough]]; + case 27: + memcpy_sse2_16(dd - 27, ss - 27); + memcpy_sse2_16(dd - 16, ss - 16); + break; + + case 92: + memcpy_sse2_64(dd - 92, ss - 92); + [[fallthrough]]; + case 28: + memcpy_sse2_16(dd - 28, ss - 28); + memcpy_sse2_16(dd - 16, ss - 16); + break; + + case 93: + memcpy_sse2_64(dd - 93, ss - 93); + [[fallthrough]]; + case 29: + memcpy_sse2_16(dd - 29, ss - 29); + memcpy_sse2_16(dd - 16, ss - 16); + break; + + case 94: + memcpy_sse2_64(dd - 94, ss - 94); + [[fallthrough]]; + case 30: + memcpy_sse2_16(dd - 30, ss - 30); + memcpy_sse2_16(dd - 16, ss - 16); + break; + + case 95: + memcpy_sse2_64(dd - 95, ss - 95); + [[fallthrough]]; + case 31: + memcpy_sse2_16(dd - 31, ss - 31); + memcpy_sse2_16(dd - 16, ss - 16); + break; + + case 96: + memcpy_sse2_64(dd - 96, ss - 96); + [[fallthrough]]; + case 32: + memcpy_sse2_32(dd - 32, ss - 32); + break; + + case 97: + memcpy_sse2_64(dd - 97, ss - 97); + [[fallthrough]]; + case 33: + memcpy_sse2_32(dd - 33, ss - 33); + dd[-1] = ss[-1]; + break; + + case 98: + memcpy_sse2_64(dd - 98, ss - 98); + [[fallthrough]]; + case 34: + memcpy_sse2_32(dd - 34, ss - 34); + *((uint16_unaligned_t*)(dd - 2)) = *((const uint16_unaligned_t*)(ss - 2)); + break; + + case 99: + memcpy_sse2_64(dd - 99, ss - 99); + [[fallthrough]]; + case 35: + memcpy_sse2_32(dd - 35, ss - 35); + *((uint16_unaligned_t*)(dd - 3)) = *((const uint16_unaligned_t*)(ss - 3)); + dd[-1] = ss[-1]; + break; + + case 100: + memcpy_sse2_64(dd - 100, ss - 100); + [[fallthrough]]; + case 36: + memcpy_sse2_32(dd - 36, ss - 36); + *((uint32_unaligned_t*)(dd - 4)) = *((const uint32_unaligned_t*)(ss - 4)); + break; + + case 101: + memcpy_sse2_64(dd - 101, ss - 101); + [[fallthrough]]; + case 37: + memcpy_sse2_32(dd - 37, ss - 37); + *((uint32_unaligned_t*)(dd - 5)) = *((const uint32_unaligned_t*)(ss - 5)); + dd[-1] = ss[-1]; + break; + + case 102: + memcpy_sse2_64(dd - 102, ss - 102); + [[fallthrough]]; + case 38: + memcpy_sse2_32(dd - 38, ss - 38); + *((uint32_unaligned_t*)(dd - 6)) = *((const uint32_unaligned_t*)(ss - 6)); + *((uint16_unaligned_t*)(dd - 2)) = *((const uint16_unaligned_t*)(ss - 2)); + break; + + case 103: + memcpy_sse2_64(dd - 103, ss - 103); + [[fallthrough]]; + case 39: + memcpy_sse2_32(dd - 39, ss - 39); + *((uint32_unaligned_t*)(dd - 7)) = *((const uint32_unaligned_t*)(ss - 7)); + *((uint32_unaligned_t*)(dd - 4)) = *((const uint32_unaligned_t*)(ss - 4)); + break; + + case 104: + memcpy_sse2_64(dd - 104, ss - 104); + [[fallthrough]]; + case 40: + memcpy_sse2_32(dd - 40, ss - 40); + *((uint64_unaligned_t*)(dd - 8)) = *((const uint64_unaligned_t*)(ss - 8)); + break; + + case 105: + memcpy_sse2_64(dd - 105, ss - 105); + [[fallthrough]]; + case 41: + memcpy_sse2_32(dd - 41, ss - 41); + *((uint64_unaligned_t*)(dd - 9)) = *((const uint64_unaligned_t*)(ss - 9)); + dd[-1] = ss[-1]; + break; + + case 106: + memcpy_sse2_64(dd - 106, ss - 106); + [[fallthrough]]; + case 42: + memcpy_sse2_32(dd - 42, ss - 42); + *((uint64_unaligned_t*)(dd - 10)) = *((const uint64_unaligned_t*)(ss - 10)); + *((uint16_unaligned_t*)(dd - 2)) = *((const uint16_unaligned_t*)(ss - 2)); + break; + + case 107: + memcpy_sse2_64(dd - 107, ss - 107); + [[fallthrough]]; + case 43: + memcpy_sse2_32(dd - 43, ss - 43); + *((uint64_unaligned_t*)(dd - 11)) = *((const uint64_unaligned_t*)(ss - 11)); + *((uint32_unaligned_t*)(dd - 4)) = *((const uint32_unaligned_t*)(ss - 4)); + break; + + case 108: + memcpy_sse2_64(dd - 108, ss - 108); + [[fallthrough]]; + case 44: + memcpy_sse2_32(dd - 44, ss - 44); + 
*((uint64_unaligned_t*)(dd - 12)) = *((const uint64_unaligned_t*)(ss - 12)); + *((uint32_unaligned_t*)(dd - 4)) = *((const uint32_unaligned_t*)(ss - 4)); + break; + + case 109: + memcpy_sse2_64(dd - 109, ss - 109); + [[fallthrough]]; + case 45: + memcpy_sse2_32(dd - 45, ss - 45); + *((uint64_unaligned_t*)(dd - 13)) = *((const uint64_unaligned_t*)(ss - 13)); + *((uint32_unaligned_t*)(dd - 5)) = *((const uint32_unaligned_t*)(ss - 5)); + dd[-1] = ss[-1]; + break; + + case 110: + memcpy_sse2_64(dd - 110, ss - 110); + [[fallthrough]]; + case 46: + memcpy_sse2_32(dd - 46, ss - 46); + *((uint64_unaligned_t*)(dd - 14)) = *((const uint64_unaligned_t*)(ss - 14)); + *((uint64_unaligned_t*)(dd - 8)) = *((const uint64_unaligned_t*)(ss - 8)); + break; + + case 111: + memcpy_sse2_64(dd - 111, ss - 111); + [[fallthrough]]; + case 47: + memcpy_sse2_32(dd - 47, ss - 47); + *((uint64_unaligned_t*)(dd - 15)) = *((const uint64_unaligned_t*)(ss - 15)); + *((uint64_unaligned_t*)(dd - 8)) = *((const uint64_unaligned_t*)(ss - 8)); + break; + + case 112: + memcpy_sse2_64(dd - 112, ss - 112); + [[fallthrough]]; + case 48: + memcpy_sse2_32(dd - 48, ss - 48); + memcpy_sse2_16(dd - 16, ss - 16); + break; + + case 113: + memcpy_sse2_64(dd - 113, ss - 113); + [[fallthrough]]; + case 49: + memcpy_sse2_32(dd - 49, ss - 49); + memcpy_sse2_16(dd - 17, ss - 17); + dd[-1] = ss[-1]; + break; + + case 114: + memcpy_sse2_64(dd - 114, ss - 114); + [[fallthrough]]; + case 50: + memcpy_sse2_32(dd - 50, ss - 50); + memcpy_sse2_16(dd - 18, ss - 18); + *((uint16_unaligned_t*)(dd - 2)) = *((const uint16_unaligned_t*)(ss - 2)); + break; + + case 115: + memcpy_sse2_64(dd - 115, ss - 115); + [[fallthrough]]; + case 51: + memcpy_sse2_32(dd - 51, ss - 51); + memcpy_sse2_16(dd - 19, ss - 19); + *((uint16_unaligned_t*)(dd - 3)) = *((const uint16_unaligned_t*)(ss - 3)); + dd[-1] = ss[-1]; + break; + + case 116: + memcpy_sse2_64(dd - 116, ss - 116); + [[fallthrough]]; + case 52: + memcpy_sse2_32(dd - 52, ss - 52); + memcpy_sse2_16(dd - 20, ss - 20); + *((uint32_unaligned_t*)(dd - 4)) = *((const uint32_unaligned_t*)(ss - 4)); + break; + + case 117: + memcpy_sse2_64(dd - 117, ss - 117); + [[fallthrough]]; + case 53: + memcpy_sse2_32(dd - 53, ss - 53); + memcpy_sse2_16(dd - 21, ss - 21); + *((uint32_unaligned_t*)(dd - 5)) = *((const uint32_unaligned_t*)(ss - 5)); + dd[-1] = ss[-1]; + break; + + case 118: + memcpy_sse2_64(dd - 118, ss - 118); + [[fallthrough]]; + case 54: + memcpy_sse2_32(dd - 54, ss - 54); + memcpy_sse2_16(dd - 22, ss - 22); + *((uint32_unaligned_t*)(dd - 6)) = *((const uint32_unaligned_t*)(ss - 6)); + *((uint16_unaligned_t*)(dd - 2)) = *((const uint16_unaligned_t*)(ss - 2)); + break; + + case 119: + memcpy_sse2_64(dd - 119, ss - 119); + [[fallthrough]]; + case 55: + memcpy_sse2_32(dd - 55, ss - 55); + memcpy_sse2_16(dd - 23, ss - 23); + *((uint32_unaligned_t*)(dd - 7)) = *((const uint32_unaligned_t*)(ss - 7)); + *((uint32_unaligned_t*)(dd - 4)) = *((const uint32_unaligned_t*)(ss - 4)); + break; + + case 120: + memcpy_sse2_64(dd - 120, ss - 120); + [[fallthrough]]; + case 56: + memcpy_sse2_32(dd - 56, ss - 56); + memcpy_sse2_16(dd - 24, ss - 24); + memcpy_sse2_16(dd - 16, ss - 16); + break; + + case 121: + memcpy_sse2_64(dd - 121, ss - 121); + [[fallthrough]]; + case 57: + memcpy_sse2_32(dd - 57, ss - 57); + memcpy_sse2_16(dd - 25, ss - 25); + memcpy_sse2_16(dd - 16, ss - 16); + break; + + case 122: + memcpy_sse2_64(dd - 122, ss - 122); + [[fallthrough]]; + case 58: + memcpy_sse2_32(dd - 58, ss - 58); + memcpy_sse2_16(dd - 26, ss - 
26); + memcpy_sse2_16(dd - 16, ss - 16); + break; + + case 123: + memcpy_sse2_64(dd - 123, ss - 123); + [[fallthrough]]; + case 59: + memcpy_sse2_32(dd - 59, ss - 59); + memcpy_sse2_16(dd - 27, ss - 27); + memcpy_sse2_16(dd - 16, ss - 16); + break; + + case 124: + memcpy_sse2_64(dd - 124, ss - 124); + [[fallthrough]]; + case 60: + memcpy_sse2_32(dd - 60, ss - 60); + memcpy_sse2_16(dd - 28, ss - 28); + memcpy_sse2_16(dd - 16, ss - 16); + break; + + case 125: + memcpy_sse2_64(dd - 125, ss - 125); + [[fallthrough]]; + case 61: + memcpy_sse2_32(dd - 61, ss - 61); + memcpy_sse2_16(dd - 29, ss - 29); + memcpy_sse2_16(dd - 16, ss - 16); + break; + + case 126: + memcpy_sse2_64(dd - 126, ss - 126); + [[fallthrough]]; + case 62: + memcpy_sse2_32(dd - 62, ss - 62); + memcpy_sse2_16(dd - 30, ss - 30); + memcpy_sse2_16(dd - 16, ss - 16); + break; + + case 127: + memcpy_sse2_64(dd - 127, ss - 127); + [[fallthrough]]; + case 63: + memcpy_sse2_32(dd - 63, ss - 63); + memcpy_sse2_16(dd - 31, ss - 31); + memcpy_sse2_16(dd - 16, ss - 16); + break; + + case 128: + memcpy_sse2_128(dd - 128, ss - 128); + break; + } + + return dst; +} + + +//--------------------------------------------------------------------- +// main routine +//--------------------------------------------------------------------- +void* memcpy_fast_sse(void * __restrict destination, const void * __restrict source, size_t size) +{ + unsigned char *dst = (unsigned char*)destination; + const unsigned char *src = (const unsigned char*)source; + static size_t cachesize = 0x200000; // L2-cache size + size_t padding; + + // small memory copy + if (size <= 128) +{ + return memcpy_tiny(dst, src, size); + } + + // align destination to 16 bytes boundary + padding = (16 - (((size_t)dst) & 15)) & 15; + + if (padding > 0) + { + __m128i head = _mm_loadu_si128(reinterpret_cast(src)); + _mm_storeu_si128(reinterpret_cast<__m128i*>(dst), head); + dst += padding; + src += padding; + size -= padding; + } + + // medium size copy + if (size <= cachesize) + { + __m128i c0, c1, c2, c3, c4, c5, c6, c7; + + for (; size >= 128; size -= 128) + { + c0 = _mm_loadu_si128((reinterpret_cast(src)) + 0); + c1 = _mm_loadu_si128((reinterpret_cast(src)) + 1); + c2 = _mm_loadu_si128((reinterpret_cast(src)) + 2); + c3 = _mm_loadu_si128((reinterpret_cast(src)) + 3); + c4 = _mm_loadu_si128((reinterpret_cast(src)) + 4); + c5 = _mm_loadu_si128((reinterpret_cast(src)) + 5); + c6 = _mm_loadu_si128((reinterpret_cast(src)) + 6); + c7 = _mm_loadu_si128((reinterpret_cast(src)) + 7); + _mm_prefetch((const char*)(src + 256), _MM_HINT_NTA); + src += 128; + _mm_store_si128(((reinterpret_cast<__m128i*>(dst)) + 0), c0); + _mm_store_si128(((reinterpret_cast<__m128i*>(dst)) + 1), c1); + _mm_store_si128(((reinterpret_cast<__m128i*>(dst)) + 2), c2); + _mm_store_si128(((reinterpret_cast<__m128i*>(dst)) + 3), c3); + _mm_store_si128(((reinterpret_cast<__m128i*>(dst)) + 4), c4); + _mm_store_si128(((reinterpret_cast<__m128i*>(dst)) + 5), c5); + _mm_store_si128(((reinterpret_cast<__m128i*>(dst)) + 6), c6); + _mm_store_si128(((reinterpret_cast<__m128i*>(dst)) + 7), c7); + dst += 128; + } + } + else + { // big memory copy + __m128i c0, c1, c2, c3, c4, c5, c6, c7; + + _mm_prefetch((const char*)(src), _MM_HINT_NTA); + + if ((((size_t)src) & 15) == 0) + { // source aligned + for (; size >= 128; size -= 128) + { + c0 = _mm_load_si128((reinterpret_cast(src)) + 0); + c1 = _mm_load_si128((reinterpret_cast(src)) + 1); + c2 = _mm_load_si128((reinterpret_cast(src)) + 2); + c3 = _mm_load_si128((reinterpret_cast(src)) + 
diff --git a/utils/memcpy-bench/FastMemcpy_Avx.cpp b/utils/memcpy-bench/FastMemcpy_Avx.cpp
new file mode 100644
index 00000000000..8cef0f89507
--- /dev/null
+++ b/utils/memcpy-bench/FastMemcpy_Avx.cpp
@@ -0,0 +1 @@
+#include "FastMemcpy_Avx.h"
diff --git a/utils/memcpy-bench/FastMemcpy_Avx.h b/utils/memcpy-bench/FastMemcpy_Avx.h
new file mode 100644
index 00000000000..ee7d4e19536
--- /dev/null
+++ b/utils/memcpy-bench/FastMemcpy_Avx.h
@@ -0,0 +1,496 @@
+#pragma once
+
+//=====================================================================
+//
+// FastMemcpy.c - skywind3000@163.com, 2015
+//
+// feature:
+// 50% speed up in avg. vs standard memcpy (tested in vc2012/gcc5.1)
+//
+//=====================================================================
+
+#include <stddef.h>
+#include <stdint.h>
+#include <immintrin.h>
+
+
+//---------------------------------------------------------------------
+// force inline for compilers
+//---------------------------------------------------------------------
+#ifndef INLINE
+#ifdef __GNUC__
+#if (__GNUC__ > 3) || ((__GNUC__ == 3) && (__GNUC_MINOR__ >= 1))
+    #define INLINE __inline__ __attribute__((always_inline))
+#else
+    #define INLINE __inline__
+#endif
+#elif defined(_MSC_VER)
+    #define INLINE __forceinline
+#elif (defined(__BORLANDC__) || defined(__WATCOMC__))
+    #define INLINE __inline
+#else
+    #define INLINE
+#endif
+#endif
+
+
+//---------------------------------------------------------------------
+// fast copy for different sizes
+//---------------------------------------------------------------------
+static INLINE void memcpy_avx_16(void * __restrict dst, const void * __restrict src)
+{
+#if 1
+    __m128i m0 = _mm_loadu_si128(((const __m128i*)src) + 0);
+    _mm_storeu_si128(((__m128i*)dst) + 0, m0);
+#else
+    *((uint64_t*)((char*)dst + 0)) = *((uint64_t*)((const char*)src + 0));
+    *((uint64_t*)((char*)dst + 8)) = *((uint64_t*)((const char*)src + 8));
+#endif
+}
+
+static INLINE void memcpy_avx_32(void *dst, const void *src)
+{
+    __m256i m0 = _mm256_loadu_si256((reinterpret_cast<const __m256i*>(src)) + 0);
+    _mm256_storeu_si256((reinterpret_cast<__m256i*>(dst)) + 0, m0);
+}
+
+static INLINE void memcpy_avx_64(void *dst, const void *src)
+{
+    __m256i m0 = _mm256_loadu_si256((reinterpret_cast<const __m256i*>(src)) + 0);
+    __m256i m1 = _mm256_loadu_si256((reinterpret_cast<const __m256i*>(src)) + 1);
+    _mm256_storeu_si256((reinterpret_cast<__m256i*>(dst)) + 0, m0);
+    _mm256_storeu_si256((reinterpret_cast<__m256i*>(dst)) + 1, m1);
+}
+
+static INLINE void memcpy_avx_128(void *dst, const void *src)
+{
+    __m256i m0 = _mm256_loadu_si256((reinterpret_cast<const __m256i*>(src)) + 0);
+    __m256i m1 = _mm256_loadu_si256((reinterpret_cast<const __m256i*>(src)) + 1);
+    __m256i m2 = _mm256_loadu_si256((reinterpret_cast<const __m256i*>(src)) + 2);
+    __m256i m3 = _mm256_loadu_si256((reinterpret_cast<const __m256i*>(src)) + 3);
+    _mm256_storeu_si256((reinterpret_cast<__m256i*>(dst)) + 0, m0);
+    _mm256_storeu_si256((reinterpret_cast<__m256i*>(dst)) + 1, m1);
+    _mm256_storeu_si256((reinterpret_cast<__m256i*>(dst)) + 2, m2);
+    _mm256_storeu_si256((reinterpret_cast<__m256i*>(dst)) + 3, m3);
+}
+
+static INLINE void memcpy_avx_256(void *dst, const void *src)
+{
+    __m256i m0 = _mm256_loadu_si256((reinterpret_cast<const __m256i*>(src)) + 0);
+    __m256i m1 = _mm256_loadu_si256((reinterpret_cast<const __m256i*>(src)) + 1);
+    __m256i m2 = _mm256_loadu_si256((reinterpret_cast<const __m256i*>(src)) + 2);
+    __m256i m3 = _mm256_loadu_si256((reinterpret_cast<const __m256i*>(src)) + 3);
+    __m256i m4 = _mm256_loadu_si256((reinterpret_cast<const __m256i*>(src)) + 4);
+    __m256i m5 = _mm256_loadu_si256((reinterpret_cast<const __m256i*>(src)) + 5);
+    __m256i m6 = _mm256_loadu_si256((reinterpret_cast<const __m256i*>(src)) + 6);
+    __m256i m7 = _mm256_loadu_si256((reinterpret_cast<const __m256i*>(src)) + 7);
+    _mm256_storeu_si256((reinterpret_cast<__m256i*>(dst)) + 0, m0);
+    _mm256_storeu_si256((reinterpret_cast<__m256i*>(dst)) + 1, m1);
+    _mm256_storeu_si256((reinterpret_cast<__m256i*>(dst)) + 2, m2);
+    _mm256_storeu_si256((reinterpret_cast<__m256i*>(dst)) + 3, m3);
+    _mm256_storeu_si256((reinterpret_cast<__m256i*>(dst)) + 4, m4);
+    _mm256_storeu_si256((reinterpret_cast<__m256i*>(dst)) + 5, m5);
+    _mm256_storeu_si256((reinterpret_cast<__m256i*>(dst)) + 6, m6);
+    _mm256_storeu_si256((reinterpret_cast<__m256i*>(dst)) + 7, m7);
+}
+
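The jump table defined next enumerates every size from 0 to 256 so that each small copy runs a fixed, branch-predictable instruction sequence. Two ideas make the table work: `dd`/`ss` point one past the end of the buffers, and copied blocks are allowed to overlap, so no byte loop is ever needed. A rough scalar equivalent for sizes up to 32 bytes, assuming non-overlapping buffers (the function name is illustrative, not part of the patch):

```cpp
#include <cstddef>
#include <cstring>

/// Sketch of the overlapping-blocks idea: one block flush with the start and
/// one flush with the end cover any size in [16, 32]; the middle bytes may be
/// written twice, which is cheaper than a variable-length loop.
static void copy_upto_32(unsigned char * dst, const unsigned char * src, size_t size)
{
    if (size >= 16)
    {
        std::memcpy(dst, src, 16);                          /// head block
        std::memcpy(dst + size - 16, src + size - 16, 16);  /// tail block, may overlap head
    }
    else
    {
        for (size_t i = 0; i < size; ++i)  /// sizes < 16 kept simple here;
            dst[i] = src[i];               /// the real table splits them further
    }
}
```

In the real table, `case N` for N > 128 first copies a 128-byte block at the front and then falls through to `case N - 128`, which finishes the copy relative to the end pointers.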
+//---------------------------------------------------------------------
+// tiny memory copy with jump table optimized
+//---------------------------------------------------------------------
+static INLINE void *memcpy_tiny_avx(void * __restrict dst, const void * __restrict src, size_t size)
+{
+    unsigned char *dd = reinterpret_cast<unsigned char *>(dst) + size;
+    const unsigned char *ss = reinterpret_cast<const unsigned char *>(src) + size;
+
+    switch (size)
+    {
+    case 128: memcpy_avx_128(dd - 128, ss - 128); [[fallthrough]];
+    case 0: break;
+    case 129: memcpy_avx_128(dd - 129, ss - 129); [[fallthrough]];
+    case 1: dd[-1] = ss[-1]; break;
+    case 130: memcpy_avx_128(dd - 130, ss - 130); [[fallthrough]];
+    case 2: *((uint16_t*)(dd - 2)) = *((uint16_t*)(ss - 2)); break;
+    case 131: memcpy_avx_128(dd - 131, ss - 131); [[fallthrough]];
+    case 3: *((uint16_t*)(dd - 3)) = *((uint16_t*)(ss - 3)); dd[-1] = ss[-1]; break;
+    case 132: memcpy_avx_128(dd - 132, ss - 132); [[fallthrough]];
+    case 4: *((uint32_t*)(dd - 4)) = *((uint32_t*)(ss - 4)); break;
+    case 133: memcpy_avx_128(dd - 133, ss - 133); [[fallthrough]];
+    case 5: *((uint32_t*)(dd - 5)) = *((uint32_t*)(ss - 5)); dd[-1] = ss[-1]; break;
+    case 134: memcpy_avx_128(dd - 134, ss - 134); [[fallthrough]];
+    case 6: *((uint32_t*)(dd - 6)) = *((uint32_t*)(ss - 6)); *((uint16_t*)(dd - 2)) = *((uint16_t*)(ss - 2)); break;
+    case 135: memcpy_avx_128(dd - 135, ss - 135); [[fallthrough]];
+    case 7: *((uint32_t*)(dd - 7)) = *((uint32_t*)(ss - 7)); *((uint32_t*)(dd - 4)) = *((uint32_t*)(ss - 4)); break;
+    case 136: memcpy_avx_128(dd - 136, ss - 136); [[fallthrough]];
+    case 8: *((uint64_t*)(dd - 8)) = *((uint64_t*)(ss - 8)); break;
+    case 137: memcpy_avx_128(dd - 137, ss - 137); [[fallthrough]];
+    case 9: *((uint64_t*)(dd - 9)) = *((uint64_t*)(ss - 9)); dd[-1] = ss[-1]; break;
+    case 138: memcpy_avx_128(dd - 138, ss - 138); [[fallthrough]];
+    case 10: *((uint64_t*)(dd - 10)) = *((uint64_t*)(ss - 10)); *((uint16_t*)(dd - 2)) = *((uint16_t*)(ss - 2)); break;
+    case 139: memcpy_avx_128(dd - 139, ss - 139); [[fallthrough]];
+    case 11: *((uint64_t*)(dd - 11)) = *((uint64_t*)(ss - 11)); *((uint32_t*)(dd - 4)) = *((uint32_t*)(ss - 4)); break;
+    case 140: memcpy_avx_128(dd - 140, ss - 140); [[fallthrough]];
+    case 12: *((uint64_t*)(dd - 12)) = *((uint64_t*)(ss - 12)); *((uint32_t*)(dd - 4)) = *((uint32_t*)(ss - 4)); break;
+    case 141: memcpy_avx_128(dd - 141, ss - 141); [[fallthrough]];
+    case 13: *((uint64_t*)(dd - 13)) = *((uint64_t*)(ss - 13)); *((uint64_t*)(dd - 8)) = *((uint64_t*)(ss - 8)); break;
+    case 142: memcpy_avx_128(dd - 142, ss - 142); [[fallthrough]];
+    case 14: *((uint64_t*)(dd - 14)) = *((uint64_t*)(ss - 14)); *((uint64_t*)(dd - 8)) = *((uint64_t*)(ss - 8)); break;
+    case 143: memcpy_avx_128(dd - 143, ss - 143); [[fallthrough]];
+    case 15: *((uint64_t*)(dd - 15)) = *((uint64_t*)(ss - 15)); *((uint64_t*)(dd - 8)) = *((uint64_t*)(ss - 8)); break;
+    case 144: memcpy_avx_128(dd - 144, ss - 144); [[fallthrough]];
+    case 16: memcpy_avx_16(dd - 16, ss - 16); break;
+    case 145: memcpy_avx_128(dd - 145, ss - 145); [[fallthrough]];
+    case 17: memcpy_avx_16(dd - 17, ss - 17); dd[-1] = ss[-1]; break;
+    case 146: memcpy_avx_128(dd - 146, ss - 146); [[fallthrough]];
+    case 18: memcpy_avx_16(dd - 18, ss - 18); *((uint16_t*)(dd - 2)) = *((uint16_t*)(ss - 2)); break;
+    case 147: memcpy_avx_128(dd - 147, ss - 147); [[fallthrough]];
+    case 19: memcpy_avx_16(dd - 19, ss - 19); *((uint32_t*)(dd - 4)) = *((uint32_t*)(ss - 4)); break;
+    case 148: memcpy_avx_128(dd - 148, ss - 148); [[fallthrough]];
case 20: memcpy_avx_16(dd - 20, ss - 20); *((uint32_t*)(dd - 4)) = *((uint32_t*)(ss - 4)); break; + case 149: memcpy_avx_128(dd - 149, ss - 149); [[fallthrough]]; + case 21: memcpy_avx_16(dd - 21, ss - 21); *((uint64_t*)(dd - 8)) = *((uint64_t*)(ss - 8)); break; + case 150: memcpy_avx_128(dd - 150, ss - 150); [[fallthrough]]; + case 22: memcpy_avx_16(dd - 22, ss - 22); *((uint64_t*)(dd - 8)) = *((uint64_t*)(ss - 8)); break; + case 151: memcpy_avx_128(dd - 151, ss - 151); [[fallthrough]]; + case 23: memcpy_avx_16(dd - 23, ss - 23); *((uint64_t*)(dd - 8)) = *((uint64_t*)(ss - 8)); break; + case 152: memcpy_avx_128(dd - 152, ss - 152); [[fallthrough]]; + case 24: memcpy_avx_16(dd - 24, ss - 24); *((uint64_t*)(dd - 8)) = *((uint64_t*)(ss - 8)); break; + case 153: memcpy_avx_128(dd - 153, ss - 153); [[fallthrough]]; + case 25: memcpy_avx_16(dd - 25, ss - 25); memcpy_avx_16(dd - 16, ss - 16); break; + case 154: memcpy_avx_128(dd - 154, ss - 154); [[fallthrough]]; + case 26: memcpy_avx_16(dd - 26, ss - 26); memcpy_avx_16(dd - 16, ss - 16); break; + case 155: memcpy_avx_128(dd - 155, ss - 155); [[fallthrough]]; + case 27: memcpy_avx_16(dd - 27, ss - 27); memcpy_avx_16(dd - 16, ss - 16); break; + case 156: memcpy_avx_128(dd - 156, ss - 156); [[fallthrough]]; + case 28: memcpy_avx_16(dd - 28, ss - 28); memcpy_avx_16(dd - 16, ss - 16); break; + case 157: memcpy_avx_128(dd - 157, ss - 157); [[fallthrough]]; + case 29: memcpy_avx_16(dd - 29, ss - 29); memcpy_avx_16(dd - 16, ss - 16); break; + case 158: memcpy_avx_128(dd - 158, ss - 158); [[fallthrough]]; + case 30: memcpy_avx_16(dd - 30, ss - 30); memcpy_avx_16(dd - 16, ss - 16); break; + case 159: memcpy_avx_128(dd - 159, ss - 159); [[fallthrough]]; + case 31: memcpy_avx_16(dd - 31, ss - 31); memcpy_avx_16(dd - 16, ss - 16); break; + case 160: memcpy_avx_128(dd - 160, ss - 160); [[fallthrough]]; + case 32: memcpy_avx_32(dd - 32, ss - 32); break; + case 161: memcpy_avx_128(dd - 161, ss - 161); [[fallthrough]]; + case 33: memcpy_avx_32(dd - 33, ss - 33); dd[-1] = ss[-1]; break; + case 162: memcpy_avx_128(dd - 162, ss - 162); [[fallthrough]]; + case 34: memcpy_avx_32(dd - 34, ss - 34); *((uint16_t*)(dd - 2)) = *((uint16_t*)(ss - 2)); break; + case 163: memcpy_avx_128(dd - 163, ss - 163); [[fallthrough]]; + case 35: memcpy_avx_32(dd - 35, ss - 35); *((uint32_t*)(dd - 4)) = *((uint32_t*)(ss - 4)); break; + case 164: memcpy_avx_128(dd - 164, ss - 164); [[fallthrough]]; + case 36: memcpy_avx_32(dd - 36, ss - 36); *((uint32_t*)(dd - 4)) = *((uint32_t*)(ss - 4)); break; + case 165: memcpy_avx_128(dd - 165, ss - 165); [[fallthrough]]; + case 37: memcpy_avx_32(dd - 37, ss - 37); *((uint64_t*)(dd - 8)) = *((uint64_t*)(ss - 8)); break; + case 166: memcpy_avx_128(dd - 166, ss - 166); [[fallthrough]]; + case 38: memcpy_avx_32(dd - 38, ss - 38); *((uint64_t*)(dd - 8)) = *((uint64_t*)(ss - 8)); break; + case 167: memcpy_avx_128(dd - 167, ss - 167); [[fallthrough]]; + case 39: memcpy_avx_32(dd - 39, ss - 39); *((uint64_t*)(dd - 8)) = *((uint64_t*)(ss - 8)); break; + case 168: memcpy_avx_128(dd - 168, ss - 168); [[fallthrough]]; + case 40: memcpy_avx_32(dd - 40, ss - 40); *((uint64_t*)(dd - 8)) = *((uint64_t*)(ss - 8)); break; + case 169: memcpy_avx_128(dd - 169, ss - 169); [[fallthrough]]; + case 41: memcpy_avx_32(dd - 41, ss - 41); memcpy_avx_16(dd - 16, ss - 16); break; + case 170: memcpy_avx_128(dd - 170, ss - 170); [[fallthrough]]; + case 42: memcpy_avx_32(dd - 42, ss - 42); memcpy_avx_16(dd - 16, ss - 16); break; + case 171: memcpy_avx_128(dd - 171, ss - 171); 
[[fallthrough]]; + case 43: memcpy_avx_32(dd - 43, ss - 43); memcpy_avx_16(dd - 16, ss - 16); break; + case 172: memcpy_avx_128(dd - 172, ss - 172); [[fallthrough]]; + case 44: memcpy_avx_32(dd - 44, ss - 44); memcpy_avx_16(dd - 16, ss - 16); break; + case 173: memcpy_avx_128(dd - 173, ss - 173); [[fallthrough]]; + case 45: memcpy_avx_32(dd - 45, ss - 45); memcpy_avx_16(dd - 16, ss - 16); break; + case 174: memcpy_avx_128(dd - 174, ss - 174); [[fallthrough]]; + case 46: memcpy_avx_32(dd - 46, ss - 46); memcpy_avx_16(dd - 16, ss - 16); break; + case 175: memcpy_avx_128(dd - 175, ss - 175); [[fallthrough]]; + case 47: memcpy_avx_32(dd - 47, ss - 47); memcpy_avx_16(dd - 16, ss - 16); break; + case 176: memcpy_avx_128(dd - 176, ss - 176); [[fallthrough]]; + case 48: memcpy_avx_32(dd - 48, ss - 48); memcpy_avx_16(dd - 16, ss - 16); break; + case 177: memcpy_avx_128(dd - 177, ss - 177); [[fallthrough]]; + case 49: memcpy_avx_32(dd - 49, ss - 49); memcpy_avx_32(dd - 32, ss - 32); break; + case 178: memcpy_avx_128(dd - 178, ss - 178); [[fallthrough]]; + case 50: memcpy_avx_32(dd - 50, ss - 50); memcpy_avx_32(dd - 32, ss - 32); break; + case 179: memcpy_avx_128(dd - 179, ss - 179); [[fallthrough]]; + case 51: memcpy_avx_32(dd - 51, ss - 51); memcpy_avx_32(dd - 32, ss - 32); break; + case 180: memcpy_avx_128(dd - 180, ss - 180); [[fallthrough]]; + case 52: memcpy_avx_32(dd - 52, ss - 52); memcpy_avx_32(dd - 32, ss - 32); break; + case 181: memcpy_avx_128(dd - 181, ss - 181); [[fallthrough]]; + case 53: memcpy_avx_32(dd - 53, ss - 53); memcpy_avx_32(dd - 32, ss - 32); break; + case 182: memcpy_avx_128(dd - 182, ss - 182); [[fallthrough]]; + case 54: memcpy_avx_32(dd - 54, ss - 54); memcpy_avx_32(dd - 32, ss - 32); break; + case 183: memcpy_avx_128(dd - 183, ss - 183); [[fallthrough]]; + case 55: memcpy_avx_32(dd - 55, ss - 55); memcpy_avx_32(dd - 32, ss - 32); break; + case 184: memcpy_avx_128(dd - 184, ss - 184); [[fallthrough]]; + case 56: memcpy_avx_32(dd - 56, ss - 56); memcpy_avx_32(dd - 32, ss - 32); break; + case 185: memcpy_avx_128(dd - 185, ss - 185); [[fallthrough]]; + case 57: memcpy_avx_32(dd - 57, ss - 57); memcpy_avx_32(dd - 32, ss - 32); break; + case 186: memcpy_avx_128(dd - 186, ss - 186); [[fallthrough]]; + case 58: memcpy_avx_32(dd - 58, ss - 58); memcpy_avx_32(dd - 32, ss - 32); break; + case 187: memcpy_avx_128(dd - 187, ss - 187); [[fallthrough]]; + case 59: memcpy_avx_32(dd - 59, ss - 59); memcpy_avx_32(dd - 32, ss - 32); break; + case 188: memcpy_avx_128(dd - 188, ss - 188); [[fallthrough]]; + case 60: memcpy_avx_32(dd - 60, ss - 60); memcpy_avx_32(dd - 32, ss - 32); break; + case 189: memcpy_avx_128(dd - 189, ss - 189); [[fallthrough]]; + case 61: memcpy_avx_32(dd - 61, ss - 61); memcpy_avx_32(dd - 32, ss - 32); break; + case 190: memcpy_avx_128(dd - 190, ss - 190); [[fallthrough]]; + case 62: memcpy_avx_32(dd - 62, ss - 62); memcpy_avx_32(dd - 32, ss - 32); break; + case 191: memcpy_avx_128(dd - 191, ss - 191); [[fallthrough]]; + case 63: memcpy_avx_32(dd - 63, ss - 63); memcpy_avx_32(dd - 32, ss - 32); break; + case 192: memcpy_avx_128(dd - 192, ss - 192); [[fallthrough]]; + case 64: memcpy_avx_64(dd - 64, ss - 64); break; + case 193: memcpy_avx_128(dd - 193, ss - 193); [[fallthrough]]; + case 65: memcpy_avx_64(dd - 65, ss - 65); dd[-1] = ss[-1]; break; + case 194: memcpy_avx_128(dd - 194, ss - 194); [[fallthrough]]; + case 66: memcpy_avx_64(dd - 66, ss - 66); *((uint16_t*)(dd - 2)) = *((uint16_t*)(ss - 2)); break; + case 195: memcpy_avx_128(dd - 195, ss - 195); 
[[fallthrough]]; + case 67: memcpy_avx_64(dd - 67, ss - 67); *((uint32_t*)(dd - 4)) = *((uint32_t*)(ss - 4)); break; + case 196: memcpy_avx_128(dd - 196, ss - 196); [[fallthrough]]; + case 68: memcpy_avx_64(dd - 68, ss - 68); *((uint32_t*)(dd - 4)) = *((uint32_t*)(ss - 4)); break; + case 197: memcpy_avx_128(dd - 197, ss - 197); [[fallthrough]]; + case 69: memcpy_avx_64(dd - 69, ss - 69); *((uint64_t*)(dd - 8)) = *((uint64_t*)(ss - 8)); break; + case 198: memcpy_avx_128(dd - 198, ss - 198); [[fallthrough]]; + case 70: memcpy_avx_64(dd - 70, ss - 70); *((uint64_t*)(dd - 8)) = *((uint64_t*)(ss - 8)); break; + case 199: memcpy_avx_128(dd - 199, ss - 199); [[fallthrough]]; + case 71: memcpy_avx_64(dd - 71, ss - 71); *((uint64_t*)(dd - 8)) = *((uint64_t*)(ss - 8)); break; + case 200: memcpy_avx_128(dd - 200, ss - 200); [[fallthrough]]; + case 72: memcpy_avx_64(dd - 72, ss - 72); *((uint64_t*)(dd - 8)) = *((uint64_t*)(ss - 8)); break; + case 201: memcpy_avx_128(dd - 201, ss - 201); [[fallthrough]]; + case 73: memcpy_avx_64(dd - 73, ss - 73); memcpy_avx_16(dd - 16, ss - 16); break; + case 202: memcpy_avx_128(dd - 202, ss - 202); [[fallthrough]]; + case 74: memcpy_avx_64(dd - 74, ss - 74); memcpy_avx_16(dd - 16, ss - 16); break; + case 203: memcpy_avx_128(dd - 203, ss - 203); [[fallthrough]]; + case 75: memcpy_avx_64(dd - 75, ss - 75); memcpy_avx_16(dd - 16, ss - 16); break; + case 204: memcpy_avx_128(dd - 204, ss - 204); [[fallthrough]]; + case 76: memcpy_avx_64(dd - 76, ss - 76); memcpy_avx_16(dd - 16, ss - 16); break; + case 205: memcpy_avx_128(dd - 205, ss - 205); [[fallthrough]]; + case 77: memcpy_avx_64(dd - 77, ss - 77); memcpy_avx_16(dd - 16, ss - 16); break; + case 206: memcpy_avx_128(dd - 206, ss - 206); [[fallthrough]]; + case 78: memcpy_avx_64(dd - 78, ss - 78); memcpy_avx_16(dd - 16, ss - 16); break; + case 207: memcpy_avx_128(dd - 207, ss - 207); [[fallthrough]]; + case 79: memcpy_avx_64(dd - 79, ss - 79); memcpy_avx_16(dd - 16, ss - 16); break; + case 208: memcpy_avx_128(dd - 208, ss - 208); [[fallthrough]]; + case 80: memcpy_avx_64(dd - 80, ss - 80); memcpy_avx_16(dd - 16, ss - 16); break; + case 209: memcpy_avx_128(dd - 209, ss - 209); [[fallthrough]]; + case 81: memcpy_avx_64(dd - 81, ss - 81); memcpy_avx_32(dd - 32, ss - 32); break; + case 210: memcpy_avx_128(dd - 210, ss - 210); [[fallthrough]]; + case 82: memcpy_avx_64(dd - 82, ss - 82); memcpy_avx_32(dd - 32, ss - 32); break; + case 211: memcpy_avx_128(dd - 211, ss - 211); [[fallthrough]]; + case 83: memcpy_avx_64(dd - 83, ss - 83); memcpy_avx_32(dd - 32, ss - 32); break; + case 212: memcpy_avx_128(dd - 212, ss - 212); [[fallthrough]]; + case 84: memcpy_avx_64(dd - 84, ss - 84); memcpy_avx_32(dd - 32, ss - 32); break; + case 213: memcpy_avx_128(dd - 213, ss - 213); [[fallthrough]]; + case 85: memcpy_avx_64(dd - 85, ss - 85); memcpy_avx_32(dd - 32, ss - 32); break; + case 214: memcpy_avx_128(dd - 214, ss - 214); [[fallthrough]]; + case 86: memcpy_avx_64(dd - 86, ss - 86); memcpy_avx_32(dd - 32, ss - 32); break; + case 215: memcpy_avx_128(dd - 215, ss - 215); [[fallthrough]]; + case 87: memcpy_avx_64(dd - 87, ss - 87); memcpy_avx_32(dd - 32, ss - 32); break; + case 216: memcpy_avx_128(dd - 216, ss - 216); [[fallthrough]]; + case 88: memcpy_avx_64(dd - 88, ss - 88); memcpy_avx_32(dd - 32, ss - 32); break; + case 217: memcpy_avx_128(dd - 217, ss - 217); [[fallthrough]]; + case 89: memcpy_avx_64(dd - 89, ss - 89); memcpy_avx_32(dd - 32, ss - 32); break; + case 218: memcpy_avx_128(dd - 218, ss - 218); [[fallthrough]]; + case 90: 
memcpy_avx_64(dd - 90, ss - 90); memcpy_avx_32(dd - 32, ss - 32); break; + case 219: memcpy_avx_128(dd - 219, ss - 219); [[fallthrough]]; + case 91: memcpy_avx_64(dd - 91, ss - 91); memcpy_avx_32(dd - 32, ss - 32); break; + case 220: memcpy_avx_128(dd - 220, ss - 220); [[fallthrough]]; + case 92: memcpy_avx_64(dd - 92, ss - 92); memcpy_avx_32(dd - 32, ss - 32); break; + case 221: memcpy_avx_128(dd - 221, ss - 221); [[fallthrough]]; + case 93: memcpy_avx_64(dd - 93, ss - 93); memcpy_avx_32(dd - 32, ss - 32); break; + case 222: memcpy_avx_128(dd - 222, ss - 222); [[fallthrough]]; + case 94: memcpy_avx_64(dd - 94, ss - 94); memcpy_avx_32(dd - 32, ss - 32); break; + case 223: memcpy_avx_128(dd - 223, ss - 223); [[fallthrough]]; + case 95: memcpy_avx_64(dd - 95, ss - 95); memcpy_avx_32(dd - 32, ss - 32); break; + case 224: memcpy_avx_128(dd - 224, ss - 224); [[fallthrough]]; + case 96: memcpy_avx_64(dd - 96, ss - 96); memcpy_avx_32(dd - 32, ss - 32); break; + case 225: memcpy_avx_128(dd - 225, ss - 225); [[fallthrough]]; + case 97: memcpy_avx_64(dd - 97, ss - 97); memcpy_avx_64(dd - 64, ss - 64); break; + case 226: memcpy_avx_128(dd - 226, ss - 226); [[fallthrough]]; + case 98: memcpy_avx_64(dd - 98, ss - 98); memcpy_avx_64(dd - 64, ss - 64); break; + case 227: memcpy_avx_128(dd - 227, ss - 227); [[fallthrough]]; + case 99: memcpy_avx_64(dd - 99, ss - 99); memcpy_avx_64(dd - 64, ss - 64); break; + case 228: memcpy_avx_128(dd - 228, ss - 228); [[fallthrough]]; + case 100: memcpy_avx_64(dd - 100, ss - 100); memcpy_avx_64(dd - 64, ss - 64); break; + case 229: memcpy_avx_128(dd - 229, ss - 229); [[fallthrough]]; + case 101: memcpy_avx_64(dd - 101, ss - 101); memcpy_avx_64(dd - 64, ss - 64); break; + case 230: memcpy_avx_128(dd - 230, ss - 230); [[fallthrough]]; + case 102: memcpy_avx_64(dd - 102, ss - 102); memcpy_avx_64(dd - 64, ss - 64); break; + case 231: memcpy_avx_128(dd - 231, ss - 231); [[fallthrough]]; + case 103: memcpy_avx_64(dd - 103, ss - 103); memcpy_avx_64(dd - 64, ss - 64); break; + case 232: memcpy_avx_128(dd - 232, ss - 232); [[fallthrough]]; + case 104: memcpy_avx_64(dd - 104, ss - 104); memcpy_avx_64(dd - 64, ss - 64); break; + case 233: memcpy_avx_128(dd - 233, ss - 233); [[fallthrough]]; + case 105: memcpy_avx_64(dd - 105, ss - 105); memcpy_avx_64(dd - 64, ss - 64); break; + case 234: memcpy_avx_128(dd - 234, ss - 234); [[fallthrough]]; + case 106: memcpy_avx_64(dd - 106, ss - 106); memcpy_avx_64(dd - 64, ss - 64); break; + case 235: memcpy_avx_128(dd - 235, ss - 235); [[fallthrough]]; + case 107: memcpy_avx_64(dd - 107, ss - 107); memcpy_avx_64(dd - 64, ss - 64); break; + case 236: memcpy_avx_128(dd - 236, ss - 236); [[fallthrough]]; + case 108: memcpy_avx_64(dd - 108, ss - 108); memcpy_avx_64(dd - 64, ss - 64); break; + case 237: memcpy_avx_128(dd - 237, ss - 237); [[fallthrough]]; + case 109: memcpy_avx_64(dd - 109, ss - 109); memcpy_avx_64(dd - 64, ss - 64); break; + case 238: memcpy_avx_128(dd - 238, ss - 238); [[fallthrough]]; + case 110: memcpy_avx_64(dd - 110, ss - 110); memcpy_avx_64(dd - 64, ss - 64); break; + case 239: memcpy_avx_128(dd - 239, ss - 239); [[fallthrough]]; + case 111: memcpy_avx_64(dd - 111, ss - 111); memcpy_avx_64(dd - 64, ss - 64); break; + case 240: memcpy_avx_128(dd - 240, ss - 240); [[fallthrough]]; + case 112: memcpy_avx_64(dd - 112, ss - 112); memcpy_avx_64(dd - 64, ss - 64); break; + case 241: memcpy_avx_128(dd - 241, ss - 241); [[fallthrough]]; + case 113: memcpy_avx_64(dd - 113, ss - 113); memcpy_avx_64(dd - 64, ss - 64); break; + case 242: 
memcpy_avx_128(dd - 242, ss - 242); [[fallthrough]];
+    case 114: memcpy_avx_64(dd - 114, ss - 114); memcpy_avx_64(dd - 64, ss - 64); break;
+    case 243: memcpy_avx_128(dd - 243, ss - 243); [[fallthrough]];
+    case 115: memcpy_avx_64(dd - 115, ss - 115); memcpy_avx_64(dd - 64, ss - 64); break;
+    case 244: memcpy_avx_128(dd - 244, ss - 244); [[fallthrough]];
+    case 116: memcpy_avx_64(dd - 116, ss - 116); memcpy_avx_64(dd - 64, ss - 64); break;
+    case 245: memcpy_avx_128(dd - 245, ss - 245); [[fallthrough]];
+    case 117: memcpy_avx_64(dd - 117, ss - 117); memcpy_avx_64(dd - 64, ss - 64); break;
+    case 246: memcpy_avx_128(dd - 246, ss - 246); [[fallthrough]];
+    case 118: memcpy_avx_64(dd - 118, ss - 118); memcpy_avx_64(dd - 64, ss - 64); break;
+    case 247: memcpy_avx_128(dd - 247, ss - 247); [[fallthrough]];
+    case 119: memcpy_avx_64(dd - 119, ss - 119); memcpy_avx_64(dd - 64, ss - 64); break;
+    case 248: memcpy_avx_128(dd - 248, ss - 248); [[fallthrough]];
+    case 120: memcpy_avx_64(dd - 120, ss - 120); memcpy_avx_64(dd - 64, ss - 64); break;
+    case 249: memcpy_avx_128(dd - 249, ss - 249); [[fallthrough]];
+    case 121: memcpy_avx_64(dd - 121, ss - 121); memcpy_avx_64(dd - 64, ss - 64); break;
+    case 250: memcpy_avx_128(dd - 250, ss - 250); [[fallthrough]];
+    case 122: memcpy_avx_64(dd - 122, ss - 122); memcpy_avx_64(dd - 64, ss - 64); break;
+    case 251: memcpy_avx_128(dd - 251, ss - 251); [[fallthrough]];
+    case 123: memcpy_avx_64(dd - 123, ss - 123); memcpy_avx_64(dd - 64, ss - 64); break;
+    case 252: memcpy_avx_128(dd - 252, ss - 252); [[fallthrough]];
+    case 124: memcpy_avx_64(dd - 124, ss - 124); memcpy_avx_64(dd - 64, ss - 64); break;
+    case 253: memcpy_avx_128(dd - 253, ss - 253); [[fallthrough]];
+    case 125: memcpy_avx_64(dd - 125, ss - 125); memcpy_avx_64(dd - 64, ss - 64); break;
+    case 254: memcpy_avx_128(dd - 254, ss - 254); [[fallthrough]];
+    case 126: memcpy_avx_64(dd - 126, ss - 126); memcpy_avx_64(dd - 64, ss - 64); break;
+    case 255: memcpy_avx_128(dd - 255, ss - 255); [[fallthrough]];
+    case 127: memcpy_avx_64(dd - 127, ss - 127); memcpy_avx_64(dd - 64, ss - 64); break;
+    case 256: memcpy_avx_256(dd - 256, ss - 256); break;
+    }
+
+    return dst;
+}
+
+
+//---------------------------------------------------------------------
+// main routine
+//---------------------------------------------------------------------
+void* memcpy_fast_avx(void * __restrict destination, const void * __restrict source, size_t size)
+{
+    unsigned char *dst = reinterpret_cast<unsigned char *>(destination);
+    const unsigned char *src = reinterpret_cast<const unsigned char *>(source);
+    static size_t cachesize = 0x200000; // L3-cache size
+    size_t padding;
+
+    // small memory copy
+    if (size <= 256)
+    {
+        memcpy_tiny_avx(dst, src, size);
+        _mm256_zeroupper();
+        return destination;
+    }
+
+    // align destination to 32 bytes boundary
+    padding = (32 - (((size_t)dst) & 31)) & 31;
+
+#if 0
+    if (padding > 0)
+    {
+        __m256i head = _mm256_loadu_si256(reinterpret_cast<const __m256i*>(src));
+        _mm256_storeu_si256((__m256i*)dst, head);
+        dst += padding;
+        src += padding;
+        size -= padding;
+    }
+#else
+    __m256i head = _mm256_loadu_si256(reinterpret_cast<const __m256i*>(src));
+    _mm256_storeu_si256((__m256i*)dst, head);
+    dst += padding;
+    src += padding;
+    size -= padding;
+#endif
+
+    // medium size copy
+    if (size <= cachesize)
+    {
+        __m256i c0, c1, c2, c3, c4, c5, c6, c7;
+
+        for (; size >= 256; size -= 256)
+        {
+            c0 = _mm256_loadu_si256((reinterpret_cast<const __m256i*>(src)) + 0);
+            c1 = _mm256_loadu_si256((reinterpret_cast<const __m256i*>(src)) + 1);
+            c2 = _mm256_loadu_si256((reinterpret_cast<const __m256i*>(src)) + 2);
+            c3 = _mm256_loadu_si256((reinterpret_cast<const __m256i*>(src)) + 3);
+            c4 = _mm256_loadu_si256((reinterpret_cast<const __m256i*>(src)) + 4);
+            c5 = _mm256_loadu_si256((reinterpret_cast<const __m256i*>(src)) + 5);
+            c6 = _mm256_loadu_si256((reinterpret_cast<const __m256i*>(src)) + 6);
+            c7 = _mm256_loadu_si256((reinterpret_cast<const __m256i*>(src)) + 7);
+            src += 256;
+            _mm256_storeu_si256(((reinterpret_cast<__m256i*>(dst)) + 0), c0);
+            _mm256_storeu_si256(((reinterpret_cast<__m256i*>(dst)) + 1), c1);
+            _mm256_storeu_si256(((reinterpret_cast<__m256i*>(dst)) + 2), c2);
+            _mm256_storeu_si256(((reinterpret_cast<__m256i*>(dst)) + 3), c3);
+            _mm256_storeu_si256(((reinterpret_cast<__m256i*>(dst)) + 4), c4);
+            _mm256_storeu_si256(((reinterpret_cast<__m256i*>(dst)) + 5), c5);
+            _mm256_storeu_si256(((reinterpret_cast<__m256i*>(dst)) + 6), c6);
+            _mm256_storeu_si256(((reinterpret_cast<__m256i*>(dst)) + 7), c7);
+            dst += 256;
+        }
+    }
+    else
+    { // big memory copy
+        __m256i c0, c1, c2, c3, c4, c5, c6, c7;
+
+        if ((((size_t)src) & 31) == 0)
+        { // source aligned
+            for (; size >= 256; size -= 256)
+            {
+                c0 = _mm256_load_si256((reinterpret_cast<const __m256i*>(src)) + 0);
+                c1 = _mm256_load_si256((reinterpret_cast<const __m256i*>(src)) + 1);
+                c2 = _mm256_load_si256((reinterpret_cast<const __m256i*>(src)) + 2);
+                c3 = _mm256_load_si256((reinterpret_cast<const __m256i*>(src)) + 3);
+                c4 = _mm256_load_si256((reinterpret_cast<const __m256i*>(src)) + 4);
+                c5 = _mm256_load_si256((reinterpret_cast<const __m256i*>(src)) + 5);
+                c6 = _mm256_load_si256((reinterpret_cast<const __m256i*>(src)) + 6);
+                c7 = _mm256_load_si256((reinterpret_cast<const __m256i*>(src)) + 7);
+                src += 256;
+                _mm256_stream_si256(((reinterpret_cast<__m256i*>(dst)) + 0), c0);
+                _mm256_stream_si256(((reinterpret_cast<__m256i*>(dst)) + 1), c1);
+                _mm256_stream_si256(((reinterpret_cast<__m256i*>(dst)) + 2), c2);
+                _mm256_stream_si256(((reinterpret_cast<__m256i*>(dst)) + 3), c3);
+                _mm256_stream_si256(((reinterpret_cast<__m256i*>(dst)) + 4), c4);
+                _mm256_stream_si256(((reinterpret_cast<__m256i*>(dst)) + 5), c5);
+                _mm256_stream_si256(((reinterpret_cast<__m256i*>(dst)) + 6), c6);
+                _mm256_stream_si256(((reinterpret_cast<__m256i*>(dst)) + 7), c7);
+                dst += 256;
+            }
+        }
+        else
+        { // source unaligned
+            for (; size >= 256; size -= 256)
+            {
+                c0 = _mm256_loadu_si256((reinterpret_cast<const __m256i*>(src)) + 0);
+                c1 = _mm256_loadu_si256((reinterpret_cast<const __m256i*>(src)) + 1);
+                c2 = _mm256_loadu_si256((reinterpret_cast<const __m256i*>(src)) + 2);
+                c3 = _mm256_loadu_si256((reinterpret_cast<const __m256i*>(src)) + 3);
+                c4 = _mm256_loadu_si256((reinterpret_cast<const __m256i*>(src)) + 4);
+                c5 = _mm256_loadu_si256((reinterpret_cast<const __m256i*>(src)) + 5);
+                c6 = _mm256_loadu_si256((reinterpret_cast<const __m256i*>(src)) + 6);
+                c7 = _mm256_loadu_si256((reinterpret_cast<const __m256i*>(src)) + 7);
+                src += 256;
+                _mm256_stream_si256(((reinterpret_cast<__m256i*>(dst)) + 0), c0);
+                _mm256_stream_si256(((reinterpret_cast<__m256i*>(dst)) + 1), c1);
+                _mm256_stream_si256(((reinterpret_cast<__m256i*>(dst)) + 2), c2);
+                _mm256_stream_si256(((reinterpret_cast<__m256i*>(dst)) + 3), c3);
+                _mm256_stream_si256(((reinterpret_cast<__m256i*>(dst)) + 4), c4);
+                _mm256_stream_si256(((reinterpret_cast<__m256i*>(dst)) + 5), c5);
+                _mm256_stream_si256(((reinterpret_cast<__m256i*>(dst)) + 6), c6);
+                _mm256_stream_si256(((reinterpret_cast<__m256i*>(dst)) + 7), c7);
+                dst += 256;
+            }
+        }
+        _mm_sfence();
+    }
+
+    memcpy_tiny_avx(dst, src, size);
+    _mm256_zeroupper();
+
+    return destination;
+}
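To summarize the policy implemented by `memcpy_fast_avx` above: medium copies use ordinary stores so the destination stays cache-resident, while copies larger than the cache-size threshold use non-temporal stores that bypass the cache and therefore require an `_mm_sfence()` at the end. A condensed single-loop sketch of that split, assuming a 32-byte-aligned destination for the streaming path and ignoring the sub-32-byte tail (the function name and hard-coded threshold mirror the code above but are otherwise illustrative):

```cpp
#include <immintrin.h>
#include <cstddef>

static void copy_big_avx(unsigned char * dst, const unsigned char * src, size_t size)
{
    const size_t cache_threshold = 0x200000; /// same constant as `cachesize` above
    /// Streaming stores require a 32-byte-aligned destination.
    const bool streaming = size > cache_threshold && (reinterpret_cast<size_t>(dst) & 31) == 0;

    for (; size >= 32; size -= 32, src += 32, dst += 32)
    {
        __m256i v = _mm256_loadu_si256(reinterpret_cast<const __m256i *>(src));
        if (streaming)
            _mm256_stream_si256(reinterpret_cast<__m256i *>(dst), v); /// bypasses the cache
        else
            _mm256_storeu_si256(reinterpret_cast<__m256i *>(dst), v);
    }

    if (streaming)
        _mm_sfence();    /// order the non-temporal stores before returning
    _mm256_zeroupper();  /// avoid AVX-to-SSE transition penalties afterwards
}
```

The rationale for the split: for buffers much larger than the cache, regular stores would evict useful data and pay a read-for-ownership per cache line, whereas `_mm256_stream_si256` write-combines directly to memory.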
diff --git a/utils/memcpy-bench/glibc/asm-syntax.h b/utils/memcpy-bench/glibc/asm-syntax.h
new file mode 100644
index 00000000000..0879f2606c7
--- /dev/null
+++ b/utils/memcpy-bench/glibc/asm-syntax.h
@@ -0,0 +1,26 @@
+#pragma once
+
+/* Definitions for x86 syntax variations.
+   Copyright (C) 1992-2020 Free Software Foundation, Inc.
+   This file is part of the GNU C Library. Its master source is NOT part of
+   the C library, however. The master source lives in the GNU MP Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>. */
+
+#undef ALIGN
+#define ALIGN(log) .align 1<<log
+
+#undef L
+#define L(body) .L##body
diff --git a/utils/memcpy-bench/glibc/dwarf2.h b/utils/memcpy-bench/glibc/dwarf2.h
new file mode 100644
--- /dev/null
+++ b/utils/memcpy-bench/glibc/dwarf2.h
+/* Declarations and definitions of codes relating to the DWARF2 symbolic
+   debugging information format.
+   Copyright (C) 1992-2020 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>. */
+
+#ifndef _DWARF2_H
+#define _DWARF2_H 1
+
+/* This file is derived from the DWARF specification (a public document)
+   Revision 2.0.0 (July 27, 1993) developed by the UNIX International
+   Programming Languages Special Interest Group (UI/PLSIG) and distributed
+   by UNIX International. Copies of this specification are available from
+   UNIX International, 20 Waterview Boulevard, Parsippany, NJ, 07054. */
+
+/* This file is shared between GCC and GDB, and should not contain
+   prototypes. */
+
+#ifndef __ASSEMBLER__
+/* Tag names and codes. */
+
+enum dwarf_tag
+  {
+    DW_TAG_padding = 0x00,
+    DW_TAG_array_type = 0x01,
+    DW_TAG_class_type = 0x02,
+    DW_TAG_entry_point = 0x03,
+    DW_TAG_enumeration_type = 0x04,
+    DW_TAG_formal_parameter = 0x05,
+    DW_TAG_imported_declaration = 0x08,
+    DW_TAG_label = 0x0a,
+    DW_TAG_lexical_block = 0x0b,
+    DW_TAG_member = 0x0d,
+    DW_TAG_pointer_type = 0x0f,
+    DW_TAG_reference_type = 0x10,
+    DW_TAG_compile_unit = 0x11,
+    DW_TAG_string_type = 0x12,
+    DW_TAG_structure_type = 0x13,
+    DW_TAG_subroutine_type = 0x15,
+    DW_TAG_typedef = 0x16,
+    DW_TAG_union_type = 0x17,
+    DW_TAG_unspecified_parameters = 0x18,
+    DW_TAG_variant = 0x19,
+    DW_TAG_common_block = 0x1a,
+    DW_TAG_common_inclusion = 0x1b,
+    DW_TAG_inheritance = 0x1c,
+    DW_TAG_inlined_subroutine = 0x1d,
+    DW_TAG_module = 0x1e,
+    DW_TAG_ptr_to_member_type = 0x1f,
+    DW_TAG_set_type = 0x20,
+    DW_TAG_subrange_type = 0x21,
+    DW_TAG_with_stmt = 0x22,
+    DW_TAG_access_declaration = 0x23,
+    DW_TAG_base_type = 0x24,
+    DW_TAG_catch_block = 0x25,
+    DW_TAG_const_type = 0x26,
+    DW_TAG_constant = 0x27,
+    DW_TAG_enumerator = 0x28,
+    DW_TAG_file_type = 0x29,
+    DW_TAG_friend = 0x2a,
+    DW_TAG_namelist = 0x2b,
+    DW_TAG_namelist_item = 0x2c,
+    DW_TAG_packed_type = 0x2d,
+    DW_TAG_subprogram = 0x2e,
+    DW_TAG_template_type_param = 0x2f,
+    DW_TAG_template_value_param = 0x30,
+    DW_TAG_thrown_type = 0x31,
+    DW_TAG_try_block = 0x32,
+    DW_TAG_variant_part = 0x33,
+    DW_TAG_variable = 0x34,
+    DW_TAG_volatile_type = 0x35,
+    /* SGI/MIPS Extensions */
+    DW_TAG_MIPS_loop = 0x4081,
+    /* GNU extensions */
+    DW_TAG_format_label = 0x4101, /* for FORTRAN 77 and Fortran 90 */
+    DW_TAG_function_template = 0x4102, /* for C++ */
+    DW_TAG_class_template = 0x4103, /* for C++ */
+    DW_TAG_GNU_BINCL = 0x4104,
+    DW_TAG_GNU_EINCL = 0x4105
+  };
+
+#define DW_TAG_lo_user 0x4080
+#define DW_TAG_hi_user 0xffff
+
+/* flag that tells whether entry has a child or not */
+#define DW_children_no 0
+#define DW_children_yes 1
+
+/* Form names and codes.
*/ +enum dwarf_form + { + DW_FORM_addr = 0x01, + DW_FORM_block2 = 0x03, + DW_FORM_block4 = 0x04, + DW_FORM_data2 = 0x05, + DW_FORM_data4 = 0x06, + DW_FORM_data8 = 0x07, + DW_FORM_string = 0x08, + DW_FORM_block = 0x09, + DW_FORM_block1 = 0x0a, + DW_FORM_data1 = 0x0b, + DW_FORM_flag = 0x0c, + DW_FORM_sdata = 0x0d, + DW_FORM_strp = 0x0e, + DW_FORM_udata = 0x0f, + DW_FORM_ref_addr = 0x10, + DW_FORM_ref1 = 0x11, + DW_FORM_ref2 = 0x12, + DW_FORM_ref4 = 0x13, + DW_FORM_ref8 = 0x14, + DW_FORM_ref_udata = 0x15, + DW_FORM_indirect = 0x16 + }; + +/* Attribute names and codes. */ + +enum dwarf_attribute + { + DW_AT_sibling = 0x01, + DW_AT_location = 0x02, + DW_AT_name = 0x03, + DW_AT_ordering = 0x09, + DW_AT_subscr_data = 0x0a, + DW_AT_byte_size = 0x0b, + DW_AT_bit_offset = 0x0c, + DW_AT_bit_size = 0x0d, + DW_AT_element_list = 0x0f, + DW_AT_stmt_list = 0x10, + DW_AT_low_pc = 0x11, + DW_AT_high_pc = 0x12, + DW_AT_language = 0x13, + DW_AT_member = 0x14, + DW_AT_discr = 0x15, + DW_AT_discr_value = 0x16, + DW_AT_visibility = 0x17, + DW_AT_import = 0x18, + DW_AT_string_length = 0x19, + DW_AT_common_reference = 0x1a, + DW_AT_comp_dir = 0x1b, + DW_AT_const_value = 0x1c, + DW_AT_containing_type = 0x1d, + DW_AT_default_value = 0x1e, + DW_AT_inline = 0x20, + DW_AT_is_optional = 0x21, + DW_AT_lower_bound = 0x22, + DW_AT_producer = 0x25, + DW_AT_prototyped = 0x27, + DW_AT_return_addr = 0x2a, + DW_AT_start_scope = 0x2c, + DW_AT_stride_size = 0x2e, + DW_AT_upper_bound = 0x2f, + DW_AT_abstract_origin = 0x31, + DW_AT_accessibility = 0x32, + DW_AT_address_class = 0x33, + DW_AT_artificial = 0x34, + DW_AT_base_types = 0x35, + DW_AT_calling_convention = 0x36, + DW_AT_count = 0x37, + DW_AT_data_member_location = 0x38, + DW_AT_decl_column = 0x39, + DW_AT_decl_file = 0x3a, + DW_AT_decl_line = 0x3b, + DW_AT_declaration = 0x3c, + DW_AT_discr_list = 0x3d, + DW_AT_encoding = 0x3e, + DW_AT_external = 0x3f, + DW_AT_frame_base = 0x40, + DW_AT_friend = 0x41, + DW_AT_identifier_case = 0x42, + DW_AT_macro_info = 0x43, + DW_AT_namelist_items = 0x44, + DW_AT_priority = 0x45, + DW_AT_segment = 0x46, + DW_AT_specification = 0x47, + DW_AT_static_link = 0x48, + DW_AT_type = 0x49, + DW_AT_use_location = 0x4a, + DW_AT_variable_parameter = 0x4b, + DW_AT_virtuality = 0x4c, + DW_AT_vtable_elem_location = 0x4d, + /* SGI/MIPS Extensions */ + DW_AT_MIPS_fde = 0x2001, + DW_AT_MIPS_loop_begin = 0x2002, + DW_AT_MIPS_tail_loop_begin = 0x2003, + DW_AT_MIPS_epilog_begin = 0x2004, + DW_AT_MIPS_loop_unroll_factor = 0x2005, + DW_AT_MIPS_software_pipeline_depth = 0x2006, + DW_AT_MIPS_linkage_name = 0x2007, + DW_AT_MIPS_stride = 0x2008, + DW_AT_MIPS_abstract_name = 0x2009, + DW_AT_MIPS_clone_origin = 0x200a, + DW_AT_MIPS_has_inlines = 0x200b, + /* GNU extensions. */ + DW_AT_sf_names = 0x2101, + DW_AT_src_info = 0x2102, + DW_AT_mac_info = 0x2103, + DW_AT_src_coords = 0x2104, + DW_AT_body_begin = 0x2105, + DW_AT_body_end = 0x2106 + }; + +#define DW_AT_lo_user 0x2000 /* implementation-defined range start */ +#define DW_AT_hi_user 0x3ff0 /* implementation-defined range end */ + +/* Location atom names and codes. 
*/ + +enum dwarf_location_atom + { + DW_OP_addr = 0x03, + DW_OP_deref = 0x06, + DW_OP_const1u = 0x08, + DW_OP_const1s = 0x09, + DW_OP_const2u = 0x0a, + DW_OP_const2s = 0x0b, + DW_OP_const4u = 0x0c, + DW_OP_const4s = 0x0d, + DW_OP_const8u = 0x0e, + DW_OP_const8s = 0x0f, + DW_OP_constu = 0x10, + DW_OP_consts = 0x11, + DW_OP_dup = 0x12, + DW_OP_drop = 0x13, + DW_OP_over = 0x14, + DW_OP_pick = 0x15, + DW_OP_swap = 0x16, + DW_OP_rot = 0x17, + DW_OP_xderef = 0x18, + DW_OP_abs = 0x19, + DW_OP_and = 0x1a, + DW_OP_div = 0x1b, + DW_OP_minus = 0x1c, + DW_OP_mod = 0x1d, + DW_OP_mul = 0x1e, + DW_OP_neg = 0x1f, + DW_OP_not = 0x20, + DW_OP_or = 0x21, + DW_OP_plus = 0x22, + DW_OP_plus_uconst = 0x23, + DW_OP_shl = 0x24, + DW_OP_shr = 0x25, + DW_OP_shra = 0x26, + DW_OP_xor = 0x27, + DW_OP_bra = 0x28, + DW_OP_eq = 0x29, + DW_OP_ge = 0x2a, + DW_OP_gt = 0x2b, + DW_OP_le = 0x2c, + DW_OP_lt = 0x2d, + DW_OP_ne = 0x2e, + DW_OP_skip = 0x2f, + DW_OP_lit0 = 0x30, + DW_OP_lit1 = 0x31, + DW_OP_lit2 = 0x32, + DW_OP_lit3 = 0x33, + DW_OP_lit4 = 0x34, + DW_OP_lit5 = 0x35, + DW_OP_lit6 = 0x36, + DW_OP_lit7 = 0x37, + DW_OP_lit8 = 0x38, + DW_OP_lit9 = 0x39, + DW_OP_lit10 = 0x3a, + DW_OP_lit11 = 0x3b, + DW_OP_lit12 = 0x3c, + DW_OP_lit13 = 0x3d, + DW_OP_lit14 = 0x3e, + DW_OP_lit15 = 0x3f, + DW_OP_lit16 = 0x40, + DW_OP_lit17 = 0x41, + DW_OP_lit18 = 0x42, + DW_OP_lit19 = 0x43, + DW_OP_lit20 = 0x44, + DW_OP_lit21 = 0x45, + DW_OP_lit22 = 0x46, + DW_OP_lit23 = 0x47, + DW_OP_lit24 = 0x48, + DW_OP_lit25 = 0x49, + DW_OP_lit26 = 0x4a, + DW_OP_lit27 = 0x4b, + DW_OP_lit28 = 0x4c, + DW_OP_lit29 = 0x4d, + DW_OP_lit30 = 0x4e, + DW_OP_lit31 = 0x4f, + DW_OP_reg0 = 0x50, + DW_OP_reg1 = 0x51, + DW_OP_reg2 = 0x52, + DW_OP_reg3 = 0x53, + DW_OP_reg4 = 0x54, + DW_OP_reg5 = 0x55, + DW_OP_reg6 = 0x56, + DW_OP_reg7 = 0x57, + DW_OP_reg8 = 0x58, + DW_OP_reg9 = 0x59, + DW_OP_reg10 = 0x5a, + DW_OP_reg11 = 0x5b, + DW_OP_reg12 = 0x5c, + DW_OP_reg13 = 0x5d, + DW_OP_reg14 = 0x5e, + DW_OP_reg15 = 0x5f, + DW_OP_reg16 = 0x60, + DW_OP_reg17 = 0x61, + DW_OP_reg18 = 0x62, + DW_OP_reg19 = 0x63, + DW_OP_reg20 = 0x64, + DW_OP_reg21 = 0x65, + DW_OP_reg22 = 0x66, + DW_OP_reg23 = 0x67, + DW_OP_reg24 = 0x68, + DW_OP_reg25 = 0x69, + DW_OP_reg26 = 0x6a, + DW_OP_reg27 = 0x6b, + DW_OP_reg28 = 0x6c, + DW_OP_reg29 = 0x6d, + DW_OP_reg30 = 0x6e, + DW_OP_reg31 = 0x6f, + DW_OP_breg0 = 0x70, + DW_OP_breg1 = 0x71, + DW_OP_breg2 = 0x72, + DW_OP_breg3 = 0x73, + DW_OP_breg4 = 0x74, + DW_OP_breg5 = 0x75, + DW_OP_breg6 = 0x76, + DW_OP_breg7 = 0x77, + DW_OP_breg8 = 0x78, + DW_OP_breg9 = 0x79, + DW_OP_breg10 = 0x7a, + DW_OP_breg11 = 0x7b, + DW_OP_breg12 = 0x7c, + DW_OP_breg13 = 0x7d, + DW_OP_breg14 = 0x7e, + DW_OP_breg15 = 0x7f, + DW_OP_breg16 = 0x80, + DW_OP_breg17 = 0x81, + DW_OP_breg18 = 0x82, + DW_OP_breg19 = 0x83, + DW_OP_breg20 = 0x84, + DW_OP_breg21 = 0x85, + DW_OP_breg22 = 0x86, + DW_OP_breg23 = 0x87, + DW_OP_breg24 = 0x88, + DW_OP_breg25 = 0x89, + DW_OP_breg26 = 0x8a, + DW_OP_breg27 = 0x8b, + DW_OP_breg28 = 0x8c, + DW_OP_breg29 = 0x8d, + DW_OP_breg30 = 0x8e, + DW_OP_breg31 = 0x8f, + DW_OP_regx = 0x90, + DW_OP_fbreg = 0x91, + DW_OP_bregx = 0x92, + DW_OP_piece = 0x93, + DW_OP_deref_size = 0x94, + DW_OP_xderef_size = 0x95, + DW_OP_nop = 0x96 + }; + +#define DW_OP_lo_user 0x80 /* implementation-defined range start */ +#define DW_OP_hi_user 0xff /* implementation-defined range end */ + +/* Type encodings. 
*/ + +enum dwarf_type + { + DW_ATE_void = 0x0, + DW_ATE_address = 0x1, + DW_ATE_boolean = 0x2, + DW_ATE_complex_float = 0x3, + DW_ATE_float = 0x4, + DW_ATE_signed = 0x5, + DW_ATE_signed_char = 0x6, + DW_ATE_unsigned = 0x7, + DW_ATE_unsigned_char = 0x8 + }; + +#define DW_ATE_lo_user 0x80 +#define DW_ATE_hi_user 0xff + +/* Array ordering names and codes. */ +enum dwarf_array_dim_ordering + { + DW_ORD_row_major = 0, + DW_ORD_col_major = 1 + }; + +/* access attribute */ +enum dwarf_access_attribute + { + DW_ACCESS_public = 1, + DW_ACCESS_protected = 2, + DW_ACCESS_private = 3 + }; + +/* visibility */ +enum dwarf_visibility_attribute + { + DW_VIS_local = 1, + DW_VIS_exported = 2, + DW_VIS_qualified = 3 + }; + +/* virtuality */ +enum dwarf_virtuality_attribute + { + DW_VIRTUALITY_none = 0, + DW_VIRTUALITY_virtual = 1, + DW_VIRTUALITY_pure_virtual = 2 + }; + +/* case sensitivity */ +enum dwarf_id_case + { + DW_ID_case_sensitive = 0, + DW_ID_up_case = 1, + DW_ID_down_case = 2, + DW_ID_case_insensitive = 3 + }; + +/* calling convention */ +enum dwarf_calling_convention + { + DW_CC_normal = 0x1, + DW_CC_program = 0x2, + DW_CC_nocall = 0x3 + }; + +#define DW_CC_lo_user 0x40 +#define DW_CC_hi_user 0xff + +/* inline attribute */ +enum dwarf_inline_attribute + { + DW_INL_not_inlined = 0, + DW_INL_inlined = 1, + DW_INL_declared_not_inlined = 2, + DW_INL_declared_inlined = 3 + }; + +/* discriminant lists */ +enum dwarf_discrim_list + { + DW_DSC_label = 0, + DW_DSC_range = 1 + }; + +/* line number opcodes */ +enum dwarf_line_number_ops + { + DW_LNS_extended_op = 0, + DW_LNS_copy = 1, + DW_LNS_advance_pc = 2, + DW_LNS_advance_line = 3, + DW_LNS_set_file = 4, + DW_LNS_set_column = 5, + DW_LNS_negate_stmt = 6, + DW_LNS_set_basic_block = 7, + DW_LNS_const_add_pc = 8, + DW_LNS_fixed_advance_pc = 9 + }; + +/* line number extended opcodes */ +enum dwarf_line_number_x_ops + { + DW_LNE_end_sequence = 1, + DW_LNE_set_address = 2, + DW_LNE_define_file = 3 + }; + +/* call frame information */ +enum dwarf_call_frame_info + { + DW_CFA_advance_loc = 0x40, + DW_CFA_offset = 0x80, + DW_CFA_restore = 0xc0, + DW_CFA_nop = 0x00, + DW_CFA_set_loc = 0x01, + DW_CFA_advance_loc1 = 0x02, + DW_CFA_advance_loc2 = 0x03, + DW_CFA_advance_loc4 = 0x04, + DW_CFA_offset_extended = 0x05, + DW_CFA_restore_extended = 0x06, + DW_CFA_undefined = 0x07, + DW_CFA_same_value = 0x08, + DW_CFA_register = 0x09, + DW_CFA_remember_state = 0x0a, + DW_CFA_restore_state = 0x0b, + DW_CFA_def_cfa = 0x0c, + DW_CFA_def_cfa_register = 0x0d, + DW_CFA_def_cfa_offset = 0x0e, + DW_CFA_def_cfa_expression = 0x0f, + DW_CFA_expression = 0x10, + /* Dwarf 2.1 */ + DW_CFA_offset_extended_sf = 0x11, + DW_CFA_def_cfa_sf = 0x12, + DW_CFA_def_cfa_offset_sf = 0x13, + + /* SGI/MIPS specific */ + DW_CFA_MIPS_advance_loc8 = 0x1d, + + /* GNU extensions */ + DW_CFA_GNU_window_save = 0x2d, + DW_CFA_GNU_args_size = 0x2e, + DW_CFA_GNU_negative_offset_extended = 0x2f + }; + +#define DW_CIE_ID 0xffffffff +#define DW_CIE_VERSION 1 + +#define DW_CFA_extended 0 +#define DW_CFA_low_user 0x1c +#define DW_CFA_high_user 0x3f + +#define DW_CHILDREN_no 0x00 +#define DW_CHILDREN_yes 0x01 + +#define DW_ADDR_none 0 + +/* Source language names and codes. 
*/
+
+enum dwarf_source_language
+  {
+    DW_LANG_C89 = 0x0001,
+    DW_LANG_C = 0x0002,
+    DW_LANG_Ada83 = 0x0003,
+    DW_LANG_C_plus_plus = 0x0004,
+    DW_LANG_Cobol74 = 0x0005,
+    DW_LANG_Cobol85 = 0x0006,
+    DW_LANG_Fortran77 = 0x0007,
+    DW_LANG_Fortran90 = 0x0008,
+    DW_LANG_Pascal83 = 0x0009,
+    DW_LANG_Modula2 = 0x000a,
+    DW_LANG_Java = 0x000b,
+    DW_LANG_Mips_Assembler = 0x8001
+  };
+
+
+#define DW_LANG_lo_user 0x8000 /* implementation-defined range start */
+#define DW_LANG_hi_user 0xffff /* implementation-defined range end */
+
+/* Names and codes for macro information. */
+
+enum dwarf_macinfo_record_type
+  {
+    DW_MACINFO_define = 1,
+    DW_MACINFO_undef = 2,
+    DW_MACINFO_start_file = 3,
+    DW_MACINFO_end_file = 4,
+    DW_MACINFO_vendor_ext = 255
+  };
+
+#endif /* !ASSEMBLER */
+
+/* @@@ For use with GNU frame unwind information. */
+
+#define DW_EH_PE_absptr 0x00
+#define DW_EH_PE_omit 0xff
+
+#define DW_EH_PE_uleb128 0x01
+#define DW_EH_PE_udata2 0x02
+#define DW_EH_PE_udata4 0x03
+#define DW_EH_PE_udata8 0x04
+#define DW_EH_PE_sleb128 0x09
+#define DW_EH_PE_sdata2 0x0A
+#define DW_EH_PE_sdata4 0x0B
+#define DW_EH_PE_sdata8 0x0C
+#define DW_EH_PE_signed 0x08
+
+#define DW_EH_PE_pcrel 0x10
+#define DW_EH_PE_textrel 0x20
+#define DW_EH_PE_datarel 0x30
+#define DW_EH_PE_funcrel 0x40
+#define DW_EH_PE_aligned 0x50
+
+#define DW_EH_PE_indirect 0x80
+
+#endif /* dwarf2.h */
diff --git a/utils/memcpy-bench/glibc/memcpy-ssse3-back.S b/utils/memcpy-bench/glibc/memcpy-ssse3-back.S
new file mode 100644
index 00000000000..c5257592efa
--- /dev/null
+++ b/utils/memcpy-bench/glibc/memcpy-ssse3-back.S
@@ -0,0 +1,3182 @@
+/* memcpy with SSSE3 and REP string
+   Copyright (C) 2010-2020 Free Software Foundation, Inc.
+   Contributed by Intel Corporation.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>. */
+
+#include "sysdep.h"
+
+#if 1
+
+#include "asm-syntax.h"
+
+#ifndef MEMCPY
+# define MEMCPY __memcpy_ssse3_back
+# define MEMCPY_CHK __memcpy_chk_ssse3_back
+# define MEMPCPY __mempcpy_ssse3_back
+# define MEMPCPY_CHK __mempcpy_chk_ssse3_back
+#endif
+
+#define JMPTBL(I, B) I - B
+
+/* Branch to an entry in a jump table. TABLE is a jump table with
+   relative offsets. INDEX is a register contains the index into the
+   jump table. SCALE is the scale of INDEX.
*/ +#define BRANCH_TO_JMPTBL_ENTRY(TABLE, INDEX, SCALE) \ + lea TABLE(%rip), %r11; \ + movslq (%r11, INDEX, SCALE), INDEX; \ + lea (%r11, INDEX), INDEX; \ + _CET_NOTRACK jmp *INDEX; \ + ud2 + + .section .text.ssse3,"ax",@progbits +#if !defined USE_AS_MEMPCPY && !defined USE_AS_MEMMOVE +ENTRY (MEMPCPY_CHK) + cmp %RDX_LP, %RCX_LP + jb HIDDEN_JUMPTARGET (__chk_fail) +END (MEMPCPY_CHK) + +ENTRY (MEMPCPY) + mov %RDI_LP, %RAX_LP + add %RDX_LP, %RAX_LP + jmp L(start) +END (MEMPCPY) +#endif + +#if !defined USE_AS_BCOPY +ENTRY (MEMCPY_CHK) + cmp %RDX_LP, %RCX_LP + jb HIDDEN_JUMPTARGET (__chk_fail) +END (MEMCPY_CHK) +#endif + +ENTRY (MEMCPY) + mov %RDI_LP, %RAX_LP +#ifdef USE_AS_MEMPCPY + add %RDX_LP, %RAX_LP +#endif + +#ifdef __ILP32__ + /* Clear the upper 32 bits. */ + mov %edx, %edx +#endif + +#ifdef USE_AS_MEMMOVE + cmp %rsi, %rdi + jb L(copy_forward) + je L(bwd_write_0bytes) + cmp $144, %rdx + jae L(copy_backward) + BRANCH_TO_JMPTBL_ENTRY (L(table_144_bytes_bwd), %rdx, 4) +L(copy_forward): +#endif +L(start): + cmp $144, %rdx + jae L(144bytesormore) + +L(fwd_write_less32bytes): +#ifndef USE_AS_MEMMOVE + cmp %dil, %sil + jbe L(bk_write) +#endif + add %rdx, %rsi + add %rdx, %rdi + BRANCH_TO_JMPTBL_ENTRY (L(table_144_bytes_fwd), %rdx, 4) +#ifndef USE_AS_MEMMOVE +L(bk_write): + + BRANCH_TO_JMPTBL_ENTRY (L(table_144_bytes_bwd), %rdx, 4) +#endif + + .p2align 4 +L(144bytesormore): + +#ifndef USE_AS_MEMMOVE + cmp %dil, %sil + jle L(copy_backward) +#endif + movdqu (%rsi), %xmm0 + mov %rdi, %r8 + and $-16, %rdi + add $16, %rdi + mov %rdi, %r9 + sub %r8, %r9 + sub %r9, %rdx + add %r9, %rsi + mov %rsi, %r9 + and $0xf, %r9 + jz L(shl_0) +#ifdef DATA_CACHE_SIZE + mov $DATA_CACHE_SIZE, %RCX_LP +#else + mov __x86_data_cache_size(%rip), %RCX_LP +#endif + cmp %rcx, %rdx + jae L(gobble_mem_fwd) + lea L(shl_table_fwd)(%rip), %r11 + sub $0x80, %rdx + movslq (%r11, %r9, 4), %r9 + add %r11, %r9 + _CET_NOTRACK jmp *%r9 + ud2 + + .p2align 4 +L(copy_backward): +#ifdef DATA_CACHE_SIZE + mov $DATA_CACHE_SIZE, %RCX_LP +#else + mov __x86_data_cache_size(%rip), %RCX_LP +#endif + shl $1, %rcx + cmp %rcx, %rdx + ja L(gobble_mem_bwd) + + add %rdx, %rdi + add %rdx, %rsi + movdqu -16(%rsi), %xmm0 + lea -16(%rdi), %r8 + mov %rdi, %r9 + and $0xf, %r9 + xor %r9, %rdi + sub %r9, %rsi + sub %r9, %rdx + mov %rsi, %r9 + and $0xf, %r9 + jz L(shl_0_bwd) + lea L(shl_table_bwd)(%rip), %r11 + sub $0x80, %rdx + movslq (%r11, %r9, 4), %r9 + add %r11, %r9 + _CET_NOTRACK jmp *%r9 + ud2 + + .p2align 4 +L(shl_0): + + mov %rdx, %r9 + shr $8, %r9 + add %rdx, %r9 +#ifdef DATA_CACHE_SIZE + cmp $DATA_CACHE_SIZE_HALF, %R9_LP +#else + cmp __x86_data_cache_size_half(%rip), %R9_LP +#endif + jae L(gobble_mem_fwd) + sub $0x80, %rdx + .p2align 4 +L(shl_0_loop): + movdqa (%rsi), %xmm1 + movdqa %xmm1, (%rdi) + movaps 0x10(%rsi), %xmm2 + movaps %xmm2, 0x10(%rdi) + movaps 0x20(%rsi), %xmm3 + movaps %xmm3, 0x20(%rdi) + movaps 0x30(%rsi), %xmm4 + movaps %xmm4, 0x30(%rdi) + movaps 0x40(%rsi), %xmm1 + movaps %xmm1, 0x40(%rdi) + movaps 0x50(%rsi), %xmm2 + movaps %xmm2, 0x50(%rdi) + movaps 0x60(%rsi), %xmm3 + movaps %xmm3, 0x60(%rdi) + movaps 0x70(%rsi), %xmm4 + movaps %xmm4, 0x70(%rdi) + sub $0x80, %rdx + lea 0x80(%rsi), %rsi + lea 0x80(%rdi), %rdi + jae L(shl_0_loop) + movdqu %xmm0, (%r8) + add $0x80, %rdx + add %rdx, %rsi + add %rdx, %rdi + BRANCH_TO_JMPTBL_ENTRY (L(table_144_bytes_fwd), %rdx, 4) + + .p2align 4 +L(shl_0_bwd): + sub $0x80, %rdx +L(copy_backward_loop): + movaps -0x10(%rsi), %xmm1 + movaps %xmm1, -0x10(%rdi) + movaps -0x20(%rsi), %xmm2 + movaps %xmm2, 
-0x20(%rdi) + movaps -0x30(%rsi), %xmm3 + movaps %xmm3, -0x30(%rdi) + movaps -0x40(%rsi), %xmm4 + movaps %xmm4, -0x40(%rdi) + movaps -0x50(%rsi), %xmm5 + movaps %xmm5, -0x50(%rdi) + movaps -0x60(%rsi), %xmm5 + movaps %xmm5, -0x60(%rdi) + movaps -0x70(%rsi), %xmm5 + movaps %xmm5, -0x70(%rdi) + movaps -0x80(%rsi), %xmm5 + movaps %xmm5, -0x80(%rdi) + sub $0x80, %rdx + lea -0x80(%rdi), %rdi + lea -0x80(%rsi), %rsi + jae L(copy_backward_loop) + + movdqu %xmm0, (%r8) + add $0x80, %rdx + sub %rdx, %rdi + sub %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY (L(table_144_bytes_bwd), %rdx, 4) + + .p2align 4 +L(shl_1): + sub $0x80, %rdx + movaps -0x01(%rsi), %xmm1 + movaps 0x0f(%rsi), %xmm2 + movaps 0x1f(%rsi), %xmm3 + movaps 0x2f(%rsi), %xmm4 + movaps 0x3f(%rsi), %xmm5 + movaps 0x4f(%rsi), %xmm6 + movaps 0x5f(%rsi), %xmm7 + movaps 0x6f(%rsi), %xmm8 + movaps 0x7f(%rsi), %xmm9 + lea 0x80(%rsi), %rsi + palignr $1, %xmm8, %xmm9 + movaps %xmm9, 0x70(%rdi) + palignr $1, %xmm7, %xmm8 + movaps %xmm8, 0x60(%rdi) + palignr $1, %xmm6, %xmm7 + movaps %xmm7, 0x50(%rdi) + palignr $1, %xmm5, %xmm6 + movaps %xmm6, 0x40(%rdi) + palignr $1, %xmm4, %xmm5 + movaps %xmm5, 0x30(%rdi) + palignr $1, %xmm3, %xmm4 + movaps %xmm4, 0x20(%rdi) + palignr $1, %xmm2, %xmm3 + movaps %xmm3, 0x10(%rdi) + palignr $1, %xmm1, %xmm2 + movaps %xmm2, (%rdi) + lea 0x80(%rdi), %rdi + jae L(shl_1) + movdqu %xmm0, (%r8) + add $0x80, %rdx + add %rdx, %rdi + add %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY (L(table_144_bytes_fwd), %rdx, 4) + + .p2align 4 +L(shl_1_bwd): + movaps -0x01(%rsi), %xmm1 + + movaps -0x11(%rsi), %xmm2 + palignr $1, %xmm2, %xmm1 + movaps %xmm1, -0x10(%rdi) + + movaps -0x21(%rsi), %xmm3 + palignr $1, %xmm3, %xmm2 + movaps %xmm2, -0x20(%rdi) + + movaps -0x31(%rsi), %xmm4 + palignr $1, %xmm4, %xmm3 + movaps %xmm3, -0x30(%rdi) + + movaps -0x41(%rsi), %xmm5 + palignr $1, %xmm5, %xmm4 + movaps %xmm4, -0x40(%rdi) + + movaps -0x51(%rsi), %xmm6 + palignr $1, %xmm6, %xmm5 + movaps %xmm5, -0x50(%rdi) + + movaps -0x61(%rsi), %xmm7 + palignr $1, %xmm7, %xmm6 + movaps %xmm6, -0x60(%rdi) + + movaps -0x71(%rsi), %xmm8 + palignr $1, %xmm8, %xmm7 + movaps %xmm7, -0x70(%rdi) + + movaps -0x81(%rsi), %xmm9 + palignr $1, %xmm9, %xmm8 + movaps %xmm8, -0x80(%rdi) + + sub $0x80, %rdx + lea -0x80(%rdi), %rdi + lea -0x80(%rsi), %rsi + jae L(shl_1_bwd) + movdqu %xmm0, (%r8) + add $0x80, %rdx + sub %rdx, %rdi + sub %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY (L(table_144_bytes_bwd), %rdx, 4) + + .p2align 4 +L(shl_2): + sub $0x80, %rdx + movaps -0x02(%rsi), %xmm1 + movaps 0x0e(%rsi), %xmm2 + movaps 0x1e(%rsi), %xmm3 + movaps 0x2e(%rsi), %xmm4 + movaps 0x3e(%rsi), %xmm5 + movaps 0x4e(%rsi), %xmm6 + movaps 0x5e(%rsi), %xmm7 + movaps 0x6e(%rsi), %xmm8 + movaps 0x7e(%rsi), %xmm9 + lea 0x80(%rsi), %rsi + palignr $2, %xmm8, %xmm9 + movaps %xmm9, 0x70(%rdi) + palignr $2, %xmm7, %xmm8 + movaps %xmm8, 0x60(%rdi) + palignr $2, %xmm6, %xmm7 + movaps %xmm7, 0x50(%rdi) + palignr $2, %xmm5, %xmm6 + movaps %xmm6, 0x40(%rdi) + palignr $2, %xmm4, %xmm5 + movaps %xmm5, 0x30(%rdi) + palignr $2, %xmm3, %xmm4 + movaps %xmm4, 0x20(%rdi) + palignr $2, %xmm2, %xmm3 + movaps %xmm3, 0x10(%rdi) + palignr $2, %xmm1, %xmm2 + movaps %xmm2, (%rdi) + lea 0x80(%rdi), %rdi + jae L(shl_2) + movdqu %xmm0, (%r8) + add $0x80, %rdx + add %rdx, %rdi + add %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY (L(table_144_bytes_fwd), %rdx, 4) + + .p2align 4 +L(shl_2_bwd): + movaps -0x02(%rsi), %xmm1 + + movaps -0x12(%rsi), %xmm2 + palignr $2, %xmm2, %xmm1 + movaps %xmm1, -0x10(%rdi) + + movaps -0x22(%rsi), %xmm3 + palignr $2, %xmm3, %xmm2 
+ movaps %xmm2, -0x20(%rdi) + + movaps -0x32(%rsi), %xmm4 + palignr $2, %xmm4, %xmm3 + movaps %xmm3, -0x30(%rdi) + + movaps -0x42(%rsi), %xmm5 + palignr $2, %xmm5, %xmm4 + movaps %xmm4, -0x40(%rdi) + + movaps -0x52(%rsi), %xmm6 + palignr $2, %xmm6, %xmm5 + movaps %xmm5, -0x50(%rdi) + + movaps -0x62(%rsi), %xmm7 + palignr $2, %xmm7, %xmm6 + movaps %xmm6, -0x60(%rdi) + + movaps -0x72(%rsi), %xmm8 + palignr $2, %xmm8, %xmm7 + movaps %xmm7, -0x70(%rdi) + + movaps -0x82(%rsi), %xmm9 + palignr $2, %xmm9, %xmm8 + movaps %xmm8, -0x80(%rdi) + + sub $0x80, %rdx + lea -0x80(%rdi), %rdi + lea -0x80(%rsi), %rsi + jae L(shl_2_bwd) + movdqu %xmm0, (%r8) + add $0x80, %rdx + sub %rdx, %rdi + sub %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY (L(table_144_bytes_bwd), %rdx, 4) + + .p2align 4 +L(shl_3): + sub $0x80, %rdx + movaps -0x03(%rsi), %xmm1 + movaps 0x0d(%rsi), %xmm2 + movaps 0x1d(%rsi), %xmm3 + movaps 0x2d(%rsi), %xmm4 + movaps 0x3d(%rsi), %xmm5 + movaps 0x4d(%rsi), %xmm6 + movaps 0x5d(%rsi), %xmm7 + movaps 0x6d(%rsi), %xmm8 + movaps 0x7d(%rsi), %xmm9 + lea 0x80(%rsi), %rsi + palignr $3, %xmm8, %xmm9 + movaps %xmm9, 0x70(%rdi) + palignr $3, %xmm7, %xmm8 + movaps %xmm8, 0x60(%rdi) + palignr $3, %xmm6, %xmm7 + movaps %xmm7, 0x50(%rdi) + palignr $3, %xmm5, %xmm6 + movaps %xmm6, 0x40(%rdi) + palignr $3, %xmm4, %xmm5 + movaps %xmm5, 0x30(%rdi) + palignr $3, %xmm3, %xmm4 + movaps %xmm4, 0x20(%rdi) + palignr $3, %xmm2, %xmm3 + movaps %xmm3, 0x10(%rdi) + palignr $3, %xmm1, %xmm2 + movaps %xmm2, (%rdi) + lea 0x80(%rdi), %rdi + jae L(shl_3) + movdqu %xmm0, (%r8) + add $0x80, %rdx + add %rdx, %rdi + add %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY (L(table_144_bytes_fwd), %rdx, 4) + + .p2align 4 +L(shl_3_bwd): + movaps -0x03(%rsi), %xmm1 + + movaps -0x13(%rsi), %xmm2 + palignr $3, %xmm2, %xmm1 + movaps %xmm1, -0x10(%rdi) + + movaps -0x23(%rsi), %xmm3 + palignr $3, %xmm3, %xmm2 + movaps %xmm2, -0x20(%rdi) + + movaps -0x33(%rsi), %xmm4 + palignr $3, %xmm4, %xmm3 + movaps %xmm3, -0x30(%rdi) + + movaps -0x43(%rsi), %xmm5 + palignr $3, %xmm5, %xmm4 + movaps %xmm4, -0x40(%rdi) + + movaps -0x53(%rsi), %xmm6 + palignr $3, %xmm6, %xmm5 + movaps %xmm5, -0x50(%rdi) + + movaps -0x63(%rsi), %xmm7 + palignr $3, %xmm7, %xmm6 + movaps %xmm6, -0x60(%rdi) + + movaps -0x73(%rsi), %xmm8 + palignr $3, %xmm8, %xmm7 + movaps %xmm7, -0x70(%rdi) + + movaps -0x83(%rsi), %xmm9 + palignr $3, %xmm9, %xmm8 + movaps %xmm8, -0x80(%rdi) + + sub $0x80, %rdx + lea -0x80(%rdi), %rdi + lea -0x80(%rsi), %rsi + jae L(shl_3_bwd) + movdqu %xmm0, (%r8) + add $0x80, %rdx + sub %rdx, %rdi + sub %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY (L(table_144_bytes_bwd), %rdx, 4) + + .p2align 4 +L(shl_4): + sub $0x80, %rdx + movaps -0x04(%rsi), %xmm1 + movaps 0x0c(%rsi), %xmm2 + movaps 0x1c(%rsi), %xmm3 + movaps 0x2c(%rsi), %xmm4 + movaps 0x3c(%rsi), %xmm5 + movaps 0x4c(%rsi), %xmm6 + movaps 0x5c(%rsi), %xmm7 + movaps 0x6c(%rsi), %xmm8 + movaps 0x7c(%rsi), %xmm9 + lea 0x80(%rsi), %rsi + palignr $4, %xmm8, %xmm9 + movaps %xmm9, 0x70(%rdi) + palignr $4, %xmm7, %xmm8 + movaps %xmm8, 0x60(%rdi) + palignr $4, %xmm6, %xmm7 + movaps %xmm7, 0x50(%rdi) + palignr $4, %xmm5, %xmm6 + movaps %xmm6, 0x40(%rdi) + palignr $4, %xmm4, %xmm5 + movaps %xmm5, 0x30(%rdi) + palignr $4, %xmm3, %xmm4 + movaps %xmm4, 0x20(%rdi) + palignr $4, %xmm2, %xmm3 + movaps %xmm3, 0x10(%rdi) + palignr $4, %xmm1, %xmm2 + movaps %xmm2, (%rdi) + lea 0x80(%rdi), %rdi + jae L(shl_4) + movdqu %xmm0, (%r8) + add $0x80, %rdx + add %rdx, %rdi + add %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY (L(table_144_bytes_fwd), %rdx, 4) + + .p2align 4 
+L(shl_4_bwd): + movaps -0x04(%rsi), %xmm1 + + movaps -0x14(%rsi), %xmm2 + palignr $4, %xmm2, %xmm1 + movaps %xmm1, -0x10(%rdi) + + movaps -0x24(%rsi), %xmm3 + palignr $4, %xmm3, %xmm2 + movaps %xmm2, -0x20(%rdi) + + movaps -0x34(%rsi), %xmm4 + palignr $4, %xmm4, %xmm3 + movaps %xmm3, -0x30(%rdi) + + movaps -0x44(%rsi), %xmm5 + palignr $4, %xmm5, %xmm4 + movaps %xmm4, -0x40(%rdi) + + movaps -0x54(%rsi), %xmm6 + palignr $4, %xmm6, %xmm5 + movaps %xmm5, -0x50(%rdi) + + movaps -0x64(%rsi), %xmm7 + palignr $4, %xmm7, %xmm6 + movaps %xmm6, -0x60(%rdi) + + movaps -0x74(%rsi), %xmm8 + palignr $4, %xmm8, %xmm7 + movaps %xmm7, -0x70(%rdi) + + movaps -0x84(%rsi), %xmm9 + palignr $4, %xmm9, %xmm8 + movaps %xmm8, -0x80(%rdi) + + sub $0x80, %rdx + lea -0x80(%rdi), %rdi + lea -0x80(%rsi), %rsi + jae L(shl_4_bwd) + movdqu %xmm0, (%r8) + add $0x80, %rdx + sub %rdx, %rdi + sub %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY (L(table_144_bytes_bwd), %rdx, 4) + + .p2align 4 +L(shl_5): + sub $0x80, %rdx + movaps -0x05(%rsi), %xmm1 + movaps 0x0b(%rsi), %xmm2 + movaps 0x1b(%rsi), %xmm3 + movaps 0x2b(%rsi), %xmm4 + movaps 0x3b(%rsi), %xmm5 + movaps 0x4b(%rsi), %xmm6 + movaps 0x5b(%rsi), %xmm7 + movaps 0x6b(%rsi), %xmm8 + movaps 0x7b(%rsi), %xmm9 + lea 0x80(%rsi), %rsi + palignr $5, %xmm8, %xmm9 + movaps %xmm9, 0x70(%rdi) + palignr $5, %xmm7, %xmm8 + movaps %xmm8, 0x60(%rdi) + palignr $5, %xmm6, %xmm7 + movaps %xmm7, 0x50(%rdi) + palignr $5, %xmm5, %xmm6 + movaps %xmm6, 0x40(%rdi) + palignr $5, %xmm4, %xmm5 + movaps %xmm5, 0x30(%rdi) + palignr $5, %xmm3, %xmm4 + movaps %xmm4, 0x20(%rdi) + palignr $5, %xmm2, %xmm3 + movaps %xmm3, 0x10(%rdi) + palignr $5, %xmm1, %xmm2 + movaps %xmm2, (%rdi) + lea 0x80(%rdi), %rdi + jae L(shl_5) + movdqu %xmm0, (%r8) + add $0x80, %rdx + add %rdx, %rdi + add %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY (L(table_144_bytes_fwd), %rdx, 4) + + .p2align 4 +L(shl_5_bwd): + movaps -0x05(%rsi), %xmm1 + + movaps -0x15(%rsi), %xmm2 + palignr $5, %xmm2, %xmm1 + movaps %xmm1, -0x10(%rdi) + + movaps -0x25(%rsi), %xmm3 + palignr $5, %xmm3, %xmm2 + movaps %xmm2, -0x20(%rdi) + + movaps -0x35(%rsi), %xmm4 + palignr $5, %xmm4, %xmm3 + movaps %xmm3, -0x30(%rdi) + + movaps -0x45(%rsi), %xmm5 + palignr $5, %xmm5, %xmm4 + movaps %xmm4, -0x40(%rdi) + + movaps -0x55(%rsi), %xmm6 + palignr $5, %xmm6, %xmm5 + movaps %xmm5, -0x50(%rdi) + + movaps -0x65(%rsi), %xmm7 + palignr $5, %xmm7, %xmm6 + movaps %xmm6, -0x60(%rdi) + + movaps -0x75(%rsi), %xmm8 + palignr $5, %xmm8, %xmm7 + movaps %xmm7, -0x70(%rdi) + + movaps -0x85(%rsi), %xmm9 + palignr $5, %xmm9, %xmm8 + movaps %xmm8, -0x80(%rdi) + + sub $0x80, %rdx + lea -0x80(%rdi), %rdi + lea -0x80(%rsi), %rsi + jae L(shl_5_bwd) + movdqu %xmm0, (%r8) + add $0x80, %rdx + sub %rdx, %rdi + sub %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY (L(table_144_bytes_bwd), %rdx, 4) + + .p2align 4 +L(shl_6): + sub $0x80, %rdx + movaps -0x06(%rsi), %xmm1 + movaps 0x0a(%rsi), %xmm2 + movaps 0x1a(%rsi), %xmm3 + movaps 0x2a(%rsi), %xmm4 + movaps 0x3a(%rsi), %xmm5 + movaps 0x4a(%rsi), %xmm6 + movaps 0x5a(%rsi), %xmm7 + movaps 0x6a(%rsi), %xmm8 + movaps 0x7a(%rsi), %xmm9 + lea 0x80(%rsi), %rsi + palignr $6, %xmm8, %xmm9 + movaps %xmm9, 0x70(%rdi) + palignr $6, %xmm7, %xmm8 + movaps %xmm8, 0x60(%rdi) + palignr $6, %xmm6, %xmm7 + movaps %xmm7, 0x50(%rdi) + palignr $6, %xmm5, %xmm6 + movaps %xmm6, 0x40(%rdi) + palignr $6, %xmm4, %xmm5 + movaps %xmm5, 0x30(%rdi) + palignr $6, %xmm3, %xmm4 + movaps %xmm4, 0x20(%rdi) + palignr $6, %xmm2, %xmm3 + movaps %xmm3, 0x10(%rdi) + palignr $6, %xmm1, %xmm2 + movaps %xmm2, (%rdi) + 
lea 0x80(%rdi), %rdi + jae L(shl_6) + movdqu %xmm0, (%r8) + add $0x80, %rdx + add %rdx, %rdi + add %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY (L(table_144_bytes_fwd), %rdx, 4) + + .p2align 4 +L(shl_6_bwd): + movaps -0x06(%rsi), %xmm1 + + movaps -0x16(%rsi), %xmm2 + palignr $6, %xmm2, %xmm1 + movaps %xmm1, -0x10(%rdi) + + movaps -0x26(%rsi), %xmm3 + palignr $6, %xmm3, %xmm2 + movaps %xmm2, -0x20(%rdi) + + movaps -0x36(%rsi), %xmm4 + palignr $6, %xmm4, %xmm3 + movaps %xmm3, -0x30(%rdi) + + movaps -0x46(%rsi), %xmm5 + palignr $6, %xmm5, %xmm4 + movaps %xmm4, -0x40(%rdi) + + movaps -0x56(%rsi), %xmm6 + palignr $6, %xmm6, %xmm5 + movaps %xmm5, -0x50(%rdi) + + movaps -0x66(%rsi), %xmm7 + palignr $6, %xmm7, %xmm6 + movaps %xmm6, -0x60(%rdi) + + movaps -0x76(%rsi), %xmm8 + palignr $6, %xmm8, %xmm7 + movaps %xmm7, -0x70(%rdi) + + movaps -0x86(%rsi), %xmm9 + palignr $6, %xmm9, %xmm8 + movaps %xmm8, -0x80(%rdi) + + sub $0x80, %rdx + lea -0x80(%rdi), %rdi + lea -0x80(%rsi), %rsi + jae L(shl_6_bwd) + movdqu %xmm0, (%r8) + add $0x80, %rdx + sub %rdx, %rdi + sub %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY (L(table_144_bytes_bwd), %rdx, 4) + + .p2align 4 +L(shl_7): + sub $0x80, %rdx + movaps -0x07(%rsi), %xmm1 + movaps 0x09(%rsi), %xmm2 + movaps 0x19(%rsi), %xmm3 + movaps 0x29(%rsi), %xmm4 + movaps 0x39(%rsi), %xmm5 + movaps 0x49(%rsi), %xmm6 + movaps 0x59(%rsi), %xmm7 + movaps 0x69(%rsi), %xmm8 + movaps 0x79(%rsi), %xmm9 + lea 0x80(%rsi), %rsi + palignr $7, %xmm8, %xmm9 + movaps %xmm9, 0x70(%rdi) + palignr $7, %xmm7, %xmm8 + movaps %xmm8, 0x60(%rdi) + palignr $7, %xmm6, %xmm7 + movaps %xmm7, 0x50(%rdi) + palignr $7, %xmm5, %xmm6 + movaps %xmm6, 0x40(%rdi) + palignr $7, %xmm4, %xmm5 + movaps %xmm5, 0x30(%rdi) + palignr $7, %xmm3, %xmm4 + movaps %xmm4, 0x20(%rdi) + palignr $7, %xmm2, %xmm3 + movaps %xmm3, 0x10(%rdi) + palignr $7, %xmm1, %xmm2 + movaps %xmm2, (%rdi) + lea 0x80(%rdi), %rdi + jae L(shl_7) + movdqu %xmm0, (%r8) + add $0x80, %rdx + add %rdx, %rdi + add %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY (L(table_144_bytes_fwd), %rdx, 4) + + .p2align 4 +L(shl_7_bwd): + movaps -0x07(%rsi), %xmm1 + + movaps -0x17(%rsi), %xmm2 + palignr $7, %xmm2, %xmm1 + movaps %xmm1, -0x10(%rdi) + + movaps -0x27(%rsi), %xmm3 + palignr $7, %xmm3, %xmm2 + movaps %xmm2, -0x20(%rdi) + + movaps -0x37(%rsi), %xmm4 + palignr $7, %xmm4, %xmm3 + movaps %xmm3, -0x30(%rdi) + + movaps -0x47(%rsi), %xmm5 + palignr $7, %xmm5, %xmm4 + movaps %xmm4, -0x40(%rdi) + + movaps -0x57(%rsi), %xmm6 + palignr $7, %xmm6, %xmm5 + movaps %xmm5, -0x50(%rdi) + + movaps -0x67(%rsi), %xmm7 + palignr $7, %xmm7, %xmm6 + movaps %xmm6, -0x60(%rdi) + + movaps -0x77(%rsi), %xmm8 + palignr $7, %xmm8, %xmm7 + movaps %xmm7, -0x70(%rdi) + + movaps -0x87(%rsi), %xmm9 + palignr $7, %xmm9, %xmm8 + movaps %xmm8, -0x80(%rdi) + + sub $0x80, %rdx + lea -0x80(%rdi), %rdi + lea -0x80(%rsi), %rsi + jae L(shl_7_bwd) + movdqu %xmm0, (%r8) + add $0x80, %rdx + sub %rdx, %rdi + sub %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY (L(table_144_bytes_bwd), %rdx, 4) + + .p2align 4 +L(shl_8): + sub $0x80, %rdx + movaps -0x08(%rsi), %xmm1 + movaps 0x08(%rsi), %xmm2 + movaps 0x18(%rsi), %xmm3 + movaps 0x28(%rsi), %xmm4 + movaps 0x38(%rsi), %xmm5 + movaps 0x48(%rsi), %xmm6 + movaps 0x58(%rsi), %xmm7 + movaps 0x68(%rsi), %xmm8 + movaps 0x78(%rsi), %xmm9 + lea 0x80(%rsi), %rsi + palignr $8, %xmm8, %xmm9 + movaps %xmm9, 0x70(%rdi) + palignr $8, %xmm7, %xmm8 + movaps %xmm8, 0x60(%rdi) + palignr $8, %xmm6, %xmm7 + movaps %xmm7, 0x50(%rdi) + palignr $8, %xmm5, %xmm6 + movaps %xmm6, 0x40(%rdi) + palignr $8, %xmm4, %xmm5 + 
movaps %xmm5, 0x30(%rdi) + palignr $8, %xmm3, %xmm4 + movaps %xmm4, 0x20(%rdi) + palignr $8, %xmm2, %xmm3 + movaps %xmm3, 0x10(%rdi) + palignr $8, %xmm1, %xmm2 + movaps %xmm2, (%rdi) + lea 0x80(%rdi), %rdi + jae L(shl_8) + movdqu %xmm0, (%r8) + add $0x80, %rdx + add %rdx, %rdi + add %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY (L(table_144_bytes_fwd), %rdx, 4) + + .p2align 4 +L(shl_8_bwd): + movaps -0x08(%rsi), %xmm1 + + movaps -0x18(%rsi), %xmm2 + palignr $8, %xmm2, %xmm1 + movaps %xmm1, -0x10(%rdi) + + movaps -0x28(%rsi), %xmm3 + palignr $8, %xmm3, %xmm2 + movaps %xmm2, -0x20(%rdi) + + movaps -0x38(%rsi), %xmm4 + palignr $8, %xmm4, %xmm3 + movaps %xmm3, -0x30(%rdi) + + movaps -0x48(%rsi), %xmm5 + palignr $8, %xmm5, %xmm4 + movaps %xmm4, -0x40(%rdi) + + movaps -0x58(%rsi), %xmm6 + palignr $8, %xmm6, %xmm5 + movaps %xmm5, -0x50(%rdi) + + movaps -0x68(%rsi), %xmm7 + palignr $8, %xmm7, %xmm6 + movaps %xmm6, -0x60(%rdi) + + movaps -0x78(%rsi), %xmm8 + palignr $8, %xmm8, %xmm7 + movaps %xmm7, -0x70(%rdi) + + movaps -0x88(%rsi), %xmm9 + palignr $8, %xmm9, %xmm8 + movaps %xmm8, -0x80(%rdi) + + sub $0x80, %rdx + lea -0x80(%rdi), %rdi + lea -0x80(%rsi), %rsi + jae L(shl_8_bwd) +L(shl_8_end_bwd): + movdqu %xmm0, (%r8) + add $0x80, %rdx + sub %rdx, %rdi + sub %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY (L(table_144_bytes_bwd), %rdx, 4) + + .p2align 4 +L(shl_9): + sub $0x80, %rdx + movaps -0x09(%rsi), %xmm1 + movaps 0x07(%rsi), %xmm2 + movaps 0x17(%rsi), %xmm3 + movaps 0x27(%rsi), %xmm4 + movaps 0x37(%rsi), %xmm5 + movaps 0x47(%rsi), %xmm6 + movaps 0x57(%rsi), %xmm7 + movaps 0x67(%rsi), %xmm8 + movaps 0x77(%rsi), %xmm9 + lea 0x80(%rsi), %rsi + palignr $9, %xmm8, %xmm9 + movaps %xmm9, 0x70(%rdi) + palignr $9, %xmm7, %xmm8 + movaps %xmm8, 0x60(%rdi) + palignr $9, %xmm6, %xmm7 + movaps %xmm7, 0x50(%rdi) + palignr $9, %xmm5, %xmm6 + movaps %xmm6, 0x40(%rdi) + palignr $9, %xmm4, %xmm5 + movaps %xmm5, 0x30(%rdi) + palignr $9, %xmm3, %xmm4 + movaps %xmm4, 0x20(%rdi) + palignr $9, %xmm2, %xmm3 + movaps %xmm3, 0x10(%rdi) + palignr $9, %xmm1, %xmm2 + movaps %xmm2, (%rdi) + lea 0x80(%rdi), %rdi + jae L(shl_9) + movdqu %xmm0, (%r8) + add $0x80, %rdx + add %rdx, %rdi + add %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY (L(table_144_bytes_fwd), %rdx, 4) + + .p2align 4 +L(shl_9_bwd): + movaps -0x09(%rsi), %xmm1 + + movaps -0x19(%rsi), %xmm2 + palignr $9, %xmm2, %xmm1 + movaps %xmm1, -0x10(%rdi) + + movaps -0x29(%rsi), %xmm3 + palignr $9, %xmm3, %xmm2 + movaps %xmm2, -0x20(%rdi) + + movaps -0x39(%rsi), %xmm4 + palignr $9, %xmm4, %xmm3 + movaps %xmm3, -0x30(%rdi) + + movaps -0x49(%rsi), %xmm5 + palignr $9, %xmm5, %xmm4 + movaps %xmm4, -0x40(%rdi) + + movaps -0x59(%rsi), %xmm6 + palignr $9, %xmm6, %xmm5 + movaps %xmm5, -0x50(%rdi) + + movaps -0x69(%rsi), %xmm7 + palignr $9, %xmm7, %xmm6 + movaps %xmm6, -0x60(%rdi) + + movaps -0x79(%rsi), %xmm8 + palignr $9, %xmm8, %xmm7 + movaps %xmm7, -0x70(%rdi) + + movaps -0x89(%rsi), %xmm9 + palignr $9, %xmm9, %xmm8 + movaps %xmm8, -0x80(%rdi) + + sub $0x80, %rdx + lea -0x80(%rdi), %rdi + lea -0x80(%rsi), %rsi + jae L(shl_9_bwd) + movdqu %xmm0, (%r8) + add $0x80, %rdx + sub %rdx, %rdi + sub %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY (L(table_144_bytes_bwd), %rdx, 4) + + .p2align 4 +L(shl_10): + sub $0x80, %rdx + movaps -0x0a(%rsi), %xmm1 + movaps 0x06(%rsi), %xmm2 + movaps 0x16(%rsi), %xmm3 + movaps 0x26(%rsi), %xmm4 + movaps 0x36(%rsi), %xmm5 + movaps 0x46(%rsi), %xmm6 + movaps 0x56(%rsi), %xmm7 + movaps 0x66(%rsi), %xmm8 + movaps 0x76(%rsi), %xmm9 + lea 0x80(%rsi), %rsi + palignr $10, %xmm8, %xmm9 + movaps %xmm9, 
0x70(%rdi) + palignr $10, %xmm7, %xmm8 + movaps %xmm8, 0x60(%rdi) + palignr $10, %xmm6, %xmm7 + movaps %xmm7, 0x50(%rdi) + palignr $10, %xmm5, %xmm6 + movaps %xmm6, 0x40(%rdi) + palignr $10, %xmm4, %xmm5 + movaps %xmm5, 0x30(%rdi) + palignr $10, %xmm3, %xmm4 + movaps %xmm4, 0x20(%rdi) + palignr $10, %xmm2, %xmm3 + movaps %xmm3, 0x10(%rdi) + palignr $10, %xmm1, %xmm2 + movaps %xmm2, (%rdi) + lea 0x80(%rdi), %rdi + jae L(shl_10) + movdqu %xmm0, (%r8) + add $0x80, %rdx + add %rdx, %rdi + add %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY (L(table_144_bytes_fwd), %rdx, 4) + + .p2align 4 +L(shl_10_bwd): + movaps -0x0a(%rsi), %xmm1 + + movaps -0x1a(%rsi), %xmm2 + palignr $10, %xmm2, %xmm1 + movaps %xmm1, -0x10(%rdi) + + movaps -0x2a(%rsi), %xmm3 + palignr $10, %xmm3, %xmm2 + movaps %xmm2, -0x20(%rdi) + + movaps -0x3a(%rsi), %xmm4 + palignr $10, %xmm4, %xmm3 + movaps %xmm3, -0x30(%rdi) + + movaps -0x4a(%rsi), %xmm5 + palignr $10, %xmm5, %xmm4 + movaps %xmm4, -0x40(%rdi) + + movaps -0x5a(%rsi), %xmm6 + palignr $10, %xmm6, %xmm5 + movaps %xmm5, -0x50(%rdi) + + movaps -0x6a(%rsi), %xmm7 + palignr $10, %xmm7, %xmm6 + movaps %xmm6, -0x60(%rdi) + + movaps -0x7a(%rsi), %xmm8 + palignr $10, %xmm8, %xmm7 + movaps %xmm7, -0x70(%rdi) + + movaps -0x8a(%rsi), %xmm9 + palignr $10, %xmm9, %xmm8 + movaps %xmm8, -0x80(%rdi) + + sub $0x80, %rdx + lea -0x80(%rdi), %rdi + lea -0x80(%rsi), %rsi + jae L(shl_10_bwd) + movdqu %xmm0, (%r8) + add $0x80, %rdx + sub %rdx, %rdi + sub %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY (L(table_144_bytes_bwd), %rdx, 4) + + .p2align 4 +L(shl_11): + sub $0x80, %rdx + movaps -0x0b(%rsi), %xmm1 + movaps 0x05(%rsi), %xmm2 + movaps 0x15(%rsi), %xmm3 + movaps 0x25(%rsi), %xmm4 + movaps 0x35(%rsi), %xmm5 + movaps 0x45(%rsi), %xmm6 + movaps 0x55(%rsi), %xmm7 + movaps 0x65(%rsi), %xmm8 + movaps 0x75(%rsi), %xmm9 + lea 0x80(%rsi), %rsi + palignr $11, %xmm8, %xmm9 + movaps %xmm9, 0x70(%rdi) + palignr $11, %xmm7, %xmm8 + movaps %xmm8, 0x60(%rdi) + palignr $11, %xmm6, %xmm7 + movaps %xmm7, 0x50(%rdi) + palignr $11, %xmm5, %xmm6 + movaps %xmm6, 0x40(%rdi) + palignr $11, %xmm4, %xmm5 + movaps %xmm5, 0x30(%rdi) + palignr $11, %xmm3, %xmm4 + movaps %xmm4, 0x20(%rdi) + palignr $11, %xmm2, %xmm3 + movaps %xmm3, 0x10(%rdi) + palignr $11, %xmm1, %xmm2 + movaps %xmm2, (%rdi) + lea 0x80(%rdi), %rdi + jae L(shl_11) + movdqu %xmm0, (%r8) + add $0x80, %rdx + add %rdx, %rdi + add %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY (L(table_144_bytes_fwd), %rdx, 4) + + .p2align 4 +L(shl_11_bwd): + movaps -0x0b(%rsi), %xmm1 + + movaps -0x1b(%rsi), %xmm2 + palignr $11, %xmm2, %xmm1 + movaps %xmm1, -0x10(%rdi) + + movaps -0x2b(%rsi), %xmm3 + palignr $11, %xmm3, %xmm2 + movaps %xmm2, -0x20(%rdi) + + movaps -0x3b(%rsi), %xmm4 + palignr $11, %xmm4, %xmm3 + movaps %xmm3, -0x30(%rdi) + + movaps -0x4b(%rsi), %xmm5 + palignr $11, %xmm5, %xmm4 + movaps %xmm4, -0x40(%rdi) + + movaps -0x5b(%rsi), %xmm6 + palignr $11, %xmm6, %xmm5 + movaps %xmm5, -0x50(%rdi) + + movaps -0x6b(%rsi), %xmm7 + palignr $11, %xmm7, %xmm6 + movaps %xmm6, -0x60(%rdi) + + movaps -0x7b(%rsi), %xmm8 + palignr $11, %xmm8, %xmm7 + movaps %xmm7, -0x70(%rdi) + + movaps -0x8b(%rsi), %xmm9 + palignr $11, %xmm9, %xmm8 + movaps %xmm8, -0x80(%rdi) + + sub $0x80, %rdx + lea -0x80(%rdi), %rdi + lea -0x80(%rsi), %rsi + jae L(shl_11_bwd) + movdqu %xmm0, (%r8) + add $0x80, %rdx + sub %rdx, %rdi + sub %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY (L(table_144_bytes_bwd), %rdx, 4) + + .p2align 4 +L(shl_12): + sub $0x80, %rdx + movdqa -0x0c(%rsi), %xmm1 + movaps 0x04(%rsi), %xmm2 + movaps 0x14(%rsi), %xmm3 + 
movaps 0x24(%rsi), %xmm4 + movaps 0x34(%rsi), %xmm5 + movaps 0x44(%rsi), %xmm6 + movaps 0x54(%rsi), %xmm7 + movaps 0x64(%rsi), %xmm8 + movaps 0x74(%rsi), %xmm9 + lea 0x80(%rsi), %rsi + palignr $12, %xmm8, %xmm9 + movaps %xmm9, 0x70(%rdi) + palignr $12, %xmm7, %xmm8 + movaps %xmm8, 0x60(%rdi) + palignr $12, %xmm6, %xmm7 + movaps %xmm7, 0x50(%rdi) + palignr $12, %xmm5, %xmm6 + movaps %xmm6, 0x40(%rdi) + palignr $12, %xmm4, %xmm5 + movaps %xmm5, 0x30(%rdi) + palignr $12, %xmm3, %xmm4 + movaps %xmm4, 0x20(%rdi) + palignr $12, %xmm2, %xmm3 + movaps %xmm3, 0x10(%rdi) + palignr $12, %xmm1, %xmm2 + movaps %xmm2, (%rdi) + + lea 0x80(%rdi), %rdi + jae L(shl_12) + movdqu %xmm0, (%r8) + add $0x80, %rdx + add %rdx, %rdi + add %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY (L(table_144_bytes_fwd), %rdx, 4) + + .p2align 4 +L(shl_12_bwd): + movaps -0x0c(%rsi), %xmm1 + + movaps -0x1c(%rsi), %xmm2 + palignr $12, %xmm2, %xmm1 + movaps %xmm1, -0x10(%rdi) + + movaps -0x2c(%rsi), %xmm3 + palignr $12, %xmm3, %xmm2 + movaps %xmm2, -0x20(%rdi) + + movaps -0x3c(%rsi), %xmm4 + palignr $12, %xmm4, %xmm3 + movaps %xmm3, -0x30(%rdi) + + movaps -0x4c(%rsi), %xmm5 + palignr $12, %xmm5, %xmm4 + movaps %xmm4, -0x40(%rdi) + + movaps -0x5c(%rsi), %xmm6 + palignr $12, %xmm6, %xmm5 + movaps %xmm5, -0x50(%rdi) + + movaps -0x6c(%rsi), %xmm7 + palignr $12, %xmm7, %xmm6 + movaps %xmm6, -0x60(%rdi) + + movaps -0x7c(%rsi), %xmm8 + palignr $12, %xmm8, %xmm7 + movaps %xmm7, -0x70(%rdi) + + movaps -0x8c(%rsi), %xmm9 + palignr $12, %xmm9, %xmm8 + movaps %xmm8, -0x80(%rdi) + + sub $0x80, %rdx + lea -0x80(%rdi), %rdi + lea -0x80(%rsi), %rsi + jae L(shl_12_bwd) + movdqu %xmm0, (%r8) + add $0x80, %rdx + sub %rdx, %rdi + sub %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY (L(table_144_bytes_bwd), %rdx, 4) + + .p2align 4 +L(shl_13): + sub $0x80, %rdx + movaps -0x0d(%rsi), %xmm1 + movaps 0x03(%rsi), %xmm2 + movaps 0x13(%rsi), %xmm3 + movaps 0x23(%rsi), %xmm4 + movaps 0x33(%rsi), %xmm5 + movaps 0x43(%rsi), %xmm6 + movaps 0x53(%rsi), %xmm7 + movaps 0x63(%rsi), %xmm8 + movaps 0x73(%rsi), %xmm9 + lea 0x80(%rsi), %rsi + palignr $13, %xmm8, %xmm9 + movaps %xmm9, 0x70(%rdi) + palignr $13, %xmm7, %xmm8 + movaps %xmm8, 0x60(%rdi) + palignr $13, %xmm6, %xmm7 + movaps %xmm7, 0x50(%rdi) + palignr $13, %xmm5, %xmm6 + movaps %xmm6, 0x40(%rdi) + palignr $13, %xmm4, %xmm5 + movaps %xmm5, 0x30(%rdi) + palignr $13, %xmm3, %xmm4 + movaps %xmm4, 0x20(%rdi) + palignr $13, %xmm2, %xmm3 + movaps %xmm3, 0x10(%rdi) + palignr $13, %xmm1, %xmm2 + movaps %xmm2, (%rdi) + lea 0x80(%rdi), %rdi + jae L(shl_13) + movdqu %xmm0, (%r8) + add $0x80, %rdx + add %rdx, %rdi + add %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY (L(table_144_bytes_fwd), %rdx, 4) + + .p2align 4 +L(shl_13_bwd): + movaps -0x0d(%rsi), %xmm1 + + movaps -0x1d(%rsi), %xmm2 + palignr $13, %xmm2, %xmm1 + movaps %xmm1, -0x10(%rdi) + + movaps -0x2d(%rsi), %xmm3 + palignr $13, %xmm3, %xmm2 + movaps %xmm2, -0x20(%rdi) + + movaps -0x3d(%rsi), %xmm4 + palignr $13, %xmm4, %xmm3 + movaps %xmm3, -0x30(%rdi) + + movaps -0x4d(%rsi), %xmm5 + palignr $13, %xmm5, %xmm4 + movaps %xmm4, -0x40(%rdi) + + movaps -0x5d(%rsi), %xmm6 + palignr $13, %xmm6, %xmm5 + movaps %xmm5, -0x50(%rdi) + + movaps -0x6d(%rsi), %xmm7 + palignr $13, %xmm7, %xmm6 + movaps %xmm6, -0x60(%rdi) + + movaps -0x7d(%rsi), %xmm8 + palignr $13, %xmm8, %xmm7 + movaps %xmm7, -0x70(%rdi) + + movaps -0x8d(%rsi), %xmm9 + palignr $13, %xmm9, %xmm8 + movaps %xmm8, -0x80(%rdi) + + sub $0x80, %rdx + lea -0x80(%rdi), %rdi + lea -0x80(%rsi), %rsi + jae L(shl_13_bwd) + movdqu %xmm0, (%r8) + add $0x80, 
%rdx + sub %rdx, %rdi + sub %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY (L(table_144_bytes_bwd), %rdx, 4) + + .p2align 4 +L(shl_14): + sub $0x80, %rdx + movaps -0x0e(%rsi), %xmm1 + movaps 0x02(%rsi), %xmm2 + movaps 0x12(%rsi), %xmm3 + movaps 0x22(%rsi), %xmm4 + movaps 0x32(%rsi), %xmm5 + movaps 0x42(%rsi), %xmm6 + movaps 0x52(%rsi), %xmm7 + movaps 0x62(%rsi), %xmm8 + movaps 0x72(%rsi), %xmm9 + lea 0x80(%rsi), %rsi + palignr $14, %xmm8, %xmm9 + movaps %xmm9, 0x70(%rdi) + palignr $14, %xmm7, %xmm8 + movaps %xmm8, 0x60(%rdi) + palignr $14, %xmm6, %xmm7 + movaps %xmm7, 0x50(%rdi) + palignr $14, %xmm5, %xmm6 + movaps %xmm6, 0x40(%rdi) + palignr $14, %xmm4, %xmm5 + movaps %xmm5, 0x30(%rdi) + palignr $14, %xmm3, %xmm4 + movaps %xmm4, 0x20(%rdi) + palignr $14, %xmm2, %xmm3 + movaps %xmm3, 0x10(%rdi) + palignr $14, %xmm1, %xmm2 + movaps %xmm2, (%rdi) + lea 0x80(%rdi), %rdi + jae L(shl_14) + movdqu %xmm0, (%r8) + add $0x80, %rdx + add %rdx, %rdi + add %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY (L(table_144_bytes_fwd), %rdx, 4) + + .p2align 4 +L(shl_14_bwd): + movaps -0x0e(%rsi), %xmm1 + + movaps -0x1e(%rsi), %xmm2 + palignr $14, %xmm2, %xmm1 + movaps %xmm1, -0x10(%rdi) + + movaps -0x2e(%rsi), %xmm3 + palignr $14, %xmm3, %xmm2 + movaps %xmm2, -0x20(%rdi) + + movaps -0x3e(%rsi), %xmm4 + palignr $14, %xmm4, %xmm3 + movaps %xmm3, -0x30(%rdi) + + movaps -0x4e(%rsi), %xmm5 + palignr $14, %xmm5, %xmm4 + movaps %xmm4, -0x40(%rdi) + + movaps -0x5e(%rsi), %xmm6 + palignr $14, %xmm6, %xmm5 + movaps %xmm5, -0x50(%rdi) + + movaps -0x6e(%rsi), %xmm7 + palignr $14, %xmm7, %xmm6 + movaps %xmm6, -0x60(%rdi) + + movaps -0x7e(%rsi), %xmm8 + palignr $14, %xmm8, %xmm7 + movaps %xmm7, -0x70(%rdi) + + movaps -0x8e(%rsi), %xmm9 + palignr $14, %xmm9, %xmm8 + movaps %xmm8, -0x80(%rdi) + + sub $0x80, %rdx + lea -0x80(%rdi), %rdi + lea -0x80(%rsi), %rsi + jae L(shl_14_bwd) + movdqu %xmm0, (%r8) + add $0x80, %rdx + sub %rdx, %rdi + sub %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY (L(table_144_bytes_bwd), %rdx, 4) + + .p2align 4 +L(shl_15): + sub $0x80, %rdx + movaps -0x0f(%rsi), %xmm1 + movaps 0x01(%rsi), %xmm2 + movaps 0x11(%rsi), %xmm3 + movaps 0x21(%rsi), %xmm4 + movaps 0x31(%rsi), %xmm5 + movaps 0x41(%rsi), %xmm6 + movaps 0x51(%rsi), %xmm7 + movaps 0x61(%rsi), %xmm8 + movaps 0x71(%rsi), %xmm9 + lea 0x80(%rsi), %rsi + palignr $15, %xmm8, %xmm9 + movaps %xmm9, 0x70(%rdi) + palignr $15, %xmm7, %xmm8 + movaps %xmm8, 0x60(%rdi) + palignr $15, %xmm6, %xmm7 + movaps %xmm7, 0x50(%rdi) + palignr $15, %xmm5, %xmm6 + movaps %xmm6, 0x40(%rdi) + palignr $15, %xmm4, %xmm5 + movaps %xmm5, 0x30(%rdi) + palignr $15, %xmm3, %xmm4 + movaps %xmm4, 0x20(%rdi) + palignr $15, %xmm2, %xmm3 + movaps %xmm3, 0x10(%rdi) + palignr $15, %xmm1, %xmm2 + movaps %xmm2, (%rdi) + lea 0x80(%rdi), %rdi + jae L(shl_15) + movdqu %xmm0, (%r8) + add $0x80, %rdx + add %rdx, %rdi + add %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY (L(table_144_bytes_fwd), %rdx, 4) + + .p2align 4 +L(shl_15_bwd): + movaps -0x0f(%rsi), %xmm1 + + movaps -0x1f(%rsi), %xmm2 + palignr $15, %xmm2, %xmm1 + movaps %xmm1, -0x10(%rdi) + + movaps -0x2f(%rsi), %xmm3 + palignr $15, %xmm3, %xmm2 + movaps %xmm2, -0x20(%rdi) + + movaps -0x3f(%rsi), %xmm4 + palignr $15, %xmm4, %xmm3 + movaps %xmm3, -0x30(%rdi) + + movaps -0x4f(%rsi), %xmm5 + palignr $15, %xmm5, %xmm4 + movaps %xmm4, -0x40(%rdi) + + movaps -0x5f(%rsi), %xmm6 + palignr $15, %xmm6, %xmm5 + movaps %xmm5, -0x50(%rdi) + + movaps -0x6f(%rsi), %xmm7 + palignr $15, %xmm7, %xmm6 + movaps %xmm6, -0x60(%rdi) + + movaps -0x7f(%rsi), %xmm8 + palignr $15, %xmm8, %xmm7 + movaps %xmm7, 
-0x70(%rdi) + + movaps -0x8f(%rsi), %xmm9 + palignr $15, %xmm9, %xmm8 + movaps %xmm8, -0x80(%rdi) + + sub $0x80, %rdx + lea -0x80(%rdi), %rdi + lea -0x80(%rsi), %rsi + jae L(shl_15_bwd) + movdqu %xmm0, (%r8) + add $0x80, %rdx + sub %rdx, %rdi + sub %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY (L(table_144_bytes_bwd), %rdx, 4) + + .p2align 4 +L(gobble_mem_fwd): + movdqu (%rsi), %xmm1 + movdqu %xmm0, (%r8) + movdqa %xmm1, (%rdi) + sub $16, %rdx + add $16, %rsi + add $16, %rdi + +#ifdef SHARED_CACHE_SIZE_HALF + mov $SHARED_CACHE_SIZE_HALF, %RCX_LP +#else + mov __x86_shared_cache_size_half(%rip), %RCX_LP +#endif +#ifdef USE_AS_MEMMOVE + mov %rsi, %r9 + sub %rdi, %r9 + cmp %rdx, %r9 + jae L(memmove_is_memcpy_fwd) + cmp %rcx, %r9 + jbe L(ll_cache_copy_fwd_start) +L(memmove_is_memcpy_fwd): +#endif + cmp %rcx, %rdx + ja L(bigger_in_fwd) + mov %rdx, %rcx +L(bigger_in_fwd): + sub %rcx, %rdx + cmp $0x1000, %rdx + jbe L(ll_cache_copy_fwd) + + mov %rcx, %r9 + shl $3, %r9 + cmp %r9, %rdx + jbe L(2steps_copy_fwd) + add %rcx, %rdx + xor %rcx, %rcx +L(2steps_copy_fwd): + sub $0x80, %rdx +L(gobble_mem_fwd_loop): + sub $0x80, %rdx + prefetcht0 0x200(%rsi) + prefetcht0 0x300(%rsi) + movdqu (%rsi), %xmm0 + movdqu 0x10(%rsi), %xmm1 + movdqu 0x20(%rsi), %xmm2 + movdqu 0x30(%rsi), %xmm3 + movdqu 0x40(%rsi), %xmm4 + movdqu 0x50(%rsi), %xmm5 + movdqu 0x60(%rsi), %xmm6 + movdqu 0x70(%rsi), %xmm7 + lfence + movntdq %xmm0, (%rdi) + movntdq %xmm1, 0x10(%rdi) + movntdq %xmm2, 0x20(%rdi) + movntdq %xmm3, 0x30(%rdi) + movntdq %xmm4, 0x40(%rdi) + movntdq %xmm5, 0x50(%rdi) + movntdq %xmm6, 0x60(%rdi) + movntdq %xmm7, 0x70(%rdi) + lea 0x80(%rsi), %rsi + lea 0x80(%rdi), %rdi + jae L(gobble_mem_fwd_loop) + sfence + cmp $0x80, %rcx + jb L(gobble_mem_fwd_end) + add $0x80, %rdx +L(ll_cache_copy_fwd): + add %rcx, %rdx +L(ll_cache_copy_fwd_start): + sub $0x80, %rdx +L(gobble_ll_loop_fwd): + prefetchnta 0x1c0(%rsi) + prefetchnta 0x280(%rsi) + prefetchnta 0x1c0(%rdi) + prefetchnta 0x280(%rdi) + sub $0x80, %rdx + movdqu (%rsi), %xmm0 + movdqu 0x10(%rsi), %xmm1 + movdqu 0x20(%rsi), %xmm2 + movdqu 0x30(%rsi), %xmm3 + movdqu 0x40(%rsi), %xmm4 + movdqu 0x50(%rsi), %xmm5 + movdqu 0x60(%rsi), %xmm6 + movdqu 0x70(%rsi), %xmm7 + movdqa %xmm0, (%rdi) + movdqa %xmm1, 0x10(%rdi) + movdqa %xmm2, 0x20(%rdi) + movdqa %xmm3, 0x30(%rdi) + movdqa %xmm4, 0x40(%rdi) + movdqa %xmm5, 0x50(%rdi) + movdqa %xmm6, 0x60(%rdi) + movdqa %xmm7, 0x70(%rdi) + lea 0x80(%rsi), %rsi + lea 0x80(%rdi), %rdi + jae L(gobble_ll_loop_fwd) +L(gobble_mem_fwd_end): + add $0x80, %rdx + add %rdx, %rsi + add %rdx, %rdi + BRANCH_TO_JMPTBL_ENTRY (L(table_144_bytes_fwd), %rdx, 4) + + .p2align 4 +L(gobble_mem_bwd): + add %rdx, %rsi + add %rdx, %rdi + + movdqu -16(%rsi), %xmm0 + lea -16(%rdi), %r8 + mov %rdi, %r9 + and $-16, %rdi + sub %rdi, %r9 + sub %r9, %rsi + sub %r9, %rdx + + +#ifdef SHARED_CACHE_SIZE_HALF + mov $SHARED_CACHE_SIZE_HALF, %RCX_LP +#else + mov __x86_shared_cache_size_half(%rip), %RCX_LP +#endif +#ifdef USE_AS_MEMMOVE + mov %rdi, %r9 + sub %rsi, %r9 + cmp %rdx, %r9 + jae L(memmove_is_memcpy_bwd) + cmp %rcx, %r9 + jbe L(ll_cache_copy_bwd_start) +L(memmove_is_memcpy_bwd): +#endif + cmp %rcx, %rdx + ja L(bigger) + mov %rdx, %rcx +L(bigger): + sub %rcx, %rdx + cmp $0x1000, %rdx + jbe L(ll_cache_copy) + + mov %rcx, %r9 + shl $3, %r9 + cmp %r9, %rdx + jbe L(2steps_copy) + add %rcx, %rdx + xor %rcx, %rcx +L(2steps_copy): + sub $0x80, %rdx +L(gobble_mem_bwd_loop): + sub $0x80, %rdx + prefetcht0 -0x200(%rsi) + prefetcht0 -0x300(%rsi) + movdqu -0x10(%rsi), %xmm1 + movdqu -0x20(%rsi), 
%xmm2 + movdqu -0x30(%rsi), %xmm3 + movdqu -0x40(%rsi), %xmm4 + movdqu -0x50(%rsi), %xmm5 + movdqu -0x60(%rsi), %xmm6 + movdqu -0x70(%rsi), %xmm7 + movdqu -0x80(%rsi), %xmm8 + lfence + movntdq %xmm1, -0x10(%rdi) + movntdq %xmm2, -0x20(%rdi) + movntdq %xmm3, -0x30(%rdi) + movntdq %xmm4, -0x40(%rdi) + movntdq %xmm5, -0x50(%rdi) + movntdq %xmm6, -0x60(%rdi) + movntdq %xmm7, -0x70(%rdi) + movntdq %xmm8, -0x80(%rdi) + lea -0x80(%rsi), %rsi + lea -0x80(%rdi), %rdi + jae L(gobble_mem_bwd_loop) + sfence + cmp $0x80, %rcx + jb L(gobble_mem_bwd_end) + add $0x80, %rdx +L(ll_cache_copy): + add %rcx, %rdx +L(ll_cache_copy_bwd_start): + sub $0x80, %rdx +L(gobble_ll_loop): + prefetchnta -0x1c0(%rsi) + prefetchnta -0x280(%rsi) + prefetchnta -0x1c0(%rdi) + prefetchnta -0x280(%rdi) + sub $0x80, %rdx + movdqu -0x10(%rsi), %xmm1 + movdqu -0x20(%rsi), %xmm2 + movdqu -0x30(%rsi), %xmm3 + movdqu -0x40(%rsi), %xmm4 + movdqu -0x50(%rsi), %xmm5 + movdqu -0x60(%rsi), %xmm6 + movdqu -0x70(%rsi), %xmm7 + movdqu -0x80(%rsi), %xmm8 + movdqa %xmm1, -0x10(%rdi) + movdqa %xmm2, -0x20(%rdi) + movdqa %xmm3, -0x30(%rdi) + movdqa %xmm4, -0x40(%rdi) + movdqa %xmm5, -0x50(%rdi) + movdqa %xmm6, -0x60(%rdi) + movdqa %xmm7, -0x70(%rdi) + movdqa %xmm8, -0x80(%rdi) + lea -0x80(%rsi), %rsi + lea -0x80(%rdi), %rdi + jae L(gobble_ll_loop) +L(gobble_mem_bwd_end): + movdqu %xmm0, (%r8) + add $0x80, %rdx + sub %rdx, %rsi + sub %rdx, %rdi + BRANCH_TO_JMPTBL_ENTRY (L(table_144_bytes_bwd), %rdx, 4) + + .p2align 4 +L(fwd_write_128bytes): + lddqu -128(%rsi), %xmm0 + movdqu %xmm0, -128(%rdi) +L(fwd_write_112bytes): + lddqu -112(%rsi), %xmm0 + movdqu %xmm0, -112(%rdi) +L(fwd_write_96bytes): + lddqu -96(%rsi), %xmm0 + movdqu %xmm0, -96(%rdi) +L(fwd_write_80bytes): + lddqu -80(%rsi), %xmm0 + movdqu %xmm0, -80(%rdi) +L(fwd_write_64bytes): + lddqu -64(%rsi), %xmm0 + movdqu %xmm0, -64(%rdi) +L(fwd_write_48bytes): + lddqu -48(%rsi), %xmm0 + movdqu %xmm0, -48(%rdi) +L(fwd_write_32bytes): + lddqu -32(%rsi), %xmm0 + movdqu %xmm0, -32(%rdi) +L(fwd_write_16bytes): + lddqu -16(%rsi), %xmm0 + movdqu %xmm0, -16(%rdi) +L(fwd_write_0bytes): + ret + + + .p2align 4 +L(fwd_write_143bytes): + lddqu -143(%rsi), %xmm0 + movdqu %xmm0, -143(%rdi) +L(fwd_write_127bytes): + lddqu -127(%rsi), %xmm0 + movdqu %xmm0, -127(%rdi) +L(fwd_write_111bytes): + lddqu -111(%rsi), %xmm0 + movdqu %xmm0, -111(%rdi) +L(fwd_write_95bytes): + lddqu -95(%rsi), %xmm0 + movdqu %xmm0, -95(%rdi) +L(fwd_write_79bytes): + lddqu -79(%rsi), %xmm0 + movdqu %xmm0, -79(%rdi) +L(fwd_write_63bytes): + lddqu -63(%rsi), %xmm0 + movdqu %xmm0, -63(%rdi) +L(fwd_write_47bytes): + lddqu -47(%rsi), %xmm0 + movdqu %xmm0, -47(%rdi) +L(fwd_write_31bytes): + lddqu -31(%rsi), %xmm0 + lddqu -16(%rsi), %xmm1 + movdqu %xmm0, -31(%rdi) + movdqu %xmm1, -16(%rdi) + ret + + .p2align 4 +L(fwd_write_15bytes): + mov -15(%rsi), %rdx + mov -8(%rsi), %rcx + mov %rdx, -15(%rdi) + mov %rcx, -8(%rdi) + ret + + .p2align 4 +L(fwd_write_142bytes): + lddqu -142(%rsi), %xmm0 + movdqu %xmm0, -142(%rdi) +L(fwd_write_126bytes): + lddqu -126(%rsi), %xmm0 + movdqu %xmm0, -126(%rdi) +L(fwd_write_110bytes): + lddqu -110(%rsi), %xmm0 + movdqu %xmm0, -110(%rdi) +L(fwd_write_94bytes): + lddqu -94(%rsi), %xmm0 + movdqu %xmm0, -94(%rdi) +L(fwd_write_78bytes): + lddqu -78(%rsi), %xmm0 + movdqu %xmm0, -78(%rdi) +L(fwd_write_62bytes): + lddqu -62(%rsi), %xmm0 + movdqu %xmm0, -62(%rdi) +L(fwd_write_46bytes): + lddqu -46(%rsi), %xmm0 + movdqu %xmm0, -46(%rdi) +L(fwd_write_30bytes): + lddqu -30(%rsi), %xmm0 + lddqu -16(%rsi), %xmm1 + movdqu %xmm0, 
-30(%rdi) + movdqu %xmm1, -16(%rdi) + ret + + .p2align 4 +L(fwd_write_14bytes): + mov -14(%rsi), %rdx + mov -8(%rsi), %rcx + mov %rdx, -14(%rdi) + mov %rcx, -8(%rdi) + ret + + .p2align 4 +L(fwd_write_141bytes): + lddqu -141(%rsi), %xmm0 + movdqu %xmm0, -141(%rdi) +L(fwd_write_125bytes): + lddqu -125(%rsi), %xmm0 + movdqu %xmm0, -125(%rdi) +L(fwd_write_109bytes): + lddqu -109(%rsi), %xmm0 + movdqu %xmm0, -109(%rdi) +L(fwd_write_93bytes): + lddqu -93(%rsi), %xmm0 + movdqu %xmm0, -93(%rdi) +L(fwd_write_77bytes): + lddqu -77(%rsi), %xmm0 + movdqu %xmm0, -77(%rdi) +L(fwd_write_61bytes): + lddqu -61(%rsi), %xmm0 + movdqu %xmm0, -61(%rdi) +L(fwd_write_45bytes): + lddqu -45(%rsi), %xmm0 + movdqu %xmm0, -45(%rdi) +L(fwd_write_29bytes): + lddqu -29(%rsi), %xmm0 + lddqu -16(%rsi), %xmm1 + movdqu %xmm0, -29(%rdi) + movdqu %xmm1, -16(%rdi) + ret + + .p2align 4 +L(fwd_write_13bytes): + mov -13(%rsi), %rdx + mov -8(%rsi), %rcx + mov %rdx, -13(%rdi) + mov %rcx, -8(%rdi) + ret + + .p2align 4 +L(fwd_write_140bytes): + lddqu -140(%rsi), %xmm0 + movdqu %xmm0, -140(%rdi) +L(fwd_write_124bytes): + lddqu -124(%rsi), %xmm0 + movdqu %xmm0, -124(%rdi) +L(fwd_write_108bytes): + lddqu -108(%rsi), %xmm0 + movdqu %xmm0, -108(%rdi) +L(fwd_write_92bytes): + lddqu -92(%rsi), %xmm0 + movdqu %xmm0, -92(%rdi) +L(fwd_write_76bytes): + lddqu -76(%rsi), %xmm0 + movdqu %xmm0, -76(%rdi) +L(fwd_write_60bytes): + lddqu -60(%rsi), %xmm0 + movdqu %xmm0, -60(%rdi) +L(fwd_write_44bytes): + lddqu -44(%rsi), %xmm0 + movdqu %xmm0, -44(%rdi) +L(fwd_write_28bytes): + lddqu -28(%rsi), %xmm0 + lddqu -16(%rsi), %xmm1 + movdqu %xmm0, -28(%rdi) + movdqu %xmm1, -16(%rdi) + ret + + .p2align 4 +L(fwd_write_12bytes): + mov -12(%rsi), %rdx + mov -4(%rsi), %ecx + mov %rdx, -12(%rdi) + mov %ecx, -4(%rdi) + ret + + .p2align 4 +L(fwd_write_139bytes): + lddqu -139(%rsi), %xmm0 + movdqu %xmm0, -139(%rdi) +L(fwd_write_123bytes): + lddqu -123(%rsi), %xmm0 + movdqu %xmm0, -123(%rdi) +L(fwd_write_107bytes): + lddqu -107(%rsi), %xmm0 + movdqu %xmm0, -107(%rdi) +L(fwd_write_91bytes): + lddqu -91(%rsi), %xmm0 + movdqu %xmm0, -91(%rdi) +L(fwd_write_75bytes): + lddqu -75(%rsi), %xmm0 + movdqu %xmm0, -75(%rdi) +L(fwd_write_59bytes): + lddqu -59(%rsi), %xmm0 + movdqu %xmm0, -59(%rdi) +L(fwd_write_43bytes): + lddqu -43(%rsi), %xmm0 + movdqu %xmm0, -43(%rdi) +L(fwd_write_27bytes): + lddqu -27(%rsi), %xmm0 + lddqu -16(%rsi), %xmm1 + movdqu %xmm0, -27(%rdi) + movdqu %xmm1, -16(%rdi) + ret + + .p2align 4 +L(fwd_write_11bytes): + mov -11(%rsi), %rdx + mov -4(%rsi), %ecx + mov %rdx, -11(%rdi) + mov %ecx, -4(%rdi) + ret + + .p2align 4 +L(fwd_write_138bytes): + lddqu -138(%rsi), %xmm0 + movdqu %xmm0, -138(%rdi) +L(fwd_write_122bytes): + lddqu -122(%rsi), %xmm0 + movdqu %xmm0, -122(%rdi) +L(fwd_write_106bytes): + lddqu -106(%rsi), %xmm0 + movdqu %xmm0, -106(%rdi) +L(fwd_write_90bytes): + lddqu -90(%rsi), %xmm0 + movdqu %xmm0, -90(%rdi) +L(fwd_write_74bytes): + lddqu -74(%rsi), %xmm0 + movdqu %xmm0, -74(%rdi) +L(fwd_write_58bytes): + lddqu -58(%rsi), %xmm0 + movdqu %xmm0, -58(%rdi) +L(fwd_write_42bytes): + lddqu -42(%rsi), %xmm0 + movdqu %xmm0, -42(%rdi) +L(fwd_write_26bytes): + lddqu -26(%rsi), %xmm0 + lddqu -16(%rsi), %xmm1 + movdqu %xmm0, -26(%rdi) + movdqu %xmm1, -16(%rdi) + ret + + .p2align 4 +L(fwd_write_10bytes): + mov -10(%rsi), %rdx + mov -4(%rsi), %ecx + mov %rdx, -10(%rdi) + mov %ecx, -4(%rdi) + ret + + .p2align 4 +L(fwd_write_137bytes): + lddqu -137(%rsi), %xmm0 + movdqu %xmm0, -137(%rdi) +L(fwd_write_121bytes): + lddqu -121(%rsi), %xmm0 + movdqu %xmm0, -121(%rdi) 
+L(fwd_write_105bytes): + lddqu -105(%rsi), %xmm0 + movdqu %xmm0, -105(%rdi) +L(fwd_write_89bytes): + lddqu -89(%rsi), %xmm0 + movdqu %xmm0, -89(%rdi) +L(fwd_write_73bytes): + lddqu -73(%rsi), %xmm0 + movdqu %xmm0, -73(%rdi) +L(fwd_write_57bytes): + lddqu -57(%rsi), %xmm0 + movdqu %xmm0, -57(%rdi) +L(fwd_write_41bytes): + lddqu -41(%rsi), %xmm0 + movdqu %xmm0, -41(%rdi) +L(fwd_write_25bytes): + lddqu -25(%rsi), %xmm0 + lddqu -16(%rsi), %xmm1 + movdqu %xmm0, -25(%rdi) + movdqu %xmm1, -16(%rdi) + ret + + .p2align 4 +L(fwd_write_9bytes): + mov -9(%rsi), %rdx + mov -4(%rsi), %ecx + mov %rdx, -9(%rdi) + mov %ecx, -4(%rdi) + ret + + .p2align 4 +L(fwd_write_136bytes): + lddqu -136(%rsi), %xmm0 + movdqu %xmm0, -136(%rdi) +L(fwd_write_120bytes): + lddqu -120(%rsi), %xmm0 + movdqu %xmm0, -120(%rdi) +L(fwd_write_104bytes): + lddqu -104(%rsi), %xmm0 + movdqu %xmm0, -104(%rdi) +L(fwd_write_88bytes): + lddqu -88(%rsi), %xmm0 + movdqu %xmm0, -88(%rdi) +L(fwd_write_72bytes): + lddqu -72(%rsi), %xmm0 + movdqu %xmm0, -72(%rdi) +L(fwd_write_56bytes): + lddqu -56(%rsi), %xmm0 + movdqu %xmm0, -56(%rdi) +L(fwd_write_40bytes): + lddqu -40(%rsi), %xmm0 + movdqu %xmm0, -40(%rdi) +L(fwd_write_24bytes): + lddqu -24(%rsi), %xmm0 + lddqu -16(%rsi), %xmm1 + movdqu %xmm0, -24(%rdi) + movdqu %xmm1, -16(%rdi) + ret + + .p2align 4 +L(fwd_write_8bytes): + mov -8(%rsi), %rdx + mov %rdx, -8(%rdi) + ret + + .p2align 4 +L(fwd_write_135bytes): + lddqu -135(%rsi), %xmm0 + movdqu %xmm0, -135(%rdi) +L(fwd_write_119bytes): + lddqu -119(%rsi), %xmm0 + movdqu %xmm0, -119(%rdi) +L(fwd_write_103bytes): + lddqu -103(%rsi), %xmm0 + movdqu %xmm0, -103(%rdi) +L(fwd_write_87bytes): + lddqu -87(%rsi), %xmm0 + movdqu %xmm0, -87(%rdi) +L(fwd_write_71bytes): + lddqu -71(%rsi), %xmm0 + movdqu %xmm0, -71(%rdi) +L(fwd_write_55bytes): + lddqu -55(%rsi), %xmm0 + movdqu %xmm0, -55(%rdi) +L(fwd_write_39bytes): + lddqu -39(%rsi), %xmm0 + movdqu %xmm0, -39(%rdi) +L(fwd_write_23bytes): + lddqu -23(%rsi), %xmm0 + lddqu -16(%rsi), %xmm1 + movdqu %xmm0, -23(%rdi) + movdqu %xmm1, -16(%rdi) + ret + + .p2align 4 +L(fwd_write_7bytes): + mov -7(%rsi), %edx + mov -4(%rsi), %ecx + mov %edx, -7(%rdi) + mov %ecx, -4(%rdi) + ret + + .p2align 4 +L(fwd_write_134bytes): + lddqu -134(%rsi), %xmm0 + movdqu %xmm0, -134(%rdi) +L(fwd_write_118bytes): + lddqu -118(%rsi), %xmm0 + movdqu %xmm0, -118(%rdi) +L(fwd_write_102bytes): + lddqu -102(%rsi), %xmm0 + movdqu %xmm0, -102(%rdi) +L(fwd_write_86bytes): + lddqu -86(%rsi), %xmm0 + movdqu %xmm0, -86(%rdi) +L(fwd_write_70bytes): + lddqu -70(%rsi), %xmm0 + movdqu %xmm0, -70(%rdi) +L(fwd_write_54bytes): + lddqu -54(%rsi), %xmm0 + movdqu %xmm0, -54(%rdi) +L(fwd_write_38bytes): + lddqu -38(%rsi), %xmm0 + movdqu %xmm0, -38(%rdi) +L(fwd_write_22bytes): + lddqu -22(%rsi), %xmm0 + lddqu -16(%rsi), %xmm1 + movdqu %xmm0, -22(%rdi) + movdqu %xmm1, -16(%rdi) + ret + + .p2align 4 +L(fwd_write_6bytes): + mov -6(%rsi), %edx + mov -4(%rsi), %ecx + mov %edx, -6(%rdi) + mov %ecx, -4(%rdi) + ret + + .p2align 4 +L(fwd_write_133bytes): + lddqu -133(%rsi), %xmm0 + movdqu %xmm0, -133(%rdi) +L(fwd_write_117bytes): + lddqu -117(%rsi), %xmm0 + movdqu %xmm0, -117(%rdi) +L(fwd_write_101bytes): + lddqu -101(%rsi), %xmm0 + movdqu %xmm0, -101(%rdi) +L(fwd_write_85bytes): + lddqu -85(%rsi), %xmm0 + movdqu %xmm0, -85(%rdi) +L(fwd_write_69bytes): + lddqu -69(%rsi), %xmm0 + movdqu %xmm0, -69(%rdi) +L(fwd_write_53bytes): + lddqu -53(%rsi), %xmm0 + movdqu %xmm0, -53(%rdi) +L(fwd_write_37bytes): + lddqu -37(%rsi), %xmm0 + movdqu %xmm0, -37(%rdi) +L(fwd_write_21bytes): 
+ lddqu -21(%rsi), %xmm0 + lddqu -16(%rsi), %xmm1 + movdqu %xmm0, -21(%rdi) + movdqu %xmm1, -16(%rdi) + ret + + .p2align 4 +L(fwd_write_5bytes): + mov -5(%rsi), %edx + mov -4(%rsi), %ecx + mov %edx, -5(%rdi) + mov %ecx, -4(%rdi) + ret + + .p2align 4 +L(fwd_write_132bytes): + lddqu -132(%rsi), %xmm0 + movdqu %xmm0, -132(%rdi) +L(fwd_write_116bytes): + lddqu -116(%rsi), %xmm0 + movdqu %xmm0, -116(%rdi) +L(fwd_write_100bytes): + lddqu -100(%rsi), %xmm0 + movdqu %xmm0, -100(%rdi) +L(fwd_write_84bytes): + lddqu -84(%rsi), %xmm0 + movdqu %xmm0, -84(%rdi) +L(fwd_write_68bytes): + lddqu -68(%rsi), %xmm0 + movdqu %xmm0, -68(%rdi) +L(fwd_write_52bytes): + lddqu -52(%rsi), %xmm0 + movdqu %xmm0, -52(%rdi) +L(fwd_write_36bytes): + lddqu -36(%rsi), %xmm0 + movdqu %xmm0, -36(%rdi) +L(fwd_write_20bytes): + lddqu -20(%rsi), %xmm0 + lddqu -16(%rsi), %xmm1 + movdqu %xmm0, -20(%rdi) + movdqu %xmm1, -16(%rdi) + ret + + .p2align 4 +L(fwd_write_4bytes): + mov -4(%rsi), %edx + mov %edx, -4(%rdi) + ret + + .p2align 4 +L(fwd_write_131bytes): + lddqu -131(%rsi), %xmm0 + movdqu %xmm0, -131(%rdi) +L(fwd_write_115bytes): + lddqu -115(%rsi), %xmm0 + movdqu %xmm0, -115(%rdi) +L(fwd_write_99bytes): + lddqu -99(%rsi), %xmm0 + movdqu %xmm0, -99(%rdi) +L(fwd_write_83bytes): + lddqu -83(%rsi), %xmm0 + movdqu %xmm0, -83(%rdi) +L(fwd_write_67bytes): + lddqu -67(%rsi), %xmm0 + movdqu %xmm0, -67(%rdi) +L(fwd_write_51bytes): + lddqu -51(%rsi), %xmm0 + movdqu %xmm0, -51(%rdi) +L(fwd_write_35bytes): + lddqu -35(%rsi), %xmm0 + movdqu %xmm0, -35(%rdi) +L(fwd_write_19bytes): + lddqu -19(%rsi), %xmm0 + lddqu -16(%rsi), %xmm1 + movdqu %xmm0, -19(%rdi) + movdqu %xmm1, -16(%rdi) + ret + + .p2align 4 +L(fwd_write_3bytes): + mov -3(%rsi), %dx + mov -2(%rsi), %cx + mov %dx, -3(%rdi) + mov %cx, -2(%rdi) + ret + + .p2align 4 +L(fwd_write_130bytes): + lddqu -130(%rsi), %xmm0 + movdqu %xmm0, -130(%rdi) +L(fwd_write_114bytes): + lddqu -114(%rsi), %xmm0 + movdqu %xmm0, -114(%rdi) +L(fwd_write_98bytes): + lddqu -98(%rsi), %xmm0 + movdqu %xmm0, -98(%rdi) +L(fwd_write_82bytes): + lddqu -82(%rsi), %xmm0 + movdqu %xmm0, -82(%rdi) +L(fwd_write_66bytes): + lddqu -66(%rsi), %xmm0 + movdqu %xmm0, -66(%rdi) +L(fwd_write_50bytes): + lddqu -50(%rsi), %xmm0 + movdqu %xmm0, -50(%rdi) +L(fwd_write_34bytes): + lddqu -34(%rsi), %xmm0 + movdqu %xmm0, -34(%rdi) +L(fwd_write_18bytes): + lddqu -18(%rsi), %xmm0 + lddqu -16(%rsi), %xmm1 + movdqu %xmm0, -18(%rdi) + movdqu %xmm1, -16(%rdi) + ret + + .p2align 4 +L(fwd_write_2bytes): + movzwl -2(%rsi), %edx + mov %dx, -2(%rdi) + ret + + .p2align 4 +L(fwd_write_129bytes): + lddqu -129(%rsi), %xmm0 + movdqu %xmm0, -129(%rdi) +L(fwd_write_113bytes): + lddqu -113(%rsi), %xmm0 + movdqu %xmm0, -113(%rdi) +L(fwd_write_97bytes): + lddqu -97(%rsi), %xmm0 + movdqu %xmm0, -97(%rdi) +L(fwd_write_81bytes): + lddqu -81(%rsi), %xmm0 + movdqu %xmm0, -81(%rdi) +L(fwd_write_65bytes): + lddqu -65(%rsi), %xmm0 + movdqu %xmm0, -65(%rdi) +L(fwd_write_49bytes): + lddqu -49(%rsi), %xmm0 + movdqu %xmm0, -49(%rdi) +L(fwd_write_33bytes): + lddqu -33(%rsi), %xmm0 + movdqu %xmm0, -33(%rdi) +L(fwd_write_17bytes): + lddqu -17(%rsi), %xmm0 + lddqu -16(%rsi), %xmm1 + movdqu %xmm0, -17(%rdi) + movdqu %xmm1, -16(%rdi) + ret + + .p2align 4 +L(fwd_write_1bytes): + movzbl -1(%rsi), %edx + mov %dl, -1(%rdi) + ret + + .p2align 4 +L(bwd_write_128bytes): + lddqu 112(%rsi), %xmm0 + movdqu %xmm0, 112(%rdi) +L(bwd_write_112bytes): + lddqu 96(%rsi), %xmm0 + movdqu %xmm0, 96(%rdi) +L(bwd_write_96bytes): + lddqu 80(%rsi), %xmm0 + movdqu %xmm0, 80(%rdi) 
+L(bwd_write_80bytes): + lddqu 64(%rsi), %xmm0 + movdqu %xmm0, 64(%rdi) +L(bwd_write_64bytes): + lddqu 48(%rsi), %xmm0 + movdqu %xmm0, 48(%rdi) +L(bwd_write_48bytes): + lddqu 32(%rsi), %xmm0 + movdqu %xmm0, 32(%rdi) +L(bwd_write_32bytes): + lddqu 16(%rsi), %xmm0 + movdqu %xmm0, 16(%rdi) +L(bwd_write_16bytes): + lddqu (%rsi), %xmm0 + movdqu %xmm0, (%rdi) +L(bwd_write_0bytes): + ret + + .p2align 4 +L(bwd_write_143bytes): + lddqu 127(%rsi), %xmm0 + movdqu %xmm0, 127(%rdi) +L(bwd_write_127bytes): + lddqu 111(%rsi), %xmm0 + movdqu %xmm0, 111(%rdi) +L(bwd_write_111bytes): + lddqu 95(%rsi), %xmm0 + movdqu %xmm0, 95(%rdi) +L(bwd_write_95bytes): + lddqu 79(%rsi), %xmm0 + movdqu %xmm0, 79(%rdi) +L(bwd_write_79bytes): + lddqu 63(%rsi), %xmm0 + movdqu %xmm0, 63(%rdi) +L(bwd_write_63bytes): + lddqu 47(%rsi), %xmm0 + movdqu %xmm0, 47(%rdi) +L(bwd_write_47bytes): + lddqu 31(%rsi), %xmm0 + movdqu %xmm0, 31(%rdi) +L(bwd_write_31bytes): + lddqu 15(%rsi), %xmm0 + lddqu (%rsi), %xmm1 + movdqu %xmm0, 15(%rdi) + movdqu %xmm1, (%rdi) + ret + + + .p2align 4 +L(bwd_write_15bytes): + mov 7(%rsi), %rdx + mov (%rsi), %rcx + mov %rdx, 7(%rdi) + mov %rcx, (%rdi) + ret + + .p2align 4 +L(bwd_write_142bytes): + lddqu 126(%rsi), %xmm0 + movdqu %xmm0, 126(%rdi) +L(bwd_write_126bytes): + lddqu 110(%rsi), %xmm0 + movdqu %xmm0, 110(%rdi) +L(bwd_write_110bytes): + lddqu 94(%rsi), %xmm0 + movdqu %xmm0, 94(%rdi) +L(bwd_write_94bytes): + lddqu 78(%rsi), %xmm0 + movdqu %xmm0, 78(%rdi) +L(bwd_write_78bytes): + lddqu 62(%rsi), %xmm0 + movdqu %xmm0, 62(%rdi) +L(bwd_write_62bytes): + lddqu 46(%rsi), %xmm0 + movdqu %xmm0, 46(%rdi) +L(bwd_write_46bytes): + lddqu 30(%rsi), %xmm0 + movdqu %xmm0, 30(%rdi) +L(bwd_write_30bytes): + lddqu 14(%rsi), %xmm0 + lddqu (%rsi), %xmm1 + movdqu %xmm0, 14(%rdi) + movdqu %xmm1, (%rdi) + ret + + .p2align 4 +L(bwd_write_14bytes): + mov 6(%rsi), %rdx + mov (%rsi), %rcx + mov %rdx, 6(%rdi) + mov %rcx, (%rdi) + ret + + .p2align 4 +L(bwd_write_141bytes): + lddqu 125(%rsi), %xmm0 + movdqu %xmm0, 125(%rdi) +L(bwd_write_125bytes): + lddqu 109(%rsi), %xmm0 + movdqu %xmm0, 109(%rdi) +L(bwd_write_109bytes): + lddqu 93(%rsi), %xmm0 + movdqu %xmm0, 93(%rdi) +L(bwd_write_93bytes): + lddqu 77(%rsi), %xmm0 + movdqu %xmm0, 77(%rdi) +L(bwd_write_77bytes): + lddqu 61(%rsi), %xmm0 + movdqu %xmm0, 61(%rdi) +L(bwd_write_61bytes): + lddqu 45(%rsi), %xmm0 + movdqu %xmm0, 45(%rdi) +L(bwd_write_45bytes): + lddqu 29(%rsi), %xmm0 + movdqu %xmm0, 29(%rdi) +L(bwd_write_29bytes): + lddqu 13(%rsi), %xmm0 + lddqu (%rsi), %xmm1 + movdqu %xmm0, 13(%rdi) + movdqu %xmm1, (%rdi) + ret + + .p2align 4 +L(bwd_write_13bytes): + mov 5(%rsi), %rdx + mov (%rsi), %rcx + mov %rdx, 5(%rdi) + mov %rcx, (%rdi) + ret + + .p2align 4 +L(bwd_write_140bytes): + lddqu 124(%rsi), %xmm0 + movdqu %xmm0, 124(%rdi) +L(bwd_write_124bytes): + lddqu 108(%rsi), %xmm0 + movdqu %xmm0, 108(%rdi) +L(bwd_write_108bytes): + lddqu 92(%rsi), %xmm0 + movdqu %xmm0, 92(%rdi) +L(bwd_write_92bytes): + lddqu 76(%rsi), %xmm0 + movdqu %xmm0, 76(%rdi) +L(bwd_write_76bytes): + lddqu 60(%rsi), %xmm0 + movdqu %xmm0, 60(%rdi) +L(bwd_write_60bytes): + lddqu 44(%rsi), %xmm0 + movdqu %xmm0, 44(%rdi) +L(bwd_write_44bytes): + lddqu 28(%rsi), %xmm0 + movdqu %xmm0, 28(%rdi) +L(bwd_write_28bytes): + lddqu 12(%rsi), %xmm0 + lddqu (%rsi), %xmm1 + movdqu %xmm0, 12(%rdi) + movdqu %xmm1, (%rdi) + ret + + .p2align 4 +L(bwd_write_12bytes): + mov 4(%rsi), %rdx + mov (%rsi), %rcx + mov %rdx, 4(%rdi) + mov %rcx, (%rdi) + ret + + .p2align 4 +L(bwd_write_139bytes): + lddqu 123(%rsi), %xmm0 + movdqu %xmm0, 
123(%rdi) +L(bwd_write_123bytes): + lddqu 107(%rsi), %xmm0 + movdqu %xmm0, 107(%rdi) +L(bwd_write_107bytes): + lddqu 91(%rsi), %xmm0 + movdqu %xmm0, 91(%rdi) +L(bwd_write_91bytes): + lddqu 75(%rsi), %xmm0 + movdqu %xmm0, 75(%rdi) +L(bwd_write_75bytes): + lddqu 59(%rsi), %xmm0 + movdqu %xmm0, 59(%rdi) +L(bwd_write_59bytes): + lddqu 43(%rsi), %xmm0 + movdqu %xmm0, 43(%rdi) +L(bwd_write_43bytes): + lddqu 27(%rsi), %xmm0 + movdqu %xmm0, 27(%rdi) +L(bwd_write_27bytes): + lddqu 11(%rsi), %xmm0 + lddqu (%rsi), %xmm1 + movdqu %xmm0, 11(%rdi) + movdqu %xmm1, (%rdi) + ret + + .p2align 4 +L(bwd_write_11bytes): + mov 3(%rsi), %rdx + mov (%rsi), %rcx + mov %rdx, 3(%rdi) + mov %rcx, (%rdi) + ret + + .p2align 4 +L(bwd_write_138bytes): + lddqu 122(%rsi), %xmm0 + movdqu %xmm0, 122(%rdi) +L(bwd_write_122bytes): + lddqu 106(%rsi), %xmm0 + movdqu %xmm0, 106(%rdi) +L(bwd_write_106bytes): + lddqu 90(%rsi), %xmm0 + movdqu %xmm0, 90(%rdi) +L(bwd_write_90bytes): + lddqu 74(%rsi), %xmm0 + movdqu %xmm0, 74(%rdi) +L(bwd_write_74bytes): + lddqu 58(%rsi), %xmm0 + movdqu %xmm0, 58(%rdi) +L(bwd_write_58bytes): + lddqu 42(%rsi), %xmm0 + movdqu %xmm0, 42(%rdi) +L(bwd_write_42bytes): + lddqu 26(%rsi), %xmm0 + movdqu %xmm0, 26(%rdi) +L(bwd_write_26bytes): + lddqu 10(%rsi), %xmm0 + lddqu (%rsi), %xmm1 + movdqu %xmm0, 10(%rdi) + movdqu %xmm1, (%rdi) + ret + + .p2align 4 +L(bwd_write_10bytes): + mov 2(%rsi), %rdx + mov (%rsi), %rcx + mov %rdx, 2(%rdi) + mov %rcx, (%rdi) + ret + + .p2align 4 +L(bwd_write_137bytes): + lddqu 121(%rsi), %xmm0 + movdqu %xmm0, 121(%rdi) +L(bwd_write_121bytes): + lddqu 105(%rsi), %xmm0 + movdqu %xmm0, 105(%rdi) +L(bwd_write_105bytes): + lddqu 89(%rsi), %xmm0 + movdqu %xmm0, 89(%rdi) +L(bwd_write_89bytes): + lddqu 73(%rsi), %xmm0 + movdqu %xmm0, 73(%rdi) +L(bwd_write_73bytes): + lddqu 57(%rsi), %xmm0 + movdqu %xmm0, 57(%rdi) +L(bwd_write_57bytes): + lddqu 41(%rsi), %xmm0 + movdqu %xmm0, 41(%rdi) +L(bwd_write_41bytes): + lddqu 25(%rsi), %xmm0 + movdqu %xmm0, 25(%rdi) +L(bwd_write_25bytes): + lddqu 9(%rsi), %xmm0 + lddqu (%rsi), %xmm1 + movdqu %xmm0, 9(%rdi) + movdqu %xmm1, (%rdi) + ret + + .p2align 4 +L(bwd_write_9bytes): + mov 1(%rsi), %rdx + mov (%rsi), %rcx + mov %rdx, 1(%rdi) + mov %rcx, (%rdi) + ret + + .p2align 4 +L(bwd_write_136bytes): + lddqu 120(%rsi), %xmm0 + movdqu %xmm0, 120(%rdi) +L(bwd_write_120bytes): + lddqu 104(%rsi), %xmm0 + movdqu %xmm0, 104(%rdi) +L(bwd_write_104bytes): + lddqu 88(%rsi), %xmm0 + movdqu %xmm0, 88(%rdi) +L(bwd_write_88bytes): + lddqu 72(%rsi), %xmm0 + movdqu %xmm0, 72(%rdi) +L(bwd_write_72bytes): + lddqu 56(%rsi), %xmm0 + movdqu %xmm0, 56(%rdi) +L(bwd_write_56bytes): + lddqu 40(%rsi), %xmm0 + movdqu %xmm0, 40(%rdi) +L(bwd_write_40bytes): + lddqu 24(%rsi), %xmm0 + movdqu %xmm0, 24(%rdi) +L(bwd_write_24bytes): + lddqu 8(%rsi), %xmm0 + lddqu (%rsi), %xmm1 + movdqu %xmm0, 8(%rdi) + movdqu %xmm1, (%rdi) + ret + + .p2align 4 +L(bwd_write_8bytes): + mov (%rsi), %rdx + mov %rdx, (%rdi) + ret + + .p2align 4 +L(bwd_write_135bytes): + lddqu 119(%rsi), %xmm0 + movdqu %xmm0, 119(%rdi) +L(bwd_write_119bytes): + lddqu 103(%rsi), %xmm0 + movdqu %xmm0, 103(%rdi) +L(bwd_write_103bytes): + lddqu 87(%rsi), %xmm0 + movdqu %xmm0, 87(%rdi) +L(bwd_write_87bytes): + lddqu 71(%rsi), %xmm0 + movdqu %xmm0, 71(%rdi) +L(bwd_write_71bytes): + lddqu 55(%rsi), %xmm0 + movdqu %xmm0, 55(%rdi) +L(bwd_write_55bytes): + lddqu 39(%rsi), %xmm0 + movdqu %xmm0, 39(%rdi) +L(bwd_write_39bytes): + lddqu 23(%rsi), %xmm0 + movdqu %xmm0, 23(%rdi) +L(bwd_write_23bytes): + lddqu 7(%rsi), %xmm0 + lddqu (%rsi), %xmm1 + 
movdqu %xmm0, 7(%rdi) + movdqu %xmm1, (%rdi) + ret + + .p2align 4 +L(bwd_write_7bytes): + mov 3(%rsi), %edx + mov (%rsi), %ecx + mov %edx, 3(%rdi) + mov %ecx, (%rdi) + ret + + .p2align 4 +L(bwd_write_134bytes): + lddqu 118(%rsi), %xmm0 + movdqu %xmm0, 118(%rdi) +L(bwd_write_118bytes): + lddqu 102(%rsi), %xmm0 + movdqu %xmm0, 102(%rdi) +L(bwd_write_102bytes): + lddqu 86(%rsi), %xmm0 + movdqu %xmm0, 86(%rdi) +L(bwd_write_86bytes): + lddqu 70(%rsi), %xmm0 + movdqu %xmm0, 70(%rdi) +L(bwd_write_70bytes): + lddqu 54(%rsi), %xmm0 + movdqu %xmm0, 54(%rdi) +L(bwd_write_54bytes): + lddqu 38(%rsi), %xmm0 + movdqu %xmm0, 38(%rdi) +L(bwd_write_38bytes): + lddqu 22(%rsi), %xmm0 + movdqu %xmm0, 22(%rdi) +L(bwd_write_22bytes): + lddqu 6(%rsi), %xmm0 + lddqu (%rsi), %xmm1 + movdqu %xmm0, 6(%rdi) + movdqu %xmm1, (%rdi) + ret + + .p2align 4 +L(bwd_write_6bytes): + mov 2(%rsi), %edx + mov (%rsi), %ecx + mov %edx, 2(%rdi) + mov %ecx, (%rdi) + ret + + .p2align 4 +L(bwd_write_133bytes): + lddqu 117(%rsi), %xmm0 + movdqu %xmm0, 117(%rdi) +L(bwd_write_117bytes): + lddqu 101(%rsi), %xmm0 + movdqu %xmm0, 101(%rdi) +L(bwd_write_101bytes): + lddqu 85(%rsi), %xmm0 + movdqu %xmm0, 85(%rdi) +L(bwd_write_85bytes): + lddqu 69(%rsi), %xmm0 + movdqu %xmm0, 69(%rdi) +L(bwd_write_69bytes): + lddqu 53(%rsi), %xmm0 + movdqu %xmm0, 53(%rdi) +L(bwd_write_53bytes): + lddqu 37(%rsi), %xmm0 + movdqu %xmm0, 37(%rdi) +L(bwd_write_37bytes): + lddqu 21(%rsi), %xmm0 + movdqu %xmm0, 21(%rdi) +L(bwd_write_21bytes): + lddqu 5(%rsi), %xmm0 + lddqu (%rsi), %xmm1 + movdqu %xmm0, 5(%rdi) + movdqu %xmm1, (%rdi) + ret + + .p2align 4 +L(bwd_write_5bytes): + mov 1(%rsi), %edx + mov (%rsi), %ecx + mov %edx, 1(%rdi) + mov %ecx, (%rdi) + ret + + .p2align 4 +L(bwd_write_132bytes): + lddqu 116(%rsi), %xmm0 + movdqu %xmm0, 116(%rdi) +L(bwd_write_116bytes): + lddqu 100(%rsi), %xmm0 + movdqu %xmm0, 100(%rdi) +L(bwd_write_100bytes): + lddqu 84(%rsi), %xmm0 + movdqu %xmm0, 84(%rdi) +L(bwd_write_84bytes): + lddqu 68(%rsi), %xmm0 + movdqu %xmm0, 68(%rdi) +L(bwd_write_68bytes): + lddqu 52(%rsi), %xmm0 + movdqu %xmm0, 52(%rdi) +L(bwd_write_52bytes): + lddqu 36(%rsi), %xmm0 + movdqu %xmm0, 36(%rdi) +L(bwd_write_36bytes): + lddqu 20(%rsi), %xmm0 + movdqu %xmm0, 20(%rdi) +L(bwd_write_20bytes): + lddqu 4(%rsi), %xmm0 + lddqu (%rsi), %xmm1 + movdqu %xmm0, 4(%rdi) + movdqu %xmm1, (%rdi) + ret + + .p2align 4 +L(bwd_write_4bytes): + mov (%rsi), %edx + mov %edx, (%rdi) + ret + + .p2align 4 +L(bwd_write_131bytes): + lddqu 115(%rsi), %xmm0 + movdqu %xmm0, 115(%rdi) +L(bwd_write_115bytes): + lddqu 99(%rsi), %xmm0 + movdqu %xmm0, 99(%rdi) +L(bwd_write_99bytes): + lddqu 83(%rsi), %xmm0 + movdqu %xmm0, 83(%rdi) +L(bwd_write_83bytes): + lddqu 67(%rsi), %xmm0 + movdqu %xmm0, 67(%rdi) +L(bwd_write_67bytes): + lddqu 51(%rsi), %xmm0 + movdqu %xmm0, 51(%rdi) +L(bwd_write_51bytes): + lddqu 35(%rsi), %xmm0 + movdqu %xmm0, 35(%rdi) +L(bwd_write_35bytes): + lddqu 19(%rsi), %xmm0 + movdqu %xmm0, 19(%rdi) +L(bwd_write_19bytes): + lddqu 3(%rsi), %xmm0 + lddqu (%rsi), %xmm1 + movdqu %xmm0, 3(%rdi) + movdqu %xmm1, (%rdi) + ret + + .p2align 4 +L(bwd_write_3bytes): + mov 1(%rsi), %dx + mov (%rsi), %cx + mov %dx, 1(%rdi) + mov %cx, (%rdi) + ret + + .p2align 4 +L(bwd_write_130bytes): + lddqu 114(%rsi), %xmm0 + movdqu %xmm0, 114(%rdi) +L(bwd_write_114bytes): + lddqu 98(%rsi), %xmm0 + movdqu %xmm0, 98(%rdi) +L(bwd_write_98bytes): + lddqu 82(%rsi), %xmm0 + movdqu %xmm0, 82(%rdi) +L(bwd_write_82bytes): + lddqu 66(%rsi), %xmm0 + movdqu %xmm0, 66(%rdi) +L(bwd_write_66bytes): + lddqu 50(%rsi), %xmm0 + 
movdqu %xmm0, 50(%rdi) +L(bwd_write_50bytes): + lddqu 34(%rsi), %xmm0 + movdqu %xmm0, 34(%rdi) +L(bwd_write_34bytes): + lddqu 18(%rsi), %xmm0 + movdqu %xmm0, 18(%rdi) +L(bwd_write_18bytes): + lddqu 2(%rsi), %xmm0 + lddqu (%rsi), %xmm1 + movdqu %xmm0, 2(%rdi) + movdqu %xmm1, (%rdi) + ret + + .p2align 4 +L(bwd_write_2bytes): + movzwl (%rsi), %edx + mov %dx, (%rdi) + ret + + .p2align 4 +L(bwd_write_129bytes): + lddqu 113(%rsi), %xmm0 + movdqu %xmm0, 113(%rdi) +L(bwd_write_113bytes): + lddqu 97(%rsi), %xmm0 + movdqu %xmm0, 97(%rdi) +L(bwd_write_97bytes): + lddqu 81(%rsi), %xmm0 + movdqu %xmm0, 81(%rdi) +L(bwd_write_81bytes): + lddqu 65(%rsi), %xmm0 + movdqu %xmm0, 65(%rdi) +L(bwd_write_65bytes): + lddqu 49(%rsi), %xmm0 + movdqu %xmm0, 49(%rdi) +L(bwd_write_49bytes): + lddqu 33(%rsi), %xmm0 + movdqu %xmm0, 33(%rdi) +L(bwd_write_33bytes): + lddqu 17(%rsi), %xmm0 + movdqu %xmm0, 17(%rdi) +L(bwd_write_17bytes): + lddqu 1(%rsi), %xmm0 + lddqu (%rsi), %xmm1 + movdqu %xmm0, 1(%rdi) + movdqu %xmm1, (%rdi) + ret + + .p2align 4 +L(bwd_write_1bytes): + movzbl (%rsi), %edx + mov %dl, (%rdi) + ret + +END (MEMCPY) + + .section .rodata.ssse3,"a",@progbits + .p2align 3 +L(table_144_bytes_bwd): + .int JMPTBL (L(bwd_write_0bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_1bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_2bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_3bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_4bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_5bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_6bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_7bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_8bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_9bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_10bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_11bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_12bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_13bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_14bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_15bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_16bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_17bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_18bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_19bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_20bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_21bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_22bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_23bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_24bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_25bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_26bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_27bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_28bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_29bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_30bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_31bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_32bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_33bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_34bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_35bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_36bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_37bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_38bytes), 
L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_39bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_40bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_41bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_42bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_43bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_44bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_45bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_46bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_47bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_48bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_49bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_50bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_51bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_52bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_53bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_54bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_55bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_56bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_57bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_58bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_59bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_60bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_61bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_62bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_63bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_64bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_65bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_66bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_67bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_68bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_69bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_70bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_71bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_72bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_73bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_74bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_75bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_76bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_77bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_78bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_79bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_80bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_81bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_82bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_83bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_84bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_85bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_86bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_87bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_88bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_89bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_90bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_91bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_92bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_93bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_94bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_95bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_96bytes), 
L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_97bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_98bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_99bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_100bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_101bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_102bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_103bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_104bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_105bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_106bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_107bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_108bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_109bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_110bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_111bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_112bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_113bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_114bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_115bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_116bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_117bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_118bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_119bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_120bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_121bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_122bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_123bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_124bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_125bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_126bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_127bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_128bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_129bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_130bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_131bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_132bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_133bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_134bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_135bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_136bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_137bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_138bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_139bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_140bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_141bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_142bytes), L(table_144_bytes_bwd)) + .int JMPTBL (L(bwd_write_143bytes), L(table_144_bytes_bwd)) + + .p2align 3 +L(table_144_bytes_fwd): + .int JMPTBL (L(fwd_write_0bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_1bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_2bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_3bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_4bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_5bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_6bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_7bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_8bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_9bytes), 
L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_10bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_11bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_12bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_13bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_14bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_15bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_16bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_17bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_18bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_19bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_20bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_21bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_22bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_23bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_24bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_25bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_26bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_27bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_28bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_29bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_30bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_31bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_32bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_33bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_34bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_35bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_36bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_37bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_38bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_39bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_40bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_41bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_42bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_43bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_44bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_45bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_46bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_47bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_48bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_49bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_50bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_51bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_52bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_53bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_54bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_55bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_56bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_57bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_58bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_59bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_60bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_61bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_62bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_63bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_64bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_65bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_66bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_67bytes), 
L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_68bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_69bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_70bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_71bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_72bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_73bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_74bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_75bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_76bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_77bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_78bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_79bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_80bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_81bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_82bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_83bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_84bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_85bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_86bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_87bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_88bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_89bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_90bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_91bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_92bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_93bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_94bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_95bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_96bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_97bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_98bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_99bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_100bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_101bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_102bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_103bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_104bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_105bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_106bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_107bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_108bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_109bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_110bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_111bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_112bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_113bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_114bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_115bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_116bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_117bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_118bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_119bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_120bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_121bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_122bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_123bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_124bytes), L(table_144_bytes_fwd)) + .int JMPTBL 
(L(fwd_write_125bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_126bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_127bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_128bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_129bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_130bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_131bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_132bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_133bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_134bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_135bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_136bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_137bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_138bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_139bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_140bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_141bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_142bytes), L(table_144_bytes_fwd)) + .int JMPTBL (L(fwd_write_143bytes), L(table_144_bytes_fwd)) + + .p2align 3 +L(shl_table_fwd): + .int JMPTBL (L(shl_0), L(shl_table_fwd)) + .int JMPTBL (L(shl_1), L(shl_table_fwd)) + .int JMPTBL (L(shl_2), L(shl_table_fwd)) + .int JMPTBL (L(shl_3), L(shl_table_fwd)) + .int JMPTBL (L(shl_4), L(shl_table_fwd)) + .int JMPTBL (L(shl_5), L(shl_table_fwd)) + .int JMPTBL (L(shl_6), L(shl_table_fwd)) + .int JMPTBL (L(shl_7), L(shl_table_fwd)) + .int JMPTBL (L(shl_8), L(shl_table_fwd)) + .int JMPTBL (L(shl_9), L(shl_table_fwd)) + .int JMPTBL (L(shl_10), L(shl_table_fwd)) + .int JMPTBL (L(shl_11), L(shl_table_fwd)) + .int JMPTBL (L(shl_12), L(shl_table_fwd)) + .int JMPTBL (L(shl_13), L(shl_table_fwd)) + .int JMPTBL (L(shl_14), L(shl_table_fwd)) + .int JMPTBL (L(shl_15), L(shl_table_fwd)) + + .p2align 3 +L(shl_table_bwd): + .int JMPTBL (L(shl_0_bwd), L(shl_table_bwd)) + .int JMPTBL (L(shl_1_bwd), L(shl_table_bwd)) + .int JMPTBL (L(shl_2_bwd), L(shl_table_bwd)) + .int JMPTBL (L(shl_3_bwd), L(shl_table_bwd)) + .int JMPTBL (L(shl_4_bwd), L(shl_table_bwd)) + .int JMPTBL (L(shl_5_bwd), L(shl_table_bwd)) + .int JMPTBL (L(shl_6_bwd), L(shl_table_bwd)) + .int JMPTBL (L(shl_7_bwd), L(shl_table_bwd)) + .int JMPTBL (L(shl_8_bwd), L(shl_table_bwd)) + .int JMPTBL (L(shl_9_bwd), L(shl_table_bwd)) + .int JMPTBL (L(shl_10_bwd), L(shl_table_bwd)) + .int JMPTBL (L(shl_11_bwd), L(shl_table_bwd)) + .int JMPTBL (L(shl_12_bwd), L(shl_table_bwd)) + .int JMPTBL (L(shl_13_bwd), L(shl_table_bwd)) + .int JMPTBL (L(shl_14_bwd), L(shl_table_bwd)) + .int JMPTBL (L(shl_15_bwd), L(shl_table_bwd)) + +#endif diff --git a/utils/memcpy-bench/glibc/memcpy-ssse3.S b/utils/memcpy-bench/glibc/memcpy-ssse3.S new file mode 100644 index 00000000000..11cb6559a8b --- /dev/null +++ b/utils/memcpy-bench/glibc/memcpy-ssse3.S @@ -0,0 +1,3152 @@ +/* memcpy with SSSE3 + Copyright (C) 2010-2020 Free Software Foundation, Inc. + Contributed by Intel Corporation. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + <https://www.gnu.org/licenses/>. */ + +#include "sysdep.h" + +#if 1 + +#include "asm-syntax.h" + +#ifndef MEMCPY +# define MEMCPY __memcpy_ssse3 +# define MEMCPY_CHK __memcpy_chk_ssse3 +# define MEMPCPY __mempcpy_ssse3 +# define MEMPCPY_CHK __mempcpy_chk_ssse3 +#endif + +#define JMPTBL(I, B) I - B + +/* Branch to an entry in a jump table. TABLE is a jump table with + relative offsets. INDEX is a register that contains the index into the + jump table. SCALE is the scale of INDEX. */ +#define BRANCH_TO_JMPTBL_ENTRY(TABLE, INDEX, SCALE) \ + lea TABLE(%rip), %r11; \ + movslq (%r11, INDEX, SCALE), INDEX; \ + lea (%r11, INDEX), INDEX; \ + _CET_NOTRACK jmp *INDEX; \ + ud2 + + .section .text.ssse3,"ax",@progbits +#if !defined USE_AS_MEMPCPY && !defined USE_AS_MEMMOVE +ENTRY (MEMPCPY_CHK) + cmp %RDX_LP, %RCX_LP + jb HIDDEN_JUMPTARGET (__chk_fail) +END (MEMPCPY_CHK) + +ENTRY (MEMPCPY) + mov %RDI_LP, %RAX_LP + add %RDX_LP, %RAX_LP + jmp L(start) +END (MEMPCPY) +#endif + +#if !defined USE_AS_BCOPY +ENTRY (MEMCPY_CHK) + cmp %RDX_LP, %RCX_LP + jb HIDDEN_JUMPTARGET (__chk_fail) +END (MEMCPY_CHK) +#endif + +ENTRY (MEMCPY) + mov %RDI_LP, %RAX_LP +#ifdef USE_AS_MEMPCPY + add %RDX_LP, %RAX_LP +#endif + +#ifdef __ILP32__ + /* Clear the upper 32 bits. */ + mov %edx, %edx +#endif + +#ifdef USE_AS_MEMMOVE + cmp %rsi, %rdi + jb L(copy_forward) + je L(write_0bytes) + cmp $79, %rdx + jbe L(copy_forward) + jmp L(copy_backward) +L(copy_forward): +#endif +L(start): + cmp $79, %rdx + lea L(table_less_80bytes)(%rip), %r11 + ja L(80bytesormore) + movslq (%r11, %rdx, 4), %r9 + add %rdx, %rsi + add %rdx, %rdi + add %r11, %r9 + _CET_NOTRACK jmp *%r9 + ud2 + + .p2align 4 +L(80bytesormore): +#ifndef USE_AS_MEMMOVE + cmp %dil, %sil + jle L(copy_backward) +#endif + + movdqu (%rsi), %xmm0 + mov %rdi, %rcx + and $-16, %rdi + add $16, %rdi + mov %rcx, %r8 + sub %rdi, %rcx + add %rcx, %rdx + sub %rcx, %rsi + +#ifdef SHARED_CACHE_SIZE_HALF + mov $SHARED_CACHE_SIZE_HALF, %RCX_LP +#else + mov __x86_shared_cache_size_half(%rip), %RCX_LP +#endif + cmp %rcx, %rdx + mov %rsi, %r9 + ja L(large_page_fwd) + and $0xf, %r9 + jz L(shl_0) +#ifdef DATA_CACHE_SIZE_HALF + mov $DATA_CACHE_SIZE_HALF, %RCX_LP +#else + mov __x86_data_cache_size_half(%rip), %RCX_LP +#endif + BRANCH_TO_JMPTBL_ENTRY (L(shl_table), %r9, 4) + + .p2align 4 +L(copy_backward): + movdqu -16(%rsi, %rdx), %xmm0 + add %rdx, %rsi + lea -16(%rdi, %rdx), %r8 + add %rdx, %rdi + + mov %rdi, %rcx + and $0xf, %rcx + xor %rcx, %rdi + sub %rcx, %rdx + sub %rcx, %rsi + +#ifdef SHARED_CACHE_SIZE_HALF + mov $SHARED_CACHE_SIZE_HALF, %RCX_LP +#else + mov __x86_shared_cache_size_half(%rip), %RCX_LP +#endif + + cmp %rcx, %rdx + mov %rsi, %r9 + ja L(large_page_bwd) + and $0xf, %r9 + jz L(shl_0_bwd) +#ifdef DATA_CACHE_SIZE_HALF + mov $DATA_CACHE_SIZE_HALF, %RCX_LP +#else + mov __x86_data_cache_size_half(%rip), %RCX_LP +#endif + BRANCH_TO_JMPTBL_ENTRY (L(shl_table_bwd), %r9, 4) + + .p2align 4 +L(shl_0): + sub $16, %rdx + movdqa (%rsi), %xmm1 + add $16, %rsi + movdqa %xmm1, (%rdi) + add $16, %rdi + cmp $128, %rdx + movdqu %xmm0, (%r8) + ja L(shl_0_gobble) + cmp $64, %rdx + jb L(shl_0_less_64bytes) + movaps (%rsi), %xmm4 + movaps 16(%rsi), %xmm1 + movaps 32(%rsi), %xmm2 + movaps 48(%rsi), %xmm3 + movaps %xmm4, (%rdi) + movaps %xmm1, 16(%rdi) + movaps %xmm2, 32(%rdi) + movaps %xmm3, 48(%rdi) + sub $64, %rdx + add $64, %rsi + add $64, %rdi
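+ /* Fewer than 64 bytes remain at this point: both pointers are stepped past the tail and control dispatches through L(table_less_80bytes). Each table entry is a JMPTBL relative offset (target minus table base), so the tables stay position-independent and need no load-time relocation. */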
+L(shl_0_less_64bytes): + add %rdx, %rsi + add %rdx, %rdi + BRANCH_TO_JMPTBL_ENTRY (L(table_less_80bytes), %rdx, 4) + + .p2align 4 +L(shl_0_gobble): +#ifdef DATA_CACHE_SIZE_HALF + cmp $DATA_CACHE_SIZE_HALF, %RDX_LP +#else + cmp __x86_data_cache_size_half(%rip), %RDX_LP +#endif + lea -128(%rdx), %rdx + jae L(shl_0_gobble_mem_loop) +L(shl_0_gobble_cache_loop): + movdqa (%rsi), %xmm4 + movaps 0x10(%rsi), %xmm1 + movaps 0x20(%rsi), %xmm2 + movaps 0x30(%rsi), %xmm3 + + movdqa %xmm4, (%rdi) + movaps %xmm1, 0x10(%rdi) + movaps %xmm2, 0x20(%rdi) + movaps %xmm3, 0x30(%rdi) + + sub $128, %rdx + movaps 0x40(%rsi), %xmm4 + movaps 0x50(%rsi), %xmm5 + movaps 0x60(%rsi), %xmm6 + movaps 0x70(%rsi), %xmm7 + lea 0x80(%rsi), %rsi + movaps %xmm4, 0x40(%rdi) + movaps %xmm5, 0x50(%rdi) + movaps %xmm6, 0x60(%rdi) + movaps %xmm7, 0x70(%rdi) + lea 0x80(%rdi), %rdi + + jae L(shl_0_gobble_cache_loop) + cmp $-0x40, %rdx + lea 0x80(%rdx), %rdx + jl L(shl_0_cache_less_64bytes) + + movdqa (%rsi), %xmm4 + sub $0x40, %rdx + movdqa 0x10(%rsi), %xmm1 + + movdqa %xmm4, (%rdi) + movdqa %xmm1, 0x10(%rdi) + + movdqa 0x20(%rsi), %xmm4 + movdqa 0x30(%rsi), %xmm1 + add $0x40, %rsi + + movdqa %xmm4, 0x20(%rdi) + movdqa %xmm1, 0x30(%rdi) + add $0x40, %rdi +L(shl_0_cache_less_64bytes): + add %rdx, %rsi + add %rdx, %rdi + BRANCH_TO_JMPTBL_ENTRY (L(table_less_80bytes), %rdx, 4) + + .p2align 4 +L(shl_0_gobble_mem_loop): + prefetcht0 0x1c0(%rsi) + prefetcht0 0x280(%rsi) + + movdqa (%rsi), %xmm0 + movdqa 0x10(%rsi), %xmm1 + movdqa 0x20(%rsi), %xmm2 + movdqa 0x30(%rsi), %xmm3 + movdqa 0x40(%rsi), %xmm4 + movdqa 0x50(%rsi), %xmm5 + movdqa 0x60(%rsi), %xmm6 + movdqa 0x70(%rsi), %xmm7 + lea 0x80(%rsi), %rsi + sub $0x80, %rdx + movdqa %xmm0, (%rdi) + movdqa %xmm1, 0x10(%rdi) + movdqa %xmm2, 0x20(%rdi) + movdqa %xmm3, 0x30(%rdi) + movdqa %xmm4, 0x40(%rdi) + movdqa %xmm5, 0x50(%rdi) + movdqa %xmm6, 0x60(%rdi) + movdqa %xmm7, 0x70(%rdi) + lea 0x80(%rdi), %rdi + + jae L(shl_0_gobble_mem_loop) + cmp $-0x40, %rdx + lea 0x80(%rdx), %rdx + jl L(shl_0_mem_less_64bytes) + + movdqa (%rsi), %xmm0 + sub $0x40, %rdx + movdqa 0x10(%rsi), %xmm1 + + movdqa %xmm0, (%rdi) + movdqa %xmm1, 0x10(%rdi) + + movdqa 0x20(%rsi), %xmm0 + movdqa 0x30(%rsi), %xmm1 + add $0x40, %rsi + + movdqa %xmm0, 0x20(%rdi) + movdqa %xmm1, 0x30(%rdi) + add $0x40, %rdi +L(shl_0_mem_less_64bytes): + cmp $0x20, %rdx + jb L(shl_0_mem_less_32bytes) + movdqa (%rsi), %xmm0 + sub $0x20, %rdx + movdqa 0x10(%rsi), %xmm1 + add $0x20, %rsi + movdqa %xmm0, (%rdi) + movdqa %xmm1, 0x10(%rdi) + add $0x20, %rdi +L(shl_0_mem_less_32bytes): + add %rdx, %rdi + add %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY (L(table_less_80bytes), %rdx, 4) + + .p2align 4 +L(shl_0_bwd): + sub $16, %rdx + movdqa -0x10(%rsi), %xmm1 + sub $16, %rsi + movdqa %xmm1, -0x10(%rdi) + sub $16, %rdi + cmp $0x80, %rdx + movdqu %xmm0, (%r8) + ja L(shl_0_gobble_bwd) + cmp $64, %rdx + jb L(shl_0_less_64bytes_bwd) + movaps -0x10(%rsi), %xmm0 + movaps -0x20(%rsi), %xmm1 + movaps -0x30(%rsi), %xmm2 + movaps -0x40(%rsi), %xmm3 + movaps %xmm0, -0x10(%rdi) + movaps %xmm1, -0x20(%rdi) + movaps %xmm2, -0x30(%rdi) + movaps %xmm3, -0x40(%rdi) + sub $64, %rdx + sub $0x40, %rsi + sub $0x40, %rdi +L(shl_0_less_64bytes_bwd): + BRANCH_TO_JMPTBL_ENTRY (L(table_less_80bytes), %rdx, 4) + + .p2align 4 +L(shl_0_gobble_bwd): +#ifdef DATA_CACHE_SIZE_HALF + cmp $DATA_CACHE_SIZE_HALF, %RDX_LP +#else + cmp __x86_data_cache_size_half(%rip), %RDX_LP +#endif + lea -128(%rdx), %rdx + jae L(shl_0_gobble_mem_bwd_loop) +L(shl_0_gobble_bwd_loop): + movdqa -0x10(%rsi), %xmm0 + 
movaps -0x20(%rsi), %xmm1 + movaps -0x30(%rsi), %xmm2 + movaps -0x40(%rsi), %xmm3 + + movdqa %xmm0, -0x10(%rdi) + movaps %xmm1, -0x20(%rdi) + movaps %xmm2, -0x30(%rdi) + movaps %xmm3, -0x40(%rdi) + + sub $0x80, %rdx + movaps -0x50(%rsi), %xmm4 + movaps -0x60(%rsi), %xmm5 + movaps -0x70(%rsi), %xmm6 + movaps -0x80(%rsi), %xmm7 + lea -0x80(%rsi), %rsi + movaps %xmm4, -0x50(%rdi) + movaps %xmm5, -0x60(%rdi) + movaps %xmm6, -0x70(%rdi) + movaps %xmm7, -0x80(%rdi) + lea -0x80(%rdi), %rdi + + jae L(shl_0_gobble_bwd_loop) + cmp $-0x40, %rdx + lea 0x80(%rdx), %rdx + jl L(shl_0_gobble_bwd_less_64bytes) + + movdqa -0x10(%rsi), %xmm0 + sub $0x40, %rdx + movdqa -0x20(%rsi), %xmm1 + + movdqa %xmm0, -0x10(%rdi) + movdqa %xmm1, -0x20(%rdi) + + movdqa -0x30(%rsi), %xmm0 + movdqa -0x40(%rsi), %xmm1 + sub $0x40, %rsi + + movdqa %xmm0, -0x30(%rdi) + movdqa %xmm1, -0x40(%rdi) + sub $0x40, %rdi +L(shl_0_gobble_bwd_less_64bytes): + BRANCH_TO_JMPTBL_ENTRY (L(table_less_80bytes), %rdx, 4) + + .p2align 4 +L(shl_0_gobble_mem_bwd_loop): + prefetcht0 -0x1c0(%rsi) + prefetcht0 -0x280(%rsi) + movdqa -0x10(%rsi), %xmm0 + movdqa -0x20(%rsi), %xmm1 + movdqa -0x30(%rsi), %xmm2 + movdqa -0x40(%rsi), %xmm3 + movdqa -0x50(%rsi), %xmm4 + movdqa -0x60(%rsi), %xmm5 + movdqa -0x70(%rsi), %xmm6 + movdqa -0x80(%rsi), %xmm7 + lea -0x80(%rsi), %rsi + sub $0x80, %rdx + movdqa %xmm0, -0x10(%rdi) + movdqa %xmm1, -0x20(%rdi) + movdqa %xmm2, -0x30(%rdi) + movdqa %xmm3, -0x40(%rdi) + movdqa %xmm4, -0x50(%rdi) + movdqa %xmm5, -0x60(%rdi) + movdqa %xmm6, -0x70(%rdi) + movdqa %xmm7, -0x80(%rdi) + lea -0x80(%rdi), %rdi + + jae L(shl_0_gobble_mem_bwd_loop) + cmp $-0x40, %rdx + lea 0x80(%rdx), %rdx + jl L(shl_0_mem_bwd_less_64bytes) + + movdqa -0x10(%rsi), %xmm0 + sub $0x40, %rdx + movdqa -0x20(%rsi), %xmm1 + + movdqa %xmm0, -0x10(%rdi) + movdqa %xmm1, -0x20(%rdi) + + movdqa -0x30(%rsi), %xmm0 + movdqa -0x40(%rsi), %xmm1 + sub $0x40, %rsi + + movdqa %xmm0, -0x30(%rdi) + movdqa %xmm1, -0x40(%rdi) + sub $0x40, %rdi +L(shl_0_mem_bwd_less_64bytes): + cmp $0x20, %rdx + jb L(shl_0_mem_bwd_less_32bytes) + movdqa -0x10(%rsi), %xmm0 + sub $0x20, %rdx + movdqa -0x20(%rsi), %xmm1 + sub $0x20, %rsi + movdqa %xmm0, -0x10(%rdi) + movdqa %xmm1, -0x20(%rdi) + sub $0x20, %rdi +L(shl_0_mem_bwd_less_32bytes): + BRANCH_TO_JMPTBL_ENTRY (L(table_less_80bytes), %rdx, 4) + + .p2align 4 +L(shl_1): + lea (L(shl_1_loop_L1)-L(shl_1))(%r9), %r9 + cmp %rcx, %rdx + movaps -0x01(%rsi), %xmm1 + jb L(L1_fwd) + lea (L(shl_1_loop_L2)-L(shl_1_loop_L1))(%r9), %r9 +L(L1_fwd): + lea -64(%rdx), %rdx + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_1_loop_L2): + prefetchnta 0x1c0(%rsi) +L(shl_1_loop_L1): + sub $64, %rdx + movaps 0x0f(%rsi), %xmm2 + movaps 0x1f(%rsi), %xmm3 + movaps 0x2f(%rsi), %xmm4 + movaps 0x3f(%rsi), %xmm5 + movdqa %xmm5, %xmm6 + palignr $1, %xmm4, %xmm5 + lea 64(%rsi), %rsi + palignr $1, %xmm3, %xmm4 + palignr $1, %xmm2, %xmm3 + lea 64(%rdi), %rdi + palignr $1, %xmm1, %xmm2 + movdqa %xmm6, %xmm1 + movdqa %xmm2, -0x40(%rdi) + movaps %xmm3, -0x30(%rdi) + jb L(shl_1_end) + movaps %xmm4, -0x20(%rdi) + movaps %xmm5, -0x10(%rdi) + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_1_end): + movaps %xmm4, -0x20(%rdi) + lea 64(%rdx), %rdx + movaps %xmm5, -0x10(%rdi) + add %rdx, %rdi + movdqu %xmm0, (%r8) + add %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY(L(table_less_80bytes), %rdx, 4) + + .p2align 4 +L(shl_1_bwd): + lea (L(shl_1_bwd_loop_L1)-L(shl_1_bwd))(%r9), %r9 + cmp %rcx, %rdx + movaps -0x01(%rsi), %xmm1 + jb L(L1_bwd) + lea (L(shl_1_bwd_loop_L2)-L(shl_1_bwd_loop_L1))(%r9), %r9 +L(L1_bwd): + lea 
-64(%rdx), %rdx + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_1_bwd_loop_L2): + prefetchnta -0x1c0(%rsi) +L(shl_1_bwd_loop_L1): + movaps -0x11(%rsi), %xmm2 + sub $0x40, %rdx + movaps -0x21(%rsi), %xmm3 + movaps -0x31(%rsi), %xmm4 + movaps -0x41(%rsi), %xmm5 + lea -0x40(%rsi), %rsi + palignr $1, %xmm2, %xmm1 + palignr $1, %xmm3, %xmm2 + palignr $1, %xmm4, %xmm3 + palignr $1, %xmm5, %xmm4 + + movaps %xmm1, -0x10(%rdi) + movaps %xmm5, %xmm1 + + movaps %xmm2, -0x20(%rdi) + lea -0x40(%rdi), %rdi + + movaps %xmm3, 0x10(%rdi) + jb L(shl_1_bwd_end) + movaps %xmm4, (%rdi) + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_1_bwd_end): + movaps %xmm4, (%rdi) + lea 64(%rdx), %rdx + movdqu %xmm0, (%r8) + BRANCH_TO_JMPTBL_ENTRY(L(table_less_80bytes), %rdx, 4) + + .p2align 4 +L(shl_2): + lea (L(shl_2_loop_L1)-L(shl_2))(%r9), %r9 + cmp %rcx, %rdx + movaps -0x02(%rsi), %xmm1 + jb L(L2_fwd) + lea (L(shl_2_loop_L2)-L(shl_2_loop_L1))(%r9), %r9 +L(L2_fwd): + lea -64(%rdx), %rdx + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_2_loop_L2): + prefetchnta 0x1c0(%rsi) +L(shl_2_loop_L1): + sub $64, %rdx + movaps 0x0e(%rsi), %xmm2 + movaps 0x1e(%rsi), %xmm3 + movaps 0x2e(%rsi), %xmm4 + movaps 0x3e(%rsi), %xmm5 + movdqa %xmm5, %xmm6 + palignr $2, %xmm4, %xmm5 + lea 64(%rsi), %rsi + palignr $2, %xmm3, %xmm4 + palignr $2, %xmm2, %xmm3 + lea 64(%rdi), %rdi + palignr $2, %xmm1, %xmm2 + movdqa %xmm6, %xmm1 + movdqa %xmm2, -0x40(%rdi) + movaps %xmm3, -0x30(%rdi) + jb L(shl_2_end) + movaps %xmm4, -0x20(%rdi) + movaps %xmm5, -0x10(%rdi) + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_2_end): + movaps %xmm4, -0x20(%rdi) + lea 64(%rdx), %rdx + movaps %xmm5, -0x10(%rdi) + add %rdx, %rdi + movdqu %xmm0, (%r8) + add %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY(L(table_less_80bytes), %rdx, 4) + + .p2align 4 +L(shl_2_bwd): + lea (L(shl_2_bwd_loop_L1)-L(shl_2_bwd))(%r9), %r9 + cmp %rcx, %rdx + movaps -0x02(%rsi), %xmm1 + jb L(L2_bwd) + lea (L(shl_2_bwd_loop_L2)-L(shl_2_bwd_loop_L1))(%r9), %r9 +L(L2_bwd): + lea -64(%rdx), %rdx + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_2_bwd_loop_L2): + prefetchnta -0x1c0(%rsi) +L(shl_2_bwd_loop_L1): + movaps -0x12(%rsi), %xmm2 + sub $0x40, %rdx + movaps -0x22(%rsi), %xmm3 + movaps -0x32(%rsi), %xmm4 + movaps -0x42(%rsi), %xmm5 + lea -0x40(%rsi), %rsi + palignr $2, %xmm2, %xmm1 + palignr $2, %xmm3, %xmm2 + palignr $2, %xmm4, %xmm3 + palignr $2, %xmm5, %xmm4 + + movaps %xmm1, -0x10(%rdi) + movaps %xmm5, %xmm1 + + movaps %xmm2, -0x20(%rdi) + lea -0x40(%rdi), %rdi + + movaps %xmm3, 0x10(%rdi) + jb L(shl_2_bwd_end) + movaps %xmm4, (%rdi) + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_2_bwd_end): + movaps %xmm4, (%rdi) + lea 64(%rdx), %rdx + movdqu %xmm0, (%r8) + BRANCH_TO_JMPTBL_ENTRY(L(table_less_80bytes), %rdx, 4) + + .p2align 4 +L(shl_3): + lea (L(shl_3_loop_L1)-L(shl_3))(%r9), %r9 + cmp %rcx, %rdx + movaps -0x03(%rsi), %xmm1 + jb L(L3_fwd) + lea (L(shl_3_loop_L2)-L(shl_3_loop_L1))(%r9), %r9 +L(L3_fwd): + lea -64(%rdx), %rdx + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_3_loop_L2): + prefetchnta 0x1c0(%rsi) +L(shl_3_loop_L1): + sub $64, %rdx + movaps 0x0d(%rsi), %xmm2 + movaps 0x1d(%rsi), %xmm3 + movaps 0x2d(%rsi), %xmm4 + movaps 0x3d(%rsi), %xmm5 + movdqa %xmm5, %xmm6 + palignr $3, %xmm4, %xmm5 + lea 64(%rsi), %rsi + palignr $3, %xmm3, %xmm4 + palignr $3, %xmm2, %xmm3 + lea 64(%rdi), %rdi + palignr $3, %xmm1, %xmm2 + movdqa %xmm6, %xmm1 + movdqa %xmm2, -0x40(%rdi) + movaps %xmm3, -0x30(%rdi) + jb L(shl_3_end) + movaps %xmm4, -0x20(%rdi) + movaps %xmm5, -0x10(%rdi) + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_3_end): + movaps %xmm4, -0x20(%rdi) + lea 64(%rdx), %rdx + movaps %xmm5, 
-0x10(%rdi) + add %rdx, %rdi + movdqu %xmm0, (%r8) + add %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY(L(table_less_80bytes), %rdx, 4) + + .p2align 4 +L(shl_3_bwd): + lea (L(shl_3_bwd_loop_L1)-L(shl_3_bwd))(%r9), %r9 + cmp %rcx, %rdx + movaps -0x03(%rsi), %xmm1 + jb L(L3_bwd) + lea (L(shl_3_bwd_loop_L2)-L(shl_3_bwd_loop_L1))(%r9), %r9 +L(L3_bwd): + lea -64(%rdx), %rdx + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_3_bwd_loop_L2): + prefetchnta -0x1c0(%rsi) +L(shl_3_bwd_loop_L1): + movaps -0x13(%rsi), %xmm2 + sub $0x40, %rdx + movaps -0x23(%rsi), %xmm3 + movaps -0x33(%rsi), %xmm4 + movaps -0x43(%rsi), %xmm5 + lea -0x40(%rsi), %rsi + palignr $3, %xmm2, %xmm1 + palignr $3, %xmm3, %xmm2 + palignr $3, %xmm4, %xmm3 + palignr $3, %xmm5, %xmm4 + + movaps %xmm1, -0x10(%rdi) + movaps %xmm5, %xmm1 + + movaps %xmm2, -0x20(%rdi) + lea -0x40(%rdi), %rdi + + movaps %xmm3, 0x10(%rdi) + jb L(shl_3_bwd_end) + movaps %xmm4, (%rdi) + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_3_bwd_end): + movaps %xmm4, (%rdi) + lea 64(%rdx), %rdx + movdqu %xmm0, (%r8) + BRANCH_TO_JMPTBL_ENTRY(L(table_less_80bytes), %rdx, 4) + + .p2align 4 +L(shl_4): + lea (L(shl_4_loop_L1)-L(shl_4))(%r9), %r9 + cmp %rcx, %rdx + movaps -0x04(%rsi), %xmm1 + jb L(L4_fwd) + lea (L(shl_4_loop_L2)-L(shl_4_loop_L1))(%r9), %r9 +L(L4_fwd): + lea -64(%rdx), %rdx + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_4_loop_L2): + prefetchnta 0x1c0(%rsi) +L(shl_4_loop_L1): + sub $64, %rdx + movaps 0x0c(%rsi), %xmm2 + movaps 0x1c(%rsi), %xmm3 + movaps 0x2c(%rsi), %xmm4 + movaps 0x3c(%rsi), %xmm5 + movdqa %xmm5, %xmm6 + palignr $4, %xmm4, %xmm5 + lea 64(%rsi), %rsi + palignr $4, %xmm3, %xmm4 + palignr $4, %xmm2, %xmm3 + lea 64(%rdi), %rdi + palignr $4, %xmm1, %xmm2 + movdqa %xmm6, %xmm1 + movdqa %xmm2, -0x40(%rdi) + movaps %xmm3, -0x30(%rdi) + jb L(shl_4_end) + movaps %xmm4, -0x20(%rdi) + movaps %xmm5, -0x10(%rdi) + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_4_end): + movaps %xmm4, -0x20(%rdi) + lea 64(%rdx), %rdx + movaps %xmm5, -0x10(%rdi) + add %rdx, %rdi + movdqu %xmm0, (%r8) + add %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY(L(table_less_80bytes), %rdx, 4) + + .p2align 4 +L(shl_4_bwd): + lea (L(shl_4_bwd_loop_L1)-L(shl_4_bwd))(%r9), %r9 + cmp %rcx, %rdx + movaps -0x04(%rsi), %xmm1 + jb L(L4_bwd) + lea (L(shl_4_bwd_loop_L2)-L(shl_4_bwd_loop_L1))(%r9), %r9 +L(L4_bwd): + lea -64(%rdx), %rdx + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_4_bwd_loop_L2): + prefetchnta -0x1c0(%rsi) +L(shl_4_bwd_loop_L1): + movaps -0x14(%rsi), %xmm2 + sub $0x40, %rdx + movaps -0x24(%rsi), %xmm3 + movaps -0x34(%rsi), %xmm4 + movaps -0x44(%rsi), %xmm5 + lea -0x40(%rsi), %rsi + palignr $4, %xmm2, %xmm1 + palignr $4, %xmm3, %xmm2 + palignr $4, %xmm4, %xmm3 + palignr $4, %xmm5, %xmm4 + + movaps %xmm1, -0x10(%rdi) + movaps %xmm5, %xmm1 + + movaps %xmm2, -0x20(%rdi) + lea -0x40(%rdi), %rdi + + movaps %xmm3, 0x10(%rdi) + jb L(shl_4_bwd_end) + movaps %xmm4, (%rdi) + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_4_bwd_end): + movaps %xmm4, (%rdi) + lea 64(%rdx), %rdx + movdqu %xmm0, (%r8) + BRANCH_TO_JMPTBL_ENTRY(L(table_less_80bytes), %rdx, 4) + + .p2align 4 +L(shl_5): + lea (L(shl_5_loop_L1)-L(shl_5))(%r9), %r9 + cmp %rcx, %rdx + movaps -0x05(%rsi), %xmm1 + jb L(L5_fwd) + lea (L(shl_5_loop_L2)-L(shl_5_loop_L1))(%r9), %r9 +L(L5_fwd): + lea -64(%rdx), %rdx + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_5_loop_L2): + prefetchnta 0x1c0(%rsi) +L(shl_5_loop_L1): + sub $64, %rdx + movaps 0x0b(%rsi), %xmm2 + movaps 0x1b(%rsi), %xmm3 + movaps 0x2b(%rsi), %xmm4 + movaps 0x3b(%rsi), %xmm5 + movdqa %xmm5, %xmm6 + palignr $5, %xmm4, %xmm5 + lea 64(%rsi), %rsi + palignr $5, %xmm3, %xmm4 
+ palignr $5, %xmm2, %xmm3 + lea 64(%rdi), %rdi + palignr $5, %xmm1, %xmm2 + movdqa %xmm6, %xmm1 + movdqa %xmm2, -0x40(%rdi) + movaps %xmm3, -0x30(%rdi) + jb L(shl_5_end) + movaps %xmm4, -0x20(%rdi) + movaps %xmm5, -0x10(%rdi) + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_5_end): + movaps %xmm4, -0x20(%rdi) + lea 64(%rdx), %rdx + movaps %xmm5, -0x10(%rdi) + add %rdx, %rdi + movdqu %xmm0, (%r8) + add %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY(L(table_less_80bytes), %rdx, 4) + + .p2align 4 +L(shl_5_bwd): + lea (L(shl_5_bwd_loop_L1)-L(shl_5_bwd))(%r9), %r9 + cmp %rcx, %rdx + movaps -0x05(%rsi), %xmm1 + jb L(L5_bwd) + lea (L(shl_5_bwd_loop_L2)-L(shl_5_bwd_loop_L1))(%r9), %r9 +L(L5_bwd): + lea -64(%rdx), %rdx + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_5_bwd_loop_L2): + prefetchnta -0x1c0(%rsi) +L(shl_5_bwd_loop_L1): + movaps -0x15(%rsi), %xmm2 + sub $0x40, %rdx + movaps -0x25(%rsi), %xmm3 + movaps -0x35(%rsi), %xmm4 + movaps -0x45(%rsi), %xmm5 + lea -0x40(%rsi), %rsi + palignr $5, %xmm2, %xmm1 + palignr $5, %xmm3, %xmm2 + palignr $5, %xmm4, %xmm3 + palignr $5, %xmm5, %xmm4 + + movaps %xmm1, -0x10(%rdi) + movaps %xmm5, %xmm1 + + movaps %xmm2, -0x20(%rdi) + lea -0x40(%rdi), %rdi + + movaps %xmm3, 0x10(%rdi) + jb L(shl_5_bwd_end) + movaps %xmm4, (%rdi) + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_5_bwd_end): + movaps %xmm4, (%rdi) + lea 64(%rdx), %rdx + movdqu %xmm0, (%r8) + BRANCH_TO_JMPTBL_ENTRY(L(table_less_80bytes), %rdx, 4) + + .p2align 4 +L(shl_6): + lea (L(shl_6_loop_L1)-L(shl_6))(%r9), %r9 + cmp %rcx, %rdx + movaps -0x06(%rsi), %xmm1 + jb L(L6_fwd) + lea (L(shl_6_loop_L2)-L(shl_6_loop_L1))(%r9), %r9 +L(L6_fwd): + lea -64(%rdx), %rdx + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_6_loop_L2): + prefetchnta 0x1c0(%rsi) +L(shl_6_loop_L1): + sub $64, %rdx + movaps 0x0a(%rsi), %xmm2 + movaps 0x1a(%rsi), %xmm3 + movaps 0x2a(%rsi), %xmm4 + movaps 0x3a(%rsi), %xmm5 + movdqa %xmm5, %xmm6 + palignr $6, %xmm4, %xmm5 + lea 64(%rsi), %rsi + palignr $6, %xmm3, %xmm4 + palignr $6, %xmm2, %xmm3 + lea 64(%rdi), %rdi + palignr $6, %xmm1, %xmm2 + movdqa %xmm6, %xmm1 + movdqa %xmm2, -0x40(%rdi) + movaps %xmm3, -0x30(%rdi) + jb L(shl_6_end) + movaps %xmm4, -0x20(%rdi) + movaps %xmm5, -0x10(%rdi) + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_6_end): + movaps %xmm4, -0x20(%rdi) + lea 64(%rdx), %rdx + movaps %xmm5, -0x10(%rdi) + add %rdx, %rdi + movdqu %xmm0, (%r8) + add %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY(L(table_less_80bytes), %rdx, 4) + + .p2align 4 +L(shl_6_bwd): + lea (L(shl_6_bwd_loop_L1)-L(shl_6_bwd))(%r9), %r9 + cmp %rcx, %rdx + movaps -0x06(%rsi), %xmm1 + jb L(L6_bwd) + lea (L(shl_6_bwd_loop_L2)-L(shl_6_bwd_loop_L1))(%r9), %r9 +L(L6_bwd): + lea -64(%rdx), %rdx + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_6_bwd_loop_L2): + prefetchnta -0x1c0(%rsi) +L(shl_6_bwd_loop_L1): + movaps -0x16(%rsi), %xmm2 + sub $0x40, %rdx + movaps -0x26(%rsi), %xmm3 + movaps -0x36(%rsi), %xmm4 + movaps -0x46(%rsi), %xmm5 + lea -0x40(%rsi), %rsi + palignr $6, %xmm2, %xmm1 + palignr $6, %xmm3, %xmm2 + palignr $6, %xmm4, %xmm3 + palignr $6, %xmm5, %xmm4 + + movaps %xmm1, -0x10(%rdi) + movaps %xmm5, %xmm1 + + movaps %xmm2, -0x20(%rdi) + lea -0x40(%rdi), %rdi + + movaps %xmm3, 0x10(%rdi) + jb L(shl_6_bwd_end) + movaps %xmm4, (%rdi) + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_6_bwd_end): + movaps %xmm4, (%rdi) + lea 64(%rdx), %rdx + movdqu %xmm0, (%r8) + BRANCH_TO_JMPTBL_ENTRY(L(table_less_80bytes), %rdx, 4) + + .p2align 4 +L(shl_7): + lea (L(shl_7_loop_L1)-L(shl_7))(%r9), %r9 + cmp %rcx, %rdx + movaps -0x07(%rsi), %xmm1 + jb L(L7_fwd) + lea (L(shl_7_loop_L2)-L(shl_7_loop_L1))(%r9), %r9 +L(L7_fwd): 
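+ /* %r9 now holds the chosen copy loop: loop_L1 for counts below half the data cache size, otherwise loop_L2, which adds a prefetchnta of the source. The count is biased by -64 here so the loop's sub $64 / jb pair detects, via the borrow, when fewer than 64 bytes remain. */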
+ lea -64(%rdx), %rdx + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_7_loop_L2): + prefetchnta 0x1c0(%rsi) +L(shl_7_loop_L1): + sub $64, %rdx + movaps 0x09(%rsi), %xmm2 + movaps 0x19(%rsi), %xmm3 + movaps 0x29(%rsi), %xmm4 + movaps 0x39(%rsi), %xmm5 + movdqa %xmm5, %xmm6 + palignr $7, %xmm4, %xmm5 + lea 64(%rsi), %rsi + palignr $7, %xmm3, %xmm4 + palignr $7, %xmm2, %xmm3 + lea 64(%rdi), %rdi + palignr $7, %xmm1, %xmm2 + movdqa %xmm6, %xmm1 + movdqa %xmm2, -0x40(%rdi) + movaps %xmm3, -0x30(%rdi) + jb L(shl_7_end) + movaps %xmm4, -0x20(%rdi) + movaps %xmm5, -0x10(%rdi) + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_7_end): + movaps %xmm4, -0x20(%rdi) + lea 64(%rdx), %rdx + movaps %xmm5, -0x10(%rdi) + add %rdx, %rdi + movdqu %xmm0, (%r8) + add %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY(L(table_less_80bytes), %rdx, 4) + + .p2align 4 +L(shl_7_bwd): + lea (L(shl_7_bwd_loop_L1)-L(shl_7_bwd))(%r9), %r9 + cmp %rcx, %rdx + movaps -0x07(%rsi), %xmm1 + jb L(L7_bwd) + lea (L(shl_7_bwd_loop_L2)-L(shl_7_bwd_loop_L1))(%r9), %r9 +L(L7_bwd): + lea -64(%rdx), %rdx + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_7_bwd_loop_L2): + prefetchnta -0x1c0(%rsi) +L(shl_7_bwd_loop_L1): + movaps -0x17(%rsi), %xmm2 + sub $0x40, %rdx + movaps -0x27(%rsi), %xmm3 + movaps -0x37(%rsi), %xmm4 + movaps -0x47(%rsi), %xmm5 + lea -0x40(%rsi), %rsi + palignr $7, %xmm2, %xmm1 + palignr $7, %xmm3, %xmm2 + palignr $7, %xmm4, %xmm3 + palignr $7, %xmm5, %xmm4 + + movaps %xmm1, -0x10(%rdi) + movaps %xmm5, %xmm1 + + movaps %xmm2, -0x20(%rdi) + lea -0x40(%rdi), %rdi + + movaps %xmm3, 0x10(%rdi) + jb L(shl_7_bwd_end) + movaps %xmm4, (%rdi) + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_7_bwd_end): + movaps %xmm4, (%rdi) + lea 64(%rdx), %rdx + movdqu %xmm0, (%r8) + BRANCH_TO_JMPTBL_ENTRY(L(table_less_80bytes), %rdx, 4) + + .p2align 4 +L(shl_8): + lea (L(shl_8_loop_L1)-L(shl_8))(%r9), %r9 + cmp %rcx, %rdx + movaps -0x08(%rsi), %xmm1 + jb L(L8_fwd) + lea (L(shl_8_loop_L2)-L(shl_8_loop_L1))(%r9), %r9 +L(L8_fwd): + lea -64(%rdx), %rdx + _CET_NOTRACK jmp *%r9 +L(shl_8_loop_L2): + prefetchnta 0x1c0(%rsi) +L(shl_8_loop_L1): + sub $64, %rdx + movaps 0x08(%rsi), %xmm2 + movaps 0x18(%rsi), %xmm3 + movaps 0x28(%rsi), %xmm4 + movaps 0x38(%rsi), %xmm5 + movdqa %xmm5, %xmm6 + palignr $8, %xmm4, %xmm5 + lea 64(%rsi), %rsi + palignr $8, %xmm3, %xmm4 + palignr $8, %xmm2, %xmm3 + lea 64(%rdi), %rdi + palignr $8, %xmm1, %xmm2 + movdqa %xmm6, %xmm1 + movdqa %xmm2, -0x40(%rdi) + movaps %xmm3, -0x30(%rdi) + jb L(shl_8_end) + movaps %xmm4, -0x20(%rdi) + movaps %xmm5, -0x10(%rdi) + _CET_NOTRACK jmp *%r9 + ud2 + .p2align 4 +L(shl_8_end): + lea 64(%rdx), %rdx + movaps %xmm4, -0x20(%rdi) + add %rdx, %rsi + movaps %xmm5, -0x10(%rdi) + add %rdx, %rdi + movdqu %xmm0, (%r8) + BRANCH_TO_JMPTBL_ENTRY(L(table_less_80bytes), %rdx, 4) + + .p2align 4 +L(shl_8_bwd): + lea (L(shl_8_bwd_loop_L1)-L(shl_8_bwd))(%r9), %r9 + cmp %rcx, %rdx + movaps -0x08(%rsi), %xmm1 + jb L(L8_bwd) + lea (L(shl_8_bwd_loop_L2)-L(shl_8_bwd_loop_L1))(%r9), %r9 +L(L8_bwd): + lea -64(%rdx), %rdx + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_8_bwd_loop_L2): + prefetchnta -0x1c0(%rsi) +L(shl_8_bwd_loop_L1): + movaps -0x18(%rsi), %xmm2 + sub $0x40, %rdx + movaps -0x28(%rsi), %xmm3 + movaps -0x38(%rsi), %xmm4 + movaps -0x48(%rsi), %xmm5 + lea -0x40(%rsi), %rsi + palignr $8, %xmm2, %xmm1 + palignr $8, %xmm3, %xmm2 + palignr $8, %xmm4, %xmm3 + palignr $8, %xmm5, %xmm4 + + movaps %xmm1, -0x10(%rdi) + movaps %xmm5, %xmm1 + + movaps %xmm2, -0x20(%rdi) + lea -0x40(%rdi), %rdi + + movaps %xmm3, 0x10(%rdi) + jb L(shl_8_bwd_end) + movaps %xmm4, (%rdi) + _CET_NOTRACK jmp 
*%r9 + ud2 +L(shl_8_bwd_end): + movaps %xmm4, (%rdi) + lea 64(%rdx), %rdx + movdqu %xmm0, (%r8) + BRANCH_TO_JMPTBL_ENTRY(L(table_less_80bytes), %rdx, 4) + + .p2align 4 +L(shl_9): + lea (L(shl_9_loop_L1)-L(shl_9))(%r9), %r9 + cmp %rcx, %rdx + movaps -0x09(%rsi), %xmm1 + jb L(L9_fwd) + lea (L(shl_9_loop_L2)-L(shl_9_loop_L1))(%r9), %r9 +L(L9_fwd): + lea -64(%rdx), %rdx + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_9_loop_L2): + prefetchnta 0x1c0(%rsi) +L(shl_9_loop_L1): + sub $64, %rdx + movaps 0x07(%rsi), %xmm2 + movaps 0x17(%rsi), %xmm3 + movaps 0x27(%rsi), %xmm4 + movaps 0x37(%rsi), %xmm5 + movdqa %xmm5, %xmm6 + palignr $9, %xmm4, %xmm5 + lea 64(%rsi), %rsi + palignr $9, %xmm3, %xmm4 + palignr $9, %xmm2, %xmm3 + lea 64(%rdi), %rdi + palignr $9, %xmm1, %xmm2 + movdqa %xmm6, %xmm1 + movdqa %xmm2, -0x40(%rdi) + movaps %xmm3, -0x30(%rdi) + jb L(shl_9_end) + movaps %xmm4, -0x20(%rdi) + movaps %xmm5, -0x10(%rdi) + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_9_end): + movaps %xmm4, -0x20(%rdi) + lea 64(%rdx), %rdx + movaps %xmm5, -0x10(%rdi) + add %rdx, %rdi + movdqu %xmm0, (%r8) + add %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY(L(table_less_80bytes), %rdx, 4) + + .p2align 4 +L(shl_9_bwd): + lea (L(shl_9_bwd_loop_L1)-L(shl_9_bwd))(%r9), %r9 + cmp %rcx, %rdx + movaps -0x09(%rsi), %xmm1 + jb L(L9_bwd) + lea (L(shl_9_bwd_loop_L2)-L(shl_9_bwd_loop_L1))(%r9), %r9 +L(L9_bwd): + lea -64(%rdx), %rdx + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_9_bwd_loop_L2): + prefetchnta -0x1c0(%rsi) +L(shl_9_bwd_loop_L1): + movaps -0x19(%rsi), %xmm2 + sub $0x40, %rdx + movaps -0x29(%rsi), %xmm3 + movaps -0x39(%rsi), %xmm4 + movaps -0x49(%rsi), %xmm5 + lea -0x40(%rsi), %rsi + palignr $9, %xmm2, %xmm1 + palignr $9, %xmm3, %xmm2 + palignr $9, %xmm4, %xmm3 + palignr $9, %xmm5, %xmm4 + + movaps %xmm1, -0x10(%rdi) + movaps %xmm5, %xmm1 + + movaps %xmm2, -0x20(%rdi) + lea -0x40(%rdi), %rdi + + movaps %xmm3, 0x10(%rdi) + jb L(shl_9_bwd_end) + movaps %xmm4, (%rdi) + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_9_bwd_end): + movaps %xmm4, (%rdi) + lea 64(%rdx), %rdx + movdqu %xmm0, (%r8) + BRANCH_TO_JMPTBL_ENTRY(L(table_less_80bytes), %rdx, 4) + + .p2align 4 +L(shl_10): + lea (L(shl_10_loop_L1)-L(shl_10))(%r9), %r9 + cmp %rcx, %rdx + movaps -0x0a(%rsi), %xmm1 + jb L(L10_fwd) + lea (L(shl_10_loop_L2)-L(shl_10_loop_L1))(%r9), %r9 +L(L10_fwd): + lea -64(%rdx), %rdx + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_10_loop_L2): + prefetchnta 0x1c0(%rsi) +L(shl_10_loop_L1): + sub $64, %rdx + movaps 0x06(%rsi), %xmm2 + movaps 0x16(%rsi), %xmm3 + movaps 0x26(%rsi), %xmm4 + movaps 0x36(%rsi), %xmm5 + movdqa %xmm5, %xmm6 + palignr $10, %xmm4, %xmm5 + lea 64(%rsi), %rsi + palignr $10, %xmm3, %xmm4 + palignr $10, %xmm2, %xmm3 + lea 64(%rdi), %rdi + palignr $10, %xmm1, %xmm2 + movdqa %xmm6, %xmm1 + movdqa %xmm2, -0x40(%rdi) + movaps %xmm3, -0x30(%rdi) + jb L(shl_10_end) + movaps %xmm4, -0x20(%rdi) + movaps %xmm5, -0x10(%rdi) + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_10_end): + movaps %xmm4, -0x20(%rdi) + lea 64(%rdx), %rdx + movaps %xmm5, -0x10(%rdi) + add %rdx, %rdi + movdqu %xmm0, (%r8) + add %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY(L(table_less_80bytes), %rdx, 4) + + .p2align 4 +L(shl_10_bwd): + lea (L(shl_10_bwd_loop_L1)-L(shl_10_bwd))(%r9), %r9 + cmp %rcx, %rdx + movaps -0x0a(%rsi), %xmm1 + jb L(L10_bwd) + lea (L(shl_10_bwd_loop_L2)-L(shl_10_bwd_loop_L1))(%r9), %r9 +L(L10_bwd): + lea -64(%rdx), %rdx + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_10_bwd_loop_L2): + prefetchnta -0x1c0(%rsi) +L(shl_10_bwd_loop_L1): + movaps -0x1a(%rsi), %xmm2 + sub $0x40, %rdx + movaps -0x2a(%rsi), %xmm3 + movaps -0x3a(%rsi), 
%xmm4 + movaps -0x4a(%rsi), %xmm5 + lea -0x40(%rsi), %rsi + palignr $10, %xmm2, %xmm1 + palignr $10, %xmm3, %xmm2 + palignr $10, %xmm4, %xmm3 + palignr $10, %xmm5, %xmm4 + + movaps %xmm1, -0x10(%rdi) + movaps %xmm5, %xmm1 + + movaps %xmm2, -0x20(%rdi) + lea -0x40(%rdi), %rdi + + movaps %xmm3, 0x10(%rdi) + jb L(shl_10_bwd_end) + movaps %xmm4, (%rdi) + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_10_bwd_end): + movaps %xmm4, (%rdi) + lea 64(%rdx), %rdx + movdqu %xmm0, (%r8) + BRANCH_TO_JMPTBL_ENTRY(L(table_less_80bytes), %rdx, 4) + + .p2align 4 +L(shl_11): + lea (L(shl_11_loop_L1)-L(shl_11))(%r9), %r9 + cmp %rcx, %rdx + movaps -0x0b(%rsi), %xmm1 + jb L(L11_fwd) + lea (L(shl_11_loop_L2)-L(shl_11_loop_L1))(%r9), %r9 +L(L11_fwd): + lea -64(%rdx), %rdx + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_11_loop_L2): + prefetchnta 0x1c0(%rsi) +L(shl_11_loop_L1): + sub $64, %rdx + movaps 0x05(%rsi), %xmm2 + movaps 0x15(%rsi), %xmm3 + movaps 0x25(%rsi), %xmm4 + movaps 0x35(%rsi), %xmm5 + movdqa %xmm5, %xmm6 + palignr $11, %xmm4, %xmm5 + lea 64(%rsi), %rsi + palignr $11, %xmm3, %xmm4 + palignr $11, %xmm2, %xmm3 + lea 64(%rdi), %rdi + palignr $11, %xmm1, %xmm2 + movdqa %xmm6, %xmm1 + movdqa %xmm2, -0x40(%rdi) + movaps %xmm3, -0x30(%rdi) + jb L(shl_11_end) + movaps %xmm4, -0x20(%rdi) + movaps %xmm5, -0x10(%rdi) + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_11_end): + movaps %xmm4, -0x20(%rdi) + lea 64(%rdx), %rdx + movaps %xmm5, -0x10(%rdi) + add %rdx, %rdi + movdqu %xmm0, (%r8) + add %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY(L(table_less_80bytes), %rdx, 4) + + .p2align 4 +L(shl_11_bwd): + lea (L(shl_11_bwd_loop_L1)-L(shl_11_bwd))(%r9), %r9 + cmp %rcx, %rdx + movaps -0x0b(%rsi), %xmm1 + jb L(L11_bwd) + lea (L(shl_11_bwd_loop_L2)-L(shl_11_bwd_loop_L1))(%r9), %r9 +L(L11_bwd): + lea -64(%rdx), %rdx + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_11_bwd_loop_L2): + prefetchnta -0x1c0(%rsi) +L(shl_11_bwd_loop_L1): + movaps -0x1b(%rsi), %xmm2 + sub $0x40, %rdx + movaps -0x2b(%rsi), %xmm3 + movaps -0x3b(%rsi), %xmm4 + movaps -0x4b(%rsi), %xmm5 + lea -0x40(%rsi), %rsi + palignr $11, %xmm2, %xmm1 + palignr $11, %xmm3, %xmm2 + palignr $11, %xmm4, %xmm3 + palignr $11, %xmm5, %xmm4 + + movaps %xmm1, -0x10(%rdi) + movaps %xmm5, %xmm1 + + movaps %xmm2, -0x20(%rdi) + lea -0x40(%rdi), %rdi + + movaps %xmm3, 0x10(%rdi) + jb L(shl_11_bwd_end) + movaps %xmm4, (%rdi) + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_11_bwd_end): + movaps %xmm4, (%rdi) + lea 64(%rdx), %rdx + movdqu %xmm0, (%r8) + BRANCH_TO_JMPTBL_ENTRY(L(table_less_80bytes), %rdx, 4) + + .p2align 4 +L(shl_12): + lea (L(shl_12_loop_L1)-L(shl_12))(%r9), %r9 + cmp %rcx, %rdx + movaps -0x0c(%rsi), %xmm1 + jb L(L12_fwd) + lea (L(shl_12_loop_L2)-L(shl_12_loop_L1))(%r9), %r9 +L(L12_fwd): + lea -64(%rdx), %rdx + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_12_loop_L2): + prefetchnta 0x1c0(%rsi) +L(shl_12_loop_L1): + sub $64, %rdx + movaps 0x04(%rsi), %xmm2 + movaps 0x14(%rsi), %xmm3 + movaps 0x24(%rsi), %xmm4 + movaps 0x34(%rsi), %xmm5 + movdqa %xmm5, %xmm6 + palignr $12, %xmm4, %xmm5 + lea 64(%rsi), %rsi + palignr $12, %xmm3, %xmm4 + palignr $12, %xmm2, %xmm3 + lea 64(%rdi), %rdi + palignr $12, %xmm1, %xmm2 + movdqa %xmm6, %xmm1 + movdqa %xmm2, -0x40(%rdi) + movaps %xmm3, -0x30(%rdi) + jb L(shl_12_end) + movaps %xmm4, -0x20(%rdi) + movaps %xmm5, -0x10(%rdi) + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_12_end): + movaps %xmm4, -0x20(%rdi) + lea 64(%rdx), %rdx + movaps %xmm5, -0x10(%rdi) + add %rdx, %rdi + movdqu %xmm0, (%r8) + add %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY(L(table_less_80bytes), %rdx, 4) + + .p2align 4 +L(shl_12_bwd): + lea 
(L(shl_12_bwd_loop_L1)-L(shl_12_bwd))(%r9), %r9 + cmp %rcx, %rdx + movaps -0x0c(%rsi), %xmm1 + jb L(L12_bwd) + lea (L(shl_12_bwd_loop_L2)-L(shl_12_bwd_loop_L1))(%r9), %r9 +L(L12_bwd): + lea -64(%rdx), %rdx + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_12_bwd_loop_L2): + prefetchnta -0x1c0(%rsi) +L(shl_12_bwd_loop_L1): + movaps -0x1c(%rsi), %xmm2 + sub $0x40, %rdx + movaps -0x2c(%rsi), %xmm3 + movaps -0x3c(%rsi), %xmm4 + movaps -0x4c(%rsi), %xmm5 + lea -0x40(%rsi), %rsi + palignr $12, %xmm2, %xmm1 + palignr $12, %xmm3, %xmm2 + palignr $12, %xmm4, %xmm3 + palignr $12, %xmm5, %xmm4 + + movaps %xmm1, -0x10(%rdi) + movaps %xmm5, %xmm1 + + movaps %xmm2, -0x20(%rdi) + lea -0x40(%rdi), %rdi + + movaps %xmm3, 0x10(%rdi) + jb L(shl_12_bwd_end) + movaps %xmm4, (%rdi) + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_12_bwd_end): + movaps %xmm4, (%rdi) + lea 64(%rdx), %rdx + movdqu %xmm0, (%r8) + BRANCH_TO_JMPTBL_ENTRY(L(table_less_80bytes), %rdx, 4) + + .p2align 4 +L(shl_13): + lea (L(shl_13_loop_L1)-L(shl_13))(%r9), %r9 + cmp %rcx, %rdx + movaps -0x0d(%rsi), %xmm1 + jb L(L13_fwd) + lea (L(shl_13_loop_L2)-L(shl_13_loop_L1))(%r9), %r9 +L(L13_fwd): + lea -64(%rdx), %rdx + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_13_loop_L2): + prefetchnta 0x1c0(%rsi) +L(shl_13_loop_L1): + sub $64, %rdx + movaps 0x03(%rsi), %xmm2 + movaps 0x13(%rsi), %xmm3 + movaps 0x23(%rsi), %xmm4 + movaps 0x33(%rsi), %xmm5 + movdqa %xmm5, %xmm6 + palignr $13, %xmm4, %xmm5 + lea 64(%rsi), %rsi + palignr $13, %xmm3, %xmm4 + palignr $13, %xmm2, %xmm3 + lea 64(%rdi), %rdi + palignr $13, %xmm1, %xmm2 + movdqa %xmm6, %xmm1 + movdqa %xmm2, -0x40(%rdi) + movaps %xmm3, -0x30(%rdi) + jb L(shl_13_end) + movaps %xmm4, -0x20(%rdi) + movaps %xmm5, -0x10(%rdi) + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_13_end): + movaps %xmm4, -0x20(%rdi) + lea 64(%rdx), %rdx + movaps %xmm5, -0x10(%rdi) + add %rdx, %rdi + movdqu %xmm0, (%r8) + add %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY(L(table_less_80bytes), %rdx, 4) + + .p2align 4 +L(shl_13_bwd): + lea (L(shl_13_bwd_loop_L1)-L(shl_13_bwd))(%r9), %r9 + cmp %rcx, %rdx + movaps -0x0d(%rsi), %xmm1 + jb L(L13_bwd) + lea (L(shl_13_bwd_loop_L2)-L(shl_13_bwd_loop_L1))(%r9), %r9 +L(L13_bwd): + lea -64(%rdx), %rdx + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_13_bwd_loop_L2): + prefetchnta -0x1c0(%rsi) +L(shl_13_bwd_loop_L1): + movaps -0x1d(%rsi), %xmm2 + sub $0x40, %rdx + movaps -0x2d(%rsi), %xmm3 + movaps -0x3d(%rsi), %xmm4 + movaps -0x4d(%rsi), %xmm5 + lea -0x40(%rsi), %rsi + palignr $13, %xmm2, %xmm1 + palignr $13, %xmm3, %xmm2 + palignr $13, %xmm4, %xmm3 + palignr $13, %xmm5, %xmm4 + + movaps %xmm1, -0x10(%rdi) + movaps %xmm5, %xmm1 + + movaps %xmm2, -0x20(%rdi) + lea -0x40(%rdi), %rdi + + movaps %xmm3, 0x10(%rdi) + jb L(shl_13_bwd_end) + movaps %xmm4, (%rdi) + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_13_bwd_end): + movaps %xmm4, (%rdi) + lea 64(%rdx), %rdx + movdqu %xmm0, (%r8) + BRANCH_TO_JMPTBL_ENTRY(L(table_less_80bytes), %rdx, 4) + + .p2align 4 +L(shl_14): + lea (L(shl_14_loop_L1)-L(shl_14))(%r9), %r9 + cmp %rcx, %rdx + movaps -0x0e(%rsi), %xmm1 + jb L(L14_fwd) + lea (L(shl_14_loop_L2)-L(shl_14_loop_L1))(%r9), %r9 +L(L14_fwd): + lea -64(%rdx), %rdx + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_14_loop_L2): + prefetchnta 0x1c0(%rsi) +L(shl_14_loop_L1): + sub $64, %rdx + movaps 0x02(%rsi), %xmm2 + movaps 0x12(%rsi), %xmm3 + movaps 0x22(%rsi), %xmm4 + movaps 0x32(%rsi), %xmm5 + movdqa %xmm5, %xmm6 + palignr $14, %xmm4, %xmm5 + lea 64(%rsi), %rsi + palignr $14, %xmm3, %xmm4 + palignr $14, %xmm2, %xmm3 + lea 64(%rdi), %rdi + palignr $14, %xmm1, %xmm2 + movdqa %xmm6, %xmm1 + 
movdqa %xmm2, -0x40(%rdi) + movaps %xmm3, -0x30(%rdi) + jb L(shl_14_end) + movaps %xmm4, -0x20(%rdi) + movaps %xmm5, -0x10(%rdi) + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_14_end): + movaps %xmm4, -0x20(%rdi) + lea 64(%rdx), %rdx + movaps %xmm5, -0x10(%rdi) + add %rdx, %rdi + movdqu %xmm0, (%r8) + add %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY(L(table_less_80bytes), %rdx, 4) + + .p2align 4 +L(shl_14_bwd): + lea (L(shl_14_bwd_loop_L1)-L(shl_14_bwd))(%r9), %r9 + cmp %rcx, %rdx + movaps -0x0e(%rsi), %xmm1 + jb L(L14_bwd) + lea (L(shl_14_bwd_loop_L2)-L(shl_14_bwd_loop_L1))(%r9), %r9 +L(L14_bwd): + lea -64(%rdx), %rdx + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_14_bwd_loop_L2): + prefetchnta -0x1c0(%rsi) +L(shl_14_bwd_loop_L1): + movaps -0x1e(%rsi), %xmm2 + sub $0x40, %rdx + movaps -0x2e(%rsi), %xmm3 + movaps -0x3e(%rsi), %xmm4 + movaps -0x4e(%rsi), %xmm5 + lea -0x40(%rsi), %rsi + palignr $14, %xmm2, %xmm1 + palignr $14, %xmm3, %xmm2 + palignr $14, %xmm4, %xmm3 + palignr $14, %xmm5, %xmm4 + + movaps %xmm1, -0x10(%rdi) + movaps %xmm5, %xmm1 + + movaps %xmm2, -0x20(%rdi) + lea -0x40(%rdi), %rdi + + movaps %xmm3, 0x10(%rdi) + jb L(shl_14_bwd_end) + movaps %xmm4, (%rdi) + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_14_bwd_end): + movaps %xmm4, (%rdi) + lea 64(%rdx), %rdx + movdqu %xmm0, (%r8) + BRANCH_TO_JMPTBL_ENTRY(L(table_less_80bytes), %rdx, 4) + + .p2align 4 +L(shl_15): + lea (L(shl_15_loop_L1)-L(shl_15))(%r9), %r9 + cmp %rcx, %rdx + movaps -0x0f(%rsi), %xmm1 + jb L(L15_fwd) + lea (L(shl_15_loop_L2)-L(shl_15_loop_L1))(%r9), %r9 +L(L15_fwd): + lea -64(%rdx), %rdx + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_15_loop_L2): + prefetchnta 0x1c0(%rsi) +L(shl_15_loop_L1): + sub $64, %rdx + movaps 0x01(%rsi), %xmm2 + movaps 0x11(%rsi), %xmm3 + movaps 0x21(%rsi), %xmm4 + movaps 0x31(%rsi), %xmm5 + movdqa %xmm5, %xmm6 + palignr $15, %xmm4, %xmm5 + lea 64(%rsi), %rsi + palignr $15, %xmm3, %xmm4 + palignr $15, %xmm2, %xmm3 + lea 64(%rdi), %rdi + palignr $15, %xmm1, %xmm2 + movdqa %xmm6, %xmm1 + movdqa %xmm2, -0x40(%rdi) + movaps %xmm3, -0x30(%rdi) + jb L(shl_15_end) + movaps %xmm4, -0x20(%rdi) + movaps %xmm5, -0x10(%rdi) + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_15_end): + movaps %xmm4, -0x20(%rdi) + lea 64(%rdx), %rdx + movaps %xmm5, -0x10(%rdi) + add %rdx, %rdi + movdqu %xmm0, (%r8) + add %rdx, %rsi + BRANCH_TO_JMPTBL_ENTRY(L(table_less_80bytes), %rdx, 4) + + .p2align 4 +L(shl_15_bwd): + lea (L(shl_15_bwd_loop_L1)-L(shl_15_bwd))(%r9), %r9 + cmp %rcx, %rdx + movaps -0x0f(%rsi), %xmm1 + jb L(L15_bwd) + lea (L(shl_15_bwd_loop_L2)-L(shl_15_bwd_loop_L1))(%r9), %r9 +L(L15_bwd): + lea -64(%rdx), %rdx + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_15_bwd_loop_L2): + prefetchnta -0x1c0(%rsi) +L(shl_15_bwd_loop_L1): + movaps -0x1f(%rsi), %xmm2 + sub $0x40, %rdx + movaps -0x2f(%rsi), %xmm3 + movaps -0x3f(%rsi), %xmm4 + movaps -0x4f(%rsi), %xmm5 + lea -0x40(%rsi), %rsi + palignr $15, %xmm2, %xmm1 + palignr $15, %xmm3, %xmm2 + palignr $15, %xmm4, %xmm3 + palignr $15, %xmm5, %xmm4 + + movaps %xmm1, -0x10(%rdi) + movaps %xmm5, %xmm1 + + movaps %xmm2, -0x20(%rdi) + lea -0x40(%rdi), %rdi + + movaps %xmm3, 0x10(%rdi) + jb L(shl_15_bwd_end) + movaps %xmm4, (%rdi) + _CET_NOTRACK jmp *%r9 + ud2 +L(shl_15_bwd_end): + movaps %xmm4, (%rdi) + lea 64(%rdx), %rdx + movdqu %xmm0, (%r8) + BRANCH_TO_JMPTBL_ENTRY(L(table_less_80bytes), %rdx, 4) + + .p2align 4 +L(write_72bytes): + movdqu -72(%rsi), %xmm0 + movdqu -56(%rsi), %xmm1 + mov -40(%rsi), %r8 + mov -32(%rsi), %r9 + mov -24(%rsi), %r10 + mov -16(%rsi), %r11 + mov -8(%rsi), %rcx + movdqu %xmm0, -72(%rdi) + movdqu %xmm1, -56(%rdi) 
+ mov %r8, -40(%rdi) + mov %r9, -32(%rdi) + mov %r10, -24(%rdi) + mov %r11, -16(%rdi) + mov %rcx, -8(%rdi) + ret + + .p2align 4 +L(write_64bytes): + movdqu -64(%rsi), %xmm0 + mov -48(%rsi), %rcx + mov -40(%rsi), %r8 + mov -32(%rsi), %r9 + mov -24(%rsi), %r10 + mov -16(%rsi), %r11 + mov -8(%rsi), %rdx + movdqu %xmm0, -64(%rdi) + mov %rcx, -48(%rdi) + mov %r8, -40(%rdi) + mov %r9, -32(%rdi) + mov %r10, -24(%rdi) + mov %r11, -16(%rdi) + mov %rdx, -8(%rdi) + ret + + .p2align 4 +L(write_56bytes): + movdqu -56(%rsi), %xmm0 + mov -40(%rsi), %r8 + mov -32(%rsi), %r9 + mov -24(%rsi), %r10 + mov -16(%rsi), %r11 + mov -8(%rsi), %rcx + movdqu %xmm0, -56(%rdi) + mov %r8, -40(%rdi) + mov %r9, -32(%rdi) + mov %r10, -24(%rdi) + mov %r11, -16(%rdi) + mov %rcx, -8(%rdi) + ret + + .p2align 4 +L(write_48bytes): + mov -48(%rsi), %rcx + mov -40(%rsi), %r8 + mov -32(%rsi), %r9 + mov -24(%rsi), %r10 + mov -16(%rsi), %r11 + mov -8(%rsi), %rdx + mov %rcx, -48(%rdi) + mov %r8, -40(%rdi) + mov %r9, -32(%rdi) + mov %r10, -24(%rdi) + mov %r11, -16(%rdi) + mov %rdx, -8(%rdi) + ret + + .p2align 4 +L(write_40bytes): + mov -40(%rsi), %r8 + mov -32(%rsi), %r9 + mov -24(%rsi), %r10 + mov -16(%rsi), %r11 + mov -8(%rsi), %rdx + mov %r8, -40(%rdi) + mov %r9, -32(%rdi) + mov %r10, -24(%rdi) + mov %r11, -16(%rdi) + mov %rdx, -8(%rdi) + ret + + .p2align 4 +L(write_32bytes): + mov -32(%rsi), %r9 + mov -24(%rsi), %r10 + mov -16(%rsi), %r11 + mov -8(%rsi), %rdx + mov %r9, -32(%rdi) + mov %r10, -24(%rdi) + mov %r11, -16(%rdi) + mov %rdx, -8(%rdi) + ret + + .p2align 4 +L(write_24bytes): + mov -24(%rsi), %r10 + mov -16(%rsi), %r11 + mov -8(%rsi), %rdx + mov %r10, -24(%rdi) + mov %r11, -16(%rdi) + mov %rdx, -8(%rdi) + ret + + .p2align 4 +L(write_16bytes): + mov -16(%rsi), %r11 + mov -8(%rsi), %rdx + mov %r11, -16(%rdi) + mov %rdx, -8(%rdi) + ret + + .p2align 4 +L(write_8bytes): + mov -8(%rsi), %rdx + mov %rdx, -8(%rdi) +L(write_0bytes): + ret + + .p2align 4 +L(write_73bytes): + movdqu -73(%rsi), %xmm0 + movdqu -57(%rsi), %xmm1 + mov -41(%rsi), %rcx + mov -33(%rsi), %r9 + mov -25(%rsi), %r10 + mov -17(%rsi), %r11 + mov -9(%rsi), %r8 + mov -4(%rsi), %edx + movdqu %xmm0, -73(%rdi) + movdqu %xmm1, -57(%rdi) + mov %rcx, -41(%rdi) + mov %r9, -33(%rdi) + mov %r10, -25(%rdi) + mov %r11, -17(%rdi) + mov %r8, -9(%rdi) + mov %edx, -4(%rdi) + ret + + .p2align 4 +L(write_65bytes): + movdqu -65(%rsi), %xmm0 + movdqu -49(%rsi), %xmm1 + mov -33(%rsi), %r9 + mov -25(%rsi), %r10 + mov -17(%rsi), %r11 + mov -9(%rsi), %rcx + mov -4(%rsi), %edx + movdqu %xmm0, -65(%rdi) + movdqu %xmm1, -49(%rdi) + mov %r9, -33(%rdi) + mov %r10, -25(%rdi) + mov %r11, -17(%rdi) + mov %rcx, -9(%rdi) + mov %edx, -4(%rdi) + ret + + .p2align 4 +L(write_57bytes): + movdqu -57(%rsi), %xmm0 + mov -41(%rsi), %r8 + mov -33(%rsi), %r9 + mov -25(%rsi), %r10 + mov -17(%rsi), %r11 + mov -9(%rsi), %rcx + mov -4(%rsi), %edx + movdqu %xmm0, -57(%rdi) + mov %r8, -41(%rdi) + mov %r9, -33(%rdi) + mov %r10, -25(%rdi) + mov %r11, -17(%rdi) + mov %rcx, -9(%rdi) + mov %edx, -4(%rdi) + ret + + .p2align 4 +L(write_49bytes): + movdqu -49(%rsi), %xmm0 + mov -33(%rsi), %r9 + mov -25(%rsi), %r10 + mov -17(%rsi), %r11 + mov -9(%rsi), %rcx + mov -4(%rsi), %edx + movdqu %xmm0, -49(%rdi) + mov %r9, -33(%rdi) + mov %r10, -25(%rdi) + mov %r11, -17(%rdi) + mov %rcx, -9(%rdi) + mov %edx, -4(%rdi) + ret + + .p2align 4 +L(write_41bytes): + mov -41(%rsi), %r8 + mov -33(%rsi), %r9 + mov -25(%rsi), %r10 + mov -17(%rsi), %r11 + mov -9(%rsi), %rcx + mov -1(%rsi), %dl + mov %r8, -41(%rdi) + mov %r9, -33(%rdi) + mov %r10, 
-25(%rdi) + mov %r11, -17(%rdi) + mov %rcx, -9(%rdi) + mov %dl, -1(%rdi) + ret + + .p2align 4 +L(write_33bytes): + mov -33(%rsi), %r9 + mov -25(%rsi), %r10 + mov -17(%rsi), %r11 + mov -9(%rsi), %rcx + mov -1(%rsi), %dl + mov %r9, -33(%rdi) + mov %r10, -25(%rdi) + mov %r11, -17(%rdi) + mov %rcx, -9(%rdi) + mov %dl, -1(%rdi) + ret + + .p2align 4 +L(write_25bytes): + mov -25(%rsi), %r10 + mov -17(%rsi), %r11 + mov -9(%rsi), %rcx + mov -1(%rsi), %dl + mov %r10, -25(%rdi) + mov %r11, -17(%rdi) + mov %rcx, -9(%rdi) + mov %dl, -1(%rdi) + ret + + .p2align 4 +L(write_17bytes): + mov -17(%rsi), %r11 + mov -9(%rsi), %rcx + mov -4(%rsi), %edx + mov %r11, -17(%rdi) + mov %rcx, -9(%rdi) + mov %edx, -4(%rdi) + ret + + .p2align 4 +L(write_9bytes): + mov -9(%rsi), %rcx + mov -4(%rsi), %edx + mov %rcx, -9(%rdi) + mov %edx, -4(%rdi) + ret + + .p2align 4 +L(write_1bytes): + mov -1(%rsi), %dl + mov %dl, -1(%rdi) + ret + + .p2align 4 +L(write_74bytes): + movdqu -74(%rsi), %xmm0 + movdqu -58(%rsi), %xmm1 + mov -42(%rsi), %r8 + mov -34(%rsi), %r9 + mov -26(%rsi), %r10 + mov -18(%rsi), %r11 + mov -10(%rsi), %rcx + mov -4(%rsi), %edx + movdqu %xmm0, -74(%rdi) + movdqu %xmm1, -58(%rdi) + mov %r8, -42(%rdi) + mov %r9, -34(%rdi) + mov %r10, -26(%rdi) + mov %r11, -18(%rdi) + mov %rcx, -10(%rdi) + mov %edx, -4(%rdi) + ret + + .p2align 4 +L(write_66bytes): + movdqu -66(%rsi), %xmm0 + movdqu -50(%rsi), %xmm1 + mov -42(%rsi), %r8 + mov -34(%rsi), %r9 + mov -26(%rsi), %r10 + mov -18(%rsi), %r11 + mov -10(%rsi), %rcx + mov -4(%rsi), %edx + movdqu %xmm0, -66(%rdi) + movdqu %xmm1, -50(%rdi) + mov %r8, -42(%rdi) + mov %r9, -34(%rdi) + mov %r10, -26(%rdi) + mov %r11, -18(%rdi) + mov %rcx, -10(%rdi) + mov %edx, -4(%rdi) + ret + + .p2align 4 +L(write_58bytes): + movdqu -58(%rsi), %xmm1 + mov -42(%rsi), %r8 + mov -34(%rsi), %r9 + mov -26(%rsi), %r10 + mov -18(%rsi), %r11 + mov -10(%rsi), %rcx + mov -4(%rsi), %edx + movdqu %xmm1, -58(%rdi) + mov %r8, -42(%rdi) + mov %r9, -34(%rdi) + mov %r10, -26(%rdi) + mov %r11, -18(%rdi) + mov %rcx, -10(%rdi) + mov %edx, -4(%rdi) + ret + + .p2align 4 +L(write_50bytes): + movdqu -50(%rsi), %xmm0 + mov -34(%rsi), %r9 + mov -26(%rsi), %r10 + mov -18(%rsi), %r11 + mov -10(%rsi), %rcx + mov -4(%rsi), %edx + movdqu %xmm0, -50(%rdi) + mov %r9, -34(%rdi) + mov %r10, -26(%rdi) + mov %r11, -18(%rdi) + mov %rcx, -10(%rdi) + mov %edx, -4(%rdi) + ret + + .p2align 4 +L(write_42bytes): + mov -42(%rsi), %r8 + mov -34(%rsi), %r9 + mov -26(%rsi), %r10 + mov -18(%rsi), %r11 + mov -10(%rsi), %rcx + mov -4(%rsi), %edx + mov %r8, -42(%rdi) + mov %r9, -34(%rdi) + mov %r10, -26(%rdi) + mov %r11, -18(%rdi) + mov %rcx, -10(%rdi) + mov %edx, -4(%rdi) + ret + + .p2align 4 +L(write_34bytes): + mov -34(%rsi), %r9 + mov -26(%rsi), %r10 + mov -18(%rsi), %r11 + mov -10(%rsi), %rcx + mov -4(%rsi), %edx + mov %r9, -34(%rdi) + mov %r10, -26(%rdi) + mov %r11, -18(%rdi) + mov %rcx, -10(%rdi) + mov %edx, -4(%rdi) + ret + + .p2align 4 +L(write_26bytes): + mov -26(%rsi), %r10 + mov -18(%rsi), %r11 + mov -10(%rsi), %rcx + mov -4(%rsi), %edx + mov %r10, -26(%rdi) + mov %r11, -18(%rdi) + mov %rcx, -10(%rdi) + mov %edx, -4(%rdi) + ret + + .p2align 4 +L(write_18bytes): + mov -18(%rsi), %r11 + mov -10(%rsi), %rcx + mov -4(%rsi), %edx + mov %r11, -18(%rdi) + mov %rcx, -10(%rdi) + mov %edx, -4(%rdi) + ret + + .p2align 4 +L(write_10bytes): + mov -10(%rsi), %rcx + mov -4(%rsi), %edx + mov %rcx, -10(%rdi) + mov %edx, -4(%rdi) + ret + + .p2align 4 +L(write_2bytes): + mov -2(%rsi), %dx + mov %dx, -2(%rdi) + ret + + .p2align 4 +L(write_75bytes): + 
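+ /* As with the other write_N tails, entry is with %rsi/%rdi pointing one byte past the end of the region: exactly 75 bytes are copied via loads at negative offsets, and the final dword overlaps the preceding qword instead of branching on the odd remainder. */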
movdqu -75(%rsi), %xmm0 + movdqu -59(%rsi), %xmm1 + mov -43(%rsi), %r8 + mov -35(%rsi), %r9 + mov -27(%rsi), %r10 + mov -19(%rsi), %r11 + mov -11(%rsi), %rcx + mov -4(%rsi), %edx + movdqu %xmm0, -75(%rdi) + movdqu %xmm1, -59(%rdi) + mov %r8, -43(%rdi) + mov %r9, -35(%rdi) + mov %r10, -27(%rdi) + mov %r11, -19(%rdi) + mov %rcx, -11(%rdi) + mov %edx, -4(%rdi) + ret + + .p2align 4 +L(write_67bytes): + movdqu -67(%rsi), %xmm0 + movdqu -59(%rsi), %xmm1 + mov -43(%rsi), %r8 + mov -35(%rsi), %r9 + mov -27(%rsi), %r10 + mov -19(%rsi), %r11 + mov -11(%rsi), %rcx + mov -4(%rsi), %edx + movdqu %xmm0, -67(%rdi) + movdqu %xmm1, -59(%rdi) + mov %r8, -43(%rdi) + mov %r9, -35(%rdi) + mov %r10, -27(%rdi) + mov %r11, -19(%rdi) + mov %rcx, -11(%rdi) + mov %edx, -4(%rdi) + ret + + .p2align 4 +L(write_59bytes): + movdqu -59(%rsi), %xmm0 + mov -43(%rsi), %r8 + mov -35(%rsi), %r9 + mov -27(%rsi), %r10 + mov -19(%rsi), %r11 + mov -11(%rsi), %rcx + mov -4(%rsi), %edx + movdqu %xmm0, -59(%rdi) + mov %r8, -43(%rdi) + mov %r9, -35(%rdi) + mov %r10, -27(%rdi) + mov %r11, -19(%rdi) + mov %rcx, -11(%rdi) + mov %edx, -4(%rdi) + ret + + .p2align 4 +L(write_51bytes): + movdqu -51(%rsi), %xmm0 + mov -35(%rsi), %r9 + mov -27(%rsi), %r10 + mov -19(%rsi), %r11 + mov -11(%rsi), %rcx + mov -4(%rsi), %edx + movdqu %xmm0, -51(%rdi) + mov %r9, -35(%rdi) + mov %r10, -27(%rdi) + mov %r11, -19(%rdi) + mov %rcx, -11(%rdi) + mov %edx, -4(%rdi) + ret + + .p2align 4 +L(write_43bytes): + mov -43(%rsi), %r8 + mov -35(%rsi), %r9 + mov -27(%rsi), %r10 + mov -19(%rsi), %r11 + mov -11(%rsi), %rcx + mov -4(%rsi), %edx + mov %r8, -43(%rdi) + mov %r9, -35(%rdi) + mov %r10, -27(%rdi) + mov %r11, -19(%rdi) + mov %rcx, -11(%rdi) + mov %edx, -4(%rdi) + ret + + .p2align 4 +L(write_35bytes): + mov -35(%rsi), %r9 + mov -27(%rsi), %r10 + mov -19(%rsi), %r11 + mov -11(%rsi), %rcx + mov -4(%rsi), %edx + mov %r9, -35(%rdi) + mov %r10, -27(%rdi) + mov %r11, -19(%rdi) + mov %rcx, -11(%rdi) + mov %edx, -4(%rdi) + ret + + .p2align 4 +L(write_27bytes): + mov -27(%rsi), %r10 + mov -19(%rsi), %r11 + mov -11(%rsi), %rcx + mov -4(%rsi), %edx + mov %r10, -27(%rdi) + mov %r11, -19(%rdi) + mov %rcx, -11(%rdi) + mov %edx, -4(%rdi) + ret + + .p2align 4 +L(write_19bytes): + mov -19(%rsi), %r11 + mov -11(%rsi), %rcx + mov -4(%rsi), %edx + mov %r11, -19(%rdi) + mov %rcx, -11(%rdi) + mov %edx, -4(%rdi) + ret + + .p2align 4 +L(write_11bytes): + mov -11(%rsi), %rcx + mov -4(%rsi), %edx + mov %rcx, -11(%rdi) + mov %edx, -4(%rdi) + ret + + .p2align 4 +L(write_3bytes): + mov -3(%rsi), %dx + mov -2(%rsi), %cx + mov %dx, -3(%rdi) + mov %cx, -2(%rdi) + ret + + .p2align 4 +L(write_76bytes): + movdqu -76(%rsi), %xmm0 + movdqu -60(%rsi), %xmm1 + mov -44(%rsi), %r8 + mov -36(%rsi), %r9 + mov -28(%rsi), %r10 + mov -20(%rsi), %r11 + mov -12(%rsi), %rcx + mov -4(%rsi), %edx + movdqu %xmm0, -76(%rdi) + movdqu %xmm1, -60(%rdi) + mov %r8, -44(%rdi) + mov %r9, -36(%rdi) + mov %r10, -28(%rdi) + mov %r11, -20(%rdi) + mov %rcx, -12(%rdi) + mov %edx, -4(%rdi) + ret + + .p2align 4 +L(write_68bytes): + movdqu -68(%rsi), %xmm0 + movdqu -52(%rsi), %xmm1 + mov -36(%rsi), %r9 + mov -28(%rsi), %r10 + mov -20(%rsi), %r11 + mov -12(%rsi), %rcx + mov -4(%rsi), %edx + movdqu %xmm0, -68(%rdi) + movdqu %xmm1, -52(%rdi) + mov %r9, -36(%rdi) + mov %r10, -28(%rdi) + mov %r11, -20(%rdi) + mov %rcx, -12(%rdi) + mov %edx, -4(%rdi) + ret + + .p2align 4 +L(write_60bytes): + movdqu -60(%rsi), %xmm0 + mov -44(%rsi), %r8 + mov -36(%rsi), %r9 + mov -28(%rsi), %r10 + mov -20(%rsi), %r11 + mov -12(%rsi), %rcx + mov -4(%rsi), 
%edx + movdqu %xmm0, -60(%rdi) + mov %r8, -44(%rdi) + mov %r9, -36(%rdi) + mov %r10, -28(%rdi) + mov %r11, -20(%rdi) + mov %rcx, -12(%rdi) + mov %edx, -4(%rdi) + ret + + .p2align 4 +L(write_52bytes): + movdqu -52(%rsi), %xmm0 + mov -36(%rsi), %r9 + mov -28(%rsi), %r10 + mov -20(%rsi), %r11 + mov -12(%rsi), %rcx + mov -4(%rsi), %edx + movdqu %xmm0, -52(%rdi) + mov %r9, -36(%rdi) + mov %r10, -28(%rdi) + mov %r11, -20(%rdi) + mov %rcx, -12(%rdi) + mov %edx, -4(%rdi) + ret + + .p2align 4 +L(write_44bytes): + mov -44(%rsi), %r8 + mov -36(%rsi), %r9 + mov -28(%rsi), %r10 + mov -20(%rsi), %r11 + mov -12(%rsi), %rcx + mov -4(%rsi), %edx + mov %r8, -44(%rdi) + mov %r9, -36(%rdi) + mov %r10, -28(%rdi) + mov %r11, -20(%rdi) + mov %rcx, -12(%rdi) + mov %edx, -4(%rdi) + ret + + .p2align 4 +L(write_36bytes): + mov -36(%rsi), %r9 + mov -28(%rsi), %r10 + mov -20(%rsi), %r11 + mov -12(%rsi), %rcx + mov -4(%rsi), %edx + mov %r9, -36(%rdi) + mov %r10, -28(%rdi) + mov %r11, -20(%rdi) + mov %rcx, -12(%rdi) + mov %edx, -4(%rdi) + ret + + .p2align 4 +L(write_28bytes): + mov -28(%rsi), %r10 + mov -20(%rsi), %r11 + mov -12(%rsi), %rcx + mov -4(%rsi), %edx + mov %r10, -28(%rdi) + mov %r11, -20(%rdi) + mov %rcx, -12(%rdi) + mov %edx, -4(%rdi) + ret + + .p2align 4 +L(write_20bytes): + mov -20(%rsi), %r11 + mov -12(%rsi), %rcx + mov -4(%rsi), %edx + mov %r11, -20(%rdi) + mov %rcx, -12(%rdi) + mov %edx, -4(%rdi) + ret + + .p2align 4 +L(write_12bytes): + mov -12(%rsi), %rcx + mov -4(%rsi), %edx + mov %rcx, -12(%rdi) + mov %edx, -4(%rdi) + ret + + .p2align 4 +L(write_4bytes): + mov -4(%rsi), %edx + mov %edx, -4(%rdi) + ret + + .p2align 4 +L(write_77bytes): + movdqu -77(%rsi), %xmm0 + movdqu -61(%rsi), %xmm1 + mov -45(%rsi), %r8 + mov -37(%rsi), %r9 + mov -29(%rsi), %r10 + mov -21(%rsi), %r11 + mov -13(%rsi), %rcx + mov -8(%rsi), %rdx + movdqu %xmm0, -77(%rdi) + movdqu %xmm1, -61(%rdi) + mov %r8, -45(%rdi) + mov %r9, -37(%rdi) + mov %r10, -29(%rdi) + mov %r11, -21(%rdi) + mov %rcx, -13(%rdi) + mov %rdx, -8(%rdi) + ret + + .p2align 4 +L(write_69bytes): + movdqu -69(%rsi), %xmm0 + movdqu -53(%rsi), %xmm1 + mov -37(%rsi), %r9 + mov -29(%rsi), %r10 + mov -21(%rsi), %r11 + mov -13(%rsi), %rcx + mov -8(%rsi), %rdx + movdqu %xmm0, -69(%rdi) + movdqu %xmm1, -53(%rdi) + mov %r9, -37(%rdi) + mov %r10, -29(%rdi) + mov %r11, -21(%rdi) + mov %rcx, -13(%rdi) + mov %rdx, -8(%rdi) + ret + + .p2align 4 +L(write_61bytes): + movdqu -61(%rsi), %xmm0 + mov -45(%rsi), %r8 + mov -37(%rsi), %r9 + mov -29(%rsi), %r10 + mov -21(%rsi), %r11 + mov -13(%rsi), %rcx + mov -8(%rsi), %rdx + movdqu %xmm0, -61(%rdi) + mov %r8, -45(%rdi) + mov %r9, -37(%rdi) + mov %r10, -29(%rdi) + mov %r11, -21(%rdi) + mov %rcx, -13(%rdi) + mov %rdx, -8(%rdi) + ret + + .p2align 4 +L(write_53bytes): + movdqu -53(%rsi), %xmm0 + mov -45(%rsi), %r8 + mov -37(%rsi), %r9 + mov -29(%rsi), %r10 + mov -21(%rsi), %r11 + mov -13(%rsi), %rcx + mov -8(%rsi), %rdx + movdqu %xmm0, -53(%rdi) + mov %r8, -45(%rdi) + mov %r9, -37(%rdi) + mov %r10, -29(%rdi) + mov %r11, -21(%rdi) + mov %rcx, -13(%rdi) + mov %rdx, -8(%rdi) + ret + + .p2align 4 +L(write_45bytes): + mov -45(%rsi), %r8 + mov -37(%rsi), %r9 + mov -29(%rsi), %r10 + mov -21(%rsi), %r11 + mov -13(%rsi), %rcx + mov -8(%rsi), %rdx + mov %r8, -45(%rdi) + mov %r9, -37(%rdi) + mov %r10, -29(%rdi) + mov %r11, -21(%rdi) + mov %rcx, -13(%rdi) + mov %rdx, -8(%rdi) + ret + + .p2align 4 +L(write_37bytes): + mov -37(%rsi), %r9 + mov -29(%rsi), %r10 + mov -21(%rsi), %r11 + mov -13(%rsi), %rcx + mov -8(%rsi), %rdx + mov %r9, -37(%rdi) + mov %r10, -29(%rdi) + mov %r11, 
-21(%rdi) + mov %rcx, -13(%rdi) + mov %rdx, -8(%rdi) + ret + + .p2align 4 +L(write_29bytes): + mov -29(%rsi), %r10 + mov -21(%rsi), %r11 + mov -13(%rsi), %rcx + mov -8(%rsi), %rdx + mov %r10, -29(%rdi) + mov %r11, -21(%rdi) + mov %rcx, -13(%rdi) + mov %rdx, -8(%rdi) + ret + + .p2align 4 +L(write_21bytes): + mov -21(%rsi), %r11 + mov -13(%rsi), %rcx + mov -8(%rsi), %rdx + mov %r11, -21(%rdi) + mov %rcx, -13(%rdi) + mov %rdx, -8(%rdi) + ret + + .p2align 4 +L(write_13bytes): + mov -13(%rsi), %rcx + mov -8(%rsi), %rdx + mov %rcx, -13(%rdi) + mov %rdx, -8(%rdi) + ret + + .p2align 4 +L(write_5bytes): + mov -5(%rsi), %edx + mov -4(%rsi), %ecx + mov %edx, -5(%rdi) + mov %ecx, -4(%rdi) + ret + + .p2align 4 +L(write_78bytes): + movdqu -78(%rsi), %xmm0 + movdqu -62(%rsi), %xmm1 + mov -46(%rsi), %r8 + mov -38(%rsi), %r9 + mov -30(%rsi), %r10 + mov -22(%rsi), %r11 + mov -14(%rsi), %rcx + mov -8(%rsi), %rdx + movdqu %xmm0, -78(%rdi) + movdqu %xmm1, -62(%rdi) + mov %r8, -46(%rdi) + mov %r9, -38(%rdi) + mov %r10, -30(%rdi) + mov %r11, -22(%rdi) + mov %rcx, -14(%rdi) + mov %rdx, -8(%rdi) + ret + + .p2align 4 +L(write_70bytes): + movdqu -70(%rsi), %xmm0 + movdqu -54(%rsi), %xmm1 + mov -38(%rsi), %r9 + mov -30(%rsi), %r10 + mov -22(%rsi), %r11 + mov -14(%rsi), %rcx + mov -8(%rsi), %rdx + movdqu %xmm0, -70(%rdi) + movdqu %xmm1, -54(%rdi) + mov %r9, -38(%rdi) + mov %r10, -30(%rdi) + mov %r11, -22(%rdi) + mov %rcx, -14(%rdi) + mov %rdx, -8(%rdi) + ret + + .p2align 4 +L(write_62bytes): + movdqu -62(%rsi), %xmm0 + mov -46(%rsi), %r8 + mov -38(%rsi), %r9 + mov -30(%rsi), %r10 + mov -22(%rsi), %r11 + mov -14(%rsi), %rcx + mov -8(%rsi), %rdx + movdqu %xmm0, -62(%rdi) + mov %r8, -46(%rdi) + mov %r9, -38(%rdi) + mov %r10, -30(%rdi) + mov %r11, -22(%rdi) + mov %rcx, -14(%rdi) + mov %rdx, -8(%rdi) + ret + + .p2align 4 +L(write_54bytes): + movdqu -54(%rsi), %xmm0 + mov -38(%rsi), %r9 + mov -30(%rsi), %r10 + mov -22(%rsi), %r11 + mov -14(%rsi), %rcx + mov -8(%rsi), %rdx + movdqu %xmm0, -54(%rdi) + mov %r9, -38(%rdi) + mov %r10, -30(%rdi) + mov %r11, -22(%rdi) + mov %rcx, -14(%rdi) + mov %rdx, -8(%rdi) + ret + + .p2align 4 +L(write_46bytes): + mov -46(%rsi), %r8 + mov -38(%rsi), %r9 + mov -30(%rsi), %r10 + mov -22(%rsi), %r11 + mov -14(%rsi), %rcx + mov -8(%rsi), %rdx + mov %r8, -46(%rdi) + mov %r9, -38(%rdi) + mov %r10, -30(%rdi) + mov %r11, -22(%rdi) + mov %rcx, -14(%rdi) + mov %rdx, -8(%rdi) + ret + + .p2align 4 +L(write_38bytes): + mov -38(%rsi), %r9 + mov -30(%rsi), %r10 + mov -22(%rsi), %r11 + mov -14(%rsi), %rcx + mov -8(%rsi), %rdx + mov %r9, -38(%rdi) + mov %r10, -30(%rdi) + mov %r11, -22(%rdi) + mov %rcx, -14(%rdi) + mov %rdx, -8(%rdi) + ret + + .p2align 4 +L(write_30bytes): + mov -30(%rsi), %r10 + mov -22(%rsi), %r11 + mov -14(%rsi), %rcx + mov -8(%rsi), %rdx + mov %r10, -30(%rdi) + mov %r11, -22(%rdi) + mov %rcx, -14(%rdi) + mov %rdx, -8(%rdi) + ret + + .p2align 4 +L(write_22bytes): + mov -22(%rsi), %r11 + mov -14(%rsi), %rcx + mov -8(%rsi), %rdx + mov %r11, -22(%rdi) + mov %rcx, -14(%rdi) + mov %rdx, -8(%rdi) + ret + + .p2align 4 +L(write_14bytes): + mov -14(%rsi), %rcx + mov -8(%rsi), %rdx + mov %rcx, -14(%rdi) + mov %rdx, -8(%rdi) + ret + + .p2align 4 +L(write_6bytes): + mov -6(%rsi), %edx + mov -4(%rsi), %ecx + mov %edx, -6(%rdi) + mov %ecx, -4(%rdi) + ret + + .p2align 4 +L(write_79bytes): + movdqu -79(%rsi), %xmm0 + movdqu -63(%rsi), %xmm1 + mov -47(%rsi), %r8 + mov -39(%rsi), %r9 + mov -31(%rsi), %r10 + mov -23(%rsi), %r11 + mov -15(%rsi), %rcx + mov -8(%rsi), %rdx + movdqu %xmm0, -79(%rdi) + movdqu 
%xmm1, -63(%rdi) + mov %r8, -47(%rdi) + mov %r9, -39(%rdi) + mov %r10, -31(%rdi) + mov %r11, -23(%rdi) + mov %rcx, -15(%rdi) + mov %rdx, -8(%rdi) + ret + + .p2align 4 +L(write_71bytes): + movdqu -71(%rsi), %xmm0 + movdqu -55(%rsi), %xmm1 + mov -39(%rsi), %r9 + mov -31(%rsi), %r10 + mov -23(%rsi), %r11 + mov -15(%rsi), %rcx + mov -8(%rsi), %rdx + movdqu %xmm0, -71(%rdi) + movdqu %xmm1, -55(%rdi) + mov %r9, -39(%rdi) + mov %r10, -31(%rdi) + mov %r11, -23(%rdi) + mov %rcx, -15(%rdi) + mov %rdx, -8(%rdi) + ret + + .p2align 4 +L(write_63bytes): + movdqu -63(%rsi), %xmm0 + mov -47(%rsi), %r8 + mov -39(%rsi), %r9 + mov -31(%rsi), %r10 + mov -23(%rsi), %r11 + mov -15(%rsi), %rcx + mov -8(%rsi), %rdx + movdqu %xmm0, -63(%rdi) + mov %r8, -47(%rdi) + mov %r9, -39(%rdi) + mov %r10, -31(%rdi) + mov %r11, -23(%rdi) + mov %rcx, -15(%rdi) + mov %rdx, -8(%rdi) + ret + + .p2align 4 +L(write_55bytes): + movdqu -55(%rsi), %xmm0 + mov -39(%rsi), %r9 + mov -31(%rsi), %r10 + mov -23(%rsi), %r11 + mov -15(%rsi), %rcx + mov -8(%rsi), %rdx + movdqu %xmm0, -55(%rdi) + mov %r9, -39(%rdi) + mov %r10, -31(%rdi) + mov %r11, -23(%rdi) + mov %rcx, -15(%rdi) + mov %rdx, -8(%rdi) + ret + + .p2align 4 +L(write_47bytes): + mov -47(%rsi), %r8 + mov -39(%rsi), %r9 + mov -31(%rsi), %r10 + mov -23(%rsi), %r11 + mov -15(%rsi), %rcx + mov -8(%rsi), %rdx + mov %r8, -47(%rdi) + mov %r9, -39(%rdi) + mov %r10, -31(%rdi) + mov %r11, -23(%rdi) + mov %rcx, -15(%rdi) + mov %rdx, -8(%rdi) + ret + + .p2align 4 +L(write_39bytes): + mov -39(%rsi), %r9 + mov -31(%rsi), %r10 + mov -23(%rsi), %r11 + mov -15(%rsi), %rcx + mov -8(%rsi), %rdx + mov %r9, -39(%rdi) + mov %r10, -31(%rdi) + mov %r11, -23(%rdi) + mov %rcx, -15(%rdi) + mov %rdx, -8(%rdi) + ret + + .p2align 4 +L(write_31bytes): + mov -31(%rsi), %r10 + mov -23(%rsi), %r11 + mov -15(%rsi), %rcx + mov -8(%rsi), %rdx + mov %r10, -31(%rdi) + mov %r11, -23(%rdi) + mov %rcx, -15(%rdi) + mov %rdx, -8(%rdi) + ret + + .p2align 4 +L(write_23bytes): + mov -23(%rsi), %r11 + mov -15(%rsi), %rcx + mov -8(%rsi), %rdx + mov %r11, -23(%rdi) + mov %rcx, -15(%rdi) + mov %rdx, -8(%rdi) + ret + + .p2align 4 +L(write_15bytes): + mov -15(%rsi), %rcx + mov -8(%rsi), %rdx + mov %rcx, -15(%rdi) + mov %rdx, -8(%rdi) + ret + + .p2align 4 +L(write_7bytes): + mov -7(%rsi), %edx + mov -4(%rsi), %ecx + mov %edx, -7(%rdi) + mov %ecx, -4(%rdi) + ret + + .p2align 4 +L(large_page_fwd): + movdqu (%rsi), %xmm1 + lea 16(%rsi), %rsi + movdqu %xmm0, (%r8) + movntdq %xmm1, (%rdi) + lea 16(%rdi), %rdi + lea -0x90(%rdx), %rdx +#ifdef USE_AS_MEMMOVE + mov %rsi, %r9 + sub %rdi, %r9 + cmp %rdx, %r9 + jae L(memmove_is_memcpy_fwd) + shl $2, %rcx + cmp %rcx, %rdx + jb L(ll_cache_copy_fwd_start) +L(memmove_is_memcpy_fwd): +#endif +L(large_page_loop): + movdqu (%rsi), %xmm0 + movdqu 0x10(%rsi), %xmm1 + movdqu 0x20(%rsi), %xmm2 + movdqu 0x30(%rsi), %xmm3 + movdqu 0x40(%rsi), %xmm4 + movdqu 0x50(%rsi), %xmm5 + movdqu 0x60(%rsi), %xmm6 + movdqu 0x70(%rsi), %xmm7 + lea 0x80(%rsi), %rsi + + sub $0x80, %rdx + movntdq %xmm0, (%rdi) + movntdq %xmm1, 0x10(%rdi) + movntdq %xmm2, 0x20(%rdi) + movntdq %xmm3, 0x30(%rdi) + movntdq %xmm4, 0x40(%rdi) + movntdq %xmm5, 0x50(%rdi) + movntdq %xmm6, 0x60(%rdi) + movntdq %xmm7, 0x70(%rdi) + lea 0x80(%rdi), %rdi + jae L(large_page_loop) + cmp $-0x40, %rdx + lea 0x80(%rdx), %rdx + jl L(large_page_less_64bytes) + + movdqu (%rsi), %xmm0 + movdqu 0x10(%rsi), %xmm1 + movdqu 0x20(%rsi), %xmm2 + movdqu 0x30(%rsi), %xmm3 + lea 0x40(%rsi), %rsi + + movntdq %xmm0, (%rdi) + movntdq %xmm1, 0x10(%rdi) + movntdq %xmm2, 
0x20(%rdi) + movntdq %xmm3, 0x30(%rdi) + lea 0x40(%rdi), %rdi + sub $0x40, %rdx +L(large_page_less_64bytes): + add %rdx, %rsi + add %rdx, %rdi + sfence + BRANCH_TO_JMPTBL_ENTRY (L(table_less_80bytes), %rdx, 4) + +#ifdef USE_AS_MEMMOVE + .p2align 4 +L(ll_cache_copy_fwd_start): + prefetcht0 0x1c0(%rsi) + prefetcht0 0x200(%rsi) + movdqu (%rsi), %xmm0 + movdqu 0x10(%rsi), %xmm1 + movdqu 0x20(%rsi), %xmm2 + movdqu 0x30(%rsi), %xmm3 + movdqu 0x40(%rsi), %xmm4 + movdqu 0x50(%rsi), %xmm5 + movdqu 0x60(%rsi), %xmm6 + movdqu 0x70(%rsi), %xmm7 + lea 0x80(%rsi), %rsi + + sub $0x80, %rdx + movaps %xmm0, (%rdi) + movaps %xmm1, 0x10(%rdi) + movaps %xmm2, 0x20(%rdi) + movaps %xmm3, 0x30(%rdi) + movaps %xmm4, 0x40(%rdi) + movaps %xmm5, 0x50(%rdi) + movaps %xmm6, 0x60(%rdi) + movaps %xmm7, 0x70(%rdi) + lea 0x80(%rdi), %rdi + jae L(ll_cache_copy_fwd_start) + cmp $-0x40, %rdx + lea 0x80(%rdx), %rdx + jl L(large_page_ll_less_fwd_64bytes) + + movdqu (%rsi), %xmm0 + movdqu 0x10(%rsi), %xmm1 + movdqu 0x20(%rsi), %xmm2 + movdqu 0x30(%rsi), %xmm3 + lea 0x40(%rsi), %rsi + + movaps %xmm0, (%rdi) + movaps %xmm1, 0x10(%rdi) + movaps %xmm2, 0x20(%rdi) + movaps %xmm3, 0x30(%rdi) + lea 0x40(%rdi), %rdi + sub $0x40, %rdx +L(large_page_ll_less_fwd_64bytes): + add %rdx, %rsi + add %rdx, %rdi + BRANCH_TO_JMPTBL_ENTRY (L(table_less_80bytes), %rdx, 4) + +#endif + .p2align 4 +L(large_page_bwd): + movdqu -0x10(%rsi), %xmm1 + lea -16(%rsi), %rsi + movdqu %xmm0, (%r8) + movdqa %xmm1, -0x10(%rdi) + lea -16(%rdi), %rdi + lea -0x90(%rdx), %rdx +#ifdef USE_AS_MEMMOVE + mov %rdi, %r9 + sub %rsi, %r9 + cmp %rdx, %r9 + jae L(memmove_is_memcpy_bwd) + cmp %rcx, %r9 + jb L(ll_cache_copy_bwd_start) +L(memmove_is_memcpy_bwd): +#endif +L(large_page_bwd_loop): + movdqu -0x10(%rsi), %xmm0 + movdqu -0x20(%rsi), %xmm1 + movdqu -0x30(%rsi), %xmm2 + movdqu -0x40(%rsi), %xmm3 + movdqu -0x50(%rsi), %xmm4 + movdqu -0x60(%rsi), %xmm5 + movdqu -0x70(%rsi), %xmm6 + movdqu -0x80(%rsi), %xmm7 + lea -0x80(%rsi), %rsi + + sub $0x80, %rdx + movntdq %xmm0, -0x10(%rdi) + movntdq %xmm1, -0x20(%rdi) + movntdq %xmm2, -0x30(%rdi) + movntdq %xmm3, -0x40(%rdi) + movntdq %xmm4, -0x50(%rdi) + movntdq %xmm5, -0x60(%rdi) + movntdq %xmm6, -0x70(%rdi) + movntdq %xmm7, -0x80(%rdi) + lea -0x80(%rdi), %rdi + jae L(large_page_bwd_loop) + cmp $-0x40, %rdx + lea 0x80(%rdx), %rdx + jl L(large_page_less_bwd_64bytes) + + movdqu -0x10(%rsi), %xmm0 + movdqu -0x20(%rsi), %xmm1 + movdqu -0x30(%rsi), %xmm2 + movdqu -0x40(%rsi), %xmm3 + lea -0x40(%rsi), %rsi + + movntdq %xmm0, -0x10(%rdi) + movntdq %xmm1, -0x20(%rdi) + movntdq %xmm2, -0x30(%rdi) + movntdq %xmm3, -0x40(%rdi) + lea -0x40(%rdi), %rdi + sub $0x40, %rdx +L(large_page_less_bwd_64bytes): + sfence + BRANCH_TO_JMPTBL_ENTRY (L(table_less_80bytes), %rdx, 4) + +#ifdef USE_AS_MEMMOVE + .p2align 4 +L(ll_cache_copy_bwd_start): + prefetcht0 -0x1c0(%rsi) + prefetcht0 -0x200(%rsi) + movdqu -0x10(%rsi), %xmm0 + movdqu -0x20(%rsi), %xmm1 + movdqu -0x30(%rsi), %xmm2 + movdqu -0x40(%rsi), %xmm3 + movdqu -0x50(%rsi), %xmm4 + movdqu -0x60(%rsi), %xmm5 + movdqu -0x70(%rsi), %xmm6 + movdqu -0x80(%rsi), %xmm7 + lea -0x80(%rsi), %rsi + + sub $0x80, %rdx + movaps %xmm0, -0x10(%rdi) + movaps %xmm1, -0x20(%rdi) + movaps %xmm2, -0x30(%rdi) + movaps %xmm3, -0x40(%rdi) + movaps %xmm4, -0x50(%rdi) + movaps %xmm5, -0x60(%rdi) + movaps %xmm6, -0x70(%rdi) + movaps %xmm7, -0x80(%rdi) + lea -0x80(%rdi), %rdi + jae L(ll_cache_copy_bwd_start) + cmp $-0x40, %rdx + lea 0x80(%rdx), %rdx + jl L(large_page_ll_less_bwd_64bytes) + + movdqu -0x10(%rsi), %xmm0 + movdqu 
-0x20(%rsi), %xmm1 + movdqu -0x30(%rsi), %xmm2 + movdqu -0x40(%rsi), %xmm3 + lea -0x40(%rsi), %rsi + + movaps %xmm0, -0x10(%rdi) + movaps %xmm1, -0x20(%rdi) + movaps %xmm2, -0x30(%rdi) + movaps %xmm3, -0x40(%rdi) + lea -0x40(%rdi), %rdi + sub $0x40, %rdx +L(large_page_ll_less_bwd_64bytes): + BRANCH_TO_JMPTBL_ENTRY (L(table_less_80bytes), %rdx, 4) +#endif + +END (MEMCPY) + + .section .rodata.ssse3,"a",@progbits + .p2align 3 +L(table_less_80bytes): + .int JMPTBL (L(write_0bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_1bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_2bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_3bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_4bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_5bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_6bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_7bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_8bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_9bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_10bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_11bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_12bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_13bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_14bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_15bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_16bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_17bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_18bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_19bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_20bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_21bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_22bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_23bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_24bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_25bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_26bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_27bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_28bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_29bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_30bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_31bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_32bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_33bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_34bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_35bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_36bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_37bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_38bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_39bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_40bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_41bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_42bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_43bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_44bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_45bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_46bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_47bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_48bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_49bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_50bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_51bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_52bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_53bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_54bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_55bytes), 
L(table_less_80bytes)) + .int JMPTBL (L(write_56bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_57bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_58bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_59bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_60bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_61bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_62bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_63bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_64bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_65bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_66bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_67bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_68bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_69bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_70bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_71bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_72bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_73bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_74bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_75bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_76bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_77bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_78bytes), L(table_less_80bytes)) + .int JMPTBL (L(write_79bytes), L(table_less_80bytes)) + + .p2align 3 +L(shl_table): + .int JMPTBL (L(shl_0), L(shl_table)) + .int JMPTBL (L(shl_1), L(shl_table)) + .int JMPTBL (L(shl_2), L(shl_table)) + .int JMPTBL (L(shl_3), L(shl_table)) + .int JMPTBL (L(shl_4), L(shl_table)) + .int JMPTBL (L(shl_5), L(shl_table)) + .int JMPTBL (L(shl_6), L(shl_table)) + .int JMPTBL (L(shl_7), L(shl_table)) + .int JMPTBL (L(shl_8), L(shl_table)) + .int JMPTBL (L(shl_9), L(shl_table)) + .int JMPTBL (L(shl_10), L(shl_table)) + .int JMPTBL (L(shl_11), L(shl_table)) + .int JMPTBL (L(shl_12), L(shl_table)) + .int JMPTBL (L(shl_13), L(shl_table)) + .int JMPTBL (L(shl_14), L(shl_table)) + .int JMPTBL (L(shl_15), L(shl_table)) + + .p2align 3 +L(shl_table_bwd): + .int JMPTBL (L(shl_0_bwd), L(shl_table_bwd)) + .int JMPTBL (L(shl_1_bwd), L(shl_table_bwd)) + .int JMPTBL (L(shl_2_bwd), L(shl_table_bwd)) + .int JMPTBL (L(shl_3_bwd), L(shl_table_bwd)) + .int JMPTBL (L(shl_4_bwd), L(shl_table_bwd)) + .int JMPTBL (L(shl_5_bwd), L(shl_table_bwd)) + .int JMPTBL (L(shl_6_bwd), L(shl_table_bwd)) + .int JMPTBL (L(shl_7_bwd), L(shl_table_bwd)) + .int JMPTBL (L(shl_8_bwd), L(shl_table_bwd)) + .int JMPTBL (L(shl_9_bwd), L(shl_table_bwd)) + .int JMPTBL (L(shl_10_bwd), L(shl_table_bwd)) + .int JMPTBL (L(shl_11_bwd), L(shl_table_bwd)) + .int JMPTBL (L(shl_12_bwd), L(shl_table_bwd)) + .int JMPTBL (L(shl_13_bwd), L(shl_table_bwd)) + .int JMPTBL (L(shl_14_bwd), L(shl_table_bwd)) + .int JMPTBL (L(shl_15_bwd), L(shl_table_bwd)) + +#endif diff --git a/utils/memcpy-bench/glibc/memmove-avx-unaligned-erms.S b/utils/memcpy-bench/glibc/memmove-avx-unaligned-erms.S new file mode 100644 index 00000000000..2de73b29a85 --- /dev/null +++ b/utils/memcpy-bench/glibc/memmove-avx-unaligned-erms.S @@ -0,0 +1,12 @@ +#if 1 +# define VEC_SIZE 32 +# define VEC(i) ymm##i +# define VMOVNT vmovntdq +# define VMOVU vmovdqu +# define VMOVA vmovdqa + +# define SECTION(p) p##.avx +# define MEMMOVE_SYMBOL(p,s) p##_avx_##s + +# include "memmove-vec-unaligned-erms.S" +#endif diff --git a/utils/memcpy-bench/glibc/memmove-avx512-no-vzeroupper.S b/utils/memcpy-bench/glibc/memmove-avx512-no-vzeroupper.S new file mode 100644 index 00000000000..3effa845274 --- /dev/null +++ 
b/utils/memcpy-bench/glibc/memmove-avx512-no-vzeroupper.S @@ -0,0 +1,419 @@ +/* memmove/memcpy/mempcpy optimized with AVX512 for KNL hardware. + Copyright (C) 2016-2020 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +#include "sysdep.h" + +#if 1 + +# include "asm-syntax.h" + + .section .text.avx512,"ax",@progbits +ENTRY (__mempcpy_chk_avx512_no_vzeroupper) + cmp %RDX_LP, %RCX_LP + jb HIDDEN_JUMPTARGET (__chk_fail) +END (__mempcpy_chk_avx512_no_vzeroupper) + +ENTRY (__mempcpy_avx512_no_vzeroupper) + mov %RDI_LP, %RAX_LP + add %RDX_LP, %RAX_LP + jmp L(start) +END (__mempcpy_avx512_no_vzeroupper) + +ENTRY (__memmove_chk_avx512_no_vzeroupper) + cmp %RDX_LP, %RCX_LP + jb HIDDEN_JUMPTARGET (__chk_fail) +END (__memmove_chk_avx512_no_vzeroupper) + +ENTRY (__memmove_avx512_no_vzeroupper) + mov %RDI_LP, %RAX_LP +# ifdef USE_AS_MEMPCPY + add %RDX_LP, %RAX_LP +# endif +L(start): +# ifdef __ILP32__ + /* Clear the upper 32 bits. */ + mov %edx, %edx +# endif + lea (%rsi, %rdx), %rcx + lea (%rdi, %rdx), %r9 + cmp $512, %rdx + ja L(512bytesormore) + +L(check): + cmp $16, %rdx + jbe L(less_16bytes) + cmp $256, %rdx + jb L(less_256bytes) + vmovups (%rsi), %zmm0 + vmovups 0x40(%rsi), %zmm1 + vmovups 0x80(%rsi), %zmm2 + vmovups 0xC0(%rsi), %zmm3 + vmovups -0x100(%rcx), %zmm4 + vmovups -0xC0(%rcx), %zmm5 + vmovups -0x80(%rcx), %zmm6 + vmovups -0x40(%rcx), %zmm7 + vmovups %zmm0, (%rdi) + vmovups %zmm1, 0x40(%rdi) + vmovups %zmm2, 0x80(%rdi) + vmovups %zmm3, 0xC0(%rdi) + vmovups %zmm4, -0x100(%r9) + vmovups %zmm5, -0xC0(%r9) + vmovups %zmm6, -0x80(%r9) + vmovups %zmm7, -0x40(%r9) + ret + +L(less_256bytes): + cmp $128, %dl + jb L(less_128bytes) + vmovups (%rsi), %zmm0 + vmovups 0x40(%rsi), %zmm1 + vmovups -0x80(%rcx), %zmm2 + vmovups -0x40(%rcx), %zmm3 + vmovups %zmm0, (%rdi) + vmovups %zmm1, 0x40(%rdi) + vmovups %zmm2, -0x80(%r9) + vmovups %zmm3, -0x40(%r9) + ret + +L(less_128bytes): + cmp $64, %dl + jb L(less_64bytes) + vmovdqu (%rsi), %ymm0 + vmovdqu 0x20(%rsi), %ymm1 + vmovdqu -0x40(%rcx), %ymm2 + vmovdqu -0x20(%rcx), %ymm3 + vmovdqu %ymm0, (%rdi) + vmovdqu %ymm1, 0x20(%rdi) + vmovdqu %ymm2, -0x40(%r9) + vmovdqu %ymm3, -0x20(%r9) + ret + +L(less_64bytes): + cmp $32, %dl + jb L(less_32bytes) + vmovdqu (%rsi), %ymm0 + vmovdqu -0x20(%rcx), %ymm1 + vmovdqu %ymm0, (%rdi) + vmovdqu %ymm1, -0x20(%r9) + ret + +L(less_32bytes): + vmovdqu (%rsi), %xmm0 + vmovdqu -0x10(%rcx), %xmm1 + vmovdqu %xmm0, (%rdi) + vmovdqu %xmm1, -0x10(%r9) + ret + +L(less_16bytes): + cmp $8, %dl + jb L(less_8bytes) + movq (%rsi), %rsi + movq -0x8(%rcx), %rcx + movq %rsi, (%rdi) + movq %rcx, -0x8(%r9) + ret + +L(less_8bytes): + cmp $4, %dl + jb L(less_4bytes) + mov (%rsi), %esi + mov -0x4(%rcx), %ecx + mov %esi, (%rdi) + mov %ecx, -0x4(%r9) + ret + +L(less_4bytes): + cmp $2, %dl + jb L(less_2bytes) + mov (%rsi), %si + mov -0x2(%rcx), %cx + mov %si, (%rdi) + mov %cx, -0x2(%r9) + ret + 
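All of the L(less_256bytes)..L(less_4bytes) branches above use the same trick: pick the largest power-of-two chunk that fits, copy one chunk from the start of the buffer and one from the end, and let the two writes overlap in the middle. That covers every size in [chunk, 2 * chunk] with exactly two loads and two stores and no per-byte loop. A minimal C++ sketch of the idea (copy_by_halves is a hypothetical name, not part of this patch; the asm does the same thing with registers instead of temporaries):

    #include <cstring>
    #include <cstddef>

    /// Copy `size` bytes, where half <= size <= 2 * half, using one block copy
    /// from each end of the buffer; the two blocks overlap in the middle.
    template <size_t half>
    void copy_by_halves(void * dst, const void * src, size_t size)
    {
        unsigned char head[half];
        unsigned char tail[half];
        const unsigned char * s = static_cast<const unsigned char *>(src);
        unsigned char * d = static_cast<unsigned char *>(dst);
        /// Read both ends before writing anything, so overlapping dst/src
        /// stay correct: the same reason the asm issues all loads first.
        std::memcpy(head, s, half);
        std::memcpy(tail, s + size - half, half);
        std::memcpy(d, head, half);
        std::memcpy(d + size - half, tail, half);
    }

For example, copy_by_halves<8>(d, s, 13) handles the same cases as the L(less_16bytes) branch above: bytes 0..7 and 5..12 are each written once, and the overlap of the two stores is harmless.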
+L(less_2bytes): + cmp $1, %dl + jb L(less_1bytes) + mov (%rsi), %cl + mov %cl, (%rdi) +L(less_1bytes): + ret + +L(512bytesormore): +# ifdef SHARED_CACHE_SIZE_HALF + mov $SHARED_CACHE_SIZE_HALF, %r8 +# else + mov __x86_shared_cache_size_half(%rip), %r8 +# endif + cmp %r8, %rdx + jae L(preloop_large) + cmp $1024, %rdx + ja L(1024bytesormore) + prefetcht1 (%rsi) + prefetcht1 0x40(%rsi) + prefetcht1 0x80(%rsi) + prefetcht1 0xC0(%rsi) + prefetcht1 0x100(%rsi) + prefetcht1 0x140(%rsi) + prefetcht1 0x180(%rsi) + prefetcht1 0x1C0(%rsi) + prefetcht1 -0x200(%rcx) + prefetcht1 -0x1C0(%rcx) + prefetcht1 -0x180(%rcx) + prefetcht1 -0x140(%rcx) + prefetcht1 -0x100(%rcx) + prefetcht1 -0xC0(%rcx) + prefetcht1 -0x80(%rcx) + prefetcht1 -0x40(%rcx) + vmovups (%rsi), %zmm0 + vmovups 0x40(%rsi), %zmm1 + vmovups 0x80(%rsi), %zmm2 + vmovups 0xC0(%rsi), %zmm3 + vmovups 0x100(%rsi), %zmm4 + vmovups 0x140(%rsi), %zmm5 + vmovups 0x180(%rsi), %zmm6 + vmovups 0x1C0(%rsi), %zmm7 + vmovups -0x200(%rcx), %zmm8 + vmovups -0x1C0(%rcx), %zmm9 + vmovups -0x180(%rcx), %zmm10 + vmovups -0x140(%rcx), %zmm11 + vmovups -0x100(%rcx), %zmm12 + vmovups -0xC0(%rcx), %zmm13 + vmovups -0x80(%rcx), %zmm14 + vmovups -0x40(%rcx), %zmm15 + vmovups %zmm0, (%rdi) + vmovups %zmm1, 0x40(%rdi) + vmovups %zmm2, 0x80(%rdi) + vmovups %zmm3, 0xC0(%rdi) + vmovups %zmm4, 0x100(%rdi) + vmovups %zmm5, 0x140(%rdi) + vmovups %zmm6, 0x180(%rdi) + vmovups %zmm7, 0x1C0(%rdi) + vmovups %zmm8, -0x200(%r9) + vmovups %zmm9, -0x1C0(%r9) + vmovups %zmm10, -0x180(%r9) + vmovups %zmm11, -0x140(%r9) + vmovups %zmm12, -0x100(%r9) + vmovups %zmm13, -0xC0(%r9) + vmovups %zmm14, -0x80(%r9) + vmovups %zmm15, -0x40(%r9) + ret + +L(1024bytesormore): + cmp %rsi, %rdi + ja L(1024bytesormore_bkw) + sub $512, %r9 + vmovups -0x200(%rcx), %zmm8 + vmovups -0x1C0(%rcx), %zmm9 + vmovups -0x180(%rcx), %zmm10 + vmovups -0x140(%rcx), %zmm11 + vmovups -0x100(%rcx), %zmm12 + vmovups -0xC0(%rcx), %zmm13 + vmovups -0x80(%rcx), %zmm14 + vmovups -0x40(%rcx), %zmm15 + prefetcht1 (%rsi) + prefetcht1 0x40(%rsi) + prefetcht1 0x80(%rsi) + prefetcht1 0xC0(%rsi) + prefetcht1 0x100(%rsi) + prefetcht1 0x140(%rsi) + prefetcht1 0x180(%rsi) + prefetcht1 0x1C0(%rsi) + +/* Loop with unaligned memory access. 
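Each iteration streams 512 bytes through eight unaligned 64-byte zmm loads and stores, prefetching the next 512-byte source block into L2 one iteration ahead; the last 512 bytes of the source were captured into %zmm8-%zmm15 before the loop, so the overlapping tail store issued after the loop remains correct even for memmove-style overlap.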
*/ +L(gobble_512bytes_loop): + vmovups (%rsi), %zmm0 + vmovups 0x40(%rsi), %zmm1 + vmovups 0x80(%rsi), %zmm2 + vmovups 0xC0(%rsi), %zmm3 + vmovups 0x100(%rsi), %zmm4 + vmovups 0x140(%rsi), %zmm5 + vmovups 0x180(%rsi), %zmm6 + vmovups 0x1C0(%rsi), %zmm7 + add $512, %rsi + prefetcht1 (%rsi) + prefetcht1 0x40(%rsi) + prefetcht1 0x80(%rsi) + prefetcht1 0xC0(%rsi) + prefetcht1 0x100(%rsi) + prefetcht1 0x140(%rsi) + prefetcht1 0x180(%rsi) + prefetcht1 0x1C0(%rsi) + vmovups %zmm0, (%rdi) + vmovups %zmm1, 0x40(%rdi) + vmovups %zmm2, 0x80(%rdi) + vmovups %zmm3, 0xC0(%rdi) + vmovups %zmm4, 0x100(%rdi) + vmovups %zmm5, 0x140(%rdi) + vmovups %zmm6, 0x180(%rdi) + vmovups %zmm7, 0x1C0(%rdi) + add $512, %rdi + cmp %r9, %rdi + jb L(gobble_512bytes_loop) + vmovups %zmm8, (%r9) + vmovups %zmm9, 0x40(%r9) + vmovups %zmm10, 0x80(%r9) + vmovups %zmm11, 0xC0(%r9) + vmovups %zmm12, 0x100(%r9) + vmovups %zmm13, 0x140(%r9) + vmovups %zmm14, 0x180(%r9) + vmovups %zmm15, 0x1C0(%r9) + ret + +L(1024bytesormore_bkw): + add $512, %rdi + vmovups 0x1C0(%rsi), %zmm8 + vmovups 0x180(%rsi), %zmm9 + vmovups 0x140(%rsi), %zmm10 + vmovups 0x100(%rsi), %zmm11 + vmovups 0xC0(%rsi), %zmm12 + vmovups 0x80(%rsi), %zmm13 + vmovups 0x40(%rsi), %zmm14 + vmovups (%rsi), %zmm15 + prefetcht1 -0x40(%rcx) + prefetcht1 -0x80(%rcx) + prefetcht1 -0xC0(%rcx) + prefetcht1 -0x100(%rcx) + prefetcht1 -0x140(%rcx) + prefetcht1 -0x180(%rcx) + prefetcht1 -0x1C0(%rcx) + prefetcht1 -0x200(%rcx) + +/* Backward loop with unaligned memory access. */ +L(gobble_512bytes_loop_bkw): + vmovups -0x40(%rcx), %zmm0 + vmovups -0x80(%rcx), %zmm1 + vmovups -0xC0(%rcx), %zmm2 + vmovups -0x100(%rcx), %zmm3 + vmovups -0x140(%rcx), %zmm4 + vmovups -0x180(%rcx), %zmm5 + vmovups -0x1C0(%rcx), %zmm6 + vmovups -0x200(%rcx), %zmm7 + sub $512, %rcx + prefetcht1 -0x40(%rcx) + prefetcht1 -0x80(%rcx) + prefetcht1 -0xC0(%rcx) + prefetcht1 -0x100(%rcx) + prefetcht1 -0x140(%rcx) + prefetcht1 -0x180(%rcx) + prefetcht1 -0x1C0(%rcx) + prefetcht1 -0x200(%rcx) + vmovups %zmm0, -0x40(%r9) + vmovups %zmm1, -0x80(%r9) + vmovups %zmm2, -0xC0(%r9) + vmovups %zmm3, -0x100(%r9) + vmovups %zmm4, -0x140(%r9) + vmovups %zmm5, -0x180(%r9) + vmovups %zmm6, -0x1C0(%r9) + vmovups %zmm7, -0x200(%r9) + sub $512, %r9 + cmp %rdi, %r9 + ja L(gobble_512bytes_loop_bkw) + vmovups %zmm8, -0x40(%rdi) + vmovups %zmm9, -0x80(%rdi) + vmovups %zmm10, -0xC0(%rdi) + vmovups %zmm11, -0x100(%rdi) + vmovups %zmm12, -0x140(%rdi) + vmovups %zmm13, -0x180(%rdi) + vmovups %zmm14, -0x1C0(%rdi) + vmovups %zmm15, -0x200(%rdi) + ret + +L(preloop_large): + cmp %rsi, %rdi + ja L(preloop_large_bkw) + vmovups (%rsi), %zmm4 + vmovups 0x40(%rsi), %zmm5 + + mov %rdi, %r11 +/* Align destination for access with non-temporal stores in the loop. 
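The and/add pair below rounds %rdi up to the next 128-byte boundary, and %r8 is left holding the negative of the distance the destination advanced; sub %r8, %rsi then advances the source by the same distance, and add %r8, %rdx shrinks the length to match. The head bytes skipped this way were already saved in %zmm4/%zmm5 and are stored through %r11 once the loop finishes.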
*/ + mov %rdi, %r8 + and $-0x80, %rdi + add $0x80, %rdi + sub %rdi, %r8 + sub %r8, %rsi + add %r8, %rdx +L(gobble_256bytes_nt_loop): + prefetcht1 0x200(%rsi) + prefetcht1 0x240(%rsi) + prefetcht1 0x280(%rsi) + prefetcht1 0x2C0(%rsi) + prefetcht1 0x300(%rsi) + prefetcht1 0x340(%rsi) + prefetcht1 0x380(%rsi) + prefetcht1 0x3C0(%rsi) + vmovdqu64 (%rsi), %zmm0 + vmovdqu64 0x40(%rsi), %zmm1 + vmovdqu64 0x80(%rsi), %zmm2 + vmovdqu64 0xC0(%rsi), %zmm3 + vmovntdq %zmm0, (%rdi) + vmovntdq %zmm1, 0x40(%rdi) + vmovntdq %zmm2, 0x80(%rdi) + vmovntdq %zmm3, 0xC0(%rdi) + sub $256, %rdx + add $256, %rsi + add $256, %rdi + cmp $256, %rdx + ja L(gobble_256bytes_nt_loop) + sfence + vmovups %zmm4, (%r11) + vmovups %zmm5, 0x40(%r11) + jmp L(check) + +L(preloop_large_bkw): + vmovups -0x80(%rcx), %zmm4 + vmovups -0x40(%rcx), %zmm5 + +/* Align end of destination for access with non-temporal stores. */ + mov %r9, %r8 + and $-0x80, %r9 + sub %r9, %r8 + sub %r8, %rcx + sub %r8, %rdx + add %r9, %r8 +L(gobble_256bytes_nt_loop_bkw): + prefetcht1 -0x400(%rcx) + prefetcht1 -0x3C0(%rcx) + prefetcht1 -0x380(%rcx) + prefetcht1 -0x340(%rcx) + prefetcht1 -0x300(%rcx) + prefetcht1 -0x2C0(%rcx) + prefetcht1 -0x280(%rcx) + prefetcht1 -0x240(%rcx) + vmovdqu64 -0x100(%rcx), %zmm0 + vmovdqu64 -0xC0(%rcx), %zmm1 + vmovdqu64 -0x80(%rcx), %zmm2 + vmovdqu64 -0x40(%rcx), %zmm3 + vmovntdq %zmm0, -0x100(%r9) + vmovntdq %zmm1, -0xC0(%r9) + vmovntdq %zmm2, -0x80(%r9) + vmovntdq %zmm3, -0x40(%r9) + sub $256, %rdx + sub $256, %rcx + sub $256, %r9 + cmp $256, %rdx + ja L(gobble_256bytes_nt_loop_bkw) + sfence + vmovups %zmm4, -0x80(%r8) + vmovups %zmm5, -0x40(%r8) + jmp L(check) +END (__memmove_avx512_no_vzeroupper) + +strong_alias (__memmove_avx512_no_vzeroupper, __memcpy_avx512_no_vzeroupper) +strong_alias (__memmove_chk_avx512_no_vzeroupper, __memcpy_chk_avx512_no_vzeroupper) +#endif diff --git a/utils/memcpy-bench/glibc/memmove-avx512-unaligned-erms.S b/utils/memcpy-bench/glibc/memmove-avx512-unaligned-erms.S new file mode 100644 index 00000000000..9666b05f1c5 --- /dev/null +++ b/utils/memcpy-bench/glibc/memmove-avx512-unaligned-erms.S @@ -0,0 +1,12 @@ +#if 1 +# define VEC_SIZE 64 +# define VEC(i) zmm##i +# define VMOVNT vmovntdq +# define VMOVU vmovdqu64 +# define VMOVA vmovdqa64 + +# define SECTION(p) p##.avx512 +# define MEMMOVE_SYMBOL(p,s) p##_avx512_##s + +# include "memmove-vec-unaligned-erms.S" +#endif diff --git a/utils/memcpy-bench/glibc/memmove-sse2-unaligned-erms.S b/utils/memcpy-bench/glibc/memmove-sse2-unaligned-erms.S new file mode 100644 index 00000000000..ad405be479e --- /dev/null +++ b/utils/memcpy-bench/glibc/memmove-sse2-unaligned-erms.S @@ -0,0 +1,33 @@ +/* memmove with SSE2. + Copyright (C) 2017-2020 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . 
*/ + +#if 1 +# define MEMMOVE_SYMBOL(p,s) p##_sse2_##s +#else +weak_alias (__mempcpy, mempcpy) +#endif + +#include "memmove.S" + +#if defined SHARED +# include +# if SHLIB_COMPAT (libc, GLIBC_2_2_5, GLIBC_2_14) +/* Use __memmove_sse2_unaligned to support overlapping addresses. */ +compat_symbol (libc, __memmove_sse2_unaligned, memcpy, GLIBC_2_2_5); +# endif +#endif diff --git a/utils/memcpy-bench/glibc/memmove-vec-unaligned-erms.S b/utils/memcpy-bench/glibc/memmove-vec-unaligned-erms.S new file mode 100644 index 00000000000..097ff6ca617 --- /dev/null +++ b/utils/memcpy-bench/glibc/memmove-vec-unaligned-erms.S @@ -0,0 +1,559 @@ +/* memmove/memcpy/mempcpy with unaligned load/store and rep movsb + Copyright (C) 2016-2020 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +/* memmove/memcpy/mempcpy is implemented as: + 1. Use overlapping load and store to avoid branch. + 2. Load all sources into registers and store them together to avoid + possible address overlap between source and destination. + 3. If size is 8 * VEC_SIZE or less, load all sources into registers + and store them together. + 4. If address of destination > address of source, backward copy + 4 * VEC_SIZE at a time with unaligned load and aligned store. + Load the first 4 * VEC and last VEC before the loop and store + them after the loop to support overlapping addresses. + 5. Otherwise, forward copy 4 * VEC_SIZE at a time with unaligned + load and aligned store. Load the last 4 * VEC and first VEC + before the loop and store them after the loop to support + overlapping addresses. + 6. If size >= __x86_shared_non_temporal_threshold and there is no + overlap between destination and source, use non-temporal store + instead of aligned store. */ + +#include "sysdep.h" + +#ifndef MEMCPY_SYMBOL +# define MEMCPY_SYMBOL(p,s) MEMMOVE_SYMBOL(p, s) +#endif + +#ifndef MEMPCPY_SYMBOL +# define MEMPCPY_SYMBOL(p,s) MEMMOVE_SYMBOL(p, s) +#endif + +#ifndef MEMMOVE_CHK_SYMBOL +# define MEMMOVE_CHK_SYMBOL(p,s) MEMMOVE_SYMBOL(p, s) +#endif + +#ifndef VZEROUPPER +# if VEC_SIZE > 16 +# define VZEROUPPER vzeroupper +# else +# define VZEROUPPER +# endif +#endif + +#ifndef PREFETCH +# define PREFETCH(addr) prefetcht0 addr +#endif + +/* Assume 64-byte prefetch size. 
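PREFETCH_ONE_SET below issues one prefetch per 64-byte cache line of a 4 * VEC_SIZE load block (one, two or four lines depending on VEC_SIZE), walking forwards or backwards according to its dir argument.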
*/ +#ifndef PREFETCH_SIZE +# define PREFETCH_SIZE 64 +#endif + +#define PREFETCHED_LOAD_SIZE (VEC_SIZE * 4) + +#if PREFETCH_SIZE == 64 +# if PREFETCHED_LOAD_SIZE == PREFETCH_SIZE +# define PREFETCH_ONE_SET(dir, base, offset) \ + PREFETCH ((offset)base) +# elif PREFETCHED_LOAD_SIZE == 2 * PREFETCH_SIZE +# define PREFETCH_ONE_SET(dir, base, offset) \ + PREFETCH ((offset)base); \ + PREFETCH ((offset + dir * PREFETCH_SIZE)base) +# elif PREFETCHED_LOAD_SIZE == 4 * PREFETCH_SIZE +# define PREFETCH_ONE_SET(dir, base, offset) \ + PREFETCH ((offset)base); \ + PREFETCH ((offset + dir * PREFETCH_SIZE)base); \ + PREFETCH ((offset + dir * PREFETCH_SIZE * 2)base); \ + PREFETCH ((offset + dir * PREFETCH_SIZE * 3)base) +# else +# error Unsupported PREFETCHED_LOAD_SIZE! +# endif +#else +# error Unsupported PREFETCH_SIZE! +#endif + +#ifndef SECTION +# error SECTION is not defined! +#endif + + .section SECTION(.text),"ax",@progbits +#if defined SHARED +ENTRY (MEMMOVE_CHK_SYMBOL (__mempcpy_chk, unaligned)) + cmp %RDX_LP, %RCX_LP + jb HIDDEN_JUMPTARGET (__chk_fail) +END (MEMMOVE_CHK_SYMBOL (__mempcpy_chk, unaligned)) +#endif + +ENTRY (MEMPCPY_SYMBOL (__mempcpy, unaligned)) + mov %RDI_LP, %RAX_LP + add %RDX_LP, %RAX_LP + jmp L(start) +END (MEMPCPY_SYMBOL (__mempcpy, unaligned)) + +#if defined SHARED +ENTRY (MEMMOVE_CHK_SYMBOL (__memmove_chk, unaligned)) + cmp %RDX_LP, %RCX_LP + jb HIDDEN_JUMPTARGET (__chk_fail) +END (MEMMOVE_CHK_SYMBOL (__memmove_chk, unaligned)) +#endif + +ENTRY (MEMMOVE_SYMBOL (__memmove, unaligned)) + movq %rdi, %rax +L(start): +# ifdef __ILP32__ + /* Clear the upper 32 bits. */ + movl %edx, %edx +# endif + cmp $VEC_SIZE, %RDX_LP + jb L(less_vec) + cmp $(VEC_SIZE * 2), %RDX_LP + ja L(more_2x_vec) +#if !defined USE_MULTIARCH +L(last_2x_vec): +#endif + /* From VEC and to 2 * VEC. No branch when size == VEC_SIZE. */ + VMOVU (%rsi), %VEC(0) + VMOVU -VEC_SIZE(%rsi,%rdx), %VEC(1) + VMOVU %VEC(0), (%rdi) + VMOVU %VEC(1), -VEC_SIZE(%rdi,%rdx) + VZEROUPPER +#if !defined USE_MULTIARCH +L(nop): +#endif + ret +#if defined USE_MULTIARCH +END (MEMMOVE_SYMBOL (__memmove, unaligned)) + +# if VEC_SIZE == 16 +ENTRY (__mempcpy_chk_erms) + cmp %RDX_LP, %RCX_LP + jb HIDDEN_JUMPTARGET (__chk_fail) +END (__mempcpy_chk_erms) + +/* Only used to measure performance of REP MOVSB. */ +ENTRY (__mempcpy_erms) + mov %RDI_LP, %RAX_LP + /* Skip zero length. */ + test %RDX_LP, %RDX_LP + jz 2f + add %RDX_LP, %RAX_LP + jmp L(start_movsb) +END (__mempcpy_erms) + +ENTRY (__memmove_chk_erms) + cmp %RDX_LP, %RCX_LP + jb HIDDEN_JUMPTARGET (__chk_fail) +END (__memmove_chk_erms) + +ENTRY (__memmove_erms) + movq %rdi, %rax + /* Skip zero length. */ + test %RDX_LP, %RDX_LP + jz 2f +L(start_movsb): + mov %RDX_LP, %RCX_LP + cmp %RSI_LP, %RDI_LP + jb 1f + /* Source == destination is less common. 
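so it is tested only after the likely dst < src case; when the destination starts inside [src, src + count) a forward rep movsb would overwrite source bytes before reading them, and only that genuinely overlapping case pays for the slow std/rep movsb/cld backward copy.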
*/ + je 2f + lea (%rsi,%rcx), %RDX_LP + cmp %RDX_LP, %RDI_LP + jb L(movsb_backward) +1: + rep movsb +2: + ret +L(movsb_backward): + leaq -1(%rdi,%rcx), %rdi + leaq -1(%rsi,%rcx), %rsi + std + rep movsb + cld + ret +END (__memmove_erms) +strong_alias (__memmove_erms, __memcpy_erms) +strong_alias (__memmove_chk_erms, __memcpy_chk_erms) +# endif + +# ifdef SHARED +ENTRY (MEMMOVE_CHK_SYMBOL (__mempcpy_chk, unaligned_erms)) + cmp %RDX_LP, %RCX_LP + jb HIDDEN_JUMPTARGET (__chk_fail) +END (MEMMOVE_CHK_SYMBOL (__mempcpy_chk, unaligned_erms)) +# endif + +ENTRY (MEMMOVE_SYMBOL (__mempcpy, unaligned_erms)) + mov %RDI_LP, %RAX_LP + add %RDX_LP, %RAX_LP + jmp L(start_erms) +END (MEMMOVE_SYMBOL (__mempcpy, unaligned_erms)) + +# ifdef SHARED +ENTRY (MEMMOVE_CHK_SYMBOL (__memmove_chk, unaligned_erms)) + cmp %RDX_LP, %RCX_LP + jb HIDDEN_JUMPTARGET (__chk_fail) +END (MEMMOVE_CHK_SYMBOL (__memmove_chk, unaligned_erms)) +# endif + +ENTRY (MEMMOVE_SYMBOL (__memmove, unaligned_erms)) + movq %rdi, %rax +L(start_erms): +# ifdef __ILP32__ + /* Clear the upper 32 bits. */ + movl %edx, %edx +# endif + cmp $VEC_SIZE, %RDX_LP + jb L(less_vec) + cmp $(VEC_SIZE * 2), %RDX_LP + ja L(movsb_more_2x_vec) +L(last_2x_vec): + /* From VEC and to 2 * VEC. No branch when size == VEC_SIZE. */ + VMOVU (%rsi), %VEC(0) + VMOVU -VEC_SIZE(%rsi,%rdx), %VEC(1) + VMOVU %VEC(0), (%rdi) + VMOVU %VEC(1), -VEC_SIZE(%rdi,%rdx) +L(return): + VZEROUPPER + ret + +L(movsb): + cmp $SHARED_NON_TEMPORAL_THRESHOLD, %RDX_LP + jae L(more_8x_vec) + cmpq %rsi, %rdi + jb 1f + /* Source == destination is less common. */ + je L(nop) + leaq (%rsi,%rdx), %r9 + cmpq %r9, %rdi + /* Avoid slow backward REP MOVSB. */ + jb L(more_8x_vec_backward) +1: + mov %RDX_LP, %RCX_LP + rep movsb +L(nop): + ret +#endif + +L(less_vec): + /* Less than 1 VEC. */ +#if VEC_SIZE != 16 && VEC_SIZE != 32 && VEC_SIZE != 64 +# error Unsupported VEC_SIZE! +#endif +#if VEC_SIZE > 32 + cmpb $32, %dl + jae L(between_32_63) +#endif +#if VEC_SIZE > 16 + cmpb $16, %dl + jae L(between_16_31) +#endif + cmpb $8, %dl + jae L(between_8_15) + cmpb $4, %dl + jae L(between_4_7) + cmpb $1, %dl + ja L(between_2_3) + jb 1f + movzbl (%rsi), %ecx + movb %cl, (%rdi) +1: + ret +#if VEC_SIZE > 32 +L(between_32_63): + /* From 32 to 63. No branch when size == 32. */ + vmovdqu (%rsi), %ymm0 + vmovdqu -32(%rsi,%rdx), %ymm1 + vmovdqu %ymm0, (%rdi) + vmovdqu %ymm1, -32(%rdi,%rdx) + VZEROUPPER + ret +#endif +#if VEC_SIZE > 16 + /* From 16 to 31. No branch when size == 16. */ +L(between_16_31): + vmovdqu (%rsi), %xmm0 + vmovdqu -16(%rsi,%rdx), %xmm1 + vmovdqu %xmm0, (%rdi) + vmovdqu %xmm1, -16(%rdi,%rdx) + ret +#endif +L(between_8_15): + /* From 8 to 15. No branch when size == 8. */ + movq -8(%rsi,%rdx), %rcx + movq (%rsi), %rsi + movq %rcx, -8(%rdi,%rdx) + movq %rsi, (%rdi) + ret +L(between_4_7): + /* From 4 to 7. No branch when size == 4. */ + movl -4(%rsi,%rdx), %ecx + movl (%rsi), %esi + movl %ecx, -4(%rdi,%rdx) + movl %esi, (%rdi) + ret +L(between_2_3): + /* From 2 to 3. No branch when size == 2. */ + movzwl -2(%rsi,%rdx), %ecx + movzwl (%rsi), %esi + movw %cx, -2(%rdi,%rdx) + movw %si, (%rdi) + ret + +#if defined USE_MULTIARCH +L(movsb_more_2x_vec): + cmp $REP_MOSB_THRESHOLD, %RDX_LP + ja L(movsb) +#endif +L(more_2x_vec): + /* More than 2 * VEC and there may be overlap between destination + and source. */ + cmpq $(VEC_SIZE * 8), %rdx + ja L(more_8x_vec) + cmpq $(VEC_SIZE * 4), %rdx + jb L(last_4x_vec) + /* Copy from 4 * VEC to 8 * VEC, inclusively. 
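Four vectors are loaded from the start of the buffer and four from the end, and every load completes before the first store, so the sequence is safe for overlapping buffers; for sizes strictly below 8 * VEC the two groups of stores simply overlap somewhere in the middle.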
*/ + VMOVU (%rsi), %VEC(0) + VMOVU VEC_SIZE(%rsi), %VEC(1) + VMOVU (VEC_SIZE * 2)(%rsi), %VEC(2) + VMOVU (VEC_SIZE * 3)(%rsi), %VEC(3) + VMOVU -VEC_SIZE(%rsi,%rdx), %VEC(4) + VMOVU -(VEC_SIZE * 2)(%rsi,%rdx), %VEC(5) + VMOVU -(VEC_SIZE * 3)(%rsi,%rdx), %VEC(6) + VMOVU -(VEC_SIZE * 4)(%rsi,%rdx), %VEC(7) + VMOVU %VEC(0), (%rdi) + VMOVU %VEC(1), VEC_SIZE(%rdi) + VMOVU %VEC(2), (VEC_SIZE * 2)(%rdi) + VMOVU %VEC(3), (VEC_SIZE * 3)(%rdi) + VMOVU %VEC(4), -VEC_SIZE(%rdi,%rdx) + VMOVU %VEC(5), -(VEC_SIZE * 2)(%rdi,%rdx) + VMOVU %VEC(6), -(VEC_SIZE * 3)(%rdi,%rdx) + VMOVU %VEC(7), -(VEC_SIZE * 4)(%rdi,%rdx) + VZEROUPPER + ret +L(last_4x_vec): + /* Copy from 2 * VEC to 4 * VEC. */ + VMOVU (%rsi), %VEC(0) + VMOVU VEC_SIZE(%rsi), %VEC(1) + VMOVU -VEC_SIZE(%rsi,%rdx), %VEC(2) + VMOVU -(VEC_SIZE * 2)(%rsi,%rdx), %VEC(3) + VMOVU %VEC(0), (%rdi) + VMOVU %VEC(1), VEC_SIZE(%rdi) + VMOVU %VEC(2), -VEC_SIZE(%rdi,%rdx) + VMOVU %VEC(3), -(VEC_SIZE * 2)(%rdi,%rdx) + VZEROUPPER + ret + +L(more_8x_vec): + cmpq %rsi, %rdi + ja L(more_8x_vec_backward) + /* Source == destination is less common. */ + je L(nop) + /* Load the first VEC and last 4 * VEC to support overlapping + addresses. */ + VMOVU (%rsi), %VEC(4) + VMOVU -VEC_SIZE(%rsi, %rdx), %VEC(5) + VMOVU -(VEC_SIZE * 2)(%rsi, %rdx), %VEC(6) + VMOVU -(VEC_SIZE * 3)(%rsi, %rdx), %VEC(7) + VMOVU -(VEC_SIZE * 4)(%rsi, %rdx), %VEC(8) + /* Save start and stop of the destination buffer. */ + movq %rdi, %r11 + leaq -VEC_SIZE(%rdi, %rdx), %rcx + /* Align destination for aligned stores in the loop. Compute + how much destination is misaligned. */ + movq %rdi, %r8 + andq $(VEC_SIZE - 1), %r8 + /* Get the negative of offset for alignment. */ + subq $VEC_SIZE, %r8 + /* Adjust source. */ + subq %r8, %rsi + /* Adjust destination which should be aligned now. */ + subq %r8, %rdi + /* Adjust length. */ + addq %r8, %rdx +#if (defined USE_MULTIARCH || VEC_SIZE == 16) + /* Check non-temporal store threshold. */ + cmp $SHARED_NON_TEMPORAL_THRESHOLD, %RDX_LP + ja L(large_forward) +#endif +L(loop_4x_vec_forward): + /* Copy 4 * VEC a time forward. */ + VMOVU (%rsi), %VEC(0) + VMOVU VEC_SIZE(%rsi), %VEC(1) + VMOVU (VEC_SIZE * 2)(%rsi), %VEC(2) + VMOVU (VEC_SIZE * 3)(%rsi), %VEC(3) + addq $(VEC_SIZE * 4), %rsi + subq $(VEC_SIZE * 4), %rdx + VMOVA %VEC(0), (%rdi) + VMOVA %VEC(1), VEC_SIZE(%rdi) + VMOVA %VEC(2), (VEC_SIZE * 2)(%rdi) + VMOVA %VEC(3), (VEC_SIZE * 3)(%rdi) + addq $(VEC_SIZE * 4), %rdi + cmpq $(VEC_SIZE * 4), %rdx + ja L(loop_4x_vec_forward) + /* Store the last 4 * VEC. */ + VMOVU %VEC(5), (%rcx) + VMOVU %VEC(6), -VEC_SIZE(%rcx) + VMOVU %VEC(7), -(VEC_SIZE * 2)(%rcx) + VMOVU %VEC(8), -(VEC_SIZE * 3)(%rcx) + /* Store the first VEC. */ + VMOVU %VEC(4), (%r11) + VZEROUPPER + ret + +L(more_8x_vec_backward): + /* Load the first 4 * VEC and last VEC to support overlapping + addresses. */ + VMOVU (%rsi), %VEC(4) + VMOVU VEC_SIZE(%rsi), %VEC(5) + VMOVU (VEC_SIZE * 2)(%rsi), %VEC(6) + VMOVU (VEC_SIZE * 3)(%rsi), %VEC(7) + VMOVU -VEC_SIZE(%rsi,%rdx), %VEC(8) + /* Save stop of the destination buffer. */ + leaq -VEC_SIZE(%rdi, %rdx), %r11 + /* Align destination end for aligned stores in the loop. Compute + how much destination end is misaligned. */ + leaq -VEC_SIZE(%rsi, %rdx), %rcx + movq %r11, %r9 + movq %r11, %r8 + andq $(VEC_SIZE - 1), %r8 + /* Adjust source. */ + subq %r8, %rcx + /* Adjust the end of destination which should be aligned now. */ + subq %r8, %r9 + /* Adjust length. */ + subq %r8, %rdx +#if (defined USE_MULTIARCH || VEC_SIZE == 16) + /* Check non-temporal store threshold. 
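Copies of SHARED_NON_TEMPORAL_THRESHOLD bytes or more would displace a large share of the cache, so they divert to L(large_backward) below, which switches to non-temporal stores when source and destination do not overlap.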
*/ + cmp $SHARED_NON_TEMPORAL_THRESHOLD, %RDX_LP + ja L(large_backward) +#endif +L(loop_4x_vec_backward): + /* Copy 4 * VEC a time backward. */ + VMOVU (%rcx), %VEC(0) + VMOVU -VEC_SIZE(%rcx), %VEC(1) + VMOVU -(VEC_SIZE * 2)(%rcx), %VEC(2) + VMOVU -(VEC_SIZE * 3)(%rcx), %VEC(3) + subq $(VEC_SIZE * 4), %rcx + subq $(VEC_SIZE * 4), %rdx + VMOVA %VEC(0), (%r9) + VMOVA %VEC(1), -VEC_SIZE(%r9) + VMOVA %VEC(2), -(VEC_SIZE * 2)(%r9) + VMOVA %VEC(3), -(VEC_SIZE * 3)(%r9) + subq $(VEC_SIZE * 4), %r9 + cmpq $(VEC_SIZE * 4), %rdx + ja L(loop_4x_vec_backward) + /* Store the first 4 * VEC. */ + VMOVU %VEC(4), (%rdi) + VMOVU %VEC(5), VEC_SIZE(%rdi) + VMOVU %VEC(6), (VEC_SIZE * 2)(%rdi) + VMOVU %VEC(7), (VEC_SIZE * 3)(%rdi) + /* Store the last VEC. */ + VMOVU %VEC(8), (%r11) + VZEROUPPER + ret + +#if (defined USE_MULTIARCH || VEC_SIZE == 16) +L(large_forward): + /* Don't use non-temporal store if there is overlap between + destination and source since destination may be in cache + when source is loaded. */ + leaq (%rdi, %rdx), %r10 + cmpq %r10, %rsi + jb L(loop_4x_vec_forward) +L(loop_large_forward): + /* Copy 4 * VEC a time forward with non-temporal stores. */ + PREFETCH_ONE_SET (1, (%rsi), PREFETCHED_LOAD_SIZE * 2) + PREFETCH_ONE_SET (1, (%rsi), PREFETCHED_LOAD_SIZE * 3) + VMOVU (%rsi), %VEC(0) + VMOVU VEC_SIZE(%rsi), %VEC(1) + VMOVU (VEC_SIZE * 2)(%rsi), %VEC(2) + VMOVU (VEC_SIZE * 3)(%rsi), %VEC(3) + addq $PREFETCHED_LOAD_SIZE, %rsi + subq $PREFETCHED_LOAD_SIZE, %rdx + VMOVNT %VEC(0), (%rdi) + VMOVNT %VEC(1), VEC_SIZE(%rdi) + VMOVNT %VEC(2), (VEC_SIZE * 2)(%rdi) + VMOVNT %VEC(3), (VEC_SIZE * 3)(%rdi) + addq $PREFETCHED_LOAD_SIZE, %rdi + cmpq $PREFETCHED_LOAD_SIZE, %rdx + ja L(loop_large_forward) + sfence + /* Store the last 4 * VEC. */ + VMOVU %VEC(5), (%rcx) + VMOVU %VEC(6), -VEC_SIZE(%rcx) + VMOVU %VEC(7), -(VEC_SIZE * 2)(%rcx) + VMOVU %VEC(8), -(VEC_SIZE * 3)(%rcx) + /* Store the first VEC. */ + VMOVU %VEC(4), (%r11) + VZEROUPPER + ret + +L(large_backward): + /* Don't use non-temporal store if there is overlap between + destination and source since destination may be in cache + when source is loaded. */ + leaq (%rcx, %rdx), %r10 + cmpq %r10, %r9 + jb L(loop_4x_vec_backward) +L(loop_large_backward): + /* Copy 4 * VEC a time backward with non-temporal stores. */ + PREFETCH_ONE_SET (-1, (%rcx), -PREFETCHED_LOAD_SIZE * 2) + PREFETCH_ONE_SET (-1, (%rcx), -PREFETCHED_LOAD_SIZE * 3) + VMOVU (%rcx), %VEC(0) + VMOVU -VEC_SIZE(%rcx), %VEC(1) + VMOVU -(VEC_SIZE * 2)(%rcx), %VEC(2) + VMOVU -(VEC_SIZE * 3)(%rcx), %VEC(3) + subq $PREFETCHED_LOAD_SIZE, %rcx + subq $PREFETCHED_LOAD_SIZE, %rdx + VMOVNT %VEC(0), (%r9) + VMOVNT %VEC(1), -VEC_SIZE(%r9) + VMOVNT %VEC(2), -(VEC_SIZE * 2)(%r9) + VMOVNT %VEC(3), -(VEC_SIZE * 3)(%r9) + subq $PREFETCHED_LOAD_SIZE, %r9 + cmpq $PREFETCHED_LOAD_SIZE, %rdx + ja L(loop_large_backward) + sfence + /* Store the first 4 * VEC. */ + VMOVU %VEC(4), (%rdi) + VMOVU %VEC(5), VEC_SIZE(%rdi) + VMOVU %VEC(6), (VEC_SIZE * 2)(%rdi) + VMOVU %VEC(7), (VEC_SIZE * 3)(%rdi) + /* Store the last VEC. 
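%VEC(8) was loaded before the backward loop started, so this store is correct even if the loop has already overwritten that range of the source.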
*/ + VMOVU %VEC(8), (%r11) + VZEROUPPER + ret +#endif +END (MEMMOVE_SYMBOL (__memmove, unaligned_erms)) + +#if 1 +# ifdef USE_MULTIARCH +strong_alias (MEMMOVE_SYMBOL (__memmove, unaligned_erms), + MEMMOVE_SYMBOL (__memcpy, unaligned_erms)) +# ifdef SHARED +strong_alias (MEMMOVE_SYMBOL (__memmove_chk, unaligned_erms), + MEMMOVE_SYMBOL (__memcpy_chk, unaligned_erms)) +# endif +# endif +# ifdef SHARED +strong_alias (MEMMOVE_CHK_SYMBOL (__memmove_chk, unaligned), + MEMMOVE_CHK_SYMBOL (__memcpy_chk, unaligned)) +# endif +#endif +strong_alias (MEMMOVE_SYMBOL (__memmove, unaligned), + MEMCPY_SYMBOL (__memcpy, unaligned)) diff --git a/utils/memcpy-bench/glibc/memmove.S b/utils/memcpy-bench/glibc/memmove.S new file mode 100644 index 00000000000..7bd47b9a03f --- /dev/null +++ b/utils/memcpy-bench/glibc/memmove.S @@ -0,0 +1,71 @@ +/* Optimized memmove for x86-64. + Copyright (C) 2016-2020 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +#include "sysdep.h" + +#define VEC_SIZE 16 +#define VEC(i) xmm##i +#define PREFETCHNT prefetchnta +#define VMOVNT movntdq +/* Use movups and movaps for smaller code sizes. */ +#define VMOVU movups +#define VMOVA movaps + +#define SECTION(p) p + +#ifdef USE_MULTIARCH +# if 0 +# define MEMCPY_SYMBOL(p,s) memcpy +# endif +#else +# if defined SHARED +# define MEMCPY_SYMBOL(p,s) __memcpy +# else +# define MEMCPY_SYMBOL(p,s) memcpy +# endif +#endif +#if !defined USE_MULTIARCH +# define MEMPCPY_SYMBOL(p,s) __mempcpy +#endif +#ifndef MEMMOVE_SYMBOL +# define MEMMOVE_CHK_SYMBOL(p,s) p +# define MEMMOVE_SYMBOL(p,s) memmove +#endif + +#include "memmove-vec-unaligned-erms.S" + +#ifndef USE_MULTIARCH +libc_hidden_builtin_def (memmove) +# if defined SHARED && IS_IN (libc) +strong_alias (memmove, __memcpy) +libc_hidden_ver (memmove, memcpy) +# endif +libc_hidden_def (__mempcpy) +weak_alias (__mempcpy, mempcpy) +libc_hidden_builtin_def (mempcpy) + +# if defined SHARED && IS_IN (libc) +# undef memcpy +# include +versioned_symbol (libc, __memcpy, memcpy, GLIBC_2_14); + +# if SHLIB_COMPAT (libc, GLIBC_2_2_5, GLIBC_2_14) +compat_symbol (libc, memmove, memcpy, GLIBC_2_2_5); +# endif +# endif +#endif diff --git a/utils/memcpy-bench/glibc/sysdep.h b/utils/memcpy-bench/glibc/sysdep.h new file mode 100644 index 00000000000..82b1e747fbe --- /dev/null +++ b/utils/memcpy-bench/glibc/sysdep.h @@ -0,0 +1,131 @@ +#pragma once + +/* Assembler macros for x86-64. + Copyright (C) 2001-2020 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. 
+ + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +#ifndef _X86_64_SYSDEP_H +#define _X86_64_SYSDEP_H 1 + +#include "sysdep_x86.h" + +#ifdef __ASSEMBLER__ + +/* Syntactic details of assembler. */ + +/* This macro is for setting proper CFI with DW_CFA_expression describing + the register as saved relative to %rsp instead of relative to the CFA. + Expression is DW_OP_drop, DW_OP_breg7 (%rsp is register 7), sleb128 offset + from %rsp. */ +#define cfi_offset_rel_rsp(regn, off) .cfi_escape 0x10, regn, 0x4, 0x13, \ + 0x77, off & 0x7F | 0x80, off >> 7 + +/* If compiled for profiling, call `mcount' at the start of each function. */ +#ifdef PROF +/* The mcount code relies on a normal frame pointer being on the stack + to locate our caller, so push one just for its benefit. */ +#define CALL_MCOUNT \ + pushq %rbp; \ + cfi_adjust_cfa_offset(8); \ + movq %rsp, %rbp; \ + cfi_def_cfa_register(%rbp); \ + call JUMPTARGET(mcount); \ + popq %rbp; \ + cfi_def_cfa(rsp,8); +#else +#define CALL_MCOUNT /* Do nothing. */ +#endif + +#define PSEUDO(name, syscall_name, args) \ +lose: \ + jmp JUMPTARGET(syscall_error) \ + .globl syscall_error; \ + ENTRY (name) \ + DO_CALL (syscall_name, args); \ + jb lose + +#undef JUMPTARGET +#ifdef SHARED +# ifdef BIND_NOW +# define JUMPTARGET(name) *name##@GOTPCREL(%rip) +# else +# define JUMPTARGET(name) name##@PLT +# endif +#else +/* For static archives, branch to target directly. */ +# define JUMPTARGET(name) name +#endif + +/* Long and pointer size in bytes. */ +#define LP_SIZE 8 + +/* Instruction to operate on long and pointer. */ +#define LP_OP(insn) insn##q + +/* Assembler address directive. */ +#define ASM_ADDR .quad + +/* Registers to hold long and pointer. */ +#define RAX_LP rax +#define RBP_LP rbp +#define RBX_LP rbx +#define RCX_LP rcx +#define RDI_LP rdi +#define RDX_LP rdx +#define RSI_LP rsi +#define RSP_LP rsp +#define R8_LP r8 +#define R9_LP r9 +#define R10_LP r10 +#define R11_LP r11 +#define R12_LP r12 +#define R13_LP r13 +#define R14_LP r14 +#define R15_LP r15 + +#else /* __ASSEMBLER__ */ + +/* Long and pointer size in bytes. */ +#define LP_SIZE "8" + +/* Instruction to operate on long and pointer. */ +#define LP_OP(insn) #insn "q" + +/* Assembler address directive. */ +#define ASM_ADDR ".quad" + +/* Registers to hold long and pointer. */ +#define RAX_LP "rax" +#define RBP_LP "rbp" +#define RBX_LP "rbx" +#define RCX_LP "rcx" +#define RDI_LP "rdi" +#define RDX_LP "rdx" +#define RSI_LP "rsi" +#define RSP_LP "rsp" +#define R8_LP "r8" +#define R9_LP "r9" +#define R10_LP "r10" +#define R11_LP "r11" +#define R12_LP "r12" +#define R13_LP "r13" +#define R14_LP "r14" +#define R15_LP "r15" + +#endif /* __ASSEMBLER__ */ + +#endif /* _X86_64_SYSDEP_H */ diff --git a/utils/memcpy-bench/glibc/sysdep_generic.h b/utils/memcpy-bench/glibc/sysdep_generic.h new file mode 100644 index 00000000000..e6183d72792 --- /dev/null +++ b/utils/memcpy-bench/glibc/sysdep_generic.h @@ -0,0 +1,115 @@ +#pragma once + +/* Generic asm macros used on many machines. + Copyright (C) 1991-2020 Free Software Foundation, Inc. + This file is part of the GNU C Library. 
+ + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +#define C_SYMBOL_NAME(name) name +#define HIDDEN_JUMPTARGET(name) 0x0 +#define SHARED_CACHE_SIZE_HALF (1024*1024) +#define DATA_CACHE_SIZE_HALF (1024*32/2) +#define DATA_CACHE_SIZE (1024*32) +#define SHARED_NON_TEMPORAL_THRESHOLD (1024*1024*4) +#define REP_MOSB_THRESHOLD 1024 + +#define USE_MULTIARCH + +#define ASM_LINE_SEP ; + +#define strong_alias(original, alias) \ + .globl C_SYMBOL_NAME (alias) ASM_LINE_SEP \ + C_SYMBOL_NAME (alias) = C_SYMBOL_NAME (original) + +#ifndef C_LABEL + +/* Define a macro we can use to construct the asm name for a C symbol. */ +# define C_LABEL(name) name##: + +#endif + +#ifdef __ASSEMBLER__ +/* Mark the end of function named SYM. This is used on some platforms + to generate correct debugging information. */ +# ifndef END +# define END(sym) +# endif + +# ifndef JUMPTARGET +# define JUMPTARGET(sym) sym +# endif +#endif + +/* Macros to generate eh_frame unwind information. */ +#ifdef __ASSEMBLER__ +# define cfi_startproc .cfi_startproc +# define cfi_endproc .cfi_endproc +# define cfi_def_cfa(reg, off) .cfi_def_cfa reg, off +# define cfi_def_cfa_register(reg) .cfi_def_cfa_register reg +# define cfi_def_cfa_offset(off) .cfi_def_cfa_offset off +# define cfi_adjust_cfa_offset(off) .cfi_adjust_cfa_offset off +# define cfi_offset(reg, off) .cfi_offset reg, off +# define cfi_rel_offset(reg, off) .cfi_rel_offset reg, off +# define cfi_register(r1, r2) .cfi_register r1, r2 +# define cfi_return_column(reg) .cfi_return_column reg +# define cfi_restore(reg) .cfi_restore reg +# define cfi_same_value(reg) .cfi_same_value reg +# define cfi_undefined(reg) .cfi_undefined reg +# define cfi_remember_state .cfi_remember_state +# define cfi_restore_state .cfi_restore_state +# define cfi_window_save .cfi_window_save +# define cfi_personality(enc, exp) .cfi_personality enc, exp +# define cfi_lsda(enc, exp) .cfi_lsda enc, exp + +#else /* ! 
ASSEMBLER */ + +# define CFI_STRINGIFY(Name) CFI_STRINGIFY2 (Name) +# define CFI_STRINGIFY2(Name) #Name +# define CFI_STARTPROC ".cfi_startproc" +# define CFI_ENDPROC ".cfi_endproc" +# define CFI_DEF_CFA(reg, off) \ + ".cfi_def_cfa " CFI_STRINGIFY(reg) "," CFI_STRINGIFY(off) +# define CFI_DEF_CFA_REGISTER(reg) \ + ".cfi_def_cfa_register " CFI_STRINGIFY(reg) +# define CFI_DEF_CFA_OFFSET(off) \ + ".cfi_def_cfa_offset " CFI_STRINGIFY(off) +# define CFI_ADJUST_CFA_OFFSET(off) \ + ".cfi_adjust_cfa_offset " CFI_STRINGIFY(off) +# define CFI_OFFSET(reg, off) \ + ".cfi_offset " CFI_STRINGIFY(reg) "," CFI_STRINGIFY(off) +# define CFI_REL_OFFSET(reg, off) \ + ".cfi_rel_offset " CFI_STRINGIFY(reg) "," CFI_STRINGIFY(off) +# define CFI_REGISTER(r1, r2) \ + ".cfi_register " CFI_STRINGIFY(r1) "," CFI_STRINGIFY(r2) +# define CFI_RETURN_COLUMN(reg) \ + ".cfi_return_column " CFI_STRINGIFY(reg) +# define CFI_RESTORE(reg) \ + ".cfi_restore " CFI_STRINGIFY(reg) +# define CFI_UNDEFINED(reg) \ + ".cfi_undefined " CFI_STRINGIFY(reg) +# define CFI_REMEMBER_STATE \ + ".cfi_remember_state" +# define CFI_RESTORE_STATE \ + ".cfi_restore_state" +# define CFI_WINDOW_SAVE \ + ".cfi_window_save" +# define CFI_PERSONALITY(enc, exp) \ + ".cfi_personality " CFI_STRINGIFY(enc) "," CFI_STRINGIFY(exp) +# define CFI_LSDA(enc, exp) \ + ".cfi_lsda " CFI_STRINGIFY(enc) "," CFI_STRINGIFY(exp) +#endif + +#include "dwarf2.h" diff --git a/utils/memcpy-bench/glibc/sysdep_x86.h b/utils/memcpy-bench/glibc/sysdep_x86.h new file mode 100644 index 00000000000..1c482cfabb7 --- /dev/null +++ b/utils/memcpy-bench/glibc/sysdep_x86.h @@ -0,0 +1,115 @@ +#pragma once + +/* Assembler macros for x86. + Copyright (C) 2017-2020 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +#ifndef _X86_SYSDEP_H +#define _X86_SYSDEP_H 1 + +#include "sysdep_generic.h" + +/* __CET__ is defined by GCC with Control-Flow Protection values: + +enum cf_protection_level +{ + CF_NONE = 0, + CF_BRANCH = 1 << 0, + CF_RETURN = 1 << 1, + CF_FULL = CF_BRANCH | CF_RETURN, + CF_SET = 1 << 2 +}; +*/ + +/* Set if CF_BRANCH (IBT) is enabled. */ +#define X86_FEATURE_1_IBT (1U << 0) +/* Set if CF_RETURN (SHSTK) is enabled. */ +#define X86_FEATURE_1_SHSTK (1U << 1) + +#ifdef __CET__ +# define CET_ENABLED 1 +# define IBT_ENABLED (__CET__ & X86_FEATURE_1_IBT) +# define SHSTK_ENABLED (__CET__ & X86_FEATURE_1_SHSTK) +#else +# define CET_ENABLED 0 +# define IBT_ENABLED 0 +# define SHSTK_ENABLED 0 +#endif + +/* Offset for fxsave/xsave area used by _dl_runtime_resolve. Also need + space to preserve RCX, RDX, RSI, RDI, R8, R9 and RAX. It must be + aligned to 16 bytes for fxsave and 64 bytes for xsave. */ +#define STATE_SAVE_OFFSET (8 * 7 + 8) + +/* Save SSE, AVX, AVX512, mask and bound registers. 
*/
+#define STATE_SAVE_MASK \
+    ((1 << 1) | (1 << 2) | (1 << 3) | (1 << 5) | (1 << 6) | (1 << 7))
+
+#ifdef __ASSEMBLER__
+
+/* Syntactic details of assembler. */
+
+#ifdef _CET_ENDBR
+# define _CET_NOTRACK notrack
+#else
+# define _CET_ENDBR
+# define _CET_NOTRACK
+#endif
+
+/* ELF uses byte-counts for .align, most others use log2 of count of bytes. */
+#define ALIGNARG(log2) 1<
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+
+#include
+
+#include
+
+#include
+
+#include
+#include
+
+#include
+
+
+template <typename F, typename MemcpyImpl>
+void NO_INLINE loop(uint8_t * dst, uint8_t * src, size_t size, F && chunk_size_distribution, MemcpyImpl && impl)
+{
+    while (size)
+    {
+        size_t bytes_to_copy = std::min(size, chunk_size_distribution());
+
+        impl(dst, src, bytes_to_copy);
+
+        dst += bytes_to_copy;
+        src += bytes_to_copy;
+        size -= bytes_to_copy;
+
+        /// Execute at least one SSE instruction as a penalty after running AVX code.
+        __asm__ __volatile__ ("pxor %%xmm15, %%xmm15" ::: "xmm15");
+    }
+}
+
+
+using RNG = pcg32_fast;
+
+template <size_t N>
+size_t generatorUniform(RNG & rng) { return rng() % N; };
+
+
+template <typename F, typename MemcpyImpl>
+uint64_t test(uint8_t * dst, uint8_t * src, size_t size, size_t iterations, size_t num_threads, F && generator, MemcpyImpl && impl, const char * name)
+{
+    Stopwatch watch;
+
+    std::vector<std::thread> threads;
+    threads.reserve(num_threads);
+
+    for (size_t thread_num = 0; thread_num < num_threads; ++thread_num)
+    {
+        size_t begin = size * thread_num / num_threads;
+        size_t end = size * (thread_num + 1) / num_threads;
+
+        threads.emplace_back([begin, end, iterations, &src, &dst, &generator, &impl]
+        {
+            for (size_t iteration = 0; iteration < iterations; ++iteration)
+            {
+                loop(
+                    iteration % 2 ? &src[begin] : &dst[begin],
+                    iteration % 2 ? &dst[begin] : &src[begin],
+                    end - begin,
+                    [rng = RNG(), &generator]() mutable { return generator(rng); },
+                    std::forward<MemcpyImpl>(impl));
+            }
+        });
+    }
+
+    for (auto & thread : threads)
+        thread.join();
+
+    uint64_t elapsed_ns = watch.elapsed();
+
+    /// Validation
+    for (size_t i = 0; i < size; ++i)
+        if (dst[i] != uint8_t(i))
+            throw std::logic_error("Incorrect result");
+
+    std::cout << name;
+    return elapsed_ns;
+}
+
+
+using memcpy_type = void * (*)(const void * __restrict, void * __restrict, size_t);
+
+
+static void * memcpy_erms(void * dst, const void * src, size_t size)
+{
+    asm volatile (
+        "rep movsb"
+        : "=D"(dst), "=S"(src), "=c"(size)
+        : "0"(dst), "1"(src), "2"(size)
+        : "memory");
+    return dst;
+}
+
+static void * memcpy_trivial(void * __restrict dst_, const void * __restrict src_, size_t size)
+{
+    char * __restrict dst = reinterpret_cast<char * __restrict>(dst_);
+    const char * __restrict src = reinterpret_cast<const char * __restrict>(src_);
+    void * ret = dst;
+
+    while (size > 0)
+    {
+        *dst = *src;
+        ++dst;
+        ++src;
+        --size;
+    }
+
+    return ret;
+}
+
+extern "C" void * memcpy_jart(void * dst, const void * src, size_t size);
+extern "C" void MemCpy(void * dst, const void * src, size_t size);
+
+void * memcpy_fast_sse(void * dst, const void * src, size_t size);
+void * memcpy_fast_avx(void * dst, const void * src, size_t size);
+void * memcpy_tiny(void * dst, const void * src, size_t size);
+
+
+static void * memcpySSE2(void * __restrict destination, const void * __restrict source, size_t size)
+{
+    unsigned char *dst = reinterpret_cast<unsigned char *>(destination);
+    const unsigned char *src = reinterpret_cast<const unsigned char *>(source);
+    size_t padding;
+
+    // small memory copy
+    if (size <= 16)
+        return memcpy_tiny(dst, src, size);
+
+    // align destination to 16 bytes boundary
+    padding = (16 - (reinterpret_cast<size_t>(dst) & 15)) & 15;
+
+    if (padding > 0)
+    {
+        __m128i head = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src));
+        _mm_storeu_si128(reinterpret_cast<__m128i*>(dst), head);
+        dst += padding;
+        src += padding;
+        size -= padding;
+    }
+
+    // medium size copy
+    __m128i c0;
+
+    for (; size >= 16; size -= 16)
+    {
+        c0 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src));
+        src += 16;
+        _mm_store_si128((reinterpret_cast<__m128i*>(dst)), c0);
+        dst += 16;
+    }
+
+    memcpy_tiny(dst, src, size);
+    return destination;
+}
+
+static void * memcpySSE2Unrolled2(void * __restrict destination, const void * __restrict source, size_t size)
+{
+    unsigned char *dst = reinterpret_cast<unsigned char *>(destination);
+    const unsigned char *src = reinterpret_cast<const unsigned char *>(source);
+    size_t padding;
+
+    // small memory copy
+    if (size <= 32)
+        return memcpy_tiny(dst, src, size);
+
+    // align destination to 16 bytes boundary
+    padding = (16 - (reinterpret_cast<size_t>(dst) & 15)) & 15;
+
+    if (padding > 0)
+    {
+        __m128i head = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src));
+        _mm_storeu_si128(reinterpret_cast<__m128i*>(dst), head);
+        dst += padding;
+        src += padding;
+        size -= padding;
+    }
+
+    // medium size copy
+    __m128i c0, c1;
+
+    for (; size >= 32; size -= 32)
+    {
+        c0 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src) + 0);
+        c1 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src) + 1);
+        src += 32;
+        _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 0), c0);
+        _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 1), c1);
+        dst += 32;
+    }
+
+    memcpy_tiny(dst, src, size);
+    return destination;
+}
+
+static void * memcpySSE2Unrolled4(void * __restrict destination, const void * __restrict source, size_t size)
+{
+    unsigned char *dst = reinterpret_cast<unsigned char *>(destination);
+    const unsigned char *src = reinterpret_cast<const unsigned char *>(source);
+    size_t padding;
+
+    // small memory copy
+    if (size <= 64)
+        return memcpy_tiny(dst, src, size);
+
+    // align destination to 16 bytes boundary
+    padding = (16 - (reinterpret_cast<size_t>(dst) & 15)) & 15;
+
+    if (padding > 0)
+    {
+        __m128i head = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src));
+        _mm_storeu_si128(reinterpret_cast<__m128i*>(dst), head);
+        dst += padding;
+        src += padding;
+        size -= padding;
+    }
+
+    // medium size copy
+    __m128i c0, c1, c2, c3;
+
+    for (; size >= 64; size -= 64)
+    {
+        c0 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src) + 0);
+        c1 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src) + 1);
+        c2 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src) + 2);
+        c3 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src) + 3);
+        src += 64;
+        _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 0), c0);
+        _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 1), c1);
+        _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 2), c2);
+        _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 3), c3);
+        dst += 64;
+    }
+
+    memcpy_tiny(dst, src, size);
+    return destination;
+}
+
+
+static void * memcpySSE2Unrolled8(void * __restrict destination, const void * __restrict source, size_t size)
+{
+    unsigned char *dst = reinterpret_cast<unsigned char *>(destination);
+    const unsigned char *src = reinterpret_cast<const unsigned char *>(source);
+    size_t padding;
+
+    // small memory copy
+    if (size <= 128)
+        return memcpy_tiny(dst, src, size);
+
+    // align destination to 16 bytes boundary
+    padding = (16 - (reinterpret_cast<size_t>(dst) & 15)) & 15;
+
+    if (padding > 0)
+    {
+        __m128i head = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src));
+        _mm_storeu_si128(reinterpret_cast<__m128i*>(dst), head);
+        dst += padding;
+        src += padding;
+        size -= padding;
+    }
+
+    // medium size copy
+    __m128i c0, c1, c2, c3, c4, c5, c6, c7;
+
+    for (; size >= 128; size -= 128)
+    {
+        c0 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src) + 0);
+        c1 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src) + 1);
+        c2 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src) + 2);
+        c3 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src) + 3);
+        c4 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src) + 4);
+        c5 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src) + 5);
+        c6 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src) + 6);
+        c7 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src) + 7);
+        src += 128;
+        _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 0), c0);
+        _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 1), c1);
+        _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 2), c2);
+        _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 3), c3);
+        _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 4), c4);
+        _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 5), c5);
+        _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 6), c6);
+        _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 7), c7);
+        dst += 128;
+    }
+
+    memcpy_tiny(dst, src, size);
+    return destination;
+}
+
+
+//static __attribute__((__always_inline__, __target__("sse2")))
+__attribute__((__always_inline__)) inline void
+memcpy_my_medium_sse(uint8_t * __restrict & dst, const uint8_t * __restrict & src, size_t & size)
+{
+    /// Align destination to 16 bytes boundary.
+    size_t padding = (16 - (reinterpret_cast<size_t>(dst) & 15)) & 15;
+
+    if (padding > 0)
+    {
+        __m128i head = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src));
+        _mm_storeu_si128(reinterpret_cast<__m128i*>(dst), head);
+        dst += padding;
+        src += padding;
+        size -= padding;
+    }
+
+    /// Aligned unrolled copy.
+    __m128i c0, c1, c2, c3, c4, c5, c6, c7;
+
+    while (size >= 128)
+    {
+        c0 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src) + 0);
+        c1 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src) + 1);
+        c2 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src) + 2);
+        c3 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src) + 3);
+        c4 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src) + 4);
+        c5 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src) + 5);
+        c6 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src) + 6);
+        c7 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src) + 7);
+        src += 128;
+        _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 0), c0);
+        _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 1), c1);
+        _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 2), c2);
+        _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 3), c3);
+        _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 4), c4);
+        _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 5), c5);
+        _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 6), c6);
+        _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 7), c7);
+        dst += 128;
+
+        size -= 128;
+    }
+}
+
+__attribute__((__target__("avx")))
+void memcpy_my_medium_avx(uint8_t * __restrict & __restrict dst, const uint8_t * __restrict & __restrict src, size_t & __restrict size)
+{
+    size_t padding = (32 - (reinterpret_cast<size_t>(dst) & 31)) & 31;
+
+    if (padding > 0)
+    {
+        __m256i head = _mm256_loadu_si256(reinterpret_cast<const __m256i*>(src));
+        _mm256_storeu_si256(reinterpret_cast<__m256i*>(dst), head);
+        dst += padding;
+        src += padding;
+        size -= padding;
+    }
+
+    __m256i c0, c1, c2, c3, c4, c5, c6, c7;
+
+    while (size >= 256)
+    {
+        c0 = _mm256_loadu_si256((reinterpret_cast<const __m256i*>(src)) + 0);
+        c1 = _mm256_loadu_si256((reinterpret_cast<const __m256i*>(src)) + 1);
+        c2 = _mm256_loadu_si256((reinterpret_cast<const __m256i*>(src)) + 2);
+        c3 = _mm256_loadu_si256((reinterpret_cast<const __m256i*>(src)) + 3);
+        c4 = _mm256_loadu_si256((reinterpret_cast<const __m256i*>(src)) + 4);
+        c5 = _mm256_loadu_si256((reinterpret_cast<const __m256i*>(src)) + 5);
+        c6 = _mm256_loadu_si256((reinterpret_cast<const __m256i*>(src)) + 6);
+        c7 = _mm256_loadu_si256((reinterpret_cast<const __m256i*>(src)) + 7);
+        src += 256;
+        _mm256_store_si256(((reinterpret_cast<__m256i*>(dst)) + 0), c0);
+        _mm256_store_si256(((reinterpret_cast<__m256i*>(dst)) + 1), c1);
+        _mm256_store_si256(((reinterpret_cast<__m256i*>(dst)) + 2), c2);
+        _mm256_store_si256(((reinterpret_cast<__m256i*>(dst)) + 3), c3);
+        _mm256_store_si256(((reinterpret_cast<__m256i*>(dst)) + 4), c4);
+        _mm256_store_si256(((reinterpret_cast<__m256i*>(dst)) + 5), c5);
+        _mm256_store_si256(((reinterpret_cast<__m256i*>(dst)) + 6), c6);
+        _mm256_store_si256(((reinterpret_cast<__m256i*>(dst)) + 7), c7);
+        dst += 256;
+
+        size -= 256;
+    }
+}
+
+bool have_avx = true;
+
+
+static uint8_t * memcpy_my(uint8_t * __restrict dst, const uint8_t * __restrict src, size_t size)
+{
+    uint8_t * ret = dst;
+
+tail:
+    if (size <= 16)
+    {
+        if (size >= 8)
+        {
+            __builtin_memcpy(dst + size - 8, src + size - 8, 8);
+            __builtin_memcpy(dst, src, 8);
+        }
+        else if (size >= 4)
+        {
+            __builtin_memcpy(dst + size - 4, src + size - 4, 4);
+            __builtin_memcpy(dst, src, 4);
+        }
+        else if (size >= 2)
+        {
+            __builtin_memcpy(dst + size - 2, src + size - 2, 2);
+            __builtin_memcpy(dst, src, 2);
+        }
+        else if (size >= 1)
+        {
+            *dst = *src;
+        }
+    }
+    else if (have_avx)
+    {
+        if (size <= 32)
+        {
+            __builtin_memcpy(dst, src, 8);
+            __builtin_memcpy(dst + 8, src + 8, 8);
+
+            dst += 16;
+            src += 16;
+            size -= 16;
+
+            goto tail;
+        }
+
+        if (size <= 256)
+        {
+            __asm__(
+                "vmovups -0x20(%[s],%[size],1), %%ymm0\n"
+                "vmovups %%ymm0, -0x20(%[d],%[size],1)\n"
+                : [d]"+r"(dst), [s]"+r"(src)
+                : [size]"r"(size)
+                : "ymm0", "memory");
+
+            while (size > 32)
+            {
+                __asm__(
+                    "vmovups (%[s]), %%ymm0\n"
+                    "vmovups %%ymm0, (%[d])\n"
+                    : [d]"+r"(dst), [s]"+r"(src)
+                    :
+                    : "ymm0", "memory");
+
+                dst += 32;
+                src += 32;
+                size -= 32;
+            }
+        }
+        else
+        {
+            size_t padding = (32 - (reinterpret_cast<size_t>(dst) & 31)) & 31;
+
+            if (padding > 0)
+            {
+                __asm__(
+                    "vmovups (%[s]), %%ymm0\n"
+                    "vmovups %%ymm0, (%[d])\n"
+                    : [d]"+r"(dst), [s]"+r"(src)
+                    :
+                    : "ymm0", "memory");
+
+                dst += padding;
+                src += padding;
+                size -= padding;
+            }
+
+            while (size >= 256)
+            {
+                __asm__(
+                    "vmovups (%[s]), %%ymm0\n"
+                    "vmovups 0x20(%[s]), %%ymm1\n"
+                    "vmovups 0x40(%[s]), %%ymm2\n"
+                    "vmovups 0x60(%[s]), %%ymm3\n"
+                    "vmovups 0x80(%[s]), %%ymm4\n"
+                    "vmovups 0xa0(%[s]), %%ymm5\n"
+                    "vmovups 0xc0(%[s]), %%ymm6\n"
+                    "vmovups 0xe0(%[s]), %%ymm7\n"
+                    "add $0x100,%[s]\n"
+                    "vmovaps %%ymm0, (%[d])\n"
+                    "vmovaps %%ymm1, 0x20(%[d])\n"
+                    "vmovaps %%ymm2, 0x40(%[d])\n"
+                    "vmovaps %%ymm3, 0x60(%[d])\n"
+                    "vmovaps %%ymm4, 0x80(%[d])\n"
+                    "vmovaps %%ymm5, 0xa0(%[d])\n"
+                    "vmovaps %%ymm6, 0xc0(%[d])\n"
+                    "vmovaps %%ymm7, 0xe0(%[d])\n"
+                    "add $0x100, %[d]\n"
+                    : [d]"+r"(dst), [s]"+r"(src)
+                    :
+                    : "ymm0", "ymm1", "ymm2", "ymm3", "ymm4", "ymm5", "ymm6", "ymm7", "memory");
+
+                size -= 256;
+            }
+
+            goto tail;
+        }
+    }
+    else
+    {
+        if (size <= 128)
+        {
+            _mm_storeu_si128(reinterpret_cast<__m128i *>(dst + size - 16), _mm_loadu_si128(reinterpret_cast<const __m128i *>(src + size - 16)));
+
+            while (size > 16)
+            {
+                _mm_storeu_si128(reinterpret_cast<__m128i *>(dst), _mm_loadu_si128(reinterpret_cast<const __m128i *>(src)));
+                dst += 16;
+                src += 16;
+                size -= 16;
+            }
+        }
+        else
+        {
+            /// Align destination to 16 bytes boundary.
+            size_t padding = (16 - (reinterpret_cast<size_t>(dst) & 15)) & 15;
+
+            if (padding > 0)
+            {
+                __m128i head = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src));
+                _mm_storeu_si128(reinterpret_cast<__m128i*>(dst), head);
+                dst += padding;
+                src += padding;
+                size -= padding;
+            }
+
+            /// Aligned unrolled copy.
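+            /// dst was aligned to a 16-byte boundary above, so the aligned stores
+            /// below are safe; src may remain misaligned, hence the unaligned loads.
+            /// Eight XMM registers per iteration move 128 bytes.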
+            __m128i c0, c1, c2, c3, c4, c5, c6, c7;
+
+            while (size >= 128)
+            {
+                c0 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src) + 0);
+                c1 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src) + 1);
+                c2 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src) + 2);
+                c3 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src) + 3);
+                c4 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src) + 4);
+                c5 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src) + 5);
+                c6 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src) + 6);
+                c7 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src) + 7);
+                src += 128;
+                _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 0), c0);
+                _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 1), c1);
+                _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 2), c2);
+                _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 3), c3);
+                _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 4), c4);
+                _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 5), c5);
+                _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 6), c6);
+                _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 7), c7);
+                dst += 128;
+
+                size -= 128;
+            }
+
+            goto tail;
+        }
+    }
+
+    return ret;
+}
+
+
+static uint8_t * memcpy_my2(uint8_t * __restrict dst, const uint8_t * __restrict src, size_t size)
+{
+    uint8_t * ret = dst;
+
+    if (size <= 16)
+    {
+        if (size >= 8)
+        {
+            __builtin_memcpy(dst + size - 8, src + size - 8, 8);
+            __builtin_memcpy(dst, src, 8);
+        }
+        else if (size >= 4)
+        {
+            __builtin_memcpy(dst + size - 4, src + size - 4, 4);
+            __builtin_memcpy(dst, src, 4);
+        }
+        else if (size >= 2)
+        {
+            __builtin_memcpy(dst + size - 2, src + size - 2, 2);
+            __builtin_memcpy(dst, src, 2);
+        }
+        else if (size >= 1)
+        {
+            *dst = *src;
+        }
+    }
+    else if (size <= 128)
+    {
+        _mm_storeu_si128(reinterpret_cast<__m128i *>(dst + size - 16), _mm_loadu_si128(reinterpret_cast<const __m128i *>(src + size - 16)));
+
+        while (size > 16)
+        {
+            _mm_storeu_si128(reinterpret_cast<__m128i *>(dst), _mm_loadu_si128(reinterpret_cast<const __m128i *>(src)));
+            dst += 16;
+            src += 16;
+            size -= 16;
+        }
+    }
+    else if (size < 30000 || !have_avx)
+    {
+        /// Align destination to 16 bytes boundary.
+        size_t padding = (16 - (reinterpret_cast<size_t>(dst) & 15)) & 15;
+
+        if (padding > 0)
+        {
+            __m128i head = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src));
+            _mm_storeu_si128(reinterpret_cast<__m128i*>(dst), head);
+            dst += padding;
+            src += padding;
+            size -= padding;
+        }
+
+        /// Aligned unrolled copy.
+        __m128i c0, c1, c2, c3, c4, c5, c6, c7;
+
+        while (size >= 128)
+        {
+            c0 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src) + 0);
+            c1 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src) + 1);
+            c2 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src) + 2);
+            c3 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src) + 3);
+            c4 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src) + 4);
+            c5 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src) + 5);
+            c6 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src) + 6);
+            c7 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src) + 7);
+            src += 128;
+            _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 0), c0);
+            _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 1), c1);
+            _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 2), c2);
+            _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 3), c3);
+            _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 4), c4);
+            _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 5), c5);
+            _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 6), c6);
+            _mm_store_si128((reinterpret_cast<__m128i*>(dst) + 7), c7);
+            dst += 128;
+
+            size -= 128;
+        }
+
+        _mm_storeu_si128(reinterpret_cast<__m128i *>(dst + size - 16), _mm_loadu_si128(reinterpret_cast<const __m128i *>(src + size - 16)));
+
+        while (size > 16)
+        {
+            _mm_storeu_si128(reinterpret_cast<__m128i *>(dst), _mm_loadu_si128(reinterpret_cast<const __m128i *>(src)));
+            dst += 16;
+            src += 16;
+            size -= 16;
+        }
+    }
+    else
+    {
+        size_t padding = (32 - (reinterpret_cast<size_t>(dst) & 31)) & 31;
+
+        if (padding > 0)
+        {
+            __asm__(
+                "vmovups (%[s]), %%ymm0\n"
+                "vmovups %%ymm0, (%[d])\n"
+                : [d]"+r"(dst), [s]"+r"(src)
+                :
+                : "ymm0", "memory");
+
+            dst += padding;
+            src += padding;
+            size -= padding;
+        }
+
+        while (size >= 512)
+        {
+            __asm__(
+                "vmovups (%[s]), %%ymm0\n"
+                "vmovups 0x20(%[s]), %%ymm1\n"
+                "vmovups 0x40(%[s]), %%ymm2\n"
+                "vmovups 0x60(%[s]), %%ymm3\n"
+                "vmovups 0x80(%[s]), %%ymm4\n"
+                "vmovups 0xa0(%[s]), %%ymm5\n"
+                "vmovups 0xc0(%[s]), %%ymm6\n"
+                "vmovups 0xe0(%[s]), %%ymm7\n"
+                "vmovups 0x100(%[s]), %%ymm8\n"
+                "vmovups 0x120(%[s]), %%ymm9\n"
+                "vmovups 0x140(%[s]), %%ymm10\n"
+                "vmovups 0x160(%[s]), %%ymm11\n"
+                "vmovups 0x180(%[s]), %%ymm12\n"
+                "vmovups 0x1a0(%[s]), %%ymm13\n"
+                "vmovups 0x1c0(%[s]), %%ymm14\n"
+                "vmovups 0x1e0(%[s]), %%ymm15\n"
+                "add $0x200, %[s]\n"
+                "sub $0x200, %[size]\n"
+                "vmovaps %%ymm0, (%[d])\n"
+                "vmovaps %%ymm1, 0x20(%[d])\n"
+                "vmovaps %%ymm2, 0x40(%[d])\n"
+                "vmovaps %%ymm3, 0x60(%[d])\n"
+                "vmovaps %%ymm4, 0x80(%[d])\n"
+                "vmovaps %%ymm5, 0xa0(%[d])\n"
+                "vmovaps %%ymm6, 0xc0(%[d])\n"
+                "vmovaps %%ymm7, 0xe0(%[d])\n"
+                "vmovaps %%ymm8, 0x100(%[d])\n"
+                "vmovaps %%ymm9, 0x120(%[d])\n"
+                "vmovaps %%ymm10, 0x140(%[d])\n"
+                "vmovaps %%ymm11, 0x160(%[d])\n"
+                "vmovaps %%ymm12, 0x180(%[d])\n"
+                "vmovaps %%ymm13, 0x1a0(%[d])\n"
+                "vmovaps %%ymm14, 0x1c0(%[d])\n"
+                "vmovaps %%ymm15, 0x1e0(%[d])\n"
+                "add $0x200, %[d]\n"
+                : [d]"+r"(dst), [s]"+r"(src), [size]"+r"(size)
+                :
+                : "ymm0", "ymm1", "ymm2", "ymm3", "ymm4", "ymm5", "ymm6", "ymm7",
+                  "ymm8", "ymm9", "ymm10", "ymm11", "ymm12", "ymm13", "ymm14", "ymm15",
+                  "memory");
+        }
+
+        /*while (size >= 256)
+        {
+            __asm__(
+                "vmovups (%[s]), %%ymm0\n"
+                "vmovups 0x20(%[s]), %%ymm1\n"
+                "vmovups 0x40(%[s]), %%ymm2\n"
+                "vmovups 0x60(%[s]), %%ymm3\n"
+                "vmovups 0x80(%[s]), %%ymm4\n"
+                "vmovups 0xa0(%[s]), %%ymm5\n"
+                "vmovups 0xc0(%[s]), %%ymm6\n"
+                "vmovups 0xe0(%[s]), %%ymm7\n"
+                "add $0x100,%[s]\n"
+                "vmovaps %%ymm0, (%[d])\n"
+                "vmovaps %%ymm1, 0x20(%[d])\n"
+                "vmovaps %%ymm2, 0x40(%[d])\n"
+                "vmovaps %%ymm3, 0x60(%[d])\n"
+                "vmovaps %%ymm4, 0x80(%[d])\n"
+                "vmovaps %%ymm5, 0xa0(%[d])\n"
+                "vmovaps %%ymm6, 0xc0(%[d])\n"
+                "vmovaps %%ymm7, 0xe0(%[d])\n"
+                "add $0x100, %[d]\n"
+                : [d]"+r"(dst), [s]"+r"(src)
+                :
+                : "ymm0", "ymm1", "ymm2", "ymm3", "ymm4", "ymm5", "ymm6", "ymm7", "memory");
+
+            size -= 256;
+        }*/
+
+        /*while (size > 128)
+        {
+            __asm__(
+                "vmovups (%[s]), %%ymm0\n"
+                "vmovups 0x20(%[s]), %%ymm1\n"
+                "vmovups 0x40(%[s]), %%ymm2\n"
+                "vmovups 0x60(%[s]), %%ymm3\n"
+                "add $0x80, %[s]\n"
+                "sub $0x80, %[size]\n"
+                "vmovaps %%ymm0, (%[d])\n"
+                "vmovaps %%ymm1, 0x20(%[d])\n"
+                "vmovaps %%ymm2, 0x40(%[d])\n"
+                "vmovaps %%ymm3, 0x60(%[d])\n"
+                "add $0x80, %[d]\n"
+                : [d]"+r"(dst), [s]"+r"(src), [size]"+r"(size)
+                :
+                : "ymm0", "ymm1", "ymm2", "ymm3", "memory");
+        }*/
+
+        __asm__(
+            "vmovups -0x20(%[s],%[size],1), %%ymm0\n"
+            "vmovups %%ymm0, -0x20(%[d],%[size],1)\n"
+            : [d]"+r"(dst), [s]"+r"(src)
+            : [size]"r"(size)
+            : "ymm0", "memory");
+
+        while (size > 32)
+        {
+            __asm__(
+                "vmovups (%[s]), %%ymm0\n"
+                "vmovups %%ymm0, (%[d])\n"
+                : [d]"+r"(dst), [s]"+r"(src)
+                :
+                : "ymm0", "memory");
+
+            dst += 32;
+            src += 32;
+            size -= 32;
+        }
+
+        __asm__ __volatile__ ("vzeroupper"
+            ::: "ymm0", "ymm1", "ymm2", "ymm3", "ymm4", "ymm5", "ymm6", "ymm7",
+                "ymm8", "ymm9", "ymm10", "ymm11", "ymm12", "ymm13", "ymm14", "ymm15");
+    }
+
+    return ret;
+}
+
+extern "C" void * __memcpy_erms(void * __restrict destination, const void * __restrict source, size_t size);
+extern "C" void * __memcpy_sse2_unaligned(void * __restrict destination, const void * __restrict source, size_t size);
+extern "C" void * __memcpy_ssse3(void * __restrict destination, const void * __restrict source, size_t size);
+extern "C" void * __memcpy_ssse3_back(void * __restrict destination, const void * __restrict source, size_t size);
+extern "C" void * __memcpy_avx_unaligned(void * __restrict destination, const void * __restrict source, size_t size);
+extern "C" void * __memcpy_avx_unaligned_erms(void * __restrict destination, const void * __restrict source, size_t size);
+extern "C" void * __memcpy_avx512_unaligned(void * __restrict destination, const void * __restrict source, size_t size);
+extern "C" void * __memcpy_avx512_unaligned_erms(void * __restrict destination, const void * __restrict source, size_t size);
+extern "C" void * __memcpy_avx512_no_vzeroupper(void * __restrict destination, const void * __restrict source, size_t size);
+
+
+#define VARIANT(N, NAME) \
+    if (memcpy_variant == N) \
+        return test(dst, src, size, iterations, num_threads, std::forward<F>(generator), NAME, #NAME);
+
+template <typename F>
+uint64_t dispatchMemcpyVariants(size_t memcpy_variant, uint8_t * dst, uint8_t * src, size_t size, size_t iterations, size_t num_threads, F && generator)
+{
+    memcpy_type memcpy_libc_old = reinterpret_cast<memcpy_type>(dlsym(RTLD_NEXT, "memcpy"));
+
+    VARIANT(1, memcpy)
+    VARIANT(2, memcpy_trivial)
+    VARIANT(3, memcpy_libc_old)
+    VARIANT(4, memcpy_erms)
+    VARIANT(5, MemCpy)
+    VARIANT(6, memcpySSE2)
+    VARIANT(7, memcpySSE2Unrolled2)
+    VARIANT(8, memcpySSE2Unrolled4)
+    VARIANT(9, memcpySSE2Unrolled8)
+    VARIANT(10, memcpy_fast_sse)
+    VARIANT(11, memcpy_fast_avx)
+    VARIANT(12, memcpy_my)
+    VARIANT(13, memcpy_my2)
+
+    VARIANT(21, __memcpy_erms)
+    VARIANT(22, __memcpy_sse2_unaligned)
+    VARIANT(23, __memcpy_ssse3)
+    VARIANT(24, __memcpy_ssse3_back)
+    VARIANT(25, __memcpy_avx_unaligned)
+    VARIANT(26, __memcpy_avx_unaligned_erms)
+    VARIANT(27, __memcpy_avx512_unaligned)
+    VARIANT(28, __memcpy_avx512_unaligned_erms)
+    VARIANT(29, __memcpy_avx512_no_vzeroupper)
+
+    return 0;
+}
+
+uint64_t dispatchVariants(
+    size_t memcpy_variant, size_t generator_variant, uint8_t * dst, uint8_t * src, size_t size, size_t iterations, size_t num_threads)
+{
+    if (generator_variant == 1)
+        return dispatchMemcpyVariants(memcpy_variant, dst, src, size, iterations, num_threads, generatorUniform<16>);
+    if (generator_variant == 2)
+        return dispatchMemcpyVariants(memcpy_variant, dst, src, size, iterations, num_threads, generatorUniform<256>);
+    if (generator_variant == 3)
+        return dispatchMemcpyVariants(memcpy_variant, dst, src, size, iterations, num_threads, generatorUniform<4096>);
+    if (generator_variant == 4)
+        return dispatchMemcpyVariants(memcpy_variant, dst, src, size, iterations, num_threads, generatorUniform<65536>);
+    if (generator_variant == 5)
+        return dispatchMemcpyVariants(memcpy_variant, dst, src, size, iterations, num_threads, generatorUniform<1048576>);
+
+    return 0;
+}
+
+
+int main(int argc, char ** argv)
+{
+    boost::program_options::options_description desc("Allowed options");
+    desc.add_options()("help,h", "produce help message")
+        ("size", boost::program_options::value<size_t>()->default_value(1000000), "Bytes to copy on every iteration")
+        ("iterations", boost::program_options::value<size_t>(), "Number of iterations")
+        ("threads", boost::program_options::value<size_t>()->default_value(1), "Number of copying threads")
+        ("distribution", boost::program_options::value<size_t>()->default_value(4), "Distribution of chunk sizes to perform copy")
+        ("variant", boost::program_options::value<size_t>(), "Variant of memcpy implementation")
+        ("tsv", "Print result in tab-separated format")
+        ;
+
+    boost::program_options::variables_map options;
+    boost::program_options::store(boost::program_options::parse_command_line(argc, argv, desc), options);
+
+    if (options.count("help") || !options.count("variant"))
+    {
+        std::cout << R"(Usage:
+
+for size in 4096 16384 50000 65536 100000 1000000 10000000 100000000; do
+    for threads in 1 2 4 $(($(nproc) / 2)) $(nproc); do
+        for distribution in 1 2 3 4 5; do
+            for variant in {1..13} {21..29}; do
+                for i in {1..10}; do
+                    ./memcpy-bench --tsv --size $size --variant $variant --threads $threads --distribution $distribution;
+                done;
+            done;
+        done;
+    done;
+done | tee result.tsv
+
+clickhouse-local --structure '
+    name String,
+    size UInt64,
+    iterations UInt64,
+    threads UInt16,
+    generator UInt8,
+    memcpy UInt8,
+    elapsed UInt64
+' --query "
+    SELECT
+        size, name,
+        avg(1000 * elapsed / size / iterations) AS s,
+        count() AS c
+    FROM table
+    GROUP BY size, name
+    ORDER BY size ASC, s DESC
+" --output-format PrettyCompact < result.tsv
+
+)" << std::endl;
+        std::cout << desc << std::endl;
+        return 1;
+    }
+
+    size_t size = options["size"].as<size_t>();
+    size_t num_threads = options["threads"].as<size_t>();
+    size_t memcpy_variant = options["variant"].as<size_t>();
+    size_t generator_variant = options["distribution"].as<size_t>();
+
+    size_t iterations;
+    if (options.count("iterations"))
+    {
+        iterations = options["iterations"].as<size_t>();
+    }
+    else
+    {
+        iterations = 10000000000ULL / size;
+
+        if (generator_variant == 1)
+            iterations /= 10;
+    }
+
+    std::unique_ptr<uint8_t[]> src(new uint8_t[size]);
+    std::unique_ptr<uint8_t[]> dst(new uint8_t[size]);
+
+    /// Fill src with some pattern for validation.
+    for (size_t i = 0; i < size; ++i)
+        src[i] = i;
+
+    /// Fill dst to avoid page faults.
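+    /// (Touching every byte of dst once before the measurement keeps first-touch
+    /// page faults out of the timed copy loops.)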
+ memset(dst.get(), 0, size); + + uint64_t elapsed_ns = dispatchVariants(memcpy_variant, generator_variant, dst.get(), src.get(), size, iterations, num_threads); + + std::cout << std::fixed << std::setprecision(3); + + if (options.count("tsv")) + { + std::cout + << '\t' << size + << '\t' << iterations + << '\t' << num_threads + << '\t' << generator_variant + << '\t' << memcpy_variant + << '\t' << elapsed_ns + << '\n'; + } + else + { + std::cout << ": " << num_threads << " threads, " << "size: " << size << ", distribution " << generator_variant + << ", processed in " << (elapsed_ns / 1e9) << " sec, " << (size * iterations * 1.0 / elapsed_ns) << " GB/sec\n"; + } + + return 0; +} diff --git a/utils/memcpy-bench/memcpy_jart.S b/utils/memcpy-bench/memcpy_jart.S new file mode 100644 index 00000000000..50430d0abe0 --- /dev/null +++ b/utils/memcpy-bench/memcpy_jart.S @@ -0,0 +1,138 @@ +/*-*- mode:unix-assembly; indent-tabs-mode:t; tab-width:8; coding:utf-8 -*-│ +│vi: set et ft=asm ts=8 tw=8 fenc=utf-8 :vi│ +╞══════════════════════════════════════════════════════════════════════════════╡ +│ Copyright 2020 Justine Alexandra Roberts Tunney │ +│ │ +│ Permission to use, copy, modify, and/or distribute this software for │ +│ any purpose with or without fee is hereby granted, provided that the │ +│ above copyright notice and this permission notice appear in all copies. │ +│ │ +│ THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL │ +│ WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED │ +│ WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE │ +│ AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL │ +│ DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR │ +│ PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER │ +│ TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR │ +│ PERFORMANCE OF THIS SOFTWARE. │ +╚─────────────────────────────────────────────────────────────────────────────*/ + +// Copies memory. +// +// DEST and SRC must not overlap, unless DEST≤SRC. +// +// @param rdi is dest +// @param rsi is src +// @param rdx is number of bytes +// @return original rdi copied to rax +// @mode long +// @asyncsignalsafe +memcpy_jart: mov %rdi,%rax +// 𝑠𝑙𝑖𝑑𝑒 + .align 16 + .type memcpy_jart,@function + .size memcpy_jart,.-memcpy_jart + .globl memcpy_jart + +// Copies memory w/ minimal impact ABI. 
+// +// @param rdi is dest +// @param rsi is src +// @param rdx is number of bytes +// @clob flags,rcx,xmm3,xmm4 +// @mode long +MemCpy: mov $.Lmemcpytab.size,%ecx + cmp %rcx,%rdx + cmovb %rdx,%rcx + jmp *memcpytab(,%rcx,8) +.Lanchorpoint: +.L16r: cmp $1024,%rdx + jae .Lerms +.L16: movdqu -16(%rsi,%rdx),%xmm4 + mov $16,%rcx +0: add $16,%rcx + movdqu -32(%rsi,%rcx),%xmm3 + movdqu %xmm3,-32(%rdi,%rcx) + cmp %rcx,%rdx + ja 0b + movdqu %xmm4,-16(%rdi,%rdx) + pxor %xmm4,%xmm4 + pxor %xmm3,%xmm3 + jmp .L0 +.L8: push %rbx + mov (%rsi),%rcx + mov -8(%rsi,%rdx),%rbx + mov %rcx,(%rdi) + mov %rbx,-8(%rdi,%rdx) +1: pop %rbx +.L0: ret +.L4: push %rbx + mov (%rsi),%ecx + mov -4(%rsi,%rdx),%ebx + mov %ecx,(%rdi) + mov %ebx,-4(%rdi,%rdx) + jmp 1b +.L3: push %rbx + mov (%rsi),%cx + mov -2(%rsi,%rdx),%bx + mov %cx,(%rdi) + mov %bx,-2(%rdi,%rdx) + jmp 1b +.L2: mov (%rsi),%cx + mov %cx,(%rdi) + jmp .L0 +.L1: mov (%rsi),%cl + mov %cl,(%rdi) + jmp .L0 +.Lerms: cmp $1024*1024,%rdx + ja .Lnts + push %rdi + push %rsi + mov %rdx,%rcx + rep movsb + pop %rsi + pop %rdi + jmp .L0 +.Lnts: movdqu (%rsi),%xmm3 + movdqu %xmm3,(%rdi) + lea 16(%rdi),%rcx + and $-16,%rcx + sub %rdi,%rcx + add %rcx,%rdi + add %rcx,%rsi + sub %rcx,%rdx + mov $16,%rcx +0: add $16,%rcx + movdqu -32(%rsi,%rcx),%xmm3 + movntdq %xmm3,-32(%rdi,%rcx) + cmp %rcx,%rdx + ja 0b + sfence + movdqu -16(%rsi,%rdx),%xmm3 + movdqu %xmm3,-16(%rdi,%rdx) + pxor %xmm3,%xmm3 + jmp .L0 + .type MemCpy,@function + .size MemCpy,.-MemCpy + .globl MemCpy + + .section .rodata + .align 8 +memcpytab: + .quad .L0 + .quad .L1 + .quad .L2 + .quad .L3 + .rept 4 + .quad .L4 + .endr + .rept 8 + .quad .L8 + .endr + .rept 16 + .quad .L16 + .endr + .equ .Lmemcpytab.size,(.-memcpytab)/8 + .quad .L16r # SSE + ERMS + NTS + .type memcpytab,@object + .previous diff --git a/utils/package/arch/CMakeLists.txt b/utils/package/arch/CMakeLists.txt index e77819f6d98..4ee754fec56 100644 --- a/utils/package/arch/CMakeLists.txt +++ b/utils/package/arch/CMakeLists.txt @@ -1,2 +1,2 @@ -include (${ClickHouse_SOURCE_DIR}/cmake/version.cmake) +include ("${ClickHouse_SOURCE_DIR}/cmake/version.cmake") configure_file (PKGBUILD.in PKGBUILD) diff --git a/utils/release/release_lib.sh b/utils/release/release_lib.sh index 896fa7f08a0..efa41ae22ad 100644 --- a/utils/release/release_lib.sh +++ b/utils/release/release_lib.sh @@ -91,9 +91,12 @@ function gen_revision_author { git_describe=`git describe` git_hash=`git rev-parse HEAD` + VERSION_DATE=`git show -s --format=%cs $git_hash` + sed -i -e "s/SET(VERSION_REVISION [^) ]*/SET(VERSION_REVISION $VERSION_REVISION/g;" \ -e "s/SET(VERSION_DESCRIBE [^) ]*/SET(VERSION_DESCRIBE $git_describe/g;" \ -e "s/SET(VERSION_GITHASH [^) ]*/SET(VERSION_GITHASH $git_hash/g;" \ + -e "s/SET(VERSION_DATE [^) ]*/SET(VERSION_DATE $VERSION_DATE/g;" \ -e "s/SET(VERSION_MAJOR [^) ]*/SET(VERSION_MAJOR $VERSION_MAJOR/g;" \ -e "s/SET(VERSION_MINOR [^) ]*/SET(VERSION_MINOR $VERSION_MINOR/g;" \ -e "s/SET(VERSION_PATCH [^) ]*/SET(VERSION_PATCH $VERSION_PATCH/g;" \ diff --git a/utils/simple-backport/changelog.sh b/utils/simple-backport/changelog.sh index d3d9714cb04..ca2dcfffff0 100755 --- a/utils/simple-backport/changelog.sh +++ b/utils/simple-backport/changelog.sh @@ -39,7 +39,7 @@ function github_download() local file=${2} if ! [ -f "$file" ] then - if ! curl -H "Authorization: token $GITHUB_TOKEN" \ + if ! 
curl -u "$GITHUB_USER:$GITHUB_TOKEN" \ -sSf "$url" \ > "$file" then diff --git a/utils/wikistat-loader/main.cpp b/utils/wikistat-loader/main.cpp index f2adcc43a3a..31ade014c74 100644 --- a/utils/wikistat-loader/main.cpp +++ b/utils/wikistat-loader/main.cpp @@ -151,7 +151,7 @@ try std::string time_str = options.at("time").as(); LocalDateTime time(time_str); - LocalDate date(time); + LocalDate date(time_str); DB::ReadBufferFromFileDescriptor in(STDIN_FILENO); DB::WriteBufferFromFileDescriptor out(STDOUT_FILENO); diff --git a/utils/zookeeper-cli/CMakeLists.txt b/utils/zookeeper-cli/CMakeLists.txt index 96c72744d33..2199a1b38ff 100644 --- a/utils/zookeeper-cli/CMakeLists.txt +++ b/utils/zookeeper-cli/CMakeLists.txt @@ -1,3 +1,2 @@ add_executable(clickhouse-zookeeper-cli zookeeper-cli.cpp) target_link_libraries(clickhouse-zookeeper-cli PRIVATE clickhouse_common_zookeeper) -INSTALL(TARGETS clickhouse-zookeeper-cli RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT clickhouse-utils) diff --git a/utils/zookeeper-dump-tree/main.cpp b/utils/zookeeper-dump-tree/main.cpp index 5ab7bc2d536..893056564bb 100644 --- a/utils/zookeeper-dump-tree/main.cpp +++ b/utils/zookeeper-dump-tree/main.cpp @@ -17,6 +17,7 @@ int main(int argc, char ** argv) "addresses of ZooKeeper instances, comma separated. Example: example01e.yandex.ru:2181") ("path,p", boost::program_options::value()->default_value("/"), "where to start") + ("ctime,c", "print node ctime") ; boost::program_options::variables_map options; @@ -30,6 +31,8 @@ int main(int argc, char ** argv) return 1; } + bool dump_ctime = options.count("ctime"); + zkutil::ZooKeeperPtr zookeeper = std::make_shared(options.at("address").as()); std::string initial_path = options.at("path").as(); @@ -79,7 +82,10 @@ int main(int argc, char ** argv) throw; } - std::cout << it->first << '\t' << response.stat.numChildren << '\t' << response.stat.dataLength << '\n'; + std::cout << it->first << '\t' << response.stat.numChildren << '\t' << response.stat.dataLength; + if (dump_ctime) + std::cout << '\t' << response.stat.ctime; + std::cout << '\n'; for (const auto & name : response.names) { diff --git a/utils/zookeeper-test/CMakeLists.txt b/utils/zookeeper-test/CMakeLists.txt index aa26c840ba3..56a1d3e380b 100644 --- a/utils/zookeeper-test/CMakeLists.txt +++ b/utils/zookeeper-test/CMakeLists.txt @@ -1,3 +1,2 @@ add_executable(zk-test main.cpp) target_link_libraries(zk-test PRIVATE clickhouse_common_zookeeper) -INSTALL(TARGETS zk-test RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT clickhouse-utils) diff --git a/website/README.md b/website/README.md index c4383bea24c..a09a00379d1 100644 --- a/website/README.md +++ b/website/README.md @@ -22,3 +22,9 @@ virtualenv build ``` ./build.py --skip-multi-page --skip-single-page --skip-amp --skip-pdf --skip-git-log --skip-docs --skip-test-templates --livereload 8080 ``` + +# How to quickly test the ugly annoying broken links in docs + +``` +./build.py --skip-multi-page --skip-amp --skip-pdf --skip-blog --skip-git-log --skip-test-templates --lang en --livereload 8080 +``` diff --git a/website/benchmark/dbms/queries.js b/website/benchmark/dbms/queries.js index c92353ab0f2..f3cf25f2c8d 100644 --- a/website/benchmark/dbms/queries.js +++ b/website/benchmark/dbms/queries.js @@ -1,4 +1,4 @@ -var current_data_size = 1000000000; +var current_data_size = 100000000; var current_systems = ["ClickHouse", "Vertica", "Greenplum"]; diff --git a/website/benchmark/hardware/index.html b/website/benchmark/hardware/index.html index 92da6328f0f..71d3333432e 
--- a/website/benchmark/hardware/index.html
+++ b/website/benchmark/hardware/index.html
@@ -75,6 +75,8 @@ Results for Raspberry Pi and Digital Ocean CPU-optimized are from Fritz Wijay
 Results for Digitalocean (Storage-intensive VMs) + (CPU/GP) are from Yiğit Konur and Metehan Çetinkaya of seo.do.
 Results for 2x AMD EPYC 7F72 3.2 GHz (Total 96 Cores, IBM Cloud's Bare Metal Service) from Yiğit Konur and Metehan Çetinkaya of seo.do.
Results for 2x AMD EPYC 7742 (128 physical cores, 1 TB DDR4-3200 RAM) from Yedige Davletgaliyev and Nikita Zhavoronkov of blockchair.com.
+Results for ASUS A15 (Ryzen laptop) are from Kimmo Linna.
+Results for MacBook Air M1 are from Denis Glazachev.

diff --git a/website/benchmark/hardware/results/amd_ryzen_9_3950x.json b/website/benchmark/hardware/results/amd_ryzen_9_3950x.json index 8760a235521..caa5a443e54 100644 --- a/website/benchmark/hardware/results/amd_ryzen_9_3950x.json +++ b/website/benchmark/hardware/results/amd_ryzen_9_3950x.json @@ -1,6 +1,6 @@ [ { - "system": "AMD Ryzen 9", + "system": "AMD Ryzen 9 (2020)", "system_full": "AMD Ryzen 9 3950X 16-Core Processor, 64 GiB RAM, Intel Optane 900P 280 GB", "time": "2020-03-14 00:00:00", "kind": "desktop", @@ -52,7 +52,7 @@ ] }, { - "system": "AMD Ryzen 9", + "system": "AMD Ryzen 9 (2021)", "system_full": "AMD Ryzen 9 3950X 16-Core Processor, 64 GiB RAM, Samsung evo 970 plus 1TB", "time": "2021-03-08 00:00:00", "kind": "desktop", diff --git a/website/benchmark/hardware/results/asus_a15.json b/website/benchmark/hardware/results/asus_a15.json new file mode 100644 index 00000000000..983dbde8681 --- /dev/null +++ b/website/benchmark/hardware/results/asus_a15.json @@ -0,0 +1,54 @@ +[ + { + "system": "Asus A15", + "system_full": "Asus A15 (16 × AMD Ryzen 7 4800H, 16 GiB RAM)", + "time": "2021-03-23 00:00:00", + "kind": "laptop", + "result": + [ +[0.004, 0.003, 0.003], +[0.019, 0.013, 0.012], +[0.053, 0.041, 0.037], +[0.106, 0.057, 0.056], +[0.158, 0.115, 0.110], +[0.324, 0.266, 0.262], +[0.027, 0.024, 0.026], +[0.017, 0.016, 0.017], +[0.644, 0.589, 0.582], +[0.733, 0.679, 0.679], +[0.233, 0.201, 0.197], +[0.276, 0.235, 0.236], +[1.025, 0.962, 0.962], +[1.342, 1.270, 1.264], +[1.170, 1.129, 1.124], +[1.375, 1.346, 1.351], +[3.271, 3.210, 3.242], +[1.960, 1.898, 1.907], +[5.997, 5.965, 5.983], +[0.106, 0.065, 0.055], +[1.264, 0.990, 0.989], +[1.555, 1.241, 1.239], +[3.798, 3.307, 3.280], +[1.949, 1.022, 0.995], +[0.393, 0.292, 0.292], +[0.307, 0.254, 0.255], +[0.378, 0.297, 0.290], +[1.632, 1.399, 1.386], +[2.111, 1.909, 1.900], +[3.349, 3.352, 3.357], +[0.892, 0.824, 0.816], +[1.505, 1.392, 1.378], +[9.105, 8.951, 8.914], +[5.195, 4.975, 4.919], +[5.150, 5.021, 4.955], +[1.756, 1.743, 1.749], +[0.161, 0.154, 0.158], +[0.108, 0.058, 0.055], +[0.101, 0.102, 0.052], +[0.365, 0.309, 0.334], +[0.050, 0.023, 0.023], +[0.037, 0.019, 0.015], +[0.023, 0.013, 0.018] + ] + } +] diff --git a/website/benchmark/hardware/results/macbook_air_m1.json b/website/benchmark/hardware/results/macbook_air_m1.json new file mode 100644 index 00000000000..33f15d02480 --- /dev/null +++ b/website/benchmark/hardware/results/macbook_air_m1.json @@ -0,0 +1,54 @@ +[ + { + "system": "MacBook Air M1", + "system_full": "MacBook Air M1 13\" 2020, 8‑core CPU, 16 GiB RAM, 512 GB SSD", + "time": "2021-04-11 00:00:00", + "kind": "laptop", + "result": + [ +[0.003, 0.001, 0.001], +[0.019, 0.014, 0.014], +[0.042, 0.034, 0.033], +[0.101, 0.043, 0.041], +[0.100, 0.102, 0.101], +[0.394, 0.283, 0.289], +[0.029, 0.027, 0.027], +[0.018, 0.018, 0.018], +[0.511, 0.489, 0.494], +[0.620, 0.615, 0.618], +[0.217, 0.200, 0.197], +[0.237, 0.235, 0.242], +[0.774, 0.762, 0.761], +[0.969, 0.982, 0.969], +[0.896, 0.887, 0.861], +[0.999, 0.943, 0.945], +[3.343, 2.426, 2.366], +[1.463, 1.414, 1.382], +[4.958, 4.268, 4.257], +[0.056, 0.050, 0.049], +[1.696, 0.851, 0.846], +[1.036, 1.104, 1.174], +[4.326, 2.224, 2.255], +[1.397, 1.038, 1.055], +[0.317, 0.310, 0.305], +[0.274, 0.284, 0.269], +[0.317, 0.316, 0.313], +[0.943, 0.952, 0.951], +[2.794, 1.427, 1.433], +[1.606, 1.600, 1.605], +[0.751, 0.691, 0.679], +[1.532, 1.000, 0.952], +[9.679, 8.895, 7.967], +[7.001, 4.472, 4.050], +[4.790, 3.971, 3.987], +[1.215, 1.204, 1.256], +[0.129, 0.125, 0.119], 
+[0.057, 0.061, 0.056], +[0.045, 0.043, 0.043], +[0.256, 0.247, 0.249], +[0.020, 0.014, 0.013], +[0.013, 0.011, 0.012], +[0.009, 0.009, 0.009] + ] + } +] diff --git a/website/benchmark/hardware/results/qemu_aarch64_cascade_lake_80_vcpu.json b/website/benchmark/hardware/results/qemu_aarch64_cascade_lake_80_vcpu.json new file mode 100644 index 00000000000..ed25794c77b --- /dev/null +++ b/website/benchmark/hardware/results/qemu_aarch64_cascade_lake_80_vcpu.json @@ -0,0 +1,55 @@ +[ + { + "system": "Intel 80vCPU, QEMU, AArch64", + "system_full": "Intel Cascade Lake 80vCPU running AArch64 ClickHouse under qemu-aarch64 version 4.2.1 (userspace emulation)", + "cpu_vendor": "Intel", + "time": "2021-04-05 00:00:00", + "kind": "cloud", + "result": + [ +[0.045, 0.006, 0.006], +[0.366, 0.201, 0.576], +[0.314, 0.144, 0.152], +[0.701, 0.111, 0.110], +[0.308, 0.259, 0.261], +[1.009, 0.642, 0.658], +[0.160, 0.087, 0.086], +[0.123, 0.079, 0.080], +[0.570, 0.458, 0.454], +[0.708, 0.540, 0.547], +[0.541, 0.460, 0.464], +[0.578, 0.524, 0.531], +[0.927, 0.908, 0.906], +[1.075, 0.992, 1.051], +[1.055, 0.965, 0.991], +[0.904, 0.790, 0.781], +[2.076, 2.134, 2.121], +[1.668, 1.648, 1.615], +[4.134, 3.879, 4.002], +[0.142, 0.103, 0.105], +[7.018, 1.479, 1.515], +[1.618, 1.643, 1.680], +[6.516, 3.172, 3.182], +[6.028, 2.070, 2.076], +[0.608, 0.559, 0.577], +[0.548, 0.515, 0.516], +[0.598, 0.564, 0.563], +[1.562, 1.529, 1.537], +[5.968, 2.311, 2.375], +[3.263, 3.239, 3.279], +[1.134, 0.903, 0.928], +[2.987, 1.270, 1.284], +[6.256, 5.665, 5.320], +[3.020, 3.148, 3.109], +[3.092, 3.131, 3.146], +[1.183, 1.140, 1.185], +[0.762, 0.704, 0.715], +[0.412, 0.380, 0.385], +[0.376, 0.330, 0.327], +[1.505, 1.532, 1.503], +[0.201, 0.133, 0.130], +[0.173, 0.123, 0.150], +[0.070, 0.028, 0.028] + ] + } +] diff --git a/website/blog/en/2021/code-review.md b/website/blog/en/2021/code-review.md new file mode 100644 index 00000000000..dcde371629b --- /dev/null +++ b/website/blog/en/2021/code-review.md @@ -0,0 +1,83 @@ +--- +title: 'The Tests Are Passing, Why Would I Read The Diff Again?' +image: 'https://blog-images.clickhouse.tech/en/2021/code-review/two-ducks.jpg' +date: '2021-04-14' +author: '[Alexander Kuzmenkov](https://github.com/akuzm)' +tags: ['code review', 'development'] +--- + + +Code review is one of the few software development techniques that are consistently found to reduce the incidence of defects. Why is it effective? This article offers some wild conjecture on this topic, complete with practical advice on getting the most out of your code review. + + +## Understanding Why Your Program Works + +As software developers, we routinely have to reason about the behaviour of software. For example, to fix a bug, we start with a test case that exhibits the behavior in question, and then read the source code to see how this behavior arises. Often we find ourselves unable to understand anything, having to resort to forensic techniques such as using a debugger or interrogating the author of the code. This situation is far from ideal. After all, if we have trouble understanding our software, how can we be sure it works at all? No surprise that it doesn't. + +The correct understanding is also important when modifying and extending software. A programmer must always have a precise mental model on what is going on in the program, how exactly it maps to the domain, and so on. If there are flaws in this model, the code they write won't match the domain and won't solve the problem correctly. Wrong understanding directly causes bugs. 
+
+How can we make our software easier to understand? It is often said that to see if you really understand something, you have to try explaining it to somebody. For example, as a science student taking an exam, you might be expected to give an explanation of some well-known observed effect, deriving it from the basic laws of this domain. In a similar way, if we are modeling some problem in software, we can start from domain knowledge and general programming knowledge, and build an argument as to why our model is applicable to the problem, why it is correct, has optimal performance and so on. This explanation takes the form of code comments, or, at a higher level, design documents.
+
+If you have a habit of thoroughly commenting your code, you might have noticed that writing the comments is often much harder than writing the code itself. It also has an unpleasant side effect — at times, while writing a comment, it becomes increasingly clear to you that the code is incomprehensible and takes forever to explain, or maybe is downright wrong, and you have to rewrite it. This is exactly the major positive effect of writing the comments. It helps you find bugs and make the code more understandable, and you wouldn't have noticed these problems had you not tried to explain the code.
+
+Understanding why your program works is inseparable from understanding why it fails, so it's no surprise that there is a similar process for the latter, called "rubber duck debugging". To debug a particularly nasty bug, you start explaining the program logic step by step to an imaginary partner or even to an inanimate object such as a yellow rubber duck. This process is often very effective, much in excess of what one would expect given the limited conversational abilities of rubber ducks. The underlying mechanism is probably the same as with comments — you start to understand your program better by just trying to explain it, and this lets you find bugs.
+
+When working in a team, you even have the luxury of explaining your code to another developer who works on the same project. It's probably more entertaining than talking to a duck. More importantly, they are going to maintain the code you wrote, so you'd better make sure that _they_ can understand it as well. A good formal occasion for explaining how your code works is the code review process. Let's see how you can get the most out of it, in terms of making your code understandable.
+
+## Reviewing Others' Code
+
+Code review is often framed as a gatekeeping process, where each contribution is vetted by maintainers to ensure that it is in line with project direction, has acceptable quality, meets the coding guidelines and so on. This perspective might seem natural when dealing with external contributions, but it makes less sense when applied to internal ones. After all, our fellow maintainers have a perfect understanding of project goals and guidelines, they are probably more talented and experienced than us, and can be trusted to produce the best solution possible. How can an additional review be helpful?
+
+A less obvious, but very important, part of reviewing the code is just seeing whether it can be understood by another person. It is helpful regardless of the administrative roles and programming proficiency of the parties. What should you do as a reviewer if ease of understanding is your main priority?
+
+You probably don't need to be concerned with trivia such as code style. There are automated tools for that.
You might find some bugs, but this is probably a side effect. Your main task is making sense of the code.
+
+Start by checking the high-level description of the problem that the pull request is trying to solve. Read the description of the bug it fixes, or the docs for the feature it adds. For bigger features, there is normally a design document that describes the overall implementation without getting too deep into the code details. After you understand the problem, start reading the code. Does it make sense to you? You shouldn't try too hard to understand it. Imagine that you are tired and under time pressure. If you feel you have to make a lot of effort to understand the code, ask the author for clarifications. As you talk, you might discover that the code is not correct, that it may be rewritten in a more straightforward way, or that it needs more comments.
+
+
+
+After you get the answers, don't forget to update the code and the comments to reflect them. Don't just stop after getting it explained to you personally. If you had a question as a reviewer, chances are that other people will also have this question later, but there might be nobody around to ask. They will have to resort to `git blame` and re-reading the entire pull request, or several of them. Code archaeology is sometimes fun, but it's the last thing you want to do when you are investigating an urgent bug. All the answers should be on the surface.
+
+Working with the author, you should ensure that the code is mostly obvious to anyone with basic domain and programming knowledge, and that all non-obvious parts are clearly explained.
+
+### Preparing Your Code For Review
+
+As an author, you can also do some things to make your code easier to understand for the reviewer.
+
+First of all, if you are implementing a major feature, it probably needs a round of design review before you even start writing code. Skipping a design review and jumping right into the code review can be a major source of frustration, because it might turn out that even the problem you are solving was formulated incorrectly, and all your work has to be thrown away. Of course, this is not prevented completely by design review, either. Programming is an iterative, exploratory activity, and in complex cases you only begin to grasp the problem after implementing a first solution, which you then realize is incorrect and has to be thrown away.
+
+When preparing your code for review, your major objective is to make your problem and its solution clear to the reviewer. A good tool for this is code comments. Any sizable piece of logic should have an introductory comment describing its general purpose and outlining the implementation. This description can reference similar features, explain the differences from them, and explain how it interfaces with other subsystems. A good place to put this general description is a function that serves as the main entry point for the feature, or another form of its public interface, or the most significant class, or the file containing the implementation, and so on.
+
+Drilling down to each block of code, you should be able to explain what it does, why it does that, and why this way and not another. If there are several ways of doing the thing, why did you choose this one? Of course, for some code these things follow from the more general comments and don't have to be restated. The mechanics of data manipulation should be apparent from the code itself. If you find yourself explaining a particular feature of the language, it's probably best not to use it.
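+
+To make this concrete, here is a small illustrative sketch (the names are invented for this post, they don't come from real ClickHouse code). The comment records _why_ the code is shaped the way it is, and an assertion mirrors the invariant it states:
+
+```
+#include <algorithm>
+#include <cassert>
+#include <vector>
+
+size_t findInTail(const std::vector<int> & sorted, size_t tail_begin, int key)
+{
+    /// We search only the tail of the array: the caller has already ruled out
+    /// the prefix, and keeping the search window small is the whole point of
+    /// this fast path.
+    /// Invariant: tail_begin never exceeds the array size (asserted below).
+    assert(tail_begin <= sorted.size());
+    return std::lower_bound(sorted.begin() + tail_begin, sorted.end(), key) - sorted.begin();
+}
+```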
+
+Pay special attention to making the data structures apparent in the code, with their meaning and invariants well commented. The choice of data structures ultimately determines which algorithms you can apply, and sets the limits of performance, which is another reason why we should care about it as ClickHouse developers.
+
+When explaining the code, it is important to give your reader enough context, so that they can understand you without a deep investigation of the surrounding systems and obscure test cases. Give pointers to all the things that might be relevant to the task. If you know some corner cases which your code has to handle, describe them in enough detail so that they can be reproduced. If there is a relevant standard or a design document, reference it, or even quote it inline. If you're relying on some invariant in another system, mention it. It is good practice to add programmatic checks that mirror your comments, when it is easy to do so. Your comment about an invariant should be accompanied by an assertion, and an important scenario should be reproduced by a test case.
+
+Don't worry about being too verbose. There are often too few comments, but almost never too many.
+
+## Common Concerns about Code Comments
+
+It is common to hear objections to the idea of commenting the code, so let's discuss a couple of common ones.
+
+### Self-documenting Code
+
+You can often see the perplexing idea that source code can somehow be "self-documenting", or that comments are a "code smell" whose presence indicates that the code is badly written. I have trouble imagining how this belief can be compatible with any experience of maintaining sufficiently complex and large software, over the years, in collaboration with others. The code and the comments describe different parts of the solution. The code describes the data structures and their transformations, but it cannot convey meaning. The names in the code serve as pointers that map the data and its transformations to the domain concepts, but they are schematic and lack nuance. It is not so difficult to write code that makes it easy to understand what's going on in terms of data manipulation. What it takes is mostly moderation, that is, stopping yourself from being too clever. For most code, it is easy to see what it does, but why? Why this way and not that way? Why is it correct? Why does this fast path help? Why did you choose this data layout? How is this invariant guaranteed? And so on. This might not be so evident for a developer who is working alone on a short-lived project, because they have all the necessary context in their head. But when they have to work with other people (or even with their past and future selves), or in an unfamiliar area, the importance of non-code, higher-level context becomes painfully clear. The idea that we should, or even can, somehow encode comments such as [this one](https://github.com/ClickHouse/ClickHouse/blob/26d5db32ae5c9f54b8825e2eca1f077a3b17c84a/src/Storages/MergeTree/KeyCondition.cpp#L1312-L1347) into names or control flow is just absurd.
+
+### Obsolete Comments
+
+Comments can't be checked by the compiler or the tests, so there is no automated way to make sure that they are up to date with the rest of the comments and the code. The possibility of comments gradually becoming incorrect is sometimes used as an argument against having any comments at all.
+
+This problem is not exclusive to comments — the code also can and does become obsolete.
Simple cases, such as dead code, can be detected by static analysis or by studying test coverage. More complex cases, such as maintaining an invariant that is no longer important, or preparing data that is no longer needed, can only be found by proofreading. + +While an obsolete comment can lead to a mistake, the same applies, perhaps more strongly, to the lack of comments. When you need some higher-level knowledge about the code, but it is not written down, you are forced to perform an entire investigation from first principles to understand what's going on, and this is error-prone. Even an obsolete comment likely gives a better starting point than nothing. Moreover, in a code base that makes active use of comments, they tend to be mostly correct. This is because the developers rely on comments, read and write them, and pay attention to them during code review. Comments are routinely changed along with the code, and outdated comments are soon noticed and fixed. This does require some habit. A lone comment in a vast desert of impenetrable self-documenting code is not going to fare well. + + +## Conclusion + +Code review makes your software better, and a significant part of this probably comes from trying to understand what your software actually does. By paying attention specifically to this aspect of code review, you can make it even more efficient. You'll have fewer bugs, and your code will be easier to maintain — and what else could we ask for as software developers? + + +_2021-04-13 [Alexander Kuzmenkov](https://github.com/akuzm). Title photo by [Nikita Mikhaylov](https://github.com/nikitamikhaylov)_ + +_P.S. This text contains the personal opinions of the author, and is not an authoritative manual for ClickHouse maintainers._ diff --git a/website/js/base.js b/website/js/base.js index 6cec8313bd4..aca6f407d24 100644 --- a/website/js/base.js +++ b/website/js/base.js @@ -16,23 +16,6 @@ if (target_id && target_id.startsWith('logo-')) { selector = '#'; } - if (selector && selector.startsWith('#') && !is_tab && !is_collapse && !is_rating) { - event.preventDefault(); - var dst = window.location.href.replace(window.location.hash, ''); - var offset = 0; - - if (selector !== '#') { - var destination = $(selector); - if (destination.length) { - offset = destination.offset().top - $('#top-nav').height() * 1.5; - dst += selector; - } - } - $('html, body').animate({ - scrollTop: offset - }, 500); - window.history.replaceState('', document.title, dst); - } }); var top_nav = $('#top-nav.sticky-top'); diff --git a/website/locale/es/LC_MESSAGES/messages.mo b/website/locale/es/LC_MESSAGES/messages.mo deleted file mode 100644 index 888d7a76c4e..00000000000 Binary files a/website/locale/es/LC_MESSAGES/messages.mo and /dev/null differ diff --git a/website/locale/es/LC_MESSAGES/messages.po b/website/locale/es/LC_MESSAGES/messages.po deleted file mode 100644 index 531a0001c52..00000000000 --- a/website/locale/es/LC_MESSAGES/messages.po +++ /dev/null @@ -1,326 +0,0 @@ -# Translations template for PROJECT. -# Copyright (C) 2020 ORGANIZATION -# This file is distributed under the same license as the PROJECT project. -# Automatically generated, 2020.
-# -msgid "" -msgstr "" -"Project-Id-Version: PROJECT VERSION\n" -"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n" -"POT-Creation-Date: 2020-06-17 12:20+0300\n" -"PO-Revision-Date: 2020-06-17 12:20+0300\n" -"Last-Translator: Automatically generated\n" -"Language-Team: none\n" -"MIME-Version: 1.0\n" -"Content-Type: text/plain; charset=UTF-8\n" -"Content-Transfer-Encoding: 8bit\n" -"Generated-By: Babel 2.8.0\n" -"Language: es\n" -"Plural-Forms: nplurals=2; plural=(n != 1);\n" - -#: templates/common_meta.html:1 -msgid "" -"ClickHouse is a fast open-source column-oriented database management system " -"that allows generating analytical data reports in real-time using SQL queries" -msgstr "" - -#: templates/common_meta.html:6 -msgid "ClickHouse - fast open-source OLAP DBMS" -msgstr "" - -#: templates/common_meta.html:10 -msgid "ClickHouse DBMS" -msgstr "" - -#: templates/common_meta.html:32 -msgid "open-source" -msgstr "" - -#: templates/common_meta.html:32 -msgid "relational" -msgstr "" - -#: templates/common_meta.html:32 -msgid "analytics" -msgstr "" - -#: templates/common_meta.html:32 -msgid "analytical" -msgstr "" - -#: templates/common_meta.html:32 -msgid "Big Data" -msgstr "" - -#: templates/common_meta.html:32 -msgid "web-analytics" -msgstr "" - -#: templates/footer.html:8 -msgid "ClickHouse source code is published under the Apache 2.0 License." -msgstr "" - -#: templates/footer.html:8 -msgid "" -"Software is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR " -"CONDITIONS OF ANY KIND, either express or implied." -msgstr "" - -#: templates/footer.html:11 -msgid "Yandex LLC" -msgstr "" - -#: templates/blog/content.html:20 templates/blog/content.html:25 -#: templates/blog/content.html:30 -msgid "Share on" -msgstr "" - -#: templates/blog/content.html:37 -msgid "Published date" -msgstr "" - -#: templates/blog/nav.html:20 -msgid "New post" -msgstr "" - -#: templates/blog/nav.html:25 -msgid "Documentation" -msgstr "" - -#: templates/docs/footer.html:3 -msgid "Rating" -msgstr "" - -#: templates/docs/footer.html:3 -msgid "votes" -msgstr "" - -#: templates/docs/footer.html:4 -msgid "Article Rating" -msgstr "" - -#: templates/docs/footer.html:4 -msgid "Was this content helpful?" -msgstr "" - -#: templates/docs/footer.html:7 -msgid "Unusable" -msgstr "" - -#: templates/docs/footer.html:7 -msgid "Poor" -msgstr "" - -#: templates/docs/footer.html:7 -msgid "Good" -msgstr "" - -#: templates/docs/footer.html:7 -msgid "Excellent" -msgstr "" - -#: templates/docs/footer.html:8 -msgid "documentation" -msgstr "" - -#: templates/docs/footer.html:15 -msgid "Built from" -msgstr "" - -#: templates/docs/footer.html:15 -msgid "published on" -msgstr "" - -#: templates/docs/footer.html:15 -msgid "modified on" -msgstr "" - -#: templates/docs/machine-translated.html:3 -msgid "Help wanted!" -msgstr "" - -#: templates/docs/machine-translated.html:4 -msgid "" -"The following content of this documentation page has been machine-" -"translated. But unlike other websites, it is not done on the fly. This " -"translated text lives on GitHub repository alongside main ClickHouse " -"codebase and waits for fellow native speakers to make it more human-readable." -msgstr "" - -#: templates/docs/machine-translated.html:4 -msgid "You can also use the original English version as a reference." 
-msgstr "" - -#: templates/docs/machine-translated.html:7 -msgid "Help ClickHouse documentation by editing this page" -msgstr "" - -#: templates/docs/sidebar.html:3 -msgid "Multi-page or single-page" -msgstr "" - -#: templates/docs/sidebar.html:5 -msgid "Multi-page version" -msgstr "" - -#: templates/docs/sidebar.html:8 -msgid "Single-page version" -msgstr "" - -#: templates/docs/sidebar.html:13 -msgid "Version" -msgstr "" - -#: templates/docs/sidebar.html:13 templates/docs/sidebar.html:19 -msgid "latest" -msgstr "" - -#: templates/docs/sidebar.html:36 -msgid "PDF version" -msgstr "" - -#: templates/docs/toc.html:8 -msgid "Table of Contents" -msgstr "" - -#: templates/index/community.html:4 -msgid "ClickHouse community" -msgstr "" - -#: templates/index/community.html:13 templates/index/community.html:14 -msgid "ClickHouse YouTube Channel" -msgstr "" - -#: templates/index/community.html:25 templates/index/community.html:26 -msgid "ClickHouse Official Twitter Account" -msgstr "" - -#: templates/index/community.html:36 templates/index/community.html:37 -msgid "ClickHouse at Telegram" -msgstr "" - -#: templates/index/community.html:41 -msgid "Chat with real users in " -msgstr "" - -#: templates/index/community.html:44 templates/index/community.html:116 -msgid "English" -msgstr "" - -#: templates/index/community.html:45 -msgid "or in" -msgstr "" - -#: templates/index/community.html:47 templates/index/community.html:117 -msgid "Russian" -msgstr "" - -#: templates/index/community.html:65 -msgid "Open GitHub issue to ask for help or to file a feature request" -msgstr "" - -#: templates/index/community.html:76 templates/index/community.html:77 -msgid "ClickHouse Slack Workspace" -msgstr "" - -#: templates/index/community.html:82 -msgid "Multipurpose public hangout" -msgstr "" - -#: templates/index/community.html:101 -msgid "Ask any questions" -msgstr "" - -#: templates/index/community.html:115 -msgid "ClickHouse Blog" -msgstr "" - -#: templates/index/community.html:116 -msgid "in" -msgstr "" - -#: templates/index/community.html:128 templates/index/community.html:129 -msgid "ClickHouse at Google Groups" -msgstr "" - -#: templates/index/community.html:133 -msgid "Email discussions" -msgstr "" - -#: templates/index/community.html:142 -msgid "Like ClickHouse?" -msgstr "" - -#: templates/index/community.html:143 -msgid "Help to spread the word about it via" -msgstr "" - -#: templates/index/community.html:144 -msgid "and" -msgstr "" - -#: templates/index/community.html:153 -msgid "Hosting ClickHouse Meetups" -msgstr "" - -#: templates/index/community.html:157 -msgid "" -"ClickHouse meetups are essential for strengthening community worldwide, but " -"they couldn't be possible without the help of local organizers. Please, fill " -"this form if you want to become one or want to meet ClickHouse core team for " -"any other reason." 
-msgstr "" - -#: templates/index/community.html:159 -msgid "ClickHouse Meetup" -msgstr "" - -#: templates/index/community.html:165 -msgid "Name" -msgstr "" - -#: templates/index/community.html:168 -msgid "Email" -msgstr "" - -#: templates/index/community.html:171 -msgid "Company" -msgstr "" - -#: templates/index/community.html:174 -msgid "City" -msgstr "" - -#: templates/index/community.html:179 -msgid "We'd like to host a public ClickHouse Meetup" -msgstr "" - -#: templates/index/community.html:185 -msgid "We'd like to invite Yandex ClickHouse team to our office" -msgstr "" - -#: templates/index/community.html:191 -msgid "We'd like to invite Yandex ClickHouse team to another event we organize" -msgstr "" - -#: templates/index/community.html:197 -msgid "We're interested in commercial consulting, support or managed service" -msgstr "" - -#: templates/index/community.html:201 -msgid "Additional comments" -msgstr "" - -#: templates/index/community.html:203 -msgid "Send" -msgstr "" - -#: templates/index/community.html:212 -msgid "" -"If you have any more thoughts or questions, feel free to contact Yandex " -"ClickHouse team directly at" -msgstr "" - -#: templates/index/community.html:213 -msgid "turn on JavaScript to see email address" -msgstr "" diff --git a/website/locale/fa/LC_MESSAGES/messages.mo b/website/locale/fa/LC_MESSAGES/messages.mo deleted file mode 100644 index 89c73f3fea4..00000000000 Binary files a/website/locale/fa/LC_MESSAGES/messages.mo and /dev/null differ diff --git a/website/locale/fa/LC_MESSAGES/messages.po b/website/locale/fa/LC_MESSAGES/messages.po deleted file mode 100644 index 565684ac0de..00000000000 --- a/website/locale/fa/LC_MESSAGES/messages.po +++ /dev/null @@ -1,325 +0,0 @@ -# Translations template for PROJECT. -# Copyright (C) 2020 ORGANIZATION -# This file is distributed under the same license as the PROJECT project. -# Automatically generated, 2020. -# -msgid "" -msgstr "" -"Project-Id-Version: PROJECT VERSION\n" -"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n" -"POT-Creation-Date: 2020-06-17 12:20+0300\n" -"PO-Revision-Date: 2020-06-17 12:20+0300\n" -"Last-Translator: Automatically generated\n" -"Language-Team: none\n" -"MIME-Version: 1.0\n" -"Content-Type: text/plain; charset=UTF-8\n" -"Content-Transfer-Encoding: 8bit\n" -"Generated-By: Babel 2.8.0\n" -"Language: fa\n" - -#: templates/common_meta.html:1 -msgid "" -"ClickHouse is a fast open-source column-oriented database management system " -"that allows generating analytical data reports in real-time using SQL queries" -msgstr "" - -#: templates/common_meta.html:6 -msgid "ClickHouse - fast open-source OLAP DBMS" -msgstr "" - -#: templates/common_meta.html:10 -msgid "ClickHouse DBMS" -msgstr "" - -#: templates/common_meta.html:32 -msgid "open-source" -msgstr "" - -#: templates/common_meta.html:32 -msgid "relational" -msgstr "" - -#: templates/common_meta.html:32 -msgid "analytics" -msgstr "" - -#: templates/common_meta.html:32 -msgid "analytical" -msgstr "" - -#: templates/common_meta.html:32 -msgid "Big Data" -msgstr "" - -#: templates/common_meta.html:32 -msgid "web-analytics" -msgstr "" - -#: templates/footer.html:8 -msgid "ClickHouse source code is published under the Apache 2.0 License." -msgstr "" - -#: templates/footer.html:8 -msgid "" -"Software is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR " -"CONDITIONS OF ANY KIND, either express or implied." 
-msgstr "" - -#: templates/footer.html:11 -msgid "Yandex LLC" -msgstr "" - -#: templates/blog/content.html:20 templates/blog/content.html:25 -#: templates/blog/content.html:30 -msgid "Share on" -msgstr "" - -#: templates/blog/content.html:37 -msgid "Published date" -msgstr "" - -#: templates/blog/nav.html:20 -msgid "New post" -msgstr "" - -#: templates/blog/nav.html:25 -msgid "Documentation" -msgstr "" - -#: templates/docs/footer.html:3 -msgid "Rating" -msgstr "" - -#: templates/docs/footer.html:3 -msgid "votes" -msgstr "" - -#: templates/docs/footer.html:4 -msgid "Article Rating" -msgstr "" - -#: templates/docs/footer.html:4 -msgid "Was this content helpful?" -msgstr "" - -#: templates/docs/footer.html:7 -msgid "Unusable" -msgstr "" - -#: templates/docs/footer.html:7 -msgid "Poor" -msgstr "" - -#: templates/docs/footer.html:7 -msgid "Good" -msgstr "" - -#: templates/docs/footer.html:7 -msgid "Excellent" -msgstr "" - -#: templates/docs/footer.html:8 -msgid "documentation" -msgstr "" - -#: templates/docs/footer.html:15 -msgid "Built from" -msgstr "" - -#: templates/docs/footer.html:15 -msgid "published on" -msgstr "" - -#: templates/docs/footer.html:15 -msgid "modified on" -msgstr "" - -#: templates/docs/machine-translated.html:3 -msgid "Help wanted!" -msgstr "" - -#: templates/docs/machine-translated.html:4 -msgid "" -"The following content of this documentation page has been machine-" -"translated. But unlike other websites, it is not done on the fly. This " -"translated text lives on GitHub repository alongside main ClickHouse " -"codebase and waits for fellow native speakers to make it more human-readable." -msgstr "" - -#: templates/docs/machine-translated.html:4 -msgid "You can also use the original English version as a reference." -msgstr "" - -#: templates/docs/machine-translated.html:7 -msgid "Help ClickHouse documentation by editing this page" -msgstr "" - -#: templates/docs/sidebar.html:3 -msgid "Multi-page or single-page" -msgstr "" - -#: templates/docs/sidebar.html:5 -msgid "Multi-page version" -msgstr "" - -#: templates/docs/sidebar.html:8 -msgid "Single-page version" -msgstr "" - -#: templates/docs/sidebar.html:13 -msgid "Version" -msgstr "" - -#: templates/docs/sidebar.html:13 templates/docs/sidebar.html:19 -msgid "latest" -msgstr "" - -#: templates/docs/sidebar.html:36 -msgid "PDF version" -msgstr "" - -#: templates/docs/toc.html:8 -msgid "Table of Contents" -msgstr "" - -#: templates/index/community.html:4 -msgid "ClickHouse community" -msgstr "" - -#: templates/index/community.html:13 templates/index/community.html:14 -msgid "ClickHouse YouTube Channel" -msgstr "" - -#: templates/index/community.html:25 templates/index/community.html:26 -msgid "ClickHouse Official Twitter Account" -msgstr "" - -#: templates/index/community.html:36 templates/index/community.html:37 -msgid "ClickHouse at Telegram" -msgstr "" - -#: templates/index/community.html:41 -msgid "Chat with real users in " -msgstr "" - -#: templates/index/community.html:44 templates/index/community.html:116 -msgid "English" -msgstr "" - -#: templates/index/community.html:45 -msgid "or in" -msgstr "" - -#: templates/index/community.html:47 templates/index/community.html:117 -msgid "Russian" -msgstr "" - -#: templates/index/community.html:65 -msgid "Open GitHub issue to ask for help or to file a feature request" -msgstr "" - -#: templates/index/community.html:76 templates/index/community.html:77 -msgid "ClickHouse Slack Workspace" -msgstr "" - -#: templates/index/community.html:82 -msgid "Multipurpose public hangout" 
-msgstr "" - -#: templates/index/community.html:101 -msgid "Ask any questions" -msgstr "" - -#: templates/index/community.html:115 -msgid "ClickHouse Blog" -msgstr "" - -#: templates/index/community.html:116 -msgid "in" -msgstr "" - -#: templates/index/community.html:128 templates/index/community.html:129 -msgid "ClickHouse at Google Groups" -msgstr "" - -#: templates/index/community.html:133 -msgid "Email discussions" -msgstr "" - -#: templates/index/community.html:142 -msgid "Like ClickHouse?" -msgstr "" - -#: templates/index/community.html:143 -msgid "Help to spread the word about it via" -msgstr "" - -#: templates/index/community.html:144 -msgid "and" -msgstr "" - -#: templates/index/community.html:153 -msgid "Hosting ClickHouse Meetups" -msgstr "" - -#: templates/index/community.html:157 -msgid "" -"ClickHouse meetups are essential for strengthening community worldwide, but " -"they couldn't be possible without the help of local organizers. Please, fill " -"this form if you want to become one or want to meet ClickHouse core team for " -"any other reason." -msgstr "" - -#: templates/index/community.html:159 -msgid "ClickHouse Meetup" -msgstr "" - -#: templates/index/community.html:165 -msgid "Name" -msgstr "" - -#: templates/index/community.html:168 -msgid "Email" -msgstr "" - -#: templates/index/community.html:171 -msgid "Company" -msgstr "" - -#: templates/index/community.html:174 -msgid "City" -msgstr "" - -#: templates/index/community.html:179 -msgid "We'd like to host a public ClickHouse Meetup" -msgstr "" - -#: templates/index/community.html:185 -msgid "We'd like to invite Yandex ClickHouse team to our office" -msgstr "" - -#: templates/index/community.html:191 -msgid "We'd like to invite Yandex ClickHouse team to another event we organize" -msgstr "" - -#: templates/index/community.html:197 -msgid "We're interested in commercial consulting, support or managed service" -msgstr "" - -#: templates/index/community.html:201 -msgid "Additional comments" -msgstr "" - -#: templates/index/community.html:203 -msgid "Send" -msgstr "" - -#: templates/index/community.html:212 -msgid "" -"If you have any more thoughts or questions, feel free to contact Yandex " -"ClickHouse team directly at" -msgstr "" - -#: templates/index/community.html:213 -msgid "turn on JavaScript to see email address" -msgstr "" diff --git a/website/locale/fr/LC_MESSAGES/messages.mo b/website/locale/fr/LC_MESSAGES/messages.mo deleted file mode 100644 index 43fcad3bd73..00000000000 Binary files a/website/locale/fr/LC_MESSAGES/messages.mo and /dev/null differ diff --git a/website/locale/fr/LC_MESSAGES/messages.po b/website/locale/fr/LC_MESSAGES/messages.po deleted file mode 100644 index 5ccc7c3c87d..00000000000 --- a/website/locale/fr/LC_MESSAGES/messages.po +++ /dev/null @@ -1,326 +0,0 @@ -# Translations template for PROJECT. -# Copyright (C) 2020 ORGANIZATION -# This file is distributed under the same license as the PROJECT project. -# Automatically generated, 2020. 
-# -msgid "" -msgstr "" -"Project-Id-Version: PROJECT VERSION\n" -"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n" -"POT-Creation-Date: 2020-06-17 12:20+0300\n" -"PO-Revision-Date: 2020-06-17 12:20+0300\n" -"Last-Translator: Automatically generated\n" -"Language-Team: none\n" -"MIME-Version: 1.0\n" -"Content-Type: text/plain; charset=UTF-8\n" -"Content-Transfer-Encoding: 8bit\n" -"Generated-By: Babel 2.8.0\n" -"Language: fr\n" -"Plural-Forms: nplurals=2; plural=(n > 1);\n" - -#: templates/common_meta.html:1 -msgid "" -"ClickHouse is a fast open-source column-oriented database management system " -"that allows generating analytical data reports in real-time using SQL queries" -msgstr "" - -#: templates/common_meta.html:6 -msgid "ClickHouse - fast open-source OLAP DBMS" -msgstr "" - -#: templates/common_meta.html:10 -msgid "ClickHouse DBMS" -msgstr "" - -#: templates/common_meta.html:32 -msgid "open-source" -msgstr "" - -#: templates/common_meta.html:32 -msgid "relational" -msgstr "" - -#: templates/common_meta.html:32 -msgid "analytics" -msgstr "" - -#: templates/common_meta.html:32 -msgid "analytical" -msgstr "" - -#: templates/common_meta.html:32 -msgid "Big Data" -msgstr "" - -#: templates/common_meta.html:32 -msgid "web-analytics" -msgstr "" - -#: templates/footer.html:8 -msgid "ClickHouse source code is published under the Apache 2.0 License." -msgstr "" - -#: templates/footer.html:8 -msgid "" -"Software is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR " -"CONDITIONS OF ANY KIND, either express or implied." -msgstr "" - -#: templates/footer.html:11 -msgid "Yandex LLC" -msgstr "" - -#: templates/blog/content.html:20 templates/blog/content.html:25 -#: templates/blog/content.html:30 -msgid "Share on" -msgstr "" - -#: templates/blog/content.html:37 -msgid "Published date" -msgstr "" - -#: templates/blog/nav.html:20 -msgid "New post" -msgstr "" - -#: templates/blog/nav.html:25 -msgid "Documentation" -msgstr "" - -#: templates/docs/footer.html:3 -msgid "Rating" -msgstr "" - -#: templates/docs/footer.html:3 -msgid "votes" -msgstr "" - -#: templates/docs/footer.html:4 -msgid "Article Rating" -msgstr "" - -#: templates/docs/footer.html:4 -msgid "Was this content helpful?" -msgstr "" - -#: templates/docs/footer.html:7 -msgid "Unusable" -msgstr "" - -#: templates/docs/footer.html:7 -msgid "Poor" -msgstr "" - -#: templates/docs/footer.html:7 -msgid "Good" -msgstr "" - -#: templates/docs/footer.html:7 -msgid "Excellent" -msgstr "" - -#: templates/docs/footer.html:8 -msgid "documentation" -msgstr "" - -#: templates/docs/footer.html:15 -msgid "Built from" -msgstr "" - -#: templates/docs/footer.html:15 -msgid "published on" -msgstr "" - -#: templates/docs/footer.html:15 -msgid "modified on" -msgstr "" - -#: templates/docs/machine-translated.html:3 -msgid "Help wanted!" -msgstr "" - -#: templates/docs/machine-translated.html:4 -msgid "" -"The following content of this documentation page has been machine-" -"translated. But unlike other websites, it is not done on the fly. This " -"translated text lives on GitHub repository alongside main ClickHouse " -"codebase and waits for fellow native speakers to make it more human-readable." -msgstr "" - -#: templates/docs/machine-translated.html:4 -msgid "You can also use the original English version as a reference." 
-msgstr "" - -#: templates/docs/machine-translated.html:7 -msgid "Help ClickHouse documentation by editing this page" -msgstr "" - -#: templates/docs/sidebar.html:3 -msgid "Multi-page or single-page" -msgstr "" - -#: templates/docs/sidebar.html:5 -msgid "Multi-page version" -msgstr "" - -#: templates/docs/sidebar.html:8 -msgid "Single-page version" -msgstr "" - -#: templates/docs/sidebar.html:13 -msgid "Version" -msgstr "" - -#: templates/docs/sidebar.html:13 templates/docs/sidebar.html:19 -msgid "latest" -msgstr "" - -#: templates/docs/sidebar.html:36 -msgid "PDF version" -msgstr "" - -#: templates/docs/toc.html:8 -msgid "Table of Contents" -msgstr "" - -#: templates/index/community.html:4 -msgid "ClickHouse community" -msgstr "" - -#: templates/index/community.html:13 templates/index/community.html:14 -msgid "ClickHouse YouTube Channel" -msgstr "" - -#: templates/index/community.html:25 templates/index/community.html:26 -msgid "ClickHouse Official Twitter Account" -msgstr "" - -#: templates/index/community.html:36 templates/index/community.html:37 -msgid "ClickHouse at Telegram" -msgstr "" - -#: templates/index/community.html:41 -msgid "Chat with real users in " -msgstr "" - -#: templates/index/community.html:44 templates/index/community.html:116 -msgid "English" -msgstr "" - -#: templates/index/community.html:45 -msgid "or in" -msgstr "" - -#: templates/index/community.html:47 templates/index/community.html:117 -msgid "Russian" -msgstr "" - -#: templates/index/community.html:65 -msgid "Open GitHub issue to ask for help or to file a feature request" -msgstr "" - -#: templates/index/community.html:76 templates/index/community.html:77 -msgid "ClickHouse Slack Workspace" -msgstr "" - -#: templates/index/community.html:82 -msgid "Multipurpose public hangout" -msgstr "" - -#: templates/index/community.html:101 -msgid "Ask any questions" -msgstr "" - -#: templates/index/community.html:115 -msgid "ClickHouse Blog" -msgstr "" - -#: templates/index/community.html:116 -msgid "in" -msgstr "" - -#: templates/index/community.html:128 templates/index/community.html:129 -msgid "ClickHouse at Google Groups" -msgstr "" - -#: templates/index/community.html:133 -msgid "Email discussions" -msgstr "" - -#: templates/index/community.html:142 -msgid "Like ClickHouse?" -msgstr "" - -#: templates/index/community.html:143 -msgid "Help to spread the word about it via" -msgstr "" - -#: templates/index/community.html:144 -msgid "and" -msgstr "" - -#: templates/index/community.html:153 -msgid "Hosting ClickHouse Meetups" -msgstr "" - -#: templates/index/community.html:157 -msgid "" -"ClickHouse meetups are essential for strengthening community worldwide, but " -"they couldn't be possible without the help of local organizers. Please, fill " -"this form if you want to become one or want to meet ClickHouse core team for " -"any other reason." 
-msgstr "" - -#: templates/index/community.html:159 -msgid "ClickHouse Meetup" -msgstr "" - -#: templates/index/community.html:165 -msgid "Name" -msgstr "" - -#: templates/index/community.html:168 -msgid "Email" -msgstr "" - -#: templates/index/community.html:171 -msgid "Company" -msgstr "" - -#: templates/index/community.html:174 -msgid "City" -msgstr "" - -#: templates/index/community.html:179 -msgid "We'd like to host a public ClickHouse Meetup" -msgstr "" - -#: templates/index/community.html:185 -msgid "We'd like to invite Yandex ClickHouse team to our office" -msgstr "" - -#: templates/index/community.html:191 -msgid "We'd like to invite Yandex ClickHouse team to another event we organize" -msgstr "" - -#: templates/index/community.html:197 -msgid "We're interested in commercial consulting, support or managed service" -msgstr "" - -#: templates/index/community.html:201 -msgid "Additional comments" -msgstr "" - -#: templates/index/community.html:203 -msgid "Send" -msgstr "" - -#: templates/index/community.html:212 -msgid "" -"If you have any more thoughts or questions, feel free to contact Yandex " -"ClickHouse team directly at" -msgstr "" - -#: templates/index/community.html:213 -msgid "turn on JavaScript to see email address" -msgstr "" diff --git a/website/locale/tr/LC_MESSAGES/messages.mo b/website/locale/tr/LC_MESSAGES/messages.mo deleted file mode 100644 index 0386bd03351..00000000000 Binary files a/website/locale/tr/LC_MESSAGES/messages.mo and /dev/null differ diff --git a/website/locale/tr/LC_MESSAGES/messages.po b/website/locale/tr/LC_MESSAGES/messages.po deleted file mode 100644 index 710ebbdf120..00000000000 --- a/website/locale/tr/LC_MESSAGES/messages.po +++ /dev/null @@ -1,326 +0,0 @@ -# Translations template for PROJECT. -# Copyright (C) 2020 ORGANIZATION -# This file is distributed under the same license as the PROJECT project. -# Automatically generated, 2020. -# -msgid "" -msgstr "" -"Project-Id-Version: PROJECT VERSION\n" -"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n" -"POT-Creation-Date: 2020-06-17 12:20+0300\n" -"PO-Revision-Date: 2020-06-17 12:20+0300\n" -"Last-Translator: Automatically generated\n" -"Language-Team: none\n" -"MIME-Version: 1.0\n" -"Content-Type: text/plain; charset=UTF-8\n" -"Content-Transfer-Encoding: 8bit\n" -"Generated-By: Babel 2.8.0\n" -"Language: tr\n" -"Plural-Forms: nplurals=2; plural=(n != 1);\n" - -#: templates/common_meta.html:1 -msgid "" -"ClickHouse is a fast open-source column-oriented database management system " -"that allows generating analytical data reports in real-time using SQL queries" -msgstr "" - -#: templates/common_meta.html:6 -msgid "ClickHouse - fast open-source OLAP DBMS" -msgstr "" - -#: templates/common_meta.html:10 -msgid "ClickHouse DBMS" -msgstr "" - -#: templates/common_meta.html:32 -msgid "open-source" -msgstr "" - -#: templates/common_meta.html:32 -msgid "relational" -msgstr "" - -#: templates/common_meta.html:32 -msgid "analytics" -msgstr "" - -#: templates/common_meta.html:32 -msgid "analytical" -msgstr "" - -#: templates/common_meta.html:32 -msgid "Big Data" -msgstr "" - -#: templates/common_meta.html:32 -msgid "web-analytics" -msgstr "" - -#: templates/footer.html:8 -msgid "ClickHouse source code is published under the Apache 2.0 License." -msgstr "" - -#: templates/footer.html:8 -msgid "" -"Software is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR " -"CONDITIONS OF ANY KIND, either express or implied." 
-msgstr "" - -#: templates/footer.html:11 -msgid "Yandex LLC" -msgstr "" - -#: templates/blog/content.html:20 templates/blog/content.html:25 -#: templates/blog/content.html:30 -msgid "Share on" -msgstr "" - -#: templates/blog/content.html:37 -msgid "Published date" -msgstr "" - -#: templates/blog/nav.html:20 -msgid "New post" -msgstr "" - -#: templates/blog/nav.html:25 -msgid "Documentation" -msgstr "" - -#: templates/docs/footer.html:3 -msgid "Rating" -msgstr "" - -#: templates/docs/footer.html:3 -msgid "votes" -msgstr "" - -#: templates/docs/footer.html:4 -msgid "Article Rating" -msgstr "" - -#: templates/docs/footer.html:4 -msgid "Was this content helpful?" -msgstr "" - -#: templates/docs/footer.html:7 -msgid "Unusable" -msgstr "" - -#: templates/docs/footer.html:7 -msgid "Poor" -msgstr "" - -#: templates/docs/footer.html:7 -msgid "Good" -msgstr "" - -#: templates/docs/footer.html:7 -msgid "Excellent" -msgstr "" - -#: templates/docs/footer.html:8 -msgid "documentation" -msgstr "" - -#: templates/docs/footer.html:15 -msgid "Built from" -msgstr "" - -#: templates/docs/footer.html:15 -msgid "published on" -msgstr "" - -#: templates/docs/footer.html:15 -msgid "modified on" -msgstr "" - -#: templates/docs/machine-translated.html:3 -msgid "Help wanted!" -msgstr "" - -#: templates/docs/machine-translated.html:4 -msgid "" -"The following content of this documentation page has been machine-" -"translated. But unlike other websites, it is not done on the fly. This " -"translated text lives on GitHub repository alongside main ClickHouse " -"codebase and waits for fellow native speakers to make it more human-readable." -msgstr "" - -#: templates/docs/machine-translated.html:4 -msgid "You can also use the original English version as a reference." -msgstr "" - -#: templates/docs/machine-translated.html:7 -msgid "Help ClickHouse documentation by editing this page" -msgstr "" - -#: templates/docs/sidebar.html:3 -msgid "Multi-page or single-page" -msgstr "" - -#: templates/docs/sidebar.html:5 -msgid "Multi-page version" -msgstr "" - -#: templates/docs/sidebar.html:8 -msgid "Single-page version" -msgstr "" - -#: templates/docs/sidebar.html:13 -msgid "Version" -msgstr "" - -#: templates/docs/sidebar.html:13 templates/docs/sidebar.html:19 -msgid "latest" -msgstr "" - -#: templates/docs/sidebar.html:36 -msgid "PDF version" -msgstr "" - -#: templates/docs/toc.html:8 -msgid "Table of Contents" -msgstr "" - -#: templates/index/community.html:4 -msgid "ClickHouse community" -msgstr "" - -#: templates/index/community.html:13 templates/index/community.html:14 -msgid "ClickHouse YouTube Channel" -msgstr "" - -#: templates/index/community.html:25 templates/index/community.html:26 -msgid "ClickHouse Official Twitter Account" -msgstr "" - -#: templates/index/community.html:36 templates/index/community.html:37 -msgid "ClickHouse at Telegram" -msgstr "" - -#: templates/index/community.html:41 -msgid "Chat with real users in " -msgstr "" - -#: templates/index/community.html:44 templates/index/community.html:116 -msgid "English" -msgstr "" - -#: templates/index/community.html:45 -msgid "or in" -msgstr "" - -#: templates/index/community.html:47 templates/index/community.html:117 -msgid "Russian" -msgstr "" - -#: templates/index/community.html:65 -msgid "Open GitHub issue to ask for help or to file a feature request" -msgstr "" - -#: templates/index/community.html:76 templates/index/community.html:77 -msgid "ClickHouse Slack Workspace" -msgstr "" - -#: templates/index/community.html:82 -msgid "Multipurpose public hangout" 
-msgstr "" - -#: templates/index/community.html:101 -msgid "Ask any questions" -msgstr "" - -#: templates/index/community.html:115 -msgid "ClickHouse Blog" -msgstr "" - -#: templates/index/community.html:116 -msgid "in" -msgstr "" - -#: templates/index/community.html:128 templates/index/community.html:129 -msgid "ClickHouse at Google Groups" -msgstr "" - -#: templates/index/community.html:133 -msgid "Email discussions" -msgstr "" - -#: templates/index/community.html:142 -msgid "Like ClickHouse?" -msgstr "" - -#: templates/index/community.html:143 -msgid "Help to spread the word about it via" -msgstr "" - -#: templates/index/community.html:144 -msgid "and" -msgstr "" - -#: templates/index/community.html:153 -msgid "Hosting ClickHouse Meetups" -msgstr "" - -#: templates/index/community.html:157 -msgid "" -"ClickHouse meetups are essential for strengthening community worldwide, but " -"they couldn't be possible without the help of local organizers. Please, fill " -"this form if you want to become one or want to meet ClickHouse core team for " -"any other reason." -msgstr "" - -#: templates/index/community.html:159 -msgid "ClickHouse Meetup" -msgstr "" - -#: templates/index/community.html:165 -msgid "Name" -msgstr "" - -#: templates/index/community.html:168 -msgid "Email" -msgstr "" - -#: templates/index/community.html:171 -msgid "Company" -msgstr "" - -#: templates/index/community.html:174 -msgid "City" -msgstr "" - -#: templates/index/community.html:179 -msgid "We'd like to host a public ClickHouse Meetup" -msgstr "" - -#: templates/index/community.html:185 -msgid "We'd like to invite Yandex ClickHouse team to our office" -msgstr "" - -#: templates/index/community.html:191 -msgid "We'd like to invite Yandex ClickHouse team to another event we organize" -msgstr "" - -#: templates/index/community.html:197 -msgid "We're interested in commercial consulting, support or managed service" -msgstr "" - -#: templates/index/community.html:201 -msgid "Additional comments" -msgstr "" - -#: templates/index/community.html:203 -msgid "Send" -msgstr "" - -#: templates/index/community.html:212 -msgid "" -"If you have any more thoughts or questions, feel free to contact Yandex " -"ClickHouse team directly at" -msgstr "" - -#: templates/index/community.html:213 -msgid "turn on JavaScript to see email address" -msgstr "" diff --git a/website/sitemap-index.xml b/website/sitemap-index.xml index 75fdc75973c..3fbdd99d372 100644 --- a/website/sitemap-index.xml +++ b/website/sitemap-index.xml @@ -6,21 +6,12 @@ https://clickhouse.tech/docs/zh/sitemap.xml - - https://clickhouse.tech/docs/es/sitemap.xml - - - https://clickhouse.tech/docs/fr/sitemap.xml - https://clickhouse.tech/docs/ru/sitemap.xml https://clickhouse.tech/docs/ja/sitemap.xml - - https://clickhouse.tech/docs/fa/sitemap.xml - https://clickhouse.tech/blog/en/sitemap.xml diff --git a/website/templates/index/community.html b/website/templates/index/community.html index 20b09e7318b..86f2259b486 100644 --- a/website/templates/index/community.html +++ b/website/templates/index/community.html @@ -66,7 +66,7 @@
-
Quick start diff --git a/website/templates/index/quickstart.html b/website/templates/index/quickstart.html index 454fc68151d..0d967e7b96c 100644 --- a/website/templates/index/quickstart.html +++ b/website/templates/index/quickstart.html @@ -36,7 +36,7 @@ target="_blank"> official Docker images of ClickHouse, this is not the only option though. Alternatively, you can easily get a running ClickHouse instance or cluster at - + Yandex Managed Service for ClickHouse.